Upload folder using huggingface_hub
- .github/workflows/deploy_to_hf.yml +2 -2
- README.md +44 -12
- improvisation_lab/application/interval_practice/web_interval_app.py +6 -9
- improvisation_lab/domain/composition/melody_composer.py +1 -1
- improvisation_lab/presentation/interval_practice/web_interval_view.py +14 -15
- tests/domain/composition/test_melody_composer.py +1 -2
- tests/service/test_interval_practice_service.py +2 -2
.github/workflows/deploy_to_hf.yml
CHANGED
@@ -21,8 +21,8 @@ jobs:
 
       - name: Install Poetry
         run: |
-          curl -sSL https://install.python-poetry.org | python3 -
-          echo "$
+          curl -sSL https://install.python-poetry.org | POETRY_VERSION=1.7.1 python3 -
+          echo "$HOME/.local/bin" >> $GITHUB_PATH
 
       - name: Export requirements.txt
         run: poetry export -f requirements.txt --output requirements.txt --without-hashes
README.md
CHANGED
@@ -12,7 +12,7 @@ license: mit
 ---
 # Improvisation Lab
 
-A Python package for generating musical improvisation melodies based on music theory
+A Python package for practicing musical improvisation through exercises. This package allows users to generate and practice melodic phrases based on music theory principles, offering real-time pitch detection for immediate feedback. Whether you're following chord progressions or practicing intervals, Improvisation Lab helps you improve your musical skills while adhering to musical rules.
 
 ## Try it out! 🚀
 <a href="https://huggingface.co/spaces/atsushieee/improvisation-lab" target="_blank">
@@ -21,16 +21,25 @@ A Python package for generating musical improvisation melodies based on music theory
 
 Watch the demo in action:
 
+### Interval Practice: Demo
+
+https://github.com/user-attachments/assets/6a475cf0-9a82-4103-8316-7d4485b07c2e
+
+### Piece Practice: Demo
+
 https://github.com/user-attachments/assets/fa6e11d6-7b88-4b77-aa6e-a67c0927353d
 
 Experience Improvisation Lab directly in your browser! Our interactive demo lets you:
 
-- Generate melodic phrases based on chord progressions
+- Generate melodic phrases based on chord progressions or intervals
 - Practice your pitch accuracy in real-time
 - Get instant visual guidance for hitting the right notes
 
-Note: The demo runs on Hugging Face Spaces' free tier, which means:
+### Web Interface Features
+- **Tab Switching**: Easily switch between Interval Practice and Piece Practice using tabs in the web interface. This allows you to seamlessly transition between different practice modes without leaving the page.
+
+### Note
+The demo runs on Hugging Face Spaces' free tier, which means:
 - Performance might vary depending on server availability
 - If you encounter any issues, try refreshing the page or coming back later
 - For consistent performance, consider running the package locally
@@ -38,7 +47,17 @@ Note: The demo runs on Hugging Face Spaces' free tier, which means:
 
 ## Features
 
-- 
+- Web-based and direct microphone input support
+- Real-time pitch detection with FCPE (Fast Context-aware Pitch Estimation)
+- Provides real-time feedback on pitch accuracy
+
+### Interval Practice: Features
+- Focuses on practicing musical intervals.
+- Users can select the interval and direction (up or down) to practice.
+
+### Piece Practice: Features
+- Allows users to select a song and practice its chord progressions.
+- Generate melodic phrases based on scales and chord progressions.
 - Support for multiple scale types:
   - Major
   - Natural minor
@@ -57,6 +76,7 @@ Note: The demo runs on Hugging Face Spaces' free tier, which means:
 - Real-time pitch detection with FCPE (Fast Context-aware Pitch Estimation)
 - Web-based and direct microphone input support
 
+
 ## Prerequisites
 
 - Python 3.11 or higher
@@ -97,7 +117,7 @@ poetry run python main.py --app_type console
 
 The application can be customized through `config.yml` with the following options:
 
-#### Audio Settings
+#### Common Audio Settings
 - `sample_rate`: Audio sampling rate (default: 44100 Hz)
 - `buffer_duration`: Duration of audio processing buffer (default: 0.2 seconds)
 - `note_duration`: How long to display each note during practice (default: 3 seconds)
@@ -108,7 +128,12 @@ The application can be customized through `config.yml` with the following options:
 - `f0_max`: Maximum frequency for the pitch detection algorithm (default: 880 Hz)
 - `device`: Device to use for the pitch detection algorithm (default: "cpu")
 
-#### 
+#### Interval Practice Settings
+- `interval`: The interval to practice
+  - Example: For a minor second descending interval, the interval value is -1
+- `num_problems`: The number of problems to practice
+
+#### Piece Practice Settings
 - `selected_song`: Name of the song to practice
 - `chord_progressions`: Dictionary of songs and their progressions
   - Format: `[scale_root, scale_type, chord_root, chord_type, duration]`
@@ -123,13 +148,20 @@ The application can be customized through `config.yml` with the following options:
 
 ## How It Works
 
-### Melody Generation
+### Interval Practice: Melody Generation
+The interval practice focuses on improving interval recognition and singing accuracy:
+1. Users select the interval and direction (up or down) to practice.
+2. The application generates a series of problems based on the selected interval.
+3. Real-time feedback is provided to help users match the target interval.
+4. The practice session can be customized with the number of problems and note duration.
+
+### Piece Practice: Melody Generation
 The melody generation follows these principles:
-1. Notes are selected based on their relationship to the current chord and scale
-2. Chord tones have more freedom in movement
-3. Non-chord tones are restricted to moving to adjacent scale notes
-4. Phrases are connected naturally by considering the previous note
-5. All generated notes stay within the specified scale
+1. Notes are selected based on their relationship to the current chord and scale.
+2. Chord tones have more freedom in movement.
+3. Non-chord tones are restricted to moving to adjacent scale notes.
+4. Phrases are connected naturally by considering the previous note.
+5. All generated notes stay within the specified scale.
 
 ### Real-time Feedback
 Pitch Detection Demo:
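For orientation, the Interval Practice settings introduced in this README (`interval`, `num_problems`) live in the same `config.yml` as the common audio settings. A hypothetical fragment combining the documented keys and defaults; the real file's nesting and grouping may differ:

```yaml
# Hypothetical flat layout; the actual config.yml may nest these differently.
sample_rate: 44100     # Hz
buffer_duration: 0.2   # seconds of audio per processing buffer
note_duration: 3       # seconds each note is displayed
device: "cpu"          # device for the pitch detection algorithm
interval: -1           # signed semitones: -1 = minor second, descending
num_problems: 10       # number of interval problems per session
```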
improvisation_lab/application/interval_practice/web_interval_app.py
CHANGED
@@ -43,7 +43,7 @@ class WebIntervalPracticeApp(BasePracticeApp):
         self.results_table: List[List[Any]] = []
         self.progress_timer: float = 0.0
         self.is_auto_advance = False
-        self.note_duration = 
+        self.note_duration = 3.0
 
     def _process_audio_callback(self, audio_data: np.ndarray):
         """Process incoming audio data and update the application state.
@@ -82,13 +82,11 @@ class WebIntervalPracticeApp(BasePracticeApp):
         self.update_results_table()
         self.current_note_idx += 1
         if self.current_note_idx >= len(self.phrases[self.current_phrase_idx]):
-            self.current_note_idx = 
+            self.current_note_idx = 1
             self.current_phrase_idx += 1
             if self.current_phrase_idx >= len(self.phrases):
                 self.current_phrase_idx = 0
-            self.base_note = self.phrases[self.current_phrase_idx][
-                self.current_note_idx
-            ].value
+            self.base_note = self.phrases[self.current_phrase_idx][0].value
 
     def handle_audio(self, audio: Tuple[int, np.ndarray]) -> Tuple[str, str, str, List]:
         """Handle audio input from Gradio interface.
@@ -143,11 +141,10 @@ class WebIntervalPracticeApp(BasePracticeApp):
             num_notes=number_problems, interval=semitone_interval
         )
        self.current_phrase_idx = 0
-        self.current_note_idx = 
+        self.current_note_idx = 1
        self.is_running = True
 
-
-        self.base_note = present_note
+        self.base_note = self.phrases[0][0].value
 
        if not self.audio_processor.is_recording:
            self.text_manager.initialize_text()
@@ -210,4 +207,4 @@ class WebIntervalPracticeApp(BasePracticeApp):
            result,
        ]
 
-        self.results_table.
+        self.results_table.insert(0, new_result)
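The index fixes in this file (resetting `current_note_idx` to 1 and reading element `[0]` of the next phrase) make sense once you know each phrase is a `[base, target]` pair: index 0 is the base note played for the user, index 1 is the note to sing next. A standalone sketch of that advance logic, with hypothetical names standing in for the app's state:

```python
def advance(phrases, phrase_idx, note_idx):
    """Move to the next note; on phrase end, wrap to the next pair's target.

    Sketch of the patched logic in WebIntervalPracticeApp, not the actual code.
    """
    note_idx += 1
    if note_idx >= len(phrases[phrase_idx]):
        note_idx = 1  # index 0 is the base note, so the next sung note is index 1
        phrase_idx = (phrase_idx + 1) % len(phrases)
    base_note = phrases[phrase_idx][0]  # base of the (possibly new) pair
    return phrase_idx, note_idx, base_note

phrases = [["C", "D"], ["E", "F"]]
print(advance(phrases, 0, 1))  # wraps to the second pair -> (1, 1, 'E')
```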
improvisation_lab/domain/composition/melody_composer.py
CHANGED
@@ -87,5 +87,5 @@ class MelodyComposer:
         melody = []
         for base_note in base_notes:
             target_note = self.note_transposer.transpose_note(base_note, interval)
-            melody.append([base_note, target_note, base_note])
+            melody.append([base_note, target_note])
         return melody
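After the fix, the composer returns a list of `[base, target]` pairs, which is what the updated tests assert (`len(melody[0]) == 2`). A minimal self-contained sketch of the loop, with a simple pitch-class transposer standing in for the package's `NoteTransposer` (all names here are illustrative, not the package's actual API):

```python
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def transpose_note(note: str, interval: int) -> str:
    """Shift a pitch class by a signed number of semitones (illustrative)."""
    return NOTES[(NOTES.index(note) + interval) % 12]

def generate_interval_melody(base_notes, interval):
    """Pair each base note with its transposition, mirroring the patched loop."""
    melody = []
    for base_note in base_notes:
        melody.append([base_note, transpose_note(base_note, interval)])
    return melody

print(generate_interval_melody(["C", "G"], 2))   # [['C', 'D'], ['G', 'A']]
print(generate_interval_melody(["C"], -1))       # [['C', 'B']] (descending)
```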
improvisation_lab/presentation/interval_practice/web_interval_view.py
CHANGED
@@ -87,7 +87,7 @@ class WebIntervalPracticeView(WebPracticeView):
         )
         self.note_duration_box = gr.Number(
             label="Note Duration (seconds)",
-            value=
+            value=3.0,
         )
 
         self.generate_melody_button = gr.Button("Generate Melody")
@@ -97,6 +97,15 @@ class WebIntervalPracticeView(WebPracticeView):
         with gr.Row():
             self.phrase_info_box = gr.Textbox(label="Problem Information", value="")
             self.pitch_result_box = gr.Textbox(label="Pitch Result", value="")
+        """Create the audio input section."""
+        self.audio_input = gr.Audio(
+            label="Audio Input",
+            sources=["microphone"],
+            streaming=True,
+            type="numpy",
+            show_label=True,
+        )
+        self.end_practice_button = gr.Button("End Practice")
         self.results_table = gr.DataFrame(
             headers=[
                 "Problem Number",
@@ -110,9 +119,7 @@ class WebIntervalPracticeView(WebPracticeView):
             label="Result History",
         )
 
-        self.
-        self.end_practice_button = gr.Button("End Practice")
-
+        self._add_audio_callbacks()
         self._add_buttons_callbacks()
 
         # Add Tone.js script
@@ -186,21 +193,13 @@ class WebIntervalPracticeView(WebPracticeView):
             outputs=[self.base_note_box, self.phrase_info_box, self.pitch_result_box],
         )
 
-    def 
+    def _add_audio_callbacks(self):
         """Create the audio input section."""
-        audio_input = gr.Audio(
-            label="Audio Input",
-            sources=["microphone"],
-            streaming=True,
-            type="numpy",
-            show_label=True,
-        )
-
         # Attention: have to specify inputs explicitly,
         # otherwise the callback function is not called
-        audio_input.stream(
+        self.audio_input.stream(
             fn=self.on_audio_input,
-            inputs=audio_input,
+            inputs=self.audio_input,
             outputs=[
                 self.base_note_box,
                 self.phrase_info_box,
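The view refactor follows a common create-then-wire pattern: build every widget first, then register callbacks in a dedicated `_add_audio_callbacks` step once all the components they reference exist as attributes. The same idea reduced to plain Python stubs, with no Gradio dependency (`FakeAudio` and `View` here are purely illustrative):

```python
class FakeAudio:
    """Stand-in for gr.Audio: stores whatever stream callback is registered."""

    def __init__(self):
        self.callback = None

    def stream(self, fn, inputs, outputs):
        # Gradio would wire fn to microphone chunks; here we just remember it.
        self.callback = fn

class View:
    def __init__(self):
        # 1) Create all widgets first...
        self.audio_input = FakeAudio()
        # 2) ...then wire callbacks, once every referenced attribute exists.
        self._add_audio_callbacks()

    def _add_audio_callbacks(self):
        self.audio_input.stream(fn=self.on_audio_input,
                                inputs=self.audio_input, outputs=[])

    def on_audio_input(self, audio):
        return f"got {audio}"

view = View()
print(view.audio_input.callback("chunk"))  # -> got chunk
```

Registering callbacks after widget creation avoids referencing attributes that have not been assigned yet, which is the kind of ordering bug the original inline wiring invited.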
tests/domain/composition/test_melody_composer.py
CHANGED
@@ -94,7 +94,7 @@ class TestMelodyComposer:
 
         # Check the length of the melody
         assert len(melody) == len(base_notes)
-        assert len(melody[0]) == 3
+        assert len(melody[0]) == 2
 
         # Check the structure of the melody
         for i, base_note in enumerate(base_notes):
@@ -103,4 +103,3 @@ class TestMelodyComposer:
                 base_note, interval
             )
             assert melody[i][1] == transposed_note
-            assert melody[i][2] == base_note
tests/service/test_interval_practice_service.py
CHANGED
@@ -13,9 +13,9 @@ class TestPiecePracticeService:
         config = Config()
         service = IntervalPracticeService(config)
         melody = service.generate_melody(num_notes=10, interval=2)
-        # 10 notes, each with 
+        # 10 notes, each with 2 parts (base, transposed)
         assert len(melody) == 10
-        assert all(len(note_group) == 3 for note_group in melody)
+        assert all(len(note_group) == 2 for note_group in melody)
         assert all(
             isinstance(note, Notes) for note_group in melody for note in note_group
         )
|