A Python package for birdsong synthesis and bioacoustic analysis
WaveSongs implements the motor gestures model for birdsong developed by Gabo Mindlin to generate synthetic birdsongs through numerical optimization [1, 2]. By leveraging the fundamental frequency (FF) and the spectral content index (SCI) as key parameters, the package solves a minimization problem with SciPy and performs audio analysis with librosa.
Validated against field recordings of *Zonotrichia capensis*, the Ocellated Tapaculo, and *Mimus gilvus*, the model achieves <5% relative error in FF reconstruction compared to empirical data.
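To give a flavor of how such a comparison can be made, here is a minimal sketch (not part of WaveSongs' API; file names are illustrative) that estimates FF contours with librosa's YIN implementation and computes the relative reconstruction error:
import librosa
import numpy as np
# Load a field recording and its synthetic reconstruction (illustrative file names)
y_real, sr = librosa.load("copeton_real.wav", sr=44100)
y_synth, _ = librosa.load("copeton_synth.wav", sr=44100)
# Estimate fundamental frequency contours with YIN
ff_real = librosa.yin(y_real, fmin=100, fmax=20000, sr=sr)
ff_synth = librosa.yin(y_synth, fmin=100, fmax=20000, sr=sr)
# Mean relative error of the FF reconstruction
n = min(len(ff_real), len(ff_synth))
rel_err = np.mean(np.abs(ff_synth[:n] - ff_real[:n]) / ff_real[:n])
print(f"Mean relative FF error: {rel_err:.2%}")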
Clone the repository:
git clone https://github.com/wavesongs/wavesongs
cd wavesongs
Set up a virtual environment (choose one method):
Using venv:
python -m venv venv
# Activate on Linux/macOS
source venv/bin/activate
# Activate on Windows
.\venv\Scripts\activate
Or using conda:
conda create -n wavesongs python=3.12
conda activate wavesongs
Install dependencies:
pip install -r requirements.txt
Install WaveSongs in editable mode:
pip install -e .
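To verify the installation, try importing the package (assuming it is importable as wavesongs, as in the examples below):
python -c "import wavesongs"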
Explore the Tutorial 1 notebook to generate synthetic birdsongs and inspect the model plots.
Here is a simple example that generates and displays a synthetic audio. First, load the wavesongs package:
# Select the ipympl backend for interactive plots (works only in notebooks)
%matplotlib ipympl
from wavesongs.utils.paths import ProjDirs # Project files manager
from wavesongs.objects.syllable import Syllable # Syllable objects
from wavesongs.utils import plots # Display plots
Then, create a project directory manager, select a region of interest, and define the syllable to study. You can display it with the plots functions.
proj_dirs = ProjDirs(audios="./assets/audio", results="./assets/results")
# Region of Interest
tlim = (0.8798, 1.3009)
# Define the syllable
copeton_syllable_0 = Syllable(
proj_dirs=proj_dirs, file_id="574179401",
tlim=tlim, type="intro-down", no_syllable="0", sr=44100
)
copeton_syllable_0.acoustical_features(
umbral_FF=1.4, NN=256, ff_method="yin", flim=(1e2, 2e4)
)
# Display the syllable's spectrogram
plots.spectrogram_waveform(copeton_syllable_0, ff_on=True, save=True)
copeton_syllable_0.play() # works only in notebooks
https://github.com/user-attachments/assets/d15e7433-5f4c-451f-85aa-d4d53525029f
Now, let's find the optimal parameter values to generate a comparable syllable, with relative errors below 5% or even 1%.
from wavesongs.model import optimizer
optimal_z = optimizer.optimal_params(
syllable=copeton_syllable_0, Ns=10, full_output=False
)
print(f"\nOptimal z values:\n\t{optimal_z}")
Computing a0*...
Optimal values: a_0=0.0010, t=0.51 min
Computing b0*, b1*, and b2*...
Optimal values: b_0=-0.2149, b_2=1.2980, t=13.77 min
Optimal values: b_1=1.0000, t=5.69 min
Time of execution: 19.97 min
Optimal z values:
{'a0': 0.00105, 'b0': -0.21491, 'b1': 1.0, 'b2': 1.29796}
With the optimal values, define and display the synthetic syllable:
# Define the synthetic syllable
synth_copeton_syllable_0 = copeton_syllable_0.solve(z=optimal_z, method="best")
plots.spectrogram_waveform(synth_copeton_syllable_0, ff_on=True, save=True)
# Display the score variables
plots.scores(copeton_syllable_0, synth_copeton_syllable_0, save=False)
plots.motor_gestures(synth_copeton_syllable_0, save=False)
plots.syllables(copeton_syllable_0, synth_copeton_syllable_0, save=False)
synth_copeton_syllable_0.play() # works only in notebooks
https://github.com/user-attachments/assets/66ca1630-0ad0-43fc-bb56-cb397064ecd3
For advanced usage (custom gestures, parameter tuning, data measures, etc.), check the other tutorials: Spectrum Measures or Synthetic Songs. More details can be found in the Documentation.
Pre-processed field recordings from Xeno Canto and eBird are included in ./assets/audio. To use custom recordings, place .wav or .mp3 files in ./assets/audio/ or define the audios path with the ProjDirs class.
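For example, here is a minimal sketch pointing the project manager at a custom folder (the paths below are illustrative):
from wavesongs.utils.paths import ProjDirs
# Read recordings from a custom directory and write results elsewhere
proj_dirs = ProjDirs(audios="/path/to/my/recordings", results="./my_results")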
WaveSongs is licensed under the GNU General Public License v3.0.
If this work contributes to your research, please cite:
@software{aguilera_wavesongs_2025,
  author    = {Aguilera Novoa, Sebastián},
  title     = {WaveSongs: Computational Birdsong Synthesis},
  year      = {2025},
  publisher = {GitHub},
  journal   = {GitHub Repository},
  url       = {https://github.com/wavesongs/wavesongs}
}
We welcome contributions! See our roadmap:
- scikit-maad
To report issues or suggest features, open a GitHub Issue.
[1] Mindlin, G. B., & Laje, R. (2005). The Physics of Birdsong. Springer. DOI
[2] Amador, A., Perl, Y. S., Mindlin, G. B., & Margoliash, D. (2013). Elemental gesture dynamics are encoded by song premotor cortical neurons. Nature, 495, 59–64. DOI