@magenta/music
This JavaScript implementation of Magenta's musical note-based models uses TensorFlow.js for GPU-accelerated inference.
Complete documentation is available at https://tensorflow.github.io/magenta-js/music.
For the Python TensorFlow implementations, see the main Magenta repo.
Here are a few applications built with @magenta/music:
You can also try our hosted demos for each model and have a look at the demo code.
We have made an effort to port our most useful models, but please file an issue if you think something is missing, or feel free to submit a Pull Request!
OnsetsAndFrames implements Magenta's piano transcription model for converting raw audio to MIDI in the browser. While it is somewhat flexible, it works best on solo piano recordings. The algorithm takes about half the duration of the audio to run on most browsers, but due to a WebKit bug, audio resampling makes it significantly slower on Safari.
Demo Application: Piano Scribe
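A minimal sketch of in-browser transcription with the script-tag bundle (the checkpoint URL below is assumed to be Magenta's hosted onsets_frames_uni checkpoint; adjust it for your own setup):

// Assumes the @magenta/music <script> bundle is loaded (global `mm`), and the
// checkpoint URL (an assumption) points at a hosted OnsetsAndFrames checkpoint.
const transcriber = new mm.OnsetsAndFrames(
    'https://storage.googleapis.com/magentadata/js/checkpoints/transcription/onsets_frames_uni');

async function transcribe(file) {  // `file` is a File/Blob from an <input type="file">
  await transcriber.initialize();
  const noteSequence = await transcriber.transcribeFromAudioFile(file);
  console.log(`Transcribed ${noteSequence.notes.length} notes`);
  return noteSequence;  // a NoteSequence you can play or convert to MIDI
}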
MusicRNN implements Magenta's LSTM-based language models. These include MelodyRNN, DrumsRNN, ImprovRNN, and PerformanceRNN.
Demo Application: Neural Drum Machine
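As a rough example, continuing a melody with MusicRNN might look like the sketch below (the basic_rnn checkpoint URL and the two-note seed melody are illustrative assumptions):

// Assumes the @magenta/music <script> bundle is loaded (global `mm`).
const melodyRnn = new mm.MusicRNN(
    'https://storage.googleapis.com/magentadata/js/checkpoints/music_rnn/basic_rnn');

// A tiny two-note seed melody; MusicRNN expects a quantized NoteSequence.
const seed = {
  notes: [
    {pitch: 60, startTime: 0.0, endTime: 0.5},
    {pitch: 62, startTime: 0.5, endTime: 1.0},
  ],
  totalTime: 1.0,
};

melodyRnn.initialize()
  .then(() => {
    const quantizedSeed = mm.sequences.quantizeNoteSequence(seed, 4);
    // Continue the seed for 32 steps at temperature 1.1.
    return melodyRnn.continueSequence(quantizedSeed, 32, 1.1);
  })
  .then((continuation) => new mm.Player().start(continuation));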
MusicVAE implements several configurations of Magenta's variational autoencoder model, MusicVAE, including melody and drum "loop" models, 4- and 16-bar "trio" models, chord-conditioned multi-track models, and drum performance "humanizations" with GrooVAE (https://g.co/magenta/groovae).
Demo Application: Endless Trios
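Besides sampling (shown in the script-tag example below), MusicVAE can interpolate between sequences. A minimal sketch, assuming `melodyA` and `melodyB` are two existing quantized 2-bar NoteSequences and reusing the mel_2bar_small checkpoint:

// Assumes the @magenta/music <script> bundle is loaded (global `mm`).
const mvae = new mm.MusicVAE(
    'https://storage.googleapis.com/magentadata/js/checkpoints/music_vae/mel_2bar_small');

async function blend(melodyA, melodyB) {
  await mvae.initialize();
  // Returns 4 sequences that morph from melodyA to melodyB.
  const interpolations = await mvae.interpolate([melodyA, melodyB], 4);
  return interpolations;
}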
Piano Genie is a VQ-VAE model that maps 8-button input to a full 88-key piano in real time.
Demo Application: Piano Genie
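A rough sketch of driving Piano Genie from button presses (the long checkpoint URL is assumed to be the hosted Piano Genie checkpoint; verify it against the checkpoint index):

// Assumes the @magenta/music <script> bundle is loaded (global `mm`).
const genie = new mm.PianoGenie(
    'https://storage.googleapis.com/magentadata/js/checkpoints/piano_genie/model/epiano/stp_iq_auto_contour_dt_166006');

genie.initialize().then(() => {
  // Map a button press (0-7) to one of the 88 piano keys.
  const TEMPERATURE = 0.25;
  const keyIndex = genie.next(3, TEMPERATURE);   // e.g. button 3 was pressed
  console.log('Play piano key', keyIndex + 21);  // 21 = MIDI pitch of the lowest piano key
  genie.resetState();                            // clear the RNN state between phrases
});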
There are several ways to get MagentaMusic.js into your JavaScript project, either in the browser or in Node:

Using the <script> tag
This has all the models and all the core library helpers bundled into one file. This is the simplest way to use Magenta.js. To use this bundle, add the following code to an HTML file:
<html>
<head>
<!-- Load @magenta/music -->
<script src="https://cdn.jsdelivr.net/npm/@magenta/music@^1.0.0"></script>
<script>
// Instantiate model by loading desired config.
const model = new mm.MusicVAE(
'https://storage.googleapis.com/magentadata/js/checkpoints/music_vae/trio_4bar');
const player = new mm.Player();
function play() {
player.resumeContext(); // enable audio
model.sample(1)
.then((samples) => player.start(samples[0], 80));
}
</script>
</head>
<body><button onclick="play()"><h1>Play Trio</h1></button></body>
</html>
Open up that HTML file in your browser (or click here for a hosted version) and the code will run. Click the "Play Trio" button to hear 4-bar trios that are randomly generated by MusicVAE.
It's also easy to add the ability to download MIDI for generated outputs, which is demonstrated in this example.
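One way to do this, sketched under the assumption that `sample` is a NoteSequence generated by one of the models above, is to convert it with the core helper mm.sequenceProtoToMidi and offer the bytes as a file download:

// Assumes `sample` is a NoteSequence produced by one of the models above.
function downloadMidi(sample) {
  const midiBytes = mm.sequenceProtoToMidi(sample);         // Uint8Array of MIDI data
  const blob = new Blob([midiBytes], {type: 'audio/midi'});
  const link = document.createElement('a');
  link.href = URL.createObjectURL(blob);
  link.download = 'trio.mid';                               // file name is an arbitrary choice
  link.click();
}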
See our demos for example usage.
We have also split all the models and the core library into smaller ES6 bundles (not ES modules, unfortunately 😢), so that you can use a model independently of the rest of the library. These bundles don't package Tone.js or TensorFlow.js (since there would be a risk of downloading multiple copies on the same page). Here is an example:
<html>
<head>
...
<!-- You need to bring your own Tone.js for the player, and tfjs for the model -->
<script src="https://cdnjs.cloudflare.com/ajax/libs/tone/13.8.21/Tone.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/tensorflow/1.2.8/tf.min.js"></script>
<!-- Core library, since we're going to use a player -->
<script src="https://cdn.jsdelivr.net/npm/@magenta/music@^1.0.0/es6/core.js"></script>
<!-- Model we want to use -->
<script src="https://cdn.jsdelivr.net/npm/@magenta/music@^1.0.0/es6/music_vae.js"></script>
</head>
<script>
// Each bundle exports a global object with the name of the bundle.
const player = new core.Player();
//...
const mvae = new music_vae.MusicVAE('https://storage.googleapis.com/magentadata/js/checkpoints/music_vae/mel_2bar_small');
mvae.initialize().then(() => {
//...
});
</script>
</html>
You can use MagentaMusic.js in your project using yarn (by calling yarn add @magenta/music) or npm (by calling npm install --save @magenta/music).
The node-specific bundles (that don't transpile the CommonJS modules) are under @magenta/music/node. For example:
const music_vae = require('@magenta/music/node/music_vae');
const core = require('@magenta/music/node/core');

// These hacks below are needed because the library uses performance and fetch,
// which exist in browsers but not in Node. We are working on simplifying this!
const globalAny = global;
globalAny.performance = Date;
globalAny.fetch = require('node-fetch');

// Your code:
const model = new music_vae.MusicVAE('/path/to/checkpoint');
const player = new core.Player();
model
  .initialize()
  .then(() => model.sample(1))
  .then(samples => {
    player.resumeContext();
    player.start(samples[0]);
  });
If you're developing the library itself, run:
yarn install to install dependencies.
yarn test to run tests.
yarn build to produce the different bundled versions.
yarn run-demos to build and serve the demos, with live reload.
(Note: the default behavior is to build/watch all demos; specific demos can be built by passing a comma-separated list of demo names, e.g. yarn run-demos --demos=transcription,visualizer.)
Since MagentaMusic.js does not support training models, you must use weights from a model trained with the Python-based Magenta models. We are also making available our own hosted pre-trained checkpoints.
Several pre-trained MusicRNN and MusicVAE checkpoints are hosted on GCS. The full list is available in this table and can be accessed programmatically via a JSON index at https://goo.gl/magenta/js-checkpoints-json.
More information is available at https://goo.gl/magenta/js-checkpoints.
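For instance, the JSON index can be fetched at runtime to discover available checkpoints. A sketch follows; the `model`, `id`, and `url` fields are assumptions about the shape of each index entry:

// Sketch: fetch the hosted checkpoint index and list MusicVAE checkpoints.
// The entry fields used below (`model`, `id`, `url`) are assumed, not guaranteed.
fetch('https://goo.gl/magenta/js-checkpoints-json')
  .then((response) => response.json())
  .then((checkpoints) => {
    checkpoints
      .filter((ckpt) => ckpt.model === 'MusicVAE')
      .forEach((ckpt) => console.log(ckpt.id, ckpt.url));
  });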
To use your own checkpoints with one of our models, you must first convert the weights to the appropriate format using the provided checkpoint_converter script.
This tool is dependent on tfjs-converter, which you must first install using pip install tensorflowjs. Once installed, you can execute the script as follows:
../scripts/checkpoint_converter.py /path/to/model.ckpt /path/to/output_dir
There are additional flags available to reduce the size of the output by removing unused (training) variables or using weight quantization. Call ../scripts/checkpoint_converter.py -h to list the available options.
The model configuration should be placed in a JSON file named config.json in the same directory as your checkpoint. This configuration file contains all the information needed (besides the weights) to instantiate and run your model: the model type and data converter specification, plus optional chord encoding, auxiliary inputs, and attention length. An example config.json file might look like:
{
"type": "MusicRNN",
"dataConverter": {
"type": "MelodyConverter",
"args": {
"minPitch": 48,
"maxPitch": 83
}
},
"chordEncoder": "PitchChordEncoder"
}
This configuration corresponds to a chord-conditioned melody MusicRNN model.
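With the converted weights and config.json in one directory (served over HTTP for browser use), the checkpoint can then be loaded like any hosted one. A minimal sketch, assuming the directory is reachable at the placeholder URL below:

// Assumes /path/to/output_dir (containing the converted weights and config.json)
// is served at this URL; the URL itself is an illustrative placeholder.
const myModel = new mm.MusicRNN('http://localhost:8080/my_melody_rnn_checkpoint');
myModel.initialize().then(() => {
  // The model type and data converter are read from config.json automatically.
  console.log('Custom checkpoint loaded');
});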
There are several SoundFonts that you can use with the mm.SoundFontPlayer, for more realistic-sounding instruments:
Instrument | URL | License |
---|---|---|
Piano | salamander | Audio samples from Salamander Grand Piano |
Multi | sgm_plus | Audio samples based on SGM with modifications by John Nebauer |
Percussion | jazz_kit | Audio samples from Jazz Kit (EXS) by Lithalean |
You can explore what each of them sounds like on this demo page.
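A minimal sketch of playing a NoteSequence through the piano SoundFont (assuming `sequence` is an existing NoteSequence; the URL is assumed to be the hosted salamander SoundFont from the table above):

// Assumes `sequence` is a NoteSequence and the URL (an assumption) points at
// the hosted salamander SoundFont.
const sfPlayer = new mm.SoundFontPlayer(
    'https://storage.googleapis.com/magentadata/js/soundfonts/salamander');

// Pre-load the samples needed for the notes in the sequence, then play it.
sfPlayer.loadSamples(sequence).then(() => sfPlayer.start(sequence));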