@magenta/music

This JavaScript implementation of Magenta's musical note-based models uses TensorFlow.js for GPU-accelerated inference.

Complete documentation is available at https://tensorflow.github.io/magenta-js.

For the Python TensorFlow implementations, see the main Magenta repo.

Example Applications

A number of example applications have been built with MagentaMusic.js, such as the Neural Drum Machine linked below.

Supported Models

We have made an effort to port our most useful models, but please file an issue if you think something is missing, or feel free to submit a Pull Request!

MusicRNN

MusicRNN implements Magenta's LSTM-based language models. These include MelodyRNN, DrumsRNN, ImprovRNN, and PerformanceRNN.
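The typical MusicRNN workflow is to initialize a checkpoint and then ask the model to continue a seed sequence. Here is a minimal sketch; the seed melody and parameter values are illustrative choices of ours, while `basic_rnn` is one of the Magenta-hosted MelodyRNN checkpoints:

```javascript
import * as mm from '@magenta/music';

// Illustrative seed: a short quantized melody (4 steps per quarter note).
const seed = {
  notes: [
    {pitch: 60, quantizedStartStep: 0, quantizedEndStep: 2},
    {pitch: 62, quantizedStartStep: 2, quantizedEndStep: 4},
  ],
  totalQuantizedSteps: 4,
  quantizationInfo: {stepsPerQuarter: 4},
};

const model = new mm.MusicRNN(
    'https://storage.googleapis.com/magentadata/js/checkpoints/music_rnn/basic_rnn');

model.initialize()
    // Generate 16 additional steps at temperature 1.1.
    .then(() => model.continueSequence(seed, 16, 1.1))
    .then((continuation) => new mm.Player().start(continuation));
```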

MusicVAE

MusicVAE implements several configurations of Magenta's variational autoencoder model called MusicVAE including melody and drum "loop" models, 4- and 16-bar "trio" models, and chord-conditioned "multi-track" models.
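Besides sampling, MusicVAE can interpolate between sequences through its latent space. A sketch of that usage, assuming the hosted `mel_2bar_small` melody checkpoint and two made-up 2-bar input melodies:

```javascript
import * as mm from '@magenta/music';

// Two illustrative 2-bar melodies (the note values here are ours).
const melodyA = {
  notes: [{pitch: 60, quantizedStartStep: 0, quantizedEndStep: 32}],
  totalQuantizedSteps: 32,
  quantizationInfo: {stepsPerQuarter: 4},
};
const melodyB = {
  notes: [{pitch: 72, quantizedStartStep: 0, quantizedEndStep: 32}],
  totalQuantizedSteps: 32,
  quantizationInfo: {stepsPerQuarter: 4},
};

const model = new mm.MusicVAE(
    'https://storage.googleapis.com/magentadata/js/checkpoints/music_vae/mel_2bar_small');

model.initialize()
    // Request 5 outputs spaced through latent space, endpoints included.
    .then(() => model.interpolate([melodyA, melodyB], 5))
    .then((sequences) => new mm.Player().start(sequences[2]));
```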

Getting started

There are two main ways to get MagentaMusic.js into your JavaScript project: via script tags, or by installing it from NPM and bundling it with a build tool.

via Script Tag

Add the following code to an HTML file:

<html>
  <head>
    <!-- Load @magenta/music -->
    <script src="https://cdn.jsdelivr.net/npm/@magenta/music@1.0.0"></script>
    <script>
      // Instantiate model by loading desired config.
      const model = new mm.MusicVAE(
        'https://storage.googleapis.com/magentadata/js/checkpoints/music_vae/trio_4bar');
      const player = new mm.Player();

      function play() {
        mm.Player.tone.context.resume();  // enable audio
        model.sample(1)
          .then((samples) => player.start(samples[0], 80));
      }
    </script>
  </head>
  <body><button onclick="play()"><h1>Play Trio</h1></button></body>
</html>

Open that HTML file in your browser and the code will run. Click the "Play Trio" button to hear 4-bar trios randomly generated by MusicVAE.

It's also easy to add the ability to download MIDI for generated outputs, which is demonstrated in this example.
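A common way to wire up such a download is sketched below: `mm.sequenceProtoToMidi` converts a `NoteSequence` to MIDI bytes, and the rest is standard browser download plumbing (the `downloadMidi` helper and filename are our own illustration):

```javascript
// Convert a generated NoteSequence to MIDI bytes and trigger a download.
function downloadMidi(sequence, filename) {
  const midiBytes = mm.sequenceProtoToMidi(sequence);
  const blob = new Blob([midiBytes], {type: 'audio/midi'});
  const link = document.createElement('a');
  link.href = URL.createObjectURL(blob);
  link.download = filename;
  link.click();
  URL.revokeObjectURL(link.href);
}

// For example, after sampling:
// model.sample(1).then((samples) => downloadMidi(samples[0], 'trio.mid'));
```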

See the Neural Drum Machine by @teropa for a complete example application with code.

via NPM

Add MagentaMusic.js to your project using yarn or npm. For example, with yarn you can simply call yarn add @magenta/music.

Then, you can use the library in your own code as in the following example:

import * as mm from '@magenta/music';

const model = new mm.MusicVAE('/path/to/checkpoint');
const player = new mm.Player();

model.initialize()
    .then(() => model.sample(1))
    .then((samples) => player.start(samples[0]));

See our demos for example usage.

Example Commands

  • yarn install to install dependencies.
  • yarn test to run tests.
  • yarn bundle to produce a bundled version in dist/.
  • yarn run-demos to build and run the demos.

Model Checkpoints

Since MagentaMusic.js does not support training, you must use weights from a model trained with the Python-based Magenta library. We also make our own hosted pre-trained checkpoints available.

Magenta-Hosted Checkpoints

Several pre-trained MusicRNN and MusicVAE checkpoints are hosted on GCS. The full list is available in this table and can be accessed programmatically via a JSON index at https://goo.gl/magenta/js-checkpoints-json.

More information is available at https://goo.gl/magenta/js-checkpoints.
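To discover checkpoints programmatically, you can fetch and filter that JSON index. The sketch below assumes each index entry carries `id` and `model` fields; the real field names may differ, so treat it as illustrative:

```javascript
// Return the ids of index entries for a given model class.
// (The `id`/`model` field names are assumptions about the index format.)
function checkpointsForModel(index, modelName) {
  return index
      .filter((entry) => entry.model === modelName)
      .map((entry) => entry.id);
}

// In the browser you would fetch the hosted index first:
// fetch('https://goo.gl/magenta/js-checkpoints-json')
//     .then((res) => res.json())
//     .then((index) => console.log(checkpointsForModel(index, 'MusicRNN')));

// Self-contained demonstration with made-up entries:
const sampleIndex = [
  {id: 'basic_rnn', model: 'MusicRNN'},
  {id: 'trio_4bar', model: 'MusicVAE'},
];
console.log(checkpointsForModel(sampleIndex, 'MusicRNN'));  // [ 'basic_rnn' ]
```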

Your Own Checkpoints

Dumping Your Weights

To use your own checkpoints with one of our models, you must first convert the weights to the appropriate format using the provided checkpoint_converter script.

This tool is dependent on tfjs-converter, which you must first install using pip install tensorflowjs. Once installed, you can execute the script as follows:

../scripts/checkpoint_converter.py /path/to/model.ckpt /path/to/output_dir

There are additional flags available to reduce the size of the output by removing unused (training) variables or applying weight quantization. Call ../scripts/checkpoint_converter.py -h to list the available options.

Specifying the Model Configuration

The model configuration should be placed in a JSON file named config.json in the same directory as your checkpoint. This configuration file contains all the information needed (besides the weights) to instantiate and run your model: the model type and data converter specification plus optional chord encoding, auxiliary inputs, and attention length. An example config.json file might look like:

{
  "type": "MusicRNN",
  "dataConverter": {
    "type": "MelodyConverter",
    "args": {
      "minPitch": 48,
      "maxPitch": 83
    }
  },
  "chordEncoder": "PitchChordEncoder"
}

This configuration corresponds to a chord-conditioned melody MusicRNN model.
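Before shipping a checkpoint, it can be worth sanity-checking the config object against the constraints implied above. The validator below is our own sketch, not a library API:

```javascript
// Minimal sanity check for a config.json object (our own helper,
// not part of @magenta/music).
function validateConfig(config) {
  if (typeof config.type !== 'string') {
    throw new Error('config.type (the model class) is required');
  }
  const dc = config.dataConverter;
  if (!dc || typeof dc.type !== 'string') {
    throw new Error('config.dataConverter.type is required');
  }
  const args = dc.args || {};
  if (args.minPitch !== undefined && args.maxPitch !== undefined &&
      args.minPitch > args.maxPitch) {
    throw new Error('minPitch must not exceed maxPitch');
  }
  return config;
}

const config = validateConfig({
  type: 'MusicRNN',
  dataConverter: {
    type: 'MelodyConverter',
    args: {minPitch: 48, maxPitch: 83},
  },
  chordEncoder: 'PitchChordEncoder',
});
console.log(config.type);  // MusicRNN
```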

Package last updated on 20 Jun 2018
