@magenta/music

Make music in the browser with machine learning.

MagentaMusic.js API

This JavaScript implementation of Magenta's musical note-based models uses TensorFlow.js for GPU-accelerated inference.

For the Python TensorFlow implementations, see the main Magenta repo.


Example Applications

Several applications have been built with MagentaMusic.js, including the Neural Drum Machine described under Getting started below.

Supported Models

We have made an effort to port our most useful models, but please file an issue if you think something is missing, or feel free to submit a Pull Request!

MusicRNN

MusicRNN implements Magenta's LSTM-based language models. These include MelodyRNN, DrumsRNN, ImprovRNN, and PerformanceRNN.
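As a minimal sketch, here is how a short seed melody might be continued with MusicRNN. The basic_rnn checkpoint path below is an assumption that follows the hosted-checkpoint pattern used in the examples later in this README; substitute your own checkpoint:

import * as mm from '@magenta/music';

// Assumed hosted checkpoint path; see the checkpoint table for the actual list.
const model = new mm.MusicRNN(
    'https://storage.googleapis.com/download.magenta.tensorflow.org/' +
    'tfjs_checkpoints/music_rnn/basic_rnn');

// A short, quantized two-note seed (4 steps per quarter note).
const seed = {
  notes: [
    {pitch: 60, quantizedStartStep: 0, quantizedEndStep: 2},
    {pitch: 64, quantizedStartStep: 2, quantizedEndStep: 4}
  ],
  quantizationInfo: {stepsPerQuarter: 4},
  totalQuantizedSteps: 4
};

model.initialize()
    // Continue the seed for 16 steps at temperature 1.0.
    .then(() => model.continueSequence(seed, 16, 1.0))
    .then((continuation) => new mm.Player().start(continuation));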

MusicVAE

MusicVAE implements several configurations of Magenta's variational autoencoder model, MusicVAE, including melody and drum "loop" models, 4- and 16-bar "trio" models, and chord-conditioned "multi-track" models.
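Beyond random sampling (shown under Getting started below), MusicVAE can interpolate between sequences. A minimal sketch, assuming a hosted 4-bar melody checkpoint (the mel_4bar_small_q2 name here is a placeholder; check the checkpoint table for the actual list):

import * as mm from '@magenta/music';

// Placeholder checkpoint name; substitute a real hosted melody checkpoint.
const mvae = new mm.MusicVAE(
    'https://storage.googleapis.com/download.magenta.tensorflow.org/' +
    'tfjs_checkpoints/music_vae/mel_4bar_small_q2');

// Two minimal 4-bar melodies (16 bars x 16 steps = 64 quantized steps).
const melodyA = {
  notes: [{pitch: 60, quantizedStartStep: 0, quantizedEndStep: 8}],
  quantizationInfo: {stepsPerQuarter: 4},
  totalQuantizedSteps: 64
};
const melodyB = {
  notes: [{pitch: 72, quantizedStartStep: 0, quantizedEndStep: 8}],
  quantizationInfo: {stepsPerQuarter: 4},
  totalQuantizedSteps: 64
};

mvae.initialize()
    // Produce 5 sequences that morph from melodyA to melodyB.
    .then(() => mvae.interpolate([melodyA, melodyB], 5))
    // Play the middle of the interpolation.
    .then((sequences) => new mm.Player().start(sequences[2]));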

Getting started

There are two main ways to get MagentaMusic.js into your JavaScript project: via script tag, or by installing it from NPM with a package manager such as yarn.

via Script Tag

Add the following code to an HTML file:

<html>
  <head>
    <!-- Load MagentaMusic.js -->
    <script src="https://cdn.jsdelivr.net/npm/@magenta/music@0.0.6"> </script>

    <!-- Place your code in the script tag below. You can also use an external .js file -->
    <script>
      // Notice there is no 'import' statement: 'mm' is available as a
      // global variable because of the script tag above.

      // Instantiate the model by loading the desired checkpoint.
      const model = new mm.MusicVAE(
          'https://storage.googleapis.com/download.magenta.tensorflow.org/' +
          'tfjs_checkpoints/music_vae/trio_4bar_lokl_small_q1');
      const player = new mm.Player();

      // Endlessly sample and play back the result.
      Promise.resolve().then(
        function sampleAndPlay() {
          return model.sample(1)
              .then((samples) => player.start(samples[0]))
              .then(sampleAndPlay);
        });
    </script>
  </head>

  <body></body>
</html>

Open up that HTML file in your browser and the code will run. After a few seconds you'll hear an endless stream of 4-bar trios randomly generated by MusicVAE!

See the Neural Drum Machine by @teropa for a complete example application with code.

via NPM

Add MagentaMusic.js to your project using yarn or npm. For example, with yarn you can simply call yarn add @magenta/music.

Then, you can use the library in your own code as in the following example:

import * as mm from '@magenta/music';

// Load the model from a local checkpoint directory.
const model = new mm.MusicVAE('/path/to/checkpoint');
const player = new mm.Player();

// Initialize the model, sample one sequence, and play it back.
model.initialize()
    .then(() => model.sample(1))
    .then((samples) => player.start(samples[0]));

See our demos for example usage.

Example Commands

  • yarn install to install dependencies.
  • yarn test to run tests.
  • yarn bundle to produce a bundled version in dist/.
  • yarn run-demos to build and run the demo.

Model Checkpoints

Since MagentaMusic.js does not support training models, you must use weights from a model trained with the Python-based Magenta models. We also provide our own hosted pre-trained checkpoints.

Magenta-Hosted Checkpoints

Several pre-trained MusicRNN and MusicVAE checkpoints are hosted on GCS. The full list is available in this table and can be accessed programmatically via a JSON index at https://goo.gl/magenta/js-checkpoints-json.

More information is available at https://goo.gl/magenta/js-checkpoints.
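For instance, assuming the index resolves to a JSON array (its exact schema is not documented here), a small sketch that lists the hosted checkpoints programmatically:

// Fetch the checkpoint index and log each entry. The index is assumed
// to be a JSON array; inspect the output to see the fields it provides.
fetch('https://goo.gl/magenta/js-checkpoints-json')
    .then((response) => response.json())
    .then((checkpoints) => checkpoints.forEach((c) => console.log(c)));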

Your Own Checkpoints

Dumping Your Weights

To use your own checkpoints with one of our models, you must first convert the weights to the appropriate format using the provided checkpoint_converter script.

This tool depends on tfjs-converter, which you must first install using pip install tensorflowjs. Once installed, you can execute the script as follows:

../scripts/checkpoint_converter.py /path/to/model.ckpt /path/to/output_dir

There are additional flags available to reduce the size of the output by removing unused (training) variables or applying weight quantization. Call ../scripts/checkpoint_converter.py -h to list the available options.

Specifying the Model Configuration

The model configuration should be placed in a JSON file named config.json in the same directory as your checkpoint. This configuration file contains all the information needed (besides the weights) to instantiate and run your model: the model type and data converter specification plus optional chord encoding, auxiliary inputs, and attention length. An example config.json file might look like:

{
  "type": "MusicRNN",
  "dataConverter": {
    "type": "MelodyConverter",
    "args": {
      "minPitch": 48,
      "maxPitch": 83
    }
  },
  "chordEncoder": "PitchChordEncoder"
}

This configuration corresponds to a chord-conditioned melody MusicRNN model.
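As a sketch, a model converted with this configuration could then be loaded and conditioned on a chord progression via the optional chordProgression argument of continueSequence (the checkpoint path and seed here are placeholders):

import * as mm from '@magenta/music';

// '/path/to/checkpoint' is the directory containing your converted
// weights and the config.json shown above.
const model = new mm.MusicRNN('/path/to/checkpoint');

// A one-note seed within the model's [minPitch, maxPitch] range.
const seed = {
  notes: [{pitch: 60, quantizedStartStep: 0, quantizedEndStep: 2}],
  quantizationInfo: {stepsPerQuarter: 4},
  totalQuantizedSteps: 2
};

model.initialize()
    // Continue for 32 steps, conditioned on a I-V-vi-IV progression.
    .then(() => model.continueSequence(seed, 32, 1.0, ['C', 'G', 'Am', 'F']))
    .then((result) => new mm.Player().start(result));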
