@magenta/music

@magenta/music - npm Package Compare versions

Comparing version 1.2.0 to 1.2.1


es5/core/index.d.ts

@@ -6,4 +6,5 @@ import * as aux_inputs from './aux_inputs';

 import * as logging from './logging';
+import * as performance from './performance';
 import * as sequences from './sequences';
-export { aux_inputs, chords, constants, data, logging, sequences };
+export { aux_inputs, chords, constants, data, logging, performance, sequences };
 export * from './midi_io';

@@ -10,0 +11,0 @@ export * from './player';

@@ -16,2 +16,4 @@ "use strict";

 exports.logging = logging;
+var performance = require("./performance");
+exports.performance = performance;
 var sequences = require("./sequences");

@@ -18,0 +20,0 @@ exports.sequences = sequences;

@@ -1,2 +1,2 @@

-export declare class OnsetsAndFrames {
+declare class OnsetsAndFrames {
 private checkpointURL;

@@ -14,8 +14,9 @@ chunkLength: number;

 isInitialized(): boolean;
-transcribeFromMelSpec(melSpec: number[][], parallelBatches?: number): Promise<import("../../../../../../../Users/adarob/repos/adarob-magentajs/music/src/protobuf/proto").tensorflow.magenta.NoteSequence>;
-transcribeFromAudioBuffer(audioBuffer: AudioBuffer, batchSize?: number): Promise<import("../../../../../../../Users/adarob/repos/adarob-magentajs/music/src/protobuf/proto").tensorflow.magenta.NoteSequence>;
-transcribeFromAudioFile(blob: Blob): Promise<import("../../../../../../../Users/adarob/repos/adarob-magentajs/music/src/protobuf/proto").tensorflow.magenta.NoteSequence>;
-transcribeFromAudioURL(url: string): Promise<import("../../../../../../../Users/adarob/repos/adarob-magentajs/music/src/protobuf/proto").tensorflow.magenta.NoteSequence>;
+transcribeFromMelSpec(melSpec: number[][], parallelBatches?: number): Promise<import("../../../../../../../Users/noms/Code/magenta-js/music/src/protobuf/proto").tensorflow.magenta.NoteSequence>;
+transcribeFromAudioBuffer(audioBuffer: AudioBuffer, batchSize?: number): Promise<import("../../../../../../../Users/noms/Code/magenta-js/music/src/protobuf/proto").tensorflow.magenta.NoteSequence>;
+transcribeFromAudioFile(blob: Blob): Promise<import("../../../../../../../Users/noms/Code/magenta-js/music/src/protobuf/proto").tensorflow.magenta.NoteSequence>;
+transcribeFromAudioURL(url: string): Promise<import("../../../../../../../Users/noms/Code/magenta-js/music/src/protobuf/proto").tensorflow.magenta.NoteSequence>;
 private processBatches;
 private build;
 }
+export { OnsetsAndFrames };
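Beyond fixing the leaked local build paths, the declaration above documents the transcription surface. As a hedged sketch of how that surface is typically driven (the lazy-initialize flow, the `Transcriber` interface, and the `fakeModel` stand-in below are illustrative, not part of the package):

```typescript
// Reduced NoteSequence shape; the real type comes from the package's
// protobuf definitions (tensorflow.magenta.NoteSequence).
interface Note { pitch: number; startTime: number; endTime: number; }
interface NoteSequence { notes: Note[]; totalTime: number; }

// Structural subset of the OnsetsAndFrames methods declared in the diff.
interface Transcriber {
  initialize(): Promise<void>;
  isInitialized(): boolean;
  transcribeFromAudioURL(url: string): Promise<NoteSequence>;
}

// Initialize lazily, then transcribe a remote audio file.
async function transcribe(model: Transcriber, url: string): Promise<NoteSequence> {
  if (!model.isInitialized()) {
    await model.initialize();
  }
  return model.transcribeFromAudioURL(url);
}

// In-memory stand-in so the flow can be exercised without model weights.
function fakeModel(): Transcriber {
  let ready = false;
  return {
    initialize: async () => { ready = true; },
    isInitialized: () => ready,
    transcribeFromAudioURL: async (_url: string) => ({
      notes: [{ pitch: 60, startTime: 0, endTime: 0.5 }], // middle C stub
      totalTime: 0.5,
    }),
  };
}
```

With the real package, the same flow would construct the model from a checkpoint URL, call `initialize()`, and then use one of the `transcribeFrom*` methods declared above.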
package.json

 {
 "name": "@magenta/music",
-"version": "1.2.0",
+"version": "1.2.1",
 "description": "Make music with machine learning, in the browser.",

@@ -41,3 +41,3 @@ "main": "es5/index.js",

 "scripts": {
-"prepublish": "yarn lint && yarn test && yarn build && yarn doc && yarn bundle && yarn publish-demos",
+"prepublish": "yarn lint && yarn test && yarn build && yarn doc && yarn bundle",
 "build": "tsc && cp src/protobuf/proto.* es5/protobuf",

@@ -51,3 +51,3 @@ "bundle": "browserify --standalone mm src/index.ts -p [tsify] > dist/magentamusic.js",

 "proto": "sh compile-proto.sh",
-"doc": "sh generate-docs.sh"
+"doc": "sh generate-docs.sh && yarn publish-demos"
 },

@@ -54,0 +54,0 @@ "author": "Magenta",

@@ -18,8 +18,10 @@ # @magenta/music

-Here are a few applications built with MagentaMusic.js:
+Here are a few applications built with `@magenta/music`:
+- [Piano Scribe](https://piano-scribe.glitch.me) by [Monica Dinculescu](https://github.com/notwaldorf) and [Adam Roberts](https://github.com/adarob)
 - [Beat Blender](https://g.co/beatblender) by [Google Creative Lab](https://github.com/googlecreativelab)
 - [Melody Mixer](https://g.co/melodymixer) by [Google Creative Lab](https://github.com/googlecreativelab)
 - [Latent Loops](https://goo.gl/magenta/latent-loops) by [Google Pie Shop](https://github.com/teampieshop)
-- [Neural Drum Machine](https://codepen.io/teropa/pen/RMGxOQ) by [Tero Parviainen](https://github.com/teropa)
+- [Neural Drum Machine](https://goo.gl/magenta/neuraldrum) by [Tero Parviainen](https://github.com/teropa)
+- [Tenori-Off](https://tenori-off.glitch.me) by [Monica Dinculescu](https://github.com/notwaldorf)

@@ -33,10 +35,20 @@ You can also try our [hosted demos](https://tensorflow.github.io/magenta-js/music/demos) for each model and have a look at the [demo code](./demos).

+### Piano Transcription w/ Onsets and Frames
+[OnsetsAndFrames](https://tensorflow.github.io/magenta-js/music/classes/_transcription_model_.onsetsandframes.html) implements Magenta's [piano transcription model](g.co/magenta/onsets-frames) for converting raw audio to MIDI in the browser. While it is somewhat flexible, it works best on solo piano recordings. The algorithm takes half the duration of the audio to run on most browsers, but due to a [Webkit bug](https://github.com/WebKit/webkit/blob/4a4870b75b95a836b516163d45a5cbd6f5222562/Source/WebCore/Modules/webaudio/AudioContext.cpp#L109), audio resampling will make it significantly slower on Safari.
+**Demo Application:** [Piano Scribe](https://piano-scribe.glitch.me)
 ### MusicRNN
-[MusicRNN](https://tensorflow.github.io/magenta-js/classes/_music_vae_model_.musicvae.html) implements Magenta's LSTM-based language models. These include [MelodyRNN][melody-rnn], [DrumsRNN][drums-rnn], [ImprovRNN][improv-rnn], and [PerformanceRNN][performance-rnn].
+[MusicRNN](https://tensorflow.github.io/magenta-js/music/classes/_music_rnn_model_.musicrnn.html) implements Magenta's LSTM-based language models. These include [MelodyRNN][melody-rnn], [DrumsRNN][drums-rnn], [ImprovRNN][improv-rnn], and [PerformanceRNN][performance-rnn].
+**Demo Application:** [Neural Drum Machine](https://goo.gl/magenta/neuraldrum)
 ### MusicVAE
-[MusicVAE](https://tensorflow.github.io/magenta-js/classes/_music_rnn_model_.musicrnn.html) implements several configurations of Magenta's variational autoencoder model called [MusicVAE][music-vae] including melody and drum "loop" models, 4- and 16-bar "trio" models, and chord-conditioned "multi-track" models.
+[MusicVAE](https://tensorflow.github.io/magenta-js/music/classes/_music_vae_model_.musicvae.html) implements several configurations of Magenta's variational autoencoder model called [MusicVAE][music-vae] including melody and drum "loop" models, 4- and 16-bar "trio" models, and chord-conditioned "multi-track" models.
+**Demo Application:** [Endless Trios](https://goo.gl/magenta/endless-trios)
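To ground the corrected MusicRNN/MusicVAE links above, here is a hedged sketch of the quantized NoteSequence input both model families consume. The field names follow the library's NoteSequence proto; the interface names, checkpoint URL placeholder, and the `mm` calls in the trailing comments are illustrative only and not executed here.

```typescript
// Quantized NoteSequence subset (field names per the NoteSequence proto).
interface QuantizedNote {
  pitch: number;
  quantizedStartStep: number;
  quantizedEndStep: number;
}
interface QuantizedSequence {
  notes: QuantizedNote[];
  quantizationInfo: { stepsPerQuarter: number };
  totalQuantizedSteps: number;
}

// A one-bar seed melody: C4, E4, G4, C5 as quarter notes (4 steps each).
const seed: QuantizedSequence = {
  notes: [60, 64, 67, 72].map((pitch, i) => ({
    pitch,
    quantizedStartStep: i * 4,
    quantizedEndStep: i * 4 + 4,
  })),
  quantizationInfo: { stepsPerQuarter: 4 },
  totalQuantizedSteps: 16,
};

// With the bundle loaded as `mm` (illustrative, not executed here):
//   const rnn = new mm.MusicRNN('<checkpoint URL>');
//   await rnn.initialize();
//   const longer = await rnn.continueSequence(seed, 32, 1.0);
//
//   const vae = new mm.MusicVAE('<checkpoint URL>');
//   await vae.initialize();
//   const samples = await vae.sample(4);
```

The same seed shape works for both families: MusicRNN continues it, while MusicVAE encodes/decodes or samples sequences of that form.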
## Getting started

@@ -43,0 +55,0 @@

Sorry, the diff of this file is too big to display

Sorry, the diff of this file is not supported yet

Sorry, the diff of this file is not supported yet
