Tone.js is a Web Audio framework for creating interactive music in the browser. The architecture of Tone.js aims to be familiar to both musicians and audio programmers creating web-based audio applications. At a high level, Tone offers common DAW (digital audio workstation) features like a global transport for synchronizing and scheduling events, as well as prebuilt synths and effects. Additionally, Tone provides high-performance building blocks to create your own synthesizers, effects, and complex control signals.
There are two ways to incorporate Tone.js into a project. First, it can be installed locally into a project using npm:
npm install tone # Install the latest stable version
npm install tone@next # Or, alternatively, use the 'next' version
Add Tone.js to a project using the JavaScript import syntax:
import * as Tone from "tone";
Tone.js is also hosted at unpkg.com. It can be added directly within an HTML document, as long as it precedes any project scripts. See the example here for more details.
<script src="https://unpkg.com/tone"></script>
//create a synth and connect it to the main output (your speakers)
const synth = new Tone.Synth().toDestination();
//play a middle 'C' for the duration of an 8th note
synth.triggerAttackRelease("C4", "8n");
Tone.Synth is a basic synthesizer with a single oscillator and an ADSR envelope. triggerAttack starts the note (the amplitude is rising), and triggerRelease brings the amplitude back down to 0 (i.e. note off).
const synth = new Tone.Synth().toDestination();
const now = Tone.now();
// trigger the attack immediately
synth.triggerAttack("C4", now);
// wait one second before triggering the release
synth.triggerRelease(now + 1);
triggerAttackRelease is a combination of triggerAttack and triggerRelease.
The first argument is the note, which can be either a frequency in hertz (like 440) or "pitch-octave" notation (like "D#2").
The second argument is the duration that the note is held. This value can either be in seconds, or as a tempo-relative value.
The third (optional) argument of triggerAttackRelease is when, along the AudioContext timeline, the note should play. It can be used to schedule events in the future.
const synth = new Tone.Synth().toDestination();
const now = Tone.now();
synth.triggerAttackRelease("C4", "8n", now);
synth.triggerAttackRelease("E4", "8n", now + 0.5);
synth.triggerAttackRelease("G4", "8n", now + 1);
Web Audio has advanced, sample-accurate scheduling capabilities. The AudioContext time is what the Web Audio API uses to schedule events; it starts at 0 when the page loads and counts up in seconds.
Tone.now() gets the current time of the AudioContext.
setInterval(() => console.log(Tone.now()), 100);
Tone.js abstracts away the AudioContext time. Instead of defining all values in seconds, any method which takes time as an argument can accept a number or a string. For example, "4n" is a quarter-note, "8t" is an eighth-note triplet, and "1m" is one measure.
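To check what a tempo-relative value works out to in seconds, Tone.Time can convert it. A small sketch (the printed values assume the default 120 BPM and a 4/4 time signature):
// convert tempo-relative values to seconds at the current tempo
const quarter = Tone.Time("4n").toSeconds(); // 0.5
const triplet = Tone.Time("8t").toSeconds(); // ~0.167
const measure = Tone.Time("1m").toSeconds(); // 2
console.log(quarter, triplet, measure);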
IMPORTANT: Browsers will not play any audio until a user clicks something (like a play button). Run your Tone.js code only after calling Tone.start() from an event listener which is triggered by a user action such as "click" or "keydown".
Tone.start() returns a promise; the audio will be ready only after that promise resolves. Scheduling or playing audio before the AudioContext is running will result in silence or incorrect scheduling.
//attach a click listener to a play button
document.querySelector("button")?.addEventListener("click", async () => {
  await Tone.start();
  console.log("audio is ready");
});
Tone.getTransport() returns the main timekeeper. Unlike the AudioContext clock, it can be started, stopped, looped and adjusted on the fly. You can think of it like the arrangement view in a Digital Audio Workstation.
Multiple events and parts can be arranged and synchronized along the Transport. Tone.Loop is a simple way to create a looped callback that can be scheduled to start and stop.
// create two monophonic synths
const synthA = new Tone.FMSynth().toDestination();
const synthB = new Tone.AMSynth().toDestination();
//play a note every quarter-note
const loopA = new Tone.Loop((time) => {
  synthA.triggerAttackRelease("C2", "8n", time);
}, "4n").start(0);
//play another note on the off quarter-notes, by starting the loop an "8n" later
const loopB = new Tone.Loop((time) => {
  synthB.triggerAttackRelease("C4", "8n", time);
}, "4n").start("8n");
// all loops start when the Transport is started
Tone.getTransport().start();
// ramp up to 800 bpm over 10 seconds
Tone.getTransport().bpm.rampTo(800, 10);
Since JavaScript callbacks are not precisely timed, the sample-accurate time of the event is passed into the callback function. Use this time value to schedule the events.
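The same pattern applies to one-off events. A minimal sketch using Tone.getTransport().schedule(), where the time argument passed to the callback is the precise moment to hand to the instrument:
const synth = new Tone.Synth().toDestination();
// schedule a single note one measure after the Transport starts
Tone.getTransport().schedule((time) => {
  synth.triggerAttackRelease("C3", "8n", time);
}, "1m");
Tone.getTransport().start();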
There are numerous synths to choose from, including Tone.FMSynth, Tone.AMSynth and Tone.NoiseSynth.
All of these instruments are monophonic (single voice), which means they can only play one note at a time.
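For example, an AMSynth can be used anywhere Tone.Synth was used above; the trigger methods are the same (a minimal sketch):
const amSynth = new Tone.AMSynth().toDestination();
// one voice at a time: triggering a second note would cut off the first
amSynth.triggerAttackRelease("G3", "4n");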
To create a polyphonic synthesizer, use Tone.PolySynth, which accepts a monophonic synth as its first parameter and automatically handles the note allocation so you can pass in multiple notes. The API is similar to the monophonic synths, except triggerRelease must be given a note or array of notes.
const synth = new Tone.PolySynth(Tone.Synth).toDestination();
const now = Tone.now();
synth.triggerAttack("D4", now);
synth.triggerAttack("F4", now + 0.5);
synth.triggerAttack("A4", now + 1);
synth.triggerAttack("C5", now + 1.5);
synth.triggerAttack("E5", now + 2);
synth.triggerRelease(["D4", "F4", "A4", "C5", "E5"], now + 4);
Sound generation is not limited to synthesized sounds. You can also load a sample and play that back in a number of ways. Tone.Player is one way to load and play back an audio file.
const player = new Tone.Player(
  "https://tonejs.github.io/audio/berklee/gong_1.mp3"
).toDestination();
Tone.loaded().then(() => {
  player.start();
});
Tone.loaded() returns a promise which resolves when all audio files are loaded. It's a helpful shorthand instead of waiting on each individual audio buffer's onload event to resolve.
Multiple samples can also be combined into an instrument. If you have audio files organized by note, Tone.Sampler will pitch-shift the samples to fill in the gaps between notes. So for example, if you only have every third note on a piano sampled, you could turn that into a full piano sample.
Unlike the other synths, Tone.Sampler is polyphonic, so it doesn't need to be wrapped in Tone.PolySynth.
const sampler = new Tone.Sampler({
  urls: {
    C4: "C4.mp3",
    "D#4": "Ds4.mp3",
    "F#4": "Fs4.mp3",
    A4: "A4.mp3",
  },
  release: 1,
  baseUrl: "https://tonejs.github.io/audio/salamander/",
}).toDestination();
Tone.loaded().then(() => {
  sampler.triggerAttackRelease(["Eb4", "G4", "Bb4"], 4);
});
In the above examples, the sources were always connected directly to the Destination, but the output of the synth could also be routed through one (or more) effects before going to the speakers.
const player = new Tone.Player({
  url: "https://tonejs.github.io/audio/berklee/gurgling_theremin_1.mp3",
  loop: true,
  autostart: true,
});
//create a distortion effect
const distortion = new Tone.Distortion(0.4).toDestination();
//connect a player to the distortion
player.connect(distortion);
The connection routing is flexible: connections can run serially or in parallel (a serial sketch follows the parallel example below).
const player = new Tone.Player({
  url: "https://tonejs.github.io/audio/drum-samples/loops/ominous.mp3",
  autostart: true,
});
const filter = new Tone.Filter(400, "lowpass").toDestination();
const feedbackDelay = new Tone.FeedbackDelay(0.125, 0.5).toDestination();
// connect the player to the feedback delay and filter in parallel
player.connect(filter);
player.connect(feedbackDelay);
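For a serial connection, nodes can also be chained one after another. A sketch of the same kind of source and effects connected in series using chain():
// connect the player through the filter, then the delay, then the speakers
const serialPlayer = new Tone.Player({
  url: "https://tonejs.github.io/audio/drum-samples/loops/ominous.mp3",
  autostart: true,
});
const serialFilter = new Tone.Filter(400, "lowpass");
const serialDelay = new Tone.FeedbackDelay(0.125, 0.5);
serialPlayer.chain(serialFilter, serialDelay, Tone.getDestination());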
Multiple nodes can be connected to the same input, enabling sources to share effects. Tone.Gain is a useful utility node for creating complex routing.
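A sketch of that idea (the bus and reverb here are illustrative, not taken from the examples above): two synths feed a single Gain node, which acts as a shared bus into one effect:
const reverb = new Tone.Reverb(3).toDestination();
// one Gain node acts as a shared bus with its own level control
const bus = new Tone.Gain(0.8).connect(reverb);
const fmSynth = new Tone.FMSynth().connect(bus);
const amSynth = new Tone.AMSynth().connect(bus);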
Like the underlying Web Audio API, Tone.js is built with audio-rate signal control over nearly everything. This is a powerful feature which allows for sample-accurate synchronization and scheduling of parameters.
Signal properties have a few built-in methods for creating automation curves. For example, the frequency parameter on Oscillator is a Signal, so you can create a smooth ramp from one frequency to another.
const osc = new Tone.Oscillator().toDestination();
// start at "C4"
osc.frequency.value = "C4";
// ramp to "C2" over 2 seconds
osc.frequency.rampTo("C2", 2);
// start the oscillator for 2 seconds
osc.start().stop("+3");
Tone.js creates an AudioContext when it loads and shims it for maximum browser compatibility using standardized-audio-context. The AudioContext can be accessed at Tone.getContext(), or you can set your own AudioContext using Tone.setContext(audioContext).
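A small sketch (the latencyHint option is just an illustration of a context you might supply yourself):
// inspect the context Tone.js created
console.log(Tone.getContext().sampleRate);
// or hand Tone.js a pre-existing AudioContext before creating any nodes
// Tone.setContext(new AudioContext({ latencyHint: "playback" }));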
To use MIDI files, you'll first need to convert them into a JSON format which Tone.js can understand using Midi.
Tone.js makes extensive use of the native Web Audio Nodes such as the GainNode and WaveShaperNode for all signal processing, which enables Tone.js to work well on both desktop and mobile browsers.
This wiki article has some best-practice suggestions related to performance.
Tone.js runs an extensive test suite using mocha and chai with nearly 100% coverage. Passing builds on the 'dev' branch are published on npm as tone@next.
There are many ways to contribute to Tone.js. Check out this wiki if you're interested.
If you have questions (or answers) that are not necessarily bugs/issues, please post them to the forum.