Node Core Audio
A C++ extension for node.js that gives JavaScript access to audio buffers and basic audio processing functionality.
Right now, it's basically a node.js binding for PortAudio.
Installation
npm install node-core-audio
Basic Usage
Below is the most basic use of the audio engine. We create a new instance of
node-core-audio, and then give it our processing function. The audio engine
will call the audio callback whenever it needs an output buffer to send to
the sound card.
var coreAudio = require("node-core-audio");
var engine = coreAudio.createNewAudioEngine();
function processAudio( inputBuffer ) {
console.log( "%d channels", inputBuffer.length );
console.log( "Channel 0 has %d samples", inputBuffer[0].length );
return inputBuffer;
}
engine.addAudioCallback( processAudio );
// Alternatively, you can read/write samples to the sound card manually
var engine = coreAudio.createNewAudioEngine();
var buffer = engine.read();
for( var iSample=0; iSample<buffer[0].length; ++iSample )
buffer[0][iSample] = 0.0;
engine.write( buffer );
Important! Processing Thread
When you are writing code inside your audio callback, you are operating on
the processing thread of the application. In this high-priority environment you
should think about performance as much as possible. Allocations and other
complex operations are possible, but dangerous.
IF YOU TAKE TOO LONG TO RETURN A BUFFER TO THE SOUND CARD, YOU WILL HAVE AUDIO DROPOUTS
The basic principle is that you should have everything ready to go before you enter
the processing function. Buffers, objects, and functions should be created in a constructor or setup function outside of the audio callback whenever possible. The
examples in this readme are not necessarily good practice as far as performance is concerned.
The callback is only called once all buffers have been processed by the sound card.
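As a sketch of that principle, the callback below reuses buffers allocated ahead of time instead of allocating inside the processing loop. The channel count, frame count, and tone parameters are assumptions matching the defaults listed below; the engine hookup is shown in a comment.

```javascript
// Pre-allocate everything outside the audio callback: two output
// channels of 256 samples each (matching the default options).
var NUM_CHANNELS = 2;
var FRAMES_PER_BUFFER = 256;

var outputBuffer = [];
for( var iChannel = 0; iChannel < NUM_CHANNELS; ++iChannel ) {
    outputBuffer.push( new Float32Array(FRAMES_PER_BUFFER) );
}

// The callback only fills the pre-allocated buffer - no allocations,
// no logging, nothing that could stall the processing thread.
var phase = 0;
function processSine( inputBuffer ) {
    for( var iSample = 0; iSample < FRAMES_PER_BUFFER; ++iSample ) {
        var sample = Math.sin( phase );
        phase += 2 * Math.PI * 440 / 44100;     // 440 Hz at 44.1 kHz
        for( var iChannel = 0; iChannel < NUM_CHANNELS; ++iChannel )
            outputBuffer[iChannel][iSample] = sample;
    }
    return outputBuffer;
}

// Hook it up as shown above:
// engine.addAudioCallback( processSine );
```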
Audio Engine Options
- sampleRate [default 44100]
- Sample rate - number of samples per second in the audio stream
- sampleFormat [default sampleFormatFloat32]
- Bit depth - Number of bits used to represent sample values
- formats are sampleFormatFloat32, sampleFormatInt32, sampleFormatInt24, sampleFormatInt16, sampleFormatInt8, sampleFormatUInt8.
- framesPerBuffer [default 256]
- Buffer length - Number of samples per buffer
- interleaved [default false]
- Interleaved / Deinterleaved - determines whether samples are delivered as a two-dimensional array, buffer[channel][sample] (deinterleaved), or as a single buffer with samples from alternating channels (interleaved)
- inputChannels [default 2]
- Input channels - number of input channels
- outputChannels [default 2]
- Output channels - number of output channels
- inputDevice [default Pa_GetDefaultInputDevice]
- Input device - id of the input device
- outputDevice [default Pa_GetDefaultOutputDevice]
- Output device - id of the output device
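Put together, the options can be passed to the engine in one call. The sketch below simply restates the defaults from the list above; whether every option may be changed after engine creation, and where the sampleFormat constants are exported, are assumptions.

```javascript
// Restating the defaults explicitly (values mirror the list above).
var options = {
    sampleRate:      44100,
    framesPerBuffer: 256,
    interleaved:     false,
    inputChannels:   2,
    outputChannels:  2
    // sampleFormat, inputDevice, and outputDevice can be set the same
    // way, e.g. sampleFormat: coreAudio.sampleFormatFloat32
    // (assumed export location for the format constants)
};

// Apply them to an existing engine:
// engine.setOptions( options );
```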
API
First things first
var coreAudio = require("node-core-audio");
Create an audio processing function
function processAudio( inputBuffer ) {
console.log( inputBuffer[0][0] );
}
Initialize the audio engine and setup the processing loop
var engine = coreAudio.createNewAudioEngine();
engine.addAudioCallback( processAudio );
General functionality
bool engine.isActive();
engine.setOptions({
inputChannels: 2
});
object engine.getOptions();
array engine.read();
bool engine.write(array input);
string engine.getDeviceName( int inputDeviceIndex );
int engine.getNumDevices();
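The two device query functions can be combined to enumerate every device the engine can see. A minimal sketch, assuming getDeviceName accepts any index below getNumDevices; the helper name listDevices is hypothetical, and takes the engine as a parameter so it is easy to exercise against a stub.

```javascript
// Collect the name of every audio device the engine reports.
function listDevices( engine ) {
    var names = [];
    var numDevices = engine.getNumDevices();
    for( var iDevice = 0; iDevice < numDevices; ++iDevice )
        names.push( engine.getDeviceName(iDevice) );
    return names;
}

// Usage:
// var engine = coreAudio.createNewAudioEngine();
// console.log( listDevices(engine) );
```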
Known Issues / TODO
- Add FFTW to the C++ extension, so you can get fast FFTs from JavaScript, and also register for the FFT of incoming audio rather than the audio itself
- Add support for streaming audio over sockets
License
MIT - See LICENSE file.
Copyright Mike Vegeto, 2013