BinauralFIR node

Processing audio node which spatializes an incoming audio stream in three-dimensional space for binaural audio.

The binauralFIR node provides binaural listening to the user in three simple steps. The novelty of this library is that it lets you use your own HRTF dataset. The library can be used as a regular AudioNode inside the Web Audio API: you connect native nodes to the binauralFIR node by calling their connect method with binauralFIR.input as the destination:

nativeNode.connect(binauralFIR.input);
binauralFIR.connect(audioContext.destination);
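
For context, here is a slightly fuller wiring sketch. The require call and the BinauralFIR constructor shown here are assumptions (check the package's actual entry point and options); the .input property and connect() method are the documented ones used above.

const BinauralFIR = require('binauralfir'); // assumed entry point; may differ

const audioContext = new AudioContext();

// Any native Web Audio node can act as the source; an oscillator is used here.
const oscillator = audioContext.createOscillator();

// Hypothetical construction; the actual constructor signature may differ.
const binauralFIR = new BinauralFIR({ audioContext: audioContext });

// Route the signal: native node -> binauralFIR.input -> destination.
oscillator.connect(binauralFIR.input);
binauralFIR.connect(audioContext.destination);

oscillator.start();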

An example HRTF dataset, provided by IRCAM, is included in the /example/snd/complete_hrtfs.js file.

Example

A working demo for this module is available online and in the examples folder.

HRTF dataset format

Since this library allows you to use your own HRTF dataset, any dataset you provide must follow this format:

Data         Description
azimuth      Azimuth in degrees: from 0 to -180 for a source on your left, and from 0 to 180 for a source on your right
distance     Distance in meters
elevation    Elevation in degrees: from 0 to 90 for a source above your head, 0 for a source in front of your head, and from 0 to -90 for a source below your head
buffer       AudioBuffer representing the decoded audio data. An audio file can be decoded with the buffer-loader library (https://github.com/Ircam-RnD/buffer-loader)

This data must be provided inside an Array of Objects, like this example:

[
  {
    'azimuth': 0,
    'distance': 1,
    'elevation': 0,
    'buffer': AudioBuffer
  },
  {
    'azimuth': 5,
    'distance': 1,
    'elevation': 0,
    'buffer': AudioBuffer
  },
  {
    'azimuth': 5,
    'distance': 1,
    'elevation': 5,
    'buffer': AudioBuffer
  }
]
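
One way to build such an array is sketched below using the standard Web Audio decodeAudioData API instead of the buffer-loader helper. The file paths and per-file position metadata are hypothetical placeholders.

const audioContext = new AudioContext();

// Hypothetical file list; each entry pairs a source position with an impulse-response file.
const hrtfFiles = [
  { azimuth: 0, distance: 1, elevation: 0, url: 'snd/hrtf_az0_el0.wav' },
  { azimuth: 5, distance: 1, elevation: 0, url: 'snd/hrtf_az5_el0.wav' }
];

// Fetch and decode each file, producing entries in the format described above.
function loadHRTFDataset(files) {
  return Promise.all(files.map(function (entry) {
    return fetch(entry.url)
      .then(function (response) { return response.arrayBuffer(); })
      .then(function (data) { return audioContext.decodeAudioData(data); })
      .then(function (buffer) {
        return {
          azimuth: entry.azimuth,
          distance: entry.distance,
          elevation: entry.elevation,
          buffer: buffer
        };
      });
  }));
}

The resulting array can then be assigned to binauralFIR.HRTFDataset, as described in the API section below.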

API

The binauralFIR object exposes the following API:

Method                                                   Description
binauralFIR.connect()                                    Connects the binauralFIR node to the Web Audio graph
binauralFIR.disconnect()                                 Disconnects the binauralFIR node from the Web Audio graph
binauralFIR.HRTFDataset                                  Sets the HRTF dataset to be used with the virtual source
binauralFIR.setPosition(azimuth, elevation, distance)    Sets the position of the virtual source
binauralFIR.getPosition()                                Gets the current position of the virtual source
binauralFIR.setCrossfadeDuration(duration)               Sets the crossfade duration in milliseconds
binauralFIR.getCrossfadeDuration()                       Gets the crossfade duration in milliseconds
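
A short usage sketch of these methods, assuming binauralFIR has already been constructed and hrtfDataset is an array in the format described above:

// Assign a custom HRTF dataset (an array in the format described above).
binauralFIR.HRTFDataset = hrtfDataset;

// Place the virtual source 45 degrees to the right, at ear level, 1 meter away.
binauralFIR.setPosition(45, 0, 1);

// Crossfade between filters over 20 milliseconds when the position changes.
binauralFIR.setCrossfadeDuration(20);

console.log(binauralFIR.getPosition());
console.log(binauralFIR.getCrossfadeDuration());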

License

This module is released under the BSD-3-Clause license.

Acknowledgments

This code has been developed by the Acoustic and Cognitive Spaces and the Analysis of Musical Practices IRCAM research teams. It is also part of the WAVE project (http://wave.ircam.fr), funded by ANR (the French National Research Agency), ContInt program, 2012-2015.
