web-audio-api-player
🎶 An open source JavaScript (TypeScript) audio player for the browser, built using the Web Audio API with support for HTML5 audio elements
This player can be added to any JavaScript project and extended in many ways. It is not bound to a specific UI; it is just a core that can be used to create any kind of player you can imagine, and it can even be used to play sound in video games or for any other sound / song playing needs you may have.
Want to help improve the documentation or contribute to this project by improving and fixing it? Then first check out the TODOs section below; maybe there is something in the list you want to help with. Any contribution, even things not listed on the TODO list, is of course welcome. To get started, check out the "contributing" section below.
web audio API player is published to the npm registry, so you can install it with either npm or yarn
with npm:
npm i web-audio-api-player
or with yarn:
yarn add web-audio-api-player
The best way to get started is to check out the examples folder. Have a look at the source of the simple player example if you want to see how to build a fully working player with the UI elements of a basic audio player.
In this chapter I will try to explain how to set up the most important parts of a player, but I also recommend having a look at the simple player example, which is an HTML / JavaScript client with an express.js server that demonstrates how to build a UI. You can explore and run the example locally if you want to know more about how to use this package and see a working example.
After having installed the package you need to import it, like so:
import { PlayerCore, ICoreOptions, ISoundAttributes } from 'web-audio-api-player'
What you must import is the PlayerCore; the other two, ICoreOptions and ISoundAttributes, are optional. I import those two because I write my code in TypeScript and want the types for the player options and the sound / song attributes.
first we define some options for our player core:
const options: ICoreOptions = {
soundsBaseUrl: '/assets/songs/',
loopQueue: true,
}
Note: soundsBaseUrl is the first option we set; it tells the player the base URL for the song sources (for example https://www.example.com/songs/), or, if the player and the songs are hosted on the same domain, a path is enough. loopQueue is set to false by default; I enable it here, which means that at the end of the queue (a playlist) the player won't stop but will instead go back to the first song and play that song again.
Note 2: for a full list of all available player options check out the player options chapter
next we initialize the player using our options object and get a player instance in return:
const player = new PlayerCore(options)
now we are going to create our first song:
const firstSongAttributes: ISoundAttributes = {
source: [
{
url: 'mp3/song1.mp3',
codec: 'mp3',
},
{
url: 'ogg/song2.ogg',
codec: 'ogg',
isPreferred: true,
}
],
id: 1,
}
The only two mandatory attributes are the source array and the sound id. The source array only needs one entry, but for demonstration purposes I added two here: the first one is the song encoded as mp3, and the second one is the same song but encoded using the ogg codec. A third source option is isPreferred; it tells the player that if the browser supports both codecs, it should prefer ogg over mp3. The id can be any numeric value; it can be useful if you have additional song data stored somewhere, for example the related band name or the song's music genre stored in a database, and want to display that data in the UI while the song is being played.
Note: for a full list of all available sound attributes check out the sound attributes chapter
after we have set the attributes for our first song we pass these attributes to the player queue:
const firstSong = player.addSoundToQueue({ soundAttributes: firstSongAttributes })
If you want, you can add callbacks via the song's attributes. These callbacks get triggered by the player when an internal event happens, so your code can adapt the UI based on them. I'm going to use those callbacks with a console.log inside to demonstrate their use as I add a second song to the queue:
const secondSongAttributes: ISoundAttributes = {
source: [
{
url: 'mp3/song2.mp3',
codec: 'mp3',
},
{
url: 'ogg/song2.ogg',
codec: 'ogg',
isPreferred: true,
}
],
id: 2,
onLoading: (loadingProgress, maximumValue, currentValue) => {
console.log('onLoading (loadingProgress, maximumValue, currentValue): ', loadingProgress, maximumValue, currentValue)
},
onPlaying: (playingPercentage, duration, playTime) => {
console.log('onPlaying (playingPercentage, duration, playTime): ', playingPercentage, duration, playTime)
},
onStarted: (playTimeOffset) => {
console.log('onStarted (playTimeOffset): ', playTimeOffset)
},
onPaused: (playTime) => {
console.log('onPaused (playTime): ', playTime)
},
onStopped: (playTime) => {
console.log('onStopped (playTime): ', playTime)
},
onResumed: (playTime) => {
console.log('onResumed (playTime): ', playTime)
},
onEnded: (willPlayNext) => {
console.log('onEnded (willPlayNext): ', willPlayNext)
},
onSeeking: (seekingPercentage, duration, playTime) => {
console.log('onSeeking (seekingPercentage, duration, playTime): ', seekingPercentage, duration, playTime)
},
}
after we have set the attributes for our second song we pass these attributes to the player queue too, which means we now have a queue with two songs:
const secondSong = player.addSoundToQueue({ soundAttributes: secondSongAttributes })
some player options can be changed even after initialization, for example if you want to adjust the volume, you could do this:
let volume = 90
player.setVolume(volume)
Or if you want the player to be muted when the user's browser goes into the background, you can still enable that option:
player.setVisibilityAutoMute(true)
Or if you want the queue to loop when the last song in the player queue (your playlist) finishes playing, you can enable / disable it like this:
player.setLoopQueue(true)
Note: all of these setters have a corresponding getter, so if you want to know what the current value is, for example the current volume:
const volume = player.getVolume()
Now it is time to build your player UI; if you want a good example of such a UI, check out the simple player example.
The first thing we need is a play button (of course you can use any element you want; you just need to attach an onclick handler to it). In this example we will use an HTML button element:
<button id="playButton" class="button">
<span id="play-icon">></span>
</button>
Then you listen for the onclick; when it gets triggered you tell the player to start playing (if nothing is specified it will play the first song in the queue by default):
const playButton = document.getElementById('playButton');
playButton.addEventListener('click', (event) => {
event.preventDefault();
player.play()
})
Here is another example, from a React component I use for my blog chris.lu (source on GitHub):
<button onClick={onClickPlayHandler} className={styles.play}>
<FontAwesomeIcon icon={faPlay} size="2x" color='white' />
</button>
and here is the click handler I have in my react component, which tells the player to play the first song from the queue:
const onClickPlayHandler = () => {
player.play()
}
One last tip: when you want to change the position of the song, for example when someone uses the range slider of your player UI, it is best not to stop (or pause) the song and then use play() to resume at a certain position. Instead, the easiest way is to call the player's setPosition method:
const onChangePositionHandler = (positionInPercent: number): void => {
player.setPosition(positionInPercent)
}
Note: if you use TypeScript, import the ICoreOptions interface along with the PlayerCore; this makes it a lot easier to see what player options are available and what the type of each value is.
Note: if you use TypeScript, import the ISoundAttributes interface along with the PlayerCore; this makes it a lot easier to see what sound attributes are available and what the type of each value is.
Note: all player functions return a promise. I recommend using a try catch and awaiting the promise, or calling promise.catch, to handle eventual errors thrown by the player, like so:
async function foo(): Promise<void> {
try {
await player.play()
} catch (error) {
console.error(error)
}
}
foo()
or like so:
function bar(): void {
player.play().catch((error) => {
console.error(error)
})
}
bar()
IPlayOptions {
whichSound: accepted values are a sound ID (number or string) OR one of these 4 constants: PlayerCore.PLAY_SOUND_NEXT, PlayerCore.PLAY_SOUND_PREVIOUS, PlayerCore.PLAY_SOUND_FIRST, PlayerCore.PLAY_SOUND_LAST
playTimeOffset: the time (in seconds) at which you want the sound to start; usually a song starts at zero, but if you set this it will start at playTimeOffset
}
Note: the playTimeOffset (if set) will always get honored, so if you want to resume after a pause, don't set the playTimeOffset. If playTimeOffset is set, the song will start at the specified position; if no playTimeOffset is set, the player will use the song's playTime value, which is 0 for a song that gets played for the first time, or a value > 0 for a song that was paused.
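As a quick sketch of how these play options could be combined (the sound id 2 and the 30-second offset are just assumed example values, matching the songs queued earlier):

```typescript
// play the second song from the queue, starting 30 seconds in
player.play({ whichSound: 2, playTimeOffset: 30 })

// or jump to the last sound in the queue; without a playTimeOffset it
// starts at 0, or at its stored playTime if it was paused earlier
player.play({ whichSound: PlayerCore.PLAY_SOUND_LAST })
```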
const mySoundAttributes = {
source: [{ url: 'https://example.com/mySound.mp3', codec: 'mp3' }],
}
player.addSoundToQueue({ soundAttributes: mySoundAttributes })
Note: You might have read (like I did) a lot of outdated web audio articles stating that the audio element lacks many features the Web Audio API has, and that it is hence not suited for complex audio software, or for games where you might want to add effects and filters to sounds. This is not true anymore, and especially not true for this library. Yes, the audio element used standalone lacks a lot of features. But this library combines the audio element with the Web Audio API, meaning that no matter which mode you choose, the sound will be converted to an AudioSourceNode.
If you use this library, the difference is only in how the sound (song) gets retrieved:
PLAYER_MODE_AJAX will use an XMLHttpRequest and the source will be an AudioBufferSourceNode
PLAYER_MODE_AUDIO will use the HTML audio element; the player will then use the createMediaElementSource method of the AudioContext internally to create a MediaElementAudioSourceNode
It depends on what you intend to build.
If you build something like a music player, it is probably best to use PLAYER_MODE_AUDIO, as you might want to start playing the sound (song) as quickly as possible and don't care if it has fully loaded. In this mode the song will start playing as soon as enough data has been buffered, even though the song has not been fully loaded yet (the rest will get fetched from the server in the background while playing). To display the time range(s) that have been loaded you could, for example, use a 2D canvas element.
If you build something with a lot of (small) sounds that get (pre-)loaded and maybe cached, but played later, some time after they have finished loading, use PLAYER_MODE_AJAX. Its progress is easier to understand, because when the loading progress of a sound has reached 100% you know it can be played. To display the loading progress a simple HTML progress element is enough.
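Picking a mode could then look something like the following sketch. I'm assuming here that the mode is set via a loadPlayerMode option at initialization; check the player options chapter for the exact option name in your version:

```typescript
// music player: start playback as soon as enough data is buffered
const musicPlayer = new PlayerCore({
    soundsBaseUrl: '/assets/songs/',
    loadPlayerMode: PlayerCore.PLAYER_MODE_AUDIO,
})

// game / UI sound effects: fully (pre-)load each sound, play it later
const fxPlayer = new PlayerCore({
    soundsBaseUrl: '/assets/sounds/',
    loadPlayerMode: PlayerCore.PLAYER_MODE_AJAX,
})
```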
You can inject your own AudioContext using the audioContext player option, if you want to reuse an existing one your app has already created.
This is especially useful if you want to add your own nodes to the audio graph (audio routing graph). For example, you may want to add an AnalyserNode, a PannerNode, a DelayNode or any other node that is available in the Web Audio API.
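A sketch of what injecting an existing context could look like (assuming the option is simply called audioContext, as described above):

```typescript
// reuse an AudioContext the app already created, so that custom nodes
// (for example an AnalyserNode for a visualizer) share the same graph
const existingContext = new AudioContext()
const analyser = existingContext.createAnalyser()

const playerWithSharedContext = new PlayerCore({
    soundsBaseUrl: '/assets/songs/',
    audioContext: existingContext,
})
```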
install the latest nodejs (if you haven't already): nodejs
install or update to the latest git version: git scm downloads. During installation, at the step "choosing the default editor used by Git", if like me you are using Visual Studio Code you might want to choose the option "use visual studio code as Git's default editor". Also, if like me you are on Windows, at the step "adjusting your PATH environment" make sure the second radio button option, "git from the command line and also from 3rd-party software", is selected, to ensure git gets added to the Windows PATH; this will allow you to use git with any command line tool, like Windows PowerShell for example
git clone this repository to get a local copy
git clone git@github.com:chrisweb/web-audio-api-player.git
open your favorite command line tool and go to the root directory of this repository
update npm to latest version
npm install npm@latest -g
install the dependencies
npm i
to build the distributions
npm run build
in development you can use watch to rebuild every time you edit a typescript file
npm run watch
to lint the typescript files
npm run lint
check out the releases page on github
If you wish to contribute to this project, please first open a ticket in the GitHub issues page of this project and explain briefly what fix or improvement you want to provide (remember the GitHub ticket number; you will need it for the commit message later on). If you want to help but are not sure what would be useful, check out the TODOs list.
Go to the GitHub page of this project and hit the fork button.
Follow the instructions in the previous section "development: build", but instead of cloning this project's repository, clone your own fork of the project to get a local copy that you can edit in your IDE (e.g. VSCode):
git clone https://github.com/YOUR_GITHUB_USER/web-audio-api-player.git
When you are done coding, commit your local changes. If your commit is related to a ticket, start your commit message with "#TICKET_NUMBER"; this will "link" the commit to the ticket:
git commit -m "#TICKET_NUMBER commit message"
now go to the github page of your fork and hit the pull request button
These are things I intend to add some day. If you want to help but are not sure what to do, check out this list and just pick an idea you like or that you think is most useful.
if you are interested in helping out 😊 by working on one of the following TODOs, please start by reading the "contributing" chapter above
const fileInput = document.querySelector('input[type="file"]');
fileInput.addEventListener('change', (event) => {
// read the selected file into an ArrayBuffer and hand it to the player
const reader = new FileReader();
reader.onload = () => {
playerCore._decodeSound(reader.result);
};
reader.readAsArrayBuffer(event.target.files[0]);
}, false);
As of 25.05.2019 the Web Audio API typings seem to be included in lib.d.ts, so I removed them from package.json:
"dependencies": {
"@types/webaudioapi": "0.0.27"
},
Unfortunately (as of 06/2019) the window type definition did not include AudioContext.
This is fixed; as of now (20.02.2023) the AudioContext is defined properly.