web-audio-api-player
🎶 An open source JavaScript (TypeScript) audio player for the browser, built using the Web Audio API, with support for HTML5 audio elements
This player can be added to any JavaScript project and extended in many ways. It is not bound to a specific UI; it is just a core that can be used to create any kind of player you can imagine
To get started I recommend checking out the guide to building a simple audio player UI in this readme, or head straight to the player options and functions documentation, but also have a look at the code of the working simple player example that is part of this repository
If you want to help improve the documentation or contribute to this project by improving and fixing it, then first check out the TODOs section below, maybe there is something in the list you want to help with
Any contribution, even things not listed on the TODO list, is of course welcome. To get started check out the "contributing" section below
If you found a bug or want to request a new feature please go to the issues page, and if you have a question please use the discussions page
web audio API player is published to the npm registry so you can install it with either npm or yarn
with npm:
npm i web-audio-api-player
or with yarn:
yarn add web-audio-api-player
the best way to get started is to check out the examples folder; check out the source of the simple player example if you want to see how to build a fully working player with the UI elements of a basic audio player
in this chapter I will try to explain how to set up the most important parts of a player, but I also recommend you have a look at the simple player example, which is an HTML / JavaScript client with an express.js server that demonstrates how to build a UI; you can explore and run the example locally if you want to know more about how to use this package and see a working example
after having installed the package you need to import it, like so:
import { PlayerCore, ICoreOptions, ISoundAttributes } from 'web-audio-api-player'
what you must import is the PlayerCore; the other two, ICoreOptions and ISoundAttributes, are optional. I import those two because I write my code using TypeScript and want the types for the player and sound / song options
first we define some options for our player core:
const options: ICoreOptions = {
soundsBaseUrl: '/assets/songs/',
loopQueue: true,
}
Note: soundsBaseUrl is the first option we set; it tells the player what the full URL for the songs source is (for example https://www.example.com/songs/), or if the player and songs are hosted on the same domain the path alone is enough. loopQueue is set to false by default; I enable it here, which means that at the end of a queue (a playlist) the player won't stop but will instead go back to the first song and play that song
Note 2: for a full list of all available player options check out the player options chapter
next we initialize the player using our options object and get a player instance in return:
const player = new PlayerCore(options)
now we are going to create our first song:
const firstSongAttributes: ISoundAttributes = {
source: [
{
url: 'mp3/song1.mp3',
codec: 'mp3',
},
{
url: 'ogg/song1.ogg',
codec: 'ogg',
isPreferred: true,
}
],
id: 1,
}
the only two attributes that are mandatory are the source array and the sound id. The source only needs one entry, but for demonstration purposes I added two here: the first one is the song encoded as an mp3 and the second source is the same song, this time encoded using the ogg codec. A third source option is isPreferred, which tells the player that if the browser has support for both codecs it should prefer ogg over mp3. The id can be any numeric value; it is useful if you have additional song data stored somewhere, for example the related band name, the song's music genre and so on, stored in a database, and you want to display that data in the UI while the song is being played
Note: for a full list of all available sound attributes check out the sound attributes chapter
after we have set the attributes for our first song we pass these attributes to the player queue:
const firstSong = player.addSoundToQueue({ soundAttributes: firstSongAttributes })
if you want to, you can add callbacks via the song's attributes; these callbacks get triggered by the player when an internal event happens, so that your code can adapt the UI based on them. I'm going to use those callbacks with a console.log inside to demonstrate their use as I add a second song to the queue:
const secondSongAttributes: ISoundAttributes = {
source: [
{
url: 'mp3/song2.mp3',
codec: 'mp3',
},
{
url: 'ogg/song2.ogg',
codec: 'ogg',
isPreferred: true,
}
],
id: 2,
onLoading: (loadingProgress, maximumValue, currentValue) => {
console.log('onLoading (loadingProgress, maximumValue, currentValue): ', loadingProgress, maximumValue, currentValue)
},
onPlaying: (playingPercentage, duration, playTime) => {
console.log('onPlaying (playingPercentage, duration, playTime): ', playingPercentage, duration, playTime)
},
onStarted: (playTimeOffset) => {
console.log('onStarted (playTimeOffset): ', playTimeOffset)
},
onPaused: (playTime) => {
console.log('onPaused (playTime): ', playTime)
},
onStopped: (playTime) => {
console.log('onStopped (playTime): ', playTime)
},
onResumed: (playTime) => {
console.log('onResumed (playTime): ', playTime)
},
onEnded: (willPlayNext) => {
console.log('onEnded (willPlayNext): ', willPlayNext)
},
onSeeking: (seekingPercentage, duration, playTime) => {
console.log('onSeeking (seekingPercentage, duration, playTime): ', seekingPercentage, duration, playTime)
},
}
after we have set the attributes for our second song we pass these attributes to the player queue too, which means we now have a queue with two songs:
const secondSong = player.addSoundToQueue({ soundAttributes: secondSongAttributes })
some player options can be changed even after initialization, for example if you want to adjust the volume, you could do this:
let volume = 90
player.setVolume(volume)
or if you want the player to be muted when the user's browser goes into the background, you can still enable that option:
player.setVisibilityAutoMute(true)
or if you want the queue to loop when the last song in the player queue (your playlist) finishes playing, you can enable / disable it like this:
player.setLoopQueue(true)
Note: all of these setters have a corresponding getter, so if you want to know what the current value is, for example if you want to know what the current volume is set to:
const volume = player.getVolume()
now it is time to build your player UI; if you want a good example of such a UI check out the simple player example
the first thing we need is a play button (of course you can use any element you want, you just need to attach an onclick to it), in this example we will use an HTML button element:
<button id="playButton" class="button">
<span id="play-icon">></span>
</button>
and then you listen for the click event; when it gets triggered you tell the player to start playing (if nothing is defined it will play the first song in the queue by default):
const playButton = document.getElementById('playButton');
playButton.addEventListener('click', (event) => {
event.preventDefault();
player.play()
})
here is another example from a React component I use for my blog chris.lu (source on GitHub):
<button onClick={onClickPlayHandler} className={styles.play}>
<FontAwesomeIcon icon={faPlay} size="2x" color='white' />
</button>
and here is the click handler I have in my react component, which tells the player to play the first song from the queue:
const onClickPlayHandler = () => {
player.play()
}
One last tip: when you want to change the position of the song, for example when someone uses the range slider of your player UI, it is best not to stop (or pause) the song and then use play() to resume at a certain position; instead, the easiest way is to just call the setPosition method of the player:
const onChangePositionHandler = (positionInPercent: number): void => {
player.setPosition(positionInPercent)
}
Note: if you use TypeScript, import the ICoreOptions interface along with the PlayerCore, this makes it a lot easier to see what player options are available and what the type of each value is
Note: all player functions return a promise. I recommend using a try catch and awaiting the promise, or calling promise.catch, to handle eventual errors thrown by the player, like so:
async function foo(): Promise<void> {
try {
await player.play()
} catch (error) {
console.error(error)
}
}
foo()
or like so:
function bar(): void {
player.play().catch((error) => {
console.error(error)
})
}
bar()
IPlayOptions {
whichSound: accepted values: song ID (number or string) OR one of these 4 constants: PlayerCore.PLAY_SOUND_NEXT, PlayerCore.PLAY_SOUND_PREVIOUS, PlayerCore.PLAY_SOUND_FIRST, PlayerCore.PLAY_SOUND_LAST
playTimeOffset: the time at which you want the sound to start (in seconds), usually the song would start at zero but if you set this it will start at playTimeOffset
}
Note: the playTimeOffset (if set) will always get honored, so if you want to resume after a pause don't set the playTimeOffset; if playTimeOffset is set the song will start at the specified position, and if no playTimeOffset is set the player will use the song's playTime value, which is 0 for a song that gets played for the first time or a value > 0 for a song that was paused
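for example, here is a minimal sketch of calling play() with these options (assuming play() accepts an IPlayOptions object as described above; the song id used here is the one from the queue we built earlier):
// play the next sound in the queue
player.play({ whichSound: PlayerCore.PLAY_SOUND_NEXT })
// play the sound with id 2, starting 30 seconds into the song
player.play({ whichSound: 2, playTimeOffset: 30 })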
const mySoundAttributes = {
source: [{ url: 'https://example.com/mySound.mp3', codec: 'mp3' }],
}
player.addSoundToQueue({ soundAttributes: mySoundAttributes })
Note: if you use TypeScript, import the ISoundAttributes interface along with the PlayerCore, this makes it a lot easier to see what sound attributes are available and what the type of each value is
sound options:
sound callbacks:
All mobile browsers prevent playing sounds (songs) if no user gesture has happened yet. This means that on mobile you can NOT play sounds (songs) programmatically (this is also the reason why the autoplay attribute on an audio element does not auto play a song on mobile and also the reason videos will only autoplay if they are muted)
Note: If the user clicks on a play button and you call player.play() then audio will play just fine; this chapter is about audio not playing when calling player.play() before the user has interacted with the page (app)
If you attempt to play a sound (song) on mobile programmatically (before a user interaction) then the mobile browser will throw a NotAllowedError error:
The request is not allowed by the user agent or the platform in the current context, possibly because the user denied permission (No legacy code value and constant name).
Note: iOS (iPhone) and Android mobile devices will throw that error; in the past iPad tablets would throw an error too, however newer versions are considered a desktop device and do not throw an error
There is however a trick to unlock audio on mobile: listen for events like a user clicking on something in your page and use that interaction to play a silent sound for a brief moment. After that, audio is unlocked and you will be able to trigger the play function at any time programmatically to play the song you want (even if it is not a direct action initiated by the user)
the web-audio-api-player has two options to unlock audio on mobile:
solution 1: there is a player option called unlockAudioOnFirstUserInteraction; set it to true when initializing the player and the player will add user interaction listeners to the HTML document. On the first user interaction the player catches, it will attempt to unlock audio; after audio is unlocked you will be able to call the player's play() function programmatically and it will not throw an error anymore
solution 2: there is a player function called manuallyUnlockAudio() that you can use to attempt to unlock audio on mobile. This function MUST be called inside an event handler that got triggered by a user interaction; events that you can use are for example "keydown" (excluding the Escape key and possibly some keys reserved by the browser or OS), "mousedown", "pointerdown" or "pointerup" (but only if the pointerType is "mouse") and "touchend"
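here is a minimal sketch of both solutions (the option and function names are the ones mentioned above, the rest of the setup matches the earlier examples):
// solution 1: let the player unlock audio on the first user interaction it catches
const player = new PlayerCore({
    soundsBaseUrl: '/assets/songs/',
    unlockAudioOnFirstUserInteraction: true,
})
// solution 2: call manuallyUnlockAudio() inside an event handler that got
// triggered by a user interaction, for example a "touchend" event
document.addEventListener('touchend', () => {
    player.manuallyUnlockAudio()
}, { once: true })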
Note: You might have read (like I did) a lot of outdated web audio articles which stated that the audio element lacks a lot of features the Web Audio API has, and that it is hence not suited to create complex audio software or, for example, to be used in games where you might want to add effects and filters to sounds. This is not true anymore, and especially not true for this library. Yes, the audio element used standalone lacks a lot of features. But this library combines the audio element with the Web Audio API, meaning that no matter what mode you choose, the sound will be converted to an AudioSourceNode.
If you use this library, you have two player modes you can choose from; the main difference is how the sound (song) gets loaded:
PLAYER_MODE_AJAX will use an XMLHttpRequest; the source will be an AudioBufferSourceNode
PLAYER_MODE_AUDIO will use the HTML audio element; the player will then use the createMediaElementSource method of the AudioContext internally to create a MediaElementAudioSourceNode
If you build something like a music player, it is probably best to use PLAYER_MODE_AUDIO, as you might want to start playing the sound (song) as quickly as possible and don't care if it has fully loaded. This mode is ideal for big files that don't get loaded all at once (streaming). The audio mode (via the audio element) has support for partial content (HTTP code 206), which means the song will start playing as soon as enough data has been buffered, even though the song has not been fully loaded yet (it will load more data from the server in the background as the song progresses). The loading progress callback will return a percentage which represents the amount of data that got loaded so far, which means it might not represent the loading state of the full song. If you want to display more accurately what parts of the song have been loaded (display the time range(s) that got loaded), I recommend using a 2D canvas element; to get the ranges that have been loaded, use the audioElement property of the sound (song.audioElement) and read its buffered value(s).
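here is a minimal sketch of reading those ranges (this assumes the sound object returned by addSoundToQueue exposes the underlying element via its audioElement property, as described above):
const audioElement = firstSong.audioElement
// buffered is a standard TimeRanges object, each entry is a start / end pair in seconds
for (let i = 0; i < audioElement.buffered.length; i++) {
    const start = audioElement.buffered.start(i)
    const end = audioElement.buffered.end(i)
    console.log('loaded range ' + i + ': from ' + start + 's to ' + end + 's')
}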
You can use the PLAYER_MODE_AJAX if, for example, you want to build something where it doesn't matter that the song will only play after it has been fully loaded. However, in this mode you can (pre-)load and maybe also cache sounds yourself (you can inject an array buffer that you loaded yourself (via an XMLHttpRequest or using fetch) or even an already decoded audio buffer) by setting sound.arrayBuffer or sound.audioBuffer. Use this mode if you prefer to have a smooth loading animation, because its loading progress callback is straightforward: when the loading progress callback gets triggered by the player, you can take the percentage value and pass it to a progress bar. To display the loading progress you could for example use an HTML progress element; you can find such an example in the simple player example.
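to pick a mode, set it in the player options when initializing the player; here is a minimal sketch (I call the option loadPlayerMode, which is an assumption on my part, so check the player options chapter for the exact option name):
const player = new PlayerCore({
    soundsBaseUrl: '/assets/songs/',
    // assumed option name, see the player options chapter
    loadPlayerMode: PlayerCore.PLAYER_MODE_AUDIO,
})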
You can inject your own AudioContext using the audioContext player option, if you want to reuse an existing one your app has already created
This is especially useful if you want to add your own nodes to the AudioGraph (audio routing graph). For example you may want to add an AnalyserNode, a PannerNode, a DelayNode or any other node that is available in the Web Audio API.
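here is a minimal sketch of injecting an existing AudioContext (how your own nodes then get connected to the player's audio graph depends on your setup, so this only shows the injection itself):
// reuse an AudioContext your app has already created
const myAudioContext = new AudioContext()
// for example an AnalyserNode you could later connect to the audio routing graph
const analyserNode = myAudioContext.createAnalyser()
const player = new PlayerCore({
    soundsBaseUrl: '/assets/songs/',
    audioContext: myAudioContext,
})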
install the latest Node.js (if you haven't already)
install or update to the latest git version (git scm downloads); during installation, at the step "choosing the default editor used by Git", if like me you are using Visual Studio Code you might want to choose the new option "use visual studio code as Git's default editor"; also, if like me you are on Windows, at the step "adjusting your PATH environment", ensure the second radio button option is selected ("git from the command line and also from 3rd-party software") so that git is added to the Windows PATH, which will allow you to use git with any command line tool, like Windows PowerShell for example
git clone this repository to get a local copy
git clone git@github.com:chrisweb/web-audio-api-player.git
open your favorite command line tool and go to the root directory of this repository
update npm to latest version
npm install npm@latest -g
install the dependencies
npm i
to build the distributions
npm run build
in development you can use watch to rebuild every time you edit a typescript file
npm run watch
to lint the typescript files
npm run lint
check out the releases page on github
if you wish to contribute to this project, please first open a ticket in the GitHub issues page of this project and explain briefly what fix or improvement you want to provide (remember the GitHub ticket number, you will need it for the commit message later on); if you want to help but are not sure what would be useful, then check out the TODOs list
go to the GitHub page of this project and hit the fork button
follow the instructions in the previous section "development: build", but instead of cloning this project's repository, clone your own fork of the project to get a local copy that you can edit in your IDE (VSCode)
git clone https://github.com/YOUR_GITHUB_USER/web-audio-api-player.git
when you are done coding, commit your local changes (if your commit is related to a ticket, start your commit message with "#TICKET_NUMBER", this will "link" the commit to the ticket)
git commit -m "#TICKET_NUMBER commit message"
now go to the github page of your fork and hit the pull request button
things I intend to add some day; if you want to help but are not sure what to do, check out this list and just pick an idea you like or that you think is most useful
if you are interested in helping out 😊 by working on one of the following TODOs, please start by reading the "contributing" chapter above
// (TODO idea) let the user pick a local audio file, read it as an ArrayBuffer
// and hand it over to the player for decoding
var fileInput = document.querySelector('input[type="file"]');
fileInput.addEventListener('change', function(event) {
    var reader = new FileReader();
    // when the file has been read, pass the resulting ArrayBuffer to the player
    reader.onload = function(event) {
        playerCore._decodeSound(this.result);
    };
    // read the selected file as an ArrayBuffer
    reader.readAsArrayBuffer(this.files[0]);
}, false);
As of the 25.05.2019 the web audio api typings seem to be included in lib.d.ts, so removing them from package.json:
"dependencies": {
"@types/webaudioapi": "0.0.27"
},
Unfortunately (as of 06/2019) the window type definition did not include AudioContext.
This is fixed; as of now (20.02.2023) the AudioContext is defined properly.