web-speech-cognitive-services
Polyfill Web Speech API with Cognitive Services Speech-to-Text service
Web Speech API adapter for using Cognitive Services Speech Services for both speech-to-text and text-to-speech.
Speech technologies enable many interesting scenarios, including intelligent personal assistants, and provide alternative inputs for assistive technologies.
Although the W3C has standardized speech technologies in the browser, speech-to-text and text-to-speech support is still scarce. However, cloud-based speech technologies are very mature.
This polyfill provides the W3C Speech Recognition and Speech Synthesis APIs in the browser by using Azure Cognitive Services Speech Services. It brings speech technologies to all modern first-party browsers on both PC and mobile platforms.
Before getting started, please obtain a Cognitive Services subscription key from your Azure subscription.
Try out our demo at https://compulim.github.io/web-speech-cognitive-services. If you don't have a subscription key, you can still try out our demo in a speech-supported browser.
We use react-dictate-button and react-say to quickly set up the playground.
Speech recognition requires the WebRTC API, and the page must be hosted through HTTPS or localhost. Although iOS 12 supports WebRTC, native apps using WKWebView do not.
Speech synthesis requires the Web Audio API. For Safari, a user gesture (click or tap) is required to play audio clips using the Web Audio API. To ready the Web Audio API for use without a user gesture, you can synthesize an empty string, which will not trigger any network calls but will play an empty, hardcoded short audio clip. If you already have a "primed" AudioContext object, you can also pass it as an option.
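As a minimal sketch of this priming approach, you can synthesize an empty utterance from inside a user gesture handler. The #start-button selector below is hypothetical, and speechSynthesis and SpeechSynthesisUtterance are assumed to come from this ponyfill as shown in the samples later in this README.

// Sketch only: prime the Web Audio API inside a user gesture (click/tap) handler.
// "speechSynthesis" and "SpeechSynthesisUtterance" are assumed to be created by this ponyfill.
document.querySelector('#start-button').addEventListener('click', () => {
  // Synthesizing an empty string plays a hardcoded silent clip and makes no network calls.
  speechSynthesis.speak(new SpeechSynthesisUtterance(''));
});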
There are two ways to use this package:
Using <script> to load the bundle
To use the ponyfill directly in HTML, you can use our published bundle from unpkg.
In the sample below, we use the bundle to perform text-to-speech with a voice named "Aria24kRUS".
<!DOCTYPE html>
<html lang="en-US">
  <head>
    <script src="https://unpkg.com/web-speech-cognitive-services/umd/web-speech-cognitive-services.production.min.js"></script>
  </head>
  <body>
    <script>
      const { speechSynthesis, SpeechSynthesisUtterance } = window.WebSpeechCognitiveServices.create({
        credentials: {
          region: 'westus',
          subscriptionKey: 'YOUR_SUBSCRIPTION_KEY'
        }
      });

      speechSynthesis.addEventListener('voiceschanged', () => {
        const voices = speechSynthesis.getVoices();
        const utterance = new SpeechSynthesisUtterance('Hello, World!');

        utterance.voice = voices.find(voice => /Aria24kRUS/u.test(voice.name));

        speechSynthesis.speak(utterance);
      });
    </script>
  </body>
</html>
We do not host the bundle. You should always use Subresource Integrity to protect bundle integrity when loading from a third-party CDN.
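For example, a script tag with Subresource Integrity might look like the sketch below. The integrity value is a placeholder, not a real digest; pin an exact package version in the URL and compute the SHA-384 hash of that exact file so the digest stays valid.

<!-- Sketch only: pin an exact version in the URL and replace the placeholder
     integrity value with the real SHA-384 digest of that exact file. -->
<script
  src="https://unpkg.com/web-speech-cognitive-services/umd/web-speech-cognitive-services.production.min.js"
  integrity="sha384-REPLACE_WITH_REAL_DIGEST"
  crossorigin="anonymous"
></script>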
The voiceschanged event comes shortly after the ponyfill is created. You will need to wait until the event arrives before you can choose a voice for your utterance.
For a production build, run npm install web-speech-cognitive-services.
For a development build, run npm install web-speech-cognitive-services@master.
Since the Speech Services SDK is not on NPM yet, we bundle the SDK inside this package for now. When the Speech Services SDK is released on NPM, we will define it as a peer dependency.
In JavaScript, a polyfill is a technique for bringing newer features to older environments. A ponyfill is very similar, but instead of polluting the environment by default, it lets the developer choose what they want. This article talks about polyfill vs. ponyfill.
In this package, we prefer a ponyfill because it does not pollute the hosting environment. You are also free to mix and match multiple speech recognition engines under a single environment.
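For example, here is a minimal sketch (using the createPonyfill import shown later in this README) that keeps the browser's native speech recognition, when available, alongside the Cognitive Services one and picks between them at runtime:

import createPonyfill from 'web-speech-cognitive-services/lib/SpeechServices';

// Sketch only: because nothing is patched onto window, both engines can coexist.
const cognitiveServices = await createPonyfill({
  credentials: {
    region: 'westus',
    subscriptionKey: 'YOUR_SUBSCRIPTION_KEY'
  }
});

// Prefer the native SpeechRecognition if the browser has one, otherwise use the ponyfill.
const SpeechRecognition =
  window.SpeechRecognition || window.webkitSpeechRecognition || cognitiveServices.SpeechRecognition;

const recognition = new SpeechRecognition();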
The following table lists all options supported by the adapter.

Name and type | Default value | Description
---|---|---
audioConfig: AudioConfig | fromDefaultMicrophoneInput() | AudioConfig object to use with speech recognition. Please refer to this article for details on selecting different audio devices.
audioContext: AudioContext | undefined | The AudioContext object to synthesize speech on. If this is undefined, the AudioContext object will be created on first synthesis.
credentials: ICredentials, Promise<ICredentials>, () => ICredentials, or () => Promise<ICredentials>, where ICredentials is { authorizationToken, region }, { region, subscriptionKey }, { authorizationToken, customVoiceHostname?, speechRecognitionHostname, speechSynthesisHostname }, or { customVoiceHostname?, speechRecognitionHostname, speechSynthesisHostname, subscriptionKey } | (Required) | Credentials (including Azure region) from Cognitive Services. Please refer to this article to obtain an authorization token. A subscription key is not recommended for production use, as it will be leaked in the browser. For sovereign clouds such as Azure Government (United States) and Azure China, specify speechRecognitionHostname and speechSynthesisHostname instead of region. You can find the sovereign cloud connection parameters in this article.
enableTelemetry | undefined | Pass-through option to enable or disable telemetry for the Speech SDK recognizer, as outlined in the Speech SDK. This adapter does not collect any telemetry; by default, the Speech SDK will collect telemetry unless this is set to false.
looseEvents: boolean | false | Specifies whether the event order should strictly follow observed browser behavior (false) or a loosened behavior (true). Both behaviors conform to W3C specifications. You can read more about this option in the event order section.
ponyfill.AudioContext: AudioContext | window.AudioContext || window.webkitAudioContext | Ponyfill for the Web Audio API. Currently, only the Web Audio API can be ponyfilled. We may expand to WebRTC for audio recording in the future.
referenceGrammars: string[] | undefined | Reference grammar IDs to send for speech recognition.
speechRecognitionEndpointId: string | undefined | Endpoint ID for the Custom Speech service.
speechSynthesisDeploymentId: string | undefined | Deployment ID for the Custom Voice service. When you are using Custom Voice, you will need to specify your voice model name through SpeechSynthesisVoice.voiceURI. Please refer to the "Custom Voice support" section for details.
speechSynthesisOutputFormat: string | "audio-24khz-160kbitrate-mono-mp3" | Audio format for speech synthesis. Please refer to this article for the list of supported formats.
textNormalization: string | "display" | Text normalization option; supported values are "display", "itn", "lexical", and "maskeditn".
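As an illustration, the sketch below passes a few of these options together. The option names come from the table above; the specific values are arbitrary examples, not recommendations.

import createPonyfill from 'web-speech-cognitive-services/lib/SpeechServices';

// Sketch: creating the ponyfill with a handful of options from the table above.
const ponyfill = await createPonyfill({
  credentials: {
    region: 'westus',
    subscriptionKey: 'YOUR_SUBSCRIPTION_KEY'
  },
  enableTelemetry: false,       // opt out of Speech SDK telemetry
  looseEvents: true,            // loosened (still W3C-conformant) event order
  speechSynthesisOutputFormat: 'audio-16khz-128kbitrate-mono-mp3',
  textNormalization: 'display'  // default shown explicitly
});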
You can use the adapter to connect to sovereign clouds, including Azure Government (United States) and Microsoft Azure China.
Please refer to this article on limitations when using Cognitive Services Speech Services on sovereign clouds.
import createPonyfill from 'web-speech-cognitive-services/lib/SpeechServices';

// Azure Government (United States)
createPonyfill({
  credentials: {
    authorizationToken: 'YOUR_AUTHORIZATION_TOKEN',
    speechRecognitionHostname: 'virginia.stt.speech.azure.us',
    speechSynthesisHostname: 'virginia.tts.speech.azure.us'
  }
});

// Microsoft Azure China
createPonyfill({
  credentials: {
    authorizationToken: 'YOUR_AUTHORIZATION_TOKEN',
    speechRecognitionHostname: 'chinaeast2.stt.speech.azure.cn',
    speechSynthesisHostname: 'chinaeast2.tts.speech.azure.cn'
  }
});
For readability, we omitted the wrapping async function in all code snippets. To run the code, you will need to wrap it in an async function.
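For example, any of the snippets below can be wrapped like this minimal sketch:

// Sketch: wrap an await-based snippet in an async IIFE so top-level "await" is not required.
(async function () {
  // ...paste one of the snippets from this README here...
})().catch(console.error);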
import { createSpeechRecognitionPonyfill } from 'web-speech-cognitive-services/lib/SpeechServices/SpeechToText';

const { SpeechRecognition } = await createSpeechRecognitionPonyfill({
  credentials: {
    region: 'westus',
    subscriptionKey: 'YOUR_SUBSCRIPTION_KEY'
  }
});

const recognition = new SpeechRecognition();

recognition.interimResults = true;
recognition.lang = 'en-US';

recognition.onresult = ({ results }) => {
  console.log(results);
};

recognition.start();
Note: most browsers require HTTPS or localhost for WebRTC.
You can use react-dictate-button to integrate speech recognition functionality into your React app.
import createPonyfill from 'web-speech-cognitive-services/lib/SpeechServices';
import DictateButton from 'react-dictate-button';

const {
  SpeechGrammarList,
  SpeechRecognition
} = await createPonyfill({
  credentials: {
    region: 'westus',
    subscriptionKey: 'YOUR_SUBSCRIPTION_KEY'
  }
});

export default props =>
  <DictateButton
    onDictate={ ({ result }) => alert(result.transcript) }
    speechGrammarList={ SpeechGrammarList }
    speechRecognition={ SpeechRecognition }
  >
    Start dictation
  </DictateButton>
import { createSpeechSynthesisPonyfill } from 'web-speech-cognitive-services/lib/SpeechServices/TextToSpeech';

const {
  speechSynthesis,
  SpeechSynthesisUtterance
} = await createSpeechSynthesisPonyfill({
  credentials: {
    region: 'westus',
    subscriptionKey: 'YOUR_SUBSCRIPTION_KEY'
  }
});

speechSynthesis.addEventListener('voiceschanged', () => {
  const voices = speechSynthesis.getVoices();
  const utterance = new SpeechSynthesisUtterance('Hello, World!');

  utterance.voice = voices.find(voice => /Aria24kRUS/u.test(voice.name));

  speechSynthesis.speak(utterance);
});
Note: speechSynthesis is camel-cased because it is an instance.
A list of supported regions can be found in this article.
pitch, rate, voice, and volume are supported. Only the onstart, onerror, and onend events are supported.
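As a small sketch of that supported surface (assuming speechSynthesis and SpeechSynthesisUtterance came from the ponyfill created above):

// Sketch: only pitch, rate, voice, volume and the onstart/onerror/onend events are supported.
const utterance = new SpeechSynthesisUtterance('Hello, World!');

utterance.pitch = 1;     // 0 to 2
utterance.rate = 1.2;    // speaking rate
utterance.volume = 0.8;  // 0 to 1

utterance.onstart = () => console.log('Synthesis started');
utterance.onerror = event => console.error('Synthesis error', event);
utterance.onend = () => console.log('Synthesis finished');

speechSynthesis.speak(utterance);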
You can use react-say to integrate speech synthesis functionality into your React app.
import createPonyfill from 'web-speech-cognitive-services/lib/SpeechServices';
import React, { useEffect, useState } from 'react';
import Say from 'react-say';

export default () => {
  const [ponyfill, setPonyfill] = useState();

  useEffect(() => {
    // useEffect callbacks cannot be async, so wrap the asynchronous setup in an IIFE.
    (async () => {
      setPonyfill(await createPonyfill({
        credentials: {
          region: 'westus',
          subscriptionKey: 'YOUR_SUBSCRIPTION_KEY'
        }
      }));
    })();
  }, [setPonyfill]);

  return (
    ponyfill &&
      <Say
        speechSynthesis={ ponyfill.speechSynthesis }
        speechSynthesisUtterance={ ponyfill.SpeechSynthesisUtterance }
        text="Hello, World!"
      />
  );
};
Instead of exposing the subscription key in the browser, we strongly recommend using an authorization token.
import createPonyfill from 'web-speech-cognitive-services/lib/SpeechServices';

const ponyfill = await createPonyfill({
  credentials: {
    authorizationToken: 'YOUR_AUTHORIZATION_TOKEN',
    region: 'westus'
  }
});
You can also provide an async function that will fetch the authorization token and Azure region on demand. You should cache the authorization token for subsequent requests. For simplicity, this code snippet does not cache the result.
import createPonyfill from 'web-speech-cognitive-services/lib/SpeechServices';

const ponyfill = await createPonyfill({
  credentials: async () => {
    const res = await fetch('https://example.com/your-token');

    return {
      authorizationToken: await res.text(),
      region: 'westus'
    };
  }
});
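A hedged sketch of the caching mentioned above might look like the following. The 5-minute refresh interval is an arbitrary assumption and should match the lifetime of tokens issued by your endpoint.

import createPonyfill from 'web-speech-cognitive-services/lib/SpeechServices';

// Sketch: cache the fetched authorization token and refresh it before it goes stale.
let cachedCredentials;
let lastFetch = 0;

const ponyfill = await createPonyfill({
  credentials: async () => {
    // Refresh every 5 minutes (assumed token lifetime; adjust to your issuer).
    if (!cachedCredentials || Date.now() - lastFetch > 300000) {
      const res = await fetch('https://example.com/your-token');

      cachedCredentials = {
        authorizationToken: await res.text(),
        region: 'westus'
      };
      lastFetch = Date.now();
    }

    return cachedCredentials;
  }
});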
A list of supported regions can be found in this article.
Lexical and ITN support is unique to Cognitive Services Speech Services. In addition to transcript and confidence, our adapter adds the properties transcriptITN, transcriptLexical, and transcriptMaskedITN to surface the result.
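For example, a sketch of reading these properties off the first alternative of the first result (assuming recognition was created from this ponyfill's SpeechRecognition, as shown earlier):

// Sketch: inspecting the additional normalization properties on a recognition result.
recognition.onresult = ({ results }) => {
  const { confidence, transcript, transcriptITN, transcriptLexical, transcriptMaskedITN } = results[0][0];

  console.log({ confidence, transcript, transcriptITN, transcriptLexical, transcriptMaskedITN });
};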
In some cases, you may want the speech recognition engine to be biased towards "Bellevue", because it is not trivial for the engine to distinguish between "Bellevue", "Bellview", and "Bellvue" (without the "e"). By giving it a list of words, the speech recognition engine will be more biased towards your choice of words.
Since Cognitive Services does not work with weighted grammars, we built another SpeechGrammarList to better fit the scenario.
import createPonyfill from 'web-speech-cognitive-services/lib/SpeechServices';

const {
  SpeechGrammarList,
  SpeechRecognition
} = await createPonyfill({
  credentials: {
    region: 'westus',
    subscriptionKey: 'YOUR_SUBSCRIPTION_KEY'
  }
});

const recognition = new SpeechRecognition();

recognition.grammars = new SpeechGrammarList();
recognition.grammars.phrases = ['Tuen Mun', 'Yuen Long'];

recognition.onresult = ({ results }) => {
  console.log(results);
};

recognition.start();
Please refer to "What is Custom Speech?" for tutorial on creating your first Custom Speech model.
To use custom speech for speech recognition, you need to pass the endpoint ID while creating the ponyfill.
import createPonyfill from 'web-speech-cognitive-services/lib/SpeechServices';

const ponyfill = await createPonyfill({
  credentials: {
    region: 'westus',
    subscriptionKey: 'YOUR_SUBSCRIPTION_KEY'
  },
  speechRecognitionEndpointId: '12345678-1234-5678-abcd-12345678abcd'
});
Please refer to "Get started with Custom Voice" for tutorial on creating your first Custom Voice model.
To use Custom Voice for speech synthesis, you need to pass the deployment ID while creating the ponyfill, and pass the voice model name as voice URI.
import createPonyfill from 'web-speech-cognitive-services/lib/SpeechServices';

const ponyfill = await createPonyfill({
  credentials: {
    region: 'westus',
    subscriptionKey: 'YOUR_SUBSCRIPTION_KEY'
  },
  speechSynthesisDeploymentId: '12345678-1234-5678-abcd-12345678abcd'
});

const { speechSynthesis, SpeechSynthesisUtterance } = ponyfill;
const utterance = new SpeechSynthesisUtterance('Hello, World!');

utterance.voice = { voiceURI: 'your-model-name' };

await speechSynthesis.speak(utterance);
According to the W3C specifications, the result event can be fired at any time after the audiostart event.
In continuous mode, the finalized result event will be sent as early as possible. But in non-continuous mode, we observed that browsers send the finalized result event just before audioend, instead of as early as possible.
By default, we follow the event order observed from browsers (a.k.a. strict event order). For speech recognition in non-continuous mode with interim results, the observed event order will be:
1. start
2. audiostart
3. soundstart
4. speechstart
5. result (these are interim results, with the isFinal property set to false)
6. speechend
7. soundend
8. audioend
9. result (with the isFinal property set to true)
10. end
You can loosen the event order by setting looseEvents to true. For the same scenario, the event order will become:
1. start
2. audiostart
3. soundstart
4. speechstart
5. result (these are interim results, with the isFinal property set to false)
6. result (with the isFinal property set to true)
7. speechend
8. soundend
9. audioend
10. end
For error events (abort, "no-speech", or other errors), we always send them just before the last end event.
In some cases, loosening the event order may improve recognition performance. This will not break conformance to the W3C standard.
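If you want to observe the order yourself, a simple sketch is to log every lifecycle event on a recognition object created from this ponyfill, as shown earlier:

// Sketch: log lifecycle events to compare strict vs. loose event order.
// Assumes "recognition" was created from this ponyfill's SpeechRecognition.
recognition.onstart = () => console.log('start');
recognition.onaudiostart = () => console.log('audiostart');
recognition.onsoundstart = () => console.log('soundstart');
recognition.onspeechstart = () => console.log('speechstart');
recognition.onresult = () => console.log('result');
recognition.onspeechend = () => console.log('speechend');
recognition.onsoundend = () => console.log('soundend');
recognition.onaudioend = () => console.log('audioend');
recognition.onerror = () => console.log('error');
recognition.onend = () => console.log('end');

recognition.start();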
For a detailed test matrix, please refer to SPEC-RECOGNITION.md or SPEC-SYNTHESIS.md.
Known issues and limitations:
- Confidence is always 0.5 for interim results.
- The onboundary, onmark, onpause, and onresume events are not supported/fired.
- pause will pause immediately and does not pause on word breaks, due to the lack of boundary events.
- The stop() and abort() functions are limited by the bundled microsoft-speech-browser-sdk@0.0.12, tracking on this issue.
- pause/resume support is not yet implemented.
- paused/pending/speaking support is not yet implemented.
Like us? Star us.
Want to make it better? File us an issue.
Don't like something you see? Submit a pull request.