
react-web-voice
react-web-voice is a library created to ease the integration of the Web Speech API (including speech synthesis and speech recognition) into your React web application.
npm install react-web-voice
At the moment, not all browsers support the Web Speech API. The library has been developed and tested on Google Chrome, which is currently the only browser that fully supports the Web Speech API. As other browsers adopt this feature, the library will be updated accordingly to support them.
More information on this topic can be found here: https://developer.mozilla.org/en-US/docs/Web/API/Web_Speech_API
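Because support varies, it may be worth feature-detecting the underlying browser APIs before rendering components that use these hooks. The check below is a minimal sketch against the plain Web Speech API globals; it is not part of react-web-voice itself.

// Hedged sketch: detect Web Speech API support before using the hooks.
// These globals are standard browser APIs, not exports of react-web-voice.
const supportsSynthesis = 'speechSynthesis' in window;
const supportsRecognition =
  'SpeechRecognition' in window || 'webkitSpeechRecognition' in window;

if (!supportsSynthesis || !supportsRecognition) {
  console.warn('Web Speech API is not fully supported in this browser.');
}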
The library provides two separate React hooks, namely useSpeech and useRecognition, to support voice speaking and voice recognition respectively.
As React hooks are currently an alpha feature, using the next version of React is required.
useSpeech can be used in your functional component to access the speaking functions. The hook returns a list of messages that have already been spoken and a speak function that allows you to tell the browser to speak.
import React from 'react';
import { useSpeech } from 'react-web-voice';

const SpeechComponent = () => {
  const { messages, speak } = useSpeech();
  const speakButtonHandler = async () => {
    const utterance = await speak({
      text: 'Hello',
      volume: 0.5,
      rate: 1,
      pitch: 1
    });
  };
  return <button onClick={speakButtonHandler}>Click to speak</button>;
};
As shown in the example above, the speak function accepts a message object that defines the content of the message along with the volume, rate, and pitch, and it returns a promise that resolves once the browser finishes speaking.
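The messages list returned by useSpeech is not used in the example above; the sketch below shows one way it could be rendered. The assumption that each entry exposes the spoken text as a text property is mine rather than the library's documented shape, so check the package's types before relying on it.

import React from 'react';
import { useSpeech } from 'react-web-voice';

// Hedged sketch: render the messages the browser has already spoken.
// The `text` property on each entry is an assumption made for illustration.
const SpokenHistory = () => {
  const { messages, speak } = useSpeech();
  return (
    <div>
      <button
        onClick={() => speak({ text: 'Hello again', volume: 0.5, rate: 1, pitch: 1 })}
      >
        Speak
      </button>
      <ul>
        {messages.map((message, index) => (
          <li key={index}>{message.text}</li>
        ))}
      </ul>
    </div>
  );
};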
By default, useSpeech uses the Google US English voice; you can have it use a different voice by passing in a config object:
// To get the full list of voices available: window.speechSynthesis.getVoices()
const { messages, speak } = useSpeech({ voice: 'Karen' });
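The voice name passed here must match one of the voices installed in the browser. A quick way to inspect them is sketched below; this uses the plain Web Speech API, not anything specific to this library.

// Hedged sketch: list the voice names available in this browser, then pick one.
// Note that getVoices() may return an empty array until the voices have loaded.
const voices = window.speechSynthesis.getVoices();
console.log(voices.map((voice) => voice.name));

// Pass a matching name to the hook, as in the example above:
// const { messages, speak } = useSpeech({ voice: 'Karen' });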
useRecognition can be used in your functional component to access the voice recognition functions.
import React from 'react';
import { useRecognition } from 'react-web-voice';

const RecognitionComponent = () => {
  const { transcripts, listen } = useRecognition();
  const listenButtonHandler = async () => {
    const transcript = await listen();
  };
  return <button onClick={listenButtonHandler}>Start speaking</button>;
};
As shown in the example above, the listen function returns a promise that resolves with the message the browser detects and recognizes.
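As a hedged illustration of what you might do with the resolved value, the sketch below stores it in component state and displays it. The assumption that the resolved transcript can be rendered directly as text is mine, so verify it against the library's types.

import React, { useState } from 'react';
import { useRecognition } from 'react-web-voice';

// Hedged sketch: keep the latest recognized message in state and display it.
const DictationComponent = () => {
  const { transcripts, listen } = useRecognition();
  const [lastHeard, setLastHeard] = useState('');
  const listenButtonHandler = async () => {
    const transcript = await listen();
    setLastHeard(String(transcript));
  };
  return (
    <div>
      <button onClick={listenButtonHandler}>Start speaking</button>
      <p>Last heard: {lastHeard}</p>
    </div>
  );
};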
This project is written in TypeScript and fully supports it.
An example of how to use these two hooks can be found inside the demo folder.
With version 1.0, we decided to get rid of the callback functions on both the speak and listen functions, replacing them with promises.
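Because both calls now resolve as promises, they can be chained with async/await. The prompt-then-listen flow below is a minimal sketch of that pattern, not an example taken from the library's demo.

import React from 'react';
import { useSpeech, useRecognition } from 'react-web-voice';

// Hedged sketch: speak a prompt, then wait for the user's spoken reply.
const PromptAndListen = () => {
  const { speak } = useSpeech();
  const { listen } = useRecognition();
  const askHandler = async () => {
    await speak({ text: 'What is your name?', volume: 1, rate: 1, pitch: 1 });
    const reply = await listen();
    console.log('Recognized reply:', reply);
  };
  return <button onClick={askHandler}>Ask me</button>;
};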