@untemps/react-vocal
A React component and hook to initiate a SpeechRecognition session
:red_circle: LIVE DEMO :red_circle:
The Web Speech API is only supported by a few browsers so far (see caniuse). If the API is not available, the Vocal component won't display anything.
This component intends to catch a speech result as soon as possible. This can be a good fit for vocal commands or search-field filling. For now, it does not support continuous speech (see Roadmap below).
That means either a result is caught and returned, or the timeout is reached and the recognition is discarded.
The stop function returned by the children-as-function mechanism allows you to discard the recognition prematurely, before the timeout elapses.
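The result-or-timeout behavior can be pictured with a small promise helper (a hypothetical sketch; `raceWithTimeout` is an invented name, the actual logic lives inside the component and is driven by its timeout prop):

```javascript
// Hypothetical sketch of the "result or timeout" behavior described above.
// Either the recognition settles with a result before `ms` elapses,
// or the whole attempt is discarded with a timeout error.
function raceWithTimeout(recognitionPromise, ms) {
	let timer
	const timeout = new Promise((_, reject) => {
		timer = setTimeout(() => reject(new Error('Recognition discarded: timeout reached')), ms)
	})
	return Promise.race([recognitionPromise, timeout]).finally(() => clearTimeout(timer))
}
```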
Some browsers support the SpeechRecognition API but not all the related APIs.
For example, in browsers on iOS 14.5, the SpeechGrammar, SpeechGrammarList, and Permissions APIs are not supported.
Although the lack of SpeechGrammar and SpeechGrammarList is handled by the underlying @untemps/vocal library, you need to deal with Permissions yourself.
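For instance, you could guard a session by querying the Permissions API yourself where it exists (a hedged sketch: `queryMicPermission` is an invented helper, and the `'microphone'` permission name is itself not supported in every browser):

```javascript
// Hypothetical helper: returns the microphone permission state,
// or 'unsupported' when the Permissions API is missing (e.g. iOS 14.5).
async function queryMicPermission(nav) {
	if (!nav.permissions || typeof nav.permissions.query !== 'function') {
		return 'unsupported'
	}
	const status = await nav.permissions.query({ name: 'microphone' })
	return status.state // 'granted' | 'denied' | 'prompt'
}

// In a browser, you would call it with the global navigator:
// queryMicPermission(navigator).then((state) => { /* ... */ })
```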
```shell
yarn add @untemps/react-vocal
```
Vocal component

```jsx
import React, { useState } from 'react'
import Vocal from '@untemps/react-vocal'

const App = () => {
	const [result, setResult] = useState('')

	const _onVocalStart = () => {
		setResult('')
	}

	const _onVocalResult = (result) => {
		setResult(result)
	}

	return (
		<div className="App">
			<span style={{ position: 'relative' }}>
				<Vocal
					onStart={_onVocalStart}
					onResult={_onVocalResult}
					style={{ width: 16, position: 'absolute', right: 10, top: -2 }}
				/>
				<input defaultValue={result} style={{ width: 300, height: 40 }} />
			</span>
		</div>
	)
}
```
By default, Vocal displays an icon with two states (idle and listening). But you can provide your own component.
```jsx
import Vocal from '@untemps/react-vocal'

const App = () => {
	return (
		<Vocal>
			<button>Start</button>
		</Vocal>
	)
}
```
In this case, an onClick handler is automatically attached to the component to start a recognition session.
Only the first direct descendant of Vocal will receive the onClick handler. If you want to use a more complex hierarchy, use the function syntax below.
```jsx
import Vocal from '@untemps/react-vocal'

const Play = () => (
	<div
		style={{
			width: 0,
			height: 0,
			marginLeft: 1,
			borderStyle: 'solid',
			borderWidth: '4px 0 4px 8px',
			borderColor: 'transparent transparent transparent black',
		}}
	/>
)

const Stop = () => (
	<div
		style={{
			width: 8,
			height: 8,
			backgroundColor: 'black',
		}}
	/>
)

const App = () => {
	return (
		<Vocal>
			{(start, stop, isStarted) => (
				<button style={{ padding: 5 }} onClick={isStarted ? stop : start}>
					{isStarted ? <Stop /> : <Play />}
				</button>
			)}
		</Vocal>
	)
}
```
The following parameters are passed to the function:
Arguments | Type | Description |
---|---|---|
start | func | The function used to start the recognition |
stop | func | The function used to stop the recognition |
isStarted | bool | A flag that indicates whether the recognition is started or not |
The Vocal component accepts a commands prop to map special recognition results to callbacks. That means you can define vocal commands to trigger specific functions.
```jsx
const App = () => {
	return (
		<Vocal
			commands={{
				'switch border color': () => setBorderColor('red'),
			}}
		/>
	)
}
```
The commands object is a key/value map where the key is the command to be caught by the recognition and the value is the callback triggered when the command is detected.
Keys are not case-sensitive.
```javascript
const commands = {
	submit: () => submitForm(),
	'Change the background color': () => setBackgroundColor('red'),
	'PLAY MUSIC': play,
}
```
The component uses a special hook called useCommands to respond to the commands.
The hook performs a fuzzy search to match approximate commands if needed. This makes it possible to recover from accidental typos or approximate recognition results.
To do so, the hook uses fuse.js, which implements an algorithm to find strings that are approximately equal to a given input. The score threshold that distinguishes an acceptable command-to-callback match from a negative one can be customized when the hook is instantiated.

```javascript
useCommands(commands, threshold) // threshold is the limit not to exceed to be considered a match
```

See the fuse.js scoring theory for more details.
:warning: The Vocal component doesn't expose that score yet. For now, you have to deal with the default value (0.4).
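To illustrate what a score threshold means, here is a naive distance-based matcher (a toy sketch; fuse.js uses a more sophisticated scoring algorithm, this only demonstrates the same idea with edit distance):

```javascript
// Toy sketch of threshold-based fuzzy matching. A score of 0 is a perfect
// match; scores above the threshold are rejected. This is NOT the fuse.js
// algorithm, just an illustration of the concept.
function levenshtein(a, b) {
	const d = Array.from({ length: a.length + 1 }, (_, i) =>
		Array.from({ length: b.length + 1 }, (_, j) => (i === 0 ? j : j === 0 ? i : 0))
	)
	for (let i = 1; i <= a.length; i++) {
		for (let j = 1; j <= b.length; j++) {
			d[i][j] = Math.min(
				d[i - 1][j] + 1, // deletion
				d[i][j - 1] + 1, // insertion
				d[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1) // substitution
			)
		}
	}
	return d[a.length][b.length]
}

function fuzzyMatch(input, command, threshold = 0.4) {
	const score = levenshtein(input.toLowerCase(), command.toLowerCase()) / command.length
	return score <= threshold
}
```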
Vocal component API

Props | Type | Default | Description |
---|---|---|---|
commands | object | null | Callbacks to be triggered when specified commands are detected by the recognition |
lang | string | 'en-US' | Language understood by the recognition (BCP 47 language tag) |
grammars | SpeechGrammarList | null | Grammars understood by the recognition (JSpeech Grammar Format) |
timeout | number | 3000 | Time in ms to wait before discarding the recognition |
style | object | null | Styles of the root element if className is not specified |
className | string | null | Class of the root element |
onStart | func | null | Handler called when the recognition starts |
onEnd | func | null | Handler called when the recognition ends |
onSpeechStart | func | null | Handler called when the speech starts |
onSpeechEnd | func | null | Handler called when the speech ends |
onResult | func | null | Handler called when a result is recognized |
onError | func | null | Handler called when an error occurs |
onNoMatch | func | null | Handler called when no result can be recognized |
useVocal hook

```jsx
import React, { useState } from 'react'
import { useVocal } from '@untemps/react-vocal'

import Icon from './Icon'

const App = () => {
	const [isListening, setIsListening] = useState(false)
	const [result, setResult] = useState('')

	const [, { start, subscribe }] = useVocal('fr-FR')

	const _onButtonClick = () => {
		setIsListening(true)

		subscribe('speechstart', _onVocalStart)
		subscribe('result', _onVocalResult)
		subscribe('error', _onVocalError)
		start()
	}

	const _onVocalStart = () => {
		setResult('')
	}

	const _onVocalResult = (result) => {
		setIsListening(false)
		setResult(result)
	}

	const _onVocalError = (e) => {
		console.error(e)
	}

	return (
		<div>
			<span style={{ position: 'relative' }}>
				<div
					role="button"
					aria-label="Vocal"
					tabIndex={0}
					style={{ width: 16, position: 'absolute', right: 10, top: 2 }}
					onClick={_onButtonClick}
				>
					<Icon color={isListening ? 'red' : 'blue'} />
				</div>
				<input defaultValue={result} style={{ width: 300, height: 40 }} />
			</span>
		</div>
	)
}
```
useVocal(lang, grammars)
Args | Type | Default | Description |
---|---|---|---|
lang | string | 'en-US' | Language understood by the recognition (BCP 47 language tag) |
grammars | SpeechGrammarList | null | Grammars understood by the recognition (JSpeech Grammar Format) |
```javascript
const [ref, { start, stop, abort, subscribe, unsubscribe, clean }] = useVocal(lang, grammars)
```
Args | Type | Description |
---|---|---|
ref | Ref | React ref to the SpeechRecognitionWrapper instance |
start | func | Function to start the recognition |
stop | func | Function to stop the recognition |
abort | func | Function to abort the recognition |
subscribe | func | Function to subscribe to recognition events |
unsubscribe | func | Function to unsubscribe from recognition events |
clean | func | Function to clean up all subscriptions to recognition events |
```jsx
import Vocal, { isSupported } from '@untemps/react-vocal'

const App = () => {
	return isSupported ? <Vocal /> : <p>Your browser does not support Web Speech API</p>
}
```
Events | Description |
---|---|
audioend | Fired when the user agent has finished capturing audio for recognition |
audiostart | Fired when the user agent has started to capture audio for recognition |
end | Fired when the recognition service has disconnected |
error | Fired when a recognition error occurs |
nomatch | Fired when the recognition service returns a final result with no significant recognition |
result | Fired when the recognition service returns a result |
soundend | Fired when any sound — recognisable or not — has stopped being detected |
soundstart | Fired when any sound — recognisable or not — has been detected |
speechend | Fired when speech recognized by the recognition service has stopped being detected |
speechstart | Fired when sound recognized by the recognition service as speech has been detected |
start | Fired when the recognition service has begun listening to incoming audio |
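The subscribe/unsubscribe pair exposed by the hook behaves like a classic event-listener registry for the events above. A minimal sketch of that pattern (`createRegistry` is an invented name, not the wrapper's actual code):

```javascript
// Minimal sketch of an event registry: subscribe registers a listener
// for an event name, unsubscribe removes it, emit notifies all listeners.
function createRegistry() {
	const listeners = {}
	return {
		subscribe(event, fn) {
			;(listeners[event] = listeners[event] || []).push(fn)
		},
		unsubscribe(event, fn) {
			listeners[event] = (listeners[event] || []).filter((l) => l !== fn)
		},
		emit(event, payload) {
			;(listeners[event] || []).forEach((fn) => fn(payload))
		},
	}
}
```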
The process to grant microphone access permissions is automatically managed by the hook (internally used by the Vocal
component).
The component can be served for development purposes on http://localhost:10001/ using:

```shell
yarn dev
```
Contributions are warmly welcomed. Name your branches using the following pattern:

```
[feature type]_[imperative verb]-[description of the feature]
```