web-speech-cognitive-services
Comparing version 1.0.0 to 1.0.1-master.2034662
@@ -8,5 +8,13 @@ # Changelog | ||
## [Unreleased] | ||
### Added | ||
- SpeechSynthesis polyfill with Cognitive Services | ||
### Changed | ||
- Removed `CognitiveServices` prefix | ||
- Renamed `CognitiveServicesSpeechGrammarList` to `SpeechGrammarList` | ||
- Renamed `CognitiveServicesSpeechRecognition` to `SpeechRecognition` | ||
## [1.0.0] - 2018-06-29 | ||
### Added | ||
- Initial release | ||
- SpeechRecognition polyfill with Cognitive Services |
@@ -6,18 +6,29 @@ 'use strict'; | ||
}); | ||
exports.CognitiveServicesSpeechGrammarList = undefined; | ||
exports.SpeechSynthesisUtterance = exports.speechSynthesis = exports.SpeechRecognition = exports.SpeechGrammarList = undefined; | ||
require('babel-polyfill'); | ||
var _CognitiveServicesSpeechGrammarList = require('./CognitiveServicesSpeechGrammarList'); | ||
var _SpeechGrammarList = require('./recognition/SpeechGrammarList'); | ||
var _CognitiveServicesSpeechGrammarList2 = _interopRequireDefault(_CognitiveServicesSpeechGrammarList); | ||
var _SpeechGrammarList2 = _interopRequireDefault(_SpeechGrammarList); | ||
var _CognitiveServicesSpeechRecognition = require('./CognitiveServicesSpeechRecognition'); | ||
var _SpeechRecognition = require('./recognition/SpeechRecognition'); | ||
var _CognitiveServicesSpeechRecognition2 = _interopRequireDefault(_CognitiveServicesSpeechRecognition); | ||
var _SpeechRecognition2 = _interopRequireDefault(_SpeechRecognition); | ||
var _speechSynthesis = require('./synthesis/speechSynthesis'); | ||
var _speechSynthesis2 = _interopRequireDefault(_speechSynthesis); | ||
var _SpeechSynthesisUtterance = require('./synthesis/SpeechSynthesisUtterance'); | ||
var _SpeechSynthesisUtterance2 = _interopRequireDefault(_SpeechSynthesisUtterance); | ||
function _interopRequireDefault(obj) { return obj && obj.__esModule ? obj : { default: obj }; } | ||
exports.default = _CognitiveServicesSpeechRecognition2.default; | ||
exports.CognitiveServicesSpeechGrammarList = _CognitiveServicesSpeechGrammarList2.default; | ||
//# sourceMappingURL=data:application/json;charset=utf-8;base64,eyJ2ZXJzaW9uIjozLCJzb3VyY2VzIjpbIi4uL3NyYy9pbmRleC5qcyJdLCJuYW1lcyI6WyJDb2duaXRpdmVTZXJ2aWNlc1NwZWVjaFJlY29nbml0aW9uIiwiQ29nbml0aXZlU2VydmljZXNTcGVlY2hHcmFtbWFyTGlzdCJdLCJtYXBwaW5ncyI6Ijs7Ozs7OztBQUFBOztBQUVBOzs7O0FBQ0E7Ozs7OztrQkFFZUEsNEM7UUFHYkMsa0MsR0FBQUEsNEMiLCJmaWxlIjoiaW5kZXguanMiLCJzb3VyY2VzQ29udGVudCI6WyJpbXBvcnQgJ2JhYmVsLXBvbHlmaWxsJztcblxuaW1wb3J0IENvZ25pdGl2ZVNlcnZpY2VzU3BlZWNoR3JhbW1hckxpc3QgZnJvbSAnLi9Db2duaXRpdmVTZXJ2aWNlc1NwZWVjaEdyYW1tYXJMaXN0JztcbmltcG9ydCBDb2duaXRpdmVTZXJ2aWNlc1NwZWVjaFJlY29nbml0aW9uIGZyb20gJy4vQ29nbml0aXZlU2VydmljZXNTcGVlY2hSZWNvZ25pdGlvbic7XG5cbmV4cG9ydCBkZWZhdWx0IENvZ25pdGl2ZVNlcnZpY2VzU3BlZWNoUmVjb2duaXRpb25cblxuZXhwb3J0IHtcbiAgQ29nbml0aXZlU2VydmljZXNTcGVlY2hHcmFtbWFyTGlzdFxufVxuIl19 | ||
exports.default = _SpeechRecognition2.default; | ||
exports.SpeechGrammarList = _SpeechGrammarList2.default; | ||
exports.SpeechRecognition = _SpeechRecognition2.default; | ||
exports.speechSynthesis = _speechSynthesis2.default; | ||
exports.SpeechSynthesisUtterance = _SpeechSynthesisUtterance2.default; | ||
//# sourceMappingURL=index.js.map |
{ | ||
"name": "web-speech-cognitive-services", | ||
"version": "1.0.0", | ||
"version": "1.0.1-master.2034662", | ||
"description": "Polyfill Web Speech API with Cognitive Services Speech-to-Text service", | ||
@@ -5,0 +5,0 @@ "keywords": [ |
README.md
@@ -5,3 +5,3 @@ # web-speech-cognitive-services | ||
Polyfill Web Speech API with Cognitive Services Speech-to-Text service. | ||
Polyfill Web Speech API with Cognitive Services Bing Speech for both speech-to-text and text-to-speech services. | ||
@@ -14,3 +14,3 @@ This scaffold is provided by [`react-component-template`](https://github.com/compulim/react-component-template/). | ||
We use [`react-dictate-button`](https://github.com/compulim/react-dictate-button/) to quickly setup the playground. | ||
We use [`react-dictate-button`](https://github.com/compulim/react-dictate-button/) and [`react-say`](https://github.com/compulim/react-say/) to quickly set up the playground. | ||
@@ -21,5 +21,5 @@ # Background | ||
Microsoft Azure [Cognitive Services Speech-to-Text](https://azure.microsoft.com/en-us/services/cognitive-services/speech-to-text/) service provide speech recognition with great accuracy. But unfortunately, the APIs are not based on Web Speech API. | ||
Microsoft Azure [Cognitive Services Bing Speech](https://azure.microsoft.com/en-us/services/cognitive-services/speech/) service provides speech recognition with great accuracy. But unfortunately, the APIs are not based on the Web Speech API. | ||
This package will polyfill Web Speech API by turning Cognitive Services Speech-to-Text API into Web Speech API. We test this package with popular combination of platforms and browsers. | ||
This package will polyfill the Web Speech API by turning the Cognitive Services Bing Speech API into the Web Speech API. We test this package with popular combinations of platforms and browsers. | ||
@@ -30,6 +30,8 @@ # How to use | ||
## Speech recognition (speech-to-text) | ||
```jsx | ||
import CognitiveServicesSpeechRecognition from 'web-speech-cognitive-services'; | ||
import SpeechRecognition from 'web-speech-cognitive-services'; | ||
const recognition = new CognitiveServicesSpeechRecognition(); | ||
const recognition = new SpeechRecognition(); | ||
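Since the hunk above only shows the constructor rename, a fuller consumption sketch may help. The commented-out import and event wiring follow this README; the `finalTranscript` helper and the sample result objects are our own hypothetical illustration, with the result shape taken from the `result(results = [{ isFinal = true }])` lifecycle documented further down.

```javascript
// Sketch of consuming results from the polyfill. The import and event
// wiring (commented out) follow the README; `finalTranscript` is a
// hypothetical helper of ours, not part of the package.
//
// import SpeechRecognition from 'web-speech-cognitive-services';
//
// const recognition = new SpeechRecognition();
// recognition.onresult = ({ results }) => console.log(finalTranscript(results));
// recognition.start();

// Join the transcripts of results flagged as final, skipping interim results.
function finalTranscript(results) {
  return Array.from(results)
    .filter(result => result.isFinal)
    .map(result => result[0].transcript)
    .join(' ');
}

console.log(finalTranscript([
  { isFinal: false, 0: { transcript: 'hello wor' } }, // interim, dropped
  { isFinal: true, 0: { transcript: 'Hello, World!' } }
])); // prints "Hello, World!"
```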
@@ -55,3 +57,3 @@ // There are two ways to provide your credential: | ||
## Integrating with React | ||
### Integrating with React | ||
@@ -61,3 +63,3 @@ You can use [`react-dictate-button`](https://github.com/compulim/react-dictate-button/) to integrate speech recognition functionality to your React app. | ||
```jsx | ||
import CognitiveServicesSpeechRecognitionm, { CognitiveServicesSpeechGrammarList } from 'web-speech-recognition-services'; | ||
import { SpeechGrammarList, SpeechRecognition } from 'web-speech-cognitive-services'; | ||
import DictateButton from 'react-dictate-button'; | ||
@@ -69,4 +71,4 @@ | ||
onDictate={ ({ result }) => alert(result.transcript) } | ||
speechGrammarList={ CognitiveServicesSpeechGrammarList } | ||
speechRecognition={ CognitiveServicesSpeechRecognition } | ||
speechGrammarList={ SpeechGrammarList } | ||
speechRecognition={ SpeechRecognition } | ||
> | ||
@@ -79,175 +81,37 @@ Start dictation | ||
# Test matrix | ||
## Speech synthesis (text-to-speech) | ||
Browsers are all latest as of 2018-06-28, except: | ||
```jsx | ||
import { speechSynthesis, SpeechSynthesisUtterance } from 'web-speech-cognitive-services'; | ||
* macOS was 10.13.1 (2017-10-31), instead of 10.13.5 | ||
* Since Safari does not support Web Speech API, the test matrix remains the same | ||
* Xbox was tested on Insider build (1806) with Kinect sensor connected | ||
* The latest Insider build does not support both WebRTC and Web Speech API, so we suspect the production build also does not support both | ||
const utterance = new SpeechSynthesisUtterance('Hello, World!'); | ||
Quick grab: | ||
speechSynthesis.speak(utterance); | ||
``` | ||
* Web Speech API | ||
* Works on most popular platforms, except iOS. Some requires non-default browser. | ||
* iOS: None of the popular browsers support Web Speech API | ||
* Windows: requires Chrome | ||
* Cognitive Services Speech-to-Text | ||
* Works on default browsers on all popular platforms | ||
* iOS: Chrome and Edge does not support Cognitive Services (WebRTC) | ||
> Note: `speechSynthesis` is camel-cased because it is an instance, not a class. | ||
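Since only `onstart` and `onend` fire (as noted below), the utterance lifecycle can be pictured with a self-contained mock. `MockUtterance` and `mockSpeechSynthesis` here are our own stand-ins for illustration, not the package's implementation.

```javascript
// Illustrative mock of the utterance lifecycle the polyfill exposes:
// only onstart and onend fire. Our own stand-in, not the package's code.
class MockUtterance {
  constructor(text) {
    this.text = text;
    this.onstart = null;
    this.onend = null;
  }
}

const mockSpeechSynthesis = {
  // speak() fires onstart, would play the audio, then fires onend.
  speak(utterance) {
    if (utterance.onstart) utterance.onstart({ type: 'start' });
    // ...audio would be synthesized and played here...
    if (utterance.onend) utterance.onend({ type: 'end' });
  }
};

const events = [];
const utterance = new MockUtterance('Hello, World!');
utterance.onstart = event => events.push(event.type);
utterance.onend = event => events.push(event.type);
mockSpeechSynthesis.speak(utterance); // events is now ['start', 'end']
```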
| Platform | OS | Browser | Cognitive Services (WebRTC) | Web Speech API | | ||
| - | - | - | - | - | | ||
| PC | Windows 10 (1803) | Chrome 67.0.3396.99 | Yes | Yes | | ||
| PC | Windows 10 (1803) | Edge 42.17134.1.0 | Yes | No, `SpeechRecognition` not implemented | | ||
| PC | Windows 10 (1803) | Firefox 61.0 | Yes | No, `SpeechRecognition` not implemented | | ||
| MacBook Pro | macOS High Sierra 10.13.1 | Chrome 67.0.3396.99 | Yes | Yes | | ||
| MacBook Pro | macOS High Sierra 10.13.1 | Safari 11.0.1 | Yes | No, `SpeechRecognition` not implemented | | ||
| Apple iPhone X | iOS 11.4 | Chrome 67.0.3396.87 | No, `AudioSourceError` | No, `SpeechRecognition` not implemented | | ||
| Apple iPhone X | iOS 11.4 | Edge 42.2.2.0 | No, `AudioSourceError` | No, `SpeechRecognition` not implemented | | ||
| Apple iPhone X | iOS 11.4 | Safari | Yes | No, `SpeechRecognition` not implemented | | ||
| Apple iPod (6th gen) | iOS 11.4 | Chrome 67.0.3396.87 | No, `AudioSourceError` | No, `SpeechRecognition` not implemented | | ||
| Apple iPod (6th gen) | iOS 11.4 | Edge 42.2.2.0 | No, `AudioSourceError` | No, `SpeechRecognition` not implemented | | ||
| Apple iPod (6th gen) | iOS 11.4 | Safari | No, `AudioSourceError` | No, `SpeechRecognition` not implemented | | ||
| Google Pixel 2 | Android 8.1.0 | Chrome 67.0.3396.87 | Yes | Yes | | ||
| Google Pixel 2 | Android 8.1.0 | Edge 42.0.0.2057 | Yes | Yes | | ||
| Google Pixel 2 | Android 8.1.0 | Firefox 60.1.0 | Yes | Yes | | ||
| Microsoft Lumia 950 | Windows 10 (1709) | Edge 40.15254.489.0 | No, `AudioSourceError` | No, `SpeechRecognition` not implemented | | ||
| Microsoft Xbox One | Windows 10 (1806) 17134.4054 | Edge 42.17134.4054.0 | No, `AudioSourceError` | No, `SpeechRecognition` not implemented | | ||
`pitch`, `rate`, `voice`, and `volume` are supported. Only `onstart` and `onend` events are supported. | ||
## Event lifecycle scenarios | ||
### Integrating with React | ||
We test multiple scenarios to make sure we polyfill the Web Speech API correctly. The following are the events and their firing order, in Cognitive Services and the Web Speech API respectively. | ||
You can use [`react-say`](https://github.com/compulim/react-say/) to integrate speech synthesis functionality to your React app. | ||
* [Happy path](#happy-path) | ||
* [Abort during recognition](#abort-during-recognition) | ||
* [Network issues](#network-issues) | ||
* [Audio muted or volume too low](#audio-muted-or-volume-too-low) | ||
* [No speech is recognized](#no-speech-is-recognized) | ||
* [Not authorized to use microphone](#not-authorized-to-use-microphone) | ||
```jsx | ||
import { speechSynthesis, SpeechSynthesisUtterance } from 'web-speech-cognitive-services'; | ||
import Say from 'react-say'; | ||
### Happy path | ||
export default props => | ||
<Say | ||
extra={{ subscriptionKey: 'your subscription key' }} | ||
speechSynthesis={ speechSynthesis } | ||
SpeechSynthesisUtterance={ SpeechSynthesisUtterance } | ||
text="Hello, World!" | ||
/> | ||
``` | ||
Everything works, including multiple interim results. | ||
# Test matrix | ||
* Cognitive Services | ||
1. `RecognitionTriggeredEvent` | ||
2. `ListeningStartedEvent` | ||
3. `ConnectingToServiceEvent` | ||
4. `RecognitionStartedEvent` | ||
5. `SpeechHypothesisEvent` (could be more than one) | ||
6. `SpeechEndDetectedEvent` | ||
7. `SpeechDetailedPhraseEvent` | ||
8. `RecognitionEndedEvent` | ||
* Web Speech API | ||
1. `start` | ||
2. `audiostart` | ||
3. `soundstart` | ||
4. `speechstart` | ||
5. `result` (multiple times) | ||
6. `speechend` | ||
7. `soundend` | ||
8. `audioend` | ||
9. `result(results = [{ isFinal = true }])` | ||
10. `end` | ||
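The two orderings above pair up step by step. As a sketch toward the "add tests for lifecycle events" to-do later in this README, a tiny checker (our own illustration, not part of the package) can validate a recorded Web Speech event sequence against this happy path:

```javascript
// Sketch of a lifecycle checker: verifies a recorded sequence of Web
// Speech API events follows the happy-path ordering above. Step 5
// (`result`) may repeat for multiple interim results.
const HAPPY_PATH = [
  'start', 'audiostart', 'soundstart', 'speechstart',
  'result', 'speechend', 'soundend', 'audioend', 'result', 'end'
];

function followsHappyPath(recorded) {
  let i = 0;
  for (const event of recorded) {
    if (event === HAPPY_PATH[i]) { i++; continue; }
    // Allow repeated interim `result` events at step 5.
    if (event === 'result' && HAPPY_PATH[i - 1] === 'result') continue;
    return false;
  }
  return i === HAPPY_PATH.length;
}

followsHappyPath([
  'start', 'audiostart', 'soundstart', 'speechstart',
  'result', 'result', 'result',
  'speechend', 'soundend', 'audioend', 'result', 'end'
]); // → true
```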
For detailed test matrix, please refer to [`TESTMATRIX.md`](TESTMATRIX.md). | ||
### Abort during recognition | ||
#### Abort before first recognition is made | ||
* Cognitive Services | ||
* Essentially mutes the microphone and receives `SpeechEndDetectedEvent` immediately; very similar to the [happy path](#happy-path), and could still result in success, silence, or no match | ||
* Web Speech API | ||
1. `start` | ||
2. `audiostart` | ||
8. `audioend` | ||
9. `error(error = 'aborted')` | ||
10. `end` | ||
#### Abort after some text has been recognized | ||
* Cognitive Services | ||
* Essentially mutes the microphone and receives `SpeechEndDetectedEvent` immediately; very similar to the [happy path](#happy-path), and could still result in success, silence, or no match | ||
* Web Speech API | ||
1. `start` | ||
2. `audiostart` | ||
3. `soundstart` | ||
4. `speechstart` | ||
5. `result` (one or more) | ||
6. `speechend` | ||
7. `soundend` | ||
8. `audioend` | ||
9. `error(error = 'aborted')` | ||
10. `end` | ||
### Network issues | ||
Turn on airplane mode. | ||
* Cognitive Services | ||
1. `RecognitionTriggeredEvent` | ||
2. `ListeningStartedEvent` | ||
3. `ConnectingToServiceEvent` | ||
5. `RecognitionEndedEvent(Result.RecognitionStatus = 'ConnectError')` | ||
* Web Speech API | ||
1. `start` | ||
2. `audiostart` | ||
3. `audioend` | ||
4. `error(error = 'network')` | ||
5. `end` | ||
### Audio muted or volume too low | ||
* Cognitive Services | ||
1. `RecognitionTriggeredEvent` | ||
2. `ListeningStartedEvent` | ||
3. `ConnectingToServiceEvent` | ||
4. `RecognitionStartedEvent` | ||
5. `SpeechEndDetectedEvent` | ||
6. `SpeechDetailedPhraseEvent(Result.RecognitionStatus = 'InitialSilenceTimeout')` | ||
7. `RecognitionEndedEvent` | ||
* Web Speech API | ||
1. `start` | ||
2. `audiostart` | ||
3. `audioend` | ||
4. `error(error = 'no-speech')` | ||
5. `end` | ||
### No speech is recognized | ||
Some sounds are heard, but they cannot be recognized as text. There could be some interim results with recognized text, but the confidence is so low that they are dropped from the final result. | ||
* Cognitive Services | ||
1. `RecognitionTriggeredEvent` | ||
2. `ListeningStartedEvent` | ||
3. `ConnectingToServiceEvent` | ||
4. `RecognitionStartedEvent` | ||
5. `SpeechHypothesisEvent` (could be more than one) | ||
6. `SpeechEndDetectedEvent` | ||
7. `SpeechDetailedPhraseEvent(Result.RecognitionStatus = 'NoMatch')` | ||
8. `RecognitionEndedEvent` | ||
* Web Speech API | ||
1. `start` | ||
2. `audiostart` | ||
3. `soundstart` | ||
4. `speechstart` | ||
5. `result` | ||
6. `speechend` | ||
7. `soundend` | ||
8. `audioend` | ||
9. `end` | ||
> Note: the Web Speech API has an `onnomatch` event, but unfortunately, Google Chrome does not fire it. | ||
### Not authorized to use microphone | ||
The user clicks "Deny" on the permission dialog, or no microphone is detected in the system. | ||
* Cognitive Services | ||
1. `RecognitionTriggeredEvent` | ||
2. `RecognitionEndedEvent(Result.RecognitionStatus = 'AudioSourceError')` | ||
* Web Speech API | ||
1. `error(error = 'not-allowed')` | ||
2. `end` | ||
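The scenarios above pair each Cognitive Services `Result.RecognitionStatus` with a Web Speech API error code. The following function is our own summary sketch of those pairings, not the package's actual mapping code:

```javascript
// Sketch summarizing the scenarios above: which Web Speech API error
// code corresponds to each Cognitive Services recognition status.
// Our own illustration, not the package's implementation.
function webSpeechErrorFor(recognitionStatus) {
  switch (recognitionStatus) {
    case 'ConnectError':          return 'network';     // network issues
    case 'InitialSilenceTimeout': return 'no-speech';   // muted or volume too low
    case 'AudioSourceError':      return 'not-allowed'; // mic denied or missing
    case 'NoMatch':               return null;          // ends without an error event
    default:                      return null;
  }
}

webSpeechErrorFor('ConnectError'); // → 'network'
```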
# Known issues | ||
@@ -263,5 +127,10 @@ | ||
* [ ] Add grammar list | ||
* [ ] Add tests for lifecycle events | ||
* [ ] Investigate continuous mode | ||
* Speech recognition | ||
* [ ] Add grammar list | ||
* [ ] Add tests for lifecycle events | ||
* [ ] Investigate continuous mode | ||
* [ ] Enable Opus (OGG) encoding | ||
* Currently, there is a problem with `microsoft-speech-browser-sdk@0.0.12`, tracked in [this issue](https://github.com/Azure-Samples/SpeechToText-WebSockets-Javascript/issues/88) | ||
* Speech synthesis | ||
* No plan | ||
@@ -268,0 +137,0 @@ # Contributions |
Major refactor
Supply chain risk: Package has recently undergone a major refactor. It may be unstable or indicate significant internal changes. Use caution when updating to versions that include significant changes.
Found 1 instance in 1 package
Network access
Supply chain risk: This module accesses the network.
Found 1 instance in 1 package
No v1
Quality: Package is not semver >=1. This means it is not stable and does not support ^ ranges.
Found 1 instance in 1 package