web-speech-cognitive-services
Comparing version 4.0.1-master.54dc22a to 4.0.1-master.556e2aa
@@ -23,3 +23,3 @@ # Changelog
- Speech synthesis: Fix [#32](https://github.com/compulim/web-speech-cognitive-services/issues/32), fetch voices from services, in PR [#35](https://github.com/compulim/web-speech-cognitive-services/pull/35)
- Speech synthesis: Fix [#34](https://github.com/compulim/web-speech-cognitive-services/issues/34), in PR [#36](https://github.com/compulim/web-speech-cognitive-services/pull/36) and PR [#XXX](https://github.com/compulim/web-speech-cognitive-services/pull/XXX)
- Speech synthesis: Fix [#34](https://github.com/compulim/web-speech-cognitive-services/issues/34), in PR [#36](https://github.com/compulim/web-speech-cognitive-services/pull/36) and PR [#44](https://github.com/compulim/web-speech-cognitive-services/pull/44)
- Support user-controlled `AudioContext` object to be passed as an option named `audioContext`
@@ -34,9 +34,12 @@ - If no `audioContext` option is passed, will create a new `AudioContext` object on first synthesis
- Use option `speechRecognitionEndpointId`
- Speech synthesis: Fix [#28](https://github.com/compulim/web-speech-cognitive-services/issues/28), support custom voice font, in PR [#41](https://github.com/compulim/web-speech-cognitive-services/pull/41)
- Speech synthesis: Fix [#28](https://github.com/compulim/web-speech-cognitive-services/issues/28) and [#62](https://github.com/compulim/web-speech-cognitive-services/issues/62), support custom voice font, in PR [#41](https://github.com/compulim/web-speech-cognitive-services/pull/41) and PR [#67](https://github.com/compulim/web-speech-cognitive-services/pull/67)
- Use option `speechSynthesisDeploymentId`
- Voice list is only fetched when using subscription key
- Speech synthesis: Fix [#48](https://github.com/compulim/web-speech-cognitive-services/issues/48), support output format through `outputFormat` option, in PR [#49](https://github.com/compulim/web-speech-cognitive-services/pull/49)
- `*`: Fix [#47](https://github.com/compulim/web-speech-cognitive-services/issues/47), add `enableTelemetry` option for disabling collecting telemetry data in Speech SDK, in PR [#51](https://github.com/compulim/web-speech-cognitive-services/pull/51)
- `*`: Fix [#47](https://github.com/compulim/web-speech-cognitive-services/issues/47), add `enableTelemetry` option for disabling collecting telemetry data in Speech SDK, in PR [#51](https://github.com/compulim/web-speech-cognitive-services/pull/51) and PR [#66](https://github.com/compulim/web-speech-cognitive-services/pull/66)
- `*`: Fix [#53](https://github.com/compulim/web-speech-cognitive-services/issues/53), added ESLint, in PR [#54](https://github.com/compulim/web-speech-cognitive-services/pull/54)
- Speech synthesis: Fix [#39](https://github.com/compulim/web-speech-cognitive-services/issues/39), support SSML utterance, in PR [#57](https://github.com/compulim/web-speech-cognitive-services/pull/57)
- Speech recognition: Fix [#59](https://github.com/compulim/web-speech-cognitive-services/issues/59), support `stop()` function by finalizing partial speech, in PR [#60](https://github.com/compulim/web-speech-cognitive-services/pull/60)
- Fix [#67](https://github.com/compulim/web-speech-cognitive-services/issues/67), add warning when using subscription key instead of authorization token, in PR [#69](https://github.com/compulim/web-speech-cognitive-services/pull/69)
- Fix [#70](https://github.com/compulim/web-speech-cognitive-services/issues/70), fetch authorization token before every synthesis, in PR [#71](https://github.com/compulim/web-speech-cognitive-services/pull/71)
@@ -62,2 +65,4 @@ ### Changed
- Fix [#55](https://github.com/compulim/web-speech-cognitive-services/issues/55) and [#63](https://github.com/compulim/web-speech-cognitive-services/issues/63). Moves to [WHATWG `EventTarget` interface](https://dom.spec.whatwg.org/#interface-eventtarget), in PR [#56](https://github.com/compulim/web-speech-cognitive-services/pull/56) and PR [#64](https://github.com/compulim/web-speech-cognitive-services/pull/64)
- Instead of including `event-target-shim@5.0.1`, we are adopting its source code, in PR [#72](https://github.com/compulim/web-speech-cognitive-services/pull/72)
  - This is because the original package requires the browser to support rest/spread operators
@@ -64,0 +69,0 @@ ### Fixed
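Several of the entries above add or change ponyfill options. A minimal sketch of how they might be combined (option names come from the changelog above; `fetchAuthorizationToken` is a hypothetical helper you would supply, and the exact factory signature is assumed rather than confirmed by this diff):

```javascript
// Hypothetical option bag for the ponyfill factory; every option name
// below is taken from the changelog entries above.
function buildPonyfillOptions({ fetchAuthorizationToken, audioContext }) {
  return {
    // #70/#71: passing a function means a fresh token is fetched
    // before every synthesis, so tokens never go stale.
    authorizationToken: fetchAuthorizationToken,
    // User-controlled AudioContext; if omitted, the ponyfill creates
    // one on first synthesis.
    audioContext,
    // #47: opt out of Speech SDK telemetry collection.
    enableTelemetry: false,
    // #48: one of the Speech Services audio output formats.
    outputFormat: 'audio-24khz-160kbitrate-mono-mp3',
    region: 'westus'
  };
}
```

The key design point from #70/#71 is that `authorizationToken` may be a function, turning a one-time value into an on-demand fetch.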
@@ -46,3 +46,3 @@ "use strict";
var VERSION = "4.0.1-master.54dc22a";
var VERSION = "4.0.1-master.556e2aa";
@@ -49,0 +49,0 @@ function buildSpeechResult(transcript, confidence, isFinal) {
@@ -40,5 +40,18 @@ "use strict";
var shouldWarnOnSubscriptionKey = true;
function createSpeechServicesPonyfill() {
var ponyfill = _objectSpread({}, _SpeechToText.default.apply(void 0, arguments), {}, _TextToSpeech.default.apply(void 0, arguments));
var options = arguments.length > 0 && arguments[0] !== undefined ? arguments[0] : {};
if (shouldWarnOnSubscriptionKey && options.subscriptionKey) {
console.warn('web-speech-cognitive-services: In production environment, subscription key should not be used, authorization token should be used instead.');
shouldWarnOnSubscriptionKey = false;
}
for (var _len = arguments.length, args = new Array(_len > 1 ? _len - 1 : 0), _key = 1; _key < _len; _key++) {
args[_key - 1] = arguments[_key];
}
var ponyfill = _objectSpread({}, _SpeechToText.default.apply(void 0, [options].concat(args)), {}, _TextToSpeech.default.apply(void 0, [options].concat(args)));
return _objectSpread({}, ponyfill, {
@@ -54,4 +67,4 @@ then: function then(resolve) {
meta.setAttribute('name', 'web-speech-cognitive-services');
meta.setAttribute('content', "version=".concat("4.0.1-master.54dc22a"));
meta.setAttribute('content', "version=".concat("4.0.1-master.556e2aa"));
document.head.appendChild(meta);
//# sourceMappingURL=SpeechServices.js.map
@@ -28,3 +28,3 @@ "use strict";
var _eventTargetShim = require("event-target-shim");
var _eventTargetShim = require("../../external/event-target-shim");
@@ -139,3 +139,4 @@ var _cognitiveServiceEventResultToWebSpeechRecognitionResultList = _interopRequireDefault(require("./cognitiveServiceEventResultToWebSpeechRecognitionResultList"));
authorizationToken = _ref3.authorizationToken,
enableTelemetry = _ref3.enableTelemetry,
_ref3$enableTelemetry = _ref3.enableTelemetry,
enableTelemetry = _ref3$enableTelemetry === void 0 ? true : _ref3$enableTelemetry,
referenceGrammars = _ref3.referenceGrammars,
@@ -150,6 +151,6 @@ _ref3$region = _ref3.region,
if (!authorizationToken && !subscriptionKey) {
console.warn('Either authorizationToken or subscriptionKey must be specified');
console.warn('web-speech-cognitive-services: Either authorizationToken or subscriptionKey must be specified');
return {};
} else if (!window.navigator.mediaDevices || !window.navigator.mediaDevices.getUserMedia) {
console.warn('This browser does not support WebRTC and it will not work with Cognitive Services Speech Services.');
console.warn('web-speech-cognitive-services: This browser does not support WebRTC and it will not work with Cognitive Services Speech Services.');
return {};
@@ -186,5 +187,6 @@ }
});
});
SpeechRecognizer.enableTelemetry(enableTelemetry);
}); // If enableTelemetry is set to null or non-boolean, we will default to true.
SpeechRecognizer.enableTelemetry(enableTelemetry !== false);
var SpeechRecognition =
@@ -191,0 +193,0 @@ /*#__PURE__*/
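The new default can be read straight off this hunk: `enableTelemetry` now destructures to `true` when absent, and the call site coerces with `!== false`, so only an explicit `false` disables telemetry. The coercion in isolation:

```javascript
// Mirrors SpeechRecognizer.enableTelemetry(enableTelemetry !== false):
// null, undefined, and any non-boolean value all leave telemetry on;
// only an explicit false turns it off.
const telemetryEnabled = enableTelemetry => enableTelemetry !== false;
```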
@@ -24,3 +24,3 @@ "use strict";
var _eventTargetShim = require("event-target-shim");
var _eventTargetShim = require("../../external/event-target-shim");
@@ -35,2 +35,4 @@ var _memoizeOne = _interopRequireDefault(require("memoize-one"));
var _fetchCustomVoices = _interopRequireDefault(require("./fetchCustomVoices"));
var _fetchVoices = _interopRequireDefault(require("./fetchVoices"));
@@ -42,4 +44,6 @@
/* eslint class-methods-use-this: 0 */
// Supported output format can be found at https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/rest-text-to-speech#audio-outputs
var DEFAULT_OUTPUT_FORMAT = 'audio-24khz-160kbitrate-mono-mp3';
var EMPTY_ARRAY = [];
var TOKEN_EXPIRATION = 600000;
@@ -63,6 +67,6 @@ var TOKEN_EARLY_RENEWAL = 60000;
if (!authorizationToken && !subscriptionKey) {
console.warn('Either authorization token or subscription key must be specified');
console.warn('web-speech-cognitive-services: Either authorization token or subscription key must be specified');
return {};
} else if (!ponyfill.AudioContext) {
console.warn('This browser does not support Web Audio and it will not work with Cognitive Services Speech Services.');
console.warn('web-speech-cognitive-services: This browser does not support Web Audio and it will not work with Cognitive Services Speech Services.');
return {};
@@ -81,8 +85,11 @@ }
});
var getAuthorizationTokenPromise = typeof authorizationToken === 'function' ? authorizationToken() : authorizationToken ? authorizationToken : fetchMemoizedAuthorizationToken({
now: Date.now,
region: region,
subscriptionKey: subscriptionKey
});
var getAuthorizationToken = function getAuthorizationToken() {
return typeof authorizationToken === 'function' ? authorizationToken() : authorizationToken ? authorizationToken : fetchMemoizedAuthorizationToken({
now: Date.now,
region: region,
subscriptionKey: subscriptionKey
});
};
var SpeechSynthesis =
@@ -102,3 +109,2 @@ /*#__PURE__*/
});
_this.voices = [];
@@ -118,3 +124,3 @@ _this.updateVoices();
value: function getVoices() {
return this.voices;
return EMPTY_ARRAY;
}
@@ -144,4 +150,4 @@ }, {
utterance.preload({
authorizationTokenPromise: getAuthorizationTokenPromise,
deploymentId: speechSynthesisDeploymentId,
getAuthorizationToken: getAuthorizationToken,
outputFormat: speechSynthesisOutputFormat,
@@ -159,15 +165,21 @@ region: region
/*#__PURE__*/
_regenerator.default.mark(function _callee2() {
_regenerator.default.mark(function _callee3() {
var _this3 = this;
return _regenerator.default.wrap(function _callee2$(_context2) {
return _regenerator.default.wrap(function _callee3$(_context3) {
while (1) {
switch (_context2.prev = _context2.next) {
switch (_context3.prev = _context3.next) {
case 0:
if (speechSynthesisDeploymentId) {
_context2.next = 3;
if (!speechSynthesisDeploymentId) {
_context3.next = 7;
break;
}
_context2.next = 3;
if (!subscriptionKey) {
_context3.next = 5;
break;
}
console.warn('web-speech-cognitive-services: Listing of custom voice models are only available when using subscription key.');
_context3.next = 5;
return (0, _onErrorResumeNext.default)(
@@ -178,2 +190,3 @@ /*#__PURE__*/
_regenerator.default.mark(function _callee() {
var voices;
return _regenerator.default.wrap(function _callee$(_context) {
@@ -183,37 +196,80 @@ while (1) {
case 0:
_context.t0 = _fetchVoices.default;
_context.next = 3;
return getAuthorizationTokenPromise;
_context.next = 2;
return (0, _fetchCustomVoices.default)({
deploymentId: speechSynthesisDeploymentId,
region: region,
subscriptionKey: subscriptionKey
});
case 2:
voices = _context.sent;
_this3.getVoices = function () {
return voices;
};
case 4:
case "end":
return _context.stop();
}
}
}, _callee);
})));
case 5:
_context3.next = 9;
break;
case 7:
_context3.next = 9;
return (0, _onErrorResumeNext.default)(
/*#__PURE__*/
(0, _asyncToGenerator2.default)(
/*#__PURE__*/
_regenerator.default.mark(function _callee2() {
var voices;
return _regenerator.default.wrap(function _callee2$(_context2) {
while (1) {
switch (_context2.prev = _context2.next) {
case 0:
_context2.t0 = _fetchVoices.default;
_context2.next = 3;
return getAuthorizationToken();
case 3:
_context.t1 = _context.sent;
_context.t2 = speechSynthesisDeploymentId;
_context.t3 = region;
_context.t4 = {
authorizationToken: _context.t1,
deploymentId: _context.t2,
region: _context.t3
_context2.t1 = _context2.sent;
_context2.t2 = speechSynthesisDeploymentId;
_context2.t3 = region;
_context2.t4 = {
authorizationToken: _context2.t1,
deploymentId: _context2.t2,
region: _context2.t3
};
_context.next = 9;
return (0, _context.t0)(_context.t4);
_context2.next = 9;
return (0, _context2.t0)(_context2.t4);
case 9:
_this3.voices = _context.sent;
voices = _context2.sent;
_this3.dispatchEvent(new _SpeechSynthesisEvent.default('voiceschanged'));
_this3.getVoices = function () {
return voices;
};
case 11:
case "end":
return _context.stop();
return _context2.stop();
}
}
}, _callee);
}, _callee2);
})));
case 3:
case 9:
this.dispatchEvent(new _SpeechSynthesisEvent.default('voiceschanged'));
case 10:
case "end":
return _context2.stop();
return _context3.stop();
}
}
}, _callee2);
}, _callee3, this);
}));
@@ -220,0 +276,0 @@
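The `getAuthorizationToken` wrapper above replaces a one-shot promise so a token can be re-fetched on demand. Combined with `TOKEN_EXPIRATION` (600000 ms) and `TOKEN_EARLY_RENEWAL` (60000 ms), the caching plausibly behaves like this sketch. This is an illustrative assumption, not the package's actual code: `fetchToken` and the injectable `now` clock are hypothetical, and the real implementation memoizes via `memoize-one`:

```javascript
// Assumed caching behavior: reuse a fetched token until it is within
// TOKEN_EARLY_RENEWAL (1 min) of its TOKEN_EXPIRATION (10 min) lifetime.
const TOKEN_EXPIRATION = 600000;
const TOKEN_EARLY_RENEWAL = 60000;

function createMemoizedTokenFetcher(fetchToken, now = Date.now) {
  let cached = null;
  let fetchedAt = -Infinity;

  return async () => {
    // Re-fetch once the cached token is older than 9 minutes.
    if (now() - fetchedAt > TOKEN_EXPIRATION - TOKEN_EARLY_RENEWAL) {
      cached = await fetchToken();
      fetchedAt = now();
    }
    return cached;
  };
}
```

Renewing one minute early leaves headroom so a synthesis request never starts with a token that expires mid-flight.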
@@ -34,3 +34,3 @@ "use strict";
_regenerator.default.mark(function _callee(_ref) {
var authorizationTokenPromise, deploymentId, _ref$lang, lang, outputFormat, pitch, rate, region, text, _ref$voice, voice, volume, authorizationToken, ssml, url, res;
var deploymentId, getAuthorizationToken, _ref$lang, lang, outputFormat, pitch, rate, region, text, _ref$voice, voice, volume, authorizationToken, ssml, url, res;
@@ -41,3 +41,3 @@ return _regenerator.default.wrap(function _callee$(_context) {
case 0:
authorizationTokenPromise = _ref.authorizationTokenPromise, deploymentId = _ref.deploymentId, _ref$lang = _ref.lang, lang = _ref$lang === void 0 ? DEFAULT_LANGUAGE : _ref$lang, outputFormat = _ref.outputFormat, pitch = _ref.pitch, rate = _ref.rate, region = _ref.region, text = _ref.text, _ref$voice = _ref.voice, voice = _ref$voice === void 0 ? DEFAULT_VOICE : _ref$voice, volume = _ref.volume;
deploymentId = _ref.deploymentId, getAuthorizationToken = _ref.getAuthorizationToken, _ref$lang = _ref.lang, lang = _ref$lang === void 0 ? DEFAULT_LANGUAGE : _ref$lang, outputFormat = _ref.outputFormat, pitch = _ref.pitch, rate = _ref.rate, region = _ref.region, text = _ref.text, _ref$voice = _ref.voice, voice = _ref$voice === void 0 ? DEFAULT_VOICE : _ref$voice, volume = _ref.volume;
@@ -53,3 +53,3 @@ if (text) {
_context.next = 5;
return authorizationTokenPromise;
return getAuthorizationToken();
@@ -56,0 +56,0 @@ case 5:
@@ -25,3 +25,3 @@ "use strict";
_regenerator.default.mark(function _callee(_ref) {
var authorizationToken, deploymentId, region, res, voices;
var authorizationToken, region, res, voices;
return _regenerator.default.wrap(function _callee$(_context) {
@@ -31,5 +31,5 @@ while (1) {
case 0:
authorizationToken = _ref.authorizationToken, deploymentId = _ref.deploymentId, region = _ref.region;
authorizationToken = _ref.authorizationToken, region = _ref.region;
_context.next = 3;
return fetch(deploymentId ? "https://".concat(encodeURI(region), ".voice.speech.microsoft.com/cognitiveservices/voices/list?deploymentId=").concat(encodeURIComponent(deploymentId)) : "https://".concat(encodeURI(region), ".tts.speech.microsoft.com/cognitiveservices/voices/list"), {
return fetch("https://".concat(encodeURI(region), ".tts.speech.microsoft.com/cognitiveservices/voices/list"), {
headers: {
@@ -36,0 +36,0 @@ authorization: "Bearer ".concat(authorizationToken),
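After this change `fetchVoices` only ever hits the standard voice-list endpoint; the `deploymentId` branch moved into the new `fetchCustomVoices` module. The remaining URL construction reduces to:

```javascript
// Standard-voice list endpoint used by fetchVoices after this diff;
// custom-voice model listing now lives in fetchCustomVoices instead.
const buildVoiceListURL = region =>
  `https://${encodeURI(region)}.tts.speech.microsoft.com/cognitiveservices/voices/list`;
```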
@@ -24,3 +24,3 @@ "use strict";
var _eventTargetShim = require("event-target-shim");
var _eventTargetShim = require("../../external/event-target-shim");
@@ -33,4 +33,2 @@ var _eventAsPromise = _interopRequireDefault(require("event-as-promise"));
var _SpeechSynthesisVoice = _interopRequireDefault(require("./SpeechSynthesisVoice"));
var _subscribeEvent = _interopRequireDefault(require("./subscribeEvent"));
@@ -71,12 +69,12 @@
var _default =
var SpeechSynthesisUtterance =
/*#__PURE__*/
function (_EventTarget) {
(0, _inherits2.default)(_default, _EventTarget);
(0, _inherits2.default)(SpeechSynthesisUtterance, _EventTarget);
function _default(text) {
function SpeechSynthesisUtterance(text) {
var _this;
(0, _classCallCheck2.default)(this, _default);
_this = (0, _possibleConstructorReturn2.default)(this, (0, _getPrototypeOf2.default)(_default).call(this));
(0, _classCallCheck2.default)(this, SpeechSynthesisUtterance);
_this = (0, _possibleConstructorReturn2.default)(this, (0, _getPrototypeOf2.default)(SpeechSynthesisUtterance).call(this));
_this._lang = null;
@@ -98,3 +96,3 @@ _this._pitch = 1;
(0, _createClass2.default)(_default, [{
(0, _createClass2.default)(SpeechSynthesisUtterance, [{
key: "preload",
@@ -105,3 +103,3 @@ value: function () {
_regenerator.default.mark(function _callee(_ref2) {
var authorizationTokenPromise, deploymentId, outputFormat, region;
var deploymentId, getAuthorizationToken, outputFormat, region;
return _regenerator.default.wrap(function _callee$(_context) {
@@ -111,6 +109,6 @@ while (1) {
case 0:
authorizationTokenPromise = _ref2.authorizationTokenPromise, deploymentId = _ref2.deploymentId, outputFormat = _ref2.outputFormat, region = _ref2.region;
deploymentId = _ref2.deploymentId, getAuthorizationToken = _ref2.getAuthorizationToken, outputFormat = _ref2.outputFormat, region = _ref2.region;
this.arrayBufferPromise = (0, _fetchSpeechData.default)({
authorizationTokenPromise: authorizationTokenPromise,
deploymentId: deploymentId,
getAuthorizationToken: getAuthorizationToken,
lang: this.lang || window.navigator.language,
@@ -258,6 +256,5 @@ outputFormat: outputFormat,
}]);
return _default;
return SpeechSynthesisUtterance;
}(_eventTargetShim.EventTarget);
exports.default = _default;
(0, _eventTargetShim.defineEventAttribute)(SpeechSynthesisUtterance.prototype, 'boundary');
@@ -270,2 +267,4 @@ (0, _eventTargetShim.defineEventAttribute)(SpeechSynthesisUtterance.prototype, 'end');
(0, _eventTargetShim.defineEventAttribute)(SpeechSynthesisUtterance.prototype, 'start');
var _default = SpeechSynthesisUtterance;
exports.default = _default;
//# sourceMappingURL=SpeechSynthesisUtterance.js.map
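Renaming the anonymous `_default` class to `SpeechSynthesisUtterance` gives `defineEventAttribute` a named prototype to decorate with `onboundary`, `onend`, `onstart`, and friends. A minimal sketch of what `defineEventAttribute` provides (the real implementation is the vendored `event-target-shim` source; this simplified version is for illustration only):

```javascript
// Simplified take on event-target-shim's defineEventAttribute: expose an
// `on<event>` property that wires its value through add/removeEventListener.
function defineEventAttribute(prototype, eventName) {
  Object.defineProperty(prototype, `on${eventName}`, {
    get() {
      return this[`_on${eventName}`] || null;
    },
    set(listener) {
      const previous = this[`_on${eventName}`];
      // Replacing the handler detaches the old one first.
      if (previous) this.removeEventListener(eventName, previous);
      this[`_on${eventName}`] = listener;
      if (listener) this.addEventListener(eventName, listener);
    },
    configurable: true
  });
}
```

This is why moving to the WHATWG `EventTarget` interface (#55/#63) lets both `addEventListener('end', …)` and `utterance.onend = …` styles coexist.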
{
"name": "web-speech-cognitive-services",
"version": "4.0.1-master.54dc22a",
"version": "4.0.1-master.556e2aa",
"description": "Polyfill Web Speech API with Cognitive Services Speech-to-Text service",
@@ -5,0 +5,0 @@ "keywords": [