
@aws-sdk/client-transcribe-streaming

Package description

What is @aws-sdk/client-transcribe-streaming?

@aws-sdk/client-transcribe-streaming is an AWS SDK for JavaScript package that allows developers to use Amazon Transcribe Streaming, a service that provides real-time speech-to-text capabilities. This package enables applications to transcribe audio streams into text in real time, which is useful for applications such as live captioning, real-time transcription, and voice command processing.

What are @aws-sdk/client-transcribe-streaming's main functionalities?

Real-time Transcription

This feature allows you to start a real-time transcription session. The code sample demonstrates how to set up a TranscribeStreamingClient, create an audio stream, and start the transcription process. The transcription results are logged to the console as they are received.

const { TranscribeStreamingClient, StartStreamTranscriptionCommand } = require('@aws-sdk/client-transcribe-streaming');
const { Readable } = require('stream');

const client = new TranscribeStreamingClient({ region: 'us-west-2' });

const audioStream = new Readable();
audioStream._read = () => {}; // No-op; audio data is pushed in externally

const command = new StartStreamTranscriptionCommand({
  LanguageCode: 'en-US',
  MediaSampleRateHertz: 16000,
  MediaEncoding: 'pcm',
  // The SDK expects an async iterable of AudioEvent objects,
  // so wrap each raw chunk in the AudioEvent shape
  AudioStream: (async function* () {
    for await (const chunk of audioStream) {
      yield { AudioEvent: { AudioChunk: chunk } };
    }
  })()
});

// The following should run inside an async function
const response = await client.send(command);

audioStream.push(audioBuffer); // Push audio data to the stream

// TranscriptResultStream is an async iterable, not an EventEmitter
for await (const event of response.TranscriptResultStream) {
  if (event.TranscriptEvent) {
    console.log(event.TranscriptEvent.Transcript.Results);
  }
}

Handling Transcription Events

This feature demonstrates how to handle transcription events. The code sample shows how to process partial and final transcription results from the TranscriptResultStream.

for await (const event of response.TranscriptResultStream) {
  if (!event.TranscriptEvent) continue;
  event.TranscriptEvent.Transcript.Results.forEach(result => {
    if (result.IsPartial) {
      console.log('Partial transcript:', result.Alternatives[0].Transcript);
    } else {
      console.log('Final transcript:', result.Alternatives[0].Transcript);
    }
  });
}

Stopping the Transcription

This feature shows how to stop the transcription process by signaling the end of the audio stream. The code sample demonstrates how to push a null value to the audio stream to indicate that no more audio data will be sent.

audioStream.push(null); // Signal the end of the audio stream


Changelog


3.608.0 (2024-07-01)

Features

  • client-api-gateway: Add v2 smoke tests and smithy smokeTests trait for SDK testing. (af0513b)
  • client-cognito-identity: Add v2 smoke tests and smithy smokeTests trait for SDK testing. (3927da6)
  • client-connect: Authentication profiles are Amazon Connect resources (in gated preview) that allow you to configure authentication settings for users in your contact center. This release adds support for new ListAuthenticationProfiles, DescribeAuthenticationProfile and UpdateAuthenticationProfile APIs. (67d4def)
  • client-docdb: Add v2 smoke tests and smithy smokeTests trait for SDK testing. (62d34b8)
  • client-eks: Updates EKS managed node groups to support EC2 Capacity Blocks for ML (3293ed2)
  • client-payment-cryptography-data: Adding support for dynamic keys for encrypt, decrypt, re-encrypt and translate pin functions. With this change, customers can use one-time TR-31 keys directly in dataplane operations without the need to first import them into the service. (da1e387)
  • client-payment-cryptography: Added further restrictions on logging of potentially sensitive inputs and outputs. (66a9332)
  • client-sfn: Add v2 smoke tests and smithy smokeTests trait for SDK testing. (fe5f536)
  • client-swf: Add v2 smoke tests and smithy smokeTests trait for SDK testing. (03d945e)
  • clients: update client endpoints as of 2024-07-01 (4cc8858)

Readme


@aws-sdk/client-transcribe-streaming


Introduction

Amazon Transcribe streaming enables you to send an audio stream and receive back a stream of text in real time. The API makes it easy for developers to add real-time speech-to-text capability to their applications. It can be used for a variety of purposes. For example:

  • Streaming transcriptions can generate real-time subtitles for live broadcast media.
  • Lawyers can make real-time annotations on top of streaming transcriptions during courtroom depositions.
  • Video game chat can be transcribed in real time so that hosts can moderate content or run real-time analysis.
  • Streaming transcriptions can provide assistance to the hearing impaired.

The JavaScript SDK Transcribe Streaming client encapsulates the API into a JavaScript library that can run in browsers, Node.js, and potentially React Native. By default, the client uses an HTTP/2 connection in Node.js, and a WebSocket connection in browsers and React Native.

Installing

To install this package, simply add or install @aws-sdk/client-transcribe-streaming using your favorite package manager:

  • npm install @aws-sdk/client-transcribe-streaming
  • yarn add @aws-sdk/client-transcribe-streaming
  • pnpm add @aws-sdk/client-transcribe-streaming

Getting Started

In the sections below, we will explain the library through an example of using the StartStreamTranscription operation to transcribe English speech to text.

If you haven't already, please read the root README for guidance on creating a sample application and installing dependencies. After installation, you can import the Transcribe Streaming client in index.js like:

// CommonJS example
const { TranscribeStreamingClient, StartStreamTranscriptionCommand } = require("@aws-sdk/client-transcribe-streaming");

If require is not available on the platform you are working on (e.g. browsers), you can import the client like:

// ES module example
import {
  TranscribeStreamingClient,
  StartStreamTranscriptionCommand,
} from "@aws-sdk/client-transcribe-streaming";

Constructing the Service Client

You can create a service client like below:

const client = new TranscribeStreamingClient({
  region,
  credentials,
});
// region and credentials are optional in Node.js

Acquire Speech Stream

The Transcribe Streaming client accepts streaming speech input as an async iterable. You can construct one either from an async generator or from any object that implements Symbol.asyncIterator to emit binary chunks.

Here's an example using an async generator:

// `device` is a hypothetical audio source with start/read methods
const audioSource = async function* () {
  await device.start();
  while (device.ends !== true) {
    const chunk = await device.read();
    yield chunk; /* yield binary chunk */
  }
};

Then you need to wrap each binary chunk into the audio chunk shape that the SDK can recognize:

const audioStream = async function* () {
  for await (const chunk of audioSource()) {
    yield { AudioEvent: { AudioChunk: chunk } };
  }
};
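Since the SDK-facing shape is just plain objects, the wrapping step can be sketched and run standalone. In the sketch below, the audioSource generator is a stubbed stand-in for a real device, yielding two fake binary chunks:

```javascript
// Stubbed audio source yielding two fake binary chunks
const audioSource = async function* () {
  yield Buffer.from([0x01, 0x02]);
  yield Buffer.from([0x03, 0x04]);
};

// Wrap each chunk into the AudioEvent shape the SDK expects
const audioStream = async function* () {
  for await (const chunk of audioSource()) {
    yield { AudioEvent: { AudioChunk: chunk } };
  }
};

(async () => {
  for await (const event of audioStream()) {
    console.log(event.AudioEvent.AudioChunk.length); // prints 2 for each chunk
  }
})();
```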

Acquire from Node.js Stream API

In Node.js you will mostly acquire the speech through the Stream API, from an HTTP request or from a device. The Stream API in Node.js (>= 10.0.0) is itself an async iterable, so you can supply the stream to the SDK input without explicit conversion. You only need to wrap each chunk into the audio chunk shape that the SDK can recognize:

const audioSource = req; // the incoming HTTP request (an http.IncomingMessage)
const audioStream = async function* () {
  for await (const payloadChunk of audioSource) {
    yield { AudioEvent: { AudioChunk: payloadChunk } };
  }
};

If you don't limit the chunk size on the client side (for example, when streaming from fs), you might see a "The chunk is too big" error from Transcribe Streaming. You can solve it by setting the highWaterMark:

const { PassThrough } = require("stream");
const { createReadStream } = require("fs");
const audioSource = createReadStream("path/to/speech.wav");
const audioPayloadStream = new PassThrough({ highWaterMark: 1 * 1024 }); // Stream chunk less than 1 KB
audioSource.pipe(audioPayloadStream);
const audioStream = async function* () {
  for await (const payloadChunk of audioPayloadStream) {
    yield { AudioEvent: { AudioChunk: payloadChunk } };
  }
};

Depending on the audio source, you may need to PCM encode your audio chunks.

Acquire from Browsers

The Transcribe Streaming SDK client also supports streaming from browsers. You can acquire microphone data through the getUserMedia API. Note that this API is supported by a subset of browsers. Here's a code snippet that acquires the microphone audio stream using microphone-stream:

const mic = require("microphone-stream").default; // MicrophoneStream class (no `.default` on older versions)
const micStream = new mic();
// this part should be put into an async function
micStream.setStream(
  await window.navigator.mediaDevices.getUserMedia({
    video: false,
    audio: true,
  })
);
const audioStream = async function* () {
  for await (const chunk of micStream) {
    yield { AudioEvent: { AudioChunk: pcmEncodeChunk(chunk) /* pcm Encoding is optional depending on the source */ } };
  }
};

You can find a full front-end example here.

PCM encoding

Currently the Transcribe Streaming service only accepts PCM encoding. If your audio source is not already PCM encoded, you need to encode the chunks yourself. Here's an example:

const pcmEncodeChunk = (chunk) => {
  const input = mic.toRaw(chunk); // convert to a raw Float32Array of samples
  const buffer = new ArrayBuffer(input.length * 2);
  const view = new DataView(buffer);
  for (let i = 0; i < input.length; i++) {
    const s = Math.max(-1, Math.min(1, input[i]));
    view.setInt16(i * 2, s < 0 ? s * 0x8000 : s * 0x7fff, true);
  }
  return Buffer.from(buffer);
};
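The same float-to-16-bit-PCM conversion can be exercised standalone on a plain Float32Array (independent of microphone-stream), which makes the clamping and scaling behavior easy to verify:

```javascript
// Convert a Float32Array of samples in [-1, 1] to 16-bit little-endian PCM
const pcmEncode = (input) => {
  const buffer = new ArrayBuffer(input.length * 2);
  const view = new DataView(buffer);
  for (let i = 0; i < input.length; i++) {
    const s = Math.max(-1, Math.min(1, input[i])); // clamp to [-1, 1]
    view.setInt16(i * 2, s < 0 ? s * 0x8000 : s * 0x7fff, true);
  }
  return Buffer.from(buffer);
};

const encoded = pcmEncode(Float32Array.from([0, 1, -1]));
console.log(encoded.length); // 6 (2 bytes per sample)
```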

Send the Speech Stream

const command = new StartStreamTranscriptionCommand({
  // The language code for the input audio. Valid values are en-GB, en-US, es-US, fr-CA, and fr-FR
  LanguageCode: "en-US",
  // The encoding used for the input audio. The only valid value is pcm.
  MediaEncoding: "pcm",
  // The sample rate of the input audio in Hertz. We suggest that you use 8000 Hz for low-quality audio and 16000 Hz for
  // high-quality audio. The sample rate must match the sample rate in the audio file.
  MediaSampleRateHertz: 44100,
  AudioStream: audioStream(),
});
const response = await client.send(command);

Handling Text Stream

If the request succeeds, you will get a response containing the streaming transcript. Just like the input speech stream, the transcript stream is an async iterable that emits partial transcripts. Here is a code snippet for accessing the transcripts:

// This snippet should be put into an async function
for await (const event of response.TranscriptResultStream) {
  if (event.TranscriptEvent) {
    // Get multiple possible results
    const results = event.TranscriptEvent.Transcript.Results;
    // Print all the possible transcripts
    results.forEach((result) => {
      (result.Alternatives || []).forEach((alternative) => {
        const transcript = alternative.Items.map((item) => item.Content).join(" ");
        console.log(transcript);
      });
    });
  }
}

Pipe Transcripts Stream

In Node.js, you can easily pipe this TranscriptResultStream to other destinations with the Readable.from API:

const { Readable } = require("stream");
const transcriptsStream = Readable.from(response.TranscriptResultStream);
transcriptsStream.pipe(/* some destinations */);

Error Handling

If you are using async/await style code, you can catch errors with a try...catch block. There are two categories of exceptions that can be thrown:

  • Immediate exceptions thrown before transcription is started, like signature exceptions, invalid parameters exceptions, and network errors;
  • Streaming exceptions that happen after transcription has started, like InternalFailureException or ConflictException.

For immediate exceptions, the SDK client will retry the request if the error is retryable, like network errors. You can configure the client's retry behavior to suit your needs.

For streaming exceptions, because the streaming transcription has already started, the client cannot retry the request automatically. The client throws these exceptions so that you can handle the stream behavior accordingly.
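To illustrate the distinction, here is a hedged sketch of retrying only the pre-stream send path. The fakeSend stub and its retryable flag are illustrative stand-ins, not the SDK's actual retry machinery (which is built in and configurable):

```javascript
// Hypothetical sketch: retry the initial send a few times, but never the stream itself
const sendWithRetry = async (send, maxAttempts = 3) => {
  for (let attempt = 1; ; attempt++) {
    try {
      return await send(); // immediate exceptions surface here
    } catch (err) {
      // `retryable` is an illustrative flag, not a real SDK property
      if (attempt >= maxAttempts || !err.retryable) throw err;
    }
  }
};

// Usage sketch (stubbed): fails twice with a retryable error, then succeeds
let calls = 0;
const fakeSend = async () => {
  calls += 1;
  if (calls < 3) {
    const err = new Error("network error");
    err.retryable = true;
    throw err;
  }
  return { ok: true };
};

sendWithRetry(fakeSend).then((res) => console.log(res.ok, "after", calls, "attempts"));
```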

Here's an example of error handling flow:

const { InternalFailureException, ConflictException } = require("@aws-sdk/client-transcribe-streaming");

try {
  const response = await client.send(command);
  await handleResponse(response);
} catch (e) {
  if (e instanceof InternalFailureException) {
    /* handle InternalFailureException */
  } else if (e instanceof ConflictException) {
    /* handle ConflictException */
  }
} finally {
  /* clean up resources like the input stream */
}

Notes for React Native

This package is compatible with React Native (>= 0.60). However, it has not been tested with any React Native libraries that convert microphone recordings into streaming data. Community input on integrating streaming microphone data is welcome.

Thank you for reading this guide. If you want to know more about how streams are encoded and how the connection is established, please refer to the Service API guide.

Contributing

This client code is generated automatically. Any modifications will be overwritten the next time the @aws-sdk/client-transcribe-streaming package is updated. To contribute to the client, you can check our generate clients scripts.

License

This SDK is distributed under the Apache License, Version 2.0; see LICENSE for more information.


Last updated on 01 Jul 2024
