react-native-deepgram

React Native SDK for Deepgram's AI-powered speech-to-text, real-time transcription, and text intelligence APIs. Supports live audio streaming, file transcription, sentiment analysis, and topic detection for iOS and Android.

Version 0.1.21 (latest) · 155 weekly downloads · 1 maintainer

react-native-deepgram


react-native-deepgram brings Deepgram’s AI to React Native & Expo:

  • 🔊 Live Speech-to-Text – capture PCM audio and stream over WebSocket.
  • 📄 File Transcription – POST audio blobs/URIs and receive a transcript.
  • 🎤 Text-to-Speech – generate natural speech with HTTP synthesis + WebSocket streaming.
  • 🧠 Text Intelligence – summarise, detect topics, intents & sentiment.
  • 🛠️ Management API – list models, keys, usage, projects & more.
  • ⚙️ Expo config plugin – automatic native setup (managed or bare workflow).

Installation

```sh
yarn add react-native-deepgram
# or
npm install react-native-deepgram
```

iOS (CocoaPods)

```sh
cd ios && pod install
```

Expo

```js
// app.config.js
module.exports = {
  expo: {
    plugins: [
      [
        'react-native-deepgram',
        {
          microphonePermission:
            'Allow $(PRODUCT_NAME) to access your microphone.',
        },
      ],
    ],
  },
};
```

```sh
npx expo prebuild
npx expo run:ios   # or npx expo run:android
```

Configuration

```ts
import { configure } from 'react-native-deepgram';

configure({ apiKey: 'YOUR_DEEPGRAM_API_KEY' });
```

Heads‑up 🔐 The Management API needs a key with management scopes.
Don’t ship production keys in a public repo—use environment variables, Expo secrets, or your own backend.
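
One build-time pattern for keeping the key out of source is sketched below; `requireApiKey` and the `EXPO_PUBLIC_DEEPGRAM_API_KEY` variable name are illustrative, not part of this package:

```typescript
// Hypothetical helper: fail fast when the key is missing instead of
// sending unauthenticated requests. Expo inlines EXPO_PUBLIC_* variables
// at build time; any build-time env mechanism works the same way.
export function requireApiKey(env: Record<string, string | undefined>): string {
  const key = env.EXPO_PUBLIC_DEEPGRAM_API_KEY;
  if (!key) {
    throw new Error('EXPO_PUBLIC_DEEPGRAM_API_KEY is not set');
  }
  return key;
}

// In the app: configure({ apiKey: requireApiKey(process.env) });
```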

Hooks at a glance

| Hook | Purpose |
| --- | --- |
| `useDeepgramSpeechToText` | Live mic streaming + file transcription |
| `useDeepgramTextToSpeech` | Text-to-speech synthesis + streaming |
| `useDeepgramTextIntelligence` | NLP analysis (summaries, topics, sentiment, intents) |
| `useDeepgramManagement` | Full Management REST wrapper |

useDeepgramSpeechToText

Example – live streaming

```tsx
import { useDeepgramSpeechToText } from 'react-native-deepgram';

const { startListening, stopListening } = useDeepgramSpeechToText({
  onTranscript: console.log,
});

<Button title="Start" onPress={startListening} />
<Button title="Stop" onPress={stopListening} />
```
Example – file transcription

```tsx
import * as DocumentPicker from 'expo-document-picker';
import { useDeepgramSpeechToText } from 'react-native-deepgram';

const { transcribeFile } = useDeepgramSpeechToText({
  onTranscribeSuccess: console.log,
});

const pickFile = async () => {
  const f = await DocumentPicker.getDocumentAsync({ type: 'audio/*' });
  if (f.type === 'success') await transcribeFile(f);
};
```

Properties

| Name | Type | Description |
| --- | --- | --- |
| `onBeforeStart` | `() => void` | Called before any setup (e.g. the permission prompt) |
| `onStart` | `() => void` | Fires once the WebSocket connection opens |
| `onTranscript` | `(transcript: string) => void` | Called on every transcript update (partial and final) |
| `onError` | `(error: unknown) => void` | Called on any streaming error |
| `onEnd` | `() => void` | Fires when the session ends / the WebSocket closes |
| `onBeforeTranscribe` | `() => void` | Called before file transcription begins |
| `onTranscribeSuccess` | `(transcript: string) => void` | Called with the final transcript of the file |
| `onTranscribeError` | `(error: unknown) => void` | Called if file transcription fails |

Methods

| Name | Signature | Description |
| --- | --- | --- |
| `startListening` | `() => Promise<void>` | Begin mic capture and stream audio to Deepgram |
| `stopListening` | `() => Promise<void>` | Stop capture and close the WebSocket |
| `transcribeFile` | `(file: Blob \| { uri: string; name?: string; type?: string }) => Promise<void>` | Upload an audio file and receive its transcript via callbacks |
Types

```ts
export type UseDeepgramSpeechToTextProps = /* …see above table… */
export type UseDeepgramSpeechToTextReturn = {
  startListening: () => void;
  stopListening: () => void;
  transcribeFile: (
    file: Blob | { uri: string; name?: string; type?: string }
  ) => Promise<void>;
};
```
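
Since `transcribeFile` accepts either a `Blob` or a `{ uri, name?, type? }` object, a small normalizer (hypothetical, not exported by the package) can give URI-based inputs sensible defaults before upload; the `audio/wav` fallbacks are assumptions:

```typescript
type FileInput = Blob | { uri: string; name?: string; type?: string };

// Illustrative helper: Blobs pass through untouched; URI inputs get
// default name/type values so the upload always has a filename and MIME type.
function normalizeFileInput(
  file: FileInput
): Blob | { uri: string; name: string; type: string } {
  if (file instanceof Blob) return file;
  return {
    uri: file.uri,
    name: file.name ?? 'audio.wav',
    type: file.type ?? 'audio/wav',
  };
}
```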

useDeepgramTextToSpeech

Example – one-shot synthesis

```tsx
import { useDeepgramTextToSpeech } from 'react-native-deepgram';

const { synthesize } = useDeepgramTextToSpeech({
  onSynthesizeSuccess: () => console.log('Audio played successfully'),
  onSynthesizeError: (error) => console.error('TTS error:', error),
});

<Button
  title="Speak Text"
  onPress={() => synthesize('Hello from Deepgram!')}
/>
```
Example – streaming with continuous text

```tsx
import { useDeepgramTextToSpeech } from 'react-native-deepgram';

const { startStreaming, sendText, stopStreaming } = useDeepgramTextToSpeech({
  onStreamStart: () => console.log('Stream started'),
  onStreamEnd: () => console.log('Stream ended'),
  onStreamError: (error) => console.error('Stream error:', error),
});

// Start streaming with initial text
<Button
  title="Start Stream"
  onPress={() => startStreaming('This is the first message.')}
/>

// Send additional text to the same stream
<Button
  title="Send More Text"
  onPress={() => sendText('And this is a follow-up message.')}
/>

// Stop the stream
<Button title="Stop Stream" onPress={stopStreaming} />
```

Properties

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `onBeforeSynthesize` | `() => void` | Called before HTTP synthesis begins | |
| `onSynthesizeSuccess` | `(audio: ArrayBuffer) => void` | Called when HTTP synthesis completes successfully | |
| `onSynthesizeError` | `(error: unknown) => void` | Called if HTTP synthesis fails | |
| `onBeforeStream` | `() => void` | Called before the WebSocket stream starts | |
| `onStreamStart` | `() => void` | Called when the WebSocket connection opens | |
| `onAudioChunk` | `(chunk: ArrayBuffer) => void` | Called for each audio chunk received via WebSocket | |
| `onStreamError` | `(error: unknown) => void` | Called on WebSocket streaming errors | |
| `onStreamEnd` | `() => void` | Called when the WebSocket stream ends | |
| `options` | `UseDeepgramTextToSpeechOptions` | TTS configuration options | `{}` |

Methods

| Name | Signature | Description |
| --- | --- | --- |
| `synthesize` | `(text: string) => Promise<void>` | Generate and play audio for text using the HTTP API (one-shot) |
| `startStreaming` | `(text: string) => Promise<void>` | Start a WebSocket stream and send initial text |
| `sendText` | `(text: string) => boolean` | Send additional text to the active WebSocket stream |
| `stopStreaming` | `() => void` | Close the WebSocket stream and stop audio playback |
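
Because `sendText` returns `false` when the stream is not ready to accept input, callers can buffer text and flush it once the stream opens. A minimal sketch (the queue helper is illustrative, not part of the package):

```typescript
// Buffer text while the WebSocket stream is not yet accepting input,
// and flush pending messages in order once sends start succeeding.
function createTextQueue(send: (text: string) => boolean) {
  const pending: string[] = [];
  return {
    enqueue(text: string): void {
      if (!send(text)) pending.push(text);
    },
    flush(): void {
      while (pending.length > 0 && send(pending[0])) {
        pending.shift();
      }
    },
    get pendingCount(): number {
      return pending.length;
    },
  };
}
```

Calling `flush()` from `onStreamStart` delivers queued text as soon as the connection opens.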

Options

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `model` | `string` | TTS model to use | `'aura-2-thalia-en'` |
| `sampleRate` | `number` | Audio sample rate (8000, 16000, 24000, etc.) | `16000` |
| `bitRate` | `number` | Audio bit rate | |
| `callback` | `string` | Webhook URL for completion notifications | |
| `callbackMethod` | `'POST' \| 'PUT'` | HTTP method for the webhook | |
| `mipOptOut` | `boolean` | Opt out of the Model Improvement Program | |
Types

```ts
export interface UseDeepgramTextToSpeechOptions {
  model?: string;
  sampleRate?: number;
  bitRate?: number;
  callback?: string;
  callbackMethod?: 'POST' | 'PUT' | string;
  mipOptOut?: boolean;
}

export interface UseDeepgramTextToSpeechProps {
  onBeforeSynthesize?: () => void;
  onSynthesizeSuccess?: (audio: ArrayBuffer) => void;
  onSynthesizeError?: (error: unknown) => void;
  onBeforeStream?: () => void;
  onStreamStart?: () => void;
  onAudioChunk?: (chunk: ArrayBuffer) => void;
  onStreamError?: (error: unknown) => void;
  onStreamEnd?: () => void;
  options?: UseDeepgramTextToSpeechOptions;
}

export interface UseDeepgramTextToSpeechReturn {
  synthesize: (text: string) => Promise<void>;
  startStreaming: (text: string) => Promise<void>;
  sendText: (text: string) => boolean;
  stopStreaming: () => void;
}
```
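
Deepgram's speech endpoints take these options as snake_case query parameters; a builder along these lines (hypothetical, with parameter names assumed from the option names above) keeps the HTTP and WebSocket paths consistent:

```typescript
interface TtsOptions {
  model?: string;
  sampleRate?: number;
  bitRate?: number;
  callback?: string;
  callbackMethod?: 'POST' | 'PUT' | string;
  mipOptOut?: boolean;
}

// Map camelCase hook options to snake_case query parameters.
// Unset options are simply omitted from the query string.
function buildTtsQuery(options: TtsOptions): string {
  const params = new URLSearchParams();
  if (options.model) params.set('model', options.model);
  if (options.sampleRate) params.set('sample_rate', String(options.sampleRate));
  if (options.bitRate) params.set('bit_rate', String(options.bitRate));
  if (options.callback) params.set('callback', options.callback);
  if (options.callbackMethod) params.set('callback_method', options.callbackMethod);
  if (options.mipOptOut) params.set('mip_opt_out', 'true');
  return params.toString();
}
```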

useDeepgramTextIntelligence

Example

```ts
import { useDeepgramTextIntelligence } from 'react-native-deepgram';

const { analyze } = useDeepgramTextIntelligence({
  options: { summarize: true, topics: true, sentiment: true },
  onAnalyzeSuccess: console.log,
});

await analyze({ text: 'React Native makes mobile easy.' });
```

Properties

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `onBeforeAnalyze` | `() => void` | Called before analysis begins (e.g. show a spinner) | |
| `onAnalyzeSuccess` | `(results: any) => void` | Called with the analysis results on success | |
| `onAnalyzeError` | `(error: Error) => void` | Called if the analysis request fails | |
| `options` | `UseDeepgramTextIntelligenceOptions` | Which NLP tasks to run | `{}` |

Methods

| Name | Signature | Description |
| --- | --- | --- |
| `analyze` | `(input: { text?: string; url?: string }) => Promise<void>` | Send raw text (or a URL) to Deepgram for processing |
Types

```ts
export interface UseDeepgramTextIntelligenceOptions {
  summarize?: boolean;
  topics?: boolean;
  intents?: boolean;
  sentiment?: boolean;
  language?: string;
  customTopic?: string | string[];
  customTopicMode?: 'extended' | 'strict';
  callback?: string;
  callbackMethod?: 'POST' | 'PUT' | string;
}

export interface UseDeepgramTextIntelligenceReturn {
  analyze: (input: { text?: string; url?: string }) => Promise<void>;
}
```
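
The `customTopic` option can be a single string or an array; when building query parameters, the array case is typically sent as repeated keys. A hedged sketch (parameter names assumed from the option names, not taken from this package's source):

```typescript
interface IntelligenceOptions {
  summarize?: boolean;
  topics?: boolean;
  intents?: boolean;
  sentiment?: boolean;
  language?: string;
  customTopic?: string | string[];
  customTopicMode?: 'extended' | 'strict';
}

// Boolean tasks become flag parameters; customTopic is appended once
// per topic, so arrays turn into repeated custom_topic keys.
function buildAnalyzeQuery(options: IntelligenceOptions): string {
  const params = new URLSearchParams();
  for (const flag of ['summarize', 'topics', 'intents', 'sentiment'] as const) {
    if (options[flag]) params.set(flag, 'true');
  }
  if (options.language) params.set('language', options.language);
  const topics = Array.isArray(options.customTopic)
    ? options.customTopic
    : options.customTopic
      ? [options.customTopic]
      : [];
  for (const topic of topics) params.append('custom_topic', topic);
  if (options.customTopicMode) params.set('custom_topic_mode', options.customTopicMode);
  return params.toString();
}
```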

useDeepgramManagement

Example

```ts
import { useDeepgramManagement } from 'react-native-deepgram';

const dg = useDeepgramManagement();

// List all projects linked to the key
const projects = await dg.projects.list();
console.log(
  'Projects:',
  projects.map((p) => p.name)
);
```

Properties

This hook accepts no props – simply call it to receive a typed client.

Methods (snapshot)

| Group | Representative methods |
| --- | --- |
| `models` | `list(includeOutdated?)`, `get(modelId)` |
| `projects` | `list()`, `get(id)`, `delete(id)`, `patch(id, body)`, `listModels(id)`, `getModel(projectId, modelId)` |
| `keys` | `list(projectId)`, `create(projectId, body)`, `get(projectId, keyId)`, `delete(projectId, keyId)` |
| `usage` | `listRequests(projectId)`, `getRequest(projectId, requestId)`, `listFields(projectId)`, `getBreakdown(projectId)` |
| `balances` | `list(projectId)`, `get(projectId, balanceId)` |

(Plus helpers for members, scopes, invitations, and purchases.)

Types

```ts
export interface UseDeepgramManagementReturn {
  models: {
    list(includeOutdated?: boolean): Promise<DeepgramListModelsResponse>;
    get(modelId: string): Promise<DeepgramSttModel | DeepgramTtsModel>;
  };
  projects: {
    list(): Promise<DeepgramProject[]>;
    // …see source for full surface
  };
  // …keys, members, scopes, invitations, usage, balances, purchases
}
```

Example app

```sh
git clone https://github.com/itsRares/react-native-deepgram
cd react-native-deepgram/example
yarn && yarn start   # or expo start
```

Roadmap

  • ✅ Speech-to-Text (WebSocket + REST)
  • ✅ Text-to-Speech (HTTP synthesis + WebSocket streaming)
  • ✅ Text Intelligence (summaries, topics, sentiment, intents)
  • ✅ Management API wrapper
  • 🚧 Detox E2E tests for the example app

Contributing

Issues / PRs welcome—see CONTRIBUTING.md.

License

MIT

Keywords

react-native

Package last updated on 18 Jul 2025