# react-native-deepgram

react-native-deepgram brings Deepgram’s AI to React Native & Expo:
- 🔊 Live Speech-to-Text – capture PCM audio and stream over WebSocket.
- 📄 File Transcription – POST audio blobs/URIs and receive a transcript.
- 🎤 Text-to-Speech – generate natural speech with HTTP synthesis + WebSocket streaming.
- 🧠 Text Intelligence – summarise, detect topics, intents & sentiment.
- 🛠️ Management API – list models, keys, usage, projects & more.
- ⚙️ Expo config plugin – automatic native setup (managed or bare workflow).
## Installation

```sh
yarn add react-native-deepgram
# or
npm install react-native-deepgram
```

### iOS (CocoaPods)

```sh
cd ios && pod install
```
### Expo

Add the config plugin to your Expo config, then prebuild:

```js
// app.config.js
module.exports = {
  expo: {
    plugins: [
      [
        'react-native-deepgram',
        {
          microphonePermission:
            'Allow $(PRODUCT_NAME) to access your microphone.',
        },
      ],
    ],
  },
};
```

```sh
npx expo prebuild
npx expo run:ios
```
## Configuration

```ts
import { configure } from 'react-native-deepgram';

configure({ apiKey: 'YOUR_DEEPGRAM_API_KEY' });
```

> **Heads-up 🔐** The Management API needs a key with management scopes. Don't ship production keys in a public repo; use environment variables, Expo secrets, or your own backend.
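If you load the key from the environment instead of hard-coding it, a minimal sketch (assuming an Expo `EXPO_PUBLIC_*` variable defined in `.env`; note that anything inlined into the JS bundle is still extractable, so production traffic should ideally go through your own backend):

```typescript
import { configure } from 'react-native-deepgram';

// Assumes EXPO_PUBLIC_DEEPGRAM_API_KEY is set in .env (Expo SDK 49+).
// EXPO_PUBLIC_* variables are inlined into the bundle at build time.
const apiKey = process.env.EXPO_PUBLIC_DEEPGRAM_API_KEY;
if (!apiKey) {
  throw new Error('Missing EXPO_PUBLIC_DEEPGRAM_API_KEY');
}
configure({ apiKey });
```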
## Hooks at a glance

| Hook | Purpose |
| --- | --- |
| `useDeepgramSpeechToText` | Live mic streaming + file transcription |
| `useDeepgramTextToSpeech` | Text-to-speech synthesis + streaming |
| `useDeepgramTextIntelligence` | NLP analysis (summaries, topics, sentiment, intents) |
| `useDeepgramManagement` | Full Management REST wrapper |
## useDeepgramSpeechToText

### Example – live streaming

```tsx
import { Button } from 'react-native';
import { useDeepgramSpeechToText } from 'react-native-deepgram';

const { startListening, stopListening } = useDeepgramSpeechToText({
  onTranscript: console.log,
});

<Button title="Start" onPress={startListening} />
<Button title="Stop" onPress={stopListening} />
```
### Example – file transcription

```tsx
// Assumes expo-document-picker for file selection.
import * as DocumentPicker from 'expo-document-picker';
import { useDeepgramSpeechToText } from 'react-native-deepgram';

const { transcribeFile } = useDeepgramSpeechToText({
  onTranscribeSuccess: console.log,
});

const pickFile = async () => {
  const f = await DocumentPicker.getDocumentAsync({ type: 'audio/*' });
  if (f.type === 'success') await transcribeFile(f);
};
```
### Properties

| Prop | Type | Description | Default |
| --- | --- | --- | --- |
| `onBeforeStart` | `() => void` | Called before any setup (e.g. permission prompt) | – |
| `onStart` | `() => void` | Fires once the WebSocket connection opens | – |
| `onTranscript` | `(transcript: string) => void` | Called on every transcript update (partial & final) | – |
| `onError` | `(error: unknown) => void` | Called on any streaming error | – |
| `onEnd` | `() => void` | Fires when the session ends / WebSocket closes | – |
| `onBeforeTranscribe` | `() => void` | Called before file transcription begins | – |
| `onTranscribeSuccess` | `(transcript: string) => void` | Called with the final transcript of the file | – |
| `onTranscribeError` | `(error: unknown) => void` | Called if file transcription fails | – |
### Methods

| Method | Signature | Description |
| --- | --- | --- |
| `startListening` | `() => Promise<void>` | Begin mic capture and stream audio to Deepgram |
| `stopListening` | `() => Promise<void>` | Stop capture and close the WebSocket |
| `transcribeFile` | `(file: Blob \| { uri: string; name?: string; type?: string }) => Promise<void>` | Upload an audio file and receive its transcript via callbacks |
### Types

```ts
export type UseDeepgramSpeechToTextProps = {
  onBeforeStart?: () => void;
  onStart?: () => void;
  onTranscript?: (transcript: string) => void;
  onError?: (error: unknown) => void;
  onEnd?: () => void;
  onBeforeTranscribe?: () => void;
  onTranscribeSuccess?: (transcript: string) => void;
  onTranscribeError?: (error: unknown) => void;
};

export type UseDeepgramSpeechToTextReturn = {
  startListening: () => Promise<void>;
  stopListening: () => Promise<void>;
  transcribeFile: (
    file: Blob | { uri: string; name?: string; type?: string }
  ) => Promise<void>;
};
```
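`transcribeFile` accepts either a `Blob` or a `{ uri, name, type }` descriptor. A sketch of building the descriptor from a local file URI; the `toAudioFile` helper and its fallback values are illustrative, not part of the library:

```typescript
// Build a transcribeFile descriptor from a local file URI.
// The MIME type is guessed from the extension; 'audio/wav' is an
// assumed fallback for extension-less paths.
function toAudioFile(uri: string): { uri: string; name: string; type: string } {
  const name = uri.split('/').pop() ?? 'audio.wav';
  const ext = name.includes('.') ? name.split('.').pop()!.toLowerCase() : 'wav';
  return { uri, name, type: `audio/${ext}` };
}
```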
## useDeepgramTextToSpeech

### Example – one-shot synthesis

```tsx
import { Button } from 'react-native';
import { useDeepgramTextToSpeech } from 'react-native-deepgram';

const { synthesize } = useDeepgramTextToSpeech({
  onSynthesizeSuccess: () => console.log('Audio played successfully'),
  onSynthesizeError: (error) => console.error('TTS error:', error),
});

<Button
  title="Speak Text"
  onPress={() => synthesize('Hello from Deepgram!')}
/>;
```
### Example – streaming with continuous text

```tsx
import { Button } from 'react-native';
import { useDeepgramTextToSpeech } from 'react-native-deepgram';

const { startStreaming, sendText, stopStreaming } = useDeepgramTextToSpeech({
  onStreamStart: () => console.log('Stream started'),
  onStreamEnd: () => console.log('Stream ended'),
  onStreamError: (error) => console.error('Stream error:', error),
});

<Button
  title="Start Stream"
  onPress={() => startStreaming('This is the first message.')}
/>
<Button
  title="Send More Text"
  onPress={() => sendText('And this is a follow-up message.')}
/>
<Button title="Stop Stream" onPress={stopStreaming} />
```
### Properties

| Prop | Type | Description | Default |
| --- | --- | --- | --- |
| `onBeforeSynthesize` | `() => void` | Called before HTTP synthesis begins | – |
| `onSynthesizeSuccess` | `(audio: ArrayBuffer) => void` | Called when HTTP synthesis completes successfully | – |
| `onSynthesizeError` | `(error: unknown) => void` | Called if HTTP synthesis fails | – |
| `onBeforeStream` | `() => void` | Called before the WebSocket stream starts | – |
| `onStreamStart` | `() => void` | Called when the WebSocket connection opens | – |
| `onAudioChunk` | `(chunk: ArrayBuffer) => void` | Called for each audio chunk received via WebSocket | – |
| `onStreamError` | `(error: unknown) => void` | Called on WebSocket streaming errors | – |
| `onStreamEnd` | `() => void` | Called when the WebSocket stream ends | – |
| `options` | `UseDeepgramTextToSpeechOptions` | TTS configuration options | `{}` |
### Methods

| Method | Signature | Description |
| --- | --- | --- |
| `synthesize` | `(text: string) => Promise<void>` | Generate and play audio for text via the HTTP API (one-shot) |
| `startStreaming` | `(text: string) => Promise<void>` | Start a WebSocket stream and send the initial text |
| `sendText` | `(text: string) => boolean` | Send additional text to the active WebSocket stream |
| `stopStreaming` | `() => void` | Close the WebSocket stream and stop audio playback |
### Options

| Option | Type | Description | Default |
| --- | --- | --- | --- |
| `model` | `string` | TTS model to use | `'aura-2-thalia-en'` |
| `sampleRate` | `number` | Audio sample rate (8000, 16000, 24000, etc.) | `16000` |
| `bitRate` | `number` | Audio bit rate | – |
| `callback` | `string` | Webhook URL for completion notifications | – |
| `callbackMethod` | `'POST' \| 'PUT'` | HTTP method for the webhook | – |
| `mipOptOut` | `boolean` | Opt out of the Model Improvement Program | – |
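These options correspond to query parameters on Deepgram's `/v1/speak` endpoint. A sketch of how the camelCase options might serialize to the wire format; the `toSpeakQuery` helper (and its camelCase → snake_case mapping) is illustrative, not exported by the library:

```typescript
// Illustrative: serialize hook options into /v1/speak query parameters.
function toSpeakQuery(opts: {
  model?: string;
  sampleRate?: number;
  bitRate?: number;
  callback?: string;
  callbackMethod?: string;
  mipOptOut?: boolean;
}): string {
  const params = new URLSearchParams();
  if (opts.model) params.set('model', opts.model);
  if (opts.sampleRate) params.set('sample_rate', String(opts.sampleRate));
  if (opts.bitRate) params.set('bit_rate', String(opts.bitRate));
  if (opts.callback) params.set('callback', opts.callback);
  if (opts.callbackMethod) params.set('callback_method', opts.callbackMethod);
  if (opts.mipOptOut !== undefined)
    params.set('mip_opt_out', String(opts.mipOptOut));
  return params.toString();
}
```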
### Types

```ts
export interface UseDeepgramTextToSpeechOptions {
  model?: string;
  sampleRate?: number;
  bitRate?: number;
  callback?: string;
  callbackMethod?: 'POST' | 'PUT' | string;
  mipOptOut?: boolean;
}

export interface UseDeepgramTextToSpeechProps {
  onBeforeSynthesize?: () => void;
  onSynthesizeSuccess?: (audio: ArrayBuffer) => void;
  onSynthesizeError?: (error: unknown) => void;
  onBeforeStream?: () => void;
  onStreamStart?: () => void;
  onAudioChunk?: (chunk: ArrayBuffer) => void;
  onStreamError?: (error: unknown) => void;
  onStreamEnd?: () => void;
  options?: UseDeepgramTextToSpeechOptions;
}

export interface UseDeepgramTextToSpeechReturn {
  synthesize: (text: string) => Promise<void>;
  startStreaming: (text: string) => Promise<void>;
  sendText: (text: string) => boolean;
  stopStreaming: () => void;
}
```
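If you consume raw audio yourself via `onAudioChunk` (for example, to save the stream to a file rather than play it), you will usually need to join the chunks first. A minimal sketch; the `concatAudioChunks` helper is ours, not part of the library:

```typescript
// Join ArrayBuffer chunks (as delivered to onAudioChunk) into one buffer.
function concatAudioChunks(chunks: ArrayBuffer[]): ArrayBuffer {
  const total = chunks.reduce((sum, c) => sum + c.byteLength, 0);
  const out = new Uint8Array(total);
  let offset = 0;
  for (const c of chunks) {
    out.set(new Uint8Array(c), offset);
    offset += c.byteLength;
  }
  return out.buffer;
}
```

Collect chunks in a ref or array inside `onAudioChunk`, then call the helper once `onStreamEnd` fires.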
## useDeepgramTextIntelligence

### Example

```ts
import { useDeepgramTextIntelligence } from 'react-native-deepgram';

const { analyze } = useDeepgramTextIntelligence({
  options: { summarize: true, topics: true, sentiment: true },
  onAnalyzeSuccess: console.log,
});

await analyze({ text: 'React Native makes mobile easy.' });
```
### Properties

| Prop | Type | Description | Default |
| --- | --- | --- | --- |
| `onBeforeAnalyze` | `() => void` | Called before analysis begins (e.g. show a spinner) | – |
| `onAnalyzeSuccess` | `(results: any) => void` | Called with the analysis results on success | – |
| `onAnalyzeError` | `(error: Error) => void` | Called if the analysis request fails | – |
| `options` | `UseDeepgramTextIntelligenceOptions` | Which NLP tasks to run | `{}` |
### Methods

| Method | Signature | Description |
| --- | --- | --- |
| `analyze` | `(input: { text?: string; url?: string }) => Promise<void>` | Send raw text (or a URL) to Deepgram for processing |
### Types

```ts
export interface UseDeepgramTextIntelligenceOptions {
  summarize?: boolean;
  topics?: boolean;
  intents?: boolean;
  sentiment?: boolean;
  language?: string;
  customTopic?: string | string[];
  customTopicMode?: 'extended' | 'strict';
  callback?: string;
  callbackMethod?: 'POST' | 'PUT' | string;
}

export interface UseDeepgramTextIntelligenceReturn {
  analyze: (input: { text?: string; url?: string }) => Promise<void>;
}
```
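The input type suggests `analyze` expects either inline `text` or a hosted `url`, presumably not both at once. A sketch of the exclusive-or check you might apply before calling it; the guard is illustrative, not part of the library:

```typescript
// Exactly one of text / url should be provided to analyze().
function isValidAnalyzeInput(input: { text?: string; url?: string }): boolean {
  return (input.text !== undefined) !== (input.url !== undefined);
}
```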
## useDeepgramManagement

### Example

```ts
import { useDeepgramManagement } from 'react-native-deepgram';

const dg = useDeepgramManagement();
const projects = await dg.projects.list();
console.log(
  'Projects:',
  projects.map((p) => p.name)
);
```
### Properties
This hook accepts no props – simply call it to receive a typed client.
### Methods (snapshot)

| Resource | Methods |
| --- | --- |
| `models` | `list(includeOutdated?)`, `get(modelId)` |
| `projects` | `list()`, `get(id)`, `delete(id)`, `patch(id, body)`, `listModels(id)`, `getModel(projectId, modelId)` |
| `keys` | `list(projectId)`, `create(projectId, body)`, `get(projectId, keyId)`, `delete(projectId, keyId)` |
| `usage` | `listRequests(projectId)`, `getRequest(projectId, requestId)`, `listFields(projectId)`, `getBreakdown(projectId)` |
| `balances` | `list(projectId)`, `get(projectId, balanceId)` |

(Plus helpers for `members`, `scopes`, `invitations`, and `purchases`.)
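These methods map onto Deepgram's Management REST paths under `https://api.deepgram.com/v1` (for example, `/projects/{projectId}/keys/{keyId}`). A sketch of that path construction, which can help if you need an endpoint the wrapper doesn't cover; the `keyUrl` helper is illustrative, not exported by the library:

```typescript
// Illustrative: build Deepgram Management API v1 URLs for key endpoints.
const DEEPGRAM_API = 'https://api.deepgram.com/v1';

function keyUrl(projectId: string, keyId?: string): string {
  const base = `${DEEPGRAM_API}/projects/${encodeURIComponent(projectId)}/keys`;
  return keyId ? `${base}/${encodeURIComponent(keyId)}` : base;
}
```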
### Types

```ts
export interface UseDeepgramManagementReturn {
  models: {
    list(includeOutdated?: boolean): Promise<DeepgramListModelsResponse>;
    get(modelId: string): Promise<DeepgramSttModel | DeepgramTtsModel>;
  };
  projects: {
    list(): Promise<DeepgramProject[]>;
  };
  // Abridged: see the Methods snapshot above for the full surface.
}
```
## Example app

```sh
git clone https://github.com/itsRares/react-native-deepgram
cd react-native-deepgram/example
yarn && yarn start
```
## Roadmap
- ✅ Speech-to-Text (WebSocket + REST)
- ✅ Text-to-Speech (HTTP synthesis + WebSocket streaming)
- ✅ Text Intelligence (summaries, topics, sentiment, intents)
- ✅ Management API wrapper
- 🚧 Detox E2E tests for the example app
## Contributing

Issues and PRs are welcome; see CONTRIBUTING.md.
## License

MIT