# react-native-deepgram


react-native-deepgram brings Deepgram's AI platform to React Native & Expo.

✅ Supports Speech-to-Text v1 and the new Speech-to-Text v2 (Flux) streaming API alongside Text-to-Speech, Text Intelligence, and the Management API.


## Features

  • 🔊 Live Speech-to-Text – capture PCM audio and stream it over WebSocket (STT v1 or v2/Flux).
  • 📄 File Transcription – send audio files/URIs to Deepgram and receive transcripts.
  • 🎤 Text-to-Speech – synthesize speech with HTTP requests or WebSocket streaming controls.
  • 🗣️ Voice Agent – orchestrate realtime conversational agents with microphone capture + audio playback.
  • 🧠 Text Intelligence – summarisation, topic detection, intents, sentiment and more.
  • 🛠️ Management API – list models, keys, usage, projects, balances, etc.
  • ⚙️ Expo config plugin – automatic native configuration for managed and bare workflows.

## Installation

```sh
yarn add react-native-deepgram
# or
npm install react-native-deepgram
```

### iOS (CocoaPods)

```sh
cd ios && pod install
```

### Expo

```js
// app.config.js
module.exports = {
  expo: {
    plugins: [
      [
        'react-native-deepgram',
        {
          microphonePermission:
            'Allow $(PRODUCT_NAME) to access your microphone.',
        },
      ],
    ],
  },
};
```

```sh
npx expo prebuild
npx expo run:ios   # or expo run:android
```

## Configuration

```ts
import { configure } from 'react-native-deepgram';

configure({ apiKey: 'YOUR_DEEPGRAM_API_KEY' });
```

Heads‑up 🔐 The Management API needs a key with management scopes. Do not ship production keys in source control—prefer environment variables, Expo secrets, or a backend proxy.
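For example, a minimal sketch that reads the key from an Expo public environment variable instead of hard-coding it (the same variable name the example app uses):

```ts
import { configure } from 'react-native-deepgram';

// Read the key from the environment rather than committing it to source control.
const apiKey = process.env.EXPO_PUBLIC_DEEPGRAM_API_KEY;
if (!apiKey) {
  throw new Error('Missing EXPO_PUBLIC_DEEPGRAM_API_KEY');
}

configure({ apiKey });
```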

## Usage overview

| Hook | Purpose |
| --- | --- |
| `useDeepgramVoiceAgent` | Build conversational agents with streaming audio I/O |
| `useDeepgramSpeechToText` | Live microphone streaming and file transcription |
| `useDeepgramTextToSpeech` | Text-to-Speech synthesis (HTTP + WebSocket streaming) |
| `useDeepgramTextIntelligence` | Text analysis (summaries, topics, intents, sentiment) |
| `useDeepgramManagement` | Typed wrapper around the Management REST API |

## Voice Agent (`useDeepgramVoiceAgent`)

useDeepgramVoiceAgent connects to wss://agent.deepgram.com/v1/agent/converse, captures microphone audio, and optionally auto-plays the agent's streamed responses. It wraps the full Voice Agent messaging surface so you can react to conversation updates, function calls, warnings, and raw PCM audio.

### Quick start

```tsx
import { Button } from 'react-native';
import { useDeepgramVoiceAgent } from 'react-native-deepgram';

const {
  connect,
  disconnect,
  injectUserMessage,
  sendFunctionCallResponse,
  updatePrompt,
} = useDeepgramVoiceAgent({
  defaultSettings: {
    audio: {
      input: { encoding: 'linear16', sample_rate: 24_000 },
      output: { encoding: 'linear16', sample_rate: 24_000, container: 'none' },
    },
    agent: {
      language: 'en',
      greeting: 'Hello! How can I help you today?',
      listen: {
        provider: { type: 'deepgram', model: 'nova-3', smart_format: true },
      },
      think: {
        provider: { type: 'open_ai', model: 'gpt-4o', temperature: 0.7 },
        prompt: 'You are a helpful voice concierge.',
      },
      speak: {
        provider: { type: 'deepgram', model: 'aura-2-asteria-en' },
      },
    },
    tags: ['demo'],
  },
  onConversationText: (msg) => {
    console.log(`${msg.role}: ${msg.content}`);
  },
  onAgentThinking: (msg) => console.log('thinking:', msg.content),
  onAgentAudioDone: () => console.log('Agent finished speaking'),
  onServerError: (err) => console.error('Agent error', err.description),
});

const begin = async () => {
  try {
    await connect();
  } catch (err) {
    console.error('Failed to start agent', err);
  }
};

const askQuestion = () => {
  injectUserMessage("What's the weather like?");
};

const provideTooling = () => {
  sendFunctionCallResponse({
    id: 'func_12345',
    name: 'get_weather',
    content: JSON.stringify({ temperature: 72, condition: 'sunny' }),
    client_side: true,
  });
};

const rePrompt = () => {
  updatePrompt('You are now a helpful travel assistant.');
};

return (
  <>
    <Button title="Start agent" onPress={begin} />
    <Button title="Ask" onPress={askQuestion} />
    <Button title="Send tool output" onPress={provideTooling} />
    <Button title="Update prompt" onPress={rePrompt} />
    <Button title="Stop" onPress={disconnect} />
  </>
);
```

💬 The hook requests mic permissions, streams PCM to Deepgram, and surfaces the agent's replies as text so nothing plays back into the microphone.

### API reference (Voice Agent)

#### Hook props

| Prop | Type | Description |
| --- | --- | --- |
| `endpoint` | `string` | WebSocket endpoint used for the agent conversation (defaults to `wss://agent.deepgram.com/v1/agent/converse`). |
| `defaultSettings` | `DeepgramVoiceAgentSettings` | Base `Settings` payload sent on connect; merge per-call overrides via `connect(override)`. |
| `autoStartMicrophone` | `boolean` | Automatically requests mic access and starts streaming PCM when `true` (default). |
| `downsampleFactor` | `number` | Manually override the downsample ratio applied to captured audio (defaults to a heuristic based on the requested sample rate). |

#### Callbacks

| Callback | Signature | Fired when |
| --- | --- | --- |
| `onBeforeConnect` | `() => void` | `connect` is called, before requesting mic permissions or opening the socket. |
| `onConnect` | `() => void` | The socket opens and the initial settings payload is delivered. |
| `onClose` | `(event?: any) => void` | The socket closes (manual disconnect or remote). |
| `onError` | `(error: unknown) => void` | Any unexpected error occurs (mic, playback, socket send, etc.). |
| `onMessage` | `(message: DeepgramVoiceAgentServerMessage) => void` | Every JSON message from the Voice Agent API. |
| `onWelcome` | `(message: DeepgramVoiceAgentWelcomeMessage) => void` | The agent returns the initial `Welcome` envelope. |
| `onSettingsApplied` | `(message: DeepgramVoiceAgentSettingsAppliedMessage) => void` | Settings are acknowledged by the agent. |
| `onConversationText` | `(message: DeepgramVoiceAgentConversationTextMessage) => void` | Transcript updates (role + content) arrive. |
| `onAgentThinking` | `(message: DeepgramVoiceAgentAgentThinkingMessage) => void` | The agent reports internal reasoning state. |
| `onAgentStartedSpeaking` | `(message: DeepgramVoiceAgentAgentStartedSpeakingMessage) => void` | A response playback session begins (latency metrics included). |
| `onAgentAudioDone` | `(message: DeepgramVoiceAgentAgentAudioDoneMessage) => void` | The agent finishes emitting audio for a turn. |
| `onUserStartedSpeaking` | `(message: DeepgramVoiceAgentUserStartedSpeakingMessage) => void` | Server-side VAD detects the user speaking. |
| `onFunctionCallRequest` | `(message: DeepgramVoiceAgentFunctionCallRequestMessage) => void` | The agent asks the client to execute a tool marked `client_side: true`. |
| `onFunctionCallResponse` | `(message: DeepgramVoiceAgentReceiveFunctionCallResponseMessage) => void` | The server shares the outcome of a non-client-side function call. |
| `onPromptUpdated` | `(message: DeepgramVoiceAgentPromptUpdatedMessage) => void` | The active prompt is updated (e.g., after `updatePrompt`). |
| `onSpeakUpdated` | `(message: DeepgramVoiceAgentSpeakUpdatedMessage) => void` | The active speak configuration changes (sent by the server). |
| `onInjectionRefused` | `(message: DeepgramVoiceAgentInjectionRefusedMessage) => void` | An inject request is rejected (typically while the agent is speaking). |
| `onWarning` | `(message: DeepgramVoiceAgentWarningMessage) => void` | The API surfaces a non-fatal warning (e.g., degraded audio quality). |
| `onServerError` | `(message: DeepgramVoiceAgentErrorMessage) => void` | The API reports a structured error payload (description + code). |

#### Returned methods

| Method | Signature | Description |
| --- | --- | --- |
| `connect` | `(settings?: DeepgramVoiceAgentSettings) => Promise<void>` | Opens the socket, optionally merges additional settings, and begins microphone streaming. |
| `disconnect` | `() => void` | Tears down the socket, stops recording, and removes listeners. |
| `sendMessage` | `(message: DeepgramVoiceAgentClientMessage) => boolean` | Sends a pre-built client envelope (handy for custom message types). |
| `sendSettings` | `(settings: DeepgramVoiceAgentSettings) => boolean` | Sends a `Settings` message mid-session (merged with the `type` field). |
| `injectUserMessage` | `(content: string) => boolean` | Injects a user-side text message. |
| `injectAgentMessage` | `(message: string) => boolean` | Injects an assistant-side text message. |
| `sendFunctionCallResponse` | `(response: Omit<DeepgramVoiceAgentFunctionCallResponseMessage, 'type'>) => boolean` | Returns tool results for client-side function calls. |
| `sendKeepAlive` | `() => boolean` | Emits a `KeepAlive` ping to keep the session warm. |
| `updatePrompt` | `(prompt: string) => boolean` | Replaces the active system prompt. |
| `sendMedia` | `(chunk: ArrayBuffer \| Uint8Array \| number[]) => boolean` | Streams additional PCM audio to the agent (e.g., pre-recorded buffers). |
| `isConnected` | `() => boolean` | Returns `true` when the socket is open. |
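
As a sketch of how these compose, a keep-alive loop for quiet sessions (the 15-second interval is an arbitrary choice for illustration, not a documented requirement):

```ts
import { useEffect } from 'react';

// Assumes `sendKeepAlive` and `isConnected` come from the useDeepgramVoiceAgent call above.
useEffect(() => {
  const timer = setInterval(() => {
    if (isConnected()) sendKeepAlive(); // ping while the socket stays open
  }, 15_000);
  return () => clearInterval(timer);
}, [isConnected, sendKeepAlive]);
```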

#### Settings payload (`DeepgramVoiceAgentSettings`)

| Field | Type | Purpose |
| --- | --- | --- |
| `tags` | `string[]` | Labels applied to the session for analytics/routing. |
| `flags.history` | `boolean` | Enable prior history playback to the agent. |
| `audio.input` | `DeepgramVoiceAgentAudioConfig` | Configure encoding/sample rate for microphone audio. |
| `audio.output` | `DeepgramVoiceAgentAudioConfig` | Choose output encoding/sample rate/bitrate for agent speech. |
| `agent.language` | `string` | Primary language for the conversation. |
| `agent.context.messages` | `DeepgramVoiceAgentContextMessage[]` | Seed the conversation with prior turns or system notes. |
| `agent.listen.provider` | `DeepgramVoiceAgentListenProvider` | Speech recognition provider/model configuration. |
| `agent.think.provider` | `DeepgramVoiceAgentThinkProvider` | LLM selection (type, model, temperature, etc.). |
| `agent.think.functions` | `DeepgramVoiceAgentFunctionConfig[]` | Tooling exposed to the agent (name, parameters, optional endpoint metadata). |
| `agent.think.prompt` | `string` | System prompt presented to the thinking provider. |
| `agent.speak.provider` | `Record<string, unknown>` | Text-to-speech model selection for spoken replies. |
| `agent.greeting` | `string` | Optional greeting played once settings are applied. |
| `mip_opt_out` | `boolean` | Opt the session out of the Model Improvement Program. |
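
For example, a sketch of seeding the conversation through `agent.context.messages` (the `role`/`content` message shape is an assumption, mirroring what `onConversationText` receives):

```ts
// Merged over `defaultSettings` from the hook when passed to connect().
await connect({
  agent: {
    context: {
      messages: [
        // Assumed role/content shape for DeepgramVoiceAgentContextMessage.
        { role: 'user', content: 'My name is Alex.' },
        { role: 'assistant', content: 'Nice to meet you, Alex!' },
      ],
    },
  },
});
```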

## Speech-to-Text (`useDeepgramSpeechToText`)

The speech hook streams microphone audio over WebSockets and can also transcribe prerecorded audio sources. It defaults to STT v1 but automatically boots into Flux when `apiVersion: 'v2'` is supplied (defaulting the model to `flux-general-en`).

### Live streaming quick start

```tsx
import { Button } from 'react-native';
import { useDeepgramSpeechToText } from 'react-native-deepgram';

const { startListening, stopListening } = useDeepgramSpeechToText({
  onTranscript: console.log,
  live: {
    apiVersion: 'v2',
    model: 'flux-general-en',
    punctuate: true,
    eotThreshold: 0.55,
  },
});

<Button
  title="Start"
  onPress={() => startListening({ keywords: ['Deepgram'] })}
/>
<Button title="Stop" onPress={stopListening} />
```

💡 When you opt into `apiVersion: 'v2'`, the hook automatically selects `flux-general-en` if you do not provide a `model`.

### File transcription quick start

```ts
import * as DocumentPicker from 'expo-document-picker'; // assumes Expo's document picker
import { useDeepgramSpeechToText } from 'react-native-deepgram';

const { transcribeFile } = useDeepgramSpeechToText({
  onTranscribeSuccess: (text) => console.log(text),
  prerecorded: {
    punctuate: true,
    summarize: 'v2',
  },
});

const pickFile = async () => {
  const f = await DocumentPicker.getDocumentAsync({ type: 'audio/*' });
  // Legacy picker result shape; newer SDK versions return { canceled, assets } instead.
  if (f.type === 'success') {
    await transcribeFile(f, { topics: true, intents: true });
  }
};
```

### API reference (Speech-to-Text)

#### Hook props

| Prop | Type | Description |
| --- | --- | --- |
| `onBeforeStart` | `() => void` | Invoked before requesting mic permissions or starting a stream. |
| `onStart` | `() => void` | Fired once the WebSocket opens. |
| `onTranscript` | `(transcript: string) => void` | Called for every transcript update (partial and final). |
| `onError` | `(error: unknown) => void` | Receives streaming errors. |
| `onEnd` | `() => void` | Fired when the socket closes. |
| `onBeforeTranscribe` | `() => void` | Called before posting a prerecorded transcription request. |
| `onTranscribeSuccess` | `(transcript: string) => void` | Receives the final transcript for prerecorded audio. |
| `onTranscribeError` | `(error: unknown) => void` | Fired if prerecorded transcription fails. |
| `live` | `DeepgramLiveListenOptions` | Default options merged into every live stream. |
| `prerecorded` | `DeepgramPrerecordedOptions` | Default options merged into every file transcription. |

#### Returned methods

| Method | Signature | Description |
| --- | --- | --- |
| `startListening` | `(options?: DeepgramLiveListenOptions) => Promise<void>` | Requests mic access, starts recording, and streams audio to Deepgram. |
| `stopListening` | `() => void` | Stops recording and closes the active WebSocket. |
| `transcribeFile` | `(file: DeepgramPrerecordedSource, options?: DeepgramPrerecordedOptions) => Promise<void>` | Uploads a file/URI/URL and resolves via the success/error callbacks. |
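
One practical pattern, shown here as a sketch rather than part of the API, is stopping the stream when the component unmounts so the mic and socket are released:

```tsx
import { useEffect } from 'react';

// Assumes `stopListening` comes from the useDeepgramSpeechToText call above.
useEffect(() => {
  return () => stopListening(); // release the mic and close the socket on unmount
}, [stopListening]);
```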

#### Live transcription options (`DeepgramLiveListenOptions`)

| Option | Type | Purpose | Default |
| --- | --- | --- | --- |
| `apiVersion` | `'v1' \| 'v2'` | Selects the realtime API generation (`'v2'` unlocks Flux streaming). | `'v1'` |
| `callback` | `string` | Webhook URL invoked when the stream finishes. | — |
| `callbackMethod` | `'POST' \| 'GET' \| 'PUT' \| 'DELETE'` | HTTP verb Deepgram should use for the callback. | `'POST'` |
| `channels` | `number` | Number of audio channels in the input. | — |
| `diarize` | `boolean` | Separate speakers into individual tracks. | Disabled |
| `dictation` | `boolean` | Enable dictation features (punctuation, formatting). | Disabled |
| `encoding` | `DeepgramLiveListenEncoding` | Audio codec supplied to Deepgram. | `'linear16'` |
| `endpointing` | `number \| boolean` | Control endpoint detection (`false` disables). | — |
| `extra` | `Record<string, string \| number \| boolean>` | Attach custom metadata returned with the response. | — |
| `fillerWords` | `boolean` | Include filler words such as "um"/"uh". | Disabled |
| `interimResults` | `boolean` | Emit interim (non-final) transcripts. | Disabled |
| `keyterm` | `string \| string[]` | Provide key terms to bias Nova-3 transcription. | — |
| `keywords` | `string \| string[]` | Boost or suppress keywords. | — |
| `language` | `string` | BCP-47 language hint (e.g. `en-US`). | Auto |
| `mipOptOut` | `boolean` | Opt out of the Model Improvement Program. | Disabled |
| `model` | `DeepgramLiveListenModel` | Streaming model to request. | `'nova-2'` (v1) / `'flux-general-en'` (v2) |
| `multichannel` | `boolean` | Transcribe each channel independently. | Disabled |
| `numerals` | `boolean` | Convert spoken numbers into digits. | Disabled |
| `profanityFilter` | `boolean` | Remove profanity from transcripts. | Disabled |
| `punctuate` | `boolean` | Auto-insert punctuation and capitalization. | Disabled |
| `redact` | `DeepgramLiveListenRedaction \| DeepgramLiveListenRedaction[]` | Remove sensitive content such as PCI data. | — |
| `replace` | `string \| string[]` | Replace specific terms in the output. | — |
| `sampleRate` | `number` | Sample rate of the PCM audio being sent. | `16000` |
| `search` | `string \| string[]` | Return timestamps for search terms. | — |
| `smartFormat` | `boolean` | Apply Deepgram smart formatting. | Disabled |
| `tag` | `string` | Label the request for reporting. | — |
| `eagerEotThreshold` | `number` | Confidence required to emit an eager turn (Flux only). | — |
| `eotThreshold` | `number` | Confidence required to finalise a turn (Flux only). | — |
| `eotTimeoutMs` | `number` | Silence timeout before closing a turn (Flux only). | — |
| `utteranceEndMs` | `number` | Delay before emitting an utterance-end event. | — |
| `vadEvents` | `boolean` | Emit voice activity detection events. | Disabled |
| `version` | `string` | Request a specific model version. | — |
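
For comparison with the Flux quick start above, a v1 sketch built from the options in this table (the `endpointing` value is assumed to be milliseconds):

```ts
// Assumes `startListening` from the useDeepgramSpeechToText call above.
await startListening({
  model: 'nova-2',       // v1 streaming default, stated explicitly here
  language: 'en-US',
  interimResults: true,  // receive partial transcripts as they stabilise
  smartFormat: true,
  endpointing: 300,      // assumed: milliseconds of trailing silence
});
```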

#### Prerecorded transcription options (`DeepgramPrerecordedOptions`)

| Option | Type | Purpose | Default |
| --- | --- | --- | --- |
| `callback` | `string` | Webhook URL invoked once transcription finishes. | — |
| `callbackMethod` | `DeepgramPrerecordedCallbackMethod` | HTTP verb used for the callback. | `'POST'` |
| `extra` | `DeepgramPrerecordedExtra` | Metadata returned with the response. | — |
| `sentiment` | `boolean` | Run sentiment analysis. | Disabled |
| `summarize` | `DeepgramPrerecordedSummarize` | Request AI summaries (`true`, `'v1'`, or `'v2'`). | Disabled |
| `tag` | `string \| string[]` | Label the request. | — |
| `topics` | `boolean` | Detect topics. | Disabled |
| `customTopic` | `string \| string[]` | Provide additional topics to monitor. | — |
| `customTopicMode` | `DeepgramPrerecordedCustomMode` | Interpret `customTopic` as `'extended'` or `'strict'`. | `'extended'` |
| `intents` | `boolean` | Detect intents. | Disabled |
| `customIntent` | `string \| string[]` | Provide custom intents to bias detection. | — |
| `customIntentMode` | `DeepgramPrerecordedCustomMode` | Interpret `customIntent` as `'extended'` or `'strict'`. | `'extended'` |
| `detectEntities` | `boolean` | Extract entities (names, places, etc.). | Disabled |
| `detectLanguage` | `boolean \| string \| string[]` | Auto-detect language or limit detection. | Disabled |
| `diarize` | `boolean` | Enable speaker diarisation. | Disabled |
| `dictation` | `boolean` | Enable dictation formatting. | Disabled |
| `encoding` | `DeepgramPrerecordedEncoding` | Encoding/codec of the uploaded audio. | — |
| `fillerWords` | `boolean` | Include filler words. | Disabled |
| `keyterm` | `string \| string[]` | Provide key terms to bias Nova-3. | — |
| `keywords` | `string \| string[]` | Boost or suppress keywords. | — |
| `language` | `string` | Primary spoken language hint (BCP-47). | Auto |
| `measurements` | `boolean` | Convert measurements into abbreviations. | Disabled |
| `model` | `DeepgramPrerecordedModel` | Model to use for transcription. | API default |
| `multichannel` | `boolean` | Transcribe each channel independently. | Disabled |
| `numerals` | `boolean` | Convert spoken numbers into digits. | Disabled |
| `paragraphs` | `boolean` | Split the transcript into paragraphs. | Disabled |
| `profanityFilter` | `boolean` | Remove profanity from the transcript. | Disabled |
| `punctuate` | `boolean` | Auto-insert punctuation and capitalisation. | Disabled |
| `redact` | `DeepgramPrerecordedRedaction \| DeepgramPrerecordedRedaction[]` | Remove sensitive content (PCI/PII). | — |
| `replace` | `string \| string[]` | Replace specific terms in the output. | — |
| `search` | `string \| string[]` | Return timestamps for search terms. | — |
| `smartFormat` | `boolean` | Apply Deepgram smart formatting. | Disabled |
| `utterances` | `boolean` | Return utterance-level timestamps. | Disabled |
| `uttSplit` | `number` | Pause duration (seconds) used to split utterances. | — |
| `version` | `DeepgramPrerecordedVersion` | Request a specific model version (e.g. `'latest'`). | API default (`'latest'`) |
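
As a combined sketch (the `{ url: ... }` source shape is an assumption about `DeepgramPrerecordedSource`, which the method table above says accepts files, URIs, and URLs):

```ts
// Assumes `transcribeFile` from the useDeepgramSpeechToText call above.
await transcribeFile(
  { url: 'https://example.com/meeting.wav' }, // assumed URL-source shape
  {
    diarize: true,    // split speakers
    paragraphs: true, // paragraph-structured transcript
    summarize: 'v2',  // request the v2 summariser
  }
);
```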

## Text-to-Speech (`useDeepgramTextToSpeech`)

Generate audio via a single HTTP call or stream interactive responses over WebSocket. The hook exposes granular configuration for both request paths.

### HTTP synthesis quick start

```ts
import { useDeepgramTextToSpeech } from 'react-native-deepgram';

const { synthesize } = useDeepgramTextToSpeech({
  options: {
    http: {
      model: 'aura-2-asteria-en',
      encoding: 'mp3',
      bitRate: 48000,
      container: 'none',
    },
  },
  onSynthesizeSuccess: (buffer) => {
    console.log('Received bytes', buffer.byteLength);
  },
});

await synthesize('Hello from Deepgram!');
```

### Streaming quick start

```ts
import { useDeepgramTextToSpeech } from 'react-native-deepgram';

const {
  startStreaming,
  sendText,
  flushStream,
  clearStream,
  closeStreamGracefully,
  stopStreaming,
} = useDeepgramTextToSpeech({
  options: {
    stream: {
      model: 'aura-2-asteria-en',
      encoding: 'linear16',
      sampleRate: 24000,
      autoFlush: false,
    },
  },
  onAudioChunk: (chunk) => console.log('Audio chunk', chunk.byteLength),
  onStreamMetadata: (meta) => console.log(meta.model_name),
});

await startStreaming('Booting stream…');
sendText('Queue another sentence', { sequenceId: 1 });
flushStream();
closeStreamGracefully();
```

### API reference (Text-to-Speech)

#### Hook props

| Prop | Type | Description |
| --- | --- | --- |
| `onBeforeSynthesize` | `() => void` | Called before dispatching an HTTP synthesis request. |
| `onSynthesizeSuccess` | `(audio: ArrayBuffer) => void` | Receives the raw audio bytes when the HTTP request succeeds. |
| `onSynthesizeError` | `(error: unknown) => void` | Fired if the HTTP request fails. |
| `onBeforeStream` | `() => void` | Called prior to opening the WebSocket stream. |
| `onStreamStart` | `() => void` | Fired once the socket is open and ready. |
| `onAudioChunk` | `(chunk: ArrayBuffer) => void` | Called for each PCM chunk received from the stream. |
| `onStreamMetadata` | `(metadata: DeepgramTextToSpeechStreamMetadataMessage) => void` | Emits metadata describing the current stream. |
| `onStreamFlushed` | `(event: DeepgramTextToSpeechStreamFlushedMessage) => void` | Raised when Deepgram confirms a flush. |
| `onStreamCleared` | `(event: DeepgramTextToSpeechStreamClearedMessage) => void` | Raised when Deepgram confirms a clear. |
| `onStreamWarning` | `(warning: DeepgramTextToSpeechStreamWarningMessage) => void` | Raised when Deepgram warns about the stream. |
| `onStreamError` | `(error: unknown) => void` | Fired when the WebSocket errors. |
| `onStreamEnd` | `() => void` | Fired when the stream closes (gracefully or otherwise). |
| `options` | `UseDeepgramTextToSpeechOptions` | Default configuration merged into HTTP and streaming requests. |

#### Returned methods

| Method | Signature | Description |
| --- | --- | --- |
| `synthesize` | `(text: string) => Promise<ArrayBuffer>` | Sends a single piece of text via REST and resolves with the full audio buffer. |
| `startStreaming` | `(text: string) => Promise<void>` | Opens the streaming WebSocket and queues the first message. |
| `sendMessage` | `(message: DeepgramTextToSpeechStreamInputMessage) => boolean` | Sends a raw control message (`Text`, `Flush`, `Clear`, `Close`) to the active stream. |
| `sendText` | `(text: string, options?: { flush?: boolean; sequenceId?: number }) => boolean` | Queues additional text frames, optionally suppressing auto-flush or setting a sequence id. |
| `flushStream` | `() => boolean` | Requests Deepgram to emit all buffered audio immediately. |
| `clearStream` | `() => boolean` | Clears buffered text/audio without closing the socket. |
| `closeStreamGracefully` | `() => boolean` | Asks Deepgram to finish outstanding audio and then close the stream. |
| `stopStreaming` | `() => void` | Force-closes the socket and releases resources. |
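
A sketch of feeding a longer passage through these methods, assuming the streaming hook above with `autoFlush: false` (the sentence splitting is deliberately naive):

```ts
const speakParagraph = async (paragraph: string) => {
  const sentences = paragraph.split('. ').filter(Boolean); // naive sentence split
  if (sentences.length === 0) return;
  await startStreaming(sentences[0]); // open the socket with the first chunk
  sentences.slice(1).forEach((s, i) => sendText(s, { sequenceId: i + 1 }));
  flushStream();            // emit everything buffered so far
  closeStreamGracefully();  // let Deepgram finish outstanding audio, then close
};
```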

### Configuration (`UseDeepgramTextToSpeechOptions`)

`UseDeepgramTextToSpeechOptions` mirrors the SDK's structure and is merged into both HTTP and WebSocket requests.

#### Global options

| Option | Type | Applies to | Purpose |
| --- | --- | --- | --- |
| `model`\* | `DeepgramTextToSpeechModel \| (string & {})` | Both | Legacy shortcut for selecting a model (prefer the per-transport `model`). |
| `encoding`\* | `DeepgramTextToSpeechEncoding` | Both | Legacy shortcut for selecting encoding (prefer `http.encoding` / `stream.encoding`). |
| `sampleRate`\* | `DeepgramTextToSpeechSampleRate` | Both | Legacy shortcut for sample rate (prefer transport-specific overrides). |
| `bitRate`\* | `DeepgramTextToSpeechBitRate` | HTTP | Legacy shortcut for bit rate. |
| `container`\* | `DeepgramTextToSpeechContainer` | HTTP | Legacy shortcut for container. |
| `format`\* | `'mp3' \| 'wav' \| 'opus' \| 'pcm' \| (string & {})` | HTTP | Legacy shortcut for container/format. |
| `callback`\* | `string` | HTTP | Legacy shortcut for callback URL. |
| `callbackMethod`\* | `DeepgramTextToSpeechCallbackMethod` | HTTP | Legacy shortcut for callback method. |
| `mipOptOut`\* | `boolean` | Both | Legacy shortcut for Model Improvement Program opt-out. |
| `queryParams` | `Record<string, string \| number \| boolean>` | Both | Shared query-string parameters appended to all requests. |
| `http` | `DeepgramTextToSpeechHttpOptions` | HTTP | Fine-grained HTTP synthesis configuration. |
| `stream` | `DeepgramTextToSpeechStreamOptions` | Streaming | Fine-grained streaming configuration. |

\* Starred fields are supported for backwards compatibility, but the transport-specific `http`/`stream` options are recommended.

#### `options.http` (REST synthesis)

| Option | Type | Purpose |
| --- | --- | --- |
| `model` | `DeepgramTextToSpeechModel \| (string & {})` | Select the TTS voice/model. |
| `encoding` | `DeepgramTextToSpeechHttpEncoding` | Output audio codec. |
| `sampleRate` | `DeepgramTextToSpeechSampleRate` | Output sample rate in Hz. |
| `container` | `DeepgramTextToSpeechContainer` | Wrap audio in a container (`'none'`, `'wav'`, `'ogg'`). |
| `format` | `'mp3' \| 'wav' \| 'opus' \| 'pcm' \| (string & {})` | Deprecated alias for `container`. |
| `bitRate` | `DeepgramTextToSpeechBitRate` | Bit rate for compressed formats (e.g. MP3). |
| `callback` | `string` | Webhook URL invoked after synthesis completes. |
| `callbackMethod` | `DeepgramTextToSpeechCallbackMethod` | HTTP verb used for the callback. |
| `mipOptOut` | `boolean` | Opt out of the Model Improvement Program. |
| `queryParams` | `Record<string, string \| number \| boolean>` | Extra query parameters appended to the request. |

#### `options.stream` (WebSocket streaming)

| Option | Type | Purpose |
| --- | --- | --- |
| `model` | `DeepgramTextToSpeechModel \| (string & {})` | Select the streaming voice/model. |
| `encoding` | `DeepgramTextToSpeechStreamEncoding` | Output PCM encoding for streamed chunks. |
| `sampleRate` | `DeepgramTextToSpeechSampleRate` | Output sample rate in Hz. |
| `mipOptOut` | `boolean` | Opt out of the Model Improvement Program. |
| `queryParams` | `Record<string, string \| number \| boolean>` | Extra query parameters appended to the streaming URL. |
| `autoFlush` | `boolean` | Automatically flush after each `sendText` call (defaults to `true`). |

## Text Intelligence (`useDeepgramTextIntelligence`)

Run summarisation, topic detection, intent detection, sentiment analysis, and more over plain text or URLs.

```ts
import { useDeepgramTextIntelligence } from 'react-native-deepgram';

const { analyze } = useDeepgramTextIntelligence({
  onAnalyzeSuccess: (result) => console.log(result.summary),
  options: {
    summarize: true,
    topics: true,
    intents: true,
    language: 'en-US',
  },
});

await analyze({ text: 'Deepgram makes voice data useful.' });
```

### Options (`UseDeepgramTextIntelligenceOptions`)

| Option | Type | Purpose |
| --- | --- | --- |
| `summarize` | `boolean` | Run summarisation on the input. |
| `topics` | `boolean` | Detect topics. |
| `customTopic` | `string \| string[]` | Supply additional topics to monitor. |
| `customTopicMode` | `'extended' \| 'strict'` | Interpret custom topics as additive (`extended`) or exact (`strict`). |
| `intents` | `boolean` | Detect intents. |
| `customIntent` | `string \| string[]` | Provide custom intents to bias detection. |
| `customIntentMode` | `'extended' \| 'strict'` | Interpret custom intents as additive (`extended`) or exact (`strict`). |
| `sentiment` | `boolean` | Run sentiment analysis. |
| `language` | `DeepgramTextIntelligenceLanguage` | BCP-47 language hint (defaults to `'en'`). |
| `callback` | `string` | Webhook URL invoked after processing completes. |
| `callbackMethod` | `'POST' \| 'PUT' \| (string & {})` | HTTP method used for the callback. |
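
Since the hook also accepts URLs, a sketch of analysing a hosted document (the `{ url: ... }` input shape is an assumption based on the prose above, mirroring the `{ text: ... }` form):

```ts
// Assumed URL-input shape; the options passed to the hook still apply.
await analyze({ url: 'https://example.com/meeting-transcript.txt' });
```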

## Management API (`useDeepgramManagement`)

Returns a fully typed REST client for the Deepgram Management API. No props are required.

```ts
import { useDeepgramManagement } from 'react-native-deepgram';

const dg = useDeepgramManagement();

const projects = await dg.projects.list();
console.log('Projects:', projects.map((p) => p.name));
```

### Snapshot of available groups

| Group | Representative methods |
| --- | --- |
| `models` | `list(includeOutdated?)`, `get(modelId)` |
| `projects` | `list()`, `get(id)`, `delete(id)`, `patch(id, body)`, `listModels(id)` |
| `keys` | `list(projectId)`, `create(projectId, body)`, `get(projectId, keyId)`, `delete(...)` |
| `usage` | `listRequests(projectId)`, `getRequest(projectId, requestId)`, `getBreakdown(projectId)` |
| `balances` | `list(projectId)`, `get(projectId, balanceId)` |

(Plus helpers for members, scopes, invitations, and purchases.)
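
For instance, a sketch chaining two of the groups above (response field names like `project_id` are assumptions here; consult the typed client for the exact shapes):

```ts
// Assumes `dg` is the client returned by useDeepgramManagement() above.
const projects = await dg.projects.list();
const first = projects[0];
if (first) {
  const keys = await dg.keys.list(first.project_id); // assumed field name
  console.log(`"${first.name}" has ${keys.length} API keys`);
}
```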

## Example app

The repository includes an Expo-managed playground under `example/` that wires up every hook in this package.

### 1. Install workspace dependencies

```sh
git clone https://github.com/itsRares/react-native-deepgram
cd react-native-deepgram
yarn install
```

### 2. Configure your Deepgram key

Create `example/.env` with an Expo public key so the app can authenticate:

```sh
echo "EXPO_PUBLIC_DEEPGRAM_API_KEY=your_deepgram_key" > example/.env
```

You can generate API keys from the Deepgram Console. For management endpoints, ensure the key carries the right scopes.

### 3. Run or build the example

  • `yarn example` – start the Expo bundler in development mode (web preview + QR code)
  • `yarn example:ios` – compile and launch the iOS app with `expo run:ios`
  • `yarn example:android` – compile and launch the Android app with `expo run:android`

If you prefer using bare Expo commands, `cd example` and run `yarn start`, `yarn ios`, or `yarn android`.

## Roadmap

  • ✅ Speech-to-Text (WebSocket + REST)
  • ✅ Speech-to-Text v2 / Flux streaming support
  • ✅ Text-to-Speech (HTTP synthesis + WebSocket streaming)
  • ✅ Text Intelligence (summaries, topics, sentiment, intents)
  • ✅ Management API wrapper
  • 🚧 Detox E2E tests for the example app

## Contributing

Issues and PRs are welcome—see CONTRIBUTING.md.

## License

MIT
