# Build with AI models that can transcribe and understand audio
With a single API call, get access to AI models built on the latest AI breakthroughs to transcribe and understand audio and speech data securely at large scale.
Visit our AssemblyAI API Documentation to get an overview of our models!
```bash
pip install -U assemblyai
```
Before starting, you need to set the API key. If you don't have one yet, sign up for one!
```python
import assemblyai as aai

# set the API key
aai.settings.api_key = f"{ASSEMBLYAI_API_KEY}"
```
```python
import assemblyai as aai

transcriber = aai.Transcriber()
transcript = transcriber.transcribe("./my-local-audio-file.wav")

print(transcript.text)
```
```python
import assemblyai as aai

transcriber = aai.Transcriber()
transcript = transcriber.transcribe("https://example.org/audio.mp3")

print(transcript.text)
```
```python
import assemblyai as aai

transcriber = aai.Transcriber()

# Binary data is supported directly:
transcript = transcriber.transcribe(data)

# Or: upload data separately:
upload_url = transcriber.upload_file(data)
transcript = transcriber.transcribe(upload_url)
```
```python
import assemblyai as aai

transcriber = aai.Transcriber()
transcript = transcriber.transcribe("https://example.org/audio.mp3")

# in SRT format
print(transcript.export_subtitles_srt())

# in VTT format
print(transcript.export_subtitles_vtt())
```
```python
import assemblyai as aai

transcriber = aai.Transcriber()
transcript = transcriber.transcribe("https://example.org/audio.mp3")

sentences = transcript.get_sentences()
for sentence in sentences:
    print(sentence.text)

paragraphs = transcript.get_paragraphs()
for paragraph in paragraphs:
    print(paragraph.text)
```
```python
import assemblyai as aai

transcriber = aai.Transcriber()
transcript = transcriber.transcribe("https://example.org/audio.mp3")

matches = transcript.word_search(["price", "product"])

for match in matches:
    print(f"Found '{match.text}' {match.count} times in the transcript")
```
```python
import assemblyai as aai

config = aai.TranscriptionConfig()
config.set_custom_spelling(
    {
        "Kubernetes": ["k8s"],
        "SQL": ["Sequel"],
    }
)

transcriber = aai.Transcriber()
transcript = transcriber.transcribe("https://example.org/audio.mp3", config)

print(transcript.text)
```
```python
import assemblyai as aai

transcriber = aai.Transcriber()
transcript_group = transcriber.transcribe_group(
    [
        "https://example.org/customer1.mp3",
        "https://example.org/customer2.mp3",
    ],
)

result = transcript_group.lemur.summarize(
    context="Customers asking for cars",
    answer_format="TLDR"
)

print(result.response)
```
```python
import assemblyai as aai

transcriber = aai.Transcriber()
transcript = transcriber.transcribe("https://example.org/customer.mp3")

# ask some questions
questions = [
    aai.LemurQuestion(question="What car was the customer interested in?"),
    aai.LemurQuestion(question="What price range is the customer looking for?"),
]
result = transcript.lemur.question(questions)

for q in result.response:
    print(f"Question: {q.question}")
    print(f"Answer: {q.answer}")
```
```python
import assemblyai as aai

transcriber = aai.Transcriber()
transcript = transcriber.transcribe("https://example.org/customer.mp3")

result = transcript.lemur.action_items(
    context="Customers asking for help with resolving their problem",
    answer_format="Three bullet points",
)

print(result.response)
```
```python
import assemblyai as aai

transcriber = aai.Transcriber()
transcript = transcriber.transcribe("https://example.org/customer.mp3")

result = transcript.lemur.task(
    "You are a helpful coach. Provide an analysis of the transcript "
    "and offer areas to improve with exact quotes. Include no preamble. "
    "Start with an overall summary then get into the examples with feedback.",
)

print(result.response)
```
```python
import assemblyai as aai

transcriber = aai.Transcriber()
config = aai.TranscriptionConfig(
    speaker_labels=True,
)
transcript = transcriber.transcribe("https://example.org/customer.mp3", config=config)

# Example converting speaker label utterances into LeMUR input text
text = ""
for utt in transcript.utterances:
    text += f"Speaker {utt.speaker}:\n{utt.text}\n"

result = aai.Lemur().task(
    "You are a helpful coach. Provide an analysis of the transcript "
    "and offer areas to improve with exact quotes. Include no preamble. "
    "Start with an overall summary then get into the examples with feedback.",
    input_text=text
)

print(result.response)
```
```python
import assemblyai as aai

# Create a transcript and a corresponding LeMUR request that may contain sensitive information.
transcriber = aai.Transcriber()
transcript_group = transcriber.transcribe_group(
    [
        "https://example.org/customer1.mp3",
    ],
)
result = transcript_group.lemur.summarize(
    context="Customers providing sensitive, personally identifiable information",
    answer_format="TLDR"
)

# Get the request ID from the LeMUR response
request_id = result.request_id

# Now we can delete the data about this request
deletion_result = aai.Lemur.purge_request_data(request_id)
print(deletion_result)
```
```python
import assemblyai as aai

config = aai.TranscriptionConfig()
config.set_redact_pii(
    # What should be redacted
    policies=[
        aai.PIIRedactionPolicy.credit_card_number,
        aai.PIIRedactionPolicy.email_address,
        aai.PIIRedactionPolicy.location,
        aai.PIIRedactionPolicy.person_name,
        aai.PIIRedactionPolicy.phone_number,
    ],
    # How it should be redacted
    substitution=aai.PIISubstitutionPolicy.hash,
)

transcriber = aai.Transcriber()
transcript = transcriber.transcribe("https://example.org/audio.mp3", config)
```
To request a copy of the original audio file with the redacted information "beeped" out, set `redact_pii_audio=True` in the config. Once the `Transcript` object is returned, you can access the URL of the redacted audio file with `get_redacted_audio_url`, or save the redacted audio directly to disk with `save_redacted_audio`.
```python
import assemblyai as aai

transcript = aai.Transcriber().transcribe(
    "https://example.org/audio.mp3",
    config=aai.TranscriptionConfig(
        redact_pii=True,
        redact_pii_policies=[aai.PIIRedactionPolicy.person_name],
        redact_pii_audio=True
    )
)

redacted_audio_url = transcript.get_redacted_audio_url()
transcript.save_redacted_audio("redacted_audio.mp3")
```
```python
import assemblyai as aai

transcriber = aai.Transcriber()
transcript = transcriber.transcribe(
    "https://example.org/audio.mp3",
    config=aai.TranscriptionConfig(auto_chapters=True)
)

for chapter in transcript.chapters:
    print(f"Summary: {chapter.summary}")  # A one-paragraph summary of the content spoken during this timeframe
    print(f"Start: {chapter.start}, End: {chapter.end}")  # Timestamps (in milliseconds) of the chapter
    print(f"Headline: {chapter.headline}")  # A single-sentence summary of the content spoken during this timeframe
    print(f"Gist: {chapter.gist}")  # An ultra-short summary, just a few words, of the content spoken during this timeframe
```
```python
import assemblyai as aai

transcriber = aai.Transcriber()
transcript = transcriber.transcribe(
    "https://example.org/audio.mp3",
    config=aai.TranscriptionConfig(summarization=True)
)

print(transcript.summary)
```
By default, the summarization model will be `informative` and the summarization type will be `bullets`. Read more about summarization models and types here.

To change the model and/or type, pass additional parameters to the `TranscriptionConfig`:
```python
config=aai.TranscriptionConfig(
    summarization=True,
    summary_model=aai.SummarizationModel.catchy,
    summary_type=aai.SummarizationType.headline
)
```
```python
import assemblyai as aai

transcriber = aai.Transcriber()
transcript = transcriber.transcribe(
    "https://example.org/audio.mp3",
    config=aai.TranscriptionConfig(content_safety=True)
)

# Get the parts of the transcript which were flagged as sensitive
for result in transcript.content_safety.results:
    print(result.text)  # sensitive text snippet
    print(result.timestamp.start)
    print(result.timestamp.end)
    for label in result.labels:
        print(label.label)  # content safety category
        print(label.confidence)  # model's confidence that the text is in this category
        print(label.severity)  # severity of the text in relation to the category

# Get the confidence of the most common labels in relation to the entire audio file
for label, confidence in transcript.content_safety.summary.items():
    print(f"{confidence * 100}% confident that the audio contains {label}")

# Get the overall severity of the most common labels in relation to the entire audio file
for label, severity_confidence in transcript.content_safety.severity_score_summary.items():
    print(f"{severity_confidence.low * 100}% confident that the audio contains low-severity {label}")
    print(f"{severity_confidence.medium * 100}% confident that the audio contains mid-severity {label}")
    print(f"{severity_confidence.high * 100}% confident that the audio contains high-severity {label}")
```
Read more about the content safety categories.

By default, the content safety model will only include labels with a confidence greater than 0.5 (50%). To change this, pass `content_safety_confidence` (as an integer percentage between 25 and 100, inclusive) to the `TranscriptionConfig`:
```python
config=aai.TranscriptionConfig(
    content_safety=True,
    content_safety_confidence=80,  # only include labels with a confidence greater than 80%
)
```
```python
import assemblyai as aai

transcriber = aai.Transcriber()
transcript = transcriber.transcribe(
    "https://example.org/audio.mp3",
    config=aai.TranscriptionConfig(sentiment_analysis=True)
)

for sentiment_result in transcript.sentiment_analysis:
    print(sentiment_result.text)
    print(sentiment_result.sentiment)  # POSITIVE, NEUTRAL, or NEGATIVE
    print(sentiment_result.confidence)
    print(f"Timestamp: {sentiment_result.start} - {sentiment_result.end}")
```
If `speaker_labels` is also enabled, then each sentiment analysis result will also include a `speaker` field.
```python
# ...
config = aai.TranscriptionConfig(sentiment_analysis=True, speaker_labels=True)
# ...

for sentiment_result in transcript.sentiment_analysis:
    print(sentiment_result.speaker)
```
```python
import assemblyai as aai

transcriber = aai.Transcriber()
transcript = transcriber.transcribe(
    "https://example.org/audio.mp3",
    config=aai.TranscriptionConfig(entity_detection=True)
)

for entity in transcript.entities:
    print(entity.text)  # e.g. "Dan Gilbert"
    print(entity.entity_type)  # e.g. EntityType.person
    print(f"Timestamp: {entity.start} - {entity.end}")
```
```python
import assemblyai as aai

transcriber = aai.Transcriber()
transcript = transcriber.transcribe(
    "https://example.org/audio.mp3",
    config=aai.TranscriptionConfig(iab_categories=True)
)

# Get the parts of the transcript that were tagged with topics
for result in transcript.iab_categories.results:
    print(result.text)
    print(f"Timestamp: {result.timestamp.start} - {result.timestamp.end}")
    for label in result.labels:
        print(label.label)  # topic
        print(label.relevance)  # how relevant the label is for the portion of text

# Get a summary of all topics in the transcript
for label, relevance in transcript.iab_categories.summary.items():
    print(f"Audio is {relevance * 100}% relevant to {label}")
```
```python
import assemblyai as aai

transcriber = aai.Transcriber()
transcript = transcriber.transcribe(
    "https://example.org/audio.mp3",
    config=aai.TranscriptionConfig(auto_highlights=True)
)

for result in transcript.auto_highlights.results:
    print(result.text)  # the important phrase
    print(result.rank)  # relevancy of the phrase
    print(result.count)  # number of instances of the phrase
    for timestamp in result.timestamps:
        print(f"Timestamp: {timestamp.start} - {timestamp.end}")
```
Read more about our Real-Time service.
```python
import assemblyai as aai

def on_open(session_opened: aai.RealtimeSessionOpened):
    "This function is called when the connection has been established."
    print("Session ID:", session_opened.session_id)

def on_data(transcript: aai.RealtimeTranscript):
    "This function is called when a new transcript has been received."
    if not transcript.text:
        return
    if isinstance(transcript, aai.RealtimeFinalTranscript):
        print(transcript.text, end="\r\n")
    else:
        print(transcript.text, end="\r")

def on_error(error: aai.RealtimeError):
    "This function is called when an error occurs."
    print("An error occurred:", error)

def on_close():
    "This function is called when the connection has been closed."
    print("Closing Session")

# Create the Real-Time transcriber
transcriber = aai.RealtimeTranscriber(
    on_data=on_data,
    on_error=on_error,
    sample_rate=44_100,
    on_open=on_open,  # optional
    on_close=on_close,  # optional
)

# Start the connection
transcriber.connect()

# Open a microphone stream
microphone_stream = aai.extras.MicrophoneStream()

# Press CTRL+C to abort
transcriber.stream(microphone_stream)

transcriber.close()
```
```python
import assemblyai as aai

def on_data(transcript: aai.RealtimeTranscript):
    "This function is called when a new transcript has been received."
    if not transcript.text:
        return
    if isinstance(transcript, aai.RealtimeFinalTranscript):
        print(transcript.text, end="\r\n")
    else:
        print(transcript.text, end="\r")

def on_error(error: aai.RealtimeError):
    "This function is called when an error occurs."
    print("An error occurred:", error)

# Create the Real-Time transcriber
transcriber = aai.RealtimeTranscriber(
    on_data=on_data,
    on_error=on_error,
    sample_rate=44_100,
)

# Start the connection
transcriber.connect()

# Only WAV/PCM16 single channel supported for now
file_stream = aai.extras.stream_file(
    filepath="audio.wav",
    sample_rate=44_100,
)

transcriber.stream(file_stream)

transcriber.close()
```
```python
transcriber = aai.RealtimeTranscriber(...)

# Manually end an utterance and immediately produce a final transcript.
transcriber.force_end_utterance()

# Configure the threshold for automatic utterance detection.
transcriber = aai.RealtimeTranscriber(
    ...,
    end_utterance_silence_threshold=500
)

# Can be changed any time during a session.
# The valid range is between 0 and 20000.
transcriber.configure_end_utterance_silence_threshold(300)
```
```python
# Set disable_partial_transcripts to `True`
transcriber = aai.RealtimeTranscriber(
    ...,
    disable_partial_transcripts=True
)
```
```python
# Define a callback to handle the extra session information message
def on_extra_session_information(data: aai.RealtimeSessionInformation):
    "This function is called when a session information message has been received."
    print(data.audio_duration_seconds)

# Configure the RealtimeTranscriber
transcriber = aai.RealtimeTranscriber(
    ...,
    on_extra_session_information=on_extra_session_information,
)
```
Visit one of our Playgrounds to try these features in the browser.
When no `TranscriptionConfig` is being passed to the `Transcriber` or its methods, it will use a default instance of a `TranscriptionConfig`.

If you would like to re-use the same `TranscriptionConfig` for all your transcriptions, you can set it on the `Transcriber` directly:
```python
config = aai.TranscriptionConfig(punctuate=False, format_text=False)

transcriber = aai.Transcriber(config=config)

# will use the same config for all `.transcribe*(...)` operations
transcriber.transcribe("https://example.org/audio.wav")
```
You can override the default configuration later via the `.config` property of the `Transcriber`:
```python
transcriber = aai.Transcriber()

# override the `Transcriber`'s config with a new config
transcriber.config = aai.TranscriptionConfig(punctuate=False, format_text=False)
```
In case you want to override the `Transcriber`'s configuration for a specific operation with a different one, you can do so via the `config` parameter of a `.transcribe*(...)` method:
```python
config = aai.TranscriptionConfig(punctuate=False, format_text=False)

# set a default configuration
transcriber = aai.Transcriber(config=config)

transcriber.transcribe(
    "https://example.com/audio.mp3",
    # overrides the above configuration on the `Transcriber` with the following
    config=aai.TranscriptionConfig(dual_channel=True, disfluencies=True)
)
```
Currently, the SDK provides two ways to transcribe audio files.

The synchronous approach halts the application's flow until the transcription has been completed. The asynchronous approach allows the application to continue running while the transcription is being processed. The caller receives a `concurrent.futures.Future` object which can be used to check the status of the transcription at a later time.

You can identify these two approaches by the `_async` suffix in the `Transcriber`'s method name (e.g. `transcribe` vs `transcribe_async`).
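The `Future` returned by the `*_async` methods follows the standard `concurrent.futures` protocol, so the consumption pattern can be sketched without network access. Here `fake_transcribe` is a hypothetical stand-in for a real `transcribe` call, used purely for illustration:

```python
import concurrent.futures
import time

def fake_transcribe(url: str) -> str:
    """Stand-in for a slow transcription call; sleeps instead of hitting an API."""
    time.sleep(0.1)
    return f"transcript of {url}"

executor = concurrent.futures.ThreadPoolExecutor()

# transcribe_async returns immediately with a Future, much like submit() does here:
future = executor.submit(fake_transcribe, "https://example.org/audio.mp3")
print(future.done())  # likely False right after submission

# ... the application keeps doing other work here ...

result = future.result()  # blocks only when the result is actually needed
print(result)

executor.shutdown()
```

The same `done()`/`result()` calls apply to the `Future` you get back from `transcribe_async`.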
By default, we poll the `Transcript`'s status every 3 seconds. In case you would like to adjust that interval:
```python
import assemblyai as aai

aai.settings.polling_interval = 1.0
```
If you previously created a transcript, you can use its ID to retrieve it later.
```python
import assemblyai as aai

transcript = aai.Transcript.get_by_id("<TRANSCRIPT_ID>")

print(transcript.id)
print(transcript.text)
```
You can also retrieve multiple existing transcripts and combine them into a single `TranscriptGroup` object. This allows you to perform operations on the transcript group as a single unit, such as querying the combined transcripts with LeMUR.
```python
import assemblyai as aai

transcript_group = aai.TranscriptGroup.get_by_ids(["<TRANSCRIPT_ID_1>", "<TRANSCRIPT_ID_2>"])

summary = transcript_group.lemur.summarize(context="Customers asking for cars", answer_format="TLDR")
print(summary)
```
Both `Transcript.get_by_id` and `TranscriptGroup.get_by_ids` have asynchronous counterparts, `Transcript.get_by_id_async` and `TranscriptGroup.get_by_ids_async`, respectively. These functions immediately return a `Future` object, rather than blocking until the transcript(s) are retrieved.

See the above section on Synchronous vs Asynchronous for more information.
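As a sketch of non-blocking retrieval (the `fetch_transcripts_async` wrapper and its lazy import are illustrative, not part of the SDK; a valid API key and real transcript IDs are assumed at call time):

```python
import concurrent.futures

def fetch_transcripts_async(ids: list) -> "concurrent.futures.Future":
    """Illustrative wrapper: kick off retrieval of several existing transcripts."""
    import assemblyai as aai  # imported lazily so the sketch stays importable

    # get_by_ids_async returns a concurrent.futures.Future immediately
    return aai.TranscriptGroup.get_by_ids_async(ids)

if __name__ == "__main__":
    future = fetch_transcripts_async(["<TRANSCRIPT_ID_1>", "<TRANSCRIPT_ID_2>"])
    # ... the application keeps doing other work here ...
    transcript_group = future.result()  # blocks only when the result is needed
    print(transcript_group)
```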