
@mux/ai 📼 🤝 🤖

A TypeScript toolkit for building AI-driven video workflows on the server, powered by Mux!

@mux/ai does this by providing:

- Easy-to-use, purpose-driven, cost-effective, configurable workflow functions that integrate with a variety of popular AI/LLM providers (OpenAI, Anthropic, Google): getSummaryAndTags, getModerationScores, hasBurnedInCaptions, generateChapters, generateVideoEmbeddings, translateCaptions, translateAudio
- "use workflow" compatibility with Workflow DevKit
- Convenient, parameterized, commonly needed primitive functions backed by Mux Video for building your own media-based AI workflows and integrations: getStoryboardUrl, chunkVTTCues, fetchTranscriptForAsset

import { getSummaryAndTags } from "@mux/ai/workflows";
const result = await getSummaryAndTags("your-asset-id", {
provider: "openai",
tone: "professional",
includeTranscript: true
});
console.log(result.title); // "Getting Started with TypeScript"
console.log(result.description); // "A comprehensive guide to..."
console.log(result.tags); // ["typescript", "tutorial", "programming"]
⚠️ Important: Many workflows rely on video transcripts for best results. Consider enabling auto-generated captions on your Mux assets to unlock the full potential of transcript-based workflows like summarization, chapters, and embeddings.
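If your assets don't have captions yet, here is a minimal sketch of requesting auto-generated captions at asset creation time via the Mux REST API; the input URL and track name are placeholders:

// Create a Mux asset with auto-generated English captions.
// Assumes MUX_TOKEN_ID and MUX_TOKEN_SECRET are set in the environment.
const auth = Buffer.from(
  `${process.env.MUX_TOKEN_ID}:${process.env.MUX_TOKEN_SECRET}`
).toString("base64");

const response = await fetch("https://api.mux.com/video/v1/assets", {
  method: "POST",
  headers: {
    Authorization: `Basic ${auth}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    input: [{
      url: "https://example.com/your-video.mp4", // placeholder
      generated_subtitles: [{ language_code: "en", name: "English (generated)" }],
    }],
    playback_policy: ["public"],
  }),
});
const { data: asset } = await response.json();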
npm install @mux/ai
We support dotenv, so you can simply add the following environment variables to your .env file:
# Required
MUX_TOKEN_ID=your_mux_token_id
MUX_TOKEN_SECRET=your_mux_token_secret
# Needed if your assets _only_ have signed playback IDs
MUX_SIGNING_KEY=your_signing_key_id
MUX_PRIVATE_KEY=your_base64_encoded_private_key
# You only need to configure API keys for the AI platforms and workflows you're using
OPENAI_API_KEY=your_openai_api_key
ANTHROPIC_API_KEY=your_anthropic_api_key
GOOGLE_GENERATIVE_AI_API_KEY=your_google_api_key
ELEVENLABS_API_KEY=your_elevenlabs_api_key
# S3-Compatible Storage (required for translation & audio dubbing)
S3_ENDPOINT=https://your-s3-endpoint.com
S3_REGION=auto
S3_BUCKET=your-bucket-name
S3_ACCESS_KEY_ID=your-access-key
S3_SECRET_ACCESS_KEY=your-secret-key
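Before invoking any workflows, you can load the file and fail fast on missing credentials; a minimal sketch, assuming Node with dotenv installed:

// Load .env, then assert the always-required Mux credentials are present.
import "dotenv/config";

for (const name of ["MUX_TOKEN_ID", "MUX_TOKEN_SECRET"]) {
  if (!process.env[name]) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
}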
💡 Tip: If you're using .env in a repository or version tracking system, make sure you add this file to your .gitignore or equivalent to avoid unintentionally committing secure credentials.
| Workflow | Description | Providers | Default Models | Mux Asset Requirements | Cloud Infrastructure Requirements |
|---|---|---|---|---|---|
| getSummaryAndTags (API · Source) | Generate titles, descriptions, and tags for an asset | OpenAI, Anthropic, Google | gpt-5.1 (OpenAI), claude-sonnet-4-5 (Anthropic), gemini-2.5-flash (Google) | Video (required), Captions (optional) | None |
| getModerationScores (API · Source) | Detect inappropriate (sexual or violent) content in an asset | OpenAI, Hive | omni-moderation-latest (OpenAI) or Hive visual moderation task | Video (required) | None |
| hasBurnedInCaptions (API · Source) | Detect burned-in captions (hardcoded subtitles) in an asset | OpenAI, Anthropic, Google | gpt-5.1 (OpenAI), claude-sonnet-4-5 (Anthropic), gemini-2.5-flash (Google) | Video (required) | None |
| generateChapters (API · Source) | Generate chapter markers for an asset using the transcript | OpenAI, Anthropic, Google | gpt-5.1 (OpenAI), claude-sonnet-4-5 (Anthropic), gemini-2.5-flash (Google) | Video (required), Captions (required) | None |
| generateVideoEmbeddings (API · Source) | Generate vector embeddings for an asset's transcript chunks | OpenAI, Google | text-embedding-3-small (OpenAI), gemini-embedding-001 (Google) | Video (required), Captions (required) | None |
| translateCaptions (API · Source) | Translate an asset's captions into different languages | OpenAI, Anthropic, Google | gpt-5.1 (OpenAI), claude-sonnet-4-5 (Anthropic), gemini-2.5-flash (Google) | Video (required), Captions (required) | AWS S3 (if uploadToMux=true) |
| translateAudio (API · Source) | Create AI-dubbed audio tracks in different languages for an asset | ElevenLabs only | ElevenLabs Dubbing API | Video (required), Audio (required) | AWS S3 (if uploadToMux=true) |
All workflows are compatible with Workflow DevKit: the functions in this SDK are exported with "use workflow" and "use step" directives in the code.
If you are using Workflow DevKit in your project, then you must call workflow functions like this:
import { start } from 'workflow/api';
import { getSummaryAndTags } from '@mux/ai/workflows';
const assetId = 'YOUR_ASSET_ID';
const run = await start(getSummaryAndTags, [assetId]);
// optionally, wait for the workflow run return value:
// const result = await run.returnValue
Workflows can be nested:

import { start } from "workflow/api";
import { getSummaryAndTags } from "@mux/ai/workflows";

async function processVideoSummary(assetId: string) {
  'use workflow';
  const summary = await getSummaryAndTags(assetId);
  const emailResp = await emailSummaryToAdmins(summary);
  return { assetId, summary, emailResp };
}

async function emailSummaryToAdmins(summary: Awaited<ReturnType<typeof getSummaryAndTags>>) {
  'use step';
  // Send the summary to your admins here.
  return { sent: true };
}

// This starts the processVideoSummary workflow defined above, which in turn
// calls the getSummaryAndTags() workflow.
const assetId = "YOUR_ASSET_ID";
const run = await start(processVideoSummary, [assetId]);
Generate SEO-friendly titles, descriptions, and tags from your video content:
import { getSummaryAndTags } from "@mux/ai/workflows";
const result = await getSummaryAndTags("your-asset-id", {
provider: "openai",
tone: "professional",
includeTranscript: true
});
console.log(result.title); // "Getting Started with TypeScript"
console.log(result.description); // "A comprehensive guide to..."
console.log(result.tags); // ["typescript", "tutorial", "programming"]
Automatically detect inappropriate content in videos:
import { getModerationScores } from "@mux/ai/workflows";
const result = await getModerationScores("your-asset-id", {
provider: "openai",
thresholds: { sexual: 0.7, violence: 0.8 }
});
if (result.exceedsThreshold) {
console.log("Content flagged for review");
console.log("Max scores:", result.maxScores);
}
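From there you might gate publication on the result; a hedged sketch where markAssetForHumanReview and publishAsset are hypothetical helpers from your own codebase:

// Hypothetical helpers: swap in your own review-queue and publish logic.
async function markAssetForHumanReview(assetId: string) { /* enqueue for review */ }
async function publishAsset(assetId: string) { /* flip your published flag */ }

if (result.exceedsThreshold) {
  await markAssetForHumanReview("your-asset-id");
} else {
  await publishAsset("your-asset-id");
}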
Create automatic chapter markers for better video navigation:
import { generateChapters } from "@mux/ai/workflows";
const result = await generateChapters("your-asset-id", "en", {
provider: "anthropic"
});
// Use with Mux Player
player.addChapters(result.chapters);
// [
// { startTime: 0, title: "Introduction" },
// { startTime: 45, title: "Main Content" },
// { startTime: 120, title: "Conclusion" }
// ]
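If you'd rather ship chapters as a text track than call the player API, here is a minimal sketch of converting the result to WebVTT; it assumes each chapter runs until the next one starts (or the video ends):

// Convert { startTime, title } chapters into a WebVTT chapters file.
function toTimestamp(totalSeconds: number): string {
  const h = Math.floor(totalSeconds / 3600);
  const m = Math.floor((totalSeconds % 3600) / 60);
  const s = (totalSeconds % 60).toFixed(3).padStart(6, "0");
  return `${String(h).padStart(2, "0")}:${String(m).padStart(2, "0")}:${s}`;
}

function chaptersToVTT(chapters: { startTime: number; title: string }[], duration: number): string {
  const cues = chapters.map((chapter, i) => {
    const end = chapters[i + 1]?.startTime ?? duration; // next chapter or video end
    return `${toTimestamp(chapter.startTime)} --> ${toTimestamp(end)}\n${chapter.title}`;
  });
  return ["WEBVTT", ...cues].join("\n\n");
}

console.log(chaptersToVTT(result.chapters, 180)); // 180 = video duration in seconds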
Generate embeddings for semantic video search:
import { generateVideoEmbeddings } from "@mux/ai/workflows";
const result = await generateVideoEmbeddings("your-asset-id", {
provider: "openai",
languageCode: "en",
chunkingStrategy: {
type: "token",
maxTokens: 500,
overlap: 100
}
});
// Store embeddings in your vector database
for (const chunk of result.chunks) {
await vectorDB.insert({
embedding: chunk.embedding,
metadata: {
assetId: result.assetId,
startTime: chunk.metadata.startTime,
endTime: chunk.metadata.endTime
}
});
}
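At query time, embed the search string with the same model and rank stored chunks by cosine similarity; a minimal sketch where embedQuery is a hypothetical stand-in for your provider's embedding call:

// Hypothetical: embed the query with the same provider/model used above.
declare function embedQuery(text: string): Promise<number[]>;

function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

const queryEmbedding = await embedQuery("how do I configure the compiler?");
const ranked = [...result.chunks]
  .map((chunk) => ({ chunk, score: cosineSimilarity(chunk.embedding, queryEmbedding) }))
  .sort((a, b) => b.score - a.score);

console.log(ranked[0].chunk.metadata.startTime); // best match: jump here in the player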
Sensible defaults: gpt-5.1, claude-sonnet-4-5, and gemini-2.5-flash keep analysis costs low while maintaining high-quality results, and Intl.DisplayNames covers language names for all ISO 639-1 codes.

@mux/ai is built around two complementary abstractions:
Workflows are functions that handle complete video AI tasks end-to-end. Each workflow orchestrates the entire process: fetching video data from Mux (transcripts, thumbnails, storyboards), formatting it for AI providers, and returning structured results.
import { getSummaryAndTags } from "@mux/ai/workflows";
const result = await getSummaryAndTags("asset-id", { provider: "openai" });
Use workflows when you need battle-tested solutions for common tasks like summarization, content moderation, chapter generation, or translation.
Primitives are low-level building blocks that give you direct access to Mux video data and utilities. They provide functions for fetching transcripts, storyboards, thumbnails, and processing text—perfect for building custom workflows.
import { fetchTranscriptForAsset, getStoryboardUrl } from "@mux/ai/primitives";
const transcript = await fetchTranscriptForAsset("asset-id", "en");
const storyboard = getStoryboardUrl("playback-id", { width: 640 });
Use primitives when you need complete control over your AI prompts or want to build custom workflows not covered by the pre-built options.
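For example, a hedged sketch of a custom flow built on these primitives: fetch the transcript, pair it with a storyboard URL, and hand both to whatever model you like (the prompt and the final LLM call are left to you):

import { fetchTranscriptForAsset, getStoryboardUrl } from "@mux/ai/primitives";

// Gather the raw inputs for a custom prompt.
const transcript = await fetchTranscriptForAsset("asset-id", "en");
const storyboardUrl = getStoryboardUrl("playback-id", { width: 640 });

const prompt = [
  "You are a video analyst. Using the transcript below and the storyboard image,",
  "answer: does this video contain a product demo?",
  "",
  `Transcript:\n${transcript}`,
].join("\n");

// Send `prompt` plus the storyboard image at `storyboardUrl` to the provider
// of your choice (OpenAI, Anthropic, Google, ...) using that provider's SDK.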
// Import workflows
import { generateChapters } from "@mux/ai/workflows";
// Import primitives
import { fetchTranscriptForAsset } from "@mux/ai/primitives";
// Or import everything
import { workflows, primitives } from "@mux/ai";
You'll need to set up credentials for Mux as well as any AI provider you want to use for a particular workflow. In addition, some workflows will need other cloud-hosted access (e.g. cloud storage via AWS S3).
All workflows require a Mux API access token to interact with your video assets. If you're already logged into the dashboard, you can create a new access token here.
Required Permissions:
These permissions cover all current workflows. You can set these when creating your token in the dashboard.
💡 Tip: For security reasons, consider creating a dedicated access token specifically for your AI workflows rather than reusing existing tokens.
If your Mux assets use signed playback URLs for security, you'll need to provide signing credentials so @mux/ai can access the video data.
When needed: Only if your assets have signed playback policies enabled and no public playback ID.
How to get: create a signing key in the Mux dashboard (or via the Signing Keys API); you'll receive a signing key ID and a base64-encoded private key.
Configuration:
MUX_SIGNING_KEY=your_signing_key_id
MUX_PRIVATE_KEY=your_base64_encoded_private_key
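MUX_PRIVATE_KEY expects the base64-encoded contents of the private key; a quick Node snippet to produce it (the filename is a placeholder):

// Base64-encode the downloaded signing private key for MUX_PRIVATE_KEY.
import { readFileSync } from "node:fs";
console.log(readFileSync("signing-private-key.pem").toString("base64"));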
Different workflows support various AI providers. You only need to configure API keys for the providers you plan to use.
Used by: getSummaryAndTags, getModerationScores, hasBurnedInCaptions, generateChapters, generateVideoEmbeddings, translateCaptions
Get your API key: OpenAI API Keys
OPENAI_API_KEY=your_openai_api_key
Used by: getSummaryAndTags, hasBurnedInCaptions, generateChapters, translateCaptions
Get your API key: Anthropic Console
ANTHROPIC_API_KEY=your_anthropic_api_key
Used by: getSummaryAndTags, hasBurnedInCaptions, generateChapters, generateVideoEmbeddings, translateCaptions
Get your API key: Google AI Studio
GOOGLE_GENERATIVE_AI_API_KEY=your_google_api_key
Used by: translateAudio (audio dubbing)
Get your API key: ElevenLabs API Keys
Note: Requires a Creator plan or higher for dubbing features.
ELEVENLABS_API_KEY=your_elevenlabs_api_key
Used by: getModerationScores (alternative to OpenAI moderation)
Get your API key: Hive Console
HIVE_API_KEY=your_hive_api_key
Required for: translateCaptions, translateAudio (only if uploadToMux is true, which is the default)
Translation workflows need temporary storage to upload translated files before attaching them to your Mux assets. Any S3-compatible storage service works (AWS S3, Cloudflare R2, DigitalOcean Spaces, etc.).
AWS S3 Setup:
- s3:PutObject, s3:GetObject, and s3:PutObjectAcl permissions for your bucket

Configuration:
S3_ENDPOINT=https://s3.amazonaws.com # Or your S3-compatible endpoint
S3_REGION=us-east-1 # Your bucket region
S3_BUCKET=your-bucket-name
S3_ACCESS_KEY_ID=your-access-key
S3_SECRET_ACCESS_KEY=your-secret-key
Cloudflare R2 Example:
S3_ENDPOINT=https://your-account-id.r2.cloudflarestorage.com
S3_REGION=auto
S3_BUCKET=your-bucket-name
S3_ACCESS_KEY_ID=your-r2-access-key
S3_SECRET_ACCESS_KEY=your-r2-secret-key
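Before running a translation workflow, you can confirm the bucket is reachable with these credentials; a minimal sketch using @aws-sdk/client-s3, which also works against R2 and other S3-compatible endpoints:

// Fail fast if the configured bucket rejects these credentials.
import { S3Client, HeadBucketCommand } from "@aws-sdk/client-s3";

const s3 = new S3Client({
  endpoint: process.env.S3_ENDPOINT,
  region: process.env.S3_REGION,
  credentials: {
    accessKeyId: process.env.S3_ACCESS_KEY_ID!,
    secretAccessKey: process.env.S3_SECRET_ACCESS_KEY!,
  },
});

await s3.send(new HeadBucketCommand({ Bucket: process.env.S3_BUCKET! }));
console.log("S3 bucket is reachable");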
We welcome contributions! Whether you're fixing bugs, adding features, or improving documentation, we'd love your help.
Please see our Contributing Guide for details.
For questions or discussions, feel free to open an issue.