
Uses Whisper AI to transcribe speech from video and audio files. Also accepts URLs for YouTube, Rumble, BitChute, clear file links, etc.
800+ ⭐'s because this app just works! Works great on Windows and Mac. This Whisper front-end app is the only one to generate a speaker.json
file, which partitions the conversation by who is speaking.
Turbo Mac acceleration using the new lightning-whisper-mlx backend.
This is a community contribution by https://github.com/aj47. On behalf of all the Mac users, thank you!
This new backend replaces the older mps whisper backend (mps only supports English) and adds support for --initial_prompt.
# Mac accelerated back-end
transcribe-anything https://www.youtube.com/watch?v=dQw4w9WgXcQ --device mlx
Special thanks for this contribution! The new Mac acceleration option uses the lightning-whisper-mlx backend. Enable with --device mlx. It now supports multiple languages, custom vocabulary via --initial_prompt, and both transcribe/translate tasks. 10x faster than Whisper CPP, 4x faster than previous MLX implementations!
Model Storage: MLX models are now stored in ~/.cache/whisper/mlx_models/ for consistency with other backends, instead of cluttering your current working directory.
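If you're curious which MLX models have been downloaded, you can simply inspect that cache directory:
# List cached MLX models
ls ~/.cache/whisper/mlx_models/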
GPU Accelerated Dockerfile
Recently added in 3.0.10 is a GPU-accelerated Dockerfile.
If you are doing translations at scale, check out the sister project: https://github.com/zackees/transcribe-everything.
You can pull the docker image like so:
docker pull niteris/transcribe-anything
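If your host has an Nvidia GPU, a typical invocation might look like the sketch below; the entrypoint, mount point, and arguments here are assumptions, so adjust them to match the image's documentation.
# Hypothetical example: expose the GPU and mount the current directory for input/output
docker run --gpus all -v "$(pwd)":/data niteris/transcribe-anything /data/video.mp4 --device insane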
Easiest whisper implementation to install and use. Just install with pip install transcribe-anything
. All whisper backends are executed in an isolated environment. GPU acceleration is automatic, using the blazingly fast insanely-fast-whisper as the backend for --device insane
. This is the only tool that optionally produces a speaker.json
file, representing speaker-assigned text that has been de-chunkified.
Hardware acceleration on Windows/Linux via --device insane
Mac ARM (Apple Silicon) acceleration via --device mlx (now with multi-language support and custom vocabulary)
Input a local file or YouTube/Rumble URL and this tool will transcribe it using Whisper AI into subtitle files and raw text.
Uses Whisper AI, so this is a state-of-the-art transcription/translation service - completely free. 🤯🤯🤯
Your data stays private and is not uploaded to any service.
The new version now has state-of-the-art transcription speed, thanks to the new --device insane backend, as well as producing a speaker.json file.
pip install transcribe-anything
# Basic usage - CPU mode (works everywhere, slower)
transcribe-anything https://www.youtube.com/watch?v=dQw4w9WgXcQ
# GPU accelerated (Windows/Linux)
transcribe-anything https://www.youtube.com/watch?v=dQw4w9WgXcQ --device insane
# Mac Apple Silicon accelerated
transcribe-anything https://www.youtube.com/watch?v=dQw4w9WgXcQ --device mlx
# Advanced options (see Advanced Options section below for full details)
transcribe-anything video.mp4 --device mlx --batch_size 16 --verbose
transcribe-anything video.mp4 --device insane --batch-size 8 --flash True
Python API
from transcribe_anything import transcribe_anything

transcribe_anything(
    url_or_file="https://www.youtube.com/watch?v=dQw4w9WgXcQ",
    output_dir="output_dir",
    task="transcribe",
    model="large",
    device="cuda",
)

# Full function signature:
def transcribe(
    url_or_file: str,
    output_dir: Optional[str] = None,
    model: Optional[str] = None,  # tiny, small, medium, large
    task: Optional[str] = None,  # transcribe or translate
    language: Optional[str] = None,  # auto detected if None, "en" for English...
    device: Optional[str] = None,  # cuda, cpu, insane, mlx
    embed: bool = False,  # Produces a video.mp4 with the subtitles burned in.
    hugging_face_token: Optional[str] = None,  # If you want a speaker.json
    other_args: Optional[list[str]] = None,  # Other args to be passed to the whisper backend
    initial_prompt: Optional[str] = None,  # Custom prompt for better recognition of specific terms
) -> str:
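As a rough sketch based on the signature above, using the same import as the earlier example (the token, file paths, and prompt below are placeholders, not values from this project):
from transcribe_anything import transcribe_anything

# Sketch only: paths, token, and prompt are placeholders, based on the signature above.
out_dir = transcribe_anything(
    url_or_file="video.mp4",
    output_dir="output_dir",
    model="large",
    task="transcribe",
    language="en",
    device="insane",
    embed=True,  # burn subtitles into an output video
    hugging_face_token="hf_your_token_here",  # enables diarization / speaker.json
    initial_prompt="PyTorch, TensorFlow, neural networks",  # bias recognition toward these terms
)
print(out_dir)  # returns a path string per the signature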
insane mode with model large-v3 + batching
This is by far the fastest combination. It is experimental, and it produces text that tends to be lower quality: it's unclear if this is due to batching or to large-v3 itself. More testing is needed. If you try this, please let us know the results by filing a bug on the issues page.
Large batch sizes require significant amounts of Nvidia GPU RAM. For a 12 GB card, it's been experimentally shown that batch-size=8 will work on all videos from an internally tested data lake.
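Concretely, the combination described above looks like this (batch size chosen per the 12 GB guidance):
# Experimental: fastest combination, but text quality may be lower
transcribe-anything video.mp4 --device insane --model large-v3 --batch-size 8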
cuda platforms
If you pass in --device insane on a CUDA platform then this tool will use this state-of-the-art version of whisper: https://github.com/Vaibhavs10/insanely-fast-whisper, which is MUCH faster and has a pipeline for speaker identification (diarization) using the --hf_token option.
Compatible with Python 3.10 and above. Backends use an isolated environment with pinned requirements and python version.
When diarization is enabled via --hf_token (Hugging Face token), the output json will contain speaker info labeled as SPEAKER_00, SPEAKER_01, etc. For licensing agreement reasons, you must get your own Hugging Face token if you want to enable this feature. There is also an additional step to agree to the user policies for the pyannote.audio model located here: https://huggingface.co/pyannote/segmentation-3.0. If you don't do this then you'll see runtime exceptions from pyannote when the --hf_token is used.
What's special to this app is that we also generate a speaker.json, which is a de-chunkified version of the speaker section of the output json.
[
  {
    "speaker": "SPEAKER_00",
    "timestamp": [0.0, 7.44],
    "text": "for that. But welcome, Zach Vorhees. Great to have you back on. Thank you, Matt. Craving me back onto your show. Man, we got a lot to talk about.",
    "reason": "beginning"
  },
  {
    "speaker": "SPEAKER_01",
    "timestamp": [7.44, 33.52],
    "text": "Oh, we do. 2023 was the year that OpenAI released, you know, chat GPT-4, which I think most people would say has surpassed average human intelligence, at least in test taking, perhaps not in, you know, reasoning and things like that. But it was a major year for AI. I think that most people are behind the curve on this. What's your take of what just happened in the last 12 months and what it means for the future of human cognition versus machine cognition?",
    "reason": "speaker-switch"
  },
  {
    "speaker": "SPEAKER_00",
    "timestamp": [33.52, 44.08],
    "text": "Yeah. Well, you know, at the beginning of 2023, we had a pretty weak AI system, which was a chat GPT 3.5 turbo was the best that we had. And then between the beginning of last",
    "reason": "speaker-switch"
  }
]
Note that speaker.json is only generated when using --device insane, and not for --device cuda nor --device cpu.
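Because speaker.json is plain JSON with the shape shown above, post-processing it is straightforward. Here is a minimal sketch (the file path is a placeholder; field names come from the example above):
import json

# Minimal sketch: print each de-chunkified segment, grouped by speaker.
# "speaker", "timestamp", and "text" match the example above; the path is a placeholder.
with open("output_dir/speaker.json", "r", encoding="utf-8") as f:
    segments = json.load(f)

for seg in segments:
    start, end = seg["timestamp"]
    print(f'{seg["speaker"]} [{start:.1f}s-{end:.1f}s]: {seg["text"]}')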
cuda vs insane
Insane mode eats up a lot of memory, and it's common to get out-of-memory errors while transcribing. For example, on a 12 GB Nvidia 3060 card, out-of-memory errors are common for big content. If you experience this then pass in --batch-size 8 or smaller. Note that any arguments not recognized by transcribe-anything are passed on to the backend transcriber.
Also, please don't use distil-whisper/distil-large-v2; it produces extremely bad stuttering, and it's not entirely clear why. I've had to switch it out of production environments because it's so bad. It's also non-deterministic, so I think that somehow a fallback non-zero temperature is being used, which produces these stutterings.
cuda is the original AI model supplied by OpenAI. It's more stable but MUCH slower. It also won't produce a speaker.json file like the one shown above.
--embed: this app will optionally embed ("burn") subtitles directly into an output video.
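For example (per the release notes below, this currently works on local mp4 files only):
# Transcribe and burn the subtitles into an output video
transcribe-anything video.mp4 --embed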
This front-end app for whisper boasts the easiest install in the whisper ecosystem, thanks to isolated-environment. You can simply install it with pip, like this:
pip install transcribe-anything
We have a Dockerfile that is decently fast at startup. It is tuned specifically for device=insane. If you have extremely large batches of data you'd like to convert all at once, then consider using the sister project transcribe-everything, which operates on entire remote path hierarchies.
GPU acceleration will be automatically enabled for Windows and Linux. Mac users can use --device mlx for hardware acceleration on Apple Silicon. --device insane may also work on Mac M1+ but has been less tested.
Windows/Linux: --device insane
Mac: --device mlx
| Backend | Device Flag | Key Arguments | Best For |
|---|---|---|---|
| MLX | --device mlx | --batch_size, --verbose, --initial_prompt | Mac Apple Silicon |
| Insanely Fast | --device insane | --batch-size, --hf_token, --flash, --timestamp | Windows/Linux GPU |
| CPU | --device cpu | Standard whisper args | Universal compatibility |
Note: Each backend has different capabilities. MLX is optimized for Apple Silicon with a focused feature set. Insanely Fast uses a transformer-based architecture with specific options. CPU backend supports the full range of standard OpenAI Whisper arguments.
Whisper supports custom prompts to improve transcription accuracy for domain-specific vocabulary, names, or technical terms. This is especially useful when transcribing content with specialized terminology:
# Direct prompt
transcribe-anything video.mp4 --initial_prompt "The speaker discusses artificial intelligence, machine learning, and neural networks."
# Load prompt from file
transcribe-anything video.mp4 --prompt_file my_custom_prompt.txt
from transcribe_anything import transcribe
# Direct prompt
transcribe(
url_or_file="video.mp4",
initial_prompt="The speaker discusses AI, PyTorch, TensorFlow, and deep learning algorithms."
)
# Load prompt from file
with open("my_prompt.txt", "r") as f:
prompt = f.read()
transcribe(
url_or_file="video.mp4",
initial_prompt=prompt
)
The MLX backend supports additional arguments for fine-tuning performance:
# Adjust batch size for better performance/memory trade-off
transcribe-anything video.mp4 --device mlx --batch_size 24
# Enable verbose output for debugging
transcribe-anything video.mp4 --device mlx --verbose
# Use custom prompt for better recognition of specific terms
transcribe-anything video.mp4 --device mlx --initial_prompt "The speaker discusses AI, machine learning, and neural networks."
| Argument | Type | Default | Description |
|---|---|---|---|
| --batch_size | int | 12 | Batch size for processing. Higher values use more memory but may be faster |
| --verbose | flag | false | Enable verbose output for debugging |
| --initial_prompt | string | None | Custom vocabulary/context prompt for better recognition |
The MLX backend supports these whisper models optimized for Apple Silicon:
Standard models: tiny, small, base, medium, large, large-v2, large-v3
Distilled models: distil-small.en, distil-medium.en, distil-large-v2, distil-large-v3
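Any of these can be selected with the usual --model flag; for example, a distilled model on Apple Silicon might be chosen like this:
# Use a distilled model with the MLX backend for extra speed
transcribe-anything video.mp4 --device mlx --model distil-large-v3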
Note: The MLX backend uses the lightning-whisper-mlx library, which has a focused feature set optimized for Apple Silicon. Advanced whisper options like --temperature and --word_timestamps are not currently supported by this backend.
The insanely-fast-whisper backend supports these specific options:
# Adjust batch size (critical for GPU memory management)
transcribe-anything video.mp4 --device insane --batch-size 8
# Use different model variants
transcribe-anything video.mp4 --device insane --model large-v3
# Enable Flash Attention 2 for faster processing
transcribe-anything video.mp4 --device insane --flash True
# Enable speaker diarization with HuggingFace token
transcribe-anything video.mp4 --device insane --hf_token your_token_here
# Specify exact number of speakers
transcribe-anything video.mp4 --device insane --hf_token your_token --num-speakers 3
# Set speaker range
transcribe-anything video.mp4 --device insane --hf_token your_token --min-speakers 2 --max-speakers 5
# Choose timestamp granularity
transcribe-anything video.mp4 --device insane --timestamp chunk # default
transcribe-anything video.mp4 --device insane --timestamp word # word-level
| Argument | Type | Default | Description |
|---|---|---|---|
| --batch-size | int | 24 | Batch size for processing. Critical for GPU memory management |
| --flash | bool | false | Use Flash Attention 2 for faster processing |
| --timestamp | choice | chunk | Timestamp granularity: "chunk" or "word" |
| --hf_token | string | None | HuggingFace token for speaker diarization |
| --num-speakers | int | None | Exact number of speakers (cannot use with min/max) |
| --min-speakers | int | None | Minimum number of speakers |
| --max-speakers | int | None | Maximum number of speakers |
| --diarization_model | string | pyannote/speaker-diarization | Diarization model to use |
Note: The insanely-fast-whisper backend uses a different architecture than standard OpenAI Whisper. It does NOT support standard whisper arguments like --temperature, --beam_size, --best_of, etc. These are specific to the OpenAI implementation.
The CPU backend uses the standard OpenAI Whisper implementation and supports many additional arguments:
# Language and task options (also available as main arguments)
transcribe-anything video.mp4 --device cpu --language es --task translate
# Generation parameters
transcribe-anything video.mp4 --device cpu --temperature 0.1 --best_of 5 --beam_size 5
# Quality thresholds
transcribe-anything video.mp4 --device cpu --compression_ratio_threshold 2.4 --logprob_threshold -1.0
# Output formatting
transcribe-anything video.mp4 --device cpu --word_timestamps --highlight_words True
# Audio processing
transcribe-anything video.mp4 --device cpu --threads 4 --clip_timestamps "0,30"
Note: The CPU backend supports most standard OpenAI Whisper arguments. These are passed through automatically and documented in the OpenAI Whisper repository.
MLX Backend (--device mlx):
Insanely Fast Whisper (--device insane):
--flash True for better memory efficiency
# Basic transcription
transcribe-anything https://www.youtube.com/watch?v=dQw4w9WgXcQ
# Local file
transcribe-anything video.mp4
# Basic MLX usage
transcribe-anything video.mp4 --device mlx
# MLX with custom batch size and verbose output
transcribe-anything video.mp4 --device mlx --batch_size 16 --verbose
# MLX with custom prompt for technical content
transcribe-anything lecture.mp4 --device mlx --initial_prompt "The speaker discusses machine learning, neural networks, PyTorch, and TensorFlow."
# MLX with multiple options (using main arguments for language/task)
transcribe-anything video.mp4 --device mlx --batch_size 20 --verbose --task translate --language es
# Basic insane mode
transcribe-anything video.mp4 --device insane
# Insane mode with custom batch size (important for GPU memory)
transcribe-anything video.mp4 --device insane --batch-size 8
# Insane mode with Flash Attention 2 for speed
transcribe-anything video.mp4 --device insane --batch-size 12 --flash True
# Insane mode with speaker diarization
transcribe-anything video.mp4 --device insane --hf_token your_huggingface_token
# Insane mode with word-level timestamps and speaker diarization
transcribe-anything video.mp4 --device insane --timestamp word --hf_token your_token --num-speakers 3
# High-performance setup with all optimizations
transcribe-anything video.mp4 --device insane --batch-size 16 --flash True --timestamp word
# CPU mode (works everywhere, slower)
transcribe-anything video.mp4 --device cpu
# CPU with custom model and language
transcribe-anything video.mp4 --device cpu --model medium --language fr --task transcribe
If you encounter GPU out-of-memory errors:
# Reduce batch size for MLX
transcribe-anything video.mp4 --device mlx --batch_size 8
# Reduce batch size for insane mode
transcribe-anything video.mp4 --device insane --batch-size 4
# Use smaller model
transcribe-anything video.mp4 --device insane --model small --batch-size 8
For better quality:
# Use larger model
transcribe-anything video.mp4 --device insane --model large-v3
# Enable Flash Attention 2 for better performance
transcribe-anything video.mp4 --device insane --flash True
# Use custom prompt for domain-specific content (works with all backends)
transcribe-anything video.mp4 --initial_prompt "Medical terminology: diagnosis, treatment, symptoms, patient care"
# For CPU backend, you can use standard whisper quality options
transcribe-anything video.mp4 --device cpu --compression_ratio_threshold 2.0 --logprob_threshold -0.5
For faster processing:
# Increase batch size (if you have enough GPU memory)
transcribe-anything video.mp4 --device mlx --batch_size 24
transcribe-anything video.mp4 --device insane --batch-size 16
# Enable Flash Attention 2 for insane mode (significant speedup)
transcribe-anything video.mp4 --device insane --flash True --batch-size 16
# Use smaller model for speed
transcribe-anything video.mp4 --device insane --model small
# Use distilled models for even faster processing
transcribe-anything video.mp4 --device insane --model distil-whisper/large-v2 --flash True
Will output:
Detecting language using up to the first 30 seconds. Use `--language` to specify the language
Detected language: English
[00:00.000 --> 00:27.000] We're no strangers to love, you know the rules, and so do I
[00:27.000 --> 00:31.000] I've built commitments while I'm thinking of
[00:31.000 --> 00:35.000] You wouldn't get this from any other guy
[00:35.000 --> 00:40.000] I just wanna tell you how I'm feeling
[00:40.000 --> 00:43.000] Gotta make you understand
[00:43.000 --> 00:45.000] Never gonna give you up
[00:45.000 --> 00:47.000] Never gonna let you down
[00:47.000 --> 00:51.000] Never gonna run around and desert you
[00:51.000 --> 00:53.000] Never gonna make you cry
[00:53.000 --> 00:55.000] Never gonna say goodbye
[00:55.000 --> 00:58.000] Never gonna tell a lie
[00:58.000 --> 01:00.000] And hurt you
[01:00.000 --> 01:04.000] We've known each other for so long
[01:04.000 --> 01:09.000] Your heart's been aching but you're too shy to say it
[01:09.000 --> 01:13.000] Inside we both know what's been going on
[01:13.000 --> 01:17.000] We know the game and we're gonna play it
[01:17.000 --> 01:22.000] And if you ask me how I'm feeling
[01:22.000 --> 01:25.000] Don't tell me you're too much to see
[01:25.000 --> 01:27.000] Never gonna give you up
[01:27.000 --> 01:29.000] Never gonna let you down
[01:29.000 --> 01:33.000] Never gonna run around and desert you
[01:33.000 --> 01:35.000] Never gonna make you cry
[01:35.000 --> 01:38.000] Never gonna say goodbye
[01:38.000 --> 01:40.000] Never gonna tell a lie
[01:40.000 --> 01:42.000] And hurt you
[01:42.000 --> 01:44.000] Never gonna give you up
[01:44.000 --> 01:46.000] Never gonna let you down
[01:46.000 --> 01:50.000] Never gonna run around and desert you
[01:50.000 --> 01:52.000] Never gonna make you cry
[01:52.000 --> 01:54.000] Never gonna say goodbye
[01:54.000 --> 01:57.000] Never gonna tell a lie
[01:57.000 --> 01:59.000] And hurt you
[02:08.000 --> 02:10.000] Never gonna give
[02:12.000 --> 02:14.000] Never gonna give
[02:16.000 --> 02:19.000] We've known each other for so long
[02:19.000 --> 02:24.000] Your heart's been aching but you're too shy to say it
[02:24.000 --> 02:28.000] Inside we both know what's been going on
[02:28.000 --> 02:32.000] We know the game and we're gonna play it
[02:32.000 --> 02:37.000] I just wanna tell you how I'm feeling
[02:37.000 --> 02:40.000] Gotta make you understand
[02:40.000 --> 02:42.000] Never gonna give you up
[02:42.000 --> 02:44.000] Never gonna let you down
[02:44.000 --> 02:48.000] Never gonna run around and desert you
[02:48.000 --> 02:50.000] Never gonna make you cry
[02:50.000 --> 02:53.000] Never gonna say goodbye
[02:53.000 --> 02:55.000] Never gonna tell a lie
[02:55.000 --> 02:57.000] And hurt you
[02:57.000 --> 02:59.000] Never gonna give you up
[02:59.000 --> 03:01.000] Never gonna let you down
[03:01.000 --> 03:05.000] Never gonna run around and desert you
[03:05.000 --> 03:08.000] Never gonna make you cry
[03:08.000 --> 03:10.000] Never gonna say goodbye
[03:10.000 --> 03:12.000] Never gonna tell a lie
[03:12.000 --> 03:14.000] And hurt you
[03:14.000 --> 03:16.000] Never gonna give you up
[03:16.000 --> 03:23.000] If you want, never gonna let you down Never gonna run around and desert you
[03:23.000 --> 03:28.000] Never gonna make you hide Never gonna say goodbye
[03:28.000 --> 03:42.000] Never gonna tell you I ain't ready
from transcribe_anything.api import transcribe
transcribe(
url_or_file="https://www.youtube.com/watch?v=dQw4w9WgXcQ",
output_dir="output_dir",
)
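The exact output filenames depend on the backend, so a quick way to see what a run produced is simply to list the output directory (a small sketch, not part of the library):
import os

# List whatever the chosen backend wrote (subtitle, text, and json filenames vary by backend)
for name in sorted(os.listdir("output_dir")):
    print(name)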
Works for Ubuntu/MacOS/Win32 (in git-bash). This will create a virtual environment:
> cd transcribe_anything
> ./install.sh
# Enter the environment:
> source activate.sh
The environment is now active, and the next step will only install to the local python. If the terminal is closed, then to get back into the environment, cd transcribe_anything and execute source activate.sh.
pip install transcribe-anything
transcribe_anything will magically become available:
transcribe_anything <YOUTUBE_URL>
transcribe-anything now works much better across different configurations and is now much faster. Why? I switched the environment isolation from my own homespun version built on top of venv to the AMAZING uv system. The biggest improvement is the runtime speed and re-installs; uv is just insanely fast at checking the environment. It also turns out that uv has strict package dependency checking, which found a minor bug where a certain version of one of the pytorch dependencies was being constantly re-installed because of a dependency conflict that pip was apparently perfectly happy to never warn about. This manifested as certain packages being constantly re-installed with the previous version. uv identified this as an error immediately, and it was fixed.
The real reason behind transcribe-anything's surprising popularity is that it just works. And the reason for this is that I can isolate environments for different configurations and install them lazily. If you have the same problem then consider my other tool: https://github.com/zackees/iso-env
Release notes:
- --device mlx. Now supports multiple languages, custom vocabulary via --initial_prompt, and both transcribe/translate tasks. 10x faster than Whisper CPP!
- --device mps (now --device mlx). Only does english, but is quite fast.
- uv, should fix the missing dll's on some windows systems.
- --hf-token usage for insanely fast whisper backend.
- ffmpeg commands are now static_ffmpeg commands. Fixes issue.
- --device insane, bad entries will be skipped but warn.
- pytorch-audio upgrades broke this package. Upgrade to latest version to resolve.
- distil-whisper/distil-large-v2
- --device insane and python 3.11 installing wrong insanely-fast-whisper version.
- transcribe-anything on Linux.
- --device insane. Added tests to ensure this.
- --save_hf_token
- speaker.json when diarization is enabled.
- --device insane now generates a *.vtt translation file
- --device insane
- --device insane. All tests pass.
- --device insane, write out the error.json file into the destination.
- --device insane now generates better conforming srt files.
- insane mode backend.
- insanely-fast-whisper, enable by using --device insane
- isolated-environment. This will also prevent interference with different versions of torch for other AI tools.
- --model large now aliases to --model large-v3. Use --model large-legacy to use original large model.
- cpu device if gpu device is not compatible.
- --embed to burn the subtitles into the video itself. Only works on local mp4 files at the moment.
- out.mp3 and instead use a temporary wav file, as that is faster to process. --no-keep-audio has now been removed.
- --output_dir not being respected.
- install_cuda.sh -> install_cuda.py
- install_cuda.sh script to enable.