This project is a real-time transcription application that uses the OpenAI Whisper model to convert speech input into text output. It can be used to transcribe both live audio input from a microphone and pre-recorded audio files.
bash scripts/setup.sh
pip install whisper-live
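To verify the installation, a quick import check; a minimal sketch assuming the package installed into your active Python environment:
python3 -c "from whisper_live.client import TranscriptionClient; print('whisper-live OK')"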
The server supports two backends: faster_whisper and tensorrt. If running the tensorrt backend, follow the TensorRT_whisper readme.
python3 run_server.py --port 9090 \
--backend faster_whisper
# running with custom model
python3 run_server.py --port 9090 \
--backend faster_whisper \
-fw "/path/to/custom/faster/whisper/model"
# Run English only model
python3 run_server.py -p 9090 \
-b tensorrt \
-trt /home/TensorRT-LLM/examples/whisper/whisper_small_en
# Run Multilingual model
python3 run_server.py -p 9090 \
-b tensorrt \
-trt /home/TensorRT-LLM/examples/whisper/whisper_small \
-m
To control the number of threads used by OpenMP, you can set the OMP_NUM_THREADS environment variable. This is useful for managing CPU resources and ensuring consistent performance. If not specified, OMP_NUM_THREADS is set to 1 by default. You can change this using the --omp_num_threads argument:
python3 run_server.py --port 9090 \
--backend faster_whisper \
--omp_num_threads 4
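Internally, a flag like this typically just sets the environment variable before any OpenMP-backed library spins up its thread pool; a hypothetical sketch of that pattern, not the project's actual code:
import os

def apply_omp_num_threads(omp_num_threads: int = 1) -> None:
    # OMP_NUM_THREADS must be set before OpenMP-backed libraries
    # create their thread pools, so do this at process startup.
    os.environ["OMP_NUM_THREADS"] = str(omp_num_threads)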
By default, when running the server without specifying a model, the server will instantiate a new Whisper model for every client connection. This has the advantage that the server can use different model sizes, based on the client's requested model size. On the other hand, it also means you have to wait for the model to be loaded upon client connection, and (V)RAM usage increases.
When serving a custom TensorRT model using the -trt option or a custom faster_whisper model using the -fw option, the server will instead instantiate the custom model only once and then reuse it for all client connections. If you don't want this, set --no_single_model.
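For example, to serve a custom faster_whisper model while keeping per-client instantiation, a sketch combining only the flags shown above:
# Serve a custom model but still load a fresh instance per client
python3 run_server.py --port 9090 \
--backend faster_whisper \
-fw "/path/to/custom/faster/whisper/model" \
--no_single_model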
The TranscriptionClient accepts the following options:
lang: Language of the input audio; applicable only if using a multilingual model.
translate: If set to True, translate from any language to en.
model: Whisper model size.
use_vad: Whether to use Voice Activity Detection on the server.
save_output_recording: Set to True to save the microphone input as a .wav file during live transcription. This option is helpful for recording sessions for later playback or analysis. Defaults to False.
output_recording_filename: Specifies the .wav file path where the microphone input will be saved if save_output_recording is set to True.
from whisper_live.client import TranscriptionClient
client = TranscriptionClient(
  "localhost",
  9090,
  lang="en",
  translate=False,
  model="small",
  use_vad=False,
  save_output_recording=True,  # Only used for microphone input; False by default
  output_recording_filename="./output_recording.wav"  # Only used for microphone input
)
The client connects to the server running on localhost at port 9090. With a multilingual model, the language of the transcription will be detected automatically. You can also use the lang option to specify the target language for the transcription, in this case English ("en"). Set translate to True to translate from the source language to English, or to False to transcribe in the source language.
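For instance, to translate German speech into English text, a sketch using only the options listed above:
client = TranscriptionClient(
  "localhost",
  9090,
  lang="de",        # source language of the input audio (hypothetical example)
  translate=True,   # output English text instead of a German transcript
  model="small",
)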
client("tests/jfk.wav")
client()
client(rtsp_url="rtsp://admin:admin@192.168.0.1/rtsp")
client(hls_url="http://as-hls-ww-live.akamaized.net/pool_904/live/ww/bbc_1xtra/bbc_1xtra.isml/bbc_1xtra-audio%3d96000.norewind.m3u8")
GPU
# Faster-Whisper backend
docker run -it --gpus all -p 9090:9090 ghcr.io/collabora/whisperlive-gpu:latest
# TensorRT backend (drops into a shell inside the container)
docker run -p 9090:9090 --runtime=nvidia --gpus all --entrypoint /bin/bash -it ghcr.io/collabora/whisperlive-tensorrt
# Build small.en engine
bash build_whisper_tensorrt.sh /app/TensorRT-LLM-examples small.en
# Run server with small.en
python3 run_server.py --port 9090 \
--backend tensorrt \
--trt_model_path "/app/TensorRT-LLM-examples/whisper/whisper_small_en"
CPU
docker run -it -p 9090:9090 ghcr.io/collabora/whisperlive-cpu:latest
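Once a container is running, you can point the client shown earlier at it; a minimal sketch assuming the server is reachable on localhost:
from whisper_live.client import TranscriptionClient

# Connect to the dockerized server published on port 9090
client = TranscriptionClient("localhost", 9090, lang="en", model="small")
client("tests/jfk.wav")  # transcribe a local audio file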
Note: By default, we use the "small" model size. To build a Docker image for a different model size, change the size in server.py and then build the Docker image.
We are available to help you with both Open Source and proprietary AI projects. You can reach us via the Collabora website or at vineet.suryan@collabora.com and marcus.edel@collabora.com.
@article{Whisper,
  title = {Robust Speech Recognition via Large-Scale Weak Supervision},
  url = {https://arxiv.org/abs/2212.04356},
  author = {Radford, Alec and Kim, Jong Wook and Xu, Tao and Brockman, Greg and McLeavey, Christine and Sutskever, Ilya},
  publisher = {arXiv},
  year = {2022},
}
@misc{SileroVAD,
  author = {Silero Team},
  title = {Silero VAD: pre-trained enterprise-grade Voice Activity Detector (VAD), Number Detector and Language Classifier},
  year = {2021},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/snakers4/silero-vad}},
  email = {hello@silero.ai}
}