
A real-time speech-to-text translation system built on a modular server–client architecture.
The diagram omits finer details.
Before running the project, you need to install the following system dependencies:

```bash
# Debian/Ubuntu
sudo apt-get install portaudio19-dev

# macOS
brew install portaudio
```
(RECOMMENDED) Install this package inside a virtual environment to avoid dependency conflicts:

```bash
python -m venv .venv
source .venv/bin/activate
```
Install the PyPI package:

```bash
pip install live-translation
```
Verify the installation:

```bash
python -c "import live_translation; print(f'live-translation installed successfully\n{live_translation.__version__}')"
```
NOTE: Warnings like the following, which may appear on Linux systems when the client tries to open the mic, can be safely ignored:

```
ALSA lib pcm_dsnoop.c:567:(snd_pcm_dsnoop_open) unable to open slave
ALSA lib pcm_dmix.c:1000:(snd_pcm_dmix_open) unable to open slave
ALSA lib pcm.c:2722:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.rear
ALSA lib pcm.c:2722:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.center_lfe
ALSA lib pcm.c:2722:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.side
ALSA lib pcm_dmix.c:1000:(snd_pcm_dmix_open) unable to open slave
Cannot connect to server socket err = No such file or directory
Cannot connect to server request channel
jack server is not running or cannot be started
JackShmReadWritePtr::~JackShmReadWritePtr - Init not done for -1, skipping unlock
JackShmReadWritePtr::~JackShmReadWritePtr - Init not done for -1, skipping unlock
```
The demo can be run directly from the command line:

NOTE: This is a convenience demo CLI tool that runs both the server and the client with default configs. It should only be used for a quick demo; it's highly recommended to start a separate server and client for full customization, as shown below.

```bash
live-translate-demo
```
The server can be run directly from the command line:

NOTE: Running the server for the first time will download the required models into the cache folder (e.g. `~/.cache` on Linux). The downloads during the first run might clutter the terminal and scatter the initial server logs; it is advised to rerun the server after all models finish downloading for a better view of the initial server logs.

```bash
live-translate-server [OPTIONS]
```

Available `[OPTIONS]`:
```text
usage: live-translate-server [-h] [--silence_threshold SILENCE_THRESHOLD]
                             [--vad_aggressiveness {0,1,2,3,4,5,6,7,8,9}]
                             [--max_buffer_duration {5,6,7,8,9,10}] [--codec {pcm,opus}]
                             [--device {cpu,cuda}]
                             [--whisper_model {tiny,base,small,medium,large,large-v2,large-v3,large-v3-turbo}]
                             [--trans_model {Helsinki-NLP/opus-mt,Helsinki-NLP/opus-mt-tc-big}]
                             [--src_lang SRC_LANG] [--tgt_lang TGT_LANG] [--log {print,file}]
                             [--ws_port WS_PORT] [--transcribe_only] [--version]

Live Translation Server - Configure runtime settings.

options:
  -h, --help            show this help message and exit
  --silence_threshold SILENCE_THRESHOLD
                        Number of consecutive seconds of SILENCE to detect.
                        SILENCE clears the audio buffer for transcription/translation.
                        NOTE: Minimum value is 1.5.
                        Default is 2.
  --vad_aggressiveness {0,1,2,3,4,5,6,7,8,9}
                        Voice Activity Detection (VAD) aggressiveness level (0-9).
                        Higher values mean VAD has to be more confident to detect speech vs silence.
                        Default is 8.
  --max_buffer_duration {5,6,7,8,9,10}
                        Max audio buffer duration in seconds before trimming it.
                        Default is 7 seconds.
  --codec {pcm,opus}    Audio codec for WebSocket communication ('pcm', 'opus').
                        Default is 'opus'.
  --device {cpu,cuda}   Device for processing ('cpu', 'cuda').
                        Default is 'cpu'.
  --whisper_model {tiny,base,small,medium,large,large-v2,large-v3,large-v3-turbo}
                        Whisper model size ('tiny', 'base', 'small', 'medium', 'large', 'large-v2', 'large-v3', 'large-v3-turbo').
                        NOTE: Running large models like 'large-v3' or 'large-v3-turbo' might require a decent GPU with CUDA support for reasonable performance.
                        NOTE: 'large-v3-turbo' has great accuracy while being significantly faster than the original 'large-v3' model. See: https://github.com/openai/whisper/discussions/2363
                        Default is 'base'.
  --trans_model {Helsinki-NLP/opus-mt,Helsinki-NLP/opus-mt-tc-big}
                        Translation model ('Helsinki-NLP/opus-mt', 'Helsinki-NLP/opus-mt-tc-big').
                        NOTE: Don't include source and target languages here.
                        Default is 'Helsinki-NLP/opus-mt'.
  --src_lang SRC_LANG   Source/input language for transcription (e.g., 'en', 'fr').
                        Default is 'en'.
  --tgt_lang TGT_LANG   Target language for translation (e.g., 'es', 'de').
                        Default is 'es'.
  --log {print,file}    Optional logging mode for saving transcription output.
                        - 'file': Save each result to a structured .jsonl file in ./transcripts/transcript_{TIMESTAMP}.jsonl.
                        - 'print': Print each result to stdout.
                        Default is None (no logging).
  --ws_port WS_PORT     WebSocket port of the server.
                        Used to listen for client audio and publish output (e.g., 8765).
  --transcribe_only     Transcribe-only mode. No translations are performed.
  --version             Print version and exit.
```
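For example, a GPU-backed server doing English-to-Spanish translation with file logging could be started as follows (illustrative values only, combining flags from the help text above):

```bash
live-translate-server --device cuda --whisper_model large-v3-turbo \
  --src_lang en --tgt_lang es --log file --ws_port 8765
```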
The client can be run directly from the command line:

```bash
live-translate-client [OPTIONS]
```

Available `[OPTIONS]`:
```text
usage: live-translate-client [-h] [--server SERVER] [--codec {pcm,opus}] [--version]

Live Translation Client - Stream audio to the server.

options:
  -h, --help          show this help message and exit
  --server SERVER     WebSocket URI of the server (e.g., ws://localhost:8765)
  --codec {pcm,opus}  Audio codec for WebSocket communication ('pcm', 'opus').
                      Default is 'opus'.
  --version           Print version and exit.
```
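For example, to stream raw PCM to a server on another host (the address is a placeholder; the codec should presumably match the server's `--codec` setting):

```bash
live-translate-client --server ws://192.168.1.20:8765 --codec pcm
```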
You can also import and use live_translation directly in your Python code. The following are simple examples of running live_translation's server and client in a blocking fashion. For more detailed examples showing non-blocking and asynchronous workflows, see ./examples/.

NOTE: The examples below assume the live_translation package has been installed as shown in the Installation section.

NOTE: To run a provided example using the Python API, see the instructions in the ./examples/ directory.
Server

```python
from live_translation import LiveTranslationServer, ServerConfig


def main():
    config = ServerConfig(
        device="cpu",
        ws_port=8765,
        log="print",
        transcribe_only=False,
        codec="opus",
    )
    server = LiveTranslationServer(config)
    server.run(blocking=True)


# The main guard is CRITICAL on systems that use the spawn method to
# create new processes (i.e., Windows and macOS).
if __name__ == "__main__":
    main()
```
Client

```python
from live_translation import LiveTranslationClient, ClientConfig


def parser_callback(entry, *args, **kwargs):
    """Callback function to parse the output from the server.

    Args:
        entry (dict): The message from the server.
        *args: Optional positional args passed from the client.
        **kwargs: Optional keyword args passed from the client.
    """
    print(f"📝 {entry['transcription']}")
    print(f"🌍 {entry['translation']}")
    # Returning True signals the client to shut down; False keeps it running.
    return False


def main():
    config = ClientConfig(
        server_uri="ws://localhost:8765",
        codec="opus",
    )
    client = LiveTranslationClient(config)
    client.run(
        callback=parser_callback,
        callback_args=(),  # Optional: positional args to pass
        callback_kwargs={},  # Optional: keyword args to pass
        blocking=True,
    )


if __name__ == "__main__":
    main()
```
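The two blocking examples above each occupy the calling process; ./examples/ demonstrates the supported non-blocking and asynchronous workflows. As a rough sketch only, assuming `run(blocking=False)` returns control to the caller immediately (the `blocking` parameter appears in the examples above, but the exact non-blocking semantics are documented in ./examples/), a single script might drive both ends:

```python
import time

from live_translation import (
    ClientConfig,
    LiveTranslationClient,
    LiveTranslationServer,
    ServerConfig,
)


def on_entry(entry, *args, **kwargs):
    print(f"{entry['transcription']} -> {entry['translation']}")
    return False  # keep the client running


def main():
    # Start the server without blocking (assumption: blocking=False
    # returns immediately; see ./examples/ for the supported pattern).
    server = LiveTranslationServer(ServerConfig(ws_port=8765))
    server.run(blocking=False)
    time.sleep(10)  # crude wait for models to load; tune to your machine

    client = LiveTranslationClient(ClientConfig(server_uri="ws://localhost:8765"))
    client.run(callback=on_entry, blocking=True)


# Main guard is critical on spawn-based systems (Windows, macOS).
if __name__ == "__main__":
    main()
```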
If you're writing a custom client or integrating this system into another application, you can interact with the server directly using the WebSocket protocol.

The server listens on a WebSocket endpoint (default: ws://localhost:8765) and expects the client to:

- Send: PCM audio encoded with the Opus codec (int16 samples).
  NOTE: The server also supports receiving raw PCM via the `--codec pcm` server option. The specs are identical to the above, except the audio is not encoded.
- Receive: structured JSON messages with timestamp, transcription, and translation fields:
```json
{
  "timestamp": "2025-05-25T12:58:35.259085+00:00",
  "transcription": "Good morning, I hope everyone's doing great.",
  "translation": "Buenos días, espero que todo el mundo esté bien"
}
```
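As an illustration, a bare-bones custom client could look like the sketch below. This is not a definitive implementation: it assumes the server was started with `--codec pcm` so raw PCM bytes can be sent unencoded, and it assumes a 16 kHz mono int16 stream, which you should verify against the server's actual audio specs. It uses the third-party `websockets` library (`pip install websockets`).

```python
import asyncio
import json

import websockets


async def main():
    async with websockets.connect("ws://localhost:8765") as ws:
        # Stream audio chunks to the server. A real client would send
        # microphone data; silence (as here) may never produce a result
        # and is only meant to show the message flow.
        await ws.send(b"\x00\x00" * 16000)  # ~1 s of int16 silence, assuming 16 kHz

        # Read structured JSON results as they arrive.
        async for message in ws:
            entry = json.loads(message)
            print(entry["timestamp"], entry["transcription"], entry["translation"])


if __name__ == "__main__":
    asyncio.run(main())
```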
For fully working, yet simple, examples in multiple languages, see ./examples/clients.

To create more complex clients, look at the Python client for guidance.
To contribute to or modify this project, these steps might be helpful:

NOTE: The workflow below assumes a Linux-based system with typical build tools (e.g. Make) installed; on other systems you might need to install Make and possibly other tools. You can still do things manually without Make, for example by running tests with `python -m pytest -s tests/` instead of `make test`. See the Makefile for more details.
Fork & clone the repository:

```bash
git clone git@github.com:<your-username>/live-translation.git
cd live-translation
```
Create a virtual environment:

```bash
python -m venv .venv
source .venv/bin/activate
```
Install the package and its dependencies in editable mode:

```bash
pip install --upgrade pip
pip install -e .[dev,examples]  # Install with optional dev and examples dependencies
```

This is equivalent to:

```bash
make install
```
Test the package:

```bash
make test
```

Build the package:

```bash
make build
```
NOTE: Building also lints and checks formatting using ruff. You can do that separately using `make format` and `make lint`. For linting and formatting rules, see the ruff config.

NOTE: Building generates a .whl file that can be pip-installed in a new environment for testing.

Check more available make commands:

```bash
make help
```
For quick testing, run the server and the client within the virtual environment:

```bash
live-translate-server [OPTIONS]
live-translate-client [OPTIONS]
```

NOTE: Since the package was installed in editable mode, any changes will be reflected when the CLI tools are run.
Citations

```bibtex
@article{Whisper,
  title     = {Robust Speech Recognition via Large-Scale Weak Supervision},
  url       = {https://arxiv.org/abs/2212.04356},
  author    = {Radford, Alec and Kim, Jong Wook and Xu, Tao and Brockman, Greg and McLeavey, Christine and Sutskever, Ilya},
  publisher = {arXiv},
  year      = {2022}
}

@misc{SileroVAD,
  author       = {Silero Team},
  title        = {Silero VAD: pre-trained enterprise-grade Voice Activity Detector (VAD), Number Detector and Language Classifier},
  year         = {2021},
  publisher    = {GitHub},
  journal      = {GitHub repository},
  howpublished = {\url{https://github.com/snakers4/silero-vad}},
  email        = {hello@silero.ai}
}

@article{tiedemann2023democratizing,
  title     = {Democratizing neural machine translation with {OPUS-MT}},
  author    = {Tiedemann, J{\"o}rg and Aulamo, Mikko and Bakshandaeva, Daria and Boggia, Michele and Gr{\"o}nroos, Stig-Arne and Nieminen, Tommi and Raganato, Alessandro and Scherrer, Yves and Vazquez, Raul and Virpioja, Sami},
  journal   = {Language Resources and Evaluation},
  number    = {58},
  pages     = {713--755},
  year      = {2023},
  publisher = {Springer Nature},
  issn      = {1574-0218},
  doi       = {10.1007/s10579-023-09704-w}
}

@InProceedings{TiedemannThottingal:EAMT2020,
  author    = {J{\"o}rg Tiedemann and Santhosh Thottingal},
  title     = {{OPUS-MT} — {B}uilding open translation services for the {W}orld},
  booktitle = {Proceedings of the 22nd Annual Conference of the European Association for Machine Translation (EAMT)},
  year      = {2020},
  address   = {Lisbon, Portugal}
}
```
CUDA as the device is probably needed for heavier Whisper models like large-v3-turbo. NVIDIA drivers, the CUDA Toolkit, and cuDNN must be installed if the "cuda" option is to be used. ↩