
SONATA πŸŽ΅πŸ”Š

SOund and Narrative Advanced Transcription Assistant

SONATA (SOund and Narrative Advanced Transcription Assistant) is an advanced ASR system that captures human expressions, including emotive sounds and non-verbal cues.

✨ Features

  • πŸŽ™οΈ High-accuracy speech-to-text transcription using WhisperX
  • πŸ˜€ Recognition of 523+ emotive sounds and non-verbal cues
  • 🌍 Multi-language support with 99+ languages
  • πŸ‘₯ SOTA speaker diarization using Silero VAD and WavLM embeddings
  • ⏱️ Rich timestamp information at the word level
  • πŸ”„ Audio preprocessing capabilities

πŸ“š See detailed features documentation

πŸš€ Installation

Install the package from PyPI:

pip install sonata-asr

Or install from source:

git clone https://github.com/hwk06023/SONATA.git
cd SONATA
pip install -e .
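
You can sanity-check the install by importing the transcriber class used in the Quick Start below:

python -c "from sonata.core.transcriber import IntegratedTranscriber"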

πŸ“– Quick Start

Basic Transcription

from sonata.core.transcriber import IntegratedTranscriber

# Initialize the transcriber
transcriber = IntegratedTranscriber(asr_model="large-v3", device="cpu")

# Transcribe an audio file
result = transcriber.process_audio("path/to/audio.wav", language="en")
print(result["integrated_transcript"]["plain_text"])
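
Beyond the plain text, the returned dict carries the full integrated transcript. A minimal sketch for persisting it, assuming the result is JSON-serializable (the exact schema may vary by version):

import json

# Save the complete result (plain text plus any word-level timing/event data)
with open("transcript.json", "w", encoding="utf-8") as f:
    json.dump(result, f, ensure_ascii=False, indent=2)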

CLI Usage

# Basic usage
sonata-asr path/to/audio.wav

# With speaker diarization
sonata-asr path/to/audio.wav --diarize

# Set number of speakers if known
sonata-asr path/to/audio.wav --diarize --num-speakers 3

Common CLI Options:

General:
  -o, --output FILE           Save transcript to specified JSON file
  -l, --language LANG         Language code (en, ko, zh, ja, fr, de, es, it, pt, ru)
  -m, --model NAME            WhisperX model size (tiny, small, medium, large-v3, etc.)
  -d, --device DEVICE         Device to run models on (cpu, cuda)
  --text-output               Save transcript to text file (defaults to input_name.txt)
  --preprocess                Preprocess audio (convert format and trim silence)

Diarization:
  --diarize                   Enable SOTA speaker diarization using Silero VAD and WavLM
  --num-speakers NUM          Set exact number of speakers (optional)

Audio Events:
  --threshold VALUE           Threshold for audio event detection (0.0-1.0)
  --custom-thresholds FILE    Path to JSON file with custom audio event thresholds
  --deep-detect               Enable multi-scale audio event detection for better accuracy
  --deep-detect-scales NUM    Number of scales for deep detection (1-3, default: 3)
  --deep-detect-window-sizes  Custom window sizes for deep detection (comma-separated)
  --deep-detect-hop-sizes     Custom hop sizes for deep detection (comma-separated)
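
The schema of the --custom-thresholds file isn't spelled out in this README; as a hypothetical illustration, assuming it maps event labels to per-event detection thresholds (see the audio events documentation for the exact format):

{
  "laughter": 0.35,
  "applause": 0.5,
  "music": 0.7
}

# Apply the hypothetical thresholds.json above
sonata-asr path/to/audio.wav --custom-thresholds thresholds.json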

πŸ“š See full usage documentation
⌨️ See complete CLI documentation

πŸ—£οΈ Supported Languages

SONATA leverages Whisper large-v3 to support 99+ languages with varying levels of accuracy. High-resource languages such as English, Spanish, French, German, and Japanese achieve excellent transcription performance (5-12% error rates), while other languages offer good to moderate accuracy.
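
Language selection works the same way through the API as through the -l CLI option. Using the transcriber from the Quick Start:

# Reuse the IntegratedTranscriber from the Quick Start; "ko" selects Korean
result = transcriber.process_audio("path/to/korean_audio.wav", language="ko")
print(result["integrated_transcript"]["plain_text"])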

Key features of SONATA's language support:

  • Excellent accuracy for high-resource languages
  • Character-based evaluation for languages like Chinese, Japanese, and Korean
  • Specialized handling for language-specific characteristics
  • Advanced auto-detection for multi-language content

🌐 See detailed language support documentation

πŸ”Š Audio Event Detection

SONATA can detect over 500 different audio events, from laughter and applause to ambient sounds and music. Customizable detection thresholds let you fine-tune sensitivity for individual events to suit use cases such as podcast analysis, meeting transcription, or nature recording analysis.
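
For example, sensitivity can be tuned directly from the CLI with the documented flags (the values here are illustrative):

# Raise the global threshold to reduce false positives
sonata-asr path/to/podcast.wav --threshold 0.6

# Enable multi-scale detection for events a single pass misses
sonata-asr path/to/podcast.wav --deep-detect --deep-detect-scales 3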

🎡 See audio events documentation

πŸ‘₯ Speaker Diarization

SONATA provides state-of-the-art speaker diarization to identify and separate different speakers in recordings. The system uses Silero VAD for speech detection and WavLM embeddings for speaker identification, making it ideal for transcribing multi-speaker content like meetings, interviews, and podcasts.
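
A typical multi-speaker invocation, combining the documented diarization and output options:

# Diarize a meeting with four known speakers and save the result as JSON
sonata-asr path/to/meeting.wav --diarize --num-speakers 4 -o meeting_transcript.json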

πŸŽ™οΈ See speaker diarization documentation

πŸš€ Next Steps

  • 🧠 Advanced ASR model diversity
  • 😒 Improved emotive detection
  • πŸ”Š Better speaker diarization
  • ⚑ Performance optimization
  • πŸ› οΈ Fix parallel processing issues in deep detection mode for improved reliability

🀝 Contributing

Contributions are welcome! SONATA offers multiple ways to contribute, including code improvements, documentation, testing, and bug reports. Our comprehensive contribution guide covers:

  • Setting up the development environment
  • Coding standards and best practices
  • Testing procedures
  • Pull request workflow
  • Documentation guidelines
  • Language-specific considerations

Whether you're an experienced developer or new to open source, we welcome your contributions.

πŸ“ See contribution guidelines

πŸ“„ License

This project is licensed under the GNU General Public License v3.0.

πŸ™ Acknowledgements
