africanwhisper

A framework for fast fine-tuning and API endpoint deployment of the Whisper model, developed specifically to accelerate Automatic Speech Recognition (ASR) for African languages.

  • Version: 0.9.23
  • Source
  • PyPI
  • Maintainers: 1

African Whisper: ASR for African Languages


A framework for seamless fine-tuning and deployment of the Whisper model, developed to advance Automatic Speech Recognition (ASR) translation and transcription capabilities for African languages.

Features

  • 🔧 Fine-Tuning: Fine-tune the Whisper model on any audio dataset from Hugging Face, e.g., Mozilla's Common Voice, Fleurs, LibriSpeech, or your own custom public or private dataset.

  • 📊 Metrics Monitoring: View training run metrics on Wandb.

  • 🐳 Production Deployment: Seamlessly containerize and deploy the model inference endpoint for real-world applications.

  • 🚀 Model Optimization: Utilize CTranslate2 for efficient model optimization, ensuring faster inference times.

  • 📝 Word-Level Transcriptions: Produce detailed word-level transcriptions and translations, complete with timestamps.

  • 🎙️ Multi-Speaker Diarization: Perform speaker identification and separation in multi-speaker audio using diarization techniques.

  • 🔍 Alignment Precision: Improve transcription and translation accuracy by aligning outputs with Wav2vec models.

  • 🛡️ Reduced Hallucination: Leverage Voice Activity Detection (VAD) to minimize hallucination and improve transcription clarity.
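Taken together, the word-level timestamp and diarization features imply a post-processing step that assigns each aligned word to a speaker turn. A minimal sketch of that merge, using hypothetical dictionary shapes for the word and segment records (the framework's actual output format may differ):

```python
def assign_speakers(words, segments):
    """Assign each word (with start/end times in seconds) to the speaker
    whose diarization segment overlaps it the most."""
    labeled = []
    for w in words:
        best, best_overlap = None, 0.0
        for seg in segments:
            # Overlap (in seconds) between the word span and the speaker segment.
            overlap = min(w["end"], seg["end"]) - max(w["start"], seg["start"])
            if overlap > best_overlap:
                best, best_overlap = seg["speaker"], overlap
        labeled.append({**w, "speaker": best})
    return labeled

# Example: three aligned words, two diarized speaker turns.
words = [
    {"word": "habari", "start": 0.10, "end": 0.55},
    {"word": "yako",   "start": 0.60, "end": 0.95},
    {"word": "nzuri",  "start": 1.40, "end": 1.80},
]
segments = [
    {"speaker": "SPEAKER_00", "start": 0.0, "end": 1.0},
    {"speaker": "SPEAKER_01", "start": 1.2, "end": 2.0},
]
print(assign_speakers(words, segments))
```

Here the first two words fall inside SPEAKER_00's turn and the third inside SPEAKER_01's; a maximum-overlap rule like this is a common way to resolve words that straddle segment boundaries.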


The framework implements the following papers:

  1. Robust Speech Recognition via Large-Scale Weak Supervision: speech processing systems trained to predict transcripts of audio from the internet, scaled to 680,000 hours of multilingual and multitask supervision.

  2. WhisperX: Time-Accurate Speech Transcription of Long-Form Audio for time-accurate speech recognition with word-level timestamps.

  3. Pyannote.audio: Neural building blocks for speaker diarization for advanced speaker diarization capabilities.

  4. Efficient and High-Quality Neural Machine Translation with OpenNMT: Efficient neural machine translation and model acceleration.

For more details, you can refer to the Whisper ASR model paper.
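Models fine-tuned with approaches like the above are conventionally evaluated with word error rate (WER): the word-level edit distance between a reference transcript and the model's hypothesis, normalized by the reference length. A self-contained sketch of the metric (not the framework's own metric code, which may rely on existing evaluation tooling):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: (substitutions + insertions + deletions) / reference
    word count, via Levenshtein edit distance over word tokens."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution or match
    return dp[len(ref)][len(hyp)] / len(ref)

print(wer("habari yako rafiki", "habari yako"))  # one deleted word out of three
```

A WER of 0.0 means a perfect transcript; values above 1.0 are possible when the hypothesis contains many insertions.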

Documentation

Refer to the Documentation to get started.

Contributing

Contributions are welcome and encouraged.

Before contributing, please take a moment to review our Contribution Guidelines for important information on how to contribute to this project.

If you're unsure about anything or need assistance, don't hesitate to reach out to us or open an issue to discuss your ideas.

We look forward to your contributions!

License

This project is licensed under the MIT License - see the LICENSE file for details.

Contact

For any enquiries, please reach out via keviinkibe@gmail.com.
