# nwhisper
Native Node.js bindings for OpenAI's Whisper using whisper.cpp. High-performance local speech-to-text with custom model path support.

## Features

- Custom model path support: use your own trained models by providing a custom model file path
- Automatic audio conversion to 16 kHz WAV, the format the whisper model expects (see the sketch after this list)
- Output transcripts in multiple formats (.txt, .srt, .vtt, .json, .wts, .lrc)
- Optimized for CPU, including Apple Silicon (ARM)
- Word-level timestamp precision
- Optional splitting on words rather than tokens
- Optional translation from the source language to English
- Backward compatible with nodejs-whisper
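For context, the automatic WAV conversion is conceptually equivalent to the ffmpeg invocation below. nwhisper handles this internally, so this sketch (with a hypothetical helper name) is only illustrative:

```typescript
import { execFile } from 'child_process'
import { promisify } from 'util'

const run = promisify(execFile)

// Illustrative only: convert any input to the 16 kHz mono 16-bit PCM WAV
// that whisper.cpp expects. Requires ffmpeg on the PATH.
async function toWhisperWav(input: string, output: string): Promise<void> {
  await run('ffmpeg', ['-y', '-i', input, '-ar', '16000', '-ac', '1', '-c:a', 'pcm_s16le', output])
}
```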
## Installation

Install build tools (Debian/Ubuntu):

```bash
sudo apt update
sudo apt install build-essential
```

Install nwhisper with npm:

```bash
npm i nwhisper
```

Download a whisper model (for standard models):

```bash
npx nwhisper download
```

> Note: you may need to install the `make` tool.
## Windows Installation

```bash
npm i nwhisper
```

Download a whisper model (for standard models):

```bash
npx nwhisper download
```

> Note: make sure `mingw32-make` or `make` is available in your system PATH.
## Usage/Examples

See `example/basic.ts` (run it with `npm run example`):
```typescript
import path from 'path'
import { transcribe } from 'nwhisper'

const filePath = path.resolve(__dirname, 'YourAudioFileName')

await transcribe(filePath, {
  modelName: 'base.en',
  autoDownloadModelName: 'base.en', // download the model if it is not present
  removeWavFileAfterTranscription: false, // keep the intermediate WAV file
  withCuda: false, // set to true to run with CUDA support
  logger: console,
  whisperOptions: {
    outputInCsv: false, // .csv output
    outputInJson: false, // .json output
    outputInJsonFull: false, // .json output with more info
    outputInLrc: false, // .lrc output
    outputInSrt: true, // .srt output
    outputInText: false, // .txt output
    outputInVtt: false, // .vtt output
    outputInWords: false, // .wts output, for karaoke-style display
    translateToEnglish: false, // translate from the source language to English
    wordTimestamps: false, // word-level timestamps
    timestamps_length: 20, // amount of dialogue per timestamp pair
    splitOnWord: true, // split on word rather than on token
  },
})
```
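With `outputInSrt: true` the subtitle file is written next to the processed audio. A minimal sketch for picking it up afterwards, assuming whisper.cpp's default naming, which appends the format extension to the transcribed file's name (e.g. `audio.wav` becomes `audio.wav.srt`):

```typescript
import fs from 'fs'

// Assumption: the .srt lands beside the file whisper.cpp transcribed,
// named by appending ".srt" to that file's full name.
const srtPath = `${filePath}.srt`
if (fs.existsSync(srtPath)) {
  console.log(fs.readFileSync(srtPath, 'utf8'))
}
```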
Use a custom model directory (models are downloaded to and loaded from it):

```typescript
const modelDir = path.join(process.cwd(), '.models')

await transcribe(filePath, {
  modelName: 'tiny.en',
  modelDir: modelDir,
  whisperOptions: {
    outputInSrt: true,
  },
})
```
Use a direct path to a custom model file:

```typescript
const modelPath = path.join(__dirname, 'models', 'my-custom-model.bin')

await transcribe(filePath, {
  modelPath: modelPath,
  whisperOptions: {
    outputInSrt: true,
    language: 'en',
  },
})
```
Auto-download a model into a custom directory:

```typescript
await transcribe(filePath, {
  modelName: 'tiny.en',
  autoDownloadModelName: 'tiny.en',
  modelDir: path.join(__dirname, 'models'),
  whisperOptions: {
    outputInSrt: true,
  },
})
```
Available standard model names:

```typescript
const MODELS_LIST = [
  'tiny',
  'tiny.en',
  'base',
  'base.en',
  'small',
  'small.en',
  'medium',
  'medium.en',
  'large-v1',
  'large',
  'large-v3-turbo',
]
```
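A hypothetical guard (not part of the nwhisper API) that validates a requested name against the list above before calling `transcribe`:

```typescript
// Hypothetical helper, not exported by nwhisper: fail fast on a typo in the
// model name instead of deep inside the native layer.
function assertKnownModel(name: string): void {
  if (!MODELS_LIST.includes(name)) {
    throw new Error(`Unknown model "${name}". Expected one of: ${MODELS_LIST.join(', ')}`)
  }
}

assertKnownModel('base.en') // passes
```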
## Types

```typescript
interface IOptions {
  modelName?: string // Model name (works with directories)
  modelPath?: string // NEW: Direct path to model file
  modelDir?: string // NEW: Directory for models (download & use)
  autoDownloadModelName?: string // Model to auto-download
  removeWavFileAfterTranscription?: boolean
  withCuda?: boolean
  whisperOptions?: WhisperOptions
  logger?: Console
}

interface WhisperOptions {
  outputInCsv?: boolean
  outputInJson?: boolean
  outputInJsonFull?: boolean
  outputInLrc?: boolean
  outputInSrt?: boolean
  outputInText?: boolean
  outputInVtt?: boolean
  outputInWords?: boolean
  translateToEnglish?: boolean
  timestamps_length?: number
  wordTimestamps?: boolean
  splitOnWord?: boolean
  language?: string // e.g. 'en', or 'auto' (used in the examples above)
}
```
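Assuming these types are exported from the package (adjust the import if they are not), options can be type-checked up front:

```typescript
import type { IOptions } from 'nwhisper' // assumption: the package exports its option types

const options: IOptions = {
  modelName: 'base.en',
  withCuda: false,
  whisperOptions: {
    outputInJson: true,
    wordTimestamps: true,
  },
}
```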
## Custom Model Path Usage

The main feature of nwhisper is the ability to use custom model files. This is useful when you have:

- Fine-tuned models for specific domains
- Custom trained models
- Models in different locations than the default
Example with Custom Model
import { transcribe } from 'nwhisper'
import path from 'path'
const modelDir = path.join(process.cwd(), '.models')
const result = await transcribe('audio.wav', {
modelName: 'tiny.en',
modelDir: modelDir,
whisperOptions: {
outputInSrt: true,
language: 'en'
}
})
const modelPath = path.join(__dirname, 'models', 'my-custom-model.bin')
const result2 = await transcribe('audio.wav', {
modelPath: modelPath,
whisperOptions: {
outputInSrt: true,
language: 'auto'
}
})
const result3 = await transcribe('audio.wav', {
modelName: 'tiny.en',
autoDownloadModelName: 'tiny.en',
modelDir: modelDir,
whisperOptions: {
outputInSrt: true,
language: 'auto'
}
})
### Model Priority

1. `modelPath` - direct file path (highest priority)
2. `modelDir` + `modelName` - model directory with model name
3. Standard directory - default whisper.cpp models (fallback)
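A minimal sketch of this resolution order, under stated assumptions: `resolveModel` and `DEFAULT_MODELS_DIR` are illustrative names, not nwhisper internals.

```typescript
import path from 'path'

// Illustrative stand-in for wherever nwhisper keeps its bundled models.
const DEFAULT_MODELS_DIR = path.join(process.cwd(), 'node_modules', 'nwhisper', 'models')

function resolveModel(opts: { modelPath?: string; modelDir?: string; modelName?: string }): string {
  // 1. A direct file path always wins
  if (opts.modelPath) return opts.modelPath
  // 2. Custom directory + model name, following whisper.cpp naming
  if (opts.modelDir && opts.modelName) {
    return path.join(opts.modelDir, `ggml-${opts.modelName}.bin`)
  }
  // 3. Fall back to the standard models directory
  return path.join(DEFAULT_MODELS_DIR, `ggml-${opts.modelName ?? 'base.en'}.bin`)
}
```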
### Important Notes

- `modelDir` serves a dual purpose: it is both the download location and the model location
- When `modelDir` is specified, models are downloaded to and loaded from that directory
- Model files should follow whisper.cpp naming (e.g., `ggml-tiny.en.bin`)
- Models must be compatible with the whisper.cpp model format
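For example, to check up front whether a model is already in place, based on the naming convention above (the snippet itself is illustrative, not part of the API):

```typescript
import fs from 'fs'
import path from 'path'

const modelDir = path.join(process.cwd(), '.models')
const modelFile = path.join(modelDir, 'ggml-tiny.en.bin') // whisper.cpp naming

if (fs.existsSync(modelFile)) {
  console.log('Model already present; transcribe() will load it from modelDir')
} else {
  console.log('Model missing; set autoDownloadModelName to fetch it into modelDir')
}
```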
## Migration from nodejs-whisper

nwhisper is fully backward compatible with nodejs-whisper. Simply replace the package:

```bash
npm uninstall nodejs-whisper
npm install nwhisper
```
### Function Names

- Recommended: use the `transcribe` function for new code
- Legacy: the `nodewhisper` function is still available but deprecated

```typescript
// Recommended
import { transcribe } from 'nwhisper'
await transcribe('audio.wav', { modelName: 'tiny.en' })

// Legacy (deprecated)
import { nodewhisper } from 'nwhisper'
await nodewhisper('audio.wav', { modelName: 'tiny.en' })
```

No code changes are required for existing functionality!
## Run locally

Clone the project:

```bash
git clone https://github.com/teomyth/nwhisper
```

Go to the project directory:

```bash
cd nwhisper
```

Install dependencies:

```bash
npm install
```

Start development mode:

```bash
npm run dev
```

Build the project:

```bash
npm run build
```
## Feedback

If you have any feedback, please reach out to us at teomyth@gmail.com.
## Acknowledgments

This project is a fork of nodejs-whisper by @chetanXpro. We extend our gratitude to the original author for creating the foundation that made nwhisper possible.