npm · latest version 0.3.0 · 5.4K weekly downloads (-13.54%) · 1 maintainer

nodejs-whisper

Node.js bindings for OpenAI's Whisper model.

MIT License

Features

  • Automatically converts audio to WAV format at 16,000 Hz, as required by the Whisper model
  • Outputs transcripts as .txt, .srt, .vtt, .json, .wts, or .lrc files
  • Optimized for CPU, including Apple Silicon (ARM)
  • Word-level timestamp precision
  • Optional splitting on words rather than on tokens
  • Optional translation from the source language to English
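The first bullet means you can hand the library audio in common formats and it will resample for you. As an illustration only (nodejs-whisper performs this conversion internally; `buildFfmpegArgs` is a hypothetical helper, not part of the package), the equivalent ffmpeg argument list for a Whisper-compatible WAV might look like this:

```typescript
// Sketch of the kind of ffmpeg invocation that produces a
// whisper-compatible WAV file: mono, 16-bit PCM, 16,000 Hz.
function buildFfmpegArgs(input: string, output: string): string[] {
	return [
		'-i', input,         // source audio in any format ffmpeg understands
		'-ar', '16000',      // resample to 16,000 Hz for the Whisper model
		'-ac', '1',          // downmix to a single channel
		'-c:a', 'pcm_s16le', // 16-bit little-endian PCM
		output,
	]
}
```

The `-ar 16000` flag corresponds to the 16,000 Hz requirement mentioned above.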

Installation

  • Install build tools (whisper.cpp is compiled with make during setup)
sudo apt update
sudo apt install build-essential
  • Install nodejs-whisper with npm
  npm i nodejs-whisper
  • Download a Whisper model
  npx nodejs-whisper download

Windows Installation

  • Install nodejs-whisper with npm
npm i nodejs-whisper
  • Download a Whisper model
npx nodejs-whisper download
  • Note: make sure mingw32-make or make is available on your system PATH.

Usage/Examples

See example/index.ts (can be run with $ npm run test)

import path from 'path'
import { nodewhisper } from 'nodejs-whisper'

// Provide the exact path to your audio file.
const filePath = path.resolve(__dirname, 'YourAudioFileName')

await nodewhisper(filePath, {
	modelName: 'base.en', // name of the downloaded model
	modelRootPath: '/path/to/whisper/models', // (optional) directory containing the selected ggml model file
	autoDownloadModelName: 'base.en', // (optional) auto-download the model if it is not present
	removeWavFileAfterTranscription: false, // (optional) remove the wav file once transcribed
	withCuda: false, // (optional) use CUDA for faster processing
	logger: console, // (optional) logging instance, defaults to console
	whisperOptions: {
		outputInCsv: false, // get output result in csv file
		outputInJson: false, // get output result in json file
		outputInJsonFull: false, // get output result in json file including more information
		outputInLrc: false, // get output result in lrc file
		outputInSrt: true, // get output result in srt file
		outputInText: false, // get output result in txt file
		outputInVtt: false, // get output result in vtt file
		outputInWords: false, // get output result in wts file for karaoke
		translateToEnglish: false, // translate from source language to english
		wordTimestamps: false, // word-level timestamps
		timestamps_length: 20, // amount of dialogue per timestamp pair
		splitOnWord: true, // split on word rather than on token
		noGpu: false, // disable GPU inference
	},
})

// Model list
const MODELS_LIST = [
	'tiny',
	'tiny.en',
	'base',
	'base.en',
	'small',
	'small.en',
	'medium',
	'medium.en',
	'large-v1',
	'large',
	'large-v3-turbo',
]
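Both modelName and autoDownloadModelName must match one of the entries in the model list above. As a sketch, a small type guard (isModelName is hypothetical, not exported by the package) can catch typos before any download starts:

```typescript
// Model names accepted by nodejs-whisper, copied from the list above.
const KNOWN_MODELS = [
	'tiny', 'tiny.en', 'base', 'base.en',
	'small', 'small.en', 'medium', 'medium.en',
	'large-v1', 'large', 'large-v3-turbo',
] as const

type ModelName = (typeof KNOWN_MODELS)[number]

// Narrowing guard: returns true only for known model names.
function isModelName(name: string): name is ModelName {
	return (KNOWN_MODELS as readonly string[]).includes(name)
}
```

Validating early avoids a failed download or compile step later in the pipeline.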

Custom CMake flags can be passed with the NODEJS_WHISPER_CMAKE_ARGS environment variable:

NODEJS_WHISPER_CMAKE_ARGS="-DGGML_NATIVE=OFF" npm test

When modelRootPath is used with autoDownloadModelName, downloaded models are saved in that directory.

Docker model cache example:

volumes:
    - ./.docker-data/whisper-models:/data/whisper-models

await nodewhisper(filePath, {
    modelName: 'tiny.en',
    autoDownloadModelName: 'tiny.en',
    modelRootPath: '/data/whisper-models',
    whisperOptions: {
        outputInSrt: true,
    },
})

The downloaded model will be stored at /data/whisper-models/ggml-tiny.en.bin, while the package's internal downloader scripts remain available.
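Following the ggml-<modelName>.bin naming shown above, a hypothetical helper (expectedModelPath is illustrative, not part of the package) can compute where a downloaded model should land:

```typescript
import path from 'path'

// Derive the expected on-disk location of a downloaded model,
// following the ggml-<modelName>.bin naming convention shown above.
// path.posix is used so the result is stable across platforms,
// matching container-style paths like /data/whisper-models.
function expectedModelPath(modelRootPath: string, modelName: string): string {
	return path.posix.join(modelRootPath, `ggml-${modelName}.bin`)
}
```

This is handy, for example, when a health check wants to verify the model file exists before accepting transcription jobs.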

Types

interface IOptions {
	modelName: string
	modelRootPath?: string
	removeWavFileAfterTranscription?: boolean
	withCuda?: boolean
	autoDownloadModelName?: string
	whisperOptions?: WhisperOptions
	logger?: Console
}

interface WhisperOptions {
	outputInCsv?: boolean
	outputInJson?: boolean
	outputInJsonFull?: boolean
	outputInLrc?: boolean
	outputInSrt?: boolean
	outputInText?: boolean
	outputInVtt?: boolean
	outputInWords?: boolean
	translateToEnglish?: boolean
	timestamps_length?: number
	wordTimestamps?: boolean
	splitOnWord?: boolean
	noGpu?: boolean
}
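As a sketch of how the two interfaces compose, the hypothetical buildSrtOptions helper below (not part of the package API; the interfaces are trimmed to the fields used here) produces a typed IOptions value for SRT output:

```typescript
// Local copies of the package's option shapes, trimmed to the
// fields this example uses.
interface WhisperOptions {
	outputInSrt?: boolean
	wordTimestamps?: boolean
}

interface IOptions {
	modelName: string
	modelRootPath?: string
	autoDownloadModelName?: string
	whisperOptions?: WhisperOptions
}

// Hypothetical convenience wrapper: typed options for SRT output
// with a given model, downloading it on first use.
function buildSrtOptions(modelName: string): IOptions {
	return {
		modelName,
		autoDownloadModelName: modelName, // fetch the model if missing
		whisperOptions: { outputInSrt: true },
	}
}
```

The returned object can be passed as the second argument to nodewhisper, as in the usage example above.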

Run locally

Clone the project

  git clone https://github.com/ChetanXpro/nodejs-whisper

Go to the project directory

  cd nodejs-whisper

Install dependencies

  npm install

Start the server

  npm run dev

Build project

  npm run build


Feedback

If you have any feedback, please reach out to us at chetanbaliyan10@gmail.com

Keywords: OpenAI

Package last updated on 11 Apr 2026
