# ffmpeg-stream

Node bindings for the `ffmpeg` command, exposing a stream-based API.

> [!NOTE]
> FFmpeg must be installed and available in `PATH`.
> You can set a custom ffmpeg path via an argument (the default is just `ffmpeg`).
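For example (a minimal sketch, assuming the constructor argument mentioned above; the binary path is illustrative):

```js
import { Converter } from "ffmpeg-stream"

// Use a specific ffmpeg binary instead of the one on PATH.
const converter = new Converter("/usr/local/bin/ffmpeg")
```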
## Examples

```js
import { Converter } from "ffmpeg-stream"
import { createReadStream, createWriteStream } from "node:fs"

async function convert() {
  const converter = new Converter()

  // input from a stream
  const converterInput = converter.createInputStream({
    f: "image2pipe",
    vcodec: "mjpeg",
  })
  createReadStream(`${__dirname}/cat.jpg`).pipe(converterInput)

  // output to a stream
  const converterOutput = converter.createOutputStream({
    f: "image2",
    vcodec: "mjpeg",
    vf: "crop=300:300,scale=100:100",
  })
  converterOutput.pipe(createWriteStream(`${__dirname}/cat_thumb.jpg`))

  // output to a file
  converter.createOutputToFile(`${__dirname}/cat_full.jpg`, {
    vf: "crop=300:300",
  })

  await converter.run()
}
```
## API

- **class `Converter`**

  Creates a new instance of the ffmpeg converter class.
  Converting won't start until the `run()` method is called.

- **method `createInputStream(options: Options): stream.Writable`**

  Defines an ffmpeg input stream.
  Remember to specify the `f` option, which sets the format of the input data.
  The returned stream is writable.

- **method `createInputFromFile(file: string, options: Options): void`**

  Defines an ffmpeg input read from the specified path.
  This is the same as specifying an input on the command line.

- **method `createBufferedInputStream(options: Options): stream.Writable`**

  A mix of `createInputStream` and `createInputFromFile`.
  It creates a temporary file and instructs ffmpeg to read from it,
  then returns a writable stream attached to that file.
  Because all data is buffered through disk first, this method adds significant latency.

- **method `createOutputStream(options: Options): stream.Readable`**

  Defines an ffmpeg output stream.
  Remember to specify the `f` option, which sets the format of the output data.
  The returned stream is readable.

- **method `createOutputToFile(file: string, options: Options): void`**

  Defines an ffmpeg output written to the specified path.
  This is the same as specifying an output on the command line.

- **method `createBufferedOutputStream(options: Options): stream.Readable`**

  A mix of `createOutputStream` and `createOutputToFile`.
  It creates a temporary file and instructs ffmpeg to write to it,
  then returns a readable stream attached to that file.
  Because all data is buffered through disk first, this method adds significant latency.

- **method `run(): Promise<void>`**

  Starts the ffmpeg process.
  Returns a Promise which resolves on normal exit or kill, but rejects on an ffmpeg error.

- **method `kill(): void`**

  Kills the ffmpeg process.

- **type `Options`**

  An object of the options you would normally pass to the ffmpeg command in the terminal.
  Documentation for the individual options can be found on the ffmpeg site in the audio and video categories.
  For boolean options, specify `true` or `false`.
  To pass the same argument multiple times, provide an array of values, e.g. `{ map: ["0:v", "1:a"] }` (see the sketch below).
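A short sketch of how an options object corresponds to command-line flags (the exact argument ordering is an implementation detail; this mapping is illustrative):

```js
import { Converter } from "ffmpeg-stream"

const converter = new Converter()

// Corresponds roughly to: -f mp4 -an -map 0:v -map 1:a out.mp4
converter.createOutputToFile("out.mp4", {
  f: "mp4",            // plain option: -f mp4
  an: true,            // boolean option, emitted as the bare flag -an
  map: ["0:v", "1:a"], // array value: the flag is repeated once per entry
})
```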
## FAQ

### How to get video duration and other stats

You can use the `ffprobe` command for now. It might be implemented in the library in the future, though.
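For example, a minimal sketch that shells out to ffprobe and parses its JSON output (the flags are standard ffprobe options; the file name is illustrative):

```js
import { execFile } from "node:child_process"

// Run ffprobe and resolve with the parsed JSON metadata.
function probe(file) {
  return new Promise((resolve, reject) => {
    execFile(
      "ffprobe",
      ["-v", "error", "-print_format", "json", "-show_format", "-show_streams", file],
      (error, stdout) => (error ? reject(error) : resolve(JSON.parse(stdout))),
    )
  })
}

const { format, streams } = await probe("video.mp4")
console.log(`duration: ${format.duration} s, streams: ${streams.length}`)
```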
### Is there a `progress` or `onFrameEmitted` event?

Currently, no.
### Something doesn't work

Try running your program with the `DEBUG=ffmpeg-stream` environment variable set.
It will print the ffmpeg command being executed and all of the ffmpeg logs.
The command usually looks something like `ffmpeg -f … -i pipe:3 -f … pipe:4`.
`pipe:N` means ffmpeg reads from or writes to file descriptor `N` instead of a file.
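For example (a typical invocation; `app.js` stands in for your entry point):

```sh
DEBUG=ffmpeg-stream node app.js
```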
### Error: Muxer does not support non seekable output

When getting an error similar to this:

```
[mp4 @ 0000000000e4db00] muxer does not support non seekable output
Could not write header for output file #0 (incorrect codec parameters ?): Invalid argument
Error initializing output stream 0:1 --
encoded 0 frames
Conversion failed!
    at ChildProcess.<anonymous> (<DirPath>\node_modules\ffmpeg-stream\lib\index.js:215:27)
    at emitTwo (events.js:106:13)
    at ChildProcess.emit (events.js:191:7)
    at Process.ChildProcess._handle.onexit (internal/child_process.js:215:12)
```
it means ffmpeg is saying that the combination of options you specified doesn't support streaming. You can experiment with calling ffmpeg directly and specifying `-` or `pipe:1` as the output file. Some other options or a different format may work. Streaming a sequence of JPEGs over websockets worked flawlessly for me (`{ f: "image2pipe", vcodec: "mjpeg" }`).
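For MP4 in particular, fragmented output is a common workaround, since it doesn't require a seekable destination. A sketch (whether your player accepts fragmented MP4 is something to verify; the input file is illustrative):

```js
import { Converter } from "ffmpeg-stream"

const converter = new Converter()
converter.createInputFromFile("in.avi", {})

// Fragmented MP4 avoids rewriting the header at the end of the file,
// so it can be written to a non-seekable pipe.
const output = converter.createOutputStream({
  f: "mp4",
  vcodec: "libx264",
  movflags: "frag_keyframe+empty_moov",
})
```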
You can also use `createBufferedOutputStream`. That tells the library to save the output to a temporary file and then create a node stream from that file. It won't start producing data until the conversion is complete, though.
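For example (a minimal sketch; the file destination here just stands in for any non-seekable sink, such as an HTTP response):

```js
import { createWriteStream } from "node:fs"
import { Converter } from "ffmpeg-stream"

const converter = new Converter()
converter.createInputFromFile("in.avi", {})

// The returned stream starts emitting only after ffmpeg has finished
// writing its temporary file, so seekable-only formats like plain MP4 work.
converter
  .createBufferedOutputStream({ f: "mp4", vcodec: "libx264" })
  .pipe(createWriteStream("out.mp4"))

await converter.run()
```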
### How to get individual frame data

You have to set the output format to `mjpeg` and then split the stream manually by looking at the bytes. You can implement a transform stream which does this:

```js
import { Transform } from "node:stream"

class ExtractFrames extends Transform {
  constructor(magicNumberHex) {
    super({ readableObjectMode: true })
    // Byte sequence that marks the start of each frame,
    // e.g. "FFD8FF" is the JPEG start-of-image marker.
    this.magicNumber = Buffer.from(magicNumberHex, "hex")
    this.currentData = Buffer.alloc(0)
  }

  _transform(newData, encoding, done) {
    this.currentData = Buffer.concat([this.currentData, newData])
    // Emit every complete frame found between two consecutive markers.
    while (true) {
      const startIndex = this.currentData.indexOf(this.magicNumber)
      if (startIndex < 0) break // no marker buffered yet
      const endIndex = this.currentData.indexOf(
        this.magicNumber,
        startIndex + this.magicNumber.length,
      )
      if (endIndex < 0) break // frame not complete yet, wait for more data
      this.push(this.currentData.slice(startIndex, endIndex))
      this.currentData = this.currentData.slice(endIndex)
      if (startIndex > 0) console.error(`Discarded ${startIndex} bytes of invalid data`)
    }
    done()
  }

  _flush(done) {
    // The last frame has no trailing marker, so emit whatever is left.
    this.push(this.currentData)
    done()
  }
}
```
And then use it like this:

```js
import { Converter } from "ffmpeg-stream"

const converter = new Converter()
converter
  .createOutputStream({ f: "image2pipe", vcodec: "mjpeg" })
  .pipe(new ExtractFrames("FFD8FF")) // JPEG start-of-image marker
  .on("data", frameData => {
    // do something with each frame's Buffer
  })
converter.run()
```
### How to create an animation from a set of image files

> I have images in an Amazon S3 bucket (private), so I'm using their SDK to download them.
> I get the files in `Buffer` objects.
> Is there any way I can use your package to create a video out of them?
> So far I've been downloading the files and then using the following command:
>
> ```sh
> ffmpeg -framerate 30 -pattern_type glob -i '*.jpg' -c:v libx264 -pix_fmt yuv420p out.mp4
> ```
>
> But now I want to do it from my Node.js application automatically.
```js
import { Converter } from "ffmpeg-stream"

const frames = ["frame1.jpg", "frame2.jpg" /* , ... */]
const converter = new Converter()

// input: JPEG frames piped in at 30 fps
const converterInput = converter.createInputStream({ f: "image2pipe", r: 30 })

// output: H.264 video file
converter.createOutputToFile("out.mp4", {
  vcodec: "libx264",
  pix_fmt: "yuv420p",
})

const convertingFinished = converter.run()

// pipe the frames into the converter one at a time, in order
// (s3 is the AWS SDK client from the question above)
for (const filename of frames) {
  await new Promise((resolve, reject) =>
    s3
      .getObject({ Bucket: "...", Key: filename })
      .createReadStream()
      .on("end", resolve) // listen on the source stream, not the pipe destination
      .on("error", reject)
      .pipe(converterInput, { end: false }), // keep the converter input open for the next frame
  )
}
converterInput.end()

await convertingFinished
```
### How to stream a video when there's data, otherwise an intermission image

You can turn your main stream into a series of JPEG images with the output format `mjpeg`, and combine it with a static image by repeatedly piping a single JPEG image whenever there's no data from the main stream.
Then pipe the result to a second ffmpeg process which combines the JPEG images into a video.
```js
import * as fs from "node:fs"
import { Converter } from "ffmpeg-stream"

// second ffmpeg process: joins the incoming JPEG stream into a video
const joiner = new Converter()
const joinerInput = joiner.createInputStream({ f: "mjpeg" })
const joinerOutput = joiner.createOutputStream({ f: "whatever format you want" })
joinerOutput.pipe(destination) // destination is a placeholder for wherever the video goes
joiner.run()

let streaming = false

// first ffmpeg process: turns an incoming video into a JPEG stream
async function streamVideo(incomingStream, format) {
  if (streaming) throw new Error("We are already streaming something else")
  streaming = true
  const splitter = new Converter()
  incomingStream.pipe(splitter.createInputStream({ f: format }))
  splitter.createOutputStream({ f: "mjpeg" }).pipe(joinerInput, { end: false })
  try {
    await splitter.run()
  } finally {
    streaming = false
  }
}

// while nothing is streaming, feed the intermission image at 30 fps
setInterval(() => {
  if (streaming) return
  fs.createReadStream("intermission_pic.jpg").pipe(joinerInput, { end: false })
}, 1000 / 30)
```
### I want an intermission image with audio and other complicated stuff

You should probably use beamcoder instead.