
tinybench

Benchmark your code easily with Tinybench, a simple, tiny, and lightweight benchmarking library (about 10 KB, or 2 KB minified and gzipped). You can run your benchmarks in multiple JavaScript runtimes: Tinybench is based entirely on Web APIs, with accurate timing via process.hrtime or performance.now, and Event/EventTarget-compatible events.

In case you need more tiny libraries like tinypool or tinyspy, please consider submitting an RFC.

Installation

$ npm install -D tinybench
You can start benchmarking by instantiating the Bench class and adding benchmark tasks to it.
import { Bench } from 'tinybench'
const bench = new Bench({ name: 'simple benchmark', time: 100 })
bench
.add('faster task', () => {
console.log('I am faster')
})
.add('slower task', async () => {
await new Promise(resolve => setTimeout(resolve, 1)) // we wait 1ms :)
console.log('I am slower')
})
await bench.run()
console.log(bench.name)
console.table(bench.table())
// Output:
// simple benchmark
// ┌─────────┬───────────────┬───────────────────┬───────────────────────┬────────────────────────┬────────────────────────┬─────────┐
// │ (index) │ Task name     │ Latency avg (ns)  │ Latency med (ns)      │ Throughput avg (ops/s) │ Throughput med (ops/s) │ Samples │
// ├─────────┼───────────────┼───────────────────┼───────────────────────┼────────────────────────┼────────────────────────┼─────────┤
// │ 0       │ 'faster task' │ '63768 ± 4.02%'   │ '58954 ± 15255.00'    │ '18562 ± 1.67%'        │ '16962 ± 4849'         │ 1569    │
// │ 1       │ 'slower task' │ '1542543 ± 7.14%' │ '1652502 ± 167851.00' │ '808 ± 19.65%'         │ '605 ± 67'             │ 65      │
// └─────────┴───────────────┴───────────────────┴───────────────────────┴────────────────────────┴────────────────────────┴─────────┘
The add method accepts a task name and a task function and registers the task for benchmarking. It returns the Bench instance, so calls can be chained to add further tasks to that instance.
Note that task names must be unique within an instance, because Tinybench stores tasks in a Map keyed by name.
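The effect of the Map-backed storage can be seen in plain JavaScript: a second entry under an existing key replaces the first. This is a minimal sketch of the behavior (the task name is hypothetical, and this is not tinybench's internal code):

```javascript
// Sketch of name-keyed storage: a Map keeps one entry per key, so
// re-adding a task under an existing name replaces the earlier task.
const tasks = new Map()
tasks.set('fibonacci', () => 'v1')
tasks.set('fibonacci', () => 'v2') // same name: overwrites the first entry

console.log(tasks.size) // 1 entry, not 2
console.log(tasks.get('fibonacci')()) // 'v2'
```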
Also note that tinybench does not log any result by default. You can extract the relevant stats
from bench.tasks or any other API after running the benchmark, and process them however you want.
More usage examples can be found in the examples directory.
Events

Both the Task and Bench classes extend the EventTarget object, so you can attach listeners for the different event types on each class instance using the universal addEventListener and removeEventListener methods.
Bench events:

// runs on each benchmark task's cycle
bench.addEventListener('cycle', (evt) => {
const task = evt.task!;
});
Task events:

// runs only on this benchmark task's cycle
task.addEventListener('cycle', (evt) => {
const task = evt.task!;
});
Async task detection

Tinybench automatically detects whether a task function is asynchronous, by checking whether the provided function is an AsyncFunction or whether it returns a Promise when called once.
You can also explicitly set the async option to true or false when adding
a task, thus avoiding the detection. This can be useful, for example, for
functions that return a Promise but are actually synchronous.
const bench = new Bench()
bench.add('asyncTask', async () => {
}, { async: true })
bench.add('syncTask', () => {
}, { async: false })
bench.add('syncTaskReturningPromiseAsAsync', () => {
return Promise.resolve()
}, { async: true })
bench.add('syncTaskReturningPromiseAsSync', () => {
// for example running sync logic, which blocks the event loop anyway
// like fs.writeFileSync
// returns promise maybe for API compatibility
return Promise.resolve()
}, { async: false })
await bench.run()
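The automatic detection described above can be approximated in plain JavaScript. This is a sketch of the idea, not tinybench's actual source; note that, as in the description, the probe invokes the function once:

```javascript
// Approximate the async-detection heuristic: a function is treated as
// async if it is an AsyncFunction, or if calling it once returns a Promise.
function looksAsync (fn) {
  if (fn.constructor.name === 'AsyncFunction') return true
  let result
  try {
    result = fn() // one probe call, with any side effects that implies
  } catch {
    return false
  }
  return result instanceof Promise
}

console.log(looksAsync(async () => {})) // true: AsyncFunction
console.log(looksAsync(() => Promise.resolve())) // true: returns a Promise
console.log(looksAsync(() => 42)) // false: plain synchronous function
```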
Concurrency

Tinybench supports concurrent execution, controlled by the concurrency mode:

- When mode is set to null (the default), concurrency is disabled.
- When mode is set to 'task', each task's iterations (calls of the task function) run concurrently.
- When mode is set to 'bench', the different tasks within the bench run concurrently.

bench.threshold = 10 // The maximum number of concurrent tasks to run. Defaults to Number.POSITIVE_INFINITY.
bench.concurrency = 'task' // The concurrency mode to determine how tasks are run.
await bench.run()
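The threshold can be pictured as a bounded worker pool: at most `limit` task functions are in flight at once. The sketch below is illustrative only (tinybench implements its own scheduling):

```javascript
// Run an array of async functions with at most `limit` in flight at once,
// preserving result order. A bounded-pool sketch, not tinybench internals.
async function runWithThreshold (fns, limit) {
  const results = []
  let next = 0
  async function worker () {
    while (next < fns.length) {
      const i = next++ // claim the next index (single-threaded, so safe)
      results[i] = await fns[i]()
    }
  }
  await Promise.all(
    Array.from({ length: Math.min(limit, fns.length) }, () => worker())
  )
  return results
}

const fns = [1, 2, 3, 4].map(n => async () => n * 10)
runWithThreshold(fns, 2).then(out => console.log(out)) // [10, 20, 30, 40]
```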
Table output

You can convert the benchmark results to a table format suitable for console.table() using the bench.table() method.
const table = bench.table()
console.table(table)
You can also customize the table output by providing a converter function to the table method.
import { Bench, type ConsoleTableConverter, formatNumber, mToNs, type Task } from 'tinybench'
/**
* The default converter function for console.table output.
* Modify it as needed to customize the table format.
*/
const defaultConverter: ConsoleTableConverter = (
task: Task
): Record<string, number | string> => {
const state = task.result.state
return {
'Task name': task.name,
...(state === 'aborted-with-statistics' || state === 'completed'
? {
'Latency avg (ns)': `${formatNumber(mToNs(task.result.latency.mean))} \xb1 ${task.result.latency.rme.toFixed(2)}%`,
'Latency med (ns)': `${formatNumber(mToNs(task.result.latency.p50))} \xb1 ${formatNumber(mToNs(task.result.latency.mad))}`,
'Throughput avg (ops/s)': `${Math.round(task.result.throughput.mean).toString()} \xb1 ${task.result.throughput.rme.toFixed(2)}%`,
'Throughput med (ops/s)': `${Math.round(task.result.throughput.p50).toString()} \xb1 ${Math.round(task.result.throughput.mad).toString()}`,
Samples: task.result.latency.samplesCount,
}
: state !== 'errored'
? {
'Latency avg (ns)': 'N/A',
'Latency med (ns)': 'N/A',
'Throughput avg (ops/s)': 'N/A',
'Throughput med (ops/s)': 'N/A',
Samples: 'N/A',
Remarks: state,
}
: {
Error: task.result.error.message,
Stack: task.result.error.stack ?? 'N/A',
}),
...(state === 'aborted-with-statistics' && {
Remarks: state,
}),
}
}
const bench = new Bench({ name: 'custom table benchmark', time: 100 })
// add tasks...
console.table(bench.table(defaultConverter))
By default Tinybench does not keep the samples for latency and throughput to
minimize memory usage. Enable sample retention if you need the raw samples for
plotting, custom analysis, or exporting results.
You can enable samples retention at the bench level by setting the
retainSamples option to true when creating a Bench instance:
const bench = new Bench({ retainSamples: true })
You can also enable samples retention by setting the retainSamples option to
true when adding a task:
bench.add('task with samples', () => {
// Task logic here
}, { retainSamples: true })
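With retained samples you can compute your own statistics. The helpers below are a minimal sketch over a hypothetical samples array; with tinybench you would read the samples from a task's result after running with retainSamples enabled:

```javascript
// Basic statistics over retained latency samples (milliseconds).
function mean (samples) {
  return samples.reduce((sum, s) => sum + s, 0) / samples.length
}

function median (samples) {
  const sorted = [...samples].sort((a, b) => a - b) // numeric sort, copy
  const mid = Math.floor(sorted.length / 2)
  return sorted.length % 2 === 0
    ? (sorted[mid - 1] + sorted[mid]) / 2
    : sorted[mid]
}

const samples = [0.9, 1.1, 1.0, 5.2, 1.05] // hypothetical data, one outlier
console.log(mean(samples).toFixed(2)) // 1.85 — pulled up by the outlier
console.log(median(samples)) // 1.05 — robust to the outlier
```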
Tinybench can utilize different timestamp providers for measuring time intervals.
By default it uses performance.now().
The timestampProvider option can be set when creating a Bench instance. It
accepts either a TimestampProvider object or shorthands for the common
providers hrtimeNow and performanceNow.
If you use the Bun runtime, you can also use the bunNanoseconds shorthand.
You can set the timestampProvider to auto to let Tinybench choose the most
precise available timestamp provider based on the runtime.
import { Bench } from 'tinybench'
const bench = new Bench({
timestampProvider: 'hrtimeNow' // or 'performanceNow', 'bunNanoseconds', 'auto'
})
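An 'auto'-style selection can be sketched in plain JavaScript: probe the runtime for the highest-resolution clock available and fall back progressively. This is illustrative only; tinybench's own selection logic may differ:

```javascript
// Pick the most precise timestamp source the runtime offers, returning a
// function that yields a timestamp in milliseconds.
function pickNow () {
  if (typeof process !== 'undefined' &&
      typeof process.hrtime?.bigint === 'function') {
    // Node.js: nanosecond-resolution monotonic clock, converted to ms
    return () => Number(process.hrtime.bigint()) / 1e6
  }
  if (typeof performance !== 'undefined') {
    return () => performance.now() // sub-millisecond monotonic clock
  }
  return Date.now // millisecond wall clock as a last resort
}

const now = pickNow()
console.log(typeof now()) // 'number'
```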
If you want to provide a custom timestamp provider, you can create an object that implements
the TimestampProvider interface:
import { Bench, TimestampProvider } from 'tinybench'
// Custom timestamp provider using Date.now()
const dateNowTimestampProvider: TimestampProvider = {
name: 'dateNow', // name of the provider
fn: Date.now, // function that returns the current timestamp
toMs: ts => ts, // convert the timestamp to milliseconds
fromMs: ts => ts // convert milliseconds to the format used by fn()
}
const bench = new Bench({
timestampProvider: dateNowTimestampProvider
})
You can also set the now option to a function that returns the current timestamp.
It will be converted to a TimestampProvider internally.
import { Bench } from 'tinybench'
const bench = new Bench({
now: Date.now
})
Tinybench supports aborting benchmarks using AbortSignal at both the bench and task levels:
Abort all tasks in a benchmark by passing a signal to the Bench constructor:
const controller = new AbortController()
const bench = new Bench({ signal: controller.signal })
bench
.add('task1', () => {
// This will be aborted
})
.add('task2', () => {
// This will also be aborted
})
// Abort all tasks
controller.abort()
await bench.run()
// Both tasks will be aborted
Abort individual tasks without affecting other tasks by passing a signal to the task options:
const controller = new AbortController()
const bench = new Bench()
bench
.add('abortable task', () => {
// This task can be aborted independently
}, { signal: controller.signal })
.add('normal task', () => {
// This task will continue normally
})
// Abort only the first task
controller.abort()
await bench.run()
// Only 'abortable task' will be aborted, 'normal task' continues
You can abort benchmarks while they're running:
const controller = new AbortController()
const bench = new Bench({ time: 10000 }) // Long-running benchmark
bench.add('long task', async () => {
await new Promise(resolve => setTimeout(resolve, 100))
}, { signal: controller.signal })
// Abort after 1 second
setTimeout(() => controller.abort(), 1000)
await bench.run()
// Task will stop after ~1 second instead of running for 10 seconds
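Why the task stops after roughly one second rather than instantly can be sketched with an abort-aware loop that checks signal.aborted between iterations. This is illustrative only, not tinybench's actual implementation:

```javascript
// Run `fn` repeatedly until maxIterations is reached or the signal aborts.
// An in-flight call completes even if abort arrives mid-call, which is why
// an aborted task still finishes the iteration it was running.
async function runUntilAborted (fn, signal, maxIterations) {
  let iterations = 0
  while (iterations < maxIterations && !signal.aborted) {
    await fn() // current iteration always runs to completion
    iterations++
  }
  return iterations
}

const controller = new AbortController()
const done = runUntilAborted(async () => {}, controller.signal, 1000)
controller.abort() // abort after the loop has started
done.then(n => console.log(n < 1000)) // true: stopped early
```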
Both Bench and Task emit abort events when aborted:
const controller = new AbortController()
const bench = new Bench()
bench.add('task', () => {
// Task function
}, { signal: controller.signal })
const task = bench.getTask('task')
// Listen for abort events
task.addEventListener('abort', () => {
console.log('Task aborted!')
})
bench.addEventListener('abort', () => {
console.log('Bench received abort event!')
})
controller.abort()
await bench.run()
Note: When a task is aborted, task.result.aborted will be true, and the task will have completed any iterations that were running when the abort signal was received.
| Mohammad Bagher |
|---|

| Uzlopak | poyoho |
|---|---|
Feel free to create issues/discussions and then PRs for the project!
Your sponsorship can make a huge difference in continuing our work in open source!
The 'benchmark' package is a popular benchmarking library for JavaScript. It provides a robust API for measuring the performance of code snippets. Compared to tinybench, 'benchmark' offers more advanced features and a more comprehensive API, but it is also larger in size.
The 'perf_hooks' module is a built-in Node.js module that provides an API for measuring performance. It is more low-level compared to tinybench and requires more manual setup, but it is very powerful and flexible for detailed performance analysis.
The 'benny' package is another benchmarking tool for JavaScript. It focuses on simplicity and ease of use, similar to tinybench. However, 'benny' provides a more modern API and better integration with modern JavaScript features like async/await.