What is tinybench?
The tinybench npm package is a lightweight benchmarking tool for JavaScript. It allows developers to measure the performance of their code by running benchmarks and comparing the execution times of different code snippets.
What are tinybench's main functionalities?
Basic Benchmarking
This feature allows you to create a basic benchmark test. You can add multiple tests to the benchmark and run them to measure their performance.
```javascript
const { Bench } = require('tinybench');
const bench = new Bench();

bench.add('Example Test', () => {
  // Code to benchmark
  for (let i = 0; i < 1000; i++) {}
});

bench.run().then(() => {
  console.log(bench.table());
});
```
Asynchronous Benchmarking
This feature allows you to benchmark asynchronous code. You can add tests that return promises, and the benchmark will wait for them to resolve before measuring their performance.
```javascript
const { Bench } = require('tinybench');
const bench = new Bench();

bench.add('Async Test', async () => {
  // Asynchronous code to benchmark
  await new Promise(resolve => setTimeout(resolve, 1000));
});

bench.run().then(() => {
  console.log(bench.table());
});
```
Customizing Benchmark Options
This feature allows you to customize the benchmark options such as the total time to run the benchmark and the number of iterations. This can help in fine-tuning the benchmarking process.
```javascript
const { Bench } = require('tinybench');
const bench = new Bench({ time: 2000, iterations: 10 });

bench.add('Custom Options Test', () => {
  // Code to benchmark
  for (let i = 0; i < 1000; i++) {}
});

bench.run().then(() => {
  console.log(bench.table());
});
```
Other packages similar to tinybench
benchmark
The 'benchmark' package is a popular benchmarking library for JavaScript. It provides a robust API for measuring the performance of code snippets. Compared to tinybench, 'benchmark' offers more advanced features and a more comprehensive API, but it is also larger in size.
perf_hooks
The 'perf_hooks' module is a built-in Node.js module that provides an API for measuring performance. It is more low-level compared to tinybench and requires more manual setup, but it is very powerful and flexible for detailed performance analysis.
benny
The 'benny' package is another benchmarking tool for JavaScript. It focuses on simplicity and ease of use, similar to tinybench. However, 'benny' provides a more modern API and better integration with modern JavaScript features like async/await.
Tinybench
Benchmark your code easily with Tinybench, a simple, tiny and lightweight 7KB (2KB minified and gzipped) benchmarking library!
You can run your benchmarks in multiple JavaScript runtimes; Tinybench is based entirely on Web APIs, with proper timing via process.hrtime or performance.now.
- Accurate and precise timing based on the environment
- `Event` and `EventTarget` compatible events
- Statistically analyzed values
- Calculated Percentiles
- Fully detailed results
- No dependencies
In case you need more tiny libraries like tinypool or tinyspy, please consider submitting an RFC
Installing
```bash
$ npm install -D tinybench
```
Usage
You can start benchmarking by instantiating the Bench
class and adding
benchmark tasks to it.
```javascript
import { Bench } from 'tinybench';

const bench = new Bench({ time: 100 });

bench
  .add('faster task', () => {
    console.log('I am faster');
  })
  .add('slower task', async () => {
    await new Promise(r => setTimeout(r, 1));
    console.log('I am slower');
  })
  .todo('unimplemented bench');

await bench.warmup();
await bench.run();

console.table(bench.table());

console.table(
  bench.table((task) => ({ 'Task name': task.name }))
);
```
The `add` method accepts a task name and a task function, so it can benchmark it! This method returns a reference to the Bench instance, so it's possible to use it to create another task for that instance.

Note that the task name should always be unique in an instance, because Tinybench stores the tasks based on their names in a `Map`.

Also note that tinybench does not log any result by default. You can extract the relevant stats from `bench.tasks` or any other API after running the benchmark, and process them however you want.
Docs
Bench
The Benchmark instance for keeping track of the benchmark tasks and controlling
them.
Options:
```typescript
export type Options = {
  time?: number;              // time needed to run a benchmark task, in ms
  iterations?: number;        // number of times that a task should run
  now?: () => number;         // function returning the current timestamp, in ms
  signal?: AbortSignal;       // an AbortSignal for aborting the benchmark
  throws?: boolean;           // rethrow task errors instead of recording them
  warmupTime?: number;        // warmup time, in ms
  warmupIterations?: number;  // warmup iterations
  setup?: Hook;               // hook run before each task
  teardown?: Hook;            // hook run after each task
};

export type Hook = (task: Task, mode: "warmup" | "run") => void | Promise<void>;
```
- `async run()`: run the added tasks that were registered using the `add` method
- `async runConcurrently(limit: number = Infinity)`: similar to the `run` method, but runs the tasks concurrently rather than sequentially
- `async warmup()`: warm up the benchmark tasks
- `reset()`: reset each task and remove its result
- `add(name: string, fn: Fn, opts?: FnOpts)`: add a benchmark task to the task map
  - `Fn`: `() => any | Promise<any>`
  - `FnOpts`: a set of optional functions run during the benchmark lifecycle that can be used to set up or tear down test data or fixtures without affecting the timing of each task
    - `beforeAll?: () => any | Promise<any>`: invoked once before iterations of `fn` begin
    - `beforeEach?: () => any | Promise<any>`: invoked before each time `fn` is executed
    - `afterEach?: () => any | Promise<any>`: invoked after each time `fn` is executed
    - `afterAll?: () => any | Promise<any>`: invoked once after all iterations of `fn` have finished
- `remove(name: string)`: remove a benchmark task from the task map
- `table()`: table of the tasks' results
- `get results(): (TaskResult | undefined)[]`: (getter) tasks results as an array
- `get tasks(): Task[]`: (getter) tasks as an array
- `getTask(name: string): Task | undefined`: get a task based on the name
- `todo(name: string, fn?: Fn, opts?: FnOptions)`: add a benchmark todo to the todo map
- `get todos(): Task[]`: (getter) todo tasks as an array
Task
A class that represents each benchmark task in Tinybench. It keeps track of the
results, name, Bench instance, the task function and the number of times the task
function has been executed.
- `constructor(bench: Bench, name: string, fn: Fn, opts: FnOptions = {})`
- `bench: Bench`: the Bench instance the task belongs to
- `name: string`: the task name
- `fn: Fn`: the task function
- `opts: FnOptions`: the task options
- `runs: number`: the number of times the task function has been executed
- `result?: TaskResult`: the result object
- `async run()`: run the current task and write the results into the `Task.result` object
- `async warmup()`: warm up the current task
- `setResult(result: Partial<TaskResult>)`: change the result object values
- `reset()`: reset the task, making `Task.runs` a zero-value and removing the `Task.result` object
```typescript
export interface FnOptions {
  beforeAll?: (this: Task) => void | Promise<void>;
  beforeEach?: (this: Task) => void | Promise<void>;
  afterEach?: (this: Task) => void | Promise<void>;
  afterAll?: (this: Task) => void | Promise<void>;
}
```
TaskResult
The benchmark task result object.
```typescript
export type TaskResult = {
  error?: unknown;    // the last error thrown while running the task
  totalTime: number;  // the total time the task took to run, in ms
  min: number;        // the minimum sample time, in ms
  max: number;        // the maximum sample time, in ms
  hz: number;         // operations per second
  period: number;     // the average time per operation, in ms
  samples: number[];  // the time of each individual run, in ms
  mean: number;       // the mean of the samples
  variance: number;   // the variance of the samples
  sd: number;         // the standard deviation
  sem: number;        // the standard error of the mean
  df: number;         // degrees of freedom
  critical: number;   // the critical value
  moe: number;        // the margin of error
  rme: number;        // the relative margin of error, as a percentage
  p75: number;        // the 75th percentile of the samples
  p99: number;        // the 99th percentile of the samples
  p995: number;       // the 99.5th percentile of the samples
  p999: number;       // the 99.9th percentile of the samples
};
```
Events
Both the `Task` and `Bench` objects extend the `EventTarget` object, so you can attach listeners to different types of events on each class instance using the universal `addEventListener` and `removeEventListener` methods.
```typescript
export type BenchEvents =
  | "abort"
  | "complete"
  | "error"
  | "reset"
  | "start"
  | "warmup"
  | "cycle"
  | "add"
  | "remove"
  | "todo";

export type TaskEvents =
  | "abort"
  | "complete"
  | "error"
  | "reset"
  | "start"
  | "warmup"
  | "cycle";
```
For instance:
```typescript
// a Bench instance event
bench.addEventListener("cycle", (e) => {
  const task = e.task!;
});

// a Task instance event
task.addEventListener("cycle", (e) => {
  const task = e.task!;
});
```
BenchEvent
```typescript
export type BenchEvent = Event & {
  task: Task | null;
};
```
process.hrtime
If you want more accurate results for Node.js with process.hrtime, import the hrtimeNow function from the library and pass it to the Bench options.

```javascript
import { Bench, hrtimeNow } from 'tinybench';
const bench = new Bench({ now: hrtimeNow });
```

It may make your benchmarks slower, check #42.
Prior art
Authors
Credits
Contributing
Feel free to create issues/discussions and then PRs for the project!
Your sponsorship can make a huge difference in continuing our work in open source!