# mitata

cross-runtime benchmarking lib
## Install

```sh
bun add mitata
npm install mitata
```
## Examples

```js
import { run, bench, group, baseline } from 'mitata';

bench('noop', () => {});
bench('noop2', () => {});

group('group', () => {
  baseline('baseline', () => {});
  bench('Date.now()', () => Date.now());
  bench('performance.now()', () => performance.now());
});

group({ name: 'group2', summary: false }, () => {
  bench('new Array(0)', () => new Array(0));
  bench('new Array(1024)', () => new Array(1024));
});

await run({
  avg: true, // enable/disable avg column (default: true)
  json: false, // enable/disable json output (default: false)
  colors: true, // enable/disable colors (default: true)
  min_max: true, // enable/disable min/max column (default: true)
  collect: false, // enable/disable collecting returned values into an array during the benchmark (default: false)
  percentiles: false, // enable/disable percentiles column (default: true)
});
```
output:
```
cpu: Apple M1
runtime: bun 0.0.83 (arm64-darwin)

benchmark              time (avg)             (min … max)
-----------------------------------------------------------
noop               323.34 ps/iter   (304.1 ps … 8.64 ns)
noop2              387.15 ps/iter  (304.1 ps … 16.79 ns)

• group
-----------------------------------------------------------
baseline           782.71 ps/iter   (304.1 ps … 6.34 ns)
Date.now()          31.04 ns/iter   (30.9 ns … 34.59 ns)
performance.now()   92.71 ns/iter (91.72 ns … 101.45 ns)

summary for group
  baseline
   39.66x faster than Date.now()
   118.45x faster than performance.now()

• group2
-----------------------------------------------------------
new Array(0)         7.47 ns/iter  (6.46 ns … 269.07 ns)
new Array(1024)    286.39 ns/iter (241.37 ns … 560.7 ns)
```
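The summary multipliers are plain ratios of the average iteration times. For the group above, a quick sanity check (using the avg values from the output, converted to seconds):

```js
// avg iteration times from the sample output above, in seconds
const baselineAvg = 782.71e-12; // 782.71 ps
const dateNowAvg = 31.04e-9;    // 31.04 ns

// ratio matches the "39.66x faster than Date.now()" line
console.log((dateNowAvg / baselineAvg).toFixed(2)); // "39.66"
```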
## JIT bias

If you run benchmarks like the ones below, you might notice that they get slower (by only a few nanoseconds) after the first few runs.
```js
bench('noop', () => {});
bench('noop', () => {});
bench('noop', () => {});
bench('noop', () => {});
bench('noop', () => {});
bench('noop', () => {});
bench('noop', () => {});
bench('noop', () => {});
bench('noop', () => {});
bench('noop', () => {});
bench('noop', () => {});
bench('noop', () => {});
bench('noop', () => {});
```
I call this behavior "JIT bias". In simple terms, the V8 and JavaScriptCore JITs expect the same function to be passed to the measuring loop every time and optimize for it; once we break that promise by passing many different functions, the call site gets deoptimized.
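A standalone sketch of the same effect, outside mitata (the `measure` helper is hypothetical, not part of mitata's API): a timing loop whose call site has only ever seen one function stays specialized for it, while a call site fed many distinct closures can become megamorphic and may slow down.

```js
// hypothetical timing helper: the engine can specialize the `fn()` call
// site for the first function it repeatedly sees
function measure(fn, iterations) {
  const start = performance.now();
  for (let i = 0; i < iterations; i++) fn();
  return performance.now() - start;
}

const noop = () => {};
const warm = measure(noop, 1e6); // call site has seen only `noop` so far

// pollute the call site with many distinct closures
for (let i = 0; i < 50; i++) measure(() => {}, 1e3);

const biased = measure(noop, 1e6); // same function, but possibly slower now
console.log({ warm, biased });
```

Whether (and by how much) the second measurement is slower depends on the engine and run, which is exactly why mitata calls it a bias rather than a fixed cost.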
## License

MIT © Evan