_I'm transitioning to a full-time open source career. Your support would be greatly appreciated!_
The tinybench npm package is a lightweight benchmarking tool for JavaScript. It allows developers to measure the performance of their code by running benchmarks and comparing the execution times of different code snippets.
What are tinybench's main functionalities?
Basic Benchmarking
This feature allows you to create a basic benchmark test. You can add multiple tests to the benchmark and run them to measure their performance.
const { Bench } = require('tinybench');
const bench = new Bench();
bench.add('Example Test', () => {
// Code to benchmark
for (let i = 0; i < 1000; i++) {}
});
bench.run().then(() => {
console.log(bench.table());
});
Asynchronous Benchmarking
This feature allows you to benchmark asynchronous code. You can add tests that return promises and the benchmark will wait for them to resolve before measuring their performance.
Custom Benchmark Options
This feature allows you to customize benchmark options such as the total time to run the benchmark and the number of iterations, which helps fine-tune the benchmarking process.
const { Bench } = require('tinybench');
const bench = new Bench({ time: 2000, iterations: 10 });
bench.add('Custom Options Test', () => {
// Code to benchmark
for (let i = 0; i < 1000; i++) {}
});
bench.run().then(() => {
console.log(bench.table());
});
The 'benchmark' package is a popular benchmarking library for JavaScript. It provides a robust API for measuring the performance of code snippets. Compared to tinybench, 'benchmark' offers more advanced features and a more comprehensive API, but it is also larger in size.
The 'perf_hooks' module is a built-in Node.js module that provides an API for measuring performance. It is more low-level compared to tinybench and requires more manual setup, but it is very powerful and flexible for detailed performance analysis.
The 'benny' package is another benchmarking tool for JavaScript. It focuses on simplicity and ease of use, similar to tinybench. However, 'benny' provides a more modern API and better integration with modern JavaScript features like async/await.
Tinybench
Benchmark your code easily with Tinybench, a simple, tiny, and lightweight (7KB, 2KB minified and gzipped)
benchmarking library!
You can run your benchmarks in multiple JavaScript runtimes; Tinybench is
based entirely on Web APIs, with proper timing via process.hrtime or
performance.now.
Accurate and precise timing based on the environment
Event and EventTarget compatible events
Statistically analyzed values
Calculated Percentiles
Fully detailed results
No dependencies
In case you need more tiny libraries like tinypool or tinyspy, please consider submitting an RFC.
Installing
$ npm install -D tinybench
Usage
You can start benchmarking by instantiating the Bench class and adding
benchmark tasks to it.
The add method accepts a task name and a task function, registering the function
to be benchmarked. It returns a reference to the Bench instance, so calls can be
chained to add more tasks to the same instance.
Note that the task name should always be unique in an instance, because Tinybench stores the tasks based
on their names in a Map.
Also note that tinybench does not log any result by default. You can extract the relevant stats
from bench.tasks or any other API after running the benchmark, and process them however you want.
Docs
Bench
The Benchmark instance for keeping track of the benchmark tasks and controlling
them.
Options:
export type Options = {
  /**
   * time needed for running a benchmark task (milliseconds)
   * @default 500
   */
  time?: number;
  /**
   * number of times that a task should run, even if the time option has already elapsed
   * @default 10
   */
  iterations?: number;
  /**
   * function to get the current timestamp in milliseconds
   */
  now?: () => number;
  /**
   * an AbortSignal for aborting the benchmark
   */
  signal?: AbortSignal;
  /**
   * throw if a task fails (events will not work if true)
   */
  throws?: boolean;
  /**
   * warmup time (milliseconds)
   * @default 100
   */
  warmupTime?: number;
  /**
   * warmup iterations
   * @default 5
   */
  warmupIterations?: number;
  /**
   * setup function to run before each benchmark task (cycle)
   */
  setup?: Hook;
  /**
   * teardown function to run after each benchmark task (cycle)
   */
  teardown?: Hook;
};

export type Hook = (task: Task, mode: "warmup" | "run") => void | Promise<void>;
async run(): run the added tasks that were registered using the add method
async runConcurrently(threshold: number = Infinity, mode: "bench" | "task" = "bench"): similar to the run method but runs concurrently rather than sequentially. See the Concurrency section.
async warmup(): warm up the benchmark tasks
async warmupConcurrently(threshold: number = Infinity, mode: "bench" | "task" = "bench"): warm up the benchmark tasks concurrently
reset(): reset each task and remove its result
add(name: string, fn: Fn, opts?: FnOptions): add a benchmark task to the task map
Fn: () => any | Promise<any>
FnOptions: a set of optional functions run during the benchmark lifecycle that can be used to set up or tear down test data or fixtures without affecting the timing of each task
beforeAll?: () => any | Promise<any>: invoked once before iterations of fn begin
beforeEach?: () => any | Promise<any>: invoked before each time fn is executed
afterEach?: () => any | Promise<any>: invoked after each time fn is executed
afterAll?: () => any | Promise<any>: invoked once after all iterations of fn have finished
remove(name: string): remove a benchmark task from the task map
table(): table of the tasks results
get results(): (TaskResult | undefined)[]: (getter) tasks results as an array
get tasks(): Task[]: (getter) tasks as an array
getTask(name: string): Task | undefined: get a task based on the name
todo(name: string, fn?: Fn, opts?: FnOptions): add a benchmark todo to the todo map
get todos(): Task[]: (getter) todo tasks as an array
Task
A class that represents each benchmark task in Tinybench. It keeps track of the
results, name, Bench instance, the task function and the number of times the task
function has been executed.
runs: number: the number of times the task function has been executed
result?: TaskResult: the result object
async run(): run the current task and write the results in Task.result object
async warmup(): warm up the current task
setResult(result: Partial<TaskResult>): change the result object values
reset(): reset the task to make the Task.runs a zero-value and remove the Task.result object
export interface FnOptions {
  /**
   * An optional function that is run before iterations of this task begin
   */
  beforeAll?: (this: Task) => void | Promise<void>;
  /**
   * An optional function that is run before each iteration of this task
   */
  beforeEach?: (this: Task) => void | Promise<void>;
  /**
   * An optional function that is run after each iteration of this task
   */
  afterEach?: (this: Task) => void | Promise<void>;
  /**
   * An optional function that is run after all iterations of this task end
   */
  afterAll?: (this: Task) => void | Promise<void>;
}
TaskResult
The benchmark task result object.
export type TaskResult = {
  /**
   * the last error that was thrown while running the task
   */
  error?: unknown;
  /**
   * the amount of time in milliseconds to run the benchmark task (cycle)
   */
  totalTime: number;
  /**
   * the minimum value in the samples
   */
  min: number;
  /**
   * the maximum value in the samples
   */
  max: number;
  /**
   * the number of operations per second
   */
  hz: number;
  /**
   * how long each operation takes (ms)
   */
  period: number;
  /**
   * task samples of each task iteration time (ms)
   */
  samples: number[];
  /**
   * samples mean/average (estimate of the population mean)
   */
  mean: number;
  /**
   * samples variance (estimate of the population variance)
   */
  variance: number;
  /**
   * samples standard deviation (estimate of the population standard deviation)
   */
  sd: number;
  /**
   * standard error of the mean (a.k.a. the standard deviation of the sampling distribution of the sample mean)
   */
  sem: number;
  /**
   * degrees of freedom
   */
  df: number;
  /**
   * critical value of the samples
   */
  critical: number;
  /**
   * margin of error
   */
  moe: number;
  /**
   * relative margin of error
   */
  rme: number;
  /**
   * p75 percentile
   */
  p75: number;
  /**
   * p99 percentile
   */
  p99: number;
  /**
   * p995 percentile
   */
  p995: number;
  /**
   * p999 percentile
   */
  p999: number;
};
Events
Both the Task and Bench objects extend the EventTarget object, so you can attach listeners to different types of events
in each class instance using the universal addEventListener and
removeEventListener.
/**
 * Bench events
 */
export type BenchEvents =
  | "abort" // when a signal aborts
  | "complete" // when running a benchmark finishes
  | "error" // when the benchmark task throws
  | "reset" // when the reset function gets called
  | "start" // when running the benchmarks gets started
  | "warmup" // when the benchmarks start getting warmed up (before start)
  | "cycle" // when running each benchmark task gets done (cycle)
  | "add" // when a Task gets added to the Bench
  | "remove" // when a Task gets removed from the Bench
  | "todo"; // when a todo Task gets added to the Bench

/**
 * task events
 */
export type TaskEvents =
  | "abort"
  | "complete"
  | "error"
  | "reset"
  | "start"
  | "warmup"
  | "cycle";
For instance:
// runs on each benchmark task's cycle
bench.addEventListener("cycle", (e) => {
const task = e.task!;
});
// runs only on this benchmark task's cycle
task.addEventListener("cycle", (e) => {
const task = e.task!;
});
If you want more accurate results on Node.js via process.hrtime, import
the hrtimeNow function from the library and pass it to the Bench options:

import { Bench, hrtimeNow } from 'tinybench';

const bench = new Bench({ now: hrtimeNow });

It may make your benchmarks slower; check #42.
Concurrency
When mode is set to null (the default), concurrency is disabled.
When mode is set to 'task', each task's iterations (calls of the task function) run concurrently.
When mode is set to 'bench', the different tasks within the bench run concurrently (concurrent cycles).
// options way (recommended)
bench.threshold = 10; // The maximum number of concurrent tasks to run. Defaults to Infinity.
bench.concurrency = "task"; // The concurrency mode to determine how tasks are run.
// await bench.warmup();
await bench.run();

// standalone method way
// await bench.warmupConcurrently(10, "task");
await bench.runConcurrently(10, "task"); // with runConcurrently, mode is set to 'bench' by default
The npm package tinybench receives a total of 4,660,668 weekly downloads; as such, its popularity is classified as popular.
We found that tinybench demonstrated a healthy version release cadence and project activity: the last version was released less than a year ago, and it has 3 open source maintainers collaborating on the project.
Package last updated on 02 Aug 2024