
flexi-bench
FlexiBench is a flexible benchmarking library for JavaScript and TypeScript. It is designed to be simple to use while remaining powerful and flexible, and its API is inspired by common testing frameworks, aiming to provide a similar experience for benchmarking.
npm install --save-dev flexi-bench
yarn add --dev flexi-bench
pnpm add --save-dev flexi-bench
The runner API is the simplest way to run benchmarks. It allows running a single benchmark or a suite of benchmarks.
const { suite, benchmark, setup, teardown } = require('flexi-bench');

// A `suite` call at the top level will be evaluated as a whole suite.
suite('My Suite', () => {
  // Nested `benchmark` calls will not be evaluated until the parent `suite` is evaluated.
  benchmark('My Benchmark', (b) => {
    setup(() => {
      // some setup to run before the entire benchmark
    });
    teardown(() => {
      // some teardown to run after the entire benchmark
    });
    b.withAction(() => {
      // some action to benchmark
    });
  });
});

// Top-level `benchmark` calls will be evaluated immediately.
benchmark('My Benchmark', (b) => {
  b.withIterations(10).withAction(() => {
    // some action to benchmark
  });
});
Within the callback for each suite or benchmark you can use the builder API for full customization of the benchmark. Variations can be added directly via the builder API, by nesting a variation call within the benchmark call, or at the suite level to apply the variation to all benchmarks in the suite.
const { suite, benchmark, variation } = require('flexi-bench');

suite('My Suite', () => {
  benchmark('My Benchmark', (b) => {
    b.withIterations(10).withAction(() => {
      // some action to benchmark
    });
    variation('with NO_DAEMON', (v) =>
      v.withEnvironmentVariable('NO_DAEMON', 'true'),
    );
  });
});
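Per the note above, a variation declared at the suite level should apply to every benchmark in the suite. A minimal sketch with the runner API; the suite-level nesting is assumed from the prose above, since only the benchmark-level form is shown elsewhere in this README:

const { suite, benchmark, variation } = require('flexi-bench');

suite('My Suite', () => {
  // Assumed: a `variation` call nested directly in the suite callback
  // applies to all benchmarks declared in this suite.
  variation('with NO_DAEMON', (v) =>
    v.withEnvironmentVariable('NO_DAEMON', 'true'),
  );

  benchmark('My Benchmark 1', (b) => {
    b.withIterations(10).withAction(() => {
      // some action to benchmark
    });
  });

  benchmark('My Benchmark 2', (b) => {
    b.withIterations(10).withAction(() => {
      // some action to benchmark
    });
  });
});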
More detailed documentation will come soon. For now, here is a simple example:
const { Benchmark } = require('flexi-bench');

const benchmark = new Benchmark('My Benchmark', {
  iterations: 10,
  action: () => {
    // some action to benchmark
  },
});

await benchmark.run();
Most options for the benchmark can also be provided via the builder API:
const { Benchmark } = require('flexi-bench');

const benchmark = new Benchmark('My Benchmark')
  .withIterations(10)
  .withSetup(() => {
    // some setup to run before the entire benchmark
  })
  .withAction(() => {
    // some action to benchmark
  });
Some benchmarks require work before the run to set up the environment, and work afterwards to clean up. This can be done with the setup/setupEach and teardown/teardownEach methods:
benchmark('My Benchmark', (b) => {
  setup(() => {
    // some setup to run before the entire benchmark
  });
  setupEach(() => {
    // some setup that is run before each iteration of the benchmark
  });
  teardown(() => {
    // some teardown to run after the entire benchmark
  });
  teardownEach(() => {
    // some teardown that is run after each iteration of the benchmark
  });
});
These can also be set using the builder API:
const { Benchmark } = require('flexi-bench');

const benchmark = new Benchmark('My Benchmark')
  .withIterations(10)
  .withSetup(() => {
    // some setup to run before the entire benchmark
  })
  .withSetupEach(() => {
    // some setup that is run before each iteration of the benchmark
  })
  .withTeardown(() => {
    // some teardown to run after the entire benchmark
  })
  .withTeardownEach(() => {
    // some teardown that is run after each iteration of the benchmark
  });
The action for a benchmark defines the actual work being measured. FlexiBench can currently benchmark two types of actions: JavaScript functions and shell commands, both described below.
An action can be specified via the action property of the benchmark or variation constructor, or via .withAction on the builder API. Actions specified on the running variation will override the action specified on the parent benchmark.
For example, the following code will run the action specified on the variation rather than the one on the benchmark, so bar is printed 10 times instead of foo:
const { Benchmark } = require('flexi-bench');

const benchmark = new Benchmark('My Benchmark', {
  iterations: 10,
  action: () => {
    console.log('foo');
  },
}).withVariation('with NO_DAEMON', (v) =>
  v.withAction(() => {
    console.log('bar');
  }),
);
If the process you are benchmarking is available as a JavaScript function, you can pass it directly to the action property or execute it within the action callback. For example, if you are benchmarking a function that sorts an array, you can do the following:
const { benchmark, setup } = require('flexi-bench');

benchmark('My Benchmark', (b) => {
  let array;
  setup(() => {
    array = Array.from({ length: 1000 }, () => Math.random());
  });
  b.withIterations(10).withAction(() => {
    array.sort();
  });
});
If the process you are benchmarking is not available as a JavaScript function, you can run it as a command by passing a string to the action property. For example, to benchmark the runtime of a CLI command:
const { Benchmark } = require('flexi-bench');

const benchmark = new Benchmark('My Benchmark', {
  iterations: 10,
  action: 'echo "Hello, World!"',
});

await benchmark.run();
While you could write an action that runs a command yourself using child_process methods, the syntactic sugar provided by flexi-bench is more convenient and has a few benefits.
Using the syntactic sugar also means flexi-bench knows you are running a command. That sounds obvious, but because the command is spawned from within flexi-bench, it opens up some neat possibilities: for example, variations can tailor CLI options, which would not be possible if you ran the command directly with child_process methods on your own.
const { Benchmark } = require('flexi-bench');

const benchmark = new Benchmark('My Benchmark', {
  iterations: 10,
  action: 'echo',
})
  .withVariation('with Earth argument', (v) => v.withArgument('Hello, Earth!'))
  .withVariation('with Mars argument', (v) => v.withArgument('Hello, Mars!'));

await benchmark.run();
When running commands via the syntactic sugar, the command is invoked with a function similar to below:
const { spawn } = require('child_process');

// (Runs inside a Promise executor, so `resolve` and `reject` are in scope.)
const child = spawn(action, variation.cliArgs, {
  shell: true,
  windowsHide: true,
});

child.on('exit', (code) => {
  if (code === 0) {
    resolve();
  } else {
    reject(new Error(`Action failed with code ${code}`));
  }
});
For more information on the simple command API, see the example: ./examples/simple-command.ts
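Because the command is spawned from within flexi-bench, command actions should compose with the variation features shown earlier. A hedged sketch: it assumes environment-variable variations are applied to the spawned command's environment, which this README implies but does not state outright, and ./my-cli.js is a hypothetical entry point:

const { Benchmark } = require('flexi-bench');

const benchmark = new Benchmark('My Benchmark', {
  iterations: 10,
  // Hypothetical CLI entry point, used for illustration only
  action: 'node ./my-cli.js',
}).withVariation('with NO_DAEMON', (v) =>
  // Assumption: this variable is set in the spawned command's environment
  v.withEnvironmentVariable('NO_DAEMON', 'true'),
);

await benchmark.run();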
A suite is a collection of benchmarks that can be run together:
const { Benchmark, Suite } = require('flexi-bench');

const suite = new Suite('My Suite')
  .addBenchmark(
    new Benchmark('My Benchmark 1', {
      iterations: 10,
      action: () => {
        // some action to benchmark
      },
    }),
  )
  .addBenchmark(
    new Benchmark('My Benchmark 2', {
      iterations: 10,
      action: () => {
        // some action to benchmark
      },
    }),
  );
Suites can also be created using the runner API:
const { suite, benchmark } = require('flexi-bench');

suite('My Suite', () => {
  benchmark('My Benchmark 1', (b) => {
    b.withIterations(10).withAction(() => {
      // some action to benchmark
    });
  });
  benchmark('My Benchmark 2', (b) => {
    b.withIterations(10).withAction(() => {
      // some action to benchmark
    });
  });
});
Variations allow running the same benchmark with different configurations:
const { Benchmark } = require('flexi-bench');

const benchmark = new Benchmark('My Benchmark', {
  iterations: 10,
  action: () => {
    // some action to benchmark
  },
}).withVariation('with NO_DAEMON', (v) =>
  v.withEnvironmentVariable('NO_DAEMON', 'true'),
);
Variations can do most things that the main benchmark can do, including having their own setup and teardown functions, or even a custom action.
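For example, a variation with its own lifecycle hooks and action might look like the sketch below. The overriding withAction call is confirmed earlier in this README; withSetup and withTeardown on the variation builder are assumed to mirror the benchmark builder:

const { Benchmark } = require('flexi-bench');

const benchmark = new Benchmark('My Benchmark', {
  iterations: 10,
  action: () => {
    // default action
  },
}).withVariation('with custom lifecycle', (v) =>
  v
    // Assumed to mirror the benchmark builder's withSetup/withTeardown
    .withSetup(() => {
      // variation-specific setup
    })
    .withTeardown(() => {
      // variation-specific teardown
    })
    // A variation's action overrides the parent benchmark's action
    .withAction(() => {
      // variation-specific action
    }),
);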
Some helper functions are provided on the Variation class to make it easier to set up variations:
const { Benchmark, Variation } = require('flexi-bench');

const benchmark = new Benchmark('My Benchmark', {
  iterations: 10,
  action: () => {
    // some action to benchmark
  },
}).withVariations(
  // Adds 4 variations with all possible combinations of the given environment variables
  Variation.FromEnvironmentVariables([
    ['NO_DAEMON', ['true', 'false']],
    ['OTHER_VAR', ['value1', 'value2']],
  ]),
);
For more complex scenarios where you need to pass objects or data directly to the benchmark action, use the context API instead of environment variables:
const { Benchmark, Variation } = require('flexi-bench');

// Define different implementations
const loopProcessor = {
  process: (data) => {
    /* loop */
  },
};
const reduceProcessor = {
  process: (data) => {
    /* reduce */
  },
};

// Use FromContexts for clean, declarative variation setup
const benchmark = new Benchmark('Process Data')
  .withIterations(100)
  .withVariations(
    Variation.FromContexts('processor', [
      ['loop', loopProcessor],
      ['reduce', reduceProcessor],
    ]),
  )
  .withAction((variation) => {
    // Use get() to retrieve context data
    const processor = variation.get('processor');
    const data = [
      /* test data */
    ];
    processor.process(data);
  });
The context API provides:
- Variation.FromContexts(key, [[name, value], ...]) - Create variations with context (cleanest API)
- withContext(key, value) - Attach custom data to a single variation
- get(key) - Retrieve context data (returns T | undefined)
- getOrDefault(key, defaultValue) - Retrieve with fallback value

Use FromContexts when creating multiple variations with the same context key - it's cleaner and more concise than individual withVariation calls with withContext.
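A minimal sketch of the single-variation form, combining withContext and getOrDefault from the list above; the names and sizes are illustrative, and the fallback covers runs where the context key is not set:

const { Benchmark } = require('flexi-bench');

const benchmark = new Benchmark('Serialize Data')
  .withIterations(50)
  .withVariation('large input', (v) =>
    // Attach custom data to this single variation
    v.withContext('size', 10000),
  )
  .withAction((variation) => {
    // Fall back to a small input when no context is set
    const size = variation.getOrDefault('size', 100);
    JSON.stringify(Array.from({ length: size }, (_, i) => i));
  });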
Variations can also be added to suites. Variations added to a suite will be applied to all benchmarks in the suite.
For example, the below suite would run each benchmark with 'NO_DAEMON' set to true, and then with 'OTHER_VAR' set to 'value1' for a total of 4 benchmark runs in the suite:
const { Benchmark, Suite } = require('flexi-bench');

const suite = new Suite('My Suite')
  .addBenchmark(
    new Benchmark('My Benchmark 1', {
      iterations: 10,
      action: () => {
        // some action to benchmark
      },
    }),
  )
  .addBenchmark(
    new Benchmark('My Benchmark 2', {
      iterations: 10,
      action: () => {
        // some action to benchmark
      },
    }),
  )
  .withVariation('with NO_DAEMON', (v) =>
    v.withEnvironmentVariable('NO_DAEMON', 'true'),
  )
  .withVariation('with OTHER_VAR', (v) =>
    v.withEnvironmentVariable('OTHER_VAR', 'value1'),
  );
Reporters control how benchmark results are output. FlexiBench provides several built-in reporters:
const {
  Benchmark,
  Suite,
  BenchmarkConsoleReporter,
  SuiteConsoleReporter,
} = require('flexi-bench');

// For single benchmarks
const benchmark = new Benchmark('My Benchmark', {
  iterations: 10,
  action: () => {
    /* ... */
  },
  reporter: new BenchmarkConsoleReporter(),
});

// For suites
const suite = new Suite('My Suite')
  .withReporter(new SuiteConsoleReporter())
  .addBenchmark(benchmark);
Both console reporters support the NO_COLOR environment variable for disabling colors:
NO_COLOR=1 node my-benchmark.js
Or explicitly via options:
new SuiteConsoleReporter({ noColor: true });
new BenchmarkConsoleReporter({ noColor: true });
For Markdown output:

const {
  MarkdownBenchmarkReporter,
  MarkdownSuiteReporter,
} = require('flexi-bench');

// For single benchmark output
const benchmarkReporter = new MarkdownBenchmarkReporter({
  outputFile: 'results.md',
  fields: ['min', 'average', 'p95', 'max'],
  append: true, // Set to true to avoid overwriting when running multiple benchmarks
});

// For suite-level output (recommended)
const suiteReporter = new MarkdownSuiteReporter({
  outputFile: 'results.md',
  title: 'Benchmark Results',
  fields: ['min', 'average', 'p95', 'max', 'iterations'],
});
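These reporters attach the same way as the console reporters. A usage sketch reusing the two reporters constructed above; suite.run() is assumed to mirror the benchmark.run() call shown earlier:

const { Suite, Benchmark } = require('flexi-bench');

const suite = new Suite('My Suite')
  .withReporter(suiteReporter) // from the snippet above
  .addBenchmark(
    new Benchmark('My Benchmark', {
      iterations: 10,
      action: () => {
        /* ... */
      },
      reporter: benchmarkReporter, // from the snippet above
    }),
  );

await suite.run(); // assumed to mirror benchmark.run()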
When a benchmark has multiple variations, both MarkdownBenchmarkReporter and MarkdownSuiteReporter automatically generate a comparison table of the variations' results, making it easy to see which implementation performs best at a glance.
For CI/CD integration:
const { JsonSuiteReporter } = require('flexi-bench');

const reporter = new JsonSuiteReporter({
  outputFile: 'results.json',
  pretty: true,
  includeMetadata: true, // Includes timestamp, platform, node version
});
Use multiple reporters simultaneously:
const {
  Suite,
  CompositeReporter,
  SuiteConsoleReporter,
  MarkdownSuiteReporter,
  JsonSuiteReporter,
} = require('flexi-bench');

const suite = new Suite('My Suite').withReporter(
  new CompositeReporter([
    new SuiteConsoleReporter(),
    new MarkdownSuiteReporter({ outputFile: 'results.md' }),
    new JsonSuiteReporter({ outputFile: 'results.json' }),
  ]),
);
Create custom reporters by implementing the SuiteReporter interface:
import { SuiteReporter, Result } from 'flexi-bench';

class MyCustomReporter implements SuiteReporter {
  // Optional lifecycle hooks
  onSuiteStart?(suiteName: string): void {
    console.log(`Starting suite: ${suiteName}`);
  }

  onBenchmarkStart?(benchmarkName: string): void {
    console.log(`Running: ${benchmarkName}`);
  }

  onBenchmarkEnd?(benchmarkName: string, results: Result[]): void {
    console.log(`Completed: ${benchmarkName}`);
  }

  // Required: called after all benchmarks complete
  report(results: Record<string, Result[]>): void {
    // Process results here
    for (const [name, result] of Object.entries(results)) {
      console.log(`${name}: ${result[0].average}ms average`);
    }
  }
}
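A custom reporter then plugs in like any built-in one. A usage sketch, again assuming suite.run() mirrors benchmark.run():

import { Suite, Benchmark } from 'flexi-bench';

const suite = new Suite('My Suite')
  .withReporter(new MyCustomReporter())
  .addBenchmark(
    new Benchmark('My Benchmark', {
      iterations: 10,
      action: () => {
        /* ... */
      },
    }),
  );

await suite.run(); // assumed to mirror benchmark.run()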
The Result type contains comprehensive information about benchmark runs:
interface Result {
  // Basic metrics
  label: string; // Name of benchmark or variation
  min: number; // Minimum duration (ms)
  max: number; // Maximum duration (ms)
  average: number; // Average duration (ms)
  p95: number; // 95th percentile duration (ms)
  raw: (number | Error)[]; // Raw durations/errors

  // Failure information
  failed?: boolean; // Whether any iteration failed
  failureRate?: number; // Rate of failures (0-1)

  // Metadata (useful for custom reporters)
  iterations?: number; // Number of iterations run
  totalDuration?: number; // Total wall-clock time (ms)
  benchmarkName?: string; // Name of parent benchmark
  variationName?: string; // Name of variation

  // Subresults from performance observer
  subresults?: Result[];
}
The Result type is exported from the main package:
import { Result } from 'flexi-bench';
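For example, a small helper built on these documented fields could gate CI on latency and flakiness; a sketch, where the budget values are illustrative assumptions rather than library defaults:

import { Result } from 'flexi-bench';

// Throw if any result blows its latency budget or had failing iterations.
// The threshold is an illustrative assumption, not a library default.
function assertWithinBudget(results: Result[], p95BudgetMs: number): void {
  for (const result of results) {
    if (result.p95 > p95BudgetMs) {
      throw new Error(
        `${result.label}: p95 ${result.p95}ms exceeds budget of ${p95BudgetMs}ms`,
      );
    }
    if ((result.failureRate ?? 0) > 0) {
      throw new Error(
        `${result.label}: had failing iterations (rate ${result.failureRate})`,
      );
    }
  }
}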
Compare different implementations of the same interface:
const {
  Suite,
  Benchmark,
  Variation,
  MarkdownSuiteReporter,
} = require('flexi-bench');

// Define your implementations
const implementations = {
  loop: (data) => {
    /* loop implementation */
  },
  reduce: (data) => {
    /* reduce implementation */
  },
};

const suite = new Suite('Implementation Comparison')
  .withReporter(new MarkdownSuiteReporter({ outputFile: 'results.md' }))
  .addBenchmark(
    new Benchmark('Process Data')
      .withIterations(100)
      // Create variations with context - no environment variables needed!
      .withVariations(
        Variation.FromContexts('impl', [
          ['loop', implementations.loop],
          ['reduce', implementations.reduce],
        ]),
      )
      .withAction((variation) => {
        // Retrieve the implementation directly from context
        const impl = variation.get('impl');
        const data = [
          /* test data */
        ];
        impl(data);
      }),
  );
See the examples folder:
- ./examples/benchmark.ts - Full benchmark suite with environment variable variations
- ./examples/performance-observer.ts - Using the PerformanceObserver API
- ./examples/simple-command.ts - Benchmarking CLI commands
- ./examples/multiple-reporters.ts - Using CompositeReporter for multiple outputs
- ./examples/custom-reporter.ts - Creating custom reporters with the Result type
- ./examples/cookbook-different-implementations.ts - Comparing implementations
- ./examples/no-color-support.ts - Disabling colors in CI environments
- ./examples/markdown-reporter-append.ts - Using append mode for MarkdownReporter
- ./examples/markdown-comparison.ts - Automatic comparison tables with variations