
modestbench
“A full-ass benchmarking framework for Node.js”
by @boneskull
In summary, modestbench wraps tinybench and enhances it with a bunch of crap so you don't have to think.
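For context, here is a rough sketch of the same one-task run written directly against tinybench's own Bench/add/run API; modestbench layers file discovery, reporters, tags, and history on top of this:

import { Bench } from 'tinybench';

// One task, run through tinybench by hand
const bench = new Bench({ iterations: 100 });
bench.add('Array.push()', () => {
  const arr = [];
  for (let i = 0; i < 1000; i++) arr.push(i);
  return arr;
});
await bench.run();
console.table(bench.table());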
The usual suspects:
npm install --save-dev modestbench
The modestbench CLI provides an init command. This command scaffolds example benchmark files and a configuration file, and adds .modestbench/ to .gitignore to exclude historical data from version control.

# Initialize with examples and configuration
modestbench init
# Or specify project type and config format
modestbench init advanced --config-type typescript
Project Types:
- basic - Simple setup for small projects (100 iterations, human reporter)
- advanced - Feature-rich with multiple reporters and structured output (1000 iterations, warmup, human + JSON reporters)
- library - Optimized for library performance testing (5000 iterations, high warmup, human + JSON reporters, organized suite structure)

PRO TIP: The convention for modestbench benchmark files is to use the .bench.js or .bench.ts extension.
modestbench supports two formats for defining benchmarks:
For quick benchmarks with just a few tasks, you can use the simplified format:
// benchmarks/example.bench.js
export default {
  'Array.push()': () => {
    const arr = [];
    for (let i = 0; i < 1000; i++) {
      arr.push(i);
    }
    return arr;
  },
  'Array spread': () => {
    let arr = [];
    for (let i = 0; i < 1000; i++) {
      arr = [...arr, i];
    }
    return arr;
  },
};
When you need to organize benchmarks into groups with setup/teardown hooks:
// benchmarks/example.bench.js
export default {
  suites: {
    'Array Operations': {
      benchmarks: {
        'Array.push()': () => {
          const arr = [];
          for (let i = 0; i < 1000; i++) {
            arr.push(i);
          }
          return arr;
        },
        'Array spread': () => {
          let arr = [];
          for (let i = 0; i < 1000; i++) {
            arr = [...arr, i];
          }
          return arr;
        },
      },
    },
  },
};
When to use each format:
- Simplified format: Quick benchmarks, single file with related tasks, no setup/teardown needed
- Suite format: Complex projects, multiple groups of benchmarks, need setup/teardown hooks, or want explicit organization
# Run all benchmarks
modestbench
# Run with specific options
modestbench --iterations 5000 --reporter human --reporter json

See the examples directory for additional guides and sample code.
Run benchmarks with sensible defaults:
# Run benchmarks in current directory and bench/ (top-level only)
modestbench
# Run all benchmarks in a directory (searches recursively)
modestbench benchmarks/
# Run benchmarks from multiple directories
modestbench src/perf/ tests/benchmarks/
# Run specific files
modestbench benchmarks/critical.bench.js
# Mix files, directories, and glob patterns
modestbench file.bench.js benchmarks/ "tests/**/*.bench.ts"
# With options
modestbench \
  --config ./config.json \
  --iterations 2000 \
  --reporter human \
  --reporter json \
  --reporter csv \
  --output ./results \
  --tag performance \
  --tag algorithm \
  --concurrent
Supported file extensions: .js, .mjs, .cjs, .ts, .mts, .cts

The --limit-by flag controls whether benchmarks are limited by time, iteration count, or both:
# Limit by iteration count (fast, predictable sample size)
modestbench --iterations 100
# Limit by time budget (ensures consistent time investment)
modestbench --time 5000
# Limit by whichever comes first (safety bounds)
modestbench --iterations 1000 --time 10000
# Explicit control (overrides smart defaults)
modestbench --iterations 500 --time 5000 --limit-by time
# Require both thresholds (rare, for statistical rigor)
modestbench --iterations 100 --time 2000 --limit-by all
Smart Defaults:
- Only --iterations provided → limits by iteration count (fast)
- Only --time provided → limits by time budget
- Both provided → limits by whichever threshold is hit first (any mode)

Modes:
- iterations: Stop after N samples (time budget set to 1ms)
- time: Run for T milliseconds (collect as many samples as possible)
- any: Stop when either threshold is reached (defaults to iterations behavior for fast completion)
- all: Require both time AND iterations thresholds (tinybench default behavior)

Run specific subsets of benchmarks using tags:
# Run only benchmarks tagged with 'fast'
modestbench --tag fast
# Run benchmarks with multiple tags (OR logic - matches ANY)
modestbench --tag string --tag array --tag algorithm
# Exclude specific benchmarks
modestbench --exclude-tag slow --exclude-tag experimental
# Combine: run fast benchmarks except experimental ones
modestbench --tag fast --exclude-tag experimental
Key Features:
- --tag uses OR logic when specified multiple times (matches ANY specified tag)
- --exclude-tag takes precedence over --tag

See Tagging and Filtering for detailed examples.
Control where and how benchmark results are saved:
# Write to a directory (creates results.json, results.csv, etc.)
modestbench --reporter json --reporter csv --output ./results
# Custom filename for single reporter
modestbench --reporter json --output-file my-benchmarks.json
# Custom filename in specific directory
modestbench --reporter json --output ./results --output-file benchmarks-2024.json
# Custom filename with absolute path
modestbench --reporter json --output-file /tmp/my-benchmarks.json
# With subdirectories
modestbench --reporter csv --output ./results --output-file reports/performance.csv
# Short flag alias (using -r for --reporter)
modestbench -r json --of custom.json
Key Options:
- --output <dir>, -o <dir> - Directory to write output files (default: stdout)
- --output-file <filename>, --of <filename> - Custom filename for output (used with a single reporter, e.g. --reporter json); when combined with --output, the filename is relative to that directory

Limitations:
- --output-file only works with a single reporter
- Multiple reporters require --output <dir> (filenames default to results.json, results.csv, etc.)

modestbench automatically tracks benchmark results over time in a local .modestbench/ directory. This history lets you list, inspect, compare, export, and clean past runs:
# List recent runs
modestbench history list
# Show detailed results
modestbench history show <run-id>
# Compare two runs
modestbench history compare <run-id-1> <run-id-2>
# Export historical data
modestbench history export --format csv --output results.csv
# Clean old data
modestbench history clean --older-than 30d
modestbench supports performance budgets to prevent regressions and enforce performance standards in CI/CD.
Define budgets in your modestbench.config.json:
{
  "budgetMode": "fail",
  "budgets": {
    "benchmarks/critical.bench.js/default/parseConfig": {
      "absolute": {
        "maxTime": "10ms",
        "minOpsPerSec": 100000
      }
    }
  }
}
Budget Types:

Absolute Budgets: Fixed thresholds
- maxTime - Maximum mean execution time (e.g., "10ms", "500us")
- minOpsPerSec - Minimum operations per second
- maxP99 - Maximum 99th percentile latency

Relative Budgets: Comparison against baseline
- maxRegression - Maximum performance degradation (e.g., "10%", 0.1)
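As a sketch, a config combining absolute and relative budgets for one benchmark might look like this; note the "relative" key name is an assumption, mirrored from the documented "absolute" key, so verify it against a generated config:

{
  "budgetMode": "warn",
  "budgets": {
    "benchmarks/critical.bench.js/default/parseConfig": {
      "absolute": {
        "maxTime": "10ms",
        "maxP99": "25ms"
      },
      "relative": {
        // Assumed key name; warns when the mean degrades >10% vs. the baseline
        "maxRegression": "10%"
      }
    }
  }
}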
Budget Modes:
- fail (default) - Exit with error code if budgets fail
- warn - Show warnings but don't fail
- report - Include in output without failing

# Save current run as a baseline
modestbench baseline set production-v1.0
# List all baselines
modestbench baseline list
# Compare against a baseline
modestbench run --baseline production-v1.0
# Analyze history and suggest budgets
modestbench baseline analyze
Identify hot code paths that are good candidates for benchmarking:
# Profile a command
modestbench analyze "npm test"
# Profile a specific script
modestbench analyze "node ./src/server.js"
# Analyze existing profile
modestbench analyze --input isolate-0x123-v8.log
# Filter and customize output
modestbench analyze "npm test" \
  --filter-file "src/**" \
  --min-percent 2.0 \
  --top 50 \
  --group-by-file
Profile Options:
- [command] - Command to profile (e.g., npm test, node script.js)
- --input, -i - Path to existing V8 profile log file
- --filter-file - Filter functions by file glob pattern
- --min-percent - Minimum execution percentage to show (default: 1.0)
- --top, -n - Number of top functions to show (default: 25)
- --group-by-file - Group results by source file
- --color - Enable/disable color output

How It Works:
The analyze command uses Node.js's built-in V8 profiler to identify the functions that consume the most execution time. It automatically filters out Node.js internals and node_modules to focus on your code.
Functions that appear at the top of the profile report are good candidates for benchmarking, as optimizing them will have the most impact on overall performance.
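Since --input consumes standard V8 profile logs, you can also capture one yourself with Node's --prof flag and feed it to analyze (a sketch; the isolate-*.log filename varies per process):

# Generate a V8 profile log manually (writes an isolate-*-v8.log file to the cwd)
node --prof ./src/server.js

# Analyze the resulting log (substitute the actual filename)
modestbench analyze --input isolate-0x123-v8.log --min-percent 2.0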
Create modestbench.config.json:
{
  "bail": false, // Stop execution on first failure
  "exclude": ["node_modules/**"], // Patterns to exclude from discovery
  "excludeTags": ["slow", "experimental"], // Tags to exclude from execution
  "iterations": 1000, // Number of samples per benchmark
  "limitBy": "iterations", // Limit mode: 'iterations', 'time', 'any', 'all'
  "outputDir": "./benchmark-results", // Directory for results and reports
  "pattern": "benchmarks/**/*.bench.{js,ts}", // Glob pattern to discover benchmark files
  "quiet": false, // Minimal output mode
  "reporters": ["human", "json"], // Output reporters to use
  "tags": ["fast", "critical"], // Tags to include (if empty, all benchmarks run)
  "time": 5000, // Time budget in ms per benchmark
  "timeout": 30000, // Task timeout in ms
  "verbose": false, // Detailed output with debugging info
  "warmup": 50 // Warmup iterations before measurement
}
Configuration Options:
- pattern - Glob pattern(s) to discover benchmark files (can be a string or array)
- exclude - Glob patterns for files/directories to exclude from discovery
- excludeTags - Array of tags to exclude from execution; benchmarks with ANY of these tags will be skipped (default: [])
- iterations - Number of samples to collect per benchmark task (default: 100)
- time - Time budget in milliseconds per benchmark task (default: 1000)
- limitBy - How to limit benchmarks: "iterations" (sample count), "time" (time budget), "any" (whichever comes first), or "all" (both thresholds required)
- warmup - Number of warmup iterations before measurement begins (default: 0)
- timeout - Maximum time in milliseconds for a single task before timing out (default: 30000)
- bail - Stop execution on first benchmark failure (default: false)
- reporters - Array of reporter names to use for output (available: "human", "json", "csv")
- outputDir - Directory path for saving benchmark results and reports
- quiet - Minimal output mode, suppresses non-essential messages (default: false)
- tags - Array of tags to include; if non-empty, only benchmarks with ANY of these tags will run (default: [])
- verbose - Detailed output mode with additional debugging information (default: false)

Note: Smart defaults apply for limitBy based on which options you provide. See Controlling Benchmark Limits for details.
modestbench supports multiple configuration file formats, powered by cosmiconfig:
- JSON: modestbench.config.json, .modestbenchrc.json, .modestbenchrc
- YAML: modestbench.config.yaml, modestbench.config.yml, .modestbenchrc.yaml, .modestbenchrc.yml
- JavaScript: modestbench.config.js, modestbench.config.mjs, .modestbenchrc.js, .modestbenchrc.mjs
- TypeScript: modestbench.config.ts
- package.json: "modestbench" field

Generate a configuration file using:
modestbench init --config-type json # JSON format
modestbench init --config-type yaml # YAML format
modestbench init --config-type js # JavaScript format
modestbench init --config-type ts # TypeScript format
Configuration Discovery: modestbench automatically searches for configuration files in the current directory and parent directories, following standard conventions.
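Assuming the JavaScript config format follows the usual cosmiconfig default-export convention, a minimal modestbench.config.mjs sketch might look like:

// modestbench.config.mjs (option names mirror the JSON example above)
export default {
  pattern: 'benchmarks/**/*.bench.{js,ts}',
  iterations: 1000,
  warmup: 50,
  reporters: ['human', 'json'],
  outputDir: './benchmark-results',
};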
Human reporter: Real-time progress bars with color-coded results and performance summaries.

JSON reporter: Structured data perfect for programmatic analysis and integration:
{
  "results": [
    {
      "file": "example.bench.js",
      "suite": "Array Operations",
      "task": "Array.push()",
      "hz": 1234567.89,
      "stats": {
        "mean": 0.00081,
        "stdDev": 0.00002,
        "marginOfError": 2.45
      }
    }
  ],
  "run": {
    "id": "run-2025-10-07-001",
    "timestamp": "2025-10-07T10:30:00.000Z",
    "duration": 15420,
    "status": "completed"
  }
}
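As a quick consumer sketch (assuming the results.json shape shown above), a few lines of Node can rank tasks by throughput:

// rank-results.mjs: sort JSON-reporter output by ops/sec, slowest first
import { readFileSync } from 'node:fs';

const { results } = JSON.parse(readFileSync('./results/results.json', 'utf8'));
for (const r of [...results].sort((a, b) => a.hz - b.hz)) {
  console.log(`${r.hz.toFixed(0).padStart(12)} ops/sec  ${r.suite} / ${r.task}`);
}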
CSV reporter: Tabular data for spreadsheet analysis and historical tracking.
Suite-level setup() hooks let you prepare shared state before a suite's benchmarks run:

const state = {
  data: [],
  sortedData: [],
};

export default {
  suites: {
    Sorting: {
      setup() {
        state.data = generateTestData(1000);
      },
      benchmarks: {
        // Shorthand syntax for simple benchmarks
        'Quick Sort': () => quickSort(state.data),
        'Merge Sort': () => mergeSort(state.data),
      },
    },
    Searching: {
      setup() {
        state.sortedData = generateSortedData(10000);
      },
      benchmarks: {
        'Binary Search': () => binarySearch(state.sortedData, 5000),
        'Linear Search': () => linearSearch(state.sortedData, 5000),
      },
    },
  },
};
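A teardown() hook pairs with setup() for cleanup. A self-contained sketch using temp-file fixtures (the hook names come from the suite format described above):

import { mkdtempSync, readFileSync, rmSync, writeFileSync } from 'node:fs';
import { tmpdir } from 'node:os';
import { join } from 'node:path';

const state = {};

export default {
  suites: {
    'File Reads': {
      setup() {
        // Create a 1 KiB fixture before any benchmark in this suite runs
        state.dir = mkdtempSync(join(tmpdir(), 'bench-'));
        state.file = join(state.dir, 'data.txt');
        writeFileSync(state.file, 'x'.repeat(1024));
      },
      teardown() {
        // Remove the fixture once the suite finishes
        rmSync(state.dir, { recursive: true, force: true });
      },
      benchmarks: {
        'readFileSync (1 KiB)': () => readFileSync(state.file),
      },
    },
  },
};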
Async benchmarks work the same way; use the full object syntax when a task needs its own config, tags, or metadata:

export default {
  suites: {
    'Async Performance': {
      benchmarks: {
        // Shorthand syntax works with async functions too
        'Promise.resolve()': async () => {
          return await Promise.resolve('test');
        },
        // Full syntax when you need config, tags, or metadata
        'Fetch Simulation': {
          async fn() {
            const response = await simulateApiCall();
            return response.json();
          },
          config: {
            iterations: 100, // Fewer iterations for slow operations
          },
        },
      },
    },
  },
};
modestbench supports a powerful tagging system that lets you organize and selectively run benchmarks. Tags can be applied at three levels: file, suite, and task. Tags automatically cascade from parent to child, so tasks inherit tags from their suite and file.
Tags can be added at any level:
// Example subject string for the benchmarks below
const str = 'text with a pattern inside';

export default {
  // File-level tags (inherited by all suites and tasks)
  tags: ['performance', 'core'],
  suites: {
    'String Operations': {
      // Suite-level tags (inherited by all tasks in this suite)
      tags: ['string', 'fast'],
      benchmarks: {
        // Task inherits: ['performance', 'core', 'string', 'fast', 'regex']
        'RegExp Test': {
          fn: () => /pattern/.test(str),
          tags: ['regex'], // Task-specific tags
        },
        // Task inherits: ['performance', 'core', 'string', 'fast']
        'String Includes': () => str.includes('pattern'),
      },
    },
  },
};
Use --tag (or -t) to include only benchmarks with specific tags (OR logic - matches ANY tag):
# Run fast algorithms
modestbench run --tag fast
# Run benchmarks tagged with 'string' OR 'array'
modestbench run --tag string --tag array
# Using short aliases
modestbench run -t fast -t optimized
Use --exclude-tag (or -e) to skip benchmarks with specific tags:
# Exclude slow benchmarks
modestbench run --exclude-tag slow
# Exclude experimental and unstable benchmarks
modestbench run --exclude-tag experimental --exclude-tag unstable
Combine both to fine-tune your benchmark runs (exclusion takes precedence):
# Run fast benchmarks, but exclude experimental ones
modestbench run --tag fast --exclude-tag experimental
# Run algorithm benchmarks but skip slow reference implementations
modestbench run --tag algorithm --exclude-tag slow --exclude-tag reference
export default {
  tags: ['file-level'], // All tasks get this tag
  suites: {
    'Fast Suite': {
      tags: ['fast'], // Tasks get: ['file-level', 'fast']
      benchmarks: {
        'Task A': {
          fn: () => {},
          tags: ['math'], // This task has: ['file-level', 'fast', 'math']
        },
        'Task B': () => {}, // This task has: ['file-level', 'fast']
      },
    },
  },
};
Filtering Behavior:
- --tag math → Runs only Task A (matches 'math')
- --tag fast → Runs both Task A and Task B (both have 'fast')
- --tag file-level → Runs both tasks (both inherit 'file-level')
- --exclude-tag math → Runs only Task B (Task A excluded)

Suite setup() and teardown() hooks only run if at least one task in the suite matches the filter. This prevents unnecessary setup work for filtered-out suites.
A typical GitHub Actions workflow:

name: Performance Tests
on: [push, pull_request]
jobs:
  benchmark:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-node@v3
        with:
          node-version: 18
      - run: npm ci
      - run: npm run build
      - name: Run Benchmarks
        run: |
          modestbench \
            --reporter json \
            --reporter csv \
            --output ./results
      - name: Upload Results
        uses: actions/upload-artifact@v3
        with:
          name: benchmark-results
          path: ./results/
A standalone regression check that compares the current run against saved baseline results:

// scripts/check-regression.js
import { execSync } from 'child_process';
import { readFileSync } from 'fs';

// Run current benchmarks
execSync('modestbench --reporter json --output ./current');
const current = JSON.parse(readFileSync('./current/results.json', 'utf8'));

// Load baseline results
const baseline = JSON.parse(readFileSync('./baseline/results.json', 'utf8'));

// Check for significant regressions
for (const result of current.results) {
  const baselineResult = baseline.results.find(
    (r) => r.file === result.file && r.task === result.task,
  );
  if (baselineResult) {
    const regression = (baselineResult.hz - result.hz) / baselineResult.hz;
    if (regression > 0.1) {
      // 10% regression threshold
      console.error(
        `Performance regression detected in ${result.task}: ${(regression * 100).toFixed(1)}%`,
      );
      process.exit(1);
    }
  }
}
console.log('No performance regressions detected ✅');
modestbench also exposes a programmatic API:

import { modestbench, HumanReporter } from 'modestbench';

// Initialize the engine
const engine = modestbench();
engine.registerReporter('human', new HumanReporter());

// Execute benchmarks
const result = await engine.execute({
  pattern: '**/*.bench.js',
  iterations: 1000,
});
We welcome contributions! Please see our Contributing Guide for details.
# Clone the repository
git clone https://github.com/boneskull/modestbench.git
cd modestbench
# Install dependencies
npm install
# Run tests
npm test
# Build the project
npm run build
# Run examples
npm run examples
Engine statistical analysis inspired by the excellent work of bench-node.

Copyright © 2025 Christopher Hiller. Licensed under the Blue Oak Model License 1.0.0.
FAQs
A full-ass benchmarking framework for Node.js
The npm package modestbench receives a total of 256 weekly downloads. As such, modestbench's popularity is classified as not popular.
We found that modestbench demonstrates a healthy version release cadence and project activity: the last version was released less than a year ago, and the project has one open source maintainer.