Aztec Benchmark

CLI tool for running Aztec contract benchmarks.
Use this tool to execute benchmark files written in TypeScript. For comparing results and generating reports in CI, use the separate companion GitHub Action: defi-wonderland/aztec-benchmark.
Installation
yarn add --dev @defi-wonderland/aztec-benchmark
npm install --save-dev @defi-wonderland/aztec-benchmark
CLI Usage
After installing, run the CLI using npx aztec-benchmark. By default, it looks for a Nargo.toml file in the current directory and runs benchmarks defined within it.
npx aztec-benchmark [options]
Configuration (Nargo.toml)
Define which contracts have associated benchmark files in your Nargo.toml under the [benchmark] section:
[benchmark]
token = "benchmarks/token_contract.benchmark.ts"
another_contract = "path/to/another.benchmark.ts"
The paths to the .benchmark.ts files are relative to the Nargo.toml file.
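For example, with a project layout like the following (illustrative only; src/main.nr stands in for your Noir sources), both paths above resolve relative to the directory containing Nargo.toml:

my_project/
├── Nargo.toml
├── src/
│   └── main.nr
├── benchmarks/
│   └── token_contract.benchmark.ts
└── path/
    └── to/
        └── another.benchmark.ts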
Options
-c, --contracts <names...>: Specify which contracts (keys from the [benchmark] section) to run. If omitted, runs all defined benchmarks.
--config <path>: Path to your Nargo.toml file (default: ./Nargo.toml).
-o, --output-dir <path>: Directory to save benchmark JSON reports (default: ./benchmarks).
-s, --suffix <suffix>: Optional suffix to append to report filenames (e.g., _pr results in token_pr.benchmark.json).
Examples
Run all benchmarks defined in ./Nargo.toml:
npx aztec-benchmark
Run only the token benchmark:
npx aztec-benchmark --contracts token
Run token and another_contract benchmarks, saving reports with a suffix:
npx aztec-benchmark --contracts token another_contract --output-dir ./benchmark_results --suffix _v2
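If you run benchmarks regularly, you can also expose the CLI through a package script (an illustrative snippet; adjust the flags to your setup), so that yarn benchmark or npm run benchmark does the same thing:

{
  "scripts": {
    "benchmark": "aztec-benchmark --output-dir ./benchmarks"
  }
}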
Writing Benchmarks
Benchmarks are TypeScript classes that extend the Benchmark base class exported by this package.
Each entry in the array returned by getMethods can either be a plain ContractFunctionInteractionCallIntent
(in which case the benchmark name is auto-derived) or a NamedBenchmarkedInteraction object
(which includes the interaction and a custom name for reporting).
import {
Benchmark,
type BenchmarkContext,
type NamedBenchmarkedInteraction
} from '@defi-wonderland/aztec-benchmark';
import type { PXE } from '@aztec/pxe/server';
import type { Contract } from '@aztec/aztec.js/contracts';
import type { AztecAddress } from '@aztec/aztec.js/addresses';
import type { ContractFunctionInteractionCallIntent } from '@aztec/aztec.js/authorization';
import { createStore } from '@aztec/kv-store/lmdb-v2';
import { createPXE, getPXEConfig } from '@aztec/pxe/server';
import { createAztecNodeClient, waitForNode } from '@aztec/aztec.js/node';
import { registerInitialSandboxAccountsInWallet, TestWallet } from '@aztec/test-wallet/server';
interface MyBenchmarkContext extends BenchmarkContext {
  pxe: PXE;
  wallet: TestWallet;
  deployer: AztecAddress;
  contract: Contract;
}
export default class MyContractBenchmark extends Benchmark {
  async setup(): Promise<MyBenchmarkContext> {
    console.log('Setting up benchmark environment...');

    const { NODE_URL = 'http://localhost:8080' } = process.env;
    const node = createAztecNodeClient(NODE_URL);
    await waitForNode(node);

    const l1Contracts = await node.getL1ContractAddresses();
    const config = getPXEConfig();
    const fullConfig = { ...config, l1Contracts };
    fullConfig.proverEnabled = false;

    const pxeVersion = 2;
    const store = await createStore('pxe', pxeVersion, {
      dataDirectory: 'store',
      dataStoreMapSizeKb: 1e6,
    });

    const pxe: PXE = await createPXE(node, fullConfig, { store });

    const wallet: TestWallet = await TestWallet.create(node);
    const accounts: AztecAddress[] = await registerInitialSandboxAccountsInWallet(wallet);
    const [deployer] = accounts;

    // YourSpecificContract is your project's generated contract class
    // (import it from your compiled contract artifacts).
    const deployedContract = await YourSpecificContract
      .deploy(wallet /*, ...constructor args */)
      .send({ from: deployer })
      .deployed();
    const contract = await YourSpecificContract.at(deployedContract.address, wallet);

    console.log('Contract deployed at:', contract.address.toString());

    return { pxe, wallet, deployer, contract };
  }

  async getMethods(context: MyBenchmarkContext): Promise<Array<ContractFunctionInteractionCallIntent | NamedBenchmarkedInteraction>> {
    if (!context || !context.contract) {
      console.error('Benchmark context or contract not initialized in setup(). Skipping getMethods.');
      return [];
    }

    const { contract, deployer } = context;
    const recipient = deployer;

    const interactionPlain = { caller: deployer, action: contract.methods.transfer(recipient, 100n) };
    const interactionNamed1 = { caller: deployer, action: contract.methods.someOtherMethod('test_value_1') };
    const interactionNamed2 = { caller: deployer, action: contract.methods.someOtherMethod('test_value_2') };

    return [
      // Plain intent: the report name is auto-derived from the method name.
      interactionPlain,
      // Named interactions: the given name is used in reports.
      { interaction: interactionNamed1, name: 'Some Other Method (value 1)' },
      { interaction: interactionNamed2, name: 'Some Other Method (value 2)' },
    ];
  }

  async teardown(context: MyBenchmarkContext): Promise<void> {
    console.log('Cleaning up benchmark environment...');
    if (context && context.pxe) {
      await context.pxe.stop();
    }
  }
}
Note: your benchmark code needs a valid Aztec project setup (compiled contract artifacts and a reachable node or sandbox) to interact with contracts.
Your Benchmark implementation is responsible for constructing the ContractFunctionInteractionCallIntent objects.
If you provide a NamedBenchmarkedInteraction object, its name field will be used in reports.
If you provide a plain ContractFunctionInteractionCallIntent, the tool will attempt to derive a name from the interaction (e.g., the method name).
Wonderland's Usage Example
You can see how we use this tool to benchmark our Aztec contracts in aztec-standards.
Benchmark Output
Your Benchmark implementation is responsible for measuring and outputting performance data (e.g., as JSON); the comparison action consumes this output.
Each entry in the output will be identified by the custom name you provided (if any) or the auto-derived name.
Action Usage
This repository includes a GitHub Action (defined in action/action.yml) designed for CI workflows. It automatically finds and compares benchmark results (conventionally named with _base and _latest suffixes) generated by previous runs of aztec-benchmark and produces a Markdown comparison report.
Inputs
threshold: Regression threshold percentage (default: 2.5).
output_markdown_path: Path to save the generated Markdown comparison report (default: benchmark-comparison.md).
Outputs
comparison_markdown: The generated Markdown report content.
markdown_file_path: Path to the saved Markdown file.
Example Usage (in PR workflow)
This action is typically used in a workflow that runs on pull requests. It assumes a previous step or job has already run the benchmarks on the base commit and saved the results with the _base suffix (e.g., in ./benchmarks/token_base.benchmark.json).
Workflow Steps:
- Checkout the base branch/commit.
- Run npx aztec-benchmark -s _base (saving outputs to ./benchmarks).
- Checkout the PR branch/current commit.
- Use this action (./action), which will:
  a. Run npx aztec-benchmark -s _latest to generate current benchmarks.
  b. Compare the new _latest files against the existing _base files.
  c. Generate the Markdown report.
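The base-benchmark half of that flow is not performed by this action; a sketch of what those steps might look like (assuming a yarn project and the default ./benchmarks output directory) is:

- name: Checkout Base Code
  uses: actions/checkout@v4
  with:
    ref: ${{ github.event.pull_request.base.sha }}

- name: Install Dependencies
  run: yarn install --frozen-lockfile

- name: Generate Base Benchmarks
  run: npx aztec-benchmark -s _base

The PR-branch half then looks like the following: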
- name: Checkout Current Code
  uses: actions/checkout@v4

- name: Install Dependencies
  run: yarn install --frozen-lockfile

- name: Generate Latest Benchmarks, Compare, and Create Report
  uses: defi-wonderland/aztec-benchmark-diff/action
  id: benchmark_compare
  with:
    threshold: '2.0'
    output_markdown_path: 'benchmark_diff.md'

- name: Comment Report on PR
  uses: peter-evans/create-or-update-comment@v4
  with:
    issue-number: ${{ github.event.pull_request.number }}
    body-file: ${{ steps.benchmark_compare.outputs.markdown_file_path }}
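Alternatively, the report content is also exposed through the comparison_markdown output, so you can pass it directly as the comment body instead of reading the file (the comment action above also accepts a body input):

- name: Comment Report on PR
  uses: peter-evans/create-or-update-comment@v4
  with:
    issue-number: ${{ github.event.pull_request.number }}
    body: ${{ steps.benchmark_compare.outputs.comparison_markdown }}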
Refer to the action/action.yml file for the definitive inputs and description.