
@aztec/archiver
The archiver fetches onchain data from L1 and stores it locally in a queryable form. It pulls:

- **Checkpoints** (containing L2 blocks) from `CheckpointProposed` events on the Rollup contract
- **L1-to-L2 messages** from `MessageSent` events on the Inbox contract

The interfaces `L2BlockSource`, `L2LogsSource`, and `ContractDataSource` define how consumers access this data. The interface `L2BlockSink` allows other subsystems, such as the validator client, to push not-yet-checkpointed blocks into the archiver.
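As a rough illustration, the source/sink split might look like the following TypeScript sketch; these are simplified stand-ins, not the actual Aztec interface definitions.

```ts
// Simplified stand-ins for the actual Aztec types and interfaces.
interface L2Block {
  number: number;
  archiveRoot: string;
}

interface BlockHeader {
  blockNumber: number;
}

// Read side: how consumers query synced data.
interface L2BlockSource {
  getBlockNumber(): Promise<number>; // latest, including proposed blocks
  getCheckpointedL2BlockNumber(): Promise<number>; // only blocks checkpointed on L1
  getBlockHeader(which: number | 'latest'): Promise<BlockHeader | undefined>;
}

// Write side: lets subsystems such as the validator client push
// not-yet-checkpointed blocks into the archiver.
interface L2BlockSink {
  addBlock(block: L2Block): Promise<void>;
}
```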
The archiver emits events for other subsystems to react to state changes:
- `L2PruneUnproven`: Emitted before unwinding checkpoints due to an epoch prune. Contains the epoch number and affected blocks. Subscribers (e.g., world-state) use this to prepare for the unwind.
- `L2PruneUncheckpointed`: Emitted when provisional blocks are pruned due to a checkpoint mismatch or slot expiration. Contains the slot number and affected blocks.
- `L2BlockProven`: Emitted when the proven checkpoint advances. Contains the block number, slot, and epoch.
- `InvalidAttestationsCheckpointDetected`: Emitted when a checkpoint with invalid attestations is encountered during sync.

Note that most subsystems handle these events not by subscribing but by polling the archiver using an `L2BlockStream`. This means that, if the node stops while a subsystem has not yet processed an event, the block stream will detect the gap on restart and have the subsystem reprocess it.
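A minimal sketch of the two consumption styles, with hypothetical event payloads and a simplified stand-in for `L2BlockStream`-style polling:

```ts
import { EventEmitter } from 'node:events';

// Hypothetical payload shape; the real event payloads live in the Aztec codebase.
interface PruneUnprovenEvent {
  epochNumber: number;
  blockNumbers: number[];
}

const archiverEvents = new EventEmitter();

// Style 1: direct subscription. An event missed while the node is down is gone.
archiverEvents.on('L2PruneUnproven', (ev: PruneUnprovenEvent) => {
  console.log(`preparing for unwind of epoch ${ev.epochNumber} (${ev.blockNumbers.length} blocks)`);
});

// Style 2: polling via a block stream. The stream persists its last-seen
// position, so after a restart it detects any gap against the archiver and
// replays it, letting the subsystem reprocess events it never handled.
async function pollOnce(lastSeen: number, getTip: () => Promise<number>): Promise<number> {
  const tip = await getTip();
  for (let b = lastSeen + 1; b <= tip; b++) {
    // handle block b here (including noticing unwinds/prunes)
  }
  return tip;
}
```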
The archiver runs a periodic sync loop with two phases:
```
sync()
├── processQueuedBlocks()                         # Handle blocks pushed via addBlock()
└── syncFromL1()
    ├── handleL1ToL2Messages()                    # Sync messages from Inbox contract
    ├── handleCheckpoints()                       # Sync checkpoints from Rollup contract
    ├── pruneUncheckpointedBlocks()               # Prune provisional blocks from expired slots
    ├── handleEpochPrune()                        # Proactive unwind before proof window expires
    └── checkForNewCheckpointsBeforeL1SyncPoint() # Handle L1 reorg edge case
```
Each sync iteration pins the current L1 block number at the start and uses it as an upper bound for all queries. This ensures consistent data retrieval even if L1 advances during the iteration.
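A sketch of that pinning pattern, assuming a viem-style L1 client; the handler names mirror the tree above, but their bodies are elided:

```ts
import { createPublicClient, http } from 'viem';

const l1 = createPublicClient({ transport: http('http://localhost:8545') });

// One sync iteration: pin the L1 tip once and use it as the upper bound everywhere.
async function syncIteration(): Promise<void> {
  const currentL1Block = await l1.getBlockNumber(); // pinned for this iteration

  // Even if L1 advances while these run, both phases see the same upper bound.
  await handleL1ToL2Messages(currentL1Block);
  await handleCheckpoints(currentL1Block);
}

async function handleL1ToL2Messages(toBlock: bigint): Promise<void> {
  // e.g. getLogs({ fromBlock: messagesSynchedTo, toBlock })
}

async function handleCheckpoints(toBlock: bigint): Promise<void> {
  // e.g. getLogs({ fromBlock: blocksSynchedTo, toBlock })
}
```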
Two independent syncpoints track progress on L1:
- `blocksSynchedTo`: L1 block number for checkpoint events
- `messagesSynchedTo`: L1 block ID (number + hash) for messages

Messages are synced from the Inbox contract via `handleL1ToL2Messages()`:
1. Fetch `MessageSent` events in batches and store them
2. Detect and handle L1 reorgs via rolling hashes (see below)

Checkpoints are synced from the Rollup contract via `handleCheckpoints()`:

1. Fetch `CheckpointProposed` events in batches
2. Validate that each checkpoint's `inHash` matches the expected value (see below)
3. Detect and handle L1 reorgs by comparing local archive roots against L1 (see below)

The `inHash` is a hash of all L1-to-L2 messages consumed by a checkpoint. The archiver computes the expected `inHash` from locally stored messages and compares it against the checkpoint header. A mismatch indicates a bug (messages out of sync with checkpoints) and causes a fatal error.
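For illustration, a sketch of the check, assuming a flat SHA-256 over the stored message hashes; the protocol's actual `inHash` construction may differ:

```ts
import { createHash } from 'node:crypto';

// Illustrative only: the real inHash may be e.g. a Merkle root rather
// than a flat hash over the message hashes.
function computeExpectedInHash(messageHashes: Buffer[]): Buffer {
  const h = createHash('sha256');
  for (const m of messageHashes) h.update(m);
  return h.digest();
}

function checkInHash(localMessageHashes: Buffer[], headerInHash: Buffer): void {
  const expected = computeExpectedInHash(localMessageHashes);
  if (!expected.equals(headerInHash)) {
    // Messages and checkpoints can only disagree if there is a bug, so fail hard.
    throw new Error(
      `Fatal: inHash mismatch (expected ${expected.toString('hex')}, got ${headerInHash.toString('hex')})`,
    );
  }
}
```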
The `blocksSynchedTo` syncpoint is updated when checkpoints are processed: in normal operation it is set to the L1 block of the last stored checkpoint, and it is advanced past invalid checkpoints that are skipped (see below).

Note that the `blocksSynchedTo` pointer is NOT updated during normal sync when there are no new checkpoints. This protects against small L1 reorgs that could add a checkpoint in an L1 block we have flagged as already synced.
The archiver implements `L2BlockSink`, allowing other subsystems to push blocks before they appear on L1:
```ts
archiver.addBlock(block); // Queues block for processing
```
Queued blocks are processed at the start of each sync iteration. This allows the sequencer to make blocks available locally before checkpoint publication, and is used to make the proposed chain (i.e., blocks that have been broadcast via p2p but not yet checkpointed on L1) available to consumers.
Blocks added via `addBlock()` are considered "provisional" until they appear in an L1 checkpoint. These provisional blocks may need to be reconciled when an incoming checkpoint contains different blocks, or when their slot ends without them being checkpointed.
When `handleCheckpoints()` processes incoming checkpoints, it compares archive roots of local blocks against the checkpoint's blocks. If they differ, local blocks are pruned and replaced with the checkpoint's blocks. After checkpoint sync, `pruneUncheckpointedBlocks()` removes any remaining provisional blocks from slots that have ended. Both cases emit `L2PruneUncheckpointed`.
When querying the archiver, be aware of the distinction between proposed and checkpointed blocks:
- `getBlockHeader('latest')` / `getBlockNumber()`: Returns the latest block, including proposed blocks
- `getCheckpointedL2BlockNumber()`: Returns only the count of checkpointed blocks (synced from L1)

Use checkpointed queries when the result must reflect L1 state (e.g., determining whether an epoch is complete for proving). Use 'latest' when you need the most recent block regardless of L1 confirmation (e.g., serving RPC queries to users).
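A short sketch of the distinction, with the interface shape assumed:

```ts
// Assumed interface shape for the sketch.
interface ArchiverQueries {
  getBlockNumber(): Promise<number>; // latest, includes proposed blocks
  getCheckpointedL2BlockNumber(): Promise<number>; // checkpointed on L1 only
}

// Proving must reflect L1 state: use the checkpointed count.
async function isEpochComplete(archiver: ArchiverQueries, epochEndBlock: number): Promise<boolean> {
  return (await archiver.getCheckpointedL2BlockNumber()) >= epochEndBlock;
}

// RPC queries want the freshest view, proposed blocks included.
async function latestBlockNumber(archiver: ArchiverQueries): Promise<number> {
  return archiver.getBlockNumber();
}
```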
Both message and checkpoint sync detect L1 reorgs by comparing local state against L1. When a reorg is detected, they find the last common ancestor and roll back.
**Messages:** Each stored message includes its rolling hash. During sync, if the local last message's rolling hash doesn't match L1, the archiver walks backwards through local messages, querying L1 for each one, until it finds a message with a matching rolling hash. Everything after that message is deleted, and the syncpoint is rolled back.
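A sketch of that walk-back; the storage layout and the L1 accessor are assumptions:

```ts
// Assumed local storage shape for the sketch.
interface StoredMessage {
  index: number;
  rollingHash: string;
}

async function findLastCommonMessage(
  local: StoredMessage[], // ordered oldest -> newest
  rollingHashOnL1: (index: number) => Promise<string>, // e.g. a contract view call
): Promise<number> {
  // Walk backwards until a locally stored rolling hash matches L1.
  for (let i = local.length - 1; i >= 0; i--) {
    if ((await rollingHashOnL1(local[i].index)) === local[i].rollingHash) {
      return i; // everything after index i is deleted, syncpoint rolled back
    }
  }
  return -1; // no common message: delete everything
}
```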
**Checkpoints:** When the archiver queries the Rollup contract for the archive root at the local pending checkpoint number and it doesn't match the local archive root, the local checkpoint is no longer in L1's chain. The archiver walks backwards through local checkpoints, querying `archiveAt()` for each, until it finds one that matches. All checkpoints after that are unwound.
Example: The archiver has synced up to checkpoint 15. An L1 reorg replaces checkpoints 14 and 15 with different content. On the next sync:
1. The archiver queries `archiveAt(15)` and gets a different archive root than stored locally
2. It queries `archiveAt(14)` — still different
3. It queries `archiveAt(13)` — it matches
4. Checkpoints 14 and 15 are unwound and re-synced from L1
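A sketch of the same walk-back for checkpoints, assuming a simple binding for the Rollup contract's `archiveAt()` view:

```ts
// Assumed local storage shape for the sketch.
interface StoredCheckpoint {
  number: number;
  archiveRoot: string;
}

async function findLastCommonCheckpoint(
  local: StoredCheckpoint[], // ordered oldest -> newest
  archiveAt: (n: number) => Promise<string>, // Rollup contract view
): Promise<number> {
  for (let i = local.length - 1; i >= 0; i--) {
    if ((await archiveAt(local[i].number)) === local[i].archiveRoot) {
      return local[i].number; // unwind every checkpoint after this one
    }
  }
  return 0; // nothing matches: unwind everything
}
```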
If a prune would occur on the next checkpoint submission (checked via `canPruneAtTime`), the archiver preemptively unwinds to the proven checkpoint.

This handles the case where an epoch's proof submission window has passed without a valid proof being submitted. The Rollup contract will prune all unproven checkpoints on the next submission. Rather than wait for that to happen and then react, the archiver detects this condition and unwinds proactively. This keeps the local state consistent with what L1 will look like after the next checkpoint. It also means that the sequencer or validator client will build the next block with the unwind having already taken place.
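Sketched below with the dependencies passed in explicitly; apart from `canPruneAtTime`, the names are assumptions:

```ts
// Dependencies injected for the sketch; only canPruneAtTime is named above.
interface EpochPruneDeps {
  canPruneAtTime(timestamp: bigint): Promise<boolean>; // Rollup contract view
  nextL1BlockTimestamp(): Promise<bigint>;
  provenCheckpointNumber(): Promise<number>;
  emitPruneUnproven(epoch: number): void;
  unwindTo(checkpoint: number): Promise<void>;
}

async function handleEpochPrune(deps: EpochPruneDeps, epoch: number): Promise<void> {
  // Would the Rollup contract prune on the next checkpoint submission?
  if (await deps.canPruneAtTime(await deps.nextL1BlockTimestamp())) {
    deps.emitPruneUnproven(epoch); // emitted before unwinding, per the events above
    await deps.unwindTo(await deps.provenCheckpointNumber());
  }
}
```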
Example: The proven checkpoint is 10, and pending checkpoints 11-15 exist locally. The proof submission window for the epoch containing checkpoint 11 is about to expire. On sync:
1. The archiver calls `canPruneAtTime()` with the next L1 block's timestamp — it returns true
2. It emits an `L2PruneUnproven` event so subscribed subsystems can react
3. It unwinds checkpoints 11-15, back to proven checkpoint 10

If, after processing all logs, the local checkpoint count is less than L1's pending checkpoint count, an L1 reorg may have added checkpoints behind the syncpoint. The archiver rolls back the syncpoint to re-fetch.
This handles a subtle L1 reorg scenario: the reorg doesn't replace existing checkpoints but adds new ones in blocks the archiver has already processed. Since the archiver only queries events starting from `blocksSynchedTo`, it would miss these new checkpoints.
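A sketch of the check; all names here are assumptions:

```ts
// Sketch of the post-sync edge-case check (names assumed).
function checkForNewCheckpointsBeforeL1SyncPoint(
  localPendingCount: number,
  l1PendingCount: number,
  lastStoredCheckpointL1Block: bigint,
  rollbackSyncpointTo: (l1Block: bigint) => void,
): void {
  if (localPendingCount < l1PendingCount) {
    // A reorg added checkpoints in L1 blocks behind blocksSynchedTo.
    // Roll back so the next iteration re-fetches the missed events.
    rollbackSyncpointTo(lastStoredCheckpointL1Block);
  }
}
```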
> [!NOTE]
> This scenario only occurs when `blocksSynchedTo` was advanced without storing a checkpoint — specifically when the chain is empty or when an invalid checkpoint was processed (and skipped). In normal operation, `blocksSynchedTo` is set to the L1 block of the last stored checkpoint, so any L1 reorg that adds checkpoints would also change the archive root and be caught by the L1 reorgs check (step 3 in checkpoint sync).
Example: The archiver has synced to L1 block 1000 with checkpoint 10. An L1 reorg at block 950 adds a new checkpoint 11 in block 960 (which was previously empty). On sync:
1. The archiver sees that the local checkpoint count (10) is less than L1's pending checkpoint count (11)
2. It rolls back `blocksSynchedTo` to the L1 block of checkpoint 10
3. The next iteration re-fetches events from that point and picks up checkpoint 11

When the archiver encounters a checkpoint with invalid attestations, it skips it and continues processing subsequent checkpoints. It also advances `blocksSynchedTo` past the invalid checkpoint to avoid re-downloading it on every iteration.
This handles delayed attestation verification: the Rollup contract no longer validates committee attestations on L1. Instead, attestations are posted in calldata and L2 nodes verify them during sync. Checkpoints with invalid attestations (insufficient signatures, wrong signers, or invalid signatures) are skipped. An honest proposer will eventually call `invalidate` on the Rollup contract to remove these checkpoints.
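A sketch of the skip-and-advance behavior, with all names assumed:

```ts
// All names assumed; sketch of skip-and-advance over proposed checkpoints.
interface ProposedCheckpoint {
  number: number;
  l1BlockNumber: bigint;
}

async function processCheckpointLogs(
  checkpoints: ProposedCheckpoint[],
  verifyAttestations: (cp: ProposedCheckpoint) => Promise<boolean>,
  store: (cp: ProposedCheckpoint) => Promise<void>,
  advanceSyncpointTo: (l1Block: bigint) => void,
): Promise<void> {
  for (const cp of checkpoints) {
    if (await verifyAttestations(cp)) {
      await store(cp);
    }
    // Advance past the checkpoint either way, so an invalid one is not
    // re-downloaded on every sync iteration.
    advanceSyncpointTo(cp.l1BlockNumber);
  }
}
```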
The archiver exposes `pendingChainValidationStatus` for the sequencer to know if there's an invalid checkpoint that needs purging before posting a new one. If invalid, this status contains the data needed for the `invalidate` call. When multiple consecutive invalid checkpoints exist, the status references the earliest one (invalidating it automatically purges descendants).
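For illustration, a sketch of how a sequencer might consume this status; the status shape here is an assumption:

```ts
// Assumed shape for the sketch; the real status type lives in the Aztec codebase.
type PendingChainValidationStatus =
  | { valid: true }
  | { valid: false; invalidCheckpointNumber: number; invalidateArgs: unknown };

async function maybeInvalidate(
  status: PendingChainValidationStatus,
  invalidate: (args: unknown) => Promise<void>, // Rollup contract call
): Promise<void> {
  if (!status.valid) {
    // Invalidating the earliest invalid checkpoint also purges its descendants.
    await invalidate(status.invalidateArgs);
  }
}
```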
> [!WARNING]
> If a malicious committee attests to a descendant of an invalid checkpoint, nodes should ignore these descendants unless proven. This is not yet implemented — nodes assume an honest committee majority.
Example: The chain has progressed to checkpoint 10. Then:

1. Checkpoint 11 is posted with invalid attestations
2. The archiver skips it and advances `blocksSynchedTo` past it
3. `pendingChainValidationStatus` points to checkpoint 11, giving the sequencer the data it needs for the `invalidate` call