Request for Comment: A Canary in the Circuits

Thanks to @Maddiaa, Lin and Artem for their helpful discussions and reviews. *Reviews ≠ endorsements.*

Objective: Inspired by the recent State Migration RFP, we want to improve the security of L1 assets portalled to L2 against “unforeseeable bugs” in the ZK/AVM layer. This is achieved by enforcing a Trusted Sequencer (TS) signature check on L2 asset actions, verifiable on L1, before any asset state updates are finalized. Recovering L2 portal assets on L1 does not depend on the TS. We call this a Double-Proofed Token Portal for portalling tokens to, within, and from Aztec.

Caveat: This initial solution trades off privacy compared to L2 tokens that allow completely private transfers: any use of our double-proofed token on L2 reveals that the token is being used. The increased token security that the solution provides could be seen as an acceptable trade-off. Importantly, use of such a double-proofed token portal would be completely opt-in, requiring no Aztec protocol changes or Foundation involvement, and fully privacy-preserving portals will still exist.


Motivation: Soundness Bugs in a Standalone Proof System

For protocol designers on Aztec, one of the most likely sources of risk is the private execution environment (PXE). The PXE allows Aztec users to produce a set of input nullifiers, a set of output notes, and a proof string, and to convince the public Aztec Virtual Machine (AVM), which manages global state changes, that “these random-looking byte strings are valid state updates because the attached proof says so; add the state updates to the global state tree”.

If everything is working as expected, the PXE ensures that at least the following “PXE checks” hold:

  1. The input nullifiers represent actual notes.

  2. The notes are being spent by the person authorised to spend them.

  3. The output notes represent correctly constructed notes corresponding to the input notes.

  4. All contract functions that the user needed to call to create those notes were correctly called.

The AVM doesn’t necessarily get to see whether any notes were actually consumed, which notes were consumed, who the authorised spenders of those notes were, which functions the user claims to have called, or whether the output notes were correctly generated according to those functions.

If something doesn’t work as expected, it may be possible to generate a proof that validates correctly even if one or more of these checks failed, or were not carried out — a soundness bug.
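To make the trust boundary concrete, here is a minimal Python sketch (names and structure are illustrative, not the actual AVM interface) of the AVM’s view of a private transaction: opaque byte strings plus a proof, with the PXE checks above only implied by proof soundness.

```python
# Sketch (illustrative names): the AVM's view of a private transaction.
# It sees opaque byte strings plus a proof; it cannot re-derive the PXE checks.
from dataclasses import dataclass

@dataclass
class PrivateTx:
    nullifiers: list[bytes]   # opaque: AVM cannot tell which notes these consume
    new_notes: list[bytes]    # opaque: AVM cannot check their construction
    proof: bytes              # the "trust me" string produced by the PXE

def avm_accepts(tx: PrivateTx, verify_proof) -> bool:
    # The AVM's only check: does the proof verify?
    # If the proof system is sound, this implies PXE checks 1-4 held.
    # A soundness bug makes this return True without those checks.
    return verify_proof(tx.proof, tx.nullifiers, tx.new_notes)
```

The point of the sketch is that `avm_accepts` is the entire gate: nothing in it inspects the nullifiers or notes themselves.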

Double-Proving State Transitions as a Defense

Vitalik has advocated for multi-proving for a while (here and here at least). The reason is simple: proof systems (like all systems) break. Unlike other systems, a broken proof system can be devastating and hard or impossible to observe before exploitation. During the early stages of a ZK rollup, we likely need to secure assets with a second system to protect against over-reliance on a single proof system.


Double-Proof Token Portal

1. Core Components

  • Trusted Sequencer (TS): Verifies and signs L2 Token Portal transactions.

    • The protocol’s security rests on the observation that even an arbitrarily buggy AVM/ZK system cannot forge a signature for a key it does not possess. Since the TS private key exists exclusively outside the L2 environment, a ZK soundness bug can produce invalid proofs but cannot produce a valid TS signature for a state transition the TS never endorsed. Conversely, a compromised TS can sign invalid transitions, but the AVM/validators will reject them during L2 execution, preventing them from appearing in ZK-proven blocks.
  • Second Proof Requirement for Users: In addition to the normal PXE proof, users of the L2 Token Portal are required to generate a second, invariant-check ZK proof, which proves to the TS that the user owns the notes they are spending, that the nullifiers are correctly consumed, that new notes are correctly constructed, that the sum of input note values equals the sum of output note values, and that any other contract calls were made correctly. This invariant-check proof is verified by the TS using a standalone verifier, and is required for signing. We discuss invariant proofs further in the “TS Invariant Checks & Privacy Preservation” section.

    • Note: It is possible to attach these invariant check proofs to L2 portal transactions and require that they are verified on L1 when L1 portal withdrawals are triggered to avoid over-reliance on sequencer trust. This comes at further increased complexity and cost, but may be deemed necessary by the most risk-averse token portals. For the remainder of the document, we do not consider re-verification of invariant check proofs on-chain.

    • Note-Note: We consider deploying something quite similar to an invariant check verifier to the L1 portal in the Appendix - Phase 2.

  • L1 Rollup Contract: Verifies ZK proofs of AVM execution and stores L2→L1 messages (Roots + Signatures) as available but not consumed data. It does not execute them.

  • L1 Portal Contract: The sovereign asset custodian. It holds the funds and stores the LastFinalizedRoot. It only updates when explicitly triggered (Lazy git!).

2. Data Structure: The Checkpoint Message

Every L2 Portal update (transfer, swap, or withdrawal) generates an L2→L1 Message containing a state update commitment H(NewL2PortalStateRoot, ActionData, SequencerSignature), where:

  1. NewL2PortalStateRoot: The Merkle Root of the L2 Portal’s Note/Nullifier tree after the transaction.

  2. ActionData: Specifics of the action (e.g., “Withdraw 10 ETH to User A” or “Internal Transfer”).

  3. SequencerSignature: Sig_{TS} = Sign(H(NewL2PortalStateRoot, ActionData)).
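The commitment structure above can be sketched as follows (a rough Python sketch; SHA-256 stands in for the protocol hash, the byte encoding is illustrative, and `sign` is a stub for the TS signing key):

```python
# Sketch of the checkpoint message commitment. SHA-256 stands in for the
# protocol hash; sign() is a stub for the TS signature scheme.
import hashlib

def H(*parts: bytes) -> bytes:
    return hashlib.sha256(b"".join(parts)).digest()

def checkpoint_message(new_root: bytes, action_data: bytes, sign) -> bytes:
    # The TS signs the (root, action) pair...
    sig = sign(H(new_root, action_data))
    # ...and the L2->L1 message commits to root, action, and signature together.
    return H(new_root, action_data, sig)
```

Note that the signature covers only `(NewL2PortalStateRoot, ActionData)`, while the message commits to all three fields, which is what lets L1 later re-check the signature from the opened message.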


3. Normal Operation (Happy Path)

Phase A: L2 Execution & Storage

  1. L2 Execution: The TS signs valid L2 portal transactions before they are submitted to L2. Validators/AVM execute L2 portal transactions (including L2-only transfers), validating the TS signature. L2 portal transactions emit L2→L1 state update commitment messages.

  2. L2 Proving: Execution of the L2 state transition is proven.

  3. L1 Submission: L2 blocks, their associated proofs, and the L2→L1 Messages are submitted to the L1 Rollup Contract.

  4. L1 Validation: The Rollup Contract verifies the ZK proof. If valid, the block is finalized, and messages are stored in the L1 rollup contract (but not pushed to the Portal).

Phase B: Withdrawal (On-Demand Push from Users)

  1. Initiation: A user (or relayer) calls processMessage(Message, NewL2PortalStateRoot, ActionData, SequencerSignature, BlockProof) on the L1 Portal Contract.

  2. Verification (The Gatekeeper):

    • Inclusion Check: Verifies the Message exists in a valid block inside the L1 Rollup Contract, and the message sender (information included with all L2→L1 messages) is the L2PortalContract.

    • Message Construction Check: Verifies Message == H(NewL2PortalStateRoot, ActionData, SequencerSignature).

    • Authority Check: Verifies ecrecover(H(NewL2PortalStateRoot, ActionData), SequencerSignature) == SEQUENCER_ADDRESS.

  3. Execution:

    • If checks pass, the portal executes any withdrawals in ActionData.

    • Updates LastFinalizedRoot = NewL2PortalStateRoot.
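The Phase B gatekeeper logic above can be sketched in a few lines (a rough Python sketch, not the actual portal interface: `recover` stands in for ecrecover, SHA-256 stands in for the protocol hash, and withdrawal execution is elided):

```python
# Minimal sketch of the Phase B gatekeeper checks. SHA-256 stands in for the
# protocol hash; recover() stands in for ecrecover-style signature recovery.
import hashlib

def H(*parts: bytes) -> bytes:
    return hashlib.sha256(b"".join(parts)).digest()

class Portal:
    def __init__(self, sequencer_address, rollup_messages, recover):
        self.seq = sequencer_address      # SEQUENCER_ADDRESS
        self.rollup = rollup_messages     # messages stored by the rollup contract
        self.recover = recover            # signature recovery function
        self.last_finalized_root = None

    def process_message(self, message, new_root, action_data, sig):
        # Inclusion check: message was stored by a finalized, ZK-proven block.
        assert message in self.rollup, "not in rollup history"
        # Construction check: message commits to exactly these opened fields.
        assert message == H(new_root, action_data, sig), "bad commitment"
        # Authority check: the TS signed this (root, action) pair.
        assert self.recover(H(new_root, action_data), sig) == self.seq, "bad signer"
        # Execution: perform withdrawals in action_data (elided), then update root.
        self.last_finalized_root = new_root
```

All three checks must pass before any funds move, which is what makes the portal's view of the L2 state double-proofed.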


4. Failure Handling & Emergency Mode

Scenario: Detection of Invalid State / Bug

  • Trigger: The TS or a Watchtower detects a bug in the AVM (e.g., an invalid transition was proven valid by ZK).

  • Action: The detector calls initiateHalt() on the L1 Portal.

Phase C: The Fraud Window (Resolution)

  1. State Freeze: The Portal enters “Resolution Mode”. No normal withdrawals are allowed.

  2. Contest Period: A time window T (e.g., 7 days) opens.

  3. Root Race:

    • Anyone can submit a candidateRoot by pointing to a valid, stored L2→L1 message from the L1 Rollup history.

    • Criteria: The message must have a valid ZK inclusion proof AND a valid TS Signature.

    • Selection: The Portal tracks the latest valid candidateRoot submitted during window T.
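As a rough Python sketch of the root race (field names and the inclusion/signature predicates are assumptions, not the actual contract interface):

```python
# Sketch of the Phase C root race: track the latest valid candidate root
# submitted during the contest window T.
class ContestWindow:
    def __init__(self, ends_at, is_included, is_signed):
        self.ends_at = ends_at
        self.is_included = is_included   # valid ZK inclusion proof?
        self.is_signed = is_signed       # valid TS signature?
        self.winning = None              # (l2_block_number, root)

    def submit(self, now, l2_block_number, root, inclusion_proof, sig):
        assert now < self.ends_at, "contest period over"
        assert self.is_included(root, inclusion_proof), "no valid inclusion proof"
        assert self.is_signed(root, sig), "no valid TS signature"
        # Keep the latest (highest) valid root seen so far.
        if self.winning is None or l2_block_number > self.winning[0]:
            self.winning = (l2_block_number, root)
```

Because both an inclusion proof and a TS signature are required, a candidate forged through either a ZK bug or a compromised TS alone cannot win the race.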

Phase D: Ossification & Recovery

  1. Ossification: After time T, the “Winning Root” becomes the permanent Ossified Root.

  2. L2 Abandonment: The failed L2 instance is considered “dead” from the perspective of the L1 portal. No further messages from the Rollup Contract are accepted.

  3. Direct Withdrawal (this can be replaced by a recovery mechanism on a new rollup instance):

    • Users submit Merkle Proofs directly to the L1 Portal showing they own unspent Notes inside the Ossified Root — this might also require uploading the entire L2 nullifier tree corresponding to the ossified root, to avoid double spends.

    • The Portal verifies the Merkle path and releases funds directly on L1.

    • An efficient recovery/migration mechanism is out of the scope of this document. Our core focus is allowing the L1 Portal to maintain a reliable and valid view of the L2 Portal state.
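The membership check in the direct-withdrawal step has the shape of a standard binary Merkle proof, sketched below (SHA-256 stands in for the protocol hash, and the real tree is an Indexed Merkle Tree, so this shows only the shape of the check, not the exact algorithm):

```python
# Sketch of the direct-withdrawal membership check: a binary Merkle proof
# verified against the Ossified Root. SHA-256 stands in for the protocol hash.
import hashlib

def node(left: bytes, right: bytes) -> bytes:
    return hashlib.sha256(left + right).digest()

def verify_membership(leaf: bytes, index: int, siblings: list[bytes], root: bytes) -> bool:
    h = leaf
    for sib in siblings:
        # At each level, the index's low bit says whether we are the
        # left or the right child.
        h = node(h, sib) if index % 2 == 0 else node(sib, h)
        index //= 2
    return h == root
```

In the real flow this check alone is not enough: the portal must also rule out double spends against the nullifier set corresponding to the Ossified Root, as noted above.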

Phase EH (Escape Hatch): Rejection of updates produced during the EH

During the EH, we lose the validator re-execution protection of normal sequencing. This means that a malicious portal sequencer who identifies a ZK bug could technically buy an EH slot, push an invalid state update, sign that state update, prove it, and potentially rug all portal funds on L1.

To protect against this, the L1 portal contract must be aware of any epochs that are allocated to an EH proposer and reject L2 portal updates produced during the EH. This awareness is possible as EH proposers are tracked in the L1 rollup contract.

More than this, any L2 portal updates produced during an EH that are presented to the L1 portal should trigger Ossification and Recovery, as this is evidence of misbehaviour somewhere in the protocol.
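The EH guard described above is a simple check, sketched here (names are illustrative; the set of EH epochs is assumed to be readable from the L1 rollup contract, as the text notes):

```python
# Sketch: the L1 portal rejects portal updates produced in escape-hatch
# epochs, and treats any such presentation as evidence of misbehaviour.
class EhGuard:
    def __init__(self, eh_epochs: set[int]):
        self.eh_epochs = eh_epochs   # EH proposer epochs, read from the rollup contract
        self.ossifying = False

    def check_update(self, epoch: int) -> bool:
        if epoch in self.eh_epochs:
            # An EH-era portal update should never reach us: escalate to
            # Ossification and Recovery rather than merely rejecting.
            self.ossifying = True
            return False
        return True
```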

Scenario: Trusted Sequencer starts censoring

Phase L(iveness):

  • The lazy way: Users wait until ossification, and then initiate withdrawals from L1. This could mean months of waiting before withdrawals, but avoids any further complexity.

  • The robust way: To explain this properly requires careful treatment, which we postpone to the Appendix.


Security Analysis TLDR

| Threat | Defense Mechanism |
| --- | --- |
| Malicious User / Prover | L1 Rollup: ZK proof rejects invalid transactions. L1 Portal: inclusion check ensures the message is from a valid block. |
| AVM / Circuit Bug | Sequencer veto: the TS will not sign the invalid state update. The L1 Portal rejects the message due to a missing/invalid SequencerSignature. |
| Sequencer Corruption | L2 constraints: the AVM/validators will not prove a block where the TS tries to steal funds (invalid state transition). |
| Total System Failure | Ground control to Major Tom, your circuit’s dead, there’s something wrong… |

TS Invariant Checks & Privacy Preservation

To prevent the Trusted Sequencer (TS) from becoming a custodian of user private keys, the protocol must mandate a “Verify, Don’t Trust” approach for authorization. The TS does not sign transactions based on raw private key access; instead, it verifies a lightweight, client-side ZK proof of authorization.

1. Shielded Authorization

Instead of passing their spending key (nsk_m) to the TS, the user generates a standalone ZK proof (separate from the main PXE proof) that asserts authority over the notes. The TS validates this proof before signing the L2→L1 message.

  • Inputs Revealed to TS: Note Owner, Storage Slot, Note Content, Randomness. (Note: These reveal the existence of a note to the TS, but do not grant spending power).

  • Inputs Kept Private: The User’s Secret Key (nsk_m).

2. A List of TS Invariants

For the L1<>L2 portal design, the TS must enforce the following invariants on every transition it signs. If any fail, the TS should reject the signature request.

| Invariant | Description | Verification Method (Without Keys) |
| --- | --- | --- |
| Value Conservation | \sum InputAmounts == \sum OutputAmounts | Check plaintext amounts provided in the request. |
| Authorization | Sender actually owns the input notes. | Verify ZK proof: npk_m == nsk_m * G (public key corresponds to the private key witness). |
| Nullifier Integrity | Nullifiers are unique and derived deterministically. | Verify ZK proof: nullifier == poseidon(note_hash, app_key, ...). |
| Note Validity | Output notes are constructed correctly. | Check that output commitments match the claimed amounts/owners. |
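A rough Python sketch of the TS-side invariant checks (the two proof-verification callables are stubs standing in for the standalone invariant-proof verifier, and the request layout is an assumption):

```python
# Sketch of the TS invariant checks. verify_auth_proof and
# verify_nullifier_proof stand in for the standalone ZK verifier, so the
# user's secret key never reaches the TS.
def check_invariants(inputs, outputs, verify_auth_proof, verify_nullifier_proof) -> bool:
    # Value conservation: plaintext amounts must balance exactly.
    if sum(n["amount"] for n in inputs) != sum(n["amount"] for n in outputs):
        return False
    for n in inputs:
        # Authorization: proves npk_m == nsk_m * G without revealing nsk_m.
        if not verify_auth_proof(n):
            return False
        # Nullifier integrity: proves nullifier == poseidon(note_hash, app_key, ...).
        if not verify_nullifier_proof(n):
            return False
    # Nullifier uniqueness within the request.
    nulls = [n["nullifier"] for n in inputs]
    return len(nulls) == len(set(nulls))
```

Only when all invariants pass does the TS produce the signature that the L1 portal later requires.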

Why L2→L1 Messages Are Mandatory

You might ask: Why can’t the L1 Portal just read the L2 State Root directly like an Optimistic Rollup? Why do we need explicit messages for every L2 portal update?

By storing Portal updates as L2→L1 messages, the design above creates a paper trail of signed L2 portal updates that an L1 smart contract can easily interact with.

If instead we needed to read the L2 Portal contract’s state at a given checkpointed state root and recover it, this would be non-trivial. This is a consequence of Aztec’s Indexed Merkle Trees. Every time a contract is updated, its state may end up somewhere else in the indexed tree, and must be recovered using the indexing key (AFAIU). From checkpoint to checkpoint, a DEX’s storage slot within the L2 state tree might have a completely different index, which makes permissionlessly tracking its state based on L2 state roots alone impractical.

Future Work and Open Questions

  • Make this protocol partial-note compatible. It might already be, but this needs further analysis.

  • Is the additional complexity of standalone invariant proofs acceptable for users and developers? Users would need to generate PXE proofs in parallel to the invariant proofs, which consist of non-PXE ZK proofs (needed for authorisation while keeping user keys secure) passed to the TEE/sequencer for attestation. Although we do expect security conscious users to understand and potentially accept this additional complexity, less technical users may forego the additional security for a simpler user experience. We must gather feedback on whether or not the proposed invariant checker is an acceptable add-on.

  • Migration Path: If there is demand for such a double-proof portal, sunsetting the portal and moving to complete PXE dependency will be a great problem to have. The most likely path here is a governance decision to remove the TS signature requirement, but with governance overrides blocked for the first 6, 12, or 18 months, for as long as might be necessary.

Appendix

Handling Trusted Sequencer Censorship

Forced Inclusion Protocol

If the TS censors a user’s L2 portal transactions, the user can force an exit via L1 without TS cooperation. The mechanism works in two phases: first attempt to force the TS to act, then fall back to L1 withdrawal if the TS fails.

Timing Parameters

| Parameter | Description | Suggested Value |
| --- | --- | --- |
| D | Deadline for the TS to process the forced inclusion on L2 and push the resulting L2→L1 message to L1 | 72 hours |
| T | Ossification contest period (from the main protocol) | 7 days |

Phase 1: Forced Nullifier Block

  1. User submits a Forced Inclusion Request (FIR) to the L1 Portal containing a nullifier they wish to block. A bond is required to prevent spam.

  2. The FIR creates an L1→L2 message containing the nullifier.

  3. If the TS is cooperative, it consumes the L1→L2 message on L2. The L2 Portal contract checks:

    • Nullifier already in the nullifier set? → Emit L2→L1 message: ("already_spent", nullifier). FIR is resolved. Bond is forfeited (user filed a FIR for a spent note).

    • Nullifier not in the set? → Add nullifier to the set. Emit L2→L1 message: ("blocked", nullifier, NewL2PortalStateRoot). The note is now frozen on L2.

  4. The TS must push this L2→L1 message from step 3 to the L1 Portal before deadline D.

  5. If deadline D expires with no response: The L1 Portal enters Ossification and Recovery. The TS has demonstrably failed.
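The deadline handling in the FIR flow can be sketched as a small state machine (a rough Python sketch; bond accounting and message formats are elided, and the names are assumptions):

```python
# Sketch of the Phase 1 forced-inclusion flow as seen by the L1 portal:
# either the TS responds before deadline D, or the portal escalates to
# Ossification and Recovery.
class Fir:
    def __init__(self, nullifier: bytes, filed_at: float, deadline: float):
        self.nullifier = nullifier
        self.resolve_by = filed_at + deadline   # deadline D
        self.resolved = False
        self.ossify = False

    def on_response(self, status: str):
        # TS pushed the L2->L1 response ("already_spent" or "blocked") in time.
        assert status in ("already_spent", "blocked")
        self.resolved = True

    def on_tick(self, now: float):
        # Deadline expired with no response: the TS has demonstrably failed.
        if not self.resolved and now > self.resolve_by:
            self.ossify = True
```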

Phase 2: L1 Withdrawal Against Blocked Nullifier — Requires a custom proof verifier in the L1 portal contract

Once the L1 Portal receives a ("blocked", nullifier, NewL2PortalStateRoot) message, the user can withdraw by proving three things on L1:

  1. Note membership: A Merkle proof that a note X exists in the L2 Portal’s note tree, verified against the committed NewL2PortalStateRoot.

  2. Nullifier-note linkage: A ZK proof that the blocked nullifier is correctly derived from note X. This is necessary because nullifier derivation requires the user’s secret key (nullifier = poseidon(note_hash, nsk_app, ...)), which cannot be revealed. The circuit is small (one Poseidon hash + key derivation); verification costs ~200–300k gas via a dedicated on-chain verifier.

  3. Value extraction: The user opens the note preimage to reveal the withdrawal amount. The L1 contract verifies H(preimage) == leaf.

The L1 Portal verifies all three checks and releases the corresponding funds.
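The three checks can be sketched together as follows (a rough Python sketch: SHA-256 stands in for the protocol hash, the membership and linkage checks are passed in as already-evaluated results standing in for the Merkle and ZK verifiers, and the note preimage layout is an assumption):

```python
# Sketch of the Phase 2 withdrawal checks against a blocked nullifier.
# SHA-256 stands in for the protocol hash; membership_ok and linkage_ok
# stand in for the Merkle-proof check and the dedicated on-chain ZK verifier.
import hashlib

def H(*parts: bytes) -> bytes:
    return hashlib.sha256(b"".join(parts)).digest()

def verify_blocked_withdrawal(note_leaf: bytes, membership_ok: bool,
                              linkage_ok: bool, preimage: bytes) -> int:
    # 1. Note membership in the tree under NewL2PortalStateRoot.
    assert membership_ok, "note not in committed root"
    # 2. Nullifier-note linkage, proven in ZK (secret key stays private).
    assert linkage_ok, "nullifier not derived from this note"
    # 3. Value extraction: the opened preimage must hash to the leaf.
    assert H(preimage) == note_leaf, "preimage does not match leaf"
    # Amount read from the preimage; first 8 bytes here (an assumed layout).
    return int.from_bytes(preimage[:8], "big")
```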

Anti-Griefing

FIRs require a bond (e.g. 0.1 ETH).

  • Refunded if the TS fails to respond (deadline D expires) or if the nullifier is successfully blocked and withdrawal completes.

  • (Partially) Forfeited if the L2 Portal returns "already_spent" (the user filed a FIR for a note they already consumed). A partial forfeit might be necessary because the spend could have been initiated after the FIR by a griefing sequencer. However, given that transactions naturally expire on Aztec due to the need to point to a recent historical state root, using the FIR carries the risk that unexpired in-flight transactions could result in loss of the FIR bond.
