Request for Comments: Aztec Sequencer Selection and Prover Coordination Protocols


This document outlines a proposal for Aztec’s block production, integrating immutable smart contracts on Ethereum’s mainnet (L1) to establish Aztec as a Layer-2 Ethereum network. It aims to serve as the latest source of truth for the Fernet sequencer selection protocol, and to reflect the decision to implement the Sidecar prover coordination protocol.

Notably, this is written as a proposal within the context of the first instance or deployment of Aztec. Therefore, the definitions and protocol may evolve over time with each version or release. Once an initial implementation is available, the latest source of truth will be the GitHub codebase and/or the documentation website.

How it works

Sequencers can register permissionlessly via Aztec’s L1 contracts, entering a queue before becoming eligible for a random leader election (“Fernet”). Sequencers are free to leave, adhering to an exit queue or period. Roughly every 7-10 minutes (subject to reduction as proving and execution speeds stabilize and/or improve) sequencers create a random hash using RANDAO and their public keys. The highest-ranking hash determines block proposal eligibility. Selected sequencers either collaborate with third-party proving services or self-prove their block. They commit to a prover’s L1 address, which stakes an economic deposit. Failure to submit proofs on time results in deposit forfeiture. Once L1 contracts validate proofs and state transitions, the cycle repeats for subsequent block production (forever, and ever…).
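As a rough illustration of the ranking step above, here is a minimal sketch; the hash function, byte encodings, and function names are assumptions for illustration, not the protocol’s actual construction:

```python
import hashlib

def fernet_score(randao: bytes, pubkey: bytes) -> int:
    # Hypothetical score: hash the slot's RANDAO value together with a
    # sequencer's public key; higher scores rank higher.
    return int.from_bytes(hashlib.sha256(randao + pubkey).digest(), "big")

def elect_proposer(randao: bytes, pubkeys: list[bytes]) -> bytes:
    # The sequencer whose hash ranks highest is eligible to propose.
    return max(pubkeys, key=lambda pk: fernet_score(randao, pk))
```

Because every sequencer can compute every other sequencer’s score from public data, each participant can locally determine whether their own hash ranks highly before deciding to propose.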

Get involved

Please leave comments, suggested improvements, or questions in this forum by Friday, February 16, 2024. Contact us if you wish to schedule a meeting.


Full Nodes

Aztec full nodes are nodes that maintain a copy of the current state of the network. They fetch blocks from the data layer, then verify and apply them to their local view of the state. They also participate in the P2P network to disseminate transactions and their proofs. A full node can be connected to a Private Execution Environment (PXE), which can build transaction witnesses using the data provided by the node (e.g., data membership proofs).

Estimated hardware requirements

:desktop_computer:  Minimum   Recommended
CPU                 16 cores  32 cores
Network             32 Mb/s   128 Mb/s
Storage             3 TB      5 TB
RAM                 32 GB     64 GB

These hardware requirements are current best guesses and subject to change.


Sequencers

Aztec sequencers are full nodes that propose blocks, execute public functions, and choose provers within the Aztec Network. They are the actors coordinating state transitions and proof production, helping the rollup progress its state forward. Aztec is currently planning to implement a protocol called Fernet (Fair Election Randomized Natively on Ethereum trustlessly), which is permissionless: anyone can participate. Additionally, sequencers play a role in Aztec governance, determining how to manage protocol upgrades. The details of Fernet are further articulated below.

Estimated hardware requirements

:desktop_computer:  Minimum   Recommended
CPU                 16 cores  32 cores
Network             64 Mb/s   256 Mb/s
Storage             3 TB      5 TB
RAM                 32 GB     64 GB

These hardware requirements are current best guesses and subject to change.


Provers

An Aztec prover is most often a full node that produces Aztec-specific zero-knowledge (zk) proofs (rollup proofs). Notably, provers do not have to be full nodes in the event that they have reliable access to a full node, or to the latest Aztec state. The current protocol, called Sidecar, suggests facilitating out-of-protocol proving, similar to out-of-protocol PBS. Provers in this case are fully permissionless and could be anyone - such as a vertically integrated sequencer, or a proving marketplace such as nil, gevulot, or kalypso - as long as they choose to support the latest version of Aztec’s full nodes and proving system.

Estimated hardware requirements

:desktop_computer:  Minimum   Recommended
CPU                 16 cores  32 cores
Network             64 Mb/s   256 Mb/s
Storage             3 TB      5 TB
RAM                 32 GB     64 GB

These hardware requirements are current best guesses and subject to change.

Other types of network nodes

  • Validating Light nodes
    • Maintain a state root and process block headers (validate proofs), but do not store the full state.
    • The L1 bridge is a validating light node.
    • Can be used to validate correctness of information received from a data provider.
  • Transaction Pool Nodes
    • Maintain a pool of transactions that are ready to be included in a block.
  • Archival nodes
    • A full node that also stores the full history of the network
    • Used to provide historical membership proofs, e.g., prove that x was included in block n.
    • In the current model, it is expected that there are standardized interfaces by which well-known sequencers, i.e., those operated by well-respected community members or service providers, frequently and regularly upload historical copies of the Aztec state to immutable and decentralized storage providers such as IPFS, Filecoin, Arweave, etc.
  • The specific details of such additional node types/participant roles are TBD and likely to be facilitated via RFP, similar to how sequencer selection and prover coordination were determined.


Sequencers must stake a to-be-determined amount of a native token on Layer-1 to join the protocol. Within the initial Fernet implementation, an entryPeriod can be used to limit the ability to quickly gain outsized influence over governance decisions, but it is not strictly necessary. It is expected that this entryPeriod will initially be set to 7 days.

In the initial implementation, provers don’t need to register via staking on L1 contracts like sequencers do, but they must commit a bond during the prover commitment phase articulated below. This ensures economic guarantees for timely proof generation, and therefore short-term liveness. If a prover is unable or unwilling to produce a proof that they committed to within the allotted time, their bond will be slashed.


participant Anyone
participant Contract as Aztec L1 Contract
participant Network as Aztec Network

Anyone ->> Contract: register as a sequencer
Anyone --> Anyone: Wait 7 days
Anyone ->> Network: eligible as a sequencer

Looking beyond the initial implementation, Aztec may choose to implement a consensus network within the Layer-2 sequencers, and in that case would likely need to add a registration queue to ensure stability of the sequencer set while producing blocks (please refer to the future improvements section for more information). It is also possible that registration for provers could be added in the future.

Block production overview

Every staked sequencer participates in the following phases, which together comprise an Aztec slot:

  1. Proposal: Sequencers generate hashes from each registered sequencer’s public key and the current RANDAO value. They then compare and rank these to see whether their own hash ranks highly. If it does, they may choose to submit a block proposal to Layer-1. The highest-ranking proposal becomes canonical.
  2. Prover commitment: After an out-of-protocol negotiation with the winning sequencer, a prover submits a commitment to a particular Ethereum address that intends to prove the block. This commitment includes a signature from the sequencer and an amount X of funds that is slashed if the block is not finalized.
  3. Reveal: The sequencer uploads the block contents required for progressing the chain to whichever DA layer is implemented, e.g., Ethereum’s 4844 blobs.
    • It is an active area of debate and research whether or not this phase is necessary absent “build ahead”, i.e., the ability to propose multiple blocks before the previous block is finalized. A possible implementation includes a block reward that incentivizes an early reveal but does not strictly require it - turning the ability to reveal the block’s data into another form of potential timing game.
  4. Proving: The prover or prover network coordinates out of protocol to build the recursive proof tree. After reaching the last, singular proof that reflects the entire block’s state transitions, they upload the proof of the block to the L1 smart contracts.
  5. Finalization: The smart contracts verify the block’s proof, which triggers payouts to the sequencer, the prover, and the address that submits the proofs (likely the prover, but it could be anyone, such as a relay). Once finalized, the cycle continues!
    • For data layers that are not on the host, the host must have learned of the publication from the Reveal phase before Finalization can begin.
  6. Backup: Should no prover commitment be put down, or should the block not get finalized, then an additional phase is opened where anyone can submit a block with its proof, in a “based-rollup” or backup mode. In the backup phase, the first rollup verified will become canonical, and the cycle will begin with the next slot’s proposal phase.
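The six phases above can be sketched as a simple state machine. This is a paraphrase of the slot lifecycle described in this post, not an actual implementation; the phase names and fallback conditions are assumptions:

```python
from enum import Enum, auto

class Phase(Enum):
    PROPOSAL = auto()
    PROVER_COMMITMENT = auto()
    REVEAL = auto()
    PROVING = auto()
    FINALIZATION = auto()
    BACKUP = auto()

def advance(phase: Phase, commitment_posted: bool = True,
            proof_verified: bool = True) -> Phase:
    # Fall back to BACKUP when no prover commitment is put down, or when
    # the block fails to finalize; BACKUP and FINALIZATION both roll into
    # the next slot's proposal phase.
    if phase is Phase.PROVER_COMMITMENT and not commitment_posted:
        return Phase.BACKUP
    if phase is Phase.FINALIZATION and not proof_verified:
        return Phase.BACKUP
    if phase in (Phase.FINALIZATION, Phase.BACKUP):
        return Phase.PROPOSAL  # the cycle continues with the next slot
    order = [Phase.PROPOSAL, Phase.PROVER_COMMITMENT, Phase.REVEAL,
             Phase.PROVING, Phase.FINALIZATION]
    return order[order.index(phase) + 1]
```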

participant Contract as Aztec L1 Contract
participant Network as Aztec Network
participant Sequencers
participant Provers

loop Happy Path Block Production
    Sequencers --> Sequencers: Generate random hashes and rank them
    Sequencers ->> Contract: Highest ranking sequencers propose blocks
    Note right of Contract: Proposal phase is over!
    Contract ->> Network: calculates highest ranking proposal
    Sequencers ->> Provers: negotiates the cost to prove
    Sequencers ->> Contract: commits to a proverAddress
    Provers ->> Contract: proverAddress deposits slashable stake
    Note right of Network: "preconfirmed" this block is going to be next!
    Sequencers --> Sequencers: executes public functions
    Provers --> Provers: generates rollup proofs
    Provers ->> Contract: submits proofs
    Contract --> Contract: validates proofs and state transition
    Note right of Contract: "block confirmed!"


In order to leave the protocol sequencers can exit via another L1 transaction. After signaling their desire to exit, they will no longer be considered active and move to an exiting status.

When a sequencer moves to exiting, they are no longer eligible for block proposal elections. They may additionally have to wait through a further delay before they can exit, e.g., 3-7 days. Notably, this delay may be in addition to any delays required to exit stake from governance or the network’s upgrade mechanism.


participant Anyone as Sequencer
participant Contract as Aztec L1 Contract
participant Network as Aztec Network

Anyone ->> Contract: exit() from being a sequencer
Note left of Contract: Sequencer no longer eligible for Fernet elections
Anyone --> Anyone: Wait 3-7 days
Anyone ->> Network: exit successful, stake unlocked

Confirmation rules

There are various stages in the block production lifecycle at which a user and/or application developer can gain insight into where their transaction is, and when it is considered confirmed.

Notably, there are no consistent, industry-wide definitions for confirmation rules. Articulated here is an initial proposal for what the Aztec community could align on in order to best set expectations and build towards a consistent set of user experiences/interactions. Alternative suggestions are encouraged!

Below, we outline the stages of transaction confirmation:

  1. Executed locally
  2. Submitted to the network
    • At this point, users no longer need to actively do anything
  3. In the highest ranking proposed block
  4. In the highest ranking proposed block, with a valid prover commitment
  5. In the highest ranking proposed block with effects available on the DA Layer
  6. In a proven block that has been verified / validated by the L1 rollup contracts
  7. In a proven block that has been finalized on the L1
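Since these stages are strictly ordered, they can be modeled as comparable confirmation levels. This is a sketch; the enum names are invented for illustration:

```python
from enum import IntEnum

class Confirmation(IntEnum):
    # One level per stage above; a later stage subsumes all earlier ones.
    EXECUTED_LOCALLY = 1
    SUBMITTED_TO_NETWORK = 2
    IN_TOP_PROPOSAL = 3
    PROVER_COMMITTED = 4
    EFFECTS_ON_DA = 5
    PROVEN_ON_L1 = 6
    FINALIZED_ON_L1 = 7

def is_confirmed(status: Confirmation, required: Confirmation) -> bool:
    # A transaction at some level satisfies any weaker requirement.
    return status >= required
```

A wallet or application could then pick the level it treats as “confirmed”, e.g., a block explorer might wait for PROVEN_ON_L1 while a low-value in-app action accepts IN_TOP_PROPOSAL.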

participant Anyone as User
participant P2P Network
participant Sequencer
participant Network as Aztec Network
participant Contract as Ethereum

Anyone --> Anyone: generates proof locally
Anyone ->> P2P Network: send transaction
P2P Network --> Sequencer: picks up tx
Sequencer ->> Network: sequencer puts tx in a block
Network --> Network: executes and proves block
Network ->> Contract: submits to L1 for verification
Contract --> Contract: verifies block
Contract --> Contract: waits N more blocks
Contract --> Contract: finalizes block
Network --> Contract: updates state to reflect finality
Anyone ->> Network: confirms on their own node or block explorer


In the current Aztec design, it’s expected that block rewards in the native token are allocated to the sequencer, the prover, and the entity/address submitting the rollup to L1 for verification. Sequencers retain the block’s fees and MEV (Maximal Extractable Value). A potential addition in consideration is the implementation of MEV or fee burn, discussed at the end of this post. The ratio of the distribution is to be determined, via modeling and simulation.

Future Aztec versions will receive rewards based on their staked amount, as determined by the Aztec upgrade mechanism. This ensures that early versions remain eligible for rewards, provided they have active stake and usage. Changes to the network’s economic structure, especially those affecting block production and sequencer burden, require thorough consideration, because the network’s upgrade and governance model relies on an honest majority assumption and a credibly neutral sequencer set for “easy” proposals.

With the rest of the protocol mostly well defined, Aztec Labs now expects to begin a series of sprints dedicated towards economic analysis and modeling with Blockscience throughout Q1-Q2 2024. This will result in a public report and associated changes to this documentation to reflect the latest thinking.


Within the Aztec Network, “MEV” (Maximal Extractable Value) can be considered “mitigated”, compared to “public” blockchains where all transaction contents and their resulting state transitions are publicly visible or computable by all network participants. In Aztec’s case, MEV is generally only applicable to public functions and those transactions that touch publicly viewable state.

It is expected that any Aztec sequencer client software will initially ship with some form of first-price or priority gas auction for transaction ordering, i.e., a “naive” or unsophisticated mechanism, meaning that in general, transactions paying higher fees will be included earlier in the network’s transaction history. Similar to Ethereum’s Layer-1 ecosystem, an opt-in, open-source implementation of “out of protocol proposer-builder separation” (PBS) such as mev-boost will likely eventually emerge within the community, giving sequencers an easier-to-access mechanism to earn more money during their periods as sequencers. This is an active area of research.


It is likely that this proving ecosystem will emerge around a Flashbots mev-boost-like ecosystem, specifically tailored towards the needs of sequencers negotiating the cost for a specific proof or set of proofs. This is currently referred to as proof-boost or goblin-boost (due to goblin plonk…).

Specifically, proof-boost is expected to be open-source software that sequencers can optionally run alongside their clients to facilitate a negotiation for the rights to prove a given block, thereby earning block rewards in the form of the native protocol token. After the negotiation, the sequencer will commit to an address, and that address will need to put up an economic commitment (deposit) that will be slashed in the event that the block’s proofs are not produced within the allotted timeframe.
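The commit-and-slash flow might look like the following sketch. The field names and refund rule are assumptions based on the description above, not a specification:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ProverCommitment:
    prover_address: str   # address the sequencer committed to
    deposit: int          # slashable economic commitment
    deadline_block: int   # L1 block by which the proof must be verified

def settle(commitment: ProverCommitment,
           proof_verified_at: Optional[int]) -> int:
    # Deposit returned if the proof landed in time; slashed to zero if it
    # arrived late or never arrived at all.
    if proof_verified_at is not None and proof_verified_at <= commitment.deadline_block:
        return commitment.deposit
    return 0
```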

Initially it’s expected that the negotiations and commitment could be facilitated by a trusted relay, similar to L1 block building, but options such as onchain proving pools are under consideration. Due to the out of protocol nature of Sidecar, these designs can be iterated and improved upon outside the scope of other Aztec related governance or upgrades - as long as they maintain compatibility with the currently utilized proving system(s). Eventually, any upgrade or governance mechanism may choose to enshrine a specific well adopted proving protocol, if it makes sense to do so.

Constraining Randao

The RANDAO values used in the score as part of the Proposal phase must be constrained by the L1 contract to ensure that the computation is stable throughout a block. This is to prevent a sequencer from proposing the same L2 block at multiple L1 blocks to increase their probability of being chosen. Furthermore, we wish to constrain the RANDAO ahead of time, such that sequencers will know whether they need to produce blocks or not. This is to ensure that the sequencer can ramp up their hardware in time for the block production.

As only the last RANDAO value is available to Ethereum contracts, we cannot simply read an old value. Instead, we must store RANDAO values in the contract as they update. The simplest way to do so is to store the RANDAO at every block, and then use the RANDAO from n blocks earlier when computing the score for a given block. For the first n blocks, the value could be pre-defined.
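A minimal sketch of that lookback storage follows; the lookahead parameter, method names, and genesis handling are assumptions for illustration:

```python
class RandaoStore:
    """Record the RANDAO value at each L1 block so that the score for a
    block uses the value from `lookahead` blocks earlier, fixing the
    randomness ahead of time for sequencers."""

    def __init__(self, lookahead: int, genesis_value: bytes):
        self.lookahead = lookahead
        self.genesis_value = genesis_value
        self.values: dict[int, bytes] = {}

    def record(self, block_number: int, randao: bytes) -> None:
        self.values[block_number] = randao

    def value_for_score(self, block_number: int) -> bytes:
        source = block_number - self.lookahead
        if source < 0:
            # For the first `lookahead` blocks, use a pre-defined value.
            return self.genesis_value
        return self.values[source]
```

Because the value used for block n was fixed `lookahead` blocks earlier, a sequencer cannot re-propose the same L2 block at multiple L1 blocks to reroll their score, and elected sequencers get advance notice to ramp up hardware.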

Known issue: RANDAO’s Biasability. Further reading: 1, 2, 3. At the moment we believe that this is not a serious issue, however improvements would likely be considered &/or prioritized.


Happy path


participant Anyone
participant Contract as Aztec L1 Contract
participant Network as Aztec Network
participant Sequencers
participant Provers

Anyone ->> Contract: register()
Anyone --> Anyone: Wait 7 days
Anyone ->> Network: eligible as a sequencer
loop Happy Path Block Production
    Sequencers --> Sequencers: Generate random hashes and rank them
    Sequencers ->> Contract: Highest ranking sequencers propose blocks
    Note right of Contract: Proposal phase is over!
    Contract ->> Network: calculates highest ranking proposal
    Sequencers ->> Provers: negotiates the cost to prove
    Sequencers ->> Contract: commits to a proverAddress
    Provers ->> Contract: proverAddress deposits slashable stake
    Note right of Network: "preconfirmed" this block is going to be next!
    Sequencers --> Sequencers: executes public functions
    Provers --> Provers: generates rollup proofs
    Provers ->> Contract: submits proofs
    Contract --> Contract: validates proofs and state transition
    Note right of Contract: "block confirmed!"
Sequencers ->> Contract: exit()
Sequencers --> Sequencers: wait 7 days

Voting on upgrades

In the initial implementation of Aztec, the L1 smart contracts are planned to be fully immutable. However, sequencers are expected to vote on upgrades (i.e., what should be the next immutable instance of the network) alongside block proposals, in order to reflect social consensus as closely as possible on L2. If sequencers wish to vote for an upgrade, they signal by updating their client software &/or an environment configuration variable. If they wish to vote no or abstain, they do nothing. Because the “election” is randomized through the hash-ranking algorithm in Fernet, the voting acts as a random sampling of the current sequencer set. This implies that the duration of the vote must be sufficiently long (and RANDAO sufficiently randomized) to ensure that the sampling is reasonably distributed, e.g., a majority of sequencers must vote yes by upgrading their client implementation over a minimum duration of 7 days.
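Because each elected proposer’s signal doubles as a vote, the tally over a voting window is effectively a random sample of the sequencer set. A sketch of such a tally follows; the sample-size floor and majority threshold are illustrative assumptions, not protocol parameters:

```python
def upgrade_passes(sampled_votes: list[bool], min_samples: int = 100,
                   threshold: float = 0.5) -> bool:
    # sampled_votes: one entry per winning proposal in the vote window;
    # True if the proposer signaled for the upgrade. Voting no and
    # abstaining are indistinguishable (both "do nothing").
    if len(sampled_votes) < min_samples:
        return False  # window too short for a representative sample
    return sum(sampled_votes) / len(sampled_votes) > threshold
```

This is why the vote duration matters: too few sampled proposals and the outcome reflects election luck rather than the sequencer set’s actual preference.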

More information about the Aztec network upgrade mechanism is expected to be published alongside its own RFC within Q1-Q2 of 2024. In this context, it is important to know that sequencers will actively be involved in the upgrade mechanism via voting, and that its intention is to mirror L1 social consensus as closely as possible.


participant Contract as Aztec L1 Contract
participant Network as Aztec Network
participant Sequencers
participant Provers

loop Happy Path Block Production
    Sequencers --> Sequencers: Generate random hashes and rank them
    Sequencers ->> Contract: Propose block + indicate that they desire to upgrade
    Note right of Contract: Proposal phase is over!
    Contract ->> Network: calculates highest ranking proposal + vote
    Sequencers ->> Provers: negotiates the cost to prove
    Sequencers ->> Contract: commits to a proverAddress
    Provers ->> Contract: proverAddress deposits slashable stake
    Note right of Network: "preconfirmed" this block is going to be next!
    Sequencers --> Sequencers: executes public functions
    Provers --> Provers: generates rollup proofs
    Provers ->> Contract: submits proofs
    Contract --> Contract: validates proofs and state transition
    Note right of Contract: "block confirmed! votes counted for upgrade!"

Backup mode

In the event that no one submits a valid block proposal, we introduce a “backup” mode which enables a first come first serve race to submit the first proof to the L1 smart contracts.


participant Anyone
participant Contract as Aztec L1 Contract
participant Network as Aztec Network
participant Sequencers

loop Happy Path Block Production
    Sequencers --> Sequencers: Generate random hashes and rank them
    Contract --> Contract: Waited n L1 blocks.... Proposal phase is over
    Contract --> Network: calculates highest ranking proposal
    Note left of Network: No one proposed a block... backup mode enabled!
    Anyone ->> Contract: submits a rollup...
    Contract --> Contract: validates proofs and state transition
    Note right of Contract: "block confirmed!"

We also introduce a similar backup mode in the event that there is a valid proposal, but no valid prover commitment (deposit) by the end of the prover commitment phase.


participant Anyone
participant Contract as Aztec L1 Contract
participant Network as Aztec Network
participant Sequencers
participant Provers

loop Happy Path Block Production
    Sequencers --> Sequencers: Generate random hashes and rank them
    Sequencers ->> Contract: Highest ranking sequencers propose blocks
    Note right of Contract: Proposal phase is over!
    Contract ->> Network: calculates highest ranking proposal
    Sequencers ->> Provers: negotiates the cost to prove
    Contract --> Contract: Waited 5 L1 blocks.... Prover commitment phase is over
    Note left of Network: No one committed to prove this block... backup mode enabled!
    Anyone ->> Contract: submits a rollup...
    Contract --> Contract: validates proofs and state transition
    Note right of Contract: "block confirmed!"

Known potential issue: L1 censorship. L1 builders may choose to not allow block proposals to land on the L1 contracts within a sufficient amount of time, triggering “backup” mode - where they could have a block pre-built and proven, awaiting L1 submission at their leisure. This scenario requires some careful consideration and modeling. A known and potential mitigation is a longer proposal phase, with a relatively long upper bound to submit a proposal - at the cost of slowing down Aztec’s block production.

Future improvements and outstanding questions

Aztec is an incredibly ambitious engineering project, and a number of tradeoffs have been made in the designs articulated above in order to ensure that the network can be iterated on and shipped as quickly as possible. Currently, it’s expected that an initial reference implementation will be completed within the next 3-4 months. Due to these tradeoffs, there are a number of “obvious improvements” to the network that will be prioritized based on the completion of other critical network functionality.

  1. Should Aztec introduce a consensus network?
  • Generally, the answer here seems to be yes. The question has historically been ROI versus the engineering time/effort spent, as a complex implementation could theoretically delay a network launch.
  2. Should Aztec implement a version of MEV or fee burn?
  • Generally, the answer here seems to be yes. A fee burn model may make more sense than MEV burn, due to the inability to easily quantify MEV on private functions/transactions. Additionally, it must be carefully considered how to ensure that the sequencer set doesn’t consolidate around a few sophisticated actors, due to their involvement in the upgrade mechanisms.
  3. Should Aztec consolidate the proposal & prover commitment phases? What about other phases?
  • Maybe! This is less clearly understood at the moment. Reducing L1 interactions is usually nice, as it saves some cost and complexity; however, it could potentially make “MEV-stealing” easier to achieve and warrants further consideration.
  4. How long should each phase be? Should they be enforced, or incentivized?
  • The current thinking is that guaranteeing a block gets produced on a regular cadence is a nice-to-have feature of the network for infrastructure providers, application developers, and end users. However, it may not be pragmatic to enforce, and it may make more sense to have a “decaying reward system” that incentivizes block production happening as quickly and regularly as possible. This is an open area of consideration, and is related to item #3 above.
  5. How should mev-boost & prover-boost get integrated?
  • MEV-boost is open source and off the shelf; however, Aztec’s environment may be sufficiently different that it warrants a fork or reimplementation of MEV-boost. Prover- or proof-boost does not yet have any well-known analogues and needs significant design + implementation consideration, which will likely be prioritized later in the year. If you’re reading this and feel strongly about a path forward here, please get in touch.

In the event that other core network functionality is completed sooner than expected, some (or all of these) improvements may be candidates for the initial network release. While these ideas and options are open for improvement, suggestion, debate, etc. they will be considered/prioritized through the lens of engineering feasibility and changes to deliverable timelines.


“The information set out herein is for discussion purposes only and does not represent any binding indication or commitment by Aztec Labs and its employees to take any action whatsoever, including relating to the structure and/or any potential operation of the Aztec protocol or the protocol roadmap. In particular: (i) nothing in these posts is intended to create any contractual or other form of legal relationship with Aztec Labs or third parties who engage with such posts (including, without limitation, by submitting a proposal or responding to posts), (ii) by engaging with any post, the relevant persons are consenting to Aztec Labs’ use and publication of such engagement and related information on an open-source basis (and agree that Aztec Labs will not treat such engagement and related information as confidential), and (iii) Aztec Labs is not under any duty to consider any or all engagements, and that consideration of such engagements and any decision to award grants or other rewards for any such engagement is entirely at Aztec Labs’ sole discretion. Please do not rely on any information on this forum for any purpose - the development, release, and timing of any products, features or functionality remains subject to change and is currently entirely hypothetical. Nothing on this forum should be treated as an offer to sell any security or any other asset by Aztec Labs or its affiliates, and you should not rely on any forum posts or content for advice of any kind, including legal, investment, financial, tax or other professional advice.”


could it possibly be lower hardware requirements?


Hey Cooper. Thanks for putting this together so comprehensively.
There is a small typo here (should say “the event”).

Limited comments:
Is there adequate incentive to run an archival node? Whilst it increases requirements I wonder if the history should be held with full nodes? Then there is an incentive. Clearly other participants can use trimmed versions.

If you have an answer to this you probably won’t want to answer here… but how can we ensure the network is decentralised from day one? Sequencers need the tokens from “you”, which means you know them, which is an attack vector. An interesting problem to try and solve.

Thanks again.


Hi Cooper,
Thanks for the RfC. We’ve been thinking about it with @teemupai and the Gevulot team, and collected a couple of thoughts:

  1. Hardware requirements:
    Could Aztec provide more details on the hardware requirements for proving per proof type? This is important given that in most cases the proof tree construction will likely be distributed across many machines.
  2. Access to state:
    Given the size of Aztec’s state that provers need to access to generate the base rollup proof, Gevulot will need to implement one of the following measures:
    • Have provers run Aztec full nodes. This is likely a good short-term solution.
    • Deploy a prover with state built-in, but this requires new logic and is a poor long-term solution.
    • Include generic state storage in the prover node which will be replicated across all nodes. This is likely the best long-term solution.
  3. Bond:
    Regarding the prover commitment, what is the expectation for who will put it up? Given that it is a single commitment for the entire proof tree and the actual proving can and probably should be distributed across many provers, it seems the natural provider of the commitment would be the sequencer who then retains the ability to distribute the proof tree construction flexibly. Is this the general idea?

I’m with the Gevulot project, but can shed some light on the full node question. In Aztec the base rollup proof requires access to at least a significant amount of the entire state (Maybe @cooper-aztecLabs can clarify if and how the state could be split up into smaller chunks, while still being able to complete non-membership proofs etc). Given the size of the state is measured in TBs not MB or GB, you unfortunately cannot hand storage over to an archive node, because you would need to send the entire state over the network every time you generate a proof.


Haha, I know your project, I have joined your telegram group.


Hey, great read!

If every sequencer is responsible for proving (or outsourcing the proof of) the block they propose, and if this proving is sequential, considering that the proving time is likely to be longer than block time, then can’t the time to “hard finality” increase superlinearly with block height? With the number of blocks growing, won’t we end up with a system that becomes slower and slower in terms of finalization? Maybe I’m missing something.

Since the Aztec prover is a full node and it’s likely that Aztec sequencers will outsource block production (referencing the MEV Boost part), what are the advantages of having a dedicated role to the sequencer (if provers could just propose blocks and generate proofs themselves) ?

Assigning the role of proposing blocks to the prover could potentially avoid the need to deal with the fee burn mechanism mentioned (which aims to fight the imbalance in incentive structures I guess). It could also reduce complexity by eliminating what seems like a double PBS i.e sequencers outsourcing both block production and proof generation.

Note that MEV on Aztec might differ from what we’re used to, so maybe the argument about imbalanced incentive structures is not as relevant as it seems - interesting research problem there.

How does the generation of proofs work? If it involves competitive proving, then I guess the most efficient prover always wins, i.e.:

In the long term, if the same prover consistently wins, it could disincentivize others from participating. They might wonder why they should keep burning energy if they don’t stand a chance to win, potentially leading to a very limited number of provers in the network imho.

But maybe this is not how the proof generation is planned to work, so curious to learn more about this!

One way I like to think about it is to consider these different types of confirmations/finality:

Sequencer’s Promise: This refers to the sequencer giving a promise to the user that their transaction will be ordered and processed as intended.

Order Finality: At this stage, a user’s transaction has been definitively placed in sequence relative to other transactions.

Execution Settlement: This is the point at which the bridge smart contract acknowledges the final and irreversible execution of a user’s transaction i.e the proof has been verified on Ethereum.

I think this is cool and more efficient than just assigning another prover the same amount of time; the only thing I would keep in mind is that it goes back to competitive proving. Therefore, it’s likely to end up in a situation where the fastest prover, who submits the proof first, is always the one who wins. This tendency could lead to a dependence on a single prover for the fallback mechanism.

Would say yes, but regarding the implementation, pre-confirmations should be taken into consideration to keep the latency as low as possible imho.

If there is a clear separation between sequencers and provers, yes imo. This holds except if you have only vertically integrated sequencers.


This response was written as a report and is additionally available via DOI:10611109

I. Executive Summary

This response to Aztec Labs’ Request for Comments (RfC) is the result of an ongoing research collaboration between Aztec Labs and BlockScience (full disclosure: BlockScience is contracted by Aztec Labs for research, including the provision of this response). Other work artifacts resulting from this collaboration are a report on aiding the decision for a Sequencer Selection protocol, as well as an ongoing simulation effort aimed at aiding risk management decisions and mechanism design of an economically sustainable Aztec infrastructure.

Through the selection of Fernet and Sidecar, Aztec Labs has decided upon a workflow for block production. Our response focuses on identifying whether this workflow appears to fulfill previously-defined requirements, while also providing additional views on potential risks that might become relevant for future risk management.

We note that the absence of any subsequently-identified risk from this response likely results from time and scope limitations of the analysis, rather than from the identification of a “perfect workflow”. Risks are present in any economic infrastructure, whether or not they have been identified for risk management.

Our conclusion is that the proposed workflow generally addresses the prespecified requirements, allowing entities to permissionlessly participate in the building process to produce Layer 2 (L2) blocks.

We identify potential factors that might contribute to a centralization of the Sequencer and Prover roles, such as increasing returns to scale, early incumbency, and opportunity cost structures. Much of the centralization tendency of the builder process rests upon the potential of the process to support conditions under which a ‘natural monopoly’ can emerge, driving vertical integration of the Sequencer and Prover roles.

Additionally, we provide a loose classification of types of censorship, and highlight factors that could contribute to each type (as well as potential factors influencing censorship risk management). In particular, we note the possibility that adopting “based sequencing” for the fallback ‘race mode’ can potentially augment the risk of censorship (e.g. from L1 builders). The main driver of this observation is the potential market power of an L1 builder that could profit from inducing race mode with based sequencing (it should be noted that this is not a reflection of the utility of based sequencing as a protocol design choice).

Finally, we comment on our observations with respect to liveness and failure modes. Fernet-Sidecar enables reliable block production in the Aztec network through a cascade of modalities. This cascade is presented as a state machine with parameterized, realtime state transition triggers. Both fixed block intervals and variable block intervals are discussed as simple changes to the state transition logic. While trigger parameterization provides a means of governing Aztec block production, determining appropriate values for the parameters will require computational simulations.

II. Overview

This response is divided into four sections, each loosely covering a (non-exclusive) subset of the requirements of Fernet-Sidecar. These sections and the relevant end-to-end building requirements are:

  1. Decentralization: this section provides a perspective on how to evaluate the impact of centralization on participation in the builder process, focusing primarily upon identification of any factors which might contribute to centralization/vertical integration of the sequencer and/or prover roles (such as barriers to competitive entry and increasing returns to scale).

    • Associated Requirements:
      • Compatibility between the Fernet sequencer selection protocol and Sidecar
      • Provision for permissionless participation
      • Ability to verify the entity generating a proof
      • Clearly articulated incentives for participants
      • Ability for sequencers to produce proofs by means other than engaging directly in a prover market (Sidecar)
  2. Censorship: a loose classification of censorship types is defined, to assist in the assessment of tendencies toward centralization from a market power perspective, and in the risk management of censorship and its associated costs.

    • Associated Requirements:
      • Provision for permissionless participation
      • Privacy for participants in the proving protocol
      • Definition of the entity submitting completed work to L1
      • Clearly articulated incentives for participants
      • Ability for sequencers to produce proofs by means other than engaging directly in a prover market (Sidecar)
  3. Liveness: the Fernet-Sidecar protocol is considered as a state machine with realtime state transition events. In this way, the protocol is related to the recovery, resilience, and adaptability required for reliable, on-going block production in the Aztec network.

    • Associated Requirements:
      • Graceful recovery in the event of an interruption/fault
      • Graceful reorg resilience
      • Scaling with available computing power
  4. Futureproofing: Discussion of futureproofing is distributed throughout the liveness theme. Sidecar itself provides flexibility through an unpermissioned pool of Provers. Prover Selection is decoupled from the cryptography implementation, allowing future cryptographic improvements and challenges to be taken into account. Protocol parameters are listed with regard to state transition events.

    • Associated Requirements:
      • Flexible for future cryptography improvements
      • Clearly articulated protocol parameters

III. Themes

A. Decentralization

One may frame the ‘danger’ of centralization within the context of competitiveness between participants. When relatively few participants can profitably enter a market, the result is monopolistic or oligopolistic competition, or outright monopoly; a driving goal of decentralization is to ensure both free access to and ‘fair play’ within such a competitive environment.

In a market with few firms the main threat to competitiveness is incumbent centralization, i.e. the tendency for an early participant in an ecosystem to understand and develop a profit-maximizing vertical structure that:

  1. Provides returns to scale by internalizing costs and reducing entity-to-entity frictions, and
  2. Leverages their early incumbency to deter potential competitors from entering the market.

While each of these characteristics alone may be sufficient to create a centralized market (e.g. a market passing from monopolistic competition to oligopoly, and ultimately to monopoly), both are typically exploited by a participant attempting to control as much of the entire revenue stream as possible.

Returns to Scale

Scaling here is best interpreted as a technological constraint–a participant considers whether or not it is possible to leverage e.g. infrastructure, business practices, funding sources etc. to achieve a centralized, or vertical, ‘stack’ of service provision that captures the end-to-end development of a product. In the Fernet-Sidecar implementation, identification of the technological constraints will determine the extent to which vertical integration is possible. Examples include:

  • The substitutability of infrastructure used for sequencing in the proving process (high substitutability drives down the hardware costs of vertical integration);
  • The ‘complexity’ costs of replicating a sequencer or prover role (lower costs make ‘sybilization’ of the sequencer and/or prover markets more likely to generate expected revenue increases that more than offset the increased costs above managing a single entity);
  • The race-mode-specific infrastructure costs (a sequencer could strategically influence events to result in the failure to provide a proving commitment bond, or fail themselves to reveal their proposed block content, in order to then win the ‘first past the post’ proof race that follows).

A full analysis of the viability of these strategies requires an understanding of the cost side of the sequencer and prover entities, as well as any mitigation strategies to disincentivize sybilization and the commencement of race mode.

Early Incumbency

Vertical integration allows one internal business unit to absorb losses from another business unit, provided that future gains made possible with these losses are high enough. This can be leveraged to reduce the profitability of a potential competitor, deterring them from attempting to become part of the building process. A classical example is of an incumbent, such as a chip manufacturing company, ‘dumping’ high output at low prices to drive out rivals and deter potential entrants (who may have cost constraints that cannot be satisfied at low prices), and then recouping more than the associated losses by acting as a monopolist thereafter, restricting output and increasing prices. The Fernet-Sidecar implementation may favor early incumbents because of:

  • The implementation and ‘fine-tuning’ advantages that an incumbent garners from a longer presence in the building process, and
  • The advantages of a sequencer potentially developing and leading a prover market, initially offering below-cost proving that deters other provers from entering the market.

The latter case most closely represents the classical incumbent example described, as developing a prover market may incur losses on the prover side that the sequencer side initially absorbs. These losses are then expected to be more than offset by the long-run prover market monopoly. (For an analysis of such ‘predatory pricing’ strategies and algorithms see e.g. Leslie, C. R. [2023], “Predatory Pricing Algorithms”, New York University Law Review v. 98 no. 1.)


One typical strategy to mitigate the potential benefit of centralization is to enforce–or generate market conditions which enforce–pricing such that ‘normal’ economic profit is attained, up to what is required for the market to generate its output (in this case, the composite good ‘ordered transactions / proven transactions’). Enforcement is typically itself centralized (such as regulating a natural monopoly by requiring average-cost pricing), but market conditions may exist which ‘self-enforce’ (such as contestable markets). It will be necessary to understand the cost structure of a vertically integrated sequencer, in particular its degree of increasing returns to scale, in order to assess if one or the other enforcement modality is ultimately feasible.

Teasing out these particulars will require an examination of the cost structure of a participant:

  • Adopting the role of a sequencer and a prover;
  • Adopting the role of an L1 participant (builder/validator) and an L2 participant (sequencer/prover).

If one or more of these cases implies increasing returns to scale from vertical integration, decentralization may be unstable as larger entities are formed by adopting multiple roles. The same conclusion also obtains if one or more of these cases implies that end-to-end proving is a natural monopoly, i.e. that the cost of proving is lowest when performed by a single vertically-integrated entity. This would act both as a deterrent to entry by other potential provers, and also capture the demand for proving services in the market. It is worth noting that these two situations are often identical, when the natural monopoly is driven by large initial infrastructure costs or high barriers to entry, as this can imply (at least weakly) increasing returns to scale. (For an insightful textbook treatment of returns to scale and natural monopoly, see e.g. Tirole, J. [1988], The Theory of Industrial Organization, MIT Press.)
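The natural-monopoly condition (cost subadditivity) can be illustrated with a minimal numeric sketch; the fixed cost F and marginal cost c below are hypothetical placeholders, not estimates of actual proving costs:

```python
# Hypothetical cost structure: a large fixed infrastructure cost F plus a
# constant marginal cost c per unit of proving output.
F, c = 1000.0, 1.0

def total_cost(q: float) -> float:
    """Cost for a single entity to produce q units of proving output."""
    return F + c * q

q = 500
one_firm = total_cost(q)           # one vertically-integrated entity: 1500.0
two_firms = 2 * total_cost(q / 2)  # same output split across two entities: 2500.0

# Subadditivity: the whole market is served more cheaply by a single entity,
# the textbook condition for a natural monopoly (and a deterrent to entry).
assert one_firm < two_firms
```

The fixed cost here also implies (weakly) increasing returns to scale, consistent with the observation above that the two situations often coincide.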

Centralization and Opportunity Costs

Is centralization a ‘bad thing’ for the ecosystem? Because centralization may prevent participants from meaningfully engaging in the block building process, it can be considered as antithetical to the motivation of creating the Fernet-Sidecar permissionless workflow. Against this consideration would have to be weighed the following:

  1. The ability of the workflow to allow multiple, decentralized, distinct (not ‘sybilized’) participants to earn an excess of revenue over cost (economic profit) that is at least as good as the next best available activity given their resources (builder opportunity cost);
  2. The ability of the workflow to provide sufficient quality of service (timeliness, cost) to users of the network that they prefer a decentralized Aztec over the next best available ecosystem to process their transactions (user opportunity cost).

Note that users may not necessarily care about the centralization of the network, so the user opportunity cost will at least implicitly include the perceived quality of service of an alternative Aztec topology, such as a centralized service, even if no such service is immediately available. This is because builder participants will likely recognize that, if centralization would increase quality of service for users, this signals an opportunity for centralization.

Further understanding of these opportunity costs will likely suggest possible mechanisms that can be adopted to

  • Maintain a required quality of service that exceeds that required due to user opportunity cost;
  • Provide a minimum reward in a decentralized ecosystem that exceeds that required due to builder opportunity cost; and
  • Dissuade any tendency to centralization from e.g. increasing returns to scale and/or a natural monopoly tendency, as described above.

Current mitigations already implemented into Fernet-Sidecar, such as builder staking and slashing, may or may not be sufficient once these opportunity costs have been understood.

B. Censorship

In general, censoring transactions is undertaken by a builder role (sequencer or prover) under one of three circumstances:

  1. There is a monetary gain from censoring, i.e. censorship is financially incentivized; or
  2. There is a non-monetary gain from censoring, i.e. censorship is ideologically incentivized; or
  3. There is no gain from censorship but it occurs, i.e. censorship is unintentional.

These circumstances cannot be perfectly identified by an observer of a builder who–ex post–is seen to have engaged in censorship, although there exist strategies for probabilistically detecting or classifying censorship circumstances.

Given this, it is usually more fruitful to assess:

  1. The likelihood of incentivized censorship, with an eye toward features of the builder workflow that reduce this likelihood, while understanding
  2. The rate of unintentional censorship that may be considered ‘natural’ for the builder workflow under consideration, in order to provide a workflow that is resilient to censorship disruptions.

Financially Incentivized Censorship


Although a sequencer is perhaps less incentivized to censor individual transactions (since transactions with private function calls prevent introspection for e.g. MEV opportunities), there is still a preference ordering that is induced by transactions according to the results of an optimization across fees (e.g. ‘MEV-boost’-ing). This is designed to reveal the willingness to pay for Aztec services (as is usual for auction-style resource allocation mechanisms), but does mean that resource-constrained users may find their de facto access to proposed blocks restricted. Since this possibility is present in nearly all such protocol systems it is difficult to see what, if any, mitigation strategies are available, or if any should be investigated (as doing so would destroy a large part of the incentive for a sequencer to participate).

In a sense, Fernet is already well-suited to place this type of censorship at or near the lower bound for an L2-rollup-to-L1 paradigm: all of the economic motivation rests at the L1 cost level, and intra-transaction information is not available to exploit by Aztec’s own design. Thus, if the role of a participant is limited to that of the L2 sequencer, then censorship risk at the individual transaction level is relatively low.

Sequencer as L1 Builder

The next aggregation layer above individual transactions is to treat the proposed L2 block itself as an object of censorship, and here the risk is more prevalent due to the timing/timeout structure of the Fernet-Sidecar implementation. The idea is that a sequencer is also an L1 Builder that is large enough to command a significant share of L1 block production. Given the economies of scale for L1 services, an L1 Builder may also be incentivized to participate as one or more sequencers. When ‘their’ sequencer role has the highest ranking proposal, the L1 Builder may be responsible for its inclusion in the L1 contracts, and the proposal is submitted as intended. But when another sequencer has the highest ranking proposal, the L1 Builder ‘holds up’ the L1 submission until the backup ‘race mode’ is triggered. At this stage the L1 Builder can submit their own original proposal permissionlessly, with the result that (depending upon the market power of the L1 Builder in the L1 space) a large fraction of sequencer proposals that arrive on L1 are built by the L1 Builder.

Much of the above argument rests upon

  1. The market power of the L1 builder, i.e. the degree to which they can control L1 block production; and
  2. The structure of ‘race mode’.

Market power occurs because an L1 builder can act as a ‘bottleneck’ to L2 rollup inclusion even under normal operating circumstances, and this can be exploited to threaten L2 block censorship and a concomitant default to race mode. Provided that the structure of race mode admits circumstances where the L1 builder could earn a return (by e.g. submitting their own ‘ready-to-go’ block), this threat can be credible, and a sequencer who is not the same entity as the L1 builder may agree to a side payment to prevent the threat from being carried out. In this fashion an L1 builder and a separate sequencer may end up behaving “as if” they are a single entity.

Race mode structure has an impact because of the ease (or difficulty) of an L1 builder generating revenue if a threat is carried out. To the extent that the current version of race mode, utilizing a form of “based sequencing” (cf. e.g. J. Drake’s proposal at ETH Research) allows an L1 builder to submit a block permissionlessly, such a credible threat mechanism may be profitable at “relatively” low levels of market power on the part of the L1 builder. (This is reminiscent of bargaining games, cf. e.g. Myerson, R. B. [1997], Game Theory: Analysis of Conflict, Harvard UP for an introduction.)

Sequencer as Prover

On a smaller scale there may be a similar danger with a prover market dominated by a single large entity (“Prover”) who also acts as one or more sequencers. When their sequencer has the highest ranking proposal, they can be ‘selected’ by the sequencer (i.e. by the Prover itself) to prove the proposal, garnering the associated returns. When another sequencer has the highest ranking proposal, it may be that only the Prover is available in the prover market, who charges a high price for proving (higher than they charge themselves as sequencers). This price would be just under the reward that a sequencer would earn by proving the proposal themselves, providing the weakest incentive possible for the unfortunate sequencer to use the Prover.

Thus, the proving market microstructure can be a critical component of the capacity of a prover to censor sequencer blocks due to monopoly power (see ‘Decentralization’ above). If possible one would prefer a prover market that is naturally decentralized (e.g. with constant returns to scale production) and low entry costs. It is important to note that this depends upon the technology for proving–if it should be the case that the most natural market structure is monopoly, then one might entertain possible mitigations such as a protocol tax of the ‘winning’ prover, to drive profits down to what would be obtained in a decentralized equilibrium.

Ideologically Incentivized Censorship

One may imagine circumstances under which a series of transactions, or a proposed block of transactions, is censored for reasons other than financial gain. For example, it may be that an entity providing transactions is, for one or another reason, subject to external scrutiny that can render participants within the builder workflow liable for providing services to that entity. Another example is ‘for the LOLs’, i.e. behavior antithetical to the spirit and purpose of the ecosystem that does not confer a benefit (and may even be costly to execute).

Ideologically incentivized censorship occupies a middle ground between financially incentivized and unintentional censorship. On the one hand, it is hoped that the ecosystem is resilient to unintended ‘shocks’ (as discussed further below), and so to a certain extent censorship ‘for the LOLs’ can be treated as unintentional censorship. On the other hand, it is not difficult to see censoring an entity due to potential legal repercussions as ultimately a financial incentive (especially viewed as an opportunity cost), albeit several steps removed (e.g. legal consequences resulting in new costs, reduced profits, possible exit etc.). In the latter case, then, ideologically incentivized censorship may not be first order financially incentivized, but may be so to higher order.

Risk Management

Fernet-Sidecar’s resilience to ideologically incentivized censorship relies upon the characteristics—including trigger events—of the backup ‘race mode’. This is discussed further below in the context of unintentional censorship. Similarly, ideologically incentivized censorship that is (say) up to second-order financially incentivized may be treated from the same point of view as that above for first-order financially incentivized censorship. The middle ground, where censorship is not taken for direct financial gain but possibly for indirect gains (or insurance) ‘down the road’ can be determined by the risk management profile that transactions handled by Aztec provides.

For example, it may be possible to partition transactions into distributions by visible attribute (e.g. by entity, size, frequency etc.). If this is the case, then a statistic may be generated that can be referenced per sequencer to assess if one or another attribute is probabilistically being censored, and to design a punishment strategy (such as stake slashing) that uses this statistic to deter, in expectation, attribute-based censorship. While this approach sacrifices determinism, it provides Aztec itself with the power to use its protocol incentive mechanisms to approach the problem of censorship from a risk management perspective, leveraging population (statistical) approaches that can be drawn from traditional risk management practice (cf. e.g. McNeil, A. J. [2015], Quantitative Risk Management: Concepts, Techniques and Tools, Princeton UP).
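A minimal sketch of such a statistic, assuming hypothetical transaction attributes and counts (a real mechanism would calibrate the slashing threshold against the chi-square distribution to bound the false-positive rate):

```python
# Sketch of an attribute-based censorship statistic. All counts are
# hypothetical; "attribute" stands in for any visible partition of
# transactions (entity, size, frequency, etc.).
from collections import Counter

def chi_square(included: Counter, mempool: Counter) -> float:
    """Chi-square distance between a sequencer's included-transaction
    attribute counts and the counts expected from the mempool mix."""
    n = sum(included.values())
    total = sum(mempool.values())
    stat = 0.0
    for attr, pool_count in mempool.items():
        expected = n * pool_count / total
        observed = included.get(attr, 0)
        stat += (observed - expected) ** 2 / expected
    return stat

# Hypothetical data: the "large" attribute appears in the mempool but is
# systematically absent from this sequencer's blocks.
mempool = Counter({"small": 700, "large": 300})
included = Counter({"small": 100, "large": 0})

print(chi_square(included, mempool))  # large values flag probabilistic exclusion
```

A per-sequencer statistic like this could feed the expectation-based punishment strategy described above, trading determinism for a population-level deterrent.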

Unintentional Censorship

Censorship that occurs without a manifestation of participant ‘will’ (or, in the loosest sense, with a will driven by random, exogenous factors) is considered unintentional, and may be treated as a ‘shock’ to the builder process. Examples would include

  • Completely unrationalizable but explicit transactions exclusion behavior (‘for the LOLs’);
  • Infrastructure failures; and
  • Low-probability clustering events, such as the happenstance that low gas fee transactions are correlated with one or more attributes of the transactions actor, so that ex post the appearance is that a type of actor has been censored.

The goal of the builder process in this context is resilience—unintentional censorship events should not persistently degrade the perceived performance of the network, by reducing trust in the sequencer’s proposal mechanism, transactions revelation flow, or prover proof submission sub-processes. To that end, such events should not

  1. Be correlated across time or transactions attributes (which would indicate a systemic tendency toward censorship in particular contexts), so that the network is not confronted with repeated censorship events, leading to degraded performance;
  2. Create cascade effects (externalities) where a single censorship event ‘snowballs’ into large-scale changes in transactions type or volume, again degrading performance;
  3. Unduly penalize the builder (sequencer or prover) that has the unintentional censorship event attributed to their activity, so that participation in the builder process is not disincentivized from any penalties assessed due to censorship.

Shock resilience is a well-understood desired property of engineered systems, and many techniques relying upon e.g. signal processing can be fruitfully applied to a system to assess its resilience to disturbances such as unintentional censorship. The goal of such an analysis in the present context would be to minimize the risks of #1 - #3 above. For example, Aztec’s design goal of handling private function calls is potentially the most important mitigating factor as it removes the possibility of correlations between many transactions attributes that are hidden from the builder process, reducing the risk of #1 above.

Similarly, Fernet-Sidecar’s timing games for sequencer transactions revelation and prover proof submission are potential ‘levers’ that can be adjusted to minimize the possibility of cascade effects (#2 above). This is because such timing intervals provide ‘buffer periods’ where fluctuations can potentially be dampened, preventing amplification of an event over time. It can be a useful exercise in signal processing to posit particular shock functional forms, representing censorship events, and examining the impact of different timing intervals selected on the resulting risk of event propagation.
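As one illustration of that exercise, consider a geometric shock form in which a censorship event is amplified by a factor g per round but dampened by a factor d per L1 block of buffer; all parameters here are hypothetical:

```python
# Sketch of the shock-propagation exercise: a censorship shock of initial
# amplitude a0 is amplified by g once per round, then dampened by d for
# each L1 block of buffer period. All parameters are hypothetical.
def amplitude_after_rounds(a0: float, g: float, d: float,
                           buffer_blocks: int, rounds: int) -> float:
    a = a0
    for _ in range(rounds):
        a = a * g * d ** buffer_blocks  # amplify once, then dampen over the buffer
    return a

# With a short buffer the shock compounds; a longer buffer dampens it.
short = amplitude_after_rounds(1.0, g=1.5, d=0.9, buffer_blocks=2, rounds=10)
long_ = amplitude_after_rounds(1.0, g=1.5, d=0.9, buffer_blocks=6, rounds=10)
print(short, long_)  # short buffer grows the shock, long buffer shrinks it
```

Whether a shock dies out depends on whether g·d^n is below 1, so the buffer length n is precisely the timing 'lever' described above.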

Finally, it should be noted that although slashing is a mechanism that can be used as a punishment strategy to ostensibly deter censorship, it should be assessed relative to the probability that censorship is unintentional, so that slashing does not become a disincentive to participation. This is a complex issue as censorship would not only have to be detected as an event, but its ‘type’ would need to be classified with a high degree of confidence prior to triggering a potential slashing event. Classification methods exist which are designed to help minimize making “Type I” or “Type II” probabilistic errors, although they cannot eliminate the risk (#3 above) entirely.

C. Liveness

To discuss liveness, Aztec can be seen as a state machine with some realtime aspects to state transition triggers. Aztec’s realtime clock is block production by its L1, the Ethereum network; so, this discussion of the realtime aspects of the triggering conditions will be in terms of Ethereum network block production rate. With this definition of liveness, any discussion of L1 liveness is out of scope.

Under the Fernet-Sidecar proposal pair for Sequencers and Provers, respectively, the sequence of state transitions required for a live Aztec network are shown by the “Aztec Network block production lifecycle” diagram in the RfC. These states are listed in the following table.

Table I: Fernet-Sidecar protocol states. Each protocol state is listed by unique name along with a brief description. State exit triggers are listed along with next state successful (i.e., “happy-path”) and alternate transition modes. Triggers highlighted in square brackets are optional. These are discussed below under “Fixed vs. Variable Block Time”.

| Ordinal | State Name | State Description | Transition Trigger | Success Modes | Alternate Modes |
| --- | --- | --- | --- | --- | --- |
| 1 | Proposal | Any Sequencer can propose a next block by posting a CtS^1^ to L1 | Elapsing of n_1 L1 blocks [==OR p_1 proposals posted==] | At least one CtS is posted \rightarrow Proposal Selection | No CtS is posted \rightarrow Empty Mode |
| 2 | Proposal Selection | Proposal CtS Scores^2^ are evaluated; the Proposal with the highest CtS Score (b_s) is selected | Immediate | \rightarrow Prover Commitment | None |
| 3 | Prover Commitment | The Sequencer of b_s (s_s) chooses a Prover (p_s) who posts a CtP^3^ to L1 | Elapsing of n_3 L1 blocks [==OR CtP posted==] | CtP posted \rightarrow Reveal | CtP not posted \rightarrow Race Mode |
| 4 | Reveal | s_s posts the full contents of b_s to L1 | Elapsing of n_4 L1 blocks [==OR full b_s posted==] | Full b_s posted \rightarrow Proving | Full b_s not posted \rightarrow Race Mode |
| 5 | Proving | p_s computes proof of b_s | Elapsing of n_5 L1 blocks [==OR proof of b_s posted==] | \rightarrow Proof Submission | None |
| 6 | Proof Submission | p_s posts proof of b_s to L1 | Immediate | Proof posted \rightarrow Finalized | Proof not posted \rightarrow Empty Mode |
| 7 | Race Mode | Anyone can post a block with proof to L1 | First block posted | \rightarrow Finalized | None |
| 8 | Empty Mode | An empty block is posted to L1 | Immediate | \rightarrow Finalized | None |
| 9 | Finalized | Vulnerability of the proof tx to L1 reorg is reduced to an acceptable minimum | Posted block and proof reach threshold L1 block height (h_t) | \rightarrow Proposal | None |

^1 A CtS (Claim to Sequence) is a submission for a proposed next Aztec block that consists at least of the header of the proposed block and a random number generated by the Sequencer using a Verifiable Random Function (VRF).
^2 CtS Score is the result of a function that returns a unique ordinal based on a CtS.
^3 A CtP (Commitment to Prove) is a promise by a Prover to generate the necessary proofs for a Sequencer’s proposed block. A CtP identifies the Prover and contains a signature from the sequencer and a posted bond that can be slashed if the proposed block is not finalized.

Under normal operation, aka “the happy path”, the Aztec network will transition through States 1-6 in order. At the end of State 6, both the full contents of a new Aztec network block and its proof will have been verified and successfully posted to the Ethereum blockchain. If any of the failure modes listed for States 1-6 occurs, the happy path is exited by a transition to either State 7, Race Mode, or State 8, Empty Mode. In Race Mode, anyone can submit a block/proof pair; the first valid pair received is posted to L1 as the next block. In Empty Mode, the Aztec block slot is left empty.
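The states and triggers of Table I can be sketched as a small state machine; the state names follow the table, while the `event_ok` plumbing is a hypothetical stand-in for the actual on-chain triggers (CtS/CtP/proof postings and timeouts):

```python
# Minimal sketch of the Table I protocol states. The event plumbing is
# hypothetical, not the L1 contract interface.
from enum import Enum, auto

class State(Enum):
    PROPOSAL = auto()
    PROPOSAL_SELECTION = auto()
    PROVER_COMMITMENT = auto()
    REVEAL = auto()
    PROVING = auto()
    PROOF_SUBMISSION = auto()
    RACE_MODE = auto()
    EMPTY_MODE = auto()
    FINALIZED = auto()

def next_state(state: State, event_ok: bool) -> State:
    """Advance one transition. `event_ok` means the state's success
    condition held when its trigger fired (a CtS/CtP/proof was posted).
    States whose Alternate Mode is "None" map both branches to the
    success state."""
    table = {
        State.PROPOSAL:           (State.PROPOSAL_SELECTION, State.EMPTY_MODE),
        State.PROPOSAL_SELECTION: (State.PROVER_COMMITMENT,  State.PROVER_COMMITMENT),
        State.PROVER_COMMITMENT:  (State.REVEAL,             State.RACE_MODE),
        State.REVEAL:             (State.PROVING,            State.RACE_MODE),
        State.PROVING:            (State.PROOF_SUBMISSION,   State.PROOF_SUBMISSION),
        State.PROOF_SUBMISSION:   (State.FINALIZED,          State.EMPTY_MODE),
        State.RACE_MODE:          (State.FINALIZED,          State.FINALIZED),
        State.EMPTY_MODE:         (State.FINALIZED,          State.FINALIZED),
        State.FINALIZED:          (State.PROPOSAL,           State.PROPOSAL),  # cycle repeats
    }
    success, alternate = table[state]
    return success if event_ok else alternate

# Happy path: all success conditions hold, so States 1-6 lead to Finalized.
s = State.PROPOSAL
for _ in range(6):
    s = next_state(s, event_ok=True)
print(s)  # State.FINALIZED
```

Note that Race Mode has no alternate branch here, matching Table I as currently proposed; the n_7 timeout discussed in the text would add an Empty Mode branch.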

As currently proposed, Race Mode only ends when a block and its proof are submitted to L1. Since the case of no block/proof pair being submitted cannot be ruled out, Race Mode would benefit from specification of a timeout trigger with a duration of n_7. Should the Race Mode timeout expire, the pending Aztec block slot goes empty. If a timeout transition trigger is added to Race Mode, then the row for Ordinal 7 in Table I would change as follows:

  • Transition Trigger would change to “Elapse of n_7 L1 blocks [OR Full Block and Proof posted]”
  • Success Mode would change to “Full Block and Proof posted → Finalize”
  • Alternate Mode would change to “Full Block and Proof not posted → Empty Mode”.
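For concreteness, the transitions for States 6-9, including the proposed Race Mode timeout, can be sketched as follows. The state names follow Table I, but the event strings and the value of n_7 are illustrative:

```python
# Illustrative transition logic for States 6-9 of Table I, including the
# proposed Race Mode timeout n_7. Event names are hypothetical.

N7_TIMEOUT = 32  # example Race Mode timeout, in L1 blocks

def next_state(state: str, event: str, elapsed_l1_blocks: int = 0) -> str:
    if state == "ProofSubmission":                       # Ordinal 6
        return "Finalized" if event == "proof_posted" else "EmptyMode"
    if state == "RaceMode":                              # Ordinal 7
        if event == "full_block_and_proof_posted":
            return "Finalize"
        if elapsed_l1_blocks >= N7_TIMEOUT:              # proposed timeout
            return "EmptyMode"
        return "RaceMode"
    if state == "EmptyMode":                             # Ordinal 8
        return "Finalize"                                # immediate
    if state == "Finalized":                             # Ordinal 9
        return "Proposal" if event == "reached_h_t" else "Finalized"
    raise ValueError(f"unknown state: {state}")
```

Without the `elapsed_l1_blocks` check, Race Mode can only be left by a submission event, which is exactly the liveness gap the proposed timeout closes.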

Fixed vs. Variable Block Time

Table I lists default, time-based triggers for all persistent states. The timeout delay is parameterized by the symbol n_i, where i is the state ordinal. Timeouts are in units of L1 blocks.

Optional, event-based triggers based on protocol participant behavior are shown in Table I (highlighted and in square brackets); they are combined with the time-based triggers by a simple OR. Such event-based triggers switch Aztec’s block-building protocol from a fixed block time to a variable block time, with the time-based triggers acting as timeouts that set a maximum block time.
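The OR combination described above can be sketched as follows; the function names and the fixed/variable switch are illustrative, not part of the protocol:

```python
# Sketch: combining a time-based timeout with an optional event-based
# trigger via OR, as described above. With the event trigger disabled
# the protocol has a fixed block time of n_i L1 blocks; enabling it
# gives a variable block time capped at n_i.

def should_transition(elapsed_l1_blocks: int,
                      n_i: int,
                      event_fired: bool,
                      variable_block_time: bool) -> bool:
    timeout = elapsed_l1_blocks >= n_i           # time-based trigger
    if not variable_block_time:
        return timeout                           # fixed block time
    return timeout or event_fired                # variable, capped at n_i
```

Note that the timeout is evaluated in both modes, which is why the time-based trigger sets a *maximum* block time rather than being replaced by the event.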

Technically, Aztec can be implemented so that switching between fixed and variable block timing is isolated to the specific code blocks for state transition triggers. Depending on the expected rate of change, or the desired resistance to change, of the triggering logic, changes can be governed as more static, hard-coded changes to the protocol, or as more dynamic configuration changes to the protocol’s trigger parameters. Potential transition trigger parameters for each state are listed in Table I. For a maximally dynamic system, transition trigger parameters could be adjusted algorithmically in response to:

  • observed environmental factors such as
    • L1 gas fees,
    • time to last L1 missed slot,
    • L1 missed slot frequency,
    • L1 liveness (e.g. as defined here),
    • etc.
  • and observed Aztec Network (L2) factors such as
    • L2 transaction volume,
    • time to last L2 race mode block,
    • L2 race mode block frequency,
    • time to last L2 empty block,
    • L2 empty block frequency,
    • etc.

Note that this list is meant to be suggestive, not complete. Any such parameter should first be evaluated for satisfaction of requirements, system safety, and system-goal trade-offs against other parameters, ideally using a simulation framework such as cadCAD (complex adaptive dynamics Computer-Aided Design).
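As a purely hypothetical illustration of such algorithmic adjustment, a bounded feedback rule might nudge a timeout parameter in response to one observed L2 factor, race-mode block frequency. Every threshold, step size, and bound below is invented and would need evaluation in simulation before use:

```python
# Illustrative only: a bounded feedback rule that nudges a trigger
# parameter n_i based on observed L2 race-mode block frequency.
# Thresholds, step size, and bounds are invented for the example.

def adjust_n_i(n_i: int, race_mode_freq: float,
               target: float = 0.01, step: int = 1,
               n_min: int = 8, n_max: int = 128) -> int:
    if race_mode_freq > target:        # too many race-mode blocks:
        n_i += step                    # allow more time per state
    elif race_mode_freq < target / 2:  # comfortably below target:
        n_i -= step                    # tighten block time
    return max(n_min, min(n_max, n_i)) # keep within safe bounds
```

Even this toy rule shows why simulation matters: the interaction between the target frequency and the bounds determines whether the system converges or oscillates.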

However, such dynamism is ill-advised as a starting point. Understanding, built from model simulations of parameter changes that reflect real network experience (and of their mitigating effects on the potential risks identified in the sections above), is a necessary precursor. Short of that, this level of dynamism adds complexity without an apparent purpose.

As mentioned in our prior report, because Aztec relies on its L1, Ethereum, as the source of truth for propagating state updates through the Aztec network, Aztec block production should operate synchronously with Ethereum. Fernet-Sidecar does operate synchronously with Ethereum, so the Ethereum network contains a sufficient log of Aztec block production to recover from an Aztec network interruption or fault, assuming copies of the local mempool are persisted.

Similarly, Aztec can achieve optimistic L1-reorg resilience by local rollback and replay of a single Aztec block, assuming copies of local private state are persisted. Pessimistic L1-reorg resilience can be achieved by updating Aztec network state with a lag of one Aztec block.
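The two reorg-resilience strategies can be sketched with a toy node model; this is a hedged illustration of the idea, not Aztec’s implementation:

```python
# Toy model of the two L1-reorg strategies described above.
# Optimistic: apply blocks immediately, then roll back and replay one
# Aztec block on an L1 reorg. Pessimistic: apply state with a lag of
# one Aztec block, so a single-block L1 reorg never touches applied state.
from typing import Optional

class ToyNode:
    def __init__(self, pessimistic: bool):
        self.pessimistic = pessimistic
        self.applied: list = []        # Aztec blocks reflected in local state
        self.pending: Optional[str] = None  # newest block, held back if lagging

    def on_l1_block(self, aztec_block: str) -> None:
        if self.pessimistic:
            if self.pending is not None:
                self.applied.append(self.pending)  # apply with 1-block lag
            self.pending = aztec_block
        else:
            self.applied.append(aztec_block)       # apply immediately

    def on_l1_reorg(self, replacement: Optional[str]) -> None:
        if self.pessimistic:
            self.pending = replacement             # nothing applied yet
        else:
            self.applied.pop()                     # roll back one block
            if replacement is not None:
                self.applied.append(replacement)   # replay
```

The trade-off is visible in the model: the pessimistic node never mutates applied state on a reorg, at the cost of confirming every block one Aztec block later.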

Sidecar enables scaling blocks to available compute by allowing Sequencers to negotiate deals with Prover networks. It must be noted, however, that a limiting resource for Aztec block production is likely to be available L1 blob storage (given as-yet-unclear blob market competition). Sidecar also incentivizes limiting block size to available L1 blob storage, as Provers risk having their proving bond slashed if L1 builders are unable (or unwilling) to fit the required transactions into L1 blocks.

Finally, Sidecar as a Prover selection protocol provides ample flexibility for future cryptography improvements by not enshrining any particular implementation. Instead, Provers would simply avoid posting a commitment to prove if any such improvement rendered them unable to complete their job. This allows for flexibility within the Prover pool, selecting only from those who feel both economically and technologically able to participate.

IV. Final Thoughts & Concluding Remarks

In this response we provide perspectives and identify features of the Fernet-Sidecar builder process that may be worth considering during the ongoing design and implementation iterations of the workflow. Some perspectives indicate that one or more features carry associated risks that can be mitigated through simulation efforts already in progress, while others might require ongoing monitoring and risk management efforts throughout the Aztec network lifecycle.

Overall, we see continuous progress on research and design efforts surrounding Sequencer and Prover coordination. The described workflow addresses its specified requirements and provides an important foundation for further research. Although such a novel design space naturally confers unique research, design, and engineering challenges, addressing these challenges benefits both Aztec and the larger L2 community; some of them are already enumerated toward the end of this request for comments. We welcome the opportunity to contribute and collaborate on many of these challenges where possible, and welcome any feedback on this (now extensive!) forum post.


This is AJ, founder of Radius. We are building a shared sequencing layer for rollups, using an encrypted mempool and ZK to provide MEV and censorship resistance, synchronous composability, and fast pre-confirmations for users on the rollups.

I have reviewed Aztec’s Fernet sequencer selection proposal and Sidecar prover coordination protocol, and they seem to meet the initially set requirements well. The way ideas were proposed and actively discussed with the community was very impressive. Through this comment, I would like to propose co-research between Radius and Aztec.

The shared sequencing layer that Radius is designing has a structure similar to Aztec’s Fernet + Sidecar, and it can be seen as generalizing Aztec’s structure into a stack for other ZK rollups. From a rollup’s perspective, there are several components to choose when building a ZK rollup in a modular way: Sequencer, Builder, Prover, DA, and Settlement layer. Among these options, Aztec has made a particular choice, and many ZK rollups developed in the future will face the same decision. Radius intends to simplify these concerns to smoothly advance the roadmap for rollup-centric Ethereum.

Co-research Areas

  • We are also conducting research on cost efficiency through out-of-protocol proving, similar to Sidecar. (Since proof-related costs ultimately propagate to users, optimization is essential.) We are exploring an architecture similar to the mentioned Proof-boost.
  • We are considering an encrypted mempool for protecting users’ transactions. (This keeps a user’s transaction private until its block inclusion is guaranteed, using delay encryption.) We would like to discuss whether Aztec’s private tx method could also protect rollup users’ transactions.
  • For cost reduction, we are researching proof aggregation. In this case, the Prover does not transmit the proof directly to the L1 contract; instead, the Sequencer stores proofs and proceeds with validation through proof aggregation at an appropriate time.
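The sequencer-led aggregation flow in the last bullet could be sketched as follows; the `aggregate()` function is a placeholder, not a real proof-aggregation scheme, and all names are hypothetical:

```python
# Sketch of sequencer-led proof aggregation: provers hand proofs to the
# sequencer, which batches them and submits a single aggregated proof
# to the L1 contract "at an appropriate time" (here, a fixed batch size).

def aggregate(proofs: list) -> bytes:
    # Placeholder: a real scheme would produce one succinct proof that
    # attests to all inputs; we just concatenate for illustration.
    return b"".join(proofs)

class AggregatingSequencer:
    def __init__(self, batch_size: int):
        self.batch_size = batch_size
        self.buffer: list = []      # proofs received but not yet submitted
        self.submitted: list = []   # aggregated proofs sent to L1

    def receive_proof(self, proof: bytes) -> None:
        self.buffer.append(proof)
        if len(self.buffer) >= self.batch_size:   # "appropriate time"
            self.submitted.append(aggregate(self.buffer))
            self.buffer.clear()
```

The design question raised in the bullet is visible here: proofs sit in the sequencer’s buffer until aggregation, so L1 validation is deferred in exchange for amortized verification cost.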


  • When considering economic sustainability, generating profit through benign MEV extraction seems important. Without it, we might need to increase user fees to ensure continuous Sequencer participation, or rely on native token rewards, which could negatively impact users due to inflation.
  • I’m curious if there has been consideration for a structure where the Sequencer receives and verifies Proof from the Prover directly and then submits it to the L1 contract. From the protocol’s perspective, since the Prover is a third party, it seems appropriate for the Sequencer to manage and operate the entire process. Additionally, various cost-saving measures, such as Proof aggregation, could be applied under the Sequencer’s leadership.
  • If a Sequencer is selected through Fernet, I’m curious about when users can receive pre-confirmation. Even if a user receives pre-confirmation from a specific sequencer, there might be a higher-ranked sequencer later.
    • Users could calculate sequencers’ rankings themselves, but this would be an overhead from the user’s perspective.
  • I’m wondering about the reason for assigning block rewards to Provers in Proof-boost. In Radius’s initial design, Provers are considered a kind of commodity, so the Sequencer pays the Prover for the service of generating Proof.
  • Allowing multiple Sequencers to propose seems to incur wasted costs. It might be better to assign responsibility for that epoch to the highest-ranked sequencer and impose a penalty if they fail to propose in time. Is there a reason for considering multiple Sequencers’ proposals?
  • Even with random leader election, the possibility of censorship and harmful MEV for public transactions seems high. Disempowering the selected sequencer appears necessary; what do you think?
    • Of course, using private txs would solve the issue, but for public txs as well, utilizing a concept like an encrypted mempool could make the system more robust.

Great write-up! Could Aztec provide additional details on the hardware requirements for each proof type? This is crucial, as the construction of proof trees will likely be distributed across multiple machines.
Regarding access to state: considering the size of Aztec’s state that provers need to access to generate the base rollup proof, one of the following measures could be implemented:

  1. Have provers run Aztec full nodes, which could be a suitable short-term solution.
  2. Deploy a prover with state built-in, but this requires new logic and may not be optimal for the long term.
  3. Implement generic state storage in the prover node, which will be replicated across all nodes. This appears to be the most viable long-term solution.
Concerning the prover commitment, who is expected to provide it? Given that it is a single commitment for the entire proof tree, and that the actual proving can and probably should be distributed across many provers, it seems logical for the sequencer to provide the commitment. This would allow the sequencer to retain flexibility in distributing the proof tree construction. Is this the intended approach? Answers would be appreciated, thanks.

What is the expectation about who will post the prover commitment? It would seem that the sequencer would be the natural provider of the commitment, as it is a single commitment for the full proof tree and the actual proving may and presumably should be split over multiple provers, while still allowing the sequencer to distribute the proof tree creation in a flexible way. Is this the main concept?


This is Shivanshu from the Succinct team. We’ve been following the discussions on the Aztec forum and are aligned with the direction Aztec is taking with the Sidecar prover coordination proposal. We believe giving sequencers the authority to submit the proofs for their proposed blocks has several benefits:

  • Sequencers can leverage the open market to find the best proof prices. Further, through strategic third-party agreements, they have the potential to share in the prover profits
  • Vertically integrated sequencers have the flexibility to generate the proofs themselves, increasing their revenues
  • Aztec protocol maintains neutrality as it doesn’t have to enshrine any particular proof system
  • Lower costs for Aztec protocol as it doesn’t have to bootstrap a rollup-specific prover network

Given these benefits, we believe most ZK rollups will eventually choose to outsource their proving to external prover marketplaces. This is why we are building the Succinct Prover Network, an open marketplace that will democratize access to ZK proofs. The Succinct Network is compatible with Aztec’s proving needs. It provides:

  • High Liveness: The network employs an auction mechanism to match proof requests with a highly available decentralized network of provers.
  • Competitive Proof Pricing: Free-market competition between specialized hardware providers leads to best-in-class proof pricing for rollups, without needing to lock in any specific vendor. Aggregating demand across different rollups smooths out demand for provers, who achieve better hardware utilization and can therefore offer cheaper proofs.
  • Native interoperability with other ZK rollups: The Succinct Network can aggregate proofs across different rollups, facilitating robust and fast cross-rollup interoperability without trusted intermediaries.

We are excited about the potential for collaborating with the Aztec team and are keen to contribute to shaping the future of privacy together. We would love to get involved and spec out the integration & design details.


How are block rewards currently distributed in the Aztec design?
How will sequencers in future versions of Aztec receive rewards based on their staked amount?
What is the purpose of conducting economic analysis and modeling with BlockScience?
Thanks a lot!

The hardware requirement is a bit of a stretch, I reckon.

Thanks for the article, Cooper! Very cool and informative!
I had a few questions while researching it:
What are the different phases involved in block production in Aztec?
What is the role of the Aztec L1 Contract in the verification and validation of blocks?
How do the confirmation rules for transaction execution differ at various stages in the block production lifecycle in Aztec?

What are the responsibilities of Aztec Sequencers?
Can Aztec Provers be non-full nodes? If yes, under what conditions?
How are historical copies of the Aztec state stored in immutable and decentralized storage providers?

- Could you provide more details on how the Proof-boost mechanism operates and its potential impact on sequencers negotiating proof costs?

- How does the proposed system address the biasability issue in RANDAO, and are there plans for potential improvements?
- Could you elaborate on the considerations for constraining RANDAO values and ensuring stability throughout a block?

Thanks for the RfC, Cooper!
I have a few questions about voting. Hope you can clarify!
How will the voting process for upgrades in Aztec be randomized and distributed among sequencers?
Will there be any consequences for sequencers who choose to abstain from voting on upgrades?