Request for Comments: Aztec Sequencer Selection and Prover Coordination Protocols

Confirmation Rules:

  1. How does the proposed confirmation rule system provide information about various stages of transaction confirmation for users and developers?
  2. Are alternative confirmation rule options considered, and what factors influence these considerations?

MEV and Proof-boost:

  1. How does MEV-boost address the issue of Maximal Extractable Value in the Aztec Network?
  2. Can you provide additional details regarding the role and functionality of Proof-boost, especially in negotiations over the cost of specific proofs?

What is the proposed ratio for the distribution of block rewards in the Aztec network?
How are the block rewards allocated to the sequencer, prover, and entity submitting the rollup?

How does the mitigation of MEV work in the Aztec network?
What types of transactions are affected by MEV in the Aztec network?


The hardware requirements are extreme.


Hey, Cooper!
I think you’ve thought this through very well.
Decent system requirements, by the way. I see they are not to everyone’s liking :slight_smile:
Still, I have a few questions:
How do sequencers determine the ranking of block proposals?
What happens if a prover fails to submit a commitment or if the block is not finalized?
How does the backup phase of “based-rollup” mode work?


Block Production RFC Comments (Response)

Hey everyone! Thank you so much for the thoughtful comments, questions, and suggestions. Thank you as well for your patience with our response this week; there were many things that warranted significant discussion and consideration. Your participation truly makes a difference in the design of the eventual Aztec Network.

We received two different “categories” or types of responses.

  1. Clarifying questions
  2. Suggested changes

In this response we will attempt to answer these clarifying questions. Where similar questions or comments were submitted, we have done our best to consolidate them and provide a single response for clarity and consistency.

There were a variety of suggested changes including 1) “becoming a based rollup” - which I would say has since been rebranded to using “The Ethereum Shared Sequencer” (ESS) - and generally 2) trying to improve the speed of block production.

We are currently debating paths forward in these designs and attempting to get to a position of more clearly understanding the tradeoffs, including various conversations with external parties - such as taking an active role in the newly created Ethereum Shared Sequencer & Preconfirmations Community Call. If you feel strongly about Aztec using Ethereum as a shared sequencer, please get in touch! We’d love to hear why.

The Aztec Labs team will share more information to this research forum once we are all on the same page with respect to the suggested changes and their viability. This should indicate to the community that we are taking the suggestions and recent interest in using Ethereum as a shared sequencer seriously.

Thank you to Joe, Lasse, and Santiago (Palla) @ Aztec Labs for their review on these responses.

Clarifying questions

  1. What are the hardware requirements for provers, and for a given type of proof? @ norbert, @ efosa919, @ Abdulruphai

As noted in the original request for comments, the specific hardware and networking requirements are largely to be determined after completion and benchmarks of the latest “HONK” proving system from the Aztec Labs team.

:computer:   Minimum    Recommended   Ideal
CPU          16 cores   64 cores      128 cores
Networking   64 Mb/s    256 Mb/s      1 Gb/s
Storage      1 TB       3 TB          5 TB
RAM          16 GB      64 GB         128 GB

These hardware requirements are current best guesses and subject to change. It is also possible that the initial versions of Honk will release with higher hardware requirements (closer to the ideal hardware articulated here) than the recommended or minimum requirements expected after some known performance optimizations.

Obviously a bigger machine is going to be a safer assumption to target - I believe that goes without saying! Ideal is as big as your profit margins or expectations allow. Currently it's expected, for both simplicity's and practicality's sake, that these hardware requirements apply to all "levels" of proofs that must be generated, i.e. base, merge, and root rollup proofs.

It is also important to note another thing that may have gotten lost in the original RFC: these machines are expected to operate in parallel, together, to produce Aztec's rollup proof. The quantity needed is dependent on the size of a block, but we estimate that you may need hundreds to maybe even thousands of these machines to prove each Aztec rollup. Please see below for insight as to how this parallel computation could be distributed.

  1. Can you clarify if and how the state could be split up into smaller chunks for provers? @ teemu

The short answer is yes, it's possible, and it is somewhat "up to proving marketplaces" how they wish to optimize within the tradeoff space. Which may not be a great answer for you :smile: There's a broad spectrum of options for how to break up the state for different "proving network topologies". In various cases, we believe that having the lowest hardware requirements possible and maximally parallelizing the proving will result in the most decentralized network with the lowest barrier to entry. It is also possible that the proving network could be run entirely on a single, giant machine to reduce networking costs (or other costs).

For instance, here is the design from the original Prover Coordination RFP. It articulates a small rollup of 16 transactions being proven by 4 individual machines (or staked provers), with the sequencer (proposer) acting as the final aggregator, compressing the other provers' proofs into the final two merge & root rollup proofs.

And here is another (naive, but hopefully illustrative) model of the same 16-transaction rollup that would use fewer individual participants (in this case 2-3), but larger hardware requirements, with a dedicated "aggregator" or coordinator. In this model it's expected that, due to the already parallel design, these hardware requirements would scale proportionately to the amount of work required (maybe minus some networking considerations).

We currently expect blocks to contain somewhere between 120 transactions in a small block configuration and over 1,000 transactions per block. This should give you some relative insight as to the height of the tree that would need to be proven and how many machines may be required. Ultimately this is dependent on network usage and interest, and other constraints such as "gas" limitations or upper bounds imposed elsewhere on the network architecture. The proof tree as articulated is quite scalable. It is also possible that the network may not define an upper bound on the block, and instead "let the market figure out" the maximum size of a block it is able to propose, prove, and land on L1 within the allocated time frame. In theory this is limited by demand, compute across various ecosystem participants, and data availability constraints. The cost to verify the rollup itself doesn't change with block size.
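As a rough illustration of how block size maps to proof-tree shape, here is a sketch in Python. It assumes, purely hypothetically, a binary tree with one base proof per pair of transactions and a single root rollup proof at the top; Aztec's real circuit arities and layer names may differ.

```python
import math

def proof_tree_stats(num_txs: int) -> dict:
    """Estimate the shape of a (hypothetical) binary rollup proof tree.

    Assumes one base proof per pair of transactions, binary merge
    proofs above that, and a single root rollup proof at the top.
    """
    base_proofs = math.ceil(num_txs / 2)  # each base proof covers 2 txs
    # Height of the binary tree above the base layer.
    tree_height = math.ceil(math.log2(base_proofs)) if base_proofs > 1 else 0
    # A full binary tree over `base_proofs` leaves has base_proofs - 1
    # internal nodes; the topmost one is the root rollup, the rest merges.
    merge_proofs = max(base_proofs - 2, 0)
    return {
        "base_proofs": base_proofs,
        "merge_proofs": merge_proofs,
        "tree_height": tree_height,
        "total_proofs": base_proofs + merge_proofs + 1,  # + root rollup
    }

for n in (16, 120, 1024):
    print(n, proof_tree_stats(n))
```

Under these assumptions a 16-transaction rollup needs 15 proofs over a height-3 tree, while a 1,024-transaction block needs roughly a thousand proofs, which is where the "hundreds to thousands of machines" estimate comes from if each proof runs on its own machine.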

The Aztec Labs Engineering & Cryptography teams are actively working on benchmarks that will be published alongside documentation to help guide integration and identify where on the tradeoff space individual projects may choose to land. I would expect this documentation to take a few months, rather than weeks, unfortunately - potentially targeting Q2 2024. Alongside this it should come with performance requirements to ensure that the specific configuration a proving marketplace chooses to implement meets the needs of the rest of Aztec's network architecture.

  1. How will prover-boost work and what is the impact on economics? @ lomnasfar

Prover-boost is mostly left undefined at the moment. As articulated in the original request for comments, it will work similarly to how mev-boost operates, facilitating an opt-in auction by which sequencers can effectively "sell" the rights to prove a given block in exchange for a portion of the block rewards, and potentially out-of-protocol tips or other business arrangements. Generally the economics of prover-boost are out of protocol, since it's "unenshrined" and opt-in.

If you’re interested in helping us design prover-boost, we currently aim to start a community research and design effort within the next few weeks (:wave: @david/radius, gevulot, succinct, and many others!). Many have also specifically expressed interest, and we wish to ensure this is a general-purpose set of open source software that is expected to be helpful to other ecosystems, or even other zero knowledge rollups.

  1. What is the economics workstream with BlockScience covering? How does this document articulate this? @ boreta5

The plan is to formally define and structure the “economic marketplace” of Aztec including demand from the various participants including: sequencers, provers, full nodes, archival nodes, application developers, individual users who are often paying for transactions via fees, etc.

The document doesn’t aim to articulate this workstream, but rather to indicate to the Aztec Community and those reading this research forum that there is dedicated, ongoing work in the direction of network economics, and that is why there may be limited information currently available about Aztec’s expected economics. We’re taking it seriously!

  1. Is there adequate incentive to run an archival node? @ bberry259

Ultimately, the current economics are similar to Ethereum in the sense that they do not particularly try to incentivize long term storage. This is due to the difficulty of proving or verifying that someone is actually running an archival node &/or storing the data consistently, among other challenges. It’s generally assumed that there are participants sufficiently incentivized outside of the protocol to ensure that all relevant copies of historical state are kept - for instance, application developers or RPC providers. It is also quite possible, or even likely, that some type of incentive scheme is put into place to encourage the use of long term storage networks, such as Filecoin or Arweave or {other viable long term storage provider}, to further guarantee historical data is available. This hypothetical incentive scheme would likely be out of protocol, via a grants program or something similar.

  1. If every sequencer is responsible for proving (or outsourcing the proof of) the block they propose, and if this proving is sequential, considering that the proving time is likely to be longer than block time, then can’t the time to “hard finality” increase superlinearly with block height? With the number of blocks growing, won’t we end up with a system that becomes slower and slower in terms of finalization? Maybe I’m missing something. @ sam from node guardians

In Aztec’s current designs as articulated, we would not allow the “proposal” phase to begin until the previous slot’s proofs are finalized. It is quite strict in that sense. Therefore it is not the case that finality times are going to increase or change. The current designs aim to produce a block every N minutes, e.g. 6-10 minutes (!), depending on some relationship between sequencer/prover execution speed and the size of a given block. You would get “hard finality” on the same timeline as inclusion confirmations - slower than others with respect to inclusion, but quite a bit faster with respect to finality.

We should be very clear that this is quite a bit “slower” than other block production schemes used in (centralized) L2s today, specifically with respect to transaction confirmations, but it should be able to target a relatively industry-standard amount of TPS (on par with other L2s’ current usage/demands). It specifically trades off speed/performance for safety, in the sense that it requires waiting for a block to be proven and validated before proposing the next block. Given the novelty of Aztec’s cryptography, we believe this could be a valuable tradeoff to make in the early days of a fully decentralized network. Block proposals are linear and always correlated to “hard finality”, and this hard finality is expected to be ~10 minutes (plus waiting for the verification to be deep enough in Ethereum’s history, depending on the use case), compared to other rollups which offer 24 hours or even 7 days in many cases.

There are some ongoing conversations on the suggested changes, and therefore this may change. Notably, if Aztec adopts the suggested changes of using the Ethereum Shared Sequencer (i.e. becoming a based rollup) &/or a “faster block production schema”, then this is certainly an issue that needs consideration.

If this doesn’t make sense or answer your specific questions, I’d love to get on a call to discuss!

  1. How does the generation of proofs work? If it involves competitive proving, then I guess the most efficient prover always wins? @ sam

At the moment, the generation of proofs happens “however the sequencer chooses” – we expect in practice that “well resourced” sequencers may choose to vertically integrate, and non-well-resourced sequencers will choose to auction off the rights to prove their blocks via what we would call “prover-boost”. In this case, I think that centralization around the fastest prover is not necessarily a concern. I think the concern would be specifically aligning in the tradeoff space of “fast enough to produce proofs within the required timeframe” and cost – therefore, it should not purely be a hardware game, and should hopefully result in a rich economic marketplace of various competing actors.

For further clarity & transparency, the Aztec Labs team is already in conversations with various networks that wish to participate in this type of marketplace, many of whom have responded publicly to this thread (kudos to them!), each with various different approaches.

  1. What is the expectation about who will post the prover commitment? It would seem that the sequencer would be the natural provider of the commitment, as it is a single commitment for the full proof tree and the actual proving may and presumably should be split over multiple provers, while still allowing the sequencer to distribute the proof tree creation in a flexible way. Is this the main concept? @ Efosa919

This is a good question. If the sequencer puts up the prover commitment, then the sequencer is “on the hook” for producing the proofs, in the case of a slashing condition. This means that the sequencer, if posting the individual proofs as jobs on a third-party proving marketplace, may be subject to what we understand as “proof withholding attacks” – e.g. a single prover, or a sufficient subset of provers, not doing their “jobs” may prevent the block from finalizing within a sufficient amount of time. This may be the case if they are acting byzantine in some capacity, or due to internet or hardware failures. Therefore those putting up prover commitments are inherently taking on this risk in exchange for the MEV and/or block rewards associated with the block.

In practice, I expect the prover commitment to be supplied by either a smart contract from a decentralized proving marketplace, a vertically integrated sequencer-prover who has sufficient confidence in their own infrastructure to self-prove, or alternatively a relayer/builder who is sophisticated and well capitalized enough to take on the slashing risk.
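The commitment-plus-slashing logic described above could be sketched roughly as follows. The field names, slot-based deadline, and settlement rule are all assumptions for illustration; the actual slashing conditions are not yet specified.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class ProverCommitment:
    committer: str       # whoever is "on the hook" for the proofs
    bond: int            # amount slashable if the proof never lands
    deadline_slot: int   # slot by which the rollup proof must be finalized

def settle(commitment: ProverCommitment, proof_landed: bool,
           current_slot: int) -> Tuple[str, int]:
    """Hypothetical settlement of a prover commitment.

    If the proof lands, the bond is returned. If the deadline passes
    without a proof (e.g. a withholding attack, or an internet or
    hardware failure), the bond is slashed. Otherwise it stays pending.
    """
    if proof_landed:
        return ("bond_returned", commitment.bond)
    if current_slot > commitment.deadline_slot:
        return ("slashed", commitment.bond)
    return ("pending", 0)
```

Note that this toy rule cannot tell byzantine withholding apart from an honest outage, which is exactly why taking on the commitment is a priced risk in the answer above.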

  1. How does the proposed system address the Biasability issue in RANDAO, and are there plans for potential improvements? Could you elaborate on the considerations for constraining RANDAO values? Are there cryptographic safeguards or economic disincentives to deter manipulation attempts [regarding the use of RANDAO]? @ solderpo2

RANDAO is commonly accepted within the Ethereum community as a “sufficiently good” source of randomness, depending on the use case. In this case, we believe it to be practically sufficient. In the future, if the Ethereum research community aligns on a better randomness beacon, I would expect that to be adopted in place of RANDAO. Further reading suggestions: 1, 2, 3. Some explicit consideration of the biasability of RANDAO and its impact on the Fernet sequencer selection algorithm can be found in BlockScience’s initial report analyzing Fernet vs B52.

  1. How will the voting process for upgrades in Aztec be randomized and distributed among sequencers? @ MarkCartooN

Further information on Aztec upgrades and governance will be published in the coming weeks-to-months. Please see the upgrade mechanism RFP to get a sense of some proposed designs!

  1. Will there be any consequences for sequencers who choose to abstain from voting on upgrades? @ MarkCartooN

No, in the currently articulated designs (which, as above, are expected to be shared soon) there are no consequences. In practice, abstaining from voting in the “happy path” of block production for sequencers participating in the Fernet sequencer selection protocol would be equivalent to voting “no, I do not wish to upgrade, and prefer to stay on this current version”. Depending on the final upgrade mechanism designs, it may be the case that sequencers who vote no are able to continue sequencing on the version of the network they prefer.

  1. What factors will be considered in determining the distribution ratio of block rewards among sequencers, provers, and entities submitting rollups to L1? @ dragonfyyyy

There will be a lot of factors to consider, but it should be reflective of the relative cost to operate, including hardware acquisition, networking, electricity, and generally the relative opportunity cost of providing similar services for other networks. Please stay tuned for the more detailed economics workstream in Q2 2024 for more information on reward mechanisms and how we arrived at them.

  1. How does the proposed confirmation rules framework enhance user experience in understanding transaction confirmation stages? Are there considerations for alternative confirmation rule frameworks, and what encouraged the proposed approach? @ gibonnnnnnn

The proposed confirmation rules are a guiding principle that articulates different phases of transaction finality, or expected finality, rather than a universal truth. We share this because the security of any transaction in any network is predicated on its confirmation mechanisms, and being explicit is favorable to ambiguity in almost any security circumstance. In practice, each application or user will have different preferences for how to share, display, and act on this information. For instance, a micropayments transfer of $0.01 likely can and should have different confirmation requirements than a transfer of $1,000,000. Transactions are guaranteed to be finalized after the rollup proofs have been verified by the Aztec L1 smart contracts and are sufficiently deep within the Ethereum blockchain to prevent reorgs.
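To make the $0.01-vs-$1,000,000 point concrete, an application-level policy might look like the following sketch. The stage names and dollar thresholds are invented for illustration; they are not the protocol's official confirmation stages.

```python
def required_confirmation(value_usd: float) -> str:
    """Hypothetical per-application policy mapping transferred value
    to the confirmation stage the app waits for before acting.

    Stage names and thresholds are illustrative only.
    """
    if value_usd < 1:
        return "seen_in_mempool"             # e.g. a $0.01 micropayment
    if value_usd < 10_000:
        return "included_in_proposed_block"
    if value_usd < 1_000_000:
        return "rollup_proof_verified_on_L1"
    return "proof_deep_in_ethereum_history"  # reorg-safe

print(required_confirmation(0.01))
print(required_confirmation(1_000_000))
```

The last two stages correspond to the guarantee stated above: verification by the Aztec L1 contracts, then sufficient depth in Ethereum's history to prevent reorgs.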

  1. Could you elaborate on the measures taken to ensure the integrity and confidentiality of proofs generated outside the core protocol? Additionally, how are potential vulnerabilities like front-running or denial-of-service attacks addressed within this setup?

Aztec’s privacy assumptions are generally predicated on client-side generated zero knowledge proofs. The prover coordination articulated here is aiming to compress all of these client side generated zero knowledge proofs into a single SNARK that can be easily verified on Ethereum. Please read more here about Aztec’s rollup circuits and the goal of this design.

Frontrunning is somewhat prevented due to the way all transactions originate from a private function context, but not prevented in the public side of Aztec’s execution environment with public functions. DoS attacks are generally the same for any similarly distributed blockchain architecture and should be treated with careful consideration.

If you’re referring to the question of “how do I access encrypted notes that I want to use” please see some of the latest thinking within the Aztec note discovery RFP.

Next steps

Here’s just a quick articulation of where this workstream will go from here.
Please let me know if there are specific questions!

  1. Ongoing dialogue in this forum
  2. Continued research on the suggested changes
    • Using the Ethereum Shared Sequencer (becoming a based rollup)
    • Providing a better (end) user experience
      • Stronger preconfirmations
      • Faster confirmation times
  3. Specifications of Aztec’s mev-boost & prover-boost equivalents
    • Note that these may be relevant &/or necessary regardless of the suggested changes, so we intend to move forward with their design irrespective of #2.
    • Those who have expressed interest in this forum or privately should hear soon about how to participate!
  4. Integrating specific economics into block production designs
    • Fees, burning, block rewards, etc.
  5. Build!
  6. Test!
  7. Ship!

If we missed your question, and you believe it’s important, please follow up!


I’m sorry but the CPU requirement is high for me and I would like to know why it’s like that. 3TB is not easy to buy :pensive: