From the Aztec team’s perspective, do you anticipate advantages to running some parts of the Sequencer Selection protocol’s chain-execution side endogenously in Noir? i.e., leveraging Noir’s private state and selective disclosure capabilities where they would be genuinely differentiating. (Though as I write this, I feel like this is a phase-2/follow-on topic; it would still be interesting to know if there is any early thinking on this point.)
How will the sequencer be selected? Is there a point-based scoring system?
Definitely, @zac-williamson should be submitting a proposal soon that does just this. There are some trade-offs.
- The sequencer selection process likely has a role to play in the network upgrade process, especially if you follow the ETH model of pure social consensus. This means that if the selection process uses Noir transactions and there were a bug, an exploit could be quite fatal for the network.
- It is harder to bootstrap institutions, e.g. Fireblocks, as they have to integrate the L2 on day 1, vs. just an ETH contract.
- However running the process on L2 has benefits:
- Easily add privacy for the sequencer, and for payment of any block rewards/fees
- Fees will be far cheaper than on L1
- The algorithm can be more computationally expensive, as it's wrapped in a ZKP.
Excellent question. We’re in the process of finalizing the selection criteria, which we plan to communicate when the submission period concludes and deliberations begin (so as not to bias submissions). It will probably not be a point-based system, but that is one of the options on the table. Another idea is ranked-choice voting by a number of well-educated experts across a variety of teams (eng, product, cryptography, commercial, etc.). Do you feel strongly about any particular decision-making framework? After deliberations, we’ll share the selected proposal(s) that will continue to be pursued for further testing/simulations/modeling, with one eventually resulting in an implementation.
Perhaps worth noting that as the protocol and community expand, I imagine a more well-established process for collective decision-making will emerge, integrating with @joshc’s AZIP process: [Proposal] Aztec Improvement Proposal (AZIP) Process.
Thanks for the time & interest! Let us know what other questions you have
Interesting! Thank you for the insight and context. At the moment, I’m not personally concerned about bootstrapping a significant enough number of validators, or about interest in participating in the network. Additionally, I’d be fundamentally opposed to relying on other networks’ validators opting in, as they are less likely to be dedicated to upgrades etc., with less long-term community alignment.
Do you see any overlap between this approach & “shared sequencers”?
Yes, there are two “stages” of proving coordination generally required or assumed.
1 - Proof delegation - breaking up which parts of a block’s private transactions will be worked on, and by whom. @jaosef’s PBS proposal handles this nicely, with provers voting on what they want to work on.
The block proposer assembles block votes into a block transcript, detailing which provers will create which part of the proof tree from the available votes on the L2 gossip network.
Generally this design decision comes from the idea that in Aztec, if a block is built with invalid transactions, or the provers work on a block that doesn’t end up in a rollup, a significant amount of time could be wasted, resulting in an uncle block (potentially 2-10+ minutes, across 100s-1000s of machines…). So provers need strong commitments before beginning the parallel proving.
2 - Proof aggregation - after the proving work is completed, some mechanism must be in place to aggregate all of these individual proofs into a single final rollup proof, which gets published to Ethereum mainnet. (Stealing this photo from Joe’s proposal, which may be a helpful mental model.)
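To make the aggregation stage concrete, here is a rough sketch of the tree shape involved. This is purely illustrative: real proof aggregation uses recursive SNARK verification circuits, whereas here each "proof" is an opaque byte string and `aggregate_pair` is just a hash standing in for "a circuit that verifies two child proofs".

```python
# Illustrative sketch only -- not Aztec's actual aggregation scheme.
# Leaf "proofs" are folded pairwise, layer by layer, into a single root
# that would be published to L1.
import hashlib

def aggregate_pair(left: bytes, right: bytes) -> bytes:
    """Stand-in for a recursive circuit verifying two child proofs."""
    return hashlib.sha256(left + right).digest()

def aggregate(proofs: list[bytes]) -> bytes:
    """Fold a list of leaf proofs into a single root proof."""
    layer = list(proofs)
    while len(layer) > 1:
        if len(layer) % 2 == 1:          # carry an odd leftover up a layer
            layer.append(layer[-1])
        layer = [aggregate_pair(layer[i], layer[i + 1])
                 for i in range(0, len(layer), 2)]
    return layer[0]

root = aggregate([f"proof-{i}".encode() for i in range(8)])
```

The point of the tree structure is that each layer can be produced in parallel by different provers, which is exactly why the proof-delegation step above needs to assign subtrees to specific provers up front.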
Lastly, I’d say that this RFP allows you to assume that these are “problems to be solved later” (via a subsequent RFP) - hence why it’s not explicitly outlined. The goal is to get to a point in time in which the proving network knows there’s a valid block being built, and/or sequencer to listen to.
Does this all make sense & help clarify?
Hey @cooper-aztecLabs -
When you say “sequencer selection should not prioritize the best hardware or largest actors” is this meant to preclude PoS?
Hey @jrg thanks for the comment!
I think PoS is definitely on the table, and directionally where you see Whisky/Joe’s PBS proposal going. With respect to not prioritizing the best hardware or largest actors, this is a (perhaps poor) way to try to prioritize a design that doesn’t consolidate on “fastest provers/sequencers win”, as that could result in a single monopolistic entity controlling block production, likely on a very large cloud instance. I think Joe’s proposal is more likely to become centralized around very sophisticated proposers, but it does a nice job of ensuring that the proving network and the rest of rollup production are well distributed.
My proposal’s way of addressing this is to randomly choose a leader from the entire set of currently staked sequencers. Therefore, if you have a machine that meets the minimum performance requirements and you have met the minimum staking requirements, you have some (small) guarantee of a random chance to produce a block (eventually). This is nice to have because, in a worst-case censorship scenario, users would still have a chance to build their own blocks and include their own transactions.
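The leader selection described above can be sketched minimally as a deterministic draw from the staked set, seeded by public randomness. Everything here (the function name, the use of a RANDAO-style seed, the slot-based indexing) is an assumption for illustration, not the proposal's actual mechanism.

```python
# Hedged sketch: uniform random leader election over the staked set,
# seeded by public randomness (e.g. a RANDAO-style beacon value).
# Names and structure are illustrative, not Aztec's actual design.
import hashlib

def select_leader(staked: list[str], seed: bytes, slot: int) -> str:
    """Deterministically pick one sequencer per slot from the staked set."""
    digest = hashlib.sha256(seed + slot.to_bytes(8, "big")).digest()
    index = int.from_bytes(digest, "big") % len(staked)
    return sorted(staked)[index]   # sort so every node agrees on the ordering
```

Because the draw depends only on public inputs, every honest node computes the same leader for a given slot, and every staked sequencer (regardless of stake size above the minimum) is drawn with equal probability over time.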
I think it would be interesting to add the role interoperability plays to the discussion. Reading about shared-sequencer solutions like Espresso and Astria, or watching the OP Stack approach, it seems that a shared sequencer can enable interoperability between chains. I’m super intrigued by how public/private state would work in such cases.
This also determines whether Aztec-based chains would be more like sister chains that share the sequencer and L1 contracts, or L3s that use Aztec as a settlement layer.
I would like to suggest adding a stronger notion of censorship resistance to any leader-based protocol for sequencers, by utilising some of the fair-ordering directions proposed in modified versions of Themis (and, later on, in Multiplicity). This presumes some form of leader-based consensus mechanism between sequencers in the network (which should hopefully not be too resource-heavy).
We want to enable a form of coarse-grained ordering and then utilise fine-grained auction mechanisms (potentially driven by MEV projections - a discussion in its own right) in order to reduce a single proposer’s monopoly on txn inclusion and drive better time-scales for eventual inclusion (hopefully linked to network latency), while avoiding some of the complexities associated with Themis. The core idea, proposed in Duality’s Multiplicity blog post, relies on a modification to any leader-based consensus protocol for sequencers: the leader collects all orders from consensus nodes and subjects them to a fair aggregation protocol (plus a proof of correctness). This could mean the leader choosing to include a certain number of txn bundles from sequencer-signed bundles, with other nodes determining the validity of the leader’s construction and choosing to accept/reject accordingly. The number mentioned previously could be stake-weighted or, more interestingly, linked to sequencer reputation or uptime (and maybe relying on Noir). The idea for the latter is essentially a proof of uptime/sequencer performance, which could be implemented as a VDF taking the beacon chain state + number of bundles produced to date as input (credits to the Obol team, as well as the delay-tower proposal by the 0L team).
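A toy version of the "leader includes a quota of bundles, other nodes recheck the construction" idea might look like the following. The quota rule (pure stake-proportional allocation) and all names are assumptions for illustration; Multiplicity's actual construction operates at the consensus-protocol level, not as a simple post-hoc check.

```python
# Illustrative sketch (not the Themis/Multiplicity construction itself):
# the leader must include a stake-weighted number of bundles from each
# sequencer, and any node can independently recheck the leader's block.

def bundle_quota(stakes: dict[str, int], total_slots: int) -> dict[str, int]:
    """Allot bundle slots to each sequencer in proportion to its stake."""
    total_stake = sum(stakes.values())
    return {seq: (stake * total_slots) // total_stake
            for seq, stake in stakes.items()}

def leader_block_is_fair(included: dict[str, int],
                         stakes: dict[str, int],
                         total_slots: int) -> bool:
    """Nodes accept the block only if every sequencer's quota was honoured.

    `included` maps sequencer -> number of its signed bundles in the block.
    """
    quota = bundle_quota(stakes, total_slots)
    return all(included.get(seq, 0) >= q for seq, q in quota.items())
```

Replacing `bundle_quota` with a reputation- or uptime-weighted rule (the VDF-based idea above) would be a one-function change, which is part of the appeal of separating the quota policy from the acceptance check.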
This might be a nice fit for an ecosystem with a small number of sequencers and also help with establishing a base towards including contributions from a larger pool of sequencers in a stake-independent dimension.
The last point I would like to highlight (even as part of the inclusion criteria) is getting feedback from the Aztec team and the broader community on sequencer-specific behaviour (on the network propagation/compute side), including other forms of ‘witnesses’ (via network telemetry and some of the efforts on proof-of-bandwidth gadgets) in order to encourage proper stakeholder behaviour when un-incentivized/off the happy path for L2 epochs.
Hello all, to our knowledge everyone currently working on a proposal has posted… So we are doing a final call for proposals! Please respond to this comment within the next 48 hours if you’re currently working on something that has not yet been published on this forum. Otherwise the submission window will close. Discussions, feedback, etc., are all still highly valuable and very much appreciated.
Thank you to everyone who’s participated so far!
Following up here to confirm that the submission window has closed. Thanks so much to those who’ve submitted proposals! Note that there’s still plenty of debate, feedback, and discussion to be had.
For me it’s awesome, but at the moment I have no idea how to do it.
Please read this week’s Fernet announcement here for more information: https://medium.com/aztec-protocol/announcing-fernet-aztecs-decentralized-sequencer-selection-protocol-dd06194d572f as well as insight into some outstanding (but very interesting) design decisions!
Thank you to everyone who has participated! The value of your contributions cannot be overstated.
Excited to continue designing, debating, and building fully in public.
Also, a quick reminder that we have an active RFP for upgrade mechanisms!