Interesting! Thank you for the insight and context. At the moment, I'm not personally concerned about bootstrapping a large enough number of validators, or about interest in participating in the network. Additionally, I'd be opposed to fundamentally relying on another network's validators opting in, as they are less likely to be dedicated to upgrades, etc., and have less long-term community alignment.
Do you see any overlap between this approach & “shared sequencers”?
Yes, there are two “stages” of proving coordination generally required or assumed.
1 - Proof delegation - breaking up which parts of a block’s private transactions will be worked on, and by whom. @jaosef’s PBS proposal handles this nicely with provers voting on what they want to work on.
The block proposer assembles block votes into a block transcript, detailing which provers will create which part of the proof tree from the available votes on the L2 gossip network.
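To make the delegation step concrete, here is a minimal sketch of how a proposer might turn gossiped prover votes into a block transcript. All names and the "first vote wins" rule are illustrative assumptions, not part of the actual PBS proposal:

```python
# Hypothetical sketch of proof delegation: provers vote on the L2 gossip
# network for the proof-tree leaves they want to work on, and the proposer
# assembles a transcript assigning each leaf to one prover.

def assemble_transcript(votes, num_leaves):
    """votes: list of (prover_id, leaf_index) pairs from the gossip network.
    Returns a transcript mapping each leaf index to the first prover that
    voted for it, or None if some leaf received no votes (so the block
    cannot be fully delegated)."""
    transcript = {}
    for prover_id, leaf in votes:
        if leaf not in transcript:       # first vote wins (illustrative rule)
            transcript[leaf] = prover_id
    if len(transcript) < num_leaves:
        return None                      # incomplete coverage of the proof tree
    return transcript

votes = [("prover-a", 0), ("prover-b", 1), ("prover-a", 1),
         ("prover-c", 2), ("prover-b", 3)]
print(assemble_transcript(votes, 4))
# {0: 'prover-a', 1: 'prover-b', 2: 'prover-c', 3: 'prover-b'}
```

A real transcript would also carry the provers' signatures so their commitments are binding, which is what makes the "strong commitments before parallel proving" property possible.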
Generally, this design decision comes from the idea in Aztec that if the block is built with invalid transactions, or the provers work on a block that doesn't end up in a rollup, a significant amount of work could be wasted, resulting in an uncle block (potentially 2-10+ minutes, across 100s-1000s of machines). So provers need strong commitments prior to beginning the parallel proving.
2 - Proof aggregation - After the proving work is completed, some party or mechanism must aggregate all of these individual proofs into a single final rollup proof which gets published to Eth mainnet. (stealing this photo from Joe's proposal, which may be a helpful mental model)
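The tree shape of that aggregation can be sketched as pairwise combination up a binary tree. This is only a structural model under stated assumptions: real aggregation would use recursive SNARK verification at each node, and hashing here is just a stand-in for "combine two child proofs":

```python
import hashlib

# Illustrative model of proof aggregation: leaf "proofs" are combined
# pairwise up a binary tree until a single root remains, which stands in
# for the final rollup proof posted to L1. Hashing models the tree shape
# only; it is not how recursive proof composition actually works.

def combine(a, b):
    return hashlib.sha256(a + b).digest()

def aggregate(leaf_proofs):
    level = list(leaf_proofs)
    while len(level) > 1:
        nxt = []
        for i in range(0, len(level) - 1, 2):
            nxt.append(combine(level[i], level[i + 1]))
        if len(level) % 2:               # odd leaf is carried up unchanged
            nxt.append(level[-1])
        level = nxt
    return level[0]                      # the single "final rollup proof"

root = aggregate([b"p0", b"p1", b"p2", b"p3"])
```

The point of the tree structure is that each internal node can be computed by a different prover as soon as its two children exist, which is what makes the delegation in stage 1 parallelisable.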
Lastly, I'd say that this RFP allows you to treat these as "problems to be solved later" (via a subsequent RFP), which is why they're not explicitly outlined. The goal is to get to a point in time at which the proving network knows there's a valid block being built, and/or a sequencer to listen to.
I think PoS is definitely on the table, and directionally where you see Whisky/Joe’s PBS proposal going. With respect to not prioritizing best hardware or largest actors, this is a (perhaps poor) way to try and prioritize a design that doesn’t consolidate on “fastest provers/sequencers win” → as this could result in a single monopolistic entity controlling block production, likely in a very large cloud instance, that sort of thing. I think that Joe’s proposal is more likely to become centralized around very sophisticated proposers, but does a nice job ensuring that the proving network and rest of rollup production is well distributed.
My proposal's way of addressing this is to randomly choose a leader from the entire set of currently staked sequencers. Therefore, if you have a machine that meets the minimum performance requirements, and you have met the minimum staking requirements, then you have some (small) guarantee of a random chance to produce a block (eventually). This is nice to have because, in a worst-case censorship scenario, users would still have a chance to build their own blocks and include their own transactions.
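A minimal sketch of that selection rule, assuming a public randomness seed (something RANDAO-like) and a flat lottery where every sequencer above the minimum stake gets exactly one ticket regardless of stake size. The names, the minimum-stake constant, and the seed source are all assumptions for illustration:

```python
import hashlib

MIN_STAKE = 32  # illustrative minimum; the real threshold is a protocol parameter

def choose_leader(stakes, seed, slot):
    """stakes: mapping of sequencer id -> staked amount.
    Every sequencer meeting the minimum gets one equal-weight lottery slot,
    so large stakers gain no per-slot advantage."""
    eligible = sorted(s for s, amount in stakes.items() if amount >= MIN_STAKE)
    if not eligible:
        return None
    digest = hashlib.sha256(seed + slot.to_bytes(8, "big")).digest()
    return eligible[int.from_bytes(digest, "big") % len(eligible)]

stakes = {"seq-a": 32, "seq-b": 500, "seq-c": 10}   # seq-c is below the minimum
leader = choose_leader(stakes, b"beacon-randomness", 42)
assert leader in ("seq-a", "seq-b")
```

Note the design choice this encodes: because eligibility is binary rather than stake-weighted, a sequencer with 500 staked has the same per-slot chance as one with 32, which is exactly the "don't let the largest actors win" property described above (at the cost of being gameable by splitting stake across identities, which a real design has to address).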
I think it would be interesting to add the role interoperability plays in the discussion. Reading about shared sequencer solutions like Espresso and Astria, or watching the OP Stack approach, apparently having a shared sequencer can enable interoperability between chains. I’m super intrigued by how public/private state would work in such cases.
This also determines whether Aztec-based chains would be more of a sister chain that shares the sequencer and L1 contracts, or an L3 that uses Aztec as a settlement layer.
I would like to suggest adding an increased notion of censorship resistance to any leader-based protocol for sequencers, by utilising some of the fair-ordering directions proposed in modified versions of Themis (and later in Multiplicity). This presumes some form of leader-based consensus mechanism between sequencers in the network (which should hopefully not be too resource-heavy).
We want to enable a form of coarse-grained ordering and then utilise fine-grained auction mechanisms (potentially driven by MEV projections - a discussion worth having) in order to reduce a single proposer's monopoly on txn inclusion and drive better timescales for eventual inclusion (hopefully linked to network latency), while avoiding some of the complexities associated with Themis. The core idea, proposed in Duality's Multiplicity blog post, relies on a modification to any leader-based consensus protocol for sequencers: the leader collects orderings from consensus nodes and subjects them to a fair aggregation protocol (with a proof of correctness). Concretely, this could be the leader choosing to include a certain number of txn bundles from sequencer-signed bundles, with other nodes determining the validity of the leader's construction and choosing to accept/reject accordingly. The number mentioned previously could be stake-weighted or, more interestingly, linked to sequencer reputation or uptime (and maybe relying on Noir). The idea for the latter is essentially a proof of uptime/sequencer performance, which could be constructed as a VDF taking the beacon chain state + number of bundles produced to date as input (credits to the Obol team, as well as the delay tower proposal by the 0L team).
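The aggregation-and-validation rule described above can be sketched as follows: every transaction that appears in at least a threshold number of sequencer-signed bundles must be included, and any non-leader node can recompute the rule to accept or reject the leader's block. The threshold, names, and omission of signatures are illustrative assumptions, not the Multiplicity specification:

```python
from collections import Counter

# Rough sketch of a Multiplicity-style fair aggregation rule: consensus
# nodes each submit a signed bundle of transactions they have seen; the
# leader must include any transaction seen by >= threshold nodes, and
# validators re-derive that set to accept or reject the block.

def required_txns(bundles, threshold):
    """Transactions appearing in at least `threshold` distinct bundles."""
    counts = Counter(tx for bundle in bundles for tx in set(bundle))
    return {tx for tx, n in counts.items() if n >= threshold}

def validate_block(block_txns, bundles, threshold):
    """A non-leader accepts the block only if every widely-seen transaction
    was actually included; extra transactions are allowed."""
    return required_txns(bundles, threshold) <= set(block_txns)

bundles = [["tx1", "tx2"], ["tx1", "tx3"], ["tx1", "tx2"]]
# With threshold 2, tx1 and tx2 must be included; tx3 may be omitted.
assert validate_block(["tx1", "tx2"], bundles, threshold=2)
assert not validate_block(["tx1"], bundles, threshold=2)  # tx2 censored
```

This is what limits a single proposer's monopoly on inclusion: censoring a transaction that enough honest sequencers have seen makes the block invalid to everyone else. A stake-weighted or reputation-weighted variant would replace the simple count with a weighted sum over the bundle signers.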
This might be a nice fit for an ecosystem with a small number of sequencers and also help with establishing a base towards including contributions from a larger pool of sequencers in a stake-independent dimension.
The last point I would like to highlight (even as part of the inclusion criteria) is getting feedback from the Aztec team and the broader community on sequencer-specific behaviour (on the network propagation/compute piece), including other forms of 'witnesses' (via network telemetry and some of the proof-of-bandwidth work), in order to facilitate proper stakeholder behaviour when un-incentivized or off the happy path for L2 epochs.
Hello all, to our knowledge everyone currently working on a proposal has posted… So we are doing a final call for proposals! Please respond to this comment within the next 48 hours if you’re currently working on something that has not yet been published on this forum. Otherwise the submission window will close. Discussions, feedback, etc., are all still highly valuable and very much appreciated.
Following up here to confirm that the submission window has closed. Thanks so much to those who've submitted proposals. Note that there's still plenty of debate, feedback, and discussion to be had!
I'm excited to share that this week we announced the decision to continue iterating on and improving fernet, versus b52, the other finalist candidate, which was previously announced on July 13th.