Why not just use Chainlink’s decentralized sequencer?
You can read more about their design here: Fair Sequencing Services: Enabling a Provably Fair DeFi Ecosystem.
I think this is a much better and safer alternative than trying to kickstart your own set of sequencers.
Edit: linking to new post (missed that bit)
Is this live? Do you have documentation past the initial spec?
Thanks for the suggestion!
Use L1 to drive sequencing?
Hey @0x-Stoic I think in general, we could use something like “based/L1 sequencing,” but it begs a few other questions in Aztec’s context - specifically, whose responsibility it becomes to: execute state transitions, coordinate the proving network, and aggregate transactions into a final zk rollup. It sounds like the suggestion is to piggyback on L1 validators/proposers/searchers/builders, and have them do some ~aztec specific~ work? Is that generally correct, or could you provide some comments on how you’d map this proposal to our usecase/context? Due to how much work an Aztec sequencer (or whomever is publishing the rollup) may require, it may not be reasonable to expect L1 node operators to do so. But that’s just my initial reaction! Appreciate the contribution/discussion.
Thanks for opening this up! Properly decentralizing the sequencer truly matters when it comes to building scalable and secure rollups.
Do you have any due dates for submission?
I am by no means an expert on based rollups. In fact, I am still trying to wrap my head around it; but I do see its benefits as detailed by Justin Drake. That said, I agree that L1 node operators will not be able to carry out all the work. That work would be done at L2. For state transition execution, does it have to be done by any designated entity? If the sequence is determined by L1, then the state transitions become deterministic and every L2 node will know them. I am not sure I understand the need to coordinate with the proving network. I thought the users generate the proof locally and then submit them to Aztec/L2. So I am not sure what needs to be proved, unless you mean some entity needs to coordinate the proving of the aggregate proof and push it down to L1?
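To make the determinism claim concrete: if the transaction ordering is fixed by L1, any node that replays the same ordered list through the same transition function must arrive at the same state. Here is a minimal illustrative sketch (the transition function, account/delta transaction shape, and hashing are all hypothetical simplifications, not Aztec’s actual state model):

```python
from hashlib import sha256

def apply_tx(state: dict, tx: dict) -> dict:
    """Hypothetical transition rule: every node runs the exact same logic."""
    new_state = dict(state)
    new_state[tx["account"]] = new_state.get(tx["account"], 0) + tx["delta"]
    return new_state

def derive_state(l1_ordered_txs: list, genesis: dict) -> tuple:
    """Replay the L1-fixed ordering; any honest node gets the same result."""
    state = genesis
    for tx in l1_ordered_txs:
        state = apply_tx(state, tx)
    # Commit to the resulting state so two nodes can compare results cheaply.
    digest = sha256(repr(sorted(state.items())).encode()).hexdigest()
    return state, digest

txs = [{"account": "alice", "delta": 5}, {"account": "bob", "delta": 3}]
state_a, root_a = derive_state(txs, {})
state_b, root_b = derive_state(txs, {})
assert root_a == root_b  # same ordering -> same state on every node
```

The open question in the thread is then not *who computes* the state (anyone can, deterministically), but who takes on the work of proving it and publishing the rollup.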
I had the same reaction when I saw Justin Drake’s proposal a while ago. At the risk of sounding like an EigenLayer shill (I’m in no way associated with the project, also they have no token) - I have to say that I think that’s exactly where EigenLayer fits in nicely. It allows validators to opt in to take on additional duties, possibly exposing them to additional slashing conditions in exchange for additional yield. I don’t think just “possibly some MEV in the future” is strong enough of an incentive to take on the amount of work required.
Great question! Currently targeting May 23rd for submission, and leaving some time afterwards for discussions + deliberation. However if you’re working on a proposal and believe this deadline cannot be met for any variety of reasons, please reach out - I’d love to chat!
I am a fan of EigenLayer. I think it is great for protocols that are not able to bootstrap a trust network. However, for Aztec and most L2s, I don’t think that will be an issue. And I don’t buy the pooled security argument that EigenLayer will afford every protocol ETH L1-level security. The validators have to opt in, and with EigenLayer having access to the stakers’ withdrawal credentials, and each EigenLayer project having varying levels of slashing vulnerabilities, it will be interesting to see how many actually opt in. Additionally, for Aztec there is the overlap risk. If there is a high overlap between Aztec EigenLayer stakers and other projects’, any one single project’s mass slashing due to unintended vulnerabilities will impact the other projects’ validator set.
That’s why I’m more of a fan of the LSD variation
Awesome, will get back to you on this!
From the Aztec team’s perspective, do you anticipate there’d be advantages to doing some parts of the Sequencer Selection protocol’s chain execution side endogenously in Noir? i.e. leveraging what you think would be good uses of Noir’s private state and selective disclosure capabilities. (Though as I write this, I feel like this is a phase 2 / follow-on topic, it would still be interesting to know if there is any early thinking about this point.)
How will the sequencer be selected? Is there a points-based system?
Definitely, @zac-williamson should be submitting a proposal soon that does just this. There are some trade-offs.
- The sequencer selection process likely has a role to play in the network upgrade process especially if you follow the ETH model of pure social consensus. This means that if the selection process uses Noir transactions and there was a bug, it could be quite fatal for the network if exploited.
- It is harder to bootstrap institutions e.g Fireblocks as they have to integrate the L2 on day 1, vs just an ETH contract.
- However running the process on L2 has benefits:
- Easily add privacy to the sequencer, and payment of any block rewards / fees
- Fees will be far cheaper than on L1
- The algorithm can be more computationally expensive, as it’s wrapped in a ZKP.
Excellent question. We’re in the process of finalizing the selection criteria, which are planned to be communicated when the submission period concludes and deliberations begin (to not bias submissions). Probably not a points-based system, but that is one of the options on the table. Another idea is ranked choice voting from a number of well-educated experts across a variety of teams (eng, product, cryptography, commercial, etc). Do you feel strongly about any particular decision-making framework? After deliberations, we’ll share the selected proposal(s) that will continue being pursued for further testing/simulations/modeling, with eventually 1 resulting in an implementation.
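For readers unfamiliar with ranked choice voting: in its common instant-runoff form, each expert ranks all proposals, and the proposal with the fewest first-choice votes is eliminated each round until one holds a majority. A minimal illustrative tally (proposal names and ballots are entirely hypothetical, and tie-breaking here is arbitrary rather than specified):

```python
from collections import Counter

def instant_runoff(ballots: list) -> str:
    """Instant-runoff tally: eliminate the weakest proposal each round
    until one holds a majority of remaining first-choice votes."""
    candidates = {c for ballot in ballots for c in ballot}
    while True:
        # Each ballot counts for its highest-ranked surviving candidate.
        tally = Counter(next(c for c in b if c in candidates) for b in ballots)
        leader, votes = tally.most_common(1)[0]
        if votes * 2 > sum(tally.values()):
            return leader
        # Eliminate the candidate with the fewest first-choice votes.
        candidates.discard(min(tally, key=tally.get))

# Five hypothetical expert ballots over three hypothetical proposals.
ballots = [
    ["A", "B", "C"],
    ["A", "C", "B"],
    ["B", "C", "A"],
    ["C", "B", "A"],
    ["C", "A", "B"],
]
print(instant_runoff(ballots))  # → C (B is eliminated, its vote transfers to C)
```

One appeal of this framework over a points system is that it surfaces broad acceptability: a proposal that is most experts’ second choice can beat one with passionate but narrow first-choice support.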
Perhaps worth noting that as the protocol and community expand, I imagine a more well-established process for collective decision-making, integrating with @joshc’s AZIP process: [Proposal] Aztec Improvement Proposal (AZIP) Process.
Thanks for the time & interest! Let us know what other questions you have
Interesting! Thank you for the insight and context. I think that at the moment, I’m not personally concerned about bootstrapping a significant enough number of validators, or interest in participating in the network. Additionally, I’d be opposed to fundamentally relying on another network’s validators opting in, as they are less likely to be dedicated to upgrades/etc., with less long-term community alignment.
Do you see any overlap between this approach & “shared sequencers”?