[Proposal] Sequencer Selection: Fernet

Data withholding issues

After a long discussion with @Mike, we believe that data withholding issues (whether unintentional or malicious) are not a major problem for the proposal, since we can have multiple candidate sequencers per slot. Multiple leaders per slot means multiple commitments to block orderings, so we can expect provers (and nodes) to follow the one with the leading VRF for which they have all the data available.

If the block being followed is the one with the leading VRF, then clients have a strong guarantee that it will become the canonical chain (the only exception being if the prover network times out).

If it is not, either a) the next VRF gets proven, which is the one that the network was following (assuming no splits), so we’re good, or b) the leading VRF gets proven somehow (let’s say the sequencer went on to prove on their own and withheld data from the proving network). If (b) happens, then we experience a reorg, but it shouldn’t affect liveness.
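The fork-choice rule described above can be sketched as follows. This is an illustrative model, not Fernet's actual implementation: the `Proposal` structure is hypothetical, and we adopt the convention used later in this thread that the lowest VRF score "leads".

```python
# Illustrative fork-choice: follow the leading (here: lowest-scoring) VRF
# among the slot's proposals for which we hold all block data.
# The Proposal structure is an assumption, not Fernet's actual spec.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Proposal:
    vrf_score: int        # VRF output for this slot's candidate sequencer
    data_available: bool  # do we hold all the block data for it?

def choose_to_follow(proposals: List[Proposal]) -> Optional[Proposal]:
    """Follow the leading VRF among proposals whose data is available."""
    available = [p for p in proposals if p.data_available]
    if not available:
        return None  # nothing to follow; wait for data or a proving timeout
    return min(available, key=lambda p: p.vrf_score)

# Example: the leading proposal withholds its data, so nodes follow the
# best-scoring proposal whose data is actually available.
slot = [
    Proposal(vrf_score=10, data_available=False),  # leader, but withholding
    Proposal(vrf_score=30, data_available=True),
    Proposal(vrf_score=70, data_available=True),
]
followed = choose_to_follow(slot)
```

If the withheld proposal later gets proven anyway (case b above), nodes switch to it, which is the reorg described in the post.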

Comparison with Schmequencer

There are two major differences between Schmequencer and Fernet, each with their pros and cons:

  • Preconfirmations vs latency: Fernet pushes the block ordering commitment to L1, in the form of the “unproven block”, which works as a preconfirmation for L2 nodes (assuming no data availability issues as mentioned above). On the other hand, Schmequencer only pushes to L1 once the proof has been completed. This allows Schmequencer to keep adding txs to the block being built/proven, which can reduce time to finality of submitted txs. Fernet can mitigate this by having multiple unproven blocks per rolled-up block (i.e. committing message orderings more frequently than it assembles proofs), but this increases L1 costs and could make reorgs deeper.

  • Staking vs balances: While Fernet requires staking for sybil resistance, Schmequencer keeps things simpler by snapshotting balances. This is explored in “Requiring staking for participation” in the proposal, but the simplicity of Schmequencer’s approach is still appealing.
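The preconfirmation-vs-cost tradeoff in the first bullet can be made concrete with a toy model. All constants here are made-up illustrations (not measured gas costs or actual Fernet parameters): committing k orderings per proven block scales the commitment portion of L1 costs by k while cutting preconfirmation latency by the same factor.

```python
# Toy model of the Fernet mitigation: committing k unproven message
# orderings per proven block trades L1 cost for faster preconfirmations.
# All constants are illustrative assumptions, not measured values.
COMMIT_GAS = 100_000   # hypothetical L1 gas per ordering commitment
SLOT_SECONDS = 600     # assume a ~10-minute proven-block period

def l1_commit_cost_per_block(k: int) -> int:
    """L1 gas spent on ordering commitments for one proven block."""
    return k * COMMIT_GAS

def avg_preconfirmation_latency(k: int) -> float:
    """Average wait (s) until a tx is covered by some unproven commitment."""
    return SLOT_SECONDS / k / 2

# Doubling commitment frequency halves average preconfirmation latency
# but doubles the commitment portion of L1 costs.
```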


Here it says:

However, more research is needed to determine if there is a valid DA layer that fits the requirements. We estimate that the proof size for each tx could be up to 32kb, so even 4844 blobs are impractical.

Why’s that? Don’t EIP-4844 blobs have ~125 kB of space? Also, there can be several of them per block.

The main feature introduced by proto-danksharding is a new transaction type, which we call a blob-carrying transaction. A blob-carrying transaction is like a regular transaction, except it also carries an extra piece of data called a blob. Blobs are extremely large (~125 kB), and can be much cheaper than similar amounts of calldata.


Congratulations :tada: I appreciate the decentralized ranked choice process.


That’d mean we can squeeze at most 4 txs per blob, which is way below the TPS we’re looking at. And that’s assuming that no one else in the entire Ethereum ecosystem is using the blobs!
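The arithmetic behind "at most 4 txs per blob" follows from the EIP-4844 blob size (4096 field elements of 32 bytes each, i.e. 128 KiB, the "~125 kB" quoted above) and the 32 kB per-tx proof estimate from the proposal. Using the original EIP-4844 target of 3 blobs per block (max 6), this works out to roughly 1 tx/s even with exclusive use of blob space:

```python
# Sanity check of the "at most 4 txs per blob" estimate from the thread.
BLOB_BYTES = 4096 * 32          # EIP-4844 blob: 4096 field elements * 32 bytes
PROOF_BYTES_PER_TX = 32 * 1024  # thread's upper estimate: 32 kB proof per tx
TARGET_BLOBS_PER_BLOCK = 3      # EIP-4844 target (max is 6)
L1_SLOT_SECONDS = 12

txs_per_blob = BLOB_BYTES // PROOF_BYTES_PER_TX  # 131072 // 32768 = 4
max_tps = txs_per_blob * TARGET_BLOBS_PER_BLOCK / L1_SLOT_SECONDS
```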


For the sake of completeness, here’s the link to the latest version of Fernet: Fernet - HackMD


I see, thanks for clarifying. I had assumed 32 kB already covered many L2 transactions batched into one.

Howdy folks! Following up here to share that this week we announced the decision to continue iterating on and improving Fernet, versus B52, the other finalist candidate.

Please read the announcement here for more information: https://medium.com/aztec-protocol/announcing-fernet-aztecs-decentralized-sequencer-selection-protocol-dd06194d572f

Thank you to everyone who has participated!

Excited to continue designing, debating, and building fully in public.


The current design is vulnerable to L1 proposer collusion. L1 proposers can buy L2 blocks with a single low VRF score, by censoring L2 proposers during the proposal phase.


If I understand correctly, you mean that the L1 proposer can simply choose what L2 proposals to include, right? If that happens, then the VRF scoring is rendered useless, but I’m not sure it affects liveness: it just changes the selection protocol to that of a based rollup, which is not necessarily bad I believe, since the barrier of entry to be an L1 proposer is also designed to be low. What do you think?


I agree. How about a based version of Fernet? Merge proposal/reveal phases into a single L1 block, and let anyone post sufficient collateral alongside block data.

IIRC, we had ruled out based sequencing since it doesn’t promote much diversity for L2 sequencers. Seems like incentives would lead to just a handful of builder-proposers pushing the blocks to L1 via a MEV sidecar. And if these go down it could affect L2 liveness.

@cooper-aztecLabs @jaosef do you remember if we had other reasons to not go with pure L1 based sequencing?

I am broadly curious what a “based” version of Fernet means? Could you explain in more detail @Anon?


I agree (I see this more as censorship than liveness). I like the idea that L1 validators can choose to build their own L2 blocks if desired, more so given PBS has not hit L1 yet. In-protocol provers would address this problem directly. However, I think the solution looks less like B52 and more like PBPS (a single bonded first-price auction). What do you think?

Each L1 block, the contract accepts from anyone a deposit (stake) and block data (unproven). If a proof is not posted within the proving phase, the deposit is burnt. (This is close to what Taiko is now trying.)
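A minimal sketch of the deposit-and-burn flow described above, under stated assumptions: the class, field names, and the slot-based `PROVING_PHASE_SLOTS` deadline are hypothetical illustrations, not Taiko's or Fernet's actual contract logic.

```python
# Sketch of the bonded-proposal flow: anyone posts a stake alongside
# unproven block data; the stake is burnt if no proof lands within the
# proving phase. All names and parameters are illustrative assumptions.
PROVING_PHASE_SLOTS = 50  # hypothetical upper bound of the proving phase

class BondedProposals:
    def __init__(self):
        self.proposals = {}  # block_hash -> proposal record
        self.burnt = 0       # total stake burnt so far

    def propose(self, block_hash, proposer, stake, current_slot):
        assert stake > 0, "collateral required"
        self.proposals[block_hash] = {
            "proposer": proposer, "stake": stake,
            "deadline": current_slot + PROVING_PHASE_SLOTS, "proven": False,
        }

    def submit_proof(self, block_hash, current_slot):
        p = self.proposals[block_hash]
        assert current_slot <= p["deadline"], "proving phase over"
        p["proven"] = True

    def expire(self, block_hash, current_slot):
        # Anyone can call this after the deadline to burn an unproven bond.
        p = self.proposals[block_hash]
        if not p["proven"] and current_slot > p["deadline"]:
            self.burnt += p["stake"]
            p["stake"] = 0
```

This makes the prover-griefing concern raised later in the thread concrete: the proposer's bond is at risk whenever the proof does not land, whatever the reason.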

Oh, ok. Interesting. How exactly is that a “based” version of Fernet? And could you clarify how you’re suggesting the proving phase works in that model?

“based” here means blocks are sequenced by L1. The proving phase is the same as in Fernet.

Ah, I see where I got confused. I see Fernet as a randomized leader election, explicitly designed to give a single sequencer a specific time as leader (so that provers know which blocks to work on, among other reasons) – removing this characteristic means it’s no longer Fernet, at least in my mind :sweat_smile: Thanks for the clarification! In general I like the randomness guarantees Fernet provides, with clear incentives for people to run Aztec-specific infrastructure, and believe it leads to healthier long-term decentralization (as Santiago mentioned). I think the designs that get submitted to the proving RFP will be interesting, since proving is the only thing Fernet doesn’t define.

So, back to your original point: it seems a possible mitigation for your concern about L1-proposer censorship could be extending the proposal phase to 2 or 3 slots on L1, or using even longer VRF-reveal phases. For example, you could run the “proposal phase” (VRF-reveal phase) 2 or more slots in advance, and take the lowest VRF from the entire 8-10 minute block period (the upper bound of the proving phase). That means a proposer would (theoretically) have to be censored by about 40-50 L1 proposers in a row.
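The mitigation above can be quantified with a toy model: if a colluding coalition controls a fraction f of L1 proposers and each slot's proposer is drawn independently, censoring every reveal across k consecutive slots succeeds with probability f^k. The numbers below are illustrative, not a claim about actual coalition sizes.

```python
# Toy model: probability that a coalition censors the entire VRF-reveal
# window. Assumes each L1 slot's proposer is selected independently, with
# a fraction f controlled by the coalition. Numbers are illustrative.
def censorship_probability(f: float, k: int) -> float:
    """Chance that all k consecutive L1 proposers collude to censor."""
    return f ** k

# A single-slot reveal can be censored by one cooperative proposer,
# but a 40-slot window requires 40 colluding proposers in a row:
p_one_slot = censorship_probability(0.25, 1)     # 0.25
p_forty_slots = censorship_probability(0.25, 40) # ~2^-80, astronomically small
```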

Am I understanding your concerns correctly now?


I am confused. Do you mean selecting from a stakeholder set vs L1 validator set?

The all-pay nature of the auction (the L1 txn cost) reduces it to ~two or zero bidders, and only in the final block. It seems more effective and simpler to just assign slots, as in PBPS.


I like that it randomly chooses from a set of staked sequencers.

Honestly, I haven’t read PBPS yet since you only posted it yesterday and we closed the sequencer proposals back in June; however, I’ll take a look and circle back once I understand more.


Just went through Taiko’s proposal, thanks for sharing it. It looks pretty simple, though I’m worried about prover griefing: what happens if the proposer or builder does not share with the prover the data they need to generate the proof?

Yes, their in-protocol prover auction sounds broken.