[Proposal] Cooperative proving network for Fernet

Attempts at designing a prover network that plays nicely with Fernet.

A cooperative proving network

We’ll shoot for a cooperative proving network, as opposed to a competitive one. Similar to sequencers, provers are required to stake to participate in the protocol. On every block, for every section of the proof tree, a VRF is evaluated using the prover’s key. This VRF outputs a score that ranks each prover for working on a specific part of the proof for a specific block.

Fee split and burn

We first adjust Fernet’s sequencer scoring function by multiplying the resulting VRF score by a fee split percentage. During the proposal phase, sequencers also submit a proposed FEE_SPLIT (from 0 to 1), and their resulting score is VRF_OUTPUT * (1 + FEE_SPLIT). Should their block be elected, the captured fees in that block plus any constant block rewards are split according to that value between the sequencer and the proving network. This incentivises sequencers to forward as much value as they can, while remaining competitive. Note that all captured MEV remains with the sequencers, though this could be revised by allowing the FEE_SPLIT to be greater than 1.
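A minimal sketch of the adjusted sequencer score, assuming a VRF output normalised to [0, 1]. The function name and values are illustrative, not from the protocol spec:

```python
def sequencer_score(vrf_output: float, fee_split: float) -> float:
    """Raw VRF score boosted by the proposed fee split (0..1)."""
    assert 0.0 <= fee_split <= 1.0
    return vrf_output * (1 + fee_split)

# A sequencer with a lower VRF draw can outrank a higher one by
# forwarding a larger share of fees to the proving network:
print(round(sequencer_score(0.50, 0.90), 2))  # 0.95
print(round(sequencer_score(0.60, 0.40), 2))  # 0.84
```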

Similarly, each prover has a VRF output score that determines their eligibility to build a section of the proof tree. This score is further adjusted by a FEE_BURN value (from 0 to 1), so their resulting score is VRF_OUTPUT * (1 + FEE_BURN). Should their proof be included in the tree, they keep (1 - FEE_BURN) of their respective reward, and the rest is burned. Allowing values greater than 1 doesn’t make sense here, since provers have no incentive to do the work other than the reward itself.
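The prover-side analogue, sketched under the same assumptions (illustrative names and numbers): the score is boosted by FEE_BURN, while the prover keeps only (1 - FEE_BURN) of the reward.

```python
def prover_score(vrf_output: float, fee_burn: float) -> float:
    """Raw VRF score boosted by the committed burn fraction (0..1)."""
    assert 0.0 <= fee_burn <= 1.0
    return vrf_output * (1 + fee_burn)

def prover_take(reward: float, fee_burn: float) -> float:
    """What the prover keeps; the FEE_BURN fraction is burned."""
    return reward * (1 - fee_burn)

# Burning more boosts the score but shrinks the take-home reward:
print(prover_score(0.5, 0.8), prover_take(10.0, 0.8))  # higher rank, smaller take
print(prover_score(0.5, 0.2), prover_take(10.0, 0.2))  # lower rank, larger take
```

This is the tension the mechanism relies on: FEE_BURN above some level makes winning the slot unprofitable, which bounds how far provers bid it up.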

VRF considerations

The VRF can be evaluated over the prover’s public or private key. Using the public key lets anyone compute the complete prover ranking, potentially leading to less wasted effort. It also allows the leader to set a precise FEE_BURN that keeps them in the leading position. On the other hand, using the private key allows provers to retain privacy, though it may lead to more wasted effort, which is not necessarily bad since it provides redundancy. This same discussion applies to the sequencer VRF.

As for the randomness source, we could use the same RANDAO value as the sequencer, combined with their VRF output. We rule out using the block hash, since sequencers could manipulate it to ensure that provers they control end up with a higher score. And we don’t use the raw RANDAO value but instead we mix it with the sequencer VRF output, so that different block proposals from different sequencers lead to different prover selections.
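One way to derive the prover randomness described above, mixing the RANDAO value with the sequencer's VRF output and then deriving a per-node seed. The hashing scheme and function names are assumptions for illustration, not the spec:

```python
import hashlib

def prover_randomness(randao: bytes, sequencer_vrf_output: bytes) -> bytes:
    """Mix RANDAO with the sequencer's VRF output, so different block
    proposals from different sequencers yield different prover selections."""
    return hashlib.sha256(randao + sequencer_vrf_output).digest()

def node_seed(base: bytes, node_index: int) -> bytes:
    """Per-node seed within the proof tree, derived from the mixed base."""
    return hashlib.sha256(base + node_index.to_bytes(4, "big")).digest()

base = prover_randomness(b"randao-value", b"seq-vrf-output")
print(node_seed(base, 0).hex()[:16])
```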

Prover selection and rewards

For each node in the proof tree, up to PROOFS_PER_NODE = 3 proofs are expected to be selected. Each prover that gets their proof included in a finalised tree with NUM_NODES nodes gets a reward equal to:

REWARD = TOTAL_REWARDS * FEE_SPLIT / (NUM_NODES * PROOFS_PER_NODE) * (1 - FEE_BURN)

Example of rewards distribution

Total block rewards & fees: 1000 AZT
Number of transactions in the block: 32 (i.e. 31 nodes in the proof tree)
Sequencer FEE_SPLIT: 80%
Total rewards to provers: 1000 AZT * 80% = 800 AZT
Maximum individual reward per prover: 800 AZT / (31 * 3) = 8.6 AZT
Effective reward for prover with FEE_BURN=40%: 8.6 AZT * 60% = 5.16 AZT
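The example above, worked through in code; the numbers match the figures in the post (8.6 AZT maximum per proof slot, 5.16 AZT effective). Function names are illustrative:

```python
def max_reward_per_proof(total: float, fee_split: float,
                         num_nodes: int, proofs_per_node: int) -> float:
    """Fees forwarded by the sequencer, divided evenly across proof slots."""
    return total * fee_split / (num_nodes * proofs_per_node)

def effective_reward(max_reward: float, fee_burn: float) -> float:
    """What the prover keeps after burning their committed fraction."""
    return max_reward * (1 - fee_burn)

max_r = max_reward_per_proof(1000, 0.8, 31, 3)  # 800 AZT over 93 slots
eff = effective_reward(max_r, 0.4)
print(round(max_r, 2), round(eff, 2))  # 8.6 5.16
```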

Should there be competing roots submitted, the winner is the one with the highest total score summed across all its included proofs. Remember that the score of a prover for a given node is determined by the VRF and boosted by their FEE_BURN.

This incentivises provers to build upon proofs with the highest scores, but grants them flexibility to choose others if those proofs are not shared. Furthermore, provers higher up in the tree are encouraged to include all 3 proofs for each subtree, in order to increase their score. If fewer than 3 proofs are included for a given node, the fees corresponding to the missing ones are burned.

Example of proof tree scoring

Let’s assume a proof tree of size 3, with two base rollups and a root rollup. In the left tree, prover R1 (who has an effective score of 60) includes the proofs from provers G1, G2, and G3 for the green node, and from B1 and B2 for the blue node. In the right tree, prover R2 (with a score of 40) chooses the same G1, G2, and G3, but adds the proof from B3 for the blue node. Thus, the total score of the right tree is 10 higher than the left one, and it wins.
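The comparison above, sketched with illustrative effective scores. The post only gives R1 = 60 and R2 = 40; the scores for G1..G3 and B1..B3 below are assumptions chosen so the right tree wins by 10, as in the example:

```python
def tree_score(proof_scores: list[float]) -> float:
    """Total score of a candidate tree: sum over all included proofs."""
    return sum(proof_scores)

greens = [50, 45, 40]                          # G1, G2, G3 (in both trees)
left = tree_score([60, *greens, 30, 25])       # R1 + greens + B1, B2
right = tree_score([40, *greens, 30, 25, 30])  # R2 + greens + B1, B2, B3
print(left, right)  # 250 260
```

R2's lower root score (-20) is more than offset by including the extra blue proof (+30), which is exactly the incentive to fill all 3 slots per node.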

Note that it is possible that a prover has the highest score for a node and is still not included. However, this would be the result of all 3 provers for its parent node ignoring it and choosing other proofs instead (at the risk of reducing their own score), making it unlikely.


Allowing provers to be stateless is only possible if there is another entity incentivised to push them whatever state they need to execute the proofs. In this model, the sequencer cannot be trusted to fulfil this role, since this grants the sequencer power to choose whatever provers they want (by pushing data only to them), defeating the purpose of an in-protocol prover selection algorithm.

This seems to lead to provers being stateful. However, it could also lead to prover-PBS (?), in which the entity selected to propose a proof and the one that actually builds it are not the same. Staked provers could just be regular nodes that follow the chain, and if they are selected as provers, they could offload the computation, along with all necessary inputs, to a specialised proof builder such as nil.

It’s also worth noting that we could remove the statefulness requirement for merge rollup provers. If a merge rollup prover trusts the input from its children, which is a big assumption, then it can assemble the merge proof without any state. If we wanted to exploit this, we could allow provers to sign up separately as base and merge provers.


How is proof-data made available to provers?


Via the L2 P2P network


Without proof-data availability guarantees, the sequencer may select its own provers by selectively revealing data.

The auction incentivises a FEE_SPLIT of 1 and a FEE_BURN of 1, but the optimal equilibrium is a FEE_SPLIT equal to the compute cost and a FEE_BURN of 0. For this reason, I believe out-of-protocol coordination (a secondary market) will emerge between sequencers and provers:
e.g. FEE_SPLIT is 1 and FEE_BURN is 0, with side-channel payments to sequencers. A simple way to enforce this is for sequencers to selectively reveal proof-data.

If there is a way to share compute costs between proofs for the same node, then PROOFS_PER_NODE might introduce centralization pressure.

The scoring function SUM(VRF*(1+FEE_BURN)) for trees favors centralized actors which can aggregate FEE_BURN to the highest VRFs. Therefore provers will auction off their VRF slot in a secondary market.

The sequencer doesn’t care about the tree score, and will submit the first valid root it sees. Honest provers must then coordinate the L1 fees if they want to push a different tree through. This gives fast centralized provers L1-txn-fee worth of advantage per block.


There’s no restriction to who can submit the tree, and the tree with the highest score wins, not the fastest one. The L1 fee advantage can probably be removed with a gas rebate for the tx.origin that submits a valid tree.

Yes, this is becoming an issue, not just here but in sequencing as well, where sequencers can withhold proof data to trigger a reorg if it’s advantageous to them. We’re open to ways of fixing this, aside from introducing a DA layer specific to tx proofs.


The sequencer is slashed if no proof is submitted, and so has incentive to submit a proof as early as possible.
Rebates for all valid proofs sound too costly. Rebating gas only for the winning tree might work. However, that incentivises waiting until the final L1 slot to submit (which may affect liveness), meaning L1 proposers can simply steal it (as with block proposals in Fernet).

(assuming you’re referring to the gas rebate) Is this a problem? The proof lands on chain, which is our goal, and the original sender doesn’t pay any gas.

By steal it, I mean decide the winning tree with their own auction (where FEE_BURN is all 0).

My concern with sequencers submitting early may be inconsequential (or even beneficial), as they would submit in the slot one block before the deadline and use the highest-scoring tree.
I’m finding it difficult to reason about coordinating L1 fee payment. Can you clarify how/if rebates work, and who you expect to submit to L1?

In my mind, the tx.origin of the tx would receive a value transfer proportional to the gasUsed and gasPrice of the call. The transfer could be done in ETH or in whatever token powers the network, which may require a price oracle.

I believe it’s crucial to consider the fee split in the context of MEV. Currently, the fee structure doesn’t seem to account for MEV, which could form a significant portion of the protocol’s revenue going forward. As I see it, in the long term, MEV might become the primary revenue source for the protocol. This could lead to potential imbalances in incentive structures.

Sequencers could control the transaction ordering to maximize their extractable value, thereby retaining a significant share of the profits. This could skew preferences for individuals who later have to choose roles; given the profitability, many might opt to be sequencers over provers.

You touched upon the possibility of allowing the split to exceed 1, but the question arises: how would this be quantified in terms of MEV?
I recall a conversation with @jaosef on that Twitter space about Fernet, where he mentioned a potential auction mechanism between sequencers and provers. That makes sense, because such a system could create competition among sequencers to get their batches included/proven, guaranteeing a portion of the MEV revenue via the bidding process. Note that in the longer run, this could also bring different tradeoffs depending on whether you enshrine that process or not, as we see today on Ethereum with PBS.

So in that case, you mean that staked provers would be enshrined into the protocol and could run some sort of MEV-Boost to offload proof generation. Is that correct, or am I missing something?


The way I see it, the sequencer knows the total fees from the block and the total MEV they could extract, so they can express “I’m willing to forward this much $ to provers” in terms of the amount of fees.

So let’s say MEV is 10x the fees for a block, and a sequencer is willing to forward all fees and half the MEV to provers, then the split would be 6.
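The arithmetic above, as a quick sketch: expressing the forwarded value as a multiple of block fees is what lets FEE_SPLIT exceed 1 when MEV is shared too. The function name is illustrative:

```python
def fee_split(fees: float, forwarded_fees: float, forwarded_mev: float) -> float:
    """FEE_SPLIT expressed in units of block fees."""
    return (forwarded_fees + forwarded_mev) / fees

fees = 10.0
mev = 10 * fees                          # MEV is 10x the fees
split = fee_split(fees, fees, mev / 2)   # forward all fees + half the MEV
print(split)  # 6.0
```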

The rationale for expressing this in terms of fees rather than absolute values is that it’s easier for scaling the VRF score. If the sequencer was to specify an absolute number of $ or tokens to be forwarded, how would you adjust the score based on that?

Yep! proposer-builder-prover-proofbuilder-separation for the win!

Thanks for clarifying.

One way to tackle this could be by establishing a standardized protocol-wide reference. Let’s say, for instance, we have an on-chain oracle providing the average MEV to fee ratio for the past x blocks or over a certain time period. Sequencers can then specify their contribution in terms of this average. If a sequencer is willing to offer above this average, their VRF score could be adjusted favorably, and if they offer below, it could be adjusted less favorably. This dynamic adjustment aims to ensure that the system is adaptive to changing MEV values and market dynamics.

However, I do see the appeal and simplicity in expressing this in terms of fees, as it provides a stable and understandable metric for the parties involved and you still have the bidding process (which is what matters imo).

Now I’ve been nerdsniped on this: can you measure MEV reliably enough to build an oracle?

Great question. Measuring MEV reliably is challenging, but here is how I see it:

  1. “We” could analyze past blocks to get an idea of extracted MEV.
  2. Platforms like Flashbots offer insights into MEV from relayed transaction bundles.
  3. A decentralized oracle drawing data from multiple MEV platforms might offer even more accurate results.
  4. Given the dynamic nature of MEV strategies, the oracle would need periodic updates to stay relevant.

I guess it’s not straightforward but with the right approach, it’s feasible to estimate MEV with decent accuracy imho.