Agree with @spalladino’s point about Sybil attacks. Assuming that the cost to create multiple accounts each with N native tokens is negligible, there would be no reason for large provers not to do so.
@valerini’s point about the bidding war is interesting. I think this is highly dependent on the cost of proving and the value of N. These parameters will need to be well-tuned. For instance, if proving is very computationally cheap and the value of N is low, then I wouldn’t be surprised to see the most effective block builders running their own proposers and proving networks, and dominating the proving for their own blocks. I would be curious to see the result of any modeling on what will happen with different values of N, different costs of proving, etc.
Additional questions:
If there is a fixed total uncle block reward, isn’t there an incentive for provers to spam block proposals (up until the cost of posting a block proposal to the L1 exceeds the marginal reward)?
Doesn’t the existence of an uncle block reward ultimately result in a lot of wasted prover work?
Overall, I think the proposal is very promising. I agree that the primary concern is that the sequencer selection prioritizes the largest actors, and it’s likely that the entity who is most effective at extracting MEV will always end up being the proposer. This, coupled with the potential for large provers to split up stake and dominate the proving, could lead to significant centralization.
Thanks for the great questions and comments @charles-espresso! We will share publicly any modelling we do on this in due course.
I have answered your questions below.
This is also a worry for me in the network bootstrapping phase. Over time, I would hope that the cost would not be negligible, especially once there is significant value transacting on the network.
Question 1.
Good flag! The proposal should be updated to say: “The uncle block reward is only paid to the top X blocks that have valid proofs.” Secondly, the SCORE for a block could take into account the height of the proof tree; this way a spam block would only win if it was also large (computationally intensive) and had a diverse proving set (lots of stake), further increasing the cost.
In this case, spam blocks will incur an L1 gas cost plus the cost of proving a block with a sufficiently large score. IMO this should prevent spam and ensure a valid proposer doesn’t have to do much to win the uncle reward.
I was imagining only a 1-2 block uncle reward, just to incentivise no missed blocks. Flashbots had a good suggestion here which could mitigate spam as well. The idea was a “pay to play” scenario, where the proposer always pays the BURN_BID for the right to commit a block to L1, even if it does not win.
Question 2.
It does result in wasted prover work, but this is a deliberate trade-off to ensure a valid block is always received and the network's block time liveness is guaranteed, e.g. BLOCK_PROPOSAL_TIME + PROOF_ACCEPTANCE_WINDOW = Block Time.
This liveness guarantee is important for L2 > L1 messaging, and just a consistent UX.
Other proposals would require a slashing mechanism on an SSLE elected leader to get the same guarantees.
To the point you made about builder centralization (whoever extracts the most MEV): I believe this is inevitable as long as there is significant MEV opportunity. Eth builders are less centralized now than a year ago because the Flashbots team has been building public goods, and in the future we should expect more decentralized block builders to come out in the space with the launch of SUAVE etc. Imo, a somewhat centralized builder set is transitory.
Good question on L2 finality (all proposals will technically have to wait 2 L1 slots for final finality): it would be BLOCK_PROPOSAL_WINDOW + PROOF_ACCEPTANCE_WINDOW. These need modelling and can be tweaked, but on average I would say it's slightly slower than an SSLE by the length of the BLOCK_PROPOSAL_WINDOW. I imagine this to be ~30 seconds.
This is an open question I chatted with @LasseAztec about. The current idea is that, each proof will prove the state transitions, and in the very last step, after the end of the PROOF_ACCEPTANCE_WINDOW, when advanceState is called on the rollup contract, the new state roots would be set, and any work done to include messages in that block.
After speaking with @zac-williamson for a few hours, we have concluded that this proposal does have a flaw as mentioned by @spalladino. The proposal will need modelling to understand the likelihood of this and if tweaks to parameters can prevent this.
A 51% attack
If an entity purchases 51% of the stake, they will trend towards a 100% probability of always winning the block auction. Furthermore, as the incentives for provers to compete are linked to winning blocks, the other 49% of the network will only win uncle blocks, which will further reduce their stake.
An example:
There are 100 stakers (provers) and Bob controls 51 of those.
Bob is a block builder and so is Sally; they both run the same building software and on average can make blocks with the same value, leading them to bid similarly for a given block. Let's assume 1 token for easy maths.
Bob bids 1 token and receives 51 votes (as he owns 51% of the stake). Bob's score is: 132,651 = 51^3 * 1^2
Sally will have to bid at least 1.062 tokens to win, as the maximum number of votes she can receive in this scenario is 49. (~133,000 = 49^3 * 1.062^2)
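The break-even maths above can be sketched quickly (assuming the SCORE formula used in this worked example, votes^3 * bid^2, which may not be the protocol's final formula):

```python
def score(votes: int, bid: float) -> float:
    # SCORE as used in the Bob/Sally example above (an assumption,
    # not necessarily the protocol's final formula).
    return votes ** 3 * bid ** 2

bob_score = score(51, 1.0)                    # 51^3 * 1^2 = 132,651
sally_min_bid = (bob_score / 49 ** 3) ** 0.5  # smallest bid beating Bob with 49 votes

print(bob_score)                 # 132651.0
print(round(sally_min_bid, 3))   # 1.062
```

This shows the ~6% bid premium Sally must pay per block just to offset Bob's 2-vote advantage, which is the centralising force described below.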
Over time, this creates a centralising force towards Bob, likely causing Bob to end up with 100% of the staking network, as the stake of provers who are only paid on uncle blocks will diminish over time, since that work is less profitable.
Further modelling is needed to tweak the formula for SCORE, or the proposal will need modification to always pay all provers, regardless of the score of their block, or to bake a VRF into the provers' votes to be secure.
Fair pay prover model
Instead of splitting the block reward in the agreed split with the provers who contributed to a specific block, split the block reward amongst all the provers who contributed to any block.
This will have the effect of incentivising provers to work on any block, and to work with smaller proposers to get that block on chain, in order to prevent 51% stake attacks.
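A minimal sketch of this split, with hypothetical prover names and a toy reward, just to make the accounting concrete:

```python
def fair_pay(reward: float, proofs_per_prover: dict[str, int]) -> dict[str, float]:
    # Split the block reward pro-rata across every prover who contributed
    # a proof to *any* block this slot, not just the winning block.
    total = sum(proofs_per_prover.values())
    return {p: reward * n / total for p, n in proofs_per_prover.items()}

# carol only worked on an uncle block, but still gets paid,
# so working with a smaller proposer is never wasted effort.
print(fair_pay(10.0, {"alice": 6, "bob": 3, "carol": 1}))
# {'alice': 6.0, 'bob': 3.0, 'carol': 1.0}
```

The design point is that a prover's payout no longer depends on backing the winning proposer, which removes the incentive to concentrate stake behind the largest builder.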
Modification with a VRF
Further protection against 51% attacks can come from modifying this proposal with a VRF as used in Irish Coffee or Cookie jar.
Instead of the sequencer using a VRF, each staker in the proving network would use a VRF attached to their vote.
A signature over
1. The block hash
2. The index of the binary proof tree they will make e.g `Depth 10:Index 4`
3. VRF output for current block hash, which weights their vote
This has some nice properties:
The VRF logic can be consumed within the rollup circuit. This means each block comes with its own proof of the VRF output, for no additional cost, as it's baked into a proof.
It gives the block builder an easy way to assemble provers into a block transcript, based on the weight of their VRF in any given block, i.e. they will pick votes with high weights.
It ensures the maximum amount of MEV is always burnt per block, unlike other models.
It incentivises coordination. A builder needs to pay provers with a high VRF output fairly for a given block, rather than attempt to gain stake to win a block.
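As a sketch of how a builder might assemble a transcript from VRF-weighted votes, assuming hypothetical names and a mocked VRF (a real VRF also yields a proof of correct evaluation, which here would be checked inside the rollup circuit):

```python
import hashlib
from dataclasses import dataclass

@dataclass
class Vote:
    prover: str
    block_hash: str
    tree_index: str   # e.g. "Depth 10:Index 4"
    vrf_output: int   # weight derived from the prover's VRF over the block hash

def mock_vrf(prover_key: str, block_hash: str) -> int:
    # Stand-in for a real VRF: hash of key + block hash. A real VRF also
    # produces a proof of correctness, verifiable in-circuit.
    digest = hashlib.sha256((prover_key + block_hash).encode()).digest()
    return int.from_bytes(digest[:8], "big")

def assemble_transcript(votes: list[Vote], slots: int) -> list[Vote]:
    # The builder keeps the highest-weight vote per tree index,
    # then takes up to `slots` of them, heaviest first.
    best: dict[str, Vote] = {}
    for v in votes:
        if v.tree_index not in best or v.vrf_output > best[v.tree_index].vrf_output:
            best[v.tree_index] = v
    return sorted(best.values(), key=lambda v: -v.vrf_output)[:slots]
```

Because the weight is a per-block VRF output rather than raw stake, a 51% staker cannot guarantee the heaviest votes for every index, which is what blunts the centralisation attack above.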
Could a malicious sequencer grief provers by not revealing the contents of the block they have voted on, colluding with a private prover pool to build and submit proofs separately, and then slashing the honest provers?
Could this be mitigated by making the ordering of the txs in a block proposal public during the block proposal window? This would also offer a soft finality to clients, who wouldn’t need to wait until the end of the proof generation to know that their tx will be included (if everything goes well). What’s the drawback of making this info public, besides the increased size of the block proposal (since it now needs to include a bunch of tx hashes, rather than a single commitment)?
A malicious sequencer could grief, but it is very unlikely that their block would win in that case, for the following reasons:
The block commitment contains a transcript of the rollup proof, effectively assigning one prover to a given part of the tree. The proposer's score is dependent on that prover constructing that proof. If they do not (because the block was not revealed), the score will be lower as a result.
There is a higher cost to this attack, as it would require submitting 2 transactions on chain (one for the block commitment and one for the proof). The cost to reverse incorrect slashing for an honest prover is only 1 transaction, so the attack is unlikely to prevail.
The main drawback is that proposers will have ordered transactions in order to extract MEV; if the transactions are public, then anyone can simply take the block and submit it with a slightly higher score. I think the ordering must remain private until it has been committed to on L1, in order to attract MEV searchers.
Edit: I should have read the whole thing first, as some of these questions are addressed further down your proposal.
To reduce communication, could we encode all intended proof tree indices (for a particular block proposal) into a single vote?
I guess a prover wouldn’t be able to confidently commit to an intention to produce a merge rollup proof high up the tree (at least, not immediately), because they have no guarantee that they will see the child proofs which they would need to feed into their intended proof. Perhaps this means provers would only be able to vote (with confidence) for higher up tree indices once they can see the children actually exist in the proof pool?
Broadcasting the intents to the gossip network doesn’t ‘register’ their intent anywhere tangible. I.e. the protocol cannot ‘see’ the gossip network and therefore cannot point to any evidence in order to slash users. Would their intents actually need to be persisted in some L1 or L2 state, to enable slashing to be provably justified? edit: Intents are encoded in a ‘transcript’ on L1
If a prover submits intent for a block which doesn’t ‘win’, and they didn’t produce a proof for that block, do they get slashed? What if it’s an uncle block?
If a block doesn’t win as a result of prover(s) not producing proofs, how would the lazy prover(s) be slashed, given the lack of a completed block?
What if nobody votes for a particular index of the tree? Do provers have the opportunity to realise this, and submit a vote to ‘unblock’ the block-production process?
Some more Q’s (which I’ll collect as I read)
Submitting a valid proof (of an entire block) will be costly (hundreds of thousands / millions of gas of calldata, blob data, message storing, and computation). What’s the incentive to submit, if there’s a risk that submitters might get gazumped by a later submission with a higher score? Is it just “take a gamble”?
It seems that we’d always need to wait until the end of the proof acceptance window (to give opportunity to all submitters, to ensure the highest score is selected). Does this cause unnecessary network delays (in the sense that a valid, perfectly nice block proof might already have been submitted to L1 earlier in the window)?
How are sybil attacks (from provers) prevented? I.e. how do we stop a prover spinning up multiple prover instances, thereby inflating the NUM_PROVERS component of a score? Perhaps the SCORE calculation needs to include a reference to the prover stake. E.g. the total amount being staked by all contributing provers could be incorporated, or a weighted average ‘stake per prover’.
Edit: looks like this might have been discussed in the comments above.
What’s to stop someone proposing, proving (by themselves) and submitting a block containing 1 tx, just to reap an uncle block reward? Perhaps such a reward would need to be proportionate to the number of txs in the uncle block, or perhaps some other metric for the ‘size’ of the uncle block. Further, how many submissions would be eligible for uncle rewards?
Could the bidding process benefit from randomness via a PRF?
Without it you end up in a situation where a pool with 51% of the stake produces 100% of the blocks.
I suspect this will happen due to the large economic incentive to pool sequencer stake into pools to capture all block production. This makes the network vulnerable to censorship.
Randomly-weighting the bids removes this problem. If you have x% of the stake you make x% of blocks
(the above assumes everybody votes for the block that produces the most value, which I think we can assume will happen from game theory)
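One concrete way to get the "x% of stake makes x% of blocks" property is an exponential-clock lottery (a standard stake-weighted sampling trick; this is an illustrative sketch, not necessarily the weighting the proposal would use):

```python
import math
import random

def weighted_lottery(stakes: list[float], rounds: int = 200_000,
                     rng=random.Random(1)) -> list[float]:
    # Each staker draws an exponential "arrival time" with rate equal to
    # their stake; the earliest arrival wins the block. The winner's
    # probability is exactly stake_i / total_stake per round.
    wins = [0] * len(stakes)
    for _ in range(rounds):
        times = [-math.log(1.0 - rng.random()) / s for s in stakes]
        wins[min(range(len(stakes)), key=times.__getitem__)] += 1
    return [w / rounds for w in wins]

# A 51% pool no longer wins every block, only ~51% of them.
print(weighted_lottery([51, 49]))
```

In production the per-round randomness would come from each staker's VRF output rather than a shared RNG, so no party can bias the draw.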
Is coordinating votes on L1 not very costly? For example, if you have 100 entities voting per block that’s 100 L1 transactions whose cost will have to be borne by L2 tx fees.
Even a basic voting protocol will likely change 2 state variables (~50k gas per tx), so we could be looking at 5 million gas per block just for coordinating votes for 100 participants.
One random thought is: what happens if we change B52 from voting per-block to being more of a prediction market?
Provers vote on how much fee revenue they think they will get for a given set of blocks.
The winning Provers win the rights to submit a fixed number of blocks before the voting process is run again.
If the winning Provers do not submit blocks, then the runners up get to submit (repeat until you find somebody who makes a block)
Potentially, although the idea is that the rollup circuit is fed the transcript that the proposers committed to on L1. The prover would feed their vote into a given proof, and the circuit would check the index matched the vote. We would need a clever way to mimic that, to ensure the score is accurate.
I was imagining provers would vote on complete sub-trees initially, yes. There would need to be some further coordination mechanism to vote on higher levels of the tree, or there is a griefing attack here, I think.
I was thinking this would occur for any valid block submitted in the proof acceptance window, but this would need to be modelled. It should follow the payment methodology for provers, i.e. if provers always get paid for a valid block (regardless of whether it becomes the head of the chain, an uncle, or a low-scoring block), then they would always be eligible for slashing on any valid block. If provers don't always get paid, then they would not get slashed. Again, modelling is needed here.
The idea is that anyone can fill in any part of the tree that is missing to complete the block. The commitment on L1 to the proof transcript defines the maximum SCORE a block can receive. The SCORE will not be incremented for a given proof if it is made by a prover not on the transcript (the signature / private key will not match the vote).
TLDR, a block can always be produced; it may just have a lower score than the proposer intended if different provers do the work.
The proposer can force slashing by submitting that block to L1 (even if it may not win).
The idea here is that a proposer knows at the end of the BLOCK_PROPOSAL_WINDOW which block should win if a proof is made. They choose to reveal the block at this point, and would only submit it if they stand a chance of winning the block or an uncle reward. They may also gamble on the highest scoring block failing to reach L1 in time. TLDR, the proposer has a lot of information to make that gamble, as the set of potential blocks and their max scores is fixed at this point.
This is an intentional trade-off for liveness. You can shorten the delay by allowing the next BLOCK_PROPOSAL_WINDOW to start at the start of the PROOF_ACCEPTANCE_WINDOW and each proposal to reference a prior block as described above.
This will need to be modelled; my current thinking is there is no need to stop this. There is a cost on L1 for a proposer to submit that block, and if it is a valid uncle, e.g. the second highest scoring block, it would get paid.
For provers, I agree it would be split pro-rata, e.g. if the total number of uncle proofs is 1025 (a 2^10 tree plus 1 rollup proof), the prover who submitted the 1 rollup proof would get 1/1025 of the reward. Secondly, if you assume that the winning block is the most profitable block in terms of MEV extraction, the winning block should be able to pay the proving network the highest fee, so doing work on an uncle block will always be less profitable, and an equilibrium should be reached.
The voting from provers only occurs in the L2 p2p network.
A proposer would have to commit a valid block transcript (made up of votes collected on the L2 p2p network) to L1 in the BLOCK_PROPOSAL_WINDOW, but I would assume there are only 5-10 such commitments for any given block (much like the number of entities building on Ethereum).
I think you could get a block commitment down to 1 state variable (a hash of the proof tree and the full proposal). With some gas golfing you can get this down to ~26k gas per L1 commitment and maybe a hash of calldata. The first time a proposer commits to L1 will be closer to 45k as it will be setting an empty storage slot which is an additional 20k.
This is an overhead of closer to 150k per block, assuming 5 submissions at 30k.
I do think this overhead needs to be modelled, but given the protocol has 100% MEV value capture, it may be acceptable. This cost also gives the user a soft confirmation, e.g. if your transaction is in the top N highest-scoring commitments, it's likely to be final.
A malicious sequencer could grief, but it is very unlikely that their block would win in that case, for the following reasons:
The block commitment contains a transcript of the rollup proof, effectively assigning one prover to a given part of the tree. The proposer's score is dependent on that prover constructing that proof. If they do not (because the block was not revealed), the score will be lower as a result.
There is a higher cost to this attack, as it would require submitting 2 transactions on chain (one for the block commitment and one for the proof). The cost to reverse incorrect slashing for an honest prover is only 1 transaction, so the attack is unlikely to prevail.
Since the provers need to vote blindly on the parts of the block, I think TXs withheld by the sequencer can be problematic. The sequencer can withhold a tx pertaining to a part of the block one “attacked prover” has voted on. If raw TXs are not published on L1 with the block proof, the attacked prover will be slashed, because they won't be able to generate the valid proof to revert the slashing, never having had access to the withheld TX. If you have enough “score legroom” as a proposer, you'd be able to choose to slash specific provers of competitors. If the raw TXs are published on L1, then the time window for reverting slashings has to be sufficiently large to allow the prover to do the proving work after the block has been submitted to L1.
Another idea is that the ability to revert a slashing opens the ability for provers to manipulate the scores of blocks: they commit to blocks of other proposers, then don't provide the proof to the proposer, and just revert the slashing afterwards. The proposer's score will be affected by this, and provers can use this mechanism to choose which proposer wins. I'm assuming the revert of slashings happens after the PROOF_ACCEPTANCE_WINDOW, when the winning block has already been determined.
I don’t see that provers need to vote blindly. The proposals are shared on the L2 gossip network, so the provers can compute the commitment and see if it matches what is on L1. Only if the block is available to you will you want to vote on the block and share this intent on the gossip network; otherwise, just ignore the proposal.
A commitment to the order of transactions in the block (a random secret is used as this will be revealed later).
This protects the MEV-extracting ordering from being disclosed before provers have committed, but it has the side effect of making the provers vote blindly. If the provers only voted for blocks they know, they'd have to be controlled by the sequencer if the sequencer doesn't want to disclose the tx selection and ordering.
The only issue with provers voting blindly is that they may vote on a block they can't produce. I am not too worried about this, for the following reasons:
There is a cost to the proposer to sustain this attack. They have to commit a transaction to L1, and get a block made and published on L1.
If a proposer does manage to do (1), an honest prover can always create a proof after the fact to reverse any slashing resulting from the griefing proposer. The data will be available at that point.
An all pays prover model will also likely detract from this occurring.
If this is deemed an issue, the transaction order can be made public along with the block transcript commitment on L1. The main goal is that the proposal on L2 is hidden to prevent gazumping of a profitable ordering.
This is very unlikely after the commitment is broadcast to L1, as such a commitment has amassed hundreds of unique prover votes which a new proposal will struggle to do. (A vote contains the block hash, which is unique to the proposer sending it)
Is coordinating votes on L1 not very costly? For example, if you have 100 entities voting per block that’s 100 L1 transactions whose cost will have to be borne by L2 tx fees.
Even a basic voting protocol will likely change 2 state variables (~50k gas per tx), so we could be looking at 5 million gas per block just for coordinating votes for 100 participants.
One random thought is: what happens if we change B52 from voting per-block to being more of a prediction market?
The votes are proved via a ZK SNARK verified on L1, so there is a fixed cost to verify. This is amortised with the rest of the network's computation, so only 1 SNARK verification is needed for each block, around ~300-400k gas.
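A rough comparison under the figures in this exchange (~50k gas per naive on-chain vote vs a single ~350k-gas verification; both are the discussion's estimates, not measurements):

```python
def naive_vote_cost(num_voters: int, gas_per_vote: int = 50_000) -> int:
    # Every voter sends their own L1 transaction, so cost grows linearly.
    return num_voters * gas_per_vote

def snark_vote_cost(verification_gas: int = 350_000) -> int:
    # One SNARK attests to all votes, so the L1 cost is flat in voter count.
    return verification_gas

print(naive_vote_cost(100))  # 5,000,000 gas
print(snark_vote_cost())     # 350,000 gas
```

The crossover is at about 7 voters, so the amortised approach wins for any realistically sized prover set.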