| Author | Mitchell Tracy (Aztec Labs) |
|---|---|
| Payload Address | 0x05d2a884760f801c1c59369f6fe576132e8ef96c |
| Proposal ID | 0 |
## Simple Summary
This proposal increases the current throughput of the entry queue, allowing validators to join the set more quickly. It also decreases the maximum possible queue flush rate.
## Motivation
The relevant current parameters are:
| Key | Value |
|---|---|
| bootstrapValidatorSetSize | 500 |
| bootstrapFlushSize | 500 |
| normalFlushSizeMin | 1 |
| normalFlushSizeQuotient | 2048 |
| maxQueueFlushSize | 8 |
| aztecSlotDuration | 72 (seconds) |
| aztecEpochDuration | 32 (slots) |
The function of the queue is divided into "bootstrap" and "normal" modes. In "bootstrap" mode, no one is flushed into the rollup until bootstrapValidatorSetSize validators are in the queue. At that point, up to bootstrapFlushSize validators may be added per epoch until there are bootstrapValidatorSetSize validators in the set, at which point the system is considered "bootstrapped" and the queue enters "normal" mode.
In "normal" mode, the number of validators that may be added per epoch is:

```
Math.min(
  Math.max(_activeAttesterCount / config.normalFlushSizeQuotient, config.normalFlushSizeMin),
  config.maxQueueFlushSize
)
```
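For concreteness, here is a minimal Python sketch of the flush logic across both modes, using the parameter names from the table above. This is an illustration of the behaviour described here, not the on-chain implementation:

```python
# Minimal sketch of the per-epoch flush logic; illustrative only.
CONFIG = {
    "bootstrapValidatorSetSize": 500,
    "bootstrapFlushSize": 500,
    "normalFlushSizeMin": 1,
    "normalFlushSizeQuotient": 2048,
    "maxQueueFlushSize": 8,
}

def flush_size(active_attesters: int, queue_length: int, cfg=CONFIG) -> int:
    if active_attesters < cfg["bootstrapValidatorSetSize"]:
        # "Bootstrap" mode: nothing is flushed until the queue holds enough
        # validators to bootstrap the set, then up to bootstrapFlushSize per epoch.
        if queue_length < cfg["bootstrapValidatorSetSize"]:
            return 0
        return cfg["bootstrapFlushSize"]
    # "Normal" mode: scale with the active set, clamped between the min and max.
    return min(
        max(active_attesters // cfg["normalFlushSizeQuotient"], cfg["normalFlushSizeMin"]),
        cfg["maxQueueFlushSize"],
    )

print(flush_size(827, 2911))  # 1 under the current parameters
```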
This produces the following characteristic (only looking at "normal" mode):
At the time of writing, there are 827 active validators in the rollup and 2,911 validators in the queue, so 1 validator is being added per epoch. At this rate, it will take about 77 days for a validator entering the queue today to join the set. This results in poor network participation, and the problem will worsen as the queue grows, leaving validators stuck in the queue ahead of the alpha upgrade next year.
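The 77-day figure can be checked directly from the parameters. Note that even once every queued validator joins, the active set (at most 3,738) never reaches 2 × 2048, so the flush rate stays at 1 per epoch throughout:

```python
# Back-of-the-envelope check of the drain time under the current parameters.
active, queued = 827, 2_911
epoch_seconds = 32 * 72  # aztecEpochDuration * aztecSlotDuration
flush_per_epoch = min(max(active // 2048, 1), 8)  # = 1
days = queued / flush_per_epoch * epoch_seconds / 86_400
print(f"~{days:.1f} days")  # ~77.6 days
```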
The primary purpose of the queue is to prevent the validator set from being overwhelmed by validators that are misconfigured or offline, and thereby unable to participate in block production or governance.
If the rollup were to take in these "dead" validators faster than they could be removed via slashing, it could put the network in a situation where more than 1/3 of the validators in the set are dead, which is difficult to recover from.
The Ignition network has had extremely good participation in terms of block production and attestations, often hitting 100% on both metrics in an epoch. Further, slashing is working as expected, which gives confidence in the network's ability to withstand dead validators.
Based on this, and given our expertise as the authors of the L1 contracts and the Aztec client, we (Aztec Labs) believe it is safe to assume that 95% of the current validators are online and that at most 60% of the validators in the queue are offline. We therefore feel it is safe to increase the throughput of the queue to:
| Key | Value |
|---|---|
| normalFlushSizeMin | 1 |
| normalFlushSizeQuotient | 400 |
| maxQueueFlushSize | 4 |
This would produce the following characteristic (only looking at "normal" mode):
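As a rough deterministic sketch of that characteristic, the loop below drains today's queue under the proposed parameters, assuming every flushed validator comes online and ignoring exits and slashing:

```python
# Deterministic drain of today's queue under the proposed parameters,
# assuming every flushed validator activates and none exit or are slashed.
active, queued, epochs = 827, 2_911, 0
while queued > 0:
    flush = min(max(active // 400, 1), 4)  # proposed quotient and max
    flush = min(flush, queued)
    active += flush
    queued -= flush
    epochs += 1
days = epochs * 32 * 72 / 86_400
print(f"{epochs} epochs (~{days:.0f} days)")  # 855 epochs (~23 days)
```

Under those optimistic assumptions the queue drains in roughly a third of the time it would take today; the Monte Carlo simulations below account for offline validators.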
Here are Monte Carlo simulations demonstrating how this would play out, assuming this proposal were enacted at the time of writing and that 60% of the validators presently in the queue (and of those joining in perpetuity) were offline.
These simulations may be reproduced using the code here, with the command:

```
uv run main.py --mode monte_carlo --quotient 400 --max-flush 4 --queue-dishonest 0.6 --runs 25 --epochs 6000 --validators 827 --honest-ratio 0.95
```
## Signaling/Voting
To signal for this change, please use the following admin API command on your node.
The payload address is 0x05d2a884760f801c1c59369f6fe576132e8ef96c.
```
curl -X POST http://localhost:8880 \
  -H 'Content-Type: application/json' \
  -d '{
    "jsonrpc":"2.0",
    "method":"nodeAdmin_setConfig",
    "params":[{"governanceProposerPayload":"0x05d2a884760f801c1c59369f6fe576132e8ef96c"}],
    "id":1
  }'
```
## Simulating Payload Execution
The source for the payload is here. Simulating execution by governance can be done via:
```
forge script FasterIgnitionSim -vvvv --rpc-url your_rpc
```
The trace shows a single write to 0xE525c64ee3bb9a0ed18e42504d313128ed19fD31 (ValidatorOperationsExtLib), and that the entry queue flush size after execution is 2.
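That post-execution flush size can be sanity-checked against the formula, assuming the active attester count is still 827 when the payload executes:

```python
# Expected flush size under the proposed parameters with 827 active attesters.
flush = min(max(827 // 400, 1), 4)
assert flush == 2
```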


