Increase Ignition Queue Throughput

The impact this has on rewards to major providers is a fair point, but it must be weighed against the loss of rewards to token holders who have delegated. Delegators stand to lose roughly 20x what operators do, since most operators are at 5% commission.
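The "roughly 20x" figure follows from the commission split; a minimal sketch of the arithmetic, assuming rewards are divided purely by a flat 5% commission (actual reward mechanics may differ):

```python
# At a 5% commission, delegators collectively keep 95% of rewards
# and the operator keeps 5%, so delegated rewards are ~19x (roughly
# "20x") the operator's cut. This is the full arithmetic behind the claim.
commission = 0.05
delegator_to_operator_ratio = (1 - commission) / commission
print(delegator_to_operator_ratio)
```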

May I ask what percentage of your provider’s total stake to your knowledge comes from investments by Sigma Prime, employees, institutional partners, and related entities?

Correct. VOTE NO. There is a massive incentive misalignment. Voting yes will actively harm ROI for solo stakers, wrecking the early upside they earned by taking on the greatest risk.

1 Like

I am writing in support of a YES vote. I’m an early contributor currently in the ~80-day queue.

The core issue is contradictory requirements: the protocol mandates that many ICO participants stake during their 3-month lockup, yet the current queue forces a 2-month wait to fulfill that mandate. As @emrepiconbello noted, this traps early contributors who are trying to stake but are mechanically prevented from doing so.

Furthermore, this situation seems to create an unnecessary and unfair unlock-timing risk. If the queue extends to 4 months but TGE occurs in 3, users mandated to stake at the back of the queue will be forced into an extended lockup simply due to congestion. A variant of the same concern: users who haven't realized they need to stake in order to unlock for TGE will enter the staking queue at TGE and face multiple months of extra lockup! Why treat early participants arbitrarily poorly when everything else about Ignition and the ICO was so carefully crafted to be community-first? This resolution simply corrects a design oversight. I encourage node operators to align with @realbigsean (Sigma Prime) and vote YES, prioritizing long-term protocol and community interests over short-term reward rates.

2 Likes

Genesis purchasers took on the largest risk, not only through their sale terms with a 12-month lockup but also by participating in the testnets with the associated resource costs. They were incentivized to purchase before the public sale with the idea of (besides a cheaper FDV purchase price) getting in early and securing high rewards for at most three months, to offset the treasury risk of being forced long on the token for 12 months (while public participants have liquidity in February). Participants made financial decisions on this expectation, and changing it so soon feels like a bit of a rug now that the public sale has been a huge success.

I struggle to see the potential consensus risk when the code is extremely stable and resource requirements are very lightweight. Nearly all epochs are at 100% participation and the slashing mechanism is working correctly. Sequencers are not technically fragile - you either have the process running and the correct peering ports open or you don't. Is there really a danger that on the next rollup a large minority will be offline? Most of the queue is made up of infrastructure enterprises with very low slashing risk.

The genesis validators are likely the most involved, interested, and technically mature participants in the sequencer set, and I feel their impact should be rewarded as originally designed, with the APY left to deflate naturally (as it already is) under the current settings. Additionally, I don't see much evidence that staking rewards were a primary motivating factor for public sale purchasers, or that they will feel significantly aggrieved if their delegate hasn't even joined the set by February.

The slashing and rejoining wait time is extremely punishing right now admittedly, but I think this hardens consensus and lowers halt risk as existing genesis stakers and providers are heavily incentivized to monitor accurately and prevent slashing incidents.

I would prefer signalling for speeding up the queue after the February unlock vote.

6 Likes

Genesis sequencers are capped at 1M Aztec token participation, so Sigma Prime has invested 7 ETH for this amount and self-delegated. We have no institutional partners or related entities we are staking for. We’ve done no business development while acting as a staking provider.

For employees, we circulated awareness of the auction but have no sort of program for participation or means of tracking (or desire to track) investment or delegation here.

We’re all commenting on the obvious lockup-period staking yield distribution implications of this, but I’m more concerned about the second-order effect of a months-long queue on ICO participants’ more fundamental ability to unlock at TGE. Being prohibited from participating in a few months of high-yield staking is one thing; realizing at TGE that you need to enter a months-long queue before your tokens unlock is a MUCH nastier surprise.

Could someone with deeper knowledge of the current contracts clarify what will happen, if no change is made, to ICO participants with >200k tokens who are unaware of their requirement to stake at TGE? Does the requirement to stake evaporate, or will they learn when they attempt to transfer that they must first enter a potentially months-long queue to stake before they’re allowed to transfer?

As a Contributor-participant, I can say I read everything I could about Aztec’s ICO ahead of time (I’ve been waiting since the day ZK-Money went down), studied the Uniswap CCA whitepaper and mechanics, and became an active participant on the Aztec Discord … and still I had no idea there was a requirement to stake until I initiated the process of staking! This concern doesn’t affect me, but I’m sure there are hundreds or thousands of users who never attempted to stake and aren’t aware of the requirement to do so. If they learn about it in the days before TGE and simultaneously realize they need to enter a months-long queue to fulfill that staking requirement … that’s a poorly designed system, would be a huge problem for many, and is totally independent of the staking yield distribution issue this discussion has focused on.

Maybe my point is that EITHER the queue needs to be sped up to be days long, OR the ICO participants’ requirement to stake should be eliminated at TGE. But that requirement was implemented for a good reason: to get large buyers more deeply invested in chain mechanics and to introduce a couple of reasonable hurdles/speed bumps to dumping at TGE. That philosophy remains valid, so I think fixing the queue before TGE is a better idea than invalidating the requirement to stake at TGE. If genesis participants prioritize guarding their staking yield over using the chain’s lockup/warmup period to fix issues like this, then perhaps the queue fix could be implemented closer to TGE.

We (Aztec-Scan) support this proposal. We believe increasing the flush rate is most beneficial to the network and fair to its participants: it allows more stake to be locked in the network, signalling strong trust through community participation.

The concerns about small stakers being impacted the most by this change are valid, with stake concentrating in the larger providers (yes, we are one of those) rather than landing with smaller operators. However, we believe this is something that can be remedied in the future, for example by delegated/liquid staking pools incentivizing for decentralization and network health and not only for profit.

To make sure I understand the proposal correctly: is the normalFlushSizeQuotient applied only forward-looking from the proposal execution, or are previously passed sequencer-set thresholds taken into account retroactively?

Concretely, if at execution time there are ~1,500 active sequencers (hypothetically), should we expect:

Option A (forward-looking, continuous division):

  • At execution (1,500 active): 1,500 / 400 = 3.75 → 3 sequencers per epoch (floor function)

  • At 1,600 active: 1,600 / 400 = 4 sequencers per epoch (capped by maxQueueFlushSize)

  • At 1,800 active: 1,800 / 400 = 4.5 → 4 sequencers per epoch (remains capped)

  • At 2,000+ active: Still 4 sequencers per epoch (remains capped)

Option B (threshold-based with 400-sequencer steps):

  • At execution (1,500 active): Flush rate moves to 2 sequencers per epoch (first threshold at 1,200 crossed)

  • At 1,600 active (+100): Flush rate increases to 3 sequencers per epoch (second threshold crossed)

  • At 2,000 active (+400): Flush rate increases to 4 sequencers per epoch (third threshold, capped)

  • At 2,400+ active: Remains at 4 sequencers per epoch (max reached)

Option C (retroactive calculation including queued validators):

  • At execution (~1,500 active, ~3,000 in queue): If the system considers that thresholds at 800, 1,200, 1,600, 2,000, 2,400, and 2,800 have already been “crossed” by the queue progression, the flush rate would immediately jump to 7 sequencers per epoch, but capped to 4 by maxQueueFlushSize

  • All subsequent stages: Remains at 4 sequencers per epoch (already at maximum)

My understanding: Based on the formula Math.max(activeSequencers / normalFlushSizeQuotient, normalFlushSizeMin), I believe it’s Option A (simple real-time division of active sequencers), which means:

  • The flush rate adjusts dynamically every epoch

  • Only the current number of active sequencers matters (not queued ones)

  • At 1,500 active sequencers, we’d immediately see 3/epoch, not 2/epoch

Could you confirm which interpretation is correct?

Thanks for clarifying!

Your interpretation is correct. It is Option A, simple real-time division of active sequencers.
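For anyone following along, the confirmed Option A behavior can be sketched in a few lines. This is a hedged illustration only: the parameter names mirror the thread (`normalFlushSizeQuotient` = 400, `maxQueueFlushSize` = 4), while `normalFlushSizeMin` = 1 is an assumed value, and the actual contract implementation may differ in details:

```python
def flush_size(active_sequencers: int,
               quotient: int = 400,      # normalFlushSizeQuotient (per the thread)
               flush_min: int = 1,       # normalFlushSizeMin (assumed value)
               flush_max: int = 4) -> int:  # maxQueueFlushSize (per the thread)
    """Option A: simple real-time division of active sequencers, floored,
    bounded below by the minimum and above by the maximum flush size.
    Only currently active sequencers matter; queued ones are ignored."""
    return min(max(active_sequencers // quotient, flush_min), flush_max)

# Reproduces the worked figures from the question above:
print(flush_size(1_500))  # 3  (1,500 / 400 = 3.75, floored)
print(flush_size(1_600))  # 4  (hits the cap)
print(flush_size(2_000))  # 4  (remains capped)
```

The flush rate recomputes from the live active-set size each epoch, which is why there are no discrete 400-sequencer "threshold crossings" as in Options B and C.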

Seconding this line of thought. I’m looking for some clarification on this too. Where is the requirement to stake stated and does it include disclosure about the queue?

Hello,

I created a Dune dashboard to help the staker community make informed decisions on Aztec Governance proposals. It can be used as a complementary data source to Dashtec. In general, it displays information about the current payload and proposal lifecycle (coming soon as governance kickstarts), and provides an overview of network “solo” sequencer distribution and staking power (coming soon as the GSE starts receiving deposits) versus existing delegated providers.

This is still a WIP as we wait for some of the on-chain components in the governance lifecycle to start receiving activity. Any feedback is welcome.

1 Like

The requirement is stated in the auction Terms and Conditions: Auction Terms of Sale.

In more practical UX terms, I only became aware of the requirement through frontend prompts at the staking dashboard (https://stake.aztec.network/) after I initiated the process to stake. I suspect many users are still in the dark. That’s not great, but not necessarily a problem as long as folks are allowed to stake (to unlock) when they eventually realize they need to.

Regarding pre-auction disclosure of staking queue mechanics, I’m sure the technical information was available, and by making educated guesses about the number of auction participants and the portion that would stake, one could have guesstimated the queue duration before/during the sale. But it seems likely few outside the project devs actually understood the ramifications of the initial queue variable setpoints. During the auction I asked the new AI tool on the Aztec support website to explain staking/unstaking mechanics and how long it would take to unstake. It didn’t know (actually a nice feature to admit that instead of hallucinating). Other SOTA models guessed at stake/unstake mechanics for me but obviously weren’t able to locate or process any project-specific information.

All terms to date seem fair to me! But if no staking queue changes are made, it doesn’t take much imagination to picture many very upset users who come back at TGE to celebrate and monitor project progress, only to realize that while most of the supply unlocks, they need to enter a months-long queue before any of their tokens are transferable. That seems like a nasty own goal that we can easily avoid thanks to the work of @mitch and others.

1 Like

Hi, newbie here. Landed here from Discord as I was looking for information simply to understand why the sequencer to which I delegated stake is still in queue.

  1. The dashboard at https://stake.aztec.network/ does not give any information besides just stating the status as being “In Queue”.
  2. Hence, we don’t know where in the queue we are, and how long we will be in there.
  3. Should we un-stake and re-stake with a different sequencer for instance?

Independently of the outcome of the vote, could anyone in a position to do so document what is going on with this queue, and explain the implications for those who have delegated their stake to a sequencer that is still waiting in queue?

Lastly, is there anything we can do to improve our current status?

Apologies, If this has already been documented somewhere and I missed it.

If you’re delegating, you don’t delegate to a particular sequencer that has already cleared the queue; you delegate to a provider who will provision new sequencer(s) (at the end of the queue) for your stake. Undelegating and re-delegating will only lengthen your time until live.

1 Like

This payload received several rounds of sufficient support (see the dashboard put together by @santteegt ).

It was submitted to governance on Jan-02-2026 09:45:23 AM UTC, so voting starts on Jan-05-2026 09:45:23 AM UTC and ends a week later. After that, there is an additional week delay before the proposal can be executed (if it passed the vote).

EDIT: this also means that sequencers can stop signaling for the payload.

1 Like

Thanks! So each delegation/stake has a dedicated sequencer?

I’m not sure how the staking dashboard handles portions of less than 200K, or if that’s even possible, but effectively yes: for every 200K delegated, a new sequencer will be created for the delegator.
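A minimal sketch of the provisioning rule described above, assuming one sequencer per full 200K tokens delegated (the handling of sub-200K remainders is uncertain, as noted):

```python
def sequencers_provisioned(delegated_tokens: int,
                           stake_per_sequencer: int = 200_000) -> int:
    """One new sequencer is provisioned per full 200K tokens delegated
    (assumed granularity; remainder handling is unclear per the thread)."""
    return delegated_tokens // stake_per_sequencer

print(sequencers_provisioned(200_000))    # 1
print(sequencers_provisioned(1_000_000))  # 5  (the 1M genesis cap)
```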

A semi-strong yes from us. I do think that increasing decentralization should prevail above all at this stage. So yes: more validators per epoch, more “throughput.”

But…

The assumption here is that everyone is honest. I’ve been running nodes since BitShares, so 2015, and working in the industry a long time (over 10 years). Alas and alas. It might not be the case…

I also think that if the mission here is to decentralize stake, then fixing the 0 shown on the main explorer for some validators, such as ourselves and others, is more important (and quicker). It achieves the same thing the proposal has in mind. Overall stake is not being distributed to the operators that made that error while setting up (which isn’t really an error, btw), because new users will avoid delegating to them. The stake goes to the same validators. In turn, this means that whether we flush 1 or 4 per epoch, it won’t change much if all the stake ends up in one place.