How would you scale Aztec?

Generating a proof for non-membership is a well-known scaling problem in zk-private projects.

How is Aztec going to solve this issue? Suppose there are 1B transactions, or more. Which party would generate the proof? Is it the user, as in Tornado? How much data would a user need to download?


Great question! The Aztec node software, via a sequencer, will provide the proof of non-membership.

A user is responsible for proving correct transaction execution, including correct computation of nullifiers, in client-side proofs.
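As a rough illustration of why the same nullifier always results from spending the same note (the hash function, domain tag, and field names here are placeholders, not Aztec's actual derivation, which uses a circuit-friendly hash inside a zk proof):

```python
import hashlib

def nullifier(note_commitment: bytes, nullifier_secret: bytes) -> bytes:
    # Placeholder derivation: a real system uses a circuit-friendly hash
    # (e.g. Poseidon) evaluated inside the client-side proof, not SHA-256.
    return hashlib.sha256(b"nullifier" + note_commitment + nullifier_secret).digest()

# Spending the same note twice yields the same nullifier, so the rollup's
# non-membership check is what catches double spends.
n1 = nullifier(b"note-abc", b"owner-secret")
n2 = nullifier(b"note-abc", b"owner-secret")
assert n1 == n2
```

The point is only that the nullifier is deterministic in the note and the owner's secret, so the network can reject a repeat without learning which note was spent.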

The rollup circuit is proved by the proving network; for each nullifier, it checks non-membership of the nullifier in the nullifier tree, then inserts it. You can read more here: Indexed Merkle Tree | Privacy-first zkRollup | Aztec Documentation
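To see why indexed Merkle trees make non-membership cheap, here is a toy sketch of the "low leaf" trick from the linked docs. Each leaf stores a value plus a pointer to the next-larger value, so absence of `v` is shown by one leaf that straddles it. All Merkle hashing and path proofs are omitted; the names and the linear scan are illustrative only, not Aztec's implementation:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Leaf:
    value: int       # nullifier stored at this leaf
    next_value: int  # next-larger nullifier in the tree (0 = none larger)

def find_low_leaf(leaves: List[Leaf], v: int) -> Leaf:
    """Find the 'low leaf': the largest stored value strictly below v."""
    return max((l for l in leaves if l.value < v), key=lambda l: l.value)

def prove_non_membership(leaves: List[Leaf], v: int) -> bool:
    low = find_low_leaf(leaves, v)
    # v is absent iff it falls strictly between low.value and low.next_value
    # (next_value == 0 marks the largest leaf, so anything above qualifies).
    return low.value < v and (low.next_value == 0 or v < low.next_value)

def insert(leaves: List[Leaf], v: int) -> None:
    assert prove_non_membership(leaves, v)  # the circuit checks this first
    low = find_low_leaf(leaves, v)
    leaves.append(Leaf(value=v, next_value=low.next_value))
    low.next_value = v                      # re-link the low leaf

leaves = [Leaf(0, 0)]                       # initial "zero" leaf
for n in (30, 10, 20):
    insert(leaves, n)
assert prove_non_membership(leaves, 15)     # 15 sits between 10 and 20
assert not prove_non_membership(leaves, 20) # 20 is already in the tree
```

In the real tree the prover supplies only the low leaf and its Merkle path, so the check is a couple of hash-path verifications instead of a scan over every stored nullifier.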

TL;DR: an Aztec node running sequencing or proving software will need all the data; a user will just need their own data to construct proofs.

There is a related scaling issue, how a user discovers all their data (UTXOs), that we are actively exploring, since a brute-force sync over 1B transactions will not work.