Let me try to recap how epochs would work, based on the Vitalik post you shared, and how initialization nullifiers (or contract instance deployments, which are the same thing) would be used, so we can understand the problem here.
- Addresses will have to include an epoch identifier. We could force the first few bits of the address to represent it (assuming 1-year epochs, just 4 bits cover the next 16 years of the Aztec network, and 4 bits should be easy to “mine”), or maybe include it in the preimage (which may require more hashing operations).
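As a rough sketch of the first encoding option, here is how reading and “mining” a 4-bit epoch prefix could look. Everything here (`epoch_of`, `mine_address`, the 256-bit address width) is an illustrative assumption, not an actual Aztec API:

```python
# Toy sketch: a 4-bit epoch identifier stored in the top bits of a
# 256-bit address. "Mining" here just forces the top bits; in practice
# it would mean grinding the address preimage until the hash lands in
# the right range.

ADDRESS_BITS = 256
EPOCH_BITS = 4

def epoch_of(address: int) -> int:
    """Read the epoch identifier from the top EPOCH_BITS of the address."""
    return address >> (ADDRESS_BITS - EPOCH_BITS)

def mine_address(preimage_hash: int, epoch: int) -> int:
    """Force the top bits of a candidate address to the given epoch."""
    mask = (1 << (ADDRESS_BITS - EPOCH_BITS)) - 1
    return (epoch << (ADDRESS_BITS - EPOCH_BITS)) | (preimage_hash & mask)
```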
- When deploying a contract at an address, if the address epoch identifier is the current one, just emit the initialization nullifier in the current tree as usual, checking for duplicates. If the epoch is a future one, fail. If the epoch is a past one, prove for every single nullifier tree since that epoch that the nullifier was never emitted, and emit it in the current one. This last case is the only nasty one, but it’s only a problem when executing a deployment for a counterfactual address that took a very long time to be executed, which is very uncommon.
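The deployment rules above can be sketched as follows, modeling each epoch’s nullifier tree as a plain set (a toy model; `deploy_check` and the `trees` dict are hypothetical names):

```python
def deploy_check(address_epoch: int, current_epoch: int, nullifier: str, trees: dict) -> None:
    """trees maps epoch -> set of nullifiers (toy model of per-epoch nullifier trees)."""
    if address_epoch > current_epoch:
        raise ValueError("address epoch is in the future: fail")
    if address_epoch < address_epoch + (current_epoch - address_epoch):
        # Past epoch: prove non-inclusion in every nullifier tree since that epoch.
        for e in range(address_epoch, current_epoch):
            if nullifier in trees.get(e, set()):
                raise ValueError("nullifier already emitted in a past epoch")
    # Current tree: check for duplicates as usual, then emit.
    if nullifier in trees.setdefault(current_epoch, set()):
        raise ValueError("duplicate nullifier")
    trees[current_epoch].add(nullifier)
```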
- When calling into a contract at an address, look up the epoch in which its initialization nullifier was emitted (this needs to go back at most to the epoch of the address, and may require indexing services for very old addresses), produce a proof of membership in that epoch’s nullifier tree, and another one for that tree root in the epochs tree. Producing the proof against the old nullifier tree may be expensive, since it requires storing all of that tree’s data. However, it can be cached: once a user or app knows they’ll interact with a given old address, they can store the membership proof for that address and reuse it as needed.
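The lookup-and-cache idea could look roughly like this, again with per-epoch sets standing in for nullifier trees, and the discovered epoch standing in for a real Merkle membership proof (`find_initialization_epoch` and `proof_cache` are hypothetical names):

```python
# Toy sketch: find which epoch an initialization nullifier was emitted in,
# walking back at most to the epoch encoded in the address, and cache the
# result so the expensive historic lookup happens only once.

proof_cache: dict = {}  # nullifier -> epoch where it was found (stand-in for a cached proof)

def find_initialization_epoch(nullifier: str, address_epoch: int,
                              current_epoch: int, trees: dict):
    if nullifier in proof_cache:
        return proof_cache[nullifier]
    for e in range(current_epoch, address_epoch - 1, -1):
        if nullifier in trees.get(e, set()):
            proof_cache[nullifier] = e
            return e
    return None
```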
- When making a public call into a contract at an address, the flow is the same as above, but it means the sequencer needs to store all historic tree data. Even worse, when making a public call into a contract at an address that has not been initialized, we need to prove that the initialization nullifier was never emitted. This requires producing proofs of non-inclusion for every single historic nullifier tree since the address’s epoch.
This last item seems to be the nastiest one, as it puts a huge burden on sequencers. I can think of two ways around it:
- Having a way to “refresh” an initialization nullifier by moving it to the current epoch, similar to what you suggest in your original message. If an initialization nullifier is not present in the current epoch or the immediately previous one, the sequencer must then fail the transaction. Sequencers could automatically “refresh” initialization nullifiers read from the immediately previous epoch into the current one, so contracts accessed at least once per year just work without extra effort. For older nullifiers, a private transaction needs to be crafted that includes the historic proofs.
- Putting the burden on the sender. Transactions would be required to include strict access lists for public execution (which would also help with parallelization in the future), and the private kernel circuit would check the correct initialization of every address in the list, so the sequencer doesn’t have to.
Drilling into the first option, how do we “refresh” a nullifier? We definitely don’t want duplicate nullifiers across historic trees, as that would break a very fundamental invariant. Maybe we could emit a different nullifier that’s linked to the original one, something like `hash(new_epoch, original_nullifier)`. This way, proving that an initialization nullifier has been emitted requires showing either the actual nullifier or a “refresh” nullifier.
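A minimal sketch of that linkage, using sha256 as a stand-in for the circuit hash (`proves_initialized` is a hypothetical helper):

```python
import hashlib

def h(*parts) -> str:
    """Stand-in for the circuit hash."""
    return hashlib.sha256("|".join(map(str, parts)).encode()).hexdigest()

def refresh_nullifier(epoch: int, original: str) -> str:
    """A refresh nullifier is linked to the original via hash(epoch, original)."""
    return h(epoch, original)

def proves_initialized(original: str, shown: str, shown_epoch=None) -> bool:
    """Accept either the original initialization nullifier itself, or a
    refresh nullifier provably derived from it."""
    if shown == original:
        return True
    return shown_epoch is not None and shown == refresh_nullifier(shown_epoch, original)
```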
This “refresh” could be orchestrated as just another method in the canonical `ContractDeployer` contract, something like:
```
fn emit_refresh_nullifier(
    original_nullifier,
    optional_previous_refresh_nullifier,
    optional_previous_refresh_epoch,
    optional_original_nullifier_membership_proof,
    optional_previous_refresh_nullifier_membership_proof
):
    refresh_nullifier = hash(current_epoch, original_nullifier)

    // If the nullifier was emitted in the epoch before this one, just emit the refresh one
    if original_nullifier in previous_epoch:
        emit_nullifier(refresh_nullifier) and return

    // If the nullifier was already refreshed, and the refresh was in the previous epoch, emit the new refresh
    if optional_previous_refresh_nullifier in previous_epoch:
        assert hash(optional_previous_refresh_epoch, original_nullifier) == optional_previous_refresh_nullifier
        emit_nullifier(refresh_nullifier) and return

    // Otherwise, go with a historic proof of the original nullifier
    if is_valid(optional_original_nullifier_membership_proof, original_nullifier, historic_nullifier_tree_roots):
        emit_nullifier(refresh_nullifier) and return

    // Or one for the previous refresh nullifier
    if is_valid(optional_previous_refresh_nullifier_membership_proof, optional_previous_refresh_nullifier, historic_nullifier_tree_roots):
        assert hash(optional_previous_refresh_epoch, original_nullifier) == optional_previous_refresh_nullifier
        emit_nullifier(refresh_nullifier) and return
```
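For completeness, the whole refresh flow can be modeled end to end in Python. This is a toy sketch, not the circuit: per-epoch sets stand in for nullifier trees, sha256 for the hash, and set membership for both the previous-epoch checks and the historic membership proofs:

```python
import hashlib

def h(*parts) -> str:
    return hashlib.sha256("|".join(map(str, parts)).encode()).hexdigest()

class Trees:
    """Per-epoch nullifier sets standing in for the nullifier trees."""
    def __init__(self):
        self.epochs = {}  # epoch -> set of nullifiers
    def emit(self, epoch, n):
        self.epochs.setdefault(epoch, set()).add(n)
    def contains(self, epoch, n):
        return n in self.epochs.get(epoch, set())
    def historic_contains(self, n):  # stands in for a historic membership proof
        return any(n in s for s in self.epochs.values())

def emit_refresh_nullifier(trees, current_epoch, original,
                           prev_refresh=None, prev_refresh_epoch=None):
    refresh = h(current_epoch, original)
    prev_epoch = current_epoch - 1
    # Original nullifier emitted in the immediately previous epoch.
    if trees.contains(prev_epoch, original):
        trees.emit(current_epoch, refresh); return refresh
    # A linked refresh nullifier emitted in the immediately previous epoch.
    if prev_refresh is not None and trees.contains(prev_epoch, prev_refresh):
        assert h(prev_refresh_epoch, original) == prev_refresh
        trees.emit(current_epoch, refresh); return refresh
    # Historic proof of the original nullifier.
    if trees.historic_contains(original):
        trees.emit(current_epoch, refresh); return refresh
    # Historic proof of a previous refresh nullifier.
    if prev_refresh is not None and trees.historic_contains(prev_refresh):
        assert h(prev_refresh_epoch, original) == prev_refresh
        trees.emit(current_epoch, refresh); return refresh
    raise ValueError("no valid proof that the original nullifier was emitted")
```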