Proposal: abstracting contract deployment

Shoutout to @spalladino for the idea to remove the contracts tree, and to adopt a “classes & instances” approach to contracts.

Shoutout to Starkware for lovely ideas relating to contract classes & instances.

This proposes quite a drastic change to contract deployment on Aztec.

One of my goals is to simplify the core protocol, so whenever I need to coin a new concept, I make sure to include the word “abstraction” in there.

See the end for pros & cons.


  • Separate contracts into contract classes and contract instances.
    • A contract class is roughly bytecode / function data / vk data.
    • A contract instance establishes a contract address, and storage space for the contract.
    • Classes are “declared”.
    • Instances are “deployed”.
  • Remove the contracts tree.
    • Piggy-back on the nullifier tree instead when declaring contract classes and deploying new instances.
  • Contract class declaration logic and contract instance deployment logic are moved from the kernel circuit into an app.
    • This enables us to piggy-back on existing nullifier and event functionality.
    • It also enables a constructor function to be called by another contract.
      • That is, it enables “contracts deploying other contracts”.
  • When declaring a class:
    • A nullifier is emitted, which captures all of the data about the class.
    • Data relating to the contract class is broadcast to the network via a conventional event.
  • When deploying an instance:
    • A nullifier is emitted, which captures the underlying class, the constructor args, the deployer, the new contract address, (and various public keys, pending further keys discussion).
      • The contract address could even be this nullifier (pending further discussions relating to keys).
    • Data relating to the new contract instance is broadcast to the network via a conventional event.
  • When executing a private function:
    • The function selector and vk are looked up from the function tree (which is embedded within the contract class id, which is embedded within the contract instance, which is embedded within the contract instance’s nullifier, which was emitted at deployment).
      • This is just a merkle membership proof of a leaf in the nullifier tree.
  • When executing a public function / unconstrained function:
    • The bytecode is looked up from the contract class id, which is embedded within the contract instance, which is embedded within the contract instance’s nullifier (which was emitted at deployment).
      • This is just a merkle membership proof of a leaf in the nullifier tree.
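To make the lookup chain above concrete, here is a toy Python sketch of how executing a function reduces to a single merkle membership proof of the instance's deployment nullifier. The field names, the use of sha256, and the 2-leaf "tree" are all illustrative assumptions, not the real Aztec hashing.

```python
# Illustrative only: function lookup as a membership proof of a nullifier.
import hashlib

def h(*parts: bytes) -> bytes:
    return hashlib.sha256(b"".join(parts)).digest()

# A contract class commits to its function tree root (selectors + vks).
function_tree_root = h(b"selector:0x1234", b"vk_hash:abc")
contract_class_id = h(b"class_version:1", function_tree_root)

# A contract instance's nullifier commits to the class, args, deployer, keys.
instance_nullifier = h(contract_class_id, b"args_hash", b"deployer", b"keys")

# Toy 2-leaf nullifier "tree": the membership proof is a sibling + root check.
sibling = h(b"some_other_nullifier")
root = h(instance_nullifier, sibling)

def prove_membership(leaf: bytes, sibling: bytes, root: bytes) -> bool:
    # Verifying the path re-derives the root from the leaf.
    return h(leaf, sibling) == root

assert prove_membership(instance_nullifier, sibling, root)
```

Because the vk is (transitively) committed to by that one nullifier, proving the nullifier exists proves the function exists.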

tl;dr tl;dr

“Let’s put lots of contract deployment logic in an app contract, to remove stuff from the kernel circuit. Let’s emit contract data as nullifiers and events. We get ‘contracts deploying other contracts’ functionality.”.

Contract class

A contract_class_id would be derived in a way which looks something like this:

Note: this diagram is illustrative, but even in writing this post, I’ve realised it’s missing some things, or needs to be rearranged.
I.O.U. some updated diagrams.

A contract class id contains the following (most of which are self-explanatory):

  • class_version
  • declarer_address
  • artifact_hash - a hash of the abi/artifact which is spat out by Noir.
    • Must include:
      • The version of nargo that’s been used to generate the bytecode.
      • Bytecode (including that of unconstrained functions).
  • Missing from the diagram:
    • Information about the repo / commit hash / tag / version / type of the proving system used to generate the VKs.
  • Constructor info:
    • constructor_function_selector
    • constructor_vk_hash
    • Suggestion: move this info to instead be the 0-th leaf of the function tree.
  • Private function info is encoded in a function tree:
    • (Recall, private functions are standalone circuits, because we can’t afford a private vm).
    • Each leaf contains info about a private function:
      • function_selector
      • booleans relating to the nature of the function
      • vk_hash
      • function_salt - to salt a function’s bytecode.
      • Missing from the diagram: a hash of the acir PLUS unconstrained bytecode.
  • Public function info is encoded, but it depends on a few things which haven’t yet been specced:
    • If selectors aren’t enshrined, then we just need a hash of the public bytecode to be included.
    • If selectors are enshrined, public functions can be encoded in a similar way to private functions in the function tree.
    • Missing from the diagram: a hash of the avm bytecode PLUS unconstrained bytecode.
  • Unconstrained functions’ info can likely be included in the same way as public function info.
  • portal_contract_bytecode_hash - an L2 contract is developed with exact corresponding portal contract bytecode in mind.

Public function encoding is a big question mark for my brain, whilst it’s all being figured out by the public vm team.
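Pending the diagrams, here is a hedged Python sketch of how a contract_class_id might be derived from the fields listed above. The hash function (sha256), the flat function "tree", and the byte encodings are placeholders; the real protocol would use a snark-friendly hash over field elements.

```python
# Illustrative derivation of a contract_class_id from its component fields.
import hashlib

def h(*parts: bytes) -> bytes:
    return hashlib.sha256(b"".join(parts)).digest()

# Each private function leaf commits to its selector, vk and a salt.
def function_leaf(selector: bytes, vk_hash: bytes, salt: bytes) -> bytes:
    return h(selector, vk_hash, salt)

leaves = [
    function_leaf(b"\x11", b"vk1", b"salt1"),
    function_leaf(b"\x22", b"vk2", b"salt2"),
]
function_tree_root = h(*leaves)  # stand-in for a real merkle root

contract_class_id = h(
    b"class_version:1",
    b"declarer_address",
    b"artifact_hash",                  # hash of the nargo artifact + compiler version
    function_tree_root,                # private function info
    b"public_bytecode_hash",           # public + unconstrained function info
    b"portal_contract_bytecode_hash",
)
assert len(contract_class_id) == 32
```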

Contract Instance

A contract_address would be derived in a way which looks something like this:

Note: this diagram is illustrative.

A contract instance (and contract_address) contains the following (most of which are self-explanatory):

  • deployer_address
  • deployment_salt
  • contract_class_id
  • portal_contract_address - the actually-deployed portal contract address. A check needs to happen to ensure the bytecode of this L1 contract matches the portal_contract_bytecode_hash contained within the contract_class_id.
  • constructor_args_hash
  • Info about the public keys for this contract.
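The instance fields above can be sketched as a two-stage hash, with the keys bound in last (so the address could later be tweaked by keys without re-hashing everything else). Again, sha256 and the byte encodings are placeholder assumptions.

```python
# Illustrative derivation of a contract_address from instance data.
import hashlib

def h(*parts: bytes) -> bytes:
    return hashlib.sha256(b"".join(parts)).digest()

contract_class_id = h(b"some_contract_class")
keys_hash = h(b"public_keys_for_this_contract")

# First commit to the deployment data...
contract_deployment_hash = h(
    b"deployer_address",
    b"deployment_salt",
    contract_class_id,
    b"portal_contract_address",
    h(b"constructor_args"),  # constructor_args_hash
)

# ...then bind in the keys to get the address.
contract_address = h(contract_deployment_hash, keys_hash)
assert len(contract_address) == 32
```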

Why classes & instances?

Why remove the contract tree?

  • We already have a tree (the nullifier tree) which can contain all this info.
  • It’s one less tree to manage within the circuits and within an aztec node.

What’s wrong with our current approach to contract deployment?

  • Lots of (usually unused) contract deployment logic is baked into the initial private kernel circuit.
  • A contract cannot deploy another contract (which is particularly strange in an account-abstraction world where users are meant to be represented by an account contract).

Contract deployment abstraction

Instead of baking most contract deployment logic into the Initial Private Kernel Circuit, we have a standalone smart contract (an app) to contain:

  • contract class declaration logic;
  • avm opcode commitment validation logic;
  • contract instance deployment logic.

The kernel circuits (and other core circuits) contain less (no?) contract-deployment related logic. But they would still contain function lookup logic at the time of execution.

Contract class declaration logic:

The logic would live in a smart contract:


  • Emit the contract_class_id as a nullifier, and its underlying data as an unencrypted event. (Events are designed to be an arbitrary length and submitted to L1).


fn declare_new_contract_class(
    contract_class_data: ContractClassData,
) {
    // This data might not align with the diagram above.
    // Don't worry about it. It's all illustrative.
    const {
        class_version,
        declarer_address,
        // ...the other class fields...
    } = contract_class_data;

    assert(class_version == 1); // An example of some hard-coded check that can be done.
                                // In a world of contract deployment abstraction, the class_version
                                // could maybe even be the deploying contract's address...?
    assert(declarer_address == context.this_address);

    // Compute the contract_class_id:
    const contract_class_id = hash(
        // declarer_address, // Oooh, this isn't needed, because the kernel siloes every nullifier!
        class_version,
        // ...the other class fields...
    );

    // ---
    // Deploy the class, as a nullifier:
    context.emit_nullifier(contract_class_id);

    // Emit contract bytecode (etc) as an event:
    context.emit_unencrypted_event(
        "New Contract Class",
        contract_class_data, // loads of data: it all gets sha256-hashed behind the scenes.
    );

    // ---
    // We could call a bytecode commitment circuit as follows...
    const bytecode_commitment_contract_address = 0x...;
    const bytecode_commitment_function_selector = 0x12345678;
    for (public_function_leaf_preimage in all_public_function_leaf_preimages) {
        // Pass the purported avm_opcodes_commitment, and the bytecode.
        // Validate that the commitment is correct.
        const result = context.call(
            bytecode_commitment_contract_address,
            bytecode_commitment_function_selector,
            public_function_leaf_preimage,
        );
        assert(result == true);
    }
}

Contract instance deployment logic:

The logic would live in a smart contract:


  • Emit the contract_address as a nullifier, and its underlying data as an unencrypted event. (Events are designed to be an arbitrary length and submitted to L1).


fn deploy_new_contract_instance(
    contract_instance_data: ContractInstanceData,
) {
    // This data might not align with the diagram above.
    // Don't worry about it. It's all illustrative.
    const {
        contract_class_id,
        deployment_salt,
        portal_contract_address,
        constructor_function_selector,
        constructor_args,
        keys_hash,
        // ...other fields...
    } = contract_instance_data;

    const deployer_address = context.this_address;
    const constructor_args_hash = hash(constructor_args);

    // Make an L1->L2 call (before executing this function)
    // to validate that the bytecode of the L1 contract has actually
    // been deployed at the purported portal_contract_address,
    // using the portal_contract_bytecode_hash contained within
    // the contract_class_id.
    // Consumption of this L1->L2 message read is NOT SHOWN HERE.

    const contract_deployment_hash = hash(
        deployer_address,
        deployment_salt,
        contract_class_id,
        portal_contract_address,
        constructor_args_hash,
    );

    // ---
    const contract_address = hash(contract_deployment_hash, keys_hash);

    // Deploy the new contract address, as a nullifier:
    context.emit_nullifier(contract_address);
    // TODO: use the siloed nullifier _as_ the contract address, instead?

    // Emit the instance data as an event:
    context.emit_unencrypted_event(
        "New contract instance",
        contract_instance_data,
    );

    // Usually the kernel circuit will check that a function exists
    // in some already-deployed contract, before executing it.
    // Here the check would need to be modified to look at
    // pending nullifiers (to find the newly-emitted contract address nullifier).
    // Perhaps a `constructor_call` function is needed? Although I hope not.
    context.call(contract_address, constructor_function_selector, constructor_args);
}

A bootstrapping problem

If the code for deploying a smart contract lives in a smart contract, how can we deploy that smart contract?

I guess we’d have to make a version of this ‘deployment’ smart contract as a special precompile, built into the genesis block.

Pros & Cons


Pros:

  • It’s cool.
  • Less contract deployment logic in core protocol circuits. (I think… this would need to be validated).
  • Removes the contract tree, so less code to maintain and audit.
  • It enables contracts to deploy other contracts.
  • It might enable contract deployment logic to be more-easily updated in future (although this is debatable, because at the time of execution, the kernel would still need to support both old and new execution paths).


Cons:

  • Overloads nullifiers and events.
  • It’s basically still “core protocol” in that we’ll likely need to ensure a version is deployed in the genesis block. (See the bootstrapping section above).
  • If there’s a bug in one of these smart contracts, it would be hard to deploy a new, replacement ‘contract deployment’ contract (because it would only be deployable with this buggy smart contract). (That is, unless this contract has some special ‘precompile’ status…).
  • There might be more… hopefully this thread can unearth them…

Full contract abstraction, if you’re an absolute mad lad

The above proposal for “contract deployment abstraction” says “Let’s put lots of contract deployment logic in an app contract, to remove stuff from the kernel circuit. Let’s emit contract data as nullifiers and events”.

But the structure of the nullifiers would still need to follow a rigid, enshrined structure. That’s because when executing a function, the kernel circuit would need to know how to look the function up, against a nullifier in the tree.

So although the deployment process is “abstracted”, it isn’t really. It’s just moved from the kernel. In fact it might be that one canonical smart contract would need to be deployed to perform the deployment logic, and that would suffice for the whole network’s deployments.

A “full contract abstraction” approach would be to have the kernel “make a call” (whatever that means) to the smart contract which deployed the contract in the first place to say “Please validate that this function exists in this contract which you deployed… I don’t know how to read the preimage of its nullifier (because it’s fully abstracted), but you do. Let me know if it’s a valid function of this contract, and I’ll proceed with verifying it.”

Pretty crazy, right?

I’m not advocating for it. I’m not even sure if it’s possible (or if the King of the Hill problem would rear its ugly head again). But it was a fun thought.


Love how seriously you’re taking the “simplify the protocol” thing!

I’m wondering whether there’s a problem with removing the contract tree for public function execution. Let’s say I send a tx that triggers a call to an address, and the bytecode for that contract was never emitted: how do you prove it doesn’t exist? Can you use this to “brick” apps with a variant of the KotH? Does this just apply to public, or to private as well?

Ooh nice question! I think there are a few things to unpack there.

Perhaps I could distill the problem to: Apps need confidence in the correctness of the deployment process.

With our current approach, the logic for validating a deployment is done in the kernel circuit, so apps can be confident that every entry in the contracts tree has been deployed in a consistent way which follows the rules (because there’s only one way to insert data into the contracts tree).

With this proposal, it’s certainly true that if contract data is now inserted into the nullifier tree by arbitrary ‘contract deployment’ apps, it would be difficult for an app to discern whether a nullifier represents a well-formed contract instance – one which was deployed in line with the rules of the system – or an intentionally ill-formed instance which cannot be executed. The app would need to query which contract deployed a particular “contract instance” nullifier (by querying which contract address has been siloed within that nullifier), and cross-reference the deployment contract’s trustworthiness against some whitelist, before having the confidence to make a call to the deployed contract. This doesn’t sound very pleasant.

But this proposal doesn’t really seek to enable any app to deploy a contract (except the yolo section at the end which can be ignored for this discussion). I think I admit further down the page, this proposal isn’t really “abstraction”, but simply moving some canonically-important logic from the kernel into a smart contract, to unlock some extra features. It’s certainly simplest if only one, universally-accepted contract deployment smart contract exists, and the whole network acknowledges it as “the one”.
In fact, at the time of execution, the kernel will need to prove existence of a vk (within a class within a contract instance within a contract address) within a nullifier. It would be simplest if the kernel circuit only recognises one deployment smart contract address, so that the siloed address within the nullifier (i.e. the deployment smart contract’s address which was injected into the nullifier at the time it was created) can be validated by the kernel circuit at the time of execution.
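The siloing check described here can be sketched in a few lines of Python. The kernel silos every nullifier with the address of the contract that emitted it, so at execution time it can verify an instance nullifier came from the one recognised deployer contract. The hash choice and names are assumptions for illustration.

```python
# Illustrative siloing check: was this nullifier emitted by the canonical deployer?
import hashlib

def h(*parts: bytes) -> bytes:
    return hashlib.sha256(b"".join(parts)).digest()

CANONICAL_DEPLOYER = b"canonical_deployer_contract_address"

def silo(emitter: bytes, inner_nullifier: bytes) -> bytes:
    # The kernel injects the emitting contract's address into every nullifier.
    return h(emitter, inner_nullifier)

contract_address = b"some_new_contract_address"
siloed = silo(CANONICAL_DEPLOYER, contract_address)

# Kernel-side check at execution time: re-silo with the canonical address.
assert siloed == silo(CANONICAL_DEPLOYER, contract_address)
assert siloed != silo(b"rogue_deployer", contract_address)
```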

Interestingly: there are a few other outstanding problems with our current implementation. Most are known; the last one might be a new realisation.

  • Public bytecode isn’t committed-to yet, so there’s no validation that the broadcast bytecode is correct.
  • There’s no validation that any bytecode is broadcast at all.
  • Private bytecode isn’t being broadcast.
  • Unconstrained bytecode isn’t being broadcast or encoded in a contract’s leaf in the contracts tree.
  • Unconstrained bytecode within a private/public function is not being broadcast or encoded either.
  • There’s no way for an app to query that an address doesn’t exist in the contracts tree (because it doesn’t support non-membership proofs).

So I think there are a few ways to ‘brick’ an app already, with our current implementation.

I don’t think it’s the removal of the contracts tree which causes these bricking problems. The tree just holds hashes of data, as does the nullifier tree. So we have this problem regardless of the data structure used. In fact, the nullifier tree helps solve the final point, because it supports non-membership proofs.
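To illustrate why the nullifier tree helps with that final point: in an indexed merkle tree, each leaf stores a (value, next_value) pair, so showing one leaf that "straddles" a target proves the target is absent. A simplified Python sketch (the leaf list stands in for merkle leaves; real proofs would also include merkle paths):

```python
# Non-membership in an indexed tree: find a "low leaf" straddling the target.
# Leaves are (value, next_value) pairs; next_value == 0 marks the maximum leaf.
leaves = [(0, 10), (10, 25), (25, 0)]

def prove_non_membership(target: int, leaves) -> bool:
    for value, next_value in leaves:
        if value < target and (target < next_value or next_value == 0):
            return True  # this low leaf proves the target is not in the tree
    return False

assert prove_non_membership(17, leaves)      # 17 was never inserted
assert not prove_non_membership(10, leaves)  # 10 is present as a leaf value
```

An append-only tree of hashes (like the current contracts tree) has no such ordering, which is why it can't prove an address was never deployed.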

As for the problem of constraining (correct) bytecode to be broadcast at the time a contract class is declared…

Correctness of public bytecode is ensured by running a “bytecode commitment” circuit at the time of class declaration, which computes a commitment to the bytecode. We haven’t implemented this in the sandbox, but Zac drafted a spec recently. There’s a section of pseudocode in the “class declaration” section above which makes calls to such a circuit. The resulting commitment, and the bytecode, can then be passed into the avm whenever a public function needs to be executed.

If a canonical smart contract is recognised for contract deployment, users and sequencers can be sure:

  • The nullifier containing the class info is well-formed.
  • The nullifier containing the instance info is well-formed.
  • The public bytecode within the class has been committed-to correctly.
  • All necessary data is broadcast as events.

Maybe this post needs to be renamed from “abstracting contract deployment” to “a ‘precompile’ smart contract for contract deployment”.


(Please keep the criticisms coming!)


Makes sense! Fun fact: older versions of Starknet didn’t have a deploy kind of transaction, and instead relied on a deploy syscall which was implemented in a Universal Deployer Contract. Other contracts could still deploy though, but this one was the canonical one.

This is the issue I was thinking of in the previous reply, since it could be used for griefing sequencers. You could craft a tx that eventually makes a public call to a contract that doesn’t exist, and if the sequencer cannot prove that it doesn’t exist, it cannot revert the call, and it cannot be refunded for the work it has done so far. Good that we can solve it with the nullifier tree now.

Only because you asked.

Given that classes are stateless, what’s the point of including the declarer address as part of the hash? Shouldn’t the class identifier only depend on the class itself, and not on who declared it? Edit: just realized this is the address of the “deployer” contract, which is needed to compute nullifiers. Still, maybe we can remove it?

Upgradeability can cause issues here. An upgradeable portal contract would return the bytecode of the proxy in the target address, not of the actual implementation. Constraining L1 bytecode would force portal contracts to be non-upgradeable, which may be a requirement for some apps.

And even if we remove proxies from the picture, I believe SELFDESTRUCT hasn’t been deprecated yet, so metamorphic contracts are still a thing. So a user could deploy an L1 portal contract with the required bytecode, then deploy the L2 instance, then destroy the L1 one, and recreate it with different bytecode.

So, I’d remove this L1 bytecode restriction altogether.

Unrelated, but allowing contracts to deploy other contracts has another major advantage: it allows for Foundry-like test contracts 🙂

Yes, we might be able to remove this one, good point!

I’m trying to think if there’s a problem if a developer develops their (L2 contract, L1 contract) as a pair, but then only the L2 contract bytecode is represented by the class which is declared. If someone deploys an instance of that class, but linked to an unintended (different) L1 portal contract, that might be a bit confusing/dangerous for users? Perhaps if at the time the contract instance is deployed, the actual L1 portal contract address is hashed in with the L2 contract address, it’s ok. But then how can a user validate that the L1 portal contract matches the original class developer’s intentions (i.e. matches the intended bytecode), if the class developer’s intentions aren’t conveyed at the time their class is declared?


Doesn’t this hold for any kind of composition across contracts, not just L2 instances to L1 portals? As in if I build a system out of N contracts, should the protocol restrict the deployment so that I can only “connect” them to one another?

Anyway, I believe this is better solved by tooling than by protocol enforcement (simplify the protocol!). And if the contract dev wants to enforce this, they can still do it at the application layer: the contract constructor could require that an L1toL2 message is consumed that contains the bytecode hash of the portal address (accessible via context) and it matches a hardcoded value.

Spent a few more hours on this and couldn’t find a flaw.

I was going to point out that contract class data (ie bytecode) may be too big for conventional events. But then I checked how we’re handling it today: we are simply not validating it. And to validate it, it’ll need to go through the same flow as events. Maybe there’s a difference regarding how we’d commit to it…?

The one other thing that bothers me is that, by placing everything in the nullifier tree, which grows a lot faster than contract classes and instances, we may end up with more expensive merkle membership proofs whenever we need to lookup the bytecode to run for a given function. And if we implement nullifier epochs this means that every client will need to store the old frozen trees that include the nullifiers for the contracts they need to run. Keeping contract classes and instances on a separate tree means we probably don’t need epochs for that tree, or at least we change them far less frequently.

Good points all round!

Events are currently designed so they can be any size. I think we’d need to use sha256 (ignoring future blobs) to validate (on L1) that the data broadcast to L1 (including the bytecode data) matches the single public inputs hash provided as part of the rollup proof given to L1. The bytecode might not need to be hashed within a circuit: the logs_hash might be able to simply be emitted, given that the hash can be reconciled by L1. I’ve forgotten exactly what we do for events at the moment, though.
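The reconciliation described here can be sketched simply: the rollup proof exposes only a single logs hash as a public input, and the L1 contract recomputes the same sha256 over the broadcast calldata (including bytecode) and checks they match. The payload layout below is an assumption.

```python
# Illustrative L1-side reconciliation of broadcast event data against a logs hash.
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

bytecode = b"\x01\x02\x03" * 100  # broadcast contract bytecode (can be large)
event_payload = b"New Contract Class" + bytecode

# In-circuit (or simply emitted): a single hash commits to all broadcast data.
logs_hash = sha256(event_payload)

# On L1, the verifier contract recomputes the hash from the submitted calldata:
assert sha256(event_payload) == logs_hash
```

The point is that the circuit need not re-hash the bytecode itself if L1 can reconcile the hash against the data it receives.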

Ah yes, good points. They’re pretty compelling arguments to keep a separate tree.

So perhaps an approach might be:

  • Keep a separate contracts tree for classes & instances (maybe even two separate trees?).
    • This tree must support non-membership proofs, so would need to be an indexed / sparse tree.
  • We modify the public inputs ABI of an app circuit to pass new_contract_class_hash and new_contract_address values to the kernel circuit (so that the rollup circuit can insert these values into the contracts tree).
    • We could even pass arrays of these values, if we anticipate that a contract might wish to deploy multiple contracts - although that might cost too many extra kernel/rollup constraints.
  • I’m still not sure whether computation of those values should happen in the kernel circuit, or whether it can happen in a ‘canonical’ contract deployment smart contract.
    • It seems easier to get “contracts deploying other contracts” if the computation happens within a smart contract.
  • We use events to emit the contract data (at least until we think about a “multi DA world”).

@spalladino interestingly, I revisited Vitalik’s original post which inspired the “nullifier epochs” ideas. His proposal is actually aiming to prune accounts (and state) from Ethereum, using this mechanism.

If that forum is entertaining such a convoluted ux, perhaps it’s not completely insane to regularly prune contracts & account contracts?

It certainly makes it much simpler to deploy a variable number of contracts from a single function/tx, because we’re already aiming for such “variable-ness” with nullifiers. Designing the kernel to also juggle a varying number of contract deployment requests is a bit daunting.

this means that every client will need to store the old frozen trees that include the nullifiers for the contracts they need to run

Yeah, or they’d need to request witnesses (in some privacy-preserving way) from some 3rd party archive node that retains all the historic trees.


Oh, another benefit of using the nullifier tree is it would enable apps to not reveal that a contract had been deployed. Clearly most contract deployments will need to be shared with the world. But if an app wishes to deploy a new ephemeral multisig account contract (e.g. for some escrowed game state), it could do so without leaking to the world that “a contract was just deployed”. The fingerprint of this kind of tx would be much more uniform if we used nullifiers.


Another thought on this.

If contract classes and addresses are stored in the nullifier tree (which will have epochs), then we could adopt a rule where if a contract class/instance is being looked-up from a nullifier leaf in an earlier nullifier tree epoch, the kernel circuit could insert a duplicate of that nullifier leaf into the current epoch’s tree. That way, future lookups would be against the current tree instead.

As you can probably tell, I’m still entertaining the notion of overloading the nullifier tree, because it makes kernel logic simpler when deploying contracts (the logic is exactly the same as emitting any other nullifier and log).

But there are clear downsides:

  • Overloading the nullifier tree with contracts might make it more difficult to prove security of the system.
  • This message’s suggestion to insert duplicate nullifiers for contract leaves (into the latest epoch’s tree) will make security analysis even more difficult (because the collection of all epochs’ trees now contains duplicates!!!).
  • The responsibilities between this ‘contract deployment’ contract and the kernel circuit are intermingled. Sometimes the ‘contract deployment’ contract creates nullifiers; sometimes the kernel does. The kernel does the membership checks against these special nullifiers whenever a function is called.
  • I’m still not sure how the kernel could efficiently check membership of classes/addresses in historic nullifier tree epochs, without some extra ugly hashing. We wouldn’t be able to cope with very old contracts (that haven’t been touched for ages) unless we added some recursion.

I like this bit!

Strong no on this part. Having some “special” nullifiers that can be duplicated feels like we’re breaking a fundamental invariant. Unless we want to enable this optimization for all nullifiers (which I’m not sure it’s a good idea), I’d look for an alternative. Perhaps we can embed the epoch number in the nullifier preimage…?

I’m still not sure how the kernel could efficiently check membership of classes/addresses in historic nullifier tree epochs, without some extra ugly hashing. We wouldn’t be able to cope with very old contracts (that haven’t been touched for ages) unless we added some recursion.

How would this work for non-contract nullifiers? Is it the same problem, only that here it’s exacerbated by the fact that the same nullifier may have to be checked more often if the contract is popular? Or is there a more fundamental difference?

Let me try to recap how epochs would work, based on the Vitalik post you shared, and how initialization nullifiers (or contract instance deployments, which are the same) would be used, to understand the problem here.

  • Addresses will have to include an epoch identifier. We could force the first few bits of the address to represent it (assuming 1-year epochs, with just 4 bits we’re covered for the next 16 years of Aztec network, and 4 bits should be easy to “mine”), or maybe include it in the preimage (may require more hashing operations).
  • When deploying a contract at an address, if the address epoch identifier is the current one, just emit the initialization nullifier in the current tree as usual, checking for duplicates. If the epoch is a future one, fail. If the epoch is a past one, prove for every single nullifier tree since that epoch that the nullifier was never emitted, and emit it in the current one. This last case is the only nasty one, but it’s only a problem when executing a deployment for a counterfactual address that took a very long time to be executed, which is very uncommon.
  • When calling into a contract at an address, look up in which epoch its initialization nullifier was emitted (needs to go back at most to the epoch of the address, this may require indexing services for very old addresses), and produce a proof of membership in that epoch’s nullifier tree, and another one for that tree root in the epoch’s tree. Producing the proof in the old nullifier tree may be expensive, since it requires storing all tree’s data. However, it can be cached: once a user or app knows that they’ll interact with a given old address, they can just store the membership proof for that address and reuse it as needed.
  • When making a public call into a contract at an address, the flow is the same as above, but means that the sequencer needs to store all historic trees information. Even worse, when making a public call into a contract at an address that has not been initialized, we need to prove that the initialization nullifier was never emitted. This requires producing proofs of non-inclusion for every single historic nullifier tree since the address’ epoch.

This last item seems to be the nastier one, as it puts a huge burden on sequencers. I can think of two ways around it:

  • Having a way to “refresh” an initialization nullifier by moving it to the current epoch, similar to what you suggest in your original message. If an initialization nullifier is not present in the current epoch or the immediately previous one, the sequencer must then fail the transaction. Sequencers could automatically “refresh” initialization nullifiers read from the immediately previous epoch into the current one, so contracts accessed at least once per year just work without extra effort. For older nullifiers, a private transaction needs to be crafted that includes the historic proofs.
  • Putting the burden on the sender. Transactions would be required to include strict access lists for public execution (which would help in the future in parallelization), and the private kernel circuit should check the correct initialization of every address in the list, so the sequencer doesn’t have to.

Drilling into the first option, how do we “refresh” a nullifier? We definitely don’t want to have duplicate nullifiers across historic trees, as that seems to break a very fundamental invariant. Maybe we could emit a different nullifier that’s linked to the original one, something like hash(new_epoch, original_nullifier). This way, proving that an initialization nullifier has been emitted requires either showing the actual nullifier or showing a “refresh” nullifier.

This “refresh” could be orchestrated as just another method in the canonical ContractDeployer contract, something like:

fn emit_refresh_nullifier(
  original_nullifier,
  optional_previous_refresh_nullifier,
  optional_previous_refresh_epoch,
  optional_original_nullifier_membership_proof,
  optional_previous_refresh_nullifier_membership_proof,
) {
  refresh_nullifier = hash(current_epoch, original_nullifier)

  // If the nullifier was emitted on the epoch before this one, just emit the refresh one
  if original_nullifier in previous_epoch:
    emit_nullifier(refresh_nullifier) and return

  // If the nullifier was already refreshed, and the refresh was on the previous epoch, then emit the new refresh
  if optional_previous_refresh_nullifier in previous_epoch:
    assert hash(optional_previous_refresh_epoch, original_nullifier) == optional_previous_refresh_nullifier
    emit_nullifier(refresh_nullifier) and return

  // Otherwise, go with a historic proof of the original nullifier
  if is_valid(optional_original_nullifier_membership_proof, original_nullifier, historic_nullifier_tree_roots):
    emit_nullifier(refresh_nullifier) and return

  // Or one for the previous refresh nullifier
  if is_valid(optional_previous_refresh_nullifier_membership_proof, optional_previous_refresh_nullifier, historic_nullifier_tree_roots):
    assert hash(optional_previous_refresh_epoch, original_nullifier) == optional_previous_refresh_nullifier
    emit_nullifier(refresh_nullifier) and return

  // Otherwise, no valid path was provided: fail.
}