L1 storage proofs

A common pattern on Aztec will be to prove something about the state of Ethereum L1 within an Aztec function, e.g. proving membership of a leaf in Ethereum’s Merkle Patricia trie (which uses keccak hashing) against a valid historic Ethereum state root.
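As a toy illustration of the kind of membership proof involved, here’s a sketch against a simplified binary Merkle tree. (Hedge: Ethereum’s real structure is a hexary Merkle Patricia trie hashed with keccak-256; sha256 stands in below because keccak isn’t in Python’s stdlib. The shape of the check is what matters.)

```python
import hashlib

def h(data: bytes) -> bytes:
    # Stand-in for keccak-256; only the tree structure matters here.
    return hashlib.sha256(data).digest()

def verify_membership(leaf: bytes, index: int, siblings: list[bytes], root: bytes) -> bool:
    """Walk from the leaf up to the root, hashing with each sibling in order."""
    node = h(leaf)
    for sib in siblings:
        if index % 2 == 0:
            node = h(node + sib)   # we are the left child
        else:
            node = h(sib + node)   # we are the right child
        index //= 2
    return node == root

# Build a tiny 4-leaf tree and prove membership of leaf 2.
leaves = [h(bytes([i])) for i in range(4)]
l01, l23 = h(leaves[0] + leaves[1]), h(leaves[2] + leaves[3])
root = h(l01 + l23)
assert verify_membership(bytes([2]), 2, [leaves[3], l01], root)
```

In a circuit, every one of those hash invocations becomes constraints — which is why keccak-based MPT proofs are so expensive to prove.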

Note: there are talks within the Ethereum community to move towards more SNARK/STARK-friendly state trees. I don’t know when that will happen, so I ignore the possibility here.

How can someone achieve this today?

Here’s a flow that works (with tradeoffs):

  • On L1, via a portal contract designed for this purpose, read a recent Ethereum blockhash (so that we know it is a valid blockhash) and send that blockhash via an L1->L2 msg to a corresponding L2 contract.
    • Note: only the last 256 block hashes are accessible within an Ethereum smart contract.
  • On L2, within a private function, consume the L1->L2 msg to be convinced of the correctness of a valid historic blockhash. Consumption of this msg can be private: no one learns what msg has been consumed.
  • Generate your L1 storage proof against the Ethereum state root that is contained within the chosen blockhash.
    • This L1 storage proof will be hundreds-of-thousands of constraints. To improve proving times, it can be extracted into a standalone Noir circuit, so that it can be generated asynchronously, as soon as the blockhash is known.
  • There will be some latency between the L1 tx and the ability to execute an Aztec function that can consume the L1->L2 msg.
  • Once the L1->L2 msg is consumable, the (UltraHonk) storage proof can then be verified within an Aztec smart contract function for ~25k constraints.
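The two-step flow above can be caricatured in a few lines. (Hedge: `L1Chain`, `Inbox`, `send`, and `consume` are invented names for this sketch, not Aztec’s actual interfaces; the point is the 256-block window and the inbox hand-off.)

```python
# Toy model: an L1 chain exposing only its last 256 blockhashes, an inbox
# carrying L1->L2 messages, and an L2 side consuming one of them.
class L1Chain:
    def __init__(self, hashes):
        self.hashes = hashes  # blockhash per block number

    def blockhash(self, n: int) -> bytes:
        head = len(self.hashes)  # the block currently being built
        # EVM semantics: only the 256 most recent block hashes are readable.
        if not (head - 256 <= n < head):
            raise ValueError("blockhash outside the 256-block window")
        return self.hashes[n]

class Inbox:
    def __init__(self):
        self.msgs = set()
    def send(self, msg): self.msgs.add(msg)
    def consume(self, msg):
        self.msgs.remove(msg)  # raises if the msg was never sent

# Step 1 (L1 tx): the portal reads a recent, known-valid blockhash and
# sends it to L2 as an L1->L2 msg.
chain = L1Chain([bytes([i % 256]) * 32 for i in range(300)])
inbox = Inbox()
msg = chain.blockhash(299)
inbox.send(msg)

# Step 2 (L2 tx): consuming the msg convinces the L2 function that the
# blockhash is genuine, so the storage proof can be checked against it.
inbox.consume(msg)
```

Note that `consume` can only succeed for a blockhash that really went through the portal — that is the whole trust argument of this flow.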

I don’t want it to be a two-step process

I thought you might say that.

Can we do a storage proof against L1 state without starting with an L1 tx? Let’s look at an approach:

  • As an observer of Ethereum, we know the latest Ethereum state root and blockhash, so we just read it and feed the blockhash into a circuit.
    • Again, you might want to optimise by extracting the storage proof into its own standalone circuit and verifying that UH proof within your Aztec function – but that’s an orthogonal topic, really.
  • So we do our L1 storage proof within our Aztec function, and expose the blockhash as a public input.
  • But… how do we convince the chain that this is a valid blockhash? Our tx only has a claim of some blockhash, but it could be maliciously incorrect, or there might have been a reorg on L1 since you read the blockhash.
  • The Rollup.sol contract doesn’t have a way to check “Is this blockhash correct?”, and it certainly doesn’t have a way to iterate over every tx in an epoch to check each tx’s claimed blockhash.
  • Our Aztec function could send an L2->L1 msg to a portal contract to say “Here is my claim of a valid, recent L1 blockhash”. But then we’d need to execute an L1 tx to consume that msg from the outbox via the portal. And what’s more, our L2 tx wouldn’t be allowed to make any state changes until it is convinced that the claimed blockhash is correct.
  • So then the portal – after having consumed the msg from the outbox and validated the correctness of the claimed blockhash – would need to send an L1->L2 msg back to the L2 contract to say “Yes, this blockhash is correct, you can now make state changes”.
  • So then we’d need to execute another L2 tx to consume the L1->L2 msg and finally make state changes.
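To make the cost concrete, here’s the round-trip flow just described, reduced to a tx count. (Hedge: entirely illustrative; the step descriptions paraphrase the bullets above and none of the names correspond to real Aztec interfaces.)

```python
# Each step of the "no initial L1 tx" flow, tagged with the layer whose
# transaction it requires. The point is the tx count, not the mechanics.
steps = [
    ("L2", "prove storage against a claimed blockhash; emit L2->L1 msg; defer state changes"),
    ("L1", "portal consumes the msg from the outbox, validates the blockhash, sends an L1->L2 msg back"),
    ("L2", "consume the confirmation msg and finally make state changes"),
]
txs = len(steps)
assert txs == 3  # vs. two txs (one L1, one L2) for the portal-first approach
```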

Madness. Hence the first approach: it requires fewer steps.

Can’t we change the protocol to make this easier?

I thought you might ask that.

Here’s an edit that could be made to the protocol (no comment on whether it’s worthwhile):

A protocol change

You might be familiar with Aztec’s Parity circuits: they copy L1->L2 messages from the L1 inbox to the L2 tree. A similar approach could be taken here: with each Checkpoint, we could copy a recent Ethereum blockhash into an L2 state tree. This would automatically make a new, recent L1 blockhash available to Aztec smart contract functions every ~72s (subject to change).
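A sketch of the hypothetical mechanism (hedge: all names below are invented for illustration; this is not a real protocol component): each checkpoint snapshots a recent L1 blockhash into an append-only, L2-readable structure, which any Aztec function could later do a membership read against.

```python
# Hypothetical: once per checkpoint (~72s), the protocol records a recent
# L1 blockhash into an L2-readable structure. A plain list stands in for
# the real state tree / header field.
class CheckpointedBlockhashes:
    def __init__(self):
        self.entries = []  # (checkpoint_number, l1_blockhash)

    def on_checkpoint(self, checkpoint_number: int, l1_blockhash: bytes):
        # Runs as part of producing each checkpoint.
        self.entries.append((checkpoint_number, l1_blockhash))

    def is_known(self, l1_blockhash: bytes) -> bool:
        # An Aztec function would instead do a membership proof (e.g. via
        # the archive tree); a linear scan keeps the sketch simple.
        return any(h == l1_blockhash for _, h in self.entries)

tree = CheckpointedBlockhashes()
tree.on_checkpoint(1, b"\x11" * 32)
tree.on_checkpoint(2, b"\x22" * 32)
assert tree.is_known(b"\x22" * 32)
assert not tree.is_known(b"\x33" * 32)
```

With something like this, the storage proof could name any checkpointed blockhash directly, with no initial L1 tx and no message round trips.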

I’m not entirely sure where we’d copy it to. Maybe we could put it in the block header of the first block in the checkpoint (similarly to what we do with some other checkpoint-specific information), so that it can be read via the archive tree.

Thoughts?

Disclaimer

The information set out herein is for discussion purposes only and does not represent any binding indication or commitment by Aztec Labs and its employees to take any action whatsoever, including relating to the structure and/or any potential operation of the Aztec protocol or the protocol roadmap. In particular: (i) nothing in these posts is intended to create any contractual or other form of legal relationship with Aztec Labs or third parties who engage with such posts (including, without limitation, by submitting a proposal or responding to posts), (ii) by engaging with any post, the relevant persons are consenting to Aztec Labs’ use and publication of such engagement and related information on an open-source basis (and agree that Aztec Labs will not treat such engagement and related information as confidential), and (iii) Aztec Labs is not under any duty to consider any or all engagements, and that consideration of such engagements and any decision to award grants or other rewards for any such engagement is entirely at Aztec Labs’ sole discretion. Please do not rely on any information on this forum for any purpose - the development, release, and timing of any products, features or functionality remains subject to change and is currently entirely hypothetical. Nothing on this forum should be treated as an offer to sell any security or any other asset by Aztec Labs or its affiliates, and you should not rely on any forum posts or content for advice of any kind, including legal, investment, financial, tax or other professional advice. For readability, this document uses the first-person plural (“we”) as a narrative convenience. This is a figure of speech and should not be read as implying that the author personally endorses or participates in every idea, proposal, recommendation, criticism, or opinion expressed.


Big +1 to including this information in the checkpoint header, next to the inHash (or maybe mixed into the inHash?).

As for the standalone Noir circuit, is it possible to keep it standalone but still use Chonk for folding it into the client-side proof, rather than recursively verifying it? Does this question even make sense?


Yep, that makes sense. I think I was too lazy with my phrasing “verify the proof”. My understanding is that the interface for “verifying” a UH proof within an Aztec smart contract function will fold the UH proof into the client-side proof. So it’ll be quite a nice, small number of constraints (tens-of-thousands rather than hundreds-of-thousands or millions).