Accessing historic public state from private functions

It’s desirable for private functions to have access to public state. Since private functions are not able to reliably access the most recent public data (only the sequencer can do that), we will need to somehow store the historic public data tree roots. What are the best approaches to do this?


To elaborate a bit on why it is desirable, here is a snippet from Clarify on communication abstractions by LHerskind · Pull Request #54 · AztecProtocol/docs · GitHub. Essentially, it gives you a way to enforce access control inside private functions.


A note on L2 access control

Many applications rely on some form of access control to function well. USDC has a blacklist, where only parties not on the list should be able to transfer. And other systems, such as Aave, have restrictions such that only the pool contract is able to mint debt tokens and transfer held funds.

Access control like this cannot easily be enforced in the private domain, as reading is also nullifying (to ensure data is up to date). However, as it is possible to read historic public state, one can combine private and public functions to get the desired effect.

Say the public state holds a mapping(address user => bool blacklisted) and a value last_updated with the block number of the last update. The private functions can then use this public blacklist IF they also perform a public function call that reverts if the block number of the historic state is older than last_updated. This means that updating the blacklist would make pending transactions fail, but it allows a public blacklist to be used. Something similar would work for the Aave example, where it is just a public value holding the allowed caller contracts. An example of how this could be written is shown below. Note that because the onlyFresh check is done in public, the user might not know, when generating their proof, whether the transaction will be valid or not.

// Assumed public state (from the description above):
//   mapping(address user => bool blacklisted);  uint256 last_updated;

function transfer(
  secret address to,
  secret uint256 amount,
  secret HistoricState state
) secret returns(bool) {
  // Private read against the historic public state snapshot.
  if (blacklisted[msg.sender] || blacklisted[to]) revert("Blacklisted");
  // Enqueued public call: reverts if the snapshot is older than the last update.
  onlyFresh(state.blockNumber);
  // logic
}

function onlyFresh(pub uint256 blockNumber) public {
  if (blockNumber < last_updated) revert("Stale state");
}

:::info
This is not a perfect solution: since any function using access control might end up doing a lot of public calls, it could put a significant burden on sequencers and greatly increase the cost of the transaction for the user. We are investigating ways to improve this.
:::


With that out of the way, I think we should be able to use the public_data_tree_roots that we already have in the rollups (see [1]) and insert the end_public_data_tree_root into a historic tree.

  1. aztec3-packages/root_rollup_public_inputs.hpp at 3d72666098adb66b2cb52a74d45c3aada48250ed · AztecProtocol/aztec3-packages · GitHub

Pasting my reply to Jan’s original message from elsewhere (belatedly). This message only addresses Jan’s question, and not Lasse’s response.

we will need to somehow store the historic public data tree roots. What are the best approaches to do this?

Something like that might be needed, but it might be worth having a few rounds of discussion to home in on the best approach for accessing historic states in general. E.g.:

  • For accessing historic public state:
    • Would an app (a Noir contract) perform the public state tree merkle membership proof, or would the private kernel circuit do it? (The former approach wouldn’t survive network upgrades, if we changed the tree hash function, for example).
    • Would such ‘read’ operations be common enough to warrant putting them in every kernel circuit iteration, though (because it’s a huge tree!)?
    • Should we instead have some other tree which tracks all historic state hashes (i.e. a tree whose leaves are state hashes, where a state hash contains the roots of all trees)? This might be more efficient.
    • Should we introduce a ‘transactions tree’ and a ‘receipts tree’, much like Ethereum has? Maybe they could be used to store historic state.
    • Should a private function only be able to access const public state? (We’d need to encode ‘const-ness’ in public tree leaves in that case.)
    • Should a private function only be able to read notes, in which case public state which needs to be read by private circuits might need to be pushed to ‘notes’ in the private data tree?

There are a few options, which it’d be good to weigh up.

A nice approach might be: if an app circuit wants to read historic state, it can just specify the historic L2-block number from which it wants to read (as a public input). Then it would be cool if the kernel/rollup circuits were able to link that number with whatever tree we decide should store historic state information.
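To make that linking concrete, here is a minimal TypeScript sketch of the check the kernel/rollup circuits would perform (purely illustrative, not circuit code; the array standing in for a tree and the function names are my assumptions):

// Illustrative stand-in for a "historic roots" tree: the entry at index n is the
// public data tree root after L2 block n (the real structure would be a merkle tree).
const historicPublicDataRoots: string[] = [];

// Done by the rollup each time a block is finalised.
function recordBlockRoot(root: string): void {
  historicPublicDataRoots.push(root);
}

// What the kernel/rollup circuits would enforce for an app's public inputs:
// "I read public state against the root of block `blockNumber`".
function checkHistoricRead(blockNumber: number, claimedRoot: string): void {
  if (historicPublicDataRoots[blockNumber] !== claimedRoot) {
    throw new Error("claimed root does not match the requested block number");
  }
}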


Would such ‘read’ operations be common enough to warrant putting them in every kernel circuit iteration, though (because it’s a huge tree!)

See my earlier response.

Should we introduce a ‘transactions tree’ and a ‘receipts tree’, much like Ethereum has. Maybe they could be used to store historic state.

Seems useful for the cases you mentioned offline, e.g., proving that in transaction tx_1 I spent funds that I received in tx_2, and that none of the funds received in tx_2 have been moved since, etc. I think it is still possible to do some of these without dedicated trees by using the trees we already have, but it is less elegant, especially if we have unbalanced trees for new blocks (insertion of blocks that don’t contain 2^n new transactions).

Should a private function only be able to access const public state? (we’d need to encode ‘const-ness’ in public tree leaves in that case)

If you have the historic tree, you can do as in my earlier response.

Should a private function only be able to read notes, in which case public state which needs to be read by private circuits might need to be pushed to ‘notes’ in the private data tree.

If this were the case, it would be painfully slow to do things in private that need a public value, and the value could still be “outdated” at the time of execution, so I don’t see the appeal.

it can just specify the historic L2-block number from which it wants to read (as a public input).

In the current outline of the historic trees, index 1 holds the historic block state after block 1 and so on, so it seems like we are already pretty close to this?

Given that we have multiple historic trees, what would be some cases where you would want to point to different “times” for the different trees in the same tx? Say t1 for the data tree but t2 for public data, etc.? If we always end up using the same t_n when using historic data, we could do as you point to and simply make a leaf include multiple roots, making it a bit lighter in the number of paths to prove.
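As a tiny illustration of that packing, a leaf for block n could just commit to all of the roots at once (TypeScript sketch; the hash choice and field names are assumptions, and a real circuit would use a SNARK-friendly hash):

import { createHash } from "crypto";

// One leaf per block committing to all tree roots at that block, so a single
// membership path covers every historic tree.
interface BlockRoots {
  privateDataRoot: string;
  publicDataRoot: string;
  nullifierRoot: string;
}

function blockLeaf(r: BlockRoots): string {
  return createHash("sha256")
    .update(r.privateDataRoot)
    .update(r.publicDataRoot)
    .update(r.nullifierRoot)
    .digest("hex");
}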


Is the reason for wanting this to have an easy way of revealing the source of funds for audits, or is there something else behind it?


Mainly helping provenance. But provenance can also be used to make NFTs that you can prove have never been resold beyond the initial mint.

It can also be used to make airdrops and the like happen on-chain instead of as off-chain merkle drops, e.g., proving that you used the application before t_1 or had multiple transactions before t_2, etc.


Just thinking about @LasseAztec’s proposal further.

A downside of it is that it leaks which contract has just been executed (because the contract has to call a public function to look up the last_updated variable).

A very ugly solution (which builds on Lasse’s, but seeks to hide which contract was executed) would be to write a SlowUpdates contract, shared by many contracts, as a way of reading historic state. The SlowUpdates contract would hold state on behalf of other contracts (e.g. in a mapping(Address contract_address => Field[] storage_slots)).

The SlowUpdates contract could provide a guarantee that updates can only be made once every n blocks. That way, for n-1 blocks, users would have an assurance that their historic reads will be valid, and it would also hide which contract actually made the read (amongst a smaller anonymity set of all contracts which make use of the SlowUpdates contract).
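For illustration, a rough TypeScript model of that guarantee (the window length N, the names, and the “updates only in every N-th block” rule are my assumptions, not a spec):

// Models a shared SlowUpdates contract: state can only change in blocks where
// block % N === 0, so values read early in a window stay valid for the
// remaining N-1 blocks of that window.
const N = 100; // update window length in blocks (assumed)

class SlowUpdates {
  // contractAddress -> storageSlot -> value
  private store = new Map<string, Map<number, bigint>>();

  update(block: number, contract: string, slot: number, value: bigint): void {
    if (block % N !== 0) throw new Error("updates only allowed every N-th block");
    const slots = this.store.get(contract) ?? new Map<number, bigint>();
    slots.set(slot, value);
    this.store.set(contract, slots);
  }

  read(contract: string, slot: number): bigint | undefined {
    return this.store.get(contract)?.get(slot);
  }
}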

Of course, it has major downsides:

  • Apps would need to be sure that updates every n blocks would be acceptable.
  • There might be gas spikes every n blocks, as apps race to make updates in the one block per window in which updates are permitted.
  • Apps might fail to make an update (due to block space competition) for a long time.
  • Users who are attempting to read a state at a time close to the nth block might choose to wait until after the ‘update block’, in case their tx isn’t mined in time.
  • Apps which update their public state far less frequently than every n blocks would be causing unnecessary UX pain for their users (for the reason bullet-pointed immediately above).

Taking it one step further…

We could introduce another tree: a ‘Slow Updates’ tree at the protocol level.

This would be an updateable tree (i.e. not append-only) whose leaves can only be updated once per epoch, by one big ‘batch update’ circuit, which would be ‘bundled into’ the first* rollup of the epoch.

Private functions would then have an entire epoch during which they can read current account-based state, privately.

That is to say: a private function would be able to read from the slow tree, only revealing the root of the current epoch’s slow tree, and a nullifier would not be required (because it’s not the utxo tree).
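As a rough model of that read rule (illustrative TypeScript; EPOCH_LENGTH and the names are assumptions): during epoch e, reads are only valid against the root frozen at the start of e, and that root is the only thing a private read reveals.

const EPOCH_LENGTH = 32; // blocks per epoch (assumed)

// Root of the slow updates tree frozen at the start of each epoch.
const epochRoots = new Map<number, string>();

function epochOf(blockNumber: number): number {
  return Math.floor(blockNumber / EPOCH_LENGTH);
}

// Called once at the start of each epoch by the rollup, fixing the readable root.
function freezeEpochRoot(blockNumber: number, root: string): void {
  epochRoots.set(epochOf(blockNumber), root);
}

// A private read shows a membership proof against this root; no nullifier is
// emitted, and only the (already public) epoch root is revealed.
function readableRoot(currentBlock: number): string {
  const root = epochRoots.get(epochOf(currentBlock));
  if (root === undefined) throw new Error("epoch root not yet set");
  return root;
}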

The benefit of this approach (enshrining this tree at the protocol level vs an app contract) is that some of the disadvantages listed in my post directly above wouldn’t happen. It whittles the ‘downsides’ to:

  • Apps would need to be sure that updates every n blocks would be acceptable (which seems fine: just don’t use the tree if you don’t need it).
  • Users who are attempting to read a state at a time close to an epoch boundary might choose to wait until after the ‘update block’, in case their tx isn’t mined in time.
  • Apps which update their public state far less frequently than every epoch would be causing unnecessary UX pain for their users (for the reason bullet-pointed immediately above).

How would ‘Slow Tree’ updates be done?

From the Sequencer’s PoV:

  • A batch of ‘slow tree’ updates can only be submitted once per epoch.
  • Ideally, the sequencer of the very first slot of an epoch would include in their rollup block submission the pending batch of updates.
  • There would be a separate ‘slow tree updates’ circuit, whose proof could be optionally verified within the root rollup circuit. (Or we could just have another root rollup circuit variation containing this extra logic).
  • The L1 contract would enforce only one ‘slow tree update’ submission per epoch.
  • If the first sequencer doesn’t submit, any sequencer in the epoch can submit, as long as such a submission hasn’t been made earlier in the epoch.
  • Sequencers of future epochs would have to keep an eye on the tx pool, and ‘remember’ any txs which contain ‘slow tree update requests’, because we might not want such txs to loiter in the pool for an entire epoch.

From the user’s PoV:

  • Users would submit txs.
  • If those txs include non-empty ‘slow tree update requests’ (in their public inputs), then they can only be included in the next epoch.

Example use case

A really nice thing that this unlocks is proxy contracts:

  • A proxy contract needs to store a pointer to the address of the ‘proper’ contract. It can then make delegatecalls to that proper contract.
  • Options for how to store that pointer:
    • As a publicly visible note in the utxo tree:
      • Then every time a tx calls the proxy contract, it will need to read the ‘address pointer’, which will result in a nullifier being generated (in order to prove the ‘current’ address pointer is being used). This leaks to the world which contract has been called.
    • As a private note in the utxo tree:
      • Then nobody except the owner of the ‘secret’ can execute the contract, which often is useless.
    • As a public state in the public state tree:
      • Then we leak which contract we’ve called.
    • As a constant/immutable state in one of the trees:
      • Then we cannot update the pointer, which defeats the purpose of a proxy.
    • Using Lasse’s trick (see post above).
      • This needs a public call, which leaks which contract was executed. It also can lead to race conditions over a particular state variable.
    • As a value in this proposed ‘state updates’ tree.
      • It works, but txs shouldn’t be generated too close to the end of an epoch (because the tree’s root might imminently update and render the tx void).

I like this idea (and think we have discussed it irl before :thinking:). Since some of the more time-sensitive contracts would be the USDC and Tether blacklists, we might take a look at how quickly they have normally acted?

Similarly, if the epoch is quite a bit longer than the time expected to include a transaction, we would probably get a world where, most of the time, people would be able to “get through” before the update happens.

The UX might be slightly more awkward than my trick, but the cost should be reduced because the public call is removed, and it gives better privacy, so it’s a fair trade-off from my point of view.


I love this idea.

I had been thinking about doing the opposite: let a private function read public state up to N blocks stale. The app circuit would emit a public input like “I’ve read public state from root XYZ, and this tx should only be valid if XYZ is at most N blocks old”, which gets validated by the base rollup circuit in the sequencer. However, this has the big disadvantage that, when data gets updated, there are N blocks where both the old and new value coexist, and the caller can choose which one they want to use for each tx. And that makes things really hard to reason about.
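For comparison, a minimal TypeScript sketch of that staleness check (MAX_STALENESS and the names are assumptions):

const MAX_STALENESS = 8; // the "N blocks" from the description above (assumed)

// Base-rollup-style validation: the tx declares which historic root it read
// from, and the sequencer checks that the root is recent enough.
function validateStaleRead(rootBlockNumber: number, currentBlockNumber: number): void {
  if (currentBlockNumber - rootBlockNumber > MAX_STALENESS) {
    throw new Error("public state root too old");
  }
}
// The downside noted above: for up to N blocks after an update, txs citing
// either the old root or the new root both pass this check.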

On the other hand, with your approach you just delay updates, and once they go into effect, the old value is inaccessible. I think this is a much better building block than what I had in mind!


Specifically “they can only be included in the next epoch”
So does a user get a revert if this is not the right time? Or does the app store such requests and forward them to a sequencer when the epoch comes? Or can the users trust the protocol to force the sequencer to send these updates to the tree?


So does a user get a revert if this is not the right time?

Good point. If the Sequencer was able to include one of these ‘slow tree update’ txs before the next epoch, resulting in a ‘revert’, that would be bad, because then the tx wouldn’t be able to be included in the correct epoch.
We would probably have to cause the entire rollup to fail if a sequencer tries to include a slow tree update in an epoch where some previous slot has already submitted the slow tree update for that epoch. This would force sequencers to play by the correct rules.

Or does the app store such requests and forwards them to a sequencer when the epoch comes?

I guess an app could hold on to the tx until the time comes, but if the app is too late to submit, it might miss being included. I was thinking the app could send such a tx whenever, and it would be up to sequencers to store the txs until they can be submitted.

Or can the users trust the protocol to force the sequencer to send these updates to the tree?

I don’t think the protocol would force inclusion. I was hoping the regular fee markets would work: if a user pays enough, the sequencer will be incentivised to include their tx.
There is an interesting problem, though: what if there’s a spike in gas prices (on L1 or L2) after a user has submitted such a tx, but before the tx can be included in an epoch? The tx could become stuck. Perhaps the solution would just be for the user to resubmit the tx with a higher fee? But then it might be better to implement your suggestion: for the app to only submit these txs when the time comes.


Interesting!

We would probably have to cause the entire rollup to fail,

What are the downstream effects of this? Can the sequencer recover by sending another rollup? Or have they missed the epoch, causing a liveness issue?

I guess an app could hold on to the tx until the time comes, but if the app is too late to submit, they might miss being included.

Tbf, the same applies to a sequencer though, and there is no incentive for the sequencer to add it. However apps do have an incentive → users!

Great point on the gas issue!


What are the downstream effects of this? Can the sequencer recover by sending another rollup? Or have they missed the epoch, causing a liveness issue?

Depends on how quick they are to create a valid one. The contract will essentially not “know” that your tx reverted; it would just be as if it never happened, so if you can give it a new one in a timely manner it’s all good. Essentially it will be similar to no blocks being proposed for that time period.


An issue with relying on the fee at the time of the epoch update being covered by what was paid earlier is that an attacker could pump fees after an attack, such that an upgrade pausing the system or adding him to a blacklist is “skipped”. It also puts a lot of extra stress on gas prices at the end of an epoch, as all changes want in but no one wants to sit there with an underpriced tx.

Ideally it should be possible to execute the tx, and then have the update take effect when the epoch has passed. As txs that use this storage need the latest data, any pending txs of this type will be invalidated when the epoch boundary is crossed. So a tx will need a public input with the active epoch root to be valid.

Say that you accept this and play along. Introduce the slow poke storage:

struct SlowPoke {
  Field currentValue; 
  Field pendingValue;
  Field pendingActiveFrom;
}

mapping(slot => SlowPoke) public slowPokes;

When reading the slowpoke, you will use currentValue if blockNumber < pendingActiveFrom and pendingValue otherwise.

As we have bounded the block number to the epoch, the user cannot just pick something stale. But the cost is that we might throw away a lot of pending messages when updated :grimacing:.

To update, you set currentValue = pendingValue and then set pendingValue = newValue and pendingActiveFrom = nextEpochBlock. Users in the current epoch will be using currentValue, and after the epoch boundary is crossed, they need to use the new value.
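A literal TypeScript transcription of that read/update rule (names follow the struct above; the epoch length and the guard against promoting a not-yet-active pending value are my assumptions):

const EPOCH_LENGTH = 32; // blocks per epoch (assumed)

interface SlowPoke {
  currentValue: bigint;
  pendingValue: bigint;
  pendingActiveFrom: number; // block number from which pendingValue applies
}

function read(s: SlowPoke, blockNumber: number): bigint {
  return blockNumber < s.pendingActiveFrom ? s.currentValue : s.pendingValue;
}

function update(s: SlowPoke, blockNumber: number, newValue: bigint): void {
  // Only promote the pending value if it has already become active; otherwise a
  // second update in the same epoch simply replaces it (the "revert a mistaken
  // pause flag" case mentioned below).
  if (blockNumber >= s.pendingActiveFrom) {
    s.currentValue = s.pendingValue;
  }
  s.pendingValue = newValue;
  s.pendingActiveFrom = (Math.floor(blockNumber / EPOCH_LENGTH) + 1) * EPOCH_LENGTH;
}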

Note: if we allow writing further into the future for pendingActiveFrom, it can also be used as a timelock until X in the private domain, which I think could be quite fun to see for governance tokens turning on transferability etc. Some governance tokens cannot be transferred when minted until a decision is taken to support it; you could do that this way, with the event being timed somewhat into the future (beyond one epoch).

“Updating at the new epoch” is no longer an action that is done at the exact jump, but could happen any time in the epoch. This allows something like USDC to send one tx that updates their blacklists in both domains, with the private one just being a bit slower to kick in, and there should not be more pressure to be included at the end of an epoch than at the beginning. And you don’t need to have L1 handle this separately. Also, if there are multiple txs that update the same value, only the last one takes over; e.g., say a protocol updates a pause flag but then figures it was a mistake, so they revert their change by updating the flag again.

I might be rambling, so please let me know


I like this idea a lot!


I’ve a few thoughts from sitting down and thinking about how to implement something like this. My mind is split into two camps; I’ll do my best to describe them.

  1. The transactions which introduce pending updates for the next epoch are held by the sequencer in their mempool.
  2. There is a new “pending slow updates tree” that is written into during each epoch, queuing updates for the next.

1. Holding pending slow updates in the sequencer mempool.

This model implies that at the beginning of an epoch there is a set period of time in which pending updates can be included into the current epoch, after which the current epoch is frozen and no more updates can take place.
I fear this approach (as mentioned above) introduces a gas bidding war at the end of every epoch. To see what this bidding would look like, we can draw analogies from Ethereum’s mempool profile around oracle updates, where gas prices spike in the hope of including transactions around the update. This issue is not that big today for a couple of reasons.

  • On Ethereum, the price of maintaining an attack that blocks an oracle update is high, because the attack must be sustained continuously. If we have set times at which tree updates are included, we limit the cost of an attack to covering just this very small window. Granted, the attack must be repeated for multiple epochs, but at that point it may already be too late.
  • Secondary auction markets (e.g. Flashbots) take bidding for these blocks off-chain, decreasing the pain for users but introducing new centralising forces on the network.

To counteract this, we probably need additions to the pending updates tree to be a continuous auction that is not temporally isolated. In the section on the pending updates tree below, I propose a draft of one possible solution.

2. Introducing a pending updates tree

In this proposal, updates are no longer held in the mempool to be flushed around epoch boundaries; rather, inserting into the next epoch is a continuous auction that occurs during every block.
For the reasons stated above, we want to avoid isolating our auctions around epoch boundaries, as this has a relatively low censorship cost, and instead favour a continuous auction.
In this model:

  • Slow updates do not go directly into the tree, but are queued into a second tree (the pending slow updates tree, or something, idk).
  • Each epoch, this tree is flushed into the slow updates tree and then refreshed.
  • Space in the pending tree is auctioned off on a block by block basis, rather than around the epoch boundary.

Open questions for the pending updates tree

  • We do not want bidding for access to the slow updates tree to affect normal operation of the network, in which case it may make sense to have a secondary fee market that is only concerned with updates to this tree.
    • This increases the complexity of operations leveraging the slow updates tree; however, it is likely app developers and not users who will need to use it. Maybe the additional overhead is acceptable in this case, as its usage is likely scoped to power users.
    • This may be a HUGE can of worms to open just to address these state race conditions; however, if the use case is essential to the operation of normal apps (reliable public state access in the private setting), then maybe increased protocol complexity is an acceptable trade-off. A lot of thought will need to go into this if we end up choosing this approach.
  • Parallel to the fees discussion, the translation from the pending to the slow updates tree adds another protocol overhead: deciding how the sequencer will coordinate this work with the prover network, and how we will compensate these operations.
  • The tx fee will need to be split between the sequencer that adds the slow update and the sequencer that performs the flushing; again, the degree of this split must be modelled and decided.
  • We will need to decide the throughput of this fee market, again requiring modelling.

My vote goes for option #2, the pending updates tree. As you say, having a set of upgrades tied to a very specific block can turn into a big issue depending on the sequencer selection algorithm we pick (for instance, we are seeing that RANDAO manipulations could affect a random-based selection algorithm like Fernet) since it makes a particular block a target for manipulation.

In addition to the above, holding updates in the mempool can be annoying if the txs that execute those updates have other effects. Let’s say I send a tx that updates the private data tree and the slow updates tree: I wouldn’t want the private data tree change to be withheld because I’m enqueuing a slow update change as well. Holding updates in the mempool also complicates the p2p algorithm, since now we have txs that need differentiated treatment.

As for the problem you mention of translating from the pending to the slow-updates tree: I think this should be doable at a low cost. Instead of keeping just the pending updates in the pending tree, we can keep the whole tree as it gets updated, and on the epoch boundary just update the slow-update tree root with the pending one (like a double-buffer).
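A toy TypeScript illustration of that double-buffering (class and method names are my assumptions):

// Reads use the active root; queued updates only touch the pending root.
// At the epoch boundary the "flush" is just adopting the pending root.
class DoubleBufferedSlowTree {
  constructor(private activeRoot: string, private pendingRoot: string) {}

  readRoot(): string {
    return this.activeRoot;
  }

  queueUpdate(newPendingRoot: string): void {
    this.pendingRoot = newPendingRoot; // pending tree updated incrementally during the epoch
  }

  onEpochBoundary(): void {
    this.activeRoot = this.pendingRoot; // O(1) swap, no re-insertion of leaves needed
  }
}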


Another question I have is what kind of tree we are going to use here. For some applications, it may be desirable to prove non-inclusion in the slow updates tree, so we’d need something like the nullifier tree, as opposed to the private data tree (though it has a higher overhead).

In particular, I was thinking about contract or encryption key upgrades: today we can prove the encryption public key or bytecode of a contract through its address (the address is the create2-like hash of those two, plus a salt and a few others). This has the benefit of “encoding” an encryption key into an address, which is important for being able to send encrypted notes to an address before that recipient has interacted with the chain by deploying their account contract.

However, if the user wanted to update their encryption key or the code of the contract, then the relationship between the key and the address would no longer be valid. I was thinking we can store updates to the public encryption key or code into the slow update tree, so proving that a key (or code) corresponds to a given address would be a matter of either a) showing the address preimage and proving non-inclusion of an update in the slow-updates tree, or b) showing the updated value in the slow-updates tree.
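A sketch of that resolution logic (illustrative TypeScript; the preimage fields, the hash, and the map standing in for the slow-updates tree are assumptions):

import { createHash } from "crypto";

interface AddressPreimage {
  encryptionKey: string;
  bytecodeHash: string;
  salt: string;
}

// Stand-in for the slow-updates tree: address -> updated encryption key, if any.
const keyUpdates = new Map<string, string>();

function addressOf(p: AddressPreimage): string {
  // create2-like address derivation (the real scheme uses a different hash and more fields)
  return createHash("sha256")
    .update(p.encryptionKey)
    .update(p.bytecodeHash)
    .update(p.salt)
    .digest("hex");
}

function resolveEncryptionKey(address: string, preimage: AddressPreimage): string {
  const updated = keyUpdates.get(address);
  if (updated !== undefined) return updated; // case (b): show the updated value in the tree
  if (addressOf(preimage) !== address) throw new Error("invalid address preimage");
  return preimage.encryptionKey;             // case (a): preimage + non-inclusion of an update
}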

This has the side effect of tying pretty much every single private tx to the slow-updates tree, since asserting the public key of a user or the code of a contract is always needed. So I’m not sure if it’s such a good idea, but I haven’t been able to come up with a better alternative for key rotation or contract upgrades (assuming no proxies, that is).


I might be missing something, but is the issue of side effects being slow not handled by the method I proposed in Accessing historic public state from private functions - #14 by LasseAztec? There, changes are made “instantly” but don’t take effect until after the time limit. There is not a pending tree and an “in use” tree; there is just one tree. To be able to point to specific historic blocks, you could include it similarly to any other historic public data.

For some applications, it may be desirable to prove non-inclusion in the slow updates tree

Since we are dealing with public data in this tree, it should be rather simple to handle non-inclusion, as you could just look up the slot?


More thoughts / realisations about the slow updates tree:

Assuming a Slow Updates tree which is implemented as a smart contract, a contract doesn’t need to do a “call” to read from it! Thanks to @jaosef for pointing this out. Instead, the calling contract can read the state (such as the root, frontier, leaves; depending on the implementation) of the slow updates contract from the archive tree. This is slightly ugly, in that the calling contract now needs to have an understanding of the slow update tree’s epoch length and storage layout, and effectively needs to implement slow updates tree membership proofs itself. So it’s not ideal from the perspective of clean interfaces and modularisation. But… it does save a precious function call!


Another thought:

There’s still some concern that slow updates trees could lead to spikes and troughs in network activity. E.g. at the very end of an epoch there might be a spike in submissions (as people race to read against the root before it changes), followed by a lull at the beginning of an epoch (whilst everyone takes time to generate new membership proofs against the new root), followed by another spike when everyone’s new proofs are ready.

Such volatility might only become a concern if everyone congregates around a select few SU tree implementations, because then epoch windows are more likely to be “in-phase”.

One way to smooth this possible network volatility is to introduce another tree, so people need to wait one extra epoch before updates take effect.

So instead of:

  • current tree ← this tree is fixed, and reads are done against this
  • updateable tree ← this gets updated during the current epoch. In the next epoch, people can read from it.

… we have:

  • current tree ← this tree is fixed, and reads are done against this
  • next tree ← this tree is fixed. In the next epoch, people can read from it. People can begin creating membership proofs against this if they’re running out of time in the current epoch, because its root is static.
  • updateable tree ← this gets updated during the current epoch. In two epochs’ time, people will be able to read from it.

This approach adds extra latency (which might not be good if someone needs to make an immediate state change), but I think it might smooth network activity.

To support this idea, a min_block_number might be needed, so that txs could be submitted to the tx pool in advance of them being valid.
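To make the rotation explicit, here is a toy TypeScript model of the three roots (names are my assumptions):

// current    <- fixed; reads in this epoch are done against it
// next       <- fixed; reads in the next epoch will use it, so proofs can be prepared early
// updateable <- receives this epoch's updates; readable two epochs from now
class TripleBufferedSlowTree {
  constructor(
    private current: string,
    private next: string,
    private updateable: string
  ) {}

  readRoot(): string {
    return this.current;
  }

  upcomingRoot(): string {
    return this.next; // static, so membership proofs against it can be built ahead of time
  }

  applyUpdate(newRoot: string): void {
    this.updateable = newRoot;
  }

  onEpochBoundary(): void {
    this.current = this.next;
    this.next = this.updateable;
  }
}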
