Fast note decrypting

The protocol spec on note tagging outlines three strategies for users to find the notes that belong to them:

  • Trial Decryption
  • Delegated Trial Decryption
  • Tag Hopping

Trial decryption is the most straightforward option: attempt to decrypt every note with your key and keep the ones that succeed. According to the docs, this is “a significant burden on the user”:

This is the cheapest approach in terms of calldata cost, and the simplest to implement, but puts a significant burden on the user. Should not be used except for accounts tied to users running full nodes.

The main point of my post: it seems like trial decryption isn’t that expensive? Here’s the pseudocode:

decrypt(ciphertext, recipient_private_key):
    # first 64 bytes of the payload are the sender's ephemeral public key
    ephemeral_public_key = ciphertext[0:64]
    # ECDH: multiply the ephemeral public key by the recipient's private key
    shared_secret = ephemeral_public_key * recipient_private_key
    # derive the symmetric key and IV from the shared secret
    [aes_key, aes_iv] = sha256(shared_secret ++ [0x01])
    return aes_decrypt(aes_key, aes_iv, ciphertext[64:])
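
To make the per-note cost concrete, here's a rough, runnable Python sketch of a single trial decryption. The curve (secp256k1 as a stand-in), the 0x04 point prefix, the AES-128-CBC key/IV split, and the lack of a MAC check are all assumptions for illustration rather than Aztec's actual encoding, but the work per note (one ECDH, one sha256, one AES decryption) matches the pseudocode above:

import hashlib
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

CURVE = ec.SECP256K1()  # stand-in curve; the real protocol uses its own embedded curve

def try_decrypt(ciphertext: bytes, recipient_private_key: ec.EllipticCurvePrivateKey):
    """Attempt to decrypt one note; returns a plaintext, or None on failure."""
    # First 64 bytes: the sender's ephemeral public key as uncompressed x || y.
    ephemeral_public_key = ec.EllipticCurvePublicKey.from_encoded_point(
        CURVE, b"\x04" + ciphertext[:64]
    )
    # ECDH: shared_secret = ephemeral_public_key * recipient_private_key
    shared_secret = recipient_private_key.exchange(ec.ECDH(), ephemeral_public_key)
    # Derive a 16-byte AES key and a 16-byte IV from the shared secret.
    digest = hashlib.sha256(shared_secret + b"\x01").digest()
    aes_key, aes_iv = digest[:16], digest[16:]
    decryptor = Cipher(algorithms.AES(aes_key), modes.CBC(aes_iv)).decryptor()
    try:
        plaintext = decryptor.update(ciphertext[64:]) + decryptor.finalize()
    except ValueError:
        return None
    # A real scheme would check padding or a MAC here to decide whether the note
    # actually belongs to this key; CBC decryption alone always "succeeds".
    return plaintext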

  • At 10 TPS, there would be ~1 million notes to trial decrypt each day.
  • AES is fast compared to sha256.
  • sha256 is probably the “limiting” factor, but it seems fast enough?

How long would it take users to compute 1 million sha256 hashes? A few minutes depending on hardware? A few minutes per day isn’t that bad. And this could easily be optimized.
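
As a rough sanity check, here's a tiny benchmark of one million sha256 calls over 33-byte inputs (the shared_secret ++ 0x01 from the pseudocode). Even in Python, with its per-call overhead, this typically finishes in a few seconds or less, and native code with SHA extensions is much faster:

import hashlib, os, time

# Hash a million 33-byte inputs and time it; results will vary by machine.
inputs = [os.urandom(33) for _ in range(1_000_000)]
start = time.perf_counter()
for data in inputs:
    hashlib.sha256(data).digest()
print(f"1,000,000 sha256 hashes in {time.perf_counter() - start:.2f}s")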

It seems to me that trial decryption is fast enough that every user could do it, without needing to run a full node. Am I missing something?

34 Likes

Hey hey! Thanks for the post :smiley:

I think the bottleneck is not so much the amount of computation, but the amount of data to download in order to sync.

With 10 TPS, that's ~315m txs/year.

Pretending every tx is a basic 2-in-2-out Zcash-style transfer, that's 630m new notes each year.

Brute-force discovery of which notes might be pertinent to you could be done by (somehow) downloading only the ephemeral public keys (which could be encoded as 32 bytes) and ignoring the ciphertexts. (E.g. with a scheme like ERC-5564: Stealth Addresses, or similar approaches that require an extra 32 bytes, as discussed in the “ERC-5564 Stealth Addresses” thread on the Fellowship of Ethereum Magicians forum.)

Downloading just 32 bytes per note would be ~20GB of data per year, which isn't much if you stay synced to the chain, but we think that might be a bad UX for light nodes who wish to sync after some time away from the chain.
With a 100Mbit/s internet speed (it’ll obviously vary from place to place and user to user), that’s ~30 minutes to learn the indices of which ciphertexts are relevant to you from the past year of network activity, which isn’t great.
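
For the back-of-envelope numbers in this thread, here's a tiny helper (the name and interface are just made up for illustration) that turns notes-per-year, bytes-per-note, and link speed into data volume and sync time; the ~20GB / ~30 minute figure above falls out of it:

def sync_estimate(notes_per_year: float, bytes_per_note: int, mbit_per_s: float = 100.0):
    """Rough data volume (GB) and time (hours) to sync a year's worth of note data."""
    bytes_per_year = notes_per_year * bytes_per_note
    gb_per_year = bytes_per_year / 1e9
    hours = bytes_per_year * 8 / (mbit_per_s * 1e6) / 3600  # bits / link speed in bit/s
    return gb_per_year, hours

# 630m notes/year, ephemeral public keys only (32 bytes each), 100 Mbit/s link:
gb, hours = sync_estimate(630e6, 32)
print(f"{gb:.0f} GB/year, ~{hours * 60:.0f} minutes to sync a year")  # ≈ 20 GB, ~27 min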

But this approach of just downloading the ephemeral public keys (to learn which ciphertext indices are relevant to you) doesn’t quite work without extra cryptography, because you’d then need to make a separate query to a server to download your pertinent ciphertexts, at which point you leak to the server which indices are pertinent to you. The server could implement some cool PIR approach so that they don’t learn which ciphertexts are yours.

But if we want to keep it straightforward (which I agree would be lovely: simple is great), then the user would need to download all of the ciphertexts too.

A bare-minimum private token note on Aztec might need ~2 fields: contract_address, value. (The owner address could maybe be inferred from the incoming viewing key that matched the ephemeral public key, the randomness could be derived from the shared_secret of the tx, a faerie-gold nonce could be derived from one of the nullifiers of the tx, and the value could technically use up less space.) So let's say the recipient actually needs to download ~four 32-byte words per note: ephemeral_public_key, contract_address, value, and a nullifier.

That’s now 80GB/year of data, so 2 hours to sync a year’s worth of data.

But Aztec txs won't all be bare-minimum transfers. They're general private smart contracts, so a note might need to convey more information than a value. To be flexible and general, a note payload will likely need to contain the following (sketched in code after this list):

  • ephemeral public key
  • contract_address
  • storage_slot (so that the Aztec execution environment can store the note against some index, in some collection that can be queried by a smart contract when it later needs to read from that storage slot)
  • note_id (in case a smart contract contains many different note definitions. This is likely 1 byte, so I might ignore it here).
  • custom note fields (given meaning by the smart contract developer - we’re yet to decide on what this constant number should be). It’ll likely need to be several fields if users want to build interesting things. Let’s say 4 for example’s sake, but it might need to be more. Lots of tradeoffs around this topic.
  • (and download a nullifier to derive the nonce)
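
As a rough illustration only (the names, sizes, and layout here are assumptions, not a settled encoding), such a payload might look like:

from dataclasses import dataclass
from typing import List

FIELD_BYTES = 32  # each field is one 32-byte word

@dataclass
class NotePayload:
    ephemeral_public_key: bytes   # 32 bytes
    contract_address: bytes       # 32 bytes
    storage_slot: bytes           # 32 bytes
    note_id: int                  # ~1 byte; ignored in the size estimates below
    custom_fields: List[bytes]    # e.g. 4 x 32 bytes, meaning set by the contract dev
    nullifier: bytes              # 32 bytes, downloaded to derive the nonce

    def download_size(self) -> int:
        # 4 fixed 32-byte fields plus the custom fields (note_id ignored, as above).
        return (4 + len(self.custom_fields)) * FIELD_BYTES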

So 4 fields plus e.g. 4 custom note fields. So 8 * 20GB = 160GB/year. So 4 hours to sync a year’s worth of data.

But each tx will very likely need to create more notes than just 2. A tx can touch multiple private functions, and each of those functions might wish to create new notes. We haven’t settled on constants for how many notes a tx should emit. It’ll depend on the apps we see during testnets, and on benchmarking costs. There’s also a consideration that we need every tx to look the same. Maybe we’ll have a few “tranches” of privacy set, that allow different numbers of new notes / nullifiers.

A reasonable tranche might be 8 notes per tx. So then we’d be looking at 640GB/year, so 16 hours to download the data.
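
Plugging these scenarios into the sync_estimate helper sketched earlier (still just a back-of-envelope at 100 Mbit/s; the figures land close to the ones quoted above):

# 2 notes/tx, 4 words/note (bare-minimum transfer):
print(sync_estimate(630e6, 4 * 32))      # ≈ (81 GB, ~1.8 h)
# 2 notes/tx, 8 words/note (general note payload):
print(sync_estimate(630e6, 8 * 32))      # ≈ (161 GB, ~3.6 h)
# 8 notes/tx, 8 words/note:
print(sync_estimate(8 * 315e6, 8 * 32))  # ≈ (645 GB, ~14 h)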

Lots of ISPs around the world won't let their customers download that much data in a month, let alone in a single day. We might need to be mindful of those annoying limitations too.

And if someone needs to sync after multiple years away… :scream:

We’ll be hitting data publication bottlenecks before any of these note discovery problems play out, but with the RFP, we wanted to explore whether there are any interesting ideas to bring down sync times for ordinary users. The client-server model is a compelling approach, if private information retrieval approaches can be used.

The good news is, there are lots of incremental approaches that can be taken, with different tradeoffs.

4 Likes