Hey hey! Thanks for the post 
I think the bottleneck is not so much the amount of computation, but the amount of data to download in order to sync.
At 10 tps, that's ~315m txs/year.
Pretending every tx is a basic 2-in-2-out Zcash-style transfer, that's 630m new notes each year.
Brute-force discovery of which notes might be pertinent to you could be done by (somehow) downloading only the ephemeral public keys (which could be encoded as 32 bytes), and ignoring the ciphertexts. (E.g. with this kind of scheme: ERC-5564: Stealth Addresses, or with similar approaches that require an extra 32 bytes: ERC-5564 Stealth Addresses - #10 by iAmMichaelConnor - EIPs - Fellowship of Ethereum Magicians).
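To make that scanning step concrete, here's a rough sketch (in Python, with made-up derivations; not Aztec's or ERC-5564's exact construction) of what "only download the ephemeral public keys, plus a small tag" looks like from the recipient's side:

```python
# Hypothetical tag-based note discovery, in the spirit of the "extra 32 bytes" variant
# linked above. All derivations here are illustrative placeholders.
import hashlib
from cryptography.hazmat.primitives.asymmetric.x25519 import (
    X25519PrivateKey, X25519PublicKey,
)

TAG_LEN = 32  # the hypothetical extra 32 bytes per note

def make_announcement(recipient_view_pub: X25519PublicKey) -> tuple[bytes, bytes]:
    """Sender side: fresh ephemeral key + tag derived from the ECDH shared secret."""
    eph_priv = X25519PrivateKey.generate()
    shared = eph_priv.exchange(recipient_view_pub)
    tag = hashlib.sha256(b"note_tag" + shared).digest()[:TAG_LEN]
    return eph_priv.public_key().public_bytes_raw(), tag

def scan(view_priv: X25519PrivateKey, announcements: list[tuple[bytes, bytes]]) -> list[int]:
    """Recipient side: recompute the shared secret for every announcement and keep
    the indices whose tag matches -- those are the ciphertexts worth fetching."""
    mine = []
    for i, (eph_pub_bytes, tag) in enumerate(announcements):
        shared = view_priv.exchange(X25519PublicKey.from_public_bytes(eph_pub_bytes))
        if hashlib.sha256(b"note_tag" + shared).digest()[:TAG_LEN] == tag:
            mine.append(i)
    return mine
```

The per-note check itself is cheap; the cost that matters here is shipping all those announcements to the client in the first place.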
Downloading just 32 bytes per note would be ~20 GB of data per year, which isn't much if you stay synced to the chain, but we think it could be bad UX for light nodes that wish to sync after some time away from the chain.
With a 100 Mbit/s connection (it'll obviously vary from place to place and user to user), that's ~30 minutes to learn the indices of which ciphertexts are relevant to you from the past year of network activity, which isn't great.
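For anyone who wants to check the arithmetic, here's the back-of-envelope version (using the constants assumed above):

```python
# Assumed: 10 tps, 2 notes per tx, 32 bytes per ephemeral public key, 100 Mbit/s download.
TPS = 10
SECONDS_PER_YEAR = 365 * 24 * 3600            # ~31.5M
txs_per_year = TPS * SECONDS_PER_YEAR          # ~315M
notes_per_year = 2 * txs_per_year              # ~630M
bytes_per_year = 32 * notes_per_year           # ~20 GB
download_seconds = bytes_per_year / (100e6 / 8)  # 100 Mbit/s = 12.5 MB/s
print(f"{bytes_per_year/1e9:.0f} GB/year, ~{download_seconds/60:.0f} min to scan a year")
# -> 20 GB/year, ~27 min
```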
But this approach of just downloading the ephemeral public keys (to learn which ciphertext indices are relevant to you) doesn't quite work without extra cryptography, because you'd then need to make a separate query to a server to download your pertinent ciphertexts, at which point you leak to the server which indices are pertinent to you. The server could implement some cool private information retrieval (PIR) approach so that it doesn't learn which ciphertexts are yours.
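For intuition, here's one of the simplest PIR flavours — the classic two-non-colluding-server XOR trick — sketched in Python. A real deployment would more likely use a single-server, lattice-based scheme, but the privacy goal is the same: fetch ciphertext i without either server learning i.

```python
# Toy 2-server information-theoretic PIR, purely to illustrate the idea.
import secrets
from functools import reduce

def xor_blocks(blocks: list[bytes]) -> bytes:
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

def client_queries(n: int, i: int) -> tuple[set, set]:
    """Pick a random subset S of indices; send S to server A and S xor {i} to server B."""
    s = {j for j in range(n) if secrets.randbits(1)}
    return s, s ^ {i}

def server_answer(db: list[bytes], query: set) -> bytes:
    """Each server XORs together the requested blocks; a random subset reveals nothing about i."""
    return xor_blocks([db[j] for j in sorted(query)]) if query else bytes(len(db[0]))

def reconstruct(ans_a: bytes, ans_b: bytes) -> bytes:
    """Client combines both answers: everything cancels except block i."""
    return bytes(x ^ y for x, y in zip(ans_a, ans_b))
```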
But if we want to keep things straightforward (which, I agree, would be lovely: simple is great), then the user would need to download all of the ciphertexts too.
A bare-minimum private token note on Aztec might need ~2 fields: contract_address, value. (The owner address could maybe be inferred from the incoming viewing key that matched the ephemeral public key, the randomness could be derived from the shared_secret of the tx, a faerie-gold nonce could be derived from one of the nullifiers of the tx, and the value could technically use up less space.) So let's say the recipient actually needs to download ~four 32-byte words per note: ephemeral_public_key, contract_address, value, a nullifier.
That's now ~80 GB/year of data, so ~2 hours to sync a year's worth of data.
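As a sketch of why only ~4 words need downloading, here's the split between downloaded and locally re-derived fields (the derivations are purely illustrative, not Aztec's actual key schedule):

```python
import hashlib

def rebuild_minimal_note(downloaded: dict, shared_secret: bytes, owner: bytes) -> dict:
    # downloaded = {ephemeral_public_key, contract_address, value, nullifier}: 4 x 32 bytes.
    # Everything else can be recomputed once the ephemeral key matched your viewing key.
    return {
        "contract_address": downloaded["contract_address"],          # downloaded
        "value": downloaded["value"],                                 # downloaded
        "owner": owner,                                               # inferred from the matching viewing key
        "randomness": hashlib.sha256(b"randomness" + shared_secret).digest(),     # derived
        "nonce": hashlib.sha256(b"nonce" + downloaded["nullifier"]).digest(),     # derived
    }
```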
But Aztec txs won't all be bare-minimum transfers: they're calls to general private smart contracts, so a note might need to convey more information than just a value. To be flexible and general, a note payload will likely need to contain (see the sketch after this list):
- ephemeral public key
- contract_address
- storage_slot (so that the Aztec execution environment can store the note against some index, in some collection that can be queried by a smart contract when it later needs to read from that storage slot)
- note_id (in case a smart contract contains many different note definitions. This is likely 1 byte, so I might ignore it here).
- custom note fields (given meaning by the smart contract developer - we’re yet to decide what this constant number should be). It’ll likely need to be several fields if users want to build interesting things. Let’s say 4 for example’s sake, but it might need to be more. There are lots of tradeoffs around this topic.
- (and download a nullifier to derive the nonce)
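As a rough sketch (assumed layout and example constants only; nothing here is finalized), such a payload might look like:

```python
from dataclasses import dataclass

FIELD_BYTES = 32
NUM_CUSTOM_FIELDS = 4  # example only; may well need to be larger

@dataclass
class NotePayload:
    ephemeral_public_key: bytes   # 32 bytes
    contract_address: bytes       # 32 bytes
    storage_slot: bytes           # 32 bytes
    note_id: int                  # ~1 byte; ignored in the sizing below
    custom_fields: list[bytes]    # NUM_CUSTOM_FIELDS x 32 bytes
    nullifier: bytes              # 32 bytes, downloaded to derive the nonce

    def download_size(self) -> int:
        # 4 "fixed" words + the custom fields => ~8 x 32 = 256 bytes per note
        return 4 * FIELD_BYTES + len(self.custom_fields) * FIELD_BYTES
```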
That's 4 fields plus e.g. 4 custom note fields, so 8 × 20 GB ≈ 160 GB/year, and ~4 hours to sync a year's worth of data.
But each tx will very likely need to create more notes than just 2. A tx can touch multiple private functions, and each of those functions might wish to create new notes. We haven't settled on constants for how many notes a tx should emit; it'll depend on the apps we see during testnets, and on benchmarking costs. There's also the consideration that we need every tx to look the same. Maybe we'll have a few "tranches" of privacy sets that allow different numbers of new notes / nullifiers.
A reasonable tranche might be 8 notes per tx. Then we'd be looking at ~640 GB/year, so ~16 hours to download the data.
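Plugging the larger payloads and note counts into the same back-of-envelope calculation (same assumptions as before; the exact figures differ slightly from the rounded ones above):

```python
def yearly_sync(tps=10, notes_per_tx=2, words_per_note=1, mbit_per_s=100):
    """Return (GB to download per year, hours to download it)."""
    gb = tps * 365 * 24 * 3600 * notes_per_tx * words_per_note * 32 / 1e9
    hours = gb * 1e9 / (mbit_per_s * 1e6 / 8) / 3600
    return round(gb), round(hours, 1)

print(yearly_sync(words_per_note=4))                   # bare-minimum notes: (81, 1.8)
print(yearly_sync(words_per_note=8))                   # general notes:      (161, 3.6)
print(yearly_sync(notes_per_tx=8, words_per_note=8))   # 8-note tranche:     (646, 14.4)
```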
Lots of ISPs around the world won't let their customers download that much data in a month, let alone in a single day. We might need to be mindful of those annoying limitations too.
And if someone needs to sync after multiple years away… 
We'll be hitting data publication bottlenecks before any of these note discovery problems play out, but with the RFP we wanted to explore whether there are any interesting ideas to bring down sync times for ordinary users. The client-server model is a compelling approach, if PIR can be used.
The good news is, there are lots of incremental approaches that can be taken, with different tradeoffs.