Researching Aztec applicability for a privacy-first participation protocol

Hi Aztec team — I’m Ivan, building D-Scope, a privacy-preserving survey protocol where participants prove eligibility to answer certain questions without revealing sensitive attributes. We’re exploring Aztec as a potential privacy-native foundation for gated participation. Our goal is consumer-grade UX, so we’re looking for practical benchmarks and guidance:
Typical client-side proof generation time (desktop vs mobile) for an eligibility gate with ~2–5 simple conditions plus a nullifier to prevent duplicate participation?
How does this scale under higher participation volumes?
Best practices to keep latency/friction low and avoid degrading user experience.
If you can share current metrics, recommended patterns, or any references/examples, that would be hugely helpful!


eyey! interesting. i was actually developing a neural-network trainer (for digit recognition) in Aztec and trying to find a use case for it; i think this could fit right in. what do you do with the survey results?

i imagine: users answer questions by privately inputting data, and the result is a model trained on those interactions. the private data never leaves the user’s device; only the model weights are updated to reflect the user’s interactions
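to make that concrete, here's a toy sketch of the on-device update idea (plain Python, not Aztec/Noir code; the perceptron-style rule and all names are just illustrative assumptions on my part):

```python
# Conceptual sketch: a user's private input updates model weights locally.
# The raw (features, label) pair stays on the device; only the updated
# weights would ever be reflected in shared state.

def local_update(weights, features, label, lr=0.1):
    """One perceptron-style training step run on the user's device."""
    # prediction from the current weights
    pred = 1 if sum(w * x for w, x in zip(weights, features)) > 0 else 0
    err = label - pred
    # private `features`/`label` never leave this function's scope;
    # only the new weights are returned
    return [w + lr * err * x for w, x in zip(weights, features)]

weights = [0.0, 0.0, 0.0]
weights = local_update(weights, [1.0, 0.5, -0.2], label=1)
print(weights)  # weights now encode the interaction, not the raw input
```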

i’m planning to make the repo public soon, will keep you posted

as for benchmarks: a single layer with 650 total params gives me ~12s proving time (for submitting a new input), a multi-layer network with 1200 params ~15s, and a convolutional network with 91 params ~10s. reading the model is roughly 0.5s.

be aware that the 2 main limitations i ran into are: (a) each transaction can only write up to ~60 fields (~1.9 KB), and (b) each transaction is limited to 6M gas of computation
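for intuition, the ~60-field figure lines up with the ~1.9 KB number if each field is a 32-byte element (that byte size is my assumption, not something stated above):

```python
# Back-of-envelope check of the per-transaction write limit:
# ~60 fields of 32 bytes each comes out to roughly 1.9 KB.
FIELD_BYTES = 32   # assumed size of one field element (fits a ~254-bit value)
MAX_FIELDS = 60    # approximate per-transaction write cap mentioned above

total = FIELD_BYTES * MAX_FIELDS
print(total, "bytes ~", round(total / 1024, 2), "KB")  # 1920 bytes ~ 1.88 KB
```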

Client-side proof generation time depends on exactly what conditions you’re checking. That said, our proving system is optimized for client-side proof generation. An AMM swap on Aztec takes about 15s on a MacBook M3 and 30s on a Pixel 8 (using code here). It should scale well because all proof generation is done client side.

Is this something that you’d want to put on chain, or would it be an off-chain application? Doing it on Aztec will add latency because you have to wait for transactions to be included on chain. If you can do it off chain with just Noir, latency could be reduced. Do you have more details about your application design?

Hi, thanks for the detailed response, this is very helpful. Let me provide more context about our application design.
D-Scope is a privacy-preserving survey protocol. The core flow is:
A survey creator defines eligibility conditions (typically 2–5 simple predicates such as an age threshold, country check, KYC flag, etc.)
A participant proves eligibility without revealing the underlying attributes (we use a nullifier to prevent duplicate participation)
Individual answers remain private, but we need aggregated results for analytics

From a circuit perspective, we expect something like:
3–5 simple comparisons
1 nullifier
potentially 1 Merkle membership proof (depth ~20)
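For concreteness, here is a plain-Python model of the circuit logic sketched above. This is not Noir/Aztec code: the hash function, predicate values, and all names are illustrative stand-ins (a real circuit would use a circuit-friendly hash like Pedersen or Poseidon rather than SHA-256):

```python
import hashlib

def h(*parts: bytes) -> bytes:
    # stand-in for a circuit-friendly hash (Pedersen/Poseidon in a real circuit)
    return hashlib.sha256(b"".join(parts)).digest()

def verify_merkle(leaf, path, root):
    """Membership proof: `path` is a list of (sibling, sibling_is_left) pairs."""
    node = leaf
    for sibling, sibling_is_left in path:
        node = h(sibling, node) if sibling_is_left else h(node, sibling)
    return node == root

def eligibility(age, country, kyc_ok, secret, survey_id, leaf, path, root):
    # 3-5 simple comparisons (example predicates only)
    ok = age >= 18 and country in {"US", "DE", "FR"} and kyc_ok
    # 1 Merkle membership proof (depth ~20 in practice; shorter here)
    ok = ok and verify_merkle(leaf, path, root)
    # 1 nullifier: deterministic per (secret, survey), so a repeat
    # participation produces the same nullifier and can be rejected
    nullifier = h(secret, survey_id)
    return ok, nullifier
```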

We do not intend to store full survey responses in Aztec state; ideally, only nullifiers and minimal commitments would be written on-chain.

Our main constraint is consumer-grade UX. We are targeting a non-crypto-native audience (e.g. marketing research, DAO governance), so:
Proving time ideally <10–15s on desktop
Mobile proving time should not exceed ~20–25s
High participation bursts (10k–50k users) should not degrade the experience significantly
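The nullifiers-only on-chain state we have in mind can be sketched as a simple spent-set (illustrative Python, not an Aztec contract; the class and method names are hypothetical):

```python
class NullifierRegistry:
    """Minimal on-chain state: only spent nullifiers, no survey content."""

    def __init__(self):
        self.spent = set()

    def submit(self, nullifier: bytes) -> bool:
        # reject duplicate participation: the same (secret, survey) pair
        # always derives the same nullifier, so a repeat submission collides
        if nullifier in self.spent:
            return False
        self.spent.add(nullifier)
        return True

reg = NullifierRegistry()
print(reg.submit(b"\x01" * 32))  # True  (first participation accepted)
print(reg.submit(b"\x01" * 32))  # False (duplicate rejected)
```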

A few specific questions:
For a circuit of this complexity, what proving times would you expect on a Mac M-series, a mid-range Android (e.g. Pixel-class), and an iPhone?
In a survey-style application, would you recommend executing the eligibility logic fully inside an Aztec private contract, or generating proofs off-chain with Noir and only anchoring minimal state (e.g. nullifiers) on Aztec?
Under higher participation volumes, what typically becomes the bottleneck?
Are there upcoming optimizations (mobile proving, native/GPU acceleration, prover improvements) that would materially reduce client-side latency?

We are trying to decide whether Aztec should be the full privacy execution layer for D-Scope or a minimal settlement layer anchoring off-chain proofs.

Any guidance on recommended architectural patterns for this type of application would be greatly appreciated!

Ey Ivan, I’m not sure if it fully applies to this use case, but if I understand it correctly you don’t wanna have the actual inputs of the users, just an aggregated “view” of their responses. That’s similar to what this neural network actually does: https://aztec-hive-training.vercel.app/ (the difference is that the users here make a claim, “this drawing is a 1”, and the neural network remembers that). If you can define your problem in machine-learning terms, I’m sure you’ll find a plausible implementation in the codebase. As for speed on different devices, I invite you to try it out!


Thanks for sharing this! Yes, the aggregated view without exposing individual inputs is exactly what we’re aiming for in D-Scope; the ML example is an interesting analogy for privacy-preserving aggregation.
I’ll try the demo to get a sense of proving performance across devices!
