On proving marketplaces

Authors: Jakob Hackel @ BlockScience, Cooper Kunz @ Aztec Labs

Context

Aztec’s community originally articulated a number of designs for decentralized proving protocols during an RFP facilitated in October 2023. These designs mapped out a landscape with a variety of mechanisms, e.g. auctions, proof races, or cooperative solutions aiming to achieve traditional Byzantine fault tolerance.

Examples: 1, 2, 3, 4, 5.

The current designs allow the randomly elected sequencer to outsource or subcontract proof generation to any zero-knowledge proving marketplace that supports the Aztec network.

Here we aim to articulate a new idea: zero-knowledge proving marketplaces may be able to offer an improved user (requester) experience, and an improved quality of service (QoS), by allowing the requester of a zero-knowledge proof to specify which mechanism best fits their specific needs. Alternatively, a marketplace may provide a user experience that abstracts away the mechanism and makes an explicit decision based on a set of expressed preferences. Notably, this idea is not specific to or designed for Aztec, but informed by the many conversations we’ve had over the past few months with proving marketplaces.

Our thinking

Since the QoS desired by each zero-knowledge proof requester is likely different from that of any other requester (with different preferences on semi-fungible attributes such as timeliness, costs, guarantees, predictability, etc.), any decentralized prover marketplace has two choices:

  1. Choose one mechanism by which provers coordinate, and hope that it roughly fits enough requesters’ requirements, or
  2. Choose several mechanisms and find an allocation to requesters that better maps to their requirements (lower loss of quality of service).

If several mechanisms are chosen, there are again two choices:

  1. Let requesters directly interface with any particular mechanism (as per their own understanding of the mechanism), or
  2. Provide some UX abstraction that facilitates this choice (such as letting requesters specify which QoS metrics / KPIs are important, non-negotiable, etc.), which are then matched with a particular mechanism for their requests.

These can then be enhanced and tuned over time: if a particular auction mechanism is first identified as the best match, but a requester finds over time that particular QoS metrics are not being met, one can either move to a different mechanism or tune that particular instance of the mechanism.

For example, Aztec could then have its own interface, tuned over time via QoS evaluation, and even though the mechanism was originally “general purpose”, several instances of it could later exist for different requesters with different parametrizations within the same marketplace.

For the individual provers in a given marketplace, a similar interface should exist. They should specify what they can commit to, with certain properties matchable to demand-side QoS. This should in theory allow anyone to connect, without caring who is on the other side, without breaking demand or supply constraints, but delivering more targeted (i.e. less lossy) QoS.
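
To make this two-sided interface idea a bit more tangible, here is a minimal sketch (the field names, tiers, and constraints are purely illustrative assumptions, not any existing marketplace’s API):

```python
# Sketch: provers advertise what they can commit to, requesters state what they
# need, and the marketplace only matches pairs whose constraints are compatible.
# All field names and example values are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class ProverCommitment:
    prover_id: str
    hardware_tier: int          # e.g. 1 = laptop, 3 = GPU cluster (hypothetical scale)
    max_latency_seconds: float  # slowest turnaround the prover is willing to commit to
    min_fee: float              # lowest fee the prover will accept

@dataclass
class RequesterRequirements:
    required_hardware_tier: int
    deadline_seconds: float
    max_fee: float

def compatible(p: ProverCommitment, r: RequesterRequirements) -> bool:
    """True if the prover's commitments satisfy the requester's stated QoS needs."""
    return (
        p.hardware_tier >= r.required_hardware_tier
        and p.max_latency_seconds <= r.deadline_seconds
        and p.min_fee <= r.max_fee
    )

def eligible_provers(provers: list[ProverCommitment], req: RequesterRequirements):
    """Neither side needs to know who is on the other end, only that the constraints hold."""
    return [p for p in provers if compatible(p, req)]
```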

Any centralized marketplace will logically do such routing internally anyway: a requester that interfaces with them will likely specify more and more of their QoS metrics over time, allowing the centralized marketplace to provide better service by adapting its internal routing/coordination.

We think that decentralized marketplaces may want to do something similar, even though it is more complex than in a centralized marketplace. Forcing any one particular coordination mechanism on all requesters will likely mean that some are happier than others, with the unhappy requesters moving somewhere else entirely, as any tuning of the one canonical mechanism might make one requester happier while at the same time making another less so.

While a future landscape of proving marketplaces might consist of different entities, each specializing in one mechanism for prover coordination, this risks fragmenting the market rather than making it more competitive. Because each chosen mechanism fits well for one type of requester and less so for others, requesters might face a situation where most proofs come from the one well-fitting marketplace, increasing friction and switching costs, especially when things go wrong. Instead, a healthy and competitive market could optimize quality of service for requesters and provers through a market of mechanisms, lowering costs while ensuring requesters’ specific needs are met better and better over time.

35 Likes

While I think the basic intuition that different proof requesters have slightly different requirements is correct, I think these all converge on essentially the same network design for almost 100% of current commercial use cases.

Let’s look at some of the parameters users may have preferences on: speed, cost, redundancy, censorship resistance.

Speed & cost are primarily a function of resource requirements. Broadly speaking, more resources mean more speed & cost, and vice versa. This can and should be user-configurable.

Redundancy is just replicated work. The relationship with costs is similar: the more redundancy there is, the more machines you are using and the higher the costs are. This also can and should be user-configurable.
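
As a toy illustration of these two knobs (the linear relationships and the numbers below are assumptions, purely for intuition, not measurements):

```python
# Toy model of the two user-configurable knobs above: more resources buy speed at
# higher cost, and N-way redundancy multiplies cost by roughly N.
# The linear relationships and the numbers are illustrative assumptions only.

def estimate(base_cost: float, base_seconds: float, resource_multiplier: float, redundancy: int):
    """Return (expected_seconds, expected_cost) for one proof request."""
    seconds = base_seconds / resource_multiplier          # more resources -> faster
    cost = base_cost * resource_multiplier * redundancy   # more resources / replicas -> pricier
    return seconds, cost

print(estimate(base_cost=0.50, base_seconds=60.0, resource_multiplier=4.0, redundancy=3))
# -> (15.0, 6.0): 4x the resources, 3 redundant provers
```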

Finally, let’s look at censorship resistance. This is a bit tricky, because it’s partially a question of delivery (how likely is it that your workload is distributed in the network) and a question of execution (how likely is it that a given node completes the work). There may be user preference on this layer as there is in any blockchain system, but I think it’s likely that the majority of users have some definition of “enough”, which most such systems need to meet.

In closing, I think most quality-of-service elements which apply to proving marketplaces/networks can and should be implemented in all proving markets for them to be competitive, so this does not lead to a diversity of different kinds of networks with different QoS.

P.S. There are other features to consider, like latency, devex, etc., which I didn’t touch on, but the same dynamic applies.

40 Likes

Hey Teemu. Thank you for your thoughtful response.

I think these all converge on essentially the same network design for almost 100% of current commercial use cases.

This could maybe be true for current use cases, but is less likely to remain true for future use cases.

In general, I’d still disagree and say there’s no design convergence yet, in my opinion. I know of proving “marketplaces” that are implementing various different mechanisms targeting various different levels of hardware requirements, security/verifiability assumptions, etc. For a concrete example, afaik, you’re currently doing a random leader election into a (small) proof race with configurable redundancy, and I talked to one today implementing a first-price auction and not worrying about redundancy within the protocol at all, the logic being that with a sufficient economic bond, provers are sufficiently incentivized to provide out-of-protocol redundancy.

There may be user preference on this layer as there is in any blockchain system, but I think it’s likely that the majority of users have some definition of “enough”, which most such systems need to meet.

We would argue that many Layer 2s are not censorship resistant currently, and that seems fine for them / their users, so market precedent would prove this wrong. It’s therefore likely that the needs placed on proving marketplaces are similarly diversified. The idea is predicated on a lack of clear alignment on what is enough, even amongst L2s.

Let me explain why through some (exaggerated, but hopefully illustrative) examples.


User persona: There is a bot attempting to do arbitrage on Aztec.

Scenario: For the sake of the example, assume Aztec has 12s block times that align with Ethereum’s, i.e. it’s a based-sequenced rollup attempting to go as fast as L1. The bot is specifically generating a simple transaction that goes from a (private) account to a (public) AMM and back into a (private) account. It may take 60s to generate this client-side proof for the account interactions on a standard laptop, but it should be much faster with a dedicated set of proving infrastructure running on larger machines. The bot doesn’t care about privacy, just profit. The bot knows it’s going to make a decent amount of money in this arbitrage if it is facilitated in a timely manner, and so is relatively price insensitive. It also does not necessarily care about safety or censorship resistance, as this is a short-term trade. It either wants the proof by time T, or not at all.

Naively, and articulating the exaggerated case for the sake of example, this set of preferences could look something like the following, based on the bot’s relative sensitivities & QoS definitions. Let’s use the attributes I originally articulated in my tweet response to you and that you articulated in this post, while also acknowledging that there are more attributes or qualities requesters may care about.

Speed: 10/10, Costs: 3/10, Safety: 1/10, Censorship resistance: 1/10

In this case, a requester could effectively define their preferences as: { 10, 3, 1, 1 }

Let’s then look at how different designs provide quality of service.


Design A - election then race

Registration

  1. Provers stake some tokens to register
  2. Provers can specify what hardware they’re running
  3. Requesters can specify their proof’s hardware requirements
  4. (There may or may not be a mechanism to verify that these registrations are consistent)

Usage

  1. Requesters ask for a proof, provide data, and specify N levels of redundancy they would like
  2. Design A facilitates a random leader (prover) election, among those with matching hardware requirements, and elects N provers
  3. These N elected provers now all race to produce proofs
  4. Each of the N elected provers that submits a verified proof is eligible for a time-decaying reward
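
A rough sketch of what the election step in this flow could look like (the data shapes, the hardware-tier filter, and the seeded random sampling are illustrative assumptions, not Design A’s actual implementation):

```python
# Sketch of Design A's usage flow: filter registered provers by the requester's
# hardware requirement, then randomly elect N of them to race for the proof.
# Data shapes, tiers, and the seeded RNG are illustrative assumptions only.

import random

def elect_provers(registered_provers, required_hardware_tier, redundancy_n, seed):
    """Randomly elect up to `redundancy_n` provers among those meeting the hardware requirement."""
    eligible = [p for p in registered_provers if p["hardware_tier"] >= required_hardware_tier]
    rng = random.Random(seed)  # in practice the seed would come from shared / verifiable randomness
    return rng.sample(eligible, k=min(redundancy_n, len(eligible)))

provers = [
    {"id": "prover-1", "hardware_tier": 1},
    {"id": "prover-2", "hardware_tier": 3},
    {"id": "prover-3", "hardware_tier": 3},
]
elected = elect_provers(provers, required_hardware_tier=2, redundancy_n=2, seed=42)
print([p["id"] for p in elected])  # two of prover-2 / prover-3, who then race to submit proofs
```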

Design B - pure race

Registration

  1. Provers stake tokens for sybil detection

Usage

  1. Requesters ask for a proof
  2. All provers may try and produce a proof as soon as it’s in the mempool
  3. Those without sufficient hardware won’t try
  4. All provers that submit verified proofs are eligible for a time-decaying reward
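
Both designs mention a time-decaying reward; a minimal sketch of one possible shape (the exponential curve and half-life are assumptions, neither design prescribes a specific decay function):

```python
# Sketch of a time-decaying reward: the sooner a verified proof lands after the
# request, the larger the payout. The exponential shape and half-life are
# illustrative assumptions; neither design above prescribes a specific curve.

def reward(max_reward: float, seconds_elapsed: float, half_life_seconds: float) -> float:
    """Reward halves every `half_life_seconds` after the request is posted."""
    return max_reward * 0.5 ** (seconds_elapsed / half_life_seconds)

print(reward(100.0, seconds_elapsed=0.0, half_life_seconds=30.0))   # 100.0
print(reward(100.0, seconds_elapsed=30.0, half_life_seconds=30.0))  # 50.0
print(reward(100.0, seconds_elapsed=60.0, half_life_seconds=30.0))  # 25.0
```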

In this case, assuming all else is equal, Design A, which has a random leader election and restricts the race to a given set of participants (who may not have access to the most latency-optimal geolocation, or otherwise), will always be slower than Design B, and will therefore provide a worse quality of service from the example requester’s perspective.
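
To put rough numbers on that intuition, we can score each design against the example requester’s weights. The QoS profiles assigned to each design below are made-up assumptions; only the {10, 3, 1, 1} weights come from the example above.

```python
# Sketch: score each design's (assumed) QoS profile against the arbitrage bot's
# weights from the example above. The profiles are made-up assumptions, purely
# to illustrate how the ranking would fall out for this particular requester.

ATTRIBUTES = ("speed", "cost", "safety", "censorship_resistance")

bot_weights = {"speed": 10, "cost": 3, "safety": 1, "censorship_resistance": 1}

# Assumed profiles: Design A trades some speed for in-protocol structure,
# Design B's open race is assumed faster for latency-sensitive requests.
design_a_election_race = {"speed": 6, "cost": 5, "safety": 7, "censorship_resistance": 6}
design_b_pure_race = {"speed": 9, "cost": 4, "safety": 5, "censorship_resistance": 5}

def fit(weights, profile):
    """Higher score = better match for this requester's stated preferences."""
    return sum(weights[a] * profile[a] for a in ATTRIBUTES)

print("Design A:", fit(bot_weights, design_a_election_race))  # 88
print("Design B:", fit(bot_weights, design_b_pure_race))      # 112 -> better fit here
```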


One simple option to improve Design A and allow it to provide better QoS to our example requester is to allow requesters to specify an election set of 0, which would enact a global FCFS proof race. This, however, is an example of what we are attempting to tease out: a protocol that is more opinionated about its choice of mechanism may have to continuously tweak and modify those mechanisms in weird or unforeseen ways to accommodate changes in this very, very new market in order to keep providing what is generally perceived as “good quality of service”. This is what I believe you’re arguing for, and it was originally articulated as “choose one mechanism and hope that it roughly fits enough requesters’ requirements”, with continuous tuning of this mechanism to “roughly fit more requesters’ requirements”. Rather than attempting to continuously tweak and modify a specific mechanism, this post suggests that projects consider implementing various different mechanisms.


This may map very differently for a transparent rollup (with low CR concerns) that needs guarantees that a proof will be generated within a longer time horizon, for example a rollup settling every 24 hours. Its preferences could be {speed: 3, cost: 2, safety: 10, cr: 1}, and they likely don’t necessitate a race at all.


I could provide another example of a user that is extra sensitive to cost, but not sensitive to other aspects of the specified (example, limited, illustrative, +other disclaimers) preference set, e.g. {speed: 2, cost: 10, safety: 3, cr: 1}. In this case, a design that specifically implements an auction is likely going to provide a relatively better quality of service and generally a better price. Sure, you could tweak the parameterization or mechanism that is articulated in Design A as an election → proof race, i.e. you could restrict the election to provers that are known, or have signaled, to be willing to prove proofs of type X under cost Y. But this never-ending pursuit of tuning one mechanism as customers arrive with different definitions of QoS is what we aim to bring awareness to, so that projects proactively consider and potentially avoid it.
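
For completeness, a minimal sketch of how such an auction could serve the cost-sensitive requester (a reverse first-price auction is assumed here for illustration; this is not a description of any specific marketplace):

```python
# Sketch of a reverse first-price auction for a cost-sensitive requester: provers
# bid the fee they are willing to prove for, the lowest bid wins and is paid its
# bid. Purely illustrative; not a description of any specific marketplace.

bids = {"prover-1": 0.42, "prover-2": 0.35, "prover-3": 0.50}  # fee each prover asks for the job

winner = min(bids, key=bids.get)
print(winner, bids[winner])  # prover-2 wins and is paid its bid of 0.35
```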

23 Likes

Couldn’t agree more, this is exactly the thought process behind how Kalypso is designed. The only real constraint I would say is not fragmenting prover liquidity.

Here’s how things work in Kalypso. The core of the protocol is just a simple assigned-tasks registry with the following responsibilities:

  • prover protection by enforcing capacity limits if specified
  • requester protection by enforcing QoS guarantees if specified through staking/slashing mechanisms

This can be fed by a frontend (in the compiler sense) with tasks that are assigned to provers. Crucially, it can be fed by multiple independent frontends running their own algorithms as long as they respect the capacity limits of the individual provers. While Kalypso currently ships with an orderbook + matching engine frontend, I can easily imagine additional mechanisms like an auction-based frontend feeding it.
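
Very roughly, and with entirely made-up names (this is not Kalypso’s actual interface), the core registry and its capacity-limit check could be imagined like this:

```python
# Sketch of an assigned-tasks registry that multiple independent frontends can
# feed while per-prover capacity limits are enforced. All names are made up for
# illustration and do not reflect Kalypso's actual interfaces.

class AssignedTasksRegistry:
    def __init__(self, capacity_limits):
        # capacity_limits: prover_id -> max number of concurrent tasks the prover committed to
        self.capacity_limits = dict(capacity_limits)
        self.assignments = {prover_id: [] for prover_id in capacity_limits}

    def assign(self, frontend_id: str, task_id: str, prover_id: str) -> bool:
        """Any frontend (orderbook, auction, ...) may assign tasks, as long as
        the targeted prover's capacity limit is respected."""
        current = self.assignments.get(prover_id)
        if current is None or len(current) >= self.capacity_limits[prover_id]:
            return False  # reject: unknown prover or capacity exhausted
        current.append({"task": task_id, "assigned_by": frontend_id})
        return True

registry = AssignedTasksRegistry({"prover-A": 2, "prover-B": 1})
print(registry.assign("orderbook_frontend", "task-1", "prover-A"))  # True
print(registry.assign("auction_frontend", "task-2", "prover-A"))    # True
print(registry.assign("auction_frontend", "task-3", "prover-A"))    # False: capacity of 2 reached
```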

Of course, given the decentralized nature of Kalypso, there would be a small number of conflicts with multiple frontends at really high utilization, but even here the impact can be kept to a minimum with some creative designs like partial resolutions and backup assignees.

IMO this is not a binary choice; we can and should try to do both. I see this as analogous to the intents debate, where (1) is direct execution of a particular mechanism and (2) captures the intent and tries to find an appropriate mechanism (or a combination) to execute it. And from this perspective, (2) would be something that is built on top of (1).

21 Likes

To be clear: It is quite possible that for a good while (and maybe forever!) cost per proof will be the determining quality for all requesters (or at least enough of them to facilitate a competitive marketplace).
Yet, at some point this quality might be fulfilled “enough” for most requester types. When costs are lowered and new use cases become feasible, will they all care exactly the same about cost, CR, various interpretations/metrics of decentralization, latency, redundancy, and whichever other requirements and preferences might surface? Can the mechanism you chose (which might have large switching costs for your marketplace if assumed to be the mechanism) facilitate the best experience for all your requesters? Maybe you could even lower costs further for many of the requesters if you could choose different trade-offs for them, giving more requesters a better service and being able to sustainably pay more provers for their compute.
This might boil down to a (here simplified) assumption: if the overhead (cost of complexity) of enabling more mechanisms is higher than the increased revenue/savings from less friction (for different types of requesters, also allowing them to request more proofs), then it might not be a good idea for you to plan for more (economic) mechanisms. Similarly, if you expect the proof requester landscape to look very monolithic, with all requesters caring about cost and otherwise not differentiating much on other “quality of service” attributes, then again it makes sense to choose the one mechanism that achieves exactly that.
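
In pseudo-code, that simplified rule is just a comparison (the variables below are placeholders, not quantities anyone has measured):

```python
# Placeholder formalization of the simplified assumption above: add more
# mechanisms only if the friction they remove is worth the complexity they add.
# These variables are placeholders, not quantities anyone has measured.

def worth_adding_mechanisms(extra_revenue_and_savings: float, complexity_cost: float) -> bool:
    return extra_revenue_and_savings > complexity_cost
```
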
In the end, a future decentralized proof marketplace landscape should be able to sustainably maintain a competitive market. This might mean lowering costs as the absolute principle, but it could also mean facilitating the matching of a more diverse group of requesters and provers.

22 Likes