To be clear: it is quite possible that for a good while (and maybe forever!) cost per proof will be the decisive quality attribute for all requesters (or at least for enough of them to sustain a competitive marketplace).
Yet at some point this attribute might be fulfilled “enough” for most requester types. Once costs come down and new use cases become feasible, will all requesters weigh cost, cr, the various interpretations/metrics of decentralization, latency, redundancy, and whatever other requirements and preferences surface in exactly the same way? Can the mechanism you chose - one that may carry large switching costs for your marketplace once it is assumed to be *the* mechanism - deliver the best experience for all your requesters? Perhaps you could even lower costs further for many requesters by choosing different trade-offs for each of them, giving more requesters a better service and thereby being able to sustainably pay more provers for their compute.
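To make the "different trade-offs for different requesters" point concrete, here is a minimal sketch (all names, weights, and numbers are hypothetical, purely for illustration): each requester type weights service attributes differently, so a single mechanism rarely scores best for everyone.

```python
from dataclasses import dataclass

# Hypothetical sketch: requester types weight attributes differently,
# so no single mechanism is optimal for all of them.

@dataclass
class Mechanism:
    name: str
    cost: float              # normalized 0..1, lower is better
    latency: float           # normalized 0..1, lower is better
    decentralization: float  # normalized 0..1, higher is better

def score(m: Mechanism, weights: dict) -> float:
    """Higher is better; cost and latency enter negatively."""
    return (-weights["cost"] * m.cost
            - weights["latency"] * m.latency
            + weights["decentralization"] * m.decentralization)

# Illustrative mechanisms and requester types (made-up numbers)
mechanisms = [
    Mechanism("reverse_auction", cost=0.2, latency=0.7, decentralization=0.6),
    Mechanism("posted_price",    cost=0.5, latency=0.2, decentralization=0.4),
]

requester_types = {
    "cost_sensitive_rollup": {"cost": 0.8, "latency": 0.1, "decentralization": 0.1},
    "latency_sensitive_app": {"cost": 0.2, "latency": 0.7, "decentralization": 0.1},
}

for rtype, w in requester_types.items():
    best = max(mechanisms, key=lambda m: score(m, w))
    print(rtype, "->", best.name)
```

With these made-up numbers, the cost-sensitive type is best served by the auction while the latency-sensitive type prefers the posted price, which is exactly the heterogeneity argument above.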
Ultimately, this might boil down to a (here simplified) assumption: if the overhead (the cost of added complexity) of supporting more mechanisms exceeds the additional revenue/savings from reduced friction (across different requester types, which would also let them request more proofs), then planning for more (economic) mechanisms is probably not a good idea for you. Similarly, if you expect the proof-requester landscape to look very monolithic - everyone caring about cost and otherwise barely differentiated on other “quality of service” attributes - then it again makes sense to choose the one mechanism that optimizes exactly for that.
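The simplified assumption above is a break-even comparison. As a sketch (function name and all figures are hypothetical), the decision reduces to checking whether marginal revenue from reduced friction exceeds the ongoing complexity overhead:

```python
# Hypothetical break-even sketch for the simplified assumption above:
# add a second mechanism only if the extra revenue from better-served
# requester types exceeds the ongoing complexity overhead.

def worth_adding_mechanism(extra_revenue_per_month: float,
                           complexity_overhead_per_month: float) -> bool:
    """True if the marginal revenue outweighs the cost of
    maintaining an additional mechanism."""
    return extra_revenue_per_month > complexity_overhead_per_month

# Illustrative numbers: 10k extra proofs/month at 0.5 margin each,
# against 8k/month of added complexity overhead.
print(worth_adding_mechanism(10_000 * 0.5, 8_000))  # -> False (5_000 < 8_000)
```

The interesting part is not the one-line comparison but what goes into each side: the revenue term includes second-order effects (requesters requesting more proofs once better served), and the overhead term includes switching costs and the long-term maintenance burden of each extra mechanism.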
In the end, a future decentralized proof marketplace landscape should be able to sustainably maintain a competitive market. That might mean treating lower cost as the absolute principle, but it could also mean facilitating matches between a more diverse set of requesters and provers.