Account guardians, EIP1271, and recursive proofs

To stress-test the current Account Abstraction design, we want to implement some new use cases. We recently built a TouchID account contract as part of an offsite, and it’d be nice to tackle a social recovery scenario next. For this, we can take a page out of Argent’s Guardians feature, as described by Julien:

[…] the original Argent implementation of guardians on L1 (e.g. for social recovery). On L1 guardians are accounts, i.e. they can be EOA or smart contracts when the guardian is another Argent user. And these guardians approve actions by providing a signature (so that they don’t have to pay for their guardian duty). However, that model of guardians is not working with 4337 […] since you cannot access the storage of another contract during validation making it impossible to verify the signature of a guardian contract (i.e. you cannot call is_valid_signature on the guardian).

In other words, we’d need to implement something like EIP1271 on our account contracts, so an account can “vouch” for something without having to actively submit a tx. This way:

  • Guardians sign a message that “the signing key for account 0x123 should be changed to pub2”
  • A tx is submitted to account 0x123 with the signatures
  • Account 0x123 validates each signature by asking the respective guardian account contract
  • The account updates its pubkey
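A minimal Python sketch of the flow above, under the (big) assumption that each guardian account exposes an EIP1271-style is_valid_signature; all names and the fake signature scheme are purely illustrative:

```python
from dataclasses import dataclass

@dataclass
class GuardianAccount:
    """Hypothetical stand-in for an account contract that can 'vouch'
    for a message via an EIP1271-style is_valid_signature check."""
    signing_key: str  # whatever scheme this guardian chose

    def is_valid_signature(self, message: str, signature: str) -> bool:
        # Real contracts run their own scheme (ECDSA, TouchID, multisig...).
        # Here we fake it: a valid signature is "sign(<key>,<message>)".
        return signature == f"sign({self.signing_key},{message})"

@dataclass
class Account:
    pubkey: str
    guardians: list  # list of GuardianAccount

    def recover(self, new_pubkey: str, signatures: list) -> bool:
        message = f"change key of this account to {new_pubkey}"
        # Ask each guardian account to vouch for the message (EIP1271
        # style), instead of each guardian sending its own tx.
        approvals = sum(
            1 for g, sig in zip(self.guardians, signatures)
            if g.is_valid_signature(message, sig)
        )
        if approvals == len(self.guardians):
            self.pubkey = new_pubkey
            return True
        return False

g1, g2 = GuardianAccount("k1"), GuardianAccount("k2")
acct = Account(pubkey="pub1", guardians=[g1, g2])
msg = "change key of this account to pub2"
ok = acct.recover("pub2", [f"sign(k1,{msg})", f"sign(k2,{msg})"])
```

The key property is that guardians only produce offchain signatures; the single submitted tx does all the verification.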

However, there’s a complication: if the signing key for an account is private, how can the sender of the tx create a proof that includes it, without having access to the decryption keys of the guardian? We have a few options, some of them good, some of them not so much:

1. Guardians share their decryption keys with tx sender

Given decryption keys are scoped by contract, they’d just be revealing their signing public key to the tx sender (typically the user behind 0x123). This is probably the easiest way out, but sharing private keys is never a nice approach. It’s also interactive, and not well-suited for other EIP712-like scenarios.

2. Store signing public keys unencrypted

Another easy way out: if the keys are not encrypted, you don’t need decryption keys to read them (I believe a meme with this template should go here). In general, less private state makes composition easier. And we could argue that the signing pubkey is not something that needs to be kept private. But we can do better.

3. Each guardian sends an individual tx

Similar to how the original Gnosis MultisigWallet worked: instead of each owner signing an offchain message, they actually submit a tx that stores their intent on-chain, which can be executed once the threshold is reached. This works with the current design: each guardian sends a tx from their account to 0x123’s, creating a private note encrypted for 0x123 that states that they validate a pubkey change.

Note that the “encrypted for 0x123” could be a problem if the user behind that account has lost their keys. Maybe we need to support encrypting with an arbitrary pubkey in that scenario?
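The threshold bookkeeping in option 3 can be sketched like this (Python; all names are hypothetical, and “on-chain” state is just an object here):

```python
class RecoveryByIntents:
    """Each guardian submits its own tx recording an intent on-chain;
    the key change executes once the threshold is reached."""

    def __init__(self, guardians, threshold):
        self.guardians = set(guardians)
        self.threshold = threshold
        self.intents = {}   # new_pubkey -> set of guardians who approved
        self.pubkey = None

    def approve(self, guardian, new_pubkey):
        # Each call models one guardian's individual tx.
        assert guardian in self.guardians, "not a guardian"
        self.intents.setdefault(new_pubkey, set()).add(guardian)

    def execute(self, new_pubkey):
        # Anyone can trigger execution once enough intents are stored.
        if len(self.intents.get(new_pubkey, set())) >= self.threshold:
            self.pubkey = new_pubkey
            return True
        return False

rec = RecoveryByIntents(["g1", "g2", "g3"], threshold=2)
rec.approve("g1", "pub2")
first = rec.execute("pub2")   # below threshold, nothing happens
rec.approve("g2", "pub2")
second = rec.execute("pub2")  # threshold reached, key changes
```

The cost is one tx per guardian, which is exactly what the signature-based options try to avoid.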

4. Recursive ZKPs

EIP1271 reuses the protocol’s own mechanism (signature verification) at the app level to check a claim by a user. Why not do the same? Guardians could simply share a ZKP that they have signed a message, and then the single submitted tx would recursively verify all these ZKPs before making any changes.

However, this means we need a primitive for verifying a ZKP from a Noir contract. This is probably not the same as verifying a ZKP in vanilla Noir, since this ZKP will need access to state (e.g. the guardian’s current signing key).

It’s not clear to me whether the ZKP submitted by the guardian would be an instance of an App Circuit, would need to be wrapped in a Kernel Circuit, or could be an arbitrary one. My gut feeling is that, if we need to prove things about the state of the chain, we probably need something very similar to a Kernel Circuit proof, but it needs to somehow specify that it is an “offchain” proof and not an actual tx that can be submitted.
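Setting the circuit question aside, the shape of option 4 can be sketched as follows; verify_proof is a stand-in for the missing verification primitive, not a real API:

```python
def verify_proof(proof, claim) -> bool:
    # Stand-in for the missing primitive: verifying a guardian's ZKP
    # from within a contract. Here a "proof" is just a dict asserting
    # which claim it proves; real verification happens in-circuit.
    return proof.get("claim") == claim

def recover_with_proofs(account, new_pubkey, guardian_proofs):
    claim = f"approve key change of {account['address']} to {new_pubkey}"
    # The single submitted tx recursively verifies every guardian ZKP
    # before making any state change.
    if guardian_proofs and all(verify_proof(p, claim) for p in guardian_proofs):
        account["pubkey"] = new_pubkey
        return True
    return False

account = {"address": "0x123", "pubkey": "pub1"}
claim = "approve key change of 0x123 to pub2"
ok = recover_with_proofs(account, "pub2", [{"claim": claim}, {"claim": claim}])
bad = recover_with_proofs(account, "pub3", [{"claim": "something else"}])
```

Guardians never submit anything; they just hand over proofs out of band, and all verification collapses into one tx.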



Great write up. I think we don’t have this limitation, but I may be misunderstanding.

Would the following flow work?

  1. Store the state of the guardians in the user’s account contract, encrypted for the USER, e.g. in a mapping.
  2. Define two entrypoint functions: one for authorising wallet transactions, execute_payload, and one for changing signing keys, update_signer.
  3. execute_payload needs a signature from the user’s private key.
  4. update_signer will pass if there are sufficient signatures from guardians. This function can access the contract’s storage and check signatures against the guardians mapping. Guardians would send messages offchain, and the user would construct a local zkp to ensure privacy and send it to update_signer.

I think this is the issue: you’re restricting guardians to use a specific signature scheme, and to store a copy of their keys in your contract. So you’re appointing keys as your guardians, not accounts.

Ideally you’d want to leverage whatever account abstraction scheme each individual guardian has chosen for themselves, so you could have a guardian that uses TouchID, another that uses Ethereum sigs, and another who’s actually a multisig, all that being transparent to your contract.


I really like the idea around recursive ZKPs with the account contracts, as it will also allow us to have something like depositing funds into defi contracts from the private domain WITHOUT needing approvals :eyes:.

Recall that funds can be spent if you can generate a valid proof, and only someone who knows the notes and can satisfy the account contract can make such a proof.
This means that a transferFrom function could practically take from, to, amount, proof and do a staticcall to from to see if the proof is valid for a transfer(to, amount) function call. If it is, do the transfer and go back to the execution you were in the middle of; otherwise the entire thing reverts.

If an oracle is used to generate the proof, dapps could look quite similar to what people expect of solidity dapps, but without needing to deal with lingering approvals :tada:

// Private function to deposit into X
fn deposit(asset: Address, amount: u120) -> ... {
  // Perform input validation

  // Encoded function signature and args
  let transfer_call = ...;
  // Oracle call to get a proof.
  // The oracle could either generate it or pass it on
  let proof = oracle.get_proof(asset, transfer_call);
  // Transfer the assets into this contract,
  // e.g. a transferFrom on the asset that checks the proof via staticcall

  // Some accounting logic in this contract.
  // The actual defi stuff.
}
I love that as an oracle call, makes the code so much easier to write and to understand. And it’s the perfect use case for an oracle, literally asking a magic being “hey, get me a valid value for this”.

I’m wondering how a wallet would best handle these proof requests. It’d be nice if a full simulation could be run just collecting these proof requests, and then the wallet presents all requests in batch to the user. But this requires bypassing proof authentication checks during this initial simulation run, which may need a new keyword to support…?

Alternatively, the wallet could generate these proofs just to get the simulation running, and not broadcast them (or integrate them into the main proof) until the user has actually greenlighted it. But it makes me nervous to first use the user’s keys and then ask. Anyway, this is not something we need to decide now.


Say that there are two modes that you can run the execution in:

  • logic, which doesn’t do any proving but just executes the logic (essentially current brillig, such that proof generation is bypassed locally?).
  • prove, where you are building the proofs.

When you are figuring the tx out and showing its effects to the user, you would run it in logic mode, and any call made to an account contract controlled by your wallet to check isValid (or whatever the function name will be) can be mocked to just return true. That way you can simulate the thing as if the user said yes to everything. If the user accepts the effects of the transaction, he then has a batch of requests to accept, and he will know what calls each of the requests makes etc.
If accepting all, you start actually building the proof. Up until that point you were just “playing”, and it should be super quick, hence you don’t need to do any actual proof yet.
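A toy version of those two modes, with isValid mocked in logic mode and proof requests collected for the user to approve (all names invented):

```python
class Simulator:
    """Runs contract calls in one of two modes:
    - 'logic': no proving; isValid is mocked to True and the request
      is collected so the wallet can show a batch to the user.
    - 'prove': only previously approved (account, payload) pairs pass."""

    def __init__(self):
        self.mode = "logic"
        self.proof_requests = []
        self.approved = set()

    def is_valid(self, account, payload):
        if self.mode == "logic":
            # Record what would need a proof, and pretend the user said yes.
            self.proof_requests.append((account, payload))
            return True
        # In prove mode, only pass for requests the user greenlighted;
        # a real wallet would build the actual proof here.
        return (account, payload) in self.approved

sim = Simulator()
dry = sim.is_valid("0x123", "transfer(bob, 100)")  # dry run, mocked True
# User reviews sim.proof_requests and accepts the whole batch:
sim.approved = set(sim.proof_requests)
sim.mode = "prove"
real = sim.is_valid("0x123", "transfer(bob, 100)")
```

The dry run is cheap because nothing is proven; proving only starts once the user has seen and accepted the full batch.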


Do you envision this to be defined at the Noir level, or by the wallet itself? In other words: would the isValid function in the account contract implementation have an onlyProve decorator? Or would the wallet know which functions of its own account contracts are meant to verify proofs, and just swap those out when running the simulator?

After writing this I think that what makes the most sense is going with the latter.


Just the simulator, you don’t need to change anything in the contract for it.
You could probably do it similarly to eth_call, where you can use state overrides to insert different code in place of the account contract for the simulation.

You might want to replace the entire thing, as you can then do the simulation without needing to do any signatures etc (validation of the entrypoint just replaced with an “all good” as well).


This is a fantastic suggestion.

I tried to adapt the syntax below for guardians, as the proof would be generated off-chain on another device. To facilitate this, I think we will need a way to do a foreignCall inside the contract, or at a minimum mutate the context.

Flow for recovery:

  1. Guardian gets a request off-chain for recovery and generates a proof locally for the user’s account contract’s recoverToBackupKey method (maybe via some RPC method, as @LasseAztec mentioned).

    This proof will update the spending key in the user’s account to a pre-determined key. The proof is generated locally and passed to the user via a secure channel.

  2. The user will call the recovery method on their account contract (which has no auth and just needs a satisfying proof). They will supply the proof received from the guardian as an input, which should let the simulation run locally and successfully call out to the guardian’s contract entrypoint to forward the request on.

  3. The user will generate a proof including the foreign proof of the above and be able to submit their tx.

fn recovery(proof: proof, guardian: Field) -> ... {

  // we may need a foreign context to avoid overloading functions with who the caller is.
  let foreignContext = {
    sender: guardian
  };
  // call the guardian's account contract with a payload via 'foreignCall';
  // this will call back to this contract and execute `recoverToBackupKey`.
  Account::at(guardian).entrypoint.foreignCall(foreignContext, proof, payload?);
}

I think the foreignCall is a bad idea, since it essentially passes control flow over to a user-provided contract inside any defi contract if used for the transfer case mentioned above. Also, if you wanted that, could you not just use the entryPoint directly instead?

The nice thing with isValid is that you can do it as a static call from tokens, such that control is not passed along.
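The point about static calls can be illustrated with a toy token: the account’s validation code runs, but it is not allowed to write state, so control is never really handed over (the staticcall emulation and all names are hypothetical):

```python
class AccountContract:
    def __init__(self, key):
        self.key = key

    def is_valid(self, proof, call):
        # Read-only validation: the proof must cover exactly this call.
        # Proofs are faked as tuples for the sketch.
        return proof == ("signed", self.key, call)

def transfer_from(balances, frm, to, amount, proof, accounts):
    account = accounts[frm]
    # Emulate a staticcall: the account's code runs, but we check that
    # it did not mutate its own state, so no control is handed over.
    snapshot = dict(account.__dict__)
    ok = account.is_valid(proof, ("transfer", to, amount))
    assert account.__dict__ == snapshot, "staticcall must not write state"
    if not ok:
        raise ValueError("invalid proof for this transfer")
    balances[frm] -= amount
    balances[to] = balances.get(to, 0) + amount

balances = {"alice": 100}
accounts = {"alice": AccountContract("k1")}
proof = ("signed", "k1", ("transfer", "bob", 40))
transfer_from(balances, "alice", "bob", 40, proof, accounts)
```

Contrast with foreignCall, where the guardian’s entrypoint would execute arbitrary code in the middle of the token’s flow.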


How are we thinking of preventing replays of these proofs? If we validate them using a static call, then we cannot include a nonce in them and flag it as consumed. Or should it be the responsibility of the caller to track those nonces?


In the cases where I expect you to use isValid, the caller and the owner are the same entity, so I don’t think replaying his own proof is a big issue. The proof is only used in private functions, so it would never be seen by anyone else who might wanna replay it.


But how about the “guardians” scenario, where the caller is the owner of the account to be recovered, who collected the proofs from the guardians? In that case, isValid gets called on the accounts of the guardians, who are not the callers.

Should we have a different method for that, or relax isValid to also handle these scenarios?


Should we have a different method for that, or relax isValid to also handle these scenarios?

Relaxing isValid makes it almost useless in my eyes, as it becomes insecure as hell. If you want the case of passing on control flow, you should use another function; why not just call the entrypoint?


Changing the interface a little: we don’t actually need to pass in a proof for the isValid function to be executed; it could just use the oracle in there. Meaning that we only need to check the transfer_call, following my example from earlier. This makes it independent of the specific implementation of isValid in the given account contract, so it also makes it easier to build the integrations.

@spalladino, if we have transient storage, I think we can do something neat to make sure that an is_valid you gave to someone else cannot be replayed.

As part of the payload at the entrypoint we have a nonce; if we store this nonce in transient storage, it can be part of the input to the validation in is_valid, meaning that it won’t be replayable. If you need it to be set by someone else, you could potentially have a function that is not static where the nonce is set, etc.
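A sketch of that nonce scheme: the (non-static) entrypoint consumes the nonce and stashes it in transient storage, while the static is_valid only checks that the proof commits to it (all names hypothetical, proofs faked as tuples):

```python
class AccountWithNonces:
    def __init__(self, key):
        self.key = key
        self.consumed = set()        # persistent: nonces already used
        self.transient_nonce = None  # "transient storage" for this tx

    def entrypoint(self, nonce):
        # The entrypoint is not a static call, so it may write state:
        # it consumes the nonce and exposes it for this tx only.
        if nonce in self.consumed:
            raise ValueError("nonce already consumed")
        self.consumed.add(nonce)
        self.transient_nonce = nonce

    def is_valid(self, proof, call):
        # Static (read-only): the proof must commit to this tx's nonce,
        # so the same proof cannot be replayed under a fresh nonce.
        return proof == ("signed", self.key, call, self.transient_nonce)

acct = AccountWithNonces("k1")
acct.entrypoint(nonce=7)
call = ("transfer", "bob", 10)
ok = acct.is_valid(("signed", "k1", call, 7), call)

replayed = False
try:
    acct.entrypoint(nonce=7)  # second use of the same nonce fails
except ValueError:
    replayed = True
```

The split keeps is_valid safe to call via staticcall while still making each proof single-use.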