L1->L2 Messaging Pitfall

With the TokenBridge example, suppose I do something like this:

l2Recipient = 0x1234
amount = 0x9999
secretHash = computeSecretHash("foo")

tx1 = l1Bridge.depositToAztecPublic(l2Recipient, amount, secretHash)
// send another message with the exact same content
tx2 = l1Bridge.depositToAztecPublic(l2Recipient, amount, secretHash)

Only one of the messages will be consumable, because the message nullifiers end up being identical. After consuming one, attempting to consume the second one fails with:

Assertion failed: L1-to-L2 message is already nullified '!self.nullifier_exists(nullifier, self.this_address())'

It seems L1->L2 messaging is relying on the secretHash being different each time, but it doesn’t seem reasonable to rely on the end user to always provide a unique value.
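
A minimal sketch of the collision (the hashing here is an illustrative stand-in, not Aztec's actual scheme, and all names are placeholders):

import { createHash } from "crypto";

// Illustrative stand-in for the field hashing Aztec actually performs.
const hashFields = (...fields: string[]): string =>
  createHash("sha256").update(fields.join("|")).digest("hex");

const l2Recipient = "0x1234";
const amount = "0x9999";
const secretHash = hashFields("foo"); // stand-in for computeSecretHash("foo")

// Both deposits hash identical inputs, so the resulting message hashes
// (and the nullifiers derived from them) are the same.
const msg1 = hashFields(l2Recipient, amount, secretHash);
const msg2 = hashFields(l2Recipient, amount, secretHash);
console.log(msg1 === msg2); // true -> only one message is consumable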

This seems like a potentially painful gotcha, unless I’m missing something obvious.

Assuming this is actually a problem:

The simplest solution would be to add a unique value to the message in the TokenBridge contract, but that relies on all bridge contract developers knowing about this pitfall and planning for it. Considering it was missed in the example contract, that seems likely to cause problems for someone in the future.

I think a more appropriate solution would be to have the L1 Inbox contract add something unique to the message to help ensure the uniqueness of the nullifier. The most straightforward method would probably be to use the totalMessagesInserted counter value that already exists, doing something like:

// combine `totalMessagesInserted` with the original `contentHash`
contentHash = sha256ToField(abi.encode(totalMessagesInserted, contentHash));
// add `totalMessagesInserted` to the event log 
emit MessageSent(<existing values>, totalMessagesInserted);
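
To illustrate the effect (a rough TypeScript sketch with a stand-in hash, not the real sha256ToField), mixing the counter into the content hash makes otherwise-identical deposits produce distinct hashes:

import { createHash } from "crypto";

// Stand-in for sha256ToField: hashes its inputs to a hex string.
const sha256ToFieldSketch = (...fields: (string | number)[]): string =>
  createHash("sha256").update(fields.join("|")).digest("hex");

const contentHash = sha256ToFieldSketch("0x1234", "0x9999", "someSecretHash");

// Same content, different counter values at insertion time:
const unique1 = sha256ToFieldSketch(0, contentHash); // totalMessagesInserted = 0
const unique2 = sha256ToFieldSketch(1, contentHash); // totalMessagesInserted = 1
console.log(unique1 !== unique2); // true -> the downstream nullifiers differ too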

Does this seem reasonable? I can open a github issue & potentially send a PR if folks agree.

Good catch!

This is actually mentioned in this section of the docs.

This is only required when bridging tokens publicly. When consuming the message on L2 privately, the combination of secret_hash_for_redeeming_minted_notes and secret_for_L1_to_L2_message_consumption should make the nullifier unique. Here is the relevant source code.
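
In other words (a hedged sketch with placeholder hashing and names, not the actual Noir nullifier computation), the private-flow nullifier depends on both user-supplied secrets, so reusing only one of them does not by itself force a collision:

import { createHash } from "crypto";

const h = (...xs: string[]): string =>
  createHash("sha256").update(xs.join("|")).digest("hex");

// Placeholder values for the two user-supplied secrets.
const secretHashForRedeemingMintedNotes = h("redeem-secret");
const secretForConsumptionA = h("consume-secret-1");
const secretForConsumptionB = h("consume-secret-2");

// As long as at least one of the two inputs differs, the nullifiers differ.
const nullifierA = h(secretHashForRedeemingMintedNotes, secretForConsumptionA);
const nullifierB = h(secretHashForRedeemingMintedNotes, secretForConsumptionB);
console.log(nullifierA !== nullifierB); // true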

Thanks for your reply. I missed that bit in the docs, and I’m now struggling to understand what exactly it means – does it mean the index value here should be included in the nullifier computation, and there’s a bug in the implementation because it’s not doing that?

For the private claim, while there is potentially another source of uniqueness here, I can easily see a lazy wallet implementation, or a user just trying to save mental effort on key management, still ending up with duplicate values. Considering the failure condition here (for both private & public) is likely tokens that are permanently locked in the bridge and unclaimable on L2, it would be great to add a guardrail by having the Aztec Inbox add some uniqueness.


Posting this for visibility for everyone else:

L1 to L2 messages also have a leaf_index, which is an increasing number, so you can use the same secret and still consume such messages.

E.g. consuming a message publicly on L2 here:

context.consume_l1_to_l2_message(
    content_hash,
    secret,
    storage.portal_address.read_public(),
    message_leaf_index
);

The nullifier computed above uses the leaf_index.

How do you get the message_leaf_index?
Calling node.getL1ToL2MessageMembershipWitness(startIndex) in aztec.js may not give you the appropriate index. The simulator doesn't know which indices are consumed vs. not, so it can only give the index it knows about (hence the startIndex offset).

This is why, if you fetch the index from this method, you may get a "nullifier consumed" error (it can return an index that was already used with the secret).

So you can get your appropriate leaf_index from the L1 Inbox contract.

The Inbox emits a MessageSent(message, index, tree) event. As of Oct 2024, this index is not exactly the same as the index needed on L2. This will be fixed here. Each tree has a constant size, so the index you want is message_leaf_index = index + tree * size (where size is the constant size of each tree).
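
A small sketch of that arithmetic (the constant name and value below are placeholders; use the actual per-tree size from the protocol constants):

// Placeholder: the real constant lives in Aztec's protocol constants.
const MESSAGE_TREE_SIZE = 1n << 4n; // assumed per-tree size, illustrative only

// `index` and `tree` as read from the Inbox's MessageSent event.
function messageLeafIndex(index: bigint, tree: bigint): bigint {
  // Global leaf index = index within the tree + (tree number * constant tree size).
  return index + tree * MESSAGE_TREE_SIZE;
}

console.log(messageLeafIndex(3n, 2n)); // 3 + 2*16 = 35n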
