An M of N Bitcoin Multisig Scheme

erik aronesty
Aug 28, 2018

Currently, the leading proposal for a multisignature scheme is an M of M scheme. I would propose an additional protocol that provides for a threshold M of N scheme that has utility in circumstances where key loss is a concern.

The scheme causes signing to be interactive. But the security guarantees and the ability to alter the landscape of share participants make up for this shortcoming in certain use cases.

I’m going to assume that G is a DLP-hard group of some kind, and that curve operations are done using elliptic-curve terminology. I.e., points can be added, and the generator is multiplied by a private key to get a public key.

Terminology:

G: the agreed-upon group (secp256k1)
M: the number of participants required to sign
N: the total number of participants (N ≥ M)
H: a hash function of some kind (SHA3)
P: the order of the group G

The public parameters “G”, “M” and “N” must be agreed upon by all participants. If not, consensus verification will fail.
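
As a concrete baseline for the sketches in this article, one might pin the parameters down like this. These values are just an example: the secp256k1 group order for P, SHA3-256 reduced mod P for H, and a 2-of-3 setup.

import hashlib

# assumed example parameters, not mandated by the scheme itself
P = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141  # order of secp256k1
H = lambda *parts: int.from_bytes(hashlib.sha3_256(b''.join(parts)).digest(), 'big') % P
M, N = 2, 3   # example: a 2-of-3 scheme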

M of M Scheme

I’m going to begin with a description of the M of M scheme. Extending it to M of N requires an application of “secret redistribution”, which is well reviewed and proven not to leak information about the underlying secrets. I will provide an explanation of how to use it at the end.

To initiate a multisig scheme, each of the first “M” participants generates a secure random number “xi”. Then each participant “i” publishes the corresponding public key, G*xi.

In order to prevent one member from publishing “after” receiving the others, and choosing his key to influence x, the message transmitting the public key is signed using the sender’s private share. Standard Schnorr signing can be used here, and must be verified to prevent a rogue key attack.
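
A minimal sketch of one participant’s setup, assuming the placeholder parameters above. Note that coincurve’s built-in ECDSA sign() is used here only as a stand-in for the Schnorr proof-of-possession described in the text.

import secrets
import coincurve

xi = secrets.randbelow(P - 1) + 1                      # private share xi
priv_i = coincurve.PrivateKey(xi.to_bytes(32, 'big'))
pub_i = priv_i.public_key.format()                     # G*xi, compressed, to be published
announcement = pub_i + priv_i.sign(pub_i)              # signed so the others can verify it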

Most threshold schemes require share verification. MuSig, by comparison, does not require this step, but the downside is that participants cannot be trivially added or revoked, and as a result whole conferences are held to resolve an issue that would otherwise be simple and secure. It’s far better to have a flexible, simple, and easy-to-verify scheme than a complex one that protects against one kind of bad code.

Each party interprets the public keys as shares in a Shamir sharing scheme. Put simply, Lagrange coefficients are calculated using Xi = H(G*xi) as the evaluation points. The coefficients are ordered based on the sort order of the Xi.
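
As a sketch, if the published compressed points are collected in a list pubkeys (a hypothetical name), the evaluation points might be derived like this, using the placeholder H and P from above:

# Xi = H(G*xi), reduced mod P, one evaluation point per published key
Xis = sorted(H(pub) for pub in pubkeys)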

# Lagrange coefficients for interpolation at zero, mod the group order P
coeff = {}
for i1 in sorted(Xis):
    coeff[i1] = 1
    for i2 in Xis:
        if i2 == i1:
            continue
        diff = (i2 - i1) % P
        inv = pow(diff, -1, P)          # modular inverse of (i2 - i1)
        val = i2 * inv % P
        coeff[i1] = coeff[i1] * val % P

Each published point is multiplied by its coefficient, and the results are summed. That sum is the “bitcoin public key”: the key that these participants are collectively signing for.

import coincurve

vals = []
for share in shares:
    point = coincurve.PublicKey(share.data)
    # coincurve expects the scalar (the Lagrange coefficient) as 32 big-endian bytes
    s = point.multiply(coeff[share.index].to_bytes(32, 'big'))
    vals.append(s)
point_sum = coincurve.PublicKey.combine_keys(vals)

So now we have an M of M public key. Nobody can possibly know the private key, and holding M-1 shares gives you zero knowledge of the implied private key “x”. In addition, there is no meaningful way share participants can influence the resulting implied key “x” without also solving the discrete log problem for G*x.

We can call this G*x, or “the public key”, in the signing scheme below.

Signing:

The participants in the scheme can now generate Schnorr-style multisignatures.

When signing, each participant “i” rolls a random nonce “ki” and publishes G*ki, signing the announcement with “xi*ki” as the private key. After verifying that each announcement’s signature is valid, the participants must then perform the same coefficient multiplication to produce G*k. Again, a weak G*k cannot be produced by M-1 participants without solving the DLP: even someone who knows all of the other G*ki values and waits to publish their own would still need to solve the discrete log problem for G*k in order to bias it.

There is no significant loss of security if a set of pregenerated random G*ki values are published during the original public key establishment ritual. In that way, an offline signer manually given an iteration number can, at a later time, produce a valid signature.
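
A sketch of what pre-generation might look like on the signer’s side (the batch size of 100 is arbitrary; the ki scalars stay with the offline device, and only the points are published at setup):

import secrets
import coincurve

# hypothetical batch of 100 one-time nonces, indexed by iteration number
nonces = [secrets.randbelow(P - 1) + 1 for _ in range(100)]
nonce_points = [coincurve.PrivateKey(k.to_bytes(32, 'big')).public_key.format()
                for k in nonces]   # published; the nonces themselves remain offline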

R = G * k
e = H(R, m)
si = ki - e * xi

Here R is the interpolation of the published G*ki values, m is the message being signed, and all scalar arithmetic is mod P.
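
In code, a single participant’s contribution might look like the following sketch, where R is the combined nonce point (as bytes) and msg is the message, both assumed to be known to everyone:

e = H(R, msg)            # the same challenge for every participant
si = (ki - e * xi) % P   # this participant's signature share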

The signature shares are the pairs (si, e), and, after interpolation, the combined signature is (s, e).

It’s clear that a single participant cannot choose a “weak signature”. In the proposed Shamir scheme, the value of M-1 shares has zero impact on the entropy contained in the final result. The same logic applies to the “implied k” of the signature.

Verification:

The signature shares must also be subjected to the Lagrange coefficient algorithm above. The value of e must be the same for all signature shares.

Each si is multiplied by its coefficient (calculated above) and the results are summed into a single “s”:

s = sum(coeff[i] * si) % P

Finally, you perform the same verification as in Schnorr.

rv = G*s + G*x*e
ev = H(rv, m)
assert(ev == e)
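
Under the same placeholder parameters, that check might be sketched with coincurve as follows (combined_pub is the compressed G*x; s and e are the interpolated signature values):

import coincurve

def schnorr_verify(combined_pub: bytes, s: int, e: int, msg: bytes) -> bool:
    G_s = coincurve.PrivateKey(s.to_bytes(32, 'big')).public_key            # G*s
    x_e = coincurve.PublicKey(combined_pub).multiply(e.to_bytes(32, 'big')) # (G*x)*e
    rv = coincurve.PublicKey.combine_keys([G_s, x_e])                       # rv = G*s + G*x*e
    return H(rv.format(), msg) == e                                         # ev == e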

The math winds up about the same as standard Schnorr. The only variable an attacker controls at signature time is k, and M-1 attackers cannot affect the randomness of the Mth participant’s k.

Derivation showing this still works:

rv = G*(k - x*e) + G*(x*e)
rv = G*k == R

One thing to understand is why, after interpolation, the shares of “x”, of G*x, and of the signatures all combine correctly.

Shamir secret shares are homomorphic under addition, subtraction, and multiplication by public constants, so this is something we get “for free”: all of the linear operations computed on the shares are implicitly computed on the interpolated result.
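
A toy demonstration of the additive case, over a small prime field rather than secp256k1 (all values here are made up for illustration):

p = 97                                   # toy field, stands in for the group order
f = lambda z: (10 + 3 * z) % p           # degree-1 polynomial, shares of secret 10
g = lambda z: (20 + 5 * z) % p           # degree-1 polynomial, shares of secret 20

def interp0(points):
    # Lagrange interpolation at zero, mod p
    total = 0
    for xi, yi in points:
        num, den = 1, 1
        for xj, _ in points:
            if xj != xi:
                num = num * xj % p
                den = den * (xj - xi) % p
        total = (total + yi * num * pow(den, -1, p)) % p
    return total

# adding the shares pointwise yields shares of the summed secret
assert interp0([(z, (f(z) + g(z)) % p) for z in (1, 2)]) == 30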

It is also why ECDSA can’t be used directly in a Shamir scheme: the modular inverse of the nonce is not a linear operation, so the shares could not simply be combined.

Addition of Share Participants:

Thus far, this scheme can be done without “rounds of approvals”. Share participants can generate keys and nonces air-gapped and offline, publishing only the public portions. They can only sign by participating, but that participation can occur in parallel and be broadcast.

In order to create additional participants, however, these must be added to a larger implied polynomial using “secret redistribution”.

A quick summary of how this is done doesn’t do it justice, but there are lots of implementations and descriptions of secret share redistribution to choose from. For example: https://github.com/dfinity/vss

Summary: each of the M participants “shards” his share into N “sub shares” labeled 1..N, using Shamir’s secret sharing algorithm for an M of N scheme with fresh random coefficients. These sub-shares are then delivered to the corresponding share participants, who separately interpolate and combine the ones they receive to form new shares “xi”. Each participant must then publish the new G*xi, and everyone can verify that the new combined, implied key G*x for every M-subset of the N participants is the same as the original key G*x. (Failure to verify this implies that one of the participants is functioning incorrectly and cannot sign, not that there is some sort of weakness.)
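
A rough sketch of the sharding half of that step (the names here are hypothetical; combining the received sub-shares into a new xi is a separate interpolation step, as described above):

import secrets

def reshard(xi, M, N, P):
    # fresh random polynomial of degree M-1 whose constant term is the old share xi
    poly = [xi] + [secrets.randbelow(P) for _ in range(M - 1)]
    evaluate = lambda z: sum(c * pow(z, k, P) for k, c in enumerate(poly)) % P
    return {j: evaluate(j) for j in range(1, N + 1)}   # sub-share for participant j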

Redistribution can be used to *remove* participants, *add* them, or otherwise change the M of N scheme without compromising the scheme.

It’s notable that when using “secret redistribution” the addition of share participants does not change the verification algorithm, nor does it expose any information about the secret key or the implied secret key.

In some scenarios, the security and safety advantages of a secret redistribution mechanism far outweigh the disadvantages of an interactive scheme.

Caveats:

It is also important to note that this entire article (and multisig schemes in general) assumes that there are other mechanisms in place for verifying share participants. Any attacker who controls M of N keys via MITM techniques will be able to sign using any multisig scheme.

Public keys should be shared in-person using QR codes or, at least, using verbal or other verification techniques for keys shared online.
