Multi-party Computation notes.

$$ \gdef\delim#1#2#3{\mathopen{}\mathclose{\left#1 #2 \right#3}} \gdef\p#1{\delim({#1})} \gdef\ps#1{\delim\{{#1}\}} \gdef\box#1{\delim[{#1}]} \gdef\floor#1{\delim\lfloor{#1}\rfloor} \gdef\bbox#1{\delim{\llbracket}{#1}{\rrbracket}} \gdef\set#1{\mathcal{#1}} \gdef\vec#1{\mathbf{#1}} \gdef\mat#1{\mathrm{#1}} $$

A Secure Multi-Party Computation (MPC) scheme allows a set of parties $\set P$ to jointly compute a function over their inputs while keeping those inputs secret.

MPC is a family of related protocols with slightly different security guarantees. This is comparable to proof protocols being zero-knowledge or not, except that in MPC there are more dimensions to the guarantees.

Properties of MPC schemes

Semi-honest. All parties follow the protocol correctly but are curious. This means that while they will not deviate from the defined steps of the protocol, they may try to learn additional information from the data they observe during the execution of the protocol.


Question: Does this mean a single malicious actor can affect the outcome undetected, but never learn the secret?

Semi-honest three party computation.

I'm considering a series of MPC protocols following [AFL+16], [MZ17], [MR18]. These are called ABY protocols as they convert between arithmetic, binary and Yao representations. I will skip the Yao representation as it is not necessary for completeness and mainly improves performance for certain operations.

Linear Secret Sharing Schemes

A Secret Sharing Scheme (SSS) is a set of two protocols:

In Distribution, a dealer takes a secret $s$ and distributes it to $n$ parties. In Reconstruction, a subset of the parties reveal their secret shares in order to learn the secret $s$. It is guaranteed that disallowed sets of parties cannot learn anything about the original secret.

A Secret Sharing Scheme is associated with an Access Structure: the set of sets of parties which can recover the secret (or equivalently, its complement: the sets of parties which cannot). Access structures are always monotone, i.e. if a set of parties can access the secret, then any superset of it can as well. There are a number of different methods to perform secret sharing.

In multi-party computation we are usually interested in so-called Linear Secret Sharing Schemes (LSSSs) as they allow parties to locally compute linear functions of their secrets for free.

In Shamir Secret Sharing the number of parties which can be adversarial, $t$, is a threshold on the total number of parties $n$. In multi-party computation protocols one usually has either $t < n/2$ or $t < n/3$. Shamir is a form of LSSS, and usually the one used to introduce the concepts in introductory courses.

Another popular LSSS is Replicated Secret Sharing. It is usually less efficient than Shamir sharing for threshold structures, but it can represent any (monotone) access structure.

The simplest secret sharing scheme is one in which all parties need to come together to recover the secret. This form of sharing is the basis of almost all multi-party computation protocols in the dishonest majority setting. In this scheme each party $P_i$ has a value $x_i$, with the actual secret being $x = x_1 + ⋯ + x_n$.
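A minimal sketch of this $n$-of-$n$ additive sharing, assuming a modulus of $2^{64}$ (the modulus and function names are illustrative, not from any particular library):

```python
import secrets

MOD = 2**64  # hypothetical modulus; any finite group works

def share(secret: int, n: int) -> list[int]:
    """Split `secret` into n additive shares with secret = sum(shares) mod MOD."""
    parts = [secrets.randbelow(MOD) for _ in range(n - 1)]
    parts.append((secret - sum(parts)) % MOD)
    return parts

def reconstruct(shares: list[int]) -> int:
    """All n parties must pool their shares to recover the secret."""
    return sum(shares) % MOD

s = 123456789
assert reconstruct(share(s, 5)) == s
```

Any $n - 1$ of the shares are uniformly random, so any proper subset of parties learns nothing about $s$.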

Note: Creating a random sharing of zero is also a useful operation.

In an LSSS we have encoded $[x]$ such that we can locally compute $[a] + [b]$ and $c⋅[a]$.

Affine transformation:

More generally, we can compute any affine transformation of the secrets locally: $\mat A⋅[\vec x]+\vec b$.
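A sketch of this local affine evaluation, assuming 3 parties, additive shares modulo $2^{64}$, and the convention that only party 0 adds the public constant $\vec b$ (so it is counted exactly once in the sum):

```python
import secrets

MOD = 2**64  # hypothetical modulus

def share(v, n=3):
    s = [secrets.randbelow(MOD) for _ in range(n - 1)]
    return s + [(v - sum(s)) % MOD]

def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) % MOD for row in A]

A = [[1, 2], [3, 4]]
b = [5, 6]
x = [7, 8]

# share_vecs[i] = party i's shares of the whole vector x
coord_shares = [share(xi) for xi in x]
share_vecs = [[coord_shares[j][i] for j in range(2)] for i in range(3)]

# Each party applies A to its own shares locally; party 0 also adds b
local = []
for i, sv in enumerate(share_vecs):
    y = matvec(A, sv)
    if i == 0:
        y = [(yj + bj) % MOD for yj, bj in zip(y, b)]
    local.append(y)

# Reconstructing gives A·x + b, with zero communication during the computation
result = [sum(p[j] for p in local) % MOD for j in range(2)]
assert result == [28, 59]  # A·[7,8] + [5,6] = [23+5, 53+6]
```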


Then we can do a round of multiplications as an affine bilinear algorithm:

$$ \mat C ⋅ \left( \left(\mat A⋅[\vec x]+\vec a\right) ∘ \left(\mat B⋅[\vec x]+\vec b\right)\right) + \vec c $$

In the replicated scheme, multiplication converts from replicated to un-replicated form.

This can be undone by replicating the results again.

The inverse transform, from replicated to plain shares is trivially forgetting shares.


Beaver Triples

Assume the parties hold shares of random values $a$, $b$ and $c = a⋅b$, generated in a preprocessing phase; one such triple is consumed per multiplication.

Replicated shares


Additive MPC

Given a group $𝔾$ and $n$ shares $\vec x ∈ 𝔾^n$, we can define a linear encoding such that for values $\vec v$, randomness $\vec r$ we have:

$$ \begin{bmatrix} \mat M_v \\ \mat M_r \\ \mat M_c \end{bmatrix} ⋅ \begin{bmatrix} \vec x\\ 1 \end{bmatrix} = \begin{bmatrix} \vec v \\ \vec r \\ \vec 0 \end{bmatrix} $$

where $\mat M$ is a matrix with linearly independent rows.

For example, the 3-share scheme below fits this form:

$$ \begin{bmatrix} 1 & 1 & 1 & 0 \\ \hline 1 & -1 & 0 & 0 \\ 0 & 1 & -1 & 0 \\ \end{bmatrix} ⋅ \begin{bmatrix} x_0 \\ x_1 \\ x_2 \\ 1 \end{bmatrix} = \begin{bmatrix} v \\ r_0 \\ r_1 \end{bmatrix} $$
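The $3 × 3$ coefficient matrix above has determinant $3$, which is invertible in $ℤ_{2^k}$, so given a value $v$ and fresh randomness $r_0, r_1$ we can solve for the shares. A hypothetical sketch with $k = 64$:

```python
import secrets

K = 64
MOD = 2**K
INV3 = pow(3, -1, MOD)  # 3 is odd, hence invertible mod 2^k

def share(v):
    """Solve x0+x1+x2 = v, x0-x1 = r0, x1-x2 = r1 for the shares."""
    r0, r1 = secrets.randbelow(MOD), secrets.randbelow(MOD)
    x2 = ((v - r0 - 2 * r1) * INV3) % MOD  # 3·x2 + r0 + 2·r1 = v
    x1 = (x2 + r1) % MOD
    x0 = (x1 + r0) % MOD
    return (x0, x1, x2)

x0, x1, x2 = share(1234)
assert (x0 + x1 + x2) % MOD == 1234
```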

TODO: This $\mat M_r$ matrix is arbitrary; it can be any $2 × 3$ matrix.

Arithmetic encoding

Given the ring $ℤ_{2^k}$ and $x ∈ ℤ_{2^k}$, define the sharing

$$ ⟦x⟧ = (x_0, x_1, x_2) \text{ s.t. } x = x_0 + x_1 + x_2 $$

Each participant receives two shares $(x_i, x_{i+1})$, with indices taken modulo $3$.

Note that each value has $2^{2⋅k}$ valid encodings.

Zero. A share for zero can be generated by producing three random numbers $r_i$ and computing

$$ ⟦0⟧ = (r_0 - r_1, r_1 - r_2, r_2 - r_0) $$

This can be done locally if each participant $i$ holds (seeded) pseudo-random number generators for $r_i$ and $r_{i+1}$. At least 3 participants are required for security.
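A sketch of this local zero-share generation, using a hash-based PRG as a stand-in for whatever generator the parties actually agree on (the seeds and counter here are toy values):

```python
import hashlib

MOD = 2**64  # hypothetical modulus

def prg(seed: bytes, counter: int) -> int:
    # Stand-in PRG: hash of seed and a round counter (sketch only)
    h = hashlib.sha256(seed + counter.to_bytes(8, "big")).digest()
    return int.from_bytes(h[:8], "big") % MOD

# Party i shares seed[i] with party i+1 (mod 3), so party i can evaluate
# both prg(seeds[i]) and prg(seeds[(i+1) % 3]) without communication.
seeds = [b"k0", b"k1", b"k2"]  # toy seeds; in practice exchanged securely
ctr = 7  # round counter agreed upon by all parties

zero_share = [
    (prg(seeds[i], ctr) - prg(seeds[(i + 1) % 3], ctr)) % MOD
    for i in range(3)
]
assert sum(zero_share) % MOD == 0  # telescoping sum r0-r1 + r1-r2 + r2-r0
```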

Encoding. To encode a cleartext value $n$

$$ ⟦n⟧ = (n, 0, 0) $$

Idea: Since $3^{-1}$ always exists in $ℤ_{2^k}$ we could do $(n,n,n)/3$.

Mixed addition. Adding a cleartext value $a ∈ ℤ_{2^k}$ can be done by adding $(x_0 + a, x_1, x_2)$.

Mixed multiplication. Multiplying by a cleartext value $a ∈ ℤ_{2^k}$ can be done by computing $(a⋅x_0, a⋅x_1, a⋅x_2)$ locally.

Addition. The sum of two shares can be computed locally

$$ ⟦x⟧ + ⟦y⟧ = (x_0 + y_0, x_1 + y_1, x_2 + y_2) $$

Multiplication. For multiplication $⟦z⟧ = ⟦x⟧ ⋅ ⟦y⟧ $ we need to find a $(z_0, z_1, z_2)$ such that

$$ \begin{aligned} z_0 + z_1 + z_2 &= \p{x_0 + x_1 + x_2} ⋅ \p{y_0 + y_1 + y_2} \\ &= x_0 ⋅ y_0 + x_1 ⋅ y_0 + x_2 ⋅ y_0 \\ &+ x_0 ⋅ y_1 + x_1 ⋅ y_1 + x_2 ⋅ y_1 \\ &+ x_0 ⋅ y_2 + x_1 ⋅ y_2 + x_2 ⋅ y_2 \\ \end{aligned} $$

the terms can be grouped such that each party can compute one share $z_i$ locally given $x_i, x_{i+1}, y_i, y_{i+1}$:

$$ \begin{aligned} z_0 &= x_0 ⋅ y_0 + x_0 ⋅ y_1 + x_1 ⋅ y_0 \\ z_1 &= x_1 ⋅ y_1 + x_1 ⋅ y_2 + x_2 ⋅ y_1 \\ z_2 &= x_2 ⋅ y_2 + x_2 ⋅ y_0 + x_0 ⋅ y_2 \end{aligned} $$

As a potential optimization, we can also trade multiplications for additions using:

$$ \begin{aligned} z_0 &= \p{x_0 + x_1} ⋅ \p{y_0 + y_1} - x_1 ⋅ y_1 \\ z_1 &= \p{x_1 + x_2} ⋅ \p{y_1 + y_2} - x_2 ⋅ y_2 \\ z_2 &= \p{x_2 + x_0} ⋅ \p{y_2 + y_0} - x_0 ⋅ y_0 \\ \end{aligned} $$

Each node $i$ can now communicate share $z_i$ to node $i-1$ so that all nodes have a full complement of shares again. Before doing so, we add a freshly generated share of zero to re-randomize the values and make sure node $i-1$ learns nothing about share $z_{i+1}$.
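Putting the local products and the zero-masking together, one multiplication round might look like the following sketch (3 parties and modulus $2^{64}$ assumed; the re-replication message is only noted in a comment):

```python
import secrets

MOD = 2**64  # hypothetical modulus

def share3(v):
    a, b = secrets.randbelow(MOD), secrets.randbelow(MOD)
    return [a, b, (v - a - b) % MOD]

def zero_share():
    # Local randomness as a stand-in for the PRG-based construction
    r = [secrets.randbelow(MOD) for _ in range(3)]
    return [(r[i] - r[(i + 1) % 3]) % MOD for i in range(3)]

def mul_round(x, y):
    """Party i holds (x_i, x_{i+1}, y_i, y_{i+1}) and computes z_i locally,
    masked with a fresh share of zero."""
    z = zero_share()
    out = []
    for i in range(3):
        j = (i + 1) % 3
        zi = (x[i] * y[i] + x[i] * y[j] + x[j] * y[i] + z[i]) % MOD
        out.append(zi)
    return out  # each party then sends z_i to party i-1 to re-replicate

x, y = share3(1111), share3(2222)
z = mul_round(x, y)
assert sum(z) % MOD == (1111 * 2222) % MOD
```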

Multiplication with Beaver triples. For a two-party setting, from [B91] as used in [PSSY20]. In a preprocessing phase the parties generate shares of random $a$, $b$ and $c = a⋅b$. In the online phase they open the masked values $ε = x + a$ and $δ = y + b$, then each party computes locally

$$ z_i = \box{i = 0}⋅ε⋅δ - ε⋅b_i - a_i⋅δ + c_i $$

so that $\sum_i z_i = (x+a)⋅(y+b) - (x+a)⋅b - a⋅(y+b) + a⋅b = x⋅y$.
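A sketch of Beaver-triple multiplication for two parties, with a trusted dealer standing in for the preprocessing phase (all names and the modulus are illustrative):

```python
import secrets

MOD = 2**64  # hypothetical modulus

def share2(v):
    r = secrets.randbelow(MOD)
    return [r, (v - r) % MOD]

def open_(shares):
    return sum(shares) % MOD

# Preprocessing (trusted dealer in this sketch): random a, b and c = a·b
a = secrets.randbelow(MOD)
b = secrets.randbelow(MOD)
A, B, C = share2(a), share2(b), share2((a * b) % MOD)

x_val, y_val = 31337, 271828
X, Y = share2(x_val), share2(y_val)

# Online phase: open the masked values eps = x + a and delta = y + b
eps = open_([(X[i] + A[i]) % MOD for i in range(2)])
delta = open_([(Y[i] + B[i]) % MOD for i in range(2)])

# Local computation; the public eps·delta term is added by party 0 only
Z = [
    ((eps * delta if i == 0 else 0) - eps * B[i] - A[i] * delta + C[i]) % MOD
    for i in range(2)
]
assert open_(Z) == (x_val * y_val) % MOD
```

Opening $ε$ and $δ$ reveals nothing about $x$ and $y$ because $a$ and $b$ are uniformly random one-time masks.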

Two party multiply? Expanding the product of two-party additive shares:

$$ \begin{aligned} z_0 + z_1 &= \p{x_0 + x_1} ⋅ \p{y_0 + y_1} \\ &= x_0 ⋅ y_0 + x_1 ⋅ y_0 \\ &+ x_0 ⋅ y_1 + x_1 ⋅ y_1 \\ \end{aligned} $$

The cross terms $x_0 ⋅ y_1$ and $x_1 ⋅ y_0$ involve values held by different parties, so they cannot be computed locally; this is exactly what the Beaver triple resolves.


This multiplication method can potentially work for other numbers of parties $n$ and collusion resistances $k$. The above is presented using $3$ shares, $3$ parties and a collusion resistance of $1$ (i.e., $≥ 2$ parties need to collude to steal the secret). We can represent this scheme by its share allocation as

$$ \ps{\ps{0, 1}, \ps{1,2}, \ps{2,0}} $$

We can also design a $10$ party scheme with collusion resistance of $2$ using $5$ shares:

$$ \ps{ \ps{0, 1}, \ps{0, 2}, \ps{0, 3}, \ps{0, 4}, \ps{1, 2}, \ps{1, 3}, \ps{1, 4}, \ps{2, 3}, \ps{2, 4}, \ps{3, 4} } $$
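We can verify the claimed collusion resistance of this allocation exhaustively:

```python
from itertools import combinations

# 10 parties, 5 shares; each party holds a distinct pair of shares
allocation = list(combinations(range(5), 2))
assert len(allocation) == 10

# No coalition of 2 parties holds all 5 shares...
for coalition in combinations(allocation, 2):
    assert len(set().union(*coalition)) < 5

# ...but some coalition of 3 parties does, so resistance is exactly 2
assert any(len(set().union(*c)) == 5 for c in combinations(allocation, 3))
```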

Question: What other schemes are possible? I.e., can we create allocations of shares such that no union of $k$ allocations contains all shares, but each pair of shares occurs in at least one allocation?

Question: Can we design a $9$ node scheme with collusion resistance of $3$ using $9$ shares by applying the 3-share scheme recursively? Or does the top level re-sharing for multiplication leak too much?


Question: Can we do inversion using Hensel lifting in $\log_2 k$ rounds?

$$ y' = y ⋅ \p{2 - x ⋅ y} $$

Computed in $ℤ_2 ⊂ ℤ_{2^2} ⊂ ℤ_{2^4} ⊂ ℤ_{2^8} ⊂ ⋯ ⊂ ℤ_{2^k}$, so the round complexity is $\log_2 k$ but the communication complexity only $2⋅k$. (See Hensel's lemma, quadratic lifting.)
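A cleartext sketch of this Newton/Hensel inversion (the MPC version would run the same iteration on shares, paying one multiplication round per step); $k = 64$ assumed:

```python
K = 64
MOD = 2**K

def invert(x: int) -> int:
    """Invert odd x mod 2^K via the iteration y' = y·(2 - x·y),
    which doubles the number of correct bits each step (log2 K steps)."""
    assert x % 2 == 1, "only odd elements are invertible in Z_{2^k}"
    y = 1  # x·1 ≡ 1 (mod 2) for odd x, so y starts correct to 1 bit
    bits = 1
    while bits < K:
        y = y * (2 - x * y) % MOD
        bits *= 2
    return y

x = 0xDEADBEEF
assert (x * invert(x)) % MOD == 1
```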

Bit shift right

We want to compute

$$ \left\llbracket x ≫ n \right\rrbracket $$

given $\left\llbracket x \right\rrbracket$ for a public shift amount $n$.

Binary encoding

Use the above arithmetic procedure to create $ℤ_{2^1} = 𝔽_2$. Now create a vector of $k$ such elements, $𝔽_2^k$, to form a $k$-bit bitvector. We can apply the arithmetic addition and multiplication operations of $𝔽_2$ elementwise to the vector to construct xor and and operations, denoted $⊕$ and $∧$ respectively. These elementwise operations happen in parallel such that, e.g., $x = x_0 ⊕ x_1 ⊕ x_2$ and the shares of $z = x ∧ y$ are computed as

$$ \begin{aligned} z_0 &= (x_0 ∧ y_0) ⊕ (x_0 ∧ y_1) ⊕ (x_1 ∧ y_0) \\ z_1 &= (x_1 ∧ y_1) ⊕ (x_1 ∧ y_2) ⊕ (x_2 ∧ y_1) \\ z_2 &= (x_2 ∧ y_2) ⊕ (x_2 ∧ y_0) ⊕ (x_0 ∧ y_2) \end{aligned} $$

The communication complexity is exactly the same as in the arithmetic case: thanks to parallelization the same number of rounds is required, and $𝔽_2^k$ has the same size as $ℤ_{2^k}$.

Note that any cleartext permutation of the vector indices can be applied locally.

Note that negation, $¬$, is the same as xoring with the all-ones vector, and other boolean operations can be constructed from $¬, ⊕, ∧$.

TODO: This applies to any kind of vectorization.



Arithmetic to binary

Generically we can trivially convert from $ℤ_{2^k}$ to $ℤ_{2^h}$ as long as $h ≤ k$. Taking $\box{a}_b$ to mean $a$ modulo $b$, each party reduces its share locally:

$$ z = \box{\sum_i x_i}_{2^h} = \sum_i \box{x_i}_{2^h} $$
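A quick sketch checking that purely local reduction of the shares implements this conversion (here $k = 64$, $h = 16$, both illustrative):

```python
import secrets

K, H = 64, 16
assert H <= K

def share3(v, mod):
    a, b = secrets.randbelow(mod), secrets.randbelow(mod)
    return [a, b, (v - a - b) % mod]

x = 0x0123456789ABCDEF
xs = share3(x, 2**K)

# Each party locally reduces its share mod 2^h; no communication needed
ys = [xi % 2**H for xi in xs]
assert sum(ys) % 2**H == x % 2**H
```

This works because $2^h$ divides $2^k$, so congruence mod $2^k$ implies congruence mod $2^h$.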


Binary to arithmetic

This is more challenging as we need to account for wraparound.

Question: Is there some polynomial we can construct and use Hensel lifting on?

Two's complement encoding

Let $\mathrm{bits}: ℤ_{2^k} → 𝔽_2^k$ be the function mapping a number in $ℤ_{2^k}$ to the bitvector in $𝔽_2^k$ representing that number in two's complement.

Given arithmetic shares $⟦x⟧ = (x_0, x_1, x_2)$, we can compute the two's complement encoding of each share locally as

$$ \begin{aligned} ⟦\mathrm{bits}(x_0)⟧^{\mathsf B} &= (\mathrm{bits}(x_0),0,0) \\ ⟦\mathrm{bits}(x_1)⟧^{\mathsf B} &= (0, \mathrm{bits}(x_1),0) \\ ⟦\mathrm{bits}(x_2)⟧^{\mathsf B} &= (0, 0, \mathrm{bits}(x_2)) \end{aligned} $$

These can then be (locally) re-randomized by adding a random $r ∈ 𝔽_2^k$ to two of the shares.

To finish constructing $⟦\mathrm{bits}(x)⟧$ we need to sum these shares. But this is an arithmetic sum that we need to execute over bitvectors in two's complement representation. This can be done using any kind of adder circuit. Minimal depth circuits are preferred to reduce the round complexity.
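As a cleartext illustration, a ripple-carry adder can be expressed using only the $⊕$ and $∧$ gates available on shared bitvectors (plus shifts, which are local cleartext index permutations); the loop depth is what the round complexity pays for:

```python
MASK = (1 << 16) - 1  # illustrative word size k = 16

def add_xor_and(a: int, b: int) -> int:
    """Addition mod 2^16 using only XOR, AND and a local shift —
    the operations available on binary-shared bitvectors."""
    while b != 0:
        carry = ((a & b) << 1) & MASK  # AND produces the carry bits
        a = a ^ b                      # XOR is the sum without carries
        b = carry                      # ripple the carries, up to k rounds
    return a

assert add_xor_and(40000, 30000) == (40000 + 30000) & MASK
```

The up-to-$k$ sequential carry steps are why lower-depth adders (e.g. carry-lookahead style circuits) are preferred in the shared setting.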

Note that two's complement is only one method of encoding and others can be used as long as the addition circuit is modified accordingly.

Q: Do encodings other than two's complement have better complexity for the adder circuit? How would Gray code addition work?

Two's complement decoding



Making robust against miscomputation:

Shamir multi-sharing

Remco Bloemen
Math & Engineering