# Hyrax Commitments

$$ \gdef\p#1{\left({#1}\right)} \gdef\vec#1{\mathbf{#1}} \gdef\mat#1{\mathrm{#1}} \gdef\eq{\mathrm{eq}} \gdef\setn#1{\mathcal #1} $$

Hyrax commitments were introduced in WTS⁺17. They form a polynomial commitment scheme designed for multi-linear extensions (MLEs).

## Certain Multivariate Sums as Tensor Contractions

Consider a finite multi-variate basis $\setn B = \setn B_1 × \setn B_2 × ⋯ × \setn B_n$ with $\setn B_i ⊂ 𝔽$ and a sum of the form $$ \sum_{\vec b ∈ \setn B} g(\vec b) ⋅ \prod_{k ∈ [n]} e_k(b_k) $$ where $g : 𝔽^n → 𝔽$ and $e_k: 𝔽 → 𝔽$ are arbitrary and $b_k$ denotes the $k$-th component of $\vec b$. Given any binary partitioning $[n] = \setn A ∪ \setn B$ we can rewrite this as $$ \sum_{\vec a ∈ \setn B_{\setn A}}\sum_{\vec b ∈ \setn B_{\setn B}} g(\vec a, \vec b) ⋅ \p{\prod_{k ∈ \setn A} e_k(a_k)} ⋅ \p{\prod_{k ∈ \setn B} e_k(b_k)} $$ Observe that the products depend only on $\vec a$ and $\vec b$ respectively. Since $\setn B_{\setn A}$ and $\setn B_{\setn B}$ are finite we can enumerate their values as $\{\vec a_i\} = \setn B_{\setn A}$ and $\{\vec b_j\} = \setn B_{\setn B}$ and define $$ \begin{aligned} g_{ij} &= g(\vec a_i, \vec b_j) & u_i &= \prod_{k ∈ \setn A} e_k((\vec a_i)_k) & v_j &= \prod_{k ∈ \setn B} e_k((\vec b_j)_k) \end{aligned} $$ The sum then becomes a vector-matrix-vector product, also known as a rank-$2$ tensor contraction: $$ \sum_{i ∈ [|\setn B_{\setn A}|]}\sum_{j ∈ [|\setn B_{\setn B}|]} g_{ij} ⋅ u_i ⋅ v_j $$ This generalizes to $q$-ary partitionings, resulting in a rank-$q$ tensor $g$ and $q$ vectors to contract with. Higher-rank tensors do not appear to give an advantage in the Hyrax commitments described below.
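As a sanity check, the rewrite can be exercised numerically. A minimal Python sketch (the functions `g`, `e1`, `e2` and the bases `BA`, `BB` are illustrative), tabulating the sum and contracting it:

```python
BA = [0, 1, 2]          # basis values for the "A" half
BB = [0, 1]             # basis values for the "B" half
g  = lambda a, b: a * a + 3 * b + 1   # arbitrary function
e1 = lambda a: 2 * a + 1              # arbitrary per-coordinate factor
e2 = lambda b: 5 - b

# Direct double sum.
direct = sum(g(a, b) * e1(a) * e2(b) for a in BA for b in BB)

# Tensor-contraction form: tabulate G, u, v, then contract.
G = [[g(a, b) for b in BB] for a in BA]
u = [e1(a) for a in BA]
v = [e2(b) for b in BB]
contracted = sum(u[i] * G[i][j] * v[j]
                 for i in range(len(BA)) for j in range(len(BB)))

assert direct == contracted
```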

## MLE Evaluation as Rank-2 Tensor Contraction

Recall that a multi-linear extension $\tilde f: 𝔽^n → 𝔽$ has $2^n$ coefficients $f_{\vec b} ∈ 𝔽$ indexed by $\vec b ∈ \{0, 1\}^n$ and is defined as
$$
\begin{aligned}
\tilde{f}(\vec x) &= \sum_{\vec b ∈ \{0, 1\}^n} f_{\vec b} ⋅ \eq(\vec b, \vec x) \\
\end{aligned}
$$
where $\eq$ is the *equality function* that indicates $\vec b = \vec x$ on the basis $\{0, 1\}^n$. It is given explicitly by
$$
\begin{aligned}
\eq(\vec x, \vec y) &= \prod_{i∈[n]} \eq(x_i, y_i) \\
\eq(x, y) &= x⋅y + (1-x)⋅(1-y)\\
\end{aligned}
$$

Observe that for a given $\vec x$ the sum $\tilde{f}(\vec x)$ has exactly the form required above, so it can be rewritten as a rank-2 tensor contraction $\sum_{\vec a \vec b} f_{\vec a \vec b} ⋅ u_{\vec a}(\vec x) ⋅ v_{\vec b}(\vec x)$.
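A minimal Python sketch of this (the 4-variable MLE, the toy modulus $2^{61}-1$, and the half-and-half variable split are illustrative choices), checking direct evaluation against the split form $u^T F v$:

```python
import itertools
import math
import random

p = 2**61 - 1            # a prime field modulus (Mersenne prime)
n = 4
rng = random.Random(0)
coeffs = {b: rng.randrange(p) for b in itertools.product((0, 1), repeat=n)}
x = [rng.randrange(p) for _ in range(n)]

def eq1(b, xi):
    # single-variable equality polynomial eq(b, x) = b*x + (1-b)*(1-x)
    return (b * xi + (1 - b) * (1 - xi)) % p

# Direct evaluation: sum over all 2^n corners of the cube.
direct = sum(c * math.prod(eq1(bi, xi) for bi, xi in zip(b, x))
             for b, c in coeffs.items()) % p

# Split the variables into two halves and contract u^T F v.
half = n // 2
A = list(itertools.product((0, 1), repeat=half))
B = list(itertools.product((0, 1), repeat=n - half))
u = [math.prod(eq1(ai, xi) for ai, xi in zip(a, x[:half])) % p for a in A]
v = [math.prod(eq1(bi, xi) for bi, xi in zip(b, x[half:])) % p for b in B]
F = [[coeffs[a + b] for b in B] for a in A]
contracted = sum(u[i] * F[i][j] * v[j]
                 for i in range(len(A)) for j in range(len(B))) % p

assert direct == contracted
```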

## Polynomial Evaluation as Rank-2 Tensor Contraction

A $2^n$-term polynomial $f$ evaluated at $x$ is

$$ f(x) = \sum_{i ∈ [2^n]} f_i ⋅ \prod_{k∈[n]} \p{x^{2^k}}^{i_k} $$

where $i_k$ denotes the $k$-th bit of the number $i$. This matches the form above with $\setn B_k = \{0, 1\}$ and $e_k(b) = \p{x^{2^k}}^b$, so univariate polynomial evaluation can likewise be rewritten as a rank-2 tensor contraction by partitioning the exponent bits.
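A small sketch (the coefficients and evaluation point are illustrative), checking the bit-decomposition formula against standard evaluation:

```python
import math

n = 3
f = [3, 1, 4, 1, 5, 9, 2, 6]      # 2^3 coefficients (illustrative)
x = 7

# Precompute x^(2^k): x, x^2, x^4, ...
powers = [x ** (2 ** k) for k in range(n)]

# Bitwise formula: f(x) = sum_i f_i * prod_k (x^(2^k))^(i_k).
bitwise = sum(
    fi * math.prod(powers[k] ** ((i >> k) & 1) for k in range(n))
    for i, fi in enumerate(f)
)

# Standard evaluation for comparison.
standard = sum(fi * x ** i for i, fi in enumerate(f))

assert bitwise == standard
```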

## Hyrax Commitments

Consider a tensor contraction $\tilde f(\vec x) = \sum_{\vec a \vec b} f_{\vec a \vec b} ⋅ u_{\vec a}(\vec x) ⋅ v_{\vec b}(\vec x)$ with $f_{\vec a \vec b}$ known to the prover and $u_{\vec a}(\vec x)$, $v_{\vec b}(\vec x)$ known to both prover and verifier. The prover wants to commit to $f_{\vec a \vec b}$ so that it can later open $\tilde f(\vec x)$.

Take a linear vector commitment scheme $\operatorname{commit}: 𝔽^m → \setn C$ that supports dot-product proofs. To commit to $f_{\vec a \vec b}$ the prover computes $c_{\vec a} = \operatorname{commit}(f_{\vec a-})$, where $f_{\vec a-}$ is the row vector obtained by holding $\vec a$ fixed. The prover sends $\{c_{\vec a} \}_{\vec a ∈ \setn B_{\setn A}}$ to the verifier.

To open at $\vec x$ the prover sends $\tilde{f}(\vec x)$. The prover and verifier compute the vectors $u_{\vec a} = u_{\vec a}(\vec x)$ and $v_{\vec b} = v_{\vec b}(\vec x)$ and combine the commitments into $c = \sum_{\vec a ∈ \setn B_{\setn A}} u_{\vec a} ⋅ c_{\vec a}$. By linearity, $c$ is a commitment to the vector $\sum_{\vec a} u_{\vec a} ⋅ f_{\vec a-}$. The prover and verifier then engage in the dot-product-proof protocol to prove that $\tilde{f}(\vec x)$ is the dot product of this committed vector and $v_{\vec b}$.
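The opening relies only on the linearity of $\operatorname{commit}$: the weighted combination of the rows dotted with $v_{\vec b}$ equals the full double sum. A sketch with plain vectors standing in for commitments (all values illustrative):

```python
# Rows f_{a-} of the coefficient matrix and the evaluation vectors u, v.
F = [[2, 7, 1], [8, 2, 8], [1, 8, 2]]
u = [3, 1, 4]
v = [1, 5, 9]
m = len(F)

# Combine the "commitments" (here: the rows themselves) with weights u.
combined_row = [sum(u[i] * F[i][j] for i in range(m)) for j in range(m)]

# Its dot product with v equals the full double sum u^T F v.
lhs = sum(combined_row[j] * v[j] for j in range(m))
rhs = sum(u[i] * F[i][j] * v[j] for i in range(m) for j in range(m))

assert lhs == rhs
```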

The commitment consists of $|\setn B_{\setn A}|$ vector commitments. The opening argument is a single dot-product argument. The prover's work to commit is $|\setn B_{\setn A}|$ vector commitments of size $|\setn B_{\setn B}|$ each. On opening, both prover and verifier compute the vectors $u_{\vec a}, v_{\vec b}$ and the linear combination $c$, and run the dot-product argument.

The case where $|\setn B_{\setn A}| = |\setn B_{\setn B}| = \sqrt{|\setn B|}$ is called the *square-root* Hyrax commitment.

## Pedersen vector commitments

To construct a linear vector commitment scheme for $𝔽^m$, pick a discrete-log-hard cyclic group of order $|𝔽|$ (note that such a group is also an $𝔽$-vector space). Pick random generators $\mathrm h, \{\mathrm g_i\}_{i∈ [m]}$.

To commit to $\vec x ∈ 𝔽^m$, the prover picks a random $s ∈ 𝔽^×$ and sends $\mathrm c = s⋅\mathrm h + \sum_{i} x_i ⋅ \mathrm g_i$. The prover and verifier can compute linear combinations of commitments; the prover also linearly combines the $s$ values. To open, the prover sends $s, \vec x$ and the verifier recomputes the commitment and checks that it matches.
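A toy Python sketch of this scheme (parameters far too small to be secure; the group is written multiplicatively, so the $s ⋅ \mathrm h$ of the text becomes $\mathrm h^s$), checking that commitments combine linearly:

```python
import random

# Toy parameters (NOT secure): safe prime p = 2q + 1, so the squares
# mod p form a subgroup of prime order q, and scalars live in Z_q.
p, q = 1019, 509
rng = random.Random(1)

def sample_gen():
    # A random non-identity square mod p generates the order-q subgroup.
    while True:
        e = pow(rng.randrange(2, p), 2, p)
        if e != 1:
            return e

m = 4
h = sample_gen()
g = [sample_gen() for _ in range(m)]

def commit(x, s):
    # c = h^s * prod_i g_i^(x_i)   (multiplicative notation)
    c = pow(h, s, p)
    for gi, xi in zip(g, x):
        c = c * pow(gi, xi, p) % p
    return c

x1 = [rng.randrange(q) for _ in range(m)]
x2 = [rng.randrange(q) for _ in range(m)]
s1, s2 = rng.randrange(1, q), rng.randrange(1, q)

# Linearity: the product of two commitments commits to the sum of the
# vectors, with blinding s1 + s2.
lhs = commit(x1, s1) * commit(x2, s2) % p
rhs = commit([(a + b) % q for a, b in zip(x1, x2)], (s1 + s2) % q)
assert lhs == rhs
```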

When $m=1$ we call this a *scalar commitment scheme*.

### Proof of Equality

Given two commitments $\mathrm c_1, \mathrm c_2$ with secrets $s_1, s_2$, to prove that they commit to the same vector without revealing it:

- **Prover** picks random $s ∈ 𝔽^×$ and sends $\mathrm c = s⋅\mathrm h$.
- **Verifier** sends random $r ∈ 𝔽^×$.
- **Prover** sends $z = r ⋅ (s_1 - s_2) + s$.
- **Verifier** checks $z⋅\mathrm h = r ⋅ (\mathrm c_1 - \mathrm c_2) + \mathrm c$.
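A toy run of this protocol (insecure illustrative parameters; the group is written multiplicatively, so subtracting commitments becomes division mod $p$):

```python
import random

# Toy group (NOT secure): safe prime p = 2q + 1, order-q subgroup of squares.
p, q = 1019, 509
rng = random.Random(2)

def sample_gen():
    while True:
        e = pow(rng.randrange(2, p), 2, p)
        if e != 1:
            return e

h, g = sample_gen(), sample_gen()

# Two commitments to the same value x with different blindings.
x = rng.randrange(q)
s1, s2 = rng.randrange(1, q), rng.randrange(1, q)
c1 = pow(h, s1, p) * pow(g, x, p) % p
c2 = pow(h, s2, p) * pow(g, x, p) % p

# Prover: random s, announcement c = h^s.
s = rng.randrange(1, q)
c = pow(h, s, p)
# Verifier: random challenge r.
r = rng.randrange(1, q)
# Prover: response z = r * (s1 - s2) + s.
z = (r * (s1 - s2) + s) % q
# Verifier: check h^z == (c1 / c2)^r * c; the g^x parts of c1, c2 cancel.
c1_over_c2 = c1 * pow(c2, p - 2, p) % p
assert pow(h, z, p) == pow(c1_over_c2, r, p) * c % p
```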

### Proof of Product

Given scalars $a, b, c$ with commitments $\mathrm c_a, \mathrm c_b, \mathrm c_c$ and secrets $s_a, s_b, s_c$, to prove that $a ⋅ b = c$:

- **Prover** picks random $u, v, s_u, s_v, s_w ∈ 𝔽^×$ and sends $$ \begin{aligned} \mathrm u &= s_u ⋅ \mathrm h + u ⋅ \mathrm g \\ \mathrm v &= s_v ⋅ \mathrm h + v ⋅ \mathrm g \\ \mathrm w &= s_w ⋅ \mathrm h + v ⋅ \mathrm c_a \\ \end{aligned} $$
- **Verifier** sends random $r ∈ 𝔽^×$.
- **Prover** sends $$ \begin{aligned} z_{s_a} &= s_u + r ⋅ s_a & z_a &= u + r ⋅ a \\ z_{s_b} &= s_v + r ⋅ s_b & z_b &= v + r ⋅ b \\ z &= s_w + r ⋅ (s_c - s_a ⋅ b) \\ \end{aligned} $$
- **Verifier** checks that $$ \begin{aligned} \mathrm u + r ⋅ \mathrm c_a &= z_{s_a} ⋅ \mathrm h + z_a ⋅ \mathrm g \\ \mathrm v + r ⋅ \mathrm c_b &= z_{s_b} ⋅ \mathrm h + z_b ⋅ \mathrm g \\ \mathrm w + r ⋅ \mathrm c_c &= z ⋅ \mathrm h + z_b ⋅ \mathrm c_a \\ \end{aligned} $$

The first two equalities are straightforward; the left- and right-hand sides of the last one expand to

$$ \begin{aligned} \mathrm w + r ⋅ \mathrm c_c &= s_w ⋅ \mathrm h + v ⋅ (s_a ⋅ \mathrm h + a ⋅\mathrm g) + r ⋅ (s_c ⋅ \mathrm h + c ⋅\mathrm g) \\ &= (s_w + v⋅s_a +r⋅s_c) ⋅ \mathrm h + (v ⋅ a + r⋅c)⋅\mathrm g \\ z ⋅ \mathrm h + z_b ⋅ \mathrm c_a &= (s_w + r ⋅ (s_c - s_a ⋅ b)) ⋅ \mathrm h + (v + r ⋅ b) ⋅ (s_a ⋅ \mathrm h + a ⋅\mathrm g) \\ &= (s_w + v ⋅ s_a + r ⋅ s_c) ⋅ \mathrm h + (v ⋅ a + r ⋅ b ⋅ a) ⋅ \mathrm g \\ \end{aligned} $$
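The protocol and the three verifier checks can be run end-to-end in a toy multiplicative group (insecure, illustrative parameters; $s ⋅ \mathrm h$ in the text becomes $\mathrm h^s$ here):

```python
import random

# Toy group (NOT secure): safe prime p = 2q + 1, order-q subgroup of squares.
p, q = 1019, 509
rng = random.Random(4)

def sample_gen():
    while True:
        e = pow(rng.randrange(2, p), 2, p)
        if e != 1:
            return e

h, g = sample_gen(), sample_gen()

# Committed scalars with c = a * b, and their Pedersen commitments.
a = rng.randrange(q)
b = rng.randrange(q)
cval = a * b % q
s_a, s_b, s_c = (rng.randrange(1, q) for _ in range(3))
c_a = pow(h, s_a, p) * pow(g, a, p) % p
c_b = pow(h, s_b, p) * pow(g, b, p) % p
c_c = pow(h, s_c, p) * pow(g, cval, p) % p

# Prover announcements u, v, w.
u, v = rng.randrange(1, q), rng.randrange(1, q)
s_u, s_v, s_w = (rng.randrange(1, q) for _ in range(3))
U = pow(h, s_u, p) * pow(g, u, p) % p
V = pow(h, s_v, p) * pow(g, v, p) % p
W = pow(h, s_w, p) * pow(c_a, v, p) % p

# Verifier challenge and prover responses.
r = rng.randrange(1, q)
z_sa, z_a = (s_u + r * s_a) % q, (u + r * a) % q
z_sb, z_b = (s_v + r * s_b) % q, (v + r * b) % q
z = (s_w + r * (s_c - s_a * b)) % q

# Verifier checks (multiplicative form of the three equalities).
assert U * pow(c_a, r, p) % p == pow(h, z_sa, p) * pow(g, z_a, p) % p
assert V * pow(c_b, r, p) % p == pow(h, z_sb, p) * pow(g, z_b, p) % p
assert W * pow(c_c, r, p) % p == pow(h, z, p) * pow(c_a, z_b, p) % p
```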

### Proof of Dot Product

Given a vector $\vec a$ with commitment $\mathrm c_a$ and secret $s_a$, a scalar $c$ with commitment $\mathrm c_c$ and secret $s_c$, and a public vector $\vec b$, to prove that $c = \vec a ⋅ \vec b$:

- **Prover** picks random $\vec s ∈ (𝔽^×)^m$ and $s_u, s_v ∈ 𝔽^×$ and sends $$ \begin{aligned} \mathrm u &= s_u ⋅ \mathrm h + \sum_i s_i ⋅ \mathrm g_i \\ \mathrm v &= s_v ⋅ \mathrm h + (\vec s ⋅ \vec b) ⋅ \mathrm g \\ \end{aligned} $$
- **Verifier** sends random $r ∈ 𝔽^×$.
- **Prover** sends $$ \begin{aligned} z_u &= s_u + r ⋅ s_a \\ z_v &= s_v + r ⋅ s_c \\ \vec z &= \vec s + r ⋅ \vec a \\ \end{aligned} $$
- **Verifier** checks $$ \begin{aligned} \mathrm u + r ⋅ \mathrm c_a &= z_u ⋅ \mathrm h + \sum_i z_i ⋅ \mathrm g_i \\ \mathrm v + r ⋅ \mathrm c_c &= z_v ⋅ \mathrm h + (\vec z ⋅ \vec b) ⋅ \mathrm g \\ \end{aligned} $$

The first equation is straightforward; the second one expands to

$$ \begin{aligned} \mathrm v + r ⋅ \mathrm c_c &= s_v ⋅ \mathrm h + (\vec s ⋅ \vec b) ⋅ \mathrm g + r⋅(s_c⋅\mathrm h + c ⋅ \mathrm g) \\ &= (s_v + r ⋅ s_c)⋅ \mathrm h + (\vec s ⋅ \vec b + r ⋅ c) ⋅ \mathrm g \\ z_v ⋅ \mathrm h + (\vec z ⋅ \vec b) ⋅ \mathrm g &= (s_v + r ⋅ s_c) ⋅ \mathrm h + ((\vec s + r ⋅ \vec a) ⋅ \vec b) ⋅ \mathrm g \\ &= (s_v + r ⋅ s_c) ⋅ \mathrm h + (\vec s ⋅ \vec b + r ⋅ \vec a ⋅ \vec b) ⋅ \mathrm g \\ \end{aligned} $$
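This protocol can likewise be run end-to-end in a toy multiplicative group (insecure, illustrative parameters):

```python
import random

# Toy group (NOT secure): safe prime p = 2q + 1, order-q subgroup of squares.
p, q = 1019, 509
rng = random.Random(3)

def sample_gen():
    while True:
        e = pow(rng.randrange(2, p), 2, p)
        if e != 1:
            return e

m = 3
h = sample_gen()
g0 = sample_gen()                  # the scalar generator "g"
g = [sample_gen() for _ in range(m)]

# Committed vector a, public vector b, committed scalar c = a . b.
a = [rng.randrange(q) for _ in range(m)]
b = [rng.randrange(q) for _ in range(m)]
cval = sum(ai * bi for ai, bi in zip(a, b)) % q
s_a, s_c = rng.randrange(1, q), rng.randrange(1, q)
c_a = pow(h, s_a, p)
for gi, ai in zip(g, a):
    c_a = c_a * pow(gi, ai, p) % p
c_c = pow(h, s_c, p) * pow(g0, cval, p) % p

# Prover announcements u, v from blinding vector s and scalars s_u, s_v.
svec = [rng.randrange(1, q) for _ in range(m)]
s_u, s_v = rng.randrange(1, q), rng.randrange(1, q)
U = pow(h, s_u, p)
for gi, si in zip(g, svec):
    U = U * pow(gi, si, p) % p
sb = sum(si * bi for si, bi in zip(svec, b)) % q
V = pow(h, s_v, p) * pow(g0, sb, p) % p

# Verifier challenge and prover responses.
r = rng.randrange(1, q)
z_u = (s_u + r * s_a) % q
z_v = (s_v + r * s_c) % q
z = [(si + r * ai) % q for si, ai in zip(svec, a)]

# Verifier checks (multiplicative form of the two equalities).
rhs1 = pow(h, z_u, p)
for gi, zi in zip(g, z):
    rhs1 = rhs1 * pow(gi, zi, p) % p
assert U * pow(c_a, r, p) % p == rhs1
zb = sum(zi * bi for zi, bi in zip(z, b)) % q
assert V * pow(c_c, r, p) % p == pow(h, z_v, p) * pow(g0, zb, p) % p
```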

## Bulletproofs

**To do.** The proof of dot product above has proof size linear in the vector length. WTS⁺17 presents a Bulletproofs-based argument that is logarithmic in the vector length.

However, note that in Hyrax the committed vector length is not $|\setn B|$ but tunable, and proof sizes of $\mathcal{O}(\sqrt{|\setn B|})$ can be achieved without Bulletproofs.

## References

- WTS⁺17 — Riad S. Wahby, Ioanna Tzialla, Abhi Shelat, Justin Thaler, and Michael Walfish. Doubly-efficient zkSNARKs without trusted setup. In IEEE S&P, 2018.