Folding Schemes

\gdef\delim#1#2#3{\mathopen{}\mathclose{\left#1 #2 \right#3}} \gdef\p#1{\delim({#1})} \gdef\vec#1{\mathbf{#1}} \gdef\mat#1{\mathrm{#1}} \gdef\set#1{\mathcal{#1}} \gdef\F{\mathbb{F}} \gdef\G{\mathbb{G}} \gdef\R{\mathrm{R}} \gdef\P{\mathsf{cm}} \gdef\H{\mathsf{H}} \gdef\Z{\mathrm{Z}} \gdef\bigop#1#2{\mathop{\Large #1}\limits_{#2}} \gdef\zkp#1#2#3{\p{ \begin{array}{c}#1\\\hline#2\end{array} \middle\vert \begin{aligned}#3\end{aligned}}}

Nova

Given a cycle of curves \G_1, \G_2 with scalar fields \F_1, \F_2 (and thus base fields \F_2, \F_1), construct the pairs (with element-wise operations) \G = \G_1 × \G_2 and \R = \F_1 × \F_2. Note that \R degrades to a ring, each \G_i is a vector space over \F_i, and \G is an \R-module.

Construct a family of cryptographic hashes \H_X: Y → X, where elements of the input set Y (defined implicitly) map to pseudo-random elements of the target set X. We will use the shorthand \H_1 and \H_2 for \H_{\F_1} and \H_{\F_2} respectively.

Construct a Pedersen commitment \P: \R^n → \G by fixing a random vector \vec g ∈ \p{\G_{\setminus \{0\}}}^n and taking the linear combination \P(\vec x) = \sum_i x_i ⋅ g_i. Factor \P into two commitments \P_1: \F_1^n → \G_1 and \P_2: \F_2^n → \G_2.
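The linear homomorphism of the Pedersen commitment can be seen in a toy instantiation. A minimal sketch, using the order-11 subgroup of \Z_{23}^* written multiplicatively and arbitrarily fixed generators; a real instantiation would use a large prime-order elliptic-curve group with randomly sampled generators:

```python
# Toy Pedersen commitment in the order-11 subgroup of Z_23^*.
# Illustrative only: generators are fixed arbitrarily, not sampled securely.
p, q = 23, 11            # p = 2q + 1; quadratic residues form a subgroup of order q
g = [4, 9, 6, 13]        # "random" generators (quadratic residues mod 23)

def cm(xs):
    """Pedersen commitment: product of g_i^{x_i} (group written multiplicatively)."""
    acc = 1
    for gi, xi in zip(g, xs):
        acc = acc * pow(gi, xi % q, p) % p
    return acc

# The commitment is linearly homomorphic: cm(x) * cm(y) = cm(x + y).
x = [1, 2, 3, 4]
y = [5, 6, 7, 8]
lhs = cm(x) * cm(y) % p
rhs = cm([(a + b) % q for a, b in zip(x, y)])
assert lhs == rhs
```

This homomorphism is what lets folded instances carry a single commitment for a linear combination of witnesses.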

Relaxed R1CS

Given a computable partial function f: \R^u × \R^v → \R^v on a ring \R, we want to prove iterated application:

\vec v = f(\vec u_n, … f(\vec u_1, f(\vec u_0, \vec v_0)) …)

Given a computable function f: \R^u → \R^v on a ring \R, we arithmetize it as matrices \mat A, \mat B, \mat C ∈ \R^{m×n} such that

f(\vec u) = \vec v \quad ⇔ \quad \bigop{∃}{\begin{subarray}{l} \vec w ∈ \R^t \\ \end{subarray}} \left\{\begin{aligned} \vec z &= \begin{bmatrix}1 & \vec u & \vec v & \vec w\end{bmatrix}\\ \mat C ⋅ \vec z &= \p{\mat A ⋅ \vec z} ⊙ \p{\mat B ⋅ \vec z}\\ \end{aligned}\right.

For convenience we will write \vec x = \begin{bmatrix}\vec u & \vec v\end{bmatrix}. An instance is a tuple (e, w, s, \vec x) ∈ \G×\G×\R×\R^{u+v} such that there exist \vec w ∈ \R^t and \vec e ∈ \R^m that satisfy

\begin{aligned} e &= \P(\vec e) & \vec z &= \begin{bmatrix}s & \vec x & \vec w \end{bmatrix}\\ w &= \P(\vec w) & \p{\mat A ⋅ \vec z} ⊙ \p{\mat B ⋅ \vec z} &= s ⋅ \mat C ⋅ \vec z + \vec e\\ \end{aligned}

Relaxed R1CS instances form an \R-module. The zero instance is (0, 0, 0, \vec 0). The scalar product of (e, w, s, \vec x) and r ∈ \R is (r²⋅e, r⋅w, r⋅s, r⋅\vec x). The sum of (e_1, w_1, s_1, \vec x_1) and (e_2, w_2, s_2, \vec x_2) is (e_1 + e_2 + \P(\vec d), w_1 + w_2, s_1 + s_2, \vec x_1 + \vec x_2) where

\vec d = \p{\mat A ⋅ \vec z_1} ⊙ \p{\mat B ⋅ \vec z_2} + \p{\mat A ⋅ \vec z_2} ⊙ \p{\mat B ⋅ \vec z_1} - \mat C ⋅ \p{s_1 ⋅ \vec z_2 + s_2 ⋅\vec z_1}

Instances of the form (0, w, 1, \vec x) correspond directly to regular R1CS instances.
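The fold of two satisfying instances can be checked concretely. A minimal sketch at the vector level (commitments omitted), over the illustrative field \F_{101}, with a hypothetical R1CS for y = x³ + x; `scale` and `add` follow the module structure above:

```python
# Relaxed R1CS over the toy field F_101, instances at the vector level
# (commitments omitted).  An instance (e, s, z) satisfies
#   (A z) ⊙ (B z) = s · (C z) + e.
p = 101

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) % p for row in M]

def had(a, b):  # Hadamard (element-wise) product
    return [(x * y) % p for x, y in zip(a, b)]

def satisfied(A, B, C, e, s, z):
    lhs = had(matvec(A, z), matvec(B, z))
    rhs = [(s * c + ei) % p for c, ei in zip(matvec(C, z), e)]
    return lhs == rhs

def scale(inst, r):
    e, s, z = inst
    return ([r * r * ei % p for ei in e], r * s % p, [r * zi % p for zi in z])

def add(A, B, C, i1, i2):
    (e1, s1, z1), (e2, s2, z2) = i1, i2
    # cross term d = A z1 ⊙ B z2 + A z2 ⊙ B z1 − C (s1 z2 + s2 z1)
    cross = matvec(C, [(s1 * a + s2 * b) % p for a, b in zip(z2, z1)])
    d = [(x + y - c) % p for x, y, c in zip(
        had(matvec(A, z1), matvec(B, z2)),
        had(matvec(A, z2), matvec(B, z1)),
        cross)]
    e = [(a + b + di) % p for a, b, di in zip(e1, e2, d)]
    return (e, (s1 + s2) % p, [(a + b) % p for a, b in zip(z1, z2)])

# Hypothetical R1CS for y = x^3 + x with z = [1, x, y, v1, v2]:
A = [[0,1,0,0,0],[0,0,0,1,0],[0,1,0,0,1]]
B = [[0,1,0,0,0],[0,1,0,0,0],[1,0,0,0,0]]
C = [[0,0,0,1,0],[0,0,0,0,1],[0,0,1,0,0]]

i1 = ([0,0,0], 1, [1, 3, 30, 9, 27])   # strict instance, x = 3
i2 = ([0,0,0], 1, [1, 4, 68, 16, 64])  # strict instance, x = 4
assert satisfied(A, B, C, *i1) and satisfied(A, B, C, *i2)

folded = add(A, B, C, i1, scale(i2, 7))  # fold with challenge r = 7
assert satisfied(A, B, C, *folded)
```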

To update the prover state, maintain the tuple

\p{\vec a, \vec b, \vec c, s, \vec e} = \p{\mat A ⋅ \vec z,\ \mat B ⋅ \vec z,\ \mat C ⋅ \vec z,\ s,\ \p{\mat A ⋅ \vec z} ⊙ \p{\mat B ⋅ \vec z} - s⋅\mat C ⋅ \vec z}

The scalar product with r ∈ \R becomes

\p{r⋅\vec a, r⋅\vec b, r⋅\vec c, r⋅s, r²⋅\vec e}

and the sum becomes

\p{\vec a_1 + \vec a_2,\ \vec b_1 + \vec b_2,\ \vec c_1 + \vec c_2,\ s_1 + s_2,\ \vec e_1 + \vec e_2 + \vec d}

\vec d = \vec a_1 ⊙ \vec b_2 + \vec a_2 ⊙ \vec b_1 - s_1 ⋅ \vec c_2 - s_2 ⋅\vec c_1

The important thing is that linear combinations can be computed element-wise.

We can also compute \vec e by recovering it from the added instance as \vec e = \vec a ⊙ \vec b - s ⋅ \vec c. This may be more efficient?

Note. An instance can be generated for any \vec x by solving for \vec e. Only when s ≠ 0 and \vec e = \vec 0 does the instance prove an f(\vec u) = \vec v claim, so we are interested in the subspace spanned by those instances.

Note. For every R1CS (\mat A, \mat B, \mat C) we can construct an equivalent R1CS (\mat A', \mat A', \mat C'), with at most twice the number of constraints, in which both Hadamard factors use the same matrix. The verification relation then becomes \p{\mat A ⋅ \vec z}^{⊙2} = s⋅\mat C ⋅ \vec z + \vec e. If we add two instances we get

\vec d = 2⋅\p{\mat A ⋅ \vec z_1}⊙\p{\mat A ⋅ \vec z_2} - \mat C ⋅ \p{s_1 ⋅ \vec z_2 + s_2 ⋅ \vec z_1}

The main advantage is one fewer state vector to carry around: \vec e can be recovered as \vec e = \vec a^{⊙2} - s ⋅ \vec c.
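A sketch of the squared variant over the illustrative field \F_{101}, using a circuit that already has \mat B = \mat A (a single squaring constraint y = x²); both the satisfaction relation and the simplified cross term can be checked directly:

```python
# Squared variant (B = A): satisfaction is (A z)^{⊙2} = s · (C z) + e,
# and the cross term is d = 2 (A z1) ⊙ (A z2) − C (s1 z2 + s2 z1).
p = 101

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) % p for row in M]

def satisfied(A, C, e, s, z):
    a = matvec(A, z)
    return [ai * ai % p for ai in a] == \
           [(s * c + ei) % p for c, ei in zip(matvec(C, z), e)]

# Squaring circuit y = x^2 with z = [1, x, y] (already has B = A):
A = [[0, 1, 0]]
C = [[0, 0, 1]]

(e1, s1, z1) = ([0], 1, [1, 3, 9])    # strict instance, x = 3
(e2, s2, z2) = ([0], 1, [1, 5, 25])   # strict instance, x = 5
assert satisfied(A, C, e1, s1, z1) and satisfied(A, C, e2, s2, z2)

d = [(2 * a1 * a2 - c) % p for a1, a2, c in zip(
    matvec(A, z1), matvec(A, z2),
    matvec(C, [(s1 * a + s2 * b) % p for a, b in zip(z2, z1)]))]
e = [(a + b + di) % p for a, b, di in zip(e1, e2, d)]
s = (s1 + s2) % p
z = [(a + b) % p for a, b in zip(z1, z2)]
assert satisfied(A, C, e, s, z)
```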

Customizable Constraint System (HyperNova)

HyperNova folds Customizable Constraint Systems, which generalize R1CS to a sum of Hadamard products over a set \set S of multisets of matrices:

\sum_{\set T ∈ \set S}\ \bigodot_{\mat M ∈ \set T}\ \mat M ⋅ \vec w = \vec 0
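A minimal check that R1CS embeds into this form: with \set S = \{\{\mat A, \mat B\}, \{-\mat C\}\} the sum of Hadamard products is \p{\mat A \vec w} ⊙ \p{\mat B \vec w} - \mat C \vec w = \vec 0. (The full CCS relation also carries a scalar coefficient per term; here they are absorbed into the matrices. The field \F_{101} and the circuit y = x³ + x are illustrative choices.)

```python
# Simplified CCS relation: sum over multisets T of Hadamard products of M·w.
p = 101

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) % p for row in M]

def ccs_satisfied(S, w):
    m = len(S[0][0])        # number of constraint rows
    total = [0] * m
    for T in S:
        prod = [1] * m      # Hadamard product over the multiset T
        for M in T:
            prod = [(a * b) % p for a, b in zip(prod, matvec(M, w))]
        total = [(a + b) % p for a, b in zip(total, prod)]
    return all(t == 0 for t in total)

# R1CS for y = x^3 + x (w = [1, x, y, v1, v2]) as a CCS with S = {{A,B},{-C}}:
A = [[0,1,0,0,0],[0,0,0,1,0],[0,1,0,0,1]]
B = [[0,1,0,0,0],[0,1,0,0,0],[1,0,0,0,0]]
negC = [[(-c) % p for c in row] for row in
        [[0,0,0,1,0],[0,0,0,0,1],[0,0,1,0,0]]]

assert ccs_satisfied([[A, B], [negC]], [1, 3, 30, 9, 27])
```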

ProtoStar

Generic constraints: test f(\vec w) = \vec 0 for a polynomial map f. R1CS translates as

f_i(\vec w) = (\mat A ⋅ \vec w)_i ⋅ (\mat B ⋅ \vec w)_i - (\mat C ⋅ \vec w)_i

An instance (w; \vec w) is satisfying iff w = \P(\vec w) and f(\vec w) = \vec 0.

Given two instances, interpolate

ω(X) = X⋅\vec w_1 + (1 - X)⋅ \vec w_2

Write \vec e_1 = f(ω(1)) = f(\vec w_1) and \vec e_2 = f(ω(0)) = f(\vec w_2); for the initial instances both are \vec 0.

f(ω(X)) = X⋅\vec e_1 + (1 - X)⋅\vec e_2 + \mathrm{Z}_{{0,1}}(X)⋅\vec q(X)

where \mathrm{Z}_{{0,1}}(X) = X^2-X is the polynomial that is zero on \{0,1\}.

Evaluate at a random point r and take the error term \vec e = f(ω(r)) = \mathrm{Z}_{{0,1}}(r)⋅\vec q(r).
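Concretely, for the R1CS map f(\vec w)_i = (\mat A \vec w)_i(\mat B \vec w)_i - (\mat C \vec w)_i each coordinate of f(ω(X)) is a quadratic in X that vanishes on \{0,1\}, so \vec q is constant per coordinate. A sketch over the illustrative field \F_{101} with the hypothetical circuit y = x³ + x:

```python
# f(ω(X)) for the R1CS map f(w)_i = (A w)_i (B w)_i − (C w)_i is quadratic
# in X, vanishes on {0, 1}, and so equals Z(X)·q with Z(X) = X² − X.
p = 101

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) % p for row in M]

def f(A, B, C, w):
    return [(a * b - c) % p for a, b, c in
            zip(matvec(A, w), matvec(B, w), matvec(C, w))]

def omega(w1, w2, x):  # ω(X) = X·w1 + (1 − X)·w2, evaluated at x
    return [(x * a + (1 - x) * b) % p for a, b in zip(w1, w2)]

# y = x^3 + x with w = [1, x, y, v1, v2] (illustrative circuit):
A = [[0,1,0,0,0],[0,0,0,1,0],[0,1,0,0,1]]
B = [[0,1,0,0,0],[0,1,0,0,0],[1,0,0,0,0]]
C = [[0,0,0,1,0],[0,0,0,0,1],[0,0,1,0,0]]
w1 = [1, 3, 30, 9, 27]
w2 = [1, 4, 68, 16, 64]

assert f(A, B, C, omega(w1, w2, 1)) == [0, 0, 0]  # ω(1) = w1
assert f(A, B, C, omega(w1, w2, 0)) == [0, 0, 0]  # ω(0) = w2

# Recover the (constant, per-coordinate) quotient q from one extra
# evaluation at X = 2, where Z(2) = 2² − 2 = 2:
q = [fi * pow(2, -1, p) % p for fi in f(A, B, C, omega(w1, w2, 2))]

r = 7  # "random" verifier challenge
Z = (r * r - r) % p
assert f(A, B, C, omega(w1, w2, r)) == [Z * qi % p for qi in q]
```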

The new instance (w; \vec e, \vec w) satisfies w = \P(\vec w) and f(\vec w) = \vec e. For two such instances, interpolate

\begin{aligned} ε(X) &= X⋅\vec e_1 + (1 - X) ⋅ \vec e_2 \\ ω(X) &= X⋅\vec w_1 + (1 - X) ⋅ \vec w_2 \end{aligned}

New equation is

f(ω(X)) = X ⋅ \vec e_1 + (1 - X) ⋅ \vec e_2 + \mathrm{Z}_{{0,1}}(X)⋅ \vec q(X)

the new instance is \p{f(ω(r)),\ ω(r)} = \p{ε(r) + \mathrm{Z}_{{0,1}}(r)⋅\vec q(r),\ ω(r)}

Generalizing this, we can pick a set of basis points \mathcal A = \{a_0, …, a_{n-1}\} ⊆ \R and interpolate.

\begin{aligned} ε(X) &= \sum_{i∈[0,n)} \frac{\Z_{\mathcal A\setminus\{a_i\}}(X)}{\Z_{\mathcal A\setminus\{a_i\}}(a_i)} ⋅\vec e_i \\ ω(X) &= \sum_{i∈[0,n)} \frac{\Z_{\mathcal A\setminus\{a_i\}}(X)}{\Z_{\mathcal A\setminus\{a_i\}}(a_i)} ⋅\vec w_i \end{aligned}

\begin{aligned} f(ω(X)) = ε(X) + \Z_{\mathcal A}(X) ⋅ \vec q(X) \end{aligned}
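A sketch of the generalized interpolation with the hypothetical basis set \mathcal A = \{0, 1, 2\} and three strict witnesses (all \vec e_i = \vec 0, so ε ≡ 0), again over the illustrative field \F_{101}: f(ω(X)) vanishes on \mathcal A, and since f is quadratic and ω has degree 2, the quotient \vec q has degree at most one.

```python
# Lagrange interpolation through basis points {0, 1, 2} over F_101.
# With strict witnesses, f(ω(X)) vanishes on the basis set, so it equals
# Z(X)·q(X) where Z is the vanishing polynomial and deg q ≤ 1.
p = 101
basis = [0, 1, 2]

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) % p for row in M]

def f(A, B, C, w):
    return [(a * b - c) % p for a, b, c in
            zip(matvec(A, w), matvec(B, w), matvec(C, w))]

def omega(ws, x):
    """Lagrange-interpolate the witness vectors at the basis points, eval at x."""
    out = [0] * len(ws[0])
    for i, ai in enumerate(basis):
        num, den = 1, 1
        for j, aj in enumerate(basis):
            if i != j:
                num = num * (x - aj) % p
                den = den * (ai - aj) % p
        li = num * pow(den, -1, p) % p   # i-th Lagrange basis polynomial at x
        out = [(o + li * wi) % p for o, wi in zip(out, ws[i])]
    return out

def Z(x):  # vanishing polynomial of the basis set
    r = 1
    for a in basis:
        r = r * (x - a) % p
    return r

# Three witnesses of the illustrative y = x^3 + x circuit (w = [1, x, y, v1, v2]):
A = [[0,1,0,0,0],[0,0,0,1,0],[0,1,0,0,1]]
B = [[0,1,0,0,0],[0,1,0,0,0],[1,0,0,0,0]]
C = [[0,0,0,1,0],[0,0,0,0,1],[0,0,1,0,0]]
ws = [[1, 3, 30, 9, 27], [1, 4, 68, 16, 64], [1, 5, 29, 25, 24]]

# ω passes through the witnesses, so f(ω) vanishes on the basis set:
for a in basis:
    assert f(A, B, C, omega(ws, a)) == [0, 0, 0]

# q(X) = f(ω(X)) / Z(X) has degree ≤ 1: its values at 3, 4, 5 are collinear.
q = {x: [fi * pow(Z(x), -1, p) % p for fi in f(A, B, C, omega(ws, x))]
     for x in (3, 4, 5)}
assert q[5] == [(2 * b - a) % p for a, b in zip(q[3], q[4])]
```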

Nova

At each step the circuit produces commitments to the witness and the cross term.

\begin{aligned} w &= \P(\vec w) & d &= \P(\vec d) \end{aligned}


SuperNova generalizes this to allow multiple functions f_i, where the output of the previous step selects which function to apply next.


Remco Bloemen
Math & Engineering
https://2π.com