BN254 Point Compression for L2s

\gdef\delim#1#2#3{\mathopen{}\mathclose{\left#1 #2 \right#3}} \gdef\p#1{\delim({#1})} \gdef\set#1{\delim\{{#1}\}} \gdef\floor#1{\delim\lfloor{#1}\rfloor} \gdef\mod#1{\delim[{#1}]} \gdef\Z{\mathbb{Z}} \gdef\F{\mathbb{F}} \gdef\G{\mathbb{G}} \gdef\i{\mathrm{i}} \gdef\op#1{\small \mathsf{#1}}

A Groth16 proof consists of two \G_1 points and one \G_2 point. On the BN254 pairing curve these take 64 and 128 bytes respectively when uncompressed, totaling 256 bytes for a proof. With compression the points take only 32 and 64 bytes respectively, halving the size of a proof.

On Ethereum calldata costs 16 gas per byte, so any compression needs to cost less gas than the calldata it saves. This gives a budget of 512 and 1024 gas for \G_1 and \G_2 point compression respectively, which will be hard to achieve. On Optimism the cost of calldata is higher by a factor

\frac{\mathtt{l1\_gas\_price}}{\mathtt{l2\_gas\_price}}⋅\mathtt{dynamic\_overhead}

where the dynamic_overhead is set to 0.684 and the gas price ratio fluctuates wildly, ranging between 50 and 20\,000 over the last half year. Taking the lower bound, this puts the compression budget at roughly 550 gas per byte, or 17k gas per \G_1 point and double that for \G_2. This seems doable, let's do it!
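As a quick back-of-the-envelope check of that budget, here is a small Python sketch; the ratio of 50 and the other parameters are the assumptions quoted above:

```python
# Rough calldata-compression budget on Optimism, using the lower-bound
# gas price ratio of 50 and the parameters quoted above.
l1_l2_gas_price_ratio = 50
dynamic_overhead = 0.684
calldata_gas_per_byte = 16

budget_per_byte = l1_l2_gas_price_ratio * dynamic_overhead * calldata_gas_per_byte
print(budget_per_byte)       # ~547 gas per byte saved
print(budget_per_byte * 32)  # ~17.5k gas budget for a compressed G1 point
print(budget_per_byte * 64)  # ~35k gas budget for a compressed G2 point
```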

The field \F_1

The base field \F_1 = \Z/p\Z consists of the integers modulo p, where

\begin{aligned} p &= 36⋅t^4 + 36⋅t^3 + 24⋅t^2 + 6⋅t + 1 &\hspace{2em} t &= 4965661367192848881 \end{aligned}

Addition and multiplication are trivially implemented using the EVM's addmod and mulmod operations at 8 gas each. \floor{\frac{2^{256}-1}{p-1}} = 5, so we can add up to five reduced elements using a plain add without overflowing. Negation can be done as p-n, but the result will be unreduced, in the range [1,p]. \floor{\frac{2^{256}-1}{p}} = 5, so we can also add up to five unreduced negations without overflowing.
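As a sanity check of these bounds, here is a minimal Python sketch of the field (the helper names are ours, not those of Verifier.sol); the later snippets build on it:

```python
# BN254 base field parameters and the overflow bounds discussed above.
t = 4965661367192848881
p = 36*t**4 + 36*t**3 + 24*t**2 + 6*t + 1

assert (2**256 - 1) // (p - 1) == 5  # five reduced elements fit in one 256-bit add
assert (2**256 - 1) // p == 5        # five unreduced negations fit as well

def add(a, b): return (a + b) % p    # EVM: addmod(a, b, p), 8 gas
def mul(a, b): return (a * b) % p    # EVM: mulmod(a, b, p), 8 gas
def neg(a):    return p - a          # unreduced result in [1, p]
```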

Exponentiation can be done using repeated multiplication. Efficient strategies for specific exponents can be found using addchain, so the exact cost depends on the exponent. Exponentiation can also be done using the modexp precompile, which for \F_1 costs 200 to 1349 gas, depending on the exponent.

Inversion in \F_1

By Fermat's little theorem, in \F_1 we have a = a^p. Dividing both sides by a^2 we get

a^{-1} = a^{p-2}

Computing this takes 247⋅\op{Square}_{\F_1} + 56⋅\op{Mul}_{\F_1} using an addition chain generated with addchain. Implemented using mulmod this would take at least 2\,424 gas. The modexp precompile costs 1349 gas.
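Continuing the Python sketch above, Fermat inversion is a single modular exponentiation; on-chain this maps to one call to the modexp precompile at address 0x05:

```python
def inv(a):
    # Fermat inversion: a^(p-2) mod p. Undefined for a ≡ 0.
    assert a % p != 0
    return pow(a, p - 2, p)

assert mul(inv(42), 42) == 1
```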

To do. Inversion using half-extended GCD?

Square roots in \F_1

By Fermat's little theorem, in \F_1 we have a^p = a. Taking the square root of both sides would give \sqrt{a} = a^{\frac{p}{2}}, but \frac{p}{2} is not an integer so this does not lead to a usable method. Instead, \frac{p-1}{2} is an integer: dividing both sides by a we get a^{p-1}=1, and taking the square root on both sides gives a^{\frac{p-1}{2}} = \pm 1. We also have \mod{p}_4 = 3, so \frac{p+1}{4} is an integer. Observe

\p{\pm a^{\frac{p+1}{4}}}^2 = a^{\frac{p+1}{2}} = a^{\frac{p-1}{2}}⋅a = \pm a

So if a^{\frac{p-1}{2}} = 1 we have \sqrt{a} = \pm a^{\frac{p+1}{4}}. It can be shown that if a^{\frac{p-1}{2}} = -1 the square root does not exist, so this method covers all square roots. In fact, only half of the nonzero elements in \F_1 have a square root, and if it exists for a it is absent for -a and vice versa.

The exponentiation by \frac{p+1}{4} takes 246⋅\op{Square}_{\F_1} + 54⋅\op{Mul}_{\F_1} using an addition chain generated with addchain, which is at least 2\,400 gas using mulmod. Using modexp this exponent costs 1338 gas.
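In the same Python sketch, the square root is one exponentiation plus a check that the candidate actually squares back to a:

```python
def sqrt_f1(a):
    # Valid because p ≡ 3 (mod 4). Returns None when a is a non-residue.
    a %= p
    r = pow(a, (p + 1) // 4, p)
    return r if mul(r, r) == a else None

assert sqrt_f1(4) in (2, p - 2)
```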

The field \F_2

Construct the extension field \F_2 = \F_1[\i]/(\i^2+1). That is, each element is written x_0 + x_1 ⋅ \i with \i^2 = -1, much like the complex numbers. In fact, we get all of our complex arithmetic identities:

\begin{aligned} \p{a + b⋅\i} + \p{c + d⋅\i} &= \p{a + c} + \p{b + d}⋅\i \\ \p{a + b⋅\i} - \p{c + d⋅\i} &= \p{a - c} + \p{b - d}⋅\i \\ \p{a + b⋅\i} ⋅ \p{c + d⋅\i} &= \p{a⋅c - b⋅d} + \p{a⋅d+b⋅c}⋅\i \\ \p{a + b⋅\i}^2 &= \p{a^2 - b^2} + \p{2⋅a⋅b}⋅\i \\ \p{a + b⋅\i}^{-1} &= \p{\frac{a}{a^2 + b^2}} - \p{\frac{b}{a^2 + b^2}}⋅\i \\ \end{aligned}

Multiplication as above requires 2⋅\op{Add}_{\F_1} + 4⋅\op{Mul}_{\F_1}. It can be done with fewer multiplications using Karatsuba's method: take u = a⋅c, v = b⋅d and s=\p{a+b}⋅\p{c+d}, then the result is \p{u-v}+\p{s-u-v}⋅\i. This requires 5⋅\op{Add}_{\F_1} + 3⋅\op{Mul}_{\F_1}. Since addmod and mulmod cost the same in the EVM, this is not a worthwhile trade-off. (To do. Some of these could be changed to cheaper unreduced adds.)

Squaring as above requires \op{Add}_{\F_1} + 2⋅\op{Square}_{\F_1} + \op{Mul}_{\F_1} + \op{Double}_{\F_1}. Factoring a^2 - b^2 as \p{a+b}⋅\p{a-b} we can exchange one \op{Square}_{\F_1} for an \op{Add}_{\F_1}.
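A small Python sketch of this \F_2 arithmetic on top of the \F_1 helpers above, representing a + b⋅\i as the pair (a, b):

```python
# F_2 elements as pairs (a, b) meaning a + b*i, with i^2 = -1.
def f2_add(x, y):
    return add(x[0], y[0]), add(x[1], y[1])

def f2_mul(x, y):
    a, b = x; c, d = y
    return (mul(a, c) - mul(b, d)) % p, add(mul(a, d), mul(b, c))

def f2_square(x):
    a, b = x
    return mul(add(a, b), (a - b) % p), mul(add(a, a), b)   # (a+b)(a-b), 2ab

def f2_inv(x):
    a, b = x
    d = inv(add(mul(a, a), mul(b, b)))
    return mul(a, d), (p - mul(b, d)) % p

assert f2_mul((0, 1), (0, 1)) == (p - 1, 0)   # i * i == -1
```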

Square roots in \F_2

Square root is the hard part. We can again take inspiration from Gnark and see that it uses algorithm 9 from ARH12. This amounts to computing

\begin{aligned} α &= a^{\frac{q-1}{2}} &\hspace{2em} \sqrt{a} &= \pm a^{\frac{q+1}{4}} ⋅ \begin{cases} \i & α = -1 \\ \p{1 + α}^{\frac{q-1}{2}} & α ≠ -1 \\ \end{cases} \end{aligned}

This is a lot of exponentiation in \F_2, for which we cannot use the affordable precompile. So instead we consider a more traditional approach, algorithm 8 from ARH12. Given x+y⋅\i with x,y∈\F_1 we are looking for a,b∈\F_1 such that:

\begin{aligned} x + y⋅\i &= \p{a + b⋅\i}^2 \\ &= \p{a^2 - b^2} + \p{2⋅a⋅b}⋅\i \\ \end{aligned}

which gives the system of quadratic equations

\begin{aligned} x &= a^2 - b^2 \\ y &= 2⋅a⋅b \\ \end{aligned}

The solution to this is

\begin{aligned} a &= \pm \sqrt{\frac{x\pm \sqrt{x^2 + y^2}}{2}} \\ b &= \frac{y}{2⋅a} \\ \end{aligned}


To see this, take b = \frac{y}{2⋅a}, substitute it back into the first equation, and apply the quadratic formula:

\begin{aligned} x &= a^2 - \p{\frac{y}{2⋅a}}^2 \\ x &= a^2 - \frac{y^2}{4⋅a^2} \\ 4⋅a^2⋅x &= 4⋅a^4 - y^2 \\ 0 &= \p{-y^2} - 4⋅x ⋅\p{a^2}+ 4⋅\p{a^2}^2 \\ a^2 &= \frac{-\p{- 4⋅x}\pm \sqrt{\p{- 4⋅x}^2-4⋅\p{4}⋅\p{-y^2}}}{8} \\ a^2 &= \frac{4⋅x \pm \sqrt{16⋅x^2+16⋅y^2}}{8} \\ a^2 &= \frac{x\pm \sqrt{x^2 + y^2}}{2} \\ a &= \pm \sqrt{\frac{x\pm \sqrt{x^2 + y^2}}{2}} \\ \end{aligned}

The inner \pm needs to have a specific value: only one choice will result in a square for the outer square root. There does not seem to be an efficient method of computing this sign other than trying both values. Since trying is expensive (it involves a modexp), it is better to provide it as a hint. The outer \pm reflects the two possible solutions of the quadratic equation.

Computing the square root requires two \F_1 square roots and one \F_1 inversion, which can be done using three calls to modexp costing 4\,025 gas total.
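A Python sketch of this approach, reusing the helpers above. The hint_sign argument models the hinted inner \pm; on-chain the hint would be supplied by the prover and the result checked by squaring. Edge cases such as y = 0 or a = 0 are not handled here:

```python
def sqrt_f2(x, y, hint_sign=1):
    # Solve (a + b*i)^2 = x + y*i with two F_1 square roots and one inversion.
    d = sqrt_f1(add(mul(x, x), mul(y, y)))                       # sqrt(x^2 + y^2)
    if d is None:
        return None
    a2 = mul(add(x, d if hint_sign == 1 else p - d), inv(2))     # inv(2) is a constant
    a = sqrt_f1(a2)
    if a is None:
        return None                                              # wrong hint, or no root
    b = mul(y, inv(add(a, a)))                                   # the one runtime inversion
    return a, b

# Round trip: either choice of hint recovers a square root of (3 + 5i)^2.
s = f2_square((3, 5))
r = sqrt_f2(*s) or sqrt_f2(*s, hint_sign=-1)
assert f2_square(r) == s
```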

Compressing \G_1 points

The \G_1 elliptic curve consists of the points (x, y)∈\F_1^2 satisfying the short Weierstrass equation

y^2 = x^3 + 3

One such solution is (1, 2), which is the chosen generator for this group.

Given an x coordinate we can recover the y coordinate by taking the square root of x^3+3. This has either two solutions y, -y, or none. A single bit suffices to distinguish the two solutions: when fully reduced, exactly one of them is >\frac{p}{2} and exactly one of them is odd.
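A sketch of \G_1 compression using the parity-of-y convention and the helpers above (the actual bit packing in Verifier.sol may differ, and the point at infinity is ignored here):

```python
def compress_g1(x, y):
    # A 32-byte x coordinate plus one bit for the parity of y.
    return x, y & 1

def decompress_g1(x, parity):
    y = sqrt_f1(add(mul(mul(x, x), x), 3))   # sqrt(x^3 + 3)
    if y is None:
        raise ValueError("x is not the x coordinate of a curve point")
    return x, y if y & 1 == parity else p - y

assert decompress_g1(*compress_g1(1, 2)) == (1, 2)   # the generator round-trips
```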

Compressing \G_2 points

The \G_2 elliptic curve consists of the points (x, y)∈\F_2^2 satisfying the short Weierstrass equation

y^2 = x^3 + \frac{3}{9+\i}

where \frac{3}{9+\i} = \frac{27}{82} - \frac{3}{82} ⋅ \i.

Given an x coordinate we can recover y the same as before, but now with math happening in \F_2. Given x = a + b ⋅ \i:

\begin{aligned} x^3 &= \p{a + b ⋅ \i}^3 \\ &= a^3 + 3 ⋅ a^2 ⋅ b⋅\i + 3⋅ a ⋅ b^2 ⋅\i^2 + b^3 ⋅\i^3 \\ &= \p{a^3 - 3⋅ a ⋅ b^2} + \p{3 ⋅ a^2 ⋅ b - b^3}⋅\i \\ x^3 + \frac{3}{9+\i} &= \p{\frac{27}{82}+a^3 - 3⋅ a ⋅ b^2} - \p{\frac{3}{82} + b^3 - 3 ⋅ a^2 ⋅ b}⋅\i \\ \end{aligned}

The y coordinate is a square root of this value. As before, -y is also a solution.
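Continuing the sketch for \G_2: compute the twist constant 3/(9+\i) and recover a candidate y from x. The parity and hint conventions here are our assumptions, not necessarily what Verifier.sol does:

```python
# Twist constant b' = 3 / (9 + i) as an F_2 element.
B2 = f2_mul((3, 0), f2_inv((9, 1)))
assert B2 == (mul(27, inv(82)), neg(mul(3, inv(82))))   # 27/82 - 3/82 * i

def g2_rhs(x):
    # x^3 + 3/(9+i): the value whose square roots are y and -y.
    return f2_add(f2_mul(f2_square(x), x), B2)

def decompress_g2_y(x, parity, hint_sign=1):
    r = sqrt_f2(*g2_rhs(x), hint_sign=hint_sign)
    if r is None:
        raise ValueError("wrong hint, or x is not on the twist")
    a, b = r
    # Choose between y and -y by the parity of the real part (a convention).
    return (a, b) if a & 1 == parity else (neg(a) % p, neg(b) % p)
```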

Compressing two points

In Ote15 an efficient method for storing two points on the same curve y^2=x^3 + a⋅x + b is presented (as mentioned by Zac Williamson). Given two points (x_1, y_1), (x_2, y_2) on the curve, compute

\begin{aligned} A &= \frac{x_1 - x_2}{y_1 - y_2} &\hspace{2em} B &= y_1 - y_2 &\hspace{2em} C &= x_2 \end{aligned}

Then we can recover the two points by

\begin{aligned} x_1 &= A⋅B + C &\hspace{1em} x_2 &= C &\hspace{1em} y_1 &= \frac{A⋅\p{\p{x_1 + x_2}^2 - x_1⋅x_2 + a} + B}{2} &\hspace{1em} y_2 &= y_1 - B \end{aligned}

For the special case y_1 = y_2 we can instead store (x_1, x_2, y_1). An extra bit is required to distinguish these cases.
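A sketch of this two-point trick for \G_1 (so a = 0, b = 3), following the formulas above; the y_1 = y_2 special case is left out:

```python
def compress_pair(p1, p2):
    (x1, y1), (x2, y2) = p1, p2
    assert y1 != y2                              # that case gets the extra bit
    A = mul((x1 - x2) % p, inv((y1 - y2) % p))
    B = (y1 - y2) % p
    return A, B, x2                              # (A, B, C)

def decompress_pair(A, B, C, a=0):
    x2 = C
    x1 = add(mul(A, B), C)
    s = (mul(add(x1, x2), add(x1, x2)) - mul(x1, x2) + a) % p   # (x1+x2)^2 - x1*x2 + a
    y1 = mul(add(mul(A, s), B), inv(2))
    y2 = (y1 - B) % p
    return (x1, y1), (x2, y2)

# Round trip with the generator and another curve point found by brute force.
P1 = (1, 2)
x = 2
while sqrt_f1(add(mul(mul(x, x), x), 3)) is None:
    x += 1
P2 = (x, sqrt_f1(add(mul(mul(x, x), x), 3)))
assert decompress_pair(*compress_pair(P1, P2)) == (P1, P2)
```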

Implementation and benchmark

An implementation of the above \G_1 and \G_2 compression is done in Solidity in Verifier.sol.

These numbers are close enough that we can consider doing both \G_1 and \G_2 compression. For a Groth16 verification this adds 11\,366 gas of overhead, so it becomes worthwhile above roughly 89 gas per byte of calldata. This is not economical on Ethereum, but it is on Optimism.

Safety

This compression method takes in existing proofs, so zero-knowledge is unaffected. After decompression it calls the original verifier, so soundness is unaffected. This leaves completeness, which is only preserved if the method works for all valid proofs. This can be accomplished by making sure the method works on all points in \G_1 and \G_2, taking care to handle zero values and the point at infinity correctly. (Although it is at best extremely unlikely a valid proof will contain those.)

Appendix: Abusing ECADD?

Given a,b,c,d ∈ \F_1 such that (a, b),(c,d) ∈ \G_1, the ecadd precompile returns, for 150 gas,

\begin{aligned} x &= \p{\frac{d-b}{c-a}}^2-a-c \\ y &= \p{2⋅a + c}⋅\p{\frac{d-b}{c-a}}- \p{\frac{d-b}{c-a}}^3 - b \\ \end{aligned}

if (a, b)≠(c,d), or else

\begin{aligned} x &= \p{\frac{3⋅a^2}{2⋅b}}^2-2⋅a \\ y &= \p{3⋅a}⋅\p{\frac{3⋅a^2}{2⋅b}}-\p{\frac{3⋅a^2}{2⋅b}}^3-b \\ \end{aligned}

This contains an inversion much cheaper than we can compute ourselves, but it seems hard to extract. The first case could maybe be abused to compute \frac{1}{a^2}, but turning this into a problem with valid \G_1 points is going to be challenging.

References

https://eprint.iacr.org/2020/010.pdf

Remco Bloemen
Math & Engineering
https://2π.com