How to write any function in Solidity
Rational approximations
We want to approximate a function f by a ratio of two polynomials that agrees with f at a set of sample points x_i:
\frac{P(x_i)}{Q(x_i)} = f(x_i)
Here P and Q are polynomials
P(x) = p_0 + p_1 x + p_2 x^2 + \dots
Q(x) = 1 + q_1 x + q_2 x^2 + \dots
where Q is normalized to have constant term q_0 = 1, since scaling P and Q by a common factor leaves the ratio unchanged.
Multiplying both sides by Q(x_i) makes the conditions linear in the unknown coefficients:
P(x_i) = Q(x_i) f(x_i)
In matrix form, with \odot denoting the element-wise product:
\begin{bmatrix} 1 & x_0 & x_0^2 & \cdots \\ 1 & x_1 & x_1^2 & \cdots \\ 1 & x_2 & x_2^2 & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{bmatrix} \begin{bmatrix} p_0 \\ p_1 \\ p_2 \\ \vdots \end{bmatrix} = \begin{bmatrix} 1 & x_0 & x_0^2 & \cdots \\ 1 & x_1 & x_1^2 & \cdots \\ 1 & x_2 & x_2^2 & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{bmatrix} \begin{bmatrix} 1 \\ q_1 \\ q_2 \\ \vdots \end{bmatrix} \odot \begin{bmatrix} f(x_0) \\ f(x_1) \\ f(x_2) \\ \vdots \end{bmatrix}
Folding the f(x_i) factors into the matrix on the right:
\begin{bmatrix} 1 & x_0 & x_0^2 & \cdots \\ 1 & x_1 & x_1^2 & \cdots \\ 1 & x_2 & x_2^2 & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{bmatrix} \begin{bmatrix} p_0 \\ p_1 \\ p_2 \\ \vdots \end{bmatrix} = \begin{bmatrix} f(x_0) & f(x_0) x_0 & f(x_0) x_0^2 & \cdots \\ f(x_1) & f(x_1)x_1 & f(x_1)x_1^2 & \cdots \\ f(x_2) & f(x_2)x_2 & f(x_2)x_2^2 & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{bmatrix} \begin{bmatrix} 1 \\ q_1 \\ q_2 \\ \vdots \end{bmatrix}
The first column of the right-hand matrix multiplies the known q_0 = 1, so we can split it off as a constant vector:
\begin{bmatrix} 1 & x_0 & x_0^2 & \cdots \\ 1 & x_1 & x_1^2 & \cdots \\ 1 & x_2 & x_2^2 & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{bmatrix} \begin{bmatrix} p_0 \\ p_1 \\ p_2 \\ \vdots \end{bmatrix} = \begin{bmatrix} f(x_0) x_0 & f(x_0) x_0^2 & \cdots \\ f(x_1)x_1 & f(x_1)x_1^2 & \cdots \\ f(x_2)x_2 & f(x_2)x_2^2 & \cdots \\ \vdots & \vdots & \ddots \end{bmatrix} \begin{bmatrix} q_1 \\ q_2 \\ \vdots \end{bmatrix} + \begin{bmatrix} f(x_0) \\ f(x_1) \\ f(x_2) \\ \vdots \end{bmatrix}
Moving the remaining q terms to the left finally gives a single linear system in all the unknown coefficients:
\begin{bmatrix} 1 & x_0 & x_0^2 & \cdots & -f(x_0)x_0 & -f(x_0)x_0^2 & \cdots \\ 1 & x_1 & x_1^2 & \cdots & -f(x_1)x_1 & -f(x_1)x_1^2 & \cdots \\ 1 & x_2 & x_2^2 & \cdots & -f(x_2)x_2 & -f(x_2)x_2^2 & \cdots \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \ddots \end{bmatrix} \begin{bmatrix} p_0 \\ p_1 \\ p_2 \\ \vdots \\ q_1 \\ q_2 \\ \vdots \end{bmatrix} = \begin{bmatrix} f(x_0) \\ f(x_1) \\ f(x_2) \\ \vdots \end{bmatrix}
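With n sample points and num_p + num_q = n unknowns, the system is square and can be solved directly. As a sketch of how this looks in practice (the helper fit_rational and the exp example are illustrative choices, and in a Solidity workflow this fitting runs off-chain at development time):

import numpy as np

def fit_rational(f, xs, num_p, num_q):
    # Solve the linear system above: columns 1, x, x^2, ... for p_0..p_{num_p-1}
    # and columns -f(x) x, -f(x) x^2, ... for q_1..q_{num_q}.
    xs = np.asarray(xs, dtype=float)
    fx = f(xs)
    P_cols = np.vander(xs, num_p, increasing=True)
    Q_cols = -fx[:, None] * np.vander(xs, num_q + 1, increasing=True)[:, 1:]
    sol = np.linalg.solve(np.hstack([P_cols, Q_cols]), fx)
    return sol[:num_p], np.concatenate(([1.0], sol[num_p:]))

# Example: interpolate exp on [-1, 1] with cubic P and quadratic Q.
xs = np.linspace(-1.0, 1.0, 7)  # 4 + 3 = 7 unknowns, so 7 points
p, q = fit_rational(np.exp, xs, 4, 3)

With the coefficients in hand, evaluating P/Q on-chain takes two Horner loops and a single division.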
Remez algorithm
By the equioscillation theorem we know that when P and Q are optimal, the error oscillates between \pm E. Given the points of extremal error, x_i, this means
\frac{P(x_i)}{Q(x_i)} - f(x_i) = (-1)^i E
This equation is the basis of the Remez algorithm: we iteratively solve it, and use the result to find better approximations of the extremal points x_i.
Adding E as an extra unknown and multiplying through by Q(x_i) as before, with the E \cdot Q(x_i) term approximated by E, gives the linear system
\begin{bmatrix} 1 & x_0 & x_0^2 & \cdots & -f(x_0)x_0 & -f(x_0)x_0^2 & \cdots & -1 \\ 1 & x_1 & x_1^2 & \cdots & -f(x_1)x_1 & -f(x_1)x_1^2 & \cdots & +1 \\ 1 & x_2 & x_2^2 & \cdots & -f(x_2)x_2 & -f(x_2)x_2^2 & \cdots & -1 \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \ddots & \vdots \end{bmatrix} \begin{bmatrix} p_0 \\ p_1 \\ p_2 \\ \vdots \\ q_1 \\ q_2 \\ \vdots \\ E \end{bmatrix} = \begin{bmatrix} f(x_0) \\ f(x_1) \\ f(x_2) \\ \vdots \end{bmatrix}
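One such linearized step can be solved like the interpolation system, now with the alternating \pm 1 column appended. A sketch (remez_step is an illustrative name; re-estimating the extremal points between steps, for instance by scanning the error P/Q - f on a dense grid, is left out):

import numpy as np

def remez_step(f, xs, num_p, num_q):
    # xs are the current estimates of the num_p + num_q + 1 extremal points.
    xs = np.asarray(xs, dtype=float)
    fx = f(xs)
    P_cols = np.vander(xs, num_p, increasing=True)
    Q_cols = -fx[:, None] * np.vander(xs, num_q + 1, increasing=True)[:, 1:]
    # Alternating -1, +1, -1, ... column for the levelled error E.
    E_col = -((-1.0) ** np.arange(len(xs)))[:, None]
    sol = np.linalg.solve(np.hstack([P_cols, Q_cols, E_col]), fx)
    return sol[:num_p], np.concatenate(([1.0], sol[num_p:-1])), sol[-1]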
TODO. In the complex domain, the ideal errors seem to be e^{\pi i \frac{k}{n}} E; this suggests that in the real domain we can use \cos\left( \frac{\pi k}{n} \right) E. In a way, this is a generalization of the 'Modified Remez Algorithm' that also adds zero-error points \frac{P(x_i)}{Q(x_i)} - f(x_i) = 0.
Clenshaw algorithm and generalization of basis
We started out with polynomials of the form
P(x) = p_0 + p_1 x + p_2 x^2 + \dots
If \vert x \vert > 1, the x^i terms grow exponentially with i, which can lead to numerical instability. A simple trick to avoid this is to apply a linear map that takes the domain into [-1, 1].
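For a domain [a, b], this is the affine map
x \mapsto \frac{2x - (a + b)}{b - a}
which sends a to -1 and b to 1.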
A more sophisticated way to avoid it is to change our basis polynomials. Say that until now we have used the basis
\phi_i(x) = x^i
such that
P(x) = p_0 \phi_0(x) + p_1 \phi_1(x) + p_2 \phi_2(x) + \dots
If these basis functions satisfy a recurrence
\phi_{i+1}(x) = \alpha_i(x) \phi_i(x) + \beta_i(x) \phi_{i-1}(x)
then a generalization of Horner's evaluation mechanism, the Clenshaw algorithm, applies.
The regular basis \phi_i(x) = x^i satisfies the recurrence with \alpha_i(x) = x and \beta_i(x) = 0.
The Chebyshev basis satisfies it with \alpha_i(x) = 2x and \beta_i(x) = -1.
Starting from r_{n+1} = r_{n+2} = 0, we evaluate downwards for k = n, \dots, 1
r_k = p_k + \alpha_k(x) r_{k+1} + \beta_{k+1}(x) r_{k+2}
and recover the sum as
P(x) = \phi_0(x) \left( p_0 + \beta_1(x) r_2 \right) + \phi_1(x) r_1
Assuming \alpha and \beta do not depend on the index:
Horner case using one variable, iterated from k = n down to 0 starting with a = 0, after which a = P(x):
a = p_k + x * a
Chebyshev case using two variables, shown for degree 4 starting from a = b = 0:
a = p_4 + 2x * b - a
b = p_3 + 2x * a - b
a = p_2 + 2x * b - a
b = p_1 + 2x * a - b
result = p_0 + x * b - a
Chebyshev uses one more variable and has an additional subtraction per step, but the same number of multiplications; the final step uses x rather than 2x because T_1(x) = x.
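For reference, a runnable version of the Chebyshev case (clenshaw_chebyshev is an illustrative name; the tuple assignment replaces the manual a/b alternation above, and numpy is only used to check the result):

import numpy as np

def clenshaw_chebyshev(p, x):
    # Evaluate sum(p[k] * T_k(x)) by running the recurrence downwards.
    a, b = 0.0, 0.0
    for k in range(len(p) - 1, 0, -1):
        a, b = p[k] + 2 * x * a - b, a
    return p[0] + x * a - b  # final step uses T_1(x) = x

# Check against numpy's Chebyshev evaluation at a few points.
p = [1.0, 0.5, -0.25, 0.125]
for x in (-0.7, 0.0, 0.9):
    assert np.isclose(clenshaw_chebyshev(p, x), np.polynomial.chebyshev.chebval(x, p))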
TODO Barycentric basis
https://web.archive.org/web/20200211232711/https://www.mn.uio.no/math/english/people/aca/michaelf/papers/rational.pdf