# Mathemagic: exp and ln

After basic arithmetic, the exponential function $\exp\p{\dummyarg}$ and natural logarithm $\ln\p{\dummyarg}$ are the most useful functions to implement because they allow for a range of other operations:

$$ \begin{aligned} a ⋅ b &= \exp\p{\ln\p a + \ln\p b} \\ a / b &= \exp\p{\ln\p a - \ln\p b} \\ a^b &= \exp\p{b ⋅ \ln\p{a}} \\ \log_b a &= \ln\p{a} / \ln\p{b} \\ \sqrt{a} &= \exp\p{½ ⋅ \ln\p{a}} \\ \sqrt[3]{a} &= \exp\p{⅓ ⋅ \ln\p{a}} \\ \end{aligned} $$

Since multiplication and division can be implemented this way, you only need $+$, $-$, $\exp$ and $\ln$ to construct a full set of operations. In practice, though, more efficient dedicated methods for multiplication and division are usually available. You can also construct more efficient dedicated operations for square roots and other functions; see my introduction to approximation theory.
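As a quick numerical sanity check of the identities above, here is a Python sketch (illustrative only, not part of the contract code):

```python
from math import exp, log

a, b = 3.0, 7.0
# Each identity should hold up to floating-point error.
assert abs(a * b - exp(log(a) + log(b))) < 1e-9       # multiplication
assert abs(a / b - exp(log(a) - log(b))) < 1e-12      # division
assert abs(a ** b - exp(b * log(a))) < 1e-6           # exponentiation
assert abs(a ** 0.5 - exp(0.5 * log(a))) < 1e-12      # square root
```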

The choice of the natural base $\mathrm{e} = 2.71828…$ is conventional but doesn't actually matter much. The relations above are equally valid in any base, and the implementations are just as efficient. The natural base does have advantages in analysis due to its nicer differential identities.

## Fixed-point arithmetic

So far we have used real numbers $ℝ$, but on our target platform we will use 256-bit integers $[-2^{255}, 2^{255})$. Using integers directly is not very useful because they cannot represent fractional values. To solve this, we will use fixed-point arithmetic: we treat each number as if it were in units of a quintillionth ($10^{-18}$). This is the same as putting a fixed decimal point before the 18th digit, hence the name “fixed point”. To convert between integers and the real numbers they represent, we define these two functions:

$$ \begin{aligned} \mathtt{to\_fix}\p{x} : ℝ → ℤ &= \floor{½ + 10^{18} ⋅ x} \\ \mathtt{from\_fix}\p{x}: ℤ → ℝ &= \frac{x}{10^{18}} \\ \end{aligned} $$

The $\mathtt{to\_fix}$ function loses information by rounding. Fixed point is an approximation method, and it is important to do the straightforward-but-tedious error analysis to make sure this error is tolerable. In the definitions above I used round-to-nearest. Often it helps to bias the error in a safe direction: in finance, for example, you may want to *round down* amounts you send out and *round up* amounts you receive.
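The conversions and the biased-rounding variants can be sketched in Python (the function names mirror the math above; they are not from any particular library):

```python
from math import ceil, floor

WAD = 10**18  # one quintillionth per unit, i.e. 18 decimal places

def to_fix(x: float) -> int:
    """Round-to-nearest conversion from a real number to fixed point."""
    return floor(0.5 + WAD * x)

def from_fix(x: int) -> float:
    """Interpret a fixed-point integer as a real number."""
    return x / WAD

# Biased rounding: round down for outgoing amounts, up for incoming ones.
def to_fix_down(x: float) -> int:
    return floor(WAD * x)

def to_fix_up(x: float) -> int:
    return ceil(WAD * x)
```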

## Fixed-point exponential function

Let's implement a fixed-point exponential function $\mathtt{expWad}$. Conversion to fixed point gives us:

$$ \mathtt{expWad}\p{x} = \mathtt{to\_fix}\p{ \exp\p{\mathtt{from\_fix}\p{x} } }= \floor{½ + 10^{18} ⋅ \exp\p{\frac{x}{10^{18}}}} $$
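Written out as floating-point Python, the target behaviour is simply the following (`exp_wad_ref` is a name I am introducing here; it is only a reference model, while the contract achieves this with integer arithmetic):

```python
from math import exp, floor

def exp_wad_ref(x: int) -> int:
    # to_fix(exp(from_fix(x))): the behaviour expWad approximates in integers.
    return floor(0.5 + 10**18 * exp(x / 10**18))
```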

The implementation works as follows:

- Handle underflow and overflow cases.
- Convert from $10^{-18}$ fix point to $2^{-96}$ fix point for increased intermediate precision and more efficient multiplications.
- Further reduce the input range to $(-½ \ln 2, ½ \ln 2)$ by factoring out powers of two from the result. We will add those back using a shift in the end.
- Approximate $\exp$ using a $(6,7)$-term rational function evaluated in $2^{-96}$ fixed point.
- Convert back to $10^{-18}$ fixed point and add powers of two.
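The range-reduction step in the list above can be sketched in Python (a floating-point illustration; the contract performs the same computation in $2^{-96}$ fixed point):

```python
from math import exp, log

LN2 = log(2)

def reduce_range(x: float):
    """Split x so that exp(x) == exp(x_prime) * 2**k with |x_prime| <= ln(2)/2."""
    k = round(x / LN2)
    x_prime = x - k * LN2
    return x_prime, k

x = 13.7
x_prime, k = reduce_range(x)
assert abs(x_prime) <= LN2 / 2 + 1e-12
# Reconstructing exp(x) from the reduced argument and the power of two:
assert abs(exp(x_prime) * 2**k - exp(x)) / exp(x) < 1e-12
```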

A lot of the scaling factors can be merged. I was worried that the $10^{-18}$ base would be less efficient, but it turns out the conversion cost is negligible, as it can mostly be absorbed into other operations. Both the numerator and denominator of the rational function are made monic to save two multiplications. The denominator is evaluated using Horner's method, while the numerator uses a preprocessed method to save another multiplication. Preprocessing did not seem effective for the denominator, potentially due to some suboptimality in stack handling. There might be some slight room for improvement there.

Except for the range reduction in step 3, this strategy is very generic and can be applied to any smooth single-variable function.

Care is taken to make sure that $\exp\p{0}$ is exactly $1$, as one would expect. As far as I know the function is monotonic, but I have not proven this and there may be exceptions. The maximum absolute error on the reduced range is designed to be $10^{-20}$. Over the whole domain this gives a relative error of at most $10^{-20}$, so for small values the result is near perfect. It takes $411$ gas.

```
contract Math {
    // Computes the exponential function in 1e18 fixed point using 411 gas.
    function expWad(int256 x) internal pure returns (int256 r) {
        unchecked {
            // When the result is < 0.5 we return zero. This happens when
            // x <= floor(log(0.5e-18) * 1e18) ~ -42e18.
            if (x <= -42139678854452767551) return 0;
            // When the result is > (2**255 - 1) / 1e18 we can not represent it as an
            // int. This happens when x >= floor(log((2**255 - 1) / 1e18) * 1e18) ~ 135.
            if (x >= 135305999368893231589) revert("EXP_OVERFLOW");
            // x is now in the range (-42, 136) * 1e18. Convert to (-42, 136) * 2**96
            // for more intermediate precision and a binary basis. This base conversion
            // is a multiplication by 2**96 / 1e18 = 2**78 / 5**18.
            x = (x << 78) / 5**18;
            // Reduce range of x to (-½ ln 2, ½ ln 2) * 2**96 by factoring out powers
            // of two such that exp(x) = exp(x') * 2**k, where k is an integer.
            // Solving this gives k = round(x / log(2)) and x' = x - k * log(2).
            int256 k = ((x << 96) / 54916777467707473351141471128 + 2**95) >> 96;
            x = x - k * 54916777467707473351141471128;
            // k is in the range [-61, 195].
            // Evaluate using a (6, 7)-term rational approximation.
            // p is made monic, we'll multiply by a scale factor later.
            int256 y = x + 1346386616545796478920950773328;
            y = ((y * x) >> 96) + 57155421227552351082224309758442;
            int256 p = y + x - 94201549194550492254356042504812;
            p = ((p * y) >> 96) + 28719021644029726153956944680412240;
            p = p * x + (4385272521454847904659076985693276 << 96);
            // We leave p in 2**192 basis so we don't need to scale it back up for the division.
            int256 q = x - 2855989394907223263936484059900;
            q = ((q * x) >> 96) + 50020603652535783019961831881945;
            q = ((q * x) >> 96) - 533845033583426703283633433725380;
            q = ((q * x) >> 96) + 3604857256930695427073651918091429;
            q = ((q * x) >> 96) - 14423608567350463180887372962807573;
            q = ((q * x) >> 96) + 26449188498355588339934803723976023;
            assembly {
                // Div in assembly because solidity adds a zero check despite the unchecked.
                // The q polynomial won't have zeros in the domain as all its roots are complex.
                // No scaling is necessary because p is already 2**96 too large.
                r := sdiv(p, q)
            }
            // r should be in the range (0.09, 0.25) * 2**96.
            // We now need to multiply r by:
            // * the scale factor s = ~6.031367120.
            // * the 2**k factor from the range reduction.
            // * the 1e18 / 2**96 factor for base conversion.
            // We do this all at once, with an intermediate result in 2**213
            // basis, so the final right shift is always by a positive amount.
            r = int256((uint256(r) * 3822833074963236453042738258902158003155416615667) >> uint256(195 - k));
        }
    }
}
```

## Fixed-point natural logarithm

The implementation of $\mathtt{lnWad}$ is nearly identical to $\mathtt{expWad}$ except for a different range reduction strategy and a different $(8,8)$-term rational approximation. The reduction strategy is to factor out powers of two from the input

$$ \ln\p{x} = \ln\p{2^k ⋅ x'} = k ⋅ \ln\p{2} + \ln\p{x'} $$

where $k = \mathtt{log2}\p{x} = \floor{\log_2 x}$. See below for the implementation of $\mathtt{log2}$.
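The same reduction can be sketched in floating-point Python; here `frexp` plays the role of $\mathtt{log2}$ and the shifts (`ln_via_reduction` is an illustrative name, not the contract code):

```python
from math import frexp, log

def ln_via_reduction(x: float) -> float:
    """Compute ln(x) as k*ln(2) + ln(x') with x' reduced to [1, 2)."""
    m, e = frexp(x)            # x == m * 2**e, with m in [0.5, 1)
    x_prime, k = 2 * m, e - 1  # x == x_prime * 2**k, with x_prime in [1, 2)
    return k * log(2) + log(x_prime)

assert abs(ln_via_reduction(123.456) - log(123.456)) < 1e-12
```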

Care is taken to make sure $\ln\p{1}$ is exactly $0$, as one would expect. As before, as far as I know the function is monotonic, but I have not proven this and there may be exceptions. The maximum absolute error over the whole range is about $10^{-18}$, which is near perfect. It takes $614$ gas, which can be reduced to $585$ by inlining $\mathtt{log2}$.

```
contract Math {
    // Computes the natural logarithm in 1e18 fixed point using 675 gas.
    // Reverts if x is negative or zero.
    function lnWad(int256 x) internal pure returns (int256 r) {
        unchecked {
            require(x > 0, "UNDEFINED");
            // We want to convert x from 10**18 fixed point to 2**96 fixed point.
            // We do this by multiplying by 2**96 / 10**18. But since
            // ln(x * C) = ln(x) + ln(C), we can simply do nothing here
            // and add ln(2**96 / 10**18) at the end.
            // Reduce range of x to (1, 2) * 2**96 using
            // ln(2**k * x) = k * ln(2) + ln(x).
            int256 k = int256(log2(uint256(x))) - 96;
            x <<= uint256(159 - k);
            x = int256(uint256(x) >> 159);
            // Evaluate using a (8, 8)-term rational approximation.
            // p is made monic, we will multiply by a scale factor later.
            int256 p = x + 3273285459638523848632254066296;
            p = ((p * x) >> 96) + 24828157081833163892658089445524;
            p = ((p * x) >> 96) + 43456485725739037958740375743393;
            p = ((p * x) >> 96) - 11111509109440967052023855526967;
            p = ((p * x) >> 96) - 45023709667254063763336534515857;
            p = ((p * x) >> 96) - 14706773417378608786704636184526;
            p = p * x - (795164235651350426258249787498 << 96);
            // We leave p in 2**192 basis so we don't need to scale it back up for the division.
            // q is monic by convention.
            int256 q = x + 5573035233440673466300451813936;
            q = ((q * x) >> 96) + 71694874799317883764090561454958;
            q = ((q * x) >> 96) + 283447036172924575727196451306956;
            q = ((q * x) >> 96) + 401686690394027663651624208769553;
            q = ((q * x) >> 96) + 204048457590392012362485061816622;
            q = ((q * x) >> 96) + 31853899698501571402653359427138;
            q = ((q * x) >> 96) + 909429971244387300277376558375;
            assembly {
                // Div in assembly because solidity adds a zero check despite the unchecked.
                // The q polynomial is known not to have zeros in the domain.
                // No scaling required because p is already 2**96 too large.
                r := sdiv(p, q)
            }
            // r is in the range (0, 0.125) * 2**96.
            // Finalization, we need to:
            // * multiply by the scale factor s = 5.549…
            // * add ln(2**96 / 10**18)
            // * add k * ln(2)
            // * multiply by 10**18 / 2**96 = 5**18 >> 78
            // mul s * 5**18 * 2**96, base is now 5**18 * 2**192
            r *= 1677202110996718588342820967067443963516166;
            // add ln(2) * k * 5**18 * 2**192
            r += 16597577552685614221487285958193947469193820559219878177908093499208371 * k;
            // add ln(2**96 / 10**18) * 5**18 * 2**192
            r += 600920179829731861736702779321621459595472258049074101567377883020018308;
            // base conversion: mul 2**18 / 2**192
            r >>= 174;
        }
    }
}
```

### Discrete binary logarithm

$\mathtt{lnWad}$ requires the discrete binary logarithm $\mathtt{log2}$, also known as *find last set* because it returns the index of the most significant non-zero bit.

I experimented with a de Bruijn-based version, but a simpler SWAR approach was more efficient. Vectorized later did get the de Bruijn approach to perform better by 13 gas, so I will present his version here, which takes $191$ gas:

```
contract Math {
    // Computes the discrete binary logarithm of x using 191 gas.
    // Requires x to be non-zero.
    function log2(uint256 x) internal pure returns (uint256 r) {
        assembly {
            r := shl(7, lt(0xffffffffffffffffffffffffffffffff, x))
            r := or(r, shl(6, lt(0xffffffffffffffff, shr(r, x))))
            r := or(r, shl(5, lt(0xffffffff, shr(r, x))))
            // For the remaining 32 bits, use a de Bruijn lookup.
            x := shr(r, x)
            x := or(x, shr(1, x))
            x := or(x, shr(2, x))
            x := or(x, shr(4, x))
            x := or(x, shr(8, x))
            x := or(x, shr(16, x))
            r := or(r, byte(shr(251, mul(x, shl(224, 0x07c4acdd))),
                0x0009010a0d15021d0b0e10121619031e080c141c0f111807131b17061a05041f))
        }
    }
}
```
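For reference, here is a direct Python port of the assembly above (my own transliteration, not part of the contract). Note that the EVM's `byte(i, w)` opcode indexes bytes from the most-significant end of the word, which is why the table constant can be read left to right:

```python
DEBRUIJN = 0x07C4ACDD
# The 32-byte lookup table, one byte per de Bruijn index.
TABLE = bytes.fromhex(
    "0009010a0d15021d0b0e10121619031e"
    "080c141c0f111807131b17061a05041f"
)

def log2(x: int) -> int:
    """Index of the most significant set bit of x (x must be non-zero)."""
    assert x > 0
    # Binary search over the top three bit-width thresholds.
    r = 128 if x > 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF else 0
    r |= 64 if (x >> r) > 0xFFFFFFFFFFFFFFFF else 0
    r |= 32 if (x >> r) > 0xFFFFFFFF else 0
    # Remaining 32 bits: smear the top bit rightwards, then de Bruijn lookup.
    x >>= r
    x |= x >> 1
    x |= x >> 2
    x |= x >> 4
    x |= x >> 8
    x |= x >> 16
    return r | TABLE[((x * DEBRUIJN) & 0xFFFFFFFF) >> 27]
```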

## Usage

The code in this post can be used under an MIT License. An early version with the rational derivation code is in an experiment repo. A polished and tested version lives in the Solmate repo to be included in v7. It has already seen use in Paradigm's VRGDA library and their projects ArtGobblers and 0xMonaco.

If you like this you may also enjoy my muldiv and full-mul implementations and my introduction to approximation theory.