Learning hard problems: CMA-ES

The covariance matrix adaptation evolution strategy (CMA-ES) is a black-box optimization algorithm that performs well in practice [source].

We have a function $f: \R^n → \R$ that we want to minimize.

The algorithm iterates three steps:

  1. Generate new solutions $\vec x_i \in \R^n$.
  2. Sort solutions by fitness so that $f(\vec x_i) ≤ f(\vec x_{i+1})$.
  3. Update the internal state using the re-ordered samples.

Since $f$ is used only to order the candidate solutions, very few assumptions about $f$ are needed: the algorithm uses no gradients and is invariant under any monotone transformation of $f$.
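The three-step loop can be sketched in Python. This is a toy sketch, not full CMA-ES: a sphere function stands in for $f$, and only the mean update is performed, with $σ$ and $C$ held fixed. The dimensions and parameter values are arbitrary choices for illustration.

```python
import numpy as np

# Toy objective: the sphere function stands in for f.
def f(x):
    return float(np.sum(x**2))

n, lam = 5, 12        # dimension n, population size λ (arbitrary)
mu = lam // 2         # number of selected parents μ
rng = np.random.default_rng(0)

m = np.full(n, 3.0)   # mean: current best estimate
sigma = 0.5           # step size (held fixed in this sketch)
C = np.eye(n)         # covariance matrix (held fixed in this sketch)
w = np.full(mu, 1.0 / mu)  # uniform recombination weights

for _ in range(100):
    # 1. Generate new solutions from N(m, σ² C).
    xs = rng.multivariate_normal(m, sigma**2 * C, size=lam)
    # 2. Sort solutions by fitness.
    xs = xs[np.argsort([f(x) for x in xs])]
    # 3. Update the state (here: mean only).
    m = w @ xs[:mu]
```

Even with $σ$ and $C$ frozen, the selection-and-recombination loop drives the mean toward the optimum.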

The state of the algorithm is the mean $\vec m ∈ \R^n$, the step size $σ > 0$, the covariance matrix $C ∈ \R^{n × n}$, and two evolution paths $\vec p, \vec q ∈ \R^n$.

The initial values are $C = I$ and $\vec p = \vec q = \vec 0$.

The parameters of the algorithm are the population size $λ$, the parent number $μ$, the recombination weights $w_i$, and the learning rates of the updates below (such as $c_p$).

Generating new solutions

A total of $λ$ new solutions $\vec x_0, \dots, \vec x_{λ -1}$ are drawn from the multivariate normal distribution

$$ \vec x_i ∼ \mathcal N (\vec m, σ^2 ⋅ C) $$

This is the same as $\vec m + σ ⋅ \mathcal N (\vec 0, C)$: the solutions are all perturbations of the mean $\vec m$, which represents the current best estimate.
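In NumPy the sampling step is a single call. The dimension, population size and state values below are placeholders:

```python
import numpy as np

n, lam = 5, 12             # dimension and population size λ (placeholders)
rng = np.random.default_rng(0)

m = np.zeros(n)            # current mean
sigma = 0.5                # step size σ
C = np.eye(n)              # covariance matrix

# Draw λ candidate solutions x_i ~ N(m, σ² C).
xs = rng.multivariate_normal(m, sigma**2 * C, size=lam)
```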

Sorting solutions

Sort the solutions by $f$ such that

$$ f(\vec x_0) ≤ f(\vec x_1) ≤ \cdots ≤ f(\vec x_{λ - 1}) $$
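The sort is an ordinary argsort on the fitness values. A toy sphere function stands in for $f$ here:

```python
import numpy as np

# Toy fitness function as a stand-in for f.
def f(x):
    return float(np.sum(x**2))

rng = np.random.default_rng(0)
xs = rng.normal(size=(12, 5))   # λ = 12 candidates in R^5 (toy data)

# Re-order candidates so that f(x_0) ≤ f(x_1) ≤ … ≤ f(x_{λ-1}).
xs = xs[np.argsort([f(x) for x in xs])]
fitnesses = [f(x) for x in xs]
```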

Update the mean

$$ \vec m' = \sum_i w_i ⋅ \vec x_i $$

where the $w_i$ are recombination weights such that $\sum_i w_i = 1$ and $w_i \geq w_{i+1}$. Typically there is a $0 < μ ≤ λ/2$ such that only the $μ$ best solutions receive non-zero weight; the simplest choice is $w_i = \frac{1}{μ}$ for $i < μ$ and zero otherwise. A common default takes

$$ μ = \left\lfloor \frac{λ}{2} \right\rfloor $$

with logarithmically decreasing weights, normalized by the constant $\Sigma$ so that they sum to one:

$$ w_i = \frac{1}{\Sigma} \left( \ln\left(μ + \tfrac{1}{2}\right) - \ln(i + 1) \right) \qquad \text{for } i < μ $$

where $\Sigma = \sum_{j < μ} \left( \ln\left(μ + \tfrac{1}{2}\right) - \ln(j + 1) \right)$.
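The logarithmic weights and the mean update are a few lines of NumPy. $λ = 12$ is a placeholder, and random vectors stand in for the sorted solutions:

```python
import numpy as np

lam = 12
mu = lam // 2   # μ = ⌊λ/2⌋

# Logarithmic weights ln(μ + ½) − ln(i + 1) for i < μ, normalized to sum to 1.
raw = np.log(mu + 0.5) - np.log(np.arange(1, mu + 1))
w = raw / raw.sum()

# Mean update: weighted recombination of the μ best candidates.
rng = np.random.default_rng(0)
xs = rng.normal(size=(lam, 5))   # stand-in for the sorted solutions
m_new = w @ xs[:mu]
```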

Update the step-size

First the isotropic evolution path is updated:

$$ \vec p' = (1 - c_p)\, \vec p + \sqrt{c_p (2 - c_p)\, μ_{\text{eff}}}\; C^{-1/2}\, \frac{\vec m' - \vec m}{σ} $$

where $μ_{\text{eff}} = 1 / \sum_i w_i^2$ is the variance-effective selection mass. The step size is then scaled up or down depending on whether the path is longer or shorter than expected for a standard normal vector:

$$ σ' = σ ⋅ \exp\left( \frac{c_p}{d_σ} \left( \frac{\lVert \vec p' \rVert}{\mathbb{E} \lVert \mathcal N(\vec 0, I) \rVert} - 1 \right) \right) $$

with $d_σ$ a damping parameter.
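A sketch of the path and step-size update, assuming the standard CMA-ES form with learning rate $c_p$, damping $d_σ$ and $μ_{\text{eff}}$ as above; all numeric values here are illustrative placeholders, not tuned defaults:

```python
import numpy as np

n = 5
rng = np.random.default_rng(0)

c_p, d_sigma = 0.3, 1.0   # learning rate and damping (placeholders)
mu_eff = 4.0              # 1 / Σ w_i², from the recombination weights
sigma = 0.5
p = np.zeros(n)           # isotropic evolution path
C = np.eye(n)
m_old = np.zeros(n)
m_new = rng.normal(size=n) * 0.1   # stand-in for the updated mean

# C^{-1/2} via eigendecomposition (C is symmetric positive definite).
eigvals, B = np.linalg.eigh(C)
C_inv_half = B @ np.diag(eigvals**-0.5) @ B.T

# Isotropic path update.
p = (1 - c_p) * p + np.sqrt(c_p * (2 - c_p) * mu_eff) \
    * C_inv_half @ (m_new - m_old) / sigma

# E‖N(0, I)‖ ≈ √n (1 − 1/(4n) + 1/(21n²)); grow σ when the path is long.
expected_norm = np.sqrt(n) * (1 - 1/(4*n) + 1/(21*n**2))
sigma *= np.exp((c_p / d_sigma) * (np.linalg.norm(p) / expected_norm - 1))
```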

Update the covariance matrix

First the anisotropic evolution path is updated:

$$ \vec q' = (1 - c_q)\, \vec q + \sqrt{c_q (2 - c_q)\, μ_{\text{eff}}}\; \frac{\vec m' - \vec m}{σ} $$

The covariance matrix then combines a rank-one update from this path with a rank-$μ$ update from the selected samples:

$$ C' = (1 - c_1 - c_μ)\, C + c_1\, \vec q' (\vec q')^\top + c_μ \sum_{i < μ} w_i\, \frac{(\vec x_i - \vec m)(\vec x_i - \vec m)^\top}{σ^2} $$

where $c_q$, $c_1$ and $c_μ$ are learning rates.
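A sketch of the anisotropic path and the rank-one plus rank-$μ$ covariance update, assuming the standard CMA-ES form; the learning rates $c_q$, $c_1$, $c_μ$ and all numeric values are illustrative placeholders:

```python
import numpy as np

n, mu = 5, 6
rng = np.random.default_rng(0)

c_q, c1, cmu = 0.3, 0.1, 0.2   # learning rates (placeholders)
mu_eff = 4.0                   # 1 / Σ w_i²
sigma = 0.5
q = np.zeros(n)                # anisotropic evolution path
C = np.eye(n)
m_old = np.zeros(n)
xs = rng.normal(size=(mu, n)) * sigma   # μ best candidates (toy data)
m_new = xs.mean(axis=0)
w = np.full(mu, 1.0 / mu)      # uniform recombination weights

# Anisotropic path update (no C^{-1/2} factor here).
q = (1 - c_q) * q + np.sqrt(c_q * (2 - c_q) * mu_eff) * (m_new - m_old) / sigma

# Rank-one update from the path plus rank-μ update from the samples.
ys = (xs - m_old) / sigma                  # normalized steps y_i
C = (1 - c1 - cmu) * C + c1 * np.outer(q, q) + cmu * (ys.T * w) @ ys
```

The expression `(ys.T * w) @ ys` computes $\sum_i w_i\, \vec y_i \vec y_i^\top$ without an explicit loop.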


Remco Bloemen
Math & Engineering