TY - JOUR
T1 - Markov chain analysis of cumulative step-size adaptation on a linear constrained problem
AU - Chotard, Alexandre
AU - Auger, Anne
AU - Hansen, Nikolaus
N1 - Publisher Copyright:
© 2015 by the Massachusetts Institute of Technology.
PY - 2015/12/1
Y1 - 2015/12/1
AB - This paper analyzes a (1, λ)-Evolution Strategy, a randomized comparison-based adaptive search algorithm optimizing a linear function with a linear constraint. The algorithm uses resampling to handle the constraint. Two cases are investigated: first, the case where the step-size is constant, and second, the case where the step-size is adapted using cumulative step-size adaptation. We exhibit for each case a Markov chain describing the behavior of the algorithm. Stability of the chain implies, by applying a law of large numbers, either convergence or divergence of the algorithm. Divergence is the desired behavior. In the constant step-size case, we show stability of the Markov chain and prove the divergence of the algorithm. In the cumulative step-size adaptation case, we prove stability of the Markov chain in the simplified case where the cumulation parameter equals 1, and discuss steps to obtain similar results for the full (default) algorithm where the cumulation parameter is smaller than 1. The stability of the Markov chain allows us to deduce geometric divergence or convergence, depending on the dimension, constraint angle, population size, and damping parameter, at a rate that we estimate. Our results complement previous studies where stability was assumed.
KW - CMA-ES
KW - Constrained problem
KW - Continuous optimization
KW - Cumulative step-size adaptation
KW - Evolution strategies
U2 - 10.1162/EVCO_a_00166
DO - 10.1162/EVCO_a_00166
M3 - Article
C2 - 26406165
AN - SCOPUS:84950286503
SN - 1063-6560
VL - 23
SP - 611
EP - 640
JO - Evolutionary Computation
JF - Evolutionary Computation
IS - 4
ER -