Anytime Performance Assessment in Blackbox Optimization Benchmarking

Research output: Contribution to journal › Article › peer-review

Abstract

We present concepts and recipes for anytime performance assessment when benchmarking optimization algorithms in a blackbox scenario. We consider runtime, often measured as the number of blackbox evaluations needed to reach a target quality, to be a universally measurable cost for solving a problem. Starting from the graph that depicts solution quality versus runtime, we argue that runtime is the only performance measure with a generic, meaningful, and quantitative interpretation. Hence, our assessment is based solely on runtime measurements. We discuss proper choices of solution quality indicators in single- and multi-objective optimization, as well as in the presence of noise and constraints. We also discuss the choice of target values, including budget-based targets, and the aggregation of runtimes by means of simulated restarts, averages, and empirical cumulative distributions, which generalize the convergence graphs of single runs. The presented performance assessment is implemented to a large extent in the Comparing Continuous Optimizers (COCO) platform, freely available at https://github.com/numbbo/coco.
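To make the runtime-based measures mentioned in the abstract concrete, the following minimal Python sketch (not taken from the paper or from the COCO code base) illustrates how runtimes to a quality target, simulated restarts, and a runtime ECDF could be computed. All function names and the toy data are illustrative assumptions; the sketch assumes minimization and that at least one run reaches the target.

```python
# Illustrative sketch of three ideas from the abstract: (i) runtime measured
# as the number of blackbox evaluations needed to reach a quality target,
# (ii) simulated restarts that turn unsuccessful runs into finite runtimes,
# and (iii) the empirical cumulative distribution (ECDF) of runtimes.
import random
from bisect import bisect_right

def runtime_to_target(best_so_far, target):
    """First evaluation count at which the best-so-far quality reaches
    `target` (smaller quality is better); None if never reached."""
    for n_evals, quality in enumerate(best_so_far, start=1):
        if quality <= target:
            return n_evals
    return None

def simulated_restart_runtime(runs, target, rng=random):
    """Sum evaluation counts of runs drawn uniformly with replacement until
    a drawn run reaches the target, as if the algorithm were restarted
    after each failure. Assumes at least one run reaches the target."""
    total = 0
    while True:
        run = rng.choice(runs)
        rt = runtime_to_target(run, target)
        if rt is not None:
            return total + rt
        total += len(run)  # unsuccessful run: pay its full evaluation budget

def runtime_ecdf(runtimes):
    """Return a function mapping a budget to the fraction of runtimes
    that do not exceed that budget."""
    sorted_rt = sorted(runtimes)
    return lambda budget: bisect_right(sorted_rt, budget) / len(sorted_rt)

# Two artificial runs: best-so-far quality after each evaluation.
runs = [[10.0, 3.0, 1.2, 0.4, 0.09],
        [10.0, 8.0, 5.0, 4.9, 4.8]]
rts = [simulated_restart_runtime(runs, target=0.1) for _ in range(1000)]
F = runtime_ecdf(rts)
print(F(5), F(20), F(100))  # fraction of simulated runs solved within each budget
```

The simulated-restart construction is what allows aggregation over unsuccessful runs: each failed run contributes its full budget before another run is drawn, so the runtime distribution, and hence the ECDF, remains well defined whenever at least one run reaches the target.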

Original language: English
Pages (from-to): 1293-1305
Number of pages: 13
Journal: IEEE Transactions on Evolutionary Computation
Volume: 26
Issue number: 6
DOIs
Publication status: Published - 1 Dec 2022

Keywords

  • Anytime optimization
  • benchmarking
  • blackbox optimization
  • performance assessment
  • quality indicator
