Benchmarking the pure random search on the bi-objective BBOB-2016 testbed

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

The Comparing Continuous Optimizers platform COCO has become a standard for benchmarking numerical (single-objective) optimization algorithms effortlessly. In 2016, COCO has been extended towards multi-objective optimization by providing a first bi-objective test suite. To provide a baseline, we benchmark a pure random search on this bi-objective bbob-biobj test suite of the COCO platform. For each combination of function, dimension n, and instance of the test suite, 10^6 · n candidate solutions are sampled uniformly within the sampling box [-5, 5]^n.
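As a rough illustration of the sampling scheme described in the abstract, the Python sketch below draws 10^6 · n candidate solutions uniformly from the box [-5, 5]^n and evaluates each one with a bi-objective function. The `evaluate` callable and the toy bi-objective function are hypothetical stand-ins for one (function, dimension, instance) combination of the bbob-biobj suite, and the batching choice is an assumption made here for memory reasons; this is not the authors' implementation.

```python
import numpy as np

def pure_random_search(evaluate, n, budget_per_dim=10**6, batch=10**4, rng=None):
    """Sample budget_per_dim * n points uniformly in [-5, 5]^n and
    evaluate them in batches with the given bi-objective function."""
    rng = np.random.default_rng() if rng is None else rng
    budget = budget_per_dim * n
    evaluated = 0
    while evaluated < budget:
        k = min(batch, budget - evaluated)
        # Uniform sampling within the box [-5, 5]^n
        X = rng.uniform(-5.0, 5.0, size=(k, n))
        for x in X:
            evaluate(x)  # each call returns the two objective values
        evaluated += k

# Hypothetical bi-objective toy problem (two sphere functions with shifted
# optima), standing in for one bbob-biobj problem instance.
def toy_biobjective(x):
    return np.array([np.sum(x**2), np.sum((x - 1.0)**2)])

# Small budget for demonstration purposes only.
pure_random_search(toy_biobjective, n=2, budget_per_dim=10**3)
```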

Original language: English
Title of host publication: GECCO 2016 Companion - Proceedings of the 2016 Genetic and Evolutionary Computation Conference
Editors: Tobias Friedrich
Publisher: Association for Computing Machinery, Inc
Pages: 1217-1223
Number of pages: 7
ISBN (Electronic): 9781450343237
DOIs
Publication status: Published - 20 Jul 2016
Externally published: Yes
Event: 2016 Genetic and Evolutionary Computation Conference, GECCO 2016 Companion - Denver, United States
Duration: 20 Jul 2016 - 24 Jul 2016

Publication series

Name: GECCO 2016 Companion - Proceedings of the 2016 Genetic and Evolutionary Computation Conference

Conference

Conference: 2016 Genetic and Evolutionary Computation Conference, GECCO 2016 Companion
Country/Territory: United States
City: Denver
Period: 20/07/16 - 24/07/16

Keywords

  • Benchmarking
  • Bi-objective optimization
  • Black-box optimization
