Benchmarking Algorithms from the platypus Framework on the Biobjective bbob-biobj Testbed

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

One of the main goals of the COCO platform is to produce, collect, and make available benchmarking performance data sets of optimization algorithms and, more concretely, of algorithm implementations. For the recently proposed biobjective bbob-biobj test suite, fewer than 20 algorithms have been benchmarked so far, but many more are available to the public. In this paper we therefore aim to benchmark several available multiobjective optimization algorithms on the bbob-biobj test suite and discuss their performance. We focus on algorithms implemented in the platypus framework (in Python), whose main advantage is its ease of use: no algorithm parameters need to be set by hand.

Original language: English
Title of host publication: GECCO 2019 Companion - Proceedings of the 2019 Genetic and Evolutionary Computation Conference Companion
Publisher: Association for Computing Machinery, Inc
Pages: 1905-1911
Number of pages: 7
ISBN (Electronic): 9781450367486
Publication status: Published - 13 Jul 2019
Event: 2019 Genetic and Evolutionary Computation Conference, GECCO 2019 - Prague, Czech Republic
Duration: 13 Jul 2019 – 17 Jul 2019

Publication series

Name: GECCO 2019 Companion - Proceedings of the 2019 Genetic and Evolutionary Computation Conference Companion

Conference

Conference: 2019 Genetic and Evolutionary Computation Conference, GECCO 2019
Country/Territory: Czech Republic
City: Prague
Period: 13/07/19 – 17/07/19

Keywords

  • Benchmarking
  • Bi-objective optimization
  • Black-box optimization
