Stopping criteria, initialization, and implementations of BFGS and their effect on the BBOB test suite

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Benchmarking algorithms is a crucial task for understanding them and for making recommendations about which algorithms to use in practice. However, one has to keep in mind that we typically compare only algorithm implementations, and that care must be taken when making general statements about an algorithm, since implementation details and parameter settings can have a strong impact on performance. In this paper, we investigate the impact of initialization, internal parameter settings, and algorithm implementation across different languages for the well-known BFGS algorithm. We must conclude that, even in their default settings, the BFGS implementations in Python's scipy library and in Matlab's fminunc differ widely, with the latter even changing significantly over time.
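To illustrate the abstract's point that internal parameter settings matter, the sketch below runs scipy's BFGS with its default stopping criterion and again with a tighter gradient tolerance (`gtol`). The test function (Rosenbrock) and the tolerance values are illustrative assumptions, not taken from the paper:

```python
# Minimal sketch: the effect of the stopping criterion (gtol) on
# scipy's BFGS. Function and tolerances are chosen for illustration
# only; they are not the paper's benchmark setup (BBOB).
from scipy.optimize import minimize, rosen, rosen_der

x0 = [-1.2, 1.0]  # classic Rosenbrock starting point

# Default settings: BFGS stops when the gradient norm falls below
# gtol = 1e-5.
res_default = minimize(rosen, x0, jac=rosen_der, method="BFGS")

# Tighter gradient tolerance: typically more iterations and a point
# closer to the true optimum.
res_tight = minimize(rosen, x0, jac=rosen_der, method="BFGS",
                     options={"gtol": 1e-8})

print("default:", res_default.nit, "iterations, f =", res_default.fun)
print("tight:  ", res_tight.nit, "iterations, f =", res_tight.fun)
```

A benchmarking study that reports only the default run would thus characterize the implementation's stopping rule as much as the BFGS algorithm itself, which is the kind of confound the paper examines.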

Original language: English
Title of host publication: GECCO 2018 Companion - Proceedings of the 2018 Genetic and Evolutionary Computation Conference Companion
Publisher: Association for Computing Machinery, Inc
Pages: 1513-1517
Number of pages: 5
ISBN (Electronic): 9781450357647
DOIs
Publication status: Published - 6 Jul 2018
Event: 2018 Genetic and Evolutionary Computation Conference, GECCO 2018 - Kyoto, Japan
Duration: 15 Jul 2018 – 19 Jul 2018

Publication series

Name: GECCO 2018 Companion - Proceedings of the 2018 Genetic and Evolutionary Computation Conference Companion

Conference

Conference: 2018 Genetic and Evolutionary Computation Conference, GECCO 2018
Country/Territory: Japan
City: Kyoto
Period: 15/07/18 – 19/07/18

Keywords

  • BFGS
  • Benchmarking
  • Black-box optimization
  • Implementation impact
  • Influence of parameters
  • Initialization
