Convergence Rates of a Momentum Algorithm with Bounded Adaptive Step Size for Nonconvex Optimization

Research output: Contribution to journal › Conference article › peer-review

Abstract

Although Adam is a very popular algorithm for optimizing the weights of neural networks, it has recently been shown that it can diverge even on simple convex optimization examples. Several variants of Adam have been proposed to circumvent this convergence issue. In this work, we study the Adam algorithm for smooth nonconvex optimization under a boundedness assumption on the adaptive learning rate. The bound on the adaptive step size depends on the Lipschitz constant of the gradient of the objective function and provides safe theoretical adaptive step sizes. Under this boundedness assumption, we show a novel first-order convergence rate result in both the deterministic and stochastic contexts. Furthermore, we establish convergence rates of the function value sequence using the Kurdyka-Łojasiewicz property.
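
To make the setting concrete, the sketch below shows an Adam-style update in which the per-coordinate adaptive step size alpha / (sqrt(v_hat) + eps) is clipped to a bounded interval. This is only an illustration of the boundedness assumption, not the authors' exact method: the names adam_bounded_step, step_min, step_max and the Rosenbrock test function are placeholders chosen here, whereas in the paper the upper bound on the step is tied to the Lipschitz constant of the gradient.

    import numpy as np

    def adam_bounded_step(grad, x0, alpha=1e-3, beta1=0.9, beta2=0.999,
                          eps=1e-8, step_min=1e-6, step_max=1e-1, n_iters=5000):
        # Adam-style iteration whose adaptive step size is clipped to
        # [step_min, step_max]; these bounds are illustrative, not the
        # paper's Lipschitz-based values.
        x = np.asarray(x0, dtype=float).copy()
        m = np.zeros_like(x)   # first-moment (momentum) estimate
        v = np.zeros_like(x)   # second-moment estimate
        for t in range(1, n_iters + 1):
            g = grad(x)
            m = beta1 * m + (1.0 - beta1) * g
            v = beta2 * v + (1.0 - beta2) * g * g
            m_hat = m / (1.0 - beta1 ** t)            # bias correction
            v_hat = v / (1.0 - beta2 ** t)
            step = alpha / (np.sqrt(v_hat) + eps)     # adaptive step size
            step = np.clip(step, step_min, step_max)  # boundedness assumption
            x = x - step * m_hat
        return x

    # Usage on a smooth nonconvex test function (two-dimensional Rosenbrock).
    def rosenbrock_grad(x):
        return np.array([
            -2.0 * (1.0 - x[0]) - 400.0 * x[0] * (x[1] - x[0] ** 2),
            200.0 * (x[1] - x[0] ** 2),
        ])

    x_out = adam_bounded_step(rosenbrock_grad, x0=[-1.0, 1.0], alpha=1e-2)

Clipping the ratio keeps every effective step inside a prescribed interval, which is the kind of safe step-size regime that the convergence analysis described in the abstract relies on.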

Original language: English
Pages (from-to): 225-240
Number of pages: 16
Journal: Proceedings of Machine Learning Research
Volume: 129
Publication status: Published - 1 Jan 2020
Event: 12th Asian Conference on Machine Learning, ACML 2020 - Bangkok, Thailand
Duration: 18 Nov 2020 - 20 Nov 2020

Keywords

  • Adaptive gradient methods
  • Kurdyka-Łojasiewicz inequality
  • Nonconvex optimization
