Abstract
Although Adam is a very popular algorithm for optimizing the weights of neural networks, it has recently been shown that it can diverge even on simple convex optimization examples. Several variants of Adam have been proposed to circumvent this convergence issue. In this work, we study the Adam algorithm for smooth nonconvex optimization under a boundedness assumption on the adaptive learning rate. The bound on the adaptive step size depends on the Lipschitz constant of the gradient of the objective function and provides safe, theoretically justified adaptive step sizes. Under this boundedness assumption, we establish a novel first-order convergence rate in both the deterministic and the stochastic setting. Furthermore, we derive convergence rates for the function value sequence using the Kurdyka-Łojasiewicz property.
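The sketch below is only an illustration of what a "bounded adaptive learning rate" could mean in practice, not the paper's exact algorithm or bound: it runs standard Adam but clamps the per-coordinate effective step size alpha / (sqrt(v_hat) + eps) to an upper bound `step_cap`. The function name `adam_bounded`, the parameter `step_cap`, and the choice of cap on the order of 1/L (L the gradient Lipschitz constant) are assumptions made for illustration.

```python
# Minimal sketch (assumed, not the paper's algorithm): Adam with the adaptive
# per-coordinate step size clamped, standing in for the boundedness assumption
# described in the abstract. Setting step_cap ~ 1/L is a plausible reading of
# "depends on the Lipschitz constant", not a stated result.
import numpy as np

def adam_bounded(grad, x0, alpha=1e-3, beta1=0.9, beta2=0.999,
                 eps=1e-8, step_cap=1.0, n_iters=1000):
    """Run Adam on the gradient oracle `grad`, capping the adaptive step size."""
    x = np.asarray(x0, dtype=float).copy()
    m = np.zeros_like(x)   # first-moment (momentum) estimate
    v = np.zeros_like(x)   # second-moment estimate
    for t in range(1, n_iters + 1):
        g = grad(x)
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g * g
        m_hat = m / (1 - beta1 ** t)            # bias-corrected first moment
        v_hat = v / (1 - beta2 ** t)            # bias-corrected second moment
        step = alpha / (np.sqrt(v_hat) + eps)   # adaptive per-coordinate step
        step = np.minimum(step, step_cap)       # enforce the boundedness assumption
        x = x - step * m_hat
    return x

# Usage: minimize f(x) = 0.5 * ||x||^2, whose gradient has Lipschitz constant L = 1,
# so step_cap = 1.0 corresponds to a cap of 1/L under the assumption above.
x_star = adam_bounded(lambda x: x, x0=np.ones(5), step_cap=1.0)
```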
| Original language | English |
|---|---|
| Pages (from-to) | 225-240 |
| Number of pages | 16 |
| Journal | Proceedings of Machine Learning Research |
| Volume | 129 |
| Publication status | Published - 1 Jan 2020 |
| Event | 12th Asian Conference on Machine Learning, ACML 2020 - Bangkok, Thailand |
| Duration | 18 Nov 2020 → 20 Nov 2020 |
Keywords
- Adaptive gradient methods
- Kurdyka-Łojasiewicz inequality
- Nonconvex optimization