Abstract
This paper presents a new regularization approach, termed OpReg-Boost, to boost the convergence of online optimization and learning algorithms. In particular, the paper considers online algorithms for optimization problems with a time-varying (weakly) convex composite cost. For a given online algorithm, OpReg-Boost learns the closest algorithmic map that yields linear convergence; to this end, the learning procedure hinges on the concept of operator regression. We show how to formalize the operator regression problem and propose a computationally efficient Peaceman-Rachford solver that exploits a closed-form solution of simple quadratically constrained quadratic programs (QCQPs). Simulation results showcase the superior performance of OpReg-Boost compared with the classical forward-backward algorithm, FISTA, and Anderson acceleration.
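The paper's operator-regression solver is not reproduced here; as a minimal sketch of the classical Peaceman-Rachford splitting that the proposed solver builds on, the snippet below applies the iteration to an illustrative lasso-type composite problem. The problem data `A`, `b`, `lam` and the step size `gamma` are hypothetical choices for demonstration, not taken from the paper.

```python
import numpy as np

# Illustrative composite problem: min_x f(x) + g(x) with
#   f(x) = 0.5 * ||A x - b||^2   (smooth, quadratic)
#   g(x) = lam * ||x||_1         (nonsmooth)
# All data below are made-up for the sketch.
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))
b = rng.standard_normal(30)
lam, gamma = 0.1, 0.5

# prox of gamma*f for a quadratic f: solve (I + gamma A^T A) x = z + gamma A^T b
M = np.eye(10) + gamma * A.T @ A
def prox_f(z):
    return np.linalg.solve(M, z + gamma * A.T @ b)

# prox of gamma*g: soft-thresholding
def prox_g(z):
    return np.sign(z) * np.maximum(np.abs(z) - gamma * lam, 0.0)

# Peaceman-Rachford iteration: z <- (2 prox_g - I)(2 prox_f - I) z
z = np.zeros(10)
for _ in range(200):
    x = prox_f(z)            # resolvent step on f
    y = prox_g(2 * x - z)    # resolvent step on g at the reflected point
    z = z + 2 * (y - x)      # equivalent to applying both reflected resolvents

print("approximate minimizer:", prox_f(z))
```

In OpReg-Boost the Peaceman-Rachford machinery is instead applied to the operator regression problem, with the per-iteration subproblems solved in closed form as simple QCQPs; the sketch above only conveys the structure of the splitting itself.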
| Original language | English |
|---|---|
| Pages (from-to) | 138-152 |
| Number of pages | 15 |
| Journal | Proceedings of Machine Learning Research |
| Volume | 168 |
| Publication status | Published - 1 Jan 2022 |
| Event | 4th Annual Learning for Dynamics and Control Conference (L4DC 2022), Stanford, United States, 23 Jun 2022 – 24 Jun 2022 |
Keywords
- acceleration
- online optimization
- operator regression
- weakly convex