OpReg-Boost: Learning to Accelerate Online Algorithms with Operator Regression

Nicola Bastianello, Andrea Simonetto, Emiliano Dall’Anese

Research output: Contribution to journal › Conference article › peer-review

Abstract

This paper presents a new regularization approach, termed OpReg-Boost, to boost the convergence of online optimization and learning algorithms. In particular, the paper considers online algorithms for optimization problems with a time-varying (weakly) convex composite cost. For a given online algorithm, OpReg-Boost learns the closest algorithmic map that yields linear convergence; to this end, the learning procedure hinges on the concept of operator regression. We show how to formalize the operator regression problem and propose a computationally efficient Peaceman-Rachford solver that exploits closed-form solutions of simple quadratically constrained quadratic programs (QCQPs). Simulation results showcase the superior performance of OpReg-Boost with respect to the classical forward-backward algorithm, FISTA, and Anderson acceleration.
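To make the operator-regression idea concrete, below is a minimal, self-contained sketch, not the authors' implementation. Given samples (x_i, y_i = T(x_i)) of an algorithmic map T, it computes the closest values ŷ_i satisfying the pairwise contraction constraints ||ŷ_i - ŷ_j|| <= L ||x_i - x_j|| with L < 1, which is what yields linear convergence of the fitted map. Because the objective is the squared distance to the samples, the fit is the Euclidean projection of the samples onto the intersection of the pairwise constraint sets; this sketch computes it with Dykstra's algorithm and the closed-form pairwise projection, whereas the paper proposes a Peaceman-Rachford solver for the QCQP formulation. All names, parameters, and the choice of Dykstra's method are illustrative assumptions.

import itertools
import numpy as np

def project_pair(u, v, c):
    """Closed-form projection of (u, v) onto {(a, b): ||a - b|| <= c}.

    The midpoint of (u, v) is preserved and the difference is shrunk to
    length c; this is the simple closed form the sketch relies on.
    """
    d = u - v
    nd = np.linalg.norm(d)
    if nd <= c:
        return u, v
    shift = 0.5 * (nd - c) / nd
    return u - shift * d, v + shift * d

def opreg_fit(X, Y, L=0.9, n_iter=200):
    """Fit contraction-constrained values (illustrative operator regression).

    Projects the samples Y onto the intersection of all pairwise
    contraction constraints via Dykstra's algorithm, which keeps one
    correction term per constraint to converge to the true projection.
    """
    n = len(X)
    pairs = list(itertools.combinations(range(n), 2))
    Yh = Y.astype(float).copy()
    P = {p: np.zeros((2, Y.shape[1])) for p in pairs}  # Dykstra corrections
    for _ in range(n_iter):
        for (i, j) in pairs:
            c = L * np.linalg.norm(X[i] - X[j])
            u0 = Yh[i] + P[(i, j)][0]
            v0 = Yh[j] + P[(i, j)][1]
            u, v = project_pair(u0, v0, c)
            P[(i, j)][0], P[(i, j)][1] = u0 - u, v0 - v
            Yh[i], Yh[j] = u, v
    return Yh

# Usage: regularize noisy samples of a nearly non-contractive 1-D map,
# forcing the fitted map to be an L-contraction (hypothetical data).
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(20, 1))
Y = 0.99 * X + 0.05 * rng.standard_normal((20, 1))
Y_hat = opreg_fit(X, Y, L=0.9)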

Original language: English
Pages (from-to): 138-152
Number of pages: 15
Journal: Proceedings of Machine Learning Research
Volume: 168
Publication status: Published - 1 Jan 2022
Event: 4th Annual Learning for Dynamics and Control Conference, L4DC 2022 - Stanford, United States
Duration: 23 Jun 2022 - 24 Jun 2022

Keywords

  • acceleration
  • online optimization
  • operator regression
  • weakly convex

