Dual Prediction-Correction Methods for Linearly Constrained Time-Varying Convex Programs

Research output: Contribution to journal › Article › peer-review

Abstract

Devising efficient algorithms to solve continuously-varying strongly convex optimization programs is key in many applications, from control systems to signal processing and machine learning. In this context, solving means finding and tracking the optimizer trajectory of the continuously-varying convex program. Recently, a novel prediction-correction methodology has been put forward to set up iterative algorithms that sample the continuously-varying optimization program at discrete time steps, perform a limited number of computations to correct their approximate optimizer against the newly sampled problem, and predict how the optimizer will change at the next time step. Prediction-correction algorithms have been shown to outperform more classical, correction-only strategies: typically, prediction-correction methods attain asymptotic tracking errors of order h², where h is the sampling period, whereas correction-only strategies attain errors of order h. Up to now, prediction-correction algorithms have been developed in the primal space, for both unconstrained and simply constrained convex programs. In this paper, we show how to tackle linearly constrained continuously-varying problems by prediction-correction in the dual space, and we prove asymptotic error bounds similar to those of the primal versions.
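The prediction-correction idea described in the abstract can be illustrated on a toy scalar problem. The sketch below is hypothetical and is not the paper's dual-space algorithm: it tracks the minimizer of f(x; t) = ½(x − r(t))² with r(t) = sin(t), comparing a correction-only scheme (gradient steps on each newly sampled problem) against a prediction-correction scheme that additionally predicts the optimizer's drift via a first-order finite difference. The function r, the step size alpha, and the sampling period h are all illustrative choices.

```python
import numpy as np

# Toy time-varying problem: f(x; t) = 0.5 * (x - r(t))^2, optimizer x*(t) = r(t).
# This is an illustrative sketch, not the dual algorithm of the paper.
h = 0.1                      # sampling period
T = 200                      # number of samples
alpha = 0.5                  # gradient step size (contraction factor 1 - alpha)
r = lambda t: np.sin(t)      # optimizer trajectory to track

def correct(x, t, steps=1):
    """Correction: gradient step(s) on the newly sampled problem f(.; t)."""
    for _ in range(steps):
        x -= alpha * (x - r(t))   # grad_x f(x; t) = x - r(t)
    return x

x_co, x_pc = 0.0, 0.0        # correction-only / prediction-correction iterates
err_co, err_pc = [], []
for k in range(1, T):
    t = k * h
    # Correct both iterates against the problem sampled at time t.
    x_co = correct(x_co, t)
    x_pc = correct(x_pc, t)
    err_co.append(abs(x_co - r(t)))
    err_pc.append(abs(x_pc - r(t)))
    # Predict the optimizer drift toward t + h with a causal (backward)
    # finite difference: r(t) - r(t - h) ~ h * r'(t). The residual of this
    # prediction is O(h^2), which is why the tracking error drops an order.
    x_pc += r(t) - r(t - h)

print(f"steady-state error, correction-only:       {max(err_co[-50:]):.4f}")
print(f"steady-state error, prediction-correction: {max(err_pc[-50:]):.4f}")
```

With these settings the correction-only error settles at roughly order h, while the prediction-correction error settles at roughly order h², matching the asymptotic behavior the abstract describes for the primal setting.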

Original language: English
Article number: 8502778
Pages (from-to): 3355-3361
Number of pages: 7
Journal: IEEE Transactions on Automatic Control
Volume: 64
Issue number: 8
DOIs
Publication status: Published - 1 Aug 2019
Externally published: Yes

Keywords

  • Dual ascent
  • parametric programming
  • prediction-correction methods
  • time-varying convex optimization
