Inexact Online Proximal-gradient Method for Time-varying Convex Optimization

Amirhossein Ajalloeian, Andrea Simonetto, Emiliano Dall'Anese

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

This paper considers an online proximal-gradient method to track the minimizers of a composite convex function that may continuously evolve over time. The online proximal-gradient method is "inexact," in the sense that: (i) it relies on approximate first-order information for the smooth component of the cost; and (ii) the proximal operator (with respect to the non-smooth term) may be computed only up to a certain precision. Under suitable assumptions, convergence of the error iterates is established for strongly convex cost functions. When the cost is not strongly convex, the dynamic regret is analyzed instead, under the additional assumption that the feasibility sets are compact. The bounds are expressed in terms of the cumulative error and the path length of the optimal solutions, and they suggest how to allocate computational resources to strike a balance between tracking performance and precision in the gradient computation and in the proximal operator.
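To make the setting concrete, the following is a minimal sketch (not the paper's implementation) of one inexact online proximal-gradient step per time index. The LASSO-style time-varying cost f_t(x) = ½‖A_t x − b_t‖², the ℓ1 regularizer, the step size, and the bounded random perturbations standing in for the gradient and prox errors are all illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def soft_threshold(v, tau):
    # Exact proximal operator of tau * ||.||_1 (soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def inexact_online_proxgrad(A_seq, b_seq, lam=0.1, alpha=0.05,
                            grad_err=1e-3, prox_err=1e-3, seed=0):
    """One inexact proximal-gradient step per time t, tracking the
    minimizer of f_t(x) + g(x) with f_t(x) = 0.5*||A_t x - b_t||^2
    and g(x) = lam * ||x||_1 (illustrative choice of cost)."""
    rng = np.random.default_rng(seed)
    n = A_seq[0].shape[1]
    x = np.zeros(n)
    trajectory = []
    for A_t, b_t in zip(A_seq, b_seq):
        grad = A_t.T @ (A_t @ x - b_t)
        # (i) approximate first-order information:
        # gradient corrupted by an error of norm <= grad_err.
        e = rng.normal(size=n)
        e *= grad_err / (np.linalg.norm(e) + 1e-12)
        # (ii) prox computed only up to a given precision:
        # modeled here as a perturbation of norm <= prox_err.
        p = rng.normal(size=n)
        p *= prox_err / (np.linalg.norm(p) + 1e-12)
        x = soft_threshold(x - alpha * (grad + e), alpha * lam) + p
        trajectory.append(x.copy())
    return trajectory
```

Shrinking `grad_err` and `prox_err` toward zero recovers the exact online proximal-gradient method; the paper's bounds quantify how the cumulative error and the path length of the optimizers enter the tracking/regret guarantees.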

Original language: English
Title of host publication: 2020 American Control Conference, ACC 2020
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 2850-2857
Number of pages: 8
ISBN (Electronic): 9781538682661
Publication status: Published - 1 Jul 2020
Externally published: Yes
Event: 2020 American Control Conference, ACC 2020 - Denver, United States
Duration: 1 Jul 2020 – 3 Jul 2020

Publication series

Name: Proceedings of the American Control Conference
Volume: 2020-July
ISSN (Print): 0743-1619

Conference

Conference: 2020 American Control Conference, ACC 2020
Country/Territory: United States
City: Denver
Period: 1/07/20 – 3/07/20

