A fully stochastic primal-dual algorithm

Research output: Contribution to journal › Article › peer-review

Abstract

A new stochastic primal-dual algorithm for solving a composite optimization problem is proposed. It is assumed that all the functions/operators that enter the optimization problem are given as statistical expectations. These expectations are unknown but revealed across time through i.i.d. realizations. The proposed algorithm is proven to converge to a saddle point of the Lagrangian function. Within the framework of monotone operator theory, the convergence proof relies on recent results on the stochastic Forward-Backward algorithm involving random monotone operators. An example of convex optimization under stochastic linear constraints is considered.
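The setting described in the abstract can be illustrated with a minimal sketch. The toy problem below, and all names in it, are my own assumptions, not the paper's exact scheme: minimize E[0.5‖x − c‖²] subject to E[a]ᵀx ≤ E[b], where c, a, and b are observed only through i.i.d. noisy samples, and an Arrow-Hurwicz-type stochastic primal-dual iteration with decreasing step sizes is run on the samples.

```python
import numpy as np

# Toy instance (hypothetical, for illustration only):
# minimize E[0.5*||x - c||^2]  subject to  E[a]^T x <= E[b],
# with c, a, b revealed through i.i.d. noisy realizations.
rng = np.random.default_rng(0)
d = 3
c_true = np.array([1.0, 2.0, 0.5])   # unconstrained minimizer (infeasible here)
a_true = np.array([1.0, 1.0, 1.0])
b_true = 0.0

x = np.zeros(d)      # primal iterate
lam = 0.0            # dual iterate (multiplier of the inequality constraint)
x_bar = np.zeros(d)  # running Polyak averages, often used for saddle problems
lam_bar = 0.0

K = 50000
for k in range(1, K + 1):
    gamma = 1.0 / np.sqrt(k)                    # decreasing step size
    # fresh i.i.d. realizations of the unknown expectations
    c_k = c_true + rng.normal(scale=0.5, size=d)
    a_k = a_true + rng.normal(scale=0.5, size=d)
    b_k = b_true + rng.normal(scale=0.5)
    # constraint residual evaluated at the current x (keeps the dual step unbiased)
    g = a_k @ x - b_k
    # primal stochastic gradient step on the Lagrangian
    x = x - gamma * ((x - c_k) + lam * a_k)
    # dual stochastic ascent step, projected onto lam >= 0
    lam = max(0.0, lam + gamma * g)
    # update running averages
    x_bar += (x - x_bar) / k
    lam_bar += (lam - lam_bar) / k

# For this instance the constraint is active at the saddle point:
# x* = c_true - lam* * a_true with a_true^T x* = 0, i.e. lam* = 3.5/3.
print(np.round(x_bar, 2), round(lam_bar, 2))
```

The averaged iterates settle near the saddle point of the Lagrangian; the actual paper proves convergence of the algorithm itself via stochastic Forward-Backward results for random monotone operators, which this sketch does not attempt to reproduce.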

Original language: English
Pages (from-to): 701-710
Number of pages: 10
Journal: Optimization Letters
Volume: 15
Issue number: 2
DOIs
Publication status: Published - 1 Mar 2021

Keywords

  • Primal dual algorithm
  • Stochastic optimization
