Ergodic convergence of a stochastic proximal point algorithm

Research output: Contribution to journal › Article › peer-review

Abstract

The purpose of this paper is to establish the almost sure weak ergodic convergence of a sequence of iterates (x_n) given by x_{n+1} = (I + λ_n A(ξ_{n+1}, ·))^{-1}(x_n), where (A(s, ·) : s ∈ E) is a collection of maximal monotone operators on a separable Hilbert space, (ξ_n) is an independent identically distributed sequence of random variables on E, and (λ_n) is a positive sequence in ℓ² \ ℓ¹. The weighted averaged sequence of iterates is shown to converge weakly to a zero (assumed to exist) of the Aumann expectation E(A(ξ_1, ·)), under the assumption that the latter is maximal. We consider applications to stochastic optimization problems of the form min E(f(ξ_1, x)) w.r.t. x ∈ ∩_{i=1}^m X_i, where f is a normal convex integrand and (X_i) is a collection of closed convex sets. In this case, the iterations are closely related to a stochastic proximal algorithm recently proposed by Wang and Bertsekas [Incremental Constraint Projection-Proximal Methods for Nonsmooth Convex Optimization, Tech. report, MIT, Cambridge, MA, 2013].
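The iteration described in the abstract can be sketched in a few lines. The following is a minimal illustration, not the paper's method in full generality: it uses an assumed toy operator A(ξ, x) = x − ξ (i.e. f(ξ, x) = (x − ξ)²/2), for which the resolvent (I + λA(ξ, ·))^{-1} has the closed form (x + λξ)/(1 + λ), and the zero of the expected operator is E(ξ_1). The step sizes λ_n = 1/(n+1) lie in ℓ² \ ℓ¹ as required, and the λ-weighted ergodic average of the iterates is returned.

```python
import random

def stochastic_prox_point(prox, sample, lambdas, x0):
    """Run x_{n+1} = (I + lambda_n A(xi_{n+1}, .))^{-1}(x_n) and return
    the lambda-weighted ergodic average of the iterates."""
    x = x0
    num, den = 0.0, 0.0
    for lam in lambdas:
        xi = sample()
        x = prox(lam, xi, x)   # resolvent (proximal) step
        num += lam * x         # weighted averaging of the iterates
        den += lam
    return num / den

# Toy instance (an assumption for illustration, not from the paper):
# f(xi, x) = (x - xi)^2 / 2, so A(xi, x) = x - xi and the zero of the
# expected operator E[A(xi_1, .)] is E[xi_1].
def prox_quadratic(lam, xi, x):
    # argmin_z (z - xi)^2/2 + (z - x)^2/(2*lam) = (x + lam*xi)/(1 + lam)
    return (x + lam * xi) / (1.0 + lam)

random.seed(0)
mu = 3.0
lambdas = (1.0 / (n + 1) for n in range(200_000))  # in l^2 \ l^1
xbar = stochastic_prox_point(prox_quadratic,
                             lambda: random.gauss(mu, 1.0),
                             lambdas, x0=0.0)
# xbar approaches mu = E[xi_1], the zero of the expected operator
```

Note that it is the averaged sequence, not the last iterate, whose weak convergence the paper establishes; the ℓ² \ ℓ¹ condition makes the raw iterates noisy while their weighted average stabilizes.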

Original language: English
Pages (from-to): 2235-2260
Number of pages: 26
Journal: SIAM Journal on Optimization
Volume: 26
Issue number: 4
DOIs
Publication status: Published - 1 Jan 2016
Externally published: Yes

Keywords

  • Convex programming
  • Proximal point algorithm
  • Stochastic approximation
