Geom-Spider-EM: Faster variance reduced stochastic expectation maximization for nonconvex finite-sum optimization

Research output: Contribution to journal › Conference article › peer-review

Abstract

The Expectation Maximization (EM) algorithm is a key reference for inference in latent variable models; unfortunately, its computational cost is prohibitive in the large-scale learning setting. In this paper, we propose an extension of the Stochastic Path-Integrated Differential EstimatoR EM (SPIDER-EM) and derive complexity bounds for this novel algorithm, designed to solve smooth nonconvex finite-sum optimization problems. We show that it reaches the same state-of-the-art complexity bounds as SPIDER-EM, and we provide conditions for a linear rate of convergence. Numerical results support our findings.
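To illustrate the variance-reduction idea underlying SPIDER-type methods, here is a minimal sketch of a SPIDER-style recursive gradient estimator on a toy finite-sum least-squares problem. This is not the paper's Geom-Spider-EM algorithm (which operates on sufficient statistics in the E-step); all names, step sizes, and the problem itself are illustrative assumptions.

```python
import numpy as np

# Toy finite-sum problem: f(x) = (1/n) sum_i 0.5 * (a_i^T x - b_i)^2.
# All parameters below are illustrative, not from the paper.
rng = np.random.default_rng(0)
n, d = 200, 5
A = rng.normal(size=(n, d))
x_true = rng.normal(size=d)
b = A @ x_true

def avg_grad(x, idx):
    # Average gradient of the selected component functions f_i.
    r = A[idx] @ x - b[idx]
    return A[idx].T @ r / len(idx)

x = np.zeros(d)
x_prev = x.copy()
step, epoch_len, batch = 0.05, 20, 10
v = np.zeros(d)  # SPIDER estimate of the full gradient
for k in range(200):
    if k % epoch_len == 0:
        # Periodic full-gradient refresh anchors the estimator.
        v = avg_grad(x, np.arange(n))
    else:
        # SPIDER recursion: update the estimate with a minibatch
        # difference, whose variance shrinks as iterates get closer.
        idx = rng.choice(n, size=batch, replace=False)
        v = v + avg_grad(x, idx) - avg_grad(x_prev, idx)
    x_prev = x.copy()
    x -= step * v

print(float(np.linalg.norm(x - x_true)))
```

The key contrast with plain stochastic gradients is the recursion `v + g(x_k) - g(x_{k-1})` between full refreshes: the correction term has variance proportional to the squared distance between consecutive iterates, which vanishes near a stationary point. In SPIDER-EM the same construction is applied to the conditional-expectation sufficient statistics rather than to gradients.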

Original language: English
Pages (from-to): 3135-3139
Number of pages: 5
Journal: ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings
Volume: 2021-June
DOIs
Publication status: Published - 1 Jan 2021
Event: 2021 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2021 - Virtual, Toronto, Canada
Duration: 6 Jun 2021 - 11 Jun 2021

Keywords

  • Expectation Maximization
  • Large scale learning
  • Latent variable analysis
  • Nonconvex stochastic optimization
  • Variance reduction

