
On-line expectation-maximization algorithm for latent data models

Research output: Contribution to journal › Article › peer-review

Abstract

We propose a generic on-line (also sometimes called adaptive or recursive) version of the expectation-maximization (EM) algorithm applicable to latent variable models of independent observations. Compared with the algorithm of Titterington, this approach is more directly connected to the usual EM algorithm and does not rely on integration with respect to the complete-data distribution. The resulting algorithm is usually simpler and is shown to achieve convergence to the stationary points of the Kullback-Leibler divergence between the marginal distribution of the observation and the model distribution at the optimal rate, i.e. that of the maximum likelihood estimator. In addition, the approach proposed is also suitable for conditional (or regression) models, as illustrated in the case of the mixture of linear regressions model.
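The recursion described in the abstract can be illustrated on a toy latent-variable model. The sketch below is an assumption-laden illustration, not the authors' exact algorithm or their mixture-of-regressions example: it runs an online EM pass over a stream from a two-component Gaussian mixture with known unit variances, where the E-step updates running complete-data sufficient statistics by stochastic approximation (step sizes `(n + 10) ** -0.6`, an arbitrary choice with exponent in the usual (1/2, 1] range) and the M-step maps those statistics back to parameter estimates.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stream from a two-component 1-D Gaussian mixture with unit
# variances (illustrative setup: true weight 0.4, true means -2 and 3).
true_w, true_mu = 0.4, np.array([-2.0, 3.0])
comp = (rng.random(20000) >= true_w).astype(int)  # latent component labels
y_stream = true_mu[comp] + rng.standard_normal(20000)

# Initial parameters: mixing weight of component 0 and the two means.
w, mu = 0.5, np.array([0.0, 1.0])

# Running complete-data sufficient statistics:
# s0[k] tracks E[1{Z=k}], s1[k] tracks E[1{Z=k} Y].
s0 = np.array([w, 1.0 - w])
s1 = mu * s0

for n, y in enumerate(y_stream, start=1):
    gamma = (n + 10) ** -0.6  # decreasing step size (assumed schedule)

    # E-step approximation: posterior responsibilities of y under the
    # current parameters (unit-variance Gaussian likelihoods).
    post = np.array([w, 1.0 - w]) * np.exp(-0.5 * (y - mu) ** 2)
    post /= post.sum()

    # Stochastic-approximation update of the sufficient statistics.
    s0 = (1.0 - gamma) * s0 + gamma * post
    s1 = (1.0 - gamma) * s1 + gamma * post * y

    # M-step: parameters as an explicit function of the statistics.
    w = s0[0] / s0.sum()
    mu = s1 / s0

print(w, mu)  # estimates of the mixing weight and the two component means
```

In practice the raw parameter trajectory can be averaged (Polyak-Ruppert averaging, one of the keywords below) to attain the optimal asymptotic rate claimed in the abstract; the sketch omits this for brevity.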

Original language: English
Pages (from-to): 593-613
Number of pages: 21
Journal: Journal of the Royal Statistical Society. Series B: Statistical Methodology
Volume: 71
Issue number: 3
DOIs
Publication status: Published - 1 Jun 2009

Keywords

  • Adaptive algorithms
  • Expectation-maximization
  • Latent data models
  • Mixture of regressions
  • On-line estimation
  • Polyak-Ruppert averaging
  • Stochastic approximation
