Approximate Inference and Learning of State Space Models with Laplace Noise

Julian Neri, Philippe Depalle, Roland Badeau

Research output: Contribution to journal › Article › peer-review

Abstract

State space models have been extensively applied to model and control dynamical systems in disciplines including neuroscience, target tracking, and audio processing. A common modeling assumption is that both the state and data noise are Gaussian, because this simplifies the estimation of the system's state and model parameters. However, in many real-world scenarios where the noise is heavy-tailed or includes outliers, this assumption does not hold and the performance of the model degrades. In this paper, we present a new approximate inference algorithm for state space models with Laplace-distributed multivariate data that is robust to a wide range of non-Gaussian noise. Exact inference is combined with an expectation propagation algorithm, leading to filtering and smoothing that outperform existing approximate inference methods for Laplace-distributed data while retaining speed comparable to that of the Kalman filter. Further, we present a maximum posterior expectation-maximization (EM) algorithm that learns the parameters of the model in an unsupervised way, automatically avoids over-fitting the data, and provides better model estimation than existing methods for the Gaussian model. The quality of the inference and learning algorithms is exemplified through a diverse set of experiments and an application to non-linear tracking of audio frequency.
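The abstract contrasts the proposed Laplace-noise model with the classical linear-Gaussian baseline. The paper's expectation-propagation algorithm is not reproduced here; as a hedged illustration of the setting only, the sketch below implements the standard Kalman filter and runs it on observations corrupted by Laplace noise, the heavy-tailed regime the paper targets. All model matrices and noise scales are illustrative assumptions, not values from the paper.

```python
import numpy as np

def kalman_filter(ys, A, C, Q, R, mu0, P0):
    """Classical Kalman filter for the linear-Gaussian state space model
        x_t = A x_{t-1} + w_t,  w_t ~ N(0, Q)
        y_t = C x_t + v_t,      v_t ~ N(0, R).
    Returns the filtered means and covariances for each time step."""
    mu, P = mu0, P0
    means, covs = [], []
    for y in ys:
        # Predict step: propagate the state estimate through the dynamics.
        mu_pred = A @ mu
        P_pred = A @ P @ A.T + Q
        # Update step: correct the prediction with the new observation.
        S = C @ P_pred @ C.T + R                 # innovation covariance
        K = P_pred @ C.T @ np.linalg.inv(S)      # Kalman gain
        mu = mu_pred + K @ (y - C @ mu_pred)
        P = (np.eye(P.shape[0]) - K @ C) @ P_pred
        means.append(mu)
        covs.append(P)
    return np.array(means), np.array(covs)

# Demo (illustrative parameters): a latent random walk observed through
# Laplace noise. The Gaussian-model filter still runs, but heavy-tailed
# outliers degrade its estimates -- the mismatch motivating the paper's
# Laplace-noise model.
rng = np.random.default_rng(0)
T = 200
x = np.cumsum(rng.normal(0.0, 0.1, T))      # latent random-walk state
y = x + rng.laplace(0.0, 0.5, T)            # Laplace-distributed observations
means, covs = kalman_filter(
    y.reshape(-1, 1, 1),
    A=np.eye(1), C=np.eye(1), Q=0.01 * np.eye(1), R=0.5 * np.eye(1),
    mu0=np.zeros((1, 1)), P0=np.eye(1),
)
```

Under this mismatch the Gaussian filter's quadratic update over-weights outliers; the paper's contribution is an EP-based scheme that handles the Laplace likelihood directly at similar cost.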

Original language: English
Article number: 9411735
Pages (from-to): 3176-3189
Number of pages: 14
Journal: IEEE Transactions on Signal Processing
Volume: 69
DOIs
Publication status: Published - 1 Jan 2021

Keywords

  • Bayesian inference
  • EM algorithm
  • expectation propagation
  • heavy-tailed noise
  • machine learning
  • time series
