High-dimensional Bayesian inference via the unadjusted Langevin algorithm

Research output: Contribution to journal › Article › peer-review

Abstract

We consider in this paper the problem of sampling from a high-dimensional probability distribution π having a density w.r.t. the Lebesgue measure on ℝᵈ, known up to a normalization constant: x ↦ π(x) = e^{−U(x)} / ∫_{ℝᵈ} e^{−U(y)} dy. Such a problem naturally occurs, for example, in Bayesian inference and machine learning. Under the assumptions that U is continuously differentiable, ∇U is globally Lipschitz and U is strongly convex, we obtain non-asymptotic bounds on the convergence to stationarity, in Wasserstein distance of order 2 and in total variation distance, of the sampling method based on the Euler discretization of the Langevin stochastic differential equation, for both constant and decreasing step sizes. The dependence of these bounds on the dimension of the state space is explicit. The convergence of an appropriately weighted empirical measure is also investigated, and bounds on the mean square error together with an exponential deviation inequality are reported for measurable, bounded functions. An application to Bayesian inference for binary regression is presented to support our claims.
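The sampling method studied in the paper is the Euler discretization of the Langevin stochastic differential equation dX_t = −∇U(X_t) dt + √2 dB_t, i.e. the recursion X_{k+1} = X_k − γ ∇U(X_k) + √(2γ) Z_{k+1} with Z_{k+1} ~ N(0, I_d). As a minimal sketch in Python, assuming a constant step size and an illustrative target (a standard Gaussian, U(x) = ‖x‖²/2, which is strongly convex with globally Lipschitz gradient), the snippet below implements this recursion; the step size and iteration count are arbitrary choices, not the paper's experimental setup.

    import numpy as np

    def ula(grad_U, x0, step, n_iter, rng=None):
        # Unadjusted Langevin algorithm: Euler discretization of
        # dX_t = -grad U(X_t) dt + sqrt(2) dB_t, with a constant step size.
        rng = np.random.default_rng() if rng is None else rng
        d = x0.shape[0]
        x = x0.copy()
        samples = np.empty((n_iter, d))
        for k in range(n_iter):
            # X_{k+1} = X_k - step * grad U(X_k) + sqrt(2 * step) * Z,  Z ~ N(0, I_d)
            x = x - step * grad_U(x) + np.sqrt(2.0 * step) * rng.standard_normal(d)
            samples[k] = x
        return samples

    # Illustrative target: standard Gaussian in d = 50 dimensions, so grad U(x) = x.
    draws = ula(grad_U=lambda x: x, x0=np.zeros(50), step=0.01, n_iter=10_000)

The same loop applies unchanged to, e.g., the negative log-posterior of the Bayesian binary-regression model used in the paper's illustration. Note that no Metropolis correction is applied: the chain targets a biased approximation of π, whose error the paper's bounds quantify in terms of the step size and the dimension d.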

Original language: English
Pages (from-to): 2854–2882
Number of pages: 29
Journal: Bernoulli
Volume: 25
Issue number: 4A
Publication status: Published - 1 Jan 2019

Keywords

  • Langevin diffusion
  • Markov chain Monte Carlo
  • Metropolis adjusted Langevin algorithm
  • Rate of convergence
  • Total variation distance
