Analysis of Langevin Monte Carlo via convex optimization

Alain Durmus, Szymon Majewski, Blazej Miasojedow

Research output: Contribution to journal › Article › peer-review

Abstract

In this paper, we provide new insights into the Unadjusted Langevin Algorithm. We show that this method can be formulated as a first-order optimization algorithm for an objective functional defined on the Wasserstein space of order 2. Using this interpretation and techniques borrowed from convex optimization, we give a non-asymptotic analysis of this method for sampling from a log-concave smooth target distribution on R^d. Based on this interpretation, we propose two new methods for sampling from a non-smooth target distribution. These new algorithms are natural extensions of the Stochastic Gradient Langevin Dynamics (SGLD) algorithm, which is a popular extension of the Unadjusted Langevin Algorithm for large-scale Bayesian inference. Using the optimization perspective, we provide a non-asymptotic convergence analysis for the newly proposed methods.
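For readers unfamiliar with the method the abstract analyzes, the Unadjusted Langevin Algorithm iterates x_{k+1} = x_k - γ ∇U(x_k) + sqrt(2γ) ξ_k, where π(x) ∝ exp(-U(x)) is the target and ξ_k is standard Gaussian noise. The sketch below is illustrative only; the Gaussian target, step size, and burn-in choice are assumptions for the example, not taken from the paper.

```python
import numpy as np

def ula(grad_U, x0, step, n_iters, rng):
    """Unadjusted Langevin Algorithm:
    x_{k+1} = x_k - step * grad_U(x_k) + sqrt(2 * step) * noise."""
    x = np.asarray(x0, dtype=float)
    samples = np.empty((n_iters,) + x.shape)
    for k in range(n_iters):
        noise = rng.standard_normal(x.shape)
        x = x - step * grad_U(x) + np.sqrt(2.0 * step) * noise
        samples[k] = x
    return samples

# Example target: standard Gaussian on R^2, i.e. U(x) = ||x||^2 / 2,
# so grad_U(x) = x. This is log-concave and smooth, matching the
# setting described in the abstract.
rng = np.random.default_rng(0)
samples = ula(lambda x: x, x0=np.zeros(2), step=0.05, n_iters=20000, rng=rng)
# After discarding a burn-in, the empirical mean is close to the
# target mean (0, 0); the step size introduces a small bias, which is
# the kind of non-asymptotic error the paper quantifies.
print(samples[5000:].mean(axis=0))
```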

Original language: English
Journal: Journal of Machine Learning Research
Volume: 20
Publication status: Published - 1 Feb 2019
Externally published: Yes

Keywords

  • Bayesian inference
  • Convex optimization
  • Gradient flow
  • Unadjusted Langevin Algorithm
  • Wasserstein metric
