TY - JOUR
T1 - Bounding the Error of Discretized Langevin Algorithms for Non-Strongly Log-Concave Targets
AU - Dalalyan, Arnak S.
AU - Karagulyan, Avetik
AU - Riou-Durand, Lionel
N1 - Publisher Copyright:
© 2022 Arnak Dalalyan, Avetik Karagulyan, Lionel Riou-Durand.
PY - 2022/9/1
Y1 - 2022/9/1
N2 - In this paper, we provide non-asymptotic upper bounds on the error of sampling from a target density over R^p using three schemes of discretized Langevin diffusions. The first scheme is the Langevin Monte Carlo (LMC) algorithm, the Euler discretization of the Langevin diffusion. The second and third schemes are, respectively, the kinetic Langevin Monte Carlo (KLMC) for differentiable potentials and the kinetic Langevin Monte Carlo for twice-differentiable potentials (KLMC2). The main focus is on target densities that are smooth and log-concave on R^p, but not necessarily strongly log-concave. Bounds on the computational complexity are obtained under two types of smoothness assumptions: the potential has a Lipschitz-continuous gradient, and the potential has a Lipschitz-continuous Hessian matrix. The error of sampling is measured by Wasserstein-q distances. We advocate for the use of a new dimension-adapted scaling in the definition of the computational complexity when Wasserstein-q distances are considered. The obtained results show that the number of iterations needed to achieve a scaled error smaller than a prescribed value depends only polynomially on the dimension.
AB - In this paper, we provide non-asymptotic upper bounds on the error of sampling from a target density over R^p using three schemes of discretized Langevin diffusions. The first scheme is the Langevin Monte Carlo (LMC) algorithm, the Euler discretization of the Langevin diffusion. The second and third schemes are, respectively, the kinetic Langevin Monte Carlo (KLMC) for differentiable potentials and the kinetic Langevin Monte Carlo for twice-differentiable potentials (KLMC2). The main focus is on target densities that are smooth and log-concave on R^p, but not necessarily strongly log-concave. Bounds on the computational complexity are obtained under two types of smoothness assumptions: the potential has a Lipschitz-continuous gradient, and the potential has a Lipschitz-continuous Hessian matrix. The error of sampling is measured by Wasserstein-q distances. We advocate for the use of a new dimension-adapted scaling in the definition of the computational complexity when Wasserstein-q distances are considered. The obtained results show that the number of iterations needed to achieve a scaled error smaller than a prescribed value depends only polynomially on the dimension.
KW - Approximate sampling
KW - Langevin Monte Carlo
KW - log-concave distributions
M3 - Article
AN - SCOPUS:85141802804
SN - 1532-4435
VL - 23
JO - Journal of Machine Learning Research
JF - Journal of Machine Learning Research
ER -