TY - JOUR
T1 - Maximum likelihood estimation of regularization parameters in high-dimensional inverse problems
T2 - An empirical Bayesian approach. Part II: Theoretical analysis
AU - De Bortoli, Valentin
AU - Durmus, Alain
AU - Pereyra, Marcelo
AU - Vidal, Ana Fernandez
N1 - Publisher Copyright:
© 2020 Society for Industrial and Applied Mathematics.
PY - 2020/1/1
Y1 - 2020/1/1
N2 - This paper presents a detailed theoretical analysis of the three stochastic approximation proximal gradient algorithms proposed in our companion paper [A. F. Vidal et al., SIAM J. Imaging Sci., 13 (2020), pp. 1945–1989] to set regularization parameters by marginal maximum likelihood estimation. We prove the convergence of a more general stochastic approximation scheme that includes the three algorithms of [A. F. Vidal et al., SIAM J. Imaging Sci., 13 (2020), pp. 1945–1989] as special cases. This includes asymptotic and nonasymptotic convergence results with natural and easily verifiable conditions, as well as explicit bounds on the convergence rates. Importantly, the theory is also general in that it can be applied to other intractable optimization problems. A main novelty of the work is that the stochastic gradient estimates of our scheme are constructed from inexact proximal Markov chain Monte Carlo samplers. This allows the use of samplers that scale efficiently to large problems and for which we have precise theoretical guarantees.
KW - Empirical Bayes
KW - Image processing
KW - Inverse problems
KW - Markov chain Monte Carlo methods
KW - Proximal algorithms
KW - Statistical inference
KW - Stochastic optimization
U2 - 10.1137/20M1339842
DO - 10.1137/20M1339842
M3 - Article
AN - SCOPUS:85099017049
SN - 1936-4954
VL - 13
SP - 1990
EP - 2028
JO - SIAM Journal on Imaging Sciences
JF - SIAM Journal on Imaging Sciences
IS - 4
ER -