TY - JOUR
T1 - Maximum Entropy Methods for Texture Synthesis
T2 - Theory and Practice
AU - De Bortoli, Valentin
AU - Desolneux, Agnès
AU - Durmus, Alain
AU - Galerne, Bruno
AU - Leclaire, Arthur
N1 - Publisher Copyright:
© 2021 Society for Industrial and Applied Mathematics.
PY - 2020/1/1
Y1 - 2020/1/1
N2 - Recent years have seen the rise of convolutional neural network techniques in exemplar-based image synthesis. These methods often rely on the minimization of some variational formulation on the image space for which the minimizers are assumed to be the solutions of the synthesis problem. In this paper we investigate, both theoretically and experimentally, another framework to deal with this problem using an alternate sampling/minimization scheme. First, we use results from information geometry to assess that our method yields a probability measure which has maximum entropy under some constraints in expectation. Then, we turn to the analysis of our method, and we show, using recent results from the Markov chain literature, that its error can be explicitly bounded with constants which depend polynomially on the dimension even in the nonconvex setting. This includes the case where the constraints are defined via a differentiable neural network. Finally, we present an extensive experimental study of the model, including a comparison with state-of-the-art methods.
KW - Langevin algorithm
KW - Markov chains
KW - convolutional neural network
KW - information geometry
KW - maximum entropy
KW - texture synthesis
UR - https://www.scopus.com/pages/publications/105010077420
U2 - 10.1137/19M1307731
DO - 10.1137/19M1307731
M3 - Article
AN - SCOPUS:105010077420
SN - 2577-0187
VL - 3
SP - 52
EP - 82
JO - SIAM Journal on Mathematics of Data Science
JF - SIAM Journal on Mathematics of Data Science
IS - 1
ER -