Abstract
Denoising diffusion models have driven significant progress in Bayesian inverse problems. Recent approaches use pre-trained diffusion models as priors to solve a wide range of such problems, leveraging only inference-time compute and thereby eliminating the need to retrain task-specific models on the same dataset. To approximate the posterior of a Bayesian inverse problem, a diffusion model samples from a sequence of intermediate posterior distributions, each with an intractable likelihood function. This work proposes a novel mixture approximation of these intermediate distributions. Since direct gradient-based sampling of these mixtures is infeasible due to intractable terms, we propose a practical method based on Gibbs sampling. We validate our approach through extensive experiments on image inverse problems, utilizing both pixel- and latent-space diffusion priors, as well as on source separation with an audio diffusion model.
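To illustrate the Gibbs-sampling idea invoked in the abstract — alternating conditional updates when a mixture cannot be sampled directly — here is a minimal, generic sketch on a toy two-component Gaussian mixture. This is not the paper's algorithm; the mixture weights, means, and standard deviation below are illustrative assumptions. The sampler alternates between drawing the component assignment z given x and drawing x given z.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative toy mixture (assumed values, not from the paper)
weights = np.array([0.3, 0.7])   # mixture weights
means = np.array([-1.0, 1.0])    # component means
std = 1.0                        # shared standard deviation

def gibbs_mixture(n_steps=20000):
    """Gibbs sampling of (x, z): alternate z | x and x | z."""
    x = 0.0
    samples = []
    for _ in range(n_steps):
        # z | x: responsibility of each component at the current x
        logp = np.log(weights) - 0.5 * ((x - means) / std) ** 2
        p = np.exp(logp - logp.max())
        p /= p.sum()
        z = rng.choice(2, p=p)
        # x | z: Gaussian draw from the selected component
        x = rng.normal(means[z], std)
        samples.append(x)
    return np.array(samples)

samples = gibbs_mixture()
```

After enough steps the empirical mean of `samples` approaches the mixture mean 0.3·(−1) + 0.7·1 = 0.4. In the paper's setting, the analogous conditionals involve the intermediate diffusion posteriors, whose likelihoods are intractable; this sketch only conveys the alternating structure.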
| Field | Value |
|---|---|
| Original language | English |
| Pages (from-to) | 26830–26876 |
| Number of pages | 47 |
| Journal | Proceedings of Machine Learning Research |
| Volume | 267 |
| Publication status | Published - 1 Jan 2025 |
| Externally published | Yes |
| Event | 42nd International Conference on Machine Learning, ICML 2025 - Vancouver, Canada |
| Event duration | 13 Jul 2025 → 19 Jul 2025 |