
HEAVY-TAILED DIFFUSION WITH DENOISING LÉVY PROBABILISTIC MODELS

Research output: Chapter in book / report / anthology / collection › Conference contribution › Peer-reviewed

Abstract

Exploring noise distributions beyond Gaussian in diffusion models remains an open challenge. While Gaussian-based models succeed within a unified SDE framework, recent studies suggest that heavy-tailed noise distributions, such as α-stable distributions, may better handle mode collapse and effectively manage datasets exhibiting class imbalance, heavy tails, or prominent outliers. Recently, Yoon et al. (NeurIPS 2023) presented the Lévy-Itô model (LIM), directly extending the SDE-based framework to a class of heavy-tailed SDEs in which the injected noise follows an α-stable distribution, a rich class of heavy-tailed distributions. However, the LIM framework relies on highly involved mathematical techniques with limited flexibility, potentially hindering broader adoption and further development. In this study, instead of starting from the SDE formulation, we extend the denoising diffusion probabilistic model (DDPM) by replacing the Gaussian noise with α-stable noise. Using only elementary proof techniques, the proposed approach, the Denoising Lévy Probabilistic Model (DLPM), boils down to vanilla DDPM with minor modifications. As opposed to the Gaussian case, DLPM and LIM yield different training algorithms and different backward processes, leading to distinct sampling algorithms. These fundamental differences translate favorably for DLPM as compared to LIM: our experiments show improvements in coverage of data distribution tails, better robustness to unbalanced datasets, and improved computation times requiring a smaller number of backward steps.
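The abstract's core idea is swapping the Gaussian noise in DDPM for α-stable noise. The sketch below shows one standard way to draw symmetric α-stable samples, the Chambers-Mallows-Stuck (CMS) method, and a toy heavy-tailed forward noising step in the DDPM style. This is an illustrative assumption on my part: the function `noisy_step`, its coefficients, and the noise schedule are hypothetical placeholders, not the actual DLPM algorithm from the paper.

```python
import numpy as np

def sample_symmetric_alpha_stable(alpha, size, rng=None):
    """Draw symmetric alpha-stable samples (unit scale, zero skew)
    via the Chambers-Mallows-Stuck (CMS) method.
    For alpha = 2 this reduces to a N(0, 2) Gaussian (std = sqrt(2))."""
    rng = np.random.default_rng(rng)
    u = rng.uniform(-np.pi / 2, np.pi / 2, size)  # uniform angle
    w = rng.exponential(1.0, size)                # unit-rate exponential
    return (np.sin(alpha * u) / np.cos(u) ** (1.0 / alpha)
            * (np.cos(u - alpha * u) / w) ** ((1.0 - alpha) / alpha))

def noisy_step(x0, t_frac, alpha=1.8, rng=None):
    """Toy heavy-tailed forward noising step in the DDPM style
    (hypothetical coefficients, NOT the DLPM schedule from the paper).
    Note the 1/alpha power in the noise scale: stable scale parameters
    compose with exponent 1/alpha rather than the Gaussian 1/2."""
    a = np.sqrt(1.0 - t_frac)            # signal coefficient (placeholder)
    sigma = t_frac ** (1.0 / alpha)      # stable noise scale (placeholder)
    eps = sample_symmetric_alpha_stable(alpha, x0.shape, rng)
    return a * x0 + sigma * eps
```

For alpha below 2 the samples have infinite variance, which is exactly the heavy-tailed behavior the paper exploits; setting alpha = 2 recovers the Gaussian case and hence vanilla DDPM-style noising.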

Original language: English
Title of host publication: 13th International Conference on Learning Representations, ICLR 2025
Publisher: International Conference on Learning Representations, ICLR
Pages: 79853-79886
Number of pages: 34
ISBN (electronic): 9798331320850
Publication status: Published - 1 Jan 2025
Event: 13th International Conference on Learning Representations, ICLR 2025 - Singapore, Singapore
Duration: 24 Apr 2025 - 28 Apr 2025

Publication series

Name: 13th International Conference on Learning Representations, ICLR 2025

Conference

Conference: 13th International Conference on Learning Representations, ICLR 2025
Country/Territory: Singapore
City: Singapore
Period: 24/04/25 - 28/04/25
