Don’t Forget Your Inverse DDIM for Image Editing

  • Guillermo Gomez-Trenado
  • Pablo Mesejo
  • Oscar Cordon
  • Stephane Lathuiliere

Research output: Contribution to journal › Article › peer-review

Abstract

The field of text-to-image generation has undergone significant advancements with the introduction of diffusion models. Nevertheless, the challenge of editing real images persists, as most methods are either computationally intensive or produce poor reconstructions. This paper introduces SAGE (Self-Attention Guidance for image Editing)—a novel technique leveraging pre-trained diffusion models for image editing. SAGE builds upon the DDIM algorithm and incorporates a novel guidance mechanism utilizing the self-attention layers of the diffusion U-Net. This mechanism computes a reconstruction objective based on attention maps generated during the inverse DDIM process, enabling efficient reconstruction of unedited regions without the need to precisely reconstruct the entire input image. Thus, SAGE directly addresses the key challenges in image editing. The superiority of SAGE over other methods is demonstrated through quantitative and qualitative evaluations and confirmed by a statistically validated comprehensive user study, in which all 47 surveyed users preferred SAGE over competing methods. Additionally, SAGE ranks as the top-performing method in seven out of ten quantitative analyses and secures second and third places in the remaining three.
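The core idea described above (a reconstruction objective that compares self-attention maps from the editing pass against those recorded during inverse DDIM, restricted to unedited regions) can be illustrated with a minimal NumPy sketch. All names here (`attention_guidance_loss`, the array shapes, the boolean edit mask) are illustrative assumptions, not the paper's actual API; the paper backpropagates such an objective through the diffusion U-Net to guide sampling.

```python
import numpy as np

def attention_guidance_loss(attn_edit, attn_inv, edit_mask):
    """Sketch of an attention-map reconstruction objective.

    attn_edit : (heads, tokens, tokens) self-attention maps from the
                current editing/denoising pass.
    attn_inv  : (heads, tokens, tokens) maps stored during inverse DDIM.
    edit_mask : (tokens,) boolean, True where the user wants changes.

    Unedited query tokens are encouraged to reproduce the attention
    pattern observed during inversion; edited tokens are left free.
    (Hypothetical formulation for illustration only.)
    """
    keep = ~edit_mask  # query rows that must match the inversion pass
    diff = attn_edit[:, keep, :] - attn_inv[:, keep, :]
    return float(np.mean(diff ** 2))
```

In a full pipeline, the gradient of such a loss with respect to the latent would be added to the DDIM update at each step, steering the sample to preserve unedited regions without requiring an exact reconstruction of the whole input.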

Original language: English
Pages (from-to): 10-18
Number of pages: 9
Journal: IEEE Computational Intelligence Magazine
Volume: 20
Issue number: 3
DOIs
Publication status: Published - 1 Jan 2025
