
Face Aging via Diffusion-based Editing

  • Xiangyi Chen
  • Stéphane Lathuilière

Research output: Contribution to conference › Paper › peer-review

Abstract

In this paper, we address the problem of face aging: generating past or future facial images by incorporating age-related changes into a given face. Previous aging methods rely solely on human facial image datasets and are thus constrained by their inherent scale and bias. This restricts them to a limited generatable age range and prevents them from handling large age gaps. We propose FADING, a novel approach to address Face Aging via DIffusion-based editiNG. We go beyond existing methods by leveraging the rich prior of large-scale language-image diffusion models. First, we specialize a pretrained diffusion model for the task of face age editing using an age-aware fine-tuning scheme. Next, we invert the input image to latent noise and obtain optimized null-text embeddings. Finally, we perform text-guided local age editing via attention control. Quantitative and qualitative analyses demonstrate that our method outperforms existing approaches with respect to aging accuracy, attribute preservation, and aging quality.
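The abstract outlines a three-stage pipeline: age-aware fine-tuning, inversion to latent noise with optimized null-text embeddings, and attention-controlled age editing. A minimal structural sketch of that pipeline is given below, using dummy NumPy arrays in place of a real diffusion model; all function names, shapes, and the prompt template are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def age_aware_finetune(model_weights, face_batch, ages):
    """Stage 1 (schematic): specialize a pretrained diffusion model with
    an age-aware fine-tuning scheme. Here a no-op placeholder; the real
    method updates the model on (face image, age) pairs."""
    return model_weights

def invert_to_noise(image, num_steps=50):
    """Stage 2 (schematic): invert the input image to latent noise and
    optimize one null-text embedding per diffusion step. Placeholders
    stand in for the DDIM-style inversion and embedding optimization."""
    latent = image.copy()                                # stand-in for inverted latent
    null_embeddings = [np.zeros(8) for _ in range(num_steps)]  # one per step
    return latent, null_embeddings

def edit_age(latent, null_embeddings, target_age):
    """Stage 3 (schematic): text-guided local age editing via attention
    control, re-denoising with a target-age prompt while preserving
    identity-related regions. The prompt format is a guess."""
    prompt = f"photo of a {target_age} year old person"
    return latent, prompt                                # stand-in for edited image

# Schematic end-to-end run on a dummy 64x64 RGB image.
image = np.random.rand(64, 64, 3)
latent, nulls = invert_to_noise(image)
edited, prompt = edit_age(latent, nulls, target_age=70)
```

This sketch only conveys the data flow between the three stages; in the actual method each placeholder corresponds to an optimization or denoising loop over a large text-to-image diffusion model.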

Original language: English
Publication status: Published - 1 Jan 2023
Event: 34th British Machine Vision Conference, BMVC 2023 - Aberdeen, United Kingdom
Duration: 20 Nov 2023 - 24 Nov 2023

Conference

Conference: 34th British Machine Vision Conference, BMVC 2023
Country/Territory: United Kingdom
City: Aberdeen
Period: 20/11/23 - 24/11/23
