GANFusion: Feed-Forward Text-to-3D with Diffusion in GAN Space

  • Souhaib Attaiki
  • Paul Guerrero
  • Duygu Ceylan
  • Niloy J. Mitra
  • Maks Ovsjanikov

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

We train a feed-forward text-to-3D diffusion generator for human characters using only single-view 2D data for supervision. Existing 3D generative models cannot yet match the fidelity of image or video generative models: state-of-the-art 3D generators trained with explicit 3D supervision are limited by the volume and diversity of existing 3D data, while generators that can be trained with only 2D supervision typically produce coarser results, cannot be text-conditioned, and/or must resort to test-time optimization. We observe that GAN- and diffusion-based generators have complementary qualities: GANs can be trained efficiently with 2D supervision to produce high-quality 3D objects but are hard to condition on text, whereas denoising diffusion models can be conditioned efficiently but are hard to train with only 2D supervision. We introduce GANFusion, which first generates unconditional triplane features for 3D data using a GAN architecture trained with only single-view 2D data. We then generate random samples from the GAN, caption them, and train a text-conditioned diffusion model that learns to sample directly from the space of good triplane features that can be decoded into 3D objects. We evaluate the proposed method in the context of text-conditioned full-body human generation and show improvements over possible alternatives.
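The two-stage pipeline described in the abstract (a 2D-supervised GAN producing triplane features, followed by captioning GAN samples and training a text-conditioned diffusion model on those features) can be sketched roughly as follows. This is a minimal illustration only: the tensor shapes, function names, and placeholder caption are assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical triplane shape: three axis-aligned feature planes,
# each with 32 channels at 16x16 resolution (illustrative values only).
TRIPLANE_SHAPE = (3, 32, 16, 16)

def gan_generate(z):
    """Stand-in for the trained GAN generator: latent noise -> triplane features.
    In the paper this would be a network trained with single-view 2D supervision."""
    W = rng.standard_normal((z.size, int(np.prod(TRIPLANE_SHAPE))))
    return (z @ W).reshape(TRIPLANE_SHAPE)

def caption(triplane):
    """Stand-in for an off-the-shelf captioner run on renders of the GAN sample."""
    return "a person in casual clothing"  # placeholder caption

def make_training_pair():
    """Sample the GAN and caption the result, yielding one (features, text) pair."""
    z = rng.standard_normal(64)
    feats = gan_generate(z)
    return feats, caption(feats)

# Build a tiny synthetic dataset of (triplane, caption) pairs. A text-conditioned
# diffusion model would then be trained to denoise triplane features given the
# caption, so that at test time it samples directly in the GAN's feature space.
dataset = [make_training_pair() for _ in range(4)]
feats, text = dataset[0]
print(feats.shape, text)
```

The key design point mirrored here is that the diffusion model never needs 3D or even 2D supervision of its own: its training data is manufactured entirely by sampling and captioning the pretrained GAN.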

Original language: English
Title of host publication: Proceedings - 2025 IEEE Winter Conference on Applications of Computer Vision, WACV 2025
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 3985-3995
Number of pages: 11
ISBN (Electronic): 9798331510831
DOIs
Publication status: Published - 1 Jan 2025
Event: 2025 IEEE/CVF Winter Conference on Applications of Computer Vision, WACV 2025 - Tucson, United States
Duration: 28 Feb 2025 - 4 Mar 2025

Publication series

Name: Proceedings - 2025 IEEE Winter Conference on Applications of Computer Vision, WACV 2025

Conference

Conference: 2025 IEEE/CVF Winter Conference on Applications of Computer Vision, WACV 2025
Country/Territory: United States
City: Tucson
Period: 28/02/25 - 4/03/25

Keywords

  • 3d generation
  • diffusion model
  • gan
  • text-conditioning
