DARKGAN: EXPLOITING KNOWLEDGE DISTILLATION FOR COMPREHENSIBLE AUDIO SYNTHESIS WITH GANS

Javier Nistal, Stefan Lattner, Gaël Richard

Research output: Chapter in Book/Report/Conference proceeding › Chapter › peer-review

Abstract

Generative Adversarial Networks (GANs) have achieved excellent audio synthesis quality in recent years. However, making them operable with semantically meaningful controls remains an open challenge. An obvious approach is to control the GAN by conditioning it on metadata contained in audio datasets. Unfortunately, audio datasets often lack the desired annotations, especially in the musical domain. One way to circumvent this lack of annotations is to generate them, for example, with an automatic audio-tagging system. The output probabilities of such systems (so-called "soft labels") carry rich information about the characteristics of the respective audio and can be used to distill the knowledge from a teacher model into a student model. In this work, we perform knowledge distillation from a large audio-tagging system into an adversarial audio synthesizer that we call DarkGAN. Results show that DarkGAN can synthesize musical audio with acceptable quality and exhibits moderate attribute control even with out-of-distribution input conditioning. We release the code and provide audio examples on the accompanying website.
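The distillation idea described above can be sketched in a few lines: a teacher tagger's logits are softened with a temperature-scaled softmax into soft labels ("dark knowledge"), which are then concatenated to the generator's latent noise as a conditioning vector. This is a minimal illustrative sketch, not the paper's implementation; the array shapes, temperature value, and random teacher logits are all hypothetical.

```python
import numpy as np

def soft_labels(logits, temperature=2.0):
    """Temperature-scaled softmax: softens teacher outputs into soft labels.

    Higher temperature spreads probability mass over more classes,
    exposing inter-class similarity ("dark knowledge").
    """
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def conditioned_latent(noise, labels):
    """Concatenate soft labels to the latent vector as GAN conditioning."""
    return np.concatenate([noise, labels], axis=-1)

rng = np.random.default_rng(0)
teacher_logits = rng.normal(size=(4, 10))  # hypothetical tagger logits for 4 audio clips
labels = soft_labels(teacher_logits)       # each row sums to 1
z = rng.normal(size=(4, 128))              # latent noise vectors
g_input = conditioned_latent(z, labels)    # generator input, shape (4, 138)
```

At inference time, the same conditioning slot can be fed hand-crafted or out-of-distribution attribute vectors, which is how attribute control of the kind reported in the abstract would be exercised.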

Original language: English
Title of host publication: Proceedings of the International Society for Music Information Retrieval Conference
Publisher: International Society for Music Information Retrieval
Pages: 484-492
Number of pages: 9
Publication status: Published - 1 Jan 2021
Externally published: Yes

Publication series

Name: Proceedings of the International Society for Music Information Retrieval Conference
Volume: 2021
ISSN (Electronic): 3006-3094
