Automatic Data Augmentation Selection and Parametrization in Contrastive Self-Supervised Speech Representation Learning

  • Salah Zaiem
  • Titouan Parcollet
  • Slim Essid

Research output: Contribution to journal › Conference article › peer-review

Abstract

Contrastive learning enables learning useful audio and speech representations without ground-truth labels by maximizing the similarity between latent representations of similar signal segments. In this framework, various data augmentation techniques are usually exploited to help enforce desired invariances within the learned representations, improving performance on various audio tasks thanks to more robust embeddings. Selecting the most relevant augmentations has, however, proven crucial for better downstream performance. Thus, this work introduces a conditional independence-based method that automatically selects a suitable distribution over the choice of augmentations and their parametrization from a set of predefined ones, for contrastive self-supervised pre-training. The selection is performed with respect to a downstream task of interest, hence saving a costly hyper-parameter search. Experiments performed on two different downstream tasks validate the proposed approach, showing better results than pre-training without augmentation or with baseline augmentations. We furthermore conduct a qualitative analysis of the automatically selected augmentations and how they vary with the considered final downstream dataset.
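As a rough illustration of the idea described above, the sketch below scores candidate augmentations by a kernel dependence measure (a biased HSIC estimate) between features of augmented signals and downstream labels, then turns the scores into a sampling distribution over augmentations. This is a hypothetical, simplified stand-in for the paper's conditional-independence procedure: the feature matrices, the RBF/delta kernel choices, and the softmax temperature are all assumptions for the example, not the authors' exact method.

```python
import numpy as np

def hsic_score(X, y, sigma=1.0):
    """Biased HSIC estimate between features X (n, d) and discrete labels y (n,).

    Higher values indicate stronger dependence between the augmented-signal
    features and the downstream labels.
    """
    n = X.shape[0]
    # RBF kernel on the feature vectors
    sq = np.sum(X ** 2, axis=1)
    K = np.exp(-(sq[:, None] + sq[None, :] - 2 * X @ X.T) / (2 * sigma ** 2))
    # Delta kernel on the discrete labels
    L = (y[:, None] == y[None, :]).astype(float)
    # Centering matrix
    H = np.eye(n) - np.ones((n, n)) / n
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

def augmentation_distribution(scores, temperature=1.0):
    """Softmax over per-augmentation dependence scores."""
    z = np.asarray(scores, dtype=float) / temperature
    z -= z.max()  # numerical stability
    p = np.exp(z)
    return p / p.sum()

# Toy data: binary downstream labels and features from two candidate
# augmentations, one label-informative and one pure noise (assumed setup).
rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=64)
X_informative = y[:, None] + 0.1 * rng.normal(size=(64, 4))
X_noise = rng.normal(size=(64, 4))

scores = [hsic_score(X_informative, y), hsic_score(X_noise, y)]
probs = augmentation_distribution(scores, temperature=0.05)
```

Under this setup, the label-informative augmentation receives the larger sampling probability, which is the qualitative behavior the abstract's selection strategy aims for.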

Original language: English
Pages (from-to): 669-673
Number of pages: 5
Journal: Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH
Volume: 2022-September
Publication status: Published - 1 Jan 2022
Event: 23rd Annual Conference of the International Speech Communication Association, INTERSPEECH 2022 - Incheon, Korea, Republic of
Duration: 18 Sept 2022 - 22 Sept 2022

Keywords

  • data augmentation
  • self-supervised learning

