Integrating Prior Knowledge in Contrastive Learning with Kernel

  • Benoit Dufumier
  • Carlo Alberto Barbano
  • Robin Louiset
  • Edouard Duchesnay
  • Pietro Gori

Research output: Contribution to journal › Conference article › peer-review

Abstract

Data augmentation is a crucial component in unsupervised contrastive learning (CL). It determines how positive samples are defined and, ultimately, the quality of the learnt representation. In this work, we open the door to new perspectives for CL by integrating prior knowledge, given either by generative models (viewed as prior representations) or by weak attributes, into the positive and negative sampling. To this end, we use kernel theory to propose a novel loss, called decoupled uniformity, that (i) allows the integration of prior knowledge and (ii) removes the negative-positive coupling in the original InfoNCE loss. We draw a connection between contrastive learning and conditional mean embedding theory to derive tight bounds on the downstream classification loss. In an unsupervised setting, we empirically demonstrate that CL benefits from generative models to improve its representation, both on natural and medical images. In a weakly supervised scenario, our framework outperforms other unconditional and conditional CL approaches. Source code is available at this https URL.
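To make the key idea concrete, below is a minimal PyTorch sketch of a decoupled uniformity loss as described in the abstract: the embeddings of the positive views of each sample are averaged into a centroid, and only distinct centroids repel one another, so the negative term no longer couples with the positives as in InfoNCE. The function name, the two-views-per-sample setup, and the Gaussian potential are illustrative assumptions, not the authors' reference implementation (see the linked source code for that).

```python
import math
import torch

def decoupled_uniformity(z1: torch.Tensor, z2: torch.Tensor) -> torch.Tensor:
    """Sketch of a decoupled uniformity loss.

    z1, z2: (n, d) embeddings of two augmented views of the same n samples,
    assumed L2-normalized. Positive views are collapsed into one centroid per
    sample; the loss then spreads centroids apart (uniformity) without the
    negative-positive coupling of InfoNCE.
    """
    centroids = (z1 + z2) / 2                           # (n, d) one centroid per sample
    sq_dists = torch.cdist(centroids, centroids).pow(2) # (n, n) pairwise squared distances
    n = centroids.size(0)
    off_diag = ~torch.eye(n, dtype=torch.bool, device=centroids.device)
    # log of the mean Gaussian potential over distinct centroid pairs:
    # log( 1/(n(n-1)) * sum_{i != j} exp(-||mu_i - mu_j||^2) )
    return torch.logsumexp(-sq_dists[off_diag], dim=0) - math.log(n * (n - 1))
```

In the kernel variant the abstract refers to, prior knowledge (e.g., a generative model's representation or weak attributes) would enter through a kernel used to re-estimate these centroids via conditional mean embedding; that extension is omitted from this sketch.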

Original language: English
Pages (from-to): 8851-8878
Number of pages: 28
Journal: Proceedings of Machine Learning Research
Volume: 202
Publication status: Published - 1 Jan 2023
Event: 40th International Conference on Machine Learning, ICML 2023 - Honolulu, United States
Duration: 23 Jul 2023 - 29 Jul 2023

