Negative sampling strategies for contrastive self-supervised learning of graph representations

Research output: Contribution to journal › Article › peer-review

Abstract

Contrastive learning has become a successful approach for learning powerful text and image representations in a self-supervised manner. Contrastive frameworks learn to distinguish between representations coming from augmentations of the same data point (positive pairs) and those of other (negative) examples. Recent studies aim to extend contrastive learning methods to graph data. In this work, we propose Graph Contrastive Learning (GraphCL), a general framework for learning node representations in a self-supervised manner. It learns node embeddings by maximizing the similarity between the node representations of two randomly perturbed versions of the same graph. We use graph neural networks to produce two representations of the same node and leverage a contrastive loss to maximize agreement between them. We investigate standard and new negative sampling strategies, and additionally compare against an approach that uses no negative sampling. We demonstrate that our approach significantly outperforms the state of the art in unsupervised learning on a number of node classification benchmarks in both transductive and inductive setups.
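The abstract does not spell out the exact loss or perturbation scheme; as a rough illustration of the in-batch negative-sampling idea it describes, here is a minimal PyTorch sketch. The function name, the temperature value, and the NT-Xent-style formulation are assumptions for illustration, not the authors' exact objective:

```python
import torch
import torch.nn.functional as F

def graph_contrastive_loss(z1, z2, temperature=0.5):
    """Contrastive loss between two views of the same set of nodes.

    z1, z2: [num_nodes, dim] embeddings of the same nodes produced by a
    GNN encoder applied to two randomly perturbed copies of the graph.
    Row i of z1 and row i of z2 form the positive pair; all other rows
    in the batch act as negatives (one possible sampling strategy).
    """
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    # Pairwise cosine similarities scaled by a temperature (value assumed).
    logits = z1 @ z2.t() / temperature
    # Positive pairs sit on the diagonal of the similarity matrix.
    labels = torch.arange(z1.size(0), device=z1.device)
    # Symmetrized cross-entropy: each view predicts its counterpart.
    return 0.5 * (F.cross_entropy(logits, labels) +
                  F.cross_entropy(logits.t(), labels))
```

In such a setup, maximizing agreement between the two views while repelling the in-batch negatives is what shapes the node embedding space; alternative negative sampling strategies would change which rows of the similarity matrix are treated as negatives.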

Original language: English
Article number: 108310
Journal: Signal Processing
Volume: 190
DOIs
Publication status: Published - 1 Jan 2022

Keywords

  • Contrastive learning
  • Graph neural network
  • Node classification
  • Self-supervised learning
