
Negative sampling strategies for contrastive self-supervised learning of graph representations

Research output: Contribution to journal › Article › Peer-reviewed

Abstract

Contrastive learning has become a successful approach for learning powerful text and image representations in a self-supervised manner. Contrastive frameworks learn to distinguish between representations coming from augmentations of the same data point (positive pairs) and those of other (negative) examples. Recent studies aim to extend contrastive learning methods to graph data. In this work, we propose a general framework for learning node representations in a self-supervised manner, called Graph Contrastive Learning (GraphCL). It learns node embeddings by maximizing the similarity between the representations of the same node in two randomly perturbed versions of the same graph. We use graph neural networks to produce two representations of the same node and leverage a contrastive learning loss to maximize agreement between them. We investigate several standard and new negative sampling strategies, and compare them against an approach that uses no negative sampling. We demonstrate that our approach significantly outperforms the state of the art in unsupervised learning on a number of node classification benchmarks, in both transductive and inductive learning setups.
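
The abstract does not give the objective in closed form; the following is a minimal sketch of the kind of contrastive loss it describes, assuming an NT-Xent-style formulation with in-batch negative sampling (the two embeddings of each node form the positive pair, and all other nodes in the batch serve as negatives). The function name, the temperature parameter, and the in-batch negative strategy are illustrative assumptions, not the authors' implementation.

```python
# Sketch only: a contrastive loss over two views of the same nodes,
# standing in for the agreement-maximization step the abstract describes.
import torch
import torch.nn.functional as F

def nt_xent_loss(z1: torch.Tensor, z2: torch.Tensor,
                 temperature: float = 0.5) -> torch.Tensor:
    """z1, z2: [N, d] embeddings of the same N nodes under two random
    perturbations of the graph; (z1[i], z2[i]) are positive pairs and
    every other node in the batch acts as a negative (an assumption)."""
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    n = z1.size(0)
    z = torch.cat([z1, z2], dim=0)            # [2N, d], both views stacked
    sim = z @ z.t() / temperature             # pairwise cosine similarities
    sim.fill_diagonal_(float("-inf"))         # exclude self-similarity
    # Node i in view 1 should match node i in view 2, and vice versa.
    targets = torch.cat([torch.arange(n) + n, torch.arange(n)])
    return F.cross_entropy(sim, targets)

# Usage, with random tensors standing in for two GNN forward passes
# over the two perturbed graphs:
z1, z2 = torch.randn(8, 16), torch.randn(8, 16)
print(nt_xent_loss(z1, z2).item())
```

In a full pipeline, z1 and z2 would come from a shared GNN encoder applied to two perturbed copies of the graph; the paper's contribution concerns how the negatives in this loss are chosen, including a variant with no negatives at all.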

Original language: English
Article number: 108310
Journal: Signal Processing
Volume: 190
Status: Published - 1 Jan 2022