SPARSIFYING THE UPDATE STEP IN GRAPH NEURAL NETWORKS

Research output: Contribution to journal › Conference article › peer-review

Abstract

Message-Passing Neural Networks (MPNNs), the most prominent Graph Neural Network (GNN) framework, have achieved considerable success in the analysis of graph-structured data. Concurrently, the sparsification of neural network models attracts substantial academic and industrial interest. In this paper we conduct a structured, empirical study of the effect of sparsification on the trainable part of MPNNs known as the Update step. To this end, we design a series of models that successively sparsify the linear transform in the Update step. Specifically, we propose the ExpanderGNN model with a tuneable sparsification rate and the Activation-Only GNN, which has no linear transform in the Update step. In agreement with a growing trend in the literature, we change the sparsification paradigm by initialising sparse neural network architectures rather than expensively sparsifying already-trained ones. Our novel benchmark models enable a better understanding of the influence of the Update step on model performance and outperform existing simplified benchmark models such as the Simple Graph Convolution. The ExpanderGNNs, and in some cases the Activation-Only models, achieve performance on par with their vanilla counterparts on several downstream tasks while containing significantly fewer trainable parameters. Our code is publicly available at: https://github.com/ChangminWu/ExpanderGNN.
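To make the two model variants concrete, the sketch below illustrates the idea described in the abstract: an Update step whose linear transform is masked by a fixed sparse pattern at initialisation (ExpanderGNN) versus one that applies only the activation function (Activation-Only GNN). This is a minimal illustration, not the authors' implementation: the class and function names are hypothetical, a random mask stands in for an expander-graph-derived pattern, and ReLU is an assumed choice of activation. The authors' actual code is at the repository linked above.

```python
import torch
import torch.nn as nn


class ExpanderLinear(nn.Module):
    """Sketch of a sparsified linear transform: a fixed sparse mask,
    sampled once at initialisation, zeroes out most weights so that
    only the surviving entries are effectively trained.

    Assumption: the paper derives the mask from an expander graph;
    here a random mask with the same density plays that role.
    """

    def __init__(self, in_dim: int, out_dim: int, density: float = 0.1):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(out_dim, in_dim))
        nn.init.kaiming_uniform_(self.weight, a=5 ** 0.5)
        # Fixed binary mask; `density` plays the role of the tuneable
        # sparsification rate mentioned in the abstract.
        mask = (torch.rand(out_dim, in_dim) < density).float()
        self.register_buffer("mask", mask)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Masked weights: gradients only flow to unmasked entries.
        return x @ (self.weight * self.mask).t()


def update_step(aggregated: torch.Tensor, lin: nn.Module | None = None) -> torch.Tensor:
    """Update step applied to the aggregated neighbour messages.

    ExpanderGNN variant:      pass an ExpanderLinear as `lin`.
    Activation-Only variant:  pass `lin=None`, keeping only the
                              nonlinearity and no trainable transform.
    """
    h = aggregated if lin is None else lin(aggregated)
    return torch.relu(h)
```

As a usage example, `update_step(msgs, ExpanderLinear(64, 64, density=0.1))` would realise a heavily sparsified Update step, while `update_step(msgs)` realises the Activation-Only case with zero trainable parameters in the Update step.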

Original language: English
Pages (from-to): 258-268
Number of pages: 11
Journal: Proceedings of Machine Learning Research
Volume: 196
Publication status: Published - 1 Jan 2022
Event: ICML Workshop on Topology, Algebra, and Geometry in Machine Learning, TAG:ML 2022 - Virtual, Online, United States
Duration: 20 Jul 2022 → …
