TY - GEN
T1 - Maximizing Influence with Graph Neural Networks
AU - Panagopoulos, George
AU - Tziortziotis, Nikolaos
AU - Vazirgiannis, Michalis
AU - Malliaros, Fragkiskos
N1 - Publisher Copyright:
© 2023 ACM.
PY - 2023/11/6
Y1 - 2023/11/6
N2 - Finding the seed set that maximizes the influence spread over a network is a well-known NP-hard problem. Though a greedy algorithm can provide near-optimal solutions, the subproblem of influence estimation renders the solutions inefficient. In this work, we propose GLIE, a graph neural network that learns how to estimate the influence spread of the independent cascade model. GLIE relies on a theoretical upper bound that is tightened through supervised training. Experiments indicate that it provides accurate influence estimation for real graphs up to 10 times larger than the training set. Subsequently, we incorporate it into two influence maximization techniques. We first utilize Cost Effective Lazy Forward optimization, substituting Monte Carlo simulations with GLIE, surpassing the benchmarks albeit with a computational overhead. To improve computational efficiency, we develop a provably submodular influence spread based on GLIE's representations to rank nodes while building the seed set adaptively. The proposed algorithms are inductive, meaning they are trained on graphs with fewer than 300 nodes and up to 5 seeds, and tested on graphs with millions of nodes and up to 200 seeds. The final method exhibits the most promising combination of time efficiency and influence quality, outperforming several baselines.
AB - Finding the seed set that maximizes the influence spread over a network is a well-known NP-hard problem. Though a greedy algorithm can provide near-optimal solutions, the subproblem of influence estimation renders the solutions inefficient. In this work, we propose GLIE, a graph neural network that learns how to estimate the influence spread of the independent cascade model. GLIE relies on a theoretical upper bound that is tightened through supervised training. Experiments indicate that it provides accurate influence estimation for real graphs up to 10 times larger than the training set. Subsequently, we incorporate it into two influence maximization techniques. We first utilize Cost Effective Lazy Forward optimization, substituting Monte Carlo simulations with GLIE, surpassing the benchmarks albeit with a computational overhead. To improve computational efficiency, we develop a provably submodular influence spread based on GLIE's representations to rank nodes while building the seed set adaptively. The proposed algorithms are inductive, meaning they are trained on graphs with fewer than 300 nodes and up to 5 seeds, and tested on graphs with millions of nodes and up to 200 seeds. The final method exhibits the most promising combination of time efficiency and influence quality, outperforming several baselines.
KW - graph neural networks
KW - graph representation learning
KW - influence maximization
UR - https://www.scopus.com/pages/publications/85190625067
U2 - 10.1145/3625007.3627293
DO - 10.1145/3625007.3627293
M3 - Conference contribution
AN - SCOPUS:85190625067
T3 - Proceedings of the 2023 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining, ASONAM 2023
SP - 237
EP - 244
BT - Proceedings of the 2023 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining, ASONAM 2023
A2 - Aditya Prakash, B.
A2 - Wang, Dong
A2 - Weninger, Tim
PB - Association for Computing Machinery, Inc
T2 - 15th IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining, ASONAM 2023
Y2 - 6 November 2023 through 9 November 2023
ER -