Parallelism in GNN: Possibilities and Limits of Current Approaches

Mele, Valeria; Carracciuolo, Luisa; Romano, Diego
2025

Abstract

Graph Neural Networks (GNNs) have emerged as powerful tools for learning on graph-structured data, demonstrating state-of-the-art performance in various applications such as social network analysis, biological network modeling, and recommendation systems. However, the computational complexity of GNNs poses significant challenges for scalability, particularly with large-scale graphs. Parallelism in GNNs addresses this issue by distributing computation across multiple processors, using techniques such as data parallelism and model parallelism. Data parallelism involves partitioning the graph data across different processors, while model parallelism splits the neural network's layers or operations. These parallelization strategies, along with optimizations such as asynchronous updates and efficient communication protocols, enable GNNs to handle larger graphs and improve training efficiency. This work explores the key computational kernels and identifies those where parallelism could significantly enhance the scalability and performance of GNNs, highlighting the algebraic aspects of each one. This is a first step toward better comparing recent advances and their implications for future research.
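As an illustration of the algebraic kernel the abstract refers to, the sketch below expresses a GCN-style layer as a sparse matrix-matrix product (SpMM) and simulates a node-wise (row-block) partition of the kind a data-parallel scheme might use. This is not the paper's implementation: the function names, the partitioning strategy, and the sequential simulation of the workers are assumptions made purely for illustration.

# Illustrative sketch (not from the paper): the core algebraic kernel of a
# GCN-style layer, H_out = ReLU(A_hat @ H @ W), computed as a sparse
# aggregation followed by a dense feature transform, plus a simulated
# row-wise (node-wise) partition of the kind a data-parallel scheme might use.
import numpy as np
import scipy.sparse as sp

def gcn_layer(A_hat, H, W):
    """One GCN-style layer: sparse aggregation (SpMM) followed by a dense
    feature transform and a ReLU nonlinearity."""
    return np.maximum(A_hat @ (H @ W), 0.0)

def gcn_layer_partitioned(A_hat, H, W, n_parts=2):
    """Simulate data parallelism by splitting the node set into row blocks.
    Each 'worker' computes the output rows of its own nodes; in a real
    distributed setting each block would also need the features of its
    remote neighbours, which is where the communication cost arises."""
    n = A_hat.shape[0]
    bounds = np.linspace(0, n, n_parts + 1, dtype=int)
    outputs = []
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        A_block = A_hat[lo:hi, :]          # adjacency rows owned by this worker
        outputs.append(np.maximum(A_block @ (H @ W), 0.0))
    return np.vstack(outputs)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, f_in, f_out = 6, 4, 3
    # Random symmetric adjacency with self-loops, row-normalized (A_hat).
    A = sp.random(n, n, density=0.4, random_state=0, format="csr")
    A = ((A + A.T) > 0).astype(float) + sp.eye(n)
    deg = np.asarray(A.sum(axis=1)).ravel()
    A_hat = sp.diags(1.0 / deg) @ A
    H = rng.standard_normal((n, f_in))
    W = rng.standard_normal((f_in, f_out))
    # The partitioned computation reproduces the monolithic result exactly.
    assert np.allclose(gcn_layer(A_hat, H, W),
                       gcn_layer_partitioned(A_hat, H, W))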
ISBN: 9783031856990; 9783031857003
Mele, Valeria; Carracciuolo, Luisa; Romano, Diego. Parallelism in GNN: Possibilities and Limits of Current Approaches. In: 15th International Conference on Parallel Processing and Applied Mathematics (PPAM 2024), CZE, 2024. LNCS 15580 (2025), pp. 236-248. DOI: 10.1007/978-3-031-85700-3_17.
Use this identifier to cite or link to this item: https://hdl.handle.net/11588/1018838
Citations
  • Scopus: 1