Paper Title
Computing Graph Neural Networks: A Survey from Algorithms to Accelerators
Paper Authors
Paper Abstract
Graph Neural Networks (GNNs) have exploded onto the machine learning scene in recent years owing to their capability to model and learn from graph-structured data. Such an ability has strong implications in a wide variety of fields whose data is inherently relational, for which conventional neural networks do not perform well. Indeed, as recent reviews can attest, research in the area of GNNs has grown rapidly and has led to the development of a variety of GNN algorithm variants, as well as to the exploration of groundbreaking applications in chemistry, neurology, electronics, or communication networks, among others. At the current stage of research, however, the efficient processing of GNNs is still an open challenge for several reasons. Besides their novelty, GNNs are hard to compute due to their dependence on the input graph, their combination of dense and very sparse operations, or the need to scale to huge graphs in some applications. In this context, this paper aims to make two main contributions. On the one hand, a review of the field of GNNs is presented from the perspective of computing. This includes a brief tutorial on the GNN fundamentals, an overview of the evolution of the field in the last decade, and a summary of operations carried out in the multiple phases of different GNN algorithm variants. On the other hand, an in-depth analysis of current software and hardware acceleration schemes is provided, from which a hardware-software, graph-aware, and communication-centric vision for GNN accelerators is distilled.
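The abstract notes that GNNs combine dense and very sparse operations, which is a key source of their computational difficulty. A minimal illustrative sketch (not from the paper) of this mix is a single GCN-style layer, where a sparse aggregation over the adjacency matrix (an SpMM) is interleaved with a dense feature transformation (a GEMM); the toy graph, feature sizes, and normalization below are assumptions for illustration:

```python
import numpy as np
import scipy.sparse as sp

# Toy 4-node ring graph stored as a very sparse adjacency matrix.
rows = np.array([0, 1, 2, 3, 1, 2, 3, 0])
cols = np.array([1, 2, 3, 0, 0, 1, 2, 3])
A = sp.csr_matrix((np.ones(8), (rows, cols)), shape=(4, 4))

# GCN-style symmetric normalization: A_hat = D^{-1/2} (A + I) D^{-1/2}.
A_loop = A + sp.identity(4, format="csr")
deg = np.asarray(A_loop.sum(axis=1)).ravel()
A_hat = sp.diags(deg ** -0.5) @ A_loop @ sp.diags(deg ** -0.5)

rng = np.random.default_rng(0)
H = rng.random((4, 8))  # dense node feature matrix (4 nodes, 8 features)
W = rng.random((8, 4))  # dense layer weights

# One layer: dense GEMM (H @ W) followed by sparse aggregation
# (A_hat @ ...) and a ReLU nonlinearity.
H_next = np.maximum(A_hat @ (H @ W), 0.0)
print(H_next.shape)  # (4, 4)
```

The irregular memory access pattern of the sparse aggregation and the regular, compute-bound dense transformation favor very different hardware, which is one reason dedicated GNN accelerators are an active research topic.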