Paper Title

Architectural Implications of Embedding Dimension during GCN on CPU and GPU

Paper Authors

Matthew Adiletta, David Brooks, Gu-Yeon Wei

Paper Abstract

Graph Neural Networks (GNNs) are a class of neural networks designed to extract information from the graphical structure of data. Graph Convolutional Networks (GCNs) are a widely used type of GNN for transductive graph learning problems which apply convolution to learn information from graphs. GCN is a challenging algorithm from an architecture perspective due to inherent sparsity, low data reuse, and massive memory capacity requirements. Traditional neural algorithms exploit the high compute capacity of GPUs to achieve high performance for both inference and training. The architectural decision to use a GPU for GCN inference is a question explored in this work. GCN on both CPU and GPU was characterized in order to better understand the implications of graph size, embedding dimension, and sampling on performance.
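For context, the "embedding dimension" studied in the abstract is the width of the per-node feature matrix that each GCN layer aggregates over the graph and then projects. The sketch below is a minimal dense single-layer GCN propagation step (H' = ReLU(D^-1/2 (A+I) D^-1/2 H W)), not the paper's implementation; all names and the toy graph are illustrative assumptions.

```python
import numpy as np

def gcn_layer(adj, feats, weight):
    """One GCN propagation step: H' = ReLU(D^-1/2 (A+I) D^-1/2 H W).

    adj:    (N, N) binary adjacency matrix
    feats:  (N, d_in) node embeddings; d_in is the embedding dimension
    weight: (d_in, d_out) learned projection
    """
    a_hat = adj + np.eye(adj.shape[0])                    # add self-loops
    deg_inv_sqrt = 1.0 / np.sqrt(a_hat.sum(axis=1))       # D^-1/2 of A + I
    norm_adj = a_hat * deg_inv_sqrt[:, None] * deg_inv_sqrt[None, :]
    return np.maximum(norm_adj @ feats @ weight, 0.0)     # aggregate, project, ReLU

# Toy usage: 4 nodes, embedding dimension 8 projected down to 4.
rng = np.random.default_rng(0)
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 1],
              [0, 1, 0, 0],
              [0, 1, 0, 0]], dtype=float)
H = rng.standard_normal((4, 8))
W = rng.standard_normal((8, 4))
print(gcn_layer(A, H, W).shape)  # (4, 4)
```

In this formulation the embedding dimension sets the inner size of the sparse aggregation (norm_adj @ feats) and the dense projection (@ weight), which is why varying it shifts the balance between memory-bound and compute-bound behavior on CPU versus GPU.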
