Paper Title

Graph Attention Multi-Layer Perceptron

Paper Authors

Wentao Zhang, Ziqi Yin, Zeang Sheng, Yang Li, Wen Ouyang, Xiaosen Li, Yangyu Tao, Zhi Yang, Bin Cui

Paper Abstract

Graph neural networks (GNNs) have achieved great success in many graph-based applications. However, the enormous size and high sparsity of graphs hinder their application in industrial scenarios. Although some scalable GNNs have been proposed for large-scale graphs, they adopt a fixed $K$-hop neighborhood for each node and thus face the over-smoothing issue when large propagation depths are adopted for nodes within sparse regions. To tackle this issue, we propose a new GNN architecture, Graph Attention Multi-Layer Perceptron (GAMLP), which can capture the underlying correlations between different scales of graph knowledge. We have deployed GAMLP at Tencent on the Angel platform, and we further evaluate GAMLP on both real-world datasets and large-scale industrial datasets. Extensive experiments on these 14 graph datasets demonstrate that GAMLP achieves state-of-the-art performance while enjoying high scalability and efficiency. Specifically, it outperforms GAT by 1.3\% in predictive accuracy on our large-scale Tencent Video dataset while achieving up to a $50\times$ training speedup. In addition, it ranks first on the leaderboards of both the largest homogeneous and heterogeneous graphs (i.e., ogbn-papers100M and ogbn-mag) in the Open Graph Benchmark.
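
As a concrete illustration of the decoupled design described in the abstract, the sketch below precomputes multi-hop propagated features once and then learns per-node attention weights over the hops, so that each node effectively selects its own receptive field rather than a fixed $K$-hop one. This is a minimal sketch of the idea, not the authors' released implementation: it assumes PyTorch, the names (`GAMLPSketch`, `precompute_propagated_features`, `adj_norm`) are illustrative placeholders, and GAMLP itself proposes richer attention schemes than the single linear scorer used here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def precompute_propagated_features(adj_norm, features, num_hops):
    """Precompute [X, A X, A^2 X, ..., A^K X] once, before training.

    adj_norm: normalized sparse adjacency matrix (hypothetical input name).
    Decoupling propagation from learning is what makes this family of
    models scalable: training only touches MLPs on fixed per-hop features.
    """
    feats = [features]
    for _ in range(num_hops):
        feats.append(torch.sparse.mm(adj_norm, feats[-1]))
    return feats  # list of (num_nodes, in_dim) tensors, length K + 1

class GAMLPSketch(nn.Module):
    """Node-adaptive attention over multi-hop features, followed by an MLP."""

    def __init__(self, in_dim, hidden_dim, out_dim):
        super().__init__()
        self.hop_att = nn.Linear(in_dim, 1)  # scores each hop, per node
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, out_dim),
        )

    def forward(self, hop_feats):
        # hop_feats: list of K + 1 tensors, each of shape (N, in_dim)
        stacked = torch.stack(hop_feats, dim=1)      # (N, K + 1, in_dim)
        scores = self.hop_att(stacked).squeeze(-1)   # (N, K + 1)
        weights = F.softmax(scores, dim=1)           # per-node hop weights
        combined = (weights.unsqueeze(-1) * stacked).sum(dim=1)  # (N, in_dim)
        return self.mlp(combined)
```

Because the propagated features are fixed, training reduces to mini-batch optimization over node rows, which is consistent with the large training speedups reported in the abstract.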
