Paper Title

Transforming PageRank into an Infinite-Depth Graph Neural Network

Authors

Andreas Roth, Thomas Liebig

Abstract

Popular graph neural networks are shallow models, despite the success of very deep architectures in other application domains of deep learning. This reduces the modeling capacity and leaves models unable to capture long-range relationships. The primary reason for the shallow design is over-smoothing, which causes node states to become more similar with increasing depth. We build on the close connection between GNNs and PageRank, for which personalized PageRank introduces a personalization vector. Adopting this idea, we propose the Personalized PageRank Graph Neural Network (PPRGNN), which extends the graph convolutional network to an infinite-depth model that has a chance to reset the neighbor aggregation back to the initial state in each iteration. We introduce a nicely interpretable tweak to the chance of resetting and prove the convergence of our approach to a unique solution without placing any constraints, even when taking infinitely many neighbor aggregations. As in personalized PageRank, our result does not suffer from over-smoothing. Time complexity remains linear and memory complexity constant, independent of the network's depth, making the approach scale well to large graphs. We empirically show the effectiveness of our approach for various node and graph classification tasks. PPRGNN outperforms comparable methods in almost all cases.
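
The propagation scheme the abstract alludes to is closely related to the personalized PageRank fixed-point iteration. The sketch below illustrates that general idea on plain NumPy arrays: each iteration has a chance alpha of "resetting" to the initial node states, which is what prevents over-smoothing even at unbounded depth. This is a minimal illustration only, not the authors' PPRGNN model (which additionally learns weight matrices and tweaks the reset chance); the names A_hat, alpha, and n_iter are illustrative assumptions.

```python
import numpy as np

def normalize_adjacency(adj: np.ndarray) -> np.ndarray:
    """Symmetrically normalize an adjacency matrix with self-loops:
    D^{-1/2} (A + I) D^{-1/2}."""
    adj = adj + np.eye(adj.shape[0])
    deg_inv_sqrt = 1.0 / np.sqrt(adj.sum(axis=1))
    return adj * deg_inv_sqrt[:, None] * deg_inv_sqrt[None, :]

def ppr_propagate(adj: np.ndarray, h0: np.ndarray,
                  alpha: float = 0.1, n_iter: int = 1000) -> np.ndarray:
    """Personalized-PageRank-style propagation of node features h0:
        H <- (1 - alpha) * A_hat @ H + alpha * H0.
    For 0 < alpha <= 1 this is a contraction, so it converges to the
    unique solution alpha * (I - (1 - alpha) * A_hat)^{-1} @ H0,
    no matter how many iterations (i.e. how much "depth") we take.
    """
    a_hat = normalize_adjacency(adj)
    h = h0.copy()
    for _ in range(n_iter):
        h = (1.0 - alpha) * a_hat @ h + alpha * h0
    return h

if __name__ == "__main__":
    # Tiny 4-node path graph with 2-dimensional node features.
    adj = np.array([[0, 1, 0, 0],
                    [1, 0, 1, 0],
                    [0, 1, 0, 1],
                    [0, 0, 1, 0]], dtype=float)
    h0 = np.random.default_rng(0).normal(size=(4, 2))
    h = ppr_propagate(adj, h0)
    # Unlike plain repeated neighbor averaging, the rows stay distinct:
    # the reset term keeps node states from collapsing to a common value.
    print(h)
```

Dropping the reset term (alpha = 0) recovers plain repeated neighbor averaging, under which all node states converge toward a common subspace; that contrast is the over-smoothing effect the paper targets.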
