Paper Title
Equivariant and Stable Positional Encoding for More Powerful Graph Neural Networks
Paper Authors
Paper Abstract
Graph neural networks (GNNs) have shown great advantages in many graph-based learning tasks but often fail to predict accurately for tasks defined over sets of nodes, such as link and motif prediction. Many recent works propose to address this problem by using random node features or node distance features; however, these approaches suffer from slow convergence, inaccurate prediction, or high complexity. In this work, we revisit GNNs that use positional features of nodes given by positional encoding (PE) techniques such as Laplacian eigenmaps and DeepWalk. GNNs with PE are often criticized because they do not generalize to unseen graphs (are not inductive) or are unstable. Here, we study these issues in a principled way and propose a provable solution: a class of GNN layers termed PEG, supported by rigorous mathematical analysis. PEG uses separate channels to update the original node features and the positional features. It simultaneously imposes permutation equivariance w.r.t. the original node features and $O(p)$ (orthogonal group) equivariance w.r.t. the positional features, where $p$ is the dimension of the positional features. Extensive link prediction experiments over 8 real-world networks demonstrate the advantages of PEG in generalization and scalability.
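To make the two-channel design concrete, below is a minimal, hypothetical sketch (not the authors' reference implementation) of a PEG-style layer in PyTorch, based only on the abstract's description: node features X and positional features Z live in separate channels, messages are weighted by a learned function of the pairwise distance $\|Z_u - Z_v\|$ (which is invariant to $O(p)$ transformations of Z), and the aggregation over neighbors is permutation equivariant w.r.t. X. The class name `PEGLayerSketch` and all hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn as nn

class PEGLayerSketch(nn.Module):
    """Illustrative sketch of a PEG-style layer (assumption, not the paper's code).

    X (node features) and Z (positional features) are updated in separate
    channels. Edge weights depend only on ||Z_u - Z_v||, so they are invariant
    to orthogonal transformations of Z; the neighborhood aggregation is
    permutation equivariant w.r.t. X. Z itself is passed through unchanged.
    """

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)
        # Maps a scalar distance to an edge weight; seeing only the distance
        # keeps the layer O(p)-equivariant w.r.t. the positional channel.
        self.edge_mlp = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))

    def forward(self, adj: torch.Tensor, X: torch.Tensor, Z: torch.Tensor):
        # adj: (n, n) adjacency matrix, X: (n, in_dim) node features,
        # Z: (n, p) positional features (e.g. Laplacian eigenmap coordinates).
        dist = torch.cdist(Z, Z)                             # (n, n) pairwise ||Z_u - Z_v||
        w = self.edge_mlp(dist.unsqueeze(-1)).squeeze(-1)    # learned edge weights
        A = adj * w                                          # keep weights only on existing edges
        X_new = torch.relu(A @ self.lin(X))                  # permutation-equivariant update of X
        return X_new, Z                                      # positional channel untouched

# Toy usage on a random undirected graph.
n, d, p = 6, 8, 3
adj = (torch.rand(n, n) > 0.5).float()
adj = ((adj + adj.T) > 0).float()
X, Z = torch.randn(n, d), torch.randn(n, p)
layer = PEGLayerSketch(d, 16)
X_out, Z_out = layer(adj, X, Z)
print(X_out.shape, Z_out.shape)  # torch.Size([6, 16]) torch.Size([6, 3])
```

Because `Z` only enters through pairwise distances, replacing `Z` with `Z @ Q` for any orthogonal matrix `Q` leaves the output `X_new` unchanged, which is one way to realize the $O(p)$ equivariance the abstract describes.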