Paper Title

Improving Graph Neural Networks with Learnable Propagation Operators

Paper Authors

Moshe Eliasof, Lars Ruthotto, Eran Treister

Paper Abstract

Graph Neural Networks (GNNs) are limited in their propagation operators. In many cases, these operators often contain non-negative elements only and are shared across channels, limiting the expressiveness of GNNs. Moreover, some GNNs suffer from over-smoothing, limiting their depth. On the other hand, Convolutional Neural Networks (CNNs) can learn diverse propagation filters, and phenomena like over-smoothing are typically not apparent in CNNs. In this paper, we bridge these gaps by incorporating trainable channel-wise weighting factors $ω$ to learn and mix multiple smoothing and sharpening propagation operators at each layer. Our generic method is called $ω$GNN, and is easy to implement. We study two variants: $ω$GCN and $ω$GAT. For $ω$GCN, we theoretically analyse its behaviour and the impact of $ω$ on the obtained node features. Our experiments confirm these findings, demonstrating and explaining how both variants do not over-smooth. Additionally, we experiment with 15 real-world datasets on node- and graph-classification tasks, where our $ω$GCN and $ω$GAT perform on par with state-of-the-art methods.
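
The abstract only sketches the mechanism, so the following is a minimal, hypothetical PyTorch sketch of a channel-wise $ω$-weighted GCN layer. It assumes the mixed operator takes the form $(1-ω_c)I + ω_c\hat{A}$ per channel $c$, with $\hat{A}$ the symmetrically normalized adjacency; the paper's exact formulation may differ, and the names here (`OmegaGCNLayer`, `a_hat`) are illustrative, not from the paper.

```python
import torch
import torch.nn as nn

class OmegaGCNLayer(nn.Module):
    """Hypothetical sketch of a channel-wise ω-weighted GCN layer.

    Each output channel c blends the identity with the smoothing
    propagation by a trainable factor ω_c:
        h_c <- (1 - ω_c) * h_c + ω_c * (Â h)_c
    ω_c in (0, 1) smooths the channel's features, while ω_c > 1
    sharpens them. This only illustrates the mixing idea stated in
    the abstract, not the paper's exact formulation.
    """

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)
        # One trainable weighting factor per output channel.
        self.omega = nn.Parameter(torch.ones(out_dim))

    def forward(self, x: torch.Tensor, a_hat: torch.Tensor) -> torch.Tensor:
        # x: (num_nodes, in_dim) node features.
        # a_hat: (num_nodes, num_nodes) dense normalized adjacency,
        #        e.g. Â = D^{-1/2} (A + I) D^{-1/2}.
        h = self.lin(x)      # per-node channel mixing
        prop = a_hat @ h     # smoothing propagation Â h
        # Channel-wise blend of identity and propagated features.
        return (1.0 - self.omega) * h + self.omega * prop
```

Stacking such layers lets the network learn, per channel, where to smooth and where to sharpen, which is the mechanism the abstract credits for the $ω$GNN variants not over-smoothing with depth.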
