Paper Title

Not All Neighbors Are Worth Attending to: Graph Selective Attention Networks for Semi-supervised Learning

Authors

Tiantian He, Haicang Zhou, Yew-Soon Ong, Gao Cong

Abstract

Graph attention networks (GATs) are powerful tools for analyzing graph data from various real-world scenarios. To learn representations for downstream tasks, GATs generally attend to all neighbors of the central node when aggregating its features. In this paper, we show that a large portion of the neighbors are irrelevant to the central nodes in many real-world graphs, and can be excluded from neighbor aggregation. Taking this cue, we present Selective Attention (SA) and a series of novel attention mechanisms for graph neural networks (GNNs). SA leverages diverse forms of learnable node-node dissimilarity to acquire the scope of attention for each node, from which irrelevant neighbors are excluded. We further propose Graph Selective Attention Networks (SATs) to learn representations from the highly correlated node features identified and investigated by different SA mechanisms. Lastly, a theoretical analysis of the expressive power of the proposed SATs and a comprehensive empirical study of the SATs on challenging real-world datasets against state-of-the-art GNNs are presented to demonstrate the effectiveness of SATs.
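The core idea of Selective Attention can be illustrated with a minimal sketch: compute GAT-style attention logits, compute a learnable weighted dissimilarity between node pairs, and mask out neighbors whose dissimilarity exceeds a threshold before the softmax. This is an illustrative NumPy approximation under assumed parameter shapes (`W`, `a`, `d_weight`, `tau` are hypothetical names), not the paper's exact formulation, which defines several SA variants:

```python
import numpy as np

def selective_attention(h, adj, W, a, d_weight, tau):
    """One illustrative Selective Attention (SA) aggregation step.

    h        : (N, F)  node features
    adj      : (N, N)  0/1 adjacency matrix, assumed to include self-loops
    W        : (F, F2) learnable projection
    a        : (2*F2,) GAT-style attention vector, split into source/target halves
    d_weight : (F2,)   learnable weights of a node-node dissimilarity (assumption:
                       a weighted L1 distance stands in for the paper's variants)
    tau      : scalar threshold; neighbors with dissimilarity > tau are excluded
    """
    z = h @ W
    F2 = z.shape[1]
    # GAT-style logits: e_ij = LeakyReLU(a1 . z_i + a2 . z_j)
    e = (z @ a[:F2])[:, None] + (z @ a[F2:])[None, :]
    e = np.where(e > 0, e, 0.2 * e)
    # Learnable dissimilarity d_ij; self-dissimilarity is 0, so the scope
    # of attention always retains the central node itself.
    diff = np.abs(z[:, None, :] - z[None, :, :]) @ d_weight
    keep = (adj > 0) & (diff <= tau)
    # Softmax restricted to the retained scope of attention.
    logits = np.where(keep, e, -np.inf)
    ex = np.exp(logits - logits.max(axis=1, keepdims=True))
    alpha = ex / ex.sum(axis=1, keepdims=True)
    return alpha @ z, alpha
```

Excluded neighbors receive exactly zero attention weight rather than a small positive one, which is the key difference from standard GAT aggregation, where the softmax always assigns nonzero mass to every neighbor.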
