Paper Title

A Deep Graph Neural Networks Architecture Design: From Global Pyramid-like Shrinkage Skeleton to Local Topology Link Rewiring

Paper Author

Zhang, Gege

Paper Abstract

Expressivity plays a fundamental role in evaluating deep neural networks, and it is closely related to understanding the limits of performance improvement. In this paper, we propose a three-pipeline training framework based on critical expressivity, including global model contraction, weight evolution, and link weight rewiring. Specifically, we propose a pyramid-like skeleton to overcome the saddle points that affect information transfer. Then we analyze the reason for the modularity (clustering) phenomenon in network topology and use it to rewire potentially erroneous weighted links. We conduct numerical experiments on node classification, and the results confirm that the proposed training framework leads to significantly improved performance in terms of fast convergence and robustness to potentially erroneous weighted links. The architecture design on GNNs, in turn, verifies the expressivity of GNNs from the aspects of dynamics and topological space, and provides useful guidelines for designing more efficient neural networks.
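To make the "pyramid-like skeleton" idea concrete, below is a minimal NumPy sketch of a GCN-style forward pass whose hidden widths shrink layer by layer. This is only an illustration under the assumption that the skeleton corresponds to progressively narrowing layer widths; the layer sizes, toy graph, and weight initialization here are invented for demonstration and are not the paper's actual construction.

```python
import numpy as np

def normalize_adjacency(A):
    """Symmetrically normalize A with self-loops: D^{-1/2} (A + I) D^{-1/2}."""
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def pyramid_gcn_forward(A, X, widths, seed=0):
    """Forward pass through GCN-style layers with monotonically shrinking widths.

    Each layer computes ReLU(A_norm @ H @ W), with W's output dimension
    taken from `widths`, so the representation narrows like a pyramid.
    """
    rng = np.random.default_rng(seed)
    A_norm = normalize_adjacency(A)
    H = X
    for w in widths:
        W = rng.standard_normal((H.shape[1], w)) * 0.1  # random demo weights
        H = np.maximum(A_norm @ H @ W, 0.0)
    return H

# Toy graph: 5 nodes in a ring, with 8-dimensional input features.
A = np.zeros((5, 5))
for i in range(5):
    A[i, (i + 1) % 5] = A[(i + 1) % 5, i] = 1.0
X = np.random.default_rng(1).standard_normal((5, 8))

# Hidden widths shrink 8 -> 4 -> 2, mimicking a pyramid-like contraction.
out = pyramid_gcn_forward(A, X, widths=[8, 4, 2])
print(out.shape)  # (5, 2)
```

The shrinking width schedule is the only point being illustrated; in practice the weights would be trained, and the paper's framework additionally evolves weights and rewires links during training.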
