Paper Title
Optimization-Induced Graph Implicit Nonlinear Diffusion
Paper Authors
Paper Abstract
Due to the over-smoothing issue, most existing graph neural networks can only capture limited dependencies with their inherently finite aggregation layers. To overcome this limitation, we propose a new kind of graph convolution, called Graph Implicit Nonlinear Diffusion (GIND), which implicitly has access to infinite hops of neighbors while adaptively aggregating features with nonlinear diffusion to prevent over-smoothing. Notably, we show that the learned representation can be formalized as the minimizer of an explicit convex optimization objective. With this property, we can theoretically characterize the equilibrium of our GIND from an optimization perspective. More interestingly, we can induce new structural variants by modifying the corresponding optimization objective. Specifically, we can embed prior properties into the equilibrium and introduce skip connections to promote training stability. Extensive experiments show that GIND excels at capturing long-range dependencies and, thanks to its nonlinear diffusion, performs well on both homophilic and heterophilic graphs. Moreover, we show that the optimization-induced variants of our model boost performance while also improving training stability and efficiency. As a result, GIND obtains significant improvements on both node-level and graph-level tasks.
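To make the implicit mechanism concrete, the sketch below illustrates the general idea behind implicit graph layers: node representations are defined as the fixed point of a one-hop diffusion map, so the equilibrium implicitly aggregates information from infinitely many hops without stacking layers. This is a minimal illustration only, assuming a generic contractive update Z ← tanh(Â Z W + X U); the function name `implicit_graph_layer`, the weight matrices `W` and `U`, and the tanh nonlinearity are placeholders and do not reproduce GIND's exact nonlinear diffusion operator or its convex optimization objective.

```python
import numpy as np

def implicit_graph_layer(A_hat, X, W, U, tol=1e-5, max_iter=100):
    """Illustrative implicit GNN layer (not GIND's exact operator).

    Iterates Z <- tanh(A_hat @ Z @ W + X @ U) until the update converges,
    returning an approximate equilibrium Z*. Because Z* satisfies the
    one-hop diffusion equation, it implicitly mixes infinitely many hops.

    A_hat: (n, n) normalized adjacency; X: (n, d) node features;
    W: (h, h) and U: (d, h) weights. Keeping the spectral norm of W
    below 1 makes the map contractive, so the iteration converges.
    """
    Z = np.zeros((X.shape[0], W.shape[0]))
    for _ in range(max_iter):
        Z_next = np.tanh(A_hat @ Z @ W + X @ U)  # one diffusion step
        if np.linalg.norm(Z_next - Z) < tol:     # close to the fixed point
            return Z_next
        Z = Z_next
    return Z

# Toy usage on a 4-node path graph (hypothetical shapes and values).
n, d, h = 4, 3, 8
rng = np.random.default_rng(0)
A = np.diag(np.ones(n - 1), 1); A = A + A.T      # path-graph adjacency
deg = A.sum(1)
A_hat = A / np.sqrt(np.outer(deg, deg))          # symmetric normalization
W = rng.standard_normal((h, h))
W *= 0.9 / np.linalg.norm(W, 2)                  # spectral norm < 1 -> contraction
U = rng.standard_normal((d, h))
Z_star = implicit_graph_layer(A_hat, rng.standard_normal((n, d)), W, U)
```

In this reading, the depth of the network is replaced by the convergence of a fixed-point solver, which is what lets such models reach long-range dependencies with a constant number of parameters.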