Paper Title

Deep Constraint-based Propagation in Graph Neural Networks

Authors

Matteo Tiezzi, Giuseppe Marra, Stefano Melacci, Marco Maggini

Abstract

The popularity of deep learning techniques has renewed interest in neural architectures able to process complex structures that can be represented using graphs, inspired by Graph Neural Networks (GNNs). We focus our attention on the GNN model originally proposed by Scarselli et al. (2009), which encodes the state of each graph node by means of an iterative diffusion procedure that, during the learning stage, must be computed at every epoch until the fixed point of a learnable state transition function is reached, propagating information among neighbouring nodes. We propose a novel approach to learning in GNNs based on constrained optimization in the Lagrangian framework. Learning the transition function and the node states is the outcome of a joint process, in which the state convergence procedure is implicitly expressed by a constraint satisfaction mechanism, avoiding iterative epoch-wise procedures and the unfolding of the network. Our computational structure searches for saddle points of the Lagrangian in the adjoint space composed of weights, node state variables, and Lagrange multipliers. This process is further enhanced by multiple layers of constraints that accelerate the diffusion process. An experimental analysis shows that the proposed approach compares favourably with popular models on several benchmarks.
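To make the saddle-point search concrete, the following is a minimal PyTorch sketch, not the authors' implementation: the toy graph, layer shapes, learning rates, and the plain residual constraint G = X - f_w(X, agg) are illustrative assumptions. Node states X and Lagrange multipliers LAM are free variables alongside the network weights; weights and states are updated by gradient descent on the Lagrangian while the multipliers are updated by gradient ascent, so the fixed-point (state convergence) condition is enforced as a constraint instead of being reached by unrolling the diffusion at every epoch.

import torch

# Toy problem (assumed): a small random graph with binary node labels.
torch.manual_seed(0)
n_nodes, state_dim = 6, 4
A = (torch.rand(n_nodes, n_nodes) > 0.5).float()          # toy adjacency matrix
Y = torch.randint(0, 2, (n_nodes,)).float()               # toy node labels

# Learnable transition and readout functions (single linear layers for brevity).
f_w = torch.nn.Linear(2 * state_dim, state_dim)           # state transition
g_w = torch.nn.Linear(state_dim, 1)                       # node-level output

X = torch.zeros(n_nodes, state_dim, requires_grad=True)   # free node states
LAM = torch.zeros(n_nodes, state_dim, requires_grad=True) # Lagrange multipliers

# Descend in (weights, states); ascend in multipliers -> saddle-point search.
opt_min = torch.optim.SGD(
    list(f_w.parameters()) + list(g_w.parameters()) + [X], lr=0.05)
opt_max = torch.optim.SGD([LAM], lr=0.05)

for step in range(200):
    agg = A @ X / A.sum(1, keepdim=True).clamp(min=1.0)   # mean neighbour state
    G = X - f_w(torch.cat([X, agg], dim=1))               # constraint residual: x_v - f_w(...)
    loss = torch.nn.functional.binary_cross_entropy_with_logits(
        g_w(X).squeeze(1), Y)
    lagrangian = loss + (LAM * G).sum()                   # loss + multiplier * constraint

    opt_min.zero_grad()
    opt_max.zero_grad()
    lagrangian.backward()
    opt_min.step()                                        # gradient descent on weights and states
    LAM.grad.neg_()                                       # flip sign of the multiplier gradient...
    opt_max.step()                                        # ...so this step performs gradient ascent

The paper additionally stacks multiple layers of such constraints to accelerate the information diffusion and considers different functions of the constraint residual; the sketch above keeps a single layer and the raw residual for brevity.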
