Paper Title

Graph Convolutional Neural Networks as Parametric CoKleisli morphisms

Paper Authors

Bruno Gavranović, Mattia Villani

Paper Abstract

We define the bicategory of Graph Convolutional Neural Networks $\mathbf{GCNN}_n$ for an arbitrary graph with $n$ nodes. We show it can be factored through the already existing categorical constructions for deep learning called $\mathbf{Para}$ and $\mathbf{Lens}$ with the base category set to the CoKleisli category of the product comonad. We prove that there exists an injective-on-objects, faithful 2-functor $\mathbf{GCNN}_n \to \mathbf{Para}(\mathsf{CoKl}(\mathbb{R}^{n \times n} \times -))$. We show that this construction allows us to treat the adjacency matrix of a GCNN as a global parameter instead of a local, layer-wise one. This gives us a high-level categorical characterisation of a particular kind of inductive bias GCNNs possess. Lastly, we hypothesize about possible generalisations of GCNNs to general message-passing graph neural networks, connections to equivariant learning, and the (lack of) functoriality of activation functions.
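To make the role of the product comonad concrete, here is a minimal Haskell sketch (not the authors' formalisation) of a CoKleisli morphism for the product comonad $\mathbb{R}^{n \times n} \times -$ and of a toy GCNN layer viewed as a $\mathbf{Para}$ morphism over that base: the adjacency matrix is supplied once as the shared comonadic environment, while each layer carries its own local parameters. The `Matrix`/`Vector` types, the scalar weight and bias, and the `relu` nonlinearity are illustrative placeholders; the paper's general layers use full weight matrices.

```haskell
-- Minimal sketch: the product comonad A × - and a toy GCNN layer as a
-- Para morphism over its CoKleisli category.  Illustrative only.

-- A CoKleisli morphism for the product comonad is a map (a, x) -> y.
type CoKl a x y = (a, x) -> y

-- CoKleisli composition re-uses the same shared value `a` in both maps.
composeCoKl :: CoKl a y z -> CoKl a x y -> CoKl a x z
composeCoKl g f = \(a, x) -> g (a, f (a, x))

-- A Para morphism over this base additionally takes a local parameter p.
type ParaCoKl a p x y = (a, (p, x)) -> y

type Matrix = [[Double]]
type Vector = [Double]

-- Matrix-vector product, standing in for the action of the adjacency matrix.
matVec :: Matrix -> Vector -> Vector
matVec a x = [sum (zipWith (*) row x) | row <- a]

relu :: Double -> Double
relu = max 0

-- Toy single-feature GCNN layer: y = relu (A x * w + b).
layer :: ParaCoKl Matrix (Double, Double) Vector Vector
layer (adj, ((w, b), x)) = map (relu . (+ b) . (* w)) (matVec adj x)

-- Two layers composed in Para(CoKl(Matrix × -)): the adjacency matrix is
-- global and shared, while each layer keeps its own (weight, bias) pair.
twoLayers :: ParaCoKl Matrix ((Double, Double), (Double, Double)) Vector Vector
twoLayers (adj, ((p1, p2), x)) =
  composeCoKl (\(a, h) -> layer (a, (p2, h)))
              (\(a, x0) -> layer (a, (p1, x0)))
              (adj, x)

main :: IO ()
main = do
  let adj = [[0, 1], [1, 0]] :: Matrix   -- adjacency matrix of a 2-node graph
      x   = [1.0, 2.0]                   -- input node features
  print (twoLayers (adj, (((1.0, 0.0), (0.5, 0.1)), x)))
```

Composing the two layers through the shared environment, rather than handing each layer its own copy of the adjacency matrix, mirrors the abstract's claim that the adjacency matrix behaves as a global parameter rather than a local, layer-wise one.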
