Paper Title
Unified GCNs: Towards Connecting GCNs with CNNs
Paper Authors
Paper Abstract
Graph Convolutional Networks (GCNs) have widely demonstrated their powerful ability in graph data representation and learning. Existing graph convolution layers are mainly designed from the graph signal processing and transform perspective, and usually suffer from limitations such as over-smoothing, over-squashing, and non-robustness. It is well known that Convolutional Neural Networks (CNNs) have achieved great success in many computer vision and machine learning tasks. One main reason is that CNNs leverage many learnable convolution filters (kernels) to obtain rich feature descriptors and thus have a high capacity to encode complex patterns in visual data analysis. CNNs are also flexible in their network architecture design, as exemplified by MobileNet, ResNet, and Xception. A natural question therefore arises: can we design graph convolution layers as flexibly as convolution layers in CNNs? In this paper, we connect GCNs with CNNs from the general perspective of the depthwise separable convolution operation. Specifically, we show that GCN and GAT in fact perform specific depthwise separable convolution operations. This novel interpretation enables us to better understand the connections between GCNs (GCN, GAT) and CNNs, and further inspires us to design more Unified GCNs (UGCNs). As two showcases, we implement two UGCNs, i.e., Separable UGCN (S-UGCN) and General UGCN (G-UGCN), for graph data representation and learning. Promising experiments on several graph representation benchmarks demonstrate the effectiveness and advantages of the proposed UGCNs.
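To make the depthwise separable analogy mentioned in the abstract concrete, below is a minimal PyTorch sketch (our own illustration, not the authors' implementation) that writes a GCN-style layer as an explicit "depthwise" neighborhood-aggregation step followed by a "pointwise" channel-mixing step; the class name `SeparableGraphConv` and the toy graph are hypothetical, and the layer assumes the standard propagation rule Â X W with Â = D^{-1/2}(A + I)D^{-1/2}.

```python
import torch
import torch.nn as nn

class SeparableGraphConv(nn.Module):
    """Sketch of a GCN-style layer split into two explicit stages, mirroring a
    depthwise separable convolution:
      1) "depthwise" stage: spatial aggregation over the graph, applied to each
         feature channel independently with a shared spatial filter, X <- A_hat @ X.
      2) "pointwise" stage: a 1x1-style channel-mixing linear map W.
    With one shared spatial filter this reduces to the standard GCN rule A_hat X W;
    richer per-channel spatial filters would correspond to the more general
    UGCN-style layers the abstract alludes to (hypothetical here).
    """
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.pointwise = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, x, a_hat):
        # x: (num_nodes, in_dim); a_hat: (num_nodes, num_nodes) normalized adjacency
        x = a_hat @ x              # depthwise step: per-channel neighborhood aggregation
        x = self.pointwise(x)      # pointwise step: channel mixing (1x1 convolution)
        return torch.relu(x)

# Toy usage on a random undirected graph with self-loops
n, d_in, d_out = 5, 8, 4
a = (torch.rand(n, n) > 0.5).float()
a = ((a + a.t() + torch.eye(n)) > 0).float()            # symmetrize, add self-loops
deg_inv_sqrt = a.sum(dim=1).pow(-0.5)
a_hat = deg_inv_sqrt.unsqueeze(1) * a * deg_inv_sqrt    # D^{-1/2} (A + I) D^{-1/2}
x = torch.randn(n, d_in)
print(SeparableGraphConv(d_in, d_out)(x, a_hat).shape)  # torch.Size([5, 4])
```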