Paper Title

Eliciting Structural and Semantic Global Knowledge in Unsupervised Graph Contrastive Learning

Authors

Kaize Ding, Yancheng Wang, Yingzhen Yang, Huan Liu

Abstract

Graph Contrastive Learning (GCL) has recently drawn much research interest for learning generalizable node representations in a self-supervised manner. In general, the contrastive learning process in GCL is performed on top of the representations learned by a graph neural network (GNN) backbone, which transforms and propagates the node contextual information based on its local neighborhoods. However, nodes sharing similar characteristics may not always be geographically close, which poses a great challenge for unsupervised GCL efforts due to their inherent limitations in capturing such global graph knowledge. In this work, we address their inherent limitations by proposing a simple yet effective framework -- Simple Neural Networks with Structural and Semantic Contrastive Learning (S^3-CL). Notably, by virtue of the proposed structural and semantic contrastive learning algorithms, even a simple neural network can learn expressive node representations that preserve valuable global structural and semantic patterns. Our experiments demonstrate that the node representations learned by S^3-CL achieve superior performance on different downstream tasks compared with the state-of-the-art unsupervised GCL methods. Implementation and more experimental details are publicly available at \url{https://github.com/kaize0409/S-3-CL}.
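For readers unfamiliar with the contrastive learning objective the abstract refers to, the sketch below shows a generic InfoNCE-style loss over node embeddings: an anchor embedding is pulled toward a positive view and pushed away from negative samples. This is a minimal, dependency-free illustration of the general technique, not the actual S^3-CL objective from the paper; the function name `info_nce_loss` and the toy 2-D embeddings are assumptions for demonstration only.

```python
import math


def cosine(a, b):
    """Cosine similarity between two embedding vectors (plain lists)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)


def info_nce_loss(anchor, positive, negatives, temperature=0.5):
    """Generic InfoNCE-style contrastive loss for a single anchor node.

    The loss is low when the anchor is more similar to its positive view
    than to any of the negative samples, and high otherwise.
    """
    pos = math.exp(cosine(anchor, positive) / temperature)
    neg = sum(math.exp(cosine(anchor, n) / temperature) for n in negatives)
    return -math.log(pos / (pos + neg))


# Toy 2-D node embeddings: aligned anchor/positive yields a lower loss
# than a mismatched pair, illustrating the pull/push behavior.
aligned = info_nce_loss([1.0, 0.0], [1.0, 0.0], [[0.0, 1.0]])
mismatched = info_nce_loss([1.0, 0.0], [0.0, 1.0], [[1.0, 0.0]])
```

In practice, frameworks in this family compute such a loss over batches of node embeddings produced by an encoder, with positives drawn from augmented views (or, as in this paper, structure- and semantics-aware groupings) rather than hand-picked vectors.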
