Paper Title

Localized Contrastive Learning on Graphs

Authors

Hengrui Zhang, Qitian Wu, Yu Wang, Shaofeng Zhang, Junchi Yan, Philip S. Yu

Abstract

Contrastive learning methods based on the InfoNCE loss are popular in node representation learning tasks on graph-structured data. However, their reliance on data augmentation and their quadratic computational complexity can lead to inconsistency and inefficiency problems. To mitigate these limitations, in this paper we introduce a simple yet effective contrastive model named Localized Graph Contrastive Learning (Local-GCL for short). Local-GCL consists of two key designs: 1) we construct the positive examples for each node directly from its first-order neighbors, which frees our method from any reliance on carefully designed graph augmentations; 2) to improve the efficiency of contrastive learning on graphs, we devise a kernelized contrastive loss that can be approximately computed in linear time and space complexity with respect to the graph size. We provide theoretical analysis to justify the effectiveness and rationality of the proposed method. Experiments on various datasets with different scales and properties demonstrate that, despite its simplicity, Local-GCL achieves competitive performance in self-supervised node representation learning tasks.
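The two designs described in the abstract can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: it takes first-order neighbor pairs directly as positives, and approximates the InfoNCE partition function `sum_k exp(z_i . z_k / tau)` with Performer-style positive random features, so the denominator for every anchor is computed from one O(N) summary vector instead of an O(N^2) similarity matrix. All function and variable names here are assumptions for illustration.

```python
import numpy as np

def random_features(z, W, tau):
    # Positive random features: exp(x . y / tau) = E_w[phi_w(x) * phi_w(y)]
    # with phi_w(x) = exp(w . x / sqrt(tau) - ||x||^2 / (2 tau)), w ~ N(0, I).
    proj = z @ W / np.sqrt(tau)
    sq = np.sum(z ** 2, axis=1, keepdims=True) / (2.0 * tau)
    return np.exp(proj - sq) / np.sqrt(W.shape[1])  # average over m samples

def local_gcl_loss(z, edges, tau=1.0, n_feat=2048, seed=0):
    # z: (N, d) node embeddings; edges: (E, 2) first-order neighbor pairs,
    # used directly as positive pairs (no graph augmentation needed).
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((z.shape[1], n_feat))
    phi = random_features(z, W, tau)       # (N, m) feature map
    phi_sum = phi.sum(axis=0)              # O(N) summary of all nodes
    src, dst = edges[:, 0], edges[:, 1]
    pos = np.sum(z[src] * z[dst], axis=1) / tau    # positive-pair logits
    denom = phi[src] @ phi_sum             # approx. sum_k exp(z_i . z_k / tau)
    return float(np.mean(np.log(denom) - pos))
```

The key point is that `phi_sum` is shared by every anchor node, so evaluating all denominators costs O(N * m) time and O(m) extra space; the quality of the approximation improves as the number of random features `n_feat` grows.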
