Paper Title

Contrastive Laplacian Eigenmaps

Paper Authors

Hao Zhu, Ke Sun, Piotr Koniusz

Paper Abstract

Graph contrastive learning attracts/disperses node representations for similar/dissimilar node pairs under some notion of similarity. It may be combined with a low-dimensional embedding of nodes to preserve intrinsic and structural properties of a graph. In this paper, we extend the celebrated Laplacian Eigenmaps with contrastive learning, and call them COntrastive Laplacian EigenmapS (COLES). Starting from a GAN-inspired contrastive formulation, we show that the Jensen-Shannon divergence underlying many contrastive graph embedding models fails under disjoint positive and negative distributions, which may naturally emerge during sampling in the contrastive setting. In contrast, we demonstrate analytically that COLES essentially minimizes a surrogate of Wasserstein distance, which is known to cope well under disjoint distributions. Moreover, we show that the loss of COLES belongs to the family of so-called block-contrastive losses, previously shown to be superior compared to pair-wise losses typically used by contrastive methods. We show on popular benchmarks/backbones that COLES offers favourable accuracy/scalability compared to DeepWalk, GCN, Graph2Gauss, DGI and GRACE baselines.
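As a rough illustration of the attraction/repulsion idea described in the abstract, below is a minimal NumPy sketch of a contrastive Laplacian-Eigenmaps-style objective. The function names (`laplacian`, `contrastive_le_loss`), the use of the unnormalized Laplacian, and the randomly sampled negative graph are illustrative assumptions, not the authors' exact formulation.

```python
# Sketch of a contrastive Laplacian-Eigenmaps-style loss (illustrative only):
# attract connected nodes via Tr(Z^T L_pos Z) on the observed graph, and
# disperse nodes paired in a randomly sampled negative graph via -Tr(Z^T L_neg Z).
import numpy as np

def laplacian(A):
    """Unnormalized graph Laplacian L = D - A."""
    D = np.diag(A.sum(axis=1))
    return D - A

def contrastive_le_loss(Z, A_pos, A_neg):
    """Attraction on the positive graph minus repulsion on the negative graph."""
    L_pos = laplacian(A_pos)
    L_neg = laplacian(A_neg)
    return np.trace(Z.T @ L_pos @ Z) - np.trace(Z.T @ L_neg @ Z)

# Toy usage: 4 nodes, 2-dimensional embeddings (hypothetical data).
rng = np.random.default_rng(0)
A_pos = np.array([[0, 1, 1, 0],
                  [1, 0, 1, 0],
                  [1, 1, 0, 1],
                  [0, 0, 1, 0]], dtype=float)
A_neg = rng.integers(0, 2, size=(4, 4)).astype(float)
A_neg = np.triu(A_neg, 1)
A_neg = A_neg + A_neg.T          # symmetric negative (randomly sampled) graph
Z = rng.standard_normal((4, 2))  # node embeddings
print(contrastive_le_loss(Z, A_pos, A_neg))
```

In the actual method the embeddings would additionally be constrained (e.g. to be orthonormal, as in classical Laplacian Eigenmaps) or produced by a graph neural network backbone; this sketch only shows the contrastive trace structure of the loss.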
