Paper Title

Self-Supervised Representation Learning via Latent Graph Prediction

Paper Authors

Yaochen Xie, Zhao Xu, Shuiwang Ji

Paper Abstract

Self-supervised learning (SSL) of graph neural networks is emerging as a promising way of leveraging unlabeled data. Currently, most methods are based on contrastive learning adapted from the image domain, which requires view generation and a sufficient number of negative samples. In contrast, existing predictive models do not require negative sampling but lack theoretical guidance on the design of pretext training tasks. In this work, we propose LaGraph, a theoretically grounded predictive SSL framework based on latent graph prediction. The learning objectives of LaGraph are derived as self-supervised upper bounds on the objectives for predicting unobserved latent graphs. In addition to its improved performance, LaGraph provides explanations for the recent successes of predictive models that include invariance-based objectives. We provide a theoretical analysis comparing LaGraph to related methods in different domains. Our experimental results demonstrate the superiority of LaGraph in performance and its robustness to decreasing training sample sizes on both graph-level and node-level tasks.
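The abstract only describes the approach at a high level. As a rough illustration of what a predictive SSL objective combining feature reconstruction with an invariance term can look like, below is a minimal PyTorch sketch. This is not the authors' implementation: the encoder/decoder architecture, the masking scheme, and the loss weighting `alpha` are all illustrative assumptions; the paper derives its exact objective as an upper bound on the latent graph prediction error.

```python
# Minimal sketch of a predictive SSL objective with an invariance term,
# loosely in the spirit of latent graph prediction. All module names,
# shapes, and the loss weighting are illustrative assumptions, not the
# paper's implementation.
import torch
import torch.nn as nn


class DenseGCNLayer(nn.Module):
    """One dense GCN-style layer: H' = relu(A_hat @ H @ W)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, a_hat, h):
        return torch.relu(a_hat @ self.lin(h))


class PredictiveSSL(nn.Module):
    def __init__(self, feat_dim, hid_dim):
        super().__init__()
        self.encoder = DenseGCNLayer(feat_dim, hid_dim)
        self.decoder = nn.Linear(hid_dim, feat_dim)

    def forward(self, a_hat, x, mask_rate=0.15, alpha=1.0):
        n = x.size(0)
        # Randomly mask a subset of node features (zero them out).
        mask = (torch.rand(n, 1) < mask_rate).float()
        x_masked = x * (1.0 - mask)

        z = self.encoder(a_hat, x)                # observed-graph representations
        z_masked = self.encoder(a_hat, x_masked)  # masked-graph representations

        # Predictive term: reconstruct the observed node features.
        recon = self.decoder(z)
        loss_pred = ((recon - x) ** 2).mean()

        # Invariance term: representations of masked nodes should stay
        # close to those computed from the unmasked graph.
        loss_inv = (((z - z_masked) * mask) ** 2).mean()
        return loss_pred + alpha * loss_inv, z


# Toy usage on a random 5-node graph.
if __name__ == "__main__":
    n, d = 5, 8
    a = torch.randint(0, 2, (n, n)).float()
    a_hat = a + torch.eye(n)  # add self-loops (normalization omitted for brevity)
    x = torch.randn(n, d)
    model = PredictiveSSL(feat_dim=d, hid_dim=16)
    loss, z = model(a_hat, x)
    loss.backward()
    print(loss.item(), z.shape)
```

Note that, unlike contrastive methods, this objective needs no negative samples: the masked view serves only as the input for the prediction and invariance terms, which is the property the abstract highlights for predictive models.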
