Paper Title
What's Behind the Mask: Understanding Masked Graph Modeling for Graph Autoencoders
Paper Authors
Paper Abstract
Recent years have witnessed the emergence of a promising self-supervised learning strategy known as masked autoencoding. However, there is a lack of theoretical understanding of how masking matters for graph autoencoders (GAEs). In this work, we present Masked Graph Autoencoder (MaskGAE), a self-supervised learning framework for graph-structured data. Different from standard GAEs, MaskGAE adopts masked graph modeling (MGM) as a principled pretext task: masking a portion of edges and attempting to reconstruct the missing part from the partially visible, unmasked graph structure. To understand whether MGM can help GAEs learn better representations, we provide both theoretical and empirical evidence to comprehensively justify the benefits of this pretext task. Theoretically, we establish close connections between GAEs and contrastive learning, showing that MGM significantly improves the self-supervised learning scheme of GAEs. Empirically, we conduct extensive experiments on a variety of graph benchmarks, demonstrating the superiority of MaskGAE over several state-of-the-art methods on both link prediction and node classification tasks.
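To make the edge-masking step of MGM concrete, the following is a minimal sketch of randomly splitting a graph's edges into a masked set (reconstruction targets) and a visible set (encoder input). It assumes a PyTorch-style COO edge list; the function name mask_edges and the 0.6 ratio are illustrative assumptions, not the paper's exact implementation.

```python
import torch

def mask_edges(edge_index: torch.Tensor, mask_ratio: float = 0.6):
    """Randomly split edges into a masked set (to be reconstructed)
    and a visible set (fed to the encoder).

    edge_index: [2, E] tensor of edge endpoints in COO format,
    as used by graph libraries such as PyTorch Geometric.
    """
    num_edges = edge_index.size(1)
    perm = torch.randperm(num_edges)            # random edge order
    num_masked = int(mask_ratio * num_edges)    # how many edges to hide
    masked_edges = edge_index[:, perm[:num_masked]]   # reconstruction targets
    visible_edges = edge_index[:, perm[num_masked:]]  # remaining structure seen by the encoder
    return visible_edges, masked_edges

# Toy example: a 5-node ring graph stored as directed edge pairs
edge_index = torch.tensor([[0, 1, 2, 3, 4],
                           [1, 2, 3, 4, 0]])
visible, masked = mask_edges(edge_index, mask_ratio=0.6)
```

In a full pipeline, the encoder would compute node representations from visible only, and a decoder would be trained to predict the endpoints of masked, which is the reconstruction objective the abstract refers to.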