Paper Title

Multi-task Self-supervised Graph Neural Networks Enable Stronger Task Generalization

Authors

Mingxuan Ju, Tong Zhao, Qianlong Wen, Wenhao Yu, Neil Shah, Yanfang Ye, Chuxu Zhang

Abstract

Self-supervised learning (SSL) for graph neural networks (GNNs) has attracted increasing attention from the graph machine learning community in recent years, owing to its capability to learn performant node embeddings without costly label information. One weakness of conventional SSL frameworks for GNNs is that they learn through a single philosophy, such as mutual information maximization or generative reconstruction. When applied to various downstream tasks, these frameworks rarely perform equally well for every task, because one philosophy may not span the extensive knowledge required for all tasks. To enhance the task generalization across tasks, as an important first step forward in exploring fundamental graph models, we introduce PARETOGNN, a multi-task SSL framework for node representation learning over graphs. Specifically, PARETOGNN is self-supervised by manifold pretext tasks observing multiple philosophies. To reconcile different philosophies, we explore a multiple-gradient descent algorithm, such that PARETOGNN actively learns from every pretext task while minimizing potential conflicts. We conduct comprehensive experiments over four downstream tasks (i.e., node classification, node clustering, link prediction, and partition prediction), and our proposal achieves the best overall performance across tasks on 11 widely adopted benchmark datasets. Besides, we observe that learning from multiple philosophies enhances not only the task generalization but also the single task performances, demonstrating that PARETOGNN achieves better task generalization via the disjoint yet complementary knowledge learned from different philosophies. Our code is publicly available at https://github.com/jumxglhf/ParetoGNN.
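The abstract's key mechanism is reconciling several pretext tasks with a multiple-gradient descent algorithm (MGDA) so that no single self-supervision philosophy dominates training. As a rough illustration of that idea (not the authors' implementation), the sketch below uses the closed-form min-norm solution for the two-task case: it finds the convex combination of the two task gradients with the smallest norm, which yields a common descent direction for both tasks. The encoder stand-in `theta` and the loss functions `loss_reconstruction` and `loss_mutual_info` are hypothetical placeholders, not ParetoGNN's actual pretext tasks.

```python
import torch

def mgda_two_task_alpha(g1: torch.Tensor, g2: torch.Tensor) -> float:
    """Closed-form minimizer of ||a*g1 + (1-a)*g2||^2 over a in [0, 1].

    The resulting convex combination of gradients is a common descent
    direction for both tasks, the core step of two-task MGDA.
    """
    diff = g2 - g1
    denom = diff.dot(diff)
    if denom.item() == 0.0:
        return 0.5  # gradients coincide; any convex combination works
    return float((diff.dot(g2) / denom).clamp(0.0, 1.0))

# Toy shared parameters standing in for a GNN encoder's weights.
theta = torch.randn(16, requires_grad=True)

# Hypothetical stand-ins for two pretext losses with different "philosophies".
def loss_reconstruction(p):  # e.g., a generative-reconstruction objective
    return (p ** 2).sum()

def loss_mutual_info(p):     # e.g., a mutual-information-style objective
    return -p.tanh().sum()

optimizer = torch.optim.SGD([theta], lr=1e-2)
for step in range(100):
    optimizer.zero_grad()
    l1, l2 = loss_reconstruction(theta), loss_mutual_info(theta)
    g1, = torch.autograd.grad(l1, theta, retain_graph=True)
    g2, = torch.autograd.grad(l2, theta, retain_graph=True)
    alpha = mgda_two_task_alpha(g1.flatten(), g2.flatten())
    # Weight the losses by the min-norm coefficients so the combined
    # gradient makes progress on both pretext tasks at once.
    (alpha * l1 + (1.0 - alpha) * l2).backward()
    optimizer.step()
```

With more than two pretext tasks, as in ParetoGNN, the min-norm combination no longer has a simple closed form and is typically found numerically (e.g., with a Frank-Wolfe-style solver); the training loop otherwise follows the same pattern.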
