Paper Title

Scaling R-GCN Training with Graph Summarization

Paper Authors

Alessandro Generale, Till Blume, Michael Cochez

Paper Abstract

Training of Relational Graph Convolutional Networks (R-GCN) is a memory-intensive task. The amount of gradient information that needs to be stored during training on real-world graphs often exceeds the memory available on most GPUs. In this work, we experiment with the use of graph summarization techniques to compress the graph and hence reduce the amount of memory needed. After training the R-GCN on the graph summary, we transfer the weights back to the original graph and attempt to perform inference on it. We obtain reasonable results on the AIFB, MUTAG, and AM datasets. Our experiments show that training on the graph summary can yield accuracy comparable to or higher than training on the original graphs. Furthermore, if we leave the time needed to compute the summary out of the equation, we observe that the smaller graph representations obtained with graph summarization methods reduce the computational overhead. However, further experiments are needed to evaluate additional graph summary models and to check whether our findings also hold true for very large graphs.
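
The abstract describes training an R-GCN on a graph summary, transferring the learned weights back to the original graph, and performing inference there. The sketch below is a minimal illustration of that workflow, not the authors' implementation: it assumes PyTorch Geometric's RGCNConv layer, uses small randomly generated placeholder graphs in place of AIFB/MUTAG/AM, and makes the (assumed) choice of copying the relation-specific convolution weights directly while initialising each original node's embedding from its summary node.

```python
# Hedged sketch of "train on summary, infer on original" for an R-GCN.
# All graph sizes, edges, labels, and the node_to_summary mapping below
# are hypothetical placeholders, not data from the paper.
import torch
import torch.nn.functional as F
from torch_geometric.nn import RGCNConv

NUM_RELATIONS, HIDDEN, NUM_CLASSES = 4, 16, 3


class RGCN(torch.nn.Module):
    def __init__(self, num_nodes, num_relations, hidden, num_classes):
        super().__init__()
        # Featureless setup: one learned embedding per node.
        self.emb = torch.nn.Embedding(num_nodes, hidden)
        self.conv1 = RGCNConv(hidden, hidden, num_relations)
        self.conv2 = RGCNConv(hidden, num_classes, num_relations)

    def forward(self, edge_index, edge_type):
        x = F.relu(self.conv1(self.emb.weight, edge_index, edge_type))
        return self.conv2(x, edge_index, edge_type)


# --- Hypothetical summary graph (5 nodes) and original graph (20 nodes) ---
sum_edge_index = torch.tensor([[0, 1, 2, 3], [1, 2, 3, 4]])
sum_edge_type = torch.tensor([0, 1, 2, 3])
sum_labels = torch.randint(0, NUM_CLASSES, (5,))

orig_edge_index = torch.randint(0, 20, (2, 60))
orig_edge_type = torch.randint(0, NUM_RELATIONS, (60,))
# node_to_summary[i] = summary node that original node i was merged into
node_to_summary = torch.randint(0, 5, (20,))

# 1) Train the R-GCN on the (much smaller) summary graph.
summary_model = RGCN(5, NUM_RELATIONS, HIDDEN, NUM_CLASSES)
opt = torch.optim.Adam(summary_model.parameters(), lr=0.01)
for _ in range(50):
    opt.zero_grad()
    loss = F.cross_entropy(summary_model(sum_edge_index, sum_edge_type), sum_labels)
    loss.backward()
    opt.step()

# 2) Transfer: copy relation-specific conv weights, and initialise each
#    original node's embedding from its summary node's learned embedding.
orig_model = RGCN(20, NUM_RELATIONS, HIDDEN, NUM_CLASSES)
orig_model.conv1.load_state_dict(summary_model.conv1.state_dict())
orig_model.conv2.load_state_dict(summary_model.conv2.state_dict())
with torch.no_grad():
    orig_model.emb.weight.copy_(summary_model.emb.weight[node_to_summary])

# 3) Inference on the original graph with the transferred weights.
orig_model.eval()
with torch.no_grad():
    predictions = orig_model(orig_edge_index, orig_edge_type).argmax(dim=-1)
print(predictions)
```

Because the relational weight matrices of an R-GCN depend only on the feature dimensions and the number of relations, not on the number of nodes, they can be copied between the two models unchanged; only the node-level representations need to be mapped through the summarization assignment (or re-initialised).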
