Paper Title
Flattened Graph Convolutional Networks For Recommendation
Paper Authors
Paper Abstract
Graph Convolutional Networks (GCNs) and their variants have achieved strong performance on various recommendation tasks. However, many existing GCN models tend to perform recursive aggregation among all related nodes, which can incur a severe computational burden that hinders their application to large-scale recommendation tasks. To this end, this paper proposes the flattened GCN~(FlatGCN) model, which achieves superior performance with remarkably lower complexity than existing models. Our main contributions are three-fold. First, we propose a simplified yet powerful GCN architecture that aggregates neighborhood information using a single flattened GCN layer instead of recursive propagation. The aggregation step in FlatGCN is parameter-free, so it can be pre-computed in parallel to save memory and computational cost. Second, we propose an informative neighbor-infomax sampling method that selects the most valuable neighbors by measuring the correlation among neighboring nodes with a principled metric. Third, we propose a layer-ensemble technique that improves the expressiveness of the learned representations by assembling the layer-wise neighborhood representations at the final layer. Extensive experiments on three datasets verify that our proposed model considerably outperforms existing GCN models and yields up to a few orders of magnitude of speedup in training efficiency.
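The core idea of the parameter-free, pre-computable aggregation described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name `flat_aggregate`, the dict-based graph representation, and the use of plain mean aggregation are all assumptions made for clarity. The returned per-hop representations are the kind of layer-wise outputs that a final layer-ensemble step could combine.

```python
# Hypothetical sketch: parameter-free neighborhood aggregation that can be
# precomputed once before training, rather than recursively recomputed at
# every training step. All names here are illustrative, not from the paper.

def flat_aggregate(embeddings, neighbors, hops=2):
    """Precompute per-hop mean-aggregated representations for each node.

    embeddings: dict node -> list[float], the base embedding of each node
    neighbors:  dict node -> list of neighbor node ids
    Returns:    dict node -> list of representations, one per hop
                (index 0 is the node's own embedding).
    """
    # Layer 0 is the node's own embedding.
    layers = {n: [emb[:]] for n, emb in embeddings.items()}
    current = {n: emb[:] for n, emb in embeddings.items()}
    for _ in range(hops):
        nxt = {}
        for n in embeddings:
            nbrs = neighbors.get(n, [])
            if not nbrs:
                # Isolated node: carry its representation forward unchanged.
                nxt[n] = current[n][:]
                continue
            dim = len(current[n])
            agg = [0.0] * dim
            for m in nbrs:
                for d in range(dim):
                    agg[d] += current[m][d]
            # Parameter-free mean aggregation: no learned weights involved,
            # so this whole loop can run offline and in parallel per node.
            nxt[n] = [v / len(nbrs) for v in agg]
        for n in embeddings:
            layers[n].append(nxt[n])
        current = nxt
    return layers
```

Because no trainable parameters appear in the aggregation, the output of `flat_aggregate` can be cached to disk and reused across training epochs, which is the source of the efficiency gain the abstract claims.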