Paper Title
Spatio-Temporal Latent Graph Structure Learning for Traffic Forecasting
Paper Authors
Paper Abstract
Accurate traffic forecasting, the foundation of intelligent transportation systems (ITS), has never been more significant than it is today due to the prosperity of smart cities and urban computing. Recently, Graph Neural Networks (GNNs) have clearly outperformed traditional methods. Nevertheless, most conventional GNN-based models work well only when given a pre-defined graph structure, and existing methods of defining the graph structure focus purely on spatial dependencies while ignoring temporal correlations. Besides, the semantics of a static pre-defined adjacency matrix applied throughout the training process are always incomplete, overlooking latent topologies that could fine-tune the model. To tackle these challenges, we propose a new traffic forecasting framework -- Spatio-Temporal Latent Graph Structure Learning networks (ST-LGSL). More specifically, the model employs a graph generator based on a Multilayer Perceptron (MLP) and k-Nearest Neighbors (kNN), which learns latent graph topological information from the entire dataset, considering both spatial and temporal dynamics. Furthermore, by initializing the MLP-kNN generator with the ground-truth adjacency matrix and using a similarity metric in kNN, ST-LGSL aggregates topologies that capture both geography and node similarity. Additionally, the generated graphs act as the input of a spatio-temporal prediction module that combines Diffusion Graph Convolutions with Gated Temporal Convolution Networks. Experimental results on two real-world benchmark datasets demonstrate that ST-LGSL outperforms various types of state-of-the-art baselines.
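To make the graph-generator idea concrete, below is a minimal, hypothetical PyTorch sketch (not the authors' implementation) of an MLP-kNN latent graph generator as the abstract describes it: an MLP embeds each node's historical readings, pairwise similarities are computed between embeddings, and a top-k (kNN) step sparsifies the result into an adjacency matrix that a downstream diffusion graph convolution could consume. The class name `MLPKNNGraphGenerator`, the cosine-similarity choice, and all dimensions and hyperparameters are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch of an MLP-kNN latent graph generator (assumed design, not the authors' code).
import torch
import torch.nn as nn
import torch.nn.functional as F


class MLPKNNGraphGenerator(nn.Module):
    """Learns a latent adjacency matrix from node-level time-series features."""

    def __init__(self, in_dim: int, hidden_dim: int, k: int):
        super().__init__()
        self.k = k
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
        )

    def forward(self, node_feats: torch.Tensor) -> torch.Tensor:
        # node_feats: (num_nodes, in_dim), e.g. flattened historical traffic readings per sensor.
        emb = F.normalize(self.mlp(node_feats), dim=-1)     # (N, hidden_dim), unit-length embeddings
        sim = emb @ emb.t()                                 # (N, N) cosine-similarity matrix
        # kNN sparsification: keep only the top-k most similar neighbors per node.
        topk_vals, topk_idx = sim.topk(self.k, dim=-1)
        adj = torch.zeros_like(sim).scatter_(-1, topk_idx, topk_vals)
        # Symmetrize and row-normalize so the graph can feed a diffusion-style graph convolution.
        adj = (adj + adj.t()) / 2
        adj = adj / adj.sum(dim=-1, keepdim=True).clamp(min=1e-6)
        return adj


if __name__ == "__main__":
    # Illustrative sizes only: 200 sensors, 12 historical time steps as node features.
    gen = MLPKNNGraphGenerator(in_dim=12, hidden_dim=64, k=10)
    x = torch.randn(200, 12)
    learned_adj = gen(x)       # (200, 200) latent graph for the spatio-temporal prediction module
    print(learned_adj.shape)
```

In the abstract's setup, the generator is additionally initialized from the ground-truth adjacency; in a sketch like this, that would correspond to replacing the random node features above with distance- or similarity-derived ones rather than training from scratch.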