Paper Title
Temporal Domain Generalization with Drift-Aware Dynamic Neural Networks
Paper Authors
Paper Abstract
Temporal domain generalization is a promising yet extremely challenging area where the goal is to learn models under temporally changing data distributions and generalize to unseen data distributions following the trends of the change. The advancement of this area is challenged by: 1) characterizing data distribution drift and its impacts on models, 2) expressiveness in tracking the model dynamics, and 3) theoretical guarantee on the performance. To address them, we propose a Temporal Domain Generalization with Drift-Aware Dynamic Neural Network (DRAIN) framework. Specifically, we formulate the problem into a Bayesian framework that jointly models the relation between data and model dynamics. We then build a recurrent graph generation scenario to characterize the dynamic graph-structured neural networks learned across different time points. It captures the temporal drift of model parameters and data distributions and can predict models in the future without the presence of future data. In addition, we explore theoretical guarantees of the model performance under the challenging temporal DG setting and provide theoretical analysis, including uncertainty and generalization error. Finally, extensive experiments on several real-world benchmarks with temporal drift demonstrate the effectiveness and efficiency of the proposed method.
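To make the core idea concrete, below is a minimal, self-contained sketch (not the authors' implementation) of a recurrent, drift-aware parameter generator: an LSTM emits the weights of a small task network for each time-indexed source domain, learning how the model drifts over time, and is then rolled one extra step to obtain a model for the unseen future domain without using its data. All names (`HyperLSTM`, `task_forward`, `make_domain`) and the toy drifting regression task are illustrative assumptions, not from the paper, which additionally uses a graph-structured view of the network and a Bayesian formulation.

```python
# Illustrative sketch of temporal domain generalization via a recurrent
# weight generator. Assumed/hypothetical names throughout; not the DRAIN code.
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

IN_DIM, HID, OUT_DIM = 2, 16, 1                           # task network: IN_DIM -> HID -> OUT_DIM
N_PARAMS = IN_DIM * HID + HID + HID * OUT_DIM + OUT_DIM   # flattened parameter count


def task_forward(x, theta):
    """Run the small MLP whose weights are packed in the flat vector `theta`."""
    i = 0
    w1 = theta[i:i + IN_DIM * HID].view(HID, IN_DIM); i += IN_DIM * HID
    b1 = theta[i:i + HID]; i += HID
    w2 = theta[i:i + HID * OUT_DIM].view(OUT_DIM, HID); i += HID * OUT_DIM
    b2 = theta[i:i + OUT_DIM]
    return F.linear(torch.relu(F.linear(x, w1, b1)), w2, b2)


class HyperLSTM(nn.Module):
    """Recurrent generator: at each step it emits the task network's parameters,
    so the hidden state tracks how the concept (and hence the model) drifts."""
    def __init__(self, hidden=64):
        super().__init__()
        self.cell = nn.LSTMCell(N_PARAMS, hidden)
        self.head = nn.Linear(hidden, N_PARAMS)

    def roll(self, n_steps):
        h = torch.zeros(1, self.cell.hidden_size)
        c = torch.zeros_like(h)
        theta = torch.zeros(1, N_PARAMS)
        thetas = []
        for _ in range(n_steps):
            h, c = self.cell(theta, (h, c))
            theta = self.head(h)
            thetas.append(theta.squeeze(0))
        return thetas  # parameter vectors for domains t = 1..n_steps


def make_domain(t, n=256):
    """Toy drifting regression task: the target slope rotates with time index t."""
    x = torch.randn(n, IN_DIM)
    w = torch.tensor([math.cos(0.3 * t), math.sin(0.3 * t)])
    y = (x @ w).unsqueeze(1) + 0.05 * torch.randn(n, 1)
    return x, y


T = 5                                        # number of observed source domains
domains = [make_domain(t) for t in range(T)]
gen = HyperLSTM()
opt = torch.optim.Adam(gen.parameters(), lr=1e-3)

for epoch in range(300):
    opt.zero_grad()
    thetas = gen.roll(T)                     # one weight vector per observed domain
    loss = sum(F.mse_loss(task_forward(x, th), y) for th, (x, y) in zip(thetas, domains))
    loss.backward()
    opt.step()

# Extrapolate one step beyond the observed domains: the generator predicts the
# future model's parameters without ever seeing future data.
with torch.no_grad():
    theta_future = gen.roll(T + 1)[-1]
    x_f, y_f = make_domain(T)
    print("future-domain MSE:", F.mse_loss(task_forward(x_f, theta_future), y_f).item())
```

In this sketch the drift of the data distribution is absorbed entirely by the recurrent state, which stands in for the paper's joint Bayesian modeling of data and model dynamics; the extrapolated `theta_future` plays the role of the predicted future model.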