Paper Title
Long-time prediction of nonlinear parametrized dynamical systems by deep learning-based reduced order models
Paper Authors
Paper Abstract
Deep learning-based reduced order models (DL-ROMs) have recently been proposed to overcome common limitations shared by conventional ROMs - built, e.g., exclusively through proper orthogonal decomposition (POD) - when applied to nonlinear time-dependent parametrized PDEs. In particular, POD-DL-ROMs can achieve extreme efficiency in the training stage and faster-than-real-time performance at testing, thanks to a prior dimensionality reduction through POD and a DL-based prediction framework. Nonetheless, they share with conventional ROMs poor performance on time extrapolation tasks. This work aims at taking a further step towards the use of DL algorithms for the efficient numerical approximation of parametrized PDEs by introducing the $\mu t$-POD-LSTM-ROM framework. This novel technique extends the POD-DL-ROM framework by adding a two-fold architecture taking advantage of long short-term memory (LSTM) cells, ultimately allowing long-term prediction of complex systems' evolution, with respect to the training window, for unseen input parameter values. Numerical results show that this recurrent architecture enables extrapolation over time windows up to 15 times larger than the training time domain, and achieves better testing-time performance than the already lightning-fast POD-DL-ROMs.
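The prior dimensionality reduction mentioned in the abstract relies on POD, which in practice amounts to a truncated SVD of a snapshot matrix. The sketch below is a minimal, hypothetical illustration of that step only (not the paper's full POD-DL-ROM pipeline); all names (`S`, `N`, `V`) and the synthetic data are assumptions for demonstration.

```python
import numpy as np

# Minimal sketch of the POD step: compress a snapshot matrix
# S (n_dofs x n_snapshots) onto its leading N POD modes via truncated SVD.
# Synthetic near-rank-N data stands in for PDE solution snapshots.
rng = np.random.default_rng(0)
n_dofs, n_snap, N = 200, 50, 5

U_true = rng.standard_normal((n_dofs, N))
S = U_true @ rng.standard_normal((N, n_snap)) \
    + 1e-6 * rng.standard_normal((n_dofs, n_snap))

# POD basis: leading N left singular vectors of the snapshot matrix
U, s, _ = np.linalg.svd(S, full_matrices=False)
V = U[:, :N]                 # reduced basis, shape (n_dofs, N)

coeffs = V.T @ S             # project snapshots onto the reduced basis
S_rec = V @ coeffs           # lift back to the full-order space

rel_err = np.linalg.norm(S - S_rec) / np.linalg.norm(S)
print(rel_err)               # small, since the snapshots are near rank N
```

In a POD-DL-ROM-style pipeline, the low-dimensional coefficients (here `coeffs`) are what a neural network - an LSTM, in the framework described above - would learn to predict over time and parameters, instead of the full-order solution.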