Paper Title


Computational model discovery with reinforcement learning

Authors

Maxime Bassenne, Adrián Lozano-Durán

Abstract


The motivation of this study is to leverage recent breakthroughs in artificial intelligence research to unlock novel solutions to important scientific problems encountered in computational science. To address the limitations of human intelligence in discovering reduced-order models, we propose to supplement human thinking with artificial intelligence. Our three-pronged strategy consists of learning (i) models expressed in analytical form, (ii) which are evaluated a posteriori, and (iii) using exclusively integral quantities from the reference solution as prior knowledge. In point (i), we pursue interpretable models expressed symbolically as opposed to black-box neural networks, the latter only being used during learning to efficiently parameterize the large search space of possible models. In point (ii), learned models are dynamically evaluated a posteriori in the computational solver instead of based on a priori information from preprocessed high-fidelity data, thereby accounting for the specificity of the solver at hand, such as its numerics. Finally, in point (iii), the exploration of new models is solely guided by predefined integral quantities, e.g., averaged quantities of engineering interest in Reynolds-averaged or large-eddy simulations (LES). We use a coupled deep reinforcement learning framework and computational solver to concurrently achieve these objectives. The combination of reinforcement learning with objectives (i), (ii), and (iii) differentiates our work from previous modeling attempts based on machine learning. In this report, we provide a high-level description of the model discovery framework with reinforcement learning. The method is detailed for the application of discovering missing terms in differential equations. An elementary instantiation of the method is described that discovers missing terms in the Burgers' equation.
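To make the a-posteriori evaluation idea concrete, the following is a minimal illustrative sketch (not the authors' implementation): a candidate "missing term" is inserted into a simple finite-difference viscous Burgers' solver, the solver is run forward, and the candidate is scored only through an integral quantity (here, the kinetic-energy trajectory) against a reference run, with no pointwise high-fidelity data. All function names, the extra diffusive "true" term, and the numerical parameters are assumptions chosen for illustration.

```python
import numpy as np

def solve_burgers(nx=128, nt=200, nu=0.01, dt=1e-3, closure=None):
    """Periodic viscous Burgers' solver (explicit Euler, central differences).
    `closure(u, dx)` optionally adds a candidate model term to du/dt."""
    x = np.linspace(0.0, 2.0 * np.pi, nx, endpoint=False)
    dx = x[1] - x[0]
    u = np.sin(x)  # smooth initial condition
    energies = []
    for _ in range(nt):
        dudx = (np.roll(u, -1) - np.roll(u, 1)) / (2.0 * dx)
        d2udx2 = (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx**2
        rhs = -u * dudx + nu * d2udx2
        if closure is not None:
            rhs = rhs + closure(u, dx)
        u = u + dt * rhs
        # integral quantity recorded each step: mean kinetic energy
        energies.append(0.5 * np.mean(u**2))
    return np.array(energies)

# Reference trajectory from a "true" equation containing an extra term
# (here an added diffusive term with coefficient 0.05 -- purely illustrative).
true_term = lambda u, dx: 0.05 * (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx**2
ref = solve_burgers(closure=true_term)

def reward(candidate):
    """A-posteriori score: negative mismatch of the integral quantity only."""
    e = solve_burgers(closure=candidate)
    return -np.mean((e - ref) ** 2)

# A symbolic candidate proposed by the learner is scored by running the solver:
candidate = lambda u, dx: 0.04 * (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx**2
print(reward(candidate) > reward(None))  # a closer model earns a higher reward
```

In the paper's framework, the candidate term would come from a reinforcement-learning agent exploring a symbolic search space, and the reward above would drive that exploration; this sketch only shows how scoring through the solver and an integral quantity differs from fitting preprocessed high-fidelity data.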
