Paper Title

Physical invariance in neural networks for subgrid-scale scalar flux modeling

Paper Authors

Hugo Frezat, Guillaume Balarac, Julien Le Sommer, Ronan Fablet, Redouane Lguensat

Paper Abstract

In this paper we present a new strategy to model the subgrid-scale scalar flux in a three-dimensional turbulent incompressible flow using physics-informed neural networks (NNs). When trained from direct numerical simulation (DNS) data, state-of-the-art neural networks, such as convolutional neural networks, may not preserve well-known physical priors, which may in turn call into question their application to real case studies. To address this issue, we investigate hard and soft constraints on the model based on classical transformation invariances and symmetries derived from physical laws. From simulation-based experiments, we show that the proposed transformation-invariant NN model outperforms both purely data-driven models and parametric state-of-the-art subgrid-scale models. The considered invariances act as regularizers on physical metrics during the a priori evaluation and constrain the distribution tails of the predicted subgrid-scale term to be closer to the DNS. They also increase the stability and performance of the model when used as a surrogate during a large-eddy simulation. Moreover, the transformation-invariant NN is shown to generalize to regimes that have not been seen during the training phase.
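The abstract contrasts "hard" and "soft" invariance constraints. As a rough illustration of the soft variant, the sketch below adds an equivariance penalty to a standard data loss: the model is penalized whenever its prediction fails to commute with a symmetry transform. It is a minimal PyTorch-style sketch, not the authors' implementation; the choice of transform (a 90-degree rotation about the z axis), the tensor layout, and the weight `lam` are assumptions for illustration only.

```python
# Minimal sketch (not the paper's code) of a soft invariance constraint
# for an SGS scalar flux model. Assumed layout: batched 3-component
# vector fields shaped (batch, 3, x, y, z).
import torch
import torch.nn as nn
import torch.nn.functional as F

def rotate_z90(field: torch.Tensor) -> torch.Tensor:
    """Rotate a batch of 3-component vector fields by 90 degrees
    about the z axis (grid and vector components together)."""
    # Rotate the sampling grid in the x-y plane.
    rotated = torch.rot90(field, k=1, dims=(2, 3))
    # Rotate the vector components: (vx, vy, vz) -> (-vy, vx, vz).
    vx, vy, vz = rotated[:, 0], rotated[:, 1], rotated[:, 2]
    return torch.stack((-vy, vx, vz), dim=1)

def invariance_penalty(model: nn.Module, x: torch.Tensor) -> torch.Tensor:
    """Soft constraint: penalize || model(T x) - T model(x) ||^2."""
    return F.mse_loss(model(rotate_z90(x)), rotate_z90(model(x)))

def training_loss(model, x, target_flux, lam=0.1):
    """Data term plus the invariance regularizer; `lam` is ad hoc."""
    return F.mse_loss(model(x), target_flux) + lam * invariance_penalty(model, x)
```

A hard constraint, by contrast, builds the symmetry into the model itself, for example by averaging the network's output over the transformation group, so that equivariance holds by construction rather than being encouraged through the loss.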
