Paper Title

Physical Activation Functions (PAFs): An Approach for More Efficient Induction of Physics into Physics-Informed Neural Networks (PINNs)

Paper Authors

Jassem Abbasi, Pål Østebø Andersen

Paper Abstract

In recent years, attempts have been made to fill the gap between Deep Learning (DL) methods and analytical or numerical approaches in scientific computing through the evolution of Physics-Informed Neural Networks (PINNs). However, many complications remain in training PINNs and optimally interleaving them with physical models. Here, we introduce the concept of Physical Activation Functions (PAFs). This concept proposes that, instead of using general activation functions (AFs) such as ReLU, tanh, and sigmoid for all neurons, one can use AFs whose mathematical expressions are inherited from the physical laws of the investigated phenomena. The formula of a PAF may be inspired by terms in the analytical solution of the problem. We show that PAFs can be inspired by any mathematical formula related to the investigated phenomena, such as the initial or boundary conditions of the PDE system. We validate the advantages of PAFs for several PDEs, including the harmonic oscillation, Burgers, advection-convection, and heterogeneous diffusion equations. The main advantage of PAFs is the more efficient constraining and interleaving of PINNs with the investigated physical phenomena and their underlying mathematical models. This added constraint significantly improved the predictions of PINNs on testing data from outside the training distribution. Furthermore, the application of PAFs reduced the size of the PINNs by up to 75% in different cases. The value of the loss terms was also reduced by 1 to 2 orders of magnitude in some cases, which is noteworthy for improving the training of PINNs. The number of iterations required to find the optimal values was likewise significantly reduced. It is concluded that using PAFs helps generate PINNs with less complexity and much greater validity over longer prediction ranges.
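
The abstract does not include an implementation, but the core idea of a PAF can be illustrated with a short sketch. Below is a minimal PyTorch example for the harmonic-oscillator case, where hidden neurons use a sin-based activation inspired by the solution form sin(ωt) instead of a generic tanh; the class names SinPAF and PINN and the learnable frequency omega are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn

class SinPAF(nn.Module):
    """Activation inspired by the harmonic-oscillator solution sin(omega * t).

    The frequency omega is made learnable here so the network can adapt the
    physical term during training (an assumption made for this sketch).
    """
    def __init__(self):
        super().__init__()
        self.omega = nn.Parameter(torch.ones(1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.sin(self.omega * x)

class PINN(nn.Module):
    """Small fully connected PINN using the PAF in place of a generic AF."""
    def __init__(self, width: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1, width), SinPAF(),
            nn.Linear(width, width), SinPAF(),
            nn.Linear(width, 1),
        )

    def forward(self, t: torch.Tensor) -> torch.Tensor:
        return self.net(t)

# Usage: predict u(t) on a time grid; the PDE-residual and boundary losses
# would be added on top of this forward pass exactly as in a standard PINN.
model = PINN()
t = torch.linspace(0.0, 1.0, 100).unsqueeze(-1)
u = model(t)
```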
