Paper Title

Message Passing Neural PDE Solvers

Authors

Johannes Brandstetter, Daniel Worrall, Max Welling

Abstract

The numerical solution of partial differential equations (PDEs) is difficult, having led to a century of research so far. Recently, there have been pushes to build neural--numerical hybrid solvers, which piggy-back on the modern trend towards fully end-to-end learned systems. Most works so far can only generalize over a subset of the properties a generic solver would face, including: resolution, topology, geometry, boundary conditions, domain discretization regularity, dimensionality, etc. In this work, we build a solver, satisfying these properties, where all the components are based on neural message passing, replacing all heuristically designed components in the computation graph with backprop-optimized neural function approximators. We show that neural message passing solvers representationally contain some classical methods, such as finite differences, finite volumes, and WENO schemes. In order to encourage stability in training autoregressive models, we put forward a method that is based on the principle of zero-stability, posing stability as a domain adaptation problem. We validate our method on various fluid-like flow problems, demonstrating fast, stable, and accurate performance across different domain topologies, equation parameters, discretizations, etc., in 1D and 2D.
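To make the idea of a message-passing PDE solver more concrete, below is a minimal, hypothetical sketch (in PyTorch) of one message-passing round over a 1D spatial graph: edge messages are computed from the sender and receiver states together with their relative positions, summed at each receiving node, and used in a residual node update. The class name, layer sizes, and MLP shapes are illustrative assumptions for this sketch, not the authors' released implementation.

```python
import torch
import torch.nn as nn


class MessagePassingLayer(nn.Module):
    """One round of neural message passing over a spatial graph.

    Minimal sketch (not the paper's code): latent node states stand in for
    local solution values, and edge messages see the relative node positions,
    mimicking the stencil-like structure that the abstract argues can
    representationally contain classical schemes.
    """

    def __init__(self, hidden_dim: int = 64):
        super().__init__()
        # Edge network: combines sender state, receiver state, and relative position.
        self.edge_mlp = nn.Sequential(
            nn.Linear(2 * hidden_dim + 1, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
        )
        # Node network: updates a node from its state and the aggregated messages.
        self.node_mlp = nn.Sequential(
            nn.Linear(2 * hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
        )

    def forward(self, h, pos, edge_index):
        # h:          (num_nodes, hidden_dim) latent node states
        # pos:        (num_nodes, 1) spatial coordinates (1D domain here)
        # edge_index: (2, num_edges) sender / receiver node indices
        senders, receivers = edge_index
        rel_pos = pos[senders] - pos[receivers]  # relative offsets along each edge
        messages = self.edge_mlp(
            torch.cat([h[senders], h[receivers], rel_pos], dim=-1)
        )
        # Sum-aggregate incoming messages at each receiver node.
        agg = torch.zeros_like(h).index_add_(0, receivers, messages)
        return h + self.node_mlp(torch.cat([h, agg], dim=-1))  # residual update


# Toy usage on a 1D grid with nearest-neighbour edges in both directions.
num_nodes, hidden = 8, 64
pos = torch.linspace(0.0, 1.0, num_nodes).unsqueeze(-1)
h = torch.randn(num_nodes, hidden)
src = torch.arange(num_nodes - 1)
edge_index = torch.stack([torch.cat([src, src + 1]), torch.cat([src + 1, src])])
h_new = MessagePassingLayer(hidden)(h, pos, edge_index)
```

Stacking a few such layers and rolling the model out autoregressively in time corresponds to the setting whose training stability the abstract addresses via the zero-stability argument.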
