Paper Title
Neural Operator: Graph Kernel Network for Partial Differential Equations
Paper Authors
Paper Abstract
The classical development of neural networks has been primarily for mappings between a finite-dimensional Euclidean space and a set of classes, or between two finite-dimensional Euclidean spaces. The purpose of this work is to generalize neural networks so that they can learn mappings between infinite-dimensional spaces (operators). The key innovation in our work is that a single set of network parameters, within a carefully designed network architecture, may be used to describe mappings between infinite-dimensional spaces and between different finite-dimensional approximations of those spaces. We formulate the approximation of the infinite-dimensional mapping by composing nonlinear activation functions with a class of integral operators. The kernel integration is computed by message passing on graph networks. This approach has substantial practical consequences, which we illustrate in the context of mappings from the input data of partial differential equations (PDEs) to their solutions. In this context, such learned networks can generalize across different approximation methods for the PDE (such as finite difference or finite element methods) and across approximations corresponding to different underlying levels of resolution and discretization. Experiments confirm that the proposed graph kernel network has the desired properties and achieves performance competitive with state-of-the-art solvers.
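To make the kernel-integration step concrete, below is a minimal NumPy sketch of one graph kernel network layer. It is an illustration under stated assumptions, not the authors' implementation: the names GraphKernelLayer and kappa are hypothetical, and a fixed random two-layer MLP stands in for the learned kernel network. The layer computes the update v(x) <- sigma(W v(x) + (1/|N(x)|) * sum_{y in N(x)} kappa(x, y) v(y)), where averaging messages over graph neighbors approximates the integral operator described in the abstract.

import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

class GraphKernelLayer:
    """One graph kernel update (illustrative sketch, not the authors' code).

    kappa maps edge features to a (width x width) matrix; here it is a
    fixed random two-layer MLP standing in for the learned kernel network.
    """
    def __init__(self, width, edge_dim, hidden=32, seed=0):
        rng = np.random.default_rng(seed)
        self.width = width
        self.W = rng.normal(scale=0.1, size=(width, width))
        # parameters of the kernel MLP: edge_dim -> hidden -> width * width
        self.K1 = rng.normal(scale=0.1, size=(edge_dim, hidden))
        self.K2 = rng.normal(scale=0.1, size=(hidden, width * width))

    def kappa(self, edge_feats):
        # edge_feats: (n_edges, edge_dim) -> per-edge (width, width) matrices
        h = relu(edge_feats @ self.K1)
        return (h @ self.K2).reshape(-1, self.width, self.width)

    def __call__(self, v, edges, edge_feats):
        # v: (n_nodes, width); edges: (n_edges, 2) as (source, target) pairs
        # message on each edge: kappa(x, y) applied to the source node state
        msgs = np.einsum('eij,ej->ei', self.kappa(edge_feats), v[edges[:, 0]])
        agg = np.zeros_like(v)
        counts = np.zeros(len(v))
        np.add.at(agg, edges[:, 1], msgs)        # sum messages per target node
        np.add.at(counts, edges[:, 1], 1.0)
        agg /= np.maximum(counts, 1.0)[:, None]  # neighborhood average ~ integral
        return relu(v @ self.W.T + agg)          # v <- sigma(W v + kernel integral)

# Toy usage on a random graph; in the paper's setting the edge features would
# encode the coordinates x, y and the input-function values a(x), a(y) -- here
# they are random placeholders.
rng = np.random.default_rng(1)
n_nodes, n_edges, width, edge_dim = 50, 400, 16, 6
v = rng.normal(size=(n_nodes, width))
edges = rng.integers(0, n_nodes, size=(n_edges, 2))
edge_feats = rng.normal(size=(n_edges, edge_dim))
layer = GraphKernelLayer(width, edge_dim)
v = layer(v, edges, edge_feats)  # one kernel-integration update

Because the layer acts on whatever set of nodes it is given, the same parameters can be applied to graphs built from different discretizations of the underlying domain, which is the discretization-invariance property the abstract emphasizes.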