Paper Title


A Data-Adaptive Prior for Bayesian Learning of Kernels in Operators

Paper Authors

Chada, Neil K., Lang, Quanjun, Lu, Fei, Wang, Xiong

Paper Abstract


Kernels are efficient in representing nonlocal dependence, and they are widely used to design operators between function spaces. Thus, learning kernels in operators from data is an inverse problem of general interest. Due to the nonlocal dependence, the inverse problem can be severely ill-posed with a data-dependent singular inversion operator. The Bayesian approach overcomes the ill-posedness through a non-degenerate prior. However, a fixed non-degenerate prior leads to a divergent posterior mean when the observation noise becomes small, if the data induces a perturbation in the eigenspace of zero eigenvalues of the inversion operator. We introduce a data-adaptive prior to achieve a stable posterior whose mean always has a small-noise limit. The data-adaptive prior's covariance is the inversion operator with a hyper-parameter selected adaptively to the data by the L-curve method. Furthermore, we provide a detailed analysis of the computational practice of the data-adaptive prior, and demonstrate it on Toeplitz matrices and integral operators. Numerical tests show that a fixed prior can lead to a divergent posterior mean in the presence of any of four types of errors: discretization error, model error, partial observation, and a wrong noise assumption. In contrast, the data-adaptive prior always attains posterior means with small-noise limits.
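The contrast the abstract describes can be sketched in a small finite-dimensional example. The sketch below is hypothetical and not the authors' exact algorithm: it assumes a singular normal operator `G` (standing in for the data-dependent inversion operator) and data with a small noise-induced component in `G`'s null space. A fixed non-degenerate prior yields the Tikhonov-style posterior mean `(G + lam*I)^{-1} b`, which amplifies the null-space component by `1/lam`; a prior whose covariance is proportional to `G` itself zeroes out the null-space modes, giving a stable small-noise limit.

```python
import numpy as np

# Hypothetical illustration (not the paper's exact algorithm): build a
# singular symmetric normal operator G with eigenvalues sig.
rng = np.random.default_rng(0)
n = 5
U = np.linalg.qr(rng.standard_normal((n, n)))[0]
sig = np.array([2.0, 1.0, 0.5, 0.1, 0.0])   # zero eigenvalue -> ill-posed
G = U @ np.diag(sig) @ U.T

x_true = U[:, 0] + U[:, 1]                   # ground truth in the range of G
delta = 1e-6                                 # noise-induced null-space perturbation
b = G @ x_true + delta * U[:, -1]

lam = 1e-8   # small hyper-parameter (an L-curve method would select this)

# Fixed non-degenerate prior: posterior mean (G + lam*I)^{-1} b.
# The null-space component of b is amplified by 1/lam.
m_fixed = np.linalg.solve(G + lam * np.eye(n), b)

# Data-adaptive prior with covariance proportional to G: in G's eigenbasis
# the posterior mean is sig_i * c_i / (sig_i**2 + lam) on nonzero modes,
# and exactly 0 on the null space.
c = U.T @ b
m_adapt = U @ np.where(sig > 0, sig * c / (sig**2 + lam), 0.0)

print("fixed prior error:   ", np.linalg.norm(m_fixed - x_true))
print("adaptive prior error:", np.linalg.norm(m_adapt - x_true))
```

In this toy setup the fixed-prior error is dominated by the amplified null-space term `delta / lam`, while the data-adaptive posterior mean stays close to `x_true`, mirroring the divergent vs. stable behavior reported in the abstract's numerical tests.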
