Paper Title

Learnable Uncertainty under Laplace Approximations

Authors

Agustinus Kristiadi, Matthias Hein, Philipp Hennig

Abstract

Laplace approximations are classic, computationally lightweight means for constructing Bayesian neural networks (BNNs). As in other approximate BNNs, one cannot necessarily expect the induced predictive uncertainty to be calibrated. Here we develop a formalism to explicitly "train" the uncertainty in a decoupled way to the prediction itself. To this end, we introduce uncertainty units for Laplace-approximated networks: Hidden units associated with a particular weight structure that can be added to any pre-trained, point-estimated network. Due to their weights, these units are inactive -- they do not affect the predictions. But their presence changes the geometry (in particular the Hessian) of the loss landscape, thereby affecting the network's uncertainty estimates under a Laplace approximation. We show that such units can be trained via an uncertainty-aware objective, improving standard Laplace approximations' performance in various uncertainty quantification tasks.
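
The following is a minimal sketch, not the authors' implementation, of the core mechanism the abstract describes: widening a hidden layer of a pre-trained network with extra units whose outgoing weights are zero, so that point predictions are unchanged while the enlarged parameter vector still enters the curvature a Laplace approximation uses. The architecture, the toy data, and the diagonal empirical-Fisher proxy for the Hessian are illustrative assumptions, not the paper's exact setup.

```python
# Minimal PyTorch sketch (assumed setup, not the paper's code): add "inactive"
# hidden units with zero outgoing weights to a pre-trained classifier, verify
# that predictions are unchanged, and show that the Laplace-style curvature
# (here a diagonal empirical Fisher) is nevertheless different in size/content.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Stand-in for a pre-trained, point-estimated classifier.
base = nn.Sequential(nn.Linear(2, 8), nn.ReLU(), nn.Linear(8, 3))


def add_inactive_units(model: nn.Sequential, n_extra: int) -> nn.Sequential:
    """Return a copy of `model` whose hidden layer has `n_extra` extra units
    with zero outgoing weights, so the forward pass is unchanged."""
    fc1, act, fc2 = model
    new_fc1 = nn.Linear(fc1.in_features, fc1.out_features + n_extra)
    new_fc2 = nn.Linear(fc1.out_features + n_extra, fc2.out_features)
    with torch.no_grad():
        new_fc1.weight[: fc1.out_features] = fc1.weight
        new_fc1.bias[: fc1.out_features] = fc1.bias
        new_fc2.weight[:, : fc1.out_features] = fc2.weight
        new_fc2.bias.copy_(fc2.bias)
        # Zero outgoing weights make the extra units "inactive".
        new_fc2.weight[:, fc1.out_features:] = 0.0
    return nn.Sequential(new_fc1, act, new_fc2)


aug = add_inactive_units(base, n_extra=4)

x = torch.randn(16, 2)          # toy inputs
y = torch.randint(0, 3, (16,))  # toy labels

# The extra units never reach the output, so point predictions coincide.
assert torch.allclose(base(x), aug(x), atol=1e-6)


def diag_curvature(model, x, y):
    """Diagonal empirical Fisher: a cheap stand-in for the Hessian/GGN that a
    Laplace approximation would combine with the prior precision."""
    model.zero_grad()
    F.cross_entropy(model(x), y).backward()
    grads = [
        (p.grad if p.grad is not None else torch.zeros_like(p)).reshape(-1)
        for p in model.parameters()
    ]
    return torch.cat(grads) ** 2


prior_precision = 1.0
prec_base = diag_curvature(base, x, y) + prior_precision
prec_aug = diag_curvature(aug, x, y) + prior_precision

# Same predictions, different Laplace posterior: the augmented network has more
# parameters and extra curvature entries, so its predictive uncertainty under a
# Laplace approximation can differ from the base network's.
print(prec_base.numel(), prec_aug.numel())
```

In the paper's setting, such added units are then trained with an uncertainty-aware objective while the original point estimate, and hence the predictions, stay fixed; the sketch above only illustrates why the added, inactive weights can influence the Laplace uncertainty at all.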
