Paper Title

Scalable Bayesian neural networks by layer-wise input augmentation

Paper Authors

Trung Trinh, Samuel Kaski, Markus Heinonen

Paper Abstract

We introduce implicit Bayesian neural networks, a simple and scalable approach for uncertainty representation in deep learning. The standard Bayesian approach to deep learning requires the impractical inference of a posterior distribution over millions of parameters. Instead, we propose to induce a distribution that captures the uncertainty over neural networks by augmenting each layer's inputs with latent variables. We present appropriate input distributions and demonstrate state-of-the-art performance in terms of calibration, robustness, and uncertainty characterisation on large-scale, multi-million-parameter image classification tasks.
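As a rough illustration of the layer-wise input augmentation idea described in the abstract, the sketch below shows a linear layer whose input is concatenated with a sampled latent variable, so that repeated forward passes yield a distribution over outputs. This is a minimal sketch under stated assumptions: the class name LatentAugmentedLinear, the Gaussian latent parameterisation, and the concatenation scheme are illustrative choices, not the authors' actual implementation or input distributions.

```python
import torch
import torch.nn as nn


class LatentAugmentedLinear(nn.Module):
    """Linear layer whose input is augmented with a sampled latent variable.

    Hypothetical sketch of layer-wise input augmentation: each forward pass
    draws a fresh latent sample, so predictions become stochastic and their
    spread can serve as an uncertainty estimate.
    """

    def __init__(self, in_features: int, out_features: int, latent_dim: int = 8):
        super().__init__()
        # The deterministic weights see the original features plus the latents.
        self.linear = nn.Linear(in_features + latent_dim, out_features)
        # Learnable mean and log-variance of the per-layer latent input
        # (a Gaussian choice assumed here for illustration).
        self.z_mean = nn.Parameter(torch.zeros(latent_dim))
        self.z_logvar = nn.Parameter(torch.zeros(latent_dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        batch = x.shape[0]
        # Reparameterised sample: one latent draw per example in the batch.
        eps = torch.randn(batch, self.z_mean.numel(), device=x.device)
        z = self.z_mean + torch.exp(0.5 * self.z_logvar) * eps
        # Augment the layer input with the latent sample, then apply the
        # ordinary deterministic linear map.
        return self.linear(torch.cat([x, z], dim=-1))


# Usage: average predictions over several latent draws to characterise
# uncertainty without inferring a posterior over the weights themselves.
layer = LatentAugmentedLinear(in_features=16, out_features=4)
x = torch.randn(32, 16)
samples = torch.stack([layer(x) for _ in range(10)])  # shape (10, 32, 4)
mean, std = samples.mean(dim=0), samples.std(dim=0)
```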
