Paper Title

On the Performance of Direct Loss Minimization for Bayesian Neural Networks

Authors

Yadi Wei, Roni Khardon

Abstract


Direct Loss Minimization (DLM) has been proposed as a pseudo-Bayesian method motivated as regularized loss minimization. Compared to variational inference, it replaces the loss term in the evidence lower bound (ELBO) with the predictive log loss, which is the same loss function used in evaluation. A number of theoretical and empirical results in prior work suggest that DLM can significantly improve over ELBO optimization for some models. However, as we point out in this paper, this is not the case for Bayesian neural networks (BNNs). The paper explores the practical performance of DLM for BNNs, the reasons for its failure, and its relationship to optimizing the ELBO, uncovering some interesting facts about both algorithms.
