Paper Title

Principled Pruning of Bayesian Neural Networks through Variational Free Energy Minimization

Paper Authors

Jim Beckers, Bart van Erp, Ziyue Zhao, Kirill Kondrashov, Bert de Vries

Paper Abstract

Bayesian model reduction provides an efficient approach for comparing the performance of all nested sub-models of a model, without re-evaluating any of these sub-models. Until now, Bayesian model reduction has been applied mainly in the computational neuroscience community, on relatively simple models. In this paper, we formulate and apply Bayesian model reduction to perform principled pruning of Bayesian neural networks, based on variational free energy minimization. Direct application of Bayesian model reduction, however, gives rise to approximation errors. We therefore present a novel iterative pruning algorithm that alleviates the problems arising from naive Bayesian model reduction, as supported experimentally on publicly available UCI datasets for different inference algorithms. This parameter pruning scheme addresses the shortcomings of current state-of-the-art pruning methods used by the signal processing community: the proposed approach has a clear stopping criterion and minimizes the same objective that is used during training. In addition to these benefits, our experiments indicate better model performance in comparison to state-of-the-art pruning schemes.
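The abstract rests on the Bayesian model reduction identity, which scores every nested sub-model from a single fitted posterior by evaluating how the variational bound changes under a reduced prior. As a minimal sketch (not the authors' implementation), assuming a fully factorized Gaussian posterior and Gaussian priors, the closed-form score below evaluates the change in the bound when one weight's prior is collapsed toward a near-spike at zero; the function name `log_evidence_change`, the spike variance `var_r`, and the example numbers are all illustrative assumptions.

```python
import numpy as np

def log_evidence_change(mu_q, var_q, mu_p, var_p, mu_r=0.0, var_r=1e-6):
    """Change in the variational bound when the Gaussian prior N(mu_p, var_p)
    of one weight is replaced by a reduced prior N(mu_r, var_r), given the
    fitted Gaussian posterior N(mu_q, var_q).

    Uses the standard Bayesian-model-reduction identity
        delta = ln E_q[ p_reduced(w) / p(w) ],
    in the convention where a larger value favors the reduced (pruned) model.
    """
    lq, lp, lr = 1.0 / var_q, 1.0 / var_p, 1.0 / var_r
    l_red = lq + lr - lp                # precision of the reduced posterior
    if l_red <= 0:                      # reduction is not well defined here
        return -np.inf
    eta = lq * mu_q + lr * mu_r - lp * mu_p  # precision-weighted reduced mean
    return 0.5 * (np.log(lq * lr / (lp * l_red))
                  + eta**2 / l_red
                  - (lq * mu_q**2 + lr * mu_r**2 - lp * mu_p**2))

# Hypothetical usage: score each weight of a mean-field Gaussian BNN under a
# N(0, 1) prior. A positive score means collapsing that weight's prior to a
# near-spike at zero *increases* the bound, so the weight is a pruning
# candidate; an iterative scheme would prune a few, re-fit, and re-score.
mu_q = np.array([0.02, -1.30, 0.50])
var_q = np.array([0.10, 0.01, 0.05])
scores = [log_evidence_change(m, v, mu_p=0.0, var_p=1.0)
          for m, v in zip(mu_q, var_q)]
print(scores)  # the near-zero weight scores positive; the confident -1.30
               # weight scores strongly negative and should be kept
```

The sanity check built into this identity: if the reduced prior equals the original prior, the score is exactly zero, so only genuinely informative reductions move the bound.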
