Title

The Reciprocal Bayesian LASSO

Authors

Himel Mallick, Rahim Alhamzawi, Erina Paul, Vladimir Svetnik

Abstract


A reciprocal LASSO (rLASSO) regularization employs a decreasing penalty function as opposed to conventional penalization approaches that use increasing penalties on the coefficients, leading to stronger parsimony and superior model selection relative to traditional shrinkage methods. Here we consider a fully Bayesian formulation of the rLASSO problem, which is based on the observation that the rLASSO estimate for linear regression parameters can be interpreted as a Bayesian posterior mode estimate when the regression parameters are assigned independent inverse Laplace priors. Bayesian inference from this posterior is possible using an expanded hierarchy motivated by a scale mixture of double Pareto or truncated normal distributions. On simulated and real datasets, we show that the Bayesian formulation outperforms its classical cousin in estimation, prediction, and variable selection across a wide range of scenarios while offering the advantage of posterior inference. Finally, we discuss other variants of this new approach and provide a unified framework for variable selection using flexible reciprocal penalties. All methods described in this paper are publicly available as an R package at: https://github.com/himelmallick/BayesRecipe.
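To make the "decreasing penalty" idea concrete, the sketch below contrasts the ordinary LASSO penalty, which grows linearly in |β|, with a reciprocal penalty of the form λ/|β| for nonzero coefficients (the form used in the reciprocal LASSO literature; this is an illustrative sketch, not the authors' implementation, and the zero-coefficient convention of contributing no penalty is an assumption of the illustration):

```python
import numpy as np

def lasso_penalty(beta, lam=1.0):
    # Ordinary LASSO: penalty increases with |beta|.
    return lam * np.abs(np.asarray(beta, dtype=float))

def rlasso_penalty(beta, lam=1.0):
    # Reciprocal LASSO sketch: penalty lam / |beta| for beta != 0,
    # so the penalty *decreases* as |beta| grows. Small nonzero
    # coefficients are penalized heavily, which is the source of the
    # stronger parsimony described in the abstract.
    beta = np.asarray(beta, dtype=float)
    # Substitute 1.0 where beta == 0 to avoid dividing by zero; those
    # entries are then overwritten with a zero penalty (convention:
    # an exactly-zero coefficient drops out of the model).
    safe = np.where(beta == 0.0, 1.0, np.abs(beta))
    return np.where(beta == 0.0, 0.0, lam / safe)

betas = np.array([0.0, 0.1, 1.0, 10.0])
print(lasso_penalty(betas))   # increases with |beta|
print(rlasso_penalty(betas))  # decreases with |beta|; 0 at exact zero
```

Note how the two penalties order the same coefficients in opposite ways: under rLASSO a coefficient of 0.1 costs far more than a coefficient of 10, discouraging weak signals from entering the model at all.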
