Paper Title

Robust priors for regularized regression

Authors

Sebastian Bobadilla-Suarez, Matt Jones, Bradley C. Love

Abstract

Induction benefits from useful priors. Penalized regression approaches, like ridge regression, shrink weights toward zero but zero association is usually not a sensible prior. Inspired by simple and robust decision heuristics humans use, we constructed non-zero priors for penalized regression models that provide robust and interpretable solutions across several tasks. Our approach enables estimates from a constrained model to serve as a prior for a more general model, yielding a principled way to interpolate between models of differing complexity. We successfully applied this approach to a number of decision and classification problems, as well as analyzing simulated brain imaging data. Models with robust priors had excellent worst-case performance. Solutions followed from the form of the heuristic that was used to derive the prior. These new algorithms can serve applications in data analysis and machine learning, as well as help in understanding how people transition from novice to expert performance.
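To make the core idea concrete, here is a minimal sketch (not the authors' released code) of ridge regression that shrinks weights toward a non-zero prior vector w0 rather than toward zero. The tallying-style prior used below (equal unit weights, signed by each cue's correlation with the outcome) is an illustrative assumption, not necessarily the exact construction used in the paper.

```python
# Minimal sketch (illustrative, not the authors' code): ridge regression
# that shrinks weights toward a non-zero prior vector w0 instead of zero.
# Objective: ||y - X w||^2 + lam * ||w - w0||^2, solved in closed form.
import numpy as np


def ridge_with_prior(X, y, w0, lam=1.0):
    """Ridge solution shrunk toward prior w0 (w0 = 0 recovers standard ridge)."""
    d = X.shape[1]
    A = X.T @ X + lam * np.eye(d)
    b = X.T @ y + lam * w0
    return np.linalg.solve(A, b)


# Example prior in the spirit of a tallying heuristic: equal unit weights,
# signed by each cue's correlation with the outcome (an assumption made
# here for illustration only).
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 4))
true_w = np.array([1.0, 0.8, -0.5, 0.2])
y = X @ true_w + rng.normal(scale=0.5, size=50)

w0 = np.sign(np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])]))
w_hat = ridge_with_prior(X, y, w0, lam=2.0)
print("heuristic prior:", w0)
print("estimated weights:", w_hat)
```

In this sketch the regularization strength lam plays the interpolating role described in the abstract: a large lam pins the solution to the heuristic-derived prior (the constrained model), while a small lam lets the more general, data-driven model dominate.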
