Paper Title

Cross-validation Confidence Intervals for Test Error

Authors

Pierre Bayle, Alexandre Bayle, Lucas Janson, Lester Mackey

Abstract

This work develops central limit theorems for cross-validation and consistent estimators of its asymptotic variance under weak stability conditions on the learning algorithm. Together, these results provide practical, asymptotically-exact confidence intervals for $k$-fold test error and valid, powerful hypothesis tests of whether one learning algorithm has smaller $k$-fold test error than another. These results are also the first of their kind for the popular choice of leave-one-out cross-validation. In our real-data experiments with diverse learning algorithms, the resulting intervals and tests outperform the most popular alternative methods from the literature.
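To make the setting concrete, here is a minimal numpy-only sketch of the kind of interval the abstract describes: it computes per-example $k$-fold test losses and forms a normal-approximation confidence interval for the $k$-fold test error from their sample mean and sample standard deviation. The learner (least-squares regression), the squared-error loss, and the function names (`kfold_ci`, `fit`, `predict`) are illustrative assumptions, not the paper's exact estimator or stability conditions.

```python
import numpy as np

def kfold_ci(X, y, fit, predict, loss, k=5, z=1.96, seed=0):
    """Sketch of a k-fold CV test-error estimate with a normal-approximation
    confidence interval built from the per-example losses (illustrative only)."""
    n = len(y)
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n)
    folds = np.array_split(idx, k)
    losses = np.empty(n)
    for test_idx in folds:
        train_idx = np.setdiff1d(idx, test_idx)   # complement of the held-out fold
        model = fit(X[train_idx], y[train_idx])
        losses[test_idx] = loss(predict(model, X[test_idx]), y[test_idx])
    err = losses.mean()                            # k-fold test-error estimate
    half = z * losses.std(ddof=1) / np.sqrt(n)     # asymptotic half-width
    return err, (err - half, err + half)

# Illustrative learner: ordinary least squares via lstsq, squared-error loss.
fit = lambda Xtr, ytr: np.linalg.lstsq(Xtr, ytr, rcond=None)[0]
predict = lambda w, Xte: Xte @ w
loss = lambda pred, ytrue: (pred - ytrue) ** 2

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.0, 2.0, 3.0]) + rng.normal(size=200)  # noise variance 1
err, (lo, hi) = kfold_ci(X, y, fit, predict, loss, k=5)
print(f"5-fold MSE {err:.3f}, 95% CI ({lo:.3f}, {hi:.3f})")
```

With unit noise variance, the estimated test MSE should land near 1 and the printed interval should cover it; comparing two learning algorithms, as in the paper's hypothesis tests, would instead apply the same construction to the per-example loss differences.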
