Paper Title
Robust Sampling in Deep Learning
Paper Authors
Abstract
Deep learning requires regularization mechanisms to reduce overfitting and improve generalization. We address this problem with a new regularization method based on distributionally robust optimization. The key idea is to modify the contribution of each sample so as to tighten the empirical risk bound. During stochastic training, samples are selected according to their accuracy, such that the worst-performing samples are the ones that contribute the most to the optimization. We study different scenarios and identify those in which the method makes convergence faster or improves accuracy.
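The abstract does not specify the exact sample-weighting scheme derived from the distributionally robust objective, but the selection rule it describes — letting the worst-performing samples dominate each stochastic update — can be illustrated with a minimal NumPy sketch. Everything here (the function names `worst_k_indices` and `sgd_step_on_worst`, the least-squares toy problem, the choice of `k` and the learning rate) is a hypothetical proxy for exposition, not the paper's implementation.

```python
import numpy as np

def worst_k_indices(losses, k):
    """Return the indices of the k samples with the highest loss.

    A generic hard-example selection rule; the paper's DRO-derived
    weighting is not given in the abstract, so this stands in for it.
    """
    return np.argsort(losses)[-k:]

def sgd_step_on_worst(w, X, y, lr=0.05, k=4):
    """One SGD step for least-squares regression that uses only the
    k worst-performing samples of the mini-batch."""
    residuals = X @ w - y
    losses = residuals ** 2                 # per-sample squared error
    idx = worst_k_indices(losses, k)        # hardest samples this step
    grad = 2.0 * X[idx].T @ residuals[idx] / k
    return w - lr * grad

# Toy usage: recover y = 2x from noise-free synthetic data.
rng = np.random.default_rng(0)
X = rng.normal(size=(32, 1))
y = 2.0 * X[:, 0]
w = np.zeros(1)
for _ in range(200):
    w = sgd_step_on_worst(w, X, y)
```

On this noise-free toy problem the residual of every sample vanishes at the true weight, so updating on only the hardest samples still converges; with label noise, the same rule would chase the noisiest points, which is one reason the choice of weighting matters in the scenarios the abstract mentions.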