Paper Title
Attentional-Biased Stochastic Gradient Descent
Paper Authors
Paper Abstract
In this paper, we present a simple yet effective provable method (named ABSGD) for addressing the data imbalance or label noise problem in deep learning. Our method is a simple modification to momentum SGD where we assign an individual importance weight to each sample in the mini-batch. The individual-level weight of sampled data is systematically proportional to the exponential of a scaled loss value of the data, where the scaling factor is interpreted as the regularization parameter in the framework of distributionally robust optimization (DRO). Depending on whether the scaling factor is positive or negative, ABSGD is guaranteed to converge to a stationary point of an information-regularized min-max or min-min DRO problem, respectively. Compared with existing class-level weighting schemes, our method can capture the diversity between individual examples within each class. Compared with existing individual-level weighting methods using meta-learning that require three backward propagations for computing mini-batch stochastic gradients, our method is more efficient with only one backward propagation at each iteration as in standard deep learning methods. ABSGD is flexible enough to combine with other robust losses without any additional cost. Our empirical studies on several benchmark datasets demonstrate the effectiveness of the proposed method. Code is available at: https://github.com/qiqi-helloworld/ABSGD/
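
To make the per-sample weighting concrete, below is a minimal PyTorch-style sketch of one training step, written from the abstract's description rather than from the authors' released code (see the repository above for that). The function name absgd_step, the scaling factor lam, and the softmax normalization of the weights over the mini-batch are illustrative assumptions, not the paper's exact formulation.

import torch
import torch.nn as nn

def absgd_step(model, optimizer, x, y, lam=5.0):
    # Per-sample losses for the mini-batch (no reduction).
    criterion = nn.CrossEntropyLoss(reduction="none")
    losses = criterion(model(x), y)  # shape: [batch_size]
    with torch.no_grad():
        # Weight of each example is proportional to the exponential of its
        # scaled loss, normalized over the mini-batch.
        # lam > 0 emphasizes large-loss examples (min-max DRO view);
        # lam < 0 down-weights them (min-min DRO view, e.g. under label noise).
        weights = torch.softmax(losses / lam, dim=0)
    loss = (weights * losses).sum()  # weighted mini-batch loss
    optimizer.zero_grad()
    loss.backward()   # a single backward propagation, as in standard training
    optimizer.step()  # optimizer can be plain momentum SGD
    return loss.item()

In this sketch, the only change relative to an ordinary momentum-SGD step is the re-weighting of the per-sample losses before the single backward pass, which is what keeps the per-iteration cost the same as standard deep learning training.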