Paper Title

Worst Case Matters for Few-Shot Recognition

Paper Authors

Minghao Fu, Yun-Hao Cao, Jianxin Wu

Paper Abstract

Few-shot recognition learns a recognition model with very few (e.g., 1 or 5) images per category, and current few-shot learning methods focus on improving the average accuracy over many episodes. We argue that in real-world applications we may often only try one episode instead of many, and hence maximizing the worst-case accuracy is more important than maximizing the average accuracy. We empirically show that a high average accuracy does not necessarily mean a high worst-case accuracy. Since this objective is not accessible, we propose to reduce the standard deviation and increase the average accuracy simultaneously. In turn, we devise two strategies from the bias-variance tradeoff perspective to implicitly reach this goal: a simple yet effective stability regularization (SR) loss together with model ensemble to reduce variance during fine-tuning, and an adaptability calibration mechanism to reduce the bias. Extensive experiments on benchmark datasets demonstrate the effectiveness of the proposed strategies, which outperform current state-of-the-art methods by a significant margin in terms of not only average, but also worst-case accuracy. Our code is available at https://github.com/heekhero/ACSR.
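The abstract does not spell out the form of the SR loss; as a rough illustration of a variance-reducing regularizer during fine-tuning, the sketch below (PyTorch) assumes an SR-style term that penalizes how far the fine-tuned backbone's features drift from a frozen copy of the pre-trained backbone on the few support images. The names (`finetune_step`, `backbone`, `frozen_backbone`, `classifier`, `sr_weight`) and the MSE form are illustrative assumptions, not the paper's implementation; the exact loss is defined in the paper and the linked repository.

```python
# Hypothetical sketch of a stability-style regularizer (not the paper's exact SR loss):
# cross-entropy on the support labels plus a penalty that keeps the fine-tuned
# features close to those of a frozen pre-trained backbone.
import torch
import torch.nn.functional as F

def finetune_step(backbone, frozen_backbone, classifier, images, labels, sr_weight=1.0):
    """One fine-tuning step: cross-entropy + feature-stability penalty."""
    feats = backbone(images)                 # features from the model being fine-tuned
    with torch.no_grad():
        ref_feats = frozen_backbone(images)  # reference features from the frozen pre-trained model
    ce_loss = F.cross_entropy(classifier(feats), labels)
    sr_loss = F.mse_loss(feats, ref_feats)   # discourage large feature drift (variance reduction)
    return ce_loss + sr_weight * sr_loss
```

Here `sr_weight` trades off fitting the few labeled images against stability; in a real setup it would need to be tuned per benchmark, and variance can be reduced further by ensembling several fine-tuned models as the abstract describes.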
