Paper Title

Revisiting Batch Normalization for Improving Corruption Robustness

Paper Authors

Philipp Benz, Chaoning Zhang, Adil Karjauv, In So Kweon

Paper Abstract

The performance of DNNs trained on clean images has been shown to decrease when the test images have common corruptions. In this work, we interpret corruption robustness as a domain shift and propose to rectify batch normalization (BN) statistics for improving model robustness. This is motivated by perceiving the shift from the clean domain to the corruption domain as a style shift that is represented by the BN statistics. We find that simply estimating and adapting the BN statistics on a few (32 for instance) representation samples, without retraining the model, improves the corruption robustness by a large margin on several benchmark datasets with a wide range of model architectures. For example, on ImageNet-C, statistics adaptation improves the top1 accuracy of ResNet50 from 39.2% to 48.7%. Moreover, we find that this technique can further improve state-of-the-art robust models from 58.1% to 63.3%.
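
The abstract describes re-estimating batch normalization (BN) statistics on a small batch of corrupted samples, with no weight updates, before evaluation. Below is a minimal PyTorch sketch of that idea; it is not the authors' released code. The ResNet50 weights enum, the 32-sample random placeholder batch, and the momentum=None cumulative-average setting are illustrative assumptions.

    import torch
    from torchvision import models

    # Load a standard ImageNet-pretrained ResNet50 (placeholder for any trained model).
    model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)

    # Reset the running BN statistics so they are re-estimated from scratch,
    # and use a cumulative moving average over the adaptation batches.
    for m in model.modules():
        if isinstance(m, torch.nn.BatchNorm2d):
            m.reset_running_stats()
            m.momentum = None  # cumulative average instead of exponential moving average

    # Re-estimate BN statistics on a few corrupted samples: train mode lets BN
    # update its running mean/variance, but no gradients or weight updates occur.
    model.train()
    with torch.no_grad():
        corrupted_batch = torch.randn(32, 3, 224, 224)  # placeholder for real ImageNet-C images
        model(corrupted_batch)

    # Evaluate on the corrupted test set using the adapted statistics.
    model.eval()

The key point illustrated here is that only the BN running statistics change; all learned weights stay fixed, which is why the adaptation needs only a handful of samples and no retraining.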
