Paper Title
Improving deep neural network generalization and robustness to background bias via layer-wise relevance propagation optimization
Paper Authors
Paper Abstract
Features in images' backgrounds can spuriously correlate with the images' classes, representing background bias. They can influence the classifier's decisions, causing shortcut learning (the Clever Hans effect). This phenomenon produces deep neural networks (DNNs) that perform well on standard evaluation datasets but generalize poorly to real-world data. Layer-wise Relevance Propagation (LRP) explains DNNs' decisions. Here, we show that optimizing LRP heatmaps can minimize the influence of background bias on deep classifiers, hindering shortcut learning. The approach is light and fast, as it adds no run-time computational cost, and it applies to virtually any classification architecture. After injecting synthetic bias into images' backgrounds, we compared our approach (dubbed ISNet) to eight state-of-the-art DNNs, quantitatively demonstrating its superior robustness to background bias. Mixed datasets are common for COVID-19 and tuberculosis classification with chest X-rays, fostering background bias. By focusing on the lungs, the ISNet reduced shortcut learning. Thus, its generalization performance on external (out-of-distribution) test databases significantly surpassed that of all implemented benchmark models.
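The core idea sketched below is heatmap-loss training in its simplest form; this is a hedged toy illustration, not the authors' ISNet implementation. For a single linear layer, LRP-0 assigns each input feature a relevance equal to its contribution to the logit, x_i * w_i. A heatmap loss can then penalize relevance falling outside a foreground mask (e.g., a lung segmentation), discouraging the classifier from relying on background features. The function names, weights, and mask here are hypothetical.

```python
def relevance(x, w):
    """LRP-0 relevance for a toy linear model: each feature's
    contribution x_i * w_i to the output logit."""
    return [xi * wi for xi, wi in zip(x, w)]

def background_relevance_loss(x, w, mask):
    """Heatmap-style loss: sum of |relevance| on background
    positions (mask == 0). Minimizing it during training would
    push the model to ignore background features."""
    return sum(abs(r) for r, m in zip(relevance(x, w), mask) if m == 0)

# Toy "image": first two features are foreground (mask 1),
# last two are background (mask 0).
x = [1.0, 2.0, 3.0, 4.0]
w = [0.5, 0.5, 0.1, 0.2]   # hypothetical weights, nonzero on background
mask = [1, 1, 0, 0]

loss = background_relevance_loss(x, w, mask)  # |3*0.1| + |4*0.2| = 1.1
```

In practice, a term like this would be added to the classification loss (weighted by a hyperparameter), with relevance computed by propagating LRP through the full network rather than a single linear layer.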