Paper Title

Regularizing Neural Network Training via Identity-wise Discriminative Feature Suppression

Paper Authors

Avraham Chapman, Lingqiao Liu

Paper Abstract

It is well-known that a deep neural network has a strong fitting capability and can easily achieve a low training error even with randomly assigned class labels. When the number of training samples is small, or the class labels are noisy, networks tend to memorize patterns specific to individual instances to minimize the training error. This leads to the issue of overfitting and poor generalisation performance. This paper explores a remedy by suppressing the network's tendency to rely on instance-specific patterns for empirical error minimisation. The proposed method is based on an adversarial training framework. It suppresses features that can be utilized to identify individual instances among samples within each class. This leads to classifiers only using features that are both discriminative across classes and common within each class. We call our method Adversarial Suppression of Identity Features (ASIF), and demonstrate the usefulness of this technique in boosting generalisation accuracy when faced with small datasets or noisy labels. Our source code is available.
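The adversarial idea described in the abstract can be sketched in a minimal toy form: an encoder is trained to minimize the class-level loss while an auxiliary identity head tries to tell individual training instances apart, and the encoder receives that head's gradient with its sign reversed, so identity-revealing features are suppressed. The sketch below is a hypothetical illustration under stated assumptions (a linear encoder, plain numpy, tiny random data, and arbitrarily chosen sizes and learning rates); it is not the authors' ASIF implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Toy batch: 8 samples, 4 input dims, 2 classes.
# Each sample also carries a unique instance identity (its index).
X = rng.normal(size=(8, 4))
y_class = np.array([0, 0, 0, 0, 1, 1, 1, 1])
y_id = np.arange(8)

d_feat = 3
W_enc = rng.normal(size=(4, d_feat)) * 0.1   # linear "encoder" (assumed)
W_cls = rng.normal(size=(d_feat, 2)) * 0.1   # class head
W_id = rng.normal(size=(d_feat, 8)) * 0.1    # identity discriminator head
lam, lr = 0.5, 0.1                           # adversarial weight, step size (assumed)

for step in range(200):
    F = X @ W_enc                        # shared features
    p_cls = softmax(F @ W_cls)           # class predictions
    p_id = softmax(F @ W_id)             # instance-identity predictions

    # Cross-entropy gradients w.r.t. the two heads' logits
    g_cls = (p_cls - np.eye(2)[y_class]) / len(y_class)
    g_id = (p_id - np.eye(8)[y_id]) / len(y_id)

    # Encoder gradient: follow the class loss, REVERSE the identity loss,
    # so features useful for telling instances apart are suppressed.
    g_F = g_cls @ W_cls.T - lam * (g_id @ W_id.T)

    W_cls -= lr * F.T @ g_cls            # class head minimizes class loss
    W_id -= lr * F.T @ g_id              # identity head minimizes identity loss
    W_enc -= lr * X.T @ g_F              # encoder plays the adversarial game

F = X @ W_enc
cls_acc = (softmax(F @ W_cls).argmax(1) == y_class).mean()
id_acc = (softmax(F @ W_id).argmax(1) == y_id).mean()
print("class accuracy:", cls_acc, "identity accuracy:", id_acc)
```

In a full implementation this sign flip is usually realized with a gradient-reversal layer inside an automatic-differentiation framework, applied to deep features rather than a single linear map; the toy above only makes the opposing objectives of the two heads explicit.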
