Paper Title

SRoUDA: Meta Self-training for Robust Unsupervised Domain Adaptation

Authors

Wanqing Zhu, Jia-Li Yin, Bo-Hao Chen, Ximeng Liu

Abstract

As acquiring manual labels on data can be costly, unsupervised domain adaptation (UDA), which transfers knowledge learned from a richly labeled dataset to an unlabeled target dataset, is gaining increasing popularity. While extensive studies have been devoted to improving model accuracy on the target domain, the important issue of model robustness has been neglected. To make things worse, conventional adversarial training (AT) methods for improving model robustness are inapplicable under the UDA scenario, since they train models on adversarial examples generated by a supervised loss function. In this paper, we present a new meta self-training pipeline, named SRoUDA, for improving the adversarial robustness of UDA models. Based on the self-training paradigm, SRoUDA starts by pre-training a source model, applying a UDA baseline on source labeled data and target unlabeled data with a developed random masked augmentation (RMA), and then alternates between adversarial target model training on pseudo-labeled target data and finetuning the source model by a meta step. While self-training allows the direct incorporation of AT into UDA, the meta step in SRoUDA further helps mitigate error propagation from noisy pseudo labels. Extensive experiments on various benchmark datasets demonstrate the state-of-the-art performance of SRoUDA, where it achieves significant model robustness improvement without harming clean accuracy. Code is available at https://github.com/Vision.
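The abstract describes an alternating procedure: pseudo-label the target data with the source model, adversarially train the target model on those pseudo labels, and then finetune the source model via a meta step. Below is a minimal PyTorch-style sketch of one such alternation step, assuming hypothetical `source_model` / `target_model` networks, their optimizers, and a batch of unlabeled target images `x_tgt`; the meta step is replaced by a simplified first-order proxy (aligning the source model with the updated target model), so this illustrates the general idea rather than the authors' reference implementation.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Standard L_inf PGD on the cross-entropy loss w.r.t. (pseudo) labels y."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.max(torch.min(x_adv, x + eps), x - eps).clamp(0, 1)
    return x_adv.detach()

def alternating_step(source_model, target_model, opt_tgt, opt_src, x_tgt):
    # 1) Pseudo-label the unlabeled target batch with the source model.
    with torch.no_grad():
        pseudo = source_model(x_tgt).argmax(dim=1)

    # 2) Adversarial training of the target model on the pseudo-labeled batch.
    x_adv = pgd_attack(target_model, x_tgt, pseudo)
    at_loss = F.cross_entropy(target_model(x_adv), pseudo)
    opt_tgt.zero_grad()
    at_loss.backward()
    opt_tgt.step()

    # 3) Simplified stand-in for the meta step: finetune the source model so its
    #    pseudo labels agree with the updated target model, which is meant to
    #    limit error propagation from noisy pseudo labels.
    with torch.no_grad():
        tgt_pred = target_model(x_tgt).argmax(dim=1)
    meta_loss = F.cross_entropy(source_model(x_tgt), tgt_pred)
    opt_src.zero_grad()
    meta_loss.backward()
    opt_src.step()

    return at_loss.item(), meta_loss.item()
```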
