Paper Title

Cross-Domain Ensemble Distillation for Domain Generalization

Paper Authors

Kyungmoon Lee, Sungyeon Kim, Suha Kwak

Paper Abstract

Domain generalization is the task of learning models that generalize to unseen target domains. We propose a simple yet effective method for domain generalization, named cross-domain ensemble distillation (XDED), that learns domain-invariant features while encouraging the model to converge to flat minima, which recently turned out to be a sufficient condition for domain generalization. To this end, our method generates an ensemble of the output logits from training data with the same label but from different domains and then penalizes each output for the mismatch with the ensemble. Also, we present a de-stylization technique that standardizes features to encourage the model to produce style-consistent predictions even in an arbitrary target domain. Our method greatly improves generalization capability in public benchmarks for cross-domain image classification, cross-dataset person re-ID, and cross-dataset semantic segmentation. Moreover, we show that models learned by our method are robust against adversarial attacks and image corruptions.
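The core XDED objective described above lends itself to a short sketch. The PyTorch-style snippet below is a minimal illustration, not the paper's reference implementation: the function name xded_loss, the temperature tau, and the KL-divergence form of the mismatch penalty are assumptions made for this example. For each class present in a mini-batch, it averages the softened outputs of all samples with that label (drawn from different source domains) into an ensemble teacher, then pulls each sample's output toward it:

```python
import torch
import torch.nn.functional as F

def xded_loss(logits: torch.Tensor, labels: torch.Tensor, tau: float = 4.0) -> torch.Tensor:
    """Cross-domain ensemble distillation loss (illustrative sketch).

    logits: (N, C) classifier outputs for a batch mixing several source domains.
    labels: (N,) integer class labels.
    tau:    softmax temperature for distillation (assumed hyperparameter).
    """
    log_p = F.log_softmax(logits / tau, dim=1)  # softened log-probabilities (student side)
    p = log_p.exp()                             # softened probabilities
    loss = logits.new_zeros(())
    n_classes = 0
    for c in labels.unique():
        idx = (labels == c).nonzero(as_tuple=True)[0]
        if idx.numel() < 2:                     # need >= 2 samples to form an ensemble
            continue
        # Class-wise ensemble of softened outputs acts as the teacher; it is
        # detached so gradients flow only through the individual predictions.
        teacher = p[idx].mean(dim=0).detach()
        loss = loss + F.kl_div(log_p[idx],
                               teacher.expand(idx.numel(), -1),
                               reduction="batchmean")
        n_classes += 1
    return loss / max(n_classes, 1)
```

In training, this term would be added to the usual cross-entropy loss with some weighting; since the teacher is just the batch-level ensemble, no separate teacher network is needed.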
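The de-stylization step can be sketched in a similar spirit. The abstract says only that features are standardized; the snippet below assumes this means removing instance-level channel statistics (the mean and standard deviation commonly treated as image style), and the function name destylize is hypothetical:

```python
import torch

def destylize(feat: torch.Tensor, eps: float = 1e-5) -> torch.Tensor:
    """Standardize feature maps by their own style statistics (illustrative sketch).

    feat: (N, C, H, W) intermediate feature maps from a CNN backbone.
    Removing per-sample, per-channel mean and std (instance-normalization
    statistics) strips style cues so downstream layers see style-free features.
    """
    mu = feat.mean(dim=(2, 3), keepdim=True)
    sigma = feat.var(dim=(2, 3), keepdim=True, unbiased=False).add(eps).sqrt()
    return (feat - mu) / sigma
```

Applied inside the backbone, such a standardization encourages style-consistent predictions even on an arbitrary, unseen target domain, which is the stated goal of the paper's de-stylization technique.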
