Paper Title
Towards Optimization and Model Selection for Domain Generalization: A Mixup-guided Solution
Paper Authors
Paper Abstract
Distribution shifts between training and test data typically undermine model performance. In recent years, much work has focused on domain generalization (DG), where distribution shifts exist and the target data are unseen. Despite progress in algorithm design, two foundational factors have long been overlooked: 1) the optimization of regularization-based objectives, and 2) model selection for DG, since no knowledge about the target domain can be utilized. In this paper, we propose Mixup-guided optimization and selection techniques for DG. For optimization, we utilize an adapted Mixup to generate an out-of-distribution dataset that guides the preference direction, and we optimize with Pareto optimization. For model selection, we generate a validation dataset that lies closer to the target distribution and can thereby better represent the target data. We also present theoretical insights behind our proposals. Comprehensive experiments demonstrate that our optimization and selection techniques can substantially improve the performance of existing domain generalization algorithms and even achieve new state-of-the-art results.
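Since the abstract only sketches the idea, the snippet below is a minimal, hedged illustration of how standard Mixup-style interpolation across source domains could produce such a pseudo out-of-distribution (or validation) set. The function name mixup_across_domains, the Beta parameter alpha, and the NumPy setup are illustrative assumptions; the paper's adapted Mixup may differ in its details.

```python
import numpy as np

def mixup_across_domains(x_a, y_a, x_b, y_b, alpha=0.2, rng=None):
    """Interpolate paired samples from two source domains with standard Mixup.

    x_a, x_b: arrays of shape (n, ...) holding examples from two domains.
    y_a, y_b: one-hot label arrays of shape (n, num_classes).
    Returns mixed inputs and soft labels lying between the two domains.
    """
    rng = np.random.default_rng() if rng is None else rng
    lam = rng.beta(alpha, alpha, size=len(x_a))           # one mixing ratio per pair
    lam_x = lam.reshape(-1, *([1] * (x_a.ndim - 1)))      # broadcast over feature dims
    x_mix = lam_x * x_a + (1.0 - lam_x) * x_b
    y_mix = lam[:, None] * y_a + (1.0 - lam[:, None]) * y_b
    return x_mix, y_mix

# Usage sketch (synthetic data): build a pseudo out-of-distribution or
# validation set by mixing examples drawn from two different source domains.
if __name__ == "__main__":
    n, d, c = 32, 16, 7
    rng = np.random.default_rng(0)
    x_a, x_b = rng.normal(size=(n, d)), rng.normal(size=(n, d))
    y_a = np.eye(c)[rng.integers(c, size=n)]
    y_b = np.eye(c)[rng.integers(c, size=n)]
    x_val, y_val = mixup_across_domains(x_a, y_a, x_b, y_b, alpha=0.2, rng=rng)
    print(x_val.shape, y_val.shape)  # (32, 16) (32, 7)
```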