Paper Title

Deep Multimodal Fusion for Generalizable Person Re-identification

Authors

Suncheng Xiang, Hao Chen, Wei Ran, Zefang Yu, Ting Liu, Dahong Qian, Yuzhuo Fu

Abstract

Person re-identification plays a significant role in realistic scenarios due to its various applications in public security and video surveillance. Recently, supervised or semi-supervised learning paradigms, which benefit from large-scale datasets and strong computing power, have achieved competitive performance on specific target domains. However, when Re-ID models are directly deployed in a new domain without target samples, they always suffer from considerable performance degradation and poor domain generalization. To address this challenge, we propose a Deep Multimodal Fusion network to elaborate rich semantic knowledge for assisting representation learning during pre-training. Importantly, a multimodal fusion strategy is introduced to translate the features of different modalities into a common space, which can significantly boost the generalization capability of the Re-ID model. In the fine-tuning stage, a realistic dataset is adopted to fine-tune the pre-trained model for better distribution alignment with real-world data. Comprehensive experiments on benchmarks demonstrate that our method can significantly outperform previous domain generalization and meta-learning methods by a clear margin. Our source code will also be publicly available at https://github.com/JeremyXSC/DMF.
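The core idea in the abstract is projecting features from different modalities into a common space before fusing them into one Re-ID representation. Below is a minimal PyTorch sketch of that idea; the module name, feature dimensions, and concatenation-based fusion head are illustrative assumptions, not the authors' released implementation (see the GitHub link above for that).

```python
# Minimal sketch of multimodal fusion into a common space.
# All names and dimensions are hypothetical, not from the DMF repo.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultimodalFusion(nn.Module):
    def __init__(self, img_dim=2048, txt_dim=768, common_dim=512):
        super().__init__()
        # Modality-specific projections into the shared space.
        self.img_proj = nn.Linear(img_dim, common_dim)
        self.txt_proj = nn.Linear(txt_dim, common_dim)
        # Simple learned fusion of the aligned features.
        self.fusion = nn.Linear(2 * common_dim, common_dim)

    def forward(self, img_feat, txt_feat):
        # Project each modality and L2-normalize so both live on a
        # comparable scale in the common space.
        img_emb = F.normalize(self.img_proj(img_feat), dim=-1)
        txt_emb = F.normalize(self.txt_proj(txt_feat), dim=-1)
        # Concatenate the aligned features and fuse them into a
        # single representation used for Re-ID matching.
        return self.fusion(torch.cat([img_emb, txt_emb], dim=-1))

# Usage: a batch of 4 image features (e.g., ResNet-50 pooled)
# and 4 semantic/text features.
model = MultimodalFusion()
img = torch.randn(4, 2048)
txt = torch.randn(4, 768)
print(model(img, txt).shape)  # torch.Size([4, 512])
```

Normalizing each modality before fusion is one simple way to keep the two feature distributions comparable; attention-based fusion would be a natural alternative under the same common-space framing.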
