Paper Title

Learning Shape Representations for Clothing Variations in Person Re-Identification

Paper Authors

Yu-Jhe Li, Zhengyi Luo, Xinshuo Weng, Kris M. Kitani

Paper Abstract

Person re-identification (re-ID) aims to recognize instances of the same person contained in multiple images taken across different cameras. Existing methods for re-ID tend to rely heavily on the assumption that both query and gallery images of the same person have the same clothing. Unfortunately, this assumption may not hold for datasets captured over long periods of time (e.g., weeks, months or years). To tackle the re-ID problem in the context of clothing changes, we propose a novel representation learning model which is able to generate a body shape feature representation without being affected by clothing color or patterns. We call our model the Color Agnostic Shape Extraction Network (CASE-Net). CASE-Net learns a representation of identity that depends only on body shape via adversarial learning and feature disentanglement. Due to the lack of large-scale re-ID datasets which contain clothing changes for the same person, we propose two synthetic datasets for evaluation. We create a rendered dataset SMPL-reID with different clothes patterns and a synthesized dataset Div-Market with different clothing color to simulate two types of clothing changes. The quantitative and qualitative results across 5 datasets (SMPL-reID, Div-Market, two benchmark re-ID datasets, a cross-modality re-ID dataset) confirm the robustness and superiority of our approach against several state-of-the-art approaches.
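
The abstract describes learning a color-agnostic identity representation via adversarial learning and feature disentanglement, but no implementation details are given on this page. As a minimal illustrative sketch of that general technique (not the authors' actual CASE-Net), the PyTorch snippet below pairs an identity classifier with a clothing-color adversary placed behind a gradient-reversal layer, so the encoder is rewarded for identity cues and penalized for color cues. All module names, dimensions, and the assumption that clothing-color labels are available are hypothetical.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Gradient reversal: identity in the forward pass, negated gradient backward."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)

class ShapeEncoder(nn.Module):
    """Toy CNN encoder producing a feature vector from a person image."""
    def __init__(self, feat_dim=256):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, feat_dim)

    def forward(self, x):
        return self.fc(self.backbone(x).flatten(1))

class AdversarialDisentangler(nn.Module):
    """Encoder + identity head + adversarial color head (hypothetical sketch)."""
    def __init__(self, num_ids, num_colors, feat_dim=256):
        super().__init__()
        self.encoder = ShapeEncoder(feat_dim)
        self.id_head = nn.Linear(feat_dim, num_ids)        # should succeed
        self.color_head = nn.Linear(feat_dim, num_colors)  # should fail

    def forward(self, x, lambd=1.0):
        feat = self.encoder(x)
        # Reversed gradients push the encoder to REMOVE clothing-color cues.
        return feat, self.id_head(feat), self.color_head(grad_reverse(feat, lambd))

# One hypothetical training step with dummy data and labels.
model = AdversarialDisentangler(num_ids=100, num_colors=10)
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
ce = nn.CrossEntropyLoss()

images = torch.randn(8, 3, 128, 64)        # dummy image batch
id_labels = torch.randint(0, 100, (8,))    # person identities
color_labels = torch.randint(0, 10, (8,))  # clothing-color labels

_, id_logits, color_logits = model(images)
loss = ce(id_logits, id_labels) + ce(color_logits, color_labels)
opt.zero_grad()
loss.backward()  # color gradients arrive reversed at the encoder
opt.step()
```

The design intuition: the color head is trained to predict clothing color, but the reversed gradient makes the encoder steer its features away from whatever the color head finds useful, leaving identity cues such as body shape as the dominant signal.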
