Paper Title

GEFF: Improving Any Clothes-Changing Person ReID Model using Gallery Enrichment with Face Features

Paper Authors

Daniel Arkushin, Bar Cohen, Shmuel Peleg, Ohad Fried

Paper Abstract

In the Clothes-Changing Re-Identification (CC-ReID) problem, given a query sample of a person, the goal is to determine the correct identity based on a labeled gallery in which the person appears in different clothes. Several models tackle this challenge by extracting clothes-independent features. However, the performance of these models is still lower for the clothes-changing setting compared to the same-clothes setting in which the person appears with the same clothes in the labeled gallery. As clothing-related features are often dominant features in the data, we propose a new process we call Gallery Enrichment, to utilize these features. In this process, we enrich the original gallery by adding to it query samples based on their face features, using an unsupervised algorithm. Additionally, we show that combining ReID and face feature extraction modules alongside an enriched gallery results in a more accurate ReID model, even for query samples with new outfits that do not include faces. Moreover, we claim that existing CC-ReID benchmarks do not fully represent real-world scenarios, and propose a new video CC-ReID dataset called 42Street, based on a theater play that includes crowded scenes and numerous clothes changes. When applied to multiple ReID models, our method (GEFF) achieves an average improvement of 33.5% and 6.7% in the Top-1 clothes-changing metric on the PRCC and LTCC benchmarks. Combined with the latest ReID models, our method achieves new SOTA results on the PRCC, LTCC, CCVID, LaST and VC-Clothes benchmarks and the proposed 42Street dataset.
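The abstract describes Gallery Enrichment only at a high level. The snippet below is a minimal sketch of that idea under our own assumptions: query samples whose face features confidently match a gallery identity are pseudo-labeled and their body (ReID) features are added to the gallery. All function names, the similarity measure, and the threshold are illustrative; they are not the authors' actual GEFF implementation.

```python
import numpy as np

def cosine_sim(a, b):
    """Row-wise cosine similarity between a (N, d) and b (M, d); returns (N, M)."""
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return a @ b.T

def enrich_gallery(gallery_feats, gallery_face_feats, gallery_ids,
                   query_feats, query_face_feats, sim_thresh=0.6):
    """Pseudo-label query samples whose face features confidently match a
    gallery identity, and add their body (ReID) features to the gallery.
    (Hypothetical sketch; threshold and matching rule are assumptions.)"""
    sims = cosine_sim(query_face_feats, gallery_face_feats)  # (num_query, num_gallery)
    best = sims.argmax(axis=1)                               # closest gallery sample per query
    keep = sims.max(axis=1) >= sim_thresh                    # keep only confident face matches
    enriched_feats = np.vstack([gallery_feats, query_feats[keep]])
    enriched_ids = np.concatenate([gallery_ids, gallery_ids[best[keep]]])
    return enriched_feats, enriched_ids

def top1_identity(query_feats, enriched_feats, enriched_ids):
    """Retrieve the Top-1 identity for each query against the enriched gallery."""
    sims = cosine_sim(query_feats, enriched_feats)
    return enriched_ids[sims.argmax(axis=1)]
```

Under this reading, the enriched gallery contains samples of each identity wearing the query-set outfits, so later queries whose faces are occluded or missing can still be matched through body features alone, which is the effect the abstract claims for query samples with new outfits that do not include faces.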
