Title
Dynamic Template Initialization for Part-Aware Person Re-ID
Authors
Abstract
Many existing Person Re-identification (Re-ID) approaches depend on feature maps that are either partitioned to localize the parts of a person or reduced to create a global representation. While part localization has shown significant success, it relies on either naive position-based partitions or static feature templates. Both, however, assume that the parts, or their positions, are already present in a given image, ignoring image-specific information; this limits their usability in challenging scenarios such as Re-ID with partial occlusions and partial probe images. In this paper, we introduce a spatial attention-based Dynamic Part Template Initialization module that dynamically generates part templates from mid-level semantic features at the earlier layers of the backbone. Following a self-attention layer, part-level human features from the backbone are used to extract templates of diverse body parts through a simplified cross-attention scheme; these templates are then used to identify and collate representations of various human parts from semantically rich features, increasing the discriminative ability of the entire model. We further explore adaptive weighting of part descriptors to quantify the absence or occlusion of local attributes and to suppress the contribution of the corresponding part descriptors to the matching criterion. Extensive experiments on holistic, occluded, and partial Re-ID benchmarks demonstrate that the proposed architecture achieves competitive performance. Code will be included in the supplementary material and will be made publicly available.
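The two mechanisms named in the abstract, cross-attention extraction of part templates and visibility-weighted part matching, can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the shapes, the single-head dot-product attention, and the `visibility` scores (here assumed to be given, in the paper they would come from the adaptive weighting branch) are all simplifying assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def extract_part_templates(feat_map, part_queries):
    """Simplified cross-attention pooling: each part query attends over
    the flattened mid-level feature map and aggregates one part template.
    feat_map: (H*W, C) backbone features; part_queries: (K, C)."""
    C = feat_map.shape[-1]
    attn = softmax(part_queries @ feat_map.T / np.sqrt(C))  # (K, H*W)
    return attn @ feat_map  # (K, C) dynamically generated part templates

def weighted_part_distance(desc_q, desc_g, vis_q, vis_g):
    """Visibility-weighted matching: parts judged absent or occluded
    (low visibility in either image) contribute less to the distance.
    desc_q, desc_g: (K, C) part descriptors; vis_q, vis_g: (K,) in [0, 1]."""
    d = np.linalg.norm(desc_q - desc_g, axis=1)  # (K,) per-part distances
    w = vis_q * vis_g                            # joint part visibility
    return (w * d).sum() / (w.sum() + 1e-8)
```

Zeroing the visibility of a mismatched part removes its contribution, which is the intended effect when a part is occluded in the probe or gallery image.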