Paper Title
BodyMap: Learning Full-Body Dense Correspondence Map
Paper Authors
Abstract
Dense correspondence between humans carries powerful semantic information that can be utilized to solve fundamental problems for full-body understanding, such as in-the-wild surface matching, tracking, and reconstruction. In this paper we present BodyMap, a new framework for obtaining high-definition, full-body, and continuous dense correspondence between in-the-wild images of clothed humans and the surface of a 3D template model. The correspondences cover fine details such as hands and hair, while also capturing regions far from the body surface, such as loose clothing. Prior methods for estimating such dense surface correspondence either i) cut the 3D body into parts that are unwrapped to a 2D UV space, producing discontinuities along part seams, or ii) use a single surface to represent the whole body, but without handling fine-level body details. Here, we introduce a novel network architecture with Vision Transformers that learns fine-level features on a continuous body surface. BodyMap outperforms prior work on various metrics and datasets, including DensePose-COCO, by a large margin. Furthermore, we show various applications ranging from multi-layer dense cloth correspondence to neural rendering with novel-view synthesis and appearance swapping.
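To make the core idea concrete: dense correspondence of this kind can be represented as a per-pixel map from image pixels to continuous 2D coordinates on the body surface (a "UV" map), and evaluated by a per-pixel distance between predicted and ground-truth surface coordinates. The following is a minimal illustrative sketch, not the paper's code; the array names and the simple Euclidean UV error metric are assumptions (the paper reports richer metrics such as geodesic dense distance on the surface).

```python
# Illustrative sketch (assumed names/metric, not the paper's implementation):
# a dense correspondence map assigns each foreground pixel a continuous
# (u, v) coordinate in [0, 1]^2 on a 3D template surface.
import numpy as np

np.random.seed(0)
H, W = 4, 4  # tiny image for demonstration

# Predicted and ground-truth per-pixel correspondence maps.
pred_uv = np.random.rand(H, W, 2)
gt_uv = np.random.rand(H, W, 2)

# Foreground mask: which pixels belong to the person.
mask = np.ones((H, W), dtype=bool)

# A simple dense-correspondence error: mean per-pixel Euclidean distance
# in UV space over foreground pixels.
err = np.linalg.norm(pred_uv - gt_uv, axis=-1)
mean_err = err[mask].mean()
print(f"mean per-pixel UV error: {mean_err:.4f}")
```

Because the coordinates are continuous over a single surface (rather than discrete per-part charts), nearby pixels map to nearby surface points without seam discontinuities, which is the property the paper's single-surface representation is designed to preserve.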