Paper Title

SRFeat: Learning Locally Accurate and Globally Consistent Non-Rigid Shape Correspondence

Authors

Lei Li, Souhaib Attaiki, Maks Ovsjanikov

Abstract

In this work, we present a novel learning-based framework that combines the local accuracy of contrastive learning with the global consistency of geometric approaches, for robust non-rigid matching. We first observe that while contrastive learning can lead to powerful point-wise features, the learned correspondences commonly lack smoothness and consistency, owing to the purely combinatorial nature of the standard contrastive losses. To overcome this limitation, we propose to boost contrastive feature learning with two types of smoothness regularization that inject geometric information into correspondence learning. With this novel combination in hand, the resulting features are both highly discriminative across individual points and, at the same time, lead to robust and consistent correspondences through simple proximity queries. Our framework is general and is applicable to local feature learning in both the 3D and 2D domains. We demonstrate the superiority of our approach through extensive experiments on a wide range of challenging matching benchmarks, including 3D non-rigid shape correspondence and 2D image keypoint matching.
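To make the abstract's ingredients concrete, the sketch below combines a standard contrastive objective with a graph-smoothness regularizer and performs matching via a nearest-neighbor (proximity) query in feature space. Note this is a minimal illustration under assumptions: the specific loss choices here (an InfoNCE-style contrastive term and a Dirichlet-energy smoothness term over mesh edges) and all function names are illustrative, not the paper's exact formulation.

```python
import numpy as np

def contrastive_loss(feat_x, feat_y, tau=0.07):
    """InfoNCE-style loss: row i of feat_x and feat_y is a positive pair,
    all other rows of feat_y act as negatives (purely combinatorial)."""
    fx = feat_x / np.linalg.norm(feat_x, axis=1, keepdims=True)
    fy = feat_y / np.linalg.norm(feat_y, axis=1, keepdims=True)
    logits = fx @ fy.T / tau                      # (n, n) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))            # positives on the diagonal

def dirichlet_energy(feat, edges):
    """Smoothness regularizer: penalize feature variation along mesh edges,
    injecting geometric (connectivity) information into the features."""
    diff = feat[edges[:, 0]] - feat[edges[:, 1]]  # per-edge feature difference
    return np.sum(diff ** 2) / len(edges)

def match_by_proximity(feat_x, feat_y):
    """Correspondence via a simple proximity query: each point of X is
    matched to its nearest neighbor in Y's feature space."""
    dists = np.linalg.norm(feat_x[:, None, :] - feat_y[None, :, :], axis=2)
    return dists.argmin(axis=1)

# Toy usage: a combined objective of the kind the abstract describes.
rng = np.random.default_rng(0)
fx = rng.normal(size=(8, 4))
fy = fx + 0.01 * rng.normal(size=(8, 4))         # near-identical pair features
edges = np.array([[i, i + 1] for i in range(7)]) # a chain "mesh" for the demo
total = contrastive_loss(fx, fy) + 0.1 * dirichlet_energy(fx, edges)
```

The design point the abstract makes is visible in this split: the contrastive term alone only scores discrete point pairs, while the smoothness term ties neighboring features together so that nearest-neighbor matching yields spatially consistent maps.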
