Paper Title

Ditto: Building Digital Twins of Articulated Objects from Interaction

Paper Authors

Zhenyu Jiang, Cheng-Chun Hsu, Yuke Zhu

Abstract

Digitizing physical objects into the virtual world has the potential to unlock new research and applications in embodied AI and mixed reality. This work focuses on recreating interactive digital twins of real-world articulated objects, which can be directly imported into virtual environments. We introduce Ditto to learn articulation model estimation and 3D geometry reconstruction of an articulated object through interactive perception. Given a pair of visual observations of an articulated object before and after interaction, Ditto reconstructs part-level geometry and estimates the articulation model of the object. We employ implicit neural representations for joint geometry and articulation modeling. Our experiments show that Ditto effectively builds digital twins of articulated objects in a category-agnostic way. We also apply Ditto to real-world objects and deploy the recreated digital twins in physical simulation. Code and additional results are available at https://ut-austin-rpl.github.io/Ditto
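The abstract describes Ditto's input/output contract: a pair of visual observations of an articulated object, taken before and after an interaction, is mapped to part-level geometry plus an estimated articulation model (joint type, axis, and state change). A minimal sketch of that interface is below; all names (`ArticulationModel`, `build_digital_twin`) and the naive "points that moved" heuristic are hypothetical placeholders, not the paper's implicit-neural-representation method.

```python
from dataclasses import dataclass
from typing import List, Tuple

Point = Tuple[float, float, float]

@dataclass
class ArticulationModel:
    """Hypothetical schema for the estimated articulation model."""
    joint_type: str     # "revolute" or "prismatic"
    axis: Point         # joint axis direction
    pivot: Point        # a point on the axis (meaningful for revolute joints)
    state_change: float # rotation (rad) or translation (m) between observations

def build_digital_twin(obs_before: List[Point],
                       obs_after: List[Point]) -> Tuple[List[int], ArticulationModel]:
    """Placeholder for a Ditto-style pipeline: paired observations in,
    mobile-part segmentation and articulation model out. The real system
    jointly learns geometry and articulation with implicit neural
    representations; this only illustrates the I/O contract."""
    # Naive stand-in: mark points that moved between observations as the mobile part.
    moved = [i for i, (p, q) in enumerate(zip(obs_before, obs_after)) if p != q]
    # Dummy articulation estimate (a learned model would infer these values).
    model = ArticulationModel(joint_type="revolute",
                              axis=(0.0, 0.0, 1.0),
                              pivot=(0.0, 0.0, 0.0),
                              state_change=0.5)
    return moved, model
```

The key design point the abstract emphasizes is that a single forward pass over the observation pair yields both outputs, so the recreated object can be imported directly into a physics simulator.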
