Title

Category-Independent Articulated Object Tracking with Factor Graphs

Authors

Nick Heppert, Toki Migimatsu, Brent Yi, Claire Chen, Jeannette Bohg

Abstract

Robots deployed in human-centric environments may need to manipulate a diverse range of articulated objects, such as doors, dishwashers, and cabinets. Articulated objects often come with unexpected articulation mechanisms that are inconsistent with categorical priors: for example, a drawer might rotate about a hinge joint instead of sliding open. We propose a category-independent framework for predicting the articulation models of unknown objects from sequences of RGB-D images. The prediction is performed by a two-step process: first, a visual perception module tracks object part poses from raw images, and second, a factor graph takes these poses and infers the articulation model including the current configuration between the parts as a 6D twist. We also propose a manipulation-oriented metric to evaluate predicted joint twists in terms of how well a compliant robot controller would be able to manipulate the articulated object given the predicted twist. We demonstrate that our visual perception and factor graph modules outperform baselines on simulated data and show the applicability of our factor graph on real world data.
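The abstract's central modeling choice is to represent the articulation between two object parts as a 6D twist whose exponential, scaled by a scalar joint configuration, gives the relative part pose. The snippet below is a minimal screw-theory sketch of that idea in NumPy, not the paper's implementation; the function names and the (linear, angular) ordering of the twist vector are assumptions made for this example.

```python
import numpy as np

def skew(w):
    """Return the 3x3 skew-symmetric matrix of a 3-vector w."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def twist_to_relative_pose(twist, q):
    """Map a 6D joint twist [v, w] and scalar configuration q to a 4x4 pose.

    The returned homogeneous transform is the relative pose of the child
    part with respect to the parent part, obtained from the matrix
    exponential of the twist scaled by q (standard screw theory).
    """
    v, w = np.asarray(twist[:3], float), np.asarray(twist[3:], float)
    theta = np.linalg.norm(w)
    T = np.eye(4)
    if theta < 1e-9:
        # No angular part: prismatic joint, q is the sliding distance
        # along the (unit) direction v.
        T[:3, 3] = q * v
        return T
    # Revolute / screw joint: normalize so phi = q * theta is the angle
    # rotated about the unit screw axis w_hat.
    w_hat, v_hat = w / theta, v / theta
    phi = q * theta
    W = skew(w_hat)
    R = np.eye(3) + np.sin(phi) * W + (1.0 - np.cos(phi)) * (W @ W)  # Rodrigues
    G = (np.eye(3) * phi + (1.0 - np.cos(phi)) * W
         + (phi - np.sin(phi)) * (W @ W))
    T[:3, :3] = R
    T[:3, 3] = G @ v_hat
    return T

# Hypothetical example: a hinge about the z-axis passing through the point
# p = (1, 0, 0). Its twist is [v, w] with w = z-axis and v = -w x p.
hinge_twist = np.array([0.0, -1.0, 0.0, 0.0, 0.0, 1.0])
print(twist_to_relative_pose(hinge_twist, np.pi / 2))
```

In this sketch, estimating the articulation model amounts to recovering the twist (and per-frame configurations) that best explains the tracked part poses; the paper formulates that inference over a factor graph rather than the closed-form forward map shown here.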
