Paper Title

Hierarchical Planning for Long-Horizon Manipulation with Geometric and Symbolic Scene Graphs

Paper Authors

Yifeng Zhu, Jonathan Tremblay, Stan Birchfield, Yuke Zhu

Paper Abstract

We present a visually grounded hierarchical planning algorithm for long-horizon manipulation tasks. Our algorithm offers a joint framework of neuro-symbolic task planning and low-level motion generation, conditioned on the specified goal. At the core of our approach is a two-level scene graph representation, namely a geometric scene graph and a symbolic scene graph. This hierarchical representation serves as a structured, object-centric abstraction of manipulation scenes. Our model uses graph neural networks to process these scene graphs, predicting high-level task plans and low-level motions. We demonstrate that our method scales to long-horizon tasks and generalizes well to novel task goals. We validate our method on a kitchen storage task in both physical simulation and the real world. Our experiments show that our method achieves an over 70% success rate and a nearly 90% subgoal completion rate on the real robot, while being four orders of magnitude faster in computation time than a standard search-based task-and-motion planner.
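As a rough illustration of the two-level hierarchy the abstract describes, here is a minimal Python sketch in which a symbolic scene graph drives subgoal selection while a geometric scene graph grounds each subgoal into a motion target. All names (`SymbolicGraph`, `predict_subgoal`, `predict_motion`, etc.) are hypothetical, and the two `predict_*` stubs stand in for the graph neural networks the paper learns; this is not the authors' implementation.

```python
# A minimal sketch of planning over a two-level scene graph, assuming
# hypothetical class and function names (not the paper's actual code).
from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass
class GeometricGraph:
    """Continuous level: per-object pose estimates (relative-pose edges omitted)."""
    poses: Dict[str, Tuple[float, float, float]]

@dataclass
class SymbolicGraph:
    """Discrete level: relation predicates such as ('mug', 'on', 'shelf')."""
    relations: List[Tuple[str, str, str]]

def predict_subgoal(current: SymbolicGraph, goal: SymbolicGraph) -> Tuple[str, str, str]:
    """Stand-in for the high-level GNN: pick one goal relation not yet satisfied."""
    for rel in goal.relations:
        if rel not in current.relations:
            return rel
    raise ValueError("goal already satisfied")

def predict_motion(subgoal: Tuple[str, str, str], geo: GeometricGraph):
    """Stand-in for the low-level GNN: emit a motion target, here simply the
    pose of the subgoal's reference object."""
    obj, _, target = subgoal
    return obj, geo.poses[target]

def plan(current: SymbolicGraph, goal: SymbolicGraph, geo: GeometricGraph,
         max_steps: int = 10):
    """Alternate symbolic subgoal selection and geometric grounding until the
    goal graph is satisfied, mirroring the two-level hierarchy."""
    trace = []
    for _ in range(max_steps):
        if all(r in current.relations for r in goal.relations):
            break
        subgoal = predict_subgoal(current, goal)
        trace.append((subgoal, predict_motion(subgoal, geo)))
        current.relations.append(subgoal)  # optimistically assume execution succeeds
    return trace

if __name__ == "__main__":
    geo = GeometricGraph(poses={"mug": (0.4, 0.1, 0.2), "shelf": (0.7, 0.0, 0.5)})
    start = SymbolicGraph(relations=[("mug", "on", "table")])
    goal = SymbolicGraph(relations=[("mug", "on", "shelf")])
    for sg, motion in plan(start, goal, geo):
        print("subgoal:", sg, "-> motion target:", motion)
```

The design choice mirrored here is that the discrete level reasons only over relations, keeping task planning compact, while the continuous level supplies the poses needed to execute each chosen step.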
