Paper Title
Autonomous Marker-less Rapid Aerial Grasping
Paper Authors
Paper Abstract
In a future with autonomous robots, visual and spatial perception will be of utmost importance for robotic systems. For aerial robotics in particular, many real-world applications require visual perception. Robotic aerial grasping using drones promises fast pick-and-place solutions with a large increase in mobility over other robotic solutions. Utilizing Mask R-CNN scene segmentation (detectron2), we propose a vision-based system for autonomous rapid aerial grasping that does not rely on markers for object localization and does not require the object's appearance to be known in advance. Combining segmented images with spatial information from a depth camera, we generate a dense point cloud of the detected objects and perform geometry-based grasp planning to determine grasping points on the objects. In real-world experiments on a dynamically grasping aerial platform, we show that our system can replicate the object-localization performance of a motion capture system, achieving up to 94.5% of the baseline grasping success rate. With our results, we demonstrate the first use of geometry-based grasping techniques on a flying platform and aim to increase the autonomy of existing aerial manipulation platforms, bringing them closer to real-world applications in warehouses and similar environments.
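The two core steps described in the abstract, back-projecting a segmentation mask through depth data into a point cloud, and picking grasp points from the object's geometry, can be sketched as below. This is a minimal illustration assuming a pinhole camera model; the function names and the principal-axis grasp heuristic are illustrative assumptions, not the authors' actual implementation.

```python
import numpy as np

def mask_depth_to_pointcloud(mask, depth, fx, fy, cx, cy):
    """Back-project depth pixels inside a segmentation mask into a
    3D point cloud in the camera frame (pinhole model, assumed intrinsics)."""
    vs, us = np.nonzero(mask)            # pixel rows/cols covered by the mask
    z = depth[vs, us]
    valid = z > 0                        # discard pixels with no depth reading
    us, vs, z = us[valid], vs[valid], z[valid]
    x = (us - cx) * z / fx               # pinhole back-projection
    y = (vs - cy) * z / fy
    return np.stack([x, y, z], axis=1)   # (N, 3) array of object points

def principal_axis_grasp(points):
    """Toy geometry-based grasp planner: place two antipodal grasp points
    at the object's extremes along its principal axis (via SVD)."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    axis = vt[0]                         # direction of largest spatial extent
    half = np.abs((points - centroid) @ axis).max()
    return centroid - half * axis, centroid + half * axis
```

A real system would additionally filter outliers in the point cloud and check the gripper's kinematic constraints before committing to a grasp; the sketch only conveys the data flow from mask and depth image to grasp candidates.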