Paper Title

DURableVS: Data-efficient Unsupervised Recalibrating Visual Servoing via online learning in a structured generative model

Paper Authors

Nishad Gothoskar, Miguel Lázaro-Gredilla, Yasemin Bekiroglu, Abhishek Agarwal, Joshua B. Tenenbaum, Vikash K. Mansinghka, Dileep George

Paper Abstract

Visual servoing enables robotic systems to perform accurate closed-loop control, which is required in many applications. However, existing methods either require precise calibration of the robot kinematic model and cameras, or use neural architectures that require large amounts of data to train. In this work, we present a method for unsupervised learning of visual servoing that does not require any prior calibration and is extremely data-efficient. Our key insight is that visual servoing does not depend on identifying the veridical kinematic and camera parameters, but rather only on an accurate generative model of image feature observations from the joint positions of the robot. We demonstrate that with our model architecture and learning algorithm, we can consistently learn accurate models from fewer than 50 training samples (which amounts to less than 1 minute of unsupervised data collection), and that such data-efficient learning is not possible with standard neural architectures. Further, we show that by using the generative model in the loop and learning online, we can enable a robotic system to recover from calibration errors and to detect and quickly adapt to possibly unexpected changes in the robot-camera system (e.g., a bumped camera, new objects).
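To make the mechanism concrete, here is a minimal sketch (not the authors' implementation) of the pattern the abstract describes: fit a low-dimensional structured generative model g(q; θ) that predicts image features from joint positions, then servo by inverting the model's Jacobian. The planar two-link kinematics, the affine camera, and all names (`predict_features`, `fit_model`, `servo_step`) are illustrative assumptions of this sketch; the point it demonstrates is that a structured model can be fit from a few dozen unsupervised samples, with no ground-truth calibration, because it never needs the veridical kinematic or camera parameters.

```python
# Sketch only: a structured generative model for visual servoing, fit from
# ~50 (joint angles, observed image feature) pairs. The two-link arm and
# affine camera are illustrative assumptions, not the paper's exact model.
import numpy as np
from scipy.optimize import least_squares

def predict_features(q, theta):
    """Predict a 2-D image feature from joint angles q = (q1, q2).

    theta = (l1, l2, A[2x2] flattened, b[2]): unknown link lengths plus an
    unknown affine camera. Only the composite predictive mapping matters,
    not the true kinematic/camera parameters (they are not identifiable
    separately here, and need not be).
    """
    l1, l2 = theta[0], theta[1]
    A = theta[2:6].reshape(2, 2)
    b = theta[6:8]
    tip = np.array([l1 * np.cos(q[0]) + l2 * np.cos(q[0] + q[1]),
                    l1 * np.sin(q[0]) + l2 * np.sin(q[0] + q[1])])
    return A @ tip + b

def fit_model(qs, obs, theta0):
    """Fit theta by nonlinear least squares; a few dozen samples suffice
    because the model is low-dimensional and structured."""
    def residuals(theta):
        return np.concatenate([predict_features(q, theta) - o
                               for q, o in zip(qs, obs)])
    return least_squares(residuals, theta0).x

def jacobian(q, theta, eps=1e-6):
    """Central-difference Jacobian of predicted features w.r.t. joints."""
    J = np.zeros((2, 2))
    for i in range(2):
        dq = np.zeros(2); dq[i] = eps
        J[:, i] = (predict_features(q + dq, theta)
                   - predict_features(q - dq, theta)) / (2 * eps)
    return J

def servo_step(q, target, theta, gain=0.5):
    """One closed-loop step: move joints to reduce image-feature error."""
    err = target - predict_features(q, theta)
    return q + gain * np.linalg.pinv(jacobian(q, theta)) @ err

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Ground truth exists only to simulate the camera; the learner never sees it.
    true_theta = np.array([1.0, 0.7, 80.0, 10.0, -5.0, 85.0, 320.0, 240.0])
    qs = rng.uniform(-np.pi, np.pi, size=(50, 2))      # ~50 unsupervised samples
    obs = [predict_features(q, true_theta) + rng.normal(0, 0.5, 2) for q in qs]
    theta = fit_model(qs, obs,
                      theta0=np.array([0.5, 0.5, 1, 0, 0, 1, 0, 0], dtype=float))

    q = np.array([0.2, 0.3])                           # servo toward a target feature
    target = predict_features(np.array([1.0, -0.5]), true_theta)
    for _ in range(30):
        q = servo_step(q, target, theta)
    print("final pixel error:",
          np.linalg.norm(target - predict_features(q, true_theta)))
```

Because the model stays in the loop, the same fit can be re-run online on a sliding window of recent (joint angles, observation) pairs: a sudden jump in prediction error signals a change in the robot-camera system (e.g., a bumped camera), and refitting restores servoing accuracy, which is the recovery-and-adaptation behavior the abstract describes.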
