Paper Title
Resolving Copycat Problems in Visual Imitation Learning via Residual Action Prediction
Paper Authors
Paper Abstract
Imitation learning is a widely used policy learning method that enables intelligent agents to acquire complex skills from expert demonstrations. The input to an imitation learning algorithm is usually composed of both the current observation and historical observations, since the most recent observation alone might not contain enough information. This is especially the case with image observations, where a single image only captures one view of the scene, lacks motion information, and suffers from object occlusions. In theory, providing multiple observations to the imitation learning agent should lead to better performance. Surprisingly, however, imitation from observation histories sometimes performs worse than imitation from only the most recent observation. In this paper, we explain this phenomenon from the perspective of information flow within the neural network. We also propose a novel imitation learning neural network architecture that, by design, does not suffer from this issue. Furthermore, our method scales to high-dimensional image observations. Finally, we benchmark our approach on two widely used simulators, CARLA and MuJoCo, and show that it successfully alleviates the copycat problem and surpasses existing solutions.
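To make the idea named in the title more concrete, below is a minimal PyTorch sketch of one plausible reading of residual action prediction: the current observation predicts a base action, while the observation history is only allowed to predict a residual correction, so the history branch cannot simply copy the previous action. The class name `ResidualActionPolicy`, the layer sizes, and the exact two-branch decomposition are illustrative assumptions, not the architecture specified in the paper.

```python
import torch
import torch.nn as nn

class ResidualActionPolicy(nn.Module):
    """Hypothetical two-branch policy: a base action from the most recent
    observation plus a residual correction from the observation history."""

    def __init__(self, obs_dim: int, action_dim: int, history_len: int, hidden: int = 256):
        super().__init__()
        # Branch 1: base action predicted from the current observation only.
        self.base_head = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, action_dim),
        )
        # Branch 2: residual action predicted from the stacked observation history.
        self.residual_head = nn.Sequential(
            nn.Linear(obs_dim * history_len, hidden), nn.ReLU(),
            nn.Linear(hidden, action_dim),
        )

    def forward(self, obs_current: torch.Tensor, obs_history: torch.Tensor):
        base = self.base_head(obs_current)                     # (B, action_dim)
        residual = self.residual_head(obs_history.flatten(1))  # (B, action_dim)
        return base + residual, base, residual


# Toy usage with random tensors (all dimensions are placeholders).
policy = ResidualActionPolicy(obs_dim=32, action_dim=2, history_len=4)
obs_now = torch.randn(8, 32)
obs_hist = torch.randn(8, 4, 32)
action, base, residual = policy(obs_now, obs_hist)
# Standard behaviour-cloning regression against (random) expert actions.
loss = nn.functional.mse_loss(action, torch.randn(8, 2))
loss.backward()
```

The intent of such a split is that the copycat shortcut (repeating the last action inferred from the history) can only influence the small residual term, while the main action prediction must rely on the current observation; the paper's actual formulation should be consulted for the precise training objective.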