Paper Title

Explaining Autonomous Driving by Learning End-to-End Visual Attention

Paper Authors

Cultrera, Luca, Seidenari, Lorenzo, Becattini, Federico, Pala, Pietro, Del Bimbo, Alberto

Paper Abstract

Current deep-learning-based autonomous driving approaches yield impressive results, even leading to in-production deployment in certain controlled scenarios. One of the most popular and fascinating approaches relies on learning vehicle controls directly from data perceived by sensors. This end-to-end learning paradigm can be applied both in classical supervised settings and with reinforcement learning. Nonetheless, the main drawback of this approach, as in other learning problems, is the lack of explainability. Indeed, a deep network acts as a black box, outputting predictions based on previously seen driving patterns without giving any feedback on why such decisions were taken. While obtaining explainable outputs from a learned agent is not critical for achieving optimal performance, in such a safety-critical field it is of paramount importance to understand how the network behaves. This is particularly relevant for interpreting failures of such systems. In this work, we propose to train an imitation-learning-based agent equipped with an attention model. The attention model allows us to understand what part of the image has been deemed most important. Interestingly, the use of attention also leads to superior performance on a standard benchmark using the CARLA driving simulator.
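To make the abstract's key idea concrete, the sketch below shows what a spatial visual-attention layer of this kind typically looks like: a score is computed per spatial location of a convolutional feature grid, normalized with a softmax into an attention map (which can be visualized as an explanation of where the model "looks"), and used to pool the features into a single vector for downstream control prediction. This is a minimal illustrative NumPy sketch, not the paper's actual architecture; the function name and the scoring weights `w` are assumptions.

```python
import numpy as np

def spatial_attention(features, w):
    """Softmax attention over the spatial locations of a feature grid.

    features: (H, W, C) feature map, e.g. from a conv backbone.
    w:        (C,) scoring weights (hypothetical stand-in; in an
              end-to-end setting these are learned jointly with the
              driving objective).

    Returns the attended (C,) feature vector and the (H, W) attention
    map, which can be overlaid on the input image for inspection.
    """
    h, w_dim, c = features.shape
    flat = features.reshape(-1, c)                # (H*W, C)
    scores = flat @ w                             # one relevance score per location
    scores = scores - scores.max()                # numerical stability for softmax
    attn = np.exp(scores) / np.exp(scores).sum()  # weights sum to 1 over locations
    attended = attn @ flat                        # (C,) attention-weighted feature
    return attended, attn.reshape(h, w_dim)

# Toy example: a 4x4 grid of 8-dimensional features.
rng = np.random.default_rng(0)
feats = rng.standard_normal((4, 4, 8))
vec, attn_map = spatial_attention(feats, rng.standard_normal(8))
```

Because the attention map is an explicit, normalized distribution over image regions, it doubles as the explanation signal the abstract describes: inspecting it on failure cases shows which parts of the scene drove the decision.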
