Paper Title

Learning an Interpretable Model for Driver Behavior Prediction with Inductive Biases

Authors

Salar Arbabi, Davide Tavernini, Saber Fallah, Richard Bowden

Abstract

To plan safe maneuvers and act with foresight, autonomous vehicles must be capable of accurately predicting the uncertain future. In the context of autonomous driving, deep neural networks have been successfully applied to learning predictive models of human driving behavior from data. However, the predictions suffer from cascading errors, resulting in large inaccuracies over long time horizons. Furthermore, the learned models are black boxes, and thus it is often unclear how they arrive at their predictions. In contrast, rule-based models, which are informed by human experts, maintain long-term coherence in their predictions and are human-interpretable. However, such models often lack the expressiveness needed to capture complex real-world dynamics. In this work, we begin to close this gap by embedding the Intelligent Driver Model, a popular hand-crafted driver model, into deep neural networks. Our model's transparency can offer considerable advantages, e.g., in debugging the model and more easily interpreting its predictions. We evaluate our approach on a simulated merging scenario, showing that it yields a robust model that is end-to-end trainable and provides greater transparency at no cost to the model's predictive accuracy.
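
To make the core idea concrete, the sketch below shows one way an Intelligent Driver Model (IDM) can be embedded as a differentiable layer inside a neural network: a small encoder maps observed context to the five standard IDM parameters, and the IDM car-following equation then produces the predicted acceleration. This is a minimal illustrative sketch in PyTorch; the class names, encoder architecture, and parameter handling are assumptions for exposition, not the authors' exact implementation.

```python
import torch
import torch.nn as nn


class IDMLayer(nn.Module):
    """Differentiable Intelligent Driver Model (IDM) car-following equation."""

    def forward(self, v, gap, dv, params):
        # v:   follower speed; gap: bumper-to-bumper distance to the leader;
        # dv:  approach rate (follower speed minus leader speed); all shape (batch,).
        # params holds the five IDM parameters per sample:
        # desired speed v0, time headway T, jam distance s0,
        # maximum acceleration a_max, comfortable braking b.
        v0, T, s0, a_max, b = params.unbind(dim=-1)
        # Desired dynamical gap s*.
        s_star = s0 + v * T + v * dv / (2.0 * torch.sqrt(a_max * b))
        # Standard IDM acceleration with free-flow exponent delta = 4.
        return a_max * (1.0 - (v / v0) ** 4 - (s_star / gap.clamp(min=1e-3)) ** 2)


class NeuralIDM(nn.Module):
    """Hypothetical encoder that infers IDM parameters from observed context,
    then passes them through the IDM layer for structured predictions."""

    def __init__(self, obs_dim, hidden_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(obs_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 5),
            nn.Softplus(),  # keeps all five IDM parameters positive
        )
        self.idm = IDMLayer()

    def forward(self, obs, v, gap, dv):
        params = self.encoder(obs) + 1e-2  # small offset avoids exact zeros (e.g. v0 = 0)
        accel = self.idm(v, gap, dv, params)
        return accel, params  # acceleration plus the interpretable IDM parameters
```

Because every latent quantity the network outputs is a named IDM parameter with physical units, the predictions can be inspected and debugged in those terms, which is the transparency advantage the abstract describes.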
