Paper Title
DriveFuzz: Discovering Autonomous Driving Bugs through Driving Quality-Guided Fuzzing
Paper Authors
Paper Abstract
Autonomous driving has become real; semi-autonomous vehicles in an affordable price range are already on the streets, and major automotive vendors are actively developing full self-driving systems to deploy in this decade. Before rolling the products out to end-users, it is critical to test and ensure the safety of autonomous driving systems, which consist of multiple layers intertwined in complicated ways. However, while safety-critical bugs may exist in any layer and even across layers, relatively little attention has been given to testing the entire driving system across all the layers. Prior work mainly focuses on white-box testing of individual layers and preventing attacks on each layer. In this paper, we aim at holistic testing of autonomous driving systems that have a whole stack of layers integrated in their entirety. Instead of looking into the individual layers, we focus on the vehicle states that the system continuously changes in the driving environment. This allows us to design DriveFuzz, a new systematic fuzzing framework that can uncover potential vulnerabilities regardless of their locations. DriveFuzz automatically generates and mutates driving scenarios based on diverse factors, leveraging a high-fidelity driving simulator. We build novel driving test oracles based on real-world traffic rules to detect safety-critical misbehaviors, and guide the fuzzer towards such misbehaviors through driving quality metrics referring to the physical states of the vehicle. DriveFuzz has discovered 30 new bugs in various layers of two autonomous driving systems (Autoware and CARLA Behavior Agent) and three additional bugs in the CARLA simulator. We further analyze the impact of these bugs and how an adversary may exploit them as security vulnerabilities to cause critical accidents in the real world.
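To illustrate the idea of driving-quality-guided fuzzing described in the abstract, the following is a minimal, hypothetical sketch of such a loop. All names here (`Scenario` fields, `run_simulation`, the toy oracle and quality metric) are illustrative assumptions, not the actual DriveFuzz implementation or API: scenarios are mutated, executed in a simulator, checked against a traffic-rule oracle, and prioritized by a quality metric computed from the vehicle's physical states.

```python
import random

# Hypothetical sketch: all identifiers are illustrative, not the real DriveFuzz API.

def mutate(scenario):
    """Randomly perturb one factor of a driving scenario (a dict of numeric factors)."""
    s = dict(scenario)
    key = random.choice(list(s))
    s[key] = s[key] + random.uniform(-1.0, 1.0)
    return s

def driving_quality(states):
    """Toy quality metric over the trace of physical vehicle states.

    Lower is worse: hard acceleration/steering suggests risky driving,
    so low-quality scenarios are promising fuzzing candidates."""
    return -sum(abs(st["accel"]) + abs(st["steer"]) for st in states)

def violates_oracle(states):
    """Toy traffic-rule oracle: flag any collision in the trace."""
    return any(st.get("collision", False) for st in states)

def fuzz(seed_scenario, run_simulation, iterations=100):
    """Quality-guided fuzzing loop: mutate the lowest-quality (riskiest) scenario."""
    corpus = [(seed_scenario, float("inf"))]
    bugs = []
    for _ in range(iterations):
        parent, _ = min(corpus, key=lambda c: c[1])   # prefer low-quality scenarios
        child = mutate(parent)
        states = run_simulation(child)                # trace of physical vehicle states
        if violates_oracle(states):
            bugs.append(child)                        # safety-critical misbehavior found
        else:
            corpus.append((child, driving_quality(states)))
    return bugs
```

In the real system, `run_simulation` would execute the scenario in a high-fidelity simulator (CARLA) with the autonomous driving stack in the loop, and the oracle would encode real-world traffic rules rather than a single collision flag.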