Paper Title
Reducing DNN Labelling Cost using Surprise Adequacy: An Industrial Case Study for Autonomous Driving
Paper Authors
Paper Abstract
Deep Neural Networks (DNNs) are rapidly being adopted by the automotive industry, due to their impressive performance in tasks that are essential for autonomous driving. Object segmentation is one such task: its aim is to precisely locate boundaries of objects and classify the identified objects, helping autonomous cars to recognise the road environment and the traffic situation. Not only is this task safety critical, but developing a DNN-based object segmentation module presents a set of challenges that are significantly different from the traditional development of safety-critical software. The development process in use consists of multiple iterations of data collection, labelling, training, and evaluation. Among these stages, training and evaluation are computation intensive, while data collection and labelling are manual labour intensive. This paper shows how the development of DNN-based object segmentation can be improved by exploiting the correlation between Surprise Adequacy (SA) and model performance. The correlation allows us to predict model performance for inputs without manually labelling them. This, in turn, enables understanding of model performance, more guided data collection, and informed decisions about further training. In our industrial case study, the technique allows cost savings of up to 50% with negligible evaluation inaccuracy. Furthermore, engineers can trade off cost savings against the tolerable level of inaccuracy depending on different development phases and scenarios.
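The abstract does not specify which SA variant the case study uses, so the sketch below is only an illustration of the underlying idea: it computes Distance-based Surprise Adequacy (DSA), as defined in the original Surprise Adequacy work, and uses the scores to decide which unlabelled inputs to send for manual labelling. It assumes activation traces have already been extracted as NumPy arrays; the function names, the thresholding policy, and all parameters are hypothetical, not the authors' implementation.

```python
import numpy as np

def dsa(at_x, pred_class, train_ats, train_classes):
    """Distance-based Surprise Adequacy of one input.

    at_x          : activation trace of the new input, shape (D,)
    pred_class    : class the model predicted for the new input
    train_ats     : activation traces of the training set, shape (N, D)
    train_classes : training-set labels, shape (N,)
    """
    same = train_ats[train_classes == pred_class]
    other = train_ats[train_classes != pred_class]

    # dist_a: distance to the nearest training trace of the same class.
    d_same = np.linalg.norm(same - at_x, axis=1)
    nearest = same[np.argmin(d_same)]
    dist_a = d_same.min()

    # dist_b: distance from that neighbour to the nearest trace
    # of any other class.
    dist_b = np.linalg.norm(other - nearest, axis=1).min()

    # High DSA means the input is far from familiar training
    # territory, i.e. the model's prediction is more "surprising".
    return dist_a / dist_b

def split_by_surprise(sa_scores, threshold):
    """Hypothetical selection policy: high-SA inputs go to manual
    labelling; low-SA inputs are assumed to be handled well by the
    model and skip labelling, which is where the cost saving comes
    from. The threshold is the cost/inaccuracy trade-off knob."""
    sa_scores = np.asarray(sa_scores)
    to_label = np.where(sa_scores >= threshold)[0]
    to_skip = np.where(sa_scores < threshold)[0]
    return to_label, to_skip
```

Raising the threshold skips more inputs (larger savings but more evaluation inaccuracy), while lowering it sends more inputs to manual labelling, mirroring the trade-off between cost savings and tolerable inaccuracy described in the abstract.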