Paper Title

YOdar: Uncertainty-based Sensor Fusion for Vehicle Detection with Camera and Radar Sensors

Paper Authors

Kowol, Kamil, Rottmann, Matthias, Bracke, Stefan, Gottschalk, Hanno

Abstract

In this work, we present an uncertainty-based method for sensor fusion with camera and radar data. The outputs of two neural networks, one processing camera data and the other radar data, are combined in an uncertainty-aware manner. To this end, we gather the outputs and corresponding meta information for both networks. For each predicted object, the gathered information is post-processed by a gradient boosting method to produce a joint prediction of both networks. In our experiments, we combine the YOLOv3 object detection network with a customized $1D$ radar segmentation network and evaluate our method on the nuScenes dataset. In particular, we focus on night scenes, where the capability of object detection networks based on camera data is potentially handicapped. Our experiments show that this approach of uncertainty-aware fusion, which is also of a very modular nature, significantly improves performance compared to single-sensor baselines and is in range of specifically tailored deep-learning-based fusion approaches.
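The fusion step described above can be sketched as follows. This is a minimal illustration, not the authors' actual pipeline: the feature names (camera score, box size, radar score, radar coverage) are hypothetical stand-ins for the per-object outputs and meta information mentioned in the abstract, and the data is synthetic. The idea is that a gradient boosting classifier takes the stacked features of each predicted object and outputs the joint probability that the detection is a true positive.

```python
# Hedged sketch: uncertainty-aware fusion of camera and radar outputs
# via gradient boosting, in the spirit of the abstract. Feature layout
# and data are illustrative assumptions, not the paper's implementation.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
n = 400

# Per-object feature vector (hypothetical):
# [camera_score, box_size, radar_score, radar_coverage]
X = rng.uniform(0.0, 1.0, size=(n, 4))

# Toy ground truth: a detection is real when the two sensor
# confidences agree strongly (their sum exceeds 1.0).
y = ((X[:, 0] + X[:, 2]) > 1.0).astype(int)

# Gradient boosting post-processes the gathered per-object information
# into a joint prediction of both networks.
fusion = GradientBoostingClassifier(n_estimators=100, max_depth=2,
                                    random_state=0)
fusion.fit(X[:300], y[:300])

# Joint prediction: probability that a held-out detection is a vehicle.
joint_prob = fusion.predict_proba(X[300:])[:, 1]
acc = fusion.score(X[300:], y[300:])
```

Because the meta classifier only consumes scores and meta features, either detection network can be swapped out without retraining the other, which is the modularity the abstract highlights.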
