Paper Title

Temporal and Spatial Online Integrated Calibration for Camera and LiDAR

Paper Authors

Shouan Wang, Xinyu Zhang, GuiPeng Zhang, Yijin Xiong, Ganglin Tian, Shichun Guo, Jun Li

Abstract

While cameras and LiDAR are widely used in most assisted and autonomous driving systems, only a few works have addressed the temporal synchronization and extrinsic calibration of camera and LiDAR for online sensor data fusion. Existing temporal and spatial calibration techniques face challenges in cross-sensor relevance and real-time performance. In this paper, we introduce a pose estimation model and environment-robust line feature extraction to improve the relevance of data fusion and the ability to correct calibration instantly online. Dynamic-target elimination seeks an optimal policy by considering the correspondence of point cloud matching between adjacent moments. The search optimization process provides accurate parameters with both computational accuracy and efficiency. To demonstrate the benefits of this method, we evaluate it on the KITTI benchmark against ground-truth values. In online experiments, our approach improves accuracy by 38.5% over the soft synchronization method in temporal calibration. In spatial calibration, our approach automatically corrects disturbance errors within 0.4 seconds and achieves an accuracy of 0.3 degrees. This work can promote the research and application of sensor fusion.
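The extrinsic calibration the abstract refers to is the rigid transform (rotation plus translation) that maps LiDAR points into the camera frame so they can be projected onto the image for fusion. The paper's own pipeline is not reproduced here; the sketch below only shows this standard projection step with a simple pinhole camera model, assuming illustrative matrices (the function names, Euler-angle convention, and intrinsic values are all hypothetical, not taken from the paper or the KITTI devkit):

```python
import numpy as np

def euler_to_rotation(roll, pitch, yaw):
    """Build a 3x3 rotation matrix from Euler angles in radians (Z-Y-X order).
    Convention chosen for illustration; real pipelines must match their devkit."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def project_lidar_to_image(points, R, t, K):
    """Project Nx3 LiDAR points into pixel coordinates.
    R, t: extrinsics (LiDAR frame -> camera frame); K: 3x3 camera intrinsics.
    Returns Mx2 pixel coordinates for the points in front of the camera."""
    cam = (R @ points.T).T + t        # rigid transform into the camera frame
    cam = cam[cam[:, 2] > 0]          # keep only points with positive depth
    uv = (K @ cam.T).T                # pinhole projection to homogeneous pixels
    return uv[:, :2] / uv[:, 2:3]     # perspective divide -> (u, v)
```

A spatial calibration error shows up directly in this projection: a biased `R` or `t` shifts the projected LiDAR points off the image features they should land on, which is what edge/line-based methods like the one described above exploit to correct the extrinsics.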
