Paper Title

CC-3DT: Panoramic 3D Object Tracking via Cross-Camera Fusion

Paper Authors

Tobias Fischer, Yung-Hsu Yang, Suryansh Kumar, Min Sun, Fisher Yu

Abstract

To track the 3D locations and trajectories of the other traffic participants at any given time, modern autonomous vehicles are equipped with multiple cameras that cover the vehicle's full surroundings. Yet, camera-based 3D object tracking methods prioritize optimizing the single-camera setup and resort to post-hoc fusion in a multi-camera setup. In this paper, we propose a method for panoramic 3D object tracking, called CC-3DT, that associates and models object trajectories both temporally and across views, and improves the overall tracking consistency. In particular, our method fuses 3D detections from multiple cameras before association, reducing identity switches significantly and improving motion modeling. Our experiments on large-scale driving datasets show that fusion before association leads to a large margin of improvement over post-hoc fusion. We set a new state-of-the-art with 12.6% improvement in average multi-object tracking accuracy (AMOTA) among all camera-based methods on the competitive NuScenes 3D tracking benchmark, outperforming previously published methods by 6.5% in AMOTA with the same 3D detector.
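The core idea the abstract describes — pooling per-camera 3D detections into a single ego-frame set *before* track association, instead of associating per camera and fusing post hoc — can be sketched as follows. This is a minimal illustration under assumed data structures, not the authors' implementation: the `Detection` class, the simplified yaw-plus-translation extrinsics, and the greedy duplicate-merging rule are all hypothetical stand-ins for the paper's detector outputs, camera calibration, and matching.

```python
import math
from dataclasses import dataclass


@dataclass
class Detection:
    # 3D center (x, y, z) in some frame, plus a confidence score.
    # (Hypothetical stand-in for a full 3D detection with size/orientation.)
    center: tuple
    score: float


def to_ego_frame(det, cam_yaw, cam_offset):
    """Transform a detection center from a camera frame to the shared ego frame.
    Simplified extrinsics: a yaw rotation plus a 2D translation (assumption)."""
    x, y, z = det.center
    c, s = math.cos(cam_yaw), math.sin(cam_yaw)
    ex = c * x - s * y + cam_offset[0]
    ey = s * x + c * y + cam_offset[1]
    return Detection(center=(ex, ey, z), score=det.score)


def fuse_before_association(per_camera_dets, extrinsics, dist_thresh=1.0):
    """Pool detections from all cameras into one ego-frame list, merging
    near-duplicate detections of the same object seen by overlapping cameras.
    Temporal association with existing tracks then runs once on this fused
    set, rather than once per camera."""
    fused = []
    for cam_id, dets in per_camera_dets.items():
        yaw, offset = extrinsics[cam_id]
        for det in dets:
            ego_det = to_ego_frame(det, yaw, offset)
            # Greedy duplicate merge: if a fused detection is within
            # dist_thresh in the ground plane, keep the higher-scoring one.
            for i, f in enumerate(fused):
                dx = f.center[0] - ego_det.center[0]
                dy = f.center[1] - ego_det.center[1]
                if math.hypot(dx, dy) < dist_thresh:
                    if ego_det.score > f.score:
                        fused[i] = ego_det
                    break
            else:
                fused.append(ego_det)
    return fused
```

For example, an object at ego coordinates (5, 0, 1) seen by a forward camera and by a second camera rotated 90° yields two detections that land at the same ego-frame point and are merged into one, so the tracker sees a single candidate instead of two competing identities.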
