Paper Title
RV-FuseNet: Range View Based Fusion of Time-Series LiDAR Data for Joint 3D Object Detection and Motion Forecasting
Paper Authors
Paper Abstract
Robust real-time detection and motion forecasting of traffic participants is necessary for autonomous vehicles to safely navigate urban environments. In this paper, we present RV-FuseNet, a novel end-to-end approach for joint detection and trajectory estimation directly from time-series LiDAR data. Instead of the widely used bird's eye view (BEV) representation, we utilize the native range view (RV) representation of LiDAR data. The RV preserves the full resolution of the sensor by avoiding the voxelization used in the BEV. Furthermore, the RV can be processed efficiently due to its compactness. Previous approaches project time-series data to a common viewpoint for temporal fusion, and often this viewpoint differs from the one at which the data was captured. This is sufficient for BEV methods, but for RV methods it can lead to loss of information and data distortion, which adversely impact performance. To address this challenge, we propose a simple yet effective novel architecture, \textit{Incremental Fusion}, that minimizes information loss by sequentially projecting each RV sweep into the viewpoint of the next sweep in time. We show that our approach significantly improves motion forecasting performance over the existing state-of-the-art. Furthermore, we demonstrate that our sequential fusion approach is superior to alternative RV-based fusion methods on multiple datasets.
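The core idea of incremental fusion (sequentially re-projecting each sweep into the next sweep's viewpoint, then rendering a range view) can be illustrated with a minimal geometric sketch. This is not the paper's implementation: the actual method fuses learned features in a network, whereas this sketch simply accumulates raw points. The range-image layout (64 rows, 2048 columns, a fixed vertical field of view) and the per-sweep sensor-to-world poses are assumptions for illustration.

```python
import numpy as np

def to_range_view(points, n_rows=64, n_cols=2048, fov_up=3.0, fov_down=-25.0):
    """Project 3D points (N, 3) into a spherical range image (assumed layout)."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)
    yaw = np.arctan2(y, x)  # azimuth in [-pi, pi]
    pitch = np.arcsin(np.clip(z / np.maximum(r, 1e-6), -1.0, 1.0))
    col = ((yaw + np.pi) / (2 * np.pi) * n_cols).astype(int) % n_cols
    fov = np.radians(fov_up) - np.radians(fov_down)
    row = np.clip(((np.radians(fov_up) - pitch) / fov * n_rows).astype(int),
                  0, n_rows - 1)
    image = np.zeros((n_rows, n_cols))
    # Fill far-to-near so the closest return wins each pixel.
    for i in np.argsort(-r):
        image[row[i], col[i]] = r[i]
    return image

def incremental_fusion(sweeps, poses):
    """Sequentially carry each past sweep into the next sweep's frame.

    sweeps: list of (N_i, 3) point arrays, each in its own sensor frame.
    poses:  list of 4x4 sensor-to-world transforms, one per sweep.
    Returns a range image of all points expressed in the last sweep's frame.
    """
    fused = sweeps[0]
    for t in range(1, len(sweeps)):
        # Relative transform taking frame t-1 into frame t.
        rel = np.linalg.inv(poses[t]) @ poses[t - 1]
        homog = np.hstack([fused, np.ones((len(fused), 1))])
        fused = (homog @ rel.T)[:, :3]
        # Accumulate with the current sweep before moving on.
        fused = np.vstack([fused, sweeps[t]])
    return to_range_view(fused)
```

Because each sweep is only ever re-projected into the immediately following viewpoint, the distortion introduced at each step stays small relative to warping every sweep directly into one distant common frame, which is the intuition the abstract describes.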