Paper Title
CPGNet: Cascade Point-Grid Fusion Network for Real-Time LiDAR Semantic Segmentation
Paper Authors
Paper Abstract
LiDAR semantic segmentation, essential for advanced autonomous driving, is required to be accurate, fast, and easy to deploy on mobile platforms. Previous point-based or sparse voxel-based methods are far from real-time applications since they employ time-consuming neighbor searching or sparse 3D convolution. Recent 2D projection-based methods, including range view and multi-view fusion, can run in real time, but suffer from lower accuracy due to information loss during the 2D projection. Besides, to improve performance, previous methods usually adopt test-time augmentation (TTA), which further slows down inference. To achieve a better speed-accuracy trade-off, we propose the Cascade Point-Grid Fusion Network (CPGNet), which ensures both effectiveness and efficiency mainly through the following two techniques: 1) the novel Point-Grid (PG) fusion block extracts semantic features mainly on the 2D projected grid for efficiency, while summarizing both 2D and 3D features on the 3D points for minimal information loss; 2) the proposed transformation consistency loss narrows the gap between single-time model inference and TTA. Experiments on the SemanticKITTI and nuScenes benchmarks demonstrate that CPGNet without ensemble models or TTA is comparable to the state-of-the-art RPVNet, while running 4.7 times faster.
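The core idea behind a point-grid fusion block can be illustrated with a toy NumPy sketch: per-point features are scattered onto a 2D grid (where cheap 2D convolutions would run), then gathered back to the 3D points and fused with the original point features. This is a simplified, hypothetical illustration of the projection/gathering steps, not the paper's actual implementation; the function names, max-pooling choice, and concatenation fusion are all assumptions for clarity.

```python
import numpy as np

def point_to_grid(points_xy, point_feats, grid_size, cell):
    """Scatter per-point features onto a 2D BEV grid via max pooling.
    Simplified stand-in for the point-to-grid projection step."""
    H, W = grid_size
    grid = np.zeros((H, W, point_feats.shape[1]), dtype=point_feats.dtype)
    ix = np.clip((points_xy[:, 0] / cell).astype(int), 0, W - 1)
    iy = np.clip((points_xy[:, 1] / cell).astype(int), 0, H - 1)
    for i in range(len(points_xy)):  # max-pool colliding points per cell
        grid[iy[i], ix[i]] = np.maximum(grid[iy[i], ix[i]], point_feats[i])
    return grid, iy, ix

def grid_to_point(grid, iy, ix):
    """Gather 2D grid features back to the 3D points (nearest cell)."""
    return grid[iy, ix]

# Toy fusion: project, run a 2D stage (identity here), gather, then
# fuse grid features with the original point features by concatenation.
pts = np.array([[0.4, 0.2], [3.9, 3.7], [0.6, 0.3]])   # (x, y) of 3 points
feats = np.array([[1.0], [2.0], [3.0]])                 # 1-D point features
grid, iy, ix = point_to_grid(pts, feats, grid_size=(4, 4), cell=1.0)
fused = np.concatenate([feats, grid_to_point(grid, iy, ix)], axis=1)
# fused keeps a per-point 3D feature alongside the pooled 2D grid feature,
# so points sharing a cell stay distinguishable despite the 2D projection.
```

Keeping the concatenated per-point feature is what limits the information loss of the 2D projection: two points falling into the same grid cell receive the same gathered grid feature but retain distinct point features.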