Paper Title

Self-supervised Point Cloud Completion on Real Traffic Scenes via Scene-concerned Bottom-up Mechanism

Paper Authors

Yiming Ren, Peishan Cong, Xinge Zhu, Yuexin Ma

Paper Abstract

Real scans always miss partial geometries of objects due to self-occlusions, external occlusions, and limited sensor resolution. Point cloud completion aims to infer the complete shapes of incomplete 3D object scans. Current deep learning-based approaches rely on large-scale complete shapes during training, which are usually obtained from synthetic datasets; due to the domain gap, they are not applicable to real-world scans. In this paper, we propose a self-supervised point cloud completion method (TraPCC) for vehicles in real traffic scenes that requires no complete data. Based on the symmetry and similarity of vehicles, we make use of consecutive point cloud frames to construct a vehicle memory bank as reference. We design a bottom-up mechanism to focus on both the local geometric details and the global shape features of inputs. In addition, we design a scene-graph in the network to attend to missing parts with the aid of neighboring vehicles. Experiments show that TraPCC achieves good performance for real-scan completion on the KITTI and nuScenes traffic datasets, even without any complete data in training. We also show a downstream application to 3D detection, which benefits from our completion approach.
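The abstract describes building a vehicle memory bank from consecutive point cloud frames as the reference for self-supervised completion. The sketch below is only a minimal illustration of that idea, not the paper's implementation: it assumes hypothetical inputs where each frame maps a tracking ID to a vehicle's points already expressed in the vehicle's local coordinates, and simply accumulates observations per vehicle into denser reference clouds.

```python
# Hypothetical sketch of a vehicle memory bank built from consecutive frames.
# Names and data layout (frames, track_id, per-vehicle point arrays) are
# illustrative assumptions, not TraPCC's actual code.
from collections import defaultdict
import numpy as np

def build_vehicle_memory_bank(frames):
    """frames: list of dicts mapping track_id -> (N_i, 3) arrays of vehicle
    points, assumed to be already transformed into each vehicle's local frame."""
    bank = defaultdict(list)
    for frame in frames:
        for track_id, points in frame.items():
            bank[track_id].append(points)
    # Concatenate observations per vehicle to form a denser reference cloud.
    return {tid: np.concatenate(obs, axis=0) for tid, obs in bank.items()}

if __name__ == "__main__":
    # Two toy frames, each holding a partial observation of the same vehicle.
    rng = np.random.default_rng(0)
    frames = [
        {7: rng.normal(size=(100, 3))},
        {7: rng.normal(size=(120, 3))},
    ]
    memory_bank = build_vehicle_memory_bank(frames)
    print(memory_bank[7].shape)  # (220, 3): aggregated reference points
```

In the paper's setting, such aggregated references would then supervise the network's bottom-up completion of sparse single-frame vehicle scans; the details of that training procedure are beyond this sketch.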
