Paper Title
Learning Scene Flow in 3D Point Clouds with Noisy Pseudo Labels
Paper Authors
Paper Abstract
We propose a novel scene flow method that captures 3D motion from point clouds without relying on ground-truth scene flow annotations. Due to the irregularity and sparsity of point clouds, acquiring ground-truth scene flow annotations is expensive and time-consuming. Some state-of-the-art approaches train scene flow networks in a self-supervised manner by approximating pseudo scene flow labels from point clouds. However, these methods fail to reach the performance level of fully supervised methods, due to the limitations of point clouds such as sparsity and the lack of color information. To provide an alternative, we propose a novel approach that utilizes monocular RGB images and point clouds to generate pseudo scene flow labels for training scene flow networks. Our pseudo label generation module infers pseudo scene flow labels for point clouds by jointly leveraging the rich appearance information in monocular images and the geometric information of point clouds. To further reduce the negative effect of noisy pseudo labels on training, we propose a noisy-label-aware training scheme that exploits the geometric relations of points. Experimental results show that our method not only outperforms state-of-the-art self-supervised approaches, but also outperforms some supervised approaches that use accurate ground-truth flows.
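The abstract does not spell out the noisy-label-aware training scheme, so the following is only a minimal, hypothetical sketch (in PyTorch) of one way geometric relations between points could be used to down-weight noisy pseudo scene flow labels during training. The kNN-median heuristic, the functions neighbor_consistency_weights and weighted_flow_loss, and the parameters k and tau are illustrative assumptions, not the authors' actual method.

```python
# Hypothetical sketch: down-weight pseudo scene flow labels that disagree
# with the flow of their spatial neighbors, treating them as likely noisy.
# The kNN-consistency heuristic, k, and tau are assumptions for illustration,
# not the scheme described in the paper.
import torch

def neighbor_consistency_weights(points, pseudo_flow, k=8, tau=0.1):
    """Return per-point weights in (0, 1]; labels that deviate from their
    local neighborhood are down-weighted.

    points:      (N, 3) xyz coordinates in the first frame
    pseudo_flow: (N, 3) pseudo scene flow labels for those points
    """
    # Pairwise distances and k nearest neighbors (excluding the point itself).
    dists = torch.cdist(points, points)                         # (N, N)
    knn_idx = dists.topk(k + 1, largest=False).indices[:, 1:]   # (N, k)

    # Median flow of each point's neighborhood as a local reference.
    neighbor_flow = pseudo_flow[knn_idx]                         # (N, k, 3)
    local_ref = neighbor_flow.median(dim=1).values               # (N, 3)

    # Residual between a point's own label and the local reference.
    residual = (pseudo_flow - local_ref).norm(dim=1)             # (N,)

    # Soft weight: near 1 for locally consistent labels, decaying for outliers.
    return torch.exp(-residual / tau)

def weighted_flow_loss(pred_flow, pseudo_flow, weights):
    """Per-point L2 loss against the pseudo labels, scaled by noise-aware weights."""
    per_point = (pred_flow - pseudo_flow).norm(dim=1)
    return (weights * per_point).mean()
```

As a usage sketch, the weights would be computed once per training pair from the first-frame points and their pseudo labels, then applied to the network's predicted flow in weighted_flow_loss before backpropagation.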