Paper Title

Unsupervised Point Cloud Pre-Training via Occlusion Completion

Paper Authors

Hanchen Wang, Qi Liu, Xiangyu Yue, Joan Lasenby, Matthew J. Kusner

Paper Abstract

We describe a simple pre-training approach for point clouds. It works in three steps: 1. Mask all points occluded in a camera view; 2. Learn an encoder-decoder model to reconstruct the occluded points; 3. Use the encoder weights as initialisation for downstream point cloud tasks. We find that even when we construct a single pre-training dataset (from ModelNet40), this pre-training method improves accuracy across different datasets and encoders, on a wide range of downstream tasks. Specifically, we show that our method outperforms previous pre-training methods in object classification, and both part-based and semantic segmentation tasks. We study the pre-trained features and find that they lead to wide downstream minima, have high transformation invariance, and have activations that are highly correlated with part labels. Code and data are available at: https://github.com/hansen7/OcCo
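
To make the three steps concrete, below is a minimal, self-contained sketch of an OcCo-style pipeline in PyTorch. The simple z-buffer occlusion, the toy PointNet-style encoder-decoder, the Chamfer objective, and all layer sizes are illustrative assumptions chosen for brevity, not the authors' implementation; the actual models and data pipeline are in the linked repository.

```python
# Minimal sketch of the three-step pipeline described in the abstract.
# Everything here (the toy z-buffer occlusion, network sizes, the random data
# standing in for ModelNet40 shapes) is an illustrative assumption, not the
# authors' implementation; see the linked repository for that.
import torch
import torch.nn as nn


def occlude(points, grid=32):
    """Step 1: keep only the points visible from a fixed camera looking along +z.

    A crude z-buffer: project onto an (x, y) grid and keep the nearest point in
    each cell; every point behind it is treated as occluded and dropped.
    """
    xy = points[:, :2]
    lo, hi = xy.min(dim=0).values, xy.max(dim=0).values
    cell = ((xy - lo) / (hi - lo + 1e-8) * (grid - 1)).long()
    key = cell[:, 0] * grid + cell[:, 1]
    order = torch.argsort(points[:, 2])         # nearest (smallest z) first
    key, points = key[order], points[order]
    seen = torch.zeros(grid * grid, dtype=torch.bool)
    visible = []
    for i in range(points.shape[0]):            # first hit per cell is visible
        if not seen[key[i]]:
            seen[key[i]] = True
            visible.append(i)
    return points[visible]


class Encoder(nn.Module):
    """Toy PointNet-style encoder: shared per-point MLP followed by max-pooling."""
    def __init__(self, feat=256):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, feat))

    def forward(self, pts):                     # pts: (N, 3)
        return self.mlp(pts).max(dim=0).values  # global feature: (feat,)


class Decoder(nn.Module):
    """Maps the global feature back to a fixed-size completed point set."""
    def __init__(self, feat=256, n_out=1024):
        super().__init__()
        self.n_out = n_out
        self.mlp = nn.Sequential(nn.Linear(feat, 512), nn.ReLU(),
                                 nn.Linear(512, n_out * 3))

    def forward(self, z):
        return self.mlp(z).view(self.n_out, 3)


def chamfer(a, b):
    """Symmetric Chamfer distance between point sets a: (N, 3) and b: (M, 3)."""
    d = torch.cdist(a, b)
    return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()


if __name__ == "__main__":
    torch.manual_seed(0)
    enc, dec = Encoder(), Decoder()
    opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)

    # Step 2: pre-train the encoder-decoder to reconstruct the full cloud from
    # its occluded view. A random cloud stands in for a ModelNet40 shape.
    for step in range(100):
        full = torch.rand(1024, 3)
        partial = occlude(full)
        loss = chamfer(dec(enc(partial)), full)
        opt.zero_grad(); loss.backward(); opt.step()

    # Step 3: discard the decoder and reuse the encoder weights to initialise a
    # downstream model (here, a hypothetical 40-way classification head).
    downstream_encoder = Encoder()
    downstream_encoder.load_state_dict(enc.state_dict())
    head = nn.Linear(256, 40)
    logits = head(downstream_encoder(torch.rand(1024, 3)))
    print(logits.shape)                         # torch.Size([40])
```

The essential point is step 3: only the encoder survives pre-training. The decoder exists solely to define the completion objective, and its weights are thrown away before fine-tuning on the downstream task.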
