Paper Title

Autoencoders as Cross-Modal Teachers: Can Pretrained 2D Image Transformers Help 3D Representation Learning?

Paper Authors

Runpei Dong, Zekun Qi, Linfeng Zhang, Junbo Zhang, Jianjian Sun, Zheng Ge, Li Yi, Kaisheng Ma

Paper Abstract

The success of deep learning heavily relies on large-scale data with comprehensive labels, which is more expensive and time-consuming to fetch in 3D compared to 2D images or natural languages. This promotes the potential of utilizing models pretrained with data more than 3D as teachers for cross-modal knowledge transferring. In this paper, we revisit masked modeling in a unified fashion of knowledge distillation, and we show that foundational Transformers pretrained with 2D images or natural languages can help self-supervised 3D representation learning through training Autoencoders as Cross-Modal Teachers (ACT). The pretrained Transformers are transferred as cross-modal 3D teachers using discrete variational autoencoding self-supervision, during which the Transformers are frozen with prompt tuning for better knowledge inheritance. The latent features encoded by the 3D teachers are used as the target of masked point modeling, wherein the dark knowledge is distilled to the 3D Transformer students as foundational geometry understanding. Our ACT pretrained 3D learner achieves state-of-the-art generalization capacity across various downstream benchmarks, e.g., 88.21% overall accuracy on ScanObjectNN. Codes have been released at https://github.com/RunpeiDong/ACT.
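
To make the pretraining stage described above more concrete, below is a minimal PyTorch-style sketch of masked point modeling with a frozen cross-modal teacher providing latent-feature targets. All module names (`PointTokenizer`-style tokenizer, `student`, `teacher`), the loss choice, and the mask ratio are illustrative assumptions for readability, not the authors' released implementation.

```python
# Hedged sketch: masked point modeling distilled from a frozen cross-modal 3D teacher.
# Module interfaces and hyper-parameters are assumptions, not the official ACT code.

import torch
import torch.nn as nn
import torch.nn.functional as F


class MaskedPointDistillation(nn.Module):
    """Distill latent features of a frozen teacher into a 3D Transformer student
    by predicting the teacher's features at masked point patches."""

    def __init__(self, tokenizer, student, teacher, mask_ratio=0.6):
        super().__init__()
        self.tokenizer = tokenizer    # groups a point cloud into local patch embeddings
        self.student = student        # 3D Transformer being pretrained
        self.teacher = teacher        # pretrained 2D/language Transformer transferred as a 3D teacher
        for p in self.teacher.parameters():
            p.requires_grad = False   # teacher stays frozen; only prompts are tuned upstream
        self.mask_ratio = mask_ratio

    def forward(self, points):
        tokens = self.tokenizer(points)                      # (B, N, C) patch embeddings
        B, N, _ = tokens.shape

        # Randomly mask a fixed ratio of point patches, independently per sample.
        num_mask = int(self.mask_ratio * N)
        noise = torch.rand(B, N, device=tokens.device)
        mask_idx = noise.argsort(dim=1)[:, :num_mask]
        mask = torch.zeros(B, N, dtype=torch.bool, device=tokens.device)
        mask.scatter_(1, mask_idx, True)

        # Teacher encodes the full point cloud into latent targets (no gradients).
        with torch.no_grad():
            targets = self.teacher(tokens)                   # (B, N, D) latent features

        # Student predicts the teacher's features at the masked positions.
        pred = self.student(tokens, mask)                    # (B, N, D)
        return F.smooth_l1_loss(pred[mask], targets[mask])
```

In the full pipeline the abstract describes, the teacher itself is first obtained by transferring a pretrained 2D image or language Transformer with discrete variational autoencoding self-supervision and prompt tuning; the sketch abstracts that transfer step away and only illustrates the subsequent masked-point distillation objective.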
