Paper Title
Contrastive Masked Autoencoders are Stronger Vision Learners
Paper Authors
Paper Abstract
Masked image modeling (MIM) has achieved promising results on various vision tasks. However, the limited discriminability of the learned representations suggests there is still ample room for building a stronger vision learner. Towards this goal, we propose Contrastive Masked Autoencoders (CMAE), a new self-supervised pre-training method for learning more comprehensive and capable vision representations. By carefully unifying contrastive learning (CL) and masked image modeling (MIM) through novel designs, CMAE leverages their respective advantages and learns representations with both strong instance discriminability and local perceptibility. Specifically, CMAE consists of two branches: the online branch is an asymmetric encoder-decoder, and the momentum branch is a momentum-updated encoder. During training, the online encoder reconstructs original images from the latent representations of masked images, learning holistic features. The momentum encoder, fed with the full images, enhances feature discriminability via contrastive learning with its online counterpart. To make CL compatible with MIM, CMAE introduces two new components: pixel shifting, for generating plausible positive views, and a feature decoder, for complementing the features of contrastive pairs. Thanks to these novel designs, CMAE effectively improves the representation quality and transfer performance over its MIM counterpart. CMAE achieves state-of-the-art performance on highly competitive benchmarks for image classification, semantic segmentation and object detection. Notably, CMAE-Base achieves $85.3\%$ top-1 accuracy on ImageNet and $52.5\%$ mIoU on ADE20k, surpassing previous best results by $0.7\%$ and $1.8\%$ respectively. The source code is publicly accessible at \url{https://github.com/ZhichengHuang/CMAE}.
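The mechanisms the abstract names for the momentum branch (a momentum-updated encoder, contrastive learning against the online encoder, and pixel-shifted positive views) can be sketched in plain Python. This is a minimal illustration of the general techniques, not the authors' implementation: the function names (`ema_update`, `info_nce`, `pixel_shift`), the momentum coefficient, the temperature, and the shift range are all hypothetical.

```python
import math
import random

def ema_update(online, momentum, m=0.996):
    """Momentum (EMA) update for the momentum-branch encoder:
    theta_m <- m * theta_m + (1 - m) * theta_o.
    Parameters are flat lists of floats here for illustration."""
    return [m * pm + (1.0 - m) * po for po, pm in zip(online, momentum)]

def _normalize(v):
    """L2-normalize one feature vector."""
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def info_nce(online_feats, momentum_feats, tau=0.07):
    """InfoNCE-style contrastive loss between online and momentum features.
    Row i of each list comes from the same image, so index i is the
    positive pair and all other rows serve as negatives."""
    q = [_normalize(v) for v in online_feats]
    k = [_normalize(v) for v in momentum_feats]
    loss = 0.0
    for i, qi in enumerate(q):
        logits = [sum(a * b for a, b in zip(qi, kj)) / tau for kj in k]
        mx = max(logits)  # subtract max for numerical stability
        log_z = mx + math.log(sum(math.exp(l - mx) for l in logits))
        loss += -(logits[i] - log_z)  # cross-entropy with positive at index i
    return loss / len(q)

def pixel_shift(img, max_shift=3, rng=None):
    """Sketch of pixel-shifted view generation: take a crop offset by a few
    pixels, so the positive view is spatially aligned with the original
    but not pixel-identical. `img` is a 2-D nested list."""
    rng = rng or random.Random(0)
    dy, dx = rng.randint(0, max_shift), rng.randint(0, max_shift)
    h, w = len(img), len(img[0])
    return [row[dx:w - max_shift + dx] for row in img[dy:h - max_shift + dy]]
```

In a training loop of this shape, the online encoder would be updated by backpropagation through both the reconstruction and contrastive losses, while `ema_update` refreshes the momentum encoder without gradients.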