Paper Title
OAMixer: Object-aware Mixing Layer for Vision Transformers
Paper Authors
Paper Abstract
Patch-based models, e.g., Vision Transformers (ViTs) and Mixers, have shown impressive results on various visual recognition tasks as alternatives to classic convolutional networks. While the initial patch-based models (ViTs) treated all patches equally, recent studies reveal that incorporating inductive biases such as spatiality benefits the representations. However, most prior works focused solely on the location of patches, overlooking the scene structure of images. Thus, we aim to further guide the interaction of patches using object information. Specifically, we propose OAMixer (object-aware mixing layer), which calibrates the patch mixing layers of patch-based models based on object labels. Here, we obtain the object labels in an unsupervised or weakly-supervised manner, i.e., without additional human annotation cost. Using the object labels, OAMixer computes a reweighting mask with a learnable scale parameter that intensifies the interaction of patches containing similar objects, and applies the mask to the patch mixing layers. By learning object-centric representations, we demonstrate that OAMixer improves the classification accuracy and background robustness of various patch-based models, including ViTs, MLP-Mixers, and ConvMixers. Moreover, we show that OAMixer enhances various downstream tasks, including large-scale classification, self-supervised learning, and multi-object recognition, verifying the generic applicability of OAMixer.
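To make the described mechanism concrete, below is a minimal PyTorch sketch of an object-aware reweighting mask applied to an attention-style patch mixing matrix. The class name ObjectAwareReweighting, the soft object labels given as a (batch, patches, objects) tensor, and the cosine-similarity-with-exponential mask form are illustrative assumptions of ours, not necessarily the paper's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ObjectAwareReweighting(nn.Module):
    """Sketch: reweight a patch mixing matrix using object labels.

    Pairs of patches with similar object labels get their mixing
    weights amplified, controlled by a learnable scale parameter.
    """
    def __init__(self):
        super().__init__()
        # Learnable scale; at 0 the mask is all ones (no reweighting).
        self.scale = nn.Parameter(torch.zeros(1))

    def forward(self, mixing, object_labels):
        # mixing: (B, H, N, N) attention / mixing weights
        # object_labels: (B, N, K) soft object assignments per patch
        labels = F.normalize(object_labels, dim=-1)
        # Pairwise object similarity between patches: (B, N, N)
        sim = labels @ labels.transpose(-2, -1)
        # Exponential reweighting mask, broadcast over heads.
        mask = torch.exp(self.scale * sim).unsqueeze(1)  # (B, 1, N, N)
        # Calibrate the mixing weights and renormalize each row.
        reweighted = mixing * mask
        return reweighted / reweighted.sum(dim=-1, keepdim=True)

# Hypothetical usage with random attention maps and object labels.
B, H, N, K = 2, 4, 49, 8
attn = torch.softmax(torch.randn(B, H, N, N), dim=-1)
labels = torch.softmax(torch.randn(B, N, K), dim=-1)
out = ObjectAwareReweighting()(attn, labels)  # (2, 4, 49, 49), rows sum to 1
```

Note that with the scale initialized to zero the mask is uniformly one, so the layer starts as an identity on the mixing matrix and training can gradually learn how strongly to enforce object-aware patch interactions.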