Paper Title
Token-Label Alignment for Vision Transformers
Paper Authors
Paper Abstract
Data mixing strategies (e.g., CutMix) have been shown to greatly improve the performance of convolutional neural networks (CNNs). They mix two images as the input for training and assign them a label mixed in the same ratio. While these strategies are also effective for vision transformers (ViTs), we identify a token fluctuation phenomenon that suppresses their potential. We empirically observe that the contributions of input tokens fluctuate during forward propagation, which can induce a different mixing ratio in the output tokens. The training target computed by the original data mixing strategy can thus be inaccurate, resulting in less effective training. To address this, we propose a token-label alignment (TL-Align) method that traces the correspondence between transformed tokens and the original tokens to maintain a label for each token. We reuse the attention computed at each layer for efficient token-label alignment, introducing only negligible additional training cost. Extensive experiments demonstrate that our method improves the performance of ViTs on image classification, semantic segmentation, object detection, and transfer learning tasks. Code is available at: https://github.com/Euphoria16/TL-Align.
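The core mechanism the abstract describes, assigning each token a label at mixing time and then propagating those labels through each layer's attention so they stay consistent with how the tokens themselves are mixed, can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: the function names, the head-averaged attention matrix, and the residual weight alpha are assumptions made for exposition.

```python
import torch

def cutmix_token_labels(labels_a, labels_b, mask):
    """Assign each patch token the label of the image it came from.

    labels_a, labels_b: one-hot class vectors, shape (C,)
    mask: boolean per-token mask, shape (N,), True where the token
          is taken from image B (the pasted CutMix region).
    Returns per-token labels of shape (N, C).
    """
    n = mask.shape[0]
    token_labels = labels_a.expand(n, -1).clone()
    token_labels[mask] = labels_b
    return token_labels

def align_labels(token_labels, attn, alpha=0.5):
    """Propagate per-token labels through one attention layer.

    token_labels: (N, C) current label of each token
    attn: (N, N) attention weights averaged over heads
          (rows sum to 1, i.e., softmax output)
    alpha: weight of the residual (identity) path; a hypothetical
           hyperparameter for this sketch, not from the paper.
    The labels are mixed by the same attention weights that mix the
    tokens, mirroring the residual token update x' = x + Attn(x).
    """
    mixed = attn @ token_labels  # labels mix as the tokens do
    return alpha * token_labels + (1 - alpha) * mixed
```

Applying align_labels at every layer, reusing the attention maps the forward pass already computes, yields per-token labels at the output; one plausible way to form the final training target, again an assumption of this sketch, is to take the aligned label of the classification token (or the token-averaged label) in place of the fixed global mixing ratio.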