Title


Towards Exemplar-Free Continual Learning in Vision Transformers: an Account of Attention, Functional and Weight Regularization

Authors

Pelosin, Francesco, Jha, Saurav, Torsello, Andrea, Raducanu, Bogdan, van de Weijer, Joost

Abstract


In this paper, we investigate the continual learning of Vision Transformers (ViT) for the challenging exemplar-free scenario, with special focus on how to efficiently distill the knowledge of its crucial self-attention mechanism (SAM). Our work takes an initial step towards a surgical investigation of SAM for designing coherent continual learning methods in ViTs. We first carry out an evaluation of established continual learning regularization techniques. We then examine the effect of regularization when applied to two key enablers of SAM: (a) the contextualized embedding layers, for their ability to capture well-scaled representations with respect to the values, and (b) the prescaled attention maps, for carrying value-independent global contextual information. We depict the perks of each distilling strategy on two image recognition benchmarks (CIFAR100 and ImageNet-32) -- while (a) leads to a better overall accuracy, (b) helps enhance the rigidity by maintaining competitive performances. Furthermore, we identify the limitation imposed by the symmetric nature of regularization losses. To alleviate this, we propose an asymmetric variant and apply it to the pooled output distillation (POD) loss adapted for ViTs. Our experiments confirm that introducing asymmetry to POD boosts its plasticity while retaining stability across (a) and (b). Moreover, we acknowledge low forgetting measures for all the compared methods, indicating that ViTs might be naturally inclined continual learners.
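To make the distillation idea in the abstract concrete, below is a minimal PyTorch sketch of a POD-style pooled output distillation loss adapted to ViT token features, with an optional asymmetric variant. The function name `pod_vit_loss`, the choice of pooling axes, and the direction of the asymmetry (penalizing only drops relative to the frozen model's pooled features) are illustrative assumptions, not the authors' exact formulation.

```python
import torch
import torch.nn.functional as F

def pod_vit_loss(feats_new, feats_old, asymmetric=True):
    """POD-style distillation between intermediate ViT features.

    feats_new / feats_old: lists of tensors of shape (batch, tokens, dim),
    taken from corresponding blocks of the current model and the frozen
    previous-task model (e.g. contextualized embeddings or attention maps
    flattened to a token-by-feature layout).
    """
    loss = 0.0
    for f_new, f_old in zip(feats_new, feats_old):
        # Pool along the token axis and along the embedding axis, mirroring
        # POD's width/height pooling of CNN feature maps.
        p_new = torch.cat([f_new.mean(dim=1), f_new.mean(dim=2)], dim=-1)
        p_old = torch.cat([f_old.mean(dim=1), f_old.mean(dim=2)], dim=-1)
        p_new = F.normalize(p_new, dim=-1)
        p_old = F.normalize(p_old, dim=-1)
        diff = p_old - p_new
        if asymmetric:
            # Hypothetical asymmetry: only penalize where the new model's
            # pooled activations fall below the old ones, leaving the other
            # direction unconstrained to preserve plasticity.
            diff = torch.relu(diff)
        loss = loss + diff.pow(2).sum(dim=-1).mean()
    return loss / max(len(feats_new), 1)
```

In a training loop, this term would be added (with a weighting coefficient) to the cross-entropy loss on the current task, with `feats_old` computed under `torch.no_grad()` from the frozen copy of the model.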
