Paper Title

Incremental Prototype Tuning for Class Incremental Learning

Authors

Jieren Deng, Jianhua Hu, Haojian Zhang, Yunkuan Wang

Abstract

Class incremental learning (CIL) has attracted much attention, but most existing related works focus on fine-tuning the entire representation model, which inevitably results in severe catastrophic forgetting. In contrast, with a semantically rich pre-trained representation model, parameter-additional-tuning (PAT) changes only a few parameters to learn new visual concepts. Recent studies have shown that PAT-based CIL can naturally avoid fighting forgetting through replay or distillation, as most existing methods do. However, we find that PAT-based CIL still suffers from serious semantic drift, a high-level forgetting problem caused by classifier learning bias across different learning phases, which significantly reduces the performance of PAT-based CIL. To address this problem, we propose Incremental Prototype Tuning (IPT), a simple but effective method that tunes category prototypes for classification and learns example prototypes to compensate for semantic drift. Extensive experiments demonstrate that our method can effectively compensate for semantic drift. Combined with well-pre-trained ViT backbones and other PAT methods, IPT surpasses state-of-the-art baselines on mainstream incremental learning benchmarks.
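To make the idea of prototype tuning concrete, below is a minimal, hypothetical PyTorch sketch of a prototype-based classifier on top of a frozen pre-trained backbone, where only the per-class prototype vectors are learnable at each incremental phase. The class name `PrototypeClassifier`, the cosine-similarity scoring, and the logit scale are illustrative assumptions, not the authors' released implementation of IPT.

```python
# Hypothetical sketch of prototype-based incremental classification.
# NOT the authors' code: names, shapes, and the cosine-similarity
# classifier are assumptions made for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PrototypeClassifier(nn.Module):
    """Scores frozen-backbone features by cosine similarity to learnable
    per-class prototypes; each incremental phase appends new prototypes."""

    def __init__(self, feat_dim: int):
        super().__init__()
        self.feat_dim = feat_dim
        self.prototypes = nn.ParameterList()  # one prototype per seen class

    def add_classes(self, num_new: int):
        # At each incremental phase, add (and then tune) prototypes
        # for the newly introduced classes; old ones stay in place.
        for _ in range(num_new):
            self.prototypes.append(nn.Parameter(torch.randn(self.feat_dim)))

    def forward(self, feats: torch.Tensor, scale: float = 10.0) -> torch.Tensor:
        # Stack prototypes into a (num_classes, feat_dim) matrix and score
        # each feature by scaled cosine similarity.
        protos = torch.stack(list(self.prototypes))   # (C, D)
        feats = F.normalize(feats, dim=-1)            # (B, D)
        protos = F.normalize(protos, dim=-1)          # (C, D)
        return scale * feats @ protos.t()             # (B, C) logits

# Usage: features come from a frozen, pre-trained backbone (e.g. a ViT);
# only the small set of prototype parameters is updated per phase.
clf = PrototypeClassifier(feat_dim=768)
clf.add_classes(10)                    # phase 0: first 10 classes
logits = clf(torch.randn(4, 768))      # shape (4, 10)
clf.add_classes(10)                    # phase 1: 10 more classes
logits = clf(torch.randn(4, 768))      # shape (4, 20)
```

Because the backbone is frozen and only prototypes are tuned, the number of trainable parameters stays tiny, which is the property the abstract attributes to PAT-style methods.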
