Paper Title
Causes of Catastrophic Forgetting in Class-Incremental Semantic Segmentation
Paper Authors
Paper Abstract
Class-incremental learning for semantic segmentation (CiSS) is a highly active research field which aims at updating a semantic segmentation model by sequentially learning new semantic classes. A major challenge in CiSS is overcoming the effects of catastrophic forgetting, which describes the sudden drop in accuracy on previously learned classes after the model is trained on a new set of classes. Despite recent advances in mitigating catastrophic forgetting, the underlying causes of forgetting specifically in CiSS are not yet well understood. Therefore, in a set of experiments and representational analyses, we demonstrate that the semantic shift of the background class and a bias towards new classes are the major causes of forgetting in CiSS. Furthermore, we show that both causes mostly manifest themselves in the deeper classification layers of the network, while the early layers of the model are not affected. Finally, we demonstrate how both causes are effectively mitigated by utilizing the information contained in the background, with the help of knowledge distillation and an unbiased cross-entropy loss.
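The unbiased cross-entropy loss mentioned above addresses the semantic shift of the background class: in a new training step, pixels of previously learned classes are labelled as background, so scoring background pixels only against the background logit penalizes old-class predictions. A minimal NumPy sketch of this idea (the function name, signature, and class indexing are illustrative assumptions, not the authors' code) lets background-labelled pixels be explained by background *or* any old class:

```python
import numpy as np

def unbiased_cross_entropy(logits, labels, old_classes):
    """Illustrative sketch of an unbiased cross-entropy for CiSS.

    Pixels labelled as background (class 0) are scored against the summed
    probability of background plus all previously learned classes, since
    any old class may hide behind the background label in the new step.

    logits: (N, C) array of per-pixel class scores
    labels: (N,) int array of ground-truth labels for the current step
    old_classes: indices of classes learned in previous steps
    """
    # numerically stable softmax over the class dimension
    z = logits - logits.max(axis=1, keepdims=True)
    p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)

    absorbed = [0] + list(old_classes)  # background ∪ old classes
    losses = np.empty(len(labels))
    for i, c in enumerate(labels):
        if c == 0:
            # background pixel: do not punish probability mass on old classes
            losses[i] = -np.log(p[i, absorbed].sum())
        else:
            # pixel of a newly introduced class: standard cross-entropy
            losses[i] = -np.log(p[i, c])
    return losses.mean()
```

With this formulation, a background-labelled pixel that the model confidently assigns to an old class incurs almost no loss, which is exactly the behaviour that counteracts the background shift described in the abstract.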