Paper Title

Class-incremental Novel Class Discovery

Authors

Subhankar Roy, Mingxuan Liu, Zhun Zhong, Nicu Sebe, Elisa Ricci

Abstract

We study the new task of class-incremental Novel Class Discovery (class-iNCD), which refers to the problem of discovering novel categories in an unlabelled dataset by leveraging a pre-trained model that has been trained on a labelled dataset containing disjoint yet related categories. Apart from discovering novel classes, we also aim at preserving the ability of the model to recognize previously seen base categories. Inspired by rehearsal-based incremental learning methods, in this paper we propose a novel approach for class-iNCD that prevents forgetting of past information about the base classes by jointly exploiting base-class feature prototypes and feature-level knowledge distillation. We also propose a self-training clustering strategy that simultaneously clusters novel categories and trains a joint classifier for both the base and novel classes. This makes our method able to operate in a class-incremental setting. Our experiments, conducted on three common benchmarks, demonstrate that our method significantly outperforms state-of-the-art approaches. Code is available at https://github.com/OatmealLiu/class-iNCD.
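The abstract names three ingredients: feature-level knowledge distillation against the pre-trained base model, replay of stored base-class feature prototypes through a joint (base + novel) classifier, and self-training on the unlabelled novel data. The minimal PyTorch sketch below illustrates how such losses might look; the function names, tensor shapes, use of MSE for the feature distillation, and the simple argmax pseudo-labelling are assumptions for illustration, not the paper's actual implementation (see the linked repository for that).

```python
import torch
import torch.nn.functional as F

# Hypothetical sketch of the three loss terms suggested by the abstract.
# Shapes assumed: features are (batch, feat_dim); joint_head maps features
# to (batch, num_base + num_novel) logits, base classes first.

def feature_kd_loss(student_feats, teacher_feats):
    """Feature-level knowledge distillation: keep the current encoder's
    features close to those of the frozen pre-trained (base) encoder."""
    return F.mse_loss(student_feats, teacher_feats.detach())

def prototype_replay_loss(joint_head, base_prototypes, base_labels):
    """Replay stored base-class feature prototypes (one vector per base
    class) through the joint classifier so base classes are not forgotten."""
    logits = joint_head(base_prototypes)  # (num_base, num_base + num_novel)
    return F.cross_entropy(logits, base_labels)

def self_training_loss(joint_head, novel_feats, num_base):
    """Self-training on unlabelled novel data: use the argmax over the novel
    part of the joint head as a pseudo-label and train against it."""
    logits = joint_head(novel_feats)
    pseudo = logits[:, num_base:].argmax(dim=1) + num_base  # indices in the novel range
    return F.cross_entropy(logits, pseudo)
```

In such a setup the total objective would typically be a weighted sum of the three terms, with the teacher encoder kept frozen and the prototypes computed once from the labelled base data before the novel-class stage begins.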
