Paper Title
Continual Learning Based on OOD Detection and Task Masking
Paper Authors
Paper Abstract
Existing continual learning techniques focus on either the task incremental learning (TIL) or the class incremental learning (CIL) problem, but not both. CIL and TIL differ mainly in that in TIL the task-id is provided with each test sample at test time, whereas in CIL it is not. Continual learning methods intended for one problem have limitations on the other. This paper proposes a novel unified approach based on out-of-distribution (OOD) detection and task masking, called CLOM, to solve both problems. The key novelty is that each task is trained as an OOD detection model rather than a traditional supervised learning model, and a task mask is trained to protect the parameters of each task from forgetting. Our evaluation shows that CLOM outperforms existing state-of-the-art baselines by large margins. The average TIL/CIL accuracy of CLOM over six experiments is 87.6/67.9%, while that of the best baseline is only 82.4/55.0%.