Paper Title
Meta-Learning with Sparse Experience Replay for Lifelong Language Learning
Paper Authors
Paper Abstract
Lifelong learning requires models that can continuously learn from sequential streams of data without suffering catastrophic forgetting due to shifts in data distributions. Deep learning models have thrived in the non-sequential learning paradigm; however, when used to learn a sequence of tasks, they fail to retain past knowledge and learn incrementally. We propose a novel approach to lifelong learning of language tasks based on meta-learning with sparse experience replay that directly optimizes to prevent forgetting. We show that under the realistic setting of performing a single pass on a stream of tasks and without any task identifiers, our method obtains state-of-the-art results on lifelong text classification and relation extraction. We analyze the effectiveness of our approach and further demonstrate its low computational and space complexity.
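The two ingredients the abstract names, a single pass over a task stream and sparse experience replay, can be illustrated with a minimal sketch. This is not the paper's implementation; it only shows the general pattern: a fixed-size episodic memory filled by reservoir sampling (so no task identifiers are needed), with replay triggered only every `replay_interval` steps. The class and function names, the interval, and the batch sizes are all illustrative assumptions.

```python
import random


class EpisodicMemory:
    """Fixed-size buffer filled via reservoir sampling over the stream,
    so each seen example is retained with equal probability."""

    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.buffer = []
        self.seen = 0
        self.rng = random.Random(seed)

    def write(self, example):
        self.seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append(example)
        else:
            # Overwrite a random slot with probability capacity / seen.
            idx = self.rng.randrange(self.seen)
            if idx < self.capacity:
                self.buffer[idx] = example

    def sample(self, k):
        return self.rng.sample(self.buffer, min(k, len(self.buffer)))


def lifelong_training_step(step, example, memory,
                           replay_interval=100, replay_batch=8):
    """One step of a single pass over the stream: train on the incoming
    example and, only sparsely, mix in a small batch replayed from memory."""
    batch = [example]
    # Replay is sparse: it fires once every `replay_interval` steps,
    # keeping the compute overhead low.
    if step % replay_interval == 0 and memory.buffer:
        batch += memory.sample(replay_batch)
    memory.write(example)
    return batch  # in a real system, this batch drives a (meta-)update
```

The sparsity of replay is what keeps computational cost close to plain sequential training, while the reservoir-sampled memory preserves an unbiased snapshot of the whole stream without needing to know task boundaries.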