Paper Title

Continual Learning Using Multi-view Task Conditional Neural Networks

Paper Authors

Honglin Li, Payam Barnaghi, Shirin Enshaeifar, Frieder Ganz

Paper Abstract

Conventional deep learning models have limited capacity for learning multiple tasks sequentially. The issue of forgetting previously learned tasks in continual learning is known as catastrophic forgetting or interference. When the input data or the goal of learning changes, a continual model will learn and adapt to the new state. However, the model will not remember or recognise any revisits to previous states. This causes reduced performance and repeated re-training when dealing with periodic or irregularly reoccurring changes in the data or goals. The changes in goals or data are referred to as new tasks in a continual learning model. Most continual learning methods assume a task-known setup in which task identities are known to the learning model in advance. We propose Multi-view Task Conditional Neural Networks (Mv-TCNN), which do not require the reoccurring tasks to be known in advance. We evaluate our model on the standard MNIST, CIFAR10, and CIFAR100 benchmarks, and on a real-world dataset that we collected in a remote healthcare monitoring study (the TIHM dataset). The proposed model outperforms state-of-the-art solutions in continually learning and adapting to new tasks that are not defined in advance.
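
To make the task-unknown setting concrete, below is a minimal, hypothetical PyTorch sketch of a generic task-conditional network: a shared trunk with one output head per task, where at test time the task identity is inferred by selecting the head with the lowest predictive entropy. The layer sizes, the `TaskConditionalNet` class, and the entropy-based inference heuristic are all illustrative assumptions, not the authors' Mv-TCNN implementation.

```python
# A minimal sketch of a task-conditional network (NOT the authors'
# Mv-TCNN): a shared trunk plus one head per task. When the task
# identity is unknown at test time, it is inferred by picking the
# head whose prediction is most confident (lowest entropy). This
# heuristic is an assumption made for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TaskConditionalNet(nn.Module):
    def __init__(self, in_dim, hidden_dim, classes_per_task):
        super().__init__()
        # Shared feature extractor reused across all tasks.
        self.trunk = nn.Sequential(
            nn.Linear(in_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
        )
        # One classification head per task, grown as tasks arrive.
        self.heads = nn.ModuleList()
        self.hidden_dim = hidden_dim
        self.classes_per_task = classes_per_task

    def add_task(self):
        """Attach a fresh head when a new task is encountered."""
        self.heads.append(nn.Linear(self.hidden_dim, self.classes_per_task))

    def forward(self, x, task_id=None):
        h = self.trunk(x)
        if task_id is not None:
            # Task-known setting: route through the given head.
            return self.heads[task_id](h)
        # Task-unknown setting: score every head and pick the one
        # with the lowest average predictive entropy for this batch.
        logits = torch.stack([head(h) for head in self.heads])  # (T, B, C)
        probs = F.softmax(logits, dim=-1)
        entropy = -(probs * probs.clamp_min(1e-9).log()).sum(-1)  # (T, B)
        best = entropy.mean(dim=1).argmin()
        return logits[best]

# Usage: two toy tasks, inference with and without the task identity.
net = TaskConditionalNet(in_dim=8, hidden_dim=32, classes_per_task=2)
net.add_task()
net.add_task()
x = torch.randn(4, 8)
print(net(x, task_id=0).shape)  # task-known:   torch.Size([4, 2])
print(net(x).shape)             # task-unknown: torch.Size([4, 2])
```

In a continual learning loop, `add_task()` would be called whenever a change in the data or goal is detected, and only the new head (and optionally the trunk, with some forgetting safeguard) would be trained on the new task.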
