Paper Title

GAN Memory with No Forgetting

Paper Authors

Yulai Cong, Miaoyun Zhao, Jianqiao Li, Sijia Wang, Lawrence Carin

Paper Abstract

As a fundamental issue in lifelong learning, catastrophic forgetting is directly caused by inaccessible historical data; accordingly, if the data (information) were memorized perfectly, no forgetting should be expected. Motivated by that, we propose a GAN memory for lifelong learning, which is capable of remembering a stream of datasets via generative processes, with \emph{no} forgetting. Our GAN memory is based on recognizing that one can modulate the "style" of a GAN model to form perceptually-distant targeted generation. Accordingly, we propose to do sequential style modulations atop a well-behaved base GAN model, to form sequential targeted generative models, while simultaneously benefiting from the transferred base knowledge. The GAN memory -- that is motivated by lifelong learning -- is therefore itself manifested by a form of lifelong learning, via forward transfer and modulation of information from prior tasks. Experiments demonstrate the superiority of our method over existing approaches and its effectiveness in alleviating catastrophic forgetting for lifelong classification problems. Code is available at https://github.com/MiaoyunZhao/GANmemory_LifelongLearning.
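The mechanism the abstract describes is per-task style modulation on top of a frozen, well-behaved base GAN. Below is a minimal, hypothetical sketch of that idea in PyTorch: the shared base knowledge lives in frozen convolutional weights, and each task learns only cheap scale/shift ("style") parameters. The names here (StyleModulatedConv2d, task_id, num_tasks) are illustrative assumptions, not the authors' actual API; see the linked repository for the real implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class StyleModulatedConv2d(nn.Module):
    """Frozen base convolution whose weights are re-styled per task.

    The base kernel is shared and frozen; each task t learns a small
    scale gamma_t and shift beta_t that restyle the kernel, so training
    a new task never overwrites parameters used by earlier tasks.
    """

    def __init__(self, base_conv: nn.Conv2d, num_tasks: int):
        super().__init__()
        self.base = base_conv
        for p in self.base.parameters():  # freeze the transferred base knowledge
            p.requires_grad_(False)
        out_c, in_c, _, _ = base_conv.weight.shape
        # One (gamma, beta) pair per task, initialised to the identity modulation.
        self.gamma = nn.Parameter(torch.ones(num_tasks, out_c, in_c, 1, 1))
        self.beta = nn.Parameter(torch.zeros(num_tasks, out_c, in_c, 1, 1))

    def forward(self, x: torch.Tensor, task_id: int) -> torch.Tensor:
        # Restyle the frozen kernel for the requested task, then convolve.
        w = self.gamma[task_id] * self.base.weight + self.beta[task_id]
        return F.conv2d(x, w, self.base.bias,
                        stride=self.base.stride, padding=self.base.padding)


# Usage sketch: wrap a layer of a pretrained generator and train only the
# task-specific style parameters on each new dataset in the stream.
layer = StyleModulatedConv2d(nn.Conv2d(64, 64, 3, padding=1), num_tasks=5)
x = torch.randn(2, 64, 32, 32)
y_task0 = layer(x, task_id=0)  # generation "remembered" for task 0
y_task3 = layer(x, task_id=3)  # a later task reuses the same frozen base
```

Because the (gamma, beta) pair of an earlier task is never updated when a new task is trained, replaying task t amounts to selecting its stored modulation, which is the sense in which the generative memory exhibits no forgetting.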
