Attempts to train a comprehensive artificial intelligence capable of solving multiple tasks have been impeded by a chronic problem called catastrophic forgetting. Although simply replaying all previous data alleviates the problem, it requires large memory and, worse, is often infeasible in real-world applications where access to past data is limited. Inspired by the generative nature of the hippocampus as a short-term memory system in the primate brain, we propose Deep Generative Replay, a novel framework with a cooperative dual-model architecture consisting of a deep generative model ("generator") and a task-solving model ("solver"). With only these two models, training data for previous tasks can easily be sampled and interleaved with those for a new task. We test our methods in several sequential learning settings involving image classification tasks.
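The interleaving step described above can be sketched in a minimal, framework-agnostic form. This is an illustrative assumption about the mechanics, not the paper's implementation: `generator`, `old_solver`, `replay_ratio`, and `make_training_batch` are hypothetical names, and the generator/solver are stand-in callables rather than trained networks.

```python
import random

def make_training_batch(new_data, generator, old_solver, replay_ratio,
                        batch_size, rng=random):
    """Build one mixed batch: real new-task samples plus generated replay samples.

    generator(n)   -> n synthetic inputs meant to resemble past tasks' data
                      (a stand-in for the deep generative model).
    old_solver(x)  -> a pseudo-label for a generated input; the previous
                      solver supplies targets for replayed samples.
    replay_ratio   -> fraction of the batch drawn from replay (hypothetical knob).
    """
    n_replay = int(batch_size * replay_ratio)
    n_new = batch_size - n_replay
    # Replayed pairs: generated inputs labeled by the previous solver.
    replay_pairs = [(x, old_solver(x)) for x in generator(n_replay)]
    # Real pairs drawn from the current task's data.
    new_pairs = rng.sample(new_data, n_new)
    batch = replay_pairs + new_pairs
    rng.shuffle(batch)  # interleave replayed and new samples
    return batch
```

Under this sketch, the current solver would be trained on such mixed batches, so it fits the new task while the replayed pairs keep it anchored to earlier tasks without storing any past data.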