Continual learning aims to learn a sequence of tasks from a dynamic data distribution. Without access to old training samples, the knowledge transfer from old tasks to each new task is difficult to determine, and it may be either positive or negative. If the old knowledge interferes with the learning of a new task, i.e., the forward knowledge transfer is negative, then precisely remembering the old tasks will further aggravate the interference and thus degrade continual learning performance. By contrast, biological neural networks can actively forget old knowledge that conflicts with the learning of new experiences, by regulating learning-triggered synaptic expansion and synaptic convergence. Inspired by biological active forgetting, we propose to actively forget the old knowledge that limits the learning of new tasks. Under the framework of Bayesian continual learning, we develop a novel method named Active Forgetting with synaptic Expansion-Convergence (AFEC). Our method dynamically expands parameters to learn each new task and then selectively combines them, which is formally consistent with the underlying mechanism of biological active forgetting. We extensively evaluate AFEC on a variety of continual learning benchmarks, including CIFAR-10 regression tasks, visual classification tasks, and Atari reinforcement learning tasks, where AFEC effectively improves the learning of new tasks and achieves state-of-the-art performance in a plug-and-play manner.
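A minimal sketch of how such an expansion-convergence objective might look, assuming (as a simplification) diagonal quadratic penalties in the style of Bayesian weight regularization. The tensors `theta_old`, `fisher_old`, `theta_exp`, `fisher_exp` and the weights `lam`, `lam_e` are illustrative placeholders, not the paper's exact formulation: one term pulls the parameters toward the old-task solution, the other toward temporarily expanded parameters fitted on the new task alone.

```python
import torch

def afec_style_loss(new_task_loss, params, theta_old, fisher_old,
                    theta_exp, fisher_exp, lam=1.0, lam_e=1.0):
    """Illustrative combination of (a) a penalty that remembers the old tasks and
    (b) a penalty toward expanded parameters learned on the new task only."""
    remember_old = sum((f * (p - t).pow(2)).sum()
                       for p, t, f in zip(params, theta_old, fisher_old))
    forget_toward_new = sum((f * (p - t).pow(2)).sum()
                            for p, t, f in zip(params, theta_exp, fisher_exp))
    return new_task_loss + 0.5 * lam * remember_old + 0.5 * lam_e * forget_toward_new
```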
Continual learning requires incremental compatibility with a sequence of tasks. However, the design of the model architecture remains an open question: in general, learning all tasks with a shared set of parameters suffers from severe interference between tasks, while learning each task with a dedicated parameter subspace is limited by scalability. In this work, we theoretically analyze the generalization errors of learning plasticity and memory stability in continual learning, which can be uniformly upper-bounded by (1) the discrepancy between task distributions, (2) the flatness of the loss landscape, and (3) the coverage of the parameter space. Then, inspired by robust biological learning systems that process sequential experiences with multiple parallel compartments, we propose Cooperation of Small Continual Learners (CoSCL) as a general strategy for continual learning. Specifically, we present an architecture with a fixed number of narrower sub-networks that learn all incremental tasks in parallel, which can naturally reduce both errors by improving the three components of the upper bound. To strengthen this advantage, we encourage these sub-networks to cooperate by penalizing the prediction discrepancy of their feature representations. With a fixed parameter budget, CoSCL improves a variety of representative continual learning approaches by large margins (e.g., up to 10.64% on CIFAR-100-SC, 9.33% on CIFAR-100-RS, and 6.72% on Tiny-ImageNet, as well as gains on CUB-200-2011) and achieves new state-of-the-art performance.
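A rough sketch of the parallel-sub-learner idea under a fixed parameter budget: several narrow sub-networks produce predictions that are averaged, and a cooperation term penalizes how much their predictions disagree. The widths, the squared-difference discrepancy penalty, and the module names are assumptions for illustration, not the paper's exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallLearnerEnsemble(nn.Module):
    """K narrow sub-networks trained in parallel on the same task stream."""
    def __init__(self, in_dim, num_classes, k=5, width=64):
        super().__init__()
        self.subnets = nn.ModuleList(
            nn.Sequential(nn.Linear(in_dim, width), nn.ReLU(),
                          nn.Linear(width, num_classes))
            for _ in range(k))

    def forward(self, x):
        logits = torch.stack([net(x) for net in self.subnets])   # (K, B, C)
        probs = logits.softmax(dim=-1)
        # Cooperation penalty: discourage sub-networks from disagreeing.
        discrepancy = ((probs - probs.mean(0, keepdim=True)) ** 2).mean()
        return logits.mean(0), discrepancy

# usage sketch: loss = F.cross_entropy(mean_logits, y) + beta * discrepancy
```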
Artificial neural networks thrive in solving the classification problem for a particular rigid task, acquiring knowledge through generalized learning behaviour from a distinct training phase. The resulting network resembles a static entity of knowledge, with endeavours to extend this knowledge without targeting the original task resulting in a catastrophic forgetting. Continual learning shifts this paradigm towards networks that can continually accumulate knowledge over different tasks without the need to retrain from scratch. We focus on task incremental classification, where tasks arrive sequentially and are delineated by clear boundaries. Our main contributions concern (1) a taxonomy and extensive overview of the state-of-the-art; (2) a novel framework to continually determine the stability-plasticity trade-off of the continual learner; (3) a comprehensive experimental comparison of 11 state-of-the-art continual learning methods and 4 baselines. We empirically scrutinize method strengths and weaknesses on three benchmarks, considering Tiny Imagenet and large-scale unbalanced iNaturalist and a sequence of recognition datasets. We study the influence of model capacity, weight decay and dropout regularization, and the order in which the tasks are presented, and qualitatively compare methods in terms of required memory, computation time and storage.
Continual learning tackles the setting of learning different tasks sequentially. Despite the many previous solutions, most of them still suffer from significant forgetting or expensive memory cost. In this work, driven by these problems, we first study the continual learning process through the lens of information theory and observe that forgetting of a previous task arises from the loss of information stored in the parameters while the new task is being learned. From this viewpoint, we propose a new continual learning approach named Bit-Level Information Preserving (BLIP), which preserves the information gain on model parameters by updating the parameters at the bit level, and which can be conveniently implemented with parameter quantization. More specifically, BLIP first trains a neural network with weight quantization on the new incoming task, and then estimates the information gain on each parameter provided by the task data to determine which bits to freeze to prevent forgetting. We conduct extensive experiments ranging from classification tasks to reinforcement learning tasks, and the results show that our method produces better results than previous state-of-the-art methods. Indeed, BLIP achieves close to zero forgetting while requiring only constant memory overhead throughout continual learning.
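A toy illustration of the bit-freezing mechanism with integer-quantized weights: once a number of high-order bits is declared frozen for a parameter, later updates may only change the remaining low-order bits. How many bits to freeze would, per the abstract, be driven by an information-gain estimate; here `frozen_bits` is simply given, so this is a sketch of the mechanism, not of BLIP's estimator.

```python
import numpy as np

def apply_bit_freeze(old_q, proposed_q, frozen_bits, total_bits=10):
    """Keep the top `frozen_bits` bits of each quantized weight fixed so an update
    can only change the remaining low-order bits (toy mechanism only)."""
    old_q = np.asarray(old_q, dtype=np.int64)
    proposed_q = np.asarray(proposed_q, dtype=np.int64)
    low_mask = (1 << (total_bits - frozen_bits)) - 1   # bits still free to change
    high_mask = ~low_mask
    return (old_q & high_mask) | (proposed_q & low_mask)

rng = np.random.default_rng(0)
old = rng.integers(0, 2 ** 10, size=5)
new = rng.integers(0, 2 ** 10, size=5)
print(apply_bit_freeze(old, new, frozen_bits=6))   # high 6 bits follow `old`
```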
We introduce a conceptually simple and scalable framework for continual learning domains where tasks are learned sequentially. Our method is constant in the number of parameters and is designed to preserve performance on previously encountered tasks while accelerating learning progress on subsequent problems. This is achieved by training a network with two components: A knowledge base, capable of solving previously encountered problems, which is connected to an active column that is employed to efficiently learn the current task. After learning a new task, the active column is distilled into the knowledge base, taking care to protect any previously acquired skills. This cycle of active learning (progression) followed by consolidation (compression) requires no architecture growth, no access to or storing of previous data or tasks, and no task-specific parameters. We demonstrate the progress & compress approach on sequential classification of handwritten alphabets as well as two reinforcement learning domains: Atari games and 3D maze navigation.
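A condensed sketch of what the compress (consolidation) step could look like: the knowledge base is distilled toward the active column's predictions while a quadratic penalty protects weights that matter for previously consolidated behaviour. Here `kb` and `active` are assumed to be callables returning logits, and `old_kb_params` / `importance` are snapshots from the previous consolidation; the temperature and the EWC-style penalty follow the paper's description only loosely.

```python
import torch
import torch.nn.functional as F

def compress_step(kb, active, x, old_kb_params, importance, lam=1.0, T=2.0):
    """One distillation step of the active column into the knowledge base (kb),
    with a quadratic penalty that slows changes to weights important for
    previously consolidated tasks (hedged sketch)."""
    with torch.no_grad():
        teacher = F.softmax(active(x) / T, dim=-1)
    student_log = F.log_softmax(kb(x) / T, dim=-1)
    distill = F.kl_div(student_log, teacher, reduction="batchmean")
    protect = sum((w * (p - p_old).pow(2)).sum()
                  for p, p_old, w in zip(kb.parameters(), old_kb_params, importance))
    return distill + 0.5 * lam * protect
```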
We motivate Energy-Based Models (EBMs) as a promising model class for continual learning problems. Instead of tackling continual learning via the use of external memory, growing models, or regularization, EBMs change the underlying training objective to cause less interference with previously learned information. Our proposed version of EBMs for continual learning is simple, efficient, and outperforms baseline methods by a large margin on several benchmarks. Moreover, our proposed contrastive divergence-based training objective can be combined with other continual learning methods, resulting in substantial boosts in their performance. We further show that EBMs are adaptable to a more general continual learning setting where the data distribution changes without the notion of explicitly delineated tasks. These observations point towards EBMs as a useful building block for future continual learning methods.
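To make the energy-based formulation concrete, here is one hedged reading of a contrastive objective for classification: an energy is assigned to each (input, label) pair and training pushes the energy of the observed label below the energies of the other labels considered in the same batch. The dot-product energy and the use of only in-batch labels as negatives are simplifying assumptions, not necessarily the paper's exact objective.

```python
import torch
import torch.nn as nn

class EnergyClassifier(nn.Module):
    def __init__(self, in_dim, num_classes, hid=128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hid), nn.ReLU())
        self.label_emb = nn.Embedding(num_classes, hid)

    def energy(self, x, y):
        # Lower energy = more compatible (x, y) pair.
        return -(self.encoder(x) * self.label_emb(y)).sum(dim=-1)

def contrastive_loss(model, x, y):
    """Softmax over energies restricted to the labels present in the batch."""
    candidates = y.unique()                                   # in-batch negatives only
    e = torch.stack([model.energy(x, c.repeat(x.size(0))) for c in candidates], dim=1)
    targets = (y.unsqueeze(1) == candidates.unsqueeze(0)).float().argmax(dim=1)
    return nn.functional.cross_entropy(-e, targets)
```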
Humans learn continually throughout their lifespan by accumulating diverse knowledge and fine-tuning it for future tasks. When presented with a similar goal, neural networks suffer from catastrophic forgetting if the data distributions across sequential tasks are not stationary over the course of learning. An effective approach to address such continual learning (CL) problems is to use hypernetworks that generate task-dependent weights for a target network. However, the continual learning performance of existing hypernetwork-based approaches is limited by the assumption of independence of the weights across layers, made in order to maintain parameter efficiency. To address this limitation, we propose a novel approach that uses a dependency-preserving hypernetwork to generate weights for the target network while also maintaining parameter efficiency. We propose to use a recurrent neural network (RNN) based hypernetwork that can generate layer weights efficiently while allowing for dependencies across them. In addition, we propose novel regularization and network growth techniques for the RNN-based hypernetwork to further improve continual learning performance. To demonstrate the effectiveness of the proposed methods, we conduct experiments on several image classification continual learning tasks and settings. We find that the proposed methods based on RNN hypernetworks outperform the baselines in all these CL settings and tasks.
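A compact sketch of a recurrent hypernetwork that emits the target network's layer weights one layer at a time, so that each layer's generated weights can depend on those generated for earlier layers via the recurrent state. The GRU cell, the task-embedding size, and the per-layer projection heads are illustrative choices, not the architecture from the paper.

```python
import torch
import torch.nn as nn

class RecurrentHypernet(nn.Module):
    """Generates [W1, W2, ...] for a target MLP, one layer per recurrent step."""
    def __init__(self, layer_shapes, task_emb_dim=32, hidden=128):
        super().__init__()
        self.layer_shapes = layer_shapes               # e.g. [(64, 784), (10, 64)]
        self.cell = nn.GRUCell(task_emb_dim, hidden)
        self.heads = nn.ModuleList(
            nn.Linear(hidden, out_f * in_f) for out_f, in_f in layer_shapes)

    def forward(self, task_emb):
        h = torch.zeros(1, self.cell.hidden_size)
        weights = []
        for head, (out_f, in_f) in zip(self.heads, self.layer_shapes):
            h = self.cell(task_emb.unsqueeze(0), h)    # state carries cross-layer dependency
            weights.append(head(h).view(out_f, in_f))
        return weights

hyper = RecurrentHypernet([(64, 784), (10, 64)])
w1, w2 = hyper(torch.randn(32))                        # weights for one task embedding
logits = torch.relu(torch.randn(5, 784) @ w1.t()) @ w2.t()
```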
The human brain is capable of learning tasks sequentially without forgetting. However, deep neural networks (DNNs) suffer from catastrophic forgetting when they learn one task after another. We address this challenge considering a class-incremental learning scenario, in which the DNN sees test data without knowing which task the data originates from. During training, Continual Prune-and-Select (CP&S) finds a subnetwork within the DNN that is responsible for solving a given task. Then, during inference, CP&S selects the correct subnetwork to make predictions for that task. A new task is learned by pruning: the available (previously untrained) neuronal connections of the DNN are trained to create a new subnetwork, which can include connections belonging to previously trained subnetwork(s) because shared connections are never updated. This makes it possible to eliminate catastrophic forgetting by creating specialized regions in the DNN that do not conflict with each other while still allowing knowledge transfer among them. The CP&S strategy, implemented with different subnetwork selection strategies, reveals superior performance to state-of-the-art continual learning methods on various datasets (CIFAR-100, CUB-200-2011, ImageNet-100 and ImageNet-1000). In particular, CP&S is able to sequentially learn 10 tasks from ImageNet-1000 with an accuracy of 94% and negligible forgetting, a first-of-its-kind result in class-incremental learning. To the best of the authors' knowledge, this represents an improvement in accuracy of more than 20% compared with the best alternative method.
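The bookkeeping behind such prune-and-select schemes can be illustrated with simple boolean masks: a new task may only train connections that no previous subnetwork claimed, and the connections that survive pruning define the task's subnetwork, while frozen shared connections can still be reused at inference because they are never updated. The magnitude-based pruning rule and the 20% keep fraction below are placeholders, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 64))             # one layer of the shared network
claimed = np.zeros_like(W, dtype=bool)    # connections owned by earlier subnetworks

def masked_grad_step(W, grad, claimed, lr=0.1):
    """Shared (claimed) connections are frozen; only free connections are updated."""
    return W - lr * np.where(claimed, 0.0, grad)

def build_task_mask(W, claimed, keep_frac=0.2):
    """Prune the newly trained free connections by magnitude; the survivors form
    the new task's subnetwork."""
    free = ~claimed
    threshold = np.quantile(np.abs(W[free]), 1.0 - keep_frac)
    return free & (np.abs(W) >= threshold)

# after training a task on the free connections:
task_mask = build_task_mask(W, claimed)
claimed = claimed | task_mask             # these connections are now owned
```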
This work investigates the entanglement between continual learning (CL) and transfer learning (TL). In particular, we shed light on the widespread application of network pretraining, highlighting that it is itself subject to catastrophic forgetting. Unfortunately, this issue leads to the under-exploitation of knowledge transfer during later tasks. On this ground, we propose Transfer without Forgetting (TwF), a hybrid approach built on a fixed pretrained sibling network, which continuously propagates the knowledge inherent in the source domain through a layer-wise loss term. Our experiments indicate that TwF steadily outperforms other CL methods across a variety of settings, datasets, and buffer sizes, with an average accuracy gain of 4.81%.
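A hedged sketch of the general mechanism the abstract describes, i.e. propagating the pretrained sibling's knowledge through a per-layer loss term: intermediate activations of the continually trained network are pulled toward those of the frozen pretrained copy. The plain mean-squared penalty below is a simplification of whatever feature-matching term the paper actually uses.

```python
import torch

def layerwise_transfer_loss(student_feats, sibling_feats, weights=None):
    """Penalize the distance between each layer's features of the continually
    trained network and those of the frozen, pretrained sibling network."""
    weights = weights or [1.0] * len(student_feats)
    loss = 0.0
    for w, fs, ft in zip(weights, student_feats, sibling_feats):
        loss = loss + w * (fs - ft.detach()).pow(2).mean()
    return loss

# total objective (sketch): task_loss + alpha * layerwise_transfer_loss(...)
```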
Continual Learning is considered a key step toward next-generation Artificial Intelligence. Among various methods, replay-based approaches that maintain and replay a small episodic memory of previous samples are one of the most successful strategies against catastrophic forgetting. However, since forgetting is inevitable given bounded memory and unbounded tasks, how to forget is a problem continual learning must address. Therefore, beyond simply avoiding catastrophic forgetting, an under-explored issue is how to reasonably forget while ensuring the merits of human memory, including 1. storage efficiency, 2. generalizability, and 3. some interpretability. To achieve these simultaneously, our paper proposes a new saliency-augmented memory completion framework for continual learning, inspired by recent discoveries in memory completion separation in cognitive neuroscience. Specifically, we innovatively propose to store the part of the image most important to the tasks in episodic memory by saliency map extraction and memory encoding. When learning new tasks, previous data from memory are inpainted by an adaptive data generation module, which is inspired by how humans complete episodic memory. The module's parameters are shared across all tasks and it can be jointly trained with a continual learning classifier as bilevel optimization. Extensive experiments on several continual learning and image classification benchmarks demonstrate the proposed method's effectiveness and efficiency.
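As a rough sketch of the storage side of such a scheme: a saliency map (here, a plain input-gradient magnitude, which is only one of many possible estimators) selects the most task-relevant pixels of each image, and only that masked region plus the binary mask is placed in episodic memory; the adaptive inpainting module that would later fill in the discarded background before replay is not shown.

```python
import torch

def saliency_mask(model, x, y, keep_frac=0.25):
    """Keep the keep_frac most salient pixels per image (input-gradient magnitude)."""
    x = x.detach().clone().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(x), y)
    loss.backward()
    sal = x.grad.abs().sum(dim=1, keepdim=True)                       # (B, 1, H, W)
    thresh = torch.quantile(sal.flatten(1), 1 - keep_frac, dim=1).view(-1, 1, 1, 1)
    return (sal >= thresh).float()

def store_in_memory(memory, x, y, mask):
    """Episodic memory keeps only the salient region and its mask."""
    memory.append((x * mask, mask, y))
```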
Kernel continual learning \cite{derakhshani2021kernel} has recently emerged as a strong continual learner due to its non-parametric ability to tackle task interference and catastrophic forgetting. Unfortunately, its success comes at the expense of an explicit memory to store samples from past tasks, which hampers scalability to continual learning settings with a large number of tasks. In this paper, we introduce generative kernel continual learning, which explores the synergies between generative models and kernels for continual learning. The generative model is able to produce representative samples for kernel learning, which removes the dependence on memory in kernel continual learning. Moreover, as we replay only on the generative model, we avoid task interference while being computationally more efficient than previous methods that need replay on the entire model. We further introduce a supervised contrastive regularization, which enables our model to generate even more discriminative samples for better kernel-based classification performance. We conduct extensive experiments on three widely-used continual learning benchmarks that demonstrate the abilities and benefits of our contributions. Most notably, on the challenging SplitCIFAR100 benchmark, with just a simple linear kernel we obtain the same accuracy as kernel continual learning with one tenth of the memory, or a 10.1% accuracy gain for the same memory budget.
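The interplay the abstract describes, where a generative model supplies samples so the per-task kernel classifier needs no stored memory, can be caricatured in a few lines. Here a class-conditional Gaussian over features stands in for the generative model and plain linear-kernel ridge regression stands in for the kernel learner; both are stand-ins chosen for brevity, not the components used in the paper.

```python
import numpy as np

def fit_generator(feats, labels):
    """Per-class Gaussian as a toy stand-in for the generative model."""
    return {c: (feats[labels == c].mean(0), feats[labels == c].std(0) + 1e-6)
            for c in np.unique(labels)}

def sample(generator, n_per_class, rng):
    xs, ys = [], []
    for c, (mu, sd) in generator.items():
        xs.append(rng.normal(mu, sd, size=(n_per_class, mu.shape[0])))
        ys.append(np.full(n_per_class, c))
    return np.concatenate(xs), np.concatenate(ys)

def kernel_ridge_head(X, y, num_classes, lam=1e-2):
    """Linear-kernel ridge classifier fit on generated (not stored) samples."""
    K = X @ X.T
    Y = np.eye(num_classes)[y]
    alpha = np.linalg.solve(K + lam * np.eye(len(X)), Y)
    return lambda Xq: np.argmax((Xq @ X.T) @ alpha, axis=1)
```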
Lack of performance when it comes to continual learning over non-stationary distributions of data remains a major challenge in scaling neural network learning to more human realistic settings. In this work we propose a new conceptualization of the continual learning problem in terms of a temporally symmetric trade-off between transfer and interference that can be optimized by enforcing gradient alignment across examples. We then propose a new algorithm, Meta-Experience Replay (MER), that directly exploits this view by combining experience replay with optimization based meta-learning. This method learns parameters that make interference based on future gradients less likely and transfer based on future gradients more likely. We conduct experiments across continual lifelong supervised learning benchmarks and non-stationary reinforcement learning environments demonstrating that our approach consistently outperforms recently proposed baselines for continual learning. Our experiments show that the gap between the performance of MER and baseline algorithms grows both as the environment gets more non-stationary and as the fraction of the total experiences stored gets smaller.
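As a rough illustration of combining replay with Reptile-style meta-learning, the sketch below interleaves reservoir-sampled memories with the current batch, takes a few inner SGD steps, and then moves the weights only part of the way toward the result. This is a heavily condensed, single-level variant of the published algorithm; the learning rates and the generic `grad_fn` are placeholders.

```python
import random

def mer_style_update(theta, batch, memory, grad_fn, inner_lr=0.03, meta_lr=1.0, steps=5):
    """Inner SGD on a mix of current and replayed examples, then a Reptile-style
    interpolation back toward the starting weights (simplified, single level)."""
    start = theta.copy()
    examples = batch + random.sample(memory, min(len(memory), len(batch)))
    for _ in range(steps):
        for ex in examples:
            theta = theta - inner_lr * grad_fn(theta, ex)
    return start + meta_lr * (theta - start)      # meta_lr < 1 interpolates

def reservoir_add(memory, item, seen, capacity=500):
    """Standard reservoir sampling; `seen` counts items seen so far, including this one."""
    if len(memory) < capacity:
        memory.append(item)
    else:
        j = random.randrange(seen)
        if j < capacity:
            memory[j] = item
```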
Continual learning (CL) aims to develop techniques by which a single model adapts to an increasing number of tasks, potentially leveraging learning across tasks in a resource-efficient manner. A major challenge for CL systems is catastrophic forgetting, where earlier tasks are forgotten while learning a new task. To address this, replay-based CL approaches maintain and repeatedly retrain on a small buffer of data selected from the tasks encountered so far. We propose Gradient Coreset Replay (GCR), a novel replay-buffer selection and update strategy using a carefully designed optimization criterion. Specifically, we select and maintain a "coreset" that closely approximates the gradient of all the data seen so far with respect to the current model parameters, and we discuss the key strategies needed to apply it effectively to the continual learning setting. We show significant gains (2%-4%) over the state of the art in the well-studied offline continual learning setting. Our findings also transfer effectively to online/streaming CL settings, showing gains of up to 5% over existing approaches. Finally, we demonstrate the value of supervised contrastive loss for continual learning, which yields a cumulative gain of up to 5% in accuracy when combined with our subset-selection strategy.
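One way to picture the selection criterion: treat each candidate's per-sample gradient as a vector and greedily pick the sample whose addition best closes the gap to the summed gradient of all the data seen so far. The greedy residual-matching loop below is a simplification of the paper's weighted optimization criterion.

```python
import numpy as np

def greedy_gradient_coreset(sample_grads, budget):
    """sample_grads: (N, D) per-sample gradients w.r.t. the current parameters.
    Greedily pick `budget` samples whose summed gradient best approximates the
    full gradient sum (simplified, unweighted variant)."""
    target = sample_grads.sum(axis=0)
    chosen, residual = [], target.copy()
    for _ in range(budget):
        remaining = [i for i in range(len(sample_grads)) if i not in chosen]
        best = min(remaining,
                   key=lambda i: np.linalg.norm(residual - sample_grads[i]))
        chosen.append(best)
        residual -= sample_grads[best]
    return chosen
```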
The ability to learn tasks in a sequential fashion is crucial to the development of artificial intelligence. Neural networks are not, in general, capable of this and it has been widely thought that catastrophic forgetting is an inevitable feature of connectionist models. We show that it is possible to overcome this limitation and train networks that can maintain expertise on tasks which they have not experienced for a long time. Our approach remembers old tasks by selectively slowing down learning on the weights important for those tasks. We demonstrate our approach is scalable and effective by solving a set of classification tasks based on the MNIST hand written digit dataset and by learning several Atari 2600 games sequentially.
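The approach this abstract summarizes (widely known as Elastic Weight Consolidation) boils down to a quadratic penalty weighted by a per-parameter importance estimate such as the diagonal Fisher information. A minimal PyTorch-style sketch of that penalty follows; the Fisher estimate uses squared gradients of the log-likelihood on the old task's data, which is the usual diagonal approximation, and the regularization strength `lam` is a placeholder.

```python
import torch
import torch.nn.functional as F

def diagonal_fisher(model, data_loader):
    """Per-parameter importance as the mean squared gradient of the
    log-likelihood over the old task's data (diagonal Fisher approximation)."""
    fisher = [torch.zeros_like(p) for p in model.parameters()]
    n = 0
    for x, y in data_loader:
        model.zero_grad()
        F.nll_loss(F.log_softmax(model(x), dim=-1), y).backward()
        for f, p in zip(fisher, model.parameters()):
            f += p.grad.detach() ** 2
        n += 1
    return [f / max(n, 1) for f in fisher]

def ewc_penalty(model, old_params, fisher, lam=100.0):
    """Quadratic penalty slowing changes to weights important for old tasks."""
    return 0.5 * lam * sum((f * (p - p_old).pow(2)).sum()
                           for p, p_old, f in zip(model.parameters(), old_params, fisher))
```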
We introduce a new training paradigm that enforces interval constraints on the neural network parameter space to control forgetting. Contemporary continual learning (CL) methods train a neural network efficiently from a stream of data while reducing the negative impact of catastrophic forgetting, yet they do not provide any guarantee that network performance will not deteriorate uncontrollably over time. In this work, we show how to put bounds on forgetting by reformulating continual learning of a model as a continual contraction of its parameter space. To that end, we propose HyperRectangle Training, a new training methodology in which each task is represented by a hyperrectangle in parameter space, fully contained within the hyperrectangles of the previous tasks. This formulation reduces the NP-hard CL problem to polynomial time while providing resilience that fully prevents forgetting. We validate our claims by developing InterContiNet (Interval Continual Learning), an algorithm that leverages interval arithmetic to effectively model parameter regions as hyperrectangles. Through experimental results, we show that our approach performs well in the continual learning setting without storing data from previous tasks.
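The nesting constraint can be illustrated with elementary interval arithmetic: each parameter keeps a [low, high] interval, the interval learned for a new task is clipped to lie inside the previous one, and any point chosen from the final interval therefore stays inside every earlier task's region. The clipping rule below is a simplification of how the actual algorithm shapes the hyperrectangles.

```python
import numpy as np

def nest_intervals(prev_low, prev_high, cand_low, cand_high):
    """Clip a candidate hyperrectangle so it is fully contained in the previous
    task's hyperrectangle (per-parameter interval intersection)."""
    low = np.clip(cand_low, prev_low, prev_high)
    high = np.clip(cand_high, prev_low, prev_high)
    return low, np.maximum(high, low)        # keep intervals non-degenerate

prev = (np.array([-1.0, -0.5]), np.array([1.0, 0.5]))
cand = (np.array([-2.0, 0.0]), np.array([0.3, 0.9]))
low, high = nest_intervals(*prev, *cand)
theta = (low + high) / 2                     # any point here satisfies all tasks so far
```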
Despite significant advances, the performance of state-of-the-art continual learning approaches hinges on the unrealistic scenario of fully labeled data. In this paper, we tackle this challenge and propose an approach for continual semi-supervised learning -- a setting where not all the data samples are labeled. An underlying issue in this scenario is the model forgetting representations of unlabeled data and overfitting the labeled ones. We leverage the power of nearest-neighbor classifiers to non-linearly partition the feature space and learn a strong representation for the current task, as well as distill relevant information from previous tasks. We perform a thorough experimental evaluation and show that our method outperforms all the existing approaches by large margins, setting a strong state of the art on the continual semi-supervised learning paradigm. For example, on CIFAR100 we surpass several others even when using at least 30 times less supervision (0.8% vs. 25% of annotations).
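To make the nearest-neighbour component concrete: with only a small labeled subset, classification can be performed by comparing an embedding against stored labeled embeddings rather than by a learned linear head, which partitions the feature space non-linearly. The cosine-similarity kNN below is a generic sketch, not the paper's exact classifier or distillation scheme.

```python
import numpy as np

def knn_predict(query_feats, bank_feats, bank_labels, k=5):
    """Cosine-similarity k-nearest-neighbour vote over stored labeled features."""
    q = query_feats / np.linalg.norm(query_feats, axis=1, keepdims=True)
    b = bank_feats / np.linalg.norm(bank_feats, axis=1, keepdims=True)
    sims = q @ b.T                               # (num_queries, bank_size)
    topk = np.argsort(-sims, axis=1)[:, :k]
    votes = bank_labels[topk]                    # (num_queries, k)
    return np.array([np.bincount(v).argmax() for v in votes])
```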
Learning from a sequence of tasks over a lifetime is essential for an agent aiming at artificial general intelligence. This requires the agent to continuously learn and memorize new knowledge without interference. This paper first demonstrates a fundamental issue of lifelong learning with neural networks, named anterograde forgetting: preserving and transferring memory may inhibit the learning of new knowledge. This is attributed to the fact that the learning capacity of a neural network shrinks as it keeps memorizing historical knowledge, and that concept confusion may occur as it transfers irrelevant old knowledge to the current task. This work proposes a general framework named Cycled Memory Networks (CMN) to address the anterograde forgetting in neural networks for lifelong learning. CMN consists of two individual memory networks that store short-term and long-term memories, respectively, to avoid capacity shrinkage. A transfer cell is designed to connect these two memory networks, enabling knowledge transfer from the long-term memory network to the short-term memory network to mitigate concept confusion, and a memory consolidation mechanism is developed to integrate short-term knowledge into the long-term memory network for knowledge accumulation. Experimental results demonstrate that CMN can effectively address anterograde forgetting on several task-related, task-conflicting, class-incremental and cross-domain benchmarks.
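A schematic reading of the two-network loop: a short-term network learns the current task while a transfer term lets it draw on the long-term network, and after the task a consolidation step distills the short-term network back into the long-term one. Both terms are written here as simple feature and logit distillation losses, and the networks are assumed to return a (logits, features) pair; these are assumptions about the abstract's transfer cell and consolidation mechanism, not their exact form.

```python
import torch
import torch.nn.functional as F

def transfer_loss(short_term, long_term, x, y, beta=0.1):
    """Learn the new task on the short-term network, softly guided by the
    long-term network's features (sketch of the transfer direction)."""
    logits, feats = short_term(x)
    with torch.no_grad():
        _, lt_feats = long_term(x)
    return F.cross_entropy(logits, y) + beta * (feats - lt_feats).pow(2).mean()

def consolidation_loss(long_term, short_term, x, T=2.0):
    """Integrate short-term knowledge into the long-term network by distillation."""
    with torch.no_grad():
        teacher = F.softmax(short_term(x)[0] / T, dim=-1)
    student = F.log_softmax(long_term(x)[0] / T, dim=-1)
    return F.kl_div(student, teacher, reduction="batchmean")
```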
Humans and animals have the ability to continually acquire, fine-tune, and transfer knowledge and skills throughout their lifespan. This ability, referred to as lifelong learning, is mediated by a rich set of neurocognitive mechanisms that together contribute to the development and specialization of our sensorimotor skills as well as to long-term memory consolidation and retrieval. Consequently, lifelong learning capabilities are crucial for computational systems and autonomous agents interacting in the real world and processing continuous streams of information. However, lifelong learning remains a long-standing challenge for machine learning and neural network models since the continual acquisition of incrementally available information from non-stationary data distributions generally leads to catastrophic forgetting or interference. This limitation represents a major drawback for state-of-the-art deep neural network models that typically learn representations from stationary batches of training data, thus without accounting for situations in which information becomes incrementally available over time. In this review, we critically summarize the main challenges linked to lifelong learning for artificial learning systems and compare existing neural network approaches that alleviate, to different extents, catastrophic forgetting. Although significant advances have been made in domain-specific learning with neural networks, extensive research efforts are required for the development of robust lifelong learning on autonomous agents and robots. We discuss well-established and emerging research motivated by lifelong learning factors in biological systems such as structural plasticity, memory replay, curriculum and transfer learning, intrinsic motivation, and multisensory integration.
Incremental task learning (ITL) is a category of continual learning that seeks to train a single network for multiple tasks (one after another), where the training data for each task is only available while training that task. Neural networks tend to forget older tasks when they are trained on newer tasks, a property often called catastrophic forgetting. To address this issue, ITL methods use episodic memory, parameter regularization, masking and pruning, or extensible network structures. In this paper, we propose a new incremental task learning framework based on low-rank factorization. In particular, we represent the network weights of each layer as a linear combination of several rank-1 matrices. To update the network for a new task, we learn a rank-1 (or low-rank) matrix and add it to the weights of every layer. We also introduce an additional selector vector that assigns different weights to the low-rank matrices of the previous tasks. We show that our approach performs better than the current state-of-the-art methods in terms of accuracy and forgetting. Our method also offers better memory efficiency compared to episodic-memory-based and mask-based approaches. Our code will be available at https://github.com/csiplab/task-increment-rank-update.git.
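A small sketch of the weight parameterization the abstract describes: each layer's weight is a sum of rank-1 outer products, one new factor is learned per task, and a per-task selector vector reweights all factors seen so far. Variable names, initialization, and the frozen/trainable split are illustrative; see the linked repository for the actual implementation.

```python
import torch
import torch.nn as nn

class LowRankIncrementalLinear(nn.Module):
    """Weight = sum_t s_t * u_t v_t^T, with one new rank-1 factor per task."""
    def __init__(self, in_f, out_f):
        super().__init__()
        self.in_f, self.out_f = in_f, out_f
        self.us, self.vs = nn.ParameterList(), nn.ParameterList()
        self.selectors = nn.ParameterList()

    def add_task(self):
        # New rank-1 factor (trainable) and a selector over all factors so far.
        self.us.append(nn.Parameter(torch.randn(self.out_f, 1) * 0.01))
        self.vs.append(nn.Parameter(torch.randn(1, self.in_f) * 0.01))
        self.selectors.append(nn.Parameter(torch.ones(len(self.us))))
        for u, v in zip(list(self.us)[:-1], list(self.vs)[:-1]):
            u.requires_grad_(False)               # earlier factors stay frozen
            v.requires_grad_(False)

    def forward(self, x, task_id):
        s = self.selectors[task_id]
        W = sum(s[t] * self.us[t] @ self.vs[t] for t in range(task_id + 1))
        return x @ W.t()
```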
A growing body of research in continual learning focuses on the catastrophic forgetting problem. While many attempts have been made to alleviate this problem, the majority of the methods assume a single model in the continual learning setup. In this work, we question this assumption and show that employing ensemble models can be a simple yet effective method to improve continual performance. However, ensembles' training and inference costs can increase significantly as the number of models grows. Motivated by this limitation, we study different ensemble models to understand their benefits and drawbacks in continual learning scenarios. Finally, to overcome the high compute cost of ensembles, we leverage recent advances in neural network subspace to propose a computationally cheap algorithm with similar runtime to a single model yet enjoying the performance benefits of ensembles.
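A minimal illustration of the weight-subspace idea: instead of keeping M independent models, keep a few anchor weight vectors and treat their convex combinations as a cheap, implicit ensemble, so inference can use a single sampled (or averaged) point in that subspace. The two-anchor line segment below is the simplest such subspace and is meant only as a sketch of the general idea, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 1000
anchors = [rng.normal(size=d), rng.normal(size=d)]   # endpoints of a line in weight space

def subspace_sample(anchors, rng):
    """Draw one weight vector from the convex hull of the anchors."""
    alphas = rng.dirichlet(np.ones(len(anchors)))
    return sum(a * w for a, w in zip(alphas, anchors))

def subspace_midpoint(anchors):
    """Single-model inference cost: evaluate once at the subspace centre."""
    return sum(anchors) / len(anchors)

w_infer = subspace_midpoint(anchors)                 # one forward pass at test time
```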