Creating artificial intelligence (AI) systems capable of demonstrating lifelong learning is a fundamental challenge, and many approaches and metrics have been proposed to analyze algorithmic properties. However, for existing lifelong learning metrics, algorithmic contributions are confounded by task and scenario structure. To mitigate this issue, we introduce an algorithm-agnostic, explainable surrogate-modeling approach for estimating latent properties of lifelong learning algorithms. We validate the approach for estimating these properties via experiments on synthetic data. To validate the structure of the surrogate model, we analyze real performance data from a suite of popular lifelong learning approaches and baselines adapted for lifelong classification and lifelong reinforcement learning.
Lack of performance when it comes to continual learning over non-stationary distributions of data remains a major challenge in scaling neural network learning to more human realistic settings. In this work we propose a new conceptualization of the continual learning problem in terms of a temporally symmetric trade-off between transfer and interference that can be optimized by enforcing gradient alignment across examples. We then propose a new algorithm, Meta-Experience Replay (MER), that directly exploits this view by combining experience replay with optimization based meta-learning. This method learns parameters that make interference based on future gradients less likely and transfer based on future gradients more likely. We conduct experiments across continual lifelong supervised learning benchmarks and non-stationary reinforcement learning environments demonstrating that our approach consistently outperforms recently proposed baselines for continual learning. Our experiments show that the gap between the performance of MER and baseline algorithms grows both as the environment gets more non-stationary and as the fraction of the total experiences stored gets smaller.
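The abstract leaves the update rule implicit. Below is a minimal sketch, with all names ours, of the Reptile-style meta-update interleaved with replay that MER builds on: inner SGD steps through a mixed batch of new and replayed examples, then an interpolation back toward the starting weights, which implicitly rewards gradient alignment across examples (buffer management, reservoir sampling in the paper, is elided).

```python
import random
import torch
import torch.nn.functional as F

def mer_step(model, buffer, example, lr_inner=0.01, beta=0.1, k=5):
    """One hypothetical Meta-Experience Replay step (a sketch, not the
    paper's exact algorithm)."""
    theta_old = [p.detach().clone() for p in model.parameters()]
    batch = random.sample(buffer, min(k, len(buffer))) + [example]
    for x, y in batch:  # inner SGD pass through the mixed batch
        loss = F.cross_entropy(model(x), y)
        model.zero_grad()
        loss.backward()
        with torch.no_grad():
            for p in model.parameters():
                p -= lr_inner * p.grad
    with torch.no_grad():  # Reptile step: move only part-way to the adapted weights
        for p, p0 in zip(model.parameters(), theta_old):
            p.copy_(p0 + beta * (p - p0))
    buffer.append(example)  # reservoir sampling omitted for brevity
```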
The innate capacity of humans and other animals to learn a diverse, and often interfering, range of knowledge and skills throughout their lifespan is a hallmark of natural intelligence, with obvious evolutionary motivations. In parallel, the ability of artificial neural networks (ANNs) to learn across a range of tasks and domains, combining and re-using learned representations where required, is a clear goal of artificial intelligence. This capacity, widely described as continual learning, has become a prolific subfield of machine learning research. Despite the numerous successes of deep learning in recent years, across domains ranging from image recognition to machine translation, such continual task learning has proved challenging. Neural networks trained on sequences of tasks with stochastic gradient descent often suffer from representational interference, whereby the learned weights for a given task effectively overwrite those of previous tasks in a process termed catastrophic forgetting. This represents a major impediment to the development of more generalized artificial learning systems, capable of accumulating knowledge over time and task space in a manner analogous to humans. A repository of selected papers and implementations accompanying this review can be found at https://github.com/mccaffary/continual-learning.
One major obstacle towards AI is the poor ability of models to solve new problems quicker, and without forgetting previously acquired knowledge. To better understand this issue, we study the problem of continual learning, where the model observes, once and one by one, examples concerning a sequence of tasks. First, we propose a set of metrics to evaluate models learning over a continuum of data. These metrics characterize models not only by their test accuracy, but also in terms of their ability to transfer knowledge across tasks. Second, we propose a model for continual learning, called Gradient Episodic Memory (GEM) that alleviates forgetting, while allowing beneficial transfer of knowledge to previous tasks. Our experiments on variants of the MNIST and CIFAR-100 datasets demonstrate the strong performance of GEM when compared to the state-of-the-art.
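Concretely, GEM treats learning task $t$ as a constrained problem over the episodic memories $\mathcal{M}_k$ kept for earlier tasks (stated here in the paper's formulation):

```latex
\begin{aligned}
&\min_{\theta}\; \ell\big(f_\theta(x, t),\, y\big) \\
&\;\text{s.t.}\quad \ell\big(f_\theta, \mathcal{M}_k\big) \;\le\; \ell\big(f_\theta^{\,t-1}, \mathcal{M}_k\big) \quad \text{for all } k < t.
\end{aligned}
```

In practice the constraints are checked via their first-order surrogate $\langle g, g_k \rangle \ge 0$ between the current gradient $g$ and the gradient $g_k$ computed on each memory; when some constraint is violated, $g$ is projected onto the closest gradient (in $\ell_2$) satisfying all inequalities by solving a small quadratic program.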
Continual Learning (CL) is a field dedicated to devising algorithms able to achieve lifelong learning. Overcoming the disruption of previously acquired knowledge, a drawback affecting deep learning models that goes by the name of catastrophic forgetting, is a hard challenge. Currently, deep learning methods can attain impressive results when the modeled data do not undergo a considerable distributional shift in subsequent learning sessions, but whenever we expose such systems to this incremental setting, performance drops very quickly. Overcoming this limitation is fundamental, as it would allow us to build truly intelligent systems showing stability and plasticity. It would also allow us to overcome the onerous limitation of retraining these architectures from scratch with newly updated data. In this thesis, we tackle the problem from multiple directions. In a first study, we show that in rehearsal-based techniques (systems that use a memory buffer), the quantity of data stored in the rehearsal buffer is a more important factor than the quality of the data. In a second study, we present one of the early works on incremental learning with ViT architectures, comparing functional, weight, and attention regularization approaches, and we propose a novel and effective asymmetric loss. We conclude with a study on pretraining and how it affects performance in Continual Learning, raising some questions about the effective progression of the field, followed by future directions and closing remarks.
We introduce a conceptually simple and scalable framework for continual learning domains where tasks are learned sequentially. Our method is constant in the number of parameters and is designed to preserve performance on previously encountered tasks while accelerating learning progress on subsequent problems. This is achieved by training a network with two components: A knowledge base, capable of solving previously encountered problems, which is connected to an active column that is employed to efficiently learn the current task. After learning a new task, the active column is distilled into the knowledge base, taking care to protect any previously acquired skills. This cycle of active learning (progression) followed by consolidation (compression) requires no architecture growth, no access to or storing of previous data or tasks, and no task-specific parameters. We demonstrate the progress & compress approach on sequential classification of handwritten alphabets as well as two reinforcement learning domains: Atari games and 3D maze navigation.
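A schematic of the compress phase, with hypothetical names (`kb`, `active`, and `ewc_penalty` are ours; the paper protects the knowledge base with an online variant of EWC): the active column acts as teacher and the knowledge base is trained on its softened outputs under a quadratic penalty.

```python
import torch
import torch.nn.functional as F

def compress(kb, active, data_loader, opt, ewc_penalty, T=2.0):
    """Hypothetical 'compress' phase: distill the freshly trained active
    column into the knowledge base while a quadratic penalty protects
    previously consolidated skills."""
    for x, _ in data_loader:
        with torch.no_grad():
            teacher = F.softmax(active(x) / T, dim=-1)  # active column as teacher
        student = F.log_softmax(kb(x) / T, dim=-1)
        loss = F.kl_div(student, teacher, reduction="batchmean") + ewc_penalty(kb)
        opt.zero_grad()
        loss.backward()
        opt.step()
```

The preceding progress phase, in which the active column learns the new task through lateral connections into the frozen knowledge base, is omitted here.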
This paper argues that continual learning methods can benefit from splitting the capacity of the learner across multiple models. We use statistical learning theory and experimental analysis to show how multiple tasks can interact with each other in a non-trivial fashion when trained on a single model: generalization error on a particular task can improve when it is trained alongside synergistic tasks, but can also deteriorate when trained alongside competing tasks. This theory motivates our method, named Model Zoo, which, inspired by the boosting literature, grows an ensemble of small models, each trained during one episode of continual learning. We demonstrate that Model Zoo improves accuracy on a variety of continual learning benchmark problems.
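A minimal sketch of one boosting-flavoured round, with all function names hypothetical (the exact task-selection rule is the paper's, not reproduced here):

```python
def model_zoo_round(zoo, tasks, train_small_model, eval_error, b=2):
    """Hypothetical Model Zoo round: pick the b tasks the current
    ensemble serves worst, train one small multi-task model on them,
    and add it to the ensemble; predictions for a task then average
    the outputs of the models trained on it."""
    worst = sorted(tasks, key=lambda t: eval_error(zoo, t), reverse=True)[:b]
    zoo.append(train_small_model(worst))
    return zoo
```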
Malicious software (malware) classification poses a unique challenge for continual learning (CL) regimes, owing to the volume of new samples received daily and the evolution of malware to exploit new vulnerabilities. On a typical day, antivirus vendors receive hundreds of thousands of unique pieces of software, both malicious and benign, and over the lifetime of a malware classifier more than a billion samples can easily accumulate. Given the scale of the problem, sequential training using continual learning techniques could offer substantial benefits in reducing training and storage overhead. To date, however, there has been no exploration of CL applied to malware classification tasks. In this paper, we study 11 CL techniques applied to three malware tasks covering common incremental learning scenarios, including task, class, and domain incremental learning (IL). Specifically, using two realistic, large-scale malware datasets, we evaluate the performance of the CL methods on binary malware classification (Domain-IL) and multi-class malware family classification (Task-IL and Class-IL) tasks. To our surprise, continual learning methods significantly underperformed naive joint replay of the training data in nearly all settings, in some cases reducing accuracy by more than 70 percentage points. A simple approach of selectively replaying 20% of the stored data achieves better performance than joint replay at 50% of the training time. Finally, we discuss potential reasons for the unexpectedly poor performance of the CL techniques, in the hope of spurring further research into techniques that are more effective in the malware classification domain.
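The abstract does not specify how the 20% is chosen; purely as an illustration, with hypothetical names, one uncertainty-based instantiation of selective replay could be:

```python
import numpy as np

def select_replay(stored_x, stored_y, confidence_margin, frac=0.20):
    """Pick the fraction of stored samples the current model is least
    confident about, so each retraining pass touches only part of the
    accumulated data (one plausible rule; the paper's may differ)."""
    margins = confidence_margin(stored_x)  # per-sample margin, shape (N,)
    k = max(1, int(frac * len(stored_x)))
    idx = np.argsort(margins)[:k]  # least-confident samples first
    return stored_x[idx], stored_y[idx]
```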
Biological agents are known to learn many different tasks over the course of their lives and to be able to revisit previous tasks and behaviors without a loss of performance. In contrast, artificial agents are prone to 'catastrophic forgetting', whereby performance on previous tasks deteriorates as new ones are acquired. This shortcoming has recently been addressed by methods that encourage parameters to stay close to those used for previous tasks. This can be done by (i) using specific parameter regularizers that map out suitable destinations in parameter space, or (ii) guiding the optimization journey by projecting gradients into subspaces that do not interfere with previous tasks. However, these methods often exhibit subpar performance in both feedforward and recurrent neural networks; recurrent networks are of particular interest to the study of the neural dynamics supporting biological continual learning. In this work, we propose Natural Continual Learning (NCL), a new method that unifies weight regularization and projected gradient descent. NCL uses Bayesian weight regularization to encourage good performance on all tasks at convergence, and combines this with gradient projection using the prior precision, which prevents catastrophic forgetting during optimization. When applied to continual learning problems in feedforward and recurrent networks, our method outperforms both standard weight regularization techniques and projection-based approaches. Finally, the trained networks evolve task-specific dynamics that are strongly preserved as new tasks are learned, resembling experimental findings in biological circuits.
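Schematically, the update the abstract describes preconditions the gradient of a posterior-regularized loss with the inverse prior precision $\Lambda_{k-1}$ accumulated over previous tasks (our notation):

```latex
\delta\theta \;\propto\; \Lambda_{k-1}^{-1}\, \nabla_\theta \Big[ \log p(\mathcal{D}_k \mid \theta) \;+\; \log p(\theta \mid \mathcal{D}_{1:k-1}) \Big].
```

Directions the prior constrains tightly (high precision) receive small steps, giving projection-like protection during optimization, while the regularization term anchors the converged solution to perform well on all tasks.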
The ability to learn tasks in a sequential fashion is crucial to the development of artificial intelligence. Neural networks are not, in general, capable of this and it has been widely thought that catastrophic forgetting is an inevitable feature of connectionist models. We show that it is possible to overcome this limitation and train networks that can maintain expertise on tasks which they have not experienced for a long time. Our approach remembers old tasks by selectively slowing down learning on the weights important for those tasks. We demonstrate our approach is scalable and effective by solving a set of classification tasks based on the MNIST hand written digit dataset and by learning several Atari 2600 games sequentially.
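The 'selective slowing down' is implemented as a quadratic penalty anchored at the previous solution and weighted by the diagonal Fisher information; when moving from a learned task $A$ to a new task $B$, the paper's loss reads

```latex
\mathcal{L}(\theta) \;=\; \mathcal{L}_B(\theta) \;+\; \sum_i \frac{\lambda}{2}\, F_i\, \big(\theta_i - \theta^{*}_{A,i}\big)^2,
```

where $\mathcal{L}_B$ is the loss on task $B$, $\theta^{*}_{A}$ are the parameters learned for task $A$, $F_i$ estimates how important parameter $i$ was for task $A$, and $\lambda$ sets how strongly old knowledge is protected.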
Humans and animals have the ability to continually acquire, fine-tune, and transfer knowledge and skills throughout their lifespan. This ability, referred to as lifelong learning, is mediated by a rich set of neurocognitive mechanisms that together contribute to the development and specialization of our sensorimotor skills as well as to long-term memory consolidation and retrieval. Consequently, lifelong learning capabilities are crucial for computational systems and autonomous agents interacting in the real world and processing continuous streams of information. However, lifelong learning remains a long-standing challenge for machine learning and neural network models since the continual acquisition of incrementally available information from non-stationary data distributions generally leads to catastrophic forgetting or interference. This limitation represents a major drawback for state-of-the-art deep neural network models that typically learn representations from stationary batches of training data, thus without accounting for situations in which information becomes incrementally available over time. In this review, we critically summarize the main challenges linked to lifelong learning for artificial learning systems and compare existing neural network approaches that alleviate, to different extents, catastrophic forgetting. Although significant advances have been made in domain-specific learning with neural networks, extensive research efforts are required for the development of robust lifelong learning on autonomous agents and robots. We discuss well-established and emerging research motivated by lifelong learning factors in biological systems such as structural plasticity, memory replay, curriculum and transfer learning, intrinsic motivation, and multisensory integration.
Artificial neural networks thrive in solving the classification problem for a particular rigid task, acquiring knowledge through generalized learning behaviour from a distinct training phase. The resulting network resembles a static entity of knowledge, with endeavours to extend this knowledge without targeting the original task resulting in a catastrophic forgetting. Continual learning shifts this paradigm towards networks that can continually accumulate knowledge over different tasks without the need to retrain from scratch. We focus on task incremental classification, where tasks arrive sequentially and are delineated by clear boundaries. Our main contributions concern (1) a taxonomy and extensive overview of the state-of-the-art; (2) a novel framework to continually determine the stability-plasticity trade-off of the continual learner; (3) a comprehensive experimental comparison of 11 state-of-the-art continual learning methods and 4 baselines. We empirically scrutinize method strengths and weaknesses on three benchmarks, considering Tiny Imagenet and large-scale unbalanced iNaturalist and a sequence of recognition datasets. We study the influence of model capacity, weight decay and dropout regularization, and the order in which the tasks are presented, and qualitatively compare methods in terms of required memory, computation time and storage.
Progress in continual reinforcement learning has been limited due to several barriers to entry: missing code, high compute requirements, and a lack of suitable benchmarks. In this work, we present CORA, a platform for Continual Reinforcement Learning Agents that provides benchmarks, baselines, and metrics in a single code package. The benchmarks we provide are designed to evaluate different aspects of the continual RL challenge, such as catastrophic forgetting, plasticity, ability to generalize, and sample-efficient learning. Three of the benchmarks utilize video game environments (Atari, Procgen, NetHack). The fourth benchmark, CHORES, consists of four different task sequences in a visually realistic home simulator, drawn from a diverse set of task and scene parameters. To compare continual RL methods on these benchmarks, we prepare three metrics in CORA: Continual Evaluation, Isolated Forgetting, and Zero-Shot Forward Transfer. Finally, CORA includes a set of performant, open-source baselines of existing algorithms for researchers to use and expand on. We release CORA and hope that the continual RL community can benefit from our contributions, to accelerate the development of new continual RL algorithms.
We motivate Energy-Based Models (EBMs) as a promising model class for continual learning problems. Instead of tackling continual learning via the use of external memory, growing models, or regularization, EBMs change the underlying training objective to cause less interference with previously learned information. Our proposed version of EBMs for continual learning is simple, efficient, and outperforms baseline methods by a large margin on several benchmarks. Moreover, our proposed contrastive divergence-based training objective can be combined with other continual learning methods, resulting in substantial boosts in their performance. We further show that EBMs are adaptable to a more general continual learning setting where the data distribution changes without the notion of explicitly delineated tasks. These observations point towards EBMs as a useful building block for future continual learning methods.
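One reading of the reduced-interference claim, sketched with hypothetical names: a class-conditional energy trained with a softmax-style contrastive loss that normalizes only over the classes present in the current batch, so energies assigned to absent (earlier) classes are not directly disturbed.

```python
import torch

def ebm_cl_loss(energy, x, y, classes_in_batch):
    """Contrastive sketch: lower the energy of the true class, raise
    the energies of the other classes seen in this batch only."""
    e_pos = energy(x, y)  # shape (B,)
    e_neg = torch.stack([energy(x, c) for c in classes_in_batch], dim=1)
    return (e_pos + torch.logsumexp(-e_neg, dim=1)).mean()
```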
Classical machine learning algorithms often assume that data are drawn i.i.d. from a fixed probability distribution. Recently, continual learning has emerged as a rapidly growing area of machine learning in which this assumption is relaxed: the data distribution is non-stationary and changes over time. In this paper we represent the state of the data distribution by a context variable $c$; a drift in $c$ induces a drift of the data distribution. A context drift may change the target distribution, the input distribution, or both, and a distribution drift may be abrupt or gradual. In continual learning, context drifts may interfere with the learning process and erase previously learned knowledge, so continual learning algorithms must include specialized mechanisms to handle such drifts. In this paper we aim to identify and categorize different types of context drift and the underlying assumptions about them, in order to better characterize various continual learning scenarios. Moreover, we propose to use this distribution-drift framework to provide more precise definitions of several terms commonly used in the continual learning field.
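One way to write down the decomposition the abstract describes (our notation): the distribution at time $t$ is indexed by the context $c_t$ and factorizes into the two parts a drift can affect,

```latex
p_t(x, y) \;=\; p(x, y \mid c_t) \;=\; \underbrace{p(y \mid x, c_t)}_{\text{target}}\;\underbrace{p(x \mid c_t)}_{\text{input}}.
```

A drift in $c$ that alters $p(y \mid x, c)$ is a target drift, one that alters only $p(x \mid c)$ is an input drift, and either may unfold abruptly or gradually.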
Lifelong learning aims to create AI systems that continuously and incrementally learn during a lifetime, similar to biological learning. Attempts so far have met problems, including catastrophic forgetting, interference among tasks, and the inability to exploit previous knowledge. While considerable research has focused on learning multiple input distributions, typically in classification, lifelong reinforcement learning (LRL) must also deal with variations in the state and transition distributions, and in the reward functions. Modulating masks, recently developed for classification, are particularly suitable to deal with such a large spectrum of task variations. In this paper, we adapted modulating masks to work with deep LRL, specifically PPO and IMPALA agents. The comparison with LRL baselines in both discrete and continuous RL tasks shows competitive performance. We further investigated the use of a linear combination of previously learned masks to exploit previous knowledge when learning new tasks: not only is learning faster, the algorithm solves tasks that we could not otherwise solve from scratch due to extremely sparse rewards. The results suggest that RL with modulating masks is a promising approach to lifelong learning, to the composition of knowledge to learn increasingly complex tasks, and to knowledge reuse for efficient and faster learning.
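A minimal sketch of the mask reuse described at the end of the abstract, with names and parametrization ours: a new task learns only combination coefficients over the frozen masks of earlier tasks (plus, optionally, a fresh mask of its own).

```python
import torch

def combined_mask(stored_masks, coeffs):
    """Gate for a backbone weight tensor: a learnable convex combination
    of previously learned task masks (a sketch of the idea, not the
    paper's exact parametrization)."""
    stacked = torch.stack(stored_masks)       # (num_old_tasks, *weight_shape)
    w = torch.softmax(coeffs, dim=0)          # one learnable coefficient per old task
    w = w.view(-1, *([1] * (stacked.dim() - 1)))
    return (w * stacked).sum(dim=0)

# usage sketch: effective_w = backbone_w * torch.sigmoid(combined_mask(masks, coeffs))
```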
Incremental task learning (ITL) is a category of continual learning that seeks to train a single network for multiple tasks, one after another, where the training data for each task is available only while that task is being trained. Neural networks tend to forget older tasks when trained on newer ones, a property commonly referred to as catastrophic forgetting. To address this issue, ITL methods use episodic memory, parameter regularization, masking and pruning, or extensible network structures. In this paper, we propose a new incremental task learning framework based on low-rank factorization. In particular, we represent the network weights of each layer as a linear combination of several rank-1 matrices. To update the network for a new task, we learn a rank-1 (or low-rank) matrix and add it to the weights of every layer. We also introduce an additional selector vector that assigns different weights to the low-rank matrices learned for previous tasks. We show that our approach performs better than the current state-of-the-art methods in terms of both accuracy and forgetting, and offers better memory efficiency than episodic-memory-based and mask-based approaches. Our code will be available at https://github.com/csiplab/task-increment-rank-update.git.
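The weight construction the abstract describes, as a short sketch with our own names:

```python
import torch

def layer_weight(us, vs, selector):
    """Per-layer weight as a selector-weighted sum of rank-1 factors,
    W = sum_j s_j * u_j v_j^T; for a new task only the newest (u, v)
    pair and the selector entries are trained, older factors stay frozen."""
    W = torch.zeros(us[0].shape[0], vs[0].shape[0])
    for u, v, s in zip(us, vs, selector):
        W = W + s * torch.outer(u, v)
    return W
```

Per-task storage therefore grows by a few vectors per layer rather than a full weight matrix, which is the source of the claimed memory efficiency.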
A model's ability to learn continually can be empirically assessed in different continual learning scenarios, each of which defines the constraints and the opportunities of the learning environment. Here, we challenge the current trend in the continual learning literature of experimenting mainly with class-incremental scenarios, in which the classes present in one experience are never revisited. We argue that an excessive focus on this setting may be limiting for future research on continual learning, since class-incremental scenarios artificially exacerbate catastrophic forgetting at the expense of other important objectives such as forward transfer and computational efficiency. In many real-world environments, in fact, repetition of previously encountered concepts occurs naturally and helps soften the disruption of previous knowledge. We advocate a deeper study of alternative continual learning scenarios in which repetition is integrated by design into the stream of incoming information. Starting from existing proposals, we describe the advantages that such class-incremental-with-repetition scenarios could offer for a more comprehensive assessment of continual learning models.
Continual learning aims to learn a sequence of tasks in an online fashion by leveraging knowledge acquired in the past, while still performing well on all previous tasks. This capability is crucial for artificial intelligence (AI) systems, making continual learning better suited than traditional learning paradigms to most realistic and complex application scenarios. However, current models usually learn a generic representation from the class labels of each task and then select an effective strategy to avoid catastrophic forgetting. We hypothesize that selecting only the related and useful parts of the acquired knowledge is more effective than exploiting the whole of it. Based on this, we propose a new framework, named Selecting Related Knowledge for Online Continual Learning (SRKOCL), which incorporates an additional efficient channel attention mechanism to select the knowledge relevant to each task. Our model also combines experience replay and knowledge distillation to circumvent catastrophic forgetting. Finally, extensive experiments on different benchmarks show competitive results, demonstrating that our proposed SRKOCL is a promising approach compared with the state-of-the-art.
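The attention design is not given in the abstract; as a stand-in, a generic squeeze-and-excitation-style channel attention block of the kind the description suggests (the architecture details here are ours, not the paper's):

```python
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation-style gate: global-average-pool each
    channel, then re-weight channels so only task-relevant features
    pass through."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):                  # x: (B, C, H, W)
        w = self.fc(x.mean(dim=(2, 3)))    # squeeze to (B, C), then excite
        return x * w.unsqueeze(-1).unsqueeze(-1)
```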
Continual learning aims to learn a sequence of tasks from dynamic data distributions. Without access to the old training samples, it is difficult to determine whether knowledge transfer from old tasks to a new one will be positive or negative. If old knowledge interferes with the learning of a new task, i.e., if forward knowledge transfer is negative, then precisely remembering the old tasks further aggravates the interference and degrades continual learning performance. By contrast, biological neural networks can actively forget old knowledge that conflicts with the learning of a new experience, by regulating learning-triggered synaptic expansion and synaptic convergence. Inspired by this biological active forgetting, we propose to actively forget old knowledge that limits the learning of new tasks. Within the framework of Bayesian continual learning, we develop a novel approach named Active Forgetting with synaptic Expansion-Convergence (AFEC). Our method dynamically expands parameters to learn each new task and then selectively combines them, which is formally consistent with the underlying mechanism of biological active forgetting. We extensively evaluate AFEC on a variety of continual learning benchmarks, including CIFAR-10 regression tasks, visual classification tasks, and Atari reinforcement tasks, where AFEC effectively improves the learning of new tasks and achieves state-of-the-art performance in a plug-and-play way.
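A schematic rendering of the expansion-convergence idea in the Bayesian framework (our reading, not the paper's exact objective): alongside the usual stability penalty toward the old-task solution, a second Fisher-weighted term pulls toward temporarily expanded parameters $\theta_e$ fit to the new task alone,

```latex
\mathcal{L}(\theta) \;=\; \mathcal{L}_k(\theta) \;+\; \frac{\lambda}{2} \sum_i F^{\text{old}}_i \big(\theta_i - \theta^{*}_{\text{old},i}\big)^2 \;+\; \frac{\lambda_e}{2} \sum_i F^{e}_i \big(\theta_i - \theta_{e,i}\big)^2,
```

so that tuning the balance between the two penalties lets the learner actively forget old knowledge that conflicts with the new task.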