Continual learning and few-shot learning are important frontiers in the effort to improve machine learning. A growing body of work addresses each of them, but work combining the two is rare. Recently, however, Antoniou et al. (arXiv:2004.11967) introduced a Continual Few-Shot Learning framework, CFSL, that combines both. In this study we extend CFSL to make it more comparable with standard continual learning experiments, which typically present many more classes. We also introduce an 'instance test', which requires classifying specific, highly similar instances, an animal cognitive ability that ML typically ignores. We selected representative baseline models from the original CFSL work and compared them with a model using hippocampus-inspired replay, since the hippocampus is considered crucial for this kind of learning in animals. As expected, learning more classes is harder than in the original CFSL experiments, and interestingly, the way the classes are presented affects performance. Accuracy on the instance test is comparable to that on the classification tasks. Consolidation using replay improves performance on both types of task, especially the instance test.
Lack of performance when it comes to continual learning over non-stationary distributions of data remains a major challenge in scaling neural network learning to more human realistic settings. In this work we propose a new conceptualization of the continual learning problem in terms of a temporally symmetric trade-off between transfer and interference that can be optimized by enforcing gradient alignment across examples. We then propose a new algorithm, Meta-Experience Replay (MER), that directly exploits this view by combining experience replay with optimization based meta-learning. This method learns parameters that make interference based on future gradients less likely and transfer based on future gradients more likely. We conduct experiments across continual lifelong supervised learning benchmarks and non-stationary reinforcement learning environments demonstrating that our approach consistently outperforms recently proposed baselines for continual learning. Our experiments show that the gap between the performance of MER and baseline algorithms grows both as the environment gets more non-stationary and as the fraction of the total experiences stored gets smaller.
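To make the mechanism above concrete, the following is a minimal sketch of the replay-plus-meta-learning structure the abstract describes: a Reptile-style inner loop over replayed and current examples followed by an interpolating meta-step, with a reservoir-sampled buffer. It assumes a PyTorch classifier and single-example tensors; the class interface and hyperparameters are illustrative placeholders, not the authors' reference implementation of MER.

```python
import random
import torch


class MERSketch:
    """Minimal sketch of replay combined with a Reptile-style meta-update."""

    def __init__(self, model, loss_fn, buffer_capacity=1000,
                 inner_lr=0.03, meta_lr=0.5, batch_size=5):
        self.model, self.loss_fn = model, loss_fn
        self.buffer, self.capacity, self.seen = [], buffer_capacity, 0
        self.inner_lr, self.meta_lr, self.batch_size = inner_lr, meta_lr, batch_size

    def observe(self, x, y):
        # Snapshot the weights before adaptation (theta_old in Reptile terms).
        theta_old = [p.detach().clone() for p in self.model.parameters()]

        # Inner loop: plain SGD over replayed examples plus the current one.
        replay = random.sample(self.buffer, min(self.batch_size - 1, len(self.buffer)))
        for xb, yb in replay + [(x, y)]:
            loss = self.loss_fn(self.model(xb.unsqueeze(0)), yb.unsqueeze(0))
            self.model.zero_grad()
            loss.backward()
            with torch.no_grad():
                for p in self.model.parameters():
                    if p.grad is not None:
                        p -= self.inner_lr * p.grad

        # Reptile meta-step: interpolate the old weights toward the adapted ones,
        # which implicitly encourages gradient alignment across examples.
        with torch.no_grad():
            for p, p_old in zip(self.model.parameters(), theta_old):
                p.copy_(p_old + self.meta_lr * (p - p_old))

        # Reservoir sampling keeps the buffer an unbiased sample of the stream.
        self.seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append((x, y))
        elif (j := random.randrange(self.seen)) < self.capacity:
            self.buffer[j] = (x, y)
```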
Artificial neural networks thrive in solving the classification problem for a particular rigid task, acquiring knowledge through generalized learning behaviour from a distinct training phase. The resulting network resembles a static entity of knowledge, with endeavours to extend this knowledge without targeting the original task resulting in a catastrophic forgetting. Continual learning shifts this paradigm towards networks that can continually accumulate knowledge over different tasks without the need to retrain from scratch. We focus on task incremental classification, where tasks arrive sequentially and are delineated by clear boundaries. Our main contributions concern (1) a taxonomy and extensive overview of the state-of-the-art; (2) a novel framework to continually determine the stability-plasticity trade-off of the continual learner; (3) a comprehensive experimental comparison of 11 state-of-the-art continual learning methods and 4 baselines. We empirically scrutinize method strengths and weaknesses on three benchmarks, considering Tiny Imagenet and large-scale unbalanced iNaturalist and a sequence of recognition datasets. We study the influence of model capacity, weight decay and dropout regularization, and the order in which the tasks are presented, and qualitatively compare methods in terms of required memory, computation time and storage.
Modern ML methods excel when training data are IID, large-scale, and well labeled. Learning in less ideal conditions remains an open challenge. The subfields of few-shot, continual, transfer, and representation learning have made substantial progress in learning under adverse conditions, each offering distinct advantages through its methods and insights. These methods address different challenges, such as data arriving sequentially or scarce training examples; however, before deployment an ML system may face many of these difficult conditions at once. Therefore, general ML systems that can handle the many learning challenges of practical settings are needed. To foster research toward the goal of general ML methods, we introduce a new unified evaluation framework, FLUID (Flexible Sequential Data). FLUID integrates the objectives of few-shot, continual, transfer, and representation learning, while enabling comparison and integration of techniques from these subfields. In FLUID, a learner faces a stream of data and must make sequential predictions while choosing how to update itself, adapting quickly to novel classes, and dealing with changing data distributions, all while accounting for total compute. We conduct experiments on a broad range of methods, which shed new light on the strengths and weaknesses of current solutions and point to new research problems. As a starting point toward more general methods, we present two new baselines that outperform the other evaluated methods on FLUID. Project page: https://raivn.cs.washington.edu/projects/fluid/.
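The following sketch illustrates what such a sequential protocol could look like: prediction precedes label access, and the learner decides when (and at what compute cost) to update itself. The learner interface (`predict`, `should_update`, `update`) is an assumption for illustration, not the actual FLUID API.

```python
def sequential_evaluation(learner, stream):
    """Hedged sketch of a FLUID-like protocol over a labeled data stream."""
    correct, total, compute = 0, 0, 0.0
    for x, y in stream:                      # data arrive one example at a time
        pred = learner.predict(x)            # prediction happens before the label is seen
        correct += int(pred == y)
        total += 1
        if learner.should_update(x, y):      # the learner chooses when to train
            compute += learner.update(x, y)  # update() is assumed to report its compute cost
    return {"accuracy": correct / total, "compute": compute}
```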
Real-time continual learning is needed for new applications such as home robots, user personalization on smartphones, and augmented/virtual reality headsets. However, this setting poses unique challenges: embedded devices have limited memory and compute, and conventional machine learning models suffer from catastrophic forgetting when updated on non-stationary data streams. Although several online continual learning models have been developed, their effectiveness for embedded applications has not been rigorously studied. In this paper, we first identify the criteria that online continual learners must satisfy to perform real-time, on-device learning effectively. We then study the efficacy of several online continual learning methods when used with mobile neural networks. We measure their performance, memory usage, computational requirements, and ability to generalize to out-of-distribution inputs.
Continual Learning (CL) is a field dedicated to devising algorithms able to achieve lifelong learning. Overcoming the knowledge disruption of previously acquired concepts, a drawback affecting deep learning models that goes by the name of catastrophic forgetting, is a hard challenge. Currently, deep learning methods can attain impressive results when the data modeled does not undergo a considerable distributional shift in subsequent learning sessions, but whenever we expose such systems to this incremental setting, performance drops very quickly. Overcoming this limitation is fundamental as it would allow us to build truly intelligent systems showing stability and plasticity. Secondly, it would allow us to overcome the onerous limitation of retraining these architectures from scratch with the new updated data. In this thesis, we tackle the problem from multiple directions. In the first study, we show that in rehearsal-based techniques (systems that use a memory buffer), the quantity of data stored in the rehearsal buffer matters more than the quality of the data. Secondly, we present one of the early works on incremental learning with ViT architectures, comparing functional, weight, and attention regularization approaches, and propose a novel, effective asymmetric loss. Finally, we present a study on pretraining and how it affects performance in Continual Learning, raising some questions about the effective progression of the field, and we conclude with some future directions and closing remarks.
The innate ability of humans and other animals to learn a diverse, and often interfering, range of knowledge and skills throughout their lifespan is a hallmark of natural intelligence, with obvious evolutionary motivations. In parallel, the ability of artificial neural networks (ANNs) to learn across a range of tasks and domains, combining and re-using learned representations as required, is a clear goal of artificial intelligence. This capacity, widely described as continual learning, has become a prolific subfield of machine learning research. Despite the numerous successes of deep learning in recent years, spanning domains from image recognition to machine translation, such continual task learning has proved challenging. Neural networks trained on sequences of tasks with stochastic gradient descent often suffer from representational interference, whereby the learned weights for a given task effectively overwrite those of previous tasks in a process termed catastrophic forgetting. This represents a major obstacle to the development of more general artificial learning systems capable of accumulating knowledge over time and across tasks in a manner analogous to humans. An accompanying repository of selected papers and implementations can be found at https://github.com/mccaffary/continual-learning.
The continual learning (CL) ability of humans is closely tied to the stability-plasticity dilemma, which describes how humans achieve ongoing learning capacity while preserving learned information. The concept of CL has been present in artificial intelligence (AI) since its inception. This paper presents a comprehensive review of CL. Unlike previous reviews, which mainly focus on the catastrophic forgetting phenomenon in CL, this paper surveys CL from the macroscopic perspective of the stability-plasticity mechanism. Analogous to its biological counterpart, an 'intelligent' AI agent should (i) remember previously learned information (information retrospection); (ii) continually infer new information (information prospection); and (iii) transfer useful information (information transfer), in order to achieve high-level CL. Following this taxonomy, evaluation metrics, algorithms, applications, and some open problems are reviewed. Our main contributions are (i) re-examining CL from the level of artificial general intelligence; (ii) providing a detailed and extensive overview of the topic of CL; and (iii) presenting some novel ideas on the potential development of CL.
Standard gradient descent algorithms applied to sequences of tasks are known to produce catastrophic forgetting in deep neural networks. When trained on a new task in a sequence, the model updates its parameters for the current task, forgetting past knowledge. This paper explores scenarios where we scale the number of tasks in a finite environment. These scenarios consist of long sequences of tasks with recurring data. We show that in this setting, stochastic gradient descent can learn, make progress, and converge to a solution that, according to the existing literature, would require a continual learning algorithm. In other words, we show that the model performs knowledge retention and accumulation without any specific memory mechanism. We propose a new experimental framework, SCoLe (Scaling Continual Learning), to study the knowledge retention and accumulation of algorithms in potentially infinite sequences of tasks. To explore this setting, we run a large number of experiments on sequences of 1,000 tasks to better understand this new family of settings. We also propose a slight modification of vanilla stochastic gradient descent to facilitate continual learning in this setting. The SCoLe framework represents a good simulation of practical training settings and allows the study of convergence behavior over long sequences. Our experiments show that previous results on short scenarios do not always extrapolate to longer ones.
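A minimal sketch of the kind of scenario described above, assuming hypothetical `train_on_task` and `evaluate` helpers: a long sequence of tasks drawn with replacement from a finite pool, trained with plain SGD and evaluated periodically to observe retention and accumulation.

```python
import random


def long_task_sequence_run(model, train_on_task, evaluate, task_pool,
                           num_tasks=1000, eval_every=50):
    """Sketch of a SCoLe-like scenario: recurring tasks, no CL-specific mechanism."""
    history = []
    for t in range(num_tasks):
        task = random.choice(task_pool)      # tasks recur throughout the sequence
        train_on_task(model, task)           # vanilla SGD on the current task only
        if (t + 1) % eval_every == 0:
            history.append(evaluate(model, task_pool))  # track retention/accumulation
    return history
```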
Most meta-learning approaches assume the existence of a very large set of labeled data available for episodic meta-learning of base knowledge. This stands in contrast to the more realistic continual learning paradigm, in which data arrive incrementally in the form of tasks containing disjoint classes. In this paper we consider this problem of Incremental Meta-Learning (IML), in which classes are presented incrementally in discrete tasks. We propose an approach to IML, which we call Episodic Replay Distillation (ERD), that mixes classes from the current task with class exemplars from previous tasks when sampling episodes for meta-learning. These episodes are then used for knowledge distillation to minimize catastrophic forgetting. Experiments on four datasets demonstrate that ERD surpasses the state of the art. In particular, on the more challenging one-shot, long-task-sequence incremental meta-learning scenarios, we reduce the gap between IML and joint training from 3.5% / 10.1% / 13.4% for the current state of the art to 2.6% / 2.9% / 5.0% on Tiered-ImageNet / Mini-ImageNet / CIFAR100, respectively.
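The distillation component can be sketched as follows: a standard loss on current-task data plus a knowledge-distillation term that keeps the new model close to the previous model on exemplars of earlier classes. The episode construction and the exact meta-learning objective in ERD differ; this sketch only shows a generic distillation term, with assumed temperature and weighting.

```python
import torch
import torch.nn.functional as F


def replay_distillation_loss(model, old_model, episode, exemplar_episode,
                             temperature=2.0, kd_weight=1.0):
    """Current-task loss plus distillation toward the previous model on old-class exemplars."""
    x_cur, y_cur = episode
    task_loss = F.cross_entropy(model(x_cur), y_cur)

    x_old, _ = exemplar_episode
    with torch.no_grad():                     # the previous model acts as the teacher
        teacher = old_model(x_old) / temperature
    student = model(x_old) / temperature
    kd_loss = F.kl_div(F.log_softmax(student, dim=1),
                       F.softmax(teacher, dim=1),
                       reduction="batchmean") * temperature ** 2

    return task_loss + kd_weight * kd_loss
```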
The ability to continuously expand knowledge over time and use it to rapidly generalize to new tasks is a key trait of human linguistic intelligence. However, existing models that generalize rapidly to new tasks (e.g., few-shot learning methods) are mostly trained in one shot on fixed datasets and cannot dynamically expand their knowledge, while continual learning algorithms are not specifically designed for rapid generalization. We present a new learning setup, Continual Learning of Few-shot Learners (CLIF), to address the challenges of both learning settings in a unified setup. CLIF assumes a model learns from a sequence of diverse NLP tasks arriving sequentially, accumulating knowledge to improve generalization to new tasks while also retaining performance on earlier learned tasks. We examine how generalization ability is affected in the continual learning setup, evaluate a number of continual learning algorithms, and propose a novel regularized adapter generation approach. We find that catastrophic forgetting affects generalization ability to a much lesser degree than performance on seen tasks, while continual learning algorithms can still bring considerable benefits to generalization ability.
Continual Learning, also known as Lifelong or Incremental Learning, has recently gained renewed interest among the Artificial Intelligence research community. Recent research efforts have quickly led to the design of novel algorithms able to reduce the impact of the catastrophic forgetting phenomenon in deep neural networks. Due to this surge of interest in the field, many competitions have been held in recent years, as they are an excellent opportunity to stimulate research in promising directions. This paper summarizes the ideas, design choices, rules, and results of the challenge held at the 3rd Continual Learning in Computer Vision (CLVision) Workshop at CVPR 2022. The focus of this competition is the complex continual object detection task, which is still underexplored in literature compared to classification tasks. The challenge is based on the challenge version of the novel EgoObjects dataset, a large-scale egocentric object dataset explicitly designed to benchmark continual learning algorithms for egocentric category-/instance-level object understanding, which covers more than 1k unique main objects and 250+ categories in around 100k video frames.
While deep neural networks (DNNs) have achieved impressive classification performance in closed-world learning scenarios, they typically fail to generalize to unseen categories in dynamic open-world environments, where the number of concepts is unbounded. In contrast, human and animal learners have the ability to incrementally update their knowledge by recognizing and adapting to novel observations. In particular, humans characterize concepts via exclusive (unique) sets of essential features, which are used both to recognize known classes and to identify novelty. Inspired by natural learners, we introduce a Sparse High-level-Exclusive, Low-level-Shared feature representation (SHELS) that simultaneously encourages learning exclusive sets of high-level features and essential, shared low-level features. The exclusivity of the high-level features enables the DNN to automatically detect out-of-distribution (OOD) data, while efficient use of capacity via sparse low-level features allows accommodating new knowledge. The resulting approach uses OOD detection to perform class-incremental learning without known class boundaries. We show that using SHELS for novelty detection leads to statistically significant improvements over state-of-the-art OOD detection approaches on a variety of benchmark datasets. Moreover, we demonstrate that the SHELS model mitigates catastrophic forgetting in a class-incremental learning setting, enabling a combined novelty detection and accommodation framework that supports learning in the open world.
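A heavily simplified sketch of the detect-then-accommodate loop described above: if no known class is activated strongly enough, the input is flagged as novel and accommodation can be triggered; otherwise it is classified. The thresholding rule is a generic stand-in, not the exclusivity-based SHELS criterion.

```python
import torch


def detect_or_classify(model, x, threshold=0.5):
    """Generic novelty-detection-then-classification step (assumed interface)."""
    with torch.no_grad():
        scores = torch.sigmoid(model(x.unsqueeze(0))).squeeze(0)  # per-class activations
    if scores.max() < threshold:
        return "novel"            # would trigger accommodation, e.g. expanding the head
    return int(scores.argmax())   # prediction over known classes
```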
Online continual learning is a challenging learning scenario in which a model must learn from a non-stationary stream of data where each sample is seen only once. The main challenge is to learn incrementally while avoiding catastrophic forgetting, namely the problem of forgetting previously acquired knowledge while learning from new data. A popular solution in this scenario is to use a small memory to retain old data and rehearse it over time. Unfortunately, due to the limited memory size, the quality of the memory deteriorates over time. In this paper we propose OLCGM, a novel replay-based continual learning strategy that uses knowledge condensation techniques to continually compress the memory and make better use of its limited size. The sample condensation step compresses old samples instead of removing them as other replay strategies do. As a result, experiments show that whenever the memory budget is limited relative to the complexity of the data, OLCGM improves final accuracy compared with state-of-the-art replay strategies.
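One way to picture the condensation step is the gradient-matching sketch below: two stored same-class samples are replaced by a single synthetic sample optimized so that the gradient it induces approximates their combined gradient. The objective, optimizer, and hyperparameters here are assumptions for illustration; OLCGM's actual condensation procedure may differ.

```python
import torch
import torch.nn.functional as F


def condense_pair(model, loss_fn, x1, y1, x2, y2, steps=50, lr=0.1):
    """Condense two same-class buffer samples into one by gradient matching (sketch)."""
    assert torch.equal(y1, y2), "this sketch only condenses same-class samples"
    params = [p for p in model.parameters() if p.requires_grad]

    def flat_grad(x, y, create_graph=False):
        loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
        grads = torch.autograd.grad(loss, params, create_graph=create_graph)
        return torch.cat([g.reshape(-1) for g in grads])

    # Target: the combined gradient of the two real samples.
    target = 0.5 * (flat_grad(x1, y1) + flat_grad(x2, y2)).detach()

    x_syn = ((x1 + x2) / 2).clone().requires_grad_(True)   # initialize at the pixel mean
    opt = torch.optim.Adam([x_syn], lr=lr)

    for _ in range(steps):
        opt.zero_grad()
        g_syn = flat_grad(x_syn, y1, create_graph=True)     # differentiable w.r.t. x_syn
        match = F.mse_loss(g_syn, target)
        x_syn.grad, = torch.autograd.grad(match, x_syn)     # only update the synthetic sample
        opt.step()

    return x_syn.detach(), y1   # one condensed slot replaces two buffer entries
```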
A primary focus of continual learning research is alleviating the 'catastrophic forgetting' problem in neural networks by designing new algorithms that are more robust to distribution shifts. While recent progress in the continual learning literature is encouraging, our understanding of which properties of neural networks contribute to catastrophic forgetting remains limited. To address this, instead of focusing on continual learning algorithms, in this work we focus on the model itself and study the impact of the width of the neural network architecture on catastrophic forgetting, and show that width has a surprisingly significant effect on forgetting. To explain this effect, we study the learning dynamics of the network from various angles, such as gradient orthogonality, sparsity, and the lazy training regime. We provide potential explanations that are consistent with the empirical results across different architectures and continual learning benchmarks.
Humans and animals have the ability to continually acquire, fine-tune, and transfer knowledge and skills throughout their lifespan. This ability, referred to as lifelong learning, is mediated by a rich set of neurocognitive mechanisms that together contribute to the development and specialization of our sensorimotor skills as well as to long-term memory consolidation and retrieval. Consequently, lifelong learning capabilities are crucial for computational systems and autonomous agents interacting in the real world and processing continuous streams of information. However, lifelong learning remains a long-standing challenge for machine learning and neural network models since the continual acquisition of incrementally available information from non-stationary data distributions generally leads to catastrophic forgetting or interference. This limitation represents a major drawback for state-of-the-art deep neural network models that typically learn representations from stationary batches of training data, thus without accounting for situations in which information becomes incrementally available over time. In this review, we critically summarize the main challenges linked to lifelong learning for artificial learning systems and compare existing neural network approaches that alleviate, to different extents, catastrophic forgetting. Although significant advances have been made in domain-specific learning with neural networks, extensive research efforts are required for the development of robust lifelong learning on autonomous agents and robots. We discuss well-established and emerging research motivated by lifelong learning factors in biological systems such as structural plasticity, memory replay, curriculum and transfer learning, intrinsic motivation, and multisensory integration.
Despite the significant progress of artificial neural networks (ANNs), their design process remains notoriously tedious, depending mainly on intuition, experience, and trial and error. This human-dependent process is often time-consuming and prone to errors. Furthermore, these models are generally bound to their training context, with no consideration of changes in their surrounding environment. Continual adaptability and automation of neural networks is of paramount importance for several domains where model accessibility after deployment is limited (e.g., IoT devices, self-driving vehicles, etc.). Moreover, even accessible models require frequent maintenance after deployment to overcome issues such as concept/data drift, which can be cumbersome and restrictive. The current state of the art on adaptive ANNs is still a premature area of research. Nevertheless, Neural Architecture Search (NAS), a form of automated and continual learning, has recently gained increasing momentum in deep learning research, aiming to provide more robust and adaptive ANN development frameworks. This study is the first extensive review of the intersection between AutoML and CL, outlining research directions for the different methods that can facilitate full automation and lifelong plasticity in ANNs.
Continual learning (CL) aims to develop techniques by which a single model adapts to an increasing number of tasks, potentially leveraging learning across tasks in a resource-efficient manner. A major challenge for CL systems is catastrophic forgetting, in which earlier tasks are forgotten when learning a new task. To address this, replay-based CL approaches maintain and repeatedly retrain on a small buffer of data selected from the tasks encountered so far. We propose Gradient Coreset Replay (GCR), a novel strategy for replay buffer selection and update using a carefully designed optimization criterion. Specifically, we select and maintain a 'coreset' that closely approximates the gradient of all the data seen so far with respect to the current model parameters, and discuss the key strategies needed to apply it effectively in the continual learning setting. We show significant gains (2%-4%) over the state of the art in the well-studied offline continual learning setting. Our findings also transfer effectively to online/streaming CL settings, showing gains of up to 5% over existing approaches. Finally, we demonstrate the value of a supervised contrastive loss for continual learning, which yields a cumulative gain of up to 5% when combined with our subset selection strategy.
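A hedged sketch of gradient-based coreset selection in this spirit: score each buffered candidate by how well its gradient aligns with the average gradient over all candidates and keep the top k. GCR itself optimizes a weighted gradient-approximation criterion; this greedy cosine-similarity proxy is a simplification for illustration.

```python
import torch
import torch.nn.functional as F


def gradient_coreset(model, loss_fn, candidates, k):
    """Keep the k examples whose gradients best align with the mean candidate gradient."""
    params = [p for p in model.parameters() if p.requires_grad]

    def flat_grad(x, y):
        loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
        grads = torch.autograd.grad(loss, params)
        return torch.cat([g.reshape(-1) for g in grads]).detach()

    grads = torch.stack([flat_grad(x, y) for x, y in candidates])      # (N, D)
    mean_grad = grads.mean(dim=0, keepdim=True)                        # (1, D)
    scores = F.cosine_similarity(grads, mean_grad, dim=1)              # (N,)
    keep = torch.topk(scores, k).indices.tolist()
    return [candidates[i] for i in keep]
```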
Progress in continual reinforcement learning has been limited due to several barriers to entry: missing code, high compute requirements, and a lack of suitable benchmarks. In this work, we present CORA, a platform for Continual Reinforcement Learning Agents that provides benchmarks, baselines, and metrics in a single code package. The benchmarks we provide are designed to evaluate different aspects of the continual RL challenge, such as catastrophic forgetting, plasticity, ability to generalize, and sample-efficient learning. Three of the benchmarks utilize video game environments (Atari, Procgen, NetHack). The fourth benchmark, CHORES, consists of four different task sequences in a visually realistic home simulator, drawn from a diverse set of task and scene parameters. To compare continual RL methods on these benchmarks, we prepare three metrics in CORA: Continual Evaluation, Isolated Forgetting, and Zero-Shot Forward Transfer. Finally, CORA includes a set of performant, open-source baselines of existing algorithms for researchers to use and expand on. We release CORA and hope that the continual RL community can benefit from our contributions, to accelerate the development of new continual RL algorithms.
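As an illustration of the kind of quantity such metrics capture, the sketch below computes the generic average-forgetting measure common in the continual learning literature (per-task drop from best earlier accuracy to final accuracy). How CORA's Isolated Forgetting and Continual Evaluation metrics are computed is defined in the platform itself and may differ in detail.

```python
def average_forgetting(acc_matrix):
    """Generic forgetting measure.

    acc_matrix[i][j] is the accuracy on task j after training on task i (0-indexed).
    For each task except the last, take the drop from its best earlier accuracy
    to its accuracy after the final task, then average over tasks.
    """
    num_tasks = len(acc_matrix)
    drops = []
    for j in range(num_tasks - 1):  # the final task cannot yet be forgotten
        best_earlier = max(acc_matrix[i][j] for i in range(j, num_tasks - 1))
        drops.append(best_earlier - acc_matrix[num_tasks - 1][j])
    return sum(drops) / len(drops)
```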
Interacting with a complex world involves continual learning, in which tasks and data distributions change over time. A continual learning system should demonstrate both plasticity (acquisition of new knowledge) and stability (preservation of old knowledge). Catastrophic forgetting is the failure of stability, in which new experience overwrites previous experience. In the brain, replay of past experience is widely believed to reduce forgetting, yet it has been largely overlooked as a solution to forgetting in deep reinforcement learning. Here, we introduce CLEAR, a replay-based method that greatly reduces catastrophic forgetting in multi-task reinforcement learning. CLEAR leverages off-policy learning and behavioral cloning from replay to enhance stability, as well as on-policy learning to preserve plasticity. We show that CLEAR performs better than state-of-the-art deep learning techniques for mitigating forgetting, despite being significantly less complicated and not requiring any knowledge of the individual tasks being learned.
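The objective can be sketched as below: an ordinary on-policy actor-critic loss on fresh trajectories for plasticity, plus behavioral-cloning terms on replayed trajectories (matching stored action distributions and value estimates) for stability. The `policy` interface and the loss weights are assumptions for illustration; the paper's exact formulation and coefficients may differ.

```python
import torch
import torch.nn.functional as F


def clear_style_loss(policy, new_batch, replay_batch,
                     bc_policy_weight=0.5, bc_value_weight=0.005):
    """Combine an on-policy loss with behavioral cloning from replay (sketch)."""
    # Plasticity: ordinary actor-critic loss on newly collected experience
    # (actor_critic_loss is an assumed method of the policy object).
    on_policy_loss = policy.actor_critic_loss(new_batch)

    # Stability: clone past behavior stored in the replay buffer.
    logits, values = policy(replay_batch["observations"])
    policy_cloning = F.kl_div(F.log_softmax(logits, dim=-1),
                              replay_batch["old_action_probs"],
                              reduction="batchmean")
    value_cloning = F.mse_loss(values, replay_batch["old_values"])

    return on_policy_loss + bc_policy_weight * policy_cloning + bc_value_weight * value_cloning
```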