Learning from a sequence of tasks over a lifetime is essential for an agent aiming at artificial general intelligence. This requires the agent to continually learn and memorize new knowledge without interference. This paper first demonstrates a fundamental issue of lifelong learning with neural networks, named anterograde forgetting: retaining and transferring memory may inhibit the learning of new knowledge. This is attributed to the fact that the learning capacity of a neural network shrinks as it keeps memorizing historical knowledge, and that conceptual confusion may occur when irrelevant old knowledge is transferred to the current task. This work proposes a general framework named Cycled Memory Networks (CMN) to address anterograde forgetting in neural networks for lifelong learning. The CMN consists of two individual memory networks that store short-term and long-term memories, respectively, to avoid capacity shrinkage. A transfer cell is designed to connect the two memory networks, enabling knowledge transfer from the long-term memory network to the short-term memory network to mitigate conceptual confusion, and a memory consolidation mechanism is developed to integrate short-term knowledge into the long-term memory network for knowledge accumulation. Experimental results demonstrate that the CMN can effectively alleviate anterograde forgetting on several task-related, task-conflicting, class-incremental and cross-domain benchmarks.
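The abstract does not detail the architecture, so the following is only a minimal PyTorch sketch of one plausible reading of the two-memory design: a short-term network, a frozen long-term network, a gated transfer cell feeding long-term features into short-term learning, and a distillation-style consolidation step. All module names, the gating form, and the consolidation loss are assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a two-memory lifelong learner in the spirit of CMN.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TransferCell(nn.Module):
    """Gates how much long-term knowledge flows into the short-term learner."""
    def __init__(self, dim):
        super().__init__()
        self.gate = nn.Linear(2 * dim, dim)

    def forward(self, short_feat, long_feat):
        g = torch.sigmoid(self.gate(torch.cat([short_feat, long_feat], dim=-1)))
        return short_feat + g * long_feat  # transfer only what is judged relevant

class CycledMemoryNet(nn.Module):
    def __init__(self, in_dim, hid_dim, n_classes):
        super().__init__()
        self.short_term = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.ReLU())
        self.long_term = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.ReLU())
        self.transfer = TransferCell(hid_dim)
        self.head = nn.Linear(hid_dim, n_classes)

    def forward(self, x):
        s = self.short_term(x)
        with torch.no_grad():            # long-term memory is frozen while learning a task
            l = self.long_term(x)
        return self.head(self.transfer(s, l))

    def consolidate(self, loader, epochs=1, lr=1e-3):
        """Distil short-term features into the long-term network after a task."""
        opt = torch.optim.SGD(self.long_term.parameters(), lr=lr)
        for _ in range(epochs):
            for x, _ in loader:
                loss = F.mse_loss(self.long_term(x), self.short_term(x).detach())
                opt.zero_grad(); loss.backward(); opt.step()
```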
Continual learning aims to learn the current task quickly and continually from a sequence of tasks. Compared with other kinds of methods, experience-replay-based methods have shown great advantages in overcoming catastrophic forgetting. A common limitation of these methods is the data imbalance between previous tasks and the current task, which further aggravates forgetting. Moreover, how to effectively resolve the stability-plasticity dilemma in this setting is also an urgent problem. In this paper, we overcome these challenges by proposing a new framework named Meta-learning update via Multi-scale Knowledge Distillation and Data Augmentation (MMKDDA). Specifically, we apply multi-scale knowledge distillation to capture the evolution of long-range and short-range spatial relationships at different feature levels, alleviating the data imbalance problem. In addition, our method mixes samples from the episodic memory and the current task during the online continual training procedure, thereby alleviating the side effects caused by the shift of the probability distribution. Moreover, we optimize our model via a meta-learning update that takes into account the number of previously seen tasks, which helps to keep a better balance between stability and plasticity. Finally, experimental evaluations on four benchmark datasets show the effectiveness of the proposed MMKDDA framework against other popular baselines, and ablation studies are also conducted to further analyze the role of each component in our framework.
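As a rough illustration of the two ingredients named above, the sketch below mixes episodic-memory samples into the current batch and matches intermediate feature maps of the current and previous model at several depths. The `return_features=True` model API, the chosen layer weights, and the loss coefficient are assumptions, not the paper's settings.

```python
# Hypothetical sketch of replay mixing plus multi-scale feature distillation.
import torch
import torch.nn.functional as F

def mixed_batch(current_batch, memory_batch):
    """Interleave current-task samples with episodic-memory samples."""
    xc, yc = current_batch
    xm, ym = memory_batch
    return torch.cat([xc, xm]), torch.cat([yc, ym])

def multiscale_distill_loss(feats_new, feats_old, weights=(0.5, 0.75, 1.0)):
    """Match feature maps of the current and previous model at several depths,
    so both short-range and long-range spatial structure is preserved."""
    loss = 0.0
    for w, fn, fo in zip(weights, feats_new, feats_old):
        loss = loss + w * F.mse_loss(fn, fo.detach())
    return loss

def training_loss(model, old_model, batch, mem_batch, alpha=1.0):
    x, y = mixed_batch(batch, mem_batch)
    logits, feats = model(x, return_features=True)        # assumed model API
    with torch.no_grad():
        _, old_feats = old_model(x, return_features=True)
    return F.cross_entropy(logits, y) + alpha * multiscale_distill_loss(feats, old_feats)
```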
The continual learning (CL) ability of humans is closely related to the stability-plasticity dilemma, which describes how humans achieve an ongoing learning capacity while preserving learned information. The notion of CL has been present in artificial intelligence (AI) since its inception. This paper presents a comprehensive review of CL. Different from previous reviews, which mainly focus on the catastrophic forgetting phenomenon in CL, this paper surveys CL from a more macroscopic perspective based on the stability-plasticity mechanism. Analogous to its biological counterpart, an "intelligent" AI agent should i) remember previously learned information (information retrospection); ii) continuously infer new information (information prospection); and iii) transfer useful information (information transfer) to achieve high-level CL. Based on this taxonomy, evaluation metrics, algorithms, applications, and some open problems are reviewed. Our main contributions are i) re-examining CL from the level of artificial general intelligence; ii) providing a detailed and extensive overview of the CL topic; and iii) presenting some novel ideas on the potential development of CL.
Humans and animals have the ability to continually acquire, fine-tune, and transfer knowledge and skills throughout their lifespan. This ability, referred to as lifelong learning, is mediated by a rich set of neurocognitive mechanisms that together contribute to the development and specialization of our sensorimotor skills as well as to long-term memory consolidation and retrieval. Consequently, lifelong learning capabilities are crucial for computational systems and autonomous agents interacting in the real world and processing continuous streams of information. However, lifelong learning remains a long-standing challenge for machine learning and neural network models since the continual acquisition of incrementally available information from non-stationary data distributions generally leads to catastrophic forgetting or interference. This limitation represents a major drawback for state-of-the-art deep neural network models that typically learn representations from stationary batches of training data, thus without accounting for situations in which information becomes incrementally available over time. In this review, we critically summarize the main challenges linked to lifelong learning for artificial learning systems and compare existing neural network approaches that alleviate, to different extents, catastrophic forgetting. Although significant advances have been made in domain-specific learning with neural networks, extensive research efforts are required for the development of robust lifelong learning on autonomous agents and robots. We discuss well-established and emerging research motivated by lifelong learning factors in biological systems such as structural plasticity, memory replay, curriculum and transfer learning, intrinsic motivation, and multisensory integration.
Continual learning aims to learn a sequence of tasks by leveraging knowledge acquired in the past in an online manner, while remaining able to perform well on all previous tasks; this ability is crucial for artificial intelligence (AI) systems, and hence continual learning is better suited to most realistic and complex application scenarios than traditional learning paradigms. However, current models usually learn a generic representation over the class labels of each task and adopt some strategy to avoid catastrophic forgetting. We postulate that selecting only the relevant and useful parts of the acquired knowledge is more effective than exploiting the whole of it. Based on this idea, in this paper we propose a new framework, named Selecting Related Knowledge for Online Continual Learning (SRKOCL), which incorporates an additional efficient channel attention mechanism to select the particular knowledge relevant to each task. Our model also combines experience replay and knowledge distillation to avoid catastrophic forgetting. Finally, extensive experiments are conducted on different benchmarks, and the competitive experimental results demonstrate that the proposed SRKOCL is a promising approach compared with the state of the art.
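The abstract names an "efficient channel attention" mechanism; the block below follows the standard ECA formulation (global average pooling, a 1-D convolution across channels, and sigmoid gating). How SRKOCL wires this block per task is not specified, so treat it as an illustrative building block only.

```python
# Standard efficient channel attention block, used here to illustrate
# channel-wise selection of "relevant knowledge" in a feature map.
import torch
import torch.nn as nn

class EfficientChannelAttention(nn.Module):
    def __init__(self, channels, k_size=3):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.conv = nn.Conv1d(1, 1, kernel_size=k_size, padding=k_size // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):                                   # x: (N, C, H, W)
        y = self.pool(x)                                    # (N, C, 1, 1)
        y = self.conv(y.squeeze(-1).transpose(-1, -2))      # 1-D conv over the channel axis
        y = self.sigmoid(y.transpose(-1, -2).unsqueeze(-1))
        return x * y                                        # reweight channels
```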
Despite the significant advances achieved in artificial neural networks (ANNs), their design process remains notoriously tedious, depending mainly on intuition, experience, and trial and error. This human-dependent process is often time-consuming and prone to errors. Furthermore, these models are typically bound to their training contexts, without considering changes in their surrounding environments. Continual adaptability and automation of neural networks is of paramount importance for several domains where model accessibility after deployment is limited (e.g., IoT devices, self-driving vehicles). Moreover, even accessible models require frequent maintenance after deployment to overcome issues such as concept/data drift, which can be cumbersome and restrictive. The current state of the art on adaptive ANNs is still a premature area of research. Nevertheless, Neural Architecture Search (NAS), a form of automated machine learning, has recently gained increasing momentum in deep learning research, aiming to provide more robust and adaptive ANN development frameworks. This study is the first extensive review on the intersection of AutoML and CL, outlining research directions for the different methods that can facilitate full automation and lifelong plasticity in ANNs.
Remembering and forgetting mechanisms are two sides of the same coin in the human learning-memory system. Inspired by the memory mechanisms of the human brain, modern machine learning systems have been striving to endow machines with lifelong learning ability through better remembering, while treating forgetting as the enemy to overcome. Nevertheless, this idea may capture only half of the picture. Recently, an increasing number of researchers have argued that the brain is born to forget, i.e., forgetting is a natural and active process that yields abstract, rich, and flexible representations. This paper presents a learning model with an active forgetting mechanism (AFM) for artificial neural networks. The AFM is introduced into a neural network via a "plug-and-play" forgetting layer (P&PF), consisting of inhibitory neurons with an internal regulation strategy (IRS) to adjust their own extinction rates through lateral inhibition, and an external regulation strategy (ERS) to regulate the extinction rates of excitatory neurons through inhibition. Experimental studies show that the P&PF offers surprising benefits: adaptive structure, strong generalization, long-term learning and memory, and robustness to data and parameter perturbations. This work sheds light on the importance of forgetting in the learning process and provides new perspectives for understanding the underlying mechanisms of neural networks.
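To make the idea of a forgetting layer concrete, here is a very loose sketch in which excitatory activations are scaled by per-unit retention gates and a small inhibitory branch modulates those gates (an "external regulation" of sorts). The actual IRS/ERS rules and extinction dynamics of the paper are not reproduced; everything below is an illustrative assumption.

```python
# Loose sketch of a plug-and-play forgetting layer with inhibitory gating.
import torch
import torch.nn as nn

class ForgettingLayer(nn.Module):
    def __init__(self, dim, n_inhib=16):
        super().__init__()
        self.retention = nn.Parameter(torch.zeros(dim))     # learnable per-unit retention
        self.inhib = nn.Linear(dim, n_inhib)                 # inhibitory neurons
        self.modulate = nn.Linear(n_inhib, dim)              # inhibition -> excitatory gates

    def forward(self, x):
        inhibition = torch.relu(self.inhib(x))
        gate = torch.sigmoid(self.retention - self.modulate(inhibition))
        return x * gate                                      # gated (partially "forgotten") output
```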
Continual Learning (CL) is a field dedicated to devising algorithms able to achieve lifelong learning. Overcoming the knowledge disruption of previously acquired concepts, a drawback affecting deep learning models that goes by the name of catastrophic forgetting, is a hard challenge. Currently, deep learning methods can attain impressive results when the modeled data do not undergo a considerable distributional shift in subsequent learning sessions, but whenever we expose such systems to this incremental setting, performance drops very quickly. Overcoming this limitation is fundamental: it would allow us to build truly intelligent systems that exhibit both stability and plasticity, and it would spare us the onerous cost of retraining these architectures from scratch on the updated data. In this thesis, we tackle the problem from multiple directions. In a first study, we show that in rehearsal-based techniques (systems that use a memory buffer), the quantity of data stored in the rehearsal buffer is a more important factor than the quality of the data. Secondly, we present one of the early works on incremental learning with ViT architectures, comparing functional, weight, and attention regularization approaches and proposing an effective novel asymmetric loss. Finally, we conclude with a study on pretraining and how it affects performance in Continual Learning, raising some questions about the effective progression of the field, before closing with future directions and final remarks.
Continual Learning (CL) is an emerging machine learning paradigm that aims to learn from a continuous stream of tasks without forgetting knowledge learned from the previous tasks. To avoid performance decrease caused by forgetting, prior studies exploit episodic memory (EM), which stores a subset of the past observed samples while learning from new non-i.i.d. data. Despite the promising results, since CL is often assumed to execute on mobile or IoT devices, the EM size is bounded by the small hardware memory capacity and makes it infeasible to meet the accuracy requirements for real-world applications. Specifically, all prior CL methods discard samples overflowed from the EM and can never retrieve them back for subsequent training steps, incurring loss of information that would exacerbate catastrophic forgetting. We explore a novel hierarchical EM management strategy to address the forgetting issue. In particular, in mobile and IoT devices, real-time data can be stored not just in high-speed RAMs but in internal storage devices as well, which offer significantly larger capacity than the RAMs. Based on this insight, we propose to exploit the abundant storage to preserve past experiences and alleviate the forgetting by allowing CL to efficiently migrate samples between memory and storage without being interfered by the slow access speed of the storage. We call it Carousel Memory (CarM). As CarM is complementary to existing CL methods, we conduct extensive evaluations of our method with seven popular CL methods and show that CarM significantly improves the accuracy of the methods across different settings by large margins in final average accuracy (up to 28.4%) while retaining the same training efficiency.
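A minimal sketch of the two-tier idea described above: a small, fast "RAM" tier serves training batches, while samples evicted from it are kept in a much larger "storage" tier instead of being discarded, and are rotated back in between steps. The random eviction/promotion policies here are assumptions; CarM's actual policies and asynchronous I/O scheduling are more involved.

```python
# Two-tier episodic memory sketch in the spirit of Carousel Memory.
import random

class TwoTierMemory:
    def __init__(self, ram_capacity=200, storage_capacity=10000):
        self.ram, self.storage = [], []
        self.ram_capacity, self.storage_capacity = ram_capacity, storage_capacity

    def insert(self, sample):
        if len(self.ram) < self.ram_capacity:
            self.ram.append(sample)
            return
        victim = self.ram.pop(random.randrange(len(self.ram)))  # evict to storage, not to the void
        if len(self.storage) < self.storage_capacity:
            self.storage.append(victim)
        else:
            self.storage[random.randrange(len(self.storage))] = victim
        self.ram.append(sample)

    def refresh(self, k=32):
        """Swap a few storage samples back into RAM between training steps."""
        for _ in range(min(k, len(self.storage), len(self.ram))):
            i = random.randrange(len(self.storage))
            j = random.randrange(len(self.ram))
            self.storage[i], self.ram[j] = self.ram[j], self.storage[i]

    def sample_batch(self, n):
        return random.sample(self.ram, min(n, len(self.ram)))
```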
The innate capacity of humans and other animals to learn a diverse, and often interfering, range of knowledge and skills throughout their lifespan is a hallmark of natural intelligence, with obvious evolutionary motivations. In parallel, the ability of artificial neural networks (ANNs) to learn across a range of tasks and domains, combining and re-using learned representations where required, is a clear goal of artificial intelligence. This capacity, widely described as continual learning, has become a prolific subfield of machine learning research. Despite the numerous successes of deep learning in recent years, across domains ranging from image recognition to machine translation, such continual task learning has proved challenging. Neural networks trained on sequences of tasks with stochastic gradient descent often suffer from representational interference, whereby the learned weights for a given task effectively overwrite those of previous tasks in a process termed catastrophic forgetting. This represents a major obstacle to the development of more generalized artificial learning systems capable of accumulating knowledge over time and across task spaces, in a manner analogous to humans. An accompanying repository of selected papers and implementations can be found at https://github.com/mccaffary/continual-learning.
Artificial neural networks thrive in solving the classification problem for a particular rigid task, acquiring knowledge through generalized learning behaviour from a distinct training phase. The resulting network resembles a static entity of knowledge, with endeavours to extend this knowledge without targeting the original task resulting in a catastrophic forgetting. Continual learning shifts this paradigm towards networks that can continually accumulate knowledge over different tasks without the need to retrain from scratch. We focus on task incremental classification, where tasks arrive sequentially and are delineated by clear boundaries. Our main contributions concern (1) a taxonomy and extensive overview of the state-of-the-art; (2) a novel framework to continually determine the stability-plasticity trade-off of the continual learner; (3) a comprehensive experimental comparison of 11 state-of-the-art continual learning methods and 4 baselines. We empirically scrutinize method strengths and weaknesses on three benchmarks, considering Tiny Imagenet and large-scale unbalanced iNaturalist and a sequence of recognition datasets. We study the influence of model capacity, weight decay and dropout regularization, and the order in which the tasks are presented, and qualitatively compare methods in terms of required memory, computation time and storage.
Graph learning is a popular approach for performing machine learning on graph-structured data. It has revolutionized the machine learning ability to model graph data to address downstream tasks. Its application is wide due to the availability of graph data ranging from all types of networks to information systems. Most graph learning methods assume that the graph is static and its complete structure is known during training. This limits their applicability since they cannot be applied to problems where the underlying graph grows over time and/or new tasks emerge incrementally. Such applications require a lifelong learning approach that can learn the graph continuously and accommodate new information whilst retaining previously learned knowledge. Lifelong learning methods that enable continuous learning in regular domains like images and text cannot be directly applied to continuously evolving graph data, due to its irregular structure. As a result, graph lifelong learning is gaining attention from the research community. This survey paper provides a comprehensive overview of recent advancements in graph lifelong learning, including the categorization of existing methods, and the discussions of potential applications and open research problems.
Continual Learning is considered a key step toward next-generation Artificial Intelligence. Among various methods, replay-based approaches that maintain and replay a small episodic memory of previous samples are one of the most successful strategies against catastrophic forgetting. However, since forgetting is inevitable given bounded memory and unbounded tasks, how to forget is a problem continual learning must address. Therefore, beyond simply avoiding catastrophic forgetting, an under-explored issue is how to reasonably forget while ensuring the merits of human memory, including 1. storage efficiency, 2. generalizability, and 3. some interpretability. To achieve these simultaneously, our paper proposes a new saliency-augmented memory completion framework for continual learning, inspired by recent discoveries in memory completion separation in cognitive neuroscience. Specifically, we innovatively propose to store the part of the image most important to the tasks in episodic memory by saliency map extraction and memory encoding. When learning new tasks, previous data from memory are inpainted by an adaptive data generation module, which is inspired by how humans complete episodic memory. The module's parameters are shared across all tasks and it can be jointly trained with a continual learning classifier as bilevel optimization. Extensive experiments on several continual learning and image classification benchmarks demonstrate the proposed method's effectiveness and efficiency.
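A rough sketch of the storage side of this idea: only the pixels inside a saliency mask are kept in episodic memory, and a shared completion module fills in the rest at replay time. The saliency thresholding and the tiny inpainting network below are placeholders (assumptions), not the paper's components; the point is only the store-sparse / complete-at-replay pattern.

```python
# Sketch: store saliency-masked pixels, complete the image at replay time.
import torch
import torch.nn as nn

def compress(image, saliency_map, keep_ratio=0.25):
    """image: (C, H, W); saliency_map: (1, H, W). Keep only the most salient pixels."""
    thresh = torch.quantile(saliency_map.flatten(), 1.0 - keep_ratio)
    mask = (saliency_map >= thresh).float()
    return image * mask, mask                        # store (sparse image, mask)

class Inpainter(nn.Module):
    """Shared completion module, trained jointly with the continual classifier."""
    def __init__(self, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels + 1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, 3, padding=1))

    def forward(self, sparse_image, mask):
        # Batched tensors: sparse_image (N, C, H, W), mask (N, 1, H, W).
        filled = self.net(torch.cat([sparse_image, mask], dim=1))
        return sparse_image + (1.0 - mask) * filled   # keep stored pixels, fill the rest
```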
Lifelong learning aims to learn a sequence of tasks without forgetting previously acquired knowledge. However, the training data involved may not be legitimate to keep for a lifetime, due to privacy or copyright reasons. In practical scenarios, for instance, the model owner may wish to enable or disable the knowledge of specific tasks or specific samples from time to time. Such flexible control over knowledge transfer has unfortunately been ignored in previous incremental or decremental learning methods, even at the level of the problem setup. In this paper, we explore a novel learning scheme, termed Learning with Recoverable Forgetting (LIRF), that explicitly handles task- or sample-specific knowledge removal and recovery. Specifically, LIRF introduces two innovative schemes, knowledge deposit and withdrawal, which allow user-designated knowledge to be isolated from a pre-trained network and injected back when necessary. During the knowledge deposit process, the specified knowledge is extracted from the target network and stored in a deposit module, while the insensitive or general knowledge of the target network is preserved and further enhanced. During knowledge withdrawal, the deposited knowledge is added back to the target network. The deposit and withdrawal processes only require a few epochs of fine-tuning on the data to be removed, ensuring both data and time efficiency. We conduct experiments on several datasets and demonstrate that the proposed LIRF strategy yields encouraging generalization capability.
The ability of neural networks (NNs) to learn and remember multiple tasks sequentially faces tough challenges on the way to general artificial intelligence, due to the catastrophic forgetting (CF) problem. Fortunately, the recent OWM (Orthogonal Weight Modification) method and several other continual learning (CL) approaches suggest some promising ways to overcome CF. However, none of the existing CL methods explores the following three crucial questions for effectively overcoming CF: what actually contributes to effective weight modification of an NN during its sequential task learning? When the data distribution of a new task changes with respect to the previously learned tasks, should a uniform or a task-specific weight-modification strategy be adopted? And what is the upper bound of the number of tasks a given CL method can learn? To address these questions, in this paper we first reveal that the weight gradient of a new learning task is determined by both the input space of the new task and the weight space of the previously learned tasks. Based on this observation and the recursive least squares (RLS) method, we propose a new efficient and effective continual learning method, EOWM, via enhanced OWM. Moreover, we theoretically and explicitly derive the upper bound of learnable tasks for our EOWM. Extensive experiments conducted on standard benchmarks demonstrate that our EOWM is effective and outperforms all state-of-the-art CL baselines.
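The abstract builds on Orthogonal Weight Modification, whose core idea is a recursive-least-squares projector that restricts weight updates to the subspace orthogonal to the inputs of earlier tasks. The sketch below implements that underlying projector; the "enhanced" parts specific to EOWM (and its task-capacity bound) are not reproduced here.

```python
# OWM-style orthogonal projector maintained by recursive least squares.
import numpy as np

class OrthogonalProjector:
    def __init__(self, dim, alpha=1e-3):
        self.P = np.eye(dim) / alpha        # RLS-style inverse correlation matrix

    def update(self, x):
        """Fold one layer-input vector of the current task into the projector."""
        x = x.reshape(-1, 1)
        Px = self.P @ x
        self.P -= (Px @ Px.T) / (1.0 + float(x.T @ Px))

    def project(self, grad):
        """Project a weight gradient (out_dim x in_dim) so it barely disturbs
        the mappings learned on inputs already folded in via update()."""
        return grad @ self.P
```

In use, `update()` is called with the layer inputs of each finished task, and every subsequent gradient step on that layer is passed through `project()` before the optimizer applies it.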
Class-Incremental Learning (CIL) struggles with catastrophic forgetting when learning new knowledge, and Data-Free CIL (DFCIL) is even more challenging without access to the training data of previously learned classes. Although recent DFCIL works introduce techniques such as model inversion to synthesize data of previous classes, they fail to overcome forgetting due to the severe domain gap between synthetic and real data. To address this issue, this paper proposes relation-guided representation learning (RRL) for DFCIL, dubbed R-DFCIL. In RRL, we introduce relational knowledge distillation to flexibly transfer the structural relations of new data from the old model to the current model. RRL-boosted DFCIL can guide the current model to learn representations of new classes that are better compatible with the representations of previous classes, which greatly reduces forgetting while improving plasticity. To avoid the mutual interference between representation learning and classifier learning, we employ a local rather than a global classification loss during RRL. After RRL, the classification head is refined with a global class-balanced classification loss to address the data imbalance issue and learn the decision boundaries between new and previous classes. Extensive experiments on CIFAR100, Tiny-ImageNet200, and ImageNet100 demonstrate that our R-DFCIL significantly surpasses previous approaches and achieves new state-of-the-art performance for DFCIL. Code is available at https://github.com/jianzhangcs/r-dfcil.
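One common instantiation of relational knowledge distillation is to match the pairwise distance structure of a batch's features under the old and new models, rather than the features themselves. Whether R-DFCIL uses exactly this distance-wise form is an assumption; the sketch below only illustrates the general idea of transferring relations instead of point-wise features.

```python
# Distance-wise relational knowledge distillation between old and new feature extractors.
import torch
import torch.nn.functional as F

def pairwise_distances(feats):
    d = torch.cdist(feats, feats, p=2)
    mean = d[d > 0].mean().clamp_min(1e-8)
    return d / mean                                 # scale-invariant relation matrix

def relational_kd_loss(new_feats, old_feats):
    with torch.no_grad():
        target = pairwise_distances(old_feats)      # relations under the frozen old model
    return F.smooth_l1_loss(pairwise_distances(new_feats), target)
```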
According to the Complementary Learning Systems (CLS) theory (McClelland et al., 1995) in neuroscience, humans learn continually and effectively via two complementary systems: a fast learning system centered on the hippocampus for the rapid learning of the specifics of individual experiences, and a slow learning system located in the neocortex for the gradual acquisition of structured knowledge about the environment. Motivated by this theory, we propose DualNets (for Dual Networks), a general continual learning framework comprising a fast learning system for supervised learning of pattern-separated representations from specific tasks, and a slow learning system for learning task-agnostic general representations via self-supervised learning (SSL). DualNets can seamlessly incorporate both representation types into a holistic framework to facilitate better continual learning in deep neural networks. Via extensive experiments, we demonstrate the promising results of DualNets on a wide range of continual learning protocols, from the standard offline, task-aware setting to the challenging online, task-free scenario, notably including the challenging CTrL benchmark (Veniat et al., 2020; Ostapenko et al., 2021). Furthermore, we conduct comprehensive ablation studies to validate DualNets' efficacy, robustness, and scalability. Code is publicly available at https://github.com/phquang/dualnet.
Continual learning aims to learn a sequence of tasks from dynamic data distributions. Without access to old training samples, knowledge transfer from old tasks to the new task is difficult to determine, and it may be either positive or negative. If old knowledge interferes with the learning of a new task, i.e., the forward knowledge transfer is negative, then precisely remembering the old tasks will further aggravate the interference and thus degrade continual learning performance. In contrast, biological neural networks can actively forget old knowledge that conflicts with the learning of new experiences, through regulating learning-triggered synaptic expansion and synaptic convergence. Inspired by biological active forgetting, we propose to actively forget the old knowledge that limits the learning of new tasks, so as to benefit continual learning. Under the framework of Bayesian continual learning, we develop a novel approach named Active Forgetting with synaptic Expansion-Convergence (AFEC). Our method dynamically expands parameters to learn each new task and then selectively combines them, which is formally consistent with the underlying mechanism of biological active forgetting. We extensively evaluate AFEC on a variety of continual learning benchmarks, including CIFAR-10 regression tasks, visual classification tasks, and Atari reinforcement learning tasks, where AFEC effectively improves the learning of new tasks and achieves state-of-the-art performance in a plug-and-play way.
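A hedged sketch of how such an objective might look: besides the usual quadratic pull toward old-task parameters (as in EWC-style Bayesian continual learning), a second term pulls toward "expanded" parameters trained on the new task alone, so that old knowledge conflicting with the new task can be selectively forgotten. The use of diagonal Fisher estimates and the two coefficients are assumptions about the exact formulation.

```python
# Sketch of a weight-regularisation penalty in the spirit of AFEC.
import torch

def afec_style_penalty(params, old_params, old_fisher,
                       expanded_params, expanded_fisher,
                       lam=1.0, lam_e=1.0):
    penalty = 0.0
    for p, po, fo, pe, fe in zip(params, old_params, old_fisher,
                                 expanded_params, expanded_fisher):
        penalty = penalty + 0.5 * lam * (fo * (p - po) ** 2).sum()    # remember old tasks
        penalty = penalty + 0.5 * lam_e * (fe * (p - pe) ** 2).sum()  # actively forget toward the new-task solution
    return penalty
```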
Using task-specific components within a neural network for continual learning (CL) is a compelling strategy to address the stability-plasticity dilemma in fixed-capacity models without access to past data. Current methods focus only on selecting a sub-network for a new task that reduces forgetting of past tasks. However, this selection could limit the forward transfer of relevant past knowledge that would help future learning. Our study reveals that jointly satisfying both objectives is more challenging when a unified classifier is used for all classes of seen tasks (class-incremental learning, class-IL), as it is prone to ambiguities between classes across tasks. Moreover, the challenge grows as the semantic similarity of classes across tasks increases. To address this challenge, we propose a new CL method, named AFAF, that aims to Avoid Forgetting and Allow Forward transfer in class-IL using fixed-capacity models. AFAF allocates a sub-network that enables selective transfer of relevant knowledge to a new task while preserving past knowledge, reuses some previously allocated components to utilize the fixed capacity, and addresses class ambiguities when similarities exist. Experiments show the effectiveness of AFAF in providing models with multiple desirable CL properties, while outperforming state-of-the-art methods on various challenging benchmarks with different degrees of semantic similarity.
Incremental Task Learning (ITL) is a category of continual learning that seeks to train a single network on multiple tasks, one after another, where the training data of each task is only available while that task is being trained. Neural networks tend to forget older tasks when they are trained on newer tasks, a property commonly referred to as catastrophic forgetting. To address this issue, ITL methods use episodic memory, parameter regularization, masking and pruning, or extensible network structures. In this paper, we propose a new incremental task learning framework based on low-rank factorization. In particular, we represent the network weights of each layer as a linear combination of several rank-1 matrices. To update the network for a new task, we learn a rank-1 (or low-rank) matrix and add it to the weights of every layer. We also introduce an additional selector vector that assigns different weights to the low-rank matrices learned for previous tasks. We show that our approach performs better than the current state-of-the-art methods in terms of both accuracy and forgetting. Our approach also offers better memory efficiency compared to episodic-memory-based and mask-based approaches. Our code will be available at https://github.com/csiplab/task-increment-rank-update.git.
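A minimal sketch of a layer parameterized this way: the weight is a base matrix plus a growing sum of rank-1 updates, one per task, re-weighted by a per-task selector vector learned when a new task arrives. The exact parameterization (pure rank-1 versus low-rank blocks, where selectors are applied, which factors are frozen during training) is an assumption based on the abstract.

```python
# Linear layer whose weight is a base matrix plus selector-weighted rank-1 updates.
import torch
import torch.nn as nn

class RankOneIncrementalLinear(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.base = nn.Parameter(torch.randn(out_dim, in_dim) * 0.02)
        self.us, self.vs = nn.ParameterList(), nn.ParameterList()   # rank-1 factors per task
        self.selectors = nn.ParameterList()                          # one selector per task

    def add_task(self):
        out_dim, in_dim = self.base.shape
        self.us.append(nn.Parameter(torch.randn(out_dim, 1) * 0.02))
        self.vs.append(nn.Parameter(torch.randn(1, in_dim) * 0.02))
        # Selector weights over all rank-1 components seen so far; older factors
        # would typically be frozen while only the new factor and selector train.
        self.selectors.append(nn.Parameter(torch.ones(len(self.us))))

    def weight(self, task_id):
        s = self.selectors[task_id]
        w = self.base.clone()
        for i in range(task_id + 1):
            w = w + s[i] * (self.us[i] @ self.vs[i])
        return w

    def forward(self, x, task_id):
        return x @ self.weight(task_id).t()
```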