Existing work on continual learning (CL) focuses on mitigating catastrophic forgetting, i.e., the deterioration of model performance on past tasks when learning a new task. However, the training efficiency of CL systems is under-investigated, which limits their real-world deployment in resource-limited scenarios. In this work, we propose a novel framework called Sparse Continual Learning (SparCL), the first study to leverage sparsity for cost-effective continual learning on edge devices. SparCL achieves both training acceleration and accuracy preservation through the synergy of three aspects: weight sparsity, data efficiency, and gradient sparsity. Specifically, we propose to learn a sparse network throughout the entire CL process, dynamic data removal (DDR) to discard less informative training data, and dynamic gradient masking (DGM) to sparsify the gradient updates. Each of them not only improves efficiency but also further mitigates catastrophic forgetting. SparCL consistently improves the training efficiency of existing state-of-the-art (SOTA) CL methods, substantially reducing training FLOPs, and, surprisingly, further improves SOTA accuracy by up to 1.7%. SparCL also outperforms competitive baselines obtained by adapting SOTA sparse training methods to the CL setting, in both efficiency and accuracy. We also evaluate SparCL on a real mobile phone, further demonstrating the practical potential of our method.
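As a rough illustration of the gradient-sparsification idea described above, the minimal sketch below masks each layer's gradient so that only the largest-magnitude entries are applied. The keep ratio, per-layer top-k criterion, and placement in the training step are assumptions made for illustration, not SparCL's exact DGM rule.

```python
import torch

def mask_gradients(model: torch.nn.Module, keep_ratio: float = 0.2) -> None:
    """Zero out all but the largest-magnitude fraction of each parameter's gradient.

    A minimal sketch of dynamic gradient masking: call after loss.backward()
    and before optimizer.step(). The per-layer top-k criterion is an assumption.
    """
    for param in model.parameters():
        if param.grad is None:
            continue
        grad = param.grad
        k = max(1, int(keep_ratio * grad.numel()))
        # Threshold = k-th largest absolute gradient value in this tensor.
        threshold = torch.topk(grad.abs().flatten(), k).values.min()
        grad.mul_((grad.abs() >= threshold).to(grad.dtype))

# Hypothetical usage inside a training step:
# loss.backward()
# mask_gradients(model, keep_ratio=0.2)
# optimizer.step()
```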
Recently, sparse training has emerged as a promising paradigm for efficient deep learning on edge devices. Current research mainly focuses on reducing training cost by further increasing model sparsity. However, increasing sparsity is not always ideal, since it inevitably introduces severe accuracy degradation at extremely high sparsity levels. This paper explores other possible directions to effectively and efficiently reduce sparse training cost while preserving accuracy. To this end, we investigate two techniques, namely layer freezing and data sieving. First, layer freezing has been successful in dense model training and fine-tuning, yet it has never been adopted in the sparse training domain, and the unique characteristics of sparse training may hinder its incorporation. We therefore analyze the feasibility and potential of layer freezing in sparse training and find that it can save substantial training cost. Second, we propose a data sieving method for dataset-efficient training, which further reduces training cost by ensuring that only a subset of the dataset is used throughout the entire training process. We show that both techniques can be incorporated into a sparse training algorithm to form a generic framework, which we dub SpFDE. Our extensive experiments demonstrate that SpFDE significantly reduces training cost while preserving accuracy along three dimensions: weight sparsity, layer freezing, and dataset sieving.
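The layer-freezing idea can be illustrated with a short sketch: once training reaches a chosen epoch, the earliest layers stop receiving gradient updates. The freezing schedule and the simple name-prefix match are assumptions for illustration, not SpFDE's actual policy.

```python
import torch.nn as nn

def freeze_prefix(model: nn.Module, frozen_prefixes: tuple[str, ...]) -> None:
    """Disable gradients for all parameters whose name starts with a frozen prefix."""
    for name, param in model.named_parameters():
        if name.startswith(frozen_prefixes):
            param.requires_grad_(False)

# Hypothetical schedule: freeze progressively earlier blocks as training proceeds.
FREEZE_SCHEDULE = {10: ("layer1",), 20: ("layer1", "layer2")}

def maybe_freeze(model: nn.Module, epoch: int) -> None:
    if epoch in FREEZE_SCHEDULE:
        freeze_prefix(model, FREEZE_SCHEDULE[epoch])
```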
Continual learning aims to enable a single model to learn a sequence of tasks without catastrophic forgetting. Top-performing methods usually require a rehearsal buffer that stores raw past examples for experience replay, which, however, limits their practical value due to privacy and memory constraints. In this work, we present DualPrompt, a simple yet effective framework that learns a tiny set of parameters, called prompts, to properly instruct a pre-trained model to learn tasks arriving sequentially without buffering past examples. DualPrompt introduces a novel approach that attaches complementary prompts to the pre-trained backbone and formulates the objective as learning task-invariant and task-specific "instructions". Through extensive experimental validation, DualPrompt consistently sets state-of-the-art performance under the challenging class-incremental setting. Notably, DualPrompt outperforms recent advanced continual learning methods that use relatively large rehearsal buffers. We also introduce a more challenging benchmark, Split ImageNet-R, to help generalize rehearsal-free continual learning research. The source code is available at https://github.com/google-research/l2p.
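A minimal sketch of the prompt-attachment idea: a small set of learnable prompt tokens is prepended to the token embeddings before they enter a frozen transformer encoder. The prompt length, embedding size, and plain prepend-at-input placement are assumptions for illustration; the paper's complementary prompts are attached at specific backbone layers.

```python
import torch
import torch.nn as nn

class PromptedEncoder(nn.Module):
    """Prepend learnable prompt tokens to a frozen transformer encoder's input."""

    def __init__(self, encoder: nn.Module, embed_dim: int = 768, prompt_len: int = 8):
        super().__init__()
        self.encoder = encoder
        for p in self.encoder.parameters():   # backbone stays frozen
            p.requires_grad_(False)
        self.prompt = nn.Parameter(torch.randn(prompt_len, embed_dim) * 0.02)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, seq_len, embed_dim) patch/word embeddings
        batch = tokens.size(0)
        prompts = self.prompt.unsqueeze(0).expand(batch, -1, -1)
        return self.encoder(torch.cat([prompts, tokens], dim=1))
```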
Continual Learning (CL) is an emerging machine learning paradigm that aims to learn from a continuous stream of tasks without forgetting knowledge learned from the previous tasks. To avoid performance decrease caused by forgetting, prior studies exploit episodic memory (EM), which stores a subset of the past observed samples while learning from new non-i.i.d. data. Despite the promising results, since CL is often assumed to execute on mobile or IoT devices, the EM size is bounded by the small hardware memory capacity and makes it infeasible to meet the accuracy requirements for real-world applications. Specifically, all prior CL methods discard samples overflowed from the EM and can never retrieve them back for subsequent training steps, incurring loss of information that would exacerbate catastrophic forgetting. We explore a novel hierarchical EM management strategy to address the forgetting issue. In particular, in mobile and IoT devices, real-time data can be stored not just in high-speed RAMs but in internal storage devices as well, which offer significantly larger capacity than the RAMs. Based on this insight, we propose to exploit the abundant storage to preserve past experiences and alleviate the forgetting by allowing CL to efficiently migrate samples between memory and storage without being interfered by the slow access speed of the storage. We call it Carousel Memory (CarM). As CarM is complementary to existing CL methods, we conduct extensive evaluations of our method with seven popular CL methods and show that CarM significantly improves the accuracy of the methods across different settings by large margins in final average accuracy (up to 28.4%) while retaining the same training efficiency.
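The two-tier memory idea lends itself to a small sketch: a fast in-RAM buffer serves replay batches, while overflowing samples are spilled to (and periodically swapped back from) a much larger storage pool instead of being discarded. The fixed capacities, random eviction, and random swap policy below are illustrative assumptions, not CarM's actual policies.

```python
import random

class HierarchicalMemory:
    """Toy two-tier episodic memory: a small fast buffer backed by large storage."""

    def __init__(self, ram_capacity: int = 100, storage_capacity: int = 10_000):
        self.ram, self.storage = [], []
        self.ram_capacity, self.storage_capacity = ram_capacity, storage_capacity

    def insert(self, sample) -> None:
        if len(self.ram) < self.ram_capacity:
            self.ram.append(sample)
            return
        # Spill a random RAM sample to storage instead of discarding it.
        evicted = self.ram.pop(random.randrange(len(self.ram)))
        if len(self.storage) < self.storage_capacity:
            self.storage.append(evicted)
        self.ram.append(sample)

    def swap_in(self, n: int = 8) -> None:
        """Refresh RAM with storage samples (can be scheduled off the training path)."""
        for _ in range(min(n, len(self.storage), len(self.ram))):
            s = random.randrange(len(self.storage))
            r = random.randrange(len(self.ram))
            self.ram[r], self.storage[s] = self.storage[s], self.ram[r]

    def replay_batch(self, k: int = 32):
        return random.sample(self.ram, min(k, len(self.ram)))
```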
Continual learning (CL) aims to develop techniques by which a single model adapts to an increasing number of tasks, potentially leveraging learning across tasks in a resource-efficient manner. A major challenge for CL systems is catastrophic forgetting, where earlier tasks are forgotten while learning a new task. To address this, replay-based CL methods maintain and repeatedly train on a small buffer of data selected from the tasks encountered so far. We propose Gradient Coreset Replay (GCR), a novel strategy for replay buffer selection and update based on a carefully designed optimization criterion. Specifically, we select and maintain a "coreset" that closely approximates the gradient of all the data seen so far with respect to the current model parameters, and we discuss the key strategies needed to apply it effectively in the continual learning setting. We show significant gains (2%-4%) over the state of the art in the well-studied offline continual learning setting. Our findings also transfer effectively to the online/streaming CL setting, where we show gains of up to 5% over existing approaches. Finally, we demonstrate the value of a supervised contrastive loss for continual learning, which yields a cumulative gain of up to 5% when combined with our subset selection strategy.
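The coreset criterion, keeping a buffer whose average gradient stays close to the average gradient of all data seen, can be sketched as a greedy selection over per-sample gradients. Computing one gradient per candidate and using a plain greedy residual-matching loop are simplifications chosen for clarity; GCR's actual selection objective is weighted and regularized.

```python
import torch

def per_sample_grad(model, loss_fn, x, y) -> torch.Tensor:
    """Flattened gradient of the loss on a single example (x, y)."""
    model.zero_grad()
    loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0)).backward()
    return torch.cat([p.grad.flatten() for p in model.parameters() if p.grad is not None])

def greedy_gradient_coreset(model, loss_fn, xs, ys, budget: int) -> list[int]:
    """Pick `budget` indices whose mean gradient best matches the full mean gradient."""
    grads = torch.stack([per_sample_grad(model, loss_fn, x, y) for x, y in zip(xs, ys)])
    target = grads.mean(dim=0)
    chosen, current = [], torch.zeros_like(target)
    for _ in range(budget):
        best, best_err = None, float("inf")
        for i in range(len(xs)):
            if i in chosen:
                continue
            candidate = (current * len(chosen) + grads[i]) / (len(chosen) + 1)
            err = torch.norm(target - candidate).item()
            if err < best_err:
                best, best_err = i, err
        chosen.append(best)
        current = (current * (len(chosen) - 1) + grads[best]) / len(chosen)
    return chosen
```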
This work investigates the entanglement between continual learning (CL) and transfer learning (TL). In particular, we shed light on the widespread practice of network pre-training, highlighting that it is itself subject to catastrophic forgetting. Unfortunately, this issue leads to the under-exploitation of knowledge transfer during later tasks. On this ground, we propose Transfer without Forgetting (TwF), a hybrid approach built on a fixed pre-trained sibling network that continuously propagates the knowledge inherent in the source domain through a layer-wise loss term. Our experiments show that TwF steadily outperforms other CL methods across a variety of settings, with an average gain of 4.81% in class-incremental accuracy over a range of datasets and buffer sizes.
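The core mechanism, propagating the pre-trained sibling's knowledge through a layer-wise loss, can be sketched as an auxiliary mean-squared distance between intermediate activations of the trainable network and its frozen, pre-trained twin. The choice of which layers to hook and the plain unweighted MSE are illustrative assumptions; TwF additionally applies learned attention over the feature maps.

```python
import torch
import torch.nn.functional as F

def layerwise_distillation_loss(student_feats: list[torch.Tensor],
                                sibling_feats: list[torch.Tensor],
                                weight: float = 1.0) -> torch.Tensor:
    """MSE between corresponding intermediate features of the continual learner
    and a frozen pre-trained sibling network (sibling features are detached)."""
    loss = sum(F.mse_loss(s, t.detach()) for s, t in zip(student_feats, sibling_feats))
    return weight * loss

# Hypothetical usage, assuming both networks expose a list of intermediate features:
# feats_student = student.forward_features(x)
# with torch.no_grad():
#     feats_sibling = frozen_sibling.forward_features(x)
# total_loss = task_loss + layerwise_distillation_loss(feats_student, feats_sibling)
```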
Artificial neural networks thrive in solving the classification problem for a particular rigid task, acquiring knowledge through generalized learning behaviour from a distinct training phase. The resulting network resembles a static entity of knowledge, with endeavours to extend this knowledge without targeting the original task resulting in a catastrophic forgetting. Continual learning shifts this paradigm towards networks that can continually accumulate knowledge over different tasks without the need to retrain from scratch. We focus on task incremental classification, where tasks arrive sequentially and are delineated by clear boundaries. Our main contributions concern (1) a taxonomy and extensive overview of the state-of-the-art; (2) a novel framework to continually determine the stability-plasticity trade-off of the continual learner; (3) a comprehensive experimental comparison of 11 state-of-the-art continual learning methods and 4 baselines. We empirically scrutinize method strengths and weaknesses on three benchmarks, considering Tiny Imagenet and large-scale unbalanced iNaturalist and a sequence of recognition datasets. We study the influence of model capacity, weight decay and dropout regularization, and the order in which the tasks are presented, and qualitatively compare methods in terms of required memory, computation time and storage.
The mainstream paradigm behind continual learning has been to adapt the model parameters to non-stationary data distributions, with catastrophic forgetting as the central challenge. Typical methods rely on a rehearsal buffer or a known task identity at test time to retrieve learned knowledge and address forgetting, whereas this work presents a new paradigm for continual learning that aims to train a more succinct memory system without accessing task identity at test time. Our method, Learning to Prompt (L2P), learns to dynamically prompt a pre-trained model to learn tasks sequentially under different task transitions. In our framework, prompts are small learnable parameters maintained in a memory space. The objective is to optimize the prompts to instruct the model's predictions and explicitly manage task-invariant and task-specific knowledge while maintaining model plasticity. We conduct comprehensive experiments on popular image classification benchmarks under different challenging continual learning settings, where L2P consistently outperforms prior state-of-the-art methods. Surprisingly, L2P achieves competitive results even without a rehearsal buffer and is directly applicable to challenging task-agnostic continual learning. The source code is available at https://github.com/google-research/l2p.
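The retrieval step, using the input itself to pick which prompts from the memory space to attach, can be sketched as a cosine-similarity lookup between a query feature and a set of learnable prompt keys. The pool size, prompt length, and plain top-k cosine match are assumptions for illustration rather than the paper's exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PromptPool(nn.Module):
    """Learnable prompt pool with key-based retrieval (illustrative sketch)."""

    def __init__(self, pool_size: int = 10, prompt_len: int = 5, dim: int = 768, top_k: int = 3):
        super().__init__()
        self.keys = nn.Parameter(torch.randn(pool_size, dim) * 0.02)
        self.prompts = nn.Parameter(torch.randn(pool_size, prompt_len, dim) * 0.02)
        self.top_k = top_k

    def forward(self, query: torch.Tensor):
        # query: (batch, dim) feature of the input, e.g. a frozen backbone's [CLS] output
        sim = F.cosine_similarity(query.unsqueeze(1), self.keys.unsqueeze(0), dim=-1)
        topk = sim.topk(self.top_k, dim=1)
        selected = self.prompts[topk.indices]   # (batch, top_k, prompt_len, dim)
        selected = selected.flatten(1, 2)       # (batch, top_k * prompt_len, dim)
        # Key-matching loss pulls the selected keys toward the query (added to the task loss).
        match_loss = (1.0 - topk.values).mean()
        return selected, match_loss
```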
The goal of continual learning (CL) is to learn different tasks over time. The main desiderata associated with CL are maintaining performance on older tasks, leveraging them to improve learning of future tasks, and introducing minimal overhead in the training process (for example, not requiring a growing model or retraining). We propose the Neuro-Inspired Stability-Plasticity Adaptation (NISPA) architecture, which addresses these desiderata through a sparse neural network with fixed density. NISPA forms stable paths that preserve knowledge from older tasks, and it uses connection rewiring to create new plastic paths that reuse existing knowledge on new tasks. Our extensive evaluation on the EMNIST, FashionMNIST, CIFAR10, and CIFAR100 datasets shows that NISPA significantly outperforms representative state-of-the-art continual learning baselines while using up to ten times fewer learnable parameters. We also argue that sparsity is an essential ingredient for continual learning. The NISPA code is available at https://github.com/burakgurbuz97/nispa.
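Connection rewiring at fixed density can be sketched as: drop the weakest fraction of currently active connections and grow an equal number of new random ones, so the layer's sparsity never changes. The magnitude-based drop criterion and uniform-random growth below are illustrative assumptions; NISPA's actual rules also protect connections that belong to stabilized units.

```python
import torch

def rewire_fixed_density(weight: torch.Tensor, mask: torch.Tensor, drop_frac: float = 0.1):
    """Drop the weakest active connections and grow as many new random ones,
    keeping the number of non-zero connections (density) roughly fixed."""
    active = mask.bool()
    n_drop = int(drop_frac * active.sum().item())
    if n_drop == 0:
        return mask
    # Drop: smallest-magnitude weights among active connections.
    magnitudes = weight.abs().masked_fill(~active, float("inf"))
    drop_idx = torch.topk(magnitudes.flatten(), n_drop, largest=False).indices
    new_mask = mask.clone().flatten()
    new_mask[drop_idx] = 0
    # Grow: random positions that were inactive before this rewiring step.
    inactive = (mask.flatten() == 0).nonzero(as_tuple=True)[0]
    grow_idx = inactive[torch.randperm(len(inactive))[:n_drop]]
    new_mask[grow_idx] = 1
    return new_mask.view_as(mask)
```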
Using task-specific components within a neural network is a compelling strategy for continual learning (CL), as it addresses the stability-plasticity dilemma in fixed-capacity models without access to past data. Current methods focus only on selecting a sub-network for a new task that reduces forgetting of past tasks. However, this selection can limit the forward transfer of relevant past knowledge that would help future learning. Our study reveals that jointly satisfying both objectives is more challenging when a unified classifier is used for all classes of the seen tasks (class-incremental learning, class-IL), since it is prone to ambiguities between classes across tasks. Moreover, the challenge grows as the semantic similarity of classes across tasks increases. To address this challenge, we propose a new CL method, named AFAF, that aims to avoid forgetting and allow forward transfer in class-IL using fixed-capacity models. AFAF allocates a sub-network that enables selective transfer of relevant knowledge to a new task while preserving past knowledge, reuses some of the previously allocated components to exploit the fixed capacity, and addresses class ambiguities when similarities exist. Experiments show the effectiveness of AFAF in providing the model with multiple desirable CL properties, while outperforming state-of-the-art methods on various challenging benchmarks with different degrees of semantic similarity.
Weight pruning is an effective model compression technique for achieving real-time deep neural network (DNN) inference on mobile devices. However, prior pruning schemes have limited application scenarios due to accuracy degradation, difficulty in leveraging hardware acceleration, and/or restrictions on certain types of DNN layers. In this paper, we propose a general, fine-grained structured pruning scheme and corresponding compiler optimizations that are applicable to any type of DNN layer while achieving high accuracy and hardware inference performance. With the flexibility of applying different pruning schemes to different layers, enabled by our compiler optimizations, we further investigate the new problem of determining the best-suited pruning scheme, given the different acceleration and accuracy characteristics of the various schemes. Two pruning-scheme mapping methods, one search-based and one rule-based, are proposed to automatically derive the best-suited pruning regularity and block size for each layer of any given DNN. Experimental results demonstrate that our pruning-scheme mapping methods, together with the general fine-grained structured pruning scheme, outperform the state-of-the-art DNN optimization framework with up to 2.48x and 1.73x DNN inference acceleration on the CIFAR-10 and ImageNet datasets without accuracy loss.
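Fine-grained structured pruning of the block-based flavor can be sketched by scoring small fixed-size tiles of a weight matrix by their norm and zeroing the lowest-scoring ones, which keeps a regular structure a compiler can exploit. The block size, L2 scoring, and single global pruning ratio here are illustrative assumptions, not the paper's mapping rules.

```python
import torch

def block_prune(weight: torch.Tensor, block: int = 4, prune_ratio: float = 0.5) -> torch.Tensor:
    """Zero the lowest-L2-norm (block x block) tiles of a 2-D weight matrix."""
    rows, cols = weight.shape
    assert rows % block == 0 and cols % block == 0, "pad the matrix in practice"
    # Reshape into (rows/block, cols/block, block, block) tiles and score each tile.
    tiles = weight.reshape(rows // block, block, cols // block, block).permute(0, 2, 1, 3)
    scores = tiles.reshape(-1, block * block).norm(dim=1)
    n_prune = int(prune_ratio * scores.numel())
    pruned = torch.topk(scores, n_prune, largest=False).indices
    keep = torch.ones_like(scores, dtype=torch.bool)
    keep[pruned] = False
    mask = keep.view(rows // block, cols // block, 1, 1).to(weight.dtype)
    tiles = tiles.reshape(rows // block, cols // block, block, block) * mask
    return tiles.permute(0, 2, 1, 3).reshape(rows, cols)
```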
The record-breaking performance of deep neural networks (DNNs) comes with heavy parameterization, which requires external dynamic random-access memory (DRAM) for storage. The prohibitive energy cost of DRAM accesses makes it difficult to deploy DNNs on resource-constrained devices, calling for minimizing weight and data movement to improve energy efficiency. We present SmartDeal (SD), an algorithmic framework that trades higher-cost memory storage/access for lower-cost computation, in order to aggressively boost storage and energy efficiency in both inference and training. The core of SD is a novel weight decomposition with structural constraints, carefully crafted to unleash the hardware efficiency potential. Specifically, we decompose each weight tensor into the product of a small basis matrix and a large, structurally sparse coefficient matrix whose non-zeros are quantized to powers of two. The resulting sparse and quantized DNNs enjoy greatly reduced energy for data movement and weight storage, with minimal overhead to recover the original weights thanks to sparse bit operations and cost-favorable computation. Beyond inference, we take a further step toward energy-efficient training, introducing techniques that address the unique obstacles arising during training while preserving the SD structure. We also design a dedicated hardware accelerator that fully exploits the SD structure to improve real energy efficiency and latency. We conduct experiments on multiple tasks, models, and datasets in different settings. Results show that: 1) applied to inference, SD achieves up to 2.44x energy efficiency, as evaluated on real hardware implementations; 2) applied to training, SD reduces storage and training energy by 10.56x and 4.48x, respectively, with negligible accuracy loss compared to state-of-the-art training baselines. Our source code is available online.
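The decomposition idea, each weight tensor as a small basis matrix times a larger sparse coefficient matrix whose non-zeros are powers of two, can be illustrated with a toy sketch: factor W via a truncated SVD, prune small coefficients, and round the survivors to the nearest power of two. The SVD-based factorization and simple magnitude pruning are assumptions for illustration; the paper derives the factorization and its structural constraints differently.

```python
import torch

def power_of_two(x: torch.Tensor) -> torch.Tensor:
    """Round non-zero entries to the nearest signed power of two."""
    out = torch.zeros_like(x)
    nz = x != 0
    out[nz] = torch.sign(x[nz]) * torch.exp2(torch.round(torch.log2(x[nz].abs())))
    return out

def toy_decompose(w: torch.Tensor, rank: int = 8, keep_ratio: float = 0.3):
    """Toy factorization w ~= coeff @ basis with a sparse, power-of-two coeff matrix."""
    u, s, vh = torch.linalg.svd(w, full_matrices=False)
    basis = vh[:rank]                       # (rank, n): small dense basis
    coeff = u[:, :rank] * s[:rank]          # (m, rank): large coefficient matrix
    # Keep only the largest-magnitude coefficients, then quantize them.
    k = max(1, int(keep_ratio * coeff.numel()))
    thresh = torch.topk(coeff.abs().flatten(), k).values.min()
    coeff = torch.where(coeff.abs() >= thresh, coeff, torch.zeros_like(coeff))
    coeff = power_of_two(coeff)
    return coeff, basis                     # reconstruct cheaply with coeff @ basis
```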
We motivate Energy-Based Models (EBMs) as a promising model class for continual learning problems. Instead of tackling continual learning via the use of external memory, growing models, or regularization, EBMs change the underlying training objective to cause less interference with previously learned information. Our proposed version of EBMs for continual learning is simple, efficient, and outperforms baseline methods by a large margin on several benchmarks. Moreover, our proposed contrastive divergence-based training objective can be combined with other continual learning methods, resulting in substantial boosts in their performance. We further show that EBMs are adaptable to a more general continual learning setting where the data distribution changes without the notion of explicitly delineated tasks. These observations point towards EBMs as a useful building block for future continual learning methods.
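As a concrete illustration of casting classification as an energy-based model, the sketch below defines an energy E(x, y) per class and trains by pushing down the energy of the observed label relative to the other classes (a softmax-style contrastive objective). The bilinear energy head and this particular objective are assumptions for illustration, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

class EnergyClassifier(nn.Module):
    """Energy-based classifier: E(x, y) = -<f(x), embed(y)>, predict argmin_y E(x, y)."""

    def __init__(self, feature_net: nn.Module, num_classes: int, dim: int):
        super().__init__()
        self.feature_net = feature_net
        self.class_embed = nn.Embedding(num_classes, dim)

    def energy(self, x: torch.Tensor) -> torch.Tensor:
        feats = self.feature_net(x)                       # (batch, dim)
        return -feats @ self.class_embed.weight.t()       # (batch, num_classes)

    def loss(self, x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        e = self.energy(x)
        # Push down the true class energy relative to log-sum-exp over all classes.
        return (e.gather(1, y.unsqueeze(1)).squeeze(1) + torch.logsumexp(-e, dim=1)).mean()

    @torch.no_grad()
    def predict(self, x: torch.Tensor) -> torch.Tensor:
        return self.energy(x).argmin(dim=1)
```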
Continual learning requires a model to learn new tasks while maintaining previously learned knowledge. Various algorithms have been proposed to address this challenge. So far, rehearsal-based methods such as experience replay have achieved state-of-the-art performance. These methods save a small portion of past tasks' data in a memory buffer to prevent the model from forgetting previously learned knowledge. However, most of them treat every new task equally, i.e., they fix the framework's hyperparameters while learning different new tasks. Such a setting ignores the relationship or similarity between past and new tasks. For example, knowledge/features learned from dogs are more beneficial for identifying cats (a new task) than those learned from buses. In this regard, we propose a meta-learning algorithm based on bi-level optimization to adaptively tune the relationship between the knowledge extracted from past and new tasks. The model can therefore find an appropriate gradient direction during continual learning and avoid severe overfitting to the memory buffer. Extensive experiments are conducted on three publicly available datasets (CIFAR-10, CIFAR-100, and Tiny ImageNet). The experimental results demonstrate that the proposed method consistently improves the performance of all baselines.
Learning from changing tasks and sequential experience without forgetting the obtained knowledge is a challenging problem for artificial neural networks. In this work, we focus on two challenging problems in the paradigm of Continual Learning (CL) without involving any old data: (i) the accumulation of catastrophic forgetting caused by the gradually fading knowledge space from which the model learns the previous knowledge; (ii) the uncontrolled tug-of-war dynamics to balance the stability and plasticity during the learning of new tasks. In order to tackle these problems, we present Progressive Learning without Forgetting (PLwF) and a credit assignment regime in the optimizer. PLwF densely introduces model functions from previous tasks to construct a knowledge space such that it contains the most reliable knowledge on each task and the distribution information of different tasks, while credit assignment controls the tug-of-war dynamics by removing gradient conflict through projection. Extensive ablative experiments demonstrate the effectiveness of PLwF and credit assignment. In comparison with other CL methods, we report notably better results even without relying on any raw data.
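The credit-assignment step that removes gradient conflict can be sketched with the usual projection trick: if the new-task gradient points against a reference gradient derived from previous knowledge, project out the conflicting component. Operating on a single flattened reference gradient with this exact projection rule is an assumption; the paper's regime works on its own knowledge-space construction.

```python
import torch

def project_if_conflicting(g_new: torch.Tensor, g_ref: torch.Tensor) -> torch.Tensor:
    """Remove the component of g_new that conflicts with the reference gradient g_ref.

    Both inputs are flattened gradient vectors. If they already agree
    (non-negative dot product), g_new is returned unchanged.
    """
    dot = torch.dot(g_new, g_ref)
    if dot >= 0:
        return g_new
    return g_new - (dot / (g_ref.norm() ** 2 + 1e-12)) * g_ref
```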
Continual Learning is considered a key step toward next-generation Artificial Intelligence. Among various methods, replay-based approaches that maintain and replay a small episodic memory of previous samples are one of the most successful strategies against catastrophic forgetting. However, since forgetting is inevitable given bounded memory and unbounded tasks, how to forget is a problem continual learning must address. Therefore, beyond simply avoiding catastrophic forgetting, an under-explored issue is how to reasonably forget while ensuring the merits of human memory, including 1. storage efficiency, 2. generalizability, and 3. some interpretability. To achieve these simultaneously, our paper proposes a new saliency-augmented memory completion framework for continual learning, inspired by recent discoveries in memory completion separation in cognitive neuroscience. Specifically, we innovatively propose to store the part of the image most important to the tasks in episodic memory by saliency map extraction and memory encoding. When learning new tasks, previous data from memory are inpainted by an adaptive data generation module, which is inspired by how humans complete episodic memory. The module's parameters are shared across all tasks and it can be jointly trained with a continual learning classifier as bilevel optimization. Extensive experiments on several continual learning and image classification benchmarks demonstrate the proposed method's effectiveness and efficiency.
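Storing only the task-relevant part of an image can be sketched with a simple gradient-based saliency map: compute the input gradient of the top class score, keep the most salient pixels, and zero the rest before the example enters episodic memory. The plain input-gradient saliency and hard top-quantile mask are illustrative assumptions; the paper couples saliency extraction with learned memory encoding and an inpainting module at replay time.

```python
import torch

def saliency_masked_sample(model, x: torch.Tensor, keep_frac: float = 0.25) -> torch.Tensor:
    """Keep only the most salient fraction of pixels of a single image x (C, H, W)."""
    x = x.clone().requires_grad_(True)
    score = model(x.unsqueeze(0)).max()          # score of the top class
    score.backward()
    saliency = x.grad.abs().amax(dim=0)          # (H, W): max over channels
    threshold = torch.quantile(saliency.flatten(), 1.0 - keep_frac)
    mask = (saliency >= threshold).to(x.dtype)   # 1 where salient, 0 elsewhere
    return (x * mask).detach()                   # compact sample to store in memory
```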
Many applications require sparse neural networks due to space or inference time restrictions. There is a large body of work on training dense networks to yield sparse networks for inference, but this limits the size of the largest trainable sparse model to that of the largest trainable dense model. In this paper we introduce a method to train sparse neural networks with a fixed parameter count and a fixed computational cost throughout training, without sacrificing accuracy relative to existing dense-to-sparse training methods. Our method updates the topology of the sparse network during training by using parameter magnitudes and infrequent gradient calculations. We show that this approach requires fewer floating-point operations (FLOPs) to achieve a given level of accuracy compared to prior techniques. We demonstrate state-of-the-art sparse training results on a variety of networks and datasets, including ResNet-50, MobileNets on Imagenet-2012, and RNNs on WikiText-103. Finally, we provide some insights into why allowing the topology to change during the optimization can overcome local minima encountered when the topology remains static.
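The topology update described here can be sketched in a few lines: periodically drop the smallest-magnitude active weights and grow new connections where the dense gradient magnitude is largest, keeping the parameter count fixed. The uniform drop/grow fraction and plain per-layer application below are simplifying assumptions.

```python
import torch

def drop_and_grow(weight: torch.Tensor, mask: torch.Tensor,
                  dense_grad: torch.Tensor, update_frac: float = 0.1) -> torch.Tensor:
    """One prune-and-regrow step: drop weakest active weights, grow where |grad| is largest."""
    active = mask.bool()
    n = int(update_frac * active.sum().item())
    if n == 0:
        return mask
    new_mask = mask.clone().flatten()
    # Drop the n smallest-magnitude active connections.
    drop_scores = weight.abs().masked_fill(~active, float("inf")).flatten()
    new_mask[torch.topk(drop_scores, n, largest=False).indices] = 0
    # Grow the n inactive positions with the largest dense-gradient magnitude.
    grow_scores = dense_grad.abs().masked_fill(active, float("-inf")).flatten()
    new_mask[torch.topk(grow_scores, n, largest=True).indices] = 1
    return new_mask.view_as(mask)
```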
A growing body of research in continual learning focuses on the catastrophic forgetting problem. While many attempts have been made to alleviate this problem, the majority of the methods assume a single model in the continual learning setup. In this work, we question this assumption and show that employing ensemble models can be a simple yet effective method to improve continual performance. However, ensembles' training and inference costs can increase significantly as the number of models grows. Motivated by this limitation, we study different ensemble models to understand their benefits and drawbacks in continual learning scenarios. Finally, to overcome the high compute cost of ensembles, we leverage recent advances in neural network subspace to propose a computationally cheap algorithm with similar runtime to a single model yet enjoying the performance benefits of ensembles.
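The trade-off described above, ensemble-like accuracy at near single-model cost, can be illustrated by contrasting plain output averaging over K models with evaluating a single set of weights interpolated between two trained endpoints (a minimal weight-space "subspace"). The two-endpoint line and midpoint choice are illustrative assumptions, not the specific subspace construction used in the paper.

```python
import copy
import torch
import torch.nn as nn

def ensemble_logits(models: list[nn.Module], x: torch.Tensor) -> torch.Tensor:
    """Standard ensemble: average the logits of K independently trained models."""
    with torch.no_grad():
        return torch.stack([m(x) for m in models]).mean(dim=0)

def subspace_model(model_a: nn.Module, model_b: nn.Module, alpha: float = 0.5) -> nn.Module:
    """Single model whose weights are interpolated along the line between two endpoints,
    giving one forward pass at inference instead of K."""
    mixed = copy.deepcopy(model_a)
    state_a, state_b = model_a.state_dict(), model_b.state_dict()
    mixed_state = {}
    for k in state_a:
        if torch.is_floating_point(state_a[k]):
            mixed_state[k] = (1 - alpha) * state_a[k] + alpha * state_b[k]
        else:
            mixed_state[k] = state_a[k]   # integer buffers (e.g. BatchNorm counters): keep A's
    mixed.load_state_dict(mixed_state)
    return mixed
```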
Continual learning approaches help deep neural network models adapt and learn incrementally by trying to solve catastrophic forgetting. However, whether these existing approaches, traditionally applied to image-based tasks, work with the same efficacy on the sequential time-series data generated by mobile or embedded sensing systems remains an open question. To address this gap, we conduct the first comprehensive empirical study that quantifies the performance of three predominant continual learning schemes (regularization, replay, and replay with exemplars) on six datasets from three mobile and embedded sensing applications, across scenarios of different learning complexity. More specifically, we implement an end-to-end continual learning framework on edge devices. We then investigate the generalizability of, and trade-offs between, the performance, storage, computational cost, and memory footprint of different continual learning methods. Our findings suggest that replay with exemplars (e.g., iCaRL) has the best performance trade-off, even in complex scenarios, at the expense of some storage space (a few MB) for training exemplars (1% to 5% of the dataset). We also demonstrate for the first time that continual learning on-device with a limited memory budget is feasible and practical. In particular, the latency on two types of mobile and embedded devices shows that both incremental learning time (a few seconds to 4 minutes) and training time (1 to 75 minutes) across datasets are acceptable, since training can happen on the device while it is charging, thereby ensuring complete data privacy. Finally, we present guidelines for practitioners who want to apply the continual learning paradigm to mobile sensing tasks.
Continual learning aims to learn the current task quickly and continually from a sequence of tasks. Compared with other kinds of methods, methods based on experience replay have shown great advantages in overcoming catastrophic forgetting. One common limitation of these methods is the data imbalance between previous and current tasks, which further aggravates forgetting. Moreover, how to effectively resolve the stability-plasticity dilemma in this setting is also an urgent problem. In this paper, we overcome these challenges by proposing a novel framework called Meta-learning update via Multi-scale Knowledge Distillation and Data Augmentation (MMKDDA). Specifically, we apply multi-scale knowledge distillation to capture the evolution of long-range and short-range spatial relationships at different feature levels, alleviating the data imbalance problem. In addition, our method mixes samples from the episodic memory with the current task during online continual training, alleviating the side effects of the shifting probability distribution. Moreover, we optimize our model via a meta-learning update that depends on the number of tasks seen so far, which helps keep a better balance between stability and plasticity. Finally, our experimental evaluation on four benchmark datasets shows the effectiveness of the proposed MMKDDA framework against other popular baselines, and ablation studies further analyze the role of each component in our framework.
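The multi-scale distillation term can be sketched as a sum of feature-matching losses taken at several stages of the network, comparing the current model against a frozen snapshot of the previous model, with each scale pooled to a common size before comparison. The pooling choice, equal scale weights, and MSE criterion are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def multiscale_distillation(curr_feats: list[torch.Tensor],
                            prev_feats: list[torch.Tensor],
                            pooled_size: int = 4) -> torch.Tensor:
    """Sum of feature-matching losses over several network stages ("scales").

    curr_feats / prev_feats: feature maps (batch, C, H, W) taken at different depths
    of the current model and of a frozen snapshot of the previous model.
    """
    loss = torch.zeros((), device=curr_feats[0].device)
    for cur, prev in zip(curr_feats, prev_feats):
        cur_p = F.adaptive_avg_pool2d(cur, pooled_size)
        prev_p = F.adaptive_avg_pool2d(prev, pooled_size).detach()
        loss = loss + F.mse_loss(cur_p, prev_p)
    return loss
```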