Multi-task learning is useful in NLP because it is often desirable to have a single model that works across a range of tasks. In the medical domain, sequential training on tasks may sometimes be the only way to train models, either because access to the original (potentially sensitive) data is no longer available, or simply because of the computational cost inherent in joint retraining. A major problem inherent in sequential learning, however, is catastrophic forgetting, i.e., a substantial drop in accuracy on previous tasks when the model is updated for a new task. Elastic Weight Consolidation is a recently proposed method to address this issue, but scaling it to the modern, large models used in practice requires strong independence assumptions on the model parameters, which limits its effectiveness. In this work, we apply Kronecker factorization, a recent approach that relaxes the independence assumption, to prevent catastrophic forgetting in convolutional and Transformer-based neural networks at scale. We show the effectiveness of the technique on the important and illustrative task of medical entity linking across three datasets, demonstrating its ability to efficiently update existing methods as new medical data become available. On average, the proposed method reduces catastrophic forgetting by 51% when using a BERT-based model, compared to a 27% reduction from standard Elastic Weight Consolidation, while maintaining space complexity proportional to the number of model parameters.
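For intuition, here is a minimal numpy sketch (not the paper's implementation) contrasting a diagonal EWC penalty with a Kronecker-factored penalty for a single fully connected layer; the shapes, the random stand-ins for the Fisher factors, and the variable names are all assumptions made for illustration.

```python
import numpy as np

# Toy layer: weight matrix W (out x in) learned on a previous task.
rng = np.random.default_rng(0)
d_in, d_out = 8, 4
W_old = rng.normal(size=(d_out, d_in))                 # parameters after the old task
W_new = W_old + 0.1 * rng.normal(size=(d_out, d_in))   # parameters during the new task

# Standard EWC: diagonal Fisher, one importance value per parameter.
F_diag = rng.uniform(size=(d_out, d_in))               # stand-in per-parameter Fisher estimates
ewc_penalty = 0.5 * np.sum(F_diag * (W_new - W_old) ** 2)

# Kronecker-factored penalty: the layer's Fisher block is approximated as
# A (input statistics) Kronecker G (output-gradient statistics), so parameter
# interactions within the layer are retained instead of assumed away.
A = np.cov(rng.normal(size=(d_in, 100)))               # stand-in input second moment (d_in x d_in)
G = np.cov(rng.normal(size=(d_out, 100)))              # stand-in gradient second moment (d_out x d_out)
dW = W_new - W_old
kf_penalty = 0.5 * np.trace(G @ dW @ A @ dW.T)         # equals 0.5 * vec(dW)^T (A kron G) vec(dW) for symmetric A

print(ewc_penalty, kf_penalty)
```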
Biological agents are known to learn many different tasks over the course of their lives and to be able to revisit previous tasks and behaviors without a loss of performance. In contrast, artificial agents are prone to 'catastrophic forgetting', where performance on previous tasks deteriorates as new tasks are acquired. Recent methods address this shortcoming by encouraging the parameters to stay close to those learned for previous tasks. This can be done either (i) with specific parameter regularizers that map out suitable destinations in parameter space, or (ii) by guiding the optimization journey, projecting gradients onto subspaces that do not interfere with previous tasks. However, these methods often exhibit subpar performance in both feedforward and recurrent neural networks, with recurrent networks being of particular interest to the study of the neural dynamics supporting biological continual learning. In this work, we propose Natural Continual Learning (NCL), a new method that unifies weight regularization and projected gradient descent. NCL uses Bayesian weight regularization to encourage good performance on all tasks at convergence, and combines this with gradient projection using the prior precision, which prevents catastrophic forgetting during optimization. Our method outperforms both standard weight-regularization techniques and projection-based approaches when applied to continual learning problems in feedforward and recurrent networks. Finally, the trained networks evolve task-specific dynamics that are strongly preserved as new tasks are learned, similar to experimental findings in biological circuits.
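As a rough caricature of combining a weight regularizer with a precision-preconditioned update (our own toy construction, not the NCL algorithm as published), one could write:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 10
theta = rng.normal(size=d)                         # current parameters
theta_prev = rng.normal(size=d)                    # posterior mean from previous tasks
Lambda = np.diag(rng.uniform(1.0, 5.0, size=d))    # prior precision (importance) from previous tasks

def task_grad(theta):
    # Stand-in gradient of the new task's loss (a simple quadratic for illustration).
    target = np.ones(d)
    return theta - target

lr = 0.1
for _ in range(100):
    # Regularized gradient: new-task gradient plus a pull towards the old solution,
    # weighted by how important each direction was for previous tasks.
    g = task_grad(theta) + Lambda @ (theta - theta_prev)
    # Precondition with the inverse prior precision: directions that mattered for
    # old tasks (large Lambda) take small steps, free directions take large steps.
    theta = theta - lr * np.linalg.solve(Lambda, g)
```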
Singular value decomposition (SVD) is one of the most popular compression methods that approximate a target matrix with smaller matrices. However, standard SVD treats the parameters within the matrix with equal importance, which is a simple but unrealistic assumption. The parameters of a trained neural network model may affect task performance unevenly, which suggests non-equal importance among the parameters. Compared to SVD, the decomposition method aware of parameter importance is the more practical choice in real cases. Unlike standard SVD, weighted value decomposition is a non-convex optimization problem that lacks a closed-form solution. We systematically investigated multiple optimization strategies to tackle the problem and examined our method by compressing Transformer-based language models. Further, we designed a metric to predict when the SVD may introduce a significant performance drop, for which our method can be a rescue strategy. The extensive evaluations demonstrate that our method can perform better than current SOTA methods in compressing Transformer-based language models.
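Since an importance-weighted low-rank decomposition has no closed-form solution, one simple and purely illustrative optimization strategy is plain gradient descent on the factors, warm-started from ordinary SVD; the importance scores and shapes below are stand-ins, and the paper's actual strategies may differ.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, r = 20, 16, 4
M = rng.normal(size=(n, m))                  # weight matrix to compress
imp = rng.uniform(0.1, 1.0, size=(n, m))     # stand-in per-parameter importance scores

# Rank-r factors, initialized from plain SVD as a warm start.
U, s, Vt = np.linalg.svd(M, full_matrices=False)
A = U[:, :r] * np.sqrt(s[:r])
B = (Vt[:r, :].T * np.sqrt(s[:r])).T

# Importance-weighted objective: sum_ij imp_ij * (M_ij - (A B)_ij)^2.
# Unlike plain SVD this is non-convex with no closed form, so take gradient steps.
lr = 0.01
for _ in range(500):
    R = imp * (A @ B - M)                    # weighted residual
    A -= lr * R @ B.T
    B -= lr * A.T @ R

print(np.sum(imp * (M - A @ B) ** 2))        # weighted reconstruction error
```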
We introduce a conceptually simple and scalable framework for continual learning domains where tasks are learned sequentially. Our method is constant in the number of parameters and is designed to preserve performance on previously encountered tasks while accelerating learning progress on subsequent problems. This is achieved by training a network with two components: A knowledge base, capable of solving previously encountered problems, which is connected to an active column that is employed to efficiently learn the current task. After learning a new task, the active column is distilled into the knowledge base, taking care to protect any previously acquired skills. This cycle of active learning (progression) followed by consolidation (compression) requires no architecture growth, no access to or storing of previous data or tasks, and no task-specific parameters. We demonstrate the progress & compress approach on sequential classification of handwritten alphabets as well as two reinforcement learning domains: Atari games and 3D maze navigation.
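In equation form, the consolidation ("compress") step can be summarized roughly as follows; the notation and the exact form of the protection term are ours and should be read as a sketch rather than the paper's definition.

```latex
% Compress phase: distill the active column pi_active into the knowledge base
% pi_KB, while a quadratic (online-EWC-style) term protects what the knowledge
% base learned from earlier tasks.
\min_{\theta_{\mathrm{KB}}}\;
\mathbb{E}_{x}\Big[
  \mathrm{KL}\big(\pi_{\mathrm{active}}(\cdot \mid x)\,\big\|\,
                  \pi_{\mathrm{KB}}(\cdot \mid x;\theta_{\mathrm{KB}})\big)
\Big]
+ \frac{\lambda}{2}\,
  \big\|\theta_{\mathrm{KB}}-\theta_{\mathrm{KB}}^{\mathrm{prev}}\big\|_{F^{*}}^{2}
% with F^{*} an accumulated (Fisher-based) importance estimate.
```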
Second-order optimizers are believed to hold the potential to speed up neural network training, but owing to the enormous size of the curvature matrix they typically require approximations to be computable. The most successful family of approximations is Kronecker-factored, block-diagonal curvature estimates (KFAC). Here, we combine tools from prior work to evaluate exact second-order updates with careful ablations and establish a surprising result: due to its approximations, KFAC is not closely related to second-order updates, and in particular it significantly outperforms true second-order updates. This challenges widely held beliefs and immediately raises the question of why KFAC performs so well. Towards answering this question, we present strong evidence that KFAC approximates a first-order algorithm that performs gradient descent on neurons rather than on weights. Finally, we show that this optimizer often improves over KFAC in terms of computational cost and data efficiency.
We propose an efficient method for approximating natural gradient descent in neural networks which we call Kronecker-factored Approximate Curvature (K-FAC). K-FAC is based on an efficiently invertible approximation of a neural network's Fisher information matrix which is neither diagonal nor low-rank, and in some cases is completely non-sparse. It is derived by approximating various large blocks of the Fisher (corresponding to entire layers) as being the Kronecker product of two much smaller matrices. While only several times more expensive to compute than the plain stochastic gradient, the updates produced by K-FAC make much more progress optimizing the objective, which results in an algorithm that can be much faster than stochastic gradient descent with momentum in practice. And unlike some previously proposed approximate natural-gradient/Newton methods which use high-quality non-diagonal curvature matrices (such as Hessian-free optimization), K-FAC works very well in highly stochastic optimization regimes. This is because the cost of storing and inverting K-FAC's approximation to the curvature matrix does not depend on the amount of data used to estimate it, which is a feature typically associated only with diagonal or low-rank approximations to the curvature matrix.
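The core Kronecker identity can be illustrated in a few lines of numpy; this is a sketch of a single layer's update only, with random stand-ins for the statistics and an assumed damping value, not the full K-FAC algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, batch = 6, 4, 256

# For one fully connected layer, K-FAC approximates the Fisher block as
# A (second moment of layer inputs) Kronecker G (second moment of output gradients).
a = rng.normal(size=(batch, d_in))            # stand-in layer inputs
g = rng.normal(size=(batch, d_out))           # stand-in back-propagated output gradients
A = a.T @ a / batch                           # (d_in  x d_in)
G = g.T @ g / batch                           # (d_out x d_out)

grad_W = rng.normal(size=(d_out, d_in))       # gradient of the loss w.r.t. the weights
damping = 1e-3

# Natural-gradient step: (A kron G)^{-1} vec(grad) corresponds to G^{-1} grad A^{-1},
# so only the two small factors ever need to be inverted.
A_inv = np.linalg.inv(A + damping * np.eye(d_in))
G_inv = np.linalg.inv(G + damping * np.eye(d_out))
nat_grad_W = G_inv @ grad_W @ A_inv

print(nat_grad_W.shape)                       # (d_out, d_in)
```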
Transfer learning provides a way of leveraging knowledge from one task when learning another task. Performing transfer learning typically involves iteratively updating a model's parameters through gradient descent on a training dataset. In this paper, we introduce a fundamentally different method for transferring knowledge across models, which amounts to "merging" multiple models into one. Our approach effectively involves computing a weighted average of the models' parameters. We show that this averaging is equivalent to approximately sampling from the posteriors of the model weights. While using an isotropic Gaussian approximation works well in some cases, we also demonstrate benefits from approximating the precision matrix via the Fisher information. In sum, our approach makes it possible to combine the "knowledge" in multiple models at an extremely low computational cost compared to standard gradient-based training. We demonstrate that model merging achieves performance comparable to gradient-descent-based transfer learning on intermediate-task training and domain adaptation problems. We also show that our merging procedure makes it possible to combine models in previously unexplored ways. To measure the robustness of our approach, we perform extensive ablations on the design of our algorithm.
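A minimal sketch of Fisher-weighted parameter averaging with diagonal Fisher stand-ins (shapes and values are assumed, for illustration only); an isotropic choice (all-ones Fishers) recovers the plain parameter average.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 12
models = [rng.normal(size=d) for _ in range(3)]                 # parameters of fine-tuned models
fishers = [rng.uniform(0.01, 1.0, size=d) for _ in range(3)]    # stand-in diagonal Fisher estimates

# Fisher-weighted merge: each parameter is averaged with weights given by how
# "certain" each model is about it, which corresponds to multiplying approximate
# Gaussian posteriors over the weights.
eps = 1e-8
num = sum(F * theta for F, theta in zip(fishers, models))
den = sum(fishers) + eps
theta_merged = num / den
```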
Efficiently approximating local curvature information of the loss function is a key tool for the optimization and compression of deep neural networks. However, most existing methods for approximating second-order information have high computational or storage costs, which can limit their practicality. In this work, we investigate matrix-free, linear-time approaches for estimating inverse-Hessian-vector products (IHVPs) for the case in which the Hessian can be approximated as a sum of rank-one matrices, as in the classical approximation of the Hessian by the empirical Fisher matrix. We propose two new algorithms as part of a framework called M-FAC: the first algorithm is tailored towards network compression and can compute the IHVP for dimension $d$, if the Hessian is given as a sum of $m$ rank-one matrices, using $O(dm^2)$ precomputation, $O(dm)$ cost for computing the IHVP, and query cost $O(m)$ for any single element of the inverse Hessian. The second algorithm targets an optimization setting, where we wish to compute the product between the inverse Hessian, estimated over a sliding window of optimization steps, and a given gradient direction, as required for preconditioned SGD. We give an algorithm with cost $O(dm + m^2)$ for computing the IHVP and $O(dm + m^3)$ for adding or removing any gradient from the sliding window. These two algorithms yield state-of-the-art results for network pruning and for optimization with low computational overhead relative to existing second-order methods. Implementations are available at [9] and [17].
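The flavor of a matrix-free IHVP can be conveyed with a plain Woodbury-identity sketch. This is not the authors' algorithm (which is engineered for better query and update costs); the damping, shapes, and rank-one structure below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
d, m, lam = 2000, 32, 1e-3

grads = rng.normal(size=(m, d))           # m stored gradients
U = grads.T / np.sqrt(m)                  # Hessian approx: H = lam * I_d + U @ U.T (empirical Fisher)

# Precompute the small m x m system once: O(d m^2).
K = np.eye(m) + U.T @ U / lam             # (m, m)
K_inv = np.linalg.inv(K)

def ihvp(v):
    """Inverse-Hessian-vector product H^{-1} v via the Woodbury identity.
    Never forms the d x d Hessian; each query costs O(d m)."""
    Utv = U.T @ v                          # (m,)
    return v / lam - U @ (K_inv @ Utv) / lam**2

v = rng.normal(size=d)
x = ihvp(v)

# Sanity check against the explicit Hessian on this small example.
H = lam * np.eye(d) + U @ U.T
print(np.allclose(H @ x, v, atol=1e-6))
```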
Artificial neural networks thrive in solving the classification problem for a particular rigid task, acquiring knowledge through generalized learning behaviour from a distinct training phase. The resulting network resembles a static entity of knowledge, with endeavours to extend this knowledge without targeting the original task resulting in a catastrophic forgetting. Continual learning shifts this paradigm towards networks that can continually accumulate knowledge over different tasks without the need to retrain from scratch. We focus on task incremental classification, where tasks arrive sequentially and are delineated by clear boundaries. Our main contributions concern (1) a taxonomy and extensive overview of the state-of-the-art; (2) a novel framework to continually determine the stability-plasticity trade-off of the continual learner; (3) a comprehensive experimental comparison of 11 state-of-the-art continual learning methods and 4 baselines. We empirically scrutinize method strengths and weaknesses on three benchmarks, considering Tiny Imagenet and large-scale unbalanced iNaturalist and a sequence of recognition datasets. We study the influence of model capacity, weight decay and dropout regularization, and the order in which the tasks are presented, and qualitatively compare methods in terms of required memory, computation time and storage.
Catastrophic forgetting undermines the effectiveness of deep neural networks (DNNs) in scenarios such as continual learning and lifelong learning. While several methods have been proposed to address this problem, there is limited work explaining why these methods work well. The goal of this paper is to better explain a commonly used technique for avoiding catastrophic forgetting: quadratic regularization. We show that quadratic regularizers prevent forgetting of past tasks by interpolating current and previous values of the model parameters at every training iteration. Over multiple training iterations, this interpolation operation lowers the learning rates of more important model parameters, thereby minimizing their movement. Our analysis also reveals two drawbacks of quadratic regularization: (a) the dependence of parameter interpolation on training hyperparameters, which often leads to training instability, and (b) the assignment of lower importance to deeper layers, which are typically where forgetting occurs in DNNs. Via a simple modification to the order of operations, we show that these drawbacks can be easily avoided, resulting in 6.2% higher average accuracy at 4.5% lower average forgetting. We confirm the robustness of our results by training over 2000 models in different settings. Code is available at \url{https://github.com/ekdeepslubana/qrforgetting}.
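The interpolation view follows from a one-line derivation (our notation, assuming a generic quadratic penalty with importance matrix $\Omega$ and anchor $\theta^{*}$ from the previous task):

```latex
% One SGD step on  L(\theta) + (\lambda/2)(\theta-\theta^{*})^{\top}\Omega(\theta-\theta^{*}):
\theta_{t+1}
  = \theta_t - \eta\,\nabla L(\theta_t) - \eta\lambda\,\Omega\,(\theta_t-\theta^{*})
  = \big[(I-\eta\lambda\Omega)\,\theta_t + \eta\lambda\Omega\,\theta^{*}\big]
    - \eta\,\nabla L(\theta_t).
% The bracketed term interpolates between the current parameters and the old
% ones; parameters with large entries in \Omega (important parameters) are
% pulled back harder, i.e. they effectively get a smaller learning rate on the
% new task.
```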
Continual Learning (CL) is a field dedicated to devising algorithms able to achieve lifelong learning. Overcoming the knowledge disruption of previously acquired concepts, a drawback affecting deep learning models that goes by the name of catastrophic forgetting, is a hard challenge. Currently, deep learning methods can attain impressive results when the data modeled does not undergo a considerable distributional shift in subsequent learning sessions, but whenever we expose such systems to this incremental setting, performance drops very quickly. Overcoming this limitation is fundamental as it would allow us to build truly intelligent systems showing stability and plasticity. Secondly, it would allow us to overcome the onerous limitation of retraining these architectures from scratch with the newly updated data. In this thesis, we tackle the problem from multiple directions. In a first study, we show that in rehearsal-based techniques (systems that use a memory buffer), the quantity of data stored in the rehearsal buffer is a more important factor than the quality of the data. Secondly, we present one of the early works on incremental learning for ViT architectures, comparing functional, weight and attention regularization approaches, and propose an effective novel asymmetric loss. We conclude with a study on pretraining and how it affects performance in Continual Learning, raising some questions about the effective progression of the field. Finally, we outline some future directions and closing remarks.
Pre-trained representations are one of the key ingredients behind the success of modern deep learning. However, existing works on continual learning methods have mainly focused on incrementally learning models from scratch. In this paper, we explore an alternative framework for incremental learning in which we continually fine-tune a model starting from pre-trained representations. Our method exploits linearization techniques for pre-trained neural networks to obtain simple and effective continual learning. We show that this allows us to design a linear model for which quadratic parameter regularization is the optimal continual learning strategy, while enjoying the high performance of neural networks. We also show that the proposed algorithm makes parameter regularization methods applicable to class-incremental problems. In addition, we provide a theoretical reason why existing parameter-space regularization algorithms such as EWC underperform on neural networks trained with the cross-entropy loss. We show that the proposed method prevents forgetting while achieving high continual fine-tuning performance on image classification tasks. To demonstrate that our method can be applied to general continual learning settings, we evaluate it on data-incremental, task-incremental, and class-incremental learning problems.
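The linearization referred to is the standard first-order Taylor expansion around the pre-trained weights (written here in our own notation, as a sketch of the general idea):

```latex
% Linearization of a pre-trained network f(x;\theta) around its pre-trained
% weights \theta_0:
f_{\mathrm{lin}}(x;\theta)
  = f(x;\theta_0) + \nabla_{\theta} f(x;\theta_0)^{\top}\,(\theta-\theta_0).
% f_lin is linear in \theta, so continual fine-tuning of the linearized model is
% a linear problem for which a quadratic penalty on \theta-\theta_0 is the
% natural regularizer.
```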
Fine-tuning pre-trained language models has become the prevalent paradigm for building downstream NLP models. Oftentimes fine-tuned models are readily available but their training data is not, due to data privacy or intellectual property concerns. This creates a barrier to fusing knowledge across individual models to yield a better single model. In this paper, we study the problem of merging individual models built on different training data sets to obtain a single model that performs well across all data set domains and can generalize to out-of-domain data. We propose a dataless knowledge fusion method that merges models in their parameter space, guided by weights that minimize prediction differences between the merged model and the individual models. Over a battery of evaluation settings, we show that the proposed method significantly outperforms baselines such as Fisher-weighted averaging or model ensembling. Further, we find that our method is a promising alternative to multi-task learning that can preserve or sometimes improve over the individual models without access to the training data. Finally, model merging is more efficient than training a multi-task model, thus making it applicable to a wider set of scenarios.
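For a single linear layer, one way to instantiate "minimize prediction differences between the merged model and the individual models" is a Gram-weighted least-squares merge. This is a sketch under our own simplifying assumptions (linear layers, available input statistics) and is not claimed to be the paper's exact estimator.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out = 8, 3
# Two individual linear layers (e.g. from models fine-tuned on different datasets)
# plus stand-ins for the inputs each layer typically sees.
W1, W2 = rng.normal(size=(d_in, d_out)), rng.normal(size=(d_in, d_out))
X1, X2 = rng.normal(size=(200, d_in)), rng.normal(size=(200, d_in))
G1, G2 = X1.T @ X1, X2.T @ X2             # Gram matrices of the inputs

# Minimize ||X1 (W - W1)||^2 + ||X2 (W - W2)||^2 over the merged W:
# the closed form is a Gram-weighted average of the individual weights.
W_merged = np.linalg.solve(G1 + G2, G1 @ W1 + G2 @ W2)
```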
The success of machine learning algorithms generally depends on data representation, and we hypothesize that this is because different representations can entangle and hide more or less the different explanatory factors of variation behind the data. Although specific domain knowledge can be used to help design representations, learning with generic priors can also be used, and the quest for AI is motivating the design of more powerful representation-learning algorithms implementing such priors. This paper reviews recent work in the area of unsupervised feature learning and deep learning, covering advances in probabilistic models, auto-encoders, manifold learning, and deep networks. This motivates longer-term unanswered questions about the appropriate objectives for learning good representations, for computing representations (i.e., inference), and the geometrical connections between representation learning, density estimation and manifold learning.
Transfer learning, where a model is first pre-trained on a data-rich task before being finetuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts all text-based language problems into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled data sets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new "Colossal Clean Crawled Corpus", we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our data set, pre-trained models, and code.
We consider the problem of model compression for deep neural networks (DNNs) in the challenging post-training setting, in which we are given an accurate trained model and must compress it without any retraining, based only on a small amount of calibration input data. This problem has become popular in view of emerging software and hardware support for executing models compressed via pruning and/or quantization with speedups, and well-performing solutions have been proposed independently for both compression approaches. In this paper, we introduce a new compression framework that covers both weight pruning and quantization in a unified setting, is time- and space-efficient, and considerably improves upon the practical performance of existing post-training methods. At the technical level, our approach is based on the first exact and efficient realization of the classical Optimal Brain Surgeon (OBS) framework of [LeCun, Denker, and Solla, 1990] at the scale of modern DNNs, which we further extend to cover weight quantization. This is enabled by a series of algorithmic developments that may be of independent interest. From a practical perspective, our experimental results show that it can significantly improve the compression-accuracy trade-offs of existing post-training methods, and that it even enables accurate joint application of pruning and quantization in the post-training setting.
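For reference, the classical OBS step that the framework scales up looks as follows on a toy layer; the Hessian proxy, damping, and shapes are assumptions made for the example, and the paper's contribution lies in making this step exact and tractable at DNN scale and extending it to quantization.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 10
w = rng.normal(size=d)                      # trained weights of one layer (flattened)
X = rng.normal(size=(200, d))               # stand-in calibration inputs
H = X.T @ X / 200 + 1e-4 * np.eye(d)        # layer-wise Hessian proxy from calibration data
H_inv = np.linalg.inv(H)

# Optimal Brain Surgeon (LeCun et al., 1990): pick the weight whose removal hurts
# least and update the remaining weights to compensate in one shot.
saliency = w ** 2 / (2 * np.diag(H_inv))    # estimated loss increase from removing each weight
q = int(np.argmin(saliency))                # weight to prune
delta = -w[q] / H_inv[q, q] * H_inv[:, q]   # closed-form compensation of all other weights
w_pruned = w + delta
assert abs(w_pruned[q]) < 1e-10             # the chosen weight is exactly zeroed
```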
Lack of performance when it comes to continual learning over non-stationary distributions of data remains a major challenge in scaling neural network learning to more human realistic settings. In this work we propose a new conceptualization of the continual learning problem in terms of a temporally symmetric trade-off between transfer and interference that can be optimized by enforcing gradient alignment across examples. We then propose a new algorithm, Meta-Experience Replay (MER), that directly exploits this view by combining experience replay with optimization based meta-learning. This method learns parameters that make interference based on future gradients less likely and transfer based on future gradients more likely. We conduct experiments across continual lifelong supervised learning benchmarks and non-stationary reinforcement learning environments demonstrating that our approach consistently outperforms recently proposed baselines for continual learning. Our experiments show that the gap between the performance of MER and baseline algorithms grows both as the environment gets more non-stationary and as the fraction of the total experiences stored gets smaller.
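The transfer/interference trade-off can be summarized by the following idealized objective (our paraphrase; the weighting alpha and the pairing of examples are schematic):

```latex
% For pairs of examples drawn from the replay buffer and the incoming stream,
% reward aligned gradients (transfer) and penalize conflicting ones (interference):
\min_{\theta}\;
\mathbb{E}_{(x_i,x_j)}\Big[
  L(x_i;\theta)+L(x_j;\theta)
  -\alpha\,\frac{\partial L(x_i;\theta)}{\partial\theta}\cdot
           \frac{\partial L(x_j;\theta)}{\partial\theta}
\Big]
% MER approximates this with experience replay plus a Reptile-style meta-update
% rather than computing the gradient dot products explicitly.
```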
Model hubs with many pre-trained models (PTMs) have become a cornerstone of deep learning. Although built at a high cost, they remain \emph{under-exploited}: practitioners usually pick one PTM from the provided model hub by popularity and then fine-tune the PTM to solve the target task. This na\"ive but common practice poses two obstacles to fully exploiting pre-trained model hubs: (1) PTM selection by popularity carries no guarantee of optimality; (2) only one PTM is used while the remaining PTMs are ignored. Ideally, to maximally exploit a pre-trained model hub, one would need to try all combinations of PTMs and extensively fine-tune each combination, which incurs exponentially many combinations and an unaffordable computational budget. In this paper, we propose a new paradigm for ranking and tuning pre-trained models: (1) LogME, proposed in our conference paper~\citep{you_logme:_2021}, estimates the maximum value of the label evidence given features extracted by pre-trained models, and can rank all the PTMs in a model hub for various types of PTMs and tasks \emph{before fine-tuning}. (2) If there is no preference for the model architecture, the best-ranked PTM can be fine-tuned and deployed, or the target PTM can be tuned by the top-K ranked PTMs via the proposed B-Tuning algorithm. The ranking part is based on the conference paper, whose theoretical analysis we complete in this paper, including the convergence proof of the heuristic evidence-maximization procedure and the influence of the feature dimension. The tuning part introduces a novel Bayesian Tuning (B-Tuning) method for tuning multiple PTMs, which surpasses dedicated methods designed for tuning homogeneous PTMs and sets a new state of the art for tuning heterogeneous PTMs. The new paradigm of exploiting PTM hubs may be of interest to a large audience across the whole machine learning community.
During typical gradient-based training of deep neural networks, all of the model's parameters are updated at every iteration. Recent work has shown that it is possible to update only a small subset of the model's parameters during training, which can alleviate storage and communication requirements. In this paper, we show that it is possible to induce a fixed sparse mask on the model's parameters that selects the subset to update over many iterations. Our method constructs the mask out of the $k$ parameters with the largest Fisher information, as a simple approximation of which parameters are most important for the task at hand. In experiments on parameter-efficient transfer learning and distributed training, we show that our approach matches or exceeds the performance of other methods for sparse updates while being more efficient in terms of memory usage and communication costs. We release our code publicly to promote further applications of our approach.
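A minimal sketch of the mask construction (the per-example gradients below are random stand-ins; in practice they come from the actual model and task):

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_samples, k = 50, 32, 5

# Diagonal Fisher approximation: average of squared per-example gradients.
sample_grads = rng.normal(size=(n_samples, d))     # stand-in per-example gradients
fisher_diag = (sample_grads ** 2).mean(axis=0)

# Fixed sparse mask: keep only the k parameters with the largest Fisher values.
mask = np.zeros(d, dtype=bool)
mask[np.argsort(fisher_diag)[-k:]] = True

# Training then updates only the masked parameters; everything else stays frozen,
# so only k values ever need to be stored or communicated per step.
theta = rng.normal(size=d)
grad = rng.normal(size=d)
theta = theta - 0.1 * np.where(mask, grad, 0.0)
```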
We propose SWA-Gaussian (SWAG), a simple, scalable, and general purpose approach for uncertainty representation and calibration in deep learning. Stochastic Weight Averaging (SWA), which computes the first moment of stochastic gradient descent (SGD) iterates with a modified learning rate schedule, has recently been shown to improve generalization in deep learning. With SWAG, we fit a Gaussian using the SWA solution as the first moment and a low rank plus diagonal covariance also derived from the SGD iterates, forming an approximate posterior distribution over neural network weights; we then sample from this Gaussian distribution to perform Bayesian model averaging. We empirically find that SWAG approximates the shape of the true posterior, in accordance with results describing the stationary distribution of SGD iterates. Moreover, we demonstrate that SWAG performs well on a wide variety of tasks, including out of sample detection, calibration, and transfer learning, in comparison to many popular alternatives including MC dropout, KFAC Laplace, SGLD, and temperature scaling.
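A compact numpy sketch of the SWAG recipe as described (the iterates are random stand-ins, and the exact scaling of the low-rank-plus-diagonal sampler follows our reading of the method and should be treated as an assumption):

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_iters, rank = 100, 30, 10

# Stand-in SGD iterates collected along the training trajectory.
iterates = np.cumsum(0.1 * rng.normal(size=(n_iters, d)), axis=0) + rng.normal(size=d)

theta_swa = iterates.mean(axis=0)                   # first moment (the SWA solution)
var_diag = np.maximum(iterates.var(axis=0), 1e-8)   # diagonal part of the covariance
D = (iterates[-rank:] - theta_swa).T                # (d, rank) deviation matrix (low-rank part)

def sample_swag():
    """Draw one weight sample from the approximate Gaussian posterior
    N(theta_swa, 0.5 * (diag(var) + D D^T / (rank - 1)))."""
    z1 = rng.normal(size=d)
    z2 = rng.normal(size=rank)
    return (theta_swa
            + np.sqrt(var_diag) * z1 / np.sqrt(2.0)
            + D @ z2 / np.sqrt(2.0 * (rank - 1)))

# Bayesian model averaging: evaluate the network at several sampled weight vectors
# and average their predictions (here we only draw the samples).
samples = [sample_swag() for _ in range(5)]
```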