Optimization plays a costly and crucial role in developing machine learning systems. In learned optimizers, the few hyperparameters of commonly used hand-designed optimizers, e.g. Adam or SGD, are replaced with flexible parametric functions. The parameters of these functions are then optimized so that the resulting learned optimizer minimizes a target loss on a chosen class of models. Learned optimizers can both reduce the number of required training steps and improve the final test loss. However, they can be expensive to train, and once trained can be expensive to use due to compute and memory overhead for the optimizer itself. In this work, we identify and quantify the design features governing the memory, compute, and performance trade-offs for many learned and hand-designed optimizers. We further leverage our analysis to construct a learned optimizer that is both faster and more memory efficient than previous work. Our model and training code are open source.
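To make the per-parameter compute and memory overhead concrete, here is a minimal sketch of a learned optimizer in which a tiny MLP maps simple gradient features to updates; the architecture, feature set, and constants are illustrative assumptions, not the paper's design:

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(8, 2)) * 0.1   # meta-parameters, normally meta-trained
W2 = rng.normal(size=(1, 8)) * 0.1

def learned_update(grad, momentum):
    """Tiny MLP over per-parameter features; real learned optimizers use
    richer features and larger networks, hence their overhead."""
    feats = np.stack([grad, momentum])      # (2, n_params)
    return (W2 @ np.tanh(W1 @ feats))[0]    # proposed per-parameter update

# Inner loop on a toy quadratic: note the extra state (momentum) and the
# extra matmuls per parameter, the source of the memory/compute overhead.
theta, m = rng.normal(size=5), np.zeros(5)
for _ in range(100):
    grad = 2 * theta                        # gradient of sum(theta**2)
    m = 0.9 * m + grad
    theta = theta + 0.01 * learned_update(grad, m)
```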
Learned optimizers, neural networks that are trained to act as optimizers, have the potential to dramatically accelerate the training of machine learning models. However, even when meta-trained across thousands of tasks at huge computational expense, blackbox learned optimizers often struggle with stability and generalization when applied to tasks unlike those in their meta-training set. In this paper, we use tools from dynamical systems to investigate the inductive biases and stability properties of optimization algorithms, and apply the resulting insights to designing inductive biases for blackbox optimizers. Our investigation begins with a noisy quadratic model, where we characterize the conditions under which optimization is stable in terms of the eigenvalues of the training dynamics. We then introduce simple modifications to a learned optimizer's architecture and meta-training procedure that improve stability and give the optimizer a better inductive bias. We apply the resulting learned optimizer to a variety of neural network training tasks, where it outperforms the current state-of-the-art learned optimizer (at matched optimizer compute overhead) in both optimization performance and meta-training speed, and is capable of generalizing to tasks far different from those on which it was meta-trained.
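As a concrete one-dimensional instance of this style of analysis (a standard quadratic-model computation, not the paper's full noisy-quadratic treatment): for gradient descent with step size $\alpha$ on $f(\theta) = \tfrac{h}{2}\theta^2$, the training dynamics are linear,

\[
\theta_{t+1} = (1 - \alpha h)\,\theta_t,
\]

so the iteration is stable exactly when the eigenvalue of the dynamics satisfies $|1 - \alpha h| < 1$, i.e. $0 < \alpha < 2/h$. In higher dimensions the condition applies per eigenvalue of the Hessian, which is why the eigenvalues of the training dynamics characterize stability.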
Modern machine learning requires system designers to specify aspects of the learning pipeline, such as losses, architectures, and optimizers. Meta-learning, or learning-to-learn, instead aims to learn those aspects, and promises to unlock greater capabilities with less manual effort. One particularly ambitious goal of meta-learning is to train general-purpose in-context learning algorithms from scratch, using only black-box models with minimal inductive bias. Such a model takes in training data, and produces test-set predictions across a wide range of problems, without any explicit definition of an inference model, training loss, or optimization algorithm. In this paper we show that Transformers and other black-box models can be meta-trained to act as general-purpose in-context learners. We characterize phase transitions between algorithms that generalize, algorithms that memorize, and algorithms that fail to meta-train at all, induced by changes in model size, number of tasks, and meta-optimization. We further show that the capabilities of meta-trained algorithms are bottlenecked by the accessible state size (memory) determining the next prediction, unlike standard models which are thought to be bottlenecked by parameter count. Finally, we propose practical interventions such as biasing the training distribution that improve the meta-training and meta-generalization of general-purpose learning algorithms.
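As a sketch of the black-box interface (the packing below is an illustrative encoding, not the paper's exact tokenization), a general-purpose in-context learner consumes an entire dataset plus a query as one sequence and emits the prediction at the final position:

```python
import numpy as np

def make_incontext_sequence(xs, ys, x_query):
    """Pack (x, y) training pairs plus a query point into a single input
    sequence for a sequence model; no explicit inference model, loss, or
    optimizer appears anywhere in the interface."""
    tokens = [np.concatenate([x, y]) for x, y in zip(xs, ys)]
    tokens.append(np.concatenate([x_query, np.zeros_like(ys[0])]))  # label masked
    return np.stack(tokens)  # shape (n + 1, x_dim + y_dim)

# usage: three training pairs and one query point
seq = make_incontext_sequence([np.ones(2)] * 3, [np.zeros(1)] * 3, np.ones(2))
```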
The move from hand-designed features to learned features in machine learning has been wildly successful. In spite of this, optimization algorithms are still designed by hand. In this paper we show how the design of an optimization algorithm can be cast as a learning problem, allowing the algorithm to learn to exploit structure in the problems of interest in an automatic way. Our learned algorithms, implemented by LSTMs, outperform generic, hand-designed competitors on the tasks for which they are trained, and also generalize well to new tasks with similar structure. We demonstrate this on a number of tasks, including simple convex problems, training neural networks, and styling images with neural art. [Figure: the optimizer proposes parameter updates to the optimizee and receives an error signal in return.]
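Concretely, the learned update has the form

\[
\theta_{t+1} = \theta_t + g_t, \qquad
\begin{bmatrix} g_t \\ h_{t+1} \end{bmatrix} = m\big(\nabla f(\theta_t),\, h_t,\, \phi\big),
\]

where $m$ is the optimizer network (an LSTM) with meta-parameters $\phi$ and hidden state $h_t$, applied coordinatewise to the optimizee parameters $\theta$.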
Unrolled computation graphs arise in many scenarios, including training RNNs, tuning hyperparameters through unrolled optimization, and training learned optimizers. Current approaches to optimizing parameters in such computation graphs suffer from high-variance gradients, bias, slow updates, or large memory usage. We introduce a method called Persistent Evolution Strategies (PES), which divides the computation graph into a series of truncated unrolls and performs an evolution-strategies-based update step after each unroll. PES eliminates the bias from these truncations by accumulating correction terms over the entire sequence of unrolls. PES allows for rapid parameter updates, has low memory usage, is unbiased, and has reasonable variance characteristics. We experimentally demonstrate the advantages of PES compared to several other methods for gradient estimation on synthetic tasks, and show its applicability to training learned optimizers and tuning hyperparameters.
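A toy sketch of the PES estimator on a scalar unrolled system (the system, constants, and outer optimizer here are our own; the essential PES ingredients are the persistent perturbed states, the per-particle accumulated perturbations, and an update after every truncated unroll):

```python
import numpy as np

def unroll(state, theta, k):
    """k steps of a toy dynamical system; returns new state and truncation loss."""
    loss = 0.0
    for _ in range(k):
        state = 0.9 * state + theta
        loss += state ** 2
    return state, loss

N, sigma, K = 32, 0.1, 4               # particle pairs, perturbation scale, unroll length
theta = 1.0
states = np.zeros(2 * N)               # persistent perturbed copies of the state
xi = np.zeros(2 * N)                   # accumulated perturbations (the PES correction)
for outer in range(50):
    eps = sigma * np.random.randn(N)
    eps = np.concatenate([eps, -eps])  # antithetic pairs
    xi += eps                          # accumulate across truncations: removes the bias
    losses = np.empty(2 * N)
    for i in range(2 * N):
        states[i], losses[i] = unroll(states[i], theta + eps[i], K)
    grad = (xi * losses).mean() / sigma ** 2   # PES gradient estimate
    theta -= 0.01 * grad               # rapid update after each truncated unroll
```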
Deep learning has achieved promising results on a wide spectrum of AI applications. Larger datasets and models consistently yield better performance, but they generally require longer training time and more computation and communication. In this survey, we aim to provide a clear sketch of optimizations for large-scale deep learning with regard to both model accuracy and model efficiency. We survey the algorithms most commonly used for optimization, elaborate on the debatable topic of the generalization gap that arises in large-batch training, and review the state-of-the-art strategies for addressing communication overhead and reducing memory footprint.
If the trend of learned components eventually outperforming their hand-crafted versions continues, learned optimizers will eventually outperform hand-crafted optimizers such as SGD or Adam. Yet even if learned optimizers (L2Os) eventually surpass hand-crafted optimizers in practice, they are still not provably convergent and may fail out of distribution. These are the issues addressed here. Currently, learned optimizers frequently outperform generic hand-crafted optimizers (such as gradient descent) early in training, but they typically plateau after a while, whereas the generic algorithm continues to make progress and often overtakes the learned one, as Aesop's tortoise overtakes the hare. L2Os also still have a difficult time generalizing out of distribution. Heaton et al. proposed Safeguarded L2O (GL2O), which takes a learned optimizer and safeguards it with a generic learning algorithm so that, by conditionally switching between the two, the resulting algorithm is provably convergent. We propose a new class of safeguarded L2O, called Loss-Guarded L2O (LGL2O), which is both conceptually simpler and computationally cheaper. The guarding mechanism decides solely on the basis of the expected future loss values of the two optimizers. Furthermore, we present a theoretical proof of LGL2O's convergence guarantee and empirical results comparing it to GL2O and other baselines, showing that it combines the best of both L2O and SGD and that, in practice, it converges substantially better than GL2O.
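A minimal sketch of the loss-guard idea (the interface is hypothetical, and the one-step lookahead below is a simplified proxy for LGL2O's comparison of expected future losses):

```python
def loss_guarded_step(theta, grad, loss_fn, learned_update, lr=0.01):
    """One guarded step: propose a step from the learned optimizer and one
    from SGD, then keep whichever candidate has the lower loss. Falling back
    to the generic optimizer is what preserves a convergence guarantee."""
    cand_l2o = theta + learned_update(theta, grad)   # hypothetical L2O interface
    cand_sgd = theta - lr * grad                     # provably convergent fallback
    return cand_l2o if loss_fn(cand_l2o) < loss_fn(cand_sgd) else cand_sgd
```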
Learned optimizers are algorithms that can themselves be trained to solve optimization problems. In contrast to baseline optimizers (such as momentum or Adam) that use simple update rules derived from theoretical principles, learned optimizers use flexible, high-dimensional, nonlinear parameterizations. Although this can lead to better performance in certain settings, their inner workings remain a mystery. How is a learned optimizer able to outperform a well-tuned baseline? Has it learned a sophisticated combination of existing optimization techniques, or is it implementing completely new behavior? In this work, we address these questions by careful analysis and visualization of learned optimizers. We study learned optimizers trained from scratch on three disparate tasks and discover that they have learned interpretable mechanisms, including momentum, gradient clipping, learning rate schedules, and a new form of learning rate adaptation. Moreover, we show how the dynamics of learned optimizers give rise to these behaviors. Our results help elucidate the previously murky understanding of how learned optimizers work, and establish tools for interpreting future learned optimizers.
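For reference, the recovered mechanisms written out in their canonical hand-designed forms (generic formulas of ours, not the specific behaviors fitted in the paper):

```python
import numpy as np

def reference_update(grad, m, t, lr0=0.1, beta=0.9, clip=1.0, decay=1e-3):
    """Canonical versions of the mechanisms reported: gradient clipping,
    momentum, and a learning rate schedule; the paper additionally finds a
    novel learned form of learning rate adaptation."""
    g = np.clip(grad, -clip, clip)   # gradient clipping
    m = beta * m + g                 # momentum
    lr = lr0 / (1.0 + decay * t)     # learning rate schedule
    return -lr * m, m                # (update, new momentum state)
```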
Second-order optimizers are thought to hold the potential to speed up neural network training, but due to the enormous size of the curvature matrix they typically require approximations to be computationally tractable. The most successful family of approximations is Kronecker-factored block-diagonal curvature estimates (KFAC). Here, we combine tools from prior work to evaluate exact second-order updates with careful ablations and establish a surprising result: due to its approximations, KFAC is not closely related to second-order updates, and in particular it significantly outperforms true second-order updates. This challenges a widely held belief and immediately raises the question of why KFAC performs so well. Toward answering this question, we present strong evidence that KFAC approximates a first-order algorithm that performs gradient descent on neurons rather than on weights. Finally, we show that this optimizer often improves over KFAC in terms of computational cost and data efficiency.
The vast majority of successful deep neural networks are trained using variants of stochastic gradient descent (SGD) algorithms. Recent attempts to improve SGD can be broadly categorized into two approaches: (1) adaptive learning rate schemes, such as AdaGrad and Adam, and (2) accelerated schemes, such as heavy-ball and Nesterov momentum. In this paper, we propose a new optimization algorithm, Lookahead, that is orthogonal to these previous approaches and iteratively updates two sets of weights. Intuitively, the algorithm chooses a search direction by looking ahead at the sequence of "fast weights" generated by another optimizer. We show that Lookahead improves the learning stability and lowers the variance of its inner optimizer with negligible computation and memory cost. We empirically demonstrate Lookahead can significantly improve the performance of SGD and Adam, even with their default hyperparameter settings on ImageNet, CIFAR-10/100, neural machine translation, and Penn Treebank.
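A minimal sketch of the two-weight scheme (interfaces and constants are ours; k and alpha are the algorithm's two hyperparameters):

```python
import numpy as np

def lookahead(theta, grad_fn, inner_step, k=5, alpha=0.5, outer_steps=100):
    """Lookahead sketch: run k 'fast' steps of any inner optimizer, then move
    the 'slow' weights part-way toward the resulting fast weights."""
    slow = theta.copy()
    for _ in range(outer_steps):
        fast = slow.copy()
        for _ in range(k):
            fast = inner_step(fast, grad_fn(fast))   # fast weights (inner optimizer)
        slow = slow + alpha * (fast - slow)          # slow-weight interpolation
    return slow

# usage: minimize ||theta||^2 with plain SGD as the inner optimizer
theta = lookahead(np.ones(3), grad_fn=lambda t: 2 * t,
                  inner_step=lambda t, g: t - 0.1 * g)
```

The slow-weight interpolation is what damps the variance of the inner optimizer's trajectory while adding only one extra weight copy of memory.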
In this paper, we build upon the emerging topic of loss function learning, which aims to learn loss functions that significantly improve the performance of the models trained under them. Specifically, we propose a new meta-learning framework for learning model-agnostic loss functions via a hybrid neuro-symbolic search approach. The framework first uses evolution-based methods to search the space of primitive mathematical operations to find a set of symbolic loss functions. The set of learned loss functions is then parameterized and optimized via an end-to-end gradient-based training procedure. The versatility of the proposed framework is empirically validated on a diverse set of supervised learning tasks. Results show that the meta-learned loss functions discovered by the newly proposed method outperform both the cross-entropy loss and state-of-the-art loss function learning methods across a variety of neural network architectures and datasets.
We explore a data-driven approach to learning to optimize neural networks. We construct a dataset of neural network checkpoints and train a generative model on the parameters. In particular, our model is a conditional diffusion transformer that, given an initial input parameter vector and a prompted loss, error, or return, predicts a distribution over parameter updates that achieve the desired metric. At test time, it can optimize neural networks with unseen parameters in a single update. We find that our method successfully generates parameters for a wide range of loss prompts. Moreover, it can sample multimodal parameter solutions and has favorable scaling properties. We apply our method to different neural network architectures and tasks in supervised and reinforcement learning.
Many applications require sparse neural networks due to space or inference time restrictions. There is a large body of work on training dense networks to yield sparse networks for inference, but this limits the size of the largest trainable sparse model to that of the largest trainable dense model. In this paper we introduce a method to train sparse neural networks with a fixed parameter count and a fixed computational cost throughout training, without sacrificing accuracy relative to existing dense-to-sparse training methods. Our method updates the topology of the sparse network during training by using parameter magnitudes and infrequent gradient calculations. We show that this approach requires fewer floating-point operations (FLOPs) to achieve a given level of accuracy compared to prior techniques. We demonstrate state-of-the-art sparse training results on a variety of networks and datasets, including ResNet-50, MobileNets on Imagenet-2012, and RNNs on WikiText-103. Finally, we provide some insights into why allowing the topology to change during the optimization can overcome local minima encountered when the topology remains static.
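A sketch of one topology update of the kind described, dropping by weight magnitude and growing by (infrequently computed) dense-gradient magnitude; the interface and the fixed swap fraction are our simplifications:

```python
import numpy as np

def topology_update(weights, mask, dense_grads, swap_fraction=0.3):
    """Drop the smallest-magnitude active weights and grow the inactive
    connections with the largest gradient magnitude; the parameter count
    stays fixed throughout training."""
    n_swap = int(swap_fraction * mask.sum())
    active = np.flatnonzero(mask)
    drop = active[np.argsort(np.abs(weights.flat[active]))[:n_swap]]
    mask.flat[drop] = 0
    inactive = np.flatnonzero(mask == 0)
    grow = inactive[np.argsort(-np.abs(dense_grads.flat[inactive]))[:n_swap]]
    mask.flat[grow] = 1
    weights.flat[grow] = 0.0          # newly grown connections start at zero
    return weights * mask, mask
```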
Designing faster optimization algorithms is of ever-growing interest. In recent years, learning-to-learn methods that learn how to optimize have demonstrated very encouraging results. Current approaches usually do not effectively include the dynamics of the optimization process during training. They either omit it entirely or only implicitly assume the dynamics of an isolated parameter. In this paper, we show how to utilize the dynamic mode decomposition method for extracting informative features about optimization dynamics. By employing those features, we show that our learned optimizer generalizes much better to unseen optimization problems. The improved generalization is illustrated on multiple tasks where training the optimizer on one neural network generalizes to different architectures and distinct datasets.
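A minimal sketch of extracting dynamics features with (exact) DMD on a parameter trajectory; in practice the features fed to the learned optimizer would be functions of these eigenvalues and modes, and the trajectory would come from the unrolled optimization rather than the toy below:

```python
import numpy as np

d, T = 10, 50
theta_traj = np.cumsum(0.1 * np.random.randn(d, T + 1), axis=1)  # toy iterates

X, Y = theta_traj[:, :-1], theta_traj[:, 1:]
A = Y @ np.linalg.pinv(X)            # best-fit linear operator: Y ~ A X
eigvals, modes = np.linalg.eig(A)    # |eig| < 1: decaying mode; |eig| > 1: growing
```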
Efficiently approximating local curvature information of the loss function is a key tool for the optimization and compression of deep neural networks. However, most existing methods for approximating second-order information have high computational or storage costs, which can limit their practicality. In this work, we investigate matrix-free, linear-time approaches for estimating inverse-Hessian vector products (IHVPs) for the case when the Hessian can be approximated as a sum of rank-one matrices, as in the classic approximation of the Hessian by the empirical Fisher matrix. We propose two new algorithms as part of a framework called M-FAC: the first algorithm is tailored towards network compression and can compute the IHVP for dimension $d$, when the Hessian is given as a sum of $m$ rank-one matrices, using $O(dm^2)$ precomputation, $O(dm)$ cost for computing the IHVP, and query cost $O(m)$ for any single element of the inverse Hessian. The second algorithm targets an optimization setting, where we wish to compute the product between the inverse Hessian, estimated over a sliding window of optimization steps, and a given gradient direction, as required for preconditioned SGD. We give an algorithm with cost $O(dm + m^2)$ for computing the IHVP and $O(dm + m^3)$ for adding or removing any gradient from the sliding window. These two algorithms yield state-of-the-art results for network pruning and for optimization, at lower computational overhead relative to existing second-order methods. Implementations are available at [9] and [17].
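The linear-time structure comes from the low-rank form of the Hessian estimate; a minimal numpy sketch via the Woodbury identity for the rank-$m$ empirical-Fisher case (M-FAC's actual algorithms are more refined, e.g. they support sliding windows without refactorizing):

```python
import numpy as np

d, m, lam = 500, 32, 1e-3
rng = np.random.default_rng(0)
G = rng.normal(size=(m, d))              # m gradients g_i of dimension d (rows)
x = rng.normal(size=d)

# Precomputation, O(d m^2): the small m x m system from the Woodbury identity.
K = m * lam * np.eye(m) + G @ G.T

# IHVP for H = lam*I + (1/m) * sum_i g_i g_i^T, at O(dm) per query.
ihvp = (x - G.T @ np.linalg.solve(K, G @ x)) / lam

# sanity check against the dense inverse
H = lam * np.eye(d) + G.T @ G / m
assert np.allclose(ihvp, np.linalg.solve(H, x))
```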
Neural architecture search (NAS) is a promising research direction that has the potential to replace expert-designed networks with learned, task-specific architectures. In this work, in order to help ground the empirical results in this field, we propose new NAS baselines that build off the following observations: (i) NAS is a specialized hyperparameter optimization problem; and (ii) random search is a competitive baseline for hyperparameter optimization. Leveraging these observations, we evaluate both random search with early-stopping and a novel random search with weight-sharing algorithm on two standard NAS benchmarks, PTB and CIFAR-10. Our results show that random search with early-stopping is a competitive NAS baseline, e.g., it performs at least as well as ENAS [41], a leading NAS method, on both benchmarks. Additionally, random search with weight-sharing outperforms random search with early-stopping, achieving a state-of-the-art NAS result on PTB and a highly competitive result on CIFAR-10. Finally, we explore the existing reproducibility issues of published NAS results. We note the lack of source material needed to exactly reproduce these results, and further discuss the robustness of published results given the various sources of variability in NAS experimental setups. Relatedly, we provide all information (code, random seeds, documentation) needed to exactly reproduce our results, and report our random search with weight-sharing results for each benchmark on multiple runs.
Deep neural networks (DNNs) are currently predominantly trained using first-order methods. Some of these methods (e.g., Adam, AdaGrad, and RMSprop, and their variants) precondition the stochastic gradient using a diagonal matrix. Recently, effective second-order methods such as KFAC, K-BFGS, Shampoo, and TNT have been developed by preconditioning the stochastic gradient with layer-wise block-diagonal matrices. Here we propose an adaptive "Mini-Block Fisher (MBF)" preconditioned method that falls in between these two classes of methods. Specifically, our method uses a block-diagonal approximation to the empirical Fisher matrix, where for each layer in the DNN, whether convolutional or feed-forward and fully connected, the associated diagonal block is itself block-diagonal and is composed of a large number of mini-blocks of modest size. Our new method exploits the parallelism of GPUs to efficiently perform computations on the large number of matrices in each layer. Consequently, MBF's per-iteration computational cost is only slightly higher than that of first-order methods. The performance of our proposed method is compared to that of several baseline methods on autoencoder and CNN problems, to validate its effectiveness in terms of both time efficiency and generalization power. Finally, an idealized version of MBF is proved to converge linearly.
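A simplified sketch of mini-block empirical-Fisher preconditioning; here the mini-blocks come from a generic reshape and the small solves run in a Python loop, whereas MBF derives its blocks from the layer structure and batches the small-matrix computations on the GPU:

```python
import numpy as np

def mini_block_precondition(grad, fisher_blocks, block, lam=1e-3, beta=0.9):
    """Keep a running (block x block) empirical-Fisher estimate per mini-block
    and precondition each chunk of the gradient with its regularized inverse."""
    g = grad.reshape(-1, block)
    out = np.empty_like(g)
    for i, gb in enumerate(g):
        fisher_blocks[i] = beta * fisher_blocks[i] + (1 - beta) * np.outer(gb, gb)
        out[i] = np.linalg.solve(fisher_blocks[i] + lam * np.eye(block), gb)
    return out.reshape(grad.shape)

block, n = 4, 32    # parameter count divisible into mini-blocks
fisher_blocks = [np.zeros((block, block)) for _ in range(n // block)]
step = mini_block_precondition(np.random.randn(n), fisher_blocks, block)
```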
Neural network pruning, the task of reducing the size of a network by removing parameters, has been the subject of a great deal of work in recent years. We provide a meta-analysis of the literature, including an overview of approaches to pruning and consistent findings in the literature. After aggregating results across 81 papers and pruning hundreds of models in controlled conditions, our clearest finding is that the community suffers from a lack of standardized benchmarks and metrics. This deficiency is substantial enough that it is hard to compare pruning techniques to one another or determine how much progress the field has made over the past three decades. To address this situation, we identify issues with current practices, suggest concrete remedies, and introduce ShrinkBench, an open-source framework to facilitate standardized evaluations of pruning methods. We use ShrinkBench to compare various pruning techniques and show that its comprehensive evaluation can prevent common pitfalls when comparing pruning methods.
Intelligence relies on an agent's knowledge of what it does not know. This capability can be assessed based on the quality of joint predictions of labels across multiple inputs. Conventional neural networks lack this capability, and since most research has focused on marginal predictions, this shortcoming has been largely overlooked. We introduce the epistemic neural network (ENN) as an interface for models that represent the uncertainty required to generate useful joint predictions. While prior approaches to uncertainty modeling, such as Bayesian neural networks, can be expressed as ENNs, this new interface facilitates the comparison of joint predictions and the design of novel architectures and algorithms. In particular, we introduce the epinet: an architecture that can supplement any conventional neural network, including large models, and can be trained with modest incremental computation to estimate uncertainty. With an epinet, conventional neural networks outperform very large ensembles consisting of hundreds or more particles, at orders of magnitude lower computational cost. We demonstrate this efficacy across synthetic data, ImageNet, and some reinforcement learning tasks. As part of this work, we open-source our experiment code.
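A minimal sketch of the epinet interface (the linear correction head and all shapes here are our simplification; the point is that predictions depend on a sampled epistemic index):

```python
import numpy as np

rng = np.random.default_rng(0)
feat_dim, idx_dim, n_classes = 16, 8, 10
W = rng.normal(size=(n_classes, feat_dim + idx_dim)) * 0.1   # epinet parameters

def epinet_logits(base_logits, features, z):
    # index-dependent correction added to the base network's logits
    return base_logits + W @ np.concatenate([features, z])

# Sampling several epistemic indices z yields a *joint* distribution over
# predictions; its spread reflects what the model does not know.
features, base_logits = rng.normal(size=feat_dim), rng.normal(size=n_classes)
samples = np.stack([epinet_logits(base_logits, features, rng.normal(size=idx_dim))
                    for _ in range(5)])
```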
Recently, sparse training methods have begun to be established as the de facto approach for training and inference efficiency in artificial neural networks. Yet this efficiency holds only in theory. In practice, everyone simulates sparsity with a binary mask, because typical deep learning software and hardware are optimized for dense matrix operations. In this paper, we take an orthogonal approach and show that we can train truly sparse neural networks to harvest their full potential. To achieve this goal, we introduce three novel contributions specially designed for sparse neural networks: (1) a parallel training algorithm and its corresponding sparse implementation from scratch, (2) an activation function with non-trainable parameters to favor gradient flow, and (3) a hidden-neuron importance metric to eliminate redundancies. Taken together, these allow us to break the record and train the largest neural network ever trained in terms of representational power, reaching the size of a bat's brain. The results show that our approach achieves state-of-the-art performance while opening the path toward an environmentally friendly artificial intelligence era.
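To make 'truly sparse' concrete, a minimal contrast with mask-based simulation (illustrative sizes; the paper builds a full sparse training framework, not just a sparse forward pass):

```python
import numpy as np
from scipy import sparse

d_in, d_out, density = 1024, 512, 0.01
W = sparse.random(d_out, d_in, density=density, format="csr")  # only nonzeros stored

x = np.random.randn(d_in)
y = W @ x  # the forward pass touches only the ~1% of weights that exist
print(W.data.nbytes, "bytes truly sparse vs", d_out * d_in * 8, "bytes dense + mask")
```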