Learned optimizers are algorithms that can be trained to solve optimization problems. In contrast to baseline optimizers (such as momentum or Adam), which use simple update rules derived from theoretical principles, learned optimizers use flexible, high-dimensional, nonlinear parameterizations. Although this can lead to better performance in certain settings, their inner workings remain a mystery. How is a learned optimizer able to outperform a well-tuned baseline? Has it learned a sophisticated combination of existing optimization techniques, or is it implementing completely new behavior? In this work, we address these questions by carefully analyzing and visualizing learned optimizers. We study learned optimizers trained from scratch on three disparate tasks and discover that they have learned interpretable mechanisms, including: momentum, gradient clipping, learning rate schedules, and a new form of learning rate adaptation. Moreover, we show how the dynamics of the learned optimizers implement these behaviors. Our results help elucidate the previously murky understanding of how learned optimizers work, and establish tools for interpreting future learned optimizers.
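As a point of reference for the mechanisms listed above, here is a minimal sketch of their hand-designed counterparts (plain NumPy; the function name and hyperparameter values are illustrative, not taken from the paper).

```python
import numpy as np

def hand_designed_step(grad, state, lr=1e-3, beta=0.9, clip=1.0, decay=0.999):
    """One update combining momentum, gradient clipping, and a learning rate
    schedule -- classical versions of mechanisms the learned optimizer
    was found to rediscover."""
    # Gradient clipping: rescale the gradient if its norm exceeds a threshold.
    norm = np.linalg.norm(grad)
    if norm > clip:
        grad = grad * (clip / norm)
    # Momentum: exponential moving average of past gradients.
    state["m"] = beta * state["m"] + (1.0 - beta) * grad
    # Learning rate schedule: here a simple exponential decay over steps.
    state["t"] += 1
    lr_t = lr * decay ** state["t"]
    return -lr_t * state["m"], state

# Example: a single step on a toy gradient.
state = {"m": np.zeros(3), "t": 0}
update, state = hand_designed_step(np.array([0.5, -2.0, 3.0]), state)
```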
Learned optimizers -- neural networks that are trained to act as optimizers -- have the potential to dramatically accelerate the training of machine learning models. However, even when meta-trained across thousands of tasks at huge computational expense, blackbox learned optimizers often struggle with stability and generalization when applied to tasks unlike those in their meta-training set. In this paper, we use tools from dynamical systems to investigate the inductive biases and stability properties of optimization algorithms, and apply the resulting insights to designing inductive biases for blackbox optimizers. Our investigation begins with a noisy quadratic model, where we characterize the conditions under which optimization is stable in terms of the eigenvalues of the training dynamics. We then introduce simple modifications to a learned optimizer's architecture and meta-training procedure which lead to improved stability and a better inductive bias. We apply the resulting learned optimizer to a variety of neural network training tasks, where it outperforms the current state-of-the-art learned optimizer -- at matched optimizer computational overhead -- in terms of optimization performance and meta-training speed, and is able to generalize to tasks far different from those it was meta-trained on.
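To make the stability criterion concrete, here is a minimal sketch (my own illustration, not code from the paper) of gradient descent on a noisy quadratic: the iteration matrix has eigenvalues 1 - lr * lambda_i, so the dynamics stay bounded only if |1 - lr * lambda_i| < 1 for every curvature eigenvalue, i.e. lr < 2 / lambda_max.

```python
import numpy as np

rng = np.random.default_rng(0)
curvatures = np.array([0.1, 1.0, 10.0])    # eigenvalues of the quadratic's Hessian

def run_gd(lr, steps=200, noise=0.01):
    """Gradient descent on a noisy quadratic, tracked in the Hessian eigenbasis."""
    theta = np.ones_like(curvatures)
    for _ in range(steps):
        grad = curvatures * theta + noise * rng.standard_normal(theta.shape)
        theta = theta - lr * grad           # iteration matrix: diag(1 - lr * curvatures)
    return np.abs(theta).max()

for lr in [0.05, 0.19, 0.21]:               # 2 / lambda_max = 0.2 is the stability edge
    print(lr, "stable" if run_gd(lr) < 1e3 else "diverged")
```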
Despite the widespread practical success of deep learning methods, our theoretical understanding of the dynamics of learning in deep neural networks remains quite sparse. We attempt to bridge the gap between the theory and practice of deep learning by systematically analyzing learning dynamics for the restricted case of deep linear neural networks. Despite the linearity of their input-output map, such networks have nonlinear gradient descent dynamics on weights that change with the addition of each new hidden layer.
Differentiable programming techniques are widely used in the community and are responsible for the machine learning renaissance of the past several decades. While these methods are powerful, they have limits. In this short report, we discuss a chaos-based failure mode that appears in a variety of differentiable settings, ranging from recurrent neural networks and numerical physics simulation to training learned optimizers. We trace this failure to the spectrum of the Jacobian of the system under study, and provide criteria for when a practitioner might expect this failure to spoil their differentiation-based optimization algorithms.
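A minimal illustration of the criterion (mine, not from the report): the gradient through T iterations of a map contains products of per-step Jacobians, so when the dynamics are chaotic the gradient norm grows exponentially with T.

```python
import numpy as np

def grad_through_unroll(r, T=100, s0=0.2):
    """Iterate the logistic map s_{t+1} = r * s_t * (1 - s_t) and return
    |d s_T / d s_0|, the product of per-step Jacobians r * (1 - 2 s_t)."""
    s, grad = s0, 1.0
    for _ in range(T):
        grad *= r * (1.0 - 2.0 * s)
        s = r * s * (1.0 - s)
    return abs(grad)

# In the stable regime the gradient vanishes; in the chaotic regime (positive
# Lyapunov exponent) it explodes, spoiling gradient-based optimization.
for r in [2.8, 3.9]:
    print(r, grad_through_unroll(r))
```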
Recurrent neural networks (RNNs) are powerful models for processing time-series data, but it remains challenging to understand how they function. Improving this understanding is of substantial interest to both the machine learning and neuroscience communities. The framework of reverse engineering a trained RNN by linearizing around its fixed points has provided insight, but the approach has significant challenges. These include the difficulty of choosing which fixed point to expand around when studying RNN dynamics, and error accumulation when reconstructing the nonlinear dynamics with the linearized dynamics. We present a new model that overcomes these limitations by co-training an RNN with a novel switching linear dynamical system (SLDS) formulation. A first-order Taylor series expansion of the co-trained RNN and an auxiliary function trained to pick out the RNN's fixed points govern the SLDS dynamics. The result is a trained SLDS variant that closely approximates the RNN, an auxiliary function that can produce a fixed point for each point in state space, and a trained nonlinear RNN whose dynamics have been regularized such that its first-order terms perform the computation, where possible. The model removes the post-training fixed-point optimization and allows us to unambiguously study the learned dynamics of the SLDS at any point in state space. It also generalizes SLDS models to continuous manifolds of switching points while sharing parameters across switches. We validate the utility of the model on two synthetic tasks relevant to previous work on reverse engineering RNNs. We then show that our model can be used as a drop-in replacement in more complex architectures, such as LFADS, and apply this LFADS hybrid to analyze single-trial spiking activity from the motor system of a non-human primate.
Many problems in machine learning involve bilevel optimization (BLO), including hyperparameter optimization, meta-learning, and dataset distillation. Bilevel problems consist of two nested sub-problems, called the outer and inner problems, respectively. In practice, often at least one of these sub-problems is overparameterized. In this case, there are many ways to choose among optima that achieve equivalent objective values. Inspired by recent studies of the implicit bias induced by optimization algorithms in single-level optimization, we investigate the implicit bias of gradient-based algorithms for bilevel optimization. We delineate two standard BLO methods -- cold-start and warm-start -- and show that the converged solution or long-run behavior depends to a large degree on these and other algorithmic choices, such as the hypergradient approximation. We also show that the inner solutions obtained by warm-start BLO can encode a surprising amount of information about the outer objective, even when the outer parameters are low-dimensional. We believe that implicit bias deserves as central a role in the study of bilevel optimization as it has attained in the study of single-level neural net optimization.
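A minimal sketch (my own, with an illustrative overparameterized inner problem) of the two BLO variants discussed above: cold-start re-solves the inner problem from a fixed initialization at every outer step, whereas warm-start continues from the previous inner solution, so the two can converge to different inner optima even when both achieve (near-)zero inner objective value.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 20))   # overparameterized inner problem: 5 equations, 20 unknowns

def inner_loss_grad(w, lam):
    # Inner objective: || X w - lam ||^2 (lam plays the role of the outer parameters).
    return 2.0 * X.T @ (X @ w - lam)

def solve_inner(w0, lam, lr=0.01, steps=2000):
    w = w0.copy()
    for _ in range(steps):
        w -= lr * inner_loss_grad(w, lam)
    return w

outer_params = [rng.standard_normal(5) for _ in range(10)]   # stand-in outer iterates

# Cold-start: every outer step restarts the inner problem from the same init.
w_cold = None
for lam in outer_params:
    w_cold = solve_inner(np.zeros(20), lam)

# Warm-start: each outer step continues from the previous inner solution.
w_warm = np.zeros(20)
for lam in outer_params:
    w_warm = solve_inner(w_warm, lam)

lam_last = outer_params[-1]
print("inner losses:", np.sum((X @ w_cold - lam_last) ** 2),
      np.sum((X @ w_warm - lam_last) ** 2))
print("solutions differ by:", np.linalg.norm(w_cold - w_warm))
```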
Recurrent neural networks (RNNs) are powerful dynamical models, widely used in machine learning (ML) and neuroscience. Prior theoretical work has focused on RNNs with additive interactions. However, gating -- i.e. multiplicative -- interactions are ubiquitous in real neurons and are also the central feature of the best-performing RNNs in ML. Here, we show that gating offers flexible control of two salient features of the collective dynamics: i) timescales and ii) dimensionality. The gate controlling timescales leads to a novel stable state in which the network acts as a flexible integrator. Unlike previous approaches, gating permits this important function without parameter fine-tuning or special symmetries. Gates also provide a flexible, context-dependent mechanism to reset the memory trace, thus complementing the memory function. The gate modulating the dimensionality can induce a novel, discontinuous chaotic transition, where inputs push a stable system into strongly chaotic activity, in contrast to the typically stabilizing effect of inputs. At this transition, unlike in additive RNNs, the proliferation of critical points (topological complexity) is decoupled from the appearance of chaotic dynamics (dynamical complexity). The rich dynamics are summarized in phase diagrams, thus providing ML practitioners with a map for principled choices of parameter initialization.
Optimization plays a costly and crucial role in developing machine learning systems. In learned optimizers, the few hyperparameters of commonly used hand-designed optimizers, e.g. Adam or SGD, are replaced with flexible parametric functions. The parameters of these functions are then optimized so that the resulting learned optimizer minimizes a target loss on a chosen class of models. Learned optimizers can both reduce the number of required training steps and improve the final test loss. However, they can be expensive to train, and once trained can be expensive to use due to the compute and memory overhead of the optimizer itself. In this work, we identify and quantify the design features governing the memory, compute, and performance trade-offs of many learned and hand-designed optimizers. We further leverage our analysis to construct a learned optimizer that is both faster and more efficient than previous work. Our model and training code are open source.
Weight-tied models have attracted attention in the modern development of neural networks. The deep equilibrium model (DEQ) represents infinitely deep neural networks with weight-tying, and recent studies have shown the potential of this type of approach. Training a DEQ requires iteratively solving a root-finding problem and builds on the assumption that the underlying dynamics determined by the model converge to a fixed point. In this paper, we present the stable invariant model (SIM), a new class of deep models that in principle approximates DEQs under stability and extends the dynamics to more general ones converging to an invariant set (not restricted to a fixed point). The key ingredient in deriving SIMs is a representation of the dynamics with the spectra of the Koopman and Perron--Frobenius operators. This perspective approximately reveals stable dynamics with DEQs, and two variants of SIMs are then derived. We also propose an implementation of SIMs that can be learned in the same way as feedforward models. We illustrate the empirical performance of SIMs with experiments and demonstrate that SIMs achieve comparable or superior performance to DEQs on several learning tasks.
Unrolled computation graphs arise in many scenarios, including training RNNs, tuning hyperparameters through unrolled optimization, and training learned optimizers. Current approaches to optimizing parameters in such computation graphs suffer from high-variance gradients, bias, slow updates, or large memory usage. We introduce a method called Persistent Evolution Strategies (PES), which divides the computation graph into a series of truncated unrolls and performs an evolution-strategies-based update step after each unroll. PES eliminates the bias from these truncations by accumulating correction terms over the entire sequence of unrolls. PES allows for rapid parameter updates, has low memory usage, is unbiased, and has reasonable variance characteristics. We experimentally demonstrate the advantages of PES compared to several other methods for gradient estimation on synthetic tasks, and show its applicability to training learned optimizers and tuning hyperparameters.
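Under my reading of the method described above, here is a rough sketch of the PES estimator on a toy unrolled system; the toy dynamics, particle count, and step sizes are all illustrative choices, not taken from the paper. Each particle carries a persistent state and an accumulated perturbation, and after every truncated unroll the accumulated perturbations weight the observed losses to form the gradient estimate.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma, N, K, T = 0.1, 64, 10, 400      # perturbation scale, particles, truncation length, horizon

def unroll(state, theta, K):
    """Toy unrolled system: s_{t+1} = 0.9 s_t + theta, per-step loss s_t^2."""
    loss = 0.0
    for _ in range(K):
        state = 0.9 * state + theta
        loss += state ** 2
    return state, loss

theta = 1.0
states = np.zeros(N)                   # persistent per-particle states
xis = np.zeros(N)                      # accumulated perturbations (the PES correction terms)

for outer_step in range(T // K):
    eps = sigma * rng.standard_normal(N // 2)
    eps = np.concatenate([eps, -eps])  # antithetic pairs for variance reduction
    losses = np.empty(N)
    for i in range(N):
        states[i], losses[i] = unroll(states[i], theta + eps[i], K)
    xis += eps                         # accumulate perturbations across truncations
    grad_est = np.mean(xis * losses) / sigma ** 2
    theta -= 1e-4 * grad_est           # gradient step on the (meta-)parameter

print("theta after PES meta-training (driven toward 0):", theta)
```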
A central challenge to many fields of science and engineering involves minimizing non-convex error functions over continuous, high dimensional spaces. Gradient descent or quasi-Newton methods are almost ubiquitously used to perform such minimizations, and it is often thought that a main source of difficulty for these local methods to find the global minimum is the proliferation of local minima with much higher error than the global minimum. Here we argue, based on results from statistical physics, random matrix theory, neural network theory, and empirical evidence, that a deeper and more profound difficulty originates from the proliferation of saddle points, not local minima, especially in high dimensional problems of practical interest. Such saddle points are surrounded by high error plateaus that can dramatically slow down learning, and give the illusory impression of the existence of a local minimum. Motivated by these arguments, we propose a new approach to second-order optimization, the saddle-free Newton method, that can rapidly escape high dimensional saddle points, unlike gradient descent and quasi-Newton methods. We apply this algorithm to deep or recurrent neural network training, and provide numerical evidence for its superior optimization performance. This work extends the results of .
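A minimal sketch (mine, not the paper's implementation, which uses Krylov-subspace approximations) of the saddle-free Newton step: rescale the gradient by the inverse of |H|, the Hessian with its eigenvalues replaced by their absolute values, so that the update moves downhill along negative-curvature directions instead of being attracted to the saddle.

```python
import numpy as np

def saddle_free_newton_step(grad, hessian, damping=1e-3):
    """Compute -|H|^{-1} g, where |H| takes absolute values of H's eigenvalues."""
    eigvals, eigvecs = np.linalg.eigh(hessian)
    abs_h = eigvecs @ np.diag(np.abs(eigvals) + damping) @ eigvecs.T
    return -np.linalg.solve(abs_h, grad)

# Toy saddle: f(x, y) = x^2 - y^2, with a saddle point at the origin.
H = np.array([[2.0, 0.0], [0.0, -2.0]])
g = H @ np.array([0.1, 0.1])           # gradient near the saddle
# The step decreases f along both curvature directions; a plain Newton step
# (-H^{-1} g) would instead pull the iterate toward the saddle.
print(saddle_free_newton_step(g, H))
```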
Deep Learning optimization involves minimizing a high-dimensional loss function in the weight space which is often perceived as difficult due to its inherent difficulties such as saddle points, local minima, ill-conditioning of the Hessian and limited compute resources. In this paper, we provide a comprehensive review of 12 standard optimization methods successfully used in deep learning research and a theoretical assessment of the difficulties in numerical optimization from the optimization literature.
The vast majority of successful deep neural networks are trained using variants of stochastic gradient descent (SGD) algorithms. Recent attempts to improve SGD can be broadly categorized into two approaches: (1) adaptive learning rate schemes, such as AdaGrad and Adam, and (2) accelerated schemes, such as heavy-ball and Nesterov momentum. In this paper, we propose a new optimization algorithm, Lookahead, that is orthogonal to these previous approaches and iteratively updates two sets of weights. Intuitively, the algorithm chooses a search direction by looking ahead at the sequence of "fast weights" generated by another optimizer. We show that Lookahead improves the learning stability and lowers the variance of its inner optimizer with negligible computation and memory cost. We empirically demonstrate Lookahead can significantly improve the performance of SGD and Adam, even with their default hyperparameter settings on ImageNet, CIFAR-10/100, neural machine translation, and Penn Treebank.
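A minimal sketch of the Lookahead rule described above (illustrative NumPy, with plain SGD standing in as the inner "fast" optimizer; the paper pairs it with SGD or Adam): the fast weights take k inner steps, then the slow weights move a fraction alpha toward them.

```python
import numpy as np

def lookahead(grad_fn, theta0, inner_lr=0.1, alpha=0.5, k=5, outer_steps=100):
    """Lookahead: slow weights phi interpolate toward fast weights theta every k inner steps."""
    phi = theta0.copy()
    for _ in range(outer_steps):
        theta = phi.copy()                          # fast weights start from the slow weights
        for _ in range(k):
            theta -= inner_lr * grad_fn(theta)      # inner optimizer (here: plain SGD)
        phi += alpha * (theta - phi)                # slow update: phi <- phi + alpha (theta_k - phi)
    return phi

# Example on a simple quadratic, f(x) = 0.5 ||x||^2.
print(lookahead(lambda x: x, np.array([5.0, -3.0])))
```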
We introduce SubGD, a novel few-shot learning method based on the recent finding that stochastic gradient descent updates tend to live in a low-dimensional parameter subspace. In experimental and theoretical analyses, we show that models confined to a suitable predefined subspace generalize well for few-shot learning. A suitable subspace fulfills three criteria across the given tasks: it (a) allows the training error to be reduced by gradient flow, (b) leads to models that generalize well, and (c) can be identified by stochastic gradient descent. SubGD identifies these subspaces from the eigendecomposition of the auto-correlation matrix of update directions across different tasks. Notably, we can identify low-dimensional suitable subspaces for few-shot learning of dynamical systems, whose varying properties are described by one or a few parameters of the analytical system description. Such systems are ubiquitous in real-world applications in science and engineering. We experimentally corroborate the advantages of SubGD on three distinct dynamical systems problem settings, where it significantly outperforms popular few-shot learning methods in terms of both sample efficiency and performance.
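A rough sketch of the subspace identification step as I read it from the description above (not the authors' code): collect parameter updates from training on related tasks, eigendecompose their auto-correlation matrix, and then confine fine-tuning on a new task to the span of the leading eigenvectors by projecting every gradient onto that subspace.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, n_updates, k = 50, 200, 5

# Stand-in for update directions collected while training on related tasks,
# synthesized here so that they lie mostly in a 5-dimensional subspace.
basis = np.linalg.qr(rng.standard_normal((dim, k)))[0]
updates = (basis @ rng.standard_normal((k, n_updates))
           + 0.01 * rng.standard_normal((dim, n_updates)))

# Identify the subspace from the auto-correlation matrix of the updates.
C = updates @ updates.T / n_updates
eigvals, eigvecs = np.linalg.eigh(C)
U = eigvecs[:, -k:]                      # top-k eigenvectors span the subspace

def project(grad):
    """Project a gradient onto the identified low-dimensional subspace."""
    return U @ (U.T @ grad)

# Few-shot fine-tuning: ordinary SGD, but every step is confined to the subspace.
theta = rng.standard_normal(dim)
for _ in range(100):
    grad = theta                         # toy loss 0.5 ||theta||^2; replace with the task gradient
    theta -= 0.1 * project(grad)

print("in-subspace norm:", np.linalg.norm(U.T @ theta),
      "orthogonal norm (unchanged):", np.linalg.norm(theta - project(theta)))
```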
This is an introductory machine learning course specifically developed for STEM students. Our goal is to provide interested readers with the basics needed to employ machine learning in their own projects and to familiarize them with the terminology as a foundation for further reading of the relevant literature. In these lecture notes, we discuss supervised, unsupervised, and reinforcement learning. The notes start with an exposition of machine learning methods without neural networks, such as principal component analysis, t-SNE, clustering, as well as linear regression and linear classifiers. We continue with an introduction to both basic and advanced neural network structures such as dense feed-forward and convolutional neural networks, recurrent neural networks, restricted Boltzmann machines, (variational) autoencoders, and generative adversarial networks. Questions of interpretability of latent-space representations are discussed using the examples of dreaming and adversarial attacks. The final section is devoted to reinforcement learning, where we introduce the basic notions of value functions and policy learning.
Tuning hyperparameters of learning algorithms is hard because gradients are usually unavailable. We compute exact gradients of cross-validation performance with respect to all hyperparameters by chaining derivatives backwards through the entire training procedure. These gradients allow us to optimize thousands of hyperparameters, including step-size and momentum schedules, weight initialization distributions, richly parameterized regularization schemes, and neural network architectures. We compute hyperparameter gradients by exactly reversing the dynamics of stochastic gradient descent with momentum.
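To make the idea concrete, here is a small sketch (mine) that computes the exact gradient of a validation loss with respect to the learning rate by chaining derivatives through every step of SGD with momentum. For simplicity it propagates sensitivities forward through the unroll rather than reversing the dynamics as the paper does; on this toy quadratic problem the resulting hypergradient is the same.

```python
import numpy as np

A_train = np.diag([1.0, 10.0])
b_train = np.array([1.0, 1.0])
A_val = np.diag([2.0, 5.0])
b_val = np.array([1.0, 2.0])

def hypergradient_wrt_lr(lr, mu=0.9, steps=50):
    """Train by SGD with momentum on the train quadratic while carrying
    d(theta)/d(lr) and d(v)/d(lr); return d(val loss)/d(lr) at the end."""
    theta, v = np.zeros(2), np.zeros(2)
    dtheta, dv = np.zeros(2), np.zeros(2)    # sensitivities w.r.t. the learning rate
    for _ in range(steps):
        g = A_train @ theta - b_train
        dg = A_train @ dtheta                # d(gradient)/d(lr)
        dv = mu * dv - g - lr * dg           # differentiate v <- mu v - lr g
        v = mu * v - lr * g
        dtheta = dtheta + dv                 # differentiate theta <- theta + v
        theta = theta + v
    val_grad = A_val @ theta - b_val
    return float(val_grad @ dtheta)

print(hypergradient_wrt_lr(0.05))            # exact d(validation loss)/d(learning rate)
```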
We propose an efficient method for approximating natural gradient descent in neural networks which we call Kronecker-factored Approximate Curvature (K-FAC). K-FAC is based on an efficiently invertible approximation of a neural network's Fisher information matrix which is neither diagonal nor low-rank, and in some cases is completely non-sparse. It is derived by approximating various large blocks of the Fisher (corresponding to entire layers) as being the Kronecker product of two much smaller matrices. While only several times more expensive to compute than the plain stochastic gradient, the updates produced by K-FAC make much more progress optimizing the objective, which results in an algorithm that can be much faster than stochastic gradient descent with momentum in practice. And unlike some previously proposed approximate natural-gradient/Newton methods which use high-quality non-diagonal curvature matrices (such as Hessian-free optimization), K-FAC works very well in highly stochastic optimization regimes. This is because the cost of storing and inverting K-FAC's approximation to the curvature matrix does not depend on the amount of data used to estimate it, which is a feature typically associated only with diagonal or low-rank approximations to the curvature matrix.
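A bare-bones sketch (illustrative, omitting the full algorithm's damping scheme, block structure for other layer types, and amortized inversions) of the Kronecker factorization for a single fully connected layer: with input activations a and backpropagated output gradients g, the layer's Fisher block is approximated as A ⊗ G with A = E[aaᵀ] and G = E[ggᵀ], so the preconditioned gradient is G⁻¹ (∇W) A⁻¹.

```python
import numpy as np

rng = np.random.default_rng(0)
batch, d_in, d_out = 256, 20, 10

# Per-example input activations a and backpropagated gradients g = dL/d(pre-activations).
a = rng.standard_normal((batch, d_in))
g = rng.standard_normal((batch, d_out))

grad_W = g.T @ a / batch                      # ordinary gradient for the layer, dL/dW

# Kronecker factors of the Fisher block for this layer (plus damping for invertibility).
A = a.T @ a / batch + 1e-3 * np.eye(d_in)     # A = E[a a^T]
G = g.T @ g / batch + 1e-3 * np.eye(d_out)    # G = E[g g^T]

# (A kron G)^{-1} vec(grad_W) corresponds to G^{-1} grad_W A^{-1}.
natural_grad_W = np.linalg.solve(G, grad_W) @ np.linalg.inv(A)
print(natural_grad_W.shape)                   # (d_out, d_in), same shape as grad_W
```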
Ridge Rider (RR) is an algorithm for finding diverse solutions to optimization problems by following eigenvectors of the Hessian ("ridges"). RR is designed for conservative gradient systems (i.e., settings involving a single loss function), where it branches at saddles -- easy-to-find bifurcation points. We generalize this idea by proposing a method -- denoted Generalized Ridge Rider (GRR) -- for finding arbitrary bifurcation points. We give theoretical motivation for our method by leveraging machinery from the field of dynamical systems. We construct novel toy problems where we can visualize new phenomena while gaining insight into high-dimensional problems of interest. Finally, we empirically evaluate our method by finding diverse solutions in the iterated prisoners' dilemma and relevant machine learning problems.
Deep and recurrent neural networks (DNNs and RNNs respectively) are powerful models that were considered to be almost impossible to train using stochastic gradient descent with momentum. In this paper, we show that when stochastic gradient descent with momentum uses a well-designed random initialization and a particular type of slowly increasing schedule for the momentum parameter, it can train both DNNs and RNNs (on datasets with long-term dependencies) to levels of performance that were previously achievable only with Hessian-Free optimization. We find that both the initialization and the momentum are crucial since poorly initialized networks cannot be trained with momentum and well-initialized networks perform markedly worse when the momentum is absent or poorly tuned. Our success training these models suggests that previous attempts to train deep and recurrent neural networks from random initializations have likely failed due to poor initialization schemes. Furthermore, carefully tuned momentum methods suffice for dealing with the curvature issues in deep and recurrent network training objectives without the need for sophisticated second-order methods.
We study the effect of mini-batching on the loss landscape of deep neural networks using spiked, field-dependent random matrix theory. We show that the magnitude of the extremal values of the batch Hessian is larger than that of the empirical Hessian. We obtain similar results for the generalized Gauss-Newton matrix approximation of the Hessian. As a consequence of our theorems, we derive an analytical expression for the maximal learning rate as a function of batch size, informing practical training regimens for both stochastic gradient descent (linear scaling) and adaptive algorithms such as Adam (square-root scaling), for smooth, non-convex deep neural networks. While the linear scaling for stochastic gradient descent has been derived before under more restrictive conditions, which we generalize, the square-root scaling rule for adaptive optimizers is, to our knowledge, completely novel. For stochastic second-order methods and adaptive methods, we derive that the minimal damping coefficient is proportional to the ratio of the learning rate to the batch size. We validate our claims on the VGG/WideResNet architectures on the CIFAR-100 and ImageNet datasets. Based on our investigations of the sub-sampled Hessian, we develop a stochastic Lanczos quadrature based, on-the-fly learning rate and momentum learner, which avoids the need for expensive multiple evaluations of these key hyperparameters and shows good preliminary results on a pre-activation residual architecture on CIFAR-100.
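For reference, a two-line sketch (with made-up base values) of the two scaling rules stated above, mapping a learning rate tuned at one batch size to a larger batch size.

```python
base_lr, base_batch, new_batch = 0.1, 128, 1024
sgd_lr = base_lr * (new_batch / base_batch)           # linear scaling for SGD
adam_lr = base_lr * (new_batch / base_batch) ** 0.5   # square-root scaling for Adam-like optimizers
print(sgd_lr, adam_lr)
```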