Deep learning optimization involves minimizing a high-dimensional loss function over the weight space, a task often perceived as difficult due to inherent challenges such as saddle points, local minima, ill-conditioning of the Hessian, and limited compute resources. In this paper, we provide a comprehensive review of 12 standard optimization methods successfully used in deep learning research and a theoretical assessment of the difficulties in numerical optimization drawn from the optimization literature.
Deep learning has achieved promising results in a wide range of AI applications. Larger datasets and models consistently yield better performance, but generally at the cost of longer training time, more computation, and more communication. In this survey, we aim to provide a clear sketch of large-scale deep learning optimization in terms of both model accuracy and model efficiency. We survey the algorithms most commonly used for optimization, elaborate on the debatable topic of the generalization gap that arises in large-batch training, and review the state-of-the-art strategies for addressing communication overhead and reducing the memory footprint.
This work evaluates numerical optimization methods for machine learning problems. Since machine learning models are highly over-parameterized, we focus on methods suited to high-dimensional optimization. We build intuition on quadratic models to determine which methods are suitable for non-convex optimization, and develop convergence guarantees for these methods on convex functions. With this theoretical foundation for stochastic gradient descent and momentum methods, we attempt to explain why the methods commonly used in the machine learning community are so successful. Beyond explaining successful heuristics, the final chapter also provides an extensive review of more theoretical methods that are less common in practice. In some sense, this work tries to answer the question: why are the default TensorFlow optimizers the defaults?
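To make the quadratic-model intuition concrete, here is a minimal sketch (not taken from the work itself) of heavy-ball momentum on an ill-conditioned quadratic; the matrix A and the hyper-parameters are illustrative placeholders.

```python
import numpy as np

# Minimal sketch: gradient descent with heavy-ball momentum on a quadratic loss
# L(w) = 0.5 * w^T A w, whose gradient is A w. The conditioning of A (ratio of
# largest to smallest eigenvalue) governs how much momentum helps.
rng = np.random.default_rng(0)
A = np.diag([100.0, 1.0])          # ill-conditioned quadratic (illustrative values)
w = rng.normal(size=2)             # initial weights
v = np.zeros_like(w)               # momentum (velocity) buffer
lr, mu = 0.009, 0.9                # learning rate and momentum coefficient

for step in range(200):
    grad = A @ w                   # exact gradient of the quadratic model
    v = mu * v - lr * grad         # heavy-ball update of the velocity
    w = w + v
print("final loss:", 0.5 * w @ A @ w)
```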
We study the effect of mini-batching on the loss landscape of deep neural networks using spiked, field-dependent random matrix theory. We show that the magnitudes of the extremal eigenvalues of the batch Hessian are larger than those of the empirical Hessian, and we obtain similar results for the generalized Gauss-Newton matrix approximation of the Hessian. As a consequence of our theorems, we derive an analytical expression for the maximal learning rate as a function of batch size, informing practical training regimes for stochastic gradient descent (linear scaling) and adaptive algorithms such as Adam (square-root scaling) on smooth, non-convex deep neural networks. While the linear scaling for stochastic gradient descent has been derived under more restrictive conditions, which we generalize, the square-root scaling rule for adaptive optimizers is, to our knowledge, completely novel. For stochastic second-order and adaptive methods, we derive that the minimal damping coefficient is proportional to the ratio of the learning rate to the batch size. We validate our claims on VGG/WideResNet architectures on the CIFAR-100 and ImageNet datasets. Based on our investigation of the batch Hessian, we develop a stochastic Lanczos quadrature based on-the-fly learning-rate and momentum learner, which avoids the need for expensive multiple evaluations of these key hyper-parameters and shows promising preliminary results on a pre-residual architecture on CIFAR-100.
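The two scaling rules described above can be expressed as simple rescalings of a tuned reference learning rate; the reference values in the example below are placeholders, not the paper's settings.

```python
import math

def scaled_lr(base_lr: float, base_batch: int, batch: int, optimizer: str) -> float:
    """Scale a reference learning rate to a new batch size.

    Follows the rules discussed in the abstract: linear scaling for SGD and
    square-root scaling for adaptive methods such as Adam. base_lr/base_batch
    are whatever reference point was tuned; the constants below are examples.
    """
    ratio = batch / base_batch
    if optimizer == "sgd":
        return base_lr * ratio              # linear scaling
    if optimizer == "adam":
        return base_lr * math.sqrt(ratio)   # square-root scaling
    raise ValueError(f"unknown optimizer: {optimizer}")

# Example: a reference run used batch 128 with lr 0.1 (SGD) or 1e-3 (Adam).
print(scaled_lr(0.1, 128, 1024, "sgd"))    # 0.8
print(scaled_lr(1e-3, 128, 1024, "adam"))  # ~2.83e-3
```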
A central challenge to many fields of science and engineering involves minimizing non-convex error functions over continuous, high dimensional spaces. Gradient descent or quasi-Newton methods are almost ubiquitously used to perform such minimizations, and it is often thought that a main source of difficulty for these local methods to find the global minimum is the proliferation of local minima with much higher error than the global minimum. Here we argue, based on results from statistical physics, random matrix theory, neural network theory, and empirical evidence, that a deeper and more profound difficulty originates from the proliferation of saddle points, not local minima, especially in high dimensional problems of practical interest. Such saddle points are surrounded by high error plateaus that can dramatically slow down learning, and give the illusory impression of the existence of a local minimum. Motivated by these arguments, we propose a new approach to second-order optimization, the saddle-free Newton method, that can rapidly escape high dimensional saddle points, unlike gradient descent and quasi-Newton methods. We apply this algorithm to deep or recurrent neural network training, and provide numerical evidence for its superior optimization performance. This work extends the results of .
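A minimal dense-matrix sketch of the saddle-free Newton idea (preconditioning by the absolute value of the Hessian so that saddle directions repel rather than attract); the full-eigendecomposition form below is only illustrative, since for large networks the method is applied in a low-dimensional subspace. The damping value is an assumption.

```python
import numpy as np

def saddle_free_newton_step(grad, hessian, damping=1e-3):
    """One saddle-free Newton step: precondition the gradient by |H|^{-1}.

    Replacing each eigenvalue of H by its absolute value turns saddle points
    from attractors into repellers. This dense version is purely illustrative.
    """
    eigvals, eigvecs = np.linalg.eigh(hessian)
    abs_inv = 1.0 / (np.abs(eigvals) + damping)       # |lambda_i|^{-1}, damped
    return -eigvecs @ (abs_inv * (eigvecs.T @ grad))  # -|H|^{-1} g

# Tiny example: a saddle of f(x, y) = x^2 - y^2 at the origin.
H = np.diag([2.0, -2.0])
g = np.array([0.1, 0.1])
print(saddle_free_newton_step(g, H))  # steps downhill along both coordinates
```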
We propose an efficient method for approximating natural gradient descent in neural networks which we call Kronecker-factored Approximate Curvature (K-FAC). K-FAC is based on an efficiently invertible approximation of a neural network's Fisher information matrix which is neither diagonal nor low-rank, and in some cases is completely non-sparse. It is derived by approximating various large blocks of the Fisher (corresponding to entire layers) as being the Kronecker product of two much smaller matrices. While only several times more expensive to compute than the plain stochastic gradient, the updates produced by K-FAC make much more progress optimizing the objective, which results in an algorithm that can be much faster than stochastic gradient descent with momentum in practice. And unlike some previously proposed approximate natural-gradient/Newton methods which use high-quality non-diagonal curvature matrices (such as Hessian-free optimization), K-FAC works very well in highly stochastic optimization regimes. This is because the cost of storing and inverting K-FAC's approximation to the curvature matrix does not depend on the amount of data used to estimate it, which is a feature typically associated only with diagonal or low-rank approximations to the curvature matrix.
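A hedged sketch of the core K-FAC approximation for one fully-connected layer: the layer's Fisher block is approximated as a Kronecker product A ⊗ G, so applying its inverse to the layer gradient factors into two small matrix inverses. The damping and averaging scheme are simplified relative to the paper.

```python
import numpy as np

def kfac_layer_update(acts, out_grads, grad_W, damping=1e-2):
    """Approximate natural-gradient direction for one fully-connected layer.

    acts:      (batch, d_in)  layer inputs a
    out_grads: (batch, d_out) back-propagated gradients g w.r.t. the layer output
    grad_W:    (d_out, d_in)  gradient of the loss w.r.t. the weight matrix

    K-FAC approximates the layer's Fisher block by A kron G with
    A = E[a a^T] and G = E[g g^T]; applying its inverse to grad_W then
    factors into two small inverses: G^{-1} grad_W A^{-1}.
    """
    batch = acts.shape[0]
    A = acts.T @ acts / batch + damping * np.eye(acts.shape[1])
    G = out_grads.T @ out_grads / batch + damping * np.eye(out_grads.shape[1])
    return np.linalg.solve(G, grad_W) @ np.linalg.inv(A)
```

Because A and G have the dimensions of a single layer's input and output, their inversion cost is independent of the amount of data used to estimate them, which is the property the abstract highlights.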
Deep and recurrent neural networks (DNNs and RNNs respectively) are powerful models that were considered to be almost impossible to train using stochastic gradient descent with momentum. In this paper, we show that when stochastic gradient descent with momentum uses a well-designed random initialization and a particular type of slowly increasing schedule for the momentum parameter, it can train both DNNs and RNNs (on datasets with long-term dependencies) to levels of performance that were previously achievable only with Hessian-Free optimization. We find that both the initialization and the momentum are crucial since poorly initialized networks cannot be trained with momentum and well-initialized networks perform markedly worse when the momentum is absent or poorly tuned. Our success training these models suggests that previous attempts to train deep and recurrent neural networks from random initializations have likely failed due to poor initialization schemes. Furthermore, carefully tuned momentum methods suffice for dealing with the curvature issues in deep and recurrent network training objectives without the need for sophisticated second-order methods.
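A hedged sketch of the kind of slowly increasing momentum schedule the abstract describes, inside a plain momentum-SGD loop; the particular formula, cap, and 250-iteration block length are illustrative, not necessarily the paper's tuned settings, and `grad_fn` is an assumed stochastic-gradient callable.

```python
import math

def momentum_schedule(t: int, mu_max: float = 0.99) -> float:
    """Slowly increasing momentum, capped at mu_max (illustrative schedule).

    Here 1 - mu_t shrinks like 1 / (2 * (t // 250 + 1)), so the effective
    averaging horizon 1 / (1 - mu_t) grows over training, never exceeding
    the horizon implied by mu_max.
    """
    mu_t = 1.0 - 2.0 ** (-1.0 - math.log2(t // 250 + 1))
    return min(mu_t, mu_max)

def sgd_momentum(w, grad_fn, lr=0.01, steps=1000):
    """Plain SGD-with-momentum loop using the schedule above (w is a numpy array)."""
    v = 0.0 * w
    for t in range(steps):
        mu = momentum_schedule(t)
        v = mu * v - lr * grad_fn(w)
        w = w + v
    return w
```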
Motivated by the recent increase of interest in optimization algorithms for non-convex optimization, in application to training deep neural networks and other optimization problems in data analysis, we give an overview of recent theoretical results on global performance guarantees of optimization algorithms for non-convex optimization. We start with classical arguments showing that general non-convex problems cannot be solved efficiently in a reasonable time. Then we give a list of problems for which a global minimizer can be found efficiently by exploiting the structure of the problem as far as possible. Another way to deal with non-convexity is to relax the goal from finding the global minimum to finding a stationary point or a local minimum. For this setting, we first present known results for the convergence rates of deterministic first-order methods, followed by a general theoretical analysis of optimal stochastic and randomized gradient schemes and an overview of stochastic first-order methods. After that, we discuss quite general classes of non-convex problems, such as minimizing $\alpha$-weakly-quasi-convex functions and functions satisfying the Polyak–Łojasiewicz condition, which still allow theoretical convergence guarantees for first-order methods. We then consider higher-order and zeroth-order/derivative-free methods and their convergence rates for non-convex optimization problems.
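For reference, the Polyak–Łojasiewicz condition mentioned above can be stated as follows: a differentiable function $f$ with minimum value $f^*$ satisfies the PL condition with constant $\mu > 0$ if
\[
\tfrac{1}{2}\,\|\nabla f(x)\|^2 \;\ge\; \mu\,\big(f(x) - f^*\big) \quad \text{for all } x,
\]
which is enough to guarantee linear convergence of gradient descent even without convexity.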
In this paper, we consider first- and second-order techniques for solving continuous optimization problems arising in machine learning. In the first-order case, we propose a framework for transitioning from deterministic or semi-deterministic to stochastic quadratic-regularization methods. Exploiting the two-phase nature of stochastic optimization, we propose a novel first-order algorithm with adaptive sampling and adaptive step size. In the second-order case, we propose a novel stochastic damped L-BFGS method that improves on previous algorithms in the highly non-convex setting of deep learning. Both algorithms are evaluated on well-known deep learning datasets and exhibit promising performance.
This paper proposes a new optimization algorithm called Entropy-SGD for training deep neural networks that is motivated by the local geometry of the energy landscape. Local extrema with low generalization error have a large proportion of almost-zero eigenvalues in the Hessian with very few positive or negative eigenvalues. We leverage upon this observation to construct a local-entropy-based objective function that favors well-generalizable solutions lying in large flat regions of the energy landscape, while avoiding poorly-generalizable solutions located in the sharp valleys. Conceptually, our algorithm resembles two nested loops of SGD where we use Langevin dynamics in the inner loop to compute the gradient of the local entropy before each update of the weights. We show that the new objective has a smoother energy landscape and show improved generalization over SGD using uniform stability, under certain assumptions. Our experiments on convolutional and recurrent networks demonstrate that Entropy-SGD compares favorably to state-of-the-art techniques in terms of generalization error and training time.
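A hedged numpy sketch of the nested-loop structure described above: an inner Langevin (SGLD-style) loop estimates the gradient of the local entropy around the current weights, and the outer step descends it. Hyper-parameter names and values are illustrative, not the paper's, and `grad_fn` is an assumed stochastic-gradient callable operating on numpy arrays.

```python
import numpy as np

def entropy_sgd_step(w, grad_fn, rng, gamma=0.03, inner_lr=0.1,
                     inner_steps=20, eps=1e-4, alpha=0.75, outer_lr=0.1):
    """One Entropy-SGD-style outer step (illustrative hyper-parameters).

    Inner loop: Langevin dynamics on a copy w_prime, attracted back toward w,
    whose running average mu tracks the local-entropy gradient direction.
    Outer step: move w toward mu, i.e. descend the local-entropy objective.
    """
    w_prime = w.copy()
    mu = w.copy()
    for _ in range(inner_steps):
        noise = rng.normal(size=w.shape)
        g = grad_fn(w_prime) - gamma * (w - w_prime)        # loss gradient + coupling
        w_prime = w_prime - inner_lr * g + np.sqrt(inner_lr) * eps * noise
        mu = alpha * mu + (1.0 - alpha) * w_prime           # running average
    return w - outer_lr * gamma * (w - mu)                  # local-entropy gradient step
```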
Deep neural networks are usually trained with stochastic gradient descent (SGD), which minimizes the objective using very rough approximations of the gradient that only average to the true gradient. Standard approaches such as momentum or Adam only consider a single direction and do not try to model the distance from an extremum, neglecting valuable information in the computed sequence of gradients and often stagnating on some suboptimal plateau. Second-order methods could exploit these missed opportunities; however, beside their very large cost and numerical instabilities, many of them are attracted to suboptimal points such as saddles because they neglect the signs of curvatures (the eigenvalues of the Hessian). The saddle-free Newton (SFN) method is a rare example of addressing this issue: it turns saddle attraction into repulsion and was shown to provide an essential improvement in the final value this way. However, it neglects noise while modelling second-order behaviour, focuses on a Krylov subspace for numerical reasons, and requires a costly eigendecomposition. Maintaining the advantages of SFN, we propose inexpensive ways of exploiting these opportunities. Second-order behaviour means the first derivative depends linearly on position, so it can be optimally estimated from a sequence of noisy gradients with least-squares linear regression, here in an online setting with weakening weights for old gradients. A statistically relevant subspace is suggested by PCA of recent noisy gradients; in the online setting this can be done by slowly rotating the considered directions toward new gradients, gradually replacing old directions with recent statistically relevant ones. The eigendecomposition can also be performed online, with regularly performed steps of the QR method to maintain a diagonal Hessian. Outside the second-order modelled subspace we can simultaneously perform gradient descent.
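A minimal sketch of the core estimation idea (class name and decay value are illustrative, not from the paper): along a fixed direction d, second-order behaviour means the directional derivative gᵀd depends linearly on the position θᵀd, so the curvature along d can be estimated from a stream of noisy gradients by exponentially weighted linear regression.

```python
import numpy as np

class OnlineCurvature:
    """Estimate curvature along a fixed direction d from noisy gradients.

    Model: (g_t . d) ~ c + lam * (theta_t . d), so the regression slope lam is
    the curvature along d. Old observations are down-weighted by a factor beta,
    as in the online setting described in the abstract.
    """
    def __init__(self, d, beta=0.95):
        self.d = d / np.linalg.norm(d)
        self.beta = beta
        # exponentially weighted sufficient statistics for 1-d linear regression
        self.n = self.sx = self.sy = self.sxx = self.sxy = 0.0

    def update(self, theta, grad):
        x = float(theta @ self.d)    # position along d
        y = float(grad @ self.d)     # noisy directional derivative
        b = self.beta
        self.n   = b * self.n   + 1.0
        self.sx  = b * self.sx  + x
        self.sy  = b * self.sy  + y
        self.sxx = b * self.sxx + x * x
        self.sxy = b * self.sxy + x * y

    def curvature(self):
        denom = self.n * self.sxx - self.sx ** 2
        return (self.n * self.sxy - self.sx * self.sy) / denom if denom > 1e-12 else 0.0
```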
In this work we explore the limiting dynamics of deep neural networks trained with stochastic gradient descent (SGD). As observed previously, long after performance has converged, networks continue to move through parameter space by a process of anomalous diffusion, in which the distance travelled grows as a power law in the number of gradient updates with a nontrivial exponent. We reveal an intricate interplay between the hyperparameters of optimization, the structure of the gradient noise, and the Hessian matrix at the end of training that explains this anomalous diffusion. To build this understanding, we first derive a continuous-time model for SGD with finite learning rate and batch size as an underdamped Langevin equation. We study this equation in the setting of linear regression, where we can derive exact, analytic expressions for the phase-space dynamics of the parameters and their instantaneous velocities from initialization to stationarity. Using the Fokker-Planck equation, we show that the key ingredient driving these dynamics is not the original training loss but rather the combination of a modified loss, which implicitly regularizes the velocity, and probability currents, which cause oscillations in phase space. We identify qualitative and quantitative predictions of this theory in the dynamics of a ResNet-18 model trained on ImageNet. Through the lens of statistical physics, we uncover a mechanistic origin for the anomalous limiting dynamics of deep neural networks trained with SGD.
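For orientation, a generic underdamped Langevin equation of the kind referred to above is
\[
d\theta_t = v_t\,dt, \qquad
dv_t = -\big(\gamma\, v_t + \nabla L(\theta_t)\big)\,dt + \sqrt{2D}\; dW_t ,
\]
where the precise mapping of the learning rate, batch size, and momentum onto the friction $\gamma$ and diffusion $D$ is the paper's contribution and is not reproduced here.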
Training machine learning models on massive datasets incurs substantial computational cost. To alleviate such costs, there has been a sustained effort to develop data-efficient training methods that carefully select subsets of the training examples which generalize on par with the full training data. However, existing methods are limited in providing theoretical guarantees for the quality of the models trained on the extracted subsets, and may perform poorly in practice. We propose AdaCore, a method that leverages the geometry of the data to extract subsets of the training examples for efficient machine learning. The key idea behind our method is to dynamically approximate the curvature of the loss function via an exponentially averaged estimate of the Hessian, in order to select weighted subsets (coresets) that provide a close approximation of the full gradient preconditioned with the Hessian. We prove rigorous guarantees for the convergence of various first- and second-order methods applied to the subsets chosen by AdaCore. Our extensive experiments show that AdaCore extracts coresets of higher quality than the baselines and speeds up training of convex and non-convex machine learning models, such as logistic regression and neural networks, by over 2.9x over the full data and over 4.5x over random subsets.
We explore the usage of the Levenberg-Marquardt (LM) algorithm for regression (non-linear least squares) and classification (generalized Gauss-Newton methods) tasks in neural networks. We compare the performance of the LM method with other popular first-order algorithms such as SGD and Adam, as well as other second-order algorithms such as L-BFGS, Hessian-Free and KFAC. We further speed up the LM method by using adaptive momentum, learning rate line search, and uphill step acceptance.
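A minimal numpy sketch of a basic Levenberg-Marquardt iteration for non-linear least squares, without the adaptive momentum, line search, and uphill-step acceptance refinements mentioned above; `residual_fn` and `jacobian_fn` are assumed to be provided, and the damping schedule is illustrative.

```python
import numpy as np

def levenberg_marquardt(w, residual_fn, jacobian_fn, lam=1e-2, steps=50):
    """Basic LM loop: solve (J^T J + lam I) dw = -J^T r and adapt the damping.

    residual_fn(w) -> r of shape (m,), jacobian_fn(w) -> J of shape (m, n).
    The damping lam interpolates between Gauss-Newton (small lam) and
    gradient descent (large lam).
    """
    loss = 0.5 * np.sum(residual_fn(w) ** 2)
    for _ in range(steps):
        r = residual_fn(w)
        J = jacobian_fn(w)
        dw = np.linalg.solve(J.T @ J + lam * np.eye(w.size), -J.T @ r)
        new_loss = 0.5 * np.sum(residual_fn(w + dw) ** 2)
        if new_loss < loss:        # accept the step and trust the model more
            w, loss, lam = w + dw, new_loss, lam * 0.5
        else:                      # reject the step and damp more heavily
            lam = lam * 2.0
    return w
```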
The quadratic approximation of neural network loss landscapes has been extensively used to study the optimization process of these networks. However, it usually holds only in a very small neighborhood of the minimum and fails to explain many phenomena observed during optimization. In this work, we study the structure of neural network loss functions and its implications for optimization in regions beyond the reach of a good quadratic approximation. Numerically, we observe that neural network loss functions possess a multiscale structure, manifested in two ways: (1) in a neighborhood of minima, the loss mixes a continuum of scales and grows subquadratically, and (2) over a larger region, the loss clearly exhibits several separate scales. Using the subquadratic growth, we are able to explain the Edge of Stability phenomenon observed for the gradient descent (GD) method [5]. Using the separate scales, we explain the working mechanism of learning rate decay through simple examples. Finally, we study the origin of the multiscale structure and propose that the non-convexity of the models and the non-uniformity of the training data are among the causes. By constructing a two-layer neural network problem, we show that training data with different magnitudes give rise to different scales of the loss function, producing subquadratic growth and multiple separate scales.
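The Edge of Stability phenomenon referenced above is stated relative to the classical quadratic stability threshold: for a quadratic with top Hessian eigenvalue $\lambda_{\max}(H)$, gradient descent with step size $\eta$ is stable only if
\[
\eta < \frac{2}{\lambda_{\max}(H)} ,
\]
and the subquadratic growth described in the abstract offers one explanation for why training need not diverge when $\lambda_{\max}$ hovers at this bound.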
In this paper, we propose SC-Reg (self-concordant regularization) for learning over-parameterized feedforward neural networks, incorporating second-order information within a Newton decrement framework for convex problems. We propose the Generalized Gauss-Newton with Self-Concordant Regularization (GGN-SCORE) algorithm, which updates the network parameters each time it receives a new input batch. The proposed algorithm exploits the structure of the second-order information in the Hessian matrix, thereby reducing the training computational overhead. Although our current analysis considers only the convex case, numerical experiments show the efficiency of our method and its fast convergence under both convex and non-convex settings, comparing favorably against baseline first-order and quasi-Newton methods.
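For reference, the Newton decrement of a strictly convex, twice-differentiable function $f$ at a point $x$, around which the framework above is organized, is
\[
\lambda(x) \;=\; \Big( \nabla f(x)^{\top} \big[\nabla^2 f(x)\big]^{-1} \nabla f(x) \Big)^{1/2},
\]
which measures the predicted decrease of $f$ in one Newton step and serves as the progress measure in the standard self-concordant analysis.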
A long-standing debate surrounds the related hypotheses that low-curvature minima generalize better and that SGD discourages curvature. We offer a more complete and nuanced view in support of both. First, we show that curvature harms test performance through two new mechanisms, the shift-curvature and the bias-curvature, in addition to a known parameter-covariance mechanism. The three curvature-mediated contributions to test performance are reparameterization-invariant even though curvature itself is not. The shift in the shift-curvature is the line connecting train and test local minima, which differ due to dataset sampling or distribution shift. Although the shift is unknown at training time, the shift-curvature can still be mitigated by minimizing overall curvature. Second, we derive a new, explicit SGD steady-state distribution, showing that SGD optimizes an effective potential related to, but different from, the train loss, and that SGD noise mediates a trade-off between deep and low-curvature regions of this effective potential. Third, combining our test-performance analysis with the SGD steady state shows that, for small SGD noise, the shift-curvature may be the most significant of the three mechanisms. Our experiments confirm the impact of shift-curvature on test loss and further explore the relationship between SGD noise and curvature.
Analyzing geometric properties of high-dimensional loss functions, such as local curvature and the existence of other optima around a given point in loss space, can help provide a better understanding of the interplay between neural network architecture, implementation attributes, and learning performance. In this work, we combine concepts from high-dimensional probability and differential geometry to study how curvature properties in lower-dimensional loss representations depend on those of the original loss space. We show that saddle points in the original space are rarely correctly identified as such in lower-dimensional representations if random projections are used. In such projections, the expected curvature in a lower-dimensional representation is proportional to the mean curvature in the original loss space. Hence, the mean curvature in the original loss space determines whether saddle points appear, on average, as minima, maxima, or almost flat regions. We use this connection between expected curvature and mean curvature (i.e., the normalized Hessian trace) to estimate the trace of Hessians without computing the Hessian or Hessian-vector products as in Hutchinson's method. Because random projections are not able to correctly identify saddle information, we propose to study projections along the dominant Hessian directions associated with the largest and smallest principal curvatures. We connect our findings to the ongoing debate on loss landscape flatness and generalizability. Finally, we illustrate our method in numerical experiments on different image classifiers with up to $7 \times 10^6$ parameters.
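A hedged sketch of the trace estimate suggested by the proportionality above: for a random unit direction $u$, the expected directional curvature $u^\top H u$ equals $\mathrm{tr}(H)/d$, so averaging finite-difference curvatures of the loss along random directions (no Hessian or Hessian-vector products) and multiplying by the dimension $d$ recovers the trace. Step size and sample count below are illustrative, not the paper's settings.

```python
import numpy as np

def trace_estimate(loss_fn, theta, num_dirs=100, h=1e-3, seed=0):
    """Estimate tr(H) at theta using only loss evaluations.

    Averages second-order finite-difference curvatures of the loss along
    random unit directions, then multiplies by the dimension d, using
    E[u^T H u] = tr(H) / d for u uniform on the unit sphere.
    """
    rng = np.random.default_rng(seed)
    d = theta.size
    base = loss_fn(theta)
    curvatures = []
    for _ in range(num_dirs):
        u = rng.normal(size=d)
        u /= np.linalg.norm(u)   # uniform direction on the sphere
        c = (loss_fn(theta + h * u) - 2.0 * base + loss_fn(theta - h * u)) / h ** 2
        curvatures.append(c)
    return d * float(np.mean(curvatures))

# Sanity check on a quadratic with known trace 4.
A = np.diag([3.0, -1.0, 2.0])
print(trace_estimate(lambda x: 0.5 * x @ A @ x, np.zeros(3)))  # ~ 4.0
```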
Neural network training relies on our ability to find "good" minimizers of highly non-convex loss functions. It is well-known that certain network architecture designs (e.g., skip connections) produce loss functions that train easier, and well-chosen training parameters (batch size, learning rate, optimizer) produce minimizers that generalize better. However, the reasons for these differences, and their effects on the underlying loss landscape, are not well understood. In this paper, we explore the structure of neural loss functions, and the effect of loss landscapes on generalization, using a range of visualization methods. First, we introduce a simple "filter normalization" method that helps us visualize loss function curvature and make meaningful side-by-side comparisons between loss functions. Then, using a variety of visualizations, we explore how network architecture affects the loss landscape, and how training parameters affect the shape of minimizers.
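A hedged numpy sketch of filter normalization as described above: each filter of a random direction is rescaled to the norm of the corresponding filter of the trained weights, so that 1-D or 2-D loss plots are comparable across networks. Treating the first axis as the filter axis, and the helper `loss_curve`, are assumptions for this illustration.

```python
import numpy as np

def filter_normalize(direction, weights):
    """Rescale each filter of `direction` to match the norm of the same filter
    in `weights` (filters assumed to lie along the first axis).

    direction, weights: arrays of identical shape, e.g. (out_ch, in_ch, k, k)
    for a conv layer. Returns the filter-normalized direction.
    """
    d = direction.reshape(direction.shape[0], -1).copy()
    w = weights.reshape(weights.shape[0], -1)
    d_norm = np.linalg.norm(d, axis=1, keepdims=True) + 1e-10
    w_norm = np.linalg.norm(w, axis=1, keepdims=True)
    return (d / d_norm * w_norm).reshape(direction.shape)

def loss_curve(loss_fn, weights, alphas, rng):
    """1-D loss slice along a filter-normalized random direction (loss_fn assumed given)."""
    d = filter_normalize(rng.normal(size=weights.shape), weights)
    return [loss_fn(weights + a * d) for a in alphas]
```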