The properties of flat minima in the empirical risk landscape of neural networks have been debated for some time. Increasing evidence suggests that they possess better generalization capabilities than sharp minima. First, we discuss Gaussian mixture classification models and show analytically that there exist Bayes-optimal pointwise estimators which correspond to minimizers belonging to wide flat regions. These estimators can be found by applying maximum-flatness algorithms either directly on the classifier (which is norm independent) or on the differentiable loss function used in learning. Next, we extend the analysis to the deep-learning scenario through extensive numerical validations. Using two algorithms, Entropy-SGD and Replicated-SGD, which explicitly include in the optimization objective a non-local flatness measure known as local entropy, we consistently improve the generalization error for common architectures (e.g., ResNet, EfficientNet). An easy-to-compute flatness measure shows a clear correlation with test accuracy.
This paper proposes a new optimization algorithm called Entropy-SGD for training deep neural networks that is motivated by the local geometry of the energy landscape. Local extrema with low generalization error have a large proportion of almost-zero eigenvalues in the Hessian with very few positive or negative eigenvalues. We leverage upon this observation to construct a local-entropy-based objective function that favors well-generalizable solutions lying in large flat regions of the energy landscape, while avoiding poorly-generalizable solutions located in the sharp valleys. Conceptually, our algorithm resembles two nested loops of SGD where we use Langevin dynamics in the inner loop to compute the gradient of the local entropy before each update of the weights. We show that the new objective has a smoother energy landscape and show improved generalization over SGD using uniform stability, under certain assumptions. Our experiments on convolutional and recurrent networks demonstrate that Entropy-SGD compares favorably to state-of-the-art techniques in terms of generalization error and training time.
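A minimal sketch of the nested-loop structure described above, written as illustrative PyTorch-style code. The hyper-parameter names (`gamma`, `langevin_steps`, `noise_eps`) and the `batches` iterable of mini-batches are assumptions for the sketch, not the authors' released implementation:

```python
import torch

def entropy_sgd_step(model, loss_fn, batches, outer_lr=0.1, gamma=0.03,
                     langevin_lr=0.01, langevin_steps=5, noise_eps=1e-4):
    """One outer update of an Entropy-SGD-style scheme (illustrative sketch).

    The inner loop runs Langevin dynamics (SGLD) around the current weights x
    to estimate the gradient of the local entropy; the outer step then moves x
    towards the running mean mu of the inner iterates.
    """
    x = [p.detach().clone() for p in model.parameters()]   # current "slow" weights
    mu = [p.clone() for p in x]                             # running mean of inner iterates

    for _, (inputs, targets) in zip(range(langevin_steps), batches):
        model.zero_grad()
        loss_fn(model(inputs), targets).backward()
        with torch.no_grad():
            for p, x_i, mu_i in zip(model.parameters(), x, mu):
                # SGLD step on the coupled objective f(x') + (gamma/2) * ||x' - x||^2
                grad = p.grad + gamma * (p - x_i)
                p.add_(-langevin_lr * grad
                       + noise_eps * (langevin_lr ** 0.5) * torch.randn_like(p))
                mu_i.mul_(0.75).add_(0.25 * p)   # exponential moving average of inner iterates

    # Outer step: x <- x - eta * gamma * (x - mu), i.e. follow the local-entropy gradient.
    with torch.no_grad():
        for p, x_i, mu_i in zip(model.parameters(), x, mu):
            p.copy_(x_i - outer_lr * gamma * (x_i - mu_i))
```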
We systematize the approach to the investigation of deep neural network landscapes by basing it on the geometry of the space of implemented functions rather than the space of parameters. Grouping classifiers into equivalence classes, we develop a standardized parameterization in which all symmetries are removed, resulting in a toroidal topology. On this space we explore the error landscape rather than the loss. This allows us to derive meaningful notions of the flatness of minimizers and of the geodesic paths connecting them. Using different optimization algorithms that sample minimizers with different flatness, we study mode connectivity and relative distances. Testing a variety of state-of-the-art architectures and benchmark datasets, we confirm the correlation between flatness and generalization performance; we further show that in function space minima are closer to each other and that the barriers along the geodesics connecting them are small. We also find that minimizers found by variants of gradient descent can be connected by zero-error paths composed of two straight lines in parameter space, i.e., polygonal chains with a single bend. We observe similar qualitative results in neural networks with binary weights and activations, providing one of the first results on connectivity in this setting. Our results hinge on the removal of symmetries and are in remarkable agreement with the rich phenomenology described by analytical studies performed on simple shallow models.
Current deep neural networks are highly overparameterized (up to billions of connection weights) and nonlinear. Yet they can fit data almost perfectly through variants of gradient descent algorithms and achieve unexpected levels of prediction accuracy without overfitting. These are formidable results that defy predictions of statistical learning and pose conceptual challenges for non-convex optimization. In this paper, we use methods from the statistical physics of disordered systems to analyze the computational consequences of overparameterization in non-convex binary neural network models, trained on data generated from a structurally simpler but "hidden" network. As the number of connection weights increases, we follow the changes in the geometrical structure of the different minima of the error loss function and relate them to learning and generalization performance. A first transition happens at the so-called interpolation point, when solutions begin to exist (perfect fitting becomes possible). This transition reflects the properties of typical solutions, which, however, lie in sharp minima and are hard to sample. After a gap, a second transition occurs, with the discontinuous appearance of a different kind of "atypical" structure: wide regions of weight space that are particularly dense in solutions and have good generalization properties. The two kinds of solutions coexist, with the typical ones being exponentially more numerous, but empirically we find that efficient algorithms sample the atypical, rare ones. This suggests that the atypical phase transition is the relevant one for learning. Numerical tests on realistic networks, using observables suggested by the theory, are consistent with this scenario.
In today's heavily overparameterized models, the value of the training loss provides few guarantees on model generalization ability. Indeed, optimizing only the training loss value, as is commonly done, can easily lead to suboptimal model quality. Motivated by prior work connecting the geometry of the loss landscape and generalization, we introduce a novel, effective procedure for instead simultaneously minimizing loss value and loss sharpness. In particular, our procedure, Sharpness-Aware Minimization (SAM), seeks parameters that lie in neighborhoods having uniformly low loss; this formulation results in a min-max optimization problem on which gradient descent can be performed efficiently. We present empirical results showing that SAM improves model generalization across a variety of benchmark datasets (e.g., CIFAR-{10, 100}, ImageNet, finetuning tasks) and models, yielding novel state-of-the-art performance for several. Additionally, we find that SAM natively provides robustness to label noise on par with that provided by state-of-the-art procedures that specifically target learning with noisy labels. We open source our code at https://github.com/google-research/sam. * Work done as part of the Google AI Residency program.
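The min-max objective admits a simple two-pass update; the sketch below is an illustrative PyTorch fragment under that reading, not the released google-research/sam code (`rho`, the neighborhood radius, and the helper name are assumptions):

```python
import torch

def sam_step(model, loss_fn, inputs, targets, base_optimizer, rho=0.05):
    """One SAM update: ascend to the (approximate) worst-case point in a rho-ball,
    then apply the base optimizer using the gradient taken at that point."""
    # First pass: gradient at the current weights.
    loss_fn(model(inputs), targets).backward()

    # epsilon = rho * g / ||g||, then perturb the weights in place.
    grads = [p.grad for p in model.parameters() if p.grad is not None]
    grad_norm = torch.norm(torch.stack([g.norm() for g in grads]))
    eps = []
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is None:
                eps.append(None)
                continue
            e = rho * p.grad / (grad_norm + 1e-12)
            p.add_(e)                      # w -> w + epsilon (ascent step)
            eps.append(e)
    model.zero_grad()

    # Second pass: gradient at the perturbed weights defines the sharpness-aware step.
    loss_fn(model(inputs), targets).backward()
    with torch.no_grad():
        for p, e in zip(model.parameters(), eps):
            if e is not None:
                p.sub_(e)                  # undo the perturbation before the real update
    base_optimizer.step()                  # e.g. SGD with momentum, applied to the second gradient
    base_optimizer.zero_grad()
```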
The success of deep learning has revealed the application potential of neural networks across the sciences and has opened up fundamental theoretical problems. In particular, the fact that learning algorithms based on simple variants of gradient methods are able to find near-optimal minima of highly non-convex loss functions is an unexpected feature of neural networks. Moreover, such algorithms are able to fit the data even in the presence of noise, and yet they have excellent predictive capabilities. Several empirical results have shown a reproducible correlation between the so-called flatness of the minima reached by the algorithms and the generalization performance. At the same time, statistical physics results have shown that in non-convex networks a multitude of narrow minima may coexist with a much smaller number of wide flat minima, which generalize well. Here we show that wide flat structures emerge from the coalescence of minima around "high-margin" (i.e., locally robust) configurations. Despite being exponentially rare compared to zero-margin ones, high-margin minima tend to concentrate in particular regions. These minima are in turn surrounded by other solutions of smaller and smaller margin, leading to regions of solutions that remain dense over long distances. Our analysis also provides an alternative analytical method for estimating when flat minima appear and when algorithms begin to find solutions, as the number of model parameters varies.
A fundamental property of deep learning normalization techniques, such as batch normalization, is making the pre-normalization parameters scale invariant. The intrinsic domain of such parameters is the unit sphere, and therefore their gradient optimization dynamics can be represented via spherical optimization with a varying effective learning rate (ELR), which has been studied previously. In this work, we directly investigate the properties of training scale-invariant neural networks with a fixed ELR. We discover three regimes of such training depending on the ELR value: convergence, chaotic equilibrium, and divergence. We study these regimes in detail both through a theoretical examination of a toy example and through a thorough empirical analysis of real scale-invariant deep learning models. Each regime has unique features and reflects specific properties of the intrinsic loss landscape, some of which have strong parallels with previous studies of both regular and scale-invariant neural network training. Finally, we demonstrate how the discovered regimes are reflected in conventional training of normalized networks and how they can be leveraged to reach better optima.
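To make the quantity at play concrete: for a scale-invariant parameter group, an SGD step of size lr acts on the weight direction with effective learning rate lr / ||w||^2, and training "on the sphere" keeps the norm fixed. The sketch below is a minimal illustration under that standard definition (the function names are hypothetical):

```python
import torch

def effective_learning_rate(param: torch.Tensor, lr: float) -> float:
    """For a scale-invariant parameter group, gradient steps of size lr act on the
    weight direction with effective learning rate lr / ||w||^2."""
    return lr / float(param.norm() ** 2)

def spherical_sgd_step(param: torch.Tensor, grad: torch.Tensor, elr: float):
    """One projected SGD step on the unit sphere with a fixed ELR: remove the radial
    component of the gradient, take the step, then re-normalize."""
    with torch.no_grad():
        tangent = grad - (grad * param).sum() * param   # project grad onto the tangent space
        param.sub_(elr * tangent)
        param.div_(param.norm())                        # back to the unit sphere
```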
We study the effect of mini-batching on the loss landscape of deep neural networks using spiked, field-dependent random matrix theory. We show that the magnitudes of the extremal eigenvalues of the batch Hessian are larger than those of the empirical Hessian. We obtain similar results for the generalized Gauss-Newton matrix approximation of the Hessian. As a consequence of our theorems, we derive an analytical expression for the maximal learning rate as a function of batch size, informing practical training regimes for both stochastic gradient descent (linear scaling) and adaptive algorithms such as Adam (square-root scaling), for smooth, non-convex deep neural networks. While the linear scaling for stochastic gradient descent has been derived under more restrictive conditions, which we generalize, the square-root scaling rule for adaptive optimizers is, to our knowledge, completely novel. For stochastic second-order and adaptive methods, we derive that the minimal damping coefficient is proportional to the ratio of the learning rate to the batch size. We validate our claims on VGG/WideResNet architectures on the CIFAR-100 and ImageNet datasets. Based on our investigations, we develop a stochastic Lanczos quadrature based on-the-fly learning rate and momentum learner, which avoids the need for expensive multiple evaluations of these key hyper-parameters and shows good preliminary results on a pre-residual architecture for CIFAR-100.
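The two scaling rules mentioned above can be written as a one-line helper. The sketch below is purely illustrative (the reference learning rate, reference batch size, and function name are assumptions):

```python
def scaled_learning_rate(base_lr: float, base_batch: int, batch: int,
                         optimizer: str = "sgd") -> float:
    """Scale a reference learning rate to a new batch size: linear scaling for SGD,
    square-root scaling for adaptive methods such as Adam."""
    ratio = batch / base_batch
    if optimizer.lower() == "sgd":
        return base_lr * ratio            # linear scaling
    return base_lr * ratio ** 0.5         # square-root scaling (Adam-like)

# Example: a learning rate of 0.1 tuned at batch size 128, moved to batch size 1024.
print(scaled_learning_rate(0.1, 128, 1024, "sgd"))    # 0.8
print(scaled_learning_rate(0.1, 128, 1024, "adam"))   # ~0.283
```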
We focus on a specific class of shallow neural networks with a single hidden layer, namely those with $\ell_2$-normalized data and either a sigmoid-shaped Gaussian error function ("erf") activation or a Gaussian Error Linear Unit (GELU) activation. For these networks we derive new generalization bounds through PAC-Bayesian theory; unlike most existing bounds, they apply to neural networks with deterministic or randomized parameters. Our bounds are empirically non-vacuous when the networks are trained with vanilla stochastic gradient descent on MNIST and Fashion-MNIST.
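For reference, the two activations can be written down directly. The sketch below assumes the usual definitions; in particular, the exact argument scaling of the erf activation used by the authors is an assumption here:

```python
import math
import torch

def erf_activation(x: torch.Tensor) -> torch.Tensor:
    """Sigmoid-shaped Gaussian error function activation (assumed scaling x / sqrt(2))."""
    return torch.erf(x / math.sqrt(2))

def gelu(x: torch.Tensor) -> torch.Tensor:
    """Gaussian Error Linear Unit: x * Phi(x), with Phi the standard normal CDF."""
    return x * 0.5 * (1.0 + torch.erf(x / math.sqrt(2)))
```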
Deep learning has achieved promising results in a wide range of AI applications. Larger datasets and models consistently yield better performance, but generally at the cost of longer training times, more computation, and more communication. In this survey, we aim to provide a clear sketch of optimization for large-scale deep learning with respect to both model accuracy and model efficiency. We survey the algorithms most commonly used for optimization, elaborate on the debatable topic of the generalization gap that arises in large-batch training, and review the state-of-the-art strategies for addressing communication overhead and reducing memory footprint.
The stochastic gradient descent (SGD) method and its variants are algorithms of choice for many Deep Learning tasks. These methods operate in a small-batch regime wherein a fraction of the training data, say 32-512 data points, is sampled to compute an approximation to the gradient. It has been observed in practice that when using a larger batch there is a degradation in the quality of the model, as measured by its ability to generalize. We investigate the cause for this generalization drop in the large-batch regime and present numerical evidence that supports the view that large-batch methods tend to converge to sharp minimizers of the training and testing functions, and, as is well known, sharp minima lead to poorer generalization. In contrast, small-batch methods consistently converge to flat minimizers, and our experiments support a commonly held view that this is due to the inherent noise in the gradient estimation. We discuss several strategies to attempt to help large-batch methods eliminate this generalization gap.
We analyze and explain the improved generalization performance of iterate averaging using a Gaussian process perturbation model between the true and batch risk surfaces on high-dimensional quadratics. We derive three phenomena from our theoretical results: (1) the importance of combining iterate averaging (IA) with large learning rates and regularization for improved regularization; (2) a justification for less frequent averaging; (3) that we expect adaptive gradient methods to work equally well, or better, with iterate averaging than their non-adaptive counterparts. Inspired by these results, together with the importance of appropriate regularization for the diversity of the iterate solutions, we propose two adaptive algorithms with iterate averaging. These give significantly better results than stochastic gradient descent (SGD), require less tuning, and do not require early stopping or validation-set monitoring. We showcase the efficacy of our approaches on the CIFAR-10/100, ImageNet, and Penn Treebank datasets across a variety of modern and classical network architectures.
The vast majority of successful deep neural networks are trained using variants of stochastic gradient descent (SGD) algorithms. Recent attempts to improve SGD can be broadly categorized into two approaches: (1) adaptive learning rate schemes, such as AdaGrad and Adam, and (2) accelerated schemes, such as heavy-ball and Nesterov momentum. In this paper, we propose a new optimization algorithm, Lookahead, that is orthogonal to these previous approaches and iteratively updates two sets of weights. Intuitively, the algorithm chooses a search direction by looking ahead at the sequence of "fast weights" generated by another optimizer. We show that Lookahead improves the learning stability and lowers the variance of its inner optimizer with negligible computation and memory cost. We empirically demonstrate Lookahead can significantly improve the performance of SGD and Adam, even with their default hyperparameter settings on ImageNet, CIFAR-10/100, neural machine translation, and Penn Treebank.
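A minimal sketch of the fast/slow-weight scheme described above, written as an illustrative PyTorch-style wrapper rather than the authors' released implementation (`k` is the synchronization period and `alpha` the slow-weight step size):

```python
import torch

class Lookahead:
    """Minimal Lookahead wrapper (illustrative sketch): the inner optimizer produces
    "fast" weights; every k steps the "slow" weights move a fraction alpha towards
    them, and the fast weights restart from the slow ones."""

    def __init__(self, base_optimizer, model, k: int = 5, alpha: float = 0.5):
        self.base = base_optimizer
        self.model = model
        self.k = k
        self.alpha = alpha
        self.step_count = 0
        self.slow = [p.detach().clone() for p in model.parameters()]

    def step(self):
        self.base.step()
        self.step_count += 1
        if self.step_count % self.k == 0:
            with torch.no_grad():
                for slow, fast in zip(self.slow, self.model.parameters()):
                    slow.add_(self.alpha * (fast - slow))   # slow <- slow + alpha * (fast - slow)
                    fast.copy_(slow)                        # restart the fast weights at the slow ones
```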
The loss functions of deep neural networks are complex and their geometric properties are not well understood. We show that the optima of these complex loss functions are in fact connected by simple curves over which training and test accuracy are nearly constant. We introduce a training procedure to discover these high-accuracy pathways between modes. Inspired by this new geometric insight, we also propose a new ensembling method entitled Fast Geometric Ensembling (FGE). Using FGE we can train high-performing ensembles in the time required to train a single model. We achieve improved performance compared to the recent state-of-the-art Snapshot Ensembles, on CIFAR-10, CIFAR-100, and ImageNet. * Equal contribution. 1 Suppose we have three weight vectors $w_1, w_2, w_3$. We set $u = w_2 - w_1$ and $v = (w_3 - w_1) - \frac{\langle w_3 - w_1,\, w_2 - w_1\rangle}{\|w_2 - w_1\|^2}\,(w_2 - w_1)$. Then the normalized vectors $\hat{u} = u/\|u\|$ and $\hat{v} = v/\|v\|$ form an orthonormal basis in the plane containing $w_1, w_2, w_3$. To visualize the loss in this plane, we define a Cartesian grid in the basis $\hat{u}, \hat{v}$ and evaluate the networks corresponding to each of the points in the grid. A point $P$ with coordinates $(x, y)$ in the plane is then given by $P = w_1 + x \cdot \hat{u} + y \cdot \hat{v}$.
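The plane construction in the footnote translates directly into a few lines of NumPy; the sketch below assumes the three weight vectors have been flattened into 1-D arrays (function names are hypothetical):

```python
import numpy as np

def plane_basis(w1: np.ndarray, w2: np.ndarray, w3: np.ndarray):
    """Orthonormal basis of the plane through three flattened weight vectors."""
    u = w2 - w1
    v = (w3 - w1) - np.dot(w3 - w1, w2 - w1) / np.dot(w2 - w1, w2 - w1) * (w2 - w1)
    return u / np.linalg.norm(u), v / np.linalg.norm(v)

def point_in_plane(w1: np.ndarray, u_hat: np.ndarray, v_hat: np.ndarray, x: float, y: float):
    """Weights corresponding to grid coordinates (x, y): P = w1 + x*u_hat + y*v_hat."""
    return w1 + x * u_hat + y * v_hat
```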
Viewing neural network models in terms of their loss landscapes has a long history in the statistical mechanics approach to learning, and in recent years it has received attention within machine learning proper. Among other things, local metrics (such as the smoothness of the loss landscape) have been shown to correlate with global properties of the model (such as good generalization performance). Here, we perform a detailed empirical analysis of the loss landscape structure of thousands of neural network models, systematically varying learning tasks, model architectures, and/or the quantity and quality of data. By considering a range of metrics that attempt to capture different aspects of the loss landscape, we demonstrate that the best test accuracy is obtained when the loss landscape is globally well-connected, ensembles of trained models are more similar to one another, and models converge to locally smooth regions. We also show that globally poorly-connected landscapes can arise when models are small or when they are trained on lower-quality data, and that, if the loss landscape is globally poorly-connected, then training to zero loss can actually lead to worse test accuracy. Our detailed empirical results shed light on phases of learning (and the consequent double-descent behavior), fundamental versus incidental determinants of good generalization, the roles of load-like and temperature-like parameters in the learning process, the different influences of model and data on the loss landscape, and the relationship between local and global metrics, all topics of recent interest.
Many applications require sparse neural networks due to space or inference time restrictions. There is a large body of work on training dense networks to yield sparse networks for inference, but this limits the size of the largest trainable sparse model to that of the largest trainable dense model. In this paper we introduce a method to train sparse neural networks with a fixed parameter count and a fixed computational cost throughout training, without sacrificing accuracy relative to existing dense-to-sparse training methods. Our method updates the topology of the sparse network during training by using parameter magnitudes and infrequent gradient calculations. We show that this approach requires fewer floating-point operations (FLOPs) to achieve a given level of accuracy compared to prior techniques. We demonstrate state-of-the-art sparse training results on a variety of networks and datasets, including ResNet-50, MobileNets on ImageNet-2012, and RNNs on WikiText-103. Finally, we provide some insights into why allowing the topology to change during the optimization can overcome local minima encountered when the topology remains static.
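The magnitude-based drop and gradient-based grow step this describes can be sketched compactly. The code below is an illustrative single-tensor version under that reading (the update fraction and function name are assumptions; the real method applies such updates only infrequently during training):

```python
import torch

def drop_and_grow(weight: torch.Tensor, dense_grad: torch.Tensor, mask: torch.Tensor,
                  update_fraction: float = 0.3) -> torch.Tensor:
    """One illustrative drop/grow update: deactivate the smallest-magnitude active
    weights and activate the inactive connections with the largest dense-gradient
    magnitude, keeping the number of active parameters fixed."""
    n_active = int(mask.sum())
    n_update = int(update_fraction * n_active)

    # Drop: among active weights, remove those with the smallest magnitude.
    active_scores = torch.where(mask.bool(), weight.abs(),
                                torch.full_like(weight, float("inf")))
    drop_idx = torch.topk(active_scores.flatten(), n_update, largest=False).indices
    mask.view(-1)[drop_idx] = 0.0

    # Grow: among inactive weights, add those with the largest gradient magnitude.
    inactive_scores = torch.where(mask.bool(), torch.full_like(dense_grad, -float("inf")),
                                  dense_grad.abs())
    grow_idx = torch.topk(inactive_scores.flatten(), n_update, largest=True).indices
    mask.view(-1)[grow_idx] = 1.0
    weight.view(-1)[grow_idx] = 0.0   # newly grown connections start at zero

    return mask
```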
Distributed Deep Learning (DDL) is essential for large-scale Deep Learning (DL) training. Synchronous Stochastic Gradient Descent (SSGD) is the de facto DDL optimization method, and using a sufficiently large batch size is critical to achieving DDL runtime speedup. In the large-batch setting, the learning rate must be increased to compensate for the reduced number of parameter updates. However, a large learning rate may harm convergence in SSGD, and training can easily diverge. Recently, Decentralized Parallel SGD (DPSGD) has been proposed to improve distributed training speed. In this paper, we find that DPSGD offers not only a system-wise runtime benefit but also a significant convergence benefit over SSGD in the large-batch setting. Based on a detailed analysis of the DPSGD learning dynamics, we find that DPSGD introduces additional landscape-dependent noise that automatically adjusts the effective learning rate to improve convergence. We further show theoretically that this noise smooths the loss landscape and hence allows a larger learning rate. We conduct extensive studies over 18 state-of-the-art DL models/tasks and demonstrate that DPSGD often converges in cases where SSGD diverges under large learning rates in the large-batch setting. Our findings hold consistently across two application domains, computer vision (CIFAR10 and ImageNet-1K) and automatic speech recognition (SWB300 and SWB2000), and across two types of neural network models, convolutional neural networks and long short-term memory recurrent neural networks.
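The contrast with synchronous SGD is essentially global all-reduce averaging versus neighbor-only mixing. The sketch below illustrates one decentralized round with uniform gossip averaging; the worker and neighbor structures are hypothetical simplifications of a real communication topology:

```python
import torch

def dpsgd_round(worker_params, worker_grads, neighbors, lr=0.1):
    """One illustrative round of decentralized parallel SGD: every worker takes a
    local gradient step, then averages its parameters with its graph neighbors
    instead of performing a global all-reduce."""
    # Local SGD step on each worker.
    stepped = [p - lr * g for p, g in zip(worker_params, worker_grads)]
    # Gossip averaging with neighbors (including self).
    mixed = []
    for i, p in enumerate(stepped):
        group = [stepped[j] for j in neighbors[i]] + [p]
        mixed.append(torch.stack(group).mean(dim=0))
    return mixed
```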
Stochastic gradient descent (SGD) is the workhorse algorithm of deep learning. At each step of the training phase, a mini-batch of samples is drawn from the training dataset and the weights of the neural network are adjusted according to the performance on this specific subset of examples. The mini-batch sampling procedure introduces a stochastic dynamics into gradient descent, with a non-trivial state-dependent noise. We characterize the stochasticity of SGD and of a recently introduced variant, persistent SGD, in a prototypical neural network model. In the under-parametrized regime, where the final training error is positive, the SGD dynamics reaches a stationary state and we define an effective temperature from the fluctuation-dissipation theorem, computed from dynamical mean-field theory. We use the effective temperature to quantify the magnitude of the SGD noise as a function of the problem parameters. In the over-parametrized regime, where the training error vanishes, we measure the noise magnitude of SGD by computing the average distance between two replicas of the system with the same initialization and two different realizations of the SGD noise. We find that the two noise measures behave similarly as functions of the problem parameters. Moreover, we observe that noisier algorithms lead to wider decision boundaries of the corresponding constraint satisfaction problem.
We propose an efficient method for approximating natural gradient descent in neural networks which we call Kronecker-factored Approximate Curvature (K-FAC). K-FAC is based on an efficiently invertible approximation of a neural network's Fisher information matrix which is neither diagonal nor low-rank, and in some cases is completely non-sparse. It is derived by approximating various large blocks of the Fisher (corresponding to entire layers) as being the Kronecker product of two much smaller matrices. While only several times more expensive to compute than the plain stochastic gradient, the updates produced by K-FAC make much more progress optimizing the objective, which results in an algorithm that can be much faster than stochastic gradient descent with momentum in practice. And unlike some previously proposed approximate natural-gradient/Newton methods which use high-quality non-diagonal curvature matrices (such as Hessian-free optimization), K-FAC works very well in highly stochastic optimization regimes. This is because the cost of storing and inverting K-FAC's approximation to the curvature matrix does not depend on the amount of data used to estimate it, which is a feature typically associated only with diagonal or low-rank approximations to the curvature matrix.
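The core idea, one Kronecker-factored curvature block per layer, can be illustrated in a few lines of NumPy: if a layer's Fisher block is approximated as $A \otimes G$, with $A$ the second moment of the layer inputs and $G$ that of the back-propagated gradients, then preconditioning the layer's gradient only requires the two small factor inverses. A sketch under that reading (the names and the damping value are assumptions):

```python
import numpy as np

def kfac_precondition(layer_grad: np.ndarray, A: np.ndarray, G: np.ndarray,
                      damping: float = 1e-3) -> np.ndarray:
    """Approximate natural-gradient direction for one layer.

    layer_grad has shape (out_dim, in_dim); A is the (in_dim x in_dim) input
    second-moment factor and G the (out_dim x out_dim) gradient factor. Since
    (A kron G)^{-1} vec(W) = vec(G^{-1} W A^{-1}), the full Kronecker product
    is never formed explicitly.
    """
    A_inv = np.linalg.inv(A + damping * np.eye(A.shape[0]))
    G_inv = np.linalg.inv(G + damping * np.eye(G.shape[0]))
    return G_inv @ layer_grad @ A_inv
```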
Deep neural networks are typically trained by optimizing a loss function with an SGD variant, in conjunction with a decaying learning rate, until convergence. We show that simple averaging of multiple points along the trajectory of SGD, with a cyclical or constant learning rate, leads to better generalization than conventional training. We also show that this Stochastic Weight Averaging (SWA) procedure finds much flatter solutions than SGD, and approximates the recent Fast Geometric Ensembling (FGE) approach with a single model. Using SWA we achieve notable improvement in test accuracy over conventional SGD training on a range of state-of-the-art residual networks, PyramidNets, DenseNets, and Shake-Shake networks on CIFAR-10, CIFAR-100, and ImageNet. In short, SWA is extremely easy to implement, improves generalization, and has almost no computational overhead.
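A minimal sketch of the averaging loop described above; the `train_one_epoch` callable and the constant-learning-rate choice are assumptions for the sketch, and PyTorch also ships `torch.optim.swa_utils.AveragedModel` and `SWALR` for the same purpose:

```python
import copy
import torch

def stochastic_weight_averaging(model, train_one_epoch, swa_epochs: int, swa_lr: float):
    """Minimal SWA sketch: keep training with a constant learning rate and maintain
    a running average of the weights visited along the SGD trajectory."""
    swa_model = copy.deepcopy(model)          # holds the running average
    n_averaged = 0
    optimizer = torch.optim.SGD(model.parameters(), lr=swa_lr, momentum=0.9)

    for _ in range(swa_epochs):
        train_one_epoch(model, optimizer)     # ordinary SGD epoch (user-supplied)
        n_averaged += 1
        with torch.no_grad():
            for avg_p, p in zip(swa_model.parameters(), model.parameters()):
                avg_p.add_((p - avg_p) / n_averaged)   # running mean: avg <- avg + (w - avg) / n

    # In practice the BatchNorm running statistics of swa_model are recomputed
    # with one pass over the training data before evaluation.
    return swa_model
```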