Stochastic Weight Averaging (SWA) is regarded as a simple yet effective approach for improving the generalization of stochastic gradient descent (SGD) when training deep neural networks (DNNs). A common explanation for its success is that averaging the weights produced by an SGD run with a cyclical or high constant learning rate discovers wider optima, which in turn lead to better generalization. We offer a new insight that does not agree with this view. We show that SWA's performance depends heavily on how far the SGD run preceding SWA has converged, and that the weight-averaging operation itself contributes only to variance reduction. This new insight suggests practical guidelines for better algorithm design. As an instantiation, we show that, following an insufficiently converged SGD run, applying SWA repeatedly yields continual incremental gains in generalization. Our findings are corroborated by extensive experiments across network architectures, including a baseline CNN, PreResNet-164, WideResNet-28-10, VGG16, ResNet-50, ResNet-152, and DenseNet-161, and across datasets including CIFAR-{10,100} and ImageNet.
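A minimal NumPy sketch of one reading of the claim above: after an SGD run that has not fully converged, weight averaging is applied repeatedly, each round averaging the tail of the most recent trajectory. The quadratic toy loss, the number of rounds, and the averaging window are illustrative assumptions, not the authors' protocol.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, lr, noise = 20, 0.05, 0.5
H = np.diag(np.linspace(0.1, 1.0, dim))      # toy quadratic loss: 0.5 * w^T H w

def sgd_steps(w, n):
    """Noisy gradient steps on the toy quadratic (stand-in for an SGD run)."""
    traj = []
    for _ in range(n):
        g = H @ w + noise * rng.standard_normal(dim)
        w = w - lr * g
        traj.append(w.copy())
    return w, traj

w = rng.standard_normal(dim) * 5.0           # start far from the optimum: an under-converged run
for swa_round in range(3):                   # repeated SWA rounds, as the abstract suggests
    w, traj = sgd_steps(w, n=200)
    w = np.mean(traj[-50:], axis=0)          # average the tail of the most recent trajectory
    print(f"round {swa_round}: loss {0.5 * w @ H @ w:.4f}")
```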
Deep neural networks are typically trained by optimizing a loss function with an SGD variant, in conjunction with a decaying learning rate, until convergence. We show that simple averaging of multiple points along the trajectory of SGD, with a cyclical or constant learning rate, leads to better generalization than conventional training. We also show that this Stochastic Weight Averaging (SWA) procedure finds much flatter solutions than SGD, and approximates the recent Fast Geometric Ensembling (FGE) approach with a single model. Using SWA we achieve notable improvement in test accuracy over conventional SGD training on a range of state-of-the-art residual networks, PyramidNets, DenseNets, and Shake-Shake networks on CIFAR-10, CIFAR-100, and ImageNet. In short, SWA is extremely easy to implement, improves generalization, and has almost no computational overhead.
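A hedged sketch of the averaging step itself: a running mean of weights collected at the end of each learning-rate cycle. The cycle length, schedule shape, and toy parameter vector are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def cyclical_lr(step, base_lr=0.05, min_lr=0.001, cycle=100):
    """A simple saw-tooth cyclical schedule; the exact shape is an assumption."""
    t = (step % cycle) / cycle
    return base_lr - t * (base_lr - min_lr)

w = np.zeros(10)                              # stand-in for the flattened network weights
w_swa, n_avg = None, 0
for step in range(1, 1001):
    grad = rng.standard_normal(10)            # placeholder for a mini-batch gradient
    w -= cyclical_lr(step) * grad
    if step % 100 == 0:                       # collect one point at the end of each cycle
        n_avg += 1
        w_swa = w.copy() if w_swa is None else w_swa + (w - w_swa) / n_avg
# w_swa now holds the stochastic weight average of the collected points;
# for a real network, batch-norm statistics would be recomputed for w_swa.
```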
The loss functions of deep neural networks are complex and their geometric properties are not well understood. We show that the optima of these complex loss functions are in fact connected by simple curves over which training and test accuracy are nearly constant. We introduce a training procedure to discover these high-accuracy pathways between modes. Inspired by this new geometric insight, we also propose a new ensembling method entitled Fast Geometric Ensembling (FGE). Using FGE we can train high-performing ensembles in the time required to train a single model. We achieve improved performance compared to the recent state-of-the-art Snapshot Ensembles, on CIFAR-10, CIFAR-100, and ImageNet. Suppose we have three weight vectors w1, w2, w3. We set u = (w2 − w1) and v = (w3 − w1) − (⟨w3 − w1, w2 − w1⟩ / ‖w2 − w1‖²) · (w2 − w1). Then the normalized vectors û = u/‖u‖ and v̂ = v/‖v‖ form an orthonormal basis in the plane containing w1, w2, w3. To visualize the loss in this plane, we define a Cartesian grid in the basis (û, v̂) and evaluate the networks corresponding to each of the points in the grid. A point P with coordinates (x, y) in the plane is then given by P = w1 + x·û + y·v̂.
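The plane construction above, restated as a short NumPy sketch: Gram–Schmidt on (w2 − w1) and (w3 − w1) gives an orthonormal basis of the plane through the three weight vectors, and grid coordinates (x, y) map back to weight vectors for loss evaluation. The loss function and the random weight vectors here are placeholders.

```python
import numpy as np

def plane_basis(w1, w2, w3):
    """Orthonormal basis (u_hat, v_hat) of the plane containing w1, w2, w3."""
    u = w2 - w1
    v = (w3 - w1) - np.dot(w3 - w1, u) / np.dot(u, u) * u   # Gram-Schmidt step
    return u / np.linalg.norm(u), v / np.linalg.norm(v)

def point(w1, u_hat, v_hat, x, y):
    """Weight vector at Cartesian coordinates (x, y) in the plane."""
    return w1 + x * u_hat + y * v_hat

# Placeholder usage: evaluate a toy loss on a grid in the plane.
rng = np.random.default_rng(0)
w1, w2, w3 = (rng.standard_normal(50) for _ in range(3))
u_hat, v_hat = plane_basis(w1, w2, w3)
loss = lambda w: float(np.sum(w ** 2))        # stand-in for training/test loss
grid = [[loss(point(w1, u_hat, v_hat, x, y))
         for x in np.linspace(-1, 2, 5)] for y in np.linspace(-1, 2, 5)]
```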
The vast majority of successful deep neural networks are trained using variants of stochastic gradient descent (SGD) algorithms. Recent attempts to improve SGD can be broadly categorized into two approaches: (1) adaptive learning rate schemes, such as AdaGrad and Adam, and (2) accelerated schemes, such as heavy-ball and Nesterov momentum. In this paper, we propose a new optimization algorithm, Lookahead, that is orthogonal to these previous approaches and iteratively updates two sets of weights. Intuitively, the algorithm chooses a search direction by looking ahead at the sequence of "fast weights" generated by another optimizer. We show that Lookahead improves the learning stability and lowers the variance of its inner optimizer with negligible computation and memory cost. We empirically demonstrate Lookahead can significantly improve the performance of SGD and Adam, even with their default hyperparameter settings on ImageNet, CIFAR-10/100, neural machine translation, and Penn Treebank.
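A minimal sketch of the two-weight structure described above: an inner ("fast") optimizer takes k steps, after which the slow weights move a fraction alpha toward the fast weights and the fast weights are reset. Plain SGD as the inner optimizer and the toy gradient oracle are assumptions for illustration.

```python
import numpy as np

def lookahead_sgd(grad_fn, w0, inner_lr=0.1, alpha=0.5, k=5, outer_steps=100):
    """Lookahead wrapped around plain SGD on an arbitrary gradient oracle."""
    slow = w0.copy()
    for _ in range(outer_steps):
        fast = slow.copy()
        for _ in range(k):                     # k fast-weight updates by the inner optimizer
            fast -= inner_lr * grad_fn(fast)
        slow += alpha * (fast - slow)          # slow weights step toward the fast weights
    return slow

# Toy usage on a quadratic: the gradient of 0.5 * ||w||^2 is w.
w_star = lookahead_sgd(lambda w: w, w0=np.ones(8))
```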
We analyze and explain the generalization performance of iterate averaging using a Gaussian process perturbation model between the true and batch risk surfaces of high-dimensional quadratics. From our theoretical results we derive three phenomena: (1) the importance of combining iterate averaging (IA) with large learning rates and regularization for improved regularization; (2) a justification for averaging less frequently; (3) the expectation that adaptive gradient methods work equally well as, or better than, their non-adaptive counterparts when combined with iterate averaging. Inspired by these results, together with the importance of appropriate regularization for the diversity of the iterate solutions, we propose two adaptive algorithms with iterate averaging. They achieve significantly better results than stochastic gradient descent (SGD), require less tuning, and need neither early stopping nor validation-set monitoring. We demonstrate the efficacy of our approaches on the CIFAR-10/100, ImageNet, and Penn Treebank datasets over a variety of modern and classical network architectures.
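A hedged sketch of tail iterate averaging with a large learning rate, explicit regularization, and averaging only every k-th iterate (the "less frequent averaging" mentioned above). The quadratic objective, the averaging frequency, and the weight-decay strength are illustrative assumptions, not the paper's exact algorithms.

```python
import numpy as np

rng = np.random.default_rng(1)
dim, lr, wd, k = 20, 0.2, 1e-3, 10
H = np.diag(np.linspace(0.05, 1.0, dim))       # toy quadratic risk surface

w = rng.standard_normal(dim)
avg, n_avg = np.zeros(dim), 0
for step in range(1, 2001):
    g = H @ w + 0.3 * rng.standard_normal(dim) + wd * w   # noisy gradient plus explicit L2
    w -= lr * g
    if step > 1000 and step % k == 0:           # average the tail, only every k-th iterate
        n_avg += 1
        avg += (w - avg) / n_avg
print(0.5 * avg @ H @ avg, 0.5 * w @ H @ w)     # risk of the averaged vs. the last iterate
```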
A fundamental property of deep-learning normalization techniques, such as batch normalization, is that they make the pre-normalization parameters scale invariant. The intrinsic domain of such parameters is the unit sphere, so their gradient-optimization dynamics can be represented as spherical optimization with a varying effective learning rate (ELR), which has been studied previously. In this work, we directly study the properties of training scale-invariant neural networks on the sphere with a fixed ELR. Depending on the ELR value, we discover three regimes of such training: convergence, chaotic equilibrium, and divergence. We study these regimes in detail, both through a theoretical examination of illustrative examples and through a thorough empirical analysis of real scale-invariant deep-learning models. Each regime has unique features and reflects specific properties of the intrinsic loss landscape, some of which closely parallel previous studies of training both regular and scale-invariant neural networks. Finally, we demonstrate how the discovered regimes are reflected in conventional training of normalized networks and how they can be leveraged to reach better optima.
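A minimal sketch, under my own assumptions, of training a scale-invariant parameter directly on the unit sphere with a fixed effective learning rate: the gradient is projected onto the tangent space and the weights are renormalized after each step. The toy scale-invariant loss is illustrative only.

```python
import numpy as np

rng = np.random.default_rng(2)
dim, elr = 16, 0.05
A = rng.standard_normal((dim, dim))

def loss_grad(w):
    """Gradient of a toy scale-invariant loss L(w) = ||A w||^2 / ||w||^2."""
    n2 = w @ w
    Aw = A @ w
    return 2 * (A.T @ Aw) / n2 - 2 * (Aw @ Aw) * w / n2 ** 2

w = rng.standard_normal(dim)
w /= np.linalg.norm(w)
for _ in range(500):
    g = loss_grad(w)
    g_tan = g - (g @ w) * w           # project onto the tangent space of the sphere
    w = w - elr * g_tan               # fixed-ELR step
    w /= np.linalg.norm(w)            # retract back onto the unit sphere
```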
Restart techniques are common in gradient-free optimization to deal with multimodal functions. Partial warm restarts are also gaining popularity in gradient-based optimization to improve the rate of convergence in accelerated gradient schemes to deal with ill-conditioned functions. In this paper, we propose a simple warm restart technique for stochastic gradient descent to improve its anytime performance when training deep neural networks. We empirically study its performance on the CIFAR-10 and CIFAR-100 datasets, where we demonstrate new state-of-the-art results at 3.14% and 16.21%, respectively. We also demonstrate its advantages on a dataset of EEG recordings and on a downsampled version of the ImageNet dataset. Our source code is available at https://github.com/loshchil/SGDR
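A short sketch of a warm-restart schedule in the spirit described above: within each run the learning rate follows a cosine from eta_max down to eta_min, and at each restart the period is multiplied by T_mult. The specific eta values and period multiplier below are assumptions for illustration.

```python
import math

def sgdr_lr(epoch, eta_min=1e-4, eta_max=0.1, T_0=10, T_mult=2):
    """Cosine-annealed learning rate with warm restarts (SGDR-style schedule)."""
    T_i, t = T_0, epoch
    while t >= T_i:                  # walk forward to find the current restart cycle
        t -= T_i
        T_i *= T_mult
    return eta_min + 0.5 * (eta_max - eta_min) * (1 + math.cos(math.pi * t / T_i))

schedule = [round(sgdr_lr(e), 4) for e in range(70)]   # restarts at epochs 10, 30, 70, ...
```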
Viewing neural network models in terms of their loss landscapes has a long history in the statistical-mechanics approach to learning, and in recent years it has received attention within machine learning. Among other things, local metrics (such as the smoothness of the loss landscape) have been shown to correlate with global properties of the model (such as good generalization performance). Here, we perform a detailed empirical analysis of the loss-landscape structure of thousands of neural network models, systematically varying the learning task, the model architecture, and/or the quantity and quality of data. By considering a range of metrics that attempt to capture different aspects of the loss landscape, we demonstrate that the best test accuracy is obtained when the loss landscape is globally well connected, when ensembles of trained models are more similar to one another, and when models converge to locally smooth regions. We also show that globally poorly connected landscapes can arise when models are small or when they are trained on lower-quality data, and that, if the loss landscape is globally poorly connected, training to zero loss can actually lead to worse test accuracy. Our detailed empirical results shed light on phases of learning (and the consequent double-descent behavior), on fundamental versus incidental determinants of good generalization, on the roles of load-like and temperature-like parameters in the learning process, on the different influences of model and data on the loss landscape, and on the relationship between local and global metrics, all topics of recent interest.
Deep learning has achieved promising results in a wide range of AI applications. Larger datasets and models consistently yield better performance, but they also generally require longer training time, more computation, and more communication. In this survey, we aim to provide a clear sketch of optimizations for large-scale deep learning with respect to both model accuracy and model efficiency. We review the algorithms most commonly used for optimization, elaborate on the debatable topic of the generalization gap that arises in large-batch training, and survey state-of-the-art strategies for addressing communication overhead and reducing memory footprint.
In today's heavily overparameterized models, the value of the training loss provides few guarantees on model generalization ability. Indeed, optimizing only the training loss value, as is commonly done, can easily lead to suboptimal model quality. Motivated by prior work connecting the geometry of the loss landscape and generalization, we introduce a novel, effective procedure for instead simultaneously minimizing loss value and loss sharpness. In particular, our procedure, Sharpness-Aware Minimization (SAM), seeks parameters that lie in neighborhoods having uniformly low loss; this formulation results in a min-max optimization problem on which gradient descent can be performed efficiently. We present empirical results showing that SAM improves model generalization across a variety of benchmark datasets (e.g., CIFAR-{10, 100}, ImageNet, finetuning tasks) and models, yielding novel state-of-the-art performance for several. Additionally, we find that SAM natively provides robustness to label noise on par with that provided by state-of-the-art procedures that specifically target learning with noisy labels. We open source our code at https://github.com/google-research/sam.
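A compact sketch of the two-step update implied above: first ascend to the (first-order) worst-case point within a rho-ball around the current weights, then descend using the gradient taken there. The toy objective, learning rate, and rho are assumptions.

```python
import numpy as np

def sam_step(w, grad_fn, lr=0.1, rho=0.05):
    """One Sharpness-Aware Minimization step with a first-order inner maximization."""
    g = grad_fn(w)
    eps = rho * g / (np.linalg.norm(g) + 1e-12)   # ascend to the worst point in the rho-ball
    return w - lr * grad_fn(w + eps)              # descend with the gradient taken there

H = np.diag([10.0, 0.1])                          # a deliberately sharp toy quadratic
w = np.array([1.0, 1.0])
for _ in range(100):
    w = sam_step(w, lambda v: H @ v)              # gradient of 0.5 * v^T H v is H v
```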
We propose SWA-Gaussian (SWAG), a simple, scalable, and general purpose approach for uncertainty representation and calibration in deep learning. Stochastic Weight Averaging (SWA), which computes the first moment of stochastic gradient descent (SGD) iterates with a modified learning rate schedule, has recently been shown to improve generalization in deep learning. With SWAG, we fit a Gaussian using the SWA solution as the first moment and a low rank plus diagonal covariance also derived from the SGD iterates, forming an approximate posterior distribution over neural network weights; we then sample from this Gaussian distribution to perform Bayesian model averaging. We empirically find that SWAG approximates the shape of the true posterior, in accordance with results describing the stationary distribution of SGD iterates. Moreover, we demonstrate that SWAG performs well on a wide variety of tasks, including out of sample detection, calibration, and transfer learning, in comparison to many popular alternatives including MC dropout, KFAC Laplace, SGLD, and temperature scaling.
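A hedged sketch of the moment collection and sampling described above: a running SWA mean gives the first moment, a running mean of squared weights gives the diagonal part of the covariance, the last K deviations form the low-rank part, and samples are drawn from the resulting Gaussian for Bayesian model averaging. The toy trajectory and the value of K are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
dim, K, n_collect = 30, 5, 40
deviations, mean, sq_mean = [], np.zeros(dim), np.zeros(dim)

w = rng.standard_normal(dim)
for i in range(1, n_collect + 1):
    w -= 0.1 * (w + 0.5 * rng.standard_normal(dim))    # placeholder SGD iterate
    mean += (w - mean) / i                              # SWA first moment
    sq_mean += (w ** 2 - sq_mean) / i                   # running second moment
    deviations.append(w - mean)
D = np.stack(deviations[-K:], axis=1)                   # low-rank deviation matrix (dim x K)
diag = np.maximum(sq_mean - mean ** 2, 1e-8)            # diagonal covariance part

def swag_sample():
    """Draw one weight sample from the diagonal-plus-low-rank Gaussian."""
    z1, z2 = rng.standard_normal(dim), rng.standard_normal(K)
    return mean + np.sqrt(diag / 2) * z1 + D @ z2 / np.sqrt(2 * (K - 1))

samples = [swag_sample() for _ in range(10)]            # use for Bayesian model averaging
```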
Real-world datasets exhibit imbalances of varying types and degrees. Several techniques based on re-weighting and margin adjustment of loss are often used to enhance the performance of neural networks, particularly on minority classes. In this work, we analyze the class-imbalanced learning problem by examining the loss landscape of neural networks trained with re-weighting and margin-based techniques. Specifically, we examine the spectral density of the Hessian of the class-wise loss, through which we observe that the network weights converge to a saddle point in the loss landscapes of minority classes. Following this observation, we also find that optimization methods designed to escape from saddle points can be effectively used to improve generalization on minority classes. We further theoretically and empirically demonstrate that Sharpness-Aware Minimization (SAM), a recent technique that encourages convergence to flat minima, can be effectively used to escape saddle points for minority classes. Using SAM results in a 6.2% increase in accuracy on the minority classes over the state-of-the-art Vector Scaling Loss, leading to an overall average increase of 4% across imbalanced datasets. The code is available at: https://github.com/val-iisc/Saddle-LongTail.
This paper proposes a new optimization algorithm called Entropy-SGD for training deep neural networks that is motivated by the local geometry of the energy landscape. Local extrema with low generalization error have a large proportion of almost-zero eigenvalues in the Hessian with very few positive or negative eigenvalues. We leverage upon this observation to construct a local-entropy-based objective function that favors well-generalizable solutions lying in large flat regions of the energy landscape, while avoiding poorly-generalizable solutions located in the sharp valleys. Conceptually, our algorithm resembles two nested loops of SGD where we use Langevin dynamics in the inner loop to compute the gradient of the local entropy before each update of the weights. We show that the new objective has a smoother energy landscape and show improved generalization over SGD using uniform stability, under certain assumptions. Our experiments on convolutional and recurrent networks demonstrate that Entropy-SGD compares favorably to state-of-the-art techniques in terms of generalization error and training time.
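A minimal sketch of the nested structure described above: an inner Langevin (SGLD) loop estimates the local-entropy gradient around the current weights, and the outer loop moves the weights toward that estimate. The toy loss, scoping parameter gamma, averaging factor, and noise level are assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

def grad_loss(w):
    """Gradient of a placeholder non-convex loss, with mini-batch-style noise."""
    return w ** 3 - w + 0.1 * rng.standard_normal(w.shape)

def entropy_sgd(w, outer=100, L=20, lr=0.1, inner_lr=0.05, gamma=0.1, eps=1e-3):
    """Entropy-SGD: the inner Langevin loop estimates the local-entropy gradient."""
    for _ in range(outer):
        x, mu = w.copy(), w.copy()
        for _ in range(L):                                 # inner SGLD loop
            g = grad_loss(x) + gamma * (x - w)             # tethered to the current weights
            x -= inner_lr * g + np.sqrt(inner_lr) * eps * rng.standard_normal(w.shape)
            mu = 0.75 * mu + 0.25 * x                      # running average of inner iterates
        w -= lr * gamma * (w - mu)                         # local-entropy gradient ~ gamma*(w - mu)
    return w

w = entropy_sgd(rng.standard_normal(8))
```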
This work examines the deep disconnect between existing theoretical analyses of gradient-based algorithms and the practice of training deep neural networks. Specifically, we provide numerical evidence that in large-scale neural network training (e.g., ImageNet + ResNet101 and WT103 + TransformerXL models), the network weights do not converge to stationary points where the gradient of the loss is zero. Remarkably, however, we observe that even though the weights do not converge to stationary points, progress in minimizing the loss function halts and the training loss stabilizes. Inspired by this observation, we propose a new perspective based on the ergodic theory of dynamical systems to explain it. Rather than studying the evolution of the weights, we study the evolution of the distribution of the weights. We prove convergence of the weight distribution to an approximate invariant measure, thereby explaining how the training loss can stabilize without the weights converging to a stationary point. We further discuss how this perspective better aligns optimization theory with empirical observations in machine-learning practice.
Training neural networks with batch normalization and weight decay has become a common practice in recent years. In this work, we show that their combined use may result in a surprising periodic behavior of the optimization dynamics: the training process periodically destabilizes, yet this does not lead to complete divergence but instead starts a new period of training. We rigorously investigate the mechanism underlying the discovered periodic behavior from both empirical and theoretical points of view and analyze the conditions under which it occurs in practice. We also demonstrate that the periodic behavior can be regarded as generalizing two previously opposing perspectives on training with batch normalization and weight decay, namely the equilibrium presumption and the instability presumption.
Distributed deep learning (DDL) is essential for large-scale deep learning (DL) training. Synchronous stochastic gradient descent (SSGD) is the de facto DDL optimization method. Using a sufficiently large batch size is critical to achieving DDL runtime speedup, and in the large-batch setting the learning rate must be increased to compensate for the reduced number of parameter updates. However, a large learning rate can harm convergence in SSGD, and training can easily diverge. Recently, decentralized parallel SGD (DPSGD) has been proposed to improve distributed training speed. In this paper, we find that DPSGD not only has a system-wide runtime benefit but also a significant convergence benefit over SSGD in the large-batch setting. Based on a detailed analysis of DPSGD's learning dynamics, we find that DPSGD introduces additional landscape-dependent noise that automatically adjusts the effective learning rate to improve convergence. Moreover, we theoretically show that this noise smooths the loss landscape and thus allows a larger learning rate. We conduct extensive studies over 18 state-of-the-art DL models and tasks and demonstrate that DPSGD often converges in cases where SSGD diverges under large learning rates in the large-batch setting. Our findings hold consistently across two application domains, computer vision (CIFAR10 and ImageNet-1K) and automatic speech recognition (SWB300 and SWB2000), and two types of neural network models: convolutional neural networks and long short-term memory recurrent neural networks.
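A hedged sketch of one decentralized parallel SGD step on a ring of workers: each worker takes a local gradient step and then averages its weights with its ring neighbors rather than synchronizing globally. The toy per-worker loss, the ring topology, and the uniform mixing weights are assumptions, not the paper's exact setup.

```python
import numpy as np

rng = np.random.default_rng(5)
n_workers, dim, lr = 4, 10, 0.1
targets = rng.standard_normal((n_workers, dim))         # each worker's local data proxy
weights = [rng.standard_normal(dim) for _ in range(n_workers)]

for step in range(200):
    grads = [w - t + 0.1 * rng.standard_normal(dim)      # noisy local gradient
             for w, t in zip(weights, targets)]
    locals_ = [w - lr * g for w, g in zip(weights, grads)]
    # Gossip averaging with ring neighbors (no global all-reduce).
    weights = [(locals_[i - 1] + locals_[i] + locals_[(i + 1) % n_workers]) / 3
               for i in range(n_workers)]
```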
We study the effect of mini-batching on the loss landscape of deep neural networks using spiked, field-dependent random matrix theory. We show that the magnitudes of the extremal eigenvalues of the batch Hessian are larger than those of the empirical Hessian, and we obtain similar results for the generalized Gauss-Newton matrix approximation of the Hessian. As a consequence of our theorems, we derive an analytical expression for the maximal learning rate as a function of batch size, informing practical training regimens for both stochastic gradient descent (linear scaling) and adaptive algorithms such as Adam (square-root scaling) on smooth, non-convex deep neural networks. While the linear scaling rule for stochastic gradient descent has been derived before under more restrictive conditions, which we generalize, the square-root scaling rule for adaptive optimizers is, to our knowledge, completely novel. For stochastic second-order methods and adaptive methods, we derive that the minimal damping coefficient is proportional to the ratio of the learning rate to the batch size. We validate our claims on the VGG/WideResNet architectures on the CIFAR-100 and ImageNet datasets. Based on our investigations of the sub-sampled Hessian, we develop a stochastic Lanczos quadrature based on-the-fly learning-rate and momentum learner, which avoids the need for expensive multiple evaluations of these key hyperparameters and shows promising preliminary results on the pre-residual architecture for CIFAR-100.
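A tiny worked example of the two scaling rules stated above, under the assumption that a reference learning rate has already been tuned at a reference batch size: linear scaling for SGD and square-root scaling for adaptive optimizers such as Adam. The concrete numbers are illustrative.

```python
def scaled_lr(base_lr, base_batch, new_batch, rule):
    """Scale a tuned learning rate to a new batch size (linear for SGD, sqrt for Adam)."""
    ratio = new_batch / base_batch
    if rule == "linear":        # SGD-style linear scaling
        return base_lr * ratio
    if rule == "sqrt":          # Adam-style square-root scaling
        return base_lr * ratio ** 0.5
    raise ValueError(rule)

print(scaled_lr(0.1, 128, 1024, "linear"))   # 0.8
print(scaled_lr(0.001, 128, 1024, "sqrt"))   # ~0.00283
```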
L2 regularization and weight decay regularization are equivalent for standard stochastic gradient descent (when rescaled by the learning rate), but as we demonstrate, this is not the case for adaptive gradient algorithms such as Adam. While common implementations of these algorithms employ L2 regularization (often calling it "weight decay" in what may be misleading due to the inequivalence we expose), we propose a simple modification to recover the original formulation of weight decay regularization by decoupling the weight decay from the optimization steps taken w.r.t. the loss function. We provide empirical evidence that our proposed modification (i) decouples the optimal choice of weight decay factor from the setting of the learning rate for both standard SGD and Adam and (ii) substantially improves Adam's generalization performance, allowing it to compete with SGD with momentum on image classification datasets (on which it was previously typically outperformed by the latter). Our proposed decoupled weight decay has already been adopted by many researchers, and the community has implemented it in TensorFlow and PyTorch; the complete source code for our experiments is publicly available.
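A minimal sketch contrasting the two formulations discussed above within a single Adam step: L2 regularization adds lambda*w to the gradient before the moment updates, whereas decoupled weight decay shrinks the weights directly in the update, independently of the adaptive scaling. Hyperparameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)

def adam_step(w, g, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8,
              wd=1e-2, decoupled=True):
    """One Adam step with either L2 regularization or decoupled weight decay."""
    if not decoupled:
        g = g + wd * w                      # L2: the decay term enters the adaptive moments
    m = b1 * m + (1 - b1) * g
    v = b2 * v + (1 - b2) * g ** 2
    m_hat = m / (1 - b1 ** t)
    v_hat = v / (1 - b2 ** t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    if decoupled:
        w = w - lr * wd * w                 # decoupled: decay applied directly to the weights
    return w, m, v

w, m, v = np.ones(5), np.zeros(5), np.zeros(5)
for t in range(1, 101):
    w, m, v = adam_step(w, rng.standard_normal(5), m, v, t)
```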
It is important to understand how popular regularization methods help neural network training find solutions that generalize well. In this work, we theoretically derive the implicit regularization of dropout and study the relationship between the Hessian matrix of the loss function and the covariance matrix of the dropout noise, supported by a series of experiments. We then numerically study two implications of dropout's implicit regularization, which intuitively rationalize why dropout helps generalization. First, we find in experiments that training with dropout finds neural networks with flatter minima than standard gradient-descent training, and that the implicit regularization is the key to finding flat solutions. Second, when trained with dropout, the input weights of hidden neurons (the input weight of a hidden neuron consists of the weights from the input layer to that neuron together with its bias term) tend to condense onto isolated orientations. Condensation is a feature of the nonlinear learning process and keeps the complexity of the neural network low. Although our theory mainly focuses on dropout applied to the last hidden layer, our experiments cover general dropout in training neural networks. This work points out a distinct characteristic of dropout compared with stochastic gradient descent and provides an important basis for fully understanding dropout.
Learning rate schedulers have been widely adopted in training deep neural networks. Despite their practical importance, there is a discrepancy between their practice and their theoretical analysis. For instance, it is not known which schedules of SGD achieve the best convergence, even for simple problems such as optimizing quadratic objectives. In this paper, we propose Eigencurve, the first family of learning rate schedules that can achieve minimax-optimal convergence rates (up to a constant) for SGD on quadratic objectives when the eigenvalue distribution of the underlying Hessian matrix is skewed. This condition is quite common in practice. Experimental results show that Eigencurve can significantly outperform step decay in image-classification tasks on CIFAR-10, especially when the number of epochs is small. Moreover, the theory inspires two simple learning rate schedulers for practical applications that approximate Eigencurve. For some problems, the optimal shape of the proposed schedulers resembles that of cosine decay, which sheds light on the effectiveness of cosine decay in such situations. In other situations, the proposed schedulers are superior to cosine decay.