In this paper, we introduce a novel optimization algorithm for machine learning model training called Normalized Stochastic Gradient Descent (NSGD), inspired by Normalized Least Mean Squares (NLMS) from adaptive filtering. When training a high-complexity model on a large dataset, the learning rate is critically important, as a poor choice of optimizer parameters can lead to divergence. The algorithm updates the network weights using the stochastic gradient, but with $\ell_1$- and $\ell_2$-based normalizations of the learning rate parameter, similar to the NLMS algorithm. The main difference from existing normalization methods is that we do not include the error term in the normalization process; instead, we normalize the update term using the input vector to the neuron. Our experiments show that models can be trained to a higher accuracy level under different initial settings using our optimization algorithm. We demonstrate the efficiency of our training algorithm using ResNet-20 and a toy neural network on different benchmark datasets with different initializations. NSGD improves the accuracy of ResNet-20 from 91.96\% to 92.20\% on the CIFAR-10 dataset.
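A minimal sketch of what such an update might look like for a single layer, assuming an NLMS-style normalization by the input norm with a small epsilon for stability; the function name, interface, and exact formula are assumptions, since the abstract does not spell them out:

```python
import torch

def nsgd_step(weight, grad, layer_input, lr=0.1, eps=1e-8, norm="l2"):
    """Hypothetical single-layer NSGD update: the step size is normalized by the
    l1 or l2 norm of the input vector to the neuron (the NLMS analogy); the
    error term is deliberately not part of the normalization."""
    if norm == "l2":
        scale = eps + layer_input.pow(2).sum()   # ||x||_2^2 (assumed), as in NLMS
    else:
        scale = eps + layer_input.abs().sum()    # ||x||_1 variant (assumed)
    return weight - (lr / scale) * grad
```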
Optimization is often a deterministic problem where a solution is found through some iterative procedure such as gradient descent. However, when training neural networks, the loss function changes over (iteration) time due to the randomized selection of a subset of samples. This randomization turns the optimization problem into a stochastic one. We propose to consider the loss as a noisy observation with respect to some reference optimum. This interpretation of the loss allows us to adopt Kalman filtering as an optimizer, as its recursive formulation is designed to estimate unknown parameters from noisy measurements. Moreover, we show that the Kalman filter dynamical model for the evolution of the unknown parameters can be used to capture the gradient dynamics of advanced methods such as momentum and Adam. We call this stochastic optimization method KOALA, which stands for Kalman Optimization Algorithm with Loss Adaptivity. KOALA is an easy-to-implement, scalable, and efficient method for training neural networks. We provide a convergence analysis and show through experiments that it yields parameter estimates on par with existing state-of-the-art optimization algorithms across several neural network architectures and machine learning tasks, such as computer vision and language modeling.
We present weight normalization: a reparameterization of the weight vectors in a neural network that decouples the length of those weight vectors from their direction. By reparameterizing the weights in this way we improve the conditioning of the optimization problem and we speed up convergence of stochastic gradient descent. Our reparameterization is inspired by batch normalization but does not introduce any dependencies between the examples in a minibatch. This means that our method can also be applied successfully to recurrent models such as LSTMs and to noise-sensitive applications such as deep reinforcement learning or generative models, for which batch normalization is less well suited. Although our method is much simpler, it still provides much of the speed-up of full batch normalization. In addition, the computational overhead of our method is lower, permitting more optimization steps to be taken in the same amount of time. We demonstrate the usefulness of our method on applications in supervised image recognition, generative modelling, and deep reinforcement learning.
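A minimal PyTorch-style sketch of this reparameterization for a fully connected layer; the per-row norm, the 0.05 initialization scale, and the class name are illustrative choices, not the paper's exact recipe:

```python
import torch

class WeightNormLinear(torch.nn.Module):
    """Sketch of weight normalization, w = g * v / ||v||: the length g of each
    weight vector is decoupled from its direction v, and both are learned."""
    def __init__(self, in_features, out_features):
        super().__init__()
        self.v = torch.nn.Parameter(0.05 * torch.randn(out_features, in_features))
        self.g = torch.nn.Parameter(torch.ones(out_features))
        self.b = torch.nn.Parameter(torch.zeros(out_features))

    def forward(self, x):
        # normalize each row of v to unit length, then rescale by g
        w = self.g.unsqueeze(1) * self.v / self.v.norm(dim=1, keepdim=True)
        return torch.nn.functional.linear(x, w, self.b)
```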
The learning rate warmup heuristic achieves remarkable success in stabilizing training, accelerating convergence and improving generalization for adaptive stochastic optimization algorithms like RMSprop and Adam. Pursuing the theory behind warmup, we identify a problem of the adaptive learning rate: its variance is problematically large in the early stage, and we presume warmup works as a variance reduction technique. We provide both empirical and theoretical evidence to verify our hypothesis. We further propose Rectified Adam (RAdam), a novel variant of Adam, by introducing a term to rectify the variance of the adaptive learning rate. Experimental results on image classification, language modeling, and neural machine translation verify our intuition and demonstrate the efficacy and robustness of RAdam.
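The variance rectification can be sketched as follows; the helper name and interface are mine, and only the rectifier itself is shown (the surrounding Adam bookkeeping is omitted):

```python
import math

def radam_rectifier(step, beta2=0.999):
    """Sketch of RAdam's rectification term: when the approximated length of
    the exponential moving average (rho_t) is small in early steps, the
    adaptive learning rate is switched off; otherwise the step is scaled by r_t."""
    rho_inf = 2.0 / (1.0 - beta2) - 1.0
    rho_t = rho_inf - 2.0 * step * beta2 ** step / (1.0 - beta2 ** step)
    if rho_t <= 4.0:          # variance of the adaptive lr is not yet tractable
        return False, 1.0
    r_t = math.sqrt(((rho_t - 4) * (rho_t - 2) * rho_inf)
                    / ((rho_inf - 4) * (rho_inf - 2) * rho_t))
    return True, r_t
```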
The vast majority of successful deep neural networks are trained using variants of stochastic gradient descent (SGD) algorithms. Recent attempts to improve SGD can be broadly categorized into two approaches: (1) adaptive learning rate schemes, such as AdaGrad and Adam, and (2) accelerated schemes, such as heavy-ball and Nesterov momentum. In this paper, we propose a new optimization algorithm, Lookahead, that is orthogonal to these previous approaches and iteratively updates two sets of weights. Intuitively, the algorithm chooses a search direction by looking ahead at the sequence of "fast weights" generated by another optimizer. We show that Lookahead improves the learning stability and lowers the variance of its inner optimizer with negligible computation and memory cost. We empirically demonstrate Lookahead can significantly improve the performance of SGD and Adam, even with their default hyperparameter settings on ImageNet, CIFAR-10/100, neural machine translation, and Penn Treebank.
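The outer ("slow") update can be sketched as a simple interpolation performed every k inner-optimizer steps; the sketch below assumes PyTorch tensors, and alpha = 0.5 is only the commonly cited default:

```python
def lookahead_outer_step(slow_weights, fast_weights, alpha=0.5):
    """Sketch of Lookahead's outer update: after k steps of the inner optimizer
    on the fast weights, interpolate the slow weights toward them and reset the
    fast weights to the new slow weights."""
    for slow, fast in zip(slow_weights, fast_weights):
        slow.data.add_(alpha * (fast.data - slow.data))  # phi <- phi + alpha*(theta - phi)
        fast.data.copy_(slow.data)
```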
Innovations in neural architectures have fostered significant breakthroughs in language modeling and computer vision. Unfortunately, novel architectures often result in challenging hyperparameter choices and training instability if the network parameters are not properly initialized. A number of architecture-specific initialization schemes have been proposed, but these schemes are not always portable to new architectures. This paper presents GradInit, an automated and architecture-agnostic method for initializing neural networks. GradInit is based on a simple heuristic: the norm of each network layer is adjusted so that a single step of SGD or Adam with prescribed hyperparameters results in the smallest possible loss value. This adjustment is done by introducing a scalar multiplier variable in front of each block of parameters and then optimizing these variables using a simple numerical scheme. GradInit accelerates the convergence and improves the test performance of many convolutional architectures, both with and without skip connections, and even without normalization layers. It also improves the stability of the original Transformer architecture for machine translation, enabling training with Adam or SGD under a wide range of learning rates and momentum coefficients without learning rate warmup. Code is available at https://github.com/zhuchen03/gradinit.
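The reparameterization at the core of GradInit can be pictured as a learnable scalar in front of each parameter block; the sketch below scales a block's output rather than its weights, which is a simplification, and the inner optimization of the scalars (a single prescribed SGD/Adam step plus norm constraints) is omitted:

```python
import torch

class ScaledBlock(torch.nn.Module):
    """Simplified sketch of GradInit's scalar multipliers: one learnable scale
    per block, tuned before regular training so that a single prescribed
    optimizer step yields the smallest possible loss (tuning loop not shown)."""
    def __init__(self, block):
        super().__init__()
        self.block = block
        self.scale = torch.nn.Parameter(torch.ones(()))

    def forward(self, x):
        return self.scale * self.block(x)
```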
Deep neural networks (DNNs) are currently predominantly trained using first-order methods. Some of these methods (e.g., Adam, AdaGrad, RMSprop, and their variants) precondition the stochastic gradient using a diagonal matrix. Recently, effective second-order methods, such as KFAC, K-BFGS, Shampoo, and TNT, have been developed that precondition the stochastic gradient with layer-wise block-diagonal matrices. Here, we propose an adaptive "Mini-Block Fisher (MBF)" preconditioned method that lies in between these two classes of methods. Specifically, our method uses a block-diagonal approximation to the empirical Fisher matrix, where for each layer in the DNN, whether it is convolutional or feed-forward and fully connected, the associated diagonal block is itself block-diagonal and is composed of a large number of mini-blocks of modest size. Our new method exploits the parallelism of GPUs to efficiently perform computations on the large number of matrices in each layer. Consequently, MBF's per-iteration computational cost is only slightly higher than that of first-order methods. The performance of our proposed method is compared to several baseline methods on autoencoder and CNN problems to validate its effectiveness in terms of both time efficiency and generalization power. Finally, an idealized version of MBF is proved to converge linearly.
Deep learning has achieved promising results in a wide range of AI applications. Larger datasets and models consistently yield better performance; however, they also require longer training time with more computation and communication. In this survey, we aim to provide a clear sketch of the optimization of large-scale deep learning with regard to both model accuracy and model efficiency. We investigate the algorithms most commonly used for optimization, elaborate on the debatable topic of the generalization gap that arises in large-batch training, and review the SOTA strategies for addressing communication overhead and reducing memory footprint.
Training Deep Neural Networks is complicated by the fact that the distribution of each layer's inputs changes during training, as the parameters of the previous layers change. This slows down the training by requiring lower learning rates and careful parameter initialization, and makes it notoriously hard to train models with saturating nonlinearities. We refer to this phenomenon as internal covariate shift, and address the problem by normalizing layer inputs. Our method draws its strength from making normalization a part of the model architecture and performing the normalization for each training mini-batch. Batch Normalization allows us to use much higher learning rates and be less careful about initialization. It also acts as a regularizer, in some cases eliminating the need for Dropout. Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin. Using an ensemble of batch-normalized networks, we improve upon the best published result on ImageNet classification: reaching 4.9% top-5 validation error (and 4.8% test error), exceeding the accuracy of human raters.
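For reference, the training-time transform can be sketched for a 2-D (N, C) activation as follows; the running-statistics bookkeeping for inference is shown in a simplified form:

```python
import torch

def batch_norm_train(x, gamma, beta, running_mean, running_var,
                     momentum=0.1, eps=1e-5):
    """Minimal batch normalization sketch for an (N, C) activation at training
    time: normalize each feature by its mini-batch mean/variance, then apply
    the learned scale (gamma) and shift (beta)."""
    mean = x.mean(dim=0)
    var = x.var(dim=0, unbiased=False)
    x_hat = (x - mean) / torch.sqrt(var + eps)
    # simplified running statistics, used later at inference time
    running_mean.mul_(1 - momentum).add_(momentum * mean.detach())
    running_var.mul_(1 - momentum).add_(momentum * var.detach())
    return gamma * x_hat + beta
```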
A fundamental property of deep learning normalization techniques, such as batch normalization, is making the pre-normalization parameters scale-invariant. The intrinsic domain of such parameters is the unit sphere, and therefore their gradient optimization dynamics can be represented via spherical optimization with a varying effective learning rate (ELR), which was studied previously. In this work, we investigate the properties of training scale-invariant neural networks directly on the sphere using a fixed ELR. We discover three regimes of such training depending on the ELR value: convergence, chaotic equilibrium, and divergence. We study these regimes in detail both through a theoretical examination of a toy example and through a thorough empirical analysis of real scale-invariant deep learning models. Each regime has unique features and reflects specific properties of the intrinsic loss landscape, some of which have strong parallels with previous research on both regular and scale-invariant neural network training. Finally, we demonstrate how the discovered regimes are reflected in the conventional training of normalized networks and how they can be leveraged to achieve better optima.
The architecture and the parameters of neural networks are often optimized independently, which requires costly retraining of the parameters whenever the architecture is modified. In this work, we focus on growing the architecture without requiring costly retraining. We present a method that adds new neurons during training without impacting what is already learned, while improving the training dynamics. We achieve the latter by maximizing the gradients of the new weights and find the optimal initialization efficiently by means of the singular value decomposition (SVD). We call this technique Gradient Maximizing Growth (GradMax) and demonstrate its effectiveness on a variety of vision tasks and architectures.
Because sparse neural networks usually contain many zero weights, these unnecessary network connections can potentially be eliminated without degrading network performance. Therefore, well-designed sparse neural networks have the potential to significantly reduce FLOPs and computational resources. In this work, we propose a new automatic pruning method: Sparse Connectivity Learning (SCL). Specifically, a weight is re-parameterized as an element-wise multiplication of a trainable weight variable and a binary mask. The network connectivity is thus fully described by the binary mask, which is modulated by a unit step function. We theoretically prove the fundamental principle of using a straight-through estimator (STE) for network pruning: the proxy gradients of the STE should be positive, ensuring that the mask variables converge at their minima. After finding that the leaky ReLU, softplus, and identity STEs satisfy this principle, we propose to adopt the identity STE in SCL for discrete mask relaxation. We find that the mask gradients of different features are very unbalanced, and hence propose to normalize the mask gradients of each feature to optimize mask variable training. To train sparse masks automatically, we include the total number of network connections as a regularization term in our objective function. As SCL does not require pruning criteria or hyperparameters defined by designers for network layers, the network is explored in a larger hypothesis space to achieve optimized sparse connectivity for the best performance. SCL overcomes the limitations of existing automatic pruning methods. Experimental results demonstrate that SCL can automatically learn and select important network connections for various baseline network structures. Deep learning models trained by SCL outperform SOTA human-designed and automatic pruning methods in sparsity, accuracy, and FLOPs reduction.
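The mask reparameterization described above can be sketched as follows; the per-feature normalization of mask gradients and the connection-count regularizer are omitted, and the class and attribute names are illustrative:

```python
import torch

class MaskedWeight(torch.nn.Module):
    """Sketch of SCL's reparameterization: effective weight = trainable weight
    * step(mask variable), with an identity straight-through estimator (STE)
    so gradients reach the mask variable through the non-differentiable step."""
    def __init__(self, shape):
        super().__init__()
        self.weight = torch.nn.Parameter(0.05 * torch.randn(shape))
        self.mask_var = torch.nn.Parameter(torch.zeros(shape))

    def effective_weight(self):
        hard = (self.mask_var >= 0).float()                    # unit step function
        mask = hard + self.mask_var - self.mask_var.detach()   # identity STE trick
        return self.weight * mask
```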
Deep Learning optimization involves minimizing a high-dimensional loss function in the weight space, which is often perceived as difficult due to inherent obstacles such as saddle points, local minima, ill-conditioning of the Hessian, and limited compute resources. In this paper, we provide a comprehensive review of 12 standard optimization methods successfully used in deep learning research and a theoretical assessment of the difficulties in numerical optimization from the optimization literature.
Weight decay is often used in practice to ensure good generalization when training deep neural networks with batch normalization (BN-DNNs), where some convolution layers are invariant to weight rescaling due to the normalization. In this paper, we demonstrate that the practical usage of weight decay still has unresolved problems despite existing theoretical work explaining its effect in BN-DNNs. On the one hand, with non-adaptive learning rates, e.g., SGD with momentum, the effective learning rate keeps increasing even beyond the initial training stage, which leads to an overfitting effect in many neural architectures. On the other hand, in both SGDM and adaptive learning rate optimizers, e.g., Adam, the effect of weight decay on generalization is quite sensitive to the hyperparameter, so finding an optimal weight decay parameter requires an extensive parameter search. To address these weaknesses, we propose to regularize the weight norm using a simple yet effective weight rescaling (WRS) scheme as an alternative to weight decay. WRS controls the weight norm by explicitly rescaling it to the unit norm, which prevents a large increase of the gradient while also ensuring a sufficiently large effective learning rate to improve generalization. On a variety of computer vision applications, including image classification, object detection, semantic segmentation, and crowd counting, we show the effectiveness and robustness of WRS compared with weight decay, implicit weight rescaling (weight standardization), and gradient projection (AdamP).
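A bare-bones sketch of the rescaling step, assuming it is applied after each optimizer update to multi-dimensional weights only and that the norm is taken per output filter (both of which are assumptions on my part):

```python
import torch

@torch.no_grad()
def weight_rescaling(model):
    """Sketch of WRS: explicitly rescale each (assumed scale-invariant)
    convolution/linear weight back to unit norm instead of applying weight
    decay, keeping the effective learning rate from drifting."""
    for p in model.parameters():
        if p.dim() > 1:                                   # skip biases / BN params
            flat = p.data.view(p.size(0), -1)             # one row per output filter
            flat.div_(flat.norm(dim=1, keepdim=True).clamp_min(1e-12))
```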
Despite the vast empirical success of neural networks, theoretical understanding of the training procedure remains limited, especially in providing guarantees on test performance due to the non-convex nature of the underlying optimization problem. The current paper investigates an alternative approach to neural network training by reducing it to another problem with convex structure -- solving a monotone variational inequality (MVI) -- inspired by the recent work of (Juditsky & Nemirovsky, 2019). A solution to the MVI can be found by computationally efficient procedures and, importantly, this leads to performance guarantees of $\ell_2$ and $\ell_{\infty}$ bounds on model recovery and prediction accuracy for layer-wise linear neural networks. Furthermore, we study the use of MVI for training multi-layer neural networks and propose a practical algorithm called \textit{stochastic variational inequality} (SVI), demonstrating its applicability in training fully connected neural networks and graph neural networks (GNNs) (SVI is completely general and can be used to train other types of neural networks). We demonstrate the competitive or better performance of SVI compared to the widely used stochastic gradient descent methods on both synthetic and real network data prediction tasks with regard to various performance metrics, especially its improved efficiency in the early stages of training.
$L_2$ regularization and weight decay regularization are equivalent for standard stochastic gradient descent (when rescaled by the learning rate), but as we demonstrate, this is not the case for adaptive gradient algorithms, such as Adam. While common implementations of these algorithms employ $L_2$ regularization (often calling it "weight decay" in what may be misleading due to the inequivalence we expose), we propose a simple modification to recover the original formulation of weight decay regularization by decoupling the weight decay from the optimization steps taken w.r.t. the loss function. We provide empirical evidence that our proposed modification (i) decouples the optimal choice of weight decay factor from the setting of the learning rate for both standard SGD and Adam and (ii) substantially improves Adam's generalization performance, allowing it to compete with SGD with momentum on image classification datasets (on which it was previously typically outperformed by the latter). Our proposed decoupled weight decay has already been adopted by many researchers, and the community has implemented it in TensorFlow and PyTorch; the complete source code for our experiments is available.
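A minimal sketch of one decoupled update in the PyTorch-style convention (the decay is scaled by the learning rate and applied directly to the weights, outside the moment estimates); the function interface and state layout are mine:

```python
import torch

def adamw_step(p, grad, state, lr=1e-3, betas=(0.9, 0.999),
               eps=1e-8, weight_decay=1e-2):
    """Sketch of decoupled weight decay: the decay acts directly on the weights
    instead of being added to the gradient as an L2 term fed into Adam's moments."""
    beta1, beta2 = betas
    state["step"] += 1
    state["m"].mul_(beta1).add_((1 - beta1) * grad)
    state["v"].mul_(beta2).add_((1 - beta2) * grad * grad)
    m_hat = state["m"] / (1 - beta1 ** state["step"])
    v_hat = state["v"] / (1 - beta2 ** state["step"])
    p.data.mul_(1 - lr * weight_decay)                 # decoupled decay term
    p.data.add_(-lr * m_hat / (v_hat.sqrt() + eps))    # Adam step on the loss gradient
```

Here `state` would start as `{"step": 0, "m": torch.zeros_like(p), "v": torch.zeros_like(p)}`; the only point of the sketch is the placement of the decay term outside the moment estimates.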
We study the effect of mini-batching on the loss landscape of deep neural networks using spiked, field-dependent random matrix theory. We show that the magnitude of the extremal values of the batch Hessian is larger than that of the empirical Hessian. We also derive similar results for the generalized Gauss-Newton matrix approximation of the Hessian. As a consequence of our theorems, we derive an analytical expression for the maximal learning rate as a function of batch size, informing practical training regimens for both stochastic gradient descent (linear scaling) and adaptive algorithms, such as Adam (square-root scaling), for smooth, non-convex deep neural networks. While the linear scaling for stochastic gradient descent has been derived under more restrictive conditions, which we generalize, the square-root scaling rule for adaptive optimizers is, to our knowledge, completely novel. For stochastic second-order and adaptive methods, we derive that the minimal damping coefficient is proportional to the ratio of the learning rate to the batch size. We validate our claims on VGG/WideResNet architectures on the CIFAR-100 and ImageNet datasets. Based on our investigation of the sub-sampled Hessian, we develop a stochastic Lanczos quadrature based on-the-fly learning rate and momentum learner, which avoids the need for expensive multiple evaluations of these key hyperparameters and shows good preliminary results on a pre-residual architecture for CIFAR-100.
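As a worked illustration of the two scaling rules, assuming a known-good (base_lr, base_batch) pair; the constants in the usage comment are placeholders, not values from the paper:

```python
def scaled_max_lr(base_lr, base_batch, new_batch, optimizer="sgd"):
    """Illustrative application of the derived rules: the maximal learning rate
    scales linearly with batch size for SGD and with its square root for
    adaptive methods such as Adam."""
    ratio = new_batch / base_batch
    return base_lr * ratio if optimizer == "sgd" else base_lr * ratio ** 0.5

# e.g. a setting tuned at batch 128 moved to batch 512:
# scaled_max_lr(0.1, 128, 512, "sgd")   -> 0.4    (linear scaling)
# scaled_max_lr(1e-3, 128, 512, "adam") -> 2e-3   (square-root scaling)
```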
The properties of flat minima in the empirical risk landscape of neural networks have been debated for some time. Increasing evidence suggests that they possess better generalization capabilities with respect to sharp ones. First, we discuss a Gaussian mixture classification model and show analytically that there exist Bayes-optimal pointwise estimators that correspond to minimizers belonging to wide flat regions. These estimators can be found by applying maximum-flatness algorithms either directly on the classifier (which is norm independent) or on the differentiable loss function used in learning. Next, we extend the analysis to the deep learning scenario via extensive numerical validations. Using two algorithms, Entropy-SGD and Replicated-SGD, which explicitly include in the optimization objective a non-local flatness measure known as the local entropy, we consistently improve the generalization error for common architectures (e.g., ResNet, EfficientNet). An easily computable flatness measure shows a clear correlation with test accuracy.
Adam is one of the most influential adaptive stochastic algorithms for training deep neural networks, yet it has been pointed out to be divergent even in simple convex settings. Many attempts, such as decreasing the adaptive learning rate, adopting a large batch size, incorporating temporal decorrelation techniques, seeking analogous surrogates, \textit{etc.}, have been made to promote the convergence of Adam-type algorithms. In contrast with existing approaches, we introduce an alternative easy-to-check sufficient condition, which depends only on the parameters of the base learning rate and combinations of the historical second-order moments, to guarantee the global convergence of generic Adam for solving large-scale non-convex stochastic optimization. This observation, coupled with the sufficient condition, yields a much deeper interpretation of the divergence of Adam. On the other hand, in practice, mini-batch Adam and distributed Adam are widely used without any theoretical guarantee. We further analyze how the batch size and the number of nodes in a distributed system affect the convergence of Adam, showing theoretically that mini-batch and distributed Adam can be linearly accelerated by using a larger mini-batch size or a larger number of nodes. Finally, we apply generic Adam and mini-batch Adam with the sufficient condition to solve the counterexample and to train several neural networks on various real-world datasets. The experimental results are fully in accord with our theoretical analysis.
We introduce Bootstrap Your Own Latent (BYOL), a new approach to self-supervised image representation learning. BYOL relies on two neural networks, referred to as online and target networks, that interact and learn from each other. From an augmented view of an image, we train the online network to predict the target network representation of the same image under a different augmented view. At the same time, we update the target network with a slow-moving average of the online network. While state-of-the-art methods rely on negative pairs, BYOL achieves a new state of the art without them. BYOL reaches 74.3% top-1 classification accuracy on ImageNet using a linear evaluation with a ResNet-50 architecture and 79.6% with a larger ResNet. We show that BYOL performs on par or better than the current state of the art on both transfer and semi-supervised benchmarks. Our implementation and pretrained models are given on GitHub.
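The "slow-moving average" target update amounts to an exponential moving average over parameters; a minimal sketch, where the decay constant is illustrative and in the full method it is annealed over training:

```python
import torch

@torch.no_grad()
def update_target_network(online, target, tau=0.996):
    """Sketch of BYOL's target update: the target parameters track a slow
    exponential moving average of the online network parameters."""
    for p_online, p_target in zip(online.parameters(), target.parameters()):
        p_target.mul_(tau).add_((1.0 - tau) * p_online)
```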