In the history of first-order algorithms, Nesterov's accelerated gradient descent (NAG) is one of the milestones. However, the cause of the acceleration was a mystery for a long time; the role of the gradient correction was not revealed until the high-resolution differential equation framework proposed in [Shi et al., 2021]. In this paper, we continue to investigate the acceleration phenomenon. First, we provide a significantly simplified proof based on a precise observation and a tightened inequality for $L$-smooth functions. Then, a new implicit-velocity high-resolution differential equation framework, together with the corresponding implicit-velocity phase-space representation and Lyapunov function, is proposed to investigate the convergence behavior of the iterative sequence $\{x_k\}_{k=0}^{\infty}$ generated by NAG. Furthermore, from the two kinds of phase-space representations, we find that the role played by the gradient correction is equivalent to that played by the velocity implicitly contained in the gradient, where the only difference is that the iterative sequence $\{y_k\}_{k=0}^{\infty}$ is replaced by $\{x_k\}_{k=0}^{\infty}$. Finally, for the open question of whether the gradient norm minimization of NAG has a faster rate $o(1/k^3)$, we provide a positive answer together with a proof. Meanwhile, a faster rate $o(1/k^2)$ for minimizing the objective value is also shown for the case $r > 2$.
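As a reference point for the iteration analyzed above, here is a minimal sketch of the standard NAG recursion for a convex $L$-smooth objective. The function names, the momentum coefficient $(k-1)/(k+2)$, and the step-size choice $s \approx 1/L$ are standard textbook choices used for illustration, not details quoted from the paper.

```python
def nag(grad, x0, s, num_iters=100):
    """Minimal sketch of Nesterov's accelerated gradient descent (NAG) for a
    convex L-smooth objective. `grad` returns the gradient at a point, `x0` is
    a NumPy array, and `s` is the step size (commonly s = 1/L); all names here
    are illustrative, not taken from the paper."""
    x_prev = x0.copy()
    y = x0.copy()
    for k in range(1, num_iters + 1):
        x = y - s * grad(y)                        # gradient step at the extrapolated point
        y = x + (k - 1) / (k + 2) * (x - x_prev)   # momentum / extrapolation step
        x_prev = x
    return x_prev
```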
Nesterov's accelerated gradient descent (NAG) is one of the milestones in the history of first-order algorithms. The mechanism behind the acceleration phenomenon, the gradient correction term, was not uncovered until the high-resolution differential equation framework was proposed in [Shi et al., 2022]. To deepen our understanding of how the high-resolution differential equation framework bears on the convergence rate, we continue to investigate NAG for the $\mu$-strongly convex function in this paper, based on the techniques of Lyapunov analysis and phase-space representation. First, we revisit the proof from the gradient-correction scheme. Similar to [Chen et al., 2022], a straightforward calculation simplifies the proof considerably and enlarges the step size to $s=1/L$ with a minor modification; meanwhile, the way of constructing Lyapunov functions is principled. Furthermore, we also investigate NAG from the implicit-velocity scheme. Owing to the difference in the velocity iterates, we find that the Lyapunov function constructed from the implicit-velocity scheme requires no additional term, and the calculation of the iterative difference becomes simpler. Together with the optimal step size obtained, the high-resolution differential equation framework from the implicit-velocity scheme of NAG is perfect and outperforms the gradient-correction scheme.
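For notation, a common way to write the NAG iteration for a $\mu$-strongly convex, $L$-smooth function (NAG-SC) with step size $s$, together with one standard velocity variable used in phase-space representations, is sketched below. The gradient-correction and implicit-velocity schemes differ in how this velocity iterate is defined and indexed; the display is included here only to fix notation and is not quoted from the paper.

$$
x_{k+1} = y_k - s\nabla f(y_k), \qquad
y_{k+1} = x_{k+1} + \frac{1-\sqrt{\mu s}}{1+\sqrt{\mu s}}\,\bigl(x_{k+1}-x_k\bigr), \qquad
v_k := \frac{x_k - x_{k-1}}{\sqrt{s}} .
$$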
In this paper, we revisit the class of iterative shrinkage-thresholding algorithms (ISTA) for solving the linear inverse problem with sparse representation, which arises in signal and image processing. A numerical experiment on image deblurring shows that the convergence curve, plotted on a logarithmic-scale ordinate, tends to be linear rather than logarithmic (i.e., nearly flat). Upon meticulous observation, we find that the previous assumption that the smooth part is merely convex weakens the least-squares model. Specifically, assuming the smooth part to be strongly convex is more reasonable for the least-squares model, even though the image matrix is probably ill-conditioned. Furthermore, we tighten the pivotal inequality for composite optimization, first found in [Li et al., 2022], with the smooth part being strongly convex instead of generally convex. Based on this pivotal inequality, we generalize linear convergence to composite optimization in both the objective value and the squared proximal subgradient norm. Meanwhile, we replace the original blur matrix with a simple ill-conditioned matrix whose singular values are easy to compute. The new numerical experiment shows that the proximal generalization of Nesterov's accelerated gradient descent (NAG) for the strongly convex function has a faster linear convergence rate than ISTA. Based on the tighter pivotal inequality, we also generalize the faster linear convergence rate to composite optimization, in both the objective value and the squared proximal subgradient norm, by taking advantage of a well-constructed Lyapunov function with a slight modification and the phase-space representation based on the high-resolution differential equation framework from the implicit-velocity scheme.
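To make the ISTA iteration concrete, here is a minimal sketch for the $\ell_1$-regularized least-squares problem $\min_x \tfrac12\|Ax-b\|^2 + \lambda\|x\|_1$. The variable names and the step size $1/L$ with $L = \|A\|_2^2$ are illustrative assumptions rather than the paper's experimental setup.

```python
import numpy as np

def soft_threshold(z, tau):
    """Soft-thresholding, the proximal operator of tau * ||.||_1."""
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def ista(A, b, lam, num_iters=500):
    """Minimal ISTA sketch for min_x 0.5*||Ax - b||^2 + lam*||x||_1.
    Names (A, b, lam) and the step size 1/L are illustrative."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the least-squares gradient
    x = np.zeros(A.shape[1])
    for _ in range(num_iters):
        grad = A.T @ (A @ x - b)           # gradient of the smooth (least-squares) part
        x = soft_threshold(x - grad / L, lam / L)
    return x
```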
Several widely used first-order saddle-point optimization methods yield, when derived naively, an identical continuous-time ordinary differential equation (ODE) to that of the gradient descent ascent (GDA) method. However, their convergence properties are very different even on simple bilinear games. We use a technique from fluid dynamics, called high-resolution differential equations (HRDEs), to design ODEs for several saddle-point optimization methods. On bilinear games, the convergence properties of the derived HRDEs correspond to those of the starting discrete methods. Using these techniques, we show that the HRDE of optimistic gradient descent ascent (OGDA) exhibits last-iterate convergence for monotone variational inequalities. To our knowledge, this is the first continuous-time dynamics shown to converge in such a general setting. Moreover, we provide rates for the best-iterate convergence of the OGDA method, relying only on the first-order smoothness of the monotone operator.
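For reference, the discrete OGDA update can be written in operator form as a "one gradient memory" correction of GDA. The sketch below is a generic textbook form with an illustrative step size, not the paper's specific parameterization.

```python
def ogda(F, z0, eta, num_iters=1000):
    """Minimal sketch of optimistic gradient descent ascent (OGDA) in operator
    form: z_{k+1} = z_k - 2*eta*F(z_k) + eta*F(z_{k-1}).
    Here F maps z = (x, y) to (grad_x f(x, y), -grad_y f(x, y)); `eta` is an
    illustrative step size and `z0` is a NumPy array."""
    z, F_prev = z0.copy(), F(z0)
    for _ in range(num_iters):
        F_curr = F(z)
        z = z - 2.0 * eta * F_curr + eta * F_prev   # extrapolated (optimistic) step
        F_prev = F_curr
    return z
```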
Recently, there has been great interest in connections between continuous-time dynamical systems and optimization algorithms, notably in the context of accelerated methods for smooth and unconstrained problems. In this paper we extend this perspective to nonsmooth and constrained problems by obtaining differential inclusions associated to novel accelerated variants of the alternating direction method of multipliers (ADMM). Through a Lyapunov analysis, we derive rates of convergence for these dynamical systems in different settings that illustrate an interesting tradeoff between decaying versus constant damping strategies. We also obtain perturbed equations capturing fine-grained details of these methods, which have improved stability and preserve the leading order convergence rates.
Following the same routine as [SSJ20], we continue in this paper the theoretical analysis of stochastic gradient descent (SGD) with momentum. Differently, for SGD with momentum we demonstrate that it is the two hyperparameters together, the learning rate and the momentum coefficient, that play the significant role in the linear rate of convergence in nonconvex optimization. Our analysis is based on a hyperparameter-dependent stochastic differential equation (hp-dependent SDE), which serves as a continuous surrogate for SGD with momentum. Likewise, we establish linear convergence for the continuous-time formulation of SGD with momentum and obtain an explicit expression for the optimal linear rate by analyzing the spectrum of the Kramers-Fokker-Planck operator. By comparison, we demonstrate how the optimal linear rate of convergence and the final gap of SGD, which depend only on the learning rate, vary as the momentum coefficient increases from zero to one once momentum is introduced. We then propose a mathematical interpretation of why SGD with momentum converges faster, and is more robust with respect to the learning rate, than standard SGD in practice. Finally, we show that in the presence of noise, Nesterov momentum has no essential difference from standard momentum.
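As a concrete anchor for the two hyperparameters discussed above, a minimal heavy-ball-style SGD-with-momentum loop looks as follows. The names `lr` and `beta` and the update order are standard choices used for illustration only.

```python
import numpy as np

def sgd_momentum(stoch_grad, x0, lr, beta, num_iters=1000, rng=None):
    """Minimal sketch of SGD with (heavy-ball) momentum, highlighting the two
    hyperparameters: learning rate `lr` and momentum coefficient `beta`.
    `stoch_grad(x, rng)` returns a stochastic gradient; names are illustrative."""
    rng = rng or np.random.default_rng(0)
    x, v = x0.copy(), np.zeros_like(x0)
    for _ in range(num_iters):
        v = beta * v + stoch_grad(x, rng)   # velocity (momentum) update
        x = x - lr * v                      # parameter update
    return x
```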
Alternating gradient descent ascent (AltGDA) is an optimization algorithm that has been widely used for model training in various machine learning applications, and it aims to solve nonconvex minimax optimization problems. However, existing studies show that it suffers from a high computational complexity in nonconvex minimax optimization. In this paper, we develop a single-loop and fast AltGDA-type algorithm that leverages proximal gradient updates and momentum acceleration to solve regularized nonconvex minimax optimization problems. By identifying the intrinsic Lyapunov function of this algorithm, we prove that it converges to a critical point of the nonconvex minimax optimization problem and achieves a computational complexity of $\mathcal{O}(\kappa^{1.5}\epsilon^{-2})$, where $\epsilon$ is the desired level of accuracy and $\kappa$ is the problem's condition number. This computational complexity improves the state-of-the-art complexities of single-loop GDA and AltGDA algorithms (see the summary of comparison in Table 1). We demonstrate the effectiveness of our algorithm via an experiment on adversarial deep learning.
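For orientation, the basic alternating structure of GDA (without the proximal updates and momentum acceleration that the paper adds on top) can be sketched as follows; all names and step sizes are illustrative.

```python
def alt_gda(grad_x, grad_y, x0, y0, eta_x, eta_y, num_iters=1000):
    """Minimal sketch of plain alternating gradient descent ascent (AltGDA) for
    min_x max_y f(x, y). The paper's single-loop algorithm additionally uses
    proximal updates for the regularizers and momentum acceleration; this
    simplified sketch only fixes the alternating structure."""
    x, y = x0.copy(), y0.copy()
    for _ in range(num_iters):
        x = x - eta_x * grad_x(x, y)   # descent step on x with the current y
        y = y + eta_y * grad_y(x, y)   # ascent step on y using the *updated* x
    return x, y
```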
In this paper, we develop a new type of accelerated algorithm to solve some classes of maximally monotone equations as well as monotone inclusions. Instead of using Nesterov's accelerating approach, our methods rely on the so-called Halpern-type fixed-point iteration in [32], which has recently been exploited by a number of researchers, including [24, 70]. First, we derive a new variant of the anchored gradient scheme in [70] based on Popov's past extragradient method to solve a maximally monotone equation $G(x) = 0$. We show that our method achieves the same convergence rate as the anchored gradient algorithm on the operator norm $\Vert G(x_k)\Vert$, but requires only one evaluation of $G$ per iteration, where $k$ is the iteration counter. Next, we develop two splitting algorithms to approximate a zero point of the sum of two maximally monotone operators. The first algorithm originates from the anchored gradient method combined with a splitting technique, while the second one is its Popov variant, which can reduce the per-iteration complexity. Both algorithms appear to be new and can be viewed as accelerated variants of the Douglas-Rachford (DR) splitting method. They both achieve $\mathcal{O}(1/k)$ rates on the norm $\Vert G_{\gamma}(x_k)\Vert$ of the residual operator $G_{\gamma}(\cdot)$ associated with the problem. We also propose a new accelerated Douglas-Rachford splitting scheme for solving this problem, which achieves an $\mathcal{O}(1/k)$ convergence rate on $\Vert G_{\gamma}(x_k)\Vert$ under only the maximal monotonicity assumption. Finally, we specialize our first algorithm to solve convex-concave minimax problems and apply our accelerated DR scheme to derive a new variant of the alternating direction method of multipliers (ADMM).
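For background, the classical Halpern fixed-point iteration for a nonexpansive operator $T$, which anchors every iterate to the starting point $x_0$, reads as follows; the weight $\beta_k = 1/(k+2)$ is a common choice used here for illustration and is not necessarily the one used in the paper's variants.

$$
x_{k+1} = \beta_k\, x_0 + (1-\beta_k)\, T(x_k), \qquad \beta_k = \frac{1}{k+2}.
$$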
We introduce a class of first-order methods for smooth constrained optimization that are based on an analogy to non-smooth dynamical systems. Two distinctive features of our approach are that (i) projections or optimizations over the entire feasible set are avoided, in stark contrast to projected gradient methods or the Frank-Wolfe method, and (ii) iterates are allowed to become infeasible, which differs from active set or feasible direction methods, where the descent motion stops as soon as a new constraint is encountered. The resulting algorithmic procedure is simple to implement even when constraints are nonlinear, and is suitable for large-scale constrained optimization problems in which the feasible set fails to have a simple structure. The key underlying idea is that constraints are expressed in terms of velocities instead of positions, which has the algorithmic consequence that optimizations over feasible sets at each iteration are replaced with optimizations over local, sparse convex approximations. In particular, this means that at each iteration only constraints that are violated are taken into account. The result is a simplified suite of algorithms and an expanded range of possible applications in machine learning.
The training of deep neural networks and other modern machine learning models usually consists in solving nonconvex optimization problems that are high-dimensional and subject to large-scale data. Here, momentum-based stochastic optimization algorithms have become particularly popular in recent years. The stochasticity arises from data subsampling, which reduces computational cost. Moreover, both momentum and stochasticity are supposed to help the algorithm overcome local minimizers and, hopefully, converge globally. Theoretically, this combination of stochasticity and momentum is badly understood. In this work, we propose and analyze a continuous-time model for stochastic gradient descent with momentum. This model is a piecewise-deterministic Markov process that represents the particle motion by an underdamped dynamical system and the data subsampling by stochastic switching of that dynamical system. In our analysis, we investigate longtime limits, the subsampling-to-no-subsampling limit, and the momentum-to-no-momentum limit. We are particularly interested in the case of reducing the momentum over time: intuitively, momentum helps to overcome local minimizers in the initial phase of the algorithm, but prohibits fast convergence to a global minimizer later on. Under convexity assumptions, we show convergence of the dynamical system to the global minimizer when reducing the momentum over time and letting the subsampling rate go to infinity. We then propose a stable, symplectic discretization scheme to construct algorithms from our continuous-time dynamical system. In numerical experiments, we study our discretization scheme on convex and nonconvex test problems. Additionally, we train a convolutional neural network to solve the CIFAR-10 image classification problem. Here, our algorithm reaches results that are competitive with stochastic gradient descent with momentum.
We study the random reshuffling (RR) method for smooth nonconvex optimization problems with a finite-sum structure. Although this method is widely utilized in practice, e.g., in the training of neural networks, its convergence behavior is only understood in a few limited settings. In this paper, under the well-known Kurdyka-Lojasiewicz (KL) inequality, we establish strong limit-point convergence results for RR with suitable diminishing step sizes; namely, the whole sequence of iterates generated by RR is convergent and converges to a single stationary point in an almost-sure sense. In addition, we derive the corresponding rate of convergence, depending on the KL exponent and the suitably selected diminishing step sizes. When the KL exponent lies in $[0,\frac12]$, the convergence is at a rate of $\mathcal{O}(t^{-1})$, with $t$ counting the iteration number. When the KL exponent belongs to $(\frac12,1)$, our derived convergence rate is of the form $\mathcal{O}(t^{-q})$ with $q\in(0,1)$ depending on the KL exponent. The standard KL inequality-based convergence analysis framework only applies to algorithms with a certain descent property. We conduct a novel convergence analysis for the non-descent RR method with diminishing step sizes based on the KL inequality, which generalizes the standard KL framework. We summarize our main steps and core ideas in an informal analysis framework, which is of independent interest. As a direct application of this framework, we also establish similar strong limit-point convergence results for the reshuffled proximal point method.
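To fix ideas, a minimal sketch of the random reshuffling iteration for a finite-sum problem is given below; the names and the diminishing step-size schedule are illustrative, not the paper's specific choices.

```python
import numpy as np

def random_reshuffling(grads, x0, step_size, num_epochs=100, rng=None):
    """Minimal sketch of random reshuffling (RR) for min_x (1/n) * sum_i f_i(x).
    `grads` is a list of per-component gradient functions, and `step_size(t)`
    returns the (diminishing) step size for epoch t; names are illustrative."""
    rng = rng or np.random.default_rng(0)
    x = x0.copy()
    n = len(grads)
    for t in range(num_epochs):
        perm = rng.permutation(n)            # fresh shuffle at every epoch
        for i in perm:                       # one full pass over the components
            x = x - step_size(t) * grads[i](x)
    return x
```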
In this paper, we present a convergence analysis of momentum methods in training a two-layer over-parameterized ReLU neural network, where the number of parameters is significantly larger than the number of training instances. Existing works on momentum methods show that the heavy-ball method (HB) and Nesterov's accelerated gradient method (NAG) share the same limiting ordinary differential equation (ODE), which leads to identical convergence rates. From a high-resolution dynamical view, we show that HB differs from NAG in terms of convergence rate. In addition, our findings provide tighter upper bounds on the convergence of the high-resolution ODEs of HB and NAG.
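For readers comparing the two methods, a standard way to write heavy-ball (HB) and NAG with step size $s$ and momentum coefficient $\beta$ is given below (notation chosen here for illustration); the only difference is that NAG evaluates the gradient at the extrapolated point, which is the source of the gradient-correction term in the high-resolution view.

$$
\text{HB:}\quad x_{k+1} = x_k + \beta\,(x_k - x_{k-1}) - s\,\nabla f(x_k),
\qquad
\text{NAG:}\quad
y_k = x_k + \beta\,(x_k - x_{k-1}),\quad
x_{k+1} = y_k - s\,\nabla f(y_k).
$$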
We introduce a novel adaptive damping technique for an inertial gradient system, which finds application as a gradient descent algorithm for unconstrained optimization. On an example using the nonconvex Rosenbrock function, we show an improvement over existing momentum-based gradient optimization methods. Using Lyapunov stability analysis, we also demonstrate the performance of the continuous-time version of the algorithm. Using numerical simulations, we consider the performance of its discrete-time counterpart obtained via the symplectic Euler method of discretization.
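To illustrate the discretization mentioned above, here is a minimal symplectic-Euler sketch for the inertial gradient system $\ddot{x} + \gamma\,\dot{x} + \nabla f(x) = 0$ with a constant damping coefficient; the paper's method adapts the damping, whereas this sketch keeps it fixed, and all names are illustrative.

```python
def symplectic_euler(grad, x0, v0, gamma, h, num_steps=1000):
    """Minimal sketch of a symplectic (semi-implicit) Euler discretization of
    the inertial gradient system x'' + gamma * x' + grad f(x) = 0 with fixed
    damping `gamma` and step size `h`; `x0`, `v0` are NumPy arrays."""
    x, v = x0.copy(), v0.copy()
    for _ in range(num_steps):
        v = v - h * (gamma * v + grad(x))   # update velocity with the old position ...
        x = x + h * v                       # ... then position with the new velocity
    return x
```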
The study of accelerated gradient methods in Riemannian optimization has recently witnessed notable progress. However, in contrast with the Euclidean setting, a systematic understanding of acceleration is still lacking in the Riemannian setting. We revisit the accelerated hybrid proximal extragradient (A-HPE) framework of \citet{monteiro2013accelerated}, a powerful framework for obtaining accelerated Euclidean methods. Subsequently, we propose a Riemannian version of A-HPE. The basis of our analysis of Riemannian A-HPE is a set of insights into Euclidean A-HPE, which we combine with a careful control of the distortion caused by Riemannian geometry. We describe a number of Riemannian accelerated gradient methods as concrete instances of our framework.
In this book chapter, we briefly describe the main components that constitute the gradient descent method and its accelerated and stochastic variants. We aim at explaining these components from a mathematical point of view, including theoretical and practical aspects, but at an elementary level. We will focus on basic variants of the gradient descent method and then extend our view to recent variants, especially variance-reduced stochastic gradient schemes (SGD). Our approach relies on revealing the structures presented inside the problem and the assumptions imposed on the objective function. Our convergence analysis unifies several known results and relies on a general, but elementary recursive expression. We have illustrated this analysis on several common schemes.
We study momentum-based first-order optimization algorithms in which the iterations utilize information from the two previous steps and are subject to additive white noise. This class of algorithms includes the heavy-ball method and Nesterov's accelerated method as special cases. For strongly convex quadratic problems, we use the steady-state variance of the error in the optimization variable to quantify noise amplification, and we exploit a novel geometric viewpoint to establish analytical lower bounds on the product between the settling time and the smallest/largest achievable noise amplification. For all stabilizing parameters, these bounds scale quadratically with the condition number. We also use the geometric insight developed in this paper to introduce two parameterized families of algorithms that strike a balance between noise amplification and settling time while preserving order-wise Pareto optimality. Finally, for a class of continuous-time gradient flow dynamics, whose suitable discretization yields the two-step momentum algorithms, we establish analogous lower bounds that also scale quartically with the condition number.
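A common parameterization of a two-step momentum iteration with additive white noise $w_k$, which covers the heavy-ball method ($\gamma = 0$) and Nesterov's method ($\gamma = \beta$) as special cases, is shown below; the symbols $\alpha$, $\beta$, $\gamma$, $\sigma$ are generic names chosen here for illustration rather than the paper's notation.

$$
x_{k+1} = x_k + \beta\,(x_k - x_{k-1}) - \alpha\,\nabla f\bigl(x_k + \gamma\,(x_k - x_{k-1})\bigr) + \sigma\, w_k .
$$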
Nesterov's accelerated gradient (NAG) for optimization performs better than its continuous-time limit (noiseless kinetic Langevin dynamics) when a finite step size is employed \citep{shi2021understanding}. This work explores the sampling counterpart of this phenomenon and proposes a diffusion process whose discretizations can yield accelerated gradient-based MCMC methods. More precisely, we reformulate the optimizer of NAG for strongly convex functions (NAG-SC) as a Hessian-free high-resolution ODE, change its high-resolution coefficient into a hyperparameter, inject appropriate noise, and discretize the resulting diffusion. The acceleration effect of the new hyperparameter is quantified, and it is not an artificial one created by time rescaling. Instead, the acceleration in $W_2$ distance is quantitatively established at both the continuous-dynamics level and the discrete-algorithm level. Empirical experiments in both the log-concave and the multimodal cases also numerically demonstrate this acceleration.
This work reviews numerical optimization methods for machine learning problems. Since machine learning models are highly parametrized, we focus on methods suited for high-dimensional optimization. We build intuition on quadratic models to determine which methods are suited to nonconvex optimization, and develop convergence guarantees for such methods on convex functions. With this theoretical foundation for stochastic gradient descent and momentum methods, we try to explain why the methods commonly used in the machine learning field are so successful. Besides explaining successful heuristics, the last chapter also provides an extensive review of more theoretical methods, which are not as commonly used in practice. So, in some cases, this work attempts to answer the question: why are the default TensorFlow optimizers chosen as the defaults?
Iterative regularization is a classic idea in regularization theory that has recently become popular in machine learning. On the one hand, it allows one to design efficient algorithms that control numerical and statistical accuracy at the same time. On the other hand, it sheds light on the learning curves observed while training neural networks. In this paper, we focus on iterative regularization in the context of classification. After contrasting this setting with that of regression and inverse problems, we develop an iterative regularization approach based on the hinge loss function. More precisely, we consider a diagonal approach for a family of algorithms for which we prove convergence as well as rates of convergence. Our approach compares favorably with other alternatives, as confirmed also in numerical simulations.
Modern statistical applications often involve minimizing an objective function that may be nonsmooth and/or nonconvex. This paper focuses on a broad Bregman-surrogate algorithmic framework that includes local linear approximation, mirror descent, iterative thresholding, DC programming, and many other instances. The re-characterization via generalized Bregman functions enables us to construct suitable error measures and establish global convergence rates for nonconvex and nonsmooth objectives in possibly high dimensions. For sparse learning problems, under some regularity conditions, the obtained estimators, as fixed points of the surrogates, enjoy provable statistical guarantees even though they are not necessarily local minimizers, and the sequence of iterates can be shown to approach the statistical truth within the desired accuracy rapidly. The paper also studies how to design adaptive momentum-based accelerations without assuming convexity or smoothness, by carefully controlling the step size and relaxation parameters.
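As one concrete instance of the Bregman-surrogate family mentioned above, here is a minimal sketch of mirror descent with the entropy Bregman divergence on the probability simplex (the exponentiated-gradient update); the step size `eta` and the function names are illustrative, not the paper's setup.

```python
import numpy as np

def mirror_descent_simplex(grad, x0, eta, num_iters=500):
    """Minimal sketch of entropic mirror descent on the probability simplex.
    `grad` returns the gradient of the objective, `x0` is a positive NumPy
    array, and `eta` is an illustrative step size."""
    x = x0 / x0.sum()                       # start from a point on the simplex
    for _ in range(num_iters):
        x = x * np.exp(-eta * grad(x))      # multiplicative (exponentiated-gradient) update
        x = x / x.sum()                     # re-normalize back onto the simplex
    return x
```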