We propose a Randomised Subspace Gauss-Newton (R-SGN) algorithm for solving nonlinear least-squares optimization problems that uses a sketched Jacobian of the residual in the variable domain and solves a reduced linear least-squares problem at each iteration. A sublinear global rate of convergence result is presented for a trust-region variant of R-SGN, with high probability, which matches deterministic counterpart results in the order of the accuracy tolerance. Promising preliminary numerical results are presented for R-SGN on logistic regression and on nonlinear regression problems from the CUTEst collection.
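As a rough illustration of the mechanics described above, here is a minimal NumPy sketch of a subspace Gauss-Newton step; the Gaussian sketch, the linear toy residual, and all parameter values are our own choices, and the paper's trust-region safeguard is omitted:

```python
import numpy as np

def rsgn_step(r, J, x, s_dim, rng):
    """One simplified R-SGN step: sketch the Jacobian in the variable
    domain and solve the reduced linear least-squares subproblem."""
    d = x.size
    S = rng.standard_normal((d, s_dim)) / np.sqrt(s_dim)   # Gaussian sketch
    JS = J @ S                                             # m x s sketched Jacobian
    p, *_ = np.linalg.lstsq(JS, -r, rcond=None)            # reduced subproblem
    return x + S @ p                                       # step stays in range(S)

# toy: linear residual r(x) = A x - b, so J(x) = A and Gauss-Newton is exact
rng = np.random.default_rng(0)
A, b = rng.standard_normal((30, 10)), rng.standard_normal(30)
x = np.zeros(10)
for _ in range(200):
    x = rsgn_step(A @ x - b, A, x, s_dim=3, rng=rng)
```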
We present a Newton-type method that converges fast from any initialization and for arbitrary convex objectives with Lipschitz Hessians. We achieve this by merging cubic regularization with a certain adaptive Levenberg-Marquardt penalty. In particular, we show that the iterates given by $x^{k+1} = x^k - \bigl(\nabla^2 f(x^k) + \sqrt{H\|\nabla f(x^k)\|}\,\mathbf{I}\bigr)^{-1}\nabla f(x^k)$, where $H > 0$ is a constant, converge globally with a $\mathcal{O}(\frac{1}{k^2})$ rate. Our method is the first variant of Newton's method that has both cheap iterations and provably fast global convergence. Moreover, we prove that locally our method converges superlinearly when the objective is strongly convex. To boost the method's performance, we present a line search procedure that does not need hyperparameters and is provably efficient.
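The update is explicit enough to transcribe directly; below is a minimal sketch of the stated iteration on a toy convex objective (the objective and the value of $H$ are our own choices):

```python
import numpy as np

def adaptive_newton_step(grad, hess, x, H):
    """x+ = x - (hess(x) + sqrt(H * ||grad(x)||) I)^{-1} grad(x)."""
    g = grad(x)
    reg = np.sqrt(H * np.linalg.norm(g))        # adaptive LM-style penalty
    return x - np.linalg.solve(hess(x) + reg * np.eye(x.size), g)

# toy convex objective f(x) = (x.x)^2, which has Lipschitz Hessian on bounded sets
grad = lambda x: 4.0 * np.dot(x, x) * x
hess = lambda x: 8.0 * np.outer(x, x) + 4.0 * np.dot(x, x) * np.eye(x.size)
x = np.array([3.0, -2.0])
for _ in range(100):
    x = adaptive_newton_step(grad, hess, x, H=10.0)
```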
We consider minimizing a smooth and strongly convex objective function using a stochastic Newton method. At each iteration, the algorithm is given an oracle access to a stochastic estimate of the Hessian matrix. The oracle model includes popular algorithms such as Subsampled Newton and Newton Sketch. Despite using second-order information, these existing methods do not exhibit superlinear convergence, unless the stochastic noise is gradually reduced to zero during the iteration, which would lead to a computational blow-up in the per-iteration cost. We propose to address this limitation with Hessian averaging: instead of using the most recent Hessian estimate, our algorithm maintains an average of all the past estimates. This reduces the stochastic noise while avoiding the computational blow-up. We show that this scheme exhibits local $Q$-superlinear convergence with a non-asymptotic rate of $(\Upsilon\sqrt{\log (t)/t}\,)^{t}$, where $\Upsilon$ is proportional to the level of stochastic noise in the Hessian oracle. A potential drawback of this (uniform averaging) approach is that the averaged estimates contain Hessian information from the global phase of the method, i.e., before the iterates converge to a local neighborhood. This leads to a distortion that may substantially delay the superlinear convergence until long after the local neighborhood is reached. To address this drawback, we study a number of weighted averaging schemes that assign larger weights to recent Hessians, so that the superlinear convergence arises sooner, albeit with a slightly slower rate. Remarkably, we show that there exists a universal weighted averaging scheme that transitions to local convergence at an optimal stage, and still exhibits a superlinear convergence rate nearly (up to a logarithmic factor) matching that of uniform Hessian averaging.
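To make the uniform-averaging idea concrete, here is a minimal sketch of a Newton iteration with a running average of noisy Hessian estimates; the toy quadratic, the noise model, and the oracle interface are our own assumptions, and the paper's weighted schemes would simply replace the $1/(t+1)$ weights:

```python
import numpy as np

def averaged_newton(grad, hess_oracle, x0, steps, rng):
    """Newton iteration with uniform Hessian averaging:
    H_bar_t = (1/(t+1)) * sum of all noisy Hessian estimates so far."""
    x, H_bar = x0.copy(), None
    for t in range(steps):
        H_t = hess_oracle(x, rng)                       # stochastic Hessian estimate
        H_bar = H_t if H_bar is None else (t * H_bar + H_t) / (t + 1)
        x = x - np.linalg.solve(H_bar, grad(x))         # Newton step with averaged Hessian
    return x

# toy strongly convex quadratic with a symmetric-noise Hessian oracle
A = np.diag([1.0, 4.0])
grad = lambda x: A @ x
def hess_oracle(x, rng):
    noise = 0.1 * rng.standard_normal((2, 2))
    return A + (noise + noise.T) / 2
x = averaged_newton(grad, hess_oracle, np.array([5.0, -3.0]), 50,
                    np.random.default_rng(0))
```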
Motivated by the recent increased interest in optimization algorithms for non-convex optimization, in application to training deep neural networks and other optimization problems in data analysis, we give an overview of recent theoretical results on global performance guarantees of optimization algorithms for non-convex optimization. We start with classical arguments showing that general non-convex problems cannot be solved efficiently in a reasonable time. Then we give a list of problems for which a global minimizer can be found efficiently by exploiting the structure of the problem as much as possible. Another way to deal with non-convexity is to relax the goal from finding a global minimum to finding a stationary point or a local minimum. For this setting, we first present known results for the convergence rates of deterministic first-order methods, followed by a general theoretical analysis of optimal stochastic and randomized gradient schemes and an overview of stochastic first-order methods. After that, we discuss quite general classes of non-convex problems, such as minimizing $\alpha$-weakly-quasi-convex functions and functions satisfying the Polyak-Lojasiewicz condition, which still allow one to obtain theoretical convergence guarantees for first-order methods. We then consider higher-order and zeroth-order/derivative-free methods and their convergence rates for non-convex optimization problems.
We introduce a Dimension-Reduced Second-Order Method (DRSOM) for convex and non-convex unconstrained optimization. Under a trust-region-like framework, our method preserves the convergence of second-order methods while using only Hessian-vector products in two directions. Moreover, the computational overhead remains comparable to first-order methods such as gradient descent. We show that the method has a complexity of $O(\epsilon^{-3/2})$ to satisfy the first-order and second-order conditions in the subspace. The applicability and performance of DRSOM are exhibited by various computational experiments on logistic regression, $L_2$-$L_p$ minimization, sensor network localization, and neural network training. For neural networks, our preliminary implementation seems to gain computational advantages in terms of training accuracy and iteration complexity over state-of-the-art first-order methods including SGD and ADAM.
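A minimal sketch of the two-direction idea follows; we minimize the quadratic model over the span of the gradient and the previous step using two Hessian-vector products. The simple Tikhonov regularization stands in for the paper's trust-region subproblem, and the toy problem and seeding are our own choices:

```python
import numpy as np

def drsom_step(grad, hvp, x, d_prev, reg=1e-8):
    """One simplified DRSOM-style step: minimize the quadratic model over
    span{gradient, previous step}, using two Hessian-vector products."""
    g = grad(x)
    V = np.column_stack([g, d_prev])                  # 2-dim subspace basis
    HV = np.column_stack([hvp(x, V[:, 0]), hvp(x, V[:, 1])])
    Q = V.T @ HV                                      # 2x2 reduced Hessian
    c = V.T @ g                                       # reduced gradient
    alpha = np.linalg.solve(Q + reg * np.eye(2), -c)  # reduced model minimizer
    d = V @ alpha
    return x + d, d

# toy ill-conditioned quadratic f(x) = 0.5 * x^T A x
A = np.diag([1.0, 10.0, 100.0])
grad = lambda x: A @ x
hvp = lambda x, v: A @ v
x = np.array([1.0, 1.0, 1.0])
d = -1e-3 * grad(x)                  # seed the second direction
for _ in range(30):
    x, d = drsom_step(grad, hvp, x, d)
```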
In this paper, we propose SC-Reg (self-concordant regularization) for learning overparameterized feedforward neural networks, incorporating second-order information within the \emph{Newton decrement} framework for convex problems. We propose the Generalized Gauss-Newton with Self-Concordant Regularization (GGN-SCORE) algorithm, which updates the network parameters each time it receives a new input batch. The proposed algorithm exploits the structure of the second-order information in the Hessian matrix, thereby reducing the training computational overhead. Although our current analysis considers only the convex case, numerical experiments show the efficiency of our method and its fast convergence under both convex and non-convex settings, comparing favorably against baseline first-order methods and a quasi-Newton method.
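For orientation, here is a generic sketch of a generalized Gauss-Newton direction of the kind such methods build on; the interpretation of `J`, `Q`, and `reg` below is our own illustrative setup, with `reg` standing in for the self-concordant regularization schedule that is the paper's actual contribution:

```python
import numpy as np

def ggn_step(J, Q, g, reg):
    """Generalized Gauss-Newton direction: curvature J^T Q J, where J is
    the network Jacobian w.r.t. parameters and Q is the Hessian of the
    convex loss w.r.t. the network outputs."""
    G = J.T @ Q @ J + reg * np.eye(J.shape[1])
    return -np.linalg.solve(G, g)

# toy batch: linear "network" with squared loss, so Q = I and g = J^T r
rng = np.random.default_rng(0)
J = rng.standard_normal((32, 6))            # batch-output Jacobian
r = rng.standard_normal(32)                 # residuals on the batch
step = ggn_step(J, np.eye(32), J.T @ r, reg=1e-2)
```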
We investigate the problem of recovering a partially observed high-rank matrix whose columns obey a nonlinear structure, such as lying in a union of subspaces, on an algebraic variety, or grouped in clusters. The recovery problem is formulated as the rank minimization of a nonlinear feature map applied to the original matrix, which is then further approximated by a constrained non-convex optimization problem involving the Grassmann manifold. We propose two sets of algorithms, one arising from Riemannian optimization and the other as an alternating minimization scheme, both of which include first- and second-order variants. Both sets of algorithms have theoretical guarantees. In particular, for the alternating minimization, we establish global convergence and worst-case complexity bounds. Additionally, using the Kurdyka-Lojasiewicz property, we show that the alternating minimization converges to a unique limit point. We provide extensive numerical results for the recovery of union of subspaces and clustering under entry sampling and dense Gaussian sampling. Our methods are competitive with existing approaches and, in particular, high accuracy is achieved in the recovery using Riemannian second-order methods.
In this book chapter, we briefly describe the main components that constitute the gradient descent method and its accelerated and stochastic variants. We aim at explaining these components from a mathematical point of view, including theoretical and practical aspects, but at an elementary level. We focus on basic variants of the gradient descent method and then extend our view to recent variants, especially variance-reduced stochastic gradient descent (SGD) schemes. Our approach relies on revealing the structures present inside the problem and the assumptions imposed on the objective function. Our convergence analysis unifies several known results and relies on a general, but elementary, recursive expression. We illustrate this analysis with several common schemes.
We propose a trust-region stochastic sequential quadratic programming algorithm (TR-StoSQP) to solve nonlinear optimization problems with stochastic objectives and deterministic equality constraints. We consider a fully stochastic setting, where in each iteration a single sample is generated to estimate the objective gradient. The algorithm adaptively selects the trust-region radius and, compared to the existing line-search StoSQP schemes, allows us to employ indefinite Hessian matrices (i.e., Hessians without modification) in SQP subproblems. As a trust-region method for constrained optimization, our algorithm needs to address an infeasibility issue -- the linearized equality constraints and trust-region constraints might lead to infeasible SQP subproblems. In this regard, we propose an \textit{adaptive relaxation technique} to compute the trial step that consists of a normal step and a tangential step. To control the lengths of the two steps, we adaptively decompose the trust-region radius into two segments based on the proportions of the feasibility and optimality residuals to the full KKT residual. The normal step has a closed form, while the tangential step is solved from a trust-region subproblem, to which a solution ensuring the Cauchy reduction is sufficient for our study. We establish the global almost sure convergence guarantee for TR-StoSQP, and illustrate its empirical performance on both a subset of problems in the CUTEst test set and constrained logistic regression problems using data from the LIBSVM collection.
We analyze Newton's method with lazy Hessian updates for solving general possibly non-convex optimization problems. We propose to reuse a previously seen Hessian for several iterations while computing new gradients at each step of the method. This significantly reduces the overall arithmetical complexity of second-order optimization schemes. By using the cubic regularization technique, we establish fast global convergence of our method to a second-order stationary point, while the Hessian does not need to be updated each iteration. For convex problems, we justify global and local superlinear rates for lazy Newton steps with quadratic regularization, which is easier to compute. The optimal frequency for updating the Hessian is once every $d$ iterations, where $d$ is the dimension of the problem. This provably improves the total arithmetical complexity of second-order algorithms by a factor $\sqrt{d}$.
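The lazy-update schedule is easy to sketch; the fragment below refreshes the Hessian only every $m$ steps while using a fresh gradient each step. It omits the cubic/quadratic regularization that carries the paper's guarantees, and the toy problem, $m$, and the plain `solve` (in practice one would reuse a factorization) are our own simplifications:

```python
import numpy as np

def lazy_newton(grad, hess, x0, n_steps, m):
    """Newton iterations that recompute the Hessian only every m steps
    (the paper suggests m ~ d) while refreshing the gradient each step."""
    x, H = x0.copy(), None
    for t in range(n_steps):
        if t % m == 0:                      # lazy Hessian refresh
            H = hess(x)
        x = x - np.linalg.solve(H, grad(x))
    return x

# toy smooth convex problem: f(x) = sum(exp(x)) + ||x||^2 / 2
grad = lambda x: np.exp(x) + x
hess = lambda x: np.diag(np.exp(x) + 1.0)
x = lazy_newton(grad, hess, np.full(4, 2.0), n_steps=20, m=4)
```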
We apply a stochastic sequential quadratic programming (StoSQP) algorithm to solve constrained nonlinear optimization problems, where the objective is stochastic and the constraints are deterministic. We study a fully stochastic setting, in which only a single sample is available in each iteration to estimate the gradient and Hessian of the objective. We allow StoSQP to select a random stepsize $\bar{\alpha}_t$ adaptively, such that $\beta_t \leq \bar{\alpha}_t \leq \beta_t + \chi_t$, where $\beta_t$ and $\chi_t = o(\beta_t)$ are prespecified deterministic sequences. We also allow StoSQP to solve the Newton system inexactly via randomized iterative solvers, e.g., with the sketch-and-project method, and we do not require the approximation error of the inexact Newton direction to vanish. For this general StoSQP framework, we establish the asymptotic convergence rate of its last iterate, with the worst-case iteration complexity as a byproduct, and we perform statistical inference. In particular, with properly decaying $\beta_t, \chi_t$, we show that: (i) the StoSQP scheme takes at most $O(1/\epsilon^4)$ iterations to achieve $\epsilon$-stationarity; (ii) almost surely, $\|(x_t - x^\star, \lambda_t - \lambda^\star)\| = O(\sqrt{\beta_t \log(1/\beta_t)}) + O(\chi_t/\beta_t)$, where $(x_t, \lambda_t)$ is the primal-dual StoSQP iterate; (iii) the sequence $1/\sqrt{\beta_t} \cdot (x_t - x^\star, \lambda_t - \lambda^\star)$ converges to a mean-zero Gaussian distribution with a nontrivial covariance matrix. Furthermore, we establish the Berry-Esseen bound for $(x_t, \lambda_t)$ to quantitatively measure the convergence of its distribution function. We also provide a practical estimator of the covariance matrix, from which confidence intervals for $(x^\star, \lambda^\star)$ can be constructed using the iterates $\{(x_t, \lambda_t)\}_t$. Our theorems are validated on nonlinear problems from the CUTEst test set.
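Item (iii) translates directly into an interval construction; a minimal sketch is below, assuming one already has an estimate `Sigma_hat` of the limiting covariance (the estimator itself is the paper's contribution and is not reproduced here):

```python
import numpy as np

def stosqp_confidence_intervals(z_t, Sigma_hat, beta_t, z_crit=1.96):
    """Entrywise ~95% confidence intervals for z* = (x*, lambda*) from
    the CLT (1/sqrt(beta_t)) (z_t - z*) -> N(0, Sigma):
    z_t +/- z_crit * sqrt(beta_t * diag(Sigma_hat))."""
    half = z_crit * np.sqrt(beta_t * np.diag(Sigma_hat))
    return z_t - half, z_t + half
```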
In this paper, we consider first- and second-order techniques to address continuous optimization problems arising in machine learning. In the first-order case, we propose a framework for transitioning from deterministic or semi-deterministic to stochastic quadratic regularization methods. Leveraging the two-phase nature of stochastic optimization, we propose a novel first-order algorithm with adaptive sampling and adaptive step size. In the second-order case, we propose a novel stochastic damped L-BFGS method that improves on previous algorithms in the highly non-convex context of deep learning. Both algorithms are evaluated on well-known deep learning datasets and exhibit promising performance.
This work considers the problem of learning the Markov parameters of a linear system from observed data. Recent non-asymptotic system identification results have characterized the sample complexity of this problem in the single- and multi-rollout settings. In both instances, the number of samples required to obtain acceptable estimates can yield optimization problems with an intractably large number of decision variables for a second-order algorithm. We show that a randomized and distributed Newton algorithm based on Hessian-sketching can produce $\epsilon$-optimal solutions and converges geometrically. Moreover, the algorithm is trivially parallelizable. Our results hold for a variety of sketching matrices, and we illustrate the theory with numerical examples.
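A minimal sketch of the Hessian-sketch Newton idea on a generic least-squares problem (a stand-in for the Markov-parameter regression; the Gaussian sketch, dimensions, and damping are our own choices):

```python
import numpy as np

def sketched_newton_ls(A, b, x0, sketch_dim, steps, rng):
    """Hessian-sketch Newton for min ||Ax - b||^2: exact gradient,
    curvature from the sketched Hessian (SA)^T (SA)."""
    x = x0.copy()
    for _ in range(steps):
        S = rng.standard_normal((sketch_dim, A.shape[0])) / np.sqrt(sketch_dim)
        SA = S @ A
        g = A.T @ (A @ x - b)                          # exact gradient
        H_sk = SA.T @ SA + 1e-8 * np.eye(A.shape[1])   # sketched Hessian
        x = x - np.linalg.solve(H_sk, g)
    return x

rng = np.random.default_rng(0)
A, b = rng.standard_normal((2000, 20)), rng.standard_normal(2000)
x = sketched_newton_ls(A, b, np.zeros(20), sketch_dim=100, steps=10, rng=rng)
```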
Is it possible for a first-order method, i.e., one in which only first derivatives are allowed, to be quadratically convergent? For univariate loss functions, the answer is yes -- the Steffensen method avoids second derivatives and is still quadratically convergent like Newton's method. By incorporating an optimal step size we can even push its convergence order beyond quadratic to $1+\sqrt{2} \approx 2.414$. While such high convergence orders are a pointless overkill for a deterministic algorithm, they become rewarding when the algorithm is randomized for problems of massive sizes, as randomization invariably compromises convergence speed. We introduce two adaptive learning rates inspired by the Steffensen method, intended for use in a stochastic optimization setting; they require no hyperparameter tuning aside from batch size. Extensive experiments show that they compare favorably with several existing first-order methods. When restricted to a quadratic objective, our stochastic Steffensen methods reduce to the randomized Kaczmarz method -- note that this is not true for SGD or SLBFGS -- and thus we may also view our methods as a generalization of randomized Kaczmarz to arbitrary objectives.
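For reference, here is the classical (deterministic, univariate) Steffensen iteration applied to $g = f'$, which is the starting point the abstract alludes to; the toy objective and iteration count are our own choices, and the stochastic learning-rate variants are not reproduced here:

```python
import math

def steffensen_minimize(fprime, x, iters):
    """Classical Steffensen iteration on g = f' (univariate):
    x+ = x - g(x)^2 / (g(x + g(x)) - g(x)).
    Quadratically convergent, yet uses only first derivatives."""
    for _ in range(iters):
        g = fprime(x)
        denom = fprime(x + g) - g
        if abs(denom) < 1e-15:
            break                       # numerically stationary already
        x = x - g * g / denom
    return x

# minimize f(x) = cos(x) + x^2 / 2, so f'(x) = x - sin(x) with root 0
x_star = steffensen_minimize(lambda x: x - math.sin(x), x=1.0, iters=8)
```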
We propose a general optimization-based framework for computing differentially private M-estimators, as well as a new method for constructing differentially private confidence regions. First, we show that robust statistics can be used in conjunction with noisy gradient descent or noisy Newton methods in order to obtain optimal private estimators with global linear or quadratic convergence, respectively. We establish local and global convergence guarantees under local strong convexity and self-concordance, showing that our private estimators converge with high probability to a nearly optimal neighborhood of the non-private M-estimators. Second, we tackle the problem of parametric inference by constructing differentially private estimators of the asymptotic variance of our private M-estimators. This naturally leads to approximate pivotal statistics for constructing confidence regions and conducting hypothesis testing. We demonstrate the effectiveness of a bias correction that leads to enhanced small-sample empirical performance in simulations. We illustrate the benefits of our methods in several numerical examples.
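A minimal sketch of the noisy-gradient-descent ingredient follows; the clipping rule, noise scale, and robust toy loss are our own illustrative choices, and the privacy accounting that calibrates `sigma` to a target $(\epsilon, \delta)$ is deliberately omitted:

```python
import numpy as np

def noisy_gd(grad, x0, steps, lr, clip, sigma, rng):
    """Noisy gradient descent sketch for private M-estimation:
    clip the gradient to bound sensitivity, then add Gaussian noise
    with standard deviation proportional to the clipping threshold."""
    x = x0.copy()
    for _ in range(steps):
        g = grad(x)
        g = g / max(1.0, np.linalg.norm(g) / clip)    # gradient clipping
        x = x - lr * (g + sigma * clip * rng.standard_normal(x.size))
    return x

# toy: Huber-type robust location estimation with one gross outlier
data = np.concatenate([np.random.default_rng(1).normal(1.0, 1.0, 500), [50.0]])
huber_grad = lambda x: np.mean(np.clip(x - data, -1.0, 1.0)) * np.ones(1)
x = noisy_gd(huber_grad, np.zeros(1), steps=200, lr=0.1,
             clip=1.0, sigma=0.01, rng=np.random.default_rng(0))
```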
We introduce and analyze Structured Stochastic Zeroth-order Descent (S-SZD), a finite-difference approach that approximates a stochastic gradient along a set of $l \leq d$ orthogonal directions, where $d$ is the dimension of the ambient space. These directions are randomly chosen and may change at each step. For smooth convex functions, we prove almost sure convergence of the iterates and a convergence rate on the function values of order $O(d/l\, k^{-c})$ for every $c < 1/2$, which is arbitrarily close to that of stochastic gradient descent (SGD) in terms of the number of iterations. Our bound also shows the benefit of using $l$ multiple directions instead of one. For non-convex functions satisfying the Polyak-Łojasiewicz condition, we establish the first convergence rates for stochastic zeroth-order algorithms under this assumption. We corroborate our theoretical findings in numerical simulations, where the assumptions are satisfied, as well as on the real-world problem of hyperparameter optimization, observing that S-SZD has very good practical performance.
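A minimal sketch of one such step is below: finite differences along $l$ random orthonormal directions assemble a structured gradient surrogate. The QR-based direction sampling, the omission of the $d/l$ scaling (absorbed here into the stepsize), and all parameter values are our own choices:

```python
import numpy as np

def sszd_step(f, x, l, h, lr, rng):
    """One S-SZD-style step: forward finite differences along l random
    orthonormal directions give a structured gradient surrogate."""
    d = x.size
    Q, _ = np.linalg.qr(rng.standard_normal((d, l)))   # l orthonormal directions
    g_hat = np.zeros(d)
    for i in range(l):
        p = Q[:, i]
        g_hat += (f(x + h * p) - f(x)) / h * p         # directional estimate
    return x - lr * g_hat

f = lambda x: 0.5 * np.sum(x ** 2)
rng = np.random.default_rng(0)
x = np.ones(10)
for _ in range(200):
    x = sszd_step(f, x, l=3, h=1e-5, lr=0.2, rng=rng)
```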
We introduce SketchySGD, a stochastic quasi-Newton method that uses sketching to approximate the curvature of the loss function. Quasi-Newton methods are among the most effective algorithms in traditional optimization, where they converge much faster than first-order methods such as SGD. However, for contemporary deep learning, quasi-Newton methods are considered inferior to first-order methods like SGD and Adam owing to higher per-iteration complexity and fragility due to inexact gradients. SketchySGD circumvents these issues by a novel combination of subsampling, randomized low-rank approximation, and dynamic regularization. In the convex case, we show SketchySGD with a fixed stepsize converges to a small ball around the optimum at a faster rate than SGD for ill-conditioned problems. In the non-convex case, SketchySGD converges linearly under two additional assumptions, interpolation and the Polyak-Lojasiewicz condition, the latter of which holds with high probability for wide neural networks. Numerical experiments on image and tabular data demonstrate the improved reliability and speed of SketchySGD for deep learning, compared to standard optimizers such as SGD and Adam and existing quasi-Newton methods.
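To make the "randomized low-rank approximation plus regularization" ingredient concrete, here is a sketch of a randomized Nyström preconditioner applied through the Woodbury identity; the interface, the fixed `rho`, and the toy quadratic are our own simplifications, and SketchySGD's subsampled Hessians and automatic parameter updates are not reproduced:

```python
import numpy as np

def nystrom_precond(hvp, d, rank, rho, rng):
    """Randomized Nystrom approximation H_nys of a (subsampled) Hessian,
    returned as a routine applying (H_nys + rho*I)^{-1} to a vector."""
    Omega = np.linalg.qr(rng.standard_normal((d, rank)))[0]     # test matrix
    Y = np.column_stack([hvp(Omega[:, j]) for j in range(rank)])
    C = Omega.T @ Y
    C = (C + C.T) / 2 + 1e-10 * np.eye(rank)    # symmetrize, tiny shift
    L = np.linalg.cholesky(C)
    B = np.linalg.solve(L, Y.T).T               # H_nys = B @ B.T
    U, s, _ = np.linalg.svd(B, full_matrices=False)
    lam = s ** 2                                # eigenvalues of H_nys
    def apply_inv(g):
        Ug = U.T @ g
        # exact inverse on range(U), 1/rho on the orthogonal complement
        return U @ (Ug / (lam + rho)) + (g - U @ Ug) / rho
    return apply_inv

# usage on a toy quadratic; hvp stands in for a minibatch Hessian-vector product
A = np.diag(np.linspace(1.0, 100.0, 50))
apply_inv = nystrom_precond(lambda v: A @ v, d=50, rank=10, rho=10.0,
                            rng=np.random.default_rng(0))
x = np.ones(50)
for _ in range(100):
    x = x - 0.1 * apply_inv(A @ x)              # preconditioned gradient step
```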
Finding the optimal hyperparameters of a model can be cast as a bilevel optimization problem, typically solved using zeroth-order techniques. In this work, we study first-order methods when the inner optimization problem is convex but non-smooth. We show that the forward-mode differentiation of proximal gradient descent and proximal coordinate descent yields sequences of Jacobians converging toward the exact Jacobian. Using implicit differentiation, we show that it is possible to leverage the non-smoothness of the inner problem to speed up the computation. Finally, we provide a bound on the error made on the hypergradient when the inner optimization problem is solved approximately. Results on regression and classification problems reveal computational benefits for hyperparameter optimization, especially when multiple hyperparameters are required.
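To show the mechanics of an implicit-differentiation hypergradient, here is a minimal sketch on smooth ridge regression (our own stand-in: the paper's point concerns non-smooth inner problems such as the lasso, where differentiation further restricts to the active support):

```python
import numpy as np

def ridge_hypergrad(X_tr, y_tr, X_val, y_val, lam):
    """Hypergradient dL_val/dlam via implicit differentiation.
    Inner: beta(lam) = argmin 0.5||y - X b||^2 + 0.5*lam*||b||^2,
    so (X^T X + lam I) beta = X^T y and dbeta/dlam = -(X^T X + lam I)^{-1} beta.
    Outer: L_val = 0.5 ||y_val - X_val beta||^2."""
    d = X_tr.shape[1]
    A = X_tr.T @ X_tr + lam * np.eye(d)
    beta = np.linalg.solve(A, X_tr.T @ y_tr)            # inner solution
    grad_outer = X_val.T @ (X_val @ beta - y_val)       # dL_val/dbeta
    return -np.linalg.solve(A, beta) @ grad_outer       # chain rule

rng = np.random.default_rng(0)
X = rng.standard_normal((60, 8))
y = X @ rng.standard_normal(8) + 0.1 * rng.standard_normal(60)
g = ridge_hypergrad(X[:40], y[:40], X[40:], y[40:], lam=0.5)
```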
In this paper, we study and prove non-asymptotic superlinear convergence rates for the Broyden class of quasi-Newton algorithms, which includes the Davidon-Fletcher-Powell (DFP) method and the Broyden-Fletcher-Goldfarb-Shanno (BFGS) method. The asymptotic superlinear convergence rates of these quasi-Newton methods have been extensively studied in the literature, but their explicit finite-time local convergence rates have not been fully investigated. In this paper, we provide a finite-time (non-asymptotic) convergence analysis for Broyden quasi-Newton algorithms under the assumptions that the objective function is strongly convex, its gradient is Lipschitz continuous, and its Hessian is Lipschitz continuous at the optimal solution. We show that in a local neighborhood of the optimal solution, the iterates generated by both DFP and BFGS converge to the optimal solution at a superlinear rate of $(1/k)^{k/2}$, where $k$ is the number of iterations. We also prove a similar local superlinear convergence result for the case in which the objective function is self-concordant. Numerical experiments on several datasets confirm our explicit convergence rate bounds. Our theoretical guarantee is among the first results to provide a non-asymptotic superlinear convergence rate for quasi-Newton methods.
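For readers who want the BFGS mechanics in front of them, here is a minimal sketch of the standard update and a plain unit-step iteration (no line search) on a mild toy quadratic; the toy problem, stopping tolerance, and iteration count are our own choices:

```python
import numpy as np

def bfgs_update(B, s, y):
    """Standard BFGS update of the Hessian approximation B, with step
    s = x_new - x and gradient change y = g_new - g."""
    Bs = B @ s
    return B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / (y @ s)

def bfgs_minimize(grad, x0, iters):
    """Plain BFGS iteration with unit steps (no line search)."""
    x, B = x0.astype(float), np.eye(x0.size)
    g = grad(x)
    for _ in range(iters):
        s = -np.linalg.solve(B, g)
        x_new = x + s
        g_new = grad(x_new)
        if np.linalg.norm(g_new) < 1e-10:   # stop before y ~ 0 degenerates
            return x_new
        B = bfgs_update(B, s, g_new - g)
        x, g = x_new, g_new
    return x

# toy strongly convex quadratic f(x) = 0.5 * x^T A x
A = np.diag([1.0, 2.0])
x = bfgs_minimize(lambda x: A @ x, np.array([4.0, -1.0]), iters=25)
```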
Despite their high computation and communication costs, Newton-type methods remain an appealing option for distributed training due to their robustness against ill-conditioned convex problems. In this work, we study communication compression and aggregation mechanisms for curvature information in order to reduce these costs while preserving theoretically superior local convergence guarantees. We prove that the recently developed class of three point compressors (3PC) of Richtarik et al. [2022] for gradient communication can be generalized to Hessian communication as well. This result opens up a wide variety of communication strategies, such as contractive compression and lazy aggregation, for compressing the prohibitively costly curvature information. Moreover, we discover several new 3PC mechanisms, such as adaptive thresholding and Bernoulli aggregation, which require reduced communication and only occasional Hessian computations. Furthermore, we extend and analyze our approach to bidirectional communication compression and partial device participation setups to cater to the practical considerations of applications in federated learning. For all our methods, we derive fast condition-number-independent local linear and/or superlinear convergence rates. Finally, with extensive numerical evaluations on convex optimization problems, we illustrate that our designed schemes achieve state-of-the-art communication complexity compared to several key baselines using second-order information.
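As a rough illustration of compressed Hessian communication, here is a sketch of a contractive Top-K compressor applied to Hessian *differences* against a shared reference, which is one simple instance of the 3PC pattern; the compressor choice, the difference-against-reference scheme, and the toy stream are our own simplifications of the paper's much richer mechanism family:

```python
import numpy as np

def topk(v, k):
    """Contractive Top-K compressor: keep the k largest-magnitude entries."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

def compress_hessian_stream(hessians, k):
    """Illustrative compressed Hessian communication: each round the worker
    sends only topk(H_t - H_ref), and worker and server both apply the
    message to the shared reference H_ref."""
    d = hessians[0].shape[0]
    H_ref = np.zeros((d, d))
    for H in hessians:
        msg = topk((H - H_ref).ravel(), k).reshape(d, d)   # what is transmitted
        H_ref = H_ref + msg                                # server-side state
        yield H_ref                                        # approximate H_t

# toy stream of slowly drifting Hessians
d = 4
Hs = [np.eye(d) + 0.1 * t * np.ones((d, d)) for t in range(5)]
for H_rec in compress_hessian_stream(Hs, k=4):
    pass   # H_rec approximately tracks the true Hessian with k entries/round
```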