Two-level stochastic optimization formulations have become instrumental in a number of machine learning contexts such as continual learning, neural architecture search, adversarial learning, and hyperparameter tuning. Practical stochastic bilevel optimization problems become challenging in optimization or learning scenarios where the number of variables is high or there are constraints. In this paper, we introduce a bilevel stochastic gradient method for bilevel problems with lower-level constraints. We also present a comprehensive convergence theory that covers all inexact calculations of the adjoint gradient (also called hypergradient) and addresses both the lower-level unconstrained and constrained cases. To promote the use of bilevel optimization in large-scale learning, we introduce a practical bilevel stochastic gradient method (BSG-1) that does not require second-order derivatives and, in the lower-level unconstrained case, dismisses any system solves and matrix-vector products.
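For reference, in the lower-level unconstrained case the adjoint (hyper)gradient admits the standard implicit-differentiation expression below (a textbook identity written in our own notation, not a formula quoted from the paper):

```latex
% Bilevel problem: min_x F(x) := f_u(x, y(x)),  y(x) = argmin_y f_l(x, y),
% with f_l(x, .) strongly convex so that y(x) is well defined and smooth.
\nabla F(x) \;=\; \nabla_x f_u(x, y(x))
  \;-\; \nabla^2_{xy} f_l(x, y(x))\,\bigl[\nabla^2_{yy} f_l(x, y(x))\bigr]^{-1} \nabla_y f_u(x, y(x)).
```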
Motivated by the recent increased interest in optimization algorithms for non-convex optimization, applied to training deep neural networks and other optimization problems in data analysis, we give an overview of recent theoretical results on global performance guarantees of optimization algorithms for non-convex optimization. We start with classical arguments showing that general non-convex problems cannot be solved efficiently in reasonable time. We then list problems for which a global minimizer can be found efficiently by exploiting the problem structure, as far as this is possible. Another way to deal with non-convexity is to relax the goal from finding the global minimum to finding a stationary point or a local minimum. For this setting, we first present known results on the convergence rates of deterministic first-order methods, followed by a general theoretical analysis of optimal stochastic and randomized gradient schemes and an overview of stochastic first-order methods. After that, we discuss quite general classes of non-convex problems, such as the minimization of $\alpha$-weakly-quasi-convex functions and of functions satisfying the Polyak-Łojasiewicz condition, which still allow theoretical convergence guarantees for first-order methods. We then consider higher-order and zeroth-order/derivative-free methods and their convergence rates for non-convex optimization problems.
This paper analyzes a two-timescale stochastic algorithm framework for bilevel optimization. Bilevel optimization is a class of problems that exhibit a two-level structure, whose goal is to minimize an outer objective function with variables constrained to be the optimal solution of an (inner) optimization problem. We consider the case where the inner problem is unconstrained and strongly convex, while the outer problem is constrained and has a smooth objective function. We propose a two-timescale stochastic approximation (TTSA) algorithm for tackling such bilevel problems. In the algorithm, a stochastic gradient update with a larger step size is used for the inner problem, while a projected stochastic gradient update with a smaller step size is used for the outer problem. We analyze the convergence rates of the TTSA algorithm under various settings: when the outer problem is strongly convex (resp. weakly convex), the TTSA algorithm finds an $\mathcal{O}(K^{-2/3})$-optimal (resp. $\mathcal{O}(K^{-2/5})$-stationary) solution, where $K$ is the total number of iterations. As an application, we show that a two-timescale natural actor-critic proximal policy optimization algorithm can be viewed as a special case of our TTSA framework. Importantly, the natural actor-critic algorithm is shown to converge at a rate of $\mathcal{O}(K^{-1/4})$ in terms of the gap in expected discounted reward compared to a globally optimal policy.
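A rough illustration of the two-timescale update pattern described above (not the authors' exact algorithm; the step-size schedules and oracle names below are placeholder assumptions):

```python
import numpy as np

def ttsa_step(x, y, k, sample_grad_inner, sample_grad_outer, project):
    """One two-timescale stochastic approximation (TTSA)-style step.

    y: inner variable, updated with the larger (faster-decaying less) step beta_k.
    x: outer variable, updated with the smaller step alpha_k and projected back
       onto the outer feasible set.
    The outer direction returned by `sample_grad_outer` should be an estimate of
    the hypergradient at (x, y); in the actual TTSA algorithm it is built from
    second-order information of the inner problem.  Schedules are illustrative.
    """
    beta_k = 1.0 / (k + 1) ** (2.0 / 3.0)   # inner (fast) step size
    alpha_k = 1.0 / (k + 1)                 # outer (slow) step size

    y = y - beta_k * sample_grad_inner(x, y)             # inner SGD step
    x = project(x - alpha_k * sample_grad_outer(x, y))   # projected outer step
    return x, y
```

The key design choice the sketch tries to convey is that the inner variable tracks the inner minimizer faster than the outer variable moves, which is what allows the biased outer updates to remain accurate.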
Bilevel optimization, the problem of minimizing a value function that involves the arg-minimum of another function, appears in many areas of machine learning. In the large-scale empirical risk minimization setting, where the number of samples is huge, it is crucial to develop stochastic methods, which only use a few samples at a time to make progress. However, computing the gradient of the value function involves solving a linear system, which makes it difficult to derive unbiased stochastic estimates. To overcome this problem we introduce a novel framework in which the solution of the inner problem, the solution of the linear system, and the main variable evolve at the same time. These directions are written as sums, making it straightforward to derive unbiased estimates. The simplicity of our approach allows us to develop global variance-reduction algorithms, where the dynamics of all variables is subject to variance reduction. We demonstrate that SABA, an adaptation of the celebrated SAGA algorithm to our framework, has an $O(\frac{1}{T})$ convergence rate and achieves linear convergence under the Polyak-Łojasiewicz assumption. This is the first stochastic algorithm for bilevel optimization that verifies either of these properties. Numerical experiments validate the usefulness of our method.
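A minimal sketch of the joint-evolution idea, in our own notation and without the SAGA-style variance reduction that defines SABA (all derivative oracles below are assumptions supplied by the caller):

```python
import numpy as np

def joint_bilevel_step(x, y, v, rho, gamma,
                       grad_y_g, hvp_yy_g, jvp_xy_g, grad_y_f, grad_x_f):
    """One step of the joint dynamics for
        min_x f(x, y*(x))  with  y*(x) = argmin_y g(x, y),
    evolving the inner variable y, the linear-system variable v (which tracks
    [d^2_yy g]^{-1} grad_y f), and the outer variable x simultaneously.

    Each direction is a plain sum over samples, so replacing the oracles by
    unbiased stochastic estimates is straightforward; SABA would additionally
    apply SAGA-type variance reduction to each of the three directions.
    """
    d_y = grad_y_g(x, y)                          # inner descent direction
    d_v = hvp_yy_g(x, y, v) - grad_y_f(x, y)      # residual of the linear system
    d_x = grad_x_f(x, y) - jvp_xy_g(x, y, v)      # approximate hypergradient

    return x - gamma * d_x, y - rho * d_y, v - rho * d_v
```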
We study a class of algorithms for solving bilevel optimization problems in both stochastic and deterministic settings when the inner-level objective is strongly convex. Specifically, we consider algorithms based on inexact implicit differentiation, and we exploit a warm-start strategy to amortize the estimation of the exact gradient. We then introduce a unified theoretical framework inspired by the study of singularly perturbed systems (Habets, 1974) to analyze such amortized algorithms. Using this framework, our analysis shows these algorithms to match the computational complexity of oracle methods that have access to an unbiased estimate of the gradient, thus outperforming many existing results for bilevel optimization. We illustrate these findings on synthetic experiments and demonstrate the efficiency of these algorithms on hyperparameter optimization experiments involving several thousands of variables.
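To make the "inexact implicit differentiation plus warm start" idea concrete, here is a hedged sketch in generic notation (not the authors' code): the linear system defining the hypergradient is solved only approximately by conjugate gradient, initialized from the solution kept from the previous outer iteration.

```python
import numpy as np

def conjugate_gradient(matvec, b, x0, max_iter=10, tol=1e-8):
    """Approximately solve A x = b given only matrix-vector products with A."""
    x = x0.copy()
    r = b - matvec(x)
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = matvec(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

def hypergradient(x, y, v_prev, grad_x_f, grad_y_f, hvp_yy_g, jvp_xy_g, cg_iters=10):
    """Inexact hypergradient of F(x) = f(x, y*(x)), with y*(x) = argmin_y g(x, y).

    Solves  d^2_yy g(x, y) v = grad_y f(x, y)  approximately by CG, warm-started
    at v_prev, and returns  grad_x f - d^2_xy g @ v  plus the new warm start.
    """
    b = grad_y_f(x, y)
    v = conjugate_gradient(lambda u: hvp_yy_g(x, y, u), b, v_prev, max_iter=cg_iters)
    return grad_x_f(x, y) - jvp_xy_g(x, y, v), v
```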
This work studies constrained stochastic optimization problems in which the objective and constraint functions are convex and expressed as compositions of stochastic functions. The problem arises in the context of fair classification, fair regression, and the design of queuing systems. Of particular interest is the large-scale setting where an oracle provides the stochastic gradients of the constituent functions, and the goal is to solve the problem with a minimal number of calls to the oracle. Owing to the compositional form, the stochastic gradients provided by the oracle do not yield unbiased estimates of the objective or constraint gradients. Instead, we construct approximate gradients by tracking the inner function evaluations, resulting in a quasi-gradient saddle point algorithm. We prove that the proposed algorithm is guaranteed to find an optimal and feasible solution almost surely. We further establish that the proposed algorithm requires $\mathcal{O}(1/\epsilon^4)$ data samples to obtain an $\epsilon$-approximate optimal point while also ensuring zero constraint violation. This result matches the sample complexity of the stochastic compositional gradient descent method for unconstrained problems and improves upon the best-known sample complexity results for the constrained setting. The efficacy of the proposed algorithm is tested on fair classification and fair regression problems. The numerical results show that the proposed algorithm outperforms the state-of-the-art algorithms in terms of convergence rate.
In this book chapter, we briefly describe the main components that constitute the gradient descent method and its accelerated and stochastic variants. We aim at explaining these components from a mathematical point of view, including theoretical and practical aspects, but at an elementary level. We will focus on basic variants of the gradient descent method and then extend our view to recent variants, especially variance-reduced stochastic gradient descent (SGD) schemes. Our approach relies on revealing the structures present inside the problem and the assumptions imposed on the objective function. Our convergence analysis unifies several known results and relies on a general, but elementary, recursive expression. We illustrate this analysis on several common schemes.
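As a small companion to the variance-reduced schemes mentioned above, here is a textbook-style SVRG loop for a finite-sum objective (a hedged sketch; the step size and epoch length are arbitrary choices, not recommendations from the chapter):

```python
import numpy as np

def svrg(grad_i, n, x0, step=0.1, epochs=20, inner_steps=None, rng=None):
    """Standard SVRG for min_x (1/n) sum_i f_i(x).

    grad_i(x, i) returns the gradient of the i-th component at x.  Each epoch
    recomputes the full gradient at a snapshot and then takes variance-reduced
    stochastic steps  grad_i(x) - grad_i(snapshot) + full_grad.
    """
    rng = rng or np.random.default_rng(0)
    inner_steps = inner_steps or 2 * n
    x = x0.copy()
    for _ in range(epochs):
        snapshot = x.copy()
        full_grad = np.mean([grad_i(snapshot, i) for i in range(n)], axis=0)
        for _ in range(inner_steps):
            i = rng.integers(n)
            g = grad_i(x, i) - grad_i(snapshot, i) + full_grad
            x -= step * g
    return x
```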
Two of the most prominent algorithms for solving unconstrained smooth games are the classical stochastic gradient descent-ascent (SGDA) and the recently introduced stochastic consensus optimization (SCO) [Mescheder et al., 2017]. SGDA is known to converge to a stationary point for specific classes of games, but current convergence analyses require a bounded variance assumption. SCO is used successfully for solving large-scale adversarial problems, but its convergence guarantees are limited to its deterministic variant. In this work, we introduce the expected co-coercivity condition, explain its benefits, and provide the first convergence guarantees for SGDA and SCO under this condition for solving a class of stochastic variational inequality problems that are potentially non-monotone. We prove linear convergence of both methods to a neighborhood of the solution when they use a constant step size, and we propose insightful stepsize-switching rules to guarantee convergence to the exact solution. In addition, our convergence guarantees hold under the arbitrary sampling paradigm, and thus we give insights into the complexity of mini-batching.
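A minimal sketch of simultaneous SGDA for a min-max problem $\min_x \max_y \phi(x, y)$ with a constant step size, to fix notation (placeholder oracles; the stepsize-switching rules discussed above are not shown):

```python
def sgda(stoch_grad_x, stoch_grad_y, x, y, step=1e-2, iters=10_000):
    """Simultaneous stochastic gradient descent-ascent:
    descend in x along an unbiased estimate of grad_x phi(x, y) and
    ascend in y along an unbiased estimate of grad_y phi(x, y),
    with both gradients evaluated at the same (x, y)."""
    for _ in range(iters):
        gx = stoch_grad_x(x, y)
        gy = stoch_grad_y(x, y)
        x = x - step * gx
        y = y + step * gy
    return x, y
```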
We study stochastic monotone inclusion problems, which widely appear in machine learning applications, including robust regression and adversarial learning. We propose novel variants of stochastic Halpern iteration with recursive variance reduction. In the cocoercive -- and more generally Lipschitz-monotone -- setup, our algorithm attains $\epsilon$ norm of the operator with $\mathcal{O}(\frac{1}{\epsilon^3})$ stochastic operator evaluations, which significantly improves over state of the art $\mathcal{O}(\frac{1}{\epsilon^4})$ stochastic operator evaluations required for existing monotone inclusion solvers applied to the same problem classes. We further show how to couple one of the proposed variants of stochastic Halpern iteration with a scheduled restart scheme to solve stochastic monotone inclusion problems with ${\mathcal{O}}(\frac{\log(1/\epsilon)}{\epsilon^2})$ stochastic operator evaluations under additional sharpness or strong monotonicity assumptions.
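For orientation, a bare-bones stochastic Halpern iteration for finding a zero of an operator $F$ is sketched below (the anchoring schedule is the standard $1/(k+2)$ choice; the recursive variance-reduction estimator that drives the improved rates above is only indicated in a comment):

```python
def stochastic_halpern(stoch_operator, x0, eta=0.5, iters=1000):
    """Halpern iteration  x_{k+1} = beta_k * x0 + (1 - beta_k) * (x_k - eta * F_hat(x_k)),
    anchored at the starting point x0, where F_hat is an unbiased estimate of the
    operator F whose zero we seek."""
    x = x0.copy()
    for k in range(iters):
        beta_k = 1.0 / (k + 2)
        x = beta_k * x0 + (1.0 - beta_k) * (x - eta * stoch_operator(x))
        # The variance-reduced variants replace stoch_operator(x) with a
        # recursively corrected estimate built from previous iterates.
    return x
```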
We introduce a class of first-order methods for smooth constrained optimization that are based on an analogy to non-smooth dynamical systems. Two distinctive features of our approach are that (i) projections or optimizations over the entire feasible set are avoided, in stark contrast to projected gradient methods or the Frank-Wolfe method, and (ii) iterates are allowed to become infeasible, which differs from active set or feasible direction methods, where the descent motion stops as soon as a new constraint is encountered. The resulting algorithmic procedure is simple to implement even when constraints are nonlinear, and is suitable for large-scale constrained optimization problems in which the feasible set fails to have a simple structure. The key underlying idea is that constraints are expressed in terms of velocities instead of positions, which has the algorithmic consequence that optimizations over feasible sets at each iteration are replaced with optimizations over local, sparse convex approximations. In particular, this means that at each iteration only constraints that are violated are taken into account. The result is a simplified suite of algorithms and an expanded range of possible applications in machine learning.
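To illustrate the velocity-based constraint handling, here is a sketch under our own assumptions (generic QP solver for the local subproblem, arbitrary constants; the authors' actual schemes and safeguards are richer):

```python
import numpy as np
from scipy.optimize import minimize

def velocity_step(x, grad_f, g_list, grad_g_list, dt=0.1, alpha=1.0):
    """One step of a velocity-constrained method for  min f(x)  s.t.  g_i(x) <= 0.

    Instead of projecting x onto the feasible set, pick a velocity v that descends
    on f while pushing the *currently violated or active* constraints back toward
    feasibility:  alpha * g_i(x) + grad g_i(x) . v <= 0.  Only constraints with
    g_i(x) >= 0 enter the local subproblem, so it is sparse when few are violated.
    """
    gf = grad_f(x)
    active = [i for i, g in enumerate(g_list) if g(x) >= 0.0]

    def objective(v):
        return gf @ v + 0.5 * (v @ v)

    constraints = [
        {"type": "ineq",
         "fun": (lambda v, i=i: -(alpha * g_list[i](x) + grad_g_list[i](x) @ v))}
        for i in active
    ]
    v = minimize(objective, np.zeros_like(x), constraints=constraints,
                 method="SLSQP").x
    return x + dt * v
```

Note how the iterate x + dt * v may remain infeasible for a while; feasibility is only enforced asymptotically through the velocity constraints.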
In recent years, a variety of gradient-based methods have been developed to solve bi-level optimization (BLO) problems in the machine learning and computer vision areas. However, the theoretical correctness and practical effectiveness of these existing approaches always rely on certain restrictive conditions (e.g., the lower-level singleton, LLS), which can hardly be satisfied in real-world applications. Moreover, previous literature only proves theoretical results based on specific iteration strategies, thus lacking a general recipe to uniformly analyze the convergence behavior of different gradient-based BLO methods. In this work, we formulate BLO from an optimistic bi-level viewpoint and establish a new gradient-based algorithmic framework, named Bi-level Descent Aggregation (BDA), to partially address the above issues. Specifically, BDA provides a modular structure to hierarchically aggregate the upper- and lower-level subproblems to generate our bi-level iterative dynamics. Theoretically, we establish a general convergence analysis template and derive a new proof recipe to investigate the essential theoretical properties of gradient-based BLO methods. Furthermore, this work systematically explores the convergence behavior of BDA in different optimization scenarios, i.e., considering various solution qualities (global/local/stationary solutions) returned from solving the approximation subproblems. Extensive experiments justify our theoretical results and demonstrate the superiority of the proposed algorithm on hyper-parameter optimization and meta-learning tasks. The source code is available at https://github.com/vis-opt-group/bda.
In this paper, we consider improving the stochastic variance reduced gradient (SVRG) method by incorporating curvature information of the objective function. We propose to reduce the variance of stochastic gradients by incorporating the computationally efficient Barzilai-Borwein (BB) method into SVRG. We also incorporate a BB step size into this variant. We prove a linear convergence theorem that applies not only to the proposed method but also to other existing variants of SVRG that use second-order information. We conduct numerical experiments on benchmark datasets and show that the proposed method with a constant step size performs better than existing variance-reduced methods on some test problems.
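One common way to combine SVRG with a Barzilai-Borwein step size is sketched below, in the spirit of SVRG-BB (a hedged sketch; the exact variant proposed above may differ in how curvature information enters):

```python
import numpy as np

def svrg_bb(grad_i, n, x0, m=None, eta0=0.05, epochs=20, rng=None):
    """SVRG with a Barzilai-Borwein step size recomputed once per epoch.

    The step size for epoch s is the BB quotient
        eta_s = ||s_k||^2 / (m * s_k . y_k),
    with s_k the difference of consecutive snapshots and y_k the difference of
    their full gradients (a standard SVRG-BB choice).
    """
    rng = rng or np.random.default_rng(0)
    m = m or 2 * n
    x, eta = x0.copy(), eta0
    prev_snap, prev_full = None, None
    for _ in range(epochs):
        snap = x.copy()
        full = np.mean([grad_i(snap, i) for i in range(n)], axis=0)
        if prev_snap is not None:
            s_k, y_k = snap - prev_snap, full - prev_full
            denom = s_k @ y_k
            if abs(denom) > 1e-12:
                eta = (s_k @ s_k) / (m * denom)
        prev_snap, prev_full = snap, full
        for _ in range(m):
            i = rng.integers(n)
            x -= eta * (grad_i(x, i) - grad_i(snap, i) + full)
    return x
```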
Bilevel programming has recently received attention in the literature, due to a wide range of applications, including reinforcement learning and hyper-parameter optimization. However, it is widely assumed that the underlying bilevel optimization problem is solved either by a single machine or in the case of multiple machines connected in a star-shaped network, i.e., federated learning setting. The latter approach suffers from a high communication cost on the central node (e.g., parameter server) and exhibits privacy vulnerabilities. Hence, it is of interest to develop methods that solve bilevel optimization problems in a communication-efficient decentralized manner. To that end, this paper introduces a penalty function based decentralized algorithm with theoretical guarantees for this class of optimization problems. Specifically, a distributed alternating gradient-type algorithm for solving consensus bilevel programming over a decentralized network is developed. A key feature of the proposed algorithm is to estimate the hyper-gradient of the penalty function via decentralized computation of matrix-vector products and few vector communications, which is then integrated within our alternating algorithm to give the finite-time convergence analysis under different convexity assumptions. Owing to the generality of this complexity analysis, our result yields convergence rates for a wide variety of consensus problems including minimax and compositional optimization. Empirical results on both synthetic and real datasets demonstrate that the proposed method works well in practice.
We apply a stochastic sequential quadratic programming (StoSQP) algorithm to solve constrained nonlinear optimization problems, where the objective is stochastic and the constraints are deterministic. We study a fully stochastic setup, where only a single sample is available in each iteration for estimating the gradient and Hessian of the objective. We allow StoSQP to select a random step size $\bar{\alpha}_t$ adaptively, such that $\beta_t \leq \bar{\alpha}_t \leq \beta_t + \chi_t$, where $\beta_t$ and $\chi_t = o(\beta_t)$ are prespecified deterministic sequences. We also allow StoSQP to solve the Newton system inexactly via randomized iterative solvers, e.g., with the sketch-and-project method, and we do not require the approximation error of the inexact Newton direction to vanish. For this general StoSQP framework, we establish the asymptotic convergence rate of its last iterate, with the worst-case iteration complexity as a byproduct, and we perform statistical inference. In particular, with properly decaying $\beta_t, \chi_t$, we show that: (i) the StoSQP scheme takes at most $O(1/\epsilon^4)$ iterations to achieve $\epsilon$-stationarity; (ii) almost surely, $\|(x_t - x^\star, \lambda_t - \lambda^\star)\| = O(\sqrt{\beta_t \log(1/\beta_t)}) + O(\chi_t/\beta_t)$, where $(x_t, \lambda_t)$ is the primal-dual StoSQP iterate; (iii) the sequence $1/\sqrt{\beta_t} \cdot (x_t - x^\star, \lambda_t - \lambda^\star)$ converges to a mean-zero Gaussian distribution with a nontrivial covariance matrix. Moreover, we establish a Berry-Esseen bound for $(x_t, \lambda_t)$ to measure quantitatively the convergence of its distribution function. We also provide a practical estimator of the covariance matrix, from which confidence intervals for $(x^\star, \lambda^\star)$ can be constructed using the iterates $\{(x_t, \lambda_t)\}_t$. Our theorems are validated on nonlinear problems from the CUTEst test set.
Motivated by the problem of online canonical correlation analysis, we propose the Stochastic Scaled-Gradient Descent (SSGD) algorithm for minimizing the expectation of a stochastic function over a generic Riemannian manifold. SSGD generalizes the idea of projected stochastic gradient descent and allows the use of scaled stochastic gradients instead of stochastic gradients. In the special case of a spherical constraint, which arises in generalized eigenvector problems, we establish a nonasymptotic finite-sample bound of $\sqrt{1/T}$ and show that this rate is minimax optimal, up to factors depending on the relevant parameters. On the asymptotic side, a novel trajectory-averaging argument allows us to achieve local asymptotic normality with a rate that matches that of Ruppert-Polyak-Juditsky averaging. We bring these ideas together in an application to online canonical correlation analysis, obtaining, for the first time in the literature, an optimal one-time-scale algorithm with local asymptotic convergence to normality. Numerical studies of canonical correlation analysis on synthetic data are also provided.
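In the spherical special case mentioned above, one step of a scaled stochastic gradient method can be sketched as follows (a sketch under our own assumptions: a generic scaling map and a toy streaming leading-eigenvector objective, not the paper's CCA setup):

```python
import numpy as np

def ssgd_sphere_step(x, stoch_grad, scaling, step):
    """One stochastic scaled-gradient step on the unit sphere: move along a
    *scaled* stochastic gradient, then retract by normalization.  `scaling`
    maps a gradient to a scaled direction (e.g., multiplication by a
    preconditioner); with the identity scaling this reduces to projected SGD."""
    x = x - step * scaling(stoch_grad(x))
    return x / np.linalg.norm(x)

# Toy streaming use: estimate the leading eigenvector of E[A_t] on the unit
# sphere by minimizing -x^T A x (identity scaling here; a genuine CCA or
# generalized-eigenvector application would use a problem-specific scaling).
rng = np.random.default_rng(0)
d = 10
M = rng.standard_normal((d, d))
A_mean = M @ M.T
x = rng.standard_normal(d)
x /= np.linalg.norm(x)
for t in range(1, 2001):
    A_t = A_mean + 0.1 * rng.standard_normal((d, d))       # noisy sample of A
    x = ssgd_sphere_step(x, lambda z: -(A_t + A_t.T) @ z,  # gradient of -x^T A x
                         scaling=lambda g: g, step=0.5 / np.sqrt(t))
```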
We introduce and analyze Structured Stochastic Zeroth-order Descent (S-SZD), a finite-difference approach that approximates a stochastic gradient on a set of $l \leq d$ orthogonal directions, where $d$ is the dimension of the ambient space. These directions are randomly chosen and may change at each step. For smooth convex functions, we prove almost sure convergence of the iterates and a convergence rate on the function values of order $O(\frac{d}{l} k^{-c})$ for every $c < 1/2$, which is arbitrarily close to that of stochastic gradient descent (SGD) in terms of the number of iterations. Our bound also shows the benefit of using $l$ multiple directions instead of one. For non-convex functions satisfying the Polyak-Łojasiewicz condition, we establish the first convergence rates for stochastic zeroth-order algorithms under such an assumption. We corroborate our theoretical findings in numerical simulations where the assumptions are satisfied, as well as on the real-world problem of hyper-parameter optimization, observing that S-SZD has very good practical performance.
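A hedged sketch of the structured zeroth-order estimate described above (forward differences along freshly drawn orthonormal directions; the difference scheme and step-size schedule here are simplified placeholders):

```python
import numpy as np

def szd_gradient_estimate(f, x, num_dirs, h=1e-5, rng=None):
    """Structured zeroth-order surrogate gradient: finite differences of f
    along `num_dirs <= d` random orthonormal directions (drawn each call via
    QR of a Gaussian matrix), summed back into the ambient space."""
    rng = rng or np.random.default_rng()
    d = x.size
    directions, _ = np.linalg.qr(rng.standard_normal((d, num_dirs)))
    g = np.zeros(d)
    for j in range(num_dirs):
        p = directions[:, j]
        g += (f(x + h * p) - f(x)) / h * p     # directional slope times direction
    return g

def s_szd(f, x0, num_dirs, step=0.5, iters=500, rng=None):
    """Sketch of structured stochastic zeroth-order descent (S-SZD)."""
    rng = rng or np.random.default_rng(0)
    x = x0.copy()
    for k in range(1, iters + 1):
        x = x - (step / np.sqrt(k)) * szd_gradient_estimate(f, x, num_dirs, rng=rng)
    return x
```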
This work reviews numerical optimization methods with machine learning problems in mind. Since machine learning models are highly parametrized, we focus on methods suited for high-dimensional optimization. We build intuition on quadratic models to figure out which methods are suited for non-convex optimization, and we develop convergence proofs for these methods on convex functions. With this theoretical foundation for stochastic gradient descent and momentum methods, we try to explain why the methods commonly used in the machine learning field are so successful. Besides explaining successful heuristics, the last chapter also provides a broad review of more theoretically sound methods, which are not as prevalent in practice. In some sense, this work attempts to answer the question: why are the default optimizers included in TensorFlow the defaults?
We consider minimizing the average of a very large number of smooth and possibly non-convex functions. This optimization problem has received much attention in the past years due to its many applications in different fields, the most challenging being the training of Machine Learning models. Widely used approaches for solving this problem are mini-batch gradient methods which, at each iteration, update the decision vector by moving along the gradient of a mini-batch of the component functions. We consider the Incremental Gradient (IG) and the Random Reshuffling (RR) methods, which proceed in cycles, picking batches in a fixed order or by reshuffling the order after each epoch. Convergence properties of these schemes have been proved under different assumptions, usually quite strong. We aim to define ease-controlled modifications of the IG/RR schemes, which require a light additional computational effort and can be proved to converge under very weak and standard assumptions. In particular, we define two algorithmic schemes, monotone or non-monotone, in which the IG/RR iteration is controlled by using a watchdog rule and a derivative-free line search that activates only sporadically to guarantee convergence. The two schemes also allow controlling the updating of the stepsize used in the main IG/RR iteration, avoiding the use of preset rules. We prove convergence under the sole assumption of Lipschitz continuity of the gradients of the component functions, and we perform extensive computational analysis using Deep Neural Architectures and a benchmark of datasets. We compare our implementation with both full-batch gradient methods and standard online implementations of IG/RR methods, showing that the computational effort is comparable with the corresponding online methods and that the control on the learning rate may allow a faster decrease.
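For reference, the plain (uncontrolled) IG/RR cycle that the ease-controlled schemes build on looks as follows (a minimal sketch; the watchdog rule and derivative-free line search described above are deliberately omitted):

```python
import numpy as np

def random_reshuffling(grad_batch, num_batches, x0, step=0.05, epochs=30, rng=None):
    """Plain Random Reshuffling (RR): each epoch permutes the batch order and
    sweeps through all batches once.  Incremental Gradient (IG) is the same
    loop with a fixed batch order instead of a permutation."""
    rng = rng or np.random.default_rng(0)
    x = x0.copy()
    for _ in range(epochs):
        for b in rng.permutation(num_batches):   # use range(num_batches) for IG
            x -= step * grad_batch(x, b)
    return x
```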
For smooth, strongly convex objectives, the classical theory of gradient descent ensures linear convergence relative to the number of gradient evaluations. An analogous nonsmooth theory is challenging: even when the objective is smooth at every iterate, the corresponding local models are unstable, and traditional remedies require an unpredictable number of cutting planes. We instead propose a multipoint generalization of the gradient descent iteration for local optimization. While designed with general objectives in mind, we are motivated by a "max-of-smooth" model that captures the subdifferential dimension at optimality. We prove linear convergence when the objective is itself max-of-smooth, and experiments suggest a more general phenomenon.
In this paper, we study and prove non-asymptotic superlinear convergence rates for the Broyden class of quasi-Newton algorithms, which includes the Davidon-Fletcher-Powell (DFP) method and the Broyden-Fletcher-Goldfarb-Shanno (BFGS) method. The asymptotic superlinear convergence rates of these quasi-Newton methods have been extensively studied in the literature, but their explicit finite-time local convergence rates have not been fully investigated. In this paper, we provide a finite-time (non-asymptotic) convergence analysis for the Broyden class of quasi-Newton algorithms under the assumptions that the objective function is strongly convex, its gradient is Lipschitz continuous, and its Hessian is Lipschitz continuous at the optimal solution. We show that, in a local neighborhood of the optimal solution, the iterates generated by both DFP and BFGS converge to the optimal solution at a superlinear rate of $(1/k)^{k/2}$, where $k$ is the number of iterations. We also prove a similar local superlinear convergence result for the case where the objective function is self-concordant. Numerical experiments on several datasets confirm our explicit convergence rate bounds. Our theoretical guarantee is one of the first results that provide a non-asymptotic superlinear convergence rate for quasi-Newton methods.
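For concreteness, the classical BFGS update (one member of the Broyden class discussed above) is shown below in a minimal quasi-Newton loop; the line search that a practical implementation needs is omitted and the constants are placeholders.

```python
import numpy as np

def bfgs_update(H, s, y):
    """BFGS update of the inverse-Hessian approximation H from the step
    s = x_{k+1} - x_k and gradient difference y = grad_{k+1} - grad_k:
        H+ = (I - rho s y^T) H (I - rho y s^T) + rho s s^T,   rho = 1 / (y^T s).
    (DFP applies the analogous rank-two update to the Hessian approximation.)"""
    rho = 1.0 / (y @ s)
    I = np.eye(len(s))
    V = I - rho * np.outer(s, y)
    return V @ H @ V.T + rho * np.outer(s, s)

def bfgs(grad, x0, iters=50, step=1.0):
    """Bare quasi-Newton loop  x_{k+1} = x_k - step * H_k grad(x_k); a practical
    implementation would replace `step` with a Wolfe line search."""
    x, H = x0.copy(), np.eye(x0.size)
    g = grad(x)
    for _ in range(iters):
        x_new = x - step * H @ g
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        if y @ s > 1e-12:                        # curvature check keeps H positive definite
            H = bfgs_update(H, s, y)
        x, g = x_new, g_new
    return x
```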