This paper settles an open and challenging question pertaining to the design of simple high-order regularization methods for solving smooth and monotone variational inequalities (VIs). A VI involves finding $x^\star \in \mathcal{X}$ such that $\langle F(x), x - x^\star \rangle \geq 0$ for all $x \in \mathcal{X}$, and we consider the setting in which $F: \mathbb{R}^d \mapsto \mathbb{R}^d$ is smooth with up to $(p-1)^{th}$-order derivatives. For $p = 2$, \citet{Nesterov-2006-Constrained} extended the cubic regularized Newton method to VIs with a global rate of $O(\epsilon^{-1})$, and \citet{Monteiro-2012-Iteration} proposed another second-order method that achieves an improved rate of $O(\epsilon^{-2/3}\log(1/\epsilon))$, but this method requires a nontrivial binary search procedure as an inner loop. High-order methods based on similar binary search procedures have been further developed and shown to achieve a rate of $O(\epsilon^{-2/(p+1)}\log(1/\epsilon))$. However, such search procedures can be computationally prohibitive in practice, and the problem of finding a simple high-order regularization method that avoids them has remained open and challenging in optimization theory. We propose a $p^{th}$-order method that does \textit{not} require any binary search procedure and prove that it converges at a rate of $O(\epsilon^{-2/(p+1)})$. A lower bound of $\Omega(\epsilon^{-2/(p+1)})$ is also established, demonstrating that our method is optimal in the monotone setting. A restarted version attains a global linear and local superlinear convergence rate for smooth and strongly monotone VIs. Furthermore, our method achieves a global rate of $O(\epsilon^{-2/p})$ for solving smooth and nonmonotone VIs satisfying the Minty condition; moreover, the restarted version again attains a global linear and local superlinear convergence rate if the strong Minty condition holds.
From optimal transport to robust dimensionality reduction, a plethora of machine learning applications can be cast as min-max optimization problems over Riemannian manifolds. Though many min-max algorithms have been analyzed in the Euclidean setting, it has proved elusive to translate these results to the Riemannian case. Zhang et al. [2022] have recently shown that geodesic convex-concave Riemannian problems always admit saddle-point solutions. Inspired by this result, we study whether a performance gap between Riemannian and optimal Euclidean space convex-concave algorithms is necessary. We answer this question in the negative: we prove that the Riemannian corrected extragradient (RCEG) method achieves last-iterate convergence at a linear rate in the geodesically strongly-convex-concave case, matching the Euclidean result. Our results also extend to the stochastic and non-smooth cases, where RCEG and Riemannian gradient ascent descent (RGDA) achieve near-optimal convergence rates up to factors that depend on the curvature of the manifold.
We study monotone inclusions and monotone variational inequalities, as well as their generalizations to the nonmonotone setting. We first show that the Extra Anchored Gradient (EAG) algorithm, originally proposed by Yoon and Ryu [2021] for unconstrained convex-concave min-max optimization, can be used to solve the more general problem of Lipschitz monotone inclusions. More specifically, we prove that EAG solves Lipschitz monotone inclusion problems with an \emph{accelerated convergence rate} of $O(\frac{1}{T})$, which is \emph{optimal among all first-order methods} [Diakonikolas, 2020; Yoon and Ryu, 2021]. Our second result is a new algorithm, called Extra Anchored Gradient Plus (EAG+), which not only achieves the accelerated $O(\frac{1}{T})$ convergence rate for all monotone inclusion problems, but also exhibits the same accelerated rate for a family of general (nonmonotone) inclusion problems involving negative comonotone operators. As a special case of our second result, EAG+ enjoys the $O(\frac{1}{T})$ convergence rate for solving a non-trivial class of nonconvex-nonconcave min-max optimization problems. Our analysis is based on simple potential function arguments, which might be useful for analyzing other accelerated algorithms.
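To make the anchoring idea concrete, here is a minimal Python sketch of an anchored extragradient update for a Lipschitz monotone operator, assuming the commonly cited anchor weight of $1/(k+2)$; the step-size rule and the additional modifications that define EAG+ are not reproduced from the paper, so treat the constants as illustrative.

```python
import numpy as np

def eag_sketch(F, x0, alpha=0.1, num_iters=1000):
    """Anchored extragradient sketch for a Lipschitz monotone operator F.

    Follows the Extra Anchored Gradient structure (pull every iterate toward the
    anchor x0 with weight beta_k = 1/(k+2), then take an extragradient step);
    exact parameter choices and the EAG+ modifications are omitted."""
    x = x0.copy()
    for k in range(num_iters):
        beta = 1.0 / (k + 2)
        anchor = x + beta * (x0 - x)      # anchoring toward the initial point
        x_half = anchor - alpha * F(x)    # extrapolation using the current operator value
        x = anchor - alpha * F(x_half)    # update using the extrapolated operator value
    return x

# Toy usage: F(x) = A x with A skew-symmetric is monotone with a zero at the origin.
if __name__ == "__main__":
    A = np.array([[0.0, 1.0], [-1.0, 0.0]])
    sol = eag_sketch(lambda x: A @ x, np.array([1.0, 1.0]))
    print(np.linalg.norm(sol))  # residual norm; should be small
```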
We consider nonconvex-concave minimax problems, $\min_{\mathbf{x}} \max_{\mathbf{y} \in \mathcal{Y}} f(\mathbf{x}, \mathbf{y})$, where $f$ is nonconvex in $\mathbf{x}$ but concave in $\mathbf{y}$, and $\mathcal{Y}$ is a convex and bounded set. One of the most popular algorithms for solving this problem is the celebrated gradient descent ascent (GDA) algorithm, which has been widely used in machine learning, control theory and economics. Despite the extensive convergence results for the convex-concave setting, GDA with equal stepsize can converge to limit cycles or even diverge in the general setting. In this paper, we present complexity results for two-time-scale GDA applied to nonconvex-concave minimax problems, showing that the algorithm can efficiently find a stationary point of the function $\Phi(\cdot) := \max_{\mathbf{y} \in \mathcal{Y}} f(\cdot, \mathbf{y})$. To the best of our knowledge, this is the first nonasymptotic analysis of two-time-scale GDA in this setting, shedding light on its superior practical performance in training generative adversarial networks (GANs) and other real applications.
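As an illustration of the two-time-scale idea, the following Python sketch runs simultaneous gradient descent ascent with a much larger ascent step size; the function names, step sizes and projection hook are placeholders, and the step-size ratio required by the theory depends on problem constants not shown here.

```python
import numpy as np

def two_timescale_gda(grad_x, grad_y, x0, y0, eta_x=1e-3, eta_y=1e-1,
                      project_y=lambda y: y, num_iters=10000):
    """Two-time-scale GDA sketch: the ascent step eta_y is much larger than the
    descent step eta_x, mirroring the separation of time scales in the analysis."""
    x, y = x0.copy(), y0.copy()
    for _ in range(num_iters):
        gx, gy = grad_x(x, y), grad_y(x, y)   # simultaneous gradient evaluations
        x = x - eta_x * gx                    # slow descent on the nonconvex variable
        y = project_y(y + eta_y * gy)         # fast projected ascent on the concave variable
    return x, y
```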
Nonsmooth nonconvex optimization problems broadly emerge in machine learning and business decision making, whereas two core challenges impede the development of efficient solution methods with finite-time convergence guarantees: the lack of computationally tractable optimality criteria and the lack of computationally powerful oracles. The contributions of this paper are two-fold. First, we establish the relationship between the celebrated Goldstein subdifferential~\citep{Goldstein-1977-Optimization} and uniform smoothing, thereby providing the basis and intuition for designing gradient-free methods that guarantee finite-time convergence to a set of Goldstein stationary points. Second, we propose the gradient-free method (GFM) and stochastic GFM for solving a class of nonsmooth nonconvex optimization problems, and prove that both of them return a $(\delta, \epsilon)$-Goldstein stationary point of a Lipschitz function $f$ at an expected convergence rate of $O(d^{3/2}\delta^{-1}\epsilon^{-4})$, where $d$ is the problem dimension. Two-phase versions of GFM and SGFM are also proposed and proven to achieve improved large-deviation results. Finally, we demonstrate the effectiveness of 2-SGFM on training ReLU neural networks with the \textsc{MNIST} dataset.
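The connection between the Goldstein subdifferential and uniform smoothing suggests a simple two-point, gradient-free update; the sketch below is one such step under that interpretation, with the estimator scaling and step sizes chosen for illustration rather than taken from the paper.

```python
import numpy as np

def gfm_step_sketch(f, x, delta=0.1, step=1e-2, rng=np.random.default_rng(0)):
    """One hedged GFM-style step: a two-point finite-difference estimator of the
    gradient of a uniformly smoothed surrogate of f, followed by a descent step."""
    d = x.shape[0]
    w = rng.standard_normal(d)
    w /= np.linalg.norm(w)                   # random direction on the unit sphere
    g = (d / (2.0 * delta)) * (f(x + delta * w) - f(x - delta * w)) * w
    return x - step * g

# Toy usage on a nonsmooth function: f(x) = ||x||_1.
if __name__ == "__main__":
    x = np.ones(5)
    for _ in range(2000):
        x = gfm_step_sketch(lambda z: np.abs(z).sum(), x)
    print(np.abs(x).sum())  # typically much smaller than the initial value of 5.0
```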
We study stochastic monotone inclusion problems, which widely appear in machine learning applications, including robust regression and adversarial learning. We propose novel variants of stochastic Halpern iteration with recursive variance reduction. In the cocoercive -- and more generally Lipschitz-monotone -- setup, our algorithm attains $\epsilon$ norm of the operator with $\mathcal{O}(\frac{1}{\epsilon^3})$ stochastic operator evaluations, which significantly improves over state of the art $\mathcal{O}(\frac{1}{\epsilon^4})$ stochastic operator evaluations required for existing monotone inclusion solvers applied to the same problem classes. We further show how to couple one of the proposed variants of stochastic Halpern iteration with a scheduled restart scheme to solve stochastic monotone inclusion problems with ${\mathcal{O}}(\frac{\log(1/\epsilon)}{\epsilon^2})$ stochastic operator evaluations under additional sharpness or strong monotonicity assumptions.
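For context, a minimal deterministic Halpern (anchored) iteration for a cocoercive operator looks as follows; the stochastic variants described above replace the exact operator evaluation with a recursively variance-reduced estimator, which this sketch omits, and the step size and anchoring weights are illustrative.

```python
import numpy as np

def halpern_iteration_sketch(F, x0, eta=0.5, num_iters=1000):
    """Deterministic Halpern iteration sketch for a cocoercive operator F:
    average the anchor x0 with a resolvent-like step x - eta * F(x),
    with an anchoring weight that decays over time."""
    x = x0.copy()
    for k in range(num_iters):
        lam = 1.0 / (k + 2)                               # anchoring weight
        x = lam * x0 + (1.0 - lam) * (x - eta * F(x))
    return x
```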
We initiate a formal study of reproducibility in optimization. We define a quantitative measure of reproducibility of optimization procedures in the face of noisy or error-prone operations such as inexact or stochastic gradient computations or inexact initialization. We then analyze several convex optimization settings of interest such as smooth, non-smooth, and strongly-convex objective functions and establish tight bounds on the limits of reproducibility in each setting. Our analysis reveals a fundamental trade-off between computation and reproducibility: more computation is necessary (and sufficient) for better reproducibility.
We present a unified framework for time-varying convex optimization based on the prediction-correction paradigm, in both the primal and dual spaces. In this framework, a continuously varying optimization problem is sampled at fixed intervals, and each sampled problem is approximately solved with a primal or dual correction step. The solution method is warm-started with the output of a prediction step, which solves an approximation of the future problem using past information. Prediction approaches are studied and compared under different sets of assumptions. Examples of algorithms covered by this framework are time-varying versions of the gradient method, splitting methods, and the celebrated alternating direction method of multipliers (ADMM).
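A minimal sketch of the prediction-correction template, using the simplest predictor (warm-starting from the previous solution) and a gradient-based corrector; the dual and ADMM-based correctors covered by the framework are not shown, and the step counts, step sizes and toy problem are illustrative assumptions.

```python
import numpy as np

def prediction_correction_sketch(grad, x0, times, num_corr=5, step=0.1):
    """Generic prediction-correction sketch for a time-varying problem
    min_x f(x; t) sampled at the instants in `times`."""
    x = x0.copy()
    trajectory = []
    for t in times:
        # Prediction: reuse the previous approximate solution as the starting point.
        # Correction: a few gradient steps on the problem sampled at time t.
        for _ in range(num_corr):
            x = x - step * grad(x, t)
        trajectory.append(x.copy())
    return trajectory

# Toy usage: a drifting quadratic f(x; t) = 0.5 * ||x - c(t)||^2 with c(t) = [t, -t].
if __name__ == "__main__":
    traj = prediction_correction_sketch(
        grad=lambda x, t: x - np.array([t, -t]),
        x0=np.zeros(2),
        times=np.linspace(0.0, 1.0, 11),
    )
    print(traj[-1])  # tracks the moving minimizer [1, -1] up to a small tracking error
```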
Motivated by the recent increased interest in optimization algorithms for non-convex optimization, in application to training deep neural networks and other optimization problems in data analysis, we give an overview of recent theoretical results on global performance guarantees of optimization algorithms for non-convex optimization. We start with classical arguments showing that general non-convex problems cannot be solved efficiently in a reasonable time. Then we give a list of problems whose global minimizers can be found efficiently by exploiting the structure of the problem as much as possible. Another way to deal with non-convexity is to relax the goal from finding a global minimum to finding a stationary point or a local minimum. For this setting, we first present known results on the convergence rates of deterministic first-order methods, followed by a general theoretical analysis of optimal stochastic and randomized gradient schemes and an overview of stochastic first-order methods. After that, we discuss quite general classes of non-convex problems, such as minimizing $\alpha$-weakly-quasi-convex functions and functions satisfying the Polyak-Łojasiewicz condition, which still allow one to obtain theoretical convergence guarantees for first-order methods. Then we consider higher-order and zeroth-order/derivative-free methods and their convergence rates for non-convex optimization problems.
This work proposes a universal and adaptive second-order method for minimizing second-order smooth, convex functions. Our algorithm achieves $O(\sigma / \sqrt{T})$ convergence when the oracle feedback is stochastic with variance $\sigma^2$, and improves its convergence to $O( 1 / T^3)$ with deterministic oracles, where $T$ is the number of iterations. Our method also interpolates these rates without knowing the nature of the oracle a priori, which is enabled by a parameter-free adaptive step-size that is oblivious to the knowledge of smoothness modulus, variance bounds and the diameter of the constrained set. To our knowledge, this is the first universal algorithm with such global guarantees within the second-order optimization literature.
This paper is a survey of methods for solving smooth (strongly) monotone stochastic variational inequalities. To begin with, we present the deterministic foundations from which the stochastic methods eventually evolved. Then we review methods for the general stochastic formulation and look at the finite-sum setting. The last part of the paper is devoted to various recent (not necessarily stochastic) advances in algorithms for variational inequalities.
Two of the most prominent algorithms for solving unconstrained smooth games are the classical stochastic gradient descent-ascent (SGDA) and the recently introduced stochastic consensus optimization (SCO) [Mescheder et al., 2017]. SGDA is known to converge to a stationary point for specific classes of games, but current convergence analyses require a bounded variance assumption. SCO is used successfully for solving large-scale adversarial problems, but its convergence guarantees are limited to its deterministic variant. In this work, we introduce the expected co-coercivity condition, explain its benefits, and provide the first last-iterate convergence guarantees of SGDA and SCO under this condition for solving a class of stochastic variational inequality problems that are potentially non-monotone. We prove linear convergence of both methods to a neighborhood of the solution when they use a constant step-size, and we propose insightful stepsize-switching rules to guarantee convergence to the exact solution. In addition, our convergence guarantees hold under the arbitrary sampling paradigm, and as such, we give insights into the complexity of minibatching.
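For concreteness, a bare-bones SGDA loop with a constant step size is sketched below; under assumptions such as the expected co-coercivity condition described above, iterates of this kind converge linearly to a noise-dominated neighborhood of the solution. The stochastic gradient callbacks and the step size are placeholders, and the paper's stepsize-switching rules are not reproduced.

```python
import numpy as np

def sgda_sketch(stoch_grad_x, stoch_grad_y, x0, y0, step=1e-2, num_iters=10000,
                rng=np.random.default_rng(0)):
    """Plain stochastic gradient descent-ascent with a constant step size."""
    x, y = x0.copy(), y0.copy()
    for _ in range(num_iters):
        gx = stoch_grad_x(x, y, rng)          # unbiased stochastic gradient in x
        gy = stoch_grad_y(x, y, rng)          # unbiased stochastic gradient in y
        x, y = x - step * gx, y + step * gy   # simultaneous descent-ascent update
    return x, y
```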
Theoretical properties of bilevel problems are well studied when the lower-level problem is strongly convex. In this work, we focus on bilevel optimization problems without the strong-convexity assumption. In these cases, we first show that common local optimality measures such as the KKT condition or regularization can lead to undesired consequences. Then, we aim to identify the mildest conditions that make bilevel problems tractable. We identify two classes of growth conditions on the lower-level objective that lead to continuity. Under these assumptions, we show that the local optimality of the bilevel problem can be defined via the Goldstein stationarity condition of the hyper-objective. We then propose the Inexact Gradient-Free Method (IGFM) to solve the bilevel problem, using an approximate zeroth order oracle that is of independent interest. Our non-asymptotic analysis demonstrates that the proposed method can find a $(\delta, \varepsilon)$ Goldstein stationary point for bilevel problems with a zeroth order oracle complexity that is polynomial in $d, 1/\delta$ and $1/\varepsilon$.
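A hedged sketch of the overall IGFM recipe as described above: evaluate the hyper-objective only approximately by running a few lower-level gradient steps (an inexact zeroth-order oracle), then take a randomized two-point gradient-free step on it. All names, estimators and parameters below are illustrative assumptions rather than the paper's exact construction.

```python
import numpy as np

def igfm_sketch(f, inner_grad, x0, y0, delta=0.1, step=1e-2,
                inner_steps=50, inner_lr=1e-2, outer_steps=200,
                rng=np.random.default_rng(0)):
    """Inexact gradient-free sketch for a bilevel problem: the hyper-objective
    phi(x) = f(x, y*(x)) is evaluated inexactly via a few lower-level gradient
    steps, and the upper level uses a randomized two-point zeroth-order step."""
    d = x0.shape[0]

    def approx_phi(x, y_init):
        y = y_init.copy()
        for _ in range(inner_steps):            # inexact lower-level solve (warm-started)
            y = y - inner_lr * inner_grad(x, y)
        return f(x, y), y

    x, y = x0.copy(), y0.copy()
    for _ in range(outer_steps):
        w = rng.standard_normal(d)
        w /= np.linalg.norm(w)                   # random direction on the unit sphere
        phi_plus, y = approx_phi(x + delta * w, y)
        phi_minus, y = approx_phi(x - delta * w, y)
        g = (d / (2.0 * delta)) * (phi_plus - phi_minus) * w
        x = x - step * g                         # gradient-free step on the hyper-objective
    return x, y
```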
Nonconvex-concave minimax optimization has received intense interest in machine learning, including learning with robustness to data distribution, learning with non-decomposable loss, and adversarial learning, to name a few. Nevertheless, most existing works focus on gradient-descent-ascent (GDA) variants that can only be applied in smooth settings. In this paper, we consider a family of minimax problems whose objective function enjoys a nonsmooth composite structure in the variable of minimization and is concave in the variable of maximization. By fully exploiting the composite structure, we propose a smoothed proximal linear descent ascent (\textit{smoothed} PLDA) algorithm and further establish its $\mathcal{O}(\epsilon^{-4})$ iteration complexity, which matches that of smoothed GDA~\cite{zhang2020single} under smooth settings. Moreover, under the mild assumption that the objective function satisfies the one-sided Kurdyka-\L{}ojasiewicz condition with exponent $\theta \in (0,1)$, we can further improve the iteration complexity to $\mathcal{O}(\epsilon^{-2\max\{2\theta,1\}})$. To the best of our knowledge, this is the first provably efficient algorithm for nonsmooth nonconvex-concave problems that achieves the optimal iteration complexity $\mathcal{O}(\epsilon^{-2})$ when $\theta \in (0,1/2]$. As a byproduct, we discuss different stationarity concepts and quantitatively clarify their relationships, which could be of independent interest. Empirically, we illustrate the effectiveness of the proposed smoothed PLDA on variation regularized Wasserstein distributionally robust optimization problems.
Convex function constrained optimization has received growing research interests lately. For a special convex problem which has strongly convex function constraints, we develop a new accelerated primal-dual first-order method that obtains an $\mathcal{O}(1/\sqrt{\epsilon})$ complexity bound, improving the $\mathcal{O}(1/\epsilon)$ result for the state-of-the-art first-order methods. The key ingredient to our development is some novel techniques to progressively estimate the strong convexity of the Lagrangian function, which enables adaptive step-size selection and faster convergence performance. In addition, we show that the complexity is further improvable in terms of the dependence on some problem parameter, via a restart scheme that calls the accelerated method repeatedly. As an application, we consider sparsity-inducing constrained optimization which has a separable convex objective and a strongly convex loss constraint. In addition to achieving fast convergence, we show that the restarted method can effectively identify the sparsity pattern (active-set) of the optimal solution in finite steps. To the best of our knowledge, this is the first active-set identification result for sparsity-inducing constrained optimization.
Projection robust Wasserstein (PRW) distance, or Wasserstein projection pursuit (WPP), is a robust variant of the Wasserstein distance. Recent work suggests that this quantity is more robust than the standard Wasserstein distance, in particular when comparing probability measures in high-dimensions. However, it has been ruled out for practical application because the optimization model is essentially non-convex and non-smooth, which makes the computation intractable. Our contribution in this paper is to revisit the original motivation behind WPP/PRW, but take the hard route of showing that, despite its non-convexity and non-smoothness, and even despite some hardness results proved by~\citet{Niles-2019-Estimation} in a minimax sense, the original formulation for PRW/WPP \textit{can} be efficiently computed in practice using Riemannian optimization, yielding in relevant cases better behavior than its convex relaxation. More specifically, we provide three simple algorithms with solid theoretical guarantees on their complexity bounds (one in the appendix), and demonstrate their effectiveness and efficiency by conducting extensive experiments on synthetic and real data. This paper provides a first step into a computational theory of the PRW distance and provides the links between optimal transport and Riemannian optimization.
We study the random reshuffling (RR) method for smooth nonconvex optimization problems with a finite-sum structure. Although this method is widely utilized in practice, for example in the training of neural networks, its convergence behavior is only understood in several limited settings. In this paper, under the well-known Kurdyka-Łojasiewicz (KL) inequality, we establish strong limit-point convergence results for RR with appropriate diminishing step sizes: the whole sequence of iterates generated by RR is convergent and converges to a single stationary point in an almost sure sense. In addition, we derive the corresponding rate of convergence, depending on the KL exponent and the suitably selected diminishing step sizes. When the KL exponent lies in $[0,\frac{1}{2}]$, the convergence is at a rate of $\mathcal{O}(t^{-1})$, with $t$ counting the iteration number. When the KL exponent belongs to $(\frac{1}{2},1)$, our derived convergence rate is of the form $\mathcal{O}(t^{-q})$ with $q \in (0,1)$ depending on the KL exponent. The standard KL inequality-based convergence analysis framework only applies to algorithms with a certain descent property. We conduct a novel convergence analysis for the non-descent RR method with diminishing step sizes based on the KL inequality, which generalizes the standard KL framework. We summarize our main steps and core ideas in an informal analysis framework, which is of independent interest. As a direct application of this framework, we also establish similar strong limit-point convergence results for the reshuffled proximal point method.
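The random reshuffling scheme itself is simple to state; the following Python sketch shows one epoch-based implementation with diminishing step sizes, where the specific step-size schedule and the toy least-squares example are illustrative rather than taken from the paper.

```python
import numpy as np

def random_reshuffling(grad_i, x0, n, num_epochs=100,
                       step_schedule=lambda t: 0.1 / (t + 1),
                       rng=np.random.default_rng(0)):
    """Random reshuffling sketch for min_x (1/n) * sum_i f_i(x): each epoch visits
    every component gradient exactly once, in a freshly sampled random order."""
    x = x0.copy()
    for epoch in range(num_epochs):
        step = step_schedule(epoch)            # diminishing step size per epoch
        perm = rng.permutation(n)              # fresh shuffle at the start of each epoch
        for i in perm:
            x = x - step * grad_i(x, i)        # one pass over all components, without replacement
    return x

# Toy usage: least squares with f_i(x) = 0.5 * (a_i^T x - b_i)^2.
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A, b = rng.standard_normal((50, 3)), rng.standard_normal(50)
    x = random_reshuffling(lambda x, i: (A[i] @ x - b[i]) * A[i], np.zeros(3), n=50)
    print(np.linalg.norm(A.T @ (A @ x - b)) / 50)  # gradient norm of the averaged objective; smaller is better
```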
We consider the smooth convex-concave bilinearly-coupled saddle-point problem, $\min_{\mathbf{x}} \max_{\mathbf{y}} ~ f(\mathbf{x}) + h(\mathbf{x}, \mathbf{y}) - g(\mathbf{y})$, where one has access to stochastic first-order oracles for $f$, $g$ as well as the bilinear coupling function $h$. Building upon standard stochastic extragradient analysis, we present a stochastic \emph{accelerated gradient-extragradient (AG-EG)} descent-ascent algorithm that combines extragradient and Nesterov's acceleration in general stochastic settings. This algorithm leverages scheduled restarting to admit a fine-grained nonasymptotic convergence rate that matches known lower bounds of \citet{ibrahim2020linear} and \citet{zhang2021lower} in their corresponding settings, plus an additional statistical error term that is optimal up to a constant prefactor. This is the first result that achieves such a relatively complete characterization of optimality in saddle-point optimization.
Generalized self-concordance is a key property present in the objective functions of many important learning problems. We establish the convergence rate of a simple Frank-Wolfe variant that uses the open-loop step size strategy $\gamma_t = 2/(t+2)$, obtaining an $\mathcal{O}(1/t)$ convergence rate for this class of functions in terms of both the primal gap and the Frank-Wolfe gap, where $t$ is the iteration count. This avoids the use of second-order information or the need to estimate local smoothness parameters as in previous work. We also show improved convergence rates for various common cases, for example, when the feasible region under consideration is uniformly convex or polyhedral.
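The Frank-Wolfe variant described above is easy to sketch, since the open-loop rule $\gamma_t = 2/(t+2)$ requires no line search or smoothness estimates; the toy quadratic objective and the simplex linear minimization oracle below are illustrative, not the generalized self-concordant setting analyzed in the paper.

```python
import numpy as np

def frank_wolfe(grad, lmo, x0, num_iters=200):
    """Frank-Wolfe with the open-loop step size gamma_t = 2 / (t + 2):
    only a gradient oracle and a linear minimization oracle (LMO) are needed."""
    x = x0.copy()
    for t in range(num_iters):
        g = grad(x)
        v = lmo(g)                     # argmin_{v in feasible set} <g, v>
        gamma = 2.0 / (t + 2)          # open-loop step size, no line search
        x = (1.0 - gamma) * x + gamma * v
    return x

# Toy usage: minimize 0.5 * ||x - c||^2 over the probability simplex.
if __name__ == "__main__":
    c = np.array([0.2, 0.5, 0.3])
    simplex_lmo = lambda g: np.eye(len(g))[np.argmin(g)]   # best vertex of the simplex
    x = frank_wolfe(lambda x: x - c, simplex_lmo, x0=np.array([1.0, 0.0, 0.0]))
    print(x)  # approaches c, which already lies in the simplex
```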
Nonconvex-nonconcave minimax optimization has been the focus of intense research over the last decade due to its broad applications in machine learning and operation research. Unfortunately, most existing algorithms cannot be guaranteed to converge and always suffer from limit cycles. Their global convergence relies on certain conditions that are difficult to check, including but not limited to the global Polyak-\L{}ojasiewicz condition, the existence of a solution satisfying the weak Minty variational inequality and the $\alpha$-interaction dominant condition. In this paper, we develop the first provably convergent algorithm called the doubly smoothed gradient descent ascent method, which gets rid of the limit cycle without requiring any additional conditions. We further show that the algorithm has an iteration complexity of $\mathcal{O}(\epsilon^{-4})$ for finding a game stationary point, which matches the best iteration complexity of single-loop algorithms under nonconvex-concave settings. The algorithm presented here opens up a new path for designing provable algorithms for nonconvex-nonconcave minimax optimization problems.