In this paper, we propose a novel primal-dual proximal splitting algorithm (PD-PSA), named BALPA, for composite optimization problems with equality constraints, in which the loss function consists of a smooth term and a nonsmooth term composed with a linear mapping. In BALPA, the dual update is designed as a proximal point of a time-varying quadratic function, which balances the primal and dual updates and retains the proximity-induced structure of classic PD-PSAs. Moreover, this balance eliminates the inefficiency of classic PD-PSAs on composite problems in which the Euclidean norm of the linear mapping or of the equality-constraint mapping is large. BALPA therefore not only inherits the simple structure and easy implementation of classic PD-PSAs but also converges fast when these norms are large. We further propose a stochastic version of BALPA (S-BALPA) and apply BALPA to distributed optimization to devise a new distributed optimization algorithm. A comprehensive convergence analysis is conducted for both BALPA and S-BALPA. Finally, numerical experiments demonstrate the efficiency of the proposed algorithms.
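The primal-dual splitting template that BALPA refines can be illustrated with a minimal sketch. The code below is not BALPA itself (the paper's dual update is the proximal point of a time-varying quadratic); it only shows the generic pairing of a primal proximal-gradient step with a dual ascent step for an equality-constrained composite problem. All function names, step sizes, and the soft-thresholding example are illustrative assumptions.

```python
import numpy as np

def prox_l1(v, t):
    """Proximal operator of t*||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def primal_dual_eq(A, b, grad_f, prox_g, x0, tau=0.1, sigma=0.1, iters=500):
    """Generic primal-dual splitting sketch for min f(x) + g(x) s.t. Ax = b."""
    x, lam = x0.copy(), np.zeros(A.shape[0])
    for _ in range(iters):
        # primal: proximal-gradient step on f + g + <lam, Ax - b>
        x = prox_g(x - tau * (grad_f(x) + A.T @ lam), tau)
        # dual: ascent step on the constraint residual
        lam = lam + sigma * (A @ x - b)
    return x, lam
```

For instance, projecting a point onto the simplex-like constraint $\mathbf{1}^\top x = 1$ (with $f(x) = \tfrac12\|x - c\|^2$ and $g \equiv 0$) converges to the closed-form solution $x^* = c + \frac{1 - \mathbf{1}^\top c}{n}\mathbf{1}$.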
This paper proposes a novel dual inexact splitting algorithm (DISA) for distributed convex composite optimization problems in which the local loss function consists of an $L$-smooth term and a possibly nonsmooth term composed with a linear operator. We prove that DISA converges when the primal and dual step sizes $\tau$ and $\beta$ satisfy $0 < \tau < 2/L$ and $0 < \tau\beta < 1$. Compared with existing primal-dual proximal splitting algorithms (PD-PSAs), DISA removes the dependence of the convergent step-size range on the Euclidean norm of the linear operator: when that norm is large, DISA allows larger step sizes, thereby ensuring fast convergence. Moreover, we establish sublinear and linear convergence rates for DISA under general convexity and metric subregularity, respectively. Furthermore, an approximate iterative version of DISA is provided, for which global convergence and a sublinear convergence rate are proved. Finally, numerical experiments not only corroborate the theoretical analysis but also show that DISA achieves significant acceleration over existing PD-PSAs.
We consider the problem of minimizing the sum of three convex functions, where the first, $f$, is smooth, the second is nonsmooth and proximable, and the third is the composition of a nonsmooth proximable function with a linear operator $L$. This template problem has many applications, for instance in image processing and machine learning. First, we propose a new primal-dual algorithm for this problem, which we call PDDY. It is obtained by applying Davis-Yin splitting to a monotone inclusion in a primal-dual product space, in which the operators are monotone under a specific metric. We show that three existing algorithms (the two forms of the Condat-Vũ algorithm and the PD3O algorithm) have the same structure, so that PDDY is the fourth missing link in this self-consistent class of primal-dual algorithms. This representation simplifies the convergence analysis: it allows us to derive sublinear convergence rates in general, and linear convergence results in the presence of strong convexity. Moreover, within our broad and flexible analysis framework, we propose new stochastic generalizations of the algorithms, in which variance-reduced stochastic estimates of the gradient of $f$ are used instead of the true gradient. Furthermore, as a particular case of PDDY, we obtain a linearly converging algorithm for minimizing a strongly convex function $f$ under linear constraints; we discuss its important application to decentralized optimization.
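A minimal sketch of this template can be given with the Condat-Vũ iteration, one of the existing algorithms the abstract places alongside PDDY (this is not the PDDY update itself, which derives from Davis-Yin splitting). The dual step applies the proximal operator of the conjugate $h^*$ via the Moreau identity; all parameter values below are illustrative.

```python
import numpy as np

def condat_vu(grad_f, prox_g, prox_h, L, x0, tau, sigma, iters=1000):
    """Condat-Vu style sketch for min_x f(x) + g(x) + h(L x)."""
    x = x0.copy()
    y = np.zeros(L.shape[0])          # dual variable for the h(Lx) term
    for _ in range(iters):
        # primal: proximal-gradient step using the current dual variable
        x_new = prox_g(x - tau * (grad_f(x) + L.T @ y), tau)
        # dual: prox of sigma*h^* via the Moreau identity,
        # evaluated at an extrapolated primal point
        v = y + sigma * (L @ (2 * x_new - x))
        y = v - sigma * prox_h(v / sigma, 1.0 / sigma)
        x = x_new
    return x
```

As a sanity check, with $f(x)=\tfrac12\|x-c\|^2$, $g\equiv 0$, $h=\tfrac12\|\cdot\|^2$, and $L=I$, the minimizer is $c/2$, and the iteration reaches it for step sizes satisfying $\tau(L_f/2 + \sigma\|L\|^2)\le 1$.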
The square-root lasso problem is known to be a robust regression problem. Moreover, square-root regression problems with structured sparsity also play an important role in statistics and machine learning. In this paper, we focus on the numerical computation of large-scale linearly constrained sparse group square-root lasso problems. To overcome the difficulty that there are two nonsmooth terms in the objective function, we propose a dual semismooth Newton (SSN) based augmented Lagrangian method (ALM): we apply the ALM to the dual problem, with the subproblems solved by the SSN method. For the SSN method to apply, the positive definiteness of the generalized Jacobian is crucial; we therefore characterize its equivalence to a constraint nondegeneracy condition of the corresponding primal problem. In the numerical implementation, we fully exploit second-order sparsity so that the Newton directions can be obtained efficiently. Numerical experiments demonstrate the efficiency of the proposed algorithm.
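One of the two nonsmooth pieces in the sparse group square-root lasso objective is the group-lasso norm $\sum_g \|x_g\|_2$, whose proximal operator is block soft-thresholding. The sketch below shows that operator in isolation; the group structure and parameter names are illustrative, not the paper's implementation.

```python
import numpy as np

def prox_group_l2(x, groups, t):
    """Prox of t * sum_g ||x_g||_2: shrink each group toward zero."""
    out = x.copy()
    for g in groups:
        nrm = np.linalg.norm(x[g])
        # groups with norm below the threshold collapse to zero
        out[g] = 0.0 if nrm <= t else (1.0 - t / nrm) * x[g]
    return out
```

Each group is either zeroed out entirely (when its Euclidean norm is at most $t$) or scaled radially by $1 - t/\|x_g\|_2$, which is what makes the penalty select sparsity at the group level.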
Bilevel optimization has found extensive applications in modern machine learning problems such as hyperparameter optimization, neural architecture search, and meta-learning. While bilevel problems with a unique inner minimal point (e.g., where the inner function is strongly convex) are well understood, bilevel problems with multiple inner minimal points remain challenging and open. Existing algorithms designed for such problems apply only to restricted situations and do not come with full convergence guarantees. In this paper, we adopt a reformulation of bilevel optimization as a constrained optimization problem and solve it via a primal-dual bilevel optimization (PDBO) algorithm. PDBO not only addresses the multiple-inner-minima challenge but also features fully first-order efficiency, without involving the second-order Hessian and Jacobian computations required by most existing gradient-based bilevel algorithms. We further characterize the convergence rate of PDBO, which provides the first known non-asymptotic convergence guarantee for bilevel optimization with multiple inner minima. Our experiments demonstrate the desired performance of the proposed approach.
Nonconvex minimax problems have attracted wide attention in machine learning, signal processing and many other fields in recent years. In this paper, we propose a primal dual alternating proximal gradient (PDAPG) algorithm and a primal dual proximal gradient (PDPG-L) algorithm for solving nonsmooth nonconvex-strongly concave and nonconvex-linear minimax problems with coupled linear constraints, respectively. The iteration complexities of the two algorithms to reach an $\varepsilon$-stationary point are proved to be $\mathcal{O}\left( \varepsilon ^{-2} \right)$ and $\mathcal{O}\left( \varepsilon ^{-3} \right)$, respectively. To our knowledge, they are the first two algorithms with an iteration-complexity guarantee for solving these two classes of minimax problems.
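The alternating structure underlying such primal-dual minimax methods can be sketched in a few lines. This is only the bare alternating gradient descent-ascent skeleton: PDAPG additionally handles nonsmooth terms through proximal steps and coupled linear constraints through a dual update, none of which appear here. Step sizes and the test objective are illustrative.

```python
import numpy as np

def alt_gda(grad_x, grad_y, x0, y0, eta_x=0.05, eta_y=0.1, iters=2000):
    """Alternating gradient descent-ascent for min_x max_y f(x, y)."""
    x, y = x0.copy(), y0.copy()
    for _ in range(iters):
        x = x - eta_x * grad_x(x, y)   # descent on the min block
        y = y + eta_y * grad_y(x, y)   # ascent on the max block (uses new x)
    return x, y
```

On the strongly convex-strongly concave toy saddle $f(x,y) = \tfrac12 x^2 + xy - \tfrac12 y^2$, whose unique saddle point is the origin, the alternating loop contracts to the saddle for small enough step sizes.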
Convex function constrained optimization has received growing research interest lately. For a special class of convex problems with strongly convex function constraints, we develop a new accelerated primal-dual first-order method that obtains an $\mathcal{O}(1/\sqrt{\varepsilon})$ complexity bound, improving the $\mathcal{O}(1/\varepsilon)$ result for the state-of-the-art first-order methods. The key ingredient of our development is a set of novel techniques for progressively estimating the strong convexity of the Lagrangian function, which enables adaptive step-size selection and faster convergence performance. In addition, we show that the complexity is further improvable in terms of the dependence on some problem parameters, via a restart scheme that calls the accelerated method repeatedly. As an application, we consider sparsity-inducing constrained optimization with a separable convex objective and a strongly convex loss constraint. In addition to achieving fast convergence, we show that the restarted method can effectively identify the sparsity pattern (active set) of the optimal solution in finitely many steps. To the best of our knowledge, this is the first active-set identification result for sparsity-inducing constrained optimization.
We study a generalized class of linear programs (GLP) in a large-scale setting, which includes a possibly nonsmooth simple convex regularizer and simple convex set constraints. By reformulating (GLP) as an equivalent convex-concave min-max problem, we show that the linear structure in the problem can be exploited to design an efficient, scalable first-order algorithm, to which we give the name \emph{Coordinate Linear Variance Reduction} (\textsc{clvr}; pronounced ``clever''). \textsc{clvr} is an incremental coordinate method with implicit variance reduction that outputs an \emph{affine combination} of the dual variable iterates. \textsc{clvr} yields improved complexity results for (GLP) that depend on the maximum row norm of the linear constraint matrix rather than its spectral norm. When the regularizer and the constraints are separable, \textsc{clvr} admits an efficient lazy update strategy that makes its complexity bounds scale with the number of nonzero elements of the linear constraint matrix in (GLP) rather than the matrix dimensions. We show that distributionally robust optimization (DRO) problems with ambiguity sets based on both $f$-divergences and Wasserstein metrics can be reformulated as (GLP) by introducing sparsely connected auxiliary variables. We complement our theoretical guarantees with numerical experiments that verify our algorithm's practical effectiveness, in terms of both wall-clock time and the number of data passes.
We propose a unified framework, based on the prediction-correction paradigm, for time-varying convex optimization in both the primal and dual spaces. In this framework, a continuously varying optimization problem is sampled at fixed intervals, and each sampled problem is approximately solved by a primal or dual correction step. The solution method is warm-started by the output of a prediction step, which approximately solves the future problem using past information. Prediction approaches are studied and compared under different sets of assumptions. Examples of algorithms covered by this framework are time-varying versions of the gradient method, splitting methods, and the celebrated alternating direction method of multipliers (ADMM).
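The prediction-correction template above can be sketched on a scalar toy problem. The sketch below is one instance under strong assumptions: the objective is $f(x;t) = \tfrac12(x - c(t))^2$, the prediction is a simple finite-difference extrapolation of the observed drift (one of several predictors the framework admits), and the correction is a few plain gradient steps on the newly sampled objective. All constants are illustrative.

```python
import math

def prediction_correction(c, h=0.1, steps=60, corr_iters=3, eta=0.5):
    """Track the minimizer of f(x; t) = 0.5 * (x - c(t))^2 sampled every h."""
    x = c(0.0)                      # start at the initial minimizer
    err = []
    for k in range(1, steps + 1):
        t = k * h
        # prediction: shift x by the drift observed over the last interval
        if k >= 2:
            x += c(t - h) - c(t - 2 * h)
        # correction: a few gradient steps on the newly revealed objective
        for _ in range(corr_iters):
            x -= eta * (x - c(t))
        err.append(abs(x - c(t)))
    return err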
Many real-world problems not only have complicated nonconvex functional constraints but also use a large number of data points. This motivates the design of efficient stochastic methods on finite-sum or expectation constrained problems. In this paper, we design and analyze stochastic inexact augmented Lagrangian methods (Stoc-iALM) to solve problems involving a nonconvex composite (i.e. smooth+nonsmooth) objective and nonconvex smooth functional constraints. We adopt the standard iALM framework and design a subroutine by using the momentum-based variance-reduced proximal stochastic gradient method (PStorm) and a postprocessing step. Under certain regularity conditions (assumed also in existing works), to reach an $\varepsilon$-KKT point in expectation, we establish an oracle complexity result of $O(\varepsilon^{-5})$, which is better than the best-known $O(\varepsilon^{-6})$ result. Numerical experiments on the fairness constrained problem and the Neyman-Pearson classification problem with real data demonstrate that our proposed method outperforms an existing method with the previously best-known complexity result.
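The outer structure of an inexact ALM can be sketched as below. This is a deterministic simplification for equality constraints only: the paper's Stoc-iALM targets nonconvex composite objectives with functional constraints and replaces the inner solve with the momentum variance-reduced stochastic subroutine (PStorm) plus a postprocessing step, whereas here the inner solve is plain gradient descent for illustration. All parameter values are illustrative.

```python
import numpy as np

def ialm(grad_f, c, jac_c, x0, rho=10.0, outer=20, inner=200, eta=0.01):
    """Inexact augmented Lagrangian sketch for min f(x) s.t. c(x) = 0."""
    x = x0.copy()
    lam = np.zeros(c(x0).shape[0])
    for _ in range(outer):
        # inner loop: approximately minimize the augmented Lagrangian
        #   f(x) + lam^T c(x) + (rho/2) ||c(x)||^2
        for _ in range(inner):
            g = grad_f(x) + jac_c(x).T @ (lam + rho * c(x))
            x = x - eta * g
        lam = lam + rho * c(x)   # multiplier (dual) update
    return x, lam
```

On the toy problem $\min \tfrac12\|x\|^2$ s.t. $x_1 + x_2 = 1$, the method recovers the projection $x^* = (0.5, 0.5)$ with multiplier $\lambda^* = -0.5$.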
This paper considers multi-block nonsmooth nonconvex optimization problems with nonlinear coupling constraints. Developing the ideas of an information zone and an adaptive regime proposed in [J. Bolte, S. Sabach and M. Teboulle, Nonconvex Lagrangian-based optimization: Monitoring schemes and global convergence, Mathematics of Operations Research, 43:1210--1232, 2018], we propose a multi-block alternating direction method of multipliers for solving this problem. We specify the update of the primal variables by adopting a majorization-minimization procedure in each block update. An independent convergence analysis proves subsequential as well as global convergence of the generated sequence to a critical point of the augmented Lagrangian. We also establish the iteration complexity and provide preliminary numerical results for the proposed algorithm.
This paper considers the problem of minimizing a convex expectation function subject to a set of inequality convex expectation constraints. We present a computable stochastic approximation type algorithm, namely the stochastic linearized proximal method of multipliers, to solve this convex stochastic optimization problem. This algorithm can be roughly viewed as a hybrid of stochastic approximation and the traditional proximal method of multipliers. Under mild conditions, we show that this algorithm exhibits an $O(K^{-1/2})$ expected convergence rate for both objective reduction and constraint violation if the parameters in the algorithm are chosen properly, where $K$ denotes the number of iterations. Moreover, we show that, with high probability, the algorithm has an $O(\log(K)K^{-1/2})$ constraint violation bound and an $O(\log^{3/2}(K)K^{-1/2})$ objective bound. Some preliminary numerical results demonstrate the performance of the proposed algorithm.
Difference-of-Convex (DC) minimization, referring to the problem of minimizing the difference of two convex functions, has found rich applications in statistical learning and has been studied extensively for decades. However, existing methods are primarily based on multi-stage convex relaxation, leading only to weak optimality of critical points. This paper proposes a coordinate descent method for minimizing a class of DC functions based on sequential nonconvex approximation. Our approach iteratively solves a nonconvex one-dimensional subproblem globally, and it is guaranteed to converge to a coordinate-wise stationary point. We prove that this new optimality condition is always stronger than the standard critical point condition and the directional point condition under a mild \textit{locally bounded nonconvexity assumption}. For comparison, we also include a naive variant of coordinate descent methods based on sequential convex approximation in our study. When the objective function satisfies a \textit{globally bounded nonconvexity assumption} and the \textit{Luo-Tseng error bound assumption}, coordinate descent methods achieve a \textit{Q-linear} convergence rate. Also, for many applications of interest, we show that the nonconvex one-dimensional subproblem can be computed exactly and efficiently using a breakpoint searching method. Finally, we have conducted extensive experiments on several statistical learning tasks to show the superiority of our approach. Keywords: Coordinate Descent, DC Minimization, DC Programming, Difference-of-Convex Programs, Nonconvex Optimization, Sparse Optimization, Binary Optimization.
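The idea of solving each one-dimensional subproblem globally by enumerating breakpoint candidates can be sketched on a toy DC objective. The example below uses $\tfrac12\|Ax-b\|^2 + \lambda\|x\|_1 - \mu\|x\|_1$, so the effective coordinate penalty is $(\lambda-\mu)|t|$ with a single breakpoint at $t=0$; on each sign branch the subproblem is a convex quadratic, so comparing the branch minimizers with the breakpoint gives the one-dimensional global minimum. This is an illustrative instance, not the paper's general breakpoint search.

```python
import numpy as np

def cd_dc(A, b, lam, mu, iters=100):
    """Coordinate descent with exact 1-D solves for a toy DC objective."""
    n = A.shape[1]
    x = np.zeros(n)
    col_sq = np.sum(A * A, axis=0)       # squared column norms
    ceff = lam - mu                      # effective l1 weight (may be < 0)
    for _ in range(iters):
        for i in range(n):
            r = b - A @ x + A[:, i] * x[i]   # residual excluding coordinate i
            rho = A[:, i] @ r
            a = col_sq[i]
            # candidates: the breakpoint t = 0 and the two one-sided minimizers
            cands = [0.0,
                     max((rho - ceff) / a, 0.0),   # t >= 0 branch
                     min((rho + ceff) / a, 0.0)]   # t <= 0 branch
            vals = [0.5 * a * t * t - rho * t + ceff * abs(t) for t in cands]
            x[i] = cands[int(np.argmin(vals))]
    return x
```

With $\lambda \ge \mu$ this reduces to ordinary lasso coordinate descent (soft-thresholding); the candidate enumeration is what continues to deliver the one-dimensional global minimizer when the effective penalty turns concave.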
Stochastic model-based methods have received increasing attention recently for their appealing robustness and provable efficiency guarantees. We make two important extensions to improve model-based methods for stochastic weakly convex optimization. First, we propose minibatch model-based methods that use a set of samples to approximate the model function in each iteration. For the first time, we show that stochastic algorithms achieve linear speedup over the minibatch size even for nonsmooth and nonconvex (particularly, weakly convex) problems. To this end, we develop a novel sensitivity analysis of the proximal mapping involved in each algorithm iteration; our analysis appears to be of independent interest in more general settings. Second, motivated by the success of momentum stochastic gradient descent, we propose a new stochastic extrapolated model-based method, greatly extending the classic Polyak momentum technique to a wider class of stochastic algorithms for weakly convex optimization. A convergence rate is established under a rather flexible range of extrapolation terms. While our main focus is weakly convex optimization, we also extend our work to convex optimization: we apply the minibatch and extrapolated model-based methods to stochastic convex optimization, for which we provide new complexity bounds and promising linear speedup in the minibatch size. Moreover, an accelerated model-based method based on Nesterov momentum is presented, for which we establish an optimal complexity bound for reaching optimality.
Modern statistical applications often involve minimizing an objective function that may be nonsmooth and/or nonconvex. This paper focuses on a broad Bregman-surrogate algorithm framework that includes local linear approximation, mirror descent, iterative thresholding, and DC programming, among many other instances. The recharacterization via generalized Bregman functions enables us to construct suitable error measures and establish global convergence rates for nonconvex and nonsmooth objectives in possibly high dimensions. For sparse learning problems, under some regularity conditions, the obtained estimators, as fixed points of the surrogates, though not necessarily local minimizers, enjoy provable statistical guarantees, and the iterate sequence can be shown to approach the statistical truth within the desired accuracy geometrically fast. The paper also studies how to design adaptive momentum-based accelerations without assuming convexity or smoothness, by carefully controlling step sizes and relaxation parameters.
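Iterative thresholding, one of the instances of the surrogate framework listed above, can be sketched concretely. The code below is standard ISTA for the lasso, read as surrogate minimization: each step exactly minimizes a quadratic (Lipschitz-gradient) surrogate of the smooth loss plus the $\ell_1$ penalty. Problem data and the penalty level are illustrative.

```python
import numpy as np

def ista(A, b, lam, iters=500):
    """Iterative soft-thresholding for min 0.5*||Ax - b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = A.T @ (A @ x - b)              # gradient of the smooth loss
        z = x - g / L                      # minimize the quadratic surrogate...
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # ...plus the l1 term
    return x
```

With $A = I$ the method reduces, in one step, to coordinatewise soft-thresholding of $b$, which makes the surrogate view easy to check by hand.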
This paper studies the communication complexity of risk averse optimization over a network. The problem generalizes the well-studied risk-neutral finite-sum distributed optimization problem and its importance stems from the need to handle risk in an uncertain environment. For algorithms in the literature, there exists a gap in communication complexities for solving risk-averse and risk-neutral problems. We propose two distributed algorithms, namely the distributed risk averse optimization (DRAO) method and the distributed risk averse optimization with sliding (DRAO-S) method, to close the gap. Specifically, the DRAO method achieves the optimal communication complexity by assuming a certain saddle point subproblem can be easily solved in the server node. The DRAO-S method removes the strong assumption by introducing a novel saddle point sliding subroutine which only requires the projection over the ambiguity set $P$. We observe that the number of $P$-projections performed by DRAO-S is optimal. Moreover, we develop matching lower complexity bounds to show that communication complexities of both DRAO and DRAO-S are not improvable. Numerical experiments are conducted to demonstrate the encouraging empirical performance of the DRAO-S method.
Nonconvex optimization is central in solving many machine learning problems, in which block-wise structure is commonly encountered. In this work, we propose cyclic block coordinate methods for nonconvex optimization problems with non-asymptotic gradient norm guarantees. Our convergence analysis is based on a gradient Lipschitz condition with respect to a Mahalanobis norm, inspired by a recent progress on cyclic block coordinate methods. In deterministic settings, our convergence guarantee matches the guarantee of (full-gradient) gradient descent, but with the gradient Lipschitz constant being defined w.r.t.~the Mahalanobis norm. In stochastic settings, we use recursive variance reduction to decrease the per-iteration cost and match the arithmetic operation complexity of current optimal stochastic full-gradient methods, with a unified analysis for both finite-sum and infinite-sum cases. We further prove the faster, linear convergence of our methods when a Polyak-{\L}ojasiewicz (P{\L}) condition holds for the objective function. To the best of our knowledge, our work is the first to provide variance-reduced convergence guarantees for a cyclic block coordinate method. Our experimental results demonstrate the efficacy of the proposed variance-reduced cyclic scheme in training deep neural nets.
Stochastic compositional optimization (SCO) has attracted considerable attention because of its broad applicability to important real-world problems. However, existing works on SCO assume that the projection in the solution update is simple, which fails to hold for problems with constraints in expectation form, such as empirical conditional value-at-risk constraints. We study a novel model that incorporates single-level expected-value constraints and two-level compositional constraints into the current SCO framework. Our model can be applied widely to data-driven optimization and risk management, including risk-averse optimization and high-moment portfolio selection, and can handle multiple constraints. We further propose a class of primal-dual algorithms that generate sequences converging to the optimal solution at a rate of $\mathcal{O}(\frac{1}{\sqrt{N}})$ under both single-level expected-value and two-level compositional constraints, where $N$ is the iteration counter, establishing benchmarks for expected-value constrained SCO.
In this paper, we introduce TITAN, a novel inertial block majorization-minimization framework for nonsmooth nonconvex optimization problems. To our knowledge, TITAN is the first framework of block coordinate update methods that relies on the majorization-minimization framework while embedding inertial force into each step of the block updates. The inertial force is obtained via an extrapolation operator that subsumes heavy-ball and Nesterov-type accelerations for block proximal gradient methods as special cases. By choosing various surrogate functions, such as proximal, Lipschitz gradient, Bregman, quadratic, and composite surrogates, and by varying the extrapolation operator, TITAN produces a rich set of inertial block coordinate update methods. We study the subsequential convergence as well as the global convergence of the sequence generated by TITAN. We illustrate the effectiveness of TITAN on two important machine learning problems, namely sparse nonnegative matrix factorization and matrix completion.
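The special case obtained from a Lipschitz-gradient surrogate with Nesterov-type extrapolation can be sketched as an inertial block proximal-gradient loop. This is a minimal illustration of that one instance, not the general TITAN framework: the extrapolation weight is a fixed constant, there is no nonsmooth block term (a prox step would follow the gradient step), and all names are illustrative.

```python
import numpy as np

def inertial_block_pg(grad_block, x0, steps, beta=0.5, iters=200):
    """Cyclic block updates: extrapolate each block, then take a gradient step."""
    x = [b.copy() for b in x0]
    prev = [b.copy() for b in x0]      # previous iterate of each block
    for _ in range(iters):
        for i in range(len(x)):
            # inertial (extrapolated) point for block i
            y = x[i] + beta * (x[i] - prev[i])
            prev[i] = x[i].copy()
            # gradient step at the extrapolated point
            x[i] = y - steps[i] * grad_block(i, x, y)
            # (a prox step on a nonsmooth block term would go here)
    return x
```

On a separable quadratic (each block pulled toward its own target), the inertial iteration contracts to the blockwise minimizers, which makes the extrapolation bookkeeping easy to verify.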
There has been recent interest in first-order methods for linear programming (LP). In this paper, we propose a stochastic algorithm using variance reduction and restarts for solving sharp primal-dual problems such as LP. We show that the proposed stochastic method exhibits a linear convergence rate on sharp instances with high probability, which improves the complexity of existing deterministic and stochastic algorithms. In addition, we propose an efficient coordinate-based stochastic oracle for unconstrained bilinear problems, which has $\mathcal{O}(1)$ per-iteration cost and improves the total flop count required to reach a certain accuracy.
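The deterministic template that such methods build on is the primal-dual hybrid gradient (PDHG) iteration for the LP saddle-point form $\min_{x \ge 0} \max_y\, c^\top x + y^\top(Ax - b)$; the paper's contribution adds stochastic variance-reduced coordinate updates and restarts on top of this. The sketch below is only the deterministic template, with illustrative step sizes (chosen so that $\tau\sigma\|A\|^2 < 1$).

```python
import numpy as np

def pdhg_lp(c, A, b, tau, sigma, iters=5000):
    """PDHG for min_{x>=0} max_y c^T x + y^T (A x - b)."""
    x = np.zeros(A.shape[1])
    y = np.zeros(A.shape[0])
    for _ in range(iters):
        # primal: gradient step on c + A^T y, projected onto x >= 0
        x_new = np.maximum(x - tau * (c + A.T @ y), 0.0)
        # dual: gradient ascent at the extrapolated primal point 2*x_new - x
        y = y + sigma * (A @ (2 * x_new - x) - b)
        x = x_new
    return x, y
```

On the toy LP $\min x_1 + 2x_2$ s.t. $x_1 + x_2 = 1$, $x \ge 0$, the iteration spirals into the optimal vertex $x^* = (1, 0)$ with dual solution $y^* = -1$.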