This paper focuses on computing the Euclidean projection of a vector onto the $\ell_p$ ball with $p \in (0,1)$. This problem is a core building block in statistical machine learning and signal processing tasks because of its ability to promote sparsity. However, efficient numerical algorithms for finding the projection are still lacking, especially for large-scale optimization. To meet this challenge, we first derive first-order necessary optimality conditions for this problem. Based on this characterization, we develop a novel method for computing a stationary point by solving a sequence of projections onto reweighted $\ell_1$ balls. The method is simple to implement and computationally efficient. Moreover, the proposed algorithm is shown to converge to a unique limit under mild conditions, with a worst-case $O(1/\sqrt{k})$ convergence rate. Numerical experiments demonstrate the efficiency of our proposed algorithm.
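The inner step of such a scheme is a projection onto a weighted $\ell_1$ ball. The sketch below is a minimal illustration under stated assumptions, not the authors' routine: the weights come from a smoothed linearization $w_i = p(|x_i|+\epsilon)^{p-1}$, each weighted projection is solved by bisection on the soft-threshold parameter, and the reweighted radius is one simple majorization-based choice.

```python
import numpy as np

def project_weighted_l1_ball(v, w, radius, n_bisect=100):
    """Euclidean projection onto {x : sum_i w_i |x_i| <= radius}, with w_i > 0.
    The KKT solution is a weighted soft-threshold; theta is found by bisection."""
    if np.sum(w * np.abs(v)) <= radius:
        return v.copy()
    def excess(theta):
        return np.sum(w * np.maximum(np.abs(v) - theta * w, 0.0)) - radius
    lo, hi = 0.0, np.max(np.abs(v) / w)
    for _ in range(n_bisect):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if excess(mid) > 0 else (lo, mid)
    theta = 0.5 * (lo + hi)
    return np.sign(v) * np.maximum(np.abs(v) - theta * w, 0.0)

def project_lp_ball_sketch(v, p=0.5, radius=1.0, n_outer=30, eps=1e-6):
    """Hypothetical outer loop: project onto a sequence of reweighted l1 balls
    obtained by linearizing t -> (t + eps)^p at the current iterate."""
    if np.sum(np.abs(v) ** p) <= radius:
        return v.copy()
    # start from a feasible point by radial scaling (a simple heuristic)
    x = v * (radius / np.sum(np.abs(v) ** p)) ** (1.0 / p)
    for _ in range(n_outer):
        w = p * (np.abs(x) + eps) ** (p - 1.0)            # linearization weights
        # radius of the linearized (inner-approximating) weighted l1 constraint
        r = radius - np.sum((np.abs(x) + eps) ** p) + np.sum(w * np.abs(x))
        x = project_weighted_l1_ball(v, w, max(r, 0.0))
    return x
```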
Nonconvex-concave min-max problems arise in many machine learning applications, including minimizing the pointwise maximum of a finite collection of nonconvex functions and robust adversarial training of neural networks. A popular approach for this problem is the gradient descent-ascent (GDA) algorithm, which, unfortunately, can exhibit oscillation in the nonconvex case. In this paper, we introduce a "smoothing" scheme that can be combined with GDA to stabilize the oscillation and ensure convergence to a stationary solution. We prove that the stabilized GDA algorithm achieves an $O(1/\epsilon^2)$ iteration complexity for minimizing the pointwise maximum of a finite collection of nonconvex functions. Moreover, the smoothed GDA algorithm achieves an $O(1/\epsilon^4)$ iteration complexity for general nonconvex-concave problems. Extensions of the stabilized GDA algorithm to the multi-block case are also presented. To the best of our knowledge, this is the first algorithm to achieve $O(1/\epsilon^2)$ for this class of nonconvex-concave problems. We illustrate the practical efficiency of the stabilized GDA algorithm in robust training.
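As a rough illustration of how such a smoothing term can be wired into GDA, the sketch below adds an auxiliary anchor variable z and updates it by exponential averaging; the step sizes, smoothing weight c, and averaging parameter beta are placeholders, not the constants from the analysis.

```python
def smoothed_gda(grad_x, grad_y, x0, y0, lr_x=1e-2, lr_y=1e-2,
                 c=1.0, beta=0.1, n_iter=1000):
    """Sketch of a smoothed gradient descent-ascent loop (numpy arrays assumed).
    The primal step descends f(x, y) + (c/2)||x - z||^2, where the anchor z
    slowly tracks x; this proximal-style term is the stabilizing "smoothing"."""
    x, y, z = x0.copy(), y0.copy(), x0.copy()
    for _ in range(n_iter):
        x = x - lr_x * (grad_x(x, y) + c * (x - z))  # descent on the smoothed primal
        y = y + lr_y * grad_y(x, y)                  # ascent on the max variable
        z = z + beta * (x - z)                       # move the anchor toward x
    return x, y
```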
Recovery of sparse data is at the core of many applications in machine learning and signal processing. While such problems can be tackled using $\ell_1$-regularization as in the LASSO estimator and in basis pursuit, specialized algorithms are typically required to solve the corresponding high-dimensional non-smooth optimization for large instances. Iteratively Reweighted Least Squares (IRLS) is a widely used algorithm for this purpose due to its excellent numerical performance. However, while existing theory guarantees convergence of this algorithm to the minimizer, it does not provide a global convergence rate. In this paper, we prove that a variant of IRLS converges with a global linear rate to a sparse solution, i.e., with a linear error decrease occurring immediately, provided the measurements satisfy the usual null space property assumption. We support our theory with numerical experiments showing that our linear rate captures the correct dimension dependence. We anticipate that our theoretical findings will lead to new insights for many other use cases of the IRLS algorithm, such as in low-rank matrix recovery.
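For concreteness, a textbook-style IRLS loop for the basis-pursuit formulation $\min \|x\|_1$ s.t. $Ax=b$ is sketched below; it uses a fixed smoothing parameter rather than the adaptive rule of the specific variant analyzed in the paper.

```python
import numpy as np

def irls_basis_pursuit(A, b, n_iter=50, eps=1e-8):
    """Iteratively Reweighted Least Squares sketch for min ||x||_1 s.t. Ax = b.
    Each iteration solves a weighted least-squares problem in closed form:
    x = D A^T (A D A^T)^{-1} b with D = diag(|x_i| + eps)."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]       # minimum-norm initialization
    for _ in range(n_iter):
        D = np.diag(np.abs(x) + eps)               # inverse of the current weights
        x = D @ A.T @ np.linalg.solve(A @ D @ A.T, b)
    return x

# tiny usage example on a random sparse-recovery instance
rng = np.random.default_rng(0)
n, m, s = 200, 80, 5
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
x_hat = irls_basis_pursuit(A, A @ x_true)
print(np.linalg.norm(x_hat - x_true))
```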
In this paper, we consider finding an approximate second-order stationary point (SOSP) of nonconvex conic optimization, which minimizes a twice differentiable function over the intersection of an affine subspace and a convex cone. In particular, we propose a Newton-conjugate gradient (Newton-CG) based barrier method for finding an $(\epsilon,\sqrt{\epsilon})$-SOSP of this problem. Our method is not only implementable, but also achieves an iteration complexity of $\mathcal{O}(\epsilon^{-3/2})$, which matches the best known iteration complexity of second-order methods for finding an $(\epsilon,\sqrt{\epsilon})$-SOSP of unconstrained nonconvex optimization. An operation complexity of $\widetilde{\mathcal{O}}(\epsilon^{-3/2}\min\{\dots\})$ is also established for our method.
Many recent problems in signal processing and machine learning, such as compressed sensing, image restoration, matrix/tensor recovery, and non-negative matrix factorization, can be cast as constrained optimization. Projected gradient descent is a simple yet efficient method for solving such constrained optimization problems. Local convergence analysis furthers our understanding of its asymptotic behavior near the solution, offering sharper bounds on the convergence rate compared to global convergence analysis. However, local guarantees often appear scattered across problem-specific areas of machine learning and signal processing. This manuscript presents a unified framework for the local convergence analysis of projected gradient descent in the context of constrained least squares. The proposed analysis offers insights into pivotal local convergence properties such as the conditions for linear convergence, the region of convergence, the exact asymptotic convergence rate, and the bound on the number of iterations needed to reach a certain level of accuracy. To demonstrate the applicability of the proposed approach, we present a recipe for the convergence analysis of PGD and demonstrate it via a beginning-to-end application on four fundamental problems, namely, linearly constrained least squares, sparse recovery, least squares with a unit norm constraint, and matrix completion.
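A generic projected gradient descent loop for constrained least squares, of the kind this analysis covers, might look like the following; the projection operator is supplied by the user (e.g., onto an $\ell_1$ ball for sparse recovery or onto the unit sphere for unit-norm least squares), and the constant step size $1/\|A\|_2^2$ is one standard choice.

```python
import numpy as np

def pgd_least_squares(A, b, project, x0=None, n_iter=500, step=None):
    """Projected gradient descent sketch for min 0.5 ||Ax - b||^2 s.t. x in C,
    where `project` is the Euclidean projection onto C."""
    x = np.zeros(A.shape[1]) if x0 is None else x0.copy()
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2       # 1/L with L = ||A||_2^2
    for _ in range(n_iter):
        x = project(x - step * A.T @ (A @ x - b))    # gradient step, then project
    return x

# example projection: the unit Euclidean sphere (a nonconvex constraint set)
unit_norm = lambda v: v / max(np.linalg.norm(v), 1e-12)
```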
Stochastic majorization-minimization (SMM) is an online extension of the classical principle of majorization-minimization, which consists of sampling i.i.d. data points from a fixed data distribution and minimizing a recursively defined majorizing surrogate of the objective function. In this paper, we introduce stochastic block majorization-minimization, where the surrogates can now be only block multi-convex and a single block is optimized at a time within a diminishing radius. Relaxing the standard strong convexity requirement for surrogates in SMM, our framework provides wider applicability, including online CANDECOMP/PARAFAC (CP) dictionary learning, and yields greater computational efficiency especially when the problem dimension is large. We provide an extensive convergence analysis of the proposed algorithm, which we derive under possibly dependent data streams, relaxing the standard i.i.d. assumption on data samples. We show that the proposed algorithm converges almost surely to the set of stationary points of a nonconvex objective under constraints at a rate $O((\log n)^{1+\epsilon}/n^{1/2})$ for the empirical loss function and $O((\log n)^{1+\epsilon}/n^{1/4})$ for the expected loss function, where $n$ denotes the number of data samples processed. Under some additional assumptions, the latter convergence rate can be improved to $O((\log n)^{1+\epsilon}/n^{1/2})$. Our results provide the first convergence rate bounds for various online matrix and tensor decomposition algorithms under a general Markovian data setting.
Convex function constrained optimization has received growing research interest lately. For a special convex problem which has strongly convex function constraints, we develop a new accelerated primal-dual first-order method that obtains an $\mathcal{O}(1/\sqrt{\epsilon})$ complexity bound, improving the $\mathcal{O}(1/\epsilon)$ result for the state-of-the-art first-order methods. The key ingredient to our development is some novel techniques to progressively estimate the strong convexity of the Lagrangian function, which enables adaptive step-size selection and faster convergence performance. In addition, we show that the complexity is further improvable in terms of the dependence on some problem parameter, via a restart scheme that calls the accelerated method repeatedly. As an application, we consider sparsity-inducing constrained optimization which has a separable convex objective and a strongly convex loss constraint. In addition to achieving fast convergence, we show that the restarted method can effectively identify the sparsity pattern (active-set) of the optimal solution in finite steps. To the best of our knowledge, this is the first active-set identification result for sparsity-inducing constrained optimization.
Modern statistical applications often involve minimizing an objective function that may be nonsmooth and/or nonconvex. This paper focuses on a broad Bregman-surrogate algorithm framework that includes local linear approximation, mirror descent, iterative thresholding, DC programming, and many other instances. The recharacterization via generalized Bregman functions enables us to construct suitable error measures and establish global convergence rates for nonconvex and nonsmooth objectives in possibly high dimensions. For sparse learning problems, under some regularity conditions, the obtained estimators, as fixed points of the surrogates, though not necessarily local minimizers, enjoy provable statistical guarantees, and the sequence of iterates can be shown to approach the statistical truth within the desired accuracy geometrically fast. The paper also studies how to design adaptive momentum-based accelerations without assuming convexity or smoothness, by carefully controlling the stepsize and relaxation parameters.
We introduce a Dimension-Reduced Second-Order Method (DRSOM) for convex and nonconvex unconstrained optimization. Under a trust-region-like framework, our method preserves the convergence of second-order methods while using only Hessian-vector products along two directions. Moreover, the computational overhead remains comparable to that of first-order methods such as gradient descent. We show that the method has a complexity of $O(\epsilon^{-3/2})$ to satisfy the first-order and second-order conditions in the subspace. The applicability and performance of DRSOM are exhibited by various computational experiments on logistic regression, $L_2$-$L_p$ minimization, sensor network localization, and neural network training. For neural networks, our preliminary implementation seems to gain computational advantages over state-of-the-art first-order methods, including SGD and ADAM, in terms of training accuracy and iteration complexity.
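A highly simplified sketch of one dimension-reduced step is given below: it builds the 2x2 reduced model in the subspace spanned by the negative gradient and the momentum direction using two Hessian-vector products, and solves a regularized reduced system in place of the paper's trust-region subproblem; `hvp(x, d)` is an assumed user-supplied Hessian-vector product oracle.

```python
import numpy as np

def drsom_step(grad, hvp, x, x_prev, reg=1e-8):
    """One dimension-reduced second-order step (hypothetical simplification):
    minimize the quadratic model restricted to span{-grad, momentum}."""
    g = grad(x)
    d1, d2 = -g, x - x_prev                           # the two search directions
    D = np.column_stack([d1, d2])                     # n x 2 subspace basis
    HD = np.column_stack([hvp(x, d1), hvp(x, d2)])    # two Hessian-vector products
    Q = D.T @ HD                                      # 2x2 reduced Hessian
    c = D.T @ g                                       # reduced gradient
    alpha = np.linalg.solve(Q + reg * np.eye(2), -c)  # regularized reduced Newton step
    return x + D @ alpha
# a full method would wrap this step in a loop with a trust-region radius update
```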
We apply a stochastic sequential quadratic programming (StoSQP) algorithm to solve constrained nonlinear optimization problems, where the objective is stochastic and the constraints are deterministic. We study a fully stochastic setting, where only a single sample is available in each iteration to estimate the gradient and Hessian of the objective. We allow StoSQP to select a random stepsize $\bar{\alpha}_t$ adaptively, such that $\beta_t \leq \bar{\alpha}_t \leq \beta_t + \chi_t$, where $\beta_t$ and $\chi_t = o(\beta_t)$ are prespecified deterministic sequences. We also allow StoSQP to solve the Newton system inexactly via randomized iterative solvers (e.g., with the sketch-and-project method), and we do not require the approximation error of the inexact Newton direction to vanish. For this general StoSQP framework, we establish the asymptotic convergence rate of its last iterate, with the worst-case iteration complexity as a byproduct, and we perform statistical inference. In particular, with properly decaying $\beta_t, \chi_t$, we show that: (i) the StoSQP scheme takes at most $O(1/\epsilon^4)$ iterations to achieve $\epsilon$-stationarity; (ii) almost surely, $\|(x_t - x^\star, \lambda_t - \lambda^\star)\| = O(\sqrt{\beta_t \log(1/\beta_t)}) + O(\chi_t/\beta_t)$, where $(x_t, \lambda_t)$ is the primal-dual StoSQP iterate; (iii) the sequence $1/\sqrt{\beta_t}\cdot(x_t - x^\star, \lambda_t - \lambda^\star)$ converges to a mean-zero Gaussian distribution with a nontrivial covariance matrix. Furthermore, we establish a Berry-Esseen bound for $(x_t, \lambda_t)$ to measure quantitatively the convergence of its distribution function. We also provide a practical estimator for the covariance matrix, from which confidence intervals for $(x^\star, \lambda^\star)$ can be constructed using the iterates $\{(x_t, \lambda_t)\}_t$. Our theorems are validated on nonlinear problems from the CUTEst test set.
Given a set of dissimilarity measurements amongst data points, determining which metric representation is most "consistent" with the input measurements, or which best captures the relevant geometric features of the data, is a key step in many machine learning algorithms. Existing methods are restricted to specific kinds of metrics or small problem sizes because of the large number of metric constraints in such problems. In this paper, we provide an active set algorithm, Project and Forget, that uses Bregman projections to solve metric constrained problems with many (possibly exponentially many) inequality constraints. We provide a theoretical analysis of \textsc{Project and Forget} and prove that our algorithm converges to the global optimal solution and that the $L_2$ distance of the current iterate to the optimal solution decays asymptotically at an exponential rate. We demonstrate that, using our method, we can solve large problem instances of three types of metric constrained problems: general weighted correlation clustering, metric nearness, and metric learning; in each case, outperforming the state-of-the-art methods with respect to CPU time and problem size.
We introduce a class of first-order methods for smooth constrained optimization that are based on an analogy to non-smooth dynamical systems. Two distinctive features of our approach are that (i) projections or optimizations over the entire feasible set are avoided, in stark contrast to projected gradient methods or the Frank-Wolfe method, and (ii) iterates are allowed to become infeasible, which differs from active set or feasible direction methods, where the descent motion stops as soon as a new constraint is encountered. The resulting algorithmic procedure is simple to implement even when constraints are nonlinear, and is suitable for large-scale constrained optimization problems in which the feasible set fails to have a simple structure. The key underlying idea is that constraints are expressed in terms of velocities instead of positions, which has the algorithmic consequence that optimizations over feasible sets at each iteration are replaced with optimizations over local, sparse convex approximations. In particular, this means that at each iteration only constraints that are violated are taken into account. The result is a simplified suite of algorithms and an expanded range of possible applications in machine learning.
We propose a trust-region stochastic sequential quadratic programming algorithm (TR-StoSQP) to solve nonlinear optimization problems with stochastic objectives and deterministic equality constraints. We consider a fully stochastic setting, where in each iteration a single sample is generated to estimate the objective gradient. The algorithm adaptively selects the trust-region radius and, compared to the existing line-search StoSQP schemes, allows us to employ indefinite Hessian matrices (i.e., Hessians without modification) in SQP subproblems. As a trust-region method for constrained optimization, our algorithm needs to address an infeasibility issue -- the linearized equality constraints and trust-region constraints might lead to infeasible SQP subproblems. In this regard, we propose an \textit{adaptive relaxation technique} to compute the trial step that consists of a normal step and a tangential step. To control the lengths of the two steps, we adaptively decompose the trust-region radius into two segments based on the proportions of the feasibility and optimality residuals to the full KKT residual. The normal step has a closed form, while the tangential step is solved from a trust-region subproblem, to which a solution ensuring the Cauchy reduction is sufficient for our study. We establish the global almost sure convergence guarantee for TR-StoSQP, and illustrate its empirical performance on both a subset of problems in the CUTEst test set and constrained logistic regression problems using data from the LIBSVM collection.
Difference-of-Convex (DC) minimization, referring to the problem of minimizing the difference of two convex functions, has found rich applications in statistical learning and has been studied extensively for decades. However, existing methods are primarily based on multi-stage convex relaxation, only leading to weak optimality of critical points. This paper proposes a coordinate descent method for minimizing a class of DC functions based on sequential nonconvex approximation. Our approach iteratively solves a nonconvex one-dimensional subproblem globally, and it is guaranteed to converge to a coordinate-wise stationary point. We prove that this new optimality condition is always stronger than the standard critical point condition and directional point condition under a mild \textit{locally bounded nonconvexity assumption}. For comparisons, we also include a naive variant of coordinate descent methods based on sequential convex approximation in our study. When the objective function satisfies a \textit{globally bounded nonconvexity assumption} and \textit{Luo-Tseng error bound assumption}, coordinate descent methods achieve a \textit{Q-linear} convergence rate. Also, for many applications of interest, we show that the nonconvex one-dimensional subproblem can be computed exactly and efficiently using a breakpoint searching method. Finally, we have conducted extensive experiments on several statistical learning tasks to show the superiority of our approach. Keywords: Coordinate Descent, DC Minimization, DC Programming, Difference-of-Convex Programs, Nonconvex Optimization, Sparse Optimization, Binary Optimization.
We consider convex optimization problems that are widely used as convex relaxations of low-rank matrix recovery problems. In particular, in several important problems, such as phase retrieval and robust PCA, the underlying assumption in many cases is that the optimal solution is rank-one. In this paper we consider a simple and natural sufficient condition on the objective under which the optimal solution to these relaxations is indeed unique and rank-one. Mainly, we show that under this condition, the standard Frank-Wolfe method with line search (i.e., without any parameter tuning), which only requires a single rank-one SVD computation per iteration, finds an $\epsilon$-approximate solution in only $O(\log(1/\epsilon))$ iterations (as opposed to the previously best known $O(1/\epsilon)$), even though the objective is not strongly convex. We consider several variants of the basic method with improved complexities, as well as an extension motivated by robust PCA, and finally, an extension to nonsmooth problems.
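To make the single rank-one SVD per iteration concrete, here is a bare-bones Frank-Wolfe loop over a nuclear-norm ball; it uses the classic $2/(t+2)$ step size instead of the line search assumed in the result, and a full SVD where a Lanczos-type top-singular-pair solver would be used at scale.

```python
import numpy as np

def frank_wolfe_nuclear_ball(grad, X0, tau, n_iter=200):
    """Frank-Wolfe sketch over {X : ||X||_* <= tau}.
    The linear minimization oracle only needs the top singular pair of grad(X)."""
    X = X0.copy()
    for t in range(n_iter):
        G = grad(X)
        U, _, Vt = np.linalg.svd(G, full_matrices=False)
        S = -tau * np.outer(U[:, 0], Vt[0, :])   # extreme point minimizing <G, S>
        gamma = 2.0 / (t + 2.0)                  # the paper analyzes exact line search
        X = (1.0 - gamma) * X + gamma * S
    return X
```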
We study the random reshuffling (RR) method for smooth nonconvex optimization problems with a finite-sum structure. Although this method is widely utilized in practice, e.g., in the training of neural networks, its convergence behavior has only been understood in several limited settings. In this paper, under the well-known Kurdyka-Lojasiewicz (KL) inequality, we establish strong limit-point convergence results for RR with appropriate diminishing step sizes; namely, the whole sequence of iterates generated by RR is convergent and converges to a single stationary point in an almost sure sense. In addition, we derive the corresponding rate of convergence, depending on the KL exponent and the suitably selected diminishing step sizes. When the KL exponent lies in $[0,\frac12]$, the convergence is at a rate of $\mathcal{O}(t^{-1})$, with $t$ counting the iteration number. When the KL exponent belongs to $(\frac12,1)$, our derived convergence rate is of the form $\mathcal{O}(t^{-q})$, with $q\in(0,1)$ depending on the KL exponent. The standard KL inequality-based convergence analysis framework only applies to algorithms with a certain descent property. We conduct a novel convergence analysis for the non-descent RR method with diminishing step sizes based on the KL inequality, which generalizes the standard KL framework. We summarize our main steps and core ideas in an informal analysis framework, which is of independent interest. As a direct application of this framework, we also establish similar strong limit-point convergence results for the reshuffled proximal point method.
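For reference, the random reshuffling loop analyzed here differs from plain SGD only in how indices are drawn: a fresh permutation of the $n$ component functions is used in every epoch, as in the sketch below (the step-size schedule is a placeholder for the diminishing sequences required by the theory).

```python
import numpy as np

def random_reshuffling(component_grads, x0, n_epochs=100,
                       step=lambda t: 1.0 / (t + 1), seed=0):
    """Random reshuffling (RR) sketch for min (1/n) sum_i f_i(x).
    Each epoch visits every component exactly once, in a fresh random order."""
    x = x0.copy()
    rng = np.random.default_rng(seed)
    n = len(component_grads)
    for t in range(n_epochs):
        alpha = step(t)                          # diminishing step size
        for i in rng.permutation(n):             # without-replacement sampling
            x = x - alpha * component_grads[i](x)
    return x
```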
This paper proposes a new variant of Frank-Wolfe (FW), called $k$FW. Standard FW suffers from slow convergence: iterates often zig-zag as the update direction oscillates around extreme points of the constraint set. The new variant, $k$FW, overcomes this problem by using two stronger subproblem oracles in each iteration. The first is a $k$ linear optimization oracle ($k$LOO) that computes the $k$ best update directions (rather than just one). The second is a $k$ direction search ($k$DS) that minimizes the objective over the constraint set represented by the $k$ best update directions and the previous iterate. When the problem solution admits a sparse representation, both oracles are easy to compute, and $k$FW converges quickly for smooth convex objectives and several interesting constraint sets: $k$FW achieves finite $\frac{4L_f^3D^4}{\gamma\delta^2}$ convergence on polytopes and group norm balls, and linear convergence on spectral and nuclear norm balls. Numerical experiments validate the effectiveness of $k$FW and demonstrate an order-of-magnitude speedup over existing approaches.
The affine rank minimization problem consists of finding a matrix of minimum rank that satisfies a given system of linear equality constraints. Such problems have appeared in the literature of a diverse set of fields including system identification and control, Euclidean embedding, and collaborative filtering. Although specific instances can often be solved with specialized algorithms, the general affine rank minimization problem is NP-hard, because it contains vector cardinality minimization as a special case. In this paper, we show that if a certain restricted isometry property holds for the linear transformation defining the constraints, the minimum rank solution can be recovered by solving a convex optimization problem, namely the minimization of the nuclear norm over the given affine space. We present several random ensembles of equations where the restricted isometry property holds with overwhelming probability, provided the codimension of the subspace is Ω(r(m + n) log mn), where m, n are the dimensions of the matrix, and r is its rank. The techniques used in our analysis have strong parallels in the compressed sensing framework. We discuss how affine rank minimization generalizes this pre-existing concept and outline a dictionary relating concepts from cardinality minimization to those of rank minimization. We also discuss several algorithmic approaches to solving the norm minimization relaxations, and illustrate our results with numerical examples.
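The convex relaxation described here, nuclear-norm minimization over an affine space, can be prototyped directly with a generic convex solver; the snippet below uses cvxpy (an assumed tool, not one mentioned by the authors) on a small random instance.

```python
import numpy as np
import cvxpy as cp

# small synthetic instance: recover a rank-r matrix M from p linear measurements
rng = np.random.default_rng(0)
m, n, r, p = 20, 20, 2, 300
M = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))   # low-rank ground truth
A_list = [rng.standard_normal((m, n)) for _ in range(p)]        # sensing matrices
b = np.array([np.sum(Ai * M) for Ai in A_list])                 # b_i = <A_i, M>

# min ||X||_*  s.t.  <A_i, X> = b_i  -- the nuclear-norm relaxation of rank minimization
X = cp.Variable((m, n))
constraints = [cp.sum(cp.multiply(Ai, X)) == bi for Ai, bi in zip(A_list, b)]
problem = cp.Problem(cp.Minimize(cp.normNuc(X)), constraints)
problem.solve()
print("relative recovery error:", np.linalg.norm(X.value - M) / np.linalg.norm(M))
```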
Nonsmooth optimization finds wide applications in many engineering fields. In this work, we propose to utilize the Randomized Coordinate Subgradient method (RCS) for solving both nonsmooth convex and nonsmooth nonconvex (nonsmooth weakly convex) optimization problems. At each iteration, RCS randomly selects one block coordinate, rather than all the coordinates, to update. Motivated by practical applications, we consider a linearly bounded subgradients assumption on the objective function, which is much more general than the Lipschitz continuity assumption. Under this general assumption, we conduct a thorough convergence analysis of RCS in both the convex and nonconvex cases and establish both expected convergence rates and almost sure asymptotic convergence results. To derive these convergence results, we establish a convergence lemma as well as the relationship between the global metric subregularity properties of a weakly convex function and its Moreau envelope, which are fundamental and of independent interest. Finally, we conduct several experiments to show the possible superiority of RCS over the subgradient method.
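A minimal coordinate-subgradient loop of this flavor is shown below; it picks one coordinate (a block of size one) uniformly at random per iteration and moves it along the corresponding component of a subgradient, with a generic diminishing step size standing in for the schedules used in the analysis.

```python
import numpy as np

def rcs(subgradient, x0, n_iter=10000,
        step=lambda t: 1.0 / np.sqrt(t + 1), seed=0):
    """Randomized coordinate subgradient sketch: at each iteration, update a
    single randomly chosen coordinate using that coordinate of a subgradient.
    (In practice one would query only the chosen block of the subgradient.)"""
    x = x0.copy()
    rng = np.random.default_rng(seed)
    for t in range(n_iter):
        i = rng.integers(x.size)                 # pick one coordinate uniformly
        x[i] -= step(t) * subgradient(x)[i]      # move only that coordinate
    return x

# example objective: least absolute deviations f(x) = ||Ax - b||_1,
# for which A.T @ np.sign(A @ x - b) is a valid subgradient
```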
This paper considers the problem of minimizing a convex expectation function subject to a set of inequality convex expectation constraints. We present a computable stochastic approximation type algorithm, namely the stochastic linearized proximal method of multipliers, to solve this convex stochastic optimization problem. The algorithm can be roughly viewed as a hybrid of stochastic approximation and the traditional proximal method of multipliers. Under mild conditions, we show that the algorithm exhibits an $O(K^{-1/2})$ expected convergence rate for both objective reduction and constraint violation if the parameters in the algorithm are properly chosen, where $K$ denotes the number of iterations. Moreover, we show that the algorithm has an $O(\log(K)K^{-1/2})$ constraint violation bound and an $O(\log^{3/2}(K)K^{-1/2})$ objective bound. Some preliminary numerical results demonstrate the performance of the proposed algorithm.