Best subset selection is regarded as the "gold standard" for many sparse learning problems. A variety of optimization techniques have been proposed to attack this non-convex and NP-hard problem. In this paper, we investigate the dual form of the $\ell_0$-regularized problem. An efficient primal-dual method is developed based on the structure of the primal and dual problems. By leveraging a dual range estimation together with an incremental strategy, our algorithm can reduce redundant computation and improve the solutions of best subset selection. Theoretical analysis and experiments on synthetic and real-world datasets validate the efficiency and statistical properties of the proposed solutions.
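To make the computational difficulty concrete, here is a minimal brute-force sketch (not the primal-dual method of the abstract) that enumerates every support of a given size and keeps the one with the smallest residual sum of squares; it is exact but its cost grows combinatorially with the number of features, which is exactly what motivates the methods above. The function name and interface are illustrative assumptions.

```python
import itertools
import numpy as np

def best_subset_bruteforce(X, y, k):
    """Exhaustive best subset selection for a tiny number of features p:
    try every support of size k and keep the one with the smallest
    residual sum of squares. Exact, but the cost grows as C(p, k)."""
    n, p = X.shape
    best_rss, best_support, best_coef = np.inf, None, None
    for support in itertools.combinations(range(p), k):
        XS = X[:, list(support)]
        coef, *_ = np.linalg.lstsq(XS, y, rcond=None)
        rss = np.sum((y - XS @ coef) ** 2)
        if rss < best_rss:
            best_rss, best_support, best_coef = rss, support, coef
    return best_support, best_coef

# Tiny usage example with synthetic data
rng = np.random.default_rng(0)
X = rng.standard_normal((40, 8))
y = X[:, [1, 4]] @ np.array([2.0, -3.0]) + 0.1 * rng.standard_normal(40)
print(best_subset_bruteforce(X, y, k=2)[0])   # expected support: (1, 4)
```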
Sparsity-regularized loss minimization problems play an important role in various fields, including machine learning, data mining, and modern statistics. Proximal gradient descent methods and coordinate descent methods are the most popular approaches for solving such minimization problems. Although existing methods can achieve implicit model identification, i.e., support set identification, within a finite number of iterations, they still suffer from huge computational cost and memory burden in high-dimensional settings. The reason is that the support set identification in these methods is implicit, so the low-complexity structure cannot be explicitly identified in practice; that is, they cannot drop the useless coefficients of irrelevant features via dimensionality reduction to achieve algorithmic acceleration. To address this challenge, we propose a novel accelerated doubly stochastic gradient descent (ADSGD) method for sparsity-regularized loss minimization problems, which reduces the number of block iterations by eliminating inactive coefficients during the optimization process and eventually achieves faster explicit model identification and improved algorithm efficiency. Theoretically, we first prove that ADSGD attains a linear convergence rate and lowers the overall computational complexity. More importantly, we prove that ADSGD achieves a linear rate of explicit model identification. Numerically, experimental results on benchmark datasets confirm the efficiency of our proposed method.
Sparsity-promoting regularizers are widely used to impose low-complexity structure (e.g. l1-norm for sparsity) on the regression coefficients of supervised learning. In the realm of deterministic optimization, the sequences generated by iterative algorithms (such as proximal gradient descent) exhibit "finite activity identification", namely, they can identify the low-complexity structure in a finite number of iterations. However, most online algorithms (such as proximal stochastic gradient descent) do not have this property, owing to the vanishing step-size and non-vanishing variance. In this paper, by combining with a screening rule, we show how to eliminate useless features from the iterates generated by online algorithms, and thereby enforce finite activity identification. One consequence is that when combined with any convergent online algorithm, sparsity properties imposed by the regularizer can be exploited for computational gains. Numerically, significant acceleration can be obtained.
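As a point of reference for the identification issue discussed above, here is a minimal proximal stochastic gradient descent sketch for the Lasso; with the decaying step size it requires, the iterates are soft-thresholded at every step yet rarely settle on an exactly sparse support, which is what a screening rule is meant to fix. The function names and step-size schedule are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_sgd_lasso(X, y, lam, n_epochs=20, step0=1.0):
    """Proximal SGD for 0.5*mean((y_i - x_i^T beta)^2) terms + lam*||beta||_1,
    processing one sample at a time with a vanishing step size."""
    n, p = X.shape
    beta = np.zeros(p)
    t = 0
    rng = np.random.default_rng(0)
    for _ in range(n_epochs):
        for i in rng.permutation(n):
            t += 1
            step = step0 / np.sqrt(t)                 # vanishing step size
            grad_i = (X[i] @ beta - y[i]) * X[i]      # stochastic gradient from one sample
            beta = soft_threshold(beta - step * grad_i, step * lam)
    return beta
```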
In this work, we consider algorithms for (nonlinear) regression problems with an $\ell_0$ penalty. Existing algorithms for $\ell_0$-based optimization problems typically run with a fixed step size, and choosing an appropriate step size depends on the restricted strong convexity and smoothness of the loss function, which are therefore hard to compute in practice. In the spirit of support detection and root finding \cite{HJK2020}, we propose a novel and efficient data-driven line search rule to adaptively determine the appropriate step size. We prove an $\ell_2$ error bound for the proposed algorithm without restrictive assumptions on the cost function. Extensive numerical comparisons with state-of-the-art algorithms on linear and logistic regression problems show the stability, effectiveness, and superiority of the proposed algorithm.
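For context, a minimal sketch of proximal gradient descent for $\ell_0$-penalized least squares (iterative hard thresholding) with a naive backtracking line search; the step-size rule here is the standard sufficient-decrease condition on the smooth part, not the data-driven rule proposed in the abstract, and all names are illustrative.

```python
import numpy as np

def hard_threshold(v, thr):
    out = v.copy()
    out[np.abs(out) <= thr] = 0.0
    return out

def iht_l0(X, y, lam, n_iter=200, step0=1.0, shrink=0.5):
    """Proximal gradient for 0.5*||y - X beta||^2 + lam*||beta||_0.
    The prox of the l0 penalty is hard thresholding at sqrt(2*lam*step)."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        r = X @ beta - y
        grad = X.T @ r
        f = 0.5 * r @ r
        step = step0
        while True:
            cand = hard_threshold(beta - step * grad, np.sqrt(2.0 * lam * step))
            d = cand - beta
            f_cand = 0.5 * np.sum((X @ cand - y) ** 2)
            # sufficient-decrease test on the smooth part (standard backtracking condition)
            if f_cand <= f + grad @ d + 0.5 / step * (d @ d) or step < 1e-10:
                break
            step *= shrink
        beta = cand
    return beta
```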
Difference-of-Convex (DC) minimization, referring to the problem of minimizing the difference of two convex functions, has found rich applications in statistical learning and has been studied extensively for decades. However, existing methods are primarily based on multi-stage convex relaxation, only leading to weak optimality of critical points. This paper proposes a coordinate descent method for minimizing a class of DC functions based on sequential nonconvex approximation. Our approach iteratively solves a nonconvex one-dimensional subproblem globally, and it is guaranteed to converge to a coordinate-wise stationary point. We prove that this new optimality condition is always stronger than the standard critical point condition and directional point condition under a mild \textit{locally bounded nonconvexity assumption}. For comparisons, we also include a naive variant of coordinate descent methods based on sequential convex approximation in our study. When the objective function satisfies a \textit{globally bounded nonconvexity assumption} and \textit{Luo-Tseng error bound assumption}, coordinate descent methods achieve a \textit{Q-linear} convergence rate. Also, for many applications of interest, we show that the nonconvex one-dimensional subproblem can be computed exactly and efficiently using a breakpoint searching method. Finally, we have conducted extensive experiments on several statistical learning tasks to show the superiority of our approach. Keywords: Coordinate Descent, DC Minimization, DC Programming, Difference-of-Convex Programs, Nonconvex Optimization, Sparse Optimization, Binary Optimization.
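To illustrate what an exact one-dimensional subproblem with breakpoint searching can look like, here is a hedged sketch of coordinate descent for least squares with a capped-$\ell_1$ penalty $\lambda\min(|z|,\theta)$, one common DC penalty. Each coordinate update compares the minimizers of the convex inner piece $|z|\le\theta$ and the two outer pieces $|z|\ge\theta$, whose breakpoints sit at $\pm\theta$. This is a schematic instance under an assumed penalty, not the authors' general method, and all names are illustrative.

```python
import numpy as np

def exact_1d_capped_l1(a, b, lam, theta):
    """Exact minimizer of 0.5*a*z^2 - b*z + lam*min(|z|, theta) (a > 0),
    obtained by comparing candidates from each piece; breakpoints are z = +/- theta."""
    soft = np.sign(b) * max(abs(b) - lam, 0.0) / a   # minimizer of the convex inner piece
    candidates = [
        np.clip(soft, -theta, theta),                # inner piece |z| <= theta
        max(b / a, theta),                           # outer piece z >= theta (penalty is constant)
        min(b / a, -theta),                          # outer piece z <= -theta
    ]
    objective = lambda z: 0.5 * a * z * z - b * z + lam * min(abs(z), theta)
    return min(candidates, key=objective)

def cd_capped_l1(X, y, lam=0.1, theta=0.5, n_sweeps=50):
    """Cyclic coordinate descent for 0.5*||y - X beta||^2 + lam*sum_j min(|beta_j|, theta)."""
    n, p = X.shape
    beta = np.zeros(p)
    residual = y - X @ beta
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_sweeps):
        for j in range(p):
            a = col_sq[j]
            b = X[:, j] @ residual + a * beta[j]     # coordinate-wise linear coefficient
            new = exact_1d_capped_l1(a, b, lam, theta)
            residual += X[:, j] * (beta[j] - new)    # keep the residual consistent
            beta[j] = new
    return beta
```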
Generalized linear models (GLMs) form a wide class of regression and classification models, in which the prediction is a function of a linear combination of the input variables. For statistical inference in high dimension, sparsity-inducing regularization has proven useful in providing statistical guarantees. However, solving the resulting optimization problem can be challenging: even popular iterative algorithms such as coordinate descent must loop over a large number of variables. To mitigate this, techniques known as screening rules and working sets reduce the size of the optimization problem at hand, either by progressively removing variables or by solving a sequence of smaller problems of growing size. For both techniques, a large number of variables can be identified thanks to convex duality arguments. In this paper, we show that the dual iterates of a GLM exhibit a vector autoregressive (VAR) behavior after sign identification, when the primal problem is solved with proximal gradient descent or cyclic coordinate descent. Exploiting this regularity, one can construct dual points that offer tighter certificates of optimality, enhance the performance of screening rules, and help design competitive working set algorithms.
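As background for the duality arguments mentioned above, the sketch below implements the classical Gap Safe sphere rule for the Lasso (the quadratic special case of a GLM): a dual feasible point is built by rescaling the residual, and any feature whose screening score stays below one can be safely discarded. This is the standard rule rather than the extrapolated (VAR-based) dual points proposed in the abstract; the interface is an illustrative assumption.

```python
import numpy as np

def gap_safe_screen(X, y, beta, lam):
    """Gap Safe (sphere) screening for the Lasso
       min_beta 0.5*||y - X beta||^2 + lam*||beta||_1.
    Returns a boolean mask of features that can be safely discarded."""
    r = y - X @ beta                               # primal residual
    Xtr = X.T @ r
    theta = r / max(lam, np.max(np.abs(Xtr)))      # rescaled residual -> dual feasible point
    primal = 0.5 * r @ r + lam * np.abs(beta).sum()
    dual = 0.5 * (y @ y) - 0.5 * lam ** 2 * np.sum((theta - y / lam) ** 2)
    gap = max(primal - dual, 0.0)
    radius = np.sqrt(2.0 * gap) / lam              # safe-sphere radius around theta
    col_norms = np.linalg.norm(X, axis=0)
    return np.abs(X.T @ theta) + radius * col_norms < 1.0
```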
Convex function constrained optimization has received growing research interests lately. For a special convex problem which has strongly convex function constraints, we develop a new accelerated primal-dual first-order method that obtains an $\mathcal{O}(1/\sqrt{\varepsilon})$ complexity bound, improving the $\mathcal{O}(1/\varepsilon)$ result for the state-of-the-art first-order methods. The key ingredient to our development is some novel techniques to progressively estimate the strong convexity of the Lagrangian function, which enables adaptive step-size selection and faster convergence performance. In addition, we show that the complexity is further improvable in terms of the dependence on some problem parameter, via a restart scheme that calls the accelerated method repeatedly. As an application, we consider sparsity-inducing constrained optimization which has a separable convex objective and a strongly convex loss constraint. In addition to achieving fast convergence, we show that the restarted method can effectively identify the sparsity pattern (active-set) of the optimal solution in finite steps. To the best of our knowledge, this is the first active-set identification result for sparsity-inducing constrained optimization.
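The restart idea can be illustrated, in a much simpler setting, by accelerated proximal gradient descent with a function-value restart heuristic: momentum is dropped whenever the objective increases, which in practice recovers fast linear-like behavior when hidden strong convexity is present. This is a generic sketch (FISTA with adaptive restart), not the accelerated primal-dual method of the abstract; all names are illustrative.

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fista_with_restart(X, y, lam, n_iter=500):
    """FISTA for 0.5*||y - X beta||^2 + lam*||beta||_1 with a function-value
    restart heuristic: momentum is reset whenever the objective goes up."""
    L = np.linalg.norm(X, 2) ** 2                      # Lipschitz constant of the gradient
    objective = lambda b: 0.5 * np.sum((y - X @ b) ** 2) + lam * np.abs(b).sum()
    beta = np.zeros(X.shape[1])
    z, t, f_prev = beta.copy(), 1.0, objective(beta)
    for _ in range(n_iter):
        beta_new = soft_threshold(z - X.T @ (X @ z - y) / L, lam / L)
        f_new = objective(beta_new)
        if f_new > f_prev:                             # restart: drop momentum, step from beta
            z, t = beta.copy(), 1.0
            beta_new = soft_threshold(z - X.T @ (X @ z - y) / L, lam / L)
            f_new = objective(beta_new)
        t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        z = beta_new + (t - 1.0) / t_next * (beta_new - beta)
        beta, t, f_prev = beta_new, t_next, f_new
    return beta
```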
Path-following algorithms are frequently used for composite optimization problems in which a sequence of subproblems, with varying regularization hyperparameters, is solved sequentially. By reusing the previous solutions as initializations, better convergence speed is observed numerically, which makes path following a rather useful heuristic to speed up the execution of optimization algorithms in machine learning. We present a primal-dual analysis of path-following algorithms and explore how to design their hyperparameters, as well as how accurately each subproblem should be solved, to guarantee a linear convergence rate on a target problem. Furthermore, considering optimization with a sparsity-inducing penalty, we analyze the change of the active sets with respect to the regularization parameter. The latter can then be adaptively calibrated to finely determine the number of features that will be selected along the solution path. This leads to simple heuristics for calibrating the hyperparameters of active set approaches in order to reduce their complexity and improve their execution time.
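A minimal illustration of the warm-start heuristic described above: solve the Lasso for a decreasing grid of regularization parameters with proximal gradient descent (ISTA), initializing each subproblem at the previous solution. The grid, solver, and iteration budget are illustrative assumptions rather than the calibrated scheme analyzed in the abstract.

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(X, y, lam, beta0, n_iter=200):
    """Proximal gradient (ISTA) for 0.5*||y - X beta||^2 + lam*||beta||_1."""
    L = np.linalg.norm(X, 2) ** 2          # Lipschitz constant of the gradient
    beta = beta0.copy()
    for _ in range(n_iter):
        grad = X.T @ (X @ beta - y)
        beta = soft_threshold(beta - grad / L, lam / L)
    return beta

def lasso_path_warm_start(X, y, n_lambdas=20):
    """Solve a decreasing sequence of lambdas, warm-starting each from the previous solution."""
    lam_max = np.max(np.abs(X.T @ y))      # smallest lambda giving an all-zero solution
    lambdas = lam_max * np.logspace(0, -2, n_lambdas)
    beta = np.zeros(X.shape[1])
    path = []
    for lam in lambdas:
        beta = ista(X, y, lam, beta0=beta)  # warm start: reuse previous solution
        path.append((lam, beta.copy()))
    return path
```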
Sparse reduced rank regression is an essential statistical learning method. In the contemporary literature, estimation is typically formulated as a nonconvex optimization that often yields a local optimum in numerical computation. Yet, the theoretical analysis is typically centered on the global optimum, resulting in a discrepancy between the statistical guarantee and the numerical computation. In this research, we offer a new algorithm to address the problem and establish an almost optimal rate for the algorithmic solution. We also demonstrate that the algorithm achieves the estimation with a polynomial number of iterations. In addition, we present a generalized information criterion to simultaneously ensure the consistency of support set recovery and rank estimation. Under the proposed criterion, we show that our algorithm can achieve the oracle reduced rank estimation with a significant probability. The numerical studies and an application in the ovarian cancer genetic data demonstrate the effectiveness and scalability of our approach.
In this paper, we propose approximate Frank-Wolfe (FW) algorithms to solve convex optimization problems over graph-structured support sets, where the \textit{linear minimization oracle} (LMO) cannot be obtained efficiently in general. We first demonstrate that two popular approximation assumptions (\textit{additive} and \textit{multiplicative gap errors}) are not valid for our problem, because cheap gap-approximate LMO oracles do not exist in general. Instead, a new \textit{approximate dual maximization oracle} (DMO) is proposed, which approximates the inner product rather than the gap. When the objective is $L$-smooth, we prove that the standard FW method using a $\delta$-approximate DMO converges as $\mathcal{O}(L/(\delta t) + (1-\delta)\sqrt{\delta})$ in general, and as $\mathcal{O}(L/(\delta^2 t))$ over a $\delta$-relaxation of the constraint set. Furthermore, when the objective is $\mu$-strongly convex and the solution is unique, a variant of FW converges as $\mathcal{O}(L^2 \log(t)/(\mu \delta^6 t^2))$ with the same per-iteration complexity. Our empirical results suggest that even these improved bounds are pessimistic, showing significant improvement in recovering real-world images with graph-structured sparsity.
Generalized self-concordance is a key property present in the objective function of many important learning problems. We establish the convergence rate of a simple Frank-Wolfe variant that uses the open-loop step size strategy $\gamma_t = 2/(t+2)$, obtaining an $\mathcal{O}(1/t)$ convergence rate for this class of functions, in terms of both the primal gap and the Frank-Wolfe gap, where $t$ is the iteration count. This avoids the need for second-order information or the estimation of local smoothness parameters required in previous work. We also show improved convergence rates in various common cases, e.g., when the feasible region under consideration is uniformly convex or polyhedral.
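For concreteness, a minimal Frank-Wolfe sketch over the $\ell_1$-ball using the open-loop step size $\gamma_t = 2/(t+2)$ quoted above; the objective here is a plain least-squares function rather than a generalized self-concordant one, so it only illustrates the update itself, and all names are assumptions.

```python
import numpy as np

def frank_wolfe_l1(grad, x0, radius=1.0, n_iter=200):
    """Frank-Wolfe on the l1-ball {x : ||x||_1 <= radius}
    with the open-loop step size gamma_t = 2/(t+2)."""
    x = x0.copy()
    for t in range(n_iter):
        g = grad(x)
        # LMO over the l1-ball: a signed vertex along the largest-magnitude gradient entry
        j = np.argmax(np.abs(g))
        s = np.zeros_like(x)
        s[j] = -radius * np.sign(g[j])
        gamma = 2.0 / (t + 2.0)
        x = (1.0 - gamma) * x + gamma * s
    return x

# Example: least squares objective 0.5*||A x - b||^2 constrained to the l1-ball
rng = np.random.default_rng(0)
A, b = rng.standard_normal((50, 20)), rng.standard_normal(50)
x_hat = frank_wolfe_l1(lambda x: A.T @ (A @ x - b), np.zeros(20), radius=2.0)
```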
We study distributionally robust optimization (DRO) with Sinkhorn distance -- a variant of Wasserstein distance based on entropic regularization. We provide a convex programming dual reformulation for a general nominal distribution. Compared with Wasserstein DRO, it is computationally tractable for a larger class of loss functions, and its worst-case distribution is more reasonable. We propose an efficient first-order algorithm with bisection search to solve the dual reformulation. We demonstrate that our proposed algorithm finds a $\delta$-optimal solution of the new DRO formulation with computation cost $\tilde{O}(\delta^{-3})$ and memory cost $\tilde{O}(\delta^{-2})$, and the computation cost further improves to $\tilde{O}(\delta^{-2})$ when the loss function is smooth. Finally, we provide various numerical examples using both synthetic and real data to demonstrate its competitive performance and light computational speed.
Best subset of groups selection (BSGS) is the process of selecting a small number of non-overlapping groups that best explain the response variable. It has attracted increasing attention and has far-reaching applications in practice. However, owing to the computational intractability of BSGS in high-dimensional settings, developing efficient algorithms for solving BSGS remains a research hotspot. In this paper, we propose a partitioning algorithm that iteratively detects relevant groups and excludes irrelevant ones. Moreover, coupled with a new group information criterion, we develop an adaptive algorithm to determine the optimal model size. Under mild conditions, our algorithm can provably determine the best subset of groups in polynomial time with high probability. Finally, we demonstrate the efficiency and accuracy of our methods by comparing them with several state-of-the-art algorithms on both synthetic and real-world datasets.
Modern technologies are generating ever-increasing amounts of data. Making use of these data requires methods that are both statistically sound and computationally efficient. Typically, the statistical and computational aspects are treated separately. In this paper, we propose an approach for entangling these two aspects in the context of regularized estimation. Applying our approach to sparse and group-sparse regression, we show that it can improve upon standard pipelines both statistically and computationally.
Sparse conditional random fields (CRFs) are a powerful technique for structured prediction in computer vision and natural language processing. However, solving sparse CRFs in large-scale applications remains challenging. In this paper, we propose a novel safe dynamic screening method that exploits an accurate dual optimum estimation to identify and remove irrelevant features during the training process. Thus, the problem size can be continuously reduced, yielding great savings in computational cost without sacrificing any accuracy of the final learned model. To the best of our knowledge, this is the first screening method that introduces the dual optimum estimation technique -- developed by carefully exploring and exploiting the strong convexity and the intricate structure of the dual problem in static screening methods -- into dynamic screening. In this way, we can absorb the advantages of both static and dynamic screening methods while avoiding their drawbacks. Our estimation is more accurate than those developed based on the duality gap, which leads to a stronger screening rule. Moreover, our method is the first screening method for sparse CRFs and, more broadly, for structured prediction models. Experimental results on synthetic and real-world datasets demonstrate that the speedup obtained by our method is significant.
Bilevel optimization has found extensive applications in modern machine learning problems such as hyperparameter optimization, neural architecture search, and meta-learning. While bilevel problems with a unique inner minimal point (e.g., when the inner function is strongly convex) are well understood, bilevel problems with multiple inner minimal points remain challenging and open. Existing algorithms designed for such problems apply only to restricted situations and do not come with full convergence guarantees. In this paper, we adopt a reformulation of bilevel optimization as a constrained optimization problem and solve it with a primal-dual bilevel optimization (PDBO) algorithm. PDBO not only addresses the multiple-inner-minima challenge, but also features fully first-order efficiency without involving second-order Hessian or Jacobian computations, as opposed to most existing gradient-based bilevel algorithms. We further characterize the convergence rate of PDBO, which provides the first known non-asymptotic convergence guarantee for bilevel optimization with multiple inner minima. Our experiments demonstrate the desired performance of the proposed approach.
We study the problem of estimating the precision matrix of a multivariate Gaussian distribution in which all partial correlations are nonnegative, also known as multivariate totally positive of order two ($\mathrm{MTP}_2$). Such models have received significant attention in recent years, primarily due to interesting properties, e.g., the maximum likelihood estimator exists with as few as two observations regardless of the underlying dimension. We formulate this problem as a weighted $\ell_1$-norm regularized Gaussian maximum likelihood estimation under $\mathrm{MTP}_2$ constraints. In this direction, we propose a novel projected Newton-like algorithm that incorporates a well-designed approximate Newton direction, which results in an algorithm with the same computational and memory costs as first-order methods. We prove that the proposed projected Newton-like algorithm converges to the minimizer of the problem. We further show, both theoretically and experimentally, that the minimizer of our formulation with the weighted $\ell_1$-norm is able to correctly recover the support of the underlying precision matrix without requiring the incoherence condition present in $\ell_1$-norm based methods. Experiments involving synthetic and real-world data demonstrate that our proposed algorithm is significantly more efficient, in terms of computational time, than state-of-the-art methods. Finally, we apply our method to financial time-series data, which are well known to exhibit positive dependencies, where we observe significant performance in terms of the modularity value of the learned financial networks.
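To ground the formulation (though not the paper's projected Newton-like direction), below is a hedged projected-gradient sketch for the weighted $\ell_1$-regularized Gaussian MLE under $\mathrm{MTP}_2$ constraints: on the feasible set the off-diagonal entries are nonpositive, so the penalty is linear and the projection is a simple clipping, while positive definiteness is maintained by backtracking. All names, the starting point, and the backtracking rule are illustrative assumptions.

```python
import numpy as np

def mtp2_projected_gradient(S, Lam, step0=1.0, n_iter=300, tol=1e-9):
    """Projected gradient sketch for
        minimize  -log det(Theta) + trace(S @ Theta) + sum_{i != j} Lam_ij * |Theta_ij|
        subject to Theta_ij <= 0 for i != j   (MTP2 constraint).
    On the feasible set the penalty is linear (|Theta_ij| = -Theta_ij); positive
    definiteness is kept by halving the step until a Cholesky factorization succeeds."""
    p = S.shape[0]
    off = ~np.eye(p, dtype=bool)
    Theta = np.diag(1.0 / np.diag(S))                  # feasible, positive definite start

    def objective(T):
        try:
            C = np.linalg.cholesky(T)
        except np.linalg.LinAlgError:
            return np.inf                              # not positive definite
        logdet = 2.0 * np.sum(np.log(np.diag(C)))
        return -logdet + np.trace(S @ T) + np.sum(Lam[off] * np.abs(T[off]))

    def project(T):
        T = 0.5 * (T + T.T)                            # symmetrize
        T[off] = np.minimum(T[off], 0.0)               # enforce nonpositive off-diagonals
        return T

    f = objective(Theta)
    for _ in range(n_iter):
        grad = S - np.linalg.inv(Theta) - Lam * off    # penalty gradient is -Lam_ij on the feasible set
        step = step0
        cand = project(Theta - step * grad)
        f_cand = objective(cand)
        while f_cand >= f and step > 1e-12:            # backtrack for feasibility and descent
            step *= 0.5
            cand = project(Theta - step * grad)
            f_cand = objective(cand)
        if f - f_cand < tol:
            break
        Theta, f = cand, f_cand
    return Theta
```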
In model selection problems for machine learning, the desire for a well-performing model with meaningful structure is typically expressed through a regularized optimization problem. In many scenarios, however, the meaningful structure is specified in some discrete space, leading to difficult nonconvex optimization problems. In this paper, we connect the model selection problem with structure-promoting regularizers to submodular function minimization with continuous and discrete arguments. In particular, we leverage the theory of submodular functions to identify a class of these problems that can be solved exactly and efficiently with an agnostic combination of discrete and continuous optimization routines. We show how simple continuous or discrete constraints can also be handled for certain problem classes and extend these ideas to a robust optimization framework. We also show how some problems outside of this class can be embedded within the class, further extending the class of problems our framework can accommodate. Finally, we numerically validate our theoretical results with several proof-of-concept examples with synthetic and real-world data, comparing against state-of-the-art algorithms.
We propose a unified framework, based on the prediction-correction paradigm, for time-varying optimization in both the primal and dual spaces. In this framework, a continuously varying optimization problem is sampled at fixed intervals, and each sampled problem is approximately solved with a primal or dual correction step. The solution method is warm-started with the output of a prediction step, which solves an approximation of the future problem using past information. Prediction approaches are studied and compared under different sets of assumptions. Examples of algorithms covered by this framework are time-varying versions of the gradient method, splitting methods, and the celebrated alternating direction method of multipliers (ADMM).
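A schematic of the sample-predict-correct structure described above, for a toy time-varying quadratic: the predictor is a naive linear extrapolation from the two most recent iterates, and the corrector runs a few warm-started gradient steps on the newly sampled problem. The paper's predictors are more refined; everything named here is an illustrative assumption.

```python
import numpy as np

def prediction_correction(grad, x0, times, step=0.1, n_corr=5):
    """Schematic prediction-correction for a time-varying problem min_x f(x; t),
    sampled at the instants in `times`. Prediction: extrapolate from the two most
    recent iterates; correction: a few gradient steps on the newly sampled problem."""
    x_prev, x = x0.copy(), x0.copy()
    history = []
    for k, t in enumerate(times):
        # Prediction step: simple linear extrapolation using past information
        x_pred = x + (x - x_prev) if k > 0 else x
        # Correction step: warm-started gradient descent on the newly sampled problem
        z = x_pred
        for _ in range(n_corr):
            z = z - step * grad(z, t)
        x_prev, x = x, z
        history.append(x.copy())
    return history

# Example: tracking the minimizer of f(x; t) = 0.5*||x - c(t)||^2 with a drifting target c(t)
c = lambda t: np.array([np.cos(t), np.sin(t)])
grad = lambda x, t: x - c(t)
traj = prediction_correction(grad, np.zeros(2), times=np.linspace(0, 6, 60))
```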
This paper focuses on stochastic methods for solving smooth nonconvex strongly-concave min-max problems, which have received increasing attention due to their potential applications in deep learning (e.g., deep AUC maximization, distributionally robust optimization). However, most of the existing algorithms are slow in practice, and their analysis revolves around convergence to a nearly stationary point. We consider leveraging the Polyak-\L{}ojasiewicz (PL) condition to design faster stochastic algorithms with stronger convergence guarantees. Although the PL condition has been utilized to design many stochastic minimization algorithms, its application to nonconvex min-max optimization remains rare. In this paper, we propose and analyze a generic framework of proximal epoch-based methods into which many well-known stochastic updates can be embedded. Fast convergence is established in terms of both {\bf the primal objective gap and the duality gap}. Compared with existing studies, (i) our analysis is based on a novel Lyapunov function consisting of the primal objective gap and the duality gap of a regularized function, and (ii) the results are more comprehensive, with improved rates that have better dependence on the condition number under different assumptions. We also conduct experiments on deep and non-deep learning tasks to verify the effectiveness of our methods.