This article introduces new multiplicative updates for nonnegative matrix factorization with the $\beta$-divergence and sparse regularization of one of the two factors (say, the activation matrix). It is well known that the norm of the other factor (the dictionary matrix) needs to be controlled in order to avoid an ill-posed formulation. Standard practice consists in constraining the columns of the dictionary to have unit norm, which leads to a nontrivial optimization problem. Our approach leverages a reparametrization of the original problem into the optimization of an equivalent scale-invariant objective function. From there, we derive block majorization-minimization algorithms that yield simple multiplicative updates for either $\ell_1$-regularization or the more "aggressive" log-regularization. In contrast with other state-of-the-art methods, our algorithms are universal in the sense that they can be applied to any $\beta$-divergence (i.e., any value of $\beta$) and that they come with convergence guarantees. We report numerical comparisons with existing heuristic and Lagrangian methods using various datasets: face images, an audio spectrogram, hyperspectral data, and song play counts. We show that our methods obtain solutions of similar quality at convergence (similar objective values) but with significantly reduced CPU times.
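For context, the sketch below shows the standard heuristic baseline this paper improves on: multiplicative updates for $\beta$-NMF with an $\ell_1$ penalty on the activations, where the penalty weight is simply added to the denominator of the update and the dictionary columns are renormalized after each sweep. This is a minimal illustration of the common heuristic, not the authors' scale-invariant algorithm; the function name and parameter defaults are illustrative.

```python
import numpy as np

def mu_beta_nmf_l1(V, K, beta=1.0, lam=0.1, n_iter=200, eps=1e-12):
    """Heuristic multiplicative updates for beta-NMF with an l1 penalty on H.

    Common baseline (not the paper's method): lam is added to the denominator
    of the H update, and W's columns are rescaled to unit l2 norm each sweep.
    """
    m, n = V.shape
    rng = np.random.default_rng(0)
    W = rng.random((m, K)) + eps
    H = rng.random((K, n)) + eps
    for _ in range(n_iter):
        WH = W @ H + eps
        # H update: H <- H * (W^T [(WH)^(b-2) * V]) / (W^T (WH)^(b-1) + lam)
        H *= (W.T @ (WH ** (beta - 2) * V)) / (W.T @ WH ** (beta - 1) + lam)
        WH = W @ H + eps
        # W update (unpenalized), then unit-norm column rescaling
        W *= ((WH ** (beta - 2) * V) @ H.T) / (WH ** (beta - 1) @ H.T + eps)
        norms = np.linalg.norm(W, axis=0) + eps
        W /= norms
        H *= norms[:, None]   # compensate so the product WH is unchanged
    return W, H
```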
This article proposes new multiplicative updates for nonnegative matrix factorization (NMF) with the $\beta$-divergence objective function. Our new updates are derived from a joint majorization-minimization (MM) scheme, in which an auxiliary function (a tight upper bound of the objective function) is built for the two factors jointly and minimized at each iteration. This is in contrast with the classic approach, in which a majorizer is derived for each factor separately. Like that classic approach, our joint MM algorithm also results in multiplicative updates that are simple to implement. However, they yield a significant reduction in computation time (for equally good solutions), in particular for some $\beta$-divergences of important applicative interest, such as the squared Euclidean distance and the Kullback-Leibler or Itakura-Saito divergences. We report experimental results using diverse datasets: face images, an audio spectrogram, hyperspectral data, and song play counts. Depending on the value of $\beta$ and on the dataset, our joint MM approach can yield CPU time reductions of about 13% to 78% compared to the classic alternating scheme.
Nonnegative matrix factorization with transform learning (TL-NMF) is a recent idea that aims at learning data representations suited to NMF. In this work, we relate TL-NMF to the classical matrix joint-diagonalization (JD) problem. We show that, when the number of data realizations is sufficiently large, TL-NMF can be replaced by a two-step approach, termed JD+NMF, that estimates the transform through JD prior to the NMF computation. In contrast, we find that when the number of data realizations is limited, not only is JD+NMF no longer equivalent to TL-NMF, but the inherent low-rank constraint of TL-NMF turns out to be essential for learning meaningful transforms for NMF.
Constrained tensor and matrix factorization models allow one to extract interpretable patterns from multiway data. Identifiability properties and efficient algorithms for constrained low-rank approximations are therefore important research topics. This work concerns low-rank approximations in which the columns of the factor matrices are sparse in a known and possibly overcomplete basis, a model referred to as dictionary-based low-rank approximation (DLRA). While earlier contributions focused on finding factor columns inside a dictionary of candidate columns, i.e., one-sparse approximations, this work is the first to tackle DLRA with sparsity larger than one. I propose to focus on the sparse-coding subproblem, coined mixed sparse coding (MSC), that emerges when solving DLRA with an alternating optimization strategy. Several algorithms based on sparse-coding heuristics (greedy methods, convex relaxations) are provided to solve MSC. The performance of these heuristics is evaluated on simulated data. I then show how to adapt an efficient LASSO-based MSC solver to compute dictionary-based matrix factorizations and canonical polyadic decompositions in the context of hyperspectral image processing and chemometrics. These experiments show that DLRA extends the modeling capabilities of low-rank approximations, helps to reduce estimation variance, and enhances the identifiability and interpretability of the estimated factors.
Modern statistical applications often involve minimizing an objective function that may be nonsmooth and/or nonconvex. This paper focuses on a broad Bregman-surrogate algorithm framework that includes local linear approximation, mirror descent, iterative thresholding, DC programming, and many other instances. The recharacterization via generalized Bregman functions enables us to construct suitable error measures and establish global convergence rates for nonconvex and nonsmooth objectives in possibly high dimensions. For sparse learning problems, under some regularity conditions, the obtained estimators, as fixed points of the surrogate, enjoy provable statistical guarantees even though they are not necessarily local minimizers, and the sequence of iterates can be shown to approach the statistical truth within the desired accuracy geometrically fast. The paper also studies how to design adaptive momentum-based accelerations without assuming convexity or smoothness, by carefully controlling step sizes and relaxation parameters.
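As one concrete instance of such a surrogate scheme, the minimal sketch below implements iterative soft-thresholding (ISTA) for the lasso: each step minimizes a quadratic surrogate of the smooth part around the current iterate and then applies the soft-threshold map. Names and defaults are illustrative, not taken from the paper.

```python
import numpy as np

def ista_lasso(X, y, lam, step=None, n_iter=500):
    """Iterative soft-thresholding for 0.5*||y - X b||^2 + lam*||b||_1.

    Each iteration minimizes a quadratic (Bregman-type) surrogate of the
    smooth part, then applies the soft-threshold proximal map.
    """
    n, p = X.shape
    if step is None:
        # 1/L with L = spectral norm squared (Lipschitz constant of grad)
        step = 1.0 / np.linalg.norm(X, 2) ** 2
    b = np.zeros(p)
    for _ in range(n_iter):
        g = X.T @ (X @ b - y)          # gradient of the smooth part
        z = b - step * g               # surrogate minimizer before threshold
        b = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
    return b
```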
In model selection problems for machine learning, the desire for a well-performing model with meaningful structure is typically expressed through a regularized optimization problem. In many scenarios, however, the meaningful structure is specified in some discrete space, leading to difficult nonconvex optimization problems. In this paper, we connect the model selection problem with structure-promoting regularizers to submodular function minimization with continuous and discrete arguments. In particular, we leverage the theory of submodular functions to identify a class of these problems that can be solved exactly and efficiently with an agnostic combination of discrete and continuous optimization routines. We show how simple continuous or discrete constraints can also be handled for certain problem classes and extend these ideas to a robust optimization framework. We also show how some problems outside of this class can be embedded within the class, further extending the class of problems our framework can accommodate. Finally, we numerically validate our theoretical results with several proof-of-concept examples with synthetic and real-world data, comparing against state-of-the-art algorithms.
In this paper, we propose an algorithmic framework, dubbed the inertial alternating direction method of multipliers (iADMM), for solving a class of nonconvex nonsmooth multiblock composite optimization problems with linear constraints. Our framework employs the general majorization-minimization (MM) principle to update each block of variables, which not only unifies the convergence analysis of previous ADMM schemes that use specific surrogate functions in the MM step, but also leads to new efficient ADMM schemes. To the best of our knowledge, in the nonconvex nonsmooth setting, ADMM combined with the MM principle to update each block of variables, and ADMM combined with inertial terms for the primal variables, have not been studied in the literature. Under standard assumptions, we prove subsequential convergence and global convergence of the generated sequence of iterates. We illustrate the effectiveness of iADMM on a class of nonconvex low-rank representation problems.
The recovery of sparse data is at the core of many applications in machine learning and signal processing. While such problems can be tackled using $\ell_1$-regularization, as in the Lasso estimator and in Basis Pursuit, specialized algorithms are typically required to solve the corresponding high-dimensional nonsmooth optimization for large instances. Iteratively Reweighted Least Squares (IRLS) is a widely used algorithm for this purpose, owing to its excellent numerical performance. However, while existing theory is able to guarantee convergence of this algorithm to the minimizer, it does not provide a global convergence rate. In this paper, we prove that a variant of IRLS converges with a global linear rate to a sparse solution, i.e., with a linear error decrease occurring immediately from any initialization, if the measurements fulfill the usual null space property assumption. We support our theory by numerical experiments showing that our linear rate captures the correct dimension dependence. We anticipate that our theoretical findings will lead to new insights for many other use cases of the IRLS algorithm, such as in low-rank matrix recovery.
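The sketch below is a textbook IRLS variant for basis pursuit, where each iteration solves a weighted least-squares problem in closed form and the smoothing parameter is shrunk using the usual sorted-magnitude rule. It is an illustration of the general mechanism under an assumed sparsity level k, not necessarily the exact variant whose linear rate is analyzed in the paper.

```python
import numpy as np

def irls_basis_pursuit(A, b, k=10, n_iter=50, eps=1.0):
    """IRLS sketch for min ||x||_1 s.t. Ax = b (A is m x n, m < n).

    Each iteration solves a weighted least-squares problem in closed form;
    eps is decreased based on the (k+1)-th largest magnitude (assumes k < n).
    """
    m, n = A.shape
    x = np.linalg.lstsq(A, b, rcond=None)[0]     # least-norm initialization
    for _ in range(n_iter):
        w = 1.0 / np.sqrt(x ** 2 + eps ** 2)     # smoothed |x_i|^{-1} weights
        DinvAt = A.T / w[:, None]                # D^{-1} A^T with D = diag(w)
        mult = np.linalg.solve(A @ DinvAt, b)    # (A D^{-1} A^T) mult = b
        x = DinvAt @ mult                        # weighted LS solution
        eps = max(min(eps, np.sort(np.abs(x))[-(k + 1)] / n), 1e-10)
    return x
```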
The joint problem of reconstruction/feature extraction is a challenging task in image processing. It consists in performing, in a joint manner, the restoration of an image and the extraction of its features. In this work, we first propose a novel nonsmooth and nonconvex variational formulation of the problem. For this purpose, we introduce a versatile generalized Gaussian prior whose parameters, including its exponent, are space-variant. Second, we design an alternating proximal-based optimization algorithm that efficiently exploits the structure of the proposed nonconvex objective function. We also analyze the convergence of this algorithm. As shown in numerical experiments conducted on joint deblurring/segmentation tasks, the proposed method delivers high-quality results.
The affine rank minimization problem consists of finding a matrix of minimum rank that satisfies a given system of linear equality constraints. Such problems have appeared in the literature of a diverse set of fields including system identification and control, Euclidean embedding, and collaborative filtering. Although specific instances can often be solved with specialized algorithms, the general affine rank minimization problem is NP-hard, because it contains vector cardinality minimization as a special case. In this paper, we show that if a certain restricted isometry property holds for the linear transformation defining the constraints, the minimum rank solution can be recovered by solving a convex optimization problem, namely the minimization of the nuclear norm over the given affine space. We present several random ensembles of equations where the restricted isometry property holds with overwhelming probability, provided the codimension of the subspace is Ω(r(m + n) log mn), where m, n are the dimensions of the matrix, and r is its rank. The techniques used in our analysis have strong parallels in the compressed sensing framework. We discuss how affine rank minimization generalizes this pre-existing concept and outline a dictionary relating concepts from cardinality minimization to those of rank minimization. We also discuss several algorithmic approaches to solving the norm minimization relaxations, and illustrate our results with numerical examples.
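One standard algorithmic route to the nuclear norm relaxation is proximal gradient descent, whose proximal step is singular-value soft-thresholding. The sketch below solves a penalized surrogate of the equality-constrained relaxation, assuming the linear map is supplied as a matrix acting on the vectorized unknown; it is a generic illustration, not one of the specific solvers discussed in the paper.

```python
import numpy as np

def svt(Z, tau):
    """Singular-value soft-thresholding: prox of tau*||.||_* at Z."""
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def nuclear_norm_min(A, b, shape, lam=1.0, step=None, n_iter=300):
    """Proximal gradient on lam*||X||_* + 0.5*||A vec(X) - b||^2.

    A is a (q x m*n) matrix acting on the row-major vectorization of X.
    """
    m, n = shape
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/Lipschitz constant
    X = np.zeros((m, n))
    for _ in range(n_iter):
        r = A @ X.ravel() - b
        G = (A.T @ r).reshape(m, n)              # gradient of quadratic term
        X = svt(X - step * G, step * lam)        # prox step on the iterate
    return X
```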
The current paper studies the problem of minimizing a loss $f(\boldsymbol{x})$ subject to the constraint $\boldsymbol{D}\boldsymbol{x} \in S$, where $S$ is a closed set, convex or not, and $\boldsymbol{D}$ is a matrix that fuses parameters. Fusion constraints can capture smoothness, sparsity, or more general constraint patterns. To tackle this generic problem, we combine the Beltrami-Courant penalty method of optimization with the proximal distance principle. The latter is driven by minimization of the penalized objective $f(\boldsymbol{x}) + \frac{\rho}{2}\,\text{dist}(\boldsymbol{D}\boldsymbol{x}, S)^2$, involving a large tuning constant $\rho$ and the squared Euclidean distance of $\boldsymbol{D}\boldsymbol{x}$ from $S$. The next iterate $\boldsymbol{x}_{n+1}$ of the corresponding proximal distance algorithm is constructed from the current iterate $\boldsymbol{x}_n$ by minimizing the majorizing surrogate function $f(\boldsymbol{x}) + \frac{\rho}{2}\|\boldsymbol{D}\boldsymbol{x} - \mathcal{P}_{S}(\boldsymbol{D}\boldsymbol{x}_n)\|^2$. For fixed $\rho$, a subanalytic loss $f(\boldsymbol{x})$, and a subanalytic constraint set $S$, we prove convergence to a stationary point. Under stronger assumptions, we provide convergence rates and demonstrate linear local convergence. We also construct a steepest descent (SD) variant to avoid costly linear system solves. To benchmark our algorithms, we compare against the alternating direction method of multipliers (ADMM). Our extensive numerical tests include problems on metric projection, convex regression, convex clustering, total variation image denoising, and projection of a matrix to a good condition number. These experiments demonstrate the superior speed and acceptable accuracy of our steepest descent variant on high-dimensional problems.
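The MM iteration above is easy to instantiate when the loss is quadratic, since the surrogate minimizer reduces to a linear solve. The sketch below assumes a least-squares loss and takes the projection onto $S$ as a user-supplied routine, with hard thresholding shown as one nonconvex example; it uses a fixed $\rho$ and omits the $\rho$ schedule and the steepest descent variant from the paper.

```python
import numpy as np

def proximal_distance(A, b, D, proj_S, rho=10.0, n_iter=200):
    """Proximal distance MM for 0.5||Ax-b||^2 s.t. Dx in S (S possibly nonconvex).

    Each iteration minimizes the surrogate
        0.5||Ax-b||^2 + (rho/2)||Dx - P_S(D x_n)||^2,
    which is a linear solve here because the loss is quadratic.
    """
    M = A.T @ A + rho * (D.T @ D)        # fixed SPD system matrix
    x = np.linalg.lstsq(A, b, rcond=None)[0]
    for _ in range(n_iter):
        anchor = proj_S(D @ x)           # P_S(D x_n)
        x = np.linalg.solve(M, A.T @ b + rho * D.T @ anchor)
    return x

def hard_threshold(z, k=5):
    """Projection onto the (nonconvex) set of k-sparse vectors."""
    out = np.zeros_like(z)
    idx = np.argsort(np.abs(z))[-k:]
    out[idx] = z[idx]
    return out

# Example: sparse least squares with D = identity.
# x = proximal_distance(A, b, np.eye(A.shape[1]), lambda z: hard_threshold(z, 5))
```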
Iterative regularization is a classic idea in regularization theory that has recently become popular in machine learning. On the one hand, it allows one to design efficient algorithms that control numerical and statistical accuracy at the same time. On the other hand, it sheds light on the learning curves observed while training neural networks. In this paper, we focus on iterative regularization in the context of classification. After contrasting this setting with that of regression and inverse problems, we develop an iterative regularization approach based on the use of the hinge loss function. More precisely, we consider a diagonal approach for a family of algorithms for which we prove convergence as well as rates of convergence. Our approach compares favorably with other alternatives, as confirmed also in numerical simulations.
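To make the idea concrete, the sketch below runs plain subgradient descent on the average hinge loss and returns the whole iterate path, so that the stopping time plays the role of the implicit regularization parameter selected on held-out data. This is a simplified illustration of iterative regularization for classification, not the diagonal algorithm analyzed in the paper.

```python
import numpy as np

def hinge_subgradient_path(X, y, step=0.1, n_iter=200):
    """Subgradient descent on mean hinge loss; early stopping regularizes.

    X is (n, p), y has entries in {-1, +1}. Returns the full iterate path
    so the stopping time (the regularization parameter) can be validated.
    """
    n, p = X.shape
    w = np.zeros(p)
    path = []
    for _ in range(n_iter):
        margins = y * (X @ w)
        active = margins < 1.0                  # samples with nonzero hinge
        g = -(X[active] * y[active, None]).sum(axis=0) / n
        w = w - step * g
        path.append(w.copy())
    return np.array(path)
```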
Motivated by the recent increased interest in optimization algorithms for nonconvex optimization, in applications to training deep neural networks and other optimization problems in data analysis, we give an overview of recent theoretical results on the global performance guarantees of optimization algorithms for nonconvex optimization. We start with classical arguments showing that general nonconvex problems cannot be solved efficiently in reasonable time. Then we give a list of problems for which a global minimizer can be found efficiently by exploiting the structure of the problem as much as possible. Another way to deal with nonconvexity is to relax the goal from finding a global minimum to finding a stationary point or a local minimum. For this setting, we first present known results on the convergence rates of deterministic first-order methods, followed by a general theoretical analysis of optimal stochastic and randomized gradient schemes and an overview of stochastic first-order methods. After that, we discuss quite general classes of nonconvex problems, such as the minimization of $\alpha$-weakly-quasi-convex functions and of functions satisfying the Polyak-Lojasiewicz condition, which still allow one to obtain theoretical convergence guarantees for first-order methods. We then consider higher-order and zeroth-order/derivative-free methods and their convergence rates for nonconvex optimization problems.
We investigate the problem of recovering a partially observed high-rank matrix whose columns obey a nonlinear structure such as a union of subspaces, an algebraic variety or grouped in clusters. The recovery problem is formulated as the rank minimization of a nonlinear feature map applied to the original matrix, which is then further approximated by a constrained non-convex optimization problem involving the Grassmann manifold. We propose two sets of algorithms, one arising from Riemannian optimization and the other as an alternating minimization scheme, both of which include first- and second-order variants. Both sets of algorithms have theoretical guarantees. In particular, for the alternating minimization, we establish global convergence and worst-case complexity bounds. Additionally, using the Kurdyka-Lojasiewicz property, we show that the alternating minimization converges to a unique limit point. We provide extensive numerical results for the recovery of union of subspaces and clustering under entry sampling and dense Gaussian sampling. Our methods are competitive with existing approaches and, in particular, high accuracy is achieved in the recovery using Riemannian second-order methods.
Given a set of dissimilarity measurements amongst data points, determining which metric representation is most "consistent" with the input measurements, or which metric best captures the relevant geometric features of the data, is a key step in many machine learning algorithms. Existing methods are restricted to specific kinds of metrics or to small problem sizes because of the large number of metric constraints in such problems. In this paper, we provide an active-set algorithm, Project and Forget, that uses Bregman projections to solve metric constrained problems with many (possibly exponentially many) inequality constraints. We provide a theoretical analysis of Project and Forget and prove that our algorithm converges to the globally optimal solution, and that the $L_2$ distance of the current iterate to the optimal solution decays asymptotically at an exponential rate. We demonstrate that, using our method, we can solve large problem instances of three types of metric constrained problems: general-weight correlation clustering, metric nearness, and metric learning; in each case, we outperform state-of-the-art methods with respect to CPU times and problem sizes.
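The core primitive is a sweep of Bregman projections onto the violated constraints; for the metric nearness special case under the squared Euclidean divergence, this reduces to Hildreth/Dykstra-style halfspace projections over the triangle inequalities, sketched below. The paper's contribution, omitted here, is the active-set machinery that selects constraints and "forgets" those whose dual variables vanish.

```python
import numpy as np
from itertools import combinations

def metric_nearness(D, n_sweeps=50):
    """Cyclic halfspace (Bregman) projections for l2 metric nearness.

    Finds the least-squares nearest symmetric matrix to D whose entries
    satisfy every triangle inequality d_ij <= d_ik + d_kj. All constraints
    are swept cyclically; no active-set selection is performed.
    """
    X = D.astype(float).copy()
    n = X.shape[0]
    # one constraint per pair {i,j} and third point k: x_ij - x_ik - x_kj <= 0
    cons = [(i, j, k) for i, j in combinations(range(n), 2)
            for k in range(n) if k != i and k != j]
    lam = np.zeros(len(cons))                    # nonnegative dual variables
    for _ in range(n_sweeps):
        for t, (i, j, k) in enumerate(cons):
            viol = (X[i, j] - X[i, k] - X[k, j]) / 3.0   # ||a||^2 = 3
            theta = max(viol, -lam[t])           # project, or undo correction
            X[i, j] -= theta; X[j, i] -= theta
            X[i, k] += theta; X[k, i] += theta
            X[k, j] += theta; X[j, k] += theta
            lam[t] += theta
    return X
```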
In recent years there has been a growing interest in the study of sparse representation of signals. Using an overcomplete dictionary that contains prototype signal-atoms, signals are described by sparse linear combinations of these atoms. Applications that use sparse representation are many and include compression, regularization in inverse problems, feature extraction, and more. Recent activity in this field has concentrated mainly on the study of pursuit algorithms that decompose signals with respect to a given dictionary. Designing dictionaries to better fit the above model can be done by either selecting one from a prespecified set of linear transforms or adapting the dictionary to a set of training signals. Both of these techniques have been considered, but this topic is largely still open. In this paper we propose a novel algorithm for adapting dictionaries in order to achieve sparse signal representations. Given a set of training signals, we seek the dictionary that leads to the best representation for each member in this set, under strict sparsity constraints. We present a new method, the K-SVD algorithm, generalizing the K-means clustering process. K-SVD is an iterative method that alternates between sparse coding of the examples based on the current dictionary and a process of updating the dictionary atoms to better fit the data. The update of the dictionary columns is combined with an update of the sparse representations, thereby accelerating convergence. The K-SVD algorithm is flexible and can work with any pursuit method (e.g., basis pursuit, FOCUSS, or matching pursuit). We analyze this algorithm and demonstrate its results both on synthetic tests and in applications on real image data.
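Since the abstract spells out the alternation, a compact sketch is easy to give: OMP-based sparse coding, followed by a rank-one SVD update of each atom together with its coefficients. This is a minimal rendition under default parameters; the published algorithm includes practical refinements such as replacing unused atoms.

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal matching pursuit: k-sparse code of y in dictionary D."""
    idx, r = [], y.copy()
    for _ in range(k):
        idx.append(int(np.argmax(np.abs(D.T @ r))))
        coef, *_ = np.linalg.lstsq(D[:, idx], y, rcond=None)
        r = y - D[:, idx] @ coef
    x = np.zeros(D.shape[1])
    x[idx] = coef
    return x

def ksvd(Y, n_atoms, k, n_iter=20):
    """Compact K-SVD sketch: alternate OMP coding and SVD atom updates."""
    m, n = Y.shape
    rng = np.random.default_rng(0)
    D = rng.standard_normal((m, n_atoms))
    D /= np.linalg.norm(D, axis=0)
    for _ in range(n_iter):
        X = np.column_stack([omp(D, Y[:, j], k) for j in range(n)])
        for a in range(n_atoms):
            users = np.nonzero(X[a])[0]          # signals using atom a
            if users.size == 0:
                continue
            # residual without atom a, restricted to its users
            E = Y[:, users] - D @ X[:, users] + np.outer(D[:, a], X[a, users])
            U, s, Vt = np.linalg.svd(E, full_matrices=False)
            D[:, a] = U[:, 0]                    # updated unit-norm atom
            X[a, users] = s[0] * Vt[0]           # jointly updated coefficients
    return D, X
```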
We introduce a class of first-order methods for smooth constrained optimization that are based on an analogy to non-smooth dynamical systems. Two distinctive features of our approach are that (i) projections or optimizations over the entire feasible set are avoided, in stark contrast to projected gradient methods or the Frank-Wolfe method, and (ii) iterates are allowed to become infeasible, which differs from active set or feasible direction methods, where the descent motion stops as soon as a new constraint is encountered. The resulting algorithmic procedure is simple to implement even when constraints are nonlinear, and is suitable for large-scale constrained optimization problems in which the feasible set fails to have a simple structure. The key underlying idea is that constraints are expressed in terms of velocities instead of positions, which has the algorithmic consequence that optimizations over feasible sets at each iteration are replaced with optimizations over local, sparse convex approximations. In particular, this means that at each iteration only constraints that are violated are taken into account. The result is a simplified suite of algorithms and an expanded range of possible applications in machine learning.
In this paper, we introduce TITAN, a novel inertial block majorization-minimization framework for nonsmooth nonconvex optimization problems. To our knowledge, TITAN is the first framework of block-coordinate update methods that relies on the majorization-minimization framework while embedding inertial force into each step of the block updates. The inertial force is obtained via an extrapolation operator that subsumes heavy-ball and Nesterov-type accelerations for block proximal gradient methods as special cases. By choosing various surrogate functions, such as proximal, Lipschitz-gradient, Bregman, quadratic, and composite surrogate functions, and by varying the extrapolation operator, TITAN produces a rich set of inertial block-coordinate update methods. We study the subsequential convergence as well as the global convergence of the sequence generated by TITAN. We illustrate the effectiveness of TITAN on two important machine learning problems, namely sparse nonnegative matrix factorization and matrix completion.
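One simple instance of this template is an extrapolated (heavy-ball) block proximal gradient method for plain least-squares NMF, sketched below with a fixed inertial weight. TITAN's general surrogates and its adaptive extrapolation parameters are omitted; the function name and defaults are illustrative.

```python
import numpy as np

def inertial_block_pg_nmf(V, K, n_iter=300, beta_extra=0.5):
    """Inertial block proximal gradient for min 0.5||V - WH||_F^2, W,H >= 0.

    Each block is updated by a projected gradient step on a Lipschitz-gradient
    surrogate taken at an extrapolated point; beta_extra is the inertial weight.
    """
    m, n = V.shape
    rng = np.random.default_rng(0)
    W, H = rng.random((m, K)), rng.random((K, n))
    W_old, H_old = W.copy(), H.copy()
    for _ in range(n_iter):
        # H block: extrapolate, gradient step, project onto H >= 0
        Hb = H + beta_extra * (H - H_old)
        L_H = np.linalg.norm(W.T @ W, 2) + 1e-12     # block Lipschitz constant
        H_old = H
        H = np.maximum(Hb - (W.T @ (W @ Hb - V)) / L_H, 0.0)
        # W block: same scheme with the freshly updated H
        Wb = W + beta_extra * (W - W_old)
        L_W = np.linalg.norm(H @ H.T, 2) + 1e-12
        W_old = W
        W = np.maximum(Wb - ((Wb @ H - V) @ H.T) / L_W, 0.0)
    return W, H
```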
Deep matrix factorizations (deep MFs) are recent unsupervised data mining techniques inspired by constrained low-rank approximations. They aim to extract complex hierarchies of features within high-dimensional datasets. Most of the loss functions proposed in the literature to evaluate the quality of deep MF models, and the underlying optimization frameworks, are not consistent because different losses are used at different layers. In this paper, we introduce two meaningful loss functions for deep MF and present a generic framework to solve the corresponding optimization problems. We illustrate the effectiveness of this approach through the integration of various constraints and regularizations, such as sparsity, nonnegativity, and minimum volume. The models are successfully applied to both synthetic and real data, namely hyperspectral unmixing and the extraction of facial features.
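For reference, the sketch below shows the common greedy layer-wise baseline for deep MF, where each layer refactors the previous activations; its layer-wise losses are exactly the kind of inconsistency the paper's unified framework is designed to remove. The inner solver is a plain Frobenius NMF by multiplicative updates.

```python
import numpy as np

def nmf_mu(V, K, n_iter=200, eps=1e-12):
    """Plain Frobenius NMF by multiplicative updates (inner solver)."""
    rng = np.random.default_rng(0)
    W = rng.random((V.shape[0], K)) + eps
    H = rng.random((K, V.shape[1])) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

def greedy_deep_mf(V, ranks):
    """Greedy layer-wise deep MF baseline: V ~ W1 W2 ... WL H.

    Each layer factorizes the previous activations with a separate loss,
    so the layers are not fit under one consistent objective.
    """
    Ws, H = [], V
    for K in ranks:
        W, H = nmf_mu(H, K)
        Ws.append(W)
    return Ws, H
```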
This paper primarily focuses on computing the Euclidean projection of a vector onto the $\ell_p$ ball with $p \in (0,1)$. Such a problem is a core building block in statistical machine learning and signal processing tasks because of its ability to promote sparsity. However, efficient numerical algorithms for finding the projection are still unavailable, particularly in large-scale optimization. To meet this challenge, we first derive the first-order necessary optimality conditions of this problem. Based on this characterization, we develop a novel method for computing a stationary point by solving a sequence of projections onto reweighted $\ell_1$ balls. This method is simple to implement and computationally efficient. Moreover, the proposed algorithm is shown to converge uniquely under mild conditions and has a worst-case $O(1/\sqrt{k})$ convergence rate. Numerical experiments demonstrate the efficiency of our proposed algorithm.
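The workhorse inside such reweighted schemes is the projection onto an $\ell_1$ ball, computable exactly by a sort (Duchi et al., 2008). The unweighted routine below shows the mechanism; the paper's method requires its weighted generalization, which is not reproduced here.

```python
import numpy as np

def project_l1_ball(v, radius=1.0):
    """Euclidean projection onto {x : ||x||_1 <= radius} via sorting.

    Soft-thresholds v at a level theta found from the sorted magnitudes
    (Duchi et al., 2008); runs in O(n log n).
    """
    u = np.abs(v)
    if u.sum() <= radius:
        return v.copy()
    s = np.sort(u)[::-1]
    cssv = np.cumsum(s)
    # largest index rho with s[rho] > (cssv[rho] - radius) / (rho + 1)
    rho = np.nonzero(s > (cssv - radius) / np.arange(1, len(s) + 1))[0][-1]
    theta = (cssv[rho] - radius) / (rho + 1.0)
    return np.sign(v) * np.maximum(u - theta, 0.0)
```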