We study inductive matrix completion (matrix completion with side information) under an i.i.d. subgaussian noise assumption in a low-noise regime, with uniform sampling of the entries. We obtain, for the first time, generalization bounds with the following three properties: (1) they scale like the standard deviation of the noise, and in particular approach zero in the exact recovery case; (2) even in the presence of noise, they converge to zero as the sample size approaches infinity; and (3) for a fixed dimension of the side information, they have only a logarithmic dependence on the size of the matrix. Unlike many works on approximate recovery, we present results both for bounded Lipschitz losses and for the absolute loss, with the latter relying on Talagrand-type inequalities. The proofs create a bridge between two approaches to the theoretical analysis of matrix completion, combining techniques from both the exact recovery literature and the approximate recovery literature.
We consider a problem of considerable practical interest: the recovery of a data matrix from a sampling of its entries. Suppose that we observe m entries selected uniformly at random from a matrix M. Can we complete the matrix and recover the entries that we have not seen? We show that one can perfectly recover most low-rank matrices from what appears to be an incomplete set of entries. We prove that if the number m of sampled entries obeys m ≥ C n^{1.2} r log n for some positive numerical constant C, then with very high probability, most n × n matrices of rank r can be perfectly recovered by solving a simple convex optimization program. This program finds the matrix with minimum nuclear norm that fits the data. The condition above assumes that the rank is not too large. However, if one replaces the 1.2 exponent with 1.25, then the result holds for all values of the rank. Similar results hold for arbitrary rectangular matrices as well. Our results are connected with the recent literature on compressed sensing, and show that objects other than signals and images can be perfectly reconstructed from very limited information.
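As a concrete illustration of the convex program above, here is a minimal sketch of nuclear-norm matrix completion using numpy and cvxpy; the matrix size, rank, and sampling rate are illustrative choices, not the theoretical thresholds from the abstract.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, r, m = 50, 2, 1200                      # size, rank, number of sampled entries
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))  # rank-r target
mask = np.zeros((n, n))
mask.flat[rng.choice(n * n, size=m, replace=False)] = 1.0      # uniform sampling

# Find the matrix of minimum nuclear norm that agrees with the observed entries.
X = cp.Variable((n, n))
prob = cp.Problem(cp.Minimize(cp.norm(X, "nuc")),
                  [cp.multiply(mask, X - M) == 0])
prob.solve()
print("relative recovery error:",
      np.linalg.norm(X.value - M) / np.linalg.norm(M))
```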
The matrix normal model, the family of Gaussian matrix-variate distributions whose covariance matrix is the Kronecker product of two lower-dimensional factors, is frequently used to model matrix-variate data. The tensor normal model generalizes this family to Kronecker products of three or more factors. We study the estimation of the Kronecker factors of the covariance matrix in the matrix and tensor normal models. We derive nonasymptotic bounds for the error achieved by the maximum likelihood estimator (MLE) in several natural metrics. In contrast to existing bounds, our results do not rely on the factors being well-conditioned or sparse. For the matrix normal model, all our bounds are minimax optimal up to logarithmic factors, and for the tensor normal model our bounds for the largest factor and for the overall covariance matrix are minimax optimal up to logarithmic factors, provided there are enough samples for any estimator to obtain constant Frobenius error. In the same regimes as our sample complexity bounds, we show that an iterative procedure for computing the MLE, known as the flip-flop algorithm, converges linearly with high probability. Our main tool is geodesic strong convexity in the geometry on positive-definite matrices induced by the Fisher information metric. This strong convexity is determined by the expansion of certain random quantum channels. We also provide numerical evidence that combining the flip-flop algorithm with a simple shrinkage estimator can improve performance in the undersampled regime.
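The flip-flop iteration mentioned above is easy to state: with the other Kronecker factor held fixed, each factor's maximum-likelihood update is a closed-form weighted sample covariance. A minimal numpy sketch for the matrix normal model follows; the dimensions, sample size, and identity ground truth are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
p, q, n = 5, 4, 200
X = rng.standard_normal((n, p, q))   # i.i.d. matrix-variate samples, covariance I x I

U, V = np.eye(p), np.eye(q)          # row- and column-covariance factor estimates
for _ in range(20):
    # Alternately maximize the likelihood in one factor with the other fixed.
    Vinv = np.linalg.inv(V)
    U = sum(x @ Vinv @ x.T for x in X) / (n * q)
    Uinv = np.linalg.inv(U)
    V = sum(x.T @ Uinv @ x for x in X) / (n * p)

# The factors are identifiable only up to scaling, so compare Kronecker products.
print("error of Kronecker product:", np.linalg.norm(np.kron(U, V) - np.eye(p * q)))
```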
Randomized singular value decomposition (RSVD) is a class of computationally efficient algorithms for computing the truncated SVD of large data matrices. Given an $n \times n$ symmetric matrix $\mathbf{M}$, the prototypical RSVD algorithm outputs an approximation of the $k$ leading singular vectors of $\mathbf{M}$ by computing the SVD of $\mathbf{M}^{g}\mathbf{G}$; here $g \geq 1$ is an integer and $\mathbf{G} \in \mathbb{R}^{n \times k}$ is a random Gaussian sketching matrix. In this paper we study the statistical properties of RSVD under a general "signal-plus-noise" framework, i.e., the observed matrix $\hat{\mathbf{M}}$ is assumed to be an additive perturbation of some true but unknown signal matrix $\mathbf{M}$. We first derive upper bounds for the $\ell_2$ (spectral norm) and $\ell_{2\to\infty}$ (maximum row-wise $\ell_2$ norm) distances between the approximate singular vectors of $\hat{\mathbf{M}}$ and the true singular vectors of the signal matrix $\mathbf{M}$. These upper bounds depend on the signal-to-noise ratio (SNR) and the number of power iterations $g$. A phase transition phenomenon is observed, in which a smaller SNR requires a larger value of $g$ to guarantee convergence of the $\ell_2$ and $\ell_{2\to\infty}$ distances. We also show that the thresholds for $g$ at which these phase transitions occur are sharp whenever the noise matrix satisfies a certain trace growth condition. Finally, we derive normal approximations for the row-wise fluctuations of the approximate singular vectors and the entrywise fluctuations of the approximate matrix. We illustrate our theoretical results by deriving nearly optimal performance guarantees for RSVD when applied to three statistical inference problems, namely community detection, matrix completion, and principal component analysis with missing data.
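A minimal numpy sketch of the prototypical RSVD algorithm described above: approximate the k leading singular vectors of a symmetric matrix via g power iterations applied to a Gaussian sketch (with QR re-orthonormalization added for numerical stability). The signal-plus-noise instance at the bottom is an illustrative assumption.

```python
import numpy as np

def rsvd(M, k, g=2, seed=0):
    n = M.shape[0]
    rng = np.random.default_rng(seed)
    Y = rng.standard_normal((n, k))    # Gaussian sketching matrix G
    for _ in range(g):                 # power iterations: range of M^g G
        Y, _ = np.linalg.qr(M @ Y)     # re-orthonormalize for stability
    B = Y.T @ M @ Y                    # project M onto the sketched subspace
    w, Q = np.linalg.eigh(B)
    order = np.argsort(np.abs(w))[::-1]
    return Y @ Q[:, order]             # approximate leading singular vectors

# Signal-plus-noise example: a rank-2 signal perturbed by Gaussian noise.
rng = np.random.default_rng(1)
n = 300
U = np.linalg.qr(rng.standard_normal((n, 2)))[0]
noise = rng.standard_normal((n, n)) / np.sqrt(n)
M_hat = 20 * U @ U.T + (noise + noise.T) / 2
U_hat = rsvd(M_hat, k=2)
print("subspace distance:", np.linalg.norm(U_hat @ U_hat.T - U @ U.T, 2))
```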
We consider the nonlinear inverse problem of learning a transition operator $\mathbf{A}$ from partial observations at different times, in particular from sparse observations of entries of its powers $\mathbf{A},\mathbf{A}^2,\cdots,\mathbf{A}^{T}$. This Spatio-Temporal Transition Operator Recovery problem is motivated by the recent interest in learning time-varying graph signals that are driven by graph operators depending on the underlying graph topology. We address the nonlinearity of the problem by embedding it into a higher-dimensional space of suitable block-Hankel matrices, where it becomes a low-rank matrix completion problem, even if $\mathbf{A}$ is of full rank. For both a uniform and an adaptive random space-time sampling model, we quantify the recoverability of the transition operator via suitable measures of incoherence of these block-Hankel embedding matrices. For graph transition operators these measures of incoherence depend on the interplay between the dynamics and the graph topology. We develop a suitable non-convex iterative reweighted least squares (IRLS) algorithm, establish its quadratic local convergence, and show that, in optimal scenarios, no more than $\mathcal{O}(rn \log(nT))$ space-time samples are sufficient to ensure accurate recovery of a rank-$r$ operator $\mathbf{A}$ of size $n \times n$. This establishes that spatial samples can be substituted by a comparable number of space-time samples. We provide an efficient implementation of the proposed IRLS algorithm with space complexity of order $O(r n T)$ and per-iteration time complexity linear in $n$. Numerical experiments for transition operators based on several graph models confirm that the theoretical findings accurately track empirical phase transitions, and illustrate the applicability and scalability of the proposed algorithm.
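The lifting step above can be checked numerically: stacking the observed powers into a block-Hankel matrix with blocks H[i, j] = A^{i+j-1} gives a matrix that factors as [A; A^2; ...][I, A, ...], so its rank is at most n (indeed at most rank(A)) even though it is much larger than A. The sketch below, with illustrative sizes, is one natural such embedding; the paper's construction may differ in details.

```python
import numpy as np

rng = np.random.default_rng(0)
n, T = 6, 7
A = rng.standard_normal((n, n))            # full-rank transition operator
A /= np.linalg.norm(A, 2)                  # normalize to keep powers bounded

powers = [np.linalg.matrix_power(A, t) for t in range(1, T + 1)]
p = (T + 1) // 2                           # block-Hankel with p x (T + 1 - p) blocks
H = np.block([[powers[i + j] for j in range(T + 1 - p)] for i in range(p)])

print("A:", A.shape, "rank", np.linalg.matrix_rank(A))
print("H:", H.shape, "rank", np.linalg.matrix_rank(H), "(low rank relative to its size)")
```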
Sketch-and-project is a framework that unifies many known iterative methods for solving linear systems and their variants, as well as further extensions to nonlinear optimization problems. It includes popular methods such as randomized Kaczmarz, coordinate descent, and variants of the Newton method in convex optimization, among others. In this paper, we obtain sharp guarantees for the convergence rate of sketch-and-project through new tight spectral bounds for the expected sketched projection matrix. Our estimates reveal a connection between the convergence rate of sketch-and-project and the approximation error of another well-known but seemingly unrelated family of algorithms, which use sketching to accelerate popular matrix factorizations such as QR and SVD. This connection brings us closer to precisely quantifying how the performance of sketch-and-project solvers depends on their sketch size. Our analysis covers not only Gaussian and sub-Gaussian sketching matrices, but also an efficient sparse sketching method known as LESS embeddings. Our experiments back up the theory and demonstrate that even extremely sparse sketches exhibit the same convergence properties in practice.
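The simplest member of the sketch-and-project family is randomized Kaczmarz: each step sketches Ax = b down to a single row and projects the current iterate onto that row's solution hyperplane. A minimal numpy sketch with illustrative sizes:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 200, 50
A = rng.standard_normal((m, n))
x_true = rng.standard_normal(n)
b = A @ x_true                               # consistent linear system

row_norms2 = np.sum(A**2, axis=1)
probs = row_norms2 / row_norms2.sum()        # sample row i with prob ~ ||a_i||^2

x = np.zeros(n)
for _ in range(5000):
    i = rng.choice(m, p=probs)
    a = A[i]
    x += (b[i] - a @ x) / row_norms2[i] * a  # project onto {z : a_i^T z = b_i}

print("error:", np.linalg.norm(x - x_true))
```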
We study tensor-on-tensor regression, where the goal is to connect tensor responses to tensor covariates through a low Tucker rank parameter tensor/matrix without prior knowledge of its intrinsic rank. We propose Riemannian gradient descent (RGD) and Riemannian Gauss-Newton (RGN) methods and cope with the challenge of unknown rank by studying the effect of rank over-parameterization. We provide the first convergence guarantees for general tensor-on-tensor regression by showing that RGD and RGN converge linearly and quadratically, respectively, to a statistically optimal estimate in both rank correctly-parameterized and over-parameterized settings. Our theory reveals an intriguing phenomenon: Riemannian optimization methods naturally adapt to over-parameterization without modifications to their implementation. We also give the first rigorous evidence for a statistical-computational gap in scalar-on-tensor regression under the low-degree polynomials framework. Our theory demonstrates a "blessing of statistical-computational gap" phenomenon: in a wide range of scenarios in tensor-on-tensor regression for tensors of order three or higher, the computationally required sample size matches what is needed with moderate rank over-parameterization when restricting to computationally feasible estimators, while there is no such benefit in the matrix setting. This suggests that moderate rank over-parameterization is essentially "cost-free" in terms of sample size in tensor-on-tensor regression of order three or higher. Finally, we conduct simulation studies to show the merits of the proposed methods and to corroborate our theoretical findings.
We consider interpolation learning in high-dimensional linear regression with Gaussian data, and prove a generic uniform-convergence guarantee on the generalization error of interpolators in an arbitrary hypothesis class in terms of the class's Gaussian width. Applying the generic bound to Euclidean norm balls recovers the consistency result of Bartlett et al. (2020) for minimum-norm interpolators, and confirms a prediction of Zhou et al. (2020) for near-minimal-norm interpolators in the special case of Gaussian data. We demonstrate the generality of the bound by applying it to the simplex, obtaining a novel consistency result for minimum-ℓ1-norm interpolators (basis pursuit). Our results show how norm-based generalization bounds can explain, and be used to analyze, benign overfitting, at least in some settings.
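The two interpolators discussed above are both easy to compute in an overparameterized linear model: the minimum-ℓ2-norm interpolator via the pseudoinverse, and the minimum-ℓ1-norm interpolator (basis pursuit) as a linear program. A small numpy/scipy sketch with illustrative dimensions and a sparse ground truth:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, d = 40, 200                              # fewer samples than features
X = rng.standard_normal((n, d))
w_star = np.zeros(d)
w_star[:3] = 1.0                            # sparse ground truth
y = X @ w_star + 0.1 * rng.standard_normal(n)

w_l2 = np.linalg.pinv(X) @ y                # min ||w||_2 subject to Xw = y

# Basis pursuit: min ||w||_1 s.t. Xw = y, writing w = u - v with u, v >= 0.
res = linprog(np.ones(2 * d), A_eq=np.hstack([X, -X]), b_eq=y, bounds=(0, None))
w_l1 = res.x[:d] - res.x[d:]

for name, w in [("min-l2", w_l2), ("min-l1", w_l1)]:
    print(name, "interpolates:", np.allclose(X @ w, y, atol=1e-6),
          " estimation error:", round(float(np.linalg.norm(w - w_star)), 3))
```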
In this paper, we study the trace regression when a matrix of parameters B* is estimated via the convex relaxation of a rank-regularized regression or via regularized non-convex optimization. It is known that these estimators satisfy near-optimal error bounds under assumptions on the rank, coherence, and spikiness of B*. We start by introducing a general notion of spikiness for B* that provides a generic recipe to prove the restricted strong convexity of the sampling operator of the trace regression and obtain near-optimal and non-asymptotic error bounds for the estimation error. Similar to the existing literature, these results require the regularization parameter to be above a certain theory-inspired threshold that depends on observation noise that may be unknown in practice. Next, we extend the error bounds to cases where the regularization parameter is chosen via cross-validation. This result is significant in that existing theoretical results on cross-validated estimators (Kale et al., 2011; Kumar et al., 2013; Abou-Moustafa and Szepesvari, 2017) do not apply to our setting since the estimators we study are not known to satisfy their required notion of stability. Finally, using simulations on synthetic and real data, we show that the cross-validated estimator selects a near-optimal penalty parameter and outperforms the theory-inspired approach of selecting the parameter.
In a mixed generalized linear model, the objective is to learn multiple signals from unlabeled observations: each sample comes from exactly one signal, but it is not known which one. We consider the prototypical problem of estimating two statistically independent signals in a mixed generalized linear model with Gaussian covariates. Spectral methods are a popular class of estimators which output the top two eigenvectors of a suitable data-dependent matrix. However, despite the wide applicability, their design is still obtained via heuristic considerations, and the number of samples $n$ needed to guarantee recovery is super-linear in the signal dimension $d$. In this paper, we develop exact asymptotics on spectral methods in the challenging proportional regime in which $n, d$ grow large and their ratio converges to a finite constant. By doing so, we are able to optimize the design of the spectral method, and combine it with a simple linear estimator, in order to minimize the estimation error. Our characterization exploits a mix of tools from random matrices, free probability and the theory of approximate message passing algorithms. Numerical simulations for mixed linear regression and phase retrieval display the advantage enabled by our analysis over existing designs of spectral methods.
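A minimal numpy sketch of a generic spectral method of the kind analyzed above, for mixed linear regression with two signals: form a data-dependent matrix D = (1/n) sum_i T(y_i) x_i x_i^T and output its top two eigenvectors. The preprocessing T(y) = y^2 used here is a common default, not the optimized design from the paper, and all problem sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 20000, 100
beta1, beta2 = np.eye(d)[0], np.eye(d)[1]     # two orthogonal unit signals
X = rng.standard_normal((n, d))
label = rng.integers(0, 2, size=n)            # latent label: which signal
y = np.where(label == 0, X @ beta1, X @ beta2) + 0.1 * rng.standard_normal(n)

D = (X * (y**2)[:, None]).T @ X / n           # D = (1/n) sum_i y_i^2 x_i x_i^T
evals, evecs = np.linalg.eigh(D)
V = evecs[:, -2:]                             # top two eigenvectors
span = V @ V.T
print("alignment with each signal:",
      round(float(beta1 @ span @ beta1), 3), round(float(beta2 @ span @ beta2), 3))
```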
Existing results for low-rank matrix recovery largely focus on quadratic loss, which enjoys favorable properties such as restricted strong convexity/smoothness (RSC/RSM) and well-conditioning over all low-rank matrices. However, many interesting problems involve more general, non-quadratic losses, which do not satisfy these properties. For these problems, standard nonconvex approaches such as rank-constrained projected gradient descent (a.k.a. iterative hard thresholding) and Burer-Monteiro factorization can have poor empirical performance, and there is no satisfactory theory guaranteeing the global and fast convergence of these algorithms. In this paper, we show that a critical component in provable low-rank recovery with non-quadratic losses is a regularity projection oracle. This oracle restricts iterates to low-rank matrices within an appropriate bounded set, over which the loss function is well-behaved and satisfies a set of approximate RSC/RSM conditions. Accordingly, we analyze an (averaged) projected gradient method equipped with such an oracle, and prove that it converges globally and linearly. Our results apply to a wide range of non-quadratic low-rank estimation problems, including one-bit matrix sensing/completion, individualized rank aggregation, and, more broadly, generalized linear models with rank constraints.
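For concreteness, here is a minimal numpy sketch of the rank-constrained projected gradient baseline discussed above, applied to one-bit matrix completion with a logistic link: a gradient step on the logistic loss followed by projection onto rank-r matrices via truncated SVD. Note that this is the plain baseline; the paper's method additionally projects onto an appropriate bounded set via the regularity projection oracle. Sizes and step size are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 60, 2
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, n)) / np.sqrt(r)
P = 1 / (1 + np.exp(-M))                     # entrywise logistic link
Y = (rng.random((n, n)) < P).astype(float)   # one-bit observations

def project_rank(X, r):
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

X = np.zeros((n, n))
for _ in range(500):
    grad = 1 / (1 + np.exp(-X)) - Y          # entrywise gradient of logistic loss
    X = project_rank(X - 2.0 * grad, r)      # gradient step, then rank projection

print("relative error:", np.linalg.norm(X - M) / np.linalg.norm(M))
```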
In this paper, we propose the Recursive Importance Sketching algorithm for Rank-constrained least squares Optimization (RISRO). The key step of RISRO is recursive importance sketching, a new sketching framework based on deterministically designed recursive projections, which differs significantly from the randomized sketching in the literature \citep{Mahoney2011randomized,Woodruff2014sketching}. Under this new sketching framework, several existing algorithms in the literature can be reinterpreted, and RISRO offers clear advantages over them. RISRO is easy to implement and computationally efficient: the core procedure in each iteration is to solve a dimension-reduced least squares problem. We establish local quadratic-linear and quadratic convergence rates of RISRO under some mild conditions. We also discover a connection between RISRO and the Riemannian Gauss-Newton algorithm on fixed-rank matrices. The effectiveness of RISRO is demonstrated in two applications in machine learning and statistics: low-rank matrix trace regression and phase retrieval. Simulation studies demonstrate the superior numerical performance of RISRO.
This paper is about a curious phenomenon. Suppose we have a data matrix, which is the superposition of a low-rank component and a sparse component. Can we recover each component individually? We prove that under some suitable assumptions, it is possible to recover both the low-rank and the sparse components exactly by solving a very convenient convex program called Principal Component Pursuit; among all feasible decompositions, simply minimize a weighted combination of the nuclear norm and of the ℓ1 norm. This suggests the possibility of a principled approach to robust principal component analysis, since our methodology and results assert that one can recover the principal components of a data matrix even though a positive fraction of its entries are arbitrarily corrupted. This extends to the situation where a fraction of the entries are missing as well. We discuss an algorithm for solving this optimization problem, and present applications in the area of video surveillance, where our methodology allows for the detection of objects in a cluttered background, and in the area of face recognition, where it offers a principled way of removing shadows and specularities in images of faces.
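A minimal cvxpy sketch of Principal Component Pursuit as stated above: among all decompositions M = L + S, minimize ||L||_* + lam * ||S||_1. The weight lam = 1/sqrt(n) is the standard theoretical choice for n × n matrices; the sizes and corruption level below are illustrative.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, r = 40, 2
L0 = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))   # low-rank part
S0 = np.zeros((n, n))
idx = rng.choice(n * n, size=n * n // 20, replace=False)
S0.flat[idx] = 10 * rng.standard_normal(idx.size)                # sparse gross errors
M = L0 + S0

L, S = cp.Variable((n, n)), cp.Variable((n, n))
lam = 1 / np.sqrt(n)
prob = cp.Problem(cp.Minimize(cp.norm(L, "nuc") + lam * cp.sum(cp.abs(S))),
                  [L + S == M])
prob.solve()
print("low-rank recovery error:",
      np.linalg.norm(L.value - L0) / np.linalg.norm(L0))
```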
We consider the problem of estimating the factors of a rank-1 matrix from i.i.d. Gaussian rank-1 measurements that are nonlinearly transformed and corrupted. Considering two prototypical choices of the nonlinearity, we study the convergence properties of a natural alternating update rule for this nonconvex optimization problem, starting from a random initialization. We show sharp convergence guarantees for a sample-split version of the algorithm by deriving a deterministic recursion that is accurate even in high-dimensional problems. Notably, while the infinite-sample population update is uninformative and suggests exact recovery in a single step, the algorithm (and our deterministic prediction) converges rapidly from a random initialization. Our sharp, non-asymptotic analysis also exposes several fine-grained properties of this problem, including how the nonlinearity and the noise level affect convergence behavior. On a technical level, our results are enabled by showing that the empirical update sequence concentrates around our deterministic prediction, with fluctuations of order $n^{-1/2}$ when each iteration is run with $n$ observations. Our techniques leverage tools from the literature on high-dimensional $M$-estimation, and pave the way for analyzing higher-order iterative algorithms from random initializations on other high-dimensional optimization problems with random data.
We revisit the problem of tolerant distribution testing: given samples from an unknown distribution $p$ over $\{1, \dots, n\}$, is it $\varepsilon_1$-close to or $\varepsilon_2$-far from a reference distribution $q$ (in total variation distance)? Despite significant interest over the past decade, this problem is well understood only in the extreme cases. In the noiseless setting (i.e., $\varepsilon_1 = 0$) the sample complexity is $\Theta(\sqrt{n})$, strongly sublinear in the domain size. At the other end of the spectrum, when $\varepsilon_1 = \varepsilon_2/2$, the sample complexity jumps to the barely sublinear $\Theta(n/\log n)$. However, very little is known about the intermediate regime. We fully characterize the price of tolerance in distribution testing as a function of $n$, $\varepsilon_1$, and $\varepsilon_2$, up to a single $\log n$ factor. Specifically, we show the sample complexity to be \[\tilde\Theta\left(\frac{\sqrt{n}}{\varepsilon_2^{2}} + \frac{n}{\log n} \cdot \max\left\{\frac{\varepsilon_1}{\varepsilon_2^{2}}, \left(\frac{\varepsilon_1}{\varepsilon_2^{2}}\right)^{2}\right\}\right),\] providing a smooth trade-off between the two previously known cases. We also provide a similar characterization for the problem of tolerant equivalence testing, where both $p$ and $q$ are unknown. Surprisingly, in both cases the main quantity dictating the sample complexity is the ratio $\varepsilon_1/\varepsilon_2^2$, and not the more intuitive $\varepsilon_1/\varepsilon_2$. Of particular technical interest is our lower bound framework, which involves novel approximation-theoretic tools required to handle the asymmetry between $\varepsilon_1$ and $\varepsilon_2$, a challenge absent from previous works.
The affine rank minimization problem consists of finding a matrix of minimum rank that satisfies a given system of linear equality constraints. Such problems have appeared in the literature of a diverse set of fields including system identification and control, Euclidean embedding, and collaborative filtering. Although specific instances can often be solved with specialized algorithms, the general affine rank minimization problem is NP-hard, because it contains vector cardinality minimization as a special case. In this paper, we show that if a certain restricted isometry property holds for the linear transformation defining the constraints, the minimum rank solution can be recovered by solving a convex optimization problem, namely the minimization of the nuclear norm over the given affine space. We present several random ensembles of equations where the restricted isometry property holds with overwhelming probability, provided the codimension of the subspace is Ω(r(m + n) log mn), where m, n are the dimensions of the matrix, and r is its rank. The techniques used in our analysis have strong parallels in the compressed sensing framework. We discuss how affine rank minimization generalizes this pre-existing concept and outline a dictionary relating concepts from cardinality minimization to those of rank minimization. We also discuss several algorithmic approaches to solving the norm minimization relaxations, and illustrate our results with numerical examples.
We present an algorithmic framework for quantum-inspired classical algorithms on close-to-low-rank matrices, generalizing the series of results started by Tang's breakthrough quantum-inspired algorithm for recommendation systems [STOC'19]. Motivated by quantum linear algebra algorithms and the quantum singular value transformation (SVT) framework of Gilyén, Su, Low, and Wiebe [STOC'19], we develop classical algorithms for SVT under suitable quantum-inspired sampling assumptions. Our results give compelling evidence that, in the corresponding QRAM data structure input model, quantum SVT does not yield exponential quantum speedups. Since the quantum SVT framework generalizes essentially all known techniques for quantum linear algebra, our results, combined with sampling lemmas from previous work, suffice to generalize all recent results about dequantizing quantum machine learning algorithms. In particular, our classical SVT framework recovers, and often improves, the dequantization results for recommendation systems, principal component analysis, supervised clustering, support vector machines, low-rank regression, and semidefinite program solving. We also give additional dequantization results for low-rank Hamiltonian simulation and discriminant analysis. Our improvements stem from identifying the key feature of the quantum-inspired input model that underlies all prior quantum-inspired results: $\ell^2$-norm sampling allows approximating matrix products in time independent of their dimension. We reduce all our main results to this fact, making our exposition concise, self-contained, and intuitive.
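The fact the abstract reduces everything to is classical length-squared sampling: sampling column indices with probability proportional to squared column norms gives an unbiased estimate of a matrix product from a number of samples independent of the dimensions. A minimal numpy sketch with illustrative sizes:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, p, s = 100, 150, 80, 3000            # s = number of sampled indices
A = rng.standard_normal((m, n))
B = rng.standard_normal((n, p))

col_norms2 = np.sum(A**2, axis=0)
probs = col_norms2 / col_norms2.sum()      # P(k) ~ ||A[:, k]||^2
ks = rng.choice(n, size=s, p=probs)

# Unbiased estimator: average of rank-one terms A[:, k] B[k, :] / (s * P(k)).
approx = (A[:, ks] / (s * probs[ks])) @ B[ks, :]
exact = A @ B
print("relative error:", np.linalg.norm(approx - exact) / np.linalg.norm(exact))
```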
Tensor completion is the natural higher-order generalization of matrix completion, in which the goal is to recover a low-rank tensor from sparse observations of its entries. Existing algorithms are either heuristics without provable guarantees, based on solving large semidefinite programs that are impractical to run, or require strong assumptions such as the factors being nearly orthogonal. In this paper we introduce a new variant of alternating minimization, which in turn is inspired by understanding how the progress measures that guide convergence of alternating minimization in the matrix setting need to be adapted to the tensor setting. We show strong provable guarantees, including showing that our algorithm converges linearly to the true tensors even when the factors are highly correlated, and can be implemented in nearly linear time. Moreover, our algorithm is also highly practical, and we show that we can complete third-order tensors with a thousand dimensions from observing a tiny fraction of their entries. In contrast, and somewhat surprisingly, we show that the standard version of alternating minimization, without our new twist, can converge at a drastically slower rate in practice.
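As context for the variant described above, here is a minimal numpy sketch of the classical baseline it starts from: alternating least squares for a rank-r CP decomposition of a fully observed third-order tensor, where each factor update is a linear least squares problem with the other factors fixed. The sizes are illustrative, and the paper's algorithm adapts this scheme to sparsely observed tensors.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 30, 3
A0, B0, C0 = (rng.standard_normal((n, r)) for _ in range(3))
T = np.einsum('ir,jr,kr->ijk', A0, B0, C0)   # ground-truth rank-r tensor

def khatri_rao(X, Y):                        # column-wise Kronecker product
    return (X[:, None, :] * Y[None, :, :]).reshape(-1, X.shape[1])

A, B, C = (rng.standard_normal((n, r)) for _ in range(3))
for _ in range(50):
    # Each factor update is a least squares problem in one factor.
    A = np.linalg.lstsq(khatri_rao(B, C), T.reshape(n, -1).T, rcond=None)[0].T
    B = np.linalg.lstsq(khatri_rao(A, C),
                        T.transpose(1, 0, 2).reshape(n, -1).T, rcond=None)[0].T
    C = np.linalg.lstsq(khatri_rao(A, B),
                        T.transpose(2, 0, 1).reshape(n, -1).T, rcond=None)[0].T

T_hat = np.einsum('ir,jr,kr->ijk', A, B, C)
print("relative fit error:", np.linalg.norm(T_hat - T) / np.linalg.norm(T))
```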
Meta-learning, or learning-to-learn, seeks to design algorithms that can utilize previous experience to rapidly learn new skills or adapt to new environments. Representation learning, a key tool for performing meta-learning, learns a data representation that can transfer knowledge across multiple tasks, which is essential in regimes where data is scarce. Despite a recent surge of interest in the practice of meta-learning, the theoretical underpinnings of meta-learning algorithms are lacking, especially in the context of learning transferable representations. In this paper, we focus on the problem of multi-task linear regression, in which multiple linear regression models share a common, low-dimensional linear representation. Here, we provide provably fast, sample-efficient algorithms to address the dual challenges of (1) learning a common set of features from multiple, related tasks, and (2) transferring this knowledge to new, unseen tasks. Both are central to the general problem of meta-learning. Finally, we complement these results by providing information-theoretic lower bounds on the sample complexity of learning these linear features.
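One simple moment-based construction in the spirit of challenge (1) above: average a rank-one moment estimate across tasks and take the top-r eigenvectors as the common feature subspace. This is a generic method-of-moments sketch, not the paper's exact estimator, and all problem sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, tasks, n = 50, 2, 40, 100
U = np.linalg.qr(rng.standard_normal((d, r)))[0]   # shared representation

M = np.zeros((d, d))
for _ in range(tasks):
    beta = U @ rng.standard_normal(r)              # each task lies in span(U)
    X = rng.standard_normal((n, d))
    y = X @ beta + 0.5 * rng.standard_normal(n)
    b = X.T @ y / n                                # moment estimate, E[b] = beta
    M += np.outer(b, b) / tasks

U_hat = np.linalg.eigh(M)[1][:, -r:]               # top-r eigenvectors
print("subspace error:",
      np.linalg.norm(U_hat @ U_hat.T - U @ U.T, 2))
```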
In this paper, we consider the estimation of a low Tucker rank tensor from a number of noisy linear measurements. The general problem covers many specific examples arising from applications, including tensor regression, tensor completion, and tensor PCA/SVD. We consider an efficient Riemannian Gauss-Newton (RGN) method for low Tucker rank tensor estimation. Different from the generic (super)linear convergence guarantee of RGN in the literature, we prove the first local quadratic convergence guarantee of RGN for low-rank tensor estimation in the noisy setting under some regularity conditions and provide the corresponding estimation error upper bounds. A deterministic estimation error lower bound, which matches the upper bound, is provided that demonstrates the statistical optimality of RGN. The merit of RGN is illustrated through two machine learning applications: tensor regression and tensor SVD. Finally, we provide the simulation results to corroborate our theoretical findings.