Low-rank matrix approximations, such as the truncated singular value decomposition and the rank-revealing QR decomposition, play a central role in data analysis and scientific computing. This work surveys and extends recent research which demonstrates that randomization offers a powerful tool for performing low-rank matrix approximation. These techniques exploit modern computational architectures more fully than classical methods and open the possibility of dealing with truly massive data sets. This paper presents a modular framework for constructing randomized algorithms that compute partial matrix decompositions. These methods use random sampling to identify a subspace that captures most of the action of a matrix. The input matrix is then compressed, either explicitly or implicitly, to this subspace, and the reduced matrix is manipulated deterministically to obtain the desired low-rank factorization. In many cases, this approach beats its classical competitors in terms of accuracy, speed, and robustness. These claims are supported by extensive numerical experiments and a detailed error analysis. The specific benefits of randomized techniques depend on the computational environment. Consider the model problem of finding the k dominant components of the singular value decomposition of an m × n matrix. (i) For a dense input matrix, randomized algorithms require O(mn log(k)) floating-point operations (flops) in contrast with O(mnk) for classical algorithms. (ii) For a sparse input matrix, the flop count matches classical Krylov subspace methods, but the randomized approach is more robust and can easily be reorganized to exploit multi-processor architectures. (iii) For a matrix that is too large to fit in fast memory, the randomized techniques require only a constant number of passes over the data, as opposed to O(k) passes for classical algorithms. In fact, it is sometimes possible to perform matrix approximation with a single pass over the data.
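The two-stage scheme described above is short enough to sketch directly. The following hedged Python implementation uses a Gaussian test matrix with a small oversampling parameter p (the O(mn log(k)) count instead requires a structured randomized transform such as an SRFT); it is an illustrative sketch, not the paper's full framework:

```python
import numpy as np

def randomized_svd(A, k, p=10, rng=None):
    """Two-stage randomized SVD sketch: (1) random sampling identifies a
    subspace capturing most of the action of A; (2) the compressed matrix
    is factored deterministically.  p is an oversampling parameter."""
    rng = rng or np.random.default_rng()
    Y = A @ rng.standard_normal((A.shape[1], k + p))   # stage 1: range finder
    Q, _ = np.linalg.qr(Y)                             # orthonormal basis for range(Y)
    B = Q.T @ A                                        # stage 2: compress to the subspace
    U, s, Vt = np.linalg.svd(B, full_matrices=False)   # deterministic postprocessing
    return Q @ U[:, :k], s[:k], Vt[:k]
```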
We present an algorithmic framework for quantum-inspired classical algorithms on close-to-low-rank matrices, generalizing the series of results started by Tang's breakthrough quantum-inspired algorithm for recommendation systems [STOC'19]. Motivated by quantum linear algebra algorithms and the quantum singular value transformation (SVT) framework of Gilyén, Su, Low, and Wiebe [STOC'19], we develop classical algorithms for SVT that run in time independent of input dimension, under suitable quantum-inspired sampling assumptions. Our results give compelling evidence that in the corresponding QRAM data structure input model, quantum SVT does not yield exponential quantum speedups. Since the quantum SVT framework generalizes essentially all known techniques for quantum linear algebra, our results, combined with sampling lemmas from previous work, suffice to generalize all recent results about dequantizing quantum machine learning algorithms. In particular, our classical SVT framework recovers and often improves the dequantization results on recommendation systems, principal component analysis, supervised clustering, support vector machines, low-rank regression, and semidefinite program solving. We also give additional dequantization results on low-rank Hamiltonian simulation and discriminant analysis. Our improvements come from identifying the key feature of the quantum-inspired input model that is at the core of all prior quantum-inspired results: $\ell^2$-norm sampling can approximate matrix products in time independent of their dimension. We reduce all our main results to this fact, making our exposition concise, self-contained, and intuitive.
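As a sketch of the key primitive identified above — ℓ²-norm sampling approximates matrix products in time independent of their dimension — here is a minimal Monte Carlo estimator in the spirit of classical sampling lemmas; the joint-norm sampling distribution and the function name are illustrative choices, and zero columns/rows are assumed absent:

```python
import numpy as np

def approx_matmul(A, B, s, rng):
    """Unbiased Monte Carlo estimate of A @ B from s sampled outer products,
    with the shared index drawn by l2-norm importance sampling."""
    p = np.linalg.norm(A, axis=0) * np.linalg.norm(B, axis=1)
    p /= p.sum()
    idx = rng.choice(A.shape[1], size=s, p=p)
    # rescale by 1/(s * p) so the expectation equals A @ B exactly
    return (A[:, idx] / (s * p[idx])) @ B[idx, :]
```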
Randomized algorithms based on sketching have become workhorse tools in low-rank matrix approximation. To use these algorithms safely in applications, they should be coupled with diagnostics to assess the quality of approximation. To meet this need, this paper proposes a jackknife resampling method to estimate the variability of the output of a randomized matrix computation. The variability estimate can recognize that a computation requires additional data or that the computation is intrinsically unstable. As examples, the paper studies jackknife estimates for two randomized low-rank matrix approximation algorithms. In each case, the operation count for the jackknife estimate is independent of the dimensions of the target matrix. In numerical experiments, the estimator accurately assesses variability, and it also provides an order-of-magnitude estimate of the mean-square error.
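A minimal sketch of the jackknife idea, taking a basic sketch-based rank-k approximation as the computation under study: form leave-one-out replicates from the columns of a single random sketch and measure their spread. This naive version recomputes each replicate from scratch, unlike the paper's estimator, whose operation count is independent of the matrix dimensions:

```python
import numpy as np

def jackknife_variability(A, k, s, rng):
    """Leave-one-out spread of a sketch-based rank-k approximation
    (naive O(s x cost) illustration of the jackknife diagnostic)."""
    Y = A @ rng.standard_normal((A.shape[1], s))       # one random sketch
    def rank_k(Ysub):
        Q, _ = np.linalg.qr(Ysub)
        U, sig, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
        return (Q @ U[:, :k]) @ np.diag(sig[:k]) @ Vt[:k]
    reps = [rank_k(np.delete(Y, j, axis=1)) for j in range(s)]
    Abar = sum(reps) / s
    return np.sqrt(sum(np.linalg.norm(R - Abar, "fro") ** 2 for R in reps))
```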
Sketch-and-project is a framework which unifies many known iterative methods for solving linear systems and their variants, as well as further extensions to nonlinear optimization problems. It includes popular methods such as randomized Kaczmarz, coordinate descent, variants of Newton's method for convex optimization, and others. In this paper, we obtain sharp guarantees for the convergence rate of sketch-and-project methods via new tight spectral bounds for the expected sketched projection matrix. Our estimates reveal a connection between the convergence rate of sketch-and-project and the approximation error of another well-known but seemingly unrelated family of algorithms, which use sketching to accelerate popular matrix factorizations such as QR and SVD. This connection brings us closer to precisely quantifying how the performance of sketch-and-project solvers depends on their sketch size. Our analysis covers not only Gaussian and sub-Gaussian sketching matrices, but also an efficient sparse sketching method known as LESS embeddings. Our experiments back up the theory and demonstrate that even extremely sparse sketches exhibit the same convergence properties in practice.
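As a concrete instance of the framework, here is a hedged sketch of randomized Kaczmarz — sketch-and-project where each step sketches Ax = b down to a single row, sampled with probability proportional to its squared norm, and projects the iterate onto that row's solution set:

```python
import numpy as np

def randomized_kaczmarz(A, b, n_iter=5000, rng=None):
    """Randomized Kaczmarz, the simplest sketch-and-project instance."""
    rng = rng or np.random.default_rng()
    p = np.sum(A**2, axis=1); p /= p.sum()     # row sampling distribution
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        i = rng.choice(A.shape[0], p=p)
        # project x onto the hyperplane {z : A[i] @ z = b[i]}
        x += (b[i] - A[i] @ x) / (A[i] @ A[i]) * A[i]
    return x
```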
This paper concerns the approximation of smooth, high-dimensional functions from limited samples using polynomials. This task lies at the heart of many applications in computational science and engineering — notably those arising from parametric modelling and uncertainty quantification. It is common to use Monte Carlo (MC) sampling in such applications, so as not to succumb to the curse of dimensionality. However, it is well known that such a strategy is theoretically suboptimal. There are many polynomial spaces of dimension $n$ for which the sample complexity scales log-quadratically in $n$. This well-documented phenomenon has led to a concerted effort to design improved, in fact near-optimal, strategies whose sample complexities scale log-linearly, or even linearly, in $n$. Paradoxically, in this work we show that MC is actually a very good strategy in high dimensions. We first document this phenomenon via several numerical examples. Next, we present a theoretical analysis that resolves this paradox for holomorphic functions of infinitely many variables. We show that there is a least-squares scheme based on $m$ MC samples whose error decays algebraically fast in $m/\log(m)$, with a rate that is the same as that of the best $n$-term polynomial approximation. This result is non-constructive, since it assumes knowledge of a suitable polynomial space in which to perform the approximation. We next present a scheme based on compressed sensing that achieves the same rate, except for a larger polylogarithmic factor. This scheme is practical, and numerically it performs as well as, and sometimes better than, well-known adaptive least-squares schemes. Overall, our findings demonstrate that MC sampling is eminently suitable for smooth function approximation when the dimension is sufficiently high. Hence the benefits of improved sampling strategies are generically limited to lower-dimensional settings.
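A hedged one-dimensional illustration of the baseline scheme — least squares in a Legendre basis from uniform Monte Carlo samples; the Runge-type test function, sample count, and space dimension are arbitrary choices:

```python
import numpy as np
from numpy.polynomial import legendre

rng = np.random.default_rng(0)
f = lambda x: 1.0 / (1.0 + 25 * x**2)      # smooth test function on [-1, 1]

m, n = 200, 20                             # m MC samples, n-dimensional polynomial space
x = rng.uniform(-1, 1, m)                  # Monte Carlo sample points
V = legendre.legvander(x, n - 1)           # least-squares design matrix, Legendre basis
c, *_ = np.linalg.lstsq(V, f(x), rcond=None)

xt = np.linspace(-1, 1, 1000)              # measure the uniform approximation error
print(np.max(np.abs(legendre.legval(xt, c) - f(xt))))
```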
Approximate message passing (AMP) is an effective iterative paradigm for solving high-dimensional statistical problems. However, prior AMP theory, which focused mostly on high-dimensional asymptotics, fell short of predicting the AMP dynamics when the number of iterations surpasses $o\big(\frac{\log n}{\log\log n}\big)$ (with $n$ the problem dimension). To address this shortcoming, this paper develops a non-asymptotic framework for understanding AMP in spiked matrix estimation. Built upon new decompositions of the AMP updates and controllable residual terms, we lay out an analysis recipe to characterize the finite-sample behavior of AMP in the presence of an independent initialization, which is further generalized to allow for spectral initialization. As two concrete consequences of the proposed analysis recipe: (i) when solving $\mathbb{Z}_2$ synchronization, we predict the behavior of spectrally initialized AMP for up to $O\big(\frac{n}{\mathrm{poly}\log n}\big)$ iterations, showing that the algorithm succeeds without the need of a subsequent refinement stage (as conjectured recently by \citet{celentano2021local}); (ii) we characterize the non-asymptotic behavior of AMP in sparse PCA (in the spiked Wigner model) for a broad range of signal-to-noise ratios.
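The following is a schematic AMP recursion for a spiked Wigner model with spectral initialization; the tanh denoiser, its scaling, and the Onsager correction shown here are illustrative simplifications, not the precise iterates analyzed in the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
n, snr = 2000, 2.0
x = rng.choice([-1.0, 1.0], size=n)                  # planted sign vector
W = rng.standard_normal((n, n)); W = (W + W.T) / np.sqrt(2)
Y = (snr / n) * np.outer(x, x) + W / np.sqrt(n)      # spiked Wigner observation

u = np.linalg.eigh(Y)[1][:, -1] * np.sqrt(n)         # spectral initialization
f_prev = np.zeros(n)
for t in range(10):
    f = np.tanh(snr * u)                             # heuristic denoiser
    onsager = snr * np.mean(1.0 - f**2)              # empirical mean of f'(u)
    u, f_prev = Y @ f - onsager * f_prev, f          # AMP update with Onsager term
print(abs(np.mean(np.sign(u) * x)))                  # overlap with the truth
```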
This survey aims to provide an introduction to linear models and the theories behind them. Our goal is to give a rigorous introduction to readers with prior exposure to ordinary least squares. In machine learning, the output is usually a nonlinear function of the input. Deep learning even aims to find nonlinear dependences with many layers, which require a large amount of computation. However, most of these algorithms build upon simple linear models. We then describe linear models from different viewpoints and find the properties and theories behind them. The linear model is the main technique in regression problems, and its primary tool is the least squares approximation, which minimizes a sum of squared errors. This is a natural choice when we are interested in finding the regression function that minimizes the corresponding expected squared error. This survey is primarily a summary of the purpose and significance of the important theories behind linear models, e.g., distribution theory and minimum variance estimators. We first describe ordinary least squares from three different points of view, upon which we disturb the model with random noise and Gaussian noise. With Gaussian noise, the model gives rise to a likelihood, so we introduce the maximum likelihood estimator. It also develops some distribution theory via this Gaussian disturbance. The distribution theory of least squares will help us answer various questions and introduce related applications. We then prove that least squares is the best unbiased linear estimator in the mean-squared-error sense and, most importantly, that it actually approaches the theoretical limit. We conclude with linear models under the Bayesian approach and beyond.
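A minimal sketch of the basic objects discussed: ordinary least squares via the normal equations (which, under Gaussian noise, is also the maximum likelihood estimator), together with the residual-based unbiased estimate of the noise variance:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 5
X = rng.standard_normal((n, d))
beta = rng.standard_normal(d)
y = X @ beta + 0.1 * rng.standard_normal(n)         # Gaussian-noise linear model

beta_hat = np.linalg.solve(X.T @ X, X.T @ y)        # normal equations (= Gaussian MLE)
beta_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)  # orthogonal factorization route
print(np.allclose(beta_hat, beta_lstsq))

# unbiased estimate of the noise variance from the residuals
sigma2_hat = np.sum((y - X @ beta_hat) ** 2) / (n - d)
```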
We study active sampling algorithms for linear regression, which aim to query only a small number of entries of a target vector $b \in \mathbb{R}^n$ and output a near minimizer to $\min_{x \in \mathbb{R}^d} \|Ax - b\|$, where $A \in \mathbb{R}^{n \times d}$ is a design matrix and $\|\cdot\|$ is some loss function. For $\ell_p$ norm regression for any $0 < p < \infty$, we give an algorithm based on Lewis weight sampling that outputs a $(1+\epsilon)$-approximate solution using just $\tilde{O}(d^{\max(1, p/2)}/\mathrm{poly}(\epsilon))$ queries to $b$. We show that this dependence on $d$ is optimal, up to logarithmic factors. Our result resolves a recent open question of Chen and Dereziński, who gave near-optimal bounds for the $\ell_1$ norm, and suboptimal bounds for $\ell_p$ regression with $p \in (1,2)$. We also provide the first total sensitivity upper bound of $O(d^{\max\{1, p/2\}} \log^2 n)$ for loss functions with at most degree-$p$ polynomial growth. This improves a recent result of Tukan, Maalouf, and Feldman. By combining this with our techniques for the $\ell_p$ regression result, we obtain an active regression algorithm making $\tilde{O}(d^{1+\max\{1, p/2\}}/\mathrm{poly}(\epsilon))$ queries, answering another open question of Chen and Dereziński. For the important special case of the Huber loss, we further improve our bound to an active sample complexity of $\tilde{O}(d^{(1+\sqrt{2})/2}/\epsilon^c)$ and a non-active sample complexity of $\tilde{O}(d^{4-2\sqrt{2}}/\epsilon^c)$, improving a previous $d^4$ bound for Huber regression due to Clarkson and Woodruff. Our sensitivity bounds have further implications, improving a variety of previous results using sensitivity sampling, including Orlicz norm subspace embeddings and robust subspace approximation. Finally, our active sampling results give the first sublinear time algorithms under every $\ell_p$ norm.
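For the special case p = 2, Lewis weights coincide with ordinary statistical leverage scores, and the sampling scheme admits a short hedged sketch (illustrative only, not the paper's general ℓp algorithm); note that only the sampled entries of b are ever read, which is the active-learning point:

```python
import numpy as np

def leverage_scores(A):
    Q, _ = np.linalg.qr(A)
    return np.sum(Q**2, axis=1)          # row leverage scores, sum = d

def sampled_regression(A, b, m, rng):
    """Approximate least squares from m rows sampled by leverage scores."""
    p = leverage_scores(A); p /= p.sum()
    idx = rng.choice(A.shape[0], size=m, p=p)
    w = 1.0 / np.sqrt(m * p[idx])        # importance-sampling reweighting
    x, *_ = np.linalg.lstsq(A[idx] * w[:, None], b[idx] * w, rcond=None)
    return x
```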
We consider a problem of considerable practical interest: the recovery of a data matrix from a sampling of its entries. Suppose that we observe m entries selected uniformly at random from a matrix M. Can we complete the matrix and recover the entries that we have not seen? We show that one can perfectly recover most low-rank matrices from what appears to be an incomplete set of entries. We prove that if the number m of sampled entries obeys m ≥ C n^{1.2} r log n for some positive numerical constant C, then with very high probability, most n × n matrices of rank r can be perfectly recovered by solving a simple convex optimization program. This program finds the matrix with minimum nuclear norm that fits the data. The condition above assumes that the rank is not too large. However, if one replaces the 1.2 exponent with 1.25, then the result holds for all values of the rank. Similar results hold for arbitrary rectangular matrices as well. Our results are connected with the recent literature on compressed sensing, and show that objects other than signals and images can be perfectly reconstructed from very limited information.
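A hedged sketch of the convex program using cvxpy; the mask-based encoding of the equality constraints and the problem sizes are illustrative choices:

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, r = 30, 2
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))  # rank-r target
mask = (rng.random((n, n)) < 0.5).astype(float)                # observed entries

X = cp.Variable((n, n))
prob = cp.Problem(cp.Minimize(cp.norm(X, "nuc")),              # minimum nuclear norm
                  [cp.multiply(mask, X) == mask * M])          # ... that fits the data
prob.solve()
print(np.linalg.norm(X.value - M) / np.linalg.norm(M))         # ~0 on exact recovery
```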
Randomly pivoted Cholesky (RPCholesky) is a natural algorithm for computing a rank-k approximation of an N × N positive semidefinite (PSD) matrix. RPCholesky can be implemented with just a few lines of code. It requires only (k+1)N entry evaluations and O(k^2 N) additional arithmetic operations. This paper offers the first serious investigation of its experimental and theoretical behavior. Empirically, RPCholesky matches or improves on the performance of alternative algorithms for low-rank PSD approximation. Moreover, RPCholesky provably achieves near-optimal approximation guarantees. The simplicity, effectiveness, and robustness of the algorithm strongly support its use in scientific computing and machine learning applications.
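Since the abstract emphasizes that the method fits in a few lines of code, here is a hedged sketch; it touches only k columns of A plus the diagonal, and assumes k does not exceed the numerical rank so the residual diagonal stays positive:

```python
import numpy as np

def rpcholesky(A, k, rng):
    """Randomly pivoted partial Cholesky of a PSD matrix A (N x N).
    Returns F with A ~ F @ F.T using k columns plus the diagonal."""
    n = A.shape[0]
    F = np.zeros((n, k))
    d = np.diag(A).astype(float).copy()       # diagonal of the residual
    for i in range(k):
        j = rng.choice(n, p=d / d.sum())      # pivot ~ residual diagonal
        g = A[:, j] - F[:, :i] @ F[j, :i]     # residual column at the pivot
        F[:, i] = g / np.sqrt(g[j])
        d = np.maximum(d - F[:, i] ** 2, 0.0) # downdate the residual diagonal
    return F
```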
This paper is about a curious phenomenon. Suppose we have a data matrix, which is the superposition of a low-rank component and a sparse component. Can we recover each component individually? We prove that under some suitable assumptions, it is possible to recover both the low-rank and the sparse components exactly by solving a very convenient convex program called Principal Component Pursuit; among all feasible decompositions, simply minimize a weighted combination of the nuclear norm and of the ℓ1 norm. This suggests the possibility of a principled approach to robust principal component analysis since our methodology and results assert that one can recover the principal components of a data matrix even though a positive fraction of its entries are arbitrarily corrupted. This extends to the situation where a fraction of the entries are missing as well. We discuss an algorithm for solving this optimization problem, and present applications in the area of video surveillance, where our methodology allows for the detection of objects in a cluttered background, and in the area of face recognition, where it offers a principled way of removing shadows and specularities in images of faces.
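A hedged sketch of one standard way to solve Principal Component Pursuit: an augmented-Lagrangian iteration alternating singular-value thresholding for the low-rank part with entrywise shrinkage for the sparse part. The parameter choices follow common defaults, not necessarily the paper's:

```python
import numpy as np

def svt(X, tau):                                    # singular-value thresholding
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def shrink(X, tau):                                 # entrywise soft threshold
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def pcp(M, n_iter=200):
    """Principal Component Pursuit via a simple ALM scheme (sketch)."""
    lam = 1.0 / np.sqrt(max(M.shape))
    mu = 0.25 * M.size / np.abs(M).sum()
    L = np.zeros_like(M); S = np.zeros_like(M); Y = np.zeros_like(M)
    for _ in range(n_iter):
        L = svt(M - S + Y / mu, 1.0 / mu)           # low-rank update
        S = shrink(M - L + Y / mu, lam / mu)        # sparse update
        Y = Y + mu * (M - L - S)                    # dual update
    return L, S
```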
The Forster transform is a method of regularizing a dataset by placing it in {\em radial isotropic position} while maintaining some of its essential properties. Forster transforms have played a key role in a diverse range of settings spanning computer science and functional analysis. Prior work had given {\em weakly} polynomial time algorithms for computing Forster transforms, when they exist. Our main result is the first {\em strongly polynomial time} algorithm to compute an approximate Forster transform of a given dataset or certify that no such transformation exists. By leveraging our strongly polynomial Forster algorithm, we obtain the first strongly polynomial time algorithm for {\em distribution-free} PAC learning of halfspaces. This learning result is surprising because {\em proper} PAC learning of halfspaces is {\em equivalent} to linear programming. Our learning approach extends to give a strongly polynomial halfspace learner in the presence of random classification noise and, more generally, Massart noise.
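For intuition only, here is a naive alternating heuristic for radial isotropic position — rescale points to the unit sphere, then whiten the second moment, and repeat. This is emphatically not the strongly polynomial algorithm of the result above, and it carries no convergence certificate:

```python
import numpy as np

def forster_iterate(X, n_iter=500):
    """Alternating heuristic for radial isotropic position.  X holds one
    point per row; returns a linear map A such that the normalized points
    A x_i / ||A x_i|| are (approximately) in isotropic position."""
    n, d = X.shape
    A = np.eye(d)
    for _ in range(n_iter):
        Y = A @ X.T
        Y = Y / np.linalg.norm(Y, axis=0)             # radial projection
        M = (Y @ Y.T) * d / n                         # second moment (target: I)
        A = np.linalg.inv(np.linalg.cholesky(M)) @ A  # whiten and compose
    return A
```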
The affine rank minimization problem consists of finding a matrix of minimum rank that satisfies a given system of linear equality constraints. Such problems have appeared in the literature of a diverse set of fields including system identification and control, Euclidean embedding, and collaborative filtering. Although specific instances can often be solved with specialized algorithms, the general affine rank minimization problem is NP-hard, because it contains vector cardinality minimization as a special case. In this paper, we show that if a certain restricted isometry property holds for the linear transformation defining the constraints, the minimum rank solution can be recovered by solving a convex optimization problem, namely the minimization of the nuclear norm over the given affine space. We present several random ensembles of equations where the restricted isometry property holds with overwhelming probability, provided the codimension of the subspace is Ω(r(m + n) log mn), where m, n are the dimensions of the matrix, and r is its rank. The techniques used in our analysis have strong parallels in the compressed sensing framework. We discuss how affine rank minimization generalizes this pre-existing concept and outline a dictionary relating concepts from cardinality minimization to those of rank minimization. We also discuss several algorithmic approaches to solving the norm minimization relaxations, and illustrate our results with numerical examples.
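A hedged sketch of one algorithmic approach to the relaxation: proximal gradient descent on the penalized nuclear-norm objective, whose proximal step is singular-value shrinkage. The random Gaussian measurement map in the usage lines is an illustrative ensemble of the kind for which restricted isometry holds:

```python
import numpy as np

def nucnorm_prox_grad(Aop, Aadj, b, shape, lam, step, n_iter=500):
    """Proximal gradient for  min_X 0.5*||A(X)-b||^2 + lam*||X||_*
    (Aop/Aadj are the measurement map and its adjoint; step <= 1/||A||^2)."""
    X = np.zeros(shape)
    for _ in range(n_iter):
        G = Aadj(Aop(X) - b)                                  # data-fit gradient
        U, s, Vt = np.linalg.svd(X - step * G, full_matrices=False)
        X = U @ np.diag(np.maximum(s - step * lam, 0.0)) @ Vt # SV shrinkage
    return X

# usage with an illustrative random Gaussian measurement ensemble
rng = np.random.default_rng(0)
m, n, r, p = 20, 20, 2, 300
S = rng.standard_normal((p, m * n)) / np.sqrt(p)
X0 = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))
Aop = lambda X: S @ X.ravel()
Aadj = lambda y: (S.T @ y).reshape(m, n)
step = 1.0 / np.linalg.norm(S, 2) ** 2
Xhat = nucnorm_prox_grad(Aop, Aadj, Aop(X0), (m, n), lam=1e-3, step=step)
```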
Koopman operators are infinite-dimensional operators that globally linearize nonlinear dynamical systems, making their spectral information valuable for understanding dynamics. However, Koopman operators can have continuous spectra and infinite-dimensional invariant subspaces, so computing their spectral information poses considerable challenges. This paper describes data-driven algorithms with rigorous convergence guarantees for computing spectral information of Koopman operators from trajectory data. We introduce residual dynamic mode decomposition (ResDMD), which provides the first scheme for computing the spectra and pseudospectra of general Koopman operators from snapshot data without spectral pollution. Using the resolvent operator and ResDMD, we also compute smoothed approximations of spectral measures associated with measure-preserving dynamical systems. We prove explicit convergence theorems for our algorithms, which can achieve high-order convergence even for chaotic systems, when computing the density of the continuous spectrum and the discrete spectrum. We demonstrate our algorithms on the tent map, Gauss iterated map, nonlinear pendulum, double pendulum, Lorenz system, and an $11$-dimensional extended Lorenz system. Finally, we provide kernelized variants of our algorithms for dynamical systems with a high-dimensional state space. This allows us to compute the spectral measure associated with the dynamics of a protein molecule with a 20,046-dimensional state space, and to compute nonlinear Koopman modes with error bounds for turbulent flow past aerofoils with Reynolds number $>10^5$, which has a 295,122-dimensional state space.
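A hedged sketch of the residual idea at the core of ResDMD: after forming the EDMD matrix, each candidate eigenpair is scored by a data-driven relative residual, and pairs with large residuals are flagged as spectral pollution. The dictionary matrices here are generic placeholders (one row per snapshot):

```python
import numpy as np

def edmd_with_residuals(PsiX, PsiY):
    """EDMD eigenpairs plus ResDMD-style residuals.  PsiX / PsiY hold the
    dictionary evaluated at snapshots x_m and at their images y_m = F(x_m).
    Small residual = trustworthy eigenpair; large residual = pollution."""
    K, *_ = np.linalg.lstsq(PsiX, PsiY, rcond=None)   # EDMD Koopman matrix
    lams, G = np.linalg.eig(K)
    res = np.array([
        np.linalg.norm(PsiY @ g - lam * (PsiX @ g)) / np.linalg.norm(PsiX @ g)
        for lam, g in zip(lams, G.T)
    ])
    return lams, G, res
```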
We study iterative methods based on Krylov subspaces for low-rank approximation under any Schatten-$p$ norm. Here, given access to a matrix $A$ through matrix-vector products, an accuracy parameter $\epsilon$, and a target rank $k$, the goal is to find a rank-$k$ matrix $Z$ with orthonormal columns such that $\|A(I - ZZ^\top)\|_{S_p} \leq (1+\epsilon)\min_{U^\top U = I_k}\|A(I - UU^\top)\|_{S_p}$, where $\|M\|_{S_p}$ denotes the $\ell_p$ norm of the singular values of $M$. For the special cases of $p=2$ (Frobenius norm) and $p=\infty$ (spectral norm), Musco and Musco (NeurIPS 2015) obtained an algorithm based on Krylov methods that uses $\tilde{O}(k/\sqrt{\epsilon})$ matrix-vector products, improving on the naive $\tilde{O}(k/\epsilon)$ dependence obtainable by the power method, where $\tilde{O}$ suppresses poly$(\log(dk/\epsilon))$ factors. Our main result is an algorithm that uses only $\tilde{O}(kp^{1/6}/\epsilon^{1/3})$ matrix-vector products and works for all $p \geq 1$. For $p = 2$, our bound improves the previous $\tilde{O}(k/\epsilon^{1/2})$ bound to $\tilde{O}(k/\epsilon^{1/3})$. Since the Schatten-$p$ and Schatten-$\infty$ norms are the same up to a $(1+\epsilon)$-factor when $p \geq (\log d)/\epsilon$, our bound recovers the result of Musco and Musco for $p = \infty$. Further, we prove a matrix-vector query lower bound of $\Omega(1/\epsilon^{1/3})$ for any fixed constant $p \geq 1$, showing that, surprisingly, $\tilde{\Theta}(1/\epsilon^{1/3})$ is the optimal complexity for constant $k$. To obtain our results, we introduce several new techniques, including optimizing over multiple Krylov subspaces simultaneously, as well as pinching inequalities for partitioned operators. Our lower bound for $p \in [1,2]$ uses the Araki-Lieb-Thirring trace inequality, whereas for $p > 2$ we appeal to a norm-compression inequality for aligned partitioned operators.
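A hedged sketch of the baseline this line of work builds on — randomized block Krylov iteration in the style of Musco and Musco, not the new multi-subspace algorithm of the paper; for numerical stability one would orthonormalize each block before the next multiplication:

```python
import numpy as np

def block_krylov_lowrank(A, k, q, rng):
    """Rank-k approximation from the block Krylov subspace
    span[A G, (A A^T) A G, ..., (A A^T)^q A G] with a random start block G."""
    G = rng.standard_normal((A.shape[1], k))
    V = A @ G
    blocks = [V]
    for _ in range(q):
        V = A @ (A.T @ V)                 # one block Krylov step
        blocks.append(V)
    Q, _ = np.linalg.qr(np.hstack(blocks))
    U, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    return Q @ U[:, :k], s[:k], Vt[:k]
```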
We provide a theoretical analysis of a special neural network architecture, termed operator recurrent neural networks, for approximating nonlinear functions whose inputs are linear operators. Such functions commonly arise in solution algorithms for inverse boundary value problems. Traditional neural networks treat input data as vectors, and thus they do not effectively capture the multiplicative structure associated with the linear operators that correspond to the data in such inverse problems. We therefore introduce a new family of architectures that resemble standard neural networks, but in which the input data acts multiplicatively on vectors. Motivated by compact operators appearing in boundary control and in the analysis of inverse boundary value problems for the wave equation, we promote structure and sparsity in selected weight matrices of the network. After describing this architecture, we study its representation properties as well as its approximation properties. We also show that an explicit regularization can be introduced, which can be derived from the mathematical analysis of the aforementioned inverse problems and which leads to certain guarantees on the generalization properties. We observe that sparsity of the weight matrices improves the generalization estimates. Lastly, we discuss how operator recurrent networks can be viewed as a deep-learning analogue of algorithms such as the boundary control method for reconstructing the unknown wave speed in the acoustic wave equation from boundary measurements.
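A schematic single layer, with hypothetical weight names of my choosing: the point of the architecture is that the input operator A acts multiplicatively on the hidden state, rather than being flattened into a feature vector as a standard network would do:

```python
import numpy as np

def operator_rnn_layer(A, h, W, B, b):
    """One schematic operator-recurrent layer: the input operator A
    (a d x d matrix) multiplies the hidden state h inside the update."""
    return np.maximum(0.0, W @ h + B @ (A @ h) + b)   # ReLU nonlinearity
```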
The classical development of neural networks has primarily focused on learning mappings between finite-dimensional Euclidean spaces or finite sets. We propose a generalization of neural networks to learn operators that map between infinite-dimensional function spaces. We formulate the approximation of operators by a composition of a class of linear integral operators and nonlinear activation functions, so that the composed operator can approximate complex nonlinear operators. We prove a universal approximation theorem for our architecture. Moreover, we introduce four classes of operator parameterizations: graph-based operators, low-rank operators, multipole graph-based operators, and Fourier operators, and describe efficient algorithms for computing with each one. The proposed neural operators are resolution-invariant: they share the same network parameters across different discretizations of the underlying function space and can be used for zero-shot super-resolution. Numerically, the proposed models show superior performance compared with existing machine-learning-based methodologies on Darcy flow and the Navier-Stokes equation, while being orders of magnitude faster than conventional PDE solvers.
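A hedged, single-channel sketch of the Fourier parameterization in one dimension; the weights stand in for trained parameters, and real architectures add channels and stack several such layers. Resolution invariance shows up directly: the same mode weights apply to any grid size:

```python
import numpy as np

def fourier_layer(u, W_modes, w_local):
    """Minimal 1-D Fourier-operator layer (schematic).  u has shape (n_grid,);
    W_modes is a complex vector of per-mode weights for the lowest modes;
    w_local is a pointwise linear weight.  All weights are hypothetical."""
    k = W_modes.shape[0]
    u_hat = np.fft.rfft(u)
    u_hat[:k] = W_modes * u_hat[:k]            # spectral convolution on low modes
    v = np.fft.irfft(u_hat, n=u.shape[0])
    return np.maximum(0.0, v + w_local * u)    # pointwise path + nonlinearity
```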
We consider the nonlinear inverse problem of learning a transition operator $\mathbf{A}$ from partial observations at different times, in particular from sparse observations of entries of its powers $\mathbf{A},\mathbf{A}^2,\cdots,\mathbf{A}^{T}$. This Spatio-Temporal Transition Operator Recovery problem is motivated by the recent interest in learning time-varying graph signals that are driven by graph operators depending on the underlying graph topology. We address the nonlinearity of the problem by embedding it into a higher-dimensional space of suitable block-Hankel matrices, where it becomes a low-rank matrix completion problem, even if $\mathbf{A}$ is of full rank. For both a uniform and an adaptive random space-time sampling model, we quantify the recoverability of the transition operator via suitable measures of incoherence of these block-Hankel embedding matrices. For graph transition operators these measures of incoherence depend on the interplay between the dynamics and the graph topology. We develop a suitable non-convex iterative reweighted least squares (IRLS) algorithm, establish its quadratic local convergence, and show that, in optimal scenarios, no more than $\mathcal{O}(rn \log(nT))$ space-time samples are sufficient to ensure accurate recovery of a rank-$r$ operator $\mathbf{A}$ of size $n \times n$. This establishes that spatial samples can be substituted by a comparable number of space-time samples. We provide an efficient implementation of the proposed IRLS algorithm with space complexity of order $O(r n T)$ and per-iteration time complexity linear in $n$. Numerical experiments for transition operators based on several graph models confirm that the theoretical findings accurately track empirical phase transitions, and illustrate the applicability and scalability of the proposed algorithm.
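A small sketch of the embedding step, with illustrative sizes: stacking powers of $\mathbf{A}$ into a block-Hankel matrix yields a matrix of rank at most r, even though recovering $\mathbf{A}$ from entries of its powers is a nonlinear problem; the sampling and IRLS recovery stages of the paper are omitted:

```python
import numpy as np

rng = np.random.default_rng(0)
n, r, T = 20, 3, 6
A = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))  # rank-r operator
powers = [np.linalg.matrix_power(A, t) for t in range(1, T + 1)]

# block-Hankel embedding: block (i, j) holds A^{i+j+1}
H = np.block([[powers[i + j] for j in range(T // 2)]
              for i in range(T // 2 + 1)])
print(np.linalg.matrix_rank(H))   # <= r: low-rank despite the nonlinearity in A
```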
Kernel matrices, as well as weighted graphs represented by them, are ubiquitous objects in machine learning, statistics and other related fields. The main drawback of using kernel methods (learning and inference using kernel matrices) is efficiency -- given $n$ input points, most kernel-based algorithms need to materialize the full $n \times n$ kernel matrix before performing any subsequent computation, thus incurring $\Omega(n^2)$ runtime. Breaking this quadratic barrier for various problems has therefore been the subject of extensive research effort. We break the quadratic barrier and obtain $\textit{subquadratic}$ time algorithms for several fundamental linear-algebraic and graph processing primitives, including approximating the top eigenvalue and eigenvector, spectral sparsification, solving linear systems, local clustering, low-rank approximation, arboricity estimation and counting weighted triangles. We build on the recent Kernel Density Estimation framework, which (after preprocessing in time subquadratic in $n$) can return estimates of row/column sums of the kernel matrix. In particular, we develop efficient reductions from $\textit{weighted vertex}$ and $\textit{weighted edge sampling}$ on kernel graphs, $\textit{simulating random walks}$ on kernel graphs, and $\textit{importance sampling}$ on matrices to Kernel Density Estimation and show that we can generate samples from these distributions in $\textit{sublinear}$ (in the support of the distribution) time. Our reductions are the central ingredient in each of our applications and we believe they may be of independent interest. We empirically demonstrate the efficacy of our algorithms on low-rank approximation (LRA) and spectral sparsification, where we observe a $\textbf{9x}$ decrease in the number of kernel evaluations over baselines for LRA and a $\textbf{41x}$ reduction in the graph size for spectral sparsification.
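A hedged sketch of one of the reductions: importance-sample rows of a Gaussian kernel matrix with probability proportional to their row sums. Here the sums are computed exactly in O(n²) for clarity; the point of the paper is that a KDE oracle supplies these same quantities in subquadratic time:

```python
import numpy as np

def sample_rows_by_row_sums(X, bandwidth, s, rng):
    """Sample s row indices of the Gaussian kernel matrix of the points in X
    (one point per row) proportionally to the kernel row sums."""
    sq = np.sum(X**2, axis=1)
    K = np.exp(-(sq[:, None] + sq[None, :] - 2 * X @ X.T) / bandwidth**2)
    p = K.sum(axis=1)                 # row sums = what a KDE oracle estimates
    p /= p.sum()
    return rng.choice(X.shape[0], size=s, p=p), p
```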
Quantum computing has the potential to revolutionize and transform our lives and our understanding of the world. This review aims to provide an accessible introduction to quantum computing, with an emphasis on applications in statistics and data analysis. We begin with an introduction to the basic concepts needed to understand quantum computing and the differences between quantum and classical computing. We describe the core quantum subroutines that serve as building blocks of quantum algorithms. We then review a range of quantum algorithms expected to deliver a computational advantage in statistics and machine learning. We highlight the challenges and opportunities in applying quantum computing to statistical problems, and discuss potential future research directions.