Models in which the covariance matrix has the structure of a sparse matrix plus a low rank perturbation are ubiquitous in machine learning applications. It is often desirable for learning algorithms to take advantage of such structures, avoiding costly matrix computations that often require cubic time and quadratic storage. This is often accomplished by performing operations that maintain such structures, e.g. matrix inversion via the Sherman-Morrison-Woodbury formula. In this paper we consider the matrix square root and inverse square root operations. Given a low rank perturbation to a matrix, we argue that a low-rank approximate correction to the (inverse) square root exists. We do so by establishing a geometric decay bound on the true correction's eigenvalues. We then proceed to frame the correction as the solution of an algebraic Riccati equation, and discuss how a low-rank solution to that equation can be computed. We analyze the approximation error incurred when approximately solving the algebraic Riccati equation, providing spectral and Frobenius norm forward and backward error bounds. Finally, we describe several applications of our algorithms, and demonstrate their utility in numerical experiments.
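As background for the structure-preserving operations mentioned above, the sketch below applies the Sherman-Morrison-Woodbury formula to a sparse-plus-low-rank system; the sizes and the SciPy sparse solver are illustrative assumptions, and the paper's (inverse) square root correction is not implemented here.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def woodbury_solve(S, U, b):
    """Solve (S + U U^T) x = b without forming the dense matrix.

    Sherman-Morrison-Woodbury:
      (S + U U^T)^{-1} = S^{-1} - S^{-1} U (I + U^T S^{-1} U)^{-1} U^T S^{-1},
    so only a sparse factorization of S and a small k x k solve are needed.
    """
    lu = spla.splu(S.tocsc())                   # sparse LU factorization of S
    Sinv_b = lu.solve(b)
    Sinv_U = lu.solve(U)                        # n x k
    cap = np.eye(U.shape[1]) + U.T @ Sinv_U     # small k x k "capacitance" matrix
    return Sinv_b - Sinv_U @ np.linalg.solve(cap, U.T @ Sinv_b)

# Usage on a toy sparse-plus-rank-k system
rng = np.random.default_rng(0)
n, k = 500, 5
S = sp.random(n, n, density=0.01, random_state=0)
S = (S + S.T) * 0.5 + 10.0 * sp.eye(n)   # symmetric, diagonally dominant sparse part
U = rng.standard_normal((n, k)) / np.sqrt(n)
b = np.ones(n)
x = woodbury_solve(S, U, b)
```

No analogous exact finite formula exists for the square root, which is the gap the paper's low-rank Riccati correction addresses.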
Low-rank matrix approximations, such as the truncated singular value decomposition and the rank-revealing QR decomposition, play a central role in data analysis and scientific computing. This work surveys and extends recent research which demonstrates that randomization offers a powerful tool for performing low-rank matrix approximation. These techniques exploit modern computational architectures more fully than classical methods and open the possibility of dealing with truly massive data sets. This paper presents a modular framework for constructing randomized algorithms that compute partial matrix decompositions. These methods use random sampling to identify a subspace that captures most of the action of a matrix. The input matrix is then compressed, either explicitly or implicitly, to this subspace, and the reduced matrix is manipulated deterministically to obtain the desired low-rank factorization. In many cases, this approach beats its classical competitors in terms of accuracy, speed, and robustness. These claims are supported by extensive numerical experiments and a detailed error analysis. The specific benefits of randomized techniques depend on the computational environment. Consider the model problem of finding the k dominant components of the singular value decomposition of an m × n matrix. (i) For a dense input matrix, randomized algorithms require O(mn log(k)) floating-point operations (flops) in contrast with O(mnk) for classical algorithms. (ii) For a sparse input matrix, the flop count matches classical Krylov subspace methods, but the randomized approach is more robust and can easily be reorganized to exploit multi-processor architectures. (iii) For a matrix that is too large to fit in fast memory, the randomized techniques require only a constant number of passes over the data, as opposed to O(k) passes for classical algorithms. In fact, it is sometimes possible to perform matrix approximation with a single pass over the data.
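A minimal sketch of the prototypical algorithm in this framework: a randomized range finder followed by a deterministic SVD of the compressed matrix. The oversampling parameter p is an assumption, not a recommendation from the survey.

```python
import numpy as np

def randomized_svd(A, k, p=10, rng=None):
    """Sketch of the randomized low-rank approximation framework described above.

    1) Random sampling: Y = A @ Omega captures most of the action of A.
    2) Compression: project A onto the subspace Q = orth(Y).
    3) Deterministic step: exact SVD of the small matrix B = Q^T A.
    """
    rng = np.random.default_rng(rng)
    m, n = A.shape
    Omega = rng.standard_normal((n, k + p))
    Q, _ = np.linalg.qr(A @ Omega)          # orthonormal basis for range(A @ Omega)
    B = Q.T @ A                             # (k+p) x n reduced matrix
    Uhat, s, Vt = np.linalg.svd(B, full_matrices=False)
    return Q @ Uhat[:, :k], s[:k], Vt[:k, :]

# Usage on a matrix with rapidly decaying spectrum
rng = np.random.default_rng(0)
A = rng.standard_normal((2000, 300)) @ rng.standard_normal((300, 500))
U, s, Vt = randomized_svd(A, k=20)
```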
The convergence rate of iterative methods for solving a linear system $\mathbf{A}x = b$ typically depends on the condition number of the matrix $\mathbf{A}$. Preconditioning is a common way of accelerating these methods by reducing that condition number in a computationally inexpensive way. In this paper, we revisit the decades-old problem of how best to improve the condition number of $\mathbf{A}$ by left or right diagonal rescaling. We make progress on this question in several directions. First, we provide new bounds for the classic heuristic of scaling $\mathbf{A}$ by its diagonal (a.k.a. Jacobi preconditioning). We prove that this approach reduces the condition number of $\mathbf{A}$ to within a quadratic factor of the best possible scaling. Second, we give a solver for structured mixed packing and covering semidefinite programs (MPC SDPs) that computes such a rescaling of $\mathbf{A}$ in $\widetilde{O}(\text{nnz}(\mathbf{A}) \cdot \text{poly}(\kappa^\star))$ time; this matches the cost of solving the linear system after scaling up to $\widetilde{O}(\text{poly}(\kappa^\star))$ factors. Third, we demonstrate that a sufficiently general width-independent MPC SDP solver would imply near-optimal running times for the scaling problems we consider, as well as for natural variants concerned with measures of average conditioning. Finally, we highlight connections between our preconditioning techniques and semi-random noise models, as well as applications in reducing risk in several statistical regression models.
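For concreteness, a minimal sketch of the classical heuristic discussed above, Jacobi (diagonal) preconditioning, used inside SciPy's conjugate gradient solver; the test matrix is a toy assumption and none of the paper's solvers are implemented here.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def jacobi_preconditioned_cg(A, b):
    """Conjugate gradients with the Jacobi preconditioner M^{-1} = diag(A)^{-1}.

    Rescaling by the diagonal is the classical heuristic analyzed above; it is
    cheap to apply and often drastically reduces the condition number.
    """
    d = A.diagonal()
    M = spla.LinearOperator(A.shape, matvec=lambda x: x / d)   # apply diag(A)^{-1}
    return spla.cg(A, b, M=M)

# Usage: an SPD matrix whose rows/columns have widely varying scales
n = 1000
scales = np.logspace(0, 4, n)
off = 0.1 * np.ones(n - 1)
A = sp.diags([off, scales, off], offsets=[-1, 0, 1], format='csr')
b = np.ones(n)
x, info = jacobi_preconditioned_cg(A, b)      # info == 0 indicates convergence
```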
It is often desirable to reduce the dimensionality of a large data set by projecting it onto a low-dimensional subspace. Matrix sketching has emerged as a powerful technique for performing such dimensionality reduction very efficiently. Even though there is an extensive literature on the worst-case performance of sketching, existing guarantees are typically very different from what is observed in practice. We exploit recent developments in the spectral analysis of random matrices to develop novel techniques that provide provably accurate expressions for the expected value of random projection matrices obtained via sketching. These expressions can be used to characterize the performance of dimensionality reduction in a variety of common machine learning tasks, ranging from low-rank approximation to iterative stochastic optimization. Our results apply to several popular sketching methods, including Gaussian and Rademacher sketches, and they enable a precise analysis of these methods in terms of the spectral properties of the data. Empirical results show that the expressions we derive reflect the actual performance of these sketching methods, down to lower-order effects and even constant factors.
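As a concrete instance of the sketching methods this analysis covers, here is a minimal sketch-and-solve least-squares routine using a Rademacher sketch; the sketch size is an arbitrary illustrative choice, and the paper's expected-value expressions are not computed here.

```python
import numpy as np

def sketch_and_solve_lstsq(A, b, sketch_size, rng=None):
    """Sketch-and-solve least squares with a Rademacher sketch.

    S has i.i.d. +/-1 entries scaled by 1/sqrt(m); solving the sketched
    problem min_x ||S A x - S b|| is one of the tasks whose typical
    (rather than worst-case) behaviour such analyses characterize.
    """
    rng = np.random.default_rng(rng)
    m = sketch_size
    S = rng.choice([-1.0, 1.0], size=(m, A.shape[0])) / np.sqrt(m)
    x, *_ = np.linalg.lstsq(S @ A, S @ b, rcond=None)
    return x

# Usage: a highly overdetermined system (n >> d)
rng = np.random.default_rng(0)
A = rng.standard_normal((20000, 50))
b = A @ rng.standard_normal(50) + 0.01 * rng.standard_normal(20000)
x_sketched = sketch_and_solve_lstsq(A, b, sketch_size=500, rng=1)
```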
We propose an efficient method for approximating natural gradient descent in neural networks which we call Kronecker-factored Approximate Curvature (K-FAC). K-FAC is based on an efficiently invertible approximation of a neural network's Fisher information matrix which is neither diagonal nor low-rank, and in some cases is completely non-sparse. It is derived by approximating various large blocks of the Fisher (corresponding to entire layers) as being the Kronecker product of two much smaller matrices. While only several times more expensive to compute than the plain stochastic gradient, the updates produced by K-FAC make much more progress optimizing the objective, which results in an algorithm that can be much faster than stochastic gradient descent with momentum in practice. And unlike some previously proposed approximate natural-gradient/Newton methods which use high-quality non-diagonal curvature matrices (such as Hessian-free optimization), K-FAC works very well in highly stochastic optimization regimes. This is because the cost of storing and inverting K-FAC's approximation to the curvature matrix does not depend on the amount of data used to estimate it, which is a feature typically associated only with diagonal or low-rank approximations to the curvature matrix.
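A minimal single-layer sketch of the Kronecker-factored idea: approximate a layer's Fisher block as the Kronecker product of two small covariance matrices and invert the factors separately. The layer shapes, damping value, and statistics estimation are illustrative assumptions, not the full K-FAC algorithm.

```python
import numpy as np

def kfac_precondition_single_layer(a, g, grad_W, damping=1e-3):
    """One K-FAC-style preconditioned gradient for a fully connected layer.

    a: layer inputs (batch x n_in), g: back-propagated gradients (batch x n_out).
    The Fisher block is approximated as A ⊗ G with A = E[a a^T], G = E[g g^T],
    so (A ⊗ G)^{-1} vec(grad) corresponds (up to vec conventions) to
    G^{-1} grad_W A^{-1}: two small inverses instead of one huge one.
    """
    batch = a.shape[0]
    A = a.T @ a / batch + damping * np.eye(a.shape[1])     # input covariance
    G = g.T @ g / batch + damping * np.eye(g.shape[1])     # output-gradient covariance
    return np.linalg.solve(G, grad_W) @ np.linalg.inv(A)   # grad_W is n_out x n_in

# Usage with made-up layer sizes
rng = np.random.default_rng(0)
a = rng.standard_normal((128, 300))     # activations
g = rng.standard_normal((128, 100))     # back-propagated gradients
grad_W = g.T @ a / 128                  # plain gradient for the weight matrix
precond_grad = kfac_precondition_single_layer(a, g, grad_W)
```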
This survey aims to provide an introduction to linear models and the theory behind them. Our goal is to give a rigorous introduction to readers with prior exposure to ordinary least squares. In machine learning, the output is usually a nonlinear function of the input. Deep learning even aims to find nonlinear dependencies with many layers, which requires a large amount of computation. However, most of these algorithms build upon simple linear models. We then describe linear models from different views and find the properties and theory behind the models. The linear model is the main technique in regression problems, and its primary tool is the least squares approximation, which minimizes a sum of squared errors. This is a natural choice when we are interested in finding the regression function that minimizes the corresponding expected squared error. This survey is primarily a summary of the purpose and significance of the important theories behind linear models, e.g., distribution theory and the minimum variance estimator. We first describe ordinary least squares from three different points of view, and then perturb the model with random noise and Gaussian noise. Through the Gaussian noise, the model gives rise to a likelihood, so we introduce the maximum likelihood estimator. Some distribution theory is also developed through this Gaussian disturbance. The distribution theory of least squares will help us answer various questions and introduce related applications. We then prove that least squares is the best unbiased linear model in the sense of mean squared error and, most importantly, that it actually approaches the theoretical limit. We finally close with linear models from the Bayesian approach and beyond.
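A short numerical illustration of the survey's starting point: the ordinary least squares estimator obtained from the normal equations, and its coincidence with the maximum likelihood estimator under Gaussian noise. The problem sizes are arbitrary.

```python
import numpy as np

# Ordinary least squares minimizes the sum of squared errors; under the
# Gaussian-noise model it is also the maximum likelihood estimator.
rng = np.random.default_rng(0)
n, d = 200, 5
X = rng.standard_normal((n, d))
beta_true = rng.standard_normal(d)
y = X @ beta_true + 0.1 * rng.standard_normal(n)     # Gaussian-noise model

beta_ols = np.linalg.solve(X.T @ X, X.T @ y)         # normal equations
beta_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)   # numerically preferred route
assert np.allclose(beta_ols, beta_lstsq)

# The Gaussian log-likelihood is a monotone transform of -||y - X beta||^2,
# so it is maximized at the OLS solution; the MLE of the noise variance follows.
sigma2_mle = np.mean((y - X @ beta_ols) ** 2)
```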
The affine rank minimization problem consists of finding a matrix of minimum rank that satisfies a given system of linear equality constraints. Such problems have appeared in the literature of a diverse set of fields including system identification and control, Euclidean embedding, and collaborative filtering. Although specific instances can often be solved with specialized algorithms, the general affine rank minimization problem is NP-hard, because it contains vector cardinality minimization as a special case. In this paper, we show that if a certain restricted isometry property holds for the linear transformation defining the constraints, the minimum rank solution can be recovered by solving a convex optimization problem, namely the minimization of the nuclear norm over the given affine space. We present several random ensembles of equations where the restricted isometry property holds with overwhelming probability, provided the codimension of the subspace is Ω(r(m + n) log mn), where m, n are the dimensions of the matrix, and r is its rank. The techniques used in our analysis have strong parallels in the compressed sensing framework. We discuss how affine rank minimization generalizes this pre-existing concept and outline a dictionary relating concepts from cardinality minimization to those of rank minimization. We also discuss several algorithmic approaches to solving the norm minimization relaxations, and illustrate our results with numerical examples.
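A hedged illustration of the nuclear-norm relaxation on one special case of affine rank minimization, matrix completion, solved by proximal gradient descent (singular value thresholding). The step size, regularization weight, and iteration count are assumptions, and this is not the paper's algorithmic contribution.

```python
import numpy as np

def svt(Z, tau):
    """Proximal operator of the nuclear norm: soft-threshold the singular values."""
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def nuclear_norm_min(mask, B, lam=1.0, step=1.0, iters=300):
    """Proximal gradient for  min_X 0.5*||P_Omega(X) - B||_F^2 + lam*||X||_*,
    where the entry-sampling map P_Omega is one special case of the affine
    measurement operator considered above."""
    X = np.zeros_like(B)
    for _ in range(iters):
        grad = mask * (X - B)                 # gradient of the data-fit term
        X = svt(X - step * grad, step * lam)  # proximal step on the nuclear norm
    return X

# Usage: recover a rank-3 matrix from 40% of its entries
rng = np.random.default_rng(0)
M = rng.standard_normal((80, 3)) @ rng.standard_normal((3, 60))
mask = (rng.random(M.shape) < 0.4).astype(float)
X_hat = nuclear_norm_min(mask, mask * M)
rel_err = np.linalg.norm(X_hat - M) / np.linalg.norm(M)
```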
Natural-gradient descent (NGD) on structured parameter spaces (e.g., low-rank covariances) is computationally challenging due to difficult Fisher matrix computations. We address this issue by using \emph{local-parameter coordinates} to obtain a flexible and efficient NGD method that works well for a wide variety of structured parameterizations. We show four applications where our method (1) generalizes the exponential natural evolution strategy, (2) recovers existing Newton-like algorithms, (3) yields new structured second-order algorithms via matrix groups, and (4) gives new algorithms for learning the covariances of Gaussian and Wishart-based distributions. We show results on a range of problems from deep learning, variational inference, and evolution strategies. Our work opens up a new direction for scalable structured geometric methods.
We study iterative methods based on Krylov subspaces for low-rank approximation under any Schatten-$p$ norm. Here, given access to a matrix $A$ through matrix-vector products, the goal is to compute a $Z$ with $k$ orthonormal columns such that $\|A(I - ZZ^\top)\|_{S_p} \leq (1+\epsilon)\min_{U^\top U = I_k}\|A(I - UU^\top)\|_{S_p}$, where $\|M\|_{S_p}$ denotes the $\ell_p$ norm of the singular values of $M$. For the special cases of $p=2$ (Frobenius norm) and $p=\infty$ (spectral norm), Musco and Musco (NeurIPS 2015) obtained an algorithm based on Krylov methods that uses $\tilde{O}(k/\sqrt{\epsilon})$ matrix-vector products, improving on the naive $\tilde{O}(k/\epsilon)$ dependence obtainable by the power method, where $\tilde{O}$ suppresses poly$(\log(dk/\epsilon))$ factors. Our main result is an algorithm that uses only $\tilde{O}(kp^{1/6}/\epsilon^{1/3})$ matrix-vector products and works for all $p \geq 1$. For $p = 2$, our bound improves the previous $\tilde{O}(k/\epsilon^{1/2})$ bound to $\tilde{O}(k/\epsilon^{1/3})$. Since the Schatten-$p$ and Schatten-$\infty$ norms are the same up to a $(1+\epsilon)$ factor when $p \geq (\log d)/\epsilon$, our bound recovers the result of Musco and Musco for $p = \infty$. Further, we prove a matrix-vector query lower bound of $\Omega(1/\epsilon^{1/3})$ for any fixed constant $p \geq 1$, showing that, surprisingly, $\tilde{\Theta}(1/\epsilon^{1/3})$ is the optimal complexity for constant $k$. To obtain our results we introduce several new techniques, including optimizing over multiple Krylov subspaces simultaneously and inequalities for partitioned operators. Our lower bound for $p \in [1,2]$ uses the Araki-Lieb-Thirring trace inequality, whereas for $p > 2$ we appeal to a norm-compression inequality for partitioned operators.
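To fix ideas, a minimal block Krylov low-rank approximation that accesses $A$ only through matrix-vector (block) products, in the spirit of the methods analyzed above; the number of blocks $q$ is a free parameter here and the paper's optimized multi-subspace algorithm is not implemented.

```python
import numpy as np

def block_krylov_low_rank(A, k, q=5, rng=None):
    """Return Z with k orthonormal columns approximately minimizing ||A (I - Z Z^T)||.

    A block Krylov subspace [A^T Phi, (A^T A) A^T Phi, ..., (A^T A)^q A^T Phi]
    is built from matrix-vector products only, orthonormalized, and the top-k
    directions of A restricted to that subspace are returned.
    """
    rng = np.random.default_rng(rng)
    m, n = A.shape
    Phi = rng.standard_normal((m, k))
    Y = A.T @ Phi
    blocks = [Y]
    for _ in range(q):
        Y = A.T @ (A @ Y)
        blocks.append(Y)
    K, _ = np.linalg.qr(np.hstack(blocks))   # n x k(q+1), orthonormal columns
    B = A @ K                                # project A onto the Krylov block
    _, _, Wt = np.linalg.svd(B, full_matrices=False)
    return K @ Wt[:k, :].T                   # k orthonormal columns in R^n

# Usage: the quantity bounded above is then ||A (I - Z Z^T)|| in a Schatten p-norm
rng = np.random.default_rng(0)
A = rng.standard_normal((1000, 400)) * np.logspace(0, -2, 400)
Z = block_krylov_low_rank(A, k=10)
residual = A - (A @ Z) @ Z.T
```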
As a tool for estimating high-dimensional networks, graphical models are commonly applied to calcium imaging data to estimate functional neuronal connectivity, i.e., the relationships between the activities of neurons. However, in many calcium imaging data sets, the full population of neurons is not recorded simultaneously, but instead in partially overlapping blocks. As first introduced in (Vinci et al. 2019), this leads to the graph quilting problem, in which the goal is to infer the structure of the full graph when only subsets of features are jointly observed. In this paper, we study a novel two-step approach to graph quilting which first imputes the full covariance matrix using low-rank covariance completion techniques before estimating the graph structure. We introduce three approaches to solve this problem: block singular value decomposition, nuclear norm penalization, and non-convex low-rank factorization. While prior work has studied low-rank matrix completion, we address the challenge of blockwise missingness and are the first to investigate the problem in the context of graph learning. We discuss the theoretical properties of the two-step procedure by proving graph selection consistency of one proposed approach via a novel L-infinity-norm error bound for matrix completion with block missingness. We then study the empirical performance of the proposed methods on simulations and real-world data examples, through which we show the efficacy of these methods for estimating functional connectivity from calcium imaging data.
We present an algorithmic framework for quantum-inspired classical algorithms on close-to-low-rank matrices, generalizing the series of results started by Tang's breakthrough quantum-inspired algorithm for recommendation systems [STOC '19]. Motivated by quantum linear algebra algorithms and the quantum singular value transformation (SVT) framework of Gilyén, Su, Low, and Wiebe [STOC '19], we develop classical algorithms for SVT under suitable quantum-inspired sampling assumptions. Our results provide compelling evidence that, in the corresponding QRAM data structure input model, quantum SVT does not yield exponential quantum speedups. Since the quantum SVT framework generalizes essentially all known techniques for quantum linear algebra, our results, combined with sampling lemmas from previous work, suffice to generalize all recent results about dequantizing quantum machine learning algorithms. In particular, our classical SVT framework recovers and often improves the dequantization results on recommendation systems, principal component analysis, supervised clustering, support vector machines, low-rank regression, and semidefinite program solving. We also give additional dequantization results for low-rank Hamiltonian simulation and discriminant analysis. Our improvements come from identifying the key feature of the quantum-inspired input model that is at the core of all prior quantum-inspired results: $\ell^2$-norm sampling can approximate matrix products in time independent of their dimension. We reduce all of our main results to this fact, making our exposition concise, self-contained, and intuitive.
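The primitive everything is reduced to is easy to state in code: approximate a matrix product by sampling rank-one terms with probability proportional to squared column norms. The sketch below shows that standard primitive only, not the full quantum-inspired SVT framework, and the sample count is an arbitrary choice.

```python
import numpy as np

def l2_sample_matrix_product(A, B, num_samples, rng=None):
    """Approximate A @ B by sampling columns of A (rows of B) with probability
    proportional to the squared column norms of A, i.e. l2-norm sampling.
    The estimator is an importance-weighted sum of rank-one terms and is unbiased.
    """
    rng = np.random.default_rng(rng)
    col_norms = np.linalg.norm(A, axis=0) ** 2
    p = col_norms / col_norms.sum()
    idx = rng.choice(A.shape[1], size=num_samples, p=p)
    approx = np.zeros((A.shape[0], B.shape[1]))
    for i in idx:
        approx += np.outer(A[:, i], B[i, :]) / (num_samples * p[i])
    return approx

# Usage
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 5000))
B = rng.standard_normal((5000, 100))
AB_approx = l2_sample_matrix_product(A, B, num_samples=2000, rng=1)
rel_err = np.linalg.norm(AB_approx - A @ B) / np.linalg.norm(A @ B)
```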
In this paper, we provide a review and bibliography of the work on Hankel low-rank approximation and completion, with a particular emphasis on how this methodology can be used for time series analysis and forecasting. We begin by describing possible formulations of the problem and provide commentary on related topics and the challenges of obtaining globally optimal solutions. Key theorems are provided, and the paper closes with some illustrative examples.
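To make the problem concrete, here is a minimal Cadzow-style iteration for Hankel low-rank approximation of a noisy time series: alternate a rank truncation of the trajectory (Hankel) matrix with anti-diagonal averaging back onto Hankel structure. The window length, rank, and iteration count are assumptions, and none of the globally optimal methods discussed in the review are implemented.

```python
import numpy as np
from scipy.linalg import hankel

def hankel_low_rank_denoise(y, L, r, iters=10):
    """Alternate rank-r truncation of the Hankel matrix of y with projection
    back onto Hankel structure (anti-diagonal averaging)."""
    N = len(y)
    x = y.copy()
    for _ in range(iters):
        H = hankel(x[:L], x[L - 1:])                 # L x (N - L + 1), H[i, j] = x[i + j]
        U, s, Vt = np.linalg.svd(H, full_matrices=False)
        Hr = U[:, :r] @ np.diag(s[:r]) @ Vt[:r, :]   # best rank-r approximation
        x = np.zeros(N)
        counts = np.zeros(N)
        for i in range(L):                           # average each anti-diagonal
            for j in range(Hr.shape[1]):
                x[i + j] += Hr[i, j]
                counts[i + j] += 1
        x /= counts
    return x

# Usage: a noisy sum of two sinusoids has (approximately) rank-4 Hankel structure
t = np.arange(400)
y = np.sin(0.1 * t) + 0.5 * np.sin(0.04 * t) \
    + 0.2 * np.random.default_rng(0).standard_normal(400)
y_hat = hankel_low_rank_denoise(y, L=100, r=4)
```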
Kronecker regression is a highly-structured least squares problem $\min_{\mathbf{x}} \lVert \mathbf{K}\mathbf{x} - \mathbf{b} \rVert_{2}^2$, where the design matrix $\mathbf{K} = \mathbf{A}^{(1)} \otimes \cdots \otimes \mathbf{A}^{(N)}$ is a Kronecker product of factor matrices. This regression problem arises in each step of the widely-used alternating least squares (ALS) algorithm for computing the Tucker decomposition of a tensor. We present the first subquadratic-time algorithm for solving Kronecker regression to a $(1+\varepsilon)$-approximation that avoids the exponential term $O(\varepsilon^{-N})$ in the running time. Our techniques combine leverage score sampling and iterative methods. By extending our approach to block-design matrices where one block is a Kronecker product, we also achieve (1) subquadratic-time algorithms for Kronecker ridge regression and (2) updates of the factor matrices of the Tucker decomposition in ALS, which is not a pure Kronecker regression problem, thereby improving the running time of all steps of Tucker ALS. We demonstrate the speed and accuracy of this Kronecker regression algorithm on synthetic data and real-world image tensors.
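For orientation, the sketch below shows the structure being exploited: a two-factor Kronecker least-squares problem solved through the factor SVDs without ever forming $\mathbf{A}^{(1)} \otimes \mathbf{A}^{(2)}$. This is only the dense baseline; the paper's subquadratic leverage-score-sampling algorithm is not implemented here.

```python
import numpy as np

def kronecker_lstsq_two_factors(A1, A2, b):
    """Solve min_x ||(A1 ⊗ A2) x - b||_2 without forming A1 ⊗ A2.

    With thin SVDs A_i = U_i S_i V_i^T, the pseudoinverse of the Kronecker
    product factorizes, so (A1 ⊗ A2)^+ b is applied via reshapes:
    X = V1 diag(1/s1) (U1^T B U2) diag(1/s2) V2^T, with B = reshape(b).
    """
    U1, s1, V1t = np.linalg.svd(A1, full_matrices=False)
    U2, s2, V2t = np.linalg.svd(A2, full_matrices=False)
    m1, n1 = A1.shape
    m2, n2 = A2.shape
    B = b.reshape(m1, m2)                   # row-major reshape matches np.kron ordering
    C = U1.T @ B @ U2
    S = np.outer(s1, s2)
    C = np.where(S > 1e-12, C / S, 0.0)     # apply the pseudoinverse of the diagonal part
    X = V1t.T @ C @ V2t
    return X.reshape(n1 * n2)

# Usage: check against the explicit Kronecker system on a tiny instance
rng = np.random.default_rng(0)
A1, A2 = rng.standard_normal((30, 5)), rng.standard_normal((20, 4))
b = rng.standard_normal(30 * 20)
x = kronecker_lstsq_two_factors(A1, A2, b)
x_ref, *_ = np.linalg.lstsq(np.kron(A1, A2), b, rcond=None)
assert np.allclose(x, x_ref)
```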
A common challenge in large-scale supervised learning is how to exploit new incremental data with a pre-trained model, without re-training the model from scratch. Motivated by this problem, we revisit the canonical problem of dynamic least-squares regression (LSR), where the goal is to learn a linear model over incremental training data. In this setting, data and labels $(\mathbf{A}^{(t)}, \mathbf{b}^{(t)}) \in \mathbb{R}^{t \times d} \times \mathbb{R}^{t}$ evolve in an online fashion ($t \gg d$), and the goal is to efficiently maintain an (approximate) solution to $\min_{\mathbf{x}^{(t)}} \|\mathbf{A}^{(t)} \mathbf{x}^{(t)} - \mathbf{b}^{(t)}\|_2$ for all $t$. Our main result is a dynamic data structure that maintains an arbitrarily small constant approximate solution with amortized update time $O(d^{1+o(1)})$, almost matching the running time of the static (sketching-based) solution. By contrast, for exact (or even $1/\mathrm{poly}(n)$-accurate) solutions, we show a separation between the static and dynamic settings, namely that dynamic LSR requires $\Omega(d^{2-o(1)})$ amortized update time under the OMv conjecture (Henzinger et al., STOC '15). Our data structure is conceptually simple, easy to implement, and fast in both theory and practice, as corroborated by experiments on both synthetic and real-world datasets.
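For context, the natural baseline this line of work improves on is maintaining the normal equations under row arrivals, which costs $O(d^2)$ per update; a minimal sketch of that baseline follows (the paper's $O(d^{1+o(1)})$ data structure is not implemented here, and the small ridge term is an assumption for numerical stability).

```python
import numpy as np

class IncrementalLeastSquares:
    """Maintain argmin_x ||A^(t) x - b^(t)|| under row arrivals by updating
    the normal equations: the O(d^2)-per-update baseline."""

    def __init__(self, d, reg=1e-8):
        self.AtA = reg * np.eye(d)      # small ridge term keeps the system solvable
        self.Atb = np.zeros(d)

    def update(self, a_row, b_val):
        self.AtA += np.outer(a_row, a_row)
        self.Atb += b_val * a_row

    def solve(self):
        return np.linalg.solve(self.AtA, self.Atb)

# Usage: rows (a^(t), b^(t)) arrive online
rng = np.random.default_rng(0)
d = 20
x_true = rng.standard_normal(d)
ils = IncrementalLeastSquares(d)
for _ in range(5000):
    a = rng.standard_normal(d)
    ils.update(a, a @ x_true + 0.01 * rng.standard_normal())
x_hat = ils.solve()
```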
Variational inference has recently emerged as a popular alternative to classical Markov chain Monte Carlo (MCMC) in large-scale Bayesian inference. The core idea of variational inference is to trade statistical accuracy for computational efficiency. It aims to approximate the posterior at a reduced computational cost, but it may compromise statistical accuracy. In this work, we study this statistical and computational trade-off via a case study in inferential model selection. Focusing on Gaussian inferential models (a.k.a. variational approximating families) with diagonal plus low-rank precision matrices, we initiate a theoretical study of the trade-off in two aspects: Bayesian posterior inference error and frequentist uncertainty quantification error. From the Bayesian posterior inference perspective, we characterize the error of the variational posterior relative to the exact posterior. We prove that, given a fixed computational budget, a lower-rank inferential model yields a variational posterior with a higher statistical approximation error but a lower computational error; it reduces the variance in stochastic optimization and, in turn, accelerates convergence. From the frequentist uncertainty quantification perspective, we consider the precision matrix of the variational posterior as an uncertainty estimate. We find that, relative to the true asymptotic precision, the variational approximation suffers from an additional statistical error originating from the sampling uncertainty of the data. Moreover, this statistical error becomes the dominant factor as the computational budget increases. As a consequence, for small data sets, the inferential model need not be full-rank to achieve the optimal estimation error. We finally demonstrate these statistical and computational trade-offs in empirical studies, corroborating the theoretical findings.
In many modern applications of deep learning the neural network has many more parameters than the data points used for its training. Motivated by those practices, a large body of recent theoretical research has been devoted to studying overparameterized models. One of the central phenomena in this regime is the ability of the model to interpolate noisy data, but still have test error lower than the amount of noise in that data. arXiv:1906.11300 characterized for which covariance structure of the data such a phenomenon can happen in linear regression if one considers the interpolating solution with minimum $\ell_2$-norm and the data has independent components: they gave a sharp bound on the variance term and showed that it can be small if and only if the data covariance has high effective rank in a subspace of small co-dimension. We strengthen and complete their results by eliminating the independence assumption and providing sharp bounds for the bias term. Thus, our results apply in a much more general setting than those of arXiv:1906.11300, e.g., kernel regression, and not only characterize how the noise is damped but also which part of the true signal is learned. Moreover, we extend the result to the setting of ridge regression, which allows us to explain another interesting phenomenon: we give general sufficient conditions under which the optimal regularization is negative.
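The central object above is the minimum $\ell_2$-norm interpolating solution in an overparameterized linear regression; the toy sketch below constructs it and a ridge solution for comparison. The sizes, covariance shape, and noise level are illustrative assumptions, not the paper's setting.

```python
import numpy as np

# Minimum-l2-norm interpolation in an overparameterized problem (d > n).
rng = np.random.default_rng(0)
n, d = 50, 500
X = rng.standard_normal((n, d)) * np.logspace(0, -2, d)   # anisotropic covariance
theta_true = np.zeros(d); theta_true[:5] = 1.0
y = X @ theta_true + 0.5 * rng.standard_normal(n)

theta_min_norm = np.linalg.pinv(X) @ y          # interpolates: X @ theta = y exactly
assert np.allclose(X @ theta_min_norm, y)

# Ridge regression for comparison; the paper gives general sufficient conditions
# under which the optimal ridge parameter can even be negative.
lam = 1e-2
theta_ridge = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)
```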
Despite advances in scalable models, the inference tools used for Gaussian processes (GPs) have yet to fully capitalize on developments in computing hardware. We present an efficient and general approach to GP inference based on Blackbox Matrix-Matrix multiplication (BBMM). BBMM inference uses a modified batched version of the conjugate gradients algorithm to derive all terms for training and inference in a single call. BBMM reduces the asymptotic complexity of exact GP inference from O(n^3) to O(n^2). Adapting this algorithm to scalable approximations and complex GP models simply requires a routine for efficient matrix-matrix multiplication with the kernel and its derivative. In addition, BBMM uses a specialized preconditioner to substantially speed up convergence. In experiments we show that BBMM effectively uses GPU hardware to dramatically accelerate both exact GP inference and scalable approximations. Additionally, we provide GPyTorch, a software platform for scalable GP inference via BBMM, built on PyTorch.
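An illustration of the core idea, touching the kernel matrix only through (black-box) multiplication and obtaining the training solve via conjugate gradients. This toy version is a dense, single-output NumPy/SciPy sketch rather than GPyTorch's batched GPU implementation, and the RBF hyperparameters are assumptions.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def gp_posterior_mean_cg(X, y, X_star, lengthscale=1.0, noise=0.1):
    """GP regression mean where the training system (K + sigma^2 I) alpha = y
    is solved with conjugate gradients through a matvec-only operator."""
    def rbf(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / lengthscale**2)

    n = X.shape[0]
    K = rbf(X, X)
    Khat = LinearOperator((n, n), matvec=lambda v: K @ v + noise**2 * v)
    alpha, info = cg(Khat, y)               # kernel accessed only via multiplication
    return rbf(X_star, X) @ alpha

# Usage
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)
X_star = np.linspace(-3, 3, 50)[:, None]
mean = gp_posterior_mean_cg(X, y, X_star)
```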
This work considers a computationally and statistically efficient parameter estimation method for a wide class of latent variable models-including Gaussian mixture models, hidden Markov models, and latent Dirichlet allocation-which exploits a certain tensor structure in their low-order observable moments (typically, of second- and third-order). Specifically, parameter estimation is reduced to the problem of extracting a certain (orthogonal) decomposition of a symmetric tensor derived from the moments; this decomposition can be viewed as a natural generalization of the singular value decomposition for matrices. Although tensor decompositions are generally intractable to compute, the decomposition of these specially structured tensors can be efficiently obtained by a variety of approaches, including power iterations and maximization approaches (similar to the case of matrices). A detailed analysis of a robust tensor power method is provided, establishing an analogue of Wedin's perturbation theorem for the singular vectors of matrices. This implies a robust and computationally tractable estimation approach for several popular latent variable models.
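A minimal sketch of the tensor power iteration discussed above, applied to a symmetric third-order tensor with an orthogonal decomposition; the restart and iteration counts are assumptions, and the full robust procedure with deflation is not reproduced.

```python
import numpy as np

def tensor_power_method(T, iters=100, restarts=10, rng=None):
    """Extract one (eigenvalue, eigenvector) pair of a symmetric 3rd-order
    tensor via the power iteration u <- T(I, u, u) / ||T(I, u, u)||."""
    rng = np.random.default_rng(rng)
    n = T.shape[0]
    best_u, best_lam = None, -np.inf
    for _ in range(restarts):
        u = rng.standard_normal(n)
        u /= np.linalg.norm(u)
        for _ in range(iters):
            v = np.einsum('ijk,j,k->i', T, u, u)    # T(I, u, u)
            u = v / np.linalg.norm(v)
        lam = np.einsum('ijk,i,j,k->', T, u, u, u)  # T(u, u, u)
        if lam > best_lam:
            best_u, best_lam = u, lam
    return best_lam, best_u

# Usage: build an orthogonally decomposable tensor and recover its top component
n, k = 10, 3
Q, _ = np.linalg.qr(np.random.default_rng(0).standard_normal((n, k)))
w = np.array([3.0, 2.0, 1.0])
T = sum(w[i] * np.einsum('a,b,c->abc', Q[:, i], Q[:, i], Q[:, i]) for i in range(k))
lam, u = tensor_power_method(T)
```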
We consider the nonlinear inverse problem of learning a transition operator $\mathbf{A}$ from partial observations at different times, in particular from sparse observations of entries of its powers $\mathbf{A},\mathbf{A}^2,\cdots,\mathbf{A}^{T}$. This Spatio-Temporal Transition Operator Recovery problem is motivated by the recent interest in learning time-varying graph signals that are driven by graph operators depending on the underlying graph topology. We address the nonlinearity of the problem by embedding it into a higher-dimensional space of suitable block-Hankel matrices, where it becomes a low-rank matrix completion problem, even if $\mathbf{A}$ is of full rank. For both a uniform and an adaptive random space-time sampling model, we quantify the recoverability of the transition operator via suitable measures of incoherence of these block-Hankel embedding matrices. For graph transition operators these measures of incoherence depend on the interplay between the dynamics and the graph topology. We develop a suitable non-convex iterative reweighted least squares (IRLS) algorithm, establish its quadratic local convergence, and show that, in optimal scenarios, no more than $\mathcal{O}(rn \log(nT))$ space-time samples are sufficient to ensure accurate recovery of a rank-$r$ operator $\mathbf{A}$ of size $n \times n$. This establishes that spatial samples can be substituted by a comparable number of space-time samples. We provide an efficient implementation of the proposed IRLS algorithm with space complexity of order $O(r n T)$ and per-iteration time complexity linear in $n$. Numerical experiments for transition operators based on several graph models confirm that the theoretical findings accurately track empirical phase transitions, and illustrate the applicability and scalability of the proposed algorithm.
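To see why the block-Hankel embedding linearizes the problem, the toy sketch below stacks observed powers of $\mathbf{A}$ into a block-Hankel matrix and checks that its rank is bounded by the rank of $\mathbf{A}$; all sizes are illustrative assumptions and the IRLS algorithm itself is not implemented.

```python
import numpy as np

# The block-Hankel matrix with blocks H[i, j] = A^{i+j+1} factors as
# H = [A^{i+1}]_i [A^j]_j, so rank(H) <= rank(A), and the embedding is
# low-rank relative to its size even when A has full rank n.
rng = np.random.default_rng(0)
n, r, T = 8, 2, 6
A = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))     # rank-r operator
powers = [np.linalg.matrix_power(A, t) for t in range(1, T + 1)]  # A, A^2, ..., A^T

K = T // 2
H = np.block([[powers[i + j] for j in range(K)] for i in range(K + 1)])
print(H.shape, np.linalg.matrix_rank(H))   # ((K+1)*n, K*n) with rank r
```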
Tensor completion is a natural higher-order generalization of matrix completion, where the goal is to recover a low-rank tensor from sparse observations of its entries. Existing algorithms are either heuristics without provable guarantees, are based on solving large semidefinite programs that are impractical to run, or require strong assumptions such as the factors being nearly orthogonal. In this paper, we introduce a new variant of alternating minimization, which in turn is inspired by understanding how the progress measures that guide the convergence of alternating minimization in the matrix setting need to be adapted to the tensor setting. We show strong provable guarantees, including showing that our algorithm converges linearly to the true tensors even when the factors are highly correlated, and can be implemented in nearly linear time. Moreover, our algorithm is also highly practical, and we show that we can complete third-order tensors with a thousand dimensions from observing a tiny fraction of their entries. In contrast, and somewhat surprisingly, we show that the standard version of alternating minimization, without our new twist, can converge at a drastically slower rate in practice.
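A minimal plain alternating-least-squares sketch for CP-rank-$r$ completion of a third-order tensor, to make the setting concrete. This is the standard variant, which the abstract notes can behave much worse than the paper's modified algorithm; all sizes, ranks, and regularization values are assumptions.

```python
import numpy as np

def als_tensor_completion(T_obs, mask, r, iters=50, reg=1e-6, rng=None):
    """Plain alternating minimization for CP-rank-r completion of a 3rd-order
    tensor from observed entries; each factor row is a small regularized
    least-squares solve over that slice's observed entries."""
    rng = np.random.default_rng(rng)
    dims = T_obs.shape
    A = [rng.standard_normal((dims[m], r)) for m in range(3)]

    def update_mode(m):
        others = [A[j] for j in range(3) if j != m]
        T_m, M_m = np.moveaxis(T_obs, m, 0), np.moveaxis(mask, m, 0)
        for i in range(dims[m]):
            obs = np.argwhere(M_m[i])                      # observed index pairs
            if len(obs) == 0:
                continue
            # Design rows are elementwise products of the other factors' rows.
            Z = others[0][obs[:, 0]] * others[1][obs[:, 1]]
            y = T_m[i][obs[:, 0], obs[:, 1]]
            A[m][i] = np.linalg.solve(Z.T @ Z + reg * np.eye(r), Z.T @ y)

    for _ in range(iters):
        for m in range(3):
            update_mode(m)
    return A

# Usage: complete a random rank-3 tensor from 20% of its entries
rng = np.random.default_rng(0)
dims, r = (30, 30, 30), 3
factors = [rng.standard_normal((d, r)) for d in dims]
T = np.einsum('ir,jr,kr->ijk', *factors)
mask = rng.random(dims) < 0.2
A_hat = als_tensor_completion(T * mask, mask, r)
```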