The CP decomposition of high-dimensional non-orthogonally spiked tensors is an important problem with wide applications across many disciplines. Previous works with theoretical guarantees, however, typically assume restrictive incoherence conditions on the basis vectors of the CP components. In this paper, we propose new computationally efficient composite PCA and concurrent orthogonalization algorithms with theoretical guarantees under mild incoherence conditions. Composite PCA applies principal component analysis or singular value decomposition first to the matricization of the tensor data to obtain singular vectors, and then to the matrix foldings of the singular vectors obtained in the first step. It can be used as an initialization for any iterative optimization scheme for tensor CP decomposition. The concurrent orthogonalization algorithm iteratively estimates the basis vectors in each mode of the tensor by simultaneously applying projections onto the orthogonal complements of the spaces generated by the other components in the other modes. It is designed to improve the alternating least squares estimator and other forms of high-order orthogonal iteration for tensors with low or moderately high CP rank, and it is guaranteed to converge rapidly when the error of any given initial estimator is bounded by a small constant. Our theoretical investigation provides estimation accuracy and convergence rates for the two proposed algorithms. Our implementation on synthetic data demonstrates significant practical advantages of our approach over existing methods.
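To make the two-step idea concrete, here is a minimal numpy sketch of a composite-PCA style initialization for an order-3 tensor, under one plausible reading of the abstract: an SVD of the mode-1 matricization, followed by SVDs of the matrix foldings of the resulting right singular vectors. The choice of unfolding, the rank `r`, and the function name are illustrative assumptions, not the authors' reference implementation.

```python
import numpy as np

def composite_pca_init(T, r):
    """Sketch of a composite-PCA style initialization for order-3 CP decomposition.

    Step 1: SVD of the mode-1 matricization to obtain leading singular vectors.
    Step 2: fold each right singular vector back into a matrix and take its
            leading singular vectors as mode-2 / mode-3 direction estimates.
    """
    d1, d2, d3 = T.shape
    U, s, Vt = np.linalg.svd(T.reshape(d1, d2 * d3), full_matrices=False)
    A_hat = U[:, :r]                         # mode-1 direction estimates
    B_hat, C_hat = np.zeros((d2, r)), np.zeros((d3, r))
    for k in range(r):
        Vk = Vt[k].reshape(d2, d3)           # matrix folding of the k-th right singular vector
        u2, _, v3t = np.linalg.svd(Vk)
        B_hat[:, k], C_hat[:, k] = u2[:, 0], v3t[0]
    return A_hat, B_hat, C_hat

# Toy usage: initialize from a noisy rank-2 tensor.
rng = np.random.default_rng(0)
d, r = 30, 2
A, B, C = (rng.standard_normal((d, r)) for _ in range(3))
T = np.einsum('ik,jk,lk->ijl', A, B, C) + 0.1 * rng.standard_normal((d, d, d))
A0, B0, C0 = composite_pca_init(T, r)
```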
In this paper, we consider the estimation of a low Tucker rank tensor from a number of noisy linear measurements. The general problem covers many specific examples arising from applications, including tensor regression, tensor completion, and tensor PCA/SVD. We consider an efficient Riemannian Gauss-Newton (RGN) method for low Tucker rank tensor estimation. Different from the generic (super)linear convergence guarantee of RGN in the literature, we prove the first local quadratic convergence guarantee of RGN for low-rank tensor estimation in the noisy setting under some regularity conditions and provide the corresponding estimation error upper bounds. A deterministic estimation error lower bound, which matches the upper bound, is provided that demonstrates the statistical optimality of RGN. The merit of RGN is illustrated through two machine learning applications: tensor regression and tensor SVD. Finally, we provide the simulation results to corroborate our theoretical findings.
We study tensor-on-tensor regression, where the goal is to connect tensor responses to tensor covariates through a low Tucker rank parameter tensor/matrix without prior knowledge of its intrinsic rank. We propose Riemannian gradient descent (RGD) and Riemannian Gauss-Newton (RGN) methods and address the challenge of unknown rank by studying the effect of rank over-parameterization. We provide the first convergence guarantees for general tensor-on-tensor regression by showing that RGD and RGN converge, respectively, linearly and quadratically to a statistically optimal estimate in both rank-correctly-specified and over-parameterized settings. Our theory reveals an intriguing phenomenon: Riemannian optimization methods naturally adapt to over-parameterization without modification to their implementation. We also provide the first rigorous evidence for the statistical-computational gap in scalar-on-tensor regression under the low-degree polynomial framework. Our theory demonstrates a ``blessing of statistical-computational gap'' phenomenon: in tensor-on-tensor regression for tensors of order three or higher, the computationally required sample size matches what is needed under moderate rank over-parameterization when restricting attention to computationally feasible estimators, while no such benefit exists in the matrix setting. This shows that moderate rank over-parameterization is essentially cost-free, in terms of sample size, in tensor-on-tensor regression of order three or higher. Finally, we conduct simulation studies to show the advantages of our proposed methods and to corroborate our theoretical findings.
Higher-order multiway data is ubiquitous in machine learning and statistics and often exhibits community-like structures, where each component (node) along each different mode has a community membership associated with it. In this paper we propose the tensor mixed-membership blockmodel, a generalization of the tensor blockmodel positing that memberships need not be discrete, but instead are convex combinations of latent communities. We establish the identifiability of our model and propose a computationally efficient estimation procedure based on the higher-order orthogonal iteration algorithm (HOOI) for tensor SVD composed with a simplex corner-finding algorithm. We then demonstrate the consistency of our estimation procedure by providing a per-node error bound, which showcases the effect of higher-order structures on estimation accuracy. To prove our consistency result, we develop the $\ell_{2,\infty}$ tensor perturbation bound for HOOI under independent, possibly heteroskedastic, subgaussian noise that may be of independent interest. Our analysis uses a novel leave-one-out construction for the iterates, and our bounds depend only on spectral properties of the underlying low-rank tensor under nearly optimal signal-to-noise ratio conditions such that tensor SVD is computationally feasible. Whereas other leave-one-out analyses typically focus on sequences constructed by analyzing the output of a given algorithm with a small part of the noise removed, our leave-one-out analysis constructions use both the previous iterates and the additional tensor structure to eliminate a potential additional source of error. Finally, we apply our methodology to real and simulated data, including applications to two flight datasets and a trade network dataset, demonstrating some effects not identifiable from the model with discrete community memberships.
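Since the estimation procedure composes the higher-order orthogonal iteration (HOOI) for tensor SVD with a simplex corner-finding step, the following is a minimal numpy sketch of HOOI for an order-3 tensor; the HOSVD-style initialization, the fixed iteration count, and the function names are illustrative assumptions, and the corner-finding stage is omitted.

```python
import numpy as np

def unfold(T, mode):
    """Mode-k matricization of an order-3 tensor."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hooi(T, ranks, n_iter=30):
    """Minimal higher-order orthogonal iteration (HOOI) for an order-3 tensor.

    Returns orthonormal factor matrices U1, U2, U3 whose columns estimate the
    leading mode-wise singular subspaces of the low-rank signal tensor.
    """
    # HOSVD-style initialization: leading left singular vectors of each unfolding.
    U = [np.linalg.svd(unfold(T, m), full_matrices=False)[0][:, :rk]
         for m, rk in enumerate(ranks)]
    for _ in range(n_iter):
        for m in range(3):
            others = [i for i in range(3) if i != m]
            # Project the other two modes onto their current subspace estimates.
            G = T
            for i in others:
                G = np.moveaxis(np.tensordot(U[i].T, G, axes=([1], [i])), 0, i)
            U[m] = np.linalg.svd(unfold(G, m), full_matrices=False)[0][:, :ranks[m]]
    return U
```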
In this paper, we propose the {\it \underline{R}ecursive \underline{I}mportance \underline{S}ketching algorithm for \underline{R}ank-constrained least squares \underline{O}ptimization} (RISRO). The key step of RISRO is recursive importance sketching, a new sketching framework based on deterministically designed recursive projections, which differs substantially from the randomized sketching in the literature \citep{mahoney2011randomized, woodruff2014sketching}. Several existing algorithms in the literature can be reinterpreted under this new sketching framework, and RISRO offers clear advantages over them. RISRO is easy to implement and computationally efficient, with the core procedure in each iteration being the solution of a dimension-reduced least squares problem. We establish the local quadratic-linear and quadratic rates of convergence of RISRO under some mild conditions. We also discover a deep connection between RISRO and the Riemannian Gauss-Newton algorithm on fixed-rank matrices. The effectiveness of RISRO is demonstrated in two applications in machine learning and statistics: low-rank matrix trace regression and phase retrieval. Simulation studies demonstrate the superior numerical performance of RISRO.
Network data are commonly collected in a variety of applications, representing either directly measured or statistically inferred connections between features of interest. In an increasing number of domains, these networks are collected over time, such as interactions among users of a social media platform on different days, or across multiple subjects, as in multi-subject studies of brain connectivity. When analyzing multiple large networks, dimensionality reduction techniques are often used to embed the networks into a more tractable low-dimensional space. To this end, we develop a framework for principal component analysis (PCA) of network ensembles via a specialized tensor decomposition that we term Semi-Symmetric Tensor PCA, or SS-TPCA. We derive computationally efficient algorithms for computing our proposed SS-TPCA decomposition and establish the statistical efficiency of our method under a standard low-rank signal-plus-noise model. Remarkably, we show that SS-TPCA achieves the same estimation accuracy as classical matrix PCA, with error proportional to the square root of the number of vertices in the network rather than the number of edges, as might be expected. Our framework inherits many of the strengths of classical PCA and is suitable for a wide range of unsupervised learning tasks, including identifying principal networks, isolating meaningful changepoints or outlying observations, and characterizing the ``variability network'' of the most varying edges. Finally, we demonstrate the effectiveness of our proposal on simulated data and on an example from empirical legal studies. The techniques used to establish our main consistency results are surprisingly straightforward and may find use in a variety of other network analysis problems.
Tensors, which provide a powerful and flexible model for representing multiway data and multi-way interactions, play an indispensable role in modern data science across various fields in science and engineering. A fundamental task is to faithfully recover a tensor from highly incomplete measurements in a statistically and computationally efficient manner. Harnessing the low-rank structure of tensors in the Tucker decomposition, this paper develops a scaled gradient descent (ScaledGD) algorithm that directly recovers the tensor factors with tailored spectral initialization, and shows that it provably converges at a linear rate independent of the condition number of the ground-truth tensor for two canonical problems, tensor completion and tensor regression, as soon as the sample size is above the order of $n^{3/2}$ ignoring other parameter dependencies, where $n$ is the dimension of the tensor. This leads to an extremely scalable approach to low-rank tensor estimation compared with prior art, which suffers from at least one of the following drawbacks: extreme sensitivity to ill-conditioning, high per-iteration cost in terms of memory and computation, or poor sample complexity guarantees. To the best of our knowledge, ScaledGD is the first algorithm to achieve near-optimal statistical and computational complexities simultaneously for low-rank tensor completion with the Tucker decomposition. Our algorithm highlights the power of appropriate preconditioning in accelerating nonconvex statistical estimation, where the iteration-varying preconditioners promote desirable invariance properties of the trajectory with respect to the underlying symmetry in low-rank tensor factorization.
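The paper's ScaledGD operates on the Tucker factors of a tensor; as a hedged illustration of the underlying preconditioning idea in its simplest form, the sketch below shows the matrix analogue for completion, where each factor's gradient is right-multiplied by the inverse Gram matrix of the other factor so that the iteration is insensitive to the condition number. The zero-filled spectral initialization, step size, and iteration count are illustrative assumptions; this is not the tensor algorithm itself.

```python
import numpy as np

def scaled_gd_matrix_completion(Y, mask, r, eta=0.5, n_iter=200):
    """Matrix analogue of ScaledGD for completion with X = L @ R.T.

    The gradients are preconditioned by (R^T R)^{-1} and (L^T L)^{-1},
    which removes the dependence on the condition number of X.
    """
    # Spectral initialization from the zero-filled observations (illustrative).
    U, s, Vt = np.linalg.svd(np.where(mask, Y, 0.0), full_matrices=False)
    L = U[:, :r] * np.sqrt(s[:r])
    R = Vt[:r].T * np.sqrt(s[:r])
    for _ in range(n_iter):
        E = np.where(mask, L @ R.T - Y, 0.0)          # residual on observed entries
        GL, GR = E @ R, E.T @ L
        L_new = L - eta * GL @ np.linalg.inv(R.T @ R)  # scaled (preconditioned) steps
        R_new = R - eta * GR @ np.linalg.inv(L.T @ L)
        L, R = L_new, R_new
    return L @ R.T
```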
Sketch-and-project is a framework that unifies many known iterative methods for solving linear systems and their variants, and further extends to nonlinear optimization problems. It includes popular methods such as randomized Kaczmarz, coordinate descent, variants of the Newton method in convex optimization, and others. In this paper, we obtain sharp guarantees on the convergence rate of sketch-and-project through new tight spectral bounds for the expected sketched projection matrix. Our estimates reveal a connection between the convergence rate of sketch-and-project and the approximation error of another well-known but seemingly unrelated family of algorithms, which use sketching to accelerate popular matrix factorizations such as QR and SVD. This connection brings us closer to precisely quantifying how the performance of sketch-and-project solvers depends on their sketch size. Our analysis covers not only Gaussian and sub-Gaussian sketching matrices, but also an efficient sparse sketching method known as LESS embeddings. Our experiments back up the theory and demonstrate that even extremely sparse sketches exhibit the same convergence properties in practice.
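As a concrete instance of the framework, a single sketch-and-project step for a consistent linear system $Ax = b$ projects the current iterate onto the sketched constraint set $\{x : S^\top A x = S^\top b\}$. The sketch below uses fresh Gaussian sketching matrices; the sketch size and iteration count are illustrative assumptions, and taking $S$ to be a random coordinate vector recovers randomized Kaczmarz.

```python
import numpy as np

def sketch_and_project(A, b, sketch_size, n_iter=500, seed=0):
    """Sketch-and-project iteration for solving a consistent system A x = b.

    Each step projects the iterate (in the Euclidean norm) onto the sketched
    constraint set {x : S^T A x = S^T b} for a fresh Gaussian sketch S.
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(n_iter):
        S = rng.standard_normal((m, sketch_size))
        SA = S.T @ A                          # sketched system
        resid = SA @ x - S.T @ b
        # x <- x - (SA)^T ((SA)(SA)^T)^+ resid
        x = x - SA.T @ np.linalg.lstsq(SA @ SA.T, resid, rcond=None)[0]
    return x

# Toy usage on a consistent system.
rng = np.random.default_rng(1)
A = rng.standard_normal((200, 50))
b = A @ rng.standard_normal(50)
x_hat = sketch_and_project(A, b, sketch_size=10)
```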
Our goal is to recover a sparse and highly missing tensor in the presence of covariate information along the tensor modes. Our motivation comes from online advertising, where users' click-through rates (CTR) on ads over various devices form a CTR tensor with about 96% missing entries and many zeros among the non-missing entries, which makes standalone tensor completion methods unsatisfactory. Besides the CTR tensor, additional ad features or user characteristics are often available. In this paper, we propose Covariate-assisted Sparse Tensor Completion (COSTCO) to incorporate covariate information into the recovery of a sparse tensor. The key idea is to jointly extract latent components from the tensor and the covariate matrix to learn a synthetic representation. Theoretically, we derive error bounds for the recovered tensor components and explicitly quantify the improvements, due to covariates, on the revealing probability condition and the tensor recovery accuracy. Finally, we apply COSTCO to an advertising dataset consisting of a CTR tensor and an ad covariate matrix, leading to a 23% accuracy improvement over the baseline. An important by-product is that the ad latent components from COSTCO reveal interesting ad clusters, which are useful for better ad targeting.
Randomized singular value decomposition (RSVD) is a class of computationally efficient algorithms for computing the truncated SVD of large data matrices. Given an $n \times n$ symmetric matrix $\mathbf{M}$, the prototypical RSVD algorithm outputs an approximation of the $k$ leading singular vectors of $\mathbf{M}$ by computing the SVD of $\mathbf{M}^{g} \mathbf{G}$; here $g \geq 1$ is an integer and $\mathbf{G} \in \mathbb{R}^{n \times k}$ is a random Gaussian sketching matrix. In this paper we study the statistical properties of RSVD under a general ``signal-plus-noise'' framework, i.e., the observed matrix $\hat{\mathbf{M}}$ is assumed to be an additive perturbation of some true but unknown signal matrix $\mathbf{M}$. We first derive upper bounds for the $\ell_2$ (spectral norm) and $\ell_{2 \to \infty}$ (maximum row-wise $\ell_2$ norm) distances between the approximate singular vectors of $\hat{\mathbf{M}}$ and the true singular vectors of the signal matrix $\mathbf{M}$. These upper bounds depend on the signal-to-noise ratio (SNR) and the number of power iterations $g$. A phase transition phenomenon is observed in which a smaller SNR requires larger values of $g$ to guarantee convergence in the $\ell_2$ and $\ell_{2 \to \infty}$ distances. We also show that the thresholds for $g$ at which these phase transitions occur are sharp whenever the noise matrix satisfies a certain trace growth condition. Finally, we derive normal approximations for the row-wise fluctuations of the approximate singular vectors and the entrywise fluctuations of the approximate matrix. We illustrate our theoretical results by deriving nearly optimal performance guarantees for RSVD when applied to three statistical inference problems, namely community detection, matrix completion, and principal component analysis with missing data.
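The prototypical procedure described above fits in a few lines: draw a Gaussian sketch $\mathbf{G}$, form $\mathbf{M}^{g}\mathbf{G}$, orthonormalize, and extract approximate leading singular vectors. The final Rayleigh-Ritz step and the absence of intermediate re-orthonormalization in the sketch below are illustrative simplifications, not necessarily the exact variant analyzed in the paper.

```python
import numpy as np

def rsvd_symmetric(M, k, g=2, seed=0):
    """Prototype randomized SVD for a symmetric matrix M.

    Computes an orthonormal basis of range(M^g G) for a Gaussian sketch G,
    then extracts approximate leading eigenpairs by Rayleigh-Ritz.
    """
    rng = np.random.default_rng(seed)
    n = M.shape[0]
    G = rng.standard_normal((n, k))
    Y = G
    for _ in range(g):                 # g power iterations: Y = M^g G
        Y = M @ Y                      # (re-orthonormalizing between steps helps in practice)
    Q, _ = np.linalg.qr(Y)             # orthonormal basis of the sketched range
    evals, W = np.linalg.eigh(Q.T @ M @ Q)
    order = np.argsort(np.abs(evals))[::-1]
    U_hat = Q @ W[:, order]            # approximate leading singular vectors
    return U_hat, np.abs(evals[order])
```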
This paper studies the computational and statistical limits of clustering matrix-valued observations. We propose a low-rank mixture model (LRMM), adapted from the classical Gaussian mixture model (GMM) to accommodate matrix-valued observations, which assumes low-rankness of the population center matrices. A computationally efficient clustering method is designed by integrating Lloyd's algorithm with low-rank approximation. Once well initialized, the algorithm converges fast and achieves a minimax-optimal exponential-type clustering error rate. Meanwhile, we show that a tensor-based spectral method provides a good initial clustering. As with GMM, the minimax-optimal clustering error rate is determined by the separation strength, i.e., the minimal distance between the population center matrices. By exploiting low-rankness, the proposed algorithm has a weaker requirement on the separation strength. Unlike GMM, however, the statistical and computational difficulty of LRMM is characterized by the signal strength, i.e., the smallest non-zero singular value of the population center matrices. Evidence is provided showing that no polynomial-time algorithm is consistent if the signal strength is not strong enough, even when the separation strength is strong. The performance of our low-rank Lloyd's algorithm is further established under sub-Gaussian noise. Intriguing differences between estimation and clustering under LRMM are discussed. The merits of the low-rank Lloyd's algorithm are confirmed by comprehensive simulation experiments. Finally, our method outperforms others in the literature on real-world datasets.
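A minimal sketch of the low-rank Lloyd iteration for matrix-valued observations is given below: assign each matrix to the nearest center in Frobenius norm, then update each center as the best rank-$r$ approximation of its within-cluster mean. The random initialization here stands in for the paper's tensor-based spectral initialization, and the cluster number, rank, and iteration count are illustrative assumptions.

```python
import numpy as np

def low_rank_lloyd(X, K, r, n_iter=20, seed=0):
    """Lloyd's algorithm with rank-r truncated centers for matrix observations.

    X: array of shape (n, p, q) holding n matrix-valued observations.
    Returns cluster labels and the rank-r center matrices.
    """
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    centers = X[rng.choice(n, K, replace=False)].copy()   # simple random init (illustrative)
    labels = np.zeros(n, dtype=int)
    for _ in range(n_iter):
        # Assignment step: nearest center in Frobenius norm.
        dists = np.array([[np.linalg.norm(x - c) for c in centers] for x in X])
        labels = dists.argmin(axis=1)
        # Update step: within-cluster mean, truncated to rank r by SVD.
        for k in range(K):
            members = X[labels == k]
            if len(members) == 0:
                continue
            U, s, Vt = np.linalg.svd(members.mean(axis=0), full_matrices=False)
            centers[k] = (U[:, :r] * s[:r]) @ Vt[:r]
    return labels, centers
```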
This work considers a computationally and statistically efficient parameter estimation method for a wide class of latent variable models, including Gaussian mixture models, hidden Markov models, and latent Dirichlet allocation, which exploits a certain tensor structure in their low-order observable moments (typically, of second- and third-order). Specifically, parameter estimation is reduced to the problem of extracting a certain (orthogonal) decomposition of a symmetric tensor derived from the moments; this decomposition can be viewed as a natural generalization of the singular value decomposition for matrices. Although tensor decompositions are generally intractable to compute, the decomposition of these specially structured tensors can be efficiently obtained by a variety of approaches, including power iterations and maximization approaches (similar to the case of matrices). A detailed analysis of a robust tensor power method is provided, establishing an analogue of Wedin's perturbation theorem for the singular vectors of matrices. This implies a robust and computationally tractable estimation approach for several popular latent variable models.
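The core computational routine is the tensor power iteration on a (nearly) orthogonally decomposable symmetric tensor: repeatedly map $u \mapsto T(I, u, u)$, normalize, and deflate. The sketch below shows this basic iteration without the restarts and robustness modifications analyzed in the work; the function name and iteration counts are illustrative.

```python
import numpy as np

def tensor_power_method(T, n_components, n_iter=100, seed=0):
    """Basic tensor power iteration with deflation for a symmetric order-3 tensor.

    Assumes T is (approximately) orthogonally decomposable,
    T = sum_k lam_k * u_k (x) u_k (x) u_k, as for the moment tensors above.
    """
    rng = np.random.default_rng(seed)
    d = T.shape[0]
    eigvals, eigvecs = [], []
    T_work = T.copy()
    for _ in range(n_components):
        u = rng.standard_normal(d)
        u /= np.linalg.norm(u)
        for _ in range(n_iter):
            u = np.einsum('ijk,j,k->i', T_work, u, u)    # u <- T(I, u, u)
            u /= np.linalg.norm(u)
        lam = np.einsum('ijk,i,j,k->', T_work, u, u, u)  # eigenvalue T(u, u, u)
        eigvals.append(lam)
        eigvecs.append(u)
        # Deflation: subtract the recovered rank-one component.
        T_work = T_work - lam * np.einsum('i,j,k->ijk', u, u, u)
    return np.array(eigvals), np.array(eigvecs)
```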
This survey provides an overview of higher-order tensor decompositions, their applications, and available software. A tensor is a multidimensional or N-way array. Decompositions of higher-order tensors (i.e., N-way arrays with N ≥ 3) have applications in psychometrics, chemometrics, signal processing, numerical linear algebra, computer vision, numerical analysis, data mining, neuroscience, graph analysis, and elsewhere. Two particular tensor decompositions can be considered to be higher-order extensions of the matrix singular value decomposition: CANDECOMP/PARAFAC (CP) decomposes a tensor as a sum of rank-one tensors, and the Tucker decomposition is a higher-order form of principal component analysis. There are many other tensor decompositions, including INDSCAL, PARAFAC2, CANDELINC, DEDICOM, and PARATUCK2 as well as nonnegative variants of all of the above. The N-way Toolbox, Tensor Toolbox, and Multilinear Engine are examples of software packages for working with tensors.
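To make the two decompositions concrete, the snippet below builds an order-3 tensor from a CP representation (a sum of rank-one terms) and from a Tucker representation (a small core multiplied by a factor matrix along each mode); the dimensions and ranks are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
I, J, K, R = 4, 5, 6, 3

# CP / CANDECOMP-PARAFAC: a sum of R rank-one tensors a_r (x) b_r (x) c_r.
A = rng.standard_normal((I, R))
B = rng.standard_normal((J, R))
C = rng.standard_normal((K, R))
X_cp = np.einsum('ir,jr,kr->ijk', A, B, C)

# Tucker: a small core G multiplied by a factor matrix along each mode.
P, Q, S = 2, 3, 2
G = rng.standard_normal((P, Q, S))
U1, U2, U3 = rng.standard_normal((I, P)), rng.standard_normal((J, Q)), rng.standard_normal((K, S))
X_tucker = np.einsum('pqs,ip,jq,ks->ijk', G, U1, U2, U3)

print(X_cp.shape, X_tucker.shape)   # (4, 5, 6) (4, 5, 6)
```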
As a special infinite-order vector autoregressive (VAR) model, the vector autoregressive moving average (VARMA) model can capture much richer temporal patterns than the widely used finite-order VAR model. However, its practicality has long been hindered by its non-identifiability, computational intractability, and relative difficulty of interpretation. This paper introduces a novel infinite-order VAR model which not only avoids the drawbacks of the VARMA model but also inherits its favorable temporal patterns. As another attractive feature, the temporal and cross-sectional dependence structures of this model can be interpreted separately, since they are characterized by different sets of parameters. For high-dimensional time series, this separation motivates us to impose sparsity on the parameters determining the cross-sectional dependence. As a result, greater statistical efficiency and interpretability can be achieved without sacrificing any temporal information. We introduce an $\ell_1$-regularized estimator for the proposed model and derive the corresponding non-asymptotic error bounds. An efficient block coordinate descent algorithm and a consistent model order selection method are developed. The merits of the proposed approach are supported by simulation studies and a real-world macroeconomic data analysis.
Low-rank matrix approximations, such as the truncated singular value decomposition and the rank-revealing QR decomposition, play a central role in data analysis and scientific computing. This work surveys and extends recent research which demonstrates that randomization offers a powerful tool for performing low-rank matrix approximation. These techniques exploit modern computational architectures more fully than classical methods and open the possibility of dealing with truly massive data sets. This paper presents a modular framework for constructing randomized algorithms that compute partial matrix decompositions. These methods use random sampling to identify a subspace that captures most of the action of a matrix. The input matrix is then compressed, either explicitly or implicitly, to this subspace, and the reduced matrix is manipulated deterministically to obtain the desired low-rank factorization. In many cases, this approach beats its classical competitors in terms of accuracy, speed, and robustness. These claims are supported by extensive numerical experiments and a detailed error analysis. The specific benefits of randomized techniques depend on the computational environment. Consider the model problem of finding the k dominant components of the singular value decomposition of an m × n matrix. (i) For a dense input matrix, randomized algorithms require O(mn log(k)) floating-point operations (flops) in contrast with O(mnk) for classical algorithms. (ii) For a sparse input matrix, the flop count matches classical Krylov subspace methods, but the randomized approach is more robust and can easily be reorganized to exploit multi-processor architectures. (iii) For a matrix that is too large to fit in fast memory, the randomized techniques require only a constant number of passes over the data, as opposed to O(k) passes for classical algorithms. In fact, it is sometimes possible to perform matrix approximation with a single pass over the data.
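In its simplest form, the modular framework surveyed here is a two-stage procedure: random sampling identifies an orthonormal basis Q capturing most of the action of A, and the compressed matrix Q^T A is then factorized deterministically. The sketch below is that proto-algorithm without power iterations; the oversampling amount is an illustrative choice.

```python
import numpy as np

def randomized_svd(A, k, oversample=10, seed=0):
    """Two-stage randomized SVD in the spirit of the surveyed framework.

    Stage A: random sampling identifies a subspace capturing most of A's action.
    Stage B: the compressed matrix Q^T A is factorized deterministically.
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    Omega = rng.standard_normal((n, k + oversample))   # Gaussian test matrix
    Q, _ = np.linalg.qr(A @ Omega)                     # basis for the sampled range
    B = Q.T @ A                                        # small (k + p) x n matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ Ub)[:, :k], s[:k], Vt[:k]
```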
We study inductive matrix completion (matrix completion with side information) under an i.i.d. subgaussian noise assumption at a low noise regime, with uniform sampling of the entries. We obtain for the first time generalization bounds with the following three properties: (1) they scale like the standard deviation of the noise and in particular approach zero in the exact recovery case; (2) even in the presence of noise, they converge to zero when the sample size approaches infinity; and (3) for a fixed dimension of the side information, they only have a logarithmic dependence on the size of the matrix. Differently from many works in approximate recovery, we present results both for bounded Lipschitz losses and for the absolute loss, with the latter relying on Talagrand-type inequalities. The proofs create a bridge between two approaches to the theoretical analysis of matrix completion, since they consist in a combination of techniques from both the exact recovery literature and the approximate recovery literature.
This paper studies low-rank matrix completion in the presence of heavy-tailed and possibly asymmetric noise, where we aim to estimate an underlying low-rank matrix from a set of highly incomplete noisy entries. Although the matrix completion problem has attracted much attention in the past decade, there is still a lack of theoretical understanding when the observations are contaminated by heavy-tailed noise. Prior theory falls short of explaining empirical results and fails to capture the optimal dependence of the estimation error on the noise level. In this paper, we adopt an adaptive Huber loss to accommodate heavy-tailed noise, which is robust against large and possibly asymmetric errors when the parameter in the loss function is carefully designed to balance bias and robustness to outliers. We then propose an efficient nonconvex algorithm via a balanced low-rank Burer-Monteiro matrix factorization and gradient descent with robust spectral initialization. We prove that, under merely a second moment condition on the error distribution instead of a sub-Gaussian assumption, the Euclidean error of the iterates generated by the proposed algorithm decreases geometrically fast until achieving a minimax-optimal statistical estimation error, which has the same order as that in the sub-Gaussian case. The key technique behind this significant advancement is a powerful leave-one-out analysis framework. Our simulation studies corroborate the theoretical results.
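To make the robust loss concrete, the sketch below shows the Huber-loss gradient, whose influence is capped at the robustification parameter tau, and one gradient step on Burer-Monteiro factors over the observed entries. The step size and tau are illustrative placeholders, and the balancing regularization and robust spectral initialization used in the paper are omitted.

```python
import numpy as np

def huber_grad(residual, tau):
    """Gradient of the Huber loss: linear in the residual inside [-tau, tau],
    clipped to +/- tau outside, which caps the influence of heavy-tailed errors."""
    return np.clip(residual, -tau, tau)

def huber_mc_step(L, R, Y, mask, tau, eta):
    """One gradient step on the factors of X = L @ R.T using the Huber loss
    over the observed entries only (a simplified sketch, no balancing term)."""
    G = np.where(mask, huber_grad(L @ R.T - Y, tau), 0.0)
    return L - eta * G @ R, R - eta * G.T @ L
```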
In a mixed generalized linear model, the objective is to learn multiple signals from unlabeled observations: each sample comes from exactly one signal, but it is not known which one. We consider the prototypical problem of estimating two statistically independent signals in a mixed generalized linear model with Gaussian covariates. Spectral methods are a popular class of estimators which output the top two eigenvectors of a suitable data-dependent matrix. However, despite the wide applicability, their design is still obtained via heuristic considerations, and the number of samples $n$ needed to guarantee recovery is super-linear in the signal dimension $d$. In this paper, we develop exact asymptotics on spectral methods in the challenging proportional regime in which $n, d$ grow large and their ratio converges to a finite constant. By doing so, we are able to optimize the design of the spectral method, and combine it with a simple linear estimator, in order to minimize the estimation error. Our characterization exploits a mix of tools from random matrices, free probability and the theory of approximate message passing algorithms. Numerical simulations for mixed linear regression and phase retrieval display the advantage enabled by our analysis over existing designs of spectral methods.
We investigate clustering of data from a mixture of Gaussians that share a common but unknown, and potentially ill-conditioned, covariance matrix. We first consider Gaussian mixtures with two equally sized components and derive a Max-Cut integer program based on maximum likelihood estimation. We prove that its solution achieves the optimal misclassification rate up to logarithmic factors when the number of samples grows linearly in the dimension. However, solving the Max-Cut problem appears to be computationally intractable. To overcome this, we develop an efficient spectral algorithm that attains the optimal rate but requires a quadratic sample size. Although this sample complexity is worse than that of the Max-Cut problem, we conjecture that no polynomial-time method can perform better. Furthermore, we gather numerical and theoretical evidence that supports the existence of a statistical-computational gap. Finally, we generalize the Max-Cut program to a $k$-means program that handles multi-component mixtures with possibly unequal weights. It enjoys similar optimality guarantees for mixtures of distributions that satisfy a transportation-cost inequality, including Gaussian and strongly log-concave distributions.
The problem of finding a unique low-dimensional decomposition of a given matrix is a fundamental and recurring problem in many areas. In this paper, we study the problem of seeking a unique decomposition of a low-rank matrix $Y \in \mathbb{R}^{p \times n}$. Specifically, we consider $Y = AX \in \mathbb{R}^{p \times n}$, where the matrix $A \in \mathbb{R}^{p \times r}$ has full column rank with $r < \min\{n, p\}$, and the matrix $X \in \mathbb{R}^{r \times n}$ is element-wise sparse. We prove that this sparse decomposition of $Y$ can be uniquely identified, up to some intrinsic signed permutation. Our approach relies on solving a nonconvex optimization problem constrained over the unit sphere. Our geometric analysis of the nonconvex optimization landscape shows that any {\em strict} local solution is close to the ground-truth solution and can be recovered by a simple data-driven initialization followed by any second-order descent algorithm. Finally, we corroborate these theoretical results with numerical experiments.