We propose a message passing algorithm, based on variational Bayesian inference, for low-rank tensor completion with automatic rank determination in the canonical polyadic format when additional side information (SI) is given. The SI comes in the form of low-dimensional subspaces that contain the fiber spans of the tensor (columns, rows, tubes, etc.). We validate the regularization properties induced by SI with extensive numerical experiments on synthetic and real-world data and present results on tensor recovery and rank determination. The results show that the number of samples required for successful completion is significantly reduced in the presence of SI. We also discuss the origin of a bump in the phase transition curves that appears when the dimensionality of the SI is comparable with that of the tensor.
In this work, we estimate the number of randomly selected elements of a tensor that, with high probability, guarantees local convergence of Riemannian gradient descent for tensor train completion. We derive a new bound for the orthogonal projection onto the tangent space based on the harmonic mean of the singular values of the unfoldings, and introduce a notion of core coherence for tensor trains. We also extend the results to tensor train completion with side information and obtain the corresponding local convergence guarantees.
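For context, the tensor-train format referred to above can be computed for a fully observed tensor by the standard sequential-SVD construction (TT-SVD). Below is a minimal NumPy sketch of that construction; the function names and the rank cap `rmax` are mine, not from the paper.

```python
import numpy as np

def tt_svd(X, rmax):
    """Decompose a full tensor into TT cores G_k of shape (r_{k-1}, n_k, r_k)
    by sequential truncated SVDs, with every TT rank capped at rmax."""
    d, n = X.ndim, X.shape
    cores, r = [], 1
    C = X.reshape(r * n[0], -1)
    for k in range(d - 1):
        U, s, Vt = np.linalg.svd(C, full_matrices=False)
        rk = min(rmax, int((s > 1e-12 * s[0]).sum()))  # numerical rank, capped
        cores.append(U[:, :rk].reshape(r, n[k], rk))
        C = (s[:rk, None] * Vt[:rk]).reshape(rk * n[k + 1], -1)
        r = rk
    cores.append(C.reshape(r, n[-1], 1))
    return cores

def tt_full(cores):
    """Contract TT cores back into the full tensor."""
    X = cores[0]
    for G in cores[1:]:
        X = np.tensordot(X, G, axes=([X.ndim - 1], [0]))
    return X.reshape([G.shape[1] for G in cores])

# With a non-binding rank cap, a random 4 x 5 x 6 tensor is reproduced exactly.
X = np.random.default_rng(0).standard_normal((4, 5, 6))
err = np.linalg.norm(tt_full(tt_svd(X, rmax=30)) - X) / np.linalg.norm(X)
```

Truncating `rmax` below the numerical rank turns the same routine into a quasi-optimal low-rank TT approximation.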
We consider matrix factorization (MF) with certain constraints, which finds wide applications in various areas. Leveraging variational inference (VI) and unitary approximate message passing (UAMP), we develop a Bayesian approach to MF with an efficient message passing implementation, called UAMPMF. With proper priors imposed on the factor matrices, UAMPMF can be used to solve many problems that can be expressed as MF, such as non-negative matrix factorization, dictionary learning, compressive sensing with matrix uncertainty, robust principal component analysis, and sparse matrix factorization. Extensive numerical examples are provided to show that UAMPMF significantly outperforms state-of-the-art algorithms in terms of recovery accuracy, robustness, and computational complexity.
Recently, there has been a revival of interest in unsupervised learning based on low-rank matrix completion viewed through the lens of dual-graph regularization, which has significantly improved performance on multidisciplinary machine learning tasks such as recommender systems, genotype imputation, and image inpainting. While dual-graph regularization contributes a major part of the success, it often involves computationally expensive hyperparameter tuning. To circumvent this drawback and improve completion performance, we propose a novel Bayesian learning algorithm that automatically learns the hyperparameters associated with dual-graph regularization while guaranteeing the low-rankness of the matrix completion. Notably, a novel prior is devised to promote the low-rankness of the matrix and encode the dual-graph information simultaneously, which is more challenging than its single-graph counterpart. A nontrivial conditional conjugacy between the proposed prior and the likelihood function is then exploited so that an efficient algorithm can be derived under the variational inference framework. Extensive experiments on synthetic and real-world datasets demonstrate the state-of-the-art performance of the proposed learning algorithm for various data analysis tasks.
We propose a new fast streaming algorithm for imputing the missing entries of a low-tubal-rank tensor using the tensor singular value decomposition (t-SVD) algebraic framework. We show that the t-SVD is a specialization of the well-studied block-term decomposition for third-order tensors, and we present an algorithm under this model that can track changing free submodules from incomplete streaming 2-D data. The proposed algorithm uses principles of incremental gradient descent on the Grassmann manifold of subspaces to solve the tensor completion problem with linear complexity and constant memory in the number of time samples. We provide a local expected linear convergence result for our algorithm. Our empirical results are competitive in accuracy but much faster in computation time than state-of-the-art tensor completion algorithms on real applications, recovering temporal chemo-sensing and MRI data under limited sampling.
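The t-SVD framework above is built on the tensor t-product, which is a circular convolution along the third mode and is therefore computed as slice-wise matrix multiplication in the Fourier domain. A minimal sketch of that definition (function names are mine):

```python
import numpy as np

def t_product(A, B):
    """t-product of A (m x p x k) and B (p x n x k): frontal-slice matrix
    multiplication in the FFT domain along the third mode."""
    Af = np.fft.fft(A, axis=2)
    Bf = np.fft.fft(B, axis=2)
    Cf = np.einsum('ipk,pjk->ijk', Af, Bf)  # slice-wise matmul for each frequency
    return np.real(np.fft.ifft(Cf, axis=2))

def t_identity(n, k):
    """Identity tensor: the identity matrix in the first frontal slice, zeros elsewhere."""
    I = np.zeros((n, n, k))
    I[:, :, 0] = np.eye(n)
    return I

# The identity tensor is a unit for the t-product: A * I == A.
A = np.random.default_rng(1).standard_normal((3, 4, 5))
ok = np.allclose(t_product(A, t_identity(4, 5)), A)
```

The t-SVD itself factors each frontal slice in the FFT domain the same way, which is what makes tubal-rank methods computationally attractive.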
A framework is presented for fitting inverse problem models via variational Bayes approximations. Compared with standard Markov chain Monte Carlo approaches, this methodology guarantees flexibility across a wide range of applications, good accuracy, and reduced model fitting times. The message passing and factor graph fragment approach to variational Bayes that we describe facilitates streamlined implementation of approximate inference algorithms and forms the basis for software development. Such an approach allows numerous response distributions and penalizations to be incorporated within inverse problem models. While our work is motivated by one- and two-dimensional response variables, we lay down an infrastructure in which efficient algorithm updates, based on nullifying weak interactions between variables, can also be derived for inverse problems in higher dimensions. Image processing applications motivated by biomedical and archaeological problems are included as illustrations.
This ongoing work aims to provide a unified introduction to statistical learning, building up slowly from classical models like the GMM and HMM to modern neural networks like the VAE and diffusion models. There are many internet resources today that explain this or that new machine learning algorithm in isolation, but they do not (and cannot, in so brief a space) connect these algorithms either to each other or to the classical literature on statistical models out of which the modern algorithms emerged. Also conspicuously lacking is a single notational system which, although unobjectionable to those already familiar with the material (like the authors of such posts), raises a significant barrier to the newcomer's entry. Likewise, I aim to assimilate the various models, wherever possible, into a single framework for inference and learning, showing how (and why) one model can be changed into another with minimal alteration (some of these novel, others from the literature). Some background is of course necessary. I assume the reader is familiar with basic multivariable calculus, probability and statistics, and linear algebra. The goal of this book is certainly not completeness, but rather a more or less straight-line path from the basics to the extremely powerful new models of the last decade. The goal, then, is to complement, rather than replace, comprehensive texts like Bishop's \emph{Pattern Recognition and Machine Learning}, which is now fifteen years old.
Probabilistic modeling of multidimensional spatiotemporal data is critical for many real-world applications. However, real-world spatiotemporal data often exhibit complex dependencies that are nonstationary, i.e., the correlation structure varies with location/time, and nonseparable, i.e., dependencies exist between space and time. Developing effective and computationally efficient statistical models to accommodate nonstationary/nonseparable processes containing both long-range and short-scale variations becomes a challenging task, especially for large-scale datasets with various corruption/missing structures. In this paper, we propose a new statistical framework, Bayesian Complementary Kernelized Learning (BCKL), to achieve scalable probabilistic modeling of multidimensional spatiotemporal data. To effectively characterize complex dependencies, BCKL integrates kernelized low-rank factorization with short-range spatiotemporal Gaussian processes (GPs), in which the two components complement each other. Specifically, we use a multilinear low-rank factorization component to capture the global/long-range correlations in the data and introduce an additive short-scale GP based on compactly supported kernel functions to characterize the remaining local variability. We develop an efficient Markov chain Monte Carlo (MCMC) algorithm for model inference and evaluate the proposed BCKL framework on both synthetic and real-world spatiotemporal datasets. Our results confirm the superior performance of BCKL in providing accurate posterior means and high-quality uncertainty estimates.
This survey provides an overview of higher-order tensor decompositions, their applications, and available software. A tensor is a multidimensional or N-way array. Decompositions of higher-order tensors (i.e., N-way arrays with N ≥ 3) have applications in psychometrics, chemometrics, signal processing, numerical linear algebra, computer vision, numerical analysis, data mining, neuroscience, graph analysis, and elsewhere. Two particular tensor decompositions can be considered to be higher-order extensions of the matrix singular value decomposition: CANDECOMP/PARAFAC (CP) decomposes a tensor as a sum of rank-one tensors, and the Tucker decomposition is a higher-order form of principal component analysis. There are many other tensor decompositions, including INDSCAL, PARAFAC2, CANDELINC, DEDICOM, and PARATUCK2, as well as nonnegative variants of all of the above. The N-way Toolbox, Tensor Toolbox, and Multilinear Engine are examples of software packages for working with tensors.
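As a concrete illustration, the CP decomposition named above is commonly fit by alternating least squares (CP-ALS), updating one factor matrix at a time. Here is a minimal third-order sketch in NumPy; it is a textbook version of the algorithm, not taken from any of the toolboxes listed.

```python
import numpy as np

def khatri_rao(B, C):
    """Column-wise Kronecker product, shape (J*K, R)."""
    return np.einsum('jr,kr->jkr', B, C).reshape(-1, B.shape[1])

def cp_als(X, R, iters=200, seed=0):
    """Rank-R CP decomposition of a third-order tensor via alternating least squares."""
    rng = np.random.default_rng(seed)
    I, J, K = X.shape
    A, B, C = (rng.standard_normal((n, R)) for n in (I, J, K))
    X1 = X.reshape(I, -1)                     # mode-1 unfolding
    X2 = np.moveaxis(X, 1, 0).reshape(J, -1)  # mode-2 unfolding
    X3 = np.moveaxis(X, 2, 0).reshape(K, -1)  # mode-3 unfolding
    for _ in range(iters):
        # Each update is a linear least-squares solve against one unfolding.
        A = X1 @ khatri_rao(B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = X2 @ khatri_rao(A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = X3 @ khatri_rao(A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C

# Recover an exactly rank-2 tensor from a random initialization.
rng = np.random.default_rng(42)
At, Bt, Ct = (rng.standard_normal((n, 2)) for n in (5, 6, 7))
X = np.einsum('ir,jr,kr->ijk', At, Bt, Ct)
A, B, C = cp_als(X, R=2)
rel_err = np.linalg.norm(np.einsum('ir,jr,kr->ijk', A, B, C) - X) / np.linalg.norm(X)
```

The Hadamard products in the pseudoinverse exploit the identity (B ⊙ C)ᵀ(B ⊙ C) = (BᵀB) * (CᵀC), which keeps each update cheap.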
One of the core problems of modern statistics is to approximate difficult-to-compute probability densities. This problem is especially important in Bayesian statistics, which frames all inference about unknown quantities as a calculation involving the posterior density. In this paper, we review variational inference (VI), a method from machine learning that approximates probability densities through optimization. VI has been used in many applications and tends to be faster than classical methods, such as Markov chain Monte Carlo sampling. The idea behind VI is to first posit a family of densities and then to find the member of that family which is close to the target. Closeness is measured by Kullback-Leibler divergence. We review the ideas behind mean-field variational inference, discuss the special case of VI applied to exponential family models, present a full example with a Bayesian mixture of Gaussians, and derive a variant that uses stochastic optimization to scale up to massive data. We discuss modern research in VI and highlight important open problems. VI is powerful, but it is not yet well understood. Our hope in writing this paper is to catalyze statistical research on this class of algorithms.
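Since closeness in VI is measured by the Kullback-Leibler divergence, a quick sanity check is to compare the closed-form KL between two univariate Gaussians against a Monte Carlo estimate of E_q[log q - log p]. A small self-contained sketch:

```python
import numpy as np

def kl_gauss(m1, s1, m2, s2):
    """Closed-form KL( N(m1, s1^2) || N(m2, s2^2) )."""
    return np.log(s2 / s1) + (s1**2 + (m1 - m2)**2) / (2 * s2**2) - 0.5

def kl_mc(m1, s1, m2, s2, n=200_000, seed=0):
    """Monte Carlo estimate of E_q[log q(x) - log p(x)] with x ~ q = N(m1, s1^2)."""
    x = m1 + s1 * np.random.default_rng(seed).standard_normal(n)
    log_q = -0.5 * ((x - m1) / s1) ** 2 - np.log(s1)
    log_p = -0.5 * ((x - m2) / s2) ** 2 - np.log(s2)
    return np.mean(log_q - log_p)  # the sqrt(2*pi) constants cancel

exact = kl_gauss(0.0, 1.0, 1.0, 2.0)
estimate = kl_mc(0.0, 1.0, 1.0, 2.0)
```

The same expectation, taken under the variational family and combined with the log joint, is exactly what the ELBO objective in VI optimizes.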
The advent of comprehensive synaptic wiring diagrams of large neural circuits has created the field of connectomics and given rise to a number of open research questions. One such question is whether it is possible to reconstruct the information stored in a network of neurons, given its synaptic connectivity matrix. Here, we address this question by determining when solving such an inference problem is possible in specific attractor network models, and by providing a practical algorithm to do so. The algorithm builds on ideas from statistical physics to perform approximate Bayesian inference and is amenable to exact analysis. We study its performance on three different models, compare the algorithm to standard algorithms such as PCA, and explore the limitations of reconstructing stored patterns from synaptic connectivity.
Hierarchical models with gamma hyperpriors provide a flexible, sparsity-promoting framework for bridging $\ell^1$ and $\ell^2$ regularizations in Bayesian formulations of inverse problems. Despite the Bayesian motivation for these models, existing methodologies are limited to \textit{maximum a posteriori} estimation; the potential to perform uncertainty quantification has not yet been realized. This paper introduces a variational iterative alternating scheme for hierarchical inverse problems with gamma hyperpriors. The proposed variational inference approach yields accurate reconstructions, provides meaningful uncertainty quantification, and is easy to implement. In addition, it lends itself naturally to model selection for the choice of hyperparameters. We illustrate the performance of our methodology in several computed examples, including a deconvolution problem and sparse identification of dynamical systems from time series data.
We investigate the problem of recovering a partially observed high-rank matrix whose columns obey a nonlinear structure such as a union of subspaces, an algebraic variety or grouped in clusters. The recovery problem is formulated as the rank minimization of a nonlinear feature map applied to the original matrix, which is then further approximated by a constrained non-convex optimization problem involving the Grassmann manifold. We propose two sets of algorithms, one arising from Riemannian optimization and the other as an alternating minimization scheme, both of which include first- and second-order variants. Both sets of algorithms have theoretical guarantees. In particular, for the alternating minimization, we establish global convergence and worst-case complexity bounds. Additionally, using the Kurdyka-Lojasiewicz property, we show that the alternating minimization converges to a unique limit point. We provide extensive numerical results for the recovery of union of subspaces and clustering under entry sampling and dense Gaussian sampling. Our methods are competitive with existing approaches and, in particular, high accuracy is achieved in the recovery using Riemannian second-order methods.
We consider a problem of considerable practical interest: the recovery of a data matrix from a sampling of its entries. Suppose that we observe m entries selected uniformly at random from a matrix M. Can we complete the matrix and recover the entries that we have not seen? We show that one can perfectly recover most low-rank matrices from what appears to be an incomplete set of entries. We prove that if the number m of sampled entries obeys m ≥ C n^1.2 r log n for some positive numerical constant C, then with very high probability, most n × n matrices of rank r can be perfectly recovered by solving a simple convex optimization program. This program finds the matrix with minimum nuclear norm that fits the data. The condition above assumes that the rank is not too large. However, if one replaces the 1.2 exponent with 1.25, then the result holds for all values of the rank. Similar results hold for arbitrary rectangular matrices as well. Our results are connected with the recent literature on compressed sensing, and show that objects other than signals and images can be perfectly reconstructed from very limited information.
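The nuclear-norm program in the abstract requires a convex solver, but the closely related fixed-rank "hard-impute" iteration (alternately fill in the observed entries and project onto the set of rank-r matrices via truncated SVD) already completes well-sampled incoherent low-rank matrices. A sketch under those assumptions; this is a stand-in heuristic, not the paper's convex program:

```python
import numpy as np

def hard_impute(M_obs, mask, rank, iters=500):
    """Alternate between matching observed entries and rank-`rank` projection."""
    X = np.zeros_like(M_obs)
    for _ in range(iters):
        X = np.where(mask, M_obs, X)               # enforce observed entries
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X = (U[:, :rank] * s[:rank]) @ Vt[:rank]   # best rank-r approximation
    return X

# Rank-2, 40 x 40, with half the entries observed uniformly at random.
rng = np.random.default_rng(0)
M = rng.standard_normal((40, 2)) @ rng.standard_normal((2, 40))
mask = rng.random((40, 40)) < 0.5
X = hard_impute(np.where(mask, M, 0.0), mask, rank=2)
rel_err = np.linalg.norm(X - M) / np.linalg.norm(M)
```

With roughly 800 observations against ~160 degrees of freedom, the oversampling here is comfortable, mirroring the sample-complexity discussion in the abstract.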
This paper is about a curious phenomenon. Suppose we have a data matrix, which is the superposition of a low-rank component and a sparse component. Can we recover each component individually? We prove that under some suitable assumptions, it is possible to recover both the low-rank and the sparse components exactly by solving a very convenient convex program called Principal Component Pursuit; among all feasible decompositions, simply minimize a weighted combination of the nuclear norm and of the ℓ1 norm. This suggests the possibility of a principled approach to robust principal component analysis since our methodology and results assert that one can recover the principal components of a data matrix even though a positive fraction of its entries are arbitrarily corrupted. This extends to the situation where a fraction of the entries are missing as well. We discuss an algorithm for solving this optimization problem, and present applications in the area of video surveillance, where our methodology allows for the detection of objects in a cluttered background, and in the area of face recognition, where it offers a principled way of removing shadows and specularities in images of faces.
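Principal Component Pursuit is commonly solved with an inexact augmented-Lagrangian scheme that alternates singular-value thresholding for the low-rank part with entrywise soft thresholding for the sparse part. A compact sketch of that standard ALM variant (a common solver for this program, not necessarily the one the paper discusses; step sizes are my choices):

```python
import numpy as np

def soft(X, t):
    """Entrywise soft threshold: the prox of t * ||.||_1."""
    return np.sign(X) * np.maximum(np.abs(X) - t, 0.0)

def pcp(D, iters=200, tol=1e-7):
    """Principal Component Pursuit via an inexact augmented Lagrangian method."""
    m, n = D.shape
    lam = 1.0 / np.sqrt(max(m, n))          # standard PCP weight
    S = np.zeros_like(D)
    Y = np.zeros_like(D)
    mu = 1.25 / np.linalg.norm(D, 2)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(D - S + Y / mu, full_matrices=False)
        L = (U * soft(s, 1.0 / mu)) @ Vt    # singular-value thresholding
        S = soft(D - L + Y / mu, lam / mu)  # entrywise shrinkage
        R = D - L - S
        Y += mu * R
        mu = min(mu * 1.5, 1e7)
        if np.linalg.norm(R) / np.linalg.norm(D) < tol:
            break
    return L, S

# Separate a rank-2 matrix from 5% large sparse corruptions.
rng = np.random.default_rng(0)
L0 = rng.standard_normal((50, 2)) @ rng.standard_normal((2, 50))
S0 = np.where(rng.random((50, 50)) < 0.05, 5.0 * rng.standard_normal((50, 50)), 0.0)
L, S = pcp(L0 + S0)
rel_err = np.linalg.norm(L - L0) / np.linalg.norm(L0)
```

The two thresholding steps are exactly the proximal operators of the two norms in the weighted objective the abstract describes.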
This paper presents a new expectation propagation (EP) framework for image restoration using patch-based prior distributions. While Monte Carlo techniques are classically used to sample from intractable posterior distributions, they can suffer from scalability issues in high-dimensional inference problems such as image restoration. To address this issue, EP is used here to approximate the posterior distribution by products of multivariate Gaussian densities. Moreover, imposing structural constraints on the covariance matrices of these densities allows for greater scalability and distributed computation. While the method is naturally suited to handle additive Gaussian observation noise, it can also be extended to non-Gaussian noise. Experiments conducted on denoising, inpainting, and deconvolution problems with Gaussian and Poisson noise illustrate the potential benefits of such a flexible approximate Bayesian method, which achieves reduced computational costs compared with sampling techniques.
In this paper, we propose a parametrized factor that enables inference on Gaussian networks where linear dependencies exist among the random variables. Our factor representation is effectively a generalization of traditional Gaussian parametrizations in which the positive-definite constraint on the covariance matrix has been relaxed. To this end, we derive various statistical operations and results (such as marginalization, multiplication, and affine transformations of random variables) that extend the capabilities of Gaussian factors to these degenerate settings. By using this principled factor definition, degeneracies can be accommodated accurately and automatically at little additional computational cost. As an illustration, we apply our methodology to a representative example involving recursive state estimation of cooperative mobile robots.
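For orientation, in the canonical (information) form that such factor representations generalize, multiplying Gaussian factors reduces to adding precision matrices and information vectors, which is one reason a parametrization beyond the moment form is convenient. A minimal non-degenerate sketch (the paper's degenerate generalization, which relaxes positive-definiteness, is more involved):

```python
import numpy as np

def to_canonical(mu, Sigma):
    """Moment form -> canonical form: K = Sigma^-1, h = K @ mu."""
    K = np.linalg.inv(Sigma)
    return K, K @ mu

def multiply(f1, f2):
    """Product of two Gaussian factors in canonical form: parameters simply add."""
    (K1, h1), (K2, h2) = f1, f2
    return K1 + K2, h1 + h2

def to_moment(K, h):
    """Canonical form -> moment form (requires K to be invertible)."""
    Sigma = np.linalg.inv(K)
    return Sigma @ h, Sigma

# The product of N(0, 1) and N(1, 1) in 1-D has mean 1/2 and variance 1/2.
f = multiply(to_canonical(np.array([0.0]), np.eye(1)),
             to_canonical(np.array([1.0]), np.eye(1)))
mu, Sigma = to_moment(*f)
```

The degenerate case arises precisely when K (or Sigma) is singular, which is where the moment-to-canonical round trip above breaks down and a generalized parametrization is needed.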
The affine rank minimization problem consists of finding a matrix of minimum rank that satisfies a given system of linear equality constraints. Such problems have appeared in the literature of a diverse set of fields including system identification and control, Euclidean embedding, and collaborative filtering. Although specific instances can often be solved with specialized algorithms, the general affine rank minimization problem is NP-hard, because it contains vector cardinality minimization as a special case. In this paper, we show that if a certain restricted isometry property holds for the linear transformation defining the constraints, the minimum rank solution can be recovered by solving a convex optimization problem, namely the minimization of the nuclear norm over the given affine space. We present several random ensembles of equations where the restricted isometry property holds with overwhelming probability, provided the codimension of the subspace is Ω(r(m + n) log mn), where m, n are the dimensions of the matrix, and r is its rank. The techniques used in our analysis have strong parallels in the compressed sensing framework. We discuss how affine rank minimization generalizes this pre-existing concept and outline a dictionary relating concepts from cardinality minimization to those of rank minimization. We also discuss several algorithmic approaches to solving the norm minimization relaxations, and illustrate our results with numerical examples.
We consider increasingly complex models of matrix denoising and dictionary learning in the Bayes-optimal setting, in the challenging regime where the matrices to infer have a rank growing linearly with the system size. This is in contrast with most existing literature, which is concerned with the low-rank (i.e., constant-rank) regime. We first consider a class of rotationally invariant matrix denoising problems whose mutual information and minimum mean-square error are computable using standard techniques from random matrix theory. Next, we analyze the more challenging model of dictionary learning. To do so, we introduce a novel combination of the replica method from statistical mechanics together with random matrix theory, coined the spectral replica method. It allows us to conjecture variational formulas for the mutual information between hidden representations and the noisy data of the dictionary learning problem, as well as for the overlaps quantifying the optimal reconstruction error. The proposed method reduces the number of degrees of freedom from $\Theta(N^2)$ (matrix entries) to $\Theta(N)$ (eigenvalues or singular values), and yields Coulomb gas representations of the mutual information reminiscent of matrix models in physics. The main ingredients are the use of Harish-Chandra-Itzykson-Zuber spherical integrals combined with a new replica symmetric decoupling ansatz at the level of the probability distributions of eigenvalues (or singular values) of certain overlap matrices.
Low-rank matrix approximations, such as the truncated singular value decomposition and the rank-revealing QR decomposition, play a central role in data analysis and scientific computing. This work surveys and extends recent research which demonstrates that randomization offers a powerful tool for performing low-rank matrix approximation. These techniques exploit modern computational architectures more fully than classical methods and open the possibility of dealing with truly massive data sets. This paper presents a modular framework for constructing randomized algorithms that compute partial matrix decompositions. These methods use random sampling to identify a subspace that captures most of the action of a matrix. The input matrix is then compressed, either explicitly or implicitly, to this subspace, and the reduced matrix is manipulated deterministically to obtain the desired low-rank factorization. In many cases, this approach beats its classical competitors in terms of accuracy, speed, and robustness. These claims are supported by extensive numerical experiments and a detailed error analysis. The specific benefits of randomized techniques depend on the computational environment. Consider the model problem of finding the k dominant components of the singular value decomposition of an m × n matrix. (i) For a dense input matrix, randomized algorithms require O(mn log(k)) floating-point operations (flops) in contrast with O(mnk) for classical algorithms. (ii) For a sparse input matrix, the flop count matches classical Krylov subspace methods, but the randomized approach is more robust and can easily be reorganized to exploit multi-processor architectures. (iii) For a matrix that is too large to fit in fast memory, the randomized techniques require only a constant number of passes over the data, as opposed to O(k) passes for classical algorithms. In fact, it is sometimes possible to perform matrix approximation with a single pass over the data.
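The random-sampling step described above fits in a few lines: draw a Gaussian test matrix, form a sample of the range, orthonormalize, and factor the compressed matrix. A sketch of the basic range finder with a couple of power iterations (parameter names are mine):

```python
import numpy as np

def randomized_svd(A, k, oversample=10, power_iters=2, seed=0):
    """Approximate top-k SVD via a randomized range finder."""
    rng = np.random.default_rng(seed)
    # Sample the range of A with a Gaussian test matrix, then orthonormalize.
    Y = A @ rng.standard_normal((A.shape[1], k + oversample))
    Q, _ = np.linalg.qr(Y)
    for _ in range(power_iters):      # power iterations sharpen the subspace
        Q, _ = np.linalg.qr(A @ (A.T @ Q))
    B = Q.T @ A                       # compress to the subspace ...
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ Ub)[:, :k], s[:k], Vt[:k]  # ... and factor the small matrix

# On an exactly rank-5 matrix the leading singular values are recovered.
rng = np.random.default_rng(1)
A = rng.standard_normal((100, 5)) @ rng.standard_normal((5, 80))
U, s, Vt = randomized_svd(A, k=5)
s_exact = np.linalg.svd(A, compute_uv=False)[:5]
```

Oversampling and power iterations are the two knobs the survey's error analysis studies: oversampling controls the failure probability, and power iterations handle slowly decaying spectra.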