We propose a robust principal component analysis (RPCA) framework to recover low-rank and sparse matrices from temporal observations. We develop an online version of a batch temporal algorithm in order to handle larger datasets or streaming data. We empirically compare the proposed approaches with different RPCA frameworks and show their effectiveness in practical situations.
This paper is about a curious phenomenon. Suppose we have a data matrix, which is the superposition of a low-rank component and a sparse component. Can we recover each component individually? We prove that under some suitable assumptions, it is possible to recover both the low-rank and the sparse components exactly by solving a very convenient convex program called Principal Component Pursuit; among all feasible decompositions, simply minimize a weighted combination of the nuclear norm and of the $\ell_1$ norm. This suggests the possibility of a principled approach to robust principal component analysis since our methodology and results assert that one can recover the principal components of a data matrix even though a positive fraction of its entries are arbitrarily corrupted. This extends to the situation where a fraction of the entries are missing as well. We discuss an algorithm for solving this optimization problem, and present applications in the area of video surveillance, where our methodology allows for the detection of objects in a cluttered background, and in the area of face recognition, where it offers a principled way of removing shadows and specularities in images of faces.
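As a concrete illustration of Principal Component Pursuit, the sketch below solves the weighted nuclear-norm-plus-$\ell_1$ program with a standard augmented-Lagrangian (ADMM) iteration. The weight λ = 1/√max(m, n) matches the PCP analysis; the step-size heuristic for μ is common practice. This is a minimal reference implementation under those assumptions, not the authors' code.

```python
import numpy as np

def principal_component_pursuit(M, lam=None, mu=None, tol=1e-7, max_iter=1000):
    """Recover L (low-rank) and S (sparse) with M = L + S by ADMM on
    min ||L||_* + lam * ||S||_1  s.t.  L + S = M."""
    m, n = M.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))       # weight suggested by the PCP analysis
    if mu is None:
        mu = 0.25 * m * n / np.abs(M).sum()  # common step-size heuristic
    S = np.zeros_like(M)
    Y = np.zeros_like(M)                     # dual variable
    for _ in range(max_iter):
        # singular value thresholding -> low-rank update
        U, s, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = U @ np.diag(np.maximum(s - 1.0 / mu, 0.0)) @ Vt
        # entrywise soft thresholding -> sparse update
        R = M - L + Y / mu
        S = np.sign(R) * np.maximum(np.abs(R) - lam / mu, 0.0)
        Y = Y + mu * (M - L - S)
        if np.linalg.norm(M - L - S) <= tol * np.linalg.norm(M):
            break
    return L, S
```

On an exactly low-rank matrix with a sparse fraction of gross corruptions, the iteration typically recovers both components to high accuracy.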
We study a multi-factor block model for variable clustering and connect it to the regularized subspace clustering by formulating a distributionally robust version of the nodewise regression. To solve the latter problem, we derive a convex relaxation, provide guidance on selecting the size of the robust region, and hence the regularization weighting parameter, based on the data, and propose an ADMM algorithm for implementation. We validate our method in an extensive simulation study. Finally, we propose and apply a variant of our method to stock return data, obtain interpretable clusters that facilitate portfolio selection and compare its out-of-sample performance with other clustering methods in an empirical study.
Traffic data have long suffered from missing values and corruption, which degrade the accuracy and utility of downstream intelligent transportation system (ITS) applications. Noting the inherent low-rank property of traffic data, numerous studies have formulated missing traffic data recovery as a low-rank tensor completion (LRTC) problem. Due to the non-convexity and discreteness of rank minimization in LRTC, existing methods approximate the rank either with convex surrogates or with non-convex surrogates involving many parameters. In this study, we propose a parameter-free non-convex tensor completion model (TC-PFNC) for traffic data recovery, in which a log-based relaxation term is designed to approximate the tensor algebraic rank. Moreover, previous studies usually assume that the observations are reliable and contain no outliers. We therefore extend TC-PFNC to a robust version (RTC-PFNC) by modeling potential outliers in the traffic data; RTC-PFNC can recover missing values from partial and corrupted observations and remove the anomalies in the observations. Numerical solutions of TC-PFNC and RTC-PFNC are elaborated based on the alternating direction method of multipliers (ADMM). Extensive experimental results on four real-world traffic datasets demonstrate that the proposed methods outperform other state-of-the-art methods in both missing and corrupted data recovery. The code used in this paper is available at: https://github.com/younghe49/t-ITSPFNC.
Recently, Liu and Zhang studied the rather challenging problem of time series forecasting from the perspective of compressed sensing. They proposed a no-learning method named Convolution Nuclear Norm Minimization (CNNM) and proved that CNNM can exactly recover the future part of a series from its observed part, provided that the series is convolutionally low-rank. While impressive, the convolutional low-rankness condition may not be satisfied whenever the series is far from being seasonal, and is in fact fragile in the presence of trends and dynamics. This paper tries to address these issues by integrating a learnable, orthonormal transformation into CNNM, with the purpose of converting a series with involved structures into a regular signal that is convolutionally low-rank in the transformed domain. We prove that the resulting model, Learning-Based CNNM (LbCNNM), strictly succeeds in identifying the future part of a series, as long as the transform of the series is convolutionally low-rank. To learn proper transformations that may meet the required success conditions, we devise an interpretable method based on Principal Component Pursuit (PCP). Equipped with this learning method and some elaborate data augmentation tricks, LbCNNM not only can handle the major components of a time series (including trends, seasonality, and dynamics), but also can make use of the forecasts provided by some other forecasting methods; this means that LbCNNM can be used as a general tool for model combination. Extensive experiments on 100,452 real-world time series from the Time Series Data Library (TSDL) and the M4 Competition (M4) demonstrate the superior performance of LbCNNM.
Robust principal component analysis (PCA) has received wide attention in recent years. It aims to recover a low-rank matrix and a sparse matrix from their sum. This paper proposes a novel non-convex robust PCA algorithm, Riemannian CUR (RieCUR), which utilizes the ideas of Riemannian optimization and robust CUR decompositions. The algorithm has the same computational complexity as Iterated Robust CUR, which is currently state-of-the-art, but is more robust to outliers. RieCUR is also able to tolerate a significant amount of outliers, and is comparable to Accelerated Alternating Projections, which has high outlier tolerance but worse computational complexity than the proposed method. Thus, the proposed algorithm achieves state-of-the-art performance on robust PCA in terms of both computational complexity and outlier tolerance.
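RieCUR itself combines Riemannian optimization with robust CUR steps; the sketch below shows only the plain CUR factorization that such methods maintain, with uniformly sampled columns and rows. The sampling scheme and the absence of any robustness mechanism are simplifying assumptions for illustration, not the paper's algorithm.

```python
import numpy as np

def cur_decomposition(A, k, oversample=2, seed=0):
    """Plain randomized CUR: sample columns C and rows R of A uniformly,
    then set U = pinv(C) @ A @ pinv(R) so that C @ U @ R approximates A.
    For an exactly rank-k matrix, generic samples give exact recovery."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    c = min(n, oversample * k)
    r = min(m, oversample * k)
    cols = rng.choice(n, size=c, replace=False)
    rows = rng.choice(m, size=r, replace=False)
    C, R = A[:, cols], A[rows, :]
    U = np.linalg.pinv(C) @ A @ np.linalg.pinv(R)
    return C, U, R
```

The appeal of CUR over SVD here is that C and R are actual columns and rows of the data, which keeps per-iteration costs low when A is large.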
We propose a new fast streaming algorithm for imputing the missing entries of a low-tubal-rank tensor using the tensor singular value decomposition (t-SVD) algebraic framework. We show that the t-SVD is a specialization of the well-studied block-term decomposition for third-order tensors, and we present an algorithm under this model that can track the free submodule from incomplete streaming 2-D data. The proposed algorithm uses principles of incremental gradient descent on the Grassmann manifold of subspaces to solve the tensor completion problem with linear complexity and constant memory in the number of time samples. We provide a local expected linear convergence result for our algorithm. Our empirical results are competitive in accuracy, but much faster in computation time, compared with state-of-the-art tensor completion algorithms on real applications to recovering temporal chemo-sensing and MRI data under limited sampling.
Inventory record inaccuracies occur frequently and, by some measures, cost retailers approximately 4% in annual sales. Detecting inventory inaccuracies manually is cost-prohibitive, and existing algorithmic solutions rely almost exclusively on learning from longitudinal data, which is insufficient in the dynamic environment induced by modern retail operations. Instead, we propose a solution based on cross-sectional data over stores and SKUs, observing that detecting inventory inaccuracies can be viewed as the problem of identifying anomalies in a (low-rank) Poisson matrix. State-of-the-art approaches to anomaly detection in low-rank matrices apparently fall short. Specifically, from a theoretical perspective, the recovery guarantees for these approaches require that non-anomalous entries be observed with vanishing noise (which is not the case in our problem, and indeed in many applications). So motivated, we propose a conceptually simple entry-wise approach to anomaly detection in low-rank Poisson matrices. Our approach accommodates a general class of probabilistic anomaly models. We show that the cost incurred by our algorithm approaches that of an optimal algorithm at a min-max optimal rate. Using synthetic data and real data from a consumer goods retailer, we show that our approach provides over a 10x cost reduction relative to incumbent detection approaches. Along the way, we build on recent work that seeks entry-wise error guarantees for matrix completion, and establish such guarantees for sub-exponential matrices, a result of independent interest.
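A minimal sketch of the entry-wise idea, under simplifying assumptions: estimate the Poisson rate matrix by a truncated SVD, then flag entries whose deviation exceeds a z-score threshold derived from the Poisson mean-variance relation (variance = mean). The rank-1 fit and the Gaussian-style threshold are stand-ins for the paper's estimator and anomaly models.

```python
import numpy as np

def flag_poisson_anomalies(X, rank=1, z=4.0):
    """Toy entrywise detector: fit a rank-`rank` estimate of the Poisson
    rate matrix by truncated SVD, then flag entries whose deviation from
    the fitted rate exceeds z * sqrt(rate), using var = mean for Poisson."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Lam = U[:, :rank] @ np.diag(s[:rank]) @ Vt[:rank, :]
    Lam = np.maximum(Lam, 1e-6)          # estimated rates must stay positive
    return np.abs(X - Lam) > z * np.sqrt(Lam)
```

With rates that form a rank-1 matrix, a handful of grossly inflated counts stand out clearly against the fitted rates, while ordinary Poisson fluctuations stay below the threshold.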
We introduce and analyze a variant of multivariate singular spectrum analysis (mSSA), a popular time series method for imputing and forecasting a multivariate time series. Under a spatio-temporal factor model that we introduce, given $N$ time series and $T$ observations per time series, we establish that the in-sample prediction error for both imputation and forecasting scales as $1/\sqrt{\min(N, T)\, T}$. This is an improvement over: (i) the $1/\sqrt{T}$ error scaling of SSA, the restriction of mSSA to a univariate time series; (ii) the $1/\min(N, T)$ error scaling of matrix estimation methods that do not exploit the temporal structure in the data. The spatio-temporal model we introduce includes any finite sum and product of: harmonics, polynomials, differentiable periodic functions, and Hölder continuous functions. Our out-of-sample forecasting result under the spatio-temporal factor model may be of independent interest for online learning. Empirically, on benchmark datasets, our variant of mSSA performs competitively with state-of-the-art neural-network time series methods (e.g., DeepAR, LSTM) and significantly outperforms classical methods such as vector autoregression (VAR). Finally, we propose extensions of mSSA: (i) a variant to estimate the time-varying variance of a time series; (ii) a tensor variant with better sample complexity for certain regimes of $N$ and $T$.
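A toy version of the mSSA imputation step, assuming the stack-then-threshold recipe the method is built on: form an $L \times (T/L)$ page matrix per series, stack them side by side, zero-fill missing entries rescaled by the observed fraction, and apply hard singular value thresholding. The window length, rank, and thresholding rule below are illustrative choices, not the paper's tuned procedure.

```python
import numpy as np

def mssa_impute(series_list, L, rank):
    """Toy mSSA-style imputation: stack each series' L x (T/L) "page"
    matrix side by side, zero-fill missing (NaN) entries rescaled by the
    observed fraction, then hard-threshold the singular values."""
    pages = []
    for x in series_list:
        T = (len(x) // L) * L
        pages.append(np.asarray(x[:T], float).reshape(-1, L).T)  # L x (T//L)
    P = np.hstack(pages)
    mask = ~np.isnan(P)
    p_hat = mask.mean()                       # observed fraction
    U, s, Vt = np.linalg.svd(np.where(mask, P, 0.0) / p_hat, full_matrices=False)
    return U[:, :rank] @ np.diag(s[:rank]) @ Vt[:rank, :]  # imputed page matrix
```

For harmonic series the stacked page matrix is (approximately) low-rank, which is exactly the structure the error bound above exploits.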
An increasing number of data science and machine learning problems rely on computation with tensors, which better capture the multi-way relationships and interactions of data than matrices. When tapping into this critical advantage, a key challenge is to develop computationally efficient algorithms to extract useful information from tensor data that are simultaneously corrupted and ill-conditioned. This paper tackles tensor robust principal component analysis (RPCA), which aims to recover a low-rank tensor, under the Tucker decomposition, from observations contaminated by sparse corruptions. To minimize the computation and memory footprints, we propose to directly recover the low-dimensional tensor factors (starting from a tailored spectral initialization) via scaled gradient descent (ScaledGD), coupled with an iteratively varying thresholding operation to adaptively remove the impact of the corruptions. Theoretically, we establish that the proposed algorithm converges linearly to the true low-rank tensor at a constant rate that is independent of its condition number, as long as the level of corruption is not too large. Empirically, we demonstrate through synthetic experiments and real-world applications that the proposed algorithm achieves better and more scalable performance than state-of-the-art matrix and tensor RPCA algorithms.
The affine rank minimization problem consists of finding a matrix of minimum rank that satisfies a given system of linear equality constraints. Such problems have appeared in the literature of a diverse set of fields including system identification and control, Euclidean embedding, and collaborative filtering. Although specific instances can often be solved with specialized algorithms, the general affine rank minimization problem is NP-hard, because it contains vector cardinality minimization as a special case. In this paper, we show that if a certain restricted isometry property holds for the linear transformation defining the constraints, the minimum rank solution can be recovered by solving a convex optimization problem, namely the minimization of the nuclear norm over the given affine space. We present several random ensembles of equations where the restricted isometry property holds with overwhelming probability, provided the codimension of the subspace is Ω(r(m + n) log mn), where m, n are the dimensions of the matrix, and r is its rank. The techniques used in our analysis have strong parallels in the compressed sensing framework. We discuss how affine rank minimization generalizes this pre-existing concept and outline a dictionary relating concepts from cardinality minimization to those of rank minimization. We also discuss several algorithmic approaches to solving the norm minimization relaxations, and illustrate our results with numerical examples.
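One concrete instance of the nuclear-norm relaxation is matrix completion, where the affine measurements are sampled entries. The sketch below minimizes a nuclear-norm-regularized least-squares objective by proximal gradient descent, whose proximal step is singular value thresholding; the regularization weight and iteration count are illustrative choices for a small noiseless example.

```python
import numpy as np

def svt(X, tau):
    """Proximal operator of tau * nuclear norm (singular value thresholding)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def nuclear_norm_recover(mask, b, shape, tau=0.5, step=1.0, iters=800):
    """Proximal gradient on  0.5 * ||P_mask(X) - b||_F^2 + tau * ||X||_*,
    where the entry-sampling operator P_mask is one concrete affine
    measurement map (b holds observed values, zeros elsewhere)."""
    X = np.zeros(shape)
    for _ in range(iters):
        G = np.where(mask, X - b, 0.0)   # gradient of the data-fit term
        X = svt(X - step * G, step * tau)
    return X
```

With enough sampled entries relative to the rank, the iterates approach the minimum-nuclear-norm interpolant, which coincides with the true low-rank matrix.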
In this paper, we study the problem of aligning a batch of linearly correlated images, where the observed images are deformed by some unknown domain transformations, and corrupted by additive Gaussian noise and sparse noise simultaneously. By stacking these images as the frontal slices of a third-order tensor, we propose to utilize the tensor factorization method via transformed tensor-tensor product to explore the low-rankness of the underlying tensor, which is factorized into the product of two smaller tensors via transformed tensor-tensor product under any unitary transformation. The main advantage of the transformed tensor-tensor product is that its computational complexity is lower compared with the existing literature based on the transformed tensor nuclear norm. Moreover, the tensor $\ell_p$ $(0<p<1)$ norm is employed to characterize the sparsity of sparse noise and the tensor Frobenius norm is adopted to model additive Gaussian noise. A generalized Gauss-Newton algorithm is designed to solve the resulting model by linearizing the domain transformations and a proximal Gauss-Seidel algorithm is developed to solve the corresponding subproblem. Furthermore, the convergence of the proximal Gauss-Seidel algorithm is established, and its convergence rate is also analyzed based on the Kurdyka-$\L$ojasiewicz property. Extensive numerical experiments on real-world image datasets are carried out to demonstrate the superior performance of the proposed method as compared to several state-of-the-art methods in both accuracy and computational time.
Deep autoencoders, and other deep neural networks, have demonstrated their effectiveness in discovering non-linear features across many problem domains. However, in many real-world problems, large outliers and pervasive noise are commonplace, and one may not have access to clean training data as required by standard deep denoising autoencoders. Herein, we demonstrate novel extensions to deep autoencoders which not only maintain a deep autoencoder's ability to discover high quality, non-linear features but can also eliminate outliers and noise without access to any clean training data. Our model is inspired by Robust Principal Component Analysis, and we split the input data X into two parts, X = L_D + S, where L_D can be effectively reconstructed by a deep autoencoder and S contains the outliers and noise in the original data X. Since such splitting increases the robustness of standard deep autoencoders, we name our model a "Robust Deep Autoencoder (RDA)". Further, we present generalizations of our results to grouped sparsity norms which allow one to distinguish random anomalies from other types of structured corruptions, such as a collection of features being corrupted across many instances or a collection of instances having more corruptions than their fellows. Such "Group Robust Deep Autoencoders (GRDA)" give rise to novel anomaly detection approaches whose superior performance we demonstrate on a selection of benchmark problems.
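The alternation behind an RDA can be sketched as follows. Purely to keep the example dependency-free, a rank-k SVD reconstruction stands in for the deep autoencoder, so this shows the L_D/S splitting logic (fit a reconstruction to X − S, then soft-threshold the residual into S) rather than the actual network training.

```python
import numpy as np

def robust_autoencoder_split(X, k=2, lam=0.5, iters=50):
    """Alternate between (1) fitting a reconstruction LD to X - S and
    (2) soft-thresholding the residual X - LD into the sparse part S.
    A rank-k SVD stands in for the deep autoencoder in this sketch."""
    S = np.zeros_like(X)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(X - S, full_matrices=False)   # "train" the AE
        LD = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]             # reconstruction
        R = X - LD
        S = np.sign(R) * np.maximum(np.abs(R) - lam, 0.0)      # l1 proximal step
    return LD, S
```

In the real RDA, step (1) is several epochs of autoencoder training on X − S; the surrounding alternation is unchanged.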
We consider the well-studied problem of decomposing a vector time series signal into components with different characteristics, such as smooth, periodic, nonnegative, or sparse. We describe a simple and general framework in which the components are defined by loss functions (which can include constraints), and the signal decomposition is carried out by minimizing the sum of the component losses (subject to the constraints). When each loss function is the negative log-likelihood of a density for the signal component, this framework coincides with maximum a posteriori probability (MAP) estimation; but it also includes many other interesting cases. Summarizing and clarifying prior results, we give two distributed optimization methods for computing the decomposition, which find the optimal decomposition when the component class loss functions are convex, and are good heuristics when they are not. Both methods require only the masked proximal operator of each of the component loss functions, a generalization of the well-known proximal operator that handles missing entries in its argument. Both methods are distributed, i.e., they handle each component separately. We derive tractable methods for evaluating the masked proximal operators of some loss functions that, to our knowledge, have not appeared in the literature.
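A minimal two-component instance of this framework: decompose y into a smooth part (squared second-difference loss) and a sparse spike part ($\ell_1$ loss) by block coordinate descent, where each update is the proximal operator of one component's loss. The penalty weights are illustrative, and the fully observed (unmasked) case is shown for simplicity.

```python
import numpy as np

def decompose(y, alpha=50.0, beta=1.0, iters=100):
    """Toy two-component decomposition by block coordinate descent:
    minimize ||y - x1 - x2||^2 + alpha * ||D2 x1||^2 + beta * ||x2||_1,
    so x1 is smooth (small second differences) and x2 is sparse spikes."""
    n = len(y)
    D2 = np.diff(np.eye(n), n=2, axis=0)          # second-difference operator
    A = np.eye(n) + alpha * D2.T @ D2             # normal equations for x1
    x_spikes = np.zeros(n)
    for _ in range(iters):
        x_smooth = np.linalg.solve(A, y - x_spikes)         # prox of smooth loss
        r = y - x_smooth
        x_spikes = np.sign(r) * np.maximum(np.abs(r) - beta / 2.0, 0.0)  # l1 prox
    return x_smooth, x_spikes
```

On a slow sinusoid with a few additive spikes, the smooth component tracks the sinusoid while the spikes are isolated in the sparse component.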
Network analysis has been a powerful tool for unveiling the relationships and interactions among large numbers of objects. However, its effectiveness in accurately identifying important node-node interactions is challenged by rapidly growing network sizes, with data collected at unprecedented granularity and scale. The common wisdom for overcoming such high dimensionality is to collapse nodes into smaller groups and conduct connectivity analysis at the group level. Splitting the effort into two phases, however, inevitably opens a gap in consistency and drives down efficiency. Consensus learning has emerged as a new normal for knowledge discovery when multiple data sources are available. To this end, this paper develops a unified framework for simultaneous grouping and connectivity analysis by combining multiple data sources. The algorithm also guarantees a statistically optimal estimator.
Time series anomaly detection has applications in a wide range of research fields and applications, including manufacturing and healthcare. The presence of anomalies can indicate novel or unexpected events, such as production faults, system defects, or heart fluttering, and is therefore of particular interest. The large size and complex patterns of time series have led researchers to develop specialised deep learning models for detecting anomalous patterns. This survey focuses on providing a structured and comprehensive view of state-of-the-art deep learning models for time series anomaly detection. It provides a taxonomy based on the factors that divide anomaly detection models into different categories. Aside from describing the basic anomaly detection technique for each category, the advantages and limitations are also discussed. Furthermore, this study includes examples of deep anomaly detection in time series across various application domains in recent years. Finally, it summarises open issues in research and challenges faced while adopting deep anomaly detection models.
Spatiotemporal traffic data imputation is of great significance in intelligent transportation systems and data-driven decision-making processes. To accurately reconstruct partially observed traffic data, we assert the importance of characterizing both global and local trends in traffic time series. In the literature, substantial prior work has demonstrated the effectiveness of utilizing the low-rankness property of traffic data via matrix/tensor completion models. In this study, we first introduce a Laplacian kernel into temporal regularization for characterizing local trends in traffic time series, which can be formulated in the form of circular convolution. Then, we develop a low-rank Laplacian convolutional representation (LCR) model by putting the nuclear norm of a circulant matrix and the Laplacian temporal regularization together, which is shown to admit a unified framework whose solution can be computed via the fast Fourier transform (FFT) with relatively low time complexity. Through extensive experiments on several traffic datasets, we demonstrate the superiority of LCR for imputing traffic time series with various behaviors (e.g., data noise and strong/weak periodicity). The proposed LCR model is an efficient and effective solution to large-scale traffic data imputation over the existing baseline models. The adapted datasets and Python implementation are publicly available at https://github.com/xinychen/transdim.
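The circular-convolution view can be made concrete: with a Laplacian kernel of degree 2τ (value 2τ at lag 0 and −1 at the τ nearest lags on each side), the temporal regularizer acts through a circular convolution with the series, which the FFT evaluates in O(n log n) via the identity circ_conv(ℓ, x) = ifft(fft(ℓ) · fft(x)). A sketch under that assumed kernel layout:

```python
import numpy as np

def laplacian_kernel_conv(x, tau=2):
    """Circular convolution of a series x with a Laplacian kernel
    ell = [2*tau, -1, ..., -1, 0, ..., 0, -1, ..., -1] (degree 2*tau),
    computed in O(n log n) via ifft(fft(ell) * fft(x))."""
    n = len(x)
    ell = np.zeros(n)
    ell[0] = 2 * tau
    ell[1:tau + 1] = -1.0
    ell[-tau:] = -1.0
    return np.real(np.fft.ifft(np.fft.fft(ell) * np.fft.fft(x)))
```

The result agrees with multiplying x by the circulant matrix built from the kernel, which is exactly why the LCR subproblems diagonalize in the Fourier domain.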
As a tool for estimating networks in high dimensions, graphical models are commonly applied to calcium imaging data to estimate functional neuronal connectivity, i.e., the relationships between the activities of neurons. However, in many calcium imaging datasets, the full population of neurons is not recorded simultaneously, but instead in partially overlapping blocks. As first introduced in (Vinci et al. 2019), this leads to the graph quilting problem, in which the goal is to infer the structure of the full graph when only subsets of features are jointly observed. In this paper, we study a novel two-step approach to graph quilting, which first imputes the full covariance matrix using low-rank covariance completion techniques before estimating the graph structure. We introduce three approaches to solve this problem: blockwise SVD, nuclear norm penalization, and non-convex low-rank factorization. While prior work has studied low-rank matrix completion, we address the challenges brought by the block-wise missingness and are the first to investigate the problem in the context of graph learning. We discuss the theoretical properties of the two-step procedure, showing graph selection consistency of one proposed approach by proving novel $\ell_\infty$-norm error bounds for matrix completion with block-missingness. We then investigate the empirical performance of the proposed methods on simulations and on real-world data examples, through which we show the efficacy of these methods for estimating functional connectivity from calcium imaging data.
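For intuition on why low-rank structure lets one fill in a never-jointly-observed block, the sketch below completes the missing cross-covariance of an exactly low-rank model from two overlapping observed blocks via the identity Σ_AB = Σ_AO Σ_OO⁺ Σ_OB (A and B are the non-overlapping variables, O the overlap). This identity-based shortcut is an illustration of the block-missing geometry, not one of the paper's three estimators.

```python
import numpy as np

def complete_cross_block(Sigma1, Sigma2, o):
    """Given Sigma1 on variables A ∪ O and Sigma2 on O ∪ B, where O is the
    set of o overlapping variables (last in Sigma1, first in Sigma2),
    recover the unobserved cross-covariance of a low-rank model via
    Sigma_AB = Sigma_AO @ pinv(Sigma_OO) @ Sigma_OB."""
    Sigma_AO = Sigma1[:-o, -o:]
    Sigma_OO = Sigma1[-o:, -o:]
    Sigma_OB = Sigma2[:o, o:]
    return Sigma_AO @ np.linalg.pinv(Sigma_OO) @ Sigma_OB
```

When Σ = F Fᵀ with rank at most the overlap size and the overlap loadings span the factor space, the recovery is exact; with noisy sample covariances, the regularized estimators in the paper take over this role.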
Subsequence anomaly detection in long sequences is an important problem with applications in a wide range of domains. However, the approaches proposed so far in the literature have severe limitations: they either require prior domain knowledge used to design the anomaly discovery algorithms, or become cumbersome and expensive to use in situations with recurrent anomalies of the same type. In this work, we address these problems and propose a method suitable for domain-agnostic subsequence anomaly detection. Our method, Series2Graph, is based on a graph representation of a novel low-dimensional embedding of subsequences. Series2Graph needs neither labeled instances (like supervised techniques) nor anomaly-free data (like zero-positive learning techniques), and it identifies anomalies of varying lengths. The experimental results, on the largest set of synthetic and real datasets used to date, demonstrate that the proposed approach correctly identifies single and recurrent anomalies without any prior knowledge of their characteristics, outperforming several competing approaches in accuracy by a large margin, while being orders of magnitude faster. This paper appeared in VLDB 2020.
It is known that the decomposition into low-rank and sparse matrices (\textbf{L+S} for short) can be achieved by several Robust PCA techniques. Besides the low-rankness, local smoothness (\textbf{LSS}) is a vital prior for many real-world matrix data such as hyperspectral images and surveillance videos, which makes such matrices have low-rankness and local smoothness properties at the same time. This poses an interesting question: Can we make a matrix decomposition in terms of the \textbf{L\&LSS+S} form exactly? To address this issue, we propose in this paper a new RPCA model based on three-dimensional correlated total variation regularization (3DCTV-RPCA for short) by fully exploiting and encoding the prior expression underlying such joint low-rank and local smoothness matrices. Specifically, using a modification of the Golfing scheme, we prove that under some mild assumptions, the proposed 3DCTV-RPCA model can decompose both components exactly, which should be the first theoretical guarantee among all such related methods combining low-rankness and local smoothness. In addition, by utilizing the Fast Fourier Transform (FFT), we propose an efficient ADMM algorithm with a solid convergence guarantee for solving the resulting optimization problem. Finally, a series of experiments on both simulations and real applications are carried out to demonstrate the general validity of the proposed 3DCTV-RPCA model.