Despite many empirical successes of spectral clustering methods (algorithms that cluster points using eigenvectors of matrices derived from the data), there are several unresolved issues. First, there are a wide variety of algorithms that use the eigenvectors in slightly different ways. Second, many of these algorithms have no proof that they will actually compute a reasonable clustering. In this paper, we present a simple spectral clustering algorithm that can be implemented using a few lines of Matlab. Using tools from matrix perturbation theory, we analyze the algorithm, and give conditions under which it can be expected to do well. We also show surprisingly good experimental results on a number of challenging clustering problems.
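The abstract notes that the algorithm fits in a few lines of Matlab. As a rough illustration only (not the authors' code), the NumPy sketch below follows the usual normalized-spectral-clustering recipe: build a Gaussian affinity matrix, normalize it symmetrically, take the top $k$ eigenvectors, row-normalize, and run k-means. The kernel width `sigma` and the use of scikit-learn's `KMeans` are assumptions of this sketch.

```python
import numpy as np
from scipy.spatial.distance import cdist
from sklearn.cluster import KMeans

def spectral_clustering(X, k, sigma=1.0):
    """Minimal sketch of normalized spectral clustering on points X (n x d)."""
    # Gaussian affinity matrix with zero diagonal
    A = np.exp(-cdist(X, X, "sqeuclidean") / (2 * sigma ** 2))
    np.fill_diagonal(A, 0.0)
    # Symmetrically normalized affinity D^{-1/2} A D^{-1/2}
    d = A.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    M = D_inv_sqrt @ A @ D_inv_sqrt
    # Top-k eigenvectors (eigh returns eigenvalues in ascending order)
    _, vecs = np.linalg.eigh(M)
    Y = vecs[:, -k:]
    # Row-normalize the embedding and cluster the embedded points with k-means
    Y = Y / np.linalg.norm(Y, axis=1, keepdims=True)
    return KMeans(n_clusters=k, n_init=10).fit_predict(Y)
```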
In recent years, spectral clustering has become one of the most popular modern clustering algorithms. It is simple to implement, can be solved efficiently by standard linear algebra software, and very often outperforms traditional clustering algorithms such as the k-means algorithm. At first glance spectral clustering appears slightly mysterious, and it is not obvious why it works at all or what it really does. The goal of this tutorial is to give some intuition on those questions. We describe different graph Laplacians and their basic properties, present the most common spectral clustering algorithms, and derive those algorithms from scratch by several different approaches. Advantages and disadvantages of the different spectral clustering algorithms are discussed.
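The graph Laplacians are the central objects of this tutorial. As a small reference sketch (assuming a dense symmetric weight matrix `W` with zero diagonal and positive degrees), the three Laplacians most commonly discussed can be formed as follows.

```python
import numpy as np

def graph_laplacians(W):
    """Unnormalized, symmetric-normalized, and random-walk Laplacians of a
    symmetric weight matrix W with zero diagonal (a minimal dense sketch)."""
    d = W.sum(axis=1)                      # weighted degrees
    D = np.diag(d)
    L = D - W                              # unnormalized: L = D - W
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    L_sym = D_inv_sqrt @ L @ D_inv_sqrt    # L_sym = D^{-1/2} L D^{-1/2}
    L_rw = np.diag(1.0 / d) @ L            # L_rw = D^{-1} L
    return L, L_sym, L_rw
```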
Kernel matrices, as well as weighted graphs represented by them, are ubiquitous objects in machine learning, statistics and other related fields. The main drawback of using kernel methods (learning and inference using kernel matrices) is efficiency -- given $n$ input points, most kernel-based algorithms need to materialize the full $n \times n$ kernel matrix before performing any subsequent computation, thus incurring $\Omega(n^2)$ runtime. Breaking this quadratic barrier for various problems has therefore been a subject of extensive research efforts. We break the quadratic barrier and obtain $\textit{subquadratic}$ time algorithms for several fundamental linear-algebraic and graph processing primitives, including approximating the top eigenvalue and eigenvector, spectral sparsification, solving linear systems, local clustering, low-rank approximation, arboricity estimation and counting weighted triangles. We build on the recent Kernel Density Estimation framework, which (after preprocessing in time subquadratic in $n$) can return estimates of row/column sums of the kernel matrix. In particular, we develop efficient reductions from $\textit{weighted vertex}$ and $\textit{weighted edge sampling}$ on kernel graphs, $\textit{simulating random walks}$ on kernel graphs, and $\textit{importance sampling}$ on matrices to Kernel Density Estimation and show that we can generate samples from these distributions in $\textit{sublinear}$ (in the support of the distribution) time. Our reductions are the central ingredient in each of our applications and we believe they may be of independent interest. We empirically demonstrate the efficacy of our algorithms on low-rank approximation (LRA) and spectral sparsification, where we observe a $\textbf{9x}$ decrease in the number of kernel evaluations over baselines for LRA and a $\textbf{41x}$ reduction in the graph size for spectral sparsification.
Large graphs commonly appear in social networks, knowledge graphs, recommender systems, the life sciences, and decision problems. Summarizing a large graph by its high-level properties is helpful for solving problems in these settings. In spectral clustering, we aim to identify clusters of nodes such that most edges fall within clusters and only few edges fall between clusters. This task is important for many downstream applications and for exploratory analysis. A core step of spectral clustering is performing an eigendecomposition of the corresponding graph Laplacian matrix (or, equivalently, a singular value decomposition, SVD). The convergence of iterative singular value decomposition approaches depends on the characteristics of the spectrum of the given matrix, i.e., the gaps between consecutive eigenvalues. For a graph Laplacian corresponding to a well-clustered graph, the eigenvalues will be non-negative but very small (less than $1$), which slows convergence. This paper introduces a practical approach for dilating the spectrum in order to accelerate SVD solvers and, in turn, spectral clustering. This is accomplished via polynomial approximations to matrix operations that favorably transform the spectrum of a matrix without changing its eigenvectors. Experiments demonstrate that this approach significantly accelerates convergence, and we explain how it can be parallelized and stochastically approximated to scale with the available compute.
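The key fact behind the spectrum-dilation idea is that a polynomial of a symmetric matrix shares its eigenvectors while mapping each eigenvalue through the same scalar polynomial. The sketch below only illustrates this property on a toy path-graph Laplacian with an arbitrary quadratic polynomial; it does not reproduce the specific dilation polynomials used in the paper.

```python
import numpy as np

# Laplacian of a path graph on four vertices (toy example)
L = np.array([[ 1, -1,  0,  0],
              [-1,  2, -1,  0],
              [ 0, -1,  2, -1],
              [ 0,  0, -1,  1]], dtype=float)

p = lambda M: 3 * M @ M - 2 * M     # arbitrary quadratic polynomial p(x) = 3x^2 - 2x

vals, vecs = np.linalg.eigh(L)
vals_p, vecs_p = np.linalg.eigh(p(L))

# The eigenvalues of p(L) are p applied to the eigenvalues of L ...
print(np.allclose(np.sort(3 * vals**2 - 2 * vals), vals_p))   # True
# ... while the eigenvectors coincide (up to ordering and sign).
```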
Graph clustering is a fundamental problem in unsupervised learning, with numerous applications in computer science and in analysing real-world data. In many real-world applications, we find that the clusters have a significant high-level structure. This is often overlooked in the design and analysis of graph clustering algorithms, which make strong simplifying assumptions about the structure of the graph. This thesis addresses the natural question of whether the structure of clusters can be learned efficiently and describes four new algorithmic results for learning such structure in graphs and hypergraphs. All of the presented theoretical results are extensively evaluated on both synthetic and real-world datasets of different domains, including image classification and segmentation, migration networks, co-authorship networks, and natural language processing. These experimental results demonstrate that the newly developed algorithms are practical, effective, and immediately applicable for learning the structure of clusters in real-world data.
Markov chains are a class of probabilistic models that have achieved widespread application in the quantitative sciences. This is in part due to their versatility, but it is compounded by the ease with which they can be probed analytically. This tutorial provides an in-depth introduction to Markov chains and explores their connection to graphs and random walks. We utilize tools from linear algebra and graph theory to describe the transition matrices of different types of Markov chains, with a particular focus on exploring properties of the eigenvalues and eigenvectors corresponding to these matrices. The results presented are relevant to a number of methods in machine learning and data mining, which we describe at various stages. Rather than being a novel academic study in its own right, this text presents a collection of known results together with some new concepts. Moreover, the tutorial prioritizes offering intuition to readers over formal rigor, and it only assumes basic exposure to concepts from linear algebra and probability theory. It is therefore accessible to students and researchers from a wide variety of disciplines.
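As a concrete companion to the random-walk view of Markov chains, the toy sketch below (our own example, not taken from the tutorial) builds the transition matrix of a random walk on a small weighted graph and recovers its stationary distribution from the leading left eigenvector, then checks it against the degree distribution.

```python
import numpy as np

# Symmetric weight matrix of a small graph (toy example)
W = np.array([[0, 2, 1],
              [2, 0, 1],
              [1, 1, 0]], dtype=float)

P = W / W.sum(axis=1, keepdims=True)     # row-stochastic transition matrix P = D^{-1} W

# Stationary distribution: left eigenvector of P for eigenvalue 1
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmax(np.real(vals))])
pi = pi / pi.sum()

# For a random walk on an undirected graph, pi is proportional to the degrees
print(np.allclose(pi, W.sum(axis=1) / W.sum()))   # True
```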
This is a tutorial and survey paper on nonlinear dimensionality reduction and feature extraction methods that are based on the Laplacian of the data graph. We first introduce the definitions of the adjacency matrix and the Laplacian matrix, together with interpretations of the Laplacian. Then, we cover graph cuts and spectral clustering, which applies clustering in subspaces of the data. Different optimization variants of Laplacian eigenmaps and its out-of-sample extension are explained. Thereafter, we introduce locality preserving projection and its kernel variant as linear special cases of Laplacian eigenmaps. Versions of graph embedding are then explained, which are generalized versions of Laplacian eigenmaps and locality preserving projection. Finally, diffusion maps are introduced, a method based on the data graph and random walks.
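To ground the survey's main construction, the following sketch (our own minimal illustration; the Gaussian-kernel k-NN graph and the parameter choices are assumptions) computes a Laplacian-eigenmaps embedding by solving the generalized eigenproblem $L y = \lambda D y$ and keeping the eigenvectors after the trivial constant one.

```python
import numpy as np
from scipy.spatial.distance import cdist
from scipy.linalg import eigh

def laplacian_eigenmaps(X, n_components=2, n_neighbors=10, sigma=1.0):
    """Minimal Laplacian-eigenmaps sketch: k-NN graph, heat-kernel weights,
    generalized eigenproblem L y = lambda D y."""
    D2 = cdist(X, X, "sqeuclidean")
    W = np.zeros_like(D2)
    # Connect each point to its n_neighbors nearest neighbors (excluding itself)
    idx = np.argsort(D2, axis=1)[:, 1:n_neighbors + 1]
    for i, nbrs in enumerate(idx):
        W[i, nbrs] = np.exp(-D2[i, nbrs] / (2 * sigma ** 2))
    W = np.maximum(W, W.T)                 # symmetrize the graph
    D = np.diag(W.sum(axis=1))
    L = D - W
    # Generalized eigenproblem; eigenvalues are returned in ascending order
    vals, vecs = eigh(L, D)
    # Drop the constant eigenvector for eigenvalue 0, keep the next components
    return vecs[:, 1:n_components + 1]
```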
In this work we study statistical properties of graph-based algorithms for multi-manifold clustering (MMC). In MMC the goal is to retrieve the multi-manifold structure underlying a given Euclidean data set when the data set is assumed to be obtained by sampling a distribution on a union of manifolds $\mathcal{M} = \mathcal{M}_1 \cup\dots \cup \mathcal{M}_N$ that may intersect with each other and that may have different dimensions. We investigate sufficient conditions that similarity graphs on data sets must satisfy in order for their corresponding graph Laplacians to capture the right geometric information to solve the MMC problem. Precisely, we provide high probability error bounds for the spectral approximation of a tensorized Laplacian on $\mathcal{M}$ with a suitable graph Laplacian built from the observations; the recovered tensorized Laplacian contains all geometric information of all the individual underlying manifolds. We provide an example of a family of similarity graphs, which we call annular proximity graphs with angle constraints, satisfying these sufficient conditions. We contrast our family of graphs with other constructions in the literature based on the alignment of tangent planes. Extensive numerical experiments expand the insights that our theory provides on the MMC problem.
Spectral clustering is popular among practitioners and theoreticians alike. While performance guarantees for spectral clustering are well understood, recent work has focused on enforcing "fairness" in clusters, requiring them to be "balanced" with respect to a categorical sensitive node attribute (e.g., the distribution of races in the population). In this paper, we consider a setting where the sensitive attribute manifests indirectly in an auxiliary representation graph rather than being observed directly. This graph specifies pairs of nodes that can represent each other with respect to the sensitive attribute, and it is observed in addition to the usual similarity graph. Our goal is to find clusters in the similarity graph while respecting a new individual fairness constraint encoded by the representation graph. We develop variants of unnormalized and normalized spectral clustering for this task and analyze their performance under a planted partition model induced by the representation graph. This model uses both the cluster membership of the nodes and the structure of the representation graph to generate random similarity graphs. To the best of our knowledge, these are the first consistency results for constrained spectral clustering under individual-level fairness constraints. Numerical results corroborate our theoretical findings.
Selecting subsets of features that differentiate between two conditions is a key task in a broad range of scientific domains. In many applications, the features of interest form clusters with similar effects on the data at hand. To recover such clusters we develop DiSC, a data-driven approach for detecting groups of features that differentiate between conditions. For each condition, we construct a graph whose nodes correspond to the features and whose weights are functions of the similarity between them for that condition. We then apply a spectral approach to compute subsets of nodes whose connectivity differs significantly between the condition-specific feature graphs. On the theoretical front, we analyze our approach with a toy example based on the stochastic block model. We evaluate DiSC on a variety of datasets, including MNIST, hyperspectral imaging, simulated scRNA-seq and task fMRI, and demonstrate that DiSC uncovers features that better differentiate between conditions compared to competing methods.
Spectral methods offer a tractable, global framework for clustering in graphs via eigenvector computations on graph matrices. Hypergraph data, in which entities interact on edges of arbitrary size, pose a challenge for matrix representations and hence for spectral clustering. We study spectral clustering for non-uniform hypergraphs based on the hypergraph non-backtracking operator. After reviewing the definition of this operator and its basic properties, we prove a theorem of Ihara-Bass type that allows eigenpair computations to be carried out on a smaller matrix, often enabling faster computation. We then propose an alternating algorithm for inference in a hypergraph stochastic block model via linearized belief propagation, which involves a spectral clustering step that again uses the non-backtracking operator. We provide proofs related to this algorithm that both formalize and extend several previous results. We pose several conjectures about the limits of spectral methods and of detectability in hypergraph stochastic block models, and support them with in-expectation analyses of the eigenpairs of the operators we study. We perform experiments on real and synthetic data that demonstrate the benefits of hypergraph methods over graph-based ones when interactions of different sizes carry different information about the cluster structure.
We consider the problem of clustering mixtures of mean-separated Gaussians in high dimensions. We are given samples from a mixture of $k$ identity-covariance Gaussians such that the minimum pairwise distance between any two means is at least $\Delta$, for some parameter $\Delta > 0$, and the goal is to recover the ground-truth clustering of these samples. Separation $\Delta = \Theta(\sqrt{\log k})$ is known to be both necessary and sufficient for recovering a good clustering, but estimators achieving this guarantee are inefficient. We give the first algorithm that runs in polynomial time and nearly matches this guarantee. More precisely, we give an algorithm that takes polynomially many samples and polynomial time and successfully recovers a good clustering as long as the separation is $\Delta = \Omega(\log^{1/2+c} k)$, for any $c > 0$. Previously, polynomial-time algorithms were known for this problem only when the separation was polynomial in $k$, and all algorithms that could tolerate $\textsf{poly}(\log k)$ separation required quasi-polynomial time. We also extend our result to mixtures of distributions that satisfy the Poincar\'e inequality, under additional mild assumptions. The main technical tool, which we believe is of independent interest, is a novel way to implicitly represent and estimate high-degree moments of a distribution, which allows us to extract important information about high-degree moments without ever explicitly writing down the full moment tensor.
We connect two random graph models, the Popularity Adjusted Block Model (PABM) and the Generalized Random Dot Product Graph (GRDPG), by demonstrating that the PABM is a special case of the GRDPG in which communities correspond to mutually orthogonal subspaces of latent vectors. This insight allows us to construct new algorithms for community detection and parameter estimation for the PABM, as well as improve an existing algorithm that relies on Sparse Subspace Clustering. Using established asymptotic properties of adjacency spectral embedding for the GRDPG, we derive asymptotic properties of these algorithms. In particular, we demonstrate that the absolute number of community detection errors tends to zero as the number of graph vertices tends to infinity. Simulation experiments illustrate these properties.
We review clustering as an analysis tool and the underlying concepts from an introductory perspective. What is clustering and how can clusterings be realised programmatically? How can data be represented and prepared for a clustering task? And how can clustering results be validated? Connectivity-based versus prototype-based approaches are reflected in the context of several popular methods: single-linkage, spectral embedding, k-means, and Gaussian mixtures are discussed as well as the density-based protocols (H)DBSCAN, Jarvis-Patrick, CommonNN, and density-peaks.
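As a hands-on companion to the methods listed above, a short scikit-learn sketch (our own example on synthetic two-moons data; the dataset and parameter values are assumptions) contrasts a prototype-based method (k-means), a density-based one (DBSCAN), and a connectivity-based one (spectral clustering on a nearest-neighbor graph).

```python
import numpy as np
from sklearn.datasets import make_moons
from sklearn.cluster import KMeans, DBSCAN, SpectralClustering

X, _ = make_moons(n_samples=400, noise=0.05, random_state=0)

# Prototype-based: k-means cuts the moons with a straight boundary
km_labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Density-based: DBSCAN follows the two curved, well-separated moons
db_labels = DBSCAN(eps=0.2, min_samples=5).fit_predict(X)

# Connectivity-based: spectral embedding of a nearest-neighbor graph + k-means
sc_labels = SpectralClustering(n_clusters=2, affinity="nearest_neighbors",
                               random_state=0).fit_predict(X)

print("k-means clusters:", len(set(km_labels)))
print("DBSCAN clusters (excluding noise):", len(set(db_labels) - {-1}))
print("spectral clusters:", len(set(sc_labels)))
```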
Social networks are often modeled using signed graphs, where vertices correspond to users and edges carry a sign indicating whether an interaction between users was positive or negative. The resulting signed graphs typically contain a clear community structure, in the sense that the graph can be partitioned into a small number of polarized communities, each defining a sparse cut and being indivisible into smaller polarized sub-communities. We provide a local clustering oracle for signed graphs with such a clear community structure that answers membership queries by reading only a small portion of the graph. Formally, when the graph has bounded maximum degree and the number of communities is at most $O(\log n)$, then with $\tilde{O}(\sqrt{n}\operatorname{poly}(1/\varepsilon))$ preprocessing time, our oracle can answer each membership query in $\tilde{O}(\sqrt{n}\operatorname{poly}(1/\varepsilon))$ time, and it correctly classifies a $(1-\varepsilon)$-fraction of the vertices with respect to a set of hidden planted ground-truth communities. Our oracle is desirable in applications where the clustering information is needed only for a small number of vertices. Previously, such local clustering oracles were known only for unsigned graphs; our generalization to signed graphs requires a number of new ideas and a new spectral analysis of the behavior of random walks with signs. We evaluate our algorithm for constructing such an oracle and answering membership queries on both synthetic and real-world datasets, validating its performance in practice.
Community detection and orthogonal group synchronization are fundamental problems with a variety of important applications in science and engineering. In this work, we consider the joint problem of community detection and orthogonal group synchronization, which aims to recover the communities and perform synchronization simultaneously. To that end, we propose a simple algorithm that consists of a spectral decomposition step followed by a column-pivoted QR factorization (CPQR). The proposed algorithm is efficient and scales linearly with the number of data points. We also leverage the recently developed "leave-one-out" technique to establish a near-optimal guarantee for exact recovery of the cluster memberships and stable recovery of the orthogonal transforms. Numerical experiments demonstrate the efficiency and efficacy of our algorithm and confirm our theoretical characterization.
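For the clustering portion of such a pipeline (leaving synchronization aside), column-pivoted QR on the leading eigenvectors can be used to extract cluster memberships directly, without k-means. The sketch below is our own illustration of that generic CPQR clustering step on a symmetric affinity or adjacency matrix; it is not the authors' full joint algorithm.

```python
import numpy as np
from scipy.linalg import qr, eigh

def cpqr_clusters(A, k):
    """Assign n nodes to k clusters from the top-k eigenvectors of a symmetric
    matrix A using column-pivoted QR (a minimal sketch of the CPQR step)."""
    vals, vecs = eigh(A)
    V = vecs[:, np.argsort(vals)[-k:]]        # n x k, leading eigenvectors
    # Column-pivoted QR of V^T picks k representative nodes (the pivots)
    _, _, piv = qr(V.T, pivoting=True)
    reps = piv[:k]
    # Express every row of V in the basis of the representative rows and
    # assign each node to the representative with the largest coefficient
    C = np.linalg.solve(V[reps, :].T, V.T).T  # coefficients, shape n x k
    return np.argmax(np.abs(C), axis=1)
```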
The nonlinear reformulation of spectral clustering methods has recently attracted attention due to its increased numerical benefits and its solid mathematical background. We present a novel direct multiway spectral clustering algorithm in the $p$-norm, for $p \in (1, 2]$. The problem of computing multiple eigenvectors of the graph $p$-Laplacian, a nonlinear generalization of the standard graph Laplacian, is recast as an unconstrained minimization problem on a Grassmann manifold. The value of $p$ is reduced in a pseudo-continuous manner, promoting sparser solution vectors that correspond to optimal graph cuts as $p$ approaches one. Monitoring the monotonic decrease of the balanced graph cuts guarantees that we obtain the best available solution from the $p$-levels considered. We demonstrate the effectiveness and accuracy of our algorithm in various artificial test cases. Our numerical results and comparisons with various state-of-the-art clustering methods indicate that the proposed method achieves high-quality clusters both in terms of balanced graph cut metrics and in terms of the accuracy of the label assignment. Furthermore, we conduct studies on the classification of facial images and handwritten characters to demonstrate the applicability to real-world datasets.
With the increasing relevance of large networks in important areas, such as the study of contact networks for the spread of disease or of social networks and their influence on geopolitics, it has become necessary to study machine learning tools that scale to very large networks, often containing millions of nodes. One major class of such scalable algorithms is known as network representation learning or network embedding. These algorithms try to learn representations of network functionals (e.g., nodes) by first running multiple random walks and then using the number of co-occurrences of each pair of nodes within the observed random-walk segments to obtain a low-dimensional representation of the nodes in some Euclidean space. The aim of this paper is to rigorously understand the performance of two major algorithms, DeepWalk and node2vec, in recovering communities for canonical network models with ground-truth communities. Depending on the sparsity of the graph, we find the length of the random-walk segments required such that the corresponding observed co-occurrence window is able to perform almost exact recovery of the underlying community assignments. We prove that, for a given fixed co-occurrence window, node2vec, which uses random walks with a low backtracking probability, can succeed for much sparser networks than DeepWalk, which uses simple random walks. Moreover, if the sparsity parameter is low, we provide evidence that these algorithms might not succeed in almost exact recovery. The analysis requires developing general tools for path counting in random networks having an underlying low-rank structure, which are of independent interest.
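The co-occurrence statistic at the heart of these embedding methods can be made concrete with a small sketch (our own, using simple random walks as in DeepWalk, and assuming every node has at least one edge; the walk length, number of walks, and window size are arbitrary illustration parameters).

```python
import numpy as np

def cooccurrence_counts(A, walk_len=20, walks_per_node=10, window=5, seed=0):
    """Count within-window co-occurrences of node pairs over simple random
    walks on the graph with adjacency matrix A (a minimal DeepWalk-style sketch)."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    P = A / A.sum(axis=1, keepdims=True)       # transition probabilities of the walk
    counts = np.zeros((n, n))
    for start in range(n):
        for _ in range(walks_per_node):
            walk = [start]
            for _ in range(walk_len - 1):
                walk.append(rng.choice(n, p=P[walk[-1]]))
            for i, u in enumerate(walk):        # count pairs within the window
                for v in walk[i + 1:i + 1 + window]:
                    counts[u, v] += 1
                    counts[v, u] += 1
    return counts
```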
This work studies the classical spectral clustering algorithm, which embeds the vertices of some graph $G=(V_G, E_G)$ into $\mathbb{R}^k$ using $k$ eigenvectors of some matrix of $G$, and applies $k$-means to partition $V_G$ into $k$ clusters. Our first result is a tighter analysis of the performance of spectral clustering, which explains why it works under some much weaker conditions than the ones studied in the literature. For the second result, we show that, by applying fewer than $k$ eigenvectors to construct the embedding, spectral clustering is able to produce better output in many practical instances; this result is the first of its kind in spectral clustering. In addition to its conceptual and theoretical significance, the practical impact of our work is demonstrated by an empirical analysis on both synthetic and real-world datasets, in which spectral clustering produces comparable or better results with fewer than $k$ eigenvectors.
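The proposed change is small relative to the classical pipeline: embed with a number of eigenvectors that may be smaller than the number of clusters, then still run k-means with $k$ clusters. The sketch below (our own, on a symmetrically normalized adjacency matrix and with scikit-learn's `KMeans` as assumed components) only illustrates that interface; it is not the authors' implementation.

```python
import numpy as np
from sklearn.cluster import KMeans

def spectral_clustering_fewer_vecs(A, k, n_eigvecs):
    """Partition a graph with adjacency matrix A into k clusters using only
    n_eigvecs <= k eigenvectors for the embedding (minimal sketch)."""
    d = A.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    M = D_inv_sqrt @ A @ D_inv_sqrt             # normalized adjacency
    vals, vecs = np.linalg.eigh(M)
    Y = vecs[:, np.argsort(vals)[-n_eigvecs:]]  # embedding with few eigenvectors
    return KMeans(n_clusters=k, n_init=10).fit_predict(Y)
```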
The K-subspaces (KSS) method is a generalization of the K-means method for subspace clustering. In this work, we present local convergence analysis and a recovery guarantee for KSS, assuming the data are generated by a semi-random union-of-subspaces model in which $N$ points are randomly sampled from $K \ge 2$ overlapping subspaces. We show that if the initial assignment of the KSS method lies within a neighborhood of a true clustering, it converges at a superlinear rate and finds the correct clustering within $\Theta(\log\log N)$ iterations. In addition, we propose a thresholding inner-product-based spectral method for initialization and prove that it produces a point within this neighborhood. We also present numerical results of the studied method to support our theoretical developments.
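As a reference for what one KSS iteration looks like, the sketch below (our own illustration; the initialization here is random rather than the paper's thresholded inner-product spectral initialization, and subspaces are taken to pass through the origin) alternates between fitting a fixed-dimension subspace to each cluster via an SVD and reassigning points to the subspace with the smallest projection residual.

```python
import numpy as np

def k_subspaces(X, K, dim, n_iter=50, seed=0):
    """Minimal K-subspaces (KSS) sketch with linear subspaces through the origin:
    alternate between an SVD fit of a dim-dimensional subspace per cluster and
    reassigning each point to the subspace with the smallest projection residual."""
    rng = np.random.default_rng(seed)
    N = X.shape[0]
    labels = rng.integers(K, size=N)            # random initialization (not the paper's)
    for _ in range(n_iter):
        bases = []
        for j in range(K):
            Xj = X[labels == j]
            if Xj.shape[0] < dim:               # guard against (nearly) empty clusters
                Xj = X[rng.choice(N, size=dim, replace=False)]
            _, _, Vt = np.linalg.svd(Xj, full_matrices=False)
            bases.append(Vt[:dim].T)            # d x dim orthonormal basis
        # Projection residual of every point against every subspace
        resid = np.stack([np.linalg.norm(X - X @ B @ B.T, axis=1) for B in bases],
                         axis=1)
        new_labels = resid.argmin(axis=1)
        if np.array_equal(new_labels, labels):  # assignments stabilized
            break
        labels = new_labels
    return labels
```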