Nonlinear reformulations of the spectral clustering method have gained increasing attention recently, owing to their numerical benefits and their solid mathematical background. We present a novel direct multiway spectral clustering algorithm in the $p$-norm, for $p \in (1, 2]$. The problem of computing multiple eigenvectors of the graph $p$-Laplacian, a nonlinear generalization of the standard graph Laplacian, is recast as an unconstrained minimization problem on a Grassmann manifold. The value of $p$ is reduced in a pseudo-continuous manner, promoting sparser solution vectors that correspond to optimal graph cuts as $p$ approaches one. Monitoring the monotonic decrease of the balanced graph cuts guarantees that we obtain the best available solution from the $p$-levels considered. We demonstrate the effectiveness and accuracy of our algorithm in various artificial test cases. Our numerical and comparative results with various state-of-the-art clustering methods indicate that the proposed method obtains high-quality clusters both in terms of balanced graph cut metrics and in terms of the accuracy of the label assignment. Furthermore, we conduct studies on facial image and handwritten character classification to demonstrate the applicability to real-world datasets.
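For orientation, the scalar objective underlying $p$-spectral methods is the graph $p$-Laplacian Rayleigh quotient in the style of Bühler and Hein; a minimal sketch of that functional follows (the algorithm above minimizes a multiway analogue over a Grassmann manifold, which this sketch does not attempt; the function name is ours):

```python
# Graph p-Laplacian Rayleigh quotient:
#   Q_p(f) = (1/2) * sum_ij w_ij |f_i - f_j|^p / ||f||_p^p.
# For p = 2 this reduces to the usual Rayleigh quotient of the graph Laplacian.
import numpy as np

def p_rayleigh_quotient(W, f, p):
    """W: symmetric weight matrix; f: candidate eigenvector; p in (1, 2]."""
    pairwise = np.abs(f[:, None] - f[None, :]) ** p
    return 0.5 * (W * pairwise).sum() / (np.abs(f) ** p).sum()
```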
In recent years, spectral clustering has become one of the most popular modern clustering algorithms. It is simple to implement, can be solved efficiently by standard linear algebra software, and very often outperforms traditional clustering algorithms such as the k-means algorithm. At first glance spectral clustering appears slightly mysterious, and it is not obvious why it works at all and what it really does. The goal of this tutorial is to give some intuition on those questions. We describe different graph Laplacians and their basic properties, present the most common spectral clustering algorithms, and derive those algorithms from scratch by several different approaches. Advantages and disadvantages of the different spectral clustering algorithms are discussed.
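As a concrete companion to the tutorial's recipe, here is a minimal sketch of normalized spectral clustering with NumPy, SciPy, and scikit-learn; parameter choices are illustrative, not prescriptive:

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.cluster import KMeans

def spectral_clustering(W, k):
    """Cluster the nodes of a weighted adjacency matrix W into k groups."""
    d = W.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    # Symmetric normalized Laplacian: L = I - D^{-1/2} W D^{-1/2}.
    L = np.eye(len(W)) - d_inv_sqrt[:, None] * W * d_inv_sqrt[None, :]
    # Eigenvectors of the k smallest eigenvalues carry the cluster structure.
    _, U = eigh(L, subset_by_index=[0, k - 1])
    # Row-normalize before k-means (Ng-Jordan-Weiss variant).
    U /= np.maximum(np.linalg.norm(U, axis=1, keepdims=True), 1e-12)
    return KMeans(n_clusters=k, n_init=10).fit_predict(U)
```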
Graph clustering is a fundamental problem in unsupervised learning, with numerous applications in computer science and in analysing real-world data. In many real-world applications, we find that the clusters have a significant high-level structure. This is often overlooked in the design and analysis of graph clustering algorithms, which make strong simplifying assumptions about the structure of the graph. This thesis addresses the natural question of whether the structure of clusters can be learned efficiently, and describes four new algorithmic results for learning such structure in graphs and hypergraphs. All of the presented theoretical results are extensively evaluated on both synthetic and real-world datasets of different domains, including image classification and segmentation, migration networks, co-authorship networks, and natural language processing. These experimental results demonstrate that the newly developed algorithms are practical, effective, and immediately applicable for learning the structure of clusters in real-world data.
There has recently been intense activity around the embedding of very high-dimensional and nonlinear data structures, much of it in the data science and machine learning literature. We survey this activity in four parts. In the first part we cover nonlinear methods such as principal curves, multidimensional scaling, local linear methods, ISOMAP, graph-based methods and diffusion maps, kernel-based methods, and random projections. The second part is concerned with topological embedding methods, in particular the mapping of topological properties into persistence diagrams, and the Mapper algorithm. Another type of data set experiencing tremendous growth is very high-dimensional network data. The task considered in the third part is how to embed such data in a vector space of moderate dimension, so as to make the data amenable to traditional techniques such as clustering and classification. Arguably, this is where the contrast between algorithmic machine learning methods and statistical modeling (so-called stochastic block modeling) is sharpest; we discuss the pros and cons of both approaches. The final part of the survey deals with embedding in $\mathbb{R}^2$, i.e., visualization. Three methods are presented, building on the material of the first, second, and third parts: $t$-SNE, UMAP, and LargeVis. They are illustrated and compared on two simulated data sets: one consisting of a triple of noisy Ranunculoid curves, and the other of networks of increasing complexity generated with stochastic block models and with two types of nodes.
In this paper, we propose a new method for detecting clusters in undirected graphs with attributed vertices. The aim is to group vertices which are similar not only in terms of structural connectivity but also in terms of attribute values. We incorporate structural and attribute similarities between the vertices in an augmented graph by creating additional vertices and edges, as proposed in [6, 38]. The augmented graph is then embedded in a Euclidean space associated with its Laplacian, where a modified K-means algorithm is applied to identify clusters. The modified K-means relies on a vector distance measure in which each original vertex is assigned a suitable set of vector-valued coordinates depending on both structural connectivity and attribute similarity, so that each original graph vertex is regarded as represented by $M+1$ vertices of the augmented graph, where $M$ is the number of vertex attributes. To define the coordinate vectors we employ our recently proposed algorithm based on an adaptive AMG (Algebraic MultiGrid) method, which identifies the coordinate directions in the embedding Euclidean space as algebraically smooth vectors with respect to our augmented graph Laplacian, thereby extending our previous result for graphs without attributes. We analyze the effectiveness of our proposed clustering method by comparison with some well-known methods whose software implementations are freely available, and with results reported in the literature, on two different types of widely used synthetic graphs as well as on some real-world attributed graphs.
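One plausible reading of the augmentation step is sketched below, under the assumption of binary vertex attributes; the weighting and the exact construction of [6, 38] may differ, and the function name is ours:

```python
import numpy as np

def augment_graph(A, X, attr_weight=1.0):
    """A: n x n structural adjacency; X: n x m binary vertex-attribute matrix.
    Adds one auxiliary node per attribute, linked to every vertex carrying it,
    so attribute similarity becomes two-hop structural proximity."""
    n, m = X.shape
    A_aug = np.zeros((n + m, n + m))
    A_aug[:n, :n] = A                  # original structural edges
    A_aug[:n, n:] = attr_weight * X    # vertex-to-attribute edges
    A_aug[n:, :n] = attr_weight * X.T
    return A_aug
```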
Hypergraphs allow modeling problems with multi-way high-order relationships. However, the computational cost of most existing hypergraph-based algorithms can depend heavily on the size of the input hypergraph. To address the ever-increasing computational challenges, graph coarsening can be applied to preprocess a given hypergraph by aggressively aggregating its vertices (nodes). However, state-of-the-art hypergraph partitioning (clustering) methods that incorporate heuristic graph coarsening techniques are not optimized for preserving the structural (global) properties of hypergraphs. In this work, we propose an efficient spectral hypergraph coarsening scheme (HyperSF) for well preserving the original spectral (structural) properties of hypergraphs. Our approach leverages a recent strongly-local max-flow-based clustering algorithm for detecting sets of hypergraph vertices that minimize the ratio cut. To further improve algorithm efficiency, we propose a divide-and-conquer scheme by leveraging spectral clustering of the bipartite graphs corresponding to the original hypergraphs. Our experimental results on a variety of hypergraphs extracted from real-world VLSI design benchmarks show that the proposed hypergraph coarsening algorithm can significantly improve the multi-way conductance of hypergraph clustering as well as runtime efficiency when compared with prior state-of-the-art algorithms.
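For context, the "bipartite graph corresponding to the hypergraph" used in the divide-and-conquer step can be thought of as a star expansion; a minimal sketch, with illustrative edge weights:

```python
import numpy as np

def star_expansion(hyperedges, n):
    """hyperedges: list of lists of vertex indices in range(n).
    Returns the (n + m) x (n + m) adjacency of the bipartite star expansion,
    with one auxiliary node per hyperedge; the 1/|e| weights are illustrative."""
    m = len(hyperedges)
    A = np.zeros((n + m, n + m))
    for e, verts in enumerate(hyperedges):
        for v in verts:
            A[v, n + e] = A[n + e, v] = 1.0 / len(verts)
    return A
```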
The stochastic block model (SBM) is a random graph model in which different groups of vertices connect differently. It is widely employed as a canonical model to study clustering and community detection, and provides a fertile ground to study the information-theoretic and computational tradeoffs that arise in combinatorial statistics and, more generally, data science. This monograph surveys the recent developments that establish the fundamental limits for community detection in the SBM, both with respect to information-theoretic and computational thresholds, and for various recovery requirements such as exact, partial, and weak recovery. The main results discussed are the phase transitions for exact recovery at the Chernoff-Hellinger threshold, the phase transition for weak recovery at the Kesten-Stigum threshold, the optimal SNR-mutual information tradeoff for partial recovery, and the gap between information-theoretic and computational thresholds. The monograph gives a principled derivation of the main algorithms developed in the quest of achieving the limits, in particular via graph-splitting, semidefinite programming, (linearized) belief propagation, classical/nonbacktracking spectral methods, and graph powering. Extensions to other block models, such as geometric block models, and a few open problems are also discussed.
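A minimal sampler for the SBM itself is handy for reproducing the threshold phenomena discussed above on toy instances; the function name and interface below are ours:

```python
import numpy as np

def sample_sbm(sizes, P, seed=None):
    """sizes[i]: number of nodes in block i; P: symmetric array with P[i, j]
    the edge probability between blocks i and j. Returns (adjacency, labels)."""
    rng = np.random.default_rng(seed)
    labels = np.repeat(np.arange(len(sizes)), sizes)
    n = labels.size
    probs = P[labels[:, None], labels[None, :]]
    upper = np.triu(rng.random((n, n)) < probs, k=1)  # sample each pair once
    return (upper | upper.T).astype(int), labels
```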
This is a tutorial and survey paper on nonlinear dimensionality reduction and feature extraction methods which are based on the Laplacian of the graph of data. We first introduce the adjacency matrix, the definition of the Laplacian matrix, and interpretations of the Laplacian. Then we cover graph cuts and spectral clustering, which applies clustering in a subspace of the data. Different optimization variants of the Laplacian eigenmap and its out-of-sample extension are explained. Thereafter, we introduce locality preserving projection and its kernel variant as linear special cases of the Laplacian eigenmap. Versions of graph embedding are then explained, which are generalized versions of the Laplacian eigenmap and locality preserving projection. Finally, diffusion maps are introduced, a method based on the graph of data and random walks on it.
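A compact sketch of the Laplacian eigenmap step described above, solving the generalized problem $L v = \lambda D v$ and discarding the trivial constant eigenvector; it assumes a connected graph with positive degrees:

```python
import numpy as np
from scipy.linalg import eigh

def laplacian_eigenmap(W, dim):
    """Embed nodes of a connected weighted graph W into `dim` dimensions
    using generalized eigenvectors of L v = lambda D v, skipping the
    trivial constant eigenvector at index 0."""
    D = np.diag(W.sum(axis=1))
    L = D - W
    _, V = eigh(L, D, subset_by_index=[1, dim])
    return V
```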
In recent years, significant progress has been made in the design and evaluation of balanced (hyper)graph partitioning algorithms. We survey trends of the past decade in practical algorithms for balanced (hyper)graph partitioning, together with future research directions. Our work serves as an update to a previous survey on the topic. In particular, the survey extends the previous one by also covering hypergraph partitioning and streaming algorithms, with an additional focus on parallel algorithms.
We review clustering as an analysis tool and the underlying concepts from an introductory perspective. What is clustering and how can clusterings be realised programmatically? How can data be represented and prepared for a clustering task? And how can clustering results be validated? Connectivity-based versus prototype-based approaches are reflected in the context of several popular methods: single-linkage, spectral embedding, k-means, and Gaussian mixtures are discussed as well as the density-based protocols (H)DBSCAN, Jarvis-Patrick, CommonNN, and density-peaks.
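The contrast between prototype-based and density-based protocols mentioned above is easy to reproduce; a short illustration on the classic two-moons data (parameter values such as eps=0.2 are illustrative):

```python
from sklearn.cluster import DBSCAN, KMeans
from sklearn.datasets import make_moons

X, _ = make_moons(n_samples=400, noise=0.06, random_state=0)
km = KMeans(n_clusters=2, n_init=10).fit_predict(X)  # prototype-based
db = DBSCAN(eps=0.2, min_samples=5).fit_predict(X)   # density-based
# k-means cuts each moon in half around its centroids, while DBSCAN
# follows the density and recovers the two crescents.
```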
The stochastic block model (SBM) is a random graph model with planted clusters. It is widely employed as a canonical model to study clustering and community detection, and provides a fertile ground to study the statistical and computational tradeoffs that arise in network and data sciences. This note surveys the recent developments that establish the fundamental limits for community detection in the SBM, both with respect to information-theoretic and computational thresholds, and for various recovery requirements such as exact, partial and weak recovery (a.k.a., detection). The main results discussed are the phase transitions for exact recovery at the Chernoff-Hellinger threshold, the phase transition for weak recovery at the Kesten-Stigum threshold, the optimal distortion-SNR tradeoff for partial recovery, the learning of the SBM parameters and the gap between information-theoretic and computational thresholds. The note also covers some of the algorithms developed in the quest of achieving the limits, in particular two-round algorithms via graph-splitting, semi-definite programming, linearized belief propagation, classical and nonbacktracking spectral methods. A few open problems are also discussed.
Invoking the manifold hypothesis in machine learning requires knowledge of the manifold's geometry and dimension, and theory dictates how many samples are required. However, in application data, sampling may not be uniform, manifold properties are unknown, and the manifold is (possibly) non-pure; this implies that neighborhoods must adapt to the local structure. We introduce an algorithm for inferring adaptive neighborhoods for data given by a similarity kernel. Starting with a locally-conservative neighborhood (Gabriel) graph, we iteratively sparsify it according to a weighted counterpart. In each step, a linear program yields minimal neighborhoods globally, and a volumetric statistic reveals neighbor outliers likely to violate manifold geometry. We apply our adaptive neighborhoods to nonlinear dimensionality reduction, geodesic computation, and dimension estimation. A comparison against standard algorithms, e.g., those using k-nearest neighbors, demonstrates their usefulness.
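For reference, the locally-conservative starting point, the Gabriel graph, admits a simple brute-force construction; the paper's iterative sparsification on top of it is not reproduced here:

```python
import numpy as np

def gabriel_graph(X):
    """Boolean adjacency: i ~ j iff no third point falls inside the ball
    whose diameter is the segment ij (boundary points count as violations
    here). Brute force, O(n^3); fine for illustration."""
    n = len(X)
    A = np.zeros((n, n), dtype=bool)
    for i in range(n):
        for j in range(i + 1, n):
            mid = 0.5 * (X[i] + X[j])
            r2 = 0.25 * np.sum((X[i] - X[j]) ** 2)
            d2 = np.sum((X - mid) ** 2, axis=1)
            d2[[i, j]] = np.inf  # exclude the endpoints themselves
            if np.all(d2 > r2):
                A[i, j] = A[j, i] = True
    return A
```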
Spectral clustering is popular among practitioners and theoreticians alike. While performance guarantees for spectral clustering are well understood, recent studies have focused on enforcing ``fairness'' in clusters, requiring them to be ``balanced'' with respect to a categorical sensitive node attribute (e.g., the race distribution in clusters must match the race distribution in the population). In this paper, we consider a setting where the sensitive attribute manifests indirectly in an auxiliary \textit{representation graph} rather than being directly observed. This graph specifies pairs of nodes that can represent each other with respect to the sensitive attribute and is observed in addition to the usual \textit{similarity graph}. Our goal is to find clusters in the similarity graph while respecting a new individual fairness constraint encoded by the representation graph. We develop variants of unnormalized and normalized spectral clustering for this task and analyze their performance under a planted partition model induced by the representation graph. This model uses both the cluster membership of the nodes and the structure of the representation graph to generate random similarity graphs. To the best of our knowledge, these are the first consistency results for constrained spectral clustering under individual-level fairness constraints. Numerical results corroborate our theoretical findings.
Selecting subsets of features that differentiate between two conditions is a key task in a broad range of scientific domains. In many applications, the features of interest form clusters with similar effects on the data at hand. To recover such clusters we develop DiSC, a data-driven approach for detecting groups of features that differentiate between conditions. For each condition, we construct a graph whose nodes correspond to the features and whose weights are functions of the similarity between them for that condition. We then apply a spectral approach to compute subsets of nodes whose connectivity differs significantly between the condition-specific feature graphs. On the theoretical front, we analyze our approach with a toy example based on the stochastic block model. We evaluate DiSC on a variety of datasets, including MNIST, hyperspectral imaging, simulated scRNA-seq and task fMRI, and demonstrate that DiSC uncovers features that better differentiate between conditions compared to competing methods.
Large graphs commonly appear in social networks, knowledge graphs, recommender systems, the life sciences, and decision-making problems. Summarizing a large graph by its high-level properties is helpful in solving problems in these settings. In spectral clustering, we aim to identify clusters of nodes such that most edges fall within clusters and only few edges fall between clusters. This task is important for many downstream applications and exploratory analyses. A core step of spectral clustering is performing an eigendecomposition of the corresponding graph Laplacian matrix (or, equivalently, a singular value decomposition, SVD). The convergence of iterative singular value decomposition approaches depends on the eigengaps of the spectrum of the given matrix, i.e., the differences between consecutive eigenvalues. For a graph Laplacian corresponding to a well-clustered graph, the relevant eigenvalues will be non-negative but very small (much less than $1$), which slows convergence. This paper introduces a practical approach to dilating the spectrum in order to accelerate SVD solvers and, in turn, spectral clustering. This is accomplished via polynomial approximations to matrix operations that favorably transform the spectrum of a matrix without changing its eigenvectors. Experiments demonstrate that this approach significantly accelerates convergence, and we explain how it can be parallelized and stochastically approximated to scale with available compute.
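The key identity is that a polynomial of a symmetric matrix shares its eigenvectors while remapping its eigenvalues, $p(L) = V\,p(\Lambda)\,V^\top$. A toy illustration with a hypothetical dilation polynomial (not the paper's construction) follows; the explicit eigendecomposition is for clarity only, since in practice $p(L)v$ would be applied via repeated matrix-vector products:

```python
import numpy as np

def dilate(L, p=lambda x: x * (2.0 - x)):
    """Return p(L) for symmetric L. The toy choice x*(2-x) maps [0, 2] to
    [0, 1] and roughly doubles eigenvalues near zero, widening the small
    eigengaps that slow iterative eigensolvers. Eigenvectors are unchanged."""
    lam, V = np.linalg.eigh(L)
    return (V * p(lam)) @ V.T
```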
Spectral methods which represent data points by eigenvectors of kernel matrices or graph Laplacian matrices have been a primary tool in unsupervised data analysis. In many application scenarios, parametrizing the spectral embedding by a neural network that can be trained over batches of data samples gives a promising way to achieve automatic out-of-sample extension as well as computational scalability. Such an approach was taken in the original paper of SpectralNet (Shaham et al., 2018), which we call SpecNet1. The current paper introduces a new neural network approach, named SpecNet2, to compute spectral embeddings, which optimizes an equivalent objective of the eigen-problem and removes the orthogonalization layer in SpecNet1. SpecNet2 also allows separating the sampling of rows and columns of the graph affinity matrix by tracking the neighbors of each data point through the gradient formula. Theoretically, we show that any local minimizer of the new orthogonalization-free objective reveals the leading eigenvectors. Furthermore, global convergence for this new orthogonalization-free objective using a batch-based gradient descent method is proved. Numerical experiments demonstrate improved performance and computational efficiency of SpecNet2 on simulated data and image datasets.
This paper introduces a scalable algorithmic framework (HyperEF) for spectral coarsening (decomposition) of large-scale hypergraphs by exploiting hyperedge effective resistances. Motivated by the latest theoretical framework for low-resistance-diameter decomposition of simple graphs, HyperEF aims at decomposing large hypergraphs into multiple node clusters with only a few inter-cluster hyperedges. The key component in HyperEF is a nearly-linear time algorithm for estimating hyperedge effective resistances, which allows incorporating the latest diffusion-based non-linear quadratic operators defined on hypergraphs. To achieve good runtime scalability, HyperEF searches within the Krylov subspace (or approximate eigensubspace) for identifying the nearly-optimal vectors for approximating the hyperedge effective resistances. In addition, a node weight propagation scheme for multilevel spectral hypergraph decomposition has been introduced for achieving even greater node coarsening ratios. When compared with state-of-the-art hypergraph partitioning (clustering) methods, extensive experiment results on real-world VLSI designs show that HyperEF can more effectively coarsen (decompose) hypergraphs without losing key structural (spectral) properties of the original hypergraphs, while achieving over $70\times$ runtime speedups over hMetis and $20\times$ speedups over HyperSF.
There are synergies of research interests and industrial efforts in modeling fairness and correcting algorithmic bias in machine learning. In this paper, we present a scalable algorithm for spectral clustering (SC) with group fairness constraints. Group fairness is also known as statistical parity, where in each cluster each protected group is represented with the same proportion as in the entirety. While the FairSC algorithm (Kleindessner et al., 2019) is able to find fairer clusterings, it is compromised by the high cost of explicitly computing nullspaces and square roots of dense matrices. We present a new formulation of the underlying spectral computation that incorporates nullspace projection and Hotelling's deflation, such that the resulting algorithm, called s-FairSC, involves only sparse matrix-vector products and is able to fully exploit the sparsity of the fair SC model. Experimental results on the modified stochastic block model demonstrate that s-FairSC is comparable with FairSC in recovering fair clusterings, while being sped up by a factor of 12 for moderate model sizes. s-FairSC is further shown to be scalable in the sense that its computational cost increases only marginally compared to SC without fairness constraints.
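Hotelling's deflation, one of the two ingredients named above, is simple to state in isolation; a generic sketch (the full s-FairSC kernel combines it with a nullspace projection and sparse matrix-vector products, which this does not show):

```python
import numpy as np

def hotelling_deflate(A, lam, v):
    """Given a computed eigenpair (lam, v) of symmetric A, return the deflated
    matrix A - lam * v v^T, which keeps the remaining eigenpairs intact while
    sending lam to zero, so an iterative solver does not rediscover it."""
    v = v / np.linalg.norm(v)
    return A - lam * np.outer(v, v)
```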
Kernel matrices, as well as weighted graphs represented by them, are ubiquitous objects in machine learning, statistics and other related fields. The main drawback of using kernel methods (learning and inference using kernel matrices) is efficiency -- given $n$ input points, most kernel-based algorithms need to materialize the full $n \times n$ kernel matrix before performing any subsequent computation, thus incurring $\Omega(n^2)$ runtime. Breaking this quadratic barrier for various problems has therefore, been a subject of extensive research efforts. We break the quadratic barrier and obtain $\textit{subquadratic}$ time algorithms for several fundamental linear-algebraic and graph processing primitives, including approximating the top eigenvalue and eigenvector, spectral sparsification, solving linear systems, local clustering, low-rank approximation, arboricity estimation and counting weighted triangles. We build on the recent Kernel Density Estimation framework, which (after preprocessing in time subquadratic in $n$) can return estimates of row/column sums of the kernel matrix. In particular, we develop efficient reductions from $\textit{weighted vertex}$ and $\textit{weighted edge sampling}$ on kernel graphs, $\textit{simulating random walks}$ on kernel graphs, and $\textit{importance sampling}$ on matrices to Kernel Density Estimation and show that we can generate samples from these distributions in $\textit{sublinear}$ (in the support of the distribution) time. Our reductions are the central ingredient in each of our applications and we believe they may be of independent interest. We empirically demonstrate the efficacy of our algorithms on low-rank approximation (LRA) and spectral sparsification, where we observe a $\textbf{9x}$ decrease in the number of kernel evaluations over baselines for LRA and a $\textbf{41x}$ reduction in the graph size for spectral sparsification.
Relational data sets are typically analyzed by assigning a label to each element through clustering or ordering. While similar characterizations of a data set can be achieved by clustering and by ordering, the former has been studied much more actively than the latter, particularly for data represented as graphs. This study fills the gap by investigating the methodological relationships between several clustering and ordering methods, with a focus on spectral techniques. Furthermore, we evaluate the resulting performance of the clustering and ordering methods. To this end, we propose a measure called the label continuity error, which generically quantifies the degree of consistency between a sequence and a partition of a set of elements. Based on synthetic and real-world data sets, we evaluate the extent to which ordering methods identify module structure and clustering methods identify banded structure.