Graph summarization via node grouping is a popular method to build concise graph representations by grouping nodes from the original graph into supernodes and encoding edges into superedges such that the loss of adjacency information is minimized. Such summaries have immense applications in large-scale graph analytics due to their small size and high query processing efficiency. In this paper, we reformulate the loss minimization problem for summarization into an equivalent integer maximization problem. By initially allowing relaxed (fractional) solutions for integer maximization, we analytically expose the underlying connections to the spectral properties of the adjacency matrix. Consequently, we design an algorithm called SpecSumm that consists of two phases. In the first phase, motivated by spectral graph theory, we apply k-means clustering on the k largest (in magnitude) eigenvectors of the adjacency matrix to assign nodes to supernodes. In the second phase, we propose a greedy heuristic that updates the initial assignment to further improve summary quality. Finally, via extensive experiments on 11 datasets, we show that SpecSumm efficiently produces high-quality summaries compared to state-of-the-art summarization algorithms and scales to graphs with millions of nodes.
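A minimal sketch of the first phase as described, assuming a dense symmetric adjacency matrix and standard scipy/sklearn tooling; the function name and defaults are illustrative, and the paper's greedy second phase is omitted:

```python
import numpy as np
from scipy.sparse.linalg import eigsh
from sklearn.cluster import KMeans

def specsumm_phase1(A, k, seed=0):
    """Phase-1 sketch: k-means on the k largest-in-magnitude
    eigenvectors of the adjacency matrix to assign nodes to supernodes."""
    # which='LM' selects the k eigenvalues of largest magnitude.
    _, V = eigsh(A, k=k, which='LM')
    return KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(V)
```

The greedy refinement phase of SpecSumm would then start from these initial supernode labels.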
Graph clustering is a fundamental problem in unsupervised learning, with numerous applications in computer science and in analysing real-world data. In many real-world applications, we find that the clusters have a significant high-level structure. This is often overlooked in the design and analysis of graph clustering algorithms, which make strong simplifying assumptions about the structure of the graph. This thesis addresses the natural question of whether the structure of clusters can be learned efficiently, and describes four new algorithmic results for learning such structure in graphs and hypergraphs. All of the presented theoretical results are extensively evaluated on both synthetic and real-world datasets from different domains, including image classification and segmentation, migration networks, co-authorship networks, and natural language processing. These experimental results demonstrate that the newly developed algorithms are practical, effective, and immediately applicable for learning the structure of clusters in real-world data.
Spectral clustering is one of the most widely used community detection methods for networks. However, large networks pose computational challenges for the eigendecomposition at its core. In this paper, we study the use of randomized sketching algorithms for spectral clustering from a statistical perspective, where we typically assume the network data are generated from a stochastic block model that is not necessarily of full rank. To this end, we first use recently developed sketching algorithms to derive two randomized spectral clustering algorithms, namely random-projection-based and random-sampling-based spectral clustering. We then study theoretical bounds for the resulting algorithms in terms of the approximation error of the population adjacency matrix, the misclassification error, and the estimation error of the link probability matrix. It turns out that, under mild conditions, the randomized spectral clustering algorithms enjoy the same theoretical bounds as the original spectral clustering algorithm. We also extend the results to degree-corrected stochastic block models. Numerical experiments support our theoretical findings and show the efficiency of the randomized methods. A new R package called rclusct is developed and made available to the public.
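A sketch of the random-projection variant under the usual randomized range-finder recipe; the paper's precise estimators and the sampling-based variant are not reproduced here, and the sketch dimension is an illustrative default:

```python
import numpy as np
from sklearn.cluster import KMeans

def randomized_spectral_clustering(A, K, sketch_dim=None, seed=0):
    """Random-projection-based spectral clustering: sketch A with a
    Gaussian test matrix, orthonormalize, and cluster the leading
    singular subspace of the sketched matrix."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    s = sketch_dim or 4 * K                      # oversampled sketch size
    Omega = rng.standard_normal((n, s))
    Q, _ = np.linalg.qr(A @ Omega)               # range finder for A
    B = Q.T @ A                                  # small s x n sketch
    _, _, Vt = np.linalg.svd(B, full_matrices=False)
    U = Vt[:K].T                                 # approximate leading eigenspace
    return KMeans(n_clusters=K, n_init=10, random_state=seed).fit_predict(U)
```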
Spectral clustering is popular among practitioners and theoreticians alike. While performance guarantees for spectral clustering are well understood, recent work has focused on enforcing "fairness" in clusters, requiring them to be "balanced" with respect to a categorical sensitive node attribute (e.g., the race distribution in the population). In this paper, we consider a setting where the sensitive attribute is manifested indirectly through an auxiliary representation graph rather than being observed directly. This graph specifies pairs of nodes that can represent each other with respect to the sensitive attribute, and is observed in addition to the usual similarity graph. Our goal is to find clusters in the similarity graph while respecting a new individual fairness constraint encoded by the representation graph. We develop variants of unnormalized and normalized spectral clustering for this task and analyze their performance under a planted-partition model induced by the representation graph. This model uses both the cluster membership of the nodes and the structure of the representation graph to generate random similarity graphs. To the best of our knowledge, these are the first consistency results for constrained spectral clustering under individual-level fairness constraints. Numerical results corroborate our theoretical findings.
There are synergies of research interests and industrial efforts in modeling fairness and correcting algorithmic bias in machine learning. In this paper, we present a scalable algorithm for spectral clustering (SC) with group fairness constraints. Group fairness is also known as statistical parity: in each cluster, every protected group is represented with the same proportion as in the population as a whole. While the FairSC algorithm (Kleindessner et al., 2019) is able to find fairer clusterings, it is hampered by high costs because its computational kernels explicitly form nullspaces and square roots of dense matrices. We present a new formulation of the underlying spectral computation that incorporates nullspace projection and Hotelling's deflation, so that the resulting algorithm, called s-FairSC, involves only sparse matrix-vector products and fully exploits the sparsity of the fair SC model. Experimental results on the modified stochastic block model demonstrate that s-FairSC is comparable with FairSC in recovering fair clusterings, while being 12 times faster for moderate model sizes. s-FairSC is further shown to be scalable in the sense that its computational cost increases only marginally compared to SC without fairness constraints.
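To make the constrained problem concrete, here is a compact sketch in the explicit-nullspace style of FairSC, i.e., the dense kernels that s-FairSC is designed to avoid; the constraint encoding follows the statistical-parity condition, and the function name is ours:

```python
import numpy as np
from scipy.linalg import eigh, null_space
from sklearn.cluster import KMeans

def fair_spectral_clustering(A, groups, k, seed=0):
    """Group-fair spectral clustering via explicit nullspace projection,
    in the style of FairSC. s-FairSC avoids forming these dense kernels
    and uses sparse matrix-vector products instead; this sketch only
    illustrates the constrained eigenproblem being solved."""
    groups = np.asarray(groups)
    L = np.diag(A.sum(axis=1)) - A                # unnormalized Laplacian
    # Statistical parity: each column f_g = 1_{group g} - (|g|/n) * 1,
    # and the embedding H must satisfy F.T @ H = 0 (one group is redundant).
    gids = np.unique(groups)
    F = np.column_stack([(groups == g) - np.mean(groups == g) for g in gids[:-1]])
    Z = null_space(F.T)                           # basis of the fair subspace
    _, Y = eigh(Z.T @ L @ Z, subset_by_index=[0, k - 1])
    return KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(Z @ Y)
```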
Kernel matrices, as well as weighted graphs represented by them, are ubiquitous objects in machine learning, statistics and other related fields. The main drawback of using kernel methods (learning and inference using kernel matrices) is efficiency -- given $n$ input points, most kernel-based algorithms need to materialize the full $n \times n$ kernel matrix before performing any subsequent computation, thus incurring $\Omega(n^2)$ runtime. Breaking this quadratic barrier for various problems has therefore been a subject of extensive research efforts. We break the quadratic barrier and obtain $\textit{subquadratic}$ time algorithms for several fundamental linear-algebraic and graph processing primitives, including approximating the top eigenvalue and eigenvector, spectral sparsification, solving linear systems, local clustering, low-rank approximation, arboricity estimation and counting weighted triangles. We build on the recent Kernel Density Estimation framework, which (after preprocessing in time subquadratic in $n$) can return estimates of row/column sums of the kernel matrix. In particular, we develop efficient reductions from $\textit{weighted vertex}$ and $\textit{weighted edge sampling}$ on kernel graphs, $\textit{simulating random walks}$ on kernel graphs, and $\textit{importance sampling}$ on matrices to Kernel Density Estimation and show that we can generate samples from these distributions in $\textit{sublinear}$ (in the support of the distribution) time. Our reductions are the central ingredient in each of our applications and we believe they may be of independent interest. We empirically demonstrate the efficacy of our algorithms on low-rank approximation (LRA) and spectral sparsification, where we observe a $\textbf{9x}$ decrease in the number of kernel evaluations over baselines for LRA and a $\textbf{41x}$ reduction in the graph size for spectral sparsification.
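A toy illustration of the weighted-vertex-sampling primitive on a Gaussian kernel graph; the exact row sums below stand in for the subquadratic Kernel Density Estimation estimates the framework actually uses, so this shows the sampling reduction, not the speedup. All names and parameters are ours:

```python
import numpy as np

def gaussian_kernel_row_sums(X, bandwidth):
    """Stand-in KDE oracle: exact row sums of the Gaussian kernel matrix.
    The framework's point is that these sums can be *estimated* in time
    subquadratic in n; we compute them exactly here for illustration."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2 * bandwidth ** 2)).sum(axis=1)

def sample_weighted_vertices(X, bandwidth, m, seed=0):
    """Sample m vertices of the kernel graph proportionally to their
    weighted degree (kernel row sum)."""
    rng = np.random.default_rng(seed)
    d = gaussian_kernel_row_sums(X, bandwidth)
    return rng.choice(len(X), size=m, p=d / d.sum())
```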
The stochastic block model (SBM) is a random graph model with planted clusters. It is widely employed as a canonical model to study clustering and community detection, and generally provides a fertile ground to study the statistical and computational tradeoffs that arise in network and data sciences. This note surveys the recent developments that establish the fundamental limits for community detection in the SBM, both with respect to information-theoretic and computational thresholds, and for various recovery requirements such as exact, partial and weak recovery (a.k.a. detection). The main results discussed are the phase transitions for exact recovery at the Chernoff-Hellinger threshold, the phase transition for weak recovery at the Kesten-Stigum threshold, the optimal distortion-SNR tradeoff for partial recovery, the learning of the SBM parameters, and the gap between information-theoretic and computational thresholds. The note also covers some of the algorithms developed in the quest of achieving the limits, in particular two-round algorithms via graph-splitting, semi-definite programming, linearized belief propagation, and classical and nonbacktracking spectral methods. A few open problems are also discussed.
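For experimentation with the regimes the note studies, an SBM can be simulated in a few lines; the sampler and the spectral recovery step below are generic, not specific to the survey:

```python
import numpy as np
from sklearn.cluster import KMeans

def sample_sbm(sizes, P, seed=0):
    """Draw an undirected SBM adjacency matrix: an edge between nodes in
    blocks a and b appears independently with probability P[a, b]."""
    rng = np.random.default_rng(seed)
    z = np.repeat(np.arange(len(sizes)), sizes)   # planted assignment
    U = rng.random((len(z), len(z)))
    A = np.triu(U < P[z][:, z], k=1)              # upper triangle only
    return (A + A.T).astype(float), z

# Two symmetric communities, then a basic spectral recovery attempt.
A, z = sample_sbm([250, 250], np.array([[0.10, 0.02], [0.02, 0.10]]))
_, vecs = np.linalg.eigh(A)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vecs[:, -2:])
```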
This article explores and analyzes the unsupervised clustering of large partially observed graphs. We propose a scalable and provable randomized framework for clustering graphs generated from the stochastic block model. The clustering is first applied to a sub-matrix of the graph's adjacency matrix associated with a reduced graph sketch constructed using random sampling. Then, the clusters of the full graph are inferred based on the clusters extracted from the sketch using a correlation-based retrieval step. Uniform random node sampling is shown to improve the computational complexity over clustering of the full graph when the cluster sizes are balanced. A new random degree-based node sampling algorithm is presented which significantly improves upon the performance of the clustering algorithm even when clusters are unbalanced. This framework improves the phase transitions for matrix-decomposition-based clustering with regard to computational complexity and minimum cluster size, which are shown to be nearly dimension-free in the low inter-cluster connectivity regime. A third sampling technique is shown to improve balance by randomly sampling nodes based on spatial distribution. We provide analysis and numerical results using a convex clustering algorithm based on matrix completion.
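A condensed sketch of the sample-cluster-infer pipeline described above, with uniform node sampling and an inner-product stand-in for the correlation-based retrieval step; parameter choices are illustrative:

```python
import numpy as np
from sklearn.cluster import KMeans

def sketch_cluster_infer(A, k, m, seed=0):
    """Cluster a uniformly sampled sub-graph, then assign every node to
    the cluster whose sampled members its edge pattern matches best."""
    rng = np.random.default_rng(seed)
    S = rng.choice(A.shape[0], size=m, replace=False)   # node sample
    _, vecs = np.linalg.eigh(A[np.ix_(S, S)])
    sk = KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(vecs[:, -k:])
    # Retrieval: average edge pattern (into the sample) per sketch cluster,
    # then match each node's pattern to its best-aligned cluster.
    centroids = np.stack([A[np.ix_(S[sk == c], S)].mean(axis=0) for c in range(k)])
    return np.argmax(A[:, S] @ centroids.T, axis=1)
```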
The nonlinear reformulation of spectral clustering methods has recently received attention due to its increased numerical benefits and its solid mathematical background. We present a new direct multiway spectral clustering algorithm in the $p$-norm, for $p \in (1, 2]$. The problem of computing multiple eigenvectors of the graph $p$-Laplacian, a nonlinear generalization of the standard graph Laplacian, is recast as an unconstrained minimization problem on a Grassmann manifold. The value of $p$ is reduced in a pseudo-continuous manner, promoting sparser solution vectors that correspond to optimal graph cuts as $p$ approaches 1. Monitoring the monotonic decrease of the balanced graph cuts guarantees that we obtain the best available solution from the $p$-level hierarchy. We demonstrate the effectiveness and accuracy of our algorithm on various artificial test cases. Our numerical results, compared against various state-of-the-art clustering methods, indicate that the proposed method achieves high-quality clusters both in terms of balanced graph cut metrics and in terms of the accuracy of the label assignments. Furthermore, we conduct experiments on facial image and handwritten character classification to demonstrate the applicability to real-world datasets.
Community detection and orthogonal group synchronization are fundamental problems with a variety of important applications in science and engineering. In this work, we consider the joint problem of community detection and orthogonal group synchronization, aiming to recover the communities and perform synchronization simultaneously. To this end, we propose a simple algorithm that consists of a spectral decomposition step followed by a blockwise column-pivoted QR factorization (CPQR). The proposed algorithm is efficient and scales linearly with the number of data points. We also leverage the recently developed "leave-one-out" technique to establish a near-optimal guarantee for exact recovery of the cluster memberships and stable recovery of the orthogonal transforms. Numerical experiments demonstrate the efficiency and efficacy of our algorithm and confirm our theoretical characterization.
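A bare-bones rendering of the two named steps, spectral decomposition followed by column-pivoted QR, covering only the community-detection half (the recovery of the orthogonal transforms is omitted); the anchor-based assignment rule is a common CPQR idiom, not necessarily the paper's exact variant:

```python
import numpy as np
from scipy.linalg import qr

def spectral_cpqr_communities(A, k):
    """Recover k communities: take the top-k eigenspace of A, then use
    column-pivoted QR on its transpose to pick k 'anchor' nodes and
    assign every node to its dominant anchor."""
    _, V = np.linalg.eigh(A)
    Vk = V[:, -k:]                         # top-k eigenvectors, n x k
    _, _, piv = qr(Vk.T, pivoting=True)    # CPQR selects k well-spread nodes
    anchors = Vk[piv[:k]]                  # k x k basis from anchor rows
    coeffs = Vk @ np.linalg.inv(anchors)   # express all rows in anchor basis
    return np.argmax(np.abs(coeffs), axis=1)
```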
The graph matching problem seeks to find an alignment between the nodes of two graphs that minimizes the number of adjacency disagreements. Solving graph matching is increasingly important due to its applications in operations research, computer vision, neuroscience, and more. However, current state-of-the-art algorithms are inefficient at matching very large graphs, although they produce good accuracy. The main computational bottleneck of these algorithms is the linear assignment problem, which must be solved at each iteration. In this paper, we leverage recent advances in the field of optimal transport to replace the accepted use of linear assignment algorithms. We present GOAT, a modification of the state-of-the-art graph matching approximation algorithm FAQ (Vogelstein, 2015), replacing its linear sum assignment step with the "Lightspeed Optimal Transport" method of Cuturi (2013). The modification provides improvements to both speed and empirical matching accuracy. The effectiveness of the method is illustrated in matching graphs in simulated and real-data examples.
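To make the substitution concrete, here is one schematic Frank-Wolfe step for the relaxed quadratic matching objective with a Sinkhorn projection (Cuturi, 2013) in place of the linear assignment solver; this illustrates the idea rather than reproducing GOAT:

```python
import numpy as np

def sinkhorn(C, reg=0.1, iters=200):
    """Entropy-regularized OT projection (Cuturi, 2013): returns an
    approximately doubly stochastic matrix concentrated on low-cost
    entries of C."""
    K = np.exp(-(C - C.min()) / reg)       # shift for numerical stability
    u = np.ones(C.shape[0])
    for _ in range(iters):
        v = 1.0 / (K.T @ u)
        u = 1.0 / (K @ v)
    return u[:, None] * K * v[None, :]

def fw_direction(A, B, P):
    """Frank-Wolfe search direction for the relaxed objective
    <A, P B P^T>: where FAQ would solve a linear assignment problem on
    the gradient, we call Sinkhorn on its negation instead."""
    grad = A @ P @ B.T + A.T @ P @ B       # gradient of the QAP relaxation
    return sinkhorn(-grad)
```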
In recent years, spectral clustering has become one of the most popular modern clustering algorithms. It is simple to implement, can be solved efficiently by standard linear algebra software, and very often outperforms traditional clustering algorithms such as the k-means algorithm. At first glance, spectral clustering appears slightly mysterious, and it is not obvious why it works at all and what it really does. The goal of this tutorial is to give some intuition on those questions. We describe different graph Laplacians and their basic properties, present the most common spectral clustering algorithms, and derive those algorithms from scratch by several different approaches. Advantages and disadvantages of the different spectral clustering algorithms are discussed.
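For reference, the normalized spectral clustering recipe the tutorial derives (in the Ng-Jordan-Weiss style) fits in a few lines, assuming a precomputed similarity matrix with no isolated vertices:

```python
import numpy as np
from sklearn.cluster import KMeans

def normalized_spectral_clustering(W, k, seed=0):
    """Spectral clustering on a similarity matrix W: embed with the
    bottom eigenvectors of the symmetric normalized Laplacian,
    row-normalize, then run k-means."""
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    L_sym = np.eye(len(W)) - D_inv_sqrt @ W @ D_inv_sqrt
    _, V = np.linalg.eigh(L_sym)           # eigenvalues in ascending order
    U = V[:, :k]
    U /= np.linalg.norm(U, axis=1, keepdims=True)
    return KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(U)
```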
The stochastic block model (SBM) is a random graph model with different connectivity between distinct groups of vertices. It is widely employed as a canonical model to study clustering and community detection, and provides a fertile ground to study the information-theoretic and computational tradeoffs that arise in combinatorial statistics and, more generally, data science. This monograph surveys the recent developments that establish the fundamental limits for community detection in the SBM, both with respect to information-theoretic and computational regimes, and for various recovery requirements such as exact, partial and weak recovery. The main results discussed are the phase transition for exact recovery at the Chernoff-Hellinger threshold, the phase transition for weak recovery at the Kesten-Stigum threshold, the optimal SNR-mutual information tradeoff for partial recovery, and the gap between information-theoretic and computational thresholds. The monograph gives a principled derivation of the main algorithms developed in the quest of achieving the limits, in particular two-round algorithms via graph-splitting, semidefinite programming, (linearized) belief propagation, classical and nonbacktracking spectral methods, and graph powering. Extensions to other block models, such as geometric block models, and a few open problems are also discussed.
The stochastic block model (SBM) is a fundamental model for studying graph clustering or community detection in networks. It has received great attention in the last decade, and the balanced case, i.e., assuming all clusters have large size, has been well studied. However, our understanding of the SBM with unbalanced communities (arguably more relevant in practice) is still very limited. In this paper, we provide a simple SVD-based algorithm for recovering the communities in the SBM with communities of varying sizes. We improve upon a result of Ailon, Chen and Xu [ICML 2013] by removing the assumption that there is a large interval in which no cluster sizes fall. Under the planted clique conjecture, the size of the clusters that can be recovered by our algorithm is nearly optimal (up to polylogarithmic factors) when the probability parameters are constant. As a byproduct, we obtain a polynomial-time algorithm with sublinear query complexity for a clustering problem with a faulty oracle, which finds all clusters of size larger than $\tilde{\Omega}({\sqrt{n}})$ even if $\Omega(n)$ small clusters co-exist in the graph. In contrast, all previous efficient algorithms that make a sublinear number of queries cannot recover any large cluster if there are more than $\tilde{\Omega}(n^{2/5})$ small clusters.
We study iterative methods based on Krylov subspaces for low-rank approximation under any Schatten-$p$ norm. Here, given access to a matrix $A$ through matrix-vector products, an accuracy parameter $\epsilon$, and a target rank $k$, the goal is to find a rank-$k$ matrix $Z$ with orthonormal columns such that $\|A(I - ZZ^\top)\|_{S_p} \leq (1+\epsilon)\min_{U^\top U = I_k}\|A(I - UU^\top)\|_{S_p}$, where $\|M\|_{S_p}$ denotes the $\ell_p$ norm of the singular values of $M$. For the special cases of $p = 2$ (Frobenius norm) and $p = \infty$ (spectral norm), Musco and Musco (NeurIPS 2015) obtained an algorithm based on Krylov methods that uses $\tilde{O}(k/\sqrt{\epsilon})$ matrix-vector products, improving on the naive $\tilde{O}(k/\epsilon)$ dependence obtainable by the power method, where $\tilde{O}$ suppresses poly$(\log(dk/\epsilon))$ factors. Our main result is an algorithm that uses only $\tilde{O}(kp^{1/6}/\epsilon^{1/3})$ matrix-vector products and works for all $p \geq 1$. For $p = 2$, our bound improves the previous $\tilde{O}(k/\epsilon^{1/2})$ bound to $\tilde{O}(k/\epsilon^{1/3})$. Since the Schatten-$p$ and Schatten-$\infty$ norms are the same up to a $(1+\epsilon)$ factor when $p \geq (\log d)/\epsilon$, our bound recovers the result of Musco and Musco for $p = \infty$. Further, we prove a matrix-vector query lower bound of $\Omega(1/\epsilon^{1/3})$ for any fixed constant $p \geq 1$, showing that, surprisingly, $\tilde{\Theta}(1/\epsilon^{1/3})$ is the optimal complexity for constant $k$. To obtain our results, we introduce several new techniques, including optimizing over multiple Krylov subspaces simultaneously and pinching inequalities for partitioned operators. Our lower bound for $p \in [1, 2]$ uses the Araki-Lieb-Thirring trace inequality, whereas for $p > 2$ we appeal to a norm-compression inequality for aligned partitioned operators.
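The baseline being improved upon, a single-block Krylov iteration in the style of Musco and Musco (2015), looks roughly as follows; the paper's algorithm additionally optimizes over multiple Krylov subspaces at once, which this sketch does not attempt:

```python
import numpy as np

def block_krylov_lowrank(A, k, q, seed=0):
    """Rank-k approximation from the block Krylov subspace
    [G, AG, A^2 G, ..., A^q G] built on a Gaussian start block G."""
    rng = np.random.default_rng(seed)
    G = rng.standard_normal((A.shape[1], k))
    blocks, Y = [], G
    for _ in range(q + 1):
        blocks.append(Y)
        Y = A @ Y
    Q, _ = np.linalg.qr(np.hstack(blocks))   # orthonormal Krylov basis
    B = Q.T @ A                              # project, then truncate
    U, s, Vt = np.linalg.svd(B, full_matrices=False)
    Z = Q @ U[:, :k]                         # left factor of the rank-k approx
    return Z, s[:k], Vt[:k]
```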
With the increasing relevance of large networks in important areas, such as the study of contact networks for the spread of disease or the influence of social networks on geopolitics, it has become necessary to study machine learning tools that scale to very large networks, often containing millions of nodes. One major class of such scalable algorithms is known as network representation learning or network embedding. These algorithms learn representations of network functionals (e.g., nodes) by first running multiple random walks and then using the number of co-occurrences of each pair of nodes within the observed random-walk segments to obtain a low-dimensional representation of the nodes in some Euclidean space. The aim of this paper is to rigorously understand the performance of two major algorithms, DeepWalk and node2vec, in recovering communities for canonical network models with ground-truth communities. Depending on the sparsity of the graph, we find the length of the random-walk segments required so that the corresponding observed co-occurrence window is able to perform almost exact recovery of the underlying community assignments. We prove that, for a given fixed co-occurrence window, node2vec using random walks with a low backtracking probability can succeed for much sparser networks than DeepWalk using simple random walks. Moreover, if the sparsity parameter is low, we provide evidence that these algorithms may not succeed in almost exact recovery. The analysis requires developing general tools for counting paths in random networks with an underlying low-rank structure, which are of independent interest.
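The co-occurrence statistic at the center of the analysis can be tabulated directly; below is a simple-random-walk version (the DeepWalk case) with illustrative parameters:

```python
import numpy as np

def cooccurrence_counts(A, walk_len, window, n_walks, seed=0):
    """Count pair co-occurrences within a sliding window over simple
    random walks, one walk per start node per pass (DeepWalk-style)."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    C = np.zeros((n, n))
    for _ in range(n_walks):
        for start in range(n):
            walk, v = [start], start
            for _ in range(walk_len - 1):
                nbrs = np.flatnonzero(A[v])
                if nbrs.size == 0:
                    break                     # dead end: stop this walk
                v = rng.choice(nbrs)
                walk.append(v)
            for i, u in enumerate(walk):      # co-occurrence window
                for w in walk[i + 1: i + 1 + window]:
                    C[u, w] += 1
                    C[w, u] += 1
    return C
```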
Tensor Kronecker products, a natural generalization of Kronecker products of matrices, arise independently in multiple research communities. Like their matrix counterparts, the tensor generalizations provide structure for implicit multiplication and factorization theorems. We present a theorem that decomposes the principal eigenvector of a tensor Kronecker product, a rare generalization from matrix theory to tensor eigenvectors. This theorem implies that low-rank structure should be present in the iterates of a tensor power method applied to Kronecker products. We investigate the low-rank structure in the network alignment algorithm TAME, a power-method heuristic. Exploiting the low-rank structure directly, or via a new heuristic embedding approach, we produce new algorithms that are faster while improving or maintaining accuracy, and that scale to problems that cannot be realistically handled with existing techniques.
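The matrix analogue of the decomposition is easy to verify numerically: for positive definite factors, the principal eigenvector of a Kronecker product is the Kronecker product of the factors' principal eigenvectors, which the paper's theorem generalizes to tensors:

```python
import numpy as np

rng = np.random.default_rng(0)
# Symmetric positive definite factors so principal eigenpairs are unique.
A = rng.standard_normal((4, 4))
A = A @ A.T + np.eye(4)
B = rng.standard_normal((3, 3))
B = B @ B.T + np.eye(3)

wa, Va = np.linalg.eigh(A)
wb, Vb = np.linalg.eigh(B)
wk, Vk = np.linalg.eigh(np.kron(A, B))

# Top eigenvalue of kron(A, B) is the product of the top eigenvalues,
# and its eigenvector is the Kronecker product of the top eigenvectors.
assert np.isclose(wk[-1], wa[-1] * wb[-1])
overlap = abs(Vk[:, -1] @ np.kron(Va[:, -1], Vb[:, -1]))
assert np.isclose(overlap, 1.0)               # equal up to sign
```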
We connect two random graph models, the popularity-adjusted block model (PABM) and the generalized random dot product graph (GRDPG), by proving that the PABM is a special case of the GRDPG in which communities correspond to mutually orthogonal subspaces of latent vectors. This insight allows us to construct new algorithms for community detection and parameter estimation for the PABM, as well as to improve an existing algorithm that relies on sparse subspace clustering. Using established asymptotic properties of the adjacency spectral embedding for the GRDPG, we derive asymptotic properties of these algorithms. In particular, we prove that the absolute number of community detection errors tends to zero as the number of graph vertices tends to infinity. Simulation experiments illustrate these properties.
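The adjacency spectral embedding underlying the analysis is straightforward to compute; the subspace-based community detection the paper builds on top of it is not reproduced here:

```python
import numpy as np

def adjacency_spectral_embedding(A, d):
    """Embed each node as a row of V_d |Lambda_d|^{1/2}, using the d
    eigenvalues of largest magnitude (the GRDPG convention, which
    allows an indefinite inner product)."""
    vals, vecs = np.linalg.eigh(A)
    idx = np.argsort(-np.abs(vals))[:d]    # largest |eigenvalue| first
    return vecs[:, idx] * np.sqrt(np.abs(vals[idx]))
```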
Randomized singular value decomposition (RSVD) is a class of computationally efficient algorithms for computing the truncated SVD of large data matrices. Given an $n \times n$ symmetric matrix $\mathbf{M}$, the prototypical RSVD algorithm outputs an approximation to the $k$ leading singular vectors of $\mathbf{M}$ by computing a truncated SVD of $\mathbf{M}^{g}\mathbf{G}$; here $g \geq 1$ is an integer and $\mathbf{G} \in \mathbb{R}^{n \times k}$ is a random Gaussian sketching matrix. In this paper, we study the statistical properties of RSVD under a general "signal-plus-noise" framework, i.e., the observed matrix $\hat{\mathbf{M}}$ is assumed to be an additive perturbation of some true but unknown signal matrix $\mathbf{M}$. We first derive upper bounds for the $\ell_2$ (spectral norm) and $\ell_{2\to\infty}$ (maximum row-wise $\ell_2$ norm) distances between the approximate singular vectors of $\hat{\mathbf{M}}$ and the true singular vectors of the signal matrix $\mathbf{M}$. These upper bounds depend on the signal-to-noise ratio (SNR) and the number of power iterations $g$. A phase transition phenomenon is observed, in which a smaller SNR requires a larger value of $g$ to guarantee convergence in the $\ell_2$ and $\ell_{2\to\infty}$ distances. We also show that the thresholds for $g$ at which these phase transitions occur are sharp whenever the noise matrix satisfies a certain trace-growth condition. Finally, we derive normal approximations for the row-wise fluctuations of the approximate singular vectors and the entrywise fluctuations of the approximate matrix. We illustrate our theoretical results by deriving nearly optimal performance guarantees for RSVD when applied to three statistical inference problems, namely community detection, matrix completion, and principal component analysis with missing data.
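The prototypical algorithm studied here is short enough to state in full; $g$ is the number of power iterations whose effect on the SNR phase transitions the paper quantifies (a re-orthonormalization is added between powers as a standard numerical-stability detail):

```python
import numpy as np

def rsvd_symmetric(M, k, g, seed=0):
    """Randomized SVD of a symmetric matrix M: approximate the k leading
    singular vectors from a truncated SVD of M^g G, with G Gaussian."""
    rng = np.random.default_rng(seed)
    Y = M @ rng.standard_normal((M.shape[0], k))
    for _ in range(g - 1):
        Y, _ = np.linalg.qr(Y)            # re-orthonormalize for stability
        Y = M @ Y
    Q, _ = np.linalg.qr(Y)
    B = Q.T @ M
    U, s, _ = np.linalg.svd(B, full_matrices=False)
    return Q @ U[:, :k], s[:k]            # approximate singular vectors/values
```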
We present a convex cone program to infer the latent probability matrix of a random dot product graph (RDPG). The optimization problem maximizes the Bernoulli maximum likelihood function with an added nuclear-norm regularization term. The dual problem has a particularly nice form, related to the well-known semidefinite program relaxation of the MaxCut problem. Using the primal-dual optimality conditions, we bound the entries and the rank of the primal and dual solutions. Furthermore, we bound the optimal objective value and prove asymptotic consistency of the probability estimates of a slightly modified model under mild technical assumptions. Our experiments on synthetic RDPGs not only recover natural clusters, but also reveal the underlying low-dimensional geometry of the original data. We also demonstrate that the method recovers latent structure in the karate club graph and a synthetic U.S. Senate graph, and that it scales to graphs with up to a few hundred nodes.
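A minimal cvxpy rendering of the described program, assuming the likelihood is written directly in the probability matrix with box constraints to keep the logarithms finite; the paper's exact cone formulation and dual analysis are not reproduced:

```python
import numpy as np
import cvxpy as cp

def fit_rdpg_probabilities(A, tau=1.0):
    """Nuclear-norm-regularized Bernoulli maximum likelihood for the
    latent probability matrix of an RDPG (sketch of the primal problem)."""
    n = A.shape[0]
    P = cp.Variable((n, n), symmetric=True)
    # Bernoulli log-likelihood, entrywise over the observed adjacency A.
    loglik = cp.sum(cp.multiply(A, cp.log(P)) +
                    cp.multiply(1 - A, cp.log(1 - P)))
    prob = cp.Problem(cp.Maximize(loglik - tau * cp.normNuc(P)),
                      [P >= 1e-6, P <= 1 - 1e-6])
    prob.solve()
    return P.value
```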