We propose a novel model of random networks, namely Fractal Gaussian Networks (FGN), that embodies well-defined and analytically tractable fractal structures. Such fractal structures have been empirically observed across diverse applications. FGNs interpolate continuously between the popular purely random geometric graphs (a.k.a. the Poisson Boolean network) and random graphs with increasingly fractal behavior. In fact, they form a parametric family of sparse random geometric graphs, parametrized by a fractality parameter that governs the strength of the fractal structure. FGNs are driven by the latent spatial geometry of Gaussian multiplicative chaos (GMC), a canonical model of fractality in its own right. We asymptotically characterize the expected number of edges, triangles, cliques and hub-and-spoke motifs in the FGN, revealing distinct regimes in their scaling with the size parameter of the network. Beyond its fundamental properties as a random graph model, we then examine the natural questions of detecting the presence of fractality and of parameter estimation based on observed network data. We also explore the emergence of community structure by unveiling a natural stochastic block model in the setting of the FGN. Finally, we substantiate our results with a phenomenological analysis of the FGN in the light of the available scientific literature on fractality in networks, including an application to real-world massive network data.
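A crude illustrative sketch in this spirit (our own construction, not the paper's): the GMC-like intensity is approximated by exponentiating an approximately log-correlated Gaussian field synthesized on a grid via FFT (spectral amplitude proportional to 1/|k|), a Poisson number of points is drawn from that intensity, and vertices are joined by the Boolean rule (distance below a fixed radius). Setting gamma = 0 collapses the sketch to the purely random geometric (Poisson Boolean) end of the family; all constants below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def approx_log_correlated_field(n=256):
    """Gaussian field on an n x n grid with spectrum ~ 1/|k|^2, a rough proxy for a log-correlated field."""
    kx = np.fft.fftfreq(n)[:, None]
    ky = np.fft.fftfreq(n)[None, :]
    k2 = kx ** 2 + ky ** 2
    k2[0, 0] = 1.0                                    # placeholder; the zero mode is removed below
    noise = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    spec = noise / np.sqrt(k2)
    spec[0, 0] = 0.0                                  # drop the constant (zero-frequency) component
    field = np.fft.ifft2(spec).real
    return (field - field.mean()) / field.std()

def sample_fgn_like(gamma=1.0, mean_points=500, radius=0.05, n=256):
    h = approx_log_correlated_field(n)
    density = np.exp(gamma * h)
    density /= density.sum()                          # normalized GMC-like measure on the grid
    N = rng.poisson(mean_points)                      # Poisson number of points (Cox process)
    cells = rng.choice(n * n, size=N, p=density.ravel())
    pts = np.column_stack(np.unravel_index(cells, (n, n))) / n + rng.uniform(0, 1 / n, (N, 2))
    dists = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    A = (dists < radius) & ~np.eye(N, dtype=bool)     # Boolean (hard-threshold) connection rule
    return pts, A

pts, A = sample_fgn_like(gamma=1.2)
print(A.shape, A.sum() // 2, "edges")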
The stochastic block model (SBM) is a random graph model with different connectivity within and across groups of vertices. It is widely employed as a canonical model for studying clustering and community detection, and provides a fertile ground for studying the information-theoretic and computational tradeoffs that arise in combinatorial statistics and, more generally, data science. This monograph surveys the recent developments that establish the fundamental limits for community detection in the SBM, both in terms of information-theoretic and computational regimes, and for various recovery requirements such as exact, partial and weak recovery. The main results discussed are the phase transition for exact recovery at the Chernoff-Hellinger threshold, the phase transition for weak recovery at the Kesten-Stigum threshold, the optimal SNR-mutual information tradeoff for partial recovery, and the gap between information-theoretic and computational thresholds. The monograph gives principled derivations of the main algorithms developed in the quest of achieving the limits, in particular via graph-splitting, semidefinite programming, (linearized) belief propagation, classical/nonbacktracking spectral methods and graph powering. Extensions to other block models, such as geometric block models, and some open problems are also discussed.
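As a small illustration of the objects involved (our own sketch, not from the monograph), the following snippet samples a symmetric two-community SBM with within- and across-community edge probabilities a/n and b/n, and evaluates the Kesten-Stigum signal-to-noise ratio (a-b)^2 / (2(a+b)), which must exceed 1 for weak recovery to be possible in this symmetric two-community case.

```python
import numpy as np

def sample_sbm(n, a, b, rng=np.random.default_rng(0)):
    labels = rng.integers(0, 2, size=n)                 # two balanced-in-expectation communities
    same = labels[:, None] == labels[None, :]
    p = np.where(same, a / n, b / n)                    # connection probabilities
    A = rng.random((n, n)) < p
    A = np.triu(A, 1)
    return (A | A.T).astype(int), labels

a, b = 8.0, 2.0
A, z = sample_sbm(2000, a, b)
snr = (a - b) ** 2 / (2 * (a + b))
print("mean degree:", A.sum(axis=1).mean(), "KS SNR:", snr, "above threshold:", snr > 1)
```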
The stochastic block model (SBM) is a random graph model with planted clusters. It is widely employed as a canonical model to study clustering and community detection, and generally provides fertile ground to study the statistical and computational tradeoffs that arise in network and data sciences. This note surveys the recent developments that establish the fundamental limits for community detection in the SBM, both with respect to information-theoretic and computational thresholds, and for various recovery requirements such as exact, partial and weak recovery (a.k.a. detection). The main results discussed are the phase transitions for exact recovery at the Chernoff-Hellinger threshold, the phase transition for weak recovery at the Kesten-Stigum threshold, the optimal distortion-SNR tradeoff for partial recovery, the learning of the SBM parameters and the gap between information-theoretic and computational thresholds. The note also covers some of the algorithms developed in the quest of achieving the limits, in particular two-round algorithms via graph-splitting, semi-definite programming, linearized belief propagation, classical and nonbacktracking spectral methods. A few open problems are also discussed.
A common approach to modeling networks assigns each node to a position on a low-dimensional manifold where distance is inversely proportional to connection likelihood. More positive manifold curvature encourages more and tighter communities; negative curvature induces repulsion. We consistently estimate manifold type, dimension, and curvature from simply connected, complete Riemannian manifolds of constant curvature. We represent the graph as a noisy distance matrix based on the ties between cliques, then develop hypothesis tests to determine whether the observed distances could plausibly be embedded isometrically in each of the candidate geometries. We apply our approach to data-sets from economics and neuroscience.
To capture the inherent geometric features of many community detection problems, we propose to use a new random graph model of communities that we call the \emph{Geometric Block Model}. The geometric block model builds on the \emph{random geometric graphs} (Gilbert, 1961), one of the basic models of random graphs for spatial networks, in the same way that the well-studied stochastic block model builds on the Erdős–Rényi random graphs. It is also a natural extension of random community models inspired by the recent theoretical and practical advances in community detection. To analyze the geometric block model, we first provide new connectivity results for \emph{random annulus graphs}, a generalization of random geometric graphs. The connectivity properties of geometric graphs have been studied since their introduction, and they are difficult to analyze because of correlated edge formation. We then use the connectivity results for random annulus graphs to provide the conditions necessary for efficient recovery of communities in the geometric block model. We show that a simple triangle-counting algorithm for detecting communities in the geometric block model is near-optimal. For this, we consider two regimes of graph density. In the regime where the average degree of the graph grows logarithmically with the number of vertices, we show that our algorithm performs extremely well, both theoretically and practically. In contrast, the triangle-counting algorithm is far from optimal for the stochastic block model in the logarithmic-degree regime. We also look at the regime where the average degree of the graph grows linearly with the number of vertices $n$, so that storing the graph requires $\Theta(n^2)$ memory. We show that our algorithm needs to store only $O(n \log n)$ edges in this regime to recover the latent communities.
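A schematic sketch of the motif-counting idea (not the paper's exact algorithm or thresholds): in a geometric block model, adjacent vertices in the same community share many more common neighbors than adjacent vertices in different communities, so thresholding the common-neighbor (triangle) count of each edge and taking connected components of the surviving edges yields candidate communities. The threshold below is a free parameter; the paper derives appropriate values for its density regimes.

```python
import numpy as np
from scipy.sparse.csgraph import connected_components

def cluster_by_triangle_count(A, threshold):
    A = np.asarray(A, dtype=int)
    common = A @ A                                            # common[i, j] = number of common neighbors of i and j
    intra = ((A == 1) & (common >= threshold)).astype(int)    # keep edges supported by many triangles
    _, labels = connected_components(intra, directed=False)   # components of the surviving edges
    return labels
```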
We propose a goodness-of-fit test for degree-corrected stochastic block models (DCSBM). The test is based on an adjusted chi-square statistic for measuring equality among groups of $n$ multinomial distributions with $d_1,\dots,d_n$ observations. In the context of network models, the number of multinomials, $n$, grows much faster than the number of observations, $d_i$, corresponding to the degree of node $i$, so the setting deviates from classical asymptotics. We show that the statistic converges in distribution under the null as long as the harmonic mean of $\{d_i\}$ grows to infinity. Applied sequentially, the test can also be used to determine the number of communities. The test operates on a compressed version of the adjacency matrix, conditional on the degrees, and as a result is highly scalable to large sparse networks. We incorporate a novel idea of compressing the rows based on a $(K+1)$-community assignment when testing for $K$ communities. This approach increases power in sequential applications without sacrificing computational efficiency, and we prove its consistency in recovering the number of communities. Since the test statistic does not rely on a specific alternative, its utility goes beyond sequential testing, and it can be used to simultaneously test against a wide range of alternatives outside the DCSBM family. In particular, we prove that the test is consistent against a general family of latent-variable network models with community structure.
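A schematic sketch of the compression step only, under our own simplifications (the paper's adjusted statistic and its null calibration are not reproduced here): each node's adjacency row is compressed into counts of neighbors per block, and a classical chi-square homogeneity test is applied to these multinomial rows within each block.

```python
import numpy as np
from scipy.stats import chi2_contingency

def compress_rows(A, labels, K):
    """n x K matrix whose (i, k) entry counts the neighbors of node i lying in block k."""
    onehot = np.eye(K)[labels]
    return np.asarray(A) @ onehot

def blockwise_chisq(A, labels, K):
    X = compress_rows(A, labels, K)
    total = 0.0
    for k in range(K):
        rows = X[labels == k]
        rows = rows[rows.sum(axis=1) > 0]             # drop nodes with no recorded neighbors
        rows = rows[:, rows.sum(axis=0) > 0]          # drop blocks receiving no edges from this group
        if rows.shape[0] > 1 and rows.shape[1] > 1:
            stat, _, _, _ = chi2_contingency(rows)    # homogeneity of the multinomial rows
            total += stat
    return total
```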
We consider the problem of estimating a multivariate function $f_0$ of bounded variation (BV), from noisy observations $y_i = f_0(x_i) + z_i$ made at random design points $x_i \in \mathbb{R}^d$, $i=1,\ldots,n$. We study an estimator that forms the Voronoi diagram of the design points, and then solves an optimization problem that regularizes according to a certain discrete notion of total variation (TV): the sum of weighted absolute differences of parameters $\theta_i,\theta_j$ (which estimate the function values $f_0(x_i),f_0(x_j)$) at all neighboring cells $i,j$ in the Voronoi diagram. This is seen to be equivalent to a variational optimization problem that regularizes according to the usual continuum (measure-theoretic) notion of TV, once we restrict the domain to functions that are piecewise constant over the Voronoi diagram. The regression estimator under consideration hence performs (shrunken) local averaging over adaptively formed unions of Voronoi cells, and we refer to it as the Voronoigram, following the ideas in Koenker (2005), and drawing inspiration from Tukey's regressogram (Tukey, 1961). Our contributions in this paper span both the conceptual and theoretical frontiers: we discuss some of the unique properties of the Voronoigram in comparison to TV-regularized estimators that use other graph-based discretizations; we derive the asymptotic limit of the Voronoi TV functional; and we prove that the Voronoigram is minimax rate optimal (up to log factors) for estimating BV functions that are essentially bounded.
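A small sketch of the Voronoigram estimator under stated assumptions: unit edge weights on the Voronoi adjacency rather than the shared-boundary weights discussed in the paper, and a generic convex solver (cvxpy) for the resulting fused-lasso problem.

```python
import numpy as np
import cvxpy as cp
from scipy.spatial import Voronoi
from scipy.sparse import coo_matrix

def voronoigram_fit(x, y, lam):
    vor = Voronoi(x)                               # x: (n, d) design points, d >= 2
    edges = vor.ridge_points                       # pairs (i, j) whose Voronoi cells are neighbors
    m, n = len(edges), len(y)
    D = coo_matrix((np.r_[np.ones(m), -np.ones(m)],
                    (np.r_[np.arange(m), np.arange(m)], np.r_[edges[:, 0], edges[:, 1]])),
                   shape=(m, n))                   # signed incidence matrix of the Voronoi graph
    theta = cp.Variable(n)
    problem = cp.Problem(cp.Minimize(cp.sum_squares(y - theta) + lam * cp.norm1(D @ theta)))
    problem.solve()
    return theta.value                             # one fitted value per Voronoi cell

rng = np.random.default_rng(1)
x = rng.uniform(size=(200, 2))
y = (x[:, 0] > 0.5).astype(float) + 0.1 * rng.normal(size=200)
print(voronoigram_fit(x, y, lam=1.0)[:5])
```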
Network data are ubiquitous in modern machine learning, with tasks of interest including node classification, node clustering and link prediction. A frequent approach begins by learning a Euclidean embedding of the network, to which algorithms developed for vector-valued data are applied. For large networks, embeddings are learned using stochastic gradient methods where the sub-sampling scheme can be freely chosen. Despite the strong empirical performance of such methods, they are not well understood theoretically. Our work encapsulates representation methods using a subsampling approach, such as node2vec, into a single unifying framework. We prove, under the assumption that the graph is exchangeable, that the distribution of the learned embedding vectors asymptotically decouples. Moreover, we characterize the asymptotic distribution and provide rates of convergence, in terms of the latent parameters, which include the choice of loss function and the embedding dimension. This provides a theoretical foundation to understand what the embedding vectors represent and how well these methods perform on downstream tasks. Notably, we observe that typically used loss functions may lead to shortcomings, such as a lack of Fisher consistency.
This paper studies the statistical properties of principal components regression with Laplacian eigenmaps (PCR-LE), a method for nonparametric regression based on Laplacian eigenmaps (LE). PCR-LE works by projecting the vector of observed responses ${\bf y} = (y_1,\ldots,y_n)$ onto a subspace spanned by certain eigenvectors of a neighborhood graph Laplacian. We show that PCR-LE achieves minimax rates of convergence for random-design regression over Sobolev spaces. Under sufficient smoothness conditions on the design density $p$, PCR-LE achieves the optimal rates for both estimation (where the optimal rate in squared $L^2$ norm is known to be $n^{-2s/(2s+d)}$) and goodness-of-fit testing ($n^{-4s/(4s+d)}$). We also show that PCR-LE is \emph{manifold adaptive}: that is, we consider the situation where the design is supported on a manifold of small intrinsic dimension $m$, and give upper bounds establishing that PCR-LE achieves the faster minimax estimation ($n^{-2s/(2s+m)}$) and testing ($n^{-4s/(4s+m)}$) rates of convergence. Interestingly, these rates are almost always faster than the known rates of convergence of graph Laplacian eigenvectors; in other words, for this problem regression with estimated features appears to be statistically easier than estimating the features themselves. We support these theoretical results with empirical evidence.
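A compact sketch of the PCR-LE procedure as described above, assuming an epsilon-neighborhood graph and the unnormalized Laplacian; the number of eigenvectors K and the graph construction are tuning choices that the paper analyzes in detail.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def pcr_le(x, y, eps, K):
    W = (squareform(pdist(x)) <= eps).astype(float)    # epsilon-neighborhood graph
    np.fill_diagonal(W, 0.0)
    L = np.diag(W.sum(axis=1)) - W                      # unnormalized graph Laplacian
    evals, evecs = np.linalg.eigh(L)
    V = evecs[:, :K]                                    # K eigenvectors with smallest eigenvalues
    return V @ (V.T @ y)                                # project the responses onto their span
```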
Recently there has been intense activity on embedding very high-dimensional and nonlinear data structures, much of it in the data science and machine learning literature. We survey this activity in four parts. In the first part, we cover nonlinear methods such as principal curves, multidimensional scaling, local linear methods, ISOMAP, graph-based methods and diffusion maps, kernel-based methods and random projections. The second part is concerned with topological embedding methods, in particular mapping topological properties into persistence diagrams and the Mapper algorithm. Another type of data set with enormous growth is very high-dimensional network data. The task considered in the third part is how to embed such data into a vector space of moderate dimension so that the data become amenable to traditional techniques such as clustering and classification. Arguably, this is where the contrast between algorithmic machine learning methods and statistical modeling (so-called stochastic block modeling) is sharpest; in the paper we discuss the pros and cons of both approaches. The final part of the survey deals with embedding into $\mathbb{R}^2$, i.e., visualization. Three methods are presented, building on those of the first, second and third parts: $t$-SNE, UMAP and LargeVis. They are illustrated and compared on two simulated data sets: one consisting of a triplet of noisy ranunculoid curves, and the other consisting of networks generated from stochastic block models with two types of nodes.
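A tiny usage sketch (our own illustration, not code from the survey): embedding a noisy three-dimensional curve into $\mathbb{R}^2$ with $t$-SNE from scikit-learn; UMAP (from the umap-learn package) exposes a very similar fit_transform interface and can be swapped in.

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
t = rng.uniform(0, 2 * np.pi, size=500)
curve = np.column_stack([np.cos(3 * t), np.sin(2 * t), np.sin(5 * t)])   # a noisy closed curve in R^3
curve += 0.05 * rng.normal(size=curve.shape)
embedding = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(curve)
print(embedding.shape)
```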
Classical asymptotic theory for statistical inference usually involves calibrating a statistic by fixing the dimension $d$ while letting the sample size $n$ increase to infinity. Recently, much effort has been dedicated towards understanding how these methods behave in high-dimensional settings, where $d$ and $n$ both increase to infinity together. This often leads to different inference procedures, depending on the assumptions about the dimensionality, leaving the practitioner in a bind: given a dataset with 100 samples in 20 dimensions, should they calibrate by assuming $n \gg d$, or $d/n \approx 0.2$? This paper considers the goal of dimension-agnostic inference: developing methods whose validity does not depend on any assumption on $d$ versus $n$. We introduce an approach that uses variational representations of existing test statistics along with sample splitting and self-normalization to produce a new test statistic with a Gaussian limiting distribution, regardless of how $d$ scales with $n$. The resulting statistic can be viewed as a careful modification of degenerate U-statistics, dropping diagonal blocks and retaining off-diagonal blocks. We exemplify our technique for some classical problems including one-sample mean and covariance testing, and show that our tests have minimax rate-optimal power against appropriate local alternatives. In most settings, our cross U-statistic matches the high-dimensional power of the corresponding (degenerate) U-statistic up to a $\sqrt{2}$ factor.
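A minimal sketch of the sample-splitting and self-normalization recipe for one-sample mean testing as described above; the even split and the Gaussian p-value calibration are choices made here for illustration.

```python
import numpy as np
from scipy.stats import norm

def cross_mean_test(X, rng=np.random.default_rng(0)):
    n = len(X)
    idx = rng.permutation(n)
    X1, X2 = X[idx[: n // 2]], X[idx[n // 2:]]
    direction = X2.mean(axis=0)                           # mean direction estimated from the second half
    h = X1 @ direction                                    # project the first half onto it
    stat = np.sqrt(len(h)) * h.mean() / h.std(ddof=1)     # self-normalized, asymptotically N(0, 1) under the null
    return stat, 1 - norm.cdf(stat)                       # one-sided p-value

X = np.random.default_rng(1).normal(size=(400, 50)) + 0.1  # small shift in every coordinate
print(cross_mean_test(X))
```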
Low-rank matrix approximations, such as the truncated singular value decomposition and the rank-revealing QR decomposition, play a central role in data analysis and scientific computing. This work surveys and extends recent research which demonstrates that randomization offers a powerful tool for performing low-rank matrix approximation. These techniques exploit modern computational architectures more fully than classical methods and open the possibility of dealing with truly massive data sets. This paper presents a modular framework for constructing randomized algorithms that compute partial matrix decompositions. These methods use random sampling to identify a subspace that captures most of the action of a matrix. The input matrix is then compressed, either explicitly or implicitly, to this subspace, and the reduced matrix is manipulated deterministically to obtain the desired low-rank factorization. In many cases, this approach beats its classical competitors in terms of accuracy, speed, and robustness. These claims are supported by extensive numerical experiments and a detailed error analysis. The specific benefits of randomized techniques depend on the computational environment. Consider the model problem of finding the k dominant components of the singular value decomposition of an m × n matrix. (i) For a dense input matrix, randomized algorithms require O(mn log(k)) floating-point operations (flops) in contrast with O(mnk) for classical algorithms. (ii) For a sparse input matrix, the flop count matches classical Krylov subspace methods, but the randomized approach is more robust and can easily be reorganized to exploit multi-processor architectures. (iii) For a matrix that is too large to fit in fast memory, the randomized techniques require only a constant number of passes over the data, as opposed to O(k) passes for classical algorithms. In fact, it is sometimes possible to perform matrix approximation with a single pass over the data.
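A short sketch of the basic scheme described above (a randomized range finder followed by an SVD of the projected matrix), with oversampling p and power iterations as the standard refinements; the constants are illustrative.

```python
import numpy as np

def randomized_svd(A, k, p=10, n_iter=2, rng=np.random.default_rng(0)):
    m, n = A.shape
    Omega = rng.normal(size=(n, k + p))        # random test matrix
    Y = A @ Omega
    for _ in range(n_iter):                    # power iterations sharpen the spectrum
        Y = A @ (A.T @ Y)
    Q, _ = np.linalg.qr(Y)                     # orthonormal basis for the approximate range
    B = Q.T @ A                                # small (k + p) x n matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ Ub)[:, :k], s[:k], Vt[:k]

A = np.random.default_rng(1).normal(size=(500, 300))
U, s, Vt = randomized_svd(A, k=10)
print(np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A))
```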
Determinantal point processes (a.k.a. DPPs) have recently become popular tools for modeling the phenomenon of negative dependence, or repulsion, in data. However, our understanding of an analogue of the classical parametric statistical theory for this class of models is rather limited. In this work, we investigate a parametric family of Gaussian DPPs with a clearly interpretable effect of parameter modulation on the observed points. We show that parameter modulation impacts the observed points by introducing directionality into their repulsion structure, and that the principal directions correspond to the directions of maximal (i.e., longest-range) dependency. The model readily yields a novel and viable alternative to principal component analysis (PCA) as a dimension-reduction tool that favors directions along which the data are most spread out. This methodological contribution is complemented by a statistical analysis of a spiked model, similar to those employed for covariance matrices as a framework for studying PCA. These theoretical investigations unveil intriguing questions for further examination in random matrix theory, stochastic geometry and related topics.
We propose a spectral clustering algorithm for analyzing the dependence structure of multivariate extremes. More specifically, we focus on the asymptotic dependence of multivariate extremes characterized by the angular or spectral measure in extreme value theory. Our work studies the theoretical performance of spectral clustering based on a random graph constructed from an extremal sample, i.e., the angular parts of random vectors whose radii exceed a large threshold. In particular, we derive the asymptotic distribution of extremes arising from a linear factor model and prove that, under certain conditions, spectral clustering can consistently identify the clusters of extremes arising in this model. Leveraging this result, we propose a simple consistent estimation strategy for learning the angular measure. Our theoretical findings are complemented by numerical experiments illustrating the finite-sample performance of our methods.
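An illustrative sketch of the pipeline described above, with our own simplifications: observations whose radius exceeds a high empirical quantile are kept, their angular parts are extracted, and off-the-shelf spectral clustering with a nearest-neighbor affinity is applied (the paper analyzes a specific random-graph construction from the extremal sample, not reproduced here).

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def cluster_extremes(X, quantile=0.95, n_clusters=3):
    radii = np.linalg.norm(X, axis=1)
    extreme = X[radii > np.quantile(radii, quantile)]                    # keep only the extremal sample
    angles = extreme / np.linalg.norm(extreme, axis=1, keepdims=True)    # angular parts
    model = SpectralClustering(n_clusters=n_clusters, affinity="nearest_neighbors",
                               n_neighbors=10, random_state=0)
    return angles, model.fit_predict(angles)
```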
Pre-publication draft of a book to be published by Morgan & Claypool publishers. Unedited version released with permission. All relevant copyrights held by the author and publisher extend to this pre-publication draft.
Network-based analyses of dynamical systems have become increasingly popular in climate science. Here we address network construction from a statistical perspective and highlight the often ignored fact that the calculated correlation values are only empirical estimates. To measure spurious behaviour as deviation from a ground truth network, we simulate time-dependent isotropic random fields on the sphere and apply common network construction techniques. We find several ways in which the uncertainty stemming from the estimation procedure has major impact on network characteristics. When the data has locally coherent correlation structure, spurious link bundle teleconnections and spurious high-degree clusters have to be expected. Anisotropic estimation variance can also induce severe biases into empirical networks. We validate our findings with ERA5 reanalysis data. Moreover we explain why commonly applied resampling procedures are inappropriate for significance evaluation and propose a statistically more meaningful ensemble construction framework. By communicating which difficulties arise in estimation from scarce data and by presenting which design decisions increase robustness, we hope to contribute to more reliable climate network construction in the future.
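A bare-bones sketch of the common construction whose statistical pitfalls the paper examines: estimate pairwise Pearson correlations between grid-point time series and keep links whose absolute correlation exceeds a threshold. Run on pure noise, as below, it already produces spurious links, which is precisely the estimation-uncertainty issue raised above.

```python
import numpy as np

def correlation_network(ts, threshold=0.5):
    """ts: array of shape (n_time, n_gridpoints)."""
    C = np.corrcoef(ts, rowvar=False)              # empirical correlation matrix
    A = np.abs(C) >= threshold                     # thresholded links
    np.fill_diagonal(A, False)
    return A                                       # adjacency matrix of the climate network

ts = np.random.default_rng(0).normal(size=(100, 50))
A = correlation_network(ts)
print("spurious links from pure noise:", A.sum() // 2)
```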
We take the first steps toward generalizing the theory of stochastic block models, in the sparse regime, to the case where the discrete community structure is replaced by an underlying geometry. We consider geometric random graphs over homogeneous metric spaces, in which the probability of two vertices being connected is an arbitrary function of their distance. We provide sufficient conditions under which the locations can be recovered (up to an isomorphism of the space) in the sparse regime. Moreover, we define a geometric counterpart of the model of information flow on trees due to Mossel and Peres, in which one considers branching random walks on the sphere and the goal is to recover the location of the root based on the leaves. We give sufficient conditions for percolation and for non-percolation of information in this model.
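A small sketch of the model class described above under our own conventions: latent positions uniform on the unit sphere, and a user-supplied connection-probability function of geodesic distance; the sparse kernel used in the last line is illustrative only.

```python
import numpy as np

def sphere_geometric_graph(n, f, rng=np.random.default_rng(0)):
    """Latent positions uniform on the unit sphere S^2; connect i and j with probability f(distance)."""
    x = rng.normal(size=(n, 3))
    x /= np.linalg.norm(x, axis=1, keepdims=True)
    d = np.arccos(np.clip(x @ x.T, -1.0, 1.0))      # geodesic distances between latent positions
    P = f(d)
    A = rng.random((n, n)) < P
    A = np.triu(A, 1)
    return x, (A | A.T).astype(int)

# e.g. a sparse kernel: connect with probability (c / n) * 1{distance < r}
x, A = sphere_geometric_graph(1000, lambda d: (5.0 / 1000) * (d < 1.0))
```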
We develop an efficient algorithm for weak recovery in a robust version of the stochastic block model. The algorithm matches the statistical guarantees of the best known algorithms for the vanilla version of the stochastic block model. In this sense, our results show that there is no price for robustness in the stochastic block model. Our work is heavily inspired by recent work of Banks, Mohanty, and Raghavendra (SODA 2021), who provided an efficient algorithm for the corresponding distinguishing problem. Our algorithm and its analysis depart significantly from previous ones for robust recovery. A key challenge is the peculiar optimization landscape underlying our algorithm: the planted partition may be far from optimal, in the sense that completely unrelated solutions can achieve the same objective value. This phenomenon is related to the push-out effect at the BBP phase transition for PCA. To the best of our knowledge, our algorithm is the first to achieve robust recovery in the presence of such a push-out effect in a non-asymptotic setting. Our algorithm is an instantiation of a framework based on convex optimization (distinct from sum-of-squares), which may be useful for other robust matrix estimation problems. A by-product of our analysis is a general technique that boosts the probability of success (over the randomness of the input) of an arbitrary robust weak-recovery algorithm from constant (or slowly vanishing) probability to exponentially high probability.
For graph-valued data sampled from a distribution $\mu$, sample moments are computed with respect to a choice of metric. In this work, we equip the set of graphs with a pseudometric defined by the $\ell_2$ norm between the eigenvalues of the respective adjacency matrices. We use this pseudometric and the respective sample moments of a graph-valued data set to infer the parameters of a distribution $\hat{\mu}$, which we interpret as an approximation of $\mu$. We verify experimentally that complex distributions $\mu$ can be well approximated by this approach.
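A direct sketch of the pseudometric described above, with one assumed convention: when the graphs have different numbers of vertices, the shorter spectrum is zero-padded before taking the $\ell_2$ distance.

```python
import numpy as np

def spectral_distance(A1, A2):
    e1 = np.linalg.eigvalsh(np.asarray(A1, dtype=float))   # adjacency spectrum, ascending
    e2 = np.linalg.eigvalsh(np.asarray(A2, dtype=float))
    k = max(len(e1), len(e2))
    e1 = np.pad(e1, (k - len(e1), 0))                       # zero-pad the shorter spectrum (assumed convention)
    e2 = np.pad(e2, (k - len(e2), 0))
    return np.linalg.norm(e1 - e2)                          # l2 distance between the spectra
```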
A fundamental goal in high-dimensional statistics is to detect or recover planted structure (such as a low-rank matrix) hidden in noisy data. A growing body of work studies low-degree polynomials as a restricted model of computation for such problems: in various settings, low-degree polynomials of the data can match the statistical performance of the best known polynomial-time algorithms. Prior work has studied the power of low-degree polynomials for detecting the presence of hidden structure. In this work, we extend these methods to address problems of estimation and recovery (rather than detection). For a large class of "signal plus noise" problems, we give a user-friendly lower bound on the best possible mean squared error achievable by low-degree polynomials. To our knowledge, these are the first results to establish low-degree hardness of recovery problems for which the associated detection problem is easy. As applications, we give a tight characterization of the low-degree minimum mean squared error for the planted submatrix and planted dense subgraph problems, resolving (in the low-degree framework) open problems about the computational complexity of recovery in both cases.