A random dot product graph (RDPG) is a generative model for networks in which vertices correspond to positions in a latent Euclidean space, and edge probabilities are determined by the dot products of the latent positions. We consider RDPGs for which the latent positions are randomly sampled from an unknown $1$-dimensional submanifold of the latent space. In principle, restricted inference, i.e., procedures that exploit the submanifold structure, should be more effective than unrestricted inference; however, it is not clear how to conduct restricted inference when the submanifold is unknown. We submit that techniques for manifold learning can be used to learn the unknown submanifold well enough to realize benefits from restricted inference. To illustrate, we test $1$-sample and $2$-sample hypotheses about Fréchet means, using the full set of vertices to infer latent structure. We propose test statistics that deploy the Isomap procedure on shortest-path distances computed from a neighborhood graph constructed from estimated latent positions, to estimate arc length on the unknown $1$-dimensional manifold. Unlike conventional applications of Isomap, the estimated latent positions do not lie on the submanifold of interest. We extend existing convergence results for Isomap to this setting, and use them to demonstrate that, as the number of auxiliary vertices increases, the power of our tests converges to the power of the corresponding tests when the submanifold is known. Finally, we apply our methods to an inference problem that arises in studying the connectome of the Drosophila larval mushroom body. The univariate learnt-manifold test rejects ($p < 0.05$), while the multivariate ambient-space test does not ($p \gg 0.05$), illustrating the value of identifying and exploiting low-dimensional structure for subsequent inference.
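As a rough illustration of the arc-length estimation step described above (not the authors' code): build a neighborhood graph on the points, approximate geodesic distances by shortest paths, then apply one-dimensional classical scaling. The helper name `isomap_1d` and all parameter choices are ours; the sketch assumes numpy, scipy, and scikit-learn.

```python
import numpy as np
from scipy.sparse.csgraph import shortest_path
from sklearn.neighbors import kneighbors_graph

def isomap_1d(X, n_neighbors=10):
    """Embed points near an unknown 1-d manifold into arc-length
    coordinates: k-NN graph -> shortest paths -> 1-d classical scaling."""
    G = kneighbors_graph(X, n_neighbors=n_neighbors, mode="distance")
    D = shortest_path(G, method="D", directed=False)   # geodesic estimates
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n                # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                        # double-centred Gram
    w, V = np.linalg.eigh(B)                           # ascending eigenvalues
    return V[:, -1] * np.sqrt(max(w[-1], 0.0))         # top coordinate

# points on a noiseless quarter circle: arc length equals the angle t
t = np.linspace(0.0, np.pi / 2, 200)
X = np.column_stack([np.cos(t), np.sin(t)])
s = isomap_1d(X)
```

Up to sign and centering, `s` should be an affine function of the true arc-length parameter `t`.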
Given a graph or similarity matrix, we consider the problem of recovering a notion of true distance between the nodes, and so their true positions. We show that this can be accomplished in two steps: matrix factorisation, followed by nonlinear dimension reduction. This combination is effective because the point cloud obtained in the first step lives close to a manifold on which latent distance is encoded as geodesic distance. Hence, a nonlinear dimension reduction tool that approximates geodesic distance can recover the latent positions, up to a simple transformation. We give a detailed account of the case where spectral embedding is used, followed by Isomap, and provide encouraging experimental evidence for other combinations of techniques.
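A minimal sketch of the two-step pipeline on a simulated random dot product graph, assuming numpy and scikit-learn; the helper `adjacency_spectral_embedding`, the quadratic latent curve, and all parameter choices are illustrative, not taken from the paper.

```python
import numpy as np
from sklearn.manifold import Isomap

def adjacency_spectral_embedding(A, d):
    """Step 1 (matrix factorisation): scaled top-d eigenvectors of A."""
    w, V = np.linalg.eigh(A)
    idx = np.argsort(np.abs(w))[::-1][:d]
    return V[:, idx] * np.sqrt(np.abs(w[idx]))

# RDPG whose latent positions lie on a 1-d curve t -> (t, t^2)
rng = np.random.default_rng(0)
t = rng.uniform(0.1, 0.9, size=400)
Z = np.column_stack([t, t ** 2])                 # true latent positions
P = np.clip(Z @ Z.T, 0, 1)                       # edge probabilities
A = rng.binomial(1, P)
A = np.triu(A, 1); A = A + A.T                   # undirected, no self-loops

Xhat = adjacency_spectral_embedding(A, d=2)      # point cloud near a curve
# Step 2 (nonlinear dimension reduction): geodesic-preserving 1-d embedding
shat = Isomap(n_neighbors=15, n_components=1).fit_transform(Xhat).ravel()
```

Up to sign and a monotone reparametrisation, `shat` should track the latent curve parameter `t`.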
Spectral embedding is a procedure that can be used to obtain vector representations of the nodes of a graph. This paper proposes a generalisation of the latent position network model known as the random dot product graph, which allows these vector representations to be interpreted as latent position estimates. The generalisation is needed to model heterophilic connectivity (e.g., "opposites attract") and, more generally, to cope with negative eigenvalues. We show that, whether the adjacency matrix or the normalised Laplacian is used, spectral embedding produces uniformly consistent latent position estimates with asymptotically Gaussian error (up to identifiability). The standard and mixed-membership stochastic block models are special cases in which the latent positions take only $K$ distinct vector values, representing communities, or live in the $(K-1)$-simplex with those vertices, respectively. Under the stochastic block model, our theory suggests clustering with a Gaussian mixture model (rather than $K$-means) and, under mixed membership, fitting the minimum-volume enclosing simplex, extending existing recommendations previously supported only under non-negative-definiteness assumptions. Empirical improvements in link prediction (over the random dot product graph), and the potential to uncover richer latent structure (than is possible under the standard or mixed-membership stochastic block models), are demonstrated in a cyber-security example.
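A toy sketch of the recipe suggested here, under our own simulation choices (a two-block heterophilic SBM; numpy and scikit-learn assumed): embed using the largest-magnitude eigenpairs scaled by $\sqrt{|\lambda|}$, then cluster with a Gaussian mixture model rather than $K$-means.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(1)

# Two-block SBM with heterophilic ("opposites attract") connectivity;
# the block matrix has a negative eigenvalue, so this is a GRDPG setting.
n = 300
z = np.repeat([0, 1], n // 2)
B = np.array([[0.1, 0.4],
              [0.4, 0.1]])                 # eigenvalues 0.5 and -0.3
P = B[np.ix_(z, z)]
A = rng.binomial(1, P)
A = np.triu(A, 1); A = A + A.T             # undirected, no self-loops

# Spectral embedding: top-|lambda| eigenpairs, scaled by sqrt(|lambda|)
w, V = np.linalg.eigh(A)
idx = np.argsort(np.abs(w))[::-1][:2]
Xhat = V[:, idx] * np.sqrt(np.abs(w[idx]))

# Gaussian mixture clustering of the embedded nodes
labels = GaussianMixture(n_components=2, random_state=0).fit_predict(Xhat)
```

With this strong heterophilic signal, the fitted mixture should recover the two planted communities essentially perfectly (up to label permutation).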
A common approach to modeling networks assigns each node to a position on a low-dimensional manifold where distance is inversely proportional to connection likelihood. More positive manifold curvature encourages more and tighter communities; negative curvature induces repulsion. We consistently estimate manifold type, dimension, and curvature from simply connected, complete Riemannian manifolds of constant curvature. We represent the graph as a noisy distance matrix based on the ties between cliques, then develop hypothesis tests to determine whether the observed distances could plausibly be embedded isometrically in each of the candidate geometries. We apply our approach to data-sets from economics and neuroscience.
Spectral embedding of network adjacency matrices often produces node representations that live approximately around low-dimensional submanifold structures. In particular, hidden substructure is expected to arise when the graph is generated from a latent position model. Furthermore, the presence of communities within the network might generate community-specific submanifold structures in the embedding, but this is not explicitly accounted for in most statistical models for networks. In this article, a class of models called latent structure block models (LSBM) is proposed to address such scenarios, allowing for graph clustering when community-specific one-dimensional manifold structure is present. LSBMs focus on a specific class of latent space models, the random dot product graph (RDPG), and assign a latent submanifold to the latent positions of each community. A Bayesian model for the embeddings arising from LSBMs is discussed, and shown to have good performance on simulated and real-world network data. The model is able to correctly recover the underlying communities living in a one-dimensional manifold, even when the parametric form of the underlying curves is unknown, achieving remarkable results on a variety of real data.
We connect two random graph models, the popularity adjusted block model (PABM) and the generalized random dot product graph (GRDPG), by demonstrating that the PABM is a special case of the GRDPG in which communities correspond to mutually orthogonal subspaces of latent vectors. This insight allows us to construct new algorithms for community detection and parameter estimation for the PABM, as well as to improve an existing algorithm that relies on sparse subspace clustering. Using established asymptotic properties of adjacency spectral embedding for the GRDPG, we derive asymptotic properties of these algorithms. In particular, we demonstrate that the absolute number of community detection errors tends to zero as the number of graph vertices tends to infinity. Simulation experiments illustrate these properties.
We extend two methods originally developed for multidimensional scaling and dimension reduction of multivariate data to the functional setting. We focus on classical scaling and Isomap, prototypical methods that have played important roles in these areas, and showcase their use in the context of functional data analysis. In doing so, we highlight the crucial role played by the ambient metric.
Many two-sample network hypothesis testing methodologies operate under the implicit assumption that the vertex correspondence across networks is a priori known. In this paper, we consider the degradation of power in two-sample graph hypothesis testing when there are misaligned/label-shuffled vertices across networks. In the context of stochastic block model networks, we theoretically explore the power loss due to shuffling for a pair of hypothesis tests based on Frobenius norm differences between estimated edge probability matrices or between adjacency matrices. We further examine the loss in testing power in both the stochastic block model and random dot product graph settings, where we compare the power loss across multiple recently proposed tests in the literature. Finally, we demonstrate the impact that shuffling can have in real-data testing, in examples from neuroscience and social network analysis.
We propose a goodness-of-fit test for degree-corrected stochastic block models (DCSBM). The test is based on an adjusted chi-square statistic for measuring equality among groups of $n$ multinomial distributions with $d_1,\dots,d_n$ observations. In the context of network models, the number of multinomials ($n$) grows much faster than the numbers of observations ($d_i$), corresponding to the degree of node $i$, so the setting deviates from classical asymptotics. We show that the statistic achieves its limiting null distribution as long as the harmonic mean of $\{d_i\}$ grows to infinity. When applied sequentially, the test can also be used to determine the number of communities. The test operates on a compressed version of the adjacency matrix, conditional on the degrees, and as a result is highly scalable to large sparse networks. We incorporate a novel idea of compressing the rows based on a $(K+1)$-community assignment when testing for $K$ communities. This approach increases the power in sequential applications without sacrificing computational efficiency, and we prove its consistency in recovering the number of communities. Since the test statistic does not rely on a specific alternative, its utility goes beyond sequential testing, and it can be used to simultaneously test against a wide range of alternatives outside the DCSBM family. In particular, we prove that the test is consistent against a general family of latent-variable network models with community structure.
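To make the flavor of the statistic concrete, here is a hedged sketch of an adjusted chi-square statistic for homogeneity of $n$ multinomials with few observations each. The helper `adjusted_chisq`, its normalisation, and the simulation are our own simplification, not the paper's exact statistic (which conditions on degrees and operates on a compressed adjacency matrix).

```python
import numpy as np

def adjusted_chisq(counts):
    """Centred and scaled chi-square statistic for testing that the rows
    of `counts` (n multinomials over K cells) share one probability
    vector; roughly standard normal under the null when n is large."""
    counts = np.asarray(counts, dtype=float)
    n, K = counts.shape
    d = counts.sum(axis=1)                      # observations per multinomial
    phat = counts.sum(axis=0) / counts.sum()    # pooled cell probabilities
    E = np.outer(d, phat)                       # expected counts
    mask = E > 0
    chi2 = ((counts - E)[mask] ** 2 / E[mask]).sum()
    df = (n - 1) * (K - 1)
    return (chi2 - df) / np.sqrt(2 * df)        # normalise: (chi2 - df)/sd

rng = np.random.default_rng(2)
# null: all 500 rows share one multinomial, 20 observations each
null_counts = rng.multinomial(20, [0.2, 0.3, 0.5], size=500)
T0 = adjusted_chisq(null_counts)
# alternative: two groups of rows with different cell probabilities
alt_counts = np.vstack([rng.multinomial(20, [0.2, 0.3, 0.5], size=250),
                        rng.multinomial(20, [0.5, 0.3, 0.2], size=250)])
T1 = adjusted_chisq(alt_counts)
```

Under the null the statistic stays near zero; under the heterogeneous alternative it is far in the right tail.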
Classical asymptotic theory for statistical inference usually involves calibrating a statistic by fixing the dimension $d$ while letting the sample size $n$ increase to infinity. Recently, much effort has been dedicated towards understanding how these methods behave in high-dimensional settings, where $d$ and $n$ both increase to infinity together. This often leads to different inference procedures, depending on the assumptions about the dimensionality, leaving the practitioner in a bind: given a dataset with 100 samples in 20 dimensions, should they calibrate by assuming $n \gg d$, or $d/n \approx 0.2$? This paper considers the goal of dimension-agnostic inference; developing methods whose validity does not depend on any assumption on $d$ versus $n$. We introduce an approach that uses variational representations of existing test statistics along with sample splitting and self-normalization to produce a new test statistic with a Gaussian limiting distribution, regardless of how $d$ scales with $n$. The resulting statistic can be viewed as a careful modification of degenerate U-statistics, dropping diagonal blocks and retaining off-diagonal blocks. We exemplify our technique for some classical problems including one-sample mean and covariance testing, and show that our tests have minimax rate-optimal power against appropriate local alternatives. In most settings, our cross U-statistic matches the high-dimensional power of the corresponding (degenerate) U-statistic up to a $\sqrt{2}$ factor.
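A stripped-down sketch of the sample-splitting idea for one-sample mean testing, assuming numpy. The helper `cross_mean_test` is our own simplification of the cross U-statistic: project one half of the sample onto the other half's mean, then self-normalize, giving a statistic that is approximately standard Gaussian under the null regardless of the dimension.

```python
import numpy as np

def cross_mean_test(X):
    """Dimension-agnostic one-sample mean test via sample splitting:
    use the second half's sample mean as a projection direction, score
    the first half along it, and studentize the mean score.
    Approximately N(0, 1) under H0: E[X] = 0."""
    n = X.shape[0]
    X1, X2 = X[: n // 2], X[n // 2 :]
    direction = X2.mean(axis=0)          # estimated from the second half
    proj = X1 @ direction                # scalar scores for the first half
    return np.sqrt(len(proj)) * proj.mean() / proj.std(ddof=1)

rng = np.random.default_rng(3)
T_null = cross_mean_test(rng.normal(size=(400, 50)))          # H0 true
T_alt = cross_mean_test(rng.normal(size=(400, 50)) + 0.3)     # mean shifted
```

Conditional on the second half, the scores in the first half are i.i.d. with mean zero under the null, which is what makes the Gaussian calibration dimension-agnostic.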
We consider the problem of estimating a multivariate function $f_0$ of bounded variation (BV), from noisy observations $y_i = f_0(x_i) + z_i$ made at random design points $x_i \in \mathbb{R}^d$, $i=1,\ldots,n$. We study an estimator that forms the Voronoi diagram of the design points, and then solves an optimization problem that regularizes according to a certain discrete notion of total variation (TV): the sum of weighted absolute differences of parameters $\theta_i,\theta_j$ (which estimate the function values $f_0(x_i),f_0(x_j)$) at all neighboring cells $i,j$ in the Voronoi diagram. This is seen to be equivalent to a variational optimization problem that regularizes according to the usual continuum (measure-theoretic) notion of TV, once we restrict the domain to functions that are piecewise constant over the Voronoi diagram. The regression estimator under consideration hence performs (shrunken) local averaging over adaptively formed unions of Voronoi cells, and we refer to it as the Voronoigram, following the ideas in Koenker (2005), and drawing inspiration from Tukey's regressogram (Tukey, 1961). Our contributions in this paper span both the conceptual and theoretical frontiers: we discuss some of the unique properties of the Voronoigram in comparison to TV-regularized estimators that use other graph-based discretizations; we derive the asymptotic limit of the Voronoi TV functional; and we prove that the Voronoigram is minimax rate optimal (up to log factors) for estimating BV functions that are essentially bounded.
Network data are ubiquitous in modern machine learning, with tasks of interest including node classification, node clustering and link prediction. A frequent approach begins by learning a Euclidean embedding of the network, to which algorithms developed for vector-valued data are applied. For large networks, embeddings are learned using stochastic gradient methods where the sub-sampling scheme can be freely chosen. Despite the strong empirical performance of such methods, they are not well understood theoretically. Our work encapsulates representation methods using a subsampling approach, such as node2vec, into a single unifying framework. We prove, under the assumption that the graph is exchangeable, that the distribution of the learned embedding vectors asymptotically decouples. Moreover, we characterize the asymptotic distribution and provide rates of convergence, in terms of the latent parameters, which includes the choice of loss function and the embedding dimension. This provides a theoretical foundation to understand what the embedding vectors represent and how well these methods perform on downstream tasks. Notably, we observe that typically used loss functions may lead to shortcomings, such as a lack of Fisher consistency.
There has been intense recent activity in embedding very high-dimensional and nonlinear data structures, much of it in the data science and machine learning literature. We survey this activity in four parts. In the first part we cover nonlinear methods such as principal curves, multidimensional scaling, local linear methods, Isomap, graph-based methods, diffusion maps, kernel-based methods, and random projections. The second part is concerned with topological embedding methods, in particular the mapping of topological properties into persistence diagrams, and the Mapper algorithm. Another type of data set experiencing tremendous growth is very high-dimensional network data. The task considered in the third part is how to embed such data in a vector space of moderate dimension, so as to make the data amenable to traditional techniques such as clustering and classification. Arguably, this is where the contrast between algorithmic machine learning methods and statistical modelling (so-called stochastic block modelling) is at its sharpest, and we discuss the pros and cons of the two approaches. The final part of the survey deals with embedding in $\mathbb{R}^2$, i.e., visualization. Three methods are presented, based on the material of the first three parts: $t$-SNE, UMAP, and LargeVis. The methods are illustrated and compared on two simulated data sets: one consisting of a triplet of noisy Ranunculoid curves, and one consisting of networks of increasing complexity generated with stochastic block models and with two types of nodes.
In this paper, we propose a multiple kernel testing procedure for inference on survival data when several factors (e.g., different treatment groups, gender, medical history) and their interactions are of interest simultaneously. Our method is able to deal with complex data, and can be seen as an alternative to the omnipresent Cox model when assumptions such as proportionality cannot be justified. Our methodology combines well-known concepts from survival analysis, machine learning, and multiple testing: weighted log-rank tests, kernel methods, and multiple contrast tests. In this way, complex hazard alternatives beyond the classical proportional hazards set-up can be detected. Moreover, multiple comparisons are performed by fully exploiting the dependence structure of the individual testing procedures, to avoid a loss of power. Altogether, this yields a flexible and powerful procedure for factorial survival designs, whose theoretical validity is proven by martingale arguments and the theory of $V$-statistics. We evaluate the performance of our method in an extensive simulation study and illustrate it by a real data analysis.
We introduce estimators for distances in a compact Riemannian manifold $M$ based on graph Laplacian estimates of the Laplace-Beltrami operator. We upper bound the ratio of the estimated to the true manifold distance in terms of the spectral error of an approximation to manifold distance arising in noncommutative geometry (cf. [Connes and van Suijlekom, 2020]), the spectral accuracy of the graph Laplacian estimates, and implied geometric properties of the manifold. Consequently, we obtain consistency results for the estimators for samples drawn i.i.d. from a strictly positive density on $M$, and for graph Laplacians whose spectra converge, in an appropriate sense, to that of the Laplace-Beltrami operator. The estimator resembles, and its convergence properties are derived from, a special case of the Kantorovich dual reformulation of Wasserstein distance known as Connes' distance formula.
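The graph-level analogue of such a dual distance formula can be illustrated with a small linear program: maximise $f(s) - f(t)$ subject to a Lipschitz-type constraint on every edge, which recovers the shortest-path (geodesic) distance. This toy sketch (scipy assumed; the `dual_distance` helper is ours, and it is not the paper's spectral estimator) shows the duality at work.

```python
import numpy as np
from scipy.optimize import linprog
from scipy.sparse.csgraph import shortest_path

def dual_distance(edges, lengths, n, s, t):
    """Kantorovich-dual-style distance on a graph:
    max f(s) - f(t) subject to |f(i) - f(j)| <= length(i, j) on edges.
    This dual program recovers the shortest-path distance."""
    c = np.zeros(n); c[s] = -1.0; c[t] = 1.0        # minimise -(f_s - f_t)
    A_ub, b_ub = [], []
    for (i, j), w in zip(edges, lengths):
        row = np.zeros(n); row[i] = 1.0; row[j] = -1.0
        A_ub.append(row); b_ub.append(w)            #  f_i - f_j <= w
        A_ub.append(-row); b_ub.append(w)           #  f_j - f_i <= w
    bounds = [(None, None)] * n
    bounds[t] = (0.0, 0.0)                          # pin one value (gauge)
    res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub, bounds=bounds)
    return -res.fun

# 5-cycle with unit edge lengths: the 0 -> 2 distance is 2 (path 0-1-2)
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
d = dual_distance(edges, [1.0] * 5, n=5, s=0, t=2)

# cross-check against Dijkstra on the same graph
W = np.zeros((5, 5))
for (i, j), w in zip(edges, [1.0] * 5):
    W[i, j] = W[j, i] = w
sp = shortest_path(W, directed=False)[0, 2]
```

The Lipschitz-constrained supremum and the shortest-path computation agree, which is the finite-graph shadow of the dual distance formula.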
This paper studies the statistical properties of principal components regression with Laplacian eigenmaps (PCR-LE), a method for nonparametric regression based on Laplacian eigenmaps (LE). PCR-LE works by projecting a vector of observed responses ${\bf Y} = (Y_1,\ldots,Y_n)$ onto a subspace spanned by certain eigenvectors of a neighborhood graph Laplacian. We show that PCR-LE achieves minimax rates of convergence for random design regression over Sobolev spaces. Under sufficient smoothness conditions on the design density $p$, PCR-LE achieves the optimal rates for both estimation (where the optimal rate in squared $L^2$ norm is known to be $n^{-2s/(2s+d)}$) and goodness-of-fit testing ($n^{-4s/(4s+d)}$). We also show that PCR-LE is \emph{manifold adaptive}: that is, we consider the situation where the design is supported on a manifold of small intrinsic dimension $m$, and give upper bounds establishing that PCR-LE achieves the faster minimax estimation ($n^{-2s/(2s+m)}$) and testing ($n^{-4s/(4s+m)}$) rates of convergence. Interestingly, these rates are almost always much faster than the known rates of convergence of graph Laplacian eigenvectors; in other words, for this problem, regression with estimated features appears to be easier, statistically speaking, than estimating the features themselves. We support these theoretical results with empirical evidence.
Testing the significance of a variable or group of variables $X$ for predicting a response $Y$, given additional covariates $Z$, is a ubiquitous task in statistics. A simple but common approach is to specify a linear model, and then test whether the regression coefficient for $X$ is non-zero. However, when the model is misspecified, the test may have poor power, for example when $X$ is involved in complex interactions, or lead to many false rejections. In this work we study the problem of testing the model-free null of conditional mean independence, i.e. that the conditional mean of $Y$ given $X$ and $Z$ does not depend on $X$. We propose a simple and general framework that can leverage flexible nonparametric or machine learning methods, such as additive models or random forests, to yield both robust error control and high power. The procedure involves using these methods to perform regressions, first to estimate a form of projection of $Y$ on $X$ and $Z$ using one half of the data, and then to estimate the expected conditional covariance between this projection and $Y$ on the remaining half of the data. While the approach is general, we show that a version of our procedure using spline regression achieves what we show is the minimax optimal rate in this nonparametric testing problem. Numerical experiments demonstrate the effectiveness of our approach both in terms of maintaining Type I error control, and power, compared to several existing approaches.
The prevailing paradigm in statistical inference relies on the structure of i.i.d. data from a hypothetical infinite population. Despite its success, this framework remains inflexible under complex data structures, even in cases where it is clear what the infinite population represents. In this paper, we explore an alternative framework in which inference rests only on invariance assumptions about the model errors, such as exchangeability or sign symmetry. As a general method for this invariant inference problem, we propose a randomization procedure based on residuals. We prove general conditions for the asymptotic validity of this procedure, and illustrate it on many data structures, including clustered errors with one-way and two-way layouts. We find that invariant inference via residual randomization has three appealing properties: (1) it is valid under weak and interpretable conditions, and can address problems with heavy-tailed data, limited clusters, and even some high-dimensional settings; (2) it is robust in finite samples, because it does not rely on the regularity conditions required by classical asymptotics; (3) it solves the inference problem in a unified way by adapting to the data structure. By contrast, classical procedures such as OLS or the bootstrap presume an i.i.d. structure, and must be modified whenever the actual problem structure differs. This mismatch in the classical framework has led to a plethora of robust-error techniques and bootstrap variants, which frequently confuse applied research. We confirm these findings through extensive empirical evaluation: residual randomization performs favorably against many alternatives, including robust-error methods, bootstrap variants, and hierarchical models.
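A minimal sketch of residual randomization under a sign-symmetry invariance, for the simplest case $y = \beta x + e$ with $H_0: \beta = 0$ (so the null residuals are just $y$ itself); the helper `resid_rand_test` and the heavy-tailed simulation are our own illustration, assuming numpy.

```python
import numpy as np

def resid_rand_test(x, y, n_rand=2000, seed=0):
    """Residual randomization test of H0: beta = 0 in y = beta*x + e,
    assuming the errors are symmetric about zero (sign invariance).
    Under H0 the residuals equal y, so we randomly sign-flip y and
    recompute the least-squares slope."""
    rng = np.random.default_rng(seed)
    bhat = x @ y / (x @ x)                       # observed slope
    flips = rng.choice([-1.0, 1.0], size=(n_rand, len(y)))
    bstar = (flips * y) @ x / (x @ x)            # randomization slopes
    return (1 + np.sum(np.abs(bstar) >= abs(bhat))) / (n_rand + 1)

rng = np.random.default_rng(8)
x = rng.normal(size=100)
# heavy-tailed errors, where classical asymptotics are shaky
p_null = resid_rand_test(x, rng.standard_t(df=3, size=100))          # H0 true
p_alt = resid_rand_test(x, 1.0 * x + 0.5 * rng.standard_t(df=3, size=100))
```

The test needs no variance estimate or limiting distribution; validity follows from the assumed sign symmetry of the errors alone.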
We propose a framework for analyzing and comparing distributions, which we use to construct statistical tests to determine if two samples are drawn from different distributions. Our test statistic is the largest difference in expectations over functions in the unit ball of a reproducing kernel Hilbert space (RKHS), and is called the maximum mean discrepancy (MMD). We present two distribution-free tests based on large deviation bounds for the MMD, and a third test based on the asymptotic distribution of this statistic. The MMD can be computed in quadratic time, although efficient linear time approximations are available. Our statistic is an instance of an integral probability metric, and various classical metrics on distributions are obtained when alternative function classes are used in place of an RKHS. We apply our two-sample tests to a variety of problems, including attribute matching for databases using the Hungarian marriage method, where they perform strongly. Excellent performance is also obtained when comparing distributions over graphs, for which these are the first such tests.
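The quadratic-time unbiased estimator of squared MMD with a Gaussian kernel can be sketched directly from its definition (numpy assumed; the bandwidth and simulation choices are ours).

```python
import numpy as np

def mmd2_unbiased(X, Y, bandwidth=1.0):
    """Unbiased quadratic-time estimate of MMD^2 with a Gaussian kernel:
    mean within-sample similarities minus twice the cross similarity."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * bandwidth ** 2))
    m, n = len(X), len(Y)
    Kxx, Kyy, Kxy = k(X, X), k(Y, Y), k(X, Y)
    np.fill_diagonal(Kxx, 0.0)           # drop i = j terms (unbiasedness)
    np.fill_diagonal(Kyy, 0.0)
    return (Kxx.sum() / (m * (m - 1))
            + Kyy.sum() / (n * (n - 1))
            - 2 * Kxy.mean())

rng = np.random.default_rng(5)
X = rng.normal(size=(200, 2))
same = mmd2_unbiased(X, rng.normal(size=(200, 2)))            # P = Q
diff = mmd2_unbiased(X, rng.normal(loc=1.0, size=(200, 2)))   # mean-shifted
```

When the two samples come from the same distribution the estimate concentrates near zero; under a mean shift it is clearly positive.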
The kernel Maximum Mean Discrepancy~(MMD) is a popular multivariate distance metric between distributions that has found utility in two-sample testing. The usual kernel-MMD test statistic is a degenerate U-statistic under the null, and thus it has an intractable limiting distribution. Hence, to design a level-$\alpha$ test, one usually selects the rejection threshold as the $(1-\alpha)$-quantile of the permutation distribution. The resulting nonparametric test has finite-sample validity but suffers from large computational cost, since every permutation takes quadratic time. We propose the cross-MMD, a new quadratic-time MMD test statistic based on sample-splitting and studentization. We prove that under mild assumptions, the cross-MMD has a limiting standard Gaussian distribution under the null. Importantly, we also show that the resulting test is consistent against any fixed alternative, and when using the Gaussian kernel, it has minimax rate-optimal power against local alternatives. For large sample sizes, our new cross-MMD provides a significant speedup over the MMD, for only a slight loss in power.
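A hedged sketch of the sample-splitting-and-studentization idea, assuming numpy: estimate the witness function from one half of each sample, evaluate it on the other halves, and studentize the difference in means. The helper `cross_mmd` is our own simplification of the statistic.

```python
import numpy as np

def cross_mmd(X, Y, bandwidth=1.0):
    """Cross-MMD sketch: build the kernel witness from the second halves
    (X2, Y2), evaluate it on the first halves (X1, Y1), and studentize
    the difference in means.  Approximately N(0, 1) under H0: P = Q."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * bandwidth ** 2))
    n, m = len(X), len(Y)
    X1, X2 = X[: n // 2], X[n // 2 :]
    Y1, Y2 = Y[: m // 2], Y[m // 2 :]
    u = k(X1, X2).mean(axis=1) - k(X1, Y2).mean(axis=1)   # witness at X1
    v = k(Y1, X2).mean(axis=1) - k(Y1, Y2).mean(axis=1)   # witness at Y1
    se = np.sqrt(u.var(ddof=1) / len(u) + v.var(ddof=1) / len(v))
    return (u.mean() - v.mean()) / se

rng = np.random.default_rng(6)
X = rng.normal(size=(400, 2))
T_null = cross_mmd(X, rng.normal(size=(400, 2)))            # P = Q
T_alt = cross_mmd(X, rng.normal(loc=1.0, size=(400, 2)))    # mean-shifted
```

Conditional on the second halves, the evaluations on the first halves are i.i.d., so a Gaussian threshold replaces the quadratic-time permutation calibration, at a modest cost in power.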