We propose a framework for analyzing and comparing distributions, which we use to construct statistical tests to determine if two samples are drawn from different distributions. Our test statistic is the largest difference in expectations over functions in the unit ball of a reproducing kernel Hilbert space (RKHS), and is called the maximum mean discrepancy (MMD). We present two distribution-free tests based on large deviation bounds for the MMD, and a third test based on the asymptotic distribution of this statistic. The MMD can be computed in quadratic time, although efficient linear time approximations are available. Our statistic is an instance of an integral probability metric, and various classical metrics on distributions are obtained when alternative function classes are used in place of an RKHS. We apply our two-sample tests to a variety of problems, including attribute matching for databases using the Hungarian marriage method, where they perform strongly. Excellent performance is also obtained when comparing distributions over graphs, for which these are the first such tests.
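The quadratic-time computation mentioned above can be made concrete with a short sketch of the standard unbiased estimator of MMD^2 with a Gaussian kernel (the function name and the fixed bandwidth are our illustrative choices, not the paper's):

```python
import numpy as np

def mmd2_unbiased(X, Y, bandwidth=1.0):
    """Unbiased estimate of MMD^2 with a Gaussian kernel.

    X: (m, d) sample from P, Y: (n, d) sample from Q.
    All pairwise kernel evaluations are used, hence quadratic time.
    """
    def gram(A, B):
        sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
        return np.exp(-sq / (2 * bandwidth**2))

    Kxx, Kyy, Kxy = gram(X, X), gram(Y, Y), gram(X, Y)
    m, n = len(X), len(Y)
    # Drop diagonal terms so the within-sample averages are unbiased.
    term_x = (Kxx.sum() - np.trace(Kxx)) / (m * (m - 1))
    term_y = (Kyy.sum() - np.trace(Kyy)) / (n * (n - 1))
    return term_x + term_y - 2 * Kxy.mean()
```

On two samples from the same distribution the estimate concentrates near zero; under a clear mean shift it is bounded away from zero, which is what the large-deviation and asymptotic tests of the abstract exploit.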
We propose two statistical tests to determine if two samples are from different distributions. Our test statistic is in both cases the distance between the means of the two samples mapped into a reproducing kernel Hilbert space (RKHS). The first test is based on a large deviation bound for the test statistic, while the second is based on the asymptotic distribution of this statistic. The test statistic can be computed in $O(m^2)$ time. We apply our approach to a variety of problems, including attribute matching for databases using the Hungarian marriage method, where our test performs strongly. We also demonstrate excellent performance when comparing distributions over graphs, for which no alternative tests currently exist.
The kernel Maximum Mean Discrepancy~(MMD) is a popular multivariate distance metric between distributions that has found utility in two-sample testing. The usual kernel-MMD test statistic is a degenerate U-statistic under the null, and thus it has an intractable limiting distribution. Hence, to design a level-$\alpha$ test, one usually selects the rejection threshold as the $(1-\alpha)$-quantile of the permutation distribution. The resulting nonparametric test has finite-sample validity but suffers from large computational cost, since every permutation takes quadratic time. We propose the cross-MMD, a new quadratic-time MMD test statistic based on sample-splitting and studentization. We prove that under mild assumptions, the cross-MMD has a limiting standard Gaussian distribution under the null. Importantly, we also show that the resulting test is consistent against any fixed alternative, and when using the Gaussian kernel, it has minimax rate-optimal power against local alternatives. For large sample sizes, our new cross-MMD provides a significant speedup over the MMD, for only a slight loss in power.
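The sample-splitting-and-studentization idea can be sketched as follows. This is a hedged reconstruction of a cross-MMD-style statistic, not the paper's exact construction: the witness function is built from the second halves of the two samples, evaluated on the first halves, and the difference of means is studentized so that the null limit is standard Gaussian.

```python
import numpy as np

def gauss_kernel(A, B, bw=1.0):
    sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-sq / (2 * bw**2))

def cross_mmd_stat(X, Y, bw=1.0):
    """Sketch of a cross-MMD-style statistic via sample splitting.

    Build f(z) = mean_j k(z, X2_j) - mean_j k(z, Y2_j) from the second
    halves, evaluate it on the first halves, then studentize the
    difference of means.  Approximately N(0, 1) under the null.
    """
    n, m = len(X) // 2, len(Y) // 2
    X1, X2, Y1, Y2 = X[:n], X[n:], Y[:m], Y[m:]
    u = gauss_kernel(X1, X2, bw).mean(1) - gauss_kernel(X1, Y2, bw).mean(1)
    v = gauss_kernel(Y1, X2, bw).mean(1) - gauss_kernel(Y1, Y2, bw).mean(1)
    num = u.mean() - v.mean()
    se = np.sqrt(u.var(ddof=1) / n + v.var(ddof=1) / m)
    return num / se
```

A level-alpha test then simply rejects when the statistic exceeds the standard Gaussian quantile, with no permutations needed.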
We propose a novel nonparametric two-sample test based on the Maximum Mean Discrepancy (MMD), which is constructed by aggregating tests with different kernel bandwidths. This aggregation procedure, called MMDAgg, ensures that test power is maximized over the collection of kernels used, without requiring held-out data for kernel selection (which results in a loss of test power), or arbitrary kernel choices such as the median heuristic. We work in the non-asymptotic framework and prove that our aggregated test is minimax adaptive over Sobolev balls. Our guarantees are not restricted to a specific kernel, but hold for any product of one-dimensional translation-invariant characteristic kernels which are absolutely integrable. Moreover, our results apply to popular numerical procedures to determine the test threshold, namely permutations and the wild bootstrap. Through numerical experiments on both synthetic and real-world datasets, we demonstrate that MMDAgg outperforms alternative state-of-the-art approaches to MMD kernel adaptation for two-sample testing.
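The aggregation idea can be illustrated with a simplified sketch: compute the MMD for several bandwidths, calibrate each by permutation, and reject if any bandwidth detects a difference. The Bonferroni correction below stands in for the paper's more refined level adjustment, and all names are ours.

```python
import numpy as np

def mmd2_biased(K, m):
    # K is the (m+n) x (m+n) pooled Gram matrix; first m rows are sample X.
    Kxx, Kyy, Kxy = K[:m, :m], K[m:, m:], K[:m, m:]
    return Kxx.mean() + Kyy.mean() - 2 * Kxy.mean()

def mmd_agg_test(X, Y, bandwidths, alpha=0.05, B=200, seed=0):
    """Simplified MMDAgg-style test: aggregate Gaussian-kernel MMD tests
    over several bandwidths, each calibrated by permutation, with a
    Bonferroni level correction (an assumption of this sketch)."""
    rng = np.random.default_rng(seed)
    Z = np.vstack([X, Y])
    m, N = len(X), len(X) + len(Y)
    sq = np.sum(Z**2, 1)[:, None] + np.sum(Z**2, 1)[None, :] - 2 * Z @ Z.T
    for bw in bandwidths:
        K = np.exp(-sq / (2 * bw**2))
        obs = mmd2_biased(K, m)
        perm = []
        for _ in range(B):
            p = rng.permutation(N)
            perm.append(mmd2_biased(K[np.ix_(p, p)], m))
        thresh = np.quantile(perm, 1 - alpha / len(bandwidths))
        if obs > thresh:
            return True  # some bandwidth detects a difference
    return False
```

Note that permuting indices of the pooled Gram matrix avoids recomputing any kernel evaluations across permutations.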
Classical asymptotic theory for statistical inference usually involves calibrating a statistic by fixing the dimension $d$ while letting the sample size $n$ increase to infinity. Recently, much effort has been dedicated towards understanding how these methods behave in high-dimensional settings, where $d$ and $n$ both increase to infinity together. This often leads to different inference procedures, depending on the assumptions about the dimensionality, leaving the practitioner in a bind: given a dataset with 100 samples in 20 dimensions, should they calibrate by assuming $n \gg d$, or $d/n \approx 0.2$? This paper considers the goal of dimension-agnostic inference: developing methods whose validity does not depend on any assumption on $d$ versus $n$. We introduce an approach that uses variational representations of existing test statistics along with sample splitting and self-normalization to produce a new test statistic with a Gaussian limiting distribution, regardless of how $d$ scales with $n$. The resulting statistic can be viewed as a careful modification of degenerate U-statistics, dropping diagonal blocks and retaining off-diagonal blocks. We exemplify our technique for some classical problems including one-sample mean and covariance testing, and show that our tests have minimax rate-optimal power against appropriate local alternatives. In most settings, our cross U-statistic matches the high-dimensional power of the corresponding (degenerate) U-statistic up to a $\sqrt{2}$ factor.
In nonparametric independence testing, we observe i.i.d.\ data $\{(X_i,Y_i)\}_{i=1}^n$, where $X \in \mathcal{X}, Y \in \mathcal{Y}$ lie in any general spaces, and we wish to test the null that $X$ is independent of $Y$. Modern test statistics such as the kernel Hilbert-Schmidt Independence Criterion (HSIC) and Distance Covariance (dCov) have intractable null distributions due to the degeneracy of the underlying U-statistics. Thus, in practice, one often resorts to using permutation testing, which provides a nonasymptotic guarantee at the expense of recalculating the quadratic-time statistics (say) a few hundred times. This paper provides a simple but nontrivial modification of HSIC and dCov (called xHSIC and xdCov, pronounced ``cross'' HSIC/dCov) so that they have a limiting Gaussian distribution under the null, and thus do not require permutations. This requires building on the newly developed theory of cross U-statistics by Kim and Ramdas (2020), and in particular developing several nontrivial extensions of the theory in Shekhar et al. (2022), which developed an analogous permutation-free kernel two-sample test. We show that our new tests, like the originals, are consistent against fixed alternatives, and minimax rate optimal against smooth local alternatives. Numerical simulations demonstrate that compared to the full dCov or HSIC, our variants have the same power up to a $\sqrt 2$ factor, giving practitioners a new option for large problems or data-analysis pipelines where computation, not sample size, could be the bottleneck.
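To make concrete the permutation-based baseline that the abstract contrasts with, here is a sketch of the quadratic-time biased HSIC estimator, $\mathrm{tr}(KHLH)/n^2$, calibrated by permutation. The Gaussian kernels and fixed bandwidth are illustrative choices; note that the full statistic really is recomputed for every permutation, which is the cost xHSIC avoids.

```python
import numpy as np

def hsic_biased(X, Y, bw=1.0):
    """Biased HSIC estimator trace(K H L H) / n^2 with Gaussian kernels."""
    n = len(X)
    def gram(A):
        sq = np.sum(A**2, 1)[:, None] + np.sum(A**2, 1)[None, :] - 2 * A @ A.T
        return np.exp(-sq / (2 * bw**2))
    H = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    K, L = gram(X), gram(Y)
    return np.trace(K @ H @ L @ H) / n**2

def hsic_perm_test(X, Y, alpha=0.05, B=200, seed=0):
    """Permutation calibration: shuffle Y to break dependence, recomputing
    the quadratic-time statistic B times."""
    rng = np.random.default_rng(seed)
    obs = hsic_biased(X, Y)
    perm = [hsic_biased(X, Y[rng.permutation(len(Y))]) for _ in range(B)]
    return obs > np.quantile(perm, 1 - alpha)
```

The permutation loop dominates the runtime; the cross-statistic variants in the abstract replace it with a single Gaussian threshold.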
The sequence of moments of a vector-valued random variable can characterize its law. We study the analogous problem for path-valued random variables, that is, stochastic processes, by using so-called robust signature moments. This allows us to derive a maximum-mean-discrepancy-type metric between laws of stochastic processes, and to study the topology it induces on the space of such laws. This metric can be kernelized using the signature kernel, allowing it to be computed efficiently. As an application, we provide a nonparametric two-sample hypothesis test for laws of stochastic processes.
Kernel-based tests provide a simple yet effective framework for designing nonparametric testing procedures using the theory of reproducing kernel Hilbert spaces. In this paper, we propose new theoretical tools that can be used to study the asymptotic behavior of kernel-based tests in several data regimes and for many different testing problems. Unlike current approaches, our methods avoid the lengthy $U$- and $V$-statistic expansions and limit theorems that commonly appear in the literature, and instead work directly with random functionals on Hilbert spaces. As a consequence, our framework leads to simple and transparent analyses of kernel tests, requiring only mild regularity conditions. Furthermore, we show that our analyses can in general not be improved, by proving that the regularity conditions our methods require are both sufficient and necessary. To illustrate the effectiveness of our approach, we present a new kernel test for the conditional independence testing problem, as well as new analyses of known kernel-based tests.
We propose a series of computationally efficient, nonparametric tests for the two-sample, independence and goodness-of-fit problems, using the Maximum Mean Discrepancy (MMD), the Hilbert-Schmidt Independence Criterion (HSIC), and the Kernel Stein Discrepancy (KSD), respectively. Our test statistics are incomplete $U$-statistics, whose computational cost interpolates between linear time in the number of samples and the quadratic time associated with the classical $U$-statistic tests. The three proposed tests aggregate over several kernel bandwidths to detect departures at various scales: we call the resulting tests MMDAggInc, HSICAggInc and KSDAggInc. For the test thresholds, we derive a quantile bound for wild-bootstrapped incomplete $U$-statistics, which is of independent interest. We derive uniform separation rates for MMDAggInc and HSICAggInc, and quantify exactly the trade-off between computational efficiency and the attainable rates: to our knowledge, this result is novel for tests based on incomplete $U$-statistics. We further show that in the quadratic-time case, the wild bootstrap incurs no loss of test power over the more widespread permutation-based approach, since both attain the same minimax optimal rates (which in turn match the rates obtained using oracle quantiles). We support our claims with numerical experiments on the trade-off between computational efficiency and test power. In all three testing frameworks, we observe that our proposed linear-time aggregated tests obtain higher power than current state-of-the-art linear-time kernel tests.
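The incomplete-U-statistic idea can be sketched as follows: average the two-sample kernel $h$ over a random subset of index pairs instead of all $O(n^2)$ pairs, so the cost is set by the number of pairs drawn. This sketch omits the aggregation over bandwidths and the wild bootstrap calibration; the sampling design and names are our illustrative choices.

```python
import numpy as np

def gauss_k(a, b, bw=1.0):
    return np.exp(-np.sum((a - b)**2, -1) / (2 * bw**2))

def mmd2_incomplete(X, Y, num_pairs, seed=0, bw=1.0):
    """Incomplete-U-statistic estimate of MMD^2: average the two-sample
    kernel h((x_i, y_i), (x_j, y_j)) over `num_pairs` random index pairs,
    interpolating between linear and quadratic cost."""
    rng = np.random.default_rng(seed)
    n = min(len(X), len(Y))
    i = rng.integers(0, n, num_pairs)
    j = rng.integers(0, n, num_pairs)
    keep = i != j                 # the U-statistic kernel requires i != j
    i, j = i[keep], j[keep]
    h = (gauss_k(X[i], X[j], bw) + gauss_k(Y[i], Y[j], bw)
         - gauss_k(X[i], Y[j], bw) - gauss_k(X[j], Y[i], bw))
    return h.mean()
```

With `num_pairs` proportional to the sample size one recovers a linear-time statistic; letting it grow towards $n^2$ approaches the complete U-statistic.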
We introduce general nonparametric tests of independence between right-censored survival times and covariates, which may be multivariate. Our test statistic has a dual interpretation: first, as the supremum of a potentially infinite collection of weight-indexed log-rank tests, with weight functions belonging to a reproducing kernel Hilbert space (RKHS); and second, as the norm of the difference of embeddings of certain finite measures into the RKHS, analogous to the Hilbert-Schmidt Independence Criterion (HSIC) test statistic. We study the asymptotic properties of the test, finding sufficient conditions to ensure our test correctly rejects the null hypothesis under any alternative. The test statistic can be computed straightforwardly, and the rejection threshold is obtained via an asymptotically consistent wild bootstrap procedure. Extensive investigations on both simulated and real data suggest that our testing procedure generally performs better than competing approaches in detecting complex nonlinear dependence.
Discrepancy measures between probability distributions, often termed statistical distances, are ubiquitous in probability theory, statistics and machine learning. To combat the curse of dimensionality when estimating these distances from data, recent work has proposed smoothing out local irregularities in the measured distributions via convolution with a Gaussian kernel. Motivated by the scalability of this framework to high dimensions, we investigate the structural and statistical behavior of the Gaussian-smoothed $p$-Wasserstein distance $\mathsf{W}_p^{(\sigma)}$, for arbitrary $p \geq 1$. After establishing basic metric and topological properties of $\mathsf{W}_p^{(\sigma)}$, we explore the asymptotic statistical behavior of $\mathsf{W}_p^{(\sigma)}(\hat{\mu}_n, \mu)$, where $\hat{\mu}_n$ is the empirical distribution of $n$ independent observations from $\mu$. We prove that $\mathsf{W}_p^{(\sigma)}$ enjoys a parametric empirical convergence rate of $n^{-1/2}$, which contrasts the $n^{-1/d}$ rate for unsmoothed $\mathsf{W}_p$ when $d \geq 3$. Our proof relies on controlling $\mathsf{W}_p^{(\sigma)}$ by a $p$th-order smooth Sobolev distance $\mathsf{d}_p^{(\sigma)}$ and deriving the limiting distribution of $\sqrt{n}\,\mathsf{d}_p^{(\sigma)}(\hat{\mu}_n, \mu)$, for all dimensions $d$. As applications, we provide asymptotic guarantees for two-sample testing and minimum distance estimation using $\mathsf{W}_p^{(\sigma)}$, with experiments for $p = 2$ using $\mathsf{d}_2^{(\sigma)}$.
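In one dimension the smoothing idea can be illustrated cheaply: convolving each empirical measure with $N(0, \sigma^2)$ is approximated below by adding Gaussian noise to the samples and averaging over noise draws. This is a rough Monte Carlo sketch of the smoothing construction, not the paper's estimator, and the 1-D $\mathsf{W}_1$ computation assumes equal sample sizes.

```python
import numpy as np

def w1_1d(x, y):
    """1-D Wasserstein-1 distance between equal-size empirical measures:
    the optimal coupling matches sorted samples."""
    return float(np.mean(np.abs(np.sort(x) - np.sort(y))))

def smoothed_w1(x, y, sigma, reps=20, seed=0):
    """Monte Carlo sketch of a Gaussian-smoothed W1 in one dimension:
    add N(0, sigma^2) noise to the samples to mimic convolution of the
    empirical measures with a Gaussian, and average over noise draws."""
    rng = np.random.default_rng(seed)
    vals = [w1_1d(x + sigma * rng.standard_normal(len(x)),
                  y + sigma * rng.standard_normal(len(y)))
            for _ in range(reps)]
    return float(np.mean(vals))
```

The point of the paper is that in high dimension the smoothed distance retains the parametric $n^{-1/2}$ empirical rate, which the unsmoothed distance loses.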
Comparing probability distributions is at the heart of many machine learning algorithms. Maximum Mean Discrepancies (MMD) and Optimal Transport (OT) distances are two classes of distances between probability measures that have attracted abundant attention in the past few years. This paper establishes some conditions under which the Wasserstein distance can be controlled by MMD norms. Our work is motivated by the compressive statistical learning (CSL) theory, a general framework for resource-efficient large-scale learning in which the training data is summarized in a single vector (called a sketch) that captures the information relevant to the considered learning task. Inspired by existing results in CSL, we introduce the Hölder Lower Restricted Isometric Property (Hölder LRIP) and show that this property comes with interesting guarantees for compressive statistical learning. Based on the relations between the MMD and the Wasserstein distance, we provide guarantees for compressive statistical learning by introducing and studying the concept of Wasserstein learnability of the learning task, that is, when some task-specific metric between probability distributions can be bounded by a Wasserstein distance.
Over the last decade, an approach that has gained a lot of popularity to tackle non-parametric testing problems on general (i.e., non-Euclidean) domains is based on the notion of reproducing kernel Hilbert space (RKHS) embedding of probability distributions. The main goal of our work is to understand the optimality of two-sample tests constructed based on this approach. First, we show that the popular MMD (maximum mean discrepancy) two-sample test is not optimal in terms of the separation boundary measured in Hellinger distance. Second, we propose a modification to the MMD test based on spectral regularization by taking into account the covariance information (which is not captured by the MMD test) and prove the proposed test to be minimax optimal with a smaller separation boundary than that achieved by the MMD test. Third, we propose an adaptive version of the above test which involves a data-driven strategy to choose the regularization parameter and show the adaptive test to be almost minimax optimal up to a logarithmic factor. Moreover, our results hold for the permutation variant of the test where the test threshold is chosen elegantly through the permutation of the samples. Through numerical experiments on synthetic and real-world data, we demonstrate the superior performance of the proposed test in comparison to the MMD test.
We present a unified technique for sequential estimation of convex divergences between distributions, including integral probability metrics such as the kernel maximum mean discrepancy, $\varphi$-divergences such as the Kullback-Leibler divergence, and optimal transport costs such as powers of the Wasserstein distance. This is achieved by observing that empirical convex divergences are (partially ordered) reverse submartingales with respect to the exchangeable filtration, coupled with maximal inequalities for such processes. These techniques appear to be complementary and powerful additions to the existing literature on both confidence sequences and convex divergences. We construct an offline-to-sequential device that converts a wide array of existing offline concentration inequalities into time-uniform confidence sequences that can be continuously monitored, providing valid tests or confidence intervals at arbitrary stopping times. The resulting sequential bounds pay only an iterated logarithm price over the corresponding fixed-time bounds, retaining the same dependence on problem parameters (like dimension or alphabet size, if applicable). These results also apply to more general convex functionals, such as the negative differential entropy, suprema of empirical processes, and V-statistics.
Testing the significance of a variable or group of variables $X$ for predicting a response $Y$, given additional covariates $Z$, is a ubiquitous task in statistics. A simple but common approach is to specify a linear model, and then test whether the regression coefficient for $X$ is non-zero. However, when the model is misspecified, the test may have poor power, for example when $X$ is involved in complex interactions, or lead to many false rejections. In this work we study the problem of testing the model-free null of conditional mean independence, i.e. that the conditional mean of $Y$ given $X$ and $Z$ does not depend on $X$. We propose a simple and general framework that can leverage flexible nonparametric or machine learning methods, such as additive models or random forests, to yield both robust error control and high power. The procedure involves using these methods to perform regressions, first to estimate a form of projection of $Y$ on $X$ and $Z$ using one half of the data, and then to estimate the expected conditional covariance between this projection and $Y$ on the remaining half of the data. While the approach is general, we show that a version of our procedure using spline regression achieves what we show is the minimax optimal rate in this nonparametric testing problem. Numerical experiments demonstrate the effectiveness of our approach both in terms of maintaining Type I error control, and power, compared to several existing approaches.
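The two-stage recipe described above can be sketched with plain linear least squares standing in for the flexible regression methods the paper allows (random forests, additive models). This is a simplified illustration of the sample-splitting idea, not the paper's exact procedure: half of the data estimates a projection of $Y$ onto $(X, Z)$, and the other half studentizes an estimate of the expected conditional covariance between that projection and $Y$ given $Z$.

```python
import numpy as np

def lin_fit(F, y):
    """Least-squares fit with intercept; returns a prediction function."""
    A = np.column_stack([np.ones(len(F)), F])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return lambda G: np.column_stack([np.ones(len(G)), G]) @ coef

def cond_mean_indep_stat(X, Z, Y):
    """Sketch of the split-sample test statistic.  Large positive values
    are evidence against the null that E[Y | X, Z] does not depend on X;
    compare against a standard Gaussian quantile."""
    n = len(Y) // 2
    XZ = np.column_stack([X, Z])
    f = lin_fit(XZ[:n], Y[:n])             # projection of Y on (X, Z), half 1
    # On half 2, residualize both Y and the projection on Z, then correlate.
    ry = Y[n:] - lin_fit(Z[n:], Y[n:])(Z[n:])
    rf = f(XZ[n:]) - lin_fit(Z[n:], f(XZ[n:]))(Z[n:])
    prod = ry * rf
    return np.sqrt(len(prod)) * prod.mean() / prod.std(ddof=1)
```

Swapping the linear fits for any consistent nonparametric regressor is what gives the framework its robustness and power, per the abstract.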
Generalized Bayesian inference updates prior beliefs using a loss function, rather than a likelihood, and can therefore be used to confer robustness against possible misspecification of the likelihood. Here we consider generalized Bayesian inference with a Stein discrepancy as a loss function, motivated by applications in which the likelihood contains an intractable normalizing constant. In this context, the Stein discrepancy circumvents evaluation of the normalizing constant and produces generalized posteriors that are either closed form or accessible using standard Markov chain Monte Carlo. On a theoretical level, we show consistency, asymptotic normality, and bias-robustness, highlighting how these properties are impacted by the choice of Stein discrepancy. We then provide numerical experiments on a range of intractable distributions, including applications to kernel-based exponential family models and non-Gaussian graphical models.
This paper introduces a new simulation-based inference procedure to model and sample from multi-dimensional probability distributions given access to i.i.d.\ samples, circumventing the usual approaches of explicitly modeling the density function or designing Markov chain Monte Carlo. Motivated by the notions of distance and isomorphism between metric measure spaces, we propose a new notion called the Reversible Gromov-Monge (RGM) distance and study how RGM can be used to design new transform samplers to perform simulation-based inference. Our RGM sampler can also estimate optimal alignments between two heterogeneous metric measure spaces $(\mathcal{X}, \mu, c_{\mathcal{X}})$ and $(\mathcal{Y}, \nu, c_{\mathcal{Y}})$ from empirical data sets, with the estimated maps approximately pushing forward one measure $\mu$ to the other $\nu$, and vice versa. We study the analytic properties of the RGM distance and derive that, under mild conditions, RGM equals the classic Gromov-Wasserstein distance. Curiously, drawing a connection to Brenier's polar factorization, we show that the RGM sampler induces bias towards strong isomorphism with proper choices of $c_{\mathcal{X}}$ and $c_{\mathcal{Y}}$. Statistical rates of convergence, representation, and optimization questions regarding the induced sampler are studied. Synthetic and real-world examples showcasing the effectiveness of the RGM sampler are also demonstrated.
Kernel mean embeddings are a powerful tool to represent probability distributions over arbitrary spaces as single points in a Hilbert space. Yet, the cost of computing and storing such embeddings prohibits their direct use in large-scale settings. We propose an efficient approximation procedure based on the Nyström method, which exploits a small random subset of the dataset. Our main result is an upper bound on the approximation error of this procedure. It yields sufficient conditions on the subsample size to obtain the standard $n^{-1/2}$ rate while reducing computational costs. We discuss applications of this result to the approximation of the maximum mean discrepancy and quadrature rules, and illustrate our theoretical findings with numerical experiments.
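A minimal sketch of the Nyström idea for mean embeddings, assuming a Gaussian kernel and uniform subsampling of landmarks (both our choices): each point is mapped to a finite feature vector whose inner products approximate the kernel, so mean embeddings and hence the MMD can be approximated with cost governed by the landmark count.

```python
import numpy as np

def nystrom_features(data, landmarks, bw=1.0):
    """Nystrom feature map phi(x) = K_ww^{-1/2} k_w(x), built from a small
    subset of landmarks w, so that <phi(x), phi(y)> approximates k(x, y)."""
    def gram(A, B):
        sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
        return np.exp(-sq / (2 * bw**2))
    Kww = gram(landmarks, landmarks)
    vals, vecs = np.linalg.eigh(Kww)
    vals = np.maximum(vals, 1e-8)            # guard against round-off
    inv_sqrt = vecs @ np.diag(vals**-0.5) @ vecs.T
    return gram(data, landmarks) @ inv_sqrt   # (n, s) feature matrix

def nystrom_mmd(X, Y, s=50, seed=0, bw=1.0):
    """Approximate MMD as the distance between Nystrom mean embeddings,
    using s landmarks drawn uniformly from the pooled sample."""
    rng = np.random.default_rng(seed)
    Z = np.vstack([X, Y])
    W = Z[rng.choice(len(Z), size=s, replace=False)]
    mu_x = nystrom_features(X, W, bw).mean(0)   # approximate mean embedding
    mu_y = nystrom_features(Y, W, bw).mean(0)
    return np.linalg.norm(mu_x - mu_y)
```

The paper's result bounds the error of such approximations and gives conditions on the subsample size s under which the overall $n^{-1/2}$ statistical rate is preserved.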
All famous machine learning algorithms that comprise both supervised and semi-supervised learning work well only under a common assumption: the training and test data follow the same distribution. When the distribution changes, most statistical models must be reconstructed from newly collected data, which for some applications can be costly or impossible to obtain. Therefore, it became necessary to develop approaches that reduce the need and the effort of obtaining new labeled samples by exploiting data available in related areas, and using it further in similar fields. This has given rise to a new machine learning framework called transfer learning: a learning setting inspired by the capability of a human being to extrapolate knowledge across tasks to learn more efficiently. Despite a large amount of different transfer learning scenarios, the main objective of this survey is to provide an overview of the state-of-the-art theoretical results in a specific, and arguably the most popular, subfield of transfer learning, called domain adaptation. In this subfield, the data distribution is assumed to change across the training and the test data, while the learning task remains the same. We provide a first up-to-date description of existing results related to the domain adaptation problem that cover learning bounds based on different statistical learning frameworks.
We investigate properties of goodness-of-fit tests based on the Kernel Stein Discrepancy (KSD). We introduce a strategy to construct a test, called KSDAgg, which aggregates multiple tests with different kernels. KSDAgg avoids splitting the data to perform kernel selection (which leads to a loss in test power), and rather maximizes the test power over a collection of kernels. We provide theoretical guarantees on the power of KSDAgg: we show it achieves the smallest separation rate over the collection, up to a logarithmic term. KSDAgg can be computed exactly in practice, as it relies either on a parametric bootstrap or on a wild bootstrap to estimate the quantiles and the level corrections. In particular, for the crucial choice of the bandwidth of a fixed kernel, it avoids resorting to arbitrary heuristics (such as median or standard deviation) or to data splitting. We find on both synthetic and real-world data that KSDAgg outperforms other adaptive KSD-based goodness-of-fit testing procedures.