Independence testing is a fundamental and classical statistical problem that has been extensively studied in the batch setting, where the sample size is fixed before collecting data. However, practitioners often prefer procedures that adapt to the complexity of the problem at hand instead of setting the sample size in advance. Ideally, such procedures should (a) allow stopping earlier on easy tasks (and later on harder tasks), hence making better use of available resources, and (b) continuously monitor the data and efficiently incorporate statistical evidence after collecting new data, while controlling the false alarm rate. It is well known that classical batch tests are not tailored for streaming data settings, since valid inference after data peeking requires correcting for multiple testing, but such corrections generally result in low power. In this paper, we design sequential kernelized independence tests (SKITs) that overcome such shortcomings based on the principle of testing by betting. We exemplify our broad framework using bets inspired by kernelized dependence measures such as the Hilbert-Schmidt independence criterion (HSIC) and the constrained-covariance criterion (COCO). Importantly, we also generalize the framework to non-i.i.d. time-varying settings, for which there exist no batch tests. We demonstrate the power of our approaches on both simulated and real data.
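As a concrete illustration of the testing-by-betting template behind SKIT, here is a minimal hedged sketch. The payoff below uses a fixed, hand-picked witness function and a fixed betting fraction `lam`, both of which are assumptions of this sketch (the paper's bets are HSIC/COCO-inspired and learned from past data). The key property is that, under independence, swapping the two Y's within a fresh pair of observations leaves the joint distribution unchanged, so the payoff has conditional mean zero, the wealth process is a nonnegative martingale, and rejecting once the wealth reaches $1/\alpha$ gives a level-$\alpha$ sequential test by Ville's inequality.

```python
import numpy as np

def payoff(x_pair, y_pair, witness):
    """Payoff with mean zero under H0 (X independent of Y): swapping the two
    Y's within a fresh pair of observations leaves the joint distribution
    unchanged under H0, so the difference below is symmetric about zero;
    tanh keeps the payoff in (-1, 1)."""
    (x1, x2), (y1, y2) = x_pair, y_pair
    delta = witness(x1, y1) + witness(x2, y2) - witness(x1, y2) - witness(x2, y1)
    return np.tanh(delta)

def sequential_independence_test(xs, ys, witness, alpha=0.05, lam=0.5):
    """Betting-style sequential test: reject as soon as the wealth reaches 1/alpha."""
    wealth = 1.0
    for t in range(0, len(xs) - 1, 2):              # consume observations in pairs
        f = payoff((xs[t], xs[t + 1]), (ys[t], ys[t + 1]), witness)
        wealth *= 1.0 + lam * f                     # stays positive since |lam * f| < 1
        if wealth >= 1.0 / alpha:                   # Ville's inequality => level alpha
            return "reject H0", t + 2
    return "fail to reject H0", len(xs)

# toy example with a hand-picked witness g(x, y) = x * y
rng = np.random.default_rng(0)
x = rng.normal(size=2000)
y = np.sign(x) * np.abs(rng.normal(size=2000))      # Y depends on X through its sign
print(sequential_independence_test(x, y, witness=lambda a, b: a * b))
```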
Classical asymptotic theory for statistical inference usually involves calibrating a statistic by fixing the dimension $d$ while letting the sample size $n$ increase to infinity. Recently, much effort has been dedicated towards understanding how these methods behave in high-dimensional settings, where $d$ and $n$ both increase to infinity together. This often leads to different inference procedures, depending on the assumptions about the dimensionality, leaving the practitioner in a bind: given a dataset with 100 samples in 20 dimensions, should they calibrate by assuming $n \gg d$, or $d/n \approx 0.2$? This paper considers the goal of dimension-agnostic inference: developing methods whose validity does not depend on any assumption on $d$ versus $n$. We introduce an approach that uses variational representations of existing test statistics along with sample splitting and self-normalization to produce a new test statistic with a Gaussian limiting distribution, regardless of how $d$ scales with $n$. The resulting statistic can be viewed as a careful modification of degenerate U-statistics, dropping diagonal blocks and retaining off-diagonal blocks. We exemplify our technique for some classical problems including one-sample mean and covariance testing, and show that our tests have minimax rate-optimal power against appropriate local alternatives. In most settings, our cross U-statistic matches the high-dimensional power of the corresponding (degenerate) U-statistic up to a $\sqrt{2}$ factor.
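As a rough illustration of the "drop the diagonal blocks" idea for the one-sample mean problem, here is a hedged sketch: project the second half of the data onto the mean direction estimated from the first half and studentize, so that the statistic is asymptotically standard Gaussian under the null for any $d$. This is only a plausible minimal instantiation of the cross U-statistic, not the paper's exact construction.

```python
import numpy as np
from scipy.stats import norm

def cross_mean_test(X, alpha=0.05):
    """Dimension-agnostic sketch of a one-sample mean test, H0: E[X] = 0.
    Split the sample; project the second half onto the mean direction estimated
    from the first half; studentize.  Under H0 the terms h_i are i.i.d. with
    mean zero given the first half, so the statistic is asymptotically N(0, 1)."""
    n = X.shape[0]
    X1, X2 = X[: n // 2], X[n // 2 :]
    direction = X1.mean(axis=0)                     # estimated mean direction (half 1)
    h = X2 @ direction                              # cross terms (half 2)
    stat = np.sqrt(len(h)) * h.mean() / h.std(ddof=1)
    return stat, stat > norm.ppf(1 - alpha)         # one-sided Gaussian calibration

rng = np.random.default_rng(1)
print(cross_mean_test(rng.normal(0.1, 1.0, size=(500, 50))))   # nonzero mean: reject
print(cross_mean_test(rng.normal(0.0, 1.0, size=(500, 50))))   # null: rarely rejects
```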
In nonparametric independence testing, we observe i.i.d.\ data $\{(X_i,Y_i)\}_{i=1}^n$, where $X \in \mathcal{X}, Y \in \mathcal{Y}$ lie in any general spaces, and we wish to test the null that $X$ is independent of $Y$. Modern test statistics such as the kernel Hilbert-Schmidt Independence Criterion (HSIC) and Distance Covariance (dCov) have intractable null distributions due to the degeneracy of the underlying U-statistics. Thus, in practice, one often resorts to using permutation testing, which provides a nonasymptotic guarantee at the expense of recalculating the quadratic-time statistics (say) a few hundred times. This paper provides a simple but nontrivial modification of HSIC and dCov (called xHSIC and xdCov, pronounced ``cross'' HSIC/dCov) so that they have a limiting Gaussian distribution under the null, and thus do not require permutations. This requires building on the newly developed theory of cross U-statistics by Kim and Ramdas (2020), and in particular developing several nontrivial extensions of the theory in Shekhar et al. (2022), which developed an analogous permutation-free kernel two-sample test. We show that our new tests, like the originals, are consistent against fixed alternatives, and minimax rate optimal against smooth local alternatives. Numerical simulations demonstrate that compared to the full dCov or HSIC, our variants have the same power up to a $\sqrt 2$ factor, giving practitioners a new option for large problems or data-analysis pipelines where computation, not sample size, could be the bottleneck.
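For context, here is a hedged sketch of the permutation-based HSIC baseline whose per-permutation quadratic cost xHSIC is designed to remove; the Gaussian kernel and the median-heuristic bandwidth are assumed choices, and the samples are taken to be 2-D arrays.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def gaussian_gram(Z, bandwidth=None):
    """Gaussian-kernel Gram matrix; median-heuristic bandwidth unless given."""
    D = squareform(pdist(Z, "sqeuclidean"))
    if bandwidth is None:
        bandwidth = np.sqrt(np.median(D[D > 0]) / 2)
    return np.exp(-D / (2 * bandwidth ** 2))

def hsic_permutation_test(X, Y, alpha=0.05, n_perm=200, seed=0):
    """Biased HSIC statistic trace(KHLH)/n^2, calibrated by permuting Y.
    Every permutation costs quadratic time in n, which is the overhead that a
    permutation-free variant such as xHSIC aims to remove."""
    rng = np.random.default_rng(seed)
    K, L = gaussian_gram(X), gaussian_gram(Y)
    n = len(X)
    H = np.eye(n) - np.ones((n, n)) / n
    Kc = H @ K @ H                                   # center K once
    stat = np.sum(Kc * L) / n ** 2                   # equals trace(K H L H) / n^2
    null_stats = [np.sum(Kc * L[np.ix_(p, p)]) / n ** 2
                  for p in (rng.permutation(n) for _ in range(n_perm))]
    p_value = (1 + sum(s >= stat for s in null_stats)) / (1 + n_perm)
    return stat, p_value, p_value <= alpha
```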
We propose a framework for analyzing and comparing distributions, which we use to construct statistical tests to determine if two samples are drawn from different distributions. Our test statistic is the largest difference in expectations over functions in the unit ball of a reproducing kernel Hilbert space (RKHS), and is called the maximum mean discrepancy (MMD). We present two distribution-free tests based on large deviation bounds for the MMD, and a third test based on the asymptotic distribution of this statistic. The MMD can be computed in quadratic time, although efficient linear-time approximations are available. Our statistic is an instance of an integral probability metric, and various classical metrics on distributions are obtained when alternative function classes are used in place of an RKHS. We apply our two-sample tests to a variety of problems, including attribute matching for databases using the Hungarian marriage method, where they perform strongly. Excellent performance is also obtained when comparing distributions over graphs, for which these are the first such tests.
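Below is a hedged sketch of the quadratic-time unbiased estimator of MMD$^2$ described above, using an assumed Gaussian kernel and bandwidth rather than any particular choice from the paper.

```python
import numpy as np
from scipy.spatial.distance import cdist

def rbf(A, B, bandwidth=1.0):
    """Gaussian kernel matrix between the rows of A and B."""
    return np.exp(-cdist(A, B, "sqeuclidean") / (2 * bandwidth ** 2))

def mmd2_unbiased(X, Y, bandwidth=1.0):
    """Quadratic-time unbiased estimate of MMD^2 between samples X ~ P and Y ~ Q."""
    Kxx, Kyy, Kxy = rbf(X, X, bandwidth), rbf(Y, Y, bandwidth), rbf(X, Y, bandwidth)
    m, n = len(X), len(Y)
    term_x = (Kxx.sum() - np.trace(Kxx)) / (m * (m - 1))   # drop the diagonal terms
    term_y = (Kyy.sum() - np.trace(Kyy)) / (n * (n - 1))
    return term_x + term_y - 2.0 * Kxy.mean()

rng = np.random.default_rng(0)
X, Y = rng.normal(0.0, 1.0, (300, 2)), rng.normal(0.5, 1.0, (300, 2))
print(mmd2_unbiased(X, Y))                               # clearly positive: P != Q
print(mmd2_unbiased(X, rng.normal(0.0, 1.0, (300, 2))))  # near zero: P = Q
```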
Testing the significance of a variable or group of variables $X$ for predicting a response $Y$, given additional covariates $Z$, is a ubiquitous task in statistics. A simple but common approach is to specify a linear model, and then test whether the regression coefficient for $X$ is non-zero. However, when the model is misspecified, the test may have poor power, for example when $X$ is involved in complex interactions, or lead to many false rejections. In this work we study the problem of testing the model-free null of conditional mean independence, i.e. that the conditional mean of $Y$ given $X$ and $Z$ does not depend on $X$. We propose a simple and general framework that can leverage flexible nonparametric or machine learning methods, such as additive models or random forests, to yield both robust error control and high power. The procedure involves using these methods to perform regressions, first to estimate a form of projection of $Y$ on $X$ and $Z$ using one half of the data, and then to estimate the expected conditional covariance between this projection and $Y$ on the remaining half of the data. While the approach is general, we show that a version of our procedure using spline regression achieves what we show is the minimax optimal rate in this nonparametric testing problem. Numerical experiments demonstrate the effectiveness of our approach both in terms of maintaining Type I error control, and power, compared to several existing approaches.
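One plausible reading of this two-stage recipe is sketched below, with random forests standing in for the flexible regression method; the particular nested regressions, the out-of-fold predictions, and the studentization are illustrative assumptions rather than the paper's exact procedure, and `X`, `Z` are assumed to be 2-D arrays with `Y` a 1-D response.

```python
import numpy as np
from scipy.stats import norm
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_predict

def conditional_mean_independence_test(X, Z, Y, alpha=0.05, seed=0):
    """Sketch of a test of H0: E[Y | X, Z] does not depend on X.
    Half 1: learn a projection fhat(x, z) ~ E[Y | X, Z].
    Half 2: estimate the expected conditional covariance between fhat(X, Z) and Y
    given Z via out-of-fold residual products, then studentize."""
    n = len(Y)
    idx = np.random.default_rng(seed).permutation(n)
    a, b = idx[: n // 2], idx[n // 2 :]

    f = RandomForestRegressor(random_state=seed).fit(np.c_[X[a], Z[a]], Y[a])
    fb = f.predict(np.c_[X[b], Z[b]])                     # projection on half 2

    g_hat = cross_val_predict(RandomForestRegressor(random_state=seed), Z[b], fb, cv=5)
    h_hat = cross_val_predict(RandomForestRegressor(random_state=seed), Z[b], Y[b], cv=5)
    R = (fb - g_hat) * (Y[b] - h_hat)                     # residual products

    stat = np.sqrt(len(R)) * R.mean() / R.std(ddof=1)     # studentized mean
    return stat, stat > norm.ppf(1 - alpha)
```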
We propose a novel nonparametric two-sample test based on the maximum mean discrepancy (MMD), constructed by aggregating tests with different kernel bandwidths. This aggregation procedure, called MMDAgg, ensures that test power is maximized over the collection of kernels used, without requiring held-out data for kernel selection (which leads to a loss in test power) or arbitrary kernel choices such as the median heuristic. We work in a non-asymptotic framework and show that our aggregated test is minimax adaptive over Sobolev balls. Our guarantees are not restricted to a specific kernel but hold for any product of one-dimensional translation-invariant characteristic kernels that are absolutely integrable. Moreover, our results apply to popular numerical procedures for determining the test threshold, namely permutations and the wild bootstrap. Through numerical experiments on synthetic and real-world datasets, we demonstrate that MMDAgg outperforms alternative approaches to MMD kernel adaptation for two-sample testing.
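A simplified, hedged sketch of the bandwidth-aggregation idea: run a permutation MMD test per bandwidth and combine the tests with a conservative Bonferroni correction. MMDAgg itself uses a less conservative, power-optimized adjustment of the per-kernel levels, so this is only an illustration of aggregation, not the paper's procedure; the Gaussian kernel and the bandwidth grid (e.g. `bonferroni_mmd_agg(X, Y, bandwidths=[0.5, 1.0, 2.0])`) are assumptions.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

def mmd2_biased(K, n):
    """Biased MMD^2 from the Gram matrix K of the pooled sample [X; Y], |X| = |Y| = n."""
    Kxx, Kyy, Kxy = K[:n, :n], K[n:, n:], K[:n, n:]
    return Kxx.mean() + Kyy.mean() - 2.0 * Kxy.mean()

def bonferroni_mmd_agg(X, Y, bandwidths, alpha=0.05, n_perm=200, seed=0):
    """Run one permutation MMD test per bandwidth and reject if any of them is
    significant at level alpha / len(bandwidths).  This Bonferroni combination is
    valid but more conservative than MMDAgg's adjusted per-kernel levels."""
    rng = np.random.default_rng(seed)
    Z, n = np.vstack([X, Y]), len(X)
    perms = [rng.permutation(2 * n) for _ in range(n_perm)]
    for bw in bandwidths:
        K = rbf_kernel(Z, gamma=1.0 / (2 * bw ** 2))      # Gaussian kernel, bandwidth bw
        stat = mmd2_biased(K, n)
        null = [mmd2_biased(K[np.ix_(p, p)], n) for p in perms]
        p_val = (1 + sum(s >= stat for s in null)) / (1 + n_perm)
        if p_val <= alpha / len(bandwidths):
            return True                                    # reject H0: P = Q
    return False
```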
The kernel Maximum Mean Discrepancy (MMD) is a popular multivariate distance metric between distributions that has found utility in two-sample testing. The usual kernel-MMD test statistic is a degenerate U-statistic under the null, and thus it has an intractable limiting distribution. Hence, to design a level-$\alpha$ test, one usually selects the rejection threshold as the $(1-\alpha)$-quantile of the permutation distribution. The resulting nonparametric test has finite-sample validity but suffers from large computational cost, since every permutation takes quadratic time. We propose the cross-MMD, a new quadratic-time MMD test statistic based on sample-splitting and studentization. We prove that under mild assumptions, the cross-MMD has a limiting standard Gaussian distribution under the null. Importantly, we also show that the resulting test is consistent against any fixed alternative, and when using the Gaussian kernel, it has minimax rate-optimal power against local alternatives. For large sample sizes, our new cross-MMD provides a significant speedup over the MMD, for only a slight loss in power.
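A hedged sketch of the sample-splitting-plus-studentization idea, assuming equal sample sizes and an assumed RBF kernel parameter; the paper's exact cross-MMD statistic may differ in its details, but the structure is the same: evaluate a witness built from one half of the data on the other half, studentize, and compare to a standard Gaussian quantile with no permutations.

```python
import numpy as np
from scipy.stats import norm
from sklearn.metrics.pairwise import rbf_kernel

def cross_mmd_test(X, Y, alpha=0.05, gamma=0.5):
    """Permutation-free two-sample test sketch: sample splitting + studentization.
    Assumes len(X) == len(Y); gamma is an assumed RBF kernel parameter."""
    n = len(X)
    X1, X2, Y1, Y2 = X[: n // 2], X[n // 2 :], Y[: n // 2], Y[n // 2 :]
    # witness-style evaluations of the first halves against the second halves
    u = rbf_kernel(X1, X2, gamma=gamma).mean(1) - rbf_kernel(X1, Y2, gamma=gamma).mean(1)
    v = rbf_kernel(Y1, X2, gamma=gamma).mean(1) - rbf_kernel(Y1, Y2, gamma=gamma).mean(1)
    h = u - v                                  # i.i.d. with mean zero under H0, given half 2
    stat = np.sqrt(len(h)) * h.mean() / h.std(ddof=1)
    return stat, stat > norm.ppf(1 - alpha)    # standard Gaussian calibration, no permutations
```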
Kernel-based tests provide a simple yet effective framework that uses the theory of reproducing kernel Hilbert spaces to design nonparametric testing procedures. In this paper, we propose new theoretical tools that can be used to study the asymptotic behaviour of kernel-based tests in several data scenarios and for many different testing problems. Unlike current approaches, our methods avoid the lengthy $U$- and $V$-statistic expansions and limit theorems that commonly appear in the literature, and work directly with random functionals on Hilbert spaces. Consequently, our framework leads to a simple and clear analysis of kernel tests, requiring only mild regularity conditions. Moreover, we show that our analysis can in general be improved by showing that the regularity conditions required by our methods are both sufficient and necessary. To illustrate the effectiveness of our approach, we present a new kernel test for the conditional independence testing problem, as well as new analyses of known kernel-based tests.
We introduce a general nonparametric test of independence between right-censored survival times and covariates, which may be multivariate. Our test statistic has a dual interpretation: first, as the supremum of a potentially infinite collection of weight-indexed log-rank tests, with weight functions belonging to a reproducing kernel Hilbert space (RKHS); and second, as the norm of the difference of embeddings of certain finite measures, analogous to the Hilbert-Schmidt independence criterion (HSIC) test statistic. We study the asymptotic properties of the test and find sufficient conditions ensuring that our test correctly rejects the null hypothesis under any alternative. The test statistic can be computed straightforwardly, and the rejection threshold is obtained via an asymptotically valid wild bootstrap procedure. Extensive investigations on simulated and real data suggest that our testing procedure generally performs better than competing approaches at detecting complex nonlinear dependence.
Causal inference grows increasingly complex as the number of confounders increases. Given treatments $X$, confounders $Z$, and outcomes $Y$, we develop a nonparametric method to test the \textit{do-null} hypothesis $H_0:\; p(y \mid \text{do}(X=x)) = p(y)$ against the general alternative. Building on the Hilbert-Schmidt Independence Criterion (HSIC) for marginal independence testing, we propose backdoor-HSIC (BD-HSIC) and demonstrate that it is calibrated and has power under a large number of confounders, for both binary and continuous treatments. Additionally, we establish convergence properties of the estimators of the covariance operators used in BD-HSIC. We investigate the advantages and disadvantages of BD-HSIC relative to parametric tests, as well as the importance of using the do-null testing in contrast to marginal independence testing or conditional independence testing. A complete implementation is available at \hyperlink{https://github.com/mrhuff/kgformula}{\texttt{https://github.com/mrhuff/kgformula}}.
This paper derives confidence intervals (CIs) and time-uniform confidence sequences (CSs) for the classical problem of estimating an unknown mean from bounded observations. We present a general approach for deriving concentration bounds that can be seen as a generalization (and improvement) of the celebrated Chernoff method. At its heart, it is based on deriving a new class of composite nonnegative martingales, with strong connections to testing by betting and the method of mixtures. We show how to extend these ideas to sampling without replacement, another heavily studied problem. In all cases, our bounds adapt to the unknown variance, and empirically vastly outperform existing approaches based on Hoeffding or empirical Bernstein inequalities and their recent supermartingale generalizations. In short, we establish a new state of the art for four fundamental problems: CSs and CIs for bounded means, when sampling with or without replacement.
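A minimal sketch of the betting construction for $[0,1]$-valued observations, using fixed bets of size $\pm\lambda$ rather than the variance-adaptive bets developed in the paper (so it is illustrative, not the paper's method): for each candidate mean $m$, the hedged capital process below is a nonnegative martingale when $m$ is the true mean, so by Ville's inequality the running set $\{m : K_t(m) < 1/\alpha\}$ is a time-uniform confidence sequence.

```python
import numpy as np

def betting_confidence_sequence(xs, alpha=0.05, grid_size=1000, lam=0.5):
    """Time-uniform confidence sequence for the mean of [0, 1]-valued observations.
    For each candidate mean m, the hedged capital
        K_t(m) = 0.5 * prod(1 + lam * (x_i - m)) + 0.5 * prod(1 - lam * (x_i - m))
    is a nonnegative martingale when m is the true mean, so Ville's inequality
    implies that {m : K_t(m) < 1/alpha} covers the truth uniformly over time."""
    grid = np.linspace(0.0, 1.0, grid_size)
    cap_plus = np.ones(grid_size)
    cap_minus = np.ones(grid_size)
    for x in xs:
        cap_plus *= 1.0 + lam * (x - grid)      # factors stay positive for lam <= 1/2
        cap_minus *= 1.0 - lam * (x - grid)
    keep = 0.5 * cap_plus + 0.5 * cap_minus < 1.0 / alpha
    return grid[keep].min(), grid[keep].max()   # current confidence interval

rng = np.random.default_rng(0)
print(betting_confidence_sequence(rng.beta(2, 5, size=2000)))   # true mean is 2/7
```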
When deployed in the real world, machine learning models inevitably encounter changes in the data distribution, and certain -- but not all -- distribution shifts can cause significant performance degradation. In practice, it may make sense to ignore benign shifts, under which the performance of the deployed model does not degrade substantially, making interventions by a human expert (or model retraining) unnecessary. While several works have developed tests for distribution shift, these typically either use non-sequential methods, or detect arbitrary shifts (benign or harmful), or both. We argue that a sensible method for firing off a warning (a) detects harmful shifts while ignoring benign ones, and (b) allows continuous monitoring of model performance without increasing the false alarm rate. In this work, we design simple sequential tools for testing whether the difference between the source (training) and target (test) distributions leads to a significant increase in a risk function of interest, such as accuracy or calibration. Recent advances in constructing time-uniform confidence sequences allow efficient aggregation of the statistical evidence accumulated during the tracking process. The designed framework is applicable in settings where (some) true labels are revealed after predictions are made, or when batches of labels become available in a delayed fashion. We demonstrate the efficacy of the proposed framework through an extensive empirical study on a collection of simulated and real datasets.
Consider two or more forecasters, each making a sequence of predictions for different events over time. We ask a relatively basic question: how might we compare these forecasters, either online or post hoc, while avoiding unverifiable assumptions on how the forecasts or outcomes were generated? This work presents a novel answer to this question. We design a sequential inference procedure for estimating the time-varying difference in forecast quality, as measured by a relatively large class of proper scoring rules (bounded scores with a linear equivalent). The resulting confidence intervals are nonasymptotically valid and can be continuously monitored to yield statistically valid comparisons at arbitrary data-dependent stopping times ("anytime-valid"); this is achieved by adapting variance-adaptive supermartingales, confidence sequences, and e-processes. Our coverage guarantees are also distribution-free, in the spirit of the game-theoretic probability of Shafer and Vovk, so they make no distributional assumptions on the forecasts or outcomes. In sharp contrast to the recent work of Henzi and Ziegel, our tools can sequentially test a weak null hypothesis about whether one forecaster outperforms the other on average over time. We demonstrate their effectiveness by comparing forecasts on Major League Baseball (MLB) games and forecasts from statistical postprocessing methods.
In this paper, we propose a multiple kernel testing procedure for inference when several factors (e.g., different treatment groups, gender, medical history) and their interactions are of interest simultaneously. Our method is able to deal with complex data and can be seen as an alternative to the omnipresent Cox model when assumptions such as proportionality cannot be justified. Our methodology combines well-known concepts from survival analysis, machine learning, and multiple testing: weighted log-rank tests, kernel methods, and multiple contrast tests. In this way, complex hazard alternatives beyond the classical proportional hazards setting can be detected. Moreover, multiple comparisons are performed by fully exploiting the dependence structure of the individual testing procedures so as to avoid a loss of power. Altogether, this provides a flexible and powerful procedure for factorial survival designs, whose theoretical validity is proven by martingale arguments and the theory of $V$-statistics. We evaluate the performance of our method in an extensive simulation study and illustrate it with a real data analysis.
We address the problem of causal effect estimation in the presence of unobserved confounding, but where proxies for the latent confounders are observed. In this setting, we propose two kernel-based methods for nonlinear causal effect estimation: (a) a two-stage regression approach, and (b) a maximum moment restriction approach. We focus on the proximal causal learning setting, but our methods can be used to solve a wider class of inverse problems characterized by a Fredholm integral equation. In particular, we provide a unified view of the two-stage and moment restriction approaches for solving this problem in the nonlinear setting. We provide consistency guarantees for each algorithm, and demonstrate that these approaches achieve competitive results on synthetic data and on data simulating a real-world task. In particular, our approach outperforms earlier methods that are not suited to leveraging proxy variables.
We propose a series of computationally efficient, nonparametric tests for the two-sample, independence, and goodness-of-fit problems, using the maximum mean discrepancy (MMD), the Hilbert-Schmidt independence criterion (HSIC), and the kernel Stein discrepancy (KSD), respectively. Our test statistics are incomplete $U$-statistics, whose computational cost interpolates between linear time in the number of samples and the quadratic time associated with classical $U$-statistic tests. The three proposed tests aggregate over several kernel bandwidths to detect departures at various scales: we call the resulting tests MMDAggInc, HSICAggInc, and KSDAggInc. For the test thresholds, we derive a quantile bound for wild-bootstrapped incomplete $U$-statistics, which is of independent interest. We derive uniform separation rates for MMDAggInc and HSICAggInc, and quantify exactly the trade-off between computational efficiency and the attainable rates: to our knowledge, these results are the first of their kind for tests based on incomplete $U$-statistics. We further show that, in the quadratic-time case, the wild bootstrap incurs no loss of test power over the more widespread permutation-based approach, since both attain the same minimax optimal rates (which in turn match the rates obtained using oracle quantiles). We support our claims with numerical experiments on the trade-off between computational efficiency and test power. Across the three testing frameworks, we observe that our proposed linear-time aggregated tests obtain higher power than current state-of-the-art linear-time kernel tests.
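A hedged sketch of the incomplete-$U$-statistic idea for the MMD case: average the $U$-statistic core over a random subset of index pairs, so that `n_pairs` controls the interpolation between linear and quadratic cost. The Gaussian kernel and bandwidth are assumed choices, the samples are 2-D arrays of equal size, and the bandwidth aggregation and wild-bootstrap thresholds used by MMDAggInc are not shown here.

```python
import numpy as np

def rbf_pairs(A, B, bandwidth=1.0):
    """Elementwise Gaussian kernel values k(a_r, b_r) for row-aligned 2-D arrays."""
    return np.exp(-np.sum((A - B) ** 2, axis=1) / (2 * bandwidth ** 2))

def incomplete_mmd2(X, Y, n_pairs=5000, bandwidth=1.0, seed=0):
    """Incomplete U-statistic estimate of MMD^2 from equally sized samples X ~ P, Y ~ Q:
    the U-statistic core is averaged over n_pairs random index pairs instead of all
    ~n^2/2 of them, so n_pairs interpolates between linear and quadratic cost."""
    rng = np.random.default_rng(seed)
    n = min(len(X), len(Y))
    i = rng.integers(0, n, size=n_pairs)
    j = rng.integers(0, n, size=n_pairs)
    i, j = i[i != j], j[i != j]                    # U-statistic cores exclude i == j
    # core h(z_i, z_j) = k(x_i, x_j) + k(y_i, y_j) - k(x_i, y_j) - k(x_j, y_i)
    h = (rbf_pairs(X[i], X[j], bandwidth) + rbf_pairs(Y[i], Y[j], bandwidth)
         - rbf_pairs(X[i], Y[j], bandwidth) - rbf_pairs(X[j], Y[i], bandwidth))
    return h.mean()
```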
Optimal transport (OT) and its entropy-regularized offspring have recently gained a lot of attention in the machine learning and AI domains. In particular, optimal transport has been used to develop probability metrics between probability distributions. In this paper we introduce an independence criterion based on entropy-regularized optimal transport. Our criterion can be used to test for independence between two samples. We establish non-asymptotic bounds for our test statistic and study its statistical behavior under both the null and the alternative hypothesis. Our theoretical results involve tools from U-process theory and optimal transport theory. We present experimental results on existing benchmarks, illustrating the interest of the proposed criterion.
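A loose sketch of how an entropic-OT-based independence statistic could look: compute the Sinkhorn (entropy-regularized OT) cost between the paired sample $\{(x_i, y_i)\}$ and a decoupled copy $\{(x_i, y_{\sigma(i)})\}$ standing in for the product of marginals, so that the cost tends to be larger under dependence. Both the decoupling device and the regularization parameter `eps` are assumptions of this sketch, not the paper's exact criterion or its calibration.

```python
import numpy as np

def sinkhorn_cost(A, B, eps=0.5, n_iter=300):
    """Entropy-regularized OT cost between two uniform empirical measures,
    computed with plain Sinkhorn fixed-point iterations on the Gibbs kernel."""
    C = np.sum((A[:, None, :] - B[None, :, :]) ** 2, axis=-1)   # squared-Euclidean costs
    K = np.exp(-C / eps)
    a = np.full(len(A), 1.0 / len(A))
    b = np.full(len(B), 1.0 / len(B))
    v = np.ones(len(B))
    for _ in range(n_iter):
        u = a / (K @ v)
        v = b / (K.T @ u)
    P = u[:, None] * K * v[None, :]                             # entropic transport plan
    return float(np.sum(P * C))

def eot_independence_statistic(x, y, eps=0.5, seed=0):
    """Entropic OT cost between the paired sample {(x_i, y_i)} and a decoupled copy
    {(x_i, y_sigma(i))} standing in for the product of marginals; the cost tends to
    be larger when x and y are dependent."""
    rng = np.random.default_rng(seed)
    joint = np.column_stack([x, y])
    decoupled = np.column_stack([x, y[rng.permutation(len(y))]])
    return sinkhorn_cost(joint, decoupled, eps=eps)
```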
Despite the ubiquity of U-statistics in modern probability and statistics, their non-asymptotic analysis in a dependent framework may have been overlooked. In recent work, a new concentration inequality for U-statistics of uniformly ergodic Markov chains was established. In this paper, we put this theoretical breakthrough into practice by pushing further the current state of knowledge in three different directions. First, we establish a new exponential inequality for the estimation of the spectrum of trace-class integral operators with MCMC methods. Notably, this result holds for kernels with both positive and negative eigenvalues, which is new as far as we are aware. In addition, we investigate the generalization performance of online algorithms working with pairwise loss functions and Markov chain samples. We provide an online-to-batch conversion result by showing how to extract a low-risk hypothesis from the sequence of hypotheses generated by any online learner. We finally give a non-asymptotic analysis of a goodness-of-fit test for the density of the invariant measure of a Markov chain. We identify classes of alternatives over which the test based on the $L_2$ distance has prescribed power.
We propose a new measure of conditional dependence and a statistical test for conditional independence. The measure is based on the difference between analytic kernel embeddings of two suitably chosen distributions, evaluated at a finite set of locations. We obtain its asymptotic distribution under the null hypothesis of conditional independence and design a consistent statistical test from it. We conduct a series of experiments showing that our new test outperforms state-of-the-art methods in terms of both Type I and Type II errors, even in high-dimensional settings.
We present a unified technique for sequential estimation of convex divergences between distributions, including integral probability metrics such as the kernel maximum mean discrepancy, $\varphi$-divergences such as the Kullback-Leibler divergence, and optimal transport costs such as powers of the Wasserstein distance. This is achieved by observing that empirical convex divergences are (partially ordered) reverse submartingales with respect to the exchangeable filtration, coupled with maximal inequalities for such processes. These techniques appear to be powerful additions, complementary to the existing literature on both confidence sequences and convex divergences. We construct an offline-to-sequential device that converts a wide array of existing offline concentration inequalities into time-uniform confidence sequences that can be continuously monitored, yielding valid tests or confidence intervals at arbitrary stopping times. The resulting sequential bounds pay only an iterated-logarithm price over the corresponding fixed-time bounds, retaining the same dependence on problem parameters (such as dimension or alphabet size, if applicable). These results also apply to more general convex functionals, such as the negative differential entropy, suprema of empirical processes, and V-statistics.