Two-sample tests evaluate whether two samples are realizations of the same distribution (the null hypothesis) or of two different distributions (the alternative hypothesis). In the traditional formulation of this problem, the statistician has access to both the measurements (feature variables) and the group assignments (label variables). However, in several important applications the feature variables are easy to measure while the binary label variable is unknown and costly to obtain. In this paper, we consider this important variation on the classical two-sample testing problem and pose it as the problem of obtaining the labels of only a small number of samples in service of performing a two-sample test. We devise a label-efficient three-stage framework: first, a classifier is trained on uniformly labeled samples to model the posterior probabilities of the labels; second, an innovative query scheme dubbed \emph{bimodal query} is used to query the labels of the samples from both classes that have the largest posterior probabilities; and finally, the classical Friedman-Rafsky (FR) two-sample test is performed on the queried samples. Our theoretical analysis shows that, under reasonable conditions, the bimodal query is optimal for the FR test and the three-stage framework controls the Type I error. Extensive experiments on synthetic, benchmark, and application-specific datasets demonstrate that the three-stage framework achieves lower Type II error than uniform querying and certainty-based querying under the same label budget, while controlling the Type I error.
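To make the query scheme and the test statistic concrete, the sketch below assumes a pool of unlabeled feature vectors, a small uniformly labeled seed set, and a label budget; the function names and the use of scikit-learn/SciPy are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.spatial.distance import cdist
from scipy.sparse.csgraph import minimum_spanning_tree
from sklearn.linear_model import LogisticRegression

def bimodal_query(posteriors, budget):
    """Stage 2: query the samples with the largest posterior probability
    for each of the two classes (roughly budget//2 per class)."""
    order = np.argsort(posteriors)                          # ascending P(class 1 | x)
    return np.concatenate([order[:budget // 2],             # most confidently class 0
                           order[-(budget - budget // 2):]])  # most confidently class 1

def friedman_rafsky_stat(X, labels):
    """Stage 3: FR statistic = number of edges of the Euclidean minimum
    spanning tree joining points with different (queried) labels."""
    mst = minimum_spanning_tree(cdist(X, X)).tocoo()
    return int(np.sum(labels[mst.row] != labels[mst.col]))

# Stage 1: fit a posterior model on a small, uniformly labeled seed set.
# X_seed, y_seed, X_pool, query_labels(...) are placeholders in this sketch.
# clf = LogisticRegression(max_iter=1000).fit(X_seed, y_seed)
# picked = bimodal_query(clf.predict_proba(X_pool)[:, 1], budget=50)
# stat = friedman_rafsky_stat(X_pool[picked], query_labels(picked))
```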
The performance of a classifier is often measured by its average accuracy on test data. Despite being a standard measure, average accuracy fails to characterize how well the model fits the underlying conditional law of the label given the feature vector ($Y|X$), for example due to model misspecification, overfitting, or high dimensionality. In this paper, we consider the fundamental problem of assessing the goodness-of-fit of a general binary classifier. Our framework makes no parametric assumption on the conditional law $Y|X$ and treats it as a black-box oracle model that can only be accessed through queries. We formulate the goodness-of-fit assessment problem as a hypothesis test of the form \[ H_0: \mathbb{E}\big[D_f\big({\sf Bern}(\eta(X))\,\|\,{\sf Bern}(\hat{\eta}(X))\big)\big] \leq \tau\,, \] where $D_f$ denotes an $f$-divergence and $\eta(X)$, $\hat{\eta}(X)$ respectively denote the true and estimated probabilities of a positive label for the feature vector $X$. We propose a novel test, called \grasp, for testing $H_0$, which works in the finite-sample setting regardless of the distribution of the features (distribution-free). We also propose model-X \grasp, designed for the model-X setting in which the joint distribution of the feature vector is known; model-X \grasp uses this distributional information to achieve better power. We evaluate the performance of our tests through extensive numerical experiments.
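As a minimal illustration of the quantity appearing in $H_0$, the snippet below evaluates the KL instance of the $f$-divergence between two Bernoulli laws; it assumes direct access to $\eta$ and $\hat{\eta}$, whereas \grasp itself only observes labels drawn from ${\sf Bern}(\eta(X))$.

```python
import numpy as np

def bern_kl(eta, eta_hat, eps=1e-12):
    """KL divergence D_KL(Bern(eta) || Bern(eta_hat)), computed elementwise."""
    p = np.clip(eta, eps, 1 - eps)
    q = np.clip(eta_hat, eps, 1 - eps)
    return p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))

# With oracle access to eta, the population quantity in H_0 would be
# approximated by np.mean(bern_kl(eta, eta_hat)) and compared to tau.
# GRASP replaces this unobservable average with a distribution-free test
# that only uses queried labels Y ~ Bern(eta(X)).
```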
Classical asymptotic theory for statistical inference usually involves calibrating a statistic by fixing the dimension $d$ while letting the sample size $n$ increase to infinity. Recently, much effort has been dedicated towards understanding how these methods behave in high-dimensional settings, where $d$ and $n$ both increase to infinity together. This often leads to different inference procedures, depending on the assumptions about the dimensionality, leaving the practitioner in a bind: given a dataset with 100 samples in 20 dimensions, should they calibrate by assuming $n \gg d$, or $d/n \approx 0.2$? This paper considers the goal of dimension-agnostic inference; developing methods whose validity does not depend on any assumption on $d$ versus $n$. We introduce an approach that uses variational representations of existing test statistics along with sample splitting and self-normalization to produce a new test statistic with a Gaussian limiting distribution, regardless of how $d$ scales with $n$. The resulting statistic can be viewed as a careful modification of degenerate U-statistics, dropping diagonal blocks and retaining off-diagonal blocks. We exemplify our technique for some classical problems including one-sample mean and covariance testing, and show that our tests have minimax rate-optimal power against appropriate local alternatives. In most settings, our cross U-statistic matches the high-dimensional power of the corresponding (degenerate) U-statistic up to a $\sqrt{2}$ factor.
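For the one-sample mean-testing example, a minimal sketch of the sample-split, self-normalized statistic looks as follows (the names are ours; the statistic is compared to a standard Gaussian quantile regardless of how $d$ scales with $n$).

```python
import numpy as np
from scipy.stats import norm

def cross_mean_test(X, alpha=0.05):
    """Dimension-agnostic test of H0: E[X] = 0 via sample splitting.

    The first half estimates a direction (its sample mean); the second half
    is projected onto that direction and the projections are studentized,
    giving a statistic that is asymptotically N(0,1) under H0."""
    n = X.shape[0]
    X1, X2 = X[: n // 2], X[n // 2 :]
    f = X2 @ X1.mean(axis=0)                       # one scalar per held-out point
    stat = np.sqrt(len(f)) * f.mean() / f.std(ddof=1)
    return stat, stat > norm.ppf(1 - alpha)        # one-sided rejection decision
```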
The kernel Maximum Mean Discrepancy~(MMD) is a popular multivariate distance metric between distributions that has found utility in two-sample testing. The usual kernel-MMD test statistic is a degenerate U-statistic under the null, and thus it has an intractable limiting distribution. Hence, to design a level-$\alpha$ test, one usually selects the rejection threshold as the $(1-\alpha)$-quantile of the permutation distribution. The resulting nonparametric test has finite-sample validity but suffers from large computational cost, since every permutation takes quadratic time. We propose the cross-MMD, a new quadratic-time MMD test statistic based on sample-splitting and studentization. We prove that under mild assumptions, the cross-MMD has a limiting standard Gaussian distribution under the null. Importantly, we also show that the resulting test is consistent against any fixed alternative, and when using the Gaussian kernel, it has minimax rate-optimal power against local alternatives. For large sample sizes, our new cross-MMD provides a significant speedup over the MMD, for only a slight loss in power.
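A minimal sketch of the sample-split, studentized construction with a Gaussian kernel is given below (the kernel choice and bandwidth are our assumptions); the statistic is compared against a standard normal quantile instead of a permutation threshold.

```python
import numpy as np

def rbf(A, B, bw):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * bw ** 2))

def cross_mmd(X, Y, bw=1.0):
    """Sample-split, studentized MMD; approximately N(0,1) under the null."""
    n, m = len(X) // 2, len(Y) // 2
    X1, X2, Y1, Y2 = X[:n], X[n:], Y[:m], Y[m:]
    # Witness-style function fitted on the first halves, evaluated on the second.
    witness = lambda Z: rbf(Z, X1, bw).mean(1) - rbf(Z, Y1, bw).mean(1)
    u, v = witness(X2), witness(Y2)
    se = np.sqrt(u.var(ddof=1) / len(u) + v.var(ddof=1) / len(v))
    return (u.mean() - v.mean()) / se              # reject if larger than z_{1 - alpha}
```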
It is widely believed that given the same labeling budget, active learning algorithms like uncertainty sampling achieve better predictive performance than passive learning (i.e. uniform sampling), albeit at a higher computational cost. Recent empirical evidence suggests that this added cost might be in vain, as uncertainty sampling can sometimes perform even worse than passive learning. While existing works offer different explanations in the low-dimensional regime, this paper shows that the underlying mechanism is entirely different in high dimensions: we prove for logistic regression that passive learning outperforms uncertainty sampling even for noiseless data and when using the uncertainty of the Bayes optimal classifier. Insights from our proof indicate that this high-dimensional phenomenon is exacerbated when the separation between the classes is small. We corroborate this intuition with experiments on 20 high-dimensional datasets spanning a diverse range of applications, from finance and histology to chemistry and computer vision.
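For reference, the uncertainty-sampling baseline discussed here can be sketched as below for logistic regression (a hypothetical minimal loop, assuming the seed set contains both classes; passive learning would instead label a uniformly random subset of the same size).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def uncertainty_sampling(X, y, n_seed=10, budget=100, seed=0):
    """Iteratively label the pool point closest to the current decision boundary."""
    rng = np.random.default_rng(seed)
    labeled = list(rng.choice(len(X), n_seed, replace=False))
    pool = [i for i in range(len(X)) if i not in labeled]
    for _ in range(budget - n_seed):
        clf = LogisticRegression(max_iter=1000).fit(X[labeled], y[labeled])
        pick = pool.pop(int(np.argmin(np.abs(clf.decision_function(X[pool])))))
        labeled.append(pick)                       # query the most uncertain point
    return LogisticRegression(max_iter=1000).fit(X[labeled], y[labeled])
```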
Thanks to its outstanding empirical performance, random forests is one of the most widely used machine learning methods of the past decade. Yet, because of its black-box nature, the results of random forests can be hard to interpret in many big data applications. Quantifying the usefulness of individual features in random forests learning can greatly enhance its interpretability. Existing studies have shown that some popularly used feature importance measures for random forests suffer from bias issues, and comprehensive size and power analyses are lacking for most existing methods. In this paper, we approach the problem via hypothesis testing and propose a framework of the self-normalized feature-residual correlation test (FACT) for assessing the significance of a given feature in the random forest model, with bias-resistant properties, where the null hypothesis concerns whether the feature is conditionally independent of the response given all other features. Such an endeavor on random forests inference is empowered by recent developments on high-dimensional random forests consistency. The vanilla version of our FACT test can suffer from bias in the presence of feature dependence, so we exploit imbalancing and conditioning techniques for bias correction. We further incorporate the ensemble idea into the FACT statistic through feature transformations for enhanced power. Under a fairly general high-dimensional nonparametric model setting with dependent features, we formally establish, through nonasymptotic analyses, that FACT can provide theoretically justified random forest feature p-values and enjoys appealing power. The theoretical results and finite-sample advantages of the newly proposed method are illustrated with several simulation examples and an economic forecasting application related to COVID-19.
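A drastically simplified sketch of the feature-residual correlation idea follows (without FACT's bias correction, self-normalization, and ensemble enhancements; the function names and the use of scikit-learn are our assumptions).

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def feature_residual_correlation(X, y, j, n_trees=500, seed=0):
    """Correlate feature j with the out-of-bag residual of a random forest
    fitted on all remaining features; a large correlation suggests feature j
    carries signal about the response beyond the other features."""
    X_rest = np.delete(X, j, axis=1)
    rf = RandomForestRegressor(n_estimators=n_trees, oob_score=True,
                               random_state=seed).fit(X_rest, y)
    residual = y - rf.oob_prediction_              # out-of-bag residuals avoid overfitting
    return np.corrcoef(X[:, j], residual)[0, 1]
```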
We propose a framework for analyzing and comparing distributions, which we use to construct statistical tests to determine if two samples are drawn from different distributions. Our test statistic is the largest difference in expectations over functions in the unit ball of a reproducing kernel Hilbert space (RKHS), and is called the maximum mean discrepancy (MMD). We present two distribution-free tests based on large deviation bounds for the MMD, and a third test based on the asymptotic distribution of this statistic. The MMD can be computed in quadratic time, although efficient linear time approximations are available. Our statistic is an instance of an integral probability metric, and various classical metrics on distributions are obtained when alternative function classes are used in place of an RKHS. We apply our two-sample tests to a variety of problems, including attribute matching for databases using the Hungarian marriage method, where they perform strongly. Excellent performance is also obtained when comparing distributions over graphs, for which these are the first such tests.
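The quadratic-time unbiased estimator of the squared MMD can be written compactly; the sketch below uses a Gaussian kernel as an illustrative choice.

```python
import numpy as np

def rbf(A, B, bw=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * bw ** 2))

def mmd2_unbiased(X, Y, bw=1.0):
    """Quadratic-time unbiased estimate of MMD^2 between samples X and Y."""
    Kxx, Kyy, Kxy = rbf(X, X, bw), rbf(Y, Y, bw), rbf(X, Y, bw)
    n, m = len(X), len(Y)
    np.fill_diagonal(Kxx, 0.0)                     # drop i = j terms for unbiasedness
    np.fill_diagonal(Kyy, 0.0)
    return Kxx.sum() / (n * (n - 1)) + Kyy.sum() / (m * (m - 1)) - 2 * Kxy.mean()
```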
We propose a novel nonparametric two-sample test based on the Maximum Mean Discrepancy (MMD), constructed by aggregating tests with different kernel bandwidths. This aggregation procedure, called MMDAgg, ensures that test power is maximized over the collection of kernels used, without requiring held-out data for kernel selection (which would lead to a loss of test power) or arbitrary kernel choices such as the median heuristic. We work in the non-asymptotic framework and prove that our aggregated test is minimax adaptive over Sobolev balls. Our guarantees are not restricted to a specific kernel but hold for any product of one-dimensional translation-invariant characteristic kernels that are absolutely integrable. Moreover, our results apply to popular numerical procedures for determining the test threshold, namely permutations and the wild bootstrap. Through numerical experiments on synthetic and real-world datasets, we demonstrate that MMDAgg outperforms alternative approaches to MMD kernel adaptation for two-sample testing.
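The aggregation idea can be sketched as follows; note that this toy version uses a simple Bonferroni correction over bandwidths and a biased MMD estimate, whereas MMDAgg calibrates the per-bandwidth levels more sharply (via permutations or the wild bootstrap).

```python
import numpy as np

def rbf(A, B, bw):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * bw ** 2))

def mmd2_biased(Z, n, bw):
    K = rbf(Z, Z, bw)
    return K[:n, :n].mean() + K[n:, n:].mean() - 2 * K[:n, n:].mean()

def aggregated_mmd_test(X, Y, bandwidths, alpha=0.05, n_perm=500, seed=0):
    """Reject if the permutation p-value for any bandwidth falls below the
    Bonferroni-adjusted level alpha / len(bandwidths)."""
    rng = np.random.default_rng(seed)
    Z, n = np.vstack([X, Y]), len(X)
    for bw in bandwidths:
        obs = mmd2_biased(Z, n, bw)
        perm = np.array([mmd2_biased(Z[rng.permutation(len(Z))], n, bw)
                         for _ in range(n_perm)])
        p_value = (1 + np.sum(perm >= obs)) / (n_perm + 1)
        if p_value <= alpha / len(bandwidths):
            return True                            # samples deemed from different distributions
    return False
```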
Most existing classification methods aim to minimize the overall misclassification error rate; in many applications, however, different types of errors can have different consequences. To account for this asymmetry, two popular paradigms have been developed: the Neyman-Pearson (NP) paradigm and the cost-sensitive (CS) paradigm. Compared with the CS paradigm, the NP paradigm does not require the specification of costs. Most previous work on the NP paradigm has focused on the binary case. In this work, we study the multi-class NP problem by connecting it to the CS problem and propose two algorithms. We extend the NP oracle inequalities and consistency from the binary case to the multi-class case, and show that our two algorithms enjoy these properties under certain conditions. Simulation and real data studies demonstrate the effectiveness of our algorithms. To our knowledge, this is the first work to solve the multi-class NP problem via cost-sensitive learning techniques with theoretical guarantees. The proposed algorithms are implemented in the R package "npcs" on CRAN.
This paper develops novel conformal methods to test whether a new observation was sampled from the same distribution as a reference set. Blending inductive and transductive conformal inference in an innovative way, the described methods can re-weight standard conformal p-values in a principled manner based on dependent side information from known out-of-distribution data, and can automatically take advantage of the most powerful model from any collection of one-class and binary classifiers. The solution can be implemented either through sample splitting or via a novel transductive cross-validation+ scheme, which, owing to tighter guarantees than existing cross-validation approaches, may also be useful in other applications of conformal inference. After studying false discovery rate control and power within a multiple testing framework with several possible outliers, the proposed solution is shown to outperform standard conformal p-values through simulations as well as applications to image recognition and tabular data.
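For context, a standard split-conformal p-value for a single new observation is computed as below (larger scores meaning "more atypical"); the paper's contribution lies in re-weighting such p-values using side information and arbitrary one-class or binary classifiers.

```python
import numpy as np

def conformal_pvalue(calibration_scores, test_score):
    """Split-conformal p-value for H0: the new point is an inlier.
    Scores are nonconformity scores from any fitted model (larger = stranger)."""
    n = len(calibration_scores)
    return (1 + np.sum(np.asarray(calibration_scores) >= test_score)) / (n + 1)
```

Under exchangeability of the calibration points and an inlier test point, this p-value is super-uniform, which is what makes downstream multiple-testing procedures applicable.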
Kernel-based tests provide a simple yet effective framework that uses the theory of reproducing kernel Hilbert spaces to design nonparametric testing procedures. In this paper, we propose new theoretical tools that can be used to study the asymptotic behaviour of kernel-based tests in several data scenarios and in many different testing problems. Unlike current approaches, our methods avoid the lengthy $U$- and $V$-statistic expansions and limit theorems that commonly appear in the literature, and work directly with random functionals on Hilbert spaces. As a result, our framework leads to a simple and clean analysis of kernel tests, requiring only mild regularity conditions. Furthermore, we show that our analysis can in general not be improved upon, by proving that the regularity conditions our methods require are both sufficient and necessary. To illustrate the effectiveness of our approach, we present a new kernel test for the conditional independence testing problem, as well as new analyses of already known kernel-based tests.
Many real-world optimization problems involve uncertain parameters whose probability distributions can be estimated using contextual feature information. In contrast to the standard approach of first estimating the distribution of the uncertain parameters and then optimizing the objective based on that estimate, we propose an \textit{integrated conditional estimation-optimization} (ICEO) framework that estimates the underlying conditional distribution of the random parameters while taking the structure of the optimization problem into account. We directly model the relationship between the conditional distribution of the random parameters and the contextual features, and then estimate the probabilistic model with an objective aligned with the downstream optimization problem. We show that our ICEO approach is asymptotically consistent under moderate regularity conditions and provide finite-sample performance guarantees in the form of generalization bounds. Computationally, performing estimation with the ICEO approach is a non-convex and often non-differentiable optimization problem. We propose a general methodology for approximating the potentially non-differentiable mapping from the estimated conditional distribution to the optimal decision by a differentiable function, which greatly improves the performance of gradient-based algorithms applied to the non-convex problem. We also provide a polynomial-optimization-based solution method for the semi-algebraic case. Numerical experiments are conducted to demonstrate the empirical success of our approach in different situations, including with limited data samples and under model mismatch.
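A hypothetical newsvendor example contrasts the standard estimate-then-optimize pipeline with an integrated, ICEO-style fit in which the distributional parameters are chosen to minimize the realized downstream cost; the Gaussian demand model, cost values, and the use of a derivative-free optimizer are assumptions of this sketch only.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

c_short, c_over = 4.0, 1.0                        # per-unit shortage / overage costs
q_star = c_short / (c_short + c_over)             # optimal newsvendor quantile

def order(theta, X, sigma=1.0):
    """Decision implied by a Normal(theta^T x, sigma^2) demand model."""
    return X @ theta + sigma * norm.ppf(q_star)

def realized_cost(theta, X, demand):
    q = order(theta, X)
    return np.mean(c_short * np.maximum(demand - q, 0.0)
                   + c_over * np.maximum(q - demand, 0.0))

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
demand = X @ np.array([2.0, -1.0, 0.5]) + rng.standard_t(df=3, size=500)  # misspecified noise

theta_eto = np.linalg.lstsq(X, demand, rcond=None)[0]              # estimate, then optimize
theta_iceo = minimize(realized_cost, theta_eto, args=(X, demand),  # integrated fit of the
                      method="Nelder-Mead").x                      # same parameters

print(realized_cost(theta_eto, X, demand), realized_cost(theta_iceo, X, demand))
```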
We propose a goodness-of-fit test for degree-corrected stochastic block models (DCSBM). The test is based on an adjusted chi-square statistic for measuring equality among groups of $n$ multinomial distributions with $d_1,\dots,d_n$ observations. In the context of network models, the number of multinomials ($n$) grows much faster than the numbers of observations ($d_i$), which correspond to the degree of node $i$, so the setting deviates from classical asymptotics. We show that the statistic converges in distribution under the null as long as the harmonic mean of $\{d_i\}$ grows to infinity. When applied sequentially, the test can also be used to determine the number of communities. The test operates on a compressed version of the adjacency matrix, conditional on the degrees, and is therefore highly scalable to large sparse networks. We incorporate a novel idea of compressing the rows based on a $(K+1)$-community assignment when testing for $K$ communities. This approach increases the power in sequential applications without sacrificing computational efficiency, and we prove its consistency in recovering the number of communities. Since the test statistic does not rely on a specific alternative, its utility goes beyond sequential testing and it can be used to simultaneously test against a wide range of alternatives outside the DCSBM family. In particular, we prove that the test is consistent against a general family of latent-variable network models with community structure.
Testing the significance of a variable or group of variables $X$ for predicting a response $Y$, given additional covariates $Z$, is a ubiquitous task in statistics. A simple but common approach is to specify a linear model, and then test whether the regression coefficient for $X$ is non-zero. However, when the model is misspecified, the test may have poor power, for example when $X$ is involved in complex interactions, or lead to many false rejections. In this work we study the problem of testing the model-free null of conditional mean independence, i.e. that the conditional mean of $Y$ given $X$ and $Z$ does not depend on $X$. We propose a simple and general framework that can leverage flexible nonparametric or machine learning methods, such as additive models or random forests, to yield both robust error control and high power. The procedure involves using these methods to perform regressions, first to estimate a form of projection of $Y$ on $X$ and $Z$ using one half of the data, and then to estimate the expected conditional covariance between this projection and $Y$ on the remaining half of the data. While the approach is general, we show that a version of our procedure using spline regression achieves what we show is the minimax optimal rate in this nonparametric testing problem. Numerical experiments demonstrate the effectiveness of our approach both in terms of maintaining Type I error control, and power, compared to several existing approaches.
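A hedged sketch of the two-stage procedure with random forests as the regression method follows (the variable names are ours; in practice the second-stage regressions would typically be cross-fitted rather than evaluated in-sample).

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def conditional_mean_independence_stat(X, Z, Y, seed=0):
    """Half 1: estimate a projection of Y onto (X, Z).  Half 2: estimate the
    expected conditional covariance between that projection and Y given Z,
    and studentize; large positive values are evidence against the null."""
    n = len(Y) // 2
    XZ = np.hstack([X, Z])
    f_hat = RandomForestRegressor(random_state=seed).fit(XZ[:n], Y[:n])
    f = f_hat.predict(XZ[n:])
    g = RandomForestRegressor(random_state=seed).fit(Z[n:], f).predict(Z[n:])
    m = RandomForestRegressor(random_state=seed).fit(Z[n:], Y[n:]).predict(Z[n:])
    prod = (f - g) * (Y[n:] - m)                   # conditional covariance contributions
    return np.sqrt(len(prod)) * prod.mean() / prod.std(ddof=1)  # approx N(0,1) under H0
```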
The ability to quickly and accurately identify covariate shift at test time is a critical and often overlooked component of safe machine learning systems deployed in high-risk domains. While methods exist for detecting when predictions should not be made on out-of-distribution test examples, identifying distributional level differences between training and test time can help determine when a model should be removed from the deployment setting and retrained. In this work, we define harmful covariate shift (HCS) as a change in distribution that may weaken the generalization of a predictive model. To detect HCS, we use the discordance between an ensemble of classifiers trained to agree on training data and disagree on test data. We derive a loss function for training this ensemble and show that the disagreement rate and entropy represent powerful discriminative statistics for HCS. Empirically, we demonstrate the ability of our method to detect harmful covariate shift with statistical certainty on a variety of high-dimensional datasets. Across numerous domains and modalities, we show state-of-the-art performance compared to existing methods, particularly when the number of observed test samples is small.
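The two test statistics can be computed from an ensemble's predicted class probabilities on a test batch as below (a minimal sketch; the paper's key ingredient, the loss that trains the ensemble to agree on training data and disagree on test data, is omitted).

```python
import numpy as np

def disagreement_statistics(ensemble_probs):
    """ensemble_probs: array of shape (n_models, n_samples, n_classes)."""
    votes = ensemble_probs.argmax(axis=-1)                   # (n_models, n_samples)
    modal = np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)
    disagreement_rate = float(np.mean(votes != modal))       # fraction of non-modal votes
    mean_probs = ensemble_probs.mean(axis=0)
    entropy = float(-np.sum(mean_probs * np.log(mean_probs + 1e-12), axis=1).mean())
    return disagreement_rate, entropy
```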
Contextual bandit and reinforcement learning algorithms have been successfully used in various interactive learning systems such as online advertising, recommender systems, and dynamic pricing. However, they have yet to be widely adopted in high-stakes application domains such as healthcare. One reason may be that existing approaches assume the underlying mechanisms are static, in the sense that they do not change across environments. In many real-world systems, however, these mechanisms may vary across environments, which can invalidate the static-environment assumption. In this paper, we take a step toward tackling the problem of environmental shifts within the framework of offline contextual bandits. We view the environmental shift problem through the lens of causality and propose multi-environment contextual bandits, which allow the underlying mechanisms to change. We adopt the concept of invariance from the causality literature and introduce the notion of policy invariance. We argue that policy invariance is only relevant when unobserved variables are present, and show that in that case an optimal invariant policy is guaranteed to generalize across environments under suitable assumptions. Our results establish concrete connections among causality, invariance, and contextual bandits.
The purpose of this review is to introduce the reader to graph kernels with a view towards applying them to classification problems in cheminformatics. Graph kernels are functions that allow us to infer the chemical properties of molecules and can help with tasks such as finding suitable compounds for drug design. The use of kernel methods is just one particular way of quantifying similarity between graphs. We limit our discussion to this approach, although popular alternatives have emerged in recent years, most notably graph neural networks.
We study fairness in the context of classification where performance is measured by the area under the curve (AUC) of the receiver operating characteristic. AUC is commonly used when both Type I (false positive) and Type II (false negative) errors matter. However, the same classifier can have significantly different AUCs for different protected groups, and in real-world applications it is often desirable to reduce such cross-group differences. We address how to select additional features so as to most improve the AUC of the disadvantaged group. Our results show that the unconditional variance of a feature does not inform us about AUC fairness, but the class-conditional variance does. Using this connection, we develop a novel approach, fairAUC, based on feature augmentation (adding features) to mitigate bias between identifiable groups. We evaluate fairAUC on synthetic and real-world (COMPAS) datasets and find that it significantly improves the AUC of the disadvantaged group relative to benchmarks that maximize overall AUC or minimize bias between groups.
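A simplified sketch of feature augmentation guided by the disadvantaged group's AUC follows (the actual fairAUC selection rule is driven by class-conditional variances; the greedy search, logistic model, and function names here are illustrative assumptions).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def group_auc(X, y, group, g, clf):
    scores = clf.predict_proba(X[group == g])[:, 1]
    return roc_auc_score(y[group == g], scores)

def greedy_augment(X, y, group, base, candidates, disadvantaged, n_add=2):
    """Greedily add the candidate feature that most improves the
    disadvantaged group's AUC under a logistic-regression classifier."""
    chosen = list(base)
    for _ in range(n_add):
        aucs = {}
        for j in candidates:
            if j in chosen:
                continue
            cols = chosen + [j]
            clf = LogisticRegression(max_iter=1000).fit(X[:, cols], y)
            aucs[j] = group_auc(X[:, cols], y, group, disadvantaged, clf)
        chosen.append(max(aucs, key=aucs.get))
    return chosen
```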
We propose and analyse a novel statistical procedure, coined AgraSSt, to assess the quality of graph generators which may not be available in explicit form. In particular, AgraSSt can be used to determine whether a learned graph generating process is capable of generating graphs that resemble a given input graph. Inspired by Stein operators for random graphs, the key idea of AgraSSt is the construction of a kernel discrepancy based on an operator obtained from the graph generator. AgraSSt can provide interpretable criticism of a graph generator training procedure and help identify reliable sample batches for downstream tasks. Using Stein's method, we give theoretical guarantees for a broad class of random graph models. We provide empirical results on synthetic input graphs with known graph generation procedures, as well as real-world input graphs on which state-of-the-art (deep) generative models for graphs are trained.
Sequential testing, always-valid $p$-values, and confidence sequences promise flexible statistical inference and on-the-fly decision making. However, unlike fixed-$n$ inference based on asymptotic normality, existing sequential tests either make parametric assumptions and end up under-covering/over-rejecting when these fail or use non-parametric but conservative concentration inequalities and end up over-covering/under-rejecting. To circumvent these issues, we sidestep exact at-least-$\alpha$ coverage and focus on asymptotically exact coverage and asymptotic optimality. That is, we seek sequential tests whose probability of ever rejecting a true hypothesis asymptotically approaches $\alpha$ and whose expected time to reject a false hypothesis approaches a lower bound on all tests with asymptotic coverage at least $\alpha$, both under an appropriate asymptotic regime. We permit observations to be both non-parametric and dependent and focus on testing whether the observations form a martingale difference sequence. We propose the universal sequential probability ratio test (uSPRT), a slight modification to the normal-mixture sequential probability ratio test, where we add a burn-in period and adjust thresholds accordingly. We show that even in this very general setting, the uSPRT is asymptotically optimal under mild generic conditions. We apply the results to stabilized estimating equations to test means, treatment effects, etc. Our results also provide corresponding guarantees for the implied confidence sequences. Numerical simulations verify our guarantees and the benefits of the uSPRT over alternatives.
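A hedged sketch of a normal-mixture sequential test with a burn-in, in the spirit of the uSPRT, is given below (the mixture parameter, burn-in length, and threshold are illustrative; the paper derives the precise adjustments needed for asymptotically exact coverage and optimality).

```python
import numpy as np

def mixture_sequential_test(stream, alpha=0.05, prior_var=1.0, burn_in=50):
    """Sequentially test H0: the observations form a mean-zero martingale
    difference sequence, using Robbins' normal-mixture likelihood ratio and
    rejecting once it exceeds 1/alpha after a burn-in period."""
    S, V = 0.0, 0.0                        # running sum and running sum of squares
    for t, x in enumerate(stream, start=1):
        S += x
        V += x ** 2
        log_lr = (-0.5 * np.log1p(prior_var * V)
                  + S ** 2 / (2.0 * (V + 1.0 / prior_var)))
        if t > burn_in and log_lr >= np.log(1.0 / alpha):
            return t                        # stopping time at which H0 is rejected
    return None                             # H0 never rejected on this stream
```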