We develop a general framework for distribution-free predictive inference in regression, using conformal inference. The proposed methodology allows for the construction of a prediction band for the response variable using any estimator of the regression function. The resulting prediction band preserves the consistency properties of the original estimator under standard assumptions, while guaranteeing finite-sample marginal coverage even when these assumptions do not hold. We analyze and compare, both empirically and theoretically, the two major variants of our conformal framework: full conformal inference and split conformal inference, along with a related jackknife method. These methods offer different tradeoffs between statistical accuracy (length of resulting prediction intervals) and computational efficiency. As extensions, we develop a method for constructing valid in-sample prediction intervals called rank-one-out conformal inference, which has essentially the same computational efficiency as split conformal inference. We also describe an extension of our procedures for producing prediction bands with locally varying length, in order to adapt to heteroskedasticity in the data. Finally, we propose a model-free notion of variable importance, called leave-one-covariate-out or LOCO inference. Accompanying this paper is an R package conformalInference that implements all of the proposals we have introduced. In the spirit of reproducibility, all of our empirical results can also be easily (re)generated using this package.
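As a concrete illustration of the split conformal variant described above, here is a minimal sketch (not the authors' conformalInference R package); the data, the random-forest base estimator, and the miscoverage level alpha = 0.1 are arbitrary choices made for the example:

```python
# Minimal sketch of split conformal prediction with a random forest base estimator.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n, p = 1000, 5
X = rng.normal(size=(n, p))
y = X[:, 0] ** 2 + rng.normal(size=n)

# 1) Split the data into a training half and a calibration half.
tr, cal = np.arange(n // 2), np.arange(n // 2, n)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[tr], y[tr])

# 2) Absolute residuals on the calibration half are the conformity scores.
scores = np.abs(y[cal] - model.predict(X[cal]))

# 3) The (1 - alpha) conformal quantile with the finite-sample correction.
alpha = 0.1
k = int(np.ceil((1 - alpha) * (len(cal) + 1)))
q = np.sort(scores)[k - 1]

# 4) Prediction interval for a new point: point prediction +/- q.
x_new = np.zeros((1, p))
pred = model.predict(x_new)[0]
print("90% split conformal interval:", (pred - q, pred + q))
```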
The Lasso is a method for high-dimensional regression that is commonly used when the number of covariates $p$ is of the same order as, or larger than, the number of observations $n$. Classical asymptotic normality theory does not apply to this model for two fundamental reasons: $(1)$ the regularized risk is non-smooth; $(2)$ the distance between the estimator $\widehat{\boldsymbol{\theta}}$ and the true parameter vector $\boldsymbol{\theta}^*$ cannot be neglected. As a consequence, the standard perturbative arguments that are the traditional basis for asymptotic normality fail. On the other hand, the Lasso estimator can be precisely characterized in the regime in which both $n$ and $p$ are large and $n/p$ is of order one. This characterization was first obtained in the case of Gaussian designs with i.i.d. covariates: here we generalize it to Gaussian correlated designs with non-degenerate covariance structure. This is expressed in terms of a simpler ``fixed design'' model. We establish non-asymptotic bounds on the distance between the distributions of various quantities in the two models, which hold uniformly over signals $\boldsymbol{\theta}^*$ in a suitable sparsity class. As an application, we study the distribution of the debiased Lasso and show that a degrees-of-freedom correction is necessary for computing valid confidence intervals.
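A hedged numerical sketch of the kind of degrees-of-freedom correction referred to above; the exact correction in the paper may differ, so this uses the commonly seen adjustment that divides the residual correction term by $n - \|\widehat{\boldsymbol{\theta}}\|_0$ rather than $n$, assumes the design covariance Sigma is known, and uses an arbitrary AR(1)-type correlation and penalty level:

```python
# Sketch: debiased Lasso with a degrees-of-freedom adjustment under a correlated Gaussian design.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p, s, rho = 400, 500, 10, 0.3
Sigma = rho ** np.abs(np.subtract.outer(np.arange(p), np.arange(p)))  # correlated design
X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
theta_star = np.zeros(p)
theta_star[:s] = 1.0
y = X @ theta_star + rng.normal(size=n)

lam = 0.1
theta_hat = Lasso(alpha=lam, fit_intercept=False).fit(X, y).coef_
df = np.sum(theta_hat != 0)                      # Lasso degrees of freedom = support size
resid = y - X @ theta_hat
theta_debiased = theta_hat + np.linalg.solve(Sigma, X.T @ resid) / (n - df)

print("first coordinate (truth, lasso, debiased):", theta_star[0], theta_hat[0], theta_debiased[0])
```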
Predicting a set of outcomes -- rather than a single outcome -- is a promising solution for uncertainty quantification in statistical learning. Despite a rich literature on constructing prediction sets with statistical guarantees, adapting to unknown covariate shift -- a prevalent issue in practice -- remains a serious unsolved challenge. In this paper, we show that prediction sets with finite-sample coverage guarantees are uninformative in this setting, and we propose a novel flexible distribution-free method, PredSet-1Step, to efficiently construct prediction sets with an asymptotic coverage guarantee under unknown covariate shift. We formally show that our method is \textit{asymptotically probably approximately correct}, having well-calibrated coverage error with high confidence for large samples. We illustrate that it achieves nominal coverage in a number of experiments and on a data set concerning HIV risk prediction in a South African cohort study. Our theory hinges on a new bound on the convergence rate of the coverage of Wald confidence intervals based on general asymptotically linear estimators.
Testing the significance of a variable or group of variables $X$ for predicting a response $Y$, given additional covariates $Z$, is a ubiquitous task in statistics. A simple but common approach is to specify a linear model, and then test whether the regression coefficient for $X$ is non-zero. However, when the model is misspecified, the test may have poor power, for example when $X$ is involved in complex interactions, or lead to many false rejections. In this work we study the problem of testing the model-free null of conditional mean independence, i.e. that the conditional mean of $Y$ given $X$ and $Z$ does not depend on $X$. We propose a simple and general framework that can leverage flexible nonparametric or machine learning methods, such as additive models or random forests, to yield both robust error control and high power. The procedure involves using these methods to perform regressions, first to estimate a form of projection of $Y$ on $X$ and $Z$ using one half of the data, and then to estimate the expected conditional covariance between this projection and $Y$ on the remaining half of the data. While the approach is general, we show that a version of our procedure using spline regression achieves what we show is the minimax optimal rate in this nonparametric testing problem. Numerical experiments demonstrate the effectiveness of our approach both in terms of maintaining Type I error control, and power, compared to several existing approaches.
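A hedged sketch of the sample-splitting recipe described above, not the authors' exact construction: one half is used to learn a regression of $Y$ on $(X, Z)$ and its further projection onto $Z$ alone; the other half estimates the expected conditional covariance between that projection and $Y$, studentized against a standard normal. The random-forest regressions and the simulated interaction effect are assumptions made for the example:

```python
# Sketch: split-sample test of conditional mean independence via an expected conditional covariance.
import numpy as np
from scipy.stats import norm
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
n = 2000
Z = rng.normal(size=(n, 3))
X = rng.normal(size=(n, 1))
Y = Z[:, 0] + 0.5 * X[:, 0] * Z[:, 1] + rng.normal(size=n)   # X matters only via an interaction

A, B = np.arange(n // 2), np.arange(n // 2, n)
XZ = np.hstack([X, Z])

# Half A: f(x, z) ~ E[Y | X, Z]; then project f(X, Z) onto Z alone, and Y onto Z.
f_hat = RandomForestRegressor(random_state=0).fit(XZ[A], Y[A])
g_hat = RandomForestRegressor(random_state=0).fit(Z[A], f_hat.predict(XZ[A]))
m_hat = RandomForestRegressor(random_state=0).fit(Z[A], Y[A])

# Half B: products whose mean estimates the expected conditional covariance.
R = (f_hat.predict(XZ[B]) - g_hat.predict(Z[B])) * (Y[B] - m_hat.predict(Z[B]))
T = np.sqrt(len(B)) * R.mean() / R.std(ddof=1)
print("test statistic:", T, "one-sided p-value:", 1 - norm.cdf(T))
```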
Suppose we observe a random vector $X$ drawn from a distribution $P$ in a known family with unknown parameters. We ask the following question: when is it possible to split $X$ into two parts $f(X)$ and $g(X)$ such that neither part alone is sufficient to reconstruct $X$, but both parts together recover $X$ completely, and the joint distribution of $(f(X), g(X))$ is tractable? As one example, if $X = (X_1, \dots, X_n)$ and $P$ is a product distribution, then for any $m < n$ we can split the sample to define $f(X) = (X_1, \dots, X_m)$ and $g(X) = (X_{m+1}, \dots, X_n)$. Rasines and Young (2021) provide an alternative route to accomplishing this task by randomizing $X$ with additive Gaussian noise, enabling post-selection inference in finite samples for Gaussian-distributed data and asymptotically for non-Gaussian additive models. In this paper, we offer a more general methodology for achieving such a split in finite samples, borrowing ideas from Bayesian inference to yield a (frequentist) solution that can be viewed as a continuous analog of data splitting. We call our method data blurring, as an alternative to data splitting, data carving, and p-value masking. We illustrate the method on a few prototypical applications, such as post-selection inference for trend filtering and other regression problems.
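A small numerical illustration of the additive-Gaussian randomization route mentioned above (the Rasines and Young style split, not this paper's more general construction): for $X \sim N(\mu, \sigma^2)$ with known $\sigma$, the parts $f(X) = X + \tau Z$ and $g(X) = X - Z/\tau$ with independent $Z \sim N(0, \sigma^2)$ are independent Gaussians, and together they reconstruct $X$ exactly. The values of $\mu$, $\sigma$, and $\tau$ below are arbitrary:

```python
# Sketch: splitting a single Gaussian sample into two independent parts via additive noise.
import numpy as np

rng = np.random.default_rng(0)
sigma, tau, n = 1.0, 0.5, 100_000
X = 3.0 + sigma * rng.normal(size=n)           # mu = 3 plays the role of the unknown parameter
Z = sigma * rng.normal(size=n)

fX = X + tau * Z                               # part used for, e.g., model selection
gX = X - Z / tau                               # part reserved for inference

# The two parts are uncorrelated (hence independent, being jointly Gaussian),
# and X is recovered exactly as a weighted combination of them.
print("correlation between parts:", np.corrcoef(fX, gX)[0, 1])
X_rec = (gX + fX / tau**2) / (1 + 1 / tau**2)
print("max reconstruction error:", np.max(np.abs(X_rec - X)))
```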
Cross-validation is a widely used technique for estimating prediction error, but its behavior is complex and not fully understood. Ideally, one would like to think that cross-validation estimates the prediction error of the model at hand, fit to the training data. We prove that this is not the case for the linear model fit by ordinary least squares; rather, it estimates the average prediction error of models fit to other unseen training sets drawn from the same population. We further show that this phenomenon occurs for most popular estimates of prediction error, including data splitting, bootstrapping, and Mallows' Cp. Next, the standard confidence intervals for prediction error derived from cross-validation may have coverage far below the desired level. Because each data point is used for both training and testing, there are correlations among the measured accuracies in each fold, and so the usual estimate of variance is too small. We introduce a nested cross-validation scheme to estimate this variance more accurately, and show empirically that this modification leads to intervals with approximately correct coverage in many examples where traditional cross-validation intervals fail.
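For concreteness, here is a short sketch of the "usual" cross-validation interval whose coverage the abstract calls into question: the point estimate is the mean of per-observation errors, and the naive standard error treats those errors as independent even though each point is used for both training and testing. The nested cross-validation fix proposed in the paper is not reproduced here; the data and fold count are arbitrary:

```python
# Sketch: naive K-fold CV estimate of prediction error and its (too small) standard error.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
n, p = 100, 20
X = rng.normal(size=(n, p))
y = X[:, 0] + rng.normal(size=n)

errors = np.empty(n)
for train, test in KFold(n_splits=10, shuffle=True, random_state=0).split(X):
    model = LinearRegression().fit(X[train], y[train])
    errors[test] = (y[test] - model.predict(X[test])) ** 2    # squared error per held-out point

cv_estimate = errors.mean()
naive_se = errors.std(ddof=1) / np.sqrt(n)                    # ignores between-fold correlation
print("CV estimate:", cv_estimate)
print("naive 90% interval:", (cv_estimate - 1.645 * naive_se, cv_estimate + 1.645 * naive_se))
```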
In this work, we provide a review of the basic ideas and novel developments of conformal prediction -- an innovative, distribution-free, non-parametric prediction approach based on minimal assumptions -- which is able to yield, in a very straightforward way, prediction sets that are valid in a statistical sense also in the finite-sample case. The in-depth discussion provided in the paper covers the theoretical foundations of conformal prediction and then proceeds to list the more advanced developments and adaptations of the original idea.
Classical asymptotic theory for statistical inference usually involves calibrating a statistic by fixing the dimension $d$ while letting the sample size $n$ increase to infinity. Recently, much effort has been dedicated towards understanding how these methods behave in high-dimensional settings, where $d$ and $n$ both increase to infinity together. This often leads to different inference procedures, depending on the assumptions about the dimensionality, leaving the practitioner in a bind: given a dataset with 100 samples in 20 dimensions, should they calibrate by assuming $n \gg d$, or $d/n \approx 0.2$? This paper considers the goal of dimension-agnostic inference; developing methods whose validity does not depend on any assumption on $d$ versus $n$. We introduce an approach that uses variational representations of existing test statistics along with sample splitting and self-normalization to produce a new test statistic with a Gaussian limiting distribution, regardless of how $d$ scales with $n$. The resulting statistic can be viewed as a careful modification of degenerate U-statistics, dropping diagonal blocks and retaining off-diagonal blocks. We exemplify our technique for some classical problems including one-sample mean and covariance testing, and show that our tests have minimax rate-optimal power against appropriate local alternatives. In most settings, our cross U-statistic matches the high-dimensional power of the corresponding (degenerate) U-statistic up to a $\sqrt{2}$ factor.
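A hedged sketch of the sample-splitting and self-normalization idea, specialized to the one-sample mean problem: the first half supplies a direction (its sample mean), the second half is projected onto that direction, and the studentized average is compared to a standard normal regardless of how $d$ scales with $n$. The simulated mean shift of 0.3 is an arbitrary choice for the example:

```python
# Sketch: a cross (split-sample, self-normalized) statistic for testing H0: mean = 0.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n, d = 100, 20                                  # the "100 samples in 20 dimensions" bind above
X = rng.normal(size=(n, d)) + 0.3               # true mean is 0.3 in every coordinate

D1, D2 = X[: n // 2], X[n // 2 :]
direction = D1.mean(axis=0)                     # estimated from the first half only
h = D2 @ direction                              # projections of the second half
T = np.sqrt(len(h)) * h.mean() / h.std(ddof=1)  # approximately N(0,1) under the null

print("cross statistic:", T, "one-sided p-value:", 1 - norm.cdf(T))
```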
Many scientific and engineering challenges, ranging from personalized medicine to customized marketing recommendations, require an understanding of treatment effect heterogeneity. In this paper, we develop a non-parametric causal forest for estimating heterogeneous treatment effects that extends Breiman's widely used random forest algorithm. In the potential outcomes framework with unconfoundedness, we show that causal forests are pointwise consistent for the true treatment effect, and have an asymptotically Gaussian and centered sampling distribution. We also discuss a practical method for constructing asymptotic confidence intervals for the true treatment effect that are centered at the causal forest estimates. Our theoretical results rely on a generic Gaussian theory for a large family of random forest algorithms. To our knowledge, this is the first set of results that allows any type of random forest, including classification and regression forests, to be used for provably valid statistical inference. In experiments, we find causal forests to be substantially more powerful than classical methods based on nearest-neighbor matching, especially in the presence of irrelevant covariates.
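A toy, heavily simplified illustration of honest tree-based CATE estimation in the spirit of the above, not the actual causal forest algorithm (which uses different splitting criteria, subsampling schemes, and variance estimates): the tree structure is grown on one half of the data using a transformed outcome, and leaf-level treated-minus-control means are estimated on the other, "honest" half. The randomized design with known propensity 0.5 and the simulated effect function are assumptions for this sketch:

```python
# Toy sketch: honest, tree-based heterogeneous treatment effect estimation.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
n, p, B = 2000, 5, 50
X = rng.normal(size=(n, p))
W = rng.binomial(1, 0.5, size=n)                     # randomized treatment, propensity 0.5
tau = np.where(X[:, 0] > 0, 2.0, 0.0)                # true heterogeneous effect
Y = X[:, 1] + tau * W + rng.normal(size=n)

# Transformed outcome: E[Y * (W - 0.5) / 0.25 | X] equals the CATE when the propensity is 0.5.
Z = Y * (W - 0.5) / 0.25

x_test = np.zeros((1, p))                            # true effect at this point is 0
estimates = []
for b in range(B):
    idx = rng.permutation(n)
    split, honest = idx[: n // 2], idx[n // 2 :]
    # Grow the tree structure on one half using the transformed outcome ...
    tree = DecisionTreeRegressor(max_leaf_nodes=16, min_samples_leaf=50, random_state=b)
    tree.fit(X[split], Z[split])
    # ... then estimate leaf-wise treated-minus-control means on the held-out half.
    leaf_honest = tree.apply(X[honest])
    leaf_test = tree.apply(x_test)[0]
    mask = leaf_honest == leaf_test
    yh, wh = Y[honest][mask], W[honest][mask]
    if wh.sum() > 0 and (1 - wh).sum() > 0:
        estimates.append(yh[wh == 1].mean() - yh[wh == 0].mean())

print("toy forest CATE estimate at x = 0:", np.mean(estimates))
```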
We address the problem of how to achieve optimal inference for distributed quantile regression without stringent scaling conditions. This is challenging due to the non-smooth nature of the quantile regression (QR) loss function, which invalidates the use of existing methods. The difficulty is resolved through a double-smoothing approach applied to the local (at each data source) and global objective functions. Despite relying on a delicate combination of local and global smoothing parameters, the quantile regression model remains fully parametric, thereby facilitating interpretation. In the low-dimensional regime, we establish a finite-sample theoretical framework for the sequentially defined distributed QR estimators, which reveals a trade-off between communication cost and statistical error. We further discuss and compare several alternative confidence set constructions, based on the inversion of Wald and score-type tests and on resampling techniques, and detail an improvement that is effective for more extreme quantile coefficients. In high dimensions, a sparse framework is adopted in which the proposed doubly smoothed objective function is complemented with an $\ell_1$-penalty. We show that the corresponding distributed QR estimators attain the global convergence rate after a near-constant number of communication rounds. A thorough simulation study further elucidates our findings.
High-dimensional data can often display heterogeneity due to heteroscedastic variance or inhomogeneous covariate effects. Penalized quantile and expectile regression methods offer useful tools to detect heteroscedasticity in high-dimensional data. The former is computationally challenging due to the non-smooth nature of the check loss, and the latter is sensitive to heavy-tailed error distributions. In this paper, we propose and study (penalized) robust expectile regression (retire), with a focus on iteratively reweighted $\ell_1$-penalization which reduces the estimation bias from $\ell_1$-penalization and leads to oracle properties. Theoretically, we establish the statistical properties of the retire estimator under two regimes: (i) low-dimensional regime in which $d \ll n$; (ii) high-dimensional regime in which $s\ll n\ll d$ with $s$ denoting the number of significant predictors. In the high-dimensional setting, we carefully characterize the solution path of the iteratively reweighted $\ell_1$-penalized retire estimation, adapted from the local linear approximation algorithm for folded-concave regularization. Under a mild minimum signal strength condition, we show that after as many as $\log(\log d)$ iterations the final iterate enjoys the oracle convergence rate. At each iteration, the weighted $\ell_1$-penalized convex program can be efficiently solved by a semismooth Newton coordinate descent algorithm. Numerical studies demonstrate the competitive performance of the proposed procedure compared with either non-robust or quantile regression based alternatives.
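A hedged sketch of a robust (Huberized) expectile objective of the kind described above, fit in low dimensions without a penalty via a generic optimizer; this is my reading of a "robust expectile" loss, and the penalized, iteratively reweighted $\ell_1$ algorithm with the semismooth Newton solver from the paper is not reproduced. The expectile level tau, robustification constant c, and heavy-tailed simulation are arbitrary choices:

```python
# Sketch: unpenalized robust (asymmetric-Huber) expectile regression via a generic optimizer.
import numpy as np
from scipy.optimize import minimize

def retire_loss(beta, X, y, tau=0.7, c=1.345):
    r = y - X @ beta
    w = np.where(r < 0, 1 - tau, tau)                    # asymmetric expectile weight
    huber = np.where(np.abs(r) <= c, 0.5 * r**2, c * np.abs(r) - 0.5 * c**2)
    return np.sum(w * huber)

rng = np.random.default_rng(0)
n, p = 500, 3
X = np.hstack([np.ones((n, 1)), rng.normal(size=(n, p))])
y = X[:, 1] + rng.standard_t(df=2, size=n)               # heavy-tailed errors

fit = minimize(retire_loss, np.zeros(p + 1), args=(X, y), method="BFGS")
print("robust expectile (tau = 0.7) coefficients:", np.round(fit.x, 3))
```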
The dominant paradigm in statistical inference relies on an i.i.d. structure for data drawn from a hypothetical infinite population. Despite its success, this framework remains inflexible under complex data structures, even in settings where it is clear what the infinite population represents. In this paper, we explore an alternative framework in which inference rests only on invariance assumptions on the model errors, such as exchangeability or sign symmetry. As a general approach to this invariant inference problem, we propose a randomization-based procedure. We prove general conditions for the asymptotic validity of this procedure and illustrate it on many data structures, including clustered errors in one-way and two-way layouts. We find that invariant inference via residual randomization has three appealing properties: (1) it is valid under weak and interpretable conditions, and can address problems with heavy-tailed data, few clusters, and even some high-dimensional settings; (2) it is robust in finite samples, since it does not rely on the regularity conditions required by classical asymptotics; (3) it solves the inference problem in a unified way that adapts to the data structure. By contrast, classical procedures such as OLS or the bootstrap presuppose an i.i.d. structure and need to be modified whenever the actual problem structure differs. This mismatch in the classical framework has led to a multitude of robust-error techniques and bootstrap variants, which often confuse applied researchers. We confirm these findings through extensive empirical evaluations: residual randomization compares favorably to many alternatives, including robust-error methods, bootstrap variants, and hierarchical models.
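One possible instantiation of the residual randomization idea, sketched here under the sign-symmetry invariance mentioned above for testing a single coefficient in a linear model; this is a hedged illustration of the general recipe, not the paper's full procedure for clustered or high-dimensional settings. The heavy-tailed simulation and the number of randomizations are arbitrary:

```python
# Sketch: residual randomization test of H0: beta_j = 0 under sign-symmetric errors.
import numpy as np

rng = np.random.default_rng(0)
n, p, j = 200, 4, 3                               # test the last coefficient
X = rng.normal(size=(n, p))
y = X[:, 0] + rng.standard_t(df=3, size=n)        # heavy-tailed errors; beta_j = 0 is true

def ols_coef(X, y, j):
    return np.linalg.lstsq(X, y, rcond=None)[0][j]

# Restricted fit under H0 (drop column j); keep its fitted values and residuals.
X0 = np.delete(X, j, axis=1)
beta0 = np.linalg.lstsq(X0, y, rcond=None)[0]
fitted0, resid0 = X0 @ beta0, y - X0 @ beta0

t_obs = ols_coef(X, y, j)
t_rand = []
for _ in range(2000):
    signs = rng.choice([-1.0, 1.0], size=n)       # the sign-symmetry group action
    y_star = fitted0 + signs * resid0             # synthetic outcome consistent with H0
    t_rand.append(ols_coef(X, y_star, j))

p_value = (1 + np.sum(np.abs(t_rand) >= np.abs(t_obs))) / (1 + len(t_rand))
print("residual randomization p-value:", p_value)
```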
We discuss the fundamental issue of identification in linear instrumental variable (IV) models with unknown IV validity. We revisit the popular majority and plurality rules and show that, in general, no identification condition can be "if and only if". Assuming the "sparsest rule", which is equivalent to the majority rule but becomes operational in computational algorithms, we investigate and prove the advantages of non-convex penalized methods over other IV estimators based on two-step selection, in terms of selection consistency and accommodation of individually weak IVs. Furthermore, we propose an alternative, surrogate penalty that is consistent with the identification condition and simultaneously yields an oracle sparse structure. Desirable theoretical properties are derived for the proposed estimator under weaker IV-strength conditions than in the previous literature. Finite-sample properties are demonstrated using simulations, and the selection and estimation method is applied to an empirical study of the effect of trade on economic growth.
We study optimization for data-driven decision-making when we have observations of the uncertain parameters within an optimization model together with concurrent observations of covariates. Given a new covariate observation, the goal is to choose a decision that minimizes the expected cost conditional on this observation. We study three data-driven frameworks that integrate a machine learning prediction model within a stochastic programming sample average approximation (SAA) to approximate the solution to this problem. Two of the SAA frameworks are new and use out-of-sample residuals of leave-one-out prediction models for scenario generation. The frameworks we study are flexible and can accommodate parametric, nonparametric, and semiparametric regression techniques. We derive conditions on the data-generating process, the prediction model, and the stochastic program under which the solutions of these data-driven SAAs are consistent and asymptotically optimal, and we also derive convergence rates and finite-sample guarantees. Computational experiments validate our theoretical results, demonstrate the potential advantages of our data-driven formulations over existing approaches (even when the prediction model is misspecified), and illustrate the benefits of our new data-driven formulations in the limited-data regime.
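A hedged toy instance of the residual-based SAA idea described above, using a hypothetical newsvendor problem: predict demand from the covariate, add held-out residuals back as scenarios, and choose the order quantity that minimizes the sample-average cost conditional on the new covariate value. Cross-validated residuals stand in for the leave-one-out residuals of the paper, and the cost parameters and data are invented for the example:

```python
# Toy sketch: residual-based sample average approximation with covariate information.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
n = 300
x = rng.uniform(0, 10, size=(n, 1))                        # covariate (e.g., temperature)
demand = 20 + 5 * x[:, 0] + rng.normal(scale=5, size=n)    # uncertain parameter

# Out-of-sample residuals of the prediction model (cross-validated stand-in for leave-one-out).
pred_oos = cross_val_predict(LinearRegression(), x, demand, cv=10)
residuals = demand - pred_oos

# Scenarios for a new covariate observation: point prediction plus the residuals.
model = LinearRegression().fit(x, demand)
x_new = np.array([[7.0]])
scenarios = model.predict(x_new)[0] + residuals

# Newsvendor cost: overage cost 1, underage cost 4, minimized over a grid of order quantities.
def avg_cost(q):
    return np.mean(1.0 * np.maximum(q - scenarios, 0) + 4.0 * np.maximum(scenarios - q, 0))

grid = np.linspace(scenarios.min(), scenarios.max(), 200)
q_star = grid[np.argmin([avg_cost(q) for q in grid])]
print("data-driven order quantity at x = 7:", round(q_star, 2))
```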
We consider the problem of variance reduction in randomized controlled trials through the use of covariates that are correlated with the outcome but independent of the treatment. We propose a machine learning regression-adjusted treatment effect estimator, which we call MLRATE. MLRATE uses machine learning predictions of the outcome to reduce estimator variance. It employs cross-fitting to avoid overfitting bias, and we prove consistency and asymptotic normality under general conditions. MLRATE is robust to poor predictions from the machine learning step: if the predictions are uncorrelated with the outcome, the estimator is asymptotically no worse than the standard difference-in-means estimator, while if the predictions are highly correlated with the outcome, the efficiency gains are large. In A/A tests, for a set of 48 outcome metrics commonly monitored in Facebook experiments, the estimator has over 70% lower variance than the simple difference-in-means estimator, and about 19% lower variance than the common univariate procedure that adjusts only for pre-experiment values of the outcome.
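A hedged sketch of the cross-fitted regression-adjustment recipe as described above (my reading of it, not the authors' exact specification or code): predict the outcome from covariates with cross-fitting, then run a heteroskedasticity-robust OLS of the outcome on the treatment, the centered prediction, and their interaction, taking the treatment coefficient as the adjusted effect estimate. The gradient-boosting learner and simulated data are arbitrary:

```python
# Sketch: machine-learning regression-adjusted treatment effect estimate with cross-fitting.
import numpy as np
import statsmodels.api as sm
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
n = 5000
X = rng.normal(size=(n, 5))
T = rng.binomial(1, 0.5, size=n)
Y = X[:, 0] ** 2 + X[:, 1] + 0.2 * T + rng.normal(size=n)    # true effect 0.2

# Cross-fitted ML predictions of the outcome from covariates only.
g = cross_val_predict(GradientBoostingRegressor(random_state=0), X, Y, cv=5)
g_c = g - g.mean()

design = sm.add_constant(np.column_stack([T, g_c, T * g_c]))
fit = sm.OLS(Y, design).fit(cov_type="HC2")
print("adjusted ATE estimate:", fit.params[1], "robust SE:", fit.bse[1])
```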
Conformal prediction constructs a confidence set for an unobserved response of a feature vector based on previous identically distributed and exchangeable observations of responses and features. It has a coverage guarantee at any nominal level without additional assumptions on their distribution. Unfortunately, its computation requires a refitting procedure for all replacement candidates of the target response. In regression settings, this corresponds to an infinite number of model fits. Apart from relatively simple estimators that can be written as piecewise-linear functions of the response, efficiently computing such sets is difficult and is still considered an open problem. We exploit the fact that, \emph{often}, conformal prediction sets are intervals whose boundaries can be efficiently approximated by classical root-finding algorithms. We investigate how this approach can overcome many limitations of previously used strategies; we discuss its complexity and drawbacks.
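A hedged sketch of the root-finding idea: the full conformal p-value $\pi(y)$ requires one refit per candidate response $y$, but when $\{y : \pi(y) > \alpha\}$ is an interval, its endpoints are roots of $\pi(y) - \alpha$ and can be located with a classical bracketing root finder instead of a dense grid. The ridge base estimator and the bracketing half-width of 20 are arbitrary choices for the example:

```python
# Sketch: endpoints of a full conformal prediction interval via bracketing root-finding.
import numpy as np
from scipy.optimize import brentq
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n, p, alpha = 80, 3, 0.1
X = rng.normal(size=(n, p))
y = X @ np.array([1.0, -1.0, 0.5]) + rng.normal(size=n)
x_new = rng.normal(size=(1, p))

def conformal_pvalue(y_cand):
    Xa, ya = np.vstack([X, x_new]), np.append(y, y_cand)
    model = Ridge(alpha=1.0).fit(Xa, ya)               # refit with the candidate response
    scores = np.abs(ya - model.predict(Xa))
    return np.mean(scores >= scores[-1])               # rank of the new point's conformity score

# Bracket the two boundary crossings around a rough point prediction.
center = Ridge(alpha=1.0).fit(X, y).predict(x_new)[0]
lo = brentq(lambda t: conformal_pvalue(t) - alpha, center - 20, center)
hi = brentq(lambda t: conformal_pvalue(t) - alpha, center, center + 20)
print("approximate 90% full conformal interval:", (lo, hi))
```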
In this paper, we aim to provide a general and complete understanding of semi-supervised (SS) causal inference for treatment effects. Specifically, we consider two such estimands as prototypical cases: (a) the average treatment effect and (b) the quantile treatment effect, in an SS setting characterized by two available data sets: (i) a labeled data set of size $n$, providing observations of a response, a set of high-dimensional covariates, and a binary treatment indicator; and (ii) an unlabeled data set of size $N$, much larger than $n$, in which the response is not observed. Using these two data sets, we develop a family of SS estimators that are (1) more robust and (2) more efficient than their supervised counterparts based on the labeled data set alone. Beyond the "standard" double-robustness results (in terms of consistency) that can also be achieved by supervised methods, we further establish root-$n$ consistency and asymptotic normality of our SS estimators whenever the propensity score in the model is correctly specified, without requiring specific forms of the nuisance functions involved. Such an improvement in robustness arises from the use of the massive unlabeled data, and it is therefore generally unattainable in a purely supervised setting. Moreover, our estimators are shown to be semiparametrically efficient as long as all the nuisance functions are correctly specified. In addition, as an illustration of the nuisance estimators, we consider inverse-probability-weighting-type kernel smoothing estimators involving unknown covariate transformation mechanisms and establish their uniform convergence rates in high-dimensional scenarios, which should be of independent interest. Numerical results on both simulated and real data validate the advantage of our methods over their supervised counterparts with respect to both robustness and efficiency.
This paper provides estimation and inference methods for a conditional average treatment effects (CATE) characterized by a high-dimensional parameter in both homogeneous cross-sectional and unit-heterogeneous dynamic panel data settings. In our leading example, we model CATE by interacting the base treatment variable with explanatory variables. The first step of our procedure is orthogonalization, where we partial out the controls and unit effects from the outcome and the base treatment and take the cross-fitted residuals. This step uses a novel generic cross-fitting method we design for weakly dependent time series and panel data. This method "leaves out the neighbors" when fitting nuisance components, and we theoretically power it by using Strassen's coupling. As a result, we can rely on any modern machine learning method in the first step, provided it learns the residuals well enough. Second, we construct an orthogonal (or residual) learner of CATE -- the Lasso CATE -- that regresses the outcome residual on the vector of interactions of the residualized treatment with explanatory variables. If the complexity of CATE function is simpler than that of the first-stage regression, the orthogonal learner converges faster than the single-stage regression-based learner. Third, we perform simultaneous inference on parameters of the CATE function using debiasing. We also can use ordinary least squares in the last two steps when CATE is low-dimensional. In heterogeneous panel data settings, we model the unobserved unit heterogeneity as a weakly sparse deviation from Mundlak (1978)'s model of correlated unit effects as a linear function of time-invariant covariates and make use of L1-penalization to estimate these models. We demonstrate our methods by estimating price elasticities of groceries based on scanner data. We note that our results are new even for the cross-sectional (i.i.d) case.
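A hedged, cross-sectional (i.i.d.) sketch of the orthogonal "Lasso CATE" recipe described above: cross-fit the outcome and treatment regressions, form residuals, and Lasso-regress the outcome residual on the residualized treatment interacted with the explanatory variables. The panel/unit-effect modeling, the neighbor-leaving cross-fitting for dependent data, and the debiasing step for simultaneous inference are all omitted; the learners, penalty level, and simulated data are arbitrary:

```python
# Sketch: orthogonal (residual-on-residual) Lasso learner of a conditional average treatment effect.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Lasso
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
n, p = 3000, 10
X = rng.normal(size=(n, p))
D = 0.5 * X[:, 0] + rng.normal(size=n)                  # base treatment (e.g., log price)
tau = 1.0 + 0.5 * X[:, 1]                               # CATE depends on one covariate
Y = tau * D + X[:, 0] + rng.normal(size=n)

# Step 1: orthogonalization via cross-fitted nuisance regressions.
m_hat = cross_val_predict(RandomForestRegressor(random_state=0), X, Y, cv=5)
e_hat = cross_val_predict(RandomForestRegressor(random_state=0), X, D, cv=5)
Y_res, D_res = Y - m_hat, D - e_hat

# Step 2: Lasso of the outcome residual on residualized-treatment interactions (D - e)(1, X).
Z = D_res[:, None] * np.hstack([np.ones((n, 1)), X])
cate_fit = Lasso(alpha=0.01).fit(Z, Y_res)
print("first CATE coefficients (truth: 1.0, 0.0, 0.5):", np.round(cate_fit.coef_[:3], 2))
```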
We consider estimating a low-dimensional parameter in an estimating equation involving high-dimensional nuisances that depend on the parameter. A central example is the efficient estimating equation for the (local) quantile treatment effect ((L)QTE) in causal inference, which involves as a nuisance the covariate-conditional cumulative distribution function evaluated at the quantile being estimated. Debiased machine learning (DML) is a data-splitting approach to estimating high-dimensional nuisances using flexible machine learning methods, but applying it to problems with parameter-dependent nuisances is impractical. For the (L)QTE, DML requires us to learn the entire covariate-conditional cumulative distribution function. We instead propose localized debiased machine learning (LDML), which avoids this burdensome step and needs only estimate the nuisances at a single initial rough guess for the parameter. For the (L)QTE, LDML involves learning just two regression functions, a standard task for machine learning methods. We prove that, under lax rate conditions, our estimator has the same favorable asymptotic behavior as the infeasible estimator that uses the unknown true nuisances. Thus, LDML notably enables practically feasible and efficient estimation of important quantities in causal inference, such as (L)QTEs, when we must control for many covariates and/or flexible relationships.
Stability selection (Meinshausen and Buhlmann, 2010) makes any feature selection method more stable by returning only those features that are consistently selected across many subsamples. We prove (to our knowledge, for the first time) that for data containing highly correlated proxies of an important latent variable, the lasso typically selects one of the proxies, whereas stability selection with the lasso can fail to select any proxy, leading to worse predictive performance than the lasso alone. We introduce cluster stability selection, which exploits the practitioner's knowledge that highly correlated clusters exist in the data, yielding better feature rankings than stability selection in this setting. We consider several feature-combination approaches, including taking a weighted average of the features in each important cluster, with weights determined by the frequency with which individual cluster members are selected, which we show leads to better predictive models than previous proposals. We present generalizations of the theoretical guarantees of Meinshausen and Buhlmann (2010) and Shah and Samworth (2012) to show that cluster stability selection retains the same guarantees. In summary, cluster stability selection enjoys the best of both worlds, yielding a selected set that is both stable and sparse and has good predictive performance.
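A hedged sketch of the cluster-level selection-proportion idea: run the lasso on many subsamples, record for each cluster whether any of its members is selected, rank clusters by that proportion, and form a weighted-average representative of the important cluster for prediction. The subsample size of n/2, the penalty level, and the simulated proxy cluster are arbitrary choices, and the theoretical thresholding rules from the paper are not reproduced:

```python
# Sketch: cluster-level selection proportions and a weighted-average cluster representative.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, B = 200, 100
latent = rng.normal(size=n)
proxies = latent[:, None] + 0.1 * rng.normal(size=(n, 5))    # 5 highly correlated proxies (one cluster)
noise_feats = rng.normal(size=(n, 10))
X = np.hstack([proxies, noise_feats])
y = 2 * latent + rng.normal(size=n)
clusters = {"proxy_cluster": list(range(5)), **{f"noise_{j}": [5 + j] for j in range(10)}}

feat_counts = np.zeros(X.shape[1])
cluster_counts = {c: 0 for c in clusters}
for _ in range(B):
    idx = rng.choice(n, size=n // 2, replace=False)          # subsample of size n/2
    sel = np.flatnonzero(Lasso(alpha=0.1).fit(X[idx], y[idx]).coef_)
    feat_counts[sel] += 1
    for c, members in clusters.items():
        cluster_counts[c] += int(any(m in sel for m in members))

print("per-feature selection proportions (proxies first):", feat_counts / B)
print("cluster selection proportions:", {c: v / B for c, v in cluster_counts.items()})

# Weighted-average cluster representative, weights = individual selection frequencies.
w = feat_counts[:5] / max(feat_counts[:5].sum(), 1)
representative = proxies @ w
print("representative correlates with the latent variable:", np.corrcoef(representative, latent)[0, 1])
```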