There are a number of available methods for deciding whom to prioritize for treatment, including ones based on treatment effect estimation, risk scoring, and hand-crafted rules. We propose rank-weighted average treatment effect (RATE) metrics as a simple and general family of metrics for comparing treatment prioritization rules at scale. RATE metrics are agnostic as to how the prioritization rules were derived, and evaluate them only on how well they identify the units that benefit most from treatment. We define a family of RATE estimators and prove a central limit theorem that enables asymptotically exact inference in a wide variety of randomized and observational study settings. We justify the use of bootstrapped confidence intervals, along with a framework for testing hypotheses about heterogeneity in treatment effects correlated with a prioritization rule. Our definition of RATE nests a number of existing metrics, including the Qini coefficient, and our analysis directly yields inference methods for these metrics. We demonstrate our approach in examples from personalized medicine and marketing. In the medical setting, using data from the SPRINT and ACCORD-BP randomized controlled trials, we find no clear evidence of heterogeneous treatment effects. On the other hand, in a large marketing trial, we find strong evidence of heterogeneity in the treatment effects of some digital advertising campaigns, and demonstrate how RATE can be used to compare targeting rules that prioritize on estimated risk with ones that prioritize on estimated treatment benefit.
translated by Google Translate
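To make the rank-weighted idea concrete, here is a minimal AUTOC-style sketch: rank units by a prioritization rule and average, over top-q fractions, the excess mean effect among the top-ranked units. It assumes per-unit effect scores (e.g., doubly robust scores) are already available; the function name and toy data are illustrative, not the paper's implementation.

```python
def rate(priorities, dr_scores):
    """Average, over top-q fractions, of (mean effect score among the
    top-q ranked units) minus (overall mean effect score) -- an
    AUTOC-style rank-weighted average treatment effect."""
    n = len(priorities)
    order = sorted(range(n), key=lambda i: -priorities[i])
    ranked = [dr_scores[i] for i in order]
    overall = sum(ranked) / n
    toc_sum, running = 0.0, 0.0
    for j, s in enumerate(ranked, start=1):
        running += s
        toc_sum += running / j - overall   # TOC at fraction q = j/n
    return toc_sum / n

# A rule whose ranking matches the effect scores gets a positive RATE;
# ranking in the reverse order flips the sign.
scores = [3.0, -1.0, 2.0, 0.0]                   # per-unit effect scores (toy)
print(rate([3.0, -1.0, 2.0, 0.0], scores) > 0)   # True
print(rate([-3.0, 1.0, -2.0, 0.0], scores) < 0)  # True
```

A rule uncorrelated with the true effects would yield a RATE near zero, which is the null the paper's inference targets.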
Forest-based methods have recently gained popularity in nonparametric treatment effect estimation. Building on this line of work, we introduce causal survival forests, which can be used to estimate heterogeneous treatment effects in survival and observational settings where outcomes may be right-censored. Our approach relies on orthogonal estimating equations to robustly adjust for both censoring and selection effects under unconfoundedness. In our experiments, this approach performs well relative to a number of baselines.
Randomized controlled trials (RCTs) represent the gold standard for informing policy guidelines. However, RCTs are often narrow, and lack data on broader populations of interest. Causal effects in these populations are often estimated using observational datasets, which may suffer from unobserved confounding and selection bias. Given a set of observational estimates (e.g., from multiple studies), we propose a meta-algorithm that attempts to reject observational estimates that are biased. We do so using validation effects, causal effects that can be inferred from both RCT and observational data. After rejecting estimators that fail this test, we construct conservative confidence intervals on the extrapolated causal effects for subgroups not observed in the RCT. Under the assumption that at least one observational estimator is asymptotically normal and consistent for both the validation and extrapolated effects, we provide guarantees on the coverage probability of the intervals output by our algorithm. To facilitate hypothesis testing in settings where causal effects are transported across datasets, we give conditions under which a doubly robust estimator of group average treatment effects is asymptotically normal, even when flexible machine learning methods are used for estimation of nuisance parameters. We illustrate the properties of our approach on semi-synthetic and real-world datasets, and show that it compares favorably to standard meta-analysis techniques.
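The reject-then-union logic can be sketched in a few lines. The overlap test and the numbers below are deliberate simplifications, assumed for illustration; the paper's actual test and interval construction are more careful.

```python
def conservative_interval(rct_val_ci, candidates):
    """Reject any observational estimator whose validation-effect CI
    fails to overlap the RCT's validation-effect CI, then return the
    union (hull) of the survivors' extrapolation CIs.

    candidates: list of (validation_ci, extrapolation_ci) tuples."""
    lo_r, hi_r = rct_val_ci
    survivors = [ext for (lo, hi), ext in candidates
                 if not (hi < lo_r or lo > hi_r)]   # intervals overlap
    if not survivors:
        return None
    return (min(lo for lo, _ in survivors), max(hi for _, hi in survivors))

rct_ci = (0.8, 1.4)
obs = [((0.9, 1.3), (1.0, 1.6)),   # consistent with the RCT -> kept
       ((2.0, 2.5), (2.1, 2.8))]   # biased on the validation effect -> rejected
print(conservative_interval(rct_ci, obs))  # (1.0, 1.6)
```

Taking the union of the surviving intervals is what makes the output conservative: it covers the extrapolated effect as long as at least one surviving estimator is valid.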
Many scientific and engineering challenges, ranging from personalized medicine to customized marketing recommendations, require an understanding of treatment effect heterogeneity. In this paper, we develop a non-parametric causal forest for estimating heterogeneous treatment effects that extends Breiman's widely used random forest algorithm. In the potential outcomes framework with unconfoundedness, we show that causal forests are pointwise consistent for the true treatment effect, and have an asymptotically Gaussian and centered sampling distribution. We also discuss a practical method for constructing asymptotic confidence intervals for the true treatment effect that are centered at the causal forest estimates. Our theoretical results rely on a generic Gaussian theory for a large family of random forest algorithms. To our knowledge, this is the first set of results that allows any type of random forest, including classification and regression forests, to be used for provably valid statistical inference. In experiments, we find causal forests to be substantially more powerful than classical methods based on nearest-neighbor matching, especially in the presence of irrelevant covariates.
Modern longitudinal studies collect feature data at many timepoints, often of the same order as the sample size. Such studies are typically affected by dropout and positivity violations. We tackle these problems by generalizing effects of recent incremental interventions (which shift propensity scores rather than set treatment values deterministically) to accommodate multiple outcomes and subject dropout. We give an identifying expression for incremental intervention effects when dropout is conditionally ignorable (without requiring treatment positivity), and derive the nonparametric efficiency bound for estimating such effects. Then we present efficient nonparametric estimators, showing that they converge at fast parametric rates and yield uniform inferential guarantees, even when nuisance functions are estimated flexibly at slower rates. We also study the variance ratio of incremental intervention effects relative to more conventional deterministic effects in a novel infinite time horizon setting, where the number of timepoints can grow with sample size, and show that incremental intervention effects yield near-exponential gains in statistical precision in this setting. Finally, we conclude with simulations and apply our methods in a study of the effect of low-dose aspirin on pregnancy outcomes.
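In the single-timepoint case, the incremental intervention multiplies each unit's treatment odds by a factor delta, turning the propensity pi into q = delta*pi / (delta*pi + 1 - pi), and the shifted mean outcome has a simple IPW form. The sketch below assumes known propensities and toy data; it illustrates the shift, not the paper's multi-timepoint estimators.

```python
def incremental_ipw(data, delta):
    """IPW estimate of the mean outcome if everyone's treatment odds
    were multiplied by delta; data is a list of (pi, a, y) with known
    propensity pi, binary treatment a, and outcome y."""
    total = 0.0
    for pi, a, y in data:
        q = delta * pi / (delta * pi + 1.0 - pi)        # shifted propensity
        w = q / pi if a == 1 else (1.0 - q) / (1.0 - pi)
        total += w * y
    return total / len(data)

data = [(0.5, 1, 2.0), (0.5, 0, 1.0), (0.25, 1, 3.0), (0.25, 0, 1.0)]
print(incremental_ipw(data, 1.0))  # 1.75: delta = 1 recovers the plain mean
```

Because the shift leaves propensities strictly inside (0, 1) for any finite delta, no treatment positivity assumption is needed, which is the key appeal noted in the abstract.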
In many investigations, the primary outcome of interest is difficult or expensive to collect. Examples include long-term health effects of medical interventions, measurements requiring expensive testing or follow-up, and outcomes only measurable on small panels as in marketing. This reduces effective sample sizes for estimating the average treatment effect (ATE). However, there is often an abundance of observations on surrogate outcomes not of primary interest, such as short-term health effects or online-ad click-through. We study the role of such surrogate observations in the efficient estimation of treatment effects. To quantify their value, we derive the semiparametric efficiency bounds on ATE estimation with and without the presence of surrogates and several intermediary settings. The difference between these characterizes the efficiency gains from optimally leveraging surrogates. We study two regimes: when the number of surrogate observations is comparable to primary-outcome observations and when the former dominates the latter. We take an agnostic missing-data approach circumventing strong surrogate conditions previously assumed. To leverage surrogates' efficiency gains, we develop efficient ATE estimation and inference based on flexible machine-learning estimates of nuisance functions appearing in the influence functions we derive. We empirically demonstrate the gains by studying the long-term earnings effect of job training.
Estimating causal effects from randomized experiments is central to clinical research. Reducing the statistical uncertainty in these analyses is an important objective for statisticians. Registries, prior trials, and health records constitute a growing compendium of historical data on patients that may be exploitable to this end. However, most methods for historical borrowing achieve reductions in variance by sacrificing strict type-I error rate control. Here, we propose a use of historical data that exploits linear covariate adjustment to improve the efficiency of trial analyses without incurring bias. Specifically, we train a prognostic model on the historical data, then estimate the treatment effect using a linear regression while adjusting for the trial subjects' predicted outcomes (their prognostic scores). We prove that, under certain conditions, this prognostic covariate adjustment procedure attains the minimum variance possible among a large class of estimators. When those conditions are not met, prognostic covariate adjustment is still more efficient than raw covariate adjustment, and the gain in efficiency is proportional to a measure of the predictive accuracy of the prognostic model above and beyond the linear relationship with the raw covariates. We demonstrate the approach using simulations and a reanalysis of an Alzheimer's disease clinical trial, and observe meaningful reductions in mean-squared error and estimated variance. Lastly, we provide a simplified asymptotic variance formula that enables power calculations accounting for these gains. Sample size reductions between 10% and 30% are attainable when using sufficiently predictive prognostic models.
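The core estimation step is ordinary least squares of the trial outcome on treatment plus the prognostic score. A self-contained sketch, with a hand-rolled 3x3 normal-equations solver and synthetic noise-free data so the effect is recovered exactly; the helper names are illustrative, and a real analysis would fit the prognostic model on historical data rather than take the score as given.

```python
def solve3(A, b):
    """Gauss-Jordan elimination with partial pivoting for a 3x3 system."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(3):
        p = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[p] = M[p], M[col]
        for r in range(3):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    return [M[i][3] / M[i][i] for i in range(3)]

def prognostic_adjusted_effect(a, m, y):
    """OLS of outcome y on [intercept, treatment a, prognostic score m];
    returns the coefficient on treatment."""
    X = [[1.0, ai, mi] for ai, mi in zip(a, m)]
    XtX = [[sum(r[i] * r[j] for r in X) for j in range(3)] for i in range(3)]
    Xty = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(3)]
    return solve3(XtX, Xty)[1]

# Synthetic trial: outcome = 1.0 * treatment + prognostic score, no noise.
a = [1, 1, 0, 0, 1, 0]
m = [0.0, 1.0, 0.0, 1.0, 2.0, 2.0]
y = [1.0, 2.0, 0.0, 1.0, 3.0, 2.0]
print(round(prognostic_adjusted_effect(a, m, y), 6))  # 1.0
```

With noisy outcomes the point estimate would vary, but the variance of the treatment coefficient shrinks as the prognostic score explains more of the outcome, which is the efficiency gain the abstract quantifies.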
We consider the problem of constructing bounds on the average treatment effect in the presence of unobserved confounding under the marginal sensitivity model of Tan (2006). Combining an existing characterization involving adversarial propensity scores with a new distributionally robust characterization of the problem, we propose novel estimators of these bounds that we call "doubly-valid/doubly-sharp" (DVDS) estimators. Double sharpness corresponds to the fact that DVDS estimators consistently estimate the tightest possible (i.e., sharp) bounds implied by the sensitivity model even when one of two nuisance parameters is misspecified, and achieve semiparametric efficiency when all nuisance parameters are suitably consistent. Double validity is an entirely new property for partial identification: DVDS estimators still provide valid, though not sharp, bounds even when most nuisance parameters are misspecified. In fact, even in cases where DVDS point estimates fail to be asymptotically normal, standard Wald confidence intervals may remain valid. In the case of binary outcomes, the DVDS estimators are particularly convenient and possess a closed-form expression in terms of the outcome regression and propensity score. We demonstrate the DVDS estimators in a simulation study, as well as a case study of right heart catheterization.
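A deliberately simplified illustration of the kind of tilting such sensitivity bounds exploit: for a binary outcome with observed mean mu, if an unobserved density ratio is only known to lie in [1/L, L], the extreme tilts put weight L on Y=1 and 1/L on Y=0 (upper bound), and the reverse for the lower bound. This is a sketch of the tilting logic only, not the paper's DVDS estimator or the exact marginal sensitivity model bounds.

```python
def binary_tilt_bounds(mu, L):
    """Extreme reweightings of a binary outcome mean mu when the
    unobserved density ratio lies in [1/L, L]: weight L on Y=1 and 1/L
    on Y=0 gives the upper bound; the reverse gives the lower bound."""
    upper = L * L * mu / (L * L * mu + (1.0 - mu))
    lower = mu / (mu + L * L * (1.0 - mu))
    return lower, upper

print(binary_tilt_bounds(0.5, 1.0))  # (0.5, 0.5): L = 1 means no confounding
print(binary_tilt_bounds(0.5, 2.0))  # (0.2, 0.8): the interval widens with L
```

Reporting bounds as a function of L, rather than a single point estimate, is the standard way such sensitivity analyses are communicated.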
We consider the problem of estimating a low-dimensional parameter in an estimating equation involving high-dimensional nuisances that depend on the parameter. A central example is the efficient estimating equation for the (local) quantile treatment effect ((L)QTE) in causal inference, which involves the covariate-conditional cumulative distribution function evaluated at the quantile to be estimated. Debiased machine learning (DML) is a data-splitting approach to estimating high-dimensional nuisances using flexible machine learning methods, but applying it to problems with parameter-dependent nuisances is impractical. For the (L)QTE, DML requires us to learn the entire covariate-conditional cumulative distribution function. We instead propose localized debiased machine learning (LDML), which avoids this burdensome step and needs only estimate the nuisances at a single initial rough guess for the parameter. For the (L)QTE, LDML involves learning just two regression functions, a standard task for machine learning methods. We prove that, under lax rate conditions, our estimator has the same favorable asymptotic behavior as the infeasible estimator that uses the unknown true nuisances. Thus, LDML notably enables practically feasible and efficient estimation of important quantities in causal inference such as (L)QTEs when we must control for many covariates and/or flexible relationships.
Understanding the impact of a particular treatment or policy is relevant in many areas of interest, from political economy and marketing to healthcare. In this paper, we develop a nonparametric algorithm for detecting treatment effects over time in the context of synthetic controls. The method is based on counterfactual predictions from many algorithms, without assuming that any one of them correctly captures the model. We introduce an inferential procedure for detecting treatment effects and show that the testing procedure is asymptotically valid for stationary, beta-mixing processes without imposing any restrictions on the set of base algorithms under consideration. We discuss consistency guarantees for average treatment effect estimates and provide regret bounds for the proposed methodology. The class of algorithms may include random forests, the lasso, or any other machine learning estimator. Numerical studies and an application illustrate the advantages of the method.
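The counterfactual-forecast idea can be reduced to a few lines: fit several predictors on pre-treatment data, average their forecasts, and read the treatment effect at each post-treatment time as observed minus forecast. The "algorithms" below are toy stand-ins (last value and pre-period mean), assumed for illustration; the paper's method uses flexible learners and a formal test on the resulting series.

```python
def ensemble_effects(pre, post):
    """Per-period effect estimates: observed post-treatment outcomes
    minus an ensemble counterfactual forecast from the pre-period."""
    forecasts = [pre[-1],              # last-value carry-forward
                 sum(pre) / len(pre)]  # pre-period mean
    counterfactual = sum(forecasts) / len(forecasts)
    return [yt - counterfactual for yt in post]

pre = [1.0, 1.0, 1.0, 1.0]   # pre-treatment outcomes
post = [2.0, 2.5]            # post-treatment outcomes
print(ensemble_effects(pre, post))  # [1.0, 1.5]
```

Averaging over algorithms is what removes the need to assume any single model is correctly specified.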
In this paper, we propose a method for nonparametric estimation and inference on heterogeneous bounds for causal effect parameters in general sample selection models, where the initial treatment can affect whether a post-intervention outcome is observed. Treatment selection may be confounded by observable covariates, while outcome selection can be confounded by both observables and unobservables. The method provides conditional effect bounds as functions of policy-relevant pre-treatment variables, and allows for valid statistical inference on the unidentified conditional effect curve. We use a flexible semiparametric debiased machine learning approach that can accommodate flexible functional forms and high-dimensional confounding variables among the treatment, selection, and outcome processes. Easily verifiable high-level conditions for estimation, as well as misspecification-robust inference guarantees, are provided as well.
Strategic test allocation plays a major role in the control of both emerging and existing pandemics (e.g., COVID-19, HIV). Widespread testing supports effective epidemic control by (1) reducing transmission via identifying cases, and (2) tracking outbreak dynamics to inform targeted interventions. However, infectious disease surveillance presents unique statistical challenges. For instance, the true outcome of interest (one's positive infectious status) is often a latent variable. In addition, the presence of both network and temporal dependence reduces the data to a single observation. As testing entire populations regularly is neither efficient nor feasible, standard approaches to testing recommend simple rule-based testing strategies (e.g., symptom-based, contact tracing), without taking into account individual risk. In this work, we study an adaptive sequential design involving n individuals over a period of τ time-steps, which allows for unspecified dependence among individuals and across time. Our causal target parameter is the mean latent outcome we would have obtained after one time-step, if, starting at time t given the observed past, we had carried out a stochastic intervention that maximizes the outcome under a resource constraint. We propose an Online Super Learner for adaptive sequential surveillance that learns the optimal choice of testing strategies over time while adapting to the current state of the outbreak. Relying on a series of working models, the proposed method learns across samples, through time, or both, based on the underlying (unknown) structure in the data. We present an identification result for the latent outcome in terms of the observed data, and demonstrate the superior performance of the proposed strategy in a simulation modeling a residential university environment during the COVID-19 pandemic.
Statistical risk assessments inform consequential decisions such as pretrial release in criminal justice, and loan approvals in consumer finance. Such risk assessments make counterfactual predictions, predicting the likelihood of an outcome under a proposed decision (e.g., what would happen if we approved this loan?). A central challenge, however, is that there may have been unmeasured confounders that jointly affected past decisions and outcomes in the historical data. This paper proposes a tractable mean outcome sensitivity model that bounds the extent to which unmeasured confounders could affect outcomes on average. The mean outcome sensitivity model partially identifies the conditional likelihood of the outcome under the proposed decision, popular predictive performance metrics (e.g., accuracy, calibration, TPR, FPR), and commonly-used predictive disparities. We derive their sharp identified sets, and we then solve three tasks that are essential to deploying statistical risk assessments in high-stakes settings. First, we propose a doubly-robust learning procedure for the bounds on the conditional likelihood of the outcome under the proposed decision. Second, we translate our estimated bounds on the conditional likelihood of the outcome under the proposed decision into a robust, plug-in decision-making policy. Third, we develop doubly-robust estimators of the bounds on the predictive performance of an existing risk assessment.
Testing the significance of a variable or group of variables $X$ for predicting a response $Y$, given additional covariates $Z$, is a ubiquitous task in statistics. A simple but common approach is to specify a linear model, and then test whether the regression coefficient for $X$ is non-zero. However, when the model is misspecified, the test may have poor power, for example when $X$ is involved in complex interactions, or lead to many false rejections. In this work we study the problem of testing the model-free null of conditional mean independence, i.e. that the conditional mean of $Y$ given $X$ and $Z$ does not depend on $X$. We propose a simple and general framework that can leverage flexible nonparametric or machine learning methods, such as additive models or random forests, to yield both robust error control and high power. The procedure involves using these methods to perform regressions, first to estimate a form of projection of $Y$ on $X$ and $Z$ using one half of the data, and then to estimate the expected conditional covariance between this projection and $Y$ on the remaining half of the data. While the approach is general, we show that a version of our procedure using spline regression achieves what we show is the minimax optimal rate in this nonparametric testing problem. Numerical experiments demonstrate the effectiveness of our approach both in terms of maintaining Type I error control, and power, compared to several existing approaches.
Predictive risk scores are increasingly used to guide clinical or other interventions in complex settings, particularly in healthcare. Directly updating a risk score that is used to guide interventions can lead to biased risk estimates. To prevent this, we propose updating using a "holdout set": a subset of the population that does not receive interventions guided by the risk score. Since samples in the holdout set do not benefit from the risk predictions, its size must trade off the performance of the updated risk score against the number of held-out samples. We prove that this approach outperforms simple alternatives, and, by defining a general loss function, describe conditions under which an optimal holdout size (OHS) can be readily identified. We introduce parametric and semiparametric algorithms for OHS estimation and demonstrate their use on a recent risk score for pre-eclampsia. Based on these results, we argue that a holdout set is a safe, viable, and easy-to-implement means of updating predictive risk scores.
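The trade-off behind the optimal holdout size can be illustrated numerically: held-out samples each incur the unguided intervention cost, while everyone else incurs a guided cost that shrinks as the holdout (and hence the score-updating data) grows. The cost model and constants below are assumed for illustration, not the paper's loss function.

```python
def total_cost(n_holdout, n_total, k1=1.0, c=0.5, a=200.0):
    """Assumed cost model: each of the n_holdout held-out samples costs
    k1 (no risk-guided intervention); the remaining samples each cost
    k2(n) = c + a / n_holdout, which falls as the holdout grows."""
    k2 = c + a / n_holdout
    return n_holdout * k1 + (n_total - n_holdout) * k2

N = 10_000
best = min(range(10, N), key=lambda n: total_cost(n, N))
print(best)  # 2000
```

Past this size, enlarging the holdout costs more (in forgone guided interventions) than it saves through a better-updated score; the paper's algorithms estimate this minimizer from data rather than from a known cost curve.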
Predicting sets of outcomes, rather than unique point predictions, is a promising solution to uncertainty quantification in statistical learning. Despite a rich literature on constructing prediction sets with statistical guarantees, adapting to unknown covariate shift, a prevalent issue in practice, remains a serious unsolved challenge. In this paper, we show that prediction sets with finite-sample coverage guarantees are uninformative under unknown covariate shift, and propose a novel flexible distribution-free method, PredSet-1Step, to efficiently construct prediction sets with an asymptotic coverage guarantee in this setting. We formally show that our method is asymptotically probably approximately correct, having well-calibrated coverage error with high confidence in large samples. We illustrate that it achieves nominal coverage in a number of experiments and a dataset concerning HIV risk prediction in a South African cohort study. Our theory hinges on a new bound on the convergence rate of the coverage of Wald confidence intervals based on general asymptotically linear estimators.
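For background, here is a plain split-conformal construction, the kind of finite-sample-valid prediction set that unknown covariate shift breaks: calibration residuals give a quantile threshold, and the set is all outcomes within that distance of the point prediction. This is a generic conformal sketch for intuition, not the PredSet-1Step procedure.

```python
import math

def conformal_interval(cal_pred, cal_true, alpha, new_pred):
    """Absolute-residual split conformal interval: threshold at the
    conformal quantile of the calibration residuals."""
    scores = sorted(abs(p - t) for p, t in zip(cal_pred, cal_true))
    n = len(scores)
    k = min(math.ceil((n + 1) * (1 - alpha)), n)   # conformal quantile index
    q = scores[k - 1]
    return new_pred - q, new_pred + q

lo, hi = conformal_interval([1.0, 2.0, 3.0, 4.0], [1.1, 1.8, 3.3, 4.0],
                            alpha=0.5, new_pred=10.0)
print(round(lo, 6), round(hi, 6))  # 9.8 10.2
```

The guarantee here requires calibration and test points to be exchangeable; once the covariate distribution shifts in an unknown way, that premise fails, which is the problem the abstract targets.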
Algorithms produce a growing portion of decisions and recommendations both in policy and business. Such algorithmic decisions are natural experiments (conditionally quasi-randomly assigned instruments), since the algorithms make decisions based only on observable input variables. We use this observation to develop treatment effect estimators for a class of stochastic and deterministic decision-making algorithms. Our estimators are shown to be consistent and asymptotically normal for well-defined causal effects. A key special case of our estimators is the multidimensional regression discontinuity design. We apply our estimators to evaluate the effect of the Coronavirus Aid, Relief, and Economic Security (CARES) Act, in which billions of dollars of relief funding were allocated to hospitals via an algorithmic rule. Our estimates suggest that the relief funding had little effect on COVID-19-related hospital activity levels. Naive OLS and IV estimates exhibit substantial selection bias.
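The regression discontinuity special case is easy to illustrate in one dimension: when a rule assigns funding by thresholding a score, units just below and just above the cutoff are locally comparable, and a crude effect estimate is the difference in mean outcomes within a small bandwidth. Toy data below; real analyses (including the paper's multidimensional case) use local regression and principled bandwidth choices.

```python
def rdd_effect(scores, outcomes, cutoff, bandwidth):
    """Difference in mean outcomes just above vs. just below the cutoff."""
    above = [y for s, y in zip(scores, outcomes)
             if cutoff <= s < cutoff + bandwidth]
    below = [y for s, y in zip(scores, outcomes)
             if cutoff - bandwidth <= s < cutoff]
    return sum(above) / len(above) - sum(below) / len(below)

scores   = [0.8, 0.9, 1.1, 1.2, 2.0]   # running variable (toy)
outcomes = [5.0, 5.2, 7.1, 7.3, 9.0]
print(round(rdd_effect(scores, outcomes, cutoff=1.0, bandwidth=0.5), 6))  # 2.1
```

The point far from the cutoff (score 2.0) is excluded by the bandwidth: only near-threshold comparisons carry the quasi-random variation that identifies the effect.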
In many applications, heterogeneous treatment effects on a censored response variable are of primary interest, and it is natural to evaluate the effects at different quantiles (e.g., median). The large number of potential effect modifiers, the unknown structure of the treatment effects, and the presence of right censoring pose significant challenges. In this paper, we develop a hybrid forest approach called Hybrid Censored Quantile Regression Forest (HCQRF) to assess the heterogeneous effects varying with high-dimensional variables. The hybrid estimation approach takes advantage of the random forests and the censored quantile regression. We propose a doubly-weighted estimation procedure that consists of a redistribution-of-mass weight to handle censoring and an adaptive nearest neighbor weight derived from the forest to handle high-dimensional effect functions. We propose a variable importance decomposition to measure the impact of a variable on the treatment effect function. Extensive simulation studies demonstrate the efficacy and stability of HCQRF. The result of the simulation study also convinces us of the effectiveness of the variable importance decomposition. We apply HCQRF to a clinical trial of colorectal cancer. We achieve insightful estimations of the treatment effect and meaningful variable importance results. The result of the variable importance also confirms the necessity of the decomposition.
In this paper, we study the problem of designing experiments that are conducted on a set of units, such as users or groups of users in an online marketplace, over multiple time periods, such as weeks or months. These experiments are particularly useful for studying treatments that have causal effects on both current and future outcomes (instantaneous and lagged effects). The design problem involves selecting a treatment time for each unit, before or during the experiment, in order to most precisely estimate the instantaneous and lagged effects post-experiment. This optimization of treatment decisions can directly minimize the opportunity cost of the experiment by reducing its sample size requirements. The optimization is an NP-hard integer program, for which we provide a near-optimal solution when design decisions are made at the beginning (fixed-sample-size designs). Next, we study sequential experiments that allow adaptive decisions during the experiment and can potentially stop the experiment early, further reducing its cost. However, the sequential nature of these experiments complicates both the design and estimation phases. We propose a new algorithm, PGAE, that addresses these challenges by adaptively making treatment decisions, estimating treatment effects, and drawing valid post-experiment inference. PGAE combines ideas from Bayesian statistics, dynamic programming, and sample splitting. Using synthetic experiments on real datasets from multiple domains, we demonstrate that, compared to benchmarks, our proposed solutions reduce the opportunity cost of the experiments by 50% and 70% for fixed-sample-size and sequential experiments, respectively.
We consider the problem of variance reduction in randomized controlled trials, through the use of covariates that are correlated with the outcome but independent of the treatment. We propose a machine learning regression-adjusted treatment effect estimator, which we call MLRATE. MLRATE uses machine learning predictors of the outcome to reduce estimator variance. It employs cross-fitting to avoid overfitting biases, and we prove consistency and asymptotic normality under general conditions. MLRATE is robust to poor predictions from the machine learning step: if the predictions are uncorrelated with the outcomes, the estimator performs asymptotically no worse than the standard difference-in-means estimator, while if the predictions are highly correlated with the outcomes, the efficiency gains are large. In A/A tests, for a set of 48 outcome metrics commonly monitored in Facebook experiments, the estimator has over 70% lower variance than the simple difference-in-means estimator, and about 19% lower variance than the common univariate procedure which adjusts only for pre-experiment values of the outcome.
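The cross-fitting logic can be sketched end to end: predictions for each half of the sample come from a model trained on the other half, and the effect is then estimated from the prediction-adjusted outcomes. Here a one-variable least-squares line stands in for the ML predictor, and an adjusted difference in means stands in for the paper's OLS step; the data are synthetic with a treatment effect of exactly 1.

```python
def linfit(xs, ys):
    """One-variable least squares; returns a prediction function."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    var = sum((xi - mx) ** 2 for xi in xs)
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(xs, ys)) / var
    return lambda v, b=my - slope * mx, s=slope: b + s * v

def mlrate(x, a, y):
    n = len(y)
    folds = [range(n // 2), range(n // 2, n)]
    g = [0.0] * n
    for k, fold in enumerate(folds):
        train = folds[1 - k]                       # fit on the other half
        model = linfit([x[i] for i in train], [y[i] for i in train])
        for i in fold:
            g[i] = model(x[i])                     # cross-fitted prediction
    # adjusted difference in means of the residuals y - g:
    r1 = [y[i] - g[i] for i in range(n) if a[i] == 1]
    r0 = [y[i] - g[i] for i in range(n) if a[i] == 0]
    return sum(r1) / len(r1) - sum(r0) / len(r0)

x = [0, 0, 1, 1, 2, 2, 3, 3] * 2                  # pre-experiment covariate
a = [0, 1] * 8                                    # randomized treatment
y = [2 * xi + ai for xi, ai in zip(x, a)]         # true effect = 1.0
print(mlrate(x, a, y))  # 1.0
```

Because each unit's prediction never uses that unit's own outcome, the adjustment cannot overfit its way into bias, which is what licenses the robustness property in the abstract.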