We study the problem of off-policy evaluation from batched contextual bandit data with multidimensional actions, often termed slates. The problem is common to recommender systems and user-interface optimization, and it is particularly challenging because of the combinatorially-sized action space. Swaminathan et al. (2017) proposed the pseudoinverse (PI) estimator under the assumption that the conditional mean rewards are additive in actions. Using control variates, we consider a large class of unbiased estimators that includes as special cases the PI estimator and (asymptotically) its self-normalized variant. By optimizing over this class, we obtain new estimators with risk improvements over both the PI and self-normalized PI estimators. Experiments with real-world recommendation data as well as synthetic data validate these improvements in practice.
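To see schematically how control variates generate such a class of unbiased estimators (a single-weight caricature; the paper works with the slate-specific pseudoinverse weights), note that an importance weight has unit mean under the logging policy, so any multiple of $(w-1)$ can be subtracted without introducing bias:
\[
\hat V_\lambda = \frac{1}{n}\sum_{i=1}^n \big( w_i r_i - \lambda (w_i - 1) \big),
\qquad \mathbb{E}[w_i] = 1 \;\Rightarrow\; \mathbb{E}[\hat V_\lambda] = V(\pi).
\]
The variance-minimizing choice is $\lambda^\star = \operatorname{Cov}(w r, w)/\operatorname{Var}(w)$, while taking $\lambda$ equal to the policy value itself recovers, to first order, the self-normalized estimator.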
Off-policy evaluation (OPE) in contextual bandits has seen rapid adoption in real-world systems, since it enables offline evaluation of new policies using only historical log data. Unfortunately, when the number of actions is large, existing OPE estimators (most of which are based on inverse propensity score weighting) degrade severely and can suffer from extreme bias and variance. This foils the use of OPE in many applications, from recommender systems to language models. To overcome this issue, we propose a new OPE estimator that leverages marginalized importance weights when action embeddings provide structure in the action space. We characterize the bias, variance, and mean squared error of the proposed estimator and analyze the conditions under which the action embeddings provide statistical benefits over conventional estimators. Beyond the theoretical analysis, we find that the empirical performance improvements can be substantial, enabling reliable OPE even when existing estimators collapse due to a large number of actions.
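A minimal sketch of the marginalization idea, under the simplifying assumption that each action maps deterministically to a discrete embedding (function and variable names are illustrative, not the paper's implementation):

```python
import numpy as np

def marginalized_weights(pi_target, pi_logging, embedding):
    """Importance weights computed on action embeddings rather than raw actions.

    pi_target, pi_logging: arrays of shape (n_actions,) with action probabilities
        for a fixed context x.
    embedding: array of shape (n_actions,) assigning each action a discrete
        embedding id (assumed deterministic here for simplicity).

    Returns, for each action a, the ratio p(e(a) | target) / p(e(a) | logging),
    which replaces the vanilla ratio pi_target[a] / pi_logging[a] and stays
    bounded when many actions share an embedding.
    """
    weights = np.empty_like(pi_target, dtype=float)
    for e in np.unique(embedding):
        mask = embedding == e
        weights[mask] = pi_target[mask].sum() / pi_logging[mask].sum()
    return weights

# Toy usage: 6 actions collapsed into 2 embedding clusters.
pi_t = np.array([0.4, 0.1, 0.1, 0.2, 0.1, 0.1])
pi_0 = np.array([1 / 6] * 6)
emb = np.array([0, 0, 0, 1, 1, 1])
print(marginalized_weights(pi_t, pi_0, emb))  # per-cluster ratios
```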
Counterfactual reasoning from logged data has become increasingly important for many applications such as web advertising or healthcare. In this paper, we address the problem of learning stochastic policies with continuous actions from the viewpoint of counterfactual risk minimization (CRM). While the CRM framework is appealing and well studied for discrete actions, the continuous action case raises new challenges about modelization, optimization, and offline model selection with real data, which turns out to be particularly challenging. Our paper contributes to these three aspects of the CRM estimation pipeline. First, we introduce a modelling strategy based on a joint kernel embedding of contexts and actions, which overcomes the shortcomings of previous discretization approaches. Second, we empirically show that the optimization aspect of counterfactual learning is important, and we demonstrate the benefits of proximal point algorithms and differentiable estimators. Finally, we propose an evaluation protocol for offline policies in real-world logged systems, which is challenging since policies cannot be replayed on test data, and we release a new large-scale dataset along with multiple synthetic, yet realistic, evaluation setups.
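As a hedged sketch of a counterfactual risk minimization objective with continuous actions, using a generic clipped importance-weighting form and a Gaussian policy (illustrative only; the paper's joint kernel embedding of contexts and actions is not reproduced here):

```python
import numpy as np

def crm_objective(theta, contexts, actions, losses, logging_pdf, clip=10.0):
    """Clipped importance-weighted empirical risk for a Gaussian policy.

    theta: (w, log_sigma) parameterizing a policy pi(a | x) = N(x @ w, sigma^2).
    contexts: (n, d) array; actions, losses: (n,) arrays from the logging policy.
    logging_pdf: (n,) array of logging densities pi0(a_i | x_i).
    """
    w, log_sigma = theta
    sigma = np.exp(log_sigma)
    mean = contexts @ w
    target_pdf = np.exp(-0.5 * ((actions - mean) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    ratios = np.minimum(target_pdf / logging_pdf, clip)  # clipping controls variance
    return np.mean(ratios * losses)
```

A smooth surrogate of the hard clip would keep this objective differentiable, which is the kind of optimization consideration studied alongside proximal point algorithms.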
The problem of estimating a linear functional based on observational data is canonical in both the causal inference and bandit literatures. We analyze a broad class of two-stage procedures that first estimate the treatment-effect function and then use this quantity to estimate the linear functional. We prove non-asymptotic upper bounds on the mean-squared error of such procedures: these bounds reveal that, to obtain non-asymptotically optimal procedures, the error in estimating the treatment effect should be minimized in a certain weighted $L^2$-norm. We analyze a two-stage procedure based on constrained regression in this weighted norm, and establish its instance-dependent optimality in finite samples via matching non-asymptotic local minimax lower bounds. These results show that the optimal non-asymptotic risk, in addition to depending on the asymptotically efficient variance, depends on the weighted-norm distance between the true outcome function and its approximation by the richest function class supported by the sample size.
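In generic terms (suppressing the specific functional studied in the paper), the two-stage template first fits the treatment-effect/outcome function $\hat\mu$ and then plugs it into the linear functional,
\[
\tau(\mu) = \mathbb{E}\big[\omega(X)\,\mu(X)\big], \qquad
\hat\tau = \frac{1}{n}\sum_{i=1}^n \omega(X_i)\,\hat\mu(X_i),
\]
and the message of the bounds is that $\hat\mu$ should be accurate in a norm weighted to reflect how its errors propagate through $\omega$, rather than in an unweighted $L^2$ sense.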
Off-policy evaluation and learning (OPE/L) use offline observational data to make better decisions, which is crucial in applications where online experimentation is limited. However, depending entirely on logged data, OPE/L is sensitive to environment distribution shifts, i.e., discrepancies between the data-generating environment and the one where policies are deployed. \citet{si2020distributional} proposed distributionally robust OPE/L (DROPE/L) to address this, but the proposal relies on inverse-propensity weighting, whose estimation error and regret deteriorate if the propensities are estimated nonparametrically, and whose variance is suboptimal even if they are not. For standard, non-robust OPE/L, this is solved by doubly robust (DR) methods, but they do not naturally extend to the more complex DROPE/L, which involves a worst-case expectation. In this paper, we propose the first DR algorithms for DROPE/L with KL-divergence uncertainty sets. For evaluation, we propose Localized Doubly Robust DROPE (LDR$^2$OPE) and show that it achieves semiparametric efficiency under weak product rate conditions. Thanks to a localization technique, LDR$^2$OPE only requires fitting a small number of regressions, just like DR methods for standard OPE. For learning, we propose Continuum Doubly Robust DROPL (CDR$^2$OPL) and show that, under a product rate condition involving a continuum of regressions, it enjoys a fast regret rate of $\mathcal{O}(N^{-1/2})$ even when the unknown propensities are estimated nonparametrically. We empirically validate our algorithms in simulations and further extend our results to uncertainty sets based on general $f$-divergences.
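The worst-case expectation over a KL ball admits a standard one-dimensional dual, which is what the regressions ultimately target; schematically, with $W$ denoting the (importance-weighted or doubly robust) reward score under the target policy and $\delta$ the KL radius,
\[
\inf_{Q:\,\mathrm{KL}(Q\Vert P)\le\delta} \mathbb{E}_Q[W]
= \sup_{\alpha>0}\Big\{ -\alpha \log \mathbb{E}_P\big[e^{-W/\alpha}\big] - \alpha\,\delta \Big\},
\]
so estimating the robust value reduces to estimating exponentially tilted moments of $W$ and optimizing over the scalar $\alpha$. (This is the generic KL duality, not the paper's exact estimating equations.)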
Off-policy evaluation (OPE) is concerned with evaluating a new target policy using offline data generated by a potentially different behavior policy. It is critical in a number of sequential decision making problems ranging from healthcare to technology industries. Most of the work in existing literature is focused on evaluating the mean outcome of a given policy, and ignores the variability of the outcome. However, in a variety of applications, criteria other than the mean may be more sensible. For example, when the reward distribution is skewed and asymmetric, quantile-based metrics are often preferred for their robustness. In this paper, we propose a doubly-robust inference procedure for quantile OPE in sequential decision making and study its asymptotic properties. In particular, we propose utilizing state-of-the-art deep conditional generative learning methods to handle parameter-dependent nuisance function estimation. We demonstrate the advantages of this proposed estimator through both simulations and a real-world dataset from a short-video platform. In particular, we find that our proposed estimator outperforms classical OPE estimators for the mean in settings with heavy-tailed reward distributions.
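For intuition, in a single-stage, pure importance-weighting caricature (the paper's construction is sequential and doubly robust), the $\tau$-quantile of the reward under the target policy $\pi$ solves a weighted check-loss problem over the behavior-policy distribution $\mu$:
\[
q_\tau(\pi) \in \arg\min_{q}\ \mathbb{E}_{\mu}\!\left[ \frac{\pi(A\mid S)}{\mu(A\mid S)}\,\rho_\tau(R - q) \right],
\qquad \rho_\tau(u) = u\,\big(\tau - \mathbf{1}\{u < 0\}\big),
\]
and the doubly robust procedure augments such weighting with estimated conditional reward distributions so that validity is retained if either component is well estimated.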
We explore a new model of bandit experiments in which a potentially nonstationary sequence of contexts influences the arms' performance. Context-unaware algorithms risk confounding, while those that perform correct inference face information delays. Our main insight is that an algorithm we call deconfounded Thompson sampling strikes a delicate balance between adaptivity and robustness. Its adaptivity yields optimal efficiency in easy stationary instances, yet it displays surprising resilience to hard nonstationary instances that cause other adaptive algorithms to fail.
High-quality data plays a central role in ensuring the accuracy of policy evaluation. This paper initiates the study of efficient and safe data collection for bandit policy evaluation. We formulate the problem and study several of its representative variants. For each variant, we analyze its statistical properties, derive the corresponding exploration policy, and design an efficient algorithm for computing it. Both theoretical analysis and experiments support the usefulness of the proposed methods.
This paper develops confidence interval constructions for heterogeneous treatment effects in the context of multi-stage experiments with $n$ samples and a high-dimensional, $d$, set of confounders. Our focus is on the case $d \gg n$, but the results obtained also apply to low-dimensional settings. We show that the bias of regularized estimation, unavoidable in high-dimensional covariate spaces, can be removed with a simple doubly robust score. In this way, no additional debiasing is needed, and we obtain root-$n$ inference results while allowing multi-stage interdependence between treatments and covariates. No memory property is assumed either: treatment may depend on all previous treatment assignments as well as all previous multi-stage confounders. Our results rely on certain sparsity assumptions on the potential dependencies. We uncover new product rate conditions needed for robust inference with dynamic treatments.
Offline policy optimization could have a large impact on many real-world decision-making problems, since online learning may be infeasible in many applications. Importance sampling and its variants are a commonly used class of estimators in off-policy evaluation, and such estimators typically do not require assumptions on the properties and representational capability of the function classes used for value functions or decision-process models. In this paper, we identify an important overfitting phenomenon in optimizing the importance-weighted return, in which the learned policy can essentially avoid making aligned decisions for part of the initial-state space. We propose an algorithm that avoids this overfitting through a new per-state-neighborhood normalization constraint, and provide a theoretical justification for the proposed algorithm. We also show the limitations of previous attempts at this approach. We test our algorithm on a healthcare-inspired simulator, a logged dataset collected from real hospitals, and continuous-control tasks. These experiments show less overfitting and better test performance for the proposed method compared to state-of-the-art batch reinforcement learning algorithms.
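For reference, a minimal sketch of the trajectory-wise importance-weighted return that such methods optimize, i.e. the quantity the paper shows can be overfit (the proposed per-state-neighborhood normalization constraint is not reproduced here):

```python
import numpy as np

def is_weighted_return(trajectories, pi_target, pi_behavior):
    """Average importance-weighted return over logged trajectories.

    trajectories: list of trajectories, each a list of (state, action, reward) tuples.
    pi_target(s, a), pi_behavior(s, a): callables returning action probabilities.
    """
    values = []
    for traj in trajectories:
        weight, ret = 1.0, 0.0
        for s, a, r in traj:
            weight *= pi_target(s, a) / pi_behavior(s, a)
            ret += r
        values.append(weight * ret)
    return float(np.mean(values))
```

Maximizing this quantity over a policy class is where the overfitting arises: the optimizer can inflate the weights on a few favorable trajectories while effectively ignoring part of the initial-state space.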
Cross-validation is a widely used technique to estimate prediction error, but its behavior is complex and not fully understood. Ideally, one would like to think that cross-validation estimates the prediction error of the model at hand, fit to the training data. We prove that this is not the case for the linear model fit by ordinary least squares; rather, it estimates the average prediction error of models fit on other unseen training sets drawn from the same population. We further show that this phenomenon occurs for most popular estimates of prediction error, including data splitting, bootstrapping, and Mallows' Cp. Next, the standard confidence intervals for prediction error derived from cross-validation may have coverage far below the desired level. Because each data point is used both for training and for testing, there are correlations among the measured accuracies in each fold, and so the usual estimate of variance is too small. We introduce a nested cross-validation scheme to estimate this variance more accurately, and show empirically that this modification leads to intervals with approximately correct coverage in many examples where traditional cross-validation intervals fail.
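A minimal sketch of the naive construction being critiqued: the K-fold cross-validation point estimate together with the usual standard error that treats per-point errors as independent, which is the interval shown to under-cover because errors are correlated across folds (illustrative code, not the paper's nested scheme):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold

def naive_cv_interval(X, y, n_splits=10, z=1.96):
    """K-fold CV estimate of squared prediction error with the naive interval."""
    errs = []
    for train, test in KFold(n_splits=n_splits, shuffle=True, random_state=0).split(X):
        model = LinearRegression().fit(X[train], y[train])
        errs.extend((y[test] - model.predict(X[test])) ** 2)
    errs = np.asarray(errs)
    se = errs.std(ddof=1) / np.sqrt(len(errs))  # ignores correlation across folds
    return errs.mean(), (errs.mean() - z * se, errs.mean() + z * se)
```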
Testing the significance of a variable or group of variables $X$ for predicting a response $Y$, given additional covariates $Z$, is a ubiquitous task in statistics. A simple but common approach is to specify a linear model, and then test whether the regression coefficient for $X$ is non-zero. However, when the model is misspecified, the test may have poor power, for example when $X$ is involved in complex interactions, or lead to many false rejections. In this work we study the problem of testing the model-free null of conditional mean independence, i.e. that the conditional mean of $Y$ given $X$ and $Z$ does not depend on $X$. We propose a simple and general framework that can leverage flexible nonparametric or machine learning methods, such as additive models or random forests, to yield both robust error control and high power. The procedure involves using these methods to perform regressions, first to estimate a form of projection of $Y$ on $X$ and $Z$ using one half of the data, and then to estimate the expected conditional covariance between this projection and $Y$ on the remaining half of the data. While the approach is general, we show that a version of our procedure using spline regression achieves what we show is the minimax optimal rate in this nonparametric testing problem. Numerical experiments demonstrate the effectiveness of our approach both in terms of maintaining Type I error control, and power, compared to several existing approaches.
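A rough caricature of how such a two-stage statistic could be assembled with off-the-shelf regressions (function names and the exact studentization are illustrative; the paper's procedure differs in its details and in how the projection is constructed):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def two_stage_statistic(X, Z, Y, seed=0):
    """Caricature of a regression-based test of conditional mean independence.

    X: (n, dx), Z: (n, dz), Y: (n,) arrays.
    Half 1: learn a projection f(X, Z) of Y onto (X, Z).
    Half 2: estimate the expected conditional covariance between f(X, Z) and Y
    given Z via residual products, then studentize.
    """
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(Y))
    half1, half2 = idx[: len(Y) // 2], idx[len(Y) // 2:]

    XZ = np.column_stack([X, Z])
    f_hat = RandomForestRegressor(random_state=0).fit(XZ[half1], Y[half1])

    f2 = f_hat.predict(XZ[half2])
    m_hat = RandomForestRegressor(random_state=0).fit(Z[half2], Y[half2])  # approximates E[Y | Z]
    g_hat = RandomForestRegressor(random_state=0).fit(Z[half2], f2)        # approximates E[f | Z]

    prods = (f2 - g_hat.predict(Z[half2])) * (Y[half2] - m_hat.predict(Z[half2]))
    return np.sqrt(len(prods)) * prods.mean() / prods.std(ddof=1)  # compare to N(0, 1)
```

Under the null, and up to the simplifications made here, the statistic should be approximately standard normal; large values suggest that the conditional mean of Y given X and Z does depend on X.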
We study a pricing setting where each customer is offered a contextualized price based on customer and/or product features that are predictive of the customer's valuation for that product. Often only historical sales records are available, in which we observe whether each customer purchased the product at the prescribed price, but not the customer's true valuation. The data are therefore influenced by the historical sales policy, which makes it difficult to (a) estimate the future loss/regret of a new policy without the possibility of running real experiments, and (b) optimize new policies for downstream tasks such as revenue management. We study how to formulate loss functions that can be used to optimize pricing policies directly, rather than going through an intermediate demand-estimation stage, which can be biased in practice due to model misspecification, regularization, or poor calibration. While existing approaches have been proposed for the case where valuation data is available, we propose loss functions for the observational-data setting. To achieve this, we adapt ideas from machine learning with corrupted labels: each observed customer outcome (purchase or no purchase at the prescribed price) can be viewed as a (known) probabilistic transformation of the customer's valuation. From this transformation we derive a class of suitable unbiased loss functions. Within this class, we identify minimum-variance estimators and those that are robust to poor demand-function estimation, and provide guidance on when the estimated demand function is useful. Furthermore, we show that, when applied to our contextual pricing setting, estimators popular in the off-policy evaluation literature fall within this class of loss functions, and we offer managerial insights on when each estimator is likely to perform well in practice.
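As one schematic member of such a class (not necessarily the paper's preferred estimator): with logged price $P_i$, purchase indicator $Y_i \in \{0,1\}$, and logging density $e(P_i \mid X_i)$, an inverse-propensity-corrected revenue objective for a pricing policy $\pi$ is
\[
\hat R(\pi) = \frac{1}{n}\sum_{i=1}^n \frac{\mathbf{1}\{P_i = \pi(X_i)\}}{e(P_i \mid X_i)}\, P_i\, Y_i ,
\]
which is unbiased for the expected revenue of $\pi$ when the logging propensities are correct; the corrupted-label viewpoint additionally uses the structure $Y_i = \mathbf{1}\{V_i \ge P_i\}$, with $V_i$ the latent valuation, to derive further unbiased losses with lower variance or more robustness.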
This paper studies offline policy learning, which aims at utilizing observations collected a priori (from either fixed or adaptively evolving behavior policies) to learn an optimal individualized decision rule that achieves the best overall outcomes for a given population. Existing policy learning methods rely on a uniform overlap assumption, i.e., the propensities of exploring all actions for all individual characteristics are lower bounded in the offline dataset; put differently, the performance of the existing methods depends on the worst-case propensity in the offline dataset. As one has no control over the data collection process, this assumption can be unrealistic in many situations, especially when the behavior policies are allowed to evolve over time with diminishing propensities for certain actions. In this paper, we propose a new algorithm that optimizes lower confidence bounds (LCBs) -- instead of point estimates -- of the policy values. The LCBs are constructed using knowledge of the behavior policies for collecting the offline data. Without assuming any uniform overlap condition, we establish a data-dependent upper bound for the suboptimality of our algorithm, which only depends on (i) the overlap for the optimal policy, and (ii) the complexity of the policy class we optimize over. As an implication, for adaptively collected data, we ensure efficient policy learning as long as the propensities for optimal actions are lower bounded over time, while those for suboptimal ones are allowed to diminish arbitrarily fast. In our theoretical analysis, we develop a new self-normalized type concentration inequality for inverse-propensity-weighting estimators, generalizing the well-known empirical Bernstein's inequality to unbounded and non-i.i.d. data.
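A minimal sketch of the pessimism principle for a small finite policy class (illustrative only; the confidence width below is crude, whereas the paper develops a self-normalized concentration inequality tailored to inverse-propensity-weighting estimators):

```python
import numpy as np

def lcb_policy_selection(contexts, actions, rewards, propensities, policies, delta=0.05):
    """Pick the policy maximizing a lower confidence bound on its IPW value.

    policies: list of callables pi(context) -> action.
    propensities: logged probabilities of the taken actions under the behavior policy.
    """
    n = len(rewards)
    best, best_lcb = None, -np.inf
    for pi in policies:
        match = np.array([float(pi(x) == a) for x, a in zip(contexts, actions)])
        vals = match / propensities * rewards
        width = np.sqrt(np.log(1.0 / delta)) * vals.std(ddof=1) / np.sqrt(n)  # crude width
        lcb = vals.mean() - width
        if lcb > best_lcb:
            best, best_lcb = pi, lcb
    return best, best_lcb
```

Because the bound is loose exactly where the logged data are uninformative, policies that rely on poorly explored actions are penalized, which is how a uniform overlap requirement is avoided.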
Statistical risk assessments inform consequential decisions such as pretrial release in criminal justice, and loan approvals in consumer finance. Such risk assessments make counterfactual predictions, predicting the likelihood of an outcome under a proposed decision (e.g., what would happen if we approved this loan?). A central challenge, however, is that there may have been unmeasured confounders that jointly affected past decisions and outcomes in the historical data. This paper proposes a tractable mean outcome sensitivity model that bounds the extent to which unmeasured confounders could affect outcomes on average. The mean outcome sensitivity model partially identifies the conditional likelihood of the outcome under the proposed decision, popular predictive performance metrics (e.g., accuracy, calibration, TPR, FPR), and commonly-used predictive disparities. We derive their sharp identified sets, and we then solve three tasks that are essential to deploying statistical risk assessments in high-stakes settings. First, we propose a doubly-robust learning procedure for the bounds on the conditional likelihood of the outcome under the proposed decision. Second, we translate our estimated bounds on the conditional likelihood of the outcome under the proposed decision into a robust, plug-in decision-making policy. Third, we develop doubly-robust estimators of the bounds on the predictive performance of an existing risk assessment.
In this paper we present a new way of predicting the performance of a reinforcement learning policy given historical data that may have been generated by a different policy. The ability to evaluate a policy from historical data is important for applications where the deployment of a bad policy can be dangerous or costly. We show empirically that our algorithm produces estimates that often have orders of magnitude lower mean squared error than existing methods; it makes more efficient use of the available data. Our new estimator is based on two advances: an extension of the doubly robust estimator (Jiang & Li, 2015), and a new way to mix between model based estimates and importance sampling based estimates.
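For context, the per-decision doubly robust recursion that the first advance builds on (the mixing between model-based and importance-sampling estimates is the separate, second advance and is not shown) can be written, for a trajectory of length $H$ with $\hat V_{\mathrm{DR}}^{(H+1)} = 0$, as
\[
\hat V_{\mathrm{DR}}^{(t)} = \hat V(s_t) + \rho_t\Big( r_t + \gamma\, \hat V_{\mathrm{DR}}^{(t+1)} - \hat Q(s_t, a_t) \Big),
\qquad \rho_t = \frac{\pi_e(a_t \mid s_t)}{\pi_b(a_t \mid s_t)},
\]
evaluated backwards in time, where $\hat Q$ and $\hat V$ come from an approximate model; the correction term keeps the estimate unbiased under correct propensities while the model reduces variance.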
Motivated by the poor performance of cross-validation in settings where data are scarce, we propose a novel estimator of the out-of-sample performance of a policy in data-driven optimization. Our approach exploits the sensitivity analysis of the optimization problem to estimate the gradient of the optimal objective value with respect to the amount of noise in the data, and uses the estimated gradient to debias the policy's in-sample performance. Unlike cross-validation techniques, our approach avoids sacrificing data for a test set and uses all of the data when training, and is hence well suited to settings where data are scarce. We prove bounds on the bias and variance of our estimator for optimization problems with uncertain linear objectives but known, possibly non-convex, feasible regions. For more specialized optimization problems whose feasible region is "weakly coupled" in a certain sense, we prove stronger results. Specifically, we provide explicit high-probability bounds on the error of our estimator that hold uniformly over a policy class and depend on the problem's dimension and the policy class's complexity. Our bounds show that, under mild conditions, the error of our estimator vanishes as the dimension of the optimization problem grows, even if the amount of available data remains small and constant. Said differently, we prove that our estimator performs well in the small-data, large-scale regime. Finally, we numerically compare our proposed method to state-of-the-art approaches through a case study on dispatching emergency medical response services using real data. Our method provides more accurate estimates of out-of-sample performance and learns better-performing policies.
We consider the off-policy evaluation (OPE) problem in contextual bandits, where the goal is to estimate the value of a target policy using data collected by a logging policy. The most popular approaches to OPE are variants of the doubly robust (DR) estimator obtained by combining a direct method (DM) estimate with a correction term involving the inverse propensity score (IPS). Existing algorithms primarily focus on strategies for reducing the variance of the DR estimator caused by large IPS. We propose a new approach, called the Doubly Robust with Information borrowing and Context-based switching (DR-IC) estimator, that focuses on reducing both bias and variance. The DR-IC estimator replaces the standard DM estimator with a parametric reward model that borrows information from "closer" contexts through a correlation structure that depends on the IPS. The DR-IC estimator also adaptively interpolates between this modified DM estimator and a modified DR estimator based on a context-specific switching rule. We provide provable guarantees on the performance of the DR-IC estimator. We also demonstrate the superior performance of the DR-IC estimator compared to state-of-the-art OPE algorithms on a number of benchmark problems.
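For orientation, here is a generic doubly robust estimate with a simple weight-threshold switch to the direct method; this is only a caricature of the switching idea, and the DR-IC estimator's information-borrowing reward model and context-specific rule are not reproduced:

```python
import numpy as np

def switch_dr_estimate(rewards, propensities, pi_target_probs, q_hat, v_hat, tau=10.0):
    """Generic DR value estimate that falls back to the direct method when the
    importance weight exceeds a threshold tau.

    rewards, propensities: logged rewards and logging probabilities of the taken actions.
    pi_target_probs: target-policy probabilities of the logged actions.
    q_hat: model estimate of E[r | x_i, a_i] at the logged context-action pairs.
    v_hat: model estimate of E_{a ~ pi}[r | x_i] for each logged context.
    """
    w = pi_target_probs / propensities
    dr_terms = v_hat + w * (rewards - q_hat)   # doubly robust correction
    return float(np.mean(np.where(w <= tau, dr_terms, v_hat)))
```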
We introduce a new setting, optimize-and-estimate structured bandits. Here, a policy must select a batch of arms, each characterized by its own context, that would allow it to both maximize reward and maintain an accurate (ideally unbiased) population estimate of the reward. This setting is inherent to many public and private sector applications and often requires handling delayed feedback, small data, and distribution shifts. We demonstrate its importance on real data from the United States Internal Revenue Service (IRS). The IRS performs yearly audits of the tax base. Two of its most important objectives are to identify suspected misreporting and to estimate the "tax gap" -- the global difference between the amount paid and true amount owed. Based on a unique collaboration with the IRS, we cast these two processes as a unified optimize-and-estimate structured bandit. We analyze optimize-and-estimate approaches to the IRS problem and propose a novel mechanism for unbiased population estimation that achieves rewards comparable to baseline approaches. This approach has the potential to improve audit efficacy, while maintaining policy-relevant estimates of the tax gap. This has important social consequences given that the current tax gap is estimated at nearly half a trillion dollars. We suggest that this problem setting is fertile ground for further research and we highlight its interesting challenges. The results of this and related research are currently being incorporated into the continual improvement of the IRS audit selection methods.
Modern longitudinal studies collect feature data at many timepoints, often of the same order as the sample size. Such studies are typically affected by dropout and positivity violations. We tackle these problems by generalizing the effects of recent incremental interventions (which shift propensity scores rather than set treatment values deterministically) to accommodate multiple outcomes and subject dropout. We give an identifying expression for incremental intervention effects when dropout is conditionally ignorable (without requiring treatment positivity), and derive the nonparametric efficiency bound for estimating such effects. We then present efficient nonparametric estimators, showing that they converge at fast parametric rates and yield uniform inferential guarantees, even when nuisance functions are estimated flexibly at slower rates. We also study the variance ratio of incremental intervention effects relative to more conventional deterministic effects in a novel infinite time horizon setting, where the number of timepoints can grow with the sample size, and show that incremental intervention effects yield near-exponential gains in statistical precision in this setting. Finally, we conclude with simulations and apply our methods in a study of the effect of low-dose aspirin on pregnancy outcomes.
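Concretely, an incremental intervention multiplies the odds of treatment rather than fixing the treatment value: if $\pi_t(x)$ denotes the observed propensity score at time $t$, the shifted propensity is
\[
q_t(x;\delta) = \frac{\delta\,\pi_t(x)}{\delta\,\pi_t(x) + 1 - \pi_t(x)}, \qquad \delta > 0,
\]
so $\delta = 1$ leaves the treatment process unchanged, while larger (smaller) $\delta$ nudges everyone toward (away from) treatment without requiring positivity; the paper generalizes effects defined through such shifts to handle multiple outcomes and subject dropout.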