Off-policy evaluation (OPE) in contextual bandits has seen rapid adoption in real-world systems, since it enables offline evaluation of new policies using only historic log data. Unfortunately, when the number of actions is large, existing OPE estimators -- most of which are based on inverse propensity score weighting -- degrade severely and can suffer from extreme bias and variance. This foils the use of OPE in many applications, from recommender systems to language models. To overcome this issue, we propose a new OPE estimator that leverages marginalized importance weights when action embeddings provide structure in the action space. We characterize the bias, variance, and mean squared error of the proposed estimator and analyze the conditions under which the action embeddings provide statistical benefits over conventional estimators. In addition to the theoretical analysis, we find that the empirical performance improvements are substantial, enabling reliable OPE even when existing estimators collapse due to a large number of actions.
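As a rough illustration of the marginalized importance weighting described in this abstract, the sketch below reweights logged rewards by the ratio of embedding marginals rather than by action-level propensities; the function and array names, the toy data, and the use of a discrete embedding are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def marginalized_ips(rewards, emb_ids, p_emb_target, p_emb_logging):
    """Toy marginalized importance-weighting estimate of a policy's value.

    rewards       : (n,) observed rewards from the logging policy
    emb_ids       : (n,) index of the (discrete) action-embedding cluster per round
    p_emb_target  : (n, d) p(e | x_i, target policy) for each round and embedding
    p_emb_logging : (n, d) p(e | x_i, logging policy)
    """
    rows = np.arange(len(rewards))
    w = p_emb_target[rows, emb_ids] / p_emb_logging[rows, emb_ids]  # marginal weights
    return np.mean(w * rewards)

# Tiny synthetic example: 5 rounds, 3 embedding clusters.
rng = np.random.default_rng(0)
n, d = 5, 3
rewards = rng.random(n)
emb_ids = rng.integers(0, d, size=n)
p_t = rng.dirichlet(np.ones(d), size=n)   # target embedding marginals
p_l = rng.dirichlet(np.ones(d), size=n)   # logging embedding marginals
print(marginalized_ips(rewards, emb_ids, p_t, p_l))
```

Because many actions can share one embedding, these marginal weights are typically far less extreme than per-action importance weights, which is the source of the variance reduction the abstract describes.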
Counterfactual reasoning from logged data has become increasingly important for many applications such as web advertising or healthcare. In this paper, we address the problem of learning stochastic policies with continuous actions from the viewpoint of counterfactual risk minimization (CRM). While the CRM framework is appealing and well studied for discrete actions, the continuous action case raises new challenges about modelization, optimization, and offline model selection with real data which turns out to be particularly challenging. Our paper contributes to these three aspects of the CRM estimation pipeline. First, we introduce a modelling strategy based on a joint kernel embedding of contexts and actions, which overcomes the shortcomings of previous discretization approaches. Second, we empirically show that the optimization aspect of counterfactual learning is important, and we demonstrate the benefits of proximal point algorithms and differentiable estimators. Finally, we propose an evaluation protocol for offline policies in real-world logged systems, which is challenging since policies cannot be replayed on test data, and we release a new large-scale dataset along with multiple synthetic, yet realistic, evaluation setups.
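For context, the sketch below shows the generic clipped importance-weighted risk that CRM approaches of this kind typically minimize; the kernel-embedded policy parameterization, the proximal-point optimization, and the offline model selection discussed in the abstract are not implemented here, and all names are illustrative.

```python
import numpy as np

def clipped_crm_risk(costs, target_density, logging_density, clip=10.0):
    """Generic clipped counterfactual risk estimate for a stochastic policy.

    costs           : (n,) observed costs (negative rewards) under the logging policy
    target_density  : (n,) density of the candidate policy at the logged action, pi_theta(a_i | x_i)
    logging_density : (n,) density of the logging policy at the logged action, pi_0(a_i | x_i)
    clip            : cap on the importance weights to control variance
    """
    w = np.minimum(target_density / logging_density, clip)
    return np.mean(w * costs)
```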
We study the problem of off-policy evaluation from batched contextual bandit data with multidimensional actions, often referred to as slates. The problem is common in recommender systems and user-interface optimization, and it is particularly challenging because of the combinatorially-sized action space. Swaminathan et al. (2017) proposed the pseudoinverse (PI) estimator under the assumption that the conditional mean rewards are additive in actions. Using control variates, we consider a large class of unbiased estimators that includes the PI estimator and (asymptotically) its self-normalized variant as special cases. By optimizing over this class, we obtain new estimators with risk improvements over both the PI and the self-normalized PI estimators. Experiments with real-world recommendation data as well as synthetic data validate these improvements in practice.
We consider the problem of off-policy evaluation (OPE) in contextual bandits, where the goal is to estimate the value of a target policy using data collected by a logging policy. The most popular approaches to OPE are variants of the doubly robust (DR) estimator, obtained by combining a direct method (DM) estimate with a correction term involving the inverse propensity score (IPS). Existing algorithms primarily focus on strategies to reduce the variance of the DR estimator arising from large IPS. We propose a new approach called the Doubly Robust with Information borrowing and Context-based switching (DR-IC) estimator that focuses on reducing both bias and variance. The DR-IC estimator replaces the standard DM estimator with a parametric reward model that borrows information from "closer" contexts through a correlation structure that depends on the IPS. The DR-IC estimator also adaptively interpolates between this modified DM estimator and a modified DR estimator based on a context-specific switching rule. We provide provable guarantees on the performance of the DR-IC estimator. We also demonstrate the superior performance of the DR-IC estimator compared to state-of-the-art OPE algorithms on a number of benchmark problems.
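For reference, here is a minimal sketch of the baseline doubly robust estimator that DR-IC builds on; the information-borrowing reward model and the context-based switching rule from the abstract are not sketched, and the array names are illustrative.

```python
import numpy as np

def doubly_robust_value(q_hat_pi, q_hat_logged, ips_w, rewards):
    """Toy doubly robust (DR) off-policy value estimate.

    q_hat_pi     : (n,) model prediction of E[r | x_i, target policy]
                   (e.g., sum over actions of pi(a|x_i) * q_hat(x_i, a))
    q_hat_logged : (n,) model prediction q_hat(x_i, a_i) at the logged action
    ips_w        : (n,) inverse propensity weights pi(a_i|x_i) / pi_0(a_i|x_i)
    rewards      : (n,) observed rewards
    """
    return np.mean(q_hat_pi + ips_w * (rewards - q_hat_logged))
```

Setting the model predictions to zero recovers the plain IPS estimator, and setting the weights to zero recovers the direct method.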
Reinforcement learning (RL) is one of the most vibrant research frontiers in machine learning and has been recently applied to solve a number of challenging problems. In this paper, we primarily focus on off-policy evaluation (OPE), one of the most fundamental topics in RL. In recent years, a number of OPE methods have been developed in the statistics and computer science literature. We provide a discussion on the efficiency bound of OPE, some of the existing state-of-the-art OPE methods, their statistical properties and some other related research directions that are currently actively explored.
Off-policy evaluation methods are important in recommendation systems and search engines, where data collected under an existing logging policy is used to estimate the performance of a new proposed policy. A common approach to this problem is weighting, where data is weighted by a density ratio between the probability of actions given contexts in the target and logged policies. In practice, two issues often arise. First, many problems have very large action spaces and we may not observe rewards for most actions, and so in finite samples we may encounter a positivity violation. Second, many recommendation systems are not probabilistic and so having access to logging and target policy densities may not be feasible. To address these issues, we introduce the featurized embedded permutation weighting estimator. The estimator computes the density ratio in an action embedding space, which reduces the possibility of positivity violations. The density ratio is computed leveraging recent advances in normalizing flows and density ratio estimation as a classification problem, in order to obtain estimates which are feasible in practice.
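A small sketch of the "density ratio estimation as classification" ingredient mentioned above, assuming samples of (context, action-embedding) pairs are available from both policies; the normalizing-flow embedding step is omitted and all names are illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def density_ratio_by_classification(xe_logging, xe_target):
    """Estimate w(x, e) = p_target(x, e) / p_logging(x, e) via a probabilistic classifier.

    xe_logging : (n0, d) context-embedding pairs sampled under the logging policy
    xe_target  : (n1, d) context-embedding pairs sampled (or simulated) under the target policy
    Returns a function mapping new (x, e) rows to estimated density ratios.
    """
    X = np.vstack([xe_logging, xe_target])
    y = np.concatenate([np.zeros(len(xe_logging)), np.ones(len(xe_target))])
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    prior_ratio = len(xe_logging) / len(xe_target)  # corrects for unequal sample sizes

    def ratio(xe_new):
        p1 = clf.predict_proba(xe_new)[:, 1]          # P(target class | x, e)
        return prior_ratio * p1 / np.clip(1.0 - p1, 1e-12, None)

    return ratio
```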
We study a pricing setting in which each customer is offered a price based on customer and/or product features that are predictive of the customer's valuation of that product. Often only historical sales records are available, where we observe whether each customer purchased the product at the prescribed price, but not the customer's true valuation. As a result, the data are influenced by the historical sales policy, which creates difficulties both in estimating the future loss/regret of a new policy without the possibility of running real experiments, and in optimizing new policies for downstream tasks such as revenue management. We study how to formulate loss functions that can be used to optimize pricing policies directly, rather than going through an intermediate demand-estimation stage, which can be biased in practice due to model misspecification, regularization, or poor calibration. While existing approaches have been proposed for settings where valuation data are available, we propose loss functions for the observational data setting. To achieve this, we adapt ideas from machine learning with corrupted labels, where we view each observed customer outcome (a purchase or no purchase at the prescribed price) as a (known) probabilistic transformation of the customer's valuation. From this transformation, we derive a class of suitable unbiased loss functions. Within this class, we identify the minimum-variance estimators and those that are robust to poor demand-function estimation, and provide guidance on when the estimated demand function is useful. Furthermore, we show that, when applied to our contextual pricing setting, estimators popular in the off-policy evaluation literature fall within this class of loss functions, and we provide managerial guidance on when each estimator is likely to perform well in practice.
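As one concrete member of the off-policy estimators that the abstract says fall within its class of loss functions, the sketch below gives a plain IPS revenue estimate for a new pricing policy on a discrete price grid; it is only an illustration under assumed names, not the loss family derived in the paper.

```python
import numpy as np

def ips_revenue_estimate(new_prices, logged_prices, logging_probs, purchased):
    """Toy IPS estimate of expected revenue for a new pricing policy (discrete price grid).

    new_prices    : (n,) price the new policy would prescribe for each logged customer
    logged_prices : (n,) price actually offered by the historical policy
    logging_probs : (n,) probability the historical policy offered that price given the context
    purchased     : (n,) 1 if the customer bought at the offered price, else 0
    """
    match = (new_prices == logged_prices).astype(float)  # only matching prices contribute
    revenue = logged_prices * purchased                  # realized revenue per customer
    return np.mean(match / logging_probs * revenue)
```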
We consider off-policy evaluation (OPE) in partially observable Markov decision processes (POMDPs), where the evaluation policy depends only on observable variables while the behavior policy depends on unobservable latent variables. Existing works either assume no unmeasured confounders, or focus on settings where both the observation and state spaces are tabular. As such, these methods suffer from either a large bias in the presence of unmeasured confounders, or a large variance in settings with continuous or large observation/state spaces. In this work, we propose novel identification methods for OPE in POMDPs with latent confounders, by introducing bridge functions that link the target policy's value and the observed data distribution. In fully observable MDPs, these bridge functions reduce to the familiar value functions and marginal density ratios between the evaluation and behavior policies. We next propose minimax estimation methods for learning these bridge functions. Our proposal permits general function approximation and is therefore applicable to settings with continuous or large observation/state spaces. Finally, we construct three estimators based on these estimated bridge functions, corresponding to a value function-based estimator, a marginalized importance sampling estimator, and a doubly robust estimator. Their nonasymptotic and asymptotic properties are investigated in detail.
We consider local kernel metric learning for off-policy evaluation (OPE) of deterministic policies in contextual bandits with continuous action spaces. Our work is motivated by practical scenarios where the target policy needs to be deterministic due to domain requirements, such as prescription of treatment dosage and duration in medicine. Although importance sampling (IS) provides a basic principle for OPE, it is ill-posed for the deterministic target policy with continuous actions. Our main idea is to relax the target policy and pose the problem as kernel-based estimation, where we learn the kernel metric in order to minimize the overall mean squared error (MSE). We present an analytic solution for the optimal metric, based on the analysis of bias and variance. Whereas prior work has been limited to scalar action spaces or kernel bandwidth selection, our work takes a step further being capable of vector action spaces and metric optimization. We show that our estimator is consistent, and significantly reduces the MSE compared to baseline OPE methods through experiments on various domains.
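A minimal sketch of the kernel relaxation this abstract starts from: the deterministic target action is smoothed by a kernel so that logged actions close to it contribute to the value estimate. An isotropic Gaussian kernel with a scalar bandwidth stands in for the learned metric, and all names are assumptions.

```python
import numpy as np

def kernel_ope_deterministic(rewards, logged_actions, target_actions, logging_pdf, bandwidth=0.5):
    """Toy kernel-relaxed IS estimate for a deterministic continuous-action target policy.

    rewards        : (n,) observed rewards
    logged_actions : (n, d) actions taken by the (stochastic) logging policy
    target_actions : (n, d) deterministic target actions pi(x_i)
    logging_pdf    : (n,) density of the logging policy at the logged action, pi_0(a_i | x_i)
    bandwidth      : scalar kernel bandwidth h (the paper instead learns a full metric)
    """
    d = logged_actions.shape[1]
    diff = (logged_actions - target_actions) / bandwidth
    # Isotropic Gaussian kernel, normalized to integrate to one over the action space.
    k = np.exp(-0.5 * np.sum(diff**2, axis=1)) / ((2 * np.pi) ** (d / 2) * bandwidth**d)
    return np.mean(k * rewards / logging_pdf)
```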
Addressing such diverse ends as safe alignment with human preferences and learning efficiency, a growing body of reinforcement learning research focuses on risk functionals that depend on the entire return distribution. Recent work on \emph{off-policy risk assessment} (OPRA) for contextual bandits introduced consistent estimators of the target policy's CDF of returns, along with finite-sample guarantees that hold simultaneously over all risk functionals. In this paper, we lift OPRA to Markov decision processes (MDPs), where importance sampling (IS) CDF estimators suffer large variance on longer trajectories due to small effective sample sizes. To mitigate these problems, we incorporate model-based estimation to develop the first doubly robust (DR) estimator of the CDF of returns in MDPs. This estimator has significantly lower variance and, when the model is well specified, achieves the Cramer-Rao variance lower bound. Moreover, for many risk functionals, the downstream estimates enjoy both lower bias and lower variance. In addition, we derive the first minimax lower bounds for off-policy CDF and risk estimation, which match our error bounds up to constant factors. Finally, we demonstrate the accuracy of our DR CDF estimates experimentally on several different environments.
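A toy version of the importance-sampling CDF estimator that this abstract takes as its starting point (the doubly robust, model-based variant it develops is not shown); all names are illustrative.

```python
import numpy as np

def is_return_cdf(returns, traj_weights, grid):
    """Toy importance-sampling estimate of the target policy's CDF of returns.

    returns      : (n,) return G_i of each logged trajectory
    traj_weights : (n,) cumulative importance weight prod_t pi(a_t|s_t) / pi_0(a_t|s_t)
    grid         : (m,) points t at which to evaluate F(t) = P_pi(G <= t)
    """
    indicators = returns[:, None] <= grid[None, :]             # (n, m) indicator matrix
    cdf = np.mean(traj_weights[:, None] * indicators, axis=0)  # unbiased but high variance
    # Clip to [0, 1]; a full implementation would also enforce monotonicity in t.
    return np.clip(cdf, 0.0, 1.0)
```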
High-quality data plays a central role in ensuring the accuracy of policy evaluation. This paper initiates the study of efficient and safe data collection for bandit policy evaluation. We formulate the problem and study several of its representative variants. For each variant, we analyze its statistical properties, derive the corresponding exploration policy, and design an efficient algorithm for computing it. Both theoretical analysis and experiments support the usefulness of the proposed methods.
Off-policy evaluation and learning (OPE/L) use offline observational data to make better decisions, which is crucial for applications where online experimentation is limited. However, since it depends entirely on logged data, OPE/L is sensitive to environment distribution shifts -- discrepancies between the data-generating environment and the one where the policy is deployed. Distributionally robust OPE/L (DROPE/L), proposed by \citet{si2020distributional}, addresses this problem, but the proposal relies on inverse propensity weighting, whose estimation error and regret deteriorate when the propensities are nonparametrically estimated, and whose variance is suboptimal even when they are not. For standard, non-robust OPE/L, this is addressed by doubly robust (DR) methods, but these do not naturally extend to the more complex DROPE/L, which involves a worst-case expectation. In this paper, we propose the first DR algorithms for DROPE/L with KL-divergence uncertainty sets. For evaluation, we propose Localized Doubly Robust DROPE (LDR$^2$OPE) and show that it achieves semiparametric efficiency under weak product rate conditions. Thanks to a localization technique, LDR$^2$OPE only requires fitting a small number of regressions, just like the DR methods for standard OPE. For learning, we propose Continuum Doubly Robust DROPL (CDR$^2$OPL) and show that, under a product rate condition involving a continuum of regressions, it enjoys a fast regret rate of $\mathcal{O}\left(n^{-1/2}\right)$ even when the unknown propensities are nonparametrically estimated. We empirically validate our algorithms in simulations and further extend our results to general $f$-divergence uncertainty sets.
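For intuition about the worst-case expectation involved in DROPE/L, the sketch below evaluates the standard convex dual of a KL-ball worst-case mean on a sample of (for example, importance-weighted) rewards; it is only the generic KL-DRO dual under assumed names, not the paper's LDR$^2$OPE or CDR$^2$OPL procedures.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def kl_worst_case_mean(samples, delta):
    """Worst-case mean of `samples` over distributions within KL-divergence `delta`
    of the empirical distribution, via the standard convex dual
        sup_{alpha > 0}  -alpha * log( mean(exp(-samples / alpha)) ) - alpha * delta.
    Samples are assumed bounded (e.g., rewards in [0, 1]) for numerical stability.
    """
    def neg_dual(alpha):
        return -(-alpha * np.log(np.mean(np.exp(-samples / alpha))) - alpha * delta)

    res = minimize_scalar(neg_dual, bounds=(1e-6, 1e6), method="bounded")
    return -res.fun
```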
Offline policy optimization could have a large impact on many real-world decision-making problems, as online learning may be infeasible in many applications. Importance sampling and its variants are a commonly used class of estimators in off-policy evaluation, and such estimators typically do not require assumptions about the properties and representational capacity of the value-function or decision-process model function classes. In this paper, we identify an important overfitting phenomenon in optimizing the importance-weighted return, in which the learned policy can essentially avoid making aligned decisions on part of the initial state space. We propose an algorithm to avoid this overfitting through a new per-state-neighborhood normalization constraint, and provide a theoretical justification for the proposed algorithm. We also show the limitations of previous attempts at this approach. We test our algorithm on a healthcare-inspired simulator, a logged dataset collected from real hospitals, and continuous control tasks. These experiments show that the proposed method exhibits less overfitting and better test performance compared with state-of-the-art batch reinforcement learning algorithms.
Off-policy evaluation (OPE) is a method for estimating the return of a target policy using some pre-collected observational data generated by a potentially different behavior policy. In some cases, there may be unmeasured variables that can confound the action-reward or action-next-state relationships, rendering many existing OPE approaches ineffective. This paper develops an instrumental variable (IV)-based method for consistent OPE in confounded Markov decision processes (MDPs). Similar to single-stage decision making, we show that IV enables us to correctly identify the target policy's value in infinite horizon settings as well. Furthermore, we propose an efficient and robust value estimator and illustrate its effectiveness through extensive simulations and analysis of real data from a world-leading short-video platform.
Off-Policy evaluation (OPE) is concerned with evaluating a new target policy using offline data generated by a potentially different behavior policy. It is critical in a number of sequential decision making problems ranging from healthcare to technology industries. Most of the work in existing literature is focused on evaluating the mean outcome of a given policy, and ignores the variability of the outcome. However, in a variety of applications, criteria other than the mean may be more sensible. For example, when the reward distribution is skewed and asymmetric, quantile-based metrics are often preferred for their robustness. In this paper, we propose a doubly-robust inference procedure for quantile OPE in sequential decision making and study its asymptotic properties. In particular, we propose utilizing state-of-the-art deep conditional generative learning methods to handle parameter-dependent nuisance function estimation. We demonstrate the advantages of this proposed estimator through both simulations and a real-world dataset from a short-video platform. In particular, we find that our proposed estimator outperforms classical OPE estimators for the mean in settings with heavy-tailed reward distributions.
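As a simple companion to the abstract above, the sketch below inverts an importance-weighted empirical CDF to read off a quantile of the target policy's outcome; it ignores the doubly robust machinery and the generative nuisance estimation described in the paper, and all names are illustrative.

```python
import numpy as np

def weighted_quantile(outcomes, weights, tau):
    """Read off the tau-quantile of the target policy's outcome from an
    importance-weighted sample (weights are self-normalized here for stability)."""
    order = np.argsort(outcomes)
    sorted_outcomes = outcomes[order]
    cdf = np.cumsum(weights[order]) / np.sum(weights)  # weighted empirical CDF
    idx = np.searchsorted(cdf, tau)                    # smallest index with F >= tau
    return sorted_outcomes[min(idx, len(sorted_outcomes) - 1)]
```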
The problem of estimating a linear functional based on observational data is canonical in both the causal inference and bandit literatures. We analyze a broad class of two-stage procedures that first estimate the treatment effect function and then use this quantity to estimate the linear functional. We prove non-asymptotic upper bounds on the mean-squared error of such procedures: these bounds reveal that, in order to obtain non-asymptotically optimal procedures, the error in estimating the treatment effect should be minimized in a particular weighted $L^2$-norm. We analyze a two-stage procedure based on constrained regression in this weighted norm, and establish its instance-dependent optimality in finite samples via matching non-asymptotic local minimax lower bounds. These results show that, beyond being governed by the asymptotic efficient variance, the optimal non-asymptotic risk also depends on the weighted-norm distance between the true outcome function and its approximation by the richest function class supported by the sample size.
This paper is concerned with constructing off-line confidence intervals for a target policy's value, based on pre-collected observational data in infinite-horizon settings. Most existing works assume the absence of unmeasured variables that confound the observed actions. However, this assumption is likely to be violated in real applications such as healthcare and technology industries. In this paper, we show that with some auxiliary variables that mediate the effect of actions on the system dynamics, the target policy's value is identifiable in a confounded Markov decision process. Based on this result, we develop an efficient off-policy value estimator that is robust to potential model misspecification and provides rigorous uncertainty quantification. Our method is justified by theoretical results, simulations, and real datasets obtained from a ride-sharing company. A Python implementation of the proposed procedure is available at https://github.com/mamba413/cope.
We study the estimation of causal parameters when not all confounders are observed but negative controls are available. Recent work has shown how negative controls can enable identification and efficient estimation via two so-called bridge functions. In this paper, we tackle the primary challenge to causal inference with negative controls: the identification and estimation of these bridge functions. Previous work has relied on completeness conditions on these functions to identify the causal parameters and required uniqueness assumptions in estimation, and has also focused on parametric estimation of the bridge functions. Instead, we provide a new identification strategy that avoids the completeness condition, and we provide new estimators for these functions based on minimax learning formulations. These estimators accommodate general function classes such as reproducing kernel Hilbert spaces and neural networks. We study finite-sample convergence results both for estimating the bridge functions themselves and for the final estimation of the causal parameters, under a variety of combinations of assumptions. We avoid uniqueness conditions on the bridge functions wherever possible.
We offer an experimental benchmark and empirical study for off-policy evaluation (OPE) in reinforcement learning, which is a key problem in many safety-critical applications. Given the increasing interest in deploying learning-based methods, recent OPE proposals have gained momentum, leading to a need for standardized empirical analyses. Our work takes a strong focus on diversity of experimental design to enable stress testing of OPE methods. We provide a comprehensive benchmarking suite to study the interplay of different attributes on method performance. We distill the results into a summarized set of guidelines for OPE in practice. Our software package, the Caltech OPE Benchmarking Suite (COBS), is open-sourced, and we invite interested researchers to contribute further to the benchmark.
Adequately assigning credit to actions for future outcomes based on their contributions is a long-standing open challenge in Reinforcement Learning. The assumptions of the most commonly used credit assignment method are disadvantageous in tasks where the effects of decisions are not immediately evident. Furthermore, this method can only evaluate actions that have been selected by the agent, making it highly inefficient. Still, no alternative methods have been widely adopted in the field. Hindsight Credit Assignment is a promising, but still unexplored candidate, which aims to solve the problems of both long-term and counterfactual credit assignment. In this thesis, we empirically investigate Hindsight Credit Assignment to identify its main benefits, and key points to improve. Then, we apply it to factored state representations, and in particular to state representations based on the causal structure of the environment. In this setting, we propose a variant of Hindsight Credit Assignment that effectively exploits a given causal structure. We show that our modification greatly decreases the workload of Hindsight Credit Assignment, making it more efficient and enabling it to outperform the baseline credit assignment method on various tasks. This opens the way to other methods based on given or learned causal structures.