Dynamic treatment regimes assign personalized treatments to patients sequentially over time based on their baseline information and time-varying covariates. In mobile health applications, these covariates are typically collected at different frequencies over a long time horizon. In this paper, we propose a deep spectral Q-learning algorithm, which integrates principal component analysis (PCA) with deep Q-learning to handle the mixed frequency data. In theory, we prove that the mean return under the estimated optimal policy converges to that under the optimal one and establish its rate of convergence. The usefulness of our proposal is further illustrated via simulations and an application to a diabetes dataset.
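The abstract's core idea, compressing a high-frequency covariate stream with PCA before feeding a Q-function, can be illustrated with a minimal sketch. Everything below is synthetic and hypothetical (the dimensions, the single fitted-Q regression step, and the use of scikit-learn's `PCA` and `MLPRegressor` are illustrative choices, not the paper's actual algorithm):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Synthetic mixed-frequency data: a low-frequency state (dim 3) plus a
# high-frequency covariate stream (50 measurements per decision point).
n, hf_dim, lf_dim, n_actions = 500, 50, 3, 2
hf = rng.normal(size=(n, hf_dim))           # high-frequency covariates
lf = rng.normal(size=(n, lf_dim))           # low-frequency covariates
actions = rng.integers(0, n_actions, size=n)
rewards = rng.normal(size=n)

# Step 1: compress the high-frequency block with PCA.
pca = PCA(n_components=5).fit(hf)
state = np.hstack([lf, pca.transform(hf)])  # reduced state representation

# Step 2: a single fitted-Q regression on the reduced state (a full
# algorithm would iterate this with bootstrapped Bellman targets).
X = np.hstack([state, actions.reshape(-1, 1)])
q_net = MLPRegressor(hidden_layer_sizes=(32,), max_iter=500,
                     random_state=0).fit(X, rewards)

def greedy_action(s):
    """Greedy policy: pick the action with the largest estimated Q-value."""
    qs = [q_net.predict(np.hstack([s, [a]]).reshape(1, -1))[0]
          for a in range(n_actions)]
    return int(np.argmax(qs))
```

The point of the PCA step is that the downstream Q-network only sees an 8-dimensional state instead of a 53-dimensional one, which is what makes long-horizon learning with mixed-frequency data tractable.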
Off-policy evaluation (OPE) is a method for estimating the return of a target policy using some pre-collected observational data generated by a potentially different behavior policy. In some cases, there may be unmeasured variables that can confound the action-reward or action-next-state relationships, rendering many existing OPE approaches ineffective. This paper develops an instrumental variable (IV)-based method for consistent OPE in confounded Markov decision processes (MDPs). Similar to single-stage decision making, we show that IV enables us to correctly identify the target policy's value in infinite horizon settings as well. Furthermore, we propose an efficient and robust value estimator and illustrate its effectiveness through extensive simulations and analysis of real data from a world-leading short-video platform.
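The single-stage intuition behind the IV approach can be shown with a toy Wald estimator on synthetic data. This is only the classical one-step IV idea the abstract says it generalizes, not the paper's infinite-horizon estimator; all variable names and data-generating choices are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 20_000

# Unmeasured confounder U affects both the action and the reward; the
# instrument Z shifts the action but has no direct effect on the reward.
U = rng.normal(size=n)
Z = rng.binomial(1, 0.5, size=n)
A = (0.8 * Z + 0.5 * U + rng.normal(size=n) > 0.4).astype(float)
true_effect = 1.5
R = true_effect * A + 2.0 * U + rng.normal(size=n)

# Naive regression of R on A is biased upward because U drives both.
naive = np.cov(A, R)[0, 1] / np.var(A)

# Wald / IV estimator: Cov(Z, R) / Cov(Z, A) cancels the confounding
# because Z is independent of U.
iv = np.cov(Z, R)[0, 1] / np.cov(Z, A)[0, 1]
```

Here `iv` recovers the true effect of 1.5 up to sampling noise while `naive` does not; the paper's contribution is showing that an analogous moment condition identifies policy values in confounded MDPs over an infinite horizon.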
Off-policy evaluation (OPE) is concerned with evaluating a new target policy using offline data generated by a potentially different behavior policy. It is critical in a number of sequential decision making problems, ranging from healthcare to technology industries. Most existing work focuses on evaluating the mean outcome of a given policy and ignores the variability of the outcome. However, in a variety of applications, criteria other than the mean may be more sensible. For example, when the reward distribution is skewed and asymmetric, quantile-based metrics are often preferred for their robustness. In this paper, we propose a doubly-robust inference procedure for quantile OPE in sequential decision making and study its asymptotic properties. In particular, we propose utilizing state-of-the-art deep conditional generative learning methods to handle parameter-dependent nuisance function estimation. We demonstrate the advantages of this proposed estimator through both simulations and a real-world dataset from a short-video platform. In particular, we find that our proposed estimator outperforms classical OPE estimators for the mean in settings with heavy-tailed reward distributions.
Reinforcement learning (RL) is one of the most vibrant research frontiers in machine learning and has been recently applied to solve a number of challenging problems. In this paper, we primarily focus on off-policy evaluation (OPE), one of the most fundamental topics in RL. In recent years, a number of OPE methods have been developed in the statistics and computer science literature. We provide a discussion on the efficiency bound of OPE, some of the existing state-of-the-art OPE methods, their statistical properties and some other related research directions that are currently actively explored.
This paper studies reinforcement learning (RL) in doubly inhomogeneous environments under temporal non-stationarity and subject heterogeneity. In a number of applications, it is commonplace to encounter datasets generated by system dynamics that may change over time and population, challenging high-quality sequential decision making. Nonetheless, most existing RL solutions require either temporal stationarity or subject homogeneity, which would result in sub-optimal policies if both assumptions were violated. To address both challenges simultaneously, we propose an original algorithm to determine the "best data chunks" that display similar dynamics over time and across individuals for policy learning, which alternates between most recent change point detection and cluster identification. Our method is general, and works with a wide range of clustering and change point detection algorithms. It is multiply robust in the sense that it takes multiple initial estimators as input and only requires one of them to be consistent. Moreover, by borrowing information over time and population, it allows us to detect weaker signals and has better convergence properties when compared to applying the clustering algorithm per time or the change point detection algorithm per subject. Empirically, we demonstrate the usefulness of our method through extensive simulations and a real data application.
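The alternation between most-recent change point detection and cluster identification can be sketched on a toy panel. This sketch substitutes simple stand-ins (a mean-shift score for change point detection, k-means for clustering) for whichever algorithms the method is actually instantiated with; the data, dimensions, and stopping rule are all illustrative:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)

# Synthetic panel: 20 subjects x 100 time points, two latent groups
# whose means shift in opposite directions at t = 60.
n_sub, T, tau_true = 20, 100, 60
groups = np.repeat([0, 1], n_sub // 2)
X = rng.normal(size=(n_sub, T))
X[groups == 0, tau_true:] += 2.0
X[groups == 1, tau_true:] -= 2.0

labels = rng.integers(0, 2, size=n_sub)  # crude initial clustering
tau = T // 2                             # crude initial change point

for _ in range(5):                       # alternate the two steps
    # Step 1: given clusters, pick the change point maximizing the
    # squared between-segment mean shift, summed over clusters.
    def score(t):
        s = 0.0
        for g in range(2):
            seg = X[labels == g]
            if seg.size:
                s += (seg[:, t:].mean() - seg[:, :t].mean()) ** 2
        return s
    tau = max(range(10, T - 10), key=score)
    # Step 2: given the change point, re-cluster on post-change data.
    labels = KMeans(n_clusters=2, n_init=10,
                    random_state=0).fit_predict(X[:, tau:])
```

This also illustrates the borrowing-of-information point: the change point score pools subjects within a cluster, and the clustering pools the post-change time points, so each step sees a stronger signal than a per-subject or per-time analysis would.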
We study off-policy evaluation (OPE) in partially observable MDPs (POMDPs) with general function approximation. Existing methods, such as sequential importance sampling estimators and fitted-Q evaluation, suffer from the curse of horizon in POMDPs. To circumvent this problem, we develop a novel model-free OPE method by introducing future-dependent value functions that take future proxies as inputs. Future-dependent value functions play a role similar to that of classical value functions in fully observable MDPs. We derive a new Bellman equation for future-dependent value functions as conditional moment equations that use history proxies as instrumental variables. We further propose a minimax learning method to learn future-dependent value functions using the new Bellman equation. We obtain PAC results, which imply that our OPE estimator is consistent as long as futures and histories contain sufficient information about latent states and Bellman completeness holds. Finally, we extend our method to learning of dynamics and establish the connection between our approach and the well-known spectral learning methods in POMDPs.
In many applications, a new policy needs to be evaluated offline before online deployment, making off-policy evaluation crucial. Most existing methods focus on the expected return, define the target parameter through averaging, and provide only a point estimator. In this paper, we develop a novel procedure to produce reliable interval estimators for a target policy's return starting from any initial state. Our proposal accounts for the variability of the return around its expectation and focuses on the individual effect, offering valid uncertainty quantification. Our main idea lies in designing a pseudo policy that generates subsamples as if they were sampled from the target policy, so that existing conformal prediction algorithms are applicable to prediction interval construction. Our method is justified by theory, synthetic data, and real data from a short-video platform.
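The conformal-prediction step at the end of this pipeline can be illustrated on its own. The sketch below applies standard split conformal prediction to a vector of returns; drawing those returns from a normal distribution is a placeholder for the paper's pseudo-policy subsampling construction, and the conformity score (absolute residual from the sample mean) is one common choice, not necessarily the paper's:

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in for returns generated by subsamples that mimic the target
# policy; here they are simply drawn from a normal distribution.
calib_returns = rng.normal(loc=5.0, scale=2.0, size=200)

# Split conformal interval at miscoverage level alpha, using absolute
# residuals from the sample mean as conformity scores.
alpha = 0.1
center = calib_returns.mean()
scores = np.abs(calib_returns - center)
k = int(np.ceil((len(scores) + 1) * (1 - alpha)))
q = np.sort(scores)[k - 1]           # finite-sample (1 - alpha) quantile
interval = (center - q, center + q)
```

By exchangeability, a fresh return from the same distribution falls inside `interval` with probability at least `1 - alpha`, which is exactly the kind of guarantee that distinguishes an interval estimator for the return from a point estimate of its mean.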
We consider reinforcement learning (RL) methods in offline domains without additional online data collection, such as mobile health applications. Most existing policy optimization algorithms in the computer science literature are developed in online settings where data are easy to collect or simulate. Their generalizations to mobile health applications with pre-collected offline datasets remain unclear. The aim of this paper is to develop a novel advantage learning framework in order to efficiently use pre-collected data for policy optimization. The proposed method takes an optimal Q-estimator computed by any existing state-of-the-art RL algorithm as input, and outputs a new policy whose value is guaranteed to converge at a faster rate than the policy derived based on the initial Q-estimator. Extensive numerical experiments are conducted to back up our theoretical findings. A Python implementation of our proposed method is available at https://github.com/leyuanheart/seal.
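The input/output relationship described here, taking an arbitrary Q-estimator and deriving a policy from its advantages, can be made concrete in a tabular toy example. This is only the generic advantage-and-greedy-policy construction the framework builds on, not the SEAL algorithm itself; the Q-table below is random and purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

# A toy tabular Q-estimator standing in for the "initial Q-estimator"
# produced by any off-the-shelf offline RL algorithm.
n_states, n_actions = 4, 2
Q0 = rng.normal(size=(n_states, n_actions))

# Advantage of each action relative to the greedy value V(s) = max_a Q(s, a).
V = Q0.max(axis=1, keepdims=True)
advantage = Q0 - V            # nonpositive; zero exactly at greedy actions

# The derived policy acts greedily, i.e. picks the zero-advantage action.
policy = advantage.argmax(axis=1)
```

The paper's contribution is in how the advantage function is re-estimated from the offline data so that the resulting policy's value converges faster than that of the naive greedy policy shown here.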
Policy evaluation based on A/B testing has attracted considerable interest in digital marketing, but such evaluation in ride-sourcing platforms (e.g., Uber and Didi) remains much less well studied, mainly due to the complex structure of their temporally and/or spatially dependent experiments. The aim of this paper is policy evaluation in ride-sourcing platforms, with the goal of establishing causal relationships between a platform's policies and outcomes of interest under switchback designs. We propose a novel potential-outcomes framework based on a temporal varying-coefficient decision process (VCDP) model to capture dynamic treatment effects in temporally dependent experiments. We further characterize the average treatment effect by decomposing it into the sum of a direct effect (DE) and an indirect effect (IE). We develop estimation and inference procedures for both DE and IE. In addition, we propose a spatio-temporal VCDP to handle spatio-temporally dependent experiments. For both VCDP models, we establish the statistical properties (e.g., weak convergence and asymptotic power) of the estimation and inference procedures. We conduct extensive simulations to investigate the finite-sample performance of the proposed estimation and inference procedures. We examine how the VCDP models can help improve policy evaluation for various dispatch and dispositioning policies in Didi.
This paper is concerned with offline construction of confidence intervals for a target policy's value, based on pre-collected observational data in infinite-horizon settings. Most existing works assume the absence of unmeasured variables that confound the observed actions. However, this assumption is likely to be violated in real applications such as healthcare and technology industries. In this paper, we show that with the aid of some auxiliary variables that mediate the effect of actions on the system dynamics, the target policy's value is identifiable in confounded Markov decision processes. Based on this result, we develop an efficient off-policy value estimator that is robust to potential model misspecification and provides rigorous uncertainty quantification. Our method is justified by theoretical results, simulations, and real datasets obtained from a ride-sharing company. A Python implementation of the proposed procedure is available at https://github.com/mamba413/cope.