In the reinforcement learning literature, there are many algorithms developed for either contextual bandit (CB) or Markov decision process (MDP) environments. However, when deploying reinforcement learning algorithms in the real world, even with domain expertise it is often difficult to know whether it is appropriate to treat a sequential decision-making problem as a CB or an MDP: that is, do actions affect future states, or only the immediate reward? Making the wrong assumption about the nature of the environment can lead to inefficient learning, or can even prevent the algorithm from learning an optimal policy, even with infinite data. In this work we develop an online algorithm that uses a Bayesian hypothesis testing approach to learn the nature of the environment. Our algorithm allows practitioners to incorporate prior knowledge about whether the environment is a CB or an MDP, and effectively interpolates between classic CB and MDP-based algorithms to mitigate the consequences of misspecifying the environment. We perform simulations and demonstrate that in CB settings our algorithm achieves lower regret than MDP-based algorithms, while in non-bandit MDP settings our algorithm is able to learn the optimal policy, often achieving regret comparable to that of MDP-based algorithms.
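To make the hypothesis-testing idea concrete, here is a minimal sketch (not the authors' implementation) of a Bayes-factor comparison between a bandit hypothesis, under which the next state ignores the action, and an MDP hypothesis, under which each state-action pair has its own next-state distribution, using Dirichlet-multinomial marginal likelihoods. All function names, the symmetric Dirichlet prior, and the prior probability of the MDP hypothesis are illustrative assumptions.

```python
import numpy as np
from scipy.special import gammaln

def log_dirichlet_multinomial(counts, alpha):
    """Log marginal likelihood of categorical counts under a symmetric Dirichlet(alpha) prior."""
    counts = np.asarray(counts, dtype=float)
    k = counts.size
    return (gammaln(k * alpha) - gammaln(counts.sum() + k * alpha)
            + np.sum(gammaln(counts + alpha)) - k * gammaln(alpha))

def posterior_prob_mdp(transition_counts, alpha=1.0, prior_mdp=0.5):
    """Return P(environment is an MDP | data) for transition_counts[s, a, s'].

    Bandit hypothesis: the next state ignores the action, so counts are pooled over actions.
    MDP hypothesis: each (s, a) pair has its own next-state distribution.
    """
    S, A, _ = transition_counts.shape
    log_bandit = sum(log_dirichlet_multinomial(transition_counts[s].sum(axis=0), alpha)
                     for s in range(S))
    log_mdp = sum(log_dirichlet_multinomial(transition_counts[s, a], alpha)
                  for s in range(S) for a in range(A))
    log_odds = (log_mdp + np.log(prior_mdp)) - (log_bandit + np.log(1.0 - prior_mdp))
    return 1.0 / (1.0 + np.exp(-log_odds))

# Example: 3 states, 2 actions, random fake transition counts.
counts = np.random.default_rng(0).integers(0, 10, size=(3, 2, 3))
print(posterior_prob_mdp(counts))
```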
In reinforcement learning the Q-values summarize the expected future rewards that the agent will attain. However, they cannot capture the epistemic uncertainty about those rewards. In this work we derive a new Bellman operator with associated fixed point we call the `knowledge values'. These K-values compress both the expected future rewards and the epistemic uncertainty into a single value, so that high uncertainty, high reward, or both, can yield high K-values. The key principle is to endow the agent with a risk-seeking utility function that is carefully tuned to balance exploration and exploitation. When the agent follows a Boltzmann policy over the K-values it yields a Bayes regret bound of $\tilde O(L \sqrt{S A T})$, where $L$ is the time horizon, $S$ is the total number of states, $A$ is the number of actions, and $T$ is the number of elapsed timesteps. We show deep connections of this approach to the soft-max and maximum-entropy strands of research in reinforcement learning.
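A small sketch of the Boltzmann action selection over K-values described above; the temperature and the K-value array are illustrative placeholders rather than the paper's tuned utility.

```python
import numpy as np

def boltzmann_policy(k_values, temperature=1.0, rng=None):
    """Sample an action with probability proportional to exp(K(s, a) / temperature).

    High expected return, high epistemic uncertainty, or both, translate into
    exponentially more probability mass on that action.
    """
    rng = np.random.default_rng() if rng is None else rng
    logits = np.asarray(k_values, dtype=float) / temperature
    logits -= logits.max()                      # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return rng.choice(len(probs), p=probs)

# Example: three actions whose K-values already fold in an uncertainty bonus.
print(boltzmann_policy([1.2, 0.7, 1.5], temperature=0.5))
```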
Dental disease is one of the most common chronic diseases despite being largely preventable. However, professional advice on optimal oral hygiene practices is often forgotten or abandoned by patients. Patients may therefore benefit from timely and personalized encouragement to engage in oral self-care behaviors. In this paper, we develop an online reinforcement learning (RL) algorithm for optimizing the delivery of mobile-based prompts to encourage oral hygiene behaviors. One of the main challenges in developing such an algorithm is ensuring that it accounts for the impact of current actions on the effectiveness of future actions (i.e., delayed effects), especially when the algorithm must run stably and autonomously in a constrained, real-world setting (i.e., with highly noisy, sparse data). We address this challenge by designing a quality reward that maximizes the desired health outcome (i.e., high-quality brushing) while minimizing user burden. We also describe a procedure for optimizing the hyperparameters of the reward by building a simulation environment test bed and evaluating candidates using the test bed. The RL algorithm discussed in this paper will be deployed in Oralytics, an oral self-care app that provides behavioral strategies to boost patient engagement in oral hygiene practices.
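As an illustration of the quality-reward idea (reward the desired health outcome while penalizing user burden), here is a hypothetical reward function; the variable names, target duration, and burden weight are assumptions, not the Oralytics specification.

```python
def quality_reward(brushing_seconds, prompt_sent, burden_weight=0.2, target_seconds=120):
    """Toy quality reward: reward high-quality brushing, penalize sending a prompt.

    brushing_seconds : observed brushing duration in the decision window
    prompt_sent      : 1 if a mobile prompt was delivered, else 0
    """
    brushing_quality = min(brushing_seconds, target_seconds) / target_seconds
    burden = burden_weight * prompt_sent
    return brushing_quality - burden

print(quality_reward(brushing_seconds=90, prompt_sent=1))
```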
The principle of optimism in the face of uncertainty is prevalent throughout sequential decision making problems such as multi-armed bandits and reinforcement learning (RL). To be successful, an optimistic RL algorithm must over-estimate the true value function (optimism), but not by so much that it is inaccurate (estimation error). In the tabular setting, many state-of-the-art methods produce the required optimism through approaches which become intractable when scaling to deep RL. We re-interpret these scalable optimistic model-based algorithms as solving a tractable noise-augmented MDP. This formulation achieves a competitive regret bound of $\tilde{\mathcal{O}}(|\mathcal{S}| H \sqrt{|\mathcal{A}| T})$ when augmenting with Gaussian noise, where $T$ is the total number of environment steps. We also explore how this trade-off changes in the deep RL setting, where we show empirically that estimation error is significantly more troublesome. However, we also show that if this error is reduced, optimistic model-based RL algorithms can match state-of-the-art performance in continuous control problems.
Many real-world reinforcement learning tasks require control of complex dynamical systems that involve both costly data acquisition processes and large state spaces. In cases where the transition dynamics can be readily evaluated at specified states (e.g., via a simulator), agents can operate in what is often referred to as planning with a \emph{generative model}. We propose the AE-LSVI algorithm for best-policy identification, a novel variant of the kernelized least-squares value iteration (LSVI) algorithm that combines optimism with pessimism for active exploration (AE). AE-LSVI provably identifies a near-optimal policy \emph{uniformly} over an entire state space and achieves polynomial sample complexity guarantees that are independent of the number of states. When specialized to the recently introduced offline contextual Bayesian optimization setting, our algorithm achieves improved sample complexity bounds. Experimentally, we demonstrate that AE-LSVI outperforms other RL algorithms in a variety of environments when robustness to the initial state is required.
We study posterior sampling approaches for efficient exploration in constrained reinforcement learning. As alternatives to existing algorithms, we propose two simple algorithms that are statistically more efficient, simpler to implement, and computationally cheaper. The first algorithm is based on a linear formulation of the CMDP, and the second exploits a saddle-point formulation of the CMDP. Our empirical results demonstrate that, despite its simplicity, posterior sampling achieves state-of-the-art performance and, in some cases, significantly outperforms optimistic algorithms.
We develop an extension of posterior sampling for reinforcement learning (PSRL) that is suited for a continuing agent-environment interface and integrates naturally into agent designs that scale to complex environments. The approach maintains a statistically plausible model of the environment and follows a policy that maximizes expected $\gamma$-discounted return in that model. At each time, with probability $1-\gamma$, the model is replaced by a sample from the posterior distribution over environments. For a suitable schedule of $\gamma$, we establish an $\tilde{O}(\tau S \sqrt{A T})$ bound on the Bayesian regret, where $S$ is the number of environment states, $A$ is the number of actions, and $\tau$ denotes the reward averaging time, which is a bound on the duration required to accurately estimate the average reward of any policy.
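A hedged sketch of the resampling schedule described above: at every time step the sampled environment is replaced with probability $1-\gamma$, and the agent follows the $\gamma$-discounted optimal policy of the current sample. The posterior, planner, and environment interfaces are assumed stubs, not a particular library.

```python
import numpy as np

def continuing_psrl(posterior, plan_discounted, env, gamma=0.99, steps=10_000, seed=0):
    """Continuing PSRL sketch: resample the model from the posterior with prob. 1 - gamma per step.

    Assumed interfaces:
      posterior.sample()                -> a plausible environment model
      posterior.update(s, a, r, s_next) -> condition on one observed transition
      plan_discounted(model, gamma)     -> a policy mapping state -> action
      env.reset() / env.step(action)    -> state / (next_state, reward)
    """
    rng = np.random.default_rng(seed)
    model = posterior.sample()
    policy = plan_discounted(model, gamma)
    state = env.reset()
    for _ in range(steps):
        if rng.random() < 1.0 - gamma:            # replace the sampled environment
            model = posterior.sample()
            policy = plan_discounted(model, gamma)
        action = policy(state)
        next_state, reward = env.step(action)
        posterior.update(state, action, reward, next_state)
        state = next_state
    return policy
```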
We study model-based reinforcement learning (RL) for episodic Markov decision processes (MDP) whose transition probability is parametrized by an unknown transition core with features of state and action. Despite much recent progress in analyzing algorithms in the linear MDP setting, the understanding of more general transition models is very restrictive. In this paper, we establish a provably efficient RL algorithm for the MDP whose state transition is given by a multinomial logistic model. To balance the exploration-exploitation trade-off, we propose an upper confidence bound-based algorithm. We show that our proposed algorithm achieves $\tilde{\mathcal{O}}(d \sqrt{H^3 T})$ regret bound where $d$ is the dimension of the transition core, $H$ is the horizon, and $T$ is the total number of steps. To the best of our knowledge, this is the first model-based RL algorithm with multinomial logistic function approximation with provable guarantees. We also comprehensively evaluate our proposed algorithm numerically and show that it consistently outperforms the existing methods, hence achieving both provable efficiency and practical superior performance.
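For concreteness, a small sketch of a multinomial logistic (softmax) transition model of the kind assumed here: the probability of each candidate next state is proportional to the exponentiated inner product between a state-action-next-state feature and the unknown transition core. The feature map and parameter values are illustrative.

```python
import numpy as np

def mnl_transition_probs(theta, features):
    """P(s' | s, a) under a multinomial logistic model.

    theta    : (d,) transition core parameter
    features : (num_next_states, d) feature vectors phi(s, a, s') for each candidate s'
    """
    logits = features @ theta
    logits -= logits.max()                  # numerical stability
    probs = np.exp(logits)
    return probs / probs.sum()

# Example: d = 4 features, 3 candidate next states.
rng = np.random.default_rng(1)
print(mnl_transition_probs(rng.normal(size=4), rng.normal(size=(3, 4))))
```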
We present the UC$^3$RL algorithm for regret minimization in Stochastic Contextual MDPs (CMDPs). The algorithm operates under the minimal assumptions of realizable function class, and access to offline least squares and log loss regression oracles. Our algorithm is efficient (assuming efficient offline regression oracles) and enjoys an $\widetilde{O}(H^3 \sqrt{T |S| |A|(\log (|\mathcal{F}|/\delta) + \log (|\mathcal{P}|/ \delta) )})$ regret guarantee, with $T$ being the number of episodes, $S$ the state space, $A$ the action space, $H$ the horizon, and $\mathcal{P}$ and $\mathcal{F}$ are finite function classes, used to approximate the context-dependent dynamics and rewards, respectively. To the best of our knowledge, our algorithm is the first efficient and rate-optimal regret minimization algorithm for CMDPs, which operates under the general offline function approximation setting.
Balancing exploration and exploitation is crucial in reinforcement learning (RL). In this paper, we study model-based posterior sampling for reinforcement learning (PSRL) in continuous state-action spaces, both theoretically and empirically. First, we show the first regret bound of PSRL in continuous spaces which, to the best of our knowledge, is polynomial in the episode length. Assuming the reward and transition functions can be modeled by Bayesian linear regression, we develop a regret bound of $\tilde{O}(H^{3/2} d \sqrt{T})$, where $H$ is the episode length, $d$ is the dimension of the state-action space, and $T$ denotes the total number of time steps. This result matches the best known regret bound of non-PSRL methods in linear MDPs. Our bound can be extended to nonlinear cases as well with feature embeddings: using a linear kernel on a feature representation $\phi$, the regret bound becomes $\tilde{O}(H^{3/2} d_{\phi} \sqrt{T})$, where $d_\phi$ is the dimension of the representation space. Moreover, we present MPC-PSRL, a model-based posterior sampling algorithm with model predictive control for action selection. To capture the uncertainty in the model, we use Bayesian linear regression on the penultimate layer (the feature representation layer $\phi$) of neural networks. Empirical results show that our algorithm achieves state-of-the-art sample efficiency in benchmark continuous control tasks compared to prior model-based algorithms, and matches the asymptotic performance of model-free algorithms.
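A hedged sketch of the uncertainty mechanism mentioned above: conjugate Bayesian linear regression on a fixed feature vector (standing in for the network's penultimate-layer representation $\phi$), whose Gaussian posterior over the last-layer weights can be sampled for posterior-sampling-style planning. The prior and noise variances are illustrative assumptions.

```python
import numpy as np

class BayesianLastLayer:
    """Conjugate Bayesian linear regression y = phi(x)^T w + noise on fixed features phi."""

    def __init__(self, dim, prior_var=1.0, noise_var=0.1):
        self.precision = np.eye(dim) / prior_var   # posterior precision Lambda
        self.b = np.zeros(dim)                     # Lambda @ posterior mean
        self.noise_var = noise_var

    def update(self, phi, y):
        self.precision += np.outer(phi, phi) / self.noise_var
        self.b += phi * y / self.noise_var

    def sample_weights(self, rng):
        cov = np.linalg.inv(self.precision)
        mean = cov @ self.b
        return rng.multivariate_normal(mean, cov)

# Example: fit on noisy linear data, then draw one posterior sample of the weights.
rng = np.random.default_rng(0)
true_w = np.array([0.5, -1.0, 2.0])
layer = BayesianLastLayer(dim=3)
for _ in range(50):
    phi = rng.normal(size=3)
    layer.update(phi, phi @ true_w + 0.1 * rng.normal())
print(layer.sample_weights(rng))
```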
We consider the problem of regret minimization in reinforcement learning (RL) in the episodic setting. In many real-world RL environments, the state and action spaces are continuous or very large. Existing approaches establish regret guarantees through low-dimensional representations of the stochastic transition model or approximations of the $Q$-functions. However, an understanding of function approximation schemes for state-value functions has largely been missing. In this paper, we propose an online model-based RL algorithm, namely CME-RL, that learns representations of transition distributions as embeddings in a reproducing kernel Hilbert space, while carefully balancing the exploitation-exploration trade-off. We demonstrate the efficiency of our algorithm by proving a frequentist (worst-case) regret bound of $\tilde{O}\big(H \gamma_N \sqrt{N}\big)$\footnote{$\tilde{O}(\cdot)$ hides only absolute constants and poly-logarithmic factors.}, where $H$ is the episode length, $N$ is the total number of time steps, and $\gamma_N$ is an information-theoretic quantity relating the effective dimension of the state-action feature space. Our method bypasses the need for estimating transition probabilities and applies to any domain over which kernels can be defined. It further brings new insights into the general theory of kernel methods for approximate inference and RL regret minimization.
One of the challenges in online reinforcement learning (RL) is that the agent needs to trade off the exploration of the environment and the exploitation of the samples to optimize its behavior. Whether we optimize for regret, sample complexity, state-space coverage or model estimation, we need to strike a different exploration-exploitation trade-off. In this paper, we propose to tackle the exploration-exploitation problem following a decoupled approach composed of: 1) an "objective-specific" algorithm that (adaptively) prescribes how many samples to collect at which states, as if it had access to a generative model (i.e., a simulator of the environment); 2) an "objective-agnostic" sample collection exploration strategy responsible for generating the prescribed samples as fast as possible. Building on recent methods for exploration in the stochastic shortest path problem, we first provide an algorithm that, given as input the number of samples $b(s,a)$ required in each state-action pair, requires $\tilde{O}(BD + D^{3/2} S^2 A)$ time steps to collect the $B = \sum_{s,a} b(s,a)$ desired samples, in any unknown communicating MDP with $S$ states, $A$ actions and diameter $D$. Then we show how this general-purpose exploration algorithm can be paired with "objective-specific" strategies that prescribe the sample requirements to tackle a variety of settings (e.g., model estimation, sparse reward discovery, goal-free cost-free exploration in communicating MDPs) for which we obtain improved or novel sample complexity guarantees.
Model-free reinforcement learning (RL) algorithms, such as Q-learning, directly parameterize and update value functions or policies without explicitly modeling the environment. They are typically simpler, more flexible to use, and thus more prevalent in modern deep RL than model-based approaches. However, empirical work has suggested that model-free algorithms may require more samples to learn [7,22]. The theoretical question of "whether model-free algorithms can be made sample efficient" is one of the most fundamental questions in RL, and remains unsolved even in the basic scenario with finitely many states and actions. We prove that, in an episodic MDP setting, Q-learning with UCB exploration achieves regret $\tilde{O}(\sqrt{H^3 SAT})$, where $S$ and $A$ are the numbers of states and actions, $H$ is the number of steps per episode, and $T$ is the total number of steps. This sample efficiency matches the optimal regret that can be achieved by any model-based approach, up to a single $\sqrt{H}$ factor. To the best of our knowledge, this is the first analysis in the model-free setting that establishes $\sqrt{T}$ regret without requiring access to a "simulator."
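A compact sketch of the tabular update underlying Q-learning with UCB exploration: a visit-count-dependent learning rate together with an optimism bonus that shrinks as a state-action pair is visited more often. The constant c and the Hoeffding-style bonus follow the general form used in this line of work but are illustrative here, not the paper's exact constants.

```python
import numpy as np

def ucb_q_update(Q, N, h, s, a, r, s_next, H, T, c=1.0, delta=0.05):
    """One Q-learning-with-UCB update at step h of an episode.

    Q : (H+1, S, A) optimistic Q estimates (Q[H] is all zeros)
    N : (H, S, A)   visit counts
    """
    N[h, s, a] += 1
    t = N[h, s, a]
    lr = (H + 1) / (H + t)                                    # visit-count learning rate
    bonus = c * np.sqrt(H**3 * np.log(Q.shape[1] * Q.shape[2] * T / delta) / t)
    target = r + Q[h + 1, s_next].max() + bonus               # optimistic one-step target
    Q[h, s, a] = (1 - lr) * Q[h, s, a] + lr * min(target, H)  # values are capped at H
    return Q, N
```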
We consider model-free reinforcement learning (RL) in non-stationary Markov decision processes. Both the reward functions and the state transition functions are allowed to vary over time, as long as their cumulative variations do not exceed certain variation budgets. We propose Restarted Q-Learning with Upper Confidence Bounds (RestartQ-UCB), the first model-free algorithm for non-stationary RL, and show that it outperforms existing solutions in terms of dynamic regret. Specifically, RestartQ-UCB with Freedman-type bonus terms achieves a dynamic regret bound of $\widetilde{O}(S^{\frac{1}{3}} A^{\frac{1}{3}} \Delta^{\frac{1}{3}} H T^{\frac{2}{3}})$, where $S$ and $A$ are the numbers of states and actions, respectively, $\Delta > 0$ is the variation budget, $H$ is the number of time steps per episode, and $T$ is the total number of time steps. We further propose a parameter-free algorithm named Double-Restart Q-UCB that does not require prior knowledge of the variation budget. We show that our algorithms are \emph{nearly optimal} by establishing an information-theoretic lower bound of $\Omega(S^{\frac{1}{3}} A^{\frac{1}{3}} \Delta^{\frac{1}{3}} H^{\frac{2}{3}} T^{\frac{2}{3}})$, the first lower bound in non-stationary RL. Numerical experiments validate the advantages of RestartQ-UCB in terms of both cumulative rewards and computational efficiency. We demonstrate the power of our results in examples of multi-agent RL and inventory control of related products.
Myopic exploration policies such as epsilon-greedy, softmax, or Gaussian noise fail to explore efficiently in some reinforcement learning tasks and yet, they perform well in many others. In fact, in practice, they are often selected as the top choices, due to their simplicity. But for what tasks do such policies succeed? Can we give theoretical guarantees for their favorable performance? These crucial questions have been scarcely investigated, despite the significant practical importance of these policies. This paper presents a theoretical analysis of such policies and provides the first regret and sample-complexity bounds for reinforcement learning with myopic exploration. Our results apply to value-function-based algorithms in episodic MDPs with bounded Bellman Eluder dimension. We propose a new complexity measure called the myopic exploration gap, denoted by $\alpha$, that captures a structural property of the MDP, the exploration policy, and the given value function class. We show that the sample complexity of myopic exploration scales quadratically with the inverse of this quantity, $1/\alpha^2$. We further demonstrate through concrete examples that the myopic exploration gap is indeed favorable in several tasks where myopic exploration succeeds, due to the corresponding dynamics and reward structure.
We study the problem of planning under model uncertainty in an online meta-reinforcement learning (RL) setting where an agent is presented with a sequence of related tasks with limited interactions per task. The agent can use its experience in each task and across tasks to estimate both the transition model and the distribution over tasks. We propose an algorithm to meta-learn the underlying structure across tasks, utilize it to plan in each task, and upper-bound the regret of the planning loss. Our bound suggests that the average regret over tasks decreases as the number of tasks increases and as the tasks are more similar. In the classical single-task setting, it is known that the planning horizon should depend on the estimated model's accuracy, that is, on the number of samples within task. We generalize this finding to meta-RL and study this dependence of planning horizons on the number of tasks. Based on our theoretical findings, we derive heuristics for selecting slowly increasing discount factors, and we validate their significance empirically.
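A toy illustration of the "slowly increasing discount factor" heuristic: the planning discount, and hence the effective planning horizon, grows with the number of tasks seen so far. The functional form below is an assumption chosen only to show the qualitative behavior, not the heuristic derived in the paper.

```python
def planning_discount(num_tasks_seen, gamma_max=0.99, rate=0.1):
    """Slowly increase the planning discount factor as more tasks are observed."""
    return gamma_max * (1.0 - 1.0 / (1.0 + rate * num_tasks_seen))

for n in (1, 10, 100):
    print(n, round(planning_discount(n), 3))
```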
Modern Reinforcement Learning (RL) is commonly applied to practical problems with an enormous number of states, where function approximation must be deployed to approximate either the value function or the policy. The introduction of function approximation raises a fundamental set of challenges involving computational and statistical efficiency, especially given the need to manage the exploration/exploitation tradeoff. As a result, a core RL question remains open: how can we design provably efficient RL algorithms that incorporate function approximation? This question persists even in a basic setting with linear dynamics and linear rewards, for which only linear function approximation is needed. This paper presents the first provable RL algorithm with both polynomial runtime and polynomial sample complexity in this linear setting, without requiring a "simulator" or additional assumptions. Concretely, we prove that an optimistic modification of Least-Squares Value Iteration (LSVI), a classical algorithm frequently studied in the linear setting, achieves $O(\sqrt{d^3 H^3 T})$ regret, where $d$ is the ambient dimension of feature space, $H$ is the length of each episode, and $T$ is the total number of steps. Importantly, such regret is independent of the number of states and actions.
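A condensed sketch of the optimistic least-squares value iteration step in the linear setting: ridge regression on one-step targets plus an elliptical bonus proportional to $\|\phi(s,a)\|_{\Lambda^{-1}}$. The regularizer and the bonus coefficient beta are illustrative, not the values prescribed by the analysis.

```python
import numpy as np

def lsvi_ucb_q(phis, targets, query_phi, beta=1.0, lam=1.0):
    """Optimistic Q estimate at one stage of optimistic LSVI (linear setting).

    phis      : (n, d) features phi(s_i, a_i) of past transitions at this stage
    targets   : (n,)   regression targets r_i + max_a' Q_next(s'_i, a')
    query_phi : (d,)   feature of the (state, action) being evaluated
    """
    d = phis.shape[1]
    Lambda = lam * np.eye(d) + phis.T @ phis               # regularized Gram matrix
    w = np.linalg.solve(Lambda, phis.T @ targets)          # ridge-regression weights
    bonus = beta * np.sqrt(query_phi @ np.linalg.solve(Lambda, query_phi))
    return query_phi @ w + bonus                           # optimistic value
```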
Restless multi-armed bandits (RMABs) extend multi-armed bandits to allow for stateful arms, where the state of each arm evolves restlessly with different transitions depending on whether that arm is pulled. Solving RMABs requires information on transition dynamics, which are often unknown upfront. To plan in RMAB settings with unknown transitions, we propose the first online learning algorithm based on the Whittle index policy, using an upper confidence bound (UCB) approach to learn transition dynamics. Specifically, we estimate confidence bounds of the transition probabilities and formulate a bilinear program to compute optimistic Whittle indices using these estimates. Our algorithm, UCWhittle, achieves sublinear $O(H \sqrt{T \log T})$ frequentist regret to solve RMABs with unknown transitions in $T$ episodes with a constant horizon $H$. Empirically, we demonstrate that UCWhittle leverages the structure of RMABs and the Whittle index policy solution to achieve better performance than existing online learning baselines across three domains, including one constructed via sampling from a real-world maternal and childcare dataset.
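A small sketch of the confidence-bound step described above: empirical per-arm transition estimates with a UCB-style radius, from which optimistic transition probabilities can be chosen before computing Whittle indices. The bilinear program itself is omitted, and the radius constant is an illustrative assumption.

```python
import numpy as np

def transition_confidence_interval(success_counts, pull_counts, t, c=1.0):
    """Empirical per-arm transition estimate with a UCB-style confidence radius.

    success_counts, pull_counts : per-arm counts from past episodes
    t : current episode index (enters through the log term)
    """
    pulls = np.maximum(pull_counts, 1)
    p_hat = success_counts / pulls
    radius = c * np.sqrt(np.log(t + 1) / pulls)
    return np.clip(p_hat - radius, 0.0, 1.0), np.clip(p_hat + radius, 0.0, 1.0)

# Example: 3 arms.
lo, hi = transition_confidence_interval(np.array([4, 1, 7]), np.array([10, 2, 9]), t=20)
print(lo, hi)
```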
We consider the problem of constrained Markov decision processes (CMDPs), in which the agent interacts with a unichain Markov decision process. At every interaction, the agent obtains a reward. Further, there are $K$ cost functions. The agent aims to maximize the long-term average reward while simultaneously keeping the $K$ long-term average costs lower than a certain threshold. In this paper, we propose CMDP-PSRL, a posterior-sampling-based algorithm with which the agent can learn optimal policies for interacting with the CMDP. Further, for an MDP with $S$ states, $A$ actions, and diameter $D$, we prove that, following the CMDP-PSRL algorithm, the agent can bound the regret of not accumulating rewards from the optimal policy by $\tilde{O}(\mathrm{poly}(DSA)\sqrt{T})$. Further, we show that the violation of any of the $K$ constraints is also bounded by $\tilde{O}(\mathrm{poly}(DSA)\sqrt{T})$. To the best of our knowledge, this is the first work to obtain a $\tilde{O}(\sqrt{T})$ regret bound for ergodic MDPs with long-term average constraints.
We consider reinforcement learning in an environment modeled by an episodic, finite, stage-dependent Markov decision process of horizon $H$ with $S$ states and $A$ actions. The performance of the agent is measured by the regret after interacting with the environment for $T$ episodes. We propose an optimistic posterior sampling algorithm for reinforcement learning (OPSRL), a simple variant of posterior sampling that only needs a number of posterior samples logarithmic in $H$, $S$, $A$, and $T$ per state-action pair. For OPSRL, we guarantee a high-probability regret bound of order at most $\widetilde{\mathcal{O}}(\sqrt{H^3 S A T})$, ignoring $\text{poly}\log(HSAT)$ terms. The key novel technical ingredient is a new sharp anti-concentration inequality for linear forms, which may be of independent interest. Specifically, we extend the normal-approximation-based lower bound for Beta distributions of Alfers and Dinges [1984] to Dirichlet distributions. Our bound matches the lower bound of order $\Omega(\sqrt{H^3 S A T})$, thereby answering the open problem raised by Agrawal and Jia [2017b] for the episodic setting.
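A minimal sketch of the optimistic-posterior-sampling idea: draw a handful of posterior samples of a Q-value and keep the largest, so that optimism comes from sampling rather than from an explicit bonus. The posterior interface is an assumed stub, and the Gaussian stand-in below is only for the toy example.

```python
import numpy as np

def optimistic_value_from_posterior(sample_q_fn, num_samples, state, action):
    """Take the max over several posterior samples of Q(state, action).

    sample_q_fn(state, action) -> one posterior draw of the Q-value (assumed interface).
    With a logarithmic number of samples per pair, the max acts as an optimistic estimate.
    """
    draws = [sample_q_fn(state, action) for _ in range(num_samples)]
    return max(draws)

# Toy example with a Gaussian stand-in posterior.
rng = np.random.default_rng(0)
print(optimistic_value_from_posterior(lambda s, a: rng.normal(1.0, 0.5),
                                      num_samples=7, state=0, action=1))
```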