The ability to transfer skills across tasks has the potential to scale up reinforcement learning (RL) agents to environments currently out of their reach. Recently, a framework based on two ideas, successor features (SFs) and generalised policy improvement (GPI), has been introduced as a principled way of transferring skills. In this paper we extend the SF and GPI framework in two ways. One of the basic assumptions underlying the original formulation of SFs and GPI is that rewards for all tasks of interest can be computed as linear combinations of a fixed set of features. We relax this constraint and show that the theoretical guarantees supporting the framework can be extended to any set of tasks that differ only in the reward function. Our second contribution is to show that one can use the reward functions themselves as features for future tasks, without any loss of expressiveness, thus removing the need to specify a set of features beforehand. This makes it possible to combine SFs and GPI with deep learning in a more stable way. We empirically verify this claim on a complex 3D environment where observations are images from a first-person perspective. We show that the transfer promoted by SFs and GPI leads to very good policies on unseen tasks almost instantaneously. We also describe how to learn policies specialised to new tasks in a way that allows them to be added to the agent's set of skills, and thus reused in the future.
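For reference, the decomposition and the GPI rule that the original formulation relies on (and that this paper generalises) can be written compactly. This is a sketch in the notation commonly used in the successor-feature literature, not a restatement of the extended setting:

```latex
% Linear-reward assumption of the original SF & GPI formulation:
\begin{align*}
  r(s,a,s') &= \phi(s,a,s')^{\top} \mathbf{w}, \\
  % Successor features: expected discounted features under policy \pi_i
  \psi^{\pi_i}(s,a) &= \mathbb{E}^{\pi_i}\Big[\textstyle\sum_{t=0}^{\infty}
      \gamma^{t}\,\phi(s_t,a_t,s_{t+1}) \,\Big|\, s_0 = s,\ a_0 = a\Big], \\
  % Action values for any task w then factor through the SFs:
  Q^{\pi_i}_{\mathbf{w}}(s,a) &= \psi^{\pi_i}(s,a)^{\top} \mathbf{w}, \\
  % Generalised policy improvement over a library of policies \pi_1,\dots,\pi_n:
  \pi^{\mathrm{GPI}}(s) &\in \arg\max_{a}\ \max_{i \in \{1,\dots,n\}}
      Q^{\pi_i}_{\mathbf{w}}(s,a).
\end{align*}
```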
Transfer in reinforcement learning refers to the notion that generalisation should occur not only within a task but also across tasks. We propose a transfer framework for the scenario in which the reward function changes between tasks but the environment's dynamics remain the same. Our approach rests on two key ideas: "successor features", a value-function representation that decouples the dynamics of the environment from the rewards, and "generalised policy improvement", a generalisation of dynamic programming's policy-improvement operation that considers a set of policies rather than a single one. Put together, the two ideas lead to an approach that integrates seamlessly within the reinforcement learning framework and allows the free exchange of information across tasks. The proposed method provides performance guarantees for the transferred policy even before any learning has taken place. We derive two theorems that set our approach on firm theoretical ground and present experiments showing that it successfully promotes transfer in practice, significantly outperforming alternative methods in a sequence of navigation tasks and in the control of a simulated robotic arm.
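A minimal tabular sketch of how the generalised policy improvement step can be implemented once successor features have been learned for a library of policies; the array layout and names below are illustrative, not the authors' code:

```python
import numpy as np

def gpi_action(psis, w, state):
    """Generalised policy improvement over a library of policies (sketch).

    psis  : list of arrays; psis[i][s, a, :] holds the successor features of
            policy pi_i for state s and action a (illustrative tabular layout).
    w     : reward-weight vector of the current task, so Q_i(s, a) = psis[i][s, a] @ w.
    state : index of the current state.
    """
    # Per-policy action values for this state: shape (n_policies, n_actions).
    q_values = np.stack([psi[state] @ w for psi in psis])
    # GPI: act greedily with respect to the maximum over the policy library.
    return int(np.argmax(q_values.max(axis=0)))
```

Acting this way performs at least as well on the current task as any single policy in the library, which is the kind of guarantee the abstract's theorems formalise.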
The ability of a reinforcement learning (RL) agent to learn about many reward functions at the same time has many potential benefits, such as the decomposition of complex tasks into simpler ones, the exchange of information between tasks, and the reuse of skills. We focus on one aspect in particular, namely the ability to generalise to unseen tasks. Parametric generalisation relies on the interpolation power of a function approximator that is given the task description as input; one of its most common forms is the universal value function approximator (UVFA). Another way to generalise to new tasks is to exploit structure in the RL problem itself. Generalised policy improvement (GPI) combines solutions of previous tasks into a policy for the unseen task; this relies on instantaneous policy evaluation of old policies under the new reward function, which is made possible by successor features (SFs). Our proposed universal successor features approximators (USFAs) combine the advantages of all of these, namely the scalability of UVFAs, the instant inference of SFs, and the strong generalisation of GPI. We discuss the challenges involved in training a USFA and its generalisation properties, and demonstrate its practical benefits and transfer abilities on a large-scale domain in which the agent has to navigate in a first-person-perspective three-dimensional environment.
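The way a universal successor-features approximator could be used at decision time can be sketched as follows; the function signature and the candidate-embedding set are assumptions made for illustration, not the paper's interface:

```python
import numpy as np

def usfa_gpi_action(psi_fn, candidate_zs, w, state, n_actions):
    """Act-time GPI with a universal successor-features approximator (sketch).

    psi_fn       : callable (state, action, z) -> feature vector; stands in for
                   the learned USFA network (hypothetical signature).
    candidate_zs : policy embeddings to evaluate, e.g. the training-task vectors
                   or samples drawn around the current task description w.
    w            : description of the task we currently want to act on.
    """
    q = np.array([[psi_fn(state, a, z) @ w for a in range(n_actions)]
                  for z in candidate_zs])      # shape (n_candidates, n_actions)
    return int(np.argmax(q.max(axis=0)))       # GPI over the candidate set
```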
In this paper we consider the problem of robot navigation in simple maze-like environments where the robot has to rely on its onboard sensors to perform the navigation task. In particular, we are interested in solutions to this problem that do not require localization, mapping or planning. Additionally, we require that our solution can quickly adapt to new situations (e.g., changing navigation goals and environments). To meet these criteria we frame this problem as a sequence of related reinforcement learning tasks. We propose a successor-feature-based deep reinforcement learning algorithm that can learn to transfer knowledge from previously mastered navigation tasks to new problem instances. Our algorithm substantially decreases the required learning time after the first task instance has been solved, which makes it easily adaptable to changing environments. We validate our method in both simulated and real robot experiments with a Robotino and compare it to a set of baseline methods including classical planning-based navigation.
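One ingredient of successor-feature transfer that this kind of approach relies on, adapting to a new navigation task by re-estimating only the reward weights while reusing the learned features, can be sketched as a simple ridge regression. This is an illustration consistent with the general idea, not the paper's exact algorithm:

```python
import numpy as np

def fit_task_weights(features, rewards, l2=1e-3):
    """Fit w so that phi(s, a, s') @ w approximates the new task's reward.

    features : array of shape (n_transitions, d) with feature vectors phi
               collected while exploring the new task instance.
    rewards  : array of shape (n_transitions,) with the observed rewards.
    Returns the ridge-regression solution; old successor features can then be
    reused with the new weight vector.
    """
    d = features.shape[1]
    a = features.T @ features + l2 * np.eye(d)
    b = features.T @ rewards
    return np.linalg.solve(a, b)
```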
Transfer in reinforcement learning is a novel research area that focuses on the development of methods to transfer knowledge from a set of source tasks to a target task. Whenever the tasks are similar, the transferred knowledge can be used by a learning algorithm to solve the target task and significantly improve its performance (e.g., by reducing the number of samples needed to achieve a nearly optimal performance). In this chapter we provide a formalization of the general transfer problem, we identify the main settings which have been investigated so far, and we review the most important approaches to transfer in reinforcement learning.
The reinforcement learning paradigm is a popular way to address problems that have only limited environmental feedback, rather than correctly labeled examples, as is common in other machine learning contexts. While significant progress has been made to improve learning in a single task, the idea of transfer learning has only recently been applied to reinforcement learning tasks. The core idea of transfer is that experience gained in learning to perform one task can help improve learning performance in a related, but different, task. In this article we present a framework that classifies transfer learning methods in terms of their capabilities and goals, and then use it to survey the existing literature, as well as to suggest future directions for transfer learning work. In reinforcement learning (RL) (Sutton and Barto, 1998) problems, learning agents take sequential actions with the goal of maximizing a reward signal, which may be time-delayed. For example, an agent could learn to play a game by being told whether it wins or loses, but is never given the "correct" action at any given point in time. The RL framework has gained popularity as learning methods have been developed that are capable of handling increasingly complex problems. However, when RL agents begin learning tabula rasa, mastering difficult tasks is often slow or infeasible, and thus a significant amount of current RL research focuses on improving the speed of learning by exploiting domain expertise with varying amounts of human-provided knowledge. Common approaches include deconstructing the task into a hierarchy of subtasks (cf., Dietterich, 2000); learning with higher-level, temporally abstract, actions (e.g., options, Sutton et al. 1999) rather than simple one-step actions; and efficiently abstracting over the state space (e.g., via function approximation) so that the agent may generalize its experience more efficiently. The insight behind transfer learning (TL) is that generalization may occur not only within tasks, but also across tasks. This insight is not new; transfer has long been studied in the psychological literature (cf., Thorndike and Woodworth, 1901; Skinner, 1953).
Reinforcement learning is bedeviled by the curse of dimensionality: the number of parameters to be learned grows exponentially with the size of any compact encoding of a state. Recent attempts to combat the curse of dimensionality have turned to principled ways of exploiting temporal abstraction, where decisions are not required at each step, but rather invoke the execution of temporally-extended activities which follow their own policies until termination. This leads naturally to hierarchical control architectures and associated learning algorithms. We review several approaches to temporal abstraction and hierarchical organization that machine learning researchers have recently developed. Common to these approaches is a reliance on the theory of semi-Markov decision processes, which we emphasize in our review. We then discuss extensions of these ideas to concurrent activities, multiagent coordination, and hierarchical memory for addressing partial observability. Concluding remarks address open challenges facing the further development of reinforcement learning in a hierarchical setting.
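The semi-Markov decision process theory emphasised in the review gives rise to the standard Q-learning backup over temporally extended activities; a small sketch assuming an option ran for k primitive steps and accumulated a discounted reward r_cum:

```python
def smdp_q_update(q, s, option, r_cum, k, s_next, alpha=0.1, gamma=0.99):
    """One SMDP Q-learning backup for a temporally extended option (sketch).

    q      : dict of dicts, q[s][o] is the value of taking option o in state s.
    r_cum  : reward accumulated while the option ran, already discounted:
             sum over t < k of gamma**t * r_t.
    k      : number of primitive time steps the option lasted before terminating.
    """
    target = r_cum + (gamma ** k) * max(q[s_next].values())
    q[s][option] += alpha * (target - q[s][option])
```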
Some real-world domains are best characterised as a single task, but for others this perspective is limiting. Instead, some tasks continually grow in complexity, in tandem with the agent's competence. In continual learning, also referred to as lifelong learning, there are no explicit task boundaries or curricula. As learning agents become increasingly powerful, continual learning remains one of the frontiers that has resisted quick progress. To test continual-learning capabilities we consider a challenging 3D domain with an explicit sequence of tasks and sparse rewards. We propose a novel agent architecture called Unicorn, which demonstrates strong continual learning and outperforms several baseline agents on the proposed domain. The agent achieves this by jointly representing and learning multiple policies efficiently, using a parallel off-policy learning setup.
Curriculum learning in reinforcement learning is a training methodology that accelerates learning of a difficult target task by first training on a sequence of simpler tasks and transferring the knowledge acquired to the target task. Automatically selecting such a sequence of tasks (i.e., a curriculum) is an open problem that has been the subject of recent work in the field. In this paper we build on recent approaches to curriculum design and formulate the curriculum-sequencing problem as a Markov decision process. We extend this model to handle multiple transfer-learning algorithms and show, for the first time, that a curriculum policy over this MDP can be learned from experience. We explore various representations that make this possible and evaluate our approach by learning curriculum policies for multiple agents in different domains. The results show that our method produces curricula that can train agents to perform on the target task as fast as, or faster than, existing methods.
Deep reinforcement learning is poised to revolutionise the field of AI and represents a step towards building autonomous systems with a higher level understanding of the visual world. Currently, deep learning is enabling reinforcement learning to scale to problems that were previously intractable, such as learning to play video games directly from pixels. Deep reinforcement learning algorithms are also applied to robotics, allowing control policies for robots to be learned directly from camera inputs in the real world. In this survey, we begin with an introduction to the general field of reinforcement learning, then progress to the main streams of value-based and policy-based methods. Our survey will cover central algorithms in deep reinforcement learning, including the deep Q-network, trust region policy optimisation, and asynchronous advantage actor-critic. In parallel, we highlight the unique advantages of deep neural networks, focusing on visual understanding via reinforcement learning. To conclude, we describe several current areas of research within the field.
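As a concrete anchor for one of the central algorithms named above, the one-step bootstrap target used by the deep Q-network can be sketched as follows (a minimal, framework-agnostic illustration):

```python
import numpy as np

def dqn_targets(rewards, dones, next_q_target, gamma=0.99):
    """One-step targets for deep Q-network training (sketch).

    next_q_target : array of shape (batch, n_actions) produced by the frozen
                    target network on the next states.
    Returns y = r + gamma * max_a' Q_target(s', a'), with bootstrapping
    switched off (dones == 1) at episode ends.
    """
    return rewards + gamma * (1.0 - dones) * next_q_target.max(axis=1)
```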
Reinforcement learning (RL) offers powerful algorithms to search for optimal controllers of systems with nonlinear, possibly stochastic dynamics that are unknown or highly uncertain. This review mainly covers artificial-intelligence approaches to RL, from the viewpoint of the control engineer. We explain how approximate representations of the solution make RL feasible for problems with continuous states and control actions. Stability is a central concern in control, and we argue that while the control-theoretic RL subfield called adaptive dynamic programming is dedicated to it, stability of RL largely remains an open question. We also cover in detail the case where deep neural networks are used for approximation, leading to the field of deep RL, which has shown great success in recent years. With the control practitioner in mind, we outline opportunities and pitfalls of deep RL; and we close the survey with an outlook that, among other things, points out some avenues for bridging the gap between control and artificial-intelligence RL techniques.
Transfer learning is a method where an agent reuses knowledge learned in a source task to improve learning on a target task. Recent work has shown that transfer learning can be extended to the idea of curriculum learning, where the agent incrementally accumulates knowledge over a sequence of tasks (i.e. a curriculum). In most existing work, such curricula have been constructed manually. Furthermore, they are fixed ahead of time, and do not adapt to the progress or abilities of the agent. In this paper, we formulate the design of a curriculum as a Markov Decision Process, which directly models the accumulation of knowledge as an agent interacts with tasks, and propose a method that approximates an execution of an optimal policy in this MDP to produce an agent-specific curriculum. We use our approach to automatically sequence tasks for 3 agents with varying sensing and action capabilities in an experimental domain, and show that our method produces curricula customized for each agent that improve performance relative to learning from scratch or using a different agent's curriculum.
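The curriculum-as-MDP view described above can be pictured as a simple control loop in which a curriculum policy repeatedly chooses the next source task based on the agent's current knowledge. The agent interface below (knowledge_summary, train_on) is hypothetical and only meant to illustrate the structure, not either paper's algorithm:

```python
def run_curriculum(curriculum_policy, agent, target_task, max_source_steps):
    """Execute a curriculum policy over source tasks, then learn the target (sketch).

    curriculum_policy : callable mapping a summary of the agent's current
                        knowledge to the next source task, or None to stop.
    agent             : assumed to expose knowledge_summary() and train_on(task),
                        the latter returning the number of learning steps used
                        (hypothetical interface).
    """
    used = 0
    while used < max_source_steps:
        task = curriculum_policy(agent.knowledge_summary())
        if task is None:                      # curriculum decides to stop sourcing
            break
        used += agent.train_on(task)          # transfer happens inside the agent
    return agent.train_on(target_task)        # cost of mastering the target task
```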
Reinforcement learning (RL) is a learning paradigm concerned with learning to control a system so as to maximise a long-term objective. This approach to learning has attracted immense interest in recent times, and success has been demonstrated in the form of human-level performance on games such as \textit{Go}. While RL is emerging as a practical component in real-life systems, most of its successes have been in single-agent domains. This report focuses in particular on the challenges that are specific to interacting multi-agent systems in mixed cooperative and competitive environments. The report is based on an extension of MDPs called \textit{Decentralized Partially Observable MDPs}, and covers progress in a paradigm for training multi-agent systems known as \textit{Decentralized Actor, Centralized Critic}, which has recently attracted interest.
Sequential decision-making problems with multiple objectives arise naturally in practice and pose unique challenges for research in decision-theoretic planning and learning, which has largely focused on single-objective settings. This article surveys algorithms designed for sequential decision-making problems with multiple objectives. Though there is a growing body of literature on this subject, little of it makes explicit under what circumstances special methods are needed to solve multi-objective problems. Therefore, we identify three distinct scenarios in which converting such a problem to a single-objective one is impossible, infeasible, or undesirable. Furthermore, we propose a taxonomy that classifies multi-objective methods according to the applicable scenario, the nature of the scalarization function (which projects multi-objective values to scalar ones), and the type of policies considered. We show how these factors determine the nature of an optimal solution, which can be a single policy, a convex hull, or a Pareto front. Using this taxonomy, we survey the literature on multi-objective methods for planning and learning. Finally, we discuss key applications of such methods and outline opportunities for future work.
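The simplest scalarization function in such a taxonomy is the linear one, under which the set of potentially optimal policies is characterised by a convex hull of value vectors, while more general monotone scalarizations call for the Pareto front, matching the solution concepts named above. A sketch in standard notation:

```latex
% Linear scalarization of an m-dimensional value vector V^{\pi} with
% preference weights w_i \ge 0, \sum_i w_i = 1:
V^{\pi}_{\mathbf{w}} \;=\; \mathbf{w}^{\top} V^{\pi} \;=\; \sum_{i=1}^{m} w_i\, V^{\pi}_i .
```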
Deep reinforcement learning (RL) algorithms have made great strides in recent years. An important remaining challenge is the ability to quickly transfer skills to new tasks and to combine existing skills with newly acquired ones. In domains where tasks are solved by composing skills, this capability holds the promise of dramatically reducing the data requirements of deep RL algorithms, and hence increasing their applicability. Recent work has studied ways of composing behaviours represented in the form of action-value functions. We analyse these methods to highlight their strengths and weaknesses, and point out situations where each of them is susceptible to poor performance. To perform this analysis we extend generalised policy improvement to the maximum-entropy framework and introduce a practical approach to implementing successor features in continuous action spaces. We then propose a novel method which, in principle, recovers the optimal policy during transfer. This method works by explicitly learning the (discounted, future) divergence between policies. We study this approach in the tabular case and propose a scalable variant applicable in multi-dimensional continuous action spaces. We compare our approach with existing ones on a range of non-trivial continuous control problems with compositional structure, and demonstrate qualitatively better performance despite not requiring simultaneous observation of all task rewards.
Imitation can be viewed as a means of enhancing learning in multiagent environments. It augments an agent's ability to learn useful behaviors by making intelligent use of the knowledge implicit in behaviors demonstrated by cooperative teachers or other more experienced agents. We propose and study a formal model of implicit imitation that can accelerate reinforcement learning dramatically in certain cases. Roughly, by observing a mentor, a reinforcement-learning agent can extract information about its own capabilities in, and the relative value of, unvisited parts of the state space. We study two specific instantiations of this model, one in which the learning agent and the mentor have identical abilities, and one designed to deal with agents and mentors with different action sets. We illustrate the benefits of implicit imitation by integrating it with prioritized sweeping, and demonstrating improved performance and convergence through observation of single and multiple mentors. Though we make some stringent assumptions regarding observability and possible interactions, we briefly comment on extensions of the model that relax these restricitions.
An important property of lifelong-learning agents is the ability to combine existing skills to solve unseen tasks. In general, however, it is unclear how to compose skills in a principled manner. We provide a "recipe" for optimal value-function composition in entropy-regularised reinforcement learning (RL) and then extend it to the standard RL setting. Composition is demonstrated in a video-game environment, where an agent with an existing library of policies is able to solve new tasks without any further learning.
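One common instance of such a composition recipe, stated here as an assumption for illustration rather than as the paper's exact result, combines the optimal soft action values of the base tasks with a weighted log-sum-exp that tends to a hard maximum as the entropy regularisation vanishes:

```python
import numpy as np

def compose_soft_q(q_list, weights=None, temperature=1.0):
    """Compose per-task soft action values for an 'or'-style composite task (sketch).

    q_list      : list of arrays Q_i(s, a), one per base task.
    temperature : entropy-regularisation strength; as it approaches 0 the
                  composition approaches max_i Q_i (the standard-RL limit).
    """
    q = np.stack(q_list)                               # (n_tasks, ..., n_actions)
    if weights is None:
        weights = np.ones(len(q_list)) / len(q_list)
    w = np.asarray(weights).reshape((-1,) + (1,) * (q.ndim - 1))
    # Weighted log-sum-exp across tasks, scaled back by the temperature.
    return temperature * np.log(np.sum(w * np.exp(q / temperature), axis=0))
```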
This paper presents the MAXQ approach to hierarchical reinforcement learning, which is based on decomposing the target Markov decision process (MDP) into a hierarchy of smaller MDPs and decomposing the value function of the target MDP into an additive combination of the value functions of the smaller MDPs. The paper defines the MAXQ hierarchy, proves formal results on its representational power, and establishes five conditions for the safe use of state abstraction. The paper presents an online model-free learning algorithm, MAXQ-Q, and proves that it converges with probability 1 to a kind of locally optimal policy known as a recursively optimal policy, even in the presence of the five kinds of state abstraction. The paper evaluates the MAXQ representation and MAXQ-Q through a series of experiments in three domains and shows experimentally that MAXQ-Q (with state abstraction) converges to a recursively optimal policy much faster than flat Q-learning. The fact that MAXQ learns a representation of the value function has the important benefit that an improved, non-hierarchical policy can be computed and executed via a procedure similar to the policy-improvement step of policy iteration. The paper demonstrates the effectiveness of this non-hierarchical execution experimentally. Finally, the paper concludes with a comparison to related work and a discussion of the design trade-offs in hierarchical reinforcement learning.
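The additive decomposition referred to above expresses the value of a subtask through the values of its children plus "completion" values; a sketch of the standard equations in Dietterich's notation:

```latex
% MAXQ value decomposition, for subtask i with child action a:
Q(i, s, a) = V(a, s) + C(i, s, a)
% V(a, s): expected discounted reward accumulated while executing child a from s.
% C(i, s, a): expected discounted reward for completing subtask i after a finishes.
V(i, s) =
\begin{cases}
  \max_{a} Q(i, s, a)                          & \text{if } i \text{ is composite,} \\
  \sum_{s'} P(s' \mid s, i)\, R(s' \mid s, i)  & \text{if } i \text{ is primitive.}
\end{cases}
```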
This chapter surveys recent lines of work that use Bayesian techniques for reinforcement learning. In Bayesian learning, uncertainty is expressed by a prior distribution over unknown parameters and learning is achieved by computing a posterior distribution based on the data observed. Hence, Bayesian reinforcement learning distinguishes itself from other forms of reinforcement learning by explicitly maintaining a distribution over various quantities such as the parameters of the model, the value function, the policy or its gradient. This yields several benefits: a) domain knowledge can be naturally encoded in the prior distribution to speed up learning; b) the exploration/exploitation tradeoff can be naturally optimized; and c) notions of risk can be naturally taken into account to obtain robust policies.
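A toy illustration of the "maintain a posterior and exploit it" idea surveyed here, using Thompson sampling with conjugate Beta posteriors over Bernoulli rewards; this is a bandit special case chosen for brevity, not the chapter's general formulation:

```python
import numpy as np

def thompson_action(successes, failures, rng=None):
    """Pick an action by Thompson sampling with Beta(1 + s, 1 + f) posteriors (sketch).

    successes, failures : integer arrays counting observed Bernoulli outcomes per
                          action. Sampling a mean from each posterior and acting
                          greedily on the samples trades off exploration and
                          exploitation automatically.
    """
    rng = rng or np.random.default_rng()
    sampled_means = rng.beta(1 + successes, 1 + failures)
    return int(np.argmax(sampled_means))

def update_posterior(successes, failures, action, reward):
    """Conjugate update after observing a 0/1 reward for the chosen action."""
    if reward:
        successes[action] += 1
    else:
        failures[action] += 1
```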
We propose Scheduled Auxiliary Control (SAC-X), a new learning paradigm in the context of Reinforcement Learning (RL). SAC-X enables learning of complex behaviors, from scratch, in the presence of multiple sparse reward signals. To this end, the agent is equipped with a set of general auxiliary tasks that it attempts to learn simultaneously via off-policy RL. The key idea behind our method is that active (learned) scheduling and execution of auxiliary policies allows the agent to efficiently explore its environment, enabling it to excel at sparse-reward RL. Our experiments in several challenging robotic manipulation settings demonstrate the power of our approach. A video of the rich set of learned behaviours can be found at https://youtu.be/mPKyvocNe M.
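The scheduling idea can be pictured as follows: one policy (intention) per auxiliary task, a scheduler that decides which intention controls the robot for the next stretch of an episode, and a shared replay buffer from which every task's policy is trained off-policy. The interfaces below are assumptions made for illustration, not the paper's implementation:

```python
def collect_episode(intentions, scheduler, env, replay, switch_every=150):
    """Gather one episode under scheduled auxiliary intentions (SAC-X-style sketch).

    intentions : dict mapping task name -> policy with an act(obs) method
                 (hypothetical interface).
    scheduler  : callable returning the name of the next intention to execute.
    env        : gym-like environment assumed to return a vector of rewards,
                 one entry per task, so every policy can learn off-policy
                 from the same transitions.
    """
    obs, done, t = env.reset(), False, 0
    task = scheduler()
    while not done:
        if t > 0 and t % switch_every == 0:
            task = scheduler()                          # periodically re-schedule
        action = intentions[task].act(obs)
        next_obs, rewards, done, _ = env.step(action)   # rewards: one per task
        replay.add(obs, action, rewards, next_obs, done, behaviour_task=task)
        obs, t = next_obs, t + 1
```

A uniform-random scheduler is the simplest choice here; learning the scheduler is the "active (learned) scheduling" the abstract refers to.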