The ability to transfer skills across tasks has the potential to scale up reinforcement learning (RL) agents to environments currently out of reach. Recently, a framework based on two ideas, successor features (SFs) and generalised policy improvement (GPI), has been introduced as a principled way of transferring skills. In this paper we extend the SF&GPI framework in two ways. One of the basic assumptions underlying the original formulation of SF&GPI is that rewards for all tasks of interest can be computed as linear combinations of a fixed set of features. We relax this constraint and show that the theoretical guarantees supporting the framework can be extended to any set of tasks that only differ in the reward function. Our second contribution is to show that one can use the reward functions themselves as features for future tasks, without any loss of expressiveness, thus removing the need to specify a set of features beforehand. This makes it possible to combine SF&GPI with deep learning in a more stable way. We empirically verify this claim on a complex 3D environment where observations are images from a first-person perspective. We show that the transfer promoted by SF&GPI leads to very good policies on unseen tasks almost instantaneously. We also describe how to learn policies specialised to the new tasks in a way that allows them to be added to the agent's set of skills, and thus be reused in the future.
Transfer in reinforcement learning refers to the notion that generalisation should occur not only within a task but also across tasks. We propose a transfer framework for the scenario in which the reward function changes between tasks but the environment's dynamics remain the same. Our approach rests on two key ideas: "successor features", a value function representation that decouples the dynamics of the environment from the rewards, and "generalised policy improvement", a generalisation of dynamic programming's policy improvement operation that considers a set of policies rather than a single one. Put together, the two ideas lead to an approach that integrates seamlessly within the reinforcement learning framework and allows the free exchange of information across tasks. The proposed method also provides performance guarantees for the transferred policy even before any learning has taken place. We derive two theorems that set our approach on firm theoretical ground and present experiments showing that it successfully promotes transfer in practice, significantly outperforming alternative methods in a sequence of navigation tasks and in the control of a simulated robotic arm.
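To make the two key ideas concrete, below is a minimal tabular sketch of how an agent could act on a new task using previously learned successor features and generalised policy improvement. It assumes the SFs of each cached policy are stored as numpy arrays and that the new task's reward is (approximately) a linear function of the features; the names gpi_action, psi_list and w are illustrative, not the authors' implementation.

import numpy as np

# Minimal tabular sketch of successor features (SFs) and generalised policy
# improvement (GPI). Shapes and names are illustrative, not the paper's code.
# Assumption: for each previously learned policy pi_i we have its SFs
# psi_i(s, a) = E[sum_t gamma^t phi(s_t, a_t)], stored with shape
# (num_states, num_actions, num_features), and the new task's reward is
# r(s, a) = phi(s, a) . w for a known weight vector w.

def gpi_action(psi_list, w, state):
    # Q^{pi_i}(state, a) = psi_i(state, a) . w for every cached policy i
    q_values = np.stack([psi[state] @ w for psi in psi_list])  # (n_policies, n_actions)
    # GPI: act greedily with respect to the maximum over all cached policies
    return int(np.argmax(q_values.max(axis=0)))

# Toy usage: 4 states, 2 actions, 3 features, 2 cached policies.
rng = np.random.default_rng(0)
psi_list = [rng.random((4, 2, 3)) for _ in range(2)]
w_new_task = np.array([1.0, -0.5, 0.2])
print(gpi_action(psi_list, w_new_task, state=0))

The appeal of this construction is that evaluating every cached policy on a new task costs only one dot product per policy, which is why a reasonable behaviour is available before any learning on the new task has taken place.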
Some real-world domains are best characterised as a single task, but for others this perspective is limiting. Instead, some tasks continually grow in complexity, in tandem with the agent's competence. In continual learning, also referred to as lifelong learning, there are no explicit task boundaries or curricula. As learning agents become increasingly powerful, continual learning remains one of the frontiers that has resisted rapid progress. To test continual learning capabilities we consider a challenging 3D domain with an explicit sequence of tasks and sparse rewards. We propose a novel agent architecture called Unicorn, which demonstrates strong continual learning capabilities and outperforms several baseline agents on the proposed domain. The agent achieves this by jointly representing and learning multiple policies efficiently, using a parallel off-policy learning setup.
Imitation can be viewed as a means of enhancing learning in multiagent environments. It augments an agent's ability to learn useful behaviors by making intelligent use of the knowledge implicit in behaviors demonstrated by cooperative teachers or other more experienced agents. We propose and study a formal model of implicit imitation that can accelerate reinforcement learning dramatically in certain cases. Roughly, by observing a mentor, a reinforcement-learning agent can extract information about its own capabilities in, and the relative value of, unvisited parts of the state space. We study two specific instantiations of this model, one in which the learning agent and the mentor have identical abilities, and one designed to deal with agents and mentors with different action sets. We illustrate the benefits of implicit imitation by integrating it with prioritized sweeping, and demonstrating improved performance and convergence through observation of single and multiple mentors. Though we make some stringent assumptions regarding observability and possible interactions, we briefly comment on extensions of the model that relax these restrictions.
Reinforcement learning (RL) offers powerful algorithms to search for optimal controllers of systems with nonlinear, possibly stochastic dynamics that are unknown or highly uncertain. This review mainly covers artificial-intelligence approaches to RL, from the viewpoint of the control engineer. We explain how approximate representations of the solution make RL feasible for problems with continuous states and control actions. Stability is a central concern in control, and we argue that while the control-theoretic RL subfield called adaptive dynamic programming is dedicated to it, stability of RL largely remains an open question. We also cover in detail the case where deep neural networks are used for approximation, leading to the field of deep RL, which has shown great success in recent years. With the control practitioner in mind, we outline opportunities and pitfalls of deep RL; and we close the survey with an outlook that, among other things, points out some avenues for bridging the gap between control and artificial-intelligence RL techniques.
Reinforcement learning is bedeviled by the curse of dimensionality: the number of parameters to be learned grows exponentially with the size of any compact encoding of a state. Recent attempts to combat the curse of dimensionality have turned to principled ways of exploiting temporal abstraction, where decisions are not required at each step, but rather invoke the execution of temporally-extended activities which follow their own policies until termination. This leads naturally to hierarchical control architectures and associated learning algorithms. We review several approaches to temporal abstraction and hierarchical organization that machine learning researchers have recently developed. Common to these approaches is a reliance on the theory of semi-Markov decision processes, which we emphasize in our review. We then discuss extensions of these ideas to concurrent activities, multiagent coordination, and hierarchical memory for addressing partial observability. Concluding remarks address open challenges facing the further development of reinforcement learning in a hierarchical setting.
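As a concrete illustration of the temporally extended activities mentioned above, the sketch below encodes an option in the usual (initiation set, intra-option policy, termination condition) form and executes it as a single SMDP-level transition. The six-state corridor environment and the go_right option are invented purely for illustration.

import random
from dataclasses import dataclass
from typing import Callable, Set

@dataclass
class Option:
    initiation: Set[int]                  # states in which the option may be invoked
    policy: Callable[[int], int]          # intra-option policy: state -> primitive action
    termination: Callable[[int], float]   # beta(s): probability of terminating in state s

def corridor_step(state, action):
    """Toy six-state corridor: action 1 moves right, action 0 moves left;
    reaching the right end pays reward 1 and ends the episode."""
    nxt = min(state + 1, 5) if action == 1 else max(state - 1, 0)
    return nxt, (1.0 if nxt == 5 else 0.0), nxt == 5

def run_option(state, option, gamma=0.99):
    """Execute the option until it terminates and return the SMDP-level outcome:
    accumulated discounted reward, resulting state, and number of elapsed steps."""
    ret, discount, steps, done = 0.0, 1.0, 0, False
    while not done:
        state, reward, done = corridor_step(state, option.policy(state))
        ret += discount * reward
        discount *= gamma
        steps += 1
        done = done or random.random() < option.termination(state)
    return ret, state, steps

go_right = Option(initiation={0, 1, 2, 3, 4}, policy=lambda s: 1,
                  termination=lambda s: 1.0 if s == 5 else 0.0)
print(run_option(0, go_right))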
The reinforcement learning paradigm is a popular way to address problems that have only limited environmental feedback, rather than correctly labeled examples, as is common in other machine learning contexts. While significant progress has been made to improve learning in a single task, the idea of transfer learning has only recently been applied to reinforcement learning tasks. The core idea of transfer is that experience gained in learning to perform one task can help improve learning performance in a related, but different, task. In this article we present a framework that classifies transfer learning methods in terms of their capabilities and goals, and then use it to survey the existing literature, as well as to suggest future directions for transfer learning work. In reinforcement learning (RL) (Sutton and Barto, 1998) problems, learning agents take sequential actions with the goal of maximizing a reward signal, which may be time-delayed. For example, an agent could learn to play a game by being told whether it wins or loses, but is never given the "correct" action at any given point in time. The RL framework has gained popularity as learning methods have been developed that are capable of handling increasingly complex problems. However, when RL agents begin learning tabula rasa, mastering difficult tasks is often slow or infeasible, and thus a significant amount of current RL research focuses on improving the speed of learning by exploiting domain expertise with varying amounts of human-provided knowledge. Common approaches include deconstructing the task into a hierarchy of subtasks (cf., Dietterich, 2000); learning with higher-level, temporally abstract, actions (e.g., options, Sutton et al. 1999) rather than simple one-step actions; and efficiently abstracting over the state space (e.g., via function approximation) so that the agent may generalize its experience more efficiently. The insight behind transfer learning (TL) is that generalization may occur not only within tasks, but also across tasks. This insight is not new; transfer has long been studied in the psychological literature (cf., Thorndike and Woodworth, 1901; Skinner, 1953). More relevant are a number of * .
Sequential decision-making problems with multiple objectives arise naturally in practice and pose unique challenges for research in decision-theoretic planning and learning, which has largely focused on single-objective settings. This article surveys algorithms designed for sequential decision-making problems with multiple objectives. Though there is a growing body of literature on this subject, little of it makes explicit under what circumstances special methods are needed to solve multi-objective problems. Therefore, we identify three distinct scenarios in which converting such a problem to a single-objective one is impossible, infeasible, or undesirable. Furthermore, we propose a taxonomy that classifies multi-objective methods according to the applicable scenario, the nature of the scalarization function (which projects multi-objective values to scalar ones), and the type of policies considered. We show how these factors determine the nature of an optimal solution, which can be a single policy, a convex hull, or a Pareto front. Using this taxonomy, we survey the literature on multi-objective methods for planning and learning. Finally, we discuss key applications of such methods and outline opportunities for future work.
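In the simplest of these scenarios, where a linear scalarization function with known weights is acceptable, the multi-objective problem collapses to choosing a single policy, as the toy sketch below illustrates (the policies and value vectors are made up). When the weights are unknown in advance or nonlinear trade-offs matter, one has to retain a convex hull or Pareto front of policies instead, which is where the specialised methods surveyed here come in.

import numpy as np

# Linear scalarization of multi-objective values: each policy has a value
# vector (one entry per objective) and a weight vector projects it to a scalar.
value_vectors = {
    "pi_fast":  np.array([10.0, -5.0]),   # objective 1: task reward, objective 2: negated risk
    "pi_safe":  np.array([4.0, -1.0]),
    "pi_mixed": np.array([7.0, -2.5]),
}

def best_policy(weights):
    # With known linear weights the choice reduces to a single-objective comparison.
    return max(value_vectors, key=lambda name: weights @ value_vectors[name])

print(best_policy(np.array([0.5, 0.5])))   # equal trade-off -> pi_fast
print(best_policy(np.array([0.1, 0.9])))   # strongly weight the second objective -> pi_safe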
This paper makes a first step toward the integration of two subfields of machine learning, namely preference learning and reinforcement learning (RL). An important motivation for a preference-based approach to reinforcement learning is the observation that in many real-world domains, numerical feedback signals are not readily available, or are defined arbitrarily in order to satisfy the needs of conventional RL algorithms. Instead, we propose an alternative framework for reinforcement learning, in which qualitative reward signals can be directly used by the learner. The framework may be viewed as a generalization of the conventional RL framework in which only a partial order between policies is required instead of the total order induced by their respective expected long-term reward. Therefore, building on novel methods for preference learning, our general goal is to equip the RL agent with qualitative policy models, such as ranking functions that allow for sorting its available actions from most to least promising, as well as algorithms for learning such models from qualitative feedback. As a proof of concept, we realize a first simple instantiation of this framework that defines preferences based on utilities observed for trajectories. To that end, we build on an existing method for approximate policy iteration based on roll-outs. While this approach is based on the use of classification methods for generalization and policy learning, we make use of a specific type of preference learning method called label ranking. Advantages of preference-based approximate policy iteration are illustrated by means of two case studies.
Deep reinforcement learning is poised to revolutionise the field of AI and represents a step towards building autonomous systems with a higher level understanding of the visual world. Currently, deep learning is enabling reinforcement learning to scale to problems that were previously intractable, such as learning to play video games directly from pixels. Deep reinforcement learning algorithms are also applied to robotics, allowing control policies for robots to be learned directly from camera inputs in the real world. In this survey, we begin with an introduction to the general field of reinforcement learning, then progress to the main streams of value-based and policy-based methods. Our survey will cover central algorithms in deep reinforcement learning, including the deep Q-network, trust region policy optimisation, and asynchronous advantage actor-critic. In parallel, we highlight the unique advantages of deep neural networks, focusing on visual understanding via reinforcement learning. To conclude, we describe several current areas of research within the field.
The resurgence of deep neural networks has resulted in impressive advances in natural language processing (NLP). This success, however, is contingent on access to large amounts of structured supervision, often manually constructed and unavailable for many applications and domains. In this thesis, I present novel computational models that integrate reinforcement learning with language understanding to induce grounded representations of semantics. Using unstructured feedback, these techniques not only enable task-optimized representations which reduce dependence on high quality annotations, but also exploit language in adapting control policies across different environments. First, I describe an approach for learning to play text-based games, where all interaction is through natural language and the only source of feedback is in-game rewards. Employing a deep reinforcement learning framework to jointly learn state representations and action policies, our model outperforms several baselines on different domains, demonstrating the importance of learning expressive representations. Second, I exhibit a framework for utilizing textual descriptions to tackle the challenging problem of cross-domain policy transfer for reinforcement learning (RL). We employ a model-based RL approach consisting of a differentiable planning module, a model-free component and a factorized state representation to effectively make use of text. Our model outperforms prior work on both transfer and multi-task scenarios in a variety of different environments. Finally, I demonstrate how reinforcement learning can enhance traditional NLP systems in low resource scenarios. In particular, I describe an autonomous agent that can learn to acquire and integrate external information to enhance information extraction. Our experiments on two databases (shooting incidents and food adulteration cases) demonstrate that our system significantly improves over traditional extractors and a competitive meta-classifier baseline.
Curriculum learning in reinforcement learning is a training methodology that accelerates learning of a difficult target task by first training on a series of simpler tasks and transferring the knowledge acquired to the target task. Automatically choosing such a sequence of tasks (i.e., a curriculum) is an open problem that has been the subject of recent work in the field. In this paper, we build on recent approaches to curriculum design and formulate the curriculum-sequencing problem as a Markov decision process. We extend this model to handle multiple transfer-learning algorithms and show, for the first time, that a curriculum policy over this MDP can be learned from experience. We explore various representations that make this possible and evaluate our approach by learning curriculum policies for multiple agents in different domains. The results show that the curricula produced by our approach can train agents to perform on a target task as fast as or faster than existing methods.
We propose a new approach to reinforcement learning for control problems which combines value-function approximation with linear architectures and approximate policy iteration. This new approach is motivated by the least-squares temporal-difference learning algorithm (LSTD) for prediction problems, which is known for its efficient use of sample experiences compared to pure temporal-difference algorithms. Heretofore, LSTD has not had a straightforward application to control problems mainly because LSTD learns the state value function of a fixed policy which cannot be used for action selection and control without a model of the underlying process. Our new algorithm, least-squares policy iteration (LSPI), learns the state-action value function which allows for action selection without a model and for incremental policy improvement within a policy-iteration framework. LSPI is a model-free, off-policy method which can use efficiently (and reuse in each iteration) sample experiences collected in any manner. By separating the sample collection method, the choice of the linear approximation architecture, and the solution method, LSPI allows for focused attention on the distinct elements that contribute to practical reinforcement learning. LSPI is tested on the simple task of balancing an inverted pendulum and the harder task of balancing and riding a bicycle to a target location. In both cases, LSPI learns to control the pendulum or the bicycle by merely observing a relatively small number of trials where actions are selected randomly. LSPI is also compared against Q-learning (both with and without experience replay) using the same value function architecture. While LSPI achieves good performance fairly consistently on the difficult bicycle task, Q-learning variants were rarely able to balance for more than a small fraction of the time needed to reach the target location.
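A compact sketch of the LSTD-Q evaluation step and the surrounding policy-iteration loop is given below. It assumes samples are (s, a, r, s') tuples and that phi(s, a) returns a feature vector; the small ridge term and the one-hot toy example are added for numerical convenience and illustration and are not part of the original algorithm description.

import numpy as np

def lstdq(samples, phi, policy, n_features, gamma=0.99):
    """Solve A w = b for the Q-function weights of the current policy."""
    A = np.zeros((n_features, n_features))
    b = np.zeros(n_features)
    for s, a, r, s_next in samples:
        f = phi(s, a)
        f_next = phi(s_next, policy(s_next))       # next action chosen by the current policy
        A += np.outer(f, f - gamma * f_next)
        b += r * f
    return np.linalg.solve(A + 1e-6 * np.eye(n_features), b)   # tiny ridge for stability

def lspi(samples, phi, n_features, n_actions, n_iters=10, gamma=0.99):
    w = np.zeros(n_features)
    greedy = lambda s: int(np.argmax([phi(s, a) @ w for a in range(n_actions)]))
    for _ in range(n_iters):
        w = lstdq(samples, phi, greedy, n_features, gamma)   # policy evaluation;
        # the closure over w makes `greedy` the improved policy on the next pass
    return w

# Toy usage: 2 states, 2 actions, one-hot features over (state, action) pairs.
phi = lambda s, a: np.eye(4)[2 * s + a]
samples = [(0, 1, 0.0, 1), (1, 1, 1.0, 0), (0, 0, 0.0, 0), (1, 0, 0.0, 1)]
print(lspi(samples, phi, n_features=4, n_actions=2).round(2))

Note that the same batch of samples is reused in every iteration, which is the property the abstract emphasises: sample collection, the approximation architecture, and the solver are kept separate.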
This paper surveys the field of reinforcement learning from a computer-science perspective. It is written to be accessible to researchers familiar with machine learning. Both the historical foundations of the field and a broad selection of current work are summarized. Reinforcement learning is the problem faced by an agent that must learn behavior through trial-and-error interactions with a dynamic environment. The work described here has a resemblance to work in psychology, but differs considerably in the details and in the use of the word "reinforcement". The paper discusses central issues of reinforcement learning, including trading off exploration and exploitation, establishing the foundations of the field via Markov decision theory, learning from delayed reinforcement, constructing empirical models to accelerate learning, making use of generalization and hierarchy, and coping with hidden state. It concludes with a survey of some implemented systems and an assessment of the practical utility of current methods for reinforcement learning.
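As a worked example of the exploration/exploitation trade-off and of learning from delayed reinforcement discussed in the survey, here is a minimal tabular Q-learning loop with epsilon-greedy action selection. The five-state chain environment is invented purely for illustration.

import numpy as np

def step(state, action, n_states=5):
    """Toy deterministic chain: action 1 moves right, action 0 moves left;
    reaching the right end pays reward 1 and ends the episode."""
    nxt = min(state + 1, n_states - 1) if action == 1 else max(state - 1, 0)
    return nxt, (1.0 if nxt == n_states - 1 else 0.0), nxt == n_states - 1

def q_learning(n_states=5, n_actions=2, episodes=500,
               alpha=0.1, gamma=0.95, epsilon=0.1, seed=0):
    rng = np.random.default_rng(seed)
    q = np.zeros((n_states, n_actions))
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # explore with probability epsilon, otherwise exploit the current estimate
            a = rng.integers(n_actions) if rng.random() < epsilon else int(np.argmax(q[s]))
            s_next, r, done = step(s, a, n_states)
            target = r + (0.0 if done else gamma * np.max(q[s_next]))
            q[s, a] += alpha * (target - q[s, a])   # temporal-difference update
            s = s_next
    return q

print(q_learning().round(2))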
We present Value Propagation (VProp), a set of parameter-efficient differentiable planning modules built on Value Iteration which can successfully be trained using reinforcement learning to solve unseen tasks, has the capability to generalize to larger map sizes, and can learn to navigate in dynamic environments. We show that the modules enable learning to plan when the environment also includes stochastic elements, providing a cost-efficient learning system to build low-level size-invariant planners for a variety of interactive navigation problems. We evaluate on static and dynamic configurations of MazeBase grid-worlds, with randomly generated environments of several different sizes, and on a StarCraft navigation scenario, with more complex dynamics, and pixels as input.
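For reference, the planning recursion that such differentiable modules unroll is ordinary value iteration. A plain numpy version on a made-up 4x4 grid (deterministic moves, a single rewarding goal cell, no learning or function approximation) is sketched below.

import numpy as np

def value_iteration(rewards, gamma=0.9, iters=50):
    """V(s) = R(s) + gamma * max over staying put or moving to a neighbouring cell."""
    h, w = rewards.shape
    v = np.zeros((h, w))
    for _ in range(iters):
        padded = np.pad(v, 1, constant_values=-np.inf)
        # values reachable by moving up / down / left / right from every cell
        neighbours = np.stack([padded[:-2, 1:-1], padded[2:, 1:-1],
                               padded[1:-1, :-2], padded[1:-1, 2:]])
        v = rewards + gamma * np.maximum(neighbours.max(axis=0), v)   # staying is allowed
    return v

rewards = np.zeros((4, 4))
rewards[3, 3] = 1.0          # single goal cell in the corner
print(value_iteration(rewards).round(2))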
This paper presents the MAXQ approach to hierarchical reinforcement learning, based on decomposing the target Markov decision process (MDP) into a hierarchy of smaller MDPs and decomposing the value function of the target MDP into an additive combination of the value functions of the smaller MDPs. The paper defines the MAXQ hierarchy, proves formal results on its representational power, and establishes five conditions for the safe use of state abstractions. The paper presents an online model-free learning algorithm, MAXQ-Q, and proves that it converges with probability 1 to a kind of locally optimal policy known as a recursively optimal policy, even in the presence of the five kinds of state abstraction. The paper evaluates the MAXQ representation and MAXQ-Q through a series of experiments in three domains and shows experimentally that MAXQ-Q (with state abstractions) converges to a recursively optimal policy much faster than flat Q-learning. The fact that MAXQ learns a representation of the value function has an important benefit: it makes it possible to compute and execute an improved, non-hierarchical policy via a procedure similar to the policy-improvement step of policy iteration. The paper demonstrates the effectiveness of this non-hierarchical execution experimentally. Finally, the paper concludes with a comparison to related work and a discussion of the design trade-offs in hierarchical reinforcement learning.
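The core of the decomposition can be written in a few lines: the value of invoking subtask a inside a parent task splits as Q(parent, s, a) = V(a, s) + C(parent, s, a), where V is the value earned inside the subtask and C is the completion value earned after it finishes. The toy task graph and numbers below are invented solely to show the recursion; they are not the paper's domains.

# MAXQ value-function decomposition on a made-up two-level task graph.
primitive_reward = {("navigate", "far"): -1.0, ("navigate", "at_passenger"): 0.0,
                    ("pickup", "far"): -10.0, ("pickup", "at_passenger"): 1.0}
children = {"root": ["navigate", "pickup"]}                  # composite tasks and their subtasks
C = {("root", "far", "navigate"): 0.5,                       # completion values C(parent, s, a)
     ("root", "at_passenger", "pickup"): 2.0}

def V(task, state):
    if task not in children:                        # primitive action: expected one-step reward
        return primitive_reward[(task, state)]
    return max(Q(task, state, a) for a in children[task])    # composite: V(i, s) = max_a Q(i, s, a)

def Q(parent, state, action):
    # MAXQ decomposition: Q(parent, s, a) = V(a, s) + C(parent, s, a)
    return V(action, state) + C.get((parent, state, action), 0.0)

print(Q("root", "far", "navigate"))     # -1.0 + 0.5 = -0.5
print(V("root", "at_passenger"))        # max(0.0 + 0.0, 1.0 + 2.0) = 3.0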
The past decade has seen a significant breakthrough in research on solving partially observable Markov decision processes (POMDPs). Where past solvers could not scale beyond perhaps a dozen states, modern solvers can handle complex domains with many thousands of states. This breakthrough was mainly due to the idea of restricting value function computations to a finite subset of the belief space, permitting only local value updates for this subset. This approach, known as point-based value iteration, avoids the exponential growth of the value function, and is thus applicable for domains with longer horizons, even with relatively large state spaces. Many extensions were suggested to this basic idea, focusing on various aspects of the algorithm, mainly the selection of the belief space subset and the order of value function updates. In this survey, we walk the reader through the fundamentals of point-based value iteration, explaining the main concepts and ideas. Then, we survey the major extensions to the basic algorithm, discussing their merits. Finally, we include an extensive empirical analysis using well known benchmarks, in order to shed light on the strengths and limitations of the various approaches.
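The central operation in these methods is the point-based backup, which constructs a single new alpha-vector for one belief point from the current set. A compact numpy sketch is given below; the two-state, two-action, two-observation model standing in for T, O and R is made up and is not one of the survey's benchmarks.

import numpy as np

def point_based_backup(b, gamma_set, T, O, R, discount=0.95):
    """b: belief (S,); gamma_set: list of alpha-vectors (S,);
    T[a]: (S, S) transition matrix; O[a]: (S, O) observation matrix; R[a]: (S,) reward."""
    best_vec, best_val = None, -np.inf
    for a in range(len(T)):
        g_a = R[a].astype(float)
        for o in range(O[a].shape[1]):
            # g_{a,o}^i(s) = sum_{s'} P(s'|s,a) P(o|s',a) alpha_i(s'), one per alpha-vector
            candidates = [T[a] @ (O[a][:, o] * alpha) for alpha in gamma_set]
            g_a = g_a + discount * max(candidates, key=lambda g: b @ g)
        if b @ g_a > best_val:                      # keep the action whose vector is best at b
            best_vec, best_val = g_a, b @ g_a
    return best_vec                                 # new alpha-vector to add for this belief point

# Toy usage: 2 states, 2 actions, 2 observations, Gamma initialised with a zero vector.
S = 2
T = [np.eye(S), np.full((S, S), 0.5)]
O = [np.array([[0.85, 0.15], [0.15, 0.85]]), np.full((S, 2), 0.5)]
R = [np.array([0.0, 0.0]), np.array([1.0, -1.0])]
print(point_based_backup(np.array([0.5, 0.5]), [np.zeros(S)], T, O, R))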
Fuelled by successes in Computer Go, Monte Carlo tree search (MCTS) has achieved widespread adoption within the games community. Its links to traditional reinforcement learning (RL) methods have been outlined in the past; however, the use of RL techniques within tree search has not been thoroughly studied yet. In this paper we reexamine in depth this close relation between the two fields; our goal is to improve the cross-awareness between the two communities. We show that a straightforward adaptation of RL semantics within tree search can lead to a wealth of new algorithms, for which the traditional MCTS is only one of the variants. We confirm that planning methods inspired by RL in conjunction with online search demonstrate encouraging results on several classic board games and in arcade video game competitions, where our algorithm recently ranked first. Our study promotes a unified view of learning, planning, and search.
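The clearest point of contact between the two fields is the in-tree selection rule, where an RL-style value estimate is combined with an exploration bonus. A minimal UCB1-style (UCT) selection function is sketched below, with the children represented simply as illustrative (mean return, visit count) pairs.

import math

def uct_select(children, c=math.sqrt(2)):
    """children: list of (mean_value, visit_count) pairs; returns the index to descend into."""
    total_visits = sum(n for _, n in children)
    def score(child):
        mean, n = child
        if n == 0:
            return math.inf                       # always try unvisited children first
        return mean + c * math.sqrt(math.log(total_visits) / n)
    return max(range(len(children)), key=lambda i: score(children[i]))

print(uct_select([(0.6, 10), (0.4, 2), (0.0, 0)]))   # -> 2, the unvisited child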
Reinforcement learning offers to robotics a framework and set of tools for the design of sophisticated and hard-to-engineer behaviors. Conversely, the challenges of robotic problems provide both inspiration, impact, and validation for developments in reinforcement learning. The relationship between disciplines has sufficient promise to be likened to that between physics and mathematics. In this article, we attempt to strengthen the links between the two research communities by providing a survey of work in reinforcement learning for behavior generation in robots. We highlight both key challenges in robot reinforcement learning as well as notable successes. We discuss how contributions tamed the complexity of the domain and study the role of algorithms, representations, and prior knowledge in achieving these successes. As a result, a particular focus of our paper lies on the choice between model-based and model-free as well as between value-function-based and policy-search methods. By analyzing a simple problem in some detail we demonstrate how reinforcement learning approaches may be profitably applied, and we note throughout open questions and the tremendous potential for future research.