In recent years, the transformer architecture and its variants have achieved remarkable success across many machine learning tasks. This success is intrinsically related to the attention mechanism's capability of handling long sequences and to the presence of context-dependent weights. We argue that these capabilities suit the core role of a meta-reinforcement learning algorithm. Indeed, a meta-RL agent needs to infer the task from a sequence of trajectories. Furthermore, it requires a fast adaptation strategy to adjust its policy to a new task, which can be achieved using the self-attention mechanism. In this work, we present TrMRL (Transformers for Meta-Reinforcement Learning), a meta-agent that mimics the memory reinstatement mechanism using the transformer architecture. It associates the recent past of working memories to build an episodic memory recursively through the transformer layers. We show that self-attention computes a consensus representation that minimizes the Bayes risk at each layer and provides meaningful features for computing the best actions. We conducted experiments in high-dimensional continuous control environments for locomotion and dexterous manipulation. Results show that TrMRL presents comparable or superior asymptotic performance, sample efficiency, and out-of-distribution generalization compared to the baselines in these environments.
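To make the mechanism concrete, here is a minimal sketch of the idea, assuming a simple transition embedding: a transformer encoder refines a window of recent working memories (embedded transitions) into an episodic representation from which actions are computed. All class names, layer sizes, and the embedding scheme are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch, assuming working memories are (obs, action, reward) embeddings.
import torch
import torch.nn as nn

class TransformerMemoryPolicy(nn.Module):
    def __init__(self, obs_dim, act_dim, d_model=64, n_layers=3, n_heads=4):
        super().__init__()
        # One working memory = embedding of one (obs, action, reward) transition.
        self.embed = nn.Linear(obs_dim + act_dim + 1, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.policy_head = nn.Linear(d_model, act_dim)

    def forward(self, obs, act, rew):
        # obs: (B, T, obs_dim), act: (B, T, act_dim), rew: (B, T, 1)
        memories = self.embed(torch.cat([obs, act, rew], dim=-1))
        # Self-attention blends the working memories across layers; the last
        # position serves as the task-conditioned ("consensus") representation.
        episodic = self.encoder(memories)
        return self.policy_head(episodic[:, -1])

policy = TransformerMemoryPolicy(obs_dim=8, act_dim=2)
a = policy(torch.randn(1, 16, 8), torch.randn(1, 16, 2), torch.randn(1, 16, 1))
```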
The transformer, originally devised for natural language processing, has also achieved significant success in computer vision. Thanks to its remarkable expressive power, researchers are investigating ways to deploy transformers in reinforcement learning (RL), and transformer-based models have demonstrated their potential on representative RL benchmarks. In this paper, we collect and dissect recent advances in transforming RL by transformer (transformer-based RL, or TRL), in order to explore its development trajectory and future trends. We group existing developments into two categories, architecture enhancement and trajectory optimization, and examine the main applications of TRL in robotic manipulation, text-based games, navigation, and autonomous driving. Architecture-enhancement methods consider how to apply the powerful transformer structure to RL problems under the traditional RL framework; they model agents and environments much more precisely than earlier deep RL methods, but remain limited by the inherent defects of traditional RL algorithms, such as bootstrapping and the "deadly triad". Trajectory-optimization methods treat RL problems as sequence modeling and train a joint state-action model over entire trajectories under the behavior cloning framework; they are able to extract policies from static datasets and fully exploit the transformer's long-sequence modeling capability. Given these advancements, we review extensions and challenges in TRL and discuss proposals for future directions. We hope that this survey can provide a detailed introduction to TRL and motivate future research in this rapidly developing field.
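As a toy illustration of the trajectory-optimization view described above, the sketch below flattens a trajectory into an interleaved (return-to-go, state, action) token sequence and trains a causal transformer to predict actions by behavior cloning, in the style of Decision Transformer. Shapes, sizes, and names are assumptions for illustration only.

```python
# Sketch, assuming continuous states/actions and a scalar return-to-go token.
import torch
import torch.nn as nn

class TrajectorySequenceModel(nn.Module):
    def __init__(self, state_dim, act_dim, d_model=64):
        super().__init__()
        self.embed_rtg = nn.Linear(1, d_model)
        self.embed_state = nn.Linear(state_dim, d_model)
        self.embed_act = nn.Linear(act_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model, 4, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, 2)
        self.act_head = nn.Linear(d_model, act_dim)

    def forward(self, rtg, states, actions):
        # Interleave tokens as (R_1, s_1, a_1, R_2, s_2, a_2, ...).
        B, T = states.shape[:2]
        toks = torch.stack(
            [self.embed_rtg(rtg), self.embed_state(states), self.embed_act(actions)],
            dim=2).reshape(B, 3 * T, -1)
        mask = nn.Transformer.generate_square_subsequent_mask(3 * T)  # causal mask
        h = self.backbone(toks, mask=mask)
        # Predict a_t from the hidden state at each state token (positions 1, 4, 7, ...).
        return self.act_head(h[:, 1::3])

model = TrajectorySequenceModel(state_dim=4, act_dim=2)
pred = model(torch.randn(1, 5, 1), torch.randn(1, 5, 4), torch.randn(1, 5, 2))
loss = ((pred - torch.randn(1, 5, 2)) ** 2).mean()  # behavior-cloning loss
```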
Deep reinforcement learning has demonstrated the potential of neural networks tuned with gradient descent for solving complex tasks in well-delimited environments. However, these neural systems are slow learners that produce specialized agents with no mechanism to continue learning beyond their training curriculum. In contrast, biological synaptic plasticity is persistent and manifold, and is believed to play a key role in executive functions such as working memory and cognitive flexibility, potentially supporting more efficient and more general learning abilities. Inspired by this, we propose to build networks with dynamic weights, able to continually perform self-reflexive modifications as a function of their current synaptic state and action-reward feedback, rather than relying on a fixed network configuration. The resulting model, MetODS (for Meta-Optimized Dynamical Synapses), is a broadly applicable meta-reinforcement learning system able to learn efficient and powerful control rules in the agent's policy space. A single layer with dynamic synapses can perform one-shot learning, generalize navigation principles to unseen environments, and exhibits a strong ability to learn adaptive motor policies, comparing favorably with previous meta-reinforcement learning approaches.
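Below is a minimal sketch of the dynamic-weights idea: a single layer whose weights are updated online as a function of the current synaptic state, the layer's activity, and the reward feedback. The reward-modulated Hebbian rule used here is one plausible instance of such a self-reflexive update, an illustrative assumption rather than the MetODS rule itself.

```python
# Sketch, assuming a reward-gated Hebbian plasticity rule with weight decay.
import numpy as np

class PlasticLayer:
    def __init__(self, n_in, n_out, lr=0.1, decay=0.01, seed=0):
        rng = np.random.default_rng(seed)
        self.W = 0.1 * rng.standard_normal((n_out, n_in))  # dynamic synapses
        self.lr, self.decay = lr, decay

    def forward(self, x):
        self.x, self.y = x, np.tanh(self.W @ x)
        return self.y

    def plastic_update(self, reward):
        # Strengthen co-active input/output pairs when rewarded; the decay
        # term keeps the synaptic state bounded across the episode.
        self.W += self.lr * reward * np.outer(self.y, self.x) - self.decay * self.W

layer = PlasticLayer(n_in=4, n_out=2)
for _ in range(10):
    action = layer.forward(np.random.randn(4))
    layer.plastic_update(reward=float(action.sum() > 0))  # toy reward signal
```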
Deep reinforcement learning algorithms require large amounts of experience to learn an individual task. While in principle meta-reinforcement learning (meta-RL) algorithms enable agents to learn new skills from small amounts of experience, several major challenges preclude their practicality. Current methods rely heavily on on-policy experience, limiting their sample efficiency. They also lack mechanisms to reason about task uncertainty when adapting to new tasks, limiting their effectiveness in sparse-reward problems. In this paper, we address these challenges by developing an off-policy meta-RL algorithm that disentangles task inference and control. In our approach, we perform online probabilistic filtering of latent task variables to infer how to solve a new task from small amounts of experience. This probabilistic interpretation enables posterior sampling for structured and efficient exploration. We demonstrate how to integrate these task variables with off-policy RL algorithms to achieve both meta-training and adaptation efficiency. Our method outperforms prior algorithms in sample efficiency by 20-100x as well as in asymptotic performance on several meta-RL benchmarks.
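A sketch of the task-inference component follows: an encoder maps context transitions to a Gaussian posterior over a latent task variable z, and the policy conditions on samples of z, so posterior sampling drives exploration. The product-of-Gaussians posterior is one concrete choice used by PEARL-style methods; the encoder and dimensions are placeholders.

```python
# Sketch, assuming context tuples are concatenated (s, a, r, s') vectors.
import torch
import torch.nn as nn

class TaskEncoder(nn.Module):
    def __init__(self, transition_dim, latent_dim=5):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(transition_dim, 64), nn.ReLU(),
                                 nn.Linear(64, 2 * latent_dim))
        self.latent_dim = latent_dim

    def posterior(self, context):
        # context: (N, transition_dim) transitions collected from one task.
        mu, log_var = self.net(context).split(self.latent_dim, dim=-1)
        prec = torch.exp(-log_var)            # per-transition precisions
        var = 1.0 / prec.sum(0)               # product of Gaussian factors
        mean = var * (prec * mu).sum(0)
        return torch.distributions.Normal(mean, var.sqrt())

enc = TaskEncoder(transition_dim=10)
z = enc.posterior(torch.randn(32, 10)).rsample()  # sampled task belief for the policy
```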
Humans can leverage prior experience and learn novel tasks from a handful of demonstrations. In contrast to offline meta-reinforcement learning, which aims at fast adaptation through better algorithm design, we investigate the effect of architectural inductive bias on few-shot learning capability. We propose a Prompt-based Decision Transformer (Prompt-DT), which leverages the sequential modeling ability of the transformer architecture and the prompt framework to achieve few-shot adaptation in offline RL. We design the trajectory prompt, which contains segments of few-shot demonstrations and encodes task-specific information to guide policy generation. Our experiments on five MuJoCo control benchmarks show that Prompt-DT is a strong few-shot learner without any extra fine-tuning on unseen target tasks. Prompt-DT outperforms its variants and strong meta offline RL baselines with a trajectory prompt containing only a few timesteps. Prompt-DT is also robust to prompt-length changes and can generalize to out-of-distribution (OOD) environments.
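The sketch below illustrates the trajectory-prompt idea: a short segment of a few-shot demonstration is prepended to the recent history, and the same sequence model consumes the concatenation, so the prompt carries the task identity. The segment length and the field layout are assumptions for illustration.

```python
# Sketch, assuming trajectories are stored as dicts of per-timestep arrays.
import numpy as np

def build_prompted_input(demo, history, prompt_len=5):
    """demo, history: dicts with 'rtg', 'states', 'actions' arrays of shape (T, d)."""
    prompt = {k: v[:prompt_len] for k, v in demo.items()}  # trajectory prompt
    return {k: np.concatenate([prompt[k], history[k]]) for k in history}

demo = {'rtg': np.ones((20, 1)), 'states': np.zeros((20, 4)), 'actions': np.zeros((20, 2))}
hist = {'rtg': np.ones((8, 1)), 'states': np.zeros((8, 4)), 'actions': np.zeros((8, 2))}
seq = build_prompted_input(demo, hist)  # fed to the transformer policy as one sequence
```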
Meta-reinforcement learning (meta-RL) aims to learn a policy that adapts quickly to new tasks. It requires a massive amount of data drawn from training tasks to infer the common structure shared among tasks. Without heavy reward engineering, sparse rewards in long-horizon tasks exacerbate the sample-efficiency problem in meta-RL. Another challenge in meta-RL is the discrepancy in difficulty level among tasks, which could cause one easy task to dominate the learning of the shared policy and thus preclude policy adaptation to new tasks. This work introduces a novel objective function for learning an action translator among training tasks. We theoretically verify that the value of the transferred policy with the action translator can be close to the value of the source policy, and that our objective function (approximately) upper-bounds the value difference. We propose to combine the action translator with context-based meta-RL algorithms for better data collection and more efficient exploration during meta-training. Our approach empirically improves the sample efficiency and performance of meta-RL algorithms on sparse-reward tasks.
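A sketch of training an action translator follows: given a state and a source-task action, the translator outputs a target-task action whose predicted next state matches the source transition. The transition-matching loss below is an illustrative surrogate for the paper's objective (which is what approximately upper-bounds the value gap); the networks are placeholders.

```python
# Sketch, assuming a pretrained target-task dynamics model is available.
import torch
import torch.nn as nn

state_dim, act_dim = 4, 2
translator = nn.Sequential(nn.Linear(state_dim + act_dim, 64), nn.ReLU(),
                           nn.Linear(64, act_dim))
target_dynamics = nn.Sequential(nn.Linear(state_dim + act_dim, 64), nn.ReLU(),
                                nn.Linear(64, state_dim))  # assumed pretrained

def translation_loss(s, a_src, s_next_src):
    a_tgt = translator(torch.cat([s, a_src], -1))
    s_next_pred = target_dynamics(torch.cat([s, a_tgt], -1))
    return ((s_next_pred - s_next_src) ** 2).mean()  # match source transitions

loss = translation_loss(torch.randn(16, 4), torch.randn(16, 2), torch.randn(16, 4))
loss.backward()
```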
In a partially observable Markov decision process (POMDP), an agent typically uses a representation of the past to approximate the underlying MDP. We propose to utilize a frozen pretrained language transformer (PLT) for history representation and compression to improve sample efficiency. To avoid training the transformer, we introduce FrozenHopfield, which automatically associates observations with pretrained token embeddings. To form these associations, a modern Hopfield network stores the token embeddings, which are retrieved by queries obtained through a random but fixed projection of observations. Our new method, HELM, enables actor-critic network architectures that contain a pretrained language transformer as a memory module for history representation. Since a representation of the past need not be learned, HELM is much more sample efficient than competitors. HELM achieves new state-of-the-art results on Minigrid and Procgen environments. Our code is available at https://github.com/ml-jku/helm.
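Here is a sketch of the FrozenHopfield association described above: observations are projected by a random but fixed matrix into the token-embedding space and used as queries against stored token embeddings; softmax retrieval (a modern Hopfield update) returns a mixture of embeddings that the frozen language transformer can consume. The sizes, random embeddings, and inverse temperature beta are illustrative assumptions.

```python
# Sketch, assuming random stand-ins for the pretrained token embeddings.
import numpy as np

rng = np.random.default_rng(0)
vocab, d_embed, d_obs, beta = 100, 32, 12, 8.0
E = rng.standard_normal((vocab, d_embed))  # stored (pretrained) token embeddings
P = rng.standard_normal((d_embed, d_obs))  # random but fixed projection

def frozen_hopfield(obs):
    q = P @ obs                            # query in embedding space
    scores = beta * (E @ q)
    w = np.exp(scores - scores.max())
    w /= w.sum()                           # softmax over stored patterns
    return w @ E                           # retrieved pseudo-token embedding

token = frozen_hopfield(rng.standard_normal(d_obs))  # fed to the frozen PLT
```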
We develop a new continual meta-learning method to address challenges in sequential multi-task learning. In this setting, the agent's goal is to quickly achieve high reward over any sequence of tasks. Prior meta-reinforcement learning algorithms have demonstrated promising results in accelerating the acquisition of new tasks. However, they require access to all tasks during training. Beyond simply transferring past experience to new tasks, our goal is to devise continual reinforcement learning algorithms that learn to learn, using their experience on previous tasks to learn new tasks more quickly. We introduce a new method, Continual Meta-Policy Search (CoMPS), that removes this limitation by meta-training incrementally over each task in a sequence, without revisiting prior tasks. CoMPS continuously repeats two subroutines: learning a new task using RL, and using the experience from RL to perform completely offline meta-learning that prepares for subsequent task learning. We find that CoMPS outperforms prior continual learning and off-policy meta-reinforcement learning methods on several sequences of challenging continuous-control tasks.
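The outer loop, as described, alternates the two subroutines per task; the control-flow sketch below makes this explicit. The function bodies are placeholders standing in for online RL and offline meta-learning, so only the incremental structure (no revisiting of prior tasks) is the point.

```python
# Sketch, assuming run_rl and offline_meta_update are supplied callables.
def comps(task_sequence, meta_policy, run_rl, offline_meta_update):
    replay = []
    for task in task_sequence:                # tasks arrive one at a time
        experience = run_rl(task, init=meta_policy)  # subroutine 1: online RL
        replay.append(experience)             # prior tasks are never revisited
        meta_policy = offline_meta_update(meta_policy, replay)  # subroutine 2
    return meta_policy

# Toy stand-ins so the loop runs end to end:
policy = comps(range(3), meta_policy=0.0,
               run_rl=lambda task, init: init + task,
               offline_meta_update=lambda p, replay: sum(replay) / len(replay))
```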
Real-world reinforcement learning tasks often involve some form of partial observability where the observations only give a partial or noisy view of the true state of the world. Such tasks typically require some form of memory, where the agent has access to multiple past observations, in order to perform well. One popular way to incorporate memory is by using a recurrent neural network to access the agent's history. However, recurrent neural networks in reinforcement learning are often fragile and difficult to train, susceptible to catastrophic forgetting and sometimes fail completely as a result. In this work, we propose Deep Transformer Q-Networks (DTQN), a novel architecture utilizing transformers and self-attention to encode an agent's history. DTQN is designed modularly, and we compare results against several modifications to our base model. Our experiments demonstrate the transformer can solve partially observable tasks faster and more stably than previous recurrent approaches.
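A minimal DTQN-style sketch follows: self-attention encodes the agent's observation history and a linear head emits Q-values for the most recent timestep. Layer sizes are illustrative, and the actual DTQN adds further modular pieces beyond this base model.

```python
# Sketch, assuming vector observations and a discrete action space.
import torch
import torch.nn as nn

class TransformerQNet(nn.Module):
    def __init__(self, obs_dim, n_actions, d_model=64, n_layers=2):
        super().__init__()
        self.embed = nn.Linear(obs_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model, 4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.q_head = nn.Linear(d_model, n_actions)

    def forward(self, obs_history):
        # obs_history: (B, T, obs_dim); a causal mask keeps encoding online.
        T = obs_history.shape[1]
        mask = nn.Transformer.generate_square_subsequent_mask(T)
        h = self.encoder(self.embed(obs_history), mask=mask)
        return self.q_head(h[:, -1])          # Q-values given the history

qnet = TransformerQNet(obs_dim=8, n_actions=4)
q = qnet(torch.randn(2, 10, 8))               # greedy action: q.argmax(-1)
```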
We study offline meta-reinforcement learning, a practical reinforcement learning paradigm that learns from offline data to adapt to new tasks. The distribution of the offline data is jointly determined by the behavior policy and the task. Existing offline meta-reinforcement learning algorithms cannot distinguish these factors, making task representations unstable to changes in behavior policies. To address this problem, we propose a contrastive learning framework for task representations that are robust to the distribution mismatch of behavior policies between training and testing. We design a bi-level encoder structure, formalize task representation learning using mutual information maximization, derive a contrastive learning objective, and introduce several approaches to approximate the true distribution of negative pairs. Experiments on a variety of offline meta-reinforcement learning benchmarks demonstrate the advantages of our method over prior methods, especially in generalization to out-of-distribution behavior policies. The code is available at https://github.com/PKU-AI-Edge/CORRO.
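A sketch of a contrastive objective for task representations follows: transitions from the same task form positive pairs, transitions from other tasks act as negatives, and an InfoNCE loss ties the representation to the task rather than to the behavior policy. The encoder, batch layout, and temperature are illustrative assumptions, not the paper's bi-level design.

```python
# Sketch, assuming each batch row i holds a transition from task i.
import torch
import torch.nn as nn
import torch.nn.functional as F

encoder = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Linear(64, 8))

def info_nce(anchor_batch, positive_batch, temperature=0.1):
    # anchor/positive: (B, transition_dim); row i of each comes from task i,
    # so for row i all other rows serve as negatives.
    za = F.normalize(encoder(anchor_batch), dim=-1)
    zp = F.normalize(encoder(positive_batch), dim=-1)
    logits = za @ zp.T / temperature
    labels = torch.arange(za.shape[0])
    return F.cross_entropy(logits, labels)

loss = info_nce(torch.randn(16, 10), torch.randn(16, 10))
loss.backward()
```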
Meta-reinforcement learning (meta-RL) algorithms enable agents to adapt quickly to tasks in dynamic environments from few samples. Such a feat is achieved through dynamic representations in the agent's policy network, obtained by reasoning about the task context, by model parameter updates, or by both. However, obtaining rich dynamic representations for fast adaptation beyond simple benchmark problems is challenging because of the burden placed on the policy network to accommodate different policies. This paper addresses the challenge by introducing neuromodulation as a modular component that augments a standard policy network by regulating neuronal activities, so as to produce efficient dynamic representations for task adaptation. The proposed extension of the policy network is evaluated across multiple discrete and continuous control environments of increasing complexity. To demonstrate the generality and benefits of the extension in meta-RL, the neuromodulated network was applied to two state-of-the-art meta-RL algorithms (CAVIA and PEARL). The results show that meta-RL augmented with neuromodulation produces significantly better results and richer dynamic representations than the baselines.
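The sketch below shows one plausible form of neuromodulation as a modular add-on to a standard policy network: a small side network reads the same input and emits a multiplicative gate that regulates the activations of the main layer, giving the policy a task-dependent dynamic representation. The sigmoid gating form is an assumption for illustration.

```python
# Sketch, assuming multiplicative per-neuron gating as the modulatory signal.
import torch
import torch.nn as nn

class NeuromodulatedLayer(nn.Module):
    def __init__(self, n_in, n_out):
        super().__init__()
        self.main = nn.Linear(n_in, n_out)
        self.modulator = nn.Sequential(nn.Linear(n_in, n_out), nn.Sigmoid())

    def forward(self, x):
        gate = self.modulator(x)          # per-neuron activity regulation
        return gate * torch.tanh(self.main(x))

layer = NeuromodulatedLayer(8, 16)
h = layer(torch.randn(4, 8))
```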
Meta-reinforcement learning (meta-RL) methods can meta-train policies that adapt to new tasks with orders of magnitude less data than standard RL, but meta-training itself is costly and time-consuming. If we can meta-train on offline data, then we can reuse the same static dataset, labeled once with rewards for different tasks, to meta-train policies that adapt to a variety of new tasks at meta-test time. Although this capability would make meta-RL a practical tool for real-world use, offline meta-RL presents additional challenges beyond the online meta-RL and standard offline RL settings. Meta-RL learns an exploration strategy that collects data for adaptation, and also meta-trains a policy that quickly adapts to data from a new task. Since this policy is meta-trained on a fixed offline dataset, it may behave unpredictably when adapting to data collected by the learned exploration strategy, which differs systematically from the offline data and thus induces distribution shift. We propose a hybrid offline meta-RL algorithm that uses offline data with rewards to meta-train an adaptive policy, and then collects additional unsupervised online data, without any reward labels, to bridge this distribution shift. By not requiring reward labels for its online collection, this data can be much cheaper to collect. We compare our method to prior work on offline meta-RL on simulated robot locomotion and manipulation tasks and find that the additional unsupervised online data collection significantly improves the adaptive capabilities of the meta-trained policies, matching the performance of fully online meta-RL on a range of challenging domains that require generalization to new tasks.
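The two-phase structure described above is summarized in the control-flow sketch below: meta-train on the reward-labeled offline dataset, then collect reward-free online data with the learned exploration behavior and use it to reduce the distribution shift. All callables are placeholders; only the phase ordering is the point.

```python
# Sketch, assuming the three callables are supplied by the surrounding system.
def hybrid_offline_meta_rl(offline_data, meta_train, explore, self_supervised_update):
    policy = meta_train(offline_data)                 # phase 1: offline, with rewards
    unlabeled = explore(policy)                       # phase 2: online, reward-free
    return self_supervised_update(policy, unlabeled)  # bridge the distribution shift

policy = hybrid_offline_meta_rl(
    offline_data=[1, 2, 3], meta_train=lambda d: sum(d),
    explore=lambda p: [p], self_supervised_update=lambda p, u: p + len(u))
```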
Sample efficiency in reinforcement learning can be improved by recalling past experiences from an episodic memory. We propose a new model-based, trajectory-based episodic memory that addresses current limitations of episodic control. Our memory estimates trajectory values, guiding the agent towards good policies. Building on this memory, we construct a complementary learning model that unifies model-based, episodic, and habitual learning into a single architecture via dynamic hybrid control. Experiments demonstrate that our model learns faster and better than other strong reinforcement learning agents across a variety of environments, including stochastic and non-Markovian ones.
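Below is a sketch of a trajectory-based episodic memory: discretized states map to the best discounted return observed from that state, and the stored value can guide or bootstrap the agent's value estimates. The coarse hashing by rounding is a simplifying assumption; the paper's hybrid control further mixes this with model-based and habitual components.

```python
# Sketch, assuming low-dimensional states hashed by rounding.
import numpy as np

class EpisodicMemory:
    def __init__(self, gamma=0.99):
        self.table, self.gamma = {}, gamma

    def write(self, trajectory):
        # trajectory: list of (state, reward); store best return-to-go per state.
        g = 0.0
        for state, reward in reversed(trajectory):
            g = reward + self.gamma * g
            key = tuple(np.round(state, 1))       # coarse state key
            self.table[key] = max(self.table.get(key, -np.inf), g)

    def read(self, state, default=0.0):
        return self.table.get(tuple(np.round(state, 1)), default)

mem = EpisodicMemory()
mem.write([(np.array([0.1, 0.2]), 0.0), (np.array([0.3, 0.4]), 1.0)])
v = mem.read(np.array([0.11, 0.23]))  # nearby state hits the stored value
```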
The combination of reinforcement learning (RL) with deep learning has led to a series of impressive feats, and many believe (deep) RL provides a path towards generally capable agents. However, the success of RL agents is often highly sensitive to design choices in the training process, which may require tedious and error-prone manual tuning. This makes it challenging to apply RL to new problems and also limits its full potential. In many other areas of machine learning, AutoML has shown that such design choices can be automated, and it has also yielded promising initial results when applied to RL. However, automated reinforcement learning (AutoRL) involves not only the standard application of AutoML but also additional challenges unique to RL that naturally produce a different set of methods. As such, AutoRL has become an important area of research in RL, showing promise in a variety of applications from RNA design to playing games such as Go. Given the diversity of methods and environments considered in RL, much of the research has been conducted in distinct subfields, ranging from meta-learning to evolution. In this survey, we seek to unify the field of AutoRL: we provide a common taxonomy, discuss each area in detail, and pose open problems of interest to researchers going forward.
Inspired by progress in large-scale language modeling, we apply a similar approach towards building a single generalist agent beyond the realm of text outputs. The agent, which we refer to as Gato, works as a multi-modal, multi-task, multi-embodiment generalist policy. The same network with the same weights can play Atari, caption images, chat, stack blocks with a real robot arm and much more, deciding based on its context whether to output text, joint torques, button presses, or other tokens. In this report we describe the model and the data, and document the current capabilities of Gato.
Building autonomous machines that can explore open-ended environments, discover possible interactions, and autonomously build repertoires of skills is a general objective of artificial intelligence. Developmental approaches argue that this can only be achieved by autonomous and intrinsically motivated learning agents that can generate, select, and learn to solve their own problems. In recent years, we have seen a convergence of developmental approaches, and developmental robotics in particular, with deep reinforcement learning (RL) methods, forming the new domain of developmental machine learning. Within this new domain, we review a set of methods in which deep RL algorithms are trained to solve the developmental-robotics problem of the autonomous acquisition of open-ended repertoires of skills. Intrinsically motivated goal-conditioned RL algorithms train agents to learn to represent, generate, and pursue their own goals. Self-generating goals requires learning compact goal encodings as well as their associated goal-achievement functions, which leads to new challenges compared to traditional RL algorithms designed to solve predefined sets of goals using external reward signals. This paper proposes a typology of these methods at the intersection of deep RL and developmental approaches, surveys recent approaches, and discusses future avenues.
Deep reinforcement learning (RL) has led to many recent and groundbreaking advances. However, these advances have often come at the cost of both increased scale in the underlying architectures being trained and increased complexity of the RL algorithms used to train them. These increases have in turn made it more difficult for researchers to rapidly prototype new ideas or reproduce published RL algorithms. To address these concerns, this work describes Acme, a framework for constructing novel RL algorithms that is specifically designed to enable agents built from simple, modular components that can be used across various execution scales. While the primary goal of Acme is to provide a framework for algorithm development, a secondary goal is to provide simple reference implementations of important or state-of-the-art algorithms. These implementations serve both as a validation of our design decisions and as an important contribution to reproducibility in RL research. In this work, we describe the major design decisions made within Acme and give further details on how its components can be used to implement various algorithms. Our experiments provide baselines for a number of common and state-of-the-art algorithms, and show how these algorithms can be scaled up to much larger and more complex environments. This highlights one of the primary advantages of Acme, namely that it can be used to implement large, distributed RL algorithms that run at very large scales while still maintaining the inherent readability of the implementation. This work presents a second version of the paper, which coincides with an increase in modularity, an added emphasis on offline, imitation, and learning-from-demonstration algorithms, and various new agents implemented as part of Acme.
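To illustrate the kind of modular decomposition described above, the sketch below separates an acting component from a learning component that communicate only through a replay buffer, so the same pieces could run in one process or be distributed. This is a hedged, generic illustration of the design philosophy; it is not Acme's actual API, and all names are placeholders.

```python
# Sketch, assuming a toy scalar environment and placeholder actor/learner.
import random

class Replay:
    def __init__(self): self.buffer = []
    def add(self, item): self.buffer.append(item)
    def sample(self, n): return random.sample(self.buffer, min(n, len(self.buffer)))

class Actor:
    def act(self, obs): return random.choice([0, 1])   # placeholder policy

class Learner:
    def step(self, batch): pass                        # placeholder SGD update

def environment_loop(env_step, actor, learner, replay, steps=100):
    obs = 0.0
    for _ in range(steps):
        action = actor.act(obs)
        next_obs, reward = env_step(obs, action)       # environment transition
        replay.add((obs, action, reward, next_obs))
        learner.step(replay.sample(32))                # learn from replay only
        obs = next_obs

environment_loop(lambda o, a: (o + a, float(a)), Actor(), Learner(), Replay())
```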
Adequately assigning credit to actions for future outcomes based on their contributions is a long-standing open challenge in Reinforcement Learning. The assumptions of the most commonly used credit assignment method are disadvantageous in tasks where the effects of decisions are not immediately evident. Furthermore, this method can only evaluate actions that have been selected by the agent, making it highly inefficient. Still, no alternative methods have been widely adopted in the field. Hindsight Credit Assignment is a promising but still underexplored candidate, which aims to solve the problems of both long-term and counterfactual credit assignment. In this thesis, we empirically investigate Hindsight Credit Assignment to identify its main benefits and key points for improvement. Then, we apply it to factored state representations, and in particular to state representations based on the causal structure of the environment. In this setting, we propose a variant of Hindsight Credit Assignment that effectively exploits a given causal structure. We show that our modification greatly decreases the workload of Hindsight Credit Assignment, making it more efficient and enabling it to outperform the baseline credit assignment method on various tasks. This opens the way to other methods based on given or learned causal structures.
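Below is a sketch of return-conditional Hindsight Credit Assignment, the counterfactual mechanism referred to above: a hindsight distribution h(a | x, Z) predicts which action was taken given the observed return Z, and the ratio h / pi re-weights returns so that every action can be evaluated from the same data, not only the selected one. The tabular single-state setup and the empirically estimated h are simplifying assumptions.

```python
# Sketch, assuming one state, two actions, and returns in {0, 1}.
import numpy as np

rng = np.random.default_rng(0)
pi = np.array([0.5, 0.5])                 # behavior policy over 2 actions
# Toy environment: action 0 yields return 1, action 1 yields return 0.
acts = rng.choice(2, size=10000, p=pi)
Z = (acts == 0).astype(float)

# Estimate the hindsight distribution h(a | Z) from data (a small table here).
h = np.zeros((2, 2))                      # h[z, a]
for z in (0, 1):
    for a in (0, 1):
        h[z, a] = np.mean(acts[Z == z] == a) if np.any(Z == z) else pi[a]

# Hindsight estimate Q(a) = E[ Z * h(a | Z) / pi(a) ], using *all* samples.
for a in (0, 1):
    q = np.mean(Z * h[Z.astype(int), a] / pi[a])
    print(f"Q(a={a}) ~ {q:.2f}")          # expect ~1.0 for a=0, ~0.0 for a=1
```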
For an autonomous agent to fulfill a wide range of user-specified goals at test time, it must be able to learn broadly applicable and general-purpose skill repertoires. Furthermore, to provide the requisite level of generality, these skills must handle raw sensory input such as images. In this paper, we propose an algorithm that acquires such general-purpose skills by combining unsupervised representation learning and reinforcement learning of goal-conditioned policies. Since the particular goals that might be required at test-time are not known in advance, the agent performs a self-supervised "practice" phase where it imagines goals and attempts to achieve them. We learn a visual representation with three distinct purposes: sampling goals for self-supervised practice, providing a structured transformation of raw sensory inputs, and computing a reward signal for goal reaching. We also propose a retroactive goal relabeling scheme to further improve the sample-efficiency of our method. Our off-policy algorithm is efficient enough to learn policies that operate on raw image observations and goals for a real-world robotic system, and substantially outperforms prior techniques.
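The sketch below illustrates the three stated uses of the learned visual representation: sampling latent goals from the prior for self-supervised practice, encoding raw images into a structured latent state, and computing a reward as negative latent distance to the goal. The encoder stand-in, sizes, and weights are placeholders for a pretrained VAE.

```python
# Sketch, assuming a pretrained VAE whose mean head is stood in by an MLP.
import torch
import torch.nn as nn

latent_dim = 16
encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 256), nn.ReLU(),
                        nn.Linear(256, latent_dim))  # stand-in VAE mean head

def sample_practice_goal():
    return torch.randn(latent_dim)        # imagine a goal: sample the VAE prior

def latent_reward(obs_image, z_goal):
    z = encoder(obs_image.unsqueeze(0)).squeeze(0)   # structured latent state
    return -torch.norm(z - z_goal)        # reward: negative latent distance

goal = sample_practice_goal()
r = latent_reward(torch.rand(3, 32, 32), goal)
```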
Hierarchical methods in reinforcement learning have the potential to reduce the number of decisions that the agent needs to make when learning new tasks. However, finding reusable, useful temporal abstractions that facilitate fast learning remains a challenging problem. Recently, several deep learning approaches have been proposed to learn such temporal abstractions in the form of options in an end-to-end manner. In this work, we point out several shortcomings of these methods and discuss their potential negative consequences. Subsequently, we formulate the desiderata for reusable options and use these to frame the problem of learning options as a gradient-based meta-learning problem. This allows us to formulate an objective that explicitly incentivizes options which allow a higher-level decision maker to adjust to different tasks in few steps. Experimentally, we show that our method is able to learn transferable components which accelerate learning, and that it performs better than existing prior methods developed for this setting. Additionally, we perform ablations to quantify the impact of using gradient-based meta-learning as well as the other proposed changes.
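The sketch below illustrates the gradient-based meta-learning framing in a stripped-down, first-order (MAML-style) form: option parameters are meta-learned so that a higher-level decision maker, adapted by a few gradient steps on a new task, performs well. The losses and linear modules are placeholders standing in for task returns and real option policies; the inner/outer structure is the point.

```python
# Sketch, assuming a first-order approximation and toy regression "tasks".
import torch
import torch.nn as nn

options = nn.Linear(4, 2)                 # shared, meta-learned options
high_level = nn.Linear(4, 1)              # task-adapted decision maker

def task_loss(task_data):
    s, target = task_data
    return ((high_level(s) * options(s).sum(-1, keepdim=True) - target) ** 2).mean()

meta_opt = torch.optim.Adam(options.parameters(), lr=1e-3)
inner_opt = torch.optim.SGD(high_level.parameters(), lr=1e-2)

for _ in range(3):                        # meta-training iterations
    task = (torch.randn(32, 4), torch.randn(32, 1))
    for _ in range(5):                    # few inner steps adapt the high level
        inner_opt.zero_grad(); task_loss(task).backward(); inner_opt.step()
    # Outer step updates the options so few-step adaptation succeeds (first-order).
    meta_opt.zero_grad(); task_loss(task).backward(); meta_opt.step()
```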