Safe exploration is critical for using reinforcement learning (RL) in risk-sensitive environments. Recent work learns risk measures that estimate the probability of constraint violation, which can then be used to enforce safety. However, learning such risk measures requires significant interaction with the environment, resulting in excessive constraint violations during learning, and the learned measures do not transfer easily to new environments. We cast safe exploration as an offline meta-RL problem, where the objective is to leverage examples of safe and unsafe behavior across a range of environments to quickly adapt a learned risk measure to a new environment with previously unseen dynamics. We then present MEta-learning for Safe Adaptation (MESA), an approach for meta-learning a risk measure for safe RL. Simulation experiments across 5 continuous control domains suggest that MESA can leverage offline data from a range of different environments to reduce constraint violations in unseen environments while maintaining task performance. See https://tinyurl.com/safe-meta-rl for code and supplementary material.
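A minimal sketch of the "risk measure" idea described above: a safety critic that predicts the discounted probability of a future constraint violation, trained on pooled offline transitions from several environments and then fine-tuned on the target environment. The class and key names (`SafetyCritic`, `violation`, the TD-style target) are illustrative assumptions, not MESA's actual interfaces.

```python
import torch
import torch.nn as nn

class SafetyCritic(nn.Module):
    """Estimates the probability of a future constraint violation from (obs, act)."""
    def __init__(self, obs_dim, act_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Sigmoid(),   # output in [0, 1]
        )

    def forward(self, obs, act):
        return self.net(torch.cat([obs, act], dim=-1)).squeeze(-1)

def td_safety_loss(critic, batch, gamma=0.99):
    # Target: violation observed now, or the discounted risk of the next state-action.
    with torch.no_grad():
        next_risk = critic(batch["next_obs"], batch["next_act"])
        target = torch.maximum(batch["violation"], gamma * next_risk)
    return nn.functional.mse_loss(critic(batch["obs"], batch["act"]), target)

# Meta-training would pool offline safe/unsafe transitions from many environments;
# adaptation would fine-tune the same critic on a small batch from the new environment.
critic = SafetyCritic(obs_dim=4, act_dim=2)
batch = {k: torch.rand(32, 4) if "obs" in k else torch.rand(32, 2)
         for k in ["obs", "next_obs", "act", "next_act"]}
batch["violation"] = (torch.rand(32) < 0.1).float()
print(td_safety_loss(critic, batch).item())
```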
Meta-reinforcement learning (RL) methods can meta-train policies using orders of magnitude less data than standard RL, but meta-training itself is costly and time-consuming. If we could meta-train on offline data, then we could reuse the same static dataset, labeled once with rewards for different tasks, to meta-train policies that adapt to a variety of new tasks at meta-test time. Although this capability would make meta-RL a practical tool for real-world use, offline meta-RL presents additional challenges beyond the online meta-RL and standard offline RL settings. Meta-RL learns an exploration strategy that collects data for adaptation, and also meta-trains a policy that rapidly adapts to data from a new task. Since this policy is meta-trained on a fixed, offline dataset, it may behave unpredictably when adapting to data collected by the learned exploration strategy, which differs systematically from the offline data and therefore induces distribution shift. We propose a hybrid offline meta-RL algorithm that uses offline data with rewards to meta-train an adaptive policy, and then collects additional unsupervised online data, without any reward labels, to bridge this distribution shift. By not requiring reward labels for online collection, this data can be much cheaper to collect. We compare our method to prior work on offline meta-RL on simulated robot locomotion and manipulation tasks, and find that the additional unsupervised online data collection markedly improves the adaptation capability of the meta-trained policies, matching the performance of fully online meta-RL on a range of challenging domains that require generalization to new tasks.
Deep reinforcement learning algorithms require large amounts of experience to learn an individual task. While in principle meta-reinforcement learning (meta-RL) algorithms enable agents to learn new skills from small amounts of experience, several major challenges preclude their practicality. Current methods rely heavily on on-policy experience, limiting their sample efficiency. They also lack mechanisms to reason about task uncertainty when adapting to new tasks, limiting their effectiveness in sparse reward problems. In this paper, we address these challenges by developing an off-policy meta-RL algorithm that disentangles task inference and control. In our approach, we perform online probabilistic filtering of latent task variables to infer how to solve a new task from small amounts of experience. This probabilistic interpretation enables posterior sampling for structured and efficient exploration. We demonstrate how to integrate these task variables with off-policy RL algorithms to achieve both meta-training and adaptation efficiency. Our method outperforms prior algorithms in sample efficiency by 20-100X as well as in asymptotic performance on several meta-RL benchmarks.
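A hedged sketch of the core mechanism, posterior sampling over a latent task variable: an inference network encodes a handful of context transitions into a Gaussian posterior over z, and the policy is conditioned on a sample of z. The network shapes and the product-of-Gaussians aggregation shown here are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn

class ContextEncoder(nn.Module):
    def __init__(self, transition_dim, latent_dim=5, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(transition_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 2 * latent_dim))
        self.latent_dim = latent_dim

    def forward(self, context):                    # context: (N, transition_dim)
        mu, log_var = self.net(context).chunk(2, dim=-1)
        var = log_var.exp().clamp(min=1e-6)
        # Combine per-transition Gaussian factors into a single posterior over z.
        prec = (1.0 / var).sum(0)
        mean = (mu / var).sum(0) / prec
        return torch.distributions.Normal(mean, prec.rsqrt())

enc = ContextEncoder(transition_dim=10)
posterior = enc(torch.randn(16, 10))               # 16 context transitions
z = posterior.rsample()                            # sampled task belief conditions the policy
kl = torch.distributions.kl_divergence(
    posterior, torch.distributions.Normal(torch.zeros(5), torch.ones(5))).sum()
```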
Safety is a critical component of autonomous systems and remains a challenge for learning-based policies to be deployed in the real world. In particular, policies learned using reinforcement learning often fail to generalize to novel environments due to unsafe behavior. In this paper, we propose Sim-to-Lab-to-Real to bridge the reality gap with a probabilistically guaranteed safety-aware policy distribution. To improve safety, we adopt a dual-policy setup, in which a performance policy is trained on the cumulative task reward and a backup (safety) policy is trained based on Hamilton-Jacobi (HJ) reachability analysis. In Sim-to-Lab transfer, we apply a supervisory control scheme to shield unsafe actions during exploration; in Lab-to-Real transfer, we leverage the Probably Approximately Correct (PAC)-Bayes framework to provide lower bounds on the expected performance and safety of policies in unseen environments. In addition, inherited from the HJ reachability analysis, the bound accounts for the expectation over the worst-case safety in each environment. We empirically study the proposed framework on ego-vision navigation in two types of indoor environments with varying degrees of photorealism. We also demonstrate strong generalization performance through hardware experiments in real indoor spaces with a quadrupedal robot. See https://sites.google.com/princeton.edu/sim-to-lab-to-real for supplementary material.
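A minimal sketch of the supervisory "shielding" scheme, under assumed interfaces: the safety critic scores the action proposed by the performance policy and, if it looks unsafe, the backup (safety) policy's action is executed instead. `perf_policy`, `safe_policy`, `safety_critic`, and the sign convention are stand-ins, not the paper's API.

```python
import numpy as np

def shielded_action(obs, perf_policy, safe_policy, safety_critic, threshold=0.0):
    a_task = perf_policy(obs)
    # Convention assumed here: higher critic value = closer to the unsafe set.
    if safety_critic(obs, a_task) > threshold:
        return safe_policy(obs)        # override with the backup action
    return a_task

# Toy usage with random stand-in functions:
rng = np.random.default_rng(0)
perf = lambda s: rng.uniform(-1, 1, size=2)
safe = lambda s: np.zeros(2)
critic = lambda s, a: float(np.linalg.norm(a) - 0.8)
print(shielded_action(np.zeros(4), perf, safe, critic))
```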
Meta-reinforcement learning (meta-RL) has proven to be a successful framework for leveraging experience from prior tasks to rapidly learn new, related tasks; however, current meta-RL approaches struggle to learn in sparse-reward environments. Although existing meta-RL algorithms can learn strategies for adapting to new sparse-reward tasks, the actual adaptation strategies are learned using hand-shaped reward functions, or require simple environments where random exploration is sufficient to encounter the sparse reward. In this paper, we present a hindsight-relabeling formulation for meta-RL that relabels experience during meta-training so that learning can proceed entirely with sparse rewards. We demonstrate the effectiveness of our approach on a suite of challenging sparse-reward goal-reaching environments that previously required dense rewards during meta-training to be solved. Our approach solves these environments using the true sparse reward function, with performance comparable to training with a proxy dense reward function.
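A hedged illustration of hindsight relabeling with a sparse goal-reaching reward: a trajectory collected for one goal is relabeled with a state it actually reached, so training sees non-zero reward signal. The reward tolerance and trajectory format are simplified assumptions.

```python
import numpy as np

def sparse_reward(state, goal, tol=0.05):
    return float(np.linalg.norm(state - goal) < tol)

def relabel(trajectory, new_goal):
    # trajectory: list of (state, action, reward, next_state) under the original goal
    return [(s, a, sparse_reward(s2, new_goal), s2) for (s, a, r, s2) in trajectory]

traj = [(np.zeros(2), np.ones(2) * 0.1, 0.0, np.ones(2) * 0.1 * (t + 1)) for t in range(5)]
achieved_goal = traj[-1][-1]          # treat the final state reached as a "new task" goal
print([r for (_, _, r, _) in relabel(traj, achieved_goal)])   # last step now earns reward 1.0
```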
Trustworthy reinforcement learning algorithms should be competent at solving challenging real-world problems, including robustly handling uncertainty, satisfying safety constraints to avoid catastrophic failures, and generalizing to unseen scenarios during deployment. This study aims to provide an overview of these main perspectives on trustworthy reinforcement learning, namely its intrinsic vulnerabilities with respect to robustness, safety, and generalizability. In particular, we give rigorous formulations, categorize the corresponding methods, and discuss benchmarks for each perspective. Moreover, we provide an outlook section to spur promising future directions and briefly discuss extrinsic vulnerabilities that involve human feedback. We hope this survey can bring together separate threads of research within a unified framework and promote trustworthiness in reinforcement learning.
Reinforcement learning (RL) can in principle let robots adapt to new tasks automatically, but current RL methods require a large number of trials to do so. In this paper, we tackle rapid adaptation to new tasks through the framework of meta-learning, which leverages past tasks to learn to adapt, with a specific focus on industrial insertion tasks. Fast adaptation is crucial because a prohibitively large number of robot trials can damage the hardware. In addition, experience gathered across different insertion applications can, to a large extent, be leveraged between tasks for effective adaptation. In this setting, we address two specific challenges when applying meta-learning. First, conventional meta-RL algorithms require lengthy online meta-training. We show that this can be replaced with appropriately chosen offline data, resulting in an offline meta-RL method that only requires demonstrations and trials from each of the prior tasks, without the need to run costly meta-RL procedures online. Second, meta-RL methods can fail to generalize to new tasks that are too different from those seen at meta-training time, which poses a particular challenge in industrial applications where high success rates are critical. We address this by combining contextual meta-learning with direct online fine-tuning: if the new task is similar to those seen in the prior data, the contextual meta-learner adapts immediately; if it is too different, it gradually adapts through fine-tuning. We show that our approach is able to quickly adapt to a variety of different insertion tasks with a success rate of 100%, using only a fraction of the samples needed to learn the tasks from scratch. Experiment videos and details are available at https://sites.google.com/view/offline-metarl-insertion.
Methods that extract policy primitives from offline demonstrations using deep generative models have shown promise for accelerating reinforcement learning (RL) on new tasks. Intuitively, these methods should also help train safe agents, since the extracted primitives encode useful skills. However, we find that these techniques are not well suited to safe policy learning, because they ignore negative experiences (e.g., unsafe or unsuccessful ones) and focus only on positive experiences, which harms their ability to generalize safely to new tasks. Instead, we perform principled contrastive training of a latent safety context on demonstration datasets drawn from many tasks, including both negative and positive experiences. Using this latent variable, our RL framework, SAFEty skill pRiors (SAFER), extracts task-specific safe primitive skills in order to generalize safely and successfully to new tasks. At the inference stage, policies trained with SAFER learn to compose safe skills into successful policies. We theoretically characterize why SAFER enables safe policy learning and demonstrate it on several complex safety-critical robotic grasping tasks inspired by the game Operation, where SAFER outperforms state-of-the-art primitive-learning methods in both success and safety.
While reinforcement learning (RL) has become a more popular approach for robotics, designing sufficiently informative reward functions for complex tasks has proven to be extremely difficult due to their inability to capture human intent and policy exploitation. Preference-based RL algorithms seek to overcome these challenges by directly learning reward functions from human feedback. Unfortunately, prior work either requires an unreasonable number of queries implausible for any human to answer or overly restricts the class of reward functions to guarantee the elicitation of the most informative queries, resulting in models that are insufficiently expressive for realistic robotics tasks. Contrary to most works that focus on query selection to \emph{minimize} the amount of data required for learning reward functions, we take an opposite approach: \emph{expanding} the pool of available data by viewing human-in-the-loop RL through the more flexible lens of multi-task learning. Motivated by the success of meta-learning, we pre-train preference models on prior task data and quickly adapt them for new tasks using only a handful of queries. Empirically, we reduce the amount of online feedback needed to train manipulation policies in Meta-World by 20$\times$, and demonstrate the effectiveness of our method on a real Franka Panda Robot. Moreover, this reduction in query-complexity allows us to train robot policies from actual human users. Videos of our results and code can be found at https://sites.google.com/view/few-shot-preference-rl/home.
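For context, a sketch of the standard Bradley-Terry preference objective that preference-based reward learning typically optimizes; in the few-shot setting described above, the reward model would be pre-trained on prior-task comparisons and then adapted with only a handful of labeled queries. The segment shapes and the summed-reward score are illustrative assumptions.

```python
import torch
import torch.nn as nn

reward_model = nn.Sequential(nn.Linear(6, 64), nn.ReLU(), nn.Linear(64, 1))

def preference_loss(seg_a, seg_b, prefer_a):
    # seg_*: (batch, horizon, obs+act dims); prefer_a: (batch,) labels in {0., 1.}
    r_a = reward_model(seg_a).sum(dim=(1, 2))     # summed predicted reward per segment
    r_b = reward_model(seg_b).sum(dim=(1, 2))
    logits = r_a - r_b                            # P(a preferred) = sigmoid(r_a - r_b)
    return nn.functional.binary_cross_entropy_with_logits(logits, prefer_a)

# Few-shot adaptation: a handful of labeled segment comparisons on the new task.
seg_a, seg_b = torch.randn(8, 50, 6), torch.randn(8, 50, 6)
labels = torch.randint(0, 2, (8,)).float()
preference_loss(seg_a, seg_b, labels).backward()
```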
We develop a new continual meta-learning method to address challenges in sequential multi-task learning. In this setting, the agent's goal is to achieve high reward quickly over any sequence of tasks. Prior meta-reinforcement learning algorithms have demonstrated promising results in accelerating the acquisition of new tasks, but they require access to all tasks during training. Beyond simply transferring past experience to new tasks, our goal is to devise continual reinforcement learning algorithms that learn to learn, using experience on previous tasks to learn new tasks more quickly. We introduce a new method, Continual Meta-Policy Search (CoMPS), that removes this limitation by meta-training in an incremental fashion, over each task in a sequence, without revisiting prior tasks. CoMPS continuously repeats two subroutines: learning a new task using RL, and using that RL experience for completely offline meta-learning to prepare for learning the subsequent tasks. We find that CoMPS outperforms prior continual learning and off-policy meta-reinforcement learning methods on several sequences of challenging continuous control tasks.
Offline reinforcement learning (RL) refers to the problem of learning policies entirely from a large batch of previously collected data. This problem setting offers the promise of utilizing such datasets to acquire policies without any costly or dangerous active exploration. However, it is also challenging, due to the distributional shift between the offline training data and those states visited by the learned policy. Despite significant recent progress, the most successful prior methods are model-free and constrain the policy to the support of data, precluding generalization to unseen states. In this paper, we first observe that an existing model-based RL algorithm already produces significant gains in the offline setting compared to model-free approaches. However, standard model-based RL methods, designed for the online setting, do not provide an explicit mechanism to avoid the offline setting's distributional shift issue. Instead, we propose to modify the existing model-based RL methods by applying them with rewards artificially penalized by the uncertainty of the dynamics. We theoretically show that the algorithm maximizes a lower bound of the policy's return under the true MDP. We also characterize the trade-off between the gain and risk of leaving the support of the batch data. Our algorithm, Model-based Offline Policy Optimization (MOPO), outperforms standard model-based RL algorithms and prior state-of-the-art model-free offline RL algorithms on existing offline RL benchmarks and two challenging continuous control tasks that require generalizing from data collected for a different task.
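The uncertainty-penalized reward at the heart of the approach is r̃(s, a) = r̂(s, a) − λ·u(s, a). A hedged sketch follows, using ensemble disagreement as one common (assumed) choice for the uncertainty estimator u(s, a); MOPO's exact estimator may differ.

```python
import numpy as np

def penalized_reward(ensemble_next_state_preds, ensemble_reward_preds, lam=1.0):
    # ensemble_*: arrays of shape (n_models, batch, state_dim) and (n_models, batch)
    r_hat = ensemble_reward_preds.mean(axis=0)
    # One simple heuristic: max over models of the deviation from the ensemble mean.
    mean_pred = ensemble_next_state_preds.mean(axis=0, keepdims=True)
    u = np.linalg.norm(ensemble_next_state_preds - mean_pred, axis=-1).max(axis=0)
    return r_hat - lam * u          # r_tilde = r_hat - lambda * u

preds = np.random.randn(5, 32, 11)   # 5 dynamics models, batch of 32, 11-dim states
rews = np.random.randn(5, 32)
print(penalized_reward(preds, rews).shape)   # (32,)
```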
Safety comes first in many real-world applications involving autonomous agents. Despite a large number of reinforcement learning (RL) methods focusing on safety-critical tasks, there is still a lack of high-quality evaluation of those algorithms that adheres to safety constraints at each decision step under complex and unknown dynamics. In this paper, we revisit prior work in this scope from the perspective of state-wise safe RL and categorize them as projection-based, recovery-based, and optimization-based approaches, respectively. Furthermore, we propose Unrolling Safety Layer (USL), a joint method that combines safety optimization and safety projection. This novel technique explicitly enforces hard constraints via the deep unrolling architecture and enjoys structural advantages in navigating the trade-off between reward improvement and constraint satisfaction. To facilitate further research in this area, we reproduce related algorithms in a unified pipeline and incorporate them into SafeRL-Kit, a toolkit that provides off-the-shelf interfaces and evaluation utilities for safety-critical tasks. We then perform a comparative study of the involved algorithms on six benchmarks ranging from robotic control to autonomous driving. The empirical results provide an insight into their applicability and robustness in learning zero-cost-return policies without task-dependent handcrafting. The project page is available at https://sites.google.com/view/saferlkit.
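To make the taxonomy concrete, here is a hedged sketch of a projection-based safety layer, one of the three families the paper categorizes (not USL itself): the proposed action is minimally corrected so that a linearized per-step cost constraint c(s) + g(s)ᵀa ≤ d holds, via a closed-form L2 projection. The linear cost model and variable names are assumptions for illustration.

```python
import numpy as np

def project_action(action, cost_now, cost_grad, budget=0.0):
    violation = cost_now + cost_grad @ action - budget
    if violation <= 0.0:
        return action                               # already satisfies the constraint
    correction = violation / (cost_grad @ cost_grad + 1e-8)
    return action - correction * cost_grad          # smallest L2 change that restores feasibility

a = np.array([0.9, -0.3])
g = np.array([1.0, 0.5])                            # assumed gradient of the step cost w.r.t. action
print(project_action(a, cost_now=0.2, cost_grad=g, budget=0.5))
```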
Reinforcement learning (RL) algorithms hold the promise of autonomous skill acquisition for robotic systems. However, in practice, real-world robotic RL typically requires time-consuming data collection and frequent human intervention to reset the environment. Moreover, robotic policies learned with RL often fail when deployed in settings beyond those they were trained in. In this work, we study how these challenges can be addressed by effectively utilizing diverse offline datasets collected from previously seen tasks. When faced with a new task, our system adapts previously learned skills to quickly learn both to perform the new task and to return the environment to its initial state, effectively performing its own environment resets. Our empirical results demonstrate that incorporating prior data into robotic reinforcement learning enables autonomous learning, substantially improves sample efficiency, and enables better generalization.
Successful operation of mobile robots requires them to adapt rapidly to environmental changes. To develop an adaptive decision-making tool for mobile robots, we propose a novel algorithm that combines meta-reinforcement learning (meta-RL) with model predictive control (MPC). Our method uses a meta-RL algorithm as its baseline to train a policy with transition samples generated by MPC, which is invoked when the robot detects events that MPC, by explicitly using the robot's dynamics, can handle effectively. The key idea of our method is to switch between the meta-learned policy and the MPC controller in a randomized and event-triggered fashion, compensating for suboptimal MPC actions caused by the limited prediction horizon. During meta-testing, the MPC module is deactivated to significantly reduce the computation time of motion control. We further propose an online adaptation scheme that enables the robot to infer and adapt to a new task within a single trajectory. The performance of our method is demonstrated using a nonlinear car-like vehicle model with (i) synthetic obstacle movements and (ii) real-world pedestrian motion data. Simulation results show that our method outperforms other algorithms in terms of learning efficiency and navigation quality.
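A minimal sketch, under assumed interfaces, of the randomized, event-triggered switching rule described above: when an event that MPC handles well is detected (e.g., an obstacle comes close), the MPC action is used with some probability; otherwise the meta-learned policy acts. The probability parameter and event detector are illustrative stand-ins.

```python
import numpy as np

def select_action(obs, meta_policy, mpc_controller, event_detected, p_mpc=0.5,
                  rng=np.random.default_rng()):
    if event_detected(obs) and rng.random() < p_mpc:
        return mpc_controller(obs), "mpc"     # MPC transitions can also feed meta-training
    return meta_policy(obs), "meta"

# Toy stand-ins:
meta_policy = lambda o: np.tanh(o[:2])
mpc_controller = lambda o: np.clip(-o[:2], -1, 1)
event_detected = lambda o: np.linalg.norm(o[2:]) < 1.0   # e.g., obstacle-distance features
print(select_action(np.array([0.3, -0.2, 0.4, 0.1]), meta_policy, mpc_controller, event_detected))
```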
We study offline meta-reinforcement learning, a practical reinforcement learning paradigm that learns from offline data to adapt to new tasks. The distribution of the offline data is determined jointly by the behavior policy and the task. Existing offline meta-reinforcement learning algorithms cannot distinguish these factors, making the task representations unstable with respect to changes in the behavior policy. To address this problem, we propose a contrastive learning framework for task representations that is robust to the distribution mismatch of behavior policies between training and testing. We design a bi-level encoder structure, use mutual information maximization to formalize task representation learning, derive a contrastive learning objective, and introduce several approaches to approximate the true distribution of negative pairs. Experiments on a variety of offline meta-reinforcement learning benchmarks demonstrate the advantages of our method over prior methods, especially in generalization to out-of-distribution behavior policies. The code is available at https://github.com/pku-ai-ged/corro.
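A hedged sketch of an InfoNCE-style contrastive objective for task representations: transitions from the same task form positive pairs, while transitions from other tasks (or otherwise constructed negatives) form the negative pairs. The encoder architecture and negative construction here are simplified assumptions, not the paper's bi-level design.

```python
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Linear(64, 8))

def info_nce(anchor_transitions, positive_transitions, temperature=0.1):
    z_a = nn.functional.normalize(encoder(anchor_transitions), dim=-1)    # (B, 8)
    z_p = nn.functional.normalize(encoder(positive_transitions), dim=-1)  # (B, 8)
    logits = z_a @ z_p.t() / temperature    # row i: similarity of anchor i to all candidates
    labels = torch.arange(z_a.shape[0])     # the matching positive sits on the diagonal
    return nn.functional.cross_entropy(logits, labels)

info_nce(torch.randn(16, 10), torch.randn(16, 10)).backward()
```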
Hierarchical methods in reinforcement learning have the potential to reduce the number of decisions that the agent needs to make when learning new tasks. However, finding reusable temporal abstractions that facilitate fast learning remains a challenging problem. Recently, several deep learning approaches were proposed to learn such temporal abstractions in the form of options in an end-to-end manner. In this work, we point out several shortcomings of these methods and discuss their potential negative consequences. Subsequently, we formulate the desiderata for reusable options and use these to frame the problem of learning options as a gradient-based meta-learning problem. This allows us to formulate an objective that explicitly incentivizes options which allow a higher-level decision maker to adjust to different tasks in few steps. Experimentally, we show that our method is able to learn transferable components which accelerate learning, and performs better than prior methods developed for this setting. Additionally, we perform ablations to quantify the impact of using gradient-based meta-learning as well as other proposed changes.
Learning a risk-aware policy is essential but rather challenging in unstructured robotic tasks. Safe reinforcement learning methods open up new possibilities to tackle this problem. However, the conservative policy updates make it intractable to achieve sufficient exploration and desirable performance in complex, sample-expensive environments. In this paper, we propose a dual-agent safe reinforcement learning strategy consisting of a baseline and a safe agent. Such a decoupled framework enables high flexibility, data efficiency and risk-awareness for RL-based control. Concretely, the baseline agent is responsible for maximizing rewards under standard RL settings. Thus, it is compatible with off-the-shelf training techniques of unconstrained optimization, exploration and exploitation. On the other hand, the safe agent mimics the baseline agent for policy improvement and learns to fulfill safety constraints via off-policy RL tuning. In contrast to training from scratch, safe policy correction requires significantly fewer interactions to obtain a near-optimal policy. The dual policies can be optimized synchronously via a shared replay buffer, or leveraging the pre-trained model or the non-learning-based controller as a fixed baseline agent. Experimental results show that our approach can learn feasible skills without prior knowledge as well as deriving risk-averse counterparts from pre-trained unsafe policies. The proposed method outperforms the state-of-the-art safe RL algorithms on difficult robot locomotion and manipulation tasks with respect to both safety constraint satisfaction and sample efficiency.
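An illustrative sketch (not the authors' implementation) of one way the safe agent's update could look: it imitates the baseline agent's actions while a Lagrangian term pushes the predicted constraint cost below a budget, with the multiplier updated by gradient ascent. The imitation loss, cost critic, and budget are assumptions for illustration.

```python
import torch
import torch.nn as nn

safe_policy = nn.Sequential(nn.Linear(8, 64), nn.Tanh(), nn.Linear(64, 2), nn.Tanh())
log_lam = torch.zeros(1, requires_grad=True)          # Lagrange multiplier (log-space)

def safe_agent_losses(obs, baseline_actions, cost_q, cost_budget=0.1):
    a = safe_policy(obs)
    imitation = ((a - baseline_actions) ** 2).mean()  # stay close to the baseline agent
    overshoot = cost_q(obs, a).mean() - cost_budget   # predicted constraint-cost overshoot
    lam = log_lam.exp()
    policy_loss = imitation + lam.detach() * overshoot
    lam_loss = -(lam * overshoot.detach())            # gradient ascent on the multiplier
    return policy_loss, lam_loss

obs = torch.randn(32, 8)
base_a = torch.tanh(torch.randn(32, 2))
cost_q = lambda s, a: (a ** 2).sum(-1) * 0.05         # stand-in cost critic
policy_loss, lam_loss = safe_agent_losses(obs, base_a, cost_q)
(policy_loss + lam_loss).backward()
```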
Existing offline reinforcement learning (RL) methods face a few major challenges, particularly the distribution shift between the learned policy and the behavior policy. Offline meta-RL is emerging as a promising approach to address these challenges, aiming to learn an informative meta-policy from a collection of tasks. Nevertheless, as shown in our empirical studies, offline meta-RL can be outperformed by single-task offline RL methods on tasks with good dataset quality, indicating that a delicate balance has to be struck between "exploring" out-of-distribution state-actions by following the meta-policy and "exploiting" the offline dataset by staying close to the behavior policy. Motivated by this empirical analysis, we explore model-based offline meta-RL with regularized policy optimization (MerPO), which learns a meta-model for efficient task structure inference and an informative meta-policy for safe exploration of out-of-distribution state-actions. In particular, we devise a new meta-regularized model-based actor-critic (RAC) method for within-task policy optimization, as a key building block of MerPO, using conservative policy evaluation and regularized policy improvement; the intrinsic trade-off therein is achieved by striking the right balance between two regularizers, one based on the behavior policy and the other on the meta-policy. We theoretically show that the learned policy offers guaranteed improvement over both the behavior policy and the meta-policy, thus ensuring performance improvement on new tasks via offline meta-RL. Experiments corroborate the superior performance of MerPO over existing offline meta-RL methods.
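A hedged sketch of the "two regularizers" idea: within-task policy improvement maximizes a (conservative) value estimate while being pulled toward both the behavior policy (exploit the dataset) and the meta-policy (explore with structure). The KL form, the weights, and the interfaces are assumptions for illustration, not MerPO's exact objective.

```python
import torch

def regularized_policy_loss(dist_new, dist_behavior, dist_meta, q_values,
                            w_behavior=0.5, w_meta=0.5):
    # dist_*: torch.distributions over actions for a batch of states; q_values: (B,)
    kl_b = torch.distributions.kl_divergence(dist_new, dist_behavior).mean()
    kl_m = torch.distributions.kl_divergence(dist_new, dist_meta).mean()
    return -(q_values.mean()) + w_behavior * kl_b + w_meta * kl_m

mu = torch.zeros(32, 2, requires_grad=True)
new = torch.distributions.Normal(mu, torch.ones(32, 2))
behavior = torch.distributions.Normal(torch.zeros(32, 2) + 0.3, torch.ones(32, 2))
meta = torch.distributions.Normal(torch.zeros(32, 2) - 0.3, torch.ones(32, 2))
regularized_policy_loss(new, behavior, meta, q_values=torch.randn(32)).backward()
```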
Deep reinforcement learning algorithms have succeeded in several challenging domains. Classic online RL job schedulers can learn efficient scheduling strategies but often take thousands of timesteps to explore the environment and adapt from a randomly initialized DNN policy. Existing RL schedulers overlook the importance of learning from historical data and improving upon custom heuristic policies. Offline reinforcement learning presents the prospect of policy optimization from pre-recorded datasets without online environment interaction. Following the recent success of data-driven learning, we explore two RL methods: 1) Behaviour Cloning and 2) Offline RL, which aim to learn policies from logged data without interacting with the environment. These methods address the challenges concerning the cost of data collection and safety, which are particularly pertinent to real-world applications of RL. Although the data-driven RL methods generate good results, we show that the performance is highly dependent on the quality of the historical datasets. Finally, we demonstrate that by effectively incorporating prior expert demonstrations to pre-train the agent, we short-circuit the random exploration phase and learn a reasonable policy with online training. We utilize Offline RL as a \textbf{launchpad} to learn effective scheduling policies from prior experience collected using Oracle or heuristic policies. Such a framework is effective for pre-training from historical datasets and is well suited to continuous improvement with online data collection.
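A sketch of the "launchpad" idea: pre-train the scheduling policy by behavior cloning on logged (state, action) pairs from a heuristic or Oracle scheduler, then continue with online RL. The feature sizes, action space, and data here are placeholders, not the paper's scheduler representation.

```python
import torch
import torch.nn as nn

policy = nn.Sequential(nn.Linear(20, 128), nn.ReLU(), nn.Linear(128, 10))  # 10 discrete actions
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

logged_states = torch.randn(1024, 20)              # stand-in for historical scheduler logs
logged_actions = torch.randint(0, 10, (1024,))     # actions chosen by the heuristic policy

for epoch in range(5):                             # behavior-cloning pre-training
    logits = policy(logged_states)
    loss = nn.functional.cross_entropy(logits, logged_actions)
    opt.zero_grad(); loss.backward(); opt.step()
# ...then hand `policy` to an online RL loop instead of starting from a random DNN.
```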
Meta-reinforcement learning (meta-RL) is an approach in which the experience gained from solving a variety of tasks is distilled into a meta-policy. The meta-policy, when adapted over only a small number of steps (or even a single one), is able to perform near-optimally on a new, related task. However, a major challenge in adopting this approach for real-world problems is that they are often associated with sparse reward functions, which only indicate whether a task has been completed partially or fully. We consider the setting where some data, possibly generated by a sub-optimal agent, is available for each task. We then develop a class of algorithms, Enhanced Meta-RL using Demonstrations (EMRLD), that exploit this information even when it provides only sub-optimal guidance during training. We show how EMRLD jointly utilizes RL and supervised learning on the offline data to generate a meta-policy that exhibits monotone performance improvement. We also develop a warm-started variant called EMRLD-WS that is particularly efficient for sub-optimal demonstration data. Finally, we show that our EMRLD algorithms significantly outperform existing approaches in a variety of sparse-reward environments, including that of a real-world mobile robot.
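A hedged sketch of combining an RL objective with supervised learning on offline demonstration data, in the spirit described above; the policy-gradient surrogate, the fixed-variance Gaussian policy, and the weighting are simplified placeholders rather than EMRLD's actual update.

```python
import torch
import torch.nn as nn

policy = nn.Sequential(nn.Linear(6, 64), nn.Tanh(), nn.Linear(64, 2))

def combined_loss(rl_states, rl_actions, rl_advantages,
                  demo_states, demo_actions, bc_weight=1.0):
    # Simple Gaussian policy with fixed unit variance for illustration.
    logp = -0.5 * ((policy(rl_states) - rl_actions) ** 2).sum(-1)
    pg_loss = -(logp * rl_advantages).mean()                      # policy-gradient surrogate
    bc_loss = ((policy(demo_states) - demo_actions) ** 2).mean()  # supervised term on demos
    return pg_loss + bc_weight * bc_loss

loss = combined_loss(torch.randn(64, 6), torch.randn(64, 2), torch.randn(64),
                     torch.randn(32, 6), torch.randn(32, 2))
loss.backward()
```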