Learning from Demonstration (LfD) is a powerful method for enabling robots to perform novel tasks, as it is often more tractable for a non-roboticist end-user to demonstrate the desired skill and for the robot to learn efficiently from the associated data than for a human to engineer a reward function for the robot to learn the skill via reinforcement learning (RL). Safety issues arise in modern LfD techniques, e.g., Inverse Reinforcement Learning (IRL), just as they do for RL; yet, safe learning in LfD has received little attention. In the context of agile robots, safety is especially vital due to the possibility of robot-environment collision, robot-human collision, and damage to the robot. In this paper, we propose a safe IRL framework, CBFIRL, that leverages the Control Barrier Function (CBF) to enhance the safety of the IRL policy. The core idea of CBFIRL is to combine a loss function inspired by CBF requirements with the objective in an IRL method, both of which are jointly optimized via gradient descent. In our experiments, we show that our framework yields safer behavior than IRL methods without a CBF: $\sim15\%$ and $\sim20\%$ improvement for two difficulty levels of a 2D racecar domain and $\sim50\%$ improvement for a 3D drone domain.
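The joint optimization described above can be illustrated with a minimal sketch: an IRL objective plus a penalty on violations of a discrete-time CBF condition, both minimized with a single gradient step. The loss shapes, the learned barrier `h`, and the weighting `lam` are assumptions chosen for exposition, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def cbf_violation_loss(h, states, next_states, alpha=0.5):
    """Penalize violations of the discrete-time CBF condition
    h(s') - h(s) >= -alpha * h(s), i.e. the barrier must not decay too fast."""
    h_s, h_next = h(states), h(next_states)
    return F.relu(-(h_next - h_s + alpha * h_s)).mean()

def cbfirl_step(irl_loss_fn, h, optimizer, batch, lam=1.0):
    """One joint gradient step on the IRL objective plus the CBF-inspired term."""
    irl_loss = irl_loss_fn(batch)                        # e.g. an adversarial IRL loss
    safety_loss = cbf_violation_loss(h, batch["s"], batch["s_next"])
    loss = irl_loss + lam * safety_loss                  # lam trades task fit vs. safety
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```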
Energy-function-based safety certificates can provide provable safety guarantees for the safe control of complex robotic systems. However, all recent studies on learning-based synthesis of energy functions consider only feasibility, which may lead to over-conservativeness and result in less efficient controllers. In this work, we propose a magnitude regularization technique to improve the efficiency of safe controllers by reducing the conservativeness inside the energy function while keeping promising provable safety guarantees. Specifically, we quantify conservativeness by the magnitude of the energy function and reduce it by adding a magnitude regularization term to the synthesis loss. We propose the SafeMR algorithm, which uses reinforcement learning (RL) for the synthesis, to unify the learning processes of the safe controller and the energy function. Experimental results show that the proposed method does reduce the conservativeness of the energy function and outperforms the baselines in terms of controller efficiency while ensuring safety.
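A hedged sketch of the magnitude-regularization idea: the synthesis loss for an energy function V combines a feasibility term with a term penalizing the magnitude of V on sampled states, so the certificate remains valid but less conservative. The specific loss shapes and the weight `beta` are illustrative assumptions rather than the paper's exact objective.

```python
import torch

def synthesis_loss(V, states, next_states, eta=0.1, beta=0.01):
    """Feasibility term: the energy should decrease along transitions by at least eta.
    Magnitude regularizer: keep |V| small as a proxy for conservativeness."""
    v, v_next = V(states), V(next_states)
    feasibility = torch.relu(v_next - v + eta).mean()
    magnitude = v.abs().mean()
    return feasibility + beta * magnitude
```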
Using the space in an elevator efficiently is important for a service robot, since it reduces the time spent waiting for the next elevator. To address this, we propose a hybrid approach that combines reinforcement learning (RL) with voice interaction for robot navigation when entering an elevator. RL gives the robot a high exploration ability to find a new clear path into the elevator compared with traditional navigation methods such as Optimal Reciprocal Collision Avoidance (ORCA). The proposed method allows the robot to take an active clear-path action toward the elevator when a crowd of people stands at the entrance even though there is still plenty of space inside. This is done by embedding a clear-path action (voice prompt) into the RL framework, and the resulting navigation policy helps the robot finish tasks efficiently and safely. Our approach provides a substantial improvement in the success rate and reward of entering the elevator compared with state-of-the-art navigation policies without an active clear-path operation.
In this paper we examine the problem of determining demonstration sufficiency for AI agents that learn from demonstrations: how can an AI agent self-assess whether it has received enough demonstrations from an expert to ensure a desired level of performance? To address this problem we propose a novel self-assessment approach based on Bayesian inverse reinforcement learning and value-at-risk to enable agents that learn from demonstrations to compute high-confidence bounds on their performance and use these bounds to determine when they have a sufficient number of demonstrations. We propose and evaluate two definitions of sufficiency: (1) normalized expected value difference, which measures regret with respect to the expert's unobserved reward function, and (2) improvement over a baseline policy. We demonstrate how to formulate high-confidence bounds on both of these metrics. We evaluate our approach in simulation and demonstrate the feasibility of developing an AI system that can accurately evaluate whether it has received sufficient training data to guarantee, with high confidence, that it can match an expert's performance or surpass the performance of a baseline policy within some desired safety threshold.
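The value-at-risk style bound can be illustrated as follows: given reward functions sampled from a Bayesian IRL posterior, compute the normalized expected value difference of the learned policy under each sample and take the (1 − δ) quantile as a high-confidence bound on regret. The function names and the policy-evaluation routines are placeholders, and this is only a sketch of the idea, not the paper's procedure.

```python
import numpy as np

def nevd_var_bound(reward_samples, eval_policy, expert_optimal_value, delta=0.05):
    """High-confidence (1 - delta) bound on normalized expected value difference.

    reward_samples: reward functions drawn from the Bayesian IRL posterior.
    eval_policy(r): value of the learned policy under reward r (placeholder).
    expert_optimal_value(r): value of an optimal policy under reward r (placeholder).
    """
    nevds = []
    for r in reward_samples:
        v_opt = expert_optimal_value(r)
        v_pi = eval_policy(r)
        nevds.append((v_opt - v_pi) / max(abs(v_opt), 1e-8))  # normalized regret
    # Value-at-risk: the (1 - delta) quantile of the regret distribution.
    return float(np.quantile(nevds, 1.0 - delta))

# Demonstrations could be deemed sufficient once the bound drops below a chosen threshold:
# sufficient = nevd_var_bound(samples, eval_policy, expert_optimal_value) < epsilon
```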
In this paper, we address the problem of safe trajectory planning for autonomous search and exploration in constrained, cluttered environments. Guaranteeing safe navigation is a challenging problem that has garnered significant attention. This work contributes a method that generates guaranteed safety-critical search trajectories in a cluttered environment. Our approach integrates safety-critical constraints using discrete control barrier functions (DCBFs) with ergodic trajectory optimization to enable safe exploration. Ergodic trajectory optimization plans continuous exploratory trajectories that guarantee full coverage of a space. We demonstrate through simulated and experimental results on a drone that our approach is able to generate trajectories that enable safe and effective exploration. Furthermore, we show the efficacy of our approach for safe exploration on real-world single- and multi-drone platforms.
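Under standard assumptions, the per-step discrete CBF condition imposed on a planned trajectory takes the form h(x_{k+1}) − h(x_k) ≥ −γ h(x_k). The snippet below shows how such conditions might be posed for a generic trajectory optimizer; the barrier and the decay rate γ are illustrative, not taken from the paper.

```python
def dcbf_constraints(h, xs, gamma=0.2):
    """Discrete CBF conditions for a trajectory xs = [x_0, ..., x_T].
    Each returned value must be kept >= 0 by the trajectory optimizer."""
    return [h(xs[k + 1]) - (1.0 - gamma) * h(xs[k]) for k in range(len(xs) - 1)]

# Example barrier: keep the drone outside a circular obstacle of radius r at (cx, cy).
def h_obstacle(x, cx=1.0, cy=1.0, r=0.5):
    return (x[0] - cx) ** 2 + (x[1] - cy) ** 2 - r ** 2
```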
Control certificates based on barrier functions have been a powerful tool for generating provably safe control policies for dynamical systems. However, existing methods based on barrier certificates are normally designed for white-box systems with differentiable dynamics, which makes them inapplicable to many practical applications where the system is a black box and cannot be accurately modeled. On the other hand, model-free reinforcement learning (RL) methods for black-box systems lack safety guarantees and suffer from low sample efficiency. In this paper, we propose a novel method that learns safe control policies and barrier certificates for black-box dynamical systems without requiring an accurate system model. Our method redesigns the loss function to back-propagate gradients to the control policy even when the black-box dynamical system is non-differentiable, and we show that the safety certificates hold on the black-box system. Empirical results in simulation show that, compared with state-of-the-art black-box safe control methods, our method achieves nearly 100% safety and goal-reaching rates. Our learned agents can also generalize to unseen scenarios while maintaining their original performance. The source code can be found at https://github.com/zengyi-qin/bcbf.
This work proposes a new framework for socially-aware local planning in crowded environments, building on the recently proposed trajectory-ranked maximum entropy deep inverse reinforcement learning (T-MEDIRL). To address the social navigation problem, our multi-modal learning planner explicitly considers social interaction factors as well as social-awareness factors within the T-MEDIRL pipeline to learn a reward function from human demonstrations. Moreover, we propose using the sudden velocity changes of the pedestrians around the robot to address the sub-optimality in human demonstrations. Our evaluation shows that this approach successfully enables a robot to navigate in crowded social environments and outperforms state-of-the-art social navigation methods in terms of success rate, navigation time, and invasion rate.
Reinforcement learning (RL) has proven effective in many scenarios. However, it typically requires exploring a sufficiently large number of state-action pairs, some of which may be unsafe. Its application to safety-critical systems therefore remains a challenge. An increasingly common approach to addressing safety involves a safety layer that projects the RL actions onto a safe set of actions. In turn, the difficulty with such frameworks is how to effectively couple RL with the safety layer to improve learning performance. In this paper, we frame safety as a differentiable robust control-barrier-function layer within a model-based RL framework. Moreover, we propose a way to modularly learn the underlying reward-driven task, independently of the safety constraints. We demonstrate that this approach both ensures safety and effectively guides exploration during training across a range of experiments, including zero-shot transfer when the reward is learned in a modular way.
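As a minimal illustration of a CBF safety layer that corrects a nominal RL action, the sketch below uses the closed-form minimum-norm correction for a single affine constraint, rather than the differentiable robust layer the abstract refers to; all symbols (dynamics terms f, g, barrier h, and its gradient) are assumptions for exposition.

```python
import numpy as np

def cbf_safety_layer(u_nom, x, f, g, h, grad_h, alpha=1.0):
    """Project a nominal action onto the half-space defined by the CBF condition
        grad_h(x) @ (f(x) + g(x) @ u) + alpha * h(x) >= 0
    using the closed-form minimum-norm correction for a single constraint."""
    Lf = grad_h(x) @ f(x)              # Lie derivative along the drift
    Lg = grad_h(x) @ g(x)              # Lie derivative along the input channels
    margin = Lf + Lg @ u_nom + alpha * h(x)
    if margin >= 0:
        return u_nom                   # nominal action already satisfies the condition
    # Shift the action just enough to restore the condition (minimum-norm correction).
    return u_nom - margin * Lg / (Lg @ Lg + 1e-8)
```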
When deploying reinforcement learning (RL) agents in physical systems, we must ensure that these agents are well aware of the underlying constraints. In many real-world problems, however, the constraints followed by expert agents (e.g., humans) are often hard to specify mathematically and unknown to the RL agents. To address these issues, constrained inverse reinforcement learning (CIRL) adopts the formalism of constrained Markov decision processes (CMDPs) and estimates constraints from expert demonstrations by learning a constraint function. As an emerging research topic, CIRL lacks common benchmarks, and previous works tested their algorithms on hand-crafted environments (e.g., grid worlds). In this paper, we construct a CIRL benchmark in the context of two major application domains: robot control and autonomous driving. We design relevant constraints for each environment and empirically study the ability of different algorithms to recover those constraints from expert trajectories that respect them. To handle stochastic dynamics, we propose a variational approach that extends the constraint distribution, and demonstrate its performance by comparing it with other CIRL baselines on our benchmark. The benchmark, including information for reproducing the performance of CIRL algorithms, is publicly available at https://github.com/guiliang/cirl-benchmarks-public
Learning a risk-aware policy is essential but rather challenging in unstructured robotic tasks. Safe reinforcement learning methods open up new possibilities for tackling this problem. However, conservative policy updates make it intractable to achieve sufficient exploration and desirable performance in complex, sample-expensive environments. In this paper, we propose a dual-agent safe reinforcement learning strategy consisting of a baseline agent and a safe agent. Such a decoupled framework enables high flexibility, data efficiency, and risk-awareness for RL-based control. Concretely, the baseline agent is responsible for maximizing rewards under standard RL settings and is therefore compatible with off-the-shelf training techniques for unconstrained optimization, exploration, and exploitation. The safe agent, on the other hand, mimics the baseline agent for policy improvement and learns to fulfill safety constraints via off-policy RL tuning. In contrast to training from scratch, safe policy correction requires significantly fewer interactions to obtain a near-optimal policy. The dual policies can be optimized synchronously via a shared replay buffer, or by leveraging a pre-trained model or a non-learning-based controller as a fixed baseline agent. Experimental results show that our approach can learn feasible skills without prior knowledge as well as derive risk-averse counterparts from pre-trained unsafe policies. The proposed method outperforms state-of-the-art safe RL algorithms on difficult robot locomotion and manipulation tasks with respect to both safety-constraint satisfaction and sample efficiency.
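One way the decoupled baseline/safe-agent setup might be written is a safe-agent loss that combines an imitation term toward the baseline policy with an off-policy constraint penalty estimated by a cost critic. The specific loss form, the cost critic, and the weights are assumptions, not the paper's exact objective.

```python
import torch
import torch.nn.functional as F

def safe_agent_loss(safe_policy, baseline_policy, cost_critic, states,
                    cost_limit=0.0, w_im=1.0, w_c=1.0):
    """Safe agent mimics the reward-maximizing baseline while learning to
    satisfy safety constraints estimated by a cost critic."""
    a_safe = safe_policy(states)
    with torch.no_grad():
        a_base = baseline_policy(states)               # reward-driven reference
    imitation = F.mse_loss(a_safe, a_base)             # stay close to the baseline
    expected_cost = cost_critic(states, a_safe).mean() # off-policy cost estimate
    constraint = F.relu(expected_cost - cost_limit)    # penalize expected violations
    return w_im * imitation + w_c * constraint
```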
A multi-robot system (MRS) is a group of coordinated robots designed to cooperate with each other and accomplish given tasks. Due to uncertainties in the operating environment, the system may encounter emergencies such as unobserved obstacles, moving vehicles, and extreme weather. Animal groups such as bee swarms exhibit collective emergency-reaction behaviors, such as bypassing obstacles and avoiding predators, similar to muscle conditioned reflexes, which organize local muscles to avoid harm in the first response without the delay of passing through the brain. Inspired by this, we develop a similar collective reflex mechanism for multi-robot systems to cope with emergencies. In this study, a bio-inspired emergency-reaction mechanism, Collective Conditioned Reflex (CCR), is developed based on animal collective behavior analysis and multi-agent reinforcement learning (MARL). The algorithm uses a physical model to determine whether a robot is experiencing an emergency; the rewards of the robots involved in the emergency are then augmented with a corresponding heuristic reward that evaluates the emergency and its consequences and decides the participation of nearby robots. CCR is validated in three typical emergency scenarios: turbulence, strong wind, and hidden obstacles. Simulation results demonstrate that CCR improves the emergency-reaction capability of robot teams, with faster reaction speed and safer trajectory adjustment compared with baseline methods.
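The reward augmentation described above can be sketched as follows, assuming a physical-model check `detect_emergency` and a heuristic `emergency_severity`; both are placeholders rather than the paper's definitions, and the shaping weight is arbitrary.

```python
def ccr_reward(base_reward, robot_state, neighbor_states,
               detect_emergency, emergency_severity, w=1.0):
    """Collective-Conditioned-Reflex-style reward shaping: robots involved in a
    detected emergency receive an additional heuristic term reflecting the
    severity of the emergency and its likely consequences."""
    if not detect_emergency(robot_state, neighbor_states):
        return base_reward
    return base_reward + w * emergency_severity(robot_state, neighbor_states)
```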
The imitation learning research community has recently made significant progress toward enabling artificial agents to imitate behaviors from video demonstrations alone. However, due to the high-dimensional nature of video observations, current state-of-the-art methods developed for this problem exhibit high sample complexity. To address this issue, we introduce a new algorithm called Visual Generative Adversarial Imitation from Observation using a State Observer (VGAIfO-SO). At its core, VGAIfO-SO seeks to address sample inefficiency by using a novel, self-supervised state observer that provides estimates of lower-dimensional proprioceptive state representations from high-dimensional images. Our experiments in several continuous-control environments show that VGAIfO-SO learns from video-only demonstrations more sample-efficiently than other imitation-from-observation (IfO) algorithms, and can sometimes even achieve performance close to that of Generative Adversarial Imitation from Observation (GAIfO) algorithms that have privileged access to the demonstrator's proprioceptive state information.
How to make imitation learning generalize well when demonstrations are relatively limited has been a persistent problem in reinforcement learning (RL). Poor demonstrations lead to narrow and biased data distributions, non-Markovian human expert demonstrations make it difficult for the agent to learn, and over-reliance on sub-optimal trajectories can make it hard for the agent to improve its performance. To address these problems, we propose a new algorithm, TD3fG, that smoothly transitions from learning from experts to learning from experience. Our algorithm achieves good performance in the MuJoCo environments with limited and sub-optimal demonstrations. We use behavior cloning to train a network as a reference action generator and utilize it in both the loss function and the exploration noise. This design helps the agent extract prior knowledge from demonstrations while reducing the adverse effects of their poor Markovian properties. It achieves better performance than the BC + fine-tuning and DDPGfD methods, especially when demonstrations are relatively limited. We call our method TD3fG, meaning TD3 from a generator.
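The "TD3 from a generator" idea, using a behavior-cloned reference action generator in both the actor loss and the exploration noise with an annealed weight, might look roughly like the sketch below; the annealing schedule and loss form are assumptions, not the paper's exact design.

```python
import torch
import torch.nn.functional as F

def td3fg_actor_loss(actor, critic, generator, states, step, total_steps, w0=1.0):
    """Actor loss: standard TD3 term plus a decaying pull toward the
    behavior-cloned reference action generator."""
    w = w0 * max(0.0, 1.0 - step / total_steps)   # anneal the BC guidance
    a = actor(states)
    q_loss = -critic(states, a).mean()
    with torch.no_grad():
        a_ref = generator(states)                 # BC network trained on demonstrations
    guidance = F.mse_loss(a, a_ref)
    return q_loss + w * guidance

def guided_exploration_action(actor, generator, state, step, total_steps,
                              sigma=0.1, w0=0.5):
    """Exploration: blend the actor's action toward the generator early in training."""
    w = w0 * max(0.0, 1.0 - step / total_steps)
    a = (1 - w) * actor(state) + w * generator(state)
    return a + sigma * torch.randn_like(a)
```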
Reinforcement learning (RL) is a promising approach that has seen limited application to real-world problems, because ensuring safe exploration and facilitating adequate exploitation are challenging when controlling robotic systems with unknown models and measurement uncertainties. Such learning problems become even more intractable for complex tasks over continuous state and action spaces. In this paper, we propose a learning-based control framework consisting of several components: (1) linear temporal logic (LTL) is leveraged so that complex tasks over an infinite horizon can be translated into a novel automaton structure; (2) we propose an innovative reward scheme for the RL agent with the formal guarantee that the globally optimal policy maximizes the probability of satisfying the LTL specification; (3) based on reward shaping, we develop a modular policy-gradient architecture that exploits the benefits of the automaton structure to decompose the overall task and improve the performance of the learned controller; (4) by incorporating Gaussian processes (GPs) to estimate the uncertain dynamics, we synthesize a model-based safeguard using exponential control barrier functions (ECBFs) to address problems with high relative degree. In addition, we exploit the properties of the LTL automaton and the ECBF to construct a guiding process that further improves exploration efficiency. Finally, we demonstrate the effectiveness of the framework through several robotic environments, showing that this ECBF-based modular deep RL algorithm achieves near-perfect success rates and safety guarding with high-probability confidence during training.
Reinforcement Learning (RL) can solve complex tasks but does not intrinsically provide any guarantees on system behavior. For real-world systems that fulfill safety-critical tasks, such guarantees on safety specifications are necessary. To bridge this gap, we propose a verifiably safe RL procedure with probabilistic guarantees. First, our approach probabilistically verifies a candidate controller with respect to a temporal logic specification, while randomizing the controller's inputs within a bounded set. Then, we use RL to improve the performance of this probabilistically verified, i.e. safe, controller and explore in the same bounded set around the controller's input as was randomized over in the verification step. Finally, we calculate probabilistic safety guarantees with respect to temporal logic specifications for the learned agent. Our approach is efficient for continuous action and state spaces and separates safety verification and performance improvement into two independent steps. We evaluate our approach on a safe evasion task where a robot has to evade a dynamic obstacle in a specific manner while trying to reach a goal. The results show that our verifiably safe RL approach leads to efficient learning and performance improvements while maintaining safety specifications.
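The probabilistic verification step can be illustrated by Monte Carlo checking of the temporal-logic specification under bounded input randomization, together with a one-sided confidence bound on the satisfaction probability. The Hoeffding-style bound below is an assumption about how such a guarantee could be computed, not the paper's exact procedure, and both callables are placeholders.

```python
import math

def verify_controller(rollout_satisfies_spec, sample_bounded_input, n=1000, delta=0.01):
    """Estimate the probability that the randomized controller satisfies the
    temporal-logic specification, with a Hoeffding lower confidence bound
    holding with probability at least 1 - delta."""
    successes = sum(rollout_satisfies_spec(sample_bounded_input()) for _ in range(n))
    p_hat = successes / n
    lower_bound = p_hat - math.sqrt(math.log(1.0 / delta) / (2.0 * n))
    return p_hat, max(0.0, lower_bound)
```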
Safety is a critical component of autonomous systems and remains a challenge for learning-based policies to be deployed in the real world. In particular, policies learned with reinforcement learning often fail to generalize to novel environments due to unsafe behavior. In this paper, we propose Sim-to-Lab-to-Real to bridge the reality gap with a probabilistically guaranteed, safety-aware policy distribution. To improve safety, we apply a dual-policy setup in which a performance policy is trained using the cumulative task reward and a backup (safety) policy is trained based on Hamilton-Jacobi (HJ) reachability analysis. In the Sim-to-Lab transfer, we apply a supervisory control scheme to shield unsafe actions during exploration; in the Lab-to-Real transfer, we leverage the Probably Approximately Correct (PAC)-Bayes framework to provide lower bounds on the expected performance and safety of policies in unseen environments. In addition, inherited from the HJ reachability analysis, the bound accounts for the expectation over the worst-case safety in each environment. We empirically study the framework for ego-vision navigation in two types of indoor environments with varying degrees of photorealism. We also demonstrate strong generalization performance through hardware experiments in real indoor spaces with a quadrupedal robot. See https://sites.google.com/princeton.edu/sim-to-lab-to-real for supplementary material.
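The supervisory control ("shielding") scheme used during exploration can be sketched as follows: if a learned safety critic predicts that the performance policy's action leads toward failure, the backup policy's action is executed instead. The threshold, sign convention, and names are illustrative assumptions.

```python
def shielded_action(state, perf_policy, backup_policy, safety_critic, threshold=0.0):
    """Execute the performance action unless the safety critic (e.g. trained with
    HJ-reachability-style targets) predicts it is unsafe; otherwise fall back to
    the backup (safety) policy."""
    a_perf = perf_policy(state)
    if safety_critic(state, a_perf) > threshold:   # positive value ~ predicted failure
        return backup_policy(state)
    return a_perf
```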
Safe Reinforcement Learning can be defined as the process of learning policies that maximize the expectation of the return in problems in which it is important to ensure reasonable system performance and/or respect safety constraints during the learning and/or deployment processes. We categorize and analyze two approaches of Safe Reinforcement Learning. The first is based on the modification of the optimality criterion, the classic discounted finite/infinite horizon, with a safety factor. The second is based on the modification of the exploration process through the incorporation of external knowledge or the guidance of a risk metric. We use the proposed classification to survey the existing literature, as well as suggesting future directions for Safe Reinforcement Learning.
Flocking control is a challenging problem in which agents need to reach a target location while maintaining the flock and avoiding collisions with obstacles and with other agents in the environment. Multi-agent reinforcement learning has achieved promising performance in flocking control. However, methods based on traditional reinforcement learning require a large number of interactions between the agents and the environment. This paper proposes a sub-optimal-policy-aided multi-agent reinforcement learning algorithm (SPA-MARL) to boost sample efficiency. SPA-MARL directly utilizes a prior policy, which can be manually designed or obtained with a non-learning method and whose performance may be sub-optimal, to aid the agents in learning. SPA-MARL recognizes the difference in performance between the sub-optimal policy and itself, and imitates the sub-optimal policy when the latter is better. We leverage SPA-MARL to solve the flocking control problem. A traditional control method based on artificial potential fields is used to generate the sub-optimal policy. Experiments demonstrate that SPA-MARL speeds up the training process and outperforms both the MARL baseline and the sub-optimal policy used.
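The mechanism of imitating the sub-optimal prior policy only while it outperforms the learner might be written as an extra behavior-cloning term gated on a performance comparison; the gating rule, loss form, and weight below are assumptions for illustration rather than the paper's formulation.

```python
import torch
import torch.nn.functional as F

def spa_marl_loss(rl_loss, policy, prior_policy, states,
                  policy_return, prior_return, w=0.5):
    """Add an imitation term toward the sub-optimal prior policy (e.g. an
    artificial-potential-field controller) only while the prior still performs better."""
    if prior_return <= policy_return:
        return rl_loss                              # prior no longer helps
    with torch.no_grad():
        a_prior = prior_policy(states)
    imitation = F.mse_loss(policy(states), a_prior)
    return rl_loss + w * imitation
```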
A trustworthy reinforcement learning algorithm should be competent at solving challenging real-world problems, which includes robustly handling uncertainties, satisfying safety constraints to avoid catastrophic failures, and generalizing to unseen scenarios during deployment. This study aims to provide an overview of these main perspectives on trustworthy reinforcement learning, namely its intrinsic vulnerabilities in robustness, safety, and generalizability. In particular, we give rigorous formulations, categorize the corresponding methods, and discuss benchmarks for each perspective. Moreover, we provide an outlook section to spur promising future directions, and briefly discuss external vulnerabilities that involve human feedback. We hope this survey can bring together separate threads of research in a unified framework and promote the trustworthiness of reinforcement learning.
Safety is a crucial property of every robotic platform: any control policy should always comply with actuator limits and avoid collisions with the environment and with humans. In reinforcement learning, safety is even more fundamental for exploring an environment without causing any damage. While there are many proposed solutions to the safe exploration problem, only a few of them can deal with the complexity of the real world. This paper introduces a new formulation of safe exploration for reinforcement learning of a variety of robotic tasks. Our approach applies to a broad class of robotic platforms and enforces safety even under complex collision constraints learned from data, by exploring the tangent space of the constraint manifold. Our proposed method achieves state-of-the-art performance in simulated high-dimensional and dynamic tasks while avoiding collisions with the environment. We demonstrate safe real-world deployment on a TIAGo++ robot, achieving remarkable performance in manipulation and human-robot interaction tasks.
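Exploring in the tangent space of a constraint manifold c(q) = 0 amounts to projecting candidate velocities onto the null space of the constraint Jacobian; a minimal numpy sketch under that assumption is shown below. The constraint Jacobian routine is a placeholder, and this is only one way such a projection could be realized.

```python
import numpy as np

def tangent_space_action(q, v_candidate, constraint_jacobian):
    """Project a candidate joint velocity onto the tangent space of the
    constraint manifold c(q) = 0, so exploration never pushes into the
    constraints (actuator limits, collisions learned from data, etc.)."""
    J = constraint_jacobian(q)                  # shape (n_constraints, n_joints)
    JJt_inv = np.linalg.pinv(J @ J.T)
    N = np.eye(q.shape[0]) - J.T @ JJt_inv @ J  # null-space projector of J
    return N @ v_candidate
```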