Modern robots require accurate forecasts to make optimal decisions in the real world. For example, self-driving cars need accurate predictions of other agents' future actions to plan safe trajectories. Current methods rely heavily on historical time series to accurately predict the future. However, relying fully on the observed history is problematic because it may be corrupted by noise, contain outliers, or not completely represent all possible outcomes. To solve this problem, we propose a novel framework for generating robust forecasts for robotic control. To model the real-world factors that affect future forecasts, we introduce the notion of an adversary, which perturbs observed historical time series to increase a robot's ultimate control cost. Specifically, we model this interaction as a zero-sum two-player game between a robot's forecaster and this hypothetical adversary. We show that our proposed game can be solved to a local Nash equilibrium using gradient-based optimization techniques. Furthermore, we show that a forecaster trained with our method performs 30.14% better than baselines on out-of-distribution real-world data.
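Concretely, the forecaster-adversary interaction above can be prototyped with simultaneous gradient descent-ascent. The sketch below is a minimal toy version of that game, assuming a linear forecasting model, an L-infinity perturbation budget, and a quadratic stand-in for the downstream control cost; none of these are the paper's implementation.

```python
# Minimal gradient descent-ascent sketch of the forecaster-vs-adversary
# zero-sum game. Model, budget, and control-cost proxy are all illustrative.
import torch

torch.manual_seed(0)
H, F = 8, 4                                  # history length, forecast horizon
forecaster = torch.nn.Linear(H, F)           # toy forecasting model
delta = torch.zeros(H, requires_grad=True)   # adversarial perturbation of history
opt_f = torch.optim.Adam(forecaster.parameters(), lr=1e-2)
opt_a = torch.optim.Adam([delta], lr=1e-2)

def control_cost(forecast, target):
    # Stand-in for the robot's downstream control cost.
    return ((forecast - target) ** 2).mean()

history, target = torch.randn(H), torch.randn(F)
for step in range(500):
    # Adversary ascends on the control cost (within a bounded budget)...
    cost = control_cost(forecaster(history + delta), target)
    opt_a.zero_grad(); (-cost).backward(); opt_a.step()
    with torch.no_grad():
        delta.clamp_(-0.1, 0.1)              # keep the perturbation bounded
    # ...while the forecaster descends on it; a stationary point of this
    # coupled update is a local Nash equilibrium of the game.
    cost = control_cost(forecaster(history + delta), target)
    opt_f.zero_grad(); cost.backward(); opt_f.step()
```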
Today's robots often interface data-driven perception and planning models with classical model-predictive controllers (MPC). Often, such learned perception/planning models produce erroneous waypoint predictions on out-of-distribution (OoD) or even adversarial visual inputs, which increase control costs. However, today's methods to train robust perception models are largely task-agnostic - they augment a dataset using random image transformations or adversarial examples targeted at the vision model in isolation. As such, they often introduce pixel perturbations that are ultimately benign for control. In contrast to prior work that synthesizes adversarial examples for single-step vision tasks, our key contribution is to synthesize adversarial scenarios tailored to multi-step, model-based control. To do so, we use differentiable MPC methods to calculate the sensitivity of a model-based controller to errors in state estimation. We show that re-training vision models on these adversarial datasets improves control performance on OoD test scenarios by up to 36.2% compared to standard task-agnostic data augmentation. We demonstrate our method on examples of robotic navigation, manipulation in RoboSuite, and control of an autonomous air vehicle.
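The key computation is differentiating the closed-loop control cost with respect to the state estimate. A hedged sketch under toy assumptions follows: double-integrator dynamics and a fixed feedback gain stand in for the paper's differentiable MPC, and projected gradient ascent finds the estimation error that most hurts control.

```python
# Sketch: find the state-estimate perturbation that maximizes control cost
# by backpropagating through a differentiable controller rollout.
import torch

A = torch.tensor([[1.0, 0.1], [0.0, 1.0]])   # discrete double integrator
B = torch.tensor([[0.0], [0.1]])
K = torch.tensor([[1.0, 1.8]])                # a stabilizing feedback gain

def rollout_cost(x0_est, x0_true, T=30):
    x, cost = x0_true, torch.tensor(0.0)
    err = x0_est - x0_true                    # persistent estimation error
    for _ in range(T):
        u = -K @ (x + err)                    # controller acts on the estimate
        x = A @ x + B @ u
        cost = cost + (x @ x) + 0.01 * (u @ u)
    return cost

x_true = torch.tensor([1.0, 0.0])
eps = torch.zeros(2, requires_grad=True)      # adversarial estimate error
for _ in range(100):                          # projected gradient ascent
    c = rollout_cost(x_true + eps, x_true)
    (g,) = torch.autograd.grad(c, eps)
    with torch.no_grad():
        eps += 0.05 * g
        eps.clamp_(-0.2, 0.2)                 # perturbation budget
print("worst-case estimate error:", eps.detach())
```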
We study a class of reach-avoid dynamic games in which multiple agents interact and each wishes to satisfy a distinct target condition while avoiding a failure condition. Reach-avoid games are commonly used to express safety-critical optimal control problems found in mobile robot motion planning. While a variety of approaches exist for these motion planning problems, we focus on finding time-consistent solutions, in which planned future motion remains optimal despite prior suboptimal actions. Though abstract, time consistency encapsulates an extremely desirable property: time-consistent motion plans remain optimal even when a robot's motion diverges from the plan early on, e.g., due to intrinsic dynamic uncertainty or extrinsic environmental disturbances. Our main contribution is a computationally efficient algorithm for multi-agent reach-avoid games that yields time-consistent solutions. We demonstrate our approach in two- and three-player simulated driving scenarios, in which our method provides safe control policies for all agents.
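For intuition, the single-agent version of a reach-avoid computation can be written as a short value iteration; the 1-D grid, three motion primitives, and wrap-around boundary handling below are simplifying assumptions, not the paper's multi-agent, time-consistent algorithm.

```python
# Single-agent reach-avoid value iteration on a 1-D grid:
# V(x) = max( avoid(x), min( target(x), min_u V(x') ) ).
import numpy as np

N = 50
xs = np.linspace(-1, 1, N)
target = np.abs(xs - 0.6) - 0.1          # <= 0 inside the target set
avoid = 0.15 - np.abs(xs + 0.3)          # > 0 inside the failure set
V = target.copy()
for _ in range(100):                     # backward recursion to convergence
    best = V.copy()
    for u in (-1, 0, 1):                 # three motion primitives
        best = np.minimum(best, np.roll(V, -u))  # shifted value = successor
        # (np.roll wraps at the boundary; a real implementation
        #  would handle boundaries explicitly)
    V = np.maximum(avoid, np.minimum(target, best))
# States with V <= 0 can reach the target while avoiding failure.
print("reach-avoid set:", xs[V <= 0])
```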
Autonomous vehicles (AVs) must interact with a diverse set of human drivers in heterogeneous geographic regions. Ideally, fleets of AVs should share trajectory data to continually re-train and improve trajectory forecasting models from collective experience using cloud-based distributed learning. At the same time, these robots should ideally avoid uploading raw driver interaction data, in order to protect proprietary policies (when sharing insights with other companies) or to protect driver privacy. Federated learning (FL) is a popular mechanism for learning models at a cloud server from diverse users without divulging private local data. However, FL is often not robust -- it learns sub-optimal models when user data comes from highly heterogeneous distributions, which is a key hallmark of human-robot interaction. In this paper, we propose a novel variant of personalized FL to specialize robust robot learning models to diverse user distributions. Our algorithm outperforms standard FL benchmarks by up to 2x in real user studies that we conducted, in which human-operated vehicles must gracefully merge with simulated AVs in the standard CARLA and CARLO AV simulators.
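The personalization recipe can be sketched generically: learn a shared model with federated averaging, then adapt it locally per user. The linear-regression clients below are illustrative stand-ins for heterogeneous driving styles; the paper's specific algorithm may differ.

```python
# Generic personalized-FedAvg sketch: a shared global model captures common
# structure, and each client fine-tunes a personal copy on local data only.
import numpy as np

rng = np.random.default_rng(0)
d, n_clients = 3, 5

def make_client():
    X = rng.normal(size=(40, d))
    w_user = rng.normal(size=d)              # each user's own "driving style"
    return X, X @ w_user + 0.1 * rng.normal(size=40)

def local_sgd(w, X, y, lr=0.1, steps=20):
    for _ in range(steps):                   # least-squares client update
        w = w - lr * 2 * X.T @ (X @ w - y) / len(y)
    return w

clients = [make_client() for _ in range(n_clients)]
global_w = np.zeros(d)
for _ in range(10):                          # standard FedAvg rounds
    updates = [local_sgd(global_w.copy(), X, y) for X, y in clients]
    global_w = np.mean(updates, axis=0)

# Personalization: each user fine-tunes the shared model on local data.
personal = [local_sgd(global_w.copy(), X, y, steps=5) for X, y in clients]
```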
Game-theoretic motion planners are an effective solution for controlling multiple highly interactive robotic systems. Most existing game-theoretic planners unrealistically assume that a priori knowledge of all agents' objective functions is available. To address this, we propose a fault-tolerant receding-horizon game-theoretic motion planner that leverages inter-agent communication together with intention-hypothesis likelihoods. Specifically, robots communicate their objective functions to encode their intentions. A discrete Bayesian filter is designed to infer the objectives in real time based on the discrepancy between observed trajectories and those implied by the communicated intentions. In simulation, we consider three safety-critical autonomous driving scenarios -- overtaking, lane crossing, and intersections -- to demonstrate our planner's ability to capitalize on alternative intention hypotheses and generate safe trajectories in the presence of faulty transmissions in the communication network.
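The discrete Bayesian filter at the core of this scheme is compact enough to sketch directly: each intention hypothesis is scored by how well the trajectory it predicts matches the observation. The Gaussian likelihood and the two-hypothesis setup below are illustrative assumptions.

```python
# Discrete Bayes filter over intention hypotheses: belief is reweighted by
# the discrepancy between observed and hypothesis-predicted positions.
import numpy as np

def bayes_update(belief, observed, predicted, sigma=0.5):
    # predicted[i] is the position that hypothesis i expects at this step.
    err = np.linalg.norm(predicted - observed, axis=1)
    likelihood = np.exp(-0.5 * (err / sigma) ** 2)  # Gaussian assumption
    belief = belief * likelihood
    return belief / belief.sum()

# Two hypotheses: the communicated intent vs. an alternative (faulty message).
belief = np.array([0.5, 0.5])
for t in range(5):
    observed = np.array([0.0, 1.0 * t])             # car actually goes straight
    predicted = np.array([[0.5 * t, 1.0 * t],       # hypothesis 0: lane change
                          [0.0, 1.0 * t]])          # hypothesis 1: straight
    belief = bayes_update(belief, observed, predicted)
print("posterior over intents:", belief)            # mass shifts to 'straight'
```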
Robust reinforcement learning (RL) focuses on improving performance under model errors or adversarial attacks, which facilitates the real-world deployment of RL agents. Robust adversarial reinforcement learning (RARL) is one of the most popular frameworks for robust RL. However, most of the existing literature models RARL as a zero-sum simultaneous game with Nash equilibrium as the solution concept, which can ignore the sequential nature of RL deployment, produce overly conservative agents, and induce training instability. In this paper, we introduce a novel hierarchical formulation of robust RL -- a general-sum Stackelberg game model called RRL-Stack -- to formalize this sequential nature and provide extra flexibility for robust training. We develop the Stackelberg policy gradient algorithm to solve RRL-Stack, leveraging the Stackelberg learning dynamics by taking the adversary's response into account. Our method generates challenging yet solvable adversarial environments that benefit the RL agent's robust learning. Our algorithm demonstrates better training stability and robustness against different test conditions on single-agent robot control and multi-agent highway merging tasks.
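The essential difference from simultaneous gradient play is that the leader differentiates through the follower's response. A toy quadratic game makes the mechanism visible; the payoffs below are illustrative and not the RRL-Stack objectives.

```python
# Stackelberg learning dynamics on a toy differentiable game: the leader's
# gradient includes the follower's (here, analytic) best response, which in
# RL would instead be approximated by inner policy-gradient steps.
import torch

theta = torch.tensor(1.0, requires_grad=True)   # leader (protagonist) param
for step in range(200):
    # Follower best-responds: for payoff theta*phi - phi^2, argmax is theta/2.
    phi = 0.5 * theta
    leader_loss = (theta - 2.0) ** 2 + theta * phi
    # Total derivative flows through phi(theta) -- the Stackelberg ingredient.
    (g,) = torch.autograd.grad(leader_loss, theta)
    with torch.no_grad():
        theta -= 0.05 * g
print("leader converges to:", theta.item())      # ~4/3, the Stackelberg optimum
```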
Merging is, in general, a challenging task for both human drivers and autonomous vehicles, especially in dense traffic, because the merging vehicle typically needs to interact with other vehicles to identify or create a gap and merge in safely. In this paper, we consider the problem of autonomous vehicle control for forced merge scenarios. We propose a novel game-theoretic controller, called the Leader-Follower Game Controller (LFGC), in which the interactions between the autonomous ego vehicle and other vehicles with a priori uncertain driving intentions are modeled as a partially observable leader-follower game. The LFGC estimates the other vehicles' intentions online based on observed trajectories, then predicts their future trajectories and plans the ego vehicle's own trajectory using model predictive control (MPC) to simultaneously achieve probabilistically guaranteed safety and the merging objective. To verify the performance of the LFGC, we test it in simulations and with NGSIM data, where it demonstrates a high merging success rate of 97.5%.
This paper presents a novel planning and control strategy for racing against multiple vehicles. The proposed racing strategy switches between two modes. When there are no surrounding vehicles, a learning-based model predictive control (MPC) trajectory planner is used to guarantee that the ego vehicle achieves better lap timing. When the ego vehicle is competing with surrounding vehicles to overtake, an optimization-based planner generates multiple dynamically feasible trajectories through parallel computation. Each trajectory is optimized under an MPC formulation with a different homotopic Bezier-curve reference path lying laterally between surrounding vehicles. The time-optimal trajectory among these homotopic candidates is selected, and a low-level MPC controller with obstacle avoidance constraints guarantees the system's safety. The proposed algorithm can generate collision-free trajectories and track them while improving lap timing performance with steady, low computational complexity, outperforming existing approaches in both timing and performance in a car racing environment. To demonstrate the performance of our racing strategy, we simulate multiple randomly generated moving vehicles on the track and test the ego vehicle's overtaking maneuvers.
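The homotopy construction can be illustrated with cubic Bezier reference paths that pass on different sides of the surrounding cars; in the sketch below, a path-length proxy stands in for the paper's MPC-based time-optimality scoring, and the lateral offsets are arbitrary.

```python
# Build several cubic Bezier reference paths, one per homotopy class
# (i.e., per gap around surrounding vehicles), and keep the best candidate.
import numpy as np

def bezier(p0, p1, p2, p3, n=50):
    t = np.linspace(0, 1, n)[:, None]
    return ((1 - t) ** 3 * p0 + 3 * (1 - t) ** 2 * t * p1
            + 3 * (1 - t) * t ** 2 * p2 + t ** 3 * p3)

start, goal = np.array([0.0, 0.0]), np.array([60.0, 0.0])
candidates = []
for lateral in (-3.0, 0.0, 3.0):          # one homotopy class per gap
    c1 = np.array([20.0, lateral])        # control points bend the path
    c2 = np.array([40.0, lateral])
    candidates.append(bezier(start, c1, c2, goal))

def path_length(path):
    return np.linalg.norm(np.diff(path, axis=0), axis=1).sum()

best = min(candidates, key=path_length)   # proxy for "time-optimal" selection
print("selected path length:", path_length(best))
```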
Many autonomous agents, such as intelligent vehicles, are inherently required to interact with one another. Game theory provides a natural mathematical tool for robot motion planning in such interactive settings. However, tractable algorithms for such problems usually rely on a strong assumption, namely that the objectives of all players in the scene are known. To make such tools applicable for ego-centric planning with only local information, we propose an adaptive model-predictive game solver, which jointly infers other players' objectives online and computes a corresponding generalized Nash equilibrium (GNE) strategy. The adaptivity of our approach is enabled by a differentiable trajectory game solver whose gradient signal is used for maximum likelihood estimation (MLE) of opponents' objectives. This differentiability of our pipeline facilitates direct integration with other differentiable elements, such as neural networks (NNs). Furthermore, in contrast to existing solvers for cost inference in games, our method handles not only partial state observations but also general inequality constraints. In two simulated traffic scenarios, we find superior performance of our approach over both existing game-theoretic methods and non-game-theoretic model-predictive control (MPC) approaches. We also demonstrate our approach's real-time planning capabilities and robustness in two hardware experiments.
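The inference mechanism can be miniaturized: if the opponent's equilibrium behavior is a differentiable function of an unknown objective weight, that weight can be fit by gradient-based MLE. The analytic one-player "solver" below is a drastically simplified stand-in for the paper's differentiable GNE solver.

```python
# Toy inverse game: infer an opponent's objective weight by differentiating
# through their equilibrium behavior. Here the "game solve" is the closed-form
# argmin of  w*||x - goal||^2 + ||x||^2,  namely  x = w/(w+1) * goal.
import torch

torch.manual_seed(0)
w_true = 2.0
goal = torch.tensor([4.0, 0.0])

def solve(w):
    return (w / (w + 1.0)) * goal               # differentiable "equilibrium"

observed = solve(torch.tensor(w_true)) + 0.01 * torch.randn(2)
w = torch.tensor(0.5, requires_grad=True)       # objective hypothesis
opt = torch.optim.Adam([w], lr=0.05)
for _ in range(300):                            # maximum likelihood estimation
    loss = ((solve(w) - observed) ** 2).sum()   # Gaussian observation model
    opt.zero_grad(); loss.backward(); opt.step()
print("inferred objective weight:", w.item())   # ~2.0
```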
Robots such as autonomous vehicles and assistive manipulators are increasingly operating in dynamic environments and close physical proximity to people. In such scenarios, the robot can leverage a human motion predictor to predict their future states and plan safe and efficient trajectories. However, no model is ever perfect -- when the observed human behavior deviates from the model predictions, the robot might plan unsafe maneuvers. Recent works have explored maintaining a confidence parameter in the human model to overcome this challenge, wherein the predicted human actions are tempered online based on the likelihood of the observed human action under the prediction model. This has opened up a new research challenge, i.e., \textit{how to compute the future human states online as the confidence parameter changes?} In this work, we propose a Hamilton-Jacobi (HJ) reachability-based approach to overcome this challenge. Treating the confidence parameter as a virtual state in the system, we compute a parameter-conditioned forward reachable tube (FRT) that provides the future human states as a function of the confidence parameter. Online, as the confidence parameter changes, we can simply query the corresponding FRT, and use it to update the robot plan. Computing parameter-conditioned FRT corresponds to an (offline) high-dimensional reachability problem, which we solve by leveraging recent advances in data-driven reachability analysis. Overall, our framework enables online maintenance and updates of safety assurances in human-robot interaction scenarios, even when the human prediction model is incorrect. We demonstrate our approach in several safety-critical autonomous driving scenarios, involving a state-of-the-art deep learning-based prediction model.
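The confidence-parameter update that drives the online FRT queries can be sketched as a Bayesian filter over a discretized confidence value; the noisily-rational action model below is a common assumption in this line of work, not necessarily the paper's exact formulation, and the FRT itself (an offline high-dimensional reachability computation) is not reproduced here.

```python
# Online update of a model-confidence parameter beta: the human is modeled
# as noisily rational, and beta's posterior tracks how well the prediction
# model explains observed actions.
import numpy as np

betas = np.array([0.1, 1.0, 10.0])          # candidate confidence values
belief = np.ones(3) / 3                     # uniform prior over beta
model_utility = np.array([0.0, 0.5, 1.0])   # predictor's utility for 3 actions

def action_likelihood(a_idx, beta):
    p = np.exp(beta * model_utility)        # softmax-rational action model
    return (p / p.sum())[a_idx]

for obs in [0, 0, 2]:                       # human mostly deviates from model
    like = np.array([action_likelihood(obs, b) for b in betas])
    belief = belief * like
    belief /= belief.sum()
print("posterior over confidence:", belief) # low beta gains mass
# Online, the maintained beta would index into the parameter-conditioned FRT
# to retrieve the matching set of future human states.
```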
A trustworthy reinforcement learning algorithm should be competent at solving challenging real-world problems, including robustly handling uncertainty, satisfying safety constraints to avoid catastrophic failures, and generalizing to unseen scenarios during deployment. This study aims to overview these main perspectives of trustworthy reinforcement learning, namely its intrinsic vulnerabilities in robustness, safety, and generalizability. In particular, we give rigorous formulations, categorize the corresponding methods, and discuss benchmarks for each perspective. In addition, we provide an outlook section to spur promising future directions, and briefly discuss extrinsic vulnerabilities arising from human feedback. We hope this survey can bring together separate threads of research in a unified framework and promote the trustworthiness of reinforcement learning.
Delays endanger the safety of autonomous systems operating in rapidly changing environments, such as the nondeterministic traffic participants encountered in autonomous driving and high-speed racing. Unfortunately, delays are typically not considered during conventional controller design or before deployment in the physical world. In this paper, the computation delays arising from nonlinear optimization for motion planning and control, as well as other unavoidable actuator-induced delays, are addressed systematically and in a unified manner. To handle all of these delays, in our framework: 1) we propose a new filtering approach, requiring no prior knowledge of the dynamics or disturbance distribution, to adaptively and safely estimate the time-varying computation delay; 2) we model the actuation dynamics to capture steering delay; and 3) all constrained optimization is realized within a robust tube model predictive controller. On the application side, we demonstrate that our approach is suitable for both autonomous driving and autonomous racing. Our method constitutes a novel design for a standalone delay-compensation controller. Moreover, when a learning-based controller that assumes no delay serves as the primary controller, our approach acts as the primary controller's safety guard.
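One simple way to realize point 1) is to track a conservative upper envelope of measured solve times, rising immediately and decaying slowly; the filter below is a hedged sketch of that idea under assumed delay dynamics, not necessarily the paper's estimator.

```python
# Conservative upper-envelope filter for a time-varying computation delay:
# jump up immediately when exceeded, decay slowly otherwise, so a robust
# tube MPC is always sized for at least the current delay.
import numpy as np

rng = np.random.default_rng(1)
estimate, decay, margin = 0.0, 0.995, 1.1
for k in range(200):
    # Synthetic solve-time measurement: drift plus bounded noise.
    measured = 0.02 + 0.01 * np.sin(0.05 * k) + rng.uniform(0, 0.005)
    estimate = max(margin * measured,    # rise fast when the bound is violated
                   decay * estimate)     # otherwise decay slowly downward
print(f"delay bound: {estimate * 1e3:.1f} ms")  # feeds the robust tube size
```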
Trajectory prediction is essential for autonomous vehicles (AVs) to plan correct and safe driving behaviors. While many prior works aim to achieve higher prediction accuracy, few study the adversarial robustness of their methods. To bridge this gap, we propose to study the adversarial robustness of data-driven trajectory prediction systems. We devise an optimization-based adversarial attack framework that leverages a carefully designed differentiable dynamic model to generate realistic adversarial trajectories. Empirically, we benchmark the adversarial robustness of state-of-the-art prediction models and show that our attack increases prediction error by more than 50% on general metrics and 37% on planning-aware metrics. We also show that our attack can cause an AV to drive off the road or collide with other vehicles in simulation. Finally, we demonstrate how to mitigate such attacks with an adversarial training scheme.
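The attack recipe, perturbing controls rather than positions so the adversarial history stays dynamically feasible, can be sketched as projected gradient ascent through a differentiable kinematic model; the linear predictor and the ground-truth proxy below are toy assumptions, not the paper's models.

```python
# Perturb *controls*, roll them through differentiable unicycle dynamics to
# get a feasible adversarial history, then ascend on the predictor's error.
import torch

def rollout(ctrls, x0):
    # Differentiable unicycle: state (px, py, heading, speed).
    xs, x = [], x0
    for a, w in ctrls:                       # acceleration, yaw rate
        px, py, th, v = x
        x = torch.stack([px + v * torch.cos(th), py + v * torch.sin(th),
                         th + w, v + a])
        xs.append(x)
    return torch.stack(xs)

predictor = torch.nn.Linear(8 * 4, 2)        # toy model: history -> next xy
x0 = torch.tensor([0.0, 0.0, 0.0, 1.0])
nominal = torch.zeros(8, 2)                  # nominal controls: drive straight
delta = torch.zeros_like(nominal, requires_grad=True)
for _ in range(50):                          # projected gradient ascent
    hist = rollout(nominal + delta, x0)
    pred = predictor(hist.reshape(-1))
    true_next = hist[-1, :2]                 # crude ground-truth proxy
    err = ((pred - true_next) ** 2).sum()
    (g,) = torch.autograd.grad(err, delta)
    with torch.no_grad():
        delta += 0.1 * g.sign()
        delta.clamp_(-0.2, 0.2)              # keep controls plausible
```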
We address the problem of ego-vehicle navigation in dense simulated traffic environments populated by road agents with varying driver behaviors. Navigation in such environments is challenging due to the unpredictability of agents' actions caused by their heterogeneous behaviors. We present a new simulation technique that enriches existing traffic simulators with behavior-rich trajectories corresponding to varying levels of aggressiveness, generated with the help of a driver behavior modeling algorithm. We then use the enriched simulator to train a deep reinforcement learning (DRL) policy consisting of a set of high-level vehicle control commands, and use this policy at test time to perform local navigation in dense traffic. Our policy implicitly models the interactions between traffic agents and computes safe trajectories for the ego vehicle in the presence of aggressive driver maneuvers such as overtaking, over-speeding, weaving, and sudden lane changes. Our enhanced behavior-rich simulator can be used to generate datasets of trajectories corresponding to diverse driver behaviors and traffic densities, and our behavior-based navigation scheme can be combined with state-of-the-art navigation algorithms.
The last half-decade has seen a steep rise in the number of contributions on safe learning methods for real-world robot deployments from both the control and reinforcement learning communities. This paper provides a concise but holistic review of the recent advances in using machine learning to achieve safe decision-making under uncertainty, with a focus on unifying the language and frameworks used in control theory and reinforcement learning research. Our review includes: learning-based control approaches that safely improve performance by learning the uncertain dynamics, reinforcement learning approaches that encourage safety or robustness, and methods that can formally certify the safety of a learned control policy. As data- and learning-based robot control methods continue to gain traction, researchers must understand when and how best to leverage them in real-world scenarios where safety is imperative, such as when operating in close proximity to humans. We highlight some of the open challenges that will drive the field of robot learning in the coming years, and emphasize the need for realistic physics-based benchmarks to facilitate fair comparisons between control and reinforcement learning approaches.
Autonomous vehicle (AV) stacks are typically built in a modular fashion, with explicit components performing detection, tracking, prediction, planning, control, etc. While modularity improves reusability, interpretability, and generalizability, it also suffers from compounding errors, information bottlenecks, and integration challenges. To overcome these challenges, a prominent approach is to convert the AV stack into an end-to-end neural network and train it with data. While such approaches have achieved impressive results, they typically lack interpretability and reusability, and they eschew principled analytical components, such as planning and control, in favor of deep neural networks. To enable the joint optimization of AV stacks while retaining modularity, we present DiffStack, a differentiable and modular stack for prediction, planning, and control. Crucially, our model-based planning and control algorithms leverage recent advancements in differentiable optimization to produce gradients, enabling optimization of upstream components, such as prediction, via backpropagation through planning and control. Our results on the nuScenes dataset indicate that end-to-end training with DiffStack yields substantial improvements in open-loop and closed-loop planning metrics by, e.g., learning to make fewer prediction errors that would affect planning. Beyond these immediate benefits, DiffStack opens up new opportunities for fully data-driven yet modular and interpretable AV architectures. Project website: https://sites.google.com/view/diffstack
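The core mechanism, planning-cost gradients flowing back into the predictor, can be miniaturized as follows. The one-step analytic "planner" is a toy stand-in for differentiable MPC; with such a linear planner the objective reduces to a weighted prediction error, whereas a real MPC would weight errors by their actual control consequences.

```python
# Train an upstream predictor through a downstream differentiable "planner",
# so the predictor learns to avoid exactly the errors that hurt planning.
import torch

torch.manual_seed(0)
predictor = torch.nn.Linear(4, 2)             # obs -> predicted waypoint
opt = torch.optim.Adam(predictor.parameters(), lr=1e-2)

def plan(waypoint):
    # Toy differentiable planner: one step from the origin toward the waypoint.
    x0 = torch.zeros(2)
    return x0 + 0.5 * (waypoint - x0)

for _ in range(500):
    obs = torch.randn(4)
    true_wp = obs[:2] - obs[2:]               # synthetic ground-truth waypoint
    ego = plan(predictor(obs))
    # Loss is the *planning* discrepancy, not raw prediction error; gradients
    # reach the predictor by backpropagating through plan().
    loss = ((ego - plan(true_wp)) ** 2).sum()
    opt.zero_grad(); loss.backward(); opt.step()
```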
Dynamic game arises as a powerful paradigm for multi-robot planning, for which safety constraint satisfaction is crucial. Constrained stochastic games are of particular interest, as real-world robots need to operate and satisfy constraints under uncertainty. Existing methods for solving stochastic games handle chance constraints using exponential penalties with hand-tuned weights. However, finding a suitable penalty weight is nontrivial and requires trial and error. In this paper, we propose the chance-constrained iterative linear-quadratic stochastic games (CCILQGames) algorithm. CCILQGames solves chance-constrained stochastic games using the augmented Lagrangian method. We evaluate our algorithm in three autonomous driving scenarios, including merge, intersection, and roundabout. Experimental results and Monte Carlo tests show that CCILQGames can generate safe and interactive strategies in stochastic environments.
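The augmented Lagrangian mechanism that removes the hand-tuned penalty weight is easy to see on a scalar toy problem: the multiplier is learned by dual updates. This sketch is generic, not the CCILQGames solver itself.

```python
# Augmented Lagrangian on a toy problem: minimize x^2 subject to
# g(x) = 1 - x <= 0 (i.e., x >= 1). The multiplier lambda is learned by dual
# ascent, so no penalty weight needs hand-tuning; solution: x = 1, lambda = 2.
x, lam, rho = 0.0, 0.0, 1.0
for outer in range(20):
    for _ in range(100):                          # inner unconstrained solve
        g = 1.0 - x
        # d/dx of  x^2 + (1/(2*rho)) * (max(0, lam + rho*g)^2 - lam^2)
        grad = 2 * x - max(0.0, lam + rho * g)
        x -= 0.05 * grad
    lam = max(0.0, lam + rho * (1.0 - x))         # multiplier (dual) update
print(f"x = {x:.3f}, lambda = {lam:.3f}")         # -> x ~ 1.0, lambda ~ 2.0
```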
With the development of deep representation learning, the domain of reinforcement learning (RL) has become a powerful learning framework now capable of learning complex policies in high dimensional environments. This review summarises deep reinforcement learning (DRL) algorithms and provides a taxonomy of automated driving tasks where (D)RL methods have been employed, while addressing key computational challenges in real world deployment of autonomous driving agents. It also delineates adjacent domains such as behavior cloning, imitation learning, inverse reinforcement learning that are related but are not classical RL algorithms. The role of simulators in training agents, methods to validate, test and robustify existing solutions in RL are discussed.
Trajectory prediction is an integral component of modern autonomous systems as it allows for envisioning future intentions of nearby moving agents. Due to the lack of other agents' dynamics and control policies, deep neural network (DNN) models are often employed for trajectory forecasting tasks. Although there exists an extensive literature on improving the accuracy of these models, there is a very limited number of works studying their robustness against adversarially crafted input trajectories. To bridge this gap, in this paper, we propose a targeted adversarial attack against DNN models for trajectory forecasting tasks. We call the proposed attack TA4TP for Targeted adversarial Attack for Trajectory Prediction. Our approach generates adversarial input trajectories that are capable of fooling DNN models into predicting user-specified target/desired trajectories. Our attack relies on solving a nonlinear constrained optimization problem where the objective function captures the deviation of the predicted trajectory from a target one while the constraints model physical requirements that the adversarial input should satisfy. The latter ensures that the inputs look natural and they are safe to execute (e.g., they are close to nominal inputs and away from obstacles). We demonstrate the effectiveness of TA4TP on two state-of-the-art DNN models and two datasets. To the best of our knowledge, we propose the first targeted adversarial attack against DNN models used for trajectory forecasting.
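A targeted attack of this flavor can be sketched in penalty form: push the prediction toward an attacker-chosen trajectory while a proximity penalty approximates the "looks natural" requirement, which the paper handles with explicit constrained optimization. The linear predictor and all dimensions below are toy assumptions.

```python
# Targeted attack sketch: optimize the input history so the predictor outputs
# an attacker-specified trajectory, while staying close to the nominal input.
import torch

predictor = torch.nn.Linear(10 * 2, 5 * 2)      # toy: 10 past xy -> 5 future xy
nominal = torch.cumsum(torch.ones(10, 2) * 0.5, dim=0)   # straight history
target = torch.full((5, 2), 3.0)                # attacker-desired prediction
x = nominal.clone().requires_grad_(True)
opt = torch.optim.Adam([x], lr=0.01)
for _ in range(200):
    pred = predictor(x.reshape(-1)).reshape(5, 2)
    attack = ((pred - target) ** 2).sum()       # hit the target trajectory
    natural = ((x - nominal) ** 2).sum()        # stay near the nominal input
    loss = attack + 10.0 * natural              # penalty form of the constraint
    opt.zero_grad(); loss.backward(); opt.step()
```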
Robots operating in multi-agent settings must simultaneously model the environment and the behavior of the human or robotic agents who share it. Such modeling is often approached using simultaneous localization and mapping (SLAM); however, SLAM algorithms usually neglect multi-agent interactions. In contrast, the motion planning literature often uses dynamic game theory to explicitly model the noncooperative interactions of multiple agents in a known environment with perfect localization. Here, we present GTP-SLAM, a novel iterative-best-response-based SLAM algorithm that accurately performs state localization and map reconstruction while using game-theoretic priors to capture the inherently noncooperative interactions among multiple agents in an unknown scene. By formulating the underlying SLAM problem as a potential game, we inherit a strong convergence guarantee. Empirical results show that, when deployed in a realistic traffic simulation, our approach performs localization and mapping more accurately than a standard bundle adjustment algorithm across a wide range of noise levels.
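Iterative best response on a potential game is the algorithmic core; on a two-agent quadratic potential it reduces to alternating closed-form updates, as in this illustrative sketch (each best-response sweep decreases the shared potential, which is the source of the convergence guarantee mentioned above).

```python
# Iterative best response on a toy potential game with shared potential
#   (x0 - g0)^2 + (x1 - g1)^2 + c * (x0 - x1 + d)^2
# Each agent minimizes its own cost holding the other fixed; for a potential
# game, every sweep decreases the potential, so the loop converges.
g0, g1, c, d = 0.0, 4.0, 0.5, 2.0
x0, x1 = 0.0, 0.0
for sweep in range(50):
    x0 = (g0 + c * (x1 - d)) / (1 + c)   # agent 0's closed-form best response
    x1 = (g1 + c * (x0 + d)) / (1 + c)   # agent 1's closed-form best response
print(f"equilibrium: x0 = {x0:.3f}, x1 = {x1:.3f}")
```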