When robots interact with human partners, these partners often change their behavior in response to the robot. On the one hand, this is challenging because the robot must learn to coordinate with dynamic partners. On the other hand, if the robot understands these dynamics, it can leverage its own behavior to influence the human and guide the team toward effective collaboration. Prior research has enabled robots to learn to influence other robots or simulated agents. In this paper we extend these learning approaches so that robots now influence humans. What makes humans especially hard to influence is that not only do humans react to the robot, but the way a single user reacts to the robot can change over time, and different humans respond to the same robot behavior in different ways. We therefore propose a robust approach that learns to influence changing partner dynamics. Our method first trains with a set of partners across repeated interactions, and learns to predict the current partner's behavior from the previous states, actions, and rewards. Next, we rapidly adapt to new partners by sampling trajectories the robot learned with the original partners, and then leveraging these existing behaviors to influence the new partner dynamics. We compare the resulting algorithm to alternatives across simulated environments and a user study in which the robot and participants collaboratively build towers. We find that our approach outperforms the alternatives even when partners follow new or unexpected dynamics. Videos of the user study are available here: https://youtu.be/lyswm8an18g
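A minimal sketch of the core learning step described above: predict the current partner's behavior from the previous interaction's states, actions, and rewards. The network, shapes, and names (e.g., PartnerPredictor) are illustrative assumptions, not the authors' code.

```python
# Minimal sketch (not the paper's implementation): learn to predict the partner's
# next behavior from the previous interaction's states, actions, and rewards.
import torch
import torch.nn as nn

class PartnerPredictor(nn.Module):
    def __init__(self, interaction_dim: int, behavior_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(interaction_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, behavior_dim),
        )

    def forward(self, prev_interaction: torch.Tensor) -> torch.Tensor:
        # prev_interaction: concatenated (states, actions, rewards) from the last interaction
        return self.net(prev_interaction)

def train_on_partner_population(model, dataset, epochs: int = 100, lr: float = 1e-3):
    """dataset yields (prev_interaction, next_partner_behavior) pairs collected
    over repeated interactions with a set of training partners."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for prev_interaction, next_behavior in dataset:
            loss = loss_fn(model(prev_interaction), next_behavior)
            opt.zero_grad()
            loss.backward()
            opt.step()
```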
When robots interact with humans in homes, roads, or factories the human's behavior often changes in response to the robot. Non-stationary humans are challenging for robot learners: actions the robot has learned to coordinate with the original human may fail after the human adapts to the robot. In this paper we introduce an algorithmic formalism that enables robots (i.e., ego agents) to co-adapt alongside dynamic humans (i.e., other agents) using only the robot's low-level states, actions, and rewards. A core challenge is that humans not only react to the robot's behavior, but the way in which humans react inevitably changes both over time and between users. To deal with this challenge, our insight is that -- instead of building an exact model of the human -- robots can learn and reason over high-level representations of the human's policy and policy dynamics. Applying this insight we develop RILI: Robustly Influencing Latent Intent. RILI first embeds low-level robot observations into predictions of the human's latent strategy and strategy dynamics. Next, RILI harnesses these predictions to select actions that influence the adaptive human towards advantageous, high reward behaviors over repeated interactions. We demonstrate that -- given RILI's measured performance with users sampled from an underlying distribution -- we can probabilistically bound RILI's expected performance across new humans sampled from the same distribution. Our simulated experiments compare RILI to state-of-the-art representation and reinforcement learning baselines, and show that RILI better learns to coordinate with imperfect, noisy, and time-varying agents. Finally, we conduct two user studies where RILI co-adapts alongside actual humans in a game of tag and a tower-building task. See videos of our user studies here: https://youtu.be/WYGO5amDXbQ
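A rough sketch of the RILI structure as described in the abstract: embed the low-level interaction history into a latent strategy, then condition the robot's policy on that latent. The architecture below is an assumption for illustration, not the released RILI code.

```python
# Sketch of the two pieces the abstract names: a strategy encoder and a
# latent-conditioned policy. Layer sizes and choices are assumptions.
import torch
import torch.nn as nn

class StrategyEncoder(nn.Module):
    """Maps low-level (state, action, reward) histories to a latent strategy z."""
    def __init__(self, obs_dim: int, latent_dim: int = 8, hidden: int = 64):
        super().__init__()
        self.gru = nn.GRU(obs_dim, hidden, batch_first=True)
        self.to_latent = nn.Linear(hidden, latent_dim)

    def forward(self, history: torch.Tensor) -> torch.Tensor:
        # history: (batch, timesteps, obs_dim) from previous interactions
        _, h = self.gru(history)
        return self.to_latent(h[-1])

class LatentConditionedPolicy(nn.Module):
    """Selects robot actions given the current state and the predicted latent strategy."""
    def __init__(self, state_dim: int, latent_dim: int, action_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, action_dim), nn.Tanh(),
        )

    def forward(self, state: torch.Tensor, z: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([state, z], dim=-1))
```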
When humans interact with robots, influence is inevitable. Consider an autonomous car driving near a human: the autonomous car's speed and steering will affect how the human drives. Prior works have developed frameworks that enable robots to influence humans toward desired behaviors. But while these approaches are effective in the short term (i.e., the first few human-robot interactions), here we explore long-term influence (i.e., repeated interactions between the same human and robot). Our central insight is that humans are dynamic: people adapt to robots, and behaviors that are influential now may fail once the human learns to anticipate the robot's actions. With this insight, we experimentally demonstrate that a prevalent game-theoretic formalism for generating influential robot behavior becomes less effective over repeated interactions. Next, we propose three modifications to Stackelberg games that make the robot's policy both influential and unpredictable. We finally test these modifications in simulations and a user study: our results suggest that robots which deliberately make their behavior harder to anticipate are better able to maintain their influence over long-term interactions. See videos here: https://youtu.be/ydo83cgjz2q
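One illustrative way to trade influence against unpredictability in a Stackelberg-style leader, shown below: soften the leader's best response with a softmax so it commits less deterministically. This softmax trade-off is our assumption; the paper proposes its own three modifications.

```python
# Illustrative sketch: a Stackelberg leader that trades off influence (payoff under
# the follower's best response) against unpredictability (entropy of its mixed
# strategy) via a softmax temperature. Not the paper's exact formulation.
import numpy as np

def unpredictable_stackelberg_leader(R_leader, R_follower, beta=2.0):
    """R_leader, R_follower: payoff matrices of shape (n_leader, n_follower)."""
    n_leader = R_leader.shape[0]
    values = np.empty(n_leader)
    for a in range(n_leader):
        follower_best = np.argmax(R_follower[a])   # follower best-responds to a
        values[a] = R_leader[a, follower_best]     # leader's payoff under that response
    # Softmax over leader values: large beta recovers the deterministic Stackelberg
    # commitment; smaller beta keeps the leader harder to anticipate.
    logits = beta * values
    probs = np.exp(logits - logits.max())
    return probs / probs.sum()

# Example: a 2x2 game where the leader mixes instead of always committing.
R_leader = np.array([[3.0, 0.0], [2.0, 1.0]])
R_follower = np.array([[1.0, 0.0], [0.0, 1.0]])
print(unpredictable_stackelberg_leader(R_leader, R_follower))
```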
Humans can leverage physical interaction to teach robot arms. This physical interaction depends on the task, the user, and what the robot has learned so far. State-of-the-art approaches focus on learning from a single modality, or combine multiple interaction types by assuming the robot has prior information about the human's intended task. By contrast, in this paper we introduce an algorithmic formalism that learns from demonstrations, corrections, and preferences. Our approach makes no assumptions about the tasks the human wants to teach the robot. Instead, we learn a reward model from scratch by comparing the human's input to nearby alternatives. We first derive a loss function that trains an ensemble of reward models to match the human's demonstrations, corrections, and preferences. The type and order of feedback are up to the human teacher: we enable the robot to collect this feedback passively or actively. We then apply constrained optimization to convert our learned reward into a desired robot trajectory. Through simulations and a user study, we demonstrate that our proposed approach more accurately learns manipulation tasks from physical human interaction than existing baselines, particularly when the robot faces new or unexpected objectives. Videos of our user study are available at: https://youtu.be/fsujstyveku
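A minimal sketch of the unified idea above: treat each demonstration, correction, or preference as evidence that the human's input beats nearby alternatives, and train an ensemble of reward models on that comparison. The specific network, loss form, and perturbation scheme are assumptions.

```python
# Hedged sketch: reward models scored on trajectory features, trained so the
# human's input outranks nearby alternative trajectories (Bradley-Terry style).
import torch
import torch.nn as nn
import torch.nn.functional as F

class RewardModel(nn.Module):
    def __init__(self, traj_feature_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(traj_feature_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, traj_features: torch.Tensor) -> torch.Tensor:
        return self.net(traj_features).squeeze(-1)

def comparison_loss(model, human_traj, alternative_trajs):
    """Human input (demo, correction, or preferred option) should score higher
    than nearby alternatives."""
    r_human = model(human_traj)          # shape: (batch,)
    r_alt = model(alternative_trajs)     # shape: (batch, n_alternatives)
    logits = torch.cat([r_human.unsqueeze(-1), r_alt], dim=-1)
    labels = torch.zeros(logits.shape[0], dtype=torch.long)  # index 0 = human input
    return F.cross_entropy(logits, labels)

# An ensemble is several independently initialized reward models trained on the
# same comparisons; disagreement across the ensemble can drive active queries.
ensemble = [RewardModel(traj_feature_dim=10) for _ in range(5)]
```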
While advances in multi-agent learning have enabled the training of increasingly complex agents, most existing techniques produce a final policy that is not designed to adapt to a new partner's strategy. However, we would like our AI agents to adjust their strategy based on the strategies of those around them. In this work, we study the problem of conditional multi-agent imitation learning, where we have access to joint trajectory demonstrations at training time, and we must interact with and adapt to new partners at test time. This setting is challenging because we must infer a new partner's strategy and adapt our policy to that strategy, all without knowledge of the environment reward or dynamics. We formalize this problem of conditional multi-agent imitation learning and propose a novel approach that addresses the difficulties of scalability and data scarcity. Our key insight is that although the variation across partners in multi-agent games can be large, it can often be represented by a low-rank subspace. Leveraging tools from tensor decomposition, our model learns a low-rank subspace over ego and partner agent strategies, and then infers and adapts to a new partner's strategy by interpolating within the subspace. We experiment with a mix of collaborative tasks, including bandit, particle, and Hanabi environments. Additionally, we test our conditional policies against real human partners in a user study on the Overcooked game. Compared to baselines, our model adapts better to new partners and robustly handles diverse settings ranging from discrete/continuous actions and static/online evaluation to AI/human partners.
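To make the low-rank idea concrete, here is a simple stand-in: fit a low-rank subspace over flattened partner strategies and project a new partner into it. A plain SVD is used in place of the paper's tensor decomposition, so treat this as an analogy rather than the method itself.

```python
# Stand-in sketch: represent partner strategies in a low-rank subspace and map a
# new partner into that subspace by projection / interpolation.
import numpy as np

def fit_strategy_subspace(partner_policies: np.ndarray, rank: int):
    """partner_policies: (n_partners, policy_dim) flattened training strategies."""
    mean = partner_policies.mean(axis=0)
    U, S, Vt = np.linalg.svd(partner_policies - mean, full_matrices=False)
    basis = Vt[:rank]                                # (rank, policy_dim) subspace
    return mean, basis

def embed_new_partner(observed_policy: np.ndarray, mean, basis):
    """Project an observed new partner onto the learned subspace."""
    coords = basis @ (observed_policy - mean)        # low-dimensional coordinates
    return mean + coords @ basis                     # interpolated full strategy

policies = np.random.rand(20, 50)                    # toy data: 20 training partners
mean, basis = fit_strategy_subspace(policies, rank=3)
reconstructed = embed_new_partner(policies[0], mean, basis)
```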
Humans have internal models of robots (like their physical capabilities), the world (like what will happen next), and their tasks (like a preferred goal). However, human internal models are not always perfect: for example, it is easy to underestimate a robot's inertia. Nevertheless, these models change and improve over time as humans gather more experience. Interestingly, robot actions influence what this experience is, and therefore influence how people's internal models change. In this work we take a step towards enabling robots to understand the influence they have, leverage it to better assist people, and help human models more quickly align with reality. Our key idea is to model the human's learning as a nonlinear dynamical system which evolves the human's internal model given new observations. We formulate a novel optimization problem to infer the human's learning dynamics from demonstrations that naturally exhibit human learning. We then formalize how robots can influence human learning by embedding the human's learning dynamics model into the robot planning problem. Although our formulations provide concrete problem statements, they are intractable to solve in full generality. We contribute an approximation that sacrifices the complexity of the human internal models we can represent, but enables robots to learn the nonlinear dynamics of these internal models. We evaluate our inference and planning methods in a suite of simulated environments and an in-person user study, where a 7DOF robotic arm teaches participants to be better teleoperators. While influencing human learning remains an open problem, our results demonstrate that this influence is possible and can be helpful in real human-robot interaction.
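A conceptual sketch of the formulation above: treat the human's internal model theta as a state evolving under a learned nonlinear dynamics f(theta, observation), and score robot actions by how the induced observations move theta toward reality. The greedy planner and toy learner below are illustrative assumptions, not the paper's approximation.

```python
# Hedged sketch: simulate the human's internal-model update and pick robot
# actions that drive the simulated model toward the true parameters.
import numpy as np

def human_learning_step(theta, observation, learned_dynamics):
    """One step of the human's internal-model update, theta' = f(theta, o)."""
    return learned_dynamics(theta, observation)

def plan_to_teach(theta_0, theta_true, candidate_actions, observe, learned_dynamics, horizon=5):
    """Greedy planner: pick the action whose induced observation moves the
    simulated human internal model closest to reality."""
    theta, plan = theta_0, []
    for _ in range(horizon):
        best_a, best_err = None, np.inf
        for a in candidate_actions:
            theta_next = human_learning_step(theta, observe(a), learned_dynamics)
            err = np.linalg.norm(theta_next - theta_true)
            if err < best_err:
                best_a, best_err = a, err
        theta = human_learning_step(theta, observe(best_a), learned_dynamics)
        plan.append(best_a)
    return plan

# Toy usage: the human nudges their belief halfway toward what they observe.
dyn = lambda theta, o: theta + 0.5 * (o - theta)
obs = lambda a: np.array([a, a])
print(plan_to_teach(np.zeros(2), np.array([1.0, 1.0]), [0.0, 0.5, 1.0], obs, dyn))
```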
Human and robot partners increasingly need to work together to perform tasks as a team. Robots designed for such collaboration must reason about how their task-completion strategies interplay with the behavior and skills of their human team members as they coordinate on achieving joint goals. Our goal in this work is to develop a computational framework for robot adaptation to human partners in human-robot team collaborations. We first present an algorithm for autonomously recognizing available task-completion strategies by observing human-human teams performing a collaborative task. By transforming team actions into low dimensional representations using hidden Markov models, we can identify strategies without prior knowledge. Robot policies are learned on each of the identified strategies to construct a Mixture-of-Experts model that adapts to the task strategies of unseen human partners. We evaluate our model on a collaborative cooking task using an Overcooked simulator. Results of an online user study with 125 participants demonstrate that our framework improves the task performance and collaborative fluency of human-agent teams, as compared to state of the art reinforcement learning methods.
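A small sketch of the adaptation component: per-strategy expert policies plus a gating network that infers which identified strategy the current partner is using. The strategy labels are assumed to come from the earlier HMM-based step over observed human-human team actions; the architecture here is illustrative.

```python
# Hedged sketch of a Mixture-of-Experts policy over identified task strategies.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixtureOfExpertsPolicy(nn.Module):
    def __init__(self, obs_dim: int, action_dim: int, n_strategies: int, hidden: int = 64):
        super().__init__()
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU(), nn.Linear(hidden, action_dim))
            for _ in range(n_strategies)
        )
        self.gate = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU(), nn.Linear(hidden, n_strategies))

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        weights = F.softmax(self.gate(obs), dim=-1)                     # belief over partner strategies
        actions = torch.stack([e(obs) for e in self.experts], dim=-2)   # (batch, n_strategies, action_dim)
        return (weights.unsqueeze(-1) * actions).sum(dim=-2)            # strategy-weighted action
```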
State-of-the-art multi-agent reinforcement learning (MARL) methods have offered promising solutions to a variety of complex problems. However, these methods all assume that agents perform synchronized primitive-action executions, so they are not truly scalable to long-horizon real-world multi-agent/robot tasks that inherently require agents/robots to reason asynchronously about high-level action selection at different times. The macro-action decentralized partially observable Markov decision process (MacDec-POMDP) is a general formalization for asynchronous decision-making under uncertainty in fully cooperative multi-agent tasks. In this thesis, we first propose a set of value-based RL approaches for MacDec-POMDPs, where agents are allowed to perform asynchronous learning and decision-making with macro-action-value functions in three paradigms: decentralized learning and control, centralized learning and control, and centralized training for decentralized execution (CTDE). Building on the above work, we then formulate a set of macro-action-based policy gradient algorithms under the three training paradigms, where agents are allowed to directly optimize their parameterized policies in an asynchronous manner. We evaluate our methods in simulation and on real robots. Empirical results demonstrate the superiority of our approaches in large multi-agent problems and validate the effectiveness of our algorithms in learning high-quality and asynchronous solutions with macro-actions.
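To ground the asynchronous, macro-action idea, here is a standard tabular SMDP-style update: a value estimate is updated only when a macro-action terminates, using the cumulative discounted reward accrued over its variable duration. The thesis develops deep and multi-agent versions; this is only the underlying update rule.

```python
# Tabular sketch of a macro-action (option-style) Q-update on termination.
from collections import defaultdict

def macro_q_update(Q, state, macro_action, cumulative_reward, duration,
                   next_state, macro_actions, gamma=0.95, alpha=0.1):
    """Q: dict mapping (state, macro_action) -> value."""
    best_next = max(Q[(next_state, m)] for m in macro_actions)
    target = cumulative_reward + (gamma ** duration) * best_next
    Q[(state, macro_action)] += alpha * (target - Q[(state, macro_action)])

Q = defaultdict(float)
macro_actions = ["go-to-room-A", "go-to-room-B", "wait"]
# Example: a macro-action that ran for 4 primitive steps and collected reward 2.3.
macro_q_update(Q, "hallway", "go-to-room-A", 2.3, 4, "room-A", macro_actions)
```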
While reinforcement learning (RL) has become a more popular approach for robotics, designing sufficiently informative reward functions for complex tasks has proven to be extremely difficult due to their inability to capture human intent and policy exploitation. Preference-based RL algorithms seek to overcome these challenges by directly learning reward functions from human feedback. Unfortunately, prior work either requires an unreasonable number of queries implausible for any human to answer or overly restricts the class of reward functions to guarantee the elicitation of the most informative queries, resulting in models that are insufficiently expressive for realistic robotics tasks. Contrary to most works that focus on query selection to \emph{minimize} the amount of data required for learning reward functions, we take an opposite approach: \emph{expanding} the pool of available data by viewing human-in-the-loop RL through the more flexible lens of multi-task learning. Motivated by the success of meta-learning, we pre-train preference models on prior task data and quickly adapt them for new tasks using only a handful of queries. Empirically, we reduce the amount of online feedback needed to train manipulation policies in Meta-World by 20$\times$, and demonstrate the effectiveness of our method on a real Franka Panda Robot. Moreover, this reduction in query-complexity allows us to train robot policies from actual human users. Videos of our results and code can be found at https://sites.google.com/view/few-shot-preference-rl/home.
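A sketch of the pre-train-then-adapt recipe above: a reward model trained on preference data from prior tasks is fine-tuned on a handful of queries for the new task with a Bradley-Terry style loss. The architecture and fine-tuning details below are illustrative assumptions.

```python
# Hedged sketch: segment-level reward model with a Bradley-Terry preference loss
# and a few-shot adaptation loop for a new task.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PreferenceRewardModel(nn.Module):
    def __init__(self, sa_dim: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(sa_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def segment_return(self, segment: torch.Tensor) -> torch.Tensor:
        # segment: (batch, timesteps, sa_dim); sum predicted rewards over the segment
        return self.net(segment).sum(dim=(-2, -1))

def bradley_terry_loss(model, seg_a, seg_b, prefer_a: torch.Tensor):
    """prefer_a: 1.0 if the human preferred segment A, else 0.0."""
    logits = model.segment_return(seg_a) - model.segment_return(seg_b)
    return F.binary_cross_entropy_with_logits(logits, prefer_a)

def adapt_to_new_task(model, few_shot_queries, steps: int = 50, lr: float = 1e-4):
    """few_shot_queries: a handful of (seg_a, seg_b, prefer_a) labels for the new task."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(steps):
        for seg_a, seg_b, prefer_a in few_shot_queries:
            opt.zero_grad()
            bradley_terry_loss(model, seg_a, seg_b, prefer_a).backward()
            opt.step()
```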
Ad hoc teamwork is the research problem of designing agents that can collaborate with new teammates without prior coordination. This survey makes a two-fold contribution: first, it provides a structured description of the different facets of the ad hoc teamwork problem. Second, it discusses the progress that has been made in the field so far, and identifies the immediate and long-term open problems that need to be addressed in ad hoc teamwork.
Recent work in sim2real has successfully enabled robots to act in physical environments by training in simulation with a diverse ''population'' of environments (i.e. domain randomization). In this work, we focus on enabling generalization in assistive tasks: tasks in which the robot is acting to assist a user (e.g. helping someone with motor impairments with bathing or with scratching an itch). Such tasks are particularly interesting relative to prior sim2real successes because the environment now contains a human who is also acting. This complicates the problem because the diversity of human users (instead of merely physical environment parameters) is more difficult to capture in a population, thus increasing the likelihood of encountering out-of-distribution (OOD) human policies at test time. We advocate that generalization to such OOD policies benefits from (1) learning a good latent representation for human policies that test-time humans can accurately be mapped to, and (2) making that representation adaptable with test-time interaction data, instead of relying on it to perfectly capture the space of human policies based on the simulated population only. We study how to best learn such a representation by evaluating on purposefully constructed OOD test policies. We find that sim2real methods that encode environment (or population) parameters and work well in tasks that robots do in isolation, do not work well in assistance. In assistance, it seems crucial to train the representation based on the history of interaction directly, because that is what the robot will have access to at test time. Further, training these representations to then predict human actions not only gives them better structure, but also enables them to be fine-tuned at test-time, when the robot observes the partner act. https://adaptive-caregiver.github.io.
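Following the recommendation above, here is a small sketch of a representation trained directly on the interaction history with an action-prediction head, so it can be fine-tuned at test time when the robot observes the real partner act. Names and sizes are assumptions.

```python
# Hedged sketch: encode interaction history into a latent description of the
# human's policy, predict the human's next action, and fine-tune online.
import torch
import torch.nn as nn

class HumanPolicyEncoder(nn.Module):
    def __init__(self, interaction_dim: int, latent_dim: int = 16, human_action_dim: int = 4):
        super().__init__()
        self.gru = nn.GRU(interaction_dim, 64, batch_first=True)
        self.to_latent = nn.Linear(64, latent_dim)
        self.action_head = nn.Linear(latent_dim + interaction_dim, human_action_dim)

    def forward(self, history: torch.Tensor):
        _, h = self.gru(history)               # history: (batch, timesteps, interaction_dim)
        z = self.to_latent(h[-1])              # latent description of the human's policy
        pred_action = self.action_head(torch.cat([z, history[:, -1]], dim=-1))
        return z, pred_action

def test_time_finetune(encoder, history, observed_human_action, lr=1e-4):
    """Adapt the representation online from the partner's observed actions."""
    opt = torch.optim.Adam(encoder.parameters(), lr=lr)
    _, pred = encoder(history)
    loss = nn.functional.mse_loss(pred, observed_human_action)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```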
We present the single-track road problem. In this problem, two agents face each other at opposite ends of a road that only one agent can pass through at a time. We focus on the scenario in which one agent is a human and the other is an autonomous agent. We run experiments with human subjects in a simple grid domain that simulates the single-track road problem. We show that when data is limited, building an accurate human model is very challenging, and that a reinforcement learning agent based on this data performs poorly in practice. However, we show that an agent that attempts to maximize a linear combination of the human's utility and its own utility achieves a high score and significantly outperforms other baselines, including an agent that attempts to maximize only its own utility.
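The agent described above maximizes a weighted sum of its own utility and the human's; a minimal sketch of that action-selection rule follows, with the weight and utility functions as placeholders.

```python
# Minimal sketch of selecting the action that maximizes own + weighted human utility.
def select_action(actions, own_utility, human_utility, w_human=0.5):
    return max(actions, key=lambda a: own_utility(a) + w_human * human_utility(a))

# Toy example: yielding ("wait") scores higher once the human's utility counts.
own = {"go": 1.0, "wait": 0.4}
human = {"go": -1.0, "wait": 1.0}
print(select_action(["go", "wait"], own.get, human.get, w_human=0.5))
```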
Meta-reinforcement learning (RL) methods can meta-train policies using orders of magnitude less data than standard RL, but meta-training itself is both costly and time-consuming. If we could meta-train on offline data, we could reuse the same static dataset, labeled once with rewards for different tasks, to meta-train policies that adapt to a variety of new tasks at meta-test time. Although this capability would make meta-RL a practical tool for real-world use, offline meta-RL presents additional challenges beyond online meta-RL or the standard offline RL setting. Meta-RL learns an exploration strategy that collects data for adaptation, and also meta-trains a policy that rapidly adapts to data from a new task. Since this policy is meta-trained on a fixed, offline dataset, it may behave unpredictably when adapting to data collected by the learned exploration strategy, which differs systematically from the offline data and thus induces distribution shift. We propose a hybrid offline meta-RL algorithm that uses offline data with rewards to meta-train an adaptive policy, and then collects additional unsupervised online data, without any reward labels, to bridge this distribution shift. Because it does not require reward labels for online collection, this data can be much cheaper to collect. We compare our method to prior work on offline meta-RL on simulated robot locomotion and manipulation tasks, and find that using additional unsupervised online data collection leads to a substantial improvement in the adaptive capabilities of the meta-trained policies, matching the performance of fully online meta-RL on a range of challenging domains that require generalization to new tasks.
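A skeleton of the two-phase recipe described above: meta-train on the reward-labeled offline data, then keep training on cheap online rollouts collected without reward labels. Every callable below is a placeholder for the corresponding component; this is structural pseudocode, not the authors' algorithm.

```python
# Skeleton sketch of hybrid offline-then-unsupervised-online meta-training.
def hybrid_offline_meta_rl(sample_offline_batch, offline_update, unlabeled_update,
                           collect_online_rollout, offline_steps=10_000, online_rollouts=100):
    """All callables are placeholders: offline_update consumes reward-labeled
    batches; unlabeled_update consumes rollouts collected without reward labels."""
    # Phase 1: meta-train the adaptive policy and exploration strategy offline.
    for _ in range(offline_steps):
        offline_update(sample_offline_batch())        # uses reward labels
    # Phase 2: bridge the distribution shift with cheap, unsupervised online data.
    for _ in range(online_rollouts):
        unlabeled_update(collect_online_rollout())    # no reward labels required
```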
Collaborative carrying is a complex task due to the continuous nature of the action and state spaces, the multimodality of the policies, the presence of obstacles in the environment, and the need to adapt to the other agent on the fly. In this work, we present a method for predicting realistic motion plans for cooperative human-robot teams on a carrying task. Using a Variational Recurrent Neural Network (VRNN) to model the trajectories of human-robot teams over time, we are able to capture the distribution over the team's future states while leveraging information from the interaction history. The key to our approach is our model's ability to leverage human demonstration data and generate trajectories that synergize well with humans at test time. We show that the model generates more human-like motion compared to a baseline, a centralized sampling-based planner, Rapidly-exploring Random Trees (RRT). Furthermore, we evaluate the VRNN planner with human partners and show its ability to produce more human-like paths and achieve higher task success rates than RRT when planning with a human. Finally, we demonstrate that a LoCoBot using the VRNN planner can successfully complete the task with a human controlling another LoCoBot.
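For readers unfamiliar with VRNNs, the sketch below shows one simplified VRNN step of the kind used to model team trajectories: a prior and an encoder over a latent z conditioned on the recurrent state, a decoder that predicts the next waypoint, and a recurrence over [x, z]. The single-layer heads and dimensions are simplifying assumptions.

```python
# Compact, hedged sketch of a single VRNN step for trajectory modeling.
import torch
import torch.nn as nn

class VRNNStep(nn.Module):
    def __init__(self, x_dim: int, z_dim: int = 16, h_dim: int = 64):
        super().__init__()
        self.prior = nn.Linear(h_dim, 2 * z_dim)            # -> (mu, logvar)
        self.encoder = nn.Linear(h_dim + x_dim, 2 * z_dim)  # -> (mu, logvar)
        self.decoder = nn.Linear(h_dim + z_dim, x_dim)      # -> predicted waypoint
        self.rnn = nn.GRUCell(x_dim + z_dim, h_dim)

    def forward(self, x_t: torch.Tensor, h_prev: torch.Tensor):
        mu_q, logvar_q = self.encoder(torch.cat([h_prev, x_t], dim=-1)).chunk(2, dim=-1)
        z_t = mu_q + torch.randn_like(mu_q) * (0.5 * logvar_q).exp()   # reparameterization
        x_hat = self.decoder(torch.cat([h_prev, z_t], dim=-1))
        h_t = self.rnn(torch.cat([x_t, z_t], dim=-1), h_prev)
        mu_p, logvar_p = self.prior(h_prev).chunk(2, dim=-1)
        # Training would combine a reconstruction loss on x_hat with a KL term
        # between the encoder (mu_q, logvar_q) and the prior (mu_p, logvar_p).
        return x_hat, h_t, (mu_q, logvar_q), (mu_p, logvar_p)
```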
With the development of deep representation learning, the domain of reinforcement learning (RL) has become a powerful learning framework now capable of learning complex policies in high dimensional environments. This review summarises deep reinforcement learning (DRL) algorithms and provides a taxonomy of automated driving tasks where (D)RL methods have been employed, while addressing key computational challenges in real world deployment of autonomous driving agents. It also delineates adjacent domains such as behavior cloning, imitation learning, inverse reinforcement learning that are related but are not classical RL algorithms. The role of simulators in training agents, methods to validate, test and robustify existing solutions in RL are discussed.
In this paper, we study the notion of legibility in sequential decision-making tasks under uncertainty. Previous works that extend legibility to scenarios beyond robot motion either focus on deterministic settings or are computationally too expensive. Our proposed approach, dubbed PoL-MDP, is able to handle uncertainty while remaining computationally tractable. We establish the advantages of our approach against state-of-the-art methods in several simulated scenarios of varying complexity. We also showcase the use of our legible policies as demonstrations for an inverse reinforcement learning agent, establishing their superiority over demonstrations from the optimal policy. Finally, we assess the legibility of the computed policies through a user study in which people are asked to infer the goal of a mobile robot by observing its actions.
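As background, the sketch below shows the goal inference that legibility objectives typically build on: an observer assumes the agent is noisily rational with respect to each candidate goal's values and maintains a posterior over goals from the observed actions; a legible action is one that raises the posterior of the true goal. PoL-MDP's exact formulation under uncertainty is more involved than this sketch.

```python
# Illustrative sketch: Boltzmann-rational goal posterior and a greedy legible action choice.
import numpy as np

def goal_posterior(observed, q_values, prior, beta=2.0):
    """observed: list of (state, action); q_values[g][s]: array of action values
    for goal g in state s; prior: array over goals."""
    log_post = np.log(prior).astype(float)
    for g in range(len(prior)):
        for s, a in observed:
            q = beta * q_values[g][s]
            log_post[g] += q[a] - np.log(np.exp(q - q.max()).sum()) - q.max()
    post = np.exp(log_post - log_post.max())
    return post / post.sum()

def most_legible_action(state, true_goal, history, q_values, prior, n_actions):
    """Pick the action that most increases the observer's belief in the true goal."""
    scores = [goal_posterior(history + [(state, a)], q_values, prior)[true_goal]
              for a in range(n_actions)]
    return int(np.argmax(scores))
```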
We propose a novel parameterized skill-learning algorithm that aims to learn transferable parameterized skills and synthesize them into a new action space that supports efficient learning on long-horizon tasks. We first propose novel learning objectives, trajectory-centric diversity and smoothness, that allow an agent to acquire reusable parameterized skills. The agent can then use these learned skills to construct a temporally extended, parameterized-action Markov decision process, for which we propose a hierarchical actor-critic algorithm that aims to efficiently learn a high-level control policy with the learned skills. We empirically demonstrate that the proposed algorithm enables an agent to solve complex long-horizon obstacle-course environments.
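A heavily hedged sketch of what trajectory-centric diversity and smoothness objectives could look like: diversity pushes trajectories produced by different skill parameters apart, while smoothness penalizes abrupt state changes. These exact forms are our assumptions, not the paper's definitions.

```python
# Hedged sketch of possible trajectory-centric diversity and smoothness terms.
import torch

def smoothness_penalty(traj: torch.Tensor) -> torch.Tensor:
    """traj: (timesteps, state_dim). Penalize large step-to-step changes."""
    return (traj[1:] - traj[:-1]).pow(2).sum(dim=-1).mean()

def diversity_bonus(trajs: torch.Tensor) -> torch.Tensor:
    """trajs: (n_skills, timesteps, state_dim). Reward pairwise separation of
    trajectories generated under different skill parameters."""
    endpoints = trajs[:, -1]                       # (n_skills, state_dim)
    dists = torch.cdist(endpoints, endpoints)      # pairwise endpoint distances
    n = dists.shape[0]
    return dists.sum() / (n * (n - 1))

def skill_objective(trajs, w_div=1.0, w_smooth=0.1):
    return w_div * diversity_bonus(trajs) - w_smooth * torch.stack(
        [smoothness_penalty(t) for t in trajs]).mean()
```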
We introduce Language-Informed Latent Actions (LILA), a framework for learning natural language interfaces in the context of human-robot collaboration. LILA falls under the shared autonomy paradigm: in addition to providing discrete language inputs, humans are given a low-dimensional controller, e.g., a 2-degree-of-freedom (DoF) joystick that can move left/right and up/down, for operating the robot. LILA learns to use language to modulate this controller, providing users with a language-informed control space: given an instruction like "place the cereal bowl on the tray," LILA may learn a 2-DoF space where one dimension controls the distance from the robot's end-effector to the bowl, and the other dimension controls the robot's end-effector pose relative to the grasp point on the bowl. We evaluate LILA with a real-world user study, where users provide language instructions while operating a 7-DoF Franka Emika Panda arm to complete a series of complex manipulation tasks. We show that LILA models are not only more sample efficient and performant than imitation learning and end-effector control baselines, but that they are also qualitatively preferred by users.
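A sketch of the interface described above: the human's low-DoF joystick input is decoded into a full robot action, conditioned on an embedding of the language instruction. The decoder architecture, language encoder choice, and dimensions are assumptions, not the released LILA model.

```python
# Hedged sketch of a language-modulated latent-action decoder.
import torch
import torch.nn as nn

class LanguageInformedLatentActions(nn.Module):
    def __init__(self, lang_dim: int, state_dim: int, latent_dof: int = 2, action_dim: int = 7):
        super().__init__()
        self.decoder = nn.Sequential(
            nn.Linear(lang_dim + state_dim + latent_dof, 128), nn.ReLU(),
            nn.Linear(128, action_dim),
        )

    def forward(self, lang_embedding, robot_state, joystick):
        # joystick: the human's 2-DoF input; its meaning depends on the instruction.
        return self.decoder(torch.cat([lang_embedding, robot_state, joystick], dim=-1))

# Usage: the same joystick deflection produces different motions under different
# instructions, because the language embedding reshapes the control space.
model = LanguageInformedLatentActions(lang_dim=512, state_dim=14)
action = model(torch.randn(1, 512), torch.randn(1, 14), torch.tensor([[0.3, -0.1]]))
```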
High-quality traffic flow generation is the core module in building simulators for autonomous driving. However, the majority of available simulators are incapable of replicating traffic patterns that accurately reflect the various features of real-world data while also simulating human-like reactive responses to the tested autopilot driving strategies. Taking one step forward to addressing such a problem, we propose Realistic Interactive TrAffic flow (RITA) as an integrated component of existing driving simulators to provide high-quality traffic flow for the evaluation and optimization of the tested driving strategies. RITA is developed with fidelity, diversity, and controllability in consideration, and consists of two core modules called RITABackend and RITAKit. RITABackend is built to support vehicle-wise control and provide traffic generation models from real-world datasets, while RITAKit is developed with easy-to-use interfaces for controllable traffic generation via RITABackend. We demonstrate RITA's capacity to create diversified and high-fidelity traffic simulations in several highly interactive highway scenarios. The experimental findings demonstrate that our produced RITA traffic flows meet all three design goals, hence enhancing the completeness of driving strategy evaluation. Moreover, we showcase the possibility for further improvement of baseline strategies through online fine-tuning with RITA traffic flows.
Sim-to-real transfer is a powerful paradigm for robotic reinforcement learning. The ability to train policies in simulation enables safe exploration and large-scale data collection quickly and at low cost. However, prior work on sim-to-real transfer of robotic policies typically does not involve any human-robot interaction, because accurately simulating human behavior is an open problem. In this work, our goal is to leverage the power of simulation to train robotic policies that are proficient at interacting with humans upon deployment. But there is a chicken-and-egg problem: how do we gather examples of a human interacting with a physical robot so as to model human behavior in simulation, without already having a robot that is able to interact with a human? Our proposed method, Iterative-Sim-to-Real (i-S2R), attempts to address this. i-S2R bootstraps from a simple model of human behavior and alternates between training in simulation and deploying in the real world. In each iteration, both the human behavior model and the policy are refined. We evaluate our method on a real-world robotic table tennis setting, where the robot's goal is to play cooperatively with a human player for as long as possible. Table tennis is a high-speed, dynamic task that requires the two players to react quickly to each other's moves, making it a challenging test bed for research on human-robot interaction. We present results on an industrial robotic arm that is able to cooperatively play table tennis with human players, achieving rallies of 22 successive hits on average and 150 at best. Further, for 80% of players, rally lengths are 70% to 175% longer compared to the sim-to-real (S2R) baseline. For videos of our system in action, see https://sites.google.com/view/is2r.
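The alternation described above can be summarized by the loop skeleton below: start from a coarse human behavior model, train in simulation against it, deploy with real players, and use the new human data to refine the behavior model before the next round. The callables are placeholders for the corresponding components, not the i-S2R implementation.

```python
# Skeleton sketch of the i-S2R training loop described in the abstract.
def iterative_sim_to_real(initial_human_model, train_in_sim, deploy_with_humans,
                          refine_human_model, n_iterations=5):
    human_model, policy = initial_human_model, None
    for _ in range(n_iterations):
        policy = train_in_sim(policy, human_model)         # RL against the current human model
        human_data = deploy_with_humans(policy)            # real rallies with human players
        human_model = refine_human_model(human_model, human_data)
    return policy, human_model
```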