When robots learn reward functions using high-capacity models that take raw state directly as input, they need to learn both a representation for what matters in the task -- the task "features" -- and how to combine these features into a single objective. If they try to do both at once from input designed to teach the full reward function, it is easy to end up with a representation that contains spurious correlations in the data, which fails to generalize to new settings. Instead, our ultimate goal is to enable robots to identify and isolate the causal features that people actually care about and use when they represent states and behavior. Our idea is that we can tune into this representation by asking users what behaviors they consider similar: behaviors will be similar if the features that matter are similar, even if low-level behavior is different; conversely, behaviors will be different if even one of the features that matter differs. This, in turn, is what enables the robot to disambiguate between what needs to go into the representation versus what is spurious, as well as what aspects of behavior can be compressed together versus not. The notion of learning representations based on similarity has a nice parallel in contrastive learning, a self-supervised representation learning technique that maps visually similar data points to similar embeddings, where similarity is defined by a designer through data-augmentation heuristics. By contrast, in order to learn the representations that people use, so we can learn their preferences and objectives, we use their definition of similarity. In simulation as well as in a user study, we show that learning through such similarity queries leads to representations that, while far from perfect, are indeed more generalizable than self-supervised and task-input alternatives.
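As a rough illustration of how such similarity queries could drive representation learning, the sketch below (in PyTorch, with a hypothetical trajectory encoder and toy data) treats each query as a triplet: the pair the user calls similar is pulled together in feature space, and the pair with a differing feature is pushed apart. The architecture and dimensions are assumptions for illustration, not the paper's implementation.

```python
import torch
import torch.nn as nn

# Hypothetical encoder: maps a raw trajectory summary to a low-dim feature vector.
encoder = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 8))

def similarity_query_loss(anchor, similar, dissimilar, margin=1.0):
    """Triplet-style loss driven by a human similarity query.

    anchor, similar, dissimilar: (batch, 20) trajectory summaries where the
    user judged `similar` to match `anchor` on the features that matter,
    and `dissimilar` to differ on at least one of them.
    """
    za, zs, zd = encoder(anchor), encoder(similar), encoder(dissimilar)
    d_pos = (za - zs).pow(2).sum(dim=1)
    d_neg = (za - zd).pow(2).sum(dim=1)
    return torch.clamp(d_pos - d_neg + margin, min=0).mean()

# Toy usage with random stand-in data.
a, s, d = (torch.randn(16, 20) for _ in range(3))
loss = similarity_query_loss(a, s, d)
loss.backward()
```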
While reinforcement learning (RL) has become a more popular approach for robotics, designing sufficiently informative reward functions for complex tasks has proven to be extremely difficult due to their inability to capture human intent and policy exploitation. Preference-based RL algorithms seek to overcome these challenges by directly learning reward functions from human feedback. Unfortunately, prior work either requires an unreasonable number of queries implausible for any human to answer or overly restricts the class of reward functions to guarantee the elicitation of the most informative queries, resulting in models that are insufficiently expressive for realistic robotics tasks. Contrary to most works that focus on query selection to \emph{minimize} the amount of data required for learning reward functions, we take an opposite approach: \emph{expanding} the pool of available data by viewing human-in-the-loop RL through the more flexible lens of multi-task learning. Motivated by the success of meta-learning, we pre-train preference models on prior task data and quickly adapt them for new tasks using only a handful of queries. Empirically, we reduce the amount of online feedback needed to train manipulation policies in Meta-World by 20$\times$, and demonstrate the effectiveness of our method on a real Franka Panda robot. Moreover, this reduction in query complexity allows us to train robot policies from actual human users. Videos of our results and code can be found at https://sites.google.com/view/few-shot-preference-rl/home.
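To make the preference-learning step concrete, here is a minimal sketch of the Bradley-Terry-style comparison loss commonly used in preference-based RL, with a hypothetical reward network; pre-training on prior-task queries and few-shot adaptation to a new task would both reuse this same update. Network sizes and data shapes are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

reward_net = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Linear(64, 1))

def preference_loss(seg_a, seg_b, prefs):
    """Bradley-Terry loss over segment pairs.

    seg_a, seg_b: (batch, steps, 10) state-action segments.
    prefs: (batch,) 1.0 if the human preferred segment A, else 0.0.
    """
    r_a = reward_net(seg_a).sum(dim=1).squeeze(-1)  # summed reward per segment
    r_b = reward_net(seg_b).sum(dim=1).squeeze(-1)
    logits = r_a - r_b
    return nn.functional.binary_cross_entropy_with_logits(logits, prefs)

opt = torch.optim.Adam(reward_net.parameters(), lr=1e-3)
# Pre-training on a large pool of prior-task queries, then adaptation on a
# handful of new-task queries, both reuse the same update:
for seg_a, seg_b, prefs in [(torch.randn(8, 25, 10), torch.randn(8, 25, 10),
                             torch.randint(0, 2, (8,)).float())]:
    opt.zero_grad()
    preference_loss(seg_a, seg_b, prefs).backward()
    opt.step()
```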
Humans can leverage physical interaction to teach robot arms. This physical interaction takes different forms depending on the task, the user, and what the robot has learned so far. State-of-the-art approaches focus on learning from a single modality, or combine multiple interaction types by assuming the robot has prior information about the human's intended task. In contrast, in this paper we introduce an algorithmic formalism that learns from demonstrations, corrections, and preferences. Our approach makes no assumptions about the task the human wants to teach the robot; instead, we learn a reward model from scratch by comparing the human's input to nearby alternatives. We first derive a loss function that trains an ensemble of reward models to match the human's demonstrations, corrections, and preferences. The type and order of feedback are up to the human teacher: we enable the robot to collect this feedback passively or actively. We then apply constrained optimization to convert our learned reward into a desired robot trajectory. Through simulations and a user study, we demonstrate that our proposed approach learns manipulation tasks from physical human interaction more accurately than existing baselines, particularly when the robot faces new or unexpected objectives. Videos of our user study are available at: https://youtu.be/fsujstyveku
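A hedged sketch of the "nearby alternatives" idea from this abstract: a single piece of human input is scored above randomly perturbed alternatives by a reward network. The paper derives a joint loss over demonstrations, corrections, and preferences and trains an ensemble; the network, noise model, and shapes below are illustrative assumptions only.

```python
import torch
import torch.nn as nn

reward_net = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Linear(64, 1))

def nearby_alternative_loss(human_input, n_alternatives=16, noise=0.1):
    """Score the human's input (a demonstrated or corrected trajectory summary)
    above randomly perturbed nearby alternatives with a softmax-style loss.
    Training an ensemble would repeat this for several independently
    initialized reward networks."""
    alts = human_input.unsqueeze(0) + noise * torch.randn(n_alternatives,
                                                          human_input.shape[-1])
    candidates = torch.cat([human_input.unsqueeze(0), alts], dim=0)
    scores = reward_net(candidates).squeeze(-1)
    # The human's input sits at index 0 and should receive the highest score.
    return nn.functional.cross_entropy(scores.unsqueeze(0), torch.tensor([0]))

loss = nearby_alternative_loss(torch.randn(10))
loss.backward()
```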
One of the most successful paradigms for reward learning uses human feedback in the form of comparisons. Although these methods hold promise, human comparison labeling is expensive and time-consuming, constituting a major bottleneck to their broader applicability. Our insight is that we can greatly improve how effectively human time is used in these approaches by batching comparisons together, rather than having the human label each comparison individually. To do so, we leverage data dimensionality-reduction and visualization techniques to provide the human with an interactive GUI displaying the state space, in which the user can label subportions of the state space. Across some simple MuJoCo tasks, we show that this high-level approach holds promise and is able to greatly increase the performance of the resulting agents, given the same amount of human labeling time.
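A rough sketch of the batching idea, assuming PCA as the dimensionality reduction (the actual visualization technique in the work may differ): states are projected to 2D for display, and a single region selection by the user produces labels for every state inside it.

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical pool of visited states (e.g., MuJoCo observations).
states = np.random.randn(5000, 17)

# Project to 2D so a GUI can display the state space.
coords = PCA(n_components=2).fit_transform(states)

def label_region(coords, center, radius, value):
    """Return a (state_index, label) batch for every state inside the region
    the user selected in the GUI -- one selection labels many states at once."""
    mask = np.linalg.norm(coords - np.asarray(center), axis=1) < radius
    return [(i, value) for i in np.flatnonzero(mask)]

batch_labels = label_region(coords, center=(0.0, 0.0), radius=1.0, value=+1)
```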
This paper addresses the problem of inverse reinforcement learning (IRL) -- inferring an agent's reward function from observations of its behavior. IRL can provide a generalizable and compact representation for apprenticeship learning, and enable accurately inferring a person's preferences in order to assist them. However, effective IRL is challenging because many reward functions can be compatible with the observed behavior. We focus on how prior reinforcement learning (RL) experience can be leveraged to make learning these preferences faster and more efficient. We propose the IRL algorithm BASIS (Behavior Acquisition through Successor-feature Intention inference from Samples), which leverages multi-task RL pre-training and successor features so that an agent can build a strong basis of intentions spanning the space of possible goals in a given domain. When exposed to just a few expert demonstrations optimizing a novel goal, the agent uses its basis to quickly and effectively infer the reward function. Our experiments show that our method is highly effective at inferring and optimizing demonstrated reward functions, accurately recovering reward functions from fewer than 100 trajectories.
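To make the successor-feature machinery concrete, here is a hedged numpy sketch: assuming pre-trained successor features psi(s, a), a reward weight vector w is inferred from a few demonstrations by maximizing a Boltzmann-rational likelihood over actions with Q(s, a) = w . psi(s, a). The exact inference procedure in BASIS may differ; the shapes and the gradient update here are illustrative assumptions.

```python
import numpy as np

# Hypothetical pre-trained successor features: psi[s, a] is the expected
# discounted sum of feature vectors phi when taking a in s and following
# the pre-trained policy afterwards.  Shapes are illustrative.
n_states, n_actions, n_features = 50, 4, 6
rng = np.random.default_rng(0)
psi = rng.normal(size=(n_states, n_actions, n_features))

# Demonstrations as (state, action) pairs for a novel goal.
demos = [(rng.integers(n_states), rng.integers(n_actions)) for _ in range(100)]

# Infer reward weights w so demonstrated actions look Boltzmann-rational under
# Q(s, a) = w . psi[s, a], by gradient ascent on the demo log-likelihood.
w = np.zeros(n_features)
for _ in range(200):
    grad = np.zeros_like(w)
    for s, a in demos:
        q = psi[s] @ w                      # (n_actions,)
        p = np.exp(q - q.max()); p /= p.sum()
        grad += psi[s, a] - p @ psi[s]      # observed minus expected features
    w += 0.05 * grad / len(demos)

reward = psi @ w   # dense reward estimate for every (state, action) pair
```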
Robots need to be able to learn concepts from their users in order to adapt their capabilities to each user's unique task. But when the robot operates on high-dimensional inputs, like images or point clouds, this is impractical: the robot would need an unrealistic amount of human effort to learn a new concept. To address this challenge, we propose a new approach in which the robot learns a low-dimensional variant of the concept and uses it to generate a larger dataset for learning the concept in the high-dimensional space. This lets it take advantage of semantically meaningful privileged information that is only accessible at training time, like object poses and bounding boxes, which allows for richer human interaction to speed up learning. We evaluate our approach by learning prepositional concepts that describe object state or multi-object relationships, like above, near, or aligned, which are key to user specification of task goals and execution constraints for the robot. Using a simulated human, we show that our approach improves sample complexity compared to learning concepts directly in the high-dimensional space. We also demonstrate the utility of the learned concepts in motion-planning tasks on a 7-DoF Franka Panda robot.
For an autonomous agent to fulfill a wide range of user-specified goals at test time, it must be able to learn broadly applicable and general-purpose skill repertoires. Furthermore, to provide the requisite level of generality, these skills must handle raw sensory input such as images. In this paper, we propose an algorithm that acquires such general-purpose skills by combining unsupervised representation learning and reinforcement learning of goal-conditioned policies. Since the particular goals that might be required at test-time are not known in advance, the agent performs a self-supervised "practice" phase where it imagines goals and attempts to achieve them. We learn a visual representation with three distinct purposes: sampling goals for self-supervised practice, providing a structured transformation of raw sensory inputs, and computing a reward signal for goal reaching. We also propose a retroactive goal relabeling scheme to further improve the sample-efficiency of our method. Our off-policy algorithm is efficient enough to learn policies that operate on raw image observations and goals for a real-world robotic system, and substantially outperforms prior techniques.
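A minimal sketch of the reward computation described in this abstract, with a placeholder in place of the trained VAE encoder: imagined goals are sampled from the latent prior, and the goal-reaching reward is the negative distance in latent space. Everything below is a stand-in for illustration, not the paper's implementation.

```python
import numpy as np

latent_dim = 8

def encode(image):
    """Stand-in for a trained VAE encoder's mean; returns a latent vector."""
    return np.tanh(image.reshape(-1)[:latent_dim])  # placeholder, not a real VAE

def sample_imagined_goal():
    """Self-supervised practice: sample a goal latent from the VAE prior N(0, I)."""
    return np.random.randn(latent_dim)

def latent_goal_reward(obs_image, goal_latent):
    """Reward for goal reaching: negative distance in the learned latent space."""
    return -np.linalg.norm(encode(obs_image) - goal_latent)

goal = sample_imagined_goal()
r = latent_goal_reward(np.random.rand(48, 48), goal)
```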
We investigate the visual cross-embodiment imitation setting, in which agents learn policies from videos of other agents (such as humans) demonstrating the same task, but with stark differences in their embodiments -- shape, actions, end-effector dynamics, and so on. In this work, we show that it is possible to automatically discover and learn vision-based reward functions from cross-embodiment demonstration videos that are robust to these differences. Specifically, we present a self-supervised method for Cross-embodiment Inverse Reinforcement Learning (XIRL) that leverages temporal cycle-consistency constraints to learn deep visual embeddings that capture task progression from offline videos of demonstrations across multiple expert agents, each performing the same task differently because of embodiment differences. Prior to our work, producing rewards from self-supervised embeddings typically required alignment with a reference trajectory, which can be difficult to acquire under stark embodiment differences. We show empirically that if the embeddings are aware of task progress, simply taking the negative distance between the current state and the goal state in the learned embedding space is useful as a reward for training policies with reinforcement learning. We find that our learned reward function not only works for embodiments seen during training, but also generalizes to entirely new embodiments. Additionally, when transferring real-world human demonstrations to a simulated robot, we find that XIRL is more sample-efficient than current best methods. Qualitative results, code, and datasets are available at https://x-irl.github.io
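The reward described in this abstract is simple enough to sketch directly, with a placeholder in place of the learned temporal-cycle-consistent embedding: embed the current frame and take the negative distance to a goal embedding, here assumed (for illustration) to be the mean embedding of the demonstrations' final frames.

```python
import numpy as np

def embed(frame):
    """Stand-in for the learned temporal-cycle-consistent visual embedding."""
    return frame.reshape(-1)[:32]  # placeholder, not the trained network

# Goal embedding: average embedding of the final frames of the expert videos.
demo_final_frames = [np.random.rand(64, 64) for _ in range(10)]
goal_embedding = np.mean([embed(f) for f in demo_final_frames], axis=0)

def xirl_style_reward(frame):
    """Reward = negative distance to the goal in embedding space."""
    return -np.linalg.norm(embed(frame) - goal_embedding)

print(xirl_style_reward(np.random.rand(64, 64)))
```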
When inferring reward functions from human behavior (be it demonstrations, comparisons, physical corrections, or e-stops), it has proven useful to model the human as making noisy-rational choices, with a "rationality coefficient" capturing how much noise or entropy we expect to see in the human's behavior. Many existing works choose to fix this coefficient regardless of the type or quality of the human feedback. However, in some settings, giving a demonstration may be much harder than answering a comparison query. In that case, we should expect to see more noise or suboptimality in demonstrations than in comparisons, and should interpret the feedback accordingly. In this work, we advocate that grounding the rationality coefficient in real data for each feedback type, rather than assuming a default value, has a significant positive effect on reward learning. We test this in experiments with simulated feedback as well as in a user study. We find that overestimating human rationality can have dire effects on reward accuracy and regret when learning from a single feedback type. Furthermore, we find that the rationality level affects the informativeness of each feedback type: surprisingly, demonstrations are not always the most informative -- when the human acts very suboptimally, comparisons actually become more informative, even at the same rationality level. Moreover, when the robot gets to decide which feedback type to ask for, it gains a substantial advantage by accurately modeling the rationality level of each type. Ultimately, our results emphasize the importance of paying attention to the assumed rationality level, not only when learning from a single feedback type, but especially when agents learn from multiple feedback types.
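A hedged sketch of the noisy-rational (Boltzmann) choice model and of grounding the rationality coefficient in data: given observed choices among options with known reward, beta is fit by maximum likelihood over a grid, separately for each feedback type. The discrete-choice setup and the grid search below are simplifying assumptions for illustration.

```python
import numpy as np

def boltzmann_loglik(beta, chosen_rewards, option_rewards):
    """Log-likelihood of observed choices under P(choice) ∝ exp(beta * R).

    chosen_rewards: (n,) reward of the option the human picked each time.
    option_rewards: (n, k) rewards of all k options available each time.
    """
    logits = beta * option_rewards
    m = logits.max(axis=1, keepdims=True)
    log_z = (m + np.log(np.exp(logits - m).sum(axis=1, keepdims=True))).squeeze(1)
    return np.sum(beta * chosen_rewards - log_z)

def fit_beta(chosen_rewards, option_rewards, grid=np.linspace(0.01, 20, 500)):
    """Ground the rationality coefficient in data: pick beta maximizing likelihood.
    Fit separately for each feedback type (demonstrations, comparisons, ...)."""
    lls = [boltzmann_loglik(b, chosen_rewards, option_rewards) for b in grid]
    return grid[int(np.argmax(lls))]

# Toy data: 200 comparisons over k=2 options, choices simulated with beta = 3.
true_beta, rng = 3.0, np.random.default_rng(0)
opts = rng.normal(size=(200, 2))
probs = np.exp(true_beta * opts)
probs /= probs.sum(axis=1, keepdims=True)
picks = (rng.random(200) < probs[:, 1]).astype(int)
chosen = opts[np.arange(200), picks]
beta_hat = fit_beta(chosen, opts)
```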
Recent work in sim2real has successfully enabled robots to act in physical environments by training in simulation with a diverse ''population'' of environments (i.e. domain randomization). In this work, we focus on enabling generalization in assistive tasks: tasks in which the robot is acting to assist a user (e.g. helping someone with motor impairments with bathing or with scratching an itch). Such tasks are particularly interesting relative to prior sim2real successes because the environment now contains a human who is also acting. This complicates the problem because the diversity of human users (instead of merely physical environment parameters) is more difficult to capture in a population, thus increasing the likelihood of encountering out-of-distribution (OOD) human policies at test time. We advocate that generalization to such OOD policies benefits from (1) learning a good latent representation for human policies that test-time humans can accurately be mapped to, and (2) making that representation adaptable with test-time interaction data, instead of relying on it to perfectly capture the space of human policies based on the simulated population only. We study how to best learn such a representation by evaluating on purposefully constructed OOD test policies. We find that sim2real methods that encode environment (or population) parameters and work well in tasks that robots do in isolation, do not work well in assistance. In assistance, it seems crucial to train the representation based on the history of interaction directly, because that is what the robot will have access to at test time. Further, training these representations to then predict human actions not only gives them better structure, but also enables them to be fine-tuned at test-time, when the robot observes the partner act. https://adaptive-caregiver.github.io.
Humans have internal models of robots (like their physical capabilities), the world (like what will happen next), and their tasks (like a preferred goal). However, human internal models are not always perfect: for example, it is easy to underestimate a robot's inertia. Nevertheless, these models change and improve over time as humans gather more experience. Interestingly, robot actions influence what this experience is, and therefore influence how people's internal models change. In this work we take a step towards enabling robots to understand the influence they have, leverage it to better assist people, and help human models more quickly align with reality. Our key idea is to model the human's learning as a nonlinear dynamical system which evolves the human's internal model given new observations. We formulate a novel optimization problem to infer the human's learning dynamics from demonstrations that naturally exhibit human learning. We then formalize how robots can influence human learning by embedding the human's learning dynamics model into the robot planning problem. Although our formulations provide concrete problem statements, they are intractable to solve in full generality. We contribute an approximation that sacrifices the complexity of the human internal models we can represent, but enables robots to learn the nonlinear dynamics of these internal models. We evaluate our inference and planning methods in a suite of simulated environments and an in-person user study, where a 7DOF robotic arm teaches participants to be better teleoperators. While influencing human learning remains an open problem, our results demonstrate that this influence is possible and can be helpful in real human-robot interaction.
Although deep reinforcement learning (RL) has seen many recent successes, its methods remain sample-inefficient, which makes many problems prohibitively expensive to solve in terms of data. We aim to address this by leveraging the rich supervisory signal in unlabeled data to learn state representations. This paper introduces three different representation-learning algorithms that have access to different subsets of the data sources traditional RL algorithms use: (i) GRICA is inspired by independent component analysis (ICA) and trains a deep neural network to output statistically independent features of the input. GRICA does this by minimizing the mutual information between each feature and the other features, and it only requires an unordered collection of environment states. (ii) Latent Representation Prediction (LARP) requires more context: in addition to a state as input, it also needs the previous state and the action connecting them. This method learns a state representation by predicting the next state of the environment given the current state and action; the predictor is then used together with a graph-search algorithm. (iii) The third algorithm learns a state representation by training a deep neural network to learn a smoothed version of the reward function. The representation is used to pre-process inputs to deep RL, while the reward predictor is used for reward shaping. This method only requires state-reward pairs from the environment to learn the representation. We find that each method has its strengths and weaknesses, and conclude from our experiments that including unsupervised representation learning in RL problem-solving pipelines can speed up learning.
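A sketch of a LARP-style objective in PyTorch, under the assumption that the prediction target is the next state's embedding (predicting the raw next state is an equally plausible reading of the abstract): a representation is learned by predicting where the environment goes given the current state and action. Dimensions and architectures are illustrative.

```python
import torch
import torch.nn as nn

state_dim, action_dim, repr_dim = 12, 3, 8

encoder = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(), nn.Linear(64, repr_dim))
predictor = nn.Sequential(nn.Linear(repr_dim + action_dim, 64), nn.ReLU(),
                          nn.Linear(64, repr_dim))

def larp_style_loss(state, action, next_state):
    """Learn a representation by predicting the next state's embedding
    from the current state's embedding and the action."""
    z, z_next = encoder(state), encoder(next_state)
    z_pred = predictor(torch.cat([z, action], dim=-1))
    return (z_pred - z_next.detach()).pow(2).mean()

loss = larp_style_loss(torch.randn(32, state_dim), torch.randn(32, action_dim),
                       torch.randn(32, state_dim))
loss.backward()
```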
Inferring reward functions from demonstrations and pairwise preferences is a promising approach for aligning reinforcement learning (RL) agents with human intent. However, state-of-the-art methods typically focus on learning a single reward model, making it difficult to trade off different reward functions from multiple experts. We propose Multi-Objective Reinforced Active Learning (MORAL), a novel method for combining diverse demonstrations of social norms into a Pareto-optimal policy. By maintaining a distribution over scalarization weights, our approach is able to interactively tune a deep RL agent towards a variety of preferences, while eliminating the need to compute multiple policies. We empirically demonstrate the effectiveness of MORAL in two scenarios that simulate delivery and emergency tasks requiring an agent to act in the presence of normative conflicts. Overall, we consider our research a step towards multi-objective RL with learned rewards, bridging the gap between the current reward-learning and machine-ethics literature.
Building a repertoire of diverse manipulation skills in a scalable fashion remains an unsolved challenge in robotics. One way to address this challenge is with unstructured human play, where humans operate freely in an environment to reach unspecified goals. Play is a simple and cheap method for collecting diverse user demonstrations with broad state and goal coverage of an environment. Because of this diverse coverage, existing approaches for learning from play are more robust to online policy deviations from the offline data distribution. However, these methods often struggle to learn under scene variation and on challenging manipulation primitives, in part because of the difficulty of associating complex behaviors with the scene changes they induce. Our insight is that an object-centric view of play data can help link human behaviors and the resulting changes in the environment, thereby improving multi-task policy learning. In this work, we build a latent space to model object \textit{affordances} -- properties of an object that define its uses in an environment -- and then learn a policy to achieve the desired affordances. By modeling and predicting the desired affordance across variable-horizon tasks, our method, Predicting Latent Affordances Through Object-Centric Play (PLATO), outperforms existing methods on complex manipulation tasks in both 2D and 3D object-manipulation simulations and real-world environments with diverse types of interactions. Videos can be found on our website: https://tinyurl.com/4U23HWFV
Learning from demonstration (LfD) methods democratize access to robotics by enabling end-users to teach robots new tasks by demonstrating the desired behavior. However, current LfD frameworks cannot quickly adapt to heterogeneous human demonstrations, nor can they be deployed at scale in ubiquitous robotics applications. In this paper, we propose a novel LfD framework, Fast Lifelong Adaptive Inverse Reinforcement learning (FLAIR). Our approach (1) leverages strategies to construct policy mixtures for fast adaptation to new demonstrations, allowing for quick end-user personalization; (2) distills common knowledge across demonstrations, achieving accurate task inference; and (3) expands its model only when needed over lifelong deployment, maintaining a concise set of prototypical strategies that can approximate all behaviors via policy mixtures. We empirically validate that FLAIR achieves adaptability (i.e., the robot adapts to heterogeneous, user-specific task preferences), efficiency (i.e., the robot achieves sample-efficient adaptation), and scalability (i.e., the model grows with the number of demonstrations while maintaining high performance). FLAIR surpasses benchmarks across three continuous-control tasks with an average 57% improvement in policy returns, and requires 78% fewer episodes to model demonstrations with policy mixtures. Finally, we demonstrate the success of FLAIR on a real-robot table tennis task.
Meta-reinforcement learning (meta-RL) methods can meta-train policies using orders of magnitude less data than standard RL, but meta-training itself is costly and time-consuming. If we could meta-train on offline data, we could reuse the same static dataset, labeled once with rewards for different tasks, to meta-train policies that adapt to a variety of new tasks at meta-test time. Although this capability would make meta-RL a practical tool for real-world use, offline meta-RL presents additional challenges beyond online meta-RL or standard offline RL settings. Meta-RL learns an exploration strategy that collects data for adaptation, and also meta-trains a policy that quickly adapts to data from a new task. Since this policy is meta-trained on a fixed offline dataset, it may behave unpredictably when adapting to data collected by the learned exploration strategy, which differs systematically from the offline data and thereby induces distribution shift. We propose a hybrid offline meta-RL algorithm that uses offline data with rewards to meta-train an adaptive policy, and then collects additional unsupervised online data, without any reward labels, to bridge this distribution shift. By not requiring reward labels for online collection, this data can be much cheaper to collect. We compare our method against prior work on offline meta-RL on simulated robot locomotion and manipulation tasks, and find that using additional unsupervised online data collection markedly improves the adaptive capabilities of the meta-trained policies, matching the performance of fully online meta-RL on a range of challenging domains that require generalization to new tasks.
When robots interact with humans in homes, roads, or factories the human's behavior often changes in response to the robot. Non-stationary humans are challenging for robot learners: actions the robot has learned to coordinate with the original human may fail after the human adapts to the robot. In this paper we introduce an algorithmic formalism that enables robots (i.e., ego agents) to co-adapt alongside dynamic humans (i.e., other agents) using only the robot's low-level states, actions, and rewards. A core challenge is that humans not only react to the robot's behavior, but the way in which humans react inevitably changes both over time and between users. To deal with this challenge, our insight is that -- instead of building an exact model of the human -- robots can learn and reason over high-level representations of the human's policy and policy dynamics. Applying this insight we develop RILI: Robustly Influencing Latent Intent. RILI first embeds low-level robot observations into predictions of the human's latent strategy and strategy dynamics. Next, RILI harnesses these predictions to select actions that influence the adaptive human towards advantageous, high reward behaviors over repeated interactions. We demonstrate that -- given RILI's measured performance with users sampled from an underlying distribution -- we can probabilistically bound RILI's expected performance across new humans sampled from the same distribution. Our simulated experiments compare RILI to state-of-the-art representation and reinforcement learning baselines, and show that RILI better learns to coordinate with imperfect, noisy, and time-varying agents. Finally, we conduct two user studies where RILI co-adapts alongside actual humans in a game of tag and a tower-building task. See videos of our user studies here: https://youtu.be/WYGO5amDXbQ
Large-scale data is an essential component of machine learning as demonstrated in recent advances in natural language processing and computer vision research. However, collecting large-scale robotic data is much more expensive and slower as each operator can control only a single robot at a time. To make this costly data collection process efficient and scalable, we propose Policy Assisted TeleOperation (PATO), a system which automates part of the demonstration collection process using a learned assistive policy. PATO autonomously executes repetitive behaviors in data collection and asks for human input only when it is uncertain about which subtask or behavior to execute. We conduct teleoperation user studies both with a real robot and a simulated robot fleet and demonstrate that our assisted teleoperation system reduces human operators' mental load while improving data collection efficiency. Further, it enables a single operator to control multiple robots in parallel, which is a first step towards scalable robotic data collection. For code and video results, see https://clvrai.com/pato
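A toy sketch of the "ask only when uncertain" gate, using ensemble disagreement as the uncertainty measure; the real system's uncertainty estimate and subtask structure are more involved, so the names, threshold, and shapes below should be read as assumptions.

```python
import numpy as np

class AssistedTeleop:
    """Toy uncertainty gate: act autonomously when an ensemble of assistive
    policies agrees, and request human input when it disagrees."""

    def __init__(self, policies, threshold=0.5):
        self.policies = policies          # list of callables: obs -> action
        self.threshold = threshold

    def step(self, obs, ask_human):
        actions = np.stack([p(obs) for p in self.policies])
        disagreement = actions.std(axis=0).mean()
        if disagreement > self.threshold:
            return ask_human(obs)         # uncertain: hand control to the operator
        return actions.mean(axis=0)       # confident: act autonomously

# Toy usage with random linear policies and a stand-in human operator.
ensemble = [lambda o, w=np.random.randn(4, 7): w @ o for _ in range(5)]
teleop = AssistedTeleop(ensemble)
action = teleop.step(np.random.randn(7), ask_human=lambda o: np.zeros(4))
```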
We study the problem of learning a range of vision-based manipulation tasks from a large offline dataset of robot interaction. To do so, humans need easy and effective ways to specify tasks to the robot. Goal images are one popular form of task specification, as they are already grounded in the robot's observation space. However, goal images also have a number of drawbacks: they are inconvenient for humans to provide, they can over-specify the desired behavior and thereby yield a sparse reward signal, or they under-specify task information for tasks that are not goal-reaching. Natural language provides a convenient and flexible alternative for task specification, but comes with the challenge of grounding language in the robot's observation space. To learn this grounding scalably, we propose to leverage offline robot datasets (including highly sub-optimal, autonomously collected data) with crowd-sourced natural-language labels. With this data, we learn a simple classifier that predicts whether a change in state completes a language instruction. This provides a language-conditioned reward function that can then be used for offline multi-task RL. In our experiments, we find that on language-conditioned manipulation tasks, our approach outperforms both goal-image specification and language-conditioned imitation techniques by more than 25%, and is able to perform visuomotor tasks from natural-language commands, such as "open the right drawer" and "move the stapler", on a Franka Emika Panda robot.
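The reward construction in this abstract maps cleanly to code: a classifier scores whether the change from a start state to an end state completes a language instruction, and its probability serves as the language-conditioned reward. The feature encoders are assumed to exist upstream; the dimensions and architecture below are placeholders, not the paper's model.

```python
import torch
import torch.nn as nn

img_feat_dim, lang_feat_dim = 128, 64

classifier = nn.Sequential(
    nn.Linear(2 * img_feat_dim + lang_feat_dim, 256), nn.ReLU(),
    nn.Linear(256, 1))

def completion_logit(start_feat, end_feat, instruction_feat):
    """Predict whether the change from start to end completes the instruction."""
    x = torch.cat([start_feat, end_feat, instruction_feat], dim=-1)
    return classifier(x).squeeze(-1)

def language_conditioned_reward(start_feat, end_feat, instruction_feat):
    """Use the classifier's probability as a reward for offline multi-task RL."""
    return torch.sigmoid(completion_logit(start_feat, end_feat, instruction_feat))

r = language_conditioned_reward(torch.randn(4, img_feat_dim),
                                torch.randn(4, img_feat_dim),
                                torch.randn(4, lang_feat_dim))
```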
Our goal is to enable robots to perform functional tasks in emotive ways, whether in response to the user's emotional state or to express the robot's confidence level. Prior work has proposed learning an independent cost function for each target emotion from user feedback, so that the robot can optimize it alongside task- and environment-specific objectives in whatever situation it encounters. However, this approach is inefficient when modeling multiple emotions and cannot generalize to new ones. In this work, we leverage the fact that emotions are not independent of one another: they are related through a Valence-Arousal-Dominance (VAD) latent space. Our key idea is to learn a model that maps trajectories to VAD using user labels. Given the distance between a trajectory's mapping and a target VAD, this single model can represent the cost function for every emotion. As a result, 1) all user feedback contributes to learning about every emotion; 2) the robot can generate trajectories for any emotion in the space, not just a handful of predefined ones; and 3) the robot can respond emotively to user-generated natural language by mapping it to a target VAD. We introduce a method for interactively learning to map trajectories to this latent space, and test it in simulation and in a user study. In our experiments, we use a simple vacuum robot as well as the Cassie biped.
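A minimal sketch of the single-cost-function idea, with a placeholder in place of the learned trajectory-to-VAD model: every emotion's cost is the distance between a trajectory's VAD mapping and that emotion's target VAD point. The target coordinates below are illustrative assumptions, not calibrated values.

```python
import numpy as np

def trajectory_to_vad(traj):
    """Stand-in for a learned model mapping a trajectory (T x state_dim)
    to a Valence-Arousal-Dominance point in [-1, 1]^3."""
    return np.tanh(traj.mean(axis=0)[:3])  # placeholder, not the learned model

def emotive_cost(traj, target_vad):
    """One cost function for every emotion: distance to the target VAD point."""
    return np.linalg.norm(trajectory_to_vad(traj) - np.asarray(target_vad))

# Toy usage: evaluate a candidate trajectory against a "confident" target VAD
# (the specific coordinates are illustrative only).
candidate = np.random.randn(50, 7)
cost = emotive_cost(candidate, target_vad=(0.4, 0.3, 0.8))
```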