Humans intuitively solve tasks in versatile ways, varying their behavior both in overall trajectory planning and in individual steps. Thus, they can easily generalize and adapt to new and changing environments. Current imitation learning algorithms often only consider unimodal expert demonstrations and act in a state-action setting, making it difficult for them to imitate human behavior when the demonstrations are versatile. Instead, we combine a mixture of movement primitives with a distribution-matching objective to learn versatile behaviors that match the expert's behavior and versatility. To facilitate generalization to novel task configurations, we do not directly match the agent's and the expert's trajectory distributions but rather work with concise geometric descriptors that generalize well to unseen task configurations. We empirically validate our method on various robot tasks using versatile human demonstrations and compare it to imitation learning algorithms in a state-action setting as well as a trajectory-based setting. We find that the geometric descriptors greatly help in generalizing to new task configurations and that combining them with our distribution-matching objective is crucial for representing and reproducing versatile behavior.
Inverse reinforcement learning obtains a reward function from expert demonstrations, aiming to encode the expert's behavior and intentions. Current approaches typically do so with generative, uni-modal models, which means they encode a single behavior. In common settings where a problem admits various solutions, however, the expert shows versatile behavior, and this severely limits the generalization capability of these methods. We propose a novel inverse reinforcement learning method that overcomes these problems by formulating the recovered reward as a sum of iteratively trained discriminators. We show that our method is able to recover general, high-quality reward functions and to produce policies of the same quality as behavior-cloning approaches designed for versatile behavior.
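To make the key mechanism concrete, here is a minimal sketch of a reward formed as a sum of iteratively trained discriminators, as the abstract describes; the class and function names are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn

class SummedDiscriminatorReward(nn.Module):
    """Reward recovered as a sum of iteratively trained discriminators (sketch)."""

    def __init__(self):
        super().__init__()
        self.discriminators = nn.ModuleList()  # one discriminator per outer iteration

    def add_discriminator(self, state_dim: int, hidden: int = 64) -> nn.Module:
        # A fresh discriminator is trained in each iteration and then frozen.
        d = nn.Sequential(nn.Linear(state_dim, hidden), nn.Tanh(), nn.Linear(hidden, 1))
        self.discriminators.append(d)
        return d

    def reward(self, state: torch.Tensor) -> torch.Tensor:
        # Recovered reward: the sum of the scores of all discriminators so far.
        return sum(d(state) for d in self.discriminators).squeeze(-1)
```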
Humans demonstrate a variety of interesting behavioral characteristics when performing tasks, such as selecting between seemingly equivalent optimal actions, performing recovery actions when deviating from the optimal trajectory, or moderating actions in response to sensed risks. However, imitation learning, which attempts to teach robots to perform these same tasks from observations of human demonstrations, often fails to capture such behavior. Specifically, commonly used learning algorithms embody inherent contradictions between the learning assumptions (e.g., a single optimal action) and actual human behavior (e.g., multiple optimal actions), thereby limiting robot generalizability, applicability, and demonstration feasibility. To address this, this paper proposes designing imitation learning algorithms around actual demonstrator behavioral characteristics, so that these characteristics are captured and exploited rather than ignored. We present the first imitation learning framework, Bayesian Disturbance Injection (BDI), that typifies human behavioral characteristics by incorporating model flexibility, robustification, and risk sensitivity. Bayesian inference is used to learn flexible non-parametric multi-action policies, while simultaneously robustifying the policies by injecting risk-sensitive disturbances to induce human recovery actions and ensure demonstration feasibility. Our method is evaluated through risk-sensitive simulations and real-robot experiments (a table-sweep task, a shaft-reach task, and a shaft-insertion task) using a UR5e 6-DOF robotic arm, demonstrating improved characterization of behavior. Results show significant improvement in task performance through improved flexibility, robustness, and demonstration feasibility.
Many existing imitation learning datasets are collected from multiple demonstrators, each with different expertise in different parts of the environment. Standard imitation learning algorithms, however, typically treat all demonstrators as homogeneous regardless of their expertise, and thus absorb the weaknesses of any suboptimal demonstrator. In this work, we show that unsupervised learning of demonstrator expertise can lead to a consistent performance boost for imitation learning algorithms. We develop and optimize a joint model over the learned policy and the demonstrators' expertise levels. This enables our model to learn from the optimal behavior and to filter out the suboptimal behavior of each demonstrator. Our model learns a single policy that can outperform even the best demonstrator and can be used to estimate any demonstrator's expertise at any state. We illustrate our findings on real-robotic continuous control tasks and on discrete environments such as MiniGrid and chess, outperforming competing methods in 21 out of 23 settings, with an average of 7% and up to 60% improvement in terms of final reward.
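One way to read the joint policy/expertise model is as a behavioral-cloning loss whose samples are weighted by each demonstrator's inferred expertise. The sketch below assumes a single scalar expertise weight per demonstrator and a Gaussian-mean policy surrogate, which is a simplification of the paper's formulation.

```python
import torch
import torch.nn as nn

def expertise_weighted_bc_loss(policy: nn.Module,
                               expertise_logits: torch.Tensor,  # one learnable logit per demonstrator
                               states: torch.Tensor,
                               actions: torch.Tensor,
                               demo_ids: torch.Tensor) -> torch.Tensor:
    """BC loss where each sample is weighted by its demonstrator's inferred expertise."""
    weights = torch.sigmoid(expertise_logits)[demo_ids]          # in (0, 1), optimized jointly with the policy
    per_sample = ((policy(states) - actions) ** 2).mean(dim=-1)  # Gaussian-mean surrogate for -log pi(a|s)
    return (weights * per_sample).mean()
```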
A long-standing vision in robotics is to equip robots with skills that match the versatility and precision of humans. For example, when playing table tennis, a robot should be able to return the ball in various ways while precisely placing it at a desired location. A common way to model such versatile behavior is a mixture-of-experts (MoE) model, in which each expert is a contextual movement primitive. However, learning such MoEs is challenging, since most objectives force the model to cover the entire context space, which prevents specialization of the primitives and leads to components of rather low quality. Starting from maximum-entropy reinforcement learning (RL), we decompose the objective into optimizing an individual lower bound per mixture component. Furthermore, we introduce a curriculum by allowing the components to focus on local context regions, enabling the model to learn highly accurate skill representations. To this end, we use local context distributions that are adapted jointly with the expert primitives. Our lower bound advocates iteratively adding new components, where each new component focuses on local context regions not yet covered by the current MoE. This local and incremental learning results in a modular MoE model of high accuracy and versatility, and both properties can be scaled by adding more components on the fly. We demonstrate this through extensive ablations and two challenging simulated robot skill-learning tasks, and compare our performance against Live and HiREPS, a well-known hierarchical policy-search method for learning versatile skills.
Scenarios requiring humans to choose from multiple seemingly optimal actions are commonplace; however, standard imitation learning often fails to capture this behavior. Instead, an over-reliance on replicating expert actions induces inflexible and unstable policies, leading to poor generalizability in applications. To address this problem, this paper presents the first imitation learning framework that incorporates Bayesian variational inference for learning flexible non-parametric multi-action policies, while simultaneously robustifying the policies against sources of error by introducing and optimizing disturbances to create a richer demonstration dataset. This combinatorial approach forces the policy to adapt to challenging situations, enabling stable multi-action policies to be learned efficiently. The effectiveness of the proposed method is evaluated through simulations and real-robot experiments on a table-sweep task using a UR3 6-DOF robotic arm. Results show that, through improved flexibility and robustness, both learning performance and control safety are better than in comparison methods.
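A rough sketch of the disturbance-injection idea, assuming a gym-style environment interface and a fixed Gaussian disturbance; the actual framework infers and optimizes the disturbances via Bayesian inference, so treat this purely as an illustration of how the richer dataset is collected.

```python
import numpy as np

def collect_with_disturbance(env, expert_policy, disturbance_scale: float = 0.1, horizon: int = 200):
    """Collect one demonstration while injecting small disturbances into the executed action,
    so the expert's corrective behavior around the nominal trajectory is also recorded."""
    states, actions = [], []
    s = env.reset()
    for _ in range(horizon):
        a = expert_policy(s)                                    # action the expert commands
        noise = np.random.normal(0.0, disturbance_scale, np.shape(a))
        s_next, _, done, _ = env.step(a + noise)                # action that actually gets executed
        states.append(s); actions.append(a)                     # learn the commanded (recovery) action
        s = s_next
        if done:
            break
    return np.asarray(states), np.asarray(actions)
```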
Despite impressive recent progress in behavior learning, the field lags behind computer vision and natural language processing because it cannot leverage large, human-generated datasets. Human behavior has wide variance and multiple modes, and human demonstrations typically come without reward labels. These properties limit the applicability of current methods in offline RL and behavior cloning for learning from large, pre-collected datasets. In this work, we present Behavior Transformers (BeT), a new technique for modeling unlabeled demonstration data with multiple modes. BeT retrofits a standard transformer architecture with action discretization, coupled with a multi-task action correction inspired by offset prediction in object detection. This allows us to leverage the multi-modal modeling ability of modern transformers to predict multi-modal continuous actions. We experimentally evaluate BeT on a variety of robotic manipulation and self-driving behavior datasets. We show that BeT significantly improves over prior state-of-the-art work while capturing the major modes present in the pre-collected datasets. Finally, through an extensive ablation study, we analyze the importance of each key component of BeT. Videos of behaviors generated by BeT are available at https://notmahi.github.io/bet
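A hedged sketch of the two ingredients named in the abstract, action discretization plus residual offset prediction; the helper names and the k-means choice for binning are assumptions for illustration only.

```python
import numpy as np
from sklearn.cluster import KMeans

# Offline step: discretize the continuous expert actions into k bins (cluster centers).
def fit_action_bins(actions: np.ndarray, k: int = 64) -> KMeans:
    return KMeans(n_clusters=k, n_init=10).fit(actions)

# Training target for one action: its bin index plus a residual offset from the bin center,
# echoing the offset-prediction idea the abstract borrows from object detection.
def discretized_targets(action: np.ndarray, bins: KMeans):
    idx = int(bins.predict(action[None])[0])
    offset = action - bins.cluster_centers_[idx]
    return idx, offset

# At decoding time the model samples a bin (categorical, hence multi-modal) and adds its
# predicted offset:  a_hat = bins.cluster_centers_[sampled_idx] + predicted_offset[sampled_idx]
```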
Co-adaptation of robots has been a long-standing research endeavour, aiming to adapt a system's body and behavior to a given task, inspired by the natural evolution of animals. Co-adaptation has the potential to eliminate costly manual hardware engineering and to improve system performance. The standard approach to co-adaptation is to optimize both behavior and morphology with a reward function. However, defining and constructing such a reward function is notoriously difficult and often a significant engineering effort. This paper introduces a new perspective on the co-adaptation problem, which we call co-imitation: finding a morphology and a policy such that an imitator can closely match a demonstrator's behavior. To this end, we propose a co-imitation method that adapts behavior and morphology by matching the demonstrator's state distribution. Specifically, we focus on the challenging case where the state and action spaces of the two agents mismatch. We find that co-imitation increases behavioral similarity across a variety of tasks and settings, and we demonstrate co-imitation by transferring human walking, jogging, and kicking motions onto a simulated humanoid.
Learning agile skills is one of the main challenges in robotics. To this end, reinforcement learning approaches have achieved impressive results. However, these methods require explicit task information in the form of a reward function, or an expert that can be queried in simulation to provide target control outputs, which limits their applicability. In this work, we propose a generative adversarial method for inferring reward functions from partial and potentially physically incompatible demonstrations, so as to successfully acquire skills from reference or expert demonstrations. Moreover, we show that by using a Wasserstein GAN formulation and transitions from demonstrations with rough and partial information as input, we are able to extract robust policies and imitate the demonstrated behavior. Finally, the obtained skills, such as a backflip, are tested on an agile quadruped robot called Solo 8, and we show faithful replication of hand-held human demonstrations.
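A minimal sketch of how a Wasserstein-critic score over demonstration transitions could serve as a learned reward, in the spirit of this abstract; the network size, the (s, s') input convention, and the function names are assumptions, and the Lipschitz constraint is only noted in a comment.

```python
import torch
import torch.nn as nn

STATE_DIM = 8  # assumed state dimensionality for illustration
critic = nn.Sequential(nn.Linear(2 * STATE_DIM, 128), nn.ReLU(), nn.Linear(128, 1))  # scores (s, s') pairs

def wasserstein_critic_loss(expert_transitions: torch.Tensor,
                            policy_transitions: torch.Tensor) -> torch.Tensor:
    # Critic is pushed up on (partial) demonstration transitions and down on policy transitions;
    # in practice a gradient penalty or weight clipping keeps it approximately 1-Lipschitz.
    return -(critic(expert_transitions).mean() - critic(policy_transitions).mean())

def imitation_reward(transition: torch.Tensor) -> torch.Tensor:
    # The critic's score is used as (part of) the reward when training the policy with RL.
    return critic(transition).detach().squeeze(-1)
```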
In this paper, we focus on the problem of using energy-based models (EBMs) as guiding priors for motion optimization. EBMs are a class of neural networks that can represent expressive probability density distributions in terms of a Gibbs distribution parameterized by a suitable energy function. Thanks to their implicit nature, they can easily be integrated as optimization factors or as initial sampling distributions in motion optimization problems, making them good candidates for bringing data-driven priors into motion optimization. In this work, we present the modeling and algorithmic choices required to adapt EBMs to motion optimization. We investigate the benefit of including additional regularizers when learning EBMs so that they can be used with gradient-based optimizers, and we present a set of EBM architectures for learning generalizable distributions for manipulation tasks. We present multiple cases in which EBMs can be integrated into motion optimization and evaluate the performance of learned EBMs as guiding priors in both simulated and real robot experiments.
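The way an EBM enters a motion-optimization problem as a guiding prior can be summarized as adding its energy to the task cost; a schematic sketch under that assumption, with all names mine:

```python
import torch

def trajectory_cost(traj: torch.Tensor,
                    task_cost,        # differentiable task objective, e.g. goal and smoothness terms
                    ebm_energy,       # learned energy network E_theta(traj)
                    weight: float = 1.0) -> torch.Tensor:
    """Motion-optimization objective with a learned EBM acting as a data-driven prior term."""
    return task_cost(traj) + weight * ebm_energy(traj)

# Gradient-based motion optimization then simply descends this combined cost:
# traj.requires_grad_(True); trajectory_cost(traj, task_cost, ebm_energy).backward(); step on traj.grad
```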
Reinforcement learning can acquire complex behaviors from high-level specifications. However, defining a cost function that can be optimized effectively and encodes the correct task is challenging in practice. We explore how inverse optimal control (IOC) can be used to learn behaviors from demonstrations, with applications to torque control of high-dimensional robotic systems. Our method addresses two key challenges in inverse optimal control: first, the need for informative features and effective regularization to impose structure on the cost, and second, the difficulty of learning the cost function under unknown dynamics for high-dimensional continuous systems. To address the former challenge, we present an algorithm capable of learning arbitrary nonlinear cost functions, such as neural networks, without meticulous feature engineering. To address the latter challenge, we formulate an efficient sample-based approximation for MaxEnt IOC. We evaluate our method on a series of simulated tasks and real-world robotic manipulation problems, demonstrating substantial improvement over prior methods both in terms of task complexity and sample efficiency.
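For reference, the sample-based MaxEnt IOC objective alluded to here is commonly written with an importance-sampled estimate of the partition function; the notation below is a standard rendering and may differ in detail from the paper's exact derivation.

\[
\mathcal{L}(\theta) \;=\; \frac{1}{N}\sum_{\tau_i \in \mathcal{D}_{\text{demo}}} c_\theta(\tau_i)
\;+\; \log \frac{1}{M}\sum_{\tau_j \in \mathcal{D}_{\text{samp}}}
\frac{\exp\!\left(-c_\theta(\tau_j)\right)}{q(\tau_j)},
\qquad
c_\theta(\tau) \;=\; \sum_{t} c_\theta(\mathbf{s}_t, \mathbf{a}_t),
\]

where the second term is a sample-based estimate of the partition function using trajectories drawn from a background distribution \(q\).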
We focus on an unloading problem typical of the logistics sector, modeled as a sequential picking task. For this type of task, modern machine learning techniques have been shown to work better than classical systems because they are more adaptable to stochasticity and better able to cope with large uncertainties. More specifically, supervised and imitation learning have achieved excellent results here, with the drawback of requiring some form of supervision, which is not always obtainable in every setting. On the other hand, reinforcement learning (RL) requires much milder forms of supervision but remains impractical due to its inefficiency. In this paper, we propose and theoretically motivate a novel unsupervised reward-shaping algorithm from expert observations, which relaxes the level of supervision required of the agent and works toward improving RL performance on our task.
Placing robots outside controlled conditions requires versatile movement representations that allow robots to learn new tasks and adapt them to environmental changes. The introduction of obstacles or of additional robots into the workspace, or modifications of the joint range due to faults or range-of-motion limits, are typical cases in which the capability to adapt plays a key role in safely executing robot tasks. Probabilistic movement primitives (ProMPs), which model adaptable motion skills as Gaussian distributions over trajectories, have already been proposed for this purpose. They are analytically tractable and can be learned from a small number of demonstrations. However, both the original ProMP formulation and subsequent approaches only provide solutions to specific adaptation problems, such as obstacle avoidance, and a generic, unified probabilistic approach to adaptation is missing. In this paper, we develop a generic probabilistic framework for adapting ProMPs. We unify previous adaptation techniques, for example various types of obstacle avoidance and via-points, in one framework and combine them to solve complex robotic problems. In addition, we derive novel adaptation techniques such as temporally unbound via-points and mutual avoidance. We formulate adaptation as a constrained optimization problem in which we minimize the Kullback-Leibler divergence between the adapted distribution and the original primitive, while constraining the probability mass associated with undesired trajectories to be low. We demonstrate our approach on several adaptation problems with simulated planar robot arms and 7-DoF Franka Emika robots in a dual-robot-arm setting.
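Written out, the constrained optimization described at the end of the abstract has roughly the following shape (the notation is mine, not the paper's):

\[
\min_{\mu_w,\,\Sigma_w}\;
D_{\mathrm{KL}}\!\left(\mathcal{N}(\mu_w,\Sigma_w)\,\middle\|\,\mathcal{N}(\mu_w^{\text{orig}},\Sigma_w^{\text{orig}})\right)
\quad \text{s.t.} \quad
\Pr_{\tau \sim \mathcal{N}(\mu_w,\Sigma_w)}\!\left[\tau \in \mathcal{T}_{\text{undesired}}\right] \le \epsilon,
\]

where the ProMP is the Gaussian over trajectory weights \(w\) and \(\mathcal{T}_{\text{undesired}}\) collects trajectories violating a constraint, e.g. those colliding with an obstacle.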
We present State Matching Offline DIstribution Correction Estimation (SMODICE), a novel and versatile regression-based offline imitation learning (IL) algorithm derived via state-occupancy matching. We show that the SMODICE objective admits a simple optimization procedure through an application of Fenchel duality, and an analytic solution in tabular MDPs. Without requiring access to expert actions, SMODICE can be effectively applied to three offline IL settings: (i) imitation from observations (IfO), (ii) IfO with dynamics- or morphologically-mismatched experts, and (iii) example-based reinforcement learning, which we show can be formulated as a state-occupancy matching problem. We extensively evaluate SMODICE on both gridworld environments and high-dimensional offline benchmarks. Our results show that SMODICE is effective for all three problem settings and significantly outperforms the prior state of the art.
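One ingredient consistent with this description is a state-only discriminator between expert and offline states whose log-odds provide the reward consumed by the downstream occupancy-matching (DICE-style) optimization; the sketch below makes that piece explicit, with all names and dimensions assumed.

```python
import torch
import torch.nn as nn

STATE_DIM = 8  # assumed state dimensionality
disc = nn.Sequential(nn.Linear(STATE_DIM, 128), nn.ReLU(), nn.Linear(128, 1))  # state-only discriminator

def discriminator_loss(expert_states: torch.Tensor, offline_states: torch.Tensor) -> torch.Tensor:
    # Trained to tell expert-visited states from states in the suboptimal offline dataset.
    bce = nn.functional.binary_cross_entropy_with_logits
    e_logits, o_logits = disc(expert_states), disc(offline_states)
    return bce(e_logits, torch.ones_like(e_logits)) + bce(o_logits, torch.zeros_like(o_logits))

def state_reward(states: torch.Tensor) -> torch.Tensor:
    # The discriminator's logit estimates log d_E(s)/d_O(s); this state-based reward then
    # feeds the occupancy-matching optimization derived via Fenchel duality.
    return disc(states).detach().squeeze(-1)
```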
Effective exploration is a key challenge in deep reinforcement learning. Several methods, such as behavioral priors, are able to leverage offline data to efficiently accelerate reinforcement learning on complex tasks. However, if the task at hand deviates too much from the demonstrated tasks, the effectiveness of such methods is limited. In our work, we propose to learn features from offline data that are shared by a more diverse set of tasks, such as correlations between actions and directedness. We therefore introduce state-free priors, which directly model temporal consistency in demonstrated trajectories and are able to drive exploration in complex tasks even when trained on data collected for simpler tasks. Furthermore, we introduce a novel integration scheme for action priors in off-policy reinforcement learning by dynamically sampling actions from a probabilistic mixture of the policy and the action prior. We compare our approach against strong baselines and provide empirical evidence that it can accelerate reinforcement learning on long-horizon continuous control tasks under sparse rewards.
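The mixture-based action selection can be sketched in a few lines; the fixed mixing probability below is a simplification of the dynamic sampling the abstract mentions, and the function name is mine.

```python
import torch

def sample_exploration_action(policy_dist: torch.distributions.Distribution,
                              prior_dist: torch.distributions.Distribution,
                              mix_prob: float = 0.5) -> torch.Tensor:
    """Draw the exploratory action from a mixture of the current policy and the action prior."""
    if torch.rand(()) < mix_prob:
        return prior_dist.sample()   # temporally consistent action prior learned from offline data
    return policy_dist.sample()      # current off-policy RL policy
```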
While significant progress has been made on understanding hand-object interactions in computer vision, it remains very challenging for robots to perform complex dexterous manipulation. In this paper, we propose a new platform and pipeline, DexMV (Dexterous Manipulation from Videos), for imitation learning. We design a platform with (i) a simulation system for complex dexterous manipulation tasks with a multi-finger robot hand and (ii) a computer vision system to record large-scale demonstrations of a human hand performing the same tasks. In our novel pipeline, we extract 3D hand and object poses from the videos and propose a novel demonstration translation method that converts human motions into robot demonstrations. We then apply several imitation learning algorithms to the demonstrations. We show that the demonstrations can indeed improve robot learning by a large margin and solve complex tasks that reinforcement learning alone cannot solve. Project page with videos: https://yzqin.github.io/dexmv
In mobile manipulation (MM), robots can both navigate within and interact with their environment, and are thus able to complete many more tasks than robots that can only navigate or only manipulate. In this work, we explore how imitation learning (IL) can be applied to learn continuous visuomotor policies for MM tasks. Much prior work has shown that IL can train visuomotor policies for manipulation or navigation domains, but few works have applied IL to the MM domain. Doing so is challenging for two reasons: on the data side, current interfaces make it difficult to collect high-quality human demonstrations, and on the learning side, policies trained on limited data can suffer from covariate shift when deployed. To address these problems, we first propose Mobile Manipulation RoboTurk (MoMaRT), a novel teleoperation framework that allows simultaneous navigation and manipulation of a mobile manipulator, and we collect a large-scale dataset in a realistic simulated kitchen setting. We then propose a learned error-detection system to address covariate shift by detecting when the agent is in a potential failure state. We train performant IL policies and error detectors from this data, achieving over 45% task success rate and 85% error detection success rate on multiple multi-stage tasks when trained on expert data. Codebase, datasets, visualizations, and more are available at https://sites.google.com/view/il-for-mm/home
We study the problem of offline imitation learning (IL), in which an agent aims to learn an optimal expert behavior policy without additional online environment interaction. Instead, the agent is provided with a supplementary offline dataset of suboptimal behaviors. Prior works that address this problem either require that the expert data occupy a large proportion of the offline dataset, or need to learn a reward function and perform offline reinforcement learning (RL) afterwards. In this paper, we aim to tackle the problem without the additional steps of reward learning and offline RL training for the case where demonstrations contain a large amount of suboptimal data. Building on behavior cloning (BC), we introduce an additional discriminator to distinguish expert from non-expert data. We propose a cooperative framework that boosts the learning of both tasks, and based on this framework we design a new IL algorithm in which the discriminator's output serves as the weight of the BC loss. Experimental results show that our proposed algorithm achieves higher returns and faster training speed than baseline algorithms.
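The core algorithmic statement, the discriminator's output weighting the BC loss, admits a compact sketch; the exact weighting function and the cooperative training of the discriminator differ in the paper, and all names here are assumptions.

```python
import torch
import torch.nn as nn

def discriminator_weighted_bc_loss(policy: nn.Module,
                                   discriminator: nn.Module,
                                   states: torch.Tensor,
                                   actions: torch.Tensor) -> torch.Tensor:
    """BC loss with per-sample weights taken from the discriminator's expert probability."""
    with torch.no_grad():
        w = torch.sigmoid(discriminator(torch.cat([states, actions], dim=-1))).squeeze(-1)
    nll = ((policy(states) - actions) ** 2).mean(dim=-1)  # Gaussian-mean surrogate for -log pi(a|s)
    return (w * nll).mean()
```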
Imitation learning techniques aim to mimic human behavior in a given task. An agent (a learning machine) is trained to perform a task from demonstrations by learning a mapping between observations and actions. The idea of teaching by imitation has been around for many years, however, the field is gaining attention recently due to advances in computing and sensing as well as rising demand for intelligent applications. The paradigm of learning by imitation is gaining popularity because it facilitates teaching complex tasks with minimal expert knowledge of the tasks. Generic imitation learning methods could potentially reduce the problem of teaching a task to that of providing demonstrations; without the need for explicit programming or designing reward functions specific to the task. Modern sensors are able to collect and transmit high volumes of data rapidly, and processors with high computational power allow fast processing that maps the sensory data to actions in a timely manner. This opens the door for many potential AI applications that require real-time perception and reaction such as humanoid robots, self-driving vehicles, human computer interaction and computer games to name a few. However, specialized algorithms are needed to effectively and robustly learn models as learning by imitation poses its own set of challenges. In this paper, we survey imitation learning methods and present design options in different steps of the learning process. We introduce a background and motivation for the field as well as highlight challenges specific to the imitation problem. Methods for designing and evaluating imitation learning tasks are categorized and reviewed. Special attention is given to learning methods in robotics and games as these domains are the most popular in the literature and provide a wide array of problems and methodologies. We extensively discuss combining imitation learning approaches using different sources and methods, as well as incorporating other motion learning methods to enhance imitation. We also discuss the potential impact on industry, present major applications and highlight current and future research directions.
Imitation learning approaches achieve good generalization within the range of the training data, but tend to generate unpredictable motions when querying outside this range. We present a novel approach to imitation learning with enhanced extrapolation capabilities that exploits the so-called Equation Learner Network (EQLN). Unlike conventional approaches, EQLNs use supervised learning to fit a set of analytical expressions that allows them to extrapolate beyond the range of the training data. We augment the task demonstrations with a set of task-dependent parameters representing spatial properties of each motion and use them to train the EQLN. At run time, the features are used to query the Task-Parameterized Equation Learner Network (TP-EQLN) and generate the corresponding robot trajectory. The set of features encodes kinematic constraints of the task such as desired height or a final point to reach. We validate the results of our approach on manipulation tasks where it is important to preserve the shape of the motion in the extrapolation domain. Our approach is also compared with existing state-of-the-art approaches, in simulation and in real setups. The experimental results show that TP-EQLN can respect the constraints of the trajectory encoded in the feature parameters, even in the extrapolation domain, while preserving the overall shape of the trajectory provided in the demonstrations.
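For intuition, an equation-learner layer replaces generic activations with fixed analytic basis functions (identity, sine, cosine, products), which is what enables extrapolation beyond the training range; a minimal sketch under that reading, with the class name and group sizes assumed rather than taken from the paper:

```python
import torch
import torch.nn as nn

class EquationLearnerLayer(nn.Module):
    """One equation-learner layer: a linear projection followed by fixed analytic basis functions.
    Stacked layers, trained with a sparsity (L1) penalty, yield compact analytic expressions."""

    def __init__(self, in_dim: int, units_per_fn: int = 4):
        super().__init__()
        # Five groups of linear units feed four kinds of basis functions:
        # identity, sin, cos, and a pairwise product of the last two groups.
        self.lin = nn.Linear(in_dim, 5 * units_per_fn)
        self.u = units_per_fn

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        ident, s, c, p1, p2 = torch.split(self.lin(x), self.u, dim=-1)
        return torch.cat([ident, torch.sin(s), torch.cos(c), p1 * p2], dim=-1)
```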