Agile robotics presents a difficult challenge: robots moving at high speed require precise, low-latency sensing and control. Creating agile motion that accomplishes the task at hand while remaining safe to execute is a key requirement for agile robots to gain human trust. This calls for new approaches that are flexible yet maintain knowledge of world constraints. In this paper, we consider the problem of building a flexible and adaptive controller for a challenging agile mobile manipulation task: hitting ground strokes on a wheelchair tennis robot. We propose and evaluate an extension to prior work on learning striking behaviors with a probabilistic movement primitive (ProMP) framework by (1) demonstrating the safe execution of learned primitives on an agile mobile manipulator setup, and (2) proposing an online primitive refinement procedure that uses evaluative human feedback on the executed trajectories.
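The online refinement from evaluative human feedback can be illustrated as a reward-weighted update of the ProMP mean weights. The following is a minimal sketch, not the paper's exact procedure: the basis layout, the annealing schedule, and the synthetic scoring function standing in for human feedback are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# ProMP: y(t) = Phi(t) @ w with w ~ N(mu_w, Sigma_w); 10 Gaussian bases
# over normalized time (basis count and widths are illustrative choices).
T = np.linspace(0.0, 1.0, 50)
centers = np.linspace(0.0, 1.0, 10)
Phi = np.exp(-((T[:, None] - centers[None, :]) ** 2) / (2 * 0.05**2))
Phi /= Phi.sum(axis=1, keepdims=True)

mu_w = np.zeros(10)
sigma = 0.3                       # exploration std, annealed below

# Stand-in for human evaluative feedback: a scalar score per executed
# trajectory (here, negative error to a desired stroke profile).
target = np.sin(2 * np.pi * T)

def feedback(w):
    return -np.linalg.norm(Phi @ w - target)

initial_error = -feedback(mu_w) / np.linalg.norm(target)   # = 1.0

for _ in range(30):               # reward-weighted refinement loop
    W = mu_w + sigma * rng.standard_normal((30, 10))       # sampled rollouts
    scores = np.array([feedback(w) for w in W])
    probs = np.exp(5.0 * (scores - scores.max()))          # soft-greedy weights
    mu_w = (probs / probs.sum()) @ W                       # refined mean
    sigma *= 0.95                 # shrink exploration as feedback accumulates

final_error = -feedback(mu_w) / np.linalg.norm(target)
```

In the real system the `feedback` scores would come from human ratings of executed strokes rather than a known target profile.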
One of the most important challenges in robotics is generating accurate trajectories and controlling their dynamic parameters so that robots can perform different tasks. The ability to provide such motion control is closely tied to how those motions are encoded. Advances in deep learning have strongly influenced the development of novel approaches for dynamic movement primitives. In this work, we survey the scientific literature related to neural dynamic movement primitives, to complement existing surveys on dynamic movement primitives.
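As background for the surveyed primitives, a classical discrete dynamic movement primitive can be learned from a single demonstration and reproduced by integration. A minimal sketch follows; the gains, basis count, and demonstration curve are illustrative choices, not from any surveyed paper.

```python
import numpy as np

# Demonstration: a minimum-jerk-like reach from 0 to 1 over 1 second.
dt, tau = 0.01, 1.0
t = np.arange(0.0, 1.0, dt)
y_demo = 10 * t**3 - 15 * t**4 + 6 * t**5          # position
yd_demo = np.gradient(y_demo, dt)                   # velocity
ydd_demo = np.gradient(yd_demo, dt)                 # acceleration

alpha, beta, alpha_x = 25.0, 25.0 / 4.0, 3.0        # common textbook gains
g, y0 = y_demo[-1], y_demo[0]
x = np.exp(-alpha_x * t / tau)                      # canonical phase variable

# Invert the transformation system to get the target forcing term.
f_target = tau**2 * ydd_demo - alpha * (beta * (g - y_demo) - tau * yd_demo)

# Fit 20 Gaussian basis weights by locally weighted regression in phase space.
c = np.exp(-alpha_x * np.linspace(0, 1, 20))        # basis centers
h = 1.0 / np.diff(c, append=c[-1] * 0.5) ** 2       # basis widths
psi = np.exp(-h[None, :] * (x[:, None] - c[None, :]) ** 2)
s = x * (g - y0)
w = np.array([(s * psi[:, i] * f_target).sum() / ((s**2 * psi[:, i]).sum() + 1e-10)
              for i in range(20)])

# Roll out the DMP and compare to the demonstration.
y, yd = y0, 0.0
y_run = []
for xi in x:
    psi_i = np.exp(-h * (xi - c) ** 2)
    f = (psi_i @ w) * xi * (g - y0) / (psi_i.sum() + 1e-10)
    ydd = (alpha * (beta * (g - y) - tau * yd) + f) / tau**2
    yd += ydd * dt                                  # semi-implicit Euler step
    y += yd * dt
    y_run.append(y)

reproduction_error = np.max(np.abs(np.array(y_run) - y_demo))
```

Changing the goal `g` or the time constant `tau` then adapts the reproduced motion, which is the property that makes DMPs attractive as a primitive representation.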
Imitation learning techniques aim to mimic human behavior in a given task. An agent (a learning machine) is trained to perform a task from demonstrations by learning a mapping between observations and actions. The idea of teaching by imitation has been around for many years; however, the field has recently gained attention due to advances in computing and sensing, as well as rising demand for intelligent applications. The paradigm of learning by imitation is gaining popularity because it facilitates teaching complex tasks with minimal expert knowledge of the tasks. Generic imitation learning methods could potentially reduce the problem of teaching a task to that of providing demonstrations, without the need for explicit programming or designing reward functions specific to the task. Modern sensors are able to collect and transmit high volumes of data rapidly, and processors with high computational power allow fast processing that maps the sensory data to actions in a timely manner. This opens the door for many potential AI applications that require real-time perception and reaction, such as humanoid robots, self-driving vehicles, human-computer interaction, and computer games, to name a few. However, specialized algorithms are needed to learn models effectively and robustly, as learning by imitation poses its own set of challenges. In this paper, we survey imitation learning methods and present design options in different steps of the learning process. We introduce a background and motivation for the field as well as highlight challenges specific to the imitation problem. Methods for designing and evaluating imitation learning tasks are categorized and reviewed. Special attention is given to learning methods in robotics and games, as these domains are the most popular in the literature and provide a wide array of problems and methodologies.
We extensively discuss combining imitation learning approaches using different sources and methods, as well as incorporating other motion learning methods to enhance imitation. We also discuss the potential impact on industry, present major applications and highlight current and future research directions.
Learning from demonstration (LfD) provides a convenient means to equip robots with dexterous skills when demonstrations can be obtained in robot intrinsic coordinates. However, the problem of compounding errors in long and complex skills reduces its wide deployment. Since most such complex skills are composed of smaller movements that are combined, considering the target skill as a sequence of compact motor primitives seems reasonable. Here, the problem that needs to be solved is to ensure that a motor primitive ends in a state that allows the successful execution of the subsequent primitive. In this study, we focus on this problem by proposing to learn an explicit correction policy for when the expected transition state between primitives is not attained. The correction policy is itself learned using a state-of-the-art movement primitive learning architecture, Conditional Neural Movement Primitives (CNMP). The learned correction policy is then able to produce diverse movement trajectories in a context-dependent way. The advantage of the proposed system over learning the complete task is shown in simulation with a tabletop setup, where an object has to be pushed through a corridor in two steps. The applicability of the proposed method to real-world bimanual knotting is then shown by equipping an upper-body humanoid robot with the skill of tying a knot over a bar in 3D space. The experiments show that the robot can perform successful knotting even when faced with correction cases that were not part of the human demonstration set.
Placing robots outside controlled conditions requires versatile movement representations that allow robots to learn new tasks and adapt them to environmental changes. The introduction of obstacles or additional robots in the workspace, and the modification of joint ranges due to faults or range-of-motion constraints, are typical cases where adaptation capabilities play a key role in safely performing robot tasks. Probabilistic movement primitives (ProMPs), which model adaptable movement skills as Gaussian distributions over trajectories, have been proposed to represent such skills. They are analytically tractable and can be learned from a few demonstrations. However, both the original ProMP formulation and subsequent approaches only provide solutions to specific movement adaptation problems, such as obstacle avoidance, and a generic, unifying, probabilistic approach to adaptation is missing. In this paper, we develop a generic probabilistic framework for adapting ProMPs. We unify previous adaptation techniques, for example various types of obstacle avoidance, via-points, and mutual avoidance, in one framework, and combine them to solve complex robotic problems. Additionally, we derive novel adaptation techniques such as temporally unbound via-points and mutual avoidance. We formulate adaptation as a constrained optimization problem in which we minimize the Kullback-Leibler divergence between the adapted distribution and the distribution of the original primitive, while constraining the probability mass associated with undesired trajectories to be low. We demonstrate our approach on several adaptation problems with simulated planar robot arms and 7-DoF Franka Emika robots in a dual-robot-arm setting.
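The via-point adaptations described above build on the basic ProMP operation of Gaussian conditioning of the weight distribution. A minimal one-dimensional sketch of that operation follows; the basis layout, synthetic demonstrations, and noise levels are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# ProMP over a 1-D trajectory: y(t) = phi(t)^T w, with w ~ N(mu_w, Sigma_w).
n_basis = 15
centers = np.linspace(0, 1, n_basis)

def features(t):
    phi = np.exp(-((t - centers) ** 2) / (2 * 0.005))   # Gaussian bases
    return phi / phi.sum()

# Synthetic "demonstrations": noisy sine trajectories. Fit weights per demo
# by ridge regression, then estimate the weight distribution.
T = np.linspace(0, 1, 100)
Phi = np.stack([features(t) for t in T])                # (100, n_basis)
demos = [np.sin(np.pi * T) + 0.05 * rng.standard_normal(100) for _ in range(20)]
W = np.stack([np.linalg.solve(Phi.T @ Phi + 1e-6 * np.eye(n_basis), Phi.T @ d)
              for d in demos])
mu_w = W.mean(axis=0)
Sigma_w = np.cov(W.T) + 1e-6 * np.eye(n_basis)

# Condition on a tight via-point y(t*) = y* (standard Gaussian update).
t_star, y_star, sigma_y = 0.5, 1.4, 1e-8
phi_s = features(t_star)
L = Sigma_w @ phi_s / (sigma_y + phi_s @ Sigma_w @ phi_s)
mu_cond = mu_w + L * (y_star - phi_s @ mu_w)
Sigma_cond = Sigma_w - np.outer(L, phi_s @ Sigma_w)

y_mean_at_star = phi_s @ mu_cond    # mean trajectory now passes through y*
```

The KL-constrained formulation in the paper generalizes this closed-form update to adaptation objectives (e.g., obstacle or mutual avoidance) that have no such analytic solution.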
We address the problem of enabling quadrupedal robots to perform precise shooting skills in the real world using reinforcement learning. Developing algorithms that enable a legged robot to shoot a soccer ball to a given target is a challenging problem that combines robot motion control and planning into one task. To solve this problem, we need to consider the dynamics limitations and motion stability involved in controlling a dynamic legged robot. Moreover, we need to plan the motion to shoot a hard-to-model deformable ball, rolling on the ground with uncertain friction, to a desired location. In this paper, we propose a hierarchical framework that leverages deep reinforcement learning to train (a) a robust motion control policy that can track arbitrary motions and (b) a planning policy that decides the desired kicking motion to shoot the soccer ball to a target. We deploy the proposed framework on an A1 quadruped robot and enable it to accurately shoot the ball to random targets in the real world.
Reinforcement learning can acquire complex behaviors from high-level specifications. However, defining a cost function that can be optimized effectively and encodes the correct task is challenging in practice. We explore how inverse optimal control (IOC) can be used to learn behaviors from demonstrations, with applications to torque control of high-dimensional robotic systems. Our method addresses two key challenges in inverse optimal control: first, the need for informative features and effective regularization to impose structure on the cost, and second, the difficulty of learning the cost function under unknown dynamics for high-dimensional continuous systems. To address the former challenge, we present an algorithm capable of learning arbitrary nonlinear cost functions, such as neural networks, without meticulous feature engineering. To address the latter challenge, we formulate an efficient sample-based approximation for MaxEnt IOC. We evaluate our method on a series of simulated tasks and real-world robotic manipulation problems, demonstrating substantial improvement over prior methods both in terms of task complexity and sample efficiency.
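The sample-based approximation of the MaxEnt IOC partition function can be illustrated on a toy problem where each trajectory is reduced to a feature vector and the cost is linear in those features. This sketch uses a uniform proposal over a fixed candidate set rather than the paper's adaptive sampling, and all quantities are synthetic.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy MaxEnt IOC: the expert acts Boltzmann-rationally under an unknown
# linear cost c(tau) = theta . f(tau) over a finite set of trajectories.
n_traj, n_feat = 500, 3
F = rng.standard_normal((n_traj, n_feat))           # candidate trajectory features
theta_true = np.array([1.5, -0.5, 0.8])

p_expert = np.exp(-F @ theta_true)
p_expert /= p_expert.sum()
demos = rng.choice(n_traj, size=2000, p=p_expert)   # expert demonstrations

# Learn theta by descending the MaxEnt negative log-likelihood, with the
# partition function approximated from random background samples.
theta = np.zeros(n_feat)
f_demo = F[demos].mean(axis=0)
for _ in range(300):
    idx = rng.choice(n_traj, size=100, replace=False)   # background samples
    logits = -F[idx] @ theta
    p = np.exp(logits - logits.max())
    p /= p.sum()                                    # model distribution estimate
    grad = f_demo - p @ F[idx]                      # grad of the negative LL
    theta -= 0.1 * grad

# Recovered cost should rank trajectories like the true cost.
cost_corr = np.corrcoef(F @ theta, F @ theta_true)[0, 1]
```

The paper's setting replaces the linear cost with a neural network and draws background samples from policies optimized under the current cost, but the demo-feature-minus-sample-feature gradient has the same shape.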
The seamless integration of robots into human environments requires robots to learn how to use existing human tools. Current approaches for learning tool manipulation skills mostly rely on expert demonstrations provided in the target robot environment, for example, by manually guiding the robot manipulator or by teleoperation. In this work, we introduce an automated approach that replaces an expert demonstration with a YouTube video for learning a tool manipulation strategy. The main contributions are twofold. First, we design an alignment procedure that aligns the simulated environment with the real-world scene observed in the video. This is formulated as an optimization problem that finds a spatial alignment of the tool trajectory to maximize the sparse goal reward given by the environment. Second, we describe an imitation learning approach that focuses on the trajectory of the tool rather than the motion of the human. To this end, we combine reinforcement learning with an optimization procedure to find a control policy and a placement of the robot based on the tool motion in the aligned environment. We demonstrate the proposed approach with spade, scythe, and hammer tools in simulation, and show the effectiveness of the trained spade policy on a real Franka Emika Panda robot.
Humans still perform many high-precision (dis)assembly tasks, which makes them an ideal opportunity for automation. This paper provides a framework that enables a non-expert human operator to teach a robot arm to perform complex precision tasks. The framework uses a variable Cartesian impedance controller to execute trajectories learned from kinesthetic human demonstrations. Feedback can be given to interactively reshape or speed up the original demonstration. Board localization is done through a visual estimate of the task-board position and refined with haptic feedback. Our framework is tested on a robotic benchmark disassembly challenge, where the robot has to perform complex precision tasks such as key insertion. The results show high success rates for each of the manipulation subtasks, including cases with novel board poses. An ablation study is also performed to evaluate the components of the framework.
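The variable Cartesian impedance controller at the core of such a framework renders the end effector as a spring-damper system around the demonstrated trajectory. A minimal 1-D point-mass sketch follows; the gains, mass, and target displacement are illustrative values, not the paper's.

```python
import numpy as np

# Impedance law: F = K (x_des - x) - D * x_dot, applied to a unit point mass.
m, dt = 1.0, 0.001
x, xd = 0.0, 0.0
x_des = 0.1                          # 10 cm target displacement
K = 400.0                            # stiffness (would be varied along the task)
D = 2 * np.sqrt(K * m)               # critical damping for the chosen K

for _ in range(3000):                # simulate 3 s with semi-implicit Euler
    F = K * (x_des - x) - D * xd     # spring-damper force toward the reference
    xd += (F / m) * dt
    x += xd * dt
```

Lowering `K` along contact-rich segments of the trajectory is what makes the execution compliant; the demonstration-learned reference supplies `x_des` at each time step.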
Even the most robust autonomous behaviors can fail. The goal of this work is to recover from failures, and to collect data from them during autonomous task execution, so that they can be prevented in the future. We propose haptic intervention for real-time failure recovery and data collection. Elly is a system that enables seamless transitions between autonomous robot behaviors and human intervention, while collecting sensory information from the human's recovery strategies. The system and our design choices are experimentally validated on a single-arm task (mounting a light bulb in a socket) and a bimanual task (screwing a cap onto a bottle) using two manipulators equipped with 4-fingered grippers. In these examples, Elly achieves over 80% task completion across a total of 40 runs.
Learning generalizable insertion skills in a data-efficient manner has long been a challenge in the robot learning community. While the current state-of-the-art methods with reinforcement learning (RL) show promising performance in acquiring manipulation skills, the algorithms are data-hungry and hard to generalize. To overcome the issues, in this paper we present Prim-LAfD, a simple yet effective framework to learn and adapt primitive-based insertion skills from demonstrations. Prim-LAfD utilizes black-box function optimization to learn and adapt the primitive parameters leveraging prior experiences. Human demonstrations are modeled as dense rewards guiding parameter learning. We validate the effectiveness of the proposed method on eight peg-hole and connector-socket insertion tasks. The experimental results show that our proposed framework takes less than one hour to acquire the insertion skills and as few as fifteen minutes to adapt to an unseen insertion task on a physical robot.
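The black-box optimization of primitive parameters against a demonstration-derived dense reward can be sketched as follows. The two-parameter "primitive", the reward, and the cross-entropy-method optimizer are illustrative stand-ins; the paper's actual primitives, reward shaping, and optimizer may differ.

```python
import numpy as np

rng = np.random.default_rng(3)

def rollout(params):
    """Hypothetical 2-parameter insertion primitive: an approach offset and a
    spiral-search radius produce an end-effector path. The real system would
    execute the primitive on a physical robot instead."""
    offset, radius = params
    t = np.linspace(0, 1, 50)
    return np.stack([offset + radius * np.cos(8 * np.pi * t) * (1 - t),
                     radius * np.sin(8 * np.pi * t) * (1 - t)], axis=1)

demo = rollout(np.array([0.02, 0.005]))   # pretend this came from a human demo

def dense_reward(path):
    # Demonstration as dense reward: closeness of the rollout to the demo.
    return -np.linalg.norm(path - demo)

# Tiny cross-entropy method over the primitive parameters.
mu, sigma = np.zeros(2), np.ones(2) * 0.05
for _ in range(30):
    cand = mu + sigma * rng.standard_normal((40, 2))
    scores = np.array([dense_reward(rollout(c)) for c in cand])
    elite = cand[np.argsort(scores)[-10:]]           # keep the 10 best
    mu, sigma = elite.mean(axis=0), elite.std(axis=0) + 1e-6

best_reward = dense_reward(rollout(mu))
```

Warm-starting `mu` and `sigma` from parameters learned on earlier insertion tasks is one way to realize the "leveraging prior experiences" aspect.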
In this paper, we discuss a framework for teaching bimanual manipulation tasks through imitation. To this end, we present a system and algorithms for learning compliant and contact-rich robot behavior from human demonstrations. The presented system combines insights from admittance control and machine learning to extract control policies that can (a) recover from and adapt to a variety of disturbances in time and space, while also (b) effectively leveraging physical contact with the environment. We demonstrate the effectiveness of our approach using a real-world insertion task involving multiple simultaneous contacts between a manipulated object and insertion pegs. We also investigate efficient ways of collecting training data for such bimanual settings. To this end, we conduct human-subject studies and analyze the effort and mental demand reported by the users. Our experiments show that, while harder to provide, the additional force/torque information available in teleoperated demonstrations is crucial for phase estimation and task success. Ultimately, force/torque data substantially improves manipulation robustness, resulting in a 90% success rate in a multipoint insertion task. Code and videos can be found at https://bimanualmanipulation.com/
Robots need to be able to adapt to unexpected changes in the environment such that they can autonomously succeed in their tasks. However, hand-designing feedback models for adaptation is tedious, if at all possible, making data-driven methods a promising alternative. In this paper we introduce a full framework for learning feedback models for reactive motion planning. Our pipeline starts by segmenting demonstrations of a complete task into motion primitives via a semi-automated segmentation algorithm. Then, given additional demonstrations of successful adaptation behaviors, we learn initial feedback models through learning from demonstrations. In the final phase, a sample-efficient reinforcement learning algorithm fine-tunes these feedback models for novel task settings through few real system interactions. We evaluate our approach on a real anthropomorphic robot in learning a tactile feedback task.
Humans intuitively solve tasks in versatile ways, varying their behavior in terms of trajectory-based planning and for individual steps. Thus, they can easily generalize and adapt to new and changing environments. Current Imitation Learning algorithms often only consider unimodal expert demonstrations and act in a state-action-based setting, making it difficult for them to imitate human behavior in case of versatile demonstrations. Instead, we combine a mixture of movement primitives with a distribution matching objective to learn versatile behaviors that match the expert's behavior and versatility. To facilitate generalization to novel task configurations, we do not directly match the agent's and expert's trajectory distributions but rather work with concise geometric descriptors which generalize well to unseen task configurations. We empirically validate our method on various robot tasks using versatile human demonstrations and compare to imitation learning algorithms in a state-action setting as well as a trajectory-based setting. We find that the geometric descriptors greatly help in generalizing to new task configurations and that combining them with our distribution-matching objective is crucial for representing and reproducing versatile behavior.
Sim-to-real transfer is a powerful paradigm for robotic reinforcement learning. The ability to train policies in simulation enables safe exploration and large-scale data collection quickly at low cost. However, prior work on sim-to-real transfer of robotic policies typically does not involve any human-robot interaction, because accurately simulating human behavior is an open problem. In this work, our goal is to leverage the power of simulation to train robot policies that are proficient at interacting with humans upon deployment. But there is a chicken-and-egg problem: how do we gather examples of a human interacting with a physical robot, so as to model human behavior in simulation, without already having a robot that is able to interact with a human? Our proposed method, Iterative-Sim-to-Real (i-S2R), attempts to address this. i-S2R bootstraps from a simple model of human behavior and alternates between training in simulation and deploying in the real world. In each iteration, both the human behavior model and the policy are refined. We evaluate our method in a real-world robotic table tennis setting, where the robot's goal is to play cooperatively with a human player for as long as possible. Table tennis is a high-speed, dynamic task that requires the two players to react quickly to each other's moves, making it a challenging test bed for research on human-robot interaction. We present results on an industrial robot arm that is able to play table tennis cooperatively with human players, achieving rallies of 22 successive hits on average and 150 at best. Further, for 80% of players, rally lengths are 70% to 175% longer compared to a sim-to-real (S2R) baseline. For videos of our system, please see https://sites.google.com/view/is2r.
Robots have been steadily increasing their presence in our daily lives, where they can work along with humans to provide assistance in various tasks on industry floors, in offices, and in homes. Automated assembly is one of the key applications of robots, and the next generation assembly systems could become much more efficient by creating collaborative human-robot systems. However, although collaborative robots have been around for decades, their application in truly collaborative systems has been limited. This is because a truly collaborative human-robot system needs to adjust its operation with respect to the uncertainty and imprecision in human actions, ensure safety during interaction, etc. In this paper, we present a system for human-robot collaborative assembly using learning from demonstration and pose estimation, so that the robot can adapt to the uncertainty caused by the operation of humans. Learning from demonstration is used to generate motion trajectories for the robot based on the pose estimate of different goal locations from a deep learning-based vision system. The proposed system is demonstrated using a physical 6 DoF manipulator in a collaborative human-robot assembly scenario. We show successful generalization of the system's operation to changes in the initial and final goal locations through various experiments.
In this survey, we present the current state of robots performing manipulation tasks that require varying contact with the environment, such that the robot must implicitly or explicitly control the contact force with the environment to complete the task. Robots can perform more and more human-like manipulation tasks, and there is a growing number of publications on the topics of 1) performing tasks that always require contact and 2) mitigating uncertainty by leveraging the environment in tasks that, with perfect information, could be performed without contact. Recent trends have seen robots take on tasks previously left to humans, such as massage, while in classical tasks such as peg-in-hole, robots are generalizing more efficiently to other similar tasks, tolerating errors better, and planning or learning tasks faster. Thus, in this survey we cover the current stage of robots performing such tasks, starting by surveying the different in-contact tasks robots can perform, observing how these tasks are controlled and represented, and finally presenting the learning and planning of the skills required to complete them.
Learning to play table tennis is a challenging task for robots, owing to the variety of strokes required. Recent advances have shown that deep reinforcement learning (RL) is able to successfully learn optimal actions in a simulated environment. However, the applicability of RL in real-world scenarios remains limited due to the high exploration effort. In this work, we propose a realistic simulation environment in which multiple models are built for the ball's dynamics and the robot's kinematics. Instead of training an end-to-end RL model, a novel policy gradient approach with a TD3 backbone is proposed to learn the racket strokes based on the predicted state of the ball at hitting time. In experiments, we show that the proposed approach significantly outperforms existing RL methods in simulation. Furthermore, to cross the domain from simulation to reality, we adopt an efficient retraining method and test it in three real-world scenarios. The resulting success rate is 98% and the distance error is around 24.9 cm. The total training time is about 1.5 hours.
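Conditioning the stroke on the predicted ball state at hitting time requires a ball-flight predictor. A minimal drag-free ballistic stand-in is sketched below; the real system would also model spin and air drag, and all numbers here are illustrative.

```python
import numpy as np

g = 9.81  # gravitational acceleration, m/s^2

def predict_hit_state(p0, v0, hit_plane_x=0.0):
    """Predict time, position, and velocity when the ball reaches the robot's
    hitting plane x = hit_plane_x, assuming drag-free projectile motion."""
    t_hit = (hit_plane_x - p0[0]) / v0[0]           # time to cross the plane
    g_vec = np.array([0.0, 0.0, g])
    p = p0 + v0 * t_hit - 0.5 * g_vec * t_hit**2    # position at hit time
    v = v0 - g_vec * t_hit                          # velocity at hit time
    return t_hit, p, v

# Ball launched toward the robot from 2 m away at 4 m/s horizontal speed,
# 1 m height, and 1 m/s upward velocity.
t_hit, p_hit, v_hit = predict_hit_state(np.array([-2.0, 0.0, 1.0]),
                                        np.array([4.0, 0.0, 1.0]))
```

The stroke policy would then take `(p_hit, v_hit)` as input and output racket pose and velocity parameters for the hit.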
This paper introduces a novel type of distributed dexterous manipulator: delta arrays. Each delta array consists of a grid of linearly actuated delta robots with compliant, 3D-printed parallelogram linkages. These arrays can be used to perform planar transportation tasks similar to smart conveyors. However, the deltas' additional degrees of freedom also afford a variety of distinct planar manipulations, as well as prehensile manipulation between sets of deltas. Delta arrays thus offer a wide range of distributed manipulation strategies. In this paper, we present the design of the delta arrays, including the individual deltas, the modular array structure, and the distributed communication and control. We also build and evaluate an 8x8 array using the proposed design. Our evaluation shows that the resulting 192-DoF robot is capable of performing various coordinated distributed manipulations of a variety of objects, including translation, alignment, and prehensile squeezing.
This paper presents a combined learning and optimization framework for mobile manipulators performing complex and physically interactive tasks. The framework exploits an admittance-type physical interface to obtain intuitive and simplified human demonstrations, and a Gaussian Mixture Model (GMM)/Gaussian Mixture Regression (GMR) to encode and generate the learned task requirements in terms of position, velocity, and force profiles. Next, using the desired trajectories and force profiles generated by GMM/GMR, the impedance parameters of a Cartesian impedance controller are optimized online through a quadratic program augmented with an energy tank, to ensure the passivity of the controlled system. Two experiments are conducted to validate the framework, comparing our method against two cases with constant stiffness (high and low). The results show that the method outperforms the other two cases in terms of trajectory tracking and generated interaction forces, even in the presence of disturbances such as unexpected end-effector collisions.
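The GMM/GMR encoding above can be illustrated in one dimension: fit a Gaussian mixture over (time, position) pairs from demonstrations, then condition on time to regress the reference position. A minimal sketch follows; for brevity the components are initialized by slicing time uniformly instead of running full EM, and the data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(4)

# Five noisy "demonstrations" of a sine-shaped reference trajectory.
t = np.tile(np.linspace(0, 1, 100), 5)
y = np.sin(2 * np.pi * t) + 0.05 * rng.standard_normal(t.size)
data = np.stack([t, y], axis=1)

# Build a K-component joint GMM over (t, y) from uniform time slices.
K = 8
edges = np.linspace(0, 1, K + 1)
means, covs, priors = [], [], []
for k in range(K):
    chunk = data[(t >= edges[k]) & (t <= edges[k + 1])]
    means.append(chunk.mean(axis=0))
    covs.append(np.cov(chunk.T) + 1e-6 * np.eye(2))
    priors.append(len(chunk) / len(data))
means, covs, priors = np.array(means), np.array(covs), np.array(priors)

def gmr(t_query):
    """Condition the joint GMM on time to get E[y | t] (standard GMR)."""
    h = np.array([priors[k] * np.exp(-0.5 * (t_query - means[k, 0]) ** 2
                                     / covs[k, 0, 0]) / np.sqrt(covs[k, 0, 0])
                  for k in range(K)])
    h /= h.sum()                              # component responsibilities
    y_k = means[:, 1] + covs[:, 1, 0] / covs[:, 0, 0] * (t_query - means[:, 0])
    return h @ y_k                            # responsibility-weighted regression

y_hat = gmr(0.25)
```

In the framework above, the same conditioning is applied jointly to position, velocity, and force dimensions, so one query time yields a full reference profile for the impedance controller.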