Robots have been steadily increasing their presence in our daily lives, where they can work along with humans to provide assistance in various tasks on industry floors, in offices, and in homes. Automated assembly is one of the key applications of robots, and the next generation assembly systems could become much more efficient by creating collaborative human-robot systems. However, although collaborative robots have been around for decades, their application in truly collaborative systems has been limited. This is because a truly collaborative human-robot system needs to adjust its operation with respect to the uncertainty and imprecision in human actions, ensure safety during interaction, etc. In this paper, we present a system for human-robot collaborative assembly using learning from demonstration and pose estimation, so that the robot can adapt to the uncertainty caused by the operation of humans. Learning from demonstration is used to generate motion trajectories for the robot based on the pose estimate of different goal locations from a deep learning-based vision system. The proposed system is demonstrated using a physical 6 DoF manipulator in a collaborative human-robot assembly scenario. We show successful generalization of the system's operation to changes in the initial and final goal locations through various experiments.
We present a learning-based compliance controller for assembly operations with industrial robots. We propose a solution in the general setting of learning from demonstration (LfD), in which a nominal trajectory is provided through an expert teacher demonstration. This is used to learn a suitable representation that can generalize to novel poses of one of the parts involved in the assembly, e.g., in a peg-in-hole (PiH) insertion task. In the expected case where a vision or other sensing system does not estimate such novel poses with perfect accuracy, the robot needs to further modify the generated trajectory in response to force readings measured by a force-torque (F/T) sensor mounted on the robot's wrist or another suitable location. Under the assumption of a constant speed of traversing the reference trajectory during assembly, we propose a novel accommodation force controller that allows the robot to safely explore different contact configurations. Data collected with this controller is used to train a Gaussian process model to predict the misalignment of the peg relative to the target hole. We show that the proposed learning-based approach can correct various contact configurations caused by misalignment between the assembled parts in the PiH task, achieving a high success rate during insertion. We demonstrate results on an industrial manipulator arm and show that the proposed method can perform adaptive insertion using force feedback from the trained machine learning model.
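As a toy illustration of the Gaussian-process step described above (this is not the paper's implementation; the wrench data, kernel, and noise level below are all invented for the sketch), a closed-form GP posterior mean can regress peg-hole misalignment from force-torque readings:

```python
import numpy as np

def rbf_kernel(A, B, length_scale=1.0):
    # Squared-exponential kernel between the rows of A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length_scale ** 2)

def gp_predict(X_train, Y_train, X_query, noise=1e-3):
    # Closed-form GP posterior mean: k(X*, X) (K + noise * I)^(-1) Y.
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    return rbf_kernel(X_query, X_train) @ np.linalg.solve(K, Y_train)

# Synthetic stand-in for wrist F/T readings: the lateral force components
# grow with the (hidden) peg-hole misalignment that we want to regress.
rng = np.random.default_rng(0)
misalign = rng.uniform(-2e-3, 2e-3, size=(40, 2))               # metres
wrench = np.hstack([200.0 * misalign, rng.normal(0, 0.05, (40, 4))])
pred = gp_predict(wrench, misalign, wrench[:5])
print(pred.shape)  # one (x, y) misalignment estimate per queried wrench
```

In the actual system such predictions would feed back into trajectory modification; here the model is only queried on its own training wrenches to show the input/output shapes.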
Many high-precision (dis)assembly tasks are still performed by humans, and these are ideal opportunities for automation. This paper presents a framework that enables non-expert human operators to teach a robot arm to perform complex precision tasks. The framework uses a variable Cartesian impedance controller to execute trajectories learned from kinesthetic human demonstrations. Feedback can be given to interactively reshape or speed up the original demonstration. Board localization is done through a visual estimate of the task-board position and refined with haptic feedback. Our framework is tested on the Robothon benchmark disassembly challenge, in which the robot has to perform complex precision tasks such as key insertion. The results show high success rates for each of the manipulation subtasks, including cases with novel board poses. Ablation studies were also conducted to evaluate the components of the framework.
In this survey, we present the current status of robots performing manipulation tasks that require varying contact with the environment, such that the robot must implicitly or explicitly control the contact force with the environment to complete the task. Robots can perform a growing number of human-style manipulation tasks, and there is a growing number of publications on 1) tasks whose execution always requires contact and 2) tasks that, with perfect information, could be performed without contact, where the environment is instead exploited to mitigate uncertainty. Recent trends have seen robots performing tasks previously reserved for humans, such as massage, and, for classical tasks such as peg-in-hole, achieving better generalization to other similar tasks, better error tolerance, and faster planning or learning. Thus, in this survey we cover the current state of robots performing such tasks, beginning with a survey of all the different contact tasks robots can perform, then observing how these tasks are controlled and represented, and finally presenting the learning and planning of the skills required to complete them.
In this paper, we discuss a framework for teaching bimanual manipulation tasks by imitation. To this end, we present a system and algorithms for learning compliant and contact-rich robot behavior from human demonstrations. The proposed system combines insights from admittance control and machine learning to extract control policies that can (a) recover from and adapt to a variety of disturbances in time and space, while (b) effectively leveraging physical contact with the environment. We demonstrate the effectiveness of our approach using a real-world insertion task involving multiple simultaneous contacts between a manipulated object and insertion pegs. We also investigate efficient ways of collecting training data for such bimanual settings. To this end, we conduct a human-subject study and analyze the effort and mental demand reported by users. Our experiments show that, while harder to provide, the additional force/torque information available in teleoperated demonstrations is crucial for phase estimation and task success. Ultimately, force/torque data substantially improves manipulation robustness, resulting in a 90% success rate in a multipoint insertion task. Code and videos can be found at https://bimanualmanipulation.com/
The use of large datasets in machine learning has led to outstanding results, in some cases outperforming humans on tasks once believed impossible for machines. However, achieving human-level performance in physically interactive tasks, e.g., contact-rich robotic manipulation, remains a major challenge. It is well known that regulating the Cartesian impedance for such operations is crucial for successful execution. Methods such as reinforcement learning (RL) can be a promising paradigm for such problems. More precisely, approaches that use task-agnostic expert demonstrations can leverage large datasets and have great potential for solving new tasks. However, existing data-collection systems are expensive, complex, or do not allow for impedance regulation. This work is a first step toward a data-collection framework suitable for gathering large datasets of impedance-based expert demonstrations compatible with an RL problem formulation using a novel action space. The framework was designed according to requirements derived from an extensive analysis of available data-collection frameworks for robotic manipulation. The result is a low-cost, open tele-impedance framework that enables human experts to demonstrate contact-rich tasks.
Even the most robust autonomous behaviors can fail. The goal of this work is to recover from failures during autonomous task execution and to collect data from those failures so that they can be prevented in the future. We propose haptic intervention for real-time failure recovery and data collection. Elly is a system that enables seamless transitions between autonomous robot behavior and human intervention while collecting sensory information from the human recovery strategies. The system and our design choices are experimentally validated on a single-arm task — mounting a light bulb in a socket — and a bimanual task — screwing a cap onto a bottle — using two manipulators equipped with 4-fingered grippers. In these examples, Elly achieves over 80% task completion across a total of 40 runs.
This paper presents a hybrid learning and optimization framework for mobile manipulators performing complex and physically interactive tasks. The framework exploits an admittance-type physical interface to obtain intuitive and simplified human demonstrations, and a Gaussian Mixture Model (GMM)/Gaussian Mixture Regression (GMR) to encode and generate the learned task requirements in terms of position, velocity, and force profiles. Next, using the desired trajectories and force profiles generated by GMM/GMR, the impedance parameters of a Cartesian impedance controller are optimized online through a quadratic program augmented with an energy tank, ensuring the passivity of the controlled system. Two experiments were conducted to validate the framework, comparing our method against two constant-stiffness (high and low) approaches. The results show that the proposed method outperforms the other two cases in terms of trajectory tracking and generated interaction forces, even in the presence of disturbances such as unexpected end-effector collisions.
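To make the GMM/GMR step concrete, here is a minimal sketch of Gaussian mixture regression over (phase, position) pairs. The components below are hand-set rather than fitted to demonstrations with EM, and the numbers are invented; it only illustrates the conditional-mean computation:

```python
import numpy as np

# Hand-set GMM over (t, x) pairs, standing in for components fitted to
# demonstrations with EM. GMR regresses the trajectory value x from the
# phase variable t as a responsibility-weighted conditional mean.
weights = np.array([1 / 3, 1 / 3, 1 / 3])
means = np.array([[0.2, 0.0], [0.5, 1.0], [0.8, 0.0]])   # (mu_t, mu_x)
covs = np.array([np.diag([0.02, 0.05])] * 3)             # per-component cov

def gmr(t):
    # Responsibilities h_k(t) proportional to w_k * N(t | mu_t_k, S_tt_k).
    h = np.array([w * np.exp(-0.5 * (t - m[0]) ** 2 / c[0, 0]) / np.sqrt(c[0, 0])
                  for w, m, c in zip(weights, means, covs)])
    h /= h.sum()
    # Per-component conditional mean: mu_x + S_xt / S_tt * (t - mu_t).
    cond = [m[1] + c[1, 0] / c[0, 0] * (t - m[0]) for m, c in zip(means, covs)]
    return float(h @ np.array(cond))

print(round(gmr(0.5), 2))  # dominated by the middle component near t = 0.5
```

In the paper's pipeline the same conditional machinery also produces velocity and force profiles; this sketch regresses a single scalar for brevity.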
This paper presents a system integration approach for a 6-DoF (degree of freedom) collaborative robot to operate a pipette for automated liquid dispensing. Its technical development is threefold. First, we designed an end-effector for holding and triggering manual pipettes. Second, we leveraged the advantages of the collaborative robot to recognize labware poses and plan robot motions based on the recognized poses. Third, we developed vision-based classifiers to predict and correct positioning errors, thereby precisely attaching disposable tips. Through experiments and analysis, we confirm that the developed system, and especially the planning and visual recognition methods, helps ensure high-precision and flexible liquid dispensing. The developed system is suitable for low-frequency, high-variety biochemical liquid dispensing tasks. We expect it to promote the deployment of collaborative robots for laboratory automation and thereby improve experimental efficiency without significant customization of the laboratory environment.
Robotic teleoperation is a key technology for a wide variety of applications. It allows sending robots instead of humans in remote, possibly dangerous locations while still using the human brain with its enormous knowledge and creativity, especially for solving unexpected problems. A main challenge in teleoperation consists of providing enough feedback to the human operator for situation awareness and thus create full immersion, as well as offering the operator suitable control interfaces to achieve efficient and robust task fulfillment. We present a bimanual telemanipulation system consisting of an anthropomorphic avatar robot and an operator station providing force and haptic feedback to the human operator. The avatar arms are controlled in Cartesian space with a direct mapping of the operator movements. The measured forces and torques on the avatar side are haptically displayed to the operator. We developed a predictive avatar model for limit avoidance which runs on the operator side, ensuring low latency. The system was successfully evaluated during the ANA Avatar XPRIZE competition semifinals. In addition, we performed in-lab experiments and carried out a small user study with mostly untrained operators.
Policy search methods can allow robots to learn control policies for a wide range of tasks, but practical applications of policy search often require hand-engineered components for perception, state estimation, and low-level control. In this paper, we aim to answer the following question: does training the perception and control systems jointly end-to-end provide better performance than training each component separately? To this end, we develop a method that can be used to learn policies that map raw image observations directly to torques at the robot's motors. The policies are represented by deep convolutional neural networks (CNNs) with 92,000 parameters, and are trained using a guided policy search method, which transforms policy search into supervised learning, with supervision provided by a simple trajectory-centric reinforcement learning method. We evaluate our method on a range of real-world manipulation tasks that require close coordination between vision and control, such as screwing a cap onto a bottle, and present simulated comparisons to a range of prior policy search methods.
This paper presents a mobile supernumerary robotic approach to physical assistance in human-robot conjoined actions. The study begins with a description of the SUPER-MAN concept. The idea is to develop and exploit mobile collaborative systems that can follow human loco-manipulation commands to perform industrial tasks through three main components: i) a physical interface, ii) a human-robot interaction controller, and iii) a supernumerary robotic body. Next, we present two possible implementations within the framework, from theoretical and hardware perspectives. The first system, called MOCA-MAN, consists of a redundant torque-controlled robotic arm and an omnidirectional mobile platform. The second, called Kairos-MAN, is formed by a high-payload 6-DoF velocity-controlled robotic arm and an omnidirectional mobile platform. The systems share the same admittance interface, through which user wrenches are converted into loco-manipulation commands generated by each system's whole-body controller. In addition, a thorough user study with multiple subjects of different genders is presented to reveal the quantitative performance of the two systems in effortful and dexterous tasks. Moreover, we provide qualitative results from the NASA-TLX questionnaire to demonstrate the potential of the supernumerary approach and its acceptability from the users' viewpoint.
We describe a framework for changing-contact robot manipulation tasks, which require the robot to make and break contacts with objects and surfaces. The discontinuous interaction dynamics of such tasks make it difficult to construct and use a single dynamics model or control policy, and the highly nonlinear nature of the dynamics during contact changes can be damaging to the robot and the objects. We present an adaptive control framework that enables the robot to incrementally learn to predict contact changes in changing-contact tasks, learn the interaction dynamics of the piecewise-continuous system, and provide smooth and accurate trajectory tracking using a task-space variable-impedance controller. We experimentally compare the performance of our framework against that of representative control methods to establish that the adaptive control and incremental learning components of our framework are needed for smooth control in the presence of the discontinuous dynamics of changing-contact robot manipulation tasks.
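A variable-impedance controller of the kind mentioned above can be sketched with the standard task-space impedance law; the gains and positions below are invented for illustration, and real controllers schedule the stiffness from predicted contact events rather than hard-coding two modes:

```python
import numpy as np

def impedance_wrench(x, xd, x_ref, xd_ref, K, D):
    # Task-space impedance law: F = K (x_ref - x) + D (xd_ref - xd).
    return K @ (x_ref - x) + D @ (xd_ref - xd)

# Near an anticipated contact change, the stiffness is scaled down so that
# impact forces stay small; away from contact, the robot tracks stiffly.
K_free = np.diag([800.0, 800.0, 800.0])      # N/m, free-space stiffness
K_contact = 0.2 * K_free                     # softened near contact
D = np.diag([60.0, 60.0, 60.0])              # N s/m, damping

x = np.array([0.0, 0.0, 0.101])              # current end-effector pos (m)
x_ref = np.array([0.0, 0.0, 0.100])          # reference just below it
zero = np.zeros(3)
print(impedance_wrench(x, zero, x_ref, zero, K_free, D)[2])     # ~ -0.8 N
print(impedance_wrench(x, zero, x_ref, zero, K_contact, D)[2])  # ~ -0.16 N
```

The same 1 mm tracking error thus produces a five-times-smaller commanded force in the softened mode, which is the mechanism that protects robot and object during contact transitions.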
Reinforcement learning can acquire complex behaviors from high-level specifications. However, defining a cost function that can be optimized effectively and encodes the correct task is challenging in practice. We explore how inverse optimal control (IOC) can be used to learn behaviors from demonstrations, with applications to torque control of high-dimensional robotic systems. Our method addresses two key challenges in inverse optimal control: first, the need for informative features and effective regularization to impose structure on the cost, and second, the difficulty of learning the cost function under unknown dynamics for high-dimensional continuous systems. To address the former challenge, we present an algorithm capable of learning arbitrary nonlinear cost functions, such as neural networks, without meticulous feature engineering. To address the latter challenge, we formulate an efficient sample-based approximation for MaxEnt IOC. We evaluate our method on a series of simulated tasks and real-world robotic manipulation problems, demonstrating substantial improvement over prior methods both in terms of task complexity and sample efficiency.
Grasping is the process of picking up an object by applying forces and torques at a set of contacts. Recent advances in deep learning methods have enabled rapid progress in robotic object grasping. We systematically surveyed the publications of the past decade, with a particular interest in grasping objects using all six degrees of freedom of the end-effector pose. Our review found four common methodologies for robotic grasping: sampling-based approaches, direct regression, reinforcement learning, and exemplar approaches. Furthermore, we identified two "supporting methods" around grasping that use deep learning to support the grasping process: shape approximation and affordances. We have distilled the publications found in this systematic review (85 papers) into ten key takeaways that we consider crucial for future robotic grasping and manipulation research. An online version of the survey is available at https://rhys-newbury.github.io/projects/6dof/
This paper surveys the literature on human-robot object handovers. A handover is a collaborative joint action in which one agent, the giver, gives an object to another agent, the receiver. The physical exchange starts when the receiver first makes contact with the object held by the giver, and ends when the giver fully releases the object to the receiver. However, important cognitive and physical processes begin before the physical exchange, including initiating implicit agreements on the location and timing of the exchange. From this perspective, we structure our review around the two main phases delimited by these events: 1) the pre-handover phase, and 2) the physical exchange. We focus our analysis on the two actors (giver and receiver) and report the state of the art for robotic givers (robot-to-human handovers) and robotic receivers (human-to-robot handovers). We report a comprehensive list of qualitative and quantitative metrics commonly used to assess the interaction. While focusing our review on the cognitive level (e.g., prediction, perception, motion planning, learning) and the physical level (e.g., motion, grasping, grip release), we briefly discuss the concepts of safety, social context, and ergonomics. We compare the behaviors displayed during human-to-human handovers with the state of the art of robotic assistants, and identify the major areas of improvement needed for robotic assistants to reach performance comparable to human interactions. Finally, we propose a minimal set of metrics that should be used in order to allow a fair comparison among approaches.
Learning generalizable insertion skills in a data-efficient manner has long been a challenge in the robot learning community. While the current state-of-the-art methods with reinforcement learning (RL) show promising performance in acquiring manipulation skills, the algorithms are data-hungry and hard to generalize. To overcome the issues, in this paper we present Prim-LAfD, a simple yet effective framework to learn and adapt primitive-based insertion skills from demonstrations. Prim-LAfD utilizes black-box function optimization to learn and adapt the primitive parameters leveraging prior experiences. Human demonstrations are modeled as dense rewards guiding parameter learning. We validate the effectiveness of the proposed method on eight peg-hole and connector-socket insertion tasks. The experimental results show that our proposed framework takes less than one hour to acquire the insertion skills and as few as fifteen minutes to adapt to an unseen insertion task on a physical robot.
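The black-box parameter search with demonstration-derived dense rewards can be sketched generically. This is not Prim-LAfD's actual optimizer: the cross-entropy-style loop, the 2-D insertion-offset parameter, and the demonstration endpoint below are all hypothetical stand-ins:

```python
import numpy as np

rng = np.random.default_rng(1)
demo_goal = np.array([0.03, -0.01])   # endpoint extracted from a demonstration

def dense_reward(theta):
    # Demonstration acts as a dense reward: closer to the demo endpoint
    # is better (negative Euclidean distance).
    return -np.linalg.norm(theta - demo_goal)

# Generic cross-entropy-style black-box search over primitive parameters.
mu, sigma = np.zeros(2), 0.05
for _ in range(30):
    samples = rng.normal(mu, sigma, size=(64, 2))
    scores = [dense_reward(s) for s in samples]
    elite = samples[np.argsort(scores)[-8:]]          # keep the best 8
    mu, sigma = elite.mean(0), elite.std(0) + 1e-4    # refit the sampler
print(np.linalg.norm(mu - demo_goal) < 1e-2)          # converged near demo
```

Because the reward is dense rather than sparse success/failure, even a crude sampler like this homes in quickly, which mirrors the data-efficiency argument made in the abstract.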
Object manipulation from 3D visual inputs poses many challenges for building generalizable perception and policy models. However, the 3D assets in existing benchmarks mostly lack the diversity of real-world intra-class complexity of 3D shapes in topology and geometry. Here we propose the SAPIEN Manipulation Skill Benchmark (ManiSkill) to benchmark manipulation skills over diverse objects in a full-physics simulator. The 3D assets in ManiSkill include large intra-class topological and geometric variations. Tasks are carefully chosen to cover distinct types of manipulation challenges. Recent advances in 3D vision also make us believe that we should customize the benchmark so that the challenge invites researchers working on 3D deep learning. To this end, we simulate a moving panoramic camera that returns ego-centric point clouds or RGB-D images. In addition, we would like ManiSkill to serve a broad set of researchers interested in manipulation research. Besides supporting policy learning from interaction, we also support learning-from-demonstration (LfD) methods by providing a large number of high-quality demonstrations (~36,000 successful trajectories, ~1.5M point cloud/RGB-D frames in total). We provide baselines using 3D deep learning and LfD algorithms. All code of our benchmark (simulator, environments, SDK, and baselines) is open-sourced, and a challenge facing interdisciplinary researchers will be held based on the benchmark.
Human life is invaluable. When dangerous or life-threatening tasks need to be accomplished, robotic platforms could be ideal substitutes for human operators. The task we focus on in this work is explosive ordnance disposal. Given the strong capabilities mobile robots have demonstrated when operating in multiple environments, robotic telepresence has the potential to provide a safe solution. However, at this stage, full autonomy can be challenging and risky compared to human operation. Teleoperation can be a compromise between full robot autonomy and human presence. In this paper, we present a relatively inexpensive solution for telepresence and robot teleoperation to assist in explosive ordnance disposal using a legged manipulator (i.e., a quadruped robot equipped with an arm and RGB-D sensing). We propose a novel system integration to address the non-trivial problem of quadruped whole-body control. Our system is based on a wearable IMU-based motion-capture system used for teleoperation and a VR headset for visual telepresence. We experimentally validated our approach in the real world on a loco-manipulation task requiring whole-body robot control and visual telepresence.
Imitation learning techniques aim to mimic human behavior in a given task. An agent (a learning machine) is trained to perform a task from demonstrations by learning a mapping between observations and actions. The idea of teaching by imitation has been around for many years; however, the field has recently been gaining attention due to advances in computing and sensing, as well as rising demand for intelligent applications. The paradigm of learning by imitation is gaining popularity because it facilitates teaching complex tasks with minimal expert knowledge of the tasks. Generic imitation learning methods could potentially reduce the problem of teaching a task to that of providing demonstrations, without the need for explicit programming or designing reward functions specific to the task. Modern sensors are able to collect and transmit high volumes of data rapidly, and processors with high computational power allow fast processing that maps the sensory data to actions in a timely manner. This opens the door for many potential AI applications that require real-time perception and reaction, such as humanoid robots, self-driving vehicles, human-computer interaction, and computer games, to name a few. However, specialized algorithms are needed to effectively and robustly learn models, as learning by imitation poses its own set of challenges. In this paper, we survey imitation learning methods and present design options in different steps of the learning process. We introduce a background and motivation for the field as well as highlight challenges specific to the imitation problem. Methods for designing and evaluating imitation learning tasks are categorized and reviewed. Special attention is given to learning methods in robotics and games, as these domains are the most popular in the literature and provide a wide array of problems and methodologies.
We extensively discuss combining imitation learning approaches using different sources and methods, as well as incorporating other motion learning methods to enhance imitation. We also discuss the potential impact on industry, present major applications and highlight current and future research directions.