Sharing autonomy between robots and human operators could facilitate the collection of robot task demonstrations for continuously improving learned models. Yet the means of communicating intent and reasoning about the future differ between humans and robots. We introduce Assistive Tele-Op, a virtual reality (VR) system for collecting robot task demonstrations that displays an autonomous trajectory forecast to communicate the robot's intent. As the robot moves, the user can switch between autonomous and manual control as needed. This allows users to collect task demonstrations with both a higher success rate and greater ease than with manual teleoperation systems. Our system is powered by transformers, which provide a window of potential future states and actions with almost no added computation time. A key insight is that human intent can be injected at any position within the transformer sequence if the user decides that the model-predicted actions are inappropriate. At every time step, the user can (1) do nothing and let autonomous operation continue while observing the robot's planned future sequence, or (2) take over and momentarily prescribe a different set of actions to nudge the model back on track. We host videos and other supplementary material at https://sites.google.com/view/assistive-teleop.
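As a rough illustration of the interaction loop described above (not the paper's code), the sketch below uses a toy stand-in for the transformer forecast: at every step the model proposes a short window of future actions, and the user either lets the plan run or injects a corrective action, which then becomes part of the sequence the model conditions on. All function names and the random-walk "model" are hypothetical.

```python
import numpy as np

def forecast(history, horizon=5, rng=None):
    """Stand-in for the transformer forecast: predicts a window of future
    actions from the state-action history (here, a random-walk toy model)."""
    rng = rng or np.random.default_rng(0)
    last = history[-1]
    return [last + rng.normal(0, 0.05, size=last.shape) * (i + 1)
            for i in range(horizon)]

def shared_autonomy_rollout(initial_action, human_override, steps=20):
    """At each step the user either does nothing (the autonomous plan continues)
    or injects a corrective action, which enters the sequence context."""
    history = [np.asarray(initial_action, dtype=float)]
    for t in range(steps):
        plan = forecast(history)            # robot's displayed future plan
        action = human_override(t, plan)    # None -> accept the plan's first action
        if action is None:
            action = plan[0]
        history.append(np.asarray(action, dtype=float))
    return history

# Example: the user nudges the trajectory back toward the origin at step 10.
trajectory = shared_autonomy_rollout(
    initial_action=[0.0, 0.0],
    human_override=lambda t, plan: [0.0, 0.0] if t == 10 else None,
)
print(len(trajectory), trajectory[-1])
```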
Every household is different, and everyone likes things done in their own particular way. Future home robots therefore need to both reason about the sequential nature of day-to-day tasks and generalize to user preferences. To this end, we propose a Transformer Task Planner (TTP) that learns high-level actions from demonstrations by leveraging object-attribute-based representations. TTP can be pre-trained on multiple preferences and, in a simulated dishwasher-loading task, generalizes to unseen preferences using a single demonstration as a prompt. Furthermore, we demonstrate real-world object rearrangement with TTP on a Franka Panda robot arm, prompted by a single human demonstration.
Teaching a multi-fingered dexterous robot to grasp objects in the real world is a challenging problem due to its high-dimensional state and action spaces. We present a robot learning system that uses a small number of human demonstrations and learns to grasp unseen object poses under partially occluded observations. Our system leverages a small motion-capture dataset and generates a large dataset of diverse, successful trajectories for a multi-fingered robot gripper. By adding domain randomization, we show that our dataset provides robust grasping trajectories that can be transferred to a policy learner. We train a dexterous grasping policy that takes the object's point cloud as input and predicts continuous actions to grasp the object from different initial robot states. We evaluate the effectiveness of our system on a 22-DoF floating hand in simulation and on a 23-DoF Allegro robot hand with a KUKA arm in the real world. The policy learned from our dataset generalizes well to unseen object poses in both simulation and the real world.
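A minimal sketch of the dataset-generation idea, assuming a caller-supplied success check (e.g., a simulated rollout): a few motion-capture trajectories are perturbed and paired with randomized object poses, and only successful samples are kept. The names, noise scale, and pose ranges are illustrative, not the paper's.

```python
import numpy as np

def augment_grasp_dataset(seed_trajectories, n_samples, is_successful,
                          noise_scale=0.01, rng=None):
    """Expand a small motion-capture grasp dataset by perturbing joint
    trajectories and randomizing the object pose, keeping only samples
    that a simulator-based success check accepts."""
    rng = rng or np.random.default_rng(0)
    dataset = []
    while len(dataset) < n_samples:
        traj = seed_trajectories[rng.integers(len(seed_trajectories))]
        noisy = traj + rng.normal(0.0, noise_scale, size=traj.shape)
        object_pose = np.concatenate([
            rng.uniform(-0.1, 0.1, size=3),        # randomized position offset
            rng.uniform(-np.pi, np.pi, size=1),    # randomized yaw
        ])
        if is_successful(noisy, object_pose):      # e.g. simulated rollout
            dataset.append((noisy, object_pose))
    return dataset

# Toy usage with a trivial success check.
seeds = [np.zeros((50, 23))]                       # one 23-DoF trajectory
data = augment_grasp_dataset(seeds, 10, lambda t, p: True)
print(len(data))
```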
The use of human demonstrations in reinforcement learning has been shown to significantly improve agent performance. However, any requirement for a human to manually "teach" the model runs somewhat counter to the goals of reinforcement learning. This paper attempts to minimize human involvement in the learning process while retaining the performance benefits, by assisting RL training with a single human demonstration collected through an easy-to-use virtual reality simulation. Our method augments that one demonstration to generate many human-like demonstrations which, when combined with Deep Deterministic Policy Gradients and Hindsight Experience Replay (DDPG + HER), significantly improve training time on simple tasks and allow the agent to solve a complex task (block stacking) that DDPG + HER alone cannot solve. The model achieves this significant training advantage from a single human example, requiring less than a minute of human input.
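A hedged sketch of the augmentation step as described (the exact procedure is in the paper): one VR-recorded trajectory is jittered into many human-like variants, with the final reached state used as a hindsight-style goal, ready to seed a DDPG + HER replay buffer.

```python
import numpy as np

def augment_single_demo(states, actions, n_copies=100, noise=0.02, rng=None):
    """Turn one VR-recorded demonstration (states, actions) into many
    noisy variants to seed a DDPG+HER replay buffer."""
    rng = rng or np.random.default_rng(0)
    demos = []
    for _ in range(n_copies):
        s = states + rng.normal(0.0, noise, size=states.shape)
        a = np.clip(actions + rng.normal(0.0, noise, size=actions.shape), -1, 1)
        goal = s[-1]                  # hindsight-style goal: the final reached state
        demos.append({"states": s, "actions": a, "goal": goal})
    return demos

demo_states = np.linspace(0, 1, 30)[:, None] * np.ones((1, 3))   # toy 3-D path
demo_actions = np.diff(demo_states, axis=0, prepend=demo_states[:1])
replay_seed = augment_single_demo(demo_states, demo_actions)
print(len(replay_seed), replay_seed[0]["goal"])
```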
Benefiting from the flexibility and compositionality of language, humans naturally intend to use language to command embodied agents to perform complex tasks such as navigation and object manipulation. In this work, we aim to fill the last-mile gap for embodied agents: object manipulation by following human guidance, e.g., "move the red mug next to the box while keeping it upright." To this end, we introduce an Automatic Manipulation Solver (AMSolver) simulator and build a Vision-and-Language Manipulation benchmark (VLMbench) on top of it, containing diverse language instructions for robotic manipulation tasks. Specifically, modular rule-based task templates are created to automatically generate robot demonstrations with language instructions, covering diverse object shapes and appearances, action types, and motion constraints. We also develop a keypoint-based model, 6D-CLIPort, that handles multi-view observations and language input and outputs a sequence of 6-degree-of-freedom (DoF) actions. We hope the new simulator and benchmark will facilitate future research on language-guided robotic manipulation.
Large-scale data is an essential component of machine learning as demonstrated in recent advances in natural language processing and computer vision research. However, collecting large-scale robotic data is much more expensive and slower as each operator can control only a single robot at a time. To make this costly data collection process efficient and scalable, we propose Policy Assisted TeleOperation (PATO), a system which automates part of the demonstration collection process using a learned assistive policy. PATO autonomously executes repetitive behaviors in data collection and asks for human input only when it is uncertain about which subtask or behavior to execute. We conduct teleoperation user studies both with a real robot and a simulated robot fleet and demonstrate that our assisted teleoperation system reduces human operators' mental load while improving data collection efficiency. Further, it enables a single operator to control multiple robots in parallel, which is a first step towards scalable robotic data collection. For code and video results, see https://clvrai.com/pato
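As a sketch of the uncertainty-gated handover (PATO's actual uncertainty measure may differ), the snippet below uses ensemble disagreement as the trigger for requesting operator input; everything here is a toy stand-in.

```python
import numpy as np

def pato_style_step(obs, ensemble, ask_human, threshold=0.1):
    """Run the assistive policy autonomously unless its ensemble members
    disagree too much, in which case control is handed to the operator."""
    predictions = np.stack([policy(obs) for policy in ensemble])   # (K, action_dim)
    uncertainty = predictions.std(axis=0).mean()
    if uncertainty > threshold:
        return ask_human(obs), True         # request operator input
    return predictions.mean(axis=0), False  # confident: act autonomously

# Toy usage: three slightly different linear "policies".
ensemble = [lambda o, w=w: w * o for w in (0.9, 1.0, 1.1)]
action, asked = pato_style_step(np.array([0.2, -0.1]), ensemble,
                                ask_human=lambda o: np.zeros_like(o))
print(action, asked)
```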
In this work, we introduce a new method for learning everyday multi-stage tasks from a single human demonstration, without requiring any prior object knowledge. Inspired by recent coarse-to-fine imitation methods, we model imitation learning as a learned object-reaching phase followed by an open-loop replay of the demonstrator's actions. We build on this for multi-stage tasks: after the human demonstration, the robot can autonomously collect image data for the entire multi-stage task by reaching towards the next object in the sequence and then replaying the demonstration, repeating this in a loop for all stages of the task. We run real-world experiments on a set of everyday multi-stage tasks and show that our method can solve them from a single demonstration. Videos and supplementary material can be found at https://www.robot-learning.uk/self-replay.
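A minimal sketch of the two-phase controller described above: a coarse reach towards the next object's estimated pose, followed by an open-loop replay of the demonstrated end-effector deltas, looped over the task's stages. The reaching controller here is a trivial proportional servo, not the learned one.

```python
import numpy as np

def reach(current_pose, target_pose, step=0.05):
    """Coarse phase: servo the end effector toward the predicted object pose."""
    while np.linalg.norm(target_pose - current_pose) > step:
        current_pose = current_pose + step * (target_pose - current_pose) \
            / np.linalg.norm(target_pose - current_pose)
        yield current_pose

def self_replay_task(start_pose, stages):
    """Each stage = (predicted object pose, demonstrated end-effector deltas).
    Reach the object, then replay the demo segment open loop."""
    pose = np.asarray(start_pose, dtype=float)
    for object_pose, demo_deltas in stages:
        for pose in reach(pose, np.asarray(object_pose, dtype=float)):
            pass                              # coarse approach
        for delta in demo_deltas:             # fine, open-loop replay
            pose = pose + delta
    return pose

stages = [(np.array([0.3, 0.0, 0.1]), [np.array([0.0, 0.0, -0.02])] * 3),
          (np.array([0.1, 0.2, 0.1]), [np.array([0.0, 0.01, 0.0])] * 3)]
print(self_replay_task([0.0, 0.0, 0.3], stages))
```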
In human environments, robots are expected to accomplish a variety of manipulation tasks given simple natural language instructions. Yet robotic manipulation is extremely challenging, as it requires fine-grained motor control, long-term memory, and generalization to previously unseen tasks and environments. To address these challenges, we propose a unified transformer-based approach that takes multiple inputs into account. In particular, our transformer architecture integrates (i) natural language instructions and (ii) multi-view scene observations, while (iii) keeping track of the full history of observations and actions. This approach enables learning dependencies between history and instructions and improves manipulation precision using multiple views. We evaluate our method on the challenging RLBench benchmark and on a real-world robot. Notably, our approach scales to 74 diverse RLBench tasks and outperforms the state of the art. We also address instruction-conditioned tasks and demonstrate excellent generalization to previously unseen variations.
Robotic manipulation can be formulated as inducing a sequence of spatial displacements, where the space being displaced can encompass an object, part of an object, or the end effector. In this work, we propose a simple model architecture that rearranges deep features to infer spatial displacements from visual input, which can parameterize robot actions. It makes no assumptions of objectness (e.g., canonical poses, models, or keypoints), exploits spatial symmetries, and is far more sample efficient than our benchmarked alternatives at learning vision-based manipulation tasks: from stacking a pyramid of blocks and assembling kits with unseen objects, to manipulating deformable ropes and pushing piles of small objects with closed-loop feedback. Our method can represent complex multi-modal policy distributions and generalizes to multi-step sequential tasks as well as 6-DoF pick-and-place. Experiments on 10 simulated tasks show that it learns faster and generalizes better than a variety of end-to-end baselines, including policies that use ground-truth object poses. We validate our method on hardware in the real world. Experiment videos and code are available at https://transporternets.github.io
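The core "rearranging deep features" idea can be sketched as a cross-correlation: features cropped around the picked object are slid over the scene feature map, and the best placement is the argmax of the resulting score map. The numpy version below is a translation-only toy with random features; the actual model uses learned fully-convolutional features and also handles rotations.

```python
import numpy as np

def cross_correlate(scene_feat, kernel_feat):
    """Slide the picked crop's features over the scene features and return a
    score map; the argmax is the best placement (translation-only version)."""
    H, W, C = scene_feat.shape
    kh, kw, _ = kernel_feat.shape
    scores = np.full((H - kh + 1, W - kw + 1), -np.inf)
    for i in range(scores.shape[0]):
        for j in range(scores.shape[1]):
            patch = scene_feat[i:i + kh, j:j + kw]
            scores[i, j] = np.sum(patch * kernel_feat)
    return scores

rng = np.random.default_rng(0)
scene = rng.normal(size=(32, 32, 8))          # stand-in for learned deep features
crop = scene[10:14, 20:24].copy()             # features around the picked object
place_scores = cross_correlate(scene, crop)
print(np.unravel_index(place_scores.argmax(), place_scores.shape))  # ~(10, 20)
```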
Language-conditioned policies allow robots to interpret and execute human instructions. Learning such policies requires a substantial investment with regards to time and compute resources. Still, the resulting controllers are highly device-specific and cannot easily be transferred to a robot with different morphology, capability, appearance or dynamics. In this paper, we propose a sample-efficient approach for training language-conditioned manipulation policies that allows for rapid transfer across different types of robots. By introducing a novel method, namely Hierarchical Modularity, and adopting supervised attention across multiple sub-modules, we bridge the divide between modular and end-to-end learning and enable the reuse of functional building blocks. In both simulated and real world robot manipulation experiments, we demonstrate that our method outperforms the current state-of-the-art methods and can transfer policies across 4 different robots in a sample-efficient manner. Finally, we show that the functionality of learned sub-modules is maintained beyond the training process and can be used to introspect the robot decision-making process. Code is available at https://github.com/ir-lab/ModAttn.
Humans and many animals exhibit a robust ability to manipulate diverse objects, often directly with their bodies and sometimes indirectly with tools. Such flexibility is likely enabled by the fundamental consistency of the underlying physics of manipulation, such as contacts and force closure. Inspired by viewing tools as extensions of our bodies, we propose Tool-As-Embodiment (TAE), a parameterization for tool-based manipulation policies that handles hand-object and tool-object interactions in the same representation space. The result is a single policy that can be applied recursively on a robot: it uses the end effector to manipulate objects, and uses objects as tools, i.e., new end effectors, to manipulate other objects. By sharing grasping or pushing experience across different embodiments, our policy achieves higher performance than training separate policies. Our framework can leverage all experience from tool-enabled embodiments at different resolutions into a single generic policy for each manipulation skill. Videos at https://sites.google.com/view/recursivemanipulation
While significant progress has been made on understanding hand-object interactions in computer vision, it remains very challenging for robots to perform complex dexterous manipulation. In this paper, we propose a new platform and pipeline, DexMV (Dexterous Manipulation from Videos), for imitation learning. We design a platform with (i) a simulation system for complex dexterous manipulation tasks with a multi-fingered robot hand and (ii) a computer vision system to record large-scale demonstrations of a human hand performing the same tasks. In our novel pipeline, we extract 3D hand and object poses from the videos and propose a novel demonstration translation method to convert human motions into robot demonstrations. We then apply and benchmark multiple imitation learning algorithms with the demonstrations. We show that the demonstrations can indeed improve robot learning by a large margin and solve complex tasks that reinforcement learning alone cannot solve. Project page with videos: https://yzqin.github.io/dexmv
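A hedged sketch of the demonstration-translation step: per-frame human fingertip positions (estimated from video) are retargeted to robot fingertip targets and passed to whatever inverse-kinematics routine the caller supplies. The scaling scheme and all names are illustrative, not DexMV's actual retargeting.

```python
import numpy as np

def retarget_fingertips(human_tips, wrist, scale=1.2):
    """Map 3D human fingertip positions (one frame) to robot fingertip
    targets expressed relative to the wrist, scaled to the robot hand size."""
    return wrist + scale * (human_tips - wrist)

def translate_demo(human_frames, wrists, solve_ik):
    """Turn a video-derived human hand trajectory into robot joint targets
    using any inverse-kinematics solver supplied by the caller."""
    return [solve_ik(retarget_fingertips(tips, wrist))
            for tips, wrist in zip(human_frames, wrists)]

frames = [np.random.default_rng(i).normal(size=(5, 3)) for i in range(3)]
wrists = [np.zeros(3) for _ in range(3)]
joint_targets = translate_demo(frames, wrists,
                               solve_ik=lambda tips: tips.mean(axis=0))
print(len(joint_targets), joint_targets[0])
```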
By transferring knowledge from large, diverse, task-agnostic datasets, modern machine learning models can solve specific downstream tasks either zero-shot or with small task-specific datasets to a high level of performance. While this capability has been demonstrated in other fields such as computer vision, natural language processing or speech recognition, it remains to be shown in robotics, where the generalization capabilities of the models are particularly critical due to the difficulty of collecting real-world robotic data. We argue that one of the keys to the success of such general robotic models lies with open-ended task-agnostic training, combined with high-capacity architectures that can absorb all of the diverse, robotic data. In this paper, we present a model class, dubbed Robotics Transformer, that exhibits promising scalable model properties. We verify our conclusions in a study of different model classes and their ability to generalize as a function of the data size, model size, and data diversity based on a large-scale data collection on real robots performing real-world tasks. The project's website and videos can be found at robotics-transformer.github.io
Developing robots that are capable of many skills and generalization to unseen scenarios requires progress on two fronts: efficient collection of large and diverse datasets, and training of high-capacity policies on the collected data. While large datasets have propelled progress in other fields like computer vision and natural language processing, collecting data of comparable scale is particularly challenging for physical systems like robotics. In this work, we propose a framework to bridge this gap and better scale up robot learning, under the lens of multi-task, multi-scene robot manipulation in kitchen environments. Our framework, named CACTI, has four stages that separately handle data collection, data augmentation, visual representation learning, and imitation policy training. In the CACTI framework, we highlight the benefit of adapting state-of-the-art models for image generation as part of the augmentation stage, and the significant improvement of training efficiency by using pretrained out-of-domain visual representations at the compression stage. Experimentally, we demonstrate that 1) on a real robot setup, CACTI enables efficient training of a single policy capable of 10 manipulation tasks involving kitchen objects, and robust to varying layouts of distractor objects; 2) in a simulated kitchen environment, CACTI trains a single policy on 18 semantic tasks across up to 50 layout variations per task. The simulation task benchmark and augmented datasets in both real and simulated environments will be released to facilitate future research.
We describe a learning-based approach to hand-eye coordination for robotic grasping from monocular images. To learn hand-eye coordination for grasping, we trained a large convolutional neural network to predict the probability that task-space motion of the gripper will result in successful grasps, using only monocular camera images and independently of camera calibration or the current robot pose. This requires the network to observe the spatial relationship between the gripper and objects in the scene, thus learning hand-eye coordination. We then use this network to servo the gripper in real time to achieve successful grasps. To train our network, we collected over 800,000 grasp attempts over the course of two months, using between 6 and 14 robotic manipulators at any given time, with differences in camera placement and hardware. Our experimental evaluation demonstrates that our method achieves effective real-time control, can successfully grasp novel objects, and corrects mistakes by continuous servoing.
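A simplified sketch of the servoing loop described above (the original work uses a cross-entropy-method optimizer over the learned grasp-success CNN): sample candidate task-space motions, score each with the predictor, and repeatedly execute the best one. The predictor here is a toy stub.

```python
import numpy as np

def servo_step(image, gripper_pose, success_prob, n_candidates=64, scale=0.05,
               rng=None):
    """One step of visual servoing: sample task-space motions, score each with
    the grasp-success network, execute the highest-scoring motion."""
    rng = rng or np.random.default_rng(0)
    candidates = rng.normal(0.0, scale, size=(n_candidates, 3))
    scores = np.array([success_prob(image, gripper_pose, v) for v in candidates])
    return candidates[scores.argmax()]

# Toy predictor: prefers motions that move the gripper toward the origin.
def toy_predictor(image, pose, motion):
    return -np.linalg.norm(pose + motion)

pose = np.array([0.2, -0.1, 0.3])
for _ in range(10):
    pose = pose + servo_step(None, pose, toy_predictor)
print(pose)
```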
Surgical robot automation has attracted increasing research interest over the past decade, given its huge potential to benefit surgeons, nurses, and patients. Recently, the learning paradigm of embodied AI has demonstrated promising ability to learn good control policies for various complex tasks, with embodied AI simulators playing an essential role in facilitating this research. However, existing open-source simulators for surgical robots still do not sufficiently support human interaction through physical input devices, which limits effective investigation of how human demonstrations affect policy learning. In this paper, we study human-in-the-loop embodied intelligence with a new interactive simulation platform for surgical robot learning. Specifically, we establish our platform based on our previously released SurRoL simulator, with several newly co-developed features that allow high-quality human interaction via an input device. With these, we further propose to collect human demonstrations and imitate the action patterns to achieve more effective policy learning. We showcase the improvement of our simulation environment with the designed new features and tasks, and validate state-of-the-art reinforcement learning algorithms using the interactive environment. Promising results are obtained, and we hope they pave the way for future research on surgical embodied intelligence. Our platform is released and will be continuously updated at: https://med-air.github.io/SurRoL/
Multi-step manipulation tasks in unstructured environments are extremely challenging for a robot to learn. Such tasks interleave high-level reasoning about which states can be reached to achieve the overall task with low-level reasoning about which actions will produce those states. We propose a model-free deep reinforcement learning method for learning multi-step manipulation tasks. We introduce RoManNet, a vision-based model architecture for robotic manipulation, to learn action-value functions and predict manipulation action candidates. We define a Task Progress based Gaussian (TPG) reward function that computes the reward based on actions that lead to successful motion primitives and on progress towards the overall task goal. To balance exploration and exploitation, we introduce a Loss Adjusted Exploration (LAE) policy that selects actions from the action candidates according to a Boltzmann distribution over loss estimates. We demonstrate the effectiveness of our approach by training RoManNet to learn several challenging multi-step manipulation tasks in both simulation and the real world. Experimental results show that our method outperforms existing methods and achieves state-of-the-art performance in terms of success rate and action efficiency. Ablation studies show that TPG and LAE are especially beneficial for tasks such as stacking multiple blocks. Code is available at: https://github.com/skumra/romannet
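The Loss Adjusted Exploration idea, selecting among action candidates with a Boltzmann distribution over loss estimates, can be sketched as a temperature-scaled softmax; the exact form used by RoManNet may differ, and the candidates and losses below are hypothetical.

```python
import numpy as np

def lae_select(action_candidates, loss_estimates, temperature=1.0, rng=None):
    """Boltzmann selection over loss estimates: candidates whose value
    estimates are less trusted (higher recent loss) are sampled more often,
    trading exploitation against exploration."""
    rng = rng or np.random.default_rng(0)
    logits = np.asarray(loss_estimates, dtype=float) / temperature
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    idx = rng.choice(len(action_candidates), p=probs)
    return action_candidates[idx], probs

candidates = ["push_left", "push_right", "grasp_top"]
losses = [0.1, 0.5, 0.2]          # hypothetical per-candidate loss estimates
action, probs = lae_select(candidates, losses)
print(action, probs.round(3))
```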
In mobile manipulation (MM), robots can both navigate within and interact with their environment, and are thus able to complete many more tasks than robots capable only of navigation or manipulation. In this work, we explore how to apply imitation learning (IL) to learn continuous visuo-motor policies for MM tasks. Much prior work has shown that IL can train visuo-motor policies for manipulation or navigation domains, but little work has applied IL to the MM domain. Doing so is challenging for two reasons: on the data side, current interfaces make it difficult to collect high-quality human demonstrations, and on the learning side, policies trained on limited data can suffer from covariate shift when deployed. To address these problems, we first propose Mobile Manipulation RoboTurk (MoMaRT), a novel teleoperation framework that allows simultaneous navigation and manipulation of a mobile manipulator, and collect a first-of-its-kind large-scale dataset in a realistic simulated kitchen setting. We then propose a learned error-detection system to address covariate shift by detecting when the agent is in a potential failure state. We train performant IL policies and error detectors from this data, achieving over 45% task success rate and 85% error-detection success rate across multiple multi-stage tasks when trained on expert data. Codebase, datasets, visualizations, and more are available at https://sites.google.com/view/il-for-mm/home.
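A minimal sketch of how a learned error detector is used at deployment time: the imitation policy runs until the detector's failure score exceeds a threshold, at which point the rollout stops and an intervention is requested. The policy, detector, and environment below are toy stand-ins, not MoMaRT's components.

```python
import numpy as np

def deploy_with_error_detection(policy, detector, env_step, obs, horizon=200,
                                threshold=0.5):
    """Roll out the imitation policy, but stop and request help whenever the
    learned error detector believes the agent is in a potential failure state."""
    for t in range(horizon):
        if detector(obs) > threshold:
            return t, "intervention_requested"
        obs = env_step(obs, policy(obs))
    return horizon, "completed"

# Toy setup: the "error" grows with the first observation dimension.
steps, status = deploy_with_error_detection(
    policy=lambda o: -0.1 * o,
    detector=lambda o: float(o[0] > 1.0),
    env_step=lambda o, a: o + a + 0.12,     # drifts upward despite the policy
    obs=np.zeros(2),
)
print(steps, status)
```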
Robots that assist us in environments such as factories or homes must learn to use objects as tools to perform tasks, for example using a tray to carry objects. We consider the problem of learning commonsense knowledge about when a tool may be useful and how its use may be combined with other tools to accomplish a high-level task instructed by a human. Specifically, we introduce a novel neural model, ToolTango, that first predicts the next tool to use and then uses this information to predict the next action. We show that this joint model can inform the learning of a fine-grained policy, enabling the robot to use particular tools in sequence, and adds significant value in making the model more accurate. ToolTango encodes the world state, consisting of objects and the symbolic relations between them, using a graph neural network, and is trained from demonstrations by human teachers instructing a virtual robot in a physics simulator. The model learns to attend over the scene using knowledge of the goal and the action history, and finally decodes the symbolic action to execute. Crucially, we address generalization to unseen environments in which some known tools are missing but alternative unseen tools are present. We show that by augmenting the representation of the environment with pre-trained embeddings derived from a knowledge base, the model can generalize effectively to novel environments. Experimental results show at least a 48.8-58.1% absolute improvement over baselines in predicting successful symbolic plans for a simulated mobile manipulator in novel environments with unseen objects. This work is a step towards enabling robots to rapidly synthesize robust plans for complex tasks, particularly in novel settings.
The ability to learn from human demonstration endows robots with the ability to automate various tasks. However, directly learning from human demonstration is challenging since the structure of the human hand can be very different from the desired robot gripper. In this work, we show that manipulation skills can be transferred from a human to a robot through the use of micro-evolutionary reinforcement learning, where a five-fingered, human-like dexterous robot hand gradually evolves into a commercial robot, while repeatedly interacting in a physics simulator to continuously update the policy that is first learned from human demonstration. To deal with the high dimensionality of robot parameters, we propose an algorithm for multi-dimensional evolution path searching that allows joint optimization of both the robot evolution path and the policy. Through experiments on human object manipulation datasets, we show that our framework can efficiently transfer the expert human agent policy trained from human demonstrations in diverse modalities to target commercial robots.
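A rough sketch of the evolution-path idea under strong simplifying assumptions: morphology parameters are moved from the human-like hand toward the target robot one dimension at a time, greedily choosing the dimension whose change hurts the fine-tuned return the least. The real algorithm jointly optimizes the path and the policy; here `finetune` is an opaque callback and every name is hypothetical.

```python
import numpy as np

def evolve_morphology(source_params, target_params, finetune, n_steps=10):
    """Greedy multi-dimensional evolution path: at each step, advance the single
    morphology dimension whose change yields the best fine-tuned task return,
    then continue from the new intermediate embodiment."""
    params = np.asarray(source_params, dtype=float).copy()
    target = np.asarray(target_params, dtype=float)
    step = (target - params) / n_steps
    path, ret = [params.copy()], None
    for _ in range(n_steps * len(params)):
        if np.allclose(params, target):
            break
        best_dim, best_ret = None, -np.inf
        for d in np.flatnonzero(~np.isclose(params, target)):
            trial = params.copy()
            trial[d] += step[d]
            r = finetune(trial)             # fine-tune policy, return task reward
            if r > best_ret:
                best_dim, best_ret = d, r
        params[best_dim] += step[best_dim]
        path.append(params.copy())
        ret = best_ret
    return path, ret

# Toy usage: the "reward" simply prefers parameters with small magnitude.
path, final = evolve_morphology([1.0, 1.0], [0.0, 3.0],
                                finetune=lambda p: -np.abs(p).sum())
print(len(path), path[-1], final)
```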