Reliably planning fingertip grasps for multi-fingered hands is a key challenge for many tasks, including tool use, insertion, and dexterous in-hand manipulation. The task becomes even more difficult when the robot lacks an accurate model of the object to be grasped. Tactile sensing offers a promising way to account for uncertainty in object shape; however, current robotic hands tend to lack full tactile coverage. This raises the problem of how to plan and execute grasps for multi-fingered hands such that contact is made within the areas covered by the tactile sensors. To address this issue, we propose an approach to grasp planning that explicitly reasons about where the fingertips should contact the estimated object surface while maximizing the probability of grasp success. Key to our method's success is the use of visual surface estimation for initial planning to encode the contact constraint. The robot then executes this plan using a tactile-feedback controller that enables it to adapt to online estimates of the object's surface, correcting errors in the initial plan. Importantly, the robot never explicitly integrates object pose or surface estimates between visual and tactile sensing; instead, it uses the two modalities in complementary ways: vision guides the robot's motion prior to contact, and touch updates the plan when contact occurs differently than predicted from vision. We show that our method successfully synthesizes and executes precision grasps for previously unseen objects using surface estimates from a single camera view. Further, our approach outperforms a state-of-the-art multi-fingered grasp planner, as well as several baselines we propose.
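To make the contact-constrained planning idea concrete, here is a minimal sketch (not the paper's actual planner): fingertip placements are restricted to points on the visually estimated surface, and the candidate set maximizing a learned success probability is kept. The function `success_model` is a hypothetical stand-in for any learned grasp-success predictor.

```python
import numpy as np

def plan_fingertip_grasp(surface_points, success_model, n_fingers=3,
                         n_samples=500, rng=None):
    """Sample fingertip contact sets on the estimated surface, keep the best."""
    rng = rng or np.random.default_rng(0)
    best_contacts, best_prob = None, -np.inf
    for _ in range(n_samples):
        idx = rng.choice(len(surface_points), size=n_fingers, replace=False)
        contacts = surface_points[idx]      # (n_fingers, 3) points ON the surface
        prob = success_model(contacts)      # learned P(success | contacts)
        if prob > best_prob:
            best_contacts, best_prob = contacts, prob
    return best_contacts, best_prob

# Toy usage: surface samples from a unit sphere, and a dummy "model" that
# rewards spread-out contacts (purely illustrative).
pts = np.random.default_rng(1).normal(size=(2000, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)
dummy_model = lambda c: float(np.linalg.norm(c - c.mean(0)))
contacts, p = plan_fingertip_grasp(pts, dummy_model)
```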
Grasping is the process of picking up an object by applying forces and torques at a set of contacts. Recent advances in deep learning methods have allowed rapid progress in robotic object grasping. We systematically surveyed publications from the last decade, with a particular interest in grasping an object using all six degrees of freedom of the end-effector pose. Our review found four common methodologies for robotic grasping: sampling-based approaches, direct regression, reinforcement learning, and exemplar approaches. Additionally, we found two "supporting methods" around grasping that use deep learning to support the grasping process: shape approximation and affordances. We have distilled the publications found in this systematic review (85 papers) into ten key takeaways that we consider crucial for future robotic grasping and manipulation research. An online version of the survey is available at https://rhys-newbury.github.io/projects/6dof/
We introduce a neural implicit representation for grasps of objects from multiple robotic hands. Different grasps across multiple robotic hands are encoded into a shared latent space. Each latent vector is learned to decode the 3D shape of the object and the 3D shape of the robotic hand as two signed distance functions. In addition, a distance metric in the latent space is learned to preserve the similarity between grasps of different robotic hands, where the similarity of grasps is defined according to the contact regions of the robotic hands. This property enables us to transfer grasps between different grippers, including the human hand; such grasp transfer has the potential to share grasps between robots and to enable robots to learn grasping skills from humans. Furthermore, the encoded signed distance functions of objects and grasps in our implicit representation can be used for 6D object pose estimation and grasp contact optimization from partial point clouds, which enables robotic grasping in the real world.
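A minimal PyTorch sketch of the shared-latent decoding idea follows: one latent code is decoded, at any query point, into two signed distances, one to the object surface and one to the hand surface. The layer sizes and the module name are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class GraspSDFDecoder(nn.Module):
    def __init__(self, latent_dim=64, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 2),  # [sdf_object, sdf_hand] at each query point
        )

    def forward(self, z, xyz):
        # z: (B, latent_dim) grasp code; xyz: (B, N, 3) query points
        z = z.unsqueeze(1).expand(-1, xyz.shape[1], -1)
        sdf = self.net(torch.cat([z, xyz], dim=-1))
        return sdf[..., 0], sdf[..., 1]

decoder = GraspSDFDecoder()
sdf_obj, sdf_hand = decoder(torch.randn(4, 64), torch.randn(4, 1024, 3))
```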
We formulate grasp learning as a neural field and present Neural Grasp Distance Fields (NGDF). Here, the input is the 6D pose of a robot end effector and the output is its distance to a continuous manifold of valid grasps for an object. In contrast to current approaches that predict a set of discrete candidate grasps, the distance-based NGDF representation is easily interpreted as a cost, and minimizing this cost produces a successful grasp pose. This grasp distance cost can be incorporated directly into a trajectory optimizer for joint optimization with other costs such as trajectory smoothness and collision avoidance. During optimization, as the various costs are balanced and minimized, the grasp target is allowed to vary smoothly, since the learned grasp field is continuous. In simulation benchmarks with a Franka arm, we find that joint grasping and planning with NGDF outperforms baselines by 63% in execution success while generalizing to unseen query poses and unseen object shapes. Project page: https://sites.google.com/view/neural-grasp-distance-fields.
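A hedged sketch of the core idea, using a learned grasp-distance field as one cost term in gradient-based pose optimization: here `grasp_field` is an untrained placeholder MLP standing in for a trained NGDF network, purely so the loop runs, and the smoothness weight is an assumption.

```python
import torch
import torch.nn as nn

grasp_field = nn.Sequential(nn.Linear(6, 128), nn.ReLU(), nn.Linear(128, 1))

pose = torch.zeros(1, 6, requires_grad=True)   # [x, y, z, rx, ry, rz]
start = torch.zeros(1, 6)                      # current end-effector pose
opt = torch.optim.Adam([pose], lr=1e-2)

for step in range(200):
    opt.zero_grad()
    grasp_cost = grasp_field(pose).abs().mean()      # distance to grasp manifold
    smooth_cost = 0.1 * (pose - start).pow(2).sum()  # e.g. trajectory smoothness
    (grasp_cost + smooth_cost).backward()
    opt.step()                                       # grasp target varies smoothly
```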
Picking a specific object from clutter is an essential component of many manipulation tasks. Partial observations often require the robot to collect additional views of the scene before attempting a grasp. This paper presents a closed-loop next-best-view planner that drives exploration based on occluded object parts. By continuously predicting grasps from an up-to-date scene reconstruction, our policy can decide online to finalize a grasp execution or to adapt the robot's trajectory for further exploration. We show that our reactive approach decreases execution times without loss of grasp success rates compared to common camera placements, and that it handles situations where fixed baselines fail. Videos and code are available at https://github.com/ethz-asl/active_grasp.
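A schematic sketch of the closed-loop decision described above: keep refining the reconstruction until a predicted grasp is confident enough, otherwise move toward the next-best view. All callables here (`capture`, `reconstruct`, `predict_grasps`, `next_best_view`, `move_toward`, `execute`) are hypothetical placeholders for the system's actual components.

```python
def active_grasp_loop(capture, reconstruct, predict_grasps, next_best_view,
                      move_toward, execute, quality_threshold=0.9, max_views=10):
    for _ in range(max_views):
        scene = reconstruct(capture())               # update scene reconstruction
        grasps = predict_grasps(scene)               # candidates with scores
        if grasps and max(g.score for g in grasps) > quality_threshold:
            return execute(max(grasps, key=lambda g: g.score))  # finalize online
        view = next_best_view(scene)                 # target occluded object parts
        move_toward(view)                            # keep moving while re-planning
    return None
```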
Interacting with articulated objects is a challenging but important task for mobile robots. To tackle this challenge, we propose a novel closed-loop control pipeline that combines manipulation priors from affordance estimation with sampling-based whole-body control. We introduce the concept of agent-aware affordances, which fully reflect the agent's capabilities and embodiment, and show that they outperform their state-of-the-art counterparts, which are conditioned only on the end-effector geometry. Additionally, closed-loop affordance inference is found to allow the agent to divide a task into multiple non-continuous motions and to recover from failures and unexpected states. Finally, the pipeline is able to perform long-horizon mobile manipulation tasks, i.e., opening and closing an oven, in the real world with a high success rate (opening: 71%, closing: 72%).
Modern robotic manipulation systems fall short of human manipulation skill, partly because they rely on closing feedback loops around vision data, which limits system bandwidth and speed. By developing autonomous grasping reflexes that rely on high-bandwidth force, contact, and proximity data, overall system speed and robustness can be improved while reducing reliance on vision data. We are developing a new system built around a low-inertia, high-speed arm with nimble fingers, which combines a high-level trajectory planner operating at less than 1 Hz with low-level autonomous reflex controllers running at over 300 Hz. We characterize the reflexive system by comparing the volume of successful grasps for a baseline controller and variations of our reflexive grasping controller, finding that our controller expands the volume of successful grasps by 55% relative to the baseline. We also deploy the reflexive grasping controller with a simple vision-based planner in an autonomous clutter-clearing task, achieving a success rate of over 90% while clearing more than 100 items.
Generating grasp poses is a crucial component of any robot object manipulation task. In this work, we formulate the problem of grasp generation as sampling a set of grasps with a variational autoencoder, then assessing and refining the sampled grasps with a grasp evaluator model. Both the grasp sampler and grasp refinement networks take 3D point clouds observed by a depth camera as input. We evaluate our approach in simulation and in real-world robot experiments. Our approach achieves an 88% success rate on various commonly used objects with diverse appearances, scales, and weights. Our model is trained purely in simulation and works in the real world without any extra steps. The video of our experiments can be found here.
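Below is a minimal sketch of the sample-evaluate-refine pattern described above. The `sampler` (a VAE decoder with a `latent_dim` attribute and `decode` method) and `evaluator` are hypothetical trained models; refinement nudges each grasp up the evaluator's score gradient.

```python
import torch

def sample_and_refine(sampler, evaluator, point_cloud, n=64, steps=10, lr=1e-2):
    z = torch.randn(n, sampler.latent_dim)                  # sample latent codes
    grasps = sampler.decode(z, point_cloud).detach().requires_grad_(True)
    for _ in range(steps):
        score = evaluator(grasps, point_cloud).sum()        # predicted quality
        (grad,) = torch.autograd.grad(score, grasps)
        grasps = (grasps + lr * grad).detach().requires_grad_(True)  # ascent step
    final = evaluator(grasps, point_cloud)
    return grasps[final.argmax()]                           # best refined grasp
```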
We explore a novel method to perceive and manipulate 3D articulated objects that generalizes to enable a robot to articulate unseen objects. We propose a vision-based system that learns to predict the potential motions of the parts of a variety of articulated objects in order to guide the system's downstream motion planning to articulate the objects. To predict the object motions, we train a neural network to output a dense vector field representing the point-wise motion direction of the points in the point cloud. We then deploy an analytical motion planner based on this vector field to achieve a policy that yields maximal articulation. We train the vision system entirely in simulation, and we demonstrate the capability of our system to generalize to unseen object instances and novel categories in both simulation and the real world, deploying our policy on a Sawyer robot with no fine-tuning. Results show that our system achieves state-of-the-art performance in both simulated and real-world experiments.
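A simplified sketch of acting on such a predicted per-point motion field: grasp the point whose predicted motion is largest and pull along its direction. Here `flow_net` is a placeholder for the trained vector-field predictor, and this single greedy step stands in for the paper's analytical motion planner.

```python
import numpy as np

def articulation_step(points, flow_net, step_size=0.02):
    flow = flow_net(points)                       # (N, 3) per-point motion directions
    mags = np.linalg.norm(flow, axis=1)
    i = int(mags.argmax())                        # point predicted to move the most
    grasp_point = points[i]
    direction = flow[i] / (mags[i] + 1e-9)
    target = grasp_point + step_size * direction  # pull along the predicted motion
    return grasp_point, target
```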
In this paper, we explore whether a robot can learn to regrasp a diverse set of objects to achieve various desired grasp poses. Regrasping is needed whenever a robot's current grasp pose fails to perform a desired manipulation task. Endowing robots with this ability has applications in many domains, such as manufacturing or domestic services. However, it is a challenging task due to the large diversity of geometry in everyday objects and the high dimensionality of the state and action space. In this paper, we propose a system for the robot to take partial point clouds of the object and its supporting environment as inputs and output a sequence of pick-and-place operations that transform an initial grasp into the desired object grasp pose. The key techniques include a neural stable placement predictor and a regrasp graph-based solution that leverages and changes the surrounding environment. We introduce a new and challenging synthetic dataset for learning and evaluating the proposed approach. We demonstrate the effectiveness of our proposed system with both simulator and real-world experiments. More videos and visualization examples are available on our project webpage.
To fully leverage the versatility of a multi-fingered dexterous robotic hand for object grasping, the complex physical constraints introduced by hand-object interaction and object geometry must be satisfied during grasp planning. We propose an integrated approach combining a generative model and bilevel optimization to compute diverse grasps for novel unseen objects. First, grasp predictions are obtained from a conditional variational autoencoder trained on only six YCB objects. The predictions are then projected onto the manifold of kinematically and dynamically feasible grasps by jointly solving collision-aware inverse kinematics, force closure, and friction constraints as one nonconvex bilevel optimization. We demonstrate the effectiveness of our method on hardware by successfully grasping a variety of unseen household objects, including ones that are challenging for other types of robotic grippers. A video summary of our results is available at https://youtu.be/9dtrimbn99i.
Robots need to manipulate objects in constrained environments, such as shelves and cabinets, when assisting humans in everyday settings like homes and offices. These constraints make manipulation difficult by reducing graspability, so robots need to use non-prehensile strategies that leverage object-environment contacts to perform manipulation tasks. To tackle the challenge of planning and controlling contact-rich behaviors in such environments, this work uses Hybrid Force-Velocity Controllers (HFVCs) as the skill representation and plans skill sequences with learned preconditions. While HFVCs naturally enable robust and compliant contact-rich behaviors, the solvers that synthesize them have traditionally relied on precise object models and closed-loop feedback on the object's pose, which are difficult to obtain in constrained environments due to occlusions. We first relax the HFVCs' need for precise models and feedback with our HFVC synthesis framework, then learn a point-cloud-based precondition function to classify where HFVC executions will still be successful despite modeling inaccuracies. Finally, we use the learned precondition in a search-based task planner to complete contact-rich manipulation tasks in a shelf domain. Our method achieves a task success rate of 73.2%, outperforming the 51.5% achieved by a baseline without the learned precondition. The precondition function, although trained in simulation, also transfers to a real-world setup without additional fine-tuning.
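A hedged sketch of precondition-gated skill planning in this spirit: candidate skills are only expanded in the search when a learned classifier predicts they will succeed from the current (point-cloud) state. The callables `precondition`, `simulate_effect`, and `is_goal` are hypothetical stand-ins, and the 0.5 threshold is an assumption.

```python
from collections import deque

def plan_skill_sequence(start_state, skills, precondition, simulate_effect,
                        is_goal, max_depth=6):
    """Breadth-first search over skill sequences, gated by a learned precondition."""
    frontier = deque([(start_state, [])])
    while frontier:
        state, plan = frontier.popleft()
        if is_goal(state):
            return plan
        if len(plan) >= max_depth:
            continue
        for skill in skills:
            if precondition(state, skill) > 0.5:   # learned success classifier
                frontier.append((simulate_effect(state, skill), plan + [skill]))
    return None
```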
The study of hand-object interaction requires generating viable grasp poses for high-dimensional multi-finger models, which often relies on analytic grasp synthesis and tends to produce brittle and unnatural results. This paper presents Grasp'D, an approach for grasp synthesis with a differentiable contact simulation from both known models and visual inputs. We use gradient-based methods as an alternative to sampling-based grasp synthesis, which fails without simplifying assumptions such as pre-specified contact locations and eigengrasps. Such assumptions limit grasp discovery and, in particular, exclude high-contact power grasps. In contrast, our simulation-based approach allows stable, efficient, physically realistic, high-contact grasp synthesis, even for gripper morphologies with high degrees of freedom. We identify and address challenges in making grasp simulation amenable to gradient-based optimization, such as non-smooth object surface geometry, contact sparsity, and a rugged optimization landscape. Grasp'D compares favorably to analytic grasp synthesis on human and robotic hand models, and the resulting grasps achieve over 4x denser contact, leading to significantly higher grasp stability. Videos and code are available at https://graspd-eccv22.github.io/.
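The paper's differentiable contact simulation is beyond a short sketch, but the gradient-based synthesis loop it enables can be illustrated with a toy differentiable energy: fingertips of a hand, parameterized by `q`, are pulled onto a sphere's surface while penetration is penalized. The `forward_kinematics` stand-in and the loss weights are illustrative assumptions.

```python
import torch

def forward_kinematics(q):
    return q.view(-1, 3)                 # toy: q directly places 4 fingertips

q = torch.randn(12, requires_grad=True)  # hand configuration (toy)
opt = torch.optim.Adam([q], lr=5e-2)

for _ in range(300):
    opt.zero_grad()
    tips = forward_kinematics(q)
    sdf = tips.norm(dim=1) - 1.0                    # signed distance to unit sphere
    contact_energy = sdf.clamp(min=0).pow(2).sum()  # pull tips onto the surface
    penetration = sdf.clamp(max=0).pow(2).sum()     # penalize going inside
    (contact_energy + 10.0 * penetration).backward()
    opt.step()
```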
Nowadays robots play an increasingly important role in our daily life. In human-centered environments, robots often encounter piles of objects, packed items, or isolated objects. A robot must therefore be able to grasp and manipulate different objects in various situations to help humans with daily tasks. In this paper, we propose a multi-view deep learning approach to handle robust object grasping in human-centric domains. In particular, our approach takes a point cloud of an arbitrary object as input and then generates orthographic views of the given object. The obtained views are finally used to estimate pixel-wise grasp synthesis for each object. We train the model end-to-end using a small object-grasping dataset and test it on simulated and real-world data without any further fine-tuning. To evaluate the performance of the proposed approach, we performed extensive sets of experiments in three scenarios, including isolated objects, packed items, and piles of objects. Experimental results show that our approach performed well in all simulation and real-robot scenarios, achieving reliable closed-loop grasping of novel objects across various scene configurations.
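A small numpy sketch of the view-generation step: a point cloud is projected into an orthographic depth image along one axis, which can then feed a pixel-wise grasp-synthesis network. The resolution, bounds, and background fill value are illustrative assumptions.

```python
import numpy as np

def orthographic_depth(points, res=64, bounds=(-0.15, 0.15)):
    """Project an (N, 3) point cloud to an orthographic depth image along z."""
    lo, hi = bounds
    img = np.full((res, res), np.inf)
    uv = ((points[:, :2] - lo) / (hi - lo) * (res - 1)).astype(int)
    keep = (uv >= 0).all(1) & (uv < res).all(1)
    for (u, v), z in zip(uv[keep], points[keep, 2]):
        img[v, u] = min(img[v, u], z)     # keep the nearest point per pixel
    return np.where(np.isinf(img), 0.0, img)

cloud = np.random.default_rng(0).uniform(-0.1, 0.1, size=(5000, 3))
depth = orthographic_depth(cloud)
```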
As the basis for prehensile manipulation, it is vital to enable robots to grasp as robustly as humans do. In daily manipulation, our grasping system is prompt, accurate, flexible, and continuous across spatial and temporal domains. Few existing methods cover all these properties for robot grasping. In this paper, we propose a new methodology for grasp perception that endows robots with these abilities. Specifically, we develop a dense supervision strategy with real perception and analytic labels in the spatial-temporal domain. Additional awareness of objects' center of mass is incorporated into the learning process to help improve grasping stability. Utilization of grasp correspondence across observations enables dynamic grasp tracking. Our model, AnyGrasp, can generate accurate, full-DoF, dense, and temporally smooth grasp poses efficiently, and works robustly against large depth-sensing noise. Embedded with AnyGrasp, we achieve a 93.3% success rate when clearing bins with over 300 unseen objects, which is comparable to human subjects under controlled conditions. Over 900 mean picks per hour (MPPH) is reported on a single-arm system. For dynamic grasping, we demonstrate catching swimming robot fish in the water.
Teaching a multi-fingered dexterous robot to grasp objects in the real world is a challenging problem due to its high-dimensional state and action space. We propose a robot-learning system that can take a small number of human demonstrations and learn to grasp unseen object poses given partially occluded observations. Our system leverages a small motion-capture dataset and generates a large dataset with diverse and successful trajectories for a multi-fingered robotic gripper. By adding domain randomization, we show that our dataset provides robust grasping trajectories that can be transferred to a policy learner. We train a dexterous grasping policy that takes the point clouds of the object as input and predicts continuous actions to grasp objects from different initial robot states. We evaluate the effectiveness of our system on a 22-DoF floating hand in simulation and a 23-DoF Allegro robot hand with a KUKA arm in the real world. The policy learned from our dataset generalizes well to unseen object poses in both simulation and the real world.
Multi-step forceful manipulation tasks, such as opening a push-and-twist childproof bottle, require a robot to make a variety of planning choices that are influenced by the requirement to exert force during the task. The robot must reason over discrete and continuous choices relating to the sequence of actions, such as whether to pick up an object, and the parameters of each action, such as how to grasp the object. To enable planning and executing forceful manipulation, we augment an existing task and motion planner with our proposed forceful kinematic chain constraints, which are subject to torque and friction limits. In three domains, opening a childproof bottle, twisting a nut, and cutting a vegetable, we demonstrate how the system selects from among combinatorial strategies. We also show how cost-sensitive planning can be used to find strategies and parameters that are robust to uncertainty in the physical parameters.
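To illustrate the kind of feasibility check a forceful kinematic chain constraint implies, here is a small sketch: the joint torques needed to exert a task wrench, tau = J^T F, must stay within the joint torque limits. The Jacobian below is a random stand-in for one computed from the actual kinematics, and the numbers are illustrative.

```python
import numpy as np

def forceful_chain_feasible(jacobian, wrench, torque_limits):
    """jacobian: (6, n_joints); wrench: (6,) [force, torque]; limits: (n_joints,)."""
    tau = jacobian.T @ wrench                  # joint torques needed for the wrench
    return bool(np.all(np.abs(tau) <= torque_limits)), tau

J = np.random.default_rng(0).normal(size=(6, 7))   # placeholder 7-DoF Jacobian
ok, tau = forceful_chain_feasible(J, np.array([0, 0, 20.0, 0, 0, 1.0]),
                                  torque_limits=np.full(7, 40.0))
```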
Recent 3D-based manipulation methods either directly predict the grasp pose using 3D neural networks or solve for the grasp pose using similar objects retrieved from shape databases. However, the former faces generalizability challenges when tested with new robot arms or unseen objects, and the latter assumes that similar objects exist in the databases. We hypothesize that recent 3D modeling methods provide a path toward building a digital replica of the evaluation scene that affords physical simulation and supports robust manipulation-algorithm learning. We propose to reconstruct high-quality meshes from real-world point clouds using a state-of-the-art neural surface reconstruction method (the Real2Sim step). Because most simulators take meshes for fast simulation, the reconstructed meshes enable grasp-pose label generation without human effort. The generated labels can then train a grasp network that performs robustly in the real evaluation scene (the Sim2Real step). In synthetic and real experiments, we show that the Real2Sim2Real pipeline performs better than baseline grasp networks trained on a large dataset and than a grasp sampling method with retrieval-based reconstruction. The benefit of the Real2Sim2Real pipeline comes from 1) decoupling scene modeling and grasp sampling into sub-problems, and 2) the fact that both sub-problems can be solved to sufficiently high quality using recent 3D learning algorithms and mesh-based physical simulation techniques.
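A schematic sketch of the Real2Sim2Real flow described above; every callable is a hypothetical placeholder for the corresponding stage (neural surface reconstruction, grasp sampling, mesh-based simulation, supervised training).

```python
def real2sim2real(point_cloud, reconstruct_mesh, sample_grasps,
                  simulate_grasp, train_network, n_candidates=1000):
    mesh = reconstruct_mesh(point_cloud)            # Real2Sim: point cloud -> mesh
    candidates = sample_grasps(mesh, n_candidates)
    labels = [simulate_grasp(mesh, g) for g in candidates]  # labels without humans
    return train_network(candidates, labels)        # Sim2Real: deploy on real scene
```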
Being able to grasp objects is a fundamental component of most robotic manipulation systems. In this paper, we present a new approach to simultaneously reconstruct a mesh and a dense grasp quality map of an object from a depth image. At the core of our approach is a novel camera-centric object representation called the "object shell", which is composed of an observed "entry image" and a predicted "exit image". We present an image-to-image residual ConvNet architecture in which the object shell and a grasp-quality map are predicted as separate output channels. The main advantage of the shell representation and the corresponding neural network architecture, ShellGrasp-Net, is that the input-output pixel correspondences in the shell representation are explicitly represented in the architecture. We show that this coupling yields superior generalization for object reconstruction and accurate grasp quality estimation, implicitly accounting for the object geometry. Our approach yields an efficient dense grasp quality map and an object geometry estimate in a single forward pass, and both outputs can be used in a wide range of robotic manipulation applications. With rigorous experimental validation, both in simulation and on a real setup, we show that our shell-based method can be used to generate precise grasps and the associated grasp quality with over 90% accuracy. Diverse grasps computed on shell reconstructions allow the robot to select and execute grasps in cluttered scenes with more than a 93% success rate.
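A minimal numpy sketch of what the shell representation makes easy: with an observed entry depth image and a predicted exit depth image, the object's thickness along each camera ray (and hence an antipodal grasp closing along that ray) is a per-pixel subtraction. The arrays and the gripper-width limit are illustrative assumptions.

```python
import numpy as np

def shell_grasps(entry_depth, exit_depth, max_gripper_width=0.08):
    thickness = exit_depth - entry_depth          # per-pixel object extent (meters)
    valid = (thickness > 0) & (thickness <= max_gripper_width)
    # Each valid pixel yields a grasp closing along the viewing ray,
    # with contacts at the entry and exit depths.
    return thickness, valid

entry = np.full((480, 640), 0.50)                 # observed entry image (toy)
exit_ = entry + 0.04                              # predicted exit image: 4 cm thick
thickness, valid = shell_grasps(entry, exit_)
```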
In this paper, we present a novel grasping pipeline based on contact point detection on truncated signed distance function (TSDF) volumes to achieve closed-loop 7-degree-of-freedom (7-DoF) grasping in cluttered environments. The key aspects of our method are that 1) the proposed pipeline exploits multi-view fusion, contact-point sampling and evaluation, and collision checking, which provides reliable and collision-free 7-DoF grasp poses with real-time performance; and 2) the contact-based pose representation effectively eliminates the ambiguity introduced by normal-based methods, delivering a more precise and flexible solution. Extensive simulated and real-robot experiments demonstrate that the proposed pipeline can select more antipodal and stable grasp poses and outperforms normal-based baselines in terms of grasp success rate in both simulated and physical scenarios.
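A toy sketch of the first step of contact-point detection on a TSDF volume: surface points lie where the TSDF changes sign along a ray, and an antipodal pair of such crossings within the gripper width would form a candidate contact pair. The synthetic sphere volume, grid size, and voxel size are illustrative assumptions.

```python
import numpy as np

def surface_crossings_x(tsdf):
    """Voxel indices where the TSDF changes sign along the x axis."""
    s = np.sign(tsdf)
    return np.argwhere(s[:-1, :, :] * s[1:, :, :] < 0)

# Synthetic TSDF of a 0.1 m-radius sphere centered in a 0.4 m cube.
n = 64
voxel = 0.4 / n
grid = (np.indices((n, n, n)).T * voxel) - 0.2    # (n, n, n, 3) voxel coordinates
tsdf = np.linalg.norm(grid, axis=-1) - 0.1

crossings = surface_crossings_x(tsdf)             # candidate contact locations
```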