The food packing industry typically uses seasonal ingredients that factory workers pack by hand. For small pieces of food picked by volume or weight that tend to entangle, stick, or clump together, it is difficult to predict from visual inspection how intertwined they are, making it challenging to accurately grasp the required target mass. Workers rely on a combination of weighing scales and a series of complex maneuvers to separate the food and reach the target mass, which makes automating the process a non-trivial affair. In this study, we propose a method that combines 1) pre-grasping to reduce the degree of entanglement, 2) post-grasping to adjust the grasped mass by carefully discarding excess food when the grasped amount exceeds the target mass, and 3) selecting grasp points so as to grasp an amount likely to be reasonably above the target mass. We evaluate the method on a variety of foods that entangle, stick, and clump, each with a different size, shape, and material properties such as volumetric mass density. We show a significant improvement in the accuracy of grasping user-specified target masses using our proposed method.
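To make the three-stage strategy concrete, here is a toy, self-contained Python sketch of the decision loop (pre-grasp agitation, aiming above the target mass, post-grasp discarding); all numbers, noise models, and function names are illustrative assumptions, not the authors' implementation:

```python
import random

# A toy, self-contained sketch of the three-stage strategy described above:
# pre-grasp to reduce entanglement, aim for a mass slightly above the target,
# then carefully discard excess. All numbers and models are illustrative.

TOL_G = 2.0  # acceptable deviation from the target mass, in grams

def simulated_pick(aim_g, entanglement):
    # more entanglement -> more variance in how much actually comes up
    return random.gauss(aim_g, 2.0 + 20.0 * entanglement)

def grasp_target_mass(target_g, entanglement=0.8):
    if entanglement > 0.5:          # 1) pre-grasp: agitate to loosen the pile
        entanglement *= 0.3
    grasped = simulated_pick(1.2 * target_g, entanglement)  # 2) aim high
    while grasped < target_g - TOL_G:       # a shortfall forces a full re-grasp
        grasped = simulated_pick(1.2 * target_g, entanglement)
    while grasped > target_g + TOL_G:       # 3) post-grasp: drop small increments
        grasped -= random.uniform(0.5, 2.0)
    return grasped

print(round(grasp_target_mass(50.0), 1))
```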
Picking a single object from an unsorted pile remains non-trivial for robotic systems. This is especially true when the pile consists of a granular material (GM) whose individual items entangle with each other, causing more to be picked out than intended. One key feature of entanglement-prone GMs is the presence of protrusions extending out from the main body of items in the pile. This work characterizes the role such protrusions play in causing mechanical entanglement and their effect on picking consistency. It reports experiments in which picking GMs with different protrusion lengths (PLs) results in up to a 76% increase in picked-mass variance, suggesting that PL is an informative feature for the design of picking strategies. Moreover, to counter this effect, it proposes a new spread-and-pick (SnP) approach that significantly reduces entanglement and makes picks more consistent. Compared to prior methods that attempt to pick from a tangle-free point in the pile, the proposed approach reduces picking error (PE) by up to 51% and shows good generalization to previously unseen GMs.
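As a rough illustration of why spreading helps, the following self-contained sketch models entanglement as growing with protrusion length and shows the spread-then-pick step shrinking picked-mass variance; all constants are invented for illustration:

```python
import random

# Toy illustration of the spread-and-pick (SnP) idea summarized above: a
# spreading motion first flattens the pile to disentangle protruding items,
# after which picks are far more consistent. Purely illustrative numbers.

def pick_mass(entanglement, aim_g=30.0):
    return random.gauss(aim_g, 1.0 + 25.0 * entanglement)

def snp_pick(protrusion_len_mm):
    entanglement = min(1.0, protrusion_len_mm / 20.0)  # longer protrusions tangle more
    entanglement *= 0.2                                # spreading reduces entanglement
    return pick_mass(entanglement)

samples = [snp_pick(15.0) for _ in range(1000)]
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
print(f"mean {mean:.1f} g, variance {var:.2f}")
```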
Grasping a wide variety of objects of various sizes and shapes with a single robot hand is challenging. To address this, we propose a new robot hand called the "F3 hand", inspired by the complex movements of the human index finger and thumb. The F3 hand attempts to realize complex human-like movements by combining a parallel-motion finger and a rotational-motion finger with an adaptive function. To confirm the performance of our hand, we attached it to a mobile manipulator, the Toyota Human Support Robot (HSR), and conducted grasping experiments. Our results show that it is able to grasp all YCB objects (82 in total), including a washer with an outer diameter as small as 6.4 mm. We also built a system for intuitive operation and, using a 3D mouse, grasped an additional 24 objects, including small toothpicks and paper clips as well as a large pitcher and a cracker box. The F3 hand achieved a 98% success rate in grasping even under imprecise control and positional offsets. Furthermore, thanks to the adaptive function of its fingers, we demonstrate characteristics of the F3 hand that facilitate grasping soft objects such as strawberries in a desirable pose.
Applications of robotic cloth manipulation range from fabric manufacturing to handling blankets and laundry. Cloth manipulation is challenging for robots largely because of cloth's high degrees of freedom, complex dynamics, and severe self-occlusion in folded or crumpled configurations. Prior work on robotic cloth manipulation relies primarily on vision sensors, which can pose challenges for fine-grained manipulation tasks such as grasping a desired number of cloth layers from a stack. In this paper, we propose using tactile sensing for cloth manipulation: we attach a tactile sensor (ReSkin) to one of the two fingertips of a Franka robot and train a classifier to determine whether the robot is grasping a specific number of cloth layers. During test-time experiments, the robot uses this classifier as part of its policy to grasp one or two cloth layers, using tactile feedback to determine suitable grasp points. Experimental results over 180 physical trials show that the proposed method outperforms baselines that do not use tactile feedback and generalizes better to unseen cloth than a method that uses an image classifier. Code, data, and videos are available at https://sites.google.com/view/reskin-cloth.
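A minimal sketch of the kind of tactile layer classifier described above, assuming a 15-dimensional input as a stand-in for ReSkin's magnetometer channels (the architecture is illustrative, not the authors' model):

```python
import torch
import torch.nn as nn

# Minimal sketch (not the authors' code): classify how many cloth layers are
# grasped from a single tactile reading. The 15-dimensional input is an
# assumption standing in for ReSkin's magnetometer channels.

class LayerClassifier(nn.Module):
    def __init__(self, n_features=15, n_classes=3):  # 0, 1, or 2 layers
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_classes),
        )

    def forward(self, x):
        return self.net(x)

model = LayerClassifier()
reading = torch.randn(1, 15)                  # one fake tactile sample
pred_layers = model(reading).argmax(dim=1)
print(f"predicted layers grasped: {pred_layers.item()}")
```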
Current learning-based robot grasping approaches exploit human-labeled datasets for training the models. However, there are two problems with such a methodology: (a) since each object can be grasped in multiple ways, manually labeling grasp locations is not a trivial task; (b) human labeling is biased by semantics. While there have been attempts to train robots using trial-and-error experiments, the amount of data used in such experiments remains substantially low and hence makes the learner prone to over-fitting. In this paper, we take the leap of increasing the available training data to 40 times more than prior work, leading to a dataset size of 50K data points collected over 700 hours of robot grasping attempts. This allows us to train a Convolutional Neural Network (CNN) for the task of predicting grasp locations without severe overfitting. In our formulation, we recast the regression problem as an 18-way binary classification over image patches. We also present a multi-stage learning approach where a CNN trained in one stage is used to collect hard negatives in subsequent stages. Our experiments clearly show the benefit of using large-scale datasets (and multi-stage training) for the task of grasping. We also compare to several baselines and show state-of-the-art performance on generalization to unseen objects for grasping.
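The 18-way reformulation can be made concrete with a short sketch: one logit per 10-degree angle bin for a given image patch, trained with per-bin binary labels. The architecture below is illustrative, not the paper's exact CNN:

```python
import torch
import torch.nn as nn

# Compact sketch of the reformulation described above: instead of regressing a
# grasp angle, predict an 18-way binary "graspable" score over angle bins
# (10 degrees apart) for an image patch. Layer sizes are illustrative.

class PatchGraspNet(nn.Module):
    def __init__(self, n_angle_bins=18):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(64, n_angle_bins)  # one logit per angle bin

    def forward(self, patch):
        return self.head(self.features(patch))   # (B, 18) logits

net = PatchGraspNet()
patch = torch.randn(1, 3, 224, 224)              # one candidate image patch
probs = torch.sigmoid(net(patch))[0]
best_bin = probs.argmax().item()
print(f"best grasp angle ~ {best_bin * 10} degrees (p={probs[best_bin].item():.2f})")
```

Training would apply a binary cross-entropy loss per bin, so each patch can carry success/failure labels for several angles at once.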
We study how high-resolution tactile sensors can be combined with vision and depth sensing to improve grasp stability prediction. Recent advances in simulating high-resolution tactile sensing, in particular dedicated tactile simulators, allow us to evaluate how combinations of sensing modalities affect the training of neural networks. Given the large amounts of data needed to train large neural networks, robot simulators provide a fast way to automate the data collection process. We extend existing work through an ablation study and an enlarged set of objects taken from the YCB benchmark set. Our results show that while a combination of vision, depth, and tactile sensing provides the best prediction results for known objects, the network fails to generalize to unknown objects. Our work also addresses existing problems with robotic grasping in tactile simulation and how to overcome them.
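A minimal sketch of the kind of multimodal fusion network such an ablation study compares: separate encoders for RGB, depth, and tactile inputs whose embeddings are concatenated before a stability head. All sizes are invented for illustration:

```python
import torch
import torch.nn as nn

# Illustrative fusion model, not the paper's architecture: encode each
# modality separately, concatenate embeddings, predict grasp stability.

def encoder(in_ch):
    return nn.Sequential(nn.Conv2d(in_ch, 16, 5, 2), nn.ReLU(),
                         nn.AdaptiveAvgPool2d(1), nn.Flatten())

class FusionNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.rgb, self.depth, self.tactile = encoder(3), encoder(1), encoder(3)
        self.head = nn.Linear(16 * 3, 2)   # stable / unstable

    def forward(self, rgb, depth, tactile):
        z = torch.cat([self.rgb(rgb), self.depth(depth), self.tactile(tactile)], dim=1)
        return self.head(z)

net = FusionNet()
out = net(torch.randn(1, 3, 64, 64), torch.randn(1, 1, 64, 64), torch.randn(1, 3, 32, 32))
print(out.shape)  # torch.Size([1, 2])
```

An ablation then simply drops one encoder at a time and retrains, which is how the vision/depth/tactile combinations above would be compared.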
We describe a learning-based approach to hand-eye coordination for robotic grasping from monocular images. To learn hand-eye coordination for grasping, we trained a large convolutional neural network to predict the probability that task-space motion of the gripper will result in successful grasps, using only monocular camera images and independently of camera calibration or the current robot pose. This requires the network to observe the spatial relationship between the gripper and objects in the scene, thus learning hand-eye coordination. We then use this network to servo the gripper in real time to achieve successful grasps. To train our network, we collected over 800,000 grasp attempts over the course of two months, using between 6 and 14 robotic manipulators at any given time, with differences in camera placement and hardware. Our experimental evaluation demonstrates that our method achieves effective real-time control, can successfully grasp novel objects, and corrects mistakes by continuous servoing.
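Servoing with such a network is typically done by sampling: propose candidate gripper motions, score them with the learned success-probability model, and execute the best. The sketch below uses a cross-entropy-method loop with a toy scorer standing in for the trained CNN; the loop structure, not the scorer, is the point:

```python
import numpy as np

# Sketch of servoing by sampling: repeatedly propose candidate task-space
# motions, score each with the learned success-probability model, and refine
# toward the best. A toy quadratic scorer stands in for CNN(image, motion).

def score(motion):  # stand-in for CNN(image, motion) -> P(success)
    target = np.array([0.10, -0.05, 0.02])
    return float(np.exp(-np.sum((motion - target) ** 2) / 0.01))

def cem_best_motion(iters=3, n=64, elite=6):
    mean, std = np.zeros(3), np.ones(3) * 0.05   # task-space motion, meters
    for _ in range(iters):
        candidates = np.random.randn(n, 3) * std + mean
        scores = np.array([score(c) for c in candidates])
        best = candidates[np.argsort(scores)[-elite:]]
        mean, std = best.mean(axis=0), best.std(axis=0) + 1e-4
    return mean

print("commanded motion:", np.round(cem_best_motion(), 3))
```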
Grasping is the process of picking up an object by applying forces and torques at a set of contact points. Recent advances in deep learning methods have enabled rapid progress in robotic object grasping. We systematically survey publications from the last decade, with a particular interest in approaches that grasp objects using all six degrees of freedom of the end-effector pose. Our review identifies four common methodologies for robotic grasping: sampling-based approaches, direct regression, reinforcement learning, and exemplar approaches. In addition, we find two "supporting methods" around grasping that use deep learning to support the grasping process: shape approximation and affordances. We distill the publications found in this systematic review (85 papers) into ten key takeaways that we consider crucial for future robotic grasping and manipulation research. An online version of the survey is available at https://rhys-newbury.github.io/projects/6dof/
As the basis for prehensile manipulation, it is vital to enable robots to grasp as robustly as humans. In daily manipulation, our grasping system is prompt, accurate, flexible, and continuous across spatial and temporal domains. Few existing methods cover all these properties for robot grasping. In this paper, we propose a new methodology for grasp perception that endows robots with these abilities. Specifically, we develop a dense supervision strategy with real perception and analytic labels in the spatial-temporal domain. Additional awareness of objects' center-of-mass is incorporated into the learning process to help improve grasping stability. Utilization of grasp correspondence across observations enables dynamic grasp tracking. Our model, AnyGrasp, can generate accurate, full-DoF, dense and temporally-smooth grasp poses efficiently, and works robustly against large depth-sensing noise. Embedded with AnyGrasp, we achieve a 93.3% success rate when clearing bins with over 300 unseen objects, which is comparable with human subjects under controlled conditions. Over 900 MPPH (mean picks per hour) is reported on a single-arm system. For dynamic grasping, we demonstrate catching swimming robot fish in the water.
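One ingredient mentioned above, grasp correspondence across observations, can be sketched as matching grasps between frames; a real system would also use rotation and feature similarity, so treat this nearest-neighbor version as a toy illustration:

```python
import numpy as np

# Toy sketch of grasp tracking via correspondence: match each grasp pose from
# the previous frame to the nearest grasp detected in the current frame,
# yielding temporally consistent grasp selection. Positions only, for brevity.

def track(prev_grasps, curr_grasps):
    matches = []
    for g in prev_grasps:
        dists = np.linalg.norm(curr_grasps - g, axis=1)
        matches.append(int(dists.argmin()))
    return matches

prev = np.array([[0.10, 0.00, 0.30], [0.25, 0.10, 0.28]])
curr = prev + np.random.normal(0, 0.005, prev.shape)  # object moved slightly
print(track(prev, curr))
```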
Robotic Surgical Assistants (RSAs) are commonly used by expert surgeons to perform minimally invasive surgery. However, long procedures filled with tedious and repetitive tasks such as suturing can lead to surgeon fatigue, motivating the automation of suturing. As visual tracking of a thin reflective needle is extremely challenging, prior work has modified the needle with a nonreflective contrasting paint. As a step toward automating a suturing subtask without such needle modification, we propose HOUSTON: Handoff of Unmodified, Surgical, Tool-Obstructed Needles, a problem and algorithm that uses a learned active sensing policy with a stereo camera to localize the needle and align it into a pose that is visible and accessible to the other arm. To compensate for robot positioning and needle perception errors, the algorithm then executes a high-precision grasping motion that uses multiple cameras. In physical experiments using the da Vinci Research Kit (dVRK), HOUSTON successfully passes unmodified needles with a 96.7% success rate and is able to perform handovers sequentially between the arms 32.4 times on average before failure. On needles unseen during training, HOUSTON achieves success rates of 75-92.9%. To our knowledge, this work is the first to study the handover of unmodified surgical needles. See https://tinyurl.com/huston-surgery for additional materials.
The accurate detection and grasping of transparent objects are challenging but of significance to robots. Here, a visual-tactile fusion framework for transparent object grasping under complex backgrounds and varying light conditions is proposed, including grasping position detection, tactile calibration, and visual-tactile fusion based classification. First, a multi-scene synthetic grasping dataset generation method with Gaussian-distribution-based data annotation is proposed. In addition, a novel grasping network named TGCNN is proposed for grasping position detection, showing good results in both synthetic and real scenes. In tactile calibration, inspired by human grasping, a fully convolutional network based tactile feature extraction method and a central-location-based adaptive grasping strategy are designed, improving the success rate by 36.7% compared to direct grasping. Furthermore, a visual-tactile fusion method is proposed for transparent object classification, which improves the classification accuracy by 34%. The proposed framework synergizes the advantages of vision and touch, and greatly improves the grasping efficiency of transparent objects.
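The Gaussian-distribution-based annotation can be illustrated in a few lines: instead of marking a single grasp pixel, spread a 2D Gaussian around it so that neighboring pixels receive partial credit. Shapes and sigma below are assumptions:

```python
import numpy as np

# Illustrative sketch of Gaussian-distribution-based grasp annotation: rather
# than labeling one pixel as the grasp position, spread a 2D Gaussian around
# it so nearby pixels receive partial credit during training.

def gaussian_label(h, w, cx, cy, sigma=8.0):
    ys, xs = np.mgrid[0:h, 0:w]
    heat = np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * sigma ** 2))
    return heat / heat.max()          # peak = 1 at the annotated grasp point

label = gaussian_label(96, 96, cx=40, cy=60)
print(label.shape, label[60, 40], round(float(label[60, 48]), 3))
```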
Cloth in the real world is often crumpled, self-occluded, or folded in on itself such that key regions, such as corners, are not directly graspable, making manipulation difficult. We propose a system that leverages visual and tactile perception to unfold the cloth via grasping and sliding on edges. By doing so, the robot is able to grasp two adjacent corners, enabling subsequent manipulation tasks like folding or hanging. As components of this system, we develop tactile perception networks that classify whether an edge is grasped and estimate the pose of the edge. We use the edge classification network to supervise a visuotactile edge grasp affordance network that can grasp edges with a 90% success rate. Once an edge is grasped, we demonstrate that the robot can slide along the cloth to the adjacent corner using tactile pose estimation/control in real time. See http://nehasunil.com/visuotactile/visuotactile.html for videos.
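The sliding phase admits a simple control sketch: given the edge's offset and tilt estimated from the tactile image, apply proportional corrections while translating toward the corner. Gains and the pose convention here are assumptions, not the authors' controller:

```python
# Small sketch of tactile sliding control: keep the cloth edge centered and
# aligned in the gripper while sliding toward the corner. The pose convention
# (lateral offset in mm, tilt in rad) and gains are invented for illustration.

def slide_step(edge_y_mm, edge_theta_rad, k_y=0.8, k_th=0.6):
    """Return (lateral correction mm, rotation correction rad) for one step."""
    return -k_y * edge_y_mm, -k_th * edge_theta_rad

# example: edge sits 3 mm off-center and tilted 0.2 rad in the sensor frame
print(slide_step(3.0, 0.2))   # -> (-2.4, -0.12)
```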
Vascular shunt insertion is a fundamental surgical procedure used to temporarily restore blood flow to tissues. It is often performed in the field after major trauma. We formulate a problem of automated vascular shunt insertion and propose a pipeline to perform Automated Vascular Shunt Insertion (AVSI) using a da Vinci Research Kit. The pipeline uses a learned visual model to estimate the locus of the vessel rim, plans a grasp on the rim, and moves to grasp at that point. The first robot gripper then pulls the rim to stretch open the vessel with a dilation motion. The second robot gripper then proceeds to insert a shunt into the vessel phantom (a model of the blood vessel) with a chamfer tilt followed by a screw motion. Results suggest that AVSI achieves a high success rate even with tight tolerances and varying vessel orientations up to 30°. Supplementary material, dataset, videos, and visualizations can be found at https://sites.google.com/berkeley.edu/autolab-avsi.
This paper presents DGBench, a fully reproducible open-source testing system for benchmarking dynamic grasping in environments where the relative motion between robot and object is unpredictable. We use the proposed benchmark to compare several visual perception arrangements. Traditional perception systems developed for static grasping cannot provide feedback during the final stage of a grasp due to sensor minimum range, occlusion, and a limited field of view. A multi-camera eye-in-hand perception system is presented that has advantages over commonly used camera configurations. We quantitatively evaluate performance on a real robot with an image-based visual servoing grasp controller and show a significantly improved success rate on a dynamic grasping task.
We present a proprioceptive teleoperation system that uses a reflexive grasping algorithm to enhance the speed and robustness of pick-and-place tasks. The system consists of two manipulators that use quasi-direct-drive actuation to provide highly transparent force feedback. The end-effector is equipped with bimodal force sensors that measure 3-axis force and 2-dimensional contact location. This information is used for anti-slip and re-grasping reflexes. When the user makes contact with the desired object, the re-grasping reflex aligns the gripper fingers with antipodal points on the object to maximize grasp stability. The reflex takes only 150 ms to correct inaccurate grasps chosen by the user, so the user's motion is only minimally disturbed by the execution of the re-grasp. Once antipodal contact is established, the anti-slip reflex ensures that the gripper applies enough normal force to prevent the object from slipping out of the grasp. The combination of proprioceptive manipulators and reflexive grasping allows users to complete teleoperated pick-and-place tasks at high speed.
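The two reflexes can be sketched as simple feedback rules; the friction coefficient, gains, and sensor quantities below are invented for illustration and stand in for the high-rate controller described above:

```python
# Toy sketch of the two reflexes. Sensor readings and gains are assumptions;
# a real implementation would run at high rate on the gripper's force and
# contact-location sensors.

MU = 0.6  # assumed friction coefficient

def antislip_force(tangential_force, current_normal, margin=1.3):
    """Raise the grip's normal force whenever the friction cone is nearly violated."""
    required = margin * abs(tangential_force) / MU
    return max(current_normal, required)

def regrasp_offset(contact_left_mm, contact_right_mm, gain=0.5):
    """Shift the gripper so the two contact centers line up (toward antipodal contact)."""
    return gain * (contact_right_mm - contact_left_mm)

print(antislip_force(tangential_force=3.0, current_normal=4.0))     # -> 6.5 N
print(regrasp_offset(contact_left_mm=2.0, contact_right_mm=-4.0))   # -> -3.0 mm
```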
Nowadays, robots play an increasingly important role in our daily life. In human-centered environments, robots often encounter piles of objects, packed items, or isolated objects. A robot must therefore be able to grasp and manipulate different objects in various situations to help humans with everyday tasks. In this paper, we propose a multi-view deep learning approach for robust object grasping in human-centric domains. In particular, our approach takes the point cloud of an arbitrary object as input and then generates orthographic views of the given object. The obtained views are finally used to estimate a pixel-wise grasp synthesis for each object. We train the model end-to-end on a small object-grasping dataset and test it on both simulated and real-world data without any further fine-tuning. To evaluate the performance of the proposed approach, we performed extensive sets of experiments in three scenarios: isolated objects, packed items, and piles of objects. Experimental results show that our approach performs well in all simulation and real-robot scenarios and is able to achieve reliable closed-loop grasping of novel objects across various scene configurations.
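The view-generation step can be sketched compactly: project the object's point cloud orthographically onto a plane, keeping the topmost surface per pixel, to obtain a depth view for pixel-wise grasp synthesis. Resolution and bounds are illustrative:

```python
import numpy as np

# Small sketch of rendering one orthographic depth view from an object point
# cloud, the kind of preprocessing described above. Resolution is illustrative.

def orthographic_view(points, res=64):
    """Project points onto the XY plane; each pixel keeps the highest Z (top view)."""
    mins, maxs = points.min(axis=0), points.max(axis=0)
    scale = (res - 1) / np.maximum(maxs[:2] - mins[:2], 1e-6)
    px = ((points[:, :2] - mins[:2]) * scale).astype(int)
    view = np.zeros((res, res), dtype=np.float32)
    for (x, y), z in zip(px, points[:, 2]):
        view[y, x] = max(view[y, x], z - mins[2])
    return view

cloud = np.random.rand(2000, 3) * [0.1, 0.1, 0.05]   # fake 10x10x5 cm object
print(orthographic_view(cloud).shape)
```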
Soft robotic grippers facilitate contact-rich manipulation, including robust grasping of varied objects. Yet the beneficial compliance of a soft gripper also results in significant deformation that can make precision manipulation challenging. We present visual pressure estimation & control (VPEC), a method that uses an RGB image from an external camera to infer the pressure applied by a soft gripper. We provide results for visual pressure inference when a pneumatic gripper and a tendon-actuated gripper make contact with a flat surface. We also show that VPEC enables precision manipulation via closed-loop control of the inferred pressure images. In our evaluation, a mobile manipulator (a Stretch RE1 from Hello Robot) uses visual servoing to make contact at a desired pressure; follows a spatial pressure trajectory; and grasps small low-profile objects, including a microSD card, a penny, and a pill. Overall, our results show that visual estimation of applied pressure can enable soft grippers to perform precision manipulation.
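The closed-loop use of inferred pressure can be sketched as a simple servo loop; the inference stub and gains below are invented stand-ins for the learned RGB-to-pressure model:

```python
# Toy closed-loop sketch: infer contact pressure from an external camera
# (stand-in function here) and servo the gripper height until the inferred
# pressure reaches a desired value. Gains and the stub are assumptions.

def infer_pressure(z_mm):  # stand-in for the learned RGB -> pressure model
    return max(0.0, 10.0 * (20.0 - z_mm))  # deeper press -> higher pressure

def servo_to_pressure(target_kpa, z_mm=25.0, gain=0.02, steps=100):
    for _ in range(steps):
        error = target_kpa - infer_pressure(z_mm)
        if abs(error) < 1.0:
            break
        z_mm -= gain * error            # move down when pressure is too low
    return z_mm, infer_pressure(z_mm)

print(servo_to_pressure(50.0))
```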
Grasping in dense clutter is a fundamental skill for autonomous robots. However, the crowdedness and occlusions in cluttered scenes make it very difficult to generate valid grasp poses without collisions, which leads to low efficiency and high failure rates. To address these issues, we present a generic framework called GE-Grasp for robotic motion planning in dense clutter, in which we leverage diverse action primitives for occluded object removal and present a generator-evaluator architecture to avoid spatial collisions. As a result, GE-Grasp is able to grasp objects in dense clutter efficiently, with promising success rates. Specifically, we define three action primitives: target-oriented grasping for capturing the target, pushing, and non-target-oriented grasping to reduce crowdedness and occlusion. The generators efficiently provide diverse action candidates that refer to the spatial information, while the evaluators assess the selected action-primitive candidates, and the optimal action is executed by the robot. Extensive experiments in simulated and real-world environments show that our approach outperforms state-of-the-art methods for grasping in clutter in terms of both motion efficiency and success rate. Moreover, we achieve performance in the real world comparable to that in simulation, which indicates the strong generalization ability of GE-Grasp. Supplementary material is available at: https://github.com/captainwudaokou/ge-grasp.
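The generator-evaluator pattern can be sketched schematically: generators propose diverse action-primitive candidates, an evaluator scores each, and the robot executes the best. The toy scoring terms below merely stand in for the learned evaluator:

```python
import random

# Schematic sketch of the generator-evaluator architecture: generators propose
# action-primitive candidates, a toy evaluator scores them (favoring direct
# grasps near the target when the scene is uncluttered, space-clearing actions
# otherwise), and the best action is executed. All terms are illustrative.

TARGET = (0.5, 0.5)

def generators(n=20):
    prims = ["target_grasp", "push", "nontarget_grasp"]
    return [(random.choice(prims), random.random(), random.random()) for _ in range(n)]

def evaluator(action, crowding=0.7):
    prim, x, y = action
    near = 1.0 - ((x - TARGET[0]) ** 2 + (y - TARGET[1]) ** 2) ** 0.5
    if prim == "target_grasp":
        return near * (1.0 - crowding)      # direct grasp suffers in clutter
    return 0.5 * near * crowding            # push/non-target grasp clears space

best = max(generators(), key=evaluator)
print("execute:", best)
```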
Being able to grasp objects is a fundamental component of most robotic manipulation systems. In this paper, we present a new approach to simultaneously reconstruct a mesh and a dense grasp quality map of an object from a depth image. At the core of our approach is a novel camera-centric object representation called the "object shell" which is composed of an observed "entry image" and a predicted "exit image". We present an image-to-image residual ConvNet architecture in which the object shell and a grasp-quality map are predicted as separate output channels. The main advantage of the shell representation and the corresponding neural network architecture, ShellGrasp-Net, is that the input-output pixel correspondences in the shell representation are explicitly represented in the architecture. We show that this coupling yields superior generalization capabilities for object reconstruction and accurate grasp quality estimation implicitly considering the object geometry. Our approach yields an efficient dense grasp quality map and an object geometry estimate in a single forward pass. Both of these outputs can be used in a wide range of robotic manipulation applications. With rigorous experimental validation, both in simulation and on a real setup, we show that our shell-based method can be used to generate precise grasps and the associated grasp quality with over 90% accuracy. Diverse grasps computed on shell reconstructions allow the robot to select and execute grasps in cluttered scenes with more than 93% success rate.
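A small sketch of what the shell buys you: with an entry-depth and exit-depth image, per-pixel object thickness along the viewing ray is just their difference, which directly gates whether a parallel-jaw grasp at that pixel fits in the gripper. The opening width and values below are invented:

```python
import numpy as np

# Sketch of the "object shell" idea: given an observed entry-depth image and a
# predicted exit-depth image, object thickness along the viewing ray at each
# pixel is exit - entry, which gates parallel-jaw graspability. Invented values.

MAX_GRIPPER_OPENING_M = 0.085

def graspable_mask(entry_depth, exit_depth, clearance=0.01):
    thickness = exit_depth - entry_depth          # per-pixel object extent
    return (thickness + clearance) < MAX_GRIPPER_OPENING_M

entry = np.full((48, 48), 0.50)                   # meters from the camera
exit_ = entry + np.random.uniform(0.0, 0.12, entry.shape)
print("graspable pixels:", int(graspable_mask(entry, exit_).sum()))
```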
When humans grasp objects in the real world, we often move our arms to hold the object in a different pose where we can use it. In contrast, typical lab settings study only the stability of a grasp immediately after lifting, without any subsequent repositioning of the arm. However, grasp stability can vary widely depending on the pose in which the object is held, since gravitational torques and gripper contact forces can change completely. To facilitate the study of how holding poses affect grasp stability, we present PoseIt, a novel multimodal dataset containing visual and tactile data collected over the full cycle of grasping an object, repositioning the arm to one of a set of sampled poses, and shaking the object. Using data from PoseIt, we can formulate and tackle the task of predicting whether an object grasped in a particular held pose will remain stable. We train an LSTM classifier that achieves 85% accuracy on the proposed task. Our experimental results show that multimodal models trained on PoseIt achieve higher accuracy than models using vision or tactile data alone, and that our classifier can also generalize to unseen objects and poses.
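A minimal sketch of such a classifier: an LSTM consumes a time series of fused visual/tactile features over the grasp-reposition-shake cycle and predicts stability. The 32-dimensional fused feature is an assumption, not PoseIt's actual encoding:

```python
import torch
import torch.nn as nn

# Minimal sketch (not the dataset authors' model): an LSTM over a sequence of
# fused visual/tactile features predicts whether the grasp stays stable in the
# held pose. Feature and hidden sizes are invented for illustration.

class StabilityLSTM(nn.Module):
    def __init__(self, feat_dim=32, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)          # stable vs. unstable

    def forward(self, seq):                       # seq: (B, T, feat_dim)
        _, (h, _) = self.lstm(seq)
        return self.head(h[-1])

model = StabilityLSTM()
sequence = torch.randn(4, 50, 32)                 # 4 trials, 50 timesteps each
print(model(sequence).softmax(dim=1))
```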