This paper presents a system integration approach for a 6-DOF (degree-of-freedom) collaborative robot to operate a pipette for automated liquid dispensing. Its technical development is threefold. First, we designed an end-effector for holding and triggering manual pipettes. Second, we took advantage of the collaborative robot's capabilities to recognize labware poses and plan robotic motions based on the recognized poses. Third, we developed a vision-based classifier to predict and correct positioning errors, thereby precisely attaching disposable tips. Through experiments and analysis, we confirmed that the developed system, especially the planning and visual recognition methods, helps ensure high-precision and flexible liquid dispensing. The developed system is suitable for low-frequency, high-variety biochemical liquid dispensing tasks. We expect it to promote the deployment of collaborative robots for laboratory automation, thereby improving experimental efficiency without significantly customizing the laboratory environment.
This paper presents an autonomous bin-picking system for cable harnesses, an extremely challenging bin-picking task. Currently, cable harnesses are not suitable for introduction into automated production due to their length and elusive structure. Considering the task of robotic bin picking where harnesses are heavily entangled together, it is challenging for a robot to pick a single harness using conventional bin-picking methods. In this paper, we propose an efficient approach to overcoming the difficulties of picking entanglement-prone parts. We develop several motion schemes for the robot to pick up a single harness while avoiding any entanglement. Moreover, we propose a learning-based bin-picking policy that selects both the grasps and the designed motion schemes in a reasonable sequence. Our approach is unique in that it fully addresses the entanglement problem in picking cluttered cable harnesses. We demonstrate our approach in a set of real-world experiments, during which the method achieved sequential bin-picking tasks with efficiency and accuracy under a variety of cluttered scenarios.
Deformable object manipulation (DOM) is an emerging research problem in robotics. The ability to manipulate deformable objects endows robots with higher autonomy and promises new applications in the industrial, service, and healthcare sectors. However, compared to rigid object manipulation, the manipulation of deformable objects is considerably more complex and remains an open research problem. Addressing DOM challenges demands breakthroughs in almost all aspects of robotics, namely hardware design, sensing, (deformation) modeling, planning, and control. In this article, we review recent advances and highlight the main challenges that arise when considering deformation in each sub-field. A particular focus of our paper lies in discussing these challenges and proposing promising directions for future research.
Classification bandits are multi-armed bandit problems whose task is to classify a given set of arms into either positive or negative class depending on whether the rate of the arms with the expected reward of at least h is not less than w for given thresholds h and w. We study a special classification bandit problem in which arms correspond to points x in d-dimensional real space with expected rewards f(x) which are generated according to a Gaussian process prior. We develop a framework algorithm for the problem using various arm selection policies and propose policies called FCB and FTSV. We show a smaller sample complexity upper bound for FCB than that for the existing algorithm of the level set estimation, in which whether f(x) is at least h or not must be decided for every arm's x. Arm selection policies depending on an estimated rate of arms with rewards of at least h are also proposed and shown to improve empirical sample complexity. According to our experimental results, the rate-estimation versions of FCB and FTSV, together with that of the popular active learning policy that selects the point with the maximum variance, outperform other policies for synthetic functions, and the version of FTSV is also the best performer for our real-world dataset.
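The classification decision the abstract describes (positive iff the rate of arms with expected reward at least h is no less than w) can be sketched with a simple frequentist rule. This is an illustrative toy, not the paper's Gaussian-process-based FCB/FTSV policies; the function name and the Hoeffding-style bound are our own assumptions.

```python
import math

def classify_arm_set(pulls, h=0.5, w=0.3, delta=0.05):
    """Classify the arm set as positive iff the rate of arms whose
    expected reward is at least h is no less than w.

    `pulls` maps an arm id to its list of observed rewards in [0, 1].
    Illustrative frequentist sketch; the paper's FCB/FTSV policies
    instead use a Gaussian process prior over arms."""
    confidently_above = 0
    for rewards in pulls.values():
        n = len(rewards)
        mean = sum(rewards) / n
        # Hoeffding-style confidence radius for bounded rewards
        radius = math.sqrt(math.log(2.0 / delta) / (2.0 * n))
        if mean - radius >= h:  # lower confidence bound clears h
            confidently_above += 1
    return confidently_above / len(pulls) >= w
```

In the paper's setting the means f(x) are correlated through the Gaussian process, which is what lets FCB achieve a smaller sample complexity than deciding each arm independently as in level set estimation.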
The long-standing theory that a colour-naming system evolves under the dual pressure of efficient communication and perceptual mechanisms is supported by more and more linguistic studies, including the analysis of four decades of diachronic data from the Nafaanra language. This inspires us to explore whether artificial intelligence could evolve and discover a similar colour-naming system by optimising the communication efficiency represented by high-level recognition performance. Here, we propose a novel colour quantisation transformer, CQFormer, that quantises the colour space while maintaining the accuracy of machine recognition on the quantised images. Given an RGB image, the Annotation Branch maps it into an index map before generating the quantised image with a colour palette, while the Palette Branch utilises a key-point-detection approach to find proper colours for the palette within the whole colour space. By interacting with the colour annotation, CQFormer is able to balance both machine vision accuracy and colour perceptual structure, such as a distinct and stable colour distribution, for the discovered colour system. Very interestingly, we even observe a consistent evolution pattern between our artificial colour system and basic colour terms across human languages. Besides, our colour quantisation approach also offers an efficient compression method that effectively reduces image storage while maintaining high performance in high-level recognition tasks such as classification and detection. Extensive experiments demonstrate the superior performance of our method with extremely low-bit-rate colours. We will release the source code soon.
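The index-map step of colour quantisation can be illustrated by a nearest-palette-colour assignment. A minimal sketch, assuming pixels and palette entries are RGB tuples; CQFormer learns the palette end-to-end rather than taking it as given.

```python
def quantise_to_palette(image, palette):
    """Map each RGB pixel to the index of the nearest palette colour
    (squared Euclidean distance in RGB space), producing the index map
    that, together with the palette, defines the quantised image.
    Illustrative sketch only; function name is ours."""
    def nearest(pixel):
        return min(
            range(len(palette)),
            key=lambda k: sum((c - p) ** 2 for c, p in zip(pixel, palette[k])),
        )
    return [[nearest(px) for px in row] for row in image]
```

Storing one index per pixel plus a small palette is what gives the extremely low bit rate mentioned above.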
Robots such as autonomous vehicles and assistive manipulators are increasingly operating in dynamic environments and close physical proximity to people. In such scenarios, the robot can leverage a human motion predictor to predict their future states and plan safe and efficient trajectories. However, no model is ever perfect -- when the observed human behavior deviates from the model predictions, the robot might plan unsafe maneuvers. Recent works have explored maintaining a confidence parameter in the human model to overcome this challenge, wherein the predicted human actions are tempered online based on the likelihood of the observed human action under the prediction model. This has opened up a new research challenge, i.e., \textit{how to compute the future human states online as the confidence parameter changes?} In this work, we propose a Hamilton-Jacobi (HJ) reachability-based approach to overcome this challenge. Treating the confidence parameter as a virtual state in the system, we compute a parameter-conditioned forward reachable tube (FRT) that provides the future human states as a function of the confidence parameter. Online, as the confidence parameter changes, we can simply query the corresponding FRT, and use it to update the robot plan. Computing parameter-conditioned FRT corresponds to an (offline) high-dimensional reachability problem, which we solve by leveraging recent advances in data-driven reachability analysis. Overall, our framework enables online maintenance and updates of safety assurances in human-robot interaction scenarios, even when the human prediction model is incorrect. We demonstrate our approach in several safety-critical autonomous driving scenarios, involving a state-of-the-art deep learning-based prediction model.
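How a confidence parameter can shape the set of future human states is easiest to see in one dimension. The toy below is our own interpolation, not the paper's method: the real framework solves a high-dimensional HJ reachability problem to obtain a parameter-conditioned forward reachable tube, while this sketch only shows the qualitative effect of the confidence parameter.

```python
def confidence_conditioned_interval(x0, v_model, v_bounds, beta, t):
    """1-D sketch of a confidence-conditioned forward reachable set:
    with full confidence (beta = 1) the human is assumed to follow the
    model velocity v_model; with no confidence (beta = 0) any velocity
    in v_bounds is possible. Hypothetical illustration only."""
    v_min, v_max = v_bounds
    lo = beta * v_model + (1.0 - beta) * v_min
    hi = beta * v_model + (1.0 - beta) * v_max
    return (x0 + lo * t, x0 + hi * t)
```

Online, as beta changes with the observed human behaviour, the robot would query the interval (in the paper, the precomputed FRT) for the current beta and replan against it.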
We propose a lightweight and accurate method for detecting anomalies in videos. Existing methods use multiple-instance learning (MIL) to determine the normal/abnormal status of each segment of a video. Recent successful studies argue that, rather than focusing on individual segments, learning the temporal relationships among segments is important for achieving high accuracy. We therefore analyzed recent successful methods and found that while jointly learning all segments is indeed important, the temporal order among them is irrelevant to achieving high accuracy. Based on this finding, we do not use the MIL framework, but instead propose a lightweight model with a self-attention mechanism that automatically extracts the features important for determining the normal/abnormal status of all input segments. As a result, our neural network model has 1.3% of the number of parameters of the existing method. We evaluated the frame-level detection accuracy of our method on three benchmark datasets (UCF-Crime, ShanghaiTech, and XD-Violence) and demonstrated that it can achieve accuracy comparable to or better than state-of-the-art methods.
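The core idea of weighting all segments jointly, without relying on their temporal order, can be sketched as attention pooling over segment features. This is a minimal hand-written illustration, assuming fixed feature and query vectors; the paper's model learns these quantities.

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention_pool(features, query):
    """Pool segment features with attention weights derived from a
    query vector: segments scoring highly against the query dominate
    the pooled representation, regardless of where they occur in the
    video. Sketch of the mechanism only, not the paper's architecture."""
    scores = [sum(f * q for f, q in zip(feat, query)) for feat in features]
    weights = softmax(scores)
    dim = len(features[0])
    return [
        sum(weights[i] * features[i][d] for i in range(len(features)))
        for d in range(dim)
    ]
```

Note that permuting `features` only permutes the weights: the pooled output is order-invariant, matching the finding that temporal order is irrelevant here.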
Existing video domain adaptation (DA) methods need to store all temporal combinations of video frames or pair the source and target videos, which are memory-costly and cannot scale to long videos. To address these limitations, we propose a memory-efficient graph-based video DA approach. First, our method models each source or target video as a graph: nodes represent video frames, and edges represent the temporal or visual similarity relationships between frames. We use a graph attention network to learn the weights of individual frames and simultaneously align the source and target videos into a domain-invariant graph feature space. Instead of storing a large number of sub-videos, our method constructs only one graph with a graph attention mechanism per video, thereby greatly reducing the memory cost. Extensive experiments show that, compared with state-of-the-art methods, we achieve superior performance while significantly reducing the memory cost.
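The graph construction described above (nodes are frames; edges encode temporal adjacency or visual similarity) can be sketched directly. The window size, similarity threshold, and cosine measure are our own illustrative choices; the paper learns edge weights with graph attention rather than thresholding.

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def build_frame_graph(features, window=1, sim_threshold=0.9):
    """One node per frame; an edge (i, j) whenever two frames are
    temporal neighbours (within `window`) or visually similar (cosine
    similarity above `sim_threshold`). Illustrative sketch only."""
    edges = set()
    n = len(features)
    for i in range(n):
        for j in range(i + 1, n):
            if j - i <= window or cosine(features[i], features[j]) >= sim_threshold:
                edges.add((i, j))
    return edges
```

One such graph per video replaces the combinatorial set of sub-videos, which is the source of the memory saving.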
Image restoration algorithms such as super-resolution (SR) are indispensable pre-processing modules for object detection in low-quality images. Most of these algorithms assume the degradation is fixed and known a priori. However, in practice, the actual degradation or the optimal up-sampling ratio is unknown or differs from the assumption, leading to deteriorated performance of both the pre-processing module and the consequent high-level tasks such as object detection. Here, we propose a novel self-supervised framework for object detection in low-resolution degraded images. We utilize down-sampling degradation as a transformation for self-supervised signals to explore representations that are equivariant to various resolutions and other degradation conditions. The Auto-Encoding Resolution in Self-supervision (AERIS) framework can further take advantage of advanced SR architectures with an arbitrary-resolution restoring decoder to reconstruct the original correspondence from the degraded input images. Both representation learning and object detection are jointly optimized in an end-to-end training fashion. The generic AERIS framework can be implemented on various mainstream object detection architectures with different backbones. Extensive experiments show that our method achieves superior performance compared with existing methods when facing variant degradation situations. The code will be released at https://github.com/cuiziteng/eccv_aeris.
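The down-sampling degradation used above as a self-supervised transformation can be illustrated by simple average pooling on a single-channel image. This is a generic sketch of the degradation operation, not AERIS itself; the function name is ours, and the image dimensions are assumed divisible by the factor.

```python
def average_pool_downsample(image, factor):
    """Down-sample a single-channel image (a list of rows) by average
    pooling over non-overlapping factor x factor blocks, one example
    of the down-sampling degradation used as a self-supervised signal.
    Illustrative sketch only."""
    h, w = len(image) // factor, len(image[0]) // factor
    return [
        [
            sum(
                image[i * factor + a][j * factor + b]
                for a in range(factor)
                for b in range(factor)
            ) / (factor * factor)
            for j in range(w)
        ]
        for i in range(h)
    ]
```

Training the encoder on pairs of an image and its down-sampled version encourages representations that hold up across resolutions, which is what the detector then consumes.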
This paper presents an algorithm for a team of mobile robots to simultaneously learn a spatial field over a domain and spatially distribute themselves to optimally cover it. Departing from previous approaches that estimate the spatial field through a centralized Gaussian process, this work leverages the spatial structure of the coverage problem and presents a decentralized strategy in which samples are aggregated locally by establishing communication across the boundaries of a Voronoi partition. We present an algorithm whereby each robot runs a local Gaussian process computed from its own measurements and those provided by its Voronoi neighbors, which are incorporated into the individual robot's Gaussian process only if they provide sufficiently novel information. The performance of the algorithm is evaluated in simulation and compared with a centralized approach.
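The "sufficiently novel information" filter can be sketched with a simple distance test against a robot's existing sample locations. This is our own proxy for novelty; the paper instead judges novelty through the local Gaussian process (e.g., its predictive uncertainty at the candidate location).

```python
import math

def incorporate_if_novel(local_samples, candidate, min_dist=0.5):
    """Decide whether a sample shared by a Voronoi neighbour should
    enter a robot's local model. Samples are ((x, y), value) pairs;
    a candidate is novel if no existing sample lies within `min_dist`
    of its location. Illustrative distance-based sketch only."""
    (cx, cy), _value = candidate
    for (sx, sy), _ in local_samples:
        if math.hypot(cx - sx, cy - sy) < min_dist:
            return False  # already explained by a nearby sample
    local_samples.append(candidate)
    return True
```

Filtering neighbour samples this way keeps each local Gaussian process small, which is what lets the decentralized strategy scale where a centralized one would not.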