Humans make extensive use of vision and touch as complementary senses: vision provides global information about the scene, while touch provides local information during manipulation without suffering from occlusion. In this work, we propose a novel framework for learning multi-task visuo-tactile representations in a self-supervised manner. We design a mechanism that enables a robot to autonomously collect spatially aligned visual and tactile data, a key property for downstream tasks. We then train visual and tactile encoders with a cross-modal contrastive loss to embed these paired sensory inputs into a shared latent space. The learned representations are evaluated, without fine-tuning, on 5 perception and control tasks involving deformable surfaces: tactile classification, contact localization, anomaly detection (e.g., palpating for tumors in a surgical phantom), tactile search from a visual query (e.g., under occlusion), and tactile servoing along cloth edges and cables. The learned representations achieve an 80% success rate on towel feature classification, a 73% average success rate on anomaly detection in surgical materials, and high average success rates on visually guided tactile search, with an 87.8% average servoing distance along cables and garment seams. These results suggest the flexibility of the learned representations and take a step toward task-agnostic visuo-tactile representations for robot control.
translated by Google Translate
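The cross-modal contrastive objective described above pulls each visual embedding toward its spatially aligned tactile embedding in a shared latent space. A minimal numpy sketch of such a symmetric InfoNCE-style loss; the function name, temperature value, and symmetric averaging over both directions are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def cross_modal_contrastive_loss(vis, tac, temperature=0.07):
    """Symmetric InfoNCE-style loss over paired visual/tactile embeddings.

    vis, tac: (N, d) arrays where row i of each array is a matched pair.
    """
    # L2-normalize so the dot product is cosine similarity.
    vis = vis / np.linalg.norm(vis, axis=1, keepdims=True)
    tac = tac / np.linalg.norm(tac, axis=1, keepdims=True)
    logits = vis @ tac.T / temperature        # (N, N) similarity matrix

    def nll_diag(m):
        # Row-wise log-softmax; the matching pair sits on the diagonal.
        m = m - m.max(axis=1, keepdims=True)  # numerical stability
        log_probs = m - np.log(np.exp(m).sum(axis=1, keepdims=True))
        return -np.mean(np.diag(log_probs))

    # Average the vision->touch and touch->vision directions.
    return 0.5 * (nll_diag(logits) + nll_diag(logits.T))
```

Spatially aligned pairs should score a lower loss than mismatched pairs, which is what drives the two encoders toward a shared latent space.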
Simulation-to-reality transfer has become a popular and highly successful method for training robot control policies on a wide variety of tasks. However, it is often challenging to determine when a policy trained in simulation is ready to be transferred to the physical world. Deploying a policy trained on too little simulation data can result in unreliable and dangerous behavior on physical hardware. On the other hand, overtraining in simulation can cause the policy to overfit to the visual appearance and dynamics of the simulator. In this work, we study strategies for automatically determining when a policy trained in simulation can be reliably transferred to a physical robot. We examine these ideas specifically in the context of robotic fabric manipulation, where successful sim2real transfer is particularly challenging due to the difficulty of modeling the dynamics and visual appearance of fabric. Results on a fabric smoothing task suggest that our switching criteria correlate well with real-world performance. In particular, our confidence-based switching criteria achieve an average final fabric coverage of 87.2-93.7% within 55-60% of the total training budget. See https://tinyurl.com/lsc-case for code and supplementary material.
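The abstract does not give the switching rule itself; the sketch below shows one generic form a confidence-based switching criterion can take — transfer to the physical robot only once policy confidence stays above a threshold for a full window of simulated evaluations. The threshold and window size are invented for illustration:

```python
from collections import deque

class SwitchingCriterion:
    """Decide when a sim-trained policy is ready for real-world transfer.

    Switch once the policy's confidence over a sliding window of
    simulated evaluation episodes stays above a threshold. The window
    size and threshold here are illustrative, not the paper's values.
    """

    def __init__(self, threshold=0.9, window=5):
        self.threshold = threshold
        self.scores = deque(maxlen=window)

    def update(self, episode_confidence):
        """Record one evaluation score; return True when it is time to switch."""
        self.scores.append(episode_confidence)
        full = len(self.scores) == self.scores.maxlen
        return full and min(self.scores) >= self.threshold
```

A single low-confidence episode resets the wait, which guards against switching on a lucky streak too early in training.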
Commercial and industrial deployments of robot fleets typically fall back on remote human teleoperators during execution when robots are at risk or unable to make task progress. With continual learning, these interventions from remote humans can also be used to improve the robot fleet control policy over time. A central question is how to effectively allocate limited human attention across individual robots. Prior work addresses this in the single-robot, single-human setting. We formalize the Interactive Fleet Learning (IFL) setting, in which multiple robots interactively query and learn from multiple human supervisors. We present a fully implemented open-source IFL benchmark suite of GPU-accelerated Isaac Gym environments for the evaluation of IFL algorithms. We propose Fleet-DAgger, a family of IFL algorithms, and compare a novel Fleet-DAgger algorithm to 4 baselines in simulation. We also perform 1000 physical block-pushing trials with a fleet of 4 ABB YuMi robot arms. Experiments suggest that the allocation of humans to robots significantly affects robot fleet performance, and that our algorithm achieves an 8.8x higher return on human effort than baselines. See https://tinyurl.com/fleet-dagger for code, videos, and supplementary material.
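Two of the quantities discussed above can be made concrete with toy code: a greedy allocation of scarce human attention to the robots that most need it, and a return-on-human-effort metric. Both definitions are simplified assumptions for illustration, not the paper's exact formulations:

```python
def allocate_humans(priorities, num_humans):
    """Assign each available human to the robot with the highest priority.

    priorities: per-robot scores (e.g., estimated risk or uncertainty);
    higher means more in need of supervision. Returns the indices of the
    robots that receive human attention -- a greedy top-k allocation, one
    of the simplest policies the IFL setting admits.
    """
    ranked = sorted(range(len(priorities)),
                    key=lambda i: priorities[i], reverse=True)
    return ranked[:num_humans]

def return_on_human_effort(total_task_successes, total_human_timesteps):
    """Task successes earned per unit of human supervision time."""
    return total_task_successes / max(total_human_timesteps, 1)
```

Under a metric like this, an algorithm improves not by using more human time but by spending the same human time on the robots where it matters most.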
Diffusion models have emerged as the state-of-the-art for image generation, among other tasks. Here, we present an efficient diffusion-based model for 3D-aware generation of neural fields. Our approach pre-processes training data, such as ShapeNet meshes, by converting them to continuous occupancy fields and factoring them into a set of axis-aligned triplane feature representations. Thus, our 3D training scenes are all represented by 2D feature planes, and we can directly train existing 2D diffusion models on these representations to generate 3D neural fields with high quality and diversity, outperforming alternative approaches to 3D-aware generation. Our approach requires essential modifications to existing triplane factorization pipelines to make the resulting features easy to learn for the diffusion model. We demonstrate state-of-the-art results on 3D generation on several object classes from ShapeNet.
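The triplane factorization described above reduces a 3D field query to three 2D plane lookups. A minimal sketch with nearest-cell sampling and feature summation; some triplane pipelines use bilinear interpolation and feed the aggregated feature through an MLP decoder, and the names and shapes here are illustrative:

```python
import numpy as np

def triplane_features(planes, point, resolution):
    """Query a triplane representation at a 3D point in [0, 1]^3.

    planes: dict with 'xy', 'xz', 'yz' arrays of shape (R, R, C).
    The point is projected onto each axis-aligned plane, the nearest
    feature cell is read, and the three features are summed.
    """
    x, y, z = point

    def sample(plane, u, v):
        # Nearest-cell lookup (bilinear interpolation in real pipelines).
        i = min(int(u * resolution), resolution - 1)
        j = min(int(v * resolution), resolution - 1)
        return plane[i, j]

    return (sample(planes['xy'], x, y)
            + sample(planes['xz'], x, z)
            + sample(planes['yz'], y, z))
```

Because the scene lives entirely in the three 2D feature planes, a 2D diffusion model that generates those planes implicitly generates the 3D field.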
DeepAngle is a machine learning-based method to determine the contact angles of different phases in tomography images of porous materials. Measurement of angles in 3-D needs to be done within the surface perpendicular to the angle planes, and it can become inaccurate when dealing with the discretized space of image voxels. A computationally intensive solution is to correlate and vectorize all surfaces using an adaptable grid, and then measure the angles within the desired planes. In contrast, the present study provides a rapid and low-cost technique, powered by deep learning, to estimate the interfacial angles directly from images. DeepAngle is tested on both synthetic and realistic images against the direct measurement technique and is found to improve the r-squared by 5 to 16% while lowering the computational cost by a factor of 20. This rapid method is especially applicable for processing large tomography data and time-resolved images, which is otherwise computationally intensive. The developed code and the dataset are available in an open repository on GitHub (https://www.github.com/ArashRabbani/DeepAngle).
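The direct geometric measurement that DeepAngle replaces ultimately reduces to angles between interface normals at the contact line, which voxel discretization makes noisy. A minimal sketch of that per-point measurement:

```python
import math

def contact_angle(normal_a, normal_b):
    """Angle in degrees between two interface normal vectors.

    Given normals of the two interfaces meeting at a contact line, the
    angle follows from the dot product. This is the kind of per-point
    geometric measurement that becomes inaccurate on discretized voxel
    surfaces, motivating learned estimators like DeepAngle.
    """
    dot = sum(a * b for a, b in zip(normal_a, normal_b))
    na = math.sqrt(sum(a * a for a in normal_a))
    nb = math.sqrt(sum(b * b for b in normal_b))
    # Clamp against floating-point drift before acos.
    cos_theta = max(-1.0, min(1.0, dot / (na * nb)))
    return math.degrees(math.acos(cos_theta))
```

On real tomography data the normals themselves must first be estimated from the voxelized surfaces, which is where the error and the computational cost accumulate.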
In this paper, we seek to measure how much information a component in a neural network could extract from the representations fed into it. Our work stands in contrast to prior probing work, most of which investigates how much information a model's representations contain. This shift in perspective leads us to propose a new principle for probing, the architectural bottleneck principle: In order to estimate how much information a given component could extract, a probe should look exactly like the component. Relying on this principle, we estimate how much syntactic information is available to transformers through our attentional probe, a probe that exactly resembles a transformer's self-attention head. Experimentally, we find that, in three models (BERT, ALBERT, and RoBERTa), a sentence's syntax tree is mostly extractable by our probe, suggesting these models have access to syntactic information while composing their contextual representations. Whether this information is actually used by these models, however, remains an open question.
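Per the architectural bottleneck principle above, the attentional probe is shaped exactly like a self-attention head. A minimal numpy sketch of using such a probe to predict each token's syntactic head; the argmax decoding rule and the shapes are illustrative:

```python
import numpy as np

def attentional_probe(H, W_q, W_k):
    """Predict each token's syntactic head with an attention-shaped probe.

    H: (n, d) contextual representations from a frozen model.
    W_q, W_k: (d, k) learned probe parameters shaped exactly like a
    self-attention head's query/key projections, so the probe can
    extract no more than the head itself could. Returns, for each
    token i, the index j maximizing the query-key score -- read here
    as a dependency-head prediction.
    """
    scores = (H @ W_q) @ (H @ W_k).T   # (n, n) attention logits
    np.fill_diagonal(scores, -np.inf)  # a token is not its own head
    return scores.argmax(axis=1)
```

Training would fit W_q and W_k against gold syntax trees; high accuracy then means the information is *extractable* by an attention head, not that the model necessarily uses it.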
We develop a new framework for trajectory planning on predefined paths for general N-link manipulators. Unlike previous approaches, which generate open-loop minimum-time controllers or pre-tuned motion profiles by time-scaling, we establish analytic algorithms that recover all initial conditions that can be driven to the desired target set while adhering to environment constraints. More relevant technologically, we characterise families of corresponding safe state-feedback controllers with several desirable properties. A key enabler in our framework is the introduction of a state-feedback template that induces ordering properties between trajectories of the resulting closed-loop system. The proposed structure allows working directly on the nonlinear system in both the analysis and synthesis problems. Both the offline computations and the online implementation scale with the number of links of the manipulator. The results can potentially be used in a range of challenging problems; numerical experiments on a commercial robotic manipulator demonstrate that efficient online implementation is possible.
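The state-feedback templates above are characterised analytically in the paper. As a toy illustration only, here is a path-parameterized feedback law that regulates speed along the path with a progress-dependent profile; the profile shape and gain are invented and stand in for the template structure, not for the paper's actual controllers:

```python
def path_speed_feedback(s, s_dot, v_max, k=2.0):
    """Toy state-feedback law for speed along a predefined path.

    s in [0, 1] is progress along the path, s_dot its rate, and the
    return value an acceleration command along the path. The law drives
    s_dot toward a progress-dependent profile v(s) capped at v_max,
    slowing near both endpoints. Profile and gain are illustrative.
    """
    v_target = min(v_max, v_max * 4 * s * (1 - s) + 0.1)
    return k * (v_target - s_dot)
```

Because the command depends only on the current state (s, s_dot), it is closed-loop by construction, unlike a pre-tuned time-scaled motion profile.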
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
We present SLATE, a sequence labeling approach for extracting tasks from free-form content such as digitally handwritten (or "inked") notes on a virtual whiteboard. Our approach allows us to create a single, low-latency model to simultaneously perform sentence segmentation and classification of these sentences into task/non-task sentences. SLATE greatly outperforms a baseline two-model (sentence segmentation followed by classification model) approach, achieving a task F1 score of 84.4%, a sentence segmentation (boundary similarity) score of 88.4% and three times lower latency compared to the baseline. Furthermore, we provide insights into tackling challenges of performing NLP on the inking domain. We release both our code and dataset for this novel task.
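A single sequence-labeling model can perform segmentation and task classification at once by emitting one tag per token. A minimal decoder for such tags; the BI-style tag set here is an illustrative stand-in for a single-model approach like SLATE, not the paper's actual label scheme:

```python
def decode_tags(tokens, tags):
    """Decode joint segmentation + classification tags into sentences.

    Assumes a BI-style scheme where each token is tagged 'B-TASK',
    'I-TASK', 'B-OTHER', or 'I-OTHER': a 'B-*' tag opens a new sentence
    and the suffix carries the task/non-task label. Returns a list of
    (sentence, is_task) pairs.
    """
    sentences = []
    for token, tag in zip(tokens, tags):
        prefix, label = tag.split('-')
        if prefix == 'B' or not sentences:
            sentences.append(([token], label == 'TASK'))
        else:
            sentences[-1][0].append(token)
    return [(' '.join(words), is_task) for words, is_task in sentences]
```

Tagging both boundaries and labels in one pass is what removes the second model from the pipeline and, with it, a large share of the latency.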
Reinforcement learning (RL) operating on attack graphs and leveraging cyber terrain principles is used to develop the reward and state structure for determining surveillance detection routes (SDR). This work extends previous efforts on developing RL methods for path analysis within enterprise networks, and focuses on building SDRs in which the routes explore network services while trying to evade risk. RL supports the development of these routes through a reward mechanism that helps realize such paths. The RL algorithm is modified with a novel warm-up phase that decides, during initial exploration, which areas of the network are safe to explore based on the rewards and a penalty scale factor.
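The warm-up phase described above flags which parts of the network are safe to explore based on rewards and a penalty scale factor. A toy sketch under an assumed scoring rule; the rule, node names, and penalty value are invented for illustration:

```python
def warmup_safe_states(rewards, risks, penalty_scale=2.0):
    """Warm-up phase: flag which network nodes are safe to explore.

    rewards/risks: per-node dicts over attack-graph nodes. A node is
    considered safe when its exploration reward outweighs its risk
    scaled by a penalty factor -- a simplified stand-in for the warm-up
    described above, not the paper's exact criterion.
    """
    return {node for node, r in rewards.items()
            if r - penalty_scale * risks[node] >= 0}
```

Restricting the subsequent RL exploration to this safe set is what lets the agent trace routes through network services while evading the riskiest terrain.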