Humans make extensive use of vision and touch as complementary senses: vision provides global information about the scene, while touch supplies local information during manipulation without suffering from occlusion. In this work, we propose a novel framework for learning multi-task visuo-tactile representations in a self-supervised manner. We design a mechanism that enables a robot to autonomously collect spatially aligned visual and tactile data, a key property for downstream tasks. We then train visual and tactile encoders with a cross-modal contrastive loss to embed these paired sensory inputs into a shared latent space. The learned representations are evaluated, without fine-tuning, on five perception and control tasks involving deformable surfaces: tactile classification, contact localization, anomaly detection (e.g., palpating for tumors in a surgical phantom), tactile search from a visual query (e.g., under occlusion), and tactile servoing along cloth edges and cables. The learned representations achieve an 80% success rate on towel feature classification, a 73% average success rate on anomaly detection in surgical materials, successful vision-guided tactile search, and an 87.8% average servoing distance along cables and garment seams. These results suggest the flexibility of the learned representations and take a step toward task-agnostic visuo-tactile representations for robot control.
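To make the training signal concrete, here is a minimal sketch of a symmetric cross-modal contrastive (InfoNCE-style) loss over paired visual and tactile embeddings; the embedding dimension, batch size, and temperature are illustrative assumptions rather than the paper's exact setup.

```python
# Sketch of a symmetric cross-modal contrastive loss for paired vision/touch embeddings.
import torch
import torch.nn.functional as F

def cross_modal_contrastive_loss(z_vision, z_touch, temperature=0.1):
    """z_vision, z_touch: (B, D) embeddings of spatially aligned pairs."""
    z_v = F.normalize(z_vision, dim=1)
    z_t = F.normalize(z_touch, dim=1)
    logits = z_v @ z_t.t() / temperature                 # (B, B) similarity matrix
    targets = torch.arange(z_v.size(0), device=z_v.device)
    # Matched (diagonal) pairs are positives; all other pairs act as negatives.
    loss_v2t = F.cross_entropy(logits, targets)
    loss_t2v = F.cross_entropy(logits.t(), targets)
    return 0.5 * (loss_v2t + loss_t2v)

# Example with random stand-ins for the outputs of hypothetical vision/touch encoders.
z_v = torch.randn(32, 128)
z_t = torch.randn(32, 128)
print(cross_modal_contrastive_loss(z_v, z_t).item())
```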
The combination of reinforcement learning (RL) with deep learning has led to a series of impressive feats, and many believe (deep) RL offers a path towards generally capable agents. However, the success of RL agents is often highly sensitive to design choices made during training, which may require tedious and error-prone manual tuning. This makes applying RL to new problems challenging and also limits its full potential. In many other areas of machine learning, AutoML has shown that such design choices can be automated, and it has also yielded promising initial results when applied to RL. However, automated reinforcement learning (AutoRL) involves not only standard applications of AutoML but also additional challenges unique to RL, which naturally give rise to a different set of methods. As such, AutoRL has become an important research area in RL, offering promise in a variety of applications ranging from RNA design to playing games. Given the diversity of methods and environments considered in RL, much of this research has been conducted in distinct subfields, ranging from meta-learning to evolution. In this survey, we seek to unify the field of AutoRL, provide a common taxonomy, discuss each area in detail, and pose open problems of interest to researchers going forward.
The co-adaptation of robot morphology and behaviour is becoming increasingly important with the advent of fast 3D-manufacturing methods and efficient deep reinforcement learning algorithms. A major challenge in applying co-adaptation methods to the real world is the simulation-to-reality gap caused by model and simulation inaccuracies. Previous work, however, has focused mainly on analytical models of morphology development and on studies of evolutionary adaptation using large populations of (micro) simulators, neglecting both the existence of the simulation-to-reality gap and the cost of manufacturing cycles in the real world. This paper presents a new approach that combines classic, high-frequency deep neural networks with computationally expensive graph neural network surrogates for the data-efficient co-adaptation of agents with varying numbers of degrees of freedom. Results in simulation show that the new method can co-adapt agents within such a limited number of production cycles by effectively combining design optimization with offline reinforcement learning, which allows for direct application to real-world co-adaptation tasks in future work.
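For intuition only, the toy loop below sketches the general pattern of surrogate-driven design optimization under a small manufacturing budget: a cheap learned model scores many candidate designs so that only a few are expensively evaluated. The k-nearest-neighbour surrogate and toy objective are stand-ins and do not reproduce the paper's graph-neural-network surrogates or offline RL components.

```python
# Toy surrogate-driven co-adaptation loop with a limited "production" budget.
import numpy as np

rng = np.random.default_rng(0)

def real_world_return(design):
    """Stand-in for one expensive manufacture-and-evaluate cycle."""
    return -np.sum((design - 0.3) ** 2) + 0.01 * rng.standard_normal()

def surrogate_score(design, X, y, k=3):
    """Cheap k-nearest-neighbour surrogate in place of a learned GNN model."""
    d = np.linalg.norm(X - design, axis=1)
    return y[np.argsort(d)[:k]].mean()

X = rng.uniform(-1, 1, size=(8, 4))                  # designs already evaluated
y = np.array([real_world_return(x) for x in X])

for cycle in range(5):                               # limited production budget
    candidates = rng.uniform(-1, 1, size=(512, 4))   # cheap to score
    scores = [surrogate_score(c, X, y) for c in candidates]
    best = candidates[int(np.argmax(scores))]
    r = real_world_return(best)                      # one expensive evaluation per cycle
    X, y = np.vstack([X, best]), np.append(y, r)
    print(f"cycle {cycle}: evaluated return {r:.3f}")
```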
Virtuoso pianists play the piano with passion, poetry, and extraordinary technical ability. As Liszt put it, (a virtuoso) must call up scent and blossom, and breathe the breath of life. The most capable piano-playing robots to date are based on a combination of specialized robotic hands/pianos and hard-coded planning algorithms. In contrast, in this paper we demonstrate how an agent can learn directly from machine-readable music scores to play a simulated piano using reinforcement learning (RL) from scratch. We show that the RL agent can not only find the correct key positions but can also cope with various rhythm, volume, and fingering requirements. We achieve this by using a touch-based reward and a novel curriculum of tasks. Finally, we carefully study the aspects that are important to make such learning algorithms work, which can potentially shed light on future research in this direction.
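As a rough illustration of what a touch-augmented reward might look like, the sketch below combines a bonus for pressing the correct keys, a penalty for wrong presses, and a small fingertip-contact term; the specific terms and weights are assumptions, not the authors' reward function.

```python
# Illustrative touch-augmented piano reward (terms and weights are assumptions).
import numpy as np

def piano_reward(pressed_keys, target_keys, fingertip_contacts,
                 w_hit=1.0, w_wrong=0.5, w_touch=0.1):
    """pressed_keys / target_keys: boolean arrays over the 88 keys at this timestep.
    fingertip_contacts: boolean array, one entry per fingertip touching any key."""
    hits = np.logical_and(pressed_keys, target_keys).sum()
    wrong = np.logical_and(pressed_keys, ~target_keys).sum()
    touch_bonus = fingertip_contacts.sum()   # encourages making contact at all
    return w_hit * hits - w_wrong * wrong + w_touch * touch_bonus

pressed = np.zeros(88, dtype=bool); pressed[[60, 64]] = True
target = np.zeros(88, dtype=bool); target[[60, 64, 67]] = True
contacts = np.array([True, True, False, False, False])
print(piano_reward(pressed, target, contacts))
```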
Model-based reinforcement learning (RL) algorithms can attain excellent sample efficiency, but often lag behind the best model-free algorithms in terms of asymptotic performance. This is especially true with high-capacity parametric function approximators, such as deep networks. In this paper, we study how to bridge this gap, by employing uncertainty-aware dynamics models. We propose a new algorithm called probabilistic ensembles with trajectory sampling (PETS) that combines uncertainty-aware deep network dynamics models with sampling-based uncertainty propagation. Our comparison to state-of-the-art model-based and model-free deep RL algorithms shows that our approach matches the asymptotic performance of model-free algorithms on several challenging benchmark tasks, while requiring significantly fewer samples (e.g., 8 and 125 times fewer samples than Soft Actor Critic and Proximal Policy Optimization respectively on the half-cheetah task).
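The planning step of PETS can be sketched compactly: candidate action sequences are scored by propagating state particles through a probabilistic ensemble, and only the first action of the best sequence is executed (model-predictive control). In the sketch below the toy linear "models" and reward stand in for learned deep-network dynamics, so it shows the control flow rather than the paper's implementation.

```python
# Toy PETS-style planner: probabilistic ensemble + particle trajectory sampling.
import numpy as np

rng = np.random.default_rng(0)

def make_prob_model(seed):
    A = np.eye(2) + 0.05 * np.random.default_rng(seed).standard_normal((2, 2))
    def model(state, action):
        mean = state @ A.T + 0.1 * action      # predictive mean
        std = 0.02 * np.ones_like(mean)        # predictive standard deviation
        return mean, std
    return model

ensemble = [make_prob_model(s) for s in range(5)]

def reward(states):
    return -np.sum(states ** 2, axis=-1)       # drive the state toward the origin

def pets_plan(state, horizon=10, n_seq=200, n_particles=20):
    seqs = rng.uniform(-1.0, 1.0, size=(n_seq, horizon, 2))   # random-shooting candidates
    returns = np.zeros(n_seq)
    for i, actions in enumerate(seqs):
        particles = np.tile(state, (n_particles, 1)).astype(float)
        for a in actions:
            # Each particle is propagated through a randomly chosen ensemble member,
            # with noise sampled from that member's predictive distribution.
            members = rng.integers(len(ensemble), size=n_particles)
            for m in np.unique(members):
                idx = members == m
                mean, std = ensemble[m](particles[idx], a)
                particles[idx] = mean + std * rng.standard_normal(mean.shape)
            returns[i] += reward(particles).mean()
    return seqs[int(np.argmax(returns))][0]    # MPC: execute only the first action

print(pets_plan(np.array([1.0, -0.5])))
```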
One of the common traits of past and present approaches for Semantic Role Labeling (SRL) is that they rely upon discrete labels drawn from a predefined linguistic inventory to classify predicate senses and their arguments. However, we argue this need not be the case. In this paper, we present an approach that leverages Definition Modeling to introduce a generalized formulation of SRL as the task of describing predicate-argument structures using natural language definitions instead of discrete labels. Our novel formulation takes a first step towards placing interpretability and flexibility foremost, and yet our experiments and analyses on PropBank-style and FrameNet-style, dependency-based and span-based SRL also demonstrate that a flexible model with an interpretable output does not necessarily come at the expense of performance. We release our software for research purposes at https://github.com/SapienzaNLP/dsrl.
As robotic systems continue to address emerging issues in areas such as logistics, mobility, manufacturing, and disaster response, it is increasingly important to rapidly generate safe and energy-efficient trajectories. In this article, we present a new approach to plan energy-optimal trajectories through cluttered environments containing polygonal obstacles. In particular, we develop a method to quickly generate optimal trajectories for a double-integrator system, and we show that optimal path planning reduces to an integer program. To find an efficient solution, we present a distance-informed prefix search to efficiently generate optimal trajectories for a large class of environments. We demonstrate that our approach, while matching the performance of RRT* and Probabilistic Road Maps in terms of path length, outperforms both in terms of energy cost and computational time by up to an order of magnitude. We also demonstrate that our approach yields implementable trajectories in an experiment with a Crazyflie quadrotor.
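As a worked example of the energy objective (textbook material, not the article's integer program or prefix-search machinery), the snippet below computes the minimum-energy cost of steering a one-dimensional double integrator between two states in fixed time via its controllability Gramian.

```python
# Minimum-energy cost for a 1-D double integrator (state = [position, velocity],
# input = acceleration) steered from x0 to xf in time T.
import numpy as np

def min_energy_cost(x0, xf, T):
    """Cost of the minimum-energy control: the integral of u(t)^2 over [0, T]."""
    Phi = np.array([[1.0, T], [0.0, 1.0]])           # state-transition matrix e^{AT}
    W = np.array([[T**3 / 3, T**2 / 2],              # controllability Gramian
                  [T**2 / 2, T       ]])
    d = xf - Phi @ x0                                 # reachability defect
    return float(d @ np.linalg.solve(W, d))

# Example: move 1 m and come to rest within 2 s, starting from rest (cost 12/T^3 = 1.5).
print(min_energy_cost(np.array([0.0, 0.0]), np.array([1.0, 0.0]), 2.0))
```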
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
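Since the checkpoints are openly released, a small usage sketch with the Hugging Face transformers library is shown below; the 560M-parameter variant is chosen purely to keep the example lightweight, and the prompt is arbitrary.

```python
# Loading an open BLOOM checkpoint and generating a continuation with transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-560m")
model = AutoModelForCausalLM.from_pretrained("bigscience/bloom-560m")

inputs = tokenizer("Translate to French: The cat sleeps on the sofa.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```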
Street networks provide an invaluable source of information about the different temporal and spatial patterns emerging in our cities. These streets are often represented as graphs where intersections are modelled as nodes and streets as links between them. Previous work has shown that raster representations of the original data can be created through a learning algorithm on low-dimensional representations of the street networks. In contrast, models that capture high-level urban network metrics can be trained through convolutional neural networks. However, the detailed topological data is lost through the rasterisation of the street network. The models cannot recover this information from the image alone, failing to capture complex street network features. This paper proposes a model capable of inferring good representations directly from the street network. Specifically, we use a variational autoencoder with graph convolutional layers and a decoder that outputs a probabilistic fully-connected graph to learn latent representations that encode both local network structure and the spatial distribution of nodes. We train the model on thousands of street network segments and use the learnt representations to generate synthetic street configurations. Finally, we propose a possible application to classify the urban morphology of different network segments by investigating their common characteristics in the learnt space.
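The sketch below illustrates the general shape of such a model: a graph-convolutional encoder producing Gaussian latents per node and an inner-product decoder that outputs edge probabilities for a fully connected graph. The layer sizes and the dense adjacency normalisation are assumptions and may differ from the architecture used in the paper.

```python
# Minimal graph VAE sketch: GCN encoder, per-node Gaussian latents, inner-product decoder.
import torch
import torch.nn as nn

class StreetGraphVAE(nn.Module):
    def __init__(self, in_dim=2, hid_dim=32, z_dim=16):
        super().__init__()
        self.lin1 = nn.Linear(in_dim, hid_dim)
        self.lin_mu = nn.Linear(hid_dim, z_dim)
        self.lin_logvar = nn.Linear(hid_dim, z_dim)

    def gcn(self, A, X, lin):
        # Symmetrically normalised dense graph convolution: D^{-1/2} (A+I) D^{-1/2} X W.
        A_hat = A + torch.eye(A.size(0))
        d = A_hat.sum(1)
        A_norm = A_hat / torch.sqrt(d.unsqueeze(0) * d.unsqueeze(1))
        return torch.relu(lin(A_norm @ X))

    def forward(self, A, X):
        h = self.gcn(A, X, self.lin1)
        mu, logvar = self.lin_mu(h), self.lin_logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterisation
        edge_probs = torch.sigmoid(z @ z.t())                    # probabilistic full graph
        return edge_probs, mu, logvar

# Toy segment: 5 intersections with 2-D coordinates as node features.
A = torch.tensor([[0, 1, 0, 0, 1], [1, 0, 1, 0, 0], [0, 1, 0, 1, 0],
                  [0, 0, 1, 0, 1], [1, 0, 0, 1, 0]], dtype=torch.float)
X = torch.rand(5, 2)
edge_probs, mu, logvar = StreetGraphVAE()(A, X)
print(edge_probs.shape)  # torch.Size([5, 5])
```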
An interesting case of the well-known Dataset Shift Problem is the classification of Electroencephalogram (EEG) signals in the context of Brain-Computer Interface (BCI). The non-stationarity of EEG signals can lead to poor generalisation performance in BCI classification systems across different sessions, even for the same subject. In this paper, we start from the hypothesis that the Dataset Shift problem can be alleviated by exploiting suitable eXplainable Artificial Intelligence (XAI) methods to locate and transform the relevant characteristics of the input for the goal of classification. In particular, we focus on an experimental analysis of explanations produced by several XAI methods on an ML system trained on a typical EEG dataset for emotion recognition. Results show that many relevant components found by XAI methods are shared across the sessions and can be used to build a system able to generalise better. However, relevant components of the input signal also appear to be highly dependent on the input itself.
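As an illustration of the kind of attribution analysis involved (not the paper's exact pipeline), the snippet below applies Integrated Gradients from the captum library to a toy EEG classifier and aggregates relevance per channel; the network and the 32-channel by 128-sample input shape are assumptions.

```python
# Attributing a toy EEG emotion classifier's prediction back to its input channels.
import torch
import torch.nn as nn
from captum.attr import IntegratedGradients

model = nn.Sequential(nn.Flatten(), nn.Linear(32 * 128, 64), nn.ReLU(), nn.Linear(64, 3))
model.eval()

eeg = torch.randn(1, 32, 128, requires_grad=True)   # one EEG trial (channels x samples)
ig = IntegratedGradients(model)
attributions = ig.attribute(eeg, target=1)          # relevance w.r.t. class 1
per_channel = attributions.abs().sum(dim=2)         # aggregate relevance per channel
print(per_channel.squeeze().topk(5).indices)        # indices of the most relevant channels
```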