We introduce Embed to Control (E2C), a method for model learning and control of non-linear dynamical systems from raw pixel images. E2C consists of a deep generative model, belonging to the family of variational autoencoders, that learns to generate image trajectories from a latent space in which the dynamics is constrained to be locally linear. Our model is derived directly from an optimal control formulation in latent space, supports long-term prediction of image sequences and exhibits strong performance on a variety of complex control problems.
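As a rough illustration of the kind of locally linear latent transition that E2C constrains its model to (the class and layer names below are hypothetical, not the authors' implementation), a latent step can be parameterized as z_{t+1} ≈ A(z_t) z_t + B(z_t) u_t + o(z_t):

```python
# Minimal sketch (PyTorch assumed); dimensions and names are illustrative only.
import torch
import torch.nn as nn

class LocallyLinearTransition(nn.Module):
    """Latent step z_next = A(z) @ z + B(z) @ u + o(z), with A, B, o produced by a small network."""
    def __init__(self, z_dim, u_dim, hidden=128):
        super().__init__()
        self.z_dim, self.u_dim = z_dim, u_dim
        self.trunk = nn.Sequential(nn.Linear(z_dim, hidden), nn.ReLU())
        self.to_A = nn.Linear(hidden, z_dim * z_dim)   # state-dependent transition matrix
        self.to_B = nn.Linear(hidden, z_dim * u_dim)   # state-dependent control matrix
        self.to_o = nn.Linear(hidden, z_dim)           # offset term

    def forward(self, z, u):
        h = self.trunk(z)
        A = self.to_A(h).view(-1, self.z_dim, self.z_dim)
        B = self.to_B(h).view(-1, self.z_dim, self.u_dim)
        o = self.to_o(h)
        return (A @ z.unsqueeze(-1)).squeeze(-1) + (B @ u.unsqueeze(-1)).squeeze(-1) + o

trans = LocallyLinearTransition(z_dim=8, u_dim=2)
z_next = trans(torch.randn(16, 8), torch.randn(16, 2))   # batch of 16 latent states and controls
```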
Planning has been very successful for control tasks with known environment dynamics. To leverage planning in unknown environments, the agent needs to learn the dynamics from interactions with the world. However, learning dynamics models that are accurate enough for planning has been a long-standing challenge, especially in image-based domains. We propose the Deep Planning Network (PlaNet), a purely model-based agent that learns the environment dynamics from images and chooses actions through fast online planning in latent space. To achieve high performance, the dynamics model must accurately predict the rewards ahead for multiple time steps. We approach this using a latent dynamics model with both deterministic and stochastic transition components. Moreover, we propose a multi-step variational inference objective that we name latent overshooting. Using only pixel observations, our agent solves continuous control tasks with contact dynamics, partial observability, and sparse rewards, which exceed the difficulty of tasks that were previously solved by planning with learned models. PlaNet uses substantially fewer episodes and reaches final performance close to and sometimes higher than strong model-free algorithms.
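A minimal sketch in the spirit of a latent transition with both deterministic and stochastic components (layer sizes, names, and the GRU choice are assumptions, not PlaNet's exact architecture):

```python
# Hedged sketch only; not the authors' model.
import torch
import torch.nn as nn

class MiniRSSM(nn.Module):
    """Deterministic path h_t = GRU(h_{t-1}, [s_{t-1}, a_{t-1}]);
    stochastic state s_t ~ N(mu(h_t), sigma(h_t))."""
    def __init__(self, s_dim=30, a_dim=4, h_dim=200):
        super().__init__()
        self.cell = nn.GRUCell(s_dim + a_dim, h_dim)
        self.prior = nn.Linear(h_dim, 2 * s_dim)   # mean and log-std of the prior over s_t

    def step(self, s_prev, a_prev, h_prev):
        h = self.cell(torch.cat([s_prev, a_prev], dim=-1), h_prev)
        mean, log_std = self.prior(h).chunk(2, dim=-1)
        s = mean + torch.exp(log_std) * torch.randn_like(mean)  # reparameterized sample
        return s, h

model = MiniRSSM()
s, h = torch.zeros(1, 30), torch.zeros(1, 200)
for a in torch.randn(5, 1, 4):            # roll the latent state forward 5 steps
    s, h = model.step(s, a, h)
```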
Learning or identifying dynamics from a sequence of high-dimensional observations is a difficult challenge in many domains, including reinforcement learning and control. This problem has recently been studied from a generative perspective through latent dynamics: the high-dimensional observations are embedded into a lower-dimensional space in which the dynamics can be learned. Despite some successes, latent dynamics models have not yet been applied to real-world robotic systems, where learned representations must be robust to a variety of perceptual confounds and noise sources. In this paper, we present a method to jointly learn a latent state representation and the associated dynamics that is amenable to long-horizon planning and closed-loop control under perceptually difficult conditions. As our main contribution, we describe how our representation is able to capture a notion of heteroscedastic or input-specific uncertainty at test time by detecting novel or out-of-distribution (OOD) inputs. We present results from prediction and control experiments on two image-based tasks: a simulated pendulum balancing task and a real-world robotic manipulator reaching task. We demonstrate that, under varying degrees of input degradation, our model produces more accurate predictions and exhibits improved control performance compared to a baseline that models only homoscedastic uncertainty.
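The detection idea above can be caricatured very simply: score each input with some learned uncertainty measure (for example an encoder variance or a reconstruction error), calibrate a threshold on in-distribution data, and flag anything above it. The snippet below is only that caricature, not the authors' detection rule:

```python
# Toy sketch only; "score" stands in for whatever per-input uncertainty the model provides.
import numpy as np

def calibrate_threshold(in_dist_scores, quantile=0.99):
    """Choose an OOD threshold from uncertainty scores observed on training data."""
    return np.quantile(in_dist_scores, quantile)

def flag_ood(scores, threshold):
    return scores > threshold

train_scores = np.abs(np.random.randn(1000)) * 0.1   # stand-in in-distribution scores
thresh = calibrate_threshold(train_scores)
print(flag_ood(np.array([0.05, 1.2]), thresh))        # the second, far larger score is flagged
```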
This review addresses the problem of learning abstract representations of measurement data in the context of deep reinforcement learning (DRL). Although the data are often ambiguous, high-dimensional, and complex to interpret, many dynamical systems can be effectively described by a low-dimensional set of state variables. Discovering these state variables from data is key to improving the data efficiency, robustness, and generalization of DRL methods, to tackling the curse of dimensionality, and to bringing interpretability and insight into black-box DRL. This review provides a comprehensive overview of unsupervised representation learning in DRL by describing the main deep learning tools used for learning representations of the world, giving a systematic view of the methods and principles, summarizing applications, benchmarks, and evaluation strategies, and discussing open challenges and future directions.
Despite the many recent successes of deep reinforcement learning (RL), its methods remain data-inefficient, which makes many problems prohibitively expensive to solve in terms of data. We aim to address this by learning state representations that exploit the rich supervisory signals present in unlabeled data. This work introduces three different representation-learning algorithms that have access to different subsets of the data sources used by traditional RL algorithms: (i) GRICA is inspired by independent component analysis (ICA) and trains a deep neural network to output statistically independent features of its input. GRICA does this by minimizing the mutual information between each feature and the remaining features, and it requires only an unordered collection of environment states. (ii) Latent Representation Prediction (LARP) requires more context: in addition to a state as input, it also needs the preceding state and the action connecting them. It learns a state representation by predicting the representation of the environment's next state from the current state and action, and the predictor is then used together with a graph-search algorithm (see the sketch below). (iii) The third algorithm learns a state representation by training a deep neural network to learn a smoothed version of the reward function. The representation is used to preprocess inputs to deep RL, while the reward predictor is used for reward shaping; this method requires only state-reward pairs from the environment to learn the representation. We find that each method has its strengths and weaknesses, and we conclude from our experiments that including unsupervised representation learning in RL problem-solving pipelines can speed up learning.
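As an illustration of the second idea, predicting the next state's representation from the current state and action, here is a hypothetical sketch; it is not the thesis code, and in practice such an objective also needs a mechanism to prevent representational collapse:

```python
# Hypothetical sketch (PyTorch); names like LatentPredictor are illustrative, not from the source.
import torch
import torch.nn as nn

class LatentPredictor(nn.Module):
    """Encodes states and predicts the encoding of the next state from (state, action)."""
    def __init__(self, s_dim, a_dim, z_dim, hidden=64):
        super().__init__()
        self.encode = nn.Sequential(nn.Linear(s_dim, hidden), nn.ReLU(), nn.Linear(hidden, z_dim))
        self.predict = nn.Sequential(nn.Linear(z_dim + a_dim, hidden), nn.ReLU(), nn.Linear(hidden, z_dim))

    def forward(self, s, a):
        z = self.encode(s)
        return self.predict(torch.cat([z, a], dim=-1))

model = LatentPredictor(s_dim=10, a_dim=3, z_dim=8)
s, a, s_next = torch.randn(32, 10), torch.randn(32, 3), torch.randn(32, 10)
target = model.encode(s_next).detach()        # stop-gradient target; collapse must still be
loss = ((model(s, a) - target) ** 2).mean()   # prevented in practice, e.g. with extra terms
```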
We introduce the variational state-space filter (VSSF), a new method for unsupervised learning, identification, and filtering of latent Markov state-space models from raw pixels. We present a theoretically sound framework for latent state-space inference under heterogeneous sensor configurations. The resulting model can integrate an arbitrary subset of the sensor measurements used during training, enabling semi-supervised learning of state representations in which certain components of the learned latent state space are constrained to agree with interpretable measurements. From this framework we derive L-VSSF, an explicit instantiation of the model parameterized with linear latent dynamics and Gaussian distributions. We experimentally demonstrate L-VSSF's ability to filter in latent space beyond the sequence length of the training dataset in several different test environments.
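Since L-VSSF is instantiated with linear latent dynamics and Gaussian distributions, its filtering step has the familiar linear-Gaussian form. The sketch below is a generic Kalman predict/update, not the paper's trained model; passing only the rows of H, R, and y for the currently available sensors mimics filtering under different sensor subsets:

```python
# Generic linear-Gaussian filter step; matrices here are placeholders, not learned quantities.
import numpy as np

def kalman_step(mu, P, A, Q, H, R, y):
    """One predict/update step of a linear-Gaussian latent state-space model."""
    mu_pred = A @ mu                        # predict
    P_pred = A @ P @ A.T + Q
    S = H @ P_pred @ H.T + R                # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)     # Kalman gain
    mu_new = mu_pred + K @ (y - H @ mu_pred)
    P_new = (np.eye(len(mu)) - K @ H) @ P_pred
    return mu_new, P_new

# toy usage: a 4-dimensional latent state observed through 2 of its components
mu, P = np.zeros(4), np.eye(4)
A, Q = np.eye(4), 0.01 * np.eye(4)
H, R = np.eye(2, 4), 0.1 * np.eye(2)        # only the currently available sensor rows
mu, P = kalman_step(mu, P, A, Q, H, R, y=np.array([0.3, -0.1]))
```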
Humans are skilled navigators: we aptly maneuver through new places, recognize when we have returned to a location we have seen before, and can even think of shortcuts that pass through parts of the environment we have never visited. Current approaches in model-based reinforcement learning, on the other hand, struggle to generalize environment dynamics beyond the training distribution. We argue that two principles can help bridge this gap: latent learning and parsimonious dynamics. Humans tend to think about environment dynamics in simple terms: we reason about trajectories not in reference to what we expect to see along the path, but in an abstract latent space that contains information about the place's spatial coordinates. Moreover, we assume that moving around in novel parts of the environment works the same way as in the parts we are familiar with. These two principles work together in tandem: it is in the latent space that the dynamics exhibit parsimony. We develop a model that learns such parsimonious dynamics. Using a variational objective, our model is trained to reconstruct experienced transitions in the latent space using local linear transformations, while being encouraged to invoke as few distinct transformations as possible. Using our framework, we demonstrate the utility of learning parsimonious latent dynamics models on a range of policy-learning and planning tasks.
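A rough sketch of the "few local linear transformations" idea: reconstruct each latent transition as a soft mixture over a small bank of linear maps and penalize the mixture's entropy so that few maps are invoked. This is an illustrative stand-in, not the paper's variational objective:

```python
# Illustrative stand-in only; the real objective is variational and more involved.
import torch
import torch.nn as nn

class ParsimoniousTransition(nn.Module):
    """Reconstructs latent transitions with a small bank of linear maps, softly selecting one per step."""
    def __init__(self, z_dim, n_maps=4):
        super().__init__()
        self.maps = nn.Parameter(torch.stack([torch.eye(z_dim) for _ in range(n_maps)]))
        self.selector = nn.Linear(2 * z_dim, n_maps)

    def forward(self, z, z_next):
        w = torch.softmax(self.selector(torch.cat([z, z_next], dim=-1)), dim=-1)
        pred = torch.einsum('bk,kij,bj->bi', w, self.maps, z)      # mixture of linear maps
        recon = ((pred - z_next) ** 2).mean()
        entropy = -(w * torch.log(w + 1e-8)).sum(-1).mean()        # low entropy = few maps used
        return recon + 0.1 * entropy

model = ParsimoniousTransition(z_dim=6, n_maps=4)
loss = model(torch.randn(32, 6), torch.randn(32, 6))
loss.backward()
```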
Policy search methods can allow robots to learn control policies for a wide range of tasks, but practical applications of policy search often require hand-engineered components for perception, state estimation, and low-level control. In this paper, we aim to answer the following question: does training the perception and control systems jointly end-to-end provide better performance than training each component separately? To this end, we develop a method that can be used to learn policies that map raw image observations directly to torques at the robot's motors. The policies are represented by deep convolutional neural networks (CNNs) with 92,000 parameters, and are trained using a guided policy search method, which transforms policy search into supervised learning, with supervision provided by a simple trajectory-centric reinforcement learning method. We evaluate our method on a range of real-world manipulation tasks that require close coordination between vision and control, such as screwing a cap onto a bottle, and present simulated comparisons to a range of prior policy search methods.
Active inference is a mathematical framework that originated in computational neuroscience as a theory of how the brain implements action, perception, and learning. Recently, it has been shown to be a promising approach to state estimation and control problems under uncertainty, as well as a foundation for goal-driven behavior in robotics and artificial agents in general. Here, we review the state of the art in the theory and implementation of active inference for state estimation, control, planning, and learning; we describe current achievements with a particular focus on robotics. We present relevant experiments that illustrate its potential in terms of adaptation, generalization, and robustness. Furthermore, we connect this approach to other frameworks and discuss its expected benefits and challenges: a unified framework with functional biological plausibility using variational Bayesian inference.
Recurrent state-space models (RSSMs) are highly expressive models for learning patterns in time-series data and for system identification. However, these models assume that the dynamics are fixed and unchanging, which is rarely the case in the real world. Many control applications exhibit tasks with similar but not identical dynamics, which can be modeled as a latent variable. We introduce the hidden-parameter recurrent state-space model (HiP-RSSM), a framework that parameterizes a family of related dynamical systems with a low-dimensional set of latent factors. We present a simple and effective way of learning and performing inference in this Gaussian graphical model that avoids approximations such as variational inference. We show that HiP-RSSM outperforms RSSMs and competing multi-task models on several challenging robotic benchmarks, both on real-world systems and in simulation.
Sampling-based methods have become a cornerstone of contemporary approaches to Model Predictive Control (MPC), as they make no restrictions on the differentiability of the dynamics or cost function and are straightforward to parallelize. However, their efficacy is highly dependent on the quality of the sampling distribution itself, which is often assumed to be simple, like a Gaussian. This restriction can result in samples which are far from optimal, leading to poor performance. Recent work has explored improving the performance of MPC by sampling in a learned latent space of controls. However, these methods ultimately perform all MPC parameter updates and warm-starting between time steps in the control space. This requires us to rely on a number of heuristics for generating samples and updating the distribution and may lead to sub-optimal performance. Instead, we propose to carry out all operations in the latent space, allowing us to take full advantage of the learned distribution. Specifically, we frame the learning problem as bi-level optimization and show how to train the controller with backpropagation-through-time. By using a normalizing flow parameterization of the distribution, we can leverage its tractable density to avoid requiring differentiability of the dynamics and cost function. Finally, we evaluate the proposed approach on simulated robotics tasks and demonstrate its ability to surpass the performance of prior methods and scale better with a reduced number of samples.
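To make the "carry out all operations in the latent space" idea concrete, here is a deliberately simplified sketch that plans with a cross-entropy-method loop over latent codes and a stand-in decoder; the paper itself uses a normalizing-flow distribution trained with backpropagation-through-time rather than this heuristic sampler:

```python
# Simplified CEM-in-latent-space sketch; decoder, cost, and dimensions are placeholders.
import numpy as np

def latent_cem(decode, cost, z_dim, n_iters=5, n_samples=64, n_elite=8):
    """Sample latent codes, decode them to action sequences, keep the elites,
    and refit the sampling distribution in the latent space."""
    mu, std = np.zeros(z_dim), np.ones(z_dim)
    for _ in range(n_iters):
        z = mu + std * np.random.randn(n_samples, z_dim)
        costs = np.array([cost(decode(zi)) for zi in z])
        elite = z[np.argsort(costs)[:n_elite]]
        mu, std = elite.mean(0), elite.std(0) + 1e-6
    return decode(mu)

# toy usage with a stand-in decoder and a quadratic cost
plan = latent_cem(decode=lambda z: z.reshape(4, 2),          # 4-step, 2-dim action sequence
                  cost=lambda acts: float((acts ** 2).sum()),
                  z_dim=8)
```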
Predicting driving behavior or other sensor measurements is an essential component of autonomous driving systems. Real-world multivariate sequential data are often difficult to model because the underlying dynamics are nonlinear and the observations are noisy. In addition, driving data are often multimodal in distribution, meaning that several distinct predictions are likely, and averaging over them can hurt model performance. To address this, we propose the switching recurrent Kalman network (SRKN) for efficient inference and prediction on nonlinear and multimodal time-series data. The model switches among several Kalman filters that model different aspects of the dynamics in a factorized latent state. We empirically test the resulting scalable and interpretable deep state-space model on a toy dataset and on real-world taxi driving data from Porto. In all cases, the model can capture the multimodal nature of the dynamics in the data.
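As a caricature of switching among several Kalman filters, the sketch below runs a generic soft-switching filter bank in which each mode has its own linear dynamics and mode weights are reweighted by how well each mode explains the new observation; it is not the SRKN architecture or its factorized latent state:

```python
# Generic soft-switching filter bank; matrices and mode count are placeholders.
import numpy as np

def switching_filter_step(mus, Ps, weights, As, Q, H, R, y):
    """One step over a bank of linear Kalman filters with per-mode dynamics As[k]."""
    new_mus, new_Ps, liks = [], [], []
    dim = len(mus[0])
    for k, A in enumerate(As):
        mu_p = A @ mus[k]
        P_p = A @ Ps[k] @ A.T + Q
        S = H @ P_p @ H.T + R
        K = P_p @ H.T @ np.linalg.inv(S)
        r = y - H @ mu_p
        new_mus.append(mu_p + K @ r)
        new_Ps.append((np.eye(dim) - K @ H) @ P_p)
        liks.append(np.exp(-0.5 * r @ np.linalg.inv(S) @ r) /
                    np.sqrt(np.linalg.det(2 * np.pi * S)))   # Gaussian innovation likelihood
    w = weights * np.array(liks)
    return new_mus, new_Ps, w / w.sum()

# toy usage: two candidate dynamics modes, one observation step
As = [np.eye(2), np.array([[1.0, 0.1], [0.0, 1.0]])]
mus, Ps, w = [np.zeros(2), np.zeros(2)], [np.eye(2), np.eye(2)], np.array([0.5, 0.5])
mus, Ps, w = switching_filter_step(mus, Ps, w, As,
                                   Q=0.01 * np.eye(2), H=np.eye(2),
                                   R=0.1 * np.eye(2), y=np.array([0.2, 0.0]))
```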
In this paper, we derive a new method as a generalization of LCEs such as E2C. The method develops the idea of learning a locally linear state space by adding multi-step prediction, which allows more explicit control over curvature. We show that the method outperforms E2C without the additional machinery of other works such as PCC and P3C. We discuss the relationship between E2C and the proposed method and derive the update equations. We provide empirical evidence that, by taking multi-step predictions into account, our method, MS-E2C, learns better latent state spaces in terms of curvature and next-state predictability. Finally, we also discuss certain stability challenges we encountered with multi-step prediction and how to mitigate them.
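One plausible way to write the multi-step idea (a generic form consistent with the description above, not necessarily the paper's exact objective) is to roll the locally linear model forward $K$ steps from the encoded state and penalize the discrepancy with the encodings of the observed future frames:

$$
\hat z_{t} = \mathrm{enc}(x_t), \qquad
\hat z_{t+k} = A(\hat z_{t+k-1})\,\hat z_{t+k-1} + B(\hat z_{t+k-1})\,u_{t+k-1} + o(\hat z_{t+k-1}),
$$
$$
\mathcal{L}_{\text{multi-step}} = \sum_{k=1}^{K} \big\lVert \hat z_{t+k} - \mathrm{enc}(x_{t+k}) \big\rVert^{2}.
$$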
Current domain-independent classical planners require symbolic models of the problem domain and instance as input, resulting in a knowledge-acquisition bottleneck. Meanwhile, although deep learning has achieved significant success in many fields, its knowledge is encoded in a subsymbolic representation that is incompatible with symbolic systems such as planners. We propose Latplan, an unsupervised architecture that combines deep learning and classical planning. Given only a set of unlabeled image pairs showing a subset of the transitions allowed in the environment (the training inputs), Latplan learns a complete propositional PDDL action model of the environment. Later, when given a pair of images representing the initial and goal states (the planning inputs), Latplan finds a plan to the goal state in the symbolic latent space and returns a visualized plan execution. We evaluate Latplan using image-based versions of 6 planning domains: 8-puzzle, 15-puzzle, Blocksworld, Sokoban, and two variations of LightsOut.
Neural networks play an ever-growing role in many scientific disciplines, including physics. Variational autoencoders (VAEs) are neural networks that are capable of representing the essential information of high-dimensional data in a low-dimensional latent space and that have a probabilistic interpretation. In particular, the so-called encoder network, the first part of the VAE, which maps its input to a position in latent space, additionally provides uncertainty information in terms of the variance around this position. In this work, an extension to the autoencoder architecture, the FisherNet, is introduced. In this architecture, the latent-space uncertainty is not generated using an additional information channel in the encoder but is instead derived from the decoder by means of the Fisher information metric. This architecture has advantages from a theoretical point of view, as it provides direct uncertainty quantification derived from the model and also accounts for uncertainty cross-correlations. We show experimentally that the FisherNet produces more accurate data reconstructions than a comparable VAE and that its learning performance also scales noticeably better with the number of latent-space dimensions.
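To give one concrete sense of how latent uncertainty can be derived from the decoder via the Fisher information metric, assume a Gaussian decoder $p(x \mid z) = \mathcal{N}\!\left(x;\, \mu_\theta(z),\, \sigma^2 I\right)$ with fixed variance (an assumption for this illustration; the paper's construction may differ). The Fisher metric pulled back to the latent space is then

$$
G(z) \;=\; \mathbb{E}_{p(x\mid z)}\!\left[\nabla_z \log p(x\mid z)\,\nabla_z \log p(x\mid z)^{\top}\right]
\;=\; \frac{1}{\sigma^{2}}\, J_{\mu}(z)^{\top} J_{\mu}(z),
$$

where $J_{\mu}(z)$ is the Jacobian of the decoder mean, so a local latent covariance can be read off from $G(z)^{-1}$ without an extra output channel in the encoder.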
Spatiotemporal imaging has applications in e.g. cardiac diagnostics, surgical guidance, and radiotherapy monitoring. In this paper, we explain the temporal motion by identifying the underlying dynamics, based only on the sequential images. Our dynamical model maps the inputs of observed high-dimensional sequential images to a low-dimensional latent space wherein a linear relationship between a hidden state process and the lower-dimensional representation of the inputs holds. For this, we use a conditional variational auto-encoder (CVAE) to nonlinearly map the higher-dimensional image to a lower-dimensional space, wherein we model the dynamics with a linear Gaussian state-space model (LG-SSM). The model, a modified version of the Kalman variational auto-encoder, is end-to-end trainable, and the weights, both in the CVAE and the LG-SSM, are simultaneously updated by maximizing the evidence lower bound of the marginal likelihood. In contrast to the original model, we explain the motion with a spatial transformation from one image to another. This results in sharper reconstructions and the possibility of transferring auxiliary information, such as segmentation, through the image sequence. Our experiments on cardiac ultrasound time series show that the dynamic model outperforms traditional image registration in execution time while achieving similar performance. Further, our model offers the possibility to impute and extrapolate for missing samples.
We present a framework for visual action planning of complex manipulation tasks with high-dimensional state spaces, focusing on the manipulation of deformable objects. We propose a latent space roadmap (LSR) for task planning, a graph-based structure that globally captures the system dynamics in a low-dimensional latent space. Our framework consists of three parts: (1) a mapping module (MM) that maps observations, given in the form of images, into a structured latent space extracting the respective states, and that generates observations from latent states; (2) the LSR, which builds and connects clusters containing similar states in order to find latent plans between the start and goal states extracted by the MM; and (3) an action proposal module that complements the latent plan found by the LSR with the corresponding actions. We present a thorough investigation of our framework on simulated box stacking and rope/box manipulation tasks, as well as a folding task executed on a real robot.
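For a toy flavor of the roadmap idea, the sketch below connects individual latent states by distance and plans with shortest paths; the actual LSR builds and connects clusters of similar states rather than single points, so treat this purely as an illustration:

```python
# Toy roadmap over individual latent points; the real LSR operates on clusters of states.
import numpy as np
import networkx as nx

def build_roadmap(latent_states, connect_radius):
    """Connect latent states closer than connect_radius and return the resulting graph."""
    g = nx.Graph()
    for i, z in enumerate(latent_states):
        g.add_node(i, z=z)
    for i in range(len(latent_states)):
        for j in range(i + 1, len(latent_states)):
            d = np.linalg.norm(latent_states[i] - latent_states[j])
            if d < connect_radius:
                g.add_edge(i, j, weight=d)
    return g

zs = np.random.randn(50, 8)                       # stand-in encoded observations
roadmap = build_roadmap(zs, connect_radius=2.5)
if nx.has_path(roadmap, 0, 1):
    latent_plan = nx.shortest_path(roadmap, 0, 1, weight="weight")   # node sequence from start to goal
```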
Longitudinal biomedical data are often characterized by sparse time grids and individual-specific development patterns. Specifically, in epidemiological cohort studies and clinical registries we face the question of what can be learned from the data at an early stage of the study, when only a baseline characterization and one follow-up measurement are available. Inspired by recent advances that allow combining deep learning with dynamic modeling, we investigate whether such approaches can be useful for uncovering complex structure, in particular in the extreme small-data setting of only two observation time points per individual. Irregular spacing in time can then be used to obtain more information about individual dynamics by leveraging the similarity between individuals. We briefly outline how variational autoencoders (VAEs), as a deep learning approach, can be linked to ordinary differential equations (ODEs) for dynamic modeling, and then specifically investigate the feasibility of such an approach for providing individual-specific latent trajectories by including regularity assumptions and similarities between individuals. We also provide a description of this deep learning approach as a filtering task, to give a statistical perspective. Using simulated data, we show to what extent the approach can recover individual trajectories from ODE systems with two and four unknown parameters, also when using groups of individuals with similar trajectories, and where it breaks down. The results show that such dynamic deep learning approaches can be useful even in extreme small-data settings, but require careful tuning.
Recently, model-based agents have achieved better performance than model-free ones using the same computational budget and training time in single-agent environments. However, due to the complexity of multi-agent systems, it is difficult to learn the model of the environment, and the significant compounding error may hinder the learning process when model-based methods are applied to multi-agent tasks. This paper proposes an implicit model-based multi-agent reinforcement learning method based on value decomposition methods. Under this method, agents can interact with the learned virtual environment and evaluate the current state value according to imagined future states in the latent space, giving the agents foresight. Our approach can be applied to any multi-agent value decomposition method. The experimental results show that our method improves the sample efficiency in different partially observable Markov decision process domains.
Reinforcement learning can acquire complex behaviors from high-level specifications. However, defining a cost function that can be optimized effectively and encodes the correct task is challenging in practice. We explore how inverse optimal control (IOC) can be used to learn behaviors from demonstrations, with applications to torque control of high-dimensional robotic systems. Our method addresses two key challenges in inverse optimal control: first, the need for informative features and effective regularization to impose structure on the cost, and second, the difficulty of learning the cost function under unknown dynamics for high-dimensional continuous systems. To address the former challenge, we present an algorithm capable of learning arbitrary nonlinear cost functions, such as neural networks, without meticulous feature engineering. To address the latter challenge, we formulate an efficient sample-based approximation for MaxEnt IOC. We evaluate our method on a series of simulated tasks and real-world robotic manipulation problems, demonstrating substantial improvement over prior methods both in terms of task complexity and sample efficiency.
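For context on the sample-based approximation, the MaxEnt IOC likelihood involves an intractable partition function over trajectories; a standard way to approximate it (consistent with, but not necessarily identical to, the formulation in this work) is importance sampling from a proposal distribution $q$:

$$
\mathcal{L}(\theta) \;=\; \frac{1}{N}\sum_{\tau_i \in \mathcal{D}_{\text{demo}}} c_\theta(\tau_i)
\;+\; \log \frac{1}{M} \sum_{\tau_j \sim q} \frac{\exp\!\left(-c_\theta(\tau_j)\right)}{q(\tau_j)},
$$

where the first term is the average cost of the demonstrations and the second term estimates $\log Z = \log \int \exp\!\left(-c_\theta(\tau)\right) d\tau$ from sampled trajectories.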