A Differentiable Neural Computer (DNC) is a neural network equipped with an external memory that allows iterative content modification via read, write, and delete operations. We show that information-theoretic properties of the memory contents play an important role in the performance of such architectures. We introduce the novel concept of a memory demon for DNC architectures, which modifies the memory contents implicitly via an additive input encoding. The goal of the memory demon is to maximize the expected sum of the mutual information between consecutive external memory contents.
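A minimal sketch of how such a demon might be trained, assuming an InfoNCE-style lower bound on the mutual information between consecutive memory snapshots and a simple additive input encoder; the module and parameter names below are hypothetical and not the paper's implementation.

# Hypothetical sketch: a "demon" network adds an encoding to the DNC input and is
# trained to maximize an InfoNCE lower bound on I(M_t; M_{t+1}), the mutual
# information between consecutive external memory snapshots.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MemoryDemon(nn.Module):
    def __init__(self, input_dim, mem_slots, mem_width, proj_dim=64):
        super().__init__()
        self.encoder = nn.Linear(input_dim, input_dim)           # additive input encoding
        self.proj = nn.Linear(mem_slots * mem_width, proj_dim)   # projects a memory snapshot

    def encode_input(self, x):
        return x + self.encoder(x)                               # demon modifies the input additively

    def infonce_mi_bound(self, mem_t, mem_tp1):
        # mem_t, mem_tp1: (batch, slots, width) snapshots of the external memory
        z_t = F.normalize(self.proj(mem_t.flatten(1)), dim=-1)
        z_tp1 = F.normalize(self.proj(mem_tp1.flatten(1)), dim=-1)
        logits = z_t @ z_tp1.t() / 0.1                           # similarity of all snapshot pairs
        labels = torch.arange(z_t.size(0))                       # positives on the diagonal
        return -F.cross_entropy(logits, labels)                  # InfoNCE lower bound on MI

# Possible usage: loss = task_loss - lambda_mi * demon.infonce_mi_bound(M_t, M_tp1)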
In many sequential tasks, a model needs to remember relevant events from the distant past in order to make correct predictions. Unfortunately, a straightforward application of gradient-based training requires storing intermediate computations for every element of the sequence. If a sequence consists of thousands or even millions of elements, this requires prohibitively large computational memory and, as a result, makes learning very long-term dependencies infeasible. However, most sequence elements can usually be predicted by taking into account only temporally local information. On the other hand, predictions affected by long-term dependencies are sparse and, given only local information, are characterized by high uncertainty. We propose a new training method that allows learning long-term dependencies without backpropagating gradients through the whole sequence at once. The method can potentially be applied to any gradient-based sequence learning. Our implementation for recurrent architectures performs better than or on par with baselines while requiring substantially less computational memory.
Microprocessor architects are increasingly resorting to domain-specific customization in the quest for high performance and energy efficiency. As systems grow in complexity, fine-tuning architectural parameters across multiple sub-systems (e.g., datapath, memory blocks in different hierarchies, interconnects, compiler optimization, etc.) quickly results in a combinatorial explosion of the design space. This makes domain-specific customization an extremely challenging task. Prior work explores using reinforcement learning (RL) and other optimization methods to automatically explore the large design space. However, these methods have traditionally relied on single-agent RL/ML formulations, and it is unclear how well single-agent formulations scale as the complexity of the design space increases (e.g., full-stack System-on-Chip design). Therefore, we propose an alternative formulation that leverages Multi-Agent RL (MARL) to tackle this problem. The key idea behind using MARL is the observation that parameters across different sub-systems are more or less independent, thus allowing a decentralized role to be assigned to each agent. We test this hypothesis by designing a domain-specific DRAM memory controller for several workload traces. Our evaluation shows that the MARL formulation consistently outperforms single-agent RL baselines such as Proximal Policy Optimization and Soft Actor-Critic on different target objectives such as low power and low latency. This work thereby opens a pathway for new and promising research on MARL solutions for hardware architecture search.
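As a toy illustration of the decentralized formulation only (the paper uses full RL agents such as PPO/SAC, not bandits), the sketch below assigns one independent epsilon-greedy agent to each controller parameter and gives all agents a shared score from a stubbed simulator; the parameter names and the scoring stub are assumptions.

# Illustrative sketch, not the paper's implementation: each agent independently
# selects one architectural parameter; a shared reward comes from a stand-in
# evaluation function in place of a real memory-controller simulator.
import random

PARAM_SPACES = {
    "row_buffer_policy": ["open", "closed"],
    "scheduling": ["FCFS", "FR-FCFS"],
    "queue_depth": [16, 32, 64],
}

class BanditAgent:
    """One agent per parameter; epsilon-greedy over its own discrete choices."""
    def __init__(self, choices, eps=0.1):
        self.choices, self.eps = choices, eps
        self.value = {c: 0.0 for c in choices}
        self.count = {c: 0 for c in choices}

    def act(self):
        if random.random() < self.eps:
            return random.choice(self.choices)
        return max(self.choices, key=lambda c: self.value[c])

    def update(self, choice, reward):
        self.count[choice] += 1
        self.value[choice] += (reward - self.value[choice]) / self.count[choice]

def evaluate(config):
    # Deterministic pseudo-score per configuration; replace with a real simulator.
    return (-hash(tuple(sorted(config.items()))) % 100) / 100.0

agents = {name: BanditAgent(choices) for name, choices in PARAM_SPACES.items()}
for step in range(200):
    config = {name: agent.act() for name, agent in agents.items()}
    reward = evaluate(config)              # shared team reward (e.g., -latency or -power)
    for name, agent in agents.items():
        agent.update(config[name], reward)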
In recent years, the Transformer architecture and its variants have achieved remarkable success in many machine learning tasks. This success is intrinsically related to the processing capability of the attention mechanism and the presence of context-dependent weights. We argue that these capabilities suit the central role of a meta-reinforcement learning algorithm: a meta-RL agent needs to infer the task from a stream of trajectories, and it requires a fast adaptation strategy to adapt its policy to a new task, which can be achieved with the self-attention mechanism. In this work, we present TrMRL (Transformers for Meta-Reinforcement Learning), a meta-agent that mimics a memory reinstatement mechanism using the transformer architecture. It associates the recent past of working memories to build an episodic memory recursively through the transformer layers. We show that self-attention computes a consensus representation that minimizes the Bayes risk at each layer and provides meaningful features for computing the best actions. We conducted experiments in high-dimensional continuous control environments for locomotion and dexterous manipulation. The results show that TrMRL achieves comparable or superior asymptotic performance, sample efficiency, and out-of-distribution generalization compared with the baselines in these environments.
Although deep reinforcement learning (RL) has seen many recent successes, its methods remain sample inefficient, which makes many problems prohibitively expensive to solve in terms of data. We aim to address this by learning state representations that exploit the rich supervisory signal contained in unlabeled data. This paper introduces three different representation learning algorithms that have access to different subsets of the data sources used by a traditional RL algorithm: (i) GRICA is inspired by independent component analysis (ICA) and trains a deep neural network to output statistically independent features of its input. GRICA does this by minimizing the mutual information between each feature and the other features, and it only requires an unsorted collection of environment states. (ii) Latent Representation Prediction (LARP) requires more context: in addition to a state as input, it also needs the previous state and the action connecting them. This method learns a state representation by predicting the next state of the environment from the current state and action; the predictor is then used together with a graph search algorithm. (iii) The third method learns a state representation by training a deep neural network to learn a smoothed version of the reward function. The representation is used to preprocess the inputs to deep RL, while the reward predictor is used for reward shaping; this method only requires state-reward pairs from the environment to learn the representation. We find that each method has its strengths and weaknesses, and conclude from our experiments that including unsupervised representation learning in RL problem-solving pipelines can speed up learning.
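A minimal sketch of the LARP-style objective described above, assuming small MLP encoders and a squared-error loss in latent space; the architectural details are assumptions, not the paper's settings.

# Sketch: learn phi(s_{t+1}) from (s_t, a_t) with a latent-space prediction loss.
import torch
import torch.nn as nn

class LatentPredictor(nn.Module):
    def __init__(self, state_dim, action_dim, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(), nn.Linear(64, latent_dim))
        self.predictor = nn.Sequential(
            nn.Linear(latent_dim + action_dim, 64), nn.ReLU(), nn.Linear(64, latent_dim))

    def loss(self, s_t, a_t, s_tp1):
        z_t = self.encoder(s_t)
        z_tp1 = self.encoder(s_tp1).detach()            # target: representation of the next state
        z_pred = self.predictor(torch.cat([z_t, a_t], dim=-1))
        return ((z_pred - z_tp1) ** 2).mean()           # train encoder and predictor jointly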
A long-standing goal of reinforcement learning is to build intelligent agents that show fast learning and the flexible transfer of skills characteristic of humans and animals. This paper investigates two frameworks for addressing these goals: episodic control and successor features. Episodic control is a cognitively inspired approach that relies on episodic memory, an instance-based memory model of the agent's experiences. Meanwhile, successor features and generalized policy improvement (SF&GPI) form a meta- and transfer-learning framework that allows learning policies for tasks which can later be efficiently reused for new tasks with a different reward function. Individually, these two techniques have shown impressive results, greatly improving sample efficiency and enabling the elegant reuse of previously learned policies. We therefore outline a combination of the two approaches in a single reinforcement learning framework and empirically demonstrate its benefits.
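For reference, generalized policy improvement with successor features reduces to a simple maximization at action-selection time; the sketch below shows that step under the usual linear-reward assumption r(s, a) = phi(s, a) . w, with array shapes chosen for illustration.

# Sketch of GPI action selection with successor features (NumPy only): given
# successor features psi_i(s, a) for previously learned policies and the reward
# weights w of a new task, act greedily with respect to the best policy per action.
import numpy as np

def gpi_action(psi, w):
    """
    psi: (n_policies, n_actions, d) -- successor features at the current state s
    w:   (d,)                       -- reward weights of the new task
    """
    q = psi @ w                         # (n_policies, n_actions): Q^{pi_i}(s, a) under w
    return int(q.max(axis=0).argmax())  # argmax_a max_i Q^{pi_i}(s, a)

# Example with 2 stored policies, 3 actions, 4-dimensional features:
psi = np.random.rand(2, 3, 4)
w = np.array([1.0, 0.0, -0.5, 0.2])
a = gpi_action(psi, w)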
To enhance the cross-target and cross-scene generalization of target-driven visual navigation based on deep reinforcement learning (RL), we introduce an information-theoretic regularization term into the RL objective. The regularization maximizes the mutual information between navigation actions and the transformations of the agent's visual observations, thus promoting more informed navigation decisions. In this way, the agent models the action-observation dynamics by learning a variational generative model. Based on this model, the agent generates (imagines) the next observation from its current observation and the navigation target. It thereby learns the causality between navigation actions and the changes in its observations, which allows it to predict the next navigation action by comparing the current and imagined next observations. Cross-target and cross-scene evaluations on the AI2-THOR framework show that our method achieves a 10% improvement in average success rate over some state-of-the-art models. We further evaluate our model in two real-world settings: navigation in unseen indoor scenes from the discrete Active Vision Dataset (AVD) and in a continuous real-world environment with a TurtleBot. We demonstrate that our navigation model successfully accomplishes the navigation tasks in these scenarios. Videos and models can be found in the supplementary material.
Deep reinforcement learning (RL) is an optimization-driven framework for producing control strategies for general dynamical systems without explicit reliance on process models. Good results have been reported in simulation. Here we demonstrate the challenges of implementing a state-of-the-art deep RL algorithm on a real physical system. Aspects include the interplay between software and existing hardware; experiment design and sample efficiency; training subject to input constraints; and the interpretability of the algorithm and control law. At the core of our approach is the use of a PID controller as the trainable RL policy. Besides its simplicity, this approach has several appealing features: no additional hardware needs to be added to the control system, since a PID controller can easily be implemented on a standard programmable logic controller; the control law can easily be initialized in a "safe" region of the parameter space; and the final product, a well-tuned PID controller, has a form that practitioners can reason about and deploy with confidence.
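A minimal sketch of the core idea, a PID control law whose gains are the only trainable policy parameters; anti-windup and output saturation are omitted for brevity, and the initial gain values are placeholders.

# Sketch: the PID law u_t = Kp*e_t + Ki*integral(e) + Kd*de/dt expressed as a
# differentiable policy, so an RL algorithm can tune (Kp, Ki, Kd) directly.
import torch
import torch.nn as nn

class PIDPolicy(nn.Module):
    def __init__(self, kp=1.0, ki=0.0, kd=0.0):
        super().__init__()
        self.kp = nn.Parameter(torch.tensor(kp))
        self.ki = nn.Parameter(torch.tensor(ki))
        self.kd = nn.Parameter(torch.tensor(kd))

    def forward(self, error, integral, derivative):
        # Inputs are the tracked error, its running integral, and its derivative.
        return self.kp * error + self.ki * integral + self.kd * derivative

policy = PIDPolicy()
u = policy(torch.tensor(0.5), torch.tensor(0.1), torch.tensor(-0.2))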
Deep reinforcement learning has demonstrated the potential of neural networks tuned by gradient descent for solving complex tasks in well-specified environments. However, these neural systems are slow learners that produce specialized agents with no mechanism for continuing to learn beyond their training curriculum. In contrast, biological synaptic plasticity is persistent and manifold, and is believed to play a key role in executive functions such as working memory and cognitive flexibility, potentially supporting more efficient and more general learning abilities. Inspired by this, we propose to build networks with dynamic weights that can continually perform self-reflexive modification as a function of their current synaptic state and action-reward feedback, rather than relying on a fixed network configuration. The resulting model, MetODS (Meta-Optimized Dynamical Synapses), is a broadly applicable meta-reinforcement learning system able to learn efficient and powerful control rules in the agent's policy space. A single layer with dynamic synapses can perform one-shot learning, generalize navigation principles to unseen environments, and shows a strong ability to learn adaptive motor policies, comparing favorably with previous meta-reinforcement learning approaches.
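The sketch below only illustrates the general idea of weights that change during an episode as a function of activity and reward, using a simple reward-modulated Hebbian rule; the actual MetODS update rule is different and more expressive.

# Toy sketch: a layer whose fast weights are updated within an episode from
# pre/post-synaptic activity gated by the reward signal.
import numpy as np

class PlasticLayer:
    def __init__(self, n_in, n_out, lr=0.01):
        self.W = np.random.randn(n_in, n_out) * 0.1   # fast, episode-specific weights
        self.lr = lr

    def forward(self, x):
        self.x, self.y = x, np.tanh(x @ self.W)
        return self.y

    def plastic_update(self, reward):
        # Hebbian outer product of pre- and post-synaptic activity, gated by reward.
        self.W += self.lr * reward * np.outer(self.x, self.y)

layer = PlasticLayer(4, 3)
out = layer.forward(np.random.randn(4))
layer.plastic_update(reward=+1.0)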
Advances in reinforcement learning (RL) often rely on massive compute resources and remain notoriously sample inefficient. In contrast, the human brain is able to efficiently learn effective control strategies using limited resources. This raises the question whether insights from neuroscience can be used to improve current RL methods. Predictive processing is a popular theoretical framework which maintains that the human brain is actively seeking to minimize surprise. We show that recurrent neural networks which predict their own sensory states can be leveraged to minimise surprise, yielding substantial gains in cumulative reward. Specifically, we present the Predictive Processing Proximal Policy Optimization (P4O) agent; an actor-critic reinforcement learning agent that applies predictive processing to a recurrent variant of the PPO algorithm by integrating a world model in its hidden state. P4O significantly outperforms a baseline recurrent variant of the PPO algorithm on multiple Atari games using a single GPU. It also outperforms other state-of-the-art agents given the same wall-clock time and exceeds human gamer performance on multiple games including Seaquest, which is a particularly challenging environment in the Atari domain. Altogether, our work underscores how insights from the field of neuroscience may support the development of more capable and efficient artificial agents.
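A conceptual sketch of the kind of architecture described above: a recurrent actor-critic with an extra head that predicts the next observation from the hidden state, whose error would be added to the PPO objective. Layer sizes and the loss weighting are assumptions, not the authors' settings.

# Sketch: recurrent actor-critic with a world-model head predicting the next observation.
import torch
import torch.nn as nn

class PredictiveRecurrentAC(nn.Module):
    def __init__(self, obs_dim, n_actions, hidden=128):
        super().__init__()
        self.rnn = nn.GRUCell(obs_dim, hidden)
        self.policy = nn.Linear(hidden, n_actions)
        self.value = nn.Linear(hidden, 1)
        self.predict_obs = nn.Linear(hidden, obs_dim)   # predicts the agent's own next sensory state

    def forward(self, obs, h):
        h = self.rnn(obs, h)
        return self.policy(h), self.value(h), self.predict_obs(h), h

# Assumed combined objective:
# total_loss = ppo_loss + value_coef * value_loss + pred_coef * ((obs_pred - next_obs) ** 2).mean()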
Real-world reinforcement learning tasks often involve some form of partial observability where the observations only give a partial or noisy view of the true state of the world. Such tasks typically require some form of memory, where the agent has access to multiple past observations, in order to perform well. One popular way to incorporate memory is by using a recurrent neural network to access the agent's history. However, recurrent neural networks in reinforcement learning are often fragile and difficult to train, susceptible to catastrophic forgetting and sometimes fail completely as a result. In this work, we propose Deep Transformer Q-Networks (DTQN), a novel architecture utilizing transformers and self-attention to encode an agent's history. DTQN is designed modularly, and we compare results against several modifications to our base model. Our experiments demonstrate the transformer can solve partially observable tasks faster and more stably than previous recurrent approaches.
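A minimal sketch of a transformer Q-network over an observation history, using a stock nn.TransformerEncoder and a fixed context length; DTQN's actual design is modular and differs in detail (e.g., training on all positions with a causal mask), so treat this only as an illustration.

# Sketch: self-attention over the agent's recent observations, Q-values from the last position.
import torch
import torch.nn as nn

class HistoryQNetwork(nn.Module):
    def __init__(self, obs_dim, n_actions, context_len=50, d_model=64):
        super().__init__()
        self.embed = nn.Linear(obs_dim, d_model)
        self.pos = nn.Parameter(torch.zeros(context_len, d_model))   # learned positional encoding
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.q_head = nn.Linear(d_model, n_actions)

    def forward(self, obs_history):
        # obs_history: (batch, context_len, obs_dim)
        x = self.embed(obs_history) + self.pos
        x = self.encoder(x)                 # self-attention over the history
        return self.q_head(x[:, -1])        # Q-values from the most recent timestep

q_net = HistoryQNetwork(obs_dim=8, n_actions=4)
q = q_net(torch.randn(1, 50, 8))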
Solving temporally extended tasks is a challenge for most reinforcement learning (RL) algorithms [arXiv:1906.07343]. We investigate the ability of an RL agent to learn to ask natural-language questions as a tool for understanding its environment and achieving greater generalization performance in novel, temporally extended environments. We do this by endowing the agent with the ability to ask "yes/no" questions to an omniscient oracle. This allows the agent to obtain guidance about the task at hand while limiting its access to new information. To study the emergence of such natural-language questions in the context of temporally extended tasks, we first train the agent in a MiniGrid environment. We then transfer the trained agent to a different, harder environment. Compared with a baseline agent unable to ask questions, we observe a significant improvement in generalization performance. By grounding its understanding of natural language in its environment, the agent can reason about the dynamics of its environment to the extent that it can ask new, relevant questions when deployed in a novel environment.
We introduce a novel training procedure for policy gradient methods in which the hyperparameters of the reinforcement learning algorithm are optimized on the fly. Unlike other hyperparameter searches, we formulate hyperparameter scheduling as a standard Markov decision process and use episodic memory to store the outcomes of the hyperparameters used together with their training context. At any policy update step, the policy learner refers to the stored experiences and adaptively reconfigures its learning algorithm with the new hyperparameters determined by the memory. This mechanism, called Episodic Policy Gradient Training (EPGT), enables jointly learning the policy and the hyperparameters of the learning algorithm within a single run. Experimental results on continuous and discrete environments demonstrate the advantage of the proposed method in boosting the performance of various policy gradient algorithms.
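A toy illustration of the episodic-memory idea: store (training context, hyperparameters, outcome) tuples and, at each policy update, reuse the best-performing hyperparameters among the nearest stored contexts. The actual EPGT formulation is more involved, and all names and the example hyperparameter set here are hypothetical.

# Sketch: episodic memory of hyperparameter outcomes keyed by a training-context vector.
import random
import numpy as np

class HyperparamMemory:
    def __init__(self):
        self.keys, self.hparams, self.scores = [], [], []

    def write(self, context, hparam, score):
        self.keys.append(np.asarray(context))
        self.hparams.append(hparam)
        self.scores.append(score)

    def read(self, context, eps=0.2, k=5):
        if not self.keys or random.random() < eps:
            return {"lr": random.choice([1e-4, 3e-4, 1e-3])}       # explore a new setting
        d = [np.linalg.norm(np.asarray(context) - key) for key in self.keys]
        nearest = sorted(range(len(d)), key=lambda i: d[i])[:k]    # k most similar contexts
        best = max(nearest, key=lambda i: self.scores[i])          # best recorded outcome among them
        return self.hparams[best]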
Many real-world problems, such as network packet routing and the coordination of autonomous vehicles, are naturally modelled as cooperative multi-agent systems. There is a great need for new reinforcement learning methods that can efficiently learn decentralised policies for such systems. To this end, we propose a new multi-agent actor-critic method called counterfactual multi-agent (COMA) policy gradients. COMA uses a centralised critic to estimate the Q-function and decentralised actors to optimise the agents' policies. In addition, to address the challenges of multi-agent credit assignment, it uses a counterfactual baseline that marginalises out a single agent's action, while keeping the other agents' actions fixed. COMA also uses a critic representation that allows the counterfactual baseline to be computed efficiently in a single forward pass. We evaluate COMA in the testbed of StarCraft unit micromanagement, using a decentralised variant with significant partial observability. COMA significantly improves average performance over other multi-agent actor-critic methods in this setting, and the best performing agents are competitive with state-of-the-art centralised controllers that get access to the full state.
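The counterfactual baseline itself is a one-line expectation under the agent's own policy. A sketch, assuming the centralised critic already returns Q(s, (u_{-i}, a)) for every action a of agent i with the other agents' actions held fixed; tensor shapes are illustrative.

# Sketch of COMA's counterfactual advantage for one agent.
import torch

def coma_advantage(q_values, policy_probs, taken_action):
    """
    q_values:     (batch, n_actions) -- critic's Q for each possible action of agent i
    policy_probs: (batch, n_actions) -- agent i's current policy pi_i(a | tau_i)
    taken_action: (batch,)           -- action agent i actually took
    """
    q_taken = q_values.gather(1, taken_action.unsqueeze(1)).squeeze(1)
    counterfactual_baseline = (policy_probs * q_values).sum(dim=1)   # E_{a ~ pi_i} Q(s, (u_{-i}, a))
    return q_taken - counterfactual_baseline

q = torch.randn(4, 5)
pi = torch.softmax(torch.randn(4, 5), dim=1)
a = torch.randint(0, 5, (4,))
adv = coma_advantage(q, pi, a)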
We present MEM: Multi-view Exploration Maximization for tackling complex visual control tasks. To the best of our knowledge, MEM is the first approach that combines multi-view representation learning and intrinsic reward-driven exploration in reinforcement learning (RL). More specifically, MEM first extracts the specific and shared information of multi-view observations to form high-quality features before performing RL on the learned features, enabling the agent to fully comprehend the environment and yield better actions. Furthermore, MEM transforms the multi-view features into intrinsic rewards based on entropy maximization to encourage exploration. As a result, MEM can significantly improve the sample efficiency and generalization ability of the RL agent, facilitating solving real-world problems with high-dimensional observations and sparse-reward spaces. We evaluate MEM on various tasks from the DeepMind Control Suite and Procgen games. Extensive simulation results demonstrate that MEM achieves superior performance and outperforms the benchmarking schemes with a simple architecture and higher efficiency.
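A simplified sketch of an entropy-maximization intrinsic reward computed on learned features, using a k-nearest-neighbour particle estimate; MEM's exact estimator and multi-view feature fusion may differ, so this is only an illustration of the general mechanism.

# Sketch: each state's exploration bonus grows with the log-distance to its
# k-th nearest neighbour in feature space (a particle-based entropy estimate).
import torch

def knn_intrinsic_reward(features, k=3, eps=1e-6):
    # features: (batch, dim) features of the visited states
    dists = torch.cdist(features, features)                      # pairwise distances
    knn_dist = dists.topk(k + 1, largest=False).values[:, -1]    # k-th neighbour (index 0 is self)
    return torch.log(knn_dist + 1.0 + eps)                       # larger spread -> larger bonus

r_int = knn_intrinsic_reward(torch.randn(32, 16))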
Credit assignment in reinforcement learning is the problem of measuring an action's influence on future rewards. In particular, this requires separating skill from luck, i.e., disentangling the effect of an action on rewards from the effect of external factors and subsequent actions. To achieve this, we adapt the notion of counterfactuals from causality theory to a model-free RL setup. The key idea is to condition value functions on future events, by learning to extract relevant information from a trajectory. We formulate a family of policy gradient algorithms that use these future-conditional value functions as baselines or critics, and show that they are provably low variance. To avoid potential bias from conditioning on future information, we constrain the hindsight information to not contain information about the agent's actions. We demonstrate the efficacy and validity of our algorithms on a number of illustrative and challenging problems.
This paper proposes a novel model-based policy gradient algorithm for tracking dynamic targets using a mobile robot, equipped with an onboard sensor with limited field of view. The task is to obtain a continuous control policy for the mobile robot to collect sensor measurements that reduce uncertainty in the target states, measured by the target distribution entropy. We design a neural network control policy with the robot $SE(3)$ pose and the mean vector and information matrix of the joint target distribution as inputs and attention layers to handle variable numbers of targets. We also derive the gradient of the target entropy with respect to the network parameters explicitly, allowing efficient model-based policy gradient optimization.
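Since the joint target distribution is parameterized by a mean vector and information matrix, its entropy has a closed form under a Gaussian assumption (an assumption on our part): H = 0.5 * ln((2*pi*e)^n / det(Omega)), so reducing target uncertainty amounts to increasing log det(Omega). A small sketch of that computation:

# Sketch: differential entropy of an n-dimensional Gaussian given its information matrix Omega.
import numpy as np

def gaussian_entropy_from_information(omega):
    n = omega.shape[0]
    sign, logdet = np.linalg.slogdet(omega)          # log det(Omega), numerically stable
    return 0.5 * (n * np.log(2 * np.pi * np.e) - logdet)

H = gaussian_entropy_from_information(np.eye(3) * 4.0)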
Drug dosing is an important application of AI, which can be formulated as a Reinforcement Learning (RL) problem. In this paper, we identify two major challenges of using RL for drug dosing: delayed and prolonged effects of administering medications, which break the Markov assumption of the RL framework. We focus on prolongedness and define PAE-POMDP (Prolonged Action Effect-Partially Observable Markov Decision Process), a subclass of POMDPs in which the Markov assumption does not hold specifically due to prolonged effects of actions. Motivated by the pharmacology literature, we propose a simple and effective approach to converting drug dosing PAE-POMDPs into MDPs, enabling the use of the existing RL algorithms to solve such problems. We validate the proposed approach on a toy task, and a challenging glucose control task, for which we devise a clinically-inspired reward function. Our results demonstrate that: (1) the proposed method to restore the Markov assumption leads to significant improvements over a vanilla baseline; (2) the approach is competitive with recurrent policies which may inherently capture the prolonged effect of actions; (3) it is remarkably more time and memory efficient than the recurrent baseline and hence more suitable for real-time dosing control systems; and (4) it exhibits favorable qualitative behavior in our policy analysis.
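One illustrative way to restore the Markov property, not necessarily the paper's exact conversion, is to augment the observed state with an exponentially decaying trace of past doses, mirroring pharmacokinetic decay; the decay constant and wrapper interface below are assumptions.

# Sketch: carry the decayed cumulative effect of past actions as an extra state dimension.
import numpy as np

class ProlongedEffectWrapper:
    def __init__(self, decay=0.9):
        self.decay = decay
        self.effect = 0.0

    def reset(self, obs):
        self.effect = 0.0
        return np.append(obs, self.effect)

    def augment(self, obs, action):
        # Decayed effect of all previous doses plus the newly administered one.
        self.effect = self.decay * self.effect + float(action)
        return np.append(obs, self.effect)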
Attention is a state of arousal that enables humans to cope with limited processing bottlenecks by selectively focusing on one piece of information while ignoring other perceptible information. For decades, the concept and functions of attention have been studied in philosophy, psychology, neuroscience, and computing. Currently, this property is widely explored in deep neural networks: many different neural attention models are now available, and this has been a very active research area over the past six years. This survey provides a critical analysis of the major neural attention models from the theoretical standpoint of attention. We propose a taxonomy that corresponds to the theoretical aspects that predate deep learning. Our taxonomy provides an organizational structure that poses new questions and structures the understanding of existing attention mechanisms. In particular, 17 criteria derived from classic studies in psychology and neuroscience are used for the qualitative comparison and critical analysis of the 51 major models found in a set of more than 650 analyzed papers. In addition, we highlight several theoretical issues that have not yet been explored, including discussions of biological plausibility, point out current research trends, and provide insights for the future.
Reinforcement learning (RL) in long-horizon and sparse-reward tasks is notoriously difficult and requires a large number of training steps. A standard solution to speed up the process is to leverage additional reward signals, shaping them to better guide the learning process. In the context of language-conditioned RL, the abstraction and generalization properties of the language input provide opportunities for more efficient ways of shaping the reward. In this paper, we leverage this idea and propose an automated reward-shaping method in which the agent extracts auxiliary objectives from the general language goal. These auxiliary objectives rely on a question generation (QG) and question answering (QA) system: they consist of questions that lead the agent to try to reconstruct partial information about the global goal using its own trajectory. When it succeeds, it receives an intrinsic reward proportional to its confidence in its answer. This incentivizes the agent to generate trajectories that unambiguously explain various aspects of the general language goal. Our experimental study shows that this approach, which does not require engineers to intervene to design the auxiliary objectives, improves sample efficiency by effectively guiding exploration.