Deep reinforcement learning (RL) has recently shown great promise in continuous control tasks for robotics. Nevertheless, research in this vein has centered on centralized learning settings, which rely heavily on the availability of communication among all components of the robot. Real-world agents, however, often operate in a decentralized fashion due to latency requirements, limited power budgets, and safety concerns. By formulating the components of a robot as a system of decentralized agents, this work presents a decentralized multi-agent reinforcement learning framework for continuous control. To this end, we first develop a cooperative multi-agent PPO framework that allows centralized optimization during training and decentralized operation during execution. However, the system only receives a global reward signal that is not attributed to individual agents. To address this challenge, we further propose a general game-theoretic credit assignment framework that computes agent-specific reward signals. Last but not least, we also incorporate a model-based RL module into our credit assignment framework, which leads to a significant improvement in sample efficiency. We demonstrate the effectiveness of our framework with experimental results on MuJoCo robotic control tasks. For a demo video, please visit: https://youtu.be/gfyvpm4svey.
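The abstract does not state which game-theoretic scheme computes the agent-specific rewards; one common instance of such a scheme is a Shapley-value style decomposition that attributes the global reward to agents by their average marginal contributions. A minimal Monte Carlo sketch under that assumption (the `coalition_value` evaluator and all names are hypothetical, not taken from the paper):

```python
import random
from typing import Callable, Dict, FrozenSet, Sequence

def shapley_credit(
    agents: Sequence[int],
    coalition_value: Callable[[FrozenSet[int]], float],
    n_samples: int = 200,
) -> Dict[int, float]:
    """Monte Carlo Shapley-value estimate of each agent's share of a global reward.

    `coalition_value(S)` is a placeholder evaluator assumed to return the global
    reward achieved when only the agents in S act (e.g., the rest held at a
    default action); it is not part of the cited paper.
    """
    credit = {a: 0.0 for a in agents}
    for _ in range(n_samples):
        order = list(agents)
        random.shuffle(order)                      # random permutation of agents
        coalition: set = set()
        prev = coalition_value(frozenset(coalition))
        for a in order:
            coalition.add(a)
            value = coalition_value(frozenset(coalition))
            credit[a] += value - prev              # marginal contribution of agent a
            prev = value
    return {a: total / n_samples for a, total in credit.items()}

# Example: with an additive coalition value, each agent's credit equals its own contribution.
print(shapley_credit([0, 1, 2], lambda S: float(sum(S))))
```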
Many real-world problems, such as network packet routing and the coordination of autonomous vehicles, are naturally modelled as cooperative multi-agent systems. There is a great need for new reinforcement learning methods that can efficiently learn decentralised policies for such systems. To this end, we propose a new multi-agent actor-critic method called counterfactual multi-agent (COMA) policy gradients. COMA uses a centralised critic to estimate the Q-function and decentralised actors to optimise the agents' policies. In addition, to address the challenges of multi-agent credit assignment, it uses a counterfactual baseline that marginalises out a single agent's action, while keeping the other agents' actions fixed. COMA also uses a critic representation that allows the counterfactual baseline to be computed efficiently in a single forward pass. We evaluate COMA in the testbed of StarCraft unit micromanagement, using a decentralised variant with significant partial observability. COMA significantly improves average performance over other multi-agent actor-critic methods in this setting, and the best performing agents are competitive with state-of-the-art centralised controllers that get access to the full state.
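Concretely, the counterfactual baseline marginalises agent a's own action under its policy while the other agents' joint action stays fixed, giving the per-agent advantage (standard COMA formulation, stated here for reference):

```latex
A^{a}(s,\mathbf{u}) \;=\; Q(s,\mathbf{u}) \;-\; \sum_{u'^{a}} \pi^{a}\!\left(u'^{a}\mid\tau^{a}\right) Q\!\left(s,\left(\mathbf{u}^{-a},u'^{a}\right)\right)
```

where $\tau^{a}$ is agent $a$'s action-observation history and $\mathbf{u}^{-a}$ the joint action of the other agents.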
One of the major obstacles to scaling multi-agent reinforcement learning is assigning credit to individual agents' actions among a large number of agents. In this paper, we address this credit assignment problem with an approach we call \emph{partial reward decoupling} (PRD), which attempts to decompose large cooperative multi-agent RL problems into decoupled subproblems involving subsets of agents, thereby simplifying credit assignment. We empirically demonstrate that decomposing the RL problem using PRD in an actor-critic algorithm results in lower-variance policy gradient estimates, which improves data efficiency, learning stability, and asymptotic performance across a wide array of multi-agent RL tasks, compared to various other actor-critic approaches. In addition, we compare our approach to counterfactual multi-agent policy gradients (COMA), a state-of-the-art MARL algorithm, and empirically show that our approach outperforms COMA by making better use of the information in agents' reward streams and by enabling recent advances in advantage estimation to be used.
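As a purely illustrative rendering of the decoupling idea (not the paper's exact estimator), restricting each agent's learning signal to the advantages of the subset of agents it can actually affect gives a policy gradient of roughly the form

```latex
\nabla_{\theta_i} J \;\approx\; \mathbb{E}\!\left[\, \nabla_{\theta_i} \log \pi_{\theta_i}(a_i \mid o_i)\, \sum_{j \in \mathcal{R}_i} \hat{A}_j \,\right]
```

where $\mathcal{R}_i$ denotes the (assumed) set of agents whose rewards agent $i$ influences; terms from agents outside $\mathcal{R}_i$ drop out, which is the source of the variance reduction.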
Deep reinforcement learning is poised to revolutionise the field of AI and represents a step towards building autonomous systems with a higher level understanding of the visual world. Currently, deep learning is enabling reinforcement learning to scale to problems that were previously intractable, such as learning to play video games directly from pixels. Deep reinforcement learning algorithms are also applied to robotics, allowing control policies for robots to be learned directly from camera inputs in the real world. In this survey, we begin with an introduction to the general field of reinforcement learning, then progress to the main streams of value-based and policy-based methods. Our survey will cover central algorithms in deep reinforcement learning, including the deep Q-network, trust region policy optimisation, and asynchronous advantage actor-critic. In parallel, we highlight the unique advantages of deep neural networks, focusing on visual understanding via reinforcement learning. To conclude, we describe several current areas of research within the field.
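For reference, the deep Q-network mentioned above minimises the standard temporal-difference loss against a slowly updated target network $\theta^{-}$ (textbook formulation, not specific to this survey):

```latex
L(\theta) \;=\; \mathbb{E}_{(s,a,r,s') \sim \mathcal{D}} \left[ \left( r + \gamma \max_{a'} Q(s',a';\theta^{-}) - Q(s,a;\theta) \right)^{2} \right]
```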
Policy gradient methods have become popular in multi-agent reinforcement learning, but they suffer from high variance due to the presence of environmental stochasticity and exploring agents (i.e., non-stationarity), which is potentially worsened by the difficulty of credit assignment. As a result, a method is needed that is not only capable of efficiently addressing these two issues but is also robust enough to solve a variety of tasks. To this end, we propose a new multi-agent policy gradient method called Robust Local Advantage (ROLA) Actor-Critic. ROLA allows each agent to learn an individual action-value function as a local critic, while ameliorating environmental non-stationarity via a novel centralized training approach based on a centralized critic. Using this local critic, each agent computes a baseline to reduce the variance of its policy gradient estimation, which yields an expected advantage action-value over other agents' choices that implicitly improves credit assignment. We evaluate ROLA across diverse benchmarks and show its robustness and effectiveness over a number of state-of-the-art multi-agent policy gradient algorithms.
Reinforcement learning in multi-agent scenarios is important for real-world applications but presents challenges beyond those seen in single-agent settings. We present an actor-critic algorithm that trains decentralized policies in multi-agent settings, using centrally computed critics that share an attention mechanism which selects relevant information for each agent at every timestep. This attention mechanism enables more effective and scalable learning in complex multi-agent environments, when compared to recent approaches. Our approach is applicable not only to cooperative settings with shared rewards, but also individualized reward settings, including adversarial settings, as well as settings that do not provide global states, and it makes no assumptions about the action spaces of the agents. As such, it is flexible enough to be applied to most multi-agent learning problems.
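A minimal sketch (assumed architecture and sizes, not the paper's exact model) of a centralized critic for agent i that attends over the other agents' encoded observation-action pairs:

```python
import torch
import torch.nn as nn

class AttentionCritic(nn.Module):
    """Toy attention critic: agent i's Q-values conditioned on attended context."""

    def __init__(self, enc_dim: int = 64, n_actions: int = 5):
        super().__init__()
        self.query = nn.Linear(enc_dim, enc_dim, bias=False)
        self.key = nn.Linear(enc_dim, enc_dim, bias=False)
        self.value = nn.Linear(enc_dim, enc_dim, bias=False)
        self.q_head = nn.Linear(2 * enc_dim, n_actions)

    def forward(self, e_i: torch.Tensor, e_others: torch.Tensor) -> torch.Tensor:
        # e_i: (batch, enc_dim) encoding of agent i's observation-action pair
        # e_others: (batch, n_other, enc_dim) encodings of the remaining agents
        q = self.query(e_i).unsqueeze(1)                       # (batch, 1, enc_dim)
        k, v = self.key(e_others), self.value(e_others)        # (batch, n_other, enc_dim)
        scores = (q @ k.transpose(1, 2)) / k.shape[-1] ** 0.5  # scaled dot product
        attn = torch.softmax(scores, dim=-1)                   # weights over other agents
        context = (attn @ v).squeeze(1)                        # (batch, enc_dim)
        return self.q_head(torch.cat([e_i, context], dim=-1))  # per-action Q-values
```

The attention weights select, per time step, which other agents' information matters for evaluating agent i, which is the property the abstract credits for scalability.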
The creation and destruction of agents in cooperative multi-agent reinforcement learning (MARL) is a critically under-explored area of research. Current MARL algorithms often assume that the number of agents within a group remains fixed throughout an experiment. However, in many practical problems, an agent may terminate before its teammates. This early termination issue presents a challenge: the terminated agent must learn from the group's success or failure, which occurs beyond its own existence. We refer to propagating the value of rewards earned by remaining teammates back to terminated agents as the posthumous credit assignment problem. Current MARL methods handle this problem by placing such agents in an absorbing state until the entire group of agents reaches a termination condition. Although absorbing states enable existing algorithms and APIs to handle terminated agents without modification, practical training efficiency and resource usage problems remain. In this work, we first show that sample complexity increases with the number of absorbing states in a toy supervised learning task, while attention is more robust to variable-size inputs. We then present a novel architecture for an existing state-of-the-art MARL algorithm that uses attention instead of a fully connected layer with absorbing states. Finally, we demonstrate that this novel architecture significantly outperforms the standard architecture on tasks in which agents are created or destroyed within episodes, as well as on standard multi-agent coordination tasks.
We extend the Remember and Forget Experience Replay (ReF-ER) algorithm to multi-agent reinforcement learning (MARL). ReF-ER was shown to outperform state-of-the-art algorithms for continuous control, from the OpenAI Gym to complex fluid flows. In MARL, the dependencies between agents are included in the state-value estimator, and the environment dynamics are modeled via the importance weights used by ReF-ER. In collaborative settings, we find the best performance when the value is estimated using individual rewards and the effects of other agents' actions on the transition map are ignored. We benchmark the performance of ReF-ER MARL on the Stanford Intelligent Systems Laboratory (SISL) environments. We find that employing a single feed-forward neural network for the policy and the value function in ReF-ER outperforms state-of-the-art algorithms that rely on complex neural network architectures.
Many scenarios in mobility and traffic involve multiple different agents that need to cooperate to find a joint solution. Recent advances in behavioral planning use reinforcement learning to find effective and performant behavior strategies. However, as autonomous vehicles and vehicle-to-X communication become more mature, solutions that use only single, independent agents leave potential performance gains unexploited. Multi-agent reinforcement learning (MARL) is a research field that aims to find optimal solutions for multiple agents interacting with each other. This work aims to give an overview of the field to researchers in autonomous mobility. We first explain MARL and introduce important concepts. We then discuss the central paradigms underlying MARL algorithms and give an overview of state-of-the-art methods and ideas in each paradigm. With this background, we survey applications of MARL in autonomous mobility scenarios and give an overview of existing scenarios and implementations.
State-of-the-art multi-agent reinforcement learning (MARL) methods have provided promising solutions to a variety of complex problems. Yet, these methods all assume that agents perform synchronized primitive-action executions, so they are not genuinely scalable to long-horizon real-world multi-agent/robot tasks that inherently require agents/robots to asynchronously reason about high-level action selection at varying time durations. The Macro-Action Decentralized Partially Observable Markov Decision Process (MacDec-POMDP) is a general formalization for asynchronous decision-making under uncertainty in fully cooperative multi-agent tasks. In this thesis, we first propose a family of value-based RL approaches for MacDec-POMDPs, in which agents are allowed to perform asynchronous learning and decision-making with macro-action-value functions under three paradigms: decentralized learning and control, centralized learning and control, and centralized training for decentralized execution (CTDE). Building on the above work, we formulate a set of macro-action-based policy gradient algorithms under the three training paradigms, where agents are allowed to directly optimize their parameterized policies in an asynchronous manner. We evaluate our methods both in simulation and on real robots. Empirical results demonstrate the superiority of our approaches in large multi-agent problems and validate the effectiveness of our algorithms for learning high-quality and asynchronous solutions with macro-actions.
Asset allocation (or portfolio management) is the task of determining how to optimally allocate a finite budget of funds across a range of financial instruments/assets, such as stocks. This study investigates the performance of reinforcement learning (RL) applied to portfolio management using model-free deep RL agents. We trained several RL agents on real-world stock prices to learn how to perform asset allocation. We compared the performance of these RL agents against some baseline agents, and we also compared the RL agents among themselves to understand which classes of agents performed better. From our analysis, RL agents can perform the task of portfolio management, as they significantly outperformed the baseline agents (random allocation and uniform allocation). Four RL agents (A2C, SAC, PPO, and TRPO) outperformed the best baseline, MPT, overall, which shows the ability of RL agents to uncover more profitable trading strategies. Furthermore, there were no significant performance differences between value-based and policy-based RL agents. Actor-critic agents performed better than the other types of agents. Also, on-policy agents performed better than off-policy agents, because they are better at policy evaluation and sample efficiency is not a significant issue in portfolio management. This study shows that RL agents can substantially improve asset allocation, as they outperform strong baselines. Based on our analysis, on-policy, actor-critic RL agents show the most promise.
Conventional control methods create obstacles due to the complexity of systems and their intense demand for data, so modern and more efficient control methods are required. To this end, off-policy, model-free reinforcement learning algorithms help to avoid working with complex models. In terms of speed and accuracy, they have become prominent methods because the algorithms use their past experience to learn optimal policies. In this study, three reinforcement learning algorithms, DDPG, TD3, and SAC, have been used to train the Fetch robotic manipulator on four different tasks in the MuJoCo simulation environment. All of these algorithms are off-policy and able to reach their desired targets by optimizing both policy and value functions. In the current study, the efficiency and speed of these three algorithms are analyzed in a controlled environment.
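A minimal sketch of the kind of setup described, using Stable-Baselines3's SAC on a Fetch task; the environment id, registration call, and hyperparameters are assumptions (they depend on the installed gymnasium-robotics version) and are not taken from the study:

```python
import gymnasium as gym
import gymnasium_robotics  # provides the MuJoCo Fetch environments
from stable_baselines3 import SAC

gym.register_envs(gymnasium_robotics)            # register the Fetch tasks (newer gymnasium)
env = gym.make("FetchReach-v3")                  # assumed env id; adjust to your version
model = SAC("MultiInputPolicy", env, verbose=1)  # dict observations -> MultiInputPolicy
model.learn(total_timesteps=200_000)
model.save("sac_fetch_reach")
```

The same pattern applies to DDPG and TD3 by swapping the algorithm class.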
In recent years, reinforcement learning has developed a family of policy gradient methods that mostly model the stochastic policy with a Gaussian distribution. However, the Gaussian distribution has infinite support, whereas real-world applications usually have bounded action spaces. This dissonance causes an estimation bias that can be eliminated by the Beta distribution, since it has finite support. In this work, we investigate how the Beta policy performs when trained on two continuous control tasks from the OpenAI Gym. For both tasks, the Beta policy outperforms the Gaussian policy in terms of the agent's final expected reward, and also shows more stability and faster convergence of the training process. For the CarRacing environment with high-dimensional image input, the agent's success rate improved by 63% over the Gaussian policy.
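A minimal sketch of a Beta policy head for a bounded action space (module names, layer sizes, and the affine rescaling convention are illustrative assumptions, not the paper's code):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.distributions import Beta

class BetaPolicy(nn.Module):
    """Toy Beta policy: samples lie in (0, 1) and are rescaled to [low, high]."""

    def __init__(self, obs_dim: int, act_dim: int, low: float, high: float):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(obs_dim, 64), nn.Tanh())
        self.alpha_head = nn.Linear(64, act_dim)
        self.beta_head = nn.Linear(64, act_dim)
        self.low, self.high = low, high

    def dist(self, obs: torch.Tensor) -> Beta:
        h = self.body(obs)
        # softplus + 1 keeps both concentrations above 1, giving a unimodal density
        alpha = F.softplus(self.alpha_head(h)) + 1.0
        beta = F.softplus(self.beta_head(h)) + 1.0
        return Beta(alpha, beta)

    def act(self, obs: torch.Tensor):
        d = self.dist(obs)
        x = d.rsample()                                   # finite support: x in (0, 1)
        action = self.low + (self.high - self.low) * x    # rescale to the action bounds
        log_prob = d.log_prob(x).sum(-1)                  # log-density of the raw sample
        return action, log_prob

policy = BetaPolicy(obs_dim=8, act_dim=2, low=-1.0, high=1.0)
action, logp = policy.act(torch.randn(4, 8))
```

Because the density is zero outside the bounds, sampled actions never need clipping, which removes the boundary bias the abstract attributes to the Gaussian policy.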
The predominant approach in reinforcement learning is to assign credit to actions based on the expected return. However, we show that the return may depend on the policy, which can lead to excessive variance in the value estimates and slow down learning. Instead, we show that the advantage function can be interpreted as a causal effect and shares similar properties with causal representations. Based on this insight, we propose Direct Advantage Estimation (DAE), a novel method that can model the advantage function and estimate it directly from on-policy data, while simultaneously minimizing the variance of the return without requiring an (action-)value function. We also relate our method to temporal-difference methods by showing how value functions can be seamlessly integrated into DAE. The proposed method is easy to implement and can be readily adopted by modern actor-critic methods. We evaluate DAE empirically on three discrete control domains and show that it can outperform generalized advantage estimation (GAE), a strong baseline for advantage estimation, on a majority of the environments when applied to policy optimization.
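For context, the GAE baseline mentioned above estimates the advantage as an exponentially weighted sum of temporal-difference errors (standard formulation, stated here for reference):

```latex
\hat{A}_t^{\mathrm{GAE}(\gamma,\lambda)} \;=\; \sum_{l=0}^{\infty} (\gamma\lambda)^{l}\,\delta_{t+l},
\qquad \delta_t = r_t + \gamma V(s_{t+1}) - V(s_t)
```

DAE, by contrast, is described as modeling the advantage function directly from on-policy data rather than deriving it from a learned value function.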
Robots and intelligent systems that sense and interact with the world are increasingly being used to automate a wide array of tasks. The ability of these systems to complete such tasks depends on a broad range of components, such as the mechanical and electrical parts that make up the physical body of the robot and its sensors, perception algorithms to interpret the environment, and planning and control algorithms to produce meaningful actions. It is therefore often necessary to consider the interactions between these components when designing an embodied system. This thesis explores the task-driven co-design of robotic systems in an end-to-end manner, simultaneously optimizing the physical components of a system along with its inference or control algorithms directly for task performance. We first consider the problem of directly optimizing a beacon-based localization system for localization accuracy. Designing such a system involves placing beacons throughout the environment and inferring position from sensor readings. In our work, we develop a deep learning approach to jointly optimize the placement of beacons and the position inference for localization accuracy. We then turn our attention to the related problem of task-driven optimization of robots and their controllers. We first propose a data-efficient algorithm based on multi-task reinforcement learning. Our approach efficiently optimizes both physical design and control parameters directly for task performance by leveraging a controller that generalizes over the space of physical designs by being conditioned on the design. We then extend this to allow optimization over discrete morphological parameters such as the number and configuration of limbs. Finally, we conclude by exploring the fabrication and deployment of optimized soft robots.
In this paper, we consider the problem of path finding for a set of homogeneous and autonomous agents navigating a previously unknown stochastic environment. In our problem setting, each agent attempts to maximize a given utility function while respecting safety properties. Our solution is based on ideas from evolutionary game theory, namely replicating policies that perform well and diminishing ones that do not. We do a comprehensive comparison with related multiagent planning methods, and show that our technique beats state of the art RL algorithms in minimizing path length by nearly 30% in large spaces. We show that our algorithm is computationally faster than deep RL methods by at least an order of magnitude. We also show that it scales better with an increase in the number of agents as compared to other methods, path planning methods in particular. Lastly, we empirically prove that the policies that we learn are evolutionarily stable and thus impervious to invasion by any other policy.
Cooperative multi-agent reinforcement learning (MARL) has achieved significant results, most notably by leveraging the representation-learning abilities of deep neural networks. However, large centralized approaches quickly become infeasible as the number of agents scale, and fully decentralized approaches can miss important opportunities for information sharing and coordination. Furthermore, not all agents are equal -- in some cases, individual agents may not even have the ability to send communication to other agents or explicitly model other agents. This paper considers the case where there is a single, powerful, \emph{central agent} that can observe the entire observation space, and there are multiple, low-powered \emph{local agents} that can only receive local observations and are not able to communicate with each other. The central agent's job is to learn what message needs to be sent to different local agents based on the global observations, not by centrally solving the entire problem and sending action commands, but by determining what additional information an individual agent should receive so that it can make a better decision. In this work we present our MARL algorithm \algo, describe where it would be most applicable, and implement it in the cooperative navigation and multi-agent walker domains. Empirical results show that 1) learned communication does indeed improve system performance, 2) results generalize to heterogeneous local agents, and 3) results generalize to different reward structures.
We present Coordinated Proximal Policy Optimization (CoPPO), an algorithm that extends the original Proximal Policy Optimization (PPO) to the multi-agent setting. The key idea lies in the coordinated adaptation of step size during the policy update process among multiple agents. We prove the monotonicity of policy improvement when optimizing a theoretically-grounded joint objective, and derive a simplified optimization objective based on a set of approximations. We then interpret how such an objective in CoPPO can achieve dynamic credit assignment among agents, thereby alleviating the high-variance issue during the simultaneous update of agent policies. Finally, we demonstrate that CoPPO outperforms several strong baselines, including the latest multi-agent PPO method (i.e., MAPPO), under typical multi-agent settings such as cooperative matrix games and StarCraft II micromanagement tasks.
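For reference, the independent per-agent starting point that CoPPO coordinates is the standard PPO clipped surrogate for each agent $i$, with ratio $r_i(\theta_i) = \pi_{\theta_i}(a_i \mid o_i)/\pi_{\theta_i^{\text{old}}}(a_i \mid o_i)$ (textbook form; the paper's joint objective modifies how these per-agent ratios and step sizes interact):

```latex
L_i^{\mathrm{CLIP}}(\theta_i) \;=\; \mathbb{E}\!\left[ \min\!\Big( r_i(\theta_i)\,\hat{A},\; \operatorname{clip}\!\big(r_i(\theta_i),\, 1-\epsilon,\, 1+\epsilon\big)\,\hat{A} \Big) \right]
```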
Adequately assigning credit to actions for future outcomes based on their contributions is a long-standing open challenge in Reinforcement Learning. The assumptions of the most commonly used credit assignment method are disadvantageous in tasks where the effects of decisions are not immediately evident. Furthermore, this method can only evaluate actions that have been selected by the agent, making it highly inefficient. Still, no alternative methods have been widely adopted in the field. Hindsight Credit Assignment is a promising, but still unexplored candidate, which aims to solve the problems of both long-term and counterfactual credit assignment. In this thesis, we empirically investigate Hindsight Credit Assignment to identify its main benefits, and key points to improve. Then, we apply it to factored state representations, and in particular to state representations based on the causal structure of the environment. In this setting, we propose a variant of Hindsight Credit Assignment that effectively exploits a given causal structure. We show that our modification greatly decreases the workload of Hindsight Credit Assignment, making it more efficient and enabling it to outperform the baseline credit assignment method on various tasks. This opens the way to other methods based on given or learned causal structures.
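As background for the method investigated here, the return-conditional form of Hindsight Credit Assignment rewrites the action value through the hindsight distribution $h(a \mid x, z) = \mathbb{P}^{\pi}(A_0 = a \mid X_0 = x, Z = z)$, where $Z$ is the return observed from state $x$; applying Bayes' rule to $Q^{\pi}(x,a) = \mathbb{E}[Z \mid x, a]$ gives

```latex
Q^{\pi}(x, a) \;=\; \mathbb{E}_{Z \sim P^{\pi}(\cdot \mid x)}\!\left[ \frac{h(a \mid x, Z)}{\pi(a \mid x)}\, Z \right]
```

so actions are credited by how much more (or less) likely they appear once the outcome $Z$ is known, rather than only through the returns sampled after selecting them.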
Reinforcement learning (RL) solves sequential decision-making problems via a trial-and-error process of interacting with the environment. While RL has achieved outstanding success in playing complex video games, mistakes are always undesirable in real-world problems. To improve sample efficiency and thereby reduce errors, model-based reinforcement learning (MBRL) is believed to be a promising direction, as it builds environment models in which trial-and-error can take place without incurring real costs. In this survey, we review MBRL with a focus on recent advances in deep RL. For non-tabular environments, there is always a generalization error between the learned environment model and the real environment. It is therefore crucial to analyze the discrepancy between policy training in the environment model and in the real environment, which in turn guides the design of algorithms for better model learning, model usage, and policy training. In addition, we discuss recent advances in other forms of RL, including offline RL, goal-conditioned RL, multi-agent RL, and meta-RL. Moreover, we discuss the applicability and advantages of MBRL in real-world tasks. Finally, we conclude this survey by discussing promising prospects for the future development of MBRL. We believe that MBRL has great potential and advantages in real-world applications that have been overlooked, and we hope this survey can attract more research on MBRL.
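As a minimal, classical illustration of the core model-based idea surveyed above (learn an environment model, then use cheap simulated transitions for extra planning updates), here is a tabular Dyna-Q sketch on a toy chain MDP; it is not one of the deep MBRL methods the survey covers, and all names are illustrative:

```python
import random
from collections import defaultdict

# Toy 5-state chain: actions 0 (left) / 1 (right); reward 1 on reaching the last state.
N_STATES, ACTIONS, GOAL = 5, (0, 1), 4

def step(s: int, a: int):
    s2 = max(0, min(N_STATES - 1, s + (1 if a == 1 else -1)))
    return s2, (1.0 if s2 == GOAL else 0.0), s2 == GOAL

def dyna_q(episodes=50, planning_steps=10, alpha=0.1, gamma=0.95, eps=0.1):
    Q = defaultdict(float)   # tabular action values, Q[(state, action)]
    model = {}               # learned deterministic model: (s, a) -> (s2, r)
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            a = random.choice(ACTIONS) if random.random() < eps \
                else max(ACTIONS, key=lambda x: Q[(s, x)])
            s2, r, done = step(s, a)
            # (1) direct RL update from real experience
            Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, x)] for x in ACTIONS) - Q[(s, a)])
            # (2) model learning
            model[(s, a)] = (s2, r)
            # (3) planning: extra updates from transitions simulated by the model
            for _ in range(planning_steps):
                (ps, pa), (ps2, pr) = random.choice(list(model.items()))
                Q[(ps, pa)] += alpha * (pr + gamma * max(Q[(ps2, x)] for x in ACTIONS)
                                        - Q[(ps, pa)])
            s = s2
    return Q

Q = dyna_q()
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES)})  # greedy policy
```

The planning loop is where sample efficiency is gained: each real transition is reused many times through the learned model, mirroring the trial-and-error-without-real-cost motivation described in the survey.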