We consider the problem of multiple agents sensing and acting in environments with the goal of maximising their shared utility. In these environments, agents must learn communication protocols in order to share information that is needed to solve the tasks. By embracing deep neural networks, we are able to demonstrate end-to-end learning of protocols in complex environments inspired by communication riddles and multi-agent computer vision problems with partial observability. We propose two approaches for learning in these domains: Reinforced Inter-Agent Learning (RIAL) and Differentiable Inter-Agent Learning (DIAL). The former uses deep Q-learning, while the latter exploits the fact that, during learning, agents can backpropagate error derivatives through (noisy) communication channels. Hence, this approach uses centralised learning but decentralised execution. Our experiments introduce new environments for studying the learning of communication protocols and present a set of engineering innovations that are essential for success in these domains.
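For concreteness, here is a minimal sketch of the kind of differentiable channel DIAL relies on: during centralised learning a real-valued message passes through a noisy sigmoid so gradients can flow back to the sender, while at execution time the message is discretised. PyTorch is assumed; the class name DRU and the noise level are illustrative rather than taken from the paper's code.

```python
# Minimal sketch of a DIAL-style differentiable communication channel.
import torch

class DRU:
    """Discretise/regularise unit: noisy sigmoid during (centralised) training,
    hard threshold during (decentralised) execution."""
    def __init__(self, noise_std=2.0):
        self.noise_std = noise_std

    def __call__(self, message_logits, training=True):
        if training:
            # Gradients flow through the noisy sigmoid back into the sender.
            noise = torch.randn_like(message_logits) * self.noise_std
            return torch.sigmoid(message_logits + noise)
        # At execution time the channel is discrete: send a binary message.
        return (message_logits > 0).float()

# Usage: the sender produces message logits, the receiver consumes the (noisy)
# message, and the receiver's TD error backpropagates through the channel.
dru = DRU()
msg_logits = torch.zeros(1, 4, requires_grad=True)
msg = dru(msg_logits, training=True)
loss = msg.sum()          # stand-in for the receiver's loss
loss.backward()           # msg_logits.grad is populated via the channel
```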
Cooperative multi-agent reinforcement learning (MARL) has achieved significant results, most notably by leveraging the representation-learning abilities of deep neural networks. However, large centralized approaches quickly become infeasible as the number of agents scales, and fully decentralized approaches can miss important opportunities for information sharing and coordination. Furthermore, not all agents are equal -- in some cases, individual agents may not even have the ability to send communication to other agents or explicitly model other agents. This paper considers the case where there is a single, powerful central agent that can observe the entire observation space, and there are multiple, low-powered local agents that can only receive local observations and are not able to communicate with each other. The central agent's job is to learn what message needs to be sent to different local agents based on the global observations, not by centrally solving the entire problem and sending action commands, but by determining what additional information an individual agent should receive so that it can make a better decision. In this work we present our MARL algorithm \algo, describe where it would be most applicable, and implement it in the cooperative navigation and multi-agent walker domains. Empirical results show that 1) learned communication does indeed improve system performance, 2) results generalize to heterogeneous local agents, and 3) results generalize to different reward structures.
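As a rough illustration of the architecture described above (the paper's own implementation of \algo is not shown here), the sketch below lets a central network map the global observation to one message per local agent, while each local agent acts only on its own observation plus its received message. All module names and sizes are assumptions.

```python
# Hypothetical sketch of the central-agent / local-agent split described above.
import torch
import torch.nn as nn

class CentralAgent(nn.Module):
    """Maps the global observation to one message per local agent."""
    def __init__(self, global_obs_dim, msg_dim, n_agents):
        super().__init__()
        self.n_agents, self.msg_dim = n_agents, msg_dim
        self.net = nn.Sequential(
            nn.Linear(global_obs_dim, 128), nn.ReLU(),
            nn.Linear(128, n_agents * msg_dim),
        )

    def forward(self, global_obs):                    # (batch, global_obs_dim)
        return self.net(global_obs).view(-1, self.n_agents, self.msg_dim)

class LocalAgent(nn.Module):
    """Acts on its own observation plus the message it receives."""
    def __init__(self, local_obs_dim, msg_dim, n_actions):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(local_obs_dim + msg_dim, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, local_obs, message):
        return self.net(torch.cat([local_obs, message], dim=-1))  # action logits
```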
In many real-world settings, a team of agents must coordinate their behaviour while acting in a decentralised way. At the same time, it is often possible to train the agents in a centralised fashion in a simulated or laboratory setting, where global state information is available and communication constraints are lifted. Learning joint action-values conditioned on extra state information is an attractive way to exploit centralised learning, but the best strategy for then extracting decentralised policies is unclear. Our solution is QMIX, a novel value-based method that can train decentralised policies in a centralised end-to-end fashion. QMIX employs a network that estimates joint action-values as a complex non-linear combination of per-agent values that condition only on local observations. We structurally enforce that the joint-action value is monotonic in the per-agent values, which allows tractable maximisation of the joint action-value in off-policy learning, and guarantees consistency between the centralised and decentralised policies. We evaluate QMIX on a challenging set of StarCraft II micromanagement tasks, and show that QMIX significantly outperforms existing value-based multi-agent reinforcement learning methods.
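A minimal sketch of the monotonic mixing idea follows: state-conditioned hypernetworks produce non-negative weights that combine the per-agent values into a joint value, so the joint action-value is monotonic in each agent's value. Layer sizes are illustrative and details of the full QMIX implementation are omitted.

```python
# Sketch of a QMIX-style monotonic mixer: dQ_tot/dQ_a >= 0 by construction.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MonotonicMixer(nn.Module):
    def __init__(self, n_agents, state_dim, embed_dim=32):
        super().__init__()
        self.n_agents, self.embed_dim = n_agents, embed_dim
        # Hypernetworks: the global state generates the mixing weights/biases.
        self.hyper_w1 = nn.Linear(state_dim, n_agents * embed_dim)
        self.hyper_b1 = nn.Linear(state_dim, embed_dim)
        self.hyper_w2 = nn.Linear(state_dim, embed_dim)
        self.hyper_b2 = nn.Sequential(nn.Linear(state_dim, embed_dim),
                                      nn.ReLU(), nn.Linear(embed_dim, 1))

    def forward(self, agent_qs, state):        # agent_qs: (batch, n_agents)
        b = agent_qs.size(0)
        # Absolute value keeps the mixing weights non-negative (monotonicity).
        w1 = torch.abs(self.hyper_w1(state)).view(b, self.n_agents, self.embed_dim)
        b1 = self.hyper_b1(state).view(b, 1, self.embed_dim)
        hidden = F.elu(torch.bmm(agent_qs.view(b, 1, -1), w1) + b1)
        w2 = torch.abs(self.hyper_w2(state)).view(b, self.embed_dim, 1)
        b2 = self.hyper_b2(state).view(b, 1, 1)
        return (torch.bmm(hidden, w2) + b2).view(b, 1)   # Q_tot
```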
Many real-world problems, such as network packet routing and the coordination of autonomous vehicles, are naturally modelled as cooperative multi-agent systems. There is a great need for new reinforcement learning methods that can efficiently learn decentralised policies for such systems. To this end, we propose a new multi-agent actor-critic method called counterfactual multi-agent (COMA) policy gradients. COMA uses a centralised critic to estimate the Q-function and decentralised actors to optimise the agents' policies. In addition, to address the challenges of multi-agent credit assignment, it uses a counterfactual baseline that marginalises out a single agent's action, while keeping the other agents' actions fixed. COMA also uses a critic representation that allows the counterfactual baseline to be computed efficiently in a single forward pass. We evaluate COMA in the testbed of StarCraft unit micromanagement, using a decentralised variant with significant partial observability. COMA significantly improves average performance over other multi-agent actor-critic methods in this setting, and the best performing agents are competitive with state-of-the-art centralised controllers that get access to the full state.
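The counterfactual baseline can be sketched in a few lines: assuming the critic returns, in one forward pass, the joint action-value for every alternative action of a given agent (with the other agents' actions fixed), the baseline is the policy-weighted average of those values. Tensor names and shapes are assumptions for illustration.

```python
# Sketch of a COMA-style counterfactual advantage for one agent.
import torch

def counterfactual_advantage(q_all_actions, pi_a, taken_action):
    """q_all_actions: (batch, n_actions) critic outputs with other agents' actions fixed
    pi_a:           (batch, n_actions) agent a's current policy probabilities
    taken_action:   (batch,) long tensor, index of the action agent a actually took"""
    q_taken = q_all_actions.gather(1, taken_action.unsqueeze(1)).squeeze(1)
    baseline = (pi_a * q_all_actions).sum(dim=1)   # marginalise out agent a's action
    return q_taken - baseline                      # advantage used in the policy gradient

# The policy gradient for agent a is then E[ grad log pi_a(u_a|tau_a) * advantage ].
```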
This work considers the problem of learning cooperative policies in complex, partially observable domains without explicit communication. We extend three classes of single-agent deep reinforcement learning algorithms based on policy gradient, temporal-difference error, and actor-critic methods to cooperative multi-agent systems. We introduce a set of cooperative control tasks that includes tasks with discrete and continuous actions, as well as tasks that involve hundreds of agents. The three approaches are evaluated against each other using different neural architectures, training procedures, and reward structures. Using deep reinforcement learning with a curriculum learning scheme, our approach can solve problems that were previously considered intractable by most multi-agent reinforcement learning algorithms. We show that policy gradient methods tend to outperform both temporal-difference and actor-critic methods when using feed-forward neural architectures. We also show that recurrent policies, while more difficult to train, outperform feed-forward policies on our evaluation tasks.
Deep reinforcement learning is poised to revolutionise the field of AI and represents a step towards building autonomous systems with a higher level understanding of the visual world. Currently, deep learning is enabling reinforcement learning to scale to problems that were previously intractable, such as learning to play video games directly from pixels. Deep reinforcement learning algorithms are also applied to robotics, allowing control policies for robots to be learned directly from camera inputs in the real world. In this survey, we begin with an introduction to the general field of reinforcement learning, then progress to the main streams of value-based and policy-based methods. Our survey will cover central algorithms in deep reinforcement learning, including the deep Q-network, trust region policy optimisation, and asynchronous advantage actor-critic. In parallel, we highlight the unique advantages of deep neural networks, focusing on visual understanding via reinforcement learning. To conclude, we describe several current areas of research within the field.
We explore deep reinforcement learning methods for multi-agent domains. We begin by analyzing the difficulty of traditional algorithms in the multi-agent case: Q-learning is challenged by an inherent non-stationarity of the environment, while policy gradient suffers from a variance that increases as the number of agents grows. We then present an adaptation of actor-critic methods that considers action policies of other agents and is able to successfully learn policies that require complex multi-agent coordination. Additionally, we introduce a training regimen utilizing an ensemble of policies for each agent that leads to more robust multi-agent policies. We show the strength of our approach compared to existing methods in cooperative as well as competitive scenarios, where agent populations are able to discover various physical and informational coordination strategies.
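A minimal sketch of the centralised-critic / decentralised-actor structure this describes is shown below: each actor conditions only on its own observation, while the critic conditions on every agent's observation and action, which is what removes the non-stationarity seen by independent learners. Network sizes and names are illustrative assumptions.

```python
# Sketch of decentralised actors with a centralised critic.
import torch
import torch.nn as nn

class Actor(nn.Module):
    """Each agent's policy conditions only on its own observation."""
    def __init__(self, obs_dim, act_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(),
                                 nn.Linear(64, act_dim), nn.Tanh())

    def forward(self, obs):
        return self.net(obs)

class CentralisedCritic(nn.Module):
    """The critic sees every agent's observation and action during training."""
    def __init__(self, obs_dims, act_dims):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(sum(obs_dims) + sum(act_dims), 128),
                                 nn.ReLU(), nn.Linear(128, 1))

    def forward(self, all_obs, all_acts):   # lists of per-agent tensors
        return self.net(torch.cat(all_obs + all_acts, dim=-1))
```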
Multi-agent settings remain a fundamental challenge in the reinforcement learning (RL) domain due to partial observability and the lack of accurate real-time interactions across agents. In this paper, we propose a new method based on local communication learning to tackle the multi-agent RL (MARL) challenge in settings where a large number of agents coexist. First, we design a new communication protocol that exploits the ability of depthwise convolution to efficiently extract local relations and learn local communication between neighboring agents. To facilitate multi-agent coordination, we explicitly learn the effect of joint actions by taking the policies of neighboring agents as inputs. Second, we introduce the mean-field approximation into our method to reduce the scale of agent interactions. To more effectively coordinate the behaviors of neighboring agents, we enhance the mean-field approximation with a supervised policy rectification network (PRN) that rectifies real-time agent interactions and with a learnable compensation term that corrects the approximation bias. The proposed method enables efficient coordination and outperforms several baseline approaches on the adaptive traffic signal control (ATSC) task and the StarCraft II multi-agent challenge (SMAC).
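The mean-field approximation mentioned above can be sketched as follows: each agent's value function conditions on its own observation and on the average of its neighbours' (one-hot) actions rather than on every neighbour individually. The paper's policy rectification network and compensation term are not reproduced here; names and sizes are assumptions.

```python
# Sketch of a mean-field Q-function: condition on the neighbours' mean action.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MeanFieldQ(nn.Module):
    def __init__(self, obs_dim, n_actions):
        super().__init__()
        self.n_actions = n_actions
        self.net = nn.Sequential(nn.Linear(obs_dim + n_actions, 64), nn.ReLU(),
                                 nn.Linear(64, n_actions))

    def forward(self, obs, neighbour_actions):
        # neighbour_actions: (batch, n_neighbours) integer actions of nearby agents.
        mean_action = F.one_hot(neighbour_actions, self.n_actions).float().mean(dim=1)
        # Q(o, ., mean neighbour action), one value per own action.
        return self.net(torch.cat([obs, mean_action], dim=-1))
```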
Agent communication can significantly improve performance in multi-agent tasks that require coordination to reach a shared goal. Prior work has shown that agent communication protocols can be learned with multi-agent reinforcement learning and message-passing network architectures. However, these models use an unconstrained broadcast communication model, in which an agent communicates with all other agents at every step, even when the task does not require it. In real-world applications, where communication may be limited by system constraints such as bandwidth, power, and network capacity, it can be necessary to reduce the number of messages sent. In this work, we explore a simple method for minimising communication while maximising performance in multi-task learning: simultaneously optimising a task-specific objective and a communication penalty. We show that the objectives can be optimised using REINFORCE and the Gumbel-Softmax reparameterisation. We introduce two techniques for stabilising training: 50% training and message forwarding. Training with the communication penalty on only 50% of the episodes prevents our model from turning off its outgoing messages. Second, repeating previously received messages helps the model retain information and further improves performance. With these techniques, we show that we can reduce communication by 75% with no loss of performance.
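A minimal sketch of the combined objective, under the assumption that a per-step binary send/no-send gate is sampled with the Gumbel-Softmax reparameterisation, might look like the following; the penalty weight, the gate design, and the way the 50%-training trick is toggled are all illustrative rather than the paper's exact implementation.

```python
# Sketch of a task loss plus a differentiable communication penalty.
import torch
import torch.nn.functional as F

def comm_loss(task_loss, gate_logits, penalty_weight=0.01, apply_penalty=True):
    """gate_logits: (batch, 2) logits for [no-send, send] per agent/step."""
    gate = F.gumbel_softmax(gate_logits, tau=1.0, hard=True)  # one-hot, differentiable
    send_prob = gate[:, 1]                                    # 1.0 where a message is sent
    # '50% training': only apply the penalty on a subset of episodes so the model
    # does not learn to shut off its outgoing messages entirely.
    penalty = send_prob.mean() if apply_penalty else 0.0
    return task_loss + penalty_weight * penalty
```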
State-of-the-art multi-agent reinforcement learning (MARL) methods have provided promising solutions to a variety of complex problems. However, these methods all assume that agents perform synchronised primitive-action execution, so they are not truly scalable to long-horizon real-world multi-agent/multi-robot tasks, which inherently require agents/robots to asynchronously reason about high-level action selection at different times. The macro-action decentralised partially observable Markov decision process (MacDec-POMDP) is a general formalisation for asynchronous decision-making under uncertainty in fully cooperative multi-agent tasks. In this thesis, we first propose a set of value-based RL approaches for MacDec-POMDPs, in which agents are allowed to perform asynchronous learning and decision-making with macro-action-value functions under three paradigms: decentralised learning and control, centralised learning and control, and centralised training for decentralised execution (CTDE). Building on this work, we formulate a set of macro-action-based policy gradient algorithms under the three training paradigms, in which agents are allowed to directly optimise their parameterised policies in an asynchronous manner. We evaluate our methods in simulation and on real robots. Empirical results demonstrate the superiority of our approaches on large multi-agent problems and validate the effectiveness of our algorithms in learning high-quality and asynchronous solutions with macro-actions.
In artificial multi-agent systems, the ability to learn collaborative policies rests on the agents' communication skills: they must be able to encode the information received from the environment and learn how to share it with the other agents as required by the task at hand. We introduce a deep reinforcement learning approach, Connectivity Driven Communication (CDC), that facilitates the emergence of multi-agent collaborative behaviour purely through experience. Agents are modelled as nodes of a weighted graph whose state-dependent edges encode pairwise messages that can be exchanged. We introduce a graph-dependent attention mechanism that controls how an agent's incoming messages are weighted. This mechanism fully accounts for the current state of the system represented by the graph, and builds on a diffusion process that captures how information flows across the graph. The graph topology is not assumed to be known a priori, but depends dynamically on the agents' observations and is learned concurrently with the attention mechanism and the policy in an end-to-end fashion. Our empirical results show that CDC is able to learn effective collaborative policies and can outperform competing learning algorithms on cooperative navigation tasks.
Equilibrium selection in multi-agent games refers to the problem of selecting a Pareto-optimal equilibrium. It has been shown that many state-of-the-art multi-agent reinforcement learning (MARL) algorithms are prone to converging to Pareto-dominated equilibria, due to each agent's uncertainty during training about the policies of the other agents. To address sub-optimal equilibrium selection, we propose Pareto Actor-Critic (PAC), an actor-critic algorithm that utilises a simple principle of no-conflict games (a superset of cooperative games with identical rewards): each agent can assume that the others will choose actions that lead to a Pareto-optimal equilibrium. We evaluate PAC in a diverse set of multi-agent games and show that it converges to higher episodic returns compared to alternative MARL algorithms and successfully converges to a Pareto-optimal equilibrium in a range of matrix games. Finally, we propose a graph neural network extension that scales efficiently to games with up to 15 agents.
Multi-agent deep reinforcement learning (MARL) suffers from a lack of commonly-used evaluation tasks and criteria, making comparisons between approaches difficult. In this work, we provide a systematic evaluation and comparison of three different classes of MARL algorithms (independent learning, centralised multi-agent policy gradient, value decomposition) in a diverse range of cooperative multi-agent learning tasks. Our experiments serve as a reference for the expected performance of algorithms across different learning tasks, and we provide insights into the effectiveness of the different learning approaches. We open-source EPyMARL, which extends the PyMARL codebase to include additional algorithms and allows for flexible configuration of algorithm implementation details such as parameter sharing. Finally, we open-source two environments for multi-agent research that focus on coordination under sparse rewards.
In multi-agent reinforcement learning, communication is critical to encourage cooperation among agents. Communication in realistic wireless networks can be highly unreliable due to network conditions varying with agents' mobility and stochasticity in the transmission process. We propose a framework to learn practical communication strategies by addressing three fundamental questions: (1) When: agents learn the timing of communication based not only on message importance but also on wireless channel conditions. (2) What: agents augment message contents with wireless network measurements to better select the game and communication actions. (3) How: agents use a novel neural message encoder that preserves all the information from received messages, regardless of the number and order of messages. Simulating standard benchmarks under realistic wireless network settings, we achieve significant improvements in game performance, convergence speed, and communication efficiency compared with the state of the art.
Independent reinforcement learning algorithms have no theoretical guarantees of finding the best policy in multi-agent settings. However, in practice, prior works have reported good performance with independent algorithms in some domains and bad performance in others. Moreover, a comprehensive study of the strengths and weaknesses of independent algorithms is lacking in the literature. In this paper, we carry out an empirical comparison of the performance of independent algorithms on four PettingZoo environments that span the three main categories of multi-agent environments, i.e., cooperative, competitive, and mixed. We show that in fully-observable environments, independent algorithms can perform on par with multi-agent algorithms in cooperative and competitive settings. For the mixed environments, we show that agents trained via independent algorithms learn to perform well individually, but fail to learn to cooperate with allies and compete with enemies. We also show that adding recurrence improves the learning of independent algorithms in cooperative partially-observable environments.
In this work, we propose and evaluate a new reinforcement learning method, COMPact Experience Replay (COMPER), which uses temporal-difference learning with predicted target values based on recurrences of similar transitions, together with a new approach to experience replay based on two transition memories. Our objective is to reduce the experience required to train an agent with respect to long-run accumulated rewards. Its relevance to reinforcement learning lies in the small number of observations it needs to achieve results similar to those obtained by related methods in the literature, which usually require millions of video frames to train agents on Atari 2600 games. We report results of training trials on eight challenging Arcade Learning Environment (ALE) games, using only 100,000 frames and about 25,000 iterations. We also present results for a DQN agent under the same experimental protocol on the same games as a baseline. To verify its ability to approximate a good policy from a smaller number of observations, we further compare its results with those obtained from the millions of frames reported in the ALE benchmark.
Reinforcement learning has driven impressive advances in machine learning. Simultaneously, quantum-enhanced machine learning algorithms that use quantum annealing are undergoing heavy development. Recently, a multi-agent reinforcement learning (MARL) architecture combining the two paradigms has been proposed. This new algorithm, which utilises quantum Boltzmann machines (QBMs) for Q-value approximation, has outperformed regular deep reinforcement learning in terms of the time steps needed until convergence. However, the algorithm was restricted to single-agent and small 2x2 multi-agent grid domains. In this work, we propose an extension of the original concept to address more challenging problems. Similar to classic DQN, we add an experience replay buffer and use different networks for estimating the target and policy values. The experimental results show that learning becomes more stable and enables agents to find optimal policies in grid domains of higher complexity. Additionally, we evaluate how parameter sharing influences agent behaviour in multi-agent domains. Quantum sampling proves to be a promising method for reinforcement learning tasks, but is currently limited by the QPU size and therefore by the size of the inputs and Boltzmann machines.
We present the first deep learning model to successfully learn control policies directly from high-dimensional sensory input using reinforcement learning. The model is a convolutional neural network, trained with a variant of Q-learning, whose input is raw pixels and whose output is a value function estimating future rewards. We apply our method to seven Atari 2600 games from the Arcade Learning Environment, with no adjustment of the architecture or learning algorithm. We find that it outperforms all previous approaches on six of the games and surpasses a human expert on three of them.
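For reference, here is a minimal sketch of the Q-learning update behind this approach: the network regresses Q(s, a) towards the bootstrapped target r + gamma * max_a' Q(s', a'). Replay-memory handling and later refinements such as a separate target network are omitted; names are illustrative.

```python
# Sketch of the one-step Q-learning loss used to train a deep Q-network.
import torch
import torch.nn.functional as F

def dqn_loss(q_net, batch, gamma=0.99):
    s, a, r, s_next, done = batch                                  # tensors from replay
    q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)           # Q(s, a)
    with torch.no_grad():                                          # bootstrapped target
        target = r + gamma * (1 - done) * q_net(s_next).max(dim=1).values
    return F.mse_loss(q_sa, target)
```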
We develop a multi-agent reinforcement learning (MARL) method to learn scalable control policies for target tracking. Our method can handle an arbitrary number of pursuers and targets; we show results for tasks consisting of up to 1000 pursuers tracking 1000 targets. We use a decentralised, partially observable Markov decision process framework to model pursuers as agents receiving partial observations (range and bearing) about targets, which move using fixed, unknown policies. An attention mechanism is used to parameterise the value function of the agents; this mechanism allows us to handle an arbitrary number of targets. Entropy-regularised off-policy RL methods are used to train stochastic policies, and we discuss how this leads to hedging behaviour between pursuers that results in a weak form of cooperation in spite of completely decentralised control execution. We further develop a masking heuristic that allows training on smaller problems with few pursuers and targets and execution on much larger problems. Thorough simulation experiments, ablation studies, and comparisons to state-of-the-art algorithms are performed to study the scalability of the approach and the robustness of performance to varying numbers of agents and targets.
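The attention-based parameterisation can be sketched as a permutation-invariant pooling over a variable number of per-target features (e.g., range and bearing), producing a fixed-size embedding that a value head can consume. Dimensions, the learned query, and the masking scheme below are assumptions rather than the paper's exact design.

```python
# Sketch of attention pooling over an arbitrary number of target observations.
import torch
import torch.nn as nn

class AttentionPool(nn.Module):
    def __init__(self, feat_dim, embed_dim=64):
        super().__init__()
        self.key = nn.Linear(feat_dim, embed_dim)
        self.value = nn.Linear(feat_dim, embed_dim)
        self.query = nn.Parameter(torch.randn(embed_dim))   # learned, target-independent query

    def forward(self, target_feats, mask=None):
        # target_feats: (batch, n_targets, feat_dim); n_targets can vary per problem.
        scores = self.key(target_feats) @ self.query / self.query.numel() ** 0.5
        if mask is not None:                       # mask out padded / unseen targets
            scores = scores.masked_fill(~mask, float('-inf'))
        weights = torch.softmax(scores, dim=1).unsqueeze(-1)
        return (weights * self.value(target_feats)).sum(dim=1)   # fixed-size embedding
```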