We consider cooperative multi-agent reinforcement learning, with a focus on emergent communication in settings where multiple pairs of independent learners interact at varying frequencies. In this context, multiple distinct and incompatible languages can emerge. When an agent encounters a speaker of an alternative language, a period of adaptation is required before they can converse effectively. This adaptation results both in the emergence of a new language and in the forgetting of the previous one. In principle, this is an instance of the catastrophic forgetting problem, and it can be mitigated by enabling agents to learn and maintain multiple languages. We take inspiration from the continual learning literature and equip our agents with multi-headed neural networks, which enable them to be multilingual. Our method is empirically validated on a referential MNIST-based communication game and is shown to maintain multiple languages where existing approaches cannot.
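A minimal sketch of the multi-headed idea, assuming a shared encoder with one message head per language; the class name, layer sizes, and head-selection scheme are illustrative assumptions, not the paper's actual architecture:

```python
import torch
import torch.nn as nn

class MultiHeadSpeaker(nn.Module):
    """Shared encoder with one message head per language/partner (sketch)."""

    def __init__(self, obs_dim: int, vocab_size: int, n_heads: int, hidden: int = 128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        # One head per known language; a new partner gets a fresh head,
        # leaving previously learned heads (languages) untouched.
        self.heads = nn.ModuleList(
            nn.Linear(hidden, vocab_size) for _ in range(n_heads)
        )

    def forward(self, obs: torch.Tensor, head_id: int) -> torch.Tensor:
        # Select the head matching the current interaction partner.
        return self.heads[head_id](self.encoder(obs))  # message logits
```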
We consider the problem of multiple agents sensing and acting in environments with the goal of maximising their shared utility. In these environments, agents must learn communication protocols in order to share information that is needed to solve the tasks. By embracing deep neural networks, we are able to demonstrate end-to-end learning of protocols in complex environments inspired by communication riddles and multi-agent computer vision problems with partial observability. We propose two approaches for learning in these domains: Reinforced Inter-Agent Learning (RIAL) and Differentiable Inter-Agent Learning (DIAL). The former uses deep Q-learning, while the latter exploits the fact that, during learning, agents can backpropagate error derivatives through (noisy) communication channels. Hence, this approach uses centralised learning but decentralised execution. Our experiments introduce new environments for studying the learning of communication protocols and present a set of engineering innovations that are essential for success in these domains.
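A sketch of the differentiable channel that makes DIAL possible, modelled on its discretise/regularise unit: during centralised learning the message stays continuous and noisy so error derivatives can flow through it, while at execution it is thresholded to a discrete bit. The noise level here is an assumed value:

```python
import torch

def dial_channel(message_logits: torch.Tensor, training: bool, noise_std: float = 0.5) -> torch.Tensor:
    """Sketch of a DIAL-style discretise/regularise unit (DRU)."""
    if training:
        # Noise regularises the channel and pushes activations towards
        # saturation; gradients flow from listener back into speaker.
        return torch.sigmoid(message_logits + noise_std * torch.randn_like(message_logits))
    # Decentralised execution: hard threshold to a discrete bit.
    return (message_logits > 0).float()
```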
Cooperative multi-agent reinforcement learning (MARL) has achieved significant results, most notably by leveraging the representation-learning abilities of deep neural networks. However, large centralized approaches quickly become infeasible as the number of agents scales, and fully decentralized approaches can miss important opportunities for information sharing and coordination. Furthermore, not all agents are equal -- in some cases, individual agents may not even have the ability to send communication to other agents or explicitly model other agents. This paper considers the case where there is a single, powerful, \emph{central agent} that can observe the entire observation space, and there are multiple, low-powered \emph{local agents} that can only receive local observations and are not able to communicate with each other. The central agent's job is to learn what message needs to be sent to different local agents based on the global observations, not by centrally solving the entire problem and sending action commands, but by determining what additional information an individual agent should receive so that it can make a better decision. In this work we present our MARL algorithm \algo, describe where it would be most applicable, and implement it in the cooperative navigation and multi-agent walker domains. Empirical results show that 1) learned communication does indeed improve system performance, 2) results generalize to heterogeneous local agents, and 3) results generalize to different reward structures.
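A hedged sketch of the central-agent architecture as described: a network that maps the global observation to one message per local agent. All names and layer sizes are assumptions:

```python
import torch
import torch.nn as nn

class CentralMessenger(nn.Module):
    """Central agent that sends information, not action commands (sketch)."""

    def __init__(self, global_obs_dim: int, msg_dim: int, n_local_agents: int):
        super().__init__()
        self.msg_dim, self.n = msg_dim, n_local_agents
        self.net = nn.Sequential(
            nn.Linear(global_obs_dim, 256), nn.ReLU(),
            nn.Linear(256, msg_dim * n_local_agents),
        )

    def forward(self, global_obs: torch.Tensor) -> torch.Tensor:
        # One msg_dim-sized message per local agent; each local policy then
        # conditions only on (local observation, received message).
        return self.net(global_obs).view(-1, self.n, self.msg_dim)
```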
The ad hoc teamwork problem describes situations in which an agent must collaborate with previously unseen agents in order to achieve a common goal. For an agent to succeed in these scenarios, it must have suitable cooperative skills. Cooperative skills can be engineered into an agent's behaviour using domain knowledge; in complex domains, however, domain knowledge may be unavailable. It is therefore worth exploring how to learn cooperative skills directly from data. In this work, we apply a meta-reinforcement learning (meta-RL) formulation in the context of the ad hoc teamwork problem. Our empirical results show that this approach can produce robust cooperators in two cooperative environments with differing cooperation requirements: social deliberation and language interpretation. (This is the full version of a previously published extended abstract.)
Modelling the behaviour of other agents is essential for understanding how agents interact and for making effective decisions. Existing methods for agent modelling commonly assume knowledge of the local observations and chosen actions of the modelled agents during execution. To eliminate this assumption, we extract representations from the local information of the controlled agent using an encoder-decoder architecture. Using the observations and actions of the modelled agents during training, our model learns to extract representations about the modelled agents conditioned only on the local observations of the controlled agent. These representations are used to augment the controlled agent's decision policy, which is trained via deep reinforcement learning; thus, during execution, the policy does not require access to other agents' information. We provide a comprehensive evaluation and ablation studies in cooperative, competitive, and mixed multi-agent environments, showing that our method achieves higher returns than baseline methods that do not use the learned representations.
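A rough sketch of the encoder-decoder idea, assuming a recurrent encoder over the controlled agent's local trajectory and linear decoders used only at training time; module choices and sizes are illustrative:

```python
import torch
import torch.nn as nn

class AgentModel(nn.Module):
    """Encoder sees only local information; decoders (training only)
    reconstruct the modelled agents' observations and actions, forcing
    the embedding z to carry information about them. At execution, only
    the encoder runs and z augments the controlled agent's policy input."""

    def __init__(self, local_dim: int, other_obs_dim: int, other_act_dim: int, z_dim: int = 64):
        super().__init__()
        self.encoder = nn.GRU(local_dim, z_dim, batch_first=True)
        self.obs_decoder = nn.Linear(z_dim, other_obs_dim)
        self.act_decoder = nn.Linear(z_dim, other_act_dim)

    def forward(self, local_traj: torch.Tensor):
        _, h = self.encoder(local_traj)   # local trajectory only
        z = h[-1]                         # embedding of the other agents
        return z, self.obs_decoder(z), self.act_decoder(z)
```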
We explore deep reinforcement learning methods for multi-agent domains. We begin by analyzing the difficulty of traditional algorithms in the multi-agent case: Q-learning is challenged by an inherent non-stationarity of the environment, while policy gradient suffers from a variance that increases as the number of agents grows. We then present an adaptation of actor-critic methods that considers action policies of other agents and is able to successfully learn policies that require complex multi-agent coordination. Additionally, we introduce a training regimen utilizing an ensemble of policies for each agent that leads to more robust multi-agent policies. We show the strength of our approach compared to existing methods in cooperative as well as competitive scenarios, where agent populations are able to discover various physical and informational coordination strategies.
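A minimal sketch of the centralised-critic construction this family of methods relies on: during training, each agent's critic conditions on all agents' observations and actions, while actors stay decentralised. Layer sizes are assumptions:

```python
import torch
import torch.nn as nn

class CentralisedCritic(nn.Module):
    """Per-agent Q-function over the joint observation-action (sketch)."""

    def __init__(self, obs_dims, act_dims, hidden: int = 128):
        super().__init__()
        in_dim = sum(obs_dims) + sum(act_dims)
        self.q = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def forward(self, all_obs, all_acts):
        # Conditioning on everyone's actions removes the non-stationarity
        # each agent would otherwise see in the others' changing policies.
        return self.q(torch.cat(list(all_obs) + list(all_acts), dim=-1))
```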
Progress in continual reinforcement learning has been limited due to several barriers to entry: missing code, high compute requirements, and a lack of suitable benchmarks. In this work, we present CORA, a platform for Continual Reinforcement Learning Agents that provides benchmarks, baselines, and metrics in a single code package. The benchmarks we provide are designed to evaluate different aspects of the continual RL challenge, such as catastrophic forgetting, plasticity, ability to generalize, and sample-efficient learning. Three of the benchmarks utilize video game environments (Atari, Procgen, NetHack). The fourth benchmark, CHORES, consists of four different task sequences in a visually realistic home simulator, drawn from a diverse set of task and scene parameters. To compare continual RL methods on these benchmarks, we prepare three metrics in CORA: Continual Evaluation, Isolated Forgetting, and Zero-Shot Forward Transfer. Finally, CORA includes a set of performant, open-source baselines of existing algorithms for researchers to use and expand on. We release CORA and hope that the continual RL community can benefit from our contributions, to accelerate the development of new continual RL algorithms.
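For intuition, one plausible formalization of a forgetting metric over a task-by-task evaluation matrix; this is an illustrative assumption, not necessarily CORA's exact definition of Isolated Forgetting:

```python
import numpy as np

def forgetting(perf: np.ndarray) -> np.ndarray:
    """perf[i, j] = evaluation return on task j after finishing training
    on task i. Forgetting on task j is the drop from its best past
    performance to its final performance (NOT necessarily CORA's metric)."""
    final = perf[-1]          # performance after the last task
    best = perf.max(axis=0)   # best performance ever reached per task
    return best - final       # > 0 means the task was forgotten
```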
Independent reinforcement learning algorithms have no theoretical guarantees of finding the optimal policy in multi-agent settings. In practice, however, prior works have reported good performance of independent algorithms in some domains and poor performance in others. Moreover, a comprehensive study of the strengths and weaknesses of independent algorithms is lacking in the literature. In this paper, we carry out an empirical comparison of the performance of independent algorithms on four PettingZoo environments that span the three main categories of multi-agent environments, i.e., cooperative, competitive, and mixed. We show that in fully observable environments, independent algorithms can perform on par with multi-agent algorithms in cooperative and competitive settings. For mixed environments, we show that agents trained via independent algorithms learn to perform well individually, but fail to learn to cooperate with allies and compete with enemies. We also show that adding recurrence improves the learning of independent algorithms in cooperative partially observable environments.
In general-sum games, the interaction of self-interested learning agents commonly leads to collectively worst-case outcomes, such as defect-defect in the iterated prisoner's dilemma (IPD). To overcome this, some methods, such as Learning with Opponent-Learning Awareness (LOLA), shape their opponents' learning process. However, these methods are myopic since only a small number of steps can be anticipated, are asymmetric since they treat other agents as naive learners, and require the use of higher-order derivatives, which are calculated through white-box access to an opponent's differentiable learning algorithm. To address these issues, we propose Model-Free Opponent Shaping (M-FOS). M-FOS learns in a meta-game in which each meta-step is an episode of the underlying inner game. The meta-state consists of the inner policies, and the meta-policy produces a new inner policy to be used in the next episode. M-FOS then uses generic model-free optimisation methods to learn meta-policies that accomplish long-horizon opponent shaping. Empirically, M-FOS near-optimally exploits naive learners and other, more sophisticated algorithms from the literature. For example, to the best of our knowledge, it is the first method to learn the well-known Zero-Determinant (ZD) extortion strategy in the IPD. In the same settings, M-FOS leads to socially optimal outcomes under meta-self-play. Finally, we show that M-FOS can be scaled to high-dimensional settings.
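A toy, runnable illustration of the meta-game structure, under heavy simplifying assumptions (one-shot IPD payoffs, a naive gradient-ascent opponent, a linear meta-policy trained by random search); it shows the loop in which each meta-step is an inner episode, not M-FOS itself:

```python
import numpy as np

rng = np.random.default_rng(0)

def ipd_payoff(p, q):
    # Expected one-shot IPD payoff for cooperate-prob p vs q; (R,S,T,P)=(3,0,5,1).
    return 3*p*q + 5*(1 - p)*q + (1 - p)*(1 - q)

def meta_episode(theta, steps=50, opp_lr=0.05):
    # Each meta-step plays one inner episode; afterwards the naive
    # opponent gradient-ascends its own payoff (dU(q, p)/dq = -p - 1 here).
    p, q, total = 0.5, 0.5, 0.0
    for _ in range(steps):
        # Meta-policy: meta-state (own policy, opponent policy) -> next inner policy.
        p = 1.0 / (1.0 + np.exp(-(theta[0]*p + theta[1]*q + theta[2])))
        total += ipd_payoff(p, q)
        q = float(np.clip(q + opp_lr * (-p - 1.0), 0.0, 1.0))
    return total

# Model-free optimisation of the meta-policy (simple random search stands
# in for the generic optimiser the paper's framing permits).
theta, best = np.zeros(3), -np.inf
for _ in range(300):
    cand = theta + 0.3 * rng.standard_normal(3)
    score = meta_episode(cand)
    if score > best:
        theta, best = cand, score
```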
While multi-agent reinforcement learning has been used as an effective means to study emergent communication between agents, existing work has focused almost exclusively on communication with discrete symbols. Human communication often takes place (and emerged) over a continuous acoustic channel; human infants acquire language through continuous signalling with their caregivers. We therefore ask: can we observe emergent language between agents with a continuous communication channel, trained through reinforcement learning? And if so, what is the impact of channel characteristics on the emerging language? We propose an environment and training methodology to serve as a means for an initial exploration of these questions. We use a simple messaging environment in which a "speaker" agent needs to convey a concept to a "listener". The speaker is equipped with a vocoder that maps symbols to a continuous waveform, this is passed over a lossy continuous channel, and the listener needs to map the continuous signal to the concept. Using deep Q-learning, we show that basic compositionality emerges in the learned language representations. We find that noise in the communication channel is essential when conveying unseen concept combinations. We also show that we can ground the emergent communication by introducing a caregiver predisposed to "hearing" or "speaking" English. Finally, we describe how our platform serves as a starting point for future work that uses a combination of deep reinforcement learning and multi-agent systems to study our questions of continuous signalling in language learning and emergence.
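A toy sketch of the speaker-to-channel pipeline described, assuming a learned symbol-to-waveform mapping and additive Gaussian noise as the lossy channel; all names and sizes are assumptions:

```python
import torch
import torch.nn as nn

class Vocoder(nn.Module):
    """Symbol -> continuous waveform -> noisy channel (sketch)."""

    def __init__(self, vocab_size: int, wave_len: int = 64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, wave_len)  # symbol -> waveform

    def forward(self, symbol: torch.Tensor, noise_std: float = 0.1) -> torch.Tensor:
        # symbol: LongTensor of token ids chosen by the speaker.
        wave = torch.tanh(self.embed(symbol))             # bounded "audio"
        return wave + noise_std * torch.randn_like(wave)  # lossy channel

# The listener would map the received waveform back to a concept, e.g.
# nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, n_concepts)).
```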
Inter-agent communication can significantly improve performance in multi-agent tasks that require coordination to achieve a shared goal. Prior work has shown that agent communication protocols can be learned using multi-agent reinforcement learning together with message-passing network architectures. However, these models use an unconstrained broadcast communication model, in which an agent communicates with all other agents at every step, even when the task does not require it. In real-world applications, where communication may be limited by system constraints such as bandwidth, power, and network capacity, it may be desirable to reduce the number of messages sent. In this work, we explore a simple method for minimizing communication while maximizing performance in multi-task learning: simultaneously optimizing a task-specific objective and a communication penalty. We show that the objectives can be optimized using REINFORCE and the Gumbel-Softmax reparameterization. We introduce two techniques for stabilizing training: 50% training and message forwarding. Training with the communication penalty on only 50% of the episodes prevents our models from turning off outgoing messages. Second, repeating messages received previously helps models retain information and further improves performance. With these techniques, we show that we can reduce communication by 75% with no loss of performance.
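A sketch of the penalised objective under stated assumptions: a straight-through Gumbel-Softmax gate decides whether each message is sent, the penalty charges per message actually sent, and an `apply_penalty` flag (a hypothetical name) mimics the 50%-training trick:

```python
import torch
import torch.nn.functional as F

def comm_loss(task_loss: torch.Tensor, gate_logits: torch.Tensor,
              penalty_weight: float = 0.01, apply_penalty: bool = True) -> torch.Tensor:
    """Task objective plus a per-message communication penalty (sketch)."""
    # gate_logits: (..., 2) logits over {no-send, send} per message slot.
    gate = F.gumbel_softmax(gate_logits, tau=1.0, hard=True)  # differentiable
    sent = gate[..., 1]                                       # 1 = message sent
    # '50% training': only apply the penalty on half of the episodes so
    # the model does not learn to go completely silent.
    penalty = sent.sum() if apply_penalty else 0.0
    return task_loss + penalty_weight * penalty
```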
Ad hoc teamwork is the research problem of designing agents that can collaborate with new teammates without prior coordination. This survey makes a two-fold contribution: first, it provides a structured description of the different facets of the ad hoc teamwork problem. Second, it discusses the progress that has been made in the field so far, and identifies the immediate and long-term open problems that need to be addressed in ad hoc teamwork.
We study the problem of multiple agents learning concurrently in a multi-objective environment. Specifically, we consider two agents repeatedly playing a multi-objective normal-form game. In such games, the payoffs resulting from joint actions are vector-valued. Taking a utility-based approach, we assume a utility function exists that maps vectors to scalar utilities, and we consider agents that aim to maximise the utility of expected payoff vectors. As agents do not necessarily know their opponent's utility function or strategy, they must learn optimal policies for interacting with each other. To aid agents in arriving at adequate solutions, we introduce four novel preference communication protocols for both cooperative and self-interested communication. Each approach describes a specific protocol by which agents communicate preferences over their actions and how another agent responds. These protocols are subsequently evaluated on a set of five benchmark games against baseline agents that do not communicate. We find that preference communication can drastically alter the learning process and lead to the emergence of cyclic Nash equilibria that had not been previously observed in this setting. In addition, we introduce a communication scheme in which agents must learn when to communicate. For agents in games with Nash equilibria, we find that communication can be beneficial, but that it is difficult to know when it is needed when agents have different preferred equilibria; when this is not the case, agents become indifferent to communication. In games without Nash equilibria, our results show differences across learning rates. With faster learners, we observe that explicit communication becomes more prevalent, at around 50% of the time, as it helps them learn a compromise joint policy. Slower learners retain this pattern to a lesser degree, but show increased indifference.
Reinforcement learning in multi-agent scenarios is important for real-world applications but presents challenges beyond those seen in single-agent settings. We present an actor-critic algorithm that trains decentralized policies in multi-agent settings, using centrally computed critics that share an attention mechanism which selects relevant information for each agent at every timestep. This attention mechanism enables more effective and scalable learning in complex multi-agent environments, when compared to recent approaches. Our approach is applicable not only to cooperative settings with shared rewards, but also individualized reward settings, including adversarial settings, as well as settings that do not provide global states, and it makes no assumptions about the action spaces of the agents. As such, it is flexible enough to be applied to most multi-agent learning problems.
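A single-head sketch of an attention critic in this spirit: agent i's critic forms a query and attends over the other agents' encoded observation-action pairs. Dimensions and the single-head simplification are assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionCritic(nn.Module):
    """Critic with attention over the other agents (sketch)."""

    def __init__(self, in_dim: int, d: int = 64):
        super().__init__()
        self.encode = nn.Linear(in_dim, d)   # per-agent (obs, act) encoder
        self.query = nn.Linear(d, d, bias=False)
        self.key = nn.Linear(d, d, bias=False)
        self.value = nn.Linear(d, d, bias=False)
        self.q_head = nn.Linear(2 * d, 1)

    def forward(self, obs_act: torch.Tensor, i: int) -> torch.Tensor:
        # obs_act: (batch, n_agents, in_dim); estimate Q for agent i.
        e = F.relu(self.encode(obs_act))
        q_i = self.query(e[:, i])                            # (batch, d)
        others = torch.cat([e[:, :i], e[:, i + 1:]], dim=1)  # drop agent i
        k, v = self.key(others), self.value(others)
        attn = F.softmax((k @ q_i.unsqueeze(-1)).squeeze(-1) / k.shape[-1] ** 0.5, dim=-1)
        ctx = (attn.unsqueeze(-1) * v).sum(dim=1)            # selected info
        return self.q_head(torch.cat([e[:, i], ctx], dim=-1))
```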
Recent advances in multi-agent reinforcement learning (MARL) provide a variety of tools that support an agent's ability to adapt to unexpected changes in its environment, and to cope with the dynamic nature of that environment, which may be intensified by the presence of other agents. In this work, we highlight the relationship between a group's ability to collaborate effectively and the group's resilience, which we measure as the group's ability to adapt to perturbations in the environment. To promote resilience, we suggest facilitating collaboration via a novel confusion-based communication protocol, in which agents broadcast observations that are misaligned with their previous experience. We allow decisions regarding the breadth and frequency of the communicated information to be learned autonomously by the agents, with communication triggered so as to reduce confusion. We present an empirical evaluation of our approach in a variety of MARL settings.
Designing an effective communication mechanism among agents in reinforcement learning has been a challenging task, especially for real-world applications. The number of agents can grow, and in real-world scenarios the environment sometimes requires interaction with a changing number of agents. To this end, a multi-agent framework needs to handle various scenarios of agents, in terms of both scale and dynamics, in order to be practical for real-world applications. We formulate multi-agent environments with different numbers of agents as a multi-tasking problem and propose a meta-reinforcement learning (meta-RL) framework to address it. The proposed framework employs a meta-learned Communication Pattern Recognition (CPR) module to identify communication behaviour and extract information that facilitates the training process. Experimental results demonstrate that the proposed framework (a) generalizes to an unseen, larger number of agents and (b) allows the number of agents to change between episodes. An ablation study is also provided to justify the proposed CPR design and show that the design is effective.
In many real-world settings, a team of agents must coordinate their behaviour while acting in a decentralised way. At the same time, it is often possible to train the agents in a centralised fashion in a simulated or laboratory setting, where global state information is available and communication constraints are lifted. Learning joint action-values conditioned on extra state information is an attractive way to exploit centralised learning, but the best strategy for then extracting decentralised policies is unclear. Our solution is QMIX, a novel value-based method that can train decentralised policies in a centralised end-to-end fashion. QMIX employs a network that estimates joint action-values as a complex non-linear combination of per-agent values that condition only on local observations. We structurally enforce that the joint-action value is monotonic in the per-agent values, which allows tractable maximisation of the joint action-value in off-policy learning, and guarantees consistency between the centralised and decentralised policies. We evaluate QMIX on a challenging set of StarCraft II micromanagement tasks, and show that QMIX significantly outperforms existing value-based multi-agent reinforcement learning methods.
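A simplified sketch of the monotonic mixing idea: hypernetworks conditioned on the state produce mixing weights forced non-negative via an absolute value, so Q_tot is monotone in each per-agent value and the joint argmax decentralises. This is a one-hidden-layer reading of the mixer, not the paper's exact architecture:

```python
import torch
import torch.nn as nn

class QMixer(nn.Module):
    """State-conditioned monotonic mixing of per-agent Q-values (sketch)."""

    def __init__(self, n_agents: int, state_dim: int, embed: int = 32):
        super().__init__()
        self.w1 = nn.Linear(state_dim, n_agents * embed)  # hypernetwork
        self.b1 = nn.Linear(state_dim, embed)
        self.w2 = nn.Linear(state_dim, embed)
        self.b2 = nn.Linear(state_dim, 1)
        self.embed = embed

    def forward(self, agent_qs: torch.Tensor, state: torch.Tensor) -> torch.Tensor:
        # agent_qs: (batch, n_agents); state: (batch, state_dim).
        # abs() keeps mixing weights non-negative -> monotonic Q_tot.
        w1 = self.w1(state).abs().view(-1, agent_qs.shape[1], self.embed)
        h = torch.relu(agent_qs.unsqueeze(1) @ w1 + self.b1(state).unsqueeze(1))
        w2 = self.w2(state).abs().unsqueeze(-1)           # (batch, embed, 1)
        return (h @ w2).squeeze(1) + self.b2(state)       # Q_tot: (batch, 1)
```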
Lifelong learning aims to create AI systems that continuously and incrementally learn during a lifetime, similar to biological learning. Attempts so far have met problems, including catastrophic forgetting, interference among tasks, and the inability to exploit previous knowledge. While considerable research has focused on learning multiple input distributions, typically in classification, lifelong reinforcement learning (LRL) must also deal with variations in the state and transition distributions, and in the reward functions. Modulating masks, recently developed for classification, are particularly suitable to deal with such a large spectrum of task variations. In this paper, we adapted modulating masks to work with deep LRL, specifically PPO and IMPALA agents. The comparison with LRL baselines in both discrete and continuous RL tasks shows competitive performance. We further investigated the use of a linear combination of previously learned masks to exploit previous knowledge when learning new tasks: not only is learning faster, the algorithm solves tasks that we could not otherwise solve from scratch due to extremely sparse rewards. The results suggest that RL with modulating masks is a promising approach to lifelong learning, to the composition of knowledge to learn increasingly complex tasks, and to knowledge reuse for efficient and faster learning.
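A sketch of a modulating-mask layer under assumed details: frozen backbone weights, a per-task score matrix thresholded into a binary mask, and an optional linear combination of earlier tasks' scores for knowledge reuse when starting a new task:

```python
import torch
import torch.nn as nn

class MaskedLinear(nn.Module):
    """Frozen weights modulated by per-task binary masks (sketch)."""

    def __init__(self, in_f: int, out_f: int, n_tasks: int):
        super().__init__()
        # Backbone weights stay frozen; only mask scores are learned.
        self.weight = nn.Parameter(torch.randn(out_f, in_f), requires_grad=False)
        self.scores = nn.Parameter(torch.randn(n_tasks, out_f, in_f))

    def forward(self, x: torch.Tensor, task: int, combine: torch.Tensor = None) -> torch.Tensor:
        # combine: optional weights over all tasks' scores, to seed a new
        # task from previously learned masks.
        s = self.scores[task] if combine is None else \
            torch.einsum('t,toi->oi', combine, self.scores)
        # Hard threshold; real implementations typically use a
        # straight-through estimator so the scores still receive gradient.
        mask = (s > 0).float()
        return x @ (self.weight * mask).t()
```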
Safety is the primary concern when it comes to air traffic. In-flight safety between Unmanned Aerial Vehicles (UAVs) is ensured through pairwise separation minima, utilising conflict detection and resolution methods. Existing methods mainly deal with pairwise conflicts, but due to the expected increase in traffic density, encounters with more than two UAVs are likely to happen. In this paper, we model multi-UAV conflict resolution as a multi-agent reinforcement learning problem. We implement an algorithm based on graph neural networks in which cooperating agents can communicate to jointly generate resolution actions. The model is evaluated in scenarios with 3 and 4 present agents. Results show that agents are able to successfully solve the multi-UAV conflicts through a cooperative strategy.
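A sketch of one round of graph message passing of the kind described, with UAVs as nodes and conflicting pairs as edges; the aggregation rule and layer sizes are assumptions:

```python
import torch
import torch.nn as nn

class ConflictGNN(nn.Module):
    """One message-passing round over the conflict graph (sketch)."""

    def __init__(self, obs_dim: int, n_actions: int, d: int = 64):
        super().__init__()
        self.node = nn.Linear(obs_dim, d)
        self.msg = nn.Linear(2 * d, d)
        self.policy = nn.Linear(2 * d, n_actions)

    def forward(self, obs: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # obs: (n_uavs, obs_dim); adj: (n_uavs, n_uavs) 0/1 conflict graph.
        h = torch.relu(self.node(obs))
        n = h.shape[0]
        # pair[i, j] = [h_i, h_j]: message from UAV j to UAV i.
        pair = torch.cat([h.unsqueeze(1).expand(n, n, -1),
                          h.unsqueeze(0).expand(n, n, -1)], dim=-1)
        m = (adj.unsqueeze(-1) * torch.relu(self.msg(pair))).sum(dim=1)
        return self.policy(torch.cat([h, m], dim=-1))  # per-UAV action logits
```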
Neural agents trained in reinforcement learning settings can learn to communicate among themselves via discrete tokens, accomplishing as a team what agents would be unable to do alone. However, the current standard of using one-hot vectors as discrete communication tokens prevents agents from acquiring more desirable aspects of communication such as zero-shot understanding. Inspired by word embedding techniques from natural language processing, we propose neural agent architectures that enable them to communicate via discrete tokens derived from a learned, continuous space. We show in a decision-theoretic framework that our technique optimizes communication over a wide range of scenarios, whereas one-hot tokens are only optimal under restrictive assumptions. In self-play experiments, we validate that our trained agents learn to cluster tokens in semantically meaningful ways, allowing them to communicate in noisy environments where other techniques fail. Lastly, we demonstrate both that agents using our method can effectively respond to novel human communication, and that humans can understand unlabeled emergent agent communication, outperforming the use of one-hot communication.
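A sketch of discretisation in a learned continuous space, here via a VQ-style nearest-neighbour snap to a learned codebook with a straight-through gradient; the paper's exact mechanism may differ:

```python
import torch
import torch.nn as nn

class EmbeddingTokens(nn.Module):
    """Discrete tokens as points in a learned continuous space (sketch).
    Nearby codebook vectors get related meanings, which is what a one-hot
    vocabulary cannot provide."""

    def __init__(self, n_tokens: int, d: int):
        super().__init__()
        self.codebook = nn.Parameter(torch.randn(n_tokens, d))

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        # z: (batch, d) continuous message proposed by the speaker.
        dist = torch.cdist(z, self.codebook)          # (batch, n_tokens)
        tok = self.codebook[dist.argmin(dim=-1)]      # nearest discrete token
        # Straight-through: forward value is the token, gradient flows to z.
        # (Real VQ setups add commitment losses to train the codebook.)
        return z + (tok - z).detach()
```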