Designing effective communication mechanisms among agents in reinforcement learning has been a challenging task, especially for real-world applications, where the number of agents can grow or the environment sometimes requires interacting with a changing number of agents. For this reason, a multi-agent framework needs to handle various scenarios of agents, in terms of both scale and dynamics, to be practical for real-world applications. We formulate the multi-agent environment with a changing number of agents as a multi-task problem and propose a meta reinforcement learning (meta-RL) framework to tackle it. The proposed framework employs a meta-learned Communication Pattern Recognition (CPR) module to identify communication behavior and extract information that facilitates the training process. Experimental results show that the proposed framework (a) generalizes to an unseen larger number of agents and (b) allows the number of agents to change between episodes. Ablation studies are also provided to reason about the proposed CPR design and show that such a design is effective.
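The abstract does not specify how the framework stays agnostic to the number of agents, but a standard building block for this is permutation-invariant pooling over per-agent encodings with a padding mask; below is a minimal PyTorch sketch under that assumption (all class and parameter names are illustrative, not the paper's):

```python
import torch
import torch.nn as nn

class AgentCountInvariantEncoder(nn.Module):
    """Encodes a variable number of per-agent inputs into a fixed-size
    summary via masked mean pooling, so the same network works when the
    number of agents changes between episodes."""
    def __init__(self, obs_dim: int, hidden_dim: int = 64):
        super().__init__()
        self.embed = nn.Sequential(nn.Linear(obs_dim, hidden_dim), nn.ReLU())

    def forward(self, obs: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
        # obs:  (batch, max_agents, obs_dim), zero-padded
        # mask: (batch, max_agents), 1 for real agents, 0 for padding
        h = self.embed(obs)                      # (B, N, H)
        mask = mask.unsqueeze(-1)                # (B, N, 1)
        summed = (h * mask).sum(dim=1)           # ignore padded slots
        count = mask.sum(dim=1).clamp(min=1.0)   # avoid divide-by-zero
        return summed / count                    # (B, H)

# Two batches with different numbers of live agents share one network.
enc = AgentCountInvariantEncoder(obs_dim=8)
obs = torch.randn(2, 5, 8)
mask = torch.tensor([[1, 1, 1, 0, 0], [1, 1, 1, 1, 1]], dtype=torch.float32)
print(enc(obs, mask).shape)  # torch.Size([2, 64])
```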
Reinforcement learning in multi-agent scenarios is important for real-world applications but presents challenges beyond those seen in single-agent settings. We present an actor-critic algorithm that trains decentralized policies in multi-agent settings, using centrally computed critics that share an attention mechanism which selects relevant information for each agent at every timestep. This attention mechanism enables more effective and scalable learning in complex multi-agent environments when compared to recent approaches. Our approach is applicable not only to cooperative settings with shared rewards, but also to individualized reward settings, including adversarial settings, as well as settings that do not provide global states, and it makes no assumptions about the action spaces of the agents. As such, it is flexible enough to be applied to most multi-agent learning problems.
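A minimal sketch of the kind of shared attention step such critics use, assuming single-head scaled dot-product attention over the other agents' encoded observation-action pairs (dimensions and names are illustrative, not the paper's exact architecture):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionCritic(nn.Module):
    """Each agent's critic queries an attention mechanism over the other
    agents' encoded observation-action pairs to select relevant information."""
    def __init__(self, enc_dim: int, attend_dim: int = 32):
        super().__init__()
        self.query = nn.Linear(enc_dim, attend_dim, bias=False)
        self.key = nn.Linear(enc_dim, attend_dim, bias=False)
        self.value = nn.Linear(enc_dim, attend_dim, bias=False)
        self.q_head = nn.Linear(enc_dim + attend_dim, 1)

    def forward(self, e_i: torch.Tensor, e_others: torch.Tensor) -> torch.Tensor:
        # e_i: (B, enc_dim) encoding of agent i's obs-action pair
        # e_others: (B, N-1, enc_dim) encodings of all other agents
        q = self.query(e_i).unsqueeze(1)                # (B, 1, A)
        k = self.key(e_others)                          # (B, N-1, A)
        v = self.value(e_others)                        # (B, N-1, A)
        scores = (q * k).sum(-1) / k.shape[-1] ** 0.5   # (B, N-1)
        attn = F.softmax(scores, dim=-1).unsqueeze(-1)  # (B, N-1, 1)
        x_i = (attn * v).sum(dim=1)                     # weighted sum of others
        return self.q_head(torch.cat([e_i, x_i], dim=-1))  # scalar Q for agent i
```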
Cooperative multi-agent reinforcement learning (MARL) has achieved significant results, most notably by leveraging the representation-learning abilities of deep neural networks. However, large centralized approaches quickly become infeasible as the number of agents scales, and fully decentralized approaches can miss important opportunities for information sharing and coordination. Furthermore, not all agents are equal -- in some cases, individual agents may not even have the ability to send communication to other agents or explicitly model other agents. This paper considers the case where there is a single, powerful central agent that can observe the entire observation space, and there are multiple, low-powered local agents that can only receive local observations and are not able to communicate with each other. The central agent's job is to learn what message needs to be sent to different local agents based on the global observations, not by centrally solving the entire problem and sending action commands, but by determining what additional information an individual agent should receive so that it can make a better decision. In this work we present our MARL algorithm \algo, describe where it would be most applicable, and implement it in the cooperative navigation and multi-agent walker domains. Empirical results show that 1) learned communication does indeed improve system performance, 2) results generalize to heterogeneous local agents, and 3) results generalize to different reward structures.
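A rough sketch of the described division of labor, assuming the simplest possible realization: a central network emits one message vector per local agent, and each local policy acts on its own observation plus its message (all names and sizes are illustrative):

```python
import torch
import torch.nn as nn

class CentralMessenger(nn.Module):
    """Maps the global observation to one message vector per local agent,
    rather than solving the task centrally and issuing action commands."""
    def __init__(self, global_dim: int, n_agents: int, msg_dim: int = 16):
        super().__init__()
        self.n_agents, self.msg_dim = n_agents, msg_dim
        self.net = nn.Sequential(
            nn.Linear(global_dim, 128), nn.ReLU(),
            nn.Linear(128, n_agents * msg_dim),
        )

    def forward(self, global_obs: torch.Tensor) -> torch.Tensor:
        return self.net(global_obs).view(-1, self.n_agents, self.msg_dim)

class LocalPolicy(nn.Module):
    """A low-powered local agent acts on its own observation plus the
    message it received from the central agent."""
    def __init__(self, obs_dim: int, msg_dim: int, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + msg_dim, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, obs: torch.Tensor, msg: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([obs, msg], dim=-1))  # action logits
```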
Deep reinforcement learning algorithms require large amounts of experience to learn an individual task. While in principle meta-reinforcement learning (meta-RL) algorithms enable agents to learn new skills from small amounts of experience, several major challenges preclude their practicality. Current methods rely heavily on on-policy experience, limiting their sample efficiency. They also lack mechanisms to reason about task uncertainty when adapting to new tasks, limiting their effectiveness in sparse reward problems. In this paper, we address these challenges by developing an off-policy meta-RL algorithm that disentangles task inference and control. In our approach, we perform online probabilistic filtering of latent task variables to infer how to solve a new task from small amounts of experience. This probabilistic interpretation enables posterior sampling for structured and efficient exploration. We demonstrate how to integrate these task variables with off-policy RL algorithms to achieve both meta-training and adaptation efficiency. Our method outperforms prior algorithms in sample efficiency by 20-100X as well as in asymptotic performance on several meta-RL benchmarks.
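A minimal sketch of the probabilistic task-inference idea: an encoder pools a context of transitions into a Gaussian posterior over a latent task variable, whose samples condition the policy and whose KL to a prior regularizes training. This is a simplified stand-in for the paper's filtering machinery; names and dimensions are illustrative:

```python
import torch
import torch.nn as nn

class TaskInference(nn.Module):
    """Amortized inference of a latent task variable z from a context of
    transitions; sampling z from the posterior drives structured exploration."""
    def __init__(self, transition_dim: int, latent_dim: int = 5):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(transition_dim, 64), nn.ReLU())
        self.mu = nn.Linear(64, latent_dim)
        self.log_std = nn.Linear(64, latent_dim)

    def posterior(self, context: torch.Tensor) -> torch.distributions.Normal:
        # context: (num_transitions, transition_dim) from the current task
        h = self.encoder(context).mean(dim=0)  # permutation-invariant pooling
        return torch.distributions.Normal(self.mu(h), self.log_std(h).exp())

infer = TaskInference(transition_dim=10)
context = torch.randn(32, 10)            # (s, a, r, s') tuples, flattened
z = infer.posterior(context).rsample()   # posterior sample -> condition policy
kl = torch.distributions.kl_divergence(  # regularize toward the prior
    infer.posterior(context),
    torch.distributions.Normal(torch.zeros(5), torch.ones(5)),
).sum()
```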
We explore deep reinforcement learning methods for multi-agent domains. We begin by analyzing the difficulty of traditional algorithms in the multi-agent case: Q-learning is challenged by an inherent non-stationarity of the environment, while policy gradient suffers from a variance that increases as the number of agents grows. We then present an adaptation of actor-critic methods that considers action policies of other agents and is able to successfully learn policies that require complex multi-agent coordination. Additionally, we introduce a training regimen utilizing an ensemble of policies for each agent that leads to more robust multi-agent policies. We show the strength of our approach compared to existing methods in cooperative as well as competitive scenarios, where agent populations are able to discover various physical and informational coordination strategies.
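The core structural idea is a centralized critic that conditions on every agent's observation and action, paired with per-agent actors that see only local observations; a minimal sketch of that pairing (layer sizes illustrative):

```python
import torch
import torch.nn as nn

class CentralizedCritic(nn.Module):
    """Q_i(x, a_1, ..., a_N): conditions on every agent's observation and
    action, which removes the non-stationarity seen by independent learners."""
    def __init__(self, obs_dims, act_dims):
        super().__init__()
        in_dim = sum(obs_dims) + sum(act_dims)
        self.net = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(),
                                 nn.Linear(128, 1))

    def forward(self, all_obs, all_acts):
        # all_obs / all_acts: lists of per-agent tensors, shape (B, dim_i)
        return self.net(torch.cat(all_obs + all_acts, dim=-1))

class DecentralizedActor(nn.Module):
    """pi_i(o_i): at execution time each agent acts on local observations only."""
    def __init__(self, obs_dim, act_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(),
                                 nn.Linear(64, act_dim), nn.Tanh())

    def forward(self, obs):
        return self.net(obs)
```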
Decentralized learning has shown great promise for cooperative multi-agent reinforcement learning (MARL). However, non-stationarity remains a significant challenge in decentralized learning. In this paper, we tackle the non-stationarity problem in the simplest and most fundamental way and propose multi-agent alternate Q-learning (MA2QL), where agents take turns updating their Q-functions by Q-learning. MA2QL is a minimalist approach to fully decentralized cooperative MARL but is theoretically grounded. We prove that when each agent guarantees $\varepsilon$-convergence at each turn, their joint policy converges to a Nash equilibrium. In practice, MA2QL only requires minimal changes to independent Q-learning (IQL). We empirically evaluate MA2QL on a variety of cooperative multi-agent tasks. Results show MA2QL consistently outperforms IQL, which, despite such minimal changes, verifies the effectiveness of MA2QL.
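Since the abstract stresses that MA2QL is a minimal change to IQL, the turn-taking schedule itself fits in a few lines. In the sketch below, `env_buffer.sample` and `agent.q_learning_update` are hypothetical placeholders for a standard replay buffer and a per-agent (IQL-style) Q-learning step:

```python
# A minimal sketch of the alternating-update schedule: each agent updates
# its Q-function for a fixed number of steps while all other agents stay
# fixed, so every agent faces a stationary environment during its own turn.

def ma2ql_train(agents, env_buffer, num_rounds: int, updates_per_turn: int):
    for _ in range(num_rounds):
        for agent in agents:                 # agents take turns
            for _ in range(updates_per_turn):
                batch = env_buffer.sample()  # hypothetical replay buffer
                # Only this agent's Q-network changes; the rest act greedily
                # w.r.t. their frozen Q-functions, exactly as in IQL except
                # for the turn-taking schedule.
                agent.q_learning_update(batch)
```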
Modeling the behavior of other agents is essential for understanding agent interactions and making effective decisions. Existing methods for agent modeling commonly assume knowledge of the local observations and chosen actions of the modeled agents during execution. To eliminate this assumption, we extract representations from the local information of the controlled agent using an encoder-decoder architecture. Using the observations and actions of the modeled agents during training, our model learns to extract representations about the modeled agents conditioned only on the local observations of the controlled agent. These representations are used to augment the controlled agent's decision policy, which is trained via deep reinforcement learning; thus, during execution, the policy does not require access to other agents' information. We provide a comprehensive evaluation and ablation studies in cooperative, competitive, and mixed multi-agent environments, showing that our method achieves higher returns than baseline methods that do not use the learned representations.
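A minimal sketch of the encoder-decoder split: the encoder consumes only the controlled agent's local observation (and so remains usable at execution time), while the decoder reconstructs the modeled agents' observations and actions as a training-only target (all dimensions illustrative):

```python
import torch
import torch.nn as nn

class AgentModel(nn.Module):
    """Encoder-decoder agent modeling: the encoder sees only the controlled
    agent's local observation; the decoder, used only at training time,
    reconstructs the modeled agents' observations and actions."""
    def __init__(self, local_dim, embed_dim, others_obs_dim, others_act_dim):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(local_dim, 64), nn.ReLU(),
                                     nn.Linear(64, embed_dim))
        self.decoder = nn.Sequential(nn.Linear(embed_dim, 64), nn.ReLU(),
                                     nn.Linear(64, others_obs_dim + others_act_dim))

    def forward(self, local_obs):
        z = self.encoder(local_obs)  # available at execution time
        recon = self.decoder(z)      # training-time reconstruction target
        return z, recon

model = AgentModel(local_dim=12, embed_dim=8, others_obs_dim=24, others_act_dim=10)
z, recon = model(torch.randn(4, 12))
# z augments the controlled agent's policy input: policy(cat([local_obs, z]))
```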
Research on extending deep reinforcement learning (DRL) to multi-agent domains has solved many complex problems and achieved significant results. However, almost all of these studies focus only on discrete or continuous action spaces, and few works have ever used multi-agent deep reinforcement learning for real-world environment problems, which mostly feature hybrid action spaces. Therefore, in this paper, we propose two algorithms, multi-agent hybrid soft actor-critic (MAHSAC) and multi-agent hybrid deep deterministic policy gradient (MAHDDPG), to fill this gap. The two algorithms follow the centralized training with decentralized execution (CTDE) paradigm and can solve hybrid action space problems. Our experiments run on the multi-agent particle environment, a simple multi-agent particle world with some basic simulated physics. Experimental results show that these algorithms achieve good performance.
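A hybrid action is usually represented as a discrete choice plus a continuous parameter vector; a minimal sketch of a policy head that emits both, which is the kind of output structure algorithms like MAHSAC and MAHDDPG need (a generic construction, not the papers' exact networks):

```python
import torch
import torch.nn as nn

class HybridActionHead(nn.Module):
    """A policy head for hybrid action spaces: one discrete action choice
    plus a continuous parameter vector, emitted jointly from a shared trunk."""
    def __init__(self, obs_dim: int, n_discrete: int, cont_dim: int):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU())
        self.discrete_logits = nn.Linear(64, n_discrete)
        self.cont_mean = nn.Linear(64, cont_dim)
        self.cont_log_std = nn.Linear(64, cont_dim)

    def forward(self, obs):
        h = self.trunk(obs)
        disc = torch.distributions.Categorical(logits=self.discrete_logits(h))
        cont = torch.distributions.Normal(self.cont_mean(h),
                                          self.cont_log_std(h).exp())
        return disc, cont

head = HybridActionHead(obs_dim=10, n_discrete=4, cont_dim=2)
disc, cont = head(torch.randn(1, 10))
action = (disc.sample(), torch.tanh(cont.rsample()))  # (choice, parameters)
```

For a soft actor-critic variant such as MAHSAC, the entropy bonus would plausibly sum the Categorical entropy and the Gaussian entropy of these two heads.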
Multi-agent reinforcement learning (MARL) has witnessed significant progress with the development of value function factorization methods. It allows optimizing a joint action-value function through the maximization of factorized per-agent utilities due to monotonicity. In this paper, we show that in partially observable MARL problems, an agent's ordering over its own actions could impose concurrent constraints (across different states) on the representable function class, causing significant estimation errors during training. We tackle this limitation and propose PAC, a new framework leveraging assistive information generated from counterfactual predictions of optimal joint action selection, which enables explicit assistance to value function factorization through a novel counterfactual loss. A variational inference-based information encoding method is developed to collect and encode the counterfactual predictions from an estimated baseline. To enable decentralized execution, we also derive factorized per-agent policies inspired by a maximum-entropy MARL framework. We evaluate PAC on multi-agent predator-prey and a set of StarCraft II micromanagement tasks. Empirical results demonstrate improved results of PAC over state-of-the-art value-based and policy-based multi-agent reinforcement learning algorithms on all benchmarks.
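For reference, the monotonic factorization whose representational limits the paper targets is typically implemented as a QMIX-style mixing network with state-conditioned non-negative weights; a minimal sketch of that baseline construction (not of PAC itself; sizes illustrative):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MonotonicMixer(nn.Module):
    """QMIX-style monotonic mixing: per-agent utilities are combined with
    state-conditioned non-negative weights, so dQ_tot/dQ_i >= 0. This is
    the monotonicity constraint discussed in the abstract."""
    def __init__(self, n_agents: int, state_dim: int, embed_dim: int = 32):
        super().__init__()
        self.n_agents, self.embed_dim = n_agents, embed_dim
        self.w1 = nn.Linear(state_dim, n_agents * embed_dim)
        self.b1 = nn.Linear(state_dim, embed_dim)
        self.w2 = nn.Linear(state_dim, embed_dim)
        self.b2 = nn.Linear(state_dim, 1)

    def forward(self, agent_qs: torch.Tensor, state: torch.Tensor) -> torch.Tensor:
        # agent_qs: (B, n_agents), state: (B, state_dim)
        B = agent_qs.shape[0]
        w1 = self.w1(state).abs().view(B, self.n_agents, self.embed_dim)
        h = F.elu(torch.bmm(agent_qs.unsqueeze(1), w1).squeeze(1)
                  + self.b1(state))
        w2 = self.w2(state).abs()  # non-negative weights enforce monotonicity
        return (h * w2).sum(dim=-1, keepdim=True) + self.b2(state)  # Q_tot
```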
Communication is supposed to improve multi-agent collaboration and overall performance in cooperative Multi-agent reinforcement learning (MARL). However, such improvements are prevalently limited in practice since most existing communication schemes ignore communication overheads (e.g., communication delays). In this paper, we demonstrate that ignoring communication delays has detrimental effects on collaborations, especially in delay-sensitive tasks such as autonomous driving. To mitigate this impact, we design a delay-aware multi-agent communication model (DACOM) to adapt communication to delays. Specifically, DACOM introduces a component, TimeNet, that is responsible for adjusting the waiting time of an agent to receive messages from other agents such that the uncertainty associated with delay can be addressed. Our experiments reveal that DACOM has a non-negligible performance improvement over other mechanisms by making a better trade-off between the benefits of communication and the costs of waiting for messages.
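A minimal sketch of the delay-gating idea: a small network maps the agent's hidden state to a waiting time, and only messages arriving within that window are aggregated. This is a simplified stand-in for DACOM's TimeNet; the names and the mean aggregation are illustrative:

```python
import torch
import torch.nn as nn

class TimeNet(nn.Module):
    """Outputs how long an agent should wait for incoming messages, trading
    off the value of communication against the cost of delayed action."""
    def __init__(self, hidden_dim: int, max_wait: float = 1.0):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(hidden_dim, 32), nn.ReLU(),
                                 nn.Linear(32, 1), nn.Sigmoid())
        self.max_wait = max_wait

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        return self.net(h) * self.max_wait  # wait time in [0, max_wait]

def gate_messages(messages, delays, wait_time):
    """Keep only messages whose delay falls within the learned waiting window."""
    # messages: (N, msg_dim), delays: (N,), wait_time: scalar tensor
    keep = (delays <= wait_time).float().unsqueeze(-1)
    n = keep.sum().clamp(min=1.0)
    return (messages * keep).sum(dim=0) / n  # mean of on-time messages

timenet = TimeNet(hidden_dim=16)
wait = timenet(torch.randn(16))
agg = gate_messages(torch.randn(3, 8), torch.tensor([0.1, 0.4, 0.9]), wait)
```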
The ad hoc teamwork problem describes situations in which an agent must collaborate with previously unseen agents to achieve a common goal. For an agent to be successful in these scenarios, it must have suitable cooperative skills. Cooperative skills can be engineered into an agent's behavior using domain knowledge. However, in complex domains, domain knowledge may not be available. Therefore, it is worthwhile to explore how to learn cooperative skills directly from data. In this work, we apply a meta-reinforcement learning (meta-RL) formulation in the context of the ad hoc teamwork problem. Our empirical results show that this approach can produce robust cooperative agents in two cooperative environments with different cooperation requirements: social compliance and language interpretation. (This is the full paper of the extended abstract version.)
Adequate strategizing of agents' behaviors is essential to solving cooperative MARL problems. One intuitively beneficial yet uncommon method in this domain is predicting agents' future behaviors and planning accordingly. Leveraging this point, we propose a two-level hierarchical architecture that combines a novel information-theoretic objective with a trajectory prediction model to learn a strategy. To this end, we introduce a latent policy that learns two types of latent strategies: individual $z_A$, and relational $z_R$, using a modified Graph Attention Network module to extract interaction features. We encourage each agent to behave according to the strategy by conditioning its local $Q$ functions on $z_A$, and we further equip agents with a shared $Q$ function that conditions on $z_R$. Additionally, we introduce two regularizers to allow predicted trajectories to be accurate and rewarding. Empirical results on Google Research Football (GRF) and StarCraft (SC) II micromanagement tasks show that our method establishes a new state of the art, being, to the best of our knowledge, the first MARL algorithm to solve all super hard SC II scenarios as well as the GRF full game with a win rate higher than $95\%$, thus outperforming all existing methods. Videos and a brief overview of the methods and results are available at: https://sites.google.com/view/hier-strats-marl/home.
Offline reinforcement learning leverages static datasets to learn optimal policies with no need to access the environment. Such a paradigm is desirable for multi-agent learning tasks due to the expensiveness of agents' online interactions and the demand for large numbers of samples during training. Yet in multi-agent reinforcement learning (MARL), the paradigm of offline pre-training with online fine-tuning has never been studied, nor have datasets or benchmarks for offline MARL research been made available. In this paper, we try to answer the question of whether offline pre-training in MARL is able to learn generalizable policy representations that can help improve performance on multiple downstream tasks. We first introduce the first offline MARL dataset with diverse quality levels based on the StarCraft II environment, and then propose a novel architecture, the multi-agent decision transformer (MADT), for effective offline learning. MADT leverages the transformer's ability to model temporal representations and integrates it with both offline and online MARL tasks. A crucial benefit of MADT is that it learns generalizable policies that can transfer between different types of agents under different task scenarios. When evaluated on the StarCraft II offline dataset, MADT demonstrates performance superior to state-of-the-art offline RL baselines. When applied to online tasks, the pre-trained MADT significantly improves sample efficiency and enjoys strong performance even in zero-shot cases. To the best of our knowledge, this is the first work that studies and demonstrates the effectiveness of offline pre-trained models in terms of sample efficiency and generalizability enhancements in MARL.
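The abstract does not detail MADT's token layout; below is a generic decision-transformer-style sketch in PyTorch, assuming the common (return-to-go, observation, action) interleaving with a causal mask. All names and sizes are illustrative:

```python
import torch
import torch.nn as nn

class DecisionTransformerBlock(nn.Module):
    """Return-conditioned sequence model in the decision-transformer style:
    (return-to-go, observation, action) tokens pass through a causally
    masked transformer, and actions are predicted from observation tokens."""
    def __init__(self, obs_dim, act_dim, d_model: int = 64):
        super().__init__()
        self.embed_rtg = nn.Linear(1, d_model)
        self.embed_obs = nn.Linear(obs_dim, d_model)
        self.embed_act = nn.Linear(act_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=2)
        self.predict_act = nn.Linear(d_model, act_dim)

    def forward(self, rtg, obs, act):
        # rtg: (B, T, 1), obs: (B, T, obs_dim), act: (B, T, act_dim)
        B, T, _ = obs.shape
        # Interleave per timestep as (rtg_t, obs_t, act_t).
        tokens = torch.stack([self.embed_rtg(rtg), self.embed_obs(obs),
                              self.embed_act(act)], dim=2).view(B, 3 * T, -1)
        # Additive causal mask: -inf above the diagonal blocks attention.
        mask = torch.triu(torch.full((3 * T, 3 * T), float("-inf")), diagonal=1)
        h = self.transformer(tokens, mask=mask)
        return self.predict_act(h[:, 1::3])  # action predicted at obs tokens

dt = DecisionTransformerBlock(obs_dim=8, act_dim=4)
out = dt(torch.randn(2, 5, 1), torch.randn(2, 5, 8), torch.randn(2, 5, 4))
print(out.shape)  # torch.Size([2, 5, 4])
```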
Value function factorization via centralized training and decentralized execution is promising for solving cooperative multi-agent reinforcement learning tasks. QMIX, one of the methods in this area, has become state-of-the-art, achieving the best performance on the StarCraft II micromanagement benchmark. However, the monotonic mixing of per-agent estimates in QMIX is known to restrict the joint action Q-values it can represent, and the global state information is often insufficient for single-agent value function estimation, frequently resulting in suboptimality. To this end, we present LSF-SAC, a novel framework that features a variational inference-based information-sharing mechanism as extra state information to assist individual agents in value function factorization. We demonstrate that such latent individual state information sharing can significantly expand the power of value function factorization, while fully decentralized execution can still be maintained in LSF-SAC through a soft actor-critic design. We evaluate LSF-SAC on the StarCraft II micromanagement challenge and demonstrate that it outperforms several state-of-the-art methods on challenging collaborative tasks. We further conduct extensive ablation studies to localize the key factors accounting for its performance improvements. We believe that this new insight can lead to new local value estimation methods and variational deep learning algorithms. A demo video and the implementation code can be found at https://sites.google.com/view/sacmm.
Many scenarios in mobility and traffic involve multiple different agents that need to cooperate to find a joint solution. Recent advances in behavioral planning use reinforcement learning to find effective and performant behavior strategies. However, as autonomous vehicles and vehicle-to-X communications become more mature, solutions that use only single, independent agents leave potential performance gains on the road. Multi-agent reinforcement learning (MARL) is a research field that aims to find optimal solutions for multiple agents that interact with each other. This work aims to give an overview of the field to researchers in autonomous mobility. We first explain MARL and introduce important concepts. Then, we discuss the central paradigms on which MARL algorithms are based and give an overview of state-of-the-art methods and ideas in each paradigm. Against this background, we survey applications of MARL in autonomous mobility scenarios and give an overview of existing scenarios and implementations.
Almost all multi-agent reinforcement learning algorithms without communication follow the principle of centralized training with decentralized execution. During centralized training, agents can be guided by the same signals, such as the global state. During decentralized execution, however, agents lack the shared signal. Inspired by viewpoint invariance and contrastive learning, we propose consensus learning for cooperative multi-agent reinforcement learning in this paper. Although based on local observations, different agents can infer the same consensus in discrete space. During decentralized execution, we feed the inferred consensus as an explicit input to the agents' networks, thereby developing their spirit of cooperation. Our proposed method can be extended to various multi-agent reinforcement learning algorithms with small model changes. Moreover, we carry out experiments on some fully cooperative tasks and obtain convincing results.
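The abstract leaves open how agreement in the discrete consensus space is trained; one simple centralized-training stand-in is to project each agent's local observation onto K discrete codes and pull the agents' code distributions toward their group mean. A sketch under that assumption (the alignment loss here is illustrative, not the paper's exact objective):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConsensusBuilder(nn.Module):
    """Each agent maps its local observation to one of K discrete consensus
    codes; centralized training encourages agents in the same state to pick
    the same code, which is then fed back as an explicit policy input."""
    def __init__(self, obs_dim: int, n_codes: int = 16):
        super().__init__()
        self.proj = nn.Linear(obs_dim, n_codes)

    def forward(self, obs: torch.Tensor):
        logits = self.proj(obs)       # (n_agents, K), one row per local view
        code = logits.argmax(dim=-1)  # discrete consensus id per agent
        return F.one_hot(code, logits.shape[-1]).float(), logits

builder = ConsensusBuilder(obs_dim=10)
one_hot, logits = builder(torch.randn(3, 10))  # 3 agents, 3 local views
# Alignment loss (training only): pull every agent's code distribution
# toward the group's mean so different views infer the same consensus.
mean_p = F.softmax(logits, dim=-1).mean(dim=0, keepdim=True).detach()
align_loss = F.kl_div(F.log_softmax(logits, dim=-1),
                      mean_p.expand_as(logits), reduction="batchmean")
```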
In multi-agent reinforcement learning, the inherent non-stationarity of the environment caused by other agents' actions poses significant difficulties for an agent to learn a good policy independently. One way to deal with non-stationarity is agent modeling, by which the agent takes into consideration the influence of other agents' policies. Most existing work relies on predicting other agents' actions or goals, or on discriminating between their policies. However, such modeling fails to capture the similarities and differences between policies simultaneously and thus cannot provide sufficiently useful information when generalizing to unseen agents. To address this, we propose a general method to learn representations of other agents' policies such that the distance between policies is deliberately reflected by the distance between representations, where the policy distance is inferred from the joint action distributions sampled during training. We empirically show that an agent conditioned on the learned policy representation can generalize well to unseen agents in three multi-agent tasks.
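A minimal sketch of the distance-matching idea: pool transitions generated by an opponent's policy into an embedding, and regress the distance between two embeddings onto a policy distance estimated from sampled joint action distributions. Here a placeholder scalar stands in for that estimate (how the paper computes it is not specified in the abstract):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PolicyEncoder(nn.Module):
    """Embeds interaction data generated by another agent's policy; trained
    so that distances between embeddings track distances between policies."""
    def __init__(self, transition_dim: int, embed_dim: int = 16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(transition_dim, 64), nn.ReLU(),
                                 nn.Linear(64, embed_dim))

    def forward(self, transitions: torch.Tensor) -> torch.Tensor:
        # transitions: (num_samples, transition_dim) from one opponent policy
        return self.net(transitions).mean(dim=0)  # pooled policy embedding

enc = PolicyEncoder(transition_dim=12)
z_a, z_b = enc(torch.randn(64, 12)), enc(torch.randn(64, 12))
# policy_dist would be estimated from sampled joint action distributions
# during training (e.g., a divergence between the two action distributions).
policy_dist = torch.tensor(0.7)                       # placeholder target
loss = F.mse_loss(torch.dist(z_a, z_b), policy_dist)  # distance matching
```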
Recently, model-based agents have achieved better performance than model-free ones using the same computational budget and training time in single-agent environments. However, due to the complexity of multi-agent systems, it is tough to learn the model of the environment. When model-based methods are applied to multi-agent tasks, the significant compounding error may hinder the learning process. This paper proposes an implicit model-based multi-agent reinforcement learning method based on value decomposition methods. Under this method, agents can interact with the learned virtual environment and evaluate the current state value according to imagined future states in the latent space, giving agents foresight. Our approach can be applied to any multi-agent value decomposition method. The experimental results show that our method improves sample efficiency in different partially observable Markov decision process domains.
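A minimal sketch of latent imagination: a learned latent dynamics model rolls the current latent state forward under candidate actions for a short horizon (limiting the compounding error the abstract mentions) and averages the discounted values of the imagined states. Names and the exact aggregation are illustrative:

```python
import torch
import torch.nn as nn

class LatentRollout(nn.Module):
    """A learned latent dynamics model: agents imagine future latent states
    and score the current state partly by the value of those imaginations."""
    def __init__(self, latent_dim: int, act_dim: int):
        super().__init__()
        self.dynamics = nn.Sequential(nn.Linear(latent_dim + act_dim, 64),
                                      nn.ReLU(), nn.Linear(64, latent_dim))
        self.value = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(),
                                   nn.Linear(64, 1))

    def imagine(self, z, actions, gamma: float = 0.99):
        # z: (B, latent_dim); actions: list of (B, act_dim), one per step
        total, discount = self.value(z), 1.0
        for a in actions:  # short horizon keeps compounding error bounded
            z = self.dynamics(torch.cat([z, a], dim=-1))
            discount *= gamma
            total = total + discount * self.value(z)
        return total / (len(actions) + 1)  # foresight-augmented value

roll = LatentRollout(latent_dim=8, act_dim=3)
v = roll.imagine(torch.randn(4, 8), [torch.randn(4, 3) for _ in range(5)])
```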
Multi-agent deep reinforcement learning has been applied to address a variety of complex problems with either discrete or continuous action spaces and has achieved great success. However, most real-world environments cannot be described by discrete action spaces or continuous action spaces alone, and few works have ever utilized deep reinforcement learning (DRL) to address multi-agent problems with hybrid action spaces. Therefore, we propose a novel algorithm, deep multi-agent hybrid soft actor-critic (MAHSAC), to fill this gap. This algorithm follows the centralized training but decentralized execution (CTDE) paradigm and extends the soft actor-critic algorithm (SAC) to handle hybrid action space problems in multi-agent environments based on maximum entropy. Our experiments run on a simple multi-agent particle world with continuous observations and discrete action spaces, along with some basic simulated physics. Experimental results show that MAHSAC performs well in terms of training speed, stability, and anti-interference capability. At the same time, it outperforms existing independent deep hybrid learning methods in both cooperative and competitive scenarios.
Learning to collaborate is critical in multi-agent reinforcement learning (MARL). Previous works promote collaboration by maximizing the correlation of agents' behaviors, which is typically characterized by mutual information (MI) in different forms. However, we reveal that sub-optimal collaborative behaviors also emerge with strong correlations, and simply maximizing the MI can hinder learning toward better collaboration. To address this issue, we propose a novel MARL framework, called Progressive Mutual Information Collaboration (PMIC), for more effective MI-driven collaboration. PMIC uses a new collaboration criterion measured by the MI between global states and joint actions. Based on this criterion, the key idea of PMIC is maximizing the MI associated with superior collaborative behaviors and minimizing the MI associated with inferior ones. The two MI objectives play complementary roles, facilitating better collaboration while avoiding falling into sub-optimal ones. Experiments on a wide range of MARL benchmarks show the superior performance of PMIC compared with other algorithms.
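A deliberately simplified stand-in for the dual MI objectives: an InfoNCE-style bound on the MI between global states and joint actions is maximized on trajectories tagged as superior and pushed down on inferior ones (the critic, the bound, and the 0.1 weight are all illustrative choices, not PMIC's exact estimators):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MICritic(nn.Module):
    """Scores (global state, joint action) pairs; an InfoNCE-style bound on
    their MI is raised on superior trajectories and lowered on inferior ones."""
    def __init__(self, state_dim: int, action_dim: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim + action_dim, 64),
                                 nn.ReLU(), nn.Linear(64, 1))

    def scores(self, states, actions):
        # Pair every state with every action: (B, B) score matrix.
        B = states.shape[0]
        s = states.unsqueeze(1).expand(B, B, -1)
        a = actions.unsqueeze(0).expand(B, B, -1)
        return self.net(torch.cat([s, a], dim=-1)).squeeze(-1)

def info_nce(scores):
    # Diagonal entries are the jointly observed (state, action) pairs;
    # minimizing this loss maximizes the InfoNCE lower bound on MI.
    return F.cross_entropy(scores, torch.arange(scores.shape[0]))

critic = MICritic(state_dim=6, action_dim=4)
good_loss = info_nce(critic.scores(torch.randn(8, 6), torch.randn(8, 4)))
bad_scores = critic.scores(torch.randn(8, 6), torch.randn(8, 4))
bad_loss = -info_nce(bad_scores)   # push down MI on inferior behaviors
loss = good_loss + 0.1 * bad_loss  # complementary objectives
```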