Information sharing is key to building team cognition and enabling coordination and cooperation. High-performing human teams also benefit from acting strategically with hierarchical levels of iterated communication and rationalizability, meaning that a human agent can reason about the actions of its teammates in its decision-making. Yet, the majority of prior work in multi-agent reinforcement learning (MARL) does not support iterated rationalizability and only encourages inter-agent communication, resulting in suboptimal equilibrium cooperation strategies. In this work, we show that reformulating an agent's policy to be conditioned on the policies of its neighboring teammates inherently maximizes a lower bound on mutual information (MI) when optimizing with policy gradients (PG). Building on the notions of decision-making under bounded rationality and cognitive hierarchy theory, we show that our modified PG approach not only maximizes local agent rewards but also implicitly reasons about the MI between agents, without any explicit ad-hoc regularization terms. Our approach, InfoPG, outperforms baselines in learning emergent collaborative behaviors and sets the state of the art in decentralized cooperative MARL tasks. Our experiments validate the utility of InfoPG by achieving higher sample efficiency and larger cumulative rewards in several complex cooperative multi-agent domains.
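To make the mechanism above concrete, the following is a minimal, hypothetical PyTorch sketch of a policy that conditions on a neighboring teammate's action distribution before a plain policy-gradient update, with no explicit MI regularizer. The two-agent setup, network sizes, single round of neighbor conditioning, and REINFORCE-style update are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
from torch.distributions import Categorical

OBS_DIM, N_ACTIONS = 4, 3

class NeighborConditionedPolicy(nn.Module):
    """Two-stage policy: a level-0 guess from local observations only,
    then a final distribution conditioned on the neighbor's level-0
    distribution (one round of iterated reasoning)."""
    def __init__(self):
        super().__init__()
        self.level0 = nn.Sequential(nn.Linear(OBS_DIM, 32), nn.Tanh(),
                                    nn.Linear(32, N_ACTIONS))
        # Final head sees own observation plus the neighbor's level-0 probabilities.
        self.level1 = nn.Sequential(nn.Linear(OBS_DIM + N_ACTIONS, 32), nn.Tanh(),
                                    nn.Linear(32, N_ACTIONS))

    def initial_probs(self, obs):
        return torch.softmax(self.level0(obs), dim=-1)

    def forward(self, obs, neighbor_probs):
        return Categorical(logits=self.level1(torch.cat([obs, neighbor_probs], dim=-1)))

agents = [NeighborConditionedPolicy() for _ in range(2)]
opt = torch.optim.Adam([p for a in agents for p in a.parameters()], lr=1e-3)

obs = [torch.randn(OBS_DIM) for _ in range(2)]            # placeholder observations
level0 = [a.initial_probs(o) for a, o in zip(agents, obs)]
# Each agent conditions on the *other* agent's level-0 distribution.
dists = [agents[i](obs[i], level0[1 - i].detach()) for i in range(2)]
actions = [d.sample() for d in dists]

team_reward = torch.tensor(1.0)                           # placeholder environment reward
# Vanilla REINFORCE on the conditional policies; no explicit MI term is added.
loss = -sum(d.log_prob(a) for d, a in zip(dists, actions)) * team_reward
opt.zero_grad(); loss.backward(); opt.step()
```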
Multi-agent reinforcement learning (MARL) has seen considerable progress over the past decade, but many challenges, such as high sample complexity and slow convergence to stable policies, still need to be overcome before widespread deployment is possible. However, many real-world environments already, in practice, deploy sub-optimal or heuristic approaches for generating policies. An interesting question is how to best use such approaches as advisors to help improve reinforcement learning in multi-agent domains. In this paper, we provide a principled framework for incorporating action recommendations from online sub-optimal advisors in multi-agent settings. We describe the problem of ADvising Multiple Intelligent Reinforcement Agents (ADMIRAL) in nonrestrictive general-sum stochastic game environments and present two novel Q-learning-based algorithms: ADMIRAL-Decision Making (ADMIRAL-DM) and ADMIRAL-Advisor Evaluation (ADMIRAL-AE), which allow us to improve learning by appropriately incorporating advice from an advisor (ADMIRAL-DM) and to evaluate the effectiveness of an advisor (ADMIRAL-AE). We analyze the algorithms theoretically and provide fixed-point guarantees regarding their learning in general-sum stochastic games. Furthermore, extensive experiments illustrate that these algorithms can be used in a variety of environments, perform favorably compared to other related baselines, scale to large state-action spaces, and are robust to poor advice from advisors.
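As a rough, hypothetical illustration of folding an advisor's action recommendations into Q-learning (the general setting the framework above addresses), here is a tabular single-learner sketch. The `env`/`advisor` interfaces, the advisor-following probability, and its decay schedule are assumptions made for brevity; ADMIRAL-DM and ADMIRAL-AE themselves operate in general-sum stochastic games and differ in their details.

```python
import random
from collections import defaultdict

def advised_q_learning(env, advisor, episodes=500, alpha=0.1, gamma=0.99,
                       eps=0.1, follow_advisor=0.8, decay=0.995):
    """Tabular Q-learning that sometimes follows an external advisor's
    recommended action instead of its own greedy/exploratory choice.
    The probability of following the advisor decays over time, so the
    learner gradually relies on its own value estimates."""
    Q = defaultdict(lambda: [0.0] * env.n_actions)
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            if random.random() < follow_advisor:
                a = advisor(s)                       # action recommended by the advisor
            elif random.random() < eps:
                a = random.randrange(env.n_actions)  # ordinary exploration
            else:
                a = max(range(env.n_actions), key=lambda x: Q[s][x])
            s2, r, done = env.step(a)
            # Standard one-step Q-learning target.
            target = r + (0.0 if done else gamma * max(Q[s2]))
            Q[s][a] += alpha * (target - Q[s][a])
            s = s2
        follow_advisor *= decay
    return Q
```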
State-of-the-art multi-agent reinforcement learning (MARL) methods have provided promising solutions to a variety of complex problems. Yet, these methods all assume that agents perform synchronized primitive-action executions, so they are not genuinely scalable to long-horizon real-world multi-agent/robot tasks that inherently require agents/robots to asynchronously reason about high-level action selection at varying time durations. The Macro-Action Decentralized Partially Observable Markov Decision Process (MacDec-POMDP) is a general formalization for asynchronous decision-making under uncertainty in fully cooperative multi-agent tasks. In this thesis, we first propose a set of value-based RL approaches for MacDec-POMDPs, in which agents are allowed to perform asynchronous learning and decision-making with macro-action-value functions under three paradigms: decentralized learning and control, centralized learning and control, and centralized training for decentralized execution (CTDE). Building on this work, we then formulate a set of macro-action-based policy gradient algorithms under the same three training paradigms, where agents are allowed to directly optimize their parameterized policies in an asynchronous manner. We evaluate our methods both in simulation and on real robots. Empirical results demonstrate the superiority of our approaches on large multi-agent problems and validate the effectiveness of our algorithms in learning high-quality and asynchronous solutions with macro-actions.
Centralized training for decentralized execution, in which agents are trained using centralized information but execute online in a decentralized manner, has gained popularity in the multi-agent reinforcement learning community. In particular, actor-critic methods with a centralized critic and decentralized actors are a common instance of this idea. However, the implications of using a centralized critic have not been fully discussed and understood, even though it is the standard choice in many algorithms. We therefore formally analyze centralized and decentralized critic approaches to better understand the implications of the critic choice. Because our theory makes unrealistic assumptions, we also empirically compare centralized and decentralized critic methods across a wide range of environments to validate our theory and to provide practical advice. We show that there are misconceptions about centralized critics in the current literature, and that the centralized critic design is not strictly beneficial; rather, centralized and decentralized critics have different pros and cons that algorithm designers should take into account.
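The design choice under analysis can be made concrete with a small illustrative sketch of the two critic variants in PyTorch; the dimensions and architectures below are arbitrary assumptions and are not tied to any specific algorithm from the paper.

```python
import torch
import torch.nn as nn

N_AGENTS, OBS_DIM = 3, 8

class DecentralizedCritic(nn.Module):
    """V_i(o_i): each agent evaluates states from its own observation only."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(OBS_DIM, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, local_obs):
        return self.net(local_obs)

class CentralizedCritic(nn.Module):
    """V(o_1, ..., o_N): trained with information from all agents,
    even though the actors still execute on local observations."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(N_AGENTS * OBS_DIM, 64), nn.ReLU(),
                                 nn.Linear(64, 1))

    def forward(self, joint_obs):
        return self.net(joint_obs)

obs = torch.randn(N_AGENTS, OBS_DIM)
per_agent_values = [DecentralizedCritic()(obs[i]) for i in range(N_AGENTS)]
central_value = CentralizedCritic()(obs.flatten())
```

Either value estimate can then serve as the baseline in an actor update; the paper's point is that the centralized variant is not automatically the better choice.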
Adversarial attacks during training can strongly influence the performance of multi-agent reinforcement learning algorithms. It is therefore highly desirable to augment existing algorithms so that the impact of adversarial attacks on cooperative networks is eliminated, or at least bounded. In this work, we consider a fully decentralized network in which each agent receives a local reward and observes the global state and action. We propose a resilient consensus-based actor-critic algorithm in which each agent estimates the team-average reward and value function and communicates the associated parameter vectors to its immediate neighbors. We show that, in the presence of Byzantine agents whose estimation and communication strategies are completely arbitrary, the estimates of the cooperative agents converge to a bounded consensus value with probability one, provided that there are at most $H$ Byzantine agents in the neighborhood of each cooperative agent and the network is $(2H+1)$-robust. Furthermore, we prove that the policies of the cooperative agents converge with probability one to a bounded neighborhood around a local maximizer of their team-average objective function, under the assumption that the policies of the adversarial agents asymptotically become stationary.
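One common way to make a consensus step tolerate up to $H$ arbitrary (Byzantine) neighbors is a coordinate-wise trimmed mean, sketched below in NumPy. Whether this matches the paper's exact aggregation rule is an assumption; the sketch only illustrates the general idea of resilient aggregation of neighbors' parameter vectors.

```python
import numpy as np

def trimmed_mean_consensus(own_params, neighbor_params, h):
    """Coordinate-wise trimmed-mean aggregation: for every parameter
    coordinate, discard the h largest and h smallest received values
    before averaging them together with the agent's own value.
    With at most h Byzantine neighbors, the extreme (possibly malicious)
    values are the ones removed."""
    stacked = np.vstack([own_params] + list(neighbor_params))  # (n_neighbors + 1, d)
    sorted_vals = np.sort(stacked, axis=0)                     # sort each coordinate
    trimmed = sorted_vals[h:sorted_vals.shape[0] - h]          # drop h from each end
    return trimmed.mean(axis=0)

# Toy usage: 4 honest neighbors plus 1 adversary sending huge values (h = 1).
own = np.zeros(3)
neighbors = [np.full(3, 0.1), np.full(3, -0.1), np.full(3, 0.05),
             np.full(3, 0.0), np.full(3, 1e6)]   # last one is Byzantine
print(trimmed_mean_consensus(own, neighbors, h=1))
```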
Synchronizing decisions across multiple agents in realistic settings is problematic, since it requires agents to wait for other agents to terminate and to communicate reliably about termination. Ideally, agents should instead learn and execute asynchronously. Such asynchronous methods also allow temporally extended actions, which can take different amounts of time depending on the situation and the action being executed. Unfortunately, current policy gradient methods are not applicable in asynchronous settings, as they assume that agents synchronously reason about action selection at every time step. To allow asynchronous learning and decision-making, we formulate a set of asynchronous multi-agent actor-critic methods that allow agents to directly optimize asynchronous policies in three standard training paradigms: decentralized learning, centralized learning, and centralized training for decentralized execution. Empirical results in a variety of realistic domains (in simulation and on hardware) demonstrate the superiority of our approaches on large multi-agent problems and validate the effectiveness of our algorithms in learning high-quality and asynchronous solutions.
Adequately assigning credit to actions for future outcomes based on their contributions is a long-standing open challenge in Reinforcement Learning. The assumptions of the most commonly used credit assignment method are disadvantageous in tasks where the effects of decisions are not immediately evident. Furthermore, this method can only evaluate actions that have been selected by the agent, making it highly inefficient. Still, no alternative methods have been widely adopted in the field. Hindsight Credit Assignment is a promising, but still unexplored candidate, which aims to solve the problems of both long-term and counterfactual credit assignment. In this thesis, we empirically investigate Hindsight Credit Assignment to identify its main benefits, and key points to improve. Then, we apply it to factored state representations, and in particular to state representations based on the causal structure of the environment. In this setting, we propose a variant of Hindsight Credit Assignment that effectively exploits a given causal structure. We show that our modification greatly decreases the workload of Hindsight Credit Assignment, making it more efficient and enabling it to outperform the baseline credit assignment method on various tasks. This opens the way to other methods based on given or learned causal structures.
In decentralized cooperative multi-agent reinforcement learning, agents can aggregate information from one another to learn policies that maximize a team-average objective function. Despite their willingness to cooperate, individual agents may find directly sharing information about their local state, reward, and value function undesirable due to privacy concerns. In this work, we introduce a decentralized actor-critic algorithm with TD error aggregation that does not violate these privacy concerns and that assumes the communication channels are subject to time delays and packet dropouts. The cost of making such weak assumptions is an increased communication burden for every agent, as measured by the dimension of the transmitted data. Interestingly, the communication burden is only quadratic in the graph size, which makes the algorithm applicable to large networks. We provide a convergence analysis under diminishing step sizes to verify that the agents maximize the team-average objective function.
Adequate strategizing of agents' behaviors is essential to solving cooperative MARL problems. One intuitively beneficial yet uncommon method in this domain is predicting agents' future behaviors and planning accordingly. Leveraging this point, we propose a two-level hierarchical architecture that combines a novel information-theoretic objective with a trajectory prediction model to learn a strategy. To this end, we introduce a latent policy that learns two types of latent strategies: individual $z_A$, and relational $z_R$, using a modified Graph Attention Network module to extract interaction features. We encourage each agent to behave according to the strategy by conditioning its local $Q$ functions on $z_A$, and we further equip agents with a shared $Q$ function that conditions on $z_R$. Additionally, we introduce two regularizers to allow predicted trajectories to be accurate and rewarding. Empirical results on Google Research Football (GRF) and StarCraft (SC) II micromanagement tasks show that our method establishes a new state of the art, being, to the best of our knowledge, the first MARL algorithm to solve all super hard SC II scenarios as well as the GRF full game with a win rate higher than $95\%$, thus outperforming all existing methods. Videos and brief overview of the methods and results are available at: https://sites.google.com/view/hier-strats-marl/home.
The policy gradient (PG) estimator for softmax policies is ineffective under sub-optimally saturated initialization, which occurs when the probability density concentrates on a sub-optimal action. Sub-optimal policy saturation may arise from bad policy initialization or from sudden changes in the environment after the policy has already converged, and the softmax PG estimator then requires a large number of updates to recover an effective policy. This severe issue causes high sample inefficiency and poor adaptability to new situations. To mitigate this problem, we propose a novel policy gradient estimator for softmax policies that exploits the bias in the critic estimate and the noise present in the reward signal to escape saturated regions of the policy parameter space. Our analysis and experiments on bandit and classical MDP benchmark tasks show that our estimator is more robust to policy saturation.
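The saturation issue follows directly from the form of the softmax policy gradient. For a tabular softmax policy $\pi_\theta(a \mid s) \propto \exp(\theta_{s,a})$, the score function and the vanilla estimator are

$$\frac{\partial \log \pi_\theta(a \mid s)}{\partial \theta_{s,b}} = \mathbb{1}\{a=b\} - \pi_\theta(b \mid s), \qquad \nabla_\theta J(\theta) = \mathbb{E}_{s,a\sim\pi_\theta}\big[\nabla_\theta \log \pi_\theta(a \mid s)\, Q^{\pi_\theta}(s,a)\big].$$

When the policy has saturated on a sub-optimal action $\hat a$, $\pi_\theta(\hat a \mid s) \approx 1$ and $\pi_\theta(b \mid s) \approx 0$ for $b \neq \hat a$, so every component of the score is close to zero and the vanilla estimator yields vanishingly small updates; the proposed estimator exploits critic bias and reward noise precisely to escape this flat region. (The display above is standard background, not the paper's new estimator.)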
Policy gradient methods have become popular in multi-agent reinforcement learning, but they suffer from high variance due to environmental stochasticity and exploring agents (i.e., non-stationarity), which can be exacerbated by the difficulty of credit assignment. A method is therefore needed that not only efficiently addresses these two issues but is also robust enough to solve a variety of tasks. To this end, we propose a new multi-agent policy gradient method, called Robust Local Advantage (ROLA) actor-critic. ROLA allows each agent to learn an individual action-value function as a local critic, and mitigates environment non-stationarity via a novel centralized training approach based on a centralized critic. Using this local critic, each agent computes a baseline to reduce the variance of its policy gradient estimate, which yields an expected advantage over other agents' action choices that implicitly improves credit assignment. We evaluate ROLA on diverse benchmarks and show its robustness and effectiveness compared to a number of state-of-the-art multi-agent policy gradient algorithms.
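The variance-reduction step described above (using an individually learned action-value function as a local baseline) can be sketched as the following hypothetical PyTorch fragment for a single agent; how ROLA actually trains its local and centralized critics is not reproduced here.

```python
import torch

def local_advantage_pg_loss(local_q, policy_dist, action, *, detach_baseline=True):
    """Policy-gradient loss for one agent using its local critic as a baseline.

    local_q:     tensor of shape (n_actions,), the agent's local Q(o_i, .)
    policy_dist: torch.distributions.Categorical over the agent's actions
    action:      index of the action actually taken
    """
    # Counterfactual-style baseline: expected local value under the agent's own policy.
    baseline = (policy_dist.probs * local_q).sum()
    if detach_baseline:
        baseline = baseline.detach()
    advantage = local_q[action].detach() - baseline
    # Standard score-function estimator with the local advantage.
    return -policy_dist.log_prob(torch.as_tensor(action)) * advantage
```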
This work considers the problem of learning cooperative policies in complex, partially observable domains without explicit communication. We extend three classes of single-agent deep reinforcement learning algorithms based on policy gradient, temporal-difference error, and actor-critic methods to cooperative multi-agent systems. We introduce a set of cooperative control tasks that includes tasks with discrete and continuous actions, as well as tasks that involve hundreds of agents. The three approaches are evaluated against each other using different neural architectures, training procedures, and reward structures. Using deep reinforcement learning with a curriculum learning scheme, our approach can solve problems that were previously considered intractable by most multi-agent reinforcement learning algorithms. We show that policy gradient methods tend to outperform both temporal-difference and actor-critic methods when using feed-forward neural architectures. We also show that recurrent policies, while more difficult to train, outperform feed-forward policies on our evaluation tasks.
In the on-policy setting, a variety of theoretically sound policy gradient algorithms exist thanks to the policy gradient theorem, which provides a simplified form for the gradient. The off-policy setting is less clear, however, due to the existence of multiple objectives and the lack of an explicit off-policy policy gradient theorem. In this work, we unify these objectives into one off-policy objective and provide a policy gradient theorem for this unified objective. The derivation involves emphatic weightings and interest functions. We present multiple strategies for approximating the gradients in an algorithm called Actor-Critic with Emphatic weightings (ACE). We show that previous (semi-gradient) off-policy actor-critic methods, in particular OffPAC and DPG, converge to the wrong solution whereas ACE finds the optimal solution. We also highlight why these semi-gradient approaches can still perform well in practice, motivating variance-reduction strategies in ACE. We empirically study several variants of ACE on two classic control environments and an image-based environment designed to illustrate the trade-offs made by each gradient approximation. We find that, by directly approximating the emphatic weightings, ACE performs as well as or better than OffPAC in all settings tested.
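For context, the unified off-policy gradient referred to above takes the form of a policy gradient theorem weighted by an emphatic weighting rather than by the behavior state distribution alone; the display below is a paraphrase under that reading and should be checked against the paper's notation:

$$\nabla_\theta J(\theta) = \sum_{s} m(s) \sum_{a} \nabla_\theta \pi_\theta(a \mid s)\, q_\pi(s,a),$$

where $m$ accumulates the interest $i(s)$ assigned to states, discounted along target-policy trajectories started from the behavior distribution $d_\mu$. Roughly speaking, semi-gradient methods such as OffPAC weight by $d_\mu(s)\, i(s)$ instead of $m(s)$, which is what can lead them to a wrong solution.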
Learning to collaborate is critical in multi-agent reinforcement learning (MARL). Previous works promote collaboration by maximizing the correlation of agents' behaviors, typically characterized by mutual information (MI) in various forms. However, we reveal that sub-optimal collaborative behaviors also emerge with strong correlations, and that simply maximizing MI can hinder learning toward better collaboration. To address this issue, we propose a novel MARL framework, called Progressive Mutual Information Collaboration (PMIC), for more effective MI-driven collaboration. PMIC uses a new collaboration criterion measured by the MI between the global state and the joint action. Based on this criterion, the key idea of PMIC is to maximize the MI associated with superior collaborative behaviors and to minimize the MI associated with inferior ones. The two MI objectives play complementary roles: they facilitate better collaboration while avoiding getting stuck in sub-optimal behaviors. Experiments on a wide range of MARL benchmarks show the superior performance of PMIC compared with other algorithms.
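At a high level, the dual objective described above can be paraphrased as maximizing an MI term estimated on superior experience while minimizing one estimated on inferior experience; the actual estimators and coefficients used by PMIC are not reproduced here, so the expression below is only illustrative:

$$\max_\pi \; I_{\mathcal{D}_{\text{sup}}}(S; A) \;-\; \beta\, I_{\mathcal{D}_{\text{inf}}}(S; A),$$

where $S$ is the global state, $A$ the joint action, $\mathcal{D}_{\text{sup}}$ and $\mathcal{D}_{\text{inf}}$ are buffers of high- and low-return collaborative behavior, and $\beta$ (a hypothetical coefficient) trades off the two terms.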
Current approaches to multi-agent cooperation rely heavily on centralized mechanisms or explicit communication protocols to ensure convergence. This paper studies the problem of distributed multi-agent learning without resorting to centralized components or explicit communication. It examines the use of distribution matching to facilitate the coordination of independent agents. In the proposed scheme, each agent independently minimizes the distribution mismatch to the corresponding component of a target visitation distribution. The theoretical analysis shows that under certain conditions, each agent minimizing its individual distribution mismatch allows the convergence to the joint policy that generated the target distribution. Further, if the target distribution is from a joint policy that optimizes a cooperative task, the optimal policy for a combination of this task reward and the distribution matching reward is the same joint policy. This insight is used to formulate a practical algorithm (DM$^2$), in which each individual agent matches a target distribution derived from concurrently sampled trajectories from a joint expert policy. Experimental validation on the StarCraft domain shows that combining (1) a task reward, and (2) a distribution matching reward for expert demonstrations for the same task, allows agents to outperform a naive distributed baseline. Additional experiments probe the conditions under which expert demonstrations need to be sampled to obtain the learning benefits.
One of the preeminent obstacles to scaling multi-agent reinforcement learning to large numbers of agents is assigning credit to individual agents' actions. In this paper, we address this credit assignment problem with an approach we call \emph{partial reward decoupling} (PRD), which attempts to decompose large cooperative multi-agent RL problems into decoupled subproblems involving subsets of agents, thereby simplifying credit assignment. We empirically demonstrate that decomposing the RL problem using PRD in an actor-critic algorithm results in lower-variance policy gradient estimates, which improves data efficiency, learning stability, and asymptotic performance across a wide array of multi-agent RL tasks, compared to various other actor-critic approaches. In addition, we relate our approach to counterfactual multi-agent policy gradient (COMA), a state-of-the-art MARL algorithm, and empirically show that our approach outperforms COMA by making better use of the information in agents' reward streams and by enabling recent advances in advantage estimation to be used.
Cooperative multi-agent reinforcement learning (MARL) has achieved significant results, most notably by leveraging the representation-learning abilities of deep neural networks. However, large centralized approaches quickly become infeasible as the number of agents scale, and fully decentralized approaches can miss important opportunities for information sharing and coordination. Furthermore, not all agents are equal -- in some cases, individual agents may not even have the ability to send communication to other agents or explicitly model other agents. This paper considers the case where there is a single, powerful, \emph{central agent} that can observe the entire observation space, and there are multiple, low-powered \emph{local agents} that can only receive local observations and are not able to communicate with each other. The central agent's job is to learn what message needs to be sent to different local agents based on the global observations, not by centrally solving the entire problem and sending action commands, but by determining what additional information an individual agent should receive so that it can make a better decision. In this work we present our MARL algorithm \algo, describe where it would be most applicable, and implement it in the cooperative navigation and multi-agent walker domains. Empirical results show that 1) learned communication does indeed improve system performance, 2) results generalize to heterogeneous local agents, and 3) results generalize to different reward structures.
We present Coordinated Proximal Policy Optimization (CoPPO), an algorithm that extends the original proximal policy optimization (PPO) to the multi-agent setting. The key idea lies in the coordinated adaptation of step size during the policy update process of multiple agents. We prove the monotonicity of policy improvement when optimizing a theoretically grounded joint objective, and derive a simplified optimization objective based on a set of approximations. We then interpret how this objective enables CoPPO to achieve dynamic credit assignment among agents, thereby alleviating the high-variance issue that arises when agent policies are updated simultaneously. Finally, we demonstrate that CoPPO outperforms several strong baselines and is competitive with the latest multi-agent PPO method (i.e., MAPPO) in typical multi-agent settings, including cooperative matrix games and StarCraft II micromanagement tasks.
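For reference, the single-agent PPO objective that CoPPO extends clips the probability ratio to keep the effective step size bounded:

$$L^{\text{CLIP}}(\theta) = \mathbb{E}_t\Big[\min\big(r_t(\theta)\,\hat{A}_t,\; \operatorname{clip}(r_t(\theta),\, 1-\epsilon,\, 1+\epsilon)\,\hat{A}_t\big)\Big], \qquad r_t(\theta) = \frac{\pi_\theta(a_t \mid s_t)}{\pi_{\theta_{\text{old}}}(a_t \mid s_t)}.$$

The coordination in CoPPO lies in how this ratio-based step-size control is adapted jointly across agents so that simultaneous policy updates do not interfere; the precise multi-agent objective is derived in the paper and not restated here.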
Reinforcement learning in multi-agent scenarios is important for real-world applications but presents challenges beyond those seen in single-agent settings. We present an actor-critic algorithm that trains decentralized policies in multi-agent settings, using centrally computed critics that share an attention mechanism which selects relevant information for each agent at every timestep. This attention mechanism enables more effective and scalable learning in complex multi-agent environments, when compared to recent approaches. Our approach is applicable not only to cooperative settings with shared rewards, but also individualized reward settings, including adversarial settings, as well as settings that do not provide global states, and it makes no assumptions about the action spaces of the agents. As such, it is flexible enough to be applied to most multi-agent learning problems.
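A minimal sketch of the central ingredient (a critic that attends over the other agents' encoded observation-action pairs) is given below. This is a generic scaled-dot-product attention fragment in PyTorch written for illustration; head counts, dimensions, and the way values are aggregated in the actual algorithm may differ.

```python
import math
import torch
import torch.nn as nn

class AttentionCritic(nn.Module):
    """Q_i(o, a): agent i's critic attends over the other agents'
    encoded (observation, action) pairs to select relevant information."""
    def __init__(self, obs_dim, act_dim, embed_dim=64, n_actions=5):
        super().__init__()
        self.encode = nn.Linear(obs_dim + act_dim, embed_dim)   # shared (o_j, a_j) encoder
        self.query = nn.Linear(embed_dim, embed_dim, bias=False)
        self.key = nn.Linear(embed_dim, embed_dim, bias=False)
        self.value = nn.Linear(embed_dim, embed_dim, bias=False)
        self.q_head = nn.Linear(2 * embed_dim, n_actions)       # Q-values for agent i

    def forward(self, own_obs_act, other_obs_acts):
        # own_obs_act: (obs_dim + act_dim,), other_obs_acts: (n_others, obs_dim + act_dim)
        e_i = self.encode(own_obs_act)
        e_j = self.encode(other_obs_acts)
        attn = torch.softmax(self.query(e_i) @ self.key(e_j).T
                             / math.sqrt(e_i.shape[-1]), dim=-1)  # weights over others
        context = attn @ self.value(e_j)                          # weighted summary
        return self.q_head(torch.cat([e_i, context], dim=-1))

critic = AttentionCritic(obs_dim=8, act_dim=5)
q_vals = critic(torch.randn(13), torch.randn(3, 13))              # 3 other agents
```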
We extend the Remember and Forget Experience Replay (ReF-ER) algorithm to multi-agent reinforcement learning (MARL). ReF-ER has been shown to outperform state-of-the-art algorithms for continuous control in problems ranging from the OpenAI Gym to complex fluid flows. In MARL, the dependencies between agents are included in the state-value estimator, and the environment dynamics are modeled via the importance weights used by ReF-ER. In collaborative settings, we find the best performance when the value is estimated from individual rewards and the effects of other agents' actions on the transition map are ignored. We benchmark the performance of ReF-ER MARL on the Stanford Intelligent Systems Laboratory (SISL) environments. We find that employing a single feed-forward neural network for both the policy and the value function in ReF-ER outperforms state-of-the-art algorithms that rely on complex neural network architectures.
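The importance weights mentioned above are the ratio between the current policy's and the behavior policy's probabilities of a stored action; ReF-ER's core rule is to treat a stored experience as "near-policy" only when this ratio stays inside a trust band. The sketch below shows that filtering step in NumPy; the band parameter is an assumed value, and the additional penalty term ReF-ER applies to keep the policy close to the replayed experiences is omitted here.

```python
import numpy as np

def near_policy_mask(logp_current, logp_behavior, c=4.0):
    """Return a boolean mask of experiences whose importance weight
    rho = pi_current(a|s) / pi_behavior(a|s) lies within (1/c, c).
    Far-policy samples (outside the band) are excluded from the update."""
    rho = np.exp(logp_current - logp_behavior)
    return (rho > 1.0 / c) & (rho < c), rho

# Toy usage with stored log-probabilities for a replay batch.
logp_beh = np.array([-1.0, -2.0, -0.5, -3.0])
logp_cur = np.array([-1.1, -0.2, -0.6, -6.0])
mask, rho = near_policy_mask(logp_cur, logp_beh)
print(rho, mask)   # the 2nd and 4th samples are far-policy and get filtered out
```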