Despite significant progress on multi-agent reinforcement learning (MARL) in recent years, coordination in complex domains remains a challenge. Work in MARL often focuses on solving tasks where agents interact with all other agents and entities in the environment; however, we observe that real-world tasks are often composed of several isolated instances of local agent interactions (subtasks), and each agent can meaningfully focus on one subtask to the exclusion of all else in the environment. In these composite tasks, successful policies can often be decomposed into two levels of decision-making: agents are allocated to specific subtasks, and each agent acts effectively with respect to its allocated subtask alone. This decomposed decision-making provides a strong structural inductive bias, significantly reduces agent observation spaces, and encourages subtask-specific policies to be reused and composed during training, rather than treating each new composition of subtasks as unique. We introduce ALMA, a general learning method for taking advantage of these structured tasks. ALMA simultaneously learns a high-level subtask allocation policy and low-level agent policies. We demonstrate that ALMA learns sophisticated coordination behavior in a number of challenging environments, outperforming strong baselines. ALMA's modularity also enables it to better generalize to new environment configurations. Finally, we find that while ALMA can integrate separately trained allocation and action policies, the best performance is obtained only by training all components jointly. Our code is available at https://github.com/shariqiqbal2810/alma
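As a rough illustration of the two decision levels described above, the sketch below pairs a high-level allocation step with subtask-conditioned low-level action selection. The class, the periodic re-allocation rule, and all names are illustrative assumptions for this summary, not the released implementation at the linked repository.

```python
import random

class TwoLevelController:
    """Sketch of a hierarchical controller: allocate agents to subtasks,
    then act using only the observation of the assigned subtask."""

    def __init__(self, allocation_policy, action_policy, realloc_every=10):
        self.allocation_policy = allocation_policy  # (global_obs) -> {agent: subtask}
        self.action_policy = action_policy          # (local_obs, subtask) -> action
        self.realloc_every = realloc_every          # assumed periodic re-allocation
        self.assignment = {}

    def step(self, t, global_obs, local_obs):
        # High level: periodically re-assign agents to subtasks.
        if t % self.realloc_every == 0 or not self.assignment:
            self.assignment = self.allocation_policy(global_obs)
        # Low level: each agent acts with respect to its assigned subtask only.
        return {agent: self.action_policy(local_obs[agent], subtask)
                for agent, subtask in self.assignment.items()}

# Toy usage with random stand-in policies.
ctrl = TwoLevelController(
    allocation_policy=lambda g: {a: random.choice(["attack", "repair"]) for a in g},
    action_policy=lambda obs, subtask: f"{subtask}:{obs}",
)
print(ctrl.step(0, global_obs=["a1", "a2"], local_obs={"a1": 0, "a2": 1}))
```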
Recently, some challenging tasks in multi-agent systems have been solved by some hierarchical reinforcement learning methods. Inspired by the intra-level and inter-level coordination in the human nervous system, we propose a novel value decomposition framework HAVEN based on hierarchical reinforcement learning for fully cooperative multi-agent problems. To address the instability arising from the concurrent optimization of policies between various levels and agents, we introduce the dual coordination mechanism of inter-level and inter-agent strategies by designing reward functions in a two-level hierarchy. HAVEN does not require domain knowledge and pre-training, and can be applied to any value decomposition variant. Our method achieves desirable results on different decentralized partially observable Markov decision process domains and outperforms other popular multi-agent hierarchical reinforcement learning algorithms.
State-of-the-art multi-agent reinforcement learning (MARL) methods have provided promising solutions to a variety of complex problems. Yet, these methods all assume that agents perform synchronized primitive-action execution, so they cannot truly scale to long-horizon real-world multi-agent/robot tasks that inherently require agents/robots to asynchronously reason about high-level action selection at different times. The Macro-Action Decentralized Partially Observable Markov Decision Process (MacDec-POMDP) is a general formalization for asynchronous decision-making under uncertainty in fully cooperative multi-agent tasks. In this thesis, we first propose a set of value-based RL approaches for MacDec-POMDPs, in which agents are allowed to perform asynchronous learning and decision-making with macro-action-value functions under three paradigms: decentralized learning and control, centralized learning and control, and centralized training for decentralized execution (CTDE). Building on this work, we then formulate a set of macro-action-based policy gradient algorithms under the same three training paradigms, in which agents are allowed to directly optimize their parameterized policies in an asynchronous manner. We evaluate our methods both in simulation and on real robots. Empirical results demonstrate the superiority of our approaches in large multi-agent problems and validate the effectiveness of our algorithms in learning high-quality and asynchronous solutions with macro-actions.
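The asynchrony described above amounts to each agent choosing a new macro-action only when its current one terminates, so decision points differ across agents. Below is a minimal event-loop sketch under simplified assumptions (fixed macro-action durations, placeholder names), not the thesis's algorithms.

```python
import random

def run_async_macro_actions(agents, choose_macro, macro_duration, horizon=20):
    """Simulate asynchronous macro-action execution: each agent picks a new
    macro-action only when its previous one finishes, so decisions happen
    at different timesteps for different agents."""
    current = {a: None for a in agents}   # running macro-action per agent
    finish_at = {a: 0 for a in agents}    # timestep when it terminates
    log = []
    for t in range(horizon):
        for a in agents:
            if t >= finish_at[a]:                       # macro-action terminated
                current[a] = choose_macro(a, t)         # asynchronous decision point
                finish_at[a] = t + macro_duration(current[a])
                log.append((t, a, current[a]))
    return log

log = run_async_macro_actions(
    agents=["robot_1", "robot_2"],
    choose_macro=lambda a, t: random.choice(["go_to_room", "pick_object", "wait"]),
    macro_duration=lambda m: {"go_to_room": 5, "pick_object": 3, "wait": 1}[m],
)
print(log)
```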
We develop a multi-agent reinforcement learning (MARL) method to learn scalable control policies for target tracking. Our method can handle an arbitrary number of pursuers and targets; we show results for tasks consisting of up to 1000 pursuers tracking 1000 targets. We use a decentralized, partially observable Markov decision process framework to model the pursuers as agents that receive partial observations (range and bearing) about targets, which move according to fixed, unknown policies. An attention mechanism is used to parameterize the value function of the agents; this mechanism allows us to handle an arbitrary number of targets. Entropy-regularized off-policy RL methods are used to train a stochastic policy, and we discuss how this enables hedging behavior between pursuers that leads to a weak form of cooperation despite fully decentralized control execution. We further develop a masking heuristic that allows training on smaller problems with few pursuers and targets and execution on much larger problems. Thorough simulation experiments, ablation studies, and comparisons to state-of-the-art algorithms are performed to study the scalability of the approach and the robustness of performance to varying numbers of agents and targets.
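A hedged sketch of the two mechanisms mentioned above: attention pools a variable-length set of target features into a fixed-size embedding, and a masking heuristic restricts attention to a few nearby targets so a policy trained on small problems can execute on much larger ones. The k-nearest masking rule and all dimensions are assumptions for illustration, not the paper's exact design.

```python
import numpy as np

def attention_embedding(query, target_feats, mask):
    """Pool a variable number of target features into a fixed-size embedding.

    query:        [d]            pursuer's own embedding.
    target_feats: [n_targets, d] per-target features (e.g. range/bearing encodings).
    mask:         [n_targets]    1 for targets to attend to, 0 to ignore.
    """
    scores = target_feats @ query / np.sqrt(len(query))
    scores = np.where(mask > 0, scores, -np.inf)     # masked targets get zero weight
    weights = np.exp(scores - scores.max())
    weights = weights / weights.sum()
    return weights @ target_feats                    # fixed size regardless of n_targets

def k_nearest_mask(ranges, k):
    """Masking heuristic (assumed): attend only to the k closest targets."""
    mask = np.zeros_like(ranges)
    mask[np.argsort(ranges)[:k]] = 1.0
    return mask

feats = np.random.rand(1000, 16)    # 1000 targets at execution time
ranges = np.random.rand(1000)
emb = attention_embedding(np.random.rand(16), feats, k_nearest_mask(ranges, k=8))
print(emb.shape)  # (16,)
```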
In this paper, we consider cooperative multi-agent reinforcement learning (MARL) with sparse reward. To tackle this problem, we propose a novel method named MASER: MARL with subgoals generated from the experience replay buffer. Under the widely used assumption of centralized training with decentralized execution and consistent Q-value decomposition for MARL, MASER automatically generates proper subgoals for multiple agents from the experience replay buffer by considering both individual Q-values and the total Q-value. Then, MASER designs an individual intrinsic reward for each agent based on an actionable representation relevant to Q-learning, so that the agents reach their subgoals while maximizing the joint action value. Numerical results show that MASER significantly outperforms other state-of-the-art MARL algorithms on the StarCraft II micromanagement benchmark.
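A loose sketch of the subgoal idea, under the assumption that subgoals are buffered states scoring high on a mix of individual and total Q-values and that the intrinsic reward is a distance to the subgoal in some feature space; the scoring weight, distance measure, and names are placeholders, not MASER's exact formulation.

```python
import numpy as np

def select_subgoals(buffer_states, q_individual, q_total, weight=0.5):
    """Score buffered states by a mix of individual and total Q-values and
    pick the best-scoring state as each agent's subgoal.

    buffer_states: [buffer, n_agents, feat] per-agent features of stored states.
    q_individual:  [buffer, n_agents]       individual Q-values at those states.
    q_total:       [buffer]                 total (mixed) Q-values.
    """
    score = weight * q_individual + (1 - weight) * q_total[:, None]  # [buffer, n_agents]
    best = np.argmax(score, axis=0)                                  # best entry per agent
    n_agents = buffer_states.shape[1]
    return buffer_states[best, np.arange(n_agents)]                  # [n_agents, feat]

def intrinsic_reward(agent_feat, subgoal_feat):
    """Reward each agent for moving its representation toward its subgoal."""
    return -np.linalg.norm(agent_feat - subgoal_feat, axis=-1)

subgoals = select_subgoals(np.random.rand(100, 3, 8),
                           np.random.rand(100, 3),
                           np.random.rand(100))
print(intrinsic_reward(np.random.rand(3, 8), subgoals))
```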
Cooperative multi-agent reinforcement learning (MARL) has achieved significant results, most notably by leveraging the representation-learning abilities of deep neural networks. However, large centralized approaches quickly become infeasible as the number of agents scales, and fully decentralized approaches can miss important opportunities for information sharing and coordination. Furthermore, not all agents are equal -- in some cases, individual agents may not even have the ability to send communication to other agents or explicitly model other agents. This paper considers the case where there is a single, powerful, \emph{central agent} that can observe the entire observation space, and there are multiple, low-powered \emph{local agents} that can only receive local observations and are not able to communicate with each other. The central agent's job is to learn what message needs to be sent to different local agents based on the global observations, not by centrally solving the entire problem and sending action commands, but by determining what additional information an individual agent should receive so that it can make a better decision. In this work we present our MARL algorithm \algo, describe where it would be most applicable, and implement it in the cooperative navigation and multi-agent walker domains. Empirical results show that 1) learned communication does indeed improve system performance, 2) results generalize to heterogeneous local agents, and 3) results generalize to different reward structures.
Many real-world problems, such as network packet routing and the coordination of autonomous vehicles, are naturally modelled as cooperative multi-agent systems. There is a great need for new reinforcement learning methods that can efficiently learn decentralised policies for such systems. To this end, we propose a new multi-agent actor-critic method called counterfactual multi-agent (COMA) policy gradients. COMA uses a centralised critic to estimate the Q-function and decentralised actors to optimise the agents' policies. In addition, to address the challenges of multi-agent credit assignment, it uses a counterfactual baseline that marginalises out a single agent's action, while keeping the other agents' actions fixed. COMA also uses a critic representation that allows the counterfactual baseline to be computed efficiently in a single forward pass. We evaluate COMA in the testbed of StarCraft unit micromanagement, using a decentralised variant with significant partial observability. COMA significantly improves average performance over other multi-agent actor-critic methods in this setting, and the best performing agents are competitive with state-of-the-art centralised controllers that get access to the full state.
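The counterfactual baseline can be written compactly. The sketch below assumes a critic that outputs, for each agent, the Q-value of every one of its actions with the other agents' actions held fixed; array names and shapes are illustrative, not the authors' code.

```python
import numpy as np

def coma_advantage(q_values, pi, actions):
    """Counterfactual advantage per agent (COMA-style sketch).

    q_values: [n_agents, n_actions] -- Q(s, u) with the *other* agents' actions
              held fixed, evaluated for every action of agent a.
    pi:       [n_agents, n_actions] -- each agent's current policy probabilities.
    actions:  [n_agents]            -- the joint action actually taken.
    """
    n_agents = q_values.shape[0]
    adv = np.zeros(n_agents)
    for a in range(n_agents):
        q_taken = q_values[a, actions[a]]
        # Counterfactual baseline: marginalize out agent a's own action
        # under its policy, keeping the other agents' actions fixed.
        baseline = np.dot(pi[a], q_values[a])
        adv[a] = q_taken - baseline
    return adv

# Toy usage: 2 agents, 3 actions each.
q = np.array([[1.0, 0.5, 0.2], [0.3, 0.9, 0.1]])
p = np.array([[0.6, 0.3, 0.1], [0.2, 0.7, 0.1]])
print(coma_advantage(q, p, actions=np.array([0, 1])))
```

The critic representation mentioned in the abstract produces the full row `q_values[a]` in a single forward pass, which is what makes this baseline cheap to compute.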
In multi-agent reinforcement learning (MARL), many popular methods, such as VDN and QMIX, are susceptible to a critical multi-agent pathology known as relative overgeneralization (RO), which arises when the optimal joint action's utility falls below that of a sub-optimal joint action in cooperative tasks. RO can cause the agents to get stuck in local optima or fail to solve tasks that require significant coordination between agents within a given timestep. Recent value-based MARL algorithms such as QPLEX and WQMIX can overcome RO to some extent. However, our experimental results show that they can still fail to solve cooperative tasks that exhibit strong RO. In this work, we propose a novel approach called curriculum learning for relative overgeneralization (CURO) to better overcome RO. To solve a target task that exhibits strong RO, in CURO, we first fine-tune the reward function of the target task to generate source tasks that are tailored to the current ability of the learning agent, and train the agent on these source tasks first. Then, to effectively transfer the knowledge acquired in one task to the next, we use a novel transfer learning method that combines value function transfer with buffer transfer, which enables more efficient exploration in the target task. We demonstrate that, when applied to QMIX, CURO overcomes the severe RO problem and significantly improves performance, yielding state-of-the-art results in a variety of cooperative multi-agent tasks, including the challenging StarCraft II micromanagement benchmarks.
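A hedged sketch of the curriculum loop: build easier source tasks by reshaping the target task's reward, train through them in sequence, and carry both the value network state and the replay buffer forward. The callables and the reward-scaling scheme below are placeholder assumptions, not CURO's exact procedure.

```python
def curriculum_learning(make_task, reward_scales, train, transfer):
    """Train through a sequence of reward-shaped source tasks before the target.

    make_task:     callable(scale) -> environment with a reshaped reward.
    reward_scales: e.g. [0.25, 0.5, 1.0]; 1.0 is the original target task.
    train:         callable(env, agent_state) -> (agent_state, replay_buffer).
    transfer:      callable(agent_state, replay_buffer) -> agent_state for the next task.
    """
    agent_state, buffer = None, None
    for scale in reward_scales:
        env = make_task(scale)
        if agent_state is not None:
            # Value-function transfer + buffer transfer into the next task.
            agent_state = transfer(agent_state, buffer)
        agent_state, buffer = train(env, agent_state)
    return agent_state

# Toy usage with stand-in callables.
final = curriculum_learning(
    make_task=lambda s: f"task(reward*{s})",
    reward_scales=[0.25, 0.5, 1.0],
    train=lambda env, st: (f"trained_on_{env}", f"buffer_of_{env}"),
    transfer=lambda st, buf: f"{st}+{buf}",
)
print(final)
```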
Many real-world applications can be formulated as multi-agent cooperation problems, such as network packet routing and the coordination of autonomous vehicles. The emergence of deep reinforcement learning (DRL) provides a promising approach to multi-agent cooperation through the interaction of agents and environments. However, traditional DRL solutions suffer from high dimensionality during policy search when multiple agents have continuous action spaces. Besides, the dynamics of agents' policies make the training non-stationary. To address these issues, we propose a hierarchical reinforcement learning approach with high-level decision-making and low-level individual control for efficient policy search. In particular, the cooperation of multiple agents can be learned efficiently in a high-level discrete action space, while low-level individual control is reduced to single-agent reinforcement learning. In addition to hierarchical reinforcement learning, we also propose an opponent modeling network to model other agents' policies during the learning process. In contrast to end-to-end DRL approaches, our approach reduces the learning complexity by decomposing the overall task into subtasks in a hierarchical way. To evaluate the efficiency of our approach, we conduct a real-world case study on a cooperative lane-change scenario. Both simulation and real-world experiments show the superiority of our approach in collision rate and convergence speed.
Cooperative multi-agent reinforcement learning (MARL) has made prominent progress in recent years. For training efficiency and scalability, most of the MARL algorithms make all agents share the same policy or value network. However, in many complex multi-agent tasks, different agents are expected to possess specific abilities to handle different subtasks. In those scenarios, sharing parameters indiscriminately may lead to similar behavior across all agents, which will limit the exploration efficiency and degrade the final performance. To balance the training complexity and the diversity of agent behavior, we propose a novel framework to learn dynamic subtask assignment (LDSA) in cooperative MARL. Specifically, we first introduce a subtask encoder to construct a vector representation for each subtask according to its identity. To reasonably assign agents to different subtasks, we propose an ability-based subtask selection strategy, which can dynamically group agents with similar abilities into the same subtask. In this way, agents dealing with the same subtask share their learning of specific abilities and different subtasks correspond to different specific abilities. We further introduce two regularizers: one to increase the representation difference between subtasks, and one to stabilize training by discouraging agents from frequently changing subtasks. Empirical results show that LDSA learns reasonable and effective subtask assignment for better collaboration and significantly improves the learning performance on the challenging StarCraft II micromanagement benchmark and Google Research Football.
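As a rough sketch of the assignment step, the snippet below embeds subtasks from their identities, treats each agent's trajectory embedding as an "ability" vector, and samples a subtask from the softmax over their similarity, so agents with similar abilities tend to land on the same subtask. Dimensions and the sampling choice are illustrative assumptions rather than LDSA's exact architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def subtask_assignment(agent_ability, subtask_embed, greedy=False):
    """Ability-based subtask selection: agents with similar ability vectors
    tend to select (and therefore share training on) the same subtask.

    agent_ability: [n_agents, d]   embeddings of each agent's history.
    subtask_embed: [n_subtasks, d] embeddings built from subtask identities.
    """
    logits = agent_ability @ subtask_embed.T                      # [n_agents, n_subtasks]
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)
    if greedy:
        return probs.argmax(axis=1)
    return np.array([rng.choice(len(p), p=p) for p in probs])     # sampled assignment

abilities = rng.normal(size=(5, 8))   # 5 agents
subtasks = rng.normal(size=(3, 8))    # 3 subtasks
print(subtask_assignment(abilities, subtasks))
```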
Collaborative multi-agent reinforcement learning (MARL) has been widely used in many practical applications, where each agent makes a decision based on its own observation. Most mainstream methods treat each local observation as a whole when modeling the decentralized local utility functions. However, they ignore the fact that local observation information can be further divided into several entities, and that only a subset of these entities is helpful for the model's inference. Moreover, the importance of different entities may change over time. To improve the performance of decentralized policies, attention mechanisms have been used to capture features of local information. Nevertheless, existing attention models rely on dense, fully connected graphs and cannot well perceive important states. To this end, we propose a sparse-state MARL (S2RL) framework, which utilizes a sparse attention mechanism to discard irrelevant information in local observations. The local utility functions are estimated through self-attention and sparse-attention mechanisms separately, and are then combined into a standard joint value function and an auxiliary joint value function in the central critic. We design the S2RL framework as a plug-and-play module, making it general enough to be applied to various methods. Extensive experiments on StarCraft II show that S2RL can significantly improve the performance of many state-of-the-art methods.
This work considers the problem of learning cooperative policies in complex, partially observable domains without explicit communication. We extend three classes of single-agent deep reinforcement learning algorithms based on policy gradient, temporal-difference error, and actor-critic methods to cooperative multi-agent systems. We introduce a set of cooperative control tasks that includes tasks with discrete and continuous actions, as well as tasks that involve hundreds of agents. The three approaches are evaluated against each other using different neural architectures, training procedures, and reward structures. Using deep reinforcement learning with a curriculum learning scheme, our approach can solve problems that were previously considered intractable by most multi-agent reinforcement learning algorithms. We show that policy gradient methods tend to outperform both temporal-difference and actor-critic methods when using feed-forward neural architectures. We also show that recurrent policies, while more difficult to train, outperform feed-forward policies on our evaluation tasks.
Deep cooperative multi-agent reinforcement learning has demonstrated great success on a variety of complex control tasks. However, recent advances in multi-agent learning mainly focus on value decomposition while leaving entity interactions still intertwined, which easily leads to over-fitting on noisy interactions between entities. In this work, we introduce a novel interaction pattern disentangling (OPT) method, which disentangles not only the joint value function into agent-wise value functions for decentralized execution, but also the entity interactions into interaction prototypes, each of which represents an underlying interaction pattern within a subgroup of the entities. OPT facilitates filtering out the noisy interactions between irrelevant entities and thus significantly improves generalizability as well as interpretability. Specifically, OPT introduces a sparse disagreement mechanism to encourage sparsity and diversity among the discovered interaction prototypes. The model then selectively recombines these prototypes into a compact interaction pattern through an aggregator with learnable weights. To alleviate the training instability caused by partial observability, we propose to maximize the mutual information between the aggregation weights and each agent's history of behaviors. Experiments on both single-task and multi-task benchmarks demonstrate that the proposed method yields results superior to state-of-the-art counterparts. Our code will be made publicly available.
In this paper, we propose a novel benchmark called the StarCraft Multi-Agent Challenges+ (SMAC+), in which agents learn to complete multi-stage tasks and to use environmental factors without a precise reward function. The previous challenge (SMAC), regarded as a standard benchmark for multi-agent reinforcement learning, mainly concerns ensuring that all agents cooperatively eliminate approaching adversaries only through fine manipulation with an obvious reward function. This challenge, by contrast, is interested in the exploration capability of MARL algorithms for efficiently learning implicit multi-stage tasks and environmental factors as well as micro-control. The study covers both offensive and defensive scenarios. In the offensive scenarios, agents must learn to first find the opponents and then eliminate them. The defensive scenarios require agents to use terrain features; for example, agents need to position themselves behind protective structures to make it harder for enemies to attack them. We investigate MARL algorithms under SMAC+ and observe that recent approaches perform well in scenarios similar to the previous challenge but misbehave in offensive scenarios. In addition, we observe that enhanced exploration methods have a positive effect on performance but are unable to completely solve all scenarios. This study proposes new directions for future research.
Adequate strategizing of agents' behaviors is essential to solving cooperative MARL problems. One intuitively beneficial yet uncommon method in this domain is predicting agents' future behaviors and planning accordingly. Leveraging this point, we propose a two-level hierarchical architecture that combines a novel information-theoretic objective with a trajectory prediction model to learn a strategy. To this end, we introduce a latent policy that learns two types of latent strategies: individual $z_A$, and relational $z_R$, using a modified Graph Attention Network module to extract interaction features. We encourage each agent to behave according to the strategy by conditioning its local $Q$ functions on $z_A$, and we further equip agents with a shared $Q$ function that conditions on $z_R$. Additionally, we introduce two regularizers to allow predicted trajectories to be accurate and rewarding. Empirical results on Google Research Football (GRF) and StarCraft (SC) II micromanagement tasks show that our method establishes a new state of the art, being, to the best of our knowledge, the first MARL algorithm to solve all super hard SC II scenarios as well as the GRF full game with a win rate higher than $95\%$, thus outperforming all existing methods. Videos and a brief overview of the methods and results are available at: https://sites.google.com/view/hier-strats-marl/home.
The creation and destruction of agents in cooperative multi-agent reinforcement learning (MARL) is a critical, under-explored area of research. Current MARL algorithms often assume that the number of agents within a group remains fixed throughout an experiment. However, in many practical problems, an agent may terminate before its teammates. This early-termination issue presents a challenge: the terminated agent must learn from the group's success or failure, which occurs beyond its own existence. We refer to the propagation of value from rewards earned by remaining teammates back to terminated agents as posthumous credit assignment. Current MARL methods handle this problem by placing terminated agents in an absorbing state until the entire group of agents reaches a termination condition. Although absorbing states enable existing algorithms and APIs to handle terminated agents without modification, they raise practical issues of training efficiency and resource usage. In this work, we first show that, in a supervised learning task, sample complexity increases with the number of absorbing states, while an attention-based architecture is more robust to variable-size inputs. We then propose a novel architecture for an existing state-of-the-art MARL algorithm that uses attention instead of a fully connected layer with absorbing states. Finally, we show that this novel architecture significantly outperforms the standard architecture on tasks in which agents are created or destroyed within an episode, as well as on standard multi-agent coordination tasks.
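A minimal sketch of the architectural change described above: pool a variable-size team with attention-style weights and simply mask out terminated agents, instead of padding them with absorbing states through a fixed-size fully connected layer. The scoring rule and shapes are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

def masked_team_embedding(agent_feats, alive):
    """Pool features of a variable-size team with attention-style weights,
    ignoring terminated agents instead of feeding absorbing-state placeholders.

    agent_feats: [n_agents, d] encodings of every agent slot.
    alive:       [n_agents]    1 while the agent exists, 0 after termination.
    """
    scores = agent_feats @ agent_feats.mean(axis=0)   # crude relevance score per agent
    scores = np.where(alive > 0, scores, -np.inf)     # drop terminated agents entirely
    weights = np.exp(scores - scores[alive > 0].max())
    weights /= weights.sum()
    return weights @ agent_feats                      # same output size for any team size

feats = np.random.rand(4, 8)
print(masked_team_embedding(feats, alive=np.array([1, 1, 0, 1])))
```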
The reinforcement learning paradigm is a popular way to address problems that have only limited environmental feedback, rather than correctly labeled examples, as is common in other machine learning contexts. While significant progress has been made to improve learning in a single task, the idea of transfer learning has only recently been applied to reinforcement learning tasks. The core idea of transfer is that experience gained in learning to perform one task can help improve learning performance in a related, but different, task. In this article we present a framework that classifies transfer learning methods in terms of their capabilities and goals, and then use it to survey the existing literature, as well as to suggest future directions for transfer learning work.
Reinforcement learning in multi-agent scenarios is important for real-world applications but presents challenges beyond those seen in single-agent settings. We present an actor-critic algorithm that trains decentralized policies in multi-agent settings, using centrally computed critics that share an attention mechanism which selects relevant information for each agent at every timestep. This attention mechanism enables more effective and scalable learning in complex multi-agent environments, when compared to recent approaches. Our approach is applicable not only to cooperative settings with shared rewards, but also individualized reward settings, including adversarial settings, as well as settings that do not provide global states, and it makes no assumptions about the action spaces of the agents. As such, it is flexible enough to be applied to most multi-agent learning problems.
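A minimal sketch of the attention step in the centrally computed critic, assuming simple scaled dot-product attention over per-agent encodings of observations and actions; the layer sizes, the single attention head, and all names are illustrative, not the paper's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionCritic(nn.Module):
    """Per-agent value head that attends over the other agents' encodings."""

    def __init__(self, obs_act_dim, hidden=64):
        super().__init__()
        self.encode = nn.Linear(obs_act_dim, hidden)
        self.query = nn.Linear(hidden, hidden, bias=False)
        self.key = nn.Linear(hidden, hidden, bias=False)
        self.value = nn.Linear(hidden, hidden, bias=False)
        self.head = nn.Linear(2 * hidden, 1)  # value estimate per agent

    def forward(self, obs_act):                  # obs_act: [n_agents, obs_act_dim]
        n = obs_act.shape[0]
        e = F.relu(self.encode(obs_act))         # [n_agents, hidden]
        q, k, v = self.query(e), self.key(e), self.value(e)
        scores = q @ k.t() / k.shape[-1] ** 0.5  # [n_agents, n_agents]
        # Each agent attends only to the *other* agents, never to itself.
        scores = scores.masked_fill(torch.eye(n, dtype=torch.bool), float('-inf'))
        attn = F.softmax(scores, dim=-1)         # relevance of the other agents
        others = attn @ v                        # attention-weighted information
        return self.head(torch.cat([e, others], dim=-1))  # [n_agents, 1]

critic = AttentionCritic(obs_act_dim=10)
print(critic(torch.randn(4, 10)).shape)  # torch.Size([4, 1])
```

Sharing the attention module across all agents' critics is what keeps the scheme scalable as the number of agents grows.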
Efficient exploration for deep cooperative multi-agent reinforcement learning (MARL) still remains a challenge in complex coordination problems. In this paper, we introduce a novel episodic multi-agent reinforcement learning method with curiosity-driven exploration, called EMC. We leverage an insight from popular factorized MARL algorithms that the "induced" individual Q-values, i.e., the individual utility functions used for local execution, are embeddings of the local action-observation histories and can capture the interaction between agents due to reward backpropagation during centralized training. Therefore, we use the prediction errors of individual Q-values as intrinsic rewards for coordinated exploration, and we utilize an episodic memory to exploit informative explored experience to boost policy training. As the dynamics of an agent's individual Q-value function captures the novelty of states and the influence of other agents, our intrinsic reward can induce coordinated exploration toward new or promising states. We illustrate the advantages of our method by didactic examples and demonstrate its significant outperformance over state-of-the-art MARL baselines on challenging tasks in the StarCraft II micromanagement benchmark.
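At its core, the curiosity signal described above is a prediction error on individual Q-values. A minimal sketch, assuming a separate predictor network is regressed toward the current individual utilities and the squared error is added to the team reward (names and the scaling are placeholders):

```python
import numpy as np

def emc_intrinsic_reward(q_individual, q_predicted, scale=1.0):
    """Curiosity-style intrinsic reward: prediction error of individual Q-values.

    q_individual: [n_agents, n_actions] individual utilities from the factorized
                  value network (targets for the predictor).
    q_predicted:  [n_agents, n_actions] outputs of a separately trained predictor.
    Returns a scalar exploration bonus added to the external team reward.
    """
    error = np.mean((q_individual - q_predicted) ** 2)
    return scale * error

# Toy usage: large disagreement -> large exploration bonus.
q_ind = np.array([[1.0, 0.2], [0.4, 0.8]])
q_pred = np.array([[0.1, 0.2], [0.4, 0.1]])
print(emc_intrinsic_reward(q_ind, q_pred))
```

The episodic memory mentioned in the abstract is a separate component that stores and replays highly rewarding experience; it is not shown here.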
Multi-agent reinforcement learning (MARL) has recently achieved great success in various domains. However, with black-box neural network architectures, existing MARL methods make decisions in an opaque manner, preventing humans from understanding the learned knowledge and how input observations influence decisions. Our solution is MixRTs, a mixing recurrent soft decision tree architecture: a novel interpretable structure that can represent explicit decision processes via the root-to-leaf paths of decision trees. We introduce a novel recurrent structure into soft decision trees to address partial observability, and we estimate the joint action values by linearly mixing the outputs of the recurrent trees based only on local observations. Theoretical analysis shows that MixRTs guarantees the structural constraints of additivity and monotonicity in the factorization. We evaluate MixRTs on a range of challenging StarCraft II tasks. Experimental results show that our interpretable learning framework obtains competitive performance compared with widely investigated baselines, while providing more straightforward explanations of the decision-making process and domain knowledge.
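The building block is a soft decision tree: each inner node routes the input left or right with a sigmoid gate, and the output is the probability-weighted mix of leaf values, so every root-to-leaf path contributes with an explicit weight. A minimal non-recurrent sketch follows (the recurrent cell and the monotonic mixing across agents are omitted; names are illustrative).

```python
import numpy as np

def soft_decision_tree(x, inner_w, inner_b, leaf_values):
    """Evaluate a soft decision tree of depth d.

    x:           [in_dim]           input features (e.g. a local observation).
    inner_w:     [2**d - 1, in_dim] weights of the inner (routing) nodes.
    inner_b:     [2**d - 1]         biases of the inner nodes.
    leaf_values: [2**d]             values stored at the leaves.
    The output is the expectation of leaf values under the soft routing,
    so every root-to-leaf path contributes with an explicit probability.
    """
    d = int(np.log2(len(leaf_values)))
    gate = 1.0 / (1.0 + np.exp(-(inner_w @ x + inner_b)))  # P(go right) at each node
    path_prob = np.ones(len(leaf_values))
    for leaf in range(len(leaf_values)):
        node = 0
        for depth in range(d):
            go_right = (leaf >> (d - 1 - depth)) & 1
            path_prob[leaf] *= gate[node] if go_right else (1 - gate[node])
            node = 2 * node + 1 + go_right          # child index in heap layout
    return float(path_prob @ leaf_values)

rng = np.random.default_rng(0)
depth, in_dim = 3, 5
print(soft_decision_tree(rng.normal(size=in_dim),
                         rng.normal(size=(2**depth - 1, in_dim)),
                         rng.normal(size=2**depth - 1),
                         rng.normal(size=2**depth)))
```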