We propose the novel Few-Shot Teamwork (FST) problem, in which skilled agents trained in a team to complete one task are combined with skilled agents from different tasks and must collectively learn to adapt to an unseen but related task. We discuss how the FST problem can be seen as addressing two separate problems: reducing the experience needed to train a team of agents to complete a complex task, and cooperating with unfamiliar teammates to complete a new task. Progress towards solving FST could lead to advances in both multi-agent reinforcement learning and ad hoc teamwork.
OW QMIX, CW QMIX, QTRAN, QMIX, and VDN are state-of-the-art algorithms for solving Dec-POMDP domains, yet they fail on domains that require complex agent cooperation, such as box-pushing. We give a two-stage algorithm to solve such problems. In the first stage we solve the single-agent problem (a POMDP) and obtain optimal policy traces. In the second stage we solve the multi-agent problem (a Dec-POMDP) using the single-agent optimal policy traces. This single-agent-to-multi-agent approach has a clear advantage over OW QMIX, CW QMIX, QTRAN, QMIX, and VDN on complex cooperative domains.
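A minimal tabular sketch of the two-stage recipe described in this abstract, under strong simplifying assumptions: a toy 1-D corridor stands in for the box-pushing domain, and the stage-2 learner reuses the stage-1 greedy trace only as a reward-shaping prior rather than as the paper's actual trace-conditioning mechanism.

```python
# Toy two-stage sketch: stage 1 solves a single-agent corridor with tabular
# Q-learning and records the greedy trace; stage 2 reuses that trace as a
# shaping prior (a stand-in for the multi-agent stage). All assumed, not the
# paper's setup.
import numpy as np

GOAL, N_POS, ACTIONS = 5, 6, (-1, +1)          # 1-D corridor, move left/right

def step(pos, a):
    new = min(max(pos + ACTIONS[a], 0), N_POS - 1)
    return new, (1.0 if new == GOAL else -0.01), new == GOAL

def q_learning(shaping=None, episodes=500, eps=0.2, alpha=0.5, gamma=0.95):
    Q = np.zeros((N_POS, len(ACTIONS)))
    rng = np.random.default_rng(0)
    for _ in range(episodes):
        pos, done = 0, False
        while not done:
            a = rng.integers(len(ACTIONS)) if rng.random() < eps else int(Q[pos].argmax())
            nxt, r, done = step(pos, a)
            if shaping is not None:            # bonus for following the stage-1 trace
                r += 0.05 * shaping.get((pos, a), 0.0)
            Q[pos, a] += alpha * (r + gamma * (0 if done else Q[nxt].max()) - Q[pos, a])
            pos = nxt
    return Q

# Stage 1: solve the single-agent problem and extract the greedy policy trace.
Q1 = q_learning()
trace, pos = {}, 0
while pos != GOAL:
    a = int(Q1[pos].argmax()); trace[(pos, a)] = 1.0; pos, _, _ = step(pos, a)

# Stage 2: reuse the trace when learning again (in the paper, each agent in the
# Dec-POMDP stage would be guided by the single-agent optimal traces).
Q2 = q_learning(shaping=trace)
print("greedy actions with trace shaping:", Q2.argmax(axis=1))
```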
Ad hoc teamwork is the research problem of designing agents that can collaborate with new teammates without prior coordination. This survey makes two contributions: first, it provides a structured description of the different facets of the ad hoc teamwork problem; second, it discusses the progress that has been made in the area so far and identifies the immediate and long-term open problems that need to be addressed in ad hoc teamwork.
In multi-agent reinforcement learning (MARL), many popular methods, such as VDN and QMIX, are susceptible to a critical multi-agent pathology known as relative overgeneralization (RO), which arises when the optimal joint action's utility falls below that of a sub-optimal joint action in cooperative tasks. RO can cause the agents to get stuck in local optima or fail to solve tasks that require significant coordination between agents within a given timestep. Recent value-based MARL algorithms such as QPLEX and WQMIX can overcome RO to some extent. However, our experimental results show that they can still fail to solve cooperative tasks that exhibit strong RO. In this work, we propose a novel approach called curriculum learning for relative overgeneralization (CURO) to better overcome RO. To solve a target task that exhibits strong RO, in CURO, we first fine-tune the reward function of the target task to generate source tasks that are tailored to the current ability of the learning agent, and train the agent on these source tasks first. Then, to effectively transfer the knowledge acquired in one task to the next, we use a novel transfer learning method that combines value function transfer with buffer transfer, which enables more efficient exploration in the target task. We demonstrate that, when applied to QMIX, CURO overcomes the severe RO problem and significantly improves performance, yielding state-of-the-art results in a variety of cooperative multi-agent tasks, including the challenging StarCraft II micromanagement benchmarks.
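The following toy sketch illustrates the ingredients named above under simplifying assumptions (a tabular chain world in place of QMIX on StarCraft II): source tasks are generated by adjusting a reward-density parameter, and both the value table and the replay buffer are carried from one curriculum stage to the next.

```python
# Hedged CURO-style toy: reward fine-tuning generates easier source tasks,
# and the value function plus replay buffer are transferred between stages.
# The chain world and reward parameterization are illustrative assumptions.
import numpy as np
from collections import deque

N, GOAL = 8, 7                                    # chain of states, goal at the end

def make_reward(density):
    """density=1.0 -> dense shaped reward, density=0.0 -> sparse goal-only reward."""
    def reward(s_next):
        return 1.0 if s_next == GOAL else density * 0.1 * s_next / GOAL
    return reward

def train(Q, buffer, reward_fn, episodes=300, eps=0.3, alpha=0.5, gamma=0.95):
    rng = np.random.default_rng(0)
    for _ in range(episodes):
        s = 0
        for _ in range(4 * N):
            a = rng.integers(2) if rng.random() < eps else int(Q[s].argmax())
            s_next = min(max(s + (1 if a else -1), 0), N - 1)
            buffer.append((s, a, s_next))
            # replay a minibatch from the (possibly transferred) buffer,
            # relabelling rewards under the current task's reward function
            for bs, ba, bn in [buffer[rng.integers(len(buffer))] for _ in range(4)]:
                target = reward_fn(bn) + gamma * (0.0 if bn == GOAL else Q[bn].max())
                Q[bs, ba] += alpha * (target - Q[bs, ba])
            s = s_next
            if s == GOAL:
                break
    return Q, buffer

Q, buffer = np.zeros((N, 2)), deque(maxlen=5000)
for density in (1.0, 0.5, 0.0):                   # curriculum: dense -> sparse target task
    Q, buffer = train(Q, buffer, make_reward(density))   # value + buffer transfer
print("greedy policy on the sparse target task:", Q.argmax(axis=1))
```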
The development of autonomous agents that can interact with other agents to accomplish a given task is a core area of research in artificial intelligence and machine learning. Towards this goal, the Autonomous Agents Research Group develops novel machine learning algorithms for autonomous systems control, with a specific focus on deep reinforcement learning and multi-agent reinforcement learning. Research problems include scalable learning of coordinated agent policies and inter-agent communication; reasoning about the behaviors, goals, and composition of other agents from limited observations; and sample-efficient learning based on intrinsic motivation, curriculum learning, causal inference, and representation learning. This article provides a broad overview of the group's ongoing research portfolio and discusses open problems for future directions.
The reinforcement learning paradigm is a popular way to address problems that have only limited environmental feedback, rather than correctly labeled examples, as is common in other machine learning contexts. While significant progress has been made to improve learning in a single task, the idea of transfer learning has only recently been applied to reinforcement learning tasks. The core idea of transfer is that experience gained in learning to perform one task can help improve learning performance in a related, but different, task. In this article we present a framework that classifies transfer learning methods in terms of their capabilities and goals, and then use it to survey the existing literature, as well as to suggest future directions for transfer learning work.
The study of generalization in deep reinforcement learning (RL) aims to produce RL algorithms whose policies generalize well to new, unseen situations at deployment time, avoiding overfitting to their training environments. Tackling this is vital if we are to deploy reinforcement learning algorithms in real-world scenarios, where the environment will be diverse, dynamic, and unpredictable. This survey is an overview of this nascent field. We provide a unifying formalism and terminology for discussing the different generalization problems, building on previous works. We go on to categorize existing benchmarks for generalization, as well as current methods for tackling the generalization problem. Finally, we provide a critical discussion of the current state of the field, including recommendations for future work. Among other conclusions, we argue that taking a purely procedural content generation approach to benchmark design is not conducive to progress in generalization; we suggest fast online adaptation and tackling RL-specific problem settings as some areas for future methods; and we recommend building benchmarks in underexplored problem settings such as offline RL generalization and reward-function variation.
Successful deployment of multi-agent reinforcement learning often requires agents to adapt their behavior. In this work, we discuss the problem of teamwork adaptation, in which a team of agents needs to adapt their policies to solve novel tasks with limited fine-tuning. Motivated by the intuition that agents need to be able to identify and distinguish tasks in order to adapt their behavior to the current task, we propose to learn multi-agent task embeddings (MATE). These task embeddings are trained using an encoder-decoder architecture optimized to reconstruct the transition and reward functions which uniquely identify tasks. We show that a team of agents is able to adapt to novel tasks when provided with task embeddings. We propose three MATE training paradigms: independent MATE, centralized MATE, and mixed MATE, which differ in the information used for the task encoding. We show that the embeddings learned by MATE identify tasks and provide useful information which agents leverage during adaptation to novel tasks.
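A minimal sketch of the task-embedding idea in this abstract, with an assumed synthetic task family (reward depends on a hidden per-task goal) standing in for multi-agent environments: an encoder aggregates a context of (s, a, r, s') transitions into an embedding, and a decoder is trained to reconstruct rewards and next states from it.

```python
# Sketch of encoder-decoder task embeddings; synthetic single-agent task
# family and all network sizes are assumptions for illustration only.
import torch
import torch.nn as nn

torch.manual_seed(0)
EMB, CTX = 4, 16                                       # embedding size, context transitions

encoder = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, EMB))
decoder = nn.Sequential(nn.Linear(2 + EMB, 32), nn.ReLU(), nn.Linear(32, 2))  # -> (r, s')
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)

def sample_task_batch(batch=32):
    goal = torch.rand(batch, 1) * 2 - 1                # hidden task parameter
    s = torch.rand(batch, CTX, 1) * 2 - 1
    a = torch.rand(batch, CTX, 1) * 0.2 - 0.1
    s_next = s + a
    r = -(s_next - goal.unsqueeze(1)).abs()            # reward uniquely identifies the task
    return s, a, r, s_next

for step in range(2000):
    s, a, r, s_next = sample_task_batch()
    ctx = torch.cat([s, a, r, s_next], dim=-1)         # (B, CTX, 4)
    z = encoder(ctx).mean(dim=1)                       # aggregate to one task embedding
    pred = decoder(torch.cat([s, a, z.unsqueeze(1).expand(-1, CTX, -1)], dim=-1))
    loss = ((pred - torch.cat([r, s_next], dim=-1)) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
    if step % 500 == 0:
        print(f"step {step:4d}  reconstruction loss {loss.item():.4f}")
```

In the multi-agent setting, the embedding z would additionally be fed to each agent's policy during fine-tuning on a novel task.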
Curriculum learning for reinforcement learning is an increasingly popular technique that involves training an agent on a sequence of intermediate tasks, called a curriculum, to increase the agent's performance and learning speed. This paper introduces a novel paradigm for curriculum generation based on progression and mapping functions. While progression functions specify the complexity of the environment at any given time, mapping functions generate environments of a specific complexity. Different progression functions are introduced, including an autonomous online task progression based on the agent's performance. The benefits and wide applicability of our approach are shown by empirically comparing it to two state-of-the-art curriculum learning algorithms on six domains.
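The sketch below illustrates the progression/mapping split described above on an assumed toy domain: the progression function turns the agent's recent success rate into a complexity level, and the mapping function instantiates a concrete environment (here, a corridor of that length) at that complexity.

```python
# Performance-based online progression plus a mapping function; the corridor
# environment, learner, and thresholds are illustrative assumptions.
import numpy as np

def progression(success_rate, current_complexity, max_complexity=10):
    """Grow the task complexity once the agent is competent at the current one."""
    if success_rate > 0.9 and current_complexity < max_complexity:
        return current_complexity + 1
    return current_complexity

def mapping(complexity):
    """Map a complexity level to a concrete environment: a corridor of that length."""
    return {"length": complexity, "goal": complexity - 1}

def run_episode(env, Q, eps, rng, alpha=0.5, gamma=0.95):
    s, done, steps = 0, False, 0
    while not done and steps < 4 * env["length"]:
        a = rng.integers(2) if rng.random() < eps else int(Q[s].argmax())
        s_next = min(max(s + (1 if a else -1), 0), env["length"] - 1)
        r, done = (1.0, True) if s_next == env["goal"] else (-0.01, False)
        Q[s, a] += alpha * (r + gamma * (0 if done else Q[s_next].max()) - Q[s, a])
        s, steps = s_next, steps + 1
    return done

rng = np.random.default_rng(0)
complexity, Q, successes = 2, np.zeros((11, 2)), []    # Q sized for the largest corridor
for episode in range(2000):
    env = mapping(complexity)
    successes.append(run_episode(env, Q, eps=0.2, rng=rng))
    if len(successes) >= 20:                           # progress on a rolling window
        new_c = progression(np.mean(successes[-20:]), complexity)
        if new_c != complexity:
            complexity, successes = new_c, []
print("final complexity reached by the curriculum:", complexity)
```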
Despite significant progress in multi-agent reinforcement learning (MARL) in recent years, coordination in complex domains remains a challenge. Work in MARL often focuses on solving tasks in which agents interact with all other agents and entities in the environment; however, we observe that real-world tasks are often composed of several isolated instances of local agent interactions (subtasks), and each agent can meaningfully focus on one subtask to the exclusion of everything else in the environment. In these composite tasks, successful policies can often be decomposed into two levels of decision-making: agents are allocated to specific subtasks, and each agent acts effectively only with respect to its assigned subtask. This decomposed decision-making provides a strong structural inductive bias, significantly reduces agent observation spaces, and encourages subtask-specific policies to be reused and composed during training, rather than treating each new composition of subtasks as unique. We introduce ALMA, a general learning method for taking advantage of such structured tasks. ALMA simultaneously learns a high-level subtask allocation policy and low-level agent policies. We demonstrate that ALMA learns sophisticated coordination behavior in a number of challenging environments, outperforming strong baselines. ALMA's modularity also enables it to better generalize to new environment configurations. Finally, we find that while ALMA can integrate separately trained allocation and action policies, the best performance is obtained only by training all components jointly. Our code is available at https://github.com/shariqiqbal2810/alma
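A toy sketch of the two-level decomposition described above, not the ALMA architecture itself: a bandit-style high-level learner assigns each agent to a subtask, and a stubbed low-level rollout (an assumed hidden skill matrix plus noise) returns the subtask-local payoff.

```python
# High-level allocation + stubbed low-level subtask rollouts; the skill
# matrix and bandit learner are illustrative assumptions, not ALMA.
import numpy as np

rng = np.random.default_rng(0)
N_AGENTS, N_SUBTASKS = 4, 3
true_skill = rng.random((N_AGENTS, N_SUBTASKS))        # hidden per-agent subtask competence

alloc_value = np.zeros((N_AGENTS, N_SUBTASKS))         # high-level allocation estimates
counts = np.ones((N_AGENTS, N_SUBTASKS))

def low_level_return(agent, subtask):
    """Stand-in for a low-level policy rollout restricted to the subtask's local view."""
    return true_skill[agent, subtask] + 0.1 * rng.standard_normal()

for episode in range(3000):
    eps = max(0.05, 1.0 - episode / 1500)
    assignment = []
    for agent in range(N_AGENTS):                      # high level: pick a subtask per agent
        if rng.random() < eps:
            assignment.append(int(rng.integers(N_SUBTASKS)))
        else:
            assignment.append(int(alloc_value[agent].argmax()))
    for agent, subtask in enumerate(assignment):       # low level: act within the subtask only
        ret = low_level_return(agent, subtask)
        counts[agent, subtask] += 1
        alloc_value[agent, subtask] += (ret - alloc_value[agent, subtask]) / counts[agent, subtask]

print("learned allocation:", alloc_value.argmax(axis=1))
print("best allocation:   ", true_skill.argmax(axis=1))
```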
In reinforcement learning applications like robotics, agents usually need to deal with various input/output features when specified with different state/action spaces by their developers or physical restrictions. This often leads to unnecessary re-training from scratch and considerable sample inefficiency, especially when agents follow similar solution steps to achieve tasks. In this paper, we aim to transfer similar high-level goal-transition knowledge to alleviate the challenge. Specifically, we propose PILoT, i.e., Planning Immediate Landmarks of Targets. PILoT utilizes universal decoupled policy optimization to learn a goal-conditioned state planner; it then distills a goal-planner to plan immediate landmarks in a model-free style that can be shared among different agents. In our experiments, we show the power of PILoT on various transfer challenges, including few-shot transfer across action spaces and dynamics, from low-dimensional vector states to image inputs, and from a simple robot to a complicated morphology; we also illustrate a zero-shot transfer solution from a simple 2D navigation task to the harder Ant-Maze task.
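A hedged sketch of the landmark-planning idea above, using assumed synthetic 2-D navigation data rather than the paper's pipeline: a goal-conditioned planner is trained to output the next intermediate landmark between the current state and the goal, and any downstream agent, whatever its action space, could query it for immediate targets.

```python
# Goal-conditioned landmark planner on synthetic data; the 2-D toy task and
# landmark spacing are assumptions, not PILoT's actual training procedure.
import torch
import torch.nn as nn

torch.manual_seed(0)
planner = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.Adam(planner.parameters(), lr=1e-3)
STEP = 0.1                                              # landmark spacing

for it in range(3000):
    s = torch.rand(256, 2)                              # current states
    g = torch.rand(256, 2)                              # goals
    direction = g - s
    dist = direction.norm(dim=-1, keepdim=True).clamp(min=1e-6)
    landmark = s + direction / dist * torch.minimum(dist, torch.full_like(dist, STEP))
    pred = planner(torch.cat([s, g], dim=-1))
    loss = ((pred - landmark) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

# A downstream agent with a different action space or morphology can query the
# shared planner for its next immediate target instead of re-learning the
# long-horizon structure from scratch.
s, g = torch.tensor([[0.1, 0.1]]), torch.tensor([[0.9, 0.9]])
print("next landmark toward goal:", planner(torch.cat([s, g], dim=-1)).detach().numpy())
```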
This work considers the problem of learning cooperative policies in complex, partially observable domains without explicit communication. We extend three classes of single-agent deep reinforcement learning algorithms based on policy gradient, temporal-difference error, and actor-critic methods to cooperative multi-agent systems. We introduce a set of cooperative control tasks that includes tasks with discrete and continuous actions, as well as tasks that involve hundreds of agents. The three approaches are evaluated against each other using different neural architectures, training procedures, and reward structures. Using deep reinforcement learning with a curriculum learning scheme, our approach can solve problems that were previously considered intractable by most multi-agent reinforcement learning algorithms. We show that policy gradient methods tend to outperform both temporal-difference and actor-critic methods when using feed-forward neural architectures. We also show that recurrent policies, while more difficult to train, outperform feed-forward policies on our evaluation tasks.
Lifelong learning aims to create AI systems that continuously and incrementally learn during a lifetime, similar to biological learning. Attempts so far have met problems, including catastrophic forgetting, interference among tasks, and the inability to exploit previous knowledge. While considerable research has focused on learning multiple input distributions, typically in classification, lifelong reinforcement learning (LRL) must also deal with variations in the state and transition distributions, and in the reward functions. Modulating masks, recently developed for classification, are particularly suitable to deal with such a large spectrum of task variations. In this paper, we adapted modulating masks to work with deep LRL, specifically PPO and IMPALA agents. The comparison with LRL baselines in both discrete and continuous RL tasks shows competitive performance. We further investigated the use of a linear combination of previously learned masks to exploit previous knowledge when learning new tasks: not only is learning faster, the algorithm solves tasks that we could not otherwise solve from scratch due to extremely sparse rewards. The results suggest that RL with modulating masks is a promising approach to lifelong learning, to the composition of knowledge to learn increasingly complex tasks, and to knowledge reuse for efficient and faster learning.
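The sketch below shows the modulating-mask mechanic on a tiny supervised problem rather than on PPO or IMPALA agents: a frozen backbone weight matrix is modulated by per-task masks, and a new task starts from a combination (here a simple average) of previously learned masks before fine-tuning. The soft sigmoid mask and the toy regression targets are simplifying assumptions.

```python
# Per-task modulating masks over a frozen backbone; soft masks and toy
# regression tasks are simplifications of the lifelong-RL setting above.
import torch
import torch.nn as nn

torch.manual_seed(0)
IN, OUT = 8, 8
backbone = torch.randn(OUT, IN) / IN ** 0.5            # shared, frozen backbone weights

def masked_forward(x, scores):
    return x @ (backbone * torch.sigmoid(scores)).t()  # modulate weights, then apply

def train_mask(target_fn, init_scores, steps=2000):
    scores = nn.Parameter(init_scores.clone())
    opt = torch.optim.Adam([scores], lr=1e-2)
    for _ in range(steps):
        x = torch.randn(128, IN)
        loss = ((masked_forward(x, scores) - target_fn(x)) ** 2).mean()
        opt.zero_grad(); loss.backward(); opt.step()
    return scores.detach(), loss.item()

# Learn one mask per "task" (here: two fixed random linear targets).
targets = [torch.randn(OUT, IN) * 0.3 for _ in range(2)]
masks = []
for W in targets:
    m, loss = train_mask(lambda x, W=W: x @ W.t(), torch.zeros(OUT, IN))
    masks.append(m)
    print("task mask trained, final loss:", round(loss, 4))

# New task: initialize its mask from the previously learned masks
# (a simple average here) and fine-tune only briefly.
combo = torch.stack(masks).mean(dim=0)
W_new = 0.5 * targets[0] + 0.5 * targets[1]
_, loss_new = train_mask(lambda x: x @ W_new.t(), combo, steps=500)
print("new task starting from combined masks, loss after 500 steps:", round(loss_new, 4))
```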
Advances in ad hoc teamwork have the potential to create agents that collaborate robustly in real-world applications. Agents deployed in the real world, however, are vulnerable to adversaries intent on subverting them. There has been little research in ad hoc teamwork that assumes the presence of adversaries. We explain the importance of extending ad hoc teamwork to include the presence of adversaries and clarify why this problem is difficult. We then propose some directions for new research opportunities in ad hoc teamwork that lead to more robust multi-agent cyber-physical infrastructure systems.
Cooperative multi-agent reinforcement learning (MARL) has achieved significant results, most notably by leveraging the representation-learning abilities of deep neural networks. However, large centralized approaches quickly become infeasible as the number of agents scale, and fully decentralized approaches can miss important opportunities for information sharing and coordination. Furthermore, not all agents are equal -- in some cases, individual agents may not even have the ability to send communication to other agents or explicitly model other agents. This paper considers the case where there is a single, powerful, \emph{central agent} that can observe the entire observation space, and there are multiple, low-powered \emph{local agents} that can only receive local observations and are not able to communicate with each other. The central agent's job is to learn what message needs to be sent to different local agents based on the global observations, not by centrally solving the entire problem and sending action commands, but by determining what additional information an individual agent should receive so that it can make a better decision. In this work we present our MARL algorithm \algo, describe where it would be most applicable, and implement it in the cooperative navigation and multi-agent walker domains. Empirical results show that 1) learned communication does indeed improve system performance, 2) results generalize to heterogeneous local agents, and 3) results generalize to different reward structures.
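A compact sketch of the asymmetric setting described above, trained end-to-end on an assumed differentiable toy objective rather than with the paper's MARL algorithm: a central network sees the global observation and emits one message per local agent, and each local agent acts only on its restricted view plus that message.

```python
# Central agent learns per-agent messages; local agents act on local obs +
# message. The toy targets, sizes, and supervised training are assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)
N_AGENTS, GLOBAL, LOCAL, MSG = 3, 6, 2, 2

central = nn.Sequential(nn.Linear(GLOBAL, 64), nn.ReLU(), nn.Linear(64, N_AGENTS * MSG))
locals_ = nn.ModuleList(nn.Sequential(nn.Linear(LOCAL + MSG, 32), nn.ReLU(), nn.Linear(32, 1))
                        for _ in range(N_AGENTS))
opt = torch.optim.Adam(list(central.parameters()) + list(locals_.parameters()), lr=1e-3)

for it in range(3000):
    g = torch.randn(128, GLOBAL)                        # global observation
    msgs = central(g).view(128, N_AGENTS, MSG)          # one learned message per local agent
    loss = 0.0
    for i, pi in enumerate(locals_):
        local_obs = g[:, 2 * i: 2 * i + LOCAL]          # agent i's restricted view
        target = g.sum(dim=-1, keepdim=True)            # good decisions need global info
        action = pi(torch.cat([local_obs, msgs[:, i]], dim=-1))
        loss = loss + ((action - target) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
    if it % 1000 == 0:
        print(f"iter {it:4d}  joint loss {loss.item():.3f}")
```

The point of the sketch is the information flow: the central network never issues action commands; it only learns what each low-powered agent needs to know.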
We present CompoSuite, an open-source simulated robotic manipulation benchmark for compositional multi-task reinforcement learning (RL). Each CompoSuite task requires a particular robot arm to manipulate one individual object to achieve a task objective while avoiding an obstacle. This compositional definition of the tasks endows CompoSuite with two remarkable properties. First, varying the robot/object/objective/obstacle elements leads to hundreds of RL tasks, each of which requires a meaningfully different behavior. Second, RL approaches can be evaluated specifically for their ability to learn the compositional structure of the tasks. This latter ability to functionally decompose problems would enable intelligent agents to identify and exploit commonalities between learning tasks to handle large varieties of highly diverse problems. We benchmark existing single-task, multi-task, and compositional learning algorithms on various training settings, and assess their capability to compositionally generalize to unseen tasks. Our evaluation exposes the shortcomings of existing RL approaches with respect to compositionality and opens new avenues for investigation.
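A small illustration of the compositional task definition described above; the element lists and the module dictionary are illustrative assumptions rather than the actual CompoSuite API. It shows how four small axes multiply into hundreds of distinct tasks, and how a compositional learner could keep one module per element instead of one policy per combination.

```python
# Illustrative compositional task grid; element names and the "module"
# dictionary are assumed placeholders, not the real CompoSuite interface.
from itertools import product

robots = ["IIWA", "Jaco", "Kinova3", "Panda"]
objects = ["box", "dumbbell", "plate", "hollow_box"]
objectives = ["pick_place", "push", "shelf", "trashcan"]
obstacles = ["none", "wall", "object_door", "goal_wall"]

tasks = list(product(robots, objects, objectives, obstacles))
print(f"{len(robots)}x{len(objects)}x{len(objectives)}x{len(obstacles)} = {len(tasks)} tasks")

# A compositional agent keeps one module per axis element and composes them per
# task, instead of learning every combination from scratch.
modules = {name: f"params_for_{name}" for axis in (robots, objects, objectives, obstacles)
           for name in axis}
robot, obj, objective, obstacle = tasks[0]
policy_stack = [modules[robot], modules[obj], modules[objective], modules[obstacle]]
print("modules composed for", tasks[0], "->", policy_stack)
```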
Building autonomous machines that can explore open-ended environments, discover possible interactions, and autonomously build repertoires of skills is a general objective of artificial intelligence. Developmental approaches argue that this can only be achieved by autonomous and intrinsically motivated learning agents that can generate, select, and learn to solve their own problems. In recent years, we have seen a convergence of developmental approaches, and developmental robotics in particular, with deep reinforcement learning (RL) methods, forming the new domain of developmental machine learning. Within this new domain, we review here a set of methods in which deep RL algorithms are trained to tackle the developmental robotics problem of the autonomous acquisition of open-ended repertoires of skills. Intrinsically motivated goal-conditioned RL algorithms train agents to learn to represent, generate, and pursue their own goals. The self-generation of goals requires learning compact goal encodings as well as their associated goal-achievement functions, which leads to new challenges compared to traditional RL algorithms designed to tackle pre-defined sets of goals using external reward signals. This paper proposes a typology of these methods at the intersection of deep RL and developmental approaches, surveys recent approaches, and discusses future avenues.
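A tabular sketch of the autotelic loop described above: the agent samples goals from states it has discovered, receives only an internal reward from a goal-achievement function, and learns a goal-conditioned value function. The grid world, uniformly random episode starts, and the simple hindsight relabelling step are simplifying assumptions.

```python
# Self-generated goals + goal-achievement reward + goal-conditioned Q-table.
# Grid world, random starts, and hindsight relabelling are toy assumptions.
import numpy as np

SIZE, HORIZON = 4, 24
MOVES = [(-1, 0), (1, 0), (0, -1), (0, 1)]
rng = np.random.default_rng(0)
Q = np.zeros((SIZE * SIZE, SIZE * SIZE, 4))            # Q[state, goal, action]
visited, results = {0}, []

def achieved(state, goal):                             # goal-achievement function
    return state == goal

def update(s, g, a, s_next, alpha=0.5, gamma=0.95):
    done = achieved(s_next, g)
    target = float(done) + gamma * (0.0 if done else Q[s_next, g].max())
    Q[s, g, a] += alpha * (target - Q[s, g, a])

for episode in range(10000):
    goal = int(rng.choice(list(visited)))              # self-generated goal
    s, traj = int(rng.integers(SIZE * SIZE)), []
    for _ in range(HORIZON):
        a = rng.integers(4) if rng.random() < 0.25 else int(Q[s, goal].argmax())
        r, c = divmod(s, SIZE)
        r = min(max(r + MOVES[a][0], 0), SIZE - 1)
        c = min(max(c + MOVES[a][1], 0), SIZE - 1)
        s_next = r * SIZE + c
        visited.add(s_next)                            # grow the pool of possible goals
        traj.append((s, a, s_next))
        update(s, goal, a, s_next)                     # internal reward only
        s = s_next
        if achieved(s, goal):
            break
    results.append(achieved(s, goal))
    final = traj[-1][2]                                # hindsight: the goal it did achieve
    for ss, aa, sn in traj:
        update(ss, final, aa, sn)

print("goals discovered:", len(visited), "of", SIZE * SIZE)
print("success on self-generated goals (last 1000 episodes):", np.mean(results[-1000:]))
```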
Unexpected events can occur in the operation of multi-agent teams, from homogeneous robot swarms to heterogeneous human-autonomy teams. Although operational effectiveness is the primary objective in the multi-agent task allocation problem, the decision-making framework must be intelligent enough to manage unexpected task load with limited resources. Otherwise, operational effectiveness drops drastically as overloaded agents face unforeseen risks. In this work, we present a decision-making framework for multi-agent teams that learns to consider load management through decentralized reinforcement learning, avoiding unnecessary resource usage. We illustrate the effect of load management on team performance and explore agent behaviors in example scenarios. Furthermore, a measure of agent importance in collaboration is developed for handling potential overload situations.
Multi-agent reinforcement learning (MARL) has seen considerable progress in the last decade, but many challenges remain, such as high sample complexity and slow convergence to stable policies, which need to be overcome before widespread deployment is possible. However, many real-world environments already, in practice, deploy sub-optimal or heuristic approaches for generating policies. An interesting question is how best to use such approaches as advisors to help improve reinforcement learning in multi-agent domains. In this paper, we provide a principled framework for incorporating action recommendations from online sub-optimal advisors in multi-agent settings. We describe the problem of advising multiple intelligent reinforcement agents (ADMIRAL) in nonrestrictive general-sum stochastic game environments and present two novel Q-learning-based algorithms: Admiral - Decision Making (Admiral-DM) and Admiral - Advisor Evaluation (Admiral-AE), which allow us to improve learning by appropriately incorporating advice from an advisor (Admiral-DM) and to evaluate the effectiveness of an advisor (Admiral-AE). We analyze the algorithms theoretically and provide fixed-point guarantees regarding their learning in general-sum stochastic games. Furthermore, extensive experiments illustrate that these algorithms can be used in a variety of environments, have performance that compares favorably to other related baselines, can scale to large state-action spaces, and are robust to poor advice from advisors.
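A deliberately simplified, single-agent sketch in the spirit of the advising setup described above (the actual algorithms operate on multiple agents in general-sum stochastic games): the learner sometimes executes a noisy advisor's suggested action with decaying reliance, while keeping a running estimate of how well advised episodes pay off.

```python
# Advisor-guided Q-learning with decaying reliance plus a running advisor
# evaluation. Corridor task, noisy advisor, and single-agent setting are
# simplifying assumptions, not the paper's algorithms.
import numpy as np

N, GOAL = 10, 9
rng = np.random.default_rng(0)
Q = np.zeros((N, 2))
advice_value, advice_count = 0.0, 0                    # online evaluation of the advisor

def advisor(state):
    """A noisy heuristic advisor: usually points right, sometimes wrong."""
    return 1 if rng.random() < 0.8 else 0

for episode in range(1000):
    reliance = max(0.05, 1.0 - episode / 500)          # decaying reliance on advice
    s, ep_return, used_advice = 0, 0.0, False
    for _ in range(4 * N):
        if rng.random() < reliance:
            a, used_advice = advisor(s), True          # incorporate the action advice
        else:
            a = rng.integers(2) if rng.random() < 0.1 else int(Q[s].argmax())
        s_next = min(max(s + (1 if a else -1), 0), N - 1)
        r = 1.0 if s_next == GOAL else -0.01
        ep_return += r
        done = s_next == GOAL
        Q[s, a] += 0.5 * (r + 0.95 * (0 if done else Q[s_next].max()) - Q[s, a])
        s = s_next
        if done:
            break
    if used_advice:                                    # evaluate the advisor from advised episodes
        advice_count += 1
        advice_value += (ep_return - advice_value) / advice_count

print("estimated value of advised episodes:", round(advice_value, 3))
print("greedy policy:", Q.argmax(axis=1))
```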
Mean field theory provides an effective way of scaling multi-agent reinforcement learning algorithms to environments with many agents, where the other agents can be abstracted by a virtual mean agent. In this paper, we extend mean field multi-agent algorithms to multiple types. The types enable the relaxation of a core assumption in mean field reinforcement learning, namely that all agents in the environment are playing almost similar strategies and have the same goal. We conduct experiments on three different testbeds for the field of many-agent reinforcement learning, based on the standard MAgent framework. We consider two different kinds of mean field environments: a) games where agents belong to predefined types that are known a priori, and b) games where the type of each agent is unknown and therefore must be learned from observations. We introduce new algorithms for each type of game and demonstrate their superiority over state-of-the-art algorithms that assume all agents belong to the same type, as well as over other baseline algorithms in the MAgent framework.
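A stateless toy sketch of the multi-type mean-field idea above: instead of conditioning on a single virtual mean agent, each agent's value estimate conditions on a separate (discretized) mean action per type. The two-type payoff, the shared per-type table, and the repeated stateless game are assumptions far simpler than the MAgent-scale settings in the paper.

```python
# Q-values condition on per-type mean actions; payoff, discretization, and
# the stateless repeated game are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
N_PER_TYPE, BINS, ACTIONS = 20, 5, 2
Q = np.zeros((2, ACTIONS, BINS, BINS))                 # Q[type, own_action, mean_t0, mean_t1]

def payoff(agent_type, action, mean_by_type):
    # type 0 prefers to match its own type's mean; type 1 prefers to oppose type 0.
    if agent_type == 0:
        return 1.0 - abs(action - mean_by_type[0])
    return 1.0 - abs(action - (1.0 - mean_by_type[0]))

def to_bin(mean):
    return min(int(mean * BINS), BINS - 1)

actions = rng.integers(ACTIONS, size=(2, N_PER_TYPE))
for step in range(5000):
    eps = max(0.05, 1.0 - step / 2500)
    mean_by_type = actions.mean(axis=1)                # one mean action per agent type
    b0, b1 = to_bin(mean_by_type[0]), to_bin(mean_by_type[1])
    new_actions = actions.copy()
    for t in range(2):
        for i in range(N_PER_TYPE):
            a = rng.integers(ACTIONS) if rng.random() < eps else int(Q[t, :, b0, b1].argmax())
            r = payoff(t, a, mean_by_type)
            Q[t, a, b0, b1] += 0.1 * (r - Q[t, a, b0, b1])   # stateless update (gamma = 0)
            new_actions[t, i] = a
    actions = new_actions

print("final mean action per type:", actions.mean(axis=1))
```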