Multi-robot manipulation tasks involve various control entities that can be separated into dynamically independent parts. A typical example of such real-world tasks is dual-arm manipulation. Naively learning to solve such tasks with reinforcement learning is often infeasible due to the sample complexity and exploration requirements, which grow with the dimensionality of the action and state spaces. Instead, we would like to handle such environments as multi-agent systems and have several agents control parts of the whole. However, decentralizing the generation of actions requires coordination across agents through a channel limited to information central to the task. This paper proposes an approach to coordinating multi-robot manipulation through learned latent action spaces that are shared across different agents. We validate our method in simulated multi-robot manipulation tasks and demonstrate improvement over previous baselines in terms of sample efficiency and learning performance.
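To make the latent-action idea above concrete, the sketch below shows one way a low-dimensional latent action shared across agents could be decoded into per-arm joint commands. It is a minimal illustration under assumed network sizes and names (PyTorch), not the authors' implementation.

```python
# Minimal sketch of a shared latent action space for two arms (illustrative only;
# module sizes and names are assumptions, not the authors' implementation).
import torch
import torch.nn as nn

class LatentActionDecoder(nn.Module):
    """Maps a low-dimensional latent action, shared across agents, to one arm's joint command."""
    def __init__(self, latent_dim=4, action_dim=7, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, action_dim), nn.Tanh(),  # normalized joint targets
        )

    def forward(self, z):
        return self.net(z)

# Each arm keeps its own decoder, but both policies act in the same low-dimensional
# latent space, so coordination only has to happen over z rather than the full joint space.
decoders = [LatentActionDecoder() for _ in range(2)]
z = torch.randn(1, 4)                      # latent action proposed by an agent's policy
arm_actions = [d(z) for d in decoders]     # per-arm 7-DoF commands decoded from the shared latent
print([a.shape for a in arm_actions])
```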
Robotic manipulators are widely used in modern manufacturing processes. However, their deployment in unstructured environments remains an open problem. To deal with the variety, complexity, and uncertainty of real-world manipulation tasks, it is essential to develop flexible frameworks that reduce assumptions on the characteristics of the environment. In recent years, reinforcement learning (RL) has shown great results for single-arm robotic manipulation, but research focused on dual-arm manipulation is still rare. From a classical control perspective, solving such tasks usually involves complex modeling of the interactions between the two manipulators and the objects encountered in the task, as well as coupling the two robots at the control level. Instead, in this work we explore the applicability of model-free RL to dual-arm assembly. As we aim for methods that are not limited to dual-arm assembly but apply to dual-arm manipulation in general, we keep modeling efforts to a minimum. Hence, to avoid modeling the interaction between the two robots and the assembly tools used, we present a modular approach with two decentralized single-arm controllers that are coupled by a single centralized learned policy. We reduce the modeling effort to a minimum by using only sparse rewards. Our architecture enables successful assembly and simple transfer from simulation to the real world. We demonstrate the effectiveness of the framework on dual-arm peg-in-hole and analyze sample efficiency and success rates for different action spaces. Moreover, we compare results for different clearances and demonstrate disturbance recovery and robustness when dealing with position uncertainties. Finally, we zero-shot transfer policies trained in simulation to the real world and evaluate their performance.
State-of-the-art multi-agent reinforcement learning (MARL) methods have provided promising solutions to a variety of complex problems. Yet, these methods all assume that agents perform synchronized primitive-action execution, so they are not truly scalable to long-horizon real-world multi-agent/robot tasks that inherently require agents/robots to asynchronously reason about high-level action selection at varying time durations. The macro-action decentralized partially observable Markov decision process (MacDec-POMDP) is a general formalization for asynchronous decision-making under uncertainty in fully cooperative multi-agent tasks. In this thesis, we first propose a family of value-based RL methods for MacDec-POMDPs, in which agents are allowed to perform asynchronous learning and decision-making with macro-action value functions in three paradigms: decentralized learning and control, centralized learning and control, and centralized training for decentralized execution (CTDE). Building on the above work, we then formulate a set of macro-action-based policy gradient algorithms under the three training paradigms, in which agents are allowed to directly optimize their parameterized policies in an asynchronous manner. We evaluate our methods both in simulation and on real robots. Empirical results demonstrate the superiority of our approaches in large multi-agent problems and validate the effectiveness of our algorithms in learning high-quality and asynchronous solutions with macro-actions.
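The asynchronous, macro-action-based execution described above can be illustrated with a toy loop in which each agent only selects a new macro-action when its current one terminates, so decisions are not synchronized across agents. The option names, durations, and selection rule below are assumptions for illustration only.

```python
# Toy sketch of asynchronous macro-action execution (illustrative only; not the
# MacDec-POMDP algorithms themselves).
import random

n_agents, horizon = 3, 12
macro = [None] * n_agents          # currently running macro-action per agent
remaining = [0] * n_agents         # steps left until that macro-action terminates

def select_macro_action():
    """Stand-in for a high-level policy: pick a macro-action and a (stochastic) duration."""
    return f"option_{random.randint(0, 2)}", random.randint(1, 4)

for t in range(horizon):
    for i in range(n_agents):
        if remaining[i] == 0:                  # only this agent makes a high-level decision now
            macro[i], remaining[i] = select_macro_action()
        remaining[i] -= 1                      # one primitive step of the running option elapses
    print(t, macro)
```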
Cooperative multi-agent reinforcement learning (MARL) has achieved significant results, most notably by leveraging the representation-learning abilities of deep neural networks. However, large centralized approaches quickly become infeasible as the number of agents scales, and fully decentralized approaches can miss important opportunities for information sharing and coordination. Furthermore, not all agents are equal: in some cases, individual agents may not even have the ability to send communication to other agents or explicitly model other agents. This paper considers the case where there is a single, powerful central agent that can observe the entire observation space, and there are multiple, low-powered local agents that can only receive local observations and are not able to communicate with each other. The central agent's job is to learn what message needs to be sent to different local agents based on the global observations, not by centrally solving the entire problem and sending action commands, but by determining what additional information an individual agent should receive so that it can make a better decision. In this work we present our MARL algorithm \algo, describe where it would be most applicable, and implement it in the cooperative navigation and multi-agent walker domains. Empirical results show that 1) learned communication does indeed improve system performance, 2) results generalize to heterogeneous local agents, and 3) results generalize to different reward structures.
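A minimal sketch of the central-agent/local-agent split described above: a central network reads the global observation and emits one message per local agent, and each local agent acts on its own observation plus its message. Architecture and sizes are assumptions, not the paper's model.

```python
# Illustrative sketch of a central agent emitting per-agent messages (assumed sizes).
import torch
import torch.nn as nn

class CentralAgent(nn.Module):
    """Reads the global observation and emits one message vector per local agent."""
    def __init__(self, global_dim=32, msg_dim=8, n_agents=3, hidden=64):
        super().__init__()
        self.n_agents, self.msg_dim = n_agents, msg_dim
        self.net = nn.Sequential(
            nn.Linear(global_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_agents * msg_dim),
        )

    def forward(self, global_obs):
        return self.net(global_obs).view(-1, self.n_agents, self.msg_dim)

class LocalAgent(nn.Module):
    """Acts on its own observation plus the received message; no agent-to-agent channel."""
    def __init__(self, obs_dim=10, msg_dim=8, n_actions=5, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + msg_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, obs, msg):
        return self.net(torch.cat([obs, msg], dim=-1))

central = CentralAgent()
locals_ = [LocalAgent() for _ in range(3)]
messages = central(torch.randn(1, 32))
logits = [agent(torch.randn(1, 10), messages[:, i]) for i, agent in enumerate(locals_)]
print([l.shape for l in logits])
```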
Recently, model-based agents have achieved better performance than model-free ones using the same computational budget and training time in single-agent environments. However, due to the complexity of multi-agent systems, it is difficult to learn a model of the environment, and significant compounding error may hinder the learning process when model-based methods are applied to multi-agent tasks. This paper proposes an implicit model-based multi-agent reinforcement learning method built on value decomposition methods. Under this method, agents can interact with the learned virtual environment and evaluate the current state value according to imagined future states in the latent space, giving the agents foresight. Our approach can be applied to any multi-agent value decomposition method. The experimental results show that our method improves sample efficiency in different partially observable Markov decision process domains.
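The idea of evaluating states through imagined futures in a latent space could look roughly like the sketch below, where a learned latent dynamics model is rolled forward under the policy and the resulting value estimates are averaged. Network sizes and the simple averaging scheme are assumptions, not the proposed method.

```python
# Sketch of value estimation with imagined latent rollouts (assumed architecture).
import torch
import torch.nn as nn

latent_dim, act_dim = 16, 4
dynamics = nn.Sequential(nn.Linear(latent_dim + act_dim, 64), nn.ReLU(), nn.Linear(64, latent_dim))
value = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, 1))
policy = nn.Sequential(nn.Linear(latent_dim, 64), nn.Tanh(), nn.Linear(64, act_dim), nn.Tanh())

def imagined_value(z, horizon=3, gamma=0.99):
    """Roll the learned latent model forward under the policy and average discounted value estimates."""
    total = value(z)
    for k in range(1, horizon + 1):
        z = dynamics(torch.cat([z, policy(z)], dim=-1))   # imagined next latent state
        total = total + (gamma ** k) * value(z)
    return total / (horizon + 1)

print(imagined_value(torch.randn(1, latent_dim)))
```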
Reinforcement learning in multi-agent scenarios is important for real-world applications but presents challenges beyond those seen in single-agent settings. We present an actor-critic algorithm that trains decentralized policies in multi-agent settings, using centrally computed critics that share an attention mechanism which selects relevant information for each agent at every timestep. This attention mechanism enables more effective and scalable learning in complex multi-agent environments when compared to recent approaches. Our approach is applicable not only to cooperative settings with shared rewards, but also to individualized reward settings, including adversarial settings, as well as settings that do not provide global states, and it makes no assumptions about the action spaces of the agents. As such, it is flexible enough to be applied to most multi-agent learning problems.
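The attention-sharing centralized critic described above can be sketched roughly as follows: each agent's observation-action pair is encoded, and an attention layer selects relevant information from the agents' encodings before a Q-value head. This is a simplified stand-in (it lets an agent attend to itself, and all sizes are assumed), not the authors' code.

```python
# Simplified sketch of a centralized critic with a shared attention mechanism.
import torch
import torch.nn as nn

class AttentionCritic(nn.Module):
    def __init__(self, obs_dim=10, act_dim=4, embed=32, n_agents=3):
        super().__init__()
        self.encode = nn.Linear(obs_dim + act_dim, embed)          # shared obs-action encoder
        self.attn = nn.MultiheadAttention(embed, num_heads=4, batch_first=True)
        self.q_head = nn.Linear(2 * embed, 1)                      # own encoding + attended info -> Q

    def forward(self, obs, acts):
        # obs: (batch, n_agents, obs_dim), acts: (batch, n_agents, act_dim)
        e = torch.relu(self.encode(torch.cat([obs, acts], dim=-1)))
        attended, _ = self.attn(e, e, e)                           # each agent selects relevant peers
        return self.q_head(torch.cat([e, attended], dim=-1))       # per-agent Q values

critic = AttentionCritic()
q = critic(torch.randn(2, 3, 10), torch.randn(2, 3, 4))
print(q.shape)  # torch.Size([2, 3, 1])
```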
Multi-agent deep reinforcement learning has been applied to solve a variety of complex problems with either discrete or continuous action spaces and has achieved great success. However, most real-world environments cannot be described by discrete action spaces or continuous action spaces alone, and few works have used deep reinforcement learning (DRL) to address multi-agent problems with hybrid action spaces. We therefore propose a novel algorithm, multi-agent hybrid soft actor-critic (MAHSAC), to fill this gap. The algorithm follows the centralized training with decentralized execution (CTDE) paradigm and extends the soft actor-critic (SAC) algorithm to handle hybrid action space problems in multi-agent settings based on maximum entropy. Our experiments run on a simple multi-agent particle world with continuous observations and discrete action spaces, along with some basic simulated physics. Experimental results show that MAHSAC performs well in terms of training speed, stability, and resistance to interference, and that it outperforms existing independent deep hybrid learning methods in both cooperative and competitive scenarios.
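One way to parameterize the kind of hybrid action space these methods target is a policy with a categorical head for the discrete part and a Gaussian head for the continuous part, as sketched below. This is an assumed, simplified parameterization, not the MAHSAC implementation.

```python
# Sketch of a hybrid (discrete + continuous) policy head (assumed parameterization).
import torch
import torch.nn as nn

class HybridPolicy(nn.Module):
    def __init__(self, obs_dim=8, n_discrete=4, cont_dim=2, hidden=64):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        self.logits = nn.Linear(hidden, n_discrete)          # which discrete action to take
        self.mu = nn.Linear(hidden, cont_dim)                # continuous parameters of the action
        self.log_std = nn.Parameter(torch.zeros(cont_dim))

    def forward(self, obs):
        h = self.trunk(obs)
        disc = torch.distributions.Categorical(logits=self.logits(h))
        cont = torch.distributions.Normal(self.mu(h), self.log_std.exp())
        return disc, cont

policy = HybridPolicy()
disc, cont = policy(torch.randn(1, 8))
action = (disc.sample(), torch.tanh(cont.sample()))          # (discrete choice, squashed continuous part)
entropy = disc.entropy() + cont.entropy().sum(-1)            # a maximum-entropy term can cover both parts
print(action, entropy.item())
```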
This work considers the problem of learning cooperative policies in complex, partially observable domains without explicit communication. We extend three classes of single-agent deep reinforcement learning algorithms based on policy gradient, temporal-difference error, and actor-critic methods to cooperative multi-agent systems. We introduce a set of cooperative control tasks that includes tasks with discrete and continuous actions, as well as tasks that involve hundreds of agents. The three approaches are evaluated against each other using different neural architectures, training procedures, and reward structures. Using deep reinforcement learning with a curriculum learning scheme, our approach can solve problems that were previously considered intractable by most multi-agent reinforcement learning algorithms. We show that policy gradient methods tend to outperform both temporal-difference and actor-critic methods when using feed-forward neural architectures. We also show that recurrent policies, while more difficult to train, outperform feed-forward policies on our evaluation tasks.
For an autonomous agent to fulfill a wide range of user-specified goals at test time, it must be able to learn broadly applicable and general-purpose skill repertoires. Furthermore, to provide the requisite level of generality, these skills must handle raw sensory input such as images. In this paper, we propose an algorithm that acquires such general-purpose skills by combining unsupervised representation learning and reinforcement learning of goal-conditioned policies. Since the particular goals that might be required at test-time are not known in advance, the agent performs a self-supervised "practice" phase where it imagines goals and attempts to achieve them. We learn a visual representation with three distinct purposes: sampling goals for self-supervised practice, providing a structured transformation of raw sensory inputs, and computing a reward signal for goal reaching. We also propose a retroactive goal relabeling scheme to further improve the sample-efficiency of our method. Our off-policy algorithm is efficient enough to learn policies that operate on raw image observations and goals for a real-world robotic system, and substantially outperforms prior techniques.
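A rough sketch of the goal-imagination and latent-distance-reward ideas above is shown below; the small encoder stands in for a trained VAE, and the unit-Gaussian goal sampling, image size, and function names are assumptions.

```python
# Sketch of latent goal sampling and a latent-distance reward (illustrative stand-ins).
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Flatten(), nn.Linear(48 * 48 * 3, 128), nn.ReLU(), nn.Linear(128, 16))

def latent_reward(obs_img, goal_img):
    """Reward is the negative distance between current and goal images in latent space."""
    with torch.no_grad():
        z_obs, z_goal = encoder(obs_img), encoder(goal_img)
    return -torch.norm(z_obs - z_goal, dim=-1)

def imagine_goal(batch=1, latent_dim=16):
    """Self-supervised practice: sample a goal from the (assumed unit-Gaussian) latent prior."""
    return torch.randn(batch, latent_dim)

obs = torch.rand(1, 3, 48, 48)
goal = torch.rand(1, 3, 48, 48)
print(latent_reward(obs, goal))
print(imagine_goal().shape)
```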
Synchronizing decisions across multiple agents in realistic settings is problematic, since it requires agents to wait for other agents to terminate and to communicate reliably about termination. Ideally, agents should instead learn and execute asynchronously. Such asynchronous methods also allow temporally extended actions that can take different amounts of time depending on the situation and the action being executed. Unfortunately, current policy gradient methods are not applicable in asynchronous settings, because they assume that agents synchronously reason about action selection at every time step. To allow asynchronous learning and decision-making, we formulate a set of asynchronous multi-agent actor-critic methods that allow agents to directly optimize asynchronous policies in three standard training paradigms: decentralized learning, centralized learning, and centralized training for decentralized execution. Empirical results in a variety of realistic domains (in simulation and on hardware) demonstrate the superiority of our approaches in large multi-agent problems and validate the effectiveness of our algorithms for learning high-quality and asynchronous solutions.
Learning robot manipulation skills through reinforcement learning (RL) usually requires the design of reward functions. Recent advances in this area have shown that using sparse rewards, i.e., rewarding the agent only upon successful task completion, can lead to better policies. However, state-action space exploration is more difficult in this setting. Recent approaches to RL with sparse rewards rely on high-quality human demonstrations of the task, but these can be costly, time-consuming, or even impossible to obtain. In this paper, we propose a novel and effective approach that does not require human demonstrations. We observe that every robot manipulation task can be viewed as involving motion from the perspective of the manipulated object, i.e., the object can learn how to reach the goal state on its own. To exploit this idea, we introduce a framework that first obtains an object motion policy using a realistic physics simulator. This policy is then used to generate auxiliary rewards, called simulated locomotion demonstration rewards (SLDRs), which enable us to learn the robot manipulation policy. The proposed approach has been evaluated on 13 tasks of increasing complexity, achieving higher success rates and faster learning compared to alternative algorithms. SLDRs are especially beneficial for tasks such as multi-object stacking and non-rigid object manipulation.
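The auxiliary-reward idea above (reward the robot for making the object follow the motion that the object's own policy demonstrated in simulation) could be shaped roughly as below; the exponential shaping, weights, and function names are assumptions, not the paper's exact reward.

```python
# Toy sketch of an auxiliary reward derived from a simulated object-motion demonstration.
import numpy as np

def sldr_reward(object_state, demo_object_state, scale=5.0):
    """Auxiliary reward: how closely the manipulated object tracks the simulated demonstration."""
    return float(np.exp(-scale * np.linalg.norm(object_state - demo_object_state)))

def total_reward(task_success, object_state, demo_object_state, aux_weight=0.1):
    """Sparse task reward plus the simulated-demonstration shaping term."""
    return float(task_success) + aux_weight * sldr_reward(object_state, demo_object_state)

print(total_reward(False, np.array([0.10, 0.02, 0.30]), np.array([0.12, 0.00, 0.31])))
```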
Research on extending deep reinforcement learning (DRL) to multi-agent domains has solved many complex problems and achieved significant results. However, almost all of this research focuses only on discrete or continuous action spaces, and few works have applied multi-agent deep reinforcement learning to real-world problems, which mostly involve hybrid action spaces. Therefore, in this paper we propose two algorithms, multi-agent hybrid soft actor-critic (MAHSAC) and multi-agent hybrid deep deterministic policy gradient (MAHDDPG), to fill this gap. Both algorithms follow the centralized training with decentralized execution (CTDE) paradigm and can solve hybrid action space problems. Our experiments run on the multi-agent particle environment, a simple multi-agent particle world with some basic simulated physics. The experimental results show that both algorithms perform well.
In multi-agent reinforcement learning (MARL), many popular methods, such as VDN and QMIX, are susceptible to a critical multi-agent pathology known as relative overgeneralization (RO), which arises when the optimal joint action's utility falls below that of a sub-optimal joint action in cooperative tasks. RO can cause the agents to get stuck in local optima or fail to solve tasks that require significant coordination between agents within a given timestep. Recent value-based MARL algorithms such as QPLEX and WQMIX can overcome RO to some extent. However, our experimental results show that they can still fail to solve cooperative tasks that exhibit strong RO. In this work, we propose a novel approach called curriculum learning for relative overgeneralization (CURO) to better overcome RO. To solve a target task that exhibits strong RO, in CURO we first fine-tune the reward function of the target task to generate source tasks that are tailored to the current ability of the learning agent, and train the agent on these source tasks first. Then, to effectively transfer the knowledge acquired in one task to the next, we use a novel transfer learning method that combines value function transfer with buffer transfer, which enables more efficient exploration in the target task. We demonstrate that, when applied to QMIX, CURO overcomes the severe RO problem and significantly improves performance, yielding state-of-the-art results in a variety of cooperative multi-agent tasks, including the challenging StarCraft II micromanagement benchmarks.
This paper considers learning robot locomotion and manipulation tasks from expert demonstrations. Generative adversarial imitation learning (GAIL) trains a discriminator to distinguish expert transitions from agent transitions, and in turn uses a reward defined by the discriminator output to optimize the agent's policy generator. This generative adversarial training approach is very powerful, but depends on a delicate balance between discriminator and generator training. In high-dimensional problems, discriminator training can easily overfit or exploit associations with task-irrelevant features for transition classification. A key insight of this work is that performing imitation learning in a suitable latent task space stabilizes the training process, even in challenging high-dimensional problems. We use an action encoder model to obtain a low-dimensional latent action space and train a latent policy with adversarial imitation learning (LAPAL). The encoder model can be trained offline from state-action pairs to obtain a task-agnostic latent action representation, or online, simultaneously with the discriminator and generator training, to obtain a task-aware latent action representation. We demonstrate that LAPAL training is stable, with near-monotonic performance improvement, and achieves expert performance in most locomotion and manipulation tasks, whereas a GAIL baseline converges more slowly and does not achieve expert performance in high-dimensional environments.
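The offline, task-agnostic variant of the action encoder described above could be trained roughly as in the sketch below, which learns a low-dimensional latent action by reconstructing raw actions conditioned on the state. Sizes and the plain reconstruction loss are assumptions, not the LAPAL training objective.

```python
# Minimal sketch of learning a low-dimensional latent action space from state-action pairs.
import torch
import torch.nn as nn

class ActionAutoencoder(nn.Module):
    def __init__(self, state_dim=17, action_dim=6, latent_dim=3, hidden=64):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, latent_dim))
        self.dec = nn.Sequential(nn.Linear(state_dim + latent_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, action_dim))

    def forward(self, s, a):
        z = self.enc(torch.cat([s, a], dim=-1))       # latent action the imitation policy would output
        a_hat = self.dec(torch.cat([s, z], dim=-1))   # decode back to the raw action space
        return z, a_hat

model = ActionAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
s, a = torch.randn(32, 17), torch.randn(32, 6)        # offline state-action pairs
z, a_hat = model(s, a)
loss = nn.functional.mse_loss(a_hat, a)               # reconstruction objective
loss.backward(); opt.step()
print(loss.item())
```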
Value function factorization via centralized training and decentralized execution is a promising approach for solving cooperative multi-agent reinforcement learning tasks. One method in this area, QMIX, has become state-of-the-art, achieving the best performance on the StarCraft II micromanagement benchmark. However, the monotonic mixing of per-agent estimates in QMIX is known to limit the joint action Q-values it can represent, and the lack of global state information in the individual agents' value function estimates often leads to suboptimality. To this end, we present LSF-SAC, a novel framework that features a variational-inference-based information-sharing mechanism as extra state information to assist individual agents in value function factorization. We demonstrate that such latent individual state information sharing can significantly expand the power of value function factorization, while fully decentralized execution can still be maintained in LSF-SAC through a soft actor-critic design. We evaluate LSF-SAC on the StarCraft II micromanagement challenge and demonstrate that it outperforms several state-of-the-art methods on challenging collaborative tasks. We further conduct extensive ablation studies to localize the key factors accounting for its performance improvement. We believe this new insight can lead to new local value estimation methods and variational deep learning algorithms. A demo video and the implementation code can be found at https://sites.google.com/view/sacmm.
We study the problem of multi-robot mapless navigation in the popular centralized training and decentralized execution (CTDE) paradigm. This problem is challenging when each robot considers its path without explicitly sharing observations with the other robots, and it can lead to non-stationarity issues in deep reinforcement learning (DRL). Typical CTDE algorithms factorize the joint action-value function into individual ones to support cooperation and achieve decentralized execution. Such factorization involves constraints (e.g., monotonicity) that limit the emergence of novel individual behaviors, since training starts from the joint action values. In contrast, we propose a novel architecture for CTDE that uses a centralized state-value network to compute a joint state value, which is used to inject global state information into the value-based updates of the agents. Consequently, each model computes the gradient update of its weights while considering the overall state of the environment. Our idea follows the insight of dueling networks as a separate estimation of the joint state value, which has the advantage of improving sample efficiency while providing each robot with information on whether the global state is (or is not) valuable. Experiments on robot navigation tasks with 2, 4, and 8 robots confirm the superior performance of our approach over prior CTDE methods (e.g., VDN, QMIX).
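A minimal sketch of the dueling-style idea above: a centralized network estimates the joint state value V(s), and each robot combines it with its own advantage head computed from local observations. Network sizes and the direct Q = V + A composition are assumptions, not the paper's architecture.

```python
# Sketch of a centralized state-value network plus per-robot advantage heads (assumed sizes).
import torch
import torch.nn as nn

n_robots, obs_dim, state_dim, n_actions = 4, 12, 48, 5
central_v = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(), nn.Linear(64, 1))
advantage_heads = [nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, n_actions))
                   for _ in range(n_robots)]

state = torch.randn(1, state_dim)                 # global state (only available during training)
obs = torch.randn(n_robots, 1, obs_dim)           # per-robot local observations

v = central_v(state)                              # joint state value
q_values = [v + advantage_heads[i](obs[i]) for i in range(n_robots)]  # global info injected per robot
actions = [q.argmax(dim=-1).item() for q in q_values]
print(actions)
```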
The creation and destruction of agents in cooperative multi-agent reinforcement learning (MARL) is a critical area of research. Current MARL algorithms often assume that the number of agents within a group remains fixed throughout an experiment. However, in many practical problems, an agent may terminate before its teammates. This early-termination issue presents a challenge: the terminated agent must learn from the group's success or failure, which occurs beyond its own existence. We refer to propagating value from rewards earned by the remaining teammates back to terminated agents as the posthumous credit assignment problem. Current MARL methods handle this problem by placing such agents in an absorbing state until the entire group of agents reaches a termination condition. Although absorbing states enable existing algorithms and APIs to handle terminated agents without modification, practical training efficiency and resource-use problems remain. In this work, we first show that, in a supervised learning task, sample complexity increases with the number of absorbing states, while attention is more robust to variable-sized inputs. We then present a novel architecture for an existing state-of-the-art MARL algorithm that uses attention instead of a fully connected layer with absorbing states. Finally, we demonstrate that this novel architecture significantly outperforms the standard architecture on tasks in which agents are created or destroyed within episodes, as well as on standard multi-agent coordination tasks.
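Replacing absorbing states with attention over the set of currently alive agents can be sketched as below, where terminated agents are simply masked out of the attention keys. This is an assumed simplification, not the paper's architecture.

```python
# Illustrative sketch: attention over a variable set of agents with terminated agents masked out.
import torch
import torch.nn as nn

n_agents, obs_dim, embed = 5, 8, 32
encode = nn.Linear(obs_dim, embed)
attn = nn.MultiheadAttention(embed, num_heads=4, batch_first=True)

obs = torch.randn(1, n_agents, obs_dim)
terminated = torch.tensor([[False, False, True, False, True]])   # agents 2 and 4 already ended

e = torch.relu(encode(obs))
# key_padding_mask=True entries are ignored, so terminated agents never need an absorbing state.
pooled, _ = attn(e, e, e, key_padding_mask=terminated)
print(pooled.shape)  # (1, n_agents, embed): each embedding attends only over alive agents
```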
The ability to recover from unexpected external perturbations is a fundamental motor skill in bipedal locomotion. An effective response includes not only the ability to recover balance and maintain stability, but also the ability to fall in a safe manner when balance recovery is infeasible. For robots associated with bipedal locomotion, such as humanoid robots and assistive robotic devices that aid humans in walking, designing controllers that can provide such stability and safety can prevent damage to the robot or prevent injury-related medical costs. This is a challenging task because it involves generating highly dynamic motion for a high-dimensional, nonlinear, and underactuated system with contacts. Despite prior advances using model-based and optimization methods, challenges such as the need for extensive domain knowledge, relatively long computation times, and limited robustness to changes in dynamics still leave this an open problem. In this thesis, to address these issues, we develop learning-based algorithms capable of synthesizing push-recovery control policies for two different kinds of robots: humanoid robots and assistive robotic devices that aid bipedal locomotion. Our work can be divided into two closely related directions: 1) learning safe falling and fall-prevention strategies for humanoid robots, and 2) learning fall-prevention strategies for humans using a robotic assistive device. To achieve this, we introduce a set of deep reinforcement learning (DRL) algorithms to learn control policies that improve safety when using these robots.
Achieving human-level dexterity is an important open problem in robotics. However, dexterous bimanual manipulation tasks are challenging for reinforcement learning (RL), even at the level of an infant. The difficulty lies in the high degrees of freedom and the cooperation required among heterogeneous factors (e.g., finger joints). In this study, we propose the Bimanual Dexterous Hands Benchmark (Bi-DexHands), a simulator that involves two dexterous hands and contains dozens of bimanual manipulation tasks and thousands of target objects. Specifically, tasks in Bi-DexHands are designed to match different levels of human motor skill according to the cognitive science literature. We build Bi-DexHands in Isaac Gym; this enables highly efficient RL training, reaching 30,000+ FPS on a single NVIDIA RTX 3090. We provide a comprehensive benchmark of popular RL algorithms under different settings, including single-agent/multi-agent RL, offline RL, multi-task RL, and meta RL. Our results show that PPO-type on-policy algorithms can master simple manipulation tasks that are equivalent to the skills of a 48-month-old human infant (e.g., catching a flying object, opening a bottle), while multi-agent RL can further help master manipulations that require skilled bimanual cooperation (e.g., lifting a pot, stacking blocks). Despite these successes on individual tasks, when it comes to acquiring multiple manipulation skills, existing RL algorithms fail to work in most multi-task and few-shot learning settings, which calls for more substantial development from the RL community. Our project is open-sourced at https://github.com/pku-marl/dexteroushands.
We explore deep reinforcement learning methods for multi-agent domains. We begin by analyzing the difficulty of traditional algorithms in the multi-agent case: Q-learning is challenged by an inherent non-stationarity of the environment, while policy gradient suffers from a variance that increases as the number of agents grows. We then present an adaptation of actor-critic methods that considers action policies of other agents and is able to successfully learn policies that require complex multi-agent coordination. Additionally, we introduce a training regimen utilizing an ensemble of policies for each agent that leads to more robust multi-agent policies. We show the strength of our approach compared to existing methods in cooperative as well as competitive scenarios, where agent populations are able to discover various physical and informational coordination strategies.
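The centralized-critic, decentralized-actor structure analyzed above can be sketched as follows: each actor uses only its local observation, while each critic is trained on all agents' observations and actions. Sizes are assumptions, and the sketch omits target networks and the learning updates.

```python
# Minimal sketch of decentralized actors with centralized critics (assumed sizes; no training loop).
import torch
import torch.nn as nn

n_agents, obs_dim, act_dim = 3, 10, 2
actors = [nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, act_dim), nn.Tanh())
          for _ in range(n_agents)]
# Each agent's critic sees every agent's observation and action, which removes the
# non-stationarity each agent would otherwise perceive during training.
critics = [nn.Sequential(nn.Linear(n_agents * (obs_dim + act_dim), 64), nn.ReLU(), nn.Linear(64, 1))
           for _ in range(n_agents)]

obs = [torch.randn(1, obs_dim) for _ in range(n_agents)]
acts = [actor(o) for actor, o in zip(actors, obs)]        # decentralized execution: local obs only
joint = torch.cat(obs + acts, dim=-1)
q_values = [critic(joint) for critic in critics]          # centralized training signal
print([q.item() for q in q_values])
```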