Flocking control is a challenging problem in which agents, while maintaining the flock, need to reach a target position and avoid collisions with obstacles and with other agents in the environment. Multi-agent reinforcement learning (MARL) has achieved promising performance in flocking control. However, methods based on traditional reinforcement learning require a considerable number of interactions between agents and the environment. This paper proposes a sub-optimal-policy-aided multi-agent reinforcement learning algorithm (SPA-MARL) to boost sample efficiency. SPA-MARL directly leverages a prior policy, which can be manually designed or obtained with a non-learning method, to aid the agents in learning; the performance of that policy may be sub-optimal. SPA-MARL measures the performance gap between the sub-optimal policy and itself, and imitates the sub-optimal policy whenever the sub-optimal policy is better. We apply SPA-MARL to the flocking control problem, using a traditional control method based on artificial potential fields to generate the sub-optimal policy. Experiments show that SPA-MARL speeds up the training process and outperforms both the MARL baselines and the sub-optimal policy used.
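A minimal sketch of the gating idea described above, assuming a PyTorch actor and an artificial-potential-field controller as the prior; the return-based gating rule, the loss weight, and all network shapes are illustrative assumptions rather than the authors' exact formulation.

```python
import torch
import torch.nn as nn

actor = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 2), nn.Tanh())
optimizer = torch.optim.Adam(actor.parameters(), lr=3e-4)

def potential_field_policy(obs):
    # Placeholder for the artificial-potential-field controller (sub-optimal prior).
    return torch.tanh(obs[:, :2] - obs[:, 2:4])

def spa_update(obs, rl_loss, agent_return, prior_return, bc_weight=1.0):
    """Add an imitation term only while the prior policy still performs better."""
    loss = rl_loss
    if prior_return > agent_return:  # prior is ahead: imitate it
        with torch.no_grad():
            prior_actions = potential_field_policy(obs)
        loss = loss + bc_weight * ((actor(obs) - prior_actions) ** 2).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Example call with dummy data.
obs = torch.randn(32, 8)
rl_loss = -actor(obs).pow(2).mean()          # stand-in for the MARL policy loss
spa_update(obs, rl_loss, agent_return=-120.0, prior_return=-80.0)
```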
Flocking control is a significant problem in multi-agent systems such as multi-agent unmanned aerial vehicles and multi-agent autonomous underwater vehicles, as it enhances the cooperativeness and safety of the agents. In contrast with traditional methods, multi-agent reinforcement learning (MARL) solves the flocking control problem more flexibly. However, MARL-based methods suffer from sample inefficiency, since they need to collect a large number of experiences from interactions between the agents and the environment. We propose a novel method, Pretraining with Demonstrations for MARL (PWD-MARL), which can pretrain agents using non-expert demonstrations collected in advance with traditional methods. During pretraining, the agents learn policies from the demonstrations through MARL and behavior cloning simultaneously, while being prevented from overfitting to the demonstrations. By pretraining with non-expert demonstrations, PWD-MARL gives online MARL a warm start and improves its sample efficiency. Experiments show that PWD-MARL improves sample efficiency and policy performance in the flocking control problem, even with poor or few demonstrations.
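A sketch of one pretraining step combining a TD loss with a behavior-cloning loss on demonstration transitions, shown single-agent for brevity; the annealed BC weight as the overfitting guard, and all shapes and hyperparameters, are assumptions rather than the paper's exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

policy = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Linear(64, 4))  # logits over 4 actions
critic = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Linear(64, 4))  # Q-values
opt = torch.optim.Adam(list(policy.parameters()) + list(critic.parameters()), lr=1e-3)

def pretrain_step(demo_obs, demo_act, demo_rew, demo_next_obs, gamma=0.99, bc_weight=0.5):
    """One pretraining step on demonstration transitions: a TD loss (the RL part)
    plus a BC loss, where bc_weight can be annealed to limit overfitting to the
    non-expert demonstrations."""
    q = critic(demo_obs).gather(1, demo_act.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        target = demo_rew + gamma * critic(demo_next_obs).max(dim=1).values
    td_loss = F.mse_loss(q, target)
    bc_loss = F.cross_entropy(policy(demo_obs), demo_act)
    loss = td_loss + bc_weight * bc_loss
    opt.zero_grad(); loss.backward(); opt.step()
    return td_loss.item(), bc_loss.item()

# Dummy demonstration batch.
obs, act = torch.randn(64, 10), torch.randint(0, 4, (64,))
rew, next_obs = torch.randn(64), torch.randn(64, 10)
print(pretrain_step(obs, act, rew, next_obs))
```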
How to make imitation learning generalize better when demonstrations are relatively limited has been a persistent problem in reinforcement learning (RL). Poor demonstrations lead to narrow and biased data distributions, non-Markovian human expert demonstrations make it difficult for the agent to learn, and over-reliance on sub-optimal trajectories can make it hard for the agent to improve its performance. To address these problems, we propose a new algorithm named TD3FG that transitions smoothly from learning from experts to learning from experience. Our algorithm achieves good performance in MuJoCo environments with limited and sub-optimal demonstrations. We use behavior cloning to train a network as a reference action generator and utilize it in both the loss function and the exploration noise. This design helps the agent extract prior knowledge from the demonstrations while reducing the adverse effects of their non-Markovian properties. It achieves better performance than the BC + fine-tuning and DDPGfD methods, especially when the demonstrations are relatively limited. We call our method TD3FG, meaning TD3 from a Generator.
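A sketch of how a BC-trained reference action generator could be used both as an auxiliary loss term and as a bias on the exploration noise, with its influence decayed over training; the linear schedule, weights, and shapes are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

obs_dim, act_dim = 11, 3
actor = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, act_dim), nn.Tanh())
generator = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, act_dim), nn.Tanh())
# The generator would first be trained by behavior cloning on the (limited) demonstrations.

def guidance_weight(step, decay_steps=100_000):
    # Linearly decay the generator's influence: lean on the expert first, experience later.
    return max(0.0, 1.0 - step / decay_steps)

def actor_loss(obs, q_value, step):
    """TD3-style actor loss plus a decaying penalty toward the generator's actions."""
    w = guidance_weight(step)
    bc_term = ((actor(obs) - generator(obs).detach()) ** 2).mean()
    return -q_value.mean() + w * bc_term

def explore(obs, step, sigma=0.1):
    """Exploration biased toward the generator's suggestion early in training."""
    w = guidance_weight(step)
    with torch.no_grad():
        a = (1 - w) * actor(obs) + w * generator(obs)
    return (a + sigma * torch.randn_like(a)).clamp(-1.0, 1.0)

obs = torch.randn(4, obs_dim)
print(explore(obs, step=10_000).shape)        # torch.Size([4, 3])
```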
Although transfer learning is a promising way to increase learning efficiency, existing methods still struggle with long-horizon tasks, especially when expert policies are sub-optimal and only partially useful. Hence, a novel algorithm named EASpace (Enhanced Action Space) is proposed in this paper to transfer the knowledge of multiple sub-optimal expert policies. EASpace formulates each expert policy as multiple macro actions with different execution time periods, and then integrates all macro actions directly into the primitive action space. Through this formulation, EASpace can learn when to execute which expert policy and for how long. An intra-macro-action learning rule is proposed that adjusts the temporal-difference target of macro actions to improve data efficiency and alleviate the non-stationarity issue in multi-agent settings. Furthermore, an additional reward proportional to the execution time of macro actions is introduced to encourage environment exploration via macro actions, which is significant for learning long-horizon tasks. A theoretical analysis is presented to show the convergence of the proposed algorithm. The efficiency of the algorithm is illustrated on a grid-based game and a multi-agent pursuit problem, and it is also implemented on real physical systems to demonstrate its effectiveness.
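A small sketch of how such an enhanced action space could be enumerated and executed, with primitive actions followed by (expert, duration) macro actions; the expert names, durations, and environment interface are hypothetical placeholders.

```python
N_PRIMITIVE = 5                       # primitive actions 0..4
EXPERTS = ["chase", "encircle"]       # hypothetical sub-optimal expert policies
DURATIONS = [2, 4, 8]                 # candidate execution lengths per macro action

# Enhanced action space: primitives first, then one index per (expert, duration) pair.
MACROS = [(e, d) for e in EXPERTS for d in DURATIONS]
N_ACTIONS = N_PRIMITIVE + len(MACROS)

def decode(action_index):
    """Map an index in the enhanced action space to a primitive or a macro action."""
    if action_index < N_PRIMITIVE:
        return ("primitive", action_index, 1)
    expert, duration = MACROS[action_index - N_PRIMITIVE]
    return ("macro", expert, duration)

def execute(action_index, step_env, expert_policies, obs):
    """Run the chosen action; a macro repeatedly queries its expert for `duration` steps."""
    kind, payload, duration = decode(action_index)
    for _ in range(duration):
        a = payload if kind == "primitive" else expert_policies[payload](obs)
        obs, reward, done = step_env(a)
        if done:
            break
    return obs

print(N_ACTIONS, decode(7))   # 11 (5 primitives + 6 macros), ('macro', 'chase', 8)
```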
Deep reinforcement learning (DRL) provides a new way to generate robot control policies. However, training a control policy requires lengthy exploration, resulting in the low sample efficiency of reinforcement learning (RL) in real-world tasks. Both imitation learning (IL) and learning from demonstrations (LfD) improve the training process by using expert demonstrations, but imperfect expert demonstrations can mislead policy improvement. Offline-to-online reinforcement learning requires a large amount of offline data to initialize the policy, and distribution shift can easily lead to performance degradation during online fine-tuning. To solve these problems, we propose a learning-from-demonstrations method named A-SILfD, which treats expert demonstrations as the agent's successful experiences and uses these experiences to constrain policy improvement. Furthermore, we use an ensemble of Q-functions to prevent performance degradation caused by large estimation errors in the Q-function. Our experiments show that A-SILfD can significantly improve sample efficiency using a small number of expert demonstrations of varying quality. In four MuJoCo continuous control tasks, A-SILfD significantly outperforms baseline methods after 150,000 steps of online training and is not misled by imperfect expert demonstrations during training.
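A sketch, under stated assumptions, of the two ingredients named above: demonstrations seeded into a success buffer alongside the agent's own good episodes, and an ensemble of Q-functions aggregated pessimistically. The return-based admission rule, ensemble size, and aggregation are illustrative, not the paper's exact mechanism.

```python
import torch
import torch.nn as nn

obs_dim, act_dim, n_ensemble = 17, 6, 5
q_ensemble = nn.ModuleList(
    nn.Sequential(nn.Linear(obs_dim + act_dim, 128), nn.ReLU(), nn.Linear(128, 1))
    for _ in range(n_ensemble)
)

def ensemble_q(obs, act, pessimism=1.0):
    """Aggregate the ensemble as mean minus a std penalty to damp over-estimation."""
    x = torch.cat([obs, act], dim=-1)
    qs = torch.stack([q(x) for q in q_ensemble], dim=0)   # (n_ensemble, batch, 1)
    return qs.mean(dim=0) - pessimism * qs.std(dim=0)

# Successful-experience buffer seeded with expert demonstrations, then extended online.
success_buffer = []          # list of (obs, act, reward, next_obs, done) tuples
def add_episode(episode, episode_return, return_threshold):
    if episode_return >= return_threshold:   # keep only successes, demos included
        success_buffer.extend(episode)

obs, act = torch.randn(32, obs_dim), torch.randn(32, act_dim)
print(ensemble_q(obs, act).shape)            # torch.Size([32, 1])
```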
Exploration in environments with sparse rewards has been a persistent problem in reinforcement learning (RL). Many tasks are natural to specify with a sparse reward, and manually shaping a reward function can result in suboptimal performance. However, finding a non-zero reward becomes exponentially more difficult as the task horizon or action dimensionality grows, which puts many real-world tasks out of practical reach of RL methods. In this work, we use demonstrations to overcome the exploration problem and successfully learn to perform long-horizon, multi-step robotics tasks with continuous control, such as stacking blocks with a robot arm. Our method, which builds on Deep Deterministic Policy Gradients and Hindsight Experience Replay, provides an order-of-magnitude speedup over RL on simulated robotics tasks. It is simple to implement and only adds the assumption that we can collect a small set of demonstrations. Furthermore, our method is able to solve tasks not solvable by either RL or behavior cloning alone, and often ends up outperforming the demonstrator policy.
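A minimal sketch of the kind of demonstration-augmented actor loss used in this line of work: a behavior-cloning term on demonstration pairs, masked by a Q-filter so the policy can eventually surpass the demonstrator. Shapes, the BC weight, and the dummy data are assumptions for illustration.

```python
import torch
import torch.nn as nn

obs_dim, act_dim = 13, 4
actor = nn.Sequential(nn.Linear(obs_dim, 256), nn.ReLU(), nn.Linear(256, act_dim), nn.Tanh())
critic = nn.Sequential(nn.Linear(obs_dim + act_dim, 256), nn.ReLU(), nn.Linear(256, 1))

def actor_loss(obs, demo_obs, demo_act, bc_weight=1.0):
    """DDPG actor loss plus a behavior-cloning term on demonstration pairs.
    The Q-filter keeps the BC term only where the demo action looks at least as
    good as the policy action, so imperfect demos stop dominating late in training."""
    policy_loss = -critic(torch.cat([obs, actor(obs)], dim=-1)).mean()
    pi_demo = actor(demo_obs)
    with torch.no_grad():
        q_demo = critic(torch.cat([demo_obs, demo_act], dim=-1))
        q_pi = critic(torch.cat([demo_obs, pi_demo], dim=-1))
        keep = (q_demo >= q_pi).float()
    bc_loss = (keep * (pi_demo - demo_act) ** 2).mean()
    return policy_loss + bc_weight * bc_loss

obs = torch.randn(64, obs_dim)
demo_obs, demo_act = torch.randn(16, obs_dim), torch.rand(16, act_dim) * 2 - 1
print(actor_loss(obs, demo_obs, demo_act))
```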
Learning complex manipulation tasks in the presence of obstacles and from high-dimensional visual observations is a challenging problem. Prior work tackles the exploration problem by integrating motion planning and reinforcement learning. However, the motion-planner-augmented policy requires access to state information, which is often unavailable in the real world. To this end, we propose to distill the state-based motion-planner-augmented policy into a visual control policy via (1) visual behavioral cloning, which removes the dependency on the motion planner along with its jittery motion, and (2) vision-based reinforcement learning guided by the smoothed trajectories of the behavioral cloning agent. We evaluate our method on three manipulation tasks in obstructed environments and compare it with various reinforcement learning and imitation learning baselines. The results show that our framework is highly sample-efficient and outperforms state-of-the-art algorithms. Moreover, coupled with domain randomization, our policy is capable of zero-shot transfer to unseen environment settings with distractors. Code and videos are available at https://clvrai.com/mopa-pd
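A sketch of stage (1), visual behavioral cloning of the state-based teacher; the CNN architecture, image size, and action dimension are assumptions, and stage (2), the vision-based RL fine-tuning, is only indicated in the comment.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

act_dim = 4
# Small CNN policy acting from 64x64 RGB images, distilled from a state-based teacher.
visual_policy = nn.Sequential(
    nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
    nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
    nn.Flatten(), nn.Linear(32 * 13 * 13, 128), nn.ReLU(), nn.Linear(128, act_dim),
)
opt = torch.optim.Adam(visual_policy.parameters(), lr=3e-4)

def distill_step(images, teacher_actions):
    """Visual behavioral cloning: regress the image-based policy onto actions from
    the state-based motion-planner-augmented teacher. Vision-based RL guided by the
    BC agent's smoothed trajectories would then fine-tune this policy (not shown)."""
    loss = F.mse_loss(visual_policy(images), teacher_actions)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

images = torch.randn(8, 3, 64, 64)
teacher_actions = torch.randn(8, act_dim)
print(distill_step(images, teacher_actions))
```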
We explore deep reinforcement learning methods for multi-agent domains. We begin by analyzing the difficulty of traditional algorithms in the multi-agent case: Q-learning is challenged by an inherent non-stationarity of the environment, while policy gradient suffers from a variance that increases as the number of agents grows. We then present an adaptation of actor-critic methods that considers action policies of other agents and is able to successfully learn policies that require complex multiagent coordination. Additionally, we introduce a training regimen utilizing an ensemble of policies for each agent that leads to more robust multi-agent policies. We show the strength of our approach compared to existing methods in cooperative as well as competitive scenarios, where agent populations are able to discover various physical and informational coordination strategies.
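A minimal sketch of the centralized-critic, decentralized-actor structure described above: each actor acts from its own observation, while each critic conditions on all agents' observations and actions during training. Network sizes and the flat concatenation are illustrative assumptions.

```python
import torch
import torch.nn as nn

n_agents, obs_dim, act_dim = 3, 10, 2

# Decentralized actors: each agent acts from its own observation only.
actors = nn.ModuleList(
    nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, act_dim), nn.Tanh())
    for _ in range(n_agents)
)
# Centralized critics: each conditions on all observations and all actions,
# which removes the non-stationarity seen by an independent learner.
critics = nn.ModuleList(
    nn.Sequential(nn.Linear(n_agents * (obs_dim + act_dim), 64), nn.ReLU(), nn.Linear(64, 1))
    for _ in range(n_agents)
)

def centralized_q(i, all_obs, all_act):
    """Q_i(x, a_1, ..., a_N), used only during training; execution uses actors[i] alone."""
    x = torch.cat([all_obs.flatten(1), all_act.flatten(1)], dim=-1)
    return critics[i](x)

all_obs = torch.randn(32, n_agents, obs_dim)
all_act = torch.stack([actors[i](all_obs[:, i]) for i in range(n_agents)], dim=1)
print(centralized_q(0, all_obs, all_act).shape)    # torch.Size([32, 1])
```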
Learning robotic tasks in the real world remains highly challenging, and effective practical solutions are yet to be found. The traditional methods used in this area are imitation learning and reinforcement learning, but both have limitations when applied to real robots. Combining reinforcement learning with pre-collected demonstrations is a promising approach that can help in learning control policies for robotic tasks. In this paper, we propose an algorithm that uses novel techniques to leverage offline expert data during offline and online training, achieving faster convergence and improved performance. The proposed algorithm (AWET) weights the critic loss with a novel agent-advantage weight to improve over the expert data. In addition, AWET uses an automatic early-termination technique to stop and discard policy rollouts that deviate from the expert trajectories, preventing drift away from the expert data. In an ablation study, AWET shows improved and promising performance compared with state-of-the-art baselines on four standard robotic tasks.
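A heavily hedged sketch of one plausible instantiation of the two components named above; the exponential advantage weight, the nearest-state distance test, and the threshold are my assumptions, and the paper's exact forms may differ.

```python
import numpy as np

def advantage_weight(q_value, value_estimate, beta=1.0, clip=10.0):
    """One plausible agent-advantage weight for expert transitions: exponentiate the
    advantage so expert samples the critic currently under-rates get more weight in
    the critic loss (illustrative form)."""
    adv = q_value - value_estimate
    return float(np.clip(np.exp(beta * adv), 0.0, clip))

def should_terminate(rollout_states, expert_states, threshold=1.5):
    """Early-termination check: stop and discard the rollout once its latest state
    drifts farther than `threshold` from the closest expert trajectory state."""
    rollout_states = np.asarray(rollout_states)
    expert_states = np.asarray(expert_states)
    dists = np.linalg.norm(expert_states - rollout_states[-1], axis=1)
    return dists.min() > threshold

expert = np.random.randn(200, 6)
rollout = [expert[0] + 0.1 * np.random.randn(6) for _ in range(20)]
print(advantage_weight(q_value=1.3, value_estimate=0.8), should_terminate(rollout, expert))
```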
Deep reinforcement learning (DRL) enables robots to perform some intelligent tasks end to end. However, long-horizon, sparse-reward robotic manipulation tasks still pose many challenges. On the one hand, the sparse-reward setting leads to inefficient exploration; on the other hand, exploration with physical robots is costly and unsafe. In this paper, we propose a method for learning long-horizon, sparse-reward tasks with the help of one or more existing traditional controllers, referred to as base controllers. Building on Deep Deterministic Policy Gradient (DDPG), our algorithm incorporates the existing base controllers into the stages of exploration, value learning, and policy update. Furthermore, we present a straightforward way of synthesizing different base controllers to combine their strengths. Experiments ranging from stacking blocks to cups demonstrate that the learned state- or image-based policies steadily outperform the base controllers. Compared with previous works on learning from demonstrations, our method improves sample efficiency by orders of magnitude and achieves better performance. Overall, our method has the potential to leverage existing industrial robot manipulation systems to build more flexible and intelligent controllers.
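A sketch of how base controllers could enter the exploration stage, with a simple critic-based choice among them as one way to synthesize their strengths; the controller functions, mixing probability, and noise scale are placeholders, not the paper's exact scheme.

```python
import random
import torch
import torch.nn as nn

obs_dim, act_dim = 9, 3
actor = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, act_dim), nn.Tanh())
critic = nn.Sequential(nn.Linear(obs_dim + act_dim, 64), nn.ReLU(), nn.Linear(64, 1))

def reach_controller(obs):   # placeholder proportional controller
    return torch.tanh(obs[:act_dim])

def grasp_controller(obs):   # placeholder controller for a second sub-skill
    return torch.tanh(-obs[:act_dim])

BASE_CONTROLLERS = [reach_controller, grasp_controller]

def explore_action(obs, p_base=0.3, sigma=0.1):
    """Exploration step: with probability p_base defer to a base controller, choosing
    among them by the critic's estimate (one simple synthesis rule); otherwise use
    the learned policy with Gaussian noise."""
    with torch.no_grad():
        if random.random() < p_base:
            candidates = [c(obs) for c in BASE_CONTROLLERS]
            qs = [critic(torch.cat([obs, a])) for a in candidates]
            return candidates[int(torch.stack(qs).argmax())]
        return (actor(obs) + sigma * torch.randn(act_dim)).clamp(-1.0, 1.0)

print(explore_action(torch.randn(obs_dim)))
```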
In multi-agent reinforcement learning (MARL), many popular methods, such as VDN and QMIX, are susceptible to a critical multi-agent pathology known as relative overgeneralization (RO), which arises when the utility of the optimal joint action falls below that of a sub-optimal joint action in cooperative tasks. RO can cause the agents to get stuck in local optima or fail to solve tasks that require significant coordination between agents within a given timestep. Recent value-based MARL algorithms such as QPLEX and WQMIX can overcome RO to some extent. However, our experimental results show that they can still fail to solve cooperative tasks that exhibit strong RO. In this work, we propose a novel approach called curriculum learning for relative overgeneralization (CURO) to better overcome RO. To solve a target task that exhibits strong RO, CURO first fine-tunes the reward function of the target task to generate source tasks tailored to the current ability of the learning agent and trains the agent on these source tasks first. Then, to effectively transfer the knowledge acquired in one task to the next, it uses a novel transfer learning method that combines value function transfer with buffer transfer, enabling more efficient exploration in the target task. We demonstrate that, when applied to QMIX, CURO overcomes the severe RO problem and significantly improves performance, yielding state-of-the-art results on a variety of cooperative multi-agent tasks, including the challenging StarCraft II micromanagement benchmarks.
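A minimal sketch of the transfer step between curriculum tasks as described above: the next task inherits the previous task's value-function weights and replay buffer. The QMIX-style stand-in networks and the plain deep-copy mechanics are assumptions for illustration.

```python
import copy
import torch.nn as nn

# Minimal stand-ins for QMIX-style components (agent network + mixing network).
agent_net = nn.Sequential(nn.Linear(30, 64), nn.ReLU(), nn.Linear(64, 9))
mixer = nn.Sequential(nn.Linear(9 * 2, 32), nn.ReLU(), nn.Linear(32, 1))
source_buffer = [("obs", "actions", "reward", "next_obs")]   # placeholder transitions

def transfer_to_next_task(source_agent, source_mixer, source_buffer):
    """Value-function transfer plus buffer transfer between curriculum tasks:
    the next task starts from the source task's network weights and is seeded
    with its replay buffer."""
    target_agent = copy.deepcopy(source_agent)       # value-function transfer
    target_mixer = copy.deepcopy(source_mixer)
    target_buffer = list(source_buffer)              # buffer transfer (seeded replay)
    return target_agent, target_mixer, target_buffer

agent2, mixer2, buffer2 = transfer_to_next_task(agent_net, mixer, source_buffer)
print(len(buffer2), sum(p.numel() for p in agent2.parameters()))
```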
The idea of conservatism has led to significant progress in offline reinforcement learning (RL), where an agent learns from a pre-collected dataset. However, since many practical scenarios involve interaction among multiple agents, solving offline RL in the more practical multi-agent setting remains an open problem. Given the recent success of transferring online RL algorithms to the multi-agent setting, one might expect offline RL algorithms to transfer directly as well. Surprisingly, when conservatism-based algorithms are applied to the multi-agent setting, performance degrades significantly as the number of agents increases. To mitigate this degradation, we identify a key issue: the value-function landscape can be non-concave, and policy gradient improvements are prone to local optima. The problem is exacerbated with multiple agents, since a sub-optimal policy of any single agent can lead to uncoordinated global failure. Following this intuition, we propose a simple yet effective method, Offline Multi-Agent RL with Actor Rectification (OMAR), which better addresses this critical challenge by optimizing the conservative value functions over the actors with a combination of first-order policy gradients and zeroth-order optimization. Despite its simplicity, OMAR significantly outperforms strong baselines and achieves state-of-the-art performance on multi-agent continuous control benchmarks.
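A per-agent sketch of how a zeroth-order term could rectify the actor alongside the usual first-order policy gradient: sample actions around the policy output, pick the best under the conservative critic, and pull the actor toward it. The sampling scheme, coefficient, and shapes are illustrative rather than the paper's exact schedule.

```python
import torch
import torch.nn as nn

obs_dim, act_dim = 10, 2
actor = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, act_dim), nn.Tanh())
critic = nn.Sequential(nn.Linear(obs_dim + act_dim, 64), nn.ReLU(), nn.Linear(64, 1))

def rectified_actor_loss(obs, n_samples=20, sigma=0.3, coef=0.5):
    """First-order term (-Q) plus a zeroth-order 'rectification' term that regresses
    the actor toward the best of several sampled actions under the conservative critic."""
    a = actor(obs)                                            # (B, act_dim)
    pg_loss = -critic(torch.cat([obs, a], dim=-1)).mean()
    with torch.no_grad():
        noise = sigma * torch.randn(n_samples, *a.shape)
        candidates = (a.unsqueeze(0) + noise).clamp(-1.0, 1.0)           # (S, B, act_dim)
        obs_rep = obs.unsqueeze(0).expand(n_samples, -1, -1)
        q = critic(torch.cat([obs_rep, candidates], dim=-1)).squeeze(-1)  # (S, B)
        best = candidates.gather(0, q.argmax(0)[None, :, None].expand(1, -1, act_dim))[0]
    rect_loss = ((a - best) ** 2).mean()
    return pg_loss + coef * rect_loss

print(rectified_actor_loss(torch.randn(32, obs_dim)))
```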
Obstacle avoidance for small unmanned aircraft is vital for the safety of future urban air mobility (UAM) and unmanned aircraft system (UAS) traffic management (UTM). There are many techniques for real-time robust drone guidance, but many of them operate in discretized airspace and control, which would require an additional path-smoothing step to provide flexible commands for the UAS. To provide operationally safe and efficient computational guidance for unmanned aircraft, we explore the use of a deep reinforcement learning algorithm based on Proximal Policy Optimization (PPO) to guide an autonomous UAS to its destination while avoiding obstacles through continuous control. The proposed scenario state representation and reward function map the continuous state space to continuous control of heading angle and speed. To verify the performance of the proposed learning framework, we conduct numerical experiments with static and moving obstacles. Uncertainties in the environment and in the safe operation bounds are investigated in detail. The results show that the proposed model can provide accurate and robust guidance and resolve conflicts with a success rate of over 99%.
Policy gradient methods have become popular in multi-agent reinforcement learning, but they suffer from high variance due to environmental stochasticity and exploring agents (i.e., non-stationarity), which can be exacerbated by the difficulty of credit assignment. Consequently, a method is needed that not only efficiently addresses these two issues but is also robust enough to solve a variety of tasks. To this end, we propose a new multi-agent policy gradient method called Robust Local Advantage (ROLA) Actor-Critic. ROLA allows each agent to learn an individual action-value function as a local critic, and mitigates environment non-stationarity via a novel centralized training approach based on a centralized critic. Using this local critic, each agent computes a baseline to reduce the variance of its policy gradient estimate, which yields an expected advantage over other agents' action choices and implicitly improves credit assignment. We evaluate ROLA on diverse benchmarks and show its robustness and effectiveness compared with a number of state-of-the-art multi-agent policy gradient algorithms.
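A sketch of one way the local-critic baseline could be formed: a centralized Q-value of the joint action minus the expectation of the agent's local Q under its current policy. The action encoding fed to the central critic and the network sizes are simplified assumptions.

```python
import torch
import torch.nn as nn

obs_dim, state_dim, n_actions, n_agents = 8, 24, 5, 3

local_q = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, n_actions))
central_q = nn.Sequential(nn.Linear(state_dim + n_agents, 64), nn.ReLU(), nn.Linear(64, 1))

def local_advantage(state, joint_action, obs_i, policy_logits_i):
    """Advantage for agent i: centralized Q of the joint action minus a baseline
    computed from the agent's own local critic under its current policy.
    The raw action indices fed to the central critic are a simplification."""
    q_joint = central_q(torch.cat([state, joint_action.float()], dim=-1))      # (B, 1)
    with torch.no_grad():
        pi = torch.softmax(policy_logits_i, dim=-1)                            # (B, A)
        baseline = (pi * local_q(obs_i)).sum(dim=-1, keepdim=True)             # (B, 1)
    return q_joint - baseline

state = torch.randn(16, state_dim)
joint_action = torch.randint(0, n_actions, (16, n_agents))
obs_i, logits_i = torch.randn(16, obs_dim), torch.randn(16, n_actions)
print(local_advantage(state, joint_action, obs_i, logits_i).shape)   # torch.Size([16, 1])
```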
Collaborative autonomous multi-agent systems covering a specified area have many potential applications, such as UAV search and rescue, forest fire fighting, and real-time high-resolution monitoring. Traditional approaches to such coverage problems involve designing a model-based control policy based on sensor data. However, designing model-based controllers is challenging, and the state-of-the-art classical control policy still exhibits a large degree of suboptimality. In this paper, we present a reinforcement learning (RL) approach to the multi-agent coverage problem involving agents with second-order dynamics. Our approach is based on the Multi-Agent Proximal Policy Optimization (MAPPO) algorithm. To improve the stability of the learning-based policy and the efficiency of exploration, we utilize an imitation loss based on the state-of-the-art classical control policy. Our trained policy significantly outperforms the state of the art. Our proposed network architecture incorporates self-attention, which allows single-shot domain transfer of the trained policy to a large variety of domain shapes and numbers of agents. We demonstrate our proposed method in a variety of simulated experiments.
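A sketch, shown for a single agent for brevity, of a clipped PPO surrogate augmented with an imitation term toward the classical control policy's action; the MSE form of the imitation term, its weight, and all shapes are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

obs_dim, act_dim = 12, 2
actor_mean = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, act_dim))
log_std = nn.Parameter(torch.zeros(act_dim))

def ppo_with_imitation(obs, act, old_logp, adv, ctrl_act, im_weight=0.1, clip=0.2):
    """Clipped PPO surrogate plus an imitation term toward the classical control
    policy's action for the same observation."""
    dist = torch.distributions.Normal(actor_mean(obs), log_std.exp())
    logp = dist.log_prob(act).sum(-1)
    ratio = (logp - old_logp).exp()
    surrogate = torch.min(ratio * adv, ratio.clamp(1 - clip, 1 + clip) * adv).mean()
    imitation = F.mse_loss(actor_mean(obs), ctrl_act)
    return -surrogate + im_weight * imitation

B = 64
obs, act = torch.randn(B, obs_dim), torch.randn(B, act_dim)
old_logp, adv, ctrl_act = torch.randn(B), torch.randn(B), torch.randn(B, act_dim)
print(ppo_with_imitation(obs, act, old_logp, adv, ctrl_act))
```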
Deep reinforcement learning algorithms have succeeded in several challenging domains. Classic online RL job schedulers can learn efficient scheduling strategies, but often take thousands of timesteps to explore the environment and adapt from a randomly initialized DNN policy. Existing RL schedulers overlook the importance of learning from historical data and improving upon custom heuristic policies. Offline reinforcement learning presents the prospect of policy optimization from pre-recorded datasets without online environment interaction. Following the recent success of data-driven learning, we explore two RL methods: 1) Behaviour Cloning and 2) Offline RL, which aim to learn policies from logged data without interacting with the environment. These methods address the challenges concerning the cost of data collection and safety, which are particularly pertinent to real-world applications of RL. Although the data-driven RL methods generate good results, we show that the performance is highly dependent on the quality of the historical datasets. Finally, we demonstrate that by effectively incorporating prior expert demonstrations to pre-train the agent, we short-circuit the random exploration phase and learn a reasonable policy with online training. We utilize Offline RL as a \textbf{launchpad} to learn effective scheduling policies from prior experience collected using Oracle or heuristic policies. Such a framework is effective for pre-training from historical datasets and well suited to continuous improvement with online data collection.
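A sketch of the launchpad idea: clone the logged heuristic/Oracle scheduling decisions offline, then keep the pretrained network as the starting point for online training. The feature and action encodings, network size, and exploration rate are placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

n_features, n_jobs = 20, 10           # cluster-state features, candidate jobs per decision
policy = nn.Sequential(nn.Linear(n_features, 128), nn.ReLU(), nn.Linear(128, n_jobs))
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

def pretrain_from_logs(logged_states, logged_choices, epochs=5):
    """Offline 'launchpad' phase: behavior-clone the heuristic/Oracle scheduler from
    historical logs so online RL starts from a reasonable policy instead of a random one."""
    for _ in range(epochs):
        logits = policy(logged_states)
        loss = F.cross_entropy(logits, logged_choices)
        opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

def online_step(state, epsilon=0.05):
    """Online fine-tuning would continue training with RL; action selection reuses
    the pretrained network with a little exploration."""
    if torch.rand(()) < epsilon:
        return torch.randint(0, n_jobs, ()).item()
    return int(policy(state).argmax())

states = torch.randn(1024, n_features)
choices = torch.randint(0, n_jobs, (1024,))
print(pretrain_from_logs(states, choices), online_step(torch.randn(n_features)))
```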
With the development of deep representation learning, the domain of reinforcement learning (RL) has become a powerful learning framework now capable of learning complex policies in high dimensional environments. This review summarises deep reinforcement learning (DRL) algorithms and provides a taxonomy of automated driving tasks where (D)RL methods have been employed, while addressing key computational challenges in real world deployment of autonomous driving agents. It also delineates adjacent domains such as behavior cloning, imitation learning, inverse reinforcement learning that are related but are not classical RL algorithms. The role of simulators in training agents, methods to validate, test and robustify existing solutions in RL are discussed.
Research extending deep reinforcement learning (DRL) to the multi-agent field has solved many complex problems and achieved notable success. However, almost all of this work focuses exclusively on discrete or continuous action spaces, and few studies have applied multi-agent deep reinforcement learning to real-world problems, which mostly feature hybrid action spaces. Therefore, in this paper we propose two algorithms, Multi-Agent Hybrid Soft Actor-Critic (MAHSAC) and Multi-Agent Hybrid Deep Deterministic Policy Gradient (MAHDDPG), to fill this gap. Both algorithms follow the centralized training and decentralized execution (CTDE) paradigm and can handle hybrid-action-space problems. Our experiments run on the multi-agent particle environment, a simple multi-agent particle world with some basic simulated physics. The experimental results show that these algorithms achieve good performance.
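A sketch of a hybrid-action policy head of the kind such algorithms need: a categorical choice over discrete actions plus continuous parameters, sampled jointly with a joint log-probability for entropy-style objectives. The architecture and the way the continuous part is conditioned are assumptions, not the papers' exact design.

```python
import torch
import torch.nn as nn

obs_dim, n_discrete, cont_dim = 18, 4, 2

class HybridPolicy(nn.Module):
    """Hybrid-action policy head: discrete logits plus a Gaussian over continuous
    parameters (tanh squashing correction to the log-prob omitted for brevity)."""
    def __init__(self):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU())
        self.discrete_logits = nn.Linear(128, n_discrete)
        self.cont_mean = nn.Linear(128, cont_dim)
        self.cont_log_std = nn.Linear(128, cont_dim)

    def forward(self, obs):
        h = self.trunk(obs)
        disc = torch.distributions.Categorical(logits=self.discrete_logits(h))
        cont = torch.distributions.Normal(self.cont_mean(h),
                                          self.cont_log_std(h).clamp(-5, 2).exp())
        k = disc.sample()
        u = cont.rsample()
        log_prob = disc.log_prob(k) + cont.log_prob(u).sum(-1)   # joint log-prob
        return k, torch.tanh(u), log_prob

policy = HybridPolicy()
k, u, logp = policy(torch.randn(32, obs_dim))
print(k.shape, u.shape, logp.shape)    # torch.Size([32]) torch.Size([32, 2]) torch.Size([32])
```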
Many real-world applications can be formulated as multi-agent cooperation problems, such as network packet routing and the coordination of autonomous vehicles. The emergence of deep reinforcement learning (DRL) provides a promising approach to multi-agent cooperation through the interaction of agents and environments. However, traditional DRL solutions suffer from the high dimensionality of multiple agents with continuous action spaces during policy search. Besides, the dynamics of the agents' policies make training non-stationary. To address these issues, we propose a hierarchical reinforcement learning approach with high-level decision-making and low-level individual control for efficient policy search. In particular, the cooperation of multiple agents can be learned efficiently in the high-level discrete action space, while low-level individual control reduces to single-agent reinforcement learning. In addition to hierarchical reinforcement learning, we also propose an opponent modeling network that models other agents' policies during the learning process. In contrast to end-to-end DRL approaches, our approach reduces the learning complexity by decomposing the overall task into sub-tasks in a hierarchical way. To evaluate the efficiency of our approach, we conduct a real-world case study on a cooperative lane-change scenario. Both simulation and real-world experiments show the superiority of our approach in terms of collision rate and convergence speed.
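A hedged sketch of the two-level structure in a lane-change-like setting: a high-level network picks a discrete maneuver, and a low-level controller turns it into continuous commands. The maneuver set, observation layout, and feedback gains are hypothetical placeholders, not the paper's actual design.

```python
import torch
import torch.nn as nn

obs_dim = 16
MANEUVERS = ["keep_lane", "change_left", "change_right"]    # hypothetical high-level actions

high_level = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, len(MANEUVERS)))

def low_level_control(obs, maneuver, k_lat=0.5, k_lon=0.2):
    """Low-level individual control reduced to a single-agent problem; a simple
    feedback law stands in for a learned or classical controller."""
    lateral_offset = {"keep_lane": 0.0, "change_left": 1.0, "change_right": -1.0}[maneuver]
    steering = k_lat * (lateral_offset - float(obs[0]))      # obs[0]: lateral position (assumed)
    acceleration = k_lon * (1.0 - float(obs[1]))             # obs[1]: normalized speed (assumed)
    return steering, acceleration

def act(obs):
    """The high level picks a discrete maneuver each decision step; in the paper an
    opponent-modeling network would additionally encode other agents' policies."""
    maneuver = MANEUVERS[int(high_level(obs).argmax())]
    return maneuver, low_level_control(obs, maneuver)

print(act(torch.randn(obs_dim)))
```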
Multi-agent deep reinforcement learning has been applied to address a variety of complex problems with either discrete or continuous action spaces and has achieved great success. However, most real-world environments cannot be described by discrete action spaces or continuous action spaces alone, and few works have applied deep reinforcement learning (DRL) to multi-agent problems with hybrid action spaces. We therefore propose a novel algorithm, Multi-Agent Hybrid Soft Actor-Critic (MAHSAC), to fill this gap. The algorithm follows the centralized training with decentralized execution (CTDE) paradigm and extends the Soft Actor-Critic (SAC) algorithm to handle hybrid-action-space problems in multi-agent settings based on maximum entropy. Our experiments run on a simple multi-agent particle world with continuous observations and discrete action spaces, along with some basic simulated physics. The experimental results show that MAHSAC performs well in terms of training speed, stability, and robustness to interference, and that it outperforms existing independent deep reinforcement learning methods in both cooperative and competitive scenarios.