Existing imitation learning methods mainly focus on making an agent effectively mimic a demonstrated behavior, but do not address the potential conflict between such behavioral styles and the objective of a task. There is a general lack of efficient methods that allow an agent to partially imitate a demonstrated behavior, to differing extents, while still fulfilling the main objective of the task. In this paper we propose a method called Regularized Soft Actor-Critic, which formulates the main task and the imitation task under the Constrained Markov Decision Process (CMDP) framework. The main task is defined as the maximum-entropy objective used in Soft Actor-Critic (SAC), and the imitation task is defined as a constraint. We evaluate our method on continuous control tasks relevant to video game applications.
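Read this way, the problem is a constrained maximum-entropy control problem. A schematic version of the formulation, in our own notation and under the assumption of a per-step imitation cost $c_{\mathrm{imit}}$ with budget $d$ (the paper's exact symbols may differ), is

$$\max_{\pi}\; \sum_{t}\mathbb{E}_{(s_t,a_t)\sim\rho_\pi}\!\big[\,r(s_t,a_t)+\alpha\,\mathcal{H}(\pi(\cdot\mid s_t))\,\big] \quad \text{s.t.} \quad \mathbb{E}_{\rho_\pi}\!\Big[\textstyle\sum_{t} c_{\mathrm{imit}}(s_t,a_t)\Big]\le d.$$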
We propose a method for learning expressive energy-based policies for continuous states and actions, which has previously been feasible only in tabular domains. We apply our method to learning maximum entropy policies, resulting in a new algorithm, called soft Q-learning, that expresses the optimal policy via a Boltzmann distribution. We use the recently proposed amortized Stein variational gradient descent to learn a stochastic sampling network that approximates samples from this distribution. The benefits of the proposed algorithm include improved exploration and compositionality that allows transferring skills between tasks, which we confirm in simulated experiments with swimming and walking robots. We also draw a connection to actor-critic methods, which can be viewed as performing approximate inference on the corresponding energy-based model.
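The energy-based form of the optimal maximum-entropy policy, which the amortized sampling network is trained to approximate, follows the standard soft Q-learning relations (with temperature $\alpha$):

$$\pi^{*}(a\mid s)\;\propto\;\exp\!\Big(\tfrac{1}{\alpha}\,Q^{*}_{\mathrm{soft}}(s,a)\Big), \qquad V^{*}_{\mathrm{soft}}(s)\;=\;\alpha\,\log\!\int_{\mathcal{A}}\exp\!\Big(\tfrac{1}{\alpha}\,Q^{*}_{\mathrm{soft}}(s,a)\Big)\,da.$$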
Model-free deep reinforcement learning (RL) algorithms have been demonstrated on a range of challenging decision making and control tasks. However, these methods typically suffer from two major challenges: very high sample complexity and brittle convergence properties, which necessitate meticulous hyperparameter tuning. Both of these challenges severely limit the applicability of such methods to complex, real-world domains. In this paper, we propose soft actor-critic, an off-policy actor-critic deep RL algorithm based on the maximum entropy reinforcement learning framework. In this framework, the actor aims to maximize expected reward while also maximizing entropy. That is, to succeed at the task while acting as randomly as possible. Prior deep RL methods based on this framework have been formulated as Q-learning methods. By combining off-policy updates with a stable stochastic actor-critic formulation, our method achieves state-of-the-art performance on a range of continuous control benchmark tasks, outperforming prior on-policy and off-policy methods. Furthermore, we demonstrate that, in contrast to other off-policy algorithms, our approach is very stable, achieving very similar performance across different random seeds.
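Concretely, the maximum entropy objective augments the expected return with an entropy bonus weighted by a temperature parameter $\alpha$:

$$J(\pi)\;=\;\sum_{t}\mathbb{E}_{(s_t,a_t)\sim\rho_\pi}\!\big[\,r(s_t,a_t)+\alpha\,\mathcal{H}(\pi(\cdot\mid s_t))\,\big].$$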
In many sequential decision-making problems (e.g., robotic control, game playing, sequential prediction), human or expert data containing useful information about the task is available. However, imitation learning (IL) from a small amount of expert data can be challenging in high-dimensional environments with complex dynamics. Behavioral cloning is a simple method that is widely used thanks to its straightforward implementation and stable convergence, but it does not exploit any information involving the environment dynamics. Many existing methods that do exploit the dynamics are difficult to train in practice due to the adversarial optimization over reward and policy approximators, or due to biased, high-variance gradient estimators. We introduce a method for dynamics-aware IL that avoids adversarial training by learning a single Q-function, implicitly representing both the reward and the policy. On standard benchmarks, the implicitly learned rewards show a high positive correlation with the ground-truth rewards, illustrating that our method can also be used for inverse reinforcement learning (IRL). Our method, Inverse soft-Q Learning (IQ-Learn), obtains state-of-the-art results in both offline and online imitation learning settings, significantly outperforming existing methods in both the number of required environment interactions and scalability to high-dimensional spaces, often by more than 3x.
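One way to see how a single Q-function can implicitly encode both a reward and a policy is through the soft inverse Bellman relation used in this line of work (a simplified statement; the paper's actual training objective is derived from it but is more involved):

$$r(s,a)\;=\;Q(s,a)-\gamma\,\mathbb{E}_{s'\sim\mathcal{P}(\cdot\mid s,a)}\big[V(s')\big], \qquad V(s)\;=\;\mathbb{E}_{a\sim\pi(\cdot\mid s)}\big[\,Q(s,a)-\log\pi(a\mid s)\,\big].$$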
Adversarial Imitation Learning (AIL) is a class of popular state-of-the-art Imitation Learning algorithms commonly used in robotics. In AIL, an artificial adversary's misclassification is used as a reward signal that is optimized by any standard Reinforcement Learning (RL) algorithm. Unlike most RL settings, the reward in AIL is $differentiable$, but current model-free RL algorithms do not make use of this property to train a policy. The reward in AIL is also shaped, since it comes from an adversary. We leverage the differentiability property of the shaped AIL reward function and formulate a class of Actor Residual Critic (ARC) RL algorithms. ARC algorithms draw a parallel to the standard Actor-Critic (AC) algorithms in the RL literature and use a residual critic, the $C$ function (instead of the standard $Q$ function), to approximate only the discounted future return (excluding the immediate reward). ARC algorithms have convergence properties similar to those of the standard AC algorithms, with the additional advantage that the gradient through the immediate reward is exact. For the discrete (tabular) case with finite states, actions, and known dynamics, we prove that policy iteration with the $C$ function converges to an optimal policy. In the continuous case with function approximation and unknown dynamics, we experimentally show that ARC aided AIL outperforms standard AIL in simulated continuous-control and real robotic manipulation tasks. ARC algorithms are simple to implement and can be incorporated into any existing AIL implementation with an AC algorithm. Video and link to code are available at: https://sites.google.com/view/actor-residual-critic.
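In our notation, following the description above, the residual critic simply drops the immediate reward from the usual action-value function, so the actor can be updated through the differentiable AIL reward exactly:

$$Q^{\pi}(s_t,a_t)\;=\;r(s_t,a_t)+C^{\pi}(s_t,a_t), \qquad C^{\pi}(s_t,a_t)\;=\;\mathbb{E}\Big[\textstyle\sum_{k\ge 1}\gamma^{k}\,r(s_{t+k},a_{t+k})\Big].$$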
Reinforcement learning (RL) algorithms assume that the user specifies the task by manually writing a reward function. However, this process can be laborious and demands considerable technical expertise. Can we instead design RL algorithms that support users in specifying tasks by providing examples of successful outcomes? In this paper, we derive a control algorithm that maximizes the future probability of these successful-outcome examples. Prior work has approached similar problems by first learning a reward function and then optimizing this reward function with another RL algorithm. In contrast, our method learns a value function directly from transitions and successful outcomes, without learning this intermediate reward function. As a result, our method requires fewer lines of code and less tuning and debugging. We show that our method satisfies a new data-driven Bellman equation, in which examples take the place of the typical reward-function term. Experiments show that our approach outperforms prior methods that learn explicit reward functions.
Soft Actor-Critic (SAC) is one of the state-of-the-art off-policy reinforcement learning (RL) algorithms within the maximum-entropy RL framework. SAC has been shown to perform very well on a list of continuous control tasks with good stability and robustness. SAC learns a stochastic Gaussian policy that maximizes a trade-off between expected reward and policy entropy. To update the policy, SAC minimizes the KL divergence between the current policy density and the density induced by the soft value function; an approximate gradient of this divergence is then used to update the policy. In this paper, we propose Soft Actor-Critic with Cross-Entropy Policy Optimization (SAC-CEPO), which uses the Cross-Entropy Method (CEM) to optimize the policy network of SAC. The initial idea is to use CEM to iteratively sample the distribution closest to the soft-value-function density and to use the resulting distribution as the target for updating the policy network. To reduce computational complexity, we also introduce a decoupled policy structure that splits the Gaussian policy into one policy that learns the mean and another that learns the deviation, such that only the mean policy is trained by CEM. We show that this decoupled policy structure converges to the optimum, and we also demonstrate empirically that SAC-CEPO achieves performance competitive with the original SAC.
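A generic cross-entropy-method loop of the kind described above, here used to fit the mean of a Gaussian action distribution against a soft Q-function, is sketched below; the sampling target, elite rule, and decoupled-policy details in the paper may differ from this minimal illustration.

```python
import numpy as np

def cem_action_mean(q_fn, state, act_dim, iters=5, pop=64, elite_frac=0.1, init_std=0.5):
    """Fit a Gaussian mean over actions with the cross-entropy method, scored by q_fn(state, a)."""
    mean, std = np.zeros(act_dim), np.full(act_dim, init_std)
    n_elite = max(1, int(pop * elite_frac))
    for _ in range(iters):
        samples = mean + std * np.random.randn(pop, act_dim)        # candidate actions
        scores = np.array([q_fn(state, a) for a in samples])        # soft Q-value as the fitness
        elites = samples[np.argsort(scores)[-n_elite:]]             # keep the best candidates
        mean, std = elites.mean(axis=0), elites.std(axis=0) + 1e-6  # refit the sampling distribution
    return mean
```

The returned mean can then serve as the regression target for the mean head of the policy network.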
Imitation learning (IL) is a framework for learning to imitate expert behavior from demonstrations. Recently, IL has shown promising results on high-dimensional and control tasks. However, IL typically suffers from sample inefficiency in terms of environment interaction, which severely limits its application beyond simulated domains. In industrial applications, the learner often faces high interaction costs: the more it interacts with the environment, the more damage may be done to the environment and to the learner itself. In this paper, we strive to improve sample efficiency by introducing a novel scheme of inverse reinforcement learning. Our method, which we call \textit{Model Reward Function based Imitation Learning} (MRFIL), uses an ensemble dynamics model as the reward function, trained on expert demonstrations. The key idea is to give the agent an incentive to match the demonstrations over a long horizon, by providing a positive reward whenever it stays within the distribution of the expert demonstrations. In addition, we present a convergence guarantee for the new objective function. Experimental results show that our algorithm achieves competitive performance while significantly reducing environment interaction compared with IL methods.
Although reinforcement learning (RL) is effective for sequential decision-making problems under uncertainty, it still struggles to thrive in real-world systems where risk or safety is a binding constraint. In this paper, we formulate the RL problem with safety constraints as a non-zero-sum game. When deployed with maximum-entropy RL, this formulation leads to a safe adversarially guided soft actor-critic framework, called SAAC. In SAAC, the adversary aims to break the safety constraint, while the RL agent aims to maximize the constrained value function given the adversary's policy. The safety constraint on the agent's value function manifests only as a repulsion term between the agent's and the adversary's policies. Unlike previous approaches, SAAC can address different safety criteria such as safe exploration, mean-variance risk sensitivity, and CVaR-like coherent risk sensitivity. We illustrate the design of the adversary for these constraints. Then, for each of these variations, we show that in addition to learning to solve the task, the agent learns to differentiate itself from the adversary's unsafe behavior. Finally, on challenging continuous control tasks, we demonstrate that SAAC achieves faster convergence, better efficiency, and fewer failures to satisfy the safety constraints than risk-averse distributional RL and risk-neutral soft actor-critic algorithms.
Maximum-entropy reinforcement learning (MaxEnt RL) algorithms such as Soft Q-Learning (SQL) and Soft Actor-Critic trade off reward and policy entropy, which has the potential to improve training stability and robustness. Most MaxEnt RL methods, however, use a constant trade-off coefficient (temperature), contrary to the intuition that the temperature should be high early in training to avoid overfitting to noisy value estimates, and should decrease later in training as we increasingly trust high value estimates to lead to good rewards. Moreover, our confidence in value estimates is state-dependent and increases each time more evidence is used to update an estimate. In this paper, we present a simple state-based temperature-scheduling approach and instantiate it as Count-Based Soft Q-Learning (CBSQL). We evaluate our approach on a toy domain as well as on several Atari 2600 domains and show promising results.
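A plausible shape for such a state-dependent schedule is a temperature that decays with the visit count of a (discretized) state; the constants and the exact decay rule below are illustrative assumptions, not necessarily the paper's.

```python
from collections import defaultdict

visit_counts = defaultdict(int)

def temperature(state_key, alpha_init=1.0, alpha_min=0.01, decay=0.1):
    """Return a high temperature for rarely visited states and a lower one as evidence accumulates."""
    visit_counts[state_key] += 1
    return max(alpha_min, alpha_init / (1.0 + decay * visit_counts[state_key]))
```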
How to make imitation learning generalize better when demonstrations are relatively limited has been a persistent problem in reinforcement learning (RL). Poor demonstrations lead to narrow and biased data distributions, non-Markovian human expert demonstrations make it difficult for the agent to learn, and over-reliance on sub-optimal trajectories can make it hard for the agent to improve its performance. To address these issues, we propose a new algorithm named TD3FG that smoothly transitions from learning from experts to learning from experience. Our algorithm achieves good performance in the MuJoCo environment with limited and sub-optimal demonstrations. We use behavioral cloning to train a network as a reference action generator, and utilize it in terms of both the loss function and the exploration noise. This innovation helps the agent extract prior knowledge from the demonstrations while reducing the adverse effects of their poor Markovian properties. Compared with the BC + fine-tuning and DDPGfD approaches, it achieves better performance, especially when the demonstrations are relatively limited. We call our method TD3FG, meaning TD3 from a generator.
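One way to read "using the reference generator in the loss function and the exploration noise" is a TD3 actor loss regularized toward the generator, with exploration noise centered near the generator's output. The sketch below is that reading under assumed component names, not the paper's exact update.

```python
import torch

def actor_loss(critic, actor, bc_generator, states, reg_weight=0.5):
    """TD3-style actor loss regularized toward a behavior-cloned reference action generator."""
    actions = actor(states)
    with torch.no_grad():
        reference = bc_generator(states)            # frozen BC network provides reference actions
    q_term = -critic(states, actions).mean()        # maximize the critic's value estimate
    bc_term = ((actions - reference) ** 2).mean()   # stay close to the demonstrated behavior
    return q_term + reg_weight * bc_term

def explore(actor, bc_generator, state, mix=0.3, noise_std=0.1):
    """Exploration action blended between the actor and the BC reference, plus Gaussian noise."""
    with torch.no_grad():
        a = (1 - mix) * actor(state) + mix * bc_generator(state)
    return a + noise_std * torch.randn_like(a)
```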
This paper considers learning robot locomotion and manipulation tasks from expert demonstrations. Generative Adversarial Imitation Learning (GAIL) trains a discriminator that distinguishes expert transitions from agent transitions, and in turn uses a reward defined by the discriminator output to optimize the agent's policy generator. This generative adversarial training approach is very powerful but depends on a delicate balance between discriminator and generator training. In high-dimensional problems, the discriminator training may easily overfit or exploit associations with task-irrelevant features for transition classification. A key insight of this work is that performing imitation learning in a suitable latent task space makes the training process stable, even in challenging high-dimensional problems. We use an action-encoder model to obtain a low-dimensional latent action space and train a latent policy with adversarial imitation learning (LAPAL). The encoder model can be trained offline from state-action pairs to obtain a task-agnostic latent action representation, or online simultaneously with the discriminator and generator training to obtain a task-aware latent action representation. We demonstrate that LAPAL training is stable, with near-monotonic performance improvement, and achieves expert performance in most locomotion and manipulation tasks, while the GAIL baseline converges more slowly and does not achieve expert performance in high-dimensional environments.
Hierarchical Reinforcement Learning (HRL) algorithms have been demonstrated to perform well on high-dimensional decision making and robotic control tasks. However, because they solely optimize for rewards, the agent tends to search the same space redundantly. This problem reduces the speed of learning and achieved reward. In this work, we present an Off-Policy HRL algorithm that maximizes entropy for efficient exploration. The algorithm learns a temporally abstracted low-level policy and is able to explore broadly through the addition of entropy to the high-level. The novelty of this work is the theoretical motivation of adding entropy to the RL objective in the HRL setting. We empirically show that the entropy can be added to both levels if the Kullback-Leibler (KL) divergence between consecutive updates of the low-level policy is sufficiently small. We performed an ablative study to analyze the effects of entropy on hierarchy, in which adding entropy to the high-level emerged as the most desirable configuration. Furthermore, a higher temperature in the low-level leads to Q-value overestimation and increases the stochasticity of the environment that the high-level operates on, making learning more challenging. Our method, SHIRO, surpasses state-of-the-art performance on a range of simulated robotic control benchmark tasks and requires minimal tuning.
The standard formulation of reinforcement learning lacks a practical way of specifying which behaviors are admissible and which are forbidden. Most often, practitioners specify behavioral requirements through manual reward engineering, a counter-intuitive process that requires several iterations and is prone to reward hacking by the agent. In this work, we argue that constrained RL, which has almost exclusively been used for safe RL, also has the potential to significantly reduce the amount of work spent on reward specification in applied reinforcement learning projects. To this end, we propose specifying behavioral preferences in the CMDP framework and using a Lagrangian method, which seeks to solve a min-max problem between the agent's policy and the Lagrange multipliers, to automatically weigh each of the behavioral constraints. Specifically, we investigate how a CMDP can be adjusted to solve goal-based tasks while adhering to a set of behavioral constraints, and we propose modifications to the SAC-Lagrangian algorithm to handle the challenging case of several constraints. We evaluate this framework on a set of continuous control tasks relevant to the application of reinforcement learning for NPC design in video games.
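A minimal sketch of the Lagrangian machinery with several behavioral constraints, one multiplier per constraint; the variable names and update rule are illustrative, not the paper's exact algorithm.

```python
import torch

n_constraints = 3
log_lambdas = torch.zeros(n_constraints, requires_grad=True)  # one log-multiplier per behavioral constraint
dual_opt = torch.optim.Adam([log_lambdas], lr=3e-4)

def constrained_actor_loss(q_value, log_prob, constraint_costs, alpha):
    """SAC-style actor loss plus a weighted penalty per behavioral constraint.

    constraint_costs: tensor of shape [batch, n_constraints].
    """
    lams = log_lambdas.exp().detach()               # multipliers are constants w.r.t. the actor
    penalty = (constraint_costs * lams).sum(dim=-1)
    return (alpha * log_prob - q_value + penalty).mean()

def dual_update(constraint_costs, budgets):
    """Gradient ascent on each multiplier: it grows while its constraint is violated."""
    violation = constraint_costs.detach().mean(dim=0) - budgets
    dual_opt.zero_grad()
    (-(log_lambdas.exp() * violation).sum()).backward()
    dual_opt.step()
```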
The framework of deep reinforcement learning (DRL) provides a powerful and widely applicable mathematical formalization for sequential decision-making. This paper presents a novel DRL framework, termed \emph{$f$-divergence reinforcement learning (FRL)}. In FRL, the policy evaluation and policy improvement phases are performed simultaneously by minimizing the $f$-divergence between the learning policy and the sampling policy, which is distinct from conventional DRL algorithms that aim to maximize the expected cumulative reward. We theoretically prove that minimizing such an $f$-divergence makes the learning policy converge to the optimal policy. In addition, we convert the process of training agents in the FRL framework into a saddle-point optimization problem with a specific $f$ function through the Fenchel conjugate, which forms a new approach to policy evaluation and policy improvement. Through mathematical proofs and empirical evaluation, we demonstrate that the FRL framework has two advantages: (1) the policy evaluation and policy improvement processes are performed simultaneously, and (2) the issue of overestimating the value function is naturally alleviated. To evaluate the effectiveness of the FRL framework, we conduct experiments on Atari 2600 video games, showing that agents trained in the FRL framework match or surpass the baseline DRL algorithms.
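The saddle-point form follows from the standard variational (Fenchel-conjugate) representation of an $f$-divergence; the abstract does not state which $f$ the paper ultimately instantiates:

$$D_f(P\,\|\,Q)\;=\;\sup_{T}\;\mathbb{E}_{x\sim P}\big[T(x)\big]-\mathbb{E}_{x\sim Q}\big[f^{*}(T(x))\big],$$

where $f^{*}$ denotes the Fenchel conjugate of $f$ and the supremum is taken over a class of functions $T$.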
Hyperparameter optimization is an important problem in machine learning, as it aims to achieve state-of-the-art performance in any model. Great efforts have been devoted to this area, such as random search, grid search, and Bayesian optimization. In this paper, we model the hyperparameter optimization process as a Markov decision process and solve it with reinforcement learning. A novel hyperparameter optimization method based on Soft Actor-Critic and hierarchical mixture regularization is proposed. Experiments show that the proposed method can obtain better hyperparameters in a shorter time.
Reinforcement learning in partially observable domains is challenging due to the lack of observable state information. Thankfully, learning offline in a simulator with such state information is often possible. In particular, we propose a method for partially observable reinforcement learning that uses a fully observable policy (which we call a state expert) during offline training to improve online performance. Based on Soft Actor-Critic (SAC), our agent balances performing actions similar to the state expert and getting high returns under partial observability. Our approach can leverage the fully-observable policy for exploration and parts of the domain that are fully observable while still being able to learn under partial observability. On six robotics domains, our method outperforms pure imitation, pure reinforcement learning, the sequential or parallel combination of both types, and a recent state-of-the-art method in the same setting. A successful policy transfer to a physical robot in a manipulation task from pixels shows our approach's practicality in learning interesting policies under partial observability.
Compared to on-policy policy gradient techniques, off-policy model-free deep reinforcement learning (RL) approaches that use previously collected data can improve sampling efficiency. However, off-policy learning becomes challenging when the discrepancy between the distribution under the policy of interest and the policy that collected the data increases. Although well-studied importance sampling and off-policy policy gradient techniques have been proposed to compensate for this discrepancy, they usually require a collection of long trajectories, which increases computational complexity and induces additional problems such as vanishing or exploding gradients. Moreover, their generalization to continuous action domains is strictly limited, since they require action probabilities, which makes them unsuitable for deterministic policies. To overcome these limitations, we introduce an alternative off-policy correction algorithm for continuous action spaces, Actor-Critic Off-Policy Correction (AC-Off-POC), to mitigate the potential drawbacks introduced by previously collected data. Through a novel discrepancy measure computed from the agent's most recent action decisions on the states of randomly sampled batches of transitions, the approach does not require actual or estimated action probabilities for any policy and offers an adequate one-step importance sampling. Theoretical results show that the introduced approach attains a contraction mapping with a unique fixed point, which allows for "safe" off-policy learning. Our empirical results suggest that AC-Off-POC consistently improves the state of the art by effectively scheduling the learning rates of Q-learning and policy optimization, achieving competitive returns in fewer steps than the competing methods.
While reinforcement learning algorithms provide automated acquisition of optimal policies, practical application of such methods requires a number of design decisions, such as manually designing reward functions that not only define the task, but also provide sufficient shaping to accomplish it. In this paper, we view reinforcement learning as inferring policies that achieve desired outcomes, rather than as a problem of maximizing rewards. To solve this inference problem, we establish a novel variational inference formulation that allows us to derive a well-shaped reward function which can be learned directly from environment interactions. From the corresponding variational objective, we also derive a new probabilistic Bellman backup operator and use it to develop an off-policy algorithm to solve goal-directed tasks. We empirically demonstrate that this method eliminates the need to hand-craft reward functions for a suite of diverse manipulation and locomotion tasks and leads to effective goal-directed behaviors.
Many practical applications of reinforcement learning constrain agents to learn from a fixed batch of data which has already been gathered, without offering further possibility for data collection. In this paper, we demonstrate that due to errors introduced by extrapolation, standard off-policy deep reinforcement learning algorithms, such as DQN and DDPG, are incapable of learning without data correlated to the distribution under the current policy, making them ineffective for this fixed batch setting. We introduce a novel class of off-policy algorithms, batch-constrained reinforcement learning, which restricts the action space in order to force the agent towards behaving close to on-policy with respect to a subset of the given data. We present the first continuous control deep reinforcement learning algorithm which can learn effectively from arbitrary, fixed batch data, and empirically demonstrate the quality of its behavior in several tasks.
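A sketch of the batch-constrained idea for continuous control: candidate actions are proposed by a generative model fit to the batch, lightly perturbed, and the highest-valued candidate is executed. The component names and perturbation scheme below are assumptions based on the description above, not a verbatim reproduction of the algorithm.

```python
import torch

def select_action(state, action_generator, perturbation, q_net, n_candidates=10, phi=0.05):
    """Pick the best-Q action among candidates constrained to resemble the fixed batch."""
    states = state.unsqueeze(0).repeat(n_candidates, 1)
    with torch.no_grad():
        candidates = action_generator(states)                              # actions similar to the batch data
        candidates = candidates + phi * perturbation(states, candidates)   # small learned adjustment
        q_values = q_net(states, candidates)
    return candidates[q_values.argmax()]
```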