Uncertainty quantification is one of the central challenges for machine learning in real-world applications. In reinforcement learning, an agent confronts two kinds of uncertainty, called epistemic uncertainty and aleatoric uncertainty. Disentangling and evaluating these uncertainties simultaneously offers the chance to improve the agent's final performance, accelerate training, and facilitate quality assurance after deployment. In this work, we extend the Deep Deterministic Policy Gradient (DDPG) algorithm into an uncertainty-aware reinforcement learning algorithm for continuous control tasks. It exploits epistemic uncertainty to speed up exploration and aleatoric uncertainty to learn a risk-sensitive policy. We conduct numerical experiments showing that our DDPG variant outperforms vanilla DDPG without uncertainty estimation on benchmark tasks for robotic control and power-grid optimization.
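A minimal sketch of how the two uncertainty types could be exposed in a DDPG-style critic, assuming an ensemble of critics with variance heads; the class and function names, the ensemble size, and the way the estimates would feed an exploration bonus and a risk penalty are illustrative assumptions, not the paper's architecture.

```python
# Hedged sketch: one way to separate the two uncertainty types the abstract refers to.
# Epistemic uncertainty ~ disagreement across an ensemble of critics;
# aleatoric uncertainty ~ a per-sample variance head predicted by each critic.
import torch
import torch.nn as nn

class GaussianCritic(nn.Module):
    """Critic that outputs a mean Q-value and a log-variance (aleatoric) term."""
    def __init__(self, state_dim, action_dim, hidden=256):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.mean_head = nn.Linear(hidden, 1)
        self.logvar_head = nn.Linear(hidden, 1)

    def forward(self, state, action):
        h = self.body(torch.cat([state, action], dim=-1))
        return self.mean_head(h), self.logvar_head(h)

def uncertainties(critics, state, action):
    """Ensemble spread -> epistemic; mean predicted variance -> aleatoric."""
    means, logvars = zip(*(c(state, action) for c in critics))
    means = torch.stack(means)                      # [ensemble, batch, 1]
    epistemic = means.std(dim=0)                    # disagreement between critics
    aleatoric = torch.stack(logvars).exp().mean(0)  # average predicted noise
    return means.mean(0), epistemic, aleatoric

critics = [GaussianCritic(state_dim=8, action_dim=2) for _ in range(5)]
s, a = torch.randn(32, 8), torch.randn(32, 2)
q, epi, alea = uncertainties(critics, s, a)
# e.g. add an exploration bonus proportional to `epi`, and penalise `alea`
# in the actor objective to obtain a risk-sensitive policy.
```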
In model-free deep reinforcement learning (RL) algorithms, using noisy value estimates to supervise policy evaluation and optimization is detrimental to sample efficiency. Since this noise is heteroscedastic, its effects can be mitigated with uncertainty-based weights during optimization. Previous approaches rely on sampled ensembles, which do not capture all aspects of the uncertainty. We provide a systematic analysis of the sources of uncertainty in the noisy supervision that occurs in RL, and introduce Inverse-Variance RL, a Bayesian framework that combines probabilistic ensembles and batch inverse-variance weighting. We propose a method in which two complementary uncertainty estimation methods account for both the Q-value and the environment stochasticity to better mitigate the negative impacts of noisy supervision. Our results show significant improvements in sample efficiency on discrete and continuous control tasks.
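A minimal sketch of batch inverse-variance weighting of TD errors, assuming a per-target variance estimate is available (for instance from a probabilistic critic ensemble evaluated at the bootstrap target); the function name, normalisation, and epsilon floor are illustrative choices rather than the paper's exact formulation.

```python
import torch

def inverse_variance_loss(q_pred, td_target, target_var, eps=1e-3):
    """Weight each squared TD error by the inverse of its estimated target variance.

    `target_var` could come, e.g., from the spread of a probabilistic critic
    ensemble at the bootstrap target (an assumption here, not the paper's
    exact estimator)."""
    weights = 1.0 / (target_var + eps)
    weights = weights / weights.sum()          # normalise over the batch
    return (weights * (q_pred - td_target) ** 2).sum()

q_pred = torch.randn(64, 1)
td_target = torch.randn(64, 1)
target_var = torch.rand(64, 1)                 # placeholder variance estimates
loss = inverse_variance_loss(q_pred, td_target, target_var)
```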
Deep reinforcement learning (DRL) and deep multi-agent reinforcement learning (MARL) have achieved great success in a wide range of domains, including game AI, autonomous vehicles, robotics, and more. However, DRL and deep MARL agents are notoriously sample inefficient, typically requiring millions of interactions even for relatively simple problem settings, which prevents wide application and deployment in real-world scenarios. One bottleneck challenge behind this is the well-known exploration problem, i.e., how to efficiently explore the environment and collect informative experiences so that policy learning converges toward optimal policies. The problem becomes even more challenging in complex environments with sparse rewards, noisy distractions, long horizons, and non-stationary co-learners. In this paper, we conduct a comprehensive survey of existing exploration methods for single-agent and multi-agent RL. We begin the survey by identifying several key challenges to efficient exploration. Beyond the two main branches above, we also include other notable exploration methods built on different ideas and techniques. In addition to algorithmic analysis, we provide a comprehensive and unified empirical comparison of exploration methods for DRL on a set of commonly used benchmarks. Based on our algorithmic and empirical investigation, we finally summarize the open problems of exploration in DRL and deep MARL and point out a few future directions.
Model-free deep reinforcement learning (RL) has been successfully applied to challenging continuous control domains. However, poor sample efficiency prevents these methods from being widely used in real-world domains. We address this problem by proposing a novel model-free algorithm, Realistic Actor-Critic (RAC), which aims to resolve the trade-off between value underestimation and overestimation by learning a family of policies with respect to various confidence bounds of the Q-function. We construct Uncertainty-Punished Q-learning (UPQ), which uses an ensemble of critics to control the estimation bias of the Q-function, shifting the Q-function smoothly from lower to higher confidence bounds. Guided by these critics, RAC employs Universal Value Function Approximators (UVFA) to learn many optimistic and pessimistic policies simultaneously with the same neural network. Optimistic policies generate effective exploratory behavior, while pessimistic policies reduce the risk of value overestimation to ensure stable updates of the policy and Q-functions. The method can be incorporated into any off-policy actor-critic RL algorithm. Our method achieves 10x the sample efficiency of SAC and a 25% performance improvement over it on the most challenging Humanoid environment, obtaining an episode reward of 11107 at 10^6 time steps. All source code is available at https://github.com/ihuhuhu/rac.
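A hedged sketch of an uncertainty-punished bootstrap target of the kind the abstract describes, using the spread of a critic ensemble; the mean-minus-beta-times-std form and the UVFA conditioning shown in the comment are assumptions, not necessarily RAC's exact equations.

```python
import torch

def upq_target(ensemble_q_next, reward, done, beta, gamma=0.99):
    """Uncertainty-punished bootstrap target.

    `ensemble_q_next`: [n_critics, batch, 1] target-critic values at (s', a').
    `beta` interpolates from optimistic (small penalty) to pessimistic (large
    penalty) confidence bounds; the linear mean-minus-beta*std form is an
    illustrative assumption."""
    mean = ensemble_q_next.mean(dim=0)
    std = ensemble_q_next.std(dim=0)
    q_next = mean - beta * std
    return reward + gamma * (1.0 - done) * q_next

# A UVFA-style actor/critic would additionally take `beta` as an input, so one
# network represents the whole optimistic-to-pessimistic policy family:
#   action = actor(torch.cat([state, beta.expand(state.size(0), 1)], dim=-1))
```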
Asset allocation (or portfolio management) is the task of determining how to optimally allocate a finite budget of funds across a range of financial instruments/assets, such as stocks. This study investigates the performance of reinforcement learning (RL) applied to portfolio management using model-free deep RL agents. We trained several RL agents on real-world stock prices to learn how to perform asset allocation. We compared the performance of these RL agents against some baseline agents, and also compared the RL agents with one another to understand which classes of agents performed better. From our analysis, RL agents can perform the task of portfolio management, since they significantly outperformed the baseline agents (random allocation and uniform allocation). Four RL agents (A2C, SAC, PPO, and TRPO) outperformed the best baseline, MPT, overall. This demonstrates the ability of RL agents to discover more profitable trading strategies. Furthermore, there was no significant performance difference between value-based and policy-based RL agents. Actor-critic agents performed better than other types of agents. On-policy agents also performed better because they are better at policy evaluation, and sample efficiency is not a significant issue in portfolio management. This study shows that RL agents can substantially improve asset allocation, since they outperform strong baselines. Based on our analysis, on-policy, actor-critic RL agents show the most promise.
Accurate value estimates are important for off-policy reinforcement learning. Algorithms based on temporal difference learning are typically prone to an over- or underestimation bias that builds up over time. In this paper, we propose a general method called Adaptively Calibrated Critics (ACC) that uses the most recent high-variance but unbiased on-policy rollouts to alleviate the bias of the low-variance temporal difference targets. We apply ACC to Truncated Quantile Critics, an algorithm for continuous control that allows the bias to be regulated with a hyperparameter tuned per environment. The resulting algorithm adaptively adjusts this parameter during training, rendering hyperparameter search unnecessary, and sets a new state of the art on the OpenAI gym continuous control benchmark among all algorithms that do not tune hyperparameters for each environment. Additionally, we demonstrate that ACC is quite general by further applying it to TD3 and showing improved performance in this setting as well.
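A toy sketch of the calibration idea, assuming the bias-controlling parameter is nudged by a fixed step whenever the critic disagrees in sign with recent unbiased Monte-Carlo returns; the update rule and clipping range are illustrative, not ACC's actual procedure.

```python
def acc_step(bias_param, q_estimate, mc_return, lr=0.1, lo=0.0, hi=1.0):
    """Nudge the bias-controlling hyperparameter (e.g. the truncation level of
    Truncated Quantile Critics) toward agreement with unbiased on-policy
    Monte-Carlo returns.

    If the critic overestimates relative to the rollout return, increase the
    pessimism parameter; if it underestimates, decrease it. The step rule and
    clipping range are illustrative assumptions."""
    error = q_estimate - mc_return          # >0: overestimation, <0: underestimation
    bias_param = bias_param + lr * (1.0 if error > 0 else -1.0)
    return min(hi, max(lo, bias_param))

beta = 0.5
beta = acc_step(beta, q_estimate=120.0, mc_return=95.0)   # -> more pessimistic
```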
An inherent problem in reinforcement learning is coping with policies that are uncertain about which action to take (or about the value of a state). Model uncertainty, more formally known as epistemic uncertainty, refers to the expected prediction error of a model beyond the sampling noise. In this paper, we propose a metric for epistemic uncertainty estimation in Q-value functions, which we term pathwise epistemic uncertainty. We further develop a method to compute its approximate upper bound, which we call the F-value. We experimentally apply the latter to Deep Q-Networks (DQN) and show that uncertainty estimation in reinforcement learning is a useful indicator of learning progress. We then propose a new approach for improving the sample efficiency of actor-critic algorithms by learning from an existing (previously learned or hard-coded) oracle policy while uncertainty is high, with the aim of avoiding unproductive random actions during training. We call this Critic Confidence Guided Exploration (CCGE). We implement CCGE on top of Soft Actor-Critic (SAC) using our F-value metric, apply it to a handful of popular Gym environments, and show that it achieves better sample efficiency and total episodic reward than vanilla SAC in limited contexts.
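A hedged sketch of confidence-guided exploration: defer to the oracle policy whenever critic uncertainty is high, otherwise trust the learned actor. Ensemble disagreement and a fixed threshold stand in for the paper's F-value bound.

```python
import torch

def ccge_action(actor, oracle_policy, critics, state, threshold):
    """Pick the oracle's action while the critic ensemble is uncertain,
    otherwise trust the learned actor.

    The disagreement-based uncertainty and the fixed threshold are stand-ins
    for the paper's F-value bound, not its exact definition."""
    with torch.no_grad():
        a_learned = actor(state)
        q_values = torch.stack([c(state, a_learned) for c in critics])
        uncertainty = q_values.std(dim=0)               # [batch, 1]
        use_oracle = (uncertainty > threshold).float()
        a_oracle = oracle_policy(state)
        return use_oracle * a_oracle + (1.0 - use_oracle) * a_learned
```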
The rapid changes in the finance industry due to the increasing amount of data have revolutionized techniques for data processing and data analysis and brought new theoretical and computational challenges. In contrast to classical stochastic control theory and other analytical approaches for solving financial decision-making problems, which heavily rely on model assumptions, new developments in reinforcement learning (RL) are able to make full use of the large amount of financial data with fewer model assumptions and to improve decision-making in complex financial environments. This survey paper aims to review the recent developments and use of RL approaches in finance. We introduce Markov decision processes, the setting for many commonly used RL approaches. Various algorithms are then introduced, with a focus on value-based and policy-based methods that do not require any model assumptions. Connections are made with neural networks to extend the framework to encompass deep RL algorithms. Our survey concludes by discussing the application of these RL algorithms to a variety of decision-making problems in finance, including optimal execution, portfolio optimization, option pricing and hedging, market making, smart order routing, and robo-advising.
Although reinforcement learning (RL) is effective for sequential decision-making problems under uncertainty, it still fails to thrive in real-world systems where risk or safety is a binding constraint. In this paper, we formulate the RL problem with safety constraints as a non-zero-sum game. When deployed with maximum entropy RL, this formulation leads to a safe adversarially guided soft actor-critic framework, called SAAC. In SAAC, the adversary aims to break the safety constraint while the RL agent aims to maximize the constrained value function given the adversary's policy. The safety constraint on the agent's value function manifests only as a repulsion term between the agent's and the adversary's policies. Unlike previous approaches, SAAC can address different safety criteria such as safe exploration, mean-variance risk sensitivity, and CVaR-like coherent risk sensitivity. We illustrate the design of the adversary for these constraints. Then, in each of these variations, we show that in addition to learning to solve the task, the agent distances itself from the unsafe actions of the adversary. Finally, on challenging continuous control tasks, we demonstrate that SAAC achieves faster convergence, better efficiency, and fewer failures to satisfy the safety constraints than risk-averse distributional RL and risk-neutral soft actor-critic algorithms.
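One way the repulsion term could look in code is a KL bonus that rewards the agent for placing its action distribution far from the adversary's; this is only an illustration of the idea, since SAAC derives the exact term from the constrained value function.

```python
import torch
import torch.distributions as D

def repulsion_bonus(agent_dist, adversary_dist, coef=1.0):
    """Sketch of the repulsion term: reward the agent for placing its action
    distribution far (in KL) from the adversary's unsafe policy. The exact
    form in SAAC follows from the constrained value function; this is only an
    illustration of the idea."""
    return coef * D.kl_divergence(agent_dist, adversary_dist).mean()

agent = D.Normal(torch.zeros(32, 2), torch.ones(32, 2))
adversary = D.Normal(0.5 * torch.ones(32, 2), 0.8 * torch.ones(32, 2))
actor_objective_extra = repulsion_bonus(agent, adversary)  # added to the SAC objective
```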
Off-policy reinforcement learning (RL) using a fixed offline dataset of logged interactions is an important consideration in real world applications. This paper studies offline RL using the DQN Replay Dataset comprising the entire replay experience of a DQN agent on 60 Atari 2600 games. We demonstrate that recent off-policy deep RL algorithms, even when trained solely on this fixed dataset, outperform the fully-trained DQN agent. To enhance generalization in the offline setting, we present Random Ensemble Mixture (REM), a robust Q-learning algorithm that enforces optimal Bellman consistency on random convex combinations of multiple Q-value estimates. Offline REM trained on the DQN Replay Dataset surpasses strong RL baselines. Ablation studies highlight the role of offline dataset size and diversity as well as the algorithm choice in our positive results. Overall, the results here present an optimistic view that robust RL algorithms used on sufficiently large and diverse offline datasets can lead to high quality policies. To provide a testbed for offline RL and reproduce our results, the DQN Replay Dataset is released at offline-rl.github.io.
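The core REM update is concrete enough to sketch directly: draw a random point on the simplex, mix the Q-heads with it, and apply the usual Bellman loss to the mixture (the simplex sampling via normalised uniforms is the usual choice and assumed here).

```python
import torch
import torch.nn.functional as F

def rem_loss(q_heads, target_q_heads, actions, rewards, dones, gamma=0.99):
    """Random Ensemble Mixture loss: enforce the Bellman equation on a random
    convex combination of the K Q-heads (same mixture for online and target).

    q_heads / target_q_heads: [K, batch, num_actions]; actions: [batch] (long)."""
    k = q_heads.shape[0]
    alphas = torch.rand(k, 1, 1)
    alphas = alphas / alphas.sum(dim=0, keepdim=True)       # point on the simplex

    q_mix = (alphas * q_heads).sum(dim=0)                   # [batch, num_actions]
    q_taken = q_mix.gather(1, actions.unsqueeze(1)).squeeze(1)

    with torch.no_grad():
        tq_mix = (alphas * target_q_heads).sum(dim=0)
        target = rewards + gamma * (1.0 - dones) * tq_mix.max(dim=1).values

    return F.smooth_l1_loss(q_taken, target)
```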
Offline reinforcement learning (RL) refers to the problem of learning policies entirely from a large batch of previously collected data. This problem setting offers the promise of utilizing such datasets to acquire policies without any costly or dangerous active exploration. However, it is also challenging, due to the distributional shift between the offline training data and those states visited by the learned policy. Despite significant recent progress, the most successful prior methods are model-free and constrain the policy to the support of data, precluding generalization to unseen states. In this paper, we first observe that an existing model-based RL algorithm already produces significant gains in the offline setting compared to model-free approaches. However, standard model-based RL methods, designed for the online setting, do not provide an explicit mechanism to avoid the offline setting's distributional shift issue. Instead, we propose to modify the existing model-based RL methods by applying them with rewards artificially penalized by the uncertainty of the dynamics. We theoretically show that the algorithm maximizes a lower bound of the policy's return under the true MDP. We also characterize the trade-off between the gain and risk of leaving the support of the batch data. Our algorithm, Model-based Offline Policy Optimization (MOPO), outperforms standard model-based RL algorithms and prior state-of-the-art model-free offline RL algorithms on existing offline RL benchmarks and two challenging continuous control tasks that require generalizing from data collected for a different task.
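The reward modification is simple to sketch: penalize the model-predicted reward by a scaled uncertainty estimate of the dynamics. The particular estimator below (largest predicted next-state standard deviation across the ensemble) is one common heuristic; MOPO's analysis covers admissible estimators more generally.

```python
import numpy as np

def penalized_reward(pred_reward, ensemble_next_std, lam=1.0):
    """MOPO-style reward used inside model rollouts:
        r_tilde(s, a) = r_hat(s, a) - lambda * u(s, a)
    Here u(s, a) is taken as the largest predicted next-state standard
    deviation (in norm) across the dynamics ensemble, one common heuristic.

    pred_reward:        [batch]
    ensemble_next_std:  [ensemble, batch, state_dim]"""
    u = np.linalg.norm(ensemble_next_std, axis=-1).max(axis=0)   # [batch]
    return pred_reward - lam * u
```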
The growth of deep reinforcement learning (RL) has brought many exciting tools and methods to the field. This rapid expansion makes it important to understand the interplay between the individual elements of the RL toolbox. We approach this task from an empirical perspective by conducting a study in the continuous control setting. We present multiple insights of a fundamental nature, including: averaging multiple actors trained from the same data boosts performance; existing methods are unstable across training runs, epochs of training, and evaluation runs; the commonly used additive action noise is not required for effective training; exploration based on posterior sampling explores better than approximate UCB combined with the weighted Bellman backup; the weighted Bellman backup alone cannot replace clipped double Q-learning; and the initialization of the critics plays an important role in ensemble-based actor-critic exploration. As a conclusion, we show how existing tools can be brought together in a novel way, giving rise to the Ensemble Deep Deterministic Policy Gradients (ED2) method, which yields state-of-the-art results on continuous control tasks from OpenAI Gym MuJoCo. On the practical side, ED2 is conceptually straightforward, easy to code, and does not require knowledge beyond the existing RL toolbox.
Offline reinforcement learning (RL) is suitable for safety-critical domains where online exploration is too costly or dangerous. In safety-critical settings, decision-making should take into consideration the risk of catastrophic outcomes. In other words, decision-making should be risk-sensitive. Previous works on risk in offline RL combine together offline RL techniques, to avoid distributional shift, with risk-sensitive RL algorithms, to achieve risk-sensitivity. In this work, we propose risk-sensitivity as a mechanism to jointly address both of these issues. Our model-based approach is risk-averse to both epistemic and aleatoric uncertainty. Risk-aversion to epistemic uncertainty prevents distributional shift, as areas not covered by the dataset have high epistemic uncertainty. Risk-aversion to aleatoric uncertainty discourages actions that may result in poor outcomes due to environment stochasticity. Our experiments show that our algorithm achieves competitive performance on deterministic benchmarks, and outperforms existing approaches for risk-sensitive objectives in stochastic domains.
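A hedged sketch of how both risk sources could enter a single value estimate: sample short rollouts under every model in a learned dynamics ensemble (epistemic spread) with stochastic transitions (aleatoric spread), then score a state by the CVaR of the pooled returns. The rollout horizon, sample counts, CVaR level, and the `sample_step` model API are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def risk_averse_value(model_ensemble, policy, state, horizon=5, n_samples=8,
                      alpha=0.2, gamma=0.99):
    """Monte-Carlo sketch of a value estimate that is risk-averse to both
    uncertainty sources: epistemic (spread across ensemble members) and
    aleatoric (stochastic transitions within each model)."""
    returns = []
    for model in model_ensemble:                 # epistemic: spread across models
        for _ in range(n_samples):               # aleatoric: stochastic rollouts
            s, ret, disc = state, 0.0, 1.0
            for _ in range(horizon):
                a = policy(s)
                s, r = model.sample_step(s, a)   # assumed stochastic model API
                ret += disc * r
                disc *= gamma
            returns.append(ret)
    returns = np.sort(np.array(returns))
    k = max(1, int(alpha * len(returns)))
    return returns[:k].mean()                    # CVaR_alpha of the return samples
```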
Compared to on-policy policy gradient techniques, off-policy model-free deep reinforcement learning (RL) methods that use previously gathered data can improve sample efficiency. However, off-policy learning becomes challenging when the discrepancy between the distribution of the policy of interest and that of the policies that collected the data grows. Although well-studied importance sampling and off-policy policy gradient techniques have been proposed to compensate for this discrepancy, they usually require collections of long trajectories, which increases computational complexity and induces additional problems such as vanishing or exploding gradients. Moreover, their generalization to continuous action domains is strictly limited because they require action probabilities, which is unsuitable for deterministic policies. To overcome these limitations, we introduce an alternative off-policy correction algorithm for continuous action spaces, Actor-Critic Off-Policy Correction (AC-Off-POC), to mitigate the potential drawbacks introduced by the previously collected data. Through a novel discrepancy measure computed from the agent's most recent action decisions on the states of a randomly sampled batch of transitions, the approach does not require actual or estimated action probabilities for any policy and offers an adequate one-step importance sampling. Theoretical results show that the introduced approach can achieve a contraction mapping with a fixed unique point, allowing for "safe" off-policy learning. Our empirical results suggest that AC-Off-POC consistently improves upon state-of-the-art returns by efficiently scheduling the learning rates of Q-learning and policy optimization, in fewer steps than the competing methods.
In value-based reinforcement learning methods such as deep Q-learning, function approximation errors are known to lead to overestimated value estimates and suboptimal policies. We show that this problem persists in an actor-critic setting and propose novel mechanisms to minimize its effects on both the actor and the critic. Our algorithm builds on Double Q-learning, by taking the minimum value between a pair of critics to limit overestimation. We draw the connection between target networks and overestimation bias, and suggest delaying policy updates to reduce per-update error and further improve performance. We evaluate our method on the suite of OpenAI gym tasks, outperforming the state of the art in every environment tested.
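The two mechanisms the abstract names, clipped double Q-learning and delayed policy updates, can be sketched as follows; this is a minimal illustration of the target computation, not a full implementation.

```python
import torch

def td3_target(critic1_t, critic2_t, actor_t, next_state, reward, done,
               gamma=0.99, noise_std=0.2, noise_clip=0.5, max_action=1.0):
    """Clipped double Q-learning target with target-policy smoothing."""
    with torch.no_grad():
        next_action = actor_t(next_state)
        noise = (torch.randn_like(next_action) * noise_std).clamp(-noise_clip, noise_clip)
        next_action = (next_action + noise).clamp(-max_action, max_action)
        q1 = critic1_t(next_state, next_action)
        q2 = critic2_t(next_state, next_action)
        return reward + gamma * (1.0 - done) * torch.min(q1, q2)

# The actor (and target networks) are updated only every `policy_delay` critic
# steps, e.g.:
#   if step % policy_delay == 0:
#       actor_loss = -critic1(state, actor(state)).mean()
```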
A widely studied deep reinforcement learning (RL) technique known as Prioritized Experience Replay (PER) allows agents to learn from transitions sampled with probability proportional to their temporal-difference (TD) error. Although PER has been shown to be one of the most crucial components for the overall performance of deep RL methods in discrete action domains, many empirical studies indicate that it considerably underperforms with off-policy actor-critic algorithms in continuous control. We theoretically show that actor networks cannot be effectively trained on transitions that have large TD errors. As a result, the approximate policy gradient computed under the Q-network diverges from the actual gradient computed under the optimal Q-function. Motivated by this, we introduce a novel experience replay sampling framework for actor-critic methods, which also accounts for stability issues and the recently identified reasons behind the poor empirical performance of PER. The introduced algorithm suggests a new branch of improvements on PER for the effective and efficient training of both the actor and the critic networks. An extensive set of experiments verifies our theoretical claims and demonstrates that the introduced method significantly outperforms the competing approaches and obtains state-of-the-art results over the standard off-policy actor-critic algorithms.
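For reference, the baseline PER mechanism the abstract analyses samples transitions with probability proportional to a power of their TD error and corrects with importance-sampling weights; a minimal sketch follows (this is the standard scheme being critiqued, not the paper's proposed sampling framework).

```python
import numpy as np

class ProportionalReplay:
    """Minimal proportional PER: P(i) ~ (|delta_i| + eps)^alpha, with
    importance-sampling weights w_i ~ (N * P(i))^(-beta)."""
    def __init__(self, capacity, alpha=0.6, eps=1e-6):
        self.alpha, self.eps = alpha, eps
        self.data, self.priorities = [], np.zeros(capacity)
        self.capacity, self.pos = capacity, 0

    def add(self, transition, td_error):
        if len(self.data) < self.capacity:
            self.data.append(transition)
        else:
            self.data[self.pos] = transition
        self.priorities[self.pos] = (abs(td_error) + self.eps) ** self.alpha
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size, beta=0.4):
        p = self.priorities[:len(self.data)]
        p = p / p.sum()
        idx = np.random.choice(len(self.data), batch_size, p=p)
        weights = (len(self.data) * p[idx]) ** (-beta)
        weights /= weights.max()                 # normalise for stability
        return [self.data[i] for i in idx], idx, weights
```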
Reinforcement learning (RL) gained considerable attention by creating decision-making agents that maximize rewards received from fully observable environments. However, many real-world problems are partially or noisily observable by nature, where agents do not receive the true and complete state of the environment. Such problems are formulated as partially observable Markov decision processes (POMDPs). Some studies applied RL to POMDPs by recalling previous decisions and observations or inferring the true state of the environment from received observations. Nevertheless, aggregating observations and decisions over time is impractical for environments with high-dimensional continuous state and action spaces. Moreover, so-called inference-based RL approaches require a large number of samples to perform well, since agents eschew uncertainty in the inferred state for decision-making. Active inference is a framework that is naturally formulated in POMDPs and directs agents to select decisions by minimising expected free energy (EFE). This supplies the reward-maximising (exploitative) behaviour of RL with an information-seeking (exploratory) behaviour. Despite this exploratory behaviour of active inference, its usage is limited to discrete state and action spaces due to the computational difficulty of the EFE. We propose a unified principle for joint information-seeking and reward maximization that clarifies a theoretical connection between active inference and RL, unifies active inference and RL, and overcomes their aforementioned limitations. Our findings are supported by strong theoretical analysis. The proposed framework's superior exploration property is also validated by experimental results on partially observable tasks with high-dimensional continuous state and action spaces. Moreover, the results show that our model solves reward-free problems, making task reward design optional.
Classical reinforcement learning (RL) techniques are generally concerned with the design of decision-making policies driven by the maximisation of the expected outcome. Nevertheless, this approach does not take into consideration the potential risk associated with the actions taken, which may be critical in certain applications. To address that issue, the present research work introduces a novel methodology based on distributional RL to derive sequential decision-making policies that are sensitive to the risk, the latter being modelled by the tail of the return probability distribution. The core idea is to replace the $Q$ function generally standing at the core of learning schemes in RL by another function taking into account both the expected return and the risk. Named the risk-based utility function $U$, it can be extracted from the random return distribution $Z$ naturally learnt by any distributional RL algorithm. This makes it possible to span the complete potential trade-off between risk minimisation and expected return maximisation, in contrast to fully risk-averse methodologies. Fundamentally, this research yields a truly practical and accessible solution for learning risk-sensitive policies with minimal modification to the distributional RL algorithm, and with an emphasis on the interpretability of the resulting decision-making process.
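A small sketch of what extracting a risk-based utility from a learned quantile representation of $Z$ could look like; the convex combination of expectation and CVaR below is one possible instantiation, as the paper defines $U$ more generally.

```python
import numpy as np

def risk_based_utility(quantiles, rho=0.5, alpha=0.1):
    """Combine expected return and a tail-risk measure extracted from a
    quantile approximation of the random return Z(s, a):

        U(s, a) = (1 - rho) * E[Z] + rho * CVaR_alpha[Z]

    `quantiles`: quantile estimates of Z(s, a). The convex combination is an
    illustrative choice, not the paper's exact definition of U."""
    quantiles = np.sort(np.asarray(quantiles))
    expectation = quantiles.mean()
    k = max(1, int(alpha * len(quantiles)))
    cvar = quantiles[:k].mean()                 # mean of the worst alpha-tail
    return (1.0 - rho) * expectation + rho * cvar

# Greedy risk-sensitive action selection then replaces argmax_a E[Z(s, a)]
# with argmax_a U(s, a).
```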
In recent years distributional reinforcement learning has produced many state-of-the-art results. Increasingly sample-efficient distributional algorithms for the discrete action domain have been developed over time, varying primarily in the way they parameterize their approximations of value distributions and in how they quantify the differences between those distributions. In this work we transfer three of the most well-known and successful of those algorithms (QR-DQN, IQN and FQF) to the continuous action domain by extending two powerful actor-critic algorithms (TD3 and SAC) with distributional critics. We investigate whether the relative performance of the methods for the discrete action space translates to the continuous case. To that end we compare them empirically on the pybullet implementations of a set of continuous control tasks. Our results indicate qualitative invariance regarding the number and placement of distributional atoms in the deterministic, continuous action setting.
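All three transferred critics minimise a quantile-regression Huber loss over the predicted return distribution; a minimal, generic version is sketched below (the pairing of predicted and target quantiles follows the usual QR-DQN-style construction).

```python
import torch

def quantile_huber_loss(pred_quantiles, target_samples, taus, kappa=1.0):
    """Quantile-regression Huber loss used by QR-DQN/IQN/FQF-style critics.

    pred_quantiles: [batch, N]   quantile estimates of Z(s, a)
    target_samples: [batch, M]   quantiles/samples of the Bellman target
    taus:           [N]          quantile fractions of the predictions"""
    # pairwise TD errors: [batch, M, N]
    td = target_samples.unsqueeze(2) - pred_quantiles.unsqueeze(1)
    huber = torch.where(td.abs() <= kappa,
                        0.5 * td ** 2,
                        kappa * (td.abs() - 0.5 * kappa))
    weight = (taus.view(1, 1, -1) - (td.detach() < 0).float()).abs()
    return (weight * huber / kappa).sum(dim=2).mean()
```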
In order to avoid conventional control methods, which create obstacles due to the complexity of systems and the intense demand for data, modern and more efficient control methods need to be developed. Off-policy, model-free reinforcement learning algorithms help avoid working with complex models. They have become prominent methods in terms of speed and accuracy because the algorithms use their past experience to learn optimal policies. In this study, three reinforcement learning algorithms, DDPG, TD3 and SAC, have been used to train a Fetch robotic manipulator on four different tasks in the MuJoCo simulation environment. All of these algorithms are off-policy and able to achieve their desired targets by optimizing both the policy and value functions. In the current study, the efficiency and the speed of these three algorithms are analyzed in a controlled environment.