Reinforcement learning algorithms based on Q-learning are driving deep reinforcement learning (DRL) research toward solving complex problems and achieving superhuman performance on many of them. However, Q-learning is known to be positively biased, since it learns by using the maximum over noisy estimates of expected values. The systematic overestimation of action values, combined with the inherently high variance of DRL methods, can lead to incrementally accumulating errors and cause the learning algorithm to diverge. Ideally, we would like DRL agents to take into account their own uncertainty about the optimality of each action and be able to exploit it to make more informed estimates of the expected return. In this regard, Weighted Q-Learning (WQL) effectively reduces the bias and shows remarkable results in stochastic environments. WQL uses a weighted sum of the estimated action values, where the weights correspond to the probability of each action value being the maximum; however, the computation of these probabilities is only practical in the tabular setting. In this work, we provide methodological advances to benefit from the WQL properties in DRL, by using neural networks trained with dropout as an effective approximation of deep Gaussian processes. In particular, we adopt the Concrete Dropout variant to obtain calibrated estimates of epistemic uncertainty in DRL. The estimator is then obtained by taking several stochastic forward passes through the action-value network and computing the weights in a Monte Carlo fashion. Such weights are Bayesian estimates of the probability of each action value being the maximum with respect to the posterior probability distribution estimated via dropout. We show how our novel Deep Weighted Q-Learning algorithm reduces the bias with respect to relevant baselines and provide empirical evidence of its advantages on representative benchmarks.
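A minimal sketch of the Monte Carlo weight computation described above (not the authors' implementation; the network, tensor shapes, discount, and the number of passes `K` are assumptions):

```python
import torch

def weighted_q_target(q_net, next_state, reward, done, gamma=0.99, K=30):
    """next_state: tensor of shape (1, obs_dim); reward, done: floats."""
    q_net.train()  # keep dropout stochastic at inference time
    with torch.no_grad():
        # K stochastic forward passes -> (K, num_actions) samples of Q(s', .)
        samples = torch.stack([q_net(next_state).squeeze(0) for _ in range(K)])
        # w[a] = fraction of passes in which action a was the argmax
        w = torch.bincount(samples.argmax(dim=1),
                           minlength=samples.shape[1]).float() / K
        # WQL target: weighted sum of mean action values instead of a hard max
        return reward + gamma * (1.0 - done) * (samples.mean(dim=0) * w).sum()
```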
Deep reinforcement learning is poised to revolutionise the field of AI and represents a step towards building autonomous systems with a higher-level understanding of the visual world. Currently, deep learning is enabling reinforcement learning to scale to problems that were previously intractable, such as learning to play video games directly from pixels. Deep reinforcement learning algorithms are also applied to robotics, allowing control policies for robots to be learned directly from camera inputs in the real world. In this survey, we begin with an introduction to the general field of reinforcement learning, then progress to the main streams of value-based and policy-based methods. Our survey will cover central algorithms in deep reinforcement learning, including the deep Q-network, trust region policy optimisation, and asynchronous advantage actor-critic. In parallel, we highlight the unique advantages of deep neural networks, focusing on visual understanding via reinforcement learning. To conclude, we describe several current areas of research within the field.
Efficient exploration remains a major challenge for reinforcement learning (RL). Common dithering strategies for exploration, such as ε-greedy, do not carry out temporally-extended (or deep) exploration; this can lead to exponentially larger data requirements. However, most algorithms for statistically efficient RL are not computationally tractable in complex environments. Randomized value functions offer a promising approach to efficient exploration with generalization, but existing algorithms are not compatible with nonlinearly parameterized value functions. As a first step towards addressing such contexts we develop bootstrapped DQN. We demonstrate that bootstrapped DQN can combine deep exploration with deep neural networks for exponentially faster learning than any dithering strategy. In the Arcade Learning Environment bootstrapped DQN substantially improves learning speed and cumulative performance across most games.
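As an illustration of the bootstrapped architecture, a small sketch under assumed layer sizes and head count; the per-transition bootstrap masks used to train the heads are omitted:

```python
import random
import torch
import torch.nn as nn

class BootstrappedQNet(nn.Module):
    """K independent Q-heads on a shared torso; one head drives each episode."""
    def __init__(self, obs_dim, num_actions, num_heads=10):
        super().__init__()
        self.torso = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU())
        self.heads = nn.ModuleList(
            [nn.Linear(64, num_actions) for _ in range(num_heads)])

    def forward(self, obs, head):
        return self.heads[head](self.torso(obs))   # Q-values from one head

net = BootstrappedQNet(obs_dim=4, num_actions=2)
head = random.randrange(len(net.heads))            # resample at episode start
action = net(torch.zeros(1, 4), head).argmax(dim=1).item()
```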
Learning control policies with large action spaces is a challenging problem in the field of reinforcement learning due to present inefficiencies in exploration. In this work, we introduce a deep reinforcement learning (DRL) algorithm called Multi-Action Network (MAN) learning to address the challenge of large discrete action spaces. We propose separating the action space into two components, creating a value neural network for each sub-action. MAN then uses temporal-difference learning to train the networks synchronously, which is simpler than training a single network with a direct action output. To evaluate the proposed method, we test MAN on a block-stacking task and then extend it to twelve games with 18-action spaces from the Atari Arcade Learning Environment. Our results indicate that MAN learns faster than both Deep Q-Learning and Double Q-Learning, suggesting that our synchronized temporal-difference approach performs better than the methods currently available for large action spaces.
In temporal-difference reinforcement learning algorithms, variance in value estimation can cause instability and overestimation of the maximal target value. Many algorithms have been proposed to reduce overestimation, including several recent ensemble methods; however, none have shown success in sample-efficient learning by addressing estimation variance as the root cause of overestimation. In this paper, we propose a simple ensemble method that estimates the target value as the ensemble mean. Despite its simplicity, MeanQ shows remarkable sample efficiency in experiments on the Atari Learning Environment benchmark. Importantly, we find that an ensemble of size 5 sufficiently reduces estimation variance to obviate the lagging target network, eliminating it as a source of bias and further gaining sample efficiency. We justify the design choices in MeanQ intuitively and empirically, including the necessity of independent experience sampling. On a set of 26 benchmark Atari environments, MeanQ outperforms all tested baselines, including the best available baseline, SUNRISE, at 100K interaction steps in 16/26 environments, by 68% on average. MeanQ also outperforms Rainbow DQN at 500K steps in 21/26 environments, by 49% on average, and achieves average human-level performance using 200K ($\pm$100K) interaction steps. Our implementation is available at https://github.com/indylab/meanq.
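A hedged sketch of the ensemble-mean target described above (the `ensemble` container and tensor shapes are assumptions; note the absence of a separate lagging target network):

```python
import torch

def meanq_target(ensemble, next_states, rewards, dones, gamma=0.99):
    """ensemble: list of K Q-networks; next_states: (batch, obs_dim)."""
    with torch.no_grad():
        # (K, batch, num_actions) -> mean over the ensemble axis
        q_next = torch.stack([q(next_states) for q in ensemble]).mean(dim=0)
        return rewards + gamma * (1.0 - dones) * q_next.max(dim=1).values
```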
Most reinforcement learning algorithms take advantage of an experience replay buffer to repeatedly train on samples the agent has observed in the past. This prevents catastrophic forgetting, but simply assigning equal importance to each sample is a naive strategy. In this paper, we propose a method to prioritize samples based on how much can be learned from them. We define the learn-ability of a sample as the steady decrease, over time, of the training loss associated with it. We develop an algorithm that prioritizes samples with high learn-ability, while assigning lower priority to samples that are hard to learn, typically due to noise or stochasticity. We empirically show that our method is more robust than random sampling and also better than prioritizing solely by training loss, i.e. the temporal-difference loss used in vanilla prioritized experience replay.
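A toy sketch of the idea, where the learn-ability statistic, window size, and default priority are assumptions rather than the paper's exact definitions:

```python
import numpy as np
from collections import deque

class LearnabilityReplay:
    """Toy buffer: priority = average recent drop in a sample's training loss."""
    def __init__(self, capacity, history=5, eps=1e-3):
        self.capacity, self.history, self.eps = capacity, history, eps
        self.data, self.losses = [], []

    def add(self, transition):
        self.data.append(transition)
        self.losses.append(deque(maxlen=self.history))
        if len(self.data) > self.capacity:
            self.data.pop(0); self.losses.pop(0)

    def _learnability(self, hist):
        if len(hist) < 2:
            return 1.0                                  # default for fresh samples
        drops = [max(a - b, 0.0) for a, b in zip(hist, list(hist)[1:])]
        return float(np.mean(drops)) + self.eps

    def sample(self, batch_size, rng=np.random):
        p = np.array([self._learnability(h) for h in self.losses])
        p /= p.sum()
        idx = rng.choice(len(self.data), size=batch_size, p=p)
        return idx, [self.data[i] for i in idx]

    def update(self, idx, losses):                      # call after each train step
        for i, loss in zip(idx, losses):
            self.losses[i].append(float(loss))
```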
Many practical applications of reinforcement learning constrain agents to learn from a fixed batch of data which has already been gathered, without offering further possibility for data collection. In this paper, we demonstrate that due to errors introduced by extrapolation, standard off-policy deep reinforcement learning algorithms, such as DQN and DDPG, are incapable of learning without data correlated to the distribution under the current policy, making them ineffective for this fixed batch setting. We introduce a novel class of off-policy algorithms, batch-constrained reinforcement learning, which restricts the action space in order to force the agent towards behaving close to on-policy with respect to a subset of the given data. We present the first continuous control deep reinforcement learning algorithm which can learn effectively from arbitrary, fixed batch data, and empirically demonstrate the quality of its behavior in several tasks.
The Deep Q-learning Network (DQN) is a successful way to combine reinforcement learning with deep neural networks and has led to widespread application of reinforcement learning. One challenging issue when applying DQN or other reinforcement learning algorithms to real-world problems is data collection; therefore, how to improve data efficiency is one of the most important problems in reinforcement learning research. In this paper, we propose a framework that uses the Max-Mean loss in the Deep Q-Network (M$^2$DQN). Instead of sampling one batch of experiences per training step, we sample several batches from the experience replay and update the parameters so that the maximum TD-error of these batches is minimized. The proposed method can be combined with most existing techniques of the DQN algorithm by replacing the loss function. We verify the effectiveness of this framework with one of the most widely used DQN techniques, Double DQN (DDQN), in several Gym games. The results show that our method leads to a substantial improvement in learning speed and performance.
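A rough sketch of one training step under an assumed `replay.sample` API; only the loss construction differs from a standard DQN/DDQN update:

```python
import torch
import torch.nn.functional as F

def m2dqn_step(q_net, target_net, optimizer, replay, num_batches=4,
               batch_size=32, gamma=0.99):
    losses = []
    for _ in range(num_batches):
        s, a, r, s2, d = replay.sample(batch_size)        # assumed API, tensors
        with torch.no_grad():
            target = r + gamma * (1 - d) * target_net(s2).max(dim=1).values
        q = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
        losses.append(F.smooth_l1_loss(q, target))
    loss = torch.stack(losses).max()                      # minimise the worst batch
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()
```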
In model-free deep reinforcement learning (RL) algorithms, using noisy value estimates to supervise policy evaluation and optimization is detrimental to sample efficiency. As this noise is heteroscedastic, its effects can be mitigated using uncertainty-based weights in the optimization process. Previous methods rely on sampled ensembles, which do not capture all aspects of uncertainty. We provide a systematic analysis of the sources of uncertainty in the noisy supervision that occurs in RL and introduce inverse-variance RL, a Bayesian framework that combines probabilistic ensembles with batch inverse-variance weighting. We propose a method whereby two complementary uncertainty estimation methods account for both the Q-value and the environment stochasticity to better mitigate the negative impact of noisy supervision. Our results show significant improvement in sample efficiency on discrete and continuous control tasks.
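An illustrative sketch of inverse-variance loss weighting (the variance source, e.g. an ensemble of target estimates, and the epsilon constant are assumptions):

```python
import torch

def inverse_variance_loss(td_errors, target_variance, eps=1e-2):
    """td_errors, target_variance: 1-D tensors over the batch."""
    w = 1.0 / (target_variance + eps)   # down-weight high-variance targets
    w = w / w.sum()                     # normalise weights over the batch
    return (w * td_errors.pow(2)).sum()
```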
The optimistic nature of the Q-learning target leads to an overestimation bias, which is an inherent problem associated with standard $Q$-learning. Such a bias fails to account for the possibility of low returns, particularly in risky scenarios. However, the existence of a bias, whether overestimation or underestimation, is not necessarily undesirable. In this paper, we analyze the utility of biased learning and show that specific types of biases may be preferable, depending on the scenario. Based on this finding, we design a novel reinforcement learning algorithm, Balanced Q-learning, in which the target is modified to be a convex combination of a pessimistic and an optimistic term, whose associated weights are determined online, analytically. We prove the convergence of this algorithm in a tabular setting and empirically demonstrate its superior learning performance in various environments.
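A sketch of the convex-combination target; the pessimistic term below (a per-state minimum over actions) and the fixed `beta` are placeholders for illustration, whereas the paper determines the weights online and analytically:

```python
import torch

def balanced_target(reward, done, q_next, beta=0.5, gamma=0.99):
    """q_next: (batch, num_actions) target-network values for the next state."""
    optimistic = q_next.max(dim=1).values
    pessimistic = q_next.min(dim=1).values        # illustrative pessimistic term
    v_next = beta * pessimistic + (1.0 - beta) * optimistic
    return reward + gamma * (1.0 - done) * v_next
```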
The deep reinforcement learning community has made several independent improvements to the DQN algorithm. However, it is unclear which of these extensions are complementary and can be fruitfully combined. This paper examines six extensions to the DQN algorithm and empirically studies their combination. Our experiments show that the combination provides state-of-the-art performance on the Atari 2600 benchmark, both in terms of data efficiency and final performance. We also provide results from a detailed ablation study that shows the contribution of each component to overall performance.
In value-based reinforcement learning methods such as deep Q-learning, function approximation errors are known to lead to overestimated value estimates and suboptimal policies. We show that this problem persists in an actor-critic setting and propose novel mechanisms to minimize its effects on both the actor and the critic. Our algorithm builds on Double Q-learning, by taking the minimum value between a pair of critics to limit overestimation. We draw the connection between target networks and overestimation bias, and suggest delaying policy updates to reduce per-update error and further improve performance. We evaluate our method on the suite of OpenAI gym tasks, outperforming the state of the art in every environment tested.
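A minimal sketch of the clipped double-Q target described above (the critic call signature `critic(state, action)` is an assumption):

```python
import torch

def clipped_double_q_target(critic1, critic2, target_actor, reward,
                            next_state, done, gamma=0.99):
    with torch.no_grad():
        next_action = target_actor(next_state)
        q1 = critic1(next_state, next_action)
        q2 = critic2(next_state, next_action)
        # take the minimum of the two critics to limit overestimation
        return reward + gamma * (1.0 - done) * torch.min(q1, q2)
```

The full method additionally perturbs the target action with clipped noise (target policy smoothing) and delays actor updates; those pieces are omitted here.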
Maximum entropy reinforcement learning (MaxEnt RL) algorithms such as Soft Q-Learning (SQL) and Soft Actor-Critic trade off reward and policy entropy, which has the potential to improve training stability and robustness. However, most MaxEnt RL methods use a constant tradeoff coefficient (temperature), contrary to the intuition that the temperature should be high early in training, to avoid overfitting to noisy value estimates, and should decrease later in training as we increasingly trust high value estimates and exploit them for reward. Moreover, our confidence in value estimates is state-dependent, increasing every time more evidence is used to update an estimate. In this paper, we present a simple state-dependent temperature scheduling approach and instantiate it as Count-Based Soft Q-Learning (CBSQL). We evaluate our approach on a toy domain and on several Atari 2600 domains and show promising results.
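A hedged sketch of a count-based, state-dependent temperature; the state hashing key and the inverse-square-root decay rule are assumptions for illustration:

```python
from collections import defaultdict

class CountTemperature:
    """Soft-Q temperature that decays with the visit count of each state."""
    def __init__(self, tau0=1.0, tau_min=0.01):
        self.counts = defaultdict(int)
        self.tau0, self.tau_min = tau0, tau_min

    def __call__(self, state_key):
        self.counts[state_key] += 1
        # high temperature for rarely visited states, lower as counts grow
        return max(self.tau0 / self.counts[state_key] ** 0.5, self.tau_min)
```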
Off-policy reinforcement learning (RL) using a fixed offline dataset of logged interactions is an important consideration in real world applications. This paper studies offline RL using the DQN Replay Dataset comprising the entire replay experience of a DQN agent on 60 Atari 2600 games. We demonstrate that recent off-policy deep RL algorithms, even when trained solely on this fixed dataset, outperform the fully-trained DQN agent. To enhance generalization in the offline setting, we present Random Ensemble Mixture (REM), a robust Q-learning algorithm that enforces optimal Bellman consistency on random convex combinations of multiple Q-value estimates. Offline REM trained on the DQN Replay Dataset surpasses strong RL baselines. Ablation studies highlight the role of offline dataset size and diversity as well as the algorithm choice in our positive results. Overall, the results here present an optimistic view that robust RL algorithms used on sufficiently large and diverse offline datasets can lead to high quality policies. To provide a testbed for offline RL and reproduce our results, the DQN Replay Dataset is released at offline-rl.github.io.
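A sketch of the random-convex-combination loss (head architecture, loss choice, and tensor shapes are assumptions):

```python
import torch
import torch.nn.functional as F

def rem_loss(q_heads, target_heads, s, a, r, s2, d, gamma=0.99):
    alphas = torch.rand(len(q_heads))
    alphas = alphas / alphas.sum()                       # random convex weights
    q = sum(w * h(s) for w, h in zip(alphas, q_heads))
    with torch.no_grad():
        q2 = sum(w * h(s2) for w, h in zip(alphas, target_heads))
        target = r + gamma * (1 - d) * q2.max(dim=1).values
    # enforce Bellman consistency on the mixture of Q-estimates
    return F.smooth_l1_loss(q.gather(1, a.unsqueeze(1)).squeeze(1), target)
```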
This paper presents a Double Deep Q-Network (DDQN) algorithm for trading a single asset, namely the E-mini S&P 500 continuous futures contract. We use a proven setup as the basis for our environment, with multiple extensions. The feature set of our trading agent is successively extended to include additional assets such as commodities, resulting in four models. We also account for environment conditions, including costs and crises. The trading agent is first trained for a specific time period, tested on new data, and compared against a long strategy (the market) as a benchmark. We analyze the differences between the various models and the in-sample/out-of-sample performance with respect to the environment. The experimental results show that the trading agent behaves appropriately: it can adapt its policy to different circumstances, for example by making more extensive use of the neutral position when trading costs are present. Moreover, the net asset value exceeded that of the benchmark, and the agent outperformed the market on the test set. We provide initial insights into the behavior of an agent in a financial domain using the DDQN algorithm, and the results of this study can be used for further development.
The echo state network (ESN) is a special type of recurrent neural network for processing time-series datasets. However, limited by the strong correlation between successive samples generated by the agent, ESN-based policy control algorithms have difficulty using the recursive least squares (RLS) algorithm to update the parameters of the ESN. To solve this problem, we propose two novel policy control algorithms, ESNRLS-Q and ESNRLS-Sarsa. First, to reduce the correlation of training samples, we use a leaky-integrator ESN and a mini-batch learning mode. Second, to make RLS suitable for training the ESN in mini-batch mode, we propose a new mean-approximation method for updating the RLS correlation matrix. Third, to prevent the ESN from over-fitting, we use an L1 regularization technique. Finally, to prevent overestimation of the target state-action values, we employ the Mellowmax method. Simulation results show that our algorithms have good convergence performance.
In recent years there have been many successes of using deep representations in reinforcement learning. Still, many of these applications use conventional architectures, such as convolutional networks, LSTMs, or auto-encoders. In this paper, we present a new neural network architecture for model-free reinforcement learning. Our dueling network represents two separate estimators: one for the state value function and one for the state-dependent action advantage function. The main benefit of this factoring is to generalize learning across actions without imposing any change to the underlying reinforcement learning algorithm. Our results show that this architecture leads to better policy evaluation in the presence of many similar-valued actions. Moreover, the dueling architecture enables our RL agent to outperform the state-of-the-art on the Atari 2600 domain.
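A minimal sketch of the dueling head, with the mean-advantage aggregation described in the paper:

```python
import torch
import torch.nn as nn

class DuelingHead(nn.Module):
    """Separate value and advantage streams recombined into Q-values."""
    def __init__(self, feat_dim, num_actions):
        super().__init__()
        self.value = nn.Linear(feat_dim, 1)
        self.advantage = nn.Linear(feat_dim, num_actions)

    def forward(self, features):
        v = self.value(features)                     # (batch, 1)
        a = self.advantage(features)                 # (batch, num_actions)
        return v + a - a.mean(dim=1, keepdim=True)   # Q(s, a)
```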
We present the first deep learning model to successfully learn control policies directly from high-dimensional sensory input using reinforcement learning. The model is a convolutional neural network, trained with a variant of Q-learning, whose input is raw pixels and whose output is a value function estimating future rewards. We apply our method to seven Atari 2600 games from the Arcade Learning Environment, with no adjustment of the architecture or learning algorithm. We find that it outperforms all previous approaches on six of the games and surpasses a human expert on three of them.
The powerful learning ability of deep neural networks enables reinforcement learning agents to learn effective control policies directly from continuous environments. In theory, to achieve stable performance, neural networks assume i.i.d. inputs, which unfortunately does not hold in the general reinforcement learning paradigm, where the training data is temporally correlated and non-stationary. This issue may lead to the phenomenon of "catastrophic interference" and a collapse in performance. In this paper, we propose IQ, i.e., Interference-aware Deep Q-learning, to mitigate catastrophic interference in single-task deep reinforcement learning. Specifically, we resort to online clustering to achieve online context division, together with a multi-head network and a knowledge-distillation regularization term for preserving the policies of learned contexts. Built upon deep Q-networks, IQ consistently boosts stability and performance compared with existing methods, as verified by extensive experiments on classic control and Atari tasks. The code is publicly available at: https://github.com/sweety-dm/interference-aware-ware-deep-q-learning.
During the training of a reinforcement learning (RL) agent, the distribution of the training data is non-stationary, since the agent's behavior changes over time. Therefore, there is a risk that the agent becomes over-specialized to a particular distribution and that its performance suffers in the broader picture. Ensemble RL can mitigate this issue by learning a robust policy, but it suffers from heavy computational resource consumption due to the newly introduced value and policy functions. In this paper, to avoid this notorious resource consumption problem, we design a novel and simple ensemble deep RL framework that integrates multiple models into a single model. Specifically, we propose the Minimalist Ensemble Policy Gradient (MEPG) framework, which introduces a minimalist ensemble-consistent Bellman update by utilizing a modified dropout operator. MEPG holds the ensemble property by keeping the dropout consistency on both sides of the Bellman equation, and the dropout operator additionally increases MEPG's generalization capability. Moreover, we theoretically show that the policy evaluation phase in MEPG maintains two synchronized deep Gaussian processes. To verify the generalization capability of the MEPG framework, we perform experiments on the Gym simulator, which show that MEPG outperforms, or achieves performance comparable to, the current state-of-the-art ensemble methods and model-free methods without additional computational resource costs.