In recent years, neural networks have enjoyed a renaissance as function approximators in reinforcement learning. Two decades after Tesauro's TD-Gammon achieved near top-level human performance in backgammon, the deep reinforcement learning algorithm DQN achieved human-level performance in many Atari 2600 games. The purpose of this study is twofold. First, we propose two activation functions for neural network function approximation in reinforcement learning: the sigmoid-weighted linear unit (SiLU) and its derivative function (dSiLU). The activation of the SiLU is computed by the sigmoid function multiplied by its input. Second, we suggest that the more traditional approach of using on-policy learning with eligibility traces, instead of experience replay, and softmax action selection with simple annealing can be competitive with DQN, without the need for a separate target network. We validate our proposed approach by, first, achieving new state-of-the-art results in both stochastic SZ-Tetris and Tetris with a small 10×10 board, using TD(λ) learning and shallow dSiLU network agents, and, then, by outperforming DQN in the Atari 2600 domain by using a deep Sarsa(λ) agent with SiLU and dSiLU hidden units.
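A minimal NumPy sketch of the two activations as defined above: SiLU(x) = x·σ(x), and dSiLU as its derivative, σ(x)(1 + x(1 − σ(x))); the function names are illustrative, not taken from the paper's code.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def silu(x):
    # SiLU: the sigmoid of the input multiplied by the input itself, x * sigma(x).
    return x * sigmoid(x)

def dsilu(x):
    # dSiLU: derivative of the SiLU with respect to x,
    # sigma(x) * (1 + x * (1 - sigma(x))), used as an activation in its own right.
    s = sigmoid(x)
    return s * (1.0 + x * (1.0 - s))
```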
The deep reinforcement learning community has made several independent improvements to the DQN algorithm. However, it is unclear which of these extensions are complementary and can be fruitfully combined. This paper examines six extensions to the DQN algorithm and empirically studies their combination. Our experiments show that the combination provides state-of-the-art performance on the Atari 2600 benchmark, both in terms of data efficiency and final performance. We also provide results from a detailed ablation study that shows the contribution of each component to overall performance.
We present the first deep learning model to successfully learn control policies directly from high-dimensional sensory input using reinforcement learning. The model is a convolutional neural network, trained with a variant of Q-learning, whose input is raw pixels and whose output is a value function estimating future rewards. We apply our method to seven Atari 2600 games from the Arcade Learning Environment, with no adjustment of the architecture or learning algorithm. We find that it outperforms all previous approaches on six of the games and surpasses a human expert on three of them.
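As a hedged sketch (not the authors' code), a one-step Q-learning loss of the kind used to train such a value network might look as follows in PyTorch; the separate target network and the Huber loss are common refinements assumed here for stability, not details taken from this abstract.

```python
import torch
import torch.nn.functional as F

def q_learning_loss(online_net, target_net, batch, gamma=0.99):
    """One-step Q-learning loss on a batch of (s, a, r, s', done) transitions."""
    states, actions, rewards, next_states, dones = batch
    # Q(s, a; theta) for the actions that were actually taken.
    q_sa = online_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        # Bootstrapped target r + gamma * max_a' Q(s', a'; theta^-), zero at terminal states.
        next_q = target_net(next_states).max(dim=1).values
        target = rewards + gamma * (1.0 - dones) * next_q
    return F.smooth_l1_loss(q_sa, target)
```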
We propose a conceptually simple and lightweight framework for deep reinforcement learning that uses asynchronous gradient descent for optimization of deep neural network controllers. We present asynchronous variants of four standard reinforcement learning algorithms and show that parallel actor-learners have a stabilizing effect on training allowing all four methods to successfully train neural network controllers. The best performing method, an asynchronous variant of actor-critic, surpasses the current state-of-the-art on the Atari domain while training for half the time on a single multi-core CPU instead of a GPU. Furthermore, we show that asynchronous actor-critic succeeds on a wide variety of continuous motor control problems as well as on a new task of navigating random 3D mazes using a visual input.
In recent years there have been many successes of using deep representations in reinforcement learning. Still, many of these applications use conventional architectures, such as convolutional networks, LSTMs, or auto-encoders. In this paper, we present a new neural network architecture for model-free reinforcement learning. Our dueling network represents two separate estimators: one for the state value function and one for the state-dependent action advantage function. The main benefit of this factoring is to generalize learning across actions without imposing any change to the underlying reinforcement learning algorithm. Our results show that this architecture leads to better policy evaluation in the presence of many similar-valued actions. Moreover, the dueling architecture enables our RL agent to outperform the state-of-the-art on the Atari 2600 domain.
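A minimal PyTorch sketch of the dueling aggregation described here, combining the two streams as Q(s,a) = V(s) + A(s,a) − meanₐ′ A(s,a′); the MLP form and layer sizes are illustrative assumptions (the paper uses a convolutional torso for Atari).

```python
import torch
import torch.nn as nn

class DuelingHead(nn.Module):
    """Two estimators on top of shared features: a state-value stream V(s)
    and an action-advantage stream A(s, a), combined into Q(s, a)."""
    def __init__(self, feature_dim, num_actions, hidden=512):
        super().__init__()
        self.value_stream = nn.Sequential(
            nn.Linear(feature_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))
        self.advantage_stream = nn.Sequential(
            nn.Linear(feature_dim, hidden), nn.ReLU(), nn.Linear(hidden, num_actions))

    def forward(self, features):
        v = self.value_stream(features)                # [batch, 1]
        a = self.advantage_stream(features)            # [batch, num_actions]
        # Subtract the mean advantage so that V and A are identifiable.
        return v + a - a.mean(dim=1, keepdim=True)     # Q(s, a)
```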
In this work, we present and evaluate a new reinforcement learning method, Compact Experience Replay (COMPER), which uses temporal difference learning with predicted target values based on recurrence over sets of similar transitions, together with a new approach to experience replay based on two transition memories. Our goal is to reduce the amount of experience required to train agents on long-run cumulative reward. Its relevance to reinforcement learning lies in the small number of observations it needs to achieve results similar to those obtained by related methods in the literature, which generally require millions of video frames to train on Atari 2600 games. We report training trials on eight challenging Arcade Learning Environment (ALE) games using only 100,000 frames and about 25,000 iterations. We also present results for a DQN agent as a baseline on the same games under the same experimental protocol. To verify that the method approximates a good policy from a smaller number of observations, we also compare its results with those obtained from millions of frames reported on the ALE benchmark.
Learning control policies with large action spaces is a challenging problem in reinforcement learning because of current inefficiencies in exploration. In this work, we introduce a deep reinforcement learning (DRL) algorithm called Multi-Action Network (MAN) learning to address the challenge of large discrete action spaces. We propose separating the action space into two components, creating a value neural network for each sub-action. MAN then uses temporal difference learning to train the networks synchronously, which is simpler than training a single network with a large direct action output. To evaluate the proposed method, we test MAN on a block-stacking task and then extend it to 12 games from the Atari Arcade Learning Environment with 18-action spaces. Our results indicate that MAN learns faster than both deep Q-learning and double deep Q-learning, suggesting that our method is a better-performing synchronous temporal difference algorithm than those currently available for large action spaces.
Experience replay lets online reinforcement learning agents remember and reuse experiences from the past. In prior work, experience transitions were uniformly sampled from a replay memory. However, this approach simply replays transitions at the same frequency that they were originally experienced, regardless of their significance. In this paper we develop a framework for prioritizing experience, so as to replay important transitions more frequently, and therefore learn more efficiently. We use prioritized experience replay in Deep Q-Networks (DQN), a reinforcement learning algorithm that achieved human-level performance across many Atari games. DQN with prioritized experience replay achieves a new state-of-the-art, outperforming DQN with uniform replay on 41 out of 49 games.
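A minimal sketch of proportional prioritized sampling as described above, with the usual importance-sampling correction; the exponent values and the plain-array memory are simplifying assumptions (the paper also describes a rank-based variant and an efficient sum-tree implementation).

```python
import numpy as np

def sample_prioritized(priorities, batch_size, alpha=0.6, beta=0.4):
    """Sample transition indices with probability proportional to priority**alpha
    and return importance-sampling weights that correct the induced bias."""
    scaled = np.asarray(priorities, dtype=np.float64) ** alpha
    probs = scaled / scaled.sum()                         # P(i) = p_i^alpha / sum_k p_k^alpha
    indices = np.random.choice(len(probs), size=batch_size, p=probs)
    weights = (len(probs) * probs[indices]) ** (-beta)    # w_i = (N * P(i))^(-beta)
    weights /= weights.max()                              # normalize so the largest weight is 1
    return indices, weights
```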
Building general agents that perform well over a wide range of tasks has been an important goal of reinforcement learning since its inception. The problem has been the subject of a large body of work, with performance frequently measured by observing scores over the wide range of environments contained in the Atari 57 benchmark. Agent57 was the first agent to surpass the human benchmark on all 57 games, but this came at the cost of poor data efficiency, requiring nearly 80 billion frames of experience. Taking Agent57 as a starting point, we employ a diverse set of strategies to achieve a 200-fold reduction in the experience needed to outperform the human baseline. We investigate a range of instabilities and bottlenecks encountered in this reduced-data regime and propose effective solutions to build a more robust and efficient agent. We also demonstrate competitive performance with high-performing methods such as Muesli and MuZero. The four key components of our approach are (1) an approximate trust-region method that enables stable bootstrapping from the online network, (2) a normalization scheme for the loss and priorities that improves robustness when learning a set of value functions with widely varying scales, (3) an improved architecture employing techniques from NFNets to leverage deeper networks without normalization layers, and (4) a policy distillation method that smooths out the instantaneous greedy policy over time.
Efficient exploration remains a major challenge for reinforcement learning (RL). Common dithering strategies for exploration, such as ε-greedy, do not carry out temporally-extended (or deep) exploration; this can lead to exponentially larger data requirements. However, most algorithms for statistically efficient RL are not computationally tractable in complex environments. Randomized value functions offer a promising approach to efficient exploration with generalization, but existing algorithms are not compatible with nonlinearly parameterized value functions. As a first step towards addressing such contexts we develop bootstrapped DQN. We demonstrate that bootstrapped DQN can combine deep exploration with deep neural networks for exponentially faster learning than any dithering strategy. In the Arcade Learning Environment bootstrapped DQN substantially improves learning speed and cumulative performance across most games.
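A hedged sketch of the core architectural idea: K Q-value heads on a shared torso, with one head sampled at the start of each episode and followed greedily, yielding temporally-extended exploration. The MLP torso, sizes, and the omission of bootstrap masks over replayed data are simplifications, not details from the paper.

```python
import random
import torch
import torch.nn as nn

class BootstrappedQNetwork(nn.Module):
    """Shared torso with K independent Q-value heads."""
    def __init__(self, obs_dim, num_actions, num_heads=10, hidden=256):
        super().__init__()
        self.torso = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        self.heads = nn.ModuleList(
            [nn.Linear(hidden, num_actions) for _ in range(num_heads)])

    def forward(self, obs, head_index):
        # Q-values according to one particular head.
        return self.heads[head_index](self.torso(obs))

def choose_head(num_heads=10):
    # Sampled once per episode; the agent then acts greedily w.r.t. this head.
    return random.randrange(num_heads)
```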
Off-policy reinforcement learning (RL) using a fixed offline dataset of logged interactions is an important consideration in real world applications. This paper studies offline RL using the DQN Replay Dataset comprising the entire replay experience of a DQN agent on 60 Atari 2600 games. We demonstrate that recent off-policy deep RL algorithms, even when trained solely on this fixed dataset, outperform the fully-trained DQN agent. To enhance generalization in the offline setting, we present Random Ensemble Mixture (REM), a robust Q-learning algorithm that enforces optimal Bellman consistency on random convex combinations of multiple Q-value estimates. Offline REM trained on the DQN Replay Dataset surpasses strong RL baselines. Ablation studies highlight the role of offline dataset size and diversity as well as the algorithm choice in our positive results. Overall, the results here present an optimistic view that robust RL algorithms used on sufficiently large and diverse offline datasets can lead to high quality policies. To provide a testbed for offline RL and reproduce our results, the DQN Replay Dataset is released at offline-rl.github.io.
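A hedged PyTorch sketch of the REM loss described above: per sample, a random convex combination of K Q-heads is formed and a standard Bellman target is enforced on that mixture. Reusing the same mixing weights for the target estimates and using the Huber loss are assumptions consistent with common practice, not guaranteed details of the released implementation.

```python
import torch
import torch.nn.functional as F

def rem_loss(q_heads, q_heads_next, actions, rewards, dones, gamma=0.99):
    """Random Ensemble Mixture loss.
    q_heads, q_heads_next: [batch, K, num_actions] Q-estimates from the online
    and target networks, respectively."""
    batch_size, num_heads, _ = q_heads.shape
    alphas = torch.rand(batch_size, num_heads, 1)
    alphas = alphas / alphas.sum(dim=1, keepdim=True)      # random convex weights
    q_mix = (alphas * q_heads).sum(dim=1)                  # [batch, num_actions]
    q_mix_next = (alphas * q_heads_next).sum(dim=1)
    q_sa = q_mix.gather(1, actions.unsqueeze(1)).squeeze(1)
    target = rewards + gamma * (1.0 - dones) * q_mix_next.max(dim=1).values
    return F.smooth_l1_loss(q_sa, target.detach())
```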
Our main contribution in this work is the empirical finding that random general value functions (GVFs), i.e., deep action-conditional predictions that are random both in which features of the observations they predict and in the action sequences the predictions are conditioned on, form good auxiliary tasks for reinforcement learning (RL) problems. In particular, we show that random deep action-conditional predictions, when used as auxiliary tasks, yield state representations whose control performance is competitive with state-of-the-art hand-crafted auxiliary tasks such as value prediction, pixel control, and CURL on Atari and DeepMind Lab tasks. In another set of experiments, we stop the gradients from the RL part of the network to the state-representation-learning part and find, perhaps surprisingly, that the auxiliary tasks alone are sufficient to learn state representations good enough to outperform an end-to-end trained actor-critic baseline. We open-sourced our code at https://github.com/hwhitetooth/random_gvs.
Across the Arcade Learning Environment, Rainbow achieves a level of performance competitive with humans and modern RL algorithms. However, attaining this level of performance requires large amounts of data and hardware resources, making research in this area computationally expensive and often infeasible for real-world applications. This paper's contribution is threefold: we (1) propose an improved version of Rainbow that seeks to drastically reduce Rainbow's data, training-time, and compute requirements while maintaining its competitive performance; (2) empirically demonstrate the effectiveness of our approach through experiments on the Arcade Learning Environment; and (3) conduct a number of ablation studies to investigate the effect of the individual proposed modifications. Our improved version of Rainbow reaches a median human-normalized score close to that of classic Rainbow while using 20 times less data and requiring only 7.5 hours of training time on a single GPU. We also provide our full implementation, including pre-trained models.
Deep reinforcement learning (RL) has achieved several high profile successes in difficult decision-making problems. However, these algorithms typically require a huge amount of data before they reach reasonable performance. In fact, their performance during learning can be extremely poor. This may be acceptable for a simulator, but it severely limits the applicability of deep RL to many real-world tasks, where the agent must learn in the real environment. In this paper we study a setting where the agent may access data from previous control of the system. We present an algorithm, Deep Q-learning from Demonstrations (DQfD), that leverages small sets of demonstration data to massively accelerate the learning process even from relatively small amounts of demonstration data and is able to automatically assess the necessary ratio of demonstration data while learning thanks to a prioritized replay mechanism. DQfD works by combining temporal difference updates with supervised classification of the demonstrator's actions. We show that DQfD has better initial performance than Prioritized Dueling Double Deep Q-Networks (PDD DQN) as it starts with better scores on the first million steps on 41 of 42 games and on average it takes PDD DQN 83 million steps to catch up to DQfD's performance. DQfD learns to out-perform the best demonstration given in 14 of 42 games. In addition, DQfD leverages human demonstrations to achieve state-of-the-art results for 11 games. Finally, we show that DQfD performs better than three related algorithms for incorporating demonstration data into DQN.
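A sketch of the supervised large-margin classification term applied to demonstration transitions; the full DQfD loss additionally combines 1-step and n-step TD losses and L2 regularization, and the margin value here is an illustrative assumption.

```python
import torch

def large_margin_loss(q_values, expert_actions, margin=0.8):
    """Supervised term on demonstration data:
    max_a [Q(s, a) + margin * 1(a != a_E)] - Q(s, a_E),
    which pushes the demonstrator's action above all other actions by a margin."""
    batch_size, num_actions = q_values.shape
    margins = torch.full((batch_size, num_actions), margin, device=q_values.device)
    margins.scatter_(1, expert_actions.unsqueeze(1), 0.0)  # no margin for the expert action
    q_expert = q_values.gather(1, expert_actions.unsqueeze(1)).squeeze(1)
    return ((q_values + margins).max(dim=1).values - q_expert).mean()
```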
Despite the many recent successes of deep reinforcement learning (RL), its methods are still data-inefficient, which makes many problems prohibitively expensive to solve in terms of data. We aim to address this by learning state representations that exploit the rich supervisory signals available in unlabeled data. This paper introduces three different representation-learning algorithms that have access to different subsets of the data sources used by traditional RL algorithms: (i) GRICA is inspired by independent component analysis (ICA) and trains a deep neural network to output statistically independent features of the input; it does so by minimizing the mutual information between each feature and the other features, and it requires only an unlabeled collection of environment states. (ii) Latent representation prediction (LARP) requires more context: in addition to a state as input, it also needs the previous state and the action connecting them. This method learns a state representation by predicting the next state of the environment from the current state and action, and the predictor is used together with a graph-search algorithm. (iii) The third method learns a state representation by training a deep neural network to learn a smoothed version of the reward function. The representation is used to preprocess the inputs to deep RL, while the reward predictor is used for reward shaping; this method requires only state-reward pairs from the environment to learn the representation. We find that each method has its strengths and weaknesses, and conclude from our experiments that including unsupervised representation learning in RL problem-solving pipelines can speed up learning.
Maximum-entropy reinforcement learning (MaxEnt RL) algorithms such as soft Q-learning (SQL) and soft actor-critic trade off reward and policy entropy, which has the potential to improve training stability and robustness. Most MaxEnt RL methods, however, use a constant trade-off coefficient (temperature), contrary to the intuition that the temperature should be high early in training, to avoid overfitting to noisy value estimates, and should decrease later in training as we increasingly trust high value estimates and exploit them for reward. Moreover, our confidence in a value estimate is state dependent, increasing every time the estimate is updated with more evidence. In this paper, we propose a simple state-based temperature-scheduling approach and instantiate it as count-based soft Q-learning (CBSQL). We evaluate our approach on a toy domain as well as in several Atari 2600 domains and show promising results.
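The abstract does not give the exact schedule, so the following is only a plausible illustration of a state-dependent, count-based temperature for soft Q-learning: high when a state has rarely been visited, decaying as evidence accumulates, combined with the standard soft state value.

```python
import numpy as np

def count_based_temperature(visit_count, tau_init=1.0, tau_min=0.01):
    """Illustrative state-dependent temperature: large for rarely visited states,
    decaying as the state's value estimate is updated with more evidence.
    The exact schedule used by CBSQL may differ."""
    return max(tau_min, tau_init / np.sqrt(1.0 + visit_count))

def soft_state_value(q_values, tau):
    """Soft (MaxEnt) state value V(s) = tau * log sum_a exp(Q(s, a) / tau),
    computed in a numerically stable way."""
    q = np.asarray(q_values, dtype=np.float64)
    m = q.max()
    return m + tau * np.log(np.sum(np.exp((q - m) / tau)))
```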
The Arcade Learning Environment (ALE) was proposed as a platform for empirically assessing the generality of agents across Atari 2600 games. ALE offers a variety of challenging problems and has drawn significant attention from the deep reinforcement learning (RL) community. From Deep Q-Networks (DQN) to Agent57, RL agents appear to achieve superhuman performance in ALE. But is this really the case? In this paper, to explore this question, we first review the current evaluation metrics in the Atari benchmark and then reveal that the current criterion for superhuman performance is inappropriate, as it underestimates human performance relative to what is possible. To address these issues and promote the development of RL research, we propose a new Atari benchmark based on human world records (HWR), which places higher demands on RL agents in terms of both final performance and learning efficiency. Moreover, we summarize the state-of-the-art (SOTA) methods on the Atari benchmark and provide benchmark results over the new evaluation metrics based on human world records. From these new benchmark results, we conclude that at least four open challenges hinder RL agents from achieving superhuman performance. Finally, we discuss some promising ways to address these problems.
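By analogy with the usual human-normalized score, an HWR-based metric presumably normalizes against the human world record rather than a reference human tester; the exact definition used in the paper may differ, so the sketch below is only an assumption.

```python
def hwr_normalized_score(agent_score, random_score, world_record_score):
    """Score normalized against the human world record (HWR):
    0.0 corresponds to random play, 1.0 to matching the world record.
    This mirrors the standard human-normalized score; the paper's exact
    metric may differ in its details."""
    return (agent_score - random_score) / (world_record_score - random_score)
```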
This paper explores the problem of simultaneously learning a value function and a policy in deep actor-critic reinforcement learning models. We find that the common practice of learning these functions jointly is sub-optimal due to an order-of-magnitude difference in noise levels between the two tasks. Instead, we show that learning these tasks independently, but with a constrained distillation phase, significantly improves performance. Furthermore, we find that policy-gradient noise levels decrease when using a lower-variance return estimate, whereas value-learning noise levels decrease with a lower-bias estimate. Together these insights inform an extension to Proximal Policy Optimization that we call the Dual Network Architecture (DNA), which significantly outperforms its predecessor. DNA also exceeds the performance of the popular Rainbow DQN algorithm on four of the five environments tested, even under more difficult stochastic control settings.
This paper presents a Double Deep Q-Network (DDQN) algorithm for trading a single asset, namely the E-mini S&P 500 continuous futures contract. We use a proven setup as the foundation of our environment, with multiple extensions. The features of our trading agent are progressively extended to include additional assets such as commodities, resulting in four models. We also account for environment conditions, including costs and crises. Our trading agent is first trained on a specific time period and then tested on new data, and it is compared with a long (buy-and-hold) strategy, i.e., the market. We analyze the differences between the various models and their in-sample/out-of-sample performance with respect to the environment. The experimental results show that the trading agent behaves appropriately: it can adjust its policy to different circumstances, for example making wider use of the neutral position when trading costs are present. Moreover, its net asset value exceeds that of the benchmark, and the agent outperforms the market on the test set. Using the DDQN algorithm, we provide initial insights into the behavior of an agent in the financial domain. The results of this study can be used for further development.