Training model-free deep reinforcement learning models to solve image-to-image translation is difficult because it involves high-dimensional continuous state and action spaces. In this paper, drawing inspiration from the recent success of the maximum entropy reinforcement learning framework designed for challenging continuous control problems, we develop stochastic policies over high-dimensional continuous spaces, including image representation, generation, and control. Central to this approach is the Stochastic Actor-Executor-Critic (SAEC), an off-policy actor-critic model with an additional executor that generates realistic images. Specifically, the actor focuses on the high-level representation and control policy through stochastic latent actions, and explicitly directs the executor to produce the low-level actions that manipulate the state. Experiments on several image-to-image translation tasks demonstrate the effectiveness and robustness of the proposed SAEC when facing high-dimensional continuous-space problems.
Large deformations of organs, caused by diverse shapes and nonlinear shape variations, pose significant challenges for medical image registration. Traditional registration methods need to iteratively optimize an objective function via a specific deformation model along with meticulous parameter tuning, but they have limited capability for images with large deformations. Although deep-learning-based methods can learn the complex mapping from input images to their respective deformation fields, they are regression-based and prone to getting stuck in local minima, particularly when large deformations are involved. To this end, we present Stochastic Planner-Actor-Critic (SPAC), a novel reinforcement learning framework that performs step-wise registration. The key notion is warping the moving image successively at each time step so that it finally aligns with the fixed image. Considering that handling high-dimensional continuous action and state spaces is challenging in the conventional reinforcement learning (RL) framework, we introduce a new concept, the "plan", to the standard actor-critic model; it is low-dimensional and facilitates the actor in generating tractable high-dimensional actions. The entire framework is based on unsupervised training and operates in an end-to-end manner. We evaluate our method on several 2D and 3D medical image datasets, some of which contain large deformations. Our empirical results highlight that our work achieves consistent, significant gains and outperforms state-of-the-art methods.
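A minimal sketch of the step-wise registration loop described in this abstract follows; the module names (planner, actor, warp) and the loop structure are assumptions made for illustration, not the authors' implementation.

```python
# Illustrative only: module names and interfaces are assumed, not taken from the paper.
import torch

def stepwise_register(moving, fixed, planner, actor, warp, num_steps=10):
    """Warp the moving image a little at each step until it aligns with the fixed image."""
    warped = moving
    for _ in range(num_steps):
        state = torch.cat([warped, fixed], dim=1)  # current state: the image pair
        plan = planner(state)                      # low-dimensional latent "plan"
        deformation = actor(state, plan)           # high-dimensional action (dense deformation field)
        warped = warp(warped, deformation)         # apply the incremental deformation
    return warped
```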
Model-free deep reinforcement learning (RL) algorithms have been demonstrated on a range of challenging decision making and control tasks. However, these methods typically suffer from two major challenges: very high sample complexity and brittle convergence properties, which necessitate meticulous hyperparameter tuning. Both of these challenges severely limit the applicability of such methods to complex, real-world domains. In this paper, we propose soft actor-critic, an off-policy actor-critic deep RL algorithm based on the maximum entropy reinforcement learning framework. In this framework, the actor aims to maximize expected reward while also maximizing entropy; that is, to succeed at the task while acting as randomly as possible. Prior deep RL methods based on this framework have been formulated as Q-learning methods. By combining off-policy updates with a stable stochastic actor-critic formulation, our method achieves state-of-the-art performance on a range of continuous control benchmark tasks, outperforming prior on-policy and off-policy methods. Furthermore, we demonstrate that, in contrast to other off-policy algorithms, our approach is very stable, achieving very similar performance across different random seeds.
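For reference, the maximum entropy objective referred to above is commonly written as follows (a standard formulation; the paper's notation may differ in details):

$$ J(\pi) = \sum_{t=0}^{T} \mathbb{E}_{(s_t, a_t) \sim \rho_\pi} \left[ r(s_t, a_t) + \alpha \, \mathcal{H}\big(\pi(\cdot \mid s_t)\big) \right] $$

where the temperature $\alpha$ controls the trade-off between expected reward and policy entropy.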
Adopting reasonable strategies is challenging but crucial for an intelligent agent with limited resources working in hazardous, unstructured, and dynamic environments, in order to improve system utility, reduce overall cost, and increase the probability of mission success. Deep reinforcement learning (DRL) helps organize an agent's behaviors and actions based on its state and can represent complex strategies (compositions of actions). This paper proposes a novel hierarchical strategy decomposition approach based on the Bayesian chain rule, which separates a complex policy into several simple sub-policies and organizes them as a Bayesian Strategy Network (BSN). We integrate this approach into the state-of-the-art DRL method, Soft Actor-Critic (SAC), and build the corresponding Bayesian Soft Actor-Critic (BSAC) model by organizing several sub-policies as a joint policy. We compare the proposed BSAC method with SAC and other state-of-the-art approaches such as TD3, DDPG, and PPO on the standard continuous control benchmarks (Hopper-v2, Walker2d-v2, and Humanoid-v2) in MuJoCo with the OpenAI Gym environment. The results demonstrate the promising potential of the BSAC method to significantly improve training efficiency. The open-source code of BSAC is available at https://github.com/herolab-uga/bsac.
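As a rough illustration of the Bayesian chain decomposition mentioned above (the notation is ours, not necessarily the paper's), a joint policy over $n$ sub-actions factors as:

$$ \pi(a^{1}, \dots, a^{n} \mid s) = \pi_{1}(a^{1} \mid s) \prod_{i=2}^{n} \pi_{i}\big(a^{i} \mid s, a^{1}, \dots, a^{i-1}\big) $$

so each sub-policy conditions on the state and on the outputs of its predecessors in the network.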
Deep reinforcement learning (RL) algorithms suffer severe performance degradation when interaction data is scarce, which limits their real-world applications. Recently, visual representation learning has been shown to be effective and promising for improving RL sample efficiency. These methods usually rely on contrastive learning and data augmentation to train a transition model for state prediction, which differs from how the model is used in RL, namely for value-based planning. Accordingly, the learned model may not align well with the environment and may fail to produce consistent value predictions, especially when state transitions are not deterministic. To address this issue, we propose a novel method, called value-consistent representation learning (VCR), to learn representations that are directly related to decision-making. More specifically, VCR trains a model to predict the future state (also referred to as the "imagined state") based on the current state and a sequence of actions. Instead of aligning this imagined state with the real state returned by the environment, VCR applies a $Q$-value head on both states and obtains two distributions of action values. A distance is then computed and minimized to force the imagined state to produce action-value predictions similar to those of the real state. We develop two implementations of this idea for discrete and continuous action spaces respectively. We conduct experiments on the Atari 100K and DeepMind Control Suite benchmarks to validate their effectiveness for improving sample efficiency. It has been demonstrated that our methods achieve new state-of-the-art performance for search-free RL algorithms.
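A minimal sketch of the value-consistency idea, assuming hypothetical encoder, dynamics, and Q-head modules and using a simple mean-squared distance (the paper's actual losses and architectures may differ):

```python
import torch
import torch.nn.functional as F

def vcr_loss(encoder, dynamics, q_head, obs, actions, next_obs):
    """Force the imagined next state to yield Q-value predictions consistent with the real one."""
    z = encoder(obs)                       # latent of the current state
    z_imagined = dynamics(z, actions)      # "imagined" next latent state from the model
    with torch.no_grad():
        z_real = encoder(next_obs)         # latent of the real next state (used as a fixed target)
    q_imagined = q_head(z_imagined)        # action-value predictions on the imagined state
    q_real = q_head(z_real)                # action-value predictions on the real state
    return F.mse_loss(q_imagined, q_real)  # stand-in for whatever distance the paper minimizes
```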
Hierarchical Reinforcement Learning (HRL) algorithms have been demonstrated to perform well on high-dimensional decision making and robotic control tasks. However, because they solely optimize for rewards, the agent tends to search the same space redundantly. This problem reduces the speed of learning and the achieved reward. In this work, we present an Off-Policy HRL algorithm that maximizes entropy for efficient exploration. The algorithm learns a temporally abstracted low-level policy and is able to explore broadly through the addition of entropy to the high-level. The novelty of this work is the theoretical motivation of adding entropy to the RL objective in the HRL setting. We empirically show that the entropy can be added to both levels if the Kullback-Leibler (KL) divergence between consecutive updates of the low-level policy is sufficiently small. We performed an ablative study to analyze the effects of entropy on hierarchy, in which adding entropy to the high-level emerged as the most desirable configuration. Furthermore, a higher temperature in the low-level leads to Q-value overestimation and increases the stochasticity of the environment that the high-level operates on, making learning more challenging. Our method, SHIRO, surpasses state-of-the-art performance on a range of simulated robotic control benchmark tasks and requires minimal tuning.
Soft Actor-Critic (SAC) is one of the state-of-the-art off-policy reinforcement learning (RL) algorithms within the maximum-entropy-based RL framework. SAC has been shown to perform very well on a list of continuous control tasks with good stability and robustness. SAC learns a stochastic Gaussian policy that maximizes a trade-off between the expected reward and the policy entropy. To update the policy, SAC minimizes the KL divergence between the density of the current policy and the density of the soft value function; the reparameterization trick is then used to obtain an approximate gradient of this divergence. In this paper, we propose Soft Actor-Critic with Cross-Entropy Policy Optimization (SAC-CEPO), which uses the Cross-Entropy Method (CEM) to optimize the policy network of SAC. The initial idea is to use CEM to iteratively sample the distribution closest to the density of the soft value function and to use the resulting distribution as a target for updating the policy network. To reduce the computational complexity, we also introduce a decoupled policy structure that decouples the Gaussian policy into one policy that learns the mean and another policy that learns the deviation, such that only the mean policy is trained by CEM. We show that this decoupled policy structure does converge to an optimum, and we also demonstrate by experiments that SAC-CEPO achieves performance competitive with the original SAC.
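For readers unfamiliar with the cross-entropy method, a generic CEM loop for optimizing a mean vector is sketched below; this is the vanilla method that SAC-CEPO builds on, not the paper's implementation.

```python
import numpy as np

def cem_optimize(score_fn, dim, iters=20, pop=64, elite_frac=0.1, init_std=1.0):
    """Generic cross-entropy method: repeatedly refit a Gaussian to the elite samples."""
    mean, std = np.zeros(dim), np.full(dim, init_std)
    n_elite = max(1, int(pop * elite_frac))
    for _ in range(iters):
        samples = np.random.randn(pop, dim) * std + mean   # sample candidate solutions
        scores = np.array([score_fn(x) for x in samples])  # evaluate each candidate
        elites = samples[np.argsort(scores)[-n_elite:]]    # keep the highest-scoring ones
        mean, std = elites.mean(axis=0), elites.std(axis=0) + 1e-6
    return mean

# Example: cem_optimize(lambda x: -np.sum(x ** 2), dim=3) converges toward the zero vector.
```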
This paper considers learning robot locomotion and manipulation tasks from expert demonstrations. Generative adversarial imitation learning (GAIL) trains a discriminator that distinguishes expert transitions from agent transitions and, in turn, uses a reward defined by the discriminator output to optimize a policy generator for the agent. This generative adversarial training approach is very powerful but depends on a delicate balance between the discriminator and the generator training. In high-dimensional problems, the discriminator training may easily overfit or exploit associations with task-irrelevant features for transition classification. A key insight of this work is that performing imitation learning in a suitable latent task space makes the training process stable, even in challenging high-dimensional problems. We use an action encoder model to obtain a low-dimensional latent action space and train a LAtent Policy using Adversarial imitation Learning (LAPAL). The encoder model can be trained offline from state-action pairs to obtain a task-agnostic latent action representation, or online, simultaneously with the discriminator and generator training, to obtain a task-aware latent action representation. We demonstrate that LAPAL training is stable, with near-monotonic performance improvement, and achieves expert performance in most locomotion and manipulation tasks, while the GAIL baseline converges more slowly and does not achieve expert performance in high-dimensional environments.
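For context, the discriminator-defined reward in GAIL-style training often takes a form such as the following (one common convention; the paper may use a different variant):

$$ r(s, a) = -\log\big(1 - D_{\psi}(s, a)\big) $$

where $D_{\psi}(s, a)$ is the discriminator's estimate that the transition came from the expert; per the abstract, LAPAL applies this kind of adversarial training over the latent actions produced by the action encoder.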
Current camera image and signal processing pipelines (ISPs), including deep-trained versions, tend to apply a single filter uniformly to the entire image, even though most acquired camera images have spatially heterogeneous artifacts. This spatial heterogeneity manifests itself in image space as varied Moiré ringing, motion blur, color bleaching, or lens-based projection distortions. Moreover, combinations of these image artifacts can be present in small or large pixel neighborhoods within an acquired image. Here, we present a deep reinforcement learning model that works in learned latent subspaces and recursively improves camera image quality through patch-based, spatially adaptive artifact filtering and image enhancement. Our RSE-RL model views the identification and correction of artifacts as a recursive self-learning and self-improvement exercise and consists of two major sub-modules: (i) latent feature subspace clustering/grouping obtained through a hierarchical variational auto-encoder, which enables rapid identification of correspondences and discrepancies between noisy and clean image patches; and (ii) adaptive learned transformations controlled by a trust-region soft actor-critic agent, which progressively filter and enhance noisy patches using the feature distances of their nearest clean-patch neighbors. Artificial artifacts that may be introduced in a patch-based ISP are also removed through reward-based de-blocking recovery and image enhancement. We demonstrate the self-improvement feature of our model by recursively training and testing on images, wherein the enhanced images generated at each epoch provide natural data augmentation and robustness to the RSE-RL training-filtering pipeline.
Research extending deep reinforcement learning (DRL) to the multi-agent field has solved many complex problems and achieved significant results. However, almost all of these studies focus only on discrete or continuous action spaces, and few works have used multi-agent deep reinforcement learning for real-world environment problems, which mostly have hybrid action spaces. Therefore, in this paper, we propose two algorithms, multi-agent hybrid soft actor-critic (MAHSAC) and multi-agent hybrid deep deterministic policy gradient (MAHDDPG), to fill this gap. The two algorithms follow the centralized training and decentralized execution (CTDE) paradigm and can solve hybrid action space problems. Our experiments run on a multi-agent particle environment, a simple multi-agent particle world with some basic simulated physics. The experimental results show that these algorithms have good performance.
Soft Actor-Critic (SAC) is considered a state-of-the-art algorithm in continuous action space settings. It uses the maximum entropy framework for efficiency and stability, and applies a heuristic temperature Lagrange term to tune the temperature $\alpha$, which determines how "soft" the policy should be. It is counter-intuitive that empirical evidence shows SAC does not perform well in discrete domains. In this paper, we investigate possible explanations for this phenomenon and propose Target Entropy Scheduled SAC (TES-SAC), an annealing method for the target entropy parameter applied to SAC. The target entropy is a constant in the temperature Lagrange term that represents the target policy entropy in discrete SAC. We compare our method on Atari 2600 games with SAC using different constant target entropies, and analyze how our scheduling affects SAC.
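To make the role of the target entropy concrete, SAC's automatic temperature adjustment minimizes an objective of roughly the following form (a standard formulation; TES-SAC's contribution is to anneal the target $\bar{\mathcal{H}}$ over training rather than keep it constant):

$$ J(\alpha) = \mathbb{E}_{a_t \sim \pi_t}\big[ -\alpha \big( \log \pi_t(a_t \mid s_t) + \bar{\mathcal{H}} \big) \big] $$

where $\bar{\mathcal{H}}$ denotes the target entropy.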
In order to avoid conventional control methods, which create obstacles due to the complexity of systems and the intense demand on data density, modern and more efficient control methods are required. In this respect, off-policy, model-free reinforcement learning algorithms help to avoid working with complex models. In terms of speed and accuracy, they have become prominent methods because the algorithms use their past experience to learn optimal policies. In this study, three reinforcement learning algorithms (DDPG, TD3, and SAC) have been used to train a Fetch robotic manipulator for four different tasks in the MuJoCo simulation environment. All of these algorithms are off-policy and able to achieve their desired targets by optimizing both policy and value functions. In the current study, the efficiency and the speed of these three algorithms are analyzed in a controlled environment.
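A minimal training sketch in the spirit of this study, using Stable-Baselines3; Pendulum-v1 is a stand-in environment here, since the Fetch tasks come from the gymnasium-robotics package and use dictionary observations (which would require a MultiInputPolicy). TD3 and DDPG can be swapped in the same way.

```python
# Sketch only: Pendulum-v1 stands in for the Fetch tasks used in the study.
import gymnasium as gym
from stable_baselines3 import SAC  # TD3 and DDPG are drop-in alternatives

env = gym.make("Pendulum-v1")
model = SAC("MlpPolicy", env, verbose=1)
model.learn(total_timesteps=20_000)   # train the off-policy agent

obs, _ = env.reset()
for _ in range(200):                  # roll out the learned deterministic policy
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    if terminated or truncated:
        obs, _ = env.reset()
```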
This review addresses the problem of learning abstract representations of measurement data in the context of deep reinforcement learning (DRL). Although the data are often ambiguous, high-dimensional, and complex to interpret, many dynamical systems can be effectively described by a low-dimensional set of state variables. Discovering these state variables from data is a key aspect for improving the data efficiency, robustness, and generalization of DRL methods, for tackling the curse of dimensionality, and for bringing interpretability and insight into black-box DRL. This review provides a comprehensive overview of unsupervised representation learning in DRL by describing the main deep learning tools used for learning representations of the world, offering a systematic view of the methods and principles, summarizing applications, benchmarks, and evaluation strategies, and discussing open challenges and future directions.
In recent years, many practitioners in quantitative finance have attempted to use deep reinforcement learning (DRL) to build better quantitative trading (QT) strategies. Nevertheless, many existing studies fail to address several serious challenges, such as the non-stationary financial environment and the bias-variance trade-off that arise when applying DRL in real financial markets. In this work, we propose Safe-FinRL, a novel DRL-based high-frequency stock trading strategy enhanced by nearly stationary financial environments and low-bias, low-variance estimation. Our main contributions are twofold: first, we separate the long financial time series into nearly stationary short-term environments; second, we implement Trace-SAC in the nearly stationary financial environments by incorporating a general Retrace estimator into the soft actor-critic. Extensive experiments on the cryptocurrency market demonstrate that Safe-FinRL provides stable value estimation and steady policy improvement, and significantly reduces bias and variance in the nearly stationary financial environments.
Recently, various successful applications utilizing expert states in imitation learning (IL) have been witnessed. However, another IL setting, IL from visual inputs (ILfVI), which holds greater promise by utilizing online visual resources, suffers from low data efficiency and poor performance resulting from its on-policy learning manner and high-dimensional visual inputs. We propose OPIfVI (Off-Policy Imitation from Visual Inputs), which is composed of an off-policy learning manner, data augmentation, and encoder techniques, to tackle these challenges respectively. More specifically, to improve data efficiency, OPIfVI conducts IL in an off-policy manner, with which sampled data can be used multiple times. In addition, we enhance the stability of OPIfVI with spectral normalization to mitigate the side effects of off-policy training. The core factor that we believe is responsible for the poor performance of agents in ILfVI is the failure to extract meaningful features from visual inputs. Thus, OPIfVI employs data augmentation from computer vision to help train the encoder so that it can better extract features from visual inputs. In addition, a specific structure of gradient backpropagation for the encoder is designed to stabilize encoder training. Finally, we demonstrate that OPIfVI is able to achieve expert-level performance and outperform existing baselines, with either visual demonstrations or visual observations, through extensive experiments using the DeepMind Control Suite.
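The abstract mentions spectral normalization for stabilizing off-policy training; below is a minimal sketch of how spectral normalization is typically attached to layers in PyTorch. Which networks (critic, encoder, or both) OPIfVI normalizes, and the layer sizes used, are assumptions here.

```python
import torch.nn as nn
from torch.nn.utils import spectral_norm

# Hypothetical critic MLP with spectrally normalized hidden layers.
latent_dim, action_dim = 50, 6        # assumed sizes, for illustration only
critic = nn.Sequential(
    spectral_norm(nn.Linear(latent_dim + action_dim, 256)),
    nn.ReLU(),
    spectral_norm(nn.Linear(256, 256)),
    nn.ReLU(),
    nn.Linear(256, 1),                # Q-value output
)
```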
Whole-slide images (WSI) in computational pathology have high resolution with gigapixel size, but are generally with sparse regions of interest, which leads to weak diagnostic relevance and data inefficiency for each area in the slide. Most of the existing methods rely on a multiple instance learning framework that requires densely sampling local patches at high magnification. The limitation is evident in the application stage as the heavy computation for extracting patch-level features is inevitable. In this paper, we develop RLogist, a benchmarking deep reinforcement learning (DRL) method for fast observation strategy on WSIs. Imitating the diagnostic logic of human pathologists, our RL agent learns how to find regions of observation value and obtain representative features across multiple resolution levels, without having to analyze each part of the WSI at the high magnification. We benchmark our method on two whole-slide level classification tasks, including detection of metastases in WSIs of lymph node sections, and subtyping of lung cancer. Experimental results demonstrate that RLogist achieves competitive classification performance compared to typical multiple instance learning algorithms, while having a significantly short observation path. In addition, the observation path given by RLogist provides good decision-making interpretability, and its ability of reading path navigation can potentially be used by pathologists for educational/assistive purposes. Our code is available at: \url{https://github.com/tencent-ailab/RLogist}.
How to learn an effective reinforcement learning-based model for control tasks from high-level visual observations is a practical and challenging problem. A key to solving this problem is to learn low-dimensional state representations from observations, from which an effective policy can be learned. In order to boost the learning of state encoding, recent works are focused on capturing behavioral similarities between state representations or applying data augmentation on visual observations. In this paper, we propose a novel meta-learner-based framework for representation learning regarding behavioral similarities for reinforcement learning. Specifically, our framework encodes the high-dimensional observations into two decomposed embeddings regarding reward and dynamics in a Markov Decision Process (MDP). A pair of meta-learners are developed, one of which quantifies the reward similarity and the other quantifies dynamics similarity over the correspondingly decomposed embeddings. The meta-learners are self-learned to update the state embeddings by approximating two disjoint terms in on-policy bisimulation metric. To incorporate the reward and dynamics terms, we further develop a strategy to adaptively balance their impacts based on different tasks or environments. We empirically demonstrate that our proposed framework outperforms state-of-the-art baselines on several benchmarks, including conventional DM Control Suite, Distracting DM Control Suite and a self-driving task CARLA.
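For reference, the on-policy bisimulation metric whose reward and dynamics terms the two meta-learners approximate is usually written recursively as (a standard form from the bisimulation literature; the paper's notation may differ):

$$ d(s_i, s_j) = \big| r^{\pi}_{s_i} - r^{\pi}_{s_j} \big| + \gamma \, W_1\big( \mathcal{P}^{\pi}(\cdot \mid s_i), \, \mathcal{P}^{\pi}(\cdot \mid s_j); d \big) $$

with a reward-difference term and a Wasserstein-1 distance between transition distributions, mirroring the reward and dynamics embeddings described above.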
Reinforcement learning (RL) gained considerable attention by creating decision-making agents that maximize rewards received from fully observable environments. However, many real-world problems are partially or noisily observable by nature, where agents do not receive the true and complete state of the environment. Such problems are formulated as partially observable Markov decision processes (POMDPs). Some studies applied RL to POMDPs by recalling previous decisions and observations or inferring the true state of the environment from received observations. Nevertheless, aggregating observations and decisions over time is impractical for environments with high-dimensional continuous state and action spaces. Moreover, so-called inference-based RL approaches require large number of samples to perform well since agents eschew uncertainty in the inferred state for the decision-making. Active inference is a framework that is naturally formulated in POMDPs and directs agents to select decisions by minimising expected free energy (EFE). This supplies reward-maximising (exploitative) behaviour in RL, with an information-seeking (exploratory) behaviour. Despite this exploratory behaviour of active inference, its usage is limited to discrete state and action spaces due to the computational difficulty of the EFE. We propose a unified principle for joint information-seeking and reward maximization that clarifies a theoretical connection between active inference and RL, unifies active inference and RL, and overcomes their aforementioned limitations. Our findings are supported by strong theoretical analysis. The proposed framework's superior exploration property is also validated by experimental results on partial observable tasks with high-dimensional continuous state and action spaces. Moreover, the results show that our model solves reward-free problems, making task reward design optional.
Deep reinforcement learning is poised to revolutionise the field of AI and represents a step towards building autonomous systems with a higher level understanding of the visual world. Currently, deep learning is enabling reinforcement learning to scale to problems that were previously intractable, such as learning to play video games directly from pixels. Deep reinforcement learning algorithms are also applied to robotics, allowing control policies for robots to be learned directly from camera inputs in the real world. In this survey, we begin with an introduction to the general field of reinforcement learning, then progress to the main streams of value-based and policy-based methods. Our survey will cover central algorithms in deep reinforcement learning, including the deep Q-network, trust region policy optimisation, and asynchronous advantage actor-critic. In parallel, we highlight the unique advantages of deep neural networks, focusing on visual understanding via reinforcement learning. To conclude, we describe several current areas of research within the field.
Transformers have achieved great success in learning vision and language representations, which are general across various downstream tasks. In visual control, transferable state representations that can transfer between different control tasks are important for reducing the training sample size. However, porting Transformers to sample-efficient visual control remains a challenging and unsolved problem. To this end, we propose a novel Control Transformer (CtrlFormer), which possesses many appealing benefits that prior arts do not have. Firstly, CtrlFormer jointly learns self-attention between visual tokens and policy tokens across different control tasks, where multi-task representations can be learned and transferred without catastrophic forgetting. Secondly, we carefully design a contrastive reinforcement learning paradigm to train CtrlFormer, enabling it to achieve high sample efficiency, which is important in control problems. For example, on the DMControl benchmark, unlike recent advanced methods that fail by producing a zero score in the "Cartpole" task after transfer learning with 100K samples, CtrlFormer can achieve a state-of-the-art score with 100K samples while maintaining the performance of previous tasks. The code and models are released on our project homepage.
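As a rough illustration of a contrastive objective of the kind referenced above, here is a generic InfoNCE-style loss; this is not claimed to be CtrlFormer's exact formulation.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(queries, keys, temperature=0.1):
    """Generic InfoNCE: the i-th query should match the i-th key against all other keys."""
    q = F.normalize(queries, dim=-1)
    k = F.normalize(keys, dim=-1)
    logits = q @ k.t() / temperature                   # pairwise cosine similarities
    labels = torch.arange(q.size(0), device=q.device)  # positives lie on the diagonal
    return F.cross_entropy(logits, labels)
```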