Parkinson's disease is marked by altered and increased firing characteristics of pathological oscillations in the brain; in other words, it causes abnormal synchronous oscillations and suppression during neurological processing. To examine and regulate the synchronization and pathological oscillations in motor circuits, deep brain stimulators (DBS) are used. Although machine learning methods have been applied to investigate suppression, these models require large amounts of training data and computational power, both of which pose challenges to resource-constrained DBS devices. This research proposes a novel reinforcement learning (RL) framework for suppressing synchronization in neuronal activity during episodes of neurological disorders with lower power consumption. The proposed RL algorithm comprises an ensemble of a temporal representation of stimuli and a twin-delayed deep deterministic (TD3) policy gradient algorithm. We quantify the stability of the proposed framework to noise and reduced synchrony using RL for three pathological signaling regimes: regular, chaotic, and bursting, and further eliminate the undesirable oscillations. Furthermore, metrics such as evaluation rewards, energy supplied to the ensemble, and the mean point of convergence were used and compared against other RL algorithms, specifically Advantage Actor-Critic (A2C), Actor-Critic with Kronecker-Factored Trust Region (ACKTR), and Proximal Policy Optimization (PPO).
We present temporally layered architecture (TLA), a biologically inspired system for temporally adaptive distributed control. TLA layers a fast and a slow controller together to achieve temporal abstraction, allowing each layer to focus on a different timescale. Our design draws on the architecture of the human brain, which executes actions at different timescales depending on the environment's demands. Such distributed control design is widespread across biological systems because it increases survivability and accuracy in both certain and uncertain environments. We demonstrate that TLA provides many advantages over existing approaches, including persistent exploration, adaptive control, explainable temporal behavior, compute efficiency, and distributed control. We present two different algorithms for training TLA: (a) closed-loop control, where the fast controller is trained over a pre-trained slow controller, allowing better exploration for the fast controller, which decides whether to "act or not" at each timestep; and (b) partially open-loop control, where the slow controller is trained over a pre-trained fast controller, allowing the slow controller to pick a temporally extended action or defer the next n actions to the fast controller. We evaluate our method on a suite of continuous control tasks and demonstrate the advantages of TLA over several strong baselines.
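As a rough illustration of the layered design described above, the sketch below (Python, with stand-in controllers, a gating function, and a replanning period that are all assumptions for illustration, not the paper's implementation) shows a slow controller that replans on a coarse timescale while a fast controller decides at every timestep whether to act or to let the slow action persist.

```python
import numpy as np

# Minimal sketch of a temporally layered controller (assumed interfaces, not the
# authors' code): a slow controller proposes a temporally extended action, and a
# fast controller decides at every timestep whether to "act" (override) or "not"
# (repeat the slow controller's last action).

class TemporallyLayeredPolicy:
    def __init__(self, slow_policy, fast_policy, gate, slow_period=5):
        self.slow_policy = slow_policy    # maps state -> action (coarse timescale)
        self.fast_policy = fast_policy    # maps state -> action (every timestep)
        self.gate = gate                  # maps state -> probability of acting
        self.slow_period = slow_period    # how often the slow controller replans
        self._t = 0
        self._held_action = None

    def act(self, state):
        # Slow layer replans only every `slow_period` steps (temporal abstraction).
        if self._t % self.slow_period == 0 or self._held_action is None:
            self._held_action = self.slow_policy(state)
        self._t += 1
        # Fast layer gates: act (correct the slow action) or repeat it.
        if np.random.rand() < self.gate(state):
            return self.fast_policy(state)
        return self._held_action


# Toy usage with stand-in controllers on a 3-D action space.
slow = lambda s: np.zeros(3)                       # placeholder slow policy
fast = lambda s: np.random.uniform(-1, 1, size=3)  # placeholder fast policy
gate = lambda s: 0.2                               # act on ~20% of steps
policy = TemporallyLayeredPolicy(slow, fast, gate)
print(policy.act(np.zeros(8)))
```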
In value-based reinforcement learning methods such as deep Q-learning, function approximation errors are known to lead to overestimated value estimates and suboptimal policies. We show that this problem persists in an actor-critic setting and propose novel mechanisms to minimize its effects on both the actor and the critic. Our algorithm builds on Double Q-learning, by taking the minimum value between a pair of critics to limit overestimation. We draw the connection between target networks and overestimation bias, and suggest delaying policy updates to reduce per-update error and further improve performance. We evaluate our method on the suite of OpenAI gym tasks, outperforming the state of the art in every environment tested.
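The two mechanisms named above, clipped double-Q targets and delayed policy updates, can be sketched in a few lines; the snippet below is a minimal NumPy illustration with assumed shapes and hyperparameters rather than a full TD3 implementation.

```python
import numpy as np

# Minimal numpy sketch of the two mechanisms described above (assumed shapes,
# not the authors' implementation): clipped double-Q targets and delayed,
# less frequent policy updates.

def td3_target(r, done, q1_next, q2_next, gamma=0.99):
    """Clipped double-Q: use the minimum of the two target critics."""
    min_q = np.minimum(q1_next, q2_next)
    return r + gamma * (1.0 - done) * min_q

def should_update_policy(step, policy_delay=2):
    """Delayed policy updates: update the actor every `policy_delay` critic steps."""
    return step % policy_delay == 0

# Toy usage with fabricated batch values.
r = np.array([1.0, 0.0]); done = np.array([0.0, 1.0])
q1 = np.array([10.0, 5.0]); q2 = np.array([9.0, 6.0])
print(td3_target(r, done, q1, q2))     # targets use min(q1, q2)
print(should_update_policy(step=4))    # True: actor update on this step
```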
Asset allocation (or portfolio management) is the task of deciding how to optimally distribute a finite budget of funds across a range of financial instruments/assets, such as stocks. This study investigates the performance of reinforcement learning (RL), using model-free deep RL agents, applied to portfolio management. We trained several RL agents on real-world stock prices to learn how to perform asset allocation. We compared the performance of these RL agents against some baseline agents, and also compared the RL agents with one another to understand which classes of agents performed better. Our analysis shows that RL agents can perform the task of portfolio management, as they significantly outperformed the baseline agents (random allocation and uniform allocation). Four RL agents (A2C, SAC, PPO, and TRPO) outperformed the best baseline, MPT, overall. This demonstrates the ability of RL agents to discover more profitable trading strategies. Furthermore, there was no significant performance difference between value-based and policy-based RL agents, while actor-critic agents performed better than other types of agents. Likewise, on-policy agents performed better, as they are better at policy evaluation, and sample efficiency is not a significant concern in portfolio management. This study shows that RL agents can substantially improve asset allocation, since they outperform strong baselines. Based on our analysis, on-policy, actor-critic RL agents show the most promise.
This paper presents a benchmarking study of some state-of-the-art reinforcement learning algorithms for solving two simulated vision-based robotics problems. The algorithms considered in this study include Soft Actor-Critic (SAC), Proximal Policy Optimization (PPO), Interpolated Policy Gradient (IPG), and their variants with Hindsight Experience Replay (HER). The performance of these algorithms is compared on two PyBullet simulation environments, KukaDiverseObjectEnv and RacecarZedGymEnv. The state observations in these environments are provided as RGB images and the action spaces are continuous, making them difficult to solve. Several strategies are proposed for providing the intermediate hindsight goals required to implement HER on these problems, which are essentially single-goal environments. In addition, several feature-extraction architectures are proposed to incorporate spatial and temporal attention into the learning process. Through rigorous simulation experiments, the improvements achieved by these components are established. To our knowledge, this is the first such benchmarking study for these vision-based robotics problems, making it a novel contribution to the field.
Animal behavior is driven by multiple brain regions working in parallel with distinct control policies. We present a biologically plausible model of off-policy reinforcement learning in the basal ganglia that can learn in such an architecture. The model accounts for action-related modulation of dopamine activity that is not captured by previous models, which implement on-policy algorithms. In particular, the model predicts that dopamine activity signals a combination of reward prediction error (as in classic models) and "action surprise", a measure of how unexpected an action is relative to the basal ganglia's current policy. In the presence of the action-surprise term, the model implements an approximate form of Q-learning. On benchmark navigation and reaching tasks, we show empirically that the model can learn from behavior driven fully or partially by other policies (e.g., from other brain regions). In contrast, a model without the action-surprise term suffers in the presence of additional policies and cannot learn at all from behavior driven entirely externally. The model provides a computational account of numerous experimental findings about dopamine activity that classic reinforcement learning models of the basal ganglia cannot explain. These include different levels of action-surprise signaling in the dorsal and ventral striatum, the reduction of movement-modulated dopamine activity with practice, and the representation of action initiation and kinematics in dopamine activity. It also makes further predictions that can be tested with recordings of striatal dopamine activity.
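Under one reading of this abstract, and it is only a reading, the dopamine-like teaching signal can be sketched as a reward prediction error plus an action-surprise term modeled here as -log π(a|s); the exact formulation in the paper may differ, and every name and constant below is illustrative.

```python
import numpy as np

# Schematic sketch of the central idea as described above (our reading, not the
# authors' code): the teaching signal combines a reward prediction error with an
# "action surprise" term, -log pi(a|s), measuring how unexpected the executed
# action is under the basal ganglia's current policy. Off-policy actions driven
# by other brain regions then still produce a usable learning signal.

def softmax(q_row, temp=1.0):
    z = np.exp(temp * (q_row - q_row.max()))
    return z / z.sum()

def dopamine_signal(Q, V, s, a, r, s_next, gamma=0.99):
    rpe = r + gamma * V[s_next] - V[s]          # classic reward prediction error
    pi = softmax(Q[s])                          # current basal-ganglia policy
    action_surprise = -np.log(pi[a] + 1e-8)     # large when `a` was unexpected
    return rpe + action_surprise

# Toy usage: 4 states, 2 actions, an externally driven (off-policy) action.
Q = np.zeros((4, 2)); V = np.zeros(4)
delta = dopamine_signal(Q, V, s=0, a=1, r=1.0, s_next=2)
Q[0, 1] += 0.1 * delta                          # value update gated by the signal
print(delta, Q[0, 1])
```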
Effective reinforcement learning requires a proper balance of exploration and exploitation, defined by the dispersion of the action distribution. However, this balance depends on the task, the current stage of the learning process, and the current environment state. Existing methods that designate the dispersion of the action distribution require problem-dependent hyperparameters. In this paper, we propose to designate the action distribution dispersion automatically, using the following principle: the distribution should have sufficient dispersion to enable the evaluation of future policies. To that end, the dispersion should be tuned to ensure a sufficiently high probability (density) of the actions in the replay buffer under the distribution modes that generated them, but this dispersion should not be higher. In this way, a policy can be effectively evaluated based on the actions in the buffer, yet the exploratory randomness of actions decreases as the policy converges. The above principle is verified on the challenging benchmarks Ant, HalfCheetah, Hopper, and Walker2D, with good results. Our method makes the action standard deviation converge to values similar to those produced by trial-and-error optimization.
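A heuristic sketch of this principle is given below; the target log-density, the multiplicative update, and all constants are assumptions for illustration rather than the paper's procedure, but they show how the standard deviation can be widened when buffered actions become too improbable and narrowed otherwise.

```python
import numpy as np

# Heuristic sketch of the stated principle (assumed constants and update rule,
# not the paper's procedure): keep the policy's standard deviation just large
# enough that actions stored in the replay buffer retain a sufficiently high
# density under the distribution modes that generated them.

def mean_log_density(actions, modes, sigma):
    """Mean diagonal-Gaussian log-density of buffered actions around their modes."""
    d = actions.shape[-1]
    quad = np.sum(((actions - modes) / sigma) ** 2, axis=-1)
    return np.mean(-0.5 * quad - d * np.log(sigma) - 0.5 * d * np.log(2.0 * np.pi))

def adapt_sigma(sigma, actions, modes, target_logp=-2.0, lr=0.05):
    logp = mean_log_density(actions, modes, sigma)
    # Widen the distribution when buffered actions are less probable than the
    # target, and narrow it when they are more probable than necessary.
    return float(sigma * np.exp(lr * (target_logp - logp)))

# Toy usage: buffered actions scattered around their modes with spread ~0.3.
rng = np.random.default_rng(0)
modes = rng.normal(size=(256, 2))
actions = modes + 0.3 * rng.normal(size=(256, 2))
sigma = 1.0
for _ in range(200):
    sigma = adapt_sigma(sigma, actions, modes)
print(round(sigma, 3))  # the exploration std settles at a small stable value
```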
Value-based deep reinforcement learning (RL) algorithms suffer from estimation bias caused primarily by function approximation and temporal-difference (TD) learning. This problem induces faulty state-action value estimates and therefore harms the performance and robustness of learning algorithms. Although several techniques have been proposed to address it, learning algorithms still suffer from this bias. Here, we introduce a technique that eliminates estimation bias in off-policy continuous control algorithms using the experience replay mechanism. We adaptively learn the weighting hyperparameter beta in the weighted twin-delayed deep deterministic policy gradient algorithm. Our method is named Adaptive-WD3 (AWD3). We show, on OpenAI Gym continuous control environments, that our algorithm matches or outperforms state-of-the-art off-policy policy gradient learning algorithms.
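The weighted target that such a method builds on can be sketched as an interpolation between the minimum and maximum of the twin critics; the snippet below is our reconstruction with an assumed, purely illustrative rule for nudging beta, not the AWD3 update.

```python
import numpy as np

# Minimal sketch of a weighted clipped-double-Q target (our reconstruction, not
# the AWD3 implementation): the target interpolates between the minimum and the
# maximum of the twin critics with a weight beta that is adapted online rather
# than fixed by hand. The exact adaptation rule used by AWD3 is in the paper;
# here beta is nudged by a hypothetical bias estimate for illustration only.

def weighted_q_target(r, done, q1_next, q2_next, beta, gamma=0.99):
    q = beta * np.minimum(q1_next, q2_next) + (1.0 - beta) * np.maximum(q1_next, q2_next)
    return r + gamma * (1.0 - done) * q

def adapt_beta(beta, bias_estimate, lr=1e-3):
    # Push beta toward the pessimistic minimum when values look overestimated,
    # and toward the maximum when they look underestimated.
    return float(np.clip(beta + lr * np.sign(bias_estimate), 0.0, 1.0))

# Toy usage.
r = np.array([0.5]); done = np.array([0.0])
q1, q2 = np.array([12.0]), np.array([10.0])
beta = 0.7
print(weighted_q_target(r, done, q1, q2, beta))
beta = adapt_beta(beta, bias_estimate=+1.0)  # values overestimated -> more pessimism
print(beta)
```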
We present a framework for reinforcement learning (RL) under fine time discretization, along with a learning algorithm for this framework. One of the main goals of RL is to provide a way for physical machines to learn optimal behavior instead of being programmed. However, such machines are usually controlled at fine time discretization. The most common RL methods apply independent random elements to each action, which is not suitable in this setting: it is infeasible because it makes the controlled system jerk, and it does not ensure sufficient exploration, since a single action is too short to create significant experience that could be translated into policy improvement. In the RL framework introduced in this paper, we consider policies that produce actions based on states and on random elements that are autocorrelated over subsequent time instants. The RL algorithm introduced here approximately optimizes such a policy. The efficiency of the algorithm is verified at different time discretizations on four simulated learning-control problems (Ant, HalfCheetah, Hopper, and Walker2D). In most cases, the algorithm introduced here outperforms its competitors.
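An AR(1) (Ornstein-Uhlenbeck-style) process is one standard way to obtain exploration noise that is autocorrelated over subsequent time instants; the sketch below uses it as a stand-in for the paper's construction, with assumed parameters.

```python
import numpy as np

# Sketch of exploration noise that is autocorrelated over subsequent time
# instants (an AR(1)/Ornstein-Uhlenbeck-style process is used here as a stand-in
# for the paper's construction): consecutive perturbations change smoothly, so
# the controlled system is not jerked at fine time discretization, yet the noise
# persists long enough to produce informative exploration.

class AutocorrelatedNoise:
    def __init__(self, dim, sigma=0.2, rho=0.95, rng=None):
        self.sigma, self.rho = sigma, rho          # rho close to 1 -> slow drift
        self.rng = rng or np.random.default_rng()
        self.state = np.zeros(dim)

    def sample(self):
        # xi_t = rho * xi_{t-1} + sqrt(1 - rho^2) * eps_t keeps Var(xi_t) = sigma^2.
        eps = self.rng.normal(scale=self.sigma, size=self.state.shape)
        self.state = self.rho * self.state + np.sqrt(1.0 - self.rho ** 2) * eps
        return self.state

# Toy usage: perturb a deterministic policy's actions at every fine timestep.
noise = AutocorrelatedNoise(dim=6)
mean_action = np.zeros(6)
actions = [mean_action + noise.sample() for _ in range(5)]
print(np.round(actions[0], 3), np.round(actions[1], 3))  # successive samples are close
```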
Conventional linear control strategies have been widely studied and used in many robotics and industrial applications, but they do not respond to the full dynamics of the system. To avoid the tedious computation of nonlinear control schemes such as model predictive control (MPC), reinforcement learning (RL) based control can offer an alternative solution. This paper presents the implementation of RL control with deep deterministic policy gradient (DDPG) and proximal policy optimization (PPO) for a mobile, self-balancing wheeled inverted pendulum (EWIP) system. Such RL models make the task of finding a satisfactory control scheme easier and respond effectively to the dynamics by self-tuning parameters to provide better control. In this paper, the two RL-based controllers are pitted against an MPC controller and evaluated on the state variables of the EWIP system while following a specific desired trajectory.
Reinforcement learning (RL) and brain-computer interfaces (BCI) are two fields that have been growing over the past decade. Until recently, these fields operated independently of one another. With the rising interest in human-in-the-loop (HITL) applications, RL algorithms have been adapted to account for human guidance, giving rise to the subfield of interactive reinforcement learning (IRL). Adjacently, BCI applications have long been interested in extracting intrinsic feedback from neural activity during human-computer interaction. These two ideas have set RL and BCI on a collision course through the integration of BCI into the IRL framework, where intrinsic feedback can be used to help train an agent. This intersection has been termed intrinsic IRL. To further help foster deeper integration of BCI and IRL, we provide a review of intrinsic IRL with an emphasis on its parent field of feedback-driven IRL, along with discussions of its validity, challenges, and future research directions.
With the growing need to reduce energy consumption and greenhouse gas emissions, eco-driving strategies provide a significant opportunity for additional fuel savings on top of other technological solutions being pursued in the transportation sector. In this paper, a model-free deep reinforcement learning (RL) control agent is proposed for active eco-driving assistance that trades off fuel consumption against other driver-accommodation objectives, and learns optimal traction torque and transmission shifting policies from experience. The training scheme for the proposed RL agent uses an off-policy actor-critic architecture that iteratively performs policy evaluation with a multi-step return and policy improvement with the maximum a posteriori policy optimization algorithm for hybrid action spaces. The proposed eco-driving RL agent is implemented on a commercial vehicle in car-following traffic. It shows superior performance in minimizing fuel consumption compared to a baseline controller that has full knowledge of fuel-efficiency tables.
In recent years, reinforcement learning has developed a family of policy gradient methods that mostly model stochastic policies with Gaussian distributions. However, the Gaussian distribution has infinite support, whereas real-world applications usually have bounded action spaces. This dissonance causes an estimation bias that can be eliminated by the Beta distribution, which has finite support. In this work, we investigate how the Beta policy performs when trained on two continuous control tasks from OpenAI Gym. For both tasks, the Beta policy outperforms the Gaussian policy in terms of the agent's final expected reward, and also shows more stability and faster convergence of the training process. For the CarRacing environment with high-dimensional image input, the agent's success rate improved by 63% over the Gaussian policy.
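The bounded-support argument can be made concrete with a small sketch: a Beta policy samples in [0, 1] and is affinely rescaled to the action bounds, so no probability mass has to be clipped. The parameterization below is illustrative, not the paper's network.

```python
import math
import numpy as np

# Minimal sketch of a Beta policy head for a bounded action space (the
# parameterization and the rescaling are illustrative, not the paper's network):
# the Beta distribution has support on [0, 1], so sampled actions can be mapped
# affinely onto [low, high] without the clipping bias that arises when a
# Gaussian with infinite support is squashed into a finite interval.

def beta_policy_sample(alpha, beta, low, high, rng):
    u = rng.beta(alpha, beta)            # sample in (0, 1)
    return low + (high - low) * u        # rescale to the bounded action range

def beta_log_prob(action, alpha, beta, low, high):
    u = (action - low) / (high - low)
    log_norm = math.lgamma(alpha + beta) - math.lgamma(alpha) - math.lgamma(beta)
    # The -log(high - low) term is the Jacobian of the affine rescaling.
    return ((alpha - 1.0) * math.log(u) + (beta - 1.0) * math.log(1.0 - u)
            + log_norm - math.log(high - low))

# Toy usage on a 1-D action bounded in [-2, 2]; alpha, beta > 1 keeps the
# density unimodal, as is common for Beta policies.
rng = np.random.default_rng(0)
a = beta_policy_sample(alpha=2.5, beta=1.8, low=-2.0, high=2.0, rng=rng)
print(a, beta_log_prob(a, 2.5, 1.8, -2.0, 2.0))
```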
Synaptic plasticity allows cortical circuits to learn new tasks and to adapt to changing environments. How do cortical circuits use plasticity to acquire functions such as decision-making or working memory? Neurons are connected in complex ways, forming recurrent neural networks, and learning modifies the strength of their connections. Moreover, neurons communicate by emitting brief, discrete electrical signals. Here we describe how to train recurrent neural networks on tasks like those used to train animals in neuroscience laboratories, and how computations emerge in the trained networks. Surprisingly, artificial networks and real brains can use similar computational strategies.
Some phenomena related to statistical noise, which have been investigated by various authors in the framework of deep reinforcement learning (RL) algorithms, are discussed. The following algorithms are examined: the deep Q-network (DQN), double DQN, deep deterministic policy gradient (DDPG), twin-delayed DDPG (TD3), and the hill climbing algorithm. First, we consider overestimation, which is a harmful property resulting from noise. Then we deal with the noise used for exploration; this is the useful noise. We discuss setting the noise parameter in TD3 for typical PyBullet environments associated with articulated bodies, such as HopperBulletEnv and Walker2DBulletEnv. In the appendix, in relation to the hill climbing algorithm, another noise-related example is considered: adaptive noise.
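For concreteness, the sketch below marks the two places a noise standard deviation typically enters TD3, exploration noise on executed actions and clipped target-policy smoothing noise; the values shown are generic defaults, not the settings discussed for the PyBullet environments.

```python
import numpy as np

# Sketch of the two places a noise parameter enters TD3 (generic formulation,
# not tied to any specific PyBullet setting): exploration noise added to the
# actor's action when interacting with the environment, and clipped smoothing
# noise added to the target action when computing critic targets. Tuning the
# standard deviations below is the kind of adjustment discussed above for
# HopperBulletEnv and Walker2DBulletEnv.

def exploration_action(mu, act_low, act_high, expl_sigma=0.1, rng=None):
    rng = rng or np.random.default_rng()
    noisy = mu + rng.normal(scale=expl_sigma, size=mu.shape)
    return np.clip(noisy, act_low, act_high)

def smoothed_target_action(mu_target, act_low, act_high,
                           smooth_sigma=0.2, noise_clip=0.5, rng=None):
    rng = rng or np.random.default_rng()
    noise = np.clip(rng.normal(scale=smooth_sigma, size=mu_target.shape),
                    -noise_clip, noise_clip)
    return np.clip(mu_target + noise, act_low, act_high)

# Toy usage for a 3-D action bounded in [-1, 1].
mu = np.array([0.4, -0.2, 0.9])
print(exploration_action(mu, -1.0, 1.0))
print(smoothed_target_action(mu, -1.0, 1.0))
```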
Conventional control methods create obstacles due to system complexity and intense demands on data density, so developing modern, more efficient control methods is required. Off-policy, model-free reinforcement learning algorithms help avoid working with complex models. They have become prominent methods in terms of speed and accuracy because they use past experience to learn optimal policies. In this study, three reinforcement learning algorithms (DDPG, TD3, and SAC) were used to train a Fetch robotic manipulator on four different tasks in the MuJoCo simulation environment. All of these algorithms are off-policy and able to achieve their desired targets by optimizing both policy and value functions. In the current study, the efficiency and speed of these three algorithms are analyzed in a controlled environment.
The class of deep deterministic off-policy algorithms is applied effectively to solve challenging continuous control problems. However, current approaches use random noise as a common exploration method, which has several weaknesses, such as the need for manual tuning for a given task and the absence of exploratory calibration during the training process. We address these challenges by proposing a novel guided-exploration method that uses a differential directional controller to incorporate scalable exploratory action correction. An ensemble of Monte Carlo critics that provides exploratory directions serves as the controller. The proposed method improves on traditional exploration schemes by dynamically changing the exploration. We then present a novel algorithm that leverages the proposed directional controller for policy and critic modification. The proposed algorithm outperforms modern reinforcement learning algorithms on a variety of problems from the DMControl suite.
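One plausible, heavily assumption-laden reading of this scheme is sketched below: candidate corrections around the actor's action are scored by a critic ensemble and the best-rated direction is taken. The candidate generation, scoring, and step size are all illustrative; the authors' controller may work quite differently.

```python
import numpy as np

# Illustrative reading of guided exploration via a critic ensemble (assumed
# details throughout, not the authors' algorithm): instead of adding
# unstructured random noise, the exploratory correction is steered toward the
# direction the critic ensemble currently rates highest.

def guided_exploration_action(state, mu, critics, step=0.1, n_candidates=8, rng=None):
    rng = rng or np.random.default_rng()
    directions = rng.normal(size=(n_candidates, mu.shape[0]))
    directions /= np.linalg.norm(directions, axis=1, keepdims=True)
    candidates = mu + step * directions
    # Score each candidate by the mean value over the critic ensemble.
    scores = np.array([np.mean([q(state, a) for q in critics]) for a in candidates])
    return candidates[np.argmax(scores)]

# Toy usage with stand-in critics on a 2-D action space.
critics = [lambda s, a, w=w: -np.sum((a - w) ** 2) for w in (0.3, 0.5, 0.4)]
mu = np.zeros(2)
print(guided_exploration_action(state=None, mu=mu, critics=critics))
```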
Machine learning frameworks such as Genetic Programming (GP) and Reinforcement Learning (RL) are gaining popularity in flow control. This work presents a comparative analysis of the two, benchmarking some of their most representative algorithms against global optimization techniques such as Bayesian Optimization (BO) and Lipschitz global optimization (LIPO). First, we review the general framework of the model-free control problem, bringing together all methods as black-box optimization problems. Then, we test the control algorithms on three test cases. These are (1) the stabilization of a nonlinear dynamical system featuring frequency cross-talk, (2) the wave cancellation from a Burgers' flow and (3) the drag reduction in a cylinder wake flow. We present a comprehensive comparison to illustrate their differences in exploration versus exploitation and their balance between `model capacity' in the control law definition versus `required complexity'. We believe that such a comparison paves the way toward the hybridization of the various methods, and we offer some perspective on their future development in the literature on flow control problems.
Off-policy learning is more unstable than on-policy learning in reinforcement learning (RL). One reason for this instability is the discrepancy between the target ($\pi$) and behavior (b) policy distributions. This discrepancy can be alleviated by employing a smooth variant of importance sampling (IS), such as relative importance sampling (RIS). RIS has a parameter $\beta\in[0, 1]$ which controls its smoothness. To cope with the instability, we present the first relative importance sampling off-policy actor-critic (RIS-Off-PAC) model-free algorithms in RL. In our method, the network yields a target policy (the actor) and a value function (the critic) that assesses the current policy ($\pi$) using samples drawn from the behavior policy. We use the action value generated from the behavior policy, rather than from the target policy, in the reward function to train our algorithm. We also use deep neural networks to train both the actor and the critic. We evaluated our algorithm on a number of OpenAI Gym benchmark problems and demonstrate better or comparable performance to several state-of-the-art RL baselines.
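The smoothing role of $\beta$ can be illustrated with a relative density-ratio form of the weight; whether RIS-Off-PAC uses exactly this expression is an assumption here, but it shows how larger $\beta$ bounds the weights that ordinary importance sampling would let explode.

```python
import numpy as np

# Sketch of a smoothed importance weight in the style of relative importance
# sampling (the exact form used by RIS-Off-PAC may differ; this follows the
# relative density-ratio formulation as an assumption): with beta = 0 the weight
# reduces to the ordinary ratio pi/b, and larger beta flattens the weights,
# reducing variance at the cost of some bias.

def relative_importance_weight(pi_prob, b_prob, beta=0.5, eps=1e-8):
    return pi_prob / (beta * pi_prob + (1.0 - beta) * b_prob + eps)

# Toy usage: an action that is likely under the target policy but rare under
# the behavior policy gets a bounded weight instead of an exploding one.
pi_prob, b_prob = 0.6, 0.05
print(relative_importance_weight(pi_prob, b_prob, beta=0.0))  # ordinary IS: 12.0
print(relative_importance_weight(pi_prob, b_prob, beta=0.5))  # smoothed: ~1.85
```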
The high emission and low energy efficiency caused by internal combustion engines (ICE) have become unacceptable under environmental regulations and the energy crisis. As a promising alternative solution, multi-power source electric vehicles (MPS-EVs) introduce different clean energy systems to improve powertrain efficiency. The energy management strategy (EMS) is a critical technology for MPS-EVs to maximize efficiency, fuel economy, and range. Reinforcement learning (RL) has become an effective methodology for the development of EMS. RL has received continuous attention and research, but there is still a lack of systematic analysis of the design elements of RL-based EMS. To this end, this paper presents an in-depth analysis of the current research on RL-based EMS (RL-EMS) and summarizes its design elements. This paper first summarizes the previous applications of RL in EMS from five aspects: algorithm, perception scheme, decision scheme, reward function, and innovative training method. The contribution of advanced algorithms to the training effect is shown, the perception and control schemes in the literature are analyzed in detail, different reward function settings are classified, and innovative training methods and their roles are elaborated. Finally, by comparing the development routes of RL and RL-EMS, this paper identifies the gap between advanced RL solutions and existing RL-EMS and suggests potential development directions for implementing advanced artificial intelligence (AI) solutions in EMS.