In many risk-aware and multi-objective reinforcement learning settings, the utility of the user is derived from a single execution of a policy. In these settings, making decisions based on the average future returns is not suitable. For example, in a medical setting a patient may only have one opportunity to treat their illness. Making decisions using just the expected future returns -- known in reinforcement learning as the value -- cannot account for the potential range of adverse or positive outcomes a decision may have. Therefore, we should use the distribution over expected future returns differently to represent the critical information that the agent requires at decision time by taking both the future and accrued returns into consideration. In this paper, we propose two novel Monte Carlo tree search algorithms. Firstly, we present a Monte Carlo tree search algorithm that can compute policies for nonlinear utility functions (NLU-MCTS) by optimising the utility of the different possible returns attainable from individual policy executions, resulting in good policies for both risk-aware and multi-objective settings. Secondly, we propose a distributional Monte Carlo tree search algorithm (DMCTS) which extends NLU-MCTS. DMCTS computes an approximate posterior distribution over the utility of the returns, and utilises Thompson sampling during planning to compute policies in risk-aware and multi-objective settings. Both algorithms outperform the state-of-the-art in multi-objective reinforcement learning for the expected utility of the returns.
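To make the planning step above concrete, the following is a minimal sketch of Thompson-sampling-based child selection over an approximate (bootstrap) posterior of the utility of returns; the bootstrap posterior, the `utility` callable and the node statistics are illustrative assumptions rather than the exact DMCTS implementation.

```python
import numpy as np

def thompson_select(children, utility, rng=np.random.default_rng()):
    """Pick the child whose sampled utility is highest.

    Each child keeps a list of returns (accrued + future) observed through it.
    A bootstrap resample of that list acts as an approximate posterior sample
    over the child's expected utility.
    """
    sampled = []
    for returns in children:
        if len(returns) == 0:
            sampled.append(np.inf)    # force at least one visit per child
            continue
        boot = rng.choice(returns, size=len(returns), replace=True)
        sampled.append(np.mean([utility(r) for r in boot]))
    return int(np.argmax(sampled))

# Example: a risk-averse utility over scalar returns of a single execution.
children_stats = [[1.0, 0.9, 1.1], [2.0, -3.0, 2.5], []]
best = thompson_select(children_stats, utility=lambda r: r - 0.5 * max(0.0, -r))
```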
In many real-world scenarios, the utility of a user is derived from a single execution of a policy. In this setting, to apply multi-objective reinforcement learning, the expected utility of the returns must be optimised. Various scenarios exist where a user's preferences over objectives (also known as the utility function) are unknown or difficult to specify. In such scenarios, a set of optimal policies must be learned. However, settings where the expected utility must be maximised have been largely overlooked by the multi-objective reinforcement learning community and, as a consequence, a set of optimal solutions has yet to be defined. In this paper we address this challenge by proposing first-order stochastic dominance as a criterion to build solution sets that maximise expected utility. We also propose a new dominance criterion, known as expected scalarised returns (ESR) dominance, that extends first-order stochastic dominance to allow a set of optimal policies to be learned in practice. We then define a new solution concept called the ESR set, which is a set of policies that are ESR dominant. Finally, we define a new multi-objective tabular distributional reinforcement learning (MOT-DRL) algorithm to learn the ESR set in a multi-objective multi-armed bandit setting.
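The first-order stochastic dominance criterion above can be checked directly on empirical samples of the scalarised returns; a minimal sketch, assuming sample-based CDFs and a shared threshold grid:

```python
import numpy as np

def first_order_dominates(x, y, grid=None):
    """Return True if the return distribution of x first-order stochastically
    dominates that of y, i.e. F_x(t) <= F_y(t) for every threshold t
    (x is at least as likely to exceed every threshold)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    if grid is None:
        grid = np.union1d(x, y)
    cdf_x = np.array([(x <= t).mean() for t in grid])
    cdf_y = np.array([(y <= t).mean() for t in grid])
    return bool(np.all(cdf_x <= cdf_y))

# Samples of scalarised returns from single executions of two policies.
print(first_order_dominates([2.0, 3.0, 4.0], [1.0, 2.0, 3.0]))  # True
```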
The ability to plan ahead efficiently is crucial for both biological organisms and artificial systems. Model-based planning and prospection have been studied extensively in cognitive neuroscience and artificial intelligence (AI), but from different perspectives and with different desiderata that are difficult to reconcile (biological realism versus scalability). Here, we introduce a novel method for planning in large POMDPs -- Active Tree Search (AcT) -- which combines the normative character and biological realism of a leading planning theory in neuroscience (active inference) with the scalability of tree search methods in AI. This unification is beneficial for both approaches. On the one hand, using tree search enables the biologically grounded, first-principles method of active inference to be applied to large-scale problems. On the other hand, active inference provides a principled solution to the exploration-exploitation dilemma, which is often addressed heuristically in tree search methods. Our simulations show that AcT successfully navigates binary trees that are challenging for sampling-based methods, problems that require adaptive exploration, and the large POMDP problem "RockSample", in which AcT approximates state-of-the-art POMDP solutions. Furthermore, we illustrate how AcT can be used to simulate the neurophysiological responses (e.g., in the hippocampus and prefrontal cortex) of humans and other animals solving large planning problems. These numerical analyses show that Active Tree Search is a principled realisation of planning theories from neuroscience and AI, offering both biological realism and scalability.
Many real-world problems contain multiple objectives and agents, where trade-offs exist between the objectives. Key to solving such problems is exploiting the sparse dependency structure that exists between the agents. For example, in wind farm control there is a trade-off between maximising power output and minimising stress on the system components. Dependencies between turbines arise due to the wake effect. We model such sparse dependencies as a multi-objective coordination graph (MO-CoG). In multi-objective reinforcement learning, a utility function is typically used to model a user's preferences over objectives, which may be unknown a priori. In such settings, a set of optimal policies must be computed. Which policies are optimal depends on which optimality criterion applies. If a user's utility function is derived from multiple executions of a policy, the scalarised expected returns (SER) criterion must be optimised. If a user's utility is derived from a single execution of a policy, the expected scalarised returns (ESR) criterion must be optimised. For example, wind farms are subject to constraints and regulations that must be adhered to at all times, so the ESR criterion must be optimised. For MO-CoGs, state-of-the-art algorithms can only compute a set of optimal policies for the SER criterion, leaving the ESR criterion understudied. To compute a set of optimal policies under the ESR criterion (also known as the ESR set), a distribution over the returns must be maintained. Therefore, to compute a set of optimal policies under the ESR criterion for MO-CoGs, we propose a novel distributional multi-objective variable elimination (DMOVE) algorithm. We evaluate DMOVE in realistic wind farm simulations. Given that the returns in real-world wind farm settings are continuous, we use a model known as real-NVP to learn the continuous return distributions used to calculate the ESR set.
Monte Carlo Tree Search (MCTS) is a recently proposed search method that combines the precision of tree search with the generality of random sampling. It has received considerable interest due to its spectacular success in the difficult problem of computer Go, but has also proved beneficial in a range of other domains. This paper is a survey of the literature to date, intended to provide a snapshot of the state of the art after the first five years of MCTS research. We outline the core algorithm's derivation, impart some structure on the many variations and enhancements that have been proposed, and summarise the results from the key game and non-game domains to which MCTS methods have been applied. A number of open research questions indicate that the field is ripe for future work.
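The core selection rule underlying most of the surveyed MCTS variants is UCB1 applied to tree nodes (UCT); a minimal sketch of that rule, with illustrative node statistics:

```python
import math

def uct_select(children, c=math.sqrt(2)):
    """UCT: pick the child maximising exploitation + exploration.

    children: list of (total_value, visit_count) pairs for one parent node.
    """
    parent_visits = sum(n for _, n in children)

    def ucb(stats):
        value, visits = stats
        if visits == 0:
            return float("inf")  # expand unvisited children first
        return value / visits + c * math.sqrt(math.log(parent_visits) / visits)

    return max(range(len(children)), key=lambda i: ucb(children[i]))
```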
The rapid changes in the finance industry due to the increasing amount of data have revolutionised the techniques for data processing and data analysis and brought new theoretical and computational challenges. In contrast to classical stochastic control theory and other analytical approaches to financial decision-making problems that rely heavily on model assumptions, new developments in reinforcement learning (RL) are able to make full use of the large amount of financial data with fewer model assumptions and to improve decision making in complex financial environments. This survey paper aims to review the recent developments and use of RL approaches in finance. We introduce Markov decision processes, which is the setting for many commonly used RL approaches. Various algorithms are then introduced, with a focus on value-based and policy-based methods that do not require any model assumptions. Connections are made to neural networks to extend the framework to encompass deep RL algorithms. Our survey concludes by discussing the applications of these RL algorithms in a variety of decision-making problems in finance, including optimal execution, portfolio optimisation, option pricing and hedging, market making, smart order routing, and robo-advising.
Monte Carlo Tree Search (MCTS) is a powerful approach to designing game-playing bots or solving sequential decision problems. The method relies on an intelligent tree search that balances exploration and exploitation. MCTS performs random sampling in the form of simulations and stores statistics of actions to make more educated choices in each subsequent iteration. The method has become the state of the art for combinatorial games; however, in more complex games (such as those with a high branching factor or real-time games), as well as in various practical domains (e.g., transportation, scheduling, or security), effective application of MCTS often requires problem-dependent modifications or integration with other techniques. Such domain-specific modifications and hybrid approaches are the main focus of this survey. The last major MCTS survey was published in 2012, so contributions that have appeared since its release are of particular interest.
In this work we present a preliminary investigation of a novel algorithm called Dyna-T. In reinforcement learning (RL), a planning agent has its own representation of the environment as a model. To discover an optimal policy for interacting with the environment, the agent collects experience in a trial-and-error fashion. Experience can be used to learn a better model or to directly improve the value function and policy. Although these are usually kept separate, Dyna-Q is a hybrid approach which, at each iteration, exploits the real experience to update the model as well as the value function, while planning its actions using simulated data from its model. However, the planning process is computationally expensive and strongly depends on the dimensionality of the state-action space. We propose to build an Upper Confidence Tree (UCT) on the simulated experience and search for the best action to select during the online learning process. We demonstrate the effectiveness of our proposed method in a set of preliminary tests on three testbed environments from OpenAI. In contrast to Dyna-Q, Dyna-T outperforms state-of-the-art RL agents in stochastic environments by choosing a more robust action-selection strategy.
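For context, a minimal tabular Dyna-Q step is sketched below, with a comment marking where Dyna-T would instead run a UCT search over the simulated experience; the tabular representation and the function signature are illustrative assumptions.

```python
import random

def dyna_q_step(q, model, s, a, r, s2, actions, alpha=0.1, gamma=0.95, n_planning=10):
    """One Dyna-Q iteration: learn from real experience, then plan from the model.

    q:     mapping (state, action) -> value (e.g. a defaultdict(float)).
    model: mapping (state, action) -> (reward, next_state), deterministic for simplicity.
    Dyna-T, as described above, would instead run a UCT search over simulated
    experience drawn from `model` to choose the next action.
    """
    # Direct RL update from the real transition.
    q[(s, a)] += alpha * (r + gamma * max(q[(s2, b)] for b in actions) - q[(s, a)])
    # Model learning.
    model[(s, a)] = (r, s2)
    # Planning: replay simulated transitions sampled from the model.
    for _ in range(n_planning):
        (ps, pa), (pr, ps2) = random.choice(list(model.items()))
        q[(ps, pa)] += alpha * (pr + gamma * max(q[(ps2, b)] for b in actions) - q[(ps, pa)])
```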
Safe Reinforcement Learning can be defined as the process of learning policies that maximize the expectation of the return in problems in which it is important to ensure reasonable system performance and/or respect safety constraints during the learning and/or deployment processes. We categorize and analyze two approaches of Safe Reinforcement Learning. The first is based on the modification of the optimality criterion, the classic discounted finite/infinite horizon, with a safety factor. The second is based on the modification of the exploration process through the incorporation of external knowledge or the guidance of a risk metric. We use the proposed classification to survey the existing literature, as well as suggesting future directions for Safe Reinforcement Learning.
Classical reinforcement learning (RL) techniques are generally concerned with the design of decision-making policies driven by the maximisation of the expected outcome. Nevertheless, this approach does not take into consideration the potential risk associated with the actions taken, which may be critical in certain applications. To address that issue, the present research work introduces a novel methodology based on distributional RL to derive sequential decision-making policies that are sensitive to the risk, the latter being modelled by the tail of the return probability distribution. The core idea is to replace the $Q$ function generally standing at the core of learning schemes in RL by another function taking into account both the expected return and the risk. Named the risk-based utility function $U$, it can be extracted from the random return distribution $Z$ naturally learnt by any distributional RL algorithm. This makes it possible to span the complete potential trade-off between risk minimisation and expected return maximisation, in contrast to fully risk-averse methodologies. Fundamentally, this research yields a truly practical and accessible solution for learning risk-sensitive policies with minimal modification to the distributional RL algorithm, and with an emphasis on the interpretability of the resulting decision-making process.
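A minimal sketch of how such a utility $U$ could be extracted from a quantile representation of the random return $Z$, trading the expected return off against a tail measure (CVaR); the specific weighting below is an assumption for illustration, not the paper's exact definition of $U$.

```python
import numpy as np

def risk_based_utility(quantiles, risk_weight=0.5, alpha=0.1):
    """Combine the expected return with a tail measure of the return distribution.

    quantiles: equally spaced quantile estimates of the random return Z,
               as produced by a quantile-based distributional RL agent.
    alpha:     tail fraction used for the conditional value-at-risk (CVaR).
    """
    z = np.sort(np.asarray(quantiles, float))
    expected_return = z.mean()
    k = max(1, int(np.ceil(alpha * len(z))))
    cvar = z[:k].mean()  # mean of the worst alpha-fraction of outcomes
    return (1.0 - risk_weight) * expected_return + risk_weight * cvar

# risk_weight = 0 recovers the usual greedy-on-Q behaviour;
# risk_weight = 1 acts fully risk-averse on the alpha-tail.
```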
Active inference is a probabilistic framework for modelling the behaviour of biological and artificial agents, which derives from the principle of minimising free energy. In recent years, this framework has been successfully applied to a variety of situations where the goal was to maximise reward, offering comparable and sometimes superior performance to alternative approaches. In this paper, we clarify the connection between reward maximisation and active inference by demonstrating how and when active inference agents perform actions that are optimal for maximising reward. Precisely, we show the conditions under which active inference produces the optimal solution to the Bellman equation -- a formulation that underlies several approaches to model-based reinforcement learning and control. On partially observed Markov decision processes, the standard active inference scheme can produce Bellman-optimal actions for a planning horizon of 1, but not beyond. In contrast, a recently developed recursive active inference scheme (sophisticated inference) can produce Bellman-optimal actions on any finite temporal horizon. We append the analysis with a discussion of the broader relationship between active inference and reinforcement learning.
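For reference, the Bellman optimality equation referred to here takes the standard form (discounted, fully observed MDP notation assumed):

$$ V^*(s) = \max_{a} \Big[ R(s, a) + \gamma \sum_{s'} P(s' \mid s, a)\, V^*(s') \Big] $$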
Adequately assigning credit to actions for future outcomes based on their contributions is a long-standing open challenge in Reinforcement Learning. The assumptions of the most commonly used credit assignment method are disadvantageous in tasks where the effects of decisions are not immediately evident. Furthermore, this method can only evaluate actions that have been selected by the agent, making it highly inefficient. Still, no alternative methods have been widely adopted in the field. Hindsight Credit Assignment is a promising, but still unexplored candidate, which aims to solve the problems of both long-term and counterfactual credit assignment. In this thesis, we empirically investigate Hindsight Credit Assignment to identify its main benefits, and key points to improve. Then, we apply it to factored state representations, and in particular to state representations based on the causal structure of the environment. In this setting, we propose a variant of Hindsight Credit Assignment that effectively exploits a given causal structure. We show that our modification greatly decreases the workload of Hindsight Credit Assignment, making it more efficient and enabling it to outperform the baseline credit assignment method on various tasks. This opens the way to other methods based on given or learned causal structures.
The optimal setting of the many hyperparameters of machine learning algorithms is key to making the most of the available data. To this end, several methods have been proposed, such as evolutionary strategies, random search, Bayesian optimisation, and heuristic rules of thumb. In reinforcement learning (RL), the information content of the data collected by the learning agent while interacting with its environment depends heavily on the setting of many hyperparameters. Therefore, users of RL algorithms have to rely on search-based optimisation methods, such as grid search or the Nelder-Mead simplex algorithm, which are very inefficient for most RL tasks, significantly slow down the learning curve, and leave the user with the burden of purposefully biasing data collection. In this work, to make RL algorithms more user-independent, a novel approach for autonomous hyperparameter setting using Bayesian optimisation is proposed. Data from past episodes and different hyperparameter values are used at a meta-learning level by performing behavioural cloning, which helps improve the effectiveness of the reinforcement-learning variant that maximises the acquisition function. In addition, by tightly integrating Bayesian optimisation into the design of the reinforcement learning agent, the number of state transitions needed to converge to the optimal policy for a given task is reduced. Computational experiments reveal promising results compared to other manual tuning and optimisation-based approaches, which highlights the benefits of changing the algorithm hyperparameters to increase the information content of the generated data.
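A generic Gaussian-process/Expected-Improvement proposal step of the kind used in such Bayesian optimisation loops is sketched below; it omits the paper's meta-learning and behavioural-cloning components, and the function signature is an illustrative assumption.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def propose_hyperparameters(observed_x, observed_y, candidates, xi=0.01):
    """Suggest the next hyperparameter setting via GP-based Bayesian optimisation.

    observed_x: array (n, d) of previously tried hyperparameter vectors.
    observed_y: array (n,) of the returns they achieved.
    candidates: array (m, d) of settings to score with Expected Improvement.
    """
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    gp.fit(np.asarray(observed_x), np.asarray(observed_y))
    mu, sigma = gp.predict(np.asarray(candidates), return_std=True)
    best = np.max(observed_y)
    improvement = mu - best - xi
    z = np.divide(improvement, sigma, out=np.zeros_like(sigma), where=sigma > 0)
    ei = improvement * norm.cdf(z) + sigma * norm.pdf(z)
    ei[sigma == 0.0] = 0.0  # no expected improvement where the GP is certain
    return np.asarray(candidates)[np.argmax(ei)]
```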
Feature acquisition algorithms address the problem of acquiring informative features while balancing the costs of acquisition to improve the learning performance of ML models. Previous approaches have focused on calculating the expected utility values of features to determine the acquisition sequences. Other approaches formulated the problem as a Markov Decision Process (MDP) and applied reinforcement learning based algorithms. In comparison to previous approaches, we focus on 1) formulating the feature acquisition problem as an MDP and applying Monte Carlo Tree Search, 2) calculating the intermediary rewards for each acquisition step based on model improvements and acquisition costs and 3) simultaneously optimizing model improvement and acquisition costs with multi-objective Monte Carlo Tree Search. With Proximal Policy Optimization and Deep Q-Network algorithms as benchmarks, we show the effectiveness of our proposed approach in an experimental study.
This paper surveys the field of reinforcement learning from a computer-science perspective. It is written to be accessible to researchers familiar with machine learning. Both the historical basis of the field and a broad selection of current work are summarized. Reinforcement learning is the problem faced by an agent that learns behavior through trial-and-error interactions with a dynamic environment. The work described here has a resemblance to work in psychology, but differs considerably in the details and in the use of the word "reinforcement." The paper discusses central issues of reinforcement learning, including trading off exploration and exploitation, establishing the foundations of the field via Markov decision theory, learning from delayed reinforcement, constructing empirical models to accelerate learning, making use of generalization and hierarchy, and coping with hidden state. It concludes with a survey of some implemented systems and an assessment of the practical utility of current methods for reinforcement learning.
We consider the learning dynamics of a single reinforcement learning optimal execution trading agent when it interacts with an event-driven agent-based financial market model. Trading takes place asynchronously through a matching engine in event time. The optimal execution agent is considered at different levels of initial order size and over differently sized state spaces. The impact of the agent on the agent-based model and the market is assessed using a calibration approach that explores changes in the empirical stylised facts and price impact curves. Convergence, volume trajectory, and action trace plots are used to visualise the learning dynamics. This demonstrates how an optimal execution agent learns optimal trading decisions inside a simulated reactive market framework, and how the reactiveness of the simulated market is altered by the introduction of strategic order splitting.
Over the last 10 to 15 years, active inference has helped to explain various brain mechanisms, from habit formation to dopaminergic discharges, and even to model curiosity. However, current implementations suffer from an exponential (space and time) complexity class when computing the prior over all possible policies up to the time horizon. Fountas et al. (2020) used Monte Carlo tree search to address this problem, leading to impressive results in two different tasks. In this paper, we present an alternative framework that aims to unify tree search and active inference by casting planning as a structure learning problem. Two tree search algorithms are then presented. The first propagates the expected free energy forward in time (i.e., towards the leaves), while the second propagates it backward (i.e., towards the root). We then demonstrate that forward and backward propagation are related to active inference and sophisticated inference, respectively, thereby clarifying the differences between these two planning strategies.
Offline policy optimisation could have a large impact on many real-world decision-making problems, as online learning may be infeasible in many applications. Importance sampling and its variants are a commonly used type of estimator in off-policy evaluation, and such estimators typically do not require assumptions on the properties and representational capabilities of value function or decision process model function classes. In this paper, we identify an important overfitting phenomenon in optimising the importance-weighted return, in which the learned policy can essentially avoid making consistent decisions for part of the initial state space. We propose an algorithm to avoid this overfitting through a new per-state-neighbourhood normalisation constraint, and provide a theoretical justification for the proposed algorithm. We also show the limitations of previous attempts at this approach. We test our algorithm in a healthcare-inspired simulator, a logged dataset collected from real hospitals, and continuous control tasks. These experiments show that the proposed method yields less overfitting and better test performance compared to state-of-the-art batch reinforcement learning algorithms.
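The importance-weighted return estimators referred to here can be sketched as follows (ordinary and self-normalised per-trajectory importance sampling); the paper's per-state-neighbourhood normalisation constraint is not reproduced in this illustrative snippet.

```python
import numpy as np

def importance_weighted_return(trajectories, target_policy, behaviour_policy,
                               gamma=0.99, self_normalise=True):
    """Off-policy estimate of the target policy's value from logged trajectories.

    Each trajectory is a list of (state, action, reward) tuples collected under
    the behaviour policy; each policy maps (state, action) to a probability.
    """
    weights, returns = [], []
    for traj in trajectories:
        w, g, discount = 1.0, 0.0, 1.0
        for s, a, r in traj:
            w *= target_policy(s, a) / behaviour_policy(s, a)
            g += discount * r
            discount *= gamma
        weights.append(w)
        returns.append(g)
    weights, returns = np.asarray(weights), np.asarray(returns)
    if self_normalise:                        # weighted IS: lower variance, small bias
        return float(np.sum(weights * returns) / np.sum(weights))
    return float(np.mean(weights * returns))  # ordinary IS: unbiased, high variance
```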
We study fair multi-objective reinforcement learning in which an agent must learn a policy that simultaneously achieves high reward on multiple dimensions of a vector-valued reward. Motivated by the fair resource allocation literature, we model this as an expected welfare maximization problem, for some non-linear fair welfare function of the vector of long-term cumulative rewards. One canonical example of such a function is the Nash Social Welfare, or geometric mean, the log transform of which is also known as the Proportional Fairness objective. We show that even approximately optimal optimization of the expected Nash Social Welfare is computationally intractable even in the tabular case. Nevertheless, we provide a novel adaptation of Q-learning that combines non-linear scalarized learning updates and non-stationary action selection to learn effective policies for optimizing nonlinear welfare functions. We show that our algorithm is provably convergent, and we demonstrate experimentally that our approach outperforms techniques based on linear scalarization, mixtures of optimal linear scalarizations, or stationary action selection for the Nash Social Welfare Objective.
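The welfare functions named above can be computed directly from the vector of long-term cumulative rewards; a minimal sketch, assuming strictly positive rewards for the log transform:

```python
import numpy as np

def nash_social_welfare(cumulative_rewards):
    """Geometric mean of the per-objective cumulative rewards."""
    v = np.asarray(cumulative_rewards, float)
    return float(np.prod(v) ** (1.0 / len(v)))

def proportional_fairness(cumulative_rewards):
    """Log transform of the Nash Social Welfare (sum of logs)."""
    v = np.asarray(cumulative_rewards, float)
    return float(np.sum(np.log(v)))

# A welfare-based comparison can prefer balanced outcomes over lopsided ones:
print(nash_social_welfare([4.0, 4.0]) > nash_social_welfare([7.0, 1.0]))  # True
```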
In this paper, we consider the problem of path finding for a set of homogeneous and autonomous agents navigating a previously unknown stochastic environment. In our problem setting, each agent attempts to maximize a given utility function while respecting safety properties. Our solution is based on ideas from evolutionary game theory, namely replicating policies that perform well and diminishing ones that do not. We do a comprehensive comparison with related multiagent planning methods, and show that our technique beats state of the art RL algorithms in minimizing path length by nearly 30% in large spaces. We show that our algorithm is computationally faster than deep RL methods by at least an order of magnitude. We also show that it scales better with an increase in the number of agents as compared to other methods, path planning methods in particular. Lastly, we empirically prove that the policies that we learn are evolutionarily stable and thus impervious to invasion by any other policy.
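The "replicate policies that perform well, diminish ones that do not" idea corresponds to the textbook discrete-time replicator update, sketched below as an illustration rather than the paper's exact learning rule.

```python
import numpy as np

def replicator_update(policy_shares, fitness, step=0.1):
    """Discrete-time replicator step: grow the share of policies whose
    fitness beats the population average, shrink the rest.

    policy_shares: probabilities over candidate policies (sums to 1).
    fitness:       observed utility of each policy (e.g. negative path length).
    """
    x = np.asarray(policy_shares, float)
    f = np.asarray(fitness, float)
    avg = float(x @ f)
    x = x * (1.0 + step * (f - avg))
    x = np.clip(x, 0.0, None)
    return x / x.sum()

# Example: the second policy's share grows because its fitness exceeds the mean.
print(replicator_update([0.5, 0.5], [-10.0, -7.0]))
```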