Assuming that humans are (approximately) rational enables robots to infer reward functions by observing human behavior. But people exhibit a wide array of irrationalities, and our goal in this work is to better understand the effect they can have on reward inference. The challenge in studying this effect is that there are many types of irrationality, with varying degrees of mathematical formalization. We thus operationalize irrationality in the language of MDPs, by altering the Bellman optimality equation, and use this framework to study how these alterations affect inference. We find that wrongly modeling a systematically irrational human as noisily rational performs considerably worse than correctly capturing these biases - so much so that it can be better to skip inference altogether and stick with the prior! More importantly, we show that an irrational human, when modeled correctly, can convey more information about the reward than a perfectly rational human can. That is, if the robot has the right model of the human's irrationality, it can make stronger inferences than it ever could if the human were rational. Irrationality fundamentally helps rather than hinders reward inference, but it needs to be correctly accounted for.
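As a concrete illustration of the noisy-rational ("Boltzmann") model that this line of work perturbs, here is a minimal sketch (our own, with illustrative numbers, not the paper's code) of Bayesian reward inference when the human is assumed to choose trajectories in proportion to exponentiated return:

```python
# Minimal sketch of noisy-rational ("Boltzmann") reward inference over a discrete
# set of candidate reward functions. The rationality coefficient `beta`, the candidate
# returns, and the prior are illustrative assumptions.
import numpy as np

def boltzmann_posterior(returns_per_candidate, chosen_idx, beta=1.0, prior=None):
    """Posterior over candidate rewards given that the human chose trajectory `chosen_idx`.

    returns_per_candidate: array of shape (n_rewards, n_trajectories), where entry
        (k, i) is the return of trajectory i under candidate reward k.
    chosen_idx: index of the trajectory the human actually demonstrated.
    """
    returns = np.asarray(returns_per_candidate, dtype=float)
    n_rewards, _ = returns.shape
    prior = np.ones(n_rewards) / n_rewards if prior is None else np.asarray(prior, float)

    # Likelihood of the observed choice under each candidate reward:
    # P(choice | reward_k) = softmax over trajectories of beta * return.
    logits = beta * returns
    logits -= logits.max(axis=1, keepdims=True)          # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    likelihood = probs[:, chosen_idx]

    posterior = prior * likelihood
    return posterior / posterior.sum()

# Example: two candidate rewards, three trajectories; the human picks trajectory 2.
returns = np.array([[1.0, 0.5, 2.0],    # returns under candidate reward 0
                    [2.0, 1.5, 0.5]])   # returns under candidate reward 1
print(boltzmann_posterior(returns, chosen_idx=2, beta=2.0))
```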
Inferring reward functions from human behavior is at the center of value alignment - aligning AI objectives with what we, humans, actually want. But doing so relies on models of how humans behave given their objectives. After decades of research in cognitive science, neuroscience, and behavioral economics, obtaining accurate human models remains an open research topic. This begs the question: how accurate do these models need to be in order for the reward inference to be accurate? On the one hand, if small errors in the model can lead to catastrophic error in inference, the entire framework of reward learning seems ill-fated, as we will never have perfect models of human behavior. On the other hand, if as our models improve, we can have a guarantee that reward accuracy also improves, this would show the benefit of more work on the modeling side. We study this question both theoretically and empirically. We do show that it is unfortunately possible to construct small adversarial biases in behavior that lead to arbitrarily large errors in the inferred reward. However, and arguably more importantly, we are also able to identify reasonable assumptions under which the reward inference error can be bounded linearly in the error in the human model. Finally, we verify our theoretical insights in discrete and continuous control tasks with simulated and human data.
When inferring reward functions from human behavior (be it demonstrations, comparisons, physical corrections, or e-stops), it has proven useful to model the human as making noisily rational choices, with a "rationality coefficient" capturing how much noise or entropy we expect to see in the human's behavior. Many existing works choose to fix this coefficient regardless of the type or quality of the human feedback. However, in some settings, giving a demonstration may be much harder than answering a comparison query. In this case, we should expect to see more noise or suboptimality in demonstrations than in comparisons, and should interpret the feedback accordingly. In this work, we advocate that grounding the rationality coefficient in real data for each feedback type, rather than assuming a default value, has a significant positive effect on reward learning. We test this in experiments with simulated feedback as well as a user study. We find that when learning from a single feedback type, overestimating human rationality can have dire effects on reward accuracy and regret. Further, we find that the rationality level affects the informativeness of each feedback type: surprisingly, demonstrations are not always the most informative - when the human behaves very suboptimally, comparisons actually become more informative, even at the same rationality level. Moreover, when the robot gets to decide which feedback type to ask for, it gains a large advantage from accurately modeling the rationality level of each type. Ultimately, our results emphasize the importance of paying attention to the assumed rationality level, not only when learning from a single feedback type, but especially when agents learn from multiple feedback types.
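For intuition, here is a minimal sketch (illustrative assumptions, not the paper's implementation) of grounding the rationality coefficient in data: fit beta by maximum likelihood from pairwise comparisons under the standard Boltzmann/Bradley-Terry choice model.

```python
# Fit a per-feedback-type rationality coefficient beta from pairwise comparisons,
# assuming the standard Bradley-Terry / Boltzmann choice model. Data are hypothetical.
import numpy as np
from scipy.optimize import minimize_scalar

def fit_beta(return_pairs, choices):
    """return_pairs: (N, 2) returns of (option A, option B) under a fixed reward model.
    choices: (N,) array with 1 if the human preferred A, 0 if they preferred B."""
    pairs = np.asarray(return_pairs, dtype=float)
    y = np.asarray(choices, dtype=float)

    def neg_log_lik(beta):
        # P(prefer A) = sigmoid(beta * (R_A - R_B))
        diff = beta * (pairs[:, 0] - pairs[:, 1])
        log_p_a = -np.log1p(np.exp(-diff))
        log_p_b = -np.log1p(np.exp(diff))
        return -(y * log_p_a + (1 - y) * log_p_b).sum()

    res = minimize_scalar(neg_log_lik, bounds=(1e-3, 100.0), method="bounded")
    return res.x

# Example with synthetic comparison data (hypothetical numbers):
pairs = [(2.0, 1.0), (0.5, 1.5), (3.0, 0.0), (1.0, 1.2)]
choices = [1, 0, 1, 0]
print(fit_beta(pairs, choices))
```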
For many reinforcement learning (RL) applications, specifying a reward is difficult. This paper considers an RL setting where the agent obtains information about the reward only by querying an expert, who can, for example, evaluate individual states or provide binary preferences over trajectories. From such expensive feedback, we aim to learn a model of the reward that allows standard RL algorithms to achieve high expected returns with as few expert queries as possible. To this end, we propose Information Directed Reward Learning (IDRL), which uses a Bayesian model of the reward and then selects queries that maximize the information gain about the difference in return between plausibly optimal policies. In contrast to prior active reward learning methods designed for specific types of queries, IDRL naturally adapts to different query types. Moreover, it achieves similar or better performance with significantly fewer queries by shifting the focus from reducing reward approximation error to improving the policy induced by the reward model. We support our findings with extensive evaluations in multiple environments and with different query types.
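The generic building block IDRL refines is information-gain-based query selection. The sketch below is a simplified, hypothetical stand-in over a discrete reward posterior; IDRL's actual objective targets the information gain about the return difference between plausibly optimal policies, which this sketch does not implement.

```python
# Generic expected-information-gain query selection over a discrete posterior on
# candidate rewards (a simplification, not IDRL's exact objective).
import numpy as np

def entropy(p):
    p = p[p > 0]
    return -(p * np.log(p)).sum()

def best_query(posterior, answer_likelihoods):
    """posterior: (K,) over candidate rewards.
    answer_likelihoods: (Q, K, A), entry (q, k, a) = P(answer a | query q, reward k)."""
    posterior = np.asarray(posterior, dtype=float)
    gains = []
    for lik in answer_likelihoods:                       # lik: (K, A)
        p_answer = posterior @ lik                       # marginal P(answer)
        expected_post_entropy = 0.0
        for a, pa in enumerate(p_answer):
            if pa == 0:
                continue
            post_a = posterior * lik[:, a] / pa          # posterior after seeing answer a
            expected_post_entropy += pa * entropy(post_a)
        gains.append(entropy(posterior) - expected_post_entropy)
    return int(np.argmax(gains)), gains

# Example: 3 reward hypotheses, 2 candidate binary queries (hypothetical likelihoods).
posterior = np.array([0.5, 0.3, 0.2])
likelihoods = np.array([
    [[0.9, 0.1], [0.2, 0.8], [0.5, 0.5]],    # query 0: informative
    [[0.6, 0.4], [0.55, 0.45], [0.5, 0.5]],  # query 1: barely informative
])
print(best_query(posterior, likelihoods))
```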
Real-world robot tasks require complex reward functions. When we define the problem the robot needs to solve, we pretend that a designer specifies this complex reward exactly, and that it is then set in stone. In practice, however, reward design is an iterative process: the designer chooses a reward, eventually encounters an "edge-case" environment where the reward incentivizes the wrong behavior, revises the reward, and repeats. What would it mean to rethink the definition of the robot problem so that it formally accounts for this iterative nature of reward design? We propose that the robot not take the specified reward at face value, but instead maintain uncertainty over it and treat future design iterations as future evidence. We contribute an Assisted Reward Design method that accelerates the design process by anticipating and influencing this future evidence: rather than letting the designer eventually encounter failure cases and revise the reward, the method proactively exposes the designer to such environments during the development phase. We test this method in a simplified autonomous driving task and find that it improves the car's behavior faster by proposing environments that are "edge cases" for the current reward.
Adequately assigning credit to actions for future outcomes based on their contributions is a long-standing open challenge in Reinforcement Learning. The assumptions of the most commonly used credit assignment method are disadvantageous in tasks where the effects of decisions are not immediately evident. Furthermore, this method can only evaluate actions that have been selected by the agent, making it highly inefficient. Still, no alternative methods have been widely adopted in the field. Hindsight Credit Assignment is a promising, but still unexplored candidate, which aims to solve the problems of both long-term and counterfactual credit assignment. In this thesis, we empirically investigate Hindsight Credit Assignment to identify its main benefits, and key points to improve. Then, we apply it to factored state representations, and in particular to state representations based on the causal structure of the environment. In this setting, we propose a variant of Hindsight Credit Assignment that effectively exploits a given causal structure. We show that our modification greatly decreases the workload of Hindsight Credit Assignment, making it more efficient and enabling it to outperform the baseline credit assignment method on various tasks. This opens the way to other methods based on given or learned causal structures.
This paper surveys the field of reinforcement learning from a computer-science perspective. It is written to be accessible to researchers familiar with machine learning. Both the historical basis of the field and a broad selection of current work are summarized. Reinforcement learning is the problem faced by an agent that learns behavior through trial-and-error interactions with a dynamic environment. The work described here has a resemblance to work in psychology, but differs considerably in the details and in the use of the word "reinforcement." The paper discusses central issues of reinforcement learning, including trading off exploration and exploitation, establishing the foundations of the field via Markov decision theory, learning from delayed reinforcement, constructing empirical models to accelerate learning, making use of generalization and hierarchy, and coping with hidden state. It concludes with a survey of some implemented systems and an assessment of the practical utility of current methods for reinforcement learning.
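Two of the central issues the survey names, trading off exploration and exploitation and learning from delayed reinforcement, show up together in tabular Q-learning; the sketch below (with a hypothetical environment interface) is included only as orientation, not as material from the survey.

```python
# Minimal tabular Q-learning: epsilon-greedy exploration plus bootstrapped value
# updates that propagate delayed reinforcement. `env` is a hypothetical interface
# whose reset() returns a state and whose step(a) returns (next_state, reward, done).
import numpy as np

def q_learning(env, n_states, n_actions, episodes=500,
               alpha=0.1, gamma=0.99, epsilon=0.1, rng=None):
    rng = rng if rng is not None else np.random.default_rng(0)
    Q = np.zeros((n_states, n_actions))
    for _ in range(episodes):
        s = env.reset()
        done = False
        while not done:
            # Exploration vs. exploitation.
            if rng.random() < epsilon:
                a = int(rng.integers(n_actions))
            else:
                a = int(np.argmax(Q[s]))
            s_next, r, done = env.step(a)
            # Delayed reinforcement is propagated backwards through bootstrapping.
            target = r + (0.0 if done else gamma * np.max(Q[s_next]))
            Q[s, a] += alpha * (target - Q[s, a])
            s = s_next
    return Q
```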
Humans have internal models of robots (like their physical capabilities), the world (like what will happen next), and their tasks (like a preferred goal). However, human internal models are not always perfect: for example, it is easy to underestimate a robot's inertia. Nevertheless, these models change and improve over time as humans gather more experience. Interestingly, robot actions influence what this experience is, and therefore influence how people's internal models change. In this work we take a step towards enabling robots to understand the influence they have, leverage it to better assist people, and help human models more quickly align with reality. Our key idea is to model the human's learning as a nonlinear dynamical system which evolves the human's internal model given new observations. We formulate a novel optimization problem to infer the human's learning dynamics from demonstrations that naturally exhibit human learning. We then formalize how robots can influence human learning by embedding the human's learning dynamics model into the robot planning problem. Although our formulations provide concrete problem statements, they are intractable to solve in full generality. We contribute an approximation that sacrifices the complexity of the human internal models we can represent, but enables robots to learn the nonlinear dynamics of these internal models. We evaluate our inference and planning methods in a suite of simulated environments and an in-person user study, where a 7DOF robotic arm teaches participants to be better teleoperators. While influencing human learning remains an open problem, our results demonstrate that this influence is possible and can be helpful in real human-robot interaction.
Noisy sensing, imperfect control, and environment changes are defining characteristics of many real-world robot tasks. The partially observable Markov decision process (POMDP) provides a principled mathematical framework for modeling and solving robot decision-making and control tasks under uncertainty. Over the last decade, it has seen many successful applications, spanning localization and navigation, search and tracking, autonomous driving, multi-robot systems, manipulation, and human-robot interaction. This survey aims to bridge the gap between the development of POMDP models and algorithms on one end and their application to diverse robot decision tasks on the other. It analyzes the characteristics of these tasks and connects them with the mathematical and algorithmic properties of the POMDP framework for effective modeling and solution. For practitioners, the survey provides key task characteristics that help decide when and how to apply POMDPs to robot tasks successfully. For POMDP algorithm designers, the survey provides new insights into the unique challenges of applying POMDPs to robot systems and points to promising new directions for further research.
While reinforcement learning algorithms provide automated acquisition of optimal policies, practical application of such methods requires a number of design decisions, such as manually designing reward functions that not only define the task, but also provide sufficient shaping to accomplish it. In this paper, we view reinforcement learning as inferring policies that achieve desired outcomes, rather than as a problem of maximizing rewards. To solve this inference problem, we establish a novel variational inference formulation that allows us to derive a well-shaped reward function which can be learned directly from environment interactions. From the corresponding variational objective, we also derive a new probabilistic Bellman backup operator and use it to develop an off-policy algorithm to solve goal-directed tasks. We empirically demonstrate that this method eliminates the need to hand-craft reward functions for a suite of diverse manipulation and locomotion tasks and leads to effective goal-directed behaviors.
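For background, the generic control-as-inference setup that such variational formulations build on (a sketch of the standard construction, not the paper's specific outcome-conditioned operator) introduces a binary optimality variable whose likelihood is the exponentiated reward:

```latex
% Generic control-as-inference sketch (background only, not the paper's exact objective).
\begin{align}
  p(\mathcal{O}_t = 1 \mid s_t, a_t) &\propto \exp\big(r(s_t, a_t)\big),\\
  \log p(\mathcal{O}_{1:T} = 1) &\ge
    \mathbb{E}_{q(\tau)}\!\Big[\textstyle\sum_{t=1}^{T} r(s_t, a_t)\Big]
    - D_{\mathrm{KL}}\big(q(\tau)\,\|\,p(\tau)\big) + \text{const},
\end{align}
% where q(\tau) is the variational trajectory distribution and p(\tau) the prior induced
% by the dynamics; maximizing the bound over q yields an entropy-regularized,
% reward-maximizing policy.
```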
This paper investigates a new approach to model-based reinforcement learning using background planning: mixing (approximate) dynamic programming updates and model-free updates, similar to the Dyna architecture. Background planning with learned models is often worse than model-free alternatives such as Double DQN, even though the former uses significantly more memory and computation. The fundamental problem is that learned models can be inaccurate and often generate invalid states, especially when iterated for many steps. In this paper, we avoid this limitation by constraining background planning to a set of (abstract) subgoals and learning only local, subgoal-conditioned models. This goal-space planning (GSP) approach is more computationally efficient, naturally incorporates temporal abstraction for faster long-horizon planning, and avoids learning the transition dynamics entirely. We show that our GSP algorithm learns significantly faster than a Double DQN baseline in a variety of settings.
Active inference is a probabilistic framework for modeling the behavior of biological and artificial agents that derives from the principle of minimizing free energy. In recent years, this framework has been successfully applied to a variety of settings whose goal is to maximize reward, offering comparable and sometimes superior performance to alternative approaches. In this paper, we clarify the connection between reward maximization and active inference by demonstrating how and when active inference agents execute actions that are optimal for maximizing reward. Precisely, we show the conditions under which active inference produces the optimal solution to the Bellman equation, a formulation that underlies several approaches to model-based reinforcement learning and control. In partially observed Markov decision processes, the standard active inference scheme can produce Bellman-optimal actions for planning horizons of 1, but not beyond. In contrast, a recently developed recursive active inference scheme (sophisticated inference) can produce Bellman-optimal actions for any finite temporal horizon. We complement the analysis with a discussion of the broader relationship between active inference and reinforcement learning.
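For reference, the quantity an active inference agent minimizes when selecting policies is the expected free energy; the standard generic form, which the paper relates to Bellman optimality, is roughly:

```latex
% Standard expected free energy of a policy \pi (generic form, stated as background;
% see the paper for the exact schemes it analyzes).
\begin{equation}
  G(\pi) = \sum_{\tau}
    \mathbb{E}_{Q(o_\tau, s_\tau \mid \pi)}
    \big[\ln Q(s_\tau \mid \pi) - \ln \tilde{P}(o_\tau, s_\tau)\big],
\end{equation}
% which decomposes into a risk term (divergence between predicted and preferred
% outcomes) plus an ambiguity term (expected uncertainty of the likelihood).
```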
Model-Based Reinforcement Learning (RL) is widely believed to have the potential to improve sample efficiency by allowing an agent to synthesize large amounts of imagined experience. Experience Replay (ER) can be considered a simple kind of model, which has proved extremely effective at improving the stability and efficiency of deep RL. In principle, a learned parametric model could improve on ER by generalizing from real experience to augment the dataset with additional plausible experience. However, owing to the many design choices involved in empirically successful algorithms, it can be very hard to establish where the benefits are actually coming from. Here, we provide theoretical and empirical insight into when, and how, we can expect data generated by a learned model to be useful. First, we provide a general theorem motivating how learning a model as an intermediate step can narrow down the set of possible value functions more than learning a value function directly from data using the Bellman equation. Second, we provide an illustrative example showing empirically how a similar effect occurs in a more concrete setting with neural network function approximation. Finally, we provide extensive experiments showing the benefit of model-based learning for online RL in environments with combinatorial complexity, but factored structure that allows a learned model to generalize. In these experiments, we take care to control for other factors in order to isolate, insofar as possible, the benefit of using experience generated by a learned model relative to ER alone.
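As orientation for readers unfamiliar with the Dyna-style setting analyzed here, below is a minimal sketch (hypothetical tabular interfaces, not the paper's experimental setup) in which value updates mix the latest real transition with transitions replayed from a learned model.

```python
# Dyna-style updates: one real-experience update followed by several imagined updates
# sampled from a learned (here: tabular, deterministic) model. `Q` is assumed to be a
# dict mapping each encountered state to a dict of action-values; `replay` is a list of
# (s, a, r, s_next, done) tuples. All interfaces are illustrative.
import random

def dyna_q_update(Q, model, replay, alpha=0.1, gamma=0.99, planning_steps=10):
    # 1) Update from the most recent real transition and refresh the model.
    s, a, r, s_next, done = replay[-1]
    target = r + (0.0 if done else gamma * max(Q[s_next].values()))
    Q[s][a] += alpha * (target - Q[s][a])
    model[(s, a)] = (r, s_next, done)

    # 2) Planning: replay imagined transitions sampled from the learned model.
    for _ in range(planning_steps):
        (s, a), (r, s_next, done) = random.choice(list(model.items()))
        target = r + (0.0 if done else gamma * max(Q[s_next].values()))
        Q[s][a] += alpha * (target - Q[s][a])
    return Q
```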
This paper addresses the problem of inverse reinforcement learning (IRL) - inferring an agent's reward function from observations of its behavior. IRL can provide a generalizable and compact representation for apprenticeship learning, and enable accurately inferring a person's preferences in order to assist them. However, effective IRL is challenging because many reward functions can be compatible with the observed behavior. We focus on how prior reinforcement learning (RL) experience can be leveraged to make learning these preferences faster and more efficient. We propose the IRL algorithm BASIS (Behavior Acquisition through Successor-feature Intention inference from Samples), which leverages multi-task RL pre-training and successor features to allow an agent to build a strong basis for intentions that spans the space of possible goals in a given domain. When exposed to just a few expert demonstrations optimizing a novel goal, the agent uses its basis to quickly and effectively infer the reward function. Our experiments show that our approach is highly effective at inferring and optimizing demonstrated reward functions, accurately inferring reward functions from fewer than 100 trajectories.
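As background, here is a hedged sketch of the successor-feature machinery such approaches build on (standard material, not the BASIS procedure itself, which infers the weights from demonstrations rather than from reward labels): if rewards are approximately linear in features, r(s, a) = phi(s, a) . w, then Q^pi(s, a) = psi^pi(s, a) . w, where psi^pi is the expected discounted sum of features, so adapting to a new task reduces to estimating the low-dimensional weight vector w.

```python
# Standard successor-feature background (illustrative, not the BASIS implementation):
# estimate the linear reward weights w by ridge regression on per-step features and
# rewards; with successor features psi^pi, Q^pi = psi^pi . w once w is known.
import numpy as np

def infer_reward_weights(features, rewards, reg=1e-3):
    """features: (N, d) array of phi(s, a) along trajectories.
    rewards: (N,) per-step rewards attributed to those state-action pairs.
    Returns the ridge-regularized least-squares estimate of w."""
    Phi = np.asarray(features, dtype=float)
    r = np.asarray(rewards, dtype=float)
    d = Phi.shape[1]
    return np.linalg.solve(Phi.T @ Phi + reg * np.eye(d), Phi.T @ r)

# Example with hypothetical 3-dimensional features:
Phi = np.array([[1.0, 0.0, 0.5],
                [0.0, 1.0, 0.5],
                [1.0, 1.0, 0.0]])
r = np.array([1.0, -1.0, 0.0])
print(infer_reward_weights(Phi, r))
```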
This work examines the hypothesis that partially observable Markov decision process (POMDP) planning with human driver internal states can significantly improve the safety and efficiency of autonomous highway driving. We evaluate this hypothesis in a simulated scenario in which an autonomous vehicle must safely perform three lane changes in rapid succession. Approximate POMDP solutions are obtained with the partially observable Monte Carlo planning with observation widening (POMCPOW) algorithm. This approach outperforms overconfident and conservative MDP baselines and matches or outperforms QMDP. Relative to the MDP baselines, POMCPOW typically cuts the rate of unsafe situations in half or increases the success rate by 50%.
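For reference, the QMDP baseline mentioned above acts greedily with respect to belief-weighted fully observable Q-values, ignoring the value of future information gathering; a minimal sketch with illustrative numbers:

```python
# QMDP action selection: weight the optimal Q-values of the underlying fully observable
# MDP by the current belief over hidden states, then act greedily. Inputs are illustrative.
import numpy as np

def qmdp_action(belief, Q_mdp):
    """belief: (S,) probability over hidden states (e.g., other drivers' intents).
    Q_mdp: (S, A) optimal Q-values of the fully observable MDP."""
    belief = np.asarray(belief, dtype=float)
    Q_b = belief @ np.asarray(Q_mdp, dtype=float)   # (A,) expected Q under the belief
    return int(np.argmax(Q_b)), Q_b

# Example: two hidden driver intents, three maneuvers (hypothetical numbers).
belief = [0.7, 0.3]
Q_mdp = [[1.0, 0.2, -1.0],
         [-0.5, 0.4,  0.8]]
print(qmdp_action(belief, Q_mdp))
```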
In many practical applications of RL, it is expensive to observe state transitions from the environment. For example, in the problem of plasma control for nuclear fusion, computing the next state for a given state-action pair requires querying an expensive transition function, which can entail many hours of computer simulation or dollars of scientific research. Such expensive data collection prohibits application of standard RL algorithms, which usually require a large number of observations to learn. In this work, we address the problem of efficiently learning a policy while making a minimal number of state-action queries to the transition function. In particular, we leverage ideas from Bayesian optimal experimental design to guide the selection of state-action queries for efficient learning. We propose an acquisition function that quantifies how much information a state-action pair would provide about the optimal solution to a Markov decision process. At each iteration, our algorithm maximizes this acquisition function to choose the most informative state-action pair to query, yielding a data-efficient RL approach. We experiment with a variety of simulated continuous control problems and show that our method learns an optimal policy with up to $5$--$1{,}000\times$ less data than model-based RL baselines and $10^{3}$--$10^{5}\times$ less data than model-free RL baselines. We also provide several ablated comparisons which point to substantial improvements arising from the principled method of obtaining data.
Safe Reinforcement Learning can be defined as the process of learning policies that maximize the expectation of the return in problems in which it is important to ensure reasonable system performance and/or respect safety constraints during the learning and/or deployment processes. We categorize and analyze two approaches of Safe Reinforcement Learning. The first is based on the modification of the optimality criterion, the classic discounted finite/infinite horizon, with a safety factor. The second is based on the modification of the exploration process through the incorporation of external knowledge or the guidance of a risk metric. We use the proposed classification to survey the existing literature, as well as suggesting future directions for Safe Reinforcement Learning.
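One common instantiation of "modifying the optimality criterion with a safety factor" is the constrained MDP objective; the generic form below is a sketch for orientation, not tied to any single surveyed method:

```latex
% Constrained MDP objective (generic sketch): maximize return subject to a bound d on
% expected discounted cost c.
\begin{equation}
  \max_{\pi} \ \mathbb{E}_{\pi}\!\left[\sum_{t=0}^{\infty} \gamma^{t} r(s_t, a_t)\right]
  \quad \text{s.t.} \quad
  \mathbb{E}_{\pi}\!\left[\sum_{t=0}^{\infty} \gamma^{t} c(s_t, a_t)\right] \le d.
\end{equation}
```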
We consider the problem of creating assistants that can help agents (typically humans) solve novel sequential decision problems, assuming the agent is not able to specify the reward function explicitly to the assistant. Instead of aiming to automate, and act in place of, the agent as in current approaches, we give the assistant an advisory role and keep the agent in the loop as the main decision maker. The difficulty is that we must account for potential biases induced by the agent's limitations or constraints, which may cause it to seemingly irrationally reject advice. To do this, we introduce a novel formalization of assistance that models these biases, allowing the assistant to infer and adapt to them. We then introduce a new method for planning the assistant's advice that can scale to large decision problems. Finally, we show experimentally that our approach adapts to these agent biases and results in higher cumulative reward for the agent than automation-based alternatives.
Reinforcement learning (RL) and brain-computer interfaces (BCI) are two fields that have been growing over the past decade. Until recently, these fields operated independently of one another. With the rising interest in human-in-the-loop (HITL) applications, RL algorithms have been adapted to account for human guidance, giving rise to the sub-field of interactive reinforcement learning (interactive RL). Adjacently, BCI applications have long been interested in extracting intrinsic feedback from neural activity during human-machine interaction. These two ideas have set RL and BCI on a collision course through the integration of BCI into the interactive RL framework, where intrinsic feedback can be used to help train an agent. This intersection is referred to as intrinsic interactive RL. To help further the deeper integration of BCI and interactive RL, we provide a review of intrinsic interactive RL, with an emphasis on its parent field of feedback-driven interactive RL, along with a discussion of validity, challenges, and future research directions.
The reinforcement learning paradigm is a popular way to address problems that have only limited environmental feedback, rather than correctly labeled examples, as is common in other machine learning contexts. While significant progress has been made to improve learning in a single task, the idea of transfer learning has only recently been applied to reinforcement learning tasks. The core idea of transfer is that experience gained in learning to perform one task can help improve learning performance in a related, but different, task. In this article we present a framework that classifies transfer learning methods in terms of their capabilities and goals, and then use it to survey the existing literature, as well as to suggest future directions for transfer learning work.