After deployment, AI agents encounter problems that exceed their ability to solve autonomously. Leveraging human assistance can help agents overcome their inherent limitations and robustly cope with unfamiliar situations. We propose a general interactive framework that enables an agent to request and interpret rich, contextually useful information from an assistant who has knowledge of the task and the environment. We demonstrate the practicality of the framework on a simulated human-assisted navigation problem. Aided by an assistance-requesting policy learned with our method, a navigation agent achieves a success rate on tasks in previously unseen environments that is seven times higher than that of fully autonomous behavior. We show that the agent can take advantage of different types of information depending on the context, and we analyze the benefits and challenges of learning an assistance-requesting policy when the assistant can recursively decompose tasks into subtasks.
Recent applications of autonomous agents and robots, such as self-driving cars, scenario-based trainers, exploration robots, and service robots, have brought attention to crucial trust-related challenges associated with the current generation of artificial intelligence (AI) systems. Despite their great success, neural-network approaches based on connectionist deep learning lack the ability to explain their decisions and actions to others. Without symbolic explanation capabilities they are black boxes, which renders their decisions or actions opaque and makes them hard to trust in safety-critical applications. The recent push for explainability of AI systems has produced several approaches to eXplainable Artificial Intelligence (XAI); however, most studies have focused on data-driven XAI systems applied in the computational sciences. Research addressing the increasingly pervasive goal-driven agents and robots is still missing. This paper reviews approaches to explainable goal-driven intelligent agents and robots, focusing on techniques for explaining and communicating the agent's perceptual functions (e.g., senses and vision) and cognitive reasoning (e.g., beliefs, desires, intentions, plans, and goals) to humans in the loop. The review highlights key strategies that emphasize transparency, understandability, and continual learning for explainability. Finally, the paper presents requirements for explainability and suggests a roadmap for realizing effective goal-driven explainable agents and robots.
Building autonomous agents able to participate in social interactions with humans is one of the main challenges of AI. Within the Deep Reinforcement Learning (DRL) field, this objective has motivated multiple works on embodied language use. However, current approaches focus on language as a communication tool in very simplified and non-diverse social situations: the "naturalness" of language is reduced to the notion of high vocabulary size and variability. In this paper, we argue that aiming towards human-level AI requires a broader set of key social skills: 1) language use in complex and variable social contexts; 2) beyond language, complex embodied communication in multimodal settings within constantly evolving social worlds. We explain how concepts from cognitive science can help AI draw a roadmap towards human-like intelligence, with a focus on its social dimensions. As a first step, we propose to expand current research to a broader set of core social skills. To this end, we present SocialAI, a benchmark using multiple grid-world environments featuring other (scripted) social agents to assess the social skills of DRL agents. We then study the limits of recent state-of-the-art DRL approaches when tested on SocialAI and discuss important next steps towards proficient social agents. Videos and code are available at https://sites.google.com/view/socialai.
We present a retrospective on the state of Embodied AI research. Our analysis focuses on 13 challenges presented at the Embodied AI Workshop at CVPR. These challenges are grouped into three themes: (1) visual navigation, (2) rearrangement, and (3) embodied vision-and-language. We discuss the dominant datasets within each theme, evaluation metrics for the challenges, and the performance of state-of-the-art models. We highlight commonalities between top approaches to the challenges and identify potential future directions for Embodied AI research.
Large language models can encode a wealth of semantic knowledge about the world. Such knowledge could be extremely useful to robots aiming to act upon high-level, temporally extended instructions expressed in natural language. However, a significant weakness of language models is that they lack real-world experience, which makes it difficult to leverage them for decision making within a given embodiment. For example, asking a language model to describe how to clean up a spill might result in a reasonable narrative, but it may not be applicable to a particular agent, such as a robot, that needs to perform the task in a particular environment. We propose to provide real-world grounding by means of pretrained skills, which are used to constrain the model to propose natural language actions that are both feasible and contextually appropriate. The robot can act as the language model's "hands and eyes", while the language model supplies high-level semantic knowledge about the task. We show how low-level skills can be combined with large language models so that the language model provides high-level knowledge about the procedures for executing complex and temporally extended instructions, while value functions associated with these skills provide the grounding necessary to connect this knowledge to a particular physical environment. We evaluate our method on a number of real-world robotic tasks, where we show the need for real-world grounding and that this approach is capable of completing long-horizon, abstract, natural language instructions on a mobile manipulator. The project's website and videos can be found at https://say-can.github.io/.
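To make the combination of language-model scores and value-function grounding concrete, here is a minimal sketch of skill selection in this style. It is not the authors' implementation: the function names `llm_logprob` and `affordance_value`, and the flat list of candidate skills, are illustrative assumptions standing in for a pretrained language model and the skills' learned value functions.

```python
import math

def select_skill(instruction, state, skills, llm_logprob, affordance_value):
    """Pick the next skill by combining a language-model score with a
    value-function (affordance) score, as described above.

    llm_logprob(instruction, skill) -> log p(skill text | instruction so far)
    affordance_value(state, skill)  -> estimated probability that the skill
                                       succeeds from the current state
    """
    best_skill, best_score = None, -math.inf
    for skill in skills:
        p_lm = math.exp(llm_logprob(instruction, skill))  # semantic usefulness
        p_can = affordance_value(state, skill)            # real-world feasibility
        score = p_lm * p_can
        if score > best_score:
            best_skill, best_score = skill, score
    return best_skill
```

In a full loop, the executed skill would be appended to the language-model prompt and selection repeated until a termination skill is chosen.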
We are interested in interactive agents that learn to coordinate, namely, a $builder$, which performs actions but is ignorant of the goal of the task, and an $architect$, which guides the builder towards that goal. We define and explore a formal setting in which artificial agents are equipped with mechanisms that allow them to learn a task while simultaneously evolving a shared communication protocol. Experimental semiotics has shown how proficient humans are at learning from a priori unknown instructions. We take inspiration from this and propose the Architect-Builder Problem (ABP): an asymmetric setting in which an architect must learn to guide a builder towards constructing a specific structure. The architect knows the target structure but cannot act in the environment and can only send arbitrary messages to the builder. The builder, on the other hand, can act in the environment but has no knowledge of the task at hand and must learn to solve it relying only on the messages sent by the architect. Crucially, the meaning of the messages is initially not defined between the agents but must be negotiated throughout learning. Under these constraints, we propose Architect-Builder Iterated Guiding (ABIG), a solution to the Architect-Builder Problem in which the architect leverages a learned model of the builder to guide it, while the builder uses self-imitation learning to reinforce its guided behavior. We analyze the key learning mechanisms of ABIG and test it in a 2D instantiation of the ABP where tasks involve grasping cubes, placing them at a given location, or building various shapes. In this environment, ABIG produces a low-level, high-frequency guiding communication protocol that not only enables an architect-builder pair to solve the task at hand, but also generalizes to unseen tasks.
To solve difficult tasks, humans ask questions to acquire knowledge from external sources. In contrast, classical reinforcement learning agents lack such an ability and often resort to exploratory behavior. This is exacerbated by the fact that few of today's environments support querying for knowledge. To study how agents can be taught to query external knowledge via language, we first introduce two new environments: the grid-world-based Q-BabyAI and the text-based Q-TextWorld. In addition to physical interactions, an agent can query an external knowledge source specialized for these environments to gather information. Second, we propose the "Asking for Knowledge" (AFK) agent, which learns to generate language commands to query for meaningful knowledge that helps solve the task. AFK leverages a non-parametric memory, a pointer mechanism, and an episodic exploration bonus to tackle (1) irrelevant information, (2) a large query-language space, and (3) the delayed reward for making meaningful queries. Extensive experiments show that the AFK agent outperforms recent baselines on the challenging Q-BabyAI and Q-TextWorld environments.
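As a rough illustration of how a query action paired with an episodic exploration bonus might look, here is a minimal sketch. The memory structure, bonus scale, and the `("query", ...)` action interface are assumptions for exposition, not the paper's implementation.

```python
class QueryMemory:
    """Episodic memory of answers already obtained; rewards novel, informative ones."""

    def __init__(self, bonus=0.1):
        self.answers_seen = set()
        self.bonus = bonus

    def intrinsic_reward(self, answer):
        # No bonus for empty or previously seen answers: repeated or irrelevant
        # queries earn nothing, while a query that uncovers new information does.
        if not answer or answer in self.answers_seen:
            return 0.0
        self.answers_seen.add(answer)
        return self.bonus


# Hypothetical usage inside a rollout, assuming the environment routes
# language queries to its knowledge source and returns the answer in `info`:
# memory = QueryMemory()
# obs, reward, done, info = env.step(("query", "where is the red key?"))
# reward += memory.intrinsic_reward(info.get("answer"))
```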
Building autonomous machines that can explore open-ended environments, discover possible interactions, and autonomously build repertoires of skills is a general goal of artificial intelligence. Developmental approaches argue that this can only be achieved by autonomous and intrinsically motivated learning agents that can generate, select, and learn to solve their own problems. In recent years we have seen a convergence of developmental approaches, in particular developmental robotics, with deep reinforcement learning (RL) methods, forming the new domain of developmental machine learning. Within this new domain, we review a set of methods in which deep RL algorithms are trained to tackle the developmental-robotics problem of autonomously acquiring open-ended repertoires of skills. Intrinsically motivated goal-conditioned RL algorithms train agents to learn to represent, generate, and pursue their own goals. Self-generated goals require learning compact goal encodings as well as their associated goal-achievement functions, which leads to new challenges compared to traditional RL algorithms designed to tackle pre-defined sets of goals using external reward signals. This paper proposes a typology of these methods at the intersection of deep RL and developmental approaches, surveys recent work, and discusses future avenues.
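To ground the notions of a goal-achievement function and self-generated goals, here is a minimal sketch under simplifying assumptions: goals and states share a vector encoding, achievement is a distance threshold, and "generating" a goal is crudely approximated by perturbing a previously reached goal rather than using a learned generator. All names are illustrative.

```python
import numpy as np

def goal_achievement(state_encoding, goal_encoding, threshold=0.05):
    """Binary goal-achievement function: the goal counts as reached when the
    state encoding is close enough to the goal encoding."""
    diff = np.asarray(state_encoding) - np.asarray(goal_encoding)
    return float(np.linalg.norm(diff) < threshold)

def sample_own_goal(achieved_goals, rng=None):
    """Self-generated goal: perturb a goal the agent has already reached,
    a crude stand-in for a learned goal generator."""
    rng = rng or np.random.default_rng()
    base = np.asarray(achieved_goals[rng.integers(len(achieved_goals))])
    return base + rng.normal(scale=0.1, size=base.shape)
```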
In practice, asking for help is often more efficient than searching the entire space to find an object whose location is unknown. We present a learning framework that enables an agent to actively seek help in such embodied visual navigation tasks, where feedback informs the agent of the goal's location. To emulate the realistic situation in which a teacher may not always be present, we propose a training curriculum in which feedback is not always available. We formulate an uncertainty measure of the goal's location and show empirically that, with this approach, the agent learns to leverage help robustly while remaining effective when no feedback is given.
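One simple way to realize an uncertainty measure of the goal's location is the entropy of a belief distribution over candidate locations, with a help request triggered only above a threshold and when a teacher is present. This is a hedged sketch: the belief representation, threshold value, and function names are assumptions, not the paper's formulation.

```python
import numpy as np

def goal_location_entropy(belief):
    """Entropy of the agent's belief over candidate goal locations.

    belief: 1-D array of probabilities over discretized map cells, summing to 1.
    """
    p = np.clip(np.asarray(belief, dtype=float), 1e-12, 1.0)
    return float(-(p * np.log(p)).sum())

def should_ask_for_help(belief, teacher_available, threshold=1.5):
    """Request feedback only when the goal location is uncertain and a teacher
    is actually present (feedback is not always available during training)."""
    return teacher_available and goal_location_entropy(belief) > threshold
```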
Robots operating in human spaces must be able to engage in natural language interaction with people, both understanding and executing instructions and using conversation to resolve ambiguity and recover from mistakes. To this end, we introduce TEACh, a dataset of over 3,000 human-human interactive dialogues for completing household tasks in simulation. A Commander with access to oracle information about a task communicates in natural language with a Follower. The Follower navigates through and interacts with the environment to complete tasks varying in complexity from "Make Coffee" to "Prepare Breakfast", asking questions and getting additional information from the Commander. We propose three benchmarks using TEACh to study embodied intelligence challenges, and we evaluate the abilities of initial models in dialogue understanding, language grounding, and task execution.
Language-guided Embodied AI benchmarks that require an agent to navigate an environment and manipulate objects typically allow only one-way communication: a human user gives the agent a natural language command, and the agent can only passively follow it. We introduce DialFRED, a dialogue-enabled embodied instruction following benchmark based on the ALFRED benchmark. DialFRED allows an agent to actively ask questions of the human user; the additional information in the user's responses is used by the agent to better complete its task. We release a human-annotated dataset with 53K task-relevant questions and answers, together with an oracle that can answer the questions. To tackle DialFRED, we propose a questioner-performer framework in which the questioner is pre-trained on the human-annotated data and fine-tuned with reinforcement learning. We make DialFRED publicly available and encourage researchers to propose and evaluate their solutions for building dialogue-enabled embodied agents.
Reinforcement Learning (RL) is a popular machine learning paradigm where intelligent agents interact with the environment to fulfill a long-term goal. Driven by the resurgence of deep learning, Deep RL (DRL) has witnessed great success over a wide spectrum of complex control tasks. Despite the encouraging results achieved, the deep neural network-based backbone is widely deemed a black box that prevents practitioners from trusting and employing trained agents in realistic scenarios where high security and reliability are essential. To alleviate this issue, a large volume of literature devoted to shedding light on the inner workings of intelligent agents has been proposed, by constructing intrinsic interpretability or post-hoc explainability. In this survey, we provide a comprehensive review of existing works on eXplainable RL (XRL) and introduce a new taxonomy in which prior works are clearly categorized into model-explaining, reward-explaining, state-explaining, and task-explaining methods. We also review and highlight RL methods that conversely leverage human knowledge to promote the learning efficiency and performance of agents, a class of methods often ignored in the XRL field. Some challenges and opportunities in XRL are discussed. This survey intends to provide a high-level summarization of XRL and to motivate future research on more effective XRL solutions. Corresponding open source codes are collected and categorized at https://github.com/Plankson/awesome-explainable-reinforcement-learning.
The Visual Indoor Navigation (VIN) task has drawn increasing attention from the data-driven machine learning community, especially with the recently reported successes of learning-based methods. Due to the innate complexity of this task, researchers have tried to tackle the problem from a variety of different angles, the full scope of which has not yet been captured in an overall report. This survey first summarizes representative work on learning-based approaches to the VIN task, then identifies and discusses the problems that impede VIN performance, motivating future research in these key areas that merit exploration by the community.
Transformer, originally devised for natural language processing, has also achieved significant success in computer vision. Thanks to its strong expressive power, researchers are investigating ways to deploy transformers in reinforcement learning (RL), and transformer-based models have demonstrated their potential on representative RL benchmarks. In this paper, we collect and dissect recent advances in transforming RL by transformer (transformer-based RL, or TRL), in order to explore its development trajectory and future trend. We group existing developments into two categories: architecture enhancement and trajectory optimization, and examine the main applications of TRL in robotic manipulation, text-based games, navigation, and autonomous driving. Architecture-enhancement methods consider how to apply the powerful transformer structure to RL problems under the traditional RL framework, modeling agents and environments much more precisely than deep RL methods, but they remain limited by the inherent defects of traditional RL algorithms, such as bootstrapping and the "deadly triad". Trajectory-optimization methods treat RL problems as sequence modeling and train a joint state-action model over entire trajectories under the behavior cloning framework, which makes it possible to extract policies from static datasets and to fully use the long-sequence modeling capability of the transformer. Given these advancements, extensions and challenges in TRL are reviewed, and proposals for future directions are discussed. We hope that this survey can provide a detailed introduction to TRL and motivate future research in this rapidly developing field.
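To make the trajectory-as-sequence view concrete, here is a rough sketch of how a trajectory can be flattened into tokens for return-conditioned sequence modeling (Decision-Transformer style). The function name, array layout, and undiscounted return-to-go convention are illustrative assumptions, not a specific surveyed implementation.

```python
import numpy as np

def trajectory_to_token_sequence(states, actions, rewards):
    """Interleave (return-to-go, state, action) triples for sequence-model training.

    states: (T, state_dim), actions: (T, act_dim), rewards: (T,).
    A causal sequence model is then trained with behavior cloning to predict
    each action from the tokens that precede it.
    """
    rewards = np.asarray(rewards, dtype=float)
    returns_to_go = np.flip(np.cumsum(np.flip(rewards)))  # sum of future rewards at each step
    sequence = []
    for rtg, s, a in zip(returns_to_go, states, actions):
        sequence.extend([np.atleast_1d(rtg), np.asarray(s), np.asarray(a)])
    return sequence
```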
Deep reinforcement learning is poised to revolutionise the field of AI and represents a step towards building autonomous systems with a higher level understanding of the visual world. Currently, deep learning is enabling reinforcement learning to scale to problems that were previously intractable, such as learning to play video games directly from pixels. Deep reinforcement learning algorithms are also applied to robotics, allowing control policies for robots to be learned directly from camera inputs in the real world. In this survey, we begin with an introduction to the general field of reinforcement learning, then progress to the main streams of value-based and policy-based methods. Our survey will cover central algorithms in deep reinforcement learning, including the deep Q-network, trust region policy optimisation, and asynchronous advantage actor-critic. In parallel, we highlight the unique advantages of deep neural networks, focusing on visual understanding via reinforcement learning. To conclude, we describe several current areas of research within the field.
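For readers unfamiliar with the deep Q-network mentioned above, the one-step temporal-difference target it regresses towards has a standard textbook form; the sketch below shows it in numpy. Array names and shapes are illustrative.

```python
import numpy as np

def dqn_targets(rewards, next_q_target, dones, gamma=0.99):
    """One-step TD targets y = r + gamma * max_a' Q_target(s', a') (zeroed at terminals).

    rewards:       (batch,) immediate rewards
    next_q_target: (batch, num_actions) target-network Q-values at the next states
    dones:         (batch,) 1.0 if the transition ended the episode, else 0.0
    """
    next_q_target = np.asarray(next_q_target)
    return rewards + gamma * (1.0 - dones) * next_q_target.max(axis=1)

def td_loss(q_selected, targets):
    """Mean squared TD error between Q(s, a) for the taken actions and the targets."""
    return float(np.mean((np.asarray(q_selected) - np.asarray(targets)) ** 2))
```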
Imitation learning techniques aim to mimic human behavior in a given task. An agent (a learning machine) is trained to perform a task from demonstrations by learning a mapping between observations and actions. The idea of teaching by imitation has been around for many years; however, the field has recently been gaining attention due to advances in computing and sensing as well as rising demand for intelligent applications. The paradigm of learning by imitation is gaining popularity because it facilitates teaching complex tasks with minimal expert knowledge of the tasks. Generic imitation learning methods could potentially reduce the problem of teaching a task to that of providing demonstrations, without the need for explicit programming or designing reward functions specific to the task. Modern sensors are able to collect and transmit high volumes of data rapidly, and processors with high computational power allow fast processing that maps the sensory data to actions in a timely manner. This opens the door for many potential AI applications that require real-time perception and reaction, such as humanoid robots, self-driving vehicles, human-computer interaction, and computer games, to name a few. However, specialized algorithms are needed to effectively and robustly learn models, as learning by imitation poses its own set of challenges. In this paper, we survey imitation learning methods and present design options in different steps of the learning process. We introduce a background and motivation for the field as well as highlight challenges specific to the imitation problem. Methods for designing and evaluating imitation learning tasks are categorized and reviewed. Special attention is given to learning methods in robotics and games, as these domains are the most popular in the literature and provide a wide array of problems and methodologies. We extensively discuss combining imitation learning approaches using different sources and methods, as well as incorporating other motion learning methods to enhance imitation. We also discuss the potential impact on industry, present major applications, and highlight current and future research directions.
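The observation-to-action mapping described above is, in its simplest form, a supervised regression onto demonstrated actions (behavior cloning). The sketch below is a minimal illustration under assumed simplifications: a linear policy, continuous actions, and a plain gradient step; it is not drawn from the surveyed methods.

```python
import numpy as np

def behavior_cloning_step(W, observations, demo_actions, lr=1e-3):
    """One gradient step of behavior cloning for a linear policy a = obs @ W.

    observations: (batch, obs_dim), demo_actions: (batch, act_dim), W: (obs_dim, act_dim).
    The step descends the squared error between the policy's actions and the
    demonstrator's actions, fitting the observation-to-action mapping.
    """
    observations = np.asarray(observations, dtype=float)
    demo_actions = np.asarray(demo_actions, dtype=float)
    residual = observations @ W - demo_actions
    grad = observations.T @ residual / len(observations)
    return W - lr * grad
```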
Ad hoc teamwork is the research problem of designing agents that can collaborate with new teammates without prior coordination. This survey makes a two-fold contribution: first, it provides a structured description of the different facets of the ad hoc teamwork problem; second, it discusses the progress that has been made in the area so far and identifies the immediate and long-term open problems that need to be addressed in ad hoc teamwork.
The reinforcement learning paradigm is a popular way to address problems that have only limited environmental feedback, rather than correctly labeled examples, as is common in other machine learning contexts. While significant progress has been made to improve learning in a single task, the idea of transfer learning has only recently been applied to reinforcement learning tasks. The core idea of transfer is that experience gained in learning to perform one task can help improve learning performance in a related, but different, task. In this article we present a framework that classifies transfer learning methods in terms of their capabilities and goals, and then use it to survey the existing literature, as well as to suggest future directions for transfer learning work.
Dialogue policy learning is a key component in task-oriented dialogue systems (TDS); it decides the next action of the system given the dialogue state at each turn. Reinforcement learning (RL) is commonly chosen to learn the dialogue policy, treating the user as the environment and the system as the agent. Many benchmark datasets and algorithms have been created to facilitate the development and evaluation of RL-based dialogue policies. In this paper, we survey recent advances and challenges in dialogue policy learning with RL. More specifically, we identify the major problems and summarize the corresponding solutions for RL-based dialogue policy learning. In addition, we provide a comprehensive survey of applying RL to dialogue policy learning by categorizing recent methods according to the basic elements of RL. We believe this survey can shed light on future research in dialogue management.
State-of-the-art multi-agent reinforcement learning (MARL) methods have provided promising solutions to a variety of complex problems. Yet these methods all assume that agents perform synchronized primitive-action executions, so they are not truly scalable to long-horizon real-world multi-agent/robot tasks that inherently require agents/robots to asynchronously reason about high-level action selection over varying time durations. The macro-action decentralized partially observable Markov decision process (MacDec-POMDP) is a general formalization for asynchronous decision making under uncertainty in fully cooperative multi-agent tasks. In this thesis, we first propose a set of value-based RL approaches for MacDec-POMDPs in which agents are allowed to perform asynchronous learning and decision making with macro-action value functions in three paradigms: decentralized learning and control, centralized learning and control, and centralized training for decentralized execution (CTDE). Building on this work, we formulate a set of macro-action-based policy gradient algorithms under the same three training paradigms, in which agents directly optimize their parameterized policies in an asynchronous manner. We evaluate our methods both in simulation and on real robots. Empirical results demonstrate the superiority of our approaches on large multi-agent problems and validate the effectiveness of our algorithms in learning high-quality, asynchronous solutions with macro-actions.
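The core idea of valuing temporally extended macro-actions can be illustrated with a single-agent, tabular SMDP-style Q-learning update, where the bootstrap term is discounted by the macro-action's duration. This is a deliberately simplified sketch: the multi-agent methods in the thesis operate on macro-observation-action histories and neural value functions, which this illustration omits, and all names are assumptions.

```python
def smdp_q_update(Q, state, macro, cum_reward, duration, next_state, macros,
                  alpha=0.1, gamma=0.95):
    """SMDP-style tabular update for a macro-action value function.

    cum_reward: discounted reward accumulated while the macro-action executed
    duration:   number of primitive steps the macro-action took, so the
                bootstrapped next-state value is discounted by gamma ** duration
    Q:          dict mapping (state, macro) -> estimated value
    """
    best_next = max(Q.get((next_state, m), 0.0) for m in macros)
    target = cum_reward + (gamma ** duration) * best_next
    old = Q.get((state, macro), 0.0)
    Q[(state, macro)] = old + alpha * (target - old)
    return Q
```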