In games, as in many other domains, design validation and testing is a huge challenge as systems grow in size and manual testing becomes infeasible. This paper proposes a new approach to automated game validation and testing. Our method leverages a data-driven imitation learning technique, which requires little effort and time and no knowledge of machine learning or programming, and which designers can use to efficiently train game-testing agents. We investigate the validity of our approach through a user study with industry experts. The results show that our approach is indeed a valid way of validating games, and that data-driven programming would be a useful aid for reducing effort and increasing the quality of modern game testing. The study also highlights several open challenges. With the help of the recent literature, we analyze the identified challenges and propose future research directions suited to supporting and maximizing the usefulness of our approach.
Imitation learning techniques aim to mimic human behavior in a given task. An agent (a learning machine) is trained to perform a task from demonstrations by learning a mapping between observations and actions. The idea of teaching by imitation has been around for many years; however, the field has been gaining attention recently due to advances in computing and sensing as well as rising demand for intelligent applications. The paradigm of learning by imitation is gaining popularity because it facilitates teaching complex tasks with minimal expert knowledge of the tasks. Generic imitation learning methods could potentially reduce the problem of teaching a task to that of providing demonstrations, without the need for explicit programming or designing reward functions specific to the task. Modern sensors are able to collect and transmit high volumes of data rapidly, and processors with high computational power allow fast processing that maps the sensory data to actions in a timely manner. This opens the door for many potential AI applications that require real-time perception and reaction, such as humanoid robots, self-driving vehicles, human-computer interaction, and computer games, to name a few. However, specialized algorithms are needed to learn models effectively and robustly, as learning by imitation poses its own set of challenges. In this paper, we survey imitation learning methods and present design options in different steps of the learning process. We introduce a background and motivation for the field as well as highlight challenges specific to the imitation problem. Methods for designing and evaluating imitation learning tasks are categorized and reviewed. Special attention is given to learning methods in robotics and games as these domains are the most popular in the literature and provide a wide array of problems and methodologies. We extensively discuss combining imitation learning approaches using different sources and methods, as well as incorporating other motion learning methods to enhance imitation. We also discuss the potential impact on industry, present major applications and highlight current and future research directions.
With the development of deep representation learning, the domain of reinforcement learning (RL) has become a powerful learning framework now capable of learning complex policies in high dimensional environments. This review summarises deep reinforcement learning (DRL) algorithms and provides a taxonomy of automated driving tasks where (D)RL methods have been employed, while addressing key computational challenges in real world deployment of autonomous driving agents. It also delineates adjacent domains such as behavior cloning, imitation learning, inverse reinforcement learning that are related but are not classical RL algorithms. The role of simulators in training agents, methods to validate, test and robustify existing solutions in RL are discussed.
Learning rational behaviors in open-world games like Minecraft remains challenging for reinforcement learning (RL) research due to partial observability, high-dimensional visual perception, and delayed rewards. To address this, we propose a sample-efficient hierarchical RL approach equipped with representation learning and imitation learning to deal with perception and exploration. Specifically, our method consists of two levels of hierarchy, where the high-level controller learns a policy to control over options and the low-level workers learn to solve each sub-task. To boost the learning of sub-tasks, we propose a combination of techniques including 1) action-aware representation learning, which captures the underlying relations between actions and representations, 2) discriminator-based self-imitation learning for efficient exploration, and 3) ensemble behavior cloning with consistency filtering for policy robustness. Extensive experiments show that JueWu-MC significantly improves sample efficiency and outperforms a set of baselines by a large margin. Notably, we won the championship of the NeurIPS MineRL 2021 research competition and achieved the highest performance score.
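As a rough illustration of the two-level hierarchy described above, the sketch below shows a high-level controller that picks an option (sub-task) and a low-level worker that emits primitive actions conditioned on that option. It is a minimal PyTorch sketch under assumed interfaces; class names, layer sizes, and the one-hot option encoding are illustrative, not the paper's actual code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class HighLevelController(nn.Module):
    """Picks an option (sub-task) from the current observation every K steps."""

    def __init__(self, obs_dim: int, num_options: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU(),
                                 nn.Linear(128, num_options))

    def forward(self, obs):
        return torch.distributions.Categorical(logits=self.net(obs))


class LowLevelWorker(nn.Module):
    """Produces primitive actions conditioned on the currently active option."""

    def __init__(self, obs_dim: int, num_options: int, num_actions: int):
        super().__init__()
        self.num_options = num_options
        self.net = nn.Sequential(nn.Linear(obs_dim + num_options, 128), nn.ReLU(),
                                 nn.Linear(128, num_actions))

    def forward(self, obs, option):
        # Condition the worker on a one-hot encoding of the chosen option.
        option_onehot = F.one_hot(option, self.num_options).float()
        return torch.distributions.Categorical(
            logits=self.net(torch.cat([obs, option_onehot], dim=-1)))
```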
We present a retrospective on the state of Embodied AI research. Our analysis focuses on 13 challenges presented at the Embodied AI Workshop at CVPR. These challenges are grouped into three themes: (1) visual navigation, (2) rearrangement, and (3) embodied vision-and-language. We discuss the dominant datasets within each theme, evaluation metrics for the challenges, and the performance of state-of-the-art models. We highlight commonalities between top approaches to the challenges and identify potential future directions for Embodied AI research.
Deep reinforcement learning is poised to revolutionise the field of AI and represents a step towards building autonomous systems with a higher level understanding of the visual world. Currently, deep learning is enabling reinforcement learning to scale to problems that were previously intractable, such as learning to play video games directly from pixels. Deep reinforcement learning algorithms are also applied to robotics, allowing control policies for robots to be learned directly from camera inputs in the real world. In this survey, we begin with an introduction to the general field of reinforcement learning, then progress to the main streams of value-based and policy-based methods. Our survey will cover central algorithms in deep reinforcement learning, including the deep Q-network, trust region policy optimisation, and asynchronous advantage actor-critic. In parallel, we highlight the unique advantages of deep neural networks, focusing on visual understanding via reinforcement learning. To conclude, we describe several current areas of research within the field.
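For concreteness, the deep Q-network update mentioned above can be summarised in a few lines: the online network is regressed towards a bootstrapped one-step target computed with a periodically synchronised target network. A minimal PyTorch sketch, with illustrative names:

```python
import torch
import torch.nn.functional as F


def dqn_loss(online_net, target_net, batch, gamma=0.99):
    obs, actions, rewards, next_obs, dones = batch
    # Q(s, a) for the actions that were actually taken.
    q = online_net(obs).gather(1, actions.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        # One-step target: r + gamma * max_a' Q_target(s', a'), zeroed at terminals.
        next_q = target_net(next_obs).max(dim=1).values
        target = rewards + gamma * (1.0 - dones) * next_q
    return F.smooth_l1_loss(q, target)
```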
Building autonomous machines that can explore open-ended environments, discover possible interactions, and autonomously build repertoires of skills is a general goal of artificial intelligence. Developmental approaches argue that this can only be achieved by autonomous and intrinsically motivated learning agents that can generate, select, and learn to solve their own problems. In recent years, we have seen a convergence of developmental approaches, especially developmental robotics, with deep reinforcement learning (RL) methods, forming the new field of developmental machine learning. Within this new field, we review here a set of methods in which deep RL algorithms are trained to solve the developmental-robotics problem of autonomously acquiring open-ended repertoires of skills. Intrinsically motivated goal-conditioned RL algorithms train agents to learn to represent, generate, and pursue their own goals. Self-generated goals require learning compact goal encodings as well as their associated goal-achievement functions, which leads to new challenges compared to traditional RL algorithms designed to solve pre-defined sets of goals using external reward signals. This paper proposes a typology of these methods at the intersection of deep RL and developmental approaches, surveys recent approaches, and discusses future avenues.
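To make the goal-conditioned setup concrete, the short sketch below shows an agent sampling its own goal and being rewarded by a goal-achievement function instead of an external reward. The distance-threshold achievement function and the goal-buffer sampling scheme are illustrative assumptions, not a specific method from the survey.

```python
import numpy as np


def sample_goal(goal_buffer, rng):
    """Self-generated goal, e.g. re-sampled from previously achieved outcomes."""
    return goal_buffer[rng.integers(len(goal_buffer))]


def goal_achievement_reward(achieved_goal, desired_goal, eps=0.05):
    """Sparse goal-achievement function: 1 if the outcome is close enough to the goal."""
    return float(np.linalg.norm(achieved_goal - desired_goal) < eps)
```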
The increasing complexity of gameplay mechanisms in modern video games is leading to the emergence of a wider range of ways to play games. The variety of possible play-styles needs to be anticipated by designers, through automated tests. Reinforcement Learning is a promising answer to the need for automating video game testing. To that effect, one needs to train an agent to play the game while ensuring that this agent will generate the same play-styles as the players, in order to give meaningful feedback to the designers. We present CARMI: a Configurable Agent with Relative Metrics as Input, an agent able to emulate the players' play-styles, even on previously unseen levels. Unlike current methods, it does not rely on having full trajectories, but only summary data. Moreover, it requires only a small amount of human data, and is thus compatible with the constraints of modern video game production. This novel agent could be used to investigate behaviors and balancing during the production of a video game with a realistic amount of training time.
This paper describes an AI agent that plays the popular first-person shooter (FPS) video game Counter-Strike: Global Offensive (CSGO) from pixel input. The agent, a deep neural network, matches the performance of the medium-difficulty built-in AI on the deathmatch game mode while adopting a human-like play style. Unlike much prior work in games, CSGO has no API, so algorithms must train and run in real time. This limits the quantity of on-policy data that can be generated, precluding many reinforcement learning algorithms. Our solution uses behavioral cloning: training on a large noisy dataset scraped from human play on online servers (4 million frames, comparable in size to ImageNet), and a smaller dataset of high-quality expert demonstrations. This scale is an order of magnitude larger than prior work on imitation learning in FPS games.
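The behavioral-cloning recipe above amounts to supervised learning on (frame, human action) pairs. A minimal PyTorch sketch follows; the CNN architecture, discrete action head, and function names are illustrative assumptions, not the paper's actual model.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class PixelPolicy(nn.Module):
    """Maps a game frame to logits over a discrete action set."""

    def __init__(self, num_actions: int):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2), nn.ReLU(),
            nn.Flatten(),
        )
        self.head = nn.LazyLinear(num_actions)

    def forward(self, frames):  # frames: (B, 3, H, W), scaled to [0, 1]
        return self.head(self.encoder(frames))


def bc_step(policy, optimizer, frames, actions):
    """One supervised update: maximise the likelihood of the demonstrated actions."""
    loss = F.cross_entropy(policy(frames), actions)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```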
Ad hoc teamwork is the research problem of designing agents that can collaborate with new teammates without prior coordination. This survey makes two contributions: first, it provides a structured description of the different facets of the ad hoc teamwork problem. Second, it discusses the progress made in the field so far and identifies immediate and long-term open problems that need to be addressed in ad hoc teamwork.
The combination of reinforcement learning (RL) with deep learning has led to a series of impressive feats, and many believe that (deep) RL provides a path towards generally capable agents. However, the success of RL agents is often highly sensitive to design choices during training, which may require tedious and error-prone manual tuning. This makes it challenging to apply RL to new problems and also limits its full potential. In many other areas of machine learning, AutoML has shown that such design choices can be automated, and it has also yielded promising initial results when applied to RL. However, automated reinforcement learning (AutoRL) involves not only standard applications of AutoML but also additional challenges unique to RL, which naturally give rise to a different set of methods. As such, AutoRL has emerged as an important area of research in RL, showing promise in a variety of applications from RNA design to playing games such as Go. Given the diversity of methods and environments considered in RL, much of the research has been conducted in distinct subfields, ranging from meta-learning to evolution. In this survey, we seek to unify the field of AutoRL, provide a common taxonomy, discuss each area in detail, and pose open problems of interest to researchers going forward.
Deep reinforcement learning (DRL) has attracted much attention in automated game testing. Early attempts rely on game internal information for game-space exploration and therefore require deep integration with the game, which is inconvenient for practical applications. In this work, we propose using only screenshots/pixels as input for automated game testing and build a general game-testing agent, Inspector, that can be easily applied to different games without deep integration. In addition to covering the whole game-testing space, our agent tries to take human-like behaviors to interact with key objects in a game, since some bugs usually occur in player-object interactions. Based purely on pixel inputs, Inspector comprises three key modules: a game-space explorer, a key-object detector, and a human-like object investigator. The game-space explorer aims to cover the whole game space by using a curiosity-based reward function with pixel inputs. The key-object detector aims to detect key objects in a game based on a small number of labeled screenshots. The human-like object investigator aims to mimic human behaviors for investigating key objects via imitation learning. We conduct experiments on two popular video games: a shooter game and an action RPG game. Experimental results demonstrate the effectiveness of Inspector in exploring game space, detecting key objects, and investigating objects. Moreover, Inspector successfully discovered two potential bugs in these two games. A demo video of Inspector is available at https://github.com/inspector-gametesting/inspector-gametesting.
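The curiosity-based reward that drives the game-space explorer can be sketched generically as the prediction error of a learned forward model in a feature space of the screenshot: poorly predicted transitions (novel screens) earn a larger exploration bonus. This is a generic ICM-style formulation under assumed encoder and forward-model interfaces, not Inspector's exact implementation.

```python
import torch


def curiosity_reward(encoder, forward_model, obs, action, next_obs):
    """Intrinsic bonus = forward-model prediction error in feature space."""
    with torch.no_grad():
        phi, phi_next = encoder(obs), encoder(next_obs)          # (B, D) features
        predicted_next = forward_model(torch.cat([phi, action], dim=-1))
        # Larger error on novel screens -> larger exploration bonus.
        return 0.5 * (predicted_next - phi_next).pow(2).sum(dim=-1)
```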
The reinforcement learning paradigm is a popular way to address problems that have only limited environmental feedback, rather than correctly labeled examples, as is common in other machine learning contexts. While significant progress has been made to improve learning in a single task, the idea of transfer learning has only recently been applied to reinforcement learning tasks. The core idea of transfer is that experience gained in learning to perform one task can help improve learning performance in a related, but different, task. In this article we present a framework that classifies transfer learning methods in terms of their capabilities and goals, and then use it to survey the existing literature, as well as to suggest future directions for transfer learning work.
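One of the simplest transfer mechanisms a survey of this kind covers is initialising the target-task network from source-task weights and then fine-tuning. A minimal sketch, assuming both policies are PyTorch modules; names are illustrative:

```python
import copy


def transfer_initialise(source_policy, target_policy):
    """Copy shape-matching parameters from the source-task policy, leave the rest as-is."""
    source_state = source_policy.state_dict()
    target_state = target_policy.state_dict()
    for name, tensor in source_state.items():
        if name in target_state and target_state[name].shape == tensor.shape:
            target_state[name] = copy.deepcopy(tensor)
    target_policy.load_state_dict(target_state)
    return target_policy
```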
Recent applications of autonomous agents and robots, such as self-driving cars, scenario-based trainers, exploration robots, and service robots, have brought attention to crucial trust-related challenges associated with the current generation of artificial intelligence (AI) systems. Despite their enormous success, neural-network-based approaches grounded in connectionist deep learning lack the ability to explain their decisions and actions to others. Without symbolic interpretation capabilities, they are black boxes, which renders their decisions or actions opaque and makes it difficult to trust them in safety-critical applications. The recent stance on the explainability of AI systems has witnessed several approaches to eXplainable Artificial Intelligence (XAI); however, most of the studies have focused on data-driven XAI systems applied in the computational sciences. Studies addressing the increasingly pervasive goal-driven agents and robots are still missing. This paper reviews approaches to explainable goal-driven intelligent agents and robots, focusing on techniques for explaining and communicating agents' perceptual functions (e.g., senses and vision) and cognitive reasoning (e.g., beliefs, desires, intentions, plans, and goals) with humans in the loop. The review highlights key strategies that emphasize transparency, understandability, and continual learning for explainability. Finally, the paper presents requirements for explainability and suggests a roadmap for achieving effective goal-driven explainable agents and robots.
Transformer, originally devised for natural language processing, has also achieved significant success in computer vision. Thanks to its super expressive power, researchers are investigating ways to deploy transformers in reinforcement learning (RL), and transformer-based models have manifested their potential in representative RL benchmarks. In this paper, we collect and dissect recent advances on transforming RL by transformer (transformer-based RL or TRL), in order to explore its development trajectory and future trend. We group existing developments in two categories: architecture enhancement and trajectory optimization, and examine the main applications of TRL in robotic manipulation, text-based games, navigation and autonomous driving. For architecture enhancement, these methods consider how to apply the powerful transformer structure to RL problems under the traditional RL framework, which model agents and environments much more precisely than deep RL methods, but they are still limited by the inherent defects of traditional RL algorithms, such as bootstrapping and the "deadly triad". For trajectory optimization, these methods treat RL problems as sequence modeling and train a joint state-action model over entire trajectories under the behavior cloning framework, which are able to extract policies from static datasets and fully use the long-sequence modeling capability of the transformer. Given these advancements, extensions and challenges in TRL are reviewed and proposals about future direction are discussed. We hope that this survey can provide a detailed introduction to TRL and motivate future research in this rapidly developing field.
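To illustrate the trajectory-optimization view, the sketch below serialises a trajectory as (return-to-go, state, action) tokens and trains a causal sequence model to predict each action from the preceding tokens, in the style of Decision Transformer. `model` is a stand-in for any autoregressive transformer over these tokens; the continuous-action MSE loss and all names are illustrative assumptions.

```python
import torch
import torch.nn.functional as F


def returns_to_go(rewards, gamma=1.0):
    """Suffix sums of rewards: the return-conditioning signal for each timestep."""
    rtg = torch.zeros_like(rewards)
    running = 0.0
    for t in reversed(range(rewards.shape[0])):
        running = rewards[t] + gamma * running
        rtg[t] = running
    return rtg


def sequence_modeling_loss(model, states, actions, rewards):
    rtg = returns_to_go(rewards)
    # A causal model sees (R_t, s_t, a_{t-1}, ...) and predicts a_t at every step.
    predicted_actions = model(rtg, states, actions)
    return F.mse_loss(predicted_actions, actions)
```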
The imitation learning research community has recently made significant progress towards getting artificial agents to imitate behaviors from video demonstrations alone. However, due to the high-dimensional nature of video observations, current state-of-the-art approaches developed for this problem exhibit high sample complexity. To address this issue, we introduce here a new algorithm called Visual Generative Adversarial Imitation from Observation using a State Observer (VGAIfO-SO). At its core, VGAIfO-SO seeks to address sample inefficiency using a novel, self-supervised state observer, which provides estimates of lower-dimensional proprioceptive state representations from high-dimensional images. Our experiments in several continuous-control environments show that VGAIfO-SO is more sample efficient than other IfO algorithms at learning from video-only demonstrations and can sometimes even achieve performance close to the Generative Adversarial Imitation from Observation (GAIfO) algorithm, which has privileged access to the demonstrator's proprioceptive state information.
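The state-observer idea above can be sketched as a small convolutional network that maps an image to an estimate of a low-dimensional proprioceptive state, which the downstream imitation learner then consumes instead of raw pixels. For brevity the sketch regresses against paired (image, state) data; the paper trains its observer self-supervised, and architecture and names are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class StateObserver(nn.Module):
    """Estimates a low-dimensional proprioceptive state from a high-dimensional image."""

    def __init__(self, state_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, state_dim),
        )

    def forward(self, images):  # images: (B, 3, H, W)
        return self.net(images)


def observer_loss(observer, images, proprio_states):
    return F.mse_loss(observer(images), proprio_states)
```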
Recent progress in artificial intelligence (AI) has renewed interest in building systems that learn and think like people. Many advances have come from using deep neural networks trained end-to-end in tasks such as object recognition, video games, and board games, achieving performance that equals or even beats humans in some respects. Despite their biological inspiration and performance achievements, these systems differ from human intelligence in crucial ways. We review progress in cognitive science suggesting that truly human-like learning and thinking machines will have to reach beyond current engineering trends in both what they learn, and how they learn it. Specifically, we argue that these machines should (a) build causal models of the world that support explanation and understanding, rather than merely solving pattern recognition problems; (b) ground learning in intuitive theories of physics and psychology, to support and enrich the knowledge that is learned; and (c) harness compositionality and learning-to-learn to rapidly acquire and generalize knowledge to new tasks and situations. We suggest concrete challenges and promising routes towards these goals that can combine the strengths of recent neural network advances with more structured cognitive models.
This paper addresses the problem of inverse reinforcement learning (IRL): inferring the reward function of an agent from observing its behavior. IRL can provide a generalizable and compact representation for apprenticeship learning and enable accurately inferring a person's preferences in order to assist them. However, effective IRL is challenging because many reward functions can be compatible with observed behavior. We focus on how prior reinforcement learning (RL) experience can be leveraged to make learning these preferences faster and more efficient. We propose the IRL algorithm BASIS (Behavior Acquisition through Successor-feature Intention inference from Samples), which leverages multi-task RL pre-training and successor features to allow an agent to build a strong basis for intentions that spans the space of possible goals in a given domain. When exposed to just a few expert demonstrations optimizing a novel goal, the agent uses its basis to quickly and effectively infer the reward function. Our experiments show that our approach is highly effective at inferring and optimizing demonstrated reward functions, accurately inferring reward functions from fewer than 100 trajectories.
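The successor-feature machinery can be summarised as follows: if the reward is assumed linear in features, r(s) = phi(s) . w, then pretrained successor features psi(s, a) give Q_w(s, a) = psi(s, a) . w for any candidate reward weights w, so inferring a reward from a few demonstrations reduces to finding a w under which the demonstrated actions score highest. The cross-entropy objective below is one illustrative choice for that search, not the paper's exact objective.

```python
import torch
import torch.nn.functional as F


def q_from_successor_features(psi, w):
    """psi: (B, num_actions, feat_dim) successor features; w: (feat_dim,) reward weights."""
    return psi @ w  # -> (B, num_actions) action values under reward weights w


def reward_inference_loss(psi, expert_actions, w):
    q = q_from_successor_features(psi, w)
    # Encourage the demonstrated action to have the highest value under w.
    return F.cross_entropy(q, expert_actions)
```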
Deep reinforcement learning (RL) has led to many recent and groundbreaking advances. However, these advances have often come at the cost of both increased scale in the underlying architectures being trained and increased complexity of the RL algorithms used to train them. These increases have in turn made it more difficult for researchers to rapidly prototype new ideas or reproduce published RL algorithms. To address these concerns, this work describes Acme, a framework for constructing novel RL algorithms that is specifically designed to enable agents built from simple, modular components that can be used across various scales of execution. While the primary goal of Acme is to provide a framework for algorithm development, a secondary goal is to provide simple reference implementations of important or state-of-the-art algorithms. These implementations serve both as a validation of our design decisions and as an important contribution to reproducibility in RL research. In this work we describe the major design decisions made within Acme and give further details on how its components can be used to implement various algorithms. Our experiments provide baselines for a number of common and state-of-the-art algorithms and show how these algorithms can be scaled up for much larger and more complex environments. This highlights one of the primary advantages of Acme, namely that it can be used to implement large, distributed RL algorithms that can run at very large scales while still maintaining the inherent readability of the implementation. This work presents a second version of the article, coinciding with an increase in modularity, additional emphasis on offline, imitation, and learning-from-demonstrations algorithms, as well as various new agents implemented as part of Acme.
We develop a simple framework to learn bio-inspired foraging policies using human data. We conduct an experiment where humans are virtually immersed in an open field foraging environment and are trained to collect the highest amount of rewards. A Markov Decision Process (MDP) framework is introduced to model the human decision dynamics. Then, Imitation Learning (IL) based on maximum likelihood estimation is used to train Neural Networks (NN) that map human decisions to observed states. The results show that passive imitation substantially underperforms humans. We further refine the human-inspired policies via Reinforcement Learning (RL) using the on-policy Proximal Policy Optimization (PPO) algorithm which shows better stability than other algorithms and can steadily improve the policies pretrained with IL. We show that the combination of IL and RL can match human results and that good performance strongly depends on combining the allocentric information with an egocentric representation of the environment.
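The refinement stage above relies on PPO's clipped surrogate objective applied to the imitation-pretrained policy. A minimal sketch of that objective, assuming advantages have already been estimated (e.g. with GAE) and that the policy returns an action distribution; names are illustrative:

```python
import torch


def ppo_clip_loss(policy, obs, actions, old_log_probs, advantages, clip_eps=0.2):
    dist = policy(obs)                            # e.g. a Categorical over actions
    log_probs = dist.log_prob(actions)
    ratio = torch.exp(log_probs - old_log_probs)  # pi_new(a|s) / pi_old(a|s)
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps) * advantages
    # Pessimistic (clipped) surrogate, negated so gradient descent improves the policy.
    return -torch.min(unclipped, clipped).mean()
```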