We study how to efficiently predict play personas from playtraces. A play persona can be computed by calculating the action agreement ratio between a player and generative models of playing behavior, so-called procedural personas. But this is computationally expensive and assumes that suitable procedural personas are readily available. We present two methods for estimating play personas: one uses standard supervised learning on aggregate measures of the game mechanics the player initiates, and the other is based on sequence learning. While both methods reach high accuracy when predicting play personas defined in agreement with procedural personas, they entirely fail to predict play styles as defined by the players themselves via a questionnaire. This interesting result highlights the value of defining play personas with computational methods.
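For concreteness, the agreement computation described above might look like the minimal sketch below; the `persona.act` interface and the playtrace format are our assumptions, not the paper's code.

```python
def action_agreement_ratio(playtrace, persona):
    """Fraction of trace states where the persona chooses the player's action."""
    agreements = sum(1 for state, action in playtrace
                     if persona.act(state) == action)
    return agreements / len(playtrace)

# The player's persona is then the procedural persona with the highest ratio,
# e.g. max(personas, key=lambda p: action_agreement_ratio(trace, p)).
```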
The increasing complexity of gameplay mechanics in modern video games is leading to the emergence of a wider range of ways to play games. This variety of possible play-styles needs to be anticipated by designers, for instance through automated tests. Reinforcement Learning is a promising answer to the need to automate video game testing. To that end, one needs to train an agent to play the game while ensuring that this agent generates the same play-styles as the players, in order to give meaningful feedback to the designers. We present CARMI: a Configurable Agent with Relative Metrics as Input, an agent able to emulate the players' play-styles even on previously unseen levels. Unlike current methods, it does not rely on having full trajectories, but only summary data. Moreover, it requires only a small amount of human data and is thus compatible with the constraints of modern video game production. This novel agent could be used to investigate behaviors and balancing during the production of a video game with a realistic amount of training time.
Modern video games are becoming richer and more complex in terms of game mechanics. This complexity allows for the emergence of a wide variety of ways to play the game across players. From the game designer's point of view, this means anticipating many different ways the game could be played. Machine Learning (ML) could help address this issue; more precisely, Reinforcement Learning is a promising answer to the need to automate video game testing. In this paper we present a video game environment that lets us define multiple play-styles. We then introduce CARI: a Configurable Agent with Reward as Input, an agent able to simulate a wide continuum of play-styles. It is not constrained to extreme archetypal behaviors like current methods using reward shaping, and it achieves this through a single training loop instead of the usual one loop per play-style. We compare this novel training approach with the more classic reward-shaping approach and conclude that CARI can also outperform the baseline on archetype generation. This novel agent could be used to investigate behaviors and balancing during the production of a video game with a realistic amount of training time.
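A minimal sketch of how a reward-as-input agent could be set up, as we read the abstract: style weights are drawn per episode, fed to the policy as part of its input, and used to mix the reward terms, so a single training loop covers a continuum of play-styles. The environment interface, the three reward terms, and the weight ranges are all assumptions.

```python
import numpy as np

def style_conditioned_rollout(env, policy, rng):
    # Sample a play-style for this episode, e.g. weights for
    # (progress, combat, caution); the three terms are illustrative.
    w = rng.uniform(0.0, 1.0, size=3)
    obs, done = env.reset(), False
    while not done:
        action = policy(np.concatenate([obs, w]))      # style is part of the input
        obs, reward_terms, done, _ = env.step(action)  # assumed per-term rewards
        reward = float(np.dot(w, reward_terms))        # style-weighted reward
        # ... store (obs, action, reward) for the usual RL update ...
```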
Real-Time Strategy (RTS) game unit generation is an unexplored area of Procedural Content Generation (PCG) research, which leaves the question of how to automatically generate interesting and balanced units unanswered. Creating unique and balanced units can be a difficult task when designing an RTS game, even for humans. Having an automated method of designing units could help developers speed up the creation process as well as find new ideas. In this work we propose a method of generating balanced and useful RTS units. We draw on Search-Based PCG and a fitness function based on Monte Carlo Tree Search (MCTS). We present ten units generated by our system designed to be used in the game microRTS, as well as results demonstrating that these units are unique, useful, and balanced.
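As a rough illustration of an MCTS-based balance fitness (our sketch, not necessarily the authors' exact function), one can score a candidate unit by how close MCTS-vs-MCTS matches that include it come to an even win rate; `play_match` is a hypothetical microRTS match runner.

```python
def balance_fitness(candidate_unit, play_match, n_games=20):
    wins_with_unit = 0
    for g in range(n_games):
        # Alternate which player gets access to the candidate unit.
        p1_unit = candidate_unit if g % 2 == 0 else None
        p2_unit = None if g % 2 == 0 else candidate_unit
        winner = play_match(p1_unit, p2_unit)  # 0 or 1, decided by MCTS agents
        if winner == (0 if g % 2 == 0 else 1):
            wins_with_unit += 1
    win_rate = wins_with_unit / n_games
    # A balanced unit should not make its side overwhelmingly strong or weak,
    # so fitness peaks at a 50% win rate.
    return 1.0 - 2.0 * abs(win_rate - 0.5)
```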
This paper introduces a paradigm shift by viewing the task of affect modeling as a reinforcement learning (RL) process. According to the proposed paradigm, RL agents learn a policy (i.e., an affective interaction) by attempting to maximize a set of rewards (i.e., behavioral and affective patterns) through their experience with the environment (i.e., the context). Our hypothesis is that RL is an effective paradigm for interweaving affect elicitation and manifestation with behavioral and affective demonstrations. Importantly, our second hypothesis, built on Damasio's somatic marker hypothesis, is that emotion can be a facilitator of decision-making. We test our hypotheses in a racing game by training Go-Blend agents to model human demonstrations of arousal and behavior; Go-Blend is a modified version of the Go-Explore algorithm, which recently showcased supreme performance in hard exploration tasks. We first vary the arousal-based reward function and observe agents that can effectively display a palette of affective and behavioral patterns according to the specified reward. We then use arousal-based state-selection mechanisms to bias the strategies that Go-Blend explores. Our findings suggest that Go-Blend is not only an effective affect-modeling paradigm but, more importantly, that affect-driven RL improves exploration and yields higher-performing agents, validating Damasio's hypothesis in the domain of games.
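The kind of blended behavior/affect reward the abstract describes might look like the sketch below; the mixing weight and the arousal-matching term are our assumptions rather than the paper's exact formulation.

```python
def blended_reward(game_score, predicted_arousal, target_arousal, lam=0.5):
    # Reward matching the demonstrated arousal trace while still playing well;
    # lam trades off behavior (score) against affect (arousal match).
    arousal_match = 1.0 - abs(predicted_arousal - target_arousal)
    return lam * game_score + (1.0 - lam) * arousal_match
```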
This paper introduces a fully automatic method of mechanic illumination for general video game level generation. Using the Constrained MAP-Elites algorithm and the GVG-AI framework, the system generates the simplest tile-based levels that contain a specific set of game mechanics and satisfy playability constraints. We apply this method to illuminate the mechanic space of four different GVG-AI games: Zelda, Solarfox, Plants, and RealPortals.
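Schematically, a Constrained MAP-Elites loop of this kind keeps one level per combination of triggered mechanics, subject to a playability constraint. In this sketch all helpers (`random_level`, `mutate`, `is_playable`, `mechanics_fired`, `simplicity`) are hypothetical stand-ins for the GVG-AI-specific machinery.

```python
import random

def constrained_map_elites(iterations=10_000):
    archive = {}  # mechanic combination (tuple of bools) -> (level, fitness)
    for _ in range(iterations):
        if archive:
            parent, _ = random.choice(list(archive.values()))
            level = mutate(parent)
        else:
            level = random_level()
        if not is_playable(level):            # feasibility constraint
            continue
        cell = tuple(mechanics_fired(level))  # behavior descriptor
        fitness = simplicity(level)           # prefer the simplest level per cell
        if cell not in archive or fitness > archive[cell][1]:
            archive[cell] = (level, fitness)
    return archive
```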
The highest-grossing media franchise of all time, with over $90 billion in total revenue, is Pokemon. The video games belong to the class of Japanese Role Playing Games (J-RPG). Developing a powerful AI agent for these games is very hard because they present big challenges to MinMax, Monte Carlo Tree Search and statistical Machine Learning, as they are vastly different from the games well explored in the AI literature. An AI agent for one of these games means significant progress in AI agents for the entire class. Further, the key principles of such work can hopefully inspire approaches to several domains that require excellent teamwork under conditions of extreme uncertainty, including managing a team of doctors, robots or employees in an ever-changing environment, like a pandemic-stricken region or a war zone. In this paper we first explain the mechanics of the game and we perform a game analysis. We continue by proposing unique AI algorithms based on our understanding that the two biggest challenges in the game are keeping a balanced team and dealing with three sources of uncertainty. Later on, we describe why evaluating the performance of such agents is challenging and we present the results of our approach. Our AI agent performed significantly better than all previous attempts and peaked at 33rd place in the world, in one of the most popular battle formats, while running on only 4 single-socket servers.
Recent progress in artificial intelligence (AI) has renewed interest in building systems that learn and think like people. Many advances have come from using deep neural networks trained end-to-end in tasks such as object recognition, video games, and board games, achieving performance that equals or even beats humans in some respects. Despite their biological inspiration and performance achievements, these systems differ from human intelligence in crucial ways. We review progress in cognitive science suggesting that truly human-like learning and thinking machines will have to reach beyond current engineering trends in both what they learn, and how they learn it. Specifically, we argue that these machines should (a) build causal models of the world that support explanation and understanding, rather than merely solving pattern recognition problems; (b) ground learning in intuitive theories of physics and psychology, to support and enrich the knowledge that is learned; and (c) harness compositionality and learning-to-learn to rapidly acquire and generalize knowledge to new tasks and situations. We suggest concrete challenges and promising routes towards these goals that can combine the strengths of recent neural network advances with more structured cognitive models.
Automatically adapting game content to the player opens new doors for game development. In this paper, we present an architecture using persona agents and experience metrics that enables procedurally generated levels to be tailored to a specific player persona. Using our game "Grave Rave", we demonstrate that this approach successfully adapts to three different rule-based persona agents across three different experience metrics. Furthermore, the adaptation is shown to be specific in nature, meaning that the levels are persona-aware rather than merely optimized in general with respect to the selected metric.
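A condensed sketch of the adaptation loop we read into this architecture: search over generated levels for one on which the target persona agent scores highest on the chosen experience metric. All names below are hypothetical.

```python
def adapt_level(generate_level, persona_agent, experience_metric, budget=100):
    best_level, best_score = None, float("-inf")
    for _ in range(budget):
        level = generate_level()
        trace = persona_agent.play(level)  # rule-based persona plays the level
        score = experience_metric(trace)   # e.g. a challenge or pacing metric
        if score > best_score:
            best_level, best_score = level, score
    return best_level
```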
Recent advances in reinforcement learning with social agents have allowed such models to achieve human-level performance on specific interaction tasks. However, in most interactive scenarios, performance in isolation is not the end goal; rather, the social impact of these agents when interacting with humans is important and largely unexplored. In this regard, this work proposes a novel reinforcement learning mechanism based on the social impact of competitive behavior. Our proposed model aggregates objective and social perception mechanisms to derive a competition score that is used to modulate the learning of artificial agents. To investigate our proposed model, we designed an interactive game scenario using the Chef's Hat card game and studied how competition modulation changes the agents' playing style, and how this impacts the experience of the human players in the game. Our results show that, compared with vanilla agents, humans can detect specific social characteristics when playing against competition-modulated agents, which directly affects the human players' performance in subsequent games. We conclude our work by discussing how the different social and objective features that compose the artificial competition score contribute to our results.
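As a toy illustration of deriving a competition score from objective and social terms and using it to modulate learning (the weighting below is our assumption, not the paper's formula):

```python
def competition_score(objective_term, social_perception_term, alpha=0.5):
    # Blend task performance with perceived social signals from the opponent.
    return alpha * objective_term + (1.0 - alpha) * social_perception_term

# The score can then scale the learning signal, e.g.:
# modulated_reward = competition_score(obj, soc) * base_reward
```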
Interaction and cooperation with humans are overarching aspirations of artificial intelligence (AI) research. Recent studies demonstrate that AI agents trained with deep reinforcement learning are capable of collaborating with humans. These studies primarily evaluate human compatibility through "objective" metrics such as task performance, obscuring potential variation in the levels of trust and subjective preference that different agents garner. To better understand the factors shaping subjective preferences in human-agent cooperation, we train deep reinforcement learning agents in Coins, a two-player social dilemma. We recruit participants for a human-agent cooperation study and measure their impressions of the agents they encounter. Participants' perceptions of warmth and competence predict their stated preferences for different agents, above and beyond objective performance metrics. Drawing inspiration from social science and biology research, we subsequently implement a new "partner choice" framework to elicit revealed preferences: after playing an episode with an agent, participants are asked whether they would like to play the next round with the same agent or to play alone. As with stated preferences, social perception better predicts participants' revealed preferences than does objective performance. Given these results, we recommend human-agent interaction researchers routinely incorporate the measurement of social perception and subjective preferences into their studies.
AI systems that can capture human-like behavior are becoming increasingly useful in situations where humans may want to learn from these systems, collaborate with them, or engage with them as partners. In order to develop human-oriented AI systems, the problem of predicting human actions, as opposed to predicting optimal actions, has received considerable attention. Existing work has focused on capturing human behavior in an aggregate sense, which potentially limits the benefit any particular individual could gain from interacting with these systems. We extend this line of work by developing highly accurate predictive models of individual human behavior in chess. Chess is a rich domain for exploring human-AI interaction because it combines a unique set of properties: AI systems achieved superhuman performance many years ago, yet humans still interact with them closely, both as opponents and as preparation tools, and there is an enormous corpus of recorded data on individual players' games. Starting with Maia, a version of AlphaZero trained on a human population, we demonstrate that we can significantly improve the prediction accuracy of a particular player's moves by applying a series of fine-tuning methods. Furthermore, our personalized models can be used to perform stylometry, predicting who played a given set of moves, indicating that they capture human decision-making at an individual level. Our work demonstrates a way to bring AI systems into better alignment with the behavior of individual people, which could lead to substantial improvements in human-AI interaction.
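A generic sketch of such a fine-tuning step, assuming `model` is a Maia-style move-prediction network and `loader` yields (position, move) batches from one player's games; the hyperparameters are illustrative only.

```python
import torch
import torch.nn.functional as F

# A small learning rate keeps the fine-tuned model close to the population model.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)
for positions, moves in loader:
    logits = model(positions)              # move logits per position
    loss = F.cross_entropy(logits, moves)  # predict the player's actual move
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```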
In recent years, there has been growing interest in experience-driven procedural level generation. Various metrics have been formulated to model player experience and help generate personalized levels. In this work, we question whether experience metrics can adapt to agents with different personas. We first review existing metrics for evaluating game levels. Then, focusing on platformer games, we design a framework that integrates various agents and evaluation metrics. An experimental study on Super Mario Bros. shows that the same evaluation metric, paired with agents of different personas, can generate levels tailored to a specific persona. This implies that, at least for simple games, game-playing agents embodying specific player archetypes may be all we need as level testers to generate levels of diverse behavioral engagement.
Multi-agent artificial intelligence research promises a path to develop intelligent technologies that are more human-like and more human-compatible than those produced by "solipsistic" approaches, which do not consider interactions between agents. Melting Pot is a research tool developed to facilitate work on multi-agent artificial intelligence, and provides an evaluation protocol that measures generalization to novel social partners in a set of canonical test scenarios. Each scenario pairs a physical environment (a "substrate") with a reference set of co-players (a "background population"), to create a social situation with substantial interdependence between the individuals involved. For instance, some scenarios were inspired by institutional-economics-based accounts of natural resource management and public-good-provision dilemmas. Others were inspired by considerations from evolutionary biology, game theory, and artificial life. Melting Pot aims to cover a maximally diverse set of interdependencies and incentives. It includes the commonly studied extreme cases of perfectly competitive (zero-sum) motivations and perfectly cooperative (shared-reward) motivations, but does not stop with them. As in real life, a clear majority of scenarios in Melting Pot have mixed incentives. They are neither purely competitive nor purely cooperative, and thus demand that successful agents be able to navigate the resulting ambiguity. Here we describe Melting Pot 2.0, which revises and expands on Melting Pot. We also introduce support for scenarios with asymmetric roles, and explain how to integrate them into the evaluation protocol. This report also contains: (1) details of all substrates and scenarios; (2) a complete description of all baseline algorithms and results. Our intention is for it to serve as a reference for researchers using Melting Pot 2.0.
In games, as in many other domains, design validation and testing is a huge challenge as systems grow in size and manual testing becomes infeasible. This paper proposes a new approach to automated game validation and testing. Our method leverages a data-driven imitation learning technique, which requires little effort and time and no knowledge of machine learning or programming, and which designers can use to efficiently train game-testing agents. We investigate the validity of our approach through a user study with industry experts. The findings suggest that our method is indeed an effective way to validate games, and that data-driven programming would be a useful aid for reducing the effort and increasing the quality of modern playtesting. The study also highlights several open challenges. With the help of the most recent literature, we analyze the identified challenges and propose future research directions suitable for supporting and maximizing the usefulness of our approach.
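The data-driven core of such a tool can be as simple as cloning the designer's demonstrations with an off-the-shelf classifier, as in the hypothetical sketch below; the feature encoding and the demonstration arrays are assumptions.

```python
from sklearn.ensemble import RandomForestClassifier

# demo_states: per-step game-state features; demo_actions: the designer's actions.
policy = RandomForestClassifier(n_estimators=100)
policy.fit(demo_states, demo_actions)  # learn the designer's play-style

def test_agent_action(state_features):
    # The cloned policy drives an automated game-testing agent.
    return policy.predict([state_features])[0]
```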
In recent years, Game AI research has made immense breakthroughs, particularly in Reinforcement Learning (RL). Despite these successes, the underlying games are usually implemented with their own preset environments and game mechanics, making it difficult for researchers to create different game environments. However, testing RL agents against a variety of game environments is critical for recent efforts to study generalization in RL and to avoid the overfitting that may otherwise occur. In this paper, we present Griddly as a new platform for Game AI research that provides a unique combination of highly configurable games, different observer types, and an efficient C++ core engine. Additionally, we present a series of baseline experiments to study the effect of different observation configurations and the generalization ability of RL agents.
Hanabi is a cooperative game that brings the problem of modeling other players to the forefront. In this game, a coordinated group of players can leverage pre-established conventions to great effect, but playing in an ad-hoc setting requires agents to adapt to their partners' strategies with no previous coordination. Evaluating agents in this setting requires a diverse population of potential partners, but so far the behavioral diversity of agents has not been considered in a systematic way. This paper proposes Quality Diversity algorithms as a promising class of algorithms for generating diverse populations for this purpose, and uses MAP-Elites to generate a population of diverse Hanabi agents. We also postulate that agents can benefit from a diverse population during training, and implement a simple "meta-strategy" that adapts to a partner's perceived behavioral niche. We show that this meta-strategy can work better than generalist strategies, even outside the population it was trained with, if its partner's behavioral niche can be correctly inferred; in practice, however, a partner's behavior depends on and interferes with the meta-agent's own behavior, suggesting the characterization of another agent's behavior during gameplay as an avenue for future research.
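The "meta-strategy" as described reduces to inferring the partner's behavioral niche and switching to the archive policy matched to it; `infer_niche` and the MAP-Elites `archive` below are assumed interfaces.

```python
def meta_policy(observation, partner_history, archive, infer_niche):
    niche = infer_niche(partner_history)    # e.g. nearest MAP-Elites cell
    return archive[niche].act(observation)  # play the matching elite's policy
```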
To assist game developers in crafting game NPCs, we present EvolvingBehavior, a novel tool for genetic programming to evolve behavior trees in Unreal Engine 4. In an initial evaluation, we compare evolved behavior against hand-crafted trees designed by our researchers and against randomly grown trees in a 3D survival game. We find that, in this context, EvolvingBehavior is capable of producing behavior that approaches the designer's goals. Finally, we discuss implications and future avenues of exploration for co-creative game AI design tools, as well as the challenges and difficulties of behavior tree evolution.
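A toy illustration of the kind of genetic-programming step such a tool performs on behavior trees (our simplification; EvolvingBehavior itself operates on Unreal Engine 4 behavior tree assets). The `tree` and `node_library` interfaces are hypothetical.

```python
import random

def mutate_tree(tree, node_library, rng=random):
    # Pick a random node and either swap in a fresh subtree or replace the
    # node with a compatible one from the library.
    node = rng.choice(tree.all_nodes())
    if node.children and rng.random() < 0.5:
        idx = rng.randrange(len(node.children))
        node.children[idx] = node_library.random_subtree()
    else:
        node.replace_with(node_library.random_node_like(node))
    return tree
```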
Collaborating with humans requires rapidly adapting to their individual strengths, weaknesses, and preferences. Unfortunately, most standard multi-agent reinforcement learning techniques, such as self-play (SP) or population play (PP), produce agents that overfit to their training partners and do not generalize well to humans. Alternatively, researchers can collect human data with behavioral cloning, train a human model, and then use that model to train "human-aware" agents ("behavioral cloning play", or BCP). While this approach can improve agents' generalization to new human co-players, it involves the onerous and expensive step of first collecting large amounts of human data. Here, we study the problem of how to train agents that collaborate well with human partners without using human data. We argue that the crux of the problem is to produce a diverse set of training partners. Drawing inspiration from successful multi-agent approaches in competitive domains, we find that a surprisingly simple approach is highly effective. We train our agent partner as the best response to a population of self-play agents and their past checkpoints taken throughout training, a method we call Fictitious Co-Play (FCP). Our experiments focus on a two-player collaborative cooking simulator that has recently been proposed as a challenge problem for coordination with humans. We find that FCP agents score significantly higher than SP, PP, and BCP agents when paired with novel agent and human partners. Furthermore, humans also report a strong subjective preference for partnering with FCP agents over all baselines.
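Schematically, FCP as summarized above amounts to two stages; `train_self_play` and `train_best_response` below stand in for the actual RL training code.

```python
def fictitious_co_play(n_partners=32):
    partners = []
    for seed in range(n_partners):
        final_agent, checkpoints = train_self_play(seed)
        partners.append(final_agent)
        partners.extend(checkpoints)  # past checkpoints give varied skill levels
    # Stage two: train the FCP agent as the best response to the population.
    return train_best_response(partners)
```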
Beyond solo games, board games require at least one other player. Therefore, we create artificial intelligence (AI) agents to play against us when an opponent is missing. These AI agents are created in a number of ways, but one challenge with such agents is that they can have superior ability compared to us. In this work, we describe how to create weaker AI agents that play board games. Using Tic-Tac-Toe, Nine Men's Morris, and Mancala, our technique uses a reinforcement learning model in which an agent learns these games with the Q-learning algorithm. We show how these agents can learn to play the board games perfectly, and we then describe our approach to making weaker versions of these agents. Finally, we provide a methodology for comparing AI agents.
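One simple way to weaken a trained Q-learning agent, in the spirit of the approach above (the exact weakening scheme here is our assumption): sample moves from a softmax over Q-values, where higher temperatures yield weaker play and low temperatures approach the perfect policy.

```python
import numpy as np

def weakened_action(q_table, state, legal_actions, temperature, rng):
    # rng is a numpy Generator, e.g. np.random.default_rng().
    q = np.array([q_table[(state, a)] for a in legal_actions])
    probs = np.exp((q - q.max()) / temperature)  # numerically stable softmax
    probs /= probs.sum()
    return legal_actions[rng.choice(len(legal_actions), p=probs)]
```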