Player modelling is the field of study associated with understanding players. One pursuit in this field is affect prediction: the ability to predict how a game will make a player feel. We present novel improvements to affect prediction by using a deep convolutional neural network (CNN), trained on game event logs in tandem with localized level structure information, to predict player experience. We test our approach on levels based on Super Mario Bros. (Infinite Mario Bros.) and Super Mario Bros.: The Lost Levels (Gwario), as well as original Super Mario Bros. levels. We outperform prior work, and demonstrate the utility of training on player logs even when they are unavailable at test time, for cross-domain player modelling.
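To make the setup concrete, here is a minimal sketch of a model in the spirit the abstract describes: a small convolutional stack over a local level-structure window whose pooled features are concatenated with aggregated event-log counts. The layer sizes, event features, and affect labels are illustrative assumptions, not the authors' published architecture.

```python
import torch
import torch.nn as nn

class AffectCNN(nn.Module):
    def __init__(self, tile_channels=16, n_events=8, n_affects=3):
        super().__init__()
        # Convolutions over a one-hot encoded local level window.
        self.conv = nn.Sequential(
            nn.Conv2d(tile_channels, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # Aggregated event-log counts (e.g. jumps, deaths, coins) are
        # concatenated with the pooled level features.
        self.head = nn.Sequential(
            nn.Linear(64 + n_events, 32), nn.ReLU(),
            nn.Linear(32, n_affects),  # e.g. fun, frustration, challenge
        )

    def forward(self, level_window, event_counts):
        x = self.conv(level_window).flatten(1)
        return self.head(torch.cat([x, event_counts], dim=1))
```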
The advent of machine learning models that surpass human decision-making ability in complex domains has initiated a movement towards building AI systems that interact with humans. Many building blocks are essential for this activity, a central one being the algorithmic characterization of human behavior. While much of the existing work focuses on aggregate human behavior, an important long-range goal is to develop behavioral models that specialize to individual people and can differentiate among them. To formalize this process, we study the problem of behavioral stylometry, in which the task is to identify a decision-maker from their decisions alone. We present a transformer-based approach to behavioral stylometry in the context of chess, where one attempts to identify the player who played a given set of games. Our method operates in a few-shot classification framework, and can correctly identify a player from among thousands of candidates with 98% accuracy given only 100 labeled games. Even when trained on amateur games, our method generalizes to out-of-distribution samples of Grandmaster players, despite the vast differences between amateur and world-class players. Finally, we consider more broadly what our resulting embeddings reveal about human style in chess, as well as the potential ethical implications of powerful methods for identifying individuals from behavioral data.
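A minimal sketch of how few-shot identification of this kind can work, assuming a trained game encoder: embed each player's labeled games, average them into a per-player centroid, and match query games to the nearest centroid. `embed_game` is a hypothetical stand-in for the paper's transformer encoder.

```python
import numpy as np

def identify_player(query_games, reference_sets, embed_game):
    """reference_sets: {player_id: [game, ...]} of labeled games."""
    # One centroid per candidate player from their reference games.
    centroids = {
        pid: np.mean([embed_game(g) for g in games], axis=0)
        for pid, games in reference_sets.items()
    }
    q = np.mean([embed_game(g) for g in query_games], axis=0)

    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    # Return the candidate whose centroid is most similar to the query.
    return max(centroids, key=lambda pid: cos(q, centroids[pid]))
```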
AI systems that can capture human-like behavior are becoming increasingly useful in situations where humans may want to learn from these systems, collaborate with them, or engage with them as partners. In order to develop human-oriented AI systems, the problem of predicting human actions, as opposed to predicting optimal actions, has received considerable attention. Existing work has focused on capturing human behavior in an aggregate sense, which may limit the benefit any particular individual can gain from interacting with these systems. We extend this line of work by developing highly accurate predictive models of human behavior in chess. Chess is a rich domain for exploring human-AI interaction because it combines a unique set of properties: AI systems achieved superhuman performance years ago, yet humans still interact with them closely, both as opponents and as preparation tools, and there is a vast corpus of recorded data on individual players' games. Starting from Maia, a version of AlphaZero trained on a human population, we demonstrate that we can significantly improve the prediction accuracy of a particular player's moves by applying a series of fine-tuning methods. Furthermore, our personalized models can be used to perform stylometry (predicting who made a given set of moves), indicating that they capture human decision-making at an individual level. Our work demonstrates a way to bring AI systems into better alignment with the behavior of individuals, which could lead to substantial improvements in human-AI interaction.
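The sketch below illustrates generic fine-tuning of a pretrained move-prediction network on one player's games. Freezing all but a final block and using a small learning rate are common defaults assumed here, not the paper's exact recipe; `base_model.head` is a hypothetical attribute name.

```python
import torch
import torch.nn.functional as F

def personalize(base_model, player_games, lr=1e-4, epochs=3):
    # Freeze the population-level backbone; adapt only the final block.
    for p in base_model.parameters():
        p.requires_grad = False
    for p in base_model.head.parameters():   # assumed final block name
        p.requires_grad = True
    opt = torch.optim.Adam(
        [p for p in base_model.parameters() if p.requires_grad], lr=lr)
    for _ in range(epochs):
        for position, move in player_games:  # (tensor, move-index) pairs
            loss = F.cross_entropy(base_model(position), move)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return base_model
```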
Current learning machines have successfully solved hard application problems, reaching high accuracy and displaying seemingly "intelligent" behavior. Here we apply recent techniques for explaining decisions of state-of-the-art learning machines and analyze various tasks from computer vision and arcade games. This showcases a spectrum of problem-solving behaviors ranging from naive and short-sighted to well-informed and strategic. We observe that standard performance evaluation metrics can be oblivious to distinguishing these diverse problem-solving behaviors. Furthermore, we propose our semi-automated Spectral Relevance Analysis that provides a practically effective way of characterizing and validating the behavior of nonlinear learning machines. This helps to assess whether a learned model indeed delivers reliably for the problem that it was conceived for. Furthermore, our work intends to add a voice of caution to the ongoing excitement about machine intelligence and pledges to evaluate and judge some of these recent successes in a more nuanced manner.
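A rough sketch of the Spectral Relevance Analysis workflow, under the assumption that per-sample relevance heatmaps (e.g., from layer-wise relevance propagation) have already been computed: cluster the heatmaps spectrally, then inspect unusual clusters for "Clever Hans" strategies.

```python
from sklearn.cluster import SpectralClustering

def spray(heatmaps, n_clusters=5):
    """heatmaps: numpy array of shape (n_samples, H, W)."""
    # Flatten each relevance map into a feature vector.
    flat = heatmaps.reshape(len(heatmaps), -1)
    labels = SpectralClustering(
        n_clusters=n_clusters, affinity="nearest_neighbors"
    ).fit_predict(flat)
    # Small or odd clusters flag candidate spurious strategies
    # for manual inspection.
    return labels
```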
AlphaZero, Leela Chess Zero and Stockfish NNUE revolutionized computer chess. This book gives a complete introduction to the technical inner workings of such engines. The book is split into four main chapters, excluding chapter 1 (introduction) and chapter 6 (conclusion): Chapter 2 introduces neural networks and covers all the basic building blocks used to construct deep networks such as the one used by AlphaZero. Contents include the perceptron, backpropagation and gradient descent, classification, regression, multilayer perceptrons, vectorization techniques, convolutional networks, squeeze-and-excitation networks, fully connected networks, batch normalization and rectified linear units, residual layers, and overfitting and underfitting. Chapter 3 introduces the classical search techniques used for chess engines as well as those used by AlphaZero. Contents include minimax, alpha-beta search, and Monte Carlo tree search. Chapter 4 shows how modern chess engines are designed. Aside from the groundbreaking AlphaGo, AlphaGo Zero and AlphaZero, we cover Leela Chess Zero, Fat Fritz, Fat Fritz 2, efficiently updatable neural networks (NNUE), and Maia. Chapter 5 is about implementing a miniature AlphaZero, using Hexapawn, a minimalistic version of chess, as a worked example. Hexapawn is solved by minimax search, which generates training positions for supervised learning. Then, as a comparison, an AlphaZero-like training loop is implemented, in which training is done by self-play combined with reinforcement learning. Finally, the AlphaZero-like training and the supervised training are compared.
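As a flavor of the chapter 3 material, here is a generic negamax formulation of alpha-beta search over an abstract game interface; `evaluate`, `moves`, and `apply` are placeholders for game-specific code.

```python
def alphabeta(state, depth, alpha, beta, evaluate, moves, apply):
    # Leaf: fixed depth reached or no legal moves remain.
    if depth == 0 or not moves(state):
        return evaluate(state)
    best = float("-inf")
    for m in moves(state):
        # Negamax: the opponent's best score is our worst.
        score = -alphabeta(apply(state, m), depth - 1,
                           -beta, -alpha, evaluate, moves, apply)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:
            break  # cutoff: remaining moves cannot improve the result
    return best
```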
In fighting games, individual players of the same skill level often exhibit distinct strategies from one another through their gameplay. Despite this, the majority of AI agents for fighting games have only a single strategy for each "level" of difficulty. To make AI opponents more human-like, we'd ideally like to see multiple different strategies at each level of difficulty, a concept we refer to as "multidimensional" difficulty. In this paper, we introduce a diversity-based deep reinforcement learning approach for generating a set of agents of similar difficulty that utilize diverse strategies. We find this approach outperforms a baseline trained with specialized, human-authored reward functions in both diversity and performance.
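One common way to realize a diversity objective of this kind, offered here only as an illustrative assumption rather than the paper's exact method, is to add a population-level novelty bonus to each agent's reward based on how far its behavior embedding sits from the rest of the population.

```python
import numpy as np

def diversity_bonus(agent_embedding, population_embeddings, weight=0.1):
    """Bonus grows with mean distance to the other agents' behaviors."""
    dists = [np.linalg.norm(agent_embedding - e)
             for e in population_embeddings]
    return weight * float(np.mean(dists)) if dists else 0.0

# During training, each agent would then optimize:
# total_reward = env_reward + diversity_bonus(my_emb, other_embs)
```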
What is learned by superhuman neural network agents such as AlphaZero? This question is of both scientific and practical interest. If the representations of strong neural networks bear no resemblance to human concepts, our ability to understand faithful explanations of their decisions will be limited, ultimately constraining what we can achieve with neural network interpretability. In this work we provide evidence that human knowledge is acquired by the AlphaZero neural network as it trains on the game of chess. By probing for a broad range of human chess concepts, we show when and where these concepts are represented in the AlphaZero network. We also provide a behavioral analysis focused on opening play, including a qualitative analysis from chess grandmaster Vladimir Kramnik. Finally, we carry out a preliminary investigation of the low-level details of AlphaZero's representations, and make the resulting behavioral and representational analyses available online.
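A simplified version of this probing methodology: fit a sparse linear probe from a layer's activations to a human concept value (e.g., material balance) and read held-out fit as evidence that the concept is encoded at that layer. The concept, regularizer, and split here are illustrative choices, not the paper's exact setup.

```python
from sklearn.linear_model import Lasso
from sklearn.model_selection import train_test_split

def probe_concept(activations, concept_values, alpha=0.01):
    """activations: (n_positions, d) from one network layer."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        activations, concept_values, test_size=0.2, random_state=0)
    probe = Lasso(alpha=alpha).fit(X_tr, y_tr)
    # Held-out R^2; repeat per layer and training checkpoint to see
    # when and where the concept emerges.
    return probe, probe.score(X_te, y_te)
```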
We investigate how to efficiently predict play personas from playtraces. A play persona can be computed by calculating the action agreement ratio between a player and a generative model of playing behavior, a so-called procedural persona. But this is computationally expensive and assumes that suitable procedural personas are readily available. We present two methods for estimating play personas: one using regular supervised learning over aggregate measures of the game mechanics initiated, and another based on sequence learning. While both methods achieve high accuracy when predicting play personas defined by agreement with procedural personas, they completely fail to predict play styles defined by the players themselves via a questionnaire. This interesting result highlights the value of using computational methods to define play personas.
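The action agreement ratio the abstract refers to can be computed directly from a playtrace and a persona policy; a straightforward rendering:

```python
def action_agreement_ratio(playtrace, persona_policy):
    """playtrace: [(state, action), ...]; persona_policy(state) -> action."""
    if not playtrace:
        return 0.0
    # Fraction of states where the procedural persona would have
    # chosen the same action the player actually took.
    matches = sum(1 for state, action in playtrace
                  if persona_policy(state) == action)
    return matches / len(playtrace)

# The player is assigned the persona with the highest agreement ratio.
```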
Win prediction is crucial to understanding skill modeling, teamwork, and matchmaking in esports. In this paper we propose GCN-WP, a semi-supervised win-prediction model for esports based on graph convolutional networks. The model learns the structure of an esports league over the course of one season (one year) and makes predictions on another, similar league. It integrates over 30 features about the match and its players, and employs graph convolution to classify games based on their neighborhood. Our model achieves state-of-the-art prediction accuracy compared to machine learning and skill-rating models for LoL. The framework is generalizable, so it can easily be extended to other multiplayer online games.
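For intuition, here is a single generic graph-convolution layer of the kind GCN-WP builds on; this is the textbook GCN update, not the paper's full architecture.

```python
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        # adj: row-normalized adjacency (with self-loops) of the match
        # graph; x: per-node features (30+ match/player features in the
        # paper). Each node aggregates its neighborhood before the
        # win/loss classification head.
        return torch.relu(self.linear(adj @ x))
```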
In soccer (or association football), players quickly go from heroes to zeroes, and vice versa. Performance is not a static measure but a volatile one. Analyzing performance as a time series, rather than as a stationary point in time, is crucial to making better decisions. This paper introduces and explores the I-VAEP and O-VAEP models, which evaluate actions and rate players' intention and execution. We then analyze these ratings over time and present use cases that justify our choice of fundamentally treating player ratings as a continuous problem. As a result, we show who the best players are and how their performance evolves, define volatility metrics to measure a player's consistency, and build player development curves to aid decision-making.
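One natural volatility metric in the spirit of the abstract, though the paper's exact definition may differ, is a rolling standard deviation of a player's per-match ratings:

```python
import pandas as pd

def volatility(ratings, window=10):
    """ratings: per-match VAEP-style ratings in chronological order."""
    s = pd.Series(ratings)
    # Lower rolling deviation = a more consistent player; the rolling
    # mean of the same series doubles as a simple development curve.
    return s.rolling(window, min_periods=2).std()
```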
A long-standing goal of artificial intelligence is an algorithm that learns, tabula rasa, superhuman proficiency in challenging domains. Recently, AlphaGo became the first program to defeat a world champion in the game of Go. The tree search in AlphaGo evaluated positions and selected moves using deep neural networks. These neural networks were trained by supervised learning from human expert moves, and by reinforcement learning from self-play. Here, we introduce an algorithm based solely on reinforcement learning, without human data, guidance, or domain knowledge beyond game rules. AlphaGo becomes its own teacher: a neural network is trained to predict AlphaGo's own move selections and also the winner of AlphaGo's games. This neural network improves the strength of tree search, resulting in higher-quality move selection and stronger self-play in the next iteration. Starting tabula rasa, our new program AlphaGo Zero achieved superhuman performance, winning 100-0 against the previously published, champion-defeating AlphaGo.

Much progress towards artificial intelligence has been made using supervised learning systems that are trained to replicate the decisions of human experts [1-4]. However, expert data is often expensive, unreliable, or simply unavailable. Even when reliable data is available, it may impose a ceiling on the performance of systems trained in this manner [5]. In contrast, reinforcement learning systems are trained from their own experience, in principle allowing them to exceed human capabilities, and to operate in domains where human expertise is lacking. Recently, there has been rapid progress towards this goal, using deep neural networks trained by reinforcement learning. These systems have outperformed humans in computer games such as Atari [6, 7] and 3D virtual environments [8-10]. However, the most challenging domains in terms of human intellect, such as the game of Go…
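For reference, the single training objective behind the self-play loop described above, as published in the AlphaGo Zero paper, combines a value loss, a policy loss toward the MCTS search probabilities, and L2 regularization. With $(\mathbf{p}, v) = f_\theta(s)$ the network's predicted move probabilities and value, $\pi$ the search probabilities, and $z$ the game outcome:

$$\ell = (z - v)^2 - \pi^\top \log \mathbf{p} + c\,\lVert\theta\rVert^2$$

The squared-error term fits the value head to the eventual winner, the cross-entropy term pulls the policy head toward the search-improved policy, and $c$ controls the regularization strength.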
Given examples of human behavior, we consider the task of building strong but human-like policies in multi-agent decision-making problems. Imitation learning is effective at predicting human actions but may not match the strength of expert humans, while self-play learning and search techniques (e.g., AlphaZero) lead to strong performance but may produce policies that are hard for humans to understand and coordinate with. We show in chess that regularizing Monte Carlo tree search with a penalty based on the KL divergence from an imitation-learned policy yields policies that have higher human prediction accuracy and are stronger than the imitation policy itself. We then introduce a novel regret minimization algorithm, likewise regularized by the KL divergence from an imitation-learned policy, and show that applying this algorithm to no-press Diplomacy yields a policy that is substantially stronger while maintaining essentially the same human prediction accuracy as imitation learning.
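The generic KL-regularized objective behind this family of methods has a convenient closed form: maximizing E_pi[Q] - lambda * KL(pi || tau) over action distributions gives pi(a) proportional to tau(a) * exp(Q(a) / lambda), where tau is the imitation-learned anchor policy. A sketch of that step (the paper's search procedures add further machinery around it):

```python
import numpy as np

def kl_regularized_policy(q_values, anchor_probs, lam=1.0):
    """Closed-form maximizer of E[Q] - lam * KL(pi || anchor)."""
    logits = np.log(anchor_probs) + q_values / lam
    exp = np.exp(logits - logits.max())  # numerically stable softmax
    return exp / exp.sum()

# lam -> infinity recovers the human-like anchor policy tau;
# lam -> 0 recovers the greedy argmax over Q.
```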
Competitive online games use rating systems for matchmaking: progression-based algorithms that estimate a player's skill level, with interpretable ratings, from the outcomes of the games they play. However, a player's overall experience is shaped by factors beyond the mere outcomes of their games. In this paper, we engineer several features from in-game statistics to model players and create ratings that accurately represent their behavior and true performance level. We then compare the estimation power of these behavioral ratings against ratings created by three mainstream rating systems, by predicting player rank in four popular game modes from the competitive shooter genre. Our results show that behavioral ratings yield more accurate performance estimates while maintaining the interpretability of the created representations. Considering different aspects of players' in-game behavior and using behavioral ratings for matchmaking could lead to match-ups that are better aligned with players' goals and interests, and consequently to a more enjoyable gaming experience.
Automatically adapting game content to the player opens new doors for game development. In this paper, we propose an architecture using persona agents and experience metrics, which enables procedurally generated levels to be tailored to a specific player persona. Using our game "Grave Rave", we demonstrate that this approach successfully adapts to three different rule-based persona agents with respect to three distinct experience metrics. Furthermore, the adaptation is shown to be specific: the levels are persona-aware, not merely optimized in general with respect to the selected metric.
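A minimal generate-and-test reading of such an architecture, with all three components as abstract stand-ins rather than the paper's concrete implementations, would look like this:

```python
def adapt_level(generate_level, persona_agent, experience_metric,
                n_candidates=50):
    """Pick the candidate level that best suits the target persona."""
    best_level, best_score = None, float("-inf")
    for _ in range(n_candidates):
        level = generate_level()
        playtrace = persona_agent(level)       # simulated playthrough
        score = experience_metric(playtrace)   # e.g. challenge, pacing
        if score > best_score:
            best_level, best_score = level, score
    return best_level
```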
Recent progress in artificial intelligence (AI) has renewed interest in building systems that learn and think like people. Many advances have come from using deep neural networks trained end-to-end in tasks such as object recognition, video games, and board games, achieving performance that equals or even beats humans in some respects. Despite their biological inspiration and performance achievements, these systems differ from human intelligence in crucial ways. We review progress in cognitive science suggesting that truly human-like learning and thinking machines will have to reach beyond current engineering trends in both what they learn, and how they learn it. Specifically, we argue that these machines should (a) build causal models of the world that support explanation and understanding, rather than merely solving pattern recognition problems; (b) ground learning in intuitive theories of physics and psychology, to support and enrich the knowledge that is learned; and (c) harness compositionality and learning-to-learn to rapidly acquire and generalize knowledge to new tasks and situations. We suggest concrete challenges and promising routes towards these goals that can combine the strengths of recent neural network advances with more structured cognitive models.
In freemium games, revenue from a player comes from the in-app purchases they make and the advertisements to which they are exposed. The longer a player keeps playing the game, the higher the chance that they will generate revenue within it. In this context, it is extremely important to detect promptly when a player is about to quit playing (churn), in order to react and attempt to retain the player within the game, thereby prolonging their playing lifetime. In this paper, we investigate how to improve upon the current state of the art in churn prediction by combining sequential and aggregate data using different neural network architectures. The results of the comparative analysis show that combining the two data types yields better prediction accuracy than predictors based on purely sequential or purely aggregate data.
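A sketch of the kind of combined architecture such a comparison involves: an LSTM encodes the session sequence, a small MLP encodes aggregate statistics, and the two encodings are fused into a churn probability. All dimensions are placeholders, not the article's tuned values.

```python
import torch
import torch.nn as nn

class ChurnNet(nn.Module):
    def __init__(self, seq_dim=12, agg_dim=20, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(seq_dim, hidden, batch_first=True)
        self.agg = nn.Sequential(nn.Linear(agg_dim, hidden), nn.ReLU())
        self.out = nn.Linear(2 * hidden, 1)

    def forward(self, sessions, aggregates):
        # sessions: (batch, time, seq_dim) per-session features;
        # aggregates: (batch, agg_dim) lifetime statistics.
        _, (h, _) = self.lstm(sessions)          # final hidden state
        fused = torch.cat([h[-1], self.agg(aggregates)], dim=1)
        return torch.sigmoid(self.out(fused))    # churn probability
```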
This research aims to accomplish two goals. The first goal is to curate a large and information-rich dataset containing crucial and succinct summaries of players' actions and positions, as well as the back-and-forth travel patterns of the volleyball, in professional and NCAA Div-I indoor volleyball games. Although several prior studies have aimed to create similar datasets for other sports (e.g., badminton and soccer), such a dataset for indoor volleyball has not yet been realized. The second goal is to introduce a volleyball descriptive language to fully describe the rally processes in these games and to apply the language to our dataset. Based on the curated dataset and our descriptive sports language, we introduce three tasks for automated volleyball action and tactic analysis: (1) volleyball rally prediction, which aims to predict the outcome of a rally and help players and coaches improve decision-making in practice; (2) setting type and hitting type prediction, to help coaches and players prepare more effectively for games; and (3) volleyball tactics and attacking-zone statistics, to provide advanced volleyball statistics and help coaches better understand the game and their opponents' strategies. We conduct case studies to show how the experimental results can provide insights to the volleyball analysis community, and experimental evaluation on real-world data establishes baselines for future research and applications of our dataset and language. This study bridges the gap between the indoor volleyball domain and computer science.
In this article we introduce the Arcade Learning Environment (ALE): both a challenge problem and a platform and methodology for evaluating the development of general, domain-independent AI technology. ALE provides an interface to hundreds of Atari 2600 game environments, each one different, interesting, and designed to be a challenge for human players. ALE presents significant research challenges for reinforcement learning, model learning, model-based planning, imitation learning, transfer learning, and intrinsic motivation. Most importantly, it provides a rigorous testbed for evaluating and comparing approaches to these problems. We illustrate the promise of ALE by developing and benchmarking domain-independent agents designed using well-established AI techniques for both reinforcement learning and planning. In doing so, we also propose an evaluation methodology made possible by ALE, reporting empirical results on over 55 different games. All of the software, including the benchmark agents, is publicly available.
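A minimal usage sketch with the `ale-py` Python bindings; the API names below are recalled from that package and should be verified against your installed version.

```python
from ale_py import ALEInterface

ale = ALEInterface()
ale.loadROM("breakout.bin")            # path to an Atari 2600 ROM file
actions = ale.getLegalActionSet()

total_reward = 0.0
while not ale.game_over():
    a = actions[0]                     # a real agent would choose here
    total_reward += ale.act(a)         # act() returns the step reward
ale.reset_game()
print("episode reward:", total_reward)
```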
The International Workshop on Reading Music Systems (WoRMS) is a workshop that tries to connect researchers who develop systems for reading music, such as in the field of Optical Music Recognition, with other researchers and practitioners that could benefit from such systems, like librarians or musicologists. The relevant topics of interest for the workshop include, but are not limited to: Music reading systems; Optical music recognition; Datasets and performance evaluation; Image processing on music scores; Writer identification; Authoring, editing, storing and presentation systems for music scores; Multi-modal systems; Novel input-methods for music to produce written music; Web-based Music Information Retrieval services; Applications and projects; Use-cases related to written music. These are the proceedings of the 3rd International Workshop on Reading Music Systems, held in Alicante on the 23rd of July 2021.
This paper describes an AI agent that plays the popular first-person-shooter (FPS) video game Counter-Strike: Global Offensive (CSGO) from pixel input. The agent, a deep neural network, matches the performance of the medium-difficulty built-in AI on the deathmatch game mode while adopting a humanlike play style. Unlike much prior work in games, there is no API for CSGO, so algorithms must train and run in real time. This limits the quantity of on-policy data that can be generated, precluding many reinforcement learning algorithms. Our solution uses behavioral cloning: training on a large but noisy dataset scraped from human play on online servers (4 million frames, comparable in size to ImageNet) and a smaller dataset of high-quality expert demonstrations. This scale is an order of magnitude larger than prior work on imitation learning in FPS games.
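A minimal behavioral-cloning loop of the sort described, with the policy network and data loaders as abstract stand-ins; the two-stage schedule in the usage comments mirrors the large scraped dataset followed by the smaller expert set.

```python
import torch
import torch.nn.functional as F

def behavior_clone(policy, loader, epochs=1, lr=1e-4):
    """Supervised training on (frame, human_action) pairs."""
    opt = torch.optim.Adam(policy.parameters(), lr=lr)
    for _ in range(epochs):
        for frames, actions in loader:
            loss = F.cross_entropy(policy(frames), actions)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return policy

# policy = behavior_clone(policy, scraped_loader)   # large, noisy
# policy = behavior_clone(policy, expert_loader)    # small, clean
```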