With the breakthrough of AlphaGo, AI in human-computer gaming has become a very hot topic attracting researchers all around the world, and such games are commonly used as effective benchmarks for testing artificial intelligence. Various game AI systems (AIs) have been developed, such as Libratus, OpenAI Five, and AlphaStar, which beat professional human players. In this paper, we survey recent successful game AIs, covering board game AIs, card game AIs, first-person shooting game AIs, and real-time strategy game AIs. Through this survey, we 1) compare the main difficulties among different kinds of games for intelligent decision making; 2) illustrate the mainstream frameworks and techniques for developing professional-level AIs; 3) raise the challenges or drawbacks of current AIs for intelligent decision making; and 4) try to point out future trends in games and in intelligent decision-making techniques. Finally, we hope this brief review can provide an introduction for beginners and inspire insights for researchers working on AI in games.
With the breakthrough of AlphaGo, deep reinforcement learning has become a recognized technique for solving sequential decision-making problems. Despite its reputation, the data inefficiency caused by its trial-and-error learning mechanism makes deep reinforcement learning hard to apply in a wide range of areas. Many methods have been developed for sample-efficient deep reinforcement learning, such as environment modeling, experience transfer, and distributed modifications, among which distributed deep reinforcement learning has shown its potential in various applications, such as human-computer gaming and intelligent transportation. In this paper, we survey the state of this exciting field by comparing the classical distributed deep reinforcement learning methods and studying the important components needed to achieve efficient distributed learning, covering everything from single-player, single-agent distributed deep reinforcement learning to the most complex multi-player, multi-agent settings. Furthermore, we review recently released toolboxes that help realize distributed deep reinforcement learning without many modifications of their non-distributed versions. By analyzing their strengths and weaknesses, a multi-player multi-agent distributed deep reinforcement learning toolbox is developed and released, and is further validated on Wargame, a complex environment, demonstrating the usability of the proposed toolbox for multi-player, multi-agent distributed deep reinforcement learning in complex games. Finally, we point out challenges and future trends, hoping this brief review can provide a guide or a spark for researchers interested in distributed deep reinforcement learning.
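To make the actor-learner split behind such systems concrete, here is a minimal sketch of single-player, single-agent distributed data collection using Python multiprocessing. The toy environment, the random stand-in policy, and all names are our own illustration, not the API of the released toolbox; real systems add off-policy correction, batched GPU inference, and weight broadcasting.

```python
import multiprocessing as mp
import random

def actor(rollout_queue, actor_id, episodes=10):
    """Collect experience with a stand-in policy and ship it to the learner."""
    for _ in range(episodes):
        state, transitions = 0.0, []
        for _ in range(32):  # fixed-length rollout
            action = random.choice([-1.0, 1.0])  # placeholder for policy inference
            next_state = state + action
            reward = -abs(next_state)            # toy reward: stay near zero
            transitions.append((state, action, reward, next_state))
            state = next_state
        rollout_queue.put((actor_id, transitions))

def learner(rollout_queue, total_rollouts):
    """Consume rollouts and perform (stubbed) parameter updates."""
    for step in range(total_rollouts):
        actor_id, transitions = rollout_queue.get()
        mean_reward = sum(t[2] for t in transitions) / len(transitions)
        # A real learner would compute gradients here and broadcast fresh weights.
        print(f"step {step}: rollout from actor {actor_id}, mean reward {mean_reward:.2f}")

if __name__ == "__main__":
    queue = mp.Queue()
    actors = [mp.Process(target=actor, args=(queue, i)) for i in range(4)]
    for p in actors:
        p.start()
    learner(queue, total_rollouts=4 * 10)
    for p in actors:
        p.join()
```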
Monte Carlo Tree Search (MCTS) is a powerful approach to designing game-playing bots or solving sequential decision problems. The method relies on intelligent tree search that balances exploration and exploitation. MCTS performs random sampling in the form of simulations and stores the statistics of actions to make more educated choices in each subsequent iteration. The method has become a state-of-the-art technique for combinatorial games; however, in more complex games (such as those with a high branching factor or real-time ones), as well as in various practical domains (e.g., transportation, scheduling, or security), an effective MCTS application often requires problem-specific modifications or integration with other techniques. Such domain-specific modifications and hybrid approaches are the main focus of this survey. The last major MCTS survey was published in 2012, so contributions that have appeared since then are of particular interest.
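As a reference point for the vanilla algorithm that these modifications build on, here is a compact UCT sketch. It is a hedged illustration assuming a single-player game object exposing legal_actions, step, is_terminal, and reward; those names are ours, not from any surveyed implementation.

```python
import math
import random

class Node:
    def __init__(self, state, parent=None):
        self.state, self.parent = state, parent
        self.children = {}            # action -> Node
        self.visits, self.value = 0, 0.0

def uct_select(node, c=1.4):
    # Pick the child maximizing the UCB1 score: exploitation + exploration.
    return max(node.children.items(),
               key=lambda kv: kv[1].value / kv[1].visits
               + c * math.sqrt(math.log(node.visits) / kv[1].visits))

def mcts(root_state, game, iterations=1000):
    root = Node(root_state)
    for _ in range(iterations):
        node = root
        # 1. Selection: descend while the node is fully expanded.
        while node.children and len(node.children) == len(game.legal_actions(node.state)):
            _, node = uct_select(node)
        # 2. Expansion: add one unexplored child.
        untried = [a for a in game.legal_actions(node.state) if a not in node.children]
        if untried and not game.is_terminal(node.state):
            a = random.choice(untried)
            node.children[a] = Node(game.step(node.state, a), parent=node)
            node = node.children[a]
        # 3. Simulation: random playout to a terminal state.
        state = node.state
        while not game.is_terminal(state):
            state = game.step(state, random.choice(game.legal_actions(state)))
        result = game.reward(state)
        # 4. Backpropagation: update statistics along the path to the root.
        while node:
            node.visits += 1
            node.value += result
            node = node.parent
    return max(root.children.items(), key=lambda kv: kv[1].visits)[0]
```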
Owing to the unremitting efforts of a few institutes, significant progress has recently been made in designing superhuman AIs for no-limit Texas hold'em (NLTH), the primary testbed for large-scale imperfect-information game research. However, it remains challenging for new researchers to study this problem, since there are no standard benchmarks for comparing against existing methods, which seriously hinders further development of this research area. In this work, we present OpenHoldem, an integrated toolkit for large-scale imperfect-information game research using NLTH. OpenHoldem makes three main contributions to this research direction: 1) a standardized evaluation protocol for thoroughly evaluating different NLTH AIs, 2) four publicly available strong baselines for NLTH AI, and 3) an online testing platform with easy-to-use APIs for public NLTH AI evaluation. We have released OpenHoldem at holdem.ia.ac.cn, hoping it facilitates further studies of the unsolved theoretical and computational problems in this area and cultivates crucial research topics such as opponent modeling and human-computer interactive learning.
Games have a long history of serving as benchmarks for progress in artificial intelligence. Recently, approaches using search and learning have shown strong performance across a set of perfect-information games, and approaches using game-theoretic reasoning and learning have shown strong performance for specific imperfect-information poker variants. We introduce Player of Games, a general-purpose algorithm that unifies previous approaches, combining guided search, self-play learning, and game-theoretic reasoning. Player of Games is the first algorithm to achieve strong empirical performance in both large perfect- and imperfect-information games, an important step towards truly general algorithms for arbitrary environments. We prove that Player of Games is sound, converging to perfect play as available computation time and approximation capacity increase. Player of Games reaches strong performance in chess and Go, beats the strongest openly available agent in heads-up no-limit Texas hold'em poker (Slumbot), and defeats the state-of-the-art agent in Scotland Yard, an imperfect-information game that illustrates the value of guided search, learning, and game-theoretic reasoning.
Monte Carlo Tree Search (MCTS) is a recently proposed search method that combines the precision of tree search with the generality of random sampling. It has received considerable interest due to its spectacular success in the difficult problem of computer Go, but has also proved beneficial in a range of other domains. This paper is a survey of the literature to date, intended to provide a snapshot of the state of the art after the first five years of MCTS research. We outline the core algorithm's derivation, impart some structure on the many variations and enhancements that have been proposed, and summarise the results from the key game and non-game domains to which MCTS methods have been applied. A number of open research questions indicate that the field is ripe for future work.
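The tree policy at the heart of the canonical algorithm surveyed here (UCT) selects, at a node with visit count $N_s$, the child action maximizing the UCB1 score $\frac{W_a}{N_a} + c\sqrt{\frac{\ln N_s}{N_a}}$, where $W_a$ and $N_a$ are the accumulated reward and visit count of that child and $c$ is an exploration constant ($c=\sqrt{2}$ in the original UCB1 analysis): the first term favours exploitation of well-performing moves, the second favours exploration of rarely tried ones.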
DeepMind's Game Theory & Multi-Agent team researches several aspects of multi-agent learning, ranging from computing approximations to fundamental concepts in game theory, to simulating social dilemmas in rich spatial environments, and training 3-D humanoids in difficult team coordination tasks. A signature aim of our group is to use the resources and expertise made available to us at DeepMind in deep reinforcement learning to explore multi-agent systems in complex environments, and to use these benchmarks to advance our understanding. Here, we summarise the recent work of our team and present a taxonomy that we feel highlights many important open challenges in multi-agent research.
StarCraft II (SC2) poses a grand challenge for reinforcement learning (RL), whose main difficulties include the huge state space, the varying action space, and the long horizon. In this work, we investigate a set of RL techniques for the full-length game of StarCraft II. We study a hierarchical RL approach involving extracted macro-actions and a hierarchical architecture of neural networks. We investigate a curriculum transfer training procedure and train the agent on a single machine with 4 GPUs and 48 CPU threads. On a 64x64 map and using restrictive units, we achieve a win rate of 99% against the built-in AI. Through the curriculum transfer learning algorithm and a mixture of combat models, we achieve a 93% win rate against the most difficult non-cheating built-in AI (level-7). In an extended version of this paper, we improve the architecture to train agents against cheating-level built-in AIs and achieve win rates of 96%, 97%, and 94% against the level-8, level-9, and level-10 AIs, respectively. Our code is at https://github.com/liuruoze/hiernet-sc2. To provide a baseline for our work as well as for the research and open-source community, we also reproduce a scaled-down version of AlphaStar, mini-AlphaStar (mAS). The latest version of mAS is 1.07, which can be trained on the raw action space with 564 actions. It is designed to run training on a single common machine by making its hyper-parameters adjustable. We then compare our work with mAS using the same resources and show that our method is more effective. The code of mini-AlphaStar is at https://github.com/liuruoze/mini-alphastar. We hope our study sheds some light on future research into efficient reinforcement learning for SC2 and other large-scale games.
We introduce DeepNash, an autonomous agent capable of learning to play the imperfect-information game Stratego from scratch, up to a human expert level. Stratego is one of the few iconic board games that artificial intelligence (AI) has not yet mastered. This popular game has an enormous game tree on the order of $10^{535}$ nodes, i.e., $10^{175}$ times larger than that of Go. It has the additional complexity of requiring decision making under imperfect information, similar to Texas hold'em poker, which has a significantly smaller game tree (on the order of $10^{164}$ nodes). Decisions in Stratego are made over a large number of discrete actions with no obvious link between action and outcome. Episodes are long, with often hundreds of moves before a player wins, and situations in Stratego cannot easily be broken down into manageably sized sub-problems as in poker. For these reasons, Stratego has been a grand challenge for the field of AI for decades, and existing AI methods barely reach an amateur level of play. DeepNash uses a game-theoretic, model-free deep reinforcement learning method, without search, that learns to master Stratego via self-play. The Regularised Nash Dynamics (R-NaD) algorithm, a key component of DeepNash, converges to an approximate Nash equilibrium, instead of 'cycling' around it, by directly modifying the underlying multi-agent learning dynamics. DeepNash beats existing state-of-the-art AI methods in Stratego and achieved yearly (2022) and all-time top-3 ranks on the Gravon games platform, competing with human expert players.
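As we read the abstract, the "direct modification of the learning dynamics" in R-NaD amounts to a reward transformation of roughly the form $r'(s,a) = r(s,a) - \eta \log\frac{\pi(a|s)}{\pi_{\mathrm{reg}}(a|s)}$, which penalises the current policy $\pi$ for drifting away from a periodically updated regularisation policy $\pi_{\mathrm{reg}}$; the fixed point of the induced dynamics approximates a Nash equilibrium rather than orbiting it. This is our hedged paraphrase of the method, not a quotation from the paper.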
AlphaZero, Leela Chess Zero, and Stockfish NNUE revolutionised computer chess. This book gives a complete introduction to the technical inner workings of such engines. The book is split into four main chapters, excluding chapter 1 (introduction) and chapter 6 (conclusion): Chapter 2 introduces neural networks and covers all the basic building blocks used to construct deep networks such as those used by AlphaZero. Contents include the perceptron, back-propagation and gradient descent, classification, regression, multi-layer perceptrons, vectorization techniques, convolutional networks, squeeze-and-excitation networks, fully connected networks, batch normalization and rectified linear units, residual layers, and overfitting and underfitting. Chapter 3 introduces classical search techniques used for chess engines as well as those used by AlphaZero. Contents include minimax, alpha-beta search, and Monte Carlo tree search. Chapter 4 shows how modern chess engines are designed. Aside from the ground-breaking AlphaGo, AlphaGo Zero, and AlphaZero, we cover Leela Chess Zero, Fat Fritz, Fat Fritz 2, efficiently updatable neural networks (NNUE), and Maia. Chapter 5 is about implementing a miniature AlphaZero. Hexapawn, a minimalistic version of chess, is used as an example for this. Hexapawn is solved with minimax search, and training positions for supervised learning are generated. Then, as a comparison, an AlphaZero-like training loop is implemented in which training is done via self-play combined with reinforcement learning. Finally, AlphaZero-like training and supervised training are compared.
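For orientation on the classical search material covered in chapter 3, here is a minimal negamax formulation of alpha-beta pruning. It is a sketch under the assumption of a game object with legal_moves, apply, is_terminal, and an evaluate function scored from the side to move; it is not code taken from the book.

```python
def alphabeta(state, depth, alpha, beta, game):
    """Negamax with alpha-beta pruning; returns the score from the side to move."""
    if depth == 0 or game.is_terminal(state):
        return game.evaluate(state)
    best = -float("inf")
    for move in game.legal_moves(state):
        # Negate: the child's score is from the opponent's point of view.
        score = -alphabeta(game.apply(state, move), depth - 1, -beta, -alpha, game)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:
            break  # cutoff: the opponent will never allow this line
    return best
```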
Recent advances in deep reinforcement learning (RL) have led to considerable progress in many 2-player zero-sum games, such as Go, poker, and StarCraft. The purely adversarial nature of such games allows for a conceptually simple application of RL methods. However, real-world settings are many-agent, and agent interactions are complex mixtures of common-interest and competitive aspects. We consider Diplomacy, a 7-player board game designed to accentuate dilemmas resulting from many-agent interactions. It also features a large combinatorial action space and simultaneous moves, which are challenging for RL algorithms. We propose a simple yet effective approximate best response operator, designed to handle large combinatorial action spaces and simultaneous moves. We also introduce a family of policy iteration methods that approximate fictitious play. With these methods, we successfully apply RL to Diplomacy: we show that our agents convincingly outperform the previous state of the art, and game-theoretic equilibrium analysis shows that the new process yields consistent improvements.
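To illustrate the fictitious-play idea that these policy-iteration methods approximate, here is a toy exact version on matching pennies; the paper's agents replace the exact best response with a learned approximate one over Diplomacy's combinatorial action space, so this is an illustration of the principle only.

```python
import numpy as np

# Matching pennies: payoff matrix for player 1 (player 2 receives the negative).
A = np.array([[1.0, -1.0], [-1.0, 1.0]])

counts1, counts2 = np.ones(2), np.ones(2)  # action counts (uniform prior)
for t in range(10000):
    avg1, avg2 = counts1 / counts1.sum(), counts2 / counts2.sum()
    br1 = np.argmax(A @ avg2)   # player 1 best-responds to player 2's average strategy
    br2 = np.argmin(avg1 @ A)   # player 2 (the minimizer) best-responds to player 1's
    counts1[br1] += 1
    counts2[br2] += 1

# The empirical averages converge to the Nash equilibrium (0.5, 0.5).
print(counts1 / counts1.sum(), counts2 / counts2.sum())
```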
Offline reinforcement learning (offline RL) is an emerging field that has recently begun gaining attention across various application domains due to its ability to learn behavior from previously collected datasets. Using logged data is imperative when further interaction with the environment is expensive (computationally or otherwise), unsafe, or entirely infeasible. Offline RL has proven very successful, paving the way to solving previously intractable real-world problems, and we aim to generalize this paradigm to the multi-agent or multiplayer-game setting. Very little research has been done in this area, as progress is hindered by the lack of standardized datasets and meaningful benchmarks. In this work, we coin the term offline equilibrium finding (OEF) to describe this area and construct multiple datasets consisting of strategies collected across many games using several established methods. We also propose a benchmark method: an amalgamation of a behavior-cloning and a model-based algorithm. Our two model-based algorithms, OEF-PSRO and OEF-CFR, are adaptations of the widely used equilibrium-finding algorithms Deep CFR and PSRO to the offline-learning context. In the empirical part, we evaluate the performance of the benchmark algorithms on the constructed datasets. We hope that our efforts may help accelerate research in large-scale equilibrium finding. Datasets and code are available at https://github.com/securitygames/oef.
2048 is a single-player stochastic puzzle game. This intriguing and addictive game has been popular worldwide and has attracted researchers to develop game-playing programs. Due to its simplicity and complexity, 2048 has become an interesting and challenging platform for evaluating the effectiveness of machine learning methods. This dissertation conducts comprehensive research on reinforcement learning and computer game algorithms for 2048. First, this dissertation proposes optimistic temporal difference learning, which significantly improves the quality of learning by employing optimistic initialization to encourage exploration for 2048. Furthermore, based on this approach, a state-of-the-art program for 2048 is developed, which achieves the highest performance among all learning-based programs, namely an average score of 625377 points and a rate of 72% for reaching 32768-tiles. Second, this dissertation investigates several techniques related to 2048, including the n-tuple network ensemble learning, Monte Carlo tree search, and deep reinforcement learning. These techniques are promising for further improving the performance of the current state-of-the-art program. Finally, this dissertation discusses pedagogical applications related to 2048 by proposing course designs and summarizing the teaching experience. The proposed course designs use 2048-like games as materials for beginners to learn reinforcement learning and computer game algorithms. The courses have been successfully applied to graduate-level students and received well by student feedback.
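A minimal sketch of the optimistic temporal-difference idea: initialize every value estimate to a large optimistic constant so that unvisited afterstates look attractive and get explored. The dictionary below stands in for the n-tuple networks the dissertation actually uses, and the constant is an arbitrary illustrative choice, not a value from the thesis.

```python
from collections import defaultdict

OPTIMISTIC_INIT = 320000.0   # assumed optimistic upper bound, for illustration only
ALPHA = 0.1                  # learning rate

# Unseen afterstates default to the optimistic value, encouraging exploration.
V = defaultdict(lambda: OPTIMISTIC_INIT)

def td_update(afterstate, reward, next_afterstate, terminal):
    """TD(0) update on afterstate values, as commonly used for 2048."""
    target = reward if terminal else reward + V[next_afterstate]
    V[afterstate] += ALPHA * (target - V[afterstate])
```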
Counterfactual regret minimization (CFR) is a popular method for finding approximate Nash equilibria in two-player zero-sum games with imperfect information. CFR solves games by iteratively traversing the full game tree, which limits its scalability to larger games. Previously, when applying CFR to solve large-scale games, the large game was first abstracted into a smaller game; CFR was then used to solve the abstract game; and finally the solution strategy was mapped back to the original large-scale game. However, this process requires considerable expert knowledge, and the accuracy of the abstraction is closely tied to that expertise. In addition, abstraction also loses information, which ultimately affects the accuracy of the solution strategy. To address this problem, a recent method, Deep CFR, alleviates the need for abstraction and expert knowledge by applying deep neural networks directly to CFR in the full game. In this paper, we introduce Neural Network Counterfactual Regret Minimization (NNCFR), an improved variant of Deep CFR that achieves faster convergence by constructing a dueling network as the value network. Moreover, an evaluation module is designed by combining the value network with Monte Carlo sampling, which reduces the approximation error of the value network. In addition, a new loss function is designed for the procedure of training the policy network in the proposed NNCFR, which makes the policy network more stable. Extensive experiments show that NNCFR converges faster and performs more stably than Deep CFR, and outperforms Deep CFR in both exploitability and head-to-head performance on the test games.
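The update at the core of every CFR variant is regret matching, which converts the accumulated counterfactual regrets at an information set into the next strategy. In tabular CFR these regret tables exist per information set, while Deep CFR and NNCFR approximate them with neural networks trained on sampled regrets. A minimal sketch:

```python
import numpy as np

def regret_matching(cumulative_regret: np.ndarray) -> np.ndarray:
    """Turn cumulative counterfactual regrets into a strategy over actions."""
    positive = np.maximum(cumulative_regret, 0.0)
    total = positive.sum()
    if total > 0:
        return positive / total
    # No positive regret: fall back to the uniform strategy.
    return np.full(len(cumulative_regret), 1.0 / len(cumulative_regret))

# Example: regrets [3, -1, 1] yield the strategy [0.75, 0.0, 0.25].
print(regret_matching(np.array([3.0, -1.0, 1.0])))
```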
Recent progress in artificial intelligence (AI) has renewed interest in building systems that learn and think like people. Many advances have come from using deep neural networks trained end-to-end in tasks such as object recognition, video games, and board games, achieving performance that equals or even beats humans in some respects. Despite their biological inspiration and performance achievements, these systems differ from human intelligence in crucial ways. We review progress in cognitive science suggesting that truly human-like learning and thinking machines will have to reach beyond current engineering trends in both what they learn, and how they learn it. Specifically, we argue that these machines should (a) build causal models of the world that support explanation and understanding, rather than merely solving pattern recognition problems; (b) ground learning in intuitive theories of physics and psychology, to support and enrich the knowledge that is learned; and (c) harness compositionality and learning-to-learn to rapidly acquire and generalize knowledge to new tasks and situations. We suggest concrete challenges and promising routes towards these goals that can combine the strengths of recent neural network advances with more structured cognitive models.
Given examples of human behavior, we consider the task of building strong but human-like policies in multi-agent decision-making problems. Imitation learning is effective at predicting human actions but may not match the strength of expert humans, while self-play learning and search techniques (e.g., AlphaZero) lead to strong performance but may produce policies that are difficult for humans to understand and coordinate with. We show in chess that applying Monte Carlo tree search regularized by the KL divergence from an imitation-learned policy produces policies that have higher human prediction accuracy and are stronger than the imitation policy. We then introduce a novel regret minimization algorithm that is regularized based on the KL divergence from an imitation-learned policy, and show that applying this algorithm to no-press Diplomacy yields a policy that is substantially stronger while maintaining roughly the same human prediction accuracy as imitation learning.
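The common thread of both methods can be summarized as a KL-regularized objective: choose a policy that maximizes expected value subject to a penalty for straying from the imitation policy $\pi_{IL}$, i.e. $\pi^* = \arg\max_{\pi} \mathbb{E}_{a \sim \pi}[Q(s,a)] - \lambda\, D_{\mathrm{KL}}(\pi \,\|\, \pi_{IL})$, whose closed-form solution is $\pi^*(a|s) \propto \pi_{IL}(a|s)\, e^{Q(s,a)/\lambda}$. A small $\lambda$ recovers raw search strength, while a large $\lambda$ recovers the human-like imitation policy. This is our hedged summary of the regularization objective, not the paper's exact formulation.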
Injecting human knowledge is an effective way to accelerate reinforcement learning (RL). However, such methods remain underexplored. This paper presents our finding that an abstract forward model, which we call a Thought-Game (TG), combined with transfer learning (TL), is an effective way to do so. We take StarCraft II as our study environment. With the help of the designed TG, the agent can learn to achieve a 99% win rate on a 64x64 map against the level-1 built-in AI, using only 1.08 hours on a single commercial machine. We also show that the TG method is not as restrictive as it was thought to be: it can make use of roughly designed TGs and remains useful when the environment changes. Compared with previous model-based RL approaches, we show that TG is more effective. We also present a TG hypothesis that accounts for the influence of different fidelity levels of a TG. For real games with unequal state and action spaces, we propose a novel XfrNet, whose usefulness is validated by achieving a 90% win rate against the cheating level-10 AI. We argue that the TG method deserves further study as a way of exploiting human knowledge in reinforcement learning.
Deep reinforcement learning is poised to revolutionise the field of AI and represents a step towards building autonomous systems with a higher level understanding of the visual world. Currently, deep learning is enabling reinforcement learning to scale to problems that were previously intractable, such as learning to play video games directly from pixels. Deep reinforcement learning algorithms are also applied to robotics, allowing control policies for robots to be learned directly from camera inputs in the real world. In this survey, we begin with an introduction to the general field of reinforcement learning, then progress to the main streams of value-based and policy-based methods. Our survey will cover central algorithms in deep reinforcement learning, including the deep Q-network, trust region policy optimisation, and asynchronous advantage actor-critic. In parallel, we highlight the unique advantages of deep neural networks, focusing on visual understanding via reinforcement learning. To conclude, we describe several current areas of research within the field.
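As a concrete anchor for the value-based stream, here is a minimal DQN-style update in PyTorch. It is a sketch only: the replay buffer, ε-greedy exploration, and periodic target-network synchronisation are omitted, and the network sizes are arbitrary choices of ours.

```python
import torch
import torch.nn as nn

# Online and target Q-networks (toy sizes: 4-dim state, 2 actions).
q_net = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 2))
target_net = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 2))
target_net.load_state_dict(q_net.state_dict())
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
GAMMA = 0.99

def dqn_update(states, actions, rewards, next_states, dones):
    """One gradient step on the TD error, using a frozen target network."""
    q_values = q_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        next_q = target_net(next_states).max(dim=1).values
        targets = rewards + GAMMA * next_q * (1.0 - dones)
    loss = nn.functional.mse_loss(q_values, targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```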
The high emission and low energy efficiency caused by internal combustion engines (ICE) have become unacceptable under environmental regulations and the energy crisis. As a promising alternative solution, multi-power source electric vehicles (MPS-EVs) introduce different clean energy systems to improve powertrain efficiency. The energy management strategy (EMS) is a critical technology for MPS-EVs to maximize efficiency, fuel economy, and range. Reinforcement learning (RL) has become an effective methodology for the development of EMS. RL has received continuous attention and research, but there is still a lack of systematic analysis of the design elements of RL-based EMS. To this end, this paper presents an in-depth analysis of the current research on RL-based EMS (RL-EMS) and summarizes the design elements of RL-based EMS. This paper first summarizes the previous applications of RL in EMS from five aspects: algorithm, perception scheme, decision scheme, reward function, and innovative training method. The contribution of advanced algorithms to the training effect is shown, the perception and control schemes in the literature are analyzed in detail, different reward function settings are classified, and innovative training methods with their roles are elaborated. Finally, by comparing the development routes of RL and RL-EMS, this paper identifies the gap between advanced RL solutions and existing RL-EMS, and suggests potential development directions for implementing advanced artificial intelligence (AI) solutions in EMS.
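As a concrete, hypothetical instance of the reward-function design element discussed above, a common pattern trades instantaneous fuel use against battery state-of-charge tracking; the weights and target below are illustrative assumptions of ours, not values from any surveyed paper.

```python
# Hypothetical weights and SOC reference, for illustration only.
FUEL_WEIGHT = 1.0
SOC_WEIGHT = 50.0
SOC_TARGET = 0.6

def ems_reward(fuel_rate_g_per_s: float, soc: float) -> float:
    """Negative cost: penalize fuel consumption and drift from the SOC reference."""
    return -(FUEL_WEIGHT * fuel_rate_g_per_s + SOC_WEIGHT * (soc - SOC_TARGET) ** 2)
```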
Imitation learning techniques aim to mimic human behavior in a given task. An agent (a learning machine) is trained to perform a task from demonstrations by learning a mapping between observations and actions. The idea of teaching by imitation has been around for many years; however, the field has recently been gaining attention due to advances in computing and sensing as well as rising demand for intelligent applications. The paradigm of learning by imitation is gaining popularity because it facilitates teaching complex tasks with minimal expert knowledge of the tasks. Generic imitation learning methods could potentially reduce the problem of teaching a task to that of providing demonstrations, without the need for explicit programming or designing reward functions specific to the task. Modern sensors are able to collect and transmit high volumes of data rapidly, and processors with high computational power allow fast processing that maps the sensory data to actions in a timely manner. This opens the door for many potential AI applications that require real-time perception and reaction, such as humanoid robots, self-driving vehicles, human-computer interaction, and computer games, to name a few. However, specialized algorithms are needed to effectively and robustly learn models, as learning by imitation poses its own set of challenges. In this paper, we survey imitation learning methods and present design options in different steps of the learning process. We introduce a background and motivation for the field as well as highlight challenges specific to the imitation problem. Methods for designing and evaluating imitation learning tasks are categorized and reviewed. Special attention is given to learning methods in robotics and games as these domains are the most popular in the literature and provide a wide array of problems and methodologies. We extensively discuss combining imitation learning approaches using different sources and methods, as well as incorporating other motion learning methods to enhance imitation. We also discuss the potential impact on industry, present major applications and highlight current and future research directions.
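The simplest instantiation of the observation-to-action mapping described above is behavior cloning, i.e., supervised learning on demonstration pairs. The sketch below uses PyTorch with arbitrary dimensions and synthetic data for illustration; real pipelines add dataset curation and strategies for handling the distribution shift that pure cloning suffers from.

```python
import torch
import torch.nn as nn

# Toy policy: 8-dim observations, 4 discrete actions (arbitrary sizes).
policy = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 4))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

def bc_step(observations: torch.Tensor, expert_actions: torch.Tensor) -> float:
    """One supervised step: fit the policy's action logits to the expert labels."""
    logits = policy(observations)
    loss = nn.functional.cross_entropy(logits, expert_actions)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Usage with synthetic demonstration data:
obs = torch.randn(32, 8)
acts = torch.randint(0, 4, (32,))
print(bc_step(obs, acts))
```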