Proof-Number Search (PNS) and Monte-Carlo Tree Search (MCTS) have both been successfully applied to decision making in a range of games. This paper proposes a new method called PN-MCTS, which combines the two tree-search techniques by incorporating the concepts of proof and disproof numbers into the UCT formula of MCTS. Experimental results show that PN-MCTS outperforms basic MCTS in several games, including Lines of Action, MiniShogi, Knightthrough, and Awari, achieving win rates of up to 94.0%.
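To make the combination concrete, the sketch below shows one plausible way of folding proof-number information into the standard UCT score used during selection. The rank-based bonus, the weight `w_pn`, and the function name `uct_pn_score` are illustrative assumptions; the exact term used by PN-MCTS may differ.

```python
import math

# Hedged sketch: a standard UCT score augmented with a proof-number term.
# The exact bonus used by PN-MCTS may differ; `pn_rank` and `w_pn` are
# illustrative assumptions, not the paper's definition.

def uct_pn_score(child_value_sum, child_visits, parent_visits,
                 pn_rank, num_children, c=math.sqrt(2), w_pn=0.25):
    """Score one child during MCTS selection.

    child_value_sum / child_visits  -> average reward (exploitation)
    sqrt(log(parent)/child)         -> exploration term (standard UCT)
    pn_rank                         -> rank of the child when siblings are
                                       ordered by proof/disproof numbers
                                       (0 = most promising), used as a bonus
    """
    if child_visits == 0:
        return float("inf")  # always try unvisited children first
    exploitation = child_value_sum / child_visits
    exploration = c * math.sqrt(math.log(parent_visits) / child_visits)
    # Children that PNS considers closest to a proven win get a larger bonus.
    pn_bonus = w_pn * (num_children - pn_rank) / num_children
    return exploitation + exploration + pn_bonus


if __name__ == "__main__":
    # Toy call: 3 children, the first one ranked best by proof numbers.
    print(uct_pn_score(7.0, 10, 50, pn_rank=0, num_children=3))
```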
Monte Carlo Tree Search (MCTS) is a recently proposed search method that combines the precision of tree search with the generality of random sampling. It has received considerable interest due to its spectacular success in the difficult problem of computer Go, but has also proved beneficial in a range of other domains. This paper is a survey of the literature to date, intended to provide a snapshot of the state of the art after the first five years of MCTS research. We outline the core algorithm's derivation, impart some structure on the many variations and enhancements that have been proposed, and summarise the results from the key game and non-game domains to which MCTS methods have been applied. A number of open research questions indicate that the field is ripe for future work.
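The selection policy at the core of the surveyed algorithm is UCT; in its commonly cited form, a child j of the current node is scored as

```latex
\mathrm{UCT}(j) \;=\; \overline{X}_j + C \sqrt{\frac{\ln n}{n_j}}
```

where \(\overline{X}_j\) is the mean reward observed for child j, \(n_j\) its visit count, \(n\) the visit count of its parent, and \(C\) an exploration constant (often near \(\sqrt{2}\), though typically tuned per domain).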
In many games, a move consists of several decisions made by the player. These decisions can be viewed as separate moves, which is already a common practice in multi-action games for efficiency reasons. Such a division of a player's move into a sequence of simpler, lower-level moves is called splitting. So far, split moves have been applied only in straightforward cases, and, moreover, there is almost no research revealing their impact on agents' playing strength. Taking the knowledge perspective, we aim to answer how split moves can be used effectively within Monte-Carlo Tree Search (MCTS) and what the practical impact of split design on agents is. This paper proposes a generalization of MCTS that works with arbitrarily split moves. We design several variants of the algorithm and try to measure the impact of split moves separately on efficiency, the quality of MCTS, simulations, and action-based heuristics. The tests are carried out on a set of board games using the Regular Boardgames general game playing formalism, in which split strategies of different granularity can be derived automatically from the abstract description of a game. The results give an overview of the behaviour of agents using split designs in different ways. We conclude that split design can be of great benefit for single- as well as multi-action games.
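A toy illustration of the split idea, under assumed names (the paper derives splits automatically from Regular Boardgames rules rather than hand-coding them): a compound move "pick a piece and a destination" can be exposed to the search either as one joint decision or as two lower-level decisions, which lowers the branching factor seen at each tree level.

```python
# Hedged illustration of "split" move design.  The encoding below is an
# assumption for illustration only; it is not taken from the paper.

def joint_actions(pieces, destinations):
    """Un-split design: one tree edge per (piece, destination) pair."""
    return [(p, d) for p in pieces for d in destinations]

def split_actions(pieces, destinations):
    """Split design: first branch on the piece, then on the destination, so the
    branching factor per tree level drops from |P|*|D| to |P| and |D|."""
    return {p: list(destinations) for p in pieces}

if __name__ == "__main__":
    pieces = ["knight", "bishop"]
    dests = ["a3", "c3", "e2"]
    print(len(joint_actions(pieces, dests)))                        # 6 joint actions
    print({p: len(d) for p, d in split_actions(pieces, dests).items()})  # 2 then 3
```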
Monte-Carlo Tree Search (MCTS) is a powerful approach for designing game-playing bots and for solving sequential decision problems. The method relies on intelligent tree search that balances exploration and exploitation. MCTS performs random sampling in the form of simulations and stores the statistics of actions in order to make better-informed choices in each subsequent iteration. The method has become a state-of-the-art technique for combinatorial games; however, in more complex games (e.g., those with a high branching factor or real-time ones), as well as in various practical domains (e.g., transportation, scheduling, or security), effective MCTS applications typically require problem-specific modifications or integration with other techniques. Such domain-specific modifications and hybrid approaches are the main focus of this survey. The last major MCTS survey was published in 2012; contributions that have appeared since then are of particular interest here.
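For readers new to the method, the following self-contained sketch shows the four canonical MCTS phases (selection, expansion, simulation, backpropagation) on a trivial single-pile Nim game; it is an illustrative baseline, not code from any of the surveyed systems.

```python
import math, random

class Node:
    def __init__(self, pile, player, parent=None, move=None):
        self.pile, self.player = pile, player      # stones left, player to move
        self.parent, self.move = parent, move
        self.children, self.visits, self.wins = [], 0, 0.0

    def untried_moves(self):
        tried = {c.move for c in self.children}
        return [m for m in (1, 2) if m <= self.pile and m not in tried]

    def ucb1(self, c=1.4):
        return self.wins / self.visits + c * math.sqrt(math.log(self.parent.visits) / self.visits)

def rollout(pile, player):
    """Random playout: the player taking the last stone wins."""
    while pile > 0:
        pile -= random.choice([m for m in (1, 2) if m <= pile])
        player = 1 - player
    return 1 - player

def mcts(root_pile, iterations=2000):
    root = Node(root_pile, player=0)
    for _ in range(iterations):
        node = root
        # 1. Selection: descend while the node is fully expanded
        while not node.untried_moves() and node.children:
            node = max(node.children, key=Node.ucb1)
        # 2. Expansion: add one untried child, if any
        moves = node.untried_moves()
        if moves:
            m = random.choice(moves)
            node = Node(node.pile - m, 1 - node.player, parent=node, move=m)
            node.parent.children.append(node)
        # 3. Simulation: random playout from the new node
        winner = rollout(node.pile, node.player)
        # 4. Backpropagation: credit the player who moved into each node
        while node:
            node.visits += 1
            node.wins += 1.0 if winner != node.player else 0.0
            node = node.parent
    return max(root.children, key=lambda c: c.visits).move

if __name__ == "__main__":
    print("Best first move from a pile of 5:", mcts(5))  # optimal play takes 2
```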
Traditional reinforcement learning (RL) environments are usually identical during the training and testing phases. As a result, current RL methods are largely unable to generalize to test environments that are conceptually similar to, yet different from, the ones they were trained on, which we call novel test environments. To push RL research toward algorithms that can generalize to novel test environments, we introduce the Brick Tic-Tac-Toe (BTTT) testbed, in which the brick positions in the test environment differ from those in the training environment. Using a round-robin tournament on the BTTT environment, we show that traditional RL state-search approaches such as Monte Carlo Tree Search (MCTS) and Minimax generalize better to novel test environments than AlphaZero does. This is surprising, because AlphaZero has been shown to achieve superhuman performance in environments such as Go, chess, and shogi, which might lead one to expect that it would also perform well in novel test environments. Our results show that BTTT, although simple, is rich enough to explore the generalizability of AlphaZero. We find that merely increasing the number of MCTS lookahead iterations is insufficient for AlphaZero to generalize to some novel test environments. Instead, increasing the variety of training environments helps to progressively improve generalizability across all possible starting brick configurations.
Monte Carlo Tree Search (MCTS) is a sampling best-first method for searching for optimal decisions. The popularity of MCTS rests on its extraordinary results in the challenging two-player game of Go, a game considerably harder than chess and, until recently, regarded as infeasible for artificial intelligence methods. The success of MCTS depends heavily on how the tree is built, and the selection process plays a fundamental role in this. One particular selection mechanism that has proven reliable is based on the Upper Confidence Bounds for Trees, commonly referred to as UCT. UCT attempts to balance exploration and exploitation by considering the values stored in the statistical tree of MCTS. However, some tuning of MCTS UCT is necessary for it to work well. In this work, we use Evolutionary Algorithms (EAs) to evolve mathematical expressions intended to replace the UCT expression. We compare our proposed method, called Evolution Strategy in MCTS (ES-MCTS), against five variants of MCTS UCT, three variants of the *-minimax family of algorithms, and a random controller, in the game of Carcassonne. We also use a variant of the proposed EA-based controller, dubbed ES partially integrated in MCTS. We show how the ES-MCTS controller outperforms all ten of these intelligent controllers, including the robust MCTS UCT controller.
Artificial intelligence (AI) players have acquired superhuman skill in games such as Go, chess, and Othello (Reversi). In other words, AI players have become too strong as opponents for human players, and so we no longer enjoy playing board games against them. To entertain human players, an AI player must automatically balance its skill against that of the human player. To address this problem, I propose AlphaDDA, an AlphaZero-based AI player with dynamic difficulty adjustment (DDA). AlphaDDA consists of a deep neural network (DNN) and Monte Carlo tree search, like AlphaZero. AlphaDDA estimates the value of a game state from the board state alone using the DNN and changes its skill according to that value. AlphaDDA can adjust its skill using only the state of the game, without any prior knowledge of the opponent. In this study, AlphaDDA plays Connect4, 6x6 Othello (Othello on a 6x6 board), and Othello against other AI agents: AlphaZero, Monte Carlo tree search, the Minimax algorithm, and a random player. The study shows that AlphaDDA achieves a skill level balanced with that of the other AI agents, except for the random player. AlphaDDA's DDA ability comes from its accurate estimation of the value from the game state. We expect the AlphaDDA approach to be usable for any game in which a DNN can estimate the value from the state.
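The sketch below illustrates the kind of value-based adjustment described above: when the DNN's value estimate says the agent is clearly ahead, it deliberately plays a move whose resulting value is close to even. The threshold, the "closest to even" rule, and the function names are assumptions made for illustration; AlphaDDA's concrete adjustment rules may differ.

```python
# Hedged sketch of dynamic difficulty adjustment driven only by value estimates.

def choose_move(candidate_moves, value_of, state_value, winning_threshold=0.3):
    """candidate_moves: legal moves; value_of(move) -> estimated value of the
    position after the move (agent's point of view, in [-1, 1]);
    state_value: value estimate of the current state from the network."""
    if state_value > winning_threshold:
        # Agent judges itself clearly ahead: pick the move whose resulting
        # value is closest to an even game, handing the opponent a chance.
        return min(candidate_moves, key=lambda m: abs(value_of(m)))
    # Otherwise play normally: maximize the estimated value.
    return max(candidate_moves, key=value_of)

if __name__ == "__main__":
    values = {"a": 0.9, "b": 0.4, "c": -0.1}
    print(choose_move(list(values), values.get, state_value=0.8))   # plays "c" (near even)
    print(choose_move(list(values), values.get, state_value=-0.2))  # plays "a" (best move)
```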
AlphaZero, Leela Chess Zero and Stockfish NNUE revolutionized computer chess. This book gives a complete introduction to the technical inner workings of such engines. The book is split into four main chapters, excluding Chapter 1 (Introduction) and Chapter 6 (Conclusion): Chapter 2 introduces neural networks and covers all the basic building blocks used to construct deep networks such as the one used by AlphaZero. Its contents include the perceptron, back-propagation and gradient descent, classification, regression, multilayer perceptrons, vectorization techniques, convolutional networks, squeeze-and-excitation networks, fully connected networks, batch normalization and layer normalization, rectified linear units, residual layers, and overfitting and underfitting. Chapter 3 introduces the classical search techniques used in chess engines as well as those used by AlphaZero. Its contents include minimax, alpha-beta search, and Monte Carlo tree search. Chapter 4 shows how modern chess engines are designed. Besides the ground-breaking AlphaGo, AlphaGo Zero and AlphaZero, it covers Leela Chess Zero, Fat Fritz, Fat Fritz 2, Efficiently Updatable Neural Networks (NNUE), and Maia. Chapter 5 is about implementing a miniature AlphaZero. Hexapawn, a minimalistic version of chess, is used as the example for this. Minimax search solves Hexapawn and generates training positions for supervised learning. Then, as a comparison, an AlphaZero-like training loop is implemented in which training is carried out through self-play combined with reinforcement learning. Finally, AlphaZero-like training and supervised training are compared.
Monte-Carlo Tree Search (MCTS) is an adversarial search paradigm that first found prominence with its success in the domain of computer Go. Early theoretical work established the game-theoretic soundness and convergence bounds for Upper Confidence bounds applied to Trees (UCT), the most popular instantiation of MCTS; however, there remain notable gaps in our understanding of how UCT behaves in practice. In this work, we address one such gap by considering the question of whether UCT can exhibit lookahead pathology -- a paradoxical phenomenon first observed in Minimax search where greater search effort leads to worse decision-making. We introduce a novel family of synthetic games that offer rich modeling possibilities while remaining amenable to mathematical analysis. Our theoretical and experimental results suggest that UCT is indeed susceptible to pathological behavior in a range of games drawn from this family.
Recently, the seminal algorithms AlphaGo and AlphaZero have started a new era in game learning and deep reinforcement learning. While the achievements of AlphaGo and AlphaZero -- playing Go and other complex games at superhuman level -- are truly impressive, a drawback of these architectures is that they require high computational resources. Many researchers are looking for methods that are similar to AlphaZero but have lower computational demands and are thus easier to reproduce. In this paper, we take an important element of AlphaZero -- the Monte Carlo Tree Search (MCTS) planning stage -- and combine it with temporal difference (TD) learning agents. For the first time, we wrap MCTS around TD n-tuple networks, and we use this wrapping only at test time to create versatile agents while keeping computational demands low. We apply this new architecture to several complex games (Othello, ConnectFour, Rubik's Cube) and show the advantages gained by this AlphaZero-inspired MCTS wrapper. In particular, we present results showing that this agent is the first one trained on standard hardware (no GPU or TPU) to beat the very strong Othello program Edax up to and including level 7 (whereas most other learning agents could only defeat Edax up to level 2).
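As a rough illustration of the wrapper idea, one plausible arrangement is to let the pre-trained TD value function replace random rollouts as the leaf evaluator of a test-time MCTS; the snippet below shows only this division of labour, and the way the paper actually combines network values with tree statistics may differ.

```python
# Hedged sketch: a TD-trained value function (here just a callable `td_value`)
# is consulted at MCTS leaves instead of running random rollouts.  All names
# and signatures here are assumptions for illustration.

def evaluate_leaf(state, td_value, is_terminal, terminal_score):
    """Return a value in [-1, 1] to backpropagate through the tree."""
    if is_terminal(state):
        return terminal_score(state)   # exact outcome when the game is over
    return td_value(state)             # otherwise trust the trained TD network

# Usage inside a generic MCTS iteration (pseudocode-level):
#   value = evaluate_leaf(leaf.state, net.predict, game.is_over, game.result)
#   backpropagate(leaf, value)
```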
2048 is a single-player stochastic puzzle game. This intriguing and addictive game has been popular worldwide and has attracted researchers to develop game-playing programs. Due to its simplicity and complexity, 2048 has become an interesting and challenging platform for evaluating the effectiveness of machine learning methods. This dissertation conducts comprehensive research on reinforcement learning and computer game algorithms for 2048. First, this dissertation proposes optimistic temporal difference learning, which significantly improves the quality of learning by employing optimistic initialization to encourage exploration for 2048. Furthermore, based on this approach, a state-of-the-art program for 2048 is developed, which achieves the highest performance among all learning-based programs, namely an average score of 625377 points and a rate of 72% for reaching 32768-tiles. Second, this dissertation investigates several techniques related to 2048, including the n-tuple network ensemble learning, Monte Carlo tree search, and deep reinforcement learning. These techniques are promising for further improving the performance of the current state-of-the-art program. Finally, this dissertation discusses pedagogical applications related to 2048 by proposing course designs and summarizing the teaching experience. The proposed course designs use 2048-like games as materials for beginners to learn reinforcement learning and computer game algorithms. The courses have been successfully taught to graduate-level students and were well received according to student feedback.
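A minimal sketch of the optimistic-initialization idea behind the first contribution, using a tiny tabular TD(0) learner instead of the dissertation's n-tuple networks; the constants OPTIMISTIC_INIT and ALPHA are illustrative assumptions.

```python
from collections import defaultdict

# Hedged sketch: value estimates start at a large optimistic constant, so
# rarely-seen states keep being explored until TD updates pull their value
# down toward reality.  Tabular and toy-scaled for illustration only.

OPTIMISTIC_INIT = 320000.0   # roughly the score scale of a strong 2048 game (assumption)
ALPHA = 0.1                  # learning rate (assumption)

V = defaultdict(lambda: OPTIMISTIC_INIT)

def td0_update(state, reward, next_state, terminal=False):
    """V(s) <- V(s) + alpha * (r + V(s') - V(s)); no discounting, as is common
    for episodic score maximization in 2048-like games."""
    target = reward + (0.0 if terminal else V[next_state])
    V[state] += ALPHA * (target - V[state])

# Example: one observed transition worth 4 points.
td0_update(state="s0", reward=4.0, next_state="s1")
print(V["s0"])   # moves from 320000 toward 4 + V["s1"]
```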
Artificial intelligence, when coupled with games, provides an ideal framework for research and for advancing the field. Multi-agent games offer multiple controls for each agent, generating large amounts of data while increasing search complexity. Therefore, we need advanced search methods to find solutions and to create artificial-intelligence agents. In this paper, we propose our novel Evolutionary Monte Carlo Tree Search (FEMCTS) agent, which borrows ideas from Evolutionary Algorithms (EA) and Monte Carlo Tree Search (MCTS), to play the game of Pommerman. It significantly outperforms the Rolling Horizon Evolutionary Algorithm (RHEA) in high-observability settings and performs almost as well as MCTS for most game seeds, outperforming it in some cases.
The game of chess is the longest-studied domain in the history of artificial intelligence. The strongest programs are based on a combination of sophisticated search techniques, domain-specific adaptations, and handcrafted evaluation functions that have been refined by human experts over several decades. By contrast, the AlphaGo Zero program recently achieved superhuman performance in the game of Go by reinforcement learning from self-play. In this paper, we generalize this approach into a single AlphaZero algorithm that can achieve superhuman performance in many challenging games. Starting from random play and given no domain knowledge except the game rules, AlphaZero convincingly defeated a world champion program in the games of chess and shogi (Japanese chess) as well as Go. The study of computer chess is as old as computer science itself. Charles Babbage, Alan Turing, Claude Shannon, and John von Neumann devised hardware, algorithms and theory to analyse and play the game of chess. Chess subsequently became a grand challenge task for a generation of artificial intelligence researchers, culminating in high-performance computer chess programs that play at a super-human level (1,2). However, these systems are highly tuned to their domain, and cannot be generalized to other games without substantial human effort, whereas general game-playing systems (3, 4) remain comparatively weak. A long-standing ambition of artificial intelligence has been to create programs that can instead learn for themselves from first principles (5, 6). Recently, the AlphaGo Zero algorithm achieved superhuman performance in the game of Go, by representing Go knowledge using deep convolutional neural networks (7, 8), trained solely by reinforcement learning from games of self-play.
A goal-achieving problem is a puzzle set up in a specific situation with a specific goal. A well-studied example is the category of life-and-death (L&D) problems in Go, which helps players hone their skill of recognizing region safety. Many previous methods, such as lambda search, first try a null move and then derive so-called relevance zones (RZs), outside of which there is no need to search. This paper first proposes a novel RZ-based approach, called RZ-based Search (RZS), for solving L&D problems. RZS tries moves before determining whether they are null moves post hoc. This means we do not need to rely on a null-move heuristic, resulting in a more elegant algorithm that can also be seamlessly incorporated into AlphaZero's superhuman-level play in our solver. To make AlphaZero suitable for solving, we also propose a new training method called Faster to Life (FTL), which modifies AlphaZero to entice it to win faster. We use RZS and FTL to solve L&D problems, solving 68 out of 106 problems from a professional L&D book, whereas previous programs could solve only 11. Finally, we discuss that the approach is generic in the sense that RZS is applicable to solving many other goal-achieving problems for board games.
Monte Carlo Tree Search (MCTS) is a sampling best-first method for searching for optimal decisions. The success of MCTS depends heavily on how the tree is built, and the selection process plays a fundamental role in this. One particular selection mechanism that has proven reliable is based on the Upper Confidence Bounds for Trees (UCT). UCT attempts to balance exploration and exploitation by considering the values stored in the statistical tree of MCTS. However, some tuning of MCTS UCT is necessary for this to work well. In this work, we use Evolutionary Algorithms (EAs) to evolve mathematical expressions intended to substitute the UCT formula and to use the evolved expressions within MCTS. More specifically, we evolve expressions by means of our proposed Semantically-Inspired Evolutionary Algorithm in MCTS approach (SIEA-MCTS). This is inspired by semantics in Genetic Programming (GP), where the use of fitness cases is seen as a requirement in GP. Fitness cases are normally used to determine the fitness of individuals and can be used to compute the semantic similarity (or dissimilarity) of individuals. However, fitness cases are not available in MCTS. We extend this notion by using multiple reward values from MCTS, which allow us to determine both the fitness of individuals and their semantics. By doing so, we show how SIEA-MCTS is able to successfully evolve mathematical expressions that yield better or competitive results compared to UCT, without the need to tune these evolved expressions. We compare the performance of the proposed SIEA-MCTS against the MCTS algorithm, the MCTS Rapid Action Value Estimation algorithm, three variants of the *-minimax family of algorithms, a random controller, and two additional EA approaches. We consistently show how SIEA-MCTS outperforms most of these intelligent controllers in the challenging game of Carcassonne.
This paper describes three different optimized implementations of playouts, as commonly used in game-playing algorithms such as Monte-Carlo Tree Search. Each optimized implementation is applicable only to a specific set of games, depending on their rules. The Ludii general game system can automatically infer, based on a game's description in its general game description language, whether any of the optimized implementations are applicable. An empirical evaluation demonstrates major speedups over a standard implementation, with a median result of running playouts 5.08 times as fast, over 145 different games in Ludii for which one of the optimized implementations is applicable.
Games have a long history as benchmarks for progress in artificial intelligence. Recently, approaches using search and learning have shown strong performance across a range of perfect-information games, and approaches using game-theoretic reasoning and learning have shown strong performance for specific imperfect-information poker variants. We introduce Player of Games, a general-purpose algorithm that unifies previous approaches, combining guided search, self-play learning, and game-theoretic reasoning. Player of Games is the first algorithm to achieve strong empirical performance in large perfect- and imperfect-information games -- an important step toward truly general algorithms for arbitrary environments. We prove that Player of Games is sound, converging to perfect play as the available computation time and approximation capacity increase. Player of Games reaches strong performance in chess and Go, beats the strongest openly available agent in heads-up no-limit Texas hold'em poker (Slumbot), and defeats the state-of-the-art agent in Scotland Yard, an imperfect-information game that illustrates the value of guided search, learning, and game-theoretic reasoning.
In manufacturing, production is often done on off-the-shelf manufacturing lines, whose underlying scheduling heuristics are not known due to intellectual property. We consider such a setting with a black-box job-shop system and an unknown scheduling heuristic that, for a given permutation of jobs, schedules the jobs for the black-box job-shop with the goal of minimizing the makespan. Here, the jobs need to enter the job-shop in the given order of the permutation, but may take different paths within the job shop, which depends on the black-box heuristic. The performance of the black-box heuristic depends on the order of the jobs, and the natural problem for the manufacturer is to find an optimum ordering of the jobs. Facing a real-world scenario as described above, we engineer Monte-Carlo tree search for finding a close-to-optimum ordering of jobs. To cope with a large solution space in planning scenarios, a hierarchical Monte-Carlo tree search (H-MCTS) is proposed based on abstraction of jobs. On synthetic and real-life problems, H-MCTS with integrated abstraction significantly outperforms pure heuristic-based techniques as well as other Monte-Carlo search variants. We furthermore show that, by modifying the evaluation metric in H-MCTS, it is possible to achieve other optimization objectives than what the scheduling heuristics are designed for -- e.g., minimizing the total completion time instead of the makespan. Our experimental observations have also been validated in real-life cases, and our H-MCTS approach has been implemented in a production plant's controller.
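To make the search setting concrete, the sketch below runs a flat Monte-Carlo search over job orderings against a stand-in black-box makespan function. It is a simplified illustration of the problem formulation, not the paper's hierarchical H-MCTS with job abstraction, and `black_box_makespan` is an assumed toy model.

```python
import random

def black_box_makespan(order):
    """Stand-in for the unknown scheduling heuristic: a toy model in which each
    job has a length and incurs a small penalty when it follows a longer job."""
    lengths = {"j1": 4, "j2": 2, "j3": 7, "j4": 3}
    total, prev = 0, 0
    for job in order:
        total += lengths[job] + (1 if lengths[job] < prev else 0)
        prev = lengths[job]
    return total

def monte_carlo_ordering(jobs, samples_per_step=50, rng=random.Random(0)):
    """Greedily fix one job at a time; each candidate prefix is evaluated by
    averaging the makespan of random completions (the 'rollouts')."""
    prefix, remaining = [], list(jobs)
    while remaining:
        def score(job):
            rest = [j for j in remaining if j != job]
            rollouts = []
            for _ in range(samples_per_step):
                rng.shuffle(rest)
                rollouts.append(black_box_makespan(prefix + [job] + rest))
            return sum(rollouts) / len(rollouts)
        best = min(remaining, key=score)
        prefix.append(best)
        remaining.remove(best)
    return prefix

if __name__ == "__main__":
    order = monte_carlo_ordering(["j1", "j2", "j3", "j4"])
    print(order, black_box_makespan(order))
```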
Board games require at least one other player, except for solitaire games. Therefore, we create artificial intelligence (AI) agents to play against us when an opponent is missing. These AI agents are created in a number of ways, but one challenge with such agents is that they can be too strong compared to us. In this work, we describe how to create weaker AI agents that play board games. We use Tic-Tac-Toe, Nine Men's Morris, and Mancala, and our technique uses a reinforcement learning model in which the agents use the Q-learning algorithm to learn these games. We show how these agents can learn to play the board games perfectly, and we then describe approaches to making weaker versions of these agents. Finally, we provide a methodology for comparing the AI agents.
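The agents described above learn with standard Q-learning; the sketch below shows that update together with one hedged way a trained agent could be weakened (occasionally ignoring its learned values). The `weakness` knob is an illustrative assumption, not necessarily the method proposed in the paper.

```python
import random

# Standard Q-learning update:
#   Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))

def q_update(Q, s, a, r, s_next, next_actions, alpha=0.1, gamma=0.9):
    best_next = max((Q.get((s_next, b), 0.0) for b in next_actions), default=0.0)
    old = Q.get((s, a), 0.0)
    Q[(s, a)] = old + alpha * (r + gamma * best_next - old)

def weakened_policy(Q, s, actions, weakness=0.5, rng=random.Random(0)):
    """With probability `weakness` ignore the learned values and move randomly;
    weakness=0 recovers the fully trained agent, weakness=1 a random player."""
    if rng.random() < weakness:
        return rng.choice(actions)
    return max(actions, key=lambda a: Q.get((s, a), 0.0))

if __name__ == "__main__":
    Q = {}
    q_update(Q, "start", "center", r=1.0, s_next="next", next_actions=["corner", "edge"])
    print(Q, weakened_policy(Q, "start", ["center", "corner"]))
```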
In recent years, Monte Carlo tree search (MCTS) has achieved widespread adoption within the game community. Its use in conjunction with deep reinforcement learning has produced success stories in many applications. While these approaches have been implemented in various games, from simple board games to more complicated video games such as StarCraft, the use of deep neural networks requires a substantial training period. In this work, we explore on-line adaptivity in MCTS without requiring pre-training. We present MCTS-TD, an adaptive MCTS algorithm improved with temporal difference learning. We demonstrate our new approach on the game miniXCOM, a simplified version of XCOM, a popular commercial franchise consisting of several turn-based tactical games, and show how adaptivity in MCTS-TD allows for improved performances against opponents.