Representing board games and their positions with text-based symbolic notation opens up possibilities for NLP applications. Language models can help gain insight into a variety of interesting problems, such as unsupervised learning of a game's rules, detecting player behavior patterns, player attribution, and ultimately learning the game well enough to beat the state of the art. In this study, we first apply BERT models to the simple game of Nim to analyze their behavior in a few-shot learning setting in the presence of noise. We analyze model performance against three virtual players, namely Nim Guru, a Random Player, and a Q-Learner. In the second part, we apply the game-learning language model to the game of chess, together with a large collection of grandmaster games with exhaustive encyclopedia openings. Finally, we show that the model can effectively learn the rules of chess and can survive play against Stockfish at a categorical rating level.
Language models such as OpenAI's Generative Pre-trained Transformers (GPT-2/3) capture the long-range correlations needed to generate text in a variety of domains (e.g., language translation) and, more recently, in game play (chess, Go, and checkers). This research applies both the larger (GPT-3) and smaller (GPT-2) language models to explore the complex strategies of the game of Othello (or Reversi). Given the game's rules of rapid reversals of fortune, the language models not only serve as candidate predictors of the next move based on previous game moves, but also avoid the sparse rewards typical of game play. The language models automatically capture or emulate championship-level strategies. The fine-tuned GPT-2 models generate Othello games that range from 13-71% completion, while the larger GPT-3 model reaches 41% of a complete game. Like previous work on chess and Go, these language models offer a novel way to generate plausible game archives, particularly for comparing opening moves across a larger sample than humans have produced. A main contribution of these models is to roughly double the previous record of archived games (spanning the 45 years from 1977 to 2022), thereby supplying the research community with more diverse and original strategies for sampling with other reinforcement learning techniques.
AlphaZero, Leela Chess Zero, and Stockfish NNUE revolutionized computer chess. This book gives a complete introduction to the technical inner workings of such engines. The book is split into four main chapters, excluding Chapter 1 (Introduction) and Chapter 6 (Conclusion): Chapter 2 introduces neural networks and covers all the basic building blocks used to construct deep networks such as the one used by AlphaZero. Contents include the perceptron, backpropagation and gradient descent, classification, regression, multilayer perceptrons, vectorization techniques, convolutional networks, squeeze-and-excitation networks, fully connected networks, batch normalization, layer normalization, rectified linear units, residual layers, overfitting, and underfitting. Chapter 3 introduces the classical search techniques used in chess engines as well as those used by AlphaZero. Contents include minimax, alpha-beta search, and Monte Carlo tree search. Chapter 4 shows how modern chess engines are designed. Apart from the ground-breaking AlphaGo, AlphaGo Zero, and AlphaZero, we cover Leela Chess Zero, Fat Fritz, Fat Fritz 2, efficiently updatable neural networks (NNUE), and Maia. Chapter 5 is about implementing a miniature AlphaZero. Hexapawn, a minimalistic version of chess, is used as the example. Hexapawn can be solved by minimax search, which also produces training positions for supervised learning. Then, as a comparison, an AlphaZero-like training loop is implemented, in which training via self-play is combined with reinforcement learning. Finally, AlphaZero-like training and supervised training are compared.
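For illustration of the Chapter 3 material, the following is a minimal sketch of alpha-beta pruning in negamax form. The `GameState` interface (`legal_moves`, `play`, `is_terminal`, `score`) is a hypothetical stand-in, not the book's actual code.

```python
# Minimal alpha-beta negamax sketch; GameState is an assumed interface.
import math

def alphabeta(state, depth, alpha=-math.inf, beta=math.inf):
    """Return the negamax value of `state` from the perspective of the side to move."""
    if depth == 0 or state.is_terminal():
        return state.score()              # static evaluation for the side to move
    best = -math.inf
    for move in state.legal_moves():
        child = state.play(move)          # assumed to return the resulting position
        best = max(best, -alphabeta(child, depth - 1, -beta, -alpha))
        alpha = max(alpha, best)
        if alpha >= beta:                 # opponent will avoid this line: prune
            break
    return best
```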
Traditional reinforcement learning (RL) environments are typically the same during both the training and testing phases. As a result, current RL methods largely fail to generalize to test environments that are conceptually similar to, but different from, those they were trained on, which we refer to as novel test environments. To push RL research towards algorithms that can generalize to novel test environments, we introduce the Brick Tic-Tac-Toe (BTTT) testbed, in which the brick positions in the test environment differ from those in the training environment. Using a round-robin tournament on the BTTT environment, we show that traditional RL state-search approaches such as Monte Carlo Tree Search (MCTS) and Minimax generalize better to novel test environments than AlphaZero does. This is surprising, because AlphaZero has been shown to achieve superhuman performance in environments such as Go, chess, and shogi, which might lead one to assume that it performs well in novel test environments. Our results show that BTTT, though simple, is rich enough to explore the generalizability of AlphaZero. We find that merely increasing MCTS lookahead iterations is insufficient for AlphaZero to generalize to some novel test environments. Instead, increasing the variety of training environments helps to progressively improve generalizability across all possible starting brick configurations.
The game of chess is the longest-studied domain in the history of artificial intelligence. The strongest programs are based on a combination of sophisticated search techniques, domain-specific adaptations, and handcrafted evaluation functions that have been refined by human experts over several decades. By contrast, the AlphaGo Zero program recently achieved superhuman performance in the game of Go by reinforcement learning from self-play. In this paper, we generalize this approach into a single AlphaZero algorithm that can achieve superhuman performance in many challenging games. Starting from random play and given no domain knowledge except the game rules, AlphaZero convincingly defeated a world champion program in the games of chess and shogi (Japanese chess) as well as Go. The study of computer chess is as old as computer science itself. Charles Babbage, Alan Turing, Claude Shannon, and John von Neumann devised hardware, algorithms and theory to analyse and play the game of chess. Chess subsequently became a grand challenge task for a generation of artificial intelligence researchers, culminating in high-performance computer chess programs that play at a super-human level (1, 2). However, these systems are highly tuned to their domain, and cannot be generalized to other games without substantial human effort, whereas general game-playing systems (3, 4) remain comparatively weak. A long-standing ambition of artificial intelligence has been to create programs that can instead learn for themselves from first principles (5, 6). Recently, the AlphaGo Zero algorithm achieved superhuman performance in the game of Go, by representing Go knowledge using deep convolutional neural networks (7, 8), trained solely by reinforcement learning from games of self-play.
Text-to-text transformers have shown remarkable success in multi-task transfer learning, especially in natural language processing (NLP). However, while there have been several attempts to train transformers on different domains, these domains usually have a clear relationship, e.g., code summarization, where the natural language summary describes the code. Few attempts have been made to study how multi-task transfer learning works on tasks from significantly different domains. In this project, we investigate the behavior of multi-domain, multi-task learning using a Multi-Domain Text-to-Text Transfer Transformer (MD-T5) on four tasks across two domains: Python code and chess. We carry out extensive experiments using three popular training strategies: BERT-style joint pretraining + successive finetuning, GPT-style joint pretraining + successive finetuning, and GPT-style joint pretraining + joint finetuning. In addition, we evaluate the models on four metrics: Play Score, Eval Score, BLEU Score, and Multi-Domain Learning Score (MDLS). These metrics measure performance across the various tasks and multi-domain learning. We show that while negative knowledge transfer and catastrophic forgetting remain considerable challenges for all models, the GPT-style joint pretraining + joint finetuning strategy shows the most promise for multi-domain, multi-task learning, as it performs well across all four tasks while still retaining its multi-domain knowledge.
Reinforcement learning has recently emerged as a very powerful tool for solving complex problems in the domain of board games, where an agent is generally required to learn complex strategies and moves from its own experience and the rewards it receives. While RL has outperformed existing state-of-the-art methods at playing simple video games and popular board games, it has yet to demonstrate its capabilities on ancient games. Here, we address one such problem: we train agents using different methods, namely Monte Carlo, Q-learning, and Expected SARSA, to learn an optimal policy for playing the strategic Royal Game of Ur. The state space of the game is complex, but our agents show promising results at playing the game and learning important strategic moves. Although it is difficult to conclude that any one algorithm performs better overall when trained with limited resources, Expected SARSA shows promising results in terms of fastest learning.
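For reference, a minimal sketch of the tabular Expected SARSA update mentioned above, under an epsilon-greedy behavior policy. The Q-table layout and hyperparameters are generic assumptions for illustration, not the paper's implementation.

```python
# Tabular Expected SARSA update sketch (epsilon-greedy target policy assumed).
from collections import defaultdict

def expected_sarsa_update(Q, s, a, r, s_next, actions,
                          alpha=0.1, gamma=0.99, eps=0.1, done=False):
    """Move Q[s][a] towards r + gamma * E_pi[Q(s_next, .)] under eps-greedy pi."""
    if done:
        target = r                                   # no bootstrapping at terminal states
    else:
        q_next = [Q[s_next][a2] for a2 in actions]
        expected = (eps / len(actions)) * sum(q_next) + (1.0 - eps) * max(q_next)
        target = r + gamma * expected
    Q[s][a] += alpha * (target - Q[s][a])

Q = defaultdict(lambda: defaultdict(float))          # Q[state][action], defaults to 0.0
```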
In this work, we adapt a training approach inspired by the original AlphaGo system to play the imperfect-information game Reconnaissance Blind Chess. Using only the observations instead of a full description of the game state, we first train a supervised agent on publicly available game records. Next, we improve the agent's performance through self-play with the reinforcement learning algorithm Proximal Policy Optimization (PPO). We do not use any search, in order to avoid problems caused by the partial observability of the game state, and instead use only the policy network to generate moves during play. With this approach, we achieve an Elo of 1330 on the RBC leaderboard, which places our agent at position 27 at the time of writing. We find that self-play significantly improves performance and that the agent plays well without search and without making assumptions about the true game state.
AI engines based on deep learning neural networks provide excellent tools for analyzing traditional board games. Here we are interested in gaining new insights into an ancient game. For this purpose, we need to define new numerical measures based on the raw output of the engines. In this paper, we develop a numerical tool for automated, context-sensitive move-by-move performance evaluation and for identifying game features. We measure the urgency of a move by the cost of passing, i.e., the difference in score values between the current configuration of stones and a hypothetical pass in the same board position. We study the properties of this measure and describe some applications.
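The cost-of-passing measure described above can be sketched as follows. The `engine_score` and `apply_pass` callables are hypothetical wrappers around an engine's evaluation, introduced only for illustration; they are not the paper's actual interface.

```python
# Sketch of the "cost of passing" urgency measure:
# urgency(position) = score(position) - score(position after a hypothetical pass).

def urgency(position, engine_score, apply_pass):
    """Difference between the engine score of the current position and the score
    after the side to move hypothetically passes (higher = more urgent)."""
    score_now = engine_score(position)                 # e.g. expected score for side to move
    score_after_pass = engine_score(apply_pass(position))
    return score_now - score_after_pass
```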
Apart from solo games, board games require at least one other player. Thus, when opponents are missing, we create artificial intelligence (AI) agents to play against us. These AI agents can be created in a number of ways, but one challenge is that such agents can be far more capable than we are. In this work, we describe how to create weaker AI agents that play board games. We use Tic-Tac-Toe, Nine Men's Morris, and Mancala, and our technique uses a reinforcement learning model in which the agents learn these games with the Q-learning algorithm. We show how these agents can learn to play the board games perfectly, and we then describe approaches for making weaker versions of these agents. Finally, we provide a methodology for comparing AI agents.
Tic Tac Toe is among the most well-known games. It has already been shown to be a biased game: assuming both players play optimally, the first player has more chances to win, leaving only a draw or a loss as possibilities for the opponent. Thus, on average, the majority of games played result in a draw. Most recent research on solving a tic tac toe board state employs strategies such as Genetic Algorithms, Neural Networks, Co-Evolution, and Evolutionary Programming. However, these approaches deal with the trivial 3x3 board, and very little research has been done on a generalized algorithm for 4x4, 5x5, 6x6, and higher board states. An algorithm does exist, Min-Max, but due to the recursive nature of its implementation it takes a long time to come up with an ideal move. A sample has been created at \url{https://bk-tic-tac-toe.herokuapp.com/} to demonstrate this. This is the main problem this study aims to solve, i.e., providing a generalized algorithm (an approximate, learning-based method) for higher board states of tic tac toe that makes precise moves in a short time. Also, the code changes needed to accommodate higher board states will be nominal. The idea is to pose the tic tac toe game as a well-posed learning problem. The study and its results are promising, giving a high win-to-draw ratio with each epoch of training. This study could also encourage other researchers to apply the same algorithm to other similar board games such as Minesweeper, Chess, and Go, to find efficient strategies and compare the results.
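To make the "nominal code changes for larger boards" point concrete, a board-size-agnostic terminal check might look like the sketch below. This is a generic illustration, not the study's code; the board is assumed to be a list of rows containing 'X', 'O', or None.

```python
def winner(board):
    """Return 'X', 'O', or None for an n x n tic tac toe board (list of rows);
    a win is n in a row on any row, column, or main diagonal."""
    n = len(board)
    lines = list(board)                                                # rows
    lines += [[board[r][c] for r in range(n)] for c in range(n)]       # columns
    lines.append([board[i][i] for i in range(n)])                      # main diagonal
    lines.append([board[i][n - 1 - i] for i in range(n)])              # anti-diagonal
    for line in lines:
        if line[0] is not None and all(cell == line[0] for cell in line):
            return line[0]
    return None
```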
Monte Carlo Tree Search (MCTS) is a recently proposed search method that combines the precision of tree search with the generality of random sampling. It has received considerable interest due to its spectacular success in the difficult problem of computer Go, but has also proved beneficial in a range of other domains. This paper is a survey of the literature to date, intended to provide a snapshot of the state of the art after the first five years of MCTS research. We outline the core algorithm's derivation, impart some structure on the many variations and enhancements that have been proposed, and summarise the results from the key game and non-game domains to which MCTS methods have been applied. A number of open research questions indicate that the field is ripe for future work.
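At the heart of most MCTS variants surveyed here is a tree policy that balances exploration and exploitation; the classic choice is the UCB1 rule (UCT). A minimal selection sketch follows; the node fields are generic assumptions, not taken from any particular implementation in the survey.

```python
# UCT child selection sketch: exploitation (mean reward) + exploration (visit counts).
import math

def uct_select(node, c=1.41421356):
    """Pick the child maximizing UCB1: Q/N + c * sqrt(ln(N_parent) / N)."""
    def ucb1(child):
        if child.visits == 0:
            return math.inf                      # expand unvisited children first
        exploit = child.total_reward / child.visits
        explore = c * math.sqrt(math.log(node.visits) / child.visits)
        return exploit + explore
    return max(node.children, key=ucb1)
```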
AI systems that can capture human-like behavior are becoming increasingly useful in situations where humans may want to learn from these systems, collaborate with them, or engage with them as partners. In order to develop human-oriented AI systems, the problem of predicting human actions (as opposed to predicting optimal actions) has received considerable attention. Existing work has focused on capturing human behavior in an aggregate sense, which potentially limits the benefit any particular individual can gain from interacting with these systems. We extend this line of work by developing highly accurate predictive models of human behavior in chess. Chess is a rich domain for exploring human-AI interaction because it combines a unique set of properties: AI systems achieved superhuman performance many years ago, yet humans still interact with them closely, both as opponents and as preparation tools, and there is a vast amount of recorded data on individual players' games. Starting with Maia, a version of AlphaZero trained on a population of human players, we demonstrate that we can significantly improve prediction accuracy on a particular player's moves by applying a series of fine-tuning methods. Furthermore, our personalized models can be used to perform stylometry, predicting who made a given set of moves, which indicates that they capture human decision-making at an individual level. Our work demonstrates a way to bring AI systems into better alignment with the behavior of individual people, which could lead to large improvements in human-AI interaction.
The advent of machine learning models that surpass human decision-making ability in complex domains has initiated a movement towards building AI systems that interact with humans. Many building blocks are essential for this activity, and a central one is the algorithmic characterization of human behavior. While much of the existing work focuses on aggregate human behavior, an important long-range goal is to develop behavioral models that specialize to individual people and can differentiate among them. To formalize this process, we study the problem of behavioral stylometry, in which the task is to identify a decision-maker from their decisions alone. We present a transformer-based approach to behavioral stylometry in the context of chess, where one attempts to identify the player who played a set of games. Our method operates in a few-shot classification framework and can correctly identify a player from among thousands of candidates with 98% accuracy given only 100 labeled games. Even when trained on amateur games, our method generalizes to out-of-distribution samples of grandmaster players, despite the vast differences between amateur and world-class players. Finally, we consider more broadly what our resulting embeddings reveal about human style in chess, as well as the potential ethical implications of powerful methods for identifying individuals from behavioral data.
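As a rough illustration of the few-shot identification setting described above, one could average per-game embeddings for each candidate player and match a query set of games to the nearest centroid by cosine similarity. The embedding model and data layout here are assumptions made for the sketch; this is not the paper's architecture.

```python
# Nearest-centroid few-shot player identification sketch over precomputed game embeddings.
import numpy as np

def identify_player(query_embeddings, reference_embeddings_by_player):
    """Return the candidate whose mean game embedding is closest (cosine similarity)
    to the mean embedding of the query games."""
    def centroid(vectors):
        v = np.mean(np.asarray(vectors), axis=0)
        return v / (np.linalg.norm(v) + 1e-12)        # unit-normalize for cosine similarity
    query = centroid(query_embeddings)
    scores = {player: float(centroid(embs) @ query)
              for player, embs in reference_embeddings_by_player.items()}
    return max(scores, key=scores.get)
```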
Recently, the seminal algorithms AlphaGo and AlphaZero have started a new era in game learning and deep reinforcement learning. While the achievements of AlphaGo and AlphaZero (playing Go and other complex games at a superhuman level) are indeed impressive, a drawback of these architectures is that they require high computational resources. Many researchers are looking for methods that are similar to AlphaZero but have lower computational demands and are thus easier to reproduce. In this paper, we pick an important element of AlphaZero, the Monte Carlo Tree Search (MCTS) planning stage, and combine it with temporal difference (TD) learning agents. We wrap MCTS around TD n-tuple networks for the first time, and we use this wrapping only at test time to create versatile agents while keeping computational demands low. We apply this new architecture to several complex games (Othello, ConnectFour, Rubik's Cube) and show the advantages achieved by this AlphaZero-inspired MCTS wrapper. In particular, we present results showing that this agent is the first one trained on standard hardware (no GPU or TPU) to beat the very strong Othello program Edax up to and including level 7 (most other learning-from-scratch algorithms could only defeat Edax up to level 2).
The highest grossing media franchise of all time, with over \$90 billion in total revenue, is Pokemon. The video games belong to the class of Japanese Role Playing Games (J-RPG). Developing a powerful AI agent for these games is very hard because they present big challenges to MinMax, Monte Carlo Tree Search, and statistical Machine Learning, as they are vastly different from the games well explored in the AI literature. An AI agent for one of these games means significant progress in AI agents for the entire class. Further, the key principles of such work can hopefully inspire approaches to several domains that require excellent teamwork under conditions of extreme uncertainty, including managing a team of doctors, robots, or employees in an ever changing environment, like a pandemic stricken region or a war-zone. In this paper we first explain the mechanics of the game and perform a game analysis. We continue by proposing unique AI algorithms based on our understanding that the two biggest challenges in the game are keeping a balanced team and dealing with three sources of uncertainty. Later on, we describe why evaluating the performance of such agents is challenging and we present the results of our approach. Our AI agent performed significantly better than all previous attempts and peaked at the 33rd place in the world, in one of the most popular battle formats, while running on only 4 single-socket servers.
Monte Carlo Tree Search (MCTS) is a powerful approach to designing game-playing bots and to solving sequential decision problems. The method relies on intelligent tree search that balances exploration and exploitation. MCTS performs random sampling in the form of simulations and stores statistics of actions to make more educated choices in each subsequent iteration. The method has become a state-of-the-art technique for combinatorial games; however, in more complex games (e.g., those with high branching factors or real-time ones), as well as in various practical domains (e.g., transportation, scheduling, or security), an effective MCTS application often requires problem-dependent modifications or integration with other techniques. Such domain-specific modifications and hybrid approaches are the main focus of this survey. The last major MCTS survey was published in 2012; contributions that have appeared since its release are of particular interest here.
What do superhuman neural network agents such as AlphaZero actually learn? This question is of both scientific and practical interest. If the representations of strong neural networks bear no resemblance to human concepts, our ability to understand faithful explanations of their decisions will be limited, ultimately constraining what we can achieve with neural network interpretability. In this work, we provide evidence that human knowledge is acquired by the AlphaZero neural network as it trains on the game of chess. By probing for a broad range of human chess concepts, we show when and where these concepts are represented in the AlphaZero network. We also provide a behavioral analysis focused on opening play, including a qualitative analysis from chess grandmaster Vladimir Kramnik. Finally, we carry out a preliminary investigation into the low-level details of AlphaZero's representations and make the resulting behavioral and representational analyses available online.
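Probing, as used above, usually means fitting a simple (often linear) model from a layer's activations to a human-defined concept label. A sketch under those assumptions follows; the activation extraction and concept labels are placeholders, not the paper's setup.

```python
# Linear concept-probe sketch: predict a human chess concept from network activations.
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_concept_probe(activations, concept_labels):
    """Fit a linear probe from per-position activations (n_samples, n_features)
    to binary concept labels (e.g. 'side to move has a material advantage')."""
    probe = LogisticRegression(max_iter=1000)
    probe.fit(activations, concept_labels)
    return probe, probe.score(activations, concept_labels)   # training accuracy

# Hypothetical usage with placeholder data standing in for real activations/labels.
X = np.random.randn(200, 64)            # placeholder activations from one layer
y = (X[:, 0] > 0).astype(int)           # placeholder concept labels
probe, acc = fit_concept_probe(X, y)
```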
The success of AlphaZero (AZ) has demonstrated that neural-network-based Go AIs can surpass human performance by a large margin. Given that the state space of Go is extremely large and a human player can play the game from any legal state, we ask whether adversarial states exist for Go AIs that may lead them to play surprisingly wrong actions. In this paper, we first extend the concept of adversarial examples to the game of Go: we generate perturbed states that are ``semantically'' equivalent to the original state by adding meaningless moves to the game, and an adversarial state is a perturbed state leading to an undoubtedly inferior action that is obvious even to Go beginners. However, searching for adversarial states is challenging due to the large, discrete, and non-differentiable search space. To tackle this challenge, we develop the first adversarial attack on Go AIs that can efficiently search for adversarial states by strategically reducing the search space. This method can also be extended to other board games such as NoGo. Experimentally, we show that the actions taken by both the Policy-Value neural network (PV-NN) and Monte Carlo tree search (MCTS) can be misled by adding one or two meaningless stones; for example, on 58\% of the AlphaGo Zero self-play games, our method can make the widely used KataGo agent (with 50 MCTS simulations) play a losing action by adding two meaningless stones. We additionally evaluated the adversarial examples found by our algorithm with amateur human Go players, and 90\% of the examples indeed lead the Go agent to play an obviously inferior action. Our code is available at \url{https://PaperCode.cc/GoAttack}.
Despite many recent advancements in language modeling, state-of-the-art language models lack grounding in the real world and struggle with tasks involving complex reasoning. Meanwhile, advances in the symbolic reasoning capabilities of AI have led to systems that outperform humans in games like chess and Go (Silver et al., 2018). Chess commentary provides an interesting domain for bridging these two fields of research, as it requires reasoning over a complex board state and providing analyses in natural language. In this work we demonstrate how to combine symbolic reasoning engines with controllable language models to generate chess commentaries. We conduct experiments to demonstrate that our approach generates commentaries that are preferred by human judges over previous baselines.