This paper introduces a "Hybrid Self-Attention NEAT" method to improve the original NeuroEvolution of Augmenting Topologies (NEAT) algorithm on high-dimensional inputs. Although NEAT has shown remarkable results on a variety of challenging tasks, it cannot create a well-tuned network when the input representation is high dimensional. Our study addresses this limitation by using self-attention as an indirect encoding method to select the most important parts of the input. In addition, we improve overall performance with the help of a hybrid method for evolving the final network weights. The main conclusion is that Hybrid Self-Attention NEAT can eliminate this restriction of the original NEAT. The results indicate that, compared with other evolutionary algorithms, our model can achieve comparable scores on Atari games from raw pixel input while using a smaller number of parameters.
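As a rough illustration of self-attention used as an indirect encoding over pixels, a minimal sketch follows; the patch size, projection dimensions, and top-k selection are illustrative assumptions, not the paper's exact design:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attend_and_select(frame, W_q, W_k, patch=8, top_k=10):
    """Score image patches with single-head self-attention and keep the
    top-k most-attended patches as a compact input for the evolved network."""
    h, w = frame.shape
    # Split the grayscale frame into flattened non-overlapping patches.
    patches = (frame.reshape(h // patch, patch, w // patch, patch)
                    .transpose(0, 2, 1, 3)
                    .reshape(-1, patch * patch))
    q, k = patches @ W_q, patches @ W_k           # project to a small dim
    scores = softmax(q @ k.T / np.sqrt(W_q.shape[1]))
    importance = scores.sum(axis=0)               # how much each patch is attended to
    return np.argsort(importance)[-top_k:]        # indices of the most salient patches

# Toy usage: random 64x64 frame, random projections (these would be evolved).
rng = np.random.default_rng(0)
frame = rng.random((64, 64))
W_q, W_k = rng.normal(size=(64, 16)), rng.normal(size=(64, 16))
print(attend_and_select(frame, W_q, W_k))
```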
Self-driving vehicles have attracted interest ever since the automation of various tasks began. Humans are prone to exhaustion and have slow response times on the road, and above all, roughly 1.35 million road traffic deaths occur every year, which already makes driving a dangerous task. Autonomous driving is expected to reduce the number of driving accidents around the world, which is why this problem is of interest to researchers. Currently, self-driving vehicles use different algorithms for the various sub-problems involved in making a vehicle autonomous. We will focus on reinforcement learning algorithms, more specifically the Q-learning algorithm and NeuroEvolution of Augmenting Topologies (NEAT), a combination of evolutionary algorithms and artificial neural networks, to train a model agent to learn how to drive on a given path. This paper will focus on a comparison between the two aforementioned algorithms.
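For reference, the tabular Q-learning update at the heart of the first approach can be sketched as below; the state/action discretization of the track is an illustrative assumption:

```python
import numpy as np

n_states, n_actions = 100, 4          # assumed discretization of the driving task
alpha, gamma, epsilon = 0.1, 0.99, 0.1
Q = np.zeros((n_states, n_actions))

def q_update(s, a, r, s_next):
    """One temporal-difference step: move Q(s, a) toward the bootstrapped target."""
    target = r + gamma * Q[s_next].max()
    Q[s, a] += alpha * (target - Q[s, a])

def act(s, rng=np.random.default_rng()):
    """Epsilon-greedy action selection over the current value estimates."""
    if rng.random() < epsilon:
        return int(rng.integers(n_actions))
    return int(Q[s].argmax())
```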
In this article we introduce the Arcade Learning Environment (ALE): both a challenge problem and a platform and methodology for evaluating the development of general, domain-independent AI technology. ALE provides an interface to hundreds of Atari 2600 game environments, each one different, interesting, and designed to be a challenge for human players. ALE presents significant research challenges for reinforcement learning, model learning, model-based planning, imitation learning, transfer learning, and intrinsic motivation. Most importantly, it provides a rigorous testbed for evaluating and comparing approaches to these problems. We illustrate the promise of ALE by developing and benchmarking domain-independent agents designed using well-established AI techniques for both reinforcement learning and planning. In doing so, we also propose an evaluation methodology made possible by ALE, reporting empirical results on over 55 different games. All of the software, including the benchmark agents, is publicly available.
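A minimal interaction loop with ALE through the ale-py Python bindings might look like the following; the ROM path is a placeholder, and API names reflect the ale-py package and may differ across versions:

```python
from ale_py import ALEInterface

ale = ALEInterface()
ale.loadROM("breakout.bin")           # placeholder path to a legally obtained ROM
actions = ale.getMinimalActionSet()   # game-specific legal actions

total_reward = 0.0
while not ale.game_over():
    frame = ale.getScreenRGB()        # raw pixel observation
    a = actions[0]                    # a real agent would choose based on `frame`
    total_reward += ale.act(a)        # step the emulator, collect reward
ale.reset_game()
print(total_reward)
```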
Neuroevolution has recently been shown to be very competitive in reinforcement learning (RL) settings and is able to alleviate some of the drawbacks of gradient-based methods. This paper focuses on applying neuroevolution using a simple genetic algorithm (GA) to find the weights of a neural network that produces the best-performing agents. In addition, we propose two novel modifications that improve data efficiency and speed of convergence compared with the initial implementation. The modifications are evaluated in an environment provided by OpenAI Gym and are shown to significantly outperform the baseline approach.
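This is not the paper's exact implementation, but a minimal sketch of the kind of simple GA it builds on, treating a network's weights as one flat vector evolved by elitism and Gaussian mutation:

```python
import numpy as np

def evolve(fitness, dim, pop_size=64, sigma=0.05, elite=8, gens=100, seed=0):
    """Plain GA over flat weight vectors: keep the best individuals each
    generation and refill the population with mutated copies (no crossover)."""
    rng = np.random.default_rng(seed)
    pop = rng.normal(size=(pop_size, dim))
    for _ in range(gens):
        scores = np.array([fitness(w) for w in pop])
        elites = pop[np.argsort(scores)[-elite:]]           # top performers survive
        parents = elites[rng.integers(elite, size=pop_size - elite)]
        children = parents + sigma * rng.normal(size=parents.shape)
        pop = np.vstack([elites, children])                 # elitism + mutants
    return pop[np.argmax([fitness(w) for w in pop])]

# Toy fitness: maximize closeness to an arbitrary target weight vector.
target = np.ones(10)
best = evolve(lambda w: -np.sum((w - target) ** 2), dim=10)
```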
Artificial intelligence, when merged with games, makes an ideal structure for research and for advancing the field. Multi-agent games provide multiple controls for each agent and generate large amounts of data while increasing search complexity. Advanced search methods are therefore needed to find solutions and create artificial-intelligence agents. In this paper, we propose our novel Evolutionary Monte Carlo Tree Search (FEMCTS) agent, which borrows ideas from Evolutionary Algorithms (EA) and Monte Carlo Tree Search (MCTS), to play the game of Pommerman. It significantly outperforms the Rolling Horizon Evolutionary Algorithm (RHEA) in the high-observability setting and performs almost as well as MCTS for most game seeds, outperforming it in some cases.
When simulating soft robots, both their morphology and their controllers play important roles in task performance. This paper introduces a new method to co-evolve these two components in the same process. We do that by using the hyperNEAT algorithm to generate two separate neural networks in one pass, one responsible for the design of the robot body structure and the other for the control of the robot. The key difference between our method and most existing approaches is that it does not treat the development of the morphology and the controller as separate processes. Similar to nature, our method derives both the "brain" and the "body" of an agent from a single genome and develops them together. While our approach is more realistic and doesn't require an arbitrary separation of processes during evolution, it also makes the problem more complex because the search space for this single genome becomes larger and any mutation to the genome affects the "brain" and the "body" at the same time. Additionally, we present a new speciation function that takes into consideration both the genotypic distance, as is the standard for NEAT, and the similarity between robot bodies. By using this function, agents with very different bodies are more likely to be in different species, which allows robots with different morphologies to have more specialized controllers, since they won't cross over with other robots that are too different from them. We evaluate the presented methods on four tasks and observe that even though the search space is larger, having a single genome makes the evolution process converge faster when compared to having separate genomes for body and control. The agents in our population also show morphologies with a high degree of regularity and controllers capable of coordinating the voxels to produce the necessary movements.
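A sketch of the kind of combined speciation distance the abstract describes; the weighting, the stand-in genotypic term, and the voxel-difference body metric are illustrative assumptions:

```python
import numpy as np

def neat_compatibility(g1, g2, c3=0.4):
    """Stand-in for NEAT's compatibility distance; for brevity we keep only
    the average weight-difference term of the standard formula."""
    n = min(len(g1), len(g2))
    return c3 * float(np.abs(np.asarray(g1[:n]) - np.asarray(g2[:n])).mean())

def speciation_distance(g1, g2, body1, body2, alpha=1.0, beta=1.0):
    """Combine genotypic distance with a morphological term so that robots
    with very different bodies tend to fall into different species."""
    # Illustrative body term: fraction of differing voxels (equal-size grids).
    body_term = float((np.asarray(body1) != np.asarray(body2)).mean())
    return alpha * neat_compatibility(g1, g2) + beta * body_term
```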
To assist game developers in creating game NPCs, we present EvolvingBehavior, a novel tool for using genetic programming to evolve behavior trees in Unreal Engine 4. In a preliminary evaluation, we compare evolved behavior against hand-designed trees created by our researchers and against randomly grown trees in a 3D survival game. We find that, in this context, EvolvingBehavior is capable of producing behavior that approaches the designers' goals. Finally, we discuss the implications and future avenues of exploration for co-creative game AI design tools, as well as the challenges and difficulties of behavior tree evolution.
Monte Carlo Tree Search (MCTS) is a powerful approach to designing game-playing bots and to solving sequential decision problems. The method relies on an intelligent tree search that balances exploration and exploitation. MCTS performs random sampling in the form of simulations and stores the statistics of actions to make more educated choices in each subsequent iteration. The method has become a state-of-the-art technique for combinatorial games; however, in more complex games (such as those with a high branching factor or real-time ones), as well as in various practical domains (e.g., transportation, scheduling, or security), an effective MCTS application often requires problem-dependent modifications or integration with other techniques. Such domain-specific modifications and hybrid approaches are the main focus of this survey. The last major MCTS survey was published in 2012; contributions that have appeared since its release are of particular interest.
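The exploration–exploitation balance the abstract mentions is classically implemented with the UCT selection rule; a minimal sketch (the exploration constant is a conventional choice, not taken from this survey):

```python
import math

def uct_select(children, c=math.sqrt(2)):
    """Pick the child maximizing UCT: mean value plus an exploration bonus
    that shrinks as a child accumulates visits.  `children` is a list of
    (total_value, visit_count) pairs; the parent's visits are their sum."""
    parent_visits = sum(v for _, v in children)
    def uct(child):
        value, visits = child
        if visits == 0:
            return float("inf")            # always try unvisited actions first
        return value / visits + c * math.sqrt(math.log(parent_visits) / visits)
    return max(range(len(children)), key=lambda i: uct(children[i]))

# Toy usage: three actions with (total reward, visit count) statistics.
print(uct_select([(3.0, 5), (1.0, 1), (0.0, 0)]))   # -> 2 (unvisited wins)
```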
A generative recurrent neural network is quickly trained in an unsupervised manner to model popular reinforcement learning environments through compressed spatiotemporal representations. The world model's extracted features are fed into compact and simple policies trained by evolution, achieving state of the art results in various environments. We also train our agent entirely inside of an environment generated by its own internal world model, and transfer this policy back into the actual environment. Interactive version of paper: https://worldmodels.github.io (32nd Conference on Neural Information Processing Systems, NIPS 2018).
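The pipeline the abstract describes, compress observations, model their dynamics, and act from the compressed state, can be sketched as follows; component sizes and interfaces are illustrative stand-ins, not the paper's exact architecture:

```python
import numpy as np

rng = np.random.default_rng(0)
Z, H, A = 32, 64, 3                       # latent, hidden, and action sizes

def vae_encode(obs):                      # V: stand-in for the learned encoder
    return rng.normal(size=Z)

class MemoryRNN:                          # M: stand-in for the recurrent world model
    def __init__(self):
        self.h = np.zeros(H)
        self.W = rng.normal(size=(H, H + Z + A)) * 0.01
    def step(self, z, a):
        self.h = np.tanh(self.W @ np.concatenate([self.h, z, a]))
        return self.h

W_c = rng.normal(size=(A, Z + H)) * 0.01  # C: compact controller, trained by evolution

def controller(z, h):
    return np.tanh(W_c @ np.concatenate([z, h]))

m = MemoryRNN()
obs = np.zeros((64, 64, 3))               # placeholder frame
z = vae_encode(obs)
a = controller(z, m.h)                    # act from the compressed state
m.step(z, a)                              # update the world model's memory
```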
Simultaneously evolving the morphologies (bodies) and controllers (brains) of robots can lead to a mismatch between the inherited body and brain of the offspring. To mitigate this problem, a learning period through the so-called Triangle of Life framework was proposed relatively early on. However, an empirical assessment has been lacking to date. In this paper, we investigate the effects of such a learning mechanism from different perspectives. Using extensive simulations, we show that, compared with a purely evolutionary approach, learning can greatly increase task performance and reduce the number of generations required to reach a given fitness level. Furthermore, although learning only directly affects the controllers, we demonstrate that the evolved morphologies will also be different. This provides a quantitative demonstration that changes in the brain can induce changes in the body. Finally, we examine the concept of morphological intelligence, quantified by the ability of a given body to learn. We observe that the learning delta, the performance difference between the inherited and the learned brain, grows throughout the evolutionary process. This shows that evolution is producing robots with increasing plasticity, that is, successive generations become better and better learners, which in turn makes them better and better at the given task. Altogether, our results suggest that the Triangle of Life is not only a concept of theoretical interest but also a system architecture with practical benefits.
In this work, we consider the problem of procedural content generation for video-game levels. Prior approaches have relied on evolutionary search (ES) methods capable of generating diverse levels, but the generation process is slow, which is problematic in real-time settings. Reinforcement learning (RL) has also been proposed to tackle the same problem, and while level generation is fast, training times can be prohibitively expensive. We propose a framework for procedural content generation that combines the strengths of ES and RL. In particular, our approach first uses ES to generate a sequence of levels, and then uses behavior cloning to distil these levels into a policy, which can subsequently be queried to produce new levels quickly. We apply our approach to a maze game and Super Mario Bros, and the results show that our approach does in fact reduce the time required for level generation, especially when an increasing number of valid levels is required.
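One way to read the pipeline is ES producing (level, edit) pairs that a supervised policy then clones; the state/action encoding of a level edit below is an illustrative assumption:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Phase 1 (assumed interface): evolutionary search yields trajectories of
# level edits, i.e. (flattened level grid, tile edit chosen) pairs.
def collect_es_dataset(es_trajectories):
    X = np.array([level.ravel() for level, _ in es_trajectories])
    y = np.array([edit for _, edit in es_trajectories])
    return X, y

# Phase 2: behavior cloning distils those edits into a fast policy.
rng = np.random.default_rng(0)
fake_trajs = [(rng.integers(0, 2, (8, 8)), rng.integers(4)) for _ in range(200)]
X, y = collect_es_dataset(fake_trajs)
policy = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300).fit(X, y)

# Phase 3: query the cloned policy to propose edits for a new level quickly.
new_level = rng.integers(0, 2, (8, 8))
print(policy.predict(new_level.ravel()[None, :]))
```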
Learning efficient and interpretable policies has been a challenging task in reinforcement learning (RL), particularly in the visual RL setting with complex scenes. While neural networks have achieved competitive performance, the resulting policies are often over-parameterized black boxes that are difficult to interpret and deploy efficiently. More recent symbolic RL frameworks have shown that high-level domain-specific programming logic can be designed to handle both policy learning and symbolic planning. However, these approaches rely on coded primitives with little feature learning, and when applied to high-dimensional visual scenes, they can suffer from scalability issues and perform poorly when images have complex object interactions. To address these challenges, we propose \textit{Differentiable Symbolic Expression Search} (DiffSES), a novel symbolic learning approach that discovers discrete symbolic policies using partially differentiable optimization. By using object-level abstractions instead of raw pixel-level inputs, DiffSES is able to leverage the simplicity and scalability advantages of symbolic expressions, while also incorporating the strengths of neural networks for feature learning and optimization. Our experiments demonstrate that DiffSES is able to generate symbolic policies that are simpler and more scalable than state-of-the-art symbolic RL methods, with a reduced amount of symbolic prior knowledge.
We present the first deep learning model to successfully learn control policies directly from high-dimensional sensory input using reinforcement learning. The model is a convolutional neural network, trained with a variant of Q-learning, whose input is raw pixels and whose output is a value function estimating future rewards. We apply our method to seven Atari 2600 games from the Arcade Learning Environment, with no adjustment of the architecture or learning algorithm. We find that it outperforms all previous approaches on six of the games and surpasses a human expert on three of them.
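The core of the method is a Q-function trained against bootstrapped targets; a minimal sketch of that update, using a linear stand-in for the convolutional network (hyperparameters illustrative; the separate target network came in later work):

```python
import numpy as np

rng = np.random.default_rng(0)
obs_dim, n_actions, gamma, lr = 4, 2, 0.99, 1e-2
W = rng.normal(size=(n_actions, obs_dim)) * 0.1   # linear stand-in for the CNN

def q_values(s):
    return W @ s

def td_step(s, a, r, s_next, done):
    """One Q-learning regression step: nudge Q(s, a) toward
    r + gamma * max_a' Q(s', a'), treating the target as a constant."""
    target = r + (0.0 if done else gamma * q_values(s_next).max())
    td_error = target - q_values(s)[a]
    W[a] += lr * td_error * s                      # gradient of squared TD error

# Toy transition from a replayed experience tuple.
s, s2 = rng.normal(size=obs_dim), rng.normal(size=obs_dim)
td_step(s, a=0, r=1.0, s_next=s2, done=False)
```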
Imitation learning techniques aim to mimic human behavior in a given task. An agent (a learning machine) is trained to perform a task from demonstrations by learning a mapping between observations and actions. The idea of teaching by imitation has been around for many years; however, the field has been gaining attention recently due to advances in computing and sensing, as well as rising demand for intelligent applications. The paradigm of learning by imitation is gaining popularity because it facilitates teaching complex tasks with minimal expert knowledge of the tasks. Generic imitation learning methods could potentially reduce the problem of teaching a task to that of providing demonstrations, without the need for explicit programming or designing reward functions specific to the task. Modern sensors are able to collect and transmit high volumes of data rapidly, and processors with high computational power allow fast processing that maps the sensory data to actions in a timely manner. This opens the door for many potential AI applications that require real-time perception and reaction, such as humanoid robots, self-driving vehicles, human-computer interaction, and computer games, to name a few. However, specialized algorithms are needed to effectively and robustly learn models, as learning by imitation poses its own set of challenges. In this paper, we survey imitation learning methods and present design options in different steps of the learning process. We introduce a background and motivation for the field as well as highlight challenges specific to the imitation problem. Methods for designing and evaluating imitation learning tasks are categorized and reviewed. Special attention is given to learning methods in robotics and games, as these domains are the most popular in the literature and provide a wide array of problems and methodologies. We extensively discuss combining imitation learning approaches using different sources and methods, as well as incorporating other motion learning methods to enhance imitation. We also discuss the potential impact on industry, present major applications, and highlight current and future research directions.
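Among the design options such a survey covers, interactive data aggregation (DAgger-style) is a simple one to sketch: roll out the current learner, have the expert label the states it actually visits, and retrain on the aggregated data. The environment and expert below are toy stand-ins:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def dagger(expert, env_rollout, iters=5, seed=0):
    """Iteratively aggregate expert labels on learner-visited states, so the
    cloned policy sees the state distribution it induces, not the expert's."""
    rng = np.random.default_rng(seed)
    X, y, policy = [], [], None
    for _ in range(iters):
        states = env_rollout(policy, rng)          # states visited by the learner
        X.extend(states)
        y.extend(expert(s) for s in states)        # expert labels those states
        policy = DecisionTreeClassifier().fit(np.array(X), np.array(y))
    return policy

# Toy problem: the expert picks action 1 whenever the first feature is positive.
expert = lambda s: int(s[0] > 0)
env_rollout = lambda pol, rng: list(rng.normal(size=(32, 3)))
clf = dagger(expert, env_rollout)
```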
This volume contains the late-breaking abstracts submitted to the EVO* 2022 conference, held in Madrid (Spain) from 20 to 22 April. These papers present ongoing research and preliminary results investigating the application of different approaches (mainly evolutionary computation) to different problems, most of them real-world ones.
Can reproduction alone, in the context of survival, produce intelligence in our machines? In this work, self-replication is explored as a mechanism for the emergence of intelligent behavior in modern learning environments. By focusing purely on survival while undergoing natural selection, evolved organisms are shown to produce meaningful, complex, and intelligent behavior, demonstrating creative solutions to challenging problems without any notion of reward or objectives. Atari and robotic learning environments are redefined in terms of natural selection, and the behavior that emerged in self-replicating organisms during these experiments is described in detail.
Recent progress in artificial intelligence (AI) has renewed interest in building systems that learn and think like people. Many advances have come from using deep neural networks trained end-to-end in tasks such as object recognition, video games, and board games, achieving performance that equals or even beats humans in some respects. Despite their biological inspiration and performance achievements, these systems differ from human intelligence in crucial ways. We review progress in cognitive science suggesting that truly human-like learning and thinking machines will have to reach beyond current engineering trends in both what they learn, and how they learn it. Specifically, we argue that these machines should (a) build causal models of the world that support explanation and understanding, rather than merely solving pattern recognition problems; (b) ground learning in intuitive theories of physics and psychology, to support and enrich the knowledge that is learned; and (c) harness compositionality and learning-to-learn to rapidly acquire and generalize knowledge to new tasks and situations. We suggest concrete challenges and promising routes towards these goals that can combine the strengths of recent neural network advances with more structured cognitive models.
Current learning machines have successfully solved hard application problems, reaching high accuracy and displaying seemingly "intelligent" behavior. Here we apply recent techniques for explaining decisions of state-of-the-art learning machines and analyze various tasks from computer vision and arcade games. This showcases a spectrum of problem-solving behaviors ranging from naive and short-sighted, to well-informed and strategic. We observe that standard performance evaluation metrics can be oblivious to distinguishing these diverse problem solving behaviors. Furthermore, we propose our semi-automated Spectral Relevance Analysis that provides a practically effective way of characterizing and validating the behavior of nonlinear learning machines. This helps to assess whether a learned model indeed delivers reliably for the problem that it was conceived for. Furthermore, our work intends to add a voice of caution to the ongoing excitement about machine intelligence and pledges to evaluate and judge some of these recent successes in a more nuanced manner.
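Spectral Relevance Analysis, as described, clusters per-sample relevance maps to surface distinct decision strategies; a rough sketch using off-the-shelf spectral clustering, where the relevance maps are placeholders for ones produced by an attribution method such as layer-wise relevance propagation:

```python
import numpy as np
from sklearn.cluster import SpectralClustering

# Placeholder: one flattened relevance heatmap per prediction, as would be
# produced by an attribution method run over a trained classifier.
rng = np.random.default_rng(0)
relevance_maps = rng.random((200, 28 * 28))

# Cluster the maps; each cluster groups predictions that the model appears
# to solve with a similar strategy, flagging candidates for manual review.
labels = SpectralClustering(n_clusters=4, affinity="nearest_neighbors",
                            random_state=0).fit_predict(relevance_maps)
print(np.bincount(labels))    # cluster sizes: small, odd clusters warrant a look
```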
Search-based procedural content generation (PCG) is a well-known method for level generation in games. Its main advantage is that it is generic and able to satisfy functional constraints. However, search-based PCG is rarely used for real-time generation because of the significant computational cost of running these algorithms online. In this paper, we introduce a novel type of iterative level generator that uses machine learning. We train a model to imitate the evolutionary process and use the model to generate levels. This trained model is able to sequentially modify noisy levels to create better ones, without the need for a fitness function during inference. We evaluate the trained models on a 2D maze generation task. We compare several different versions of the method: training the model at the end of evolution or every 100 generations (assisted evolution), and using the model as a mutation function during evolution. With the assisted evolution process, the final trained models are able to generate mazes with a 99% success rate and a high diversity of 86%. This work opens the door to a new class of learned level generators guided by the evolutionary process, and may increase the adoption of search-based PCG in the game industry.
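At inference time, the generator the abstract describes amounts to repeatedly asking the trained model for the next edit; a schematic loop, where the model interface and tile encoding are assumptions:

```python
import numpy as np

def generate_level(model, shape=(16, 16), steps=50, seed=0):
    """Start from a noisy level and let the trained imitator repair it one
    tile edit at a time -- no fitness function is evaluated at inference."""
    rng = np.random.default_rng(seed)
    level = rng.integers(0, 2, shape)            # noisy wall/floor grid
    for _ in range(steps):
        r, c, tile = model(level)                # model proposes one edit
        level[r, c] = tile
    return level

# Toy stand-in model: flips a random tile (a real model imitates evolution).
def dummy_model(level, rng=np.random.default_rng(1)):
    r, c = rng.integers(level.shape[0]), rng.integers(level.shape[1])
    return r, c, 1 - level[r, c]

maze = generate_level(dummy_model)
```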
Among the various soft-computing approaches to time series forecasting, fuzzy cognitive maps (FCMs) have shown remarkable results as a tool for modeling and analyzing the dynamics of complex systems. FCMs have similarities to recurrent neural networks and can be classified as a neuro-fuzzy method. In other words, FCMs are a hybrid of aspects of fuzzy logic, neural networks, and expert systems, and they act as a powerful tool for simulating and studying the dynamic behavior of complex systems. Their most interesting features are knowledge interpretability, dynamic characteristics, and learning capability. The goal of this survey is chiefly to present an overview of the most relevant and recent FCM-based time series forecasting models proposed in the literature. In addition, the paper introduces the fundamentals of FCM models and learning methodologies. Moreover, the survey offers some ideas for enhancing the capabilities of FCMs so as to address challenges that arise in real-world experiments, such as handling non-stationary data and scalability issues. Equipping FCMs with fast learning algorithms is likewise one of the main concerns in this field.
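The dynamics behind an FCM are a simple iterated update: each concept's next activation is a squashed, weighted sum of the others (formulation variants exist, e.g., with a self term). A minimal sketch with illustrative weights and concept count:

```python
import numpy as np

def fcm_step(a, W, f=lambda x: 1.0 / (1.0 + np.exp(-x))):
    """One FCM iteration: a_i(t+1) = f( sum_j w_ji * a_j(t) ), with a
    sigmoid squashing function keeping activations in (0, 1)."""
    return f(W.T @ a)

# Three concepts with illustrative causal weights w_ji (row j influences i).
W = np.array([[ 0.0, 0.6, -0.3],
              [ 0.4, 0.0,  0.5],
              [-0.2, 0.3,  0.0]])
a = np.array([0.5, 0.2, 0.8])              # initial concept activations
for _ in range(10):                        # iterate toward a (near) fixed point
    a = fcm_step(a, W)
print(a)
```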