As progress in AI continues to advance, it is important to know how advanced systems will make choices and in what ways they may fail. Machines can already outsmart humans in certain domains, and understanding how to safely build systems whose capabilities may be at or above the human level is of particular concern. One might suspect that artificially generally intelligent (AGI) and artificially superintelligent (ASI) systems should be modeled as something which humans, by definition, cannot reliably outsmart. As a challenge to this assumption, this paper presents the Achilles Heel hypothesis, which states that even a potentially superintelligent system may nonetheless have stable decision-theoretic delusions which cause it to make obviously irrational decisions in adversarial settings. In a survey of relevant dilemmas and paradoxes from the decision theory literature, a number of these potential Achilles Heels are discussed in the context of this hypothesis. Several novel contributions are made toward understanding the ways in which these weaknesses might be implanted into a system.
Within the coming decades, artificial general intelligence (AGI) may surpass human capabilities at a wide range of important tasks. This report makes the case for why, without substantial action to prevent it, AGIs will likely use their intelligence to pursue goals which are misaligned from a human perspective, with potentially catastrophic consequences. The report aims to cover the key arguments motivating concern about the alignment problem in a way that is as succinct, concrete, and technically grounded as possible. I argue that realistic training processes plausibly lead to misaligned goals in AGIs, in particular because neural networks trained via reinforcement learning will learn to plan towards achieving a range of goals, gain more reward by deceptively pursuing misaligned goals, and generalize in ways which undermine obedience. As in an earlier report by Cotra (2022), I explain my claims with reference to an illustrative AGI training process, and then outline possible research directions for addressing different aspects of the problem.
We analyze the type of learned optimization that occurs when a learned model (such as a neural network) is itself an optimizer, a situation we refer to as mesa-optimization, a term we introduce in this paper. We believe that the possibility of mesa-optimization raises two important questions for the safety and transparency of advanced machine learning systems. First, under what circumstances will learned models be optimizers, including when they should not be? Second, when a learned model is an optimizer, what will its objective be, how will it differ from the loss function it was trained under, and how can it be aligned? In this paper, we provide an in-depth analysis of these two primary questions and give an overview of topics for future research.
Adequately assigning credit to actions for future outcomes based on their contributions is a long-standing open challenge in Reinforcement Learning. The assumptions of the most commonly used credit assignment method are disadvantageous in tasks where the effects of decisions are not immediately evident. Furthermore, this method can only evaluate actions that have been selected by the agent, making it highly inefficient. Still, no alternative methods have been widely adopted in the field. Hindsight Credit Assignment is a promising, but still unexplored candidate, which aims to solve the problems of both long-term and counterfactual credit assignment. In this thesis, we empirically investigate Hindsight Credit Assignment to identify its main benefits, and key points to improve. Then, we apply it to factored state representations, and in particular to state representations based on the causal structure of the environment. In this setting, we propose a variant of Hindsight Credit Assignment that effectively exploits a given causal structure. We show that our modification greatly decreases the workload of Hindsight Credit Assignment, making it more efficient and enabling it to outperform the baseline credit assignment method on various tasks. This opens the way to other methods based on given or learned causal structures.
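To make the reweighting idea concrete, the following toy sketch estimates action values for every action, including rarely taken ones, from experience gathered under a single behaviour policy, by weighting each observed return with the ratio of a hindsight probability to the policy probability. It is a one-step (bandit) illustration under assumed names and constants, not the algorithm studied in the thesis or the original Hindsight Credit Assignment method.

```python
import numpy as np

# Toy illustration of hindsight reweighting in a one-step setting:
# evaluate every action from data generated under one policy by
# reweighting observed returns with h(a | z) / pi(a), where h(a | z)
# is the probability that action a was taken given that return z
# was observed. All quantities here are illustrative assumptions.

rng = np.random.default_rng(0)

n_actions = 3
pi = np.array([0.7, 0.2, 0.1])            # behaviour policy (rarely tries action 2)
reward_probs = np.array([0.3, 0.5, 0.8])  # P(reward = 1 | action); the true Q-values

# Generate experience under pi.
n = 200_000
actions = rng.choice(n_actions, size=n, p=pi)
returns = (rng.random(n) < reward_probs[actions]).astype(int)  # z in {0, 1}

# Estimate the hindsight distribution h(a | z) by counting.
h = np.zeros((2, n_actions))
for z in (0, 1):
    mask = returns == z
    h[z] = np.bincount(actions[mask], minlength=n_actions) / mask.sum()

# Hindsight estimate: Q(a) ~ mean over samples of h(a | z_i) / pi(a) * z_i.
q_hindsight = (h[returns] / pi).T @ returns / n

# Baseline: only average the returns of the action actually taken.
q_montecarlo = np.array([returns[actions == a].mean() for a in range(n_actions)])

print("true Q:        ", reward_probs)
print("hindsight Q:   ", q_hindsight.round(3))
print("Monte Carlo Q: ", q_montecarlo.round(3))
```

In this toy case the hindsight estimate and the plain Monte Carlo estimate agree, but the hindsight form evaluates all actions from the same pooled data, which is the property the thesis builds on for long-term and counterfactual credit assignment.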
This paper surveys the field of reinforcement learning from a computer-science perspective. It is written to be accessible to researchers familiar with machine learning. Both the historical basis of the field and a broad selection of current work are summarized. Reinforcement learning is the problem faced by an agent that learns behavior through trial-and-error interactions with a dynamic environment. The work described here has a resemblance to work in psychology, but differs considerably in the details and in the use of the word "reinforcement." The paper discusses central issues of reinforcement learning, including trading off exploration and exploitation, establishing the foundations of the field via Markov decision theory, learning from delayed reinforcement, constructing empirical models to accelerate learning, making use of generalization and hierarchy, and coping with hidden state. It concludes with a survey of some implemented systems and an assessment of the practical utility of current methods for reinforcement learning.
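As a concrete instance of the trial-and-error loop the survey describes, here is a minimal tabular Q-learning agent with epsilon-greedy exploration on a small chain environment. The environment, constants, and hyperparameters are illustrative assumptions rather than anything taken from the survey.

```python
import random

# Minimal sketch: tabular Q-learning with epsilon-greedy exploration on a
# 5-state chain (move left/right, reward 1 only at the rightmost state).

N_STATES, LEFT, RIGHT = 5, 0, 1
alpha, gamma, epsilon = 0.1, 0.95, 0.1

def step(state, action):
    """One environment transition; the episode ends at the rightmost state."""
    next_state = max(state - 1, 0) if action == LEFT else min(state + 1, N_STATES - 1)
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    done = next_state == N_STATES - 1
    return next_state, reward, done

Q = [[0.0, 0.0] for _ in range(N_STATES)]

for episode in range(500):
    state, done = 0, False
    while not done:
        # Exploration vs. exploitation: occasionally try a random action.
        if random.random() < epsilon:
            action = random.choice([LEFT, RIGHT])
        else:
            # Greedy action, with ties broken at random.
            action = max(random.sample([LEFT, RIGHT], 2), key=lambda a: Q[state][a])
        next_state, reward, done = step(state, action)
        # Temporal-difference update from delayed reinforcement.
        target = reward + (0.0 if done else gamma * max(Q[next_state]))
        Q[state][action] += alpha * (target - Q[state][action])
        state = next_state

print([[round(q, 2) for q in row] for row in Q])
```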
The debate about whether or not humans have free will has been going on for centuries. While there are good arguments, based on our current understanding of the laws of nature, that humans cannot have free will, most people believe that they do. This discrepancy begs for an explanation. If we accept that we do not have free will, we face two problems: (1) while free will is a very commonly used concept that everyone intuitively understands, what are we actually referring to when we say that an action or choice is "free" or not? And (2) why is the belief in free will so common? Where does this belief come from, and what is its purpose? In this paper, we examine these questions from the perspective of reinforcement learning (RL). RL is a framework originally developed for training artificial agents. However, it can also be used as a computational model of human decision-making and learning, and by doing so we propose that the first question can be answered by observing that people's common-sense understanding of freedom is closely related to the information entropy of an RL agent's normalized action values, while the second can be explained by the necessity for agents to model themselves as the authors of their decisions, just as they must when dealing with the temporal credit assignment problem. In short, we propose that by applying the RL framework as a model of human learning, it becomes evident that in order for us to learn efficiently and be intelligent, we need to view ourselves as if we had free will.
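The quantity the paper links to the everyday notion of a "free" choice can be sketched directly. The snippet below computes the Shannon entropy of softmax-normalized action values; the softmax normalization and the example Q-values are assumptions made for illustration, not details taken from the paper.

```python
import numpy as np

# Entropy (in bits) of softmax-normalized action values: high entropy
# means several options remain live, low entropy means the choice is
# effectively forced. The normalization scheme is an assumption.

def action_entropy(q_values, temperature=1.0):
    q = np.asarray(q_values, dtype=float) / temperature
    p = np.exp(q - q.max())
    p /= p.sum()
    return float(-(p * np.log2(p)).sum())

# A choice with several comparably valued options feels "free" ...
print(action_entropy([1.0, 0.9, 1.1]))   # close to log2(3) = 1.58 bits

# ... while a choice dominated by one action feels forced.
print(action_entropy([10.0, 0.0, 0.0]))  # close to 0 bits
```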
Curiosity for machine agents has been a focus of lively research activity. The study of human and animal curiosity, particularly specific curiosity, has unearthed several properties that would offer important benefits for machine learners, but that have not yet been well-explored in machine intelligence. In this work, we conduct a comprehensive, multidisciplinary survey of the field of animal and machine curiosity. As a principal contribution of this work, we use this survey as a foundation to introduce and define what we consider to be five of the most important properties of specific curiosity: 1) directedness towards inostensible referents, 2) cessation when satisfied, 3) voluntary exposure, 4) transience, and 5) coherent long-term learning. As a second main contribution of this work, we show how these properties may be implemented together in a proof-of-concept reinforcement learning agent: we demonstrate how the properties manifest in the behaviour of this agent in a simple non-episodic grid-world environment that includes curiosity-inducing locations and induced targets of curiosity. As we would hope, our example of a computational specific curiosity agent exhibits short-term directed behaviour while updating long-term preferences to adaptively seek out curiosity-inducing situations. This work, therefore, presents a landmark synthesis and translation of specific curiosity to the domain of machine learning and reinforcement learning and provides a novel view into how specific curiosity operates and in the future might be integrated into the behaviour of goal-seeking, decision-making computational agents in complex environments.
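As a rough illustration of how a few of these properties might look in code, the sketch below implements a toy curiosity bonus exhibiting directedness toward a named target, cessation when satisfied, and transience. All class names, constants, and the decay scheme are hypothetical and far simpler than the proof-of-concept agent described above.

```python
# Toy curiosity bonus illustrating three of the five named properties.
# Everything here is an illustrative assumption, not the paper's agent.

class SpecificCuriosity:
    def __init__(self, decay=0.99, strength=1.0):
        self.decay = decay       # transience: interest fades with time
        self.strength = strength
        self.target = None       # current inostensible referent, if any
        self.level = 0.0

    def on_cue(self, target):
        """A curiosity-inducing cue names a target the agent has not yet seen."""
        self.target = target
        self.level = self.strength

    def bonus(self, observation):
        """Intrinsic reward that directs behaviour toward the target."""
        if self.target is None:
            return 0.0
        if observation == self.target:   # cessation when satisfied
            self.target = None
            return 0.0
        self.level *= self.decay         # transience
        return self.level

curiosity = SpecificCuriosity()
curiosity.on_cue("locked_box_contents")
print([round(curiosity.bonus(obs), 3)
       for obs in ["hallway", "hallway", "locked_box_contents", "hallway"]])
```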
Artificial intelligence (AI) has the potential to greatly improve society, but as with any powerful technology, it comes with heightened risks and responsibilities. Current AI research lacks a systematic discussion of how to manage long-tail risks from AI systems, including speculative long-term risks. Keeping in mind that AI may be integral to improving humanity's long-term potential, there is some concern that building ever smarter and more powerful AI systems could eventually result in systems that are more powerful than we are; some say this is like playing with fire and speculate that it could create existential risks (x-risks). To add precision to these discussions, we review a collection of time-tested concepts from hazard analysis and systems safety, which were designed to steer large processes in safer directions. We then discuss how AI researchers can have long-term impacts on the safety of AI systems. Finally, we discuss how to robustly shape the processes that will affect the balance between safety and general capabilities.
There has been a recent resurgence in the area of explainable artificial intelligence as researchers and practitioners seek to make their algorithms more understandable. Much of this research is focused on explicitly explaining decisions or actions to a human observer, and it should not be controversial to say that looking at how humans explain to each other can serve as a useful starting point for explanation in artificial intelligence. However, it is fair to say that most work in explainable artificial intelligence uses only the researchers' intuition of what constitutes a 'good' explanation. There exist vast and valuable bodies of research in philosophy, psychology, and cognitive science on how people define, generate, select, evaluate, and present explanations, which argue that people employ certain cognitive biases and social expectations in the explanation process. This paper argues that the field of explainable artificial intelligence should build on this existing research, and reviews relevant papers from philosophy, cognitive psychology/science, and social psychology which study these topics. It draws out some important findings, and discusses ways in which these can be infused into work on explainable artificial intelligence.
In popular media, a connection is often drawn between the advent of consciousness in artificial agents and those same agents simultaneously achieving human or superhuman levels of intelligence. In this work, we explore the validity and potential applications of this seemingly intuitive link between consciousness and intelligence. We do so by examining the cognitive abilities associated with three contemporary functional theories of consciousness: Global Workspace Theory (GWT), Information Generation Theory (IGT), and Attention Schema Theory (AST). We find that all three theories specifically relate conscious function to some aspect of domain-general intelligence in humans. With this insight, we turn to the field of artificial intelligence (AI) and find that, while still far from demonstrating general intelligence, many state-of-the-art deep learning methods have begun to incorporate key aspects of each of the three functional theories. Having identified this trend, we use the motivating example of mental time travel in humans to propose ways in which insights from each of the three theories may be combined into a single unified and implementable model. Given that this capacity is made possible by the cognitive abilities underlying each of the three functional theories, artificial agents capable of mental time travel would not only possess greater general intelligence than current approaches, but would also be more consistent with our current understanding of the functional role of consciousness in humans, making it a promising near-term goal for AI research.
The value alignment problem for artificial intelligence (AI) asks how we can ensure that the "values" (i.e., objective functions) of artificial systems are aligned with the values of humanity. In this paper, I argue that linguistic communication in natural language is a necessary condition for robust value alignment. I discuss the consequences that the truth of this claim would have for research programmes that attempt to ensure value alignment for AI systems, or, more cautiously, that attempt to design robustly beneficial or ethical artificial agents.
Monte Carlo Tree Search (MCTS) is a recently proposed search method that combines the precision of tree search with the generality of random sampling. It has received considerable interest due to its spectacular success in the difficult problem of computer Go, but has also proved beneficial in a range of other domains. This paper is a survey of the literature to date, intended to provide a snapshot of the state of the art after the first five years of MCTS research. We outline the core algorithm's derivation, impart some structure on the many variations and enhancements that have been proposed, and summarise the results from the key game and non-game domains to which MCTS methods have been applied. A number of open research questions indicate that the field is ripe for future work.
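The core of the algorithm the survey builds on can be sketched briefly. The snippet below shows a UCT-style tree policy, the upper-confidence selection rule at the heart of most MCTS variants; the Node fields and the exploration constant are chosen for illustration rather than taken from the survey.

```python
import math

# Minimal sketch of the UCT tree policy: select the child maximizing an
# upper confidence bound that trades off exploitation (average backed-up
# value) and exploration (visit counts). Illustrative, not a full MCTS.

class Node:
    def __init__(self):
        self.visits = 0         # N(v): times this node was visited
        self.total_value = 0.0  # sum of simulation results backed up here
        self.children = {}      # action -> Node

def uct_select(node, c=math.sqrt(2)):
    """Pick the child with the highest UCB1 score."""
    def score(child):
        if child.visits == 0:
            return float("inf")  # expand unvisited children first
        exploit = child.total_value / child.visits
        explore = c * math.sqrt(math.log(node.visits) / child.visits)
        return exploit + explore
    return max(node.children.values(), key=score)

# Tiny usage example with hand-filled statistics.
root = Node(); root.visits = 10
root.children = {0: Node(), 1: Node()}
root.children[0].visits, root.children[0].total_value = 6, 4.0
root.children[1].visits, root.children[1].total_value = 4, 3.0
best = uct_select(root)
print(best is root.children[1])  # True: higher mean value and fewer visits
```

The full algorithm repeats four phases until a computational budget is exhausted: selection (descend the tree with a rule like uct_select), expansion (add a new child), simulation (a rollout to a terminal state), and backpropagation (update visits and total_value along the path), and then plays the most promising root action.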
Several different approaches exist for ensuring the safety of future transformative artificial intelligence (TAI) or artificial superintelligence (ASI) systems, and proponents of the different approaches have made differing and debated claims about the importance or usefulness of their work in the near term and for future systems. Highly Reliable Agent Designs (HRAD) is one of the most controversial and ambitious approaches, championed by the Machine Intelligence Research Institute among others, and various arguments have been made about how it would reduce risks from future AI systems. In order to reduce confusion in the debate about AI safety, here we build on a previous discussion by Rice which collects and presents four central arguments used to justify HRAD as a path towards the safety of AI systems. We have titled the arguments (1) incidental utility, (2) deconfusion, (3) precise specification, and (4) prediction. Each of these makes different, partly conflicting claims about how future AI systems can be risky. We explain the assumptions and claims based on a review of the published and informal literature, along with consultation with experts who have stated positions on the topic. Finally, we briefly outline arguments against each approach and against the agenda overall.
Recent progress in artificial intelligence (AI) has renewed interest in building systems that learn and think like people. Many advances have come from using deep neural networks trained end-to-end in tasks such as object recognition, video games, and board games, achieving performance that equals or even beats humans in some respects. Despite their biological inspiration and performance achievements, these systems differ from human intelligence in crucial ways. We review progress in cognitive science suggesting that truly human-like learning and thinking machines will have to reach beyond current engineering trends in both what they learn, and how they learn it. Specifically, we argue that these machines should (a) build causal models of the world that support explanation and understanding, rather than merely solving pattern recognition problems; (b) ground learning in intuitive theories of physics and psychology, to support and enrich the knowledge that is learned; and (c) harness compositionality and learning-to-learn to rapidly acquire and generalize knowledge to new tasks and situations. We suggest concrete challenges and promising routes towards these goals that can combine the strengths of recent neural network advances with more structured cognitive models.
Causal models of agents have been used to analyse the safety properties of machine learning systems. However, identifying agents is non-trivial: often the causal model is simply assumed by the modeller without much justification, and modelling failures can lead to mistakes in the safety analysis. This paper proposes the first formal causal definition of agents: roughly, agents are systems that would adapt their policy if their actions influenced the world in a different way. From this we derive the first causal discovery algorithm for discovering agents from empirical data, and we give algorithms for translating between causal models and game-theoretic influence diagrams. We demonstrate our approach by resolving some previous confusions caused by incorrect causal models.
We are currently unable to specify human goals and societal values in a way that reliably directs AI behavior. Law-making and legal interpretation form a computational engine that converts opaque human values into legible directives. "Law Informs Code" is the research agenda capturing complex computational legal processes, and embedding them in AI. Similar to how parties to a legal contract cannot foresee every potential contingency of their future relationship, and legislators cannot predict all the circumstances under which their proposed bills will be applied, we cannot ex ante specify rules that provably direct good AI behavior. Legal theory and practice have developed arrays of tools to address these specification problems. For instance, legal standards allow humans to develop shared understandings and adapt them to novel situations. In contrast to more prosaic uses of the law (e.g., as a deterrent of bad behavior through the threat of sanction), leveraged as an expression of how humans communicate their goals, and what society values, Law Informs Code. We describe how data generated by legal processes (methods of law-making, statutory interpretation, contract drafting, applications of legal standards, legal reasoning, etc.) can facilitate the robust specification of inherently vague human goals. This increases human-AI alignment and the local usefulness of AI. Toward society-AI alignment, we present a framework for understanding law as the applied philosophy of multi-agent alignment. Although law is partly a reflection of historically contingent political power - and thus not a perfect aggregation of citizen preferences - if properly parsed, its distillation offers the most legitimate computational comprehension of societal values available. If law eventually informs powerful AI, engaging in the deliberative political process to improve law takes on even more meaning.
This paper shows how a single mechanism allows knowledge to be constructed layer by layer directly from an agent's raw sensorimotor stream. This mechanism, the general value function (GVF) or "forecast", captures high-level, abstract knowledge as a set of predictions about existing features and knowledge, based exclusively on the agent's low-level senses and actions. Forecasts therefore provide a representation for organizing raw sensorimotor data into useful abstractions, across an unlimited number of layers, a long-sought goal of AI and cognitive science. The heart of this paper is a detailed thought experiment providing a concrete, step-by-step formal illustration of how an artificial agent can build true, useful, abstract knowledge from its raw sensorimotor experience. The knowledge is represented as a set of layered predictions (forecasts) about the consequences the agent observes its actions to have. The illustration spans twelve separate layers: the lowest involve raw pixels, touch and force sensors, and a small number of actions; the higher layers increase in abstraction, eventually yielding rich knowledge about the agent's world, corresponding to doorways, walls, rooms, and floor plans. I then argue that this general mechanism may allow the representation of a broad range of everyday human knowledge.
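A single such prediction can be sketched concretely. The snippet below learns one toy general value function by TD(0), predicting the discounted sum of a low-level "touch" signal while a fixed behaviour is followed; the environment, features, and constants are illustrative assumptions, not the paper's thought experiment.

```python
# Toy general value function (GVF): a TD(0)-learned prediction of the
# discounted sum of a low-level signal (a "cumulant", here a touch
# sensor that fires on entering one cell) under a fixed behaviour.

N_CELLS = 10                      # the agent walks rightward around a ring of cells
WALL = 7                          # the touch sensor fires when entering this cell
alpha, gamma = 0.1, 0.8

weights = [0.0] * N_CELLS         # one weight per cell (tabular features)

pos = 0
for _ in range(50_000):
    next_pos = (pos + 1) % N_CELLS
    cumulant = 1.0 if next_pos == WALL else 0.0   # the signal being predicted
    # TD(0) update of the GVF: predict cumulant + gamma * next prediction.
    td_error = cumulant + gamma * weights[next_pos] - weights[pos]
    weights[pos] += alpha * td_error
    pos = next_pos

# The learned prediction is largest just before the wall and decays with
# distance, answering "how soon will I feel the wall if I keep going?".
print([round(w, 2) for w in weights])
```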
If future AI systems are to be reliably safe in novel situations, they will need to incorporate general principles guiding them to robustly recognize which outcomes and behaviours would be harmful. Such principles may need to be supported by a binding regulatory regime, which would in turn require widely acceptable foundational principles. They should also be specific enough for technical implementation. Drawing inspiration from law, this article explains how negative human rights could fulfil the role of such principles and serve as a foundation both for an international regulatory regime and for building technical safety constraints into future AI systems.
The reward hypothesis posits that, "all of what we mean by goals and purposes can be well thought of as maximization of the expected value of the cumulative sum of a received scalar signal (reward)." We aim to fully settle this hypothesis. This will not conclude with a simple affirmation or refutation, but rather specify completely the implicit requirements on goals and purposes under which the hypothesis holds.
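For reference, one standard way to write down the maximization the hypothesis refers to is the expected (discounted) return. The notation below follows the usual RL conventions and is not taken from this abstract.

```latex
% Standard formalization of "maximization of the expected value of the
% cumulative sum of a received scalar signal (reward)". Episodic or
% undiscounted settings replace the discounted infinite sum with a
% finite one.
J(\pi) = \mathbb{E}_{\pi}\!\left[ \sum_{t=0}^{\infty} \gamma^{t} R_{t+1} \right],
\qquad
\pi^{*} \in \operatorname*{arg\,max}_{\pi} J(\pi),
\qquad \gamma \in [0, 1).
```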
We argue that intelligence, understood as the disposition to perform tasks successfully, is a property of systems composed of agents and their contexts. This is the thesis of extended intelligence. We argue that the performance of an agent will generally not be preserved if its context is allowed to vary. Hence, this disposition is not possessed by an agent alone, but rather by the system consisting of the agent and its context, which we dub an agent-in-context. An agent's context may include an environment, other agents, cultural artifacts (such as language and technology), or all of these, as is typically the case for humans and artificial intelligence systems, as well as for many non-human animals. In accordance with the thesis of extended intelligence, we maintain that intelligence is context-bound, task-local, and cannot be ascribed to agents alone. Our thesis carries significant implications for how intelligence is analyzed in the contexts of psychology and artificial intelligence.