We argue that intelligence, understood as the disposition to perform tasks successfully, is a property of systems composed of agents and their contexts. This is the thesis of extended intelligence. We contend that the performance of an agent will generally not be preserved if its context is allowed to vary. The disposition is therefore not possessed by the agent alone, but by the system consisting of the agent together with its context, which we refer to as an agent in context. An agent's context may include an environment, other agents, cultural artifacts (e.g., language, technology), or all of these, as is the case for humans and AI systems, as well as for many non-human animals. In accordance with the thesis of extended intelligence, we hold that intelligence is context-bound, task-local, and cannot be ascribed to agents apart from their contexts. Our thesis has significant implications for how intelligence is analyzed in both psychology and artificial intelligence.
There has been a recent resurgence in the area of explainable artificial intelligence as researchers and practitioners seek to make their algorithms more understandable. Much of this research is focused on explicitly explaining decisions or actions to a human observer, and it should not be controversial to say that looking at how humans explain to each other can serve as a useful starting point for explanation in artificial intelligence. However, it is fair to say that most work in explainable artificial intelligence uses only the researchers' intuition of what constitutes a 'good' explanation. There exist vast and valuable bodies of research in philosophy, psychology, and cognitive science on how people define, generate, select, evaluate, and present explanations, which argue that people employ certain cognitive biases and social expectations towards the explanation process. This paper argues that the field of explainable artificial intelligence should build on this existing research, and reviews relevant papers from philosophy, cognitive psychology/science, and social psychology, which study these topics. It draws out some important findings, and discusses ways that these can be infused with work on explainable artificial intelligence.
In the popular media, a link is frequently drawn between the emergence of consciousness in artificial agents and those same agents simultaneously attaining human or superhuman levels of intelligence. In this work, we explore the validity and potential applications of this seemingly intuitive link between consciousness and intelligence. We do so by examining the cognitive abilities associated with three contemporary functional theories of consciousness: Global Workspace Theory (GWT), Information Generation Theory (IGT), and Attention Schema Theory (AST). We find that all three theories specifically relate conscious function to some aspect of domain-general intelligence in humans. With this insight, we turn to the field of artificial intelligence (AI) and find that, although still far from demonstrating general intelligence, many state-of-the-art deep learning methods have begun to incorporate key aspects of each of the three functional theories. Having identified this trend, we use the motivating example of mental time travel in humans to propose ways in which insights from each of the three theories may be combined into a single unified and implementable model. Given that this is made possible by the cognitive abilities underlying each of the three functional theories, artificial agents capable of mental time travel would not only possess greater general intelligence than current approaches, but would also be more consistent with our current understanding of the functional role of consciousness in humans, making this a promising near-term goal for AI research.
Advocates of algorithmic techniques like data mining argue that these techniques eliminate human biases from the decision-making process. But an algorithm is only as good as the data it works with. Data is frequently imperfect in ways that allow these algorithms to inherit the prejudices of prior decision makers. In other cases, data may simply reflect the widespread biases that persist in society at large. In still others, data mining can discover surprisingly useful regularities that are really just preexisting patterns of exclusion and inequality. Unthinking reliance on data mining can deny historically disadvantaged and vulnerable groups full participation in society. Worse still, because the resulting discrimination is almost always an unintentional emergent property of the algorithm's use rather than a conscious choice by its programmers, it can be unusually hard to identify the source of the problem or to explain it to a court. This Essay examines these concerns through the lens of American antidiscrimination law, more particularly through Title VII's prohibition of discrimination in employment.
Recent progress in artificial intelligence (AI) has renewed interest in building systems that learn and think like people. Many advances have come from using deep neural networks trained end-to-end in tasks such as object recognition, video games, and board games, achieving performance that equals or even beats humans in some respects. Despite their biological inspiration and performance achievements, these systems differ from human intelligence in crucial ways. We review progress in cognitive science suggesting that truly human-like learning and thinking machines will have to reach beyond current engineering trends in both what they learn, and how they learn it. Specifically, we argue that these machines should (a) build causal models of the world that support explanation and understanding, rather than merely solving pattern recognition problems; (b) ground learning in intuitive theories of physics and psychology, to support and enrich the knowledge that is learned; and (c) harness compositionality and learning-to-learn to rapidly acquire and generalize knowledge to new tasks and situations. We suggest concrete challenges and promising routes towards these goals that can combine the strengths of recent neural network advances with more structured cognitive models.
The value alignment problem for artificial intelligence (AI) asks how we can ensure that the 'values' (i.e., objective functions) of artificial systems are aligned with the values of humanity. In this paper, I argue that linguistic communication in natural language is a necessary condition for robust value alignment. I discuss the consequences that the truth of this claim has for research programmes that attempt to ensure value alignment for AI systems, or, more cautiously, to design robustly beneficial or ethical artificial agents.
The recent hype surrounding the impressive performance of language processing models has fueled optimism that machines are acquiring a human-like command of natural language. The field of natural language understanding in artificial intelligence claims to have made great strides in this area; however, the lack of conceptual clarity in how 'understanding' is used in this and other disciplines makes it difficult to discern how close we actually are. A comprehensive, interdisciplinary overview of current approaches and remaining challenges has yet to be carried out. Beyond linguistic knowledge, such an overview requires taking into account our species-specific capacities to categorize, memorize, label, and communicate our (sufficiently similar) embodied and situated experiences. Moreover, gauging the actual constraints requires a rigorous analysis of the technical capabilities of current models, as well as deeper philosophical reflection on the theoretical possibilities and limitations. In this paper, I bring all of these perspectives (philosophical, cognitive-linguistic, and technical) together to unpack the challenges involved in reaching true (human-like) language understanding. By unpacking the theoretical assumptions inherent in current approaches, I hope to illustrate how far we actually are from achieving this goal, if indeed it is the goal.
In AI research, attention to the characterization and representation of functions and affordances has so far been sporadic and sparse, even though this aspect features prominently in the repertoire of intelligent systems. The efforts to date that do address the characterization and understanding of functions and affordances offer no general framework that unifies all the different domains of use and situations in which the concept of function is represented and applied. This paper develops such a general framework, taking an approach that emphasizes that the representations involved must be explicitly cognitive and conceptual, that they must contain causal characterizations of the events and processes involved, and that they must employ conceptual structures grounded in what they refer to, so as to achieve maximal generality. The basic general framework is described, along with a set of basic guiding principles for the representation of function. Properly and adequately characterizing and representing functions requires a descriptive representation language; this language is defined and developed, and many examples of its use are described. The general framework is developed as an extension of Conceptual Dependency, a framework for representing general language meaning. To support the general characterization and representation of functions, the basic Conceptual Dependency framework is augmented with representational devices called structural anchors and conceptual dependency elaborations, together with the definition of a set of ground concepts. These novel representational constructs are defined, developed, and described. A general framework for handling functions would represent a major step toward realizing artificial intelligence.
In recent years, the remarkable progress in computer vision has, by and large, been attributed to deep learning, fueled by the availability of huge sets of labeled data and paired with the explosive growth of the GPU paradigm. While subscribing to this view, this book criticizes the alleged scientific progress in the field and proposes an investigation of vision within the framework of information-based laws of nature. Specifically, the present work poses fundamental questions about vision that are still far from being understood, leading the reader on a journey populated by novel challenges that resonate with the foundations of machine learning. The central thesis is that a deep understanding of visual computational processes requires going beyond the application of general-purpose machine learning algorithms and focusing instead on appropriate learning theories that take into account the spatiotemporal nature of the visual signal.
The optimal liability framework for AI systems remains an unsolved problem across the globe. In a much-anticipated move, the European Commission advanced two proposals outlining the European approach to AI liability in September 2022: a novel AI Liability Directive and a revision of the Product Liability Directive. They constitute the final, and much-anticipated, cornerstone of AI regulation in the EU. Crucially, the liability proposals and the EU AI Act are inherently intertwined: the latter does not contain any individual rights of affected persons, and the former lack specific, substantive rules on AI development and deployment. Taken together, these acts may well trigger a Brussels effect in AI regulation, with significant consequences for the US and other countries. This paper makes three novel contributions. First, it examines in detail the Commission proposals and shows that, while making steps in the right direction, they ultimately represent a half-hearted approach: if enacted as foreseen, AI liability in the EU will primarily rest on disclosure of evidence mechanisms and a set of narrowly defined presumptions concerning fault, defectiveness and causality. Hence, second, the article suggests amendments, which are collected in an Annex at the end of the paper. Third, based on an analysis of the key risks AI poses, the final part of the paper maps out a road for the future of AI liability and regulation, in the EU and beyond. This includes: a comprehensive framework for AI liability; provisions to support innovation; an extension to non-discrimination/algorithmic fairness, as well as explainable AI; and sustainability. I propose to jump-start sustainable AI regulation via sustainability impact assessments in the AI Act and sustainable design defects in the liability regime. In this way, the law may help spur not only fair AI and XAI, but potentially also sustainable AI (SAI).
We analyze the type of learned optimization that occurs when a learned model (such as a neural network) is itself an optimizer, a situation we refer to as mesa-optimization, a neologism we introduce in this paper. We believe that the possibility of mesa-optimization raises two important questions for the safety and transparency of advanced machine learning systems. First, under what circumstances will learned models be optimizers, including when they should not be? Second, when a learned model is an optimizer, what will its objective be, how will it differ from the loss function on which it was trained, and how can it be aligned? In this paper, we provide an in-depth analysis of these two primary questions and give an overview of topics for future research.
In August 2021, the Santa Fe Institute hosted a workshop on collective intelligence as part of its Foundations of Intelligence project. The project seeks to advance the field of artificial intelligence by promoting interdisciplinary research into the nature of intelligence. The workshop brought together computer scientists, biologists, philosophers, social scientists, and others to share their insights about how intelligence can emerge from interactions among multiple agents, whether those agents are machines, animals, or human beings. In this report, we summarize each of the talks and the subsequent discussions. We also draw out a number of key themes and identify important frontiers for future research.
An assurance case is intended to provide justifiable confidence in the truth of its top claim, which typically concerns safety or security. A natural question is then how much confidence the case provides. We argue that confidence cannot be reduced to a single attribute or measurement. Instead, we suggest it should be based on attributes drawn from three different perspectives: positive, negative, and residual doubts. Positive perspectives consider the extent to which the evidence and overall argument of the case combine to make a positive case justifying belief in its claims. We set a high bar for justification, requiring it to be indefeasible. The primary positive measure for this is soundness, which interprets the argument as a logical proof. Confidence in evidence can be expressed probabilistically, and we use confirmation measures to ensure that the 'weight' of evidence crosses some threshold. In addition, probabilities can be aggregated from evidence through the steps of the argument using probability logics to yield what we call probabilistic valuations of the claims. Negative perspectives record doubts and challenges to the case, typically expressed as defeaters, together with their exploration and resolution. Assurance developers must guard against confirmation bias and should vigorously explore potential defeaters as they develop the case, and should record them and their resolution to avoid rework and to assist reviewers. Residual doubts: the world is uncertain, so not all potential defeaters can be resolved. We explore risks and may deem them acceptable or unavoidable. It is crucial, however, that these judgments are conscious ones, and that they are recorded in the assurance case. This report examines these perspectives in detail and indicates how our prototype toolset for Assurance 2.0 assists in their evaluation.
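As an illustration of the kind of confirmation measure referred to above (the specific measure adopted in Assurance 2.0 is not given in this summary), one standard choice from the Bayesian confirmation literature is Good's weight of evidence, together with a threshold expressing that the evidence must carry sufficient weight before belief in the claim is considered justified:

```latex
% Illustrative only: Good's weight of evidence as one example of a confirmation
% measure; C is a claim, E the evidence offered for it, and \theta an agreed threshold.
W(C : E) \;=\; \log \frac{P(E \mid C)}{P(E \mid \lnot C)}, \qquad
\text{accept } E \text{ as weighty for } C \text{ only if } W(C : E) \ge \theta .
```

Other confirmation measures could be substituted here without changing the overall structure of the positive perspective.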
Curiosity for machine agents has been a focus of lively research activity. The study of human and animal curiosity, particularly specific curiosity, has unearthed several properties that would offer important benefits for machine learners, but that have not yet been well-explored in machine intelligence. In this work, we conduct a comprehensive, multidisciplinary survey of the field of animal and machine curiosity. As a principal contribution of this work, we use this survey as a foundation to introduce and define what we consider to be five of the most important properties of specific curiosity: 1) directedness towards inostensible referents, 2) cessation when satisfied, 3) voluntary exposure, 4) transience, and 5) coherent long-term learning. As a second main contribution of this work, we show how these properties may be implemented together in a proof-of-concept reinforcement learning agent: we demonstrate how the properties manifest in the behaviour of this agent in a simple non-episodic grid-world environment that includes curiosity-inducing locations and induced targets of curiosity. As we would hope, our example of a computational specific curiosity agent exhibits short-term directed behaviour while updating long-term preferences to adaptively seek out curiosity-inducing situations. This work, therefore, presents a landmark synthesis and translation of specific curiosity to the domain of machine learning and reinforcement learning and provides a novel view into how specific curiosity operates and in the future might be integrated into the behaviour of goal-seeking, decision-making computational agents in complex environments.
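The paper's own proof-of-concept agent is not reproduced here; the following is a rough, hypothetical sketch, with an invented corridor layout and constants, of how a few of the listed properties (directedness towards an induced target, cessation when satisfied, transience, and coherent long-term learning) might be wired into a tabular reinforcement learner:

```python
import random
from collections import defaultdict

# Hypothetical sketch (not the paper's implementation): a tabular Q-learning
# agent on a 1-D corridor whose reward is augmented with a specific-curiosity
# bonus. The bonus is switched on when the agent passes a curiosity-inducing
# cell, directs it toward an induced target cell, decays over time (transience),
# and is removed once the target is reached (cessation when satisfied).
# Ordinary Q-learning on the task reward continues throughout
# (coherent long-term learning).

N_CELLS = 10
INDUCING_CELL = 2      # passing here arouses curiosity about the target
TARGET_CELL = 7        # the induced target of curiosity
GOAL_CELL = 9          # ordinary task reward lives here
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1
BONUS, DECAY = 1.0, 0.995

Q = defaultdict(float)   # Q[(state, action)] for actions -1 / +1
curiosity = 0.0          # current strength of the specific-curiosity drive

def step(state, action):
    """Move left/right along the corridor; the goal cell pays a small task reward."""
    next_state = min(max(state + action, 0), N_CELLS - 1)
    reward = 1.0 if next_state == GOAL_CELL else 0.0
    return next_state, reward

def choose_action(state):
    """Epsilon-greedy action selection over the two moves."""
    if random.random() < EPSILON:
        return random.choice((-1, 1))
    return max((-1, 1), key=lambda a: Q[(state, a)])

state = 0
for _ in range(20_000):
    action = choose_action(state)
    next_state, reward = step(state, action)

    # Arouse curiosity when the inducing cell is encountered; voluntary exposure
    # follows because the bonus makes curiosity-inducing regions attractive.
    if next_state == INDUCING_CELL:
        curiosity = BONUS

    # Directedness: the bonus is paid only for reaching the induced target.
    if next_state == TARGET_CELL and curiosity > 0.0:
        reward += curiosity
        curiosity = 0.0          # cessation when satisfied
    else:
        curiosity *= DECAY       # transience: unresolved curiosity fades

    # Standard Q-learning update; long-term preferences keep accumulating.
    best_next = max(Q[(next_state, -1)], Q[(next_state, 1)])
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

    state = 0 if next_state == GOAL_CELL else next_state   # restart at goal (simplification)
```

In this sketch the curiosity bonus is a scalar drive layered on top of ordinary Q-learning; the agent described in the paper operates in a richer, non-episodic grid world and treats the five properties in a more principled way.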
Problems of transportation facilities on the lunar surface related to science, engineering, construction, and human factors are discussed. Logistic decisions made in the next decade may be crucial to financial success. In addition to outlining some of the problems and their relation to mathematics and computation, the paper provides a useful resource for decision-makers, scientists, and engineers.
A fascinating hypothesis is that human and animal intelligence could be explained by a few principles (rather than an encyclopedic list of heuristics). If that hypothesis is correct, we could more easily both understand our own intelligence and build intelligent machines. Just as in physics, the principles themselves would not be sufficient to predict the behavior of complex systems like brains, and substantial computation might be needed to simulate human-like intelligence. This hypothesis would suggest that studying the kind of inductive biases that humans and animals exploit could help both clarify these principles and provide inspiration for AI research and neuroscience theories. Deep learning already exploits several key inductive biases, and this work considers a larger list, focusing on those that concern mostly higher-level and sequential conscious processing. The objective of clarifying these particular principles is that they could potentially help us build AI systems that benefit from humans' abilities in terms of flexible out-of-distribution and systematic generalization, an area where a large gap currently exists between state-of-the-art machine learning and human intelligence.
As progress in AI continues to advance, it is important to know how advanced systems will make choices and in what ways they may fail. Machines can already outsmart humans in some domains, and understanding how to safely build systems whose capabilities may be at or above the human level is of particular concern. One might suspect that artificially generally intelligent (AGI) and artificially superintelligent (ASI) systems should be modeled as something which humans cannot reliably outsmart. As a challenge to this assumption, this paper presents the Achilles Heel hypothesis, which states that even a potentially superintelligent system may nonetheless have stable decision-theoretic delusions that cause it to make clearly irrational decisions in adversarial settings. In a survey of relevant dilemmas and paradoxes from the decision theory literature, a number of potential Achilles Heels are discussed in the context of this hypothesis. Several novel contributions are made toward understanding the ways in which these weaknesses might be implanted into a system.
Consciousness and intelligence are properties commonly understood by folk psychology and society in general. The term artificial intelligence, and the kinds of problems it has managed to solve in recent years, have been used as an argument to establish that machines experience some sort of consciousness. Following Russell's analogy, if a machine is able to do what a conscious human being does, the likelihood that the machine is conscious increases. However, the social implications of this analogy are catastrophic. Concretely, if rights are granted to entities that can solve the kinds of problems that a neurotypical person can, does a machine then have more rights than a person with a disability? For example, autism spectrum disorder may prevent a person from solving problems that a machine can solve. We believe the obvious answer is no, since solving problems does not imply consciousness. Therefore, in this paper we will argue that phenomenal consciousness and, at least, computational intelligence are independent, and explain why machines do not possess phenomenal consciousness even though they may develop a higher computational intelligence than humans. To this end, we attempt to formulate an objective measure of computational intelligence and study how it manifests in humans, animals, and machines. Analogously, we study phenomenal consciousness as a dichotomous variable and how it is distributed across humans, animals, and machines. Since phenomenal consciousness and computational intelligence are independent, this fact has critical implications for society, which we also analyze in this work.
This volume contains revised versions of the papers selected for the third volume of the Online Handbook of Argumentation for AI (OHAAI). Previously, formal theories of argument and argument interaction have been proposed and studied, and this has led to the more recent study of computational models of argument. Argumentation, as a field within artificial intelligence (AI), is highly relevant for researchers interested in symbolic representations of knowledge and defeasible reasoning. The purpose of this handbook is to provide an open access and curated anthology for the argumentation research community. OHAAI is designed to serve as a research hub to keep track of the latest and upcoming PhD-driven research on the theory and application of argumentation in all areas related to AI.
We argue that the attempt to build morality into machines is subject to what we call the Interpretation problem, whereby any rule we give the machine is open to infinite interpretation in ways that we might morally disapprove of, and that the interpretation problem in Artificial Intelligence is an illustration of Wittgenstein's general claim that no rule can contain the criteria for its own application. Using games as an example, we attempt to define the structure of normative spaces and argue that any rule-following within a normative space is guided by values that are external to that space and which cannot themselves be represented as rules. In light of this problem, we analyse the types of mistakes an artificial moral agent could make and we make suggestions about how to build morality into machines by getting them to interpret the rules we give in accordance with these external values, through explicit moral reasoning and the presence of structured values, the adjustment of causal power assigned to the agent and interaction with human agents, such that the machine develops a virtuous character and the impact of the interpretation problem is minimised.