Since its early days, a key and controversial question within the artificial intelligence community has been whether artificial general intelligence (AGI) can be achieved. AGI is the ability of machines and computer programs to reach human-level intelligence and accomplish all tasks that humans can perform. While a number of systems in the literature have claimed to realize AGI, several other researchers argue that it is impossible to achieve. In this paper, we take a different view of the question. First, we discuss that in order to achieve AGI, alongside building intelligent machines and programs, an intelligent world should also be built: one that is, on the one hand, an accurate approximation of our world and, on the other, carries out an important part of the reasoning for the intelligent machines embedded in it. We then discuss that AGI is not a product or an algorithm, but a continuous process that will become increasingly mature over time (much like human civilization and wisdom). We then argue that pretrained embeddings play a key role in building this intelligent world and, consequently, in realizing AGI. We discuss how pretrained embeddings facilitate several characteristics of human-level intelligence in machines, such as embodiment, commonsense knowledge, unconscious knowledge, and continuality of learning.
Recent progress in artificial intelligence (AI) has renewed interest in building systems that learn and think like people. Many advances have come from using deep neural networks trained end-to-end in tasks such as object recognition, video games, and board games, achieving performance that equals or even beats humans in some respects. Despite their biological inspiration and performance achievements, these systems differ from human intelligence in crucial ways. We review progress in cognitive science suggesting that truly human-like learning and thinking machines will have to reach beyond current engineering trends in both what they learn, and how they learn it. Specifically, we argue that these machines should (a) build causal models of the world that support explanation and understanding, rather than merely solving pattern recognition problems; (b) ground learning in intuitive theories of physics and psychology, to support and enrich the knowledge that is learned; and (c) harness compositionality and learning-to-learn to rapidly acquire and generalize knowledge to new tasks and situations. We suggest concrete challenges and promising routes towards these goals that can combine the strengths of recent neural network advances with more structured cognitive models.
Discrete and continuous representations of content (e.g., of language or images) have interesting properties to be explored for the understanding of or reasoning with this content by machines. This position paper puts forward our opinion on the role of discrete and continuous representations and their processing in the deep learning field. Current neural network models compute continuous-valued data. Information is compressed into dense, distributed embeddings. By stark contrast, humans use discrete symbols in their language. Such symbols represent a compressed version of the world that derives its meaning from shared contextual information. Additionally, human reasoning involves symbol manipulation at the cognitive level, which facilitates abstract reasoning, the composition of knowledge and understanding, generalization, and efficient learning. Motivated by these insights, in this paper we argue that combining discrete and continuous representations and their processing is essential to build systems that exhibit a general form of intelligence. We suggest and discuss several avenues that could improve current neural networks with the inclusion of discrete elements to combine the advantages of both types of representations.
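As a purely illustrative sketch (not from the paper, with made-up names and sizes), the simplest way to connect the two kinds of representation is to quantize a continuous embedding against a codebook of discrete symbols:

```python
import numpy as np

# Illustrative sketch only (not from the paper): a continuous embedding is
# mapped to a discrete symbol by nearest-neighbor lookup in a codebook.
# Codebook size and dimensionality are invented for the example.
rng = np.random.default_rng(0)
codebook = rng.normal(size=(16, 8))   # 16 discrete symbols, each an 8-dim code
embedding = rng.normal(size=8)        # a continuous, distributed representation

distances = np.linalg.norm(codebook - embedding, axis=1)
symbol_id = int(np.argmin(distances))   # the discrete symbol assigned to the embedding
quantized = codebook[symbol_id]         # its continuous counterpart in the codebook
print(f"embedding mapped to symbol {symbol_id}")
```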
Neural-symbolic computing (NeSy), which pursues the integration of the symbolic and statistical paradigms of cognition, has been an active research area of Artificial Intelligence (AI) for many years. As NeSy shows promise of reconciling the advantages of reasoning and interpretability of symbolic representation and robust learning in neural networks, it may serve as a catalyst for the next generation of AI. In the present paper, we provide a systematic overview of the important and recent developments of research on NeSy AI. Firstly, we introduce the study history of this area, covering early work and foundations. We further discuss background concepts and identify key driving factors behind the development of NeSy. Afterward, we categorize recent landmark approaches along several main characteristics that underline this research paradigm, including neural-symbolic integration, knowledge representation, knowledge embedding, and functionality. Then, we briefly discuss the successful application of modern NeSy approaches in several domains. Finally, we identify the open problems together with potential future research directions. This survey is expected to help new researchers enter this rapidly-developing field and accelerate progress towards data- and knowledge-driven AI.
Advocates of neuro-symbolic artificial intelligence (NeSy) assert that combining deep learning with symbolic reasoning will lead to stronger AI than either paradigm on its own. As successful as deep learning has been, it is generally accepted that even our best deep learning systems are not very good at abstract reasoning. And since reasoning is inextricably linked to language, it makes intuitive sense that natural language processing (NLP) would be a particularly well-suited candidate for NeSy. We conduct a structured review of studies implementing NeSy for NLP, with the aim of answering the question of whether NeSy is indeed living up to its promises: reasoning, out-of-distribution generalization, interpretability, learning and transferability from small data, and reasoning in new domains. We examine the impact of knowledge representations, such as rules and semantic networks, of language structure and relational structure, and of whether implicit or explicit reasoning contributes to higher promise scores. We find that systems that compile logic into neural networks lead to fulfilling most of the NeSy goals, while other factors, such as the type of knowledge representation or neural architecture, show no clear correlation with achieving the goals. We find many discrepancies in how reasoning is defined, especially in relation to human-level reasoning, which affect decisions about model architectures and drive conclusions that are not always consistent across studies. We therefore advocate a more methodical approach to applying theories of human reasoning, along with the development of appropriate benchmarks, which we hope can lead to a better understanding of progress in the field. We provide data and code on GitHub for further analysis.
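To make "compiling logic into a neural network" concrete, here is a hedged, hypothetical Python sketch, not taken from any of the reviewed systems: a single propositional rule expressed with a product t-norm and the Reichenbach implication so that it becomes differentiable and trainable end-to-end.

```python
import torch

# Hypothetical illustration: the rule "IF wet AND cold THEN freezing" compiled
# into differentiable operations. Violating the rule produces gradients that
# flow back into the underlying (possibly neural) truth values.
def t_and(a, b):
    return a * b                      # product t-norm for fuzzy conjunction

def t_implies(a, b):
    return 1.0 - a + a * b            # Reichenbach implication

wet = torch.tensor(0.9, requires_grad=True)
cold = torch.tensor(0.8, requires_grad=True)
freezing = torch.tensor(0.3, requires_grad=True)

rule = t_implies(t_and(wet, cold), freezing)
loss = 1.0 - rule                     # penalize violation of the rule
loss.backward()                       # gradients reach all three truth values
print(wet.grad, cold.grad, freezing.grad)
```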
In popular media, there is often an association drawn between the emergence of consciousness in artificial agents and those same agents simultaneously achieving human or superhuman level intelligence. In this work, we explore the validity and potential application of this seemingly intuitive link between consciousness and intelligence. We do so by examining the cognitive abilities associated with three contemporary functional theories of consciousness: Global Workspace Theory (GWT), Information Generation Theory (IGT), and Attention Schema Theory (AST). We find that all three theories specifically relate conscious function to some aspect of domain-general intelligence in humans. With this insight, we turn to the field of artificial intelligence (AI) and find that, while still far from demonstrating general intelligence, many state-of-the-art deep learning methods have begun to incorporate key aspects of each of the three functional theories. Having identified this trend, we use the motivating example of mental time travel in humans to propose ways in which insights from each of the three theories can be combined into a single unified and implementable model. Given that it is made possible by the cognitive abilities underlying each of the three functional theories, artificial agents capable of mental time travel would not only possess greater general intelligence than current approaches, but would also be more consistent with our current understanding of the functional role of consciousness in humans, making it a promising near-term goal for AI research.
The recent hype surrounding the capabilities of language processing models has fueled optimism that machines are acquiring a command of natural language comparable to that of humans. The field of natural language understanding in artificial intelligence claims to have made great strides in this area; however, the lack of conceptual clarity in how "understanding" is used in this and other disciplines makes it difficult to discern how close we actually are. A comprehensive, interdisciplinary overview of current approaches and remaining challenges has yet to be carried out. Beyond linguistic knowledge, this requires considering our species-specific abilities to relate, memorize, label, and communicate our (sufficiently similar) embodied and situated experiences. Moreover, gauging the actual constraints requires a rigorous analysis of the technical capabilities of current models, as well as deeper philosophical reflection on the theoretical possibilities and limitations. In this paper, I bring all of these perspectives (philosophical, cognitive-linguistic, and technical) together to unpack the challenges involved in reaching true (human-like) language understanding. By unpacking the theoretical assumptions inherent in current approaches, I hope to illustrate how far we actually are from achieving this goal, if indeed it is the goal.
Consciousness and intelligence are properties commonly understood through folk psychology and by society at large. The term artificial intelligence, and the kinds of problems it has managed to solve in recent years, have been used as an argument to establish that machines experience some form of consciousness. Following Russell's analogy, if a machine is able to do what a conscious human being does, the likelihood that the machine is conscious increases. However, the social implications of this analogy are catastrophic. Specifically, if rights are granted to entities that can solve the problems a neurotypical person can, does a machine then have more rights than a person with a disability? For example, an autism spectrum disorder can prevent a person from solving the problems a machine solves. We believe the obvious answer is no, because solving problems does not imply consciousness. We therefore argue in this paper that phenomenal consciousness and, at least, computational intelligence are independent, and explain why machines do not possess phenomenal consciousness, although they may develop a computational intelligence superior to that of humans. To this end, we attempt to formulate an objective measure of computational intelligence and study how it manifests in humans, animals, and machines. Analogously, we study phenomenal consciousness as a dichotomous variable and how it is distributed among humans, animals, and machines. Since phenomenal consciousness and computational intelligence are independent, this fact has critical implications for society, which we also analyze in this work.
Large language models (LLMs) have been transformative. They are pretrained foundation models that can be adapted with fine-tuning to many different natural language tasks, each of which previously would have required a separate network model. This is one step closer to the extraordinary versatility of human language. GPT-3 and, more recently, LaMDA can carry on dialogs with humans on many topics after minimal priming with a few examples. However, there has been a wide range of reactions on whether these LLMs understand what they are saying or exhibit signs of intelligence. This high variance is exhibited in three interviews with LLMs that reached wildly different conclusions. A new possibility is uncovered that could explain this divergence. What appears to be intelligence in LLMs may in fact be a mirror that reflects the intelligence of the interviewer, a remarkable twist that could be considered a Reverse Turing Test. If so, then by studying interviews we may be learning more about the intelligence and beliefs of the interviewer than about the intelligence of the LLMs.
We are currently unable to specify human goals and societal values in a way that reliably directs AI behavior. Law-making and legal interpretation form a computational engine that converts opaque human values into legible directives. "Law Informs Code" is the research agenda capturing complex computational legal processes, and embedding them in AI. Similar to how parties to a legal contract cannot foresee every potential contingency of their future relationship, and legislators cannot predict all the circumstances under which their proposed bills will be applied, we cannot ex ante specify rules that provably direct good AI behavior. Legal theory and practice have developed arrays of tools to address these specification problems. For instance, legal standards allow humans to develop shared understandings and adapt them to novel situations. In contrast to more prosaic uses of the law (e.g., as a deterrent of bad behavior through the threat of sanction), leveraged as an expression of how humans communicate their goals, and what society values, Law Informs Code. We describe how data generated by legal processes (methods of law-making, statutory interpretation, contract drafting, applications of legal standards, legal reasoning, etc.) can facilitate the robust specification of inherently vague human goals. This increases human-AI alignment and the local usefulness of AI. Toward society-AI alignment, we present a framework for understanding law as the applied philosophy of multi-agent alignment. Although law is partly a reflection of historically contingent political power - and thus not a perfect aggregation of citizen preferences - if properly parsed, its distillation offers the most legitimate computational comprehension of societal values available. If law eventually informs powerful AI, engaging in the deliberative political process to improve law takes on even more meaning.
Are intelligent machines really intelligent? Is the underlying philosophical concept of intelligence satisfactory for describing how present-day systems work? Is understanding a necessary and sufficient condition for intelligence? If a machine could understand, should we attribute subjectivity to it? This paper addresses the question of deciding whether so-called "intelligent machines" are capable of understanding, rather than merely processing signs. It deals with the relationship between syntax and semantics. The main thesis concerns the indispensability of semantics for any discussion of the possibility of building conscious machines, and it condenses into the following two principles: one appealing to "intuition"; and "if semantics cannot be reduced to syntax, then machines cannot understand." Our conclusion states that it is not necessary to attribute understanding to a machine in order to explain the "intelligent" behavior it exhibits. A merely syntactic and mechanistic approach to intelligence, as a tool for solving tasks, is sufficient to account for the range of operations it can display in the current state of technological development.
The field of artificial intelligence (AI), regarded as one of the most enigmatic areas of science, has witnessed exponential growth in the past decade including a remarkably wide array of applications, having already impacted our everyday lives. Advances in computing power and the design of sophisticated AI algorithms have enabled computers to outperform humans in a variety of tasks, especially in the areas of computer vision and speech recognition. Yet, AI's path has never been smooth, having essentially fallen apart twice in its lifetime ('winters' of AI), both after periods of popular success ('summers' of AI). We provide a brief rundown of AI's evolution over the course of decades, highlighting its crucial moments and major turning points from inception to the present. In doing so, we attempt to learn, anticipate the future, and discuss what steps may be taken to prevent another 'winter'.
Despite recent advances in AI research in many application-specific domains, we do not know how to build a human-level artificial intelligence (HLAI). We conjecture that learning from others' experience with language is the essential characteristic that distinguishes human intelligence from the rest. Humans can update the action-value function with a verbal description as if they experienced the states, actions, and corresponding reward sequences firsthand. In this paper, we present a classification of intelligence according to how individual agents learn and propose a definition and a test for HLAI. The main idea is that language acquisition without explicit rewards can be a sufficient test for HLAI.
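As a minimal sketch of the idea, under assumptions of our own (a tabular setting with invented constants), a verbally described transition can drive exactly the same action-value update as firsthand experience:

```python
import numpy as np

# Hypothetical illustration (not from the paper): a transition that was only
# *described* in words is pushed through a standard Q-learning update.
n_states, n_actions = 5, 3
Q = np.zeros((n_states, n_actions))
alpha, gamma = 0.1, 0.95

def update_from_description(state, action, reward, next_state):
    """Apply the usual Q-learning update to a verbally described (s, a, r, s')."""
    td_target = reward + gamma * Q[next_state].max()
    Q[state, action] += alpha * (td_target - Q[state, action])

# "If you take action 2 in state 0, you get a reward of 1 and end up in state 3."
update_from_description(state=0, action=2, reward=1.0, next_state=3)
print(Q[0, 2])   # the value estimate moved without any direct experience
```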
The concept of an intelligent system has emerged in information technology as a type of system derived from successful applications of artificial intelligence. The goal of this paper is to give a general description of an intelligent system, which integrates previous approaches and takes into account recent advances in artificial intelligence. The paper describes an intelligent system in a generic way, identifying its main properties and functional components. The presented description follows a pragmatic approach to be used in an engineering context as a general framework to analyze and build intelligent systems. Its generality and its use are illustrated with real-world system examples and related to artificial intelligence methods.
Recent applications of autonomous agents and robots, such as self-driving cars, scenario-based trainers, exploration robots, and service robots, have brought attention to crucial trust-related challenges associated with the current generation of artificial intelligence (AI) systems. Despite their great success, approaches based on connectionist deep learning neural networks lack the ability to explain their decisions and actions to others. Without symbolic interpretation capabilities, they are black boxes, which renders their decisions or actions opaque and makes it difficult to trust them in safety-critical applications. The recent stance on the explainability of AI systems has witnessed several approaches to eXplainable Artificial Intelligence (XAI); however, most of the studies have focused on data-driven XAI systems applied in computational sciences. Studies addressing the increasingly pervasive goal-driven agents and robots are still missing. This paper reviews approaches to explainable goal-driven intelligent agents and robots, focusing on techniques for explaining and communicating the agents' perceptual functions (e.g., senses and vision) and cognitive reasoning (e.g., beliefs, desires, intentions, plans, and goals) to the humans in the loop. The review highlights key strategies that emphasize transparency, understandability, and continual learning for explainability. Finally, the paper presents requirements for explainability and suggests a roadmap toward realizing effective goal-driven explainable agents and robots.
The booming development of AI has prompted suggestions that AI technology should be "human-centered". However, there is no clear definition of what human-centered AI, or HCAI for short, means. This paper aims to improve this situation by addressing some foundational aspects of HCAI. To do so, we introduce the term HCAI agent to refer to any physical or software computational agent equipped with AI components that interacts and/or collaborates with humans. This paper identifies five main conceptual components involved in an HCAI agent: observations, requirements, actions, explanations, and models. We see the notion of an HCAI agent, together with its components and functions, as a way to bridge the technical and non-technical discussions of human-centered AI technology. In this paper, we focus on the analysis of the case of a single agent operating in a dynamic environment in the presence of humans.
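Purely as an illustration of how the five components might be organized in code, here is a small Python sketch; the field names and method are our assumptions rather than anything defined in the paper.

```python
from dataclasses import dataclass, field

# Illustrative organization of the five conceptual components named above.
@dataclass
class HCAIAgent:
    observations: list = field(default_factory=list)   # what the agent perceives
    requirements: list = field(default_factory=list)    # human-provided goals and constraints
    actions: list = field(default_factory=list)         # what the agent can do
    explanations: list = field(default_factory=list)    # accounts of its decisions for humans
    models: dict = field(default_factory=dict)          # of the world, the task, and the human

    def explain_last_action(self) -> str:
        return self.explanations[-1] if self.explanations else "no action taken yet"

agent = HCAIAgent(requirements=["keep the human informed"])
agent.explanations.append("slowed down because a pedestrian was detected")
print(agent.explain_last_action())
```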
The value alignment problem for artificial intelligence (AI) asks how we can ensure that the "values" (i.e., objective functions) of artificial systems are aligned with the values of humanity. In this paper, I argue that linguistic communication (natural language) is a necessary condition for robust value alignment. I discuss the consequences that the truth of this claim would have for research programs that attempt to ensure value alignment for AI systems, or, more cautiously, that attempt to design robustly beneficial or ethical artificial agents.
The symbolicist, connectionist, and behaviorist approaches to artificial intelligence have achieved much success in various tasks, while we still lack a definition of "intelligence" with sufficient consensus in the community (despite more than 70 different "versions" of a definition). The nature of intelligence remains in the dark. In this work, we do not adopt any of these three traditional approaches, but instead attempt to identify certain fundamental aspects of the nature of intelligence and construct a mathematical model to represent and potentially reproduce them. We first emphasize the importance of defining the scope of the discussion and the granularity of the investigation. We carefully compare artificial and human intelligence and qualitatively demonstrate the process of information abstraction, which we suggest is key to connecting perception and cognition. We then propose a broader notion of "concept", separate the notion of a self-model from that of a world model, and construct a new model called the world-self model (WSM). We show the mechanisms for creating and connecting concepts, and the flow by which the WSM receives, processes, and outputs information about the problems to be solved. We also consider and discuss potential computer implementation issues of the proposed theoretical framework, and finally we propose a unified general framework of intelligence based on the WSM.
This white paper lays out a vision of research and development in the field of artificial intelligence for the next decade (and beyond). Its denouement is a cyber-physical ecosystem of natural and synthetic sense-making, in which humans are integral participants: what we call "shared intelligence". This vision is premised on active inference, a formulation of adaptive behavior that can be read as a physics of intelligence, and which inherits from the physics of self-organization. In this context, we understand intelligence as the capacity to accumulate evidence for a generative model of one's sensed world, also known as self-evidencing. Formally, this corresponds to maximizing (Bayesian) model evidence, via belief updating over several scales: i.e., inference, learning, and model selection. Operationally, this self-evidencing can be realized via (variational) message passing or belief propagation on a factor graph. Crucially, active inference foregrounds an existential imperative of intelligent systems; namely, curiosity or the resolution of uncertainty. This same imperative underwrites belief sharing in ensembles of agents, in which certain aspects (i.e., factors) of each agent's generative world model provide a common ground or frame of reference. Active inference plays a foundational role in this ecology of belief sharing, leading to a formal account of collective intelligence that rests on shared narratives and goals. We also consider the kinds of communication protocols that must be developed to enable such an ecosystem of intelligences and motivate the development of a shared hyper-spatial modeling language and transaction protocol, as a first and key step towards such an ecology.
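For intuition only, here is a minimal Python sketch of self-evidencing in the simplest setting we could assume: a two-state generative model updated by exact Bayes, standing in for the variational message passing on a factor graph that the paper invokes. The likelihood matrix and observation stream are invented.

```python
import numpy as np

# Assumed two-state generative model: A[o, s] = p(observation o | hidden state s).
A = np.array([[0.8, 0.2],
              [0.2, 0.8]])
belief = np.array([0.5, 0.5])              # prior over the two hidden states

for obs in [1, 1, 0, 1]:                   # a stream of observations
    evidence = A[obs] @ belief             # marginal likelihood p(o) under current belief
    surprise = -np.log(evidence)           # self-evidencing means keeping this small
    belief = A[obs] * belief / evidence    # posterior becomes the next prior
    print(f"o={obs}  surprise={surprise:.3f}  belief={np.round(belief, 3)}")
```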
We present a retrospective on the state of Embodied AI research. Our analysis focuses on 13 challenges presented at the Embodied AI Workshop at CVPR. These challenges are grouped into three themes: (1) visual navigation, (2) rearrangement, and (3) embodied vision-and-language. We discuss the dominant datasets within each theme, evaluation metrics for the challenges, and the performance of state-of-the-art models. We highlight commonalities between top approaches to the challenges and identify potential future directions for Embodied AI research.