The concept of an intelligent system has emerged in information technology as a type of system derived from successful applications of artificial intelligence. The goal of this paper is to give a general description of an intelligent system that integrates previous approaches and takes into account recent advances in artificial intelligence. The paper describes an intelligent system in a generic way, identifying its main properties and functional components. The presented description follows a pragmatic approach, intended to be used in an engineering context as a general framework for analyzing and building intelligent systems. Its generality and its use are illustrated with real-world system examples and related to artificial intelligence methods.
Recent applications of autonomous agents and robots, such as self-driving cars, scenario-based trainers, exploration robots, and service robots, have brought attention to crucial trust-related challenges associated with the current generation of artificial intelligence (AI) systems. Despite their great successes, AI systems based on connectionist deep learning neural network approaches lack the ability to explain their decisions and actions to others. Without symbolic interpretation capabilities, they are black boxes, which renders their decisions and actions opaque and makes it difficult to trust them in safety-critical applications. The recent attention to the explainability of AI systems has produced several approaches to eXplainable Artificial Intelligence (XAI); however, most of these studies have focused on data-driven XAI systems applied in the computational sciences. Studies addressing the increasingly pervasive goal-driven agents and robots are still missing. This paper reviews approaches to explainable goal-driven intelligent agents and robots, focusing on techniques for explaining and communicating agents' perceptual functions (e.g., senses and vision) and cognitive reasoning (e.g., beliefs, desires, intentions, plans, and goals) with humans in the loop. The review highlights key strategies that emphasize transparency, understandability, and continual learning for explainability. Finally, the paper presents requirements for explainability and suggests a roadmap for realizing effective goal-driven explainable agents and robots.
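To give a flavour of the kind of explanation pipeline such reviews discuss, here is a minimal Python sketch that pairs a BDI-style (belief-desire-intention) state with a template-based explanation of the chosen action. The structure, goal names, and precondition convention are illustrative assumptions for this digest, not a method proposed in the paper.

```python
def deliberate(beliefs, desires):
    """Toy BDI deliberation: pick the first desire whose precondition is believed to hold."""
    for goal in desires:
        if beliefs.get(f"{goal}_possible", False):  # naming convention assumed for illustration
            return goal
    return "idle"


def explain(intention, beliefs, desires):
    """Template-based explanation grounded in the agent's own mental state."""
    held = [b for b, v in beliefs.items() if v]
    return f"I intend to '{intention}' because I believe {held} and I desire {desires}."


beliefs = {"deliver_coffee_possible": True, "recharge_possible": False}
desires = ["deliver_coffee", "recharge"]
print(explain(deliberate(beliefs, desires), beliefs, desires))
```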
The boom in AI has prompted suggestions that AI technology should be "human-centered". However, there is no clear definition of what human-centered AI, or HCAI for short, means. This paper aims to improve this situation by addressing some foundational aspects of HCAI. To this end, we introduce the term HCAI agent to refer to any physical or software computational agent equipped with AI components that interacts and/or collaborates with humans. This paper identifies five main conceptual components involved in an HCAI agent: observations, requirements, actions, explanations, and models. We see the notion of an HCAI agent, together with its components and functions, as a way to bridge the technical and non-technical discussions of human-centered AI. In this paper, we focus our analysis on scenarios consisting of a single agent operating in a dynamic environment in the presence of humans.
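To make the five conceptual components concrete, below is a minimal Python sketch of an agent object organized around observations, requirements, actions, explanations, and a model. The class, method names, and trivial policy are illustrative assumptions, not an API defined in the paper.

```python
from dataclasses import dataclass, field
from typing import Any, Dict, List


@dataclass
class HCAIAgent:
    """Illustrative container for the five conceptual HCAI components."""
    model: Dict[str, Any] = field(default_factory=dict)    # the agent's model of the world and the humans in it
    requirements: List[str] = field(default_factory=list)  # constraints humans impose on acceptable behaviour

    def observe(self, percept: Dict[str, Any]) -> None:
        """Fold a new observation into the agent's model."""
        self.model.update(percept)

    def act(self) -> str:
        """Choose an action consistent with the stated requirements (trivial policy for illustration)."""
        if self.model.get("human_present") and "do_not_disturb" in self.requirements:
            return "wait"
        return "proceed"

    def explain(self, action: str) -> str:
        """Return a human-readable justification tying the action to observations and requirements."""
        return f"Chose '{action}' given observations {self.model} and requirements {self.requirements}"


agent = HCAIAgent(requirements=["do_not_disturb"])
agent.observe({"human_present": True})
print(agent.explain(agent.act()))
```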
A new generation of increasingly autonomous and self-learning systems, which we call embodied systems, is about to be developed. When deploying these systems into real-world contexts, we face various engineering challenges, as it is crucial to coordinate the behavior of embodied systems in a beneficial manner, ensure their compatibility with our human-centered social values, and design verifiably safe and reliable human-machine interaction. We argue that the engineering of embodied systems must ensure the trustworthiness of dynamic federations of situationally aware, intention-driven, exploring, ever-evolving, largely unpredictable, and increasingly autonomous embodied systems operating in uncertain, complex, and unpredictable real-world environments. We also identify a number of urgent systems challenges for trustworthy embodied systems, including robust and human-centered AI, cognitive architectures, uncertainty quantification, trustworthy self-integration, and continual analysis and assurance.
The automotive industry has witnessed an increasing level of development in the past decades, from manufacturing manually operated vehicles to manufacturing vehicles with a high level of automation. With the recent developments in artificial intelligence (AI), automotive companies now employ black-box AI models to enable vehicles to perceive their environment and make driving decisions with little or no human input. With the hope of deploying autonomous vehicles (AVs) at a commercial scale, societal acceptance of AVs becomes paramount and may depend largely on their degree of transparency, trustworthiness, and compliance with regulations. The assessment of compliance with these acceptance requirements can be facilitated by providing explanations for the behaviour of AVs. Explainability is therefore seen as an important requirement for AVs: an AV should be able to explain what it has "seen", done, and might do in the environments in which it operates. In this paper, we provide a comprehensive survey of the existing body of work on explainable autonomous driving. First, we open with a motivation for explanations by highlighting and emphasising the importance of transparency, accountability, and trust in AVs, and we review existing regulations and standards related to AVs. Second, we identify and categorise the different stakeholders involved in the development, use, and regulation of AVs, and elicit their explanation requirements for AVs. Third, we provide a rigorous review of previous work on explanations for the different AV operations (i.e., perception, localisation, planning, control, and system management). Finally, we identify relevant challenges and provide recommendations, such as a conceptual framework for AV explainability. This survey aims to provide the fundamental knowledge needed by researchers interested in explainability in AVs.
Augmented Business Process Management Systems (ABPMSs) are an emerging class of process-aware information systems that draw on trustworthy AI technology. An ABPMS enhances the execution of business processes with the aim of making these processes more adaptable, proactive, explainable, and context-sensitive. This manifesto presents a vision for ABPMSs and discusses the research challenges that need to be overcome to realize this vision. To this end, we define the concept of an ABPMS, outline the lifecycle of processes within an ABPMS, discuss the core characteristics of an ABPMS, and derive a set of challenges for realizing systems with these characteristics.
The recent hype surrounding the complexity of language processing models has fostered optimism that machines are acquiring a human-like command of natural language. The field of natural language understanding in artificial intelligence claims to have made great strides in this area; however, the lack of conceptual clarity in how "understanding" is used in this and other disciplines makes it difficult to discern how close we actually are. A comprehensive, interdisciplinary overview of current approaches and remaining challenges has yet to be carried out. Beyond linguistic knowledge, this requires taking into account our species-specific capacities to categorize, memorize, label, and communicate our (sufficiently similar) embodied and situated experiences. Moreover, gauging the actual constraints requires a rigorous analysis of the technical capacities of current models as well as a deeper philosophical reflection on theoretical possibilities and limitations. In this paper, I bring these perspectives (philosophical, cognitive-linguistic, and technical) together to unpack the challenges involved in reaching true (human-like) language understanding. By untangling the theoretical assumptions inherent in current approaches, I hope to illustrate how far we actually are from achieving this goal, if indeed it is the goal.
Autonomous driving has achieved significant research and development milestones over the last decade. Interest in the field is growing as the deployment of autonomous vehicles on roads promises safer and more ecologically friendly transportation systems. With the rise of computationally powerful artificial intelligence (AI) techniques, autonomous vehicles can sense their environment with high precision, make safe real-time decisions, and operate more reliably without human intervention. However, in the current state of the art, the intelligent decision-making of such vehicles is usually not understandable by humans, and this shortcoming hinders the technology from being socially acceptable. Hence, aside from making safe real-time decisions, the AI systems of autonomous vehicles also need to explain how these decisions are reached in order to be regulatory compliant across many jurisdictions. Our study sheds comprehensive light on developing explainable artificial intelligence (XAI) approaches for autonomous vehicles. In particular, we make the following contributions. First, we provide a thorough overview of the current gaps with respect to explanations in the state-of-the-art autonomous vehicle industry. We then present a taxonomy of explanations and explanation recipients in this field. Third, we propose a framework for the architecture of end-to-end autonomous driving systems and demonstrate the role of XAI in both debugging and regulating such systems. Finally, as a direction for future research, we provide a field guide to XAI approaches for autonomous driving that can improve operational safety and transparency towards public approval by regulators, manufacturers, and all engaged stakeholders.
Long-term autonomy of robotic systems implicitly requires dependable platforms that are able to naturally handle hardware and software faults, problems in behaviors, or lack of knowledge. Model-based dependable platforms additionally require the application of rigorous methodologies during system development, including the use of correct-by-construction techniques to implement robot behaviors. As the level of autonomy of robots increases, so does the cost of providing guarantees about the dependability of the system. We argue that the dependability of autonomous robots can benefit from formal models of several cognitive functions: knowledge processing, reasoning, and meta-reasoning. Here, we make the case for a generative model of cognitive architectures for autonomous robotic agents that subscribes to the principles of model-based engineering and dependability, autonomic computing, and knowledge-enabled robotics.
Reasoning with declarative knowledge (RDK) and sequential decision-making (SDM) are two key research areas in artificial intelligence. RDK methods reason with declarative domain knowledge, including commonsense knowledge, that is either provided a priori or acquired over time, while SDM methods (probabilistic planning and reinforcement learning) seek to compute action policies that maximize the expected cumulative utility over a time horizon; both classes of methods reason in the presence of uncertainty. Despite the rich literature in these two areas, researchers have not fully explored their complementary strengths. In this paper, we survey algorithms that leverage RDK methods while making sequential decisions under uncertainty. We discuss significant developments, open problems, and directions for future work.
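As an illustration of the kind of integration the survey covers, the sketch below uses a declarative rule to prune infeasible actions before a value-based decision step. The rule, action names, and hand-set action values (standing in for a learned policy's estimates) are invented for illustration and do not come from the survey.

```python
import random

ACTIONS = ["move_left", "move_right", "pick_up", "wait"]


def violates_knowledge(state, action):
    """Declarative, commonsense-style rule: you cannot pick up an object that is out of reach."""
    return action == "pick_up" and not state["object_in_reach"]


# Hand-set action values standing in for an SDM method's learned estimates.
Q = {"move_left": 0.1, "move_right": 0.4, "pick_up": 0.9, "wait": 0.0}


def choose_action(state, epsilon=0.1):
    """Epsilon-greedy choice restricted to actions consistent with the declarative knowledge."""
    feasible = [a for a in ACTIONS if not violates_knowledge(state, a)]
    if random.random() < epsilon:
        return random.choice(feasible)
    return max(feasible, key=lambda a: Q[a])


print(choose_action({"object_in_reach": False}))  # never returns "pick_up" in this state
```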
The deployment of socially intelligent agents (SIAs) in learning environments has proven to have several advantages across different application domains. Social agent authoring tools allow scenario designers to create tailored experiences with a high degree of control over the SIAs' behaviour; on the other hand, this comes at a cost, as the scenarios and their authoring can become overwhelmingly complex. In this paper, we introduce the concept of explainable social agent authoring tools with the aim of analysing whether authoring tools for social agents are understandable and explainable. To this end, we examine whether the authoring tool Fatima-Toolkit is understandable and whether its authoring steps are explainable from the author's perspective. We conducted two user studies to quantitatively assess the explainability, comprehensibility, and transparency of Fatima-Toolkit from the perspective of scenario designers. One of the key findings is that Fatima-Toolkit's conceptual model is, in general, understandable, but its emotion-based concepts are not as easy to understand and use. Although there are some positive aspects regarding the explainability of Fatima-Toolkit, progress is still needed to achieve a fully explainable social agent authoring tool. We provide a set of key concepts and possible solutions that can guide developers in building such tools.
There has been a recent resurgence in the area of explainable artificial intelligence as researchers and practitioners seek to make their algorithms more understandable. Much of this research is focused on explicitly explaining decisions or actions to a human observer, and it should not be controversial to say that looking at how humans explain to each other can serve as a useful starting point for explanation in artificial intelligence. However, it is fair to say that most work in explainable artificial intelligence uses only the researchers' intuition of what constitutes a 'good' explanation. There exist vast and valuable bodies of research in philosophy, psychology, and cognitive science on how people define, generate, select, evaluate, and present explanations, which argue that people employ certain cognitive biases and social expectations in the explanation process. This paper argues that the field of explainable artificial intelligence should build on this existing research, and reviews relevant papers from philosophy, cognitive psychology/science, and social psychology which study these topics. It draws out some important findings and discusses ways in which these can be infused into work on explainable artificial intelligence.
Recent progress in artificial intelligence (AI) has renewed interest in building systems that learn and think like people. Many advances have come from using deep neural networks trained end-to-end in tasks such as object recognition, video games, and board games, achieving performance that equals or even beats humans in some respects. Despite their biological inspiration and performance achievements, these systems differ from human intelligence in crucial ways. We review progress in cognitive science suggesting that truly human-like learning and thinking machines will have to reach beyond current engineering trends in both what they learn, and how they learn it. Specifically, we argue that these machines should (a) build causal models of the world that support explanation and understanding, rather than merely solving pattern recognition problems; (b) ground learning in intuitive theories of physics and psychology, to support and enrich the knowledge that is learned; and (c) harness compositionality and learning-to-learn to rapidly acquire and generalize knowledge to new tasks and situations. We suggest concrete challenges and promising routes towards these goals that can combine the strengths of recent neural network advances with more structured cognitive models.
Our world is increasingly populated by intelligent robots with varying degrees of autonomy. In order to integrate themselves seamlessly into our society, these machines should possess the ability to navigate the complexities of our daily routines even in the absence of direct human input. In other words, we want these robots to understand the intentions of their partners in order to anticipate the best way to help them. In this paper, we present CASPER (Cognitive Architecture for Social Perception and Engagement in Robots): a symbolic cognitive architecture that uses qualitative spatial reasoning to anticipate the goal pursued by another agent and to compute the best collaborative behaviour. This is performed through an ensemble of parallel processes that model low-level action recognition and high-level goal understanding, both of which are formally verified. We have tested this architecture in a simulated kitchen environment, and the results we have collected show that the robot is able both to recognize an ongoing goal and to collaborate appropriately towards its achievement. This demonstrates a new use of qualitative spatial relations applied to the problem of intention reading in the domain of human-robot interaction.
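To give a flavour of qualitative spatial reasoning applied to intention reading (not CASPER's actual implementation, which is not described in this abstract), the sketch below maps metric distances to coarse qualitative relations and scores candidate goals by whether the observed agent is approaching the object each goal requires. The thresholds, goal definitions, and scoring scheme are invented for illustration.

```python
def qualitative_relation(prev_dist, curr_dist, near_threshold=0.5):
    """Map two successive metric distances to a coarse qualitative relation."""
    if curr_dist < near_threshold:
        return "near"
    return "approaching" if curr_dist < prev_dist else "receding"


# Candidate goals and the object each one requires the observed agent to reach (illustrative).
GOALS = {"make_coffee": "kettle", "set_table": "plate"}


def infer_goal(prev, curr):
    """Score each goal by how strongly the agent moves toward the object it needs."""
    scores = {}
    for goal, obj in GOALS.items():
        rel = qualitative_relation(prev[obj], curr[obj])
        scores[goal] = {"near": 2, "approaching": 1, "receding": 0}[rel]
    return max(scores, key=scores.get)


prev = {"kettle": 2.0, "plate": 1.0}
curr = {"kettle": 1.2, "plate": 1.4}
print(infer_goal(prev, curr))  # -> make_coffee
```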
We are currently unable to specify human goals and societal values in a way that reliably directs AI behavior. Law-making and legal interpretation form a computational engine that converts opaque human values into legible directives. "Law Informs Code" is the research agenda capturing complex computational legal processes, and embedding them in AI. Similar to how parties to a legal contract cannot foresee every potential contingency of their future relationship, and legislators cannot predict all the circumstances under which their proposed bills will be applied, we cannot ex ante specify rules that provably direct good AI behavior. Legal theory and practice have developed arrays of tools to address these specification problems. For instance, legal standards allow humans to develop shared understandings and adapt them to novel situations. In contrast to more prosaic uses of the law (e.g., as a deterrent of bad behavior through the threat of sanction), leveraged as an expression of how humans communicate their goals, and what society values, Law Informs Code. We describe how data generated by legal processes (methods of law-making, statutory interpretation, contract drafting, applications of legal standards, legal reasoning, etc.) can facilitate the robust specification of inherently vague human goals. This increases human-AI alignment and the local usefulness of AI. Toward society-AI alignment, we present a framework for understanding law as the applied philosophy of multi-agent alignment. Although law is partly a reflection of historically contingent political power - and thus not a perfect aggregation of citizen preferences - if properly parsed, its distillation offers the most legitimate computational comprehension of societal values available. If law eventually informs powerful AI, engaging in the deliberative political process to improve law takes on even more meaning.
Many theories, based on neuroscientific and psychological empirical evidence and on computational concepts, have been elaborated to explain the emergence of consciousness in the central nervous system. These theories propose key fundamental mechanisms to explain consciousness, but they only partially connect such mechanisms to the possible functional and adaptive role of consciousness. Recently, some cognitive and neuroscientific models try to solve this gap by linking consciousness to various aspects of goal-directed behaviour, the pivotal cognitive process that allows mammals to flexibly act in challenging environments. Here we propose the Representation Internal-Manipulation (RIM) theory of consciousness, a theory that links the main elements of consciousness theories to components and functions of goal-directed behaviour, ascribing a central role for consciousness to the goal-directed manipulation of internal representations. This manipulation relies on four specific computational operations to perform the flexible internal adaptation of all key elements of goal-directed computation, from the representations of objects to those of goals, actions, and plans. Finally, we propose the concept of 'manipulation agency' relating the sense of agency to the internal manipulation of representations. This allows us to propose that the subjective experience of consciousness is associated with the human capacity to generate and control a simulated internal reality that is vividly perceived and felt through the same perceptual and emotional mechanisms used to tackle the external world.
While AI can benefit humans, it may also harm humans if not developed appropriately. The focus of HCI work is transitioning from conventional human interaction with non-AI computing systems to interaction with AI systems. We conducted a high-level literature review and a holistic analysis of current work from an HCI perspective. Our review and analysis highlight the new changes introduced by AI technology and the new challenges that HCI professionals face when applying the human-centered AI (HCAI) approach to the development of AI systems. We also identify seven main issues in human interaction with AI systems, which HCI professionals did not encounter when developing non-AI computing systems. To further enable the implementation of the HCAI approach, we identify new HCI opportunities tied to specific HCAI-driven design goals that can guide HCI professionals in addressing these new issues. Finally, our assessment of current HCI methods shows their limitations in supporting the development of AI systems. We propose alternative methods that can help overcome these limitations and effectively help HCI professionals apply the HCAI approach to the development of AI systems. We also offer strategic recommendations for HCI professionals to effectively influence the development of AI systems with the HCAI approach, ultimately leading to HCAI systems.
In this paper, we present a computational modeling account of an active self in artificial agents. In particular, we focus on how an agent can be equipped with a sense of control, how it arises in autonomous situated action, and how it, in turn, influences action control. We argue that this requires laying out an embodied cognitive model that combines bottom-up processes (sensorimotor learning and fine-grained adaptation of control) with top-down processes (cognitive processes for strategy selection and decision-making). We propose such a conceptual computational architecture based on principles of predictive processing and free energy minimization. Using this general model, we describe how a sense of control can form across the levels of a control hierarchy and how it can support action control in an unpredictable environment. We present an implementation of this model, as well as first evaluations in a simulated task scenario in which an autonomous agent has to cope with unpredictable situations and experiences a corresponding sense of control. We explore different model parameter settings that lead to different ways of combining low-level and high-level action control. The results show the importance of appropriately weighting information in situations where the need for low-level or high-level action control varies, and they demonstrate how the sense of control can facilitate this.
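The following is a minimal numerical sketch of the precision-weighting idea behind combining low-level and high-level action control in a predictive-processing account. The numbers, and the use of inverse prediction error as a stand-in for the sense of control, are illustrative assumptions, not the paper's implementation.

```python
def fuse_predictions(low_pred, high_pred, low_precision, high_precision):
    """Precision-weighted fusion of a low-level (sensorimotor) and a high-level (strategic) prediction."""
    w_low = low_precision / (low_precision + high_precision)
    return w_low * low_pred + (1.0 - w_low) * high_pred


def sense_of_control(observed, predicted):
    """Crude proxy: control feels high when prediction error is low."""
    return 1.0 / (1.0 + abs(observed - predicted))


# Predictable situation: trust the fine-grained low-level prediction more.
pred = fuse_predictions(low_pred=0.9, high_pred=0.5, low_precision=4.0, high_precision=1.0)
print(pred, sense_of_control(observed=0.85, predicted=pred))

# Unpredictable situation: low-level precision drops, so high-level strategy dominates.
pred = fuse_predictions(low_pred=0.9, high_pred=0.5, low_precision=0.5, high_precision=2.0)
print(pred, sense_of_control(observed=0.2, predicted=pred))
```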
Imitation learning techniques aim to mimic human behavior in a given task. An agent (a learning machine) is trained to perform a task from demonstrations by learning a mapping between observations and actions. The idea of teaching by imitation has been around for many years; however, the field has been gaining attention recently due to advances in computing and sensing as well as rising demand for intelligent applications. The paradigm of learning by imitation is gaining popularity because it facilitates teaching complex tasks with minimal expert knowledge of the tasks. Generic imitation learning methods could potentially reduce the problem of teaching a task to that of providing demonstrations, without the need for explicit programming or designing reward functions specific to the task. Modern sensors are able to collect and transmit high volumes of data rapidly, and processors with high computational power allow fast processing that maps the sensory data to actions in a timely manner. This opens the door for many potential AI applications that require real-time perception and reaction, such as humanoid robots, self-driving vehicles, human-computer interaction, and computer games, to name a few. However, specialized algorithms are needed to learn models effectively and robustly, as learning by imitation poses its own set of challenges. In this paper, we survey imitation learning methods and present design options in different steps of the learning process. We introduce a background and motivation for the field as well as highlight challenges specific to the imitation problem. Methods for designing and evaluating imitation learning tasks are categorized and reviewed. Special attention is given to learning methods in robotics and games, as these domains are the most popular in the literature and provide a wide array of problems and methodologies. We extensively discuss combining imitation learning approaches using different sources and methods, as well as incorporating other motion learning methods to enhance imitation. We also discuss the potential impact on industry, present major applications, and highlight current and future research directions.
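To illustrate the core idea of learning a mapping from observations to actions from demonstrations, here is a minimal behavioural-cloning sketch that fits a least-squares linear policy on synthetic demonstration data. The steering-from-lane-offset setup and all numbers are toy assumptions, not one of the surveyed methods.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic demonstrations: the expert steers proportionally to the (negated) lane offset, with noise.
observations = rng.uniform(-1.0, 1.0, size=(200, 1))                    # lane offset
actions = -2.0 * observations[:, 0] + 0.05 * rng.standard_normal(200)   # steering command

# Behavioural cloning as supervised regression: fit action = w * obs + b by least squares.
X = np.hstack([observations, np.ones((200, 1))])
w, b = np.linalg.lstsq(X, actions, rcond=None)[0]

print(f"learned policy: steer = {w:.2f} * offset + {b:.2f}")
print("policy output for offset 0.3:", w * 0.3 + b)
```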
Amphibious ground-aerial vehicles fuse flying and driving modes to enable more flexible air-land mobility and have recently received growing attention. By analyzing existing amphibious vehicles, we highlight the autonomous driving functions that are essential for their effective use in complex three-dimensional urban transportation systems. We review and summarize the key enabling technologies for intelligent flying-driving in existing amphibious vehicle designs, identify the major technological barriers, and propose potential solutions for future research and innovation. This paper aims to serve as a guide for the research and development of intelligent amphibious vehicles for future urban transportation.