Recently, a taxonomy of design patterns for hybrid AI, together with a graphical language, was proposed, combining symbolic and sub-symbolic learning and reasoning. In this paper, we extend this taxonomy with actors and their interactions. The main contributions of this paper are: 1) an extended taxonomy of distributed hybrid AI systems, with actors and interactions; and 2) examples illustrating several design patterns relevant to multi-agent systems and human-agent interaction.
The purpose of this chapter is to propose a retrospective analysis of the evolution of programming abstractions, from {\em procedures}, {\em objects}, {\em actors}, {\em components}, and {\em services}, up to {\em agents}, by placing them within a general historical perspective. A common frame of reference with three axes/dimensions is chosen: {\em action selection} at the level of one entity, {\em coupling flexibility} between entities, and {\em abstraction level}. We may indeed observe a continuous quest for higher {\em flexibility} (through notions such as the {\em reification} of {\em connections}) and for higher levels of {\em abstraction}. The concepts of components, services, and agents share some common objectives (notably {\em software modularity and reconfigurability}), with multi-agent systems raising the further concepts of {\em autonomy} and {\em coordination}, notably through the notion of {\em self-organization} and the use of {\em knowledge}. We hope that this analysis helps to highlight some of the basic forces motivating the progress of programming abstractions, and may thus provide some seeds for reflection on future programming abstractions.
A new generation of increasingly autonomous and self-learning systems, which we call embodied systems, is about to be developed. When deploying these systems into real-world contexts, we face various engineering challenges, as it is crucial to coordinate the behavior of embodied systems in a beneficial manner, ensure their compatibility with our human-centered social values, and design verifiably safe and reliable human-machine interaction. We argue that the engineering of such systems must ensure the trustworthiness of dynamic federations of situation-aware, intent-driven, explorative, ever-evolving, largely unpredictable, and increasingly autonomous embodied systems operating in uncertain, complex, and unpredictable real-world environments. We also identify a number of urgent systems challenges for trustworthy embodied systems, including robust and human-centered AI, cognitive architectures, uncertainty quantification, trustworthy self-integration, and continual analysis and assurance.
This article presents a survey of literature in the area of Human-Robot Interaction (HRI), specifically on systems containing more than two agents (i.e., having multiple humans and/or multiple robots). We identify three core aspects of ``Multi-agent" HRI systems that are useful for understanding how these systems differ from dyadic systems and from one another. These are the Team structure, Interaction style among agents, and the system's Computational characteristics. Under these core aspects, we present five attributes of HRI systems, namely Team size, Team composition, Interaction model, Communication modalities, and Robot control. These attributes are used to characterize and distinguish one system from another. We populate resulting categories with examples from recent literature along with a brief discussion of their applications and analyze how these attributes differ from the case of dyadic human-robot systems. We summarize key observations from the current literature, and identify challenges and promising areas for future research in this domain. In order to realize the vision of robots being part of the society and interacting seamlessly with humans, there is a need to expand research on multi-human -- multi-robot systems. Not only do these systems require coordination among several agents, they also involve multi-agent and indirect interactions which are absent from dyadic HRI systems. Adding multiple agents in HRI systems requires advanced interaction schemes, behavior understanding and control methods to allow natural interactions among humans and robots. In addition, research on human behavioral understanding in mixed human-robot teams also requires more attention. This will help formulate and implement effective robot control policies in HRI systems with large numbers of heterogeneous robots and humans; a team composition reflecting many real-world scenarios.
This paper introduces AWKWARD, an architecture for the development of hybrid agents in multi-agent systems. AWKWARD agents can reconfigure their plans in real time to align with social-role requirements under changing environmental and social circumstances. The proposed hybrid architecture leverages Behaviour Oriented Design (BOD) to develop agents with reactive plans, together with the well-established OperA framework, which provides organizational, social, and interaction definitions for validating and adjusting agents' behaviors. Together, OperA and BOD achieve real-time adjustment of agent plans for evolving social roles, with the added benefit of transparency into the interactions that drive behavioral change in individual agents. We present this architecture to motivate bridging between the traditional symbolic and behavior-based AI communities, where such merged solutions can help MAS researchers build stronger and more robust teams of intelligent agents. We use DOTA2, a game in which success relies heavily on social interaction, as the medium to demonstrate a sample implementation of our proposed hybrid architecture.
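The interplay of a reactive plan with an organizational layer can be sketched roughly as follows. This is a minimal illustration under stated assumptions: the class names, rules, and the veto mechanism are hypothetical stand-ins, not the paper's actual BOD/OperA API.

```python
# Sketch: a BOD-style prioritized reactive plan whose selected action is
# checked against organizational role norms (OperA-like) before execution.
# All names here are illustrative, not taken from the paper.

class ReactivePlan:
    """An ordered list of (condition, action) rules, highest priority first."""
    def __init__(self, rules):
        self.rules = rules

    def select_action(self, state):
        for condition, action in self.rules:
            if condition(state):
                return action
        return "idle"

class RoleNorms:
    """Stand-in for role definitions: the set of actions a social role permits."""
    def __init__(self, permitted):
        self.permitted = set(permitted)

    def validate(self, action):
        return action in self.permitted

def step(plan, norms, state):
    action = plan.select_action(state)
    # The organizational layer vetoes actions that violate the agent's role.
    return action if norms.validate(action) else "idle"

plan = ReactivePlan([
    (lambda s: s["enemy_near"], "retreat"),
    (lambda s: s["ally_low_hp"], "heal_ally"),
    (lambda s: True, "push_lane"),
])
norms = RoleNorms({"heal_ally", "retreat"})  # a "support" role: no lane pushing

print(step(plan, norms, {"enemy_near": False, "ally_low_hp": True}))   # heal_ally
print(step(plan, norms, {"enemy_near": False, "ally_low_hp": False}))  # vetoed -> idle
```

The point of the sketch is the separation of concerns: the reactive plan decides *what the agent wants to do next*, while the normative layer decides *whether the role allows it*.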
Recent applications of autonomous agents and robots, such as self-driving cars, scenario-based trainers, exploration robots, and service robots, have brought attention to crucial trust-related challenges associated with the current generation of artificial intelligence (AI) systems. Despite their great successes, AI systems based on the connectionist deep learning neural network approach lack the capability to explain their decisions and actions to others. Without symbolic interpretation capabilities, they are black boxes, which renders their decisions or actions opaque and makes it difficult to trust them in safety-critical applications. The recent stance on the explainability of AI systems has produced several approaches to eXplainable Artificial Intelligence (XAI); however, most studies have focused on data-driven XAI systems applied in the computational sciences. Studies addressing the increasingly pervasive goal-driven agents and robots are still missing. This paper reviews approaches to explainable goal-driven intelligent agents and robots, focusing on techniques for explaining and communicating an agent's perceptual functions (e.g., senses, vision) and cognitive reasoning (e.g., beliefs, desires, intentions, plans, and goals) with humans in the loop. The review highlights key strategies that emphasize transparency, understandability, and continual learning for explainability. Finally, the paper presents requirements for explainability and suggests a road map for the realization of effective goal-driven explainable agents and robots.
The booming of AI has prompted suggestions that AI techniques should be "human-centered". However, there is no clear definition of what human-centered AI, or HCAI for short, means. This paper aims to improve this situation by addressing some foundational aspects of HCAI. To do so, we introduce the term HCAI agent to refer to any physical or software computational agent equipped with AI components that interacts and/or collaborates with humans. This paper identifies five main conceptual components involved in an HCAI agent: observations, requirements, actions, explanations, and models. We see the notion of an HCAI agent, together with its components and functions, as a way to bridge the technical and non-technical discussions of human-centered AI. In this paper, we focus our analysis on scenarios involving a single agent operating in dynamic environments in the presence of humans.
The concept of intelligent system has emerged in information technology as a type of system derived from successful applications of artificial intelligence. The goal of this paper is to give a general description of an intelligent system, which integrates previous approaches and takes into account recent advances in artificial intelligence. The paper describes an intelligent system in a generic way, identifying its main properties and functional components. The presented description follows a pragmatic approach to be used in an engineering context as a general framework to analyze and build intelligent systems. Its generality and its use is illustrated with real-world system examples and related with artificial intelligence methods.
Our world is increasingly pervaded by intelligent robots with varying degrees of autonomy. To integrate themselves seamlessly into our society, these machines should possess the ability to navigate the complexities of our daily routines even in the absence of direct human input. In other words, we want these robots to understand the intentions of their partners in order to predict the best way to help them. In this paper, we present CASPER (Cognitive Architecture for Social Perception and Engagement in Robots): a symbolic cognitive architecture that uses qualitative spatial reasoning to anticipate the goal pursued by another agent and to compute the best collaborative behavior. This is performed through an ensemble of parallel processes that model low-level action recognition and high-level goal understanding, both of which are formally verified. We have tested this architecture in a simulated kitchen environment, and the results we have collected show that the robot is able both to recognize an ongoing goal and to collaborate appropriately toward its achievement. This demonstrates a new use of qualitative spatial relations applied to the problem of intention reading in the domain of human-robot interaction.
While AI has benefited humans, it may also harm humans if not developed appropriately. The focus of HCI work is transitioning from conventional human interaction with non-AI computing systems to interaction with AI systems. We conducted a high-level literature review and a holistic analysis of current work from an HCI perspective. Our review and analysis highlight the new changes introduced by AI technology and the new challenges that HCI professionals face when applying the human-centered AI (HCAI) approach in the development of AI systems. We also identify seven main issues in human interaction with AI systems, which HCI professionals did not encounter when developing non-AI computing systems. To further enable the implementation of the HCAI approach, we identify new HCI opportunities tied to specific HCAI-driven design goals to guide HCI professionals in addressing these new issues. Finally, our assessment of current HCI methods shows their limitations in supporting the development of AI systems. We propose alternative methods that can help overcome these limitations and effectively help HCI professionals apply the HCAI approach to the development of AI systems. We also offer strategic recommendations for HCI professionals to effectively influence the development of AI systems with the HCAI approach, eventually developing HCAI systems.
Augmented Business Process Management Systems (ABPMSs) are an emerging class of process-aware information systems that draw upon trustworthy AI technology. An ABPMS enhances the execution of business processes with the aim of making these processes more adaptable, proactive, explainable, and context-sensitive. This manifesto presents a vision for ABPMSs and discusses the research challenges that need to be surmounted to realize this vision. To this end, we define the concept of an ABPMS, outline the lifecycle of processes within an ABPMS, discuss the core characteristics of an ABPMS, and derive a set of challenges for realizing systems with these characteristics.
This paper presents a Runtime Verification (RV) approach for Multi-Agent Systems (MAS) using the JaCaMo framework. Our objective is to bring a layer of security to the MAS. This layer is capable of controlling events during the execution of the system without requiring a specific implementation in each agent's behavior to recognize the events. MAS have been used in the context of hybrid intelligence. This use requires communication between software agents and human beings. In some cases, communication takes place via natural-language dialogue. However, this kind of communication raises a concern related to controlling the flow of the dialogue, so that agents can prevent any change in the topic of discussion that could impair their reasoning. We demonstrate the implementation of a monitor designed to control the dialogue flow in a MAS that communicates with the user through natural language to aid decision-making in a hospital bed allocation scenario.
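The topic-control idea can be sketched as a small runtime monitor. This is a deliberately simplified illustration: keyword matching stands in for real topic recognition, and all names and thresholds are assumptions, not the paper's implementation or the JaCaMo API.

```python
# Sketch of a runtime-verification monitor that watches a natural-language
# dialogue and flags utterances that drift away from the permitted topic,
# so the agent can steer the conversation back. Illustrative names only.

ALLOWED_TOPIC_KEYWORDS = {"bed", "patient", "ward", "allocation", "transfer"}

class DialogueMonitor:
    def __init__(self, keywords, max_off_topic=2):
        self.keywords = keywords
        self.max_off_topic = max_off_topic
        self.off_topic_count = 0  # consecutive off-topic utterances

    def check(self, utterance):
        """Return True if the utterance stays on topic, False otherwise."""
        on_topic = any(k in utterance.lower() for k in self.keywords)
        self.off_topic_count = 0 if on_topic else self.off_topic_count + 1
        return on_topic

    @property
    def violated(self):
        # Monitored property: the dialogue must not drift off topic for
        # more than max_off_topic consecutive utterances.
        return self.off_topic_count > self.max_off_topic

monitor = DialogueMonitor(ALLOWED_TOPIC_KEYWORDS)
print(monitor.check("Which ward has a free bed for this patient?"))  # True
print(monitor.check("Did you see the news this morning?"))           # False
print(monitor.violated)  # False: only one off-topic turn so far
```

The monitor sits outside the agents, observing utterance events, which matches the abstract's point that no event-recognition code needs to be added to each agent's behavior.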
We view a memory system as a key component of any technological cognitive system, one that can bridge the gap between the high-level symbolic discrete representations used for reasoning, planning, and semantic scene understanding, and the low-level continuous representations used for control. In this work, we describe the conceptual and technical characteristics that such a memory system must fulfil, together with the underlying data representations. We identify these characteristics based on the experience gained in developing the ARMAR humanoid robot systems, and we discuss practical examples demonstrating what the memory system of a humanoid robot performing tasks in human-centered environments should support, such as multi-modality, introspectability, hetero-associativity, predictability, or an inherently episodic structure. Based on these characteristics, we extended our robot software framework ArmarX into a unified cognitive architecture that is used in robots of the ARMAR humanoid robot family. Furthermore, we describe how the development of robot software led us to this novel memory-enabled cognitive architecture, and we show how the robots use the memory to implement memory-driven behaviors.
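Hetero-associativity, one of the characteristics listed above, can be illustrated with a toy structure: a cue from one modality retrieves entries stored in another. The classes and data below are hypothetical illustrations only; they do not reflect the ArmarX memory API.

```python
# Toy sketch of a hetero-associative, multi-modal memory: a cue entity
# observed in one modality (vision: an object label) retrieves entries
# stored in another modality (proprioception: a grasp description).
from collections import defaultdict

class AssociativeMemory:
    def __init__(self):
        # Entries grouped by modality, each tagged with the entity it concerns.
        self.store = defaultdict(list)  # modality -> [(entity, data), ...]

    def remember(self, modality, entity, data):
        self.store[modality].append((entity, data))

    def recall(self, cue_entity, target_modality):
        """Hetero-associative lookup: cue from any modality, answer from another."""
        return [d for e, d in self.store[target_modality] if e == cue_entity]

mem = AssociativeMemory()
mem.remember("vision", "cup", {"color": "red"})
mem.remember("proprioception", "cup", {"grasp": "top", "width_cm": 8})

print(mem.recall("cup", "proprioception"))  # [{'grasp': 'top', 'width_cm': 8}]
```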
This volume contains revised versions of the papers selected for the third volume of the Online Handbook of Argumentation for AI (OHAAI). Previously, formal theories of argument and argument interaction have been proposed and studied, and this has led to the more recent study of computational models of argument. Argumentation, as a field within artificial intelligence (AI), is highly relevant for researchers interested in symbolic representations of knowledge and defeasible reasoning. The purpose of this handbook is to provide an open access and curated anthology for the argumentation research community. OHAAI is designed to serve as a research hub to keep track of the latest and upcoming PhD-driven research on the theory and application of argumentation in all areas related to AI.
Many theories, based on neuroscientific and psychological empirical evidence and on computational concepts, have been elaborated to explain the emergence of consciousness in the central nervous system. These theories propose key fundamental mechanisms to explain consciousness, but they only partially connect such mechanisms to the possible functional and adaptive role of consciousness. Recently, some cognitive and neuroscientific models try to solve this gap by linking consciousness to various aspects of goal-directed behaviour, the pivotal cognitive process that allows mammals to flexibly act in challenging environments. Here we propose the Representation Internal-Manipulation (RIM) theory of consciousness, a theory that links the main elements of consciousness theories to components and functions of goal-directed behaviour, ascribing a central role for consciousness to the goal-directed manipulation of internal representations. This manipulation relies on four specific computational operations to perform the flexible internal adaptation of all key elements of goal-directed computation, from the representations of objects to those of goals, actions, and plans. Finally, we propose the concept of `manipulation agency' relating the sense of agency to the internal manipulation of representations. This allows us to propose that the subjective experience of consciousness is associated to the human capacity to generate and control a simulated internal reality that is vividly perceived and felt through the same perceptual and emotional mechanisms used to tackle the external world.
Recent progress in artificial intelligence (AI) has renewed interest in building systems that learn and think like people. Many advances have come from using deep neural networks trained end-to-end in tasks such as object recognition, video games, and board games, achieving performance that equals or even beats humans in some respects. Despite their biological inspiration and performance achievements, these systems differ from human intelligence in crucial ways. We review progress in cognitive science suggesting that truly human-like learning and thinking machines will have to reach beyond current engineering trends in both what they learn, and how they learn it. Specifically, we argue that these machines should (a) build causal models of the world that support explanation and understanding, rather than merely solving pattern recognition problems; (b) ground learning in intuitive theories of physics and psychology, to support and enrich the knowledge that is learned; and (c) harness compositionality and learning-to-learn to rapidly acquire and generalize knowledge to new tasks and situations. We suggest concrete challenges and promising routes towards these goals that can combine the strengths of recent neural network advances with more structured cognitive models.
Reasoning and planning for mobile robots is a challenging problem, as the world evolves over time and the robot's goals may therefore change. One technique to tackle this problem is goal reasoning, where the agent reasons not only about its actions, but also about which goals to pursue. While goal reasoning for single agents has been researched extensively, distributed multi-agent goal reasoning poses additional challenges, especially in a distributed setting. In such a context, some form of coordination is necessary to achieve cooperative behavior. Previous goal reasoning approaches share an agent's world model with the other agents, which already enables basic cooperation. However, an agent's goals, and thus its intentions, are typically not shared. In this paper, we present a method that addresses this limitation. Extending an existing goal reasoning framework, we propose enabling cooperative behavior between multiple agents through promises, where an agent may guarantee that certain facts will be true at some point in the future. Sharing these promises allows other agents to consider not only the current state of the world, but also the intentions of other agents when deciding which goal to pursue next. We describe how promises can be incorporated into the goal lifecycle, a commonly used goal refinement mechanism. We then show how promises can be used when planning for a particular goal by connecting them to timed initial literals (TILs) from PDDL planning. Finally, we evaluate our prototype implementation in a simplified logistics scenario.
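The connection between promises and timed initial literals can be sketched as follows. The promise format and helper function are assumptions for illustration; PDDL 2.2 TILs do, however, take the form `(at <time> <literal>)` shown here.

```python
# Rough sketch: translating another agent's promise ("fact F will hold at
# time T") into a PDDL timed initial literal (TIL), so the planner can rely
# on it. The promise representation below is a hypothetical illustration.

def promise_to_til(fact, at_time):
    """Render a promised fact as a PDDL timed initial literal string."""
    return f"(at {at_time} {fact})"

# Agent B promises that package p1 will be at depot d2 ten time units from now.
promises = [("(at-location p1 d2)", 10.0)]

tils = [promise_to_til(fact, t) for fact, t in promises]
print(tils[0])  # (at 10.0 (at-location p1 d2))
```

Injecting such TILs into the problem's `:init` section lets the receiving agent plan as if the promised fact becomes true at the stated time, without modeling the other agent's actions.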
There has been a recent resurgence in the area of explainable artificial intelligence as researchers and practitioners seek to make their algorithms more understandable. Much of this research is focused on explicitly explaining decisions or actions to a human observer, and it should not be controversial to say that looking at how humans explain to each other can serve as a useful starting point for explanation in artificial intelligence. However, it is fair to say that most work in explainable artificial intelligence uses only the researchers' intuition of what constitutes a 'good' explanation. There exists vast and valuable bodies of research in philosophy, psychology, and cognitive science of how people define, generate, select, evaluate, and present explanations, which argues that people employ certain cognitive biases and social expectations towards the explanation process. This paper argues that the field of explainable artificial intelligence should build on this existing research, and reviews relevant papers from philosophy, cognitive psychology/science, and social psychology, which study these topics. It draws out some important findings, and discusses ways that these can be infused with work on explainable artificial intelligence.
Long-term autonomy of robotic systems implicitly requires dependable platforms that can naturally handle hardware and software faults, problems in behaviors, or lack of knowledge. Model-based dependable platforms additionally require the application of rigorous methodologies during system development, including the use of correct-by-construction techniques for implementing robot behaviors. As the level of autonomy in robots increases, so does the cost of providing guarantees about the system's dependability. We argue that the dependability of autonomous robots can benefit from formal models of several cognitive functions: knowledge processing, reasoning, and meta-reasoning. Here, we make the case for generative models of cognitive architectures for autonomous robotic agents that subscribe to the principles of model-based engineering and dependability, autonomic computing, and knowledge-enabled robotics.
In this tutorial paper, we look into the evolution and prospect of network architecture and propose a novel conceptual architecture for the 6th generation (6G) networks. The proposed architecture has two key elements, i.e., holistic network virtualization and pervasive artificial intelligence (AI). The holistic network virtualization consists of network slicing and digital twin, from the aspects of service provision and service demand, respectively, to incorporate service-centric and user-centric networking. The pervasive network intelligence integrates AI into future networks from the perspectives of networking for AI and AI for networking, respectively. Building on holistic network virtualization and pervasive network intelligence, the proposed architecture can facilitate three types of interplay, i.e., the interplay between digital twin and network slicing paradigms, between model-driven and data-driven methods for network management, and between virtualization and AI, to maximize the flexibility, scalability, adaptivity, and intelligence for 6G networks. We also identify challenges and open issues related to the proposed architecture. By providing our vision, we aim to inspire further discussions and developments on the potential architecture of 6G.