Knowledge about space and time is necessary to solve problems in the physical world: An AI agent situated in the physical world and interacting with objects often needs to reason about positions of and relations between objects; and as soon as the agent plans its actions to solve a task, it needs to consider the temporal aspect (e.g., what actions to perform over time). Spatio-temporal knowledge, however, is required beyond interacting with the physical world, and is also often transferred to the abstract world of concepts through analogies and metaphors (e.g., "a threat that is hanging over our heads"). As spatial and temporal reasoning is ubiquitous, different attempts have been made to integrate such reasoning into AI systems. In the area of knowledge representation, spatial and temporal reasoning has been largely limited to modeling objects and relations and developing reasoning methods to verify statements about objects and relations. On the other hand, neural network researchers have tried to teach models to learn spatial relations from data, albeit with limited reasoning capabilities. Bridging the gap between these two approaches in a mutually beneficial way could allow us to tackle many complex real-world problems, such as natural language processing, visual question answering, and semantic image segmentation. In this chapter, we view this integration problem from the perspective of Neuro-Symbolic AI. Specifically, we propose a synergy between logical reasoning and machine learning that is grounded in spatial and temporal knowledge. The main topic of this contribution is to describe successful applications, remaining challenges, and evaluation datasets pertaining to this direction.
According to cognitive psychology and related disciplines, the development of complex problem-solving behaviour in biological agents depends on hierarchical cognitive mechanisms. Hierarchical reinforcement learning is a promising computational approach that may eventually yield comparable problem-solving behaviour in artificial agents and robots. However, to date, the problem-solving abilities of many human and non-human animals clearly surpass those of artificial systems. Here, we propose steps to integrate biologically inspired hierarchical mechanisms in order to achieve advanced problem-solving skills in artificial agents. We first review the literature in cognitive psychology to highlight the importance of compositional abstraction and predictive processing. We then relate the gained insights to contemporary hierarchical reinforcement learning methods. Interestingly, our results suggest that all identified cognitive mechanisms have been implemented individually in isolated computational architectures, which raises the question of why no single unified architecture exists that integrates them. As our final contribution, we address this question by providing an integrative perspective on the computational challenges of developing such a unified architecture. We hope that our results can guide the development of more sophisticated cognitively inspired hierarchical machine learning architectures.
Flexibly handling a variety of robot action-language translation tasks is an essential requirement for natural interaction between robots and humans. Previous approaches require changing the configuration of the model architecture for each task during inference, which undermines the premise of multi-task learning. In this work, we propose the Paired Gated Autoencoders (PGAE) for flexible translation between robot actions and language descriptions in a tabletop object manipulation scenario. We train the model in an end-to-end fashion by pairing each action with an appropriate description that contains a signal informing the translation direction. During inference, our model can translate from action to language and vice versa, according to the given language signal. Moreover, thanks to the option of using a pretrained language model as the language encoder, our model has the potential to recognise unseen natural language input. A further capability of our model is that it can recognise and imitate the actions of another agent by using robot demonstrations. The experimental results highlight the flexible bidirectional translation capability of our approach, while also generalising to the actions of the opposite agent.
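To make the pairing idea concrete, the following is a minimal, hypothetical sketch (not the authors' implementation) of a gated autoencoder that fuses a language sequence, whose leading token serves as the translation-direction signal, with an action sequence into a shared representation and decodes both modalities from it; all module names, dimensions, and the gating scheme are illustrative assumptions.

```python
# Illustrative PGAE-style sketch: pair a signal-bearing description with an action
# sequence, gate the two encodings into a shared code, and decode both modalities.
import torch
import torch.nn as nn

class PairedGatedAutoencoder(nn.Module):
    def __init__(self, vocab_size=50, act_dim=8, hidden=64):
        super().__init__()
        self.lang_enc = nn.GRU(vocab_size, hidden, batch_first=True)
        self.act_enc = nn.GRU(act_dim, hidden, batch_first=True)
        self.gate = nn.Linear(2 * hidden, 2)              # soft choice between modalities
        self.lang_dec = nn.GRU(hidden, vocab_size, batch_first=True)
        self.act_dec = nn.GRU(hidden, act_dim, batch_first=True)

    def forward(self, lang_onehot, action_seq):
        # The language sequence starts with a signal token (e.g. "describe" or
        # "execute") that tells the model which way to translate.
        _, h_lang = self.lang_enc(lang_onehot)
        _, h_act = self.act_enc(action_seq)
        w = torch.softmax(self.gate(torch.cat([h_lang[-1], h_act[-1]], dim=-1)), dim=-1)
        shared = w[:, :1] * h_lang[-1] + w[:, 1:] * h_act[-1]   # gated shared code
        steps = shared.unsqueeze(1).repeat(1, action_seq.size(1), 1)
        desc, _ = self.lang_dec(steps)    # translate / reconstruct the description
        act, _ = self.act_dec(steps)      # translate / reconstruct the action trajectory
        return desc, act
```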
Spatial reasoning poses a particular challenge for intelligent agents and is at the same time a prerequisite for their successful interaction and communication in the physical world. One such reasoning task is to describe the position of a target object with respect to the intrinsic orientation of some reference object via relative directions. In this paper, we introduce a novel diagnostic visual question answering (VQA) dataset based on abstract objects. Our dataset allows a fine-grained analysis of end-to-end VQA models' capabilities to ground relative directions. At the same time, model training requires considerably fewer computational resources compared to existing datasets while yielding comparable or even better performance. In addition to the new dataset, we provide a thorough evaluation based on two end-to-end VQA architectures trained on Grid-A-3D. We demonstrate that, within a few epochs, the subtasks required to reason about relative directions, such as recognising and localising objects in a scene and estimating their intrinsic orientations, are learned, so that relative directions are handled in an intuitive manner.
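As a rough illustration of the core subtask described above (not part of the dataset's code), the sketch below names the relative direction of a target with respect to a reference object's intrinsic orientation; the coordinate convention and direction labels are assumptions.

```python
# Map a target position into the reference object's intrinsic frame and
# discretise the angle into one of four relative directions.
import math

def relative_direction(target_xy, reference_xy, reference_heading_rad):
    """Heading 0 means the reference object faces the +y direction."""
    dx = target_xy[0] - reference_xy[0]
    dy = target_xy[1] - reference_xy[1]
    rel_angle = math.atan2(dx, dy) - reference_heading_rad
    rel_angle = (rel_angle + math.pi) % (2 * math.pi) - math.pi  # wrap to [-pi, pi)
    if -math.pi / 4 <= rel_angle < math.pi / 4:
        return "in front of"
    if math.pi / 4 <= rel_angle < 3 * math.pi / 4:
        return "right of"
    if -3 * math.pi / 4 <= rel_angle < -math.pi / 4:
        return "left of"
    return "behind"

# e.g. a cube two units to the reference object's right:
print(relative_direction((2.0, 0.0), (0.0, 0.0), 0.0))  # -> "right of"
```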
Due to the COVID-19 pandemic, robots can be seen as a potential resource for tasks such as helping people work remotely, sustaining social distancing, and improving mental or physical health. To enhance human-robot interaction, robots must become more social by processing multiple social cues in complex real-world environments. Our study adopts a neurorobotic paradigm of gaze-triggered audio-visual crossmodal integration to make the iCub robot express human-like social attention responses. First, a behavioural experiment was conducted with 37 human participants. To improve ecological validity, a round-table meeting scenario with three masked animated avatars was designed, with the middle avatar capable of performing gaze shifts and the two outer avatars capable of producing sound. The gaze direction and sound location were either congruent or incongruent, and masks covered all of the avatars' facial visual cues other than the eyes. We observed that the avatar's gaze could trigger crossmodal social attention, with better human performance in the audio-visual congruent condition than in the incongruent condition. Our computational model, GASP, was then trained to perform social cue detection, audio-visual saliency prediction, and selective attention. After training, the iCub robot was exposed to laboratory conditions similar to those of the human participants, showing that it can replicate attention responses similar to those of humans with respect to the congruent and incongruent conditions, although overall human performance remained superior. This interdisciplinary work thus provides new insights into the mechanism of crossmodal social attention and how it can be modelled for robots in complex environments.
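The toy sketch below only illustrates the intuition of audio-visual selective attention under congruent versus incongruent cues; the actual GASP model is a trained neural architecture, and the saliency values and weighting here are invented for the example.

```python
# Fuse a gaze-cued visual saliency map with an auditory localisation map and
# pick the most salient location; congruent cues reinforce one location.
import numpy as np

def select_attention_target(gaze_cue_map, sound_map, audio_weight=0.5):
    fused = (1 - audio_weight) * gaze_cue_map + audio_weight * sound_map
    return int(np.argmax(fused)), fused

locations = ["left avatar", "middle avatar", "right avatar"]
gaze_cue = np.array([0.7, 0.1, 0.2])            # the middle avatar gazes to the left

congruent_sound = np.array([0.8, 0.1, 0.1])     # sound also comes from the left
incongruent_sound = np.array([0.1, 0.1, 0.8])   # sound comes from the right

for name, sound in [("congruent", congruent_sound), ("incongruent", incongruent_sound)]:
    idx, fused = select_attention_target(gaze_cue, sound)
    print(f"{name}: attend to {locations[idx]} (fused saliency {fused.round(2)})")
```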
Cognitive psychology and related disciplines have identified several key mechanisms that enable intelligent biological agents to learn to solve complex problems. There is compelling evidence that the cognitive mechanisms underlying problem-solving skills in these species are grounded in hierarchical mental representations. Among the most promising computational approaches for providing artificial agents and robots with learning-based problem-solving abilities is hierarchical reinforcement learning. However, so far, existing computational approaches have not been able to equip artificial agents with problem-solving abilities comparable to those of intelligent animals, including humans and non-human primates, crows, or octopuses. Here, we first survey the literature in cognitive psychology and related disciplines and find that many important mental mechanisms involve compositional abstraction, curiosity, and forward models. We then relate these insights to contemporary hierarchical reinforcement learning methods and identify the key machine intelligence approaches that implement these mechanisms. As our main result, we show that all important cognitive mechanisms have been implemented independently in isolated computational architectures, and that approaches integrating them appropriately are lacking. We hope that our results guide the development of more sophisticated cognitively inspired hierarchical methods, so that future artificial agents achieve problem-solving performance at the level of intelligent animals.
Recent applications of autonomous agents and robots, such as self-driving cars, scenario-based trainers, exploration robots, and service robots, have drawn attention to crucial trust-related challenges associated with the current generation of artificial intelligence (AI) systems. Despite their great successes, AI systems based on the connectionist deep learning neural network approach lack the capability to explain their decisions and actions to others. Without symbolic interpretation capabilities, they are black boxes, which renders their decisions or actions opaque and makes it difficult to trust them in safety-critical applications. The recent stance on the explainability of AI systems has witnessed several approaches to eXplainable Artificial Intelligence (XAI); however, most of the studies have focused on data-driven XAI systems applied in computational sciences. Studies addressing the increasingly pervasive goal-driven agents and robots are still missing. This paper reviews approaches to explainable goal-driven intelligent agents and robots, focusing on techniques for explaining and communicating agents' perceptual functions (e.g., senses and vision) and cognitive reasoning (e.g., beliefs, desires, intentions, plans, and goals) to humans in the loop. The review highlights key strategies that emphasise transparency, understandability, and continual learning for explainability. Finally, the paper presents requirements for explainability and suggests a roadmap for the realisation of effective goal-driven explainable agents and robots.
With the advent of Neural Style Transfer (NST), stylizing an image has become quite popular. A convenient way to extend stylization techniques to videos is to apply them on a per-frame basis. However, such per-frame application usually lacks temporal consistency, which manifests as undesirable flickering artifacts. Most existing approaches for enforcing temporal consistency suffer from one or more of the following drawbacks: they (1) are only suitable for a limited range of stylization techniques, (2) can only be applied in an offline fashion requiring the complete video as input, (3) cannot provide consistency for the task of stylization, or (4) do not provide interactive consistency-control. Note that existing consistent video-filtering approaches aim to completely remove flickering artifacts and thus do not respect any specific consistency-control aspect. For stylization tasks, however, consistency-control is an essential requirement, as a certain amount of flickering can add to the artistic look and feel. Moreover, making this control interactive is paramount from a usability perspective. To meet these requirements, we propose an approach that can stylize video streams while providing interactive consistency-control. Apart from stylization, our approach also supports various other image processing filters. To achieve interactive performance, we develop a lite optical-flow network that operates at 80 frames per second (FPS) on desktop systems with sufficient accuracy. We show that the final consistent video output obtained with our flow network is comparable to that obtained with a state-of-the-art optical-flow network. Further, we employ an adaptive combination of local and global consistent features and enable interactive selection between the two. Through objective and subjective evaluation, we show that our method is superior to state-of-the-art approaches.
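A minimal sketch of the general idea of flow-based temporal consistency with a user-controlled blend factor follows; it is not the paper's pipeline, and `stylize`, `estimate_flow`, and `warp` are assumed to be supplied by a stylization filter, an optical-flow network, and a resampling routine respectively.

```python
# Stylize a frame stream while blending in the flow-warped previous output.
# consistency=0 keeps per-frame stylization (maximal flicker);
# consistency=1 maximally reuses the previous stylized frame.
def consistent_stylization(frames, stylize, estimate_flow, warp, consistency=0.6):
    previous_out = None
    previous_in = None
    for frame in frames:
        stylized = stylize(frame)
        if previous_out is None:
            out = stylized
        else:
            flow = estimate_flow(previous_in, frame)    # dense motion field
            warped_prev = warp(previous_out, flow)      # align previous output to the new frame
            out = consistency * warped_prev + (1 - consistency) * stylized
        previous_out, previous_in = out, frame
        yield out
```

Exposing `consistency` as a slider is one way the interactive control described above could be surfaced to a user.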
Vision transformers have emerged as powerful tools for many computer vision tasks. It has been shown that their features and class tokens can be used for salient object segmentation. However, the properties of segmentation transformers remain largely unstudied. In this work, we conduct an in-depth study of the spatial attentions of different backbone layers of semantic segmentation transformers and uncover interesting properties. The spatial attentions of a patch intersecting with an object tend to concentrate within the object, whereas the attentions of larger, more uniform image areas instead exhibit diffusive behavior. In other words, vision transformers trained to segment a fixed set of object classes generalize to objects well beyond this set. We exploit this by extracting heatmaps that can be used to segment unknown objects within diverse backgrounds, such as obstacles in traffic scenes. Our method is training-free and its computational overhead is negligible. We use off-the-shelf transformers trained for street-scene segmentation to process other scene types.
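The sketch below is a hypothetical illustration of turning one backbone layer's self-attention into an object heatmap, in the spirit of the training-free procedure described above; the tensor shapes, normalisation, and choice of query patch are assumptions rather than the paper's exact recipe.

```python
# Build a heatmap of where a query patch attends from one layer's self-attention.
import numpy as np

def attention_heatmap(attn, query_idx, grid_h, grid_w):
    """attn: (num_heads, num_patches, num_patches) self-attention of one layer."""
    mean_attn = attn.mean(axis=0)          # average over heads
    row = mean_attn[query_idx]             # attention distribution of the query patch
    heatmap = row.reshape(grid_h, grid_w)
    return (heatmap - heatmap.min()) / (heatmap.max() - heatmap.min() + 1e-8)

# e.g. a 12x24 patch grid from a street-scene segmentation backbone (dummy data):
attn = np.random.rand(8, 12 * 24, 12 * 24).astype(np.float32)
hm = attention_heatmap(attn, query_idx=150, grid_h=12, grid_w=24)
print(hm.shape, float(hm.max()))
```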
The problem of generating an optimal coalition structure for a given coalition game of rational agents is to find a partition that maximizes their social welfare; it is known to be NP-hard. This paper proposes GCS-Q, a novel quantum-supported solution for coalition structure generation in Induced Subgraph Games (ISGs). GCS-Q starts by considering the grand coalition as the initial coalition structure and proceeds by iteratively splitting coalitions into two non-empty subsets to obtain a coalition structure with a higher coalition value. In particular, given an $n$-agent ISG, GCS-Q solves the optimal split problem $\mathcal{O}(n)$ times using a quantum annealing device, exploring $\mathcal{O}(2^n)$ partitions at each step. We show that GCS-Q outperforms the currently best classical solvers, with a runtime in the order of $n^2$ and an expected worst-case approximation ratio of $93\%$ on standard benchmark datasets.
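As a rough, purely classical illustration of the top-down splitting loop, the sketch below uses a brute-force `best_split` in place of the quantum annealer's optimal-split step (so it is exponential and only suitable for tiny examples); the function names and the stopping rule are assumptions.

```python
# Top-down coalition structure generation for an induced subgraph game (ISG).
from itertools import combinations

def coalition_value(coalition, weights):
    """Value of a coalition in an ISG: sum of edge weights inside it."""
    return sum(weights.get((min(i, j), max(i, j)), 0.0)
               for i, j in combinations(coalition, 2))

def best_split(coalition, weights):
    """Return (value, part_a, part_b) for the best bipartition of `coalition`."""
    best = (coalition_value(coalition, weights), frozenset(coalition), frozenset())
    members = sorted(coalition)
    for r in range(1, len(members)):
        for part in combinations(members, r):
            a, b = frozenset(part), frozenset(coalition) - frozenset(part)
            v = coalition_value(a, weights) + coalition_value(b, weights)
            if v > best[0]:
                best = (v, a, b)
    return best

def gcs_q(agents, weights):
    """Keep splitting coalitions while a split strictly improves the value."""
    structure, queue = [], [frozenset(agents)]
    while queue:
        c = queue.pop()
        v, a, b = best_split(c, weights)
        if b and v > coalition_value(c, weights):
            queue.extend([a, b])      # profitable split: recurse on both halves
        else:
            structure.append(c)       # no split improves the value: keep the coalition
    return structure

# Example: a negative edge makes agents 1 and 3 better off in separate coalitions.
weights = {(1, 2): 3.0, (2, 3): 2.0, (1, 3): -4.0}
print(gcs_q([1, 2, 3], weights))      # -> [{3}, {1, 2}] (order may vary)
```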