Spatial understanding is a fundamental aspect of computer vision and integral for human-level reasoning about images, making it an important component for grounded language understanding. While recent large-scale text-to-image synthesis (T2I) models have shown unprecedented improvements in photorealism, it is unclear whether they have reliable spatial understanding capabilities. We investigate the ability of T2I models to generate correct spatial relationships among objects and present VISOR, an evaluation metric that captures how accurately the spatial relationship described in text is generated in the image. To benchmark existing models, we introduce a large-scale challenge dataset SR2D that contains sentences describing two objects and the spatial relationship between them. We construct and harness an automated evaluation pipeline that employs computer vision to recognize objects and their spatial relationships, and we employ it in a large-scale evaluation of T2I models. Our experiments reveal a surprising finding that, although recent state-of-the-art T2I models exhibit high image quality, they are severely limited in their ability to generate multiple objects or the specified spatial relations such as left/right/above/below. Our analyses demonstrate several biases and artifacts of T2I models such as the difficulty with generating multiple objects, a bias towards generating the first object mentioned, spatially inconsistent outputs for equivalent relationships, and a correlation between object co-occurrence and spatial understanding capabilities. We conduct a human study that shows the alignment between VISOR and human judgment about spatial understanding. We offer the SR2D dataset and the VISOR metric to the community in support of T2I spatial reasoning research.
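The abstract does not spell out how VISOR is computed, but the core automated check is straightforward to illustrate. Below is a minimal Python sketch, assuming an off-the-shelf object detector returns labeled bounding boxes; the centroid-comparison rule and the detector output format are illustrative assumptions, not the paper's exact pipeline.

```python
# Minimal sketch of an automated spatial-relation check in the spirit of VISOR.
# Assumptions: detections are (label, confidence, [x_min, y_min, x_max, y_max])
# tuples from an off-the-shelf detector; the relation is decided by comparing
# box centroids. Illustrative only, not the paper's exact metric.

def centroid(box):
    x_min, y_min, x_max, y_max = box
    return ((x_min + x_max) / 2.0, (y_min + y_max) / 2.0)

def relation_holds(box_a, box_b, relation):
    """Check whether object A stands in `relation` to object B
    (image coordinates: x grows rightward, y grows downward)."""
    (ax, ay), (bx, by) = centroid(box_a), centroid(box_b)
    if relation == "left of":
        return ax < bx
    if relation == "right of":
        return ax > bx
    if relation == "above":
        return ay < by
    if relation == "below":
        return ay > by
    raise ValueError(f"unknown relation: {relation}")

def visor_like_score(detections, obj_a, obj_b, relation, threshold=0.5):
    """Return 1 if both objects are detected and the stated relation holds, else 0."""
    boxes_a = [b for label, conf, b in detections if label == obj_a and conf >= threshold]
    boxes_b = [b for label, conf, b in detections if label == obj_b and conf >= threshold]
    if not boxes_a or not boxes_b:
        return 0  # object presence is a precondition for spatial correctness
    return int(any(relation_holds(ba, bb, relation) for ba in boxes_a for bb in boxes_b))

# Example: "a dog to the left of a bicycle"
dets = [("dog", 0.91, [30, 120, 180, 300]), ("bicycle", 0.88, [260, 100, 470, 330])]
print(visor_like_score(dets, "dog", "bicycle", "left of"))  # -> 1
```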
'Actions' play a vital role in how humans interact with the world. Thus, autonomous agents that would assist us in everyday tasks also require the capability to perform 'Reasoning about Actions & Change' (RAC). This has been an important research direction in Artificial Intelligence (AI) in general, but the study of RAC with visual and linguistic inputs is relatively recent. CLEVR_HYP (Sampat et al., 2021) is one such testbed for hypothetical vision-language reasoning with actions as the key focus. In this work, we propose a novel learning strategy that can improve reasoning about the effects of actions. We implement an encoder-decoder architecture to learn the representation of actions as vectors, and combine it with existing modality parsers and a scene graph question answering model to evaluate our proposed system on the CLEVR_HYP dataset. We conduct thorough experiments to demonstrate the effectiveness of our proposed approach and discuss its advantages over previous baselines in terms of performance, data efficiency, and generalization capability.
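As a rough illustration of the encoder-decoder idea, the following PyTorch sketch compresses a tokenized action sentence into a single action vector and trains a decoder to reconstruct the sentence from it. The vocabulary size, recurrent architecture, and reconstruction objective are assumptions for illustration, not the paper's actual model.

```python
import torch
import torch.nn as nn

class ActionEncoderDecoder(nn.Module):
    """Toy encoder-decoder: the encoder compresses a tokenized action sentence
    into a single action vector; the decoder is trained to reconstruct the
    token sequence from that vector, so the bottleneck must retain the action's
    content. All sizes and the objective are illustrative assumptions."""

    def __init__(self, vocab_size=1000, embed_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.encoder = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.decoder = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def encode(self, tokens):                      # tokens: (batch, seq_len)
        _, h = self.encoder(self.embed(tokens))    # h: (1, batch, hidden_dim)
        return h.squeeze(0)                        # action vector: (batch, hidden_dim)

    def forward(self, tokens):
        action_vec = self.encode(tokens)
        # Teacher forcing: condition the decoder on the action vector and
        # predict each next token of the same sentence.
        dec_out, _ = self.decoder(self.embed(tokens[:, :-1]),
                                  action_vec.unsqueeze(0).contiguous())
        return self.out(dec_out), action_vec

model = ActionEncoderDecoder()
tokens = torch.randint(0, 1000, (2, 8))            # two dummy action sentences
logits, action_vec = model(tokens)
loss = nn.CrossEntropyLoss()(logits.reshape(-1, 1000), tokens[:, 1:].reshape(-1))
```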
'Actions' play a vital role in how humans interact with the world. Thus, autonomous agents that would assist us in everyday tasks also require the capability to perform 'Reasoning about Actions & Change' (RAC). Recently, there has been growing interest in the study of RAC with visual and linguistic inputs. Graphs are often used to represent the semantic structure of visual content (i.e., objects, their attributes, and the relationships among them), commonly referred to as scene graphs. In this work, we propose a novel method that leverages scene-graph representations of images to reason about the effects of actions described in natural language. We experiment with the existing CLEVR_HYP (Sampat et al., 2021) dataset and show that our proposed approach is effective in terms of performance, data efficiency, and generalization capability compared to existing models.
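A toy example of the underlying idea: apply the parsed effect of an action directly to a scene graph, then answer a question over the updated graph. The graph schema, action format, and query below are illustrative assumptions rather than the CLEVR_HYP parsers or the proposed model.

```python
# Minimal sketch of reasoning over a scene graph by applying an action's effect
# before answering a question. Schema, action format, and query are assumptions.

scene_graph = {
    "objects": {
        1: {"shape": "cube", "color": "red", "size": "large"},
        2: {"shape": "sphere", "color": "blue", "size": "small"},
    },
    "relations": [(2, "left_of", 1)],   # (subject_id, relation, object_id)
}

def apply_action(graph, action):
    """Apply a parsed action of the form ('change_color', object_id, new_value)
    or ('remove', object_id) directly to the graph."""
    kind = action[0]
    if kind == "change_color":
        _, obj_id, new_color = action
        graph["objects"][obj_id]["color"] = new_color
    elif kind == "remove":
        _, obj_id = action
        del graph["objects"][obj_id]
        graph["relations"] = [r for r in graph["relations"] if obj_id not in (r[0], r[2])]
    return graph

def count_objects(graph, **attrs):
    """Answer a simple counting query over the (possibly updated) graph."""
    return sum(all(o.get(k) == v for k, v in attrs.items()) for o in graph["objects"].values())

updated = apply_action(scene_graph, ("change_color", 2, "red"))
print(count_objects(updated, color="red"))  # -> 2: the hypothetical action recolors the sphere
```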
Videos often capture objects, their visible properties, their motion, and the interactions between different objects. Objects also have physical properties such as mass, which the imaging pipeline is unable to directly capture. However, these properties can be estimated by utilizing cues from relative object motion and the dynamics introduced by collisions. In this paper, we introduce CRIPP-VQA, a new video question answering dataset for reasoning about the implicit physical properties of objects in a scene. CRIPP-VQA contains videos of objects in motion, annotated with questions that involve counterfactual reasoning about the effect of actions, questions about planning in order to reach a goal, and descriptive questions about visible properties of objects. The CRIPP-VQA test set enables evaluation under several out-of-distribution settings -- videos of objects with masses, coefficients of friction, and initial velocities that are not observed in the training distribution. Our experiments reveal a surprising and significant gap between performance on questions about implicit properties (the focus of this paper) and questions about the explicit properties of objects (the focus of prior work).
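As a concrete instance of how collision dynamics expose an implicit property, the sketch below recovers a mass ratio from observed velocity changes via conservation of momentum. It is a physics illustration only; the data format and simplifying assumptions (short collision window, negligible external forces) are not taken from the dataset or the evaluated models.

```python
import numpy as np

def mass_ratio_from_collision(v1_before, v1_after, v2_before, v2_after):
    """Estimate m1 / m2 for two colliding bodies from their tracked velocities.
    Conservation of momentum gives m1 * (v1_after - v1_before) =
    -m2 * (v2_after - v2_before), so the ratio of speed changes recovers the
    mass ratio. Assumes negligible external forces during the collision window."""
    dv1 = np.linalg.norm(np.subtract(v1_after, v1_before))
    dv2 = np.linalg.norm(np.subtract(v2_after, v2_before))
    if dv1 == 0:
        raise ValueError("object 1 shows no velocity change; no collision observed")
    return dv2 / dv1

# Object 1 slows from (2, 0) to (0.5, 0); object 2 speeds up from (0, 0) to (3, 0).
# Object 2 gained twice the speed object 1 lost, so object 1 is about twice as heavy.
print(mass_ratio_from_collision((2, 0), (0.5, 0), (0, 0), (3, 0)))  # -> 2.0
```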
'Actions' play a vital role in how humans interact with the world and achieve their desired goals. As a result, much of human commonsense (CS) knowledge revolves around actions. While 'Reasoning about Actions & Change' (RAC) has been widely studied in the knowledge representation community, it has only recently attracted the interest of NLP and computer vision researchers. This paper surveys existing tasks, benchmark datasets, various techniques and models, and their respective performance with regard to advances in RAC in the vision and language domain. Finally, we summarize our key takeaways, discuss the present challenges facing this research area, and outline potential directions for future research.
Automated vehicles (AVs) depend heavily on robust perception systems. Current methods for evaluating vision systems focus mainly on frame-by-frame performance, which appears insufficient for assessing a perception subsystem when it is used within an AV. In this paper, we propose a logic, called Spatio-Temporal Perception Logic (STPL), that uses both spatial and temporal modalities. STPL enables reasoning over perception data using spatial and temporal relations. A major advantage of STPL is that it facilitates basic sanity checks on the real-time performance of a perception system, in some cases even without ground-truth data. We identify a fragment of STPL that can be efficiently monitored offline in polynomial time. Finally, we provide a range of specifications for AV perception systems to highlight the types of requirements that can be expressed and analyzed through offline monitoring with STPL.
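The abstract does not give STPL's syntax, but the flavor of offline monitoring it enables can be illustrated with a toy check over per-frame detections. The requirement (no single-frame track flicker) and the data format below are assumptions chosen for illustration, not STPL formulas.

```python
# Toy offline monitor illustrating the kind of spatio-temporal requirement a
# logic like STPL targets. Data format (per-frame lists of (track_id, box)) and
# the requirement itself are assumptions for illustration, not STPL syntax.

def monitor_persistence(frames, min_frames=3):
    """Requirement: once a track id appears, it must remain detected for at
    least `min_frames` consecutive frames (no single-frame flicker). Returns
    the list of (track_id, first_frame) violations found offline."""
    violations = []
    seen = {}  # track_id -> (first_frame, consecutive_count) for live tracks
    for t, detections in enumerate(frames):
        ids_now = {track_id for track_id, _box in detections}
        for track_id in ids_now:
            first, count = seen.get(track_id, (t, 0))
            seen[track_id] = (first, count + 1)
        for track_id in list(seen):
            if track_id not in ids_now:
                first, count = seen.pop(track_id)
                if count < min_frames:
                    violations.append((track_id, first))
    for track_id, (first, count) in seen.items():  # tracks still live at the end
        if count < min_frames:
            violations.append((track_id, first))
    return violations

frames = [
    [(7, (10, 10, 40, 40))],        # track 7 appears...
    [],                             # ...and vanishes after one frame: a violation
    [(9, (100, 50, 150, 120))],
    [(9, (102, 52, 152, 122))],
    [(9, (104, 54, 154, 124))],
]
print(monitor_persistence(frames))  # -> [(7, 0)]
```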
To succeed in single-source domain generalization, maximizing the diversity of synthesized domains has emerged as one of the most effective strategies. Many recent successes come from methods that pre-specify the types of diversity a model is exposed to during training, so that it can ultimately generalize well to new domains. However, naïve diversity-based augmentations do not succeed either, because they cannot model large domain shifts, or because the span of pre-specified transformations does not cover the types of shifts commonly encountered in domain generalization. To address this issue, we present a novel framework that uses adversarially learned transformations (ALT) with a neural network to model plausible yet hard image transformations that fool the classifier. The network is randomly initialized for each batch and trained for a fixed number of steps to maximize classification error. Additionally, we enforce consistency between the classifier's predictions on the clean and transformed images. Through extensive empirical analysis, we find that this new form of adversarial transformation simultaneously achieves the goals of diversity and hardness, outperforming all existing techniques on competitive benchmarks for single-source domain generalization. We also show that ALT can naturally work with existing diversity modules, producing highly distinct source domains and leading to state-of-the-art performance.
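A minimal PyTorch sketch of one training batch in the spirit of this description: a freshly initialized transformation network is optimized for a fixed number of steps to maximize the classifier's error, after which the classifier is trained on the transformed images with a consistency term between its clean and transformed predictions. The transformation architecture, step counts, and loss weights are assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def alt_batch(classifier, images, labels, cls_optimizer,
              adv_steps=10, adv_lr=5e-4, consistency_weight=1.0):
    """One training batch in the spirit of ALT as described in the abstract.
    Architecture, step counts, and loss weights are illustrative assumptions."""
    # Transformation network re-initialized from scratch for every batch.
    transform_net = nn.Sequential(
        nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 3, 3, padding=1), nn.Sigmoid(),   # keep pixels in [0, 1]
    ).to(images.device)
    adv_optimizer = torch.optim.Adam(transform_net.parameters(), lr=adv_lr)

    # Step 1: train the transformation to fool the classifier
    # (classifier weights are not updated here).
    classifier.eval()
    for _ in range(adv_steps):
        adv_optimizer.zero_grad()
        loss_adv = -F.cross_entropy(classifier(transform_net(images)), labels)
        loss_adv.backward()
        adv_optimizer.step()

    # Step 2: update the classifier on hard transformed images + consistency.
    classifier.train()
    cls_optimizer.zero_grad()
    transformed = transform_net(images).detach()
    logits_clean = classifier(images)
    logits_aug = classifier(transformed)
    consistency = F.kl_div(F.log_softmax(logits_aug, dim=1),
                           F.softmax(logits_clean, dim=1), reduction="batchmean")
    loss = (F.cross_entropy(logits_clean, labels)
            + F.cross_entropy(logits_aug, labels)
            + consistency_weight * consistency)
    loss.backward()
    cls_optimizer.step()
    return loss.item()
```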
Great progress has been made in recent years in developing better image captioning models, yet most of them rely on a separate object detector to extract regional features. Recent vision-language studies are shifting toward detector-free approaches by leveraging grid representations for more flexible model training and faster inference. However, this development has focused primarily on image understanding tasks and remains less investigated for caption generation. In this paper, we are concerned with a better detector-free image captioning model and propose a pure vision-transformer-based image captioning model, dubbed ViTCAP, in which grid representations are used without extracting regional features. To improve performance, we introduce a novel Concept Token Network (CTN) to predict semantic concepts and then incorporate them into end-to-end captioning. In particular, the CTN is built on a vision transformer and is designed to predict concept tokens through a classification task, whose rich semantic information greatly benefits the captioning task. Compared with previous detector-based models, ViTCAP drastically simplifies the architecture while achieving competitive performance on various challenging image captioning datasets. In particular, ViTCAP reaches a CIDEr score of 138.1 on the COCO-caption Karpathy split, and CIDEr scores of 93.8 and 108.6 on the nocaps and Google-CC captioning datasets, respectively.
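A toy sketch of the concept-token idea: pool the transformer's grid features, predict a multi-label distribution over a concept vocabulary, and feed embeddings of the top-scoring concepts to the caption decoder alongside the visual tokens. Dimensions, vocabulary size, and the fusion scheme are illustrative assumptions, not ViTCAP's exact design.

```python
import torch
import torch.nn as nn

class ConceptTokenHead(nn.Module):
    """Toy stand-in for a concept token network: pools transformer grid
    features, predicts a multi-label distribution over a concept vocabulary,
    and returns embeddings of the top-k concepts for a caption decoder.
    All sizes and the fusion scheme are illustrative assumptions."""

    def __init__(self, feat_dim=768, num_concepts=500, top_k=12):
        super().__init__()
        self.classifier = nn.Linear(feat_dim, num_concepts)
        self.concept_embed = nn.Embedding(num_concepts, feat_dim)
        self.top_k = top_k

    def forward(self, grid_feats):                         # (batch, num_patches, feat_dim)
        logits = self.classifier(grid_feats.mean(dim=1))   # (batch, num_concepts)
        top_ids = logits.topk(self.top_k, dim=1).indices   # (batch, top_k)
        concept_tokens = self.concept_embed(top_ids)       # (batch, top_k, feat_dim)
        # Decoder input: visual tokens followed by predicted concept tokens.
        decoder_input = torch.cat([grid_feats, concept_tokens], dim=1)
        return logits, decoder_input

head = ConceptTokenHead()
grid = torch.randn(2, 196, 768)                            # e.g. 14x14 patches from a ViT
concept_logits, decoder_input = head(grid)
# concept_logits trains with a multi-label loss against tagged concepts;
# decoder_input would feed an autoregressive caption decoder.
multi_label_targets = torch.zeros(2, 500); multi_label_targets[:, :3] = 1.0
loss = nn.BCEWithLogitsLoss()(concept_logits, multi_label_targets)
```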
The open-ended nature of visual captioning makes it a challenging area to evaluate. The majority of proposed models rely on specialized training to improve correlation with human judgment, resulting in limited adoption, generalizability, and explainability. We introduce 'typicality', a new formulation of evaluation rooted in information theory, which is uniquely suited for problems lacking a definite ground truth. Typicality serves as our framework for developing a novel semantic comparison, SPARCS, as well as referenceless fluency evaluation metrics. Over the course of our analysis, two separate dimensions of fluency naturally emerge: style, captured by a dedicated metric, and grammar, captured in the form of grammatical outlier penalties. Through extensive experiments and ablation studies on benchmark datasets, we show how these decomposed dimensions of semantics and fluency provide greater system-level insight into differences between captioners. Our proposed metrics, together with their combination, SMURF, achieve state-of-the-art correlation with human judgment when compared with other rule-based evaluation metrics.
The visual indoor navigation (VIN) task has drawn increasing attention from the data-driven machine learning community, especially with the recently reported successes of learning-based methods. Due to the innate complexity of this task, researchers have attempted to tackle the problem from a variety of different angles, the full scope of which has yet to be captured in an overarching report. This survey first summarizes representative work on learning-based approaches to the VIN task, then identifies and discusses issues that hinder VIN performance, and motivates future research into these key areas worth exploring by the community.