Visual relationship detection aims to detect the interactions between objects in an image. However, this task suffers from combinatorial explosion due to the diversity of objects and interactions. Since the interactions associated with the same object are dependent, we explore the dependency of interactions to reduce the search space. We explicitly model the objects and interactions with an interaction graph and then propose a message-passing-style algorithm to propagate contextual information. We hence call the proposed method Neural Message Passing (NMP). We further integrate language priors and spatial cues to rule out unrealistic interactions and capture spatial interactions. Experimental results on two benchmark datasets demonstrate the superiority of our proposed method. Our code is available at https://github.com/phyllish/nmp.
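To make the message-passing idea concrete, here is a minimal sketch of one round of message passing over a fully connected interaction graph of detected objects; the module layout is an illustrative assumption, not the NMP implementation.

```python
# Minimal sketch of message passing over a fully connected interaction graph
# of detected objects (illustrative only; not the NMP implementation).
import torch
import torch.nn as nn

class SimpleMessagePassing(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.msg = nn.Linear(2 * dim, dim)  # message from a (receiver, sender) feature pair
        self.upd = nn.GRUCell(dim, dim)     # update node state with the aggregated message

    def forward(self, node_feats):          # node_feats: (N, dim) object features
        n = node_feats.size(0)
        recv = node_feats.unsqueeze(1).expand(n, n, -1)  # [i, j] -> features of receiver i
        send = node_feats.unsqueeze(0).expand(n, n, -1)  # [i, j] -> features of sender j
        messages = torch.relu(self.msg(torch.cat([recv, send], dim=-1)))
        agg = messages.mean(dim=1)          # average incoming messages per receiver
        return self.upd(agg, node_feats)    # contextualized object features

feats = torch.randn(5, 64)                    # 5 detected objects, 64-d features
print(SimpleMessagePassing(64)(feats).shape)  # torch.Size([5, 64])
```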
Deep learning techniques have led to remarkable breakthroughs in the field of generic object detection and have spawned a lot of scene-understanding tasks in recent years. Owing to its powerful semantic representation and its applications to scene understanding, the scene graph has been a focus of research. Scene graph generation (SGG) refers to the task of automatically mapping an image into a semantically structured scene graph, which requires correctly labeling the detected objects and their relationships. Although this is a challenging task, the community has proposed many SGG approaches and achieved good results. In this paper, we provide a comprehensive survey of the recent achievements in this field brought about by deep learning techniques. We review 138 representative works covering different input modalities, and systematically summarize existing image-based SGG methods from the perspective of feature extraction and fusion. We attempt to connect and systematize the existing visual relationship detection methods, and to summarize and interpret the mechanisms and strategies of SGG in a comprehensive way. Finally, we conclude this survey with a deep discussion of the problems that currently exist and of future research directions. This survey will help readers better understand the current state of research and its ideas.
We propose a novel scene graph generation model called Graph R-CNN, that is both effective and efficient at detecting objects and their relations in images. Our model contains a Relation Proposal Network (RePN) that efficiently deals with the quadratic number of potential relations between objects in an image. We also propose an attentional Graph Convolutional Network (aGCN) that effectively captures contextual information between objects and relations. Finally, we introduce a new evaluation metric that is more holistic and realistic than existing metrics. We report state-of-the-art performance on scene graph generation as evaluated using both existing and our proposed metrics.
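As a rough illustration of attention-weighted graph convolution over relation proposals, the following toy layer weights each neighbour's contribution with a learned attention score; it is not the Graph R-CNN code and the shapes are assumptions.

```python
# Toy attention-weighted graph convolution in the spirit of an attentional GCN
# (illustrative only; not the Graph R-CNN implementation).
import torch
import torch.nn as nn

class AttentionalGraphConv(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.transform = nn.Linear(dim, dim)
        self.attn = nn.Linear(2 * dim, 1)    # scores how much node j should influence node i

    def forward(self, x, adj):               # x: (N, dim), adj: (N, N) 0/1 relation proposals
        n = x.size(0)
        xi = x.unsqueeze(1).expand(n, n, -1)
        xj = x.unsqueeze(0).expand(n, n, -1)
        scores = self.attn(torch.cat([xi, xj], dim=-1)).squeeze(-1)
        scores = scores.masked_fill(adj == 0, float('-inf'))
        alpha = torch.softmax(scores, dim=-1)   # attention over each node's neighbours
        alpha = torch.nan_to_num(alpha)         # nodes with no neighbours get zero weights
        return torch.relu(x + alpha @ self.transform(x))

x = torch.randn(4, 32)
adj = torch.tensor([[0, 1, 1, 0], [1, 0, 0, 1], [1, 0, 0, 0], [0, 1, 0, 0]])
print(AttentionalGraphConv(32)(x, adj).shape)  # torch.Size([4, 32])
```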
A scene graph is a structured representation of a scene that clearly expresses the objects, attributes, and relationships between objects in the scene. As computer vision technology continues to develop, people are no longer satisfied with simply detecting and recognizing objects in images; instead, they look forward to a higher level of understanding and reasoning about visual scenes. For example, given an image, we want to not only detect and recognize the objects in it, but also know the relationships between those objects (visual relationship detection), and generate a text description based on the image content (image captioning). Alternatively, we might want the machine to tell us what the little girl in the image is doing (visual question answering, VQA), or even to remove the dog from the image and find similar images (image editing and retrieval). Such tasks require a higher level of understanding and reasoning over visual content. The scene graph is precisely such a powerful tool for scene understanding. Therefore, scene graphs have attracted the attention of a large number of researchers, and the related research is often cross-modal, complex, and rapidly evolving. However, at present there is no relatively systematic survey of scene graphs. To this end, this survey conducts a comprehensive investigation of current scene graph research. More specifically, we first summarize the general definition of the scene graph, and then carry out a comprehensive and systematic discussion of scene graph generation (SGG) methods and of SGG aided by prior knowledge. We then investigate the main applications of scene graphs and summarize the most commonly used datasets. Finally, we offer some insights into the future development of scene graphs. We believe this will be a very helpful foundation for future research on scene graphs.
Understanding a visual scene goes beyond recognizing individual objects in isolation. Relationships between objects also constitute rich semantic information about the scene. In this work, we explicitly model the objects and their relationships using scene graphs, a visually-grounded graphical structure of an image. We propose a novel end-to-end model that generates such structured scene representation from an input image. The model solves the scene graph inference problem using standard RNNs and learns to iteratively improve its predictions via message passing. Our joint inference model can take advantage of contextual cues to make better predictions on objects and their relationships. The experiments show that our model significantly outperforms previous methods for generating scene graphs using the Visual Genome dataset and inferring support relations with the NYU Depth v2 dataset.
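A minimal sketch of the alternating node/edge updates behind this kind of iterative message passing is shown below, with GRU cells standing in for the RNN units; the exact update scheme here is an assumption, not the paper's.

```python
# Minimal sketch (assumptions, not the paper's code): alternating updates of
# node (object) and edge (relationship) hidden states with shared GRU cells.
import torch
import torch.nn as nn

class IterativeMessagePassing(nn.Module):
    def __init__(self, dim, steps=2):
        super().__init__()
        self.steps = steps
        self.node_gru = nn.GRUCell(dim, dim)
        self.edge_gru = nn.GRUCell(2 * dim, dim)

    def forward(self, node_h, edge_h, edges):
        # node_h: (N, dim), edge_h: (E, dim), edges: list of (subject_idx, object_idx)
        subj = torch.tensor([s for s, _ in edges])
        obj = torch.tensor([o for _, o in edges])
        for _ in range(self.steps):
            # edges receive messages from their endpoint nodes
            edge_in = torch.cat([node_h[subj], node_h[obj]], dim=-1)
            edge_h = self.edge_gru(edge_in, edge_h)
            # nodes receive the mean of the messages from their incident edges
            node_in = torch.zeros_like(node_h)
            count = torch.zeros(node_h.size(0), 1)
            for k, (s, o) in enumerate(edges):
                node_in[s] += edge_h[k]; node_in[o] += edge_h[k]
                count[s] += 1; count[o] += 1
            node_h = self.node_gru(node_in / count.clamp(min=1), node_h)
        return node_h, edge_h

nodes, edges = torch.randn(3, 16), [(0, 1), (1, 2), (2, 0)]
out = IterativeMessagePassing(16)(nodes, torch.randn(3, 16), edges)
print(out[0].shape, out[1].shape)  # torch.Size([3, 16]) torch.Size([3, 16])
```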
Recent scene graph generation (SGG) frameworks have focused on learning complex relationships among multiple objects in an image. Thanks to the nature of the message passing neural network (MPNN) that models high-order interactions between objects and their neighboring objects, they are dominant representation learning modules for SGG. However, existing MPNN-based frameworks assume the scene graph as a homogeneous graph, which restricts the context-awareness of visual relations between objects. That is, they overlook the fact that the relations tend to be highly dependent on the objects with which the relations are associated. In this paper, we propose an unbiased heterogeneous scene graph generation (HetSGG) framework that captures relation-aware context using message passing neural networks. We devise a novel message passing layer, called relation-aware message passing neural network (RMP), that aggregates the contextual information of an image considering the predicate type between objects. Our extensive evaluations demonstrate that HetSGG outperforms state-of-the-art methods, especially outperforming on tail predicate classes.
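The following toy layer illustrates relation-aware aggregation in the sense described above: each predicate type gets its own transform, so messages depend on the relation between objects. It is a hypothetical sketch, not the HetSGG/RMP implementation.

```python
# Toy relation-aware aggregation: each predicate type gets its own transform,
# so messages depend on the relation between objects (hypothetical sketch,
# not the HetSGG/RMP implementation).
import torch
import torch.nn as nn

class RelationAwareLayer(nn.Module):
    def __init__(self, dim, num_rel_types):
        super().__init__()
        self.per_type = nn.ModuleList(nn.Linear(dim, dim) for _ in range(num_rel_types))
        self.self_loop = nn.Linear(dim, dim)

    def forward(self, x, triples):
        # x: (N, dim) object features; triples: list of (subj_idx, rel_type, obj_idx)
        out = self.self_loop(x)
        degree = torch.ones(x.size(0), 1)
        for s, r, o in triples:
            out[o] = out[o] + self.per_type[r](x[s])   # subject -> object, typed by r
            out[s] = out[s] + self.per_type[r](x[o])   # object -> subject, typed by r
            degree[s] += 1; degree[o] += 1
        return torch.relu(out / degree)

x = torch.randn(4, 32)
triples = [(0, 1, 2), (1, 0, 3)]    # e.g. (person, riding, bike), (dog, near, tree)
print(RelationAwareLayer(32, num_rel_types=2)(x, triples).shape)  # torch.Size([4, 32])
```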
Scene graph generation has gained much attention in computer vision research with the growing demand in image understanding tasks such as visual question answering, image captioning, self-driving cars, crowd behavior analysis, activity recognition, and more. A scene graph, a visually grounded graphical structure of an image, greatly helps to simplify image understanding tasks. In this work, we introduce a post-processing algorithm called Geometric Context to understand visual scenes better geometrically. We use this post-processing algorithm to add and refine the geometric relationships between object pairs on top of a prior model. We exploit this context by calculating the direction and distance between object pairs. We use the Knowledge-Embedded Routing Network (KERN) as our baseline model, extend the work with our algorithm, and show results comparable to recent state-of-the-art algorithms.
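The geometric cues themselves are simple to compute; a small sketch of pairwise distance and direction from bounding boxes follows (the box format and the four direction bins are assumptions).

```python
# Sketch of the geometric cues described above: pairwise distance and direction
# between object bounding boxes (box format and bin choices are assumptions).
import math

def box_center(box):
    x1, y1, x2, y2 = box          # box as (x1, y1, x2, y2) in pixels
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)

def geometric_context(box_a, box_b):
    (ax, ay), (bx, by) = box_center(box_a), box_center(box_b)
    dx, dy = bx - ax, by - ay
    distance = math.hypot(dx, dy)                     # Euclidean distance between centers
    angle = math.degrees(math.atan2(dy, dx)) % 360    # direction of b relative to a
    direction = ["right", "below", "left", "above"][int(((angle + 45) % 360) // 90)]
    return distance, direction

person, bike = (100, 50, 180, 260), (90, 200, 220, 330)
print(geometric_context(person, bike))  # about (111.0, 'below')
```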
Visual relationship detection (VRD) impels a computer vision model to "see" beyond individual object instances and "understand" how different objects in a scene are related. The traditional way of doing VRD is to first detect the objects in an image and then separately predict the relationships between the detected object instances. Such a disjoint approach is prone to predicting redundant relation labels (i.e., predicates) with similar semantic meaning between the same object pair, or predicates whose meaning is similar to the ground truth but semantically incorrect. To remedy this, we propose to jointly train a VRD model with visual object features and semantic relationship features. To this end, we propose VReBERT, a BERT-like transformer model for visual relationship detection with a multi-stage training strategy to jointly process visual and semantic features. We show that our simple BERT-like model is able to outperform state-of-the-art VRD models in predicate prediction. Furthermore, we show that by using the pre-trained VReBERT model, our model pushes the state-of-the-art zero-shot predicate prediction by a significant margin (+8.49 R@50 and +8.99 R@100).
We investigate the problem of producing structured graph representations of visual scenes. Our work analyzes the role of motifs: regularly appearing substructures in scene graphs. We present new quantitative insights on such repeated structures in the Visual Genome dataset. Our analysis shows that object labels are highly predictive of relation labels but not vice-versa. We also find that there are recurring patterns even in larger subgraphs: more than 50% of graphs contain motifs involving at least two relations. Our analysis motivates a new baseline: given object detections, predict the most frequent relation between object pairs with the given labels, as seen in the training set. This baseline improves on the previous state-of-the-art by an average of 3.6% relative improvement across evaluation settings. We then introduce Stacked Motif Networks, a new architecture designed to capture higher order motifs in scene graphs that further improves over our strong baseline by an average 7.1% relative gain. Our code is available at github.com/rowanz/neural-motifs.
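The frequency baseline is straightforward to implement; the sketch below builds the co-occurrence statistics from training triples and predicts the most frequent predicate for a given object-pair labelling (names are illustrative).

```python
# Sketch of the frequency baseline described above: for a (subject, object)
# label pair, predict the relation seen most often with that pair in training.
from collections import Counter, defaultdict

def build_frequency_prior(training_triples):
    # training_triples: iterable of (subject_label, relation, object_label)
    prior = defaultdict(Counter)
    for subj, rel, obj in training_triples:
        prior[(subj, obj)][rel] += 1
    return prior

def predict_relation(prior, subj, obj, default="no relation"):
    counts = prior.get((subj, obj))
    return counts.most_common(1)[0][0] if counts else default

train = [("man", "riding", "horse"), ("man", "riding", "horse"),
         ("man", "feeding", "horse"), ("dog", "near", "tree")]
prior = build_frequency_prior(train)
print(predict_relation(prior, "man", "horse"))  # riding
print(predict_relation(prior, "cat", "sofa"))   # no relation
```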
Different objects in the same scene are more or less related to each other, but only a limited number of these relationships are noteworthy. Inspired by DETR, which excels in object detection, we view scene graph generation as a set prediction problem and propose an end-to-end scene graph generation model, RelTR, with an encoder-decoder architecture. The encoder reasons about the visual feature context, while the decoder infers a fixed-size set of subject-predicate-object triplets using different types of attention mechanisms with coupled subject and object queries. We design a set prediction loss that performs the matching between the ground truth and the predicted triplets. In contrast to most existing scene graph generation methods, RelTR is a one-stage method that directly predicts a set of relationships using only visual appearance, without combining entities and labeling all possible predicates. Extensive experiments on the Visual Genome and Open Images V6 datasets demonstrate the superior performance and fast inference of our model.
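To illustrate the set-matching step, the toy routine below assigns predicted triplets to ground-truth triplets with the Hungarian algorithm; the simple mismatch-count cost is an assumption and does not reproduce the paper's matching cost.

```python
# Toy set-prediction matching: assign predicted triplets to ground-truth
# triplets with the Hungarian algorithm (the mismatch-count cost is an
# assumption; the paper's matching cost is not reproduced here).
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_triplets(predicted, ground_truth):
    # each triplet is (subject_class, predicate_class, object_class)
    cost = np.zeros((len(predicted), len(ground_truth)))
    for i, p in enumerate(predicted):
        for j, g in enumerate(ground_truth):
            cost[i, j] = sum(a != b for a, b in zip(p, g))   # 0 = perfect match
    rows, cols = linear_sum_assignment(cost)                 # minimum-cost assignment
    return list(zip(rows.tolist(), cols.tolist()))

preds = [("man", "riding", "horse"), ("dog", "on", "grass"), ("man", "wearing", "hat")]
gts = [("dog", "lying on", "grass"), ("man", "riding", "horse")]
print(match_triplets(preds, gts))   # [(0, 1), (1, 0)]
```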
Visual relationships capture a wide variety of interactions between pairs of objects in images (e.g. "man riding bicycle" and "man pushing bicycle"). Consequently, the set of possible relationships is extremely large and it is difficult to obtain sufficient training examples for all possible relationships. Because of this limitation, previous work on visual relationship detection has concentrated on predicting only a handful of relationships. Though most relationships are infrequent, their objects (e.g. "man" and "bicycle") and predicates (e.g. "riding" and "pushing") independently occur more frequently. We propose a model that uses this insight to train visual models for objects and predicates individually and later combines them together to predict multiple relationships per image. We improve on prior work by leveraging language priors from semantic word embeddings to finetune the likelihood of a predicted relationship. Our model can scale to predict thousands of types of relationships from a few examples. Additionally, we localize the objects in the predicted relationships as bounding boxes in the image. We further demonstrate that understanding relationships can improve content based image retrieval.
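A rough sketch of the re-ranking idea, combining a visual predicate score with a word-embedding-based language prior, is given below; the random embeddings and the linear prior head are stand-ins, not the original formulation.

```python
# Rough sketch: re-rank visual predicate scores with a language prior computed
# from word embeddings of the subject and object (random embeddings and a
# learned linear projection are stand-ins, not the original formulation).
import torch
import torch.nn as nn

NUM_PREDICATES, EMB_DIM = 70, 50
word_emb = nn.Embedding(200, EMB_DIM)                 # stand-in for pretrained word vectors
language_prior = nn.Linear(2 * EMB_DIM, NUM_PREDICATES)

def relationship_scores(visual_scores, subj_id, obj_id):
    # visual_scores: (NUM_PREDICATES,) from the visual module
    pair = torch.cat([word_emb.weight[subj_id], word_emb.weight[obj_id]])
    prior = torch.softmax(language_prior(pair), dim=-1)   # P(predicate | subject, object words)
    return visual_scores * prior                          # language prior modulates vision

visual = torch.rand(NUM_PREDICATES)
scores = relationship_scores(visual, subj_id=3, obj_id=17)  # e.g. "man", "bicycle"
print(scores.argmax().item())  # index of the top-ranked predicate
```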
Structured video representations in the form of dynamic scene graphs are an effective tool for several video understanding tasks. Compared with the task of scene graph generation from images, dynamic scene graph generation is more challenging due to the temporal dynamics of the scene and the inherent temporal fluctuations of predictions. We show that capturing long-term dependencies is the key to effective generation of dynamic scene graphs. We introduce a detect-track-recognize paradigm by constructing consistent long-term object tracks from the video and then capturing the dynamics of objects and visual relations. Experimental results demonstrate that our Dynamic Scene Graph Detection Transformer (DSG-DETR) outperforms state-of-the-art methods by a significant margin on the benchmark dataset Action Genome. We also perform ablation studies and validate the effectiveness of each component of the proposed approach.
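As an illustration of the tracking step, the sketch below links per-frame detections into long-term tracklets by greedy IoU matching; the threshold and the greedy strategy are assumptions, not the paper's tracker.

```python
# Minimal sketch of linking per-frame detections into long-term tracklets by
# greedy IoU matching (threshold and greedy strategy are assumptions, not the
# paper's tracker).
def iou(a, b):
    ax1, ay1, ax2, ay2 = a; bx1, by1, bx2, by2 = b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

def build_tracklets(frames, thresh=0.5):
    # frames: list of per-frame lists of boxes (x1, y1, x2, y2)
    tracklets = [[box] for box in frames[0]]
    for boxes in frames[1:]:
        used = set()
        for track in tracklets:
            best = max(((iou(track[-1], b), k) for k, b in enumerate(boxes) if k not in used),
                       default=(0.0, None))
            if best[0] >= thresh:
                track.append(boxes[best[1]]); used.add(best[1])
        tracklets.extend([b] for k, b in enumerate(boxes) if k not in used)  # start new tracks
    return tracklets

frames = [[(10, 10, 50, 50)], [(12, 11, 52, 51)], [(15, 12, 55, 52), (200, 200, 240, 240)]]
print([len(t) for t in build_tracklets(frames)])  # [3, 1]
```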
Scene graphs provide a rich, structured representation of a scene by encoding the entities (objects) and their spatial relationships in a graphical format. This representation has proven useful in several tasks, such as question answering, captioning, and even object detection, to name a few. Current approaches take a generation-by-classification approach where the scene graph is generated through labeling of all possible edges between objects in a scene, which adds computational overhead to the approach. This work introduces a generative transformer-based approach to generating scene graphs beyond link prediction. Using two transformer-based components, we first sample a possible scene graph structure from detected objects and their visual features. We then perform predicate classification on the sampled edges to generate the final scene graph. This approach allows us to efficiently generate scene graphs from images with minimal inference overhead. Extensive experiments on the Visual Genome dataset demonstrate the efficiency of the proposed approach. Without bells and whistles, we obtain, on average, 20.7% mean recall (mR@100) across different settings for scene graph generation (SGG), outperforming state-of-the-art SGG approaches while offering competitive performance to unbiased SGG approaches.
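A schematic version of the two-stage idea, scoring candidate edges from object features and classifying predicates only on the kept edges, is sketched below; the modules and shapes are assumptions, not the paper's transformer components.

```python
# Schematic two-stage sketch: score candidate edges from object features, keep
# a sparse sample, then classify predicates only on the kept edges (shapes and
# modules are assumptions, not the paper's transformer components).
import torch
import torch.nn as nn

class TwoStageSGG(nn.Module):
    def __init__(self, dim, num_predicates, edges_per_image=8):
        super().__init__()
        self.edge_scorer = nn.Bilinear(dim, dim, 1)          # structure proposal
        self.predicate_head = nn.Linear(2 * dim, num_predicates)
        self.k = edges_per_image

    def forward(self, obj_feats):                            # obj_feats: (N, dim)
        n = obj_feats.size(0)
        pairs = [(i, j) for i in range(n) for j in range(n) if i != j]
        subj = obj_feats[[i for i, _ in pairs]]
        obj = obj_feats[[j for _, j in pairs]]
        edge_logits = self.edge_scorer(subj, obj).squeeze(-1)  # plausibility of each pair
        keep = edge_logits.topk(min(self.k, len(pairs))).indices
        pred_logits = self.predicate_head(torch.cat([subj[keep], obj[keep]], dim=-1))
        kept_pairs = [pairs[i] for i in keep.tolist()]
        return kept_pairs, pred_logits                        # sparse graph + predicate scores

pairs, logits = TwoStageSGG(dim=64, num_predicates=50)(torch.randn(6, 64))
print(len(pairs), logits.shape)  # 8 torch.Size([8, 50])
```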
This paper presents a framework for jointly grounding objects that follow certain semantic relationship constraints given in a scene graph. A typical natural scene contains several objects, often exhibiting visual relationships of varied complexities between them. These inter-object relationships provide strong contextual cues toward improving grounding performance compared to a traditional object query-only-based localization task. A scene graph is an efficient and structured way to represent all the objects and their semantic relationships in the image. In an attempt towards bridging these two modalities representing scenes and utilizing contextual information for improving object localization, we rigorously study the problem of grounding scene graphs on natural images. To this end, we propose a novel graph neural network-based approach referred to as Visio-Lingual Message PAssing Graph Neural Network (VL-MPAG Net). In VL-MPAG Net, we first construct a directed graph with object proposals as nodes and an edge between a pair of nodes representing a plausible relation between them. Then a three-step inter-graph and intra-graph message passing is performed to learn the context-dependent representation of the proposals and query objects. These object representations are used to score the proposals to generate object localization. The proposed method significantly outperforms the baselines on four public datasets.
Today's VidSGG models are all proposal-based methods, i.e., they first generate numerous paired subject-object segments as proposals and then conduct predicate classification for each proposal. In this paper, we argue that this prevalent proposal-based framework has three inherent drawbacks: 1) the ground-truth predicate labels for proposals are only partially correct; 2) it breaks the high-order relations among different predicate instances of the same subject-object pair; and 3) VidSGG performance is bounded by the quality of the proposals. To this end, we propose a new classification-then-grounding framework for VidSGG that avoids all three of these overlooked drawbacks. Meanwhile, under this framework, we reformulate video scene graphs as temporal bipartite graphs, where entities and predicates are two types of nodes with time slots, and the edges denote the different semantic roles between these nodes. This formulation takes full advantage of our new framework. Accordingly, we further propose a novel bipartite-graph-based SGG model, BIG. Specifically, BIG consists of two parts, a classification stage and a grounding stage, where the former aims to classify the categories of all the nodes and edges, and the latter tries to localize the temporal location of each relation instance. Extensive ablations on two VidSGG datasets have demonstrated the effectiveness of our framework and of BIG.
The single-hidden-layer Randomly Weighted Feature Network (RWFN), introduced by Hong and Pavlic (2021), was developed as an alternative to neural tensor network approaches for relational learning tasks. Its relatively small footprint, combined with the use of two randomized input projections, an insect-brain-inspired input representation and random Fourier features, allows it to achieve rich expressiveness for relational learning at relatively low training cost. In particular, when Hong and Pavlic compared RWFNs to Logic Tensor Networks (LTNs) on Semantic Image Interpretation (SII) tasks, which extract structured semantic descriptions of images, they showed that the RWFN's integration of the two hidden representations better captures the relationships among inputs with a faster training process, even though it uses far fewer learnable parameters. In this paper, we use RWFNs to perform visual relationship detection (VRD) tasks, which are more challenging SII tasks. A zero-shot learning approach is used with the RWFN that can exploit similarities with other seen relationships and background knowledge, expressed as logical constraints between subjects, relations, and objects, to achieve the ability to predict triples that do not appear in the training set. Experiments on the Visual Relationship Dataset comparing the performance of RWFNs and LTNs, one of the leading statistical relational learning frameworks, show that RWFNs outperform LTNs on the predicate detection task while using a smaller number of adaptable parameters (a 1:56 ratio). Furthermore, the background knowledge represented by RWFNs can be used to alleviate the incompleteness of training sets even though the space complexity of RWFNs is much smaller than that of LTNs (a 1:27 ratio).
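For reference, a minimal random Fourier feature projection, one of the two randomized input projections mentioned above, looks like this (the parameter choices are illustrative):

```python
# Minimal random Fourier feature projection, one of the two randomized input
# projections mentioned above (parameter choices are illustrative).
import numpy as np

def random_fourier_features(x, num_features=256, sigma=1.0, seed=0):
    # Approximates an RBF kernel: k(x, y) ~ z(x) . z(y)
    rng = np.random.default_rng(seed)
    d = x.shape[-1]
    w = rng.normal(0.0, 1.0 / sigma, size=(d, num_features))  # random, untrained weights
    b = rng.uniform(0.0, 2 * np.pi, size=num_features)
    return np.sqrt(2.0 / num_features) * np.cos(x @ w + b)

x = np.random.randn(10, 64)           # e.g. 10 object/predicate feature vectors
z = random_fourier_features(x)
print(z.shape)                        # (10, 256)
```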
Scene graph generation aims to interpret an input image by explicitly modeling the objects it contains and their relationships, which previous methods have mostly addressed with approximate neural network models. Currently, such approximate models commonly assume that the output variables are totally independent and thus ignore informative higher-order interactions. This can lead to inconsistent interpretations of the input image. In this paper, we propose a novel neural belief propagation method to generate the resulting scene graph. It employs a structural Bethe approximation rather than the mean-field approximation to infer the associated marginals. To find a better bias-variance trade-off, the proposed model incorporates not only pairwise interactions but also higher-order interactions into the associated scoring function. It achieves state-of-the-art performance on various popular scene graph generation benchmarks.
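For background on the inference idea, a tiny classical sum-product (loopy) belief propagation routine on a pairwise model is sketched below; the paper's neural, Bethe-approximation-based variant is not reproduced.

```python
# Tiny classical sum-product (loopy) belief propagation on a pairwise model,
# shown only as background for the inference idea; the neural / Bethe-based
# variant from the paper is not reproduced here.
import numpy as np

def loopy_bp(unary, pairwise, edges, iters=10):
    # unary: {node: (K,) potentials}; pairwise: {(i, j): (K_i, K_j) potentials}
    msgs = {(i, j): np.ones(len(unary[j])) for a, b in edges for i, j in ((a, b), (b, a))}
    for _ in range(iters):
        new = {}
        for i, j in msgs:
            # product of messages flowing into i from everyone except j
            incoming = np.prod([m for (k, l), m in msgs.items() if l == i and k != j], axis=0)
            psi = pairwise[(i, j)] if (i, j) in pairwise else pairwise[(j, i)].T
            m = psi.T @ (unary[i] * incoming)        # marginalize over x_i
            new[(i, j)] = m / m.sum()                # normalize for numerical stability
        msgs = new
    beliefs = {}
    for i in unary:
        b = unary[i] * np.prod([m for (k, l), m in msgs.items() if l == i], axis=0)
        beliefs[i] = b / b.sum()
    return beliefs

unary = {0: np.array([0.7, 0.3]), 1: np.array([0.4, 0.6]), 2: np.array([0.5, 0.5])}
pairwise = {(0, 1): np.array([[0.9, 0.1], [0.1, 0.9]]),
            (1, 2): np.array([[0.8, 0.2], [0.2, 0.8]])}
print(loopy_bp(unary, pairwise, edges=[(0, 1), (1, 2)]))
```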
The scene graph generation (SGG) task aims to detect all the objects and their pairwise visual relationships in a given image. Although SGG has achieved remarkable progress over the past few years, almost all existing SGG models follow the same training paradigm: they treat both object and predicate classification in SGG as single-label classification problems, and the ground truths are one-hot target labels. However, this prevalent training paradigm overlooks two characteristics of current SGG datasets: 1) for positive samples, some specific subject-object instances may have multiple reasonable predicates; and 2) for negative samples, there are numerous missing annotations. Regardless of these two characteristics, SGG models are easily confused and make wrong predictions. To this end, we propose a novel model-agnostic Label Semantic Knowledge Distillation (LS-KD) for unbiased SGG. Specifically, LS-KD dynamically generates a soft label for each subject-object instance by fusing a predicted label semantic distribution (LSD) with its original one-hot target label. The LSD reflects the correlations between this instance and multiple predicate categories. Meanwhile, we propose two different strategies to predict the LSD: iterative self-KD and synchronous self-KD. Extensive ablations and results on three SGG tasks demonstrate the superiority and generality of our proposed LS-KD, which can consistently achieve a decent performance trade-off across different predicate categories.
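The soft-label fusion at the heart of LS-KD can be sketched in a few lines; the fusion weight and the loss form below are assumptions, not the paper's exact objective.

```python
# Sketch of the soft-label idea described above: fuse a predicted label
# distribution with the one-hot target (the fusion weight is an assumption).
import torch
import torch.nn.functional as F

def fuse_soft_label(one_hot, predicted_dist, alpha=0.5):
    # one_hot: (C,) ground-truth target; predicted_dist: (C,) label semantic distribution
    return alpha * one_hot + (1.0 - alpha) * predicted_dist

def distillation_loss(student_logits, soft_label):
    # cross-entropy between the fused soft target and the student's prediction
    return -(soft_label * F.log_softmax(student_logits, dim=-1)).sum()

num_predicates = 5
one_hot = F.one_hot(torch.tensor(2), num_predicates).float()   # ground-truth predicate
teacher = torch.softmax(torch.randn(num_predicates), dim=-1)   # predicted label distribution
soft = fuse_soft_label(one_hot, teacher)
print(soft, distillation_loss(torch.randn(num_predicates), soft).item())
```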
3D dense captioning is a recently proposed novel task, in which point clouds contain more geometric information than their 2D counterparts. However, it is also more challenging due to the higher complexity and wider variety of inter-object relations contained in point clouds. Existing methods only treat such relations as by-products of object feature learning in graphs, without specifically encoding them, which leads to sub-optimal results. In this paper, aiming to improve 3D dense captioning by capturing and utilizing the complex relations in the 3D scene, we propose MORE, a multi-order relation mining model, to support generating more descriptive and comprehensive captions. Technically, MORE encodes object relations in a progressive manner, since complex relations can be deduced from a limited number of basic ones. We first devise a novel spatial layout graph convolution (SLGC), which encodes several first-order relations as the edges of a graph constructed over the 3D object proposals. Next, from the resulting graph, we further extract multiple triplets that encapsulate basic first-order relations as basic units, and construct several object-centric triplet attention graphs (OTAG) to infer multi-order relations for every target object. The updated node features from the OTAG are aggregated and fed into the caption decoder to provide abundant relational cues, so that captions including diverse relations with context objects can be generated. Extensive experiments on the Scan2Cap dataset demonstrate the effectiveness of our proposed MORE and its components, and we also outperform the current state-of-the-art method. Our code is available at https://github.com/sxjyjay/more.
Scene graph generation (SGG) represents objects and their interactions with a graph structure. Recently, many works have been devoted to solving the imbalance problem in SGG. However, by underestimating the head predicates throughout the training process, they wreck the features of the head predicates, which provide general features for the tail ones. Moreover, assigning excessive attention to the tail predicates leads to semantic deviation. Based on this, we propose a novel SGG framework, learning to generate scene graphs from head to tail (SGG-HT), containing a curriculum re-weighting mechanism (CRM) and a semantic context module (SCM). The CRM first learns head/easy samples to obtain robust features of the head predicates and then gradually focuses on the tail/hard ones. The SCM is proposed to relieve semantic deviation by ensuring semantic consistency between the generated scene graph and the ground truth in both global and local representations. Experiments show that SGG-HT significantly alleviates the bias problem and achieves state-of-the-art performance on Visual Genome.
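A toy curriculum re-weighting schedule conveys the head-to-tail idea: weights start near-uniform and shift toward rare predicates as training progresses; the interpolation schedule is an assumption, not the CRM formulation.

```python
# Toy curriculum re-weighting schedule: early in training all predicates are
# weighted (near-)uniformly, later the weights shift toward rare (tail) classes
# (the interpolation schedule is an assumption, not the CRM formulation).
import numpy as np

def curriculum_weights(class_counts, progress):
    # class_counts: per-predicate frequencies; progress: 0.0 (start) -> 1.0 (end)
    counts = np.asarray(class_counts, dtype=float)
    uniform = np.ones_like(counts) / len(counts)            # head-friendly start
    inverse = (1.0 / counts) / (1.0 / counts).sum()         # tail-friendly end
    w = (1.0 - progress) * uniform + progress * inverse
    return w / w.mean()                                     # keep the average weight at 1

counts = [5000, 800, 50, 5]    # head ... tail predicate frequencies
for p in (0.0, 0.5, 1.0):
    print(p, np.round(curriculum_weights(counts, p), 2))
```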