Assessing the critical view of safety in laparoscopic cholecystectomy requires accurate identification and localization of key anatomical structures, reasoning about their geometric relationships to one another, and determining the quality of their exposure. In this work, we propose to capture each of these aspects by modeling the surgical scene with a disentangled latent scene graph representation, which we can then process using a graph neural network. Unlike previous approaches using graph representations, we explicitly encode in our graphs semantic information such as object locations and shapes, class probabilities and visual features. We also incorporate an auxiliary image reconstruction objective to help train the latent graph representations. We demonstrate the value of these components through comprehensive ablation studies and achieve state-of-the-art results for critical view of safety prediction across multiple experimental settings.
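To make the representation concrete, here is a minimal PyTorch sketch (not the authors' implementation) of a latent scene graph whose node features concatenate box geometry, class probabilities, and visual features, processed by a small message-passing network into a prediction over the three CVS criteria. All layer sizes, the dense adjacency, and the two-round message passing are assumptions for illustration only.

```python
# Hedged sketch: semantic scene-graph nodes -> message passing -> CVS criteria.
import torch
import torch.nn as nn

class SceneGraphCVSHead(nn.Module):
    def __init__(self, num_classes=5, feat_dim=256, hidden=128, num_criteria=3):
        super().__init__()
        # per-node encoder: [box (4) + class probs + visual feature] -> hidden
        self.node_enc = nn.Linear(4 + num_classes + feat_dim, hidden)
        self.msg = nn.Linear(hidden, hidden)             # message function
        self.upd = nn.GRUCell(hidden, hidden)            # node update
        self.readout = nn.Linear(hidden, num_criteria)   # 3 CVS criteria

    def forward(self, boxes, class_probs, visual_feats, adj):
        # boxes: (N, 4), class_probs: (N, C), visual_feats: (N, F), adj: (N, N)
        h = torch.relu(self.node_enc(torch.cat([boxes, class_probs, visual_feats], dim=-1)))
        for _ in range(2):                               # two rounds of message passing
            m = adj @ self.msg(h)                        # aggregate neighbour messages
            h = self.upd(m, h)
        graph_feat = h.mean(dim=0)                       # mean-pool nodes
        return torch.sigmoid(self.readout(graph_feat))

# toy usage with 4 detected anatomical structures
N, C, F = 4, 5, 256
out = SceneGraphCVSHead()(torch.rand(N, 4), torch.softmax(torch.rand(N, C), -1),
                          torch.rand(N, F), torch.ones(N, N) / N)
print(out.shape)  # torch.Size([3])
```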
Deep learning techniques have led to remarkable breakthroughs in the field of generic object detection and have spawned many scene-understanding tasks in recent years. Scene graphs have been a focus of research because of their powerful semantic representation and their applications to scene understanding. Scene Graph Generation (SGG) refers to the task of automatically mapping an image into a semantically structured scene graph, which requires correctly labeling the detected objects and their relationships. Although this is a challenging task, the community has proposed many SGG approaches and achieved good results. In this paper, we provide a comprehensive survey of recent achievements in this field brought about by deep learning techniques. We review 138 representative works covering different input modalities, and systematically summarize existing image-based SGG methods from the perspective of feature extraction and fusion. We attempt to connect and systematize the existing visual relationship detection methods, and to summarize and interpret the mechanisms and strategies of SGG in a comprehensive way. Finally, we conclude this survey with an in-depth discussion of the current open problems and future research directions. This survey will help readers to better understand the current research status and ideas.
A scene graph is a structured representation of a scene that can clearly express the objects, attributes, and relationships between objects in the scene. As computer vision technology continues to develop, people are no longer satisfied with simply detecting and recognizing objects in images; instead, they expect a higher level of understanding and reasoning about visual scenes. For example, given an image, we want to not only detect and recognize the objects in it, but also to know the relationships between those objects (visual relationship detection) and to generate a textual description based on the image content (image captioning). Or we may want the machine to tell us what the little girl in the image is doing (visual question answering, VQA), or even to remove the dog from the image and retrieve similar images (image editing and retrieval). Such tasks require a higher level of understanding and reasoning about visual content, and the scene graph is precisely such a powerful tool for scene understanding. Consequently, scene graphs have attracted the attention of a large number of researchers, and the related research is often cross-modal, complex, and rapidly evolving. However, no relatively systematic survey of scene graphs currently exists. To this end, this survey conducts a comprehensive investigation of current scene graph research. More specifically, we first summarize the general definition of the scene graph, and then give a comprehensive and systematic discussion of scene graph generation (SGG) methods and of SGG with the aid of prior knowledge. We then investigate the main applications of scene graphs and summarize the most commonly used datasets. Finally, we offer some insights into the future development of scene graphs. We believe this will be a very helpful foundation for future research on scene graphs.
A major obstacle to building models for effective semantic segmentation, and in particular for video semantic segmentation, is the lack of large, well-annotated datasets. This bottleneck is particularly prohibitive in highly specialized and regulated fields such as medicine and surgery, where video semantic segmentation could have important applications but data and expert annotations are scarce. In these settings, temporal cues and anatomical constraints can be leveraged during training to improve performance. Here, we present Temporally Constrained Neural Networks (TCNN), a semi-supervised framework for video semantic segmentation of surgical videos. In this work, we show that autoencoder networks can be used to efficiently provide both spatial and temporal supervisory signals for training deep learning models. We test our method on a newly introduced video dataset of laparoscopic cholecystectomy procedures and on an adaptation of the public CaDIS dataset. We demonstrate that lower-dimensional representations of the predicted masks can be leveraged to provide consistent improvements on these sparsely labeled datasets, with no additional computational cost at inference time. Furthermore, the TCNN framework is model-agnostic and can be combined with other model design choices with minimal additional complexity.
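The following is a hedged sketch of the idea only, not the TCNN design itself: an autoencoder compresses predicted segmentation masks into a low-dimensional code, and that code supplies both a reconstruction (spatial) signal and a smoothness (temporal) signal across consecutive frames. The architecture sizes and the simple MSE terms are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskAutoencoder(nn.Module):
    def __init__(self, n_classes=8, latent_dim=64):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(n_classes, 16, 4, stride=4), nn.ReLU(),
            nn.Conv2d(16, 32, 4, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, latent_dim))
        self.dec = nn.Sequential(
            nn.Linear(latent_dim, 32 * 8 * 8), nn.Unflatten(1, (32, 8, 8)),
            nn.Upsample(scale_factor=4), nn.Conv2d(32, 16, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=4), nn.Conv2d(16, n_classes, 3, padding=1))

    def forward(self, mask_logits):
        z = self.enc(mask_logits)
        return self.dec(z), z

def temporal_consistency_loss(pred_t, pred_tp1, ae):
    # pred_*: (B, n_classes, H, W) mask logits for consecutive frames
    recon, z_t = ae(pred_t.softmax(1))
    _, z_tp1 = ae(pred_tp1.softmax(1))
    spatial = F.mse_loss(recon, pred_t.softmax(1))   # reconstruction term
    temporal = F.mse_loss(z_t, z_tp1)                # smooth latent over time
    return spatial + temporal

ae = MaskAutoencoder()
loss = temporal_consistency_loss(torch.randn(2, 8, 128, 128),
                                 torch.randn(2, 8, 128, 128), ae)
print(loss.item())
```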
Recent action recognition models have achieved impressive results by integrating objects, their locations, and their interactions. However, obtaining dense structured annotations for every frame is tedious and time-consuming, making such methods expensive to train and less scalable. At the same time, if a small set of annotated images is available, inside or outside the domain of interest, how can we leverage it for downstream video tasks? We propose a learning framework, SViT, which demonstrates that structure available from only a small number of images during training can improve video models. SViT relies on two key insights. First, since both images and videos contain structured information, we enrich the model with a set of object tokens that can be used across images and videos. Second, the scene representations of individual frames in a video should be "aligned" with those of still images. This is achieved via a frame-clip consistency loss, which ensures the flow of structured information between images and videos. We explore a particular instantiation of scene structure, namely a hand-object graph, consisting of hands and objects with their locations as nodes and the physical relations contact/no-contact as edges. SViT shows strong performance improvements on multiple video understanding tasks and datasets, and it won first place in the Ego4D CVPR'22 Object State Localization challenge. For code and pretrained models, visit the project page at https://eladb3.github.io/svit/
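Below is a hedged, simplified illustration of a frame-clip consistency objective: object tokens computed from individual frames are pushed to agree with the object tokens computed from the full clip. The cosine formulation, the mean over frames, and the token shapes are assumptions, not the paper's exact loss.

```python
import torch
import torch.nn.functional as F

def frame_clip_consistency(frame_tokens, clip_tokens):
    """frame_tokens: (B, T, K, D) tokens from frames processed as still images.
       clip_tokens:  (B, K, D)    tokens from the full video clip."""
    frame_avg = frame_tokens.mean(dim=1)            # (B, K, D)
    frame_avg = F.normalize(frame_avg, dim=-1)
    clip = F.normalize(clip_tokens, dim=-1)
    # 1 - cosine similarity, averaged over batch and object tokens
    return (1.0 - (frame_avg * clip).sum(dim=-1)).mean()

loss = frame_clip_consistency(torch.randn(2, 8, 6, 128), torch.randn(2, 6, 128))
print(loss.item())
```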
This paper presents a framework for jointly grounding objects that follow certain semantic relationship constraints given in a scene graph. A typical natural scene contains several objects, often exhibiting visual relationships of varied complexities between them. These inter-object relationships provide strong contextual cues toward improving grounding performance compared to a traditional object query-only-based localization task. A scene graph is an efficient and structured way to represent all the objects and their semantic relationships in the image. In an attempt towards bridging these two modalities representing scenes and utilizing contextual information for improving object localization, we rigorously study the problem of grounding scene graphs on natural images. To this end, we propose a novel graph neural network-based approach referred to as Visio-Lingual Message PAssing Graph Neural Network (VL-MPAG Net). In VL-MPAG Net, we first construct a directed graph with object proposals as nodes and an edge between a pair of nodes representing a plausible relation between them. Then a three-step inter-graph and intra-graph message passing is performed to learn the context-dependent representation of the proposals and query objects. These object representations are used to score the proposals to generate object localization. The proposed method significantly outperforms the baselines on four public datasets.
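A hedged sketch of the general idea behind inter-/intra-graph message passing for scene-graph grounding follows: proposals exchange messages among themselves, exchange messages with query-object nodes, and are finally scored against each query. The layer choices and the mean-pooled cross-graph messages are illustrative assumptions, not the VL-MPAG Net architecture.

```python
import torch
import torch.nn as nn

class ProposalQueryMP(nn.Module):
    def __init__(self, dim=256):
        super().__init__()
        self.intra = nn.Linear(dim, dim)        # proposal -> proposal messages
        self.inter_pq = nn.Linear(dim, dim)     # proposal -> query messages
        self.inter_qp = nn.Linear(dim, dim)     # query -> proposal messages
        self.score = nn.Bilinear(dim, dim, 1)   # proposal/query compatibility

    def forward(self, prop_feats, prop_adj, query_feats):
        # prop_feats: (N, D), prop_adj: (N, N), query_feats: (Q, D)
        p = torch.relu(prop_feats + prop_adj @ self.intra(prop_feats))  # intra-graph
        q = torch.relu(query_feats + self.inter_pq(p).mean(0))          # inter: P -> Q
        p = torch.relu(p + self.inter_qp(q).mean(0))                    # inter: Q -> P
        N, Q = p.size(0), q.size(0)
        scores = self.score(p.unsqueeze(1).expand(N, Q, -1).reshape(-1, p.size(-1)),
                            q.unsqueeze(0).expand(N, Q, -1).reshape(-1, q.size(-1)))
        return scores.view(N, Q)   # localization score per (proposal, query) pair

scores = ProposalQueryMP()(torch.randn(10, 256), torch.rand(10, 10), torch.randn(3, 256))
print(scores.shape)  # torch.Size([10, 3])
```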
Context-aware decision support in the operating room can foster surgical safety and efficiency by leveraging real-time feedback from surgical workflow analysis. Most existing works recognize surgical activities at a coarse-grained level, such as phases, steps or events, leaving out fine-grained interaction details about the surgical activity; yet those are needed for more helpful AI assistance in the operating room. Recognizing surgical actions as triplets of <instrument, verb, target> combination delivers comprehensive details about the activities taking place in surgical videos. This paper presents CholecTriplet2021: an endoscopic vision challenge organized at MICCAI 2021 for the recognition of surgical action triplets in laparoscopic videos. The challenge granted private access to the large-scale CholecT50 dataset, which is annotated with action triplet information. In this paper, we present the challenge setup and assessment of the state-of-the-art deep learning methods proposed by the participants during the challenge. A total of 4 baseline methods from the challenge organizers and 19 new deep learning algorithms by competing teams are presented to recognize surgical action triplets directly from surgical videos, achieving mean average precision (mAP) ranging from 4.2% to 38.1%. This study also analyzes the significance of the results obtained by the presented approaches, performs a thorough methodological comparison between them, in-depth result analysis, and proposes a novel ensemble method for enhanced recognition. Our analysis shows that surgical workflow analysis is not yet solved, and also highlights interesting directions for future research on fine-grained surgical activity recognition which is of utmost importance for the development of AI in surgery.
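For readers unfamiliar with the triplet formulation, one simple, hedged way to parameterize <instrument, verb, target> recognition from a frame feature is three multi-label heads over a shared backbone output. The class counts below follow the CholecT50 vocabulary sizes (6 instruments, 10 verbs, 15 targets); everything else is an illustrative assumption and not any challenge entry.

```python
import torch
import torch.nn as nn

class TripletHead(nn.Module):
    def __init__(self, feat_dim=512, n_instruments=6, n_verbs=10, n_targets=15):
        super().__init__()
        self.instrument = nn.Linear(feat_dim, n_instruments)
        self.verb = nn.Linear(feat_dim, n_verbs)
        self.target = nn.Linear(feat_dim, n_targets)

    def forward(self, frame_feat):
        # frame_feat: (B, feat_dim) pooled frame representation
        return (torch.sigmoid(self.instrument(frame_feat)),
                torch.sigmoid(self.verb(frame_feat)),
                torch.sigmoid(self.target(frame_feat)))

i, v, t = TripletHead()(torch.randn(4, 512))
print(i.shape, v.shape, t.shape)  # (4, 6) (4, 10) (4, 15)
```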
In this work, we propose a novel uncertainty-aware object detection framework with a structured graph, in which nodes and edges are denoted by objects and their spatial-semantic similarities, respectively. Specifically, we aim to consider the relationships between objects in order to contextualize them effectively. To achieve this, we first detect objects and then measure their semantic and spatial distances to construct an object graph, which is then processed by a graph neural network (GNN) to refine the objects' visual CNN features. However, refining the CNN features and detection results of every object is inefficient and may be unnecessary, since some of them are correct predictions with low uncertainty. Therefore, we propose to handle uncertain objects by transferring representations from certain objects (sources) to uncertain objects (targets) over a directed graph, and by improving the CNN features only of objects regarded as uncertain, using their representative outputs from the GNN. Furthermore, we compute the training loss with larger weights on uncertain objects, to concentrate on improving uncertain object predictions while maintaining high performance on certain objects. We refer to our model as the Uncertainty-Aware Graph network for object DETection (UAGDET). We experimentally validate it on the large-scale aerial image dataset DOTA, which contains a large number of objects ranging from small to large within an image, and on which our method improves the performance of existing object detection networks.
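Below is a hedged illustration of two ingredients described above: (i) a directed graph that sends messages only from low-uncertainty ("certain") objects to high-uncertainty ones, and (ii) a loss that up-weights uncertain objects. Using entropy as the uncertainty measure, the 0.5 threshold, and the additive refinement are assumptions for the sketch.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def entropy(probs, eps=1e-8):
    return -(probs * (probs + eps).log()).sum(dim=-1)

def refine_uncertain(feats, probs, mp_layer, threshold=0.5):
    # feats: (N, D) CNN features, probs: (N, C) class probabilities
    u = entropy(probs)                                   # (N,) uncertainty per object
    uncertain = u > threshold
    # directed adjacency: certain (source) -> uncertain (target)
    adj = uncertain.float().unsqueeze(1) * (~uncertain).float().unsqueeze(0)
    adj = adj / adj.sum(dim=1, keepdim=True).clamp(min=1)
    messages = adj @ mp_layer(feats)                     # aggregate from certain sources
    refined = feats.clone()
    refined[uncertain] = feats[uncertain] + messages[uncertain]
    return refined, u

def uncertainty_weighted_ce(logits, labels, u):
    per_obj = F.cross_entropy(logits, labels, reduction="none")
    return ((1.0 + u) * per_obj).mean()                  # larger weight when uncertain

N, D, C = 6, 256, 16
probs = torch.softmax(torch.randn(N, C), dim=-1)
refined, u = refine_uncertain(torch.randn(N, D), probs, nn.Linear(D, D))
loss = uncertainty_weighted_ce(torch.randn(N, C), torch.randint(0, C, (N,)), u)
print(refined.shape, loss.item())
```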
Text-based video segmentation aims to segment an actor in a video sequence by specifying the actor and its performed action with a textual query. Due to the problem of semantic asymmetry, previous methods fail to align the video content with the textual query in a fine-grained manner with respect to the actor and its action. Semantic asymmetry means that the two modalities contain different amounts of semantic information during multi-modal fusion. To alleviate this problem, we propose a novel actor and action modular network that localizes the actor and its action in two separate modules. Specifically, we first learn the actor- and action-related content from the video and the textual query, and then match them in a symmetrical manner to localize the target tube. The target tube contains the desired actor and action, and is then fed into a fully convolutional network to predict segmentation masks of the actor. Our method also associates objects across multiple frames with the proposed temporal proposal aggregation mechanism, which enables it to segment the video effectively and keep the predictions temporally consistent. The whole model allows joint learning of actor-action matching and segmentation, and achieves state-of-the-art performance for both single-frame segmentation and full-video segmentation on the A2D Sentences and J-HMDB Sentences datasets.
We propose a novel scene graph generation model called Graph R-CNN, that is both effective and efficient at detecting objects and their relations in images. Our model contains a Relation Proposal Network (RePN) that efficiently deals with the quadratic number of potential relations between objects in an image. We also propose an attentional Graph Convolutional Network (aGCN) that effectively captures contextual information between objects and relations. Finally, we introduce a new evaluation metric that is more holistic and realistic than existing metrics. We report state-of-the-art performance on scene graph generation as evaluated using both existing and our proposed metrics.
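The sketch below shows an attentional graph convolution in the spirit of the aGCN described above: neighbour contributions are weighted by learned, feature-dependent attention instead of a fixed adjacency. The exact scoring function in Graph R-CNN may differ; this is only an illustrative variant.

```python
import torch
import torch.nn as nn

class AttentionalGCNLayer(nn.Module):
    def __init__(self, dim=256):
        super().__init__()
        self.w = nn.Linear(dim, dim, bias=False)
        self.att = nn.Linear(2 * dim, 1)     # scores a (node, neighbour) pair

    def forward(self, x, mask):
        # x: (N, D) node features; mask: (N, N) 1 where an edge is allowed
        N = x.size(0)
        pairs = torch.cat([x.unsqueeze(1).expand(N, N, -1),
                           x.unsqueeze(0).expand(N, N, -1)], dim=-1)
        scores = self.att(pairs).squeeze(-1)                  # (N, N)
        scores = scores.masked_fill(mask == 0, float("-inf"))
        alpha = torch.softmax(scores, dim=-1)                 # attention over neighbours
        alpha = torch.nan_to_num(alpha)                       # rows with no neighbours
        return torch.relu(x + alpha @ self.w(x))

x = AttentionalGCNLayer()(torch.randn(5, 256), torch.ones(5, 5))
print(x.shape)  # torch.Size([5, 256])
```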
Graph representation of objects and their relations in a scene, known as a scene graph, provides a precise and discernible interface to manipulate a scene by modifying the nodes or the edges in the graph. Although existing works have shown promising results in modifying the placement and pose of objects, scene manipulation often leads to losing some visual characteristics like the appearance or identity of objects. In this work, we propose DisPositioNet, a model that learns a disentangled representation for each object for the task of image manipulation using scene graphs in a self-supervised manner. Our framework enables the disentanglement of the variational latent embeddings as well as the feature representation in the graph. In addition to producing more realistic images due to the decomposition of features like pose and identity, our method takes advantage of the probabilistic sampling in the intermediate features to generate more diverse images in object replacement or addition tasks. The results of our experiments show that disentangling the feature representations in the latent manifold of the model outperforms the previous works qualitatively and quantitatively on two public benchmarks. Project Page: https://scenegenie.github.io/DispositioNet/
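As a hedged sketch of one way to realize a disentangled variational node embedding, the object feature below is encoded into two separate Gaussian latents (e.g. "appearance" and "pose"), each sampled with the reparameterization trick and regularized by a KL term. This only illustrates the idea of disentangled probabilistic embeddings; it is not the DisPositioNet architecture.

```python
import torch
import torch.nn as nn

class DisentangledNodeEncoder(nn.Module):
    def __init__(self, in_dim=256, z_dim=32):
        super().__init__()
        self.appearance = nn.Linear(in_dim, 2 * z_dim)   # mean and log-variance
        self.pose = nn.Linear(in_dim, 2 * z_dim)

    @staticmethod
    def reparameterize(stats):
        mu, logvar = stats.chunk(2, dim=-1)
        return mu + torch.randn_like(mu) * (0.5 * logvar).exp(), mu, logvar

    def forward(self, obj_feat):
        z_app, mu_a, lv_a = self.reparameterize(self.appearance(obj_feat))
        z_pose, mu_p, lv_p = self.reparameterize(self.pose(obj_feat))
        # KL term keeps both factors close to a standard normal prior
        kl = sum(-0.5 * (1 + lv - mu.pow(2) - lv.exp()).sum(-1).mean()
                 for mu, lv in [(mu_a, lv_a), (mu_p, lv_p)])
        return torch.cat([z_app, z_pose], dim=-1), kl

z, kl = DisentangledNodeEncoder()(torch.randn(4, 256))
print(z.shape, kl.item())  # (4, 64)
```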
Image segmentation is a key topic in image processing and computer vision with applications such as scene understanding, medical image analysis, robotic perception, video surveillance, augmented reality, and image compression, among many others. Various algorithms for image segmentation have been developed in the literature. Recently, due to the success of deep learning models in a wide range of vision applications, there has been a substantial amount of works aimed at developing image segmentation approaches using deep learning models. In this survey, we provide a comprehensive review of the literature at the time of this writing, covering a broad spectrum of pioneering works for semantic and instance-level segmentation, including fully convolutional pixel-labeling networks, encoder-decoder architectures, multi-scale and pyramid based approaches, recurrent networks, visual attention models, and generative models in adversarial settings. We investigate the similarity, strengths and challenges of these deep learning models, examine the most widely used datasets, report performances, and discuss promising future research directions in this area.
Graph neural networks are well suited to capture the latent interactions between various entities in the spatio-temporal domain (e.g. videos). However, when an explicit structure is not available, it is not obvious which atomic elements should be represented as nodes. Current works generally use pre-trained object detectors or fixed, predefined regions to extract graph nodes. Our proposed model improves upon this by learning nodes that dynamically attach to well-delimited salient regions, which are relevant for the higher-level task, without using any object-level supervision. Constructing these localized, adaptive nodes gives our model an inductive bias towards object-centric representations, and we show that it discovers regions that correlate well with objects in the video. In extensive ablation studies and experiments on two challenging datasets, we show superior performance over previous graph neural network models for video classification.
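The following is a hedged sketch of nodes that attach to salient regions without object-level supervision: a set of learned node queries soft-attends over the backbone feature map, so each node pools its own spatial support. This attention pooling is one simple instantiation of the idea, not the paper's model.

```python
import torch
import torch.nn as nn

class AdaptiveNodePooling(nn.Module):
    def __init__(self, num_nodes=8, dim=256):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(num_nodes, dim))
        self.proj = nn.Linear(dim, dim)

    def forward(self, feat_map):
        # feat_map: (B, D, H, W) backbone features for one frame
        B, D, H, W = feat_map.shape
        flat = feat_map.flatten(2).transpose(1, 2)            # (B, H*W, D)
        attn = torch.einsum("kd,bnd->bkn", self.queries, self.proj(flat))
        attn = torch.softmax(attn / D ** 0.5, dim=-1)         # (B, K, H*W) spatial support
        nodes = torch.einsum("bkn,bnd->bkd", attn, flat)      # (B, K, D) node features
        return nodes, attn.view(B, -1, H, W)                  # attention maps show regions

nodes, maps = AdaptiveNodePooling()(torch.randn(2, 256, 14, 14))
print(nodes.shape, maps.shape)  # (2, 8, 256), (2, 8, 14, 14)
```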
The objective of this work is to learn an object-centric video representation with the aim of improving transferability to novel tasks, i.e. tasks different from the pre-training task of action classification. To this end, we introduce a new object-centric video recognition model based on a transformer architecture. The model learns a set of object-centric summary vectors for the video and uses these vectors to fuse the visual and spatio-temporal trajectory "modalities" of the video clip. We also introduce a novel trajectory contrastive loss to further enhance the objectness of these summary vectors. Through experiments on four datasets - SomethingSomething-V2, SomethingElse, Action Genome and EpicKitchens - we show that the object-centric model outperforms prior video representations (both object-agnostic and object-aware) when: (1) classifying actions on unseen objects and unseen environments; (2) low-shot learning of novel classes; (3) linear probing to other downstream tasks; as well as (4) for standard action classification.
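Below is a hedged illustration of a trajectory contrastive objective: each clip's object summary vector is pulled towards the (pooled) trajectory features of the same clip and pushed away from those of other clips in the batch, in a standard InfoNCE form. The exact loss in the paper may differ.

```python
import torch
import torch.nn.functional as F

def trajectory_contrastive_loss(summary, traj_feats, temperature=0.07):
    """summary:    (B, D)    object-centric summary vector per clip.
       traj_feats: (B, T, D) per-timestep trajectory features per clip."""
    traj = F.normalize(traj_feats.mean(dim=1), dim=-1)   # pool trajectory over time
    summ = F.normalize(summary, dim=-1)
    logits = summ @ traj.t() / temperature               # (B, B) similarities
    targets = torch.arange(summary.size(0))              # positives on the diagonal
    return F.cross_entropy(logits, targets)

loss = trajectory_contrastive_loss(torch.randn(8, 128), torch.randn(8, 16, 128))
print(loss.item())
```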
Graph neural networks have been shown to learn effective node representations, enabling node-, link-, and graph-level inference. Conventional graph networks assume static relations between nodes, while relations between entities in a video often evolve over time, with nodes entering and exiting dynamically. In such temporally-dynamic graphs, a core problem is inferring the future state of spatio-temporal edges, which can constitute multiple types of relations. To address this problem, we propose MTD-GNN, a graph network for predicting temporally-dynamic edges for multiple types of relations. We propose a factorized spatio-temporal graph attention layer to learn dynamic node representations and present a multi-task edge prediction loss that models multiple relations simultaneously. The proposed architecture operates on top of scene graphs that we obtain from videos through object detection and spatio-temporal linking. Experimental evaluations on ActionGenome and CLEVRER show that modeling multiple relations in our temporally-dynamic graph network can be mutually beneficial, outperforming existing static and spatio-temporal graph neural networks, as well as state-of-the-art predicate classification methods.
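A hedged sketch of the two ideas named above follows: (i) factorized attention that attends over nodes within a frame and then over time for each node, and (ii) a multi-task head that predicts several relation types per future edge. The dimensions and the use of nn.MultiheadAttention are illustrative assumptions, not the MTD-GNN implementation.

```python
import torch
import torch.nn as nn

class FactorizedSTLayer(nn.Module):
    def __init__(self, dim=128, heads=4):
        super().__init__()
        self.spatial = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.temporal = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):
        # x: (T, N, D) node features over T timesteps
        s, _ = self.spatial(x, x, x)                       # attend over nodes, per frame
        t_in = s.transpose(0, 1)                           # (N, T, D)
        t, _ = self.temporal(t_in, t_in, t_in)             # attend over time, per node
        return t.transpose(0, 1)                           # back to (T, N, D)

class MultiTaskEdgeHead(nn.Module):
    def __init__(self, dim=128, num_relations=2):
        super().__init__()
        self.heads = nn.ModuleList(nn.Linear(2 * dim, 1) for _ in range(num_relations))

    def forward(self, h_i, h_j):
        pair = torch.cat([h_i, h_j], dim=-1)
        return torch.cat([torch.sigmoid(head(pair)) for head in self.heads], dim=-1)

x = FactorizedSTLayer()(torch.randn(6, 5, 128))             # 6 frames, 5 nodes
edge_probs = MultiTaskEdgeHead()(x[-1, 0], x[-1, 1])         # two relation types
print(x.shape, edge_probs.shape)  # (6, 5, 128), (2,)
```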
The layout of a mobile screen is a critical data source for UI design research and for semantic understanding of the screen. However, UI layouts in existing datasets are often noisy, mismatched with their visual representation, or composed of generic or app-specific types that are difficult to analyze and model. In this paper, we propose the CLAY pipeline, which uses a deep learning approach to denoise UI layouts, allowing us to automatically improve existing mobile UI layout datasets at scale. Our pipeline takes both the screenshot and the raw UI layout, and annotates the raw layout by removing incorrect nodes and assigning a semantically meaningful type to each node. To experiment with our data-cleaning pipeline, we create the CLAY dataset of 59,555 human-annotated screen layouts, based on screenshots and raw layouts from Rico, a public mobile UI corpus. Our deep models achieve high accuracy, with an F1 score of 82.7% for detecting layout objects that have no valid visual representation and 85.9% for recognizing object types, significantly outperforming a heuristic baseline. Our work lays a foundation for creating large-scale, high-quality UI layout datasets for data-driven mobile UI research, and reduces the need for manual labeling, which is prohibitively expensive.
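The sketch below illustrates, in a hedged way, the two per-node decisions described above: whether a raw layout node has a valid visual counterpart, and which semantic type it should receive. Fusing a screenshot-crop feature with encoded layout attributes is one plausible input; the feature dimensions, the number of types, and the real CLAY models are not taken from the paper.

```python
import torch
import torch.nn as nn

class LayoutDenoiser(nn.Module):
    def __init__(self, visual_dim=128, attr_dim=16, hidden=128, num_types=25):
        super().__init__()
        self.fuse = nn.Sequential(nn.Linear(visual_dim + attr_dim, hidden), nn.ReLU())
        self.valid = nn.Linear(hidden, 1)         # does the node match anything on screen?
        self.type = nn.Linear(hidden, num_types)  # semantic type, e.g. BUTTON, IMAGE, ...

    def forward(self, crop_feat, attr_feat):
        # crop_feat: (N, visual_dim) features of the node's screenshot region
        # attr_feat: (N, attr_dim)   encoded raw layout attributes (bounds, class, ...)
        h = self.fuse(torch.cat([crop_feat, attr_feat], dim=-1))
        return torch.sigmoid(self.valid(h)).squeeze(-1), self.type(h)

valid_prob, type_logits = LayoutDenoiser()(torch.randn(10, 128), torch.randn(10, 16))
print(valid_prob.shape, type_logits.shape)  # (10,), (10, 25)
```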
Understanding a visual scene goes beyond recognizing individual objects in isolation. Relationships between objects also constitute rich semantic information about the scene. In this work, we explicitly model the objects and their relationships using scene graphs, a visually-grounded graphical structure of an image. We propose a novel end-to-end model that generates such structured scene representation from an input image. The model solves the scene graph inference problem using standard RNNs and learns to iteratively improve its predictions via message passing. Our joint inference model can take advantage of contextual cues to make better predictions on objects and their relationships. The experiments show that our model significantly outperforms previous methods for generating scene graphs using the Visual Genome dataset and inferring support relations with the NYU Depth v2 dataset.
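Here is a hedged sketch of iterative refinement by message passing: node and edge states are kept in GRU cells and updated from each other over a few iterations, after which both are classified. This mirrors the idea described above but is not the authors' implementation; the message pooling and sizes are assumptions.

```python
import torch
import torch.nn as nn

class IterativeMessagePassing(nn.Module):
    def __init__(self, dim=256, n_obj_classes=150, n_rel_classes=50, steps=3):
        super().__init__()
        self.steps = steps
        self.node_gru = nn.GRUCell(dim, dim)
        self.edge_gru = nn.GRUCell(dim, dim)
        self.node_cls = nn.Linear(dim, n_obj_classes)
        self.edge_cls = nn.Linear(dim, n_rel_classes)

    def forward(self, node_feats, edge_feats, edges):
        # node_feats: (N, D), edge_feats: (E, D), edges: (E, 2) subject/object index
        h_n, h_e = node_feats, edge_feats
        subj, obj = edges[:, 0], edges[:, 1]
        for _ in range(self.steps):
            # node update: pool messages from incident edges
            node_msg = torch.zeros_like(h_n).index_add_(0, subj, h_e).index_add_(0, obj, h_e)
            h_n = self.node_gru(node_msg, h_n)
            # edge update: message from its two endpoint nodes
            edge_msg = h_n[subj] + h_n[obj]
            h_e = self.edge_gru(edge_msg, h_e)
        return self.node_cls(h_n), self.edge_cls(h_e)

edges = torch.tensor([[0, 1], [1, 2], [2, 0]])
obj_logits, rel_logits = IterativeMessagePassing()(torch.randn(3, 256),
                                                   torch.randn(3, 256), edges)
print(obj_logits.shape, rel_logits.shape)  # (3, 150), (3, 50)
```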
Graphs offer a natural way to formulate Multiple Object Tracking (MOT) and Multiple Object Tracking and Segmentation (MOTS) within the tracking-by-detection paradigm. However, they also introduce a major challenge for learning methods, since it is not trivial to define a model that can operate on such a structured domain. In this work, we exploit the classical network flow formulation of MOT to define a fully differentiable framework based on Message Passing Networks (MPNs). By operating directly on the graph domain, our method can reason globally over an entire set of detections and exploit contextual features. It then jointly predicts the final solution to the data association problem and segmentation masks for all objects in the scene, while exploiting the synergies between the two tasks. We achieve state-of-the-art results for both tracking and segmentation on several publicly available datasets. Our code is available at github.com/ocetintas/mpntrackseg.
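A hedged, stripped-down sketch of message passing for data association follows: detections are nodes, candidate associations across frames are edges, and after a few rounds of node/edge updates each edge is classified as an active or inactive association. This illustrates the formulation only and is not the released mpntrackseg code; all layer choices are assumptions.

```python
import torch
import torch.nn as nn

class AssociationMPN(nn.Module):
    def __init__(self, dim=64, steps=4):
        super().__init__()
        self.steps = steps
        self.edge_update = nn.Sequential(nn.Linear(3 * dim, dim), nn.ReLU())
        self.node_update = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU())
        self.edge_cls = nn.Linear(dim, 1)

    def forward(self, node_feats, edge_feats, edges):
        # node_feats: (N, D), edge_feats: (E, D), edges: (E, 2) past/future detection
        h_n, h_e = node_feats, edge_feats
        src, dst = edges[:, 0], edges[:, 1]
        for _ in range(self.steps):
            h_e = self.edge_update(torch.cat([h_e, h_n[src], h_n[dst]], dim=-1))
            agg = torch.zeros_like(h_n).index_add_(0, src, h_e).index_add_(0, dst, h_e)
            h_n = self.node_update(torch.cat([h_n, agg], dim=-1))
        return torch.sigmoid(self.edge_cls(h_e)).squeeze(-1)   # P(edge is a true match)

edges = torch.tensor([[0, 2], [0, 3], [1, 2], [1, 3]])          # frame t -> frame t+1
probs = AssociationMPN()(torch.randn(4, 64), torch.randn(4, 64), edges)
print(probs)  # one association probability per candidate edge
```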
Current 3D segmentation methods rely heavily on large-scale point-wise annotated datasets, which are notoriously laborious to annotate, and few attempts have been made to circumvent the need for per-point annotations. In this work, we study weakly supervised 3D semantic instance segmentation. The key idea is to leverage 3D bounding box labels, which are easier and faster to annotate. Indeed, we show that it is possible to train dense segmentation models using only bounding box labels. At the core of our method, Box2Mask, lies a deep model inspired by classical Hough voting that directly votes for bounding box parameters, together with a clustering method specifically tailored to bounding box votes. This goes beyond the commonly used center votes, which do not fully exploit the bounding box annotations. On the ScanNet test set, our weakly supervised model attains leading performance among weakly supervised approaches (+18 mAP@50). Remarkably, it also reaches 97% of the mAP@50 score of current fully supervised models. To further illustrate the practicality of our work, we train Box2Mask on the recently released ARKitScenes dataset, which is annotated with 3D bounding boxes only, and show compelling 3D instance segmentation masks for the first time.
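To ground the voting idea, here is a hedged sketch: every 3D point regresses the parameters of the bounding box it belongs to (here, offsets to the box's min and max corners), and points whose reconstructed boxes agree are grouped into one instance. The simple pairwise-IoU grouping below stands in for the paper's dedicated clustering of box votes and is only illustrative.

```python
import torch
import torch.nn as nn

class BoxVoteHead(nn.Module):
    def __init__(self, feat_dim=32):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(), nn.Linear(64, 6))

    def forward(self, xyz, feats):
        # xyz: (N, 3) point coordinates, feats: (N, F) point features
        offsets = self.mlp(feats)                      # (N, 6): to min corner, to max corner
        box_min = xyz + offsets[:, :3]
        box_max = xyz + offsets[:, 3:]
        return torch.cat([box_min, box_max], dim=-1)   # (N, 6) one box vote per point

def box_iou_3d(a, b):
    lo = torch.max(a[:3], b[:3]); hi = torch.min(a[3:], b[3:])
    inter = (hi - lo).clamp(min=0).prod()
    vol = lambda box: (box[3:] - box[:3]).clamp(min=0).prod()
    return inter / (vol(a) + vol(b) - inter + 1e-8)

def group_votes(boxes, iou_thresh=0.5):
    labels = -torch.ones(boxes.size(0), dtype=torch.long)
    next_id = 0
    for i in range(boxes.size(0)):                     # greedy agreement-based grouping
        if labels[i] >= 0:
            continue
        labels[i] = next_id
        for j in range(i + 1, boxes.size(0)):
            if labels[j] < 0 and box_iou_3d(boxes[i], boxes[j]) > iou_thresh:
                labels[j] = next_id
        next_id += 1
    return labels                                      # per-point instance id

votes = BoxVoteHead()(torch.rand(100, 3), torch.randn(100, 32))
print(group_votes(votes).shape)  # torch.Size([100])
```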
Video captioning aims to interpret complex visual content as textual descriptions, which requires the model to fully understand the video scene, including the objects and their interactions. Prevailing methods adopt off-the-shelf object detection networks to provide object proposals and use attention mechanisms to model the relations between objects. They often miss semantic concepts that are undetermined by the pretrained models and fail to identify the exact predicate relationships between objects. In this paper, we investigate the open research task of generating text descriptions for given videos, and propose a Cross-Modal Graph (CMG) with meta concepts for video captioning. Specifically, to cover useful semantic concepts in video captions, we weakly learn the corresponding visual regions for text descriptions, where the associated visual regions and textual words are named cross-modal meta concepts. We then dynamically construct meta concept graphs from the learned cross-modal meta concepts. We also build holistic video-level and local frame-level video graphs with the predicted predicates to model the video sequence structure. We validate the efficacy of our proposed techniques with extensive experiments and achieve state-of-the-art results on two public datasets.