In this paper, we address the problem of high-performance and computationally efficient content-based video retrieval in large-scale datasets. Current methods typically propose either (i) fine-grained approaches that employ spatio-temporal representations and similarity calculations, achieving high performance at a high computational cost, or (ii) coarse-grained approaches that represent/index videos as global vectors, in which the spatio-temporal structure is lost, providing lower performance but also lower computational cost. In this work, we propose a knowledge distillation framework, called Distill-and-Select (DnS), that, starting from a well-performing fine-grained teacher network, learns: a) student networks at different retrieval-performance and computational-efficiency trade-offs, and b) a selector network that at test time rapidly directs samples to the appropriate student so as to maintain both high retrieval performance and high computational efficiency. We train several students with different architectures and arrive at different trade-offs of performance and efficiency, i.e., speed and storage requirements, including fine-grained students that use binary representations. Importantly, the proposed scheme allows knowledge distillation on large, unlabelled datasets, and this leads to good students. We evaluate DnS on five public datasets across three different video retrieval tasks and demonstrate a) that our students achieve state-of-the-art performance in several cases and b) that the DnS framework provides an excellent trade-off between retrieval performance, computational speed, and storage space. In specific configurations, the proposed method achieves similar mAP to the teacher while being 20 times faster and requiring 240 times less storage space. The collected dataset and implementation are publicly available at: https://github.com/mever-team/distill-and-select.
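The routing idea behind DnS can be illustrated with a minimal sketch (hypothetical shapes, module names and budget value; not the authors' implementation): a cheap coarse similarity ranks the whole database, and a small selector network decides which query-database pairs are worth re-scoring with the expensive fine-grained student.

```python
import torch
import torch.nn as nn

class Selector(nn.Module):
    """Tiny MLP that, from the coarse similarity and cheap per-pair statistics,
    predicts whether the fine-grained student is worth invoking.
    Hypothetical feature layout; the real DnS selector network differs."""
    def __init__(self, in_dim=3):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, feats):                     # feats: (N, in_dim)
        return torch.sigmoid(self.net(feats))     # routing probability per pair

def retrieve(query_vec, db_vecs, selector, fine_student, budget=0.2):
    """Rank the database with the coarse (global-vector) student, then re-score
    only the top `budget` fraction of pairs with the fine-grained student."""
    coarse_sim = db_vecs @ query_vec                                    # (N,)
    q_norm = torch.full((len(db_vecs),), query_vec.norm().item())
    stats = torch.stack([coarse_sim, db_vecs.norm(dim=1), q_norm], dim=1)
    route_prob = selector(stats).squeeze(1)                             # (N,)
    k = max(1, int(budget * len(db_vecs)))
    refine_idx = route_prob.topk(k).indices
    final_sim = coarse_sim.clone()
    final_sim[refine_idx] = fine_student(query_vec, db_vecs[refine_idx])
    return final_sim.argsort(descending=True)

if __name__ == "__main__":
    torch.manual_seed(0)
    db, q = torch.randn(100, 64), torch.randn(64)
    fine = lambda q, d: 1.05 * (d @ q)     # stand-in for the fine-grained student
    print(retrieve(q, db, Selector(), fine)[:5])
```

The `budget` parameter makes the speed/accuracy trade-off explicit: routing more pairs to the fine-grained student raises retrieval quality at the cost of throughput.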
In recent years, a vast amount of visual content has been generated and shared in many fields, such as social media platforms, medical imaging, and robotics. This abundance of content creation and sharing introduces new challenges, particularly in searching databases for similar content -- content-based image retrieval (CBIR) -- a long-established research area in which improved efficiency and accuracy are needed for real-time retrieval. Artificial intelligence has made progress in CBIR and has greatly facilitated the instance-search process. In this survey, we review recent instance-retrieval works developed on the basis of deep learning algorithms and techniques, organizing the survey by the type of deep network architecture, deep features, feature embedding methods, and network fine-tuning strategies. Our survey considers a wide variety of recent methods, where we identify milestone works, reveal connections among the various methods, present commonly used benchmarks, evaluation results, and common challenges, and propose promising future directions.
Computer vision tasks can benefit from estimating the salient object regions and the interactions between those object regions. Identifying object regions involves utilizing pretrained models to perform object detection, object segmentation and/or object pose estimation. However, this is infeasible in practice for the following reasons: 1) the object categories in the pretrained models' training datasets may not cover all the object categories encountered in general computer vision tasks, 2) the domain gap between the pretrained models' training datasets and the target task's dataset may affect performance, and 3) the bias and variance present in the pretrained models may leak into the target task, resulting in an inadvertently biased target model. To overcome these drawbacks, we propose to exploit the common rationale that a sequence of video frames captures a set of common objects and the interactions between them, so that a notion of co-segmentation between video-frame features can equip the model with the ability to automatically focus on salient regions and improve the performance of the underlying task in an end-to-end manner. In this regard, we propose a generic module called the Co-Segmentation Activation Module (COSAM), which can be plugged into any CNN to promote a notion of co-segmentation-based attention among a sequence of video-frame features. We demonstrate the application of COSAM in three video-based tasks, namely 1) video-based person re-ID, 2) video captioning, and 3) video action classification, and show that COSAM is able to capture the salient regions in video frames, leading to notable performance improvements along with interpretable attention maps.
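A minimal sketch of the co-segmentation-based attention idea (assumed tensor layout and a simplified correlation mask; the published COSAM module is more elaborate): each spatial location of a frame is up-weighted according to its similarity to a summary of the other frames, so regions shared across the sequence receive more attention.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CoSegAttention(nn.Module):
    """Simplified co-segmentation attention over a sequence of frame feature
    maps (T, C, H, W). Each spatial location is weighted by its similarity to
    the mean descriptor of the *other* frames, so regions shared across the
    sequence are emphasized. A sketch, not the published COSAM design."""
    def forward(self, feats):                      # feats: (T, C, H, W)
        T, C, H, W = feats.shape
        flat = feats.flatten(2)                    # (T, C, H*W)
        frame_desc = flat.mean(dim=2)              # (T, C) mean descriptor per frame
        masks = []
        for t in range(T):
            others = torch.cat([frame_desc[:t], frame_desc[t + 1:]]).mean(dim=0)  # (C,)
            sim = F.cosine_similarity(flat[t], others.unsqueeze(1), dim=0)        # (H*W,)
            masks.append(torch.softmax(sim, dim=0).view(H, W))
        mask = torch.stack(masks).unsqueeze(1)     # (T, 1, H, W)
        return feats * (1.0 + mask)                # residual re-weighting

if __name__ == "__main__":
    x = torch.randn(4, 256, 14, 14)                # e.g. 4 frames of CNN features
    print(CoSegAttention()(x).shape)               # torch.Size([4, 256, 14, 14])
```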
The task of cross-modal retrieval between text and video aims to understand the correspondence between vision and language. Existing studies follow the trend of measuring text-video similarity on the basis of text and video embeddings. In common practice, the video representation is constructed either by feeding video frames into a network for global visual feature extraction, or by modelling only simple semantic relations among local fine-grained frame regions using graph convolutional networks. However, these video representations do not fully exploit the spatio-temporal relations among visual components when learning video representations, and thus cannot distinguish videos that share the same visual components but differ in their relations. To address this problem, we propose a Visual Spatio-temporal Relation-enhanced Network (VSR-Net), a novel cross-modal retrieval framework that considers the spatial-visual relations among components to enhance the global video representation when bridging the text and video modalities. Specifically, a multi-layer spatio-temporal transformer is used to encode visual spatio-temporal relations and learn visual relational features. We align the global visual features and the fine-grained relational features with the text features in two embedding spaces for cross-modal text-video retrieval. Extensive experiments are conducted on the MSR-VTT and MSVD datasets. The results demonstrate the effectiveness of our proposed model. We will release our code to facilitate future research.
Person re-identification (Re-ID) aims at retrieving a person of interest across multiple non-overlapping cameras. With the advancement of deep neural networks and the increasing demand for intelligent video surveillance, it has gained significantly increased interest in the computer vision community. By dissecting the components involved in developing a person Re-ID system, we categorize it into the closed-world and open-world settings. The widely studied closed-world setting is usually applied under various research-oriented assumptions, and has achieved inspiring success using deep learning techniques on a number of datasets. We first conduct a comprehensive overview with in-depth analysis of closed-world person Re-ID from three different perspectives, including deep feature representation learning, deep metric learning and ranking optimization. With the performance saturation under the closed-world setting, the research focus for person Re-ID has recently shifted to the open-world setting, facing more challenging issues. This setting is closer to practical applications under specific scenarios. We summarize the open-world Re-ID in terms of five different aspects. By analyzing the advantages of existing methods, we design a powerful AGW baseline, achieving state-of-the-art or at least comparable performance on twelve datasets for FOUR different Re-ID tasks. Meanwhile, we introduce a new evaluation metric (mINP) for person Re-ID, indicating the cost of finding all the correct matches, which provides an additional criterion to evaluate the Re-ID system for real applications. Finally, some important yet under-investigated open issues are discussed.
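The mINP metric can be made concrete with a short sketch (the list-of-lists input format is an assumed convenience, not the authors' evaluation code): for each query, the inverse negative penalty is the number of ground-truth matches divided by the rank of the hardest, i.e. last-retrieved, correct match, and mINP averages this over queries.

```python
import numpy as np

def mean_inp(ranked_labels_per_query):
    """Mean Inverse Negative Penalty (mINP): for each query,
    INP = |G| / R_hard, where |G| is the number of ground-truth matches and
    R_hard is the 1-based rank of the last correct match in the ranking."""
    inps = []
    for labels in ranked_labels_per_query:        # labels[i] = 1 if gallery item at rank i+1 matches
        labels = np.asarray(labels)
        positives = np.flatnonzero(labels)
        if len(positives) == 0:
            continue                              # query with no ground-truth match is skipped
        r_hard = positives[-1] + 1                # 1-based rank of the hardest (last) correct match
        inps.append(len(positives) / r_hard)
    return float(np.mean(inps))

# toy example: two queries, each with 2 true matches in a gallery of 5
print(mean_inp([[1, 0, 1, 0, 0],                  # last match at rank 3 -> INP = 2/3
                [0, 1, 0, 0, 1]]))                # last match at rank 5 -> INP = 2/5
# -> (2/3 + 2/5) / 2 ≈ 0.533
```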
Astounding results from Transformer models on natural language tasks have intrigued the vision community to study their application to computer vision problems. Among their salient benefits, Transformers enable modeling long dependencies between input sequence elements and support parallel processing of sequence as compared to recurrent networks e.g., Long short-term memory (LSTM). Different from convolutional networks, Transformers require minimal inductive biases for their design and are naturally suited as set-functions. Furthermore, the straightforward design of Transformers allows processing multiple modalities (e.g., images, videos, text and speech) using similar processing blocks and demonstrates excellent scalability to very large capacity networks and huge datasets. These strengths have led to exciting progress on a number of vision tasks using Transformer networks. This survey aims to provide a comprehensive overview of the Transformer models in the computer vision discipline. We start with an introduction to fundamental concepts behind the success of Transformers i.e., self-attention, large-scale pre-training, and bidirectional feature encoding. We then cover extensive applications of transformers in vision including popular recognition tasks (e.g., image classification, object detection, action recognition, and segmentation), generative modeling, multi-modal tasks (e.g., visual-question answering, visual reasoning, and visual grounding), video processing (e.g., activity recognition, video forecasting), low-level vision (e.g., image super-resolution, image enhancement, and colorization) and 3D analysis (e.g., point cloud classification and segmentation). We compare the respective advantages and limitations of popular techniques both in terms of architectural design and their experimental value. Finally, we provide an analysis on open research directions and possible future works. We hope this effort will ignite further interest in the community to solve current challenges towards the application of transformer models in computer vision.
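As a reference point for the self-attention concept that the survey starts from, here is a minimal single-head sketch (assumed dimensions; practical vision transformers use multi-head attention with positional encodings and layer normalization):

```python
import torch
import torch.nn as nn

class SelfAttention(nn.Module):
    """Single-head scaled dot-product self-attention over a token sequence,
    the basic block transformer architectures build on (multi-head attention,
    masking and positional encodings omitted for brevity)."""
    def __init__(self, dim):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)
        self.scale = dim ** -0.5

    def forward(self, x):                                   # x: (B, N, dim) tokens
        q, k, v = self.q(x), self.k(x), self.v(x)
        attn = torch.softmax(q @ k.transpose(-2, -1) * self.scale, dim=-1)  # (B, N, N)
        return attn @ v                                     # (B, N, dim)

print(SelfAttention(64)(torch.randn(2, 16, 64)).shape)      # torch.Size([2, 16, 64])
```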
In recent years, the Transformer architecture has shown its superiority in the video-based person re-identification task. Inspired by video representation learning, these methods mainly focus on designing modules to extract informative spatial and temporal features. However, they are still limited in extracting local attributes and global identity information, which are critical for the person re-identification task. In this paper, we propose a novel Multi-Stage Spatial-Temporal Aggregation Transformer (MSTAT) with two newly designed proxy embedding modules to address the above issue. Specifically, MSTAT consists of three stages to encode the attribute-associated, the identity-associated, and the attribute-identity-associated information from the video clips, respectively, achieving the holistic perception of the input person. We combine the outputs of all the stages for the final identification. In practice, to save the computational cost, the Spatial-Temporal Aggregation (STA) modules are first adopted in each stage to conduct the self-attention operations along the spatial and temporal dimensions separately. We further introduce the Attribute-Aware and Identity-Aware Proxy embedding modules (AAP and IAP) to extract the informative and discriminative feature representations at different stages. All of them are realized by employing newly designed self-attention operations with specific meanings. Moreover, temporal patch shuffling is also introduced to further improve the robustness of the model. Extensive experimental results demonstrate the effectiveness of the proposed modules in extracting informative and discriminative information from the videos, and illustrate that MSTAT can achieve state-of-the-art accuracies on various standard benchmarks.
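The temporal patch shuffling mentioned above can be sketched as a simple augmentation (the (B, T, N, C) token layout and the shuffle ratio are assumptions; the paper's exact granularity may differ): for a random subset of patch positions, the frame order of that patch's tokens is permuted, keeping appearance statistics while breaking strict temporal order.

```python
import torch

def temporal_patch_shuffle(tokens, shuffle_ratio=0.3):
    """tokens: (B, T, N, C) -- batch, frames, patches per frame, channels.
    For a randomly chosen subset of patch positions, permute that patch's
    tokens along the temporal axis. A data-augmentation sketch inspired by
    MSTAT's temporal patch shuffling; ratio and granularity are assumed."""
    B, T, N, C = tokens.shape
    out = tokens.clone()
    num_shuffled = int(shuffle_ratio * N)
    for b in range(B):
        patch_ids = torch.randperm(N)[:num_shuffled]   # which patch positions to shuffle
        perm = torch.randperm(T)                        # one temporal permutation
        out[b, :, patch_ids] = tokens[b, perm][:, patch_ids]
    return out

x = torch.arange(2 * 4 * 6 * 3, dtype=torch.float32).reshape(2, 4, 6, 3)
# only the temporal order changes, so per-patch sums over time are preserved
print(torch.allclose(temporal_patch_shuffle(x, 0.5).sum(dim=1), x.sum(dim=1)))  # True
```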
In this paper, a pure-attention bottom-up approach called ViGAT is proposed, which utilizes an object detector together with a Vision Transformer (ViT) backbone network to derive object and frame features, and a head network to process these features for the task of event recognition and explanation in video. The ViGAT head consists of graph attention network (GAT) blocks factorized along the spatial and temporal dimensions in order to effectively capture both local and long-term dependencies between objects or frames. Moreover, using the weighted in-degrees (WiDs) derived from the adjacency matrices of the various GAT blocks, we show that the proposed architecture can identify the most salient objects and frames that explain the decision of the network. A comprehensive evaluation study is conducted, showing that the proposed approach provides state-of-the-art results on three large, publicly available video datasets (FCVID, Mini-Kinetics, ActivityNet).
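The WiD-based explanation step can be sketched in a few lines (it is assumed here that a GAT block exposes its attention-derived adjacency matrix; variable names are illustrative): summing each column of the adjacency matrix gives a weighted in-degree per node, and the nodes (objects or frames) with the largest values are reported as the most salient for the decision.

```python
import torch

def weighted_in_degrees(adjacency):
    """adjacency: (N, N) attention-derived adjacency of a GAT block, where
    entry (i, j) is the attention weight from node i to node j.  The weighted
    in-degree of node j is the sum of column j; higher values mean the node
    receives more attention and can be reported as more salient."""
    return adjacency.sum(dim=0)                       # (N,)

def top_salient_nodes(adjacency, k=3):
    wids = weighted_in_degrees(adjacency)
    return wids.topk(min(k, len(wids))).indices       # indices of most salient objects/frames

# toy adjacency over 5 nodes (rows already softmax-normalized)
A = torch.softmax(torch.randn(5, 5), dim=1)
print(top_salient_nodes(A))
```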
Ranking models are the main components of information retrieval systems. Several approaches to ranking are based on traditional machine learning algorithms using a set of hand-crafted features. Recently, researchers have leveraged deep learning models in information retrieval. These models are trained end-to-end to extract features from the raw data for the ranking task, and thus they overcome the limitations of hand-crafted features. A variety of deep learning models have been proposed, each presenting a set of neural network components to extract the features used for ranking. In this paper, we compare the models proposed in the literature along different dimensions in order to understand the main contributions and limitations of each model. In our discussion of the literature, we analyze the promising neural components and propose future research directions. We also show the analogy between document retrieval and other retrieval tasks in which the ranked items are structured documents, answers, images, or videos.
Vision transformers are emerging as a powerful tool for solving computer vision problems. Recent techniques have also demonstrated the efficacy of transformers beyond the image domain for solving numerous video-related tasks. Among these, human action recognition is receiving special attention from the research community due to its widespread applications. This article provides the first comprehensive survey of vision transformer techniques for action recognition. We analyze and summarize the existing and emerging literature in this direction while highlighting the popular trends in adapting transformers for action recognition. Owing to their specialized application, we collectively refer to these methods as ``action transformers''. Our literature review provides a suitable taxonomy for action transformers based on their architecture, modality, and intended objective. Within the context of action transformers, we explore techniques for encoding spatio-temporal data, reducing dimensionality, constructing frame patches and spatio-temporal cubes, and various representation methods. We also investigate the optimization of spatio-temporal attention in transformer layers to handle longer sequences, typically by reducing the number of tokens in a single attention operation. Moreover, we investigate different network learning strategies, such as self-supervised and zero-shot learning, together with their associated losses for transformer-based action recognition. This survey also summarizes the progress made in evaluation-metric scores on important benchmarks for action transformers. Finally, it provides a discussion on the challenges, outlook, and future avenues for this research direction.
Deep learning techniques have led to remarkable breakthroughs in the field of generic object detection and have spawned many scene-understanding tasks in recent years. Scene graphs have been a focus of research because of their powerful semantic representation and applications to scene understanding. Scene Graph Generation (SGG) refers to the task of automatically mapping an image into a semantically structured scene graph, which requires correctly labeling the detected objects and their relationships. Although this is a challenging task, the community has proposed many SGG approaches and achieved good results. In this paper, we provide a comprehensive survey of the recent achievements in this field brought about by deep learning techniques. We review 138 representative works covering different input modalities, and systematically summarize existing image-based SGG methods from the perspective of feature extraction and fusion. We attempt to connect and systematize the existing visual relationship detection methods, and to summarize and explain the mechanisms and strategies of SGG in a comprehensive way. Finally, we conclude this survey with an in-depth discussion of the currently existing problems and future research directions. This survey will help readers better understand the current research status and ideas.
Recent years witnessed the breakthrough of face recognition (FR) with deep convolutional neural networks. Dozens of papers in the field of FR are published every year. Some of them have been applied in industry and play an important role in everyday life, such as device unlocking, mobile payment, and so on. This paper provides an introduction to face recognition, including its history, pipeline, algorithms based on conventional manually designed features or deep learning, mainstream training and evaluation datasets, and related applications. We have analyzed and compared as many state-of-the-art works as possible, and also carefully designed a set of experiments to examine the effect of backbone size and data distribution. This survey serves as material for the tutorial named The Practical Face Recognition Technology in the Industrial World at FG2023.
We propose a very fast frame-level model for anomaly detection in video, which learns to detect anomalies by distilling knowledge from multiple highly accurate object-level teacher models. To improve the fidelity of our student, we distill the low-resolution anomaly maps of the teachers by jointly applying standard and adversarial distillation, introducing an adversarial discriminator for each teacher to distinguish between target and generated anomaly maps. We conduct experiments on three benchmarks (Avenue, ShanghaiTech, UCSD Ped2), showing that our method is over 7 times faster than the fastest competing method, and between 28 and 62 times faster than object-centric models, while obtaining comparable results to recent methods. Our evaluation also indicates that our model achieves the best trade-off between speed and accuracy, due to its previously unheard-of speed of 1480 FPS. In addition, we carry out a comprehensive ablation study to justify our architectural design choices.
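A minimal sketch of the joint standard-plus-adversarial distillation objective (toy stand-in networks, an L1 reconstruction term and an assumed 0.1 weight for the adversarial term; not the paper's exact architecture or hyper-parameters): the student regresses the teacher's low-resolution anomaly map, while a per-teacher discriminator is trained to separate target maps from generated ones.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# toy stand-ins for the frame-level student and one per-teacher discriminator
student = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                        nn.Conv2d(16, 1, 3, padding=1))
disc = nn.Sequential(nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
                     nn.Flatten(), nn.Linear(16 * 32 * 32, 1))   # for 64x64 anomaly maps

opt_s = torch.optim.Adam(student.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-4)

frames = torch.randn(8, 3, 64, 64)        # input video frames
teacher_maps = torch.rand(8, 1, 64, 64)   # anomaly maps produced by one teacher

# discriminator step: separate target (teacher) maps from generated (student) maps
with torch.no_grad():
    fake = student(frames)
d_loss = F.binary_cross_entropy_with_logits(disc(teacher_maps), torch.ones(8, 1)) + \
         F.binary_cross_entropy_with_logits(disc(fake), torch.zeros(8, 1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# student step: standard distillation (L1 to the teacher map) + adversarial term
pred = student(frames)
s_loss = F.l1_loss(pred, teacher_maps) + \
         0.1 * F.binary_cross_entropy_with_logits(disc(pred), torch.ones(8, 1))
opt_s.zero_grad(); s_loss.backward(); opt_s.step()
print(float(d_loss), float(s_loss))
```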
Context-aware decision support in the operating room can foster surgical safety and efficiency by leveraging real-time feedback from surgical workflow analysis. Most existing works recognize surgical activities at a coarse-grained level, such as phases, steps or events, leaving out fine-grained interaction details about the surgical activity; yet those are needed for more helpful AI assistance in the operating room. Recognizing surgical actions as triplets of <instrument, verb, target> combination delivers comprehensive details about the activities taking place in surgical videos. This paper presents CholecTriplet2021: an endoscopic vision challenge organized at MICCAI 2021 for the recognition of surgical action triplets in laparoscopic videos. The challenge granted private access to the large-scale CholecT50 dataset, which is annotated with action triplet information. In this paper, we present the challenge setup and assessment of the state-of-the-art deep learning methods proposed by the participants during the challenge. A total of 4 baseline methods from the challenge organizers and 19 new deep learning algorithms by competing teams are presented to recognize surgical action triplets directly from surgical videos, achieving mean average precision (mAP) ranging from 4.2% to 38.1%. This study also analyzes the significance of the results obtained by the presented approaches, performs a thorough methodological comparison between them, in-depth result analysis, and proposes a novel ensemble method for enhanced recognition. Our analysis shows that surgical workflow analysis is not yet solved, and also highlights interesting directions for future research on fine-grained surgical activity recognition which is of utmost importance for the development of AI in surgery.
Fine-grained image analysis (FGIA) is a longstanding and fundamental problem in computer vision and pattern recognition, and underpins a diverse set of real-world applications. The task of FGIA targets analyzing visual objects from subordinate categories, e.g., species of birds or models of cars. The small inter-class and large intra-class variation inherent to fine-grained analysis makes it a challenging problem. Capitalizing on advances in deep learning, in recent years we have witnessed remarkable progress in deep-learning-powered FGIA. In this paper, we present a systematic survey of these advances, where we attempt to re-define and broaden the field of FGIA by consolidating two fundamental fine-grained research areas -- fine-grained image recognition and fine-grained image retrieval. In addition, we also review other key issues of FGIA, such as publicly available benchmark datasets and related domain-specific applications. We conclude by highlighting several research directions and open problems that need further exploration by the community.
In recent years, the robotics community has extensively examined methods for the place recognition task within the scope of simultaneous localization and mapping applications. This article proposes an appearance-based loop closure detection pipeline named FILD++ (Fast and Incremental Loop closure Detection). First, the system is fed with consecutive images, and global and local deep features are extracted by passing each image twice through a single convolutional neural network. Then, a hierarchical navigable small-world graph is built incrementally from the computed global features, forming a visual database that represents the robot's traversed path. Finally, the query image grabbed at each time step is used to retrieve similar locations on the traversed route. An image-to-image pairing follows, which exploits the local features to evaluate the spatial information. Thus, in contrast to our previous work (FILD), the proposed article adopts a single network for both global and local feature extraction, and an exhaustive-search verification process over the generated deep local features, avoiding the utilization of hash codes. Extensive experiments on eleven public datasets demonstrate the system's high performance (achieving the highest recall score on eight of them) and low execution time (22.05 ms on average on New College, the largest dataset containing 52,480 images) compared with other state-of-the-art approaches.
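The incremental visual database over global features can be illustrated with the hnswlib library (descriptor dimensionality, HNSW parameters and the random stand-in descriptor are assumptions; the geometric verification on local deep features is only indicated by a comment):

```python
import numpy as np
import hnswlib

dim = 512                                    # assumed global-descriptor size
index = hnswlib.Index(space="l2", dim=dim)
index.init_index(max_elements=60000, ef_construction=200, M=16)
index.set_ef(50)                             # query-time accuracy/speed trade-off

rng = np.random.default_rng(0)

def global_descriptor(frame_id):
    """Stand-in for the CNN global feature of a frame."""
    return rng.standard_normal(dim).astype(np.float32)

# incremental construction: add each incoming frame's global feature
for frame_id in range(1000):
    index.add_items(global_descriptor(frame_id)[None, :], np.array([frame_id]))

# query: retrieve loop-closure candidates for the current frame
query = global_descriptor(1000)
labels, dists = index.knn_query(query[None, :], k=5)
print(labels[0], dists[0])
# candidates would then be verified with local deep features (spatial check)
```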
Marine ecosystems and their fish habitats are becoming increasingly important due to their integral role in providing a valuable food source and their conservation outcomes. Because of their remote and difficult-to-access nature, marine environments and fish habitats are often monitored using underwater cameras. These cameras generate a massive volume of digital data, which cannot be efficiently analyzed by current manual processing methods that involve a human observer. DL is a cutting-edge AI technology that has demonstrated unprecedented performance in analyzing visual data. Despite its application to a myriad of domains, its use in underwater fish habitat monitoring remains under-explored. In this paper, we provide a tutorial that covers the key concepts of DL and helps the reader gain a high-level understanding of how DL works. The tutorial also explains a step-by-step procedure for developing DL algorithms for challenging applications such as underwater fish monitoring. In addition, we provide a comprehensive survey of key deep learning techniques for fish habitat monitoring, including classification, counting, localization, and segmentation. Furthermore, we survey publicly available underwater fish datasets and compare various DL techniques in the underwater fish monitoring domain. We also discuss some of the challenges and opportunities in the emerging field of deep learning for fish habitat processing. This paper is written to serve as a tutorial for marine scientists who would like to grasp a high-level understanding of DL, develop it for their applications by following our step-by-step tutorial, and see how it is evolving to facilitate their research efforts. At the same time, it is suitable for computer scientists who would like to survey the state-of-the-art DL-based methodologies for fish habitat monitoring.
Accurate and robust visual object tracking is one of the most challenging and fundamental computer vision problems. It entails estimating the trajectory of a target in an image sequence, given only its initial location and segmentation, or a rough approximation thereof in the form of a bounding box. Discriminative Correlation Filters (DCFs) and deep Siamese Networks (SNs) have emerged as the dominant tracking paradigms and have led to significant progress. Following the rapid evolution of visual object tracking over the past decade, this survey presents a systematic and thorough review of more than 90 DCF and Siamese trackers, based on nine tracking benchmarks. First, we present the background theory of the core formulations of both DCF and Siamese tracking. We then distinguish and comprehensively review the shared as well as the specific open research challenges in these two tracking paradigms. Furthermore, we thoroughly analyze the performance of DCF and Siamese trackers on the nine benchmarks, covering different experimental aspects of visual tracking: datasets, evaluation metrics, performance, and speed comparisons. The survey concludes by presenting recommendations and suggestions for the distinguished open challenges based on our analysis.
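The DCF core formulation reviewed in the survey has a closed-form solution in the Fourier domain; below is a single-channel numpy sketch (Gaussian label width and regularization value are assumed, and cosine windowing, multi-channel features and model updates are omitted):

```python
import numpy as np

def train_dcf(x, y, lam=1e-2):
    """Closed-form single-channel DCF (MOSSE-style): given a training patch x
    and a desired Gaussian response y, solve the ridge regression in the
    Fourier domain and return the conjugate filter H*.  A didactic sketch,
    not a full tracker."""
    F = np.fft.fft2(x)
    G = np.fft.fft2(y)
    return (G * np.conj(F)) / (F * np.conj(F) + lam)

def detect(h_conj, z):
    """Correlation response of the filter on a search patch z;
    the target is located at the peak of the response map."""
    return np.real(np.fft.ifft2(h_conj * np.fft.fft2(z)))

# toy usage: Gaussian label centred on the target position
size = 64
yy, xx = np.mgrid[:size, :size]
label = np.exp(-((yy - size // 2) ** 2 + (xx - size // 2) ** 2) / (2 * 2.0 ** 2))
patch = np.random.rand(size, size)
h = train_dcf(patch, label)
resp = detect(h, patch)
print(np.unravel_index(resp.argmax(), resp.shape))   # approximately (32, 32) on the training patch
```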
We propose a video feature representation learning framework called STAR-GNN, which applies a pluggable graph neural network component on a multi-scale lattice feature graph. The essence of STAR-GNN is to exploit the temporal dynamics and spatial contents as well as the visual connections between regions at different scales within the frames. It models a video with a lattice feature graph in which the nodes represent regions of different granularity and the weighted edges represent spatial and temporal links. The contextual nodes are aggregated simultaneously by graph neural networks with parameters trained with a retrieval triplet loss. In the experiments, we show that STAR-GNN effectively implements a dynamic attention mechanism over video frame sequences, emphasizing the dynamic and semantically rich content in the video while remaining robust to noise and redundancy. Empirical results show that STAR-GNN achieves state-of-the-art performance for content-based video retrieval.
Vision-based human activity recognition has emerged as one of the essential research areas in video analytics. Over the last decade, numerous advanced deep learning algorithms have been introduced to recognize complex human actions in video streams. These deep learning algorithms have shown impressive performance on the human activity recognition task. However, these newly introduced methods focus either exclusively on model performance or on the effectiveness of these models in terms of computational efficiency and robustness, resulting in a biased trade-off in their proposals for addressing challenging human activity recognition problems. To overcome the limitations of contemporary deep learning models for human activity recognition, this paper presents a computationally efficient yet generic spatial-temporal cascaded framework that exploits deep discriminative spatial and temporal features for human activity recognition. For efficient representation of human actions, we propose an efficient dual-attention convolutional neural network (CNN) architecture that leverages a unified channel-spatial attention mechanism to extract human-centric salient features in video frames. The dual channel-spatial attention layers, together with the convolutional layers, learn to be more attentive in the spatial receptive fields that contain objects within the feature maps. The extracted discriminative salient features are then forwarded to a stacked bidirectional gated recurrent unit (Bi-GRU) for long-term temporal modeling and recognition of human actions using both forward and backward pass gradient learning. Extensive experiments are conducted, and the obtained results show that the proposed framework achieves an improvement in execution time of up to 167 times compared with most contemporary action recognition methods.
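The cascaded design can be sketched compactly (layer sizes, attention form and pooling choices are assumptions, not the authors' exact architecture): a per-frame CNN with channel and spatial attention produces one feature vector per frame, and a stacked bidirectional GRU aggregates the sequence before classification.

```python
import torch
import torch.nn as nn

class DualAttention(nn.Module):
    """Channel attention (squeeze-excite style) followed by spatial attention."""
    def __init__(self, channels):
        super().__init__()
        self.channel = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                     nn.Linear(channels, channels), nn.Sigmoid())
        self.spatial = nn.Sequential(nn.Conv2d(channels, 1, 1), nn.Sigmoid())

    def forward(self, x):                               # x: (B, C, H, W)
        x = x * self.channel(x)[:, :, None, None]       # re-weight channels
        return x * self.spatial(x)                      # re-weight spatial locations

class ActionRecognizer(nn.Module):
    """Per-frame attentive CNN features -> stacked Bi-GRU -> action logits."""
    def __init__(self, num_classes=10, channels=32):
        super().__init__()
        self.backbone = nn.Sequential(nn.Conv2d(3, channels, 3, stride=2, padding=1), nn.ReLU(),
                                      DualAttention(channels), nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.temporal = nn.GRU(channels, 64, num_layers=2, bidirectional=True, batch_first=True)
        self.head = nn.Linear(128, num_classes)

    def forward(self, clip):                            # clip: (B, T, 3, H, W)
        B, T = clip.shape[:2]
        feats = self.backbone(clip.flatten(0, 1)).view(B, T, -1)   # (B, T, channels)
        out, _ = self.temporal(feats)                   # (B, T, 128)
        return self.head(out.mean(dim=1))               # average over time -> logits

print(ActionRecognizer()(torch.randn(2, 8, 3, 64, 64)).shape)      # torch.Size([2, 10])
```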