Most deep-learning-based continuous sign language recognition (CSLR) models share a similar backbone consisting of a visual module, a sequential module, and an alignment module. However, due to limited training samples, a connectionist temporal classification loss may not train such CSLR backbones sufficiently. In this work, we propose three auxiliary tasks to enhance CSLR backbones. The first task enhances the visual module, which is sensitive to the insufficient-training problem, from the perspective of consistency. Specifically, since the information in sign languages is mainly conveyed by signers' facial expressions and hand movements, a keypoint-guided spatial attention module is developed to enforce the visual module to focus on informative regions, i.e., spatial attention consistency. Second, noticing that the output features of both the visual and sequential modules represent the same sentence, to better exploit the backbone's power, a sentence embedding consistency constraint is imposed between the visual and sequential modules to enhance the representation power of both features. We term the CSLR model trained with the above auxiliary tasks consistency-enhanced CSLR; it performs well on signer-dependent datasets, in which all signers appear during both training and testing. To make it more robust in the signer-independent setting, a signer removal module based on feature disentanglement is further proposed to remove signer information from the backbone. Extensive ablation studies are conducted to validate the effectiveness of these auxiliary tasks. Remarkably, with a transformer-based backbone, our model achieves state-of-the-art or competitive performance on five benchmarks: PHOENIX-2014, PHOENIX-2014-T, PHOENIX-2014-SI, CSL, and CSL-Daily.
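As a hedged illustration of the spatial attention consistency idea, the PyTorch sketch below (all module and variable names are assumptions, not the paper's implementation) predicts an attention map from visual features and penalizes its deviation from a keypoint-derived heatmap covering the face and hands:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class KeypointGuidedAttention(nn.Module):
    """Minimal sketch of a keypoint-guided spatial attention module.

    A 1x1 conv predicts a spatial attention map from the visual features;
    an auxiliary consistency loss pulls that map toward a target heatmap
    built from face/hand keypoints. All names are illustrative.
    """
    def __init__(self, channels: int):
        super().__init__()
        self.att = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, feats, kp_heatmap=None):
        # feats: (B, C, H, W); kp_heatmap: (B, 1, H, W), values in [0, 1]
        att = torch.sigmoid(self.att(feats))   # predicted attention map
        out = feats * att                      # re-weight the features
        loss = None
        if kp_heatmap is not None:
            # spatial attention consistency: match the keypoint prior
            loss = F.mse_loss(att, kp_heatmap)
        return out, loss

feats = torch.randn(2, 64, 28, 28)
heat = torch.rand(2, 1, 28, 28)
out, aux_loss = KeypointGuidedAttention(64)(feats, heat)
print(out.shape, aux_loss.item())
```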
Computer vision tasks can benefit from estimating salient object regions and the interactions between those regions. Identifying object regions typically involves leveraging pretrained models to perform object detection, object segmentation, and/or object pose estimation. However, this is infeasible in practice for the following reasons: 1) the object categories in the pretrained models' training datasets may not cover all the object categories needed for general computer vision tasks; 2) the domain gap between the pretrained models' training datasets and the target task's dataset may hurt performance; and 3) the bias and variance present in the pretrained models may leak into the target task, leading to an inadvertently biased target model. To overcome these drawbacks, we propose to exploit the common rationale that a sequence of video frames captures a set of common objects and the interactions between them; a notion of co-segmentation among the video frame features can therefore equip the model with the ability to automatically focus on salient regions and improve the underlying task's performance in an end-to-end manner. To this end, we propose a generic module called the Co-Segmentation Activation Module (COSAM), which can be plugged into any CNN to promote co-segmentation-based attention among a sequence of video frame features. We demonstrate the application of COSAM on three video-based tasks, namely 1) video-based person re-ID, 2) video captioning, and 3) video action classification, and show that COSAM is able to capture salient regions in the video frames, leading to notable performance improvements along with interpretable attention maps.
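A minimal sketch of the co-segmentation attention idea, assuming per-frame CNN features and using correlation with the other frames' mean descriptor as the saliency signal (the actual COSAM design may differ):

```python
import torch

def cosam_attention(feats):
    """Hedged sketch of co-segmentation-style attention (COSAM-like).

    Spatial positions that correlate strongly with the average descriptor
    of the *other* frames are emphasized, so regions common across the
    sequence (the salient objects) receive high weight.
    feats: (T, C, H, W) features of T frames from one video, T > 1.
    """
    T, C, H, W = feats.shape
    flat = feats.view(T, C, H * W)                      # (T, C, HW)
    # mean descriptor of all frames except the current one
    total = flat.mean(dim=2).sum(dim=0, keepdim=True)   # (1, C)
    others = (total - flat.mean(dim=2)) / (T - 1)       # (T, C)
    # correlation of every spatial position with the co-descriptor
    corr = torch.einsum('tcs,tc->ts', flat, others)     # (T, HW)
    # softmax over space; rescale so the average weight is 1
    att = torch.softmax(corr, dim=1).view(T, 1, H, W) * (H * W)
    return feats * att

x = torch.randn(8, 64, 14, 14)   # 8 frames
print(cosam_attention(x).shape)
```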
Hand and face play an important role in expressing sign language, and their features are often leveraged specifically to improve system performance. However, to effectively extract visual representations and capture trajectories for hands and face, previous methods usually incur high computational costs and increased training complexity: they employ extra heavy pose-estimation networks to locate human body keypoints or rely on additional pre-extracted heatmaps for supervision. To relieve this problem, we propose a self-emphasizing network (SEN) that emphasizes informative spatial regions in a self-motivated way, with few extra computations and without additional expensive supervision. Specifically, SEN first employs a lightweight subnetwork to incorporate local spatial-temporal features to identify informative regions, and then dynamically augments the original features via attention maps. We also observe that not all frames contribute equally to recognition, so we present a temporal self-emphasizing module to adaptively emphasize discriminative frames and suppress redundant ones. A comprehensive comparison with previous methods equipped with hand and face features demonstrates the superiority of our method, even though those methods require huge computations and expensive extra supervision. Remarkably, with few extra computations, SEN achieves new state-of-the-art accuracy on four large-scale datasets: PHOENIX14, PHOENIX14-T, CSL-Daily, and CSL. Visualizations verify the effect of SEN in emphasizing informative spatial and temporal features. Code is available at https://github.com/hulianyuyy/SEN_CSLR
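The spatial self-emphasizing idea can be sketched as follows; the depthwise convolution and the residual-style augmentation are assumptions chosen to keep the extra computation small, in the spirit of SEN rather than its exact implementation:

```python
import torch
import torch.nn as nn

class SpatialSelfEmphasizing(nn.Module):
    """Sketch of a self-emphasizing block in the spirit of SEN.

    A lightweight depthwise spatial-temporal convolution aggregates local
    context to score each position, and the original features are
    augmented by the resulting attention map. Layer choices are assumed.
    """
    def __init__(self, channels: int):
        super().__init__()
        # depthwise 3x3x3 conv: few extra parameters and computations
        self.local = nn.Conv3d(channels, channels, kernel_size=3,
                               padding=1, groups=channels)
        self.score = nn.Conv3d(channels, 1, kernel_size=1)

    def forward(self, x):
        # x: (B, C, T, H, W)
        att = torch.sigmoid(self.score(self.local(x)))  # (B, 1, T, H, W)
        # emphasize informative regions while keeping an identity path
        return x * (1.0 + att)

x = torch.randn(2, 32, 16, 28, 28)
print(SpatialSelfEmphasizing(32)(x).shape)
```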
Continuous sign language recognition (CSLR) is a challenging research task due to the lack of accurate annotations for the temporal order of sign language data. The recently popular approach is a hybrid "CNN+RNN" model for CSLR. However, when extracting temporal features, most of these methods use a fixed temporal receptive field and cannot extract the temporal features of each sign language word well. To obtain more accurate temporal features, this paper proposes a Multi-Scale Temporal Network (MSTNet). The network consists of three main parts. A ResNet and two fully connected (FC) layers constitute the frame-wise feature extraction part. The time-wise feature extraction part performs temporal feature learning by first extracting temporal features at different scales with the proposed multi-scale temporal block (MST-block) to improve the temporal modeling capability, and then further encoding the multi-scale temporal features through Transformer modules to obtain more accurate temporal features. Finally, the proposed multi-level connectionist temporal classification (CTC) loss is used for training to obtain the recognition results. The multi-level CTC loss enables better learning and updating of the shallow network parameters in the CNN; it introduces no additional parameters and can be flexibly embedded in other models. Experimental results on two publicly available datasets show that our method can effectively extract sign language features in an end-to-end manner without any prior knowledge, improving the accuracy of CSLR and achieving competitive results.
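A hedged sketch of a multi-scale temporal block: parallel dilated 1D convolutions give different temporal receptive fields, which is the property the MST-block is described to provide (the kernel sizes, dilations, and fusion layer are illustrative assumptions):

```python
import torch
import torch.nn as nn

class MultiScaleTemporalBlock(nn.Module):
    """Sketch of a multi-scale temporal block (MST-block-like).

    Parallel 1D convolutions with different dilations give each branch a
    different temporal receptive field, so short and long signs are both
    covered; the branch outputs are fused by a 1x1 convolution.
    """
    def __init__(self, channels: int, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv1d(channels, channels, kernel_size=3,
                      padding=d, dilation=d)
            for d in dilations
        ])
        self.fuse = nn.Conv1d(channels * len(dilations), channels, 1)

    def forward(self, x):
        # x: (B, C, T) frame-wise features over time
        out = torch.cat([b(x) for b in self.branches], dim=1)
        return self.fuse(torch.relu(out))

x = torch.randn(2, 512, 64)
print(MultiScaleTemporalBlock(512)(x).shape)  # (2, 512, 64)
```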
Humans can naturally and effectively find salient regions in complex scenes. Motivated by this observation, attention mechanisms were introduced into computer vision with the aim of mimicking this aspect of the human visual system. Such an attention mechanism can be regarded as a dynamic weight adjustment process based on features of the input image. Attention mechanisms have achieved great success in many visual tasks, including image classification, object detection, semantic segmentation, video understanding, image generation, 3D vision, multi-modal tasks, and self-supervised learning. In this survey, we provide a comprehensive review of various attention mechanisms in computer vision and categorize them into channel attention, spatial attention, temporal attention, and branch attention. A related repository, https://github.com/MenghaoGuo/Awesome-Vision-Attentions, is dedicated to collecting related work. We also suggest future directions for attention mechanism research.
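As a concrete example of the channel-attention category the survey describes, here is the classic squeeze-and-excitation block, a minimal instance of feature-based dynamic weight adjustment:

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Channel attention in its classic squeeze-and-excitation form:
    global pooling summarizes each channel, a small MLP predicts
    per-channel weights, and the input is rescaled accordingly."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        # x: (B, C, H, W)
        w = self.fc(x.mean(dim=(2, 3)))           # squeeze, then excite
        return x * w.unsqueeze(-1).unsqueeze(-1)  # per-channel re-weighting

print(SEBlock(64)(torch.randn(2, 64, 32, 32)).shape)
```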
Astounding results from Transformer models on natural language tasks have intrigued the vision community to study their application to computer vision problems. Among their salient benefits, Transformers enable modeling long dependencies between input sequence elements and support parallel processing of sequences, in contrast to recurrent networks such as long short-term memory (LSTM) networks. Different from convolutional networks, Transformers require minimal inductive biases in their design and are naturally suited as set functions. Furthermore, the straightforward design of Transformers allows processing multiple modalities (e.g., images, videos, text and speech) with similar processing blocks, and demonstrates excellent scalability to very large capacity networks and huge datasets. These strengths have led to exciting progress on a number of vision tasks using Transformer networks. This survey aims to provide a comprehensive overview of the Transformer models in the computer vision discipline. We start with an introduction to the fundamental concepts behind the success of Transformers, i.e., self-attention, large-scale pre-training, and bidirectional feature encoding. We then cover extensive applications of Transformers in vision, including popular recognition tasks (e.g., image classification, object detection, action recognition, and segmentation), generative modeling, multi-modal tasks (e.g., visual question answering, visual reasoning, and visual grounding), video processing (e.g., activity recognition, video forecasting), low-level vision (e.g., image super-resolution, image enhancement, and colorization) and 3D analysis (e.g., point cloud classification and segmentation). We compare the respective advantages and limitations of popular techniques, both in architectural design and experimental value. Finally, we provide an analysis of open research directions and possible future works. We hope this effort will ignite further interest in the community to solve current challenges towards the application of transformer models in computer vision.
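The core operation behind these models is self-attention; a minimal single-head version (illustrative, without the multi-head, masking, and normalization machinery of production Transformers) looks like this:

```python
import torch
import torch.nn as nn

class SelfAttention(nn.Module):
    """Minimal single-head self-attention: every sequence element
    attends to every other, so long-range dependencies cost one layer,
    not many, at quadratic cost in sequence length."""
    def __init__(self, dim: int):
        super().__init__()
        self.qkv = nn.Linear(dim, 3 * dim)
        self.scale = dim ** -0.5

    def forward(self, x):
        # x: (B, N, D) -- N sequence elements (e.g., image patches)
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        att = torch.softmax(q @ k.transpose(-2, -1) * self.scale, dim=-1)
        return att @ v

print(SelfAttention(64)(torch.randn(2, 196, 64)).shape)
```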
In this paper, we build upon the weakly supervised generation mechanism of intermediate attention maps in any convolutional neural network, and disclose the effectiveness of attention modules more straightforwardly in order to fully exploit their potential. Given an existing neural network equipped with arbitrary attention modules, we introduce a meta-critic network to evaluate the quality of the attention maps in the main network. Due to the discreteness of our designed reward, the proposed learning method is arranged in a reinforcement learning setting, where the attention actors and the recurrent critics are alternately optimized to provide instant critique and revision of the temporary attention representations, hence the name Deep REinforced Attention Learning (DREAL). It can be applied universally to network architectures with different types of attention modules, and promotes their expressive power by maximizing the relative gain each individual attention module contributes to the final recognition performance, as demonstrated by extensive experiments on both category and instance recognition benchmarks.
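A hedged REINFORCE-style sketch of the actor side of such a scheme; the recurrent critic, the actual reward (the measured relative gain in recognition performance), and all names are assumptions made for illustration:

```python
import torch
import torch.nn as nn

class AttentionActor(nn.Module):
    """Samples a stochastic spatial attention map (the 'actor')."""
    def __init__(self, channels: int):
        super().__init__()
        self.score = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, feats):
        p = torch.sigmoid(self.score(feats))   # Bernoulli parameters
        sample = torch.bernoulli(p)            # discrete attention sample
        logp = (sample * torch.log(p + 1e-6) +
                (1 - sample) * torch.log(1 - p + 1e-6)).mean(dim=(1, 2, 3))
        return feats * sample, logp

# In DREAL-like training, the reward would be the relative gain of the
# final recognition performance with this attention, with the critic as
# a variance-reducing baseline (both stubbed out here).
feats = torch.randn(4, 64, 14, 14)
attended, logp = AttentionActor(64)(feats)
reward = torch.randn(4)                 # placeholder for measured gain
policy_loss = -(reward * logp).mean()   # REINFORCE update for the actor
policy_loss.backward()
print(policy_loss.item())
```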
Recent years have witnessed breakthroughs in face recognition (FR) with deep convolutional neural networks. Dozens of papers in the field are published every year. Some have been applied in industry and play an important role in everyday life, such as device unlocking and mobile payment. This paper provides an introduction to face recognition, including its history, pipeline, algorithms based on conventional hand-designed features or deep learning, mainstream training and evaluation datasets, and related applications. We analyze and compare as many state-of-the-art works as possible, and also carefully design a set of experiments to study the effect of backbone size and data distribution. This survey is a companion to the tutorial "The Practical Face Recognition Technology in the Industrial World" at FG2023.
Sign language recognition (SLR) aims to overcome the communication barrier for people who are deaf or hard of hearing. Most existing approaches can be divided into two lines, i.e., skeleton-based and RGB-based methods, but both lines have their limitations. RGB-based approaches usually overlook the fine-grained hand structure, while skeleton-based methods do not take facial expressions into account. In an attempt to address both limitations, we propose a new framework named Spatial-temporal Part-aware network (StepNet), based on RGB parts. As the name implies, StepNet consists of two modules: Part-level Spatial Modeling and Part-level Temporal Modeling. In particular, without using any keypoint-level annotations, Part-level Spatial Modeling implicitly captures appearance-based properties, such as hands and faces, in the feature space. Part-level Temporal Modeling, in turn, captures pertinent properties over time by implicitly mining the long-short term context. Extensive experiments show that our StepNet, thanks to these spatial-temporal modules, achieves competitive Top-1 per-instance accuracy on three widely-used SLR benchmarks, i.e., 56.89% on WLASL, 77.2% on NMFs-CSL, and 77.1% on BOBSL. Moreover, the proposed method is compatible with optical flow input and can yield higher performance if fused. We hope that this work can serve as a preliminary step for people with deafness.
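A minimal sketch of part-level spatial modeling under the assumption that parts are approximated by horizontal stripes of the feature map (no keypoint annotations), which is one plausible reading of the described module:

```python
import torch
import torch.nn as nn

class PartLevelSpatialPooling(nn.Module):
    """Hedged sketch in the spirit of StepNet's spatial module: the
    feature map is split into horizontal stripes (roughly face / hands /
    body for a standing signer) and each part is pooled and embedded
    separately. The 3-stripe split is an assumption for illustration."""
    def __init__(self, channels: int, num_parts: int = 3):
        super().__init__()
        self.num_parts = num_parts
        self.embed = nn.ModuleList(
            [nn.Linear(channels, channels) for _ in range(num_parts)]
        )

    def forward(self, x):
        # x: (B, C, H, W)
        parts = x.chunk(self.num_parts, dim=2)   # split along height
        feats = [e(p.mean(dim=(2, 3))) for e, p in zip(self.embed, parts)]
        return torch.stack(feats, dim=1)          # (B, P, C) part features

print(PartLevelSpatialPooling(256)(torch.randn(2, 256, 24, 12)).shape)
```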
Active speaker detection and speech enhancement have become increasingly attractive topics in audio-visual scene analysis. Architectures independently designed according to the characteristics of each task have been widely adopted for each of them. This may lead to task-specific learned representations and inevitably results in a lack of generalization ability of the features for multi-modal modeling. Recent studies have shown that establishing cross-modal relationships between the auditory and visual streams is a promising solution to the challenge of audio-visual multi-task learning. Therefore, motivated by bridging the multi-modal associations in audio-visual tasks, a unified framework is proposed in this study to achieve target speaker detection and speech enhancement by jointly learning an audio-visual model.
In recent years, the Transformer architecture has shown its superiority in the video-based person re-identification task. Inspired by video representation learning, these methods mainly focus on designing modules to extract informative spatial and temporal features. However, they are still limited in extracting local attributes and global identity information, which are critical for the person re-identification task. In this paper, we propose a novel Multi-Stage Spatial-Temporal Aggregation Transformer (MSTAT) with two newly designed proxy embedding modules to address the above issue. Specifically, MSTAT consists of three stages to encode the attribute-associated, the identity-associated, and the attribute-identity-associated information from the video clips, respectively, achieving a holistic perception of the input person. We combine the outputs of all the stages for the final identification. In practice, to save computational cost, Spatial-Temporal Aggregation (STA) modules are first adopted in each stage to conduct the self-attention operations along the spatial and temporal dimensions separately. We further introduce the Attribute-Aware and Identity-Aware Proxy embedding modules (AAP and IAP) to extract informative and discriminative feature representations at different stages. All of them are realized by employing newly designed self-attention operations with specific meanings. Moreover, temporal patch shuffling is introduced to further improve the robustness of the model. Extensive experimental results demonstrate the effectiveness of the proposed modules in extracting informative and discriminative information from videos, and illustrate that MSTAT can achieve state-of-the-art accuracy on various standard benchmarks.
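Temporal patch shuffling, as described, can be sketched as a simple training-time augmentation; the shuffling ratio and the exact granularity are assumptions:

```python
import torch

def temporal_patch_shuffle(tokens, p: float = 0.1):
    """Sketch of temporal patch shuffling: a random subset of patch
    positions has its features shuffled along the temporal axis, so the
    model cannot over-rely on strict frame order. The ratio p is an
    assumed hyperparameter.
    tokens: (B, T, N, D) -- T frames, N patches per frame.
    """
    B, T, N, D = tokens.shape
    out = tokens.clone()
    num = max(1, int(p * N))
    for b in range(B):
        idx = torch.randperm(N)[:num]   # which patch positions to shuffle
        perm = torch.randperm(T)        # new temporal order
        out[b][:, idx] = tokens[b][perm][:, idx]
    return out

x = torch.randn(2, 8, 196, 128)
print(temporal_patch_shuffle(x).shape)
```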
The goal of this paper is to learn strong lip reading models that can recognise speech in silent videos. Most prior works deal with the open-set visual speech recognition problem by adapting existing automatic speech recognition techniques on top of trivially pooled visual features. Instead, in this paper we focus on the unique challenges encountered in lip reading and propose tailored solutions. To this end, we make the following contributions: (1) we propose an attention-based pooling mechanism to aggregate visual speech representations; (2) we use sub-word units for lip reading for the first time, and show that this allows us to better model the ambiguities of the task; (3) we propose a model for visual speech detection (VSD), trained on top of the lip reading network. Following the above, we obtain state-of-the-art results on the LRS2 and LRS3 benchmarks when training on public datasets, and even surpass models trained on large-scale industrial datasets while using an order of magnitude less data. Our best model achieves a word error rate of 22.6% on the LRS2 dataset, an unprecedented performance for lip reading models, significantly narrowing the gap between lip reading and automatic speech recognition. Moreover, on the AVA-ActiveSpeaker benchmark, our VSD model surpasses all visual-only baselines and even outperforms several recent audio-visual methods.
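Contribution (1), attention-based pooling of visual speech representations, admits a very small sketch: a learned query scores each time step and the sequence is collapsed into a weighted sum (a generic attentive-pooling layer, not the paper's exact head):

```python
import torch
import torch.nn as nn

class AttentivePooling(nn.Module):
    """Minimal attention-based pooling for aggregating a sequence of
    visual speech features into one vector: a learned query scores every
    time step and the output is the weighted sum."""
    def __init__(self, dim: int):
        super().__init__()
        self.query = nn.Parameter(torch.randn(dim))

    def forward(self, x):
        # x: (B, T, D) frame-level visual features
        scores = x @ self.query / x.shape[-1] ** 0.5   # (B, T)
        w = torch.softmax(scores, dim=1)
        return (w.unsqueeze(-1) * x).sum(dim=1)        # (B, D)

print(AttentivePooling(256)(torch.randn(2, 75, 256)).shape)
```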
Multimodal transformers exhibit high capacity and flexibility in aligning images and text for visual grounding. However, encoder-only grounding frameworks (e.g., TransVG) suffer from heavy computation due to the quadratic time complexity of the self-attention operation. To address this issue, we present a new multimodal transformer architecture, coined Dynamic MDETR, by decoupling the whole grounding process into encoding and decoding phases. The key observation is that there is high spatial redundancy in images. Thus, we devise a new dynamic multimodal transformer decoder that exploits this sparsity prior to speed up the visual grounding process. Specifically, our dynamic decoder is composed of a 2D adaptive sampling module and a text-guided decoding module. The sampling module selects informative patches by predicting offsets with respect to a reference point, while the decoding module extracts the grounded object information by performing cross-attention between the image features and text features. These two modules are stacked to gradually bridge the modality gap and iteratively refine the reference point of the grounded object, eventually realizing the objective of visual grounding. Extensive experiments on five benchmarks demonstrate that our proposed Dynamic MDETR achieves a competitive trade-off between computation and accuracy. Notably, using only 9% of the feature points in the decoder, we can reduce the GFLOPs of the multimodal transformer by ~44%, yet still attain higher accuracy than the encoder-only counterpart. In addition, to verify its generalization ability and scale up our Dynamic MDETR, we build the first one-stage CLIP-empowered visual grounding framework and achieve state-of-the-art performance on these benchmarks.
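A hedged sketch of the 2D adaptive sampling module: offsets are predicted around a reference point and only those positions are gathered with grid_sample, so downstream cross-attention sees a sparse feature subset (shapes, the offset scale, and the number of points are assumptions):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveSampler(nn.Module):
    """Sketch of 2D adaptive sampling: from a query embedding, predict
    offsets around a reference point and gather only those image
    features, instead of attending over the full map."""
    def __init__(self, dim: int, num_points: int = 9):
        super().__init__()
        self.offsets = nn.Linear(dim, num_points * 2)
        self.num_points = num_points

    def forward(self, feat, query, ref_point):
        # feat: (B, C, H, W); query: (B, D); ref_point: (B, 2) in [-1, 1]
        B = feat.shape[0]
        off = self.offsets(query).view(B, self.num_points, 2).tanh()
        grid = (ref_point.unsqueeze(1) + 0.2 * off).clamp(-1, 1)
        grid = grid.view(B, self.num_points, 1, 2)     # sampling grid
        sampled = F.grid_sample(feat, grid, align_corners=False)
        return sampled.squeeze(-1).transpose(1, 2)     # (B, P, C)

feat = torch.randn(2, 256, 20, 20)
query = torch.randn(2, 256)
ref = torch.zeros(2, 2)
print(AdaptiveSampler(256)(feat, query, ref).shape)
```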
Objects are usually associated with multiple attributes, and these attributes often exhibit high correlations. Modeling the complex relationships among attributes poses a great challenge for multi-attribute learning. This paper proposes a simple yet generic framework named Label2Label to exploit complex attribute correlations. Label2Label is the first attempt to approach multi-attribute prediction from the perspective of language modeling. Specifically, it treats each attribute label as a "word" describing the sample. Since each sample is annotated with multiple attribute labels, these "words" naturally form an unordered but meaningful "sentence" that depicts the semantic information of the corresponding sample. Inspired by the remarkable success of pre-trained language models in NLP, Label2Label introduces an image-conditioned masked language model, which randomly masks some "word" tokens in the label "sentence" and aims to recover them based on the context conveyed by the masked sentence and the image features. Our intuition is that if a neural network can infer the missing attributes based on the context and the remaining attribute hints, then the attribute relations of the instances are well grasped. Label2Label is conceptually simple and empirically powerful. Without incorporating task-specific prior knowledge or highly specialized network designs, our approach achieves state-of-the-art results on three different multi-attribute learning tasks, compared to highly customized domain-specific methods. Code is available at https://github.com/li-wanhua/label2label.
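The image-conditioned masked language model over attribute "words" can be sketched as below; treating each (attribute, value) pair as a token, adding the image feature as conditioning, and the encoder configuration are illustrative assumptions:

```python
import torch
import torch.nn as nn

class MaskedAttributeModel(nn.Module):
    """Sketch of Label2Label's central idea: treat each attribute label
    as a 'word', randomly mask some, and recover the masked ones from
    the remaining labels plus the image feature."""
    def __init__(self, num_attrs: int, dim: int = 128):
        super().__init__()
        # one token per (attribute, value in {0,1}) pair, plus [MASK]
        self.tok = nn.Embedding(num_attrs * 2 + 1, dim)
        self.enc = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True), 2)
        self.head = nn.Linear(dim, 1)  # predict the masked attribute bit

    def forward(self, labels, img_feat, mask_ratio=0.3):
        # labels: (B, A) in {0, 1}; img_feat: (B, D)
        B, A = labels.shape
        ids = torch.arange(A) * 2 + labels.long()       # word ids
        mask = torch.rand(B, A) < mask_ratio
        ids[mask] = A * 2                               # [MASK] token
        seq = self.tok(ids) + img_feat.unsqueeze(1)     # image conditioning
        logits = self.head(self.enc(seq)).squeeze(-1)   # (B, A)
        return nn.functional.binary_cross_entropy_with_logits(
            logits[mask], labels[mask].float())

model = MaskedAttributeModel(num_attrs=10, dim=128)
loss = model(torch.randint(0, 2, (4, 10)), torch.randn(4, 128))
print(loss.item())
```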
Detection Transformer (DETR) and Deformable DETR have been proposed to eliminate the need for many hand-designed components in object detection while demonstrating performance on par with previous complex hand-crafted detectors. However, their performance on Video Object Detection (VOD) has not been well explored. In this paper, we present TransVOD, the first end-to-end video object detection system based on spatial-temporal Transformer architectures. The first goal of this paper is to streamline the pipeline of VOD, effectively removing the need for many hand-crafted components for feature aggregation, e.g., optical flow models and relation networks. Besides, benefiting from the object query design in DETR, our method does not need complicated post-processing methods such as Seq-NMS. In particular, we present a temporal Transformer to aggregate both the spatial object queries and the feature memories of each frame. Our temporal Transformer consists of two components: a Temporal Query Encoder (TQE) to fuse object queries, and a Temporal Deformable Transformer Decoder (TDTD) to obtain current-frame detection results. These designs boost the strong deformable DETR baseline by a significant margin (2%-4% mAP), and TransVOD yields performance comparable to the state of the art on the ImageNet VID benchmark. Then, we present two improved versions of TransVOD, TransVOD++ and TransVOD Lite. The former fuses object-level information into the object queries via dynamic convolution, while the latter models the entire video clip as the output to speed up inference. We give a detailed analysis of all three models in the experimental section. In particular, our proposed TransVOD++ sets a new state-of-the-art record in terms of accuracy on ImageNet VID with 90.0% mAP. Our proposed TransVOD Lite also achieves the best speed-accuracy trade-off with 83.7% mAP while running at around 30 FPS on a single V100 GPU. Code and models will be available for further research.
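A hedged sketch of the Temporal Query Encoder idea: object queries from all frames are fused with plain self-attention so each frame's queries can borrow evidence from its neighbors (layer sizes and the residual/norm layout are assumptions):

```python
import torch
import torch.nn as nn

class TemporalQueryEncoder(nn.Module):
    """Sketch of a TQE-like module: object queries from all frames of a
    clip are concatenated and fused with self-attention, so detection in
    one frame can use evidence from the others (no optical flow, no
    Seq-NMS post-processing)."""
    def __init__(self, dim: int, nhead: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, nhead, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, queries):
        # queries: (B, T, Q, D) -- Q object queries for each of T frames
        B, T, Q, D = queries.shape
        seq = queries.view(B, T * Q, D)        # fuse across frames
        fused, _ = self.attn(seq, seq, seq)
        return self.norm(seq + fused).view(B, T, Q, D)

q = torch.randn(2, 4, 100, 256)
print(TemporalQueryEncoder(256)(q).shape)
```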
The recently proposed DEtection TRansformer (DETR) has established a fully end-to-end paradigm for object detection. However, DETR suffers from slow training convergence, which hinders its applicability to various detection tasks. We observe that DETR's slow convergence is largely attributed to the difficulty in matching object queries with relevant regions, due to the semantic misalignment between object queries and encoded image features. With this observation, we design Semantic-Aligned-Matching DETR++ (SAM-DETR++) to accelerate DETR's convergence and improve detection performance. The core of SAM-DETR++ is a plug-and-play module that projects object queries and encoded image features into the same feature embedding space, where each object query can easily be matched with relevant regions of similar semantics. Besides, SAM-DETR++ searches for multiple representative keypoints and exploits their features for semantic-aligned matching with enhanced representation capacity. Furthermore, SAM-DETR++ can effectively fuse multi-scale features in a coarse-to-fine manner on the basis of the designed semantic-aligned matching. Extensive experiments show that the proposed SAM-DETR++ achieves superior convergence speed and competitive detection accuracy. Additionally, as a plug-and-play method, SAM-DETR++ can complement existing DETR convergence solutions with even better performance, achieving 44.8% AP with merely 12 training epochs and 49.1% AP with 50 training epochs on COCO val2017 with ResNet-50. Code is available at https://github.com/zhanggongjie/sam-detr.
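The core plug-and-play idea, projecting queries and image features into one shared space before matching, can be sketched as follows; the keypoint search and multi-scale fusion of the full method are omitted, and the projection layout is an assumption:

```python
import torch
import torch.nn as nn

class SemanticAlignedMatching(nn.Module):
    """Hedged sketch of semantic-aligned matching: object queries and
    encoded image features are projected into one shared embedding
    space, so a query's attention naturally lands on regions with
    similar semantics."""
    def __init__(self, dim: int):
        super().__init__()
        self.proj_q = nn.Linear(dim, dim)  # queries -> shared space
        self.proj_f = nn.Linear(dim, dim)  # image features -> shared space

    def forward(self, queries, feats):
        # queries: (B, Q, D); feats: (B, N, D) flattened image features
        q = self.proj_q(queries)
        f = self.proj_f(feats)
        att = torch.softmax(q @ f.transpose(1, 2) / q.shape[-1] ** 0.5, -1)
        return att @ f   # semantically matched content per query

q = torch.randn(2, 100, 256)
f = torch.randn(2, 400, 256)
print(SemanticAlignedMatching(256)(q, f).shape)
```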
The ultimate goal of continuous sign language recognition (CSLR) is to facilitate communication between hearing-impaired people and hearing people, which requires a certain degree of real-time performance and deployability from the model. However, previous CSLR research has paid little attention to real-time performance and deployability. To improve both, this paper proposes a zero-parameter, zero-computation temporal superposition crossover module (TSCM), and combines it with 2D convolution to form a "TSCM+2D convolution" hybrid convolution, which gives 2D convolution strong spatial-temporal modelling capability with zero parameter increase and lower deployment cost than other spatial-temporal convolutions. The overall TSCM-based CSLR model is built on the improved ResBlockT network proposed in this paper: the "TSCM+2D convolution" hybrid convolution is applied to the ResBlock of the ResNet network to form the new ResBlockT, and random gradient stopping and a multi-level CTC loss are introduced to train the model, which reduces the final recognition WER while reducing training memory usage, and extends the ResNet network from the image classification task to the video recognition task. In addition, this study is the first in CSLR to use only 2D convolution to extract spatial-temporal features of sign language videos for end-to-end recognition. Experiments on two large-scale continuous sign language datasets demonstrate the effectiveness of the proposed method and achieve highly competitive results.
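A zero-parameter temporal mixing operation in the spirit of TSCM might look like the sketch below, where channel groups are superposed with the neighboring frames so that a subsequent 2D convolution sees cross-frame information; the channel split and the averaging are assumptions, not the paper's exact crossover rule:

```python
import torch

def temporal_superposition_crossover(x):
    """Hedged sketch of zero-parameter temporal mixing (TSCM-like):
    one quarter of the channels is superposed with the previous frame,
    another quarter with the next frame, so a following 2D convolution
    sees cross-frame information at no parameter cost.
    x: (B, T, C, H, W)
    """
    prev = torch.roll(x, shifts=1, dims=1)   # frame t-1
    nxt = torch.roll(x, shifts=-1, dims=1)   # frame t+1
    # note: torch.roll wraps at the clip boundary; a real implementation
    # would likely zero-pad the temporal edges instead
    c = x.shape[2] // 4
    out = x.clone()
    out[:, :, :c] = 0.5 * (x[:, :, :c] + prev[:, :, :c])
    out[:, :, c:2 * c] = 0.5 * (x[:, :, c:2 * c] + nxt[:, :, c:2 * c])
    return out   # remaining channels stay untouched

x = torch.randn(2, 16, 64, 28, 28)
print(temporal_superposition_crossover(x).shape)
```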
Transformer, an attention-based encoder-decoder architecture, has revolutionized the field of natural language processing. Inspired by this significant achievement, some pioneering works have recently adapted Transformer-like architectures to the computer vision (CV) field, and they have demonstrated their effectiveness on various CV tasks. Relying on competitive modeling capability, visual Transformers have achieved impressive performance compared with modern convolutional neural networks. In this paper, we comprehensively review three hundred different visual Transformers for three fundamental CV tasks (classification, detection, and segmentation), and propose a taxonomy that organizes these methods according to their motivation, structure, and usage. Because of the differences in training settings and task orientations, we also evaluate these methods on different configurations for easy and intuitive comparison, rather than only on various benchmarks. Furthermore, we reveal a series of essential but potentially overlooked aspects that may enable visual Transformers to stand out from numerous architectures, such as slack high-level semantic embeddings that bridge the gap between visual and sequential Transformers. Finally, three promising directions for future research are suggested for further investigation.
Scene text spotting is of great importance to the computer vision community due to its wide variety of applications. Recent methods attempt to introduce linguistic knowledge for challenging recognition rather than relying on pure visual classification. However, how to effectively model linguistic rules in end-to-end deep networks remains a research challenge. In this paper, we argue that the limited capacity of language models comes from 1) implicit language modeling; 2) unidirectional feature representation; and 3) a language model with noisy input. Correspondingly, we propose an autonomous, bidirectional and iterative ABINet++ for scene text spotting. First, the autonomous design enforces explicit language modeling by decoupling the recognizer into a vision model and a language model and blocking gradient flow between the two. Second, we propose a novel bidirectional cloze network (BCN) as the language model, based on bidirectional feature representation. Third, we propose an iterative correction scheme for the language model, which effectively alleviates the impact of noisy input. Finally, to polish ABINet++ for long text recognition, we propose to aggregate horizontal features by embedding Transformer units inside a U-Net, and design a position-and-content attention module that integrates character order and content to attend to character features precisely. ABINet++ achieves state-of-the-art performance on both scene text recognition and scene text spotting benchmarks, consistently demonstrating the superiority of our method in various environments, especially on low-quality images. Besides, extensive experiments in both English and Chinese prove that a text spotter incorporating our language modeling method can significantly improve its performance in both accuracy and speed compared with commonly used attention-based recognizers.
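The iterative-correction execution manner can be sketched as a small loop in which the language model repeatedly re-reads the progressively cleaner prediction; the additive fusion and the toy language model below are assumptions for illustration (ABINet++ uses a learned fusion and blocks gradient flow into the LM):

```python
import torch
import torch.nn as nn

def iterative_correction(vision_logits, language_model, num_iters=3):
    """Sketch of iterative correction: the language model re-reads its
    own progressively cleaner prediction, dampening the noise of the
    initial visual output. `language_model` maps a probability sequence
    (B, T, V) to refined logits of the same shape; all assumptions."""
    logits = vision_logits
    for _ in range(num_iters):
        probs = torch.softmax(logits, dim=-1)
        # ABINet-style training would detach probs here to block
        # gradient flow between the vision and language models
        logits = vision_logits + language_model(probs)  # fuse V + L
    return logits

# toy "language model": a linear map over the vocabulary dimension
lm = nn.Linear(30, 30)
out = iterative_correction(torch.randn(2, 25, 30), lm)
print(out.shape)
```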
Video recognition in an open and dynamic world is quite challenging, as we need to handle different settings such as close-set, long-tail, few-shot and open-set. By leveraging semantic knowledge from noisy text descriptions crawled from the Internet, we focus on the general video recognition (GVR) problem of solving different recognition tasks within a unified framework. The core contribution of this paper is twofold. First, we build a comprehensive video recognition benchmark, Kinetics-GVR, including four sub-task datasets to cover the mentioned settings. To facilitate the research of GVR, we propose to utilize external textual knowledge from the Internet and provide multi-source text descriptions for all action classes. Second, inspired by the flexibility of language representation, we present a unified visual-linguistic framework (VLG) to solve the GVR problem with an effective two-stage training paradigm. Our VLG is first pre-trained on video and language datasets to learn a shared feature space, and then devises a flexible bi-modal attention head to coordinate high-level semantic concepts under different settings. Extensive results show that our VLG obtains state-of-the-art performance under all four settings. The superior performance demonstrates the effectiveness and generalization ability of our proposed framework. We hope our work takes a step toward general video recognition and can serve as a baseline for future research. The code and models will be available at https://github.com/MCG-NJU/VLG.
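One plausible sketch of a bi-modal attention head: per-class text embeddings (built from the crawled descriptions) serve as keys and values, the video feature attends over them, and classification reduces to cosine similarity; the attention-then-similarity layout is an assumption:

```python
import torch
import torch.nn as nn

class BiModalHead(nn.Module):
    """Hedged sketch of a bi-modal head for visual-linguistic
    recognition: class-name text embeddings act as keys/values, a video
    feature attends over them, and logits are cosine similarities, so
    new settings only change the text side."""
    def __init__(self, dim: int):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads=8,
                                          batch_first=True)

    def forward(self, video_feat, class_texts):
        # video_feat: (B, D); class_texts: (K, D), one per class
        K = class_texts.shape[0]
        ctx = class_texts.unsqueeze(0).expand(video_feat.shape[0], K, -1)
        q = video_feat.unsqueeze(1)                 # (B, 1, D)
        refined, _ = self.attn(q, ctx, ctx)         # text-aware video feat
        refined = nn.functional.normalize(refined.squeeze(1), dim=-1)
        texts = nn.functional.normalize(class_texts, dim=-1)
        return refined @ texts.t()                  # (B, K) class logits

logits = BiModalHead(512)(torch.randn(2, 512), torch.randn(400, 512))
print(logits.shape)
```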