Fine-grained image recognition is challenging because discriminative clues are usually fragmented, whether coming from a single image or multiple images. Despite important advances, most existing methods still focus on the most discriminative parts within a single image, ignoring informative details in other regions and failing to consider clues from other associated images. In this paper, we analyze the difficulties of fine-grained image recognition from a new perspective and propose a transformer architecture with a peak suppression module and a knowledge guidance module, which respects the diversification of discriminative features within a single image and the aggregation of discriminative clues across multiple images. Specifically, the peak suppression module first utilizes a linear projection to convert the input image into sequential tokens. It then blocks tokens according to the attention responses produced by the transformer encoder. This module penalizes the most discriminative parts during feature learning and consequently improves the exploitation of information in neglected regions. The knowledge guidance module compares the image-based representation generated by the peak suppression module with a learnable set of knowledge embeddings to obtain knowledge response coefficients. Knowledge learning is then formalized as a classification problem that uses the response coefficients as classification scores. The knowledge embeddings and the image-based representations are updated during training so that the knowledge embeddings incorporate discriminative clues from different images. Finally, we fuse the acquired knowledge embeddings with the image-based representations into a comprehensive representation, which leads to significantly better performance. Extensive evaluations on six popular datasets demonstrate the advantages of the proposed method.
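To make the peak-suppression idea above concrete, here is a minimal sketch, assuming the attention responses are taken as the class token's attention over patch tokens; the masking ratio, tensor shapes, and function name are illustrative assumptions rather than the authors' released implementation.

```python
import torch

def peak_suppression(tokens, cls_attn, suppress_ratio=0.1):
    """Mask the patch tokens that receive the strongest attention from the
    class token, so feature learning has to exploit less salient regions.

    tokens:   (B, N, D) patch tokens from a transformer encoder layer
    cls_attn: (B, N)    class-token attention over the N patch tokens
    """
    B, N, _ = tokens.shape
    k = max(1, int(N * suppress_ratio))                 # number of "peak" tokens to block
    peak_idx = cls_attn.topk(k, dim=1).indices          # indices of the most-attended tokens
    mask = torch.ones(B, N, 1, device=tokens.device)
    mask.scatter_(1, peak_idx.unsqueeze(-1), 0.0)       # zero out the peaks
    return tokens * mask

# toy usage
tokens = torch.randn(2, 196, 768)
cls_attn = torch.rand(2, 196).softmax(dim=-1)
suppressed = peak_suppression(tokens, cls_attn)
```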
Fine-grained visual classification (FGVC), which aims to recognize objects from subcategories, is a very challenging task due to the inherently subtle inter-class differences. Most existing works mainly tackle this problem by reusing the backbone network to extract features of detected discriminative regions. However, this strategy inevitably complicates the pipeline and pushes the proposed regions toward containing most parts of the objects, thus failing to locate the really important parts. Recently, vision transformers (ViT) have shown strong performance on conventional classification tasks. The self-attention mechanism of the transformer links every patch token to the classification token. In this work, we first evaluate the effectiveness of the ViT framework in the fine-grained recognition setting. Then, because the strength of attention can intuitively be regarded as an indicator of token importance, we further propose a novel part selection module that can be applied to most transformer architectures, in which we integrate all the raw attention weights of the transformer into an attention map to guide the network to effectively and accurately select discriminative image patches and compute their relations. A contrastive loss is applied to enlarge the distance between the feature representations of confusing classes. We name the augmented transformer-based model TransFG and demonstrate its value through experiments on five popular fine-grained benchmarks, where we achieve state-of-the-art performance. Qualitative results are presented for a better understanding of the model.
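A hedged sketch of what a part selection step of this kind can look like: following the common attention-rollout recipe, per-layer attention matrices are multiplied together and the patches that the classification token attends to most are kept. Head fusion by averaging, the number of kept tokens, and all names here are assumptions, not the exact TransFG procedure.

```python
import torch

def select_discriminative_patches(attn_per_layer, tokens, num_keep=12):
    """Aggregate raw attention across layers and keep the most-attended patches.

    attn_per_layer: list of (B, H, N+1, N+1) attention tensors, one per layer
    tokens:         (B, N+1, D) token sequence with the class token at index 0
    """
    joint = attn_per_layer[0].mean(dim=1)                # fuse heads: (B, N+1, N+1)
    for attn in attn_per_layer[1:]:
        joint = attn.mean(dim=1) @ joint                 # accumulate attention flow
    cls_to_patch = joint[:, 0, 1:]                       # class-token attention over patches
    idx = cls_to_patch.topk(num_keep, dim=1).indices     # most informative patches
    idx = idx.unsqueeze(-1).expand(-1, -1, tokens.size(-1))
    selected = torch.gather(tokens[:, 1:], 1, idx)       # (B, num_keep, D)
    return torch.cat([tokens[:, :1], selected], dim=1)   # class token + selected parts
```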
Fine-grained visual categorization (FGVC) aims to recognize objects from similar subordinate categories, which is both challenging and practical for the need for accurate automatic recognition. Most FGVC approaches focus on attention mechanisms for discriminative region mining, while neglecting the mutual dependencies of the mined regions and the holistic object structure they compose, which are essential for the model's discriminative information localization and understanding ability. To address these limitations, we propose the Structure Information Modeling Transformer (SIM-Trans), which incorporates object structure information into the transformer to enhance discriminative representation learning so that it contains both appearance and structure information. Specifically, we encode the image into a sequence of patch tokens and build a strong vision transformer framework with two well-designed modules: (i) a structure information learning (SIL) module is proposed to mine the spatial context relations of significant patches within the object extent with the help of the transformer's self-attention weights, further injecting structure information into the model; (ii) a multi-level feature boosting (MFB) module is introduced to exploit the complementarity of multi-level features and contrastive learning among classes to enhance feature robustness for accurate recognition. The two proposed modules are lightweight, can be plugged into any transformer network, and are easily trained end-to-end, relying only on the attention weights that come with the vision transformer itself. Extensive experiments and analyses demonstrate that the proposed SIM-Trans achieves state-of-the-art performance on fine-grained visual categorization benchmarks. The code is available at https://github.com/pku-icst-mipl/sim-trans_acmmm2022.
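One way to picture the structure-modeling step is the loose sketch below, assuming the class token's self-attention weights are used to pick significant patches and their grid coordinates relative to the most-attended patch serve as structure features; this illustrates the idea only and is not the SIL module itself.

```python
import torch

def patch_structure_features(cls_attn, grid_size=14, top_k=16):
    """Locate salient patches via attention and describe their spatial layout.

    cls_attn: (B, N) class-token attention over N = grid_size**2 patches
    returns:  indices of the selected patches and their (row, col) offsets
              relative to the most-attended patch
    """
    idx = cls_attn.topk(top_k, dim=1).indices                  # (B, top_k) salient patches
    rows = torch.div(idx, grid_size, rounding_mode='floor')    # patch-grid coordinates
    cols = idx % grid_size
    ref_r, ref_c = rows[:, :1], cols[:, :1]                    # reference = top-1 patch
    rel = torch.stack([rows - ref_r, cols - ref_c], dim=-1)    # (B, top_k, 2) spatial offsets
    return idx, rel.float()
```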
In the past few years, image recognition based on deep convolutional neural networks (CNNs) has made significant progress, largely owing to the strong ability of such networks to mine discriminative object pose and part information from texture and shape. This, however, is often unsuitable for fine-grained visual classification (FGVC), which exhibits higher intra-class and lower inter-class variance due to occlusion, deformation, illumination, and so on, and often requires expert knowledge to characterize the objects/scenes. To this end, we propose a method that effectively captures subtle changes by aggregating context-aware features from the most relevant image regions and their importance in discriminating fine-grained categories, while avoiding bounding-box and/or distinguishable part annotations. Our approach is inspired by recent self-attention and graph neural network (GNN) methods and is trained in an end-to-end manner. Our model is evaluated on eight benchmark datasets consisting of fine-grained objects and human-object interactions. It outperforms the state-of-the-art methods by a significant margin in recognition accuracy.
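As a rough illustration of context-aware attentional pooling over region features, the sketch below lets every region attend to every other region and then weighs regions by a learned importance score; the layer sizes and names are assumptions, not the published architecture.

```python
import torch
import torch.nn as nn

class ContextAwarePooling(nn.Module):
    """Each region attends to all others to form a context-aware feature, then a
    scorer decides how much each region contributes to the image-level feature."""

    def __init__(self, dim):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)
        self.score = nn.Linear(dim, 1)                    # per-region importance

    def forward(self, regions):                           # regions: (B, R, D)
        attn = self.q(regions) @ self.k(regions).transpose(1, 2) / regions.size(-1) ** 0.5
        context = attn.softmax(dim=-1) @ self.v(regions)  # (B, R, D) context-aware features
        weights = self.score(context).softmax(dim=1)      # (B, R, 1) region importance
        return (weights * context).sum(dim=1)             # (B, D) pooled representation
```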
Fine-grained visual recognition aims to classify objects with visually similar appearances into subcategories, and it has made great progress with the development of deep CNNs. However, handling subtle differences between different subcategories still remains a challenge. In this paper, we propose to solve this issue in one unified framework from two aspects, i.e., constructing feature-level interrelationships, and capturing part-level discriminative features. This framework, namely PArt-guided Relational Transformers (PART), is proposed to learn the discriminative part features with an automatic part discovery module, and to explore the intrinsic correlations with a feature transformation module by adapting the Transformer models from the field of natural language processing. The part discovery module efficiently discovers the discriminative regions which are highly corresponded to the gradient descent procedure. Then the second feature transformation module builds correlations within the global embedding and multiple part embeddings, enhancing spatial interactions among semantic pixels. Moreover, our proposed approach does not rely on additional part branches at inference time and reaches state-of-the-art performance on 3 widely-used fine-grained object recognition benchmarks. Experimental results and explainable visualizations demonstrate the effectiveness of our proposed approach. The code can be found at https://github.com/iCVTEAM/PART.
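The feature-transformation idea of relating one global embedding with several part embeddings can be sketched with a plain transformer encoder, as below; the encoder depth, width, and the way parts are discovered are assumptions rather than the released PART code.

```python
import torch
import torch.nn as nn

class PartRelationEncoder(nn.Module):
    """Model correlations between a global embedding and multiple part embeddings
    with standard self-attention, then read out the enriched global token."""

    def __init__(self, dim=512, heads=8, layers=2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=layers)

    def forward(self, global_feat, part_feats):
        # global_feat: (B, D); part_feats: (B, P, D) from a part-discovery module
        tokens = torch.cat([global_feat.unsqueeze(1), part_feats], dim=1)
        tokens = self.encoder(tokens)          # global and part embeddings interact
        return tokens[:, 0]                    # enriched global representation
```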
Fine-grained image analysis (FGIA) is a longstanding and fundamental problem in computer vision and pattern recognition, and underpins a diverse set of real-world applications. The task of FGIA targets analyzing visual objects from subordinate categories, e.g., species of birds or models of cars. The small inter-class and large intra-class variation inherent to fine-grained analysis makes it a challenging problem. Capitalizing on advances in deep learning, in recent years we have witnessed remarkable progress in deep-learning-powered FGIA. In this paper we present a systematic survey of these advances, in which we attempt to re-define and broaden the field of FGIA by consolidating two fundamental fine-grained research areas -- fine-grained image recognition and fine-grained image retrieval. In addition, we also review other key issues of FGIA, such as publicly available benchmark datasets and related domain-specific applications. We conclude by highlighting several research directions and open problems that need further exploration by the community.
Weakly supervised object localization (WSOL) aims to learn object localizers by using only image-level labels. Techniques based on convolutional neural networks (CNNs) tend to highlight the most discriminative parts of objects while ignoring the full object extent. Recently, transformer architectures have been deployed for WSOL to capture long-range feature dependencies with the self-attention mechanism and multilayer perceptron structure. However, transformers lack the locality inductive bias inherent to CNNs and may therefore degrade local feature details in WSOL. In this paper, we propose a novel transformer-based framework, termed LCTR (Local Continuity TRansformer), which aims to enhance the local perception capability of global features among long-range feature dependencies. To this end, we propose a relational patch-attention module (RPAM), which considers cross-patch information on a global basis. We further design a cue digging module (CDM), which uses local features to guide the learning trend of the model toward highlighting weak local responses. Finally, comprehensive experiments are carried out on two widely used datasets, i.e., CUB-200-2011 and ILSVRC, to verify the effectiveness of our method.
Few-shot fine-grained learning aims to classify a query image into one of a set of support categories with fine-grained differences. Although learning local differences between objects via deep neural networks has been successful, how to exploit query-support cross-image object semantic relations in transformer-based architectures remains under-explored in the few-shot fine-grained scenario. In this work, we propose a transformer-based double-helix model, namely HelixFormer, to achieve cross-image object semantic mining in a bidirectional and symmetric manner. HelixFormer consists of two steps: 1) a cross-branch Relation Mining Process (RMP), and 2) a Representation Enhancement Process (REP) within each individual branch. Through the designed RMP, each branch can extract fine-grained object-level Cross-image Semantic Relation Maps (CSRMs) using information from the other branch, ensuring better cross-image interaction in semantically related local object regions. Furthermore, with the aid of the CSRMs, the developed REP strengthens the extracted features of the discovered semantically related local regions in each branch, boosting the model's ability to distinguish subtle feature differences of fine-grained objects. Extensive experiments on five public fine-grained benchmarks demonstrate that HelixFormer effectively enhances cross-image object semantic relation matching for recognizing fine-grained objects, achieving better performance than most advanced methods under both 1-shot and 5-shot scenarios. Our code is available at: https://github.com/jiakangyuan/helixformer
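A hedged sketch of bidirectional, symmetric cross-image attention in the spirit described above: each image's patches are re-weighted by how strongly the other image attends to them. The weighting scheme and shapes are illustrative assumptions, not the HelixFormer definition of CSRMs.

```python
import torch
import torch.nn as nn

class CrossImageRelation(nn.Module):
    """Symmetric query<->support cross-attention producing relation maps
    that emphasize semantically related local regions in both branches."""

    def __init__(self, dim):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)

    def relation(self, a, b):
        # a: (B, Na, D), b: (B, Nb, D) patch features of two images
        logits = self.q(a) @ self.k(b).transpose(1, 2) / a.size(-1) ** 0.5
        return logits.softmax(dim=-1)                          # (B, Na, Nb) relation map

    def forward(self, query_feat, support_feat):
        r_qs = self.relation(query_feat, support_feat)         # query -> support
        r_sq = self.relation(support_feat, query_feat)         # support -> query
        # a patch is emphasized when the other image's patches attend to it strongly
        w_q = r_sq.sum(dim=1, keepdim=True).transpose(1, 2)    # (B, Nq, 1)
        w_s = r_qs.sum(dim=1, keepdim=True).transpose(1, 2)    # (B, Ns, 1)
        return query_feat * (1 + w_q), support_feat * (1 + w_s)
```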
Zero-shot learning (ZSL) tackles the novel class recognition problem by transferring semantic knowledge from seen classes to unseen ones. Existing attention-based models struggle to learn inferior region features in a single image by solely using unidirectional attention, which ignores the transferability of visual features and the distinctiveness of attribute localization. In this paper, we propose a cross attribute-guided transformer network, termed TransZero++, to refine visual features and learn accurate attribute localization for semantics-augmented visual embedding representations in ZSL. TransZero++ consists of an attribute→visual Transformer sub-net (AVT) and a visual→attribute Transformer sub-net (VAT). Specifically, AVT first employs a feature augmentation encoder to alleviate the cross-dataset problem and improve the transferability of visual features by reducing the entangled relative geometry relationships among region features. Then, an attribute→visual decoder is used to localize the image regions most relevant to each attribute in a given image for attribute-based visual feature representations. Analogously, VAT uses a similar feature augmentation encoder to refine the visual features, which are further applied in a visual→attribute decoder to learn visual-based attribute features. By further introducing semantic collaborative losses, the two attribute-guided transformers teach each other to learn semantics-augmented visual embeddings via semantic collaborative learning. Extensive experiments show that TransZero++ achieves new state-of-the-art results on three challenging ZSL benchmarks. The code is available at: https://github.com/shiming-chen/transzero_pp
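The attribute→visual decoding step can be pictured as attribute embeddings cross-attending to region features, as in the sketch below; the single-head attention, dimensions, and names are illustrative assumptions rather than the TransZero++ implementation.

```python
import torch
import torch.nn as nn

class AttributeVisualDecoder(nn.Module):
    """Each attribute embedding queries the image regions, yielding an
    attribute-localized visual feature and a per-attribute localization map."""

    def __init__(self, dim, num_attributes):
        super().__init__()
        self.attr_embed = nn.Parameter(torch.randn(num_attributes, dim))
        self.attn = nn.MultiheadAttention(dim, num_heads=1, batch_first=True)

    def forward(self, region_feats):                               # (B, R, D)
        B = region_feats.size(0)
        queries = self.attr_embed.unsqueeze(0).expand(B, -1, -1)   # (B, A, D)
        attr_visual, attn_map = self.attn(queries, region_feats, region_feats)
        # attr_visual: (B, A, D) attribute-based visual features
        # attn_map:    (B, A, R) regions most relevant to each attribute
        return attr_visual, attn_map
```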
Facial Expression Recognition (FER) in the wild is an extremely challenging task. Recently, some Vision Transformers (ViT) have been explored for FER, but most of them perform inferiorly compared to Convolutional Neural Networks (CNN). This is mainly because the newly proposed modules are difficult to converge well from scratch due to the lack of inductive bias, and they easily focus on occluded and noisy areas. TransFER, a representative transformer-based method for FER, alleviates this with multi-branch attention dropping but brings excessive computations. On the contrary, we present two attentive pooling (AP) modules to pool noisy features directly. The AP modules include Attentive Patch Pooling (APP) and Attentive Token Pooling (ATP). They aim to guide the model to emphasize the most discriminative features while reducing the impact of less relevant features. The proposed APP is employed to select the most informative patches on CNN features, and ATP discards unimportant tokens in ViT. Being simple to implement and free of learnable parameters, APP and ATP intuitively reduce the computational cost while boosting performance by ONLY pursuing the most discriminative features. Qualitative results demonstrate the motivation and effectiveness of our attentive poolings. Besides, quantitative results on six in-the-wild datasets outperform other state-of-the-art methods.
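A parameter-free sketch of the attentive token pooling idea for ViT features — rank patch tokens by the attention they receive from the class token and keep only the top fraction; the keep ratio and interface are assumptions, not the paper's exact module.

```python
import torch

def attentive_token_pooling(tokens, cls_attn, keep_ratio=0.7):
    """Discard the least-attended patch tokens so later layers process a
    shorter, cleaner sequence (fewer occlusion/noise tokens, lower cost).

    tokens:   (B, N+1, D) token sequence with the class token at index 0
    cls_attn: (B, N)      class-token attention over the N patch tokens
    """
    n_keep = max(1, int(cls_attn.size(1) * keep_ratio))
    idx = cls_attn.topk(n_keep, dim=1).indices                    # most informative patches
    idx = idx.unsqueeze(-1).expand(-1, -1, tokens.size(-1))
    kept = torch.gather(tokens[:, 1:], 1, idx)                    # (B, n_keep, D)
    return torch.cat([tokens[:, :1], kept], dim=1)
```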
Fine-grained visual classification (FGVC) aims to recognize objects from subcategories. It is a very challenging task because of the subtle inter-class differences. Existing research applies large-scale convolutional neural networks or vision transformers as the feature extractor, which is extremely computationally expensive. In fact, real-world scenarios of fine-grained recognition often require a lighter mobile network that can be used offline. However, the feature extraction ability of basic mobile networks is weaker than that of large-scale models. Based on the lightweight MobileNetV2, this paper proposes a progressive multi-stage interactive training method with a recursive mosaic generator (RMG-PMSI). First, we propose a recursive mosaic generator (RMG) that generates images of different granularities in different stages. Then, the features of different stages pass through a multi-stage interaction (MSI) module, which strengthens and complements the corresponding features of different stages. Finally, using progressive training (P), the features extracted by the model in different stages can be fully exploited and fused with each other. Experiments on three prestigious fine-grained benchmarks show that RMG-PMSI significantly improves performance with good robustness and transferability.
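A mosaic generator of the kind described can be sketched as a block-shuffling transform whose granularity is set per stage; the block counts, the random permutation, and the function name are illustrative assumptions.

```python
import torch

def mosaic(images, grid):
    """Split each image into a grid x grid board and randomly permute the blocks.

    images: (B, C, H, W) with H and W divisible by grid (e.g., grid = 8, 4, 2
            for successive training stages of increasing granularity)
    """
    B, C, H, W = images.shape
    bh, bw = H // grid, W // grid
    blocks = images.reshape(B, C, grid, bh, grid, bw).permute(0, 1, 2, 4, 3, 5)
    blocks = blocks.reshape(B, C, grid * grid, bh, bw)        # (B, C, blocks, bh, bw)
    perm = torch.randperm(grid * grid, device=images.device)
    blocks = blocks[:, :, perm]                               # shuffle the board
    blocks = blocks.reshape(B, C, grid, grid, bh, bw).permute(0, 1, 2, 4, 3, 5)
    return blocks.reshape(B, C, H, W)                         # reassembled mosaic image
```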
To fully exploit potentially minute and subtle differences, fine-grained classifiers collect information about class variations. The task is very challenging due to the differences in color, viewpoint, and structure among entities of the same class. Classification becomes even harder given the similarity to other classes combined with the variation within a class. In this work, we investigate the performance of landmark general CNN classifiers, which have presented top-notch results on large-scale classification datasets, on fine-grained datasets, and compare them against state-of-the-art fine-grained classifiers. In this paper, we pose two specific questions: (i) Can general CNN classifiers achieve results comparable to fine-grained classifiers? (ii) Do general CNN classifiers require any specific information to improve on fine-grained categories? Throughout this work, we train general CNN classifiers without introducing any aspect specific to fine-grained datasets. We conduct extensive evaluations on six datasets to determine whether the fine-grained classifiers are able to elevate the baselines in our experiments.
Recently, vision transformer models have become important models for a range of vision tasks. However, these models are usually opaque and offer weak feature interpretability. Moreover, there is still no method designed for an intrinsically interpretable transformer that is able to explain its reasoning process and provide faithful explanations. To close these crucial gaps, we propose a novel vision transformer dubbed the eXplainable Vision Transformer (Ex-ViT), an intrinsically interpretable transformer model that is able to jointly discover robust interpretable features and perform the prediction. Specifically, Ex-ViT is composed of an Explainable Multi-Head Attention (E-MHA) module, an Attribute-guided Explainer (AttE) module, and a self-supervised attribute-guided loss. The E-MHA tailors explainable attention weights that are able to learn semantically interpretable representations for model decisions from local patches with noise robustness. Meanwhile, AttE is proposed to encode the discriminative attribute features of the target object through diverse attribute discovery, which constitutes faithful evidence for the model's predictions. In addition, a self-supervised attribute-guided loss is developed for Ex-ViT, which aims to learn enhanced representations through an attribute discriminability mechanism and an attribute diversity mechanism, so as to localize diverse and discriminative attributes and generate more robust explanations. As a result, we can uncover faithful and robust explanations with diverse attributes through the proposed Ex-ViT.
Fine-grained visual classification can be addressed by deep representation learning under the supervision of manually pre-defined targets (e.g., one-hot or Hadamard codes). Such target coding schemes are less flexible at modeling inter-class correlation and are also sensitive to sparse and imbalanced data distributions. In light of this, this paper introduces a novel target coding scheme -- dynamic target relation graphs (DTRG) -- which, as an auxiliary feature regularization, is a self-generated structural output mapped from the input images. Specifically, an online computation of class-level feature centers is designed to generate cross-category distances in the representation space, which can thus be depicted by a dynamic graph in a non-parametric manner. Explicitly minimizing intra-class feature variations anchored on these class-level centers can encourage the learning of discriminative features. Moreover, owing to the exploitation of inter-class dependency, the proposed target graphs can alleviate data sparsity and imbalance in representation learning. Inspired by the recent success of mixup-style data augmentation, this paper introduces randomness into the soft construction of dynamic target relation graphs to further explore the relation diversity of target classes. Experimental results demonstrate the effectiveness of our method on a number of diverse benchmarks for multiple visual classification tasks, especially achieving state-of-the-art performance on popular fine-grained object benchmarks and superior robustness against sparse and imbalanced data. The source code is publicly available at https://github.com/akonlau/dtrg.
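The target-graph construction can be pictured roughly as below — a sketch assuming momentum-updated class centers, a softmax over negative center distances as the soft relation graph, and a simple center loss as the intra-class term; the momentum value and the distance-to-similarity mapping are assumptions, not the DTRG recipe.

```python
import torch
import torch.nn.functional as F

class DynamicTargetGraph:
    """Maintain class-level feature centers online and expose (i) a soft
    cross-category relation graph and (ii) an intra-class compactness loss."""

    def __init__(self, num_classes, dim, momentum=0.9):
        self.centers = torch.zeros(num_classes, dim)
        self.momentum = momentum

    @torch.no_grad()
    def update(self, feats, labels):
        # online momentum update of the class-level feature centers
        for c in labels.unique():
            batch_mean = feats[labels == c].mean(dim=0)
            self.centers[c] = self.momentum * self.centers[c] + (1 - self.momentum) * batch_mean

    def relation_graph(self):
        # cross-category center distances -> soft relation graph (rows sum to 1)
        dist = torch.cdist(self.centers, self.centers)
        return F.softmax(-dist, dim=1)

    def intra_class_loss(self, feats, labels):
        # pull features toward their own class center
        return (feats - self.centers[labels]).pow(2).sum(dim=1).mean()
```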
Vision transformers (ViTs) encoding an image as a sequence of patches bring new paradigms for semantic segmentation. We present an efficient framework of representation separation in local-patch level and global-region level for semantic segmentation with ViTs. It is targeted for the peculiar over-smoothness of ViTs in semantic segmentation, and therefore differs from current popular paradigms of context modeling and most existing related methods reinforcing the advantage of attention. We first deliver the decoupled two-pathway network in which another pathway enhances and passes down local-patch discrepancy complementary to global representations of transformers. We then propose the spatially adaptive separation module to obtain more separate deep representations and the discriminative cross-attention which yields more discriminative region representations through novel auxiliary supervisions. The proposed methods achieve some impressive results: 1) incorporated with large-scale plain ViTs, our methods achieve new state-of-the-art performances on five widely used benchmarks; 2) using masked pre-trained plain ViTs, we achieve 68.9% mIoU on Pascal Context, setting a new record; 3) pyramid ViTs integrated with the decoupled two-pathway network even surpass the well-designed high-resolution ViTs on Cityscapes; 4) the improved representations by our framework have favorable transferability in images with natural corruptions. The codes will be released publicly.
Astounding results from Transformer models on natural language tasks have intrigued the vision community to study their application to computer vision problems. Among their salient benefits, Transformers enable modeling long dependencies between input sequence elements and support parallel processing of sequence as compared to recurrent networks e.g., Long short-term memory (LSTM). Different from convolutional networks, Transformers require minimal inductive biases for their design and are naturally suited as set-functions. Furthermore, the straightforward design of Transformers allows processing multiple modalities (e.g., images, videos, text and speech) using similar processing blocks and demonstrates excellent scalability to very large capacity networks and huge datasets. These strengths have led to exciting progress on a number of vision tasks using Transformer networks. This survey aims to provide a comprehensive overview of the Transformer models in the computer vision discipline. We start with an introduction to fundamental concepts behind the success of Transformers i.e., self-attention, large-scale pre-training, and bidirectional feature encoding. We then cover extensive applications of transformers in vision including popular recognition tasks (e.g., image classification, object detection, action recognition, and segmentation), generative modeling, multi-modal tasks (e.g., visual-question answering, visual reasoning, and visual grounding), video processing (e.g., activity recognition, video forecasting), low-level vision (e.g., image super-resolution, image enhancement, and colorization) and 3D analysis (e.g., point cloud classification and segmentation). We compare the respective advantages and limitations of popular techniques both in terms of architectural design and their experimental value. Finally, we provide an analysis on open research directions and possible future works. We hope this effort will ignite further interest in the community to solve current challenges towards the application of transformer models in computer vision.
Different from the general visual classification, some classification tasks are more challenging as they need the professional categories of the images. In the paper, we call them expert-level classification. Previous fine-grained vision classification (FGVC) has made many efforts on some of its specific sub-tasks. However, they are difficult to expand to the general cases which rely on the comprehensive analysis of part-global correlation and the hierarchical features interaction. In this paper, we propose Expert Network (ExpNet) to address the unique challenges of expert-level classification through a unified network. In ExpNet, we hierarchically decouple the part and context features and individually process them using a novel attentive mechanism, called Gaze-Shift. In each stage, Gaze-Shift produces a focal-part feature for the subsequent abstraction and memorizes a context-related embedding. Then we fuse the final focal embedding with all memorized context-related embedding to make the prediction. Such an architecture realizes the dual-track processing of partial and global information and hierarchical feature interactions. We conduct the experiments over three representative expert-level classification tasks: FGVC, disease classification, and artwork attributes classification. In these experiments, superior performance of our ExpNet is observed comparing to the state-of-the-arts in a wide range of fields, indicating the effectiveness and generalization of our ExpNet. The code will be made publicly available.
The recently developed vision transformer (ViT) has achieved promising results on image classification compared to convolutional neural networks. Inspired by this, in this paper, we study how to learn multi-scale feature representations in transformer models for image classification. To this end, we propose a dual-branch transformer to combine image patches (i.e., tokens in a transformer) of different sizes to produce stronger image features. Our approach processes small-patch and large-patch tokens with two separate branches of different computational complexity and these tokens are then fused purely by attention multiple times to complement each other. Furthermore, to reduce computation, we develop a simple yet effective token fusion module based on cross attention, which uses a single token for each branch as a query to exchange information with other branches. Our proposed cross-attention only requires linear time for both computational and memory complexity instead of quadratic time otherwise. Extensive experiments demonstrate that our approach performs better than or on par with several concurrent works on vision transformer, in addition to efficient CNN models. For example, on the ImageNet1K dataset, with some architectural changes, our approach outperforms the recent DeiT by a large margin of 2% with a small to moderate increase in FLOPs and model parameters. Our source codes and models are available at https://github.com/IBM/CrossViT.
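The linear-cost token fusion can be sketched as below, assuming a single multi-head attention call in which one branch's class token is the only query over the other branch's patch tokens; the head count and the residual form are illustrative choices.

```python
import torch
import torch.nn as nn

class CrossBranchFusion(nn.Module):
    """One branch's class token queries the other branch's patch tokens, so the
    information exchange costs O(N) rather than O(N^2) in the sequence length."""

    def __init__(self, dim, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, cls_a, tokens_b):
        # cls_a:    (B, 1, D) class token of branch A (e.g., the small-patch branch)
        # tokens_b: (B, N, D) patch tokens of branch B (e.g., the large-patch branch)
        fused, _ = self.attn(cls_a, tokens_b, tokens_b)    # single-query attention
        return cls_a + fused                               # residual update of the class token
```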
Weakly supervised object localization is a challenging task that aims to localize objects with only coarse annotations such as image categories. Existing deep network approaches are mainly based on class activation maps, which focus on highlighting discriminative local regions while ignoring the whole object. In addition, transformer-based techniques constantly focus on the background, which hinders the ability to identify complete objects. To address these issues, we propose a re-attention mechanism termed the token refinement transformer (TRT), which captures object-level semantics to guide localization well. Specifically, TRT introduces a novel module named the token priority scoring module (TPSM) to suppress the effects of background noise while focusing on the target object. Then, we incorporate the class activation map as semantics-aware input to restrict the attention map to the target object. Extensive experiments on two benchmarks showcase the superiority of our proposed method over existing approaches that use image-category annotations. The source code is available at https://github.com/su-hui-zz/reattentiontransformer.
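A loose sketch of combining a class activation map with transformer token scores, as hinted at above: the CAM is resized to the patch grid and used to damp background tokens. The normalization and the multiplicative combination are assumptions, not the TRT formulation.

```python
import torch
import torch.nn.functional as F

def cam_guided_scores(token_scores, cam, grid_size=14):
    """Re-weight transformer token scores with a class activation map.

    token_scores: (B, N) per-token scores from the transformer, N = grid_size**2
    cam:          (B, 1, H, W) class activation map for the predicted class
    """
    cam = F.interpolate(cam, size=(grid_size, grid_size), mode='bilinear', align_corners=False)
    cam = cam.flatten(1)                                              # (B, N)
    cam = (cam - cam.amin(1, keepdim=True)) / \
          (cam.amax(1, keepdim=True) - cam.amin(1, keepdim=True) + 1e-6)
    refined = token_scores * cam                                      # damp background tokens
    return refined / (refined.sum(dim=1, keepdim=True) + 1e-6)
```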
With the rapid advances of image editing techniques in recent years, image manipulation detection has attracted considerable attention because of the increasing security risks posed by tampered images. To address these challenges, a novel multi-scale multi-grained deep network (MSMG-Net) is proposed to automatically identify manipulated regions. In our MSMG-Net, a parallel multi-scale feature extraction structure is used to extract multi-scale features. Multi-grained feature learning is then utilized to perceive the object-level semantic relations of the multi-scale features by introducing shunted self-attention. To fuse multi-scale multi-grained features, global and local feature fusion blocks are designed for manipulated region segmentation via a bottom-up approach, and a multi-level feature aggregation block is designed for edge artifact detection via a top-down approach. Thus, MSMG-Net can effectively perceive object-level semantics and encode edge artifacts. Experimental results on five benchmark datasets justify the superior performance of the proposed method, which outperforms state-of-the-art manipulation detection and localization methods. Extensive ablation experiments and feature visualizations demonstrate that the multi-scale multi-grained learning presents effective visual representations of manipulated regions. In addition, MSMG-Net shows better robustness when various post-processing methods further manipulate the images.