Although convolutional neural networks (CNNs) have shown remarkable results on many vision tasks, they still struggle with simple yet challenging visual reasoning problems. Inspired by the recent success of Transformer networks in computer vision, in this paper we introduce the Recurrent Vision Transformer (RViT) model. Thanks to the impact of recurrent connections and spatial attention on reasoning tasks, this network achieves competitive results on the same-different visual reasoning problems from the SVRT dataset. Weight sharing across the spatial and depth dimensions regularizes the model, allowing it to learn with fewer free parameters, using only 28k training samples. A comprehensive ablation study confirms the importance of the hybrid CNN + Transformer architecture and the role of the feedback connections, which iteratively refine the internal representation until a stable prediction is obtained. In the end, this study offers a deeper understanding of the role of attention and recurrent connections in solving visual abstract reasoning tasks.
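As a rough illustration of the recipe this abstract describes, the sketch below runs a single weight-shared transformer block repeatedly over CNN feature tokens, so depth comes from iteration rather than stacked layers. All module names, sizes, and the step count are illustrative assumptions, not the authors' implementation.

```python
# A minimal sketch of a recurrent CNN + Transformer hybrid, assuming one shared
# transformer block applied for a fixed number of refinement steps.
import torch
import torch.nn as nn

class RecurrentVisionTransformerSketch(nn.Module):
    def __init__(self, dim=128, heads=4, steps=4, num_classes=2):
        super().__init__()
        # Small CNN backbone produces a grid of feature vectors (tokens).
        self.backbone = nn.Sequential(
            nn.Conv2d(3, dim, kernel_size=7, stride=4, padding=3),
            nn.ReLU(),
            nn.Conv2d(dim, dim, kernel_size=3, stride=2, padding=1),
        )
        # One transformer layer, re-applied at every step (depth-wise weight sharing).
        self.block = nn.TransformerEncoderLayer(
            d_model=dim, nhead=heads, dim_feedforward=2 * dim, batch_first=True
        )
        self.cls = nn.Parameter(torch.zeros(1, 1, dim))
        self.head = nn.Linear(dim, num_classes)
        self.steps = steps

    def forward(self, x):
        feats = self.backbone(x).flatten(2).transpose(1, 2)    # (B, N, dim)
        tokens = torch.cat([self.cls.expand(x.size(0), -1, -1), feats], dim=1)
        # Feedback loop: the same block iteratively refines the representation.
        for _ in range(self.steps):
            tokens = self.block(tokens)
        return self.head(tokens[:, 0])                          # classify from CLS token

logits = RecurrentVisionTransformerSketch()(torch.randn(2, 3, 128, 128))
print(logits.shape)  # torch.Size([2, 2])
```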
Visual understanding requires comprehending complex visual relations between objects within a scene. Here, we seek to characterize the computational demands of abstract visual reasoning. We do this by systematically assessing the ability of modern deep convolutional neural networks (CNNs) to learn to solve the Synthetic Visual Reasoning Test (SVRT) challenge, a collection of twenty-three visual reasoning problems. Our analysis reveals a novel taxonomy of visual reasoning tasks, which can be explained by the type of relation (same-different versus spatial-relation judgments) and the number of relations used to compose the underlying rules. Prior cognitive neuroscience work suggests that attention plays a key role in humans' visual reasoning ability. To test this hypothesis, we extended the CNNs with spatial and feature-based attention mechanisms. In a second series of experiments, we evaluated the ability of these attention networks to learn to solve the SVRT challenge and found that the resulting architectures were much better at solving the hardest of these visual reasoning tasks. Most importantly, the corresponding improvements on individual tasks partially explained our novel taxonomy. Overall, this work provides a granular computational account of visual reasoning and yields testable neuroscience predictions regarding the differential need for feature-based versus spatial attention, depending on the type of visual reasoning problem.
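The two attention flavors this study grafts onto CNNs can be sketched as small plug-in modules. The parameterizations below are generic stand-ins (squeeze-and-excite-style channel gating and a 1x1-conv spatial gate), not the paper's exact networks.

```python
# Hedged sketch of feature-based (channel) versus spatial attention applied to a
# CNN feature map; both reweight the input, but along different axes.
import torch
import torch.nn as nn

class FeatureAttention(nn.Module):
    """Reweights channels from a globally pooled descriptor."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )
    def forward(self, x):                       # x: (B, C, H, W)
        w = self.mlp(x.mean(dim=(2, 3)))        # (B, C) channel weights
        return x * w[:, :, None, None]

class SpatialAttention(nn.Module):
    """Produces one weight per spatial location, shared across channels."""
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Conv2d(channels, 1, kernel_size=1)
    def forward(self, x):
        return x * torch.sigmoid(self.conv(x))  # (B, 1, H, W) broadcast over C

x = torch.randn(2, 64, 16, 16)
print(FeatureAttention(64)(x).shape, SpatialAttention(64)(x).shape)
```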
Astounding results from Transformer models on natural language tasks have intrigued the vision community to study their application to computer vision problems. Among their salient benefits, Transformers enable modeling long dependencies between input sequence elements and support parallel processing of sequences, as compared to recurrent networks, e.g., long short-term memory (LSTM). Different from convolutional networks, Transformers require minimal inductive biases for their design and are naturally suited as set-functions. Furthermore, the straightforward design of Transformers allows processing multiple modalities (e.g., images, videos, text and speech) using similar processing blocks and demonstrates excellent scalability to very large capacity networks and huge datasets. These strengths have led to exciting progress on a number of vision tasks using Transformer networks. This survey aims to provide a comprehensive overview of the Transformer models in the computer vision discipline. We start with an introduction to fundamental concepts behind the success of Transformers, i.e., self-attention, large-scale pre-training, and bidirectional feature encoding. We then cover extensive applications of transformers in vision including popular recognition tasks (e.g., image classification, object detection, action recognition, and segmentation), generative modeling, multi-modal tasks (e.g., visual-question answering, visual reasoning, and visual grounding), video processing (e.g., activity recognition, video forecasting), low-level vision (e.g., image super-resolution, image enhancement, and colorization) and 3D analysis (e.g., point cloud classification and segmentation). We compare the respective advantages and limitations of popular techniques both in terms of architectural design and their experimental value. Finally, we provide an analysis on open research directions and possible future works. We hope this effort will ignite further interest in the community to solve current challenges towards the application of transformer models in computer vision.
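For reference, the self-attention operation this survey treats as the fundamental building block, softmax(QK^T / sqrt(d)) V with learned linear projections, can be written out in a few lines. This is a generic didactic sketch, not any surveyed model's code.

```python
# Scaled dot-product self-attention over a token sequence: every token attends
# to every other token, which is what gives transformers their long-range reach.
import torch
import torch.nn as nn

class SelfAttention(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.qkv = nn.Linear(dim, 3 * dim)   # joint Q, K, V projection
        self.out = nn.Linear(dim, dim)
        self.scale = dim ** -0.5

    def forward(self, x):                    # x: (B, N, dim)
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        attn = torch.softmax(q @ k.transpose(-2, -1) * self.scale, dim=-1)
        return self.out(attn @ v)            # (B, N, dim)

tokens = torch.randn(2, 16, 64)
print(SelfAttention(64)(tokens).shape)       # torch.Size([2, 16, 64])
```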
As transformers have become the standard for language processing and have advanced in computer vision, parameter sizes and amounts of training data have grown accordingly. Many have therefore come to believe that transformers are not suitable for small amounts of data. This trend raises concerns such as the limited availability of data in certain scientific domains and the exclusion of researchers with limited resources from the field. In this paper, we aim to present an approach for small-scale learning by introducing Compact Transformers. We show for the first time that, with the right size and convolutional tokenization, transformers can avoid overfitting on small datasets and outperform state-of-the-art CNNs. Our models are flexible in model size and can have as few as 0.28M parameters while achieving competitive results. Our best model reaches 98% accuracy when trained from scratch on CIFAR-10 with only 3.7M parameters, which is a significant improvement in data efficiency over previous transformer-based models: it is more than 10x smaller than other transformers and 15% of the size of ResNet50 while achieving similar performance. CCT also outperforms many modern CNN-based approaches, and even some NAS-based approaches. Additionally, we obtain a new SOTA on Flowers-102 with 99.76% top-1 accuracy, and improve upon existing baselines on ImageNet (82.71% accuracy with 29% of the ViT parameters) as well as on NLP tasks. Our simple and compact design makes transformers more feasible to study for those with limited computing resources and/or dealing with small datasets, while extending existing research efforts in data-efficient transformers. Our code and pre-trained models are publicly available at https://github.com/shi-labs/compact-transformers.
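The convolutional tokenization at the heart of this approach can be sketched as a small conv + pooling stack whose output grid is flattened into tokens. The layer sizes below are illustrative assumptions, not the released CCT configuration.

```python
# Sketch of convolutional tokenization: replace ViT's large-stride patch
# projection with convolution and pooling, preserving local structure.
import torch
import torch.nn as nn

class ConvTokenizer(nn.Module):
    def __init__(self, embed_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, embed_dim, kernel_size=3, stride=1, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=3, stride=2, padding=1),
        )

    def forward(self, x):                        # (B, 3, H, W)
        feats = self.net(x)                      # (B, D, H/2, W/2)
        return feats.flatten(2).transpose(1, 2)  # (B, N, D) token sequence

tokens = ConvTokenizer()(torch.randn(2, 3, 32, 32))
print(tokens.shape)  # torch.Size([2, 256, 128])
```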
Humans continue to vastly outperform modern AI systems in their ability to parse and flexibly understand complex visual scenes. Attention and memory are two systems known to play a critical role in our ability to selectively maintain and manipulate behaviorally relevant visual information to solve some of the most challenging visual reasoning tasks. Here, we introduce a novel architecture for visual reasoning inspired by the cognitive-science literature, the Memory- and Attention-based (visual) REasOning (MAREO) architecture. MAREO instantiates an active-vision theory, which posits that the brain solves complex visual reasoning problems compositionally, by learning to combine previously learned elementary visual operations into more complex visual routines. MAREO learns to solve visual reasoning tasks via sequences of attention shifts that route task-relevant visual information into a memory bank, and maintain it there, through a multi-head transformer module. Visual routines are then deployed by a dedicated, trained reasoning module to judge various relations between objects in the scene. Experiments on four types of reasoning tasks demonstrate MAREO's ability to learn visual routines in a robust and sample-efficient manner.
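A loose sketch of the loop this abstract describes: at each step an attention module selects task-relevant features, writes them into a slot memory, and a reasoning head reads the memory out. Purely illustrative, assuming a generic slot memory and a plain attention readout; MAREO's actual modules are more elaborate.

```python
# Memory-and-attention reasoning loop, heavily simplified: memory slots query
# the image tokens over several attention shifts, then a head judges relations.
import torch
import torch.nn as nn

class MemoryAttentionReasoner(nn.Module):
    def __init__(self, dim=64, slots=4, steps=4, classes=2):
        super().__init__()
        self.encoder = nn.Conv2d(3, dim, 8, stride=8)            # feature tokens
        self.attend = nn.MultiheadAttention(dim, 4, batch_first=True)
        self.memory = nn.Parameter(torch.randn(1, slots, dim))   # slot memory init
        self.reason = nn.Sequential(nn.Flatten(), nn.Linear(slots * dim, classes))
        self.steps = steps

    def forward(self, x):
        feats = self.encoder(x).flatten(2).transpose(1, 2)       # (B, N, dim)
        mem = self.memory.expand(len(x), -1, -1)
        for _ in range(self.steps):     # sequence of "attention shifts"
            routed, _ = self.attend(mem, feats, feats)           # memory queries image
            mem = mem + routed                                   # write into memory
        return self.reason(mem)         # judge relations from memory contents

print(MemoryAttentionReasoner()(torch.randn(2, 3, 64, 64)).shape)  # torch.Size([2, 2])
```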
Vision transformers are emerging as a powerful tool for solving computer vision problems. Recent techniques have also demonstrated the efficacy of transformers beyond the image domain for solving numerous video-related tasks. Among those, human action recognition is receiving special attention from the research community due to its widespread applications. This article provides the first comprehensive survey of vision transformer techniques for action recognition. We analyze and summarize the existing and emerging literature in this direction while highlighting the popular trends in adapting transformers for action recognition. Owing to their specialized application, we collectively refer to these methods as "action transformers". Our literature review provides an appropriate taxonomy for action transformers based on their architecture, modality, and intended objective. Within the context of action transformers, we explore techniques for encoding spatio-temporal data, reducing dimensionality, constructing frame patches and spatio-temporal cubes, and various representation methods. We also investigate the optimization of spatio-temporal attention in transformer layers to handle longer sequences, typically by reducing the number of tokens in a single attention operation. Moreover, we investigate different network learning strategies, such as self-supervised and zero-shot learning, along with their associated losses for transformer-based action recognition. This survey also summarizes progress in terms of evaluation-metric scores on important benchmarks for action transformers. Finally, it provides a discussion on the challenges, outlook, and future avenues for this research direction.
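One concrete instance of the "spatio-temporal cube construction" surveyed here is tubelet embedding, where a 3D convolution turns a video clip into a token sequence for a transformer. The sizes below are illustrative, not tied to any particular surveyed model.

```python
# Tubelet embedding: each non-overlapping T x H x W cube of the clip becomes one
# token, jointly encoding space and time before any attention layer runs.
import torch
import torch.nn as nn

class TubeletEmbedding(nn.Module):
    def __init__(self, dim=192, tubelet=(2, 16, 16)):
        super().__init__()
        self.proj = nn.Conv3d(3, dim, kernel_size=tubelet, stride=tubelet)

    def forward(self, video):                     # (B, 3, T, H, W)
        tokens = self.proj(video)                 # (B, dim, T', H', W')
        return tokens.flatten(2).transpose(1, 2)  # (B, T'*H'*W', dim)

clip = torch.randn(2, 3, 8, 64, 64)               # 8-frame clip
print(TubeletEmbedding()(clip).shape)             # torch.Size([2, 64, 192])
```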
While the Transformer architecture has become the de-facto standard for natural language processing tasks, its applications to computer vision remain limited. In vision, attention is either applied in conjunction with convolutional networks, or used to replace certain components of convolutional networks while keeping their overall structure in place. We show that this reliance on CNNs is not necessary and a pure transformer applied directly to sequences of image patches can perform very well on image classification tasks. When pre-trained on large amounts of data and transferred to multiple mid-sized or small image recognition benchmarks (ImageNet, CIFAR-100, VTAB, etc.), Vision Transformer (ViT) attains excellent results compared to state-of-the-art convolutional networks while requiring substantially fewer computational resources to train.
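The ViT recipe maps to a compact model in a few lines: patchify with a strided convolution, prepend a class token, add learned position embeddings, and run a standard transformer encoder. Toy dimensions throughout; a hedged sketch of the architecture, not the released ViT code.

```python
# Minimal ViT-style classifier over 8x8 patches of a 32x32 image.
import torch
import torch.nn as nn

class TinyViT(nn.Module):
    def __init__(self, img=32, patch=8, dim=128, depth=4, heads=4, classes=10):
        super().__init__()
        n = (img // patch) ** 2
        self.embed = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)  # patchify
        self.cls = nn.Parameter(torch.zeros(1, 1, dim))
        self.pos = nn.Parameter(torch.zeros(1, n + 1, dim))
        layer = nn.TransformerEncoderLayer(dim, heads, 4 * dim, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, depth)
        self.head = nn.Linear(dim, classes)

    def forward(self, x):
        t = self.embed(x).flatten(2).transpose(1, 2)           # (B, N, dim)
        t = torch.cat([self.cls.expand(len(x), -1, -1), t], 1) + self.pos
        return self.head(self.encoder(t)[:, 0])                # classify CLS token

print(TinyViT()(torch.randn(2, 3, 32, 32)).shape)  # torch.Size([2, 10])
```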
Vision transformers (ViTs) are becoming more popular and dominant compared to convolutional neural networks (CNNs). As a compelling technique in computer vision, ViTs have been successfully applied to various vision problems while focusing on long-range relationships. In this paper, we first introduce the fundamental concepts and background of the self-attention mechanism. Next, we provide a comprehensive overview of recent top-performing ViT methods, describing their strengths and weaknesses, computational costs, and training and testing datasets. We thoroughly compare the performance of various ViT algorithms and the most representative CNN methods on popular benchmark datasets. Finally, we explore some limitations with insightful observations and provide further research directions. The project page, along with the collection of papers, is available at https://github.com/khawar512/vit-survey
Vision transformer architectures have been shown to work very effectively for image classification tasks. Efforts to solve more challenging vision tasks with transformers have so far relied on convolutional backbones for feature extraction. In this paper we investigate the use of a pure transformer architecture (i.e., one with no CNN backbone) for the problem of 2D body pose estimation. We evaluate two ViT architectures on the COCO dataset. We demonstrate that using an encoder-decoder transformer architecture yields state-of-the-art results on this estimation problem.
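One way the encoder-decoder formulation could look, assuming a DETR-style design with one learned query per COCO keypoint, is sketched below; the architectures actually evaluated in the paper may differ in detail.

```python
# Pose estimation with a pure transformer: patch tokens are encoded, and learned
# keypoint queries are decoded against them to regress (x, y) per joint.
import torch
import torch.nn as nn

class PoseTransformerSketch(nn.Module):
    def __init__(self, dim=128, keypoints=17):               # 17 COCO keypoints
        super().__init__()
        self.embed = nn.Conv2d(3, dim, 16, stride=16)        # patch tokens, no CNN backbone
        self.transformer = nn.Transformer(
            d_model=dim, nhead=4, num_encoder_layers=3,
            num_decoder_layers=3, batch_first=True)
        self.queries = nn.Parameter(torch.randn(1, keypoints, dim))
        self.to_xy = nn.Linear(dim, 2)                       # (x, y) per keypoint

    def forward(self, img):
        tokens = self.embed(img).flatten(2).transpose(1, 2)  # (B, N, dim)
        q = self.queries.expand(len(img), -1, -1)
        decoded = self.transformer(tokens, q)                # encoder-decoder
        return self.to_xy(decoded).sigmoid()                 # normalized coordinates

print(PoseTransformerSketch()(torch.randn(2, 3, 224, 224)).shape)  # torch.Size([2, 17, 2])
```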
We present in this paper a new architecture, named Convolutional vision Transformer (CvT), that improves Vision Transformer (ViT) in performance and efficiency by introducing convolutions into ViT to yield the best of both designs. This is accomplished through two primary modifications: a hierarchy of Transformers containing a new convolutional token embedding, and a convolutional Transformer block leveraging a convolutional projection. These changes introduce desirable properties of convolutional neural networks (CNNs) to the ViT architecture (i.e. shift, scale, and distortion invariance) while maintaining the merits of Transformers (i.e. dynamic attention, global context, and better generalization). We validate CvT by conducting extensive experiments, showing that this approach achieves state-of-the-art performance over other Vision Transformers and ResNets on ImageNet-1k, with fewer parameters and lower FLOPs. In addition, performance gains are maintained when pretrained on larger datasets (e.g. ImageNet-22k) and fine-tuned to downstream tasks. Pretrained on ImageNet-22k, our CvT-W24 obtains a top-1 accuracy of 87.7% on the ImageNet-1k val set. Finally, our results show that the positional encoding, a crucial component in existing Vision Transformers, can be safely removed in our model, simplifying the design for higher resolution vision tasks. Code will be released at https://github.com/leoxiaobin/CvT.
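The convolutional projection can be sketched by reshaping tokens back to their 2D grid and computing Q, K, V with depthwise convolutions before standard multi-head attention; the strides and normalization used in the paper are omitted here, and sizes are illustrative.

```python
# Sketch of CvT-style convolutional projection: local (shift/scale) structure is
# injected into Q, K, V before global attention mixes the tokens.
import torch
import torch.nn as nn

class ConvProjectionAttention(nn.Module):
    def __init__(self, dim=96, heads=3):
        super().__init__()
        self.proj_q = nn.Conv2d(dim, dim, 3, padding=1, groups=dim)  # depthwise
        self.proj_k = nn.Conv2d(dim, dim, 3, padding=1, groups=dim)
        self.proj_v = nn.Conv2d(dim, dim, 3, padding=1, groups=dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x, hw):                          # x: (B, N, dim), N = h*w
        b, n, d = x.shape
        grid = x.transpose(1, 2).reshape(b, d, *hw)    # back to (B, dim, h, w)
        q, k, v = (p(grid).flatten(2).transpose(1, 2)
                   for p in (self.proj_q, self.proj_k, self.proj_v))
        out, _ = self.attn(q, k, v)
        return out                                     # (B, N, dim)

x = torch.randn(2, 49, 96)
print(ConvProjectionAttention()(x, (7, 7)).shape)      # torch.Size([2, 49, 96])
```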
Attention-based neural networks, such as Transformers, have become ubiquitous in numerous applications, including computer vision, natural language processing, and time-series analysis. In all kinds of attention networks, the attention maps are crucial as they encode semantic dependencies between input tokens. However, most existing attention networks perform modeling or reasoning based on representations, wherein the attention maps of different layers are learned separately without explicit interactions. In this paper, we propose a novel and generic evolving attention mechanism, which directly models the evolution of inter-token relationships through a chain of residual convolutional modules. The major motivations are twofold. On the one hand, the attention maps in different layers share transferable knowledge, thus adding a residual connection can facilitate the information flow of inter-token relationships across layers. On the other hand, there is naturally an evolutionary trend among attention maps at different abstraction levels, so it is beneficial to exploit a dedicated convolution-based module to capture this process. Equipped with the proposed mechanism, the convolution-enhanced evolving attention networks achieve superior performance in various applications, including time-series representation, natural language understanding, machine translation, and image classification. Especially on time-series representation tasks, Evolving Attention-enhanced Dilated Convolutional (EA-DC-) Transformer outperforms state-of-the-art models significantly, achieving an average of 17% improvement compared to the best SOTA. To the best of our knowledge, this is the first work that explicitly models the layer-wise evolution of attention maps. Our implementation is available at https://github.com/pkuyym/EvolvingAttention
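The evolving-attention mechanism can be sketched as a residual convolutional update of the previous layer's attention maps added to the current layer's attention logits. This is a simplified reading of the block, with illustrative sizes, not the released implementation.

```python
# Each layer's attention logits receive a residual contribution from the
# previous layer's per-head attention maps via a small 2D convolution.
import torch
import torch.nn as nn

class EvolvingAttention(nn.Module):
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.heads, self.dk = heads, dim // heads
        self.qkv = nn.Linear(dim, 3 * dim)
        self.out = nn.Linear(dim, dim)
        # Convolution mixing the previous layer's (heads, N, N) map stack.
        self.evolve = nn.Conv2d(heads, heads, kernel_size=3, padding=1)

    def forward(self, x, prev_attn=None):               # x: (B, N, dim)
        b, n, _ = x.shape
        q, k, v = (t.reshape(b, n, self.heads, self.dk).transpose(1, 2)
                   for t in self.qkv(x).chunk(3, dim=-1))
        logits = q @ k.transpose(-2, -1) / self.dk ** 0.5    # (B, heads, N, N)
        if prev_attn is not None:
            logits = logits + self.evolve(prev_attn)         # residual evolution
        attn = logits.softmax(dim=-1)
        y = (attn @ v).transpose(1, 2).reshape(b, n, -1)
        return self.out(y), attn                             # pass attn onward

layer1, layer2 = EvolvingAttention(), EvolvingAttention()
x = torch.randn(2, 10, 64)
y, a = layer1(x)
y, a = layer2(y, prev_attn=a)   # second layer refines the first layer's maps
print(y.shape, a.shape)         # torch.Size([2, 10, 64]) torch.Size([2, 4, 10, 10])
```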
We present a novel computational model, "SAViR-T", for the family of visual reasoning problems embodied in Raven's Progressive Matrices (RPM). Our model considers the explicit spatial semantics of visual elements within each image of the puzzle, encoded as spatio-visual tokens, and learns the intra-image as well as inter-image token dependencies that are highly relevant to the visual reasoning task. Through the token-wise relationships modeled by the transformer-based SAViR-T architecture, our model extracts group-driven (row or column) representations by leveraging group-rule coherence, using it as an inductive bias to extract the underlying rule representations from the top two rows (or columns) of the RPM for each token. We use this relation representation to locate the correct choice image that completes the last row or column of the RPM. Extensive experiments on synthetic RPM benchmarks, including RAVEN, I-RAVEN, RAVEN-FAIR, and PGM, as well as the natural-image-based "V-PROM", show that SAViR-T sets a new state of the art for visual reasoning, exceeding the performance of prior models.
Attention mechanisms have raised significant interest in the research community because they promise to improve the performance of neural network architectures. However, for any specific problem we still lack a principled way to choose the concrete mechanisms and hyper-parameters that lead to guaranteed improvements. More recently, self-attention has been proposed and widely used in transformer-like architectures, leading to significant breakthroughs in some applications. In this work we focus on two forms of attention mechanisms: attention modules and self-attention. Attention modules are used to reweight the features of each layer's input tensor, and different modules perform this reweighting in different ways, in fully connected or convolutional layers. The attention models studied are fully modular, and in this work they are used with the popular ResNet architecture. Self-attention, originally proposed in the field of natural language processing, makes it possible to relate all the items in an input sequence. Self-attention is becoming increasingly popular in computer vision, where it is sometimes combined with convolutional layers, although some recent architectures do away with convolutions entirely. In this work, we study and perform an objective comparison of a number of different attention mechanisms on a specific computer vision task, the classification of samples in the widely used Skin Cancer MNIST dataset. The results show that attention modules do sometimes improve the performance of convolutional neural network architectures, but also that this improvement, although noticeable and statistically significant, is not consistent across different settings. The results obtained with self-attention mechanisms, on the other hand, show consistent and significant improvements, leading to the best results even in architectures with a reduced number of parameters.
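To make the "attention module" setup concrete, the sketch below slots a typical channel-reweighting module into a ResNet-style block, applied before the residual addition. The module shown (squeeze-and-excite style) is one representative choice; the paper benchmarks several variants.

```python
# An attention module grafted onto a ResNet-style basic block.
import torch
import torch.nn as nn

class SqueezeExcite(nn.Module):
    """A typical attention module: reweights channels from a global average."""
    def __init__(self, c, r=4):
        super().__init__()
        self.fc = nn.Sequential(nn.Linear(c, c // r), nn.ReLU(),
                                nn.Linear(c // r, c), nn.Sigmoid())
    def forward(self, x):
        return x * self.fc(x.mean(dim=(2, 3)))[:, :, None, None]

class AttentionBasicBlock(nn.Module):
    """ResNet-style block with the module applied before the residual add."""
    def __init__(self, c):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(c, c, 3, padding=1, bias=False), nn.BatchNorm2d(c), nn.ReLU(),
            nn.Conv2d(c, c, 3, padding=1, bias=False), nn.BatchNorm2d(c))
        self.se = SqueezeExcite(c)
        self.act = nn.ReLU()
    def forward(self, x):
        return self.act(x + self.se(self.body(x)))

print(AttentionBasicBlock(64)(torch.randn(2, 64, 16, 16)).shape)  # torch.Size([2, 64, 16, 16])
```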
…various vision tasks without convolutions, where it can be used as a direct replacement for CNN backbones. (3) We validate PVT through extensive experiments, showing that it boosts the performance of many downstream tasks, including object detection, instance and semantic segmentation. For example, with a comparable number of parameters, PVT+RetinaNet achieves 40.4 AP on the COCO dataset, surpassing ResNet50+RetinaNet (36.3 AP) by 4.1 absolute AP (see Figure 2). We hope that PVT could serve as an alternative and useful backbone for pixel-level predictions and facilitate future research.
The availability of the sheer volume of Copernicus Sentinel imagery has created new opportunities for large-scale land use land cover (LULC) mapping using deep learning, although training on such large datasets is a non-trivial task. In this work, we experiment with the BigEarthNet dataset for LULC image classification and benchmark different state-of-the-art models, including convolutional neural networks, multi-layer perceptrons, vision transformers, EfficientNets, and Wide Residual Network (WRN) architectures. Our aim is to assess classification accuracy, training time, and inference rate. We propose a framework based on EfficientNet-style compound scaling of WRNs in network depth, width, and input-data resolution, for efficiently training and testing different model setups. We design a novel scaled WRN architecture enhanced with an Efficient Channel Attention mechanism. Our proposed lightweight model, with fewer trainable parameters, achieves a 4.5% higher averaged F-score classification accuracy across all 19 LULC classes and trains two times faster than the state-of-the-art ResNet50 model that we use as a baseline. We provide more than 50 trained models, along with our code for distributed training on multiple GPU nodes.
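The Efficient Channel Attention mechanism that the scaled WRN is enhanced with is small enough to sketch directly: channel weights come from a cheap 1D convolution over the globally pooled channel descriptor, with no dimensionality reduction. The fixed kernel size below is a simplification; the original ECA-Net derives it adaptively from the channel count.

```python
# ECA-style channel attention: global average pool, 1D conv across channels,
# sigmoid gate, then channel-wise reweighting of the feature map.
import torch
import torch.nn as nn

class ECA(nn.Module):
    def __init__(self, kernel_size=3):
        super().__init__()
        self.conv = nn.Conv1d(1, 1, kernel_size, padding=kernel_size // 2, bias=False)

    def forward(self, x):                       # x: (B, C, H, W)
        y = x.mean(dim=(2, 3))                  # global average pool -> (B, C)
        y = self.conv(y.unsqueeze(1))           # 1D conv across channels -> (B, 1, C)
        w = torch.sigmoid(y).squeeze(1)         # (B, C) channel weights
        return x * w[:, :, None, None]

print(ECA()(torch.randn(2, 64, 8, 8)).shape)    # torch.Size([2, 64, 8, 8])
```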
The recently developed vision transformer (ViT) has achieved promising results on image classification compared to convolutional neural networks. Inspired by this, in this paper, we study how to learn multi-scale feature representations in transformer models for image classification. To this end, we propose a dual-branch transformer to combine image patches (i.e., tokens in a transformer) of different sizes to produce stronger image features. Our approach processes small-patch and large-patch tokens with two separate branches of different computational complexity and these tokens are then fused purely by attention multiple times to complement each other. Furthermore, to reduce computation, we develop a simple yet effective token fusion module based on cross attention, which uses a single token for each branch as a query to exchange information with other branches. Our proposed cross-attention only requires linear time for both computational and memory complexity instead of quadratic time otherwise. Extensive experiments demonstrate that our approach performs better than or on par with several concurrent works on vision transformer, in addition to efficient CNN models. For example, on the ImageNet1K dataset, with some architectural changes, our approach outperforms the recent DeiT by a large margin of 2% with a small to moderate increase in FLOPs and model parameters. Our source codes and models are available at https://github.com/IBM/CrossViT.
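The cross-attention fusion described here can be sketched compactly: one branch's CLS token acts as the sole query against the other branch's patch tokens, so the cost stays linear in sequence length. The projections between branch widths are included because the two branches use different embedding sizes; dimensions are illustrative.

```python
# CrossViT-style token fusion: the small branch's CLS token queries the large
# branch's tokens, then carries the exchanged information back to its branch.
import torch
import torch.nn as nn

class CrossAttentionFusion(nn.Module):
    def __init__(self, dim_small=96, dim_large=192, heads=3):
        super().__init__()
        self.to_large = nn.Linear(dim_small, dim_large)
        self.to_small = nn.Linear(dim_large, dim_small)
        self.attn = nn.MultiheadAttention(dim_large, heads, batch_first=True)

    def forward(self, small_tokens, large_tokens):
        # Project the small branch's CLS token and use it as the only query.
        cls = self.to_large(small_tokens[:, :1])               # (B, 1, dim_large)
        fused, _ = self.attn(cls, large_tokens, large_tokens)  # linear in N
        cls = cls + fused                                      # residual update
        # Send the enriched CLS token back to the small branch.
        return torch.cat([self.to_small(cls), small_tokens[:, 1:]], dim=1)

small = torch.randn(2, 17, 96)    # CLS + 16 small-patch tokens
large = torch.randn(2, 5, 192)    # CLS + 4 large-patch tokens
print(CrossAttentionFusion()(small, large).shape)  # torch.Size([2, 17, 96])
```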
Automated image classification is a common task for supervised machine learning in food science, one example being the image-based classification of the external quality or ripeness of fruit. Deep convolutional neural networks (CNNs) are typically used for this task, but such models usually require large numbers of labeled training samples and enhanced computational resources. While commercial fruit-sorting lines readily meet these requirements, these prerequisites may hinder the use of machine-learning approaches, especially for smallholder farmers in developing countries. We propose an alternative method based on pre-trained vision transformers (ViTs) that is particularly suitable for domains with low data availability and limited computational resources. It can easily be implemented with limited resources on standard devices, which could democratize such models for smartphone-based image classification in developing countries. We demonstrate the competitiveness of our method by benchmarking it against well-established CNN approaches on two different classification tasks using domain datasets of banana and apple fruits. Our method achieved a classification accuracy just below the best-performing CNN (0.950 vs. 0.958) on a training dataset of 3745 images. At the same time, our method was superior when only small numbers of labeled training samples were available: compared to the CNN, it required three times fewer samples to reach an accuracy of 0.90. Moreover, visualizations of the low-dimensional feature embeddings show that the model used in our study extracts excellent features from unseen data without being supplied with labels.
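A hedged sketch of the transfer-learning recipe described here: take an ImageNet-pretrained ViT, replace its classification head, and fine-tune on a small fruit-quality dataset. torchvision's vit_b_16 stands in for the paper's pretrained backbone; the class count and the freeze-the-backbone policy are illustrative placeholders.

```python
# Fine-tuning a pretrained ViT head on a small dataset with limited compute.
import torch
import torch.nn as nn
from torchvision.models import vit_b_16

model = vit_b_16(weights="IMAGENET1K_V1")    # downloads pretrained weights
for p in model.parameters():                 # freeze the backbone (optional)
    p.requires_grad = False
model.heads = nn.Linear(768, 4)              # e.g., 4 hypothetical ripeness classes

optimizer = torch.optim.Adam(model.heads.parameters(), lr=1e-3)
images, labels = torch.randn(8, 3, 224, 224), torch.randint(0, 4, (8,))
loss = nn.CrossEntropyLoss()(model(images), labels)
loss.backward()
optimizer.step()
```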
Motivated by the considerable growth of chest X-ray images for diagnosing various diseases, and by the collection of extensive datasets, automated diagnosis using deep neural networks has occupied the minds of experts. Most available methods in computer vision use a CNN backbone to achieve high accuracy on classification problems. Nevertheless, recent studies show that transformers, which have become the de facto method in NLP, can also outperform many CNN-based models. This paper proposes a deep multi-label classification model with a Swin Transformer backbone to achieve state-of-the-art diagnostic classification. It leverages a multi-layer perceptron (also known as MLP) for the head architecture. We evaluate our model on "ChestX-ray14", one of the most widespread and largest X-ray datasets, which consists of more than 100,000 frontal/back-view images from over 30,000 patients with 14 well-known chest diseases. Our model was tested with several numbers of MLP layers in the head, and each configuration achieved competitive AUC scores across all classes. Comprehensive experiments on ChestX-ray14 show that a three-layer head attains state-of-the-art performance with a mean AUC of 0.810, compared to the previous SOTA mean AUC of 0.799. We propose an experimental setup for the fair benchmarking of existing methods, which can be used as a basis for future studies. Finally, we follow up on the results by confirming that the proposed method attends to pathologically relevant regions of the chest.
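A hedged sketch of the setup described here: a Swin backbone with an MLP head trained for multi-label chest-pathology classification, one sigmoid per disease with a binary cross-entropy loss. torchvision's Swin-T stands in for the paper's backbone; the exact variant and head widths are assumptions.

```python
# Swin backbone + 3-layer MLP head for 14-label classification.
import torch
import torch.nn as nn
from torchvision.models import swin_t

NUM_DISEASES = 14                                  # ChestX-ray14 label count
model = swin_t(weights="IMAGENET1K_V1")            # downloads pretrained weights
feat_dim = model.head.in_features
model.head = nn.Sequential(                        # 3-layer MLP head (widths assumed)
    nn.Linear(feat_dim, 512), nn.ReLU(),
    nn.Linear(512, 256), nn.ReLU(),
    nn.Linear(256, NUM_DISEASES),
)

criterion = nn.BCEWithLogitsLoss()                 # independent per-label sigmoid
images = torch.randn(2, 3, 224, 224)
targets = torch.randint(0, 2, (2, NUM_DISEASES)).float()
loss = criterion(model(images), targets)           # multi-label objective
loss.backward()
```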
Vision transformers (ViTs) serve as powerful vision models. Unlike convolutional neural networks, which dominated vision research in previous years, vision transformers enjoy the ability to capture long-range dependencies in the data. Nonetheless, an integral part of any transformer architecture, the self-attention mechanism, suffers from high latency and inefficient memory utilization, making it less suitable for high-resolution input images. To alleviate these shortcomings, hierarchical vision models employ self-attention locally over non-interleaving windows. This relaxation reduces the complexity to be linear in the input size; however, it limits cross-window interaction, hurting model performance. In this paper, we propose a new shift-invariant local attention layer, called query and attend (QnA), that aggregates the input locally in an overlapping manner, much like convolutions. The key idea behind QnA is to introduce learned queries, which allow for a fast and efficient implementation. We verify the effectiveness of our layer by incorporating it into a hierarchical vision transformer model. We show improvements in speed and memory complexity while achieving accuracy comparable to state-of-the-art models. Finally, our layer scales especially well with window size, requiring up to 10x less memory than existing methods while also being faster.
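A simplified sketch of the QnA idea: replace per-pixel queries with a single learned query that attends within overlapping local windows, so the layer aggregates locally like a convolution. This is a single-head, stride-1 toy version written for clarity; the real layer is multi-headed and far more optimized.

```python
# Learned-query local attention over overlapping 3x3 windows via unfold.
import torch
import torch.nn as nn
import torch.nn.functional as F

class QnASketch(nn.Module):
    def __init__(self, dim=64, window=3):
        super().__init__()
        self.k = nn.Conv2d(dim, dim, 1)
        self.v = nn.Conv2d(dim, dim, 1)
        self.query = nn.Parameter(torch.randn(dim))    # learned, shared everywhere
        self.window, self.scale = window, dim ** -0.5

    def forward(self, x):                              # x: (B, C, H, W)
        b, c, h, w = x.shape
        k = F.unfold(self.k(x), self.window, padding=self.window // 2)
        v = F.unfold(self.v(x), self.window, padding=self.window // 2)
        k = k.view(b, c, self.window ** 2, h * w)      # keys per overlapping window
        v = v.view(b, c, self.window ** 2, h * w)
        # Attention logits of the one learned query against each window's keys.
        logits = torch.einsum("c,bckn->bkn", self.query, k) * self.scale
        attn = logits.softmax(dim=1)                   # over window positions
        out = torch.einsum("bkn,bckn->bcn", attn, v)
        return out.reshape(b, c, h, w)

print(QnASketch()(torch.randn(2, 64, 8, 8)).shape)     # torch.Size([2, 64, 8, 8])
```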
To ensure proper knowledge representation of the kitchen environment, it is vital for kitchen robots to recognize the states of the food items that are being cooked. Although the domain of object detection and recognition has been extensively studied, the task of object state classification has remained relatively unexplored. The high intra-class similarity of ingredients during different states of cooking makes the task even more challenging. Researchers have proposed adopting Deep Learning based strategies in recent times, however, they are yet to achieve high performance. In this study, we utilized the self-attention mechanism of the Vision Transformer (ViT) architecture for the Cooking State Recognition task. The proposed approach encapsulates the globally salient features from images, while also exploiting the weights learned from a larger dataset. This global attention allows the model to withstand the similarities between samples of different cooking objects, while the employment of transfer learning helps to overcome the lack of inductive bias by utilizing pretrained weights. To improve recognition accuracy, several augmentation techniques have been employed as well. Evaluation of our proposed framework on the `Cooking State Recognition Challenge Dataset' has achieved an accuracy of 94.3%, which significantly outperforms the state-of-the-art.
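Since the abstract does not enumerate the augmentation techniques used, the pipeline below is only an illustrative example of the kind of augmentation that typically accompanies fine-tuning a pretrained ViT, with ImageNet normalization statistics matching common ViT pretraining.

```python
# Illustrative training-time augmentation pipeline for a pretrained ViT.
import torch
from PIL import Image
from torchvision import transforms

train_tf = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.8, 1.0)),
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.RandomRotation(15),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],   # ImageNet statistics,
                         std=[0.229, 0.224, 0.225]),   # matching ViT pretraining
])

img = Image.new("RGB", (256, 256))                     # stand-in for a dataset image
print(train_tf(img).shape)                             # torch.Size([3, 224, 224])
```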