Recently, vision transformers have become very popular. However, deploying them in many applications is computationally expensive due to the softmax layer in the attention block. We introduce a simple but effective, softmax-free attention block, SimA, which normalizes the query and key matrices with a simple $\ell_1$-norm instead of using a softmax layer. The attention block in SimA is then a simple multiplication of three matrices, so SimA can dynamically change the order of computation at test time to achieve computation that is linear in either the number of tokens or the number of channels. We empirically show that SimA, applied to three SOTA variants of transformers, DeiT, XCiT, and CvT, achieves accuracy on par with the SOTA models without requiring the softmax layer. Interestingly, changing SimA from multi-head to single-head has only a small effect on accuracy, which simplifies the attention block further. The code is available at $\href{https://github.com/ucdvision/sima}{\text{this https url}}$.
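As a rough illustration of the softmax-free idea described above, the sketch below $\ell_1$-normalizes queries and keys and then picks whichever multiplication order is cheaper; the normalization axis and the epsilon are assumptions for illustration, not necessarily SimA's exact implementation (see the linked repository for that).

```python
# Hedged sketch of softmax-free attention in the spirit of SimA.
# Assumption: Q and K are l1-normalized along the token axis; the official
# implementation may differ in the normalization axis and scaling details.
import torch

def sima_like_attention(q, k, v):
    """q, k, v: (batch, tokens, channels)."""
    n, d = q.shape[1], q.shape[2]
    q = q / (q.abs().sum(dim=1, keepdim=True) + 1e-6)  # l1 norm over tokens (assumed)
    k = k / (k.abs().sum(dim=1, keepdim=True) + 1e-6)
    # Without softmax the product Q K^T V is associative, so the cheaper
    # order can be chosen at test time: O(n^2 d) vs. O(n d^2).
    if n <= d:
        return (q @ k.transpose(-2, -1)) @ v  # quadratic in tokens, linear in channels
    return q @ (k.transpose(-2, -1) @ v)      # linear in tokens, quadratic in channels

x = torch.randn(2, 196, 64)
print(sima_like_attention(x, x, x).shape)  # torch.Size([2, 196, 64])
```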
Vision Transformers (ViTs) serve as powerful vision models. Unlike convolutional neural networks, which dominated vision research in previous years, vision transformers enjoy the ability to capture long-range dependencies in the data. Nonetheless, an integral part of any transformer architecture, the self-attention mechanism, suffers from high latency and inefficient memory utilization, making it less suitable for high-resolution input images. To alleviate these shortcomings, hierarchical vision models apply self-attention locally on non-interleaved windows. This relaxation reduces the complexity with respect to the input size; however, it limits cross-window interaction, hurting model performance. In this paper, we propose a new shift-invariant local attention layer, called query and attend (QnA), which aggregates the input locally in an overlapping manner, much like convolutions. The key idea behind QnA is to introduce learned queries, which allow a fast and efficient implementation. We verify the effectiveness of our layer by incorporating it into a hierarchical vision transformer model. We show improvements in speed and memory complexity while achieving accuracy comparable to state-of-the-art models. Finally, our layer scales especially well with window size, requiring up to x10 less memory while being faster than existing methods.
Recently, neural networks purely based on attention were shown to address image understanding tasks such as image classification. These high-performing vision transformers are pre-trained with hundreds of millions of images using a large infrastructure, thereby limiting their adoption. In this work, we produce competitive convolution-free transformers by training on Imagenet only. We train them on a single computer in less than 3 days. Our reference vision transformer (86M parameters) achieves top-1 accuracy of 83.1% (single-crop) on ImageNet with no external data. More importantly, we introduce a teacher-student strategy specific to transformers. It relies on a distillation token ensuring that the student learns from the teacher through attention. We show the interest of this token-based distillation, especially when using a convnet as a teacher. This leads us to report results competitive with convnets for both Imagenet (where we obtain up to 85.2% accuracy) and when transferring to other tasks. We share our code and models.
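A minimal sketch of how a distillation token can sit next to the class token is shown below; the names (`dist_token`, `head_dist`), dimensions, and toy encoder are illustrative assumptions, not the released DeiT code. The class head would be trained on ground-truth labels and the distillation head on the teacher's predictions.

```python
# Hedged sketch: a distillation token alongside the class token, each with its own head.
import torch
import torch.nn as nn

class TinyDistilledViT(nn.Module):
    def __init__(self, dim=192, num_classes=1000, depth=2, heads=3):
        super().__init__()
        self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.dist_token = nn.Parameter(torch.zeros(1, 1, dim))  # learns from the teacher
        layer = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, depth)
        self.head = nn.Linear(dim, num_classes)       # supervised by the true labels
        self.head_dist = nn.Linear(dim, num_classes)  # supervised by the teacher's logits

    def forward(self, patch_tokens):                  # (B, N, dim)
        b = patch_tokens.size(0)
        tokens = torch.cat([self.cls_token.expand(b, -1, -1),
                            self.dist_token.expand(b, -1, -1),
                            patch_tokens], dim=1)
        tokens = self.encoder(tokens)
        return self.head(tokens[:, 0]), self.head_dist(tokens[:, 1])

logits_cls, logits_dist = TinyDistilledViT()(torch.randn(2, 196, 192))
```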
We design a family of image classification architectures that optimize the trade-off between accuracy and efficiency in a high-speed regime. Our work exploits recent findings in attention-based architectures, which are competitive on highly parallel processing hardware. We revisit principles from the extensive literature on convolutional neural networks to apply them to transformers, in particular activation maps with decreasing resolutions. We also introduce the attention bias, a new way to integrate positional information in vision transformers. As a result, we propose LeViT: a hybrid neural network for fast inference image classification. We consider different measures of efficiency on different hardware platforms, so as to best reflect a wide range of application scenarios. Our extensive experiments empirically validate our technical choices and show they are suitable to most architectures. Overall, LeViT significantly outperforms existing convnets and vision transformers with respect to the speed/accuracy tradeoff. For example, at 80% ImageNet top-1 accuracy, LeViT is 5 times faster than EfficientNet on CPU. We release the code at https://github.com/facebookresearch/LeViT.
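The sketch below illustrates one way a learned, translation-invariant attention bias can inject positional information: a per-head table indexed by the relative offset between query and key positions is added to the attention logits before the softmax. The offset-indexing scheme here is a simplification and an assumption, not LeViT's exact construction.

```python
# Hedged sketch: a learned per-head bias over relative positions added to attention logits.
import torch
import torch.nn as nn

class BiasedAttention(nn.Module):
    def __init__(self, grid=14, heads=4, dim_head=16):
        super().__init__()
        self.scale = dim_head ** -0.5
        ys, xs = torch.meshgrid(torch.arange(grid), torch.arange(grid), indexing="ij")
        coords = torch.stack([ys.flatten(), xs.flatten()])        # (2, N)
        rel = (coords[:, :, None] - coords[:, None, :]).abs()     # (2, N, N) relative offsets
        self.register_buffer("bias_idx", rel[0] * grid + rel[1])  # one id per offset
        self.bias = nn.Parameter(torch.zeros(heads, grid * grid)) # learned table per head

    def forward(self, q, k, v):  # each: (batch, heads, N, dim_head)
        attn = (q @ k.transpose(-2, -1)) * self.scale
        attn = attn + self.bias[:, self.bias_idx]                 # (heads, N, N), broadcast
        return attn.softmax(dim=-1) @ v

m = BiasedAttention()
x = torch.randn(2, 4, 196, 16)
print(m(x, x, x).shape)  # torch.Size([2, 4, 196, 16])
```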
Vision transformers (ViTs) have pushed the state of the art on various visual recognition tasks via patch-wise image tokenization followed by stacked self-attention operations. Employing the self-attention module incurs quadratic complexity in computation and memory usage. Various attempts have therefore been made in natural language processing to approximate the self-attention computation with linear complexity. However, an in-depth analysis in this work shows that they are either theoretically flawed or empirically ineffective for visual recognition. We identify that their limitations are rooted in keeping the softmax self-attention during approximation. Specifically, conventional self-attention is computed by normalizing the scaled dot product between token feature vectors. Keeping this softmax operation challenges any subsequent linearization effort. Based on this insight, a softmax-free transformer (abbreviated SOFT) is proposed for the first time. To remove the softmax operator in self-attention, a Gaussian kernel function is adopted to replace the dot-product similarity. This enables the full self-attention matrix to be approximated via a low-rank matrix decomposition. The robustness of the approximation is achieved by computing its Moore-Penrose inverse with a Newton-Raphson method. Furthermore, an efficient symmetric normalization is introduced on the low-rank self-attention to strengthen the generalizability and transferability of the model. Extensive experiments on ImageNet, COCO, and ADE20K show that SOFT can significantly improve the computational efficiency of existing ViT variants. Crucially, with linear complexity, much longer token sequences are permitted, resulting in a superior trade-off between accuracy and complexity.
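To make the kernel substitution concrete, here is a deliberately naive sketch that swaps the scaled dot product for a Gaussian kernel. The row normalization is a placeholder assumption, and the low-rank (Nyström-style) decomposition and Newton-Raphson pseudo-inverse that give SOFT its linear complexity are omitted; this full form is still O(n^2).

```python
# Hedged sketch: Gaussian-kernel similarity instead of softmax(QK^T).
import torch

def gaussian_kernel_attention(q, k, v, gamma=1.0):
    """q, k, v: (batch, tokens, channels); full quadratic form for clarity."""
    d2 = torch.cdist(q, k, p=2).pow(2)          # pairwise squared distances, (B, n, n)
    sim = torch.exp(-gamma * d2)                # Gaussian kernel, all entries >= 0
    sim = sim / sim.sum(dim=-1, keepdim=True)   # simple row normalization (an assumption)
    return sim @ v

out = gaussian_kernel_attention(torch.randn(2, 49, 32),
                                torch.randn(2, 49, 32),
                                torch.randn(2, 49, 32))
print(out.shape)  # torch.Size([2, 49, 32])
```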
While state-of-the-art vision transformer models achieve promising results in image classification, they are computationally expensive and require many GFLOPs. Although the GFLOPs of a vision transformer can be decreased by reducing the number of tokens in the network, there is no setting that is optimal for all input images. In this work, we therefore introduce a differentiable, parameter-free Adaptive Token Sampling (ATS) module, which can be plugged into any existing vision transformer architecture. By scoring tokens and adaptively sampling the significant ones, the number of tokens is no longer static but varies for each input image. By integrating ATS as an additional layer within current transformer blocks, we can convert them into much more efficient vision transformers with an adaptive number of tokens. Since ATS is a parameter-free module, it can be added to off-the-shelf pre-trained vision transformers as a plug-and-play module, reducing their GFLOPs without any additional training. However, thanks to its differentiable design, one can also train a vision transformer equipped with ATS. We evaluate it on the ImageNet dataset by adding it to multiple state-of-the-art vision transformers. Our evaluations show that the proposed module improves the state of the art by reducing the computational cost (GFLOPs) by 37% while preserving accuracy.
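A simplified sketch of the token-selection idea follows: score patch tokens by how strongly the classification token attends to them, then keep only the top-scoring ones. ATS itself uses score-proportional, differentiable sampling rather than the plain top-k shown here, and the keep ratio below is an arbitrary assumption.

```python
# Hedged sketch: keep the patch tokens the CLS token attends to most.
import torch

def select_tokens(tokens, cls_attention, keep=0.63):
    """tokens: (B, N, C); cls_attention: (B, N) attention of CLS over patch tokens."""
    n_keep = max(1, int(tokens.shape[1] * keep))
    _, idx = cls_attention.topk(n_keep, dim=1)                  # most-attended tokens
    idx = idx.unsqueeze(-1).expand(-1, -1, tokens.shape[-1])    # (B, n_keep, C)
    return tokens.gather(1, idx)

kept = select_tokens(torch.randn(2, 196, 192), torch.rand(2, 196))
print(kept.shape)  # torch.Size([2, 123, 192])
```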
This paper presents a new vision Transformer, called Swin Transformer, that capably serves as a general-purpose backbone for computer vision. Challenges in adapting Transformer from language to vision arise from differences between the two domains, such as large variations in the scale of visual entities and the high resolution of pixels in images compared to words in text. To address these differences, we propose a hierarchical Transformer whose representation is computed with Shifted windows. The shifted windowing scheme brings greater efficiency by limiting self-attention computation to non-overlapping local windows while also allowing for cross-window connection. This hierarchical architecture has the flexibility to model at various scales and has linear computational complexity with respect to image size. These qualities of Swin Transformer make it compatible with a broad range of vision tasks, including image classification (87.3 top-1 accuracy on ImageNet-1K) and dense prediction tasks such as object detection (58.7 box AP and 51.1 mask AP on COCO test-dev) and semantic segmentation (53.5 mIoU on ADE20K val). Its performance surpasses the previous state-of-the-art by a large margin of +2.7 box AP and +2.6 mask AP on COCO, and +3.2 mIoU on ADE20K, demonstrating the potential of Transformer-based models as vision backbones. The hierarchical design and the shifted window approach also prove beneficial for all-MLP architectures. The code and models are publicly available at https://github.com/microsoft/Swin-Transformer.
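The two ingredients behind shifted-window attention can be sketched as below: partition a feature map into non-overlapping windows, and cyclically shift the map so that the next block's windows straddle the old window boundaries. The attention masks Swin uses for the shifted configuration are omitted, so treat this only as a shape-level illustration.

```python
# Hedged sketch: window partitioning and the cyclic shift used between blocks.
import torch

def window_partition(x, ws):
    """x: (B, H, W, C) -> (num_windows * B, ws * ws, C)."""
    b, h, w, c = x.shape
    x = x.view(b, h // ws, ws, w // ws, ws, c)
    return x.permute(0, 1, 3, 2, 4, 5).reshape(-1, ws * ws, c)

x = torch.randn(1, 56, 56, 96)
windows = window_partition(x, ws=7)                    # regular windows
shifted = torch.roll(x, shifts=(-3, -3), dims=(1, 2))  # cyclic shift by half a window
shifted_windows = window_partition(shifted, ws=7)      # windows now cross old borders
print(windows.shape, shifted_windows.shape)            # (64, 49, 96) for both
```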
Recent advances in vision transformers (ViTs) have achieved outstanding performance on visual recognition tasks. Convolutional neural networks (CNNs) exploit spatial inductive biases to learn visual representations, but these networks are spatially local. ViTs can learn global representations with their self-attention mechanism, but they are usually heavyweight and unsuitable for mobile devices. In this paper, we propose Cross Feature Attention (XFA) to reduce the computational cost of transformers and combine it with efficient mobile CNNs to form a novel, efficient, lightweight CNN-ViT hybrid model, XFormer, which can serve as a general-purpose backbone for learning both global and local representations. Experimental results show that XFormer outperforms numerous CNN- and ViT-based models across different tasks and datasets. On the ImageNet-1K dataset, XFormer achieves 78.5% top-1 accuracy with 5.5 million parameters, which is 2.2% and 6.3% more accurate than EfficientNet-B0 (CNN-based) and DeiT (ViT-based) with a similar number of parameters. Our model also performs well when transferred to object detection and semantic segmentation tasks. On the MS COCO dataset, XFormer obtains a 10.5 AP improvement (22.7 -> 33.2 AP) in the YOLOv3 framework with only 6.3 million parameters and 3.8G FLOPs. On the Cityscapes dataset, with only a simple all-MLP decoder, XFormer achieves 78.5 mIoU at 15.3 FPS, surpassing state-of-the-art lightweight segmentation networks.
The vanilla self-attention mechanism inherently relies on predefined and fixed computational dimensions. Such rigidity restricts it from context-oriented generalization, which could bring in more contextual cues and global representations. To mitigate this issue, we propose a Scalable Self-Attention (SSA) mechanism that leverages two scaling factors to release the dimensions of the query, key, and value matrices while decoupling them from the input. This scalability attains context-oriented generalization and enhances object sensitivity, pushing the whole network toward a more effective trade-off between accuracy and cost. Furthermore, we propose an Interactive Window-based Self-Attention (IWSA), which establishes interaction between non-overlapping regions by re-merging independent value tokens and aggregating spatial information from adjacent windows. By stacking SSA and IWSA alternately, the Scalable Vision Transformer (ScalableViT) achieves state-of-the-art performance on general-purpose vision tasks. For example, on ImageNet-1K classification, ScalableViT-S outperforms Twins-SVT-S and surpasses Swin-T by 1.4%.
Recently, neural networks purely based on attention were shown to address image understanding tasks such as image classification. These high-performing vision transformers are pre-trained with hundreds of millions of images using a large infrastructure, thereby limiting their adoption. In this work, we produce competitive convolution-free transformers trained on ImageNet only, using a single computer in less than 3 days. Our reference vision transformer (86M parameters) achieves top-1 accuracy of 83.1% (single-crop) on ImageNet with no external data. We also introduce a teacher-student strategy specific to transformers. It relies on a distillation token ensuring that the student learns from the teacher through attention, typically from a convnet teacher. The learned transformers are competitive (85.2% top-1 acc.) with the state of the art on ImageNet, and similarly when transferred to other tasks. We will share our code and models.
In this paper, we propose a new approach for model acceleration that exploits the spatial sparsity in visual data. We observe that the final prediction in vision transformers is based only on a subset of the most informative tokens, which is sufficient for image recognition. Based on this observation, we propose a dynamic token sparsification framework to prune redundant tokens progressively and dynamically based on the input, in order to accelerate vision transformers. Specifically, we devise a lightweight prediction module to estimate the importance score of each token given the current features. The module is added to different layers to prune redundant tokens hierarchically. While the framework is inspired by our observation of the sparse attention in vision transformers, we find that the idea of adaptive and asymmetric computation can be a general solution for accelerating various architectures. We extend our method to hierarchical models, including CNNs and hierarchical vision transformers, as well as more complex dense prediction tasks, by formulating a more generic dynamic spatial sparsification framework with progressive sparsification and asymmetric computation for different spatial locations. By applying lightweight fast paths to less informative features and expressive slow paths to more important locations, we can maintain the structure of the feature maps while significantly reducing the overall computation. Extensive experiments demonstrate the effectiveness of our framework on various modern architectures and different visual recognition tasks. Our results clearly demonstrate that dynamic spatial sparsification offers a new and more effective dimension for model acceleration. Code is available at https://github.com/raoyongming/dynamicvit
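A hedged sketch of the lightweight token-scoring idea appears below: a small MLP predicts a keep probability per token, and at inference the low-scoring tokens are dropped. The module sizes and keep ratio are assumptions, and the training-time machinery (Gumbel-softmax sampling and attention masking instead of hard dropping) is not shown.

```python
# Hedged sketch: score tokens with a small MLP, then prune the low-scoring ones.
import torch
import torch.nn as nn

class TokenScorer(nn.Module):
    def __init__(self, dim=192):
        super().__init__()
        self.mlp = nn.Sequential(nn.LayerNorm(dim), nn.Linear(dim, dim // 4),
                                 nn.GELU(), nn.Linear(dim // 4, 1))

    def forward(self, tokens):                            # (B, N, dim) patch tokens
        return self.mlp(tokens).squeeze(-1).sigmoid()     # keep probability per token

tokens = torch.randn(2, 196, 192)
p_keep = TokenScorer()(tokens)
idx = p_keep.topk(int(196 * 0.7), dim=1).indices          # keep the top 70% at this layer
pruned = tokens.gather(1, idx.unsqueeze(-1).expand(-1, -1, 192))
print(pruned.shape)                                       # torch.Size([2, 137, 192])
```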
Transformers, which are popular for language modeling, have recently been explored for solving vision tasks, e.g., the Vision Transformer (ViT) for image classification. The ViT model splits each image into a sequence of tokens of fixed length and then applies multiple Transformer layers to model their global relations for classification. However, ViT achieves inferior performance to CNNs when trained from scratch on a midsize dataset like ImageNet. We find this is because: 1) the simple tokenization of input images fails to model the important local structure, such as edges and lines, among neighboring pixels, leading to low training sample efficiency; 2) the redundant attention backbone design of ViT is limiting under a fixed computation budget and limited training samples. To overcome such limitations, we propose a new Tokens-To-Token Vision Transformer (T2T-ViT), which incorporates 1) a layer-wise Tokens-to-Token (T2T) transformation that progressively structurizes the image into tokens by recursively aggregating neighboring tokens into one token (Tokens-to-Token), so that local structure represented by surrounding tokens can be modeled and the token length can be reduced; and 2) an efficient backbone with a deep-narrow structure for the vision transformer, motivated by CNN architecture design after empirical study. Notably, T2T-ViT reduces the parameter count and MACs of vanilla ViT by half, while improving accuracy by more than 3.0% when trained from scratch on ImageNet. It also outperforms ResNets and achieves performance comparable to MobileNets by training directly on ImageNet. For example, a T2T-ViT of comparable size to ResNet50 (21.5M parameters) can achieve 83.3% top-1 accuracy at image resolution 384$\times$384. (Code: https://github.com/yitu-opensource/t2t-vit)
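One Tokens-to-Token step can be sketched as below: reshape the tokens back to a spatial grid, gather each position with its neighbors via an overlapping unfold, and re-embed the concatenated neighborhood as a new, shorter token sequence. Kernel size, stride, and the on-the-fly projection are illustrative assumptions (in the real model the projection would be a learned layer, typically followed by a small attention block).

```python
# Hedged sketch of one T2T step: overlapping neighborhood aggregation shortens the sequence.
import torch
import torch.nn as nn

def t2t_step(tokens, h, w, dim_out, k=3, stride=2):
    b, n, c = tokens.shape                       # n == h * w
    x = tokens.transpose(1, 2).reshape(b, c, h, w)
    patches = nn.functional.unfold(x, kernel_size=k, stride=stride, padding=1)
    patches = patches.transpose(1, 2)            # (B, new_n, c * k * k)
    proj = nn.Linear(c * k * k, dim_out)         # re-embed aggregated neighborhoods (sketch only)
    return proj(patches)

out = t2t_step(torch.randn(2, 56 * 56, 64), h=56, w=56, dim_out=64)
print(out.shape)   # torch.Size([2, 784, 64]): token length reduced from 3136 to 784
```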
Transformers offer a new way to design neural networks for visual recognition. Compared with convolutional networks, transformers enjoy the ability to reference global features at each stage, but the attention module brings higher computational overhead, which obstructs the application of transformers to processing high-resolution visual data. This paper aims to alleviate the conflict between efficiency and flexibility, for which we propose a specialized token for each region that acts as a messenger (MSG). By manipulating these MSG tokens, visual information can be exchanged flexibly across regions and the computational complexity is reduced. We then integrate the MSG tokens into a multi-scale architecture named MSG-Transformer. On standard image classification and object detection, MSG-Transformer achieves competitive performance and accelerates inference on both GPU and CPU. The code is available at https://github.com/hustvl/msg-transformer.
Although transformers have begun to dominate in vision, applying them to large images remains difficult. A big reason for this is that self-attention scales quadratically with the number of tokens, which in turn scales with the image size. On larger images (e.g., 1080p), over 60% of the total computation in the network is spent solely on creating and applying attention matrices. We take a step toward solving this issue by introducing Hydra Attention, an extremely efficient attention operation for vision transformers (ViTs). Paradoxically, this efficiency comes from taking multi-head attention to its extreme: by using as many attention heads as there are features, Hydra Attention is linear in both tokens and features with no hidden constants, making it significantly faster than standard self-attention in an off-the-shelf ViT-B/16 by a factor of the token count. Moreover, Hydra Attention retains high accuracy on ImageNet and, in some cases, actually improves it.
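With one head per feature, the attention operation collapses to an elementwise gating by a single globally aggregated vector, which is linear in both tokens and channels. The sketch below follows that reading, with L2 normalization per token as the (assumed) cosine-similarity kernel; treat the exact normalization choice as an assumption rather than the paper's definitive form.

```python
# Hedged sketch of Hydra-style attention: aggregate keys/values once, then gate the queries.
import torch
import torch.nn.functional as F

def hydra_attention(q, k, v):
    """q, k, v: (batch, tokens, channels)."""
    q = F.normalize(q, dim=-1)                        # cosine-similarity kernel (assumed)
    k = F.normalize(k, dim=-1)
    global_feat = (k * v).sum(dim=1, keepdim=True)    # (B, 1, C): one pass over all tokens
    return q * global_feat                            # broadcast gate over every query token

out = hydra_attention(torch.randn(2, 4096, 768),
                      torch.randn(2, 4096, 768),
                      torch.randn(2, 4096, 768))
print(out.shape)  # torch.Size([2, 4096, 768])
```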
Dense computer vision tasks such as object detection and segmentation require effective multi-scale feature representations for detecting or classifying objects or regions of varying sizes. While convolutional neural networks (CNNs) have been the dominant architectures for such tasks, recently introduced Vision Transformers (ViTs) aim to replace them as backbones. Similar to CNNs, ViTs build a simple multi-stage structure (i.e., fine-to-coarse) for multi-scale representations, but with single-scale patches. In this work, taking a different perspective from existing transformers, we explore multi-scale patch embedding and a multi-path structure, constructing the Multi-Path Vision Transformer (MPViT). MPViT embeds features of the same size (i.e., sequence length) with patches of different scales simultaneously by using overlapping convolutional patch embedding. Tokens of different scales are then fed independently into transformer encoders via multiple paths, and the resulting features are aggregated, enabling both fine and coarse feature representations at the same feature level. Thanks to the diverse, multi-scale feature representations, our MPViTs, scaling from Tiny (5M) to Base (73M), consistently achieve superior performance over state-of-the-art vision transformers on ImageNet classification, object detection, instance segmentation, and semantic segmentation. These extensive results demonstrate that MPViT can serve as a versatile backbone network for various vision tasks. Code will be made publicly available at \url{https://git.io/mpvit}.
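The gist of overlapping, multi-scale patch embedding can be sketched as parallel strided convolutions with different kernel sizes that produce token sequences of equal length. The kernel sizes, channel counts, and number of paths below are assumptions for illustration, not MPViT's actual configuration.

```python
# Hedged sketch: parallel overlapping patch embeddings at different scales, same sequence length.
import torch
import torch.nn as nn

class MultiScaleEmbed(nn.Module):
    def __init__(self, in_ch=64, dim=96):
        super().__init__()
        # Overlapping convolutions with different receptive fields but the same stride
        self.paths = nn.ModuleList([
            nn.Conv2d(in_ch, dim, kernel_size=ks, stride=2, padding=ks // 2)
            for ks in (3, 5, 7)
        ])

    def forward(self, x):                         # (B, in_ch, H, W)
        # Each path yields tokens of the same length but a different patch scale
        return [p(x).flatten(2).transpose(1, 2) for p in self.paths]

paths = MultiScaleEmbed()(torch.randn(2, 64, 56, 56))
print([t.shape for t in paths])   # three (2, 784, 96) token sequences
```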
Vision Transformers (ViTs) have recently shown a strong capability to achieve results comparable to convolutional neural networks (CNNs) on image classification. However, the vanilla ViT simply inherits the same architecture from natural language processing, which is often not optimized for vision applications. Motivated by this, we propose a new architecture that adopts a pyramid structure and employs a novel regional-to-local attention, rather than global self-attention, in vision transformers. More specifically, our model first generates regional tokens and local tokens from an image with different patch sizes, where each regional token is associated with a set of local tokens based on spatial location. The regional-to-local attention includes two steps: first, regional self-attention extracts global information among all regional tokens, and then local self-attention exchanges information between a regional token and its associated local tokens via self-attention. Therefore, even though local self-attention confines the scope to a local region, it can still receive global information. Extensive experiments on four vision tasks, including image classification, object and keypoint detection, semantic segmentation, and action recognition, show that our approach outperforms or is on par with state-of-the-art ViT variants, including many concurrent works. Our source code and models are available at https://github.com/ibm/regionvit.
With the development of the self-attention mechanism, Transformer models have demonstrated outstanding performance in the computer vision domain. However, the massive computation brought by the full attention mechanism becomes a heavy burden on memory consumption, and this memory limitation in turn reduces the possibility of improving Transformer models. To remedy this problem, we propose a novel memory-economical attention mechanism named Couplformer, which decouples the attention map into two sub-matrices and generates the alignment scores from spatial information. A series of image classification tasks at different scales is used to evaluate the effectiveness of the model. Experimental results show that, on the ImageNet-1K classification task, Couplformer can significantly reduce memory consumption by 28% compared with a regular Transformer while reaching sufficient accuracy, and it outperforms the regular Transformer by 0.92% in top-1 accuracy while occupying the same memory footprint. As a result, Couplformer can serve as an efficient backbone for vision tasks and provides a novel perspective on attention mechanisms for researchers.
There still remains an extreme performance gap between Vision Transformers (ViTs) and Convolutional Neural Networks (CNNs) when training from scratch on small datasets, which is attributed to the lack of inductive bias. In this paper, we further consider this problem and point out two weaknesses of ViTs in inductive biases, namely spatial relevance and diverse channel representation. First, on the spatial aspect, objects are locally compact and relevant, so fine-grained features need to be extracted from a token and its neighbors; however, the lack of data hinders ViTs from attending to this spatial relevance. Second, on the channel aspect, representation exhibits diversity across different channels, but scarce data does not enable ViTs to learn representations strong enough for accurate recognition. To this end, we propose the Dynamic Hybrid Vision Transformer (DHVT) as the solution to enhance the two inductive biases. On the spatial aspect, we adopt a hybrid structure in which convolution is integrated into the patch embedding and multi-layer perceptron modules, forcing the model to capture token features as well as their neighboring features. On the channel aspect, we introduce a dynamic feature aggregation module in the MLP and a brand new "head token" design in the multi-head self-attention module to help re-calibrate channel representation and make different channel group representations interact with each other. The fusion of weak channel representations forms a representation strong enough for classification. With this design, we successfully eliminate the performance gap between CNNs and ViTs, and our DHVT achieves a series of state-of-the-art performances with a lightweight model: 85.68% on CIFAR-100 with 22.8M parameters and 82.3% on ImageNet-1K with 24.0M parameters. Code is available at https://github.com/ArieSeirack/DHVT.
Vision Transformers have shown great promise recently for many vision tasks due to the insightful architecture design and attention mechanism. By revisiting the self-attention responses in Transformers, we empirically observe two interesting issues. First, Vision Transformers present a query-irrelevant behavior at deep layers, where the attention maps exhibit nearly consistent contexts in global scope, regardless of the query patch position (and are also head-irrelevant). Second, the attention maps are intrinsically sparse, with a few tokens dominating the attention weights; introducing the knowledge from ConvNets would largely smooth the attention and enhance the performance. Motivated by the above observations, we generalize the self-attention formulation to abstract a query-irrelevant global context directly and further integrate the global context into convolutions. The resulting model, a Fully Convolutional Vision Transformer (i.e., FCViT), purely consists of convolutional layers and firmly inherits the merits of both the attention mechanism and convolutions, including the dynamic property, weight sharing, and short- and long-range feature modeling. Experimental results demonstrate the effectiveness of FCViT. With less than 14M parameters, our FCViT-S12 outperforms the related work ResT-Lite by 3.7% top-1 accuracy on ImageNet-1K. When scaling FCViT to larger models, we still perform better than the previous state-of-the-art ConvNeXt with even fewer parameters. FCViT-based models also demonstrate promising transferability to downstream tasks, like object detection, instance segmentation, and semantic segmentation. Codes and models are made available at: https://github.com/ma-xu/FCViT.
Transformers have been recently adapted for large scale image classification, achieving high scores that shake up the long supremacy of convolutional neural networks. However, the optimization of image transformers has been little studied so far. In this work, we build and optimize deeper transformer networks for image classification. In particular, we investigate the interplay of architecture and optimization of such dedicated transformers. We make two transformer architecture changes that significantly improve the accuracy of deep transformers. This leads us to produce models whose performance does not saturate early with more depth; for instance, we obtain 86.5% top-1 accuracy on Imagenet when training with no external data, thus attaining the current SOTA with fewer FLOPs and parameters. Moreover, our best model establishes the new state of the art on Imagenet with Reassessed labels and Imagenet-V2 / match frequency, in the setting with no additional training data. We share our code and models.