We introduce AdaViT, a method that adaptively adjusts the inference cost of vision transformers (ViT) for images of different complexity. AdaViT achieves this by automatically reducing the number of tokens processed in the vision transformer as inference proceeds. We reformulate Adaptive Computation Time (ACT) for this task, extending it to discard redundant spatial tokens. The attractive architectural properties of vision transformers enable our adaptive token reduction mechanism to speed up inference without modifying the network architecture or inference hardware. We demonstrate that AdaViT requires no extra parameters or sub-networks for halting, as we base the learning of adaptive halting on the original network parameters. We further introduce a distributional prior regularization that stabilizes training compared to prior ACT approaches. On the image classification task (ImageNet-1K), we show that our proposed AdaViT yields high efficacy in filtering informative spatial features and cutting down the overall compute. The proposed method improves the throughput of DeiT-Tiny by 62% and DeiT-Small by 38% with only a 0.3% accuracy drop, outperforming prior art by a large margin.
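A minimal PyTorch sketch of ACT-style token halting as described above; reading the halting score from the first embedding channel and all names are illustrative assumptions rather than the paper's exact implementation:

```python
import torch

def adaptive_token_halting(blocks, tokens, eps=0.01):
    """Run transformer blocks while halting tokens whose cumulative halting
    score (read here from the first embedding channel) exceeds 1 - eps.
    `blocks` is an iterable of modules mapping (B, N, D) -> (B, N, D)."""
    B, N, D = tokens.shape
    cum_score = torch.zeros(B, N, device=tokens.device)
    active = torch.ones(B, N, dtype=torch.bool, device=tokens.device)
    for blk in blocks:
        out = blk(tokens)
        # Halted tokens are frozen; only active ones keep updating.
        tokens = torch.where(active.unsqueeze(-1), out, tokens)
        # Halting probability taken from channel 0 of each token embedding.
        h = torch.sigmoid(tokens[..., 0])
        cum_score = cum_score + h * active.float()
        active = active & (cum_score < 1.0 - eps)
        if not active.any():
            break
    return tokens

# toy usage with stand-in blocks
blocks = [torch.nn.Sequential(torch.nn.Linear(64, 64), torch.nn.GELU()) for _ in range(4)]
x = torch.randn(2, 16, 64)
print(adaptive_token_halting(blocks, x).shape)   # torch.Size([2, 16, 64])
```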
While state-of-the-art vision transformer models achieve promising results in image classification, they are computationally very expensive and require many GFLOPs. Although the GFLOPs of a vision transformer can be decreased by reducing the number of tokens in the network, there is no setting that is optimal for all input images. In this work, we therefore introduce a differentiable parameter-free Adaptive Token Sampling (ATS) module, which can be plugged into any existing vision transformer architecture. ATS adaptively samples significant tokens by scoring them. As a result, the number of tokens is no longer static but varies for each input image. By integrating ATS as an additional layer within current transformer blocks, we can convert them into much more efficient vision transformers with an adaptive number of tokens. Since ATS is a parameter-free module, it can be added to off-the-shelf pre-trained vision transformers as a plug-and-play module, thus reducing their GFLOPs without any additional training. However, thanks to its differentiable design, one can also train a vision transformer equipped with ATS. We evaluate it on the ImageNet dataset by adding it to multiple state-of-the-art vision transformers. Our evaluations show that the proposed module improves the state of the art by reducing the computational cost (GFLOPs) by 37% while preserving accuracy.
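A rough PyTorch sketch of attention-based token scoring with inverse-transform sampling, the core of the parameter-free sampling idea; the score normalization and the fixed quantile grid are assumptions, not the exact ATS procedure:

```python
import torch

def adaptive_token_sampling(tokens, cls_attn, n_samples=16):
    """Parameter-free token sampling sketch: keep the CLS token and draw patch
    tokens with probability proportional to their CLS-attention scores
    (inverse-transform sampling over the CDF).
    tokens:   (B, N, D) with tokens[:, 0] the CLS token.
    cls_attn: (B, N-1)  attention of CLS over the patch tokens."""
    B, N, D = tokens.shape
    scores = cls_attn / cls_attn.sum(dim=-1, keepdim=True)        # (B, N-1)
    cdf = torch.cumsum(scores, dim=-1)                            # (B, N-1)
    # Fixed quantile grid; sampled indices repeat for "easy" images,
    # so the number of unique kept tokens adapts per image.
    u = torch.linspace(0.0, 1.0, n_samples, device=tokens.device)
    idx = torch.searchsorted(cdf, u.expand(B, -1).contiguous())   # (B, n_samples)
    idx = idx.clamp(max=N - 2) + 1                                # shift past CLS
    sampled = torch.gather(tokens, 1, idx.unsqueeze(-1).expand(-1, -1, D))
    return torch.cat([tokens[:, :1], sampled], dim=1)             # (B, 1+n_samples, D)

x = torch.randn(2, 197, 192)
attn = torch.rand(2, 196)
print(adaptive_token_sampling(x, attn).shape)                     # torch.Size([2, 17, 192])
```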
In this paper, we propose a new approach for model acceleration by exploiting spatial sparsity in visual data. We observe that the final prediction in vision transformers is based only on a subset of the most informative tokens, which is sufficient for accurate image recognition. Based on this observation, we propose a dynamic token sparsification framework to prune redundant tokens progressively and dynamically based on the input, in order to accelerate vision transformers. Specifically, we devise a lightweight prediction module to estimate the importance score of each token given the current features. The module is added to different layers to prune redundant tokens hierarchically. While the framework is inspired by our observation of the sparse attention in vision transformers, we find that the idea of adaptive and asymmetric computation can be a general solution for accelerating various architectures. We extend our method to hierarchical models, including CNNs and hierarchical vision transformers, as well as to more complex dense prediction tasks, by formulating a more generic dynamic spatial sparsification framework with progressive sparsification and asymmetric computation for different spatial locations. By applying lightweight fast paths to less informative features and more expressive slow paths to more important locations, we can maintain the structure of the feature maps while significantly reducing the overall computation. Extensive experiments demonstrate the effectiveness of our framework on various modern architectures and different visual recognition tasks. Our results clearly show that dynamic spatial sparsification offers a new and more effective dimension for model acceleration. Code is available at https://github.com/raoyongming/dynamicvit
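The lightweight prediction module lends itself to a short sketch; the hidden size, the pooled-global-feature input, and the Gumbel-Softmax relaxation shown here are plausible assumptions rather than the released DynamicViT code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TokenImportancePredictor(nn.Module):
    """Lightweight module predicting a keep/drop decision per token: a small MLP
    on local + pooled global features, with Gumbel-Softmax for a differentiable
    binary decision at training time."""
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.LayerNorm(2 * dim),
            nn.Linear(2 * dim, hidden),
            nn.GELU(),
            nn.Linear(hidden, 2),          # logits for [drop, keep]
        )

    def forward(self, x, prev_mask):
        # x: (B, N, D), prev_mask: (B, N) keep mask from earlier pruning stages
        global_feat = (x * prev_mask.unsqueeze(-1)).sum(1) / prev_mask.sum(1, keepdim=True).clamp(min=1e-6)
        feats = torch.cat([x, global_feat.unsqueeze(1).expand_as(x)], dim=-1)
        logits = self.mlp(feats)                                  # (B, N, 2)
        if self.training:
            keep = F.gumbel_softmax(logits, hard=True, dim=-1)[..., 1]
        else:
            keep = (logits[..., 1] > logits[..., 0]).float()
        return keep * prev_mask              # tokens dropped earlier stay dropped

pred = TokenImportancePredictor(dim=192)
mask = pred(torch.randn(2, 196, 192), torch.ones(2, 196))
print(mask.shape, mask.sum(dim=1))           # (2, 196) and kept-token count per image
```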
Vision transformers (ViTs) have achieved many breakthroughs in computer vision tasks. However, there is considerable redundancy in the spatial dimension of the input images, which leads to heavy computational costs. In this paper, we therefore propose a coarse-to-fine vision transformer (CF-ViT) to relieve the computational burden while retaining performance. Our proposed CF-ViT is motivated by two important observations about modern ViT models: (1) coarse-grained patch splitting can locate the informative regions of an input image; (2) most images can be well recognized by a ViT model using a short token sequence. Our CF-ViT therefore implements network inference in a two-stage manner. At the coarse inference stage, an input image is split into a short patch sequence for computationally economical classification. If the image is not well recognized, its informative patches are identified and further re-split at a fine-grained granularity. Extensive experiments demonstrate the efficacy of our CF-ViT. For example, without any compromise in performance, CF-ViT reduces the FLOPs of LV-ViT by 53% and also achieves a 2.01× higher throughput.
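A loose sketch of the two-stage control flow (confidence-thresholded early exit); the max-softmax criterion, the threshold value, and the stand-in models are assumptions, and the informative-patch re-splitting logic is omitted:

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def coarse_to_fine_inference(coarse_model, fine_model, image, tau=0.7):
    """Classify with a short (coarse-patch) token sequence first; only fall back
    to the fine-grained pass when the coarse prediction is not confident."""
    logits = coarse_model(image)                  # cheap pass, few tokens
    conf, pred = F.softmax(logits, dim=-1).max(dim=-1)
    if conf.item() >= tau:                        # "well recognized" -> stop early
        return pred, "coarse"
    logits = fine_model(image)                    # fine-grained pass
    return logits.argmax(dim=-1), "fine"

# toy usage with stand-in models (single image, 1000 classes)
coarse = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 1000))
fine = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 1000))
pred, stage = coarse_to_fine_inference(coarse, fine, torch.randn(1, 3, 32, 32))
print(pred.shape, stage)
```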
Built on top of the self-attention mechanism, vision transformers have recently demonstrated remarkable performance on a variety of vision tasks. While achieving excellent performance, they still require relatively intensive computation that scales up drastically as the number of patches, self-attention heads, and transformer blocks increases. In this paper, we argue that because of the large variation among images, their need for modeling long-range dependencies between patches differs. To this end, we introduce AdaViT, an adaptive computation framework that learns to derive usage policies on which patches, self-attention heads, and transformer blocks to use throughout the backbone on a per-input basis, aiming to improve the inference efficiency of vision transformers for image recognition with a minimal drop in accuracy. Optimized jointly with the transformer backbone in an end-to-end manner, a lightweight decision network is attached to the backbone to produce decisions on the fly. Extensive experiments on ImageNet show that our method obtains more than a 2× improvement in efficiency compared to state-of-the-art vision transformers with only a 0.8% drop in accuracy, achieving good efficiency/accuracy trade-offs under different computational budgets. We further conduct quantitative and qualitative analyses of the learned usage policies and provide more insight into the redundancy of vision transformers.
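A hypothetical sketch of a lightweight decision head emitting per-block and per-head usage policies with Gumbel-Softmax; the layer sizes, pooling, and policy granularity shown are assumptions, not the paper's decision network:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class UsagePolicyNet(nn.Module):
    """From pooled features, emit binary keep/skip decisions per transformer
    block and per attention head, discretized with Gumbel-Softmax so the whole
    pipeline stays end-to-end trainable."""
    def __init__(self, dim, num_blocks, num_heads):
        super().__init__()
        self.block_head = nn.Linear(dim, num_blocks * 2)
        self.attn_head = nn.Linear(dim, num_blocks * num_heads * 2)
        self.num_blocks, self.num_heads = num_blocks, num_heads

    def forward(self, tokens):
        pooled = tokens.mean(dim=1)                                         # (B, D)
        b_logits = self.block_head(pooled).view(-1, self.num_blocks, 2)
        h_logits = self.attn_head(pooled).view(-1, self.num_blocks, self.num_heads, 2)
        keep_block = F.gumbel_softmax(b_logits, hard=True, dim=-1)[..., 1]  # (B, blocks)
        keep_head = F.gumbel_softmax(h_logits, hard=True, dim=-1)[..., 1]   # (B, blocks, heads)
        return keep_block, keep_head

policy = UsagePolicyNet(dim=192, num_blocks=12, num_heads=3)
kb, kh = policy(torch.randn(2, 197, 192))
print(kb.shape, kh.shape)   # (2, 12) and (2, 12, 3)
```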
We introduce the notion of a Patch Sampling Schedule (PSS), which varies the number of vision transformer (ViT) patches used per batch during training. Since all patches are not equally important for most vision objectives (e.g., classification), we argue that less important patches can be used in fewer training iterations, leading to shorter training time with minimal impact on performance. Additionally, we observe that training with a PSS makes a ViT more robust to a wider range of patch sampling rates at inference time. This allows for a fine-grained, dynamic trade-off between throughput and accuracy during inference. We evaluate ViTs with PSSs on ImageNet, both trained from scratch and pre-trained with a reconstruction loss function. For the pre-trained model, we trade a 0.26% reduction in classification accuracy for a reduction in training time from 25 hours to 17 hours compared with using all patches in every iteration. Code, model checkpoints, and logs are available at https://github.com/bradmcdanel/pss.
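A toy sketch of a patch sampling schedule and per-batch patch subsampling; the linear schedule shape and uniform random subsampling are assumptions for illustration only:

```python
import torch

def patch_sampling_schedule(step, total_steps, min_patches=98, max_patches=196):
    """Toy schedule: linearly grow the number of patches used per batch over
    training. Any function returning an int in [min_patches, max_patches]
    would fit the same interface."""
    frac = step / max(total_steps - 1, 1)
    return int(min_patches + frac * (max_patches - min_patches))

def subsample_patches(patch_tokens, n_keep):
    """Randomly keep n_keep of the N patch tokens for this batch."""
    B, N, D = patch_tokens.shape
    idx = torch.randperm(N, device=patch_tokens.device)[:n_keep]
    return patch_tokens[:, idx]

tokens = torch.randn(8, 196, 192)
for step in (0, 5000, 9999):
    n = patch_sampling_schedule(step, total_steps=10000)
    print(step, subsample_patches(tokens, n).shape)
```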
We present Multiscale Vision Transformers (MViT) for video and image recognition, by connecting the seminal idea of multiscale feature hierarchies with transformer models. Multiscale Transformers have several channel-resolution scale stages. Starting from the input resolution and a small channel dimension, the stages hierarchically expand the channel capacity while reducing the spatial resolution. This creates a multiscale pyramid of features with early layers operating at high spatial resolution to model simple low-level visual information, and deeper layers at spatially coarse, but complex, high-dimensional features. We evaluate this fundamental architectural prior for modeling the dense nature of visual signals for a variety of video recognition tasks where it outperforms concurrent vision transformers that rely on large scale external pre-training and are 5-10× more costly in computation and parameters. We further remove the temporal dimension and apply our model for image classification where it outperforms prior work on vision transformers. Code is available at: https://github.com/facebookresearch/SlowFast.
Vision transformers have emerged as a new paradigm in computer vision, showing excellent performance but at the cost of expensive computation. Image token pruning is one of the main approaches to ViT compression, because the complexity is quadratic in the number of tokens and many tokens containing only background regions do not truly contribute to the final prediction. Existing works either rely on additional modules to score the importance of individual tokens or apply a fixed-ratio pruning strategy to all input instances. In this work, we propose an adaptive sparse token pruning framework with minimal cost. Our method is based on learnable thresholds and leverages multi-head self-attention to evaluate token informativeness with little additional computation. Specifically, we first propose an inexpensive attention-head-importance-weighted class-attention scoring mechanism. Then, learnable parameters are inserted into the ViT as thresholds to distinguish informative tokens from unimportant ones. By comparing token attention scores with the thresholds, we can hierarchically discard useless tokens and thus accelerate inference. The learnable thresholds are optimized with budget-aware training to balance accuracy and complexity, so that the corresponding pruning configuration is applied to each input instance. Extensive experiments demonstrate the effectiveness of our approach. For example, our method improves the throughput of DeiT-S by 50% with only a 0.2% drop in top-1 accuracy, achieving a better trade-off between accuracy and latency than previous methods.
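A sketch of head-weighted class-attention scoring against a learnable threshold with a straight-through estimator; the head-weight parameterization, the sigmoid surrogate, and the initialization are assumptions:

```python
import torch
import torch.nn as nn

class ThresholdTokenPruner(nn.Module):
    """Score patch tokens by class attention, weighted per head, and keep those
    above a learnable per-layer threshold. A straight-through estimator keeps
    the hard keep/drop mask differentiable."""
    def __init__(self, num_heads, init_threshold=0.0):
        super().__init__()
        self.head_weight = nn.Parameter(torch.ones(num_heads) / num_heads)
        self.threshold = nn.Parameter(torch.tensor(init_threshold))

    def forward(self, cls_attn):
        # cls_attn: (B, H, N-1), attention of the class token over patch tokens
        w = torch.softmax(self.head_weight, dim=0)                 # (H,)
        score = torch.einsum('h,bhn->bn', w, cls_attn)             # (B, N-1)
        score = score / score.sum(dim=-1, keepdim=True)            # normalize per image
        soft = torch.sigmoid((score - self.threshold) * 100.0)     # steep surrogate
        hard = (score > self.threshold).float()
        return hard + (soft - soft.detach())                       # straight-through mask

pruner = ThresholdTokenPruner(num_heads=6, init_threshold=1.0 / 196)
mask = pruner(torch.rand(2, 6, 196))
print(mask.shape, mask.sum(dim=1))    # (2, 196) and a per-image kept-token count
```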
Dynamic neural networks are an emerging research topic in deep learning. Compared to static models, which have fixed computational graphs and parameters at the inference stage, dynamic networks can adapt their structures or parameters to different inputs, leading to notable advantages in terms of accuracy, computational efficiency, adaptiveness, etc. In this survey, we comprehensively review this rapidly developing area by dividing dynamic networks into three main categories: 1) instance-wise dynamic models that process each instance with data-dependent architectures or parameters; 2) spatial-wise dynamic networks that conduct adaptive computation with respect to different spatial locations of image data; and 3) temporal-wise dynamic models that perform adaptive inference along the temporal dimension of sequential data such as videos and text. The important research problems of dynamic networks, e.g., architecture design, decision-making schemes, optimization techniques, and applications, are reviewed systematically. Finally, we discuss the open problems in this field together with interesting future research directions.
Vision transformers (ViTs) have become popular structures and have outperformed convolutional neural networks (CNNs) on various vision tasks. However, such powerful transformers bring a huge computational burden, and the essential obstacle behind this is the exhausting token-to-token comparison. To alleviate this, we delve deeply into the model properties of ViTs and observe that they exhibit sparse attention with high token similarity. This intuitively points us to a feasible, structure-agnostic dimension, the token number, for reducing the computational cost. Based on this exploration, we propose a generic self-slimmed learning approach for vanilla ViTs, namely SiT. Specifically, we first design a novel Token Slimming Module (TSM), which can boost the inference efficiency of ViTs through dynamic token aggregation. Different from token hard dropping, our TSM softly integrates redundant tokens into fewer informative ones; it can dynamically zoom visual attention without cutting off discriminative token relations in the images. Furthermore, we introduce a concise Dense Knowledge Distillation (DKD) framework, which densely transfers unorganized token information in a flexible auto-encoder manner. Thanks to the similar structure of teacher and student, our framework can effectively leverage structural knowledge for better convergence. Finally, we conduct extensive experiments to evaluate our SiT. They show that our method can speed up ViTs by 1.7× with a negligible accuracy drop, and even speed up ViTs by 3.6× while maintaining 97% of their performance. Surprisingly, by simply arming LV-ViT with our SiT, we achieve new state-of-the-art performance on ImageNet, surpassing all the CNNs and ViTs in the recent literature.
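A sketch of soft token aggregation in the spirit of the token slimming idea: a small scorer produces an assignment from N input tokens to K slimmed tokens, so no token is hard-dropped. Layer sizes are assumptions:

```python
import torch
import torch.nn as nn

class TokenSlimmingModule(nn.Module):
    """Produce a soft assignment from N input tokens to K slimmed tokens and
    form each slimmed token as a weighted mixture of all input tokens."""
    def __init__(self, dim, n_out, hidden=None):
        super().__init__()
        hidden = hidden or dim // 2
        self.score = nn.Sequential(
            nn.LayerNorm(dim),
            nn.Linear(dim, hidden),
            nn.GELU(),
            nn.Linear(hidden, n_out),
        )

    def forward(self, x):
        # x: (B, N, D) -> (B, K, D)
        logits = self.score(x)                       # (B, N, K)
        assign = torch.softmax(logits, dim=1)        # each slimmed token mixes over all N
        return torch.einsum('bnk,bnd->bkd', assign, x)

tsm = TokenSlimmingModule(dim=192, n_out=98)
print(tsm(torch.randn(2, 196, 192)).shape)           # torch.Size([2, 98, 192])
```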
Transformers have been recently adapted for large scale image classification, achieving high scores shaking up the long supremacy of convolutional neural networks. However, the optimization of image transformers has been little studied so far. In this work, we build and optimize deeper transformer networks for image classification. In particular, we investigate the interplay of architecture and optimization of such dedicated transformers. We make two transformers architecture changes that significantly improve the accuracy of deep transformers. This leads us to produce models whose performance does not saturate early with more depth, for instance we obtain 86.5% top-1 accuracy on Imagenet when training with no external data, we thus attain the current SOTA with less FLOPs and parameters. Moreover, our best model establishes the new state of the art on Imagenet with Reassessed labels and Imagenet-V2 / match frequency, in the setting with no additional training data. We share our code and models.
We design a family of image classification architectures that optimize the trade-off between accuracy and efficiency in a high-speed regime. Our work exploits recent findings in attention-based architectures, which are competitive on highly parallel processing hardware. We revisit principles from the extensive literature on convolutional neural networks to apply them to transformers, in particular activation maps with decreasing resolutions. We also introduce the attention bias, a new way to integrate positional information in vision transformers. As a result, we propose LeViT: a hybrid neural network for fast inference image classification. We consider different measures of efficiency on different hardware platforms, so as to best reflect a wide range of application scenarios. Our extensive experiments empirically validate our technical choices and show they are suitable to most architectures. Overall, LeViT significantly outperforms existing convnets and vision transformers with respect to the speed/accuracy tradeoff. For example, at 80% ImageNet top-1 accuracy, LeViT is 5 times faster than EfficientNet on CPU. We release the code at https://github.com/facebookresearch/LeViT.
We present in this paper a new architecture, named Convolutional vision Transformer (CvT), that improves Vision Transformer (ViT) in performance and efficiency by introducing convolutions into ViT to yield the best of both designs. This is accomplished through two primary modifications: a hierarchy of Transformers containing a new convolutional token embedding, and a convolutional Transformer block leveraging a convolutional projection. These changes introduce desirable properties of convolutional neural networks (CNNs) to the ViT architecture (i.e. shift, scale, and distortion invariance) while maintaining the merits of Transformers (i.e. dynamic attention, global context, and better generalization). We validate CvT by conducting extensive experiments, showing that this approach achieves state-of-the-art performance over other Vision Transformers and ResNets on ImageNet-1k, with fewer parameters and lower FLOPs. In addition, performance gains are maintained when pretrained on larger datasets (e.g. ImageNet-22k) and fine-tuned to downstream tasks. Pretrained on ImageNet-22k, our CvT-W24 obtains a top-1 accuracy of 87.7% on the ImageNet-1k val set. Finally, our results show that the positional encoding, a crucial component in existing Vision Transformers, can be safely removed in our model, simplifying the design for higher resolution vision tasks. Code will be released at https://github.com/leoxiaobin/CvT.
Data mixing strategies (e.g., CutMix) have shown the ability to greatly improve the performance of convolutional neural networks (CNNs). They mix two images as inputs for training and assign them with a mixed label with the same ratio. While they are shown effective for vision transformers (ViTs), we identify a token fluctuation phenomenon that has suppressed the potential of data mixing strategies. We empirically observe that the contributions of input tokens fluctuate as forward propagating, which might induce a different mixing ratio in the output tokens. The training target computed by the original data mixing strategy can thus be inaccurate, resulting in less effective training. To address this, we propose a token-label alignment (TL-Align) method to trace the correspondence between transformed tokens and the original tokens to maintain a label for each token. We reuse the computed attention at each layer for efficient token-label alignment, introducing only negligible additional training costs. Extensive experiments demonstrate that our method improves the performance of ViTs on image classification, semantic segmentation, object detection, and transfer learning tasks. Code is available at: https://github.com/Euphoria16/TL-Align.
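A sketch of the label-propagation idea: per-token label vectors are mixed with the same head-averaged attention that mixes the tokens, plus a residual term mirroring the skip connection. The residual weight and the normalization are assumptions:

```python
import torch

def align_token_labels(token_labels, attn, residual_weight=0.5):
    """Propagate per-token label vectors with the layer's head-averaged attention.
    token_labels: (B, N, C)  one soft label distribution per token
    attn:         (B, H, N, N) attention weights of the current layer"""
    attn_avg = attn.mean(dim=1)                                   # (B, N, N)
    propagated = torch.bmm(attn_avg, token_labels)                # mix labels like tokens
    labels = residual_weight * token_labels + (1 - residual_weight) * propagated
    return labels / labels.sum(dim=-1, keepdim=True).clamp(min=1e-6)

B, H, N, C = 2, 3, 197, 1000
labels = torch.full((B, N, C), 1.0 / C)
attn = torch.softmax(torch.randn(B, H, N, N), dim=-1)
print(align_token_labels(labels, attn).shape)    # torch.Size([2, 197, 1000])
```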
Mixup-based augmentation has been found to be effective for generalization during training, especially for vision transformers (ViTs), since they easily overfit. However, previous mixup-based methods have an underlying prior assumption that the linearly interpolated ratio of targets should remain the same as the ratio proposed for the inputs. This can lead to the strange phenomenon that, due to the random process in augmentation, there is sometimes no valid object in the mixed image, yet there is still a response in the label space. To bridge this gap between the input and label spaces, we propose TransMix, which mixes labels based on the attention maps of vision transformers. The confidence of a label is larger if its corresponding input image is weighted more heavily by the attention map. TransMix is embarrassingly simple and can be implemented in a few lines of code without introducing any extra parameters or FLOPs to ViT-based models. Experimental results show that our method can consistently improve various ViT-based models on ImageNet classification. After being pre-trained with TransMix on ImageNet, the ViT-based models also demonstrate better transferability to semantic segmentation, object detection, and instance segmentation. TransMix also exhibits stronger robustness when evaluated on 4 different benchmarks. Code will be made publicly available at https://github.com/beckschen/transmix.
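A sketch of attention-based label mixing: the mixing coefficient is re-estimated from how much class-token attention lands on the pasted region. Which attention layer is used and the head averaging are assumptions:

```python
import torch

def transmix_lambda(cls_attn, cutmix_mask):
    """Re-weight the CutMix ratio by the class-token attention mass that falls
    on the pasted patches.
    cls_attn:    (B, N) class-token attention over the N patch tokens
    cutmix_mask: (B, N) 1 where the patch comes from the pasted image"""
    attn = cls_attn / cls_attn.sum(dim=-1, keepdim=True)
    lam = (attn * cutmix_mask).sum(dim=-1)        # attention mass on pasted patches
    return lam                                    # mixed target: lam * y_b + (1 - lam) * y_a

B, N = 4, 196
cls_attn = torch.rand(B, N)
mask = (torch.rand(B, N) > 0.7).float()
print(transmix_lambda(cls_attn, mask))            # per-image mixing coefficients in [0, 1]
```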
We present a neat yet effective recursive operation on vision transformers that can improve parameter utilization without involving additional parameters. This is achieved by sharing weights across the depth of the transformer network. The proposed method can obtain a substantial gain (~2%) using only a naive recursive operation, requires no special or sophisticated knowledge of network design principles, and introduces minimal computational overhead to the training procedure. To reduce the additional computation caused by the recursive operations while maintaining superior accuracy, we further introduce an approximating method through multiple sliced group self-attentions across the recursive layers, which can reduce the cost consumption by 10-30% with minimal performance loss. We call our model the Sliced Recursive Transformer (SReT), which is compatible with a broad range of other designs for efficient vision transformers. Our best model establishes a significant improvement on ImageNet over state-of-the-art methods while containing fewer parameters. The proposed sliced recursive operation allows us to build a transformer with more than 100 or even 1000 layers that still has a small size (13-15M), avoiding the difficulties in optimization that arise when the model size is too large. The flexible scalability shows great potential for scaling up and constructing extremely deep and large-dimensional vision transformers. Our code and models are available at https://github.com/szq0214/sret.
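A sketch of the naive recursion that shares one block's weights across depth; the block internals here are a generic pre-norm transformer block, not the sliced group self-attention of SReT:

```python
import torch
import torch.nn as nn

class RecursiveBlock(nn.Module):
    """One transformer block applied `loops` times, so effective depth grows
    without adding parameters."""
    def __init__(self, dim, num_heads=4, loops=2):
        super().__init__()
        self.loops = loops
        self.norm1, self.norm2 = nn.LayerNorm(dim), nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, x):
        for _ in range(self.loops):                  # same weights, reused across "depth"
            h = self.norm1(x)
            x = x + self.attn(h, h, h, need_weights=False)[0]
            x = x + self.mlp(self.norm2(x))
        return x

blk = RecursiveBlock(dim=192, loops=3)
print(blk(torch.randn(2, 197, 192)).shape)          # torch.Size([2, 197, 192])
print(sum(p.numel() for p in blk.parameters()))     # parameter count is loop-independent
```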
Transformer architectures are now central to sequence modeling tasks. At their core is the attention mechanism, which enables effective modeling of long-term dependencies in a sequence. Recently, transformers have been successfully applied in the computer vision domain, where 2D images are first segmented into patches and then treated as 1D sequences. Such linearization, however, impairs the notion of spatial locality in images, which carries important visual cues. To bridge the gap, we propose ripple attention, a sub-quadratic attention mechanism for vision transformers. Built upon recent kernel-based efficient attention mechanisms, we design a novel dynamic programming algorithm that weights the contributions of different tokens to a query with respect to their relative spatial distances in the 2D space in linear observed time. Extensive experiments and analyses demonstrate the effectiveness of ripple attention on various visual tasks.
The recently developed vision transformer (ViT) has achieved promising results on image classification compared to convolutional neural networks. Inspired by this, in this paper, we study how to learn multi-scale feature representations in transformer models for image classification. To this end, we propose a dual-branch transformer to combine image patches (i.e., tokens in a transformer) of different sizes to produce stronger image features. Our approach processes small-patch and large-patch tokens with two separate branches of different computational complexity and these tokens are then fused purely by attention multiple times to complement each other. Furthermore, to reduce computation, we develop a simple yet effective token fusion module based on cross attention, which uses a single token for each branch as a query to exchange information with other branches. Our proposed cross-attention only requires linear time for both computational and memory complexity instead of quadratic time otherwise. Extensive experiments demonstrate that our approach performs better than or on par with several concurrent works on vision transformer, in addition to efficient CNN models. For example, on the ImageNet1K dataset, with some architectural changes, our approach outperforms the recent DeiT by a large margin of 2% with a small to moderate increase in FLOPs and model parameters. Our source codes and models are available at https://github.com/IBM/CrossViT.
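A sketch of CLS-as-query cross attention between branches, which is linear in the number of tokens because only a single query is used; the projection and normalization layout are assumptions:

```python
import torch
import torch.nn as nn

class CrossAttentionFusion(nn.Module):
    """The CLS token of one branch attends to the patch tokens of the other
    branch, so the cost is linear in the token count (a single query)."""
    def __init__(self, dim, num_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm_q = nn.LayerNorm(dim)
        self.norm_kv = nn.LayerNorm(dim)

    def forward(self, cls_token, other_tokens):
        # cls_token: (B, 1, D) from branch A; other_tokens: (B, M, D) from branch B
        q = self.norm_q(cls_token)
        kv = self.norm_kv(other_tokens)
        fused, _ = self.attn(q, kv, kv, need_weights=False)
        return cls_token + fused                     # residual update of the CLS token

fusion = CrossAttentionFusion(dim=192)
cls_a = torch.randn(2, 1, 192)
tokens_b = torch.randn(2, 401, 192)                  # e.g. small-patch branch tokens
print(fusion(cls_a, tokens_b).shape)                 # torch.Size([2, 1, 192])
```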
Although image transformers have shown competitive results with convolutional neural networks in computer vision tasks, the lack of inductive biases such as locality still poses problems in terms of model efficiency, especially for embedded applications. In this work, we address this issue by introducing attention masks to incorporate spatial locality into the self-attention heads. Local dependencies are captured efficiently by the masked attention heads, together with global dependencies captured by the unmasked attention heads. With the Masked Attention Image Transformer (MAiT), top-1 accuracy increases by 1.7% compared to CaiT, and throughput increases by 1.5× compared to Swin. Encoding locality with attention masks is model-agnostic, so it applies to monolithic, hierarchical, or other novel transformer architectures.
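A sketch of building a locality mask for the masked attention heads (class tokens left unrestricted); the Chebyshev-neighborhood definition and the radius are assumptions:

```python
import torch

def local_attention_mask(grid_h, grid_w, radius=1, num_cls_tokens=1):
    """Build a boolean mask (True = blocked) restricting patch-to-patch attention
    to a (2*radius+1)^2 spatial neighborhood; class tokens attend everywhere.
    Intended for the masked heads only; unmasked heads keep global attention."""
    ys, xs = torch.meshgrid(torch.arange(grid_h), torch.arange(grid_w), indexing='ij')
    coords = torch.stack([ys.flatten(), xs.flatten()], dim=-1).float()   # (P, 2)
    dist = (coords[:, None, :] - coords[None, :, :]).abs().max(dim=-1).values
    blocked_patches = dist > radius                                      # (P, P)
    n = num_cls_tokens + grid_h * grid_w
    mask = torch.zeros(n, n, dtype=torch.bool)
    mask[num_cls_tokens:, num_cls_tokens:] = blocked_patches
    return mask   # usable as attn_mask in e.g. nn.MultiheadAttention for the local heads

mask = local_attention_mask(14, 14, radius=2)
print(mask.shape, (~mask[1:, 1:]).float().mean())    # (197, 197) and the local density
```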
Vision Transformers have shown great promise recently for many vision tasks due to the insightful architecture design and attention mechanism. By revisiting the self-attention responses in Transformers, we empirically observe two interesting issues. First, Vision Transformers present a query-irrelevant behavior at deep layers, where the attention maps exhibit nearly consistent contexts in global scope, regardless of the query patch position (also head-irrelevant). Second, the attention maps are intrinsically sparse; few tokens dominate the attention weights, and introducing the knowledge from ConvNets would largely smooth the attention and enhance the performance. Motivated by above observations, we generalize self-attention formulation to abstract a query-irrelevant global context directly and further integrate the global context into convolutions. The resulting model, a Fully Convolutional Vision Transformer (i.e., FCViT), purely consists of convolutional layers and firmly inherits the merits of both attention mechanism and convolutions, including dynamic property, weight sharing, and short- and long-range feature modeling, etc. Experimental results demonstrate the effectiveness of FCViT. With less than 14M parameters, our FCViT-S12 outperforms related work ResT-Lite by 3.7% top-1 accuracy on ImageNet-1K. When scaling FCViT to larger models, we still perform better than previous state-of-the-art ConvNeXt with even fewer parameters. FCViT-based models also demonstrate promising transferability to downstream tasks, like object detection, instance segmentation, and semantic segmentation. Codes and models are made available at: https://github.com/ma-xu/FCViT.