Data mixing strategies (e.g., CutMix) have shown the ability to greatly improve the performance of convolutional neural networks (CNNs). They mix two images as inputs for training and assign them a label mixed with the same ratio. While they are shown to be effective for vision transformers (ViTs), we identify a token fluctuation phenomenon that suppresses the potential of data mixing strategies. We empirically observe that the contributions of input tokens fluctuate during forward propagation, which might induce a different mixing ratio in the output tokens. The training target computed by the original data mixing strategy can thus be inaccurate, resulting in less effective training. To address this, we propose a token-label alignment (TL-Align) method that traces the correspondence between transformed tokens and the original tokens to maintain a label for each token. We reuse the computed attention at each layer for efficient token-label alignment, introducing only negligible additional training costs. Extensive experiments demonstrate that our method improves the performance of ViTs on image classification, semantic segmentation, object detection, and transfer learning tasks. Code is available at: https://github.com/Euphoria16/TL-Align.
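As a rough illustration of the alignment idea, the following PyTorch sketch (hypothetical shapes and function name, not the authors' exact implementation) propagates per-token soft labels through an attention matrix so that each output token inherits a label mixture reflecting its actual input contributions; the real method additionally handles residual connections and other token transformations.

```python
import torch

def align_token_labels(token_labels, attn):
    """Propagate per-token soft labels through one attention layer.

    token_labels: (B, N, num_classes) soft label attached to each input token
    attn:         (B, num_heads, N, N) attention weights of that layer
    Returns the realigned labels for the layer's output tokens.
    """
    attn = attn.mean(dim=1)               # average the attention over heads: (B, N, N)
    return torch.bmm(attn, token_labels)  # output token i gets sum_j attn[i, j] * label_j
```

Applied layer by layer, the final per-token labels can be pooled into a training target whose mixing ratio matches the output tokens rather than the input pixels.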
Mixup-based augmentation has been found to be effective for generalizing models during training, especially for Vision Transformers (ViTs), since they can easily overfit. However, previous mixup-based methods carry an underlying prior that the linearly interpolated ratio of the targets should be kept the same as the ratio used to interpolate the inputs. This can lead to a strange phenomenon: sometimes there is no valid object in the mixed image due to the random process in augmentation, but there is still a response in the label space. To bridge this gap between the input and label spaces, we propose TransMix, which mixes labels based on the attention maps of Vision Transformers. The confidence of a label is larger if the corresponding input image is weighted more heavily by the attention map. TransMix is embarrassingly simple and can be implemented in just a few lines of code without introducing any extra parameters or FLOPs to ViT-based models. Experimental results show that our method can consistently improve various ViT-based models on ImageNet classification. After being pre-trained with TransMix on ImageNet, ViT-based models also demonstrate better transferability to semantic segmentation, object detection, and instance segmentation. TransMix also exhibits stronger robustness when evaluated on 4 different benchmarks. Code will be made publicly available at https://github.com/beckschen/transmix.
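A minimal sketch of the attention-weighted label-mixing idea (hypothetical tensor layout and names, not the official implementation): the mixing ratio is re-estimated as the share of class-token attention that falls on the patches taken from the first image.

```python
import torch

def attention_mix_ratio(cls_attn, mask_a):
    """cls_attn: (B, N) attention from the class token to each patch token.
       mask_a:   (B, N) binary mask, 1 where the patch comes from image A.
       Returns a per-sample lambda: the attention mass landing on image-A patches."""
    weights = cls_attn / cls_attn.sum(dim=1, keepdim=True)  # normalize per sample
    return (weights * mask_a).sum(dim=1)                    # re-estimated mixing ratio
```

The mixed target is then lam * y_a + (1 - lam) * y_b with this attention-derived lam instead of the area-based one.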
CutMix is a popular augmentation technique commonly used for training modern convolutional and transformer vision networks. It was originally designed to encourage convolutional neural networks (CNNs) to focus more on an image's global context rather than local information, which greatly improves the performance of CNNs. However, we find that it has limited benefits for transformer-based architectures, which naturally have a global receptive field. In this paper, we propose a novel data augmentation technique, TokenMix, to improve the performance of vision transformers. TokenMix mixes two images at the token level by partitioning the mixed region into multiple separated parts. Furthermore, we show that the mixed learning target in CutMix, a linear combination of a pair of ground-truth labels, can be inaccurate and sometimes counter-intuitive. To obtain a more suitable target, we propose to assign the target score according to the content-based neural activation maps of the two images from a pre-trained teacher model, which does not need to have high performance. With extensive experiments on various vision transformer architectures, we show that our proposed TokenMix helps vision transformers focus on the foreground area to infer the classes and enhances their robustness, with consistent performance gains. Notably, we improve DeiT-T/S/B by more than +1% ImageNet top-1 accuracy. Moreover, with longer training, TokenMix attains 81.2% top-1 accuracy on ImageNet with DeiT-S trained for 400 epochs. Code is available at https://github.com/sense-x/tokenmix.
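The token-level mixing itself can be sketched as below (a simplification with hypothetical names: the paper mixes region-aligned groups of tokens rather than fully random positions, and the final target comes from a teacher's activation maps rather than an area ratio).

```python
import torch

def token_level_mix(x_a, x_b, mix_ratio=0.5):
    """x_a, x_b: (B, N, C) patch-token sequences of two images.
    Replace a random subset of x_a's tokens with the corresponding tokens of x_b."""
    B, N, C = x_a.shape
    num_mixed = int(N * mix_ratio)
    mixed = x_a.clone()
    for b in range(B):
        idx = torch.randperm(N)[:num_mixed]  # token positions taken from x_b
        mixed[b, idx] = x_b[b, idx]
    return mixed
```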
In this paper, we present a new approach for model acceleration by exploiting spatial sparsity in visual data. We observe that the final prediction in vision transformers is based only on a subset of the most informative tokens, which is sufficient for accurate image recognition. Based on this observation, we propose a dynamic token sparsification framework to prune redundant tokens progressively and dynamically based on the input, thereby accelerating vision transformers. Specifically, we devise a lightweight prediction module to estimate the importance score of each token given the current features. The module is added to different layers to prune redundant tokens hierarchically. While the framework is inspired by our observation of the sparse attention in vision transformers, we find that the idea of adaptive and asymmetric computation can be a general solution for accelerating various architectures. We extend our method to hierarchical models, including CNNs and hierarchical vision transformers, as well as more complex dense prediction tasks that require structured feature maps, by formulating a more generic dynamic spatial sparsification framework with progressive sparsification and asymmetric computation for different spatial locations. By applying lightweight fast paths to less informative features and more expressive slow paths to more important locations, we can maintain the structure of feature maps while significantly reducing the overall computation. Extensive experiments demonstrate the effectiveness of our framework on various modern architectures and different visual recognition tasks. Our results clearly show that dynamic spatial sparsification offers a new and more effective dimension for model acceleration. Code is available at https://github.com/raoyongming/dynamicvit
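A minimal sketch of score-based token pruning (hypothetical module and names; the actual framework uses Gumbel-Softmax sampling and attention masking to keep pruning differentiable during training).

```python
import torch
import torch.nn as nn

class TokenScorer(nn.Module):
    """Lightweight module that predicts an importance score for each token."""
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.LayerNorm(dim), nn.Linear(dim, dim // 4),
                                 nn.GELU(), nn.Linear(dim // 4, 1))

    def forward(self, x):               # x: (B, N, C)
        return self.net(x).squeeze(-1)  # (B, N) importance scores

def prune_tokens(x, scores, keep_ratio=0.7):
    """Keep only the top-scoring tokens (inference-time behavior)."""
    B, N, C = x.shape
    k = max(1, int(N * keep_ratio))
    keep_idx = scores.topk(k, dim=1).indices
    return torch.gather(x, 1, keep_idx.unsqueeze(-1).expand(-1, -1, C))
```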
CutMix is a vital augmentation strategy that determines the performance and generalization ability of vision transformers (ViTs). However, the inconsistency between the mixed images and the corresponding labels harms its efficacy. Existing CutMix variants tackle this problem by generating more consistent mixed images or more precise mixed labels, but inevitably introduce heavy training overhead or require extra information, undermining ease of use. To this end, we propose an efficient and effective Self-Motivated image Mixing method (SMMix), which motivates both image and label enhancement by the model under training itself. Specifically, we propose a max-min attention region mixing approach that enriches the attention-focused objects in the mixed images. Then, we introduce a fine-grained label assignment technique that co-trains the output tokens of mixed images with fine-grained supervision. Moreover, we devise a novel feature consistency constraint to align features from mixed and unmixed images. Due to the subtle designs of the self-motivated paradigm, our SMMix is significant in its smaller training overhead and better performance than other CutMix variants. In particular, SMMix improves the accuracy of DeiT-T/S, CaiT-XXS-24/36, and PVT-T/S/M/L by more than +1% on ImageNet-1k. The generalization capability of our method is also demonstrated on downstream tasks and out-of-distribution datasets. Code of this project is available at https://github.com/ChenMnZ/SMMix.
Previous vision MLPs such as MLP-Mixer and ResMLP accept linearly flattened image patches as input, making them inflexible for different input sizes and hard to capture spatial information. Such approaches withhold MLPs from achieving performance comparable to their transformer-based counterparts and prevent them from becoming a general backbone for computer vision. This paper presents Hire-MLP, a simple yet competitive vision MLP architecture built via Hierarchical rearrangement, which contains two levels of rearrangements. Specifically, an inner-region rearrangement is proposed to capture local information inside a spatial region, and a cross-region rearrangement is proposed to enable information communication between different regions by circularly shifting all tokens along spatial directions. Extensive experiments demonstrate the effectiveness of Hire-MLP as a versatile backbone for various vision tasks. In particular, Hire-MLP achieves competitive results on image classification, object detection, and semantic segmentation tasks, e.g., 83.8% top-1 accuracy on ImageNet, 51.7% box AP and 44.8% mask AP on COCO val2017, and 49.9% mIoU on ADE20K, surpassing previous transformer-based and MLP-based models with a better trade-off between accuracy and throughput. Code is available at https://github.com/ggjy/hire-wave-mlp.pytorch.
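A rough sketch of the two rearrangements under assumed tensor layouts (region size, shift step, and function names are hypothetical; the real Hire-MLP also rearranges along the width and restores the layout after the channel MLP).

```python
import torch

def inner_region_rearrange(x, region=2):
    """x: (B, H, W, C). Fold `region` neighbouring rows into the channel dimension
    so that a plain channel MLP can mix tokens inside each region."""
    B, H, W, C = x.shape
    x = x.view(B, H // region, region, W, C)
    return x.permute(0, 1, 3, 2, 4).reshape(B, H // region, W, region * C)

def cross_region_rearrange(x, step=1):
    """Cyclically shift tokens along the height axis so that the next round of
    inner-region mixing also exchanges information across region borders."""
    return torch.roll(x, shifts=step, dims=1)
```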
Recent progress in vision transformers exhibits great success in various tasks driven by the new spatial modeling mechanism based on dot-product self-attention. In this paper, we show that the key ingredients behind vision transformers, namely input-adaptive, long-range, and high-order spatial interactions, can also be efficiently implemented with a convolution-based framework. We present the Recursive Gated Convolution ($g^n$Conv) that performs high-order spatial interactions with gated convolutions and recursive designs. The new operation is highly flexible and customizable: it is compatible with various variants of convolutions and extends the two-order interactions in self-attention to arbitrary orders without introducing significant extra computation. $g^n$Conv can serve as a plug-and-play module to improve various vision transformers and convolution-based models. Based on this operation, we construct a new family of generic vision backbones named HorNet. Extensive experiments on ImageNet classification, COCO object detection, and ADE20K semantic segmentation show that HorNet outperforms Swin Transformers by significant margins with similar overall architectures and training configurations. HorNet also shows favorable scalability to more training data and larger model sizes. Apart from its effectiveness in visual encoders, we also show that $g^n$Conv can be applied to task-specific decoders and consistently improves dense prediction performance with less computation. Our results demonstrate that $g^n$Conv can be a new basic module for visual modeling that effectively combines the merits of both vision transformers and CNNs. Code is available at https://github.com/raoyongming/hornet
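The gating mechanism can be illustrated with a simplified second-order variant (a sketch only; the actual $g^n$Conv splits channels in a geometric progression and applies the gating recursively up to order n).

```python
import torch
import torch.nn as nn

class GatedConvOrder2(nn.Module):
    """Simplified order-2 gated convolution in the spirit of gnConv."""
    def __init__(self, dim):
        super().__init__()
        self.proj_in = nn.Conv2d(dim, 2 * dim, kernel_size=1)
        self.dwconv = nn.Conv2d(dim, dim, kernel_size=7, padding=3, groups=dim)
        self.proj_out = nn.Conv2d(dim, dim, kernel_size=1)

    def forward(self, x):                               # x: (B, C, H, W)
        gate, feat = self.proj_in(x).chunk(2, dim=1)
        return self.proj_out(gate * self.dwconv(feat))  # element-wise gating gives a 2nd-order interaction
```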
This paper presents a new vision Transformer, called Swin Transformer, that capably serves as a general-purpose backbone for computer vision. Challenges in adapting Transformer from language to vision arise from differences between the two domains, such as large variations in the scale of visual entities and the high resolution of pixels in images compared to words in text. To address these differences, we propose a hierarchical Transformer whose representation is computed with Shifted windows. The shifted windowing scheme brings greater efficiency by limiting self-attention computation to non-overlapping local windows while also allowing for cross-window connection. This hierarchical architecture has the flexibility to model at various scales and has linear computational complexity with respect to image size. These qualities of Swin Transformer make it compatible with a broad range of vision tasks, including image classification (87.3 top-1 accuracy on ImageNet-1K) and dense prediction tasks such as object detection (58.7 box AP and 51.1 mask AP on COCO test-dev) and semantic segmentation (53.5 mIoU on ADE20K val). Its performance surpasses the previous state-of-the-art by a large margin of +2.7 box AP and +2.6 mask AP on COCO, and +3.2 mIoU on ADE20K, demonstrating the potential of Transformer-based models as vision backbones. The hierarchical design and the shifted window approach also prove beneficial for all-MLP architectures. The code and models are publicly available at https://github.com/microsoft/Swin-Transformer.
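The windowing idea can be sketched with standard reshaping logic (assumes H and W are divisible by the window size).

```python
import torch

def window_partition(x, window_size):
    """x: (B, H, W, C) -> (num_windows * B, window_size, window_size, C);
    self-attention is then computed independently inside each window."""
    B, H, W, C = x.shape
    x = x.view(B, H // window_size, window_size, W // window_size, window_size, C)
    return x.permute(0, 1, 3, 2, 4, 5).reshape(-1, window_size, window_size, C)

def cyclic_shift(x, window_size):
    """Shift the feature map by half a window so that the next round of
    window attention connects tokens across the previous window borders."""
    return torch.roll(x, shifts=(-window_size // 2, -window_size // 2), dims=(1, 2))
```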
Vision transformers have been successfully applied to image recognition tasks owing to their ability to capture long-range dependencies within an image. However, there are still gaps in both performance and computational cost between transformers and existing convolutional neural networks (CNNs). In this paper, we aim to address this issue and develop a network that can outperform not only the canonical transformers but also high-performance convolutional models. We propose a new transformer-based hybrid network by taking advantage of transformers to capture long-range dependencies and of CNNs to model local features. Furthermore, we scale it to obtain a family of models, called CMT, which achieves much better accuracy and efficiency than previous convolution-based and transformer-based models. In particular, our CMT-S achieves 83.5% top-1 accuracy on ImageNet, while being 14x and 2x smaller in FLOPs than the existing DeiT and EfficientNet, respectively. The proposed CMT-S also generalizes well to CIFAR10 (99.2%), CIFAR100 (91.7%), Flowers (98.7%), and other challenging vision datasets such as COCO (44.3% mAP), with considerably less computational cost.
Transformers have offered a new methodology for designing neural networks for visual recognition. Compared to convolutional networks, transformers enjoy the ability of referring to global features at each stage, yet the attention module brings higher computational overhead that obstructs the application of transformers to processing high-resolution visual data. This paper aims to alleviate the conflict between efficiency and flexibility, for which we propose a specialized token for each region that serves as a messenger (MSG). Thus, by manipulating these MSG tokens, visual information can be flexibly exchanged across regions, and the computational complexity is reduced. We then integrate the MSG tokens into a multi-scale architecture named MSG-Transformer. On standard image classification and object detection, MSG-Transformer achieves competitive performance and accelerates inference on both GPU and CPU. Code is available at https://github.com/hustvl/msg-transformer.
Recent studies show that Vision Transformers (ViTs) exhibit strong robustness against various corruptions. Although this property is partly attributed to the self-attention mechanism, there is still a lack of systematic understanding. In this paper, we examine the role of self-attention in learning robust representations. Our study is motivated by the intriguing properties of the emerging visual grouping in Vision Transformers, which indicates that self-attention may promote robustness through improved mid-level representations. We further propose a family of fully attentional networks (FANs) that strengthen this capability by incorporating an attentional channel processing design. We validate the design comprehensively on various hierarchical backbones. Our model achieves a state-of-the-art 87.1% accuracy and 35.8% mCE on ImageNet-1k and ImageNet-C with 76.8M parameters. We also demonstrate state-of-the-art accuracy and robustness in two downstream tasks: semantic segmentation and object detection. Code is available at: https://github.com/NVlabs/FAN.
In this study, we propose Mixed and Masked Image Modeling (MixMIM), a simple but efficient MIM method that is applicable to various hierarchical vision transformers. Existing MIM methods replace a random subset of input tokens with a special mask symbol and aim to reconstruct the original image tokens from the corrupted image. However, we find that using the mask symbol greatly slows down training and causes training-finetuning inconsistency due to the large masking ratio (e.g., 40% in BEiT). In contrast, we replace the masked tokens of one image with the visible tokens of another image, i.e., creating a mixed image. We then conduct dual reconstruction to reconstruct the original two images from the mixed input, which significantly improves efficiency. While MixMIM can be applied to various architectures, this paper explores a simpler but stronger hierarchical transformer and scales it up as MixMIM-B, -L, and -H. Empirical results demonstrate that MixMIM can learn high-quality visual representations efficiently. Notably, MixMIM-B with 88M parameters achieves 85.1% top-1 accuracy on ImageNet-1K by pretraining for 600 epochs, setting a new record among MIM methods for neural networks with comparable model sizes (e.g., ViT-B). Besides, its transfer performance on 6 other datasets shows that MixMIM is better than previous MIM methods. Code is available at https://github.com/sense-x/mixmim.
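The mixing step can be sketched as follows (hypothetical names; the full method additionally unmixes the encoded tokens and performs the dual reconstruction against both originals).

```python
import torch

def mixmim_mix_tokens(tokens_a, tokens_b, mask_a):
    """tokens_a, tokens_b: (B, N, C) patch tokens of two images.
       mask_a: (B, N) boolean, True where image A's token is masked out.
       Masked positions of image A are filled with the visible tokens of image B,
       so no [MASK] symbol is ever fed to the encoder."""
    return torch.where(mask_a.unsqueeze(-1), tokens_b, tokens_a)
```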
Vision Transformers have shown great promise recently for many vision tasks due to the insightful architecture design and attention mechanism. By revisiting the self-attention responses in Transformers, we empirically observe two interesting issues. First, Vision Transformers present a query-irrelevant behavior at deep layers, where the attention maps exhibit nearly consistent contexts in global scope, regardless of the query patch position (also head-irrelevant). Second, the attention maps are intrinsically sparse: only a few tokens dominate the attention weights; introducing the knowledge from ConvNets would largely smooth the attention and enhance the performance. Motivated by the above observations, we generalize the self-attention formulation to abstract a query-irrelevant global context directly and further integrate the global context into convolutions. The resulting model, a Fully Convolutional Vision Transformer (i.e., FCViT), purely consists of convolutional layers and firmly inherits the merits of both attention mechanism and convolutions, including dynamic property, weight sharing, and short- and long-range feature modeling, etc. Experimental results demonstrate the effectiveness of FCViT. With less than 14M parameters, our FCViT-S12 outperforms related work ResT-Lite by 3.7% top-1 accuracy on ImageNet-1K. When scaling FCViT to larger models, we still perform better than previous state-of-the-art ConvNeXt with even fewer parameters. FCViT-based models also demonstrate promising transferability to downstream tasks, like object detection, instance segmentation, and semantic segmentation. Codes and models are made available at: https://github.com/ma-xu/FCViT.
Patch-based models, e.g., Vision Transformers (ViTs) and Mixers, have shown impressive results on various visual recognition tasks, emerging as alternatives to classic convolutional networks. While the initial patch-based models (ViTs) treated all patches equally, recent studies reveal that incorporating inductive bias like spatiality benefits the representations. However, most prior works solely focused on the location of patches, overlooking the scene structure of images. Thus, we aim to further guide the interaction of patches using the object information. Specifically, we propose OAMixer (object-aware mixing layer), which calibrates the patch mixing layers of patch-based models based on the object labels. Here, we obtain the object labels in unsupervised or weakly-supervised manners, i.e., no additional human-annotating cost is necessary. Using the object labels, OAMixer computes a reweighting mask with a learnable scale parameter that intensifies the interaction of patches containing similar objects and applies the mask to the patch mixing layers. By learning an object-centric representation, we demonstrate that OAMixer improves the classification accuracy and background robustness of various patch-based models, including ViTs, MLP-Mixers, and ConvMixers. Moreover, we show that OAMixer enhances various downstream tasks, including large-scale classification, self-supervised learning, and multi-object recognition, verifying the generic applicability of OAMixer.
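A minimal sketch of the object-aware reweighting idea (hypothetical shapes and scale handling; how the mask is injected into each patch-mixing operator follows the respective architecture).

```python
import torch

def object_aware_mask(patch_object_ids, scale=1.0):
    """patch_object_ids: (B, N) integer object label per patch, e.g. obtained
    from an unsupervised segmentation. Returns a (B, N, N) reweighting mask that
    strengthens interactions between patches belonging to the same object."""
    same_object = (patch_object_ids.unsqueeze(1) == patch_object_ids.unsqueeze(2)).float()
    return torch.exp(scale * same_object)  # a learnable scale controls how strong the boost is
```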
Vision multi-layer perceptrons (MLPs) have shown promising performance in computer vision tasks and have become a main competitor of CNNs and vision transformers. They use token-mixing layers to capture cross-token interactions, instead of the multi-head self-attention mechanism used by transformers. However, the heavily parameterized token-mixing layers naturally lack mechanisms to capture local information and multi-granular non-local relations, so their discriminative power is restrained. To tackle this problem, we propose a new positional spatial gating unit (PoSGU). It exploits the attention formulation used in classical relative positional encoding (RPE) to efficiently encode the cross-token relations for token mixing. It can successfully reduce the current quadratic parameter complexity $O(N^2)$ of vision MLPs to $O(N)$ and $O(1)$. We experiment with two RPE mechanisms and further propose a group-wise extension to achieve multi-granular contexts and improve expressiveness. These then serve as the key building blocks of a new type of vision MLP, referred to as PosMLP. We evaluate the effectiveness of the proposed method through thorough experiments, demonstrating improved or comparable performance with reduced parameter complexity. For example, for a model trained on ImageNet-1K, we improve performance over the 72.14% baseline while reducing the number of learnable parameters. Code can be found at https://github.com/zhicaiwww/posmlp.
Vision transformers (ViTs) have recently demonstrated a strong capability to achieve results on image classification comparable to convolutional neural networks (CNNs). However, the vanilla ViT simply inherits the same architecture from natural language processing directly, which is often not optimized for vision applications. Motivated by this, in this paper we propose a new architecture that adopts a pyramid structure and employs a novel regional-to-local attention rather than global self-attention in vision transformers. More specifically, our model first generates regional tokens and local tokens from an image with different patch sizes, where each regional token is associated with a set of local tokens based on the spatial location. The regional-to-local attention includes two steps: first, the regional self-attention extracts global information among all regional tokens, and then the local self-attention exchanges information between one regional token and the associated local tokens via self-attention. Therefore, even though local self-attention confines the scope to a local region, it can still receive global information. Extensive experiments on four vision tasks, including image classification, object and keypoint detection, semantic segmentation, and action recognition, show that our approach outperforms or is on par with state-of-the-art ViT variants, including many concurrent works. Our source code and models are available at https://github.com/ibm/regionvit.
Transformer is a new kind of neural architecture which encodes the input data as powerful features via the attention mechanism. Basically, the visual transformers first divide the input images into several local patches and then calculate both representations and their relationship. Since natural images are of high complexity with abundant detail and color information, the granularity of the patch dividing is not fine enough for excavating features of objects in different scales and locations. In this paper, we point out that the attention inside these local patches is also essential for building visual transformers with high performance, and we explore a new architecture, namely, Transformer iN Transformer (TNT). Specifically, we regard the local patches (e.g., 16×16) as "visual sentences" and propose to further divide them into smaller patches (e.g., 4×4) as "visual words". The attention of each word will be calculated with other words in the given visual sentence with negligible computational costs. Features of both words and sentences will be aggregated to enhance the representation ability. Experiments on several benchmarks demonstrate the effectiveness of the proposed TNT architecture, e.g., we achieve an 81.5% top-1 accuracy on the ImageNet, which is about 1.7% higher than that of the state-of-the-art visual transformer with similar computational cost.
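A compact sketch of the inner/outer structure (hypothetical dimensions; the real TNT block uses its own attention and MLP sub-layers rather than the generic encoder layer used here).

```python
import torch
import torch.nn as nn

class TNTBlockSketch(nn.Module):
    """Inner attention over 'visual words' inside each patch, then outer
    attention over 'visual sentences' (the patch-level embeddings)."""
    def __init__(self, word_dim=16, sent_dim=384, num_words=16):
        super().__init__()
        self.inner = nn.TransformerEncoderLayer(word_dim, nhead=4, batch_first=True)
        self.outer = nn.TransformerEncoderLayer(sent_dim, nhead=6, batch_first=True)
        self.words_to_sent = nn.Linear(num_words * word_dim, sent_dim)

    def forward(self, words, sents):
        # words: (B * num_patches, num_words, word_dim); sents: (B, num_patches, sent_dim)
        B, P, _ = sents.shape
        words = self.inner(words)                                     # word-level attention
        sents = sents + self.words_to_sent(words.reshape(B, P, -1))   # inject word info into sentences
        return words, self.outer(sents)                               # sentence-level attention
```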
For the past ten years, CNNs have reigned supreme in the world of computer vision, but recently, transformers have been on the rise. However, the quadratic computational cost of self-attention has become a severe problem in practical applications. In this context, there has been much research on architectures without CNNs and self-attention. In particular, MLP-Mixer is a simple architecture designed using MLPs that reaches an accuracy comparable to vision transformers. However, the only inductive bias in this architecture is the embedding of tokens. This leaves open the possibility of incorporating a non-convolutional (or non-local) inductive bias into the architecture, so we used two simple ideas to incorporate inductive bias into the MLP-Mixer while taking advantage of its ability to capture global correlations. One way is to divide the token-mixing block vertically and horizontally. Another way is to make spatial correlations denser in some token-mixing channels. With these approaches, we were able to improve the accuracy of the MLP-Mixer while reducing its parameters and computational complexity. The small model, RaftMLP-S, is comparable to state-of-the-art global MLP-based models in terms of parameters and efficiency per computation. In addition, we address the fixed-input-resolution problem of MLP-based models by utilizing bidirectional interpolation. We demonstrate that these models can be applied as a backbone of architectures for downstream tasks such as object detection. However, they do not show significant performance there, which indicates the need for MLP-specific architectures for the downstream tasks of global MLP-based models. The source code in PyTorch version is available at https://github.com/okojoalg/raft-mlp.
We present a neat yet effective recursive operation on vision transformers that can improve parameter utilization without involving additional parameters. This is achieved by sharing weights across the depth of transformer networks. The proposed method can obtain a substantial gain (~2%) simply using naïve recursive operations, requires no special or sophisticated knowledge of network design principles, and introduces minimal computational overhead to the training procedure. To reduce the additional computation caused by the recursive operations while maintaining superior accuracy, we introduce an approximating method through multiple sliced group self-attentions across the recursive layers, which can reduce the cost consumption by 10-30% with minimal performance loss. We call our model Sliced Recursive Transformer (SReT), which is compatible with a broad range of other designs for efficient vision transformers. Our best model establishes significant improvements on ImageNet over state-of-the-art methods while containing fewer parameters. The proposed sliced recursive operation allows us to build a transformer with more than 100 or even 1000 layers effortlessly while still keeping the model small in size (13-15M parameters), avoiding optimization difficulties when the model size is too large. The flexible scalability shows great potential for scaling up and constructing extremely deep and large-dimensional vision transformers. Our code and models are available at https://github.com/szq0214/sret.
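The core recursion is easy to sketch (a simplified illustration; the sliced group self-attention approximation is omitted here).

```python
import torch.nn as nn

class RecursiveBlock(nn.Module):
    """Apply the same transformer block several times, reusing its weights,
    so effective depth grows without adding parameters."""
    def __init__(self, block: nn.Module, num_recursions: int = 2):
        super().__init__()
        self.block = block
        self.num_recursions = num_recursions

    def forward(self, x):
        for _ in range(self.num_recursions):
            x = self.block(x)  # identical parameters at every recursion step
        return x
```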
In recent computer vision research, the advent of the Vision Transformer (ViT) has rapidly revolutionized various architectural design efforts: ViT achieved state-of-the-art image classification performance using self-attention found in natural language processing, and MLP-Mixer achieved competitive performance using simple multi-layer perceptrons. In contrast, several studies have also suggested that carefully redesigned convolutional neural networks (CNNs) can achieve advanced performance comparable to ViT without resorting to these new ideas. Against this background, there is growing interest in what inductive bias is suitable for computer vision. Here we propose Sequencer, a novel and competitive architecture alternative to ViT that provides a new perspective on these issues. Unlike ViTs, Sequencer models long-range dependencies using LSTMs rather than self-attention layers. We also propose a two-dimensional version of the Sequencer module, where an LSTM is decomposed into vertical and horizontal LSTMs to enhance performance. Despite its simplicity, several experiments demonstrate that Sequencer performs impressively well: Sequencer2D-L, with 54M parameters, achieves 84.6% top-1 accuracy on ImageNet-1K alone. Not only that, we show that it has good transferability and robust resolution adaptability on the double-resolution band.
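The two-dimensional variant can be sketched as two bidirectional LSTM scans (hypothetical module; the official design differs in details such as channel splitting and normalization).

```python
import torch
import torch.nn as nn

class Sequencer2DSketch(nn.Module):
    """Replace self-attention with bidirectional LSTMs along height and width."""
    def __init__(self, dim, hidden):
        super().__init__()
        self.lstm_h = nn.LSTM(dim, hidden, batch_first=True, bidirectional=True)
        self.lstm_w = nn.LSTM(dim, hidden, batch_first=True, bidirectional=True)
        self.fuse = nn.Linear(4 * hidden, dim)

    def forward(self, x):                  # x: (B, H, W, C)
        B, H, W, C = x.shape
        w_out, _ = self.lstm_w(x.reshape(B * H, W, C))                      # scan along the rows
        h_out, _ = self.lstm_h(x.permute(0, 2, 1, 3).reshape(B * W, H, C))  # scan along the columns
        w_out = w_out.reshape(B, H, W, -1)
        h_out = h_out.reshape(B, W, H, -1).permute(0, 2, 1, 3)
        return self.fuse(torch.cat([w_out, h_out], dim=-1))                 # fuse both directions
```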