Vision transformers (ViTs) have recently attracted great attention in computer vision due to their remarkable model capability. However, most prevailing ViT models suffer from a huge number of parameters, restricting their applicability on devices with limited resources. To alleviate this issue, we propose TinyViT, a new family of tiny and efficient small vision transformers, pretrained on large-scale datasets with our proposed fast distillation framework. The central idea is to transfer knowledge from large pretrained models to small ones, while enabling the small models to get the dividends of massive pretraining data. More specifically, we apply distillation during pretraining for knowledge transfer. The logits of large teacher models are sparsified and stored on disk in advance to save memory cost and computation overhead. The tiny student transformers are automatically scaled down from a large pretrained model under computation and parameter constraints. Comprehensive experiments demonstrate the efficacy of TinyViT. It achieves a top-1 accuracy of 84.8% on ImageNet-1K with only 21M parameters, comparable to Swin-B pretrained on ImageNet-21K while using 4.2 times fewer parameters. Moreover, with increased image resolution, TinyViT can reach 86.5% accuracy, slightly better than Swin-L while using only 11% of its parameters. Last but not least, we demonstrate the good transfer ability of TinyViT on various downstream tasks. Code and models are available at https://github.com/microsoft/cream/tree/main/tinyvit.
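To make the fast-distillation idea concrete, below is a minimal sketch (not the authors' released code) of pre-storing sparsified teacher logits and reusing them as soft targets during student pretraining. The helper names save_sparse_logits and kd_loss_from_sparse are illustrative, and the real pipeline also keys the stored logits to the exact data augmentation of each crop, which this sketch omits.

    import torch
    import torch.nn.functional as F

    def save_sparse_logits(teacher, loader, path, k=10):
        # Offline pass: run the teacher once, keep only its top-k logits per image
        # and write them to disk, so no teacher forward is needed during pretraining.
        records = []
        teacher.eval()
        with torch.no_grad():
            for images, _ in loader:
                values, indices = teacher(images).topk(k, dim=-1)
                records.append((values.cpu(), indices.cpu()))
        torch.save(records, path)

    def kd_loss_from_sparse(student_logits, values, indices, T=1.0):
        # Rebuild an approximate teacher distribution from the stored top-k logits
        # (mass outside the top-k is dropped and renormalized) and apply a KL loss.
        teacher_probs = torch.zeros_like(student_logits).scatter_(
            -1, indices, F.softmax(values / T, dim=-1))
        log_student = F.log_softmax(student_logits / T, dim=-1)
        return F.kl_div(log_student, teacher_probs, reduction="batchmean") * T * T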
Recently, neural networks purely based on attention were shown to address image understanding tasks such as image classification. These high-performing vision transformers are pre-trained with hundreds of millions of images using a large infrastructure, thereby limiting their adoption. In this work, we produce competitive convolution-free transformers by training on Imagenet only. We train them on a single computer in less than 3 days. Our reference vision transformer (86M parameters) achieves top-1 accuracy of 83.1% (single-crop) on ImageNet with no external data. More importantly, we introduce a teacher-student strategy specific to transformers. It relies on a distillation token ensuring that the student learns from the teacher through attention. We show the interest of this token-based distillation, especially when using a convnet as a teacher. This leads us to report results competitive with convnets for both Imagenet (where we obtain up to 85.2% accuracy) and when transferring to other tasks. We share our code and models.
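As a rough illustration of the token-based distillation described above (a sketch, not the released DeiT code): the class-token head is supervised by the ground-truth label while a separate distillation-token head is supervised by the teacher's hard prediction, with a convnet assumed as the teacher. At test time the two heads can be fused, for example by averaging their softmax outputs.

    import torch
    import torch.nn.functional as F

    def hard_distillation_loss(cls_logits, dist_logits, labels, teacher_logits):
        # cls_logits come from the class token, dist_logits from the distillation
        # token; the teacher (typically a convnet) provides a hard pseudo-label.
        loss_cls = F.cross_entropy(cls_logits, labels)
        loss_dist = F.cross_entropy(dist_logits, teacher_logits.argmax(dim=-1))
        return 0.5 * (loss_cls + loss_dist)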
Vision transformers (ViTs) have become popular structures and have outperformed convolutional neural networks (CNNs) on various vision tasks. However, such powerful transformers bring a huge computation burden, and the essential barrier behind this is the exhausting token-to-token comparison. To alleviate this, we delve into the model properties of ViTs and observe that ViTs exhibit sparse attention with high token similarity. This intuitively introduces a feasible structure-agnostic dimension, the token number, to reduce the computational cost. Based on this exploration, we propose a generic self-slimmed learning approach for vanilla ViTs, namely SiT. Specifically, we first design a novel Token Slimming Module (TSM), which can boost the inference efficiency of ViTs through dynamic token aggregation. Different from token hard dropping, our TSM softly integrates redundant tokens into fewer informative ones; it can dynamically zoom visual attention without cutting off discriminative token relations in the images. Furthermore, we introduce a concise Dense Knowledge Distillation (DKD) framework, which densely transfers unorganized token information in a flexible auto-encoder manner. Due to the similar structure between teacher and student, our framework can effectively leverage structural knowledge for better convergence. Finally, we conduct extensive experiments to evaluate our SiT. They demonstrate that our method can speed up ViTs by 1.7x with negligible accuracy drop, and even accelerate ViTs by 3.6x while maintaining 97% of their performance. Surprisingly, by simply arming LV-ViT with our SiT, we achieve new state-of-the-art performance on ImageNet, surpassing all the CNNs and ViTs in the recent literature.
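A hedged sketch of the dynamic token aggregation idea behind the TSM (simplified, not the released module): an assignment predicted from the tokens themselves softly merges N tokens into M < N informative ones instead of hard-dropping them.

    import torch
    import torch.nn as nn

    class TokenSlimming(nn.Module):
        # Merge N tokens into M informative tokens via a learned soft assignment;
        # each output token is a convex combination of all input tokens.
        def __init__(self, dim, num_out_tokens):
            super().__init__()
            self.score = nn.Linear(dim, num_out_tokens)

        def forward(self, x):                          # x: (B, N, C)
            weights = self.score(x).transpose(1, 2)    # (B, M, N)
            weights = weights.softmax(dim=-1)          # soft assignment over inputs
            return weights @ x                         # (B, M, C)

    # e.g. halve 196 tokens: y = TokenSlimming(384, 98)(torch.randn(2, 196, 384))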
Recently, neural networks purely based on attention were shown to address image understanding tasks such as image classification. These high-performing vision transformers are pre-trained with hundreds of millions of images using a large infrastructure, thereby limiting their adoption. In this work, we produce competitive convolution-free transformers trained on ImageNet only using a single computer in less than 3 days. Our reference vision transformer (86M parameters) achieves top-1 accuracy of 83.1% (single-crop) on ImageNet with no external data. We also introduce a teacher-student strategy specific to transformers. It relies on a distillation token ensuring that the student learns from the teacher through attention, typically from a convnet teacher. The learned transformers are competitive (85.2% top-1 acc.) with the state of the art on ImageNet, and similarly when transferred to other tasks. We will share our code and models.
We introduce submodel co-training, a regularization method related to co-training, self-distillation and stochastic depth. Given a neural network to be trained, for each sample we implicitly instantiate two altered networks, ``submodels'', with stochastic depth: we activate only a subset of the layers. Each network serves as a soft teacher to the other, by providing a loss that complements the regular loss provided by the one-hot label. Our approach, dubbed cosub, uses a single set of weights, and does not involve a pre-trained external model or temporal averaging. Experimentally, we show that submodel co-training is effective to train backbones for recognition tasks such as image classification and semantic segmentation. Our approach is compatible with multiple architectures, including RegNet, ViT, PiT, XCiT, Swin and ConvNext. Our training strategy improves their results in comparable settings. For instance, a ViT-B pretrained with cosub on ImageNet-21k obtains 87.4% top-1 acc. @448 on ImageNet-val.
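A minimal sketch of the co-training objective, under the assumption that the backbone applies stochastic depth whenever it is in training mode, so two forward passes instantiate two different submodels; the exact weighting and loss form used in the paper may differ.

    import torch
    import torch.nn.functional as F

    def cosub_loss(model, images, labels, T=1.0, alpha=0.5):
        # Two passes with independent stochastic-depth drop patterns act as the
        # two submodels; each provides a detached soft target for the other,
        # complementing the usual one-hot cross-entropy.
        logits_a = model(images)
        logits_b = model(images)
        ce = F.cross_entropy(logits_a, labels) + F.cross_entropy(logits_b, labels)
        kl_ab = F.kl_div(F.log_softmax(logits_a / T, dim=-1),
                         F.softmax(logits_b.detach() / T, dim=-1),
                         reduction="batchmean")
        kl_ba = F.kl_div(F.log_softmax(logits_b / T, dim=-1),
                         F.softmax(logits_a.detach() / T, dim=-1),
                         reduction="batchmean")
        return (1 - alpha) * ce + alpha * (kl_ab + kl_ba) * T * T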
Masked image modeling (MIM) performs strongly in pre-training large vision Transformers (ViTs). However, small models that are critical for real-world applications cannot or only marginally benefit from this pre-training approach. In this paper, we explore distillation techniques to transfer the success of large MIM-based pre-trained models to smaller ones. We systematically study different options in the distillation framework, including distilling targets, losses, input, network regularization, sequential distillation, etc., revealing that: 1) Distilling token relations is more effective than CLS token- and feature-based distillation; 2) An intermediate layer of the teacher network as the target performs better than using the last layer when the depth of the student mismatches that of the teacher; 3) Weak regularization is preferred; etc. With these findings, we achieve significant fine-tuning accuracy improvements over scratch MIM pre-training on ImageNet-1K classification, using all the ViT-Tiny, ViT-Small, and ViT-Base models, with +4.2%/+2.4%/+1.4% gains, respectively. Our TinyMIM model of base size achieves 52.2 mIoU in ADE20K semantic segmentation, which is +4.1 higher than the MAE baseline. Our TinyMIM model of tiny size achieves 79.6% top-1 accuracy on ImageNet-1K image classification, which sets a new record for small vision models of the same size and computation budget. This strong performance suggests an alternative way for developing small vision Transformer models, that is, by exploring better training methods rather than introducing inductive biases into architectures as in most previous works. Code is available at https://github.com/OliverRensu/TinyMIM.
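The finding that distilling token relations beats CLS- or feature-based distillation can be sketched as below: a simplified stand-in for the paper's relation targets, assuming the student and teacher share the same token layout.

    import torch
    import torch.nn.functional as F

    def token_relation_distill(student_tokens, teacher_tokens, T=1.0):
        # Compare token-to-token similarity maps rather than the tokens themselves.
        def relation(tokens):                             # tokens: (B, N, C)
            tokens = F.normalize(tokens, dim=-1)
            return (tokens @ tokens.transpose(1, 2)) / T  # (B, N, N)
        log_s = F.log_softmax(relation(student_tokens), dim=-1)
        with torch.no_grad():
            t = F.softmax(relation(teacher_tokens), dim=-1)
        return F.kl_div(log_s, t, reduction="batchmean")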
In this paper, we propose a new approach for model acceleration by exploiting spatial sparsity in visual data. We observe that the final prediction in vision transformers is based only on a subset of the most informative tokens, which is sufficient for accurate image recognition. Based on this observation, we propose a dynamic token sparsification framework to prune redundant tokens progressively and dynamically, conditioned on the input, to accelerate vision transformers. Specifically, we devise a lightweight prediction module to estimate the importance score of each token given the current features. The module is added to different layers to prune redundant tokens hierarchically. While the framework is inspired by our observation of the sparse attention in vision transformers, we find that the idea of adaptive and asymmetric computation can be a general solution for accelerating various architectures. We extend our method to hierarchical models, including CNNs and hierarchical vision transformers, as well as more complex dense prediction tasks, by formulating a more generic dynamic spatial sparsification framework with progressive sparsification and asymmetric computation for different spatial locations. By applying lightweight fast paths to less informative features and more expressive slow paths to more important locations, we can maintain the structure of feature maps while significantly reducing the overall computation. Extensive experiments demonstrate the effectiveness of our framework on various modern architectures and different visual recognition tasks. Our results clearly demonstrate that dynamic spatial sparsification offers a new and more effective dimension for model acceleration. Code is available at https://github.com/raoyongming/dynamicvit
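A hedged sketch of the lightweight prediction module: a small MLP scores each token and only the top-scoring fraction is kept; the differentiable relaxation used for end-to-end training in the paper is omitted here.

    import torch
    import torch.nn as nn

    class TokenScorer(nn.Module):
        # Predict an importance score per token and keep the top keep_ratio of them.
        def __init__(self, dim):
            super().__init__()
            self.mlp = nn.Sequential(nn.LayerNorm(dim), nn.Linear(dim, dim // 4),
                                     nn.GELU(), nn.Linear(dim // 4, 1))

        def forward(self, x, keep_ratio=0.7):            # x: (B, N, C)
            scores = self.mlp(x).squeeze(-1)             # (B, N)
            k = max(1, int(x.shape[1] * keep_ratio))
            idx = scores.topk(k, dim=1).indices          # kept token indices
            idx = idx.unsqueeze(-1).expand(-1, -1, x.shape[-1])
            return torch.gather(x, 1, idx)               # (B, k, C)

    # Inserting such a scorer at several depths prunes tokens hierarchically.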
While knowledge distillation (KD) has been recognized as a useful tool in many visual tasks, such as supervised classification and self-supervised representation learning, the main drawback of the vanilla KD framework is its mechanism, which consumes the majority of the computational overhead on forwarding through the giant teacher networks, making the entire learning procedure inefficient and costly. ReLabel, a recently proposed solution, suggests creating a label map for the entire image. During training, it obtains cropped region-level labels by RoI aligning on the pre-generated full label map, allowing for efficient supervision generation without having to pass through the teacher multiple times. However, as the KD teacher comes from conventional multi-crop training, there are various mismatches between the global label map and the region-level labels in this technique, resulting in performance deterioration. In this study, we present a Fast Knowledge Distillation (FKD) framework that replicates the multi-crop KD approach to generate soft labels, while training faster than ReLabel since no post-processing such as RoI align and softmax operations is used. When performing multi-crop on the same image for data loading, our FKD is even more efficient than the traditional image classification framework. On ImageNet-1K, we obtain 79.8% with ResNet-50, outperforming ReLabel by ~1.0% while being faster. On self-supervised learning tasks, we also show that FKD has an efficiency advantage. Our project page: http://zhiqiangshen.com/projects/fkd/index.html, source code and models are available at: https://github.com/szq0214/fkd.
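A sketch of the soft-label pre-generation phase under the assumption of a torchvision-style pipeline, with image being a (C, H, W) tensor and a recent torchvision: for each training image, a few crop regions and the teacher's top-k soft predictions for them are computed once and stored, so the training phase never forwards the teacher. The record layout is illustrative.

    import torch
    import torch.nn.functional as F
    from torchvision import transforms
    from torchvision.transforms import functional as TF

    def generate_crop_soft_labels(teacher, image, num_crops=4, size=224, k=5):
        # Offline phase of multi-crop soft-label generation: store each crop's
        # region together with the teacher's top-k probabilities for that crop.
        records = []
        teacher.eval()
        with torch.no_grad():
            for _ in range(num_crops):
                i, j, h, w = transforms.RandomResizedCrop.get_params(
                    image, scale=(0.08, 1.0), ratio=(3 / 4, 4 / 3))
                crop = TF.resized_crop(image, i, j, h, w, [size, size])
                probs = F.softmax(teacher(crop.unsqueeze(0)), dim=-1)[0]
                values, classes = probs.topk(k)
                records.append({"region": (i, j, h, w),
                                "probs": values.cpu(), "classes": classes.cpu()})
        return records   # saved to disk; training replays the same regions and labels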
Recently, transformer and multi-layer perceptron (MLP) architectures have achieved impressive results on various vision tasks. However, how to effectively combine these operators to form high-performance hybrid visual architectures remains a challenge. In this work, we study the learnable combination of convolution, transformer, and MLP by proposing a novel unified architecture search approach. Our approach contains two key designs to achieve the search for high-performance networks. First, we model the very different searchable operators in a unified form, enabling the operators to be characterized with the same set of configuration parameters. In this way, the overall search space size is significantly reduced, and the total search cost becomes affordable. Second, we propose context-aware downsampling modules (DSMs) to mitigate the gap between the different types of operators. Our proposed DSMs are able to better adapt features from different types of operators, which is important for identifying high-performance hybrid architectures. Finally, we integrate the configurable operators and DSMs into a unified search space and search with a reinforcement learning-based algorithm to fully explore the optimal combination of operators. To this end, we search for a baseline network and scale it up to obtain a family of models named UniNet, which achieves much better accuracy and efficiency than previous ConvNets and Transformers. In particular, our UniNet-B5 achieves 84.9% top-1 accuracy on ImageNet, outperforming EfficientNet-B7 and BoTNet-T7 with 44% and 55% fewer FLOPs, respectively. By pretraining on ImageNet-21K, our UniNet-B6 achieves 87.4%, outperforming Swin-L with 51% fewer FLOPs and 41% fewer parameters. Code is available at https://github.com/sense-x/uninet.
This paper presents a new vision Transformer, called Swin Transformer, that capably serves as a general-purpose backbone for computer vision. Challenges in adapting Transformer from language to vision arise from differences between the two domains, such as large variations in the scale of visual entities and the high resolution of pixels in images compared to words in text. To address these differences, we propose a hierarchical Transformer whose representation is computed with Shifted windows. The shifted windowing scheme brings greater efficiency by limiting self-attention computation to non-overlapping local windows while also allowing for cross-window connection. This hierarchical architecture has the flexibility to model at various scales and has linear computational complexity with respect to image size. These qualities of Swin Transformer make it compatible with a broad range of vision tasks, including image classification (87.3 top-1 accuracy on ImageNet-1K) and dense prediction tasks such as object detection (58.7 box AP and 51.1 mask AP on COCO test-dev) and semantic segmentation (53.5 mIoU on ADE20K val). Its performance surpasses the previous state-of-the-art by a large margin of +2.7 box AP and +2.6 mask AP on COCO, and +3.2 mIoU on ADE20K, demonstrating the potential of Transformer-based models as vision backbones. The hierarchical design and the shifted window approach also prove beneficial for all-MLP architectures. The code and models are publicly available at https://github.com/microsoft/Swin-Transformer.
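The core of windowed self-attention can be sketched as the partition/reverse pair below (a simplified sketch consistent with the description above, assuming a channel-last feature map whose height and width are divisible by the window size); the shifted variant is obtained by applying torch.roll before partitioning.

    import torch

    def window_partition(x, ws):
        # (B, H, W, C) -> (num_windows * B, ws * ws, C): attention is then computed
        # independently inside each non-overlapping ws x ws window.
        B, H, W, C = x.shape
        x = x.view(B, H // ws, ws, W // ws, ws, C)
        return x.permute(0, 1, 3, 2, 4, 5).reshape(-1, ws * ws, C)

    def window_reverse(windows, ws, H, W):
        # Inverse of window_partition.
        B = windows.shape[0] // ((H // ws) * (W // ws))
        x = windows.view(B, H // ws, W // ws, ws, ws, -1)
        return x.permute(0, 1, 3, 2, 4, 5).reshape(B, H, W, -1)

    # Shifted windows: x = torch.roll(x, shifts=(-ws // 2, -ws // 2), dims=(1, 2))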
Vision Transformers (ViTs) have achieved overwhelming success, yet they suffer from vulnerable resolution scalability, i.e., the performance drops drastically when presented with input resolutions that are unseen during training. We introduce ResFormer, a framework that is built upon the seminal idea of multi-resolution training for improved performance on a wide spectrum of, mostly unseen, testing resolutions. In particular, ResFormer operates on replicated images of different resolutions and enforces a scale consistency loss to engage interactive information across different scales. More importantly, to alternate among varying resolutions, we propose a global-local positional embedding strategy that changes smoothly conditioned on input sizes. This allows ResFormer to cope with novel resolutions effectively. We conduct extensive experiments for image classification on ImageNet. The results provide strong quantitative evidence that ResFormer has promising scaling abilities towards a wide range of resolutions. For instance, ResFormer-B-MR achieves a Top-1 accuracy of 75.86% and 81.72% when evaluated on relatively low and high resolutions respectively (i.e., 96 and 640), which are 48% and 7.49% better than DeiT-B. We also demonstrate, among other things, that ResFormer is flexible and can be easily extended to semantic segmentation and video action recognition.
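A hedged sketch of multi-resolution training with a scale consistency loss, assuming the model accepts variable input sizes; the global-local positional embedding is not shown, and the exact consistency formulation in the paper may differ.

    import torch
    import torch.nn.functional as F

    def multi_resolution_loss(model, images, labels, sizes=(96, 224, 384), T=1.0):
        # Feed the same batch at several resolutions; supervise each with the label
        # and pull lower-resolution predictions toward the highest-resolution one.
        logits = [model(F.interpolate(images, size=(s, s), mode="bilinear",
                                      align_corners=False)) for s in sizes]
        ce = sum(F.cross_entropy(l, labels) for l in logits)
        anchor = F.softmax(logits[-1].detach() / T, dim=-1)
        consistency = sum(F.kl_div(F.log_softmax(l / T, dim=-1), anchor,
                                   reduction="batchmean") for l in logits[:-1])
        return ce + consistency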
We present a neat yet effective recursive operation on vision transformers that improves parameter utilization without involving additional parameters. This is achieved by sharing weights across the depth of the transformer network. The proposed method can obtain a substantial gain (~2%) using only naive recursive operations, requires no special or sophisticated knowledge of network design principles, and introduces minimal computational overhead to the training procedure. To reduce the additional computation caused by the recursive operations while maintaining superior accuracy, we introduce an approximating method through multiple sliced group self-attentions across the recursive layers, which can reduce the cost consumption by 10-30% with minimal performance loss. We call our model the Sliced Recursive Transformer (SReT), which is compatible with a broad range of other designs for efficient vision transformers. Our best model establishes significant improvements on ImageNet over state-of-the-art methods while containing fewer parameters. The proposed sliced recursive operation allows us to build transformers with more than 100 or even 1000 layers while still keeping a small size (13-15M), avoiding the difficulties in optimization when the model size is too large. This flexible scalability shows great potential for scaling up and constructing extremely deep and large-dimensional vision transformers. Our code and models are available at https://github.com/szq0214/sret.
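The recursion itself is easy to sketch (illustrative only; the full SReT design additionally uses sliced group self-attention and learnable scaling between recursions): one shared block is looped several times, so effective depth grows without adding parameters.

    import torch
    import torch.nn as nn

    class RecursiveBlock(nn.Module):
        # Reuse a single transformer block `loops` times: weights are shared across
        # depth, so the parameter count stays constant while depth increases.
        def __init__(self, dim, num_heads, loops=2):
            super().__init__()
            self.block = nn.TransformerEncoderLayer(d_model=dim, nhead=num_heads,
                                                    batch_first=True)
            self.loops = loops

        def forward(self, x):                 # x: (B, N, C)
            for _ in range(self.loops):
                x = self.block(x)             # same parameters at every step
            return x

    # y = RecursiveBlock(dim=384, num_heads=6, loops=3)(torch.randn(2, 196, 384))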
Transformer models have shown promising effectiveness in dealing with various vision tasks. However, compared with training convolutional neural network (CNN) models, training vision transformer (ViT) models is more difficult and relies on large-scale training sets. To explain this observation, we make the hypothesis that ViT models are less effective than CNN models in capturing the high-frequency components of images, and verify it by a frequency analysis. Inspired by this finding, we first investigate the effects of existing techniques for improving ViT models from a new frequency perspective, and find that the success of some techniques (e.g., RandAugment) can be attributed to better usage of the high-frequency components. Then, to compensate for this insufficient ability of ViT models, we propose HAT, which directly augments the high-frequency components of images via adversarial training. We show that HAT can consistently boost the performance of various ViT models (e.g., +1.2% for ViT-B, +0.5% for Swin-B), and in particular improves the advanced model VOLO-D5 to 87.3% using only ImageNet-1K data; the advantage is also maintained on out-of-distribution data and transfers to downstream tasks. The code is available at: https://github.com/jiawangbai/hat.
This paper investigates two techniques for developing efficient self-supervised vision transformers (EsViT) for visual representation learning. First, we show through a comprehensive empirical study that multi-stage architectures with sparse self-attention can significantly reduce modeling complexity, but at the cost of losing the ability to capture fine-grained correspondences between image regions. Second, we propose a new pre-training task of region matching, which allows the model to capture fine-grained region dependencies and, as a result, significantly improves the quality of the learned visual representations. Our results show that, combining the two techniques, EsViT achieves 81.3% top-1 on the ImageNet linear probe evaluation, outperforming prior art with around an order of magnitude higher throughput. When transferred to downstream linear classification tasks, EsViT outperforms its supervised counterpart on 17 out of 18 datasets. The code and models are publicly available: https://github.com/microsoft/esvit
Recently, large-scale pre-trained models have shown their advantages in many tasks. However, due to the huge computational complexity and storage requirements, it is challenging to apply the large-scale model to real scenes. A common solution is knowledge distillation which regards the large-scale model as a teacher model and helps to train a small student model to obtain a competitive performance. Cross-task Knowledge distillation expands the application scenarios of the large-scale pre-trained model. Existing knowledge distillation works focus on directly mimicking the final prediction or the intermediate layers of the teacher model, which represent the global-level characteristics and are task-specific. To alleviate the constraint of different label spaces, capturing invariant intrinsic local object characteristics (such as the shape characteristics of the leg and tail of the cattle and horse) plays a key role. Considering the complexity and variability of real scene tasks, we propose a Prototype-guided Cross-task Knowledge Distillation (ProC-KD) approach to transfer the intrinsic local-level object knowledge of a large-scale teacher network to various task scenarios. First, to better transfer the generalized knowledge in the teacher model in cross-task scenarios, we propose a prototype learning module to learn from the essential feature representation of objects in the teacher model. Secondly, for diverse downstream tasks, we propose a task-adaptive feature augmentation module to enhance the features of the student model with the learned generalization prototype features and guide the training of the student model to improve its generalization ability. The experimental results on various visual tasks demonstrate the effectiveness of our approach for large-scale model cross-task knowledge distillation scenes.
Recently, vision transformers (ViTs) and their variants have achieved promising performance in various computer vision tasks. However, the high computational costs and training data requirements of ViTs limit their application in resource-constrained settings. Model compression is an effective way to speed up deep learning models, but research on compressing ViTs has been less explored. Many previous works concentrate on reducing the number of tokens. However, this line of attack breaks down the spatial structure of ViTs and is hard to generalize to downstream tasks. In this paper, we design a unified framework for structural pruning of ViTs and their variants, namely UP-ViTs. Our method focuses on pruning all ViT components while maintaining the consistency of the model structure. Abundant experimental results show that our method can achieve high accuracy on compressed ViTs and variants; e.g., UP-DeiT-T achieves 75.79% accuracy on ImageNet, which outperforms the vanilla DeiT-T by 3.59% with the same computational cost, and UP-PVTv2-B0 improves the accuracy of PVTv2-B0 by 4.83% for ImageNet classification. Meanwhile, UP-ViTs maintain the consistency of the token representation and obtain consistent improvements on object detection tasks.
Recent progress in vision transformers has exhibited great success in various tasks, driven by a new spatial modeling mechanism based on dot-product self-attention. In this paper, we show that the key ingredients behind vision transformers, namely input-adaptive, long-range and high-order spatial interactions, can also be efficiently implemented with a convolution-based framework. We present the Recursive Gated Convolution ($\textit{g}^\textit{n}$Conv), which performs high-order spatial interactions with gated convolutions and recursive designs. The new operation is highly flexible and customizable: it is compatible with various variants of convolution and extends the two-order interactions in self-attention to arbitrary orders without introducing significant extra computation. $\textit{g}^\textit{n}$Conv can serve as a plug-and-play module to improve various vision transformers and convolution-based models. Based on this operation, we construct a new family of generic vision backbones named HorNet. Extensive experiments on ImageNet classification, COCO object detection and ADE20K semantic segmentation show that HorNet outperforms Swin Transformers by a significant margin with similar overall architectures and training configurations. HorNet also shows favorable scalability to more training data and larger model sizes. Apart from its effectiveness in visual encoders, we also show that $\textit{g}^\textit{n}$Conv can be applied to task-specific decoders and consistently improves dense prediction performance with less computation. Our results demonstrate that $\textit{g}^\textit{n}$Conv can be a new basic module for visual modeling that effectively combines the merits of both vision transformers and CNNs. Code is available at https://github.com/raoyongming/hornet
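A simplified sketch of the recursive gated convolution idea (not the released HorNet module, which uses progressively growing channel widths and an optional global-filter branch): features are mixed spatially by a depthwise convolution and then combined by repeated element-wise gating, raising the interaction order at each step.

    import torch
    import torch.nn as nn

    class SimpleGnConv(nn.Module):
        def __init__(self, dim, order=3):
            super().__init__()
            assert dim % order == 0
            self.order = order
            self.d = dim // order
            self.proj_in = nn.Conv2d(dim, dim + self.d, 1)   # -> [q_1..q_order | p_0]
            self.dwconv = nn.Conv2d(dim, dim, 7, padding=3, groups=dim)
            self.pw = nn.ModuleList(nn.Conv2d(self.d, self.d, 1)
                                    for _ in range(order - 1))
            self.proj_out = nn.Conv2d(self.d, dim, 1)

        def forward(self, x):                                # x: (B, dim, H, W)
            q, p = torch.split(self.proj_in(x), [x.shape[1], self.d], dim=1)
            qs = self.dwconv(q).chunk(self.order, dim=1)     # spatial mixing
            for i in range(self.order):
                p = p * qs[i]                                # element-wise gating
                if i < self.order - 1:
                    p = self.pw[i](p)                        # raise interaction order
            return self.proj_out(p)

    # y = SimpleGnConv(dim=96, order=3)(torch.randn(2, 96, 56, 56))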
Vision Transformers (ViTs) have shown rapid progress in computer vision tasks, achieving promising results on various benchmarks. However, due to the massive number of parameters and model design, e.g., the attention mechanism, ViT-based models are generally slower than lightweight convolutional networks. Therefore, deploying ViTs for real-time applications is particularly challenging, especially on resource-constrained hardware such as mobile devices. Recent efforts have tried to reduce the computational complexity of ViTs through network architecture search or hybrid designs with MobileNet blocks, yet the inference speed is still unsatisfactory. This leads to an important question: can transformers run as fast as MobileNet while obtaining high performance? To answer this, we first revisit the network architectures and operators used in ViT-based models and identify inefficient designs. Then we introduce a dimension-consistent pure transformer (without MobileNet blocks) as a design paradigm. Finally, we perform latency-driven slimming to obtain a series of final models dubbed EfficientFormer. Extensive experiments show the superiority of EfficientFormer in performance and speed on mobile devices. Our fastest model, EfficientFormer-L1, achieves 79.2% top-1 accuracy on ImageNet-1K with only 1.6 ms inference latency on iPhone 12 (compiled with CoreML), which runs as fast as MobileNetV2x1.4 (1.6 ms, 74.7% top-1), and our largest model, EfficientFormer-L7, obtains 83.3% accuracy with only 7.0 ms latency. Our work demonstrates that properly designed transformers can reach extremely low latency on mobile devices while maintaining high performance.
Benefiting from masked visual modeling, self-supervised video representation learning has achieved remarkable progress. However, existing methods focus on learning representations from scratch through reconstructing low-level features like raw pixel RGB values. In this paper, we propose masked video distillation (MVD), a simple yet effective two-stage masked feature modeling framework for video representation learning: firstly we pretrain an image (or video) model by recovering low-level features of masked patches, then we use the resulting features as targets for masked feature modeling. For the choice of teacher models, we observe that students taught by video teachers perform better on temporally-heavy video tasks, while image teachers transfer stronger spatial representations for spatially-heavy video tasks. Visualization analysis also indicates different teachers produce different learned patterns for students. Motivated by this observation, to leverage the advantage of different teachers, we design a spatial-temporal co-teaching method for MVD. Specifically, we distill student models from both video teachers and image teachers by masked feature modeling. Extensive experimental results demonstrate that video transformers pretrained with spatial-temporal co-teaching outperform models distilled with a single teacher on a multitude of video datasets. Our MVD with vanilla ViT achieves state-of-the-art performance compared with previous supervised or self-supervised methods on several challenging video downstream tasks. For example, with the ViT-Large model, our MVD achieves 86.4% and 75.9% Top-1 accuracy on Kinetics-400 and Something-Something-v2, outperforming VideoMAE by 1.2% and 1.6% respectively. Code will be available at \url{https://github.com/ruiwang2021/mvd}.
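A hedged sketch of the spatial-temporal co-teaching objective: the student reconstructs, at masked positions only, targets produced by an image teacher and a video teacher. The paper uses separate decoder heads for the two targets, which this sketch collapses into one for brevity.

    import torch
    import torch.nn.functional as F

    def co_teaching_loss(student_feats, image_teacher_feats, video_teacher_feats,
                         mask, w_image=0.5, w_video=0.5):
        # feats: (B, N, C); mask: (B, N) boolean, True at masked token positions.
        m = mask.unsqueeze(-1).float()
        def masked_mse(pred, target):
            target = F.layer_norm(target, target.shape[-1:])   # normalized targets
            return (((pred - target) ** 2) * m).sum() / m.sum().clamp(min=1)
        return (w_image * masked_mse(student_feats, image_teacher_feats.detach()) +
                w_video * masked_mse(student_feats, video_teacher_feats.detach()))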