Vision Transformers convert images to sequences by slicing them into patches. The size of these patches controls a speed/accuracy tradeoff, with smaller patches leading to higher accuracy at greater computational cost, but changing the patch size typically requires retraining the model. In this paper, we demonstrate that simply randomizing the patch size at training time leads to a single set of weights that performs well across a wide range of patch sizes, making it possible to tailor the model to different compute budgets at deployment time. We extensively evaluate the resulting model, which we call FlexiViT, on a wide range of tasks, including classification, image-text retrieval, open-world detection, panoptic segmentation, and semantic segmentation, concluding that it usually matches, and sometimes outperforms, standard ViT models trained at a single patch size in an otherwise identical setup. Hence, FlexiViT training is a simple drop-in improvement for ViT that makes it easy to add compute-adaptive capabilities to most models relying on a ViT backbone architecture. Code and pre-trained models are available at https://github.com/google-research/big_vision
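Although the paper's own patch-size resizing is more carefully designed than plain interpolation, the core training idea can be sketched as follows: at each step a patch size is drawn from a fixed set and the patch-embedding kernel is resized to match, so one set of weights sees many token granularities. All names, shapes, and the bilinear resize below are illustrative assumptions, not the official big_vision implementation.

```python
import torch
import torch.nn.functional as F

def resize_patch_embed(weight: torch.Tensor, new_size: int) -> torch.Tensor:
    """Resize a (dim, channels, p, p) patch-embedding kernel to a new patch size (bilinear here)."""
    return F.interpolate(weight, size=(new_size, new_size), mode="bilinear", align_corners=False)

def embed_patches(images: torch.Tensor, base_weight: torch.Tensor, patch_size: int) -> torch.Tensor:
    """Tokenize images with the resized kernel; stride equals patch size, so patches do not overlap."""
    kernel = resize_patch_embed(base_weight, patch_size)
    tokens = F.conv2d(images, kernel, stride=patch_size)   # (B, dim, H/p, W/p)
    return tokens.flatten(2).transpose(1, 2)               # (B, num_patches, dim)

# Per training step, draw a patch size from a fixed set; position embeddings would need an
# analogous resize, omitted here for brevity.
base_weight = torch.randn(768, 3, 32, 32)                  # hypothetical underlying kernel
images = torch.randn(2, 3, 224, 224)
patch_size = [8, 16, 32][int(torch.randint(0, 3, (1,)))]
tokens = embed_patches(images, base_weight, patch_size)    # e.g. (2, 196, 768) for patch size 16
```

At deployment time the same weights can then be run with a small patch size when accuracy matters, or a large one when throughput matters.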
While the Transformer architecture has become the de-facto standard for natural language processing tasks, its applications to computer vision remain limited. In vision, attention is either applied in conjunction with convolutional networks, or used to replace certain components of convolutional networks while keeping their overall structure in place. We show that this reliance on CNNs is not necessary and a pure transformer applied directly to sequences of image patches can perform very well on image classification tasks. When pre-trained on large amounts of data and transferred to multiple mid-sized or small image recognition benchmarks (ImageNet, CIFAR-100, VTAB, etc.), Vision Transformer (ViT) attains excellent results compared to state-of-the-art convolutional networks while requiring substantially fewer computational resources to train.
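The patchification step itself is simple enough to show directly; the sketch below, with illustrative shapes, flattens non-overlapping patches and projects them linearly to the token dimension, which is the sequence the transformer encoder consumes.

```python
import torch

def patchify(images: torch.Tensor, patch_size: int) -> torch.Tensor:
    """(B, C, H, W) -> (B, N, C*p*p) with N = (H/p) * (W/p) non-overlapping patches."""
    b, c, h, w = images.shape
    p = patch_size
    x = images.reshape(b, c, h // p, p, w // p, p)
    x = x.permute(0, 2, 4, 1, 3, 5)                 # (B, H/p, W/p, C, p, p)
    return x.reshape(b, (h // p) * (w // p), c * p * p)

proj = torch.nn.Linear(3 * 16 * 16, 768)            # learned patch embedding
tokens = proj(patchify(torch.randn(1, 3, 224, 224), 16))   # (1, 196, 768)
```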
Vision Transformers (ViT) have been shown to attain highly competitive performance for a wide range of vision applications, such as image classification, object detection, and semantic image segmentation. In comparison to convolutional neural networks, the Vision Transformer's weaker inductive bias is generally found to cause an increased reliance on model regularization or data augmentation ("AugReg" for short) when training on smaller datasets. We conduct a systematic empirical study in order to better understand the interplay between the amount of training data, AugReg, model size, and compute budget. As one result of this study, we find that the combination of increased compute and AugReg can yield models with the same performance as models trained on an order of magnitude more training data: we train ViT models of various sizes on the public ImageNet-21k dataset that match or outperform their counterparts trained on the larger JFT-300M dataset.
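To make the "AugReg" term concrete, the snippet below lists the kind of augmentation and regularization knobs such a study sweeps together. The specific values are hypothetical placeholders, not the paper's per-model recipes.

```python
# Illustrative AugReg-style settings (hypothetical values):
augreg_config = {
    "randaugment": {"num_layers": 2, "magnitude": 10},   # data augmentation
    "mixup_alpha": 0.2,                                  # data augmentation
    "dropout_rate": 0.1,                                 # model regularization
    "stochastic_depth_rate": 0.1,                        # model regularization
    "weight_decay": 0.1,                                 # optimizer-side regularization
}
```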
Combining simple architectures with large-scale pre-training has led to massive improvements in image classification. For object detection, pre-training and scaling approaches are less well established, especially in the long-tailed and open-vocabulary setting, where training data is comparatively scarce. In this paper, we propose a strong recipe for transferring image-text models to open-vocabulary object detection. We use a standard Vision Transformer architecture with minimal modifications, contrastive image-text pre-training, and end-to-end detection fine-tuning. Our analysis of the scaling properties of this setup shows that increasing image-level pre-training and model size yields consistent improvements on the downstream detection task. We provide the adaptation strategies and regularizations needed to attain very strong performance on zero-shot text-conditioned and one-shot image-conditioned object detection. Code and models are available on GitHub.
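A hedged sketch of the open-vocabulary classification step such a detector relies on: each predicted box embedding is scored against text embeddings of the class queries by cosine similarity, so new classes only require new text queries. Shapes and names are illustrative, not the OWL-ViT API.

```python
import torch
import torch.nn.functional as F

def score_boxes(box_embeds: torch.Tensor, text_embeds: torch.Tensor) -> torch.Tensor:
    """box_embeds: (num_boxes, D); text_embeds: (num_queries, D) -> (num_boxes, num_queries) scores."""
    box_embeds = F.normalize(box_embeds, dim=-1)
    text_embeds = F.normalize(text_embeds, dim=-1)
    return box_embeds @ text_embeds.t()                 # cosine similarity per (box, query) pair

scores = score_boxes(torch.randn(100, 512), torch.randn(5, 512))
labels = scores.argmax(dim=-1)                          # best-matching text query per box
```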
Recently, neural networks purely based on attention were shown to address image understanding tasks such as image classification. These high-performing vision transformers are pre-trained with hundreds of millions of images using a large infrastructure, thereby limiting their adoption. In this work, we produce competitive convolution-free transformers by training on ImageNet only. We train them on a single computer in less than 3 days. Our reference vision transformer (86M parameters) achieves top-1 accuracy of 83.1% (single-crop) on ImageNet with no external data. More importantly, we introduce a teacher-student strategy specific to transformers. It relies on a distillation token ensuring that the student learns from the teacher through attention. We show the interest of this token-based distillation, especially when using a convnet as a teacher. This leads us to report results competitive with convnets for both ImageNet (where we obtain up to 85.2% accuracy) and when transferring to other tasks. We share our code and models.
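A minimal sketch of the distillation-token idea, assuming a hard-label teacher target; the tokens, heads, and loss weighting here are illustrative and not the exact DeiT implementation.

```python
import torch
import torch.nn.functional as F

dim, num_classes = 768, 1000
cls_token = torch.nn.Parameter(torch.zeros(1, 1, dim))
dist_token = torch.nn.Parameter(torch.zeros(1, 1, dim))   # extra learnable distillation token
head_cls = torch.nn.Linear(dim, num_classes)
head_dist = torch.nn.Linear(dim, num_classes)

def deit_loss(encoder, patch_tokens, labels, teacher_logits):
    """Class token learns from labels, distillation token from the teacher, both via attention."""
    b = patch_tokens.shape[0]
    x = torch.cat([cls_token.expand(b, -1, -1), dist_token.expand(b, -1, -1), patch_tokens], dim=1)
    x = encoder(x)                                          # any transformer encoder over the sequence
    loss_cls = F.cross_entropy(head_cls(x[:, 0]), labels)
    loss_dist = F.cross_entropy(head_dist(x[:, 1]), teacher_logits.argmax(-1))   # hard distillation
    return 0.5 * (loss_cls + loss_dist)

# Smoke test with an identity "encoder" standing in for the transformer blocks:
loss = deit_loss(torch.nn.Identity(), torch.randn(4, 196, dim),
                 torch.randint(0, num_classes, (4,)), torch.randn(4, num_classes))
```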
This paper presents contrastive-tuning, a simple method employing contrastive training to align image and text models while still taking advantage of their pre-training. In our empirical study we find that locked pre-trained image models combined with unlocked text models work best. We call this instance of contrastive-tuning "Locked-image Tuning" (LiT), which just teaches a text model to read out good representations from a pre-trained image model for new tasks. A LiT model gains the capability of zero-shot transfer to new vision tasks, such as image classification or retrieval. The proposed LiT is widely applicable; it works reliably with multiple pre-training methods (supervised and unsupervised) and across diverse architectures (ResNet, Vision Transformers, and MLP-Mixer) using three different image-text datasets. With a transformer-based pre-trained ViT-g/14 model, the LiT model achieves 84.5% zero-shot transfer accuracy on the ImageNet test set, and 81.1% on the challenging out-of-distribution ObjectNet test set.
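The contrastive-tuning objective itself is the standard symmetric image-text loss; what the locked-image setup changes is that the image tower stays frozen. A minimal sketch, with illustrative names:

```python
import torch
import torch.nn.functional as F

def contrastive_loss(img_emb: torch.Tensor, txt_emb: torch.Tensor, temperature: float = 0.07):
    """Symmetric InfoNCE over a batch of paired image/text embeddings, both (B, D)."""
    img_emb = F.normalize(img_emb, dim=-1)
    txt_emb = F.normalize(txt_emb, dim=-1)
    logits = img_emb @ txt_emb.t() / temperature
    targets = torch.arange(logits.shape[0], device=logits.device)
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))

# "Locking" the image tower (illustrative): freeze its parameters and train only the text tower.
# for p in image_encoder.parameters():
#     p.requires_grad = False
loss = contrastive_loss(torch.randn(8, 512), torch.randn(8, 512))
```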
We present Multiscale Vision Transformers (MViT) for video and image recognition, by connecting the seminal idea of multiscale feature hierarchies with transformer models. Multiscale Transformers have several channel-resolution scale stages. Starting from the input resolution and a small channel dimension, the stages hierarchically expand the channel capacity while reducing the spatial resolution. This creates a multiscale pyramid of features with early layers operating at high spatial resolution to model simple low-level visual information, and deeper layers at spatially coarse, but complex, high-dimensional features. We evaluate this fundamental architectural prior for modeling the dense nature of visual signals for a variety of video recognition tasks where it outperforms concurrent vision transformers that rely on large scale external pre-training and are 5-10× more costly in computation and parameters. We further remove the temporal dimension and apply our model for image classification where it outperforms prior work on vision transformers. Code is available at: https://github.com/facebookresearch/SlowFast.
With the rise of Transformers as the standard for language processing and their advances in computer vision, parameter sizes and amounts of training data have grown accordingly. Many have come to believe that, because of this, transformers are not suitable for small amounts of data. This trend raises concerns such as the limited availability of data in certain scientific domains and the exclusion of researchers with limited resources from the field. In this paper, we aim to present an approach for small-scale learning by introducing Compact Transformers. We show for the first time that, with the right size and convolutional tokenization, transformers can avoid overfitting and outperform state-of-the-art CNNs on small datasets. Our models are flexible in terms of model size and can have as few as 0.28M parameters while achieving competitive results. Our best model reaches 98% accuracy when trained from scratch on CIFAR-10 with only 3.7M parameters, a significant improvement in data efficiency over previous transformer-based models, being over 10x smaller than other transformers and 15% the size of ResNet50 while achieving similar performance. CCT also outperforms many modern CNN-based approaches, and even some recent NAS-based methods. Additionally, we obtain a new SOTA result on Flowers-102 with 99.76% top-1 accuracy, and improve upon existing baselines on ImageNet (82.71% accuracy with 29% of the ViT parameters) as well as on NLP tasks. Our simple and compact design makes transformers more feasible to study for those with limited computing resources and/or who work with small datasets, while extending existing research on data-efficient transformers. Our code and pre-trained models are publicly available at https://github.com/shi-labs/compact-transformers.
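A hedged sketch of convolutional tokenization: a small stack of convolutions and pooling produces the token grid instead of a single large-stride patch projection, reintroducing some locality bias. Layer sizes are illustrative, not the CCT configuration.

```python
import torch
import torch.nn as nn

conv_tokenizer = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=3, stride=1, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(kernel_size=3, stride=2, padding=1),
    nn.Conv2d(64, 128, kernel_size=3, stride=1, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(kernel_size=3, stride=2, padding=1),
)

x = conv_tokenizer(torch.randn(1, 3, 32, 32))   # (1, 128, 8, 8) feature map
tokens = x.flatten(2).transpose(1, 2)           # (1, 64, 128) token sequence for the transformer
```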
Recently, neural networks purely based on attention were shown to address image understanding tasks such as image classification. These high-performing vision transformers are pre-trained with hundreds of millions of images using a large infrastructure, thereby limiting their adoption. In this work, we produce competitive convolution-free transformers trained on ImageNet only, using a single computer in less than 3 days. Our reference vision transformer (86M parameters) achieves top-1 accuracy of 83.1% (single-crop) on ImageNet with no external data. We also introduce a teacher-student strategy specific to transformers. It relies on a distillation token ensuring that the student learns from the teacher through attention, typically from a convnet teacher. The learned transformers are competitive (85.2% top-1 acc.) with the state of the art on ImageNet, and similarly when transferred to other tasks. We will share our code and models.
Attention-based neural networks such as the Vision Transformer (ViT) have recently attained state-of-the-art results on many computer vision benchmarks. Scale is a primary ingredient in attaining excellent results, so understanding a model's scaling properties is key to designing future generations effectively. While the laws for scaling Transformer language models have been studied, it is unknown how Vision Transformers scale. To address this, we scale ViT models and data, both up and down, and characterize the relationships between error rate, data, and compute. Along the way, we refine the architecture and training of ViT, reducing memory consumption and increasing the accuracy of the resulting models. As a result, we successfully train a ViT model with two billion parameters, which attains a new state-of-the-art 90.45% top-1 accuracy. The model also performs well on few-shot transfer, for example reaching 84.86% top-1 accuracy on ImageNet with only 10 examples per class.
Vision Transformers have demonstrated the potential to outperform CNNs in a variety of vision tasks. However, the computational and memory requirements of these models prohibit their use in many applications, especially those that depend on high-resolution images, such as medical image classification. Efforts to train ViTs more efficiently tend to be overly complicated, requiring architectural changes or intricate training schemes. In this work, we show that standard ViT models can be efficiently trained at high resolution by randomly dropping input image patches. This simple approach, PatchDropout, reduces FLOPs and memory by at least 50% on standard natural-image datasets such as ImageNet, and those savings only grow with image size. On CSAW, a high-resolution medical dataset, we achieve a 5x savings in compute and memory using PatchDropout, along with a boost in performance. For practitioners with a fixed computational or memory budget, PatchDropout makes it possible to choose the image resolution, hyperparameters, or model size that gets the most performance out of the model.
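The mechanism is simple enough to sketch directly: keep a random subset of patch tokens per image before the transformer encoder, so attention and MLP cost scale with the kept fraction. Names and the keep strategy below are illustrative, not the paper's exact code.

```python
import torch

def drop_patches(tokens: torch.Tensor, keep_ratio: float) -> torch.Tensor:
    """tokens: (B, N, D) patch tokens (class token excluded); keeps a random subset per sample."""
    b, n, d = tokens.shape
    n_keep = max(1, int(round(keep_ratio * n)))
    idx = torch.rand(b, n, device=tokens.device).argsort(dim=1)[:, :n_keep]
    return tokens.gather(1, idx.unsqueeze(-1).expand(-1, -1, d))

kept = drop_patches(torch.randn(4, 196, 768), keep_ratio=0.5)   # (4, 98, 768): ~50% of the tokens
```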
Image segmentation is often ambiguous at the level of individual image patches and requires contextual information to reach label consensus. In this paper we introduce Segmenter, a transformer model for semantic segmentation. In contrast to convolution-based methods, our approach allows modeling global context already at the first layer and throughout the network. We build on the recent Vision Transformer (ViT) and extend it to semantic segmentation. To do so, we rely on the output embeddings corresponding to image patches and obtain class labels from these embeddings with a point-wise linear decoder or a mask transformer decoder. We leverage models pre-trained for image classification and show that we can fine-tune them on moderately sized datasets available for semantic segmentation. The linear decoder already obtains excellent results, and performance can be further improved by a mask transformer generating class masks. We conduct an extensive ablation study to show the impact of the different parameters; in particular, performance is better for large models and small patch sizes. Segmenter attains excellent results for semantic segmentation. It outperforms the state of the art on both the ADE20K and Pascal Context datasets and is competitive on Cityscapes.
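A minimal sketch of the point-wise linear decoder, with illustrative shapes: per-patch class logits are reshaped to the patch grid and bilinearly upsampled to pixel resolution (the mask-transformer decoder is not shown).

```python
import torch
import torch.nn.functional as F

def linear_decode(patch_embeds, grid_h, grid_w, num_classes, proj):
    """patch_embeds: (B, N, D) with N = grid_h * grid_w -> per-pixel class logits."""
    logits = proj(patch_embeds)                                          # (B, N, num_classes)
    logits = logits.transpose(1, 2).reshape(-1, num_classes, grid_h, grid_w)
    return F.interpolate(logits, scale_factor=16, mode="bilinear", align_corners=False)

proj = torch.nn.Linear(768, 150)                                         # e.g. 150 ADE20K classes
seg_logits = linear_decode(torch.randn(1, 196, 768), 14, 14, 150, proj)  # (1, 150, 224, 224)
```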
The current modus operandi in adapting pre-trained models involves updating all of the backbone parameters, i.e., full fine-tuning. This paper introduces Visual Prompt Tuning (VPT) as an efficient and effective alternative to full fine-tuning for large-scale Transformer models in vision. Taking inspiration from recent advances in efficiently tuning large language models, VPT introduces only a small number of trainable parameters in the input space (less than 1% of the model parameters) while keeping the model backbone frozen. Via extensive experiments on a wide variety of downstream recognition tasks, we show that VPT achieves significant performance gains compared to other parameter-efficient tuning protocols. Most importantly, VPT even outperforms full fine-tuning in many cases across model capacities and training data scales, while reducing per-task storage cost.
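A minimal sketch of the prompt-tuning mechanism, assuming a shallow variant where learnable prompt tokens are inserted only at the input after the class token; names, counts, and initialization are illustrative, not the VPT release.

```python
import torch

num_prompts, dim = 10, 768
prompts = torch.nn.Parameter(torch.zeros(1, num_prompts, dim))
torch.nn.init.normal_(prompts, std=0.02)

def with_prompts(cls_and_patch_tokens: torch.Tensor) -> torch.Tensor:
    """(B, 1+N, D) -> (B, 1+num_prompts+N, D): learnable prompts inserted after the class token."""
    b = cls_and_patch_tokens.shape[0]
    cls_tok, patches = cls_and_patch_tokens[:, :1], cls_and_patch_tokens[:, 1:]
    return torch.cat([cls_tok, prompts.expand(b, -1, -1), patches], dim=1)

prompted = with_prompts(torch.randn(4, 197, dim))   # (4, 207, 768)
# The backbone stays frozen (illustrative); only the prompts and the task head receive gradients:
# for p in vit.parameters():
#     p.requires_grad = False
```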
We present a framework for learning multimodal representations from unlabeled data using convolution-free Transformer architectures. Specifically, our Video-Audio-Text Transformer (VATT) takes raw signals as inputs and extracts multimodal representations that are rich enough to benefit a variety of downstream tasks. We train VATT end-to-end from scratch using multimodal contrastive losses and evaluate its performance on the downstream tasks of video action recognition, audio event classification, image classification, and text-to-video retrieval. Furthermore, we study a modality-agnostic single-backbone Transformer by sharing weights among the three modalities. We show that the convolution-free VATT outperforms state-of-the-art ConvNet-based architectures on the downstream tasks. In particular, VATT's vision Transformer achieves top-1 accuracies of 82.1% on Kinetics-400 and 72.7% on Kinetics-700, setting new records on Kinetics-400, Kinetics-600, Kinetics-700, and Moments in Time while avoiding supervised pre-training. Transferring the same Transformer to image classification yields 78.7% top-1 accuracy on ImageNet, compared to 64.7% when training it from scratch, showing that our model generalizes despite the domain gap between videos and images. VATT's audio Transformer also sets a new record on waveform-based audio event recognition, achieving 39.4% mAP on AudioSet without any supervised pre-training. VATT's source code is publicly available.
We introduce the notion of a Patch Sampling Schedule (PSS), which varies the number of Vision Transformer (ViT) patches used per batch during training. Since all patches are not equally important for most vision objectives (e.g., classification), we argue that less important patches can be used in fewer training iterations, leading to shorter training time with minimal impact on performance. Additionally, we observe that training with a PSS makes a ViT more robust to a wider range of patch sampling rates at inference time. This allows for a fine-grained, dynamic trade-off between throughput and accuracy during inference. We evaluate ViTs with PSS on ImageNet, both trained from scratch and pre-trained with a reconstruction loss function. For the pre-trained model, we obtain a 0.26% reduction in classification accuracy while reducing training time from 25 to 17 hours, compared to using all patches. Code, model checkpoints, and logs are available at https://github.com/bradmcdanel/pss.
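As a rough illustration of the idea (the paper's actual schedules may differ), one can vary the fraction of patches used per batch over training and then subsample patch tokens at that fraction, as in the PatchDropout sketch above.

```python
import random

def sample_keep_fraction(low: float = 0.5, high: float = 1.0) -> float:
    """Hypothetical per-batch choice of how many patches to keep during training."""
    return random.uniform(low, high)

# n_keep = int(sample_keep_fraction() * num_patches)  # then keep that many patch tokens this batch
```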
Transformers and masked language modeling are quickly being adopted in computer vision as Vision Transformers and masked image modeling (MIM). In this work, we argue that masking image tokens differs from masking tokens in text, due to the number and correlation of tokens in an image. In particular, to generate a challenging pretext task for MIM, we advocate a shift from random masking to informed masking. We develop and exhibit this idea in the context of distillation-based MIM, where a teacher transformer encoder generates an attention map, which we use to guide masking for the student. We thus introduce a novel masking strategy, called attention-guided masking (AttMask), and we demonstrate its effectiveness for dense distillation-based MIM as well as for plain distillation-based self-supervised learning. We confirm that AttMask accelerates the learning process and improves performance on a variety of downstream tasks. We provide the implementation code at https://github.com/gkakogeorgiou/attmask.
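A hedged sketch of attention-guided masking: the teacher's attention from the class token over the patch tokens selects which tokens to mask for the student, here simply the most-attended ones. Names and the selection rule are illustrative.

```python
import torch

def attention_guided_mask(cls_attention: torch.Tensor, mask_ratio: float) -> torch.Tensor:
    """cls_attention: (B, N) teacher attention over N patch tokens -> boolean mask, True = masked."""
    b, n = cls_attention.shape
    n_mask = int(mask_ratio * n)
    idx = cls_attention.argsort(dim=1, descending=True)[:, :n_mask]   # most-attended tokens
    mask = torch.zeros(b, n, dtype=torch.bool, device=cls_attention.device)
    mask.scatter_(1, idx, True)
    return mask

mask = attention_guided_mask(torch.rand(4, 196), mask_ratio=0.4)      # mask 40% of the patch tokens
```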
Convolutional Neural Networks (CNNs) are the go-to model for computer vision. Recently, attention-based networks, such as the Vision Transformer, have also become popular. In this paper we show that while convolutions and attention are both sufficient for good performance, neither of them are necessary. We present MLP-Mixer, an architecture based exclusively on multi-layer perceptrons (MLPs). MLP-Mixer contains two types of layers: one with MLPs applied independently to image patches (i.e. "mixing" the per-location features), and one with MLPs applied across patches (i.e. "mixing" spatial information). When trained on large datasets, or with modern regularization schemes, MLP-Mixer attains competitive scores on image classification benchmarks, with pre-training and inference cost comparable to state-of-the-art models. We hope that these results spark further research beyond the realms of well established CNNs and Transformers.
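The two layer types are easy to show in code. Below is a minimal Mixer-block sketch: a token-mixing MLP applied across patches (on the transposed tensor), then a channel-mixing MLP applied per patch, each with a pre-norm and a residual connection. Hidden widths are illustrative.

```python
import torch
import torch.nn as nn

class MixerBlock(nn.Module):
    def __init__(self, num_tokens: int, dim: int, token_hidden: int = 256, channel_hidden: int = 2048):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.token_mlp = nn.Sequential(
            nn.Linear(num_tokens, token_hidden), nn.GELU(), nn.Linear(token_hidden, num_tokens))
        self.norm2 = nn.LayerNorm(dim)
        self.channel_mlp = nn.Sequential(
            nn.Linear(dim, channel_hidden), nn.GELU(), nn.Linear(channel_hidden, dim))

    def forward(self, x):                               # x: (B, num_tokens, dim)
        y = self.norm1(x).transpose(1, 2)               # (B, dim, num_tokens): mix across patches
        x = x + self.token_mlp(y).transpose(1, 2)
        return x + self.channel_mlp(self.norm2(x))      # mix across channels per patch

out = MixerBlock(196, 512)(torch.randn(1, 196, 512))    # (1, 196, 512)
```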
In this paper, we ask whether Vision Transformers (ViTs) can serve as an underlying architecture for improving the adversarial robustness of machine learning models against evasion attacks. While earlier works have focused on improving Convolutional Neural Networks, we show that ViTs are also highly suitable for adversarial training and can achieve competitive performance. We accomplish this with a custom adversarial training recipe, discovered through rigorous ablation studies on a subset of the ImageNet dataset. The canonical training recipe for ViTs recommends strong data augmentation, in part to compensate for the weaker visual inductive bias of attention modules compared to convolutions. We show that this recipe achieves suboptimal performance when used for adversarial training. In contrast, we find that omitting all heavy data augmentation and adding a few additional ingredients ($\varepsilon$-warmup and larger weight decay) significantly boosts the performance of robust ViTs. We show that our recipe generalizes to different classes of ViT architectures and to large-scale models on full ImageNet-1k. Furthermore, investigating the reasons for the robustness of our models, we show that it is easier to generate strong attacks during training when using our recipe, and that this leads to better robustness at test time. Finally, we further study one consequence of adversarial training by proposing a way to quantify the semantic nature of adversarial perturbations and highlight its correlation with the robustness of the model. Overall, we recommend that the community avoid translating the canonical training recipe of ViTs directly to robust training and instead rethink common training choices in the context of adversarial training.
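Of the extra ingredients mentioned, the $\varepsilon$-warmup is simple to illustrate: the perturbation budget used for adversarial training is ramped up from zero early in training. The schedule shape and target budget below are hypothetical, not the paper's exact values.

```python
def epsilon_at_step(step: int, warmup_steps: int, eps_max: float = 4 / 255) -> float:
    """Linearly ramp the adversarial perturbation budget from 0 to eps_max over warmup_steps."""
    return eps_max * min(1.0, step / max(1, warmup_steps))
```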
Transformers have recently been adapted for large-scale image classification, achieving high scores that shake up the long supremacy of convolutional neural networks. However, the optimization of image transformers has been little studied so far. In this work, we build and optimize deeper transformer networks for image classification. In particular, we investigate the interplay of architecture and optimization of such dedicated transformers. We make two transformer architecture changes that significantly improve the accuracy of deep transformers. This leads us to produce models whose performance does not saturate early with more depth: for instance, we obtain 86.5% top-1 accuracy on ImageNet when training with no external data, attaining the current SOTA with fewer FLOPs and parameters. Moreover, our best model establishes the new state of the art on ImageNet with Reassessed labels and on ImageNet-V2 / match frequency, in the setting with no additional training data. We share our code and models.