Convolutional networks have been the paradigm of choice in many computer vision applications. The convolution operation, however, has a significant weakness in that it only operates on a local neighborhood, thus missing global information. Self-attention, on the other hand, has emerged as a recent advance to capture long-range interactions, but has mostly been applied to sequence modeling and generative modeling tasks. In this paper, we consider the use of self-attention for discriminative visual tasks as an alternative to convolutions. We introduce a novel two-dimensional relative self-attention mechanism that proves competitive in replacing convolutions as a stand-alone computational primitive for image classification. We find in control experiments that the best results are obtained when combining both convolutions and self-attention. We therefore propose to augment convolutional operators with this self-attention mechanism by concatenating convolutional feature maps with a set of feature maps produced via self-attention. Extensive experiments show that Attention Augmentation leads to consistent improvements in image classification on ImageNet and object detection on COCO across many different models and scales, including ResNets and a state-of-the-art mobile-constrained network, while keeping the number of parameters similar. In particular, our method achieves a 1.3% top-1 accuracy improvement on ImageNet classification over a ResNet-50 baseline and outperforms other attention mechanisms for images such as Squeeze-and-Excitation. It also achieves an improvement of 1.4 mAP in COCO object detection on top of a RetinaNet baseline.
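As a rough illustration of the augmentation described above, the sketch below (plain PyTorch; the module name, the channel split between convolutional and attentional feature maps, and the head count are illustrative assumptions, and the paper's 2D relative position encodings are omitted) concatenates a convolution's output with globally self-attended feature maps along the channel dimension.

```python
# Minimal sketch of an attention-augmented convolution, assuming a PyTorch setting.
import torch
import torch.nn as nn


class AugmentedConv2d(nn.Module):
    """Concatenates a standard convolution's output with multi-head self-attention
    feature maps computed over all spatial positions."""

    def __init__(self, in_ch, out_ch, attn_ch, heads=4, kernel_size=3):
        super().__init__()
        assert attn_ch % heads == 0
        self.conv = nn.Conv2d(in_ch, out_ch - attn_ch, kernel_size, padding=kernel_size // 2)
        self.attn = nn.MultiheadAttention(attn_ch, heads, batch_first=True)
        self.to_attn = nn.Conv2d(in_ch, attn_ch, 1)  # project input into the attention width

    def forward(self, x):
        b, _, h, w = x.shape
        conv_out = self.conv(x)
        tokens = self.to_attn(x).flatten(2).transpose(1, 2)       # (B, H*W, attn_ch)
        attn_out, _ = self.attn(tokens, tokens, tokens)           # global self-attention
        attn_out = attn_out.transpose(1, 2).reshape(b, -1, h, w)  # back to a feature map
        return torch.cat([conv_out, attn_out], dim=1)             # channel-wise concatenation


x = torch.randn(2, 32, 16, 16)
y = AugmentedConv2d(32, 64, attn_ch=16)(x)
print(y.shape)  # torch.Size([2, 64, 16, 16])
```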
Self-attention has the promise of improving computer vision systems due to parameter-independent scaling of receptive fields and content-dependent interactions, in contrast to parameter-dependent scaling and content-independent interactions of convolutions. Self-attention models have recently been shown to have encouraging improvements on accuracy-parameter trade-offs compared to baseline convolutional models such as ResNet-50. In this work, we aim to develop self-attention models that can outperform not just the canonical baseline models, but even the high-performing convolutional models. We propose two extensions to self-attention that, in conjunction with a more efficient implementation of self-attention, improve the speed, memory usage, and accuracy of these models. We leverage these improvements to develop a new self-attention model family, HaloNets, which reach state-of-the-art accuracies on the parameter-limited setting of the ImageNet classification benchmark. In preliminary transfer learning experiments, we find that HaloNet models outperform much larger models and have better inference performance. On harder tasks such as object detection and instance segmentation, our simple local self-attention and convolutional hybrids show improvements over very strong baselines. These results mark another step in demonstrating the efficacy of self-attention models on settings traditionally dominated by convolutional models.
Convolutions are a fundamental building block of modern computer vision systems. Recent approaches have argued for going beyond convolutions in order to capture long-range dependencies. These efforts focus on augmenting convolutional models with content-based interactions, such as self-attention and non-local means, to achieve gains on a number of vision tasks. The natural question that arises is whether attention can be a stand-alone primitive for vision models instead of serving as just an augmentation on top of convolutions. In developing and testing a pure self-attention vision model, we verify that self-attention can indeed be an effective stand-alone layer. A simple procedure of replacing all instances of spatial convolutions with a form of self-attention applied to a ResNet model produces a fully self-attentional model that outperforms the baseline on ImageNet classification with 12% fewer FLOPS and 29% fewer parameters. On COCO object detection, a pure self-attention model matches the mAP of a baseline RetinaNet while having 39% fewer FLOPS and 34% fewer parameters. Detailed ablation studies demonstrate that self-attention is especially impactful when used in later layers. These results establish that stand-alone self-attention is an important addition to the vision practitioner's toolbox. Code for this project is made available.
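A minimal sketch of the kind of local self-attention that can stand in for a k x k spatial convolution, assuming a PyTorch setting; it is single-headed, omits the positional encodings used in the paper, and its names and tensor layout are illustrative rather than the reference implementation.

```python
# Local self-attention: each position attends only to its k x k neighborhood.
import torch
import torch.nn as nn
import torch.nn.functional as F


class LocalSelfAttention2d(nn.Module):
    def __init__(self, channels, kernel_size=7):
        super().__init__()
        self.k = kernel_size
        self.q = nn.Conv2d(channels, channels, 1)
        self.kv = nn.Conv2d(channels, 2 * channels, 1)

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.q(x)                                              # (B, C, H, W)
        k, v = self.kv(x).chunk(2, dim=1)
        # Gather the k*k neighborhood around every position for keys and values.
        k = F.unfold(k, self.k, padding=self.k // 2).view(b, c, self.k * self.k, h * w)
        v = F.unfold(v, self.k, padding=self.k // 2).view(b, c, self.k * self.k, h * w)
        q = q.view(b, c, 1, h * w)
        attn = (q * k).sum(dim=1, keepdim=True) / c ** 0.5         # (B, 1, k*k, H*W)
        attn = attn.softmax(dim=2)
        out = (attn * v).sum(dim=2)                                # (B, C, H*W)
        return out.view(b, c, h, w)


y = LocalSelfAttention2d(32)(torch.randn(2, 32, 16, 16))
print(y.shape)  # torch.Size([2, 32, 16, 16])
```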
Vision Transformers (ViT) serve as powerful vision models. Unlike convolutional neural networks, which dominated vision research in previous years, Vision Transformers enjoy the ability to capture long-range dependencies in the data. Nonetheless, the self-attention mechanism, an integral part of any Transformer architecture, suffers from high latency and inefficient memory utilization, making it less suitable for high-resolution input images. To alleviate these shortcomings, hierarchical vision models apply self-attention locally on non-interleaving windows. This relaxation reduces the complexity to be linear in the input size; however, it limits cross-window interactions, hurting model performance. In this paper, we propose a new shift-invariant local attention layer, called query and attend (QnA), which aggregates the input locally, in an overlapping manner, much like convolutions. The key idea behind QnA is to introduce learned queries, which allow for a fast and efficient implementation. We verify the effectiveness of our layer by incorporating it into a hierarchical vision Transformer model. We show improvements in speed and memory complexity while achieving accuracy comparable to state-of-the-art models. Finally, our layer scales especially well with window size, requiring up to 10x less memory while being faster than existing methods.
Convolution and self-attention are two powerful techniques for representation learning, and they are usually considered as two peer approaches that are distinct from each other. In this paper, we show that there exists a strong underlying relation between them, in the sense that the bulk of the computation of these two paradigms is in fact done with the same operation. Specifically, we first show that a traditional convolution with kernel size k x k can be decomposed into k^2 individual 1x1 convolutions, followed by shift and summation operations. Then, we interpret the projections of queries, keys, and values in the self-attention module as multiple 1x1 convolutions, followed by the computation of attention weights and the aggregation of the values. Therefore, the first stage of both modules comprises a similar operation. More importantly, the first stage contributes the dominant computational complexity (quadratic in the channel size) compared with the second stage. This observation naturally leads to an elegant integration of these two seemingly distinct paradigms, i.e., a mixed model that enjoys the benefits of both self-attention and convolution (ACmix), while having minimal computational overhead compared to the pure convolution or self-attention counterparts. Extensive experiments show that our model achieves consistently improved results over competitive baselines on image recognition and downstream tasks. Code and pre-trained models will be released at https://github.com/panxuran/acmix and https://gitee.com/mindspore/models.
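The decomposition stated above can be checked numerically. The snippet below (plain PyTorch, not the ACmix code) expresses a k x k convolution as k^2 separate 1x1 convolutions whose outputs are shifted by each tap's offset from the kernel centre and then summed.

```python
# Numerical check: a k x k convolution == k^2 shifted-and-summed 1x1 convolutions.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
k, c_in, c_out = 3, 4, 5
x = torch.randn(1, c_in, 8, 8)
weight = torch.randn(c_out, c_in, k, k)

ref = F.conv2d(x, weight, padding=k // 2)   # ordinary k x k convolution, "same" padding


def shift(z, dy, dx):
    """s[y, x] = z[y + dy, x + dx], with zeros where the source falls outside."""
    h, w = z.shape[-2:]
    s = torch.zeros_like(z)
    s[..., max(0, -dy):h - max(0, dy), max(0, -dx):w - max(0, dx)] = (
        z[..., max(0, dy):h - max(0, -dy), max(0, dx):w - max(0, -dx)])
    return s


out = torch.zeros_like(ref)
for i in range(k):
    for j in range(k):
        tap = weight[:, :, i, j].reshape(c_out, c_in, 1, 1)   # one 1x1 convolution per tap
        out += shift(F.conv2d(x, tap), i - k // 2, j - k // 2)

print(torch.allclose(ref, out, atol=1e-5))  # True
```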
While the Transformer architecture has become the de-facto standard for natural language processing tasks, its applications to computer vision remain limited. In vision, attention is either applied in conjunction with convolutional networks, or used to replace certain components of convolutional networks while keeping their overall structure in place. We show that this reliance on CNNs is not necessary and a pure transformer applied directly to sequences of image patches can perform very well on image classification tasks. When pre-trained on large amounts of data and transferred to multiple mid-sized or small image recognition benchmarks (ImageNet, CIFAR-100, VTAB, etc.), Vision Transformer (ViT) attains excellent results compared to state-of-the-art convolutional networks while requiring substantially fewer computational resources to train.
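A toy sketch of the patch-to-sequence pipeline that ViT builds on, assuming PyTorch; the sizes, depth, and use of nn.TransformerEncoder are illustrative simplifications rather than the ViT-B/16 configuration.

```python
# Patches -> linear embedding -> class token + position embeddings -> Transformer encoder.
import torch
import torch.nn as nn


class TinyViT(nn.Module):
    def __init__(self, image_size=32, patch=8, dim=64, depth=2, heads=4, num_classes=10):
        super().__init__()
        n = (image_size // patch) ** 2
        self.to_patches = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)  # patchify + embed
        self.cls = nn.Parameter(torch.zeros(1, 1, dim))
        self.pos = nn.Parameter(torch.zeros(1, n + 1, dim))
        layer = nn.TransformerEncoderLayer(dim, heads, dim * 4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, depth)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, img):
        x = self.to_patches(img).flatten(2).transpose(1, 2)           # (B, N, dim)
        x = torch.cat([self.cls.expand(x.size(0), -1, -1), x], dim=1) + self.pos
        x = self.encoder(x)
        return self.head(x[:, 0])                                     # classify from the class token


logits = TinyViT()(torch.randn(2, 3, 32, 32))
print(logits.shape)  # torch.Size([2, 10])
```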
We present BoTNet, a conceptually simple yet powerful backbone architecture that incorporates self-attention for multiple computer vision tasks including image classification, object detection and instance segmentation. By just replacing the spatial convolutions with global self-attention in the final three bottleneck blocks of a ResNet and no other changes, our approach improves upon the baselines significantly on instance segmentation and object detection while also reducing the parameters, with minimal overhead in latency. Through the design of BoTNet, we also point out how ResNet bottleneck blocks with self-attention can be viewed as Transformer blocks. Without any bells and whistles, BoTNet achieves 44.4% Mask AP and 49.7% Box AP on the COCO Instance Segmentation benchmark using the Mask R-CNN framework; surpassing the previous best published single model and single scale results of ResNeSt [67] evaluated on the COCO validation set. Finally, we present a simple adaptation of the BoTNet design for image classification, resulting in models that achieve a strong performance of 84.7% top-1 accuracy on the ImageNet benchmark while being up to 1.64x faster in "compute" time than the popular EfficientNet models on TPU-v3 hardware. We hope our simple and effective approach will serve as a strong baseline for future research in self-attention models for vision.
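A sketch of the substitution described above, assuming PyTorch: a bottleneck block whose 3x3 spatial convolution is replaced with global multi-head self-attention over the H x W positions. BoTNet's relative position encodings and normalization details are omitted, and the module layout is an illustrative simplification.

```python
# ResNet-style bottleneck with the spatial convolution swapped for global self-attention.
import torch
import torch.nn as nn


class BottleneckMHSA(nn.Module):
    def __init__(self, channels, bottleneck=64, heads=4):
        super().__init__()
        self.reduce = nn.Conv2d(channels, bottleneck, 1)
        self.mhsa = nn.MultiheadAttention(bottleneck, heads, batch_first=True)
        self.expand = nn.Conv2d(bottleneck, channels, 1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        b, _, h, w = x.shape
        y = self.act(self.reduce(x))
        t = y.flatten(2).transpose(1, 2)            # (B, H*W, bottleneck) token sequence
        t, _ = self.mhsa(t, t, t)                   # global self-attention replaces the 3x3 conv
        y = t.transpose(1, 2).reshape(b, -1, h, w)
        return self.act(x + self.expand(y))         # residual connection as in ResNet


out = BottleneckMHSA(256)(torch.randn(2, 256, 14, 14))
print(out.shape)  # torch.Size([2, 256, 14, 14])
```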
Transformers have recently gained significant attention in the computer vision community. However, the lack of scalability of self-attention mechanisms with respect to image size has limited their wide adoption in state-of-the-art vision backbones. In this paper, we introduce an efficient and scalable attention model that we call multi-axis attention, which consists of two aspects: blocked local attention and dilated global attention. These design choices allow global-local spatial interactions on arbitrary input resolutions with only linear complexity. We also present a new architectural element by effectively blending our proposed attention model with convolutions, and accordingly propose a simple hierarchical vision backbone, dubbed MaxViT, by simply repeating the basic building block over multiple stages. Notably, MaxViT is able to "see" globally throughout the entire network, even in the earlier, high-resolution stages. We demonstrate the effectiveness of our model on a broad spectrum of vision tasks. On image classification, MaxViT achieves state-of-the-art performance under various settings: without extra data, MaxViT attains 86.5% ImageNet-1K top-1 accuracy; with ImageNet-21K pre-training, our model achieves 88.7% top-1 accuracy. For downstream tasks, MaxViT as a backbone delivers favorable performance on object detection as well as visual aesthetic assessment. We also show that our proposed model expresses strong generative modeling capability on ImageNet, demonstrating the superior potential of MaxViT blocks as a universal vision module. The source code and trained models will be available at https://github.com/google-research/maxvit.
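The two attention axes described above reduce to two reshapes, sketched below in PyTorch: "block" attention groups pixels into non-overlapping P x P windows, while "grid" attention groups a dilated G x G grid of positions that spans the whole image. Window and grid sizes here are illustrative, and the MBConv and relative-attention components of the actual block are omitted.

```python
# Block (local window) vs. grid (dilated, image-spanning) token grouping for attention.
import torch
import torch.nn as nn


def block_partition(x, p):
    """(B, H, W, C) -> (B * H/p * W/p, p*p, C): local windows."""
    b, h, w, c = x.shape
    x = x.view(b, h // p, p, w // p, p, c).permute(0, 1, 3, 2, 4, 5)
    return x.reshape(-1, p * p, c)


def grid_partition(x, g):
    """(B, H, W, C) -> (B * H/g * W/g, g*g, C): dilated grids spanning the image."""
    b, h, w, c = x.shape
    x = x.view(b, g, h // g, g, w // g, c).permute(0, 2, 4, 1, 3, 5)
    return x.reshape(-1, g * g, c)


attn = nn.MultiheadAttention(32, 4, batch_first=True)
x = torch.randn(2, 16, 16, 32)

local_tokens = block_partition(x, p=4)     # attention within each 4x4 window
sparse_tokens = grid_partition(x, g=4)     # attention across a 4x4 grid spanning the image
for tokens in (local_tokens, sparse_tokens):
    out, _ = attn(tokens, tokens, tokens)
    print(out.shape)                       # torch.Size([32, 16, 32]) in both cases
```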
We design a family of image classification architectures that optimize the trade-off between accuracy and efficiency in a high-speed regime. Our work exploits recent findings in attention-based architectures, which are competitive on highly parallel processing hardware. We revisit principles from the extensive literature on convolutional neural networks to apply them to transformers, in particular activation maps with decreasing resolutions. We also introduce the attention bias, a new way to integrate positional information in vision transformers. As a result, we propose LeViT: a hybrid neural network for fast inference image classification. We consider different measures of efficiency on different hardware platforms, so as to best reflect a wide range of application scenarios. Our extensive experiments empirically validate our technical choices and show they are suitable to most architectures. Overall, LeViT significantly outperforms existing convnets and vision transformers with respect to the speed/accuracy tradeoff. For example, at 80% ImageNet top-1 accuracy, LeViT is 5 times faster than EfficientNet on CPU. We release the code at https://github.com/facebookresearch/LeViT.
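A hedged sketch of the attention-bias idea, assuming PyTorch: a learned per-head scalar, indexed by the relative offset between query and key positions, is added to the attention logits so that positional information enters without explicit position embeddings. The signed-offset indexing and shapes below are illustrative choices, not necessarily LeViT's exact parameterization.

```python
# Multi-head attention with a learned positional bias added to the logits.
import torch
import torch.nn as nn


class BiasedAttention(nn.Module):
    def __init__(self, dim, heads, h, w):
        super().__init__()
        self.heads, self.scale = heads, (dim // heads) ** -0.5
        self.qkv = nn.Linear(dim, dim * 3)
        self.proj = nn.Linear(dim, dim)
        # One bias per head and per relative offset, shared across absolute positions.
        self.bias = nn.Parameter(torch.zeros(heads, (2 * h - 1) * (2 * w - 1)))
        ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
        pos = torch.stack([ys.flatten(), xs.flatten()])                  # (2, N)
        rel = pos[:, :, None] - pos[:, None, :]                          # (2, N, N) offsets
        idx = (rel[0] + h - 1) * (2 * w - 1) + (rel[1] + w - 1)          # offset -> table index
        self.register_buffer("idx", idx)

    def forward(self, x):                                                # x: (B, N, dim)
        b, n, d = x.shape
        q, k, v = self.qkv(x).view(b, n, 3, self.heads, -1).permute(2, 0, 3, 1, 4)
        logits = (q @ k.transpose(-2, -1)) * self.scale                  # (B, heads, N, N)
        logits = logits + self.bias[:, self.idx]                         # add positional bias
        return self.proj((logits.softmax(-1) @ v).transpose(1, 2).reshape(b, n, d))


out = BiasedAttention(64, 4, 7, 7)(torch.randn(2, 49, 64))
print(out.shape)  # torch.Size([2, 49, 64])
```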
In this paper, we study Multiscale Vision Transformers (MViT) as a unified architecture for image and video classification, as well as object detection. We present an improved version of MViT that incorporates decomposed relative positional embeddings and residual pooling connections. We instantiate this architecture in five sizes and evaluate it on ImageNet classification, COCO detection, and Kinetics video recognition, where it outperforms prior work. We further compare MViT's pooling attention to window attention mechanisms, where it outperforms the latter in accuracy/compute. Without bells and whistles, MViT has state-of-the-art performance in three domains: 88.8% accuracy on ImageNet classification, 56.1 box AP on COCO object detection, and 86.1% on Kinetics-400 video classification. Code and models will be made publicly available.
Transformers have attracted increasing interest in computer vision, but they still fall behind state-of-the-art convolutional networks. In this work, we show that while Transformers tend to have larger model capacity, their generalization can be worse than convolutional networks due to the lack of the right inductive bias. To effectively combine the strengths from both architectures, we present CoAtNets (pronounced "coat" nets), a family of hybrid models built from two key insights: (1) depthwise Convolution and self-Attention can be naturally unified via simple relative attention; (2) vertically stacking convolution layers and attention layers in a principled way is surprisingly effective in improving generalization, capacity and efficiency. Experiments show that our CoAtNets achieve state-of-the-art performance under different resource constraints across various datasets: Without extra data, CoAtNet achieves 86.0% ImageNet top-1 accuracy; When pre-trained with 13M images from ImageNet-21K, our CoAtNet achieves 88.56% top-1 accuracy, matching ViT-huge pre-trained with 300M images from JFT-300M while using 23x less data; Notably, when we further scale up CoAtNet with JFT-3B, it achieves 90.88% top-1 accuracy on ImageNet, establishing a new state-of-the-art result. (The initial projection stage can be seen as an aggressive down-sampling convolutional stem.)
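As a hedged sketch of the "simple relative attention" mentioned above (paraphrasing the formulation; the exact notation and normalization may differ from the paper), the softmax logit at positions i and j sums a content term, as in self-attention, and a translation-invariant positional term, as in a depthwise convolution kernel:

```latex
% Pre-normalization relative attention (paraphrased): a static, convolution-like
% term w indexed by the relative offset i - j is added to the content logit.
y_i = \sum_{j \in \mathcal{G}}
      \frac{\exp\!\left(x_i^{\top} x_j + w_{i-j}\right)}
           {\sum_{k \in \mathcal{G}} \exp\!\left(x_i^{\top} x_k + w_{i-k}\right)} \, x_j
```

Under this reading, dropping the content term and restricting the sum to a local window recovers a softmax-normalized, depthwise-convolution-like operator, while dropping the positional term recovers ordinary self-attention, which is the sense in which the two are unified.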
We present in this paper a new architecture, named Convolutional vision Transformer (CvT), that improves Vision Transformer (ViT) in performance and efficiency by introducing convolutions into ViT to yield the best of both designs. This is accomplished through two primary modifications: a hierarchy of Transformers containing a new convolutional token embedding, and a convolutional Transformer block leveraging a convolutional projection. These changes introduce desirable properties of convolutional neural networks (CNNs) to the ViT architecture (i.e. shift, scale, and distortion invariance) while maintaining the merits of Transformers (i.e. dynamic attention, global context, and better generalization). We validate CvT by conducting extensive experiments, showing that this approach achieves state-of-the-art performance over other Vision Transformers and ResNets on ImageNet-1k, with fewer parameters and lower FLOPs. In addition, performance gains are maintained when pretrained on larger datasets (e.g. ImageNet-22k) and fine-tuned to downstream tasks. Pretrained on ImageNet-22k, our CvT-W24 obtains a top-1 accuracy of 87.7% on the ImageNet-1k val set. Finally, our results show that the positional encoding, a crucial component in existing Vision Transformers, can be safely removed in our model, simplifying the design for higher resolution vision tasks. Code will be released at https://github.com/leoxiaobin/CvT.
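A simplified sketch of a convolutional projection of the kind described above, assuming PyTorch: tokens are folded back into a 2D map, depthwise convolutions produce the query and (optionally strided) key/value inputs, and standard attention runs on the result. The exact CvT projection (depthwise separable convolution with batch normalization) is simplified here.

```python
# Convolutional projection before attention; keys/values can be strided to save compute.
import torch
import torch.nn as nn


class ConvProjectionAttention(nn.Module):
    def __init__(self, dim, heads=4, kv_stride=2):
        super().__init__()
        self.q_proj = nn.Conv2d(dim, dim, 3, padding=1, groups=dim)                    # depthwise
        self.kv_proj = nn.Conv2d(dim, dim, 3, padding=1, stride=kv_stride, groups=dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x, h, w):                          # x: (B, N, dim) with N == h * w
        b, n, d = x.shape
        fmap = x.transpose(1, 2).reshape(b, d, h, w)     # tokens -> 2D feature map
        q = self.q_proj(fmap).flatten(2).transpose(1, 2)
        kv = self.kv_proj(fmap).flatten(2).transpose(1, 2)   # fewer tokens when strided
        out, _ = self.attn(q, kv, kv)
        return out                                       # (B, N, dim)


tokens = torch.randn(2, 14 * 14, 64)
out = ConvProjectionAttention(64)(tokens, 14, 14)
print(out.shape)  # torch.Size([2, 196, 64])
```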
We present Neighborhood Attention Transformer (NAT), an efficient, accurate, and scalable hierarchical Transformer that works well on both image classification and downstream vision tasks. It is built upon Neighborhood Attention (NA), a simple and flexible attention mechanism that localizes each query's receptive field to its nearest neighboring pixels. NA is a localization of self-attention and approaches it as the receptive field size increases. It is also equivalent to Swin Transformer's shifted window attention in FLOPs and memory usage for the same receptive field size, while being less constrained. Furthermore, NA includes local inductive biases, which eliminate the need for extra operations such as pixel shifts. Experimental results on NAT are competitive; NAT-Tiny, with only 4.3 GFLOPs and 28M parameters on ImageNet, reaches 51.4% mAP on MS-COCO and 48.4% mIoU on ADE20K. We open-source our checkpoints, code, and CUDA kernel at https://github.com/SHI-Labs/Neighborhood-Attention-Transformer.
Attention-based neural networks, such as Transformers, have become ubiquitous in numerous applications, including computer vision, natural language processing, and time-series analysis. In all kinds of attention networks, the attention maps are crucial as they encode semantic dependencies between input tokens. However, most existing attention networks perform modeling or reasoning based on representations, wherein the attention maps of different layers are learned separately without explicit interactions. In this paper, we propose a novel and generic evolving attention mechanism, which directly models the evolution of inter-token relationships through a chain of residual convolutional modules. The major motivations are twofold. On the one hand, the attention maps in different layers share transferable knowledge, thus adding a residual connection can facilitate the information flow of inter-token relationships across layers. On the other hand, there is naturally an evolutionary trend among attention maps at different abstraction levels, so it is beneficial to exploit a dedicated convolution-based module to capture this process. Equipped with the proposed mechanism, the convolution-enhanced evolving attention networks achieve superior performance in various applications, including time-series representation, natural language understanding, machine translation, and image classification. Especially on time-series representation tasks, Evolving Attention-enhanced Dilated Convolutional (EA-DC-) Transformer outperforms state-of-the-art models significantly, achieving an average of 17% improvement compared to the best SOTA. To the best of our knowledge, this is the first work that explicitly models the layer-wise evolution of attention maps. Our implementation is available at https://github.com/pkuyym/EvolvingAttention
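A hedged reading of the mechanism described above, sketched in PyTorch: each layer's attention logits receive a residual contribution from the previous layer's attention maps after a small convolution over the (heads x N x N) map. The mixing weight and module layout are illustrative assumptions, not the paper's implementation.

```python
# Attention whose maps evolve across layers via a residual convolutional refinement.
import torch
import torch.nn as nn


class EvolvingAttention(nn.Module):
    def __init__(self, dim, heads, alpha=0.5):
        super().__init__()
        self.heads, self.scale, self.alpha = heads, (dim // heads) ** -0.5, alpha
        self.qkv = nn.Linear(dim, dim * 3)
        self.evolve = nn.Conv2d(heads, heads, kernel_size=3, padding=1)  # conv over attention maps

    def forward(self, x, prev_maps=None):                # x: (B, N, dim)
        b, n, d = x.shape
        q, k, v = self.qkv(x).view(b, n, 3, self.heads, -1).permute(2, 0, 3, 1, 4)
        logits = (q @ k.transpose(-2, -1)) * self.scale  # (B, heads, N, N)
        if prev_maps is not None:
            logits = logits + self.alpha * self.evolve(prev_maps)   # residual map evolution
        maps = logits.softmax(-1)
        out = (maps @ v).transpose(1, 2).reshape(b, n, d)
        return out, maps                                 # pass maps on to the next layer


layer1, layer2 = EvolvingAttention(64, 4), EvolvingAttention(64, 4)
x = torch.randn(2, 49, 64)
y, maps = layer1(x)
y, maps = layer2(y, prev_maps=maps)
print(y.shape, maps.shape)  # torch.Size([2, 49, 64]) torch.Size([2, 4, 49, 49])
```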
Recent progress in Vision Transformers exhibits great success on various tasks, driven by a new spatial modeling mechanism based on dot-product self-attention. In this paper, we show that the key ingredients behind Vision Transformers, namely input-adaptive, long-range, and high-order spatial interactions, can also be efficiently implemented with a convolution-based framework. We present Recursive Gated Convolution ($g^n$Conv), which performs high-order spatial interactions with gated convolutions and recursive designs. The new operation is highly flexible and customizable: it is compatible with various variants of convolution and extends the second-order interactions in self-attention to arbitrary orders without introducing significant extra computation. $g^n$Conv can serve as a plug-and-play module to improve various Vision Transformers and convolution-based models. Based on this operation, we construct a new family of generic vision backbones named HorNet. Extensive experiments on ImageNet classification, COCO object detection, and ADE20K semantic segmentation show that HorNet outperforms Swin Transformers by a significant margin with similar overall architectures and training configurations. HorNet also shows favorable scalability to more training data and larger model sizes. Apart from its effectiveness in visual encoders, we also show that $g^n$Conv can be applied to task-specific decoders and consistently improves dense prediction performance with less computation. Our results demonstrate that $g^n$Conv can be a new basic module for visual modeling that effectively combines the merits of both Vision Transformers and CNNs. Code is available at https://github.com/raoyongming/hornet.
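A heavily simplified sketch of the recursive gating idea behind $g^n$Conv, in PyTorch: features are split, a depthwise convolution supplies spatial context, and element-wise gating is applied n times so each output reflects order-n interactions. The channel bookkeeping (equal widths, fixed 7x7 depthwise kernel) is deliberately simplified relative to HorNet.

```python
# Recursive gated convolution, simplified: repeated element-wise gating with depthwise-conv features.
import torch
import torch.nn as nn


class SimpleGnConv(nn.Module):
    def __init__(self, dim, order=3):
        super().__init__()
        self.order = order
        self.proj_in = nn.Conv2d(dim, dim * (order + 1), 1)            # p0 and q_1..q_n
        self.dwconv = nn.Conv2d(dim * order, dim * order, 7, padding=3,
                                groups=dim * order)                    # depthwise spatial mixing
        self.proj_out = nn.Conv2d(dim, dim, 1)

    def forward(self, x):
        p, *qs = self.proj_in(x).chunk(self.order + 1, dim=1)
        qs = self.dwconv(torch.cat(qs, dim=1)).chunk(self.order, dim=1)
        for q in qs:                     # each gating step raises the interaction order by one
            p = p * q
        return self.proj_out(p)


y = SimpleGnConv(32)(torch.randn(2, 32, 16, 16))
print(y.shape)  # torch.Size([2, 32, 16, 16])
```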
Recent trends of incorporating attention mechanisms in vision have led researchers to reconsider the supremacy of convolutional layers as a primary building block. Beyond helping CNNs to handle long-range dependencies, Ramachandran et al. (2019) showed that attention can completely replace convolution and achieve state-of-the-art performance on vision tasks. This raises the question: do learned attention layers operate similarly to convolutional layers? This work provides evidence that attention layers can perform convolution and, indeed, they often learn to do so in practice. Specifically, we prove that a multi-head self-attention layer with sufficient number of heads is at least as expressive as any convolutional layer. Our numerical experiments then show that self-attention layers attend to pixel-grid patterns similarly to CNN layers, corroborating our analysis. Our code is publicly available.
The Non-Local Network (NLNet) presents a pioneering approach for capturing long-range dependencies, via aggregating query-specific global context to each query position. However, through a rigorous empirical analysis, we have found that the global contexts modeled by non-local network are almost the same for different query positions within an image. In this paper, we take advantage of this finding to create a simplified network based on a query-independent formulation, which maintains the accuracy of NLNet but with significantly less computation. We further observe that this simplified design shares similar structure with Squeeze-Excitation Network (SENet). Hence we unify them into a three-step general framework for global context modeling. Within the general framework, we design a better instantiation, called the global context (GC) block, which is lightweight and can effectively model the global context. The lightweight property allows us to apply it for multiple layers in a backbone network to construct a global context network (GCNet), which generally outperforms both simplified NLNet and SENet on major benchmarks for various recognition tasks. The code and configurations are released at https://github.com/xvjiarui/GCNet.
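A sketch of the three-step framework instantiated by the GC block, assuming PyTorch: (1) query-independent attention pooling collapses the feature map into one global context vector, (2) a bottleneck transform refines it, and (3) the result is broadcast-added to every position. Layer sizes and the exact normalization placement are illustrative.

```python
# Global context block: attention pooling -> bottleneck transform -> broadcast fusion.
import torch
import torch.nn as nn


class GCBlock(nn.Module):
    def __init__(self, channels, ratio=0.25):
        super().__init__()
        hidden = int(channels * ratio)
        self.mask = nn.Conv2d(channels, 1, 1)              # (1) attention weights over positions
        self.transform = nn.Sequential(                    # (2) bottleneck transform
            nn.Conv2d(channels, hidden, 1),
            nn.LayerNorm([hidden, 1, 1]),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, channels, 1),
        )

    def forward(self, x):
        b, c, h, w = x.shape
        weights = self.mask(x).view(b, 1, h * w).softmax(dim=-1)            # same for every query
        context = torch.bmm(x.view(b, c, h * w), weights.transpose(1, 2))   # (B, C, 1)
        context = context.view(b, c, 1, 1)
        return x + self.transform(context)                 # (3) broadcast fusion


y = GCBlock(64)(torch.randn(2, 64, 14, 14))
print(y.shape)  # torch.Size([2, 64, 14, 14])
```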
Attention mechanisms have raised significant interest in the research community because they promise to improve the performance of neural network architectures. However, for any given problem, we still lack a principled approach for choosing the specific mechanism and hyperparameters that lead to guaranteed improvements. More recently, self-attention has been proposed and widely used in Transformer-like architectures, leading to significant breakthroughs in some applications. In this work, we focus on two forms of attention mechanisms: attention modules and self-attention. Attention modules are used to reweight the features of each layer's input tensor; different modules have different ways of performing this reweighting in fully connected or convolutional layers. The attention models studied are fully modular, and in this work they are used with the popular ResNet architecture. Self-attention, originally proposed in the field of natural language processing, makes it possible to relate all the items in an input sequence. Self-attention is becoming increasingly popular in computer vision, where it is sometimes combined with convolutional layers, although some recent architectures do away with convolutions entirely. In this work, we study and perform an objective comparison of a number of different attention mechanisms on a specific computer vision task, the classification of samples from the widely used skin cancer MNIST dataset. The results show that attention modules do sometimes improve the performance of convolutional neural network architectures, but also that this improvement, although noticeable and statistically significant, is not consistent across different settings. Results obtained with self-attention mechanisms, on the other hand, show consistent and significant improvements, achieving the best results even in architectures with a reduced number of parameters.
Self-attention has become prevalent in computer vision models. Inspired by fully connected Conditional Random Fields (CRFs), we decompose self-attention into local and context terms. They correspond to the unary and binary terms in a CRF and are implemented by attention mechanisms with projection matrices. We observe that the unary terms only make small contributions to the outputs, and meanwhile standard CNNs that rely solely on the unary terms achieve good performance on a variety of tasks. Therefore, we propose Locally Enhanced Self-Attention (LESA), which enhances the unary term by incorporating it with convolutions and utilizes a fusion module to dynamically couple the unary and binary operations. In our experiments, we replace the self-attention modules with LESA. Results on ImageNet and COCO show the superiority of LESA over convolution and self-attention baselines for the tasks of image recognition, object detection, and instance segmentation. The code is publicly available.
Self-attention is powerful at modeling long-range dependencies, but it is weak at local, finer-level feature learning. The performance of local self-attention (LSA) is just on par with convolution and inferior to dynamic filters, which puzzles researchers as to whether to use LSA or its counterparts, which one is better, and what makes LSA mediocre. To clarify these questions, we comprehensively investigate LSA and its counterparts from two sides: channel setting and spatial processing. We find that the devil lies in the generation and application of spatial attention, where relative position embeddings and the neighboring filter application are key factors. Based on these findings, we propose enhanced local self-attention (ELSA) with Hadamard attention and the ghost head. Hadamard attention introduces the Hadamard product to efficiently generate attention in the neighboring case while maintaining high-order mapping. The ghost head combines attention maps with static matrices to increase channel capacity. Experiments demonstrate the effectiveness of ELSA. Without any architecture or hyperparameter modification, replacing LSA with ELSA boosts Swin Transformer by up to +1.4 top-1 accuracy. ELSA also consistently benefits VOLO from D1 to D5, where ELSA-VOLO-D5 achieves 87.2 on ImageNet-1K without extra training images. In addition, we evaluate ELSA on downstream tasks. ELSA significantly improves the baselines by up to +1.9 box AP / +1.3 mask AP on COCO, and by up to +1.9 mIoU on ADE20K. Code is available at https://github.com/damo-cv/elsa.