Transformers have shown impressive performance on various natural language processing and computer vision tasks, owing to their capability of modeling long-range dependencies. Recent progress has demonstrated that combining such Transformers with CNN-based semantic image segmentation models is very promising. However, how a pure Transformer approach can achieve image segmentation has not yet been well studied. In this work, we explore a novel framework for semantic image segmentation, an encoder-decoder based Fully Transformer Network (FTN). Specifically, we first propose a Pyramid Group Transformer (PGT) as the encoder, which progressively learns hierarchical features while reducing the computational complexity of the standard Vision Transformer (ViT). We then propose a Feature Pyramid Transformer (FPT) to fuse semantic-level and spatial-level information from multiple levels of the PGT encoder for semantic image segmentation. Surprisingly, this simple baseline achieves better results on multiple challenging semantic segmentation and face parsing benchmarks, including PASCAL Context, ADE20K, COCO-Stuff, and CelebAMask-HQ. The source code will be released at https://github.com/BR-IDL/PaddleViT.
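The abstract leaves the internals of PGT and FPT unspecified, but the overall pattern it describes, a pyramid transformer encoder feeding a multi-level fusion decoder for per-pixel prediction, can be made concrete. Below is a minimal PyTorch sketch under assumed hyperparameters (strided patch embedding per stage, sum-fusion of upsampled levels); the module names and widths are illustrative, not the paper's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Stage(nn.Module):
    """One pyramid stage: strided patch embedding + transformer blocks."""
    def __init__(self, c_in, c_out, depth=2, heads=4):
        super().__init__()
        self.embed = nn.Conv2d(c_in, c_out, kernel_size=2, stride=2)  # halve H, W
        layer = nn.TransformerEncoderLayer(c_out, heads, 4 * c_out, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, depth)

    def forward(self, x):
        x = self.embed(x)                      # (B, C, H/2, W/2)
        b, c, h, w = x.shape
        t = x.flatten(2).transpose(1, 2)       # (B, HW, C) token sequence
        t = self.blocks(t)
        return t.transpose(1, 2).reshape(b, c, h, w)

class ToyFullyTransformerSeg(nn.Module):
    """Pyramid transformer encoder + multi-level fusion decoder (illustrative)."""
    def __init__(self, num_classes=150, widths=(64, 128, 256, 512)):
        super().__init__()
        chans = [3] + list(widths)
        self.stages = nn.ModuleList(Stage(chans[i], chans[i + 1]) for i in range(4))
        self.lateral = nn.ModuleList(nn.Conv2d(w, 256, 1) for w in widths)
        self.head = nn.Conv2d(256, num_classes, 1)

    def forward(self, x):
        feats = []
        for stage in self.stages:
            x = stage(x)
            feats.append(x)
        # fuse: upsample every level to the finest resolution and sum
        size = feats[0].shape[-2:]
        fused = sum(F.interpolate(l(f), size=size, mode="bilinear", align_corners=False)
                    for l, f in zip(self.lateral, feats))
        return self.head(fused)                # (B, num_classes, H/2, W/2)

logits = ToyFullyTransformerSeg()(torch.randn(1, 3, 64, 64))
print(logits.shape)  # torch.Size([1, 150, 32, 32])
```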
Scene segmentation in images is a fundamental yet challenging problem in visual content understanding: learning a model that assigns every image pixel to a categorical label. One of the challenges of this learning task is to account for spatial and semantic relationships when building descriptive feature representations, so learning feature maps at multiple scales is a common practice in scene segmentation. In this paper, we explore the effective use of self-attention within multi-scale image windows to learn descriptive visual features, and then propose three different strategies to aggregate these feature maps into the decoded representation for dense prediction. Our design is based on the recently proposed Swin Transformer, which entirely discards convolution operations. With simple yet effective multi-scale feature learning and aggregation, our model achieves very promising performance on four public scene segmentation datasets: PASCAL VOC2012, COCO-Stuff 10K, ADE20K, and Cityscapes.
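As a concrete illustration of self-attention computed within windows of several sizes and then aggregated, here is a hedged PyTorch sketch. The paper proposes three aggregation strategies that the abstract does not detail; the sketch uses simple averaging, and the window sizes and dimensions are assumptions.

```python
import torch
import torch.nn as nn

def window_partition(x, s):
    """(B, H, W, C) -> (B*nW, s*s, C) non-overlapping s x s windows."""
    b, h, w, c = x.shape
    x = x.view(b, h // s, s, w // s, s, c).permute(0, 1, 3, 2, 4, 5)
    return x.reshape(-1, s * s, c)

def window_reverse(wins, s, b, h, w):
    c = wins.shape[-1]
    x = wins.view(b, h // s, w // s, s, s, c).permute(0, 1, 3, 2, 4, 5)
    return x.reshape(b, h, w, c)

class MultiScaleWindowAttention(nn.Module):
    """Self-attention in windows of several sizes; the outputs are averaged."""
    def __init__(self, dim, heads=4, sizes=(4, 8)):
        super().__init__()
        self.sizes = sizes
        self.attn = nn.ModuleList(
            nn.MultiheadAttention(dim, heads, batch_first=True) for _ in sizes)

    def forward(self, x):                      # x: (B, H, W, C)
        b, h, w, _ = x.shape
        outs = []
        for s, attn in zip(self.sizes, self.attn):
            wins = window_partition(x, s)
            y, _ = attn(wins, wins, wins)      # attention restricted to each window
            outs.append(window_reverse(y, s, b, h, w))
        return torch.stack(outs).mean(0)       # one simple aggregation strategy

y = MultiScaleWindowAttention(64)(torch.randn(2, 16, 16, 64))
print(y.shape)  # torch.Size([2, 16, 16, 64])
```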
Visual representation learning is the key to solving various vision problems. Relying on the seminal grid structure prior, convolutional neural networks (CNNs) have been the de facto standard architecture of most deep vision models. For instance, classical semantic segmentation methods often adopt a fully-convolutional network (FCN) with an encoder-decoder architecture. The encoder progressively reduces the spatial resolution and learns more abstract visual concepts with larger receptive fields. Since context modeling is critical for segmentation, the latest efforts have focused on increasing the receptive field, through either dilated (i.e., atrous) convolutions or inserting attention modules. However, the FCN-based architecture remains unchanged. In this paper, we aim to provide an alternative perspective by treating visual representation learning as a sequence-to-sequence prediction task. Specifically, we deploy a pure transformer to encode an image as a sequence of patches, without local convolution and resolution reduction. With the global context modeled in every layer of the transformer, stronger visual representations can be learned to better tackle vision tasks. In particular, our segmentation model, termed SEgmentation TRansformer (SETR), excels on ADE20K (50.28% mIoU, the first position in the test leaderboard on the day of submission) and Pascal Context (55.83% mIoU), and reaches competitive results on Cityscapes. Further, we formulate a family of Hierarchical Local-Global (HLG) Transformers, characterized by local attention within windows and global attention across windows in a hierarchical and pyramidal architecture. Extensive experiments show that our methods achieve appealing performance on a variety of visual recognition tasks (e.g., image classification, object detection and instance segmentation, and semantic segmentation).
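The sequence-to-sequence view is easy to make concrete: embed the image as a sequence of patch tokens, run a plain transformer that never reduces resolution, then reshape the tokens into a grid and upsample for per-pixel prediction. The following minimal sketch, with assumed patch size and depth, illustrates the pipeline; it is not the SETR or HLG code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToySETR(nn.Module):
    """Image -> patch sequence -> transformer (constant resolution) -> naive decoder."""
    def __init__(self, num_classes=150, img=64, patch=8, dim=192, depth=4, heads=3):
        super().__init__()
        self.patch = patch
        self.grid = img // patch
        self.embed = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
        self.pos = nn.Parameter(torch.zeros(1, self.grid ** 2, dim))
        layer = nn.TransformerEncoderLayer(dim, heads, 4 * dim, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, depth)   # no resolution reduction
        self.classify = nn.Conv2d(dim, num_classes, 1)

    def forward(self, x):
        b = x.shape[0]
        t = self.embed(x).flatten(2).transpose(1, 2) + self.pos   # (B, N, dim)
        t = self.encoder(t)                                       # global context per layer
        f = t.transpose(1, 2).reshape(b, -1, self.grid, self.grid)
        logits = self.classify(f)
        # simple decoder: bilinearly upsample patch-level logits to pixel level
        return F.interpolate(logits, scale_factor=self.patch, mode="bilinear",
                             align_corners=False)

out = ToySETR()(torch.randn(1, 3, 64, 64))
print(out.shape)  # torch.Size([1, 150, 64, 64])
```

Unlike an FCN encoder, the token grid keeps one resolution through every layer while each layer has a global receptive field.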
…various vision tasks without convolutions, where it can be used as a direct replacement for CNN backbones. (3) We validate PVT through extensive experiments, showing that it boosts the performance of many downstream tasks, including object detection, and instance and semantic segmentation. For example, with a comparable number of parameters, PVT+RetinaNet achieves 40.4 AP on the COCO dataset, surpassing ResNet50+RetinaNet (36.3 AP) by 4.1 absolute AP (see Figure 2). We hope that PVT can serve as an alternative and useful backbone for pixel-level predictions and facilitate future research.
Image segmentation is often ambiguous at the level of individual image patches and requires contextual information to reach label consensus. In this paper we introduce Segmenter, a transformer model for semantic segmentation. In contrast to convolution-based methods, our approach models global context already at the first layer and throughout the network. We build on the recent Vision Transformer (ViT) and extend it to semantic segmentation. To do so, we rely on the output embeddings corresponding to image patches and obtain class labels from these embeddings with a point-wise linear decoder or a mask transformer decoder. We leverage models pre-trained for image classification and show that they can be fine-tuned on the moderately sized datasets available for semantic segmentation. The linear decoder already obtains excellent results, but the performance can be further improved by a mask transformer generating class masks. We conduct an extensive ablation study to show the impact of the different parameters; in particular, performance is better for large models and small patch sizes. Segmenter attains excellent results for semantic segmentation: it outperforms the state of the art on both the ADE20K and Pascal Context datasets and is competitive on Cityscapes.
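The point-wise linear decoder is simple enough to show directly: project each patch embedding to class logits, fold the token sequence back into a 2-D grid, and bilinearly upsample to pixel resolution. A minimal sketch under assumed shapes (the mask transformer decoder is more involved and omitted here):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LinearDecoder(nn.Module):
    """Point-wise linear head over ViT patch embeddings."""
    def __init__(self, dim, num_classes, patch):
        super().__init__()
        self.proj = nn.Linear(dim, num_classes)  # one shared linear layer per token
        self.patch = patch

    def forward(self, tokens, grid_hw):
        # tokens: (B, N, dim) patch embeddings from the encoder, class token removed
        h, w = grid_hw
        logits = self.proj(tokens)                            # (B, N, K)
        b, n, k = logits.shape
        logits = logits.transpose(1, 2).reshape(b, k, h, w)   # back to a 2-D grid
        return F.interpolate(logits, scale_factor=self.patch,
                             mode="bilinear", align_corners=False)

tokens = torch.randn(2, 8 * 8, 192)   # e.g. an 8x8 patch grid of 192-d embeddings
masks = LinearDecoder(192, 21, patch=16)(tokens, (8, 8))
print(masks.shape)  # torch.Size([2, 21, 128, 128])
```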
Semantic segmentation usually benefits from global contexts, fine localisation information, multi-scale features, etc. To advance Transformer-based segmenters in these respects, we present a simple yet powerful semantic segmentation architecture, termed IncepFormer. IncepFormer makes two critical contributions. First, it introduces a novel pyramid-structured Transformer encoder which harvests global context and fine localisation features simultaneously; these features are concatenated and fed into a convolution layer for the final per-pixel prediction. Second, IncepFormer integrates an Inception-like architecture with depth-wise convolutions, and a light-weight feed-forward module in each self-attention layer, efficiently obtaining rich local multi-scale object features. Extensive experiments on five benchmarks show that IncepFormer is superior to state-of-the-art methods in both accuracy and speed; e.g., 1) IncepFormer-S achieves 47.7% mIoU on ADE20K, outperforming the existing best method by 1% while using only half the parameters and fewer FLOPs; 2) IncepFormer-B achieves 82.0% mIoU on Cityscapes with 39.6M parameters. Code is available at: github.com/shendu0321/IncepFormer.
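The "Inception-like architecture with depth-wise convolutions" suggests a token mixer built from parallel depth-wise convolutions of different kernel sizes fused point-wise. The sketch below illustrates that idea; the kernel sizes, branch count, and residual placement are assumptions rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn

class InceptionDWBlock(nn.Module):
    """Parallel depth-wise convolutions of several kernel sizes, Inception style,
    harvesting local multi-scale features cheaply (groups=dim keeps it light)."""
    def __init__(self, dim, kernels=(3, 5, 7)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(dim, dim, k, padding=k // 2, groups=dim) for k in kernels)
        self.fuse = nn.Conv2d(dim * len(kernels), dim, 1)  # point-wise fusion

    def forward(self, x):                   # x: (B, C, H, W)
        y = torch.cat([b(x) for b in self.branches], dim=1)
        return x + self.fuse(y)             # residual connection

y = InceptionDWBlock(64)(torch.randn(2, 64, 32, 32))
print(y.shape)  # torch.Size([2, 64, 32, 32])
```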
Convolutional neural networks (CNNs) have become the consensus for medical image segmentation tasks. However, they are limited in modeling long-range dependencies and spatial correlations due to the nature of the convolution operation. Although transformers were initially developed to address this problem, they fail to capture low-level features. In contrast, both local and global features have been shown to be crucial for dense prediction, such as segmentation in challenging contexts. In this paper, we propose HiFormer, a novel method that efficiently bridges a CNN and a transformer for medical image segmentation. Specifically, we design two multi-scale feature representations using the seminal Swin Transformer module and a CNN-based encoder. To secure a fine fusion of the global and local features obtained from these two representations, we propose a Double-Level Fusion (DLF) module in the skip connection of the encoder-decoder structure. Extensive experiments on various medical image segmentation datasets demonstrate the effectiveness of HiFormer over other CNN-based, transformer-based, and hybrid methods in terms of computational complexity as well as quantitative and qualitative results. Our code is publicly available at: https://github.com/amirhossein-kz/hiformer
Most recent semantic segmentation methods adopt a fully-convolutional network (FCN) with an encoder-decoder architecture. The encoder progressively reduces the spatial resolution and learns more abstract/semantic visual concepts with larger receptive fields. Since context modeling is critical for segmentation, the latest efforts have been focused on increasing the receptive field, through either dilated/atrous convolutions or inserting attention modules. However, the encoder-decoder based FCN architecture remains unchanged. In this paper, we aim to provide an alternative perspective by treating semantic segmentation as a sequence-to-sequence prediction task. Specifically, we deploy a pure transformer (i.e., without convolution and resolution reduction) to encode an image as a sequence of patches. With the global context modeled in every layer of the transformer, this encoder can be combined with a simple decoder to provide a powerful segmentation model, termed SEgmentation TRansformer (SETR). Extensive experiments show that SETR achieves new state of the art on ADE20K (50.28% mIoU), Pascal Context (55.83% mIoU) and competitive results on Cityscapes. Particularly, we achieve the first position in the highly competitive ADE20K test server leaderboard on the day of submission.
Ensembles of predictions are known to perform better than individual predictions taken separately. However, for tasks that require heavy computational resources, e.g., semantic segmentation, creating an ensemble of learners that must be trained separately is hardly tractable. In this work, we propose to leverage the performance boost offered by ensemble methods to enhance semantic segmentation, while avoiding the traditional training cost of ensembles. Our self-ensemble framework takes advantage of the multi-scale features produced by a feature pyramid network approach to feed independent decoders, thus creating an ensemble within a single model. Similar to an ensemble, the final prediction is the aggregation of the predictions made by each learner. In contrast to previous works, our model can be trained end-to-end, alleviating the traditional cumbersome multi-stage training of ensembles. Our self-ensemble framework outperforms the current state of the art on the benchmark datasets ADE20K, Pascal Context, and COCO-Stuff-10K for semantic segmentation, and is competitive on Cityscapes. Code will be available at github.com/walbouss/senformer.
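The self-ensemble idea reduces to very little code: give each level of the feature pyramid its own decoder (a "learner") and aggregate the per-learner logits into the final prediction. A sketch with assumed decoder depth and channel counts; the paper's actual decoders and aggregation rule may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfEnsembleHead(nn.Module):
    """One independent learner (decoder) per FPN level; the final prediction is
    the aggregation (here: mean) of the per-learner logits."""
    def __init__(self, dim, num_classes, num_levels=4):
        super().__init__()
        self.learners = nn.ModuleList(
            nn.Sequential(nn.Conv2d(dim, dim, 3, padding=1), nn.ReLU(),
                          nn.Conv2d(dim, num_classes, 1))
            for _ in range(num_levels))

    def forward(self, pyramid, out_size):
        # pyramid: list of (B, dim, Hi, Wi) multi-scale FPN feature maps
        preds = [F.interpolate(l(f), size=out_size, mode="bilinear",
                               align_corners=False)
                 for l, f in zip(self.learners, pyramid)]
        return torch.stack(preds).mean(0)   # ensemble aggregation in a single model

feats = [torch.randn(1, 256, s, s) for s in (32, 16, 8, 4)]
print(SelfEnsembleHead(256, 150)(feats, (128, 128)).shape)
# torch.Size([1, 150, 128, 128])
```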
Vision transformers (ViTs), which encode an image as a sequence of patches, bring new paradigms for semantic segmentation. We present an efficient framework of representation separation at the local-patch level and the global-region level for semantic segmentation with ViTs. It targets the peculiar over-smoothness of ViTs in semantic segmentation, and therefore differs from current popular paradigms of context modeling and from most existing related methods that reinforce the advantage of attention. We first deliver a decoupled two-pathway network in which a second pathway enhances and passes down local-patch discrepancy complementary to the global representations of transformers. We then propose a spatially adaptive separation module to obtain more separate deep representations, and a discriminative cross-attention which yields more discriminative region representations through novel auxiliary supervisions. The proposed methods achieve some impressive results: 1) incorporated with large-scale plain ViTs, our methods achieve new state-of-the-art performance on five widely used benchmarks; 2) using masked pre-trained plain ViTs, we achieve 68.9% mIoU on Pascal Context, setting a new record; 3) pyramid ViTs integrated with the decoupled two-pathway network even surpass the well-designed high-resolution ViTs on Cityscapes; 4) the improved representations by our framework have favorable transferability on images with natural corruptions. The code will be released publicly.
Finetuning a pretrained backbone in the encoder part of an image transformer network has been the traditional approach for the semantic segmentation task. However, such an approach leaves out the semantic context that the image provides during the encoding stage. This paper argues that incorporating the semantic information of the image into pretrained hierarchical transformer-based backbones while finetuning improves the performance considerably. To achieve this, we propose SeMask, a simple and effective framework that incorporates semantic information into the encoder with the help of a semantic attention operation. In addition, we use a lightweight semantic decoder during training to provide supervision to the intermediate semantic prior maps at every stage. Our experiments demonstrate that incorporating semantic priors enhances the performance of the established hierarchical encoders with only a slight increase in FLOPs. We provide empirical proof by integrating SeMask into each variant of the Swin Transformer as our encoder, paired with different decoders. Our framework achieves a new state-of-the-art of 58.22% mIoU on the ADE20K dataset, and improvements of over 3% in the mIoU metric on the Cityscapes dataset. The code and checkpoints are publicly available at https://github.com/picsart-ai-research/semask-segmentation.
Recently, Transformers have shown promising performance on various vision tasks. To reduce the quadratic computational complexity caused by global self-attention, various methods constrain the range of attention within a local region to improve its efficiency. Consequently, their receptive fields in a single attention layer are not large enough, resulting in insufficient context modeling. To address this issue, we propose Pale-Shaped self-Attention (PS-Attention), which performs self-attention within a pale-shaped region. Compared to global self-attention, PS-Attention can significantly reduce the computation and memory costs; meanwhile, it can capture richer contextual information under a computational complexity similar to that of previous local self-attention mechanisms. Based on PS-Attention, we develop a general Vision Transformer backbone with a hierarchical architecture, named Pale Transformer, which achieves 83.4%, 84.3%, and 84.9% Top-1 accuracy with 22M, 48M, and 85M parameters for 224x224 ImageNet-1K classification, outperforming previous Vision Transformer backbones. For downstream tasks, our Pale Transformer backbone performs better than the recent state-of-the-art CSWin Transformer on ADE20K semantic segmentation and COCO object detection and instance segmentation. The code will be released at https://github.com/BR-IDL/PaddleViT.
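The abstract does not define the pale-shaped region precisely (in the paper it is a set of interleaved rows and columns). As a deliberately simplified stand-in, the sketch below restricts each token's attention to its own full row and full column, in the style of axial attention. It conveys the cost versus receptive-field trade-off, but it is not the paper's exact attention region.

```python
import torch
import torch.nn as nn

class AxialLikeAttention(nn.Module):
    """Simplified stand-in for pale-shaped attention: each token attends to its
    full row and its full column (a 1-row/1-column 'pale'), which is far cheaper
    than global attention yet keeps a long-range receptive field."""
    def __init__(self, dim, heads=4):
        super().__init__()
        self.row = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.col = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):                        # x: (B, H, W, C)
        b, h, w, c = x.shape
        rows = x.reshape(b * h, w, c)            # every row is one sequence
        rows, _ = self.row(rows, rows, rows)
        cols = x.transpose(1, 2).reshape(b * w, h, c)
        cols, _ = self.col(cols, cols, cols)
        cols = cols.reshape(b, w, h, c).transpose(1, 2)
        return (rows.reshape(b, h, w, c) + cols) / 2

y = AxialLikeAttention(64)(torch.randn(2, 16, 16, 64))
print(y.shape)  # torch.Size([2, 16, 16, 64])
```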
The vanilla self-attention mechanism inherently relies on pre-defined and fixed computational dimensions. Such rigidity restricts it from having context-oriented generalization, which could bring in more contextual cues and global representations. To mitigate this issue, we propose a Scalable Self-Attention (SSA) mechanism that leverages two scaling factors to release the dimensions of the query, key, and value matrices while unbinding them from the input. This scalability obtains context-oriented generalization and enhances object sensitivity, pushing the whole network into a more effective trade-off state between accuracy and cost. Furthermore, we propose an Interactive Window-based Self-Attention (IWSA), which establishes interaction between non-overlapping regions by re-merging independent value tokens and aggregating spatial information from adjacent windows. By stacking the SSA and IWSA alternately, the Scalable Vision Transformer (ScalableViT) achieves state-of-the-art performance on general-purpose vision tasks. For example, on ImageNet-1K classification, ScalableViT-S outperforms Twins-SVT-S, and surpasses Swin-T by 1.4%.
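A plausible reading of SSA is that the scaling factors shrink the key/value side of the attention so its dimensions are no longer tied to the input size. The sketch below implements one such variant, spatial scaling of keys and values by a factor r (akin to spatial-reduction attention); the real SSA also rescales channel dimensions, so treat every choice here as an assumption.

```python
import torch
import torch.nn as nn

class ScalableSelfAttention(nn.Module):
    """Sketch of a scalable self-attention: a scaling factor r shrinks the
    spatial size of keys/values, decoupling the attention matrix from the
    input resolution and cutting cost roughly by r**2."""
    def __init__(self, dim, heads=4, r=2):
        super().__init__()
        self.q = nn.Conv2d(dim, dim, 1)
        self.kv = nn.Conv2d(dim, 2 * dim, kernel_size=r, stride=r)  # spatial scaling
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.proj = nn.Conv2d(dim, dim, 1)

    def forward(self, x):                             # x: (B, C, H, W)
        b, c, h, w = x.shape
        q = self.q(x).flatten(2).transpose(1, 2)      # (B, HW, C)
        k, v = self.kv(x).chunk(2, dim=1)             # (B, C, H/r, W/r) each
        k = k.flatten(2).transpose(1, 2)
        v = v.flatten(2).transpose(1, 2)
        y, _ = self.attn(q, k, v)                     # queries keep full resolution
        return self.proj(y.transpose(1, 2).reshape(b, c, h, w))

y = ScalableSelfAttention(64)(torch.randn(2, 64, 16, 16))
print(y.shape)  # torch.Size([2, 64, 16, 16])
```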
Convolutional neural networks (CNNs) have been the de facto standard for a multitude of computer vision tasks for many years. In particular, deep neural networks based on seminal architectures, such as U-shaped models with skip-connections or atrous convolution with pyramid pooling, have been tailored to a wide range of medical image analysis tasks. The main advantage of such architectures is that they are prone to retaining versatile local features. However, as a general consensus, CNNs fail to capture long-range dependencies and spatial correlations due to the intrinsic properties of the convolution operation. Alternatively, Transformers, profiting from global information modeling that stems from the self-attention mechanism, have recently attained remarkable performance in natural language processing and computer vision. Nevertheless, previous studies prove that both local and global features are critical for a deep model in dense prediction, such as segmenting complicated structures with disparate shapes and configurations. To this end, this paper proposes TransDeepLab, a novel DeepLab-like pure Transformer for medical image segmentation. Specifically, we exploit a hierarchical Swin Transformer with shifted windows to extend DeepLabv3 and model the Atrous Spatial Pyramid Pooling (ASPP) module. A thorough search of the relevant literature indicates that we are the first to model the seminal DeepLab model with a pure transformer-based model. Extensive experiments on various medical image segmentation tasks verify that our approach performs superior to or on par with most contemporary works that amalgamate Vision Transformer and CNN-based methods, along with a significant reduction in model complexity. The code and trained models are publicly available at https://github.com/rezazad68/transdeeplab
We explore the capability of plain Vision Transformers (ViTs) for semantic segmentation and propose SegViT. Previous ViT-based segmentation networks usually learn a pixel-level representation from the output of the ViT. Differently, we make use of the fundamental component, the attention mechanism, to generate masks for semantic segmentation. Specifically, we propose the Attention-to-Mask (ATM) module, in which the similarity maps between a set of learnable class tokens and the spatial feature maps are transferred to the segmentation masks. Experiments show that our proposed SegViT using the ATM module outperforms its counterparts using the plain ViT backbone on the ADE20K dataset, and achieves new state-of-the-art performance on the COCO-Stuff-10K and PASCAL-Context datasets. Furthermore, to reduce the computational cost of the ViT backbone, we propose query-based down-sampling (QD) and query-based up-sampling (QU) to build a Shrunk structure. With the proposed Shrunk structure, the model can save up to 40% of the computation while maintaining competitive performance.
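The heart of ATM, reading masks directly off the similarity maps between learnable class tokens and spatial features, fits in a few lines. The real module wraps this in a transformer decoder layer and pairs each mask with a class prediction; the sketch keeps only the similarity-to-mask step, with assumed shapes.

```python
import torch
import torch.nn as nn

class AttentionToMask(nn.Module):
    """Sketch of an ATM-style head: learnable class tokens cross-attend to the
    spatial features, and the token-to-feature similarity maps themselves are
    read out (via sigmoid) as the segmentation masks."""
    def __init__(self, dim, num_classes):
        super().__init__()
        self.class_tokens = nn.Parameter(torch.randn(num_classes, dim))
        self.to_q = nn.Linear(dim, dim)
        self.to_k = nn.Linear(dim, dim)

    def forward(self, feats, grid_hw):
        # feats: (B, N, dim) flattened spatial feature map, N = H*W
        b, n, d = feats.shape
        h, w = grid_hw
        q = self.to_q(self.class_tokens).expand(b, -1, -1)   # (B, K, dim)
        k = self.to_k(feats)                                 # (B, N, dim)
        sim = q @ k.transpose(1, 2) / d ** 0.5               # (B, K, N) similarity
        return torch.sigmoid(sim).reshape(b, -1, h, w)       # one mask per class

masks = AttentionToMask(192, 150)(torch.randn(2, 8 * 8, 192), (8, 8))
print(masks.shape)  # torch.Size([2, 150, 8, 8])
```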
Although vision Transformers have achieved excellent performance as backbone models in many vision tasks, most of them intend to capture the global relations of all tokens in an image or a window, which disrupts the inherent spatial and local correlations between patches in the 2D structure. In this paper, we introduce a simple vision Transformer named SimViT, which incorporates spatial structure and local information into vision Transformers. Specifically, we introduce Multi-head Central Self-Attention (MCSA) instead of conventional Multi-head Self-Attention to capture highly local relations; the introduction of sliding windows facilitates the capture of spatial structure. Meanwhile, SimViT extracts multi-scale hierarchical features from different layers for dense prediction tasks. Extensive experiments show that SimViT is effective and efficient as a general-purpose backbone model for various image processing tasks. In particular, our SimViT-Micro needs only 3.3M parameters to achieve 71.1% top-1 accuracy on the ImageNet-1K dataset, which is the smallest vision Transformer model to date. Our code will be available at https://github.com/cucasligang/simvit.
We present SegNeXt, a simple convolutional network architecture for semantic segmentation. Recent transformer-based models have dominated the field of semantic segmentation due to the efficiency of self-attention in encoding spatial information. In this paper, we show that convolutional attention is a more efficient and effective way to encode contextual information than the self-attention mechanism in transformers. By re-examining the characteristics of successful segmentation models, we discover several key components that lead to performance improvements in segmentation models. This motivates us to design a novel convolutional attention network that uses cheap convolutional operations. Without bells and whistles, our SegNeXt significantly improves the performance of previous state-of-the-art methods on popular benchmarks, including ADE20K, Cityscapes, COCO-Stuff, Pascal VOC, Pascal Context, and iSAID. Notably, SegNeXt outperforms EfficientNet-L2 w/ NAS-FPN, achieving 90.6% mIoU on the Pascal VOC 2012 test leaderboard using only 1/10 of its parameters. On average, SegNeXt achieves about a 2.0% mIoU improvement over the state-of-the-art methods on the ADE20K dataset with the same or less computation. Code is available at https://github.com/uyzhang/JSeg (Jittor) and https://github.com/visual-attention-network/segnext (PyTorch).
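What "convolutional attention" can look like in practice: cheap depth-wise and strip convolutions produce a map that multiplicatively reweights the features, replacing self-attention as the context encoder. Branch count and kernel sizes below are illustrative assumptions loosely following this idea, not the paper's exact design.

```python
import torch
import torch.nn as nn

class ConvAttention(nn.Module):
    """Sketch of multi-scale convolutional attention: depth-wise (strip)
    convolutions build an attention map that reweights the input."""
    def __init__(self, dim):
        super().__init__()
        self.local = nn.Conv2d(dim, dim, 5, padding=2, groups=dim)
        # strip convolutions approximate large kernels at low cost
        self.strip7 = nn.Sequential(
            nn.Conv2d(dim, dim, (1, 7), padding=(0, 3), groups=dim),
            nn.Conv2d(dim, dim, (7, 1), padding=(3, 0), groups=dim))
        self.strip11 = nn.Sequential(
            nn.Conv2d(dim, dim, (1, 11), padding=(0, 5), groups=dim),
            nn.Conv2d(dim, dim, (11, 1), padding=(5, 0), groups=dim))
        self.mix = nn.Conv2d(dim, dim, 1)

    def forward(self, x):
        a = self.local(x)
        a = a + self.strip7(a) + self.strip11(a)
        return self.mix(a) * x              # attention: multiplicative reweighting

y = ConvAttention(64)(torch.randn(2, 64, 32, 32))
print(y.shape)  # torch.Size([2, 64, 32, 32])
```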
This paper presents a new vision Transformer, called Swin Transformer, that capably serves as a general-purpose backbone for computer vision. Challenges in adapting Transformer from language to vision arise from differences between the two domains, such as large variations in the scale of visual entities and the high resolution of pixels in images compared to words in text. To address these differences, we propose a hierarchical Transformer whose representation is computed with Shifted windows. The shifted windowing scheme brings greater efficiency by limiting self-attention computation to non-overlapping local windows while also allowing for cross-window connection. This hierarchical architecture has the flexibility to model at various scales and has linear computational complexity with respect to image size. These qualities of Swin Transformer make it compatible with a broad range of vision tasks, including image classification (87.3 top-1 accuracy on ImageNet-1K) and dense prediction tasks such as object detection (58.7 box AP and 51.1 mask AP on COCO test-dev) and semantic segmentation (53.5 mIoU on ADE20K val). Its performance surpasses the previous state-of-the-art by a large margin of +2.7 box AP and +2.6 mask AP on COCO, and +3.2 mIoU on ADE20K, demonstrating the potential of Transformer-based models as vision backbones. The hierarchical design and the shifted window approach also prove beneficial for all-MLP architectures. The code and models are publicly available at https://github.com/microsoft/Swin-Transformer.
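The shifted windowing scheme is mechanically simple: between successive blocks, cyclically shift the feature map so that window-restricted attention gains cross-window connections at no extra cost. A minimal sketch; the real implementation additionally masks attention between tokens that wrap around the borders, which is omitted here.

```python
import torch
import torch.nn as nn

def window_partition(x, s):
    b, h, w, c = x.shape
    x = x.view(b, h // s, s, w // s, s, c).permute(0, 1, 3, 2, 4, 5)
    return x.reshape(-1, s * s, c)

def window_reverse(wins, s, b, h, w):
    c = wins.shape[-1]
    x = wins.view(b, h // s, w // s, s, s, c).permute(0, 1, 3, 2, 4, 5)
    return x.reshape(b, h, w, c)

class ShiftedWindowAttention(nn.Module):
    """Self-attention inside non-overlapping windows; on alternating blocks the
    feature map is cyclically shifted so information crosses window borders."""
    def __init__(self, dim, heads=4, size=4, shift=2):
        super().__init__()
        self.size, self.shift = size, shift
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):                    # x: (B, H, W, C)
        b, h, w, _ = x.shape
        if self.shift:
            x = torch.roll(x, (-self.shift, -self.shift), dims=(1, 2))
        wins = window_partition(x, self.size)
        y, _ = self.attn(wins, wins, wins)   # cost is linear in image size
        y = window_reverse(y, self.size, b, h, w)
        if self.shift:
            y = torch.roll(y, (self.shift, self.shift), dims=(1, 2))
        return y

y = ShiftedWindowAttention(64)(torch.randn(2, 16, 16, 64))
print(y.shape)  # torch.Size([2, 16, 16, 64])
```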
Multi-scale representations are essential for semantic segmentation. The community has witnessed the flourishing of semantic segmentation convolutional neural networks (CNNs) that exploit multi-scale contextual information. Motivated by the fact that the vision transformer (ViT) is powerful in image classification, some semantic segmentation ViTs have recently been proposed, most of which attain impressive results but at a cost in computational economy. In this paper, we succeed in introducing multi-scale representations into semantic segmentation ViTs via a window attention mechanism, and further improve both performance and efficiency. To this end, we introduce large window attention, which allows a local window to query a larger area of context window at only a little computational overhead. By regulating the ratio of the context area to the query area, we enable large window attention to capture contextual information at multiple scales. Moreover, the framework of spatial pyramid pooling is adopted to collaborate with large window attention, which yields a novel decoder named large window attention spatial pyramid pooling (LawinASPP) for semantic segmentation ViTs. Our resulting ViT, Lawin Transformer, is composed of an efficient hierarchical vision transformer (HVT) as the encoder and LawinASPP as the decoder. Empirical results demonstrate that Lawin Transformer offers improved efficiency compared to existing methods. Lawin Transformer further sets new state-of-the-art performance on Cityscapes (84.4% mIoU), ADE20K (56.2% mIoU), and the COCO-Stuff dataset. The code will be released at https://github.com/yan-hao-tian/lawin.
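The key idea, letting a local query window attend to a larger but pooled context window, can be sketched with unfold and average pooling. The window size, context ratio, and pooling choice below are assumptions, and the real LawinASPP runs several ratios in parallel inside a spatial-pyramid-pooling decoder.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LargeWindowAttention(nn.Module):
    """Sketch of large window attention: queries come from a local region at
    full resolution while keys/values come from the same region average-pooled,
    so the larger context costs only a little additional computation."""
    def __init__(self, dim, heads=4, size=4, ratio=2):
        super().__init__()
        self.size, self.ratio = size, ratio
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):                            # x: (B, C, H, W)
        b, c, h, w = x.shape
        s, r = self.size, self.ratio
        # queries: tokens of each (r*s x r*s) region at full resolution
        q = F.unfold(x, r * s, stride=r * s)         # (B, C*(rs)^2, L)
        L = q.shape[-1]
        q = q.view(b, c, (r * s) ** 2, L).permute(0, 3, 2, 1).reshape(b * L, -1, c)
        # keys/values: the same region pooled down to s x s context tokens
        ctx = F.avg_pool2d(x, r)                     # (B, C, H/r, W/r)
        kv = F.unfold(ctx, s, stride=s)              # context windows aligned with q
        kv = kv.view(b, c, s * s, L).permute(0, 3, 2, 1).reshape(b * L, -1, c)
        y, _ = self.attn(q, kv, kv)
        y = y.reshape(b, L, (r * s) ** 2, c).permute(0, 3, 2, 1).reshape(b, -1, L)
        return F.fold(y, (h, w), r * s, stride=r * s)

y = LargeWindowAttention(64)(torch.randn(2, 64, 16, 16))
print(y.shape)  # torch.Size([2, 64, 16, 16])
```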
Transformers have recently shown superior performance on various vision tasks. The large, sometimes even global, receptive field endows Transformer models with higher representation power than their CNN counterparts. Nevertheless, simply enlarging the receptive field also raises several concerns. On the one hand, using dense attention, e.g., in ViT, leads to excessive memory and computational cost, and features can be influenced by irrelevant parts beyond the region of interest. On the other hand, the sparse attention adopted in PVT or Swin Transformer is data-agnostic and may limit the ability to model long-range relations. To mitigate these issues, we propose a novel deformable self-attention module, in which the positions of the key and value pairs in self-attention are selected in a data-dependent way. This flexible scheme enables the self-attention module to focus on relevant regions and capture more informative features. On this basis, we present the Deformable Attention Transformer, a general backbone model with deformable attention for both image classification and dense prediction tasks. Extensive experiments show that our models achieve consistently improved results on comprehensive benchmarks. Code is available at https://github.com/leaplabthu/dat.
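The data-dependent part is the piece worth seeing in code: a small network predicts offsets for a coarse grid of reference points, and keys/values are bilinearly sampled at the shifted locations before standard attention. The sketch below is a simplified single-group version with assumed sizes; the paper predicts offsets from query features per group and adds a relative position bias.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DeformableAttention(nn.Module):
    """Sketch of deformable attention: a small network predicts data-dependent
    offsets for a coarse grid of reference points; keys/values are bilinearly
    sampled at the shifted locations, so attention focuses on relevant regions."""
    def __init__(self, dim, heads=4, grid=4):
        super().__init__()
        self.grid = grid                              # grid x grid sampled key points
        self.offsets = nn.Sequential(
            nn.AdaptiveAvgPool2d(grid), nn.Conv2d(dim, dim, 1), nn.GELU(),
            nn.Conv2d(dim, 2, 1), nn.Tanh())          # offsets in [-1, 1]
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):                             # x: (B, C, H, W)
        b, c, h, w = x.shape
        # reference points: uniform grid in normalized [-1, 1] coordinates
        ys, xs = torch.meshgrid(torch.linspace(-1, 1, self.grid),
                                torch.linspace(-1, 1, self.grid), indexing="ij")
        ref = torch.stack((xs, ys), dim=-1).to(x)     # (g, g, 2), (x, y) order
        off = self.offsets(x).permute(0, 2, 3, 1)     # (B, g, g, 2), data dependent
        loc = (ref + 0.5 * off).clamp(-1, 1)          # shifted sampling locations
        kv = F.grid_sample(x, loc, align_corners=True)  # (B, C, g, g) sampled feats
        kv = kv.flatten(2).transpose(1, 2)            # (B, g*g, C)
        q = x.flatten(2).transpose(1, 2)              # every position is a query
        y, _ = self.attn(q, kv, kv)
        return y.transpose(1, 2).reshape(b, c, h, w)

y = DeformableAttention(64)(torch.randn(2, 64, 16, 16))
print(y.shape)  # torch.Size([2, 64, 16, 16])
```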