We present SegFormer, a simple, efficient yet powerful semantic segmentation framework which unifies Transformers with lightweight multilayer perceptron (MLP) decoders. SegFormer has two appealing features: 1) SegFormer comprises a novel hierarchically structured Transformer encoder which outputs multiscale features. It does not need positional encoding, thereby avoiding the interpolation of positional codes, which leads to decreased performance when the testing resolution differs from training. 2) SegFormer avoids complex decoders. The proposed MLP decoder aggregates information from different layers, and thus combines both local attention and global attention to render powerful representations. We show that this simple and lightweight design is the key to efficient segmentation on Transformers. We scale our approach up to obtain a series of models from SegFormer-B0 to SegFormer-B5, reaching significantly better performance and efficiency than previous counterparts. For example, SegFormer-B4 achieves 50.3% mIoU on ADE20K with 64M parameters, being 5× smaller and 2.2% better than the previous best method. Our best model, SegFormer-B5, achieves 84.0% mIoU on the Cityscapes validation set and shows excellent zero-shot robustness on Cityscapes-C. Code will be released at: github.com/NVlabs/SegFormer.
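To make the decoder idea concrete, here is a minimal PyTorch-style sketch of an all-MLP decoder that linearly projects four multi-scale feature maps to a common dimension, upsamples them, and fuses them with further linear layers. The channel counts, module names, and fusion order are illustrative assumptions, not the released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AllMLPDecoder(nn.Module):
    """Fuses multi-scale encoder features with linear layers only (sketch)."""

    def __init__(self, in_channels=(32, 64, 160, 256), embed_dim=256, num_classes=150):
        super().__init__()
        # One linear projection per encoder stage to unify channel dimensions.
        self.proj = nn.ModuleList([nn.Linear(c, embed_dim) for c in in_channels])
        self.fuse = nn.Linear(len(in_channels) * embed_dim, embed_dim)
        self.classify = nn.Linear(embed_dim, num_classes)

    def forward(self, features):
        # features: list of maps [B, C_i, H_i, W_i]; features[0] is the highest resolution.
        target_size = features[0].shape[2:]
        projected = []
        for f, proj in zip(features, self.proj):
            x = proj(f.flatten(2).transpose(1, 2))              # [B, H_i*W_i, embed_dim]
            x = x.transpose(1, 2).reshape(f.size(0), -1, *f.shape[2:])
            x = F.interpolate(x, size=target_size, mode="bilinear", align_corners=False)
            projected.append(x)
        fused = torch.cat(projected, dim=1)                      # [B, 4*embed_dim, H, W]
        fused = self.fuse(fused.permute(0, 2, 3, 1))             # channel-wise MLP fusion
        return self.classify(fused).permute(0, 3, 1, 2)          # [B, num_classes, H, W]
```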
Semantic segmentation usually benefits from global contexts, fine localisation information, multi-scale features, etc. To advance Transformer-based segmenters with these aspects, we present a simple yet powerful semantic segmentation architecture, termed IncepFormer. IncepFormer makes two critical contributions, as follows. First, it introduces a novel pyramid-structured Transformer encoder which harvests global context and fine localisation features simultaneously. These features are concatenated and fed into a convolution layer for final per-pixel prediction. Second, IncepFormer integrates an Inception-like architecture with depth-wise convolutions, and a lightweight feed-forward module in each self-attention layer, efficiently obtaining rich local multi-scale object features. Extensive experiments on five benchmarks show that our IncepFormer is superior to state-of-the-art methods in both accuracy and speed, e.g., 1) our IncepFormer-S achieves 47.7% mIoU on ADE20K, outperforming the existing best method by 1% while costing only half the parameters and fewer FLOPs; 2) our IncepFormer-B achieves 82.0% mIoU on the Cityscapes dataset with 39.6M parameters. Code is available: github.com/shendu0321/IncepFormer.
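As a rough illustration of the Inception-style depth-wise mixing described above, the following sketch runs parallel depth-wise convolutions at several kernel sizes and merges them with a 1×1 convolution; the branch count, kernel sizes, and names are assumptions, not the paper's exact module.

```python
import torch
import torch.nn as nn

class InceptionDWMixer(nn.Module):
    """Parallel depth-wise convolutions at several kernel sizes (sketch)."""

    def __init__(self, dim=256, kernel_sizes=(3, 5, 7)):
        super().__init__()
        self.branches = nn.ModuleList(
            [nn.Conv2d(dim, dim, k, padding=k // 2, groups=dim) for k in kernel_sizes]
        )
        self.fuse = nn.Conv2d(len(kernel_sizes) * dim, dim, kernel_size=1)  # merge branches

    def forward(self, x):
        # x: [B, C, H, W]; the residual keeps the original signal alongside multi-scale context.
        return x + self.fuse(torch.cat([branch(x) for branch in self.branches], dim=1))
```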
Visual representation learning is the key to solving various vision problems. Relying on the seminal grid-structure prior, convolutional neural networks (CNNs) have been the de facto standard architecture for most deep vision models. For example, classical semantic segmentation methods typically adopt a fully convolutional network (FCN) with an encoder-decoder architecture. The encoder progressively reduces the spatial resolution and learns more abstract visual concepts with larger receptive fields. Since context modeling is critical for segmentation, the latest efforts have focused on increasing the receptive field, through either dilated (i.e., atrous) convolutions or inserted attention modules. However, the FCN-based architecture remains unchanged. In this paper, we aim to provide an alternative perspective by treating visual representation learning as a sequence-to-sequence prediction task. Specifically, we deploy a pure transformer to encode an image as a sequence of patches, without local convolution and resolution reduction. With the global context modeled in every layer of the transformer, stronger visual representations can be learned to better tackle vision tasks. In particular, our segmentation model, termed SEgmentation TRansformer (SETR), excels on ADE20K (50.28% mIoU, ranking first on the test leaderboard on the day of submission) and Pascal Context (55.83% mIoU), and achieves competitive results on Cityscapes. In addition, we formulate a family of Hierarchical Local-Global (HLG) transformers, characterized by local attention within windows and global attention across windows in a hierarchical and pyramidal architecture. Extensive experiments show that our method achieves appealing performance on a variety of visual recognition tasks (e.g., image classification, object detection, instance segmentation, and semantic segmentation).
Vision transformers (ViTs) encoding an image as a sequence of patches bring new paradigms for semantic segmentation. We present an efficient framework of representation separation at the local-patch level and the global-region level for semantic segmentation with ViTs. It targets the peculiar over-smoothness of ViTs in semantic segmentation, and therefore differs from current popular paradigms of context modeling and from most existing related methods that reinforce the advantage of attention. We first deliver a decoupled two-pathway network, in which an additional pathway enhances and passes down local-patch discrepancy complementary to the global representations of transformers. We then propose the spatially adaptive separation module to obtain more separated deep representations, and the discriminative cross-attention which yields more discriminative region representations through novel auxiliary supervisions. The proposed methods achieve some impressive results: 1) incorporated with large-scale plain ViTs, our methods achieve new state-of-the-art performance on five widely used benchmarks; 2) using masked pre-trained plain ViTs, we achieve 68.9% mIoU on Pascal Context, setting a new record; 3) pyramid ViTs integrated with the decoupled two-pathway network even surpass well-designed high-resolution ViTs on Cityscapes; 4) the improved representations produced by our framework transfer favorably to images with natural corruptions. The codes will be released publicly.
Image segmentation is often ambiguous at the level of individual image patches and requires contextual information to reach label consensus. In this paper we introduce Segmenter, a transformer model for semantic segmentation. In contrast to convolution-based methods, our approach allows modeling global context already at the first layer and throughout the network. We build on the recent Vision Transformer (ViT) and extend it to semantic segmentation. To do so, we rely on the output embeddings corresponding to image patches and obtain class labels from these embeddings with a point-wise linear decoder or a mask transformer decoder. We leverage models pre-trained for image classification and show that we can fine-tune them on the moderate-sized datasets available for semantic segmentation. The linear decoder already obtains excellent results, but the performance can be further improved by a mask transformer generating class masks. We conduct an extensive ablation study to show the impact of the different parameters; in particular, performance is better for large models and small patch sizes. Segmenter attains excellent results for semantic segmentation. It outperforms the state of the art on both the ADE20K and Pascal Context datasets and is competitive on Cityscapes.
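A hedged sketch of the point-wise linear decoder idea follows: each output patch embedding is classified by a shared linear layer and the resulting patch-level logits are upsampled to pixel resolution. The embedding size, patch size, and class count are placeholder assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LinearPatchDecoder(nn.Module):
    """Shared linear classifier over ViT patch embeddings (sketch)."""

    def __init__(self, embed_dim=768, patch_size=16, num_classes=150):
        super().__init__()
        self.patch_size = patch_size
        self.head = nn.Linear(embed_dim, num_classes)   # shared across all patches

    def forward(self, patch_tokens, img_hw):
        # patch_tokens: [B, N, embed_dim] output embeddings (class token removed).
        H, W = img_hw
        h, w = H // self.patch_size, W // self.patch_size
        logits = self.head(patch_tokens)                # [B, N, num_classes]
        B, _, C = logits.shape
        logits = logits.transpose(1, 2).reshape(B, C, h, w)
        return F.interpolate(logits, size=(H, W), mode="bilinear", align_corners=False)
```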
Most recent semantic segmentation methods adopt a fully-convolutional network (FCN) with an encoder-decoder architecture. The encoder progressively reduces the spatial resolution and learns more abstract/semantic visual concepts with larger receptive fields. Since context modeling is critical for segmentation, the latest efforts have been focused on increasing the receptive field, through either dilated/atrous convolutions or inserting attention modules. However, the encoder-decoder based FCN architecture remains unchanged. In this paper, we aim to provide an alternative perspective by treating semantic segmentation as a sequence-to-sequence prediction task. Specifically, we deploy a pure transformer (i.e., without convolution and resolution reduction) to encode an image as a sequence of patches. With the global context modeled in every layer of the transformer, this encoder can be combined with a simple decoder to provide a powerful segmentation model, termed SEgmentation TRansformer (SETR). Extensive experiments show that SETR achieves new state of the art on ADE20K (50.28% mIoU), Pascal Context (55.83% mIoU) and competitive results on Cityscapes. Particularly, we achieve the first position in the highly competitive ADE20K test server leaderboard on the day of submission.
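The sequence-to-sequence formulation can be sketched in PyTorch as follows: fixed-size patches are embedded, a plain transformer encoder processes the token sequence at constant resolution, and a simple head reshapes and upsamples the tokens into a segmentation map. Depth, width, and the naive 1×1-conv decoder are illustrative assumptions, not SETR's exact configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Seq2SeqSegmenter(nn.Module):
    """Patch sequence in, per-pixel class logits out, no resolution reduction (sketch)."""

    def __init__(self, img_size=512, patch=16, dim=768, depth=12, heads=12, num_classes=150):
        super().__init__()
        self.patch = patch
        self.embed = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)   # patch embedding
        self.pos = nn.Parameter(torch.zeros(1, (img_size // patch) ** 2, dim))
        layer = nn.TransformerEncoderLayer(dim, heads, dim * 4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, depth)                # constant token count
        self.head = nn.Conv2d(dim, num_classes, kernel_size=1)            # naive decoder

    def forward(self, x):
        # x: [B, 3, H, W] with H = W = img_size (fixed positional embedding in this sketch).
        B, _, H, W = x.shape
        tokens = self.embed(x).flatten(2).transpose(1, 2)                 # [B, N, dim]
        tokens = self.encoder(tokens + self.pos)
        feat = tokens.transpose(1, 2).reshape(B, -1, H // self.patch, W // self.patch)
        return F.interpolate(self.head(feat), size=(H, W), mode="bilinear", align_corners=False)
```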
Existing transformer-based image backbones typically propagate feature information in one direction, from lower to higher levels. This may not be ideal, since the localization ability needed to delineate accurate object boundaries is most prominent in the lower, high-resolution feature maps, while the semantics that can disambiguate image signals belonging to one object versus another typically emerge only in the higher levels of processing. We propose Hierarchical Inter-Level Attention (HILA), an attention-based method that captures bottom-up and top-down updates between features of different levels. HILA extends hierarchical vision transformer architectures by adding local connections between higher- and lower-level features to the backbone encoder. In each iteration, we construct a hierarchy by having higher-level features compete for assignments to update the lower-level features belonging to them, iteratively resolving object-part relationships. These improved lower-level features are then used to re-update the higher-level features. HILA can be integrated into the majority of hierarchical architectures without any changes to the base model. We add HILA to SegFormer and the Swin Transformer and show notable improvements in semantic segmentation accuracy with fewer parameters and FLOPs. Project website and code: https://www.cs.toronto.edu/~garyleung/hila/
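The bottom-up/top-down updates can be illustrated with a simplified cross-attention sketch between two adjacent feature levels; using full (rather than local) cross-attention and the module names below are simplifying assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class CrossLevelUpdate(nn.Module):
    """Top-down then bottom-up cross-attention between two feature levels (sketch)."""

    def __init__(self, dim=256, heads=8):
        super().__init__()
        self.top_down = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.bottom_up = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, low_tokens, high_tokens):
        # low_tokens:  [B, N_low, dim]  fine, high-resolution features
        # high_tokens: [B, N_high, dim] coarse, semantic features
        # Top-down: low-level tokens query the higher level for semantic context.
        low_tokens = low_tokens + self.top_down(low_tokens, high_tokens, high_tokens)[0]
        # Bottom-up: higher-level tokens re-aggregate the refined low-level detail.
        high_tokens = high_tokens + self.bottom_up(high_tokens, low_tokens, low_tokens)[0]
        return low_tokens, high_tokens
```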
Vision transformers (ViTs) have attracted attention for their superior performance on computer vision tasks. To address the limitation of single-scale low-resolution representations, prior works adapt ViTs to high-resolution dense prediction tasks using hierarchical architectures that generate pyramid features. However, multi-scale representation learning remains under-explored for ViTs, given their classification-oriented sequential topology. In this work, to enhance ViTs with the ability to learn semantically rich and spatially precise multi-scale representations, we present an efficient integration of high-resolution multi-branch architectures with vision transformers, dubbed HRViT, pushing the Pareto frontier of dense prediction tasks to a new level. We explore heterogeneous branch designs, reduce the redundancy in linear layers, and augment the model's nonlinearity to balance model performance and hardware efficiency. The proposed HRViT achieves 50.20% mIoU on ADE20K and 83.16% mIoU on Cityscapes for semantic segmentation tasks, surpassing the state-of-the-art MiT and CSWin backbones with an average improvement of +1.78 mIoU, 28% fewer parameters, and 21% fewer FLOPs, demonstrating the potential of HRViT as a strong vision backbone.
We present SegNeXt, a simple convolutional network architecture for semantic segmentation. Recent transformer-based models have dominated the field of semantic segmentation due to the efficiency of self-attention in encoding spatial information. In this paper, we show that convolutional attention is a more efficient and effective way to encode contextual information than the self-attention mechanism in transformers. By re-examining the characteristics owned by successful segmentation models, we discover several key components leading to the performance improvement of segmentation models. This motivates us to design a novel convolutional attention network that uses cheap convolutional operations. Without bells and whistles, our SegNeXt significantly improves the performance of previous state-of-the-art methods on popular benchmarks, including ADE20K, Cityscapes, COCO-Stuff, Pascal VOC, Pascal Context, and iSAID. Notably, SegNeXt outperforms EfficientNet-L2 w/ NAS-FPN and achieves 90.6% mIoU on the Pascal VOC 2012 test leaderboard using only 1/10 of its parameters. On average, SegNeXt achieves about a 2.0% mIoU improvement over state-of-the-art methods on the ADE20K dataset with the same or less computation. Code is available at https://github.com/uyzhang/jseg (Jittor) and https://github.com/Visual-Attention-Network/SegNeXt (PyTorch).
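A minimal sketch of convolutional attention in this spirit follows: cheap depth-wise convolutions (including strip convolutions that approximate a large kernel) produce a spatial map that reweights the input features. The kernel sizes and block structure are assumptions rather than the exact published design.

```python
import torch
import torch.nn as nn

class ConvAttention(nn.Module):
    """Depth-wise convolutions produce a map that reweights the input (sketch)."""

    def __init__(self, dim=64):
        super().__init__()
        self.local = nn.Conv2d(dim, dim, 5, padding=2, groups=dim)              # local context
        self.strip_h = nn.Conv2d(dim, dim, (1, 7), padding=(0, 3), groups=dim)  # horizontal strip
        self.strip_v = nn.Conv2d(dim, dim, (7, 1), padding=(3, 0), groups=dim)  # vertical strip
        self.mix = nn.Conv2d(dim, dim, kernel_size=1)                           # channel mixing

    def forward(self, x):
        attn = self.local(x)
        attn = attn + self.strip_v(self.strip_h(attn))   # strip convs approximate a large kernel
        return self.mix(attn) * x                        # attention-style reweighting
```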
This competition focuses on urban-scene semantic segmentation based on vehicle camera views. The highly imbalanced urban-scene image dataset poses challenges for existing solutions and further research. Deep conventional neural-network-based semantic segmentation methods, such as encoder-decoder architectures and multi-scale, pyramid-based approaches, have become flexible solutions applicable to real-world applications. In this competition, we mainly review the literature on and conduct experiments with transformer-driven methods (especially SegFormer) to achieve the best trade-off between performance and efficiency. For example, SegFormer-B0 obtains 74.6% mIoU with the smallest FLOPs, 15.6G, and the largest model, SegFormer-B5, achieves 80.2% mIoU. Based on multiple factors, including per-case failure analysis, per-class performance, training cost, and efficiency estimates, the final candidate model for the competition is SegFormer-B2, evaluated at 50.6 GFLOPs and 78.5% mIoU on the test set. Check our code implementation at https://vmv.re/cv3315.
Multi-scale representations are crucial for semantic segmentation. The community has witnessed the flourishing of semantic segmentation convolutional neural networks (CNNs) that exploit multi-scale contextual information. Motivated by the strength of vision transformers (ViTs) in image classification, some semantic segmentation ViTs have recently been proposed, most of which achieve impressive results at the cost of computational economy. In this paper, we succeed in introducing multi-scale representations into semantic segmentation ViTs via a window attention mechanism, and further improve performance and efficiency. To this end, we introduce large window attention, which allows a local window to query a much larger area of the context window at only a slight computational overhead. By adjusting the ratio of the context area to the query area, large window attention can capture contextual information at multiple scales. Moreover, the framework of spatial pyramid pooling is adopted to collaborate with large window attention, which yields a novel decoder named large window attention spatial pyramid pooling (LawinASPP) for semantic segmentation ViTs. Our resulting ViT, the Lawin Transformer, is composed of an efficient hierarchical vision transformer (HVT) as the encoder and LawinASPP as the decoder. Empirical results demonstrate that the Lawin Transformer offers improved efficiency compared with existing methods. The Lawin Transformer further sets new state-of-the-art performance on Cityscapes (84.4% mIoU), ADE20K (56.2% mIoU), and COCO-Stuff. The code will be released at https://github.com/yan-hao-tian/lawin.
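The large window attention idea can be sketched as follows: each P×P query window attends to an R-times larger surrounding context window that is average-pooled back to P×P tokens, so the attention cost stays close to ordinary window attention. The unfold-based windowing, the single ratio, and the parameter names are simplifications for illustration only.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LargeWindowAttention(nn.Module):
    """P x P query windows attend to pooled R*P x R*P context windows (sketch)."""

    def __init__(self, dim=256, heads=8, window=8, ratio=2):
        super().__init__()
        self.window, self.ratio = window, ratio
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):
        # x: [B, C, H, W] with H and W divisible by the window size.
        B, C, H, W = x.shape
        P, R = self.window, self.ratio
        # Query windows: non-overlapping P x P patches.
        q = F.unfold(x, kernel_size=P, stride=P)                          # [B, C*P*P, L]
        L = q.size(-1)
        q = q.transpose(1, 2).reshape(B * L, C, P * P).transpose(1, 2)    # [B*L, P*P, C]
        # Context windows: R*P x R*P regions centred on each query window,
        # average-pooled back down to P x P tokens.
        ctx = F.unfold(x, kernel_size=R * P, stride=P, padding=(R * P - P) // 2)
        ctx = ctx.transpose(1, 2).reshape(B * L, C, R * P, R * P)
        ctx = F.adaptive_avg_pool2d(ctx, P).flatten(2).transpose(1, 2)    # [B*L, P*P, C]
        out = self.attn(q, ctx, ctx)[0]                                   # [B*L, P*P, C]
        out = out.transpose(1, 2).reshape(B, L, C * P * P).transpose(1, 2)
        return F.fold(out, output_size=(H, W), kernel_size=P, stride=P)
```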
Scene segmentation in images is a fundamental yet challenging problem in visual content understanding, namely learning a model that assigns each image pixel a categorical label. One of the challenges of this learning task is to account for spatial and semantic relationships to obtain descriptive feature representations, so learning feature maps from multiple scales is a common practice in scene segmentation. In this paper, we explore the effective use of self-attention within multi-scale image windows to learn descriptive visual features, and then propose three different strategies to aggregate these feature maps to decode the feature representation for dense prediction. Our design is based on the recently proposed Swin Transformer model, which entirely discards convolution operations. With the simple yet effective multi-scale feature learning and aggregation, our model achieves very promising performance on four public scene segmentation datasets: PASCAL VOC2012, COCO-Stuff 10K, ADE20K, and Cityscapes.
Finetuning a pretrained backbone in the encoder part of an image transformer network has been the traditional approach for semantic segmentation tasks. However, such an approach leaves out the semantic context that an image provides during the encoding stage. This paper argues that incorporating the semantic information of the image into a pretrained hierarchical transformer-based backbone while finetuning improves performance considerably. To achieve this, we propose a simple and effective framework that incorporates semantic information into the encoder with the help of a semantic attention operation. In addition, we use a lightweight semantic decoder during training to provide supervision to the intermediate semantic prior maps at each stage. Our experiments show that incorporating semantic priors enhances the performance of established hierarchical encoders with only a slight increase in FLOPs. We provide empirical proof by integrating SeMask into each variant of the Swin Transformer as our encoder, paired with different decoders. Our framework achieves a new state-of-the-art of 58.22% mIoU on the ADE20K dataset and improvements of over 3% in the mIoU metric on the Cityscapes dataset. The code and checkpoints are publicly available at https://github.com/Picsart-AI-Research/SeMask-Segmentation.
It is well known that transformers perform better on semantic segmentation than convolutional neural networks. However, the original vision transformer may lack inductive bias for local neighborhoods and has high time complexity. Recently, the Swin Transformer set new records in various vision tasks by using a hierarchical architecture and more efficient shifted windows. However, as the Swin Transformer is designed specifically for image classification, it may achieve suboptimal performance on dense-prediction-based segmentation tasks. Furthermore, simply combining the Swin Transformer with existing methods leads to a boost in model size and parameters for the final segmentation model. In this paper, we rethink the Swin Transformer for semantic segmentation and design a lightweight yet effective transformer model, called SSformer. In this model, considering the inherent hierarchical design of the Swin Transformer, we propose a decoder to aggregate information from different layers, thus obtaining both local and global attention. Experimental results show that the proposed SSformer yields mIoU performance comparable to state-of-the-art models while maintaining a smaller model size and lower computation.
Transformers have shown impressive performance in various natural language processing and computer vision tasks, owing to their capability of modeling long-range dependencies. Recent progress has demonstrated that combining such transformers with CNN-based semantic image segmentation models is very promising. However, it is not well studied yet how a pure transformer-based approach can achieve image segmentation. In this work, we explore a novel framework for semantic image segmentation, an encoder-decoder based Fully Transformer Network (FTN). Specifically, we first propose a Pyramid Group Transformer (PGT) as the encoder to progressively learn hierarchical features, while reducing the computational complexity of the standard Vision Transformer (ViT). Then, we propose a Feature Pyramid Transformer (FPT) to fuse semantic-level and spatial-level information from multiple levels of the PGT encoder for semantic image segmentation. Surprisingly, this simple baseline achieves better results on multiple challenging semantic segmentation and face parsing benchmarks, including PASCAL Context, ADE20K, COCO-Stuff, and CelebAMask-HQ. The source code will be released at https://github.com/BR-IDL/PaddleViT.
Vision transformers have been successfully applied to image recognition tasks due to their ability to capture long-range dependencies within an image. However, there are still gaps in both performance and computational cost between transformers and existing convolutional neural networks (CNNs). In this paper, we aim to address this issue and develop a network that can outperform not only canonical transformers, but also high-performance convolutional models. We propose a new transformer-based hybrid network by taking advantage of transformers to capture long-range dependencies and of CNNs to model local features. Furthermore, we scale it to obtain a family of models, called CMT, that achieve much better accuracy and efficiency than previous convolution-based and transformer-based models. In particular, our CMT-S achieves 83.5% top-1 accuracy on ImageNet, while being 14× and 2× smaller in FLOPs than the existing DeiT and EfficientNet, respectively. The proposed CMT-S also generalizes well on CIFAR10 (99.2%), CIFAR100 (91.7%), Flowers (98.7%), and other challenging vision datasets such as COCO (44.3% mAP), with considerably less computational cost.
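A hedged sketch of a hybrid block in this spirit: a depth-wise convolution models local structure before standard self-attention captures long-range dependencies. The exact CMT block composition differs, and the names and dimensions here are placeholders for illustration.

```python
import torch
import torch.nn as nn

class HybridConvAttnBlock(nn.Module):
    """Depth-wise convolution for local features, self-attention for global context (sketch)."""

    def __init__(self, dim=256, heads=8):
        super().__init__()
        self.local = nn.Conv2d(dim, dim, 3, padding=1, groups=dim)   # CNN branch
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.ffn = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, x):
        # x: [B, C, H, W]
        B, C, H, W = x.shape
        x = x + self.local(x)                            # local feature modelling
        t = x.flatten(2).transpose(1, 2)                 # [B, H*W, C]
        n = self.norm1(t)
        t = t + self.attn(n, n, n)[0]                    # long-range dependencies
        t = t + self.ffn(self.norm2(t))
        return t.transpose(1, 2).reshape(B, C, H, W)
```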
(2) PVT can serve as a unified backbone for various vision tasks without convolutions, where it can be used as a direct replacement for CNN backbones. (3) We validate PVT through extensive experiments, showing that it boosts the performance of many downstream tasks, including object detection, instance and semantic segmentation. For example, with a comparable number of parameters, PVT+RetinaNet achieves 40.4 AP on the COCO dataset, surpassing ResNet50+RetinaNet (36.3 AP) by 4.1 absolute AP (see Figure 2). We hope that PVT could serve as an alternative and useful backbone for pixel-level predictions and facilitate future research.
Recently, vision transformers have achieved great success by pushing the state of the art in various vision tasks. One of the most challenging problems in vision transformers is that the large sequence length of image tokens leads to high computational cost (quadratic complexity). A popular solution to this problem is to use a single pooling operation to reduce the sequence length. This paper considers how to improve existing vision transformers, in which the pooled feature extracted by a single pooling operation seems less powerful. To this end, we note that pyramid pooling has been proven effective in various vision tasks owing to its powerful ability in context abstraction. However, pyramid pooling has not been explored in backbone network design. To bridge this gap, we propose to adapt pyramid pooling to multi-head self-attention (MHSA) in vision transformers, simultaneously reducing the sequence length and capturing powerful contextual features. Plugging in the pooling-based MHSA, we build a general vision transformer backbone, dubbed Pyramid Pooling Transformer (P2T). Extensive experiments demonstrate that when P2T is used as the backbone network, it shows substantial superiority in various vision tasks, compared with previous CNN- and transformer-based networks. The code will be released at https://github.com/yuhuan-wu/p2t.
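The pooling-based MHSA can be sketched as follows: keys and values are built from the feature map pooled at several pyramid scales, which both shortens the attended sequence and injects multi-scale context. The pool sizes and module names are illustrative assumptions, not the released P2T implementation.

```python
import torch
import torch.nn as nn

class PyramidPoolAttention(nn.Module):
    """Keys/values come from pyramid-pooled tokens, shortening the sequence (sketch)."""

    def __init__(self, dim=256, heads=8, pool_sizes=(1, 2, 3, 6)):
        super().__init__()
        self.pools = nn.ModuleList([nn.AdaptiveAvgPool2d(s) for s in pool_sizes])
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):
        # x: [B, C, H, W]
        B, C, H, W = x.shape
        queries = x.flatten(2).transpose(1, 2)                        # [B, H*W, C]
        # Pyramid-pooled tokens form a much shorter key/value sequence (1+4+9+36 = 50 tokens).
        pooled = [p(x).flatten(2).transpose(1, 2) for p in self.pools]
        kv = torch.cat(pooled, dim=1)
        out = self.attn(queries, kv, kv)[0]                           # [B, H*W, C]
        return out.transpose(1, 2).reshape(B, C, H, W)
```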
Identifying polyps is challenging for the automatic analysis of endoscopic images in computer-aided clinical support systems. Models based on convolutional networks (CNNs), transformers, and combinations of them have been proposed to segment polyps with promising results. However, these approaches have limitations either in modeling the local appearance of polyps or in lacking multi-level features for spatial dependencies during decoding. This paper proposes a novel network, namely ColonFormer, to address these limitations. ColonFormer is an encoder-decoder architecture capable of modeling long-range semantic information on both the encoder and decoder branches. The encoder is a lightweight transformer-based architecture for modeling global semantic relations at multiple scales. The decoder is a hierarchical structure designed to learn multi-level features to enrich feature representations. In addition, a new skip connection technique is added to refine the boundaries of polyp objects in the global map for accurate segmentation. Extensive experiments have been conducted on five popular benchmark datasets for polyp segmentation, including Kvasir, CVC-ClinicDB, CVC-ColonDB, CVC-T, and ETIS-Larib. Experimental results show that our ColonFormer outperforms other state-of-the-art methods on all benchmark datasets.
Very recently, a variety of vision transformer architectures for dense prediction tasks have been proposed and they show that the design of spatial attention is critical to their success in these tasks. In this work, we revisit the design of the spatial attention and demonstrate that a carefully devised yet simple spatial attention mechanism performs favorably against the state-of-the-art schemes. As a result, we propose two vision transformer architectures, namely, Twins-PCPVT and Twins-SVT. Our proposed architectures are highly efficient and easy to implement, only involving matrix multiplications that are highly optimized in modern deep learning frameworks. More importantly, the proposed architectures achieve excellent performance on a wide range of visual tasks including image-level classification as well as dense detection and segmentation. The simplicity and strong performance suggest that our proposed architectures may serve as stronger backbones for many vision tasks. Our code is available at: https://git.io/Twins.