Convolutional neural networks have enabled major advances in pixel-level prediction tasks, such as semantic segmentation, depth estimation and surface normal prediction, benefiting from their powerful capabilities in visual representation learning. Typically, state-of-the-art models integrate attention mechanisms for improved deep feature representations. Recently, some works have demonstrated the significance of learning and combining both spatial- and channel-wise attention for deep feature refinement. In this paper, we aim at effectively boosting previous approaches and propose a unified deep framework to jointly learn spatial attention maps and channel attention vectors in a principled manner, so as to structure the resulting attention tensors and model the interactions between these two types of attention. Specifically, we integrate the estimation and interaction of the attentions within a probabilistic representation learning framework, leading to Variational Structured Attention Networks (VISTA-Net). We implement the inference rules within the neural network, thus allowing for end-to-end learning of the probabilistic parameters and of the CNN front-end. As demonstrated by our extensive empirical evaluation on six large-scale datasets for dense visual prediction, VISTA-Net outperforms the state of the art in multiple continuous and discrete prediction tasks, confirming the benefit of joint structured spatial-channel attention estimation for deep representation learning. The code is available at https://github.com/ygjwd12345/vista-ner.
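The abstract above describes building a structured attention tensor from a spatial attention map and a channel attention vector. A minimal PyTorch sketch of that idea is shown below; it only illustrates combining the two attentions via an outer product, and deliberately omits VISTA-Net's variational/probabilistic inference. All module and variable names are ours, not the paper's.

```python
import torch
import torch.nn as nn

class JointSpatialChannelAttention(nn.Module):
    """Toy illustration: a spatial attention map (B x 1 x H x W) and a channel
    attention vector (B x C) are combined into one C x H x W attention tensor
    via an outer product and applied to the feature map."""

    def __init__(self, channels):
        super().__init__()
        self.spatial = nn.Conv2d(channels, 1, kernel_size=1)  # spatial map
        self.pool = nn.AdaptiveAvgPool2d(1)                   # channel summary
        self.fc = nn.Linear(channels, channels)               # channel vector

    def forward(self, x):
        b, c, h, w = x.shape
        s = torch.sigmoid(self.spatial(x))                      # B x 1 x H x W
        v = torch.sigmoid(self.fc(self.pool(x).view(b, c)))     # B x C
        attention = v.view(b, c, 1, 1) * s                       # structured tensor
        return x * attention

feat = torch.randn(2, 64, 32, 32)
out = JointSpatialChannelAttention(64)(feat)
print(out.shape)  # torch.Size([2, 64, 32, 32])
```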
Monocular depth estimation and semantic segmentation are two fundamental goals of scene understanding. Because of the benefits of task interaction, many works have studied joint-task learning algorithms. However, most existing methods fail to fully exploit the semantic labels: they ignore the contextual structure the labels provide and only use them to supervise the prediction of the segmentation branch, which limits the performance of both tasks. In this paper, we propose a network injected with contextual information (CI-Net) to address this problem. Specifically, we introduce a self-attention block in the encoder to generate attention maps. Supervised by ideal attention maps created from the semantic labels, the network embeds contextual information so that it can better understand the scene and exploit correlated features for accurate prediction. In addition, a feature-sharing module is constructed to deeply fuse the task-specific features, and a consistency loss is designed so that the features guide each other. We evaluate the proposed CI-Net on the NYU-Depth-v2 and SUN-RGBD datasets. Experimental results validate that CI-Net effectively improves the accuracy of both semantic segmentation and depth estimation.
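One plausible reading of the "ideal attention map created from semantic labels" is a pixel-affinity target in which two pixels attend to each other iff they share a class. The sketch below constructs such a target; it is an assumption about the supervision signal, not the authors' published recipe.

```python
import torch

def ideal_attention_map(labels):
    """Binary pixel-affinity target from a semantic label map:
    entry (i, j) is 1 if pixels i and j belong to the same class."""
    b, h, w = labels.shape
    flat = labels.view(b, h * w)
    affinity = (flat.unsqueeze(2) == flat.unsqueeze(1)).float()  # B x HW x HW
    return affinity

labels = torch.randint(0, 5, (1, 8, 8))
target = ideal_attention_map(labels)
# A learned attention map of the same shape could then be supervised with e.g.
# torch.nn.functional.binary_cross_entropy(predicted_attention, target)
```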
Multi-task dense scene understanding is a thriving research area that requires simultaneous reasoning over a series of related tasks with pixel-wise prediction. Owing to the heavy use of convolution operations, most existing works suffer from a severe limitation of local modeling, whereas learning interactions and inference across global spatial positions and across multiple tasks is critical for this problem. In this paper, we propose a novel end-to-end Inverted Pyramid multi-task Transformer (InvPT) to simultaneously model spatial positions and multiple tasks in a unified framework. To the best of our knowledge, this is the first work to explore designing a transformer structure for multi-task dense prediction in scene understanding. Furthermore, it is widely recognized that higher spatial resolution is remarkably beneficial for dense prediction, yet it is very challenging for existing transformers to go deeper at higher resolutions because of their huge complexity at large spatial sizes. InvPT presents an efficient UP-Transformer block to learn multi-task feature interaction at gradually increasing resolutions, which also incorporates effective self-attention message passing and multi-scale feature aggregation to produce task-specific predictions at high resolution. Our method achieves superior multi-task performance on the NYUD-v2 and PASCAL-Context datasets and significantly outperforms previous state-of-the-art methods. The code is available at https://github.com/prismformore/invpt
Image segmentation is a key topic in image processing and computer vision with applications such as scene understanding, medical image analysis, robotic perception, video surveillance, augmented reality, and image compression, among many others. Various algorithms for image segmentation have been developed in the literature. Recently, due to the success of deep learning models in a wide range of vision applications, there has been a substantial amount of works aimed at developing image segmentation approaches using deep learning models. In this survey, we provide a comprehensive review of the literature at the time of this writing, covering a broad spectrum of pioneering works for semantic and instance-level segmentation, including fully convolutional pixel-labeling networks, encoder-decoder architectures, multi-scale and pyramid based approaches, recurrent networks, visual attention models, and generative models in adversarial settings. We investigate the similarity, strengths and challenges of these deep learning models, examine the most widely used datasets, report performances, and discuss promising future research directions in this area.
In this work we address the task of semantic image segmentation with Deep Learning and make three main contributions that are experimentally shown to have substantial practical merit. First, we highlight convolution with upsampled filters, or 'atrous convolution', as a powerful tool in dense prediction tasks. Atrous convolution allows us to explicitly control the resolution at which feature responses are computed within Deep Convolutional Neural Networks. It also allows us to effectively enlarge the field of view of filters to incorporate larger context without increasing the number of parameters or the amount of computation. Second, we propose atrous spatial pyramid pooling (ASPP) to robustly segment objects at multiple scales. ASPP probes an incoming convolutional feature layer with filters at multiple sampling rates and effective fields-of-views, thus capturing objects as well as image context at multiple scales. Third, we improve the localization of object boundaries by combining methods from DCNNs and probabilistic graphical models. The commonly deployed combination of max-pooling and downsampling in DCNNs achieves invariance but has a toll on localization accuracy. We overcome this by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF), which is shown both qualitatively and quantitatively to improve localization performance. Our proposed "DeepLab" system sets the new state-of-art at the PASCAL VOC-2012 semantic image segmentation task, reaching 79.7% mIOU in the test set, and advances the results on three other datasets: PASCAL-Context, PASCAL-Person-Part, and Cityscapes. All of our code is made publicly available online.
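Atrous spatial pyramid pooling, as described above, probes the same feature map with convolutions at several dilation rates and combines the responses. A minimal PyTorch sketch follows; the rates (6, 12, 18, 24) and summation follow the abstract, but the channel sizes and usage are illustrative.

```python
import torch
import torch.nn as nn

class ASPP(nn.Module):
    """Minimal Atrous Spatial Pyramid Pooling: parallel 3x3 convolutions with
    different dilation (atrous) rates see the input at different effective
    fields of view; the branch outputs are summed."""

    def __init__(self, in_ch, out_ch, rates=(6, 12, 18, 24)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=r, dilation=r)
            for r in rates
        ])

    def forward(self, x):
        return sum(branch(x) for branch in self.branches)

features = torch.randn(1, 512, 64, 64)
logits = ASPP(512, 21)(features)   # e.g. 21 PASCAL VOC classes
print(logits.shape)                # torch.Size([1, 21, 64, 64])
```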
Depth is important information for autonomous vehicles to perceive obstacles. Because of the relatively low price and small size of monocular cameras, depth estimation from a single RGB image has attracted considerable interest from the research community. In recent years, the application of deep neural networks (DNNs) has significantly improved the accuracy of monocular depth estimation (MDE). State-of-the-art methods are usually built on top of complex and extremely deep network architectures, which require more computational resources and cannot run in real time without high-end GPUs. Although some researchers have tried to accelerate the running speed, the accuracy of depth estimation degrades because the compressed model cannot represent images sufficiently well. In addition, the inherent characteristics of the feature extractors used by existing approaches result in severe loss of spatial information in the produced feature maps, which also impairs the accuracy of depth estimation on small-sized images. In this study, we are motivated to design a novel and efficient convolutional neural network (CNN) that sequentially assembles two shallow encoder-decoder style subnetworks to address these problems. In particular, we emphasize the trade-off between MDE accuracy and speed. Extensive experiments have been conducted on the NYU Depth V2, KITTI, Make3D, and Unreal datasets. Compared with state-of-the-art methods that have extremely deep and complex architectures, the proposed network not only achieves comparable performance but also runs at a much faster speed on a single, less powerful GPU.
Self-supervised learning has shown very promising results for monocular depth estimation. Both scene structure and local details are important clues for high-quality depth estimation. Recent works suffer from a lack of explicit modeling of scene structure and proper handling of detail information, which leads to a performance bottleneck and blurry artefacts in the predicted results. In this paper, we propose the Channel-wise Attention-based Depth Estimation Network (CADepth-Net) with two effective contributions: 1) the structure perception module employs a self-attention mechanism to capture long-range dependencies and aggregates discriminative features along the channel dimension, explicitly enhancing the perception of scene structure and yielding better scene understanding and richer feature representations; 2) the detail emphasis module re-calibrates channel-wise feature maps and selectively emphasizes informative features, aiming to highlight crucial local detail information and fuse features from different levels more efficiently, resulting in more precise and sharper depth predictions. Extensive experiments validate the effectiveness of our method and show that our model achieves state-of-the-art results on the KITTI benchmark and the Make3D dataset.
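The "detail emphasis module" above re-calibrates channel-wise feature maps. A squeeze-and-excitation-style recalibration is a standard way to do this and is sketched below as a stand-in; CADepth-Net's exact module may differ, and the reduction ratio is an assumption.

```python
import torch
import torch.nn as nn

class ChannelRecalibration(nn.Module):
    """Channel-wise re-calibration: global pooling summarizes each channel,
    a small MLP predicts per-channel weights, and the feature map is
    rescaled channel by channel."""

    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        weights = self.mlp(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * weights

out = ChannelRecalibration(128)(torch.randn(1, 128, 48, 160))
```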
Most recent semantic segmentation methods adopt a fully-convolutional network (FCN) with an encoder-decoder architecture. The encoder progressively reduces the spatial resolution and learns more abstract/semantic visual concepts with larger receptive fields. Since context modeling is critical for segmentation, the latest efforts have been focused on increasing the receptive field, through either dilated/atrous convolutions or inserting attention modules. However, the encoder-decoder based FCN architecture remains unchanged. In this paper, we aim to provide an alternative perspective by treating semantic segmentation as a sequence-to-sequence prediction task. Specifically, we deploy a pure transformer (i.e., without convolution and resolution reduction) to encode an image as a sequence of patches. With the global context modeled in every layer of the transformer, this encoder can be combined with a simple decoder to provide a powerful segmentation model, termed SEgmentation TRansformer (SETR). Extensive experiments show that SETR achieves new state of the art on ADE20K (50.28% mIoU), Pascal Context (55.83% mIoU) and competitive results on Cityscapes. Particularly, we achieve the first position in the highly competitive ADE20K test server leaderboard on the day of submission.
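The core change SETR describes is encoding an image as a sequence of patches and processing it with a transformer that never reduces resolution. A minimal PyTorch sketch of that front end is given below; patch size, embedding width and depth are illustrative, and the decoder is only indicated in the comments.

```python
import torch
import torch.nn as nn

# An image becomes a sequence of patch embeddings via a strided convolution,
# and a standard transformer encoder processes the sequence at constant length.
patch, dim = 16, 256
to_patches = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True),
    num_layers=2,
)

img = torch.randn(1, 3, 224, 224)
tokens = to_patches(img).flatten(2).transpose(1, 2)   # 1 x (14*14) x dim
encoded = encoder(tokens)                             # same length: no downsampling
print(encoded.shape)                                  # torch.Size([1, 196, 256])
# A simple decoder can reshape `encoded` back to a 14 x 14 grid and upsample
# it to per-pixel class logits.
```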
Encoder-decoder models have been widely used in RGBD semantic segmentation, and most of them are designed as two-stream networks. In general, jointly reasoning about the color and geometric information of RGBD data is beneficial for semantic segmentation. However, most existing approaches fail to comprehensively exploit multimodal information in both the encoder and the decoder. In this paper, we propose a novel attention-based dual supervised decoder for RGBD semantic segmentation. In the encoder, we design a simple yet effective attention-based multimodal fusion module to extract and fuse deep multi-level paired complementary information. To learn more robust deep representations and richer multimodal information, we introduce a dual-branch decoder to effectively leverage the correlations and complementary cues of the different tasks. Extensive experiments on the NYUDv2 and SUN-RGBD datasets demonstrate that our method achieves superior performance against state-of-the-art methods.
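As an illustration of attention-based multimodal fusion, the sketch below gates the RGB and depth streams with channel attention computed from both before summing them. This is a generic pattern, not the specific module proposed in the paper above.

```python
import torch
import torch.nn as nn

class AttentionFusion(nn.Module):
    """Generic attention-based RGB-D fusion: channel gates predicted from the
    concatenated modalities weight each stream before summation."""

    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(2 * channels, 2 * channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, rgb_feat, depth_feat):
        g = self.gate(torch.cat([rgb_feat, depth_feat], dim=1))
        g_rgb, g_depth = g.chunk(2, dim=1)
        return rgb_feat * g_rgb + depth_feat * g_depth

fused = AttentionFusion(64)(torch.randn(1, 64, 30, 40), torch.randn(1, 64, 30, 40))
```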
Contextual information is vital in visual understanding problems, such as semantic segmentation and object detection. We propose a Criss-Cross Network (CCNet) for obtaining full-image contextual information in a very effective and efficient way. Concretely, for each pixel, a novel criss-cross attention module harvests the contextual information of all the pixels on its criss-cross path. By taking a further recurrent operation, each pixel can finally capture the full-image dependencies. Besides, a category consistent loss is proposed to enforce the criss-cross attention module to produce more discriminative features. Overall, CCNet has the following merits: 1) GPU memory friendly. Compared with the non-local block, the proposed recurrent criss-cross attention module requires 11× less GPU memory usage. 2) High computational efficiency. The recurrent criss-cross attention significantly reduces FLOPs by about 85% of the non-local block. 3) State-of-the-art performance. We conduct extensive experiments on semantic segmentation benchmarks including Cityscapes, ADE20K, human parsing benchmark LIP, instance segmentation benchmark COCO, video segmentation benchmark CamVid. In particular, our CCNet achieves mIoU scores of 81.9%, 45.76% and 55.47% on the Cityscapes test set, the ADE20K validation set and the LIP validation set respectively, which are new state-of-the-art results. The source codes are available at https://github.com/speedinghzl/CCNet.
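The key idea of criss-cross attention is that each pixel aggregates information only from pixels in its own row and column. A naive PyTorch sketch is given below; unlike the real CCNet module it scores the center pixel twice, has no recurrence, and uses none of the paper's efficiency tricks, so it is for illustration only.

```python
import torch
import torch.nn as nn

class CrissCrossAttentionSketch(nn.Module):
    """Naive criss-cross attention: each pixel attends to its row and column."""

    def __init__(self, channels):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 8, 1)
        self.key = nn.Conv2d(channels, channels // 8, 1)
        self.value = nn.Conv2d(channels, channels, 1)

    def forward(self, x):
        q, k, v = self.query(x), self.key(x), self.value(x)
        # Row attention: for each row h, the W positions attend to each other.
        row_logits = torch.einsum('bchw,bchv->bhwv', q, k)
        row_ctx = torch.einsum('bhwv,bchv->bchw', row_logits.softmax(-1), v)
        # Column attention: for each column w, the H positions attend to each other.
        col_logits = torch.einsum('bchw,bcgw->bwhg', q, k)
        col_ctx = torch.einsum('bwhg,bcgw->bchw', col_logits.softmax(-1), v)
        return x + row_ctx + col_ctx

out = CrissCrossAttentionSketch(64)(torch.randn(1, 64, 32, 32))
```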
Deep Convolutional Neural Networks (DCNNs) have recently shown state of the art performance in high level vision tasks, such as image classification and object detection. This work brings together methods from DCNNs and probabilistic graphical models for addressing the task of pixel-level classification (also called "semantic image segmentation"). We show that responses at the final layer of DCNNs are not sufficiently localized for accurate object segmentation. This is due to the very invariance properties that make DCNNs good for high level tasks. We overcome this poor localization property of deep networks by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF). Qualitatively, our "DeepLab" system is able to localize segment boundaries at a level of accuracy which is beyond previous methods. Quantitatively, our method sets the new state-of-art at the PASCAL VOC-2012 semantic image segmentation task, reaching 71.6% IOU accuracy in the test set. We show how these results can be obtained efficiently: Careful network re-purposing and a novel application of the 'hole' algorithm from the wavelet community allow dense computation of neural net responses at 8 frames per second on a modern GPU.
In this paper, we address the scene segmentation task by capturing rich contextual dependencies based on the self-attention mechanism. Unlike previous works that capture contexts by multi-scale feature fusion, we propose a Dual Attention Network (DANet) to adaptively integrate local features with their global dependencies. Specifically, we append two types of attention modules on top of dilated FCN, which model the semantic interdependencies in spatial and channel dimensions respectively. The position attention module selectively aggregates the feature at each position by a weighted sum of the features at all positions. Similar features would be related to each other regardless of their distances. Meanwhile, the channel attention module selectively emphasizes interdependent channel maps by integrating associated features among all channel maps. We sum the outputs of the two attention modules to further improve feature representation which contributes to more precise segmentation results. We achieve new state-of-the-art segmentation performance on three challenging scene segmentation datasets, i.e., Cityscapes, PASCAL Context and COCO Stuff dataset. In particular, a Mean IoU score of 81.5% on Cityscapes test set is achieved without using coarse data.
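The two mechanisms described above, position attention over all spatial locations and channel attention over all channel maps, reduce to two similarity-weighted sums. The sketch below shows both in bare form; the learned query/key/value projections and the learnable residual scaling used in the actual DANet modules are omitted for brevity.

```python
import torch

def position_attention(x):
    """Each spatial location is updated by a weighted sum over all locations,
    with weights given by feature similarity (HW x HW attention)."""
    b, c, h, w = x.shape
    flat = x.view(b, c, h * w)
    attn = torch.softmax(flat.transpose(1, 2) @ flat, dim=-1)   # B x HW x HW
    return (flat @ attn.transpose(1, 2)).view(b, c, h, w)

def channel_attention(x):
    """Each channel map is updated by a weighted sum over all channel maps,
    with weights given by channel similarity (C x C attention)."""
    b, c, h, w = x.shape
    flat = x.view(b, c, h * w)
    attn = torch.softmax(flat @ flat.transpose(1, 2), dim=-1)   # B x C x C
    return (attn @ flat).view(b, c, h, w)

x = torch.randn(1, 64, 32, 32)
out = position_attention(x) + channel_attention(x)  # DANet sums the two branches
```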
Visual representation learning is the key to solving various vision problems. Relying on the seminal grid-structure prior, convolutional neural networks (CNNs) have been the de facto standard architecture of most deep vision models. For instance, classical semantic segmentation methods often adopt a fully convolutional network (FCN) with an encoder-decoder architecture. The encoder progressively reduces the spatial resolution and learns more abstract visual concepts with larger receptive fields. Since context modeling is critical for segmentation, the latest efforts have focused on increasing the receptive field, either through dilated (i.e., atrous) convolutions or by inserting attention modules. However, the FCN-based architecture remains unchanged. In this paper, we aim to provide an alternative perspective by treating visual representation learning as a sequence-to-sequence prediction task. Specifically, we deploy a pure transformer to encode an image as a sequence of patches, without local convolution and resolution reduction. With the global context modeled in every layer of the transformer, stronger visual representations can be learned to better tackle vision tasks. In particular, our segmentation model, termed SEgmentation TRansformer (SETR), excels on ADE20K (50.28% mIoU, the first position on the test leaderboard on the day of submission) and Pascal Context (55.83% mIoU), and reaches competitive results on Cityscapes. Further, we formulate a family of Hierarchical Local-Global (HLG) Transformers, characterized by local attention within windows and global attention across windows in a hierarchical and pyramidal architecture. Extensive experiments show that our method achieves appealing performance on a variety of visual recognition tasks (e.g., image classification, object detection, instance segmentation and semantic segmentation).
Co-occurrent visual patterns make context aggregation an essential paradigm for semantic segmentation. Existing studies focus on modeling the contexts within an image, while ignoring the valuable semantics of the corresponding category beyond the image. To this end, we propose a novel soft mining of contextual information beyond the image paradigm, named MCIBI++, to further boost pixel-level representations. Specifically, we first set up a dynamically updated memory module to store the dataset-level distribution information of various categories, and then leverage this information to yield dataset-level category representations during the network forward pass. After that, we generate a class probability distribution for each pixel representation and conduct dataset-level context aggregation with the class probability distribution as weights. Finally, the original pixel representations are augmented with the aggregated dataset-level and the conventional image-level contextual information. Moreover, at the inference phase, we additionally design a coarse-to-fine iterative inference strategy to further boost the segmentation results. MCIBI++ can be easily incorporated into existing segmentation frameworks and brings consistent performance improvements. It can also be extended to video semantic segmentation frameworks with considerable improvements over the baselines. Equipped with MCIBI++, we achieve state-of-the-art performance on seven challenging image or video semantic segmentation benchmarks.
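The dataset-level aggregation step above mixes stored per-class representations using each pixel's class probability distribution as weights. A simplified sketch of that step is shown below; the memory here is just a K x D matrix of class prototypes, whereas the actual MCIBI++ memory stores richer, dynamically updated distribution statistics.

```python
import torch

def dataset_level_context(pixel_feats, class_probs, memory):
    """Mix per-class memory representations per pixel, weighted by that pixel's
    class probabilities, and concatenate the result to the pixel features.
    pixel_feats: B x D x H x W, class_probs: B x K x H x W, memory: K x D."""
    context = torch.einsum('bkhw,kd->bdhw', class_probs, memory)
    return torch.cat([pixel_feats, context], dim=1)

feats = torch.randn(2, 256, 64, 64)
probs = torch.softmax(torch.randn(2, 19, 64, 64), dim=1)   # e.g. 19 Cityscapes classes
memory = torch.randn(19, 256)                               # one prototype per class
augmented = dataset_level_context(feats, probs, memory)     # 2 x 512 x 64 x 64
```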
Aggregating information from features across different layers is an essential operation for dense prediction models. Despite its limited expressiveness, feature concatenation dominates the choice of aggregation operations. In this paper, we introduce Attentive Feature Aggregation (AFA) to fuse different network layers with more expressive non-linear operations. AFA exploits both spatial and channel attention to compute weighted averages of the layer activations. Inspired by neural volume rendering, we further extend AFA with Scale-Space Rendering (SSR) to perform late fusion of multi-scale predictions. AFA is applicable to a wide range of existing network designs. Our experiments show consistent and significant improvements on challenging semantic segmentation benchmarks, including Cityscapes, BDD100K and Mapillary Vistas, at negligible computational and parameter overhead. In particular, AFA improves the performance of the Deep Layer Aggregation (DLA) model by nearly 6% mIoU on Cityscapes. Our experimental analysis shows that AFA learns to progressively refine the segmentation maps and improve boundary details, leading to new state-of-the-art results on the BSDS500 and NYUDv2 boundary detection benchmarks. Code and video resources are available at http://vis.xyz/pub/dla-afa.
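The contrast drawn above is between concatenating two layers' features and computing an attention-weighted average of them. The sketch below illustrates the latter for two inputs; the actual AFA design (separate spatial and channel attention plus scale-space rendering) is richer than this two-input example.

```python
import torch
import torch.nn as nn

class AttentiveFusion(nn.Module):
    """Fuse two feature maps by an attention-weighted average instead of
    concatenation: a gate predicted from both inputs decides, per position
    and channel, how to mix them."""

    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, shallow, deep):
        a = self.gate(torch.cat([shallow, deep], dim=1))   # per-pixel, per-channel weight
        return a * shallow + (1.0 - a) * deep              # weighted average, not concat

fused = AttentiveFusion(128)(torch.randn(1, 128, 64, 64), torch.randn(1, 128, 64, 64))
```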
In visual recognition tasks, few-shot learning requires the ability to learn object categories from only a few support examples. Its renewed popularity, driven by the development of deep learning, has mostly concerned image classification. This work focuses on few-shot semantic segmentation, which remains a largely unexplored field. Recent advances are usually limited to single-class few-shot segmentation. In this paper, we first present a novel multi-way (class) encoding and decoding architecture which effectively fuses multi-scale query information and multi-class support information into one query-support embedding; multi-class segmentation is decoded directly from this embedding. To obtain better feature fusion, a multi-level attention mechanism is proposed within the architecture, comprising attention for support-feature modulation and attention for multi-scale combination. Finally, to enhance the learning of the embedding space, an additional pixel-wise metric learning module is introduced, with a triplet loss formulated on the pixel-level embeddings of the input image. Extensive experiments on the standard benchmarks PASCAL-5i and COCO-20i show clear benefits of our method over the state of the art.
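A triplet loss on pixel embeddings, as mentioned above, pulls same-class pixel embeddings together and pushes different-class ones apart by a margin. Below is a minimal sketch; the margin value and the way triplets are sampled are illustrative assumptions, not the paper's exact settings.

```python
import torch
import torch.nn.functional as F

def pixel_triplet_loss(anchor, positive, negative, margin=0.3):
    """Triplet loss on pixel embeddings: anchor/positive come from pixels of
    the same class, negative from a different class."""
    d_pos = F.pairwise_distance(anchor, positive)
    d_neg = F.pairwise_distance(anchor, negative)
    return F.relu(d_pos - d_neg + margin).mean()

emb = torch.randn(90, 64)                      # 90 pixel embeddings of dimension 64
loss = pixel_triplet_loss(emb[:30], emb[30:60], emb[60:90])
```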
Non-local self-attention methods have recently been shown to be effective for capturing long-range dependencies for semantic segmentation. These methods usually form a similarity map of R^{C×C} (by compressing the spatial dimensions) or R^{HW×HW} (by compressing the channels) to describe feature relations along either the channel or the spatial dimensions, where C is the number of channels and H and W are the spatial dimensions of the input feature map. However, such practices tend to condense feature dependencies along the other dimensions, causing attention missing, which may lead to inferior results for small or thin categories or inconsistent segmentation inside large objects. To address this problem, we propose a new approach, namely the Fully Attentional Network (FLANet), to encode both spatial and channel attention in a single similarity map while maintaining high computational efficiency. Specifically, for each channel map, our FLANet harvests feature responses from all other channel maps, together with the associated spatial positions, through a novel fully attentional module. Our new method achieves state-of-the-art performance on three challenging semantic segmentation datasets, i.e., 83.6%, 46.99% and 88.5% on the Cityscapes test set, the ADE20K validation set and the PASCAL VOC test set, respectively.
This paper addresses the problem of estimating the depth map of a scene given a single RGB image. We propose a fully convolutional architecture, encompassing residual learning, to model the ambiguous mapping between monocular images and depth maps. In order to improve the output resolution, we present a novel way to efficiently learn feature map up-sampling within the network. For optimization, we introduce the reverse Huber loss that is particularly suited for the task at hand and driven by the value distributions commonly present in depth maps. Our model is composed of a single architecture that is trained end-to-end and does not rely on post-processing techniques, such as CRFs or other additional refinement steps. As a result, it runs in real-time on images or videos. In the evaluation, we show that the proposed model contains fewer parameters and requires fewer training data than the current state of the art, while outperforming all approaches on depth estimation. Code and models are publicly available.
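The reverse Huber (berHu) loss mentioned above behaves like L1 for small residuals and like a scaled L2 beyond a threshold. A sketch of a common formulation is shown below; setting the threshold to 20% of the largest absolute residual in the batch is one usual choice and should be taken as an assumption here.

```python
import torch

def berhu_loss(pred, target):
    """Reverse Huber (berHu) loss for depth regression: L1 below the threshold
    c, scaled L2 above it, with c chosen per batch from the maximal residual."""
    diff = torch.abs(pred - target)
    c = (0.2 * diff.max()).clamp(min=1e-6)      # avoid division by zero
    l2 = (diff ** 2 + c ** 2) / (2 * c)
    return torch.where(diff <= c, diff, l2).mean()

pred = torch.rand(4, 1, 60, 80) * 10.0
target = torch.rand(4, 1, 60, 80) * 10.0
loss = berhu_loss(pred, target)
```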
Semantic segmentation is the computer vision task of assigning each pixel of an image to a class, and it should be performed with both accuracy and efficiency. Most existing deep FCNs require heavy computation and are very power-hungry, making them unsuitable for real-time applications on portable devices. This project analyzes current semantic segmentation models to explore the feasibility of applying them to emergency response during catastrophic events. We compare the performance of real-time semantic segmentation models with non-real-time counterparts on aerial images under adverse settings. Furthermore, we train several models on the FloodNet dataset, containing UAV images captured after Hurricane Harvey, and benchmark their performance on specific classes such as flooded vs. non-flooded buildings and flooded vs. non-flooded roads. In this project, we developed a real-time UNet based model and deployed that network on a Jetson AGX Xavier module.