Vision-based semantic segmentation of waterbodies and nearby related objects provides important information for managing water resources and handling flooding emergencies. However, the lack of large-scale labeled training and testing datasets for water-related categories prevents researchers from studying water-related issues in computer vision. To address this problem, we present ATLANTIS, a new benchmark for semantic segmentation of waterbodies and related objects. ATLANTIS consists of 5,195 images of waterbodies, together with high-quality pixel-level manual annotations of 56 object classes, including 17 classes of man-made objects, 18 classes of natural objects, and 21 general classes. We analyze ATLANTIS in detail and evaluate several state-of-the-art semantic segmentation networks on our benchmark. In addition, a novel deep neural network, AquaNet, is developed for waterbody semantic segmentation by processing aquatic and non-aquatic regions in two different paths. AquaNet also incorporates low-level feature modulation and cross-path modulation to enhance feature representation. Experimental results show that the proposed AquaNet outperforms other state-of-the-art semantic segmentation networks on ATLANTIS. We claim that ATLANTIS is the largest waterbody image dataset for semantic segmentation, offering a wide variety of water and water-related classes, and that it will benefit researchers in both computer vision and water resources engineering.
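To make the two-path idea more concrete, below is a minimal PyTorch sketch of a dual-path decoder with a simple cross-path modulation gate. It is only an illustration of the concept described in the abstract, not the authors' AquaNet implementation; the module names (`CrossPathModulation`, `TwoPathHead`), channel sizes, and gating scheme are assumptions.

```python
# Hypothetical sketch of a two-path decoder in the spirit of AquaNet: one path for
# aquatic regions, one for non-aquatic regions, plus a simple cross-path modulation.
# Module names, layer sizes, and the gating scheme are assumptions, not the authors' code.
import torch
import torch.nn as nn

class CrossPathModulation(nn.Module):
    """Gates one path's features with a sigmoid signal derived from the other path."""
    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Sequential(nn.Conv2d(channels, channels, 1), nn.Sigmoid())

    def forward(self, x, other):
        return x * self.gate(other)

class TwoPathHead(nn.Module):
    def __init__(self, in_channels=256, num_classes=56):
        super().__init__()
        self.aquatic = nn.Conv2d(in_channels, in_channels, 3, padding=1)
        self.non_aquatic = nn.Conv2d(in_channels, in_channels, 3, padding=1)
        self.mod_a = CrossPathModulation(in_channels)
        self.mod_n = CrossPathModulation(in_channels)
        self.classifier = nn.Conv2d(2 * in_channels, num_classes, 1)

    def forward(self, feats):
        a = torch.relu(self.aquatic(feats))
        n = torch.relu(self.non_aquatic(feats))
        a, n = self.mod_a(a, n), self.mod_n(n, a)   # exchange information between paths
        return self.classifier(torch.cat([a, n], dim=1))

# Example: backbone features at 1/8 resolution of a 512x512 image
logits = TwoPathHead()(torch.randn(1, 256, 64, 64))  # -> (1, 56, 64, 64)
```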
Semantic understanding of visual scenes is one of the holy grails of computer vision. Despite efforts of the community in data collection, there are still few image datasets covering a wide range of scenes and object categories with pixel-wise annotations for scene understanding. In this work, we present a densely annotated dataset, ADE20K, which spans diverse annotations of scenes, objects, parts of objects, and in some cases even parts of parts. In total there are 25k images of complex everyday scenes containing a variety of objects in their natural spatial context. On average there are 19.5 instances and 10.5 object classes per image. Based on ADE20K, we construct benchmarks for scene parsing and instance segmentation. We provide baseline performances on both benchmarks and re-implement state-of-the-art models for open source. We further evaluate the effect of synchronized batch normalization and find that a reasonably large batch size is crucial for semantic segmentation performance. We show that the networks trained on ADE20K are able to segment a wide variety of scenes and objects.
High-resolution satellite images can provide abundant and detailed spatial information for land cover classification, which is particularly important for studying complicated built environments. However, due to complicated land cover patterns, the costly collection of training samples, and the severe distribution shifts of satellite imagery, few studies have applied high-resolution images to land cover mapping with detailed categories at large scale. To fill this gap, we present a large-scale land cover dataset, Five-Billion-Pixels. It contains more than 5 billion labeled pixels from 150 high-resolution Gaofen-2 (4 m) satellite images, annotated in a 24-category system covering artificial-constructed, agricultural, and natural classes. In addition, we propose a deep-learning-based unsupervised domain adaptation approach that can transfer classification models trained on a labeled dataset (referred to as the source domain) to unlabeled data (referred to as the target domain) for large-scale land cover mapping. Specifically, we introduce an end-to-end Siamese network that employs dynamic pseudo-label assignment and class balancing strategies to perform adaptive domain joint learning. To validate the generalizability of our dataset and the proposed approach across different sensors and different geographical regions, we carry out land cover mapping over five megacities in China and five cities in five other Asian countries, using PlanetScope (3 m), Gaofen-1 (8 m), and Sentinel-2 (10 m) satellite images. Over a total study area of 60,000 square kilometers, the experiments show encouraging results even though the input images are entirely unlabeled. The proposed approach, trained on the Five-Billion-Pixels dataset, enables high-quality and detailed land cover mapping across the whole of China and other Asian countries.
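As a rough illustration of how class-balanced pseudo-labeling for unsupervised domain adaptation can look in practice, here is a small PyTorch sketch that keeps only the most confident fraction of target-domain pixels per class. The per-class quantile thresholding and the `keep_ratio` parameter are assumptions; the paper's dynamic assignment strategy is not reproduced here.

```python
# Minimal sketch of class-balanced pseudo-label assignment for UDA, assuming softmax
# outputs from a segmentation model on unlabeled target-domain images. The per-class
# quantile threshold is an assumption inspired by the paper's description of
# "dynamic pseudo-label assignment and class balancing".
import torch

def pseudo_labels(probs, keep_ratio=0.5, ignore_index=255):
    """probs: (N, C, H, W) softmax probabilities on target-domain images."""
    conf, label = probs.max(dim=1)                    # per-pixel confidence and class
    labels = torch.full_like(label, ignore_index)
    for c in range(probs.shape[1]):
        mask = label == c
        if mask.any():
            # class-wise threshold: keep the most confident fraction of pixels per class
            thr = torch.quantile(conf[mask], 1.0 - keep_ratio)
            labels[mask & (conf >= thr)] = c
    return labels  # unreliable pixels stay at ignore_index and are excluded from the loss

target_probs = torch.softmax(torch.randn(2, 24, 128, 128), dim=1)
pl = pseudo_labels(target_probs)  # -> (2, 128, 128) sparse pseudo-label map
```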
Semantic classes can be either things (objects with a well-defined shape, e.g. car, person) or stuff (amorphous background regions, e.g. grass, sky). While many classification and detection works focus on thing classes, less attention has been given to stuff classes. Nonetheless, stuff classes are important as they allow us to explain important aspects of an image, including (1) scene type; (2) which thing classes are likely to be present and their location (through contextual reasoning); (3) physical attributes, material types and geometric properties of the scene. To understand stuff and things in context we introduce COCO-Stuff, which augments all 164K images of the COCO 2017 dataset with pixel-wise annotations for 91 stuff classes. We introduce an efficient stuff annotation protocol based on superpixels, which leverages the original thing annotations. We quantify the speed versus quality trade-off of our protocol and explore the relation between annotation time and boundary complexity. Furthermore, we use COCO-Stuff to analyze: (a) the importance of stuff and thing classes in terms of their surface cover and how frequently they are mentioned in image captions; (b) the spatial relations between stuff and things, highlighting the rich contextual relations that make our dataset unique; (c) the performance of a modern semantic segmentation method on stuff and thing classes, and whether stuff is easier to segment than things.
Humans recognize the visual world at multiple levels: we effortlessly categorize scenes and detect objects inside, while also identifying the textures and surfaces of the objects along with their different compositional parts. In this paper, we study a new task called Unified Perceptual Parsing, which requires machine vision systems to recognize as many visual concepts as possible from a given image. A multi-task framework called UPerNet and a training strategy are developed to learn from heterogeneous image annotations. We benchmark our framework on Unified Perceptual Parsing and show that it is able to effectively segment a wide range of concepts from images. The trained networks are further applied to discover visual knowledge in natural scenes.
While the design of sustainable and resilient urban built environments is increasingly promoted around the world, significant data gaps have challenged research on pressing sustainability issues. Sidewalks are known to have strong economic and environmental impacts; however, most cities lack a spatial catalog of their surfaces due to the cost-prohibitive and time-consuming nature of data collection. Recent advances in computer vision, together with the availability of street-level imagery, provide new opportunities for cities to extract large-scale built-environment data with lower implementation costs and higher accuracy. In this paper, we propose CitySurfaces, an active-learning-based framework that leverages computer vision techniques for classifying sidewalk materials using widely available street-level images. We trained the framework on images from New York City and Boston, and the evaluation results show a 90.5% mIoU score. Furthermore, we evaluated the framework on images from six different cities, demonstrating that it can be applied to regions with distinct urban fabrics, even outside the domain of the training data. CitySurfaces can provide researchers and city agencies with a low-cost, accurate, and scalable method to collect sidewalk material data, which plays a critical role in addressing major sustainability issues, including climate change and surface water management.
Deep learning approaches have shown promising results in remote sensing high spatial resolution (HSR) land-cover mapping. However, urban and rural scenes can present completely different geographical landscapes, and the inadequate generalizability of these algorithms hinders city-level or national-level mapping. Most of the existing HSR land-cover datasets mainly promote research on learning semantic representations, thereby ignoring model transferability. In this paper, we introduce the Land-cOver Domain Adaptive semantic segmentation (LoveDA) dataset to advance semantic and transferable learning. The LoveDA dataset contains 5,987 HSR images with 166,768 annotated objects from three different cities. Compared with existing datasets, the LoveDA dataset encompasses two domains (urban and rural), which brings considerable challenges due to: 1) multi-scale objects; 2) complex background samples; and 3) inconsistent class distributions. The LoveDA dataset is suitable for both land-cover semantic segmentation and unsupervised domain adaptation (UDA) tasks. Accordingly, we benchmarked the LoveDA dataset with eleven semantic segmentation methods and eight UDA methods. Some exploratory studies, including multi-scale architectures and strategies, additional background supervision, and pseudo-label analysis, were also carried out to address these challenges. The code and data are available at https://github.com/junjue-wang/loveda.
Image segmentation is a key topic in image processing and computer vision with applications such as scene understanding, medical image analysis, robotic perception, video surveillance, augmented reality, and image compression, among many others. Various algorithms for image segmentation have been developed in the literature. Recently, due to the success of deep learning models in a wide range of vision applications, there has been a substantial body of work aimed at developing image segmentation approaches using deep learning models. In this survey, we provide a comprehensive review of the literature at the time of this writing, covering a broad spectrum of pioneering works for semantic and instance-level segmentation, including fully convolutional pixel-labeling networks, encoder-decoder architectures, multi-scale and pyramid-based approaches, recurrent networks, visual attention models, and generative models in adversarial settings. We investigate the similarities, strengths and challenges of these deep learning models, examine the most widely used datasets, report performances, and discuss promising future research directions in this area.
Scene parsing, or recognizing and segmenting objects and stuff in an image, is one of the key problems in computer vision. Despite the community's efforts in data collection, there are still few image datasets covering a wide range of scenes and object categories with dense and detailed annotations for scene parsing. In this paper, we introduce and analyze the ADE20K dataset, spanning diverse annotations of scenes, objects, parts of objects, and in some cases even parts of parts. A scene parsing benchmark is built upon the ADE20K with 150 object and stuff classes included. Several segmentation baseline models are evaluated on the benchmark. A novel network design called Cascade Segmentation Module is proposed to parse a scene into stuff, objects, and object parts in a cascade and improve over the baselines. We further show that the trained scene parsing networks can lead to applications such as image content removal and scene synthesis.
This paper presents OmniCity, a new dataset for omnipotent city understanding from multi-level and multi-view images. More precisely, OmniCity contains multi-view satellite images as well as street-level panorama and mono-view images, constituting over 100K pixel-wise annotated images that are well aligned and collected from 25K geo-locations in New York City. To alleviate the substantial pixel-wise annotation effort, we propose an efficient street-view image annotation pipeline that leverages the existing label maps of the satellite view and the transformation relations between different views (satellite, panorama, and mono-view). With the new OmniCity dataset, we provide benchmarks for a variety of tasks, including building footprint extraction, height estimation, and building plane/instance/fine-grained segmentation. We also analyze the impact of the view on each task, the performance of different models, the limitations of existing methods, and so on. Compared with existing multi-level and multi-view benchmarks, OmniCity contains a larger number of images with richer annotation types and more views, provides more baseline results obtained with state-of-the-art models, and introduces a novel task for fine-grained building instance segmentation on street-level panorama images. Moreover, OmniCity provides new problem settings for existing tasks, such as cross-view matching, synthesis, segmentation, and detection, and facilitates the development of new methods for large-scale city understanding, reconstruction, and simulation. The OmniCity dataset as well as the benchmarks will be available at https://city-super.github.io/omnicity.
The Mapillary Vistas Dataset is a novel, large-scale street-level image dataset containing 25,000 high-resolution images annotated into 66 object categories with additional, instance-specific labels for 37 classes. Annotation is performed in a dense and fine-grained style by using polygons for delineating individual objects. Our dataset is 5× larger than the total amount of fine annotations for Cityscapes and contains images from all around the world, captured at various conditions regarding weather, season and daytime. Images come from different imaging devices (mobile phones, tablets, action cameras, professional capturing rigs) and differently experienced photographers. In such a way, our dataset has been designed and compiled to cover diversity, richness of detail and geographic extent. As default benchmark tasks, we define semantic image segmentation and instance-specific image segmentation, aiming to significantly further the development of state-of-the-art methods for visual road-scene understanding.
Given an aerial image, aerial scene parsing (ASP) aims to interpret the semantic structure of the image content, e.g., by assigning a semantic label to every pixel of the image. With the popularization of data-driven methods, the past decades have witnessed promising progress on ASP by approaching it with schemes of either tile-level scene classification or segmentation-based image analysis when using high-resolution aerial images. However, the former scheme often produces results with tile-wise boundaries, while the latter needs to handle the complex modeling process from pixels to semantics, which often requires large-scale and well-annotated image samples with pixel-wise semantic labels. In this paper, we address these issues in ASP from the perspective of moving from tile-level scene classification to pixel-wise semantic labeling. Specifically, we first revisit aerial image interpretation through a literature review. We then present a large-scale scene classification dataset that contains one million aerial images, termed Million-AID. With the proposed dataset, we also report benchmarking experiments using classical convolutional neural networks (CNNs). Finally, we perform ASP by unifying tile-level scene classification and object-based image analysis to achieve pixel-wise semantic labeling. Intensive experiments show that Million-AID is a challenging yet useful dataset that can serve as a benchmark for evaluating newly developed algorithms. When transferring knowledge from Million-AID, CNN models fine-tuned on Million-AID consistently outperform those pretrained on ImageNet for aerial scene classification. Moreover, our hierarchical multi-task learning method achieves state-of-the-art pixel-wise classification on the challenging GID dataset, bridging tile-level scene classification toward pixel-wise semantic labeling for aerial image interpretation.
Semantic segmentation is one of the key tasks in computer vision, in which a category label is assigned to each pixel in an image. Despite significant recent progress, most existing methods still face two challenging issues: 1) the sizes of objects and stuff in an image can be very diverse, demanding that multi-scale features be incorporated into fully convolutional networks (FCNs); 2) pixels close to or at the boundaries of objects/stuff are hard to classify due to the intrinsic weakness of convolutional networks. To address the first issue, we propose a new Multi-Receptive Field Module (MRFM) that explicitly takes multi-scale features into account. For the second issue, we design an edge-aware loss that is effective in distinguishing the boundaries of objects/stuff. With these two designs, our Multi-Receptive Field Network achieves new state-of-the-art results on two widely used semantic segmentation benchmark datasets. Specifically, we achieve a mean IoU of 83.0% on the Cityscapes dataset and 88.4% on the PASCAL VOC 2012 dataset.
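A multi-receptive-field block is commonly built from parallel dilated convolutions with different dilation rates; the sketch below shows one such construction. The specific branch configuration and dilation rates are assumptions, not the paper's exact MRFM design.

```python
# Illustrative multi-receptive-field block built from parallel dilated convolutions.
# The dilation rates and channel widths are assumptions, not the published MRFM.
import torch
import torch.nn as nn

class MultiReceptiveField(nn.Module):
    def __init__(self, in_ch, out_ch, dilations=(1, 2, 4, 8)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(in_ch, out_ch, 3, padding=d, dilation=d, bias=False),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
            )
            for d in dilations
        )
        self.fuse = nn.Conv2d(out_ch * len(dilations), out_ch, 1)

    def forward(self, x):
        # each branch sees a different receptive field; fuse them with a 1x1 conv
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))

y = MultiReceptiveField(256, 256)(torch.randn(1, 256, 64, 64))  # -> (1, 256, 64, 64)
```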
Scene parsing is challenging for unrestricted open vocabulary and diverse scenes. In this paper, we exploit the capability of global context information by different-region-based context aggregation through our pyramid pooling module together with the proposed pyramid scene parsing network (PSPNet). Our global prior representation is effective to produce good quality results on the scene parsing task, while PSPNet provides a superior framework for pixel-level prediction. The proposed approach achieves state-of-the-art performance on various datasets. It came first in ImageNet scene parsing challenge 2016, PASCAL VOC 2012 benchmark and Cityscapes benchmark. A single PSPNet yields the new record of mIoU accuracy 85.4% on PASCAL VOC 2012 and accuracy 80.2% on Cityscapes.
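The pyramid pooling module pools the backbone feature map at several grid sizes, projects each pooled map, upsamples it back, and concatenates everything with the input so that each location sees global context. The following is a simplified sketch of that idea; it omits details of the released PSPNet code such as dropout and exact channel counts.

```python
# Pyramid pooling in the spirit of PSPNet: pool at several grid sizes, project,
# upsample, and concatenate with the input. A simplified sketch, not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PyramidPooling(nn.Module):
    def __init__(self, in_ch, bins=(1, 2, 3, 6)):
        super().__init__()
        mid = in_ch // len(bins)
        self.stages = nn.ModuleList(
            nn.Sequential(nn.AdaptiveAvgPool2d(b),
                          nn.Conv2d(in_ch, mid, 1, bias=False),
                          nn.BatchNorm2d(mid), nn.ReLU(inplace=True))
            for b in bins
        )

    def forward(self, x):
        h, w = x.shape[2:]
        pooled = [F.interpolate(s(x), size=(h, w), mode="bilinear", align_corners=False)
                  for s in self.stages]
        return torch.cat([x] + pooled, dim=1)  # global context appended to local features

out = PyramidPooling(2048)(torch.randn(1, 2048, 60, 60))  # -> (1, 4096, 60, 60)
```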
The semantic image segmentation task presents a trade-off between test-time accuracy and training-time annotation cost. Detailed per-pixel annotations enable training accurate models but are very time-consuming to obtain; image-level class labels are an order of magnitude cheaper but result in less accurate models. We take a natural step from image-level annotation towards stronger supervision: we ask annotators to point to an object if one exists. We incorporate this point supervision along with a novel objectness potential in the training loss function of a CNN model. Experimental results on the PASCAL VOC 2012 benchmark reveal that the combined effect of point-level supervision and objectness potential yields an improvement of 12.9% mIOU over image-level supervision. Further, we demonstrate that models trained with point-level supervision are more accurate than models trained with image-level, squiggle-level or full supervision given a fixed annotation budget.
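To sketch how point supervision and an objectness prior can be combined in a single training loss, the snippet below adds a cross-entropy term over the few clicked pixels to a prior term that encourages the predicted background probability to track one minus the objectness map. The weighting, the background-channel convention, and the exact form of the prior term are assumptions and differ from the paper's formulation.

```python
# Hedged sketch of a loss combining point-level supervision with an objectness prior.
# The alpha weight and the way the objectness map is consumed are assumptions.
import torch
import torch.nn.functional as F

def point_supervised_loss(logits, point_labels, objectness, alpha=1.0, ignore_index=255):
    """
    logits:       (N, C, H, W) raw class scores
    point_labels: (N, H, W) sparse labels; all non-clicked pixels = ignore_index
    objectness:   (N, H, W) prior probability that a pixel belongs to any object
    """
    # supervised term on the handful of clicked points
    point_term = F.cross_entropy(logits, point_labels, ignore_index=ignore_index)
    # prior term: background probability should track (1 - objectness) everywhere
    p_bg = torch.softmax(logits, dim=1)[:, 0]          # assume channel 0 = background
    prior_term = F.binary_cross_entropy(1.0 - p_bg, objectness.clamp(0, 1))
    return point_term + alpha * prior_term

points = torch.full((2, 64, 64), 255, dtype=torch.long)
points[:, 10, 10] = 3   # one clicked point per image
loss = point_supervised_loss(torch.randn(2, 21, 64, 64), points, torch.rand(2, 64, 64))
```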
Deep learning algorithms have achieved great success in the semantic segmentation of very high-resolution (VHR) images. However, training these models generally requires a large amount of accurate pixel-wise annotations, which are very laborious and time-consuming to collect. To reduce the annotation burden, this paper proposes a consistency-regularized region-growing network (CRGNet) to achieve semantic segmentation of VHR images with point-level annotations. The key idea of CRGNet is to iteratively select unlabeled pixels with high confidence to expand the annotated area from the original sparse points. However, since there may be errors and noise in the expanded annotations, directly learning from them may mislead the training of the network. To this end, we further propose a consistency regularization strategy in which a base classifier and an expanded classifier are employed. Specifically, the base classifier is supervised by the original sparse annotations, while the expanded classifier aims to learn from the expanded annotations generated by the base classifier with a region-growing mechanism. Consistency regularization is thereby achieved by minimizing the discrepancy between the predictions of the base and expanded classifiers. We find that such a simple regularization strategy is very useful for controlling the quality of the region-growing mechanism. Extensive experiments on two benchmark datasets demonstrate that the proposed CRGNet significantly outperforms existing state-of-the-art methods. The code and pre-trained models are available online (https://github.com/yonghaoxu/crgnet).
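The consistency idea can be summarized as two classification heads trained on different label sets plus a term that penalizes their disagreement; a hedged sketch follows. The region-growing step that produces the expanded labels is not shown, and the specific loss forms (e.g., the KL term and its weight) are assumptions rather than CRGNet's actual objective.

```python
# Sketch of consistency regularization between a base classifier (sparse point labels)
# and an expanded classifier (region-grown labels). Loss forms and weights are assumptions.
import torch
import torch.nn.functional as F

def crg_style_loss(base_logits, exp_logits, sparse_labels, grown_labels,
                   lam=1.0, ignore_index=255):
    l_base = F.cross_entropy(base_logits, sparse_labels, ignore_index=ignore_index)
    l_exp = F.cross_entropy(exp_logits, grown_labels, ignore_index=ignore_index)
    # consistency: the expanded classifier should not drift away from the base one
    l_cons = F.kl_div(F.log_softmax(exp_logits, dim=1),
                      F.softmax(base_logits, dim=1).detach(), reduction="batchmean")
    return l_base + l_exp + lam * l_cons

base, exp = torch.randn(2, 6, 64, 64), torch.randn(2, 6, 64, 64)
sparse = torch.full((2, 64, 64), 255, dtype=torch.long)
sparse[:, 5, 5] = 1                              # a few clicked points
grown = torch.randint(0, 6, (2, 64, 64))         # stand-in for region-grown labels
loss = crg_style_loss(base, exp, sparse, grown)
```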
Image segmentation for video analysis plays an essential role in different research fields such as smart cities, healthcare, computer vision and geoscience, and remote sensing applications. In this regard, significant effort has recently been devoted to developing novel segmentation strategies; one of the latest outstanding achievements is panoptic segmentation, which arose from the fusion of semantic and instance segmentation. Explicitly, panoptic segmentation is currently being studied to help obtain a more nuanced knowledge of image scenes for video surveillance, crowd counting, autonomous driving, and medical image analysis, and, in general, a deeper understanding of scenes. To this end, we present in this paper, to the best of the authors' knowledge, the first comprehensive review of existing panoptic segmentation methods. Accordingly, a well-defined taxonomy of existing panoptic techniques is provided based on the nature of the adopted algorithms, application scenarios, and primary objectives. The use of panoptic segmentation for annotating new datasets via pseudo-labeling is also discussed. Moving on, ablation studies are carried out to understand the panoptic methods from different perspectives. Moreover, evaluation metrics suitable for panoptic segmentation are discussed, and a comparison of the performance of existing solutions is provided to summarize the state of the art and identify their limitations and strengths. Finally, the challenges currently facing this technology and the future trends expected to attract considerable interest in the near future are elaborated, which can serve as a starting point for upcoming research studies. Papers providing code are available at: https://github.com/elharroussomar/awesome-panoptic-segmentation
Semantic segmentation is the computer vision task of assigning each pixel of an image to a class, and it should be performed with both accuracy and efficiency. Most existing deep FCNs involve heavy computation and are very power hungry, making them unsuitable for real-time applications on portable devices. This project analyzes current semantic segmentation models to explore the feasibility of applying them for emergency response during catastrophic events. We compare the performance of real-time semantic segmentation models with their non-real-time counterparts on aerial images captured under adverse conditions. Furthermore, we train several models on the FloodNet dataset, containing UAV images captured after Hurricane Harvey, and benchmark their execution on special classes such as flooded versus non-flooded buildings or flooded versus non-flooded roads. In this project, we developed a real-time UNet-based model and deployed that network on the Jetson AGX Xavier module.
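For illustration only, here is a compact U-Net-style encoder-decoder of the kind a lightweight real-time flood-mapping model might resemble; it is not the project's actual network, and the depth, channel widths, and the `num_classes=10` default are assumptions.

```python
# Compact U-Net-style encoder-decoder as an illustration of a lightweight real-time
# segmentation model. Architecture details are assumptions, not the project's network.
import torch
import torch.nn as nn

def block(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(inplace=True),
                         nn.Conv2d(cout, cout, 3, padding=1), nn.ReLU(inplace=True))

class TinyUNet(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.enc1, self.enc2 = block(3, 32), block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        self.dec = block(64 + 32, 32)
        self.head = nn.Conv2d(32, num_classes, 1)

    def forward(self, x):
        e1 = self.enc1(x)                                   # full resolution
        e2 = self.enc2(self.pool(e1))                       # 1/2 resolution
        d = self.dec(torch.cat([self.up(e2), e1], dim=1))   # skip connection
        return self.head(d)

out = TinyUNet()(torch.randn(1, 3, 256, 256))  # -> (1, 10, 256, 256)
```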
Co-occurring visual patterns make context aggregation an essential paradigm for semantic segmentation. Existing studies focus on modeling contexts within an image while neglecting the valuable semantics of the corresponding category beyond the image. To this end, we propose a novel soft mining contextual information beyond image paradigm, named MCIBI++, to further boost pixel-level representations. Specifically, we first set up a dynamically updated memory module to store the dataset-level distribution information of various categories, and then leverage this information to yield dataset-level category representations during the network forward pass. After that, we generate a class probability distribution for each pixel representation and conduct dataset-level context aggregation with the class probability distribution as weights. Finally, the aggregated dataset-level and the conventional image-level contextual information are used to augment the original pixel representations. Moreover, in the inference phase, we design a coarse-to-fine iterative inference strategy to further boost the segmentation results. MCIBI++ can be effortlessly incorporated into existing segmentation frameworks and brings consistent performance improvements. MCIBI++ can also be extended to video semantic segmentation frameworks with considerable improvements over the baselines. Equipped with MCIBI++, we achieve state-of-the-art performance on seven challenging image or video semantic segmentation benchmarks.
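The dataset-level aggregation can be pictured as a per-class memory bank queried with each pixel's predicted class distribution; the sketch below maintains such a memory with an exponential moving average and fuses the retrieved context back into the pixel features. The EMA update rule, the fusion layer, and the module name are assumptions, not the MCIBI++ implementation.

```python
# Rough sketch of dataset-level context aggregation: a memory of per-class representations
# is maintained across the dataset (simple EMA here, an assumption), and each pixel
# aggregates those representations weighted by its predicted class distribution.
import torch
import torch.nn as nn

class DatasetLevelContext(nn.Module):
    def __init__(self, num_classes, channels, momentum=0.999):
        super().__init__()
        self.register_buffer("memory", torch.zeros(num_classes, channels))
        self.momentum = momentum
        self.fuse = nn.Conv2d(2 * channels, channels, 1)

    @torch.no_grad()
    def update(self, feats, labels):
        """EMA update of the per-class memory from labeled pixels."""
        c_feats = feats.permute(0, 2, 3, 1).reshape(-1, feats.shape[1])
        flat = labels.reshape(-1)
        for c in flat.unique():
            if c < self.memory.shape[0]:
                mean_c = c_feats[flat == c].mean(dim=0)
                self.memory[c] = self.momentum * self.memory[c] + (1 - self.momentum) * mean_c

    def forward(self, feats, logits):
        weights = torch.softmax(logits, dim=1)                       # (N, K, H, W)
        context = torch.einsum("nkhw,kc->nchw", weights, self.memory)
        return self.fuse(torch.cat([feats, context], dim=1))

ctx = DatasetLevelContext(num_classes=19, channels=256)
feats, logits = torch.randn(1, 256, 64, 64), torch.randn(1, 19, 64, 64)
ctx.update(feats, torch.randint(0, 19, (1, 64, 64)))
enhanced = ctx(feats, logits)  # -> (1, 256, 64, 64)
```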
Cityscapes dataset (TU Dresden, www.cityscapes-dataset.net): 3,475 train/val images with fine annotations, 20,000 train images with coarse annotations, and 1,525 test images with fine annotations.