Few-shot semantic segmentation addresses the learning task in which only a few images with ground-truth pixel-level labels are available for the novel classes of interest. One is typically required to collect a large amount of data (i.e., base classes) with such ground-truth information, followed by meta-learning strategies to address the above learning task. When only image-level semantic labels can be observed during both training and testing, it is considered an even more challenging task of weakly supervised few-shot semantic segmentation. To address this problem, we propose a novel meta-learning framework, which predicts pseudo pixel-level segmentation masks from a limited amount of data and their semantic labels. More importantly, our learning scheme further exploits the produced pixel-level information for query image inputs with segmentation guarantees. Thus, our proposed learning model can be viewed as a pixel-level meta-learner. Through extensive experiments on benchmark datasets, we show that our model not only achieves satisfactory performance under fully supervised settings, but also performs favorably against state-of-the-art methods under weakly supervised settings.
Existing studies in weakly supervised semantic segmentation (WSSS) have utilized class activation maps (CAMs) to localize class objects. However, since a classification loss is insufficient for providing precise object regions, CAMs tend to be biased towards discriminative patterns (i.e., sparseness) and do not provide precise object boundary information (i.e., impreciseness). To resolve these limitations, we propose a novel framework (composed of a MainNet and a SupportNet) that derives pixel-level self-supervision from the given image-level supervision. In our framework, with the help of the proposed Regional Contrastive Module (RCM) and Multi-scale Attentive Module (MAM), the MainNet is trained by self-supervision from the SupportNet. The RCM extracts two forms of self-supervision from the SupportNet: (1) class region masks generated from the CAMs, and (2) class-wise prototypes obtained from the features according to the class region masks. Every pixel-wise feature of the MainNet is then trained by the prototypes in a contrastive manner, sharpening the resulting CAMs. The MAM utilizes CAMs inferred at multiple scales from the SupportNet as self-supervision to guide the MainNet; based on the dissimilarity between the multi-scale CAMs of the MainNet and the SupportNet, the CAMs from the MainNet are trained to expand to the less discriminative regions. The proposed method shows state-of-the-art WSSS performance on both the train and validation sets of the PASCAL VOC 2012 dataset. For reproducibility, the code will be made publicly available soon.
Despite the great progress made by deep CNNs in image semantic segmentation, they typically require a large number of densely-annotated images for training and are difficult to generalize to unseen object categories. Few-shot segmentation has thus been developed to learn to perform segmentation from only a few annotated examples. In this paper, we tackle the challenging few-shot segmentation problem from a metric learning perspective and present PANet, a novel prototype alignment network to better utilize the information of the support set. Our PANet learns class-specific prototype representations from a few support images within an embedding space and then performs segmentation over the query images through matching each pixel to the learned prototypes. With non-parametric metric learning, PANet offers high-quality prototypes that are representative for each semantic class and meanwhile discriminative for different classes. Moreover, PANet introduces a prototype alignment regularization between support and query. With this, PANet fully exploits knowledge from the support and provides better generalization on few-shot segmentation. Significantly, our model achieves mIoU scores of 48.1% and 55.7% on PASCAL-5$^i$ for 1-shot and 5-shot settings respectively, surpassing the state-of-the-art method by 1.8% and 8.6%.
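To make the prototype-matching step concrete, below is a minimal PyTorch sketch of masked average pooling and non-parametric cosine matching as the abstract describes them; the tensor shapes, the scaling factor alpha, and the two-class toy usage are illustrative assumptions, not PANet's exact implementation.

```python
import torch
import torch.nn.functional as F

def masked_average_pooling(feat, mask):
    # feat: (C, H, W) support feature map; mask: (H, W) binary mask.
    # Average the features over the masked region to get one prototype.
    mask = mask.unsqueeze(0).float()                      # (1, H, W)
    return (feat * mask).sum(dim=(1, 2)) / (mask.sum() + 1e-6)   # (C,)

def match_to_prototypes(query_feat, prototypes, alpha=20.0):
    # query_feat: (C, H, W); prototypes: (K, C), one per class (incl. background).
    # Cosine similarity of every query pixel to every prototype, then softmax.
    q = F.normalize(query_feat, dim=0).flatten(1)         # (C, H*W)
    p = F.normalize(prototypes, dim=1)                    # (K, C)
    logits = alpha * (p @ q)                              # (K, H*W)
    return logits.softmax(dim=0).view(len(prototypes), *query_feat.shape[1:])

# toy 1-way usage: foreground and background prototypes from one support image
feat = torch.randn(64, 32, 32)
mask = torch.rand(32, 32) > 0.5
fg = masked_average_pooling(feat, mask)
bg = masked_average_pooling(feat, ~mask)
probs = match_to_prototypes(torch.randn(64, 32, 32), torch.stack([bg, fg]))
```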
Semantic segmentation assigns a class label to each image pixel. This dense prediction problem requires large amounts of manually annotated data, which is often unavailable. Few-shot learning aims to learn the pattern of a new category with only a few annotated examples. In this paper, we formulate the few-shot semantic segmentation problem from 1-way (class) to N-way (classes). Inspired by few-shot classification, we propose a generalized framework for few-shot semantic segmentation with an alternative training scheme. The framework is based on prototype learning and metric learning. Our approach outperforms the baselines by a large margin and shows comparable performance for 1-way few-shot semantic segmentation on the PASCAL VOC 2012 dataset.
Few-shot semantic segmentation aims to segment novel-class objects given only a few labeled support images. Most advanced solutions exploit a metric learning framework that performs segmentation by matching each query feature to a learned class-specific prototype. However, this framework suffers from biased classification due to incomplete feature comparison. To address this issue, we present an adaptive prototype representation by introducing class-specific and class-agnostic prototypes, thereby constructing complete sample pairs with the query features for learning semantic alignment. This complementary feature learning manner effectively enriches the feature comparison and helps produce an unbiased segmentation model in the few-shot setting. It is implemented with a two-branch end-to-end network (i.e., a class-specific branch and a class-agnostic branch), which generates prototypes and then combines them with the query features to perform comparisons. Moreover, the proposed class-agnostic branch is simple yet effective: in practice, it can adaptively generate multiple class-agnostic prototypes for query images and learn feature alignment in a self-contrastive manner. Extensive experiments on PASCAL-5$^i$ and COCO-20$^i$ demonstrate the superiority of our method. Without sacrificing inference efficiency, our model achieves state-of-the-art results for few-shot semantic segmentation in both 1-shot and 5-shot settings.
Like other few-shot learning problems, few-shot segmentation aims to minimize the need for manual annotation, which is particularly costly in segmentation tasks. Even though the few-shot setting reduces this cost for novel test classes, there is still a need to annotate the training data. To alleviate this need, we propose a self-supervised training approach for learning few-shot segmentation models. We first use unsupervised saliency estimation to obtain pseudo-masks on images. We then train a simple prototype-based model over different splits of the pseudo-masks and augmentations of the images. Our extensive experiments show that the proposed approach achieves promising results, highlighting the potential of self-supervised training. To the best of our knowledge, this is the first work to address unsupervised few-shot segmentation on natural images.
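As a rough illustration of how an unsupervised saliency map can be turned into a training episode, the sketch below binarizes the map into a pseudo-mask and pairs the image with a flipped view of itself; the threshold and the flip augmentation are placeholder choices, not the paper's recipe.

```python
import torch

def make_episode(image, saliency, thresh=0.5):
    # image: (3, H, W) in [0, 1]; saliency: (H, W) from any unsupervised
    # saliency estimator. Binarize the saliency map into a pseudo-mask and
    # build a 1-shot episode from two views of the same image.
    s = (saliency - saliency.min()) / (saliency.max() - saliency.min() + 1e-6)
    mask = (s > thresh).long()                       # pseudo ground truth {0, 1}
    support = (image, mask)
    query = (torch.flip(image, dims=[2]),            # horizontally flipped view
             torch.flip(mask, dims=[1]))
    return support, query
```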
Few-shot segmentation aims to learn a segmentation model that can generalize to novel classes with only a few training images. In this paper, we propose a Cross-Reference and Local-Global Conditional Network (CRCNet) for few-shot segmentation. Unlike previous works that only predict the mask of the query image, our proposed model concurrently makes predictions for both the support image and the query image. With a cross-reference mechanism, our network can better find the co-occurring objects in the two images, thus helping the few-shot segmentation task. To further improve the feature comparison, we develop a local-global conditional module to capture both global and local relations. We also develop a mask refinement module to recurrently refine the prediction of the foreground regions. Experiments on the PASCAL VOC 2012, MS COCO, and FSS-1000 datasets show that our network achieves new state-of-the-art performance.
Despite the remarkable success of existing methods for few-shot segmentation, there remain two crucial challenges. First, the feature learning for novel classes is suppressed during training on base classes, in that the novel classes are always treated as background. Thus, the semantics of novel classes are not well learned. Second, most existing methods fail to consider the underlying semantic gap between the support and the query resulting from the representative bias of the scarce support samples. To circumvent these two challenges, we propose to activate the discriminability of novel classes explicitly in both the feature encoding stage and the prediction stage for segmentation. In the feature encoding stage, we design the Semantic-Preserving Feature Learning module (SPFL) to first exploit and then retain the latent semantics contained in the whole input image, especially those in the background that belong to novel classes. In the prediction stage for segmentation, we learn a Self-Refined Online Foreground-Background classifier (SROFB), which is able to refine itself using the high-confidence pixels of the query image to facilitate its adaptation to the query image and bridge the support-query semantic gap. Extensive experiments on the PASCAL-5$^i$ and COCO-20$^i$ datasets demonstrate the advantages of these two novel designs both quantitatively and qualitatively.
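The self-refinement idea lends itself to a short sketch: re-estimate foreground/background prototypes from the query's own high-confidence pixels and re-classify. The confidence threshold, the cosine classifier, and the scaling factor below are assumptions, not the exact SROFB.

```python
import torch
import torch.nn.functional as F

def self_refine(query_feat, probs, conf=0.9, alpha=20.0):
    # query_feat: (C, H, W); probs: (2, H, W) current FG/BG probabilities.
    # Re-estimate FG/BG prototypes from the query's own high-confidence pixels,
    # then re-classify every pixel against the refined prototypes.
    conf_mask = probs.max(dim=0).values > conf          # (H, W) confident pixels
    labels = probs.argmax(dim=0)                        # (H, W) in {0, 1}
    f = query_feat.flatten(1)                           # (C, H*W)
    protos = []
    for c in (0, 1):
        sel = ((labels == c) & conf_mask).flatten()
        protos.append(f[:, sel].mean(dim=1) if sel.any() else f.mean(dim=1))
    p = F.normalize(torch.stack(protos), dim=1)         # (2, C)
    logits = alpha * (p @ F.normalize(f, dim=0))        # (2, H*W)
    return logits.softmax(0).view_as(probs)
```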
Weakly supervised image segmentation trained with image-level labels usually suffers from inaccurate coverage of object regions during the generation of pseudo ground truths. This is because the object activation maps are trained with a classification objective and lack the ability to generalize. To improve the generality of the object activation maps, we propose a region prototypical network, RPNet, to explore the cross-image object diversity of the training set. Similar object parts across images are identified via region feature comparison, and object confidence is propagated between regions to discover new object regions while background regions are suppressed. Experiments show that the proposed method generates more complete and accurate pseudo object masks, while achieving state-of-the-art performance on PASCAL VOC 2012 and MS COCO. In addition, we investigate the robustness of the proposed method on reduced training sets.
Weakly supervised semantic segmentation with only image-level labels aims to reduce the annotation cost of the segmentation task. Existing approaches generally leverage class activation maps (CAMs) to locate object regions for pseudo-label generation. However, CAMs can only discover the most discriminative parts of objects, thus leading to inferior pixel-level pseudo labels. To address this issue, we propose a saliency-guided inter- and intra-class relation constrained (I$^2$CRC) framework to assist the expansion of the activated object regions in CAMs. Specifically, we propose a saliency-guided class-agnostic distance module to pull intra-class features closer by aligning features to their class prototypes. Further, we propose a class-specific distance module to push inter-class features apart and encourage the object regions to have higher activations than the background. Besides strengthening the ability of the classification network to activate more integral object regions in CAMs, we also introduce an object-guided label refinement module to take full use of the segmentation predictions and the initial labels to obtain superior pseudo labels. Extensive experiments on the PASCAL VOC 2012 and COCO datasets demonstrate well the effectiveness of I$^2$CRC over other state-of-the-art counterparts. The source code, models, and data have been made available at \url{https://github.com/nust-machine-intelligence-laboratory/i2crc}.
Few-shot semantic segmentation aims to learn to segment new object classes with only a few annotated examples, which has a wide range of real-world applications. Most existing methods either focus on the restrictive setting of one-way few-shot segmentation or suffer from incomplete coverage of object regions. In this paper, we propose a novel few-shot semantic segmentation framework based on the prototype representation. Our key idea is to decompose the holistic class representation into a set of part-aware prototypes, capable of capturing diverse and fine-grained object features. In addition, we propose to leverage unlabeled data to enrich our part-aware prototypes, resulting in better modeling of intra-class variations of semantic objects. We develop a novel graph neural network model to generate and enhance the proposed part-aware prototypes based on labeled and unlabeled images. Extensive experimental evaluations on two benchmarks show that our method outperforms the prior art by a sizable margin.
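One plausible way to decompose a holistic class representation into part-aware prototypes is a few k-means iterations over the masked support features, sketched below; the number of parts and the plain clustering are illustrative stand-ins for the paper's graph-neural-network machinery.

```python
import torch

def part_prototypes(feat, mask, k=4, iters=10):
    # feat: (C, H, W); mask: (H, W) binary object mask.
    # Decompose the masked features into k part-level prototypes with a few
    # rounds of k-means; k and iters are illustrative choices.
    x = feat.flatten(1)[:, mask.flatten().bool()].t()   # (N, C) object pixels
    centers = x[torch.randperm(len(x))[:k]]             # random initialization
    for _ in range(iters):
        assign = torch.cdist(x, centers).argmin(dim=1)  # nearest-center labels
        for j in range(k):
            if (assign == j).any():
                centers[j] = x[assign == j].mean(dim=0)
    return centers                                      # (k, C) part prototypes
```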
Few-shot segmentation aims to devise a generalizing model that segments query images from unseen classes during training with the guidance of a few support images whose classes tally with the class of the query. Two domain-specific problems have been noted in previous works, namely spatial inconsistency and bias towards seen classes. To account for the former problem, our method compares the support feature map with the query feature map at multiple scales to become scale-agnostic. As a solution to the latter problem, a supervised model, called the base learner, is trained on the available classes to accurately identify pixels belonging to seen classes. Hence, the subsequent meta learner has a chance to discard areas belonging to seen classes with the help of an ensemble learning model that coordinates the meta learner with the base learner. We simultaneously address these two vital problems for the first time and achieve state-of-the-art performance on both the PASCAL-5$^i$ and COCO-20$^i$ datasets.
This paper presents the first attempt to learn semantic boundary detection using image-level class labels as supervision. Our method starts by estimating coarse areas of object classes through attentions drawn by an image classification network. Since boundaries lie somewhere between such areas of different classes, our task is formulated as a multiple instance learning (MIL) problem, where pixels on a line segment connecting areas of two different classes are regarded as a bag of boundary candidates. Moreover, we design a new neural network architecture that can learn to estimate semantic boundaries reliably even with the uncertain supervision given by the MIL strategy. Our network is used to generate pseudo semantic boundary labels of training images, which are in turn used to train fully supervised models. The final model trained with our pseudo labels achieves an outstanding performance on the SBD dataset, where it is as competitive as some of the previous arts trained with stronger supervision.
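The MIL formulation can be sketched directly: pixels sampled along the segment connecting two differently-labeled areas form a bag, and at least one of them should fire as a boundary. The sampling density and the max-scoring bag loss below are assumptions, not the paper's network.

```python
import torch
import torch.nn.functional as F

def segment_bag_loss(boundary_map, p0, p1, n=32):
    # boundary_map: (H, W) predicted boundary probabilities in [0, 1].
    # p0, p1: (row, col) points inside areas of two *different* classes.
    # Pixels on the segment p0 -> p1 form a bag; MIL says at least one of them
    # should be a semantic boundary, so we supervise the bag's max score.
    t = torch.linspace(0, 1, n)
    rows = (p0[0] + t * (p1[0] - p0[0])).round().long()
    cols = (p0[1] + t * (p1[1] - p0[1])).round().long()
    bag = boundary_map[rows, cols]                      # (n,) candidate scores
    return F.binary_cross_entropy(bag.max().unsqueeze(0), torch.ones(1))
```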
Weakly supervised semantic segmentation (WSSS) with image-level labels is a challenging task in computer vision. Mainstream approaches follow a multi-stage framework and suffer from high training costs. In this paper, we explore the potential of Contrastive Language-Image Pre-training models (CLIP) to localize different categories with only image-level labels and without any further training. To efficiently generate high-quality segmentation masks from CLIP, we propose a novel framework called CLIP-ES for WSSS. Our framework improves all three stages of WSSS with special designs for CLIP: 1) We introduce the softmax function into GradCAM and exploit the zero-shot ability of CLIP to suppress the confusion caused by non-target classes and backgrounds. Meanwhile, to take full advantage of CLIP, we re-explore text inputs under the WSSS setting and customize two text-driven strategies: sharpness-based prompt selection and synonym fusion. 2) To simplify the stage of CAM refinement, we propose a real-time class-aware attention-based affinity (CAA) module based on the inherent multi-head self-attention (MHSA) in CLIP-ViTs. 3) When training the final segmentation model with the masks generated by CLIP, we introduce a confidence-guided loss (CGL) to mitigate noise and focus on confident regions. Our proposed framework dramatically reduces the cost of training for WSSS and shows the capability of localizing objects in CLIP. Our CLIP-ES achieves SOTA performance on PASCAL VOC 2012 and MS COCO 2014 while taking only 10% of the time of previous methods for pseudo mask generation. Code is available at https://github.com/linyq2117/CLIP-ES.
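Of the three stages, the confidence-guided loss is the simplest to illustrate; the sketch below masks out low-confidence pseudo-label pixels with an ignore index. The threshold and the (B, K, H, W) layout are assumptions, not CLIP-ES's exact loss.

```python
import torch
import torch.nn.functional as F

def confidence_guided_loss(logits, pseudo_labels, confidence, thresh=0.8):
    # logits: (B, K, H, W) segmentation predictions.
    # pseudo_labels: (B, H, W) long labels from the CLIP-generated masks.
    # confidence: (B, H, W) per-pixel confidence of those pseudo labels.
    # Pixels below the threshold are ignored so noisy regions do not train.
    labels = pseudo_labels.clone()
    labels[confidence < thresh] = 255                   # ignore index
    return F.cross_entropy(logits, labels, ignore_index=255)
```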
Weakly supervised semantic segmentation (WSSS) aims to produce pixel-wise class predictions using only image-level labels for training. To this end, previous methods adopt a common pipeline: they generate pseudo masks from class activation maps (CAMs) and use such masks to supervise segmentation networks. However, it is challenging to derive comprehensive pseudo masks that cover the whole extent of objects due to the local property of CAMs, i.e., they tend to focus solely on small discriminative object parts. In this paper, we associate the locality of CAMs with the texture-biased property of convolutional neural networks (CNNs). Accordingly, we propose to exploit shape information to supplement the texture-biased CNN features, thereby encouraging mask predictions to be not only comprehensive but also well aligned with object boundaries. We further refine the predictions in an online fashion with a novel refinement method that takes into account both class and color affinities, in order to generate reliable pseudo masks to supervise the model. Importantly, our model is end-to-end trained within a single-stage framework and therefore efficient in terms of training cost. Through extensive experiments on PASCAL VOC 2012, we validate the effectiveness of our method in producing precise and shape-aligned segmentation results. Specifically, our model surpasses the existing state-of-the-art single-stage approaches. Moreover, when adopted in a simple two-stage pipeline without bells and whistles, it also achieves new state-of-the-art performance over multi-stage approaches.
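A drastically simplified stand-in for the class-and-color affinity refinement is a single color-affinity smoothing step over the predictions, as below; the Gaussian color kernel and the dense (H·W)² affinity matrix are illustrative simplifications, not the paper's refinement method.

```python
import torch

def color_affinity_refine(probs, image, sigma=0.1):
    # probs: (K, H, W) class probabilities; image: (3, H, W) in [0, 1].
    # Each pixel's prediction is re-estimated from pixels with similar color.
    # Dense affinity: fine for small maps, quadratic in H*W otherwise.
    K, H, W = probs.shape
    rgb = image.flatten(1).t()                          # (H*W, 3) pixel colors
    d2 = torch.cdist(rgb, rgb).pow(2)                   # squared color distances
    aff = torch.softmax(-d2 / (2 * sigma ** 2), dim=1)  # row-normalized affinity
    return (aff @ probs.flatten(1).t()).t().view(K, H, W)
```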
Although existing semantic segmentation approaches achieve impressive results, they still struggle to update their models incrementally as new categories are uncovered. Furthermore, pixel-by-pixel annotations are expensive and time-consuming. This paper proposes a novel framework for weakly incremental learning for semantic segmentation, which aims to learn to segment new classes from cheap and largely available image-level labels. As opposed to existing approaches that need to generate pseudo labels offline, we use an auxiliary classifier, trained with image-level labels and regularized by the segmentation model, to obtain pseudo supervision online and update the model incrementally. We cope with the inherent noise in this process by using soft labels generated by the auxiliary classifier. We demonstrate the effectiveness of our approach on the PASCAL VOC and COCO datasets, outperforming offline weakly supervised methods and obtaining results comparable to incremental learning methods with full supervision.
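The online soft pseudo-supervision can be caricatured in a few lines: the segmentation head's new-class logits are trained against the auxiliary classifier's detached soft outputs. The sigmoid/BCE pairing is an assumed reading of "soft labels", not necessarily the paper's exact loss.

```python
import torch
import torch.nn.functional as F

def soft_pseudo_loss(seg_logits_new, aux_logits_new):
    # seg_logits_new: (B, Knew, H, W) segmentation logits for the new classes.
    # aux_logits_new: (B, Knew, H, W) localizer scores from the auxiliary
    # classifier, used online as soft (rather than hard argmax) supervision.
    soft_target = torch.sigmoid(aux_logits_new).detach()    # noisy soft labels
    return F.binary_cross_entropy_with_logits(seg_logits_new, soft_target)
```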
Few-shot segmentation aims to segment objects of unseen classes given only a few labeled samples. Prototype learning, where the support feature yields a single prototype by averaging global and local object information, has been widely used in FSS. However, utilizing only the prototype vector may be insufficient to represent the features of all the training data. To extract abundant features and make more precise predictions, we propose a Multi-Similarity and Attention Network (MSANet) including two novel modules, a multi-similarity module and an attention module. The multi-similarity module exploits multiple feature maps of the support and query images to estimate accurate semantic relationships. The attention module instructs the network to concentrate on class-relevant information. The network is tested on the standard FSS benchmarks: PASCAL-5$^i$ 1-shot, PASCAL-5$^i$ 5-shot, COCO-20$^i$ 1-shot, and COCO-20$^i$ 5-shot. MSANet with a ResNet-101 backbone achieves state-of-the-art performance on all four benchmark datasets, with mean intersection-over-union (mIoU) of 69.13%, 73.99%, 51.09%, and 56.80%, respectively. The code is available at https://github.com/aivresearch/msanet
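A minimal sketch of the multi-similarity idea follows: one cosine-similarity map per backbone stage between the query features and a support prototype, upsampled to a common size. The prototype-based comparison and the output size are simplifying assumptions, not MSANet's exact module.

```python
import torch
import torch.nn.functional as F

def multi_similarity(query_feats, support_protos, size=(64, 64)):
    # query_feats: list of (Ci, Hi, Wi) maps from several backbone stages.
    # support_protos: list of matching (Ci,) prototypes from the support image.
    # Produce one cosine-similarity map per stage, upsampled to a common size.
    maps = []
    for q, p in zip(query_feats, support_protos):
        sim = F.cosine_similarity(q, p[:, None, None], dim=0)   # (Hi, Wi)
        maps.append(F.interpolate(sim[None, None], size=size,
                                  mode='bilinear', align_corners=False)[0, 0])
    return torch.stack(maps)                            # (num_stages, 64, 64)
```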
Generating precise class-aware pseudo ground truths, i.e., class activation maps (CAMs), is essential for weakly supervised semantic segmentation. The original CAM method usually produces incomplete and inaccurate localization maps. To tackle this issue, this paper proposes an Expansion and Shrinkage scheme based on the offset learning in deformable convolution, to sequentially improve the recall and precision of the located objects in the two respective stages. In the Expansion stage, an offset learning branch in a deformable convolution layer, referred to as the "expansion sampler", seeks to sample increasingly less discriminative object regions, driven by an inverse supervision signal that maximizes the image-level classification loss. The located, more complete object regions are then gradually narrowed down to the final object region in the Shrinkage stage, where the offset learning branch of another deformable convolution layer, referred to as the "shrinkage sampler", is introduced to exclude the false-positive background regions attended to in the Expansion stage, improving the precision of the localization maps. We conduct various experiments on PASCAL VOC 2012 and MS COCO 2014 to well demonstrate the superiority of our method over other state-of-the-art methods for weakly supervised semantic segmentation. The code will be made publicly available at https://github.com/tyroneli/esol_wsss.
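The "inverse supervision signal" can be realized with a gradient reversal layer, sketched below: the offset branch receives negated gradients and therefore learns to maximize the classification loss. This is one standard realization of inverse supervision, not necessarily the paper's exact mechanism.

```python
import torch

class GradReverse(torch.autograd.Function):
    # Identity in the forward pass; flips the gradient sign in the backward
    # pass, so the offset branch is trained to *maximize* the classification
    # loss -- the inverse supervision driving the expansion sampler.
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_out):
        return -grad_out

def inverse_supervised(offsets):
    # Wrap the deformable-convolution offsets before they feed the classifier.
    return GradReverse.apply(offsets)
```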
Though image-level weakly supervised semantic segmentation (WSSS) with class activation maps (CAMs) as its cornerstone has achieved great progress, the large supervision gap between classification and segmentation still hampers the model from producing more complete and precise pseudo masks for segmentation. In this study, we propose weakly supervised pixel-to-prototype contrast, which can provide pixel-level supervisory signals to narrow that gap. Guided by two intuitive priors, our method is executed across different views and within a single view of an image, aiming to impose cross-view feature semantic consistency regularization and to facilitate intra-class compactness (and inter-class dispersion) in the feature space. Our method can be seamlessly incorporated into existing WSSS models without any changes to the base networks and does not incur any extra inference burden. Extensive experiments manifest that our method consistently improves two strong baselines by large margins, demonstrating its effectiveness. Specifically, built on top of SEAM, we improve the initial seed mIoU on PASCAL VOC 2012 over its 55.4% baseline. Moreover, armed with our method, we increase the segmentation mIoU of EPS from 70.8% to 73.6%, achieving a new state-of-the-art.
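An InfoNCE-style pixel-to-prototype loss of the kind the abstract describes might look like the following sketch; the temperature and the sampled-pixel interface are assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def pixel_to_prototype_contrast(pixel_feats, labels, prototypes, tau=0.1):
    # pixel_feats: (N, C) embeddings of sampled pixels; labels: (N,) long
    # pseudo classes; prototypes: (K, C) class prototypes. An InfoNCE-style
    # loss pulls each pixel towards its own class prototype and pushes it
    # away from the others; tau is an assumed temperature.
    z = F.normalize(pixel_feats, dim=1)
    p = F.normalize(prototypes, dim=1)
    logits = z @ p.t() / tau                            # (N, K) similarities
    return F.cross_entropy(logits, labels)
```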
Weakly supervised pixel-wise dense prediction tasks currently use class activation maps (CAMs) to generate pseudo masks as ground truth. However, existing methods typically depend on additional trainable modules, which can introduce expensive computation overhead and complicated training procedures. In this work, Semantic Structure Aware inference (SSA) is proposed to explore the semantic structure information hidden in different stages of CNN-based networks, so as to generate high-quality CAMs at model inference time. Specifically, a semantic structure modeling module (SSM) is first proposed to generate a class-agnostic semantic correlation representation, where each item denotes the degree of affinity between one category of objects and all the others. Then, this structured feature representation is exploited to polish the immature CAMs via a dot-product operation. Finally, the polished CAMs from different backbone stages are fused as the output. The proposed method has the advantage of being parameter-free and requiring no training; therefore, it can be applied to a broad range of weakly supervised pixel-wise dense prediction tasks. Experimental results on both weakly supervised object localization and weakly supervised semantic segmentation demonstrate the effectiveness of the proposed method, which achieves new state-of-the-art results on these two tasks.
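The training-free "polishing" step has a natural reading as affinity propagation, sketched below: a pairwise feature affinity matrix redistributes CAM activations among semantically related positions. The normalization details are assumptions, not the paper's exact SSM.

```python
import torch
import torch.nn.functional as F

def polish_cam(cam, feat):
    # cam: (K, H, W) immature class activation maps.
    # feat: (C, H, W) features from one backbone stage.
    # Pairwise feature affinity propagates activations between semantically
    # related positions -- a dot-product polishing step, parameter-free.
    f = F.normalize(feat.flatten(1), dim=0)             # (C, H*W)
    affinity = (f.t() @ f).clamp(min=0)                 # (H*W, H*W) similarities
    affinity = affinity / (affinity.sum(dim=1, keepdim=True) + 1e-6)
    return (cam.flatten(1) @ affinity.t()).view_as(cam)
```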