Generalized few-shot semantic segmentation was introduced to move beyond only evaluating few-shot segmentation models on novel classes, to also include testing their ability to remember base classes. While all current approaches are based on meta-learning, they perform poorly and saturate in learning after observing only a few shots. We propose the first fine-tuning solution and demonstrate that it addresses the saturation problem while achieving state-of-the-art results on two datasets, PASCAL-$5^i$ and COCO-$20^i$. We also show that it outperforms existing methods, whether fine-tuning multiple final layers or only the final layer. Finally, we present a triplet loss regularization that shows how to redistribute the balance of performance between novel and base categories so that the gap between them is smaller.
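A minimal sketch of how such a triplet regularizer over class embeddings could look in PyTorch (the margin value, embedding shapes, and pairing scheme here are assumptions, not the paper's exact formulation):

```python
import torch
import torch.nn.functional as F

def triplet_regularizer(anchor: torch.Tensor,
                        positive: torch.Tensor,
                        negative: torch.Tensor,
                        margin: float = 0.5) -> torch.Tensor:
    """Hinge-style triplet loss: pull same-class embeddings together and
    push different-class embeddings apart by at least `margin`.
    All inputs are assumed to be (B, D) batches of class embeddings."""
    d_pos = F.pairwise_distance(anchor, positive)  # distance to same class
    d_neg = F.pairwise_distance(anchor, negative)  # distance to other class
    return F.relu(d_pos - d_neg + margin).mean()
```

Added to the segmentation loss with a small weight, a term like this can be tuned to trade base-class retention against novel-class accuracy.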
Like other few-shot learning problems, few-shot segmentation aims to minimize the need for manual annotation, which is particularly costly in segmentation tasks. Even though the few-shot setting reduces this cost for novel test classes, annotated training data is still required. To alleviate this need, we propose a self-supervised training approach for learning few-shot segmentation models. We first use unsupervised saliency estimation to obtain pseudo-masks on images. We then train a simple prototype model over different splits of pseudo-masks and augmentations of the images. Our extensive experiments show that the proposed approach achieves promising results, highlighting the potential of self-supervised training. To the best of our knowledge, this is the first work to address the problem of unsupervised few-shot segmentation on natural images.
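A sketch of the prototype machinery such training relies on, with assumed tensor shapes (the pseudo-mask from one augmented view yields a prototype that must then segment another view):

```python
import torch
import torch.nn.functional as F

def masked_average_pool(feat: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    """Prototype = mean of features under the (pseudo-)mask.
    feat: (B, C, H, W) backbone features; mask: (B, 1, h, w) in [0, 1]."""
    mask = F.interpolate(mask, size=feat.shape[-2:], mode="bilinear",
                         align_corners=False)
    return (feat * mask).sum(dim=(2, 3)) / (mask.sum(dim=(2, 3)) + 1e-6)

def prototype_logits(query_feat: torch.Tensor, prototype: torch.Tensor,
                     scale: float = 20.0) -> torch.Tensor:
    """Cosine similarity between every query pixel and the prototype."""
    q = F.normalize(query_feat, dim=1)                   # (B, C, H, W)
    p = F.normalize(prototype, dim=1)[..., None, None]   # (B, C, 1, 1)
    return scale * (q * p).sum(dim=1, keepdim=True)      # (B, 1, H, W)
```

Training could then, for example, apply a binary cross-entropy loss between these logits and the saliency pseudo-mask of the second view.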
Few-shot semantic segmentation aims to segment novel class objects given only a few labeled support images. Most advanced solutions exploit a metric learning framework that performs segmentation by matching each query feature to learned class-specific prototypes. However, this framework suffers from biased classification due to incomplete feature comparison. To address this issue, we present adaptive prototype representation by introducing class-specific and class-agnostic prototypes, thereby constructing complete sample pairs for learning semantic alignment with query features. This complementary feature learning manner effectively enriches feature comparison and helps produce an unbiased segmentation model in the few-shot setting. It is implemented with a two-branch end-to-end network (i.e., a class-specific branch and a class-agnostic branch), which generates prototypes and then combines them with query features to perform comparisons. Moreover, the proposed class-agnostic branch is simple yet effective. In practice, it can adaptively generate multiple class-agnostic prototypes for query images and learn feature alignment in a self-contrastive manner. Extensive experiments on PASCAL-5$^i$ and COCO-20$^i$ demonstrate the superiority of our method. At no expense of inference efficiency, our model achieves state-of-the-art results in both the 1-shot and 5-shot settings for semantic segmentation.
We address the problem of few-shot semantic segmentation (FSS), which aims to segment novel class objects in a target image given a few annotated samples. Despite recent advances made by incorporating prototype-based metric learning, existing methods still show limited performance under extreme intra-class object variations and semantically similar categories, owing to their poor feature representation. To tackle this problem, we propose a dual prototypical contrastive learning approach tailored to the FSS task to capture representative semantics effectively. The main idea is to encourage the prototypes to be more discriminative by increasing the inter-class distance while reducing the intra-class distance in the prototype feature space. To this end, we first present a class-specific contrastive loss with a dynamic prototype dictionary that stores class-aware prototypes during training, making same-class prototypes similar and different-class prototypes dissimilar. Furthermore, we introduce a class-agnostic contrastive loss that compresses the feature distribution of each semantic class within every episode, to enhance the generalization ability to unseen classes. We show that the proposed dual prototypical contrastive learning approach outperforms state-of-the-art FSS methods on the PASCAL-5i and COCO-20i datasets. The code is available at: https://github.com/kwonjunn01/dpcl1.
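One way the class-specific contrastive term could be realized, sketched under assumptions (the cosine similarities, temperature `tau`, and flat tensor dictionary are choices made here, not necessarily the paper's):

```python
import torch
import torch.nn.functional as F

def class_contrastive_loss(prototype: torch.Tensor, label: int,
                           dictionary: torch.Tensor, dict_labels: torch.Tensor,
                           tau: float = 0.1) -> torch.Tensor:
    """Supervised contrastive loss against a dictionary of stored prototypes.
    prototype: (C,) current episode prototype; dictionary: (N, C) stored
    class-aware prototypes; dict_labels: (N,) their class ids.
    Assumes the dictionary holds at least one entry of class `label`."""
    sim = F.normalize(dictionary, dim=1) @ F.normalize(prototype, dim=0) / tau
    log_prob = sim - torch.logsumexp(sim, dim=0)   # log-softmax over entries
    pos = dict_labels == label
    return -log_prob[pos].mean()                   # attract same-class entries
```

The dictionary itself would be maintained across iterations, e.g. by enqueueing detached prototypes per class.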
In visual recognition tasks, few-shot learning requires the ability to learn object categories from a few support examples. Its renewed popularity, given the development of deep learning, is mostly in image classification. This work focuses on few-shot semantic segmentation, which remains a largely unexplored field. Some recent advances are usually limited to single-class few-shot segmentation. In this paper, we first present a novel multi-way (class) encoding and decoding architecture which effectively fuses multi-scale query information and multi-class support information into one query-support embedding. Multi-class segmentation is directly decoded upon this embedding. For better feature fusion, a multi-level attention mechanism is proposed within the architecture, which includes attention for support feature modulation and attention for multi-scale combination. Finally, to enhance the embedding space learning, an extra pixel-wise metric learning module is introduced, with a triplet loss formulated on the pixel-level embeddings of the input images. Extensive experiments on the standard benchmarks PASCAL-5i and COCO-20i show clear benefits of our method over the state of the art.
Few-shot segmentation (FSS) aims to segment unseen classes using a few annotated samples. Typically, a prototype representing the foreground class is extracted from annotated support image(s) and is matched to features representing each pixel in the query image. However, models learnt in this way are insufficiently discriminatory, and often produce false positives: misclassifying background pixels as foreground. Some FSS methods try to address this issue by using the background in the support image(s) to help identify the background in the query image. However, the backgrounds of these images are often quite distinct, and hence the support image background information is uninformative. This article proposes a method, QSR, that extracts the background from the query image itself, and as a result is better able to discriminate between foreground and background features in the query image. This is achieved by modifying the training process to associate prototypes with class labels, including known classes from the training data and latent classes representing unknown background objects. This class information is then used to extract a background prototype from the query image. To successfully associate prototypes with class labels and extract a background prototype that is capable of predicting a mask for the background regions of the image, the machinery for extracting and using foreground prototypes is induced to become more discriminative between different classes. Experiments for both 1-shot and 5-shot FSS on both the PASCAL-5i and COCO-20i datasets demonstrate that the proposed method results in a significant improvement in performance for the baseline methods it is applied to. As QSR operates only during training, these improved results are produced with no extra computational complexity during testing.
Conventional training of a deep CNN based object detector demands a large number of bounding box annotations, which may be unavailable for rare categories. In this work we develop a few-shot object detector that can learn to detect novel objects from only a few annotated examples. Our proposed model leverages fully labeled base classes and quickly adapts to novel classes, using a meta feature learner and a reweighting module within a one-stage detection architecture. The feature learner extracts meta features that are generalizable to detect novel object classes, using training data from base classes with sufficient samples. The reweighting module transforms a few support examples from the novel classes to a global vector that indicates the importance or relevance of meta features for detecting the corresponding objects. These two modules, together with a detection prediction module, are trained end-to-end based on an episodic few-shot learning scheme and a carefully designed loss function. Through extensive experiments we demonstrate that our model outperforms well-established baselines by a large margin for few-shot object detection, on multiple datasets and settings. We also present analysis on various aspects of our proposed model, aiming to provide some inspiration for future few-shot detection works.
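As an illustration of the reweighting idea, here is a hedged sketch (the layer sizes, the 4-channel image-plus-mask input, and the elementwise modulation are assumptions based on the description above):

```python
import torch
import torch.nn as nn

class ReweightingModule(nn.Module):
    """Maps a support example (RGB image + binary mask as a 4th channel)
    to a per-channel weight vector that modulates the query's meta features."""
    def __init__(self, in_ch: int = 4, feat_ch: int = 1024):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, feat_ch, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),        # global vector, (B, feat_ch, 1, 1)
        )

    def forward(self, support: torch.Tensor, meta_feat: torch.Tensor) -> torch.Tensor:
        w = self.encoder(support)           # class-specific channel weights
        return meta_feat * w                # reweight query meta features
```

A detection head can then run once per novel class on the correspondingly reweighted features.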
Jitendra Malik once said, "Supervision is the opium of the AI researcher". Most deep learning techniques heavily rely on extreme amounts of human labels to work effectively. In today's world, the rate of data creation greatly surpasses the rate of data annotation. Full reliance on human annotations is just a temporary means to solve current closed problems in AI. In reality, only a tiny fraction of data is annotated. Annotation Efficient Learning (AEL) is a study of algorithms to train models effectively with fewer annotations. To thrive in AEL environments, we need deep learning techniques that rely less on manual annotations (e.g., image, bounding-box, and per-pixel labels), but learn useful information from unlabeled data. In this thesis, we explore five different techniques for handling AEL.
Training semantic segmentation models requires a large amount of finely annotated data, making it hard to quickly adapt to novel classes that do not satisfy this condition. Few-Shot Segmentation (FS-Seg) tackles this problem under many constraints. In this paper, we introduce a new benchmark, called Generalized Few-Shot Semantic Segmentation (GFS-Seg), to analyze the ability to simultaneously segment novel categories with very few examples and base categories with sufficient examples. It is the first study showing that previous representative state-of-the-art FS-Seg methods fall short in GFS-Seg, and that the performance discrepancy mainly comes from the constrained setting of FS-Seg. To make GFS-Seg tractable, we set up a GFS-Seg baseline that achieves decent performance without structural change to the original model. Then, since context is essential for semantic segmentation, we propose Context-Aware Prototype Learning (CAPL), which significantly improves performance by 1) leveraging the co-occurrence of support samples, and 2) dynamically enriching the classifier with contextual information, conditioned on the content of each query image. Both contributions are experimentally shown to have substantial practical merit. Extensive experiments on PASCAL-VOC and COCO demonstrate the effectiveness of CAPL, which also generalizes well to FS-Seg by achieving competitive performance. The code will be made publicly available.
Few-shot segmentation aims to learn a segmentation model that can generalize to novel classes with only a few training images. In this paper, we propose a Cross-Reference and Local-Global Conditional Network (CRCNet) for few-shot segmentation. Unlike previous works that only predict the mask of the query image, our proposed model simultaneously makes predictions for both the support image and the query image. With a cross-reference mechanism, our network can better find the objects that co-occur in the two images, which helps the few-shot segmentation task. To further improve feature comparison, we develop a local-global conditional module to capture both global and local relations. We also develop a mask refinement module to recurrently refine the prediction of the foreground regions. Experiments on the PASCAL VOC 2012, MS COCO, and FSS-1000 datasets show that our network achieves new state-of-the-art performance.
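The cross-reference mechanism is not specified above; a common way to realize mutual gating between the two branches looks like this sketch (entirely an assumption on our part, not CRCNet's published design):

```python
import torch

def cross_reference(feat_s: torch.Tensor, feat_q: torch.Tensor):
    """Mutually gate support and query features (both (B, C, H, W)) so that
    channels that are active in *both* images are emphasized."""
    g_s = torch.sigmoid(feat_s.mean(dim=(2, 3), keepdim=True))  # (B, C, 1, 1)
    g_q = torch.sigmoid(feat_q.mean(dim=(2, 3), keepdim=True))
    return feat_s * g_q, feat_q * g_s   # each branch gated by the other
```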
Freezing the pre-trained backbone has become a standard paradigm for avoiding overfitting in few-shot segmentation. In this paper, we rethink this paradigm and explore a new regime: {\em fine-tuning a small part of the parameters in the backbone}. We present a solution that overcomes the overfitting problem, making the model generalize better when learning novel classes. Our method decomposes the backbone parameters into three successive matrices via Singular Value Decomposition (SVD), then {\em fine-tunes only the singular values} and keeps everything else frozen. This design allows the model to adjust feature representations on novel classes while maintaining the semantic clues within the pre-trained backbone. We evaluate our {\em Singular Value Fine-tuning (SVF)} approach on various few-shot segmentation methods with different backbones, and obtain state-of-the-art results on both PASCAL-5$^i$ and COCO-20$^i$. Hopefully, this simple baseline will encourage researchers to rethink the role of backbone fine-tuning in few-shot settings. The source code and models will be available at \url{https://github.com/syp2ysy/svf}.
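A minimal sketch of the decomposition step, reshaping a conv kernel to 2-D and making only $S$ trainable (the reshape convention is an assumption):

```python
import torch

def svf_decompose(weight: torch.Tensor):
    """Split a conv weight (O, I, kh, kw) into U, S, Vh via SVD.
    Only the singular values S become trainable; U and Vh stay frozen."""
    shape = weight.shape
    u, s, vh = torch.linalg.svd(weight.reshape(shape[0], -1), full_matrices=False)
    u, vh = u.detach(), vh.detach()       # frozen orthogonal factors
    s = torch.nn.Parameter(s)             # the only trainable part
    return u, s, vh, shape

def svf_compose(u, s, vh, shape):
    """Rebuild the full conv weight from the factors at each forward pass."""
    return (u @ torch.diag(s) @ vh).reshape(shape)
```

In practice the composed weight would replace the layer's kernel inside `forward`, so gradients flow only into `s`.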
Deep learning has greatly advanced the performance of semantic segmentation; however, its success relies on the availability of large amounts of annotated training data. Thus, many efforts have been devoted to domain adaptive semantic segmentation, which focuses on transferring semantic knowledge from a labeled source domain to an unlabeled target domain. Existing self-training methods typically require multiple rounds of training, while another popular framework based on adversarial training is known to be sensitive to hyper-parameters. In this paper, we present an easy-to-train framework that learns domain-invariant prototypes for domain adaptive semantic segmentation. In particular, we show that domain adaptation shares a common character with few-shot learning, in that both aim to recognize unseen data using knowledge learned from large amounts of seen data. Thus, we propose a unified framework for domain adaptation and few-shot learning. The core idea is to use the class prototypes extracted from a few annotated target images to classify the pixels of both source and target images. Our method involves only one-stage training and does not require training on large-scale unannotated target images. Moreover, our method can be extended to variants of both domain adaptation and few-shot learning. Experiments on adapting GTA5 to Cityscapes and SYNTHIA to Cityscapes show that our method achieves competitive performance with the state of the art.
Object detection has made substantial progress in the last decade. However, detecting novel classes with only a few samples remains challenging, since deep learning under a low-data regime usually leads to a degraded feature space. Existing works adopt a holistic fine-tuning paradigm to tackle this problem, where the model is first pre-trained on all base classes with abundant samples and then used to carve out the novel class feature space. Nonetheless, this paradigm is still imperfect: during fine-tuning, a novel class may implicitly leverage the knowledge of multiple base classes to construct its feature space, which induces a scattered feature space and hence violates inter-class separability. To overcome these obstacles, we propose a two-step fine-tuning framework, Few-shot object detection via Association and DIscrimination (FADI), which builds a discriminative feature space for each novel class with two integral steps. 1) In the association step, in contrast to implicitly leveraging multiple base classes, we construct a compact novel class feature space by explicitly imitating the feature space of a specific base class. Concretely, we associate each novel class with a base class according to their semantic similarity. The feature space of a novel class can then readily imitate the well-trained feature space of the associated base class. 2) In the discrimination step, to ensure separability between the novel classes and their associated base classes, we disentangle the classification branches of base and novel classes. To further enlarge the inter-class separability between all classes, a dedicated margin loss is imposed. Extensive experiments on the PASCAL VOC and MS COCO datasets demonstrate that FADI achieves new SOTA performance, significantly improving the baseline in any shot/split by up to +18.7. Notably, the advantage is most pronounced in extremely few-shot scenarios.
Few-shot semantic segmentation aims to learn to segment new object classes with only a few annotated examples, which has a wide range of real-world applications. Most existing methods either focus on the restrictive setting of one-way few-shot segmentation or suffer from incomplete coverage of object regions. In this paper, we propose a novel few-shot semantic segmentation framework based on the prototype representation. Our key idea is to decompose the holistic class representation into a set of part-aware prototypes, capable of capturing diverse and fine-grained object features. In addition, we propose to leverage unlabeled data to enrich our part-aware prototypes, resulting in better modeling of intra-class variations of semantic objects. We develop a novel graph neural network model to generate and enhance the proposed part-aware prototypes based on labeled and unlabeled images. Extensive experimental evaluations on two benchmarks show that our method outperforms the prior art with a sizable margin.
Previous human parsing models are limited to parsing humans into pre-defined classes, which is inflexible for practical fashion applications that often have new fashion item classes. In this paper, we define a novel one-shot human parsing (OSHP) task that requires parsing humans into an open set of classes defined by any test example. During training, only base classes are exposed, which overlap with only part of the test-time classes. To address the three main challenges in OSHP, i.e., small sizes, testing bias, and similar parts, we devise an End-to-end One-shot human Parsing Network (EOP-Net). First, an end-to-end human parsing framework is proposed to parse the query image into both coarse-grained and fine-grained human classes, which builds a strong embedding network with rich semantic information shared across the different granularities of human classes. Then, we propose learning momentum-updated prototypes by gradually smoothing the training-time static prototypes, which helps stabilize training and learn robust features. Moreover, we devise a dual metric learning scheme that encourages the network to enhance both the representation ability and the transferability of the features. Therefore, our EOP-Net can learn representative features that quickly adapt to novel classes and mitigate the testing bias issue. In addition, we employ a contrastive loss at the prototype level, enforcing inter-class distances in the fine-grained metric space to discriminate similar parts. We tailor three existing human parsing benchmarks to the OSHP task. Experiments on the new benchmarks show that EOP-Net outperforms representative one-shot segmentation models by large margins, serving as a strong baseline for further research on this new task. The source code is available at https://github.com/charleshhy/one-shot-human-parsing.
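The momentum-updated prototype idea reduces to an exponential moving average per class; a minimal sketch (the momentum value and re-normalization step are assumptions):

```python
import torch

@torch.no_grad()
def momentum_update(proto_bank: torch.Tensor, class_id: int,
                    new_proto: torch.Tensor, m: float = 0.9) -> None:
    """EMA-smooth the stored prototype of one class:
    p <- m * p + (1 - m) * p_new, then re-normalize.
    proto_bank: (K, C) table of per-class prototypes."""
    p = m * proto_bank[class_id] + (1.0 - m) * new_proto
    proto_bank[class_id] = p / p.norm().clamp(min=1e-6)
```

Because the update is gradient-free (`no_grad`), the bank acts as a slowly drifting memory rather than a trained parameter, which is what stabilizes training.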
Despite the remarkable success of existing methods for few-shot segmentation, there remain two crucial challenges. First, the feature learning for novel classes is suppressed during the training on base classes, in that the novel classes are always treated as background. Thus, the semantics of novel classes are not well learned. Second, most existing methods fail to consider the underlying semantic gap between the support and the query that results from the representation bias caused by the scarce support samples. To circumvent these two challenges, we propose to activate the discriminability of novel classes explicitly in both the feature encoding stage and the prediction stage for segmentation. In the feature encoding stage, we design the Semantic-Preserving Feature Learning module (SPFL) to first exploit and then retain the latent semantics contained in the whole input image, especially those in the background that belong to novel classes. In the prediction stage for segmentation, we learn a Self-Refined Online Foreground-Background classifier (SROFB), which is able to refine itself using the high-confidence pixels of the query image to facilitate its adaptation to the query image and bridge the support-query semantic gap. Extensive experiments on the PASCAL-5$^i$ and COCO-20$^i$ datasets demonstrate the advantages of these two novel designs both quantitatively and qualitatively.
Semantic segmentation assigns a class label to each image pixel. This dense prediction problem requires large amounts of manually annotated data, which is often unavailable. Few-shot learning aims to learn the pattern of a new category with only a few annotated examples. In this paper, we formulate the few-shot semantic segmentation problem from 1-way (class) to N-way (classes). Inspired by few-shot classification, we propose a generalized framework for few-shot semantic segmentation with an alternative training scheme. The framework is based on prototype learning and metric learning. Our approach outperforms the baselines by a large margin and shows comparable performance for 1-way few-shot semantic segmentation on PASCAL VOC 2012 dataset.
We introduce a few-shot localization dataset originating from photographers who authentically were trying to learn about the visual content in the images they took. It includes nearly 10,000 segmentations of 100 categories in over 4,500 images that were taken by people with visual impairments. Compared to existing few-shot object detection and instance segmentation datasets, our dataset is the first to locate holes in objects (e.g., found in 12.3\% of our segmentations), it shows objects that occupy a much larger range of sizes relative to the images, and text is over five times more common in our objects (e.g., found in 22.4\% of our segmentations). Analysis of three modern few-shot localization algorithms demonstrates that they generalize poorly to our new dataset: they commonly struggle to locate objects with holes, very small and very large objects, and objects lacking text. To encourage a larger community to work on these unsolved challenges, we publicly share our annotated few-shot dataset at https://vizwiz.org.
Few-shot semantic segmentation aims to recognize the object regions of unseen categories with only a few annotated examples as supervision. The key to few-shot segmentation is to establish a robust semantic relationship between the support and query images and to prevent overfitting. In this paper, we propose an effective Multi-Similarity Hyperrelation Network (MSHNet) to tackle the few-shot semantic segmentation problem. In MSHNet, we propose a new Generative Prototype Similarity (GPS) which, together with cosine similarity, can establish a strong semantic relationship between the support and query images. The locally generated prototype similarity based on global features is logically complementary to the global cosine similarity based on local features, and the relationship between the query and support images can be expressed more comprehensively by using the two similarities simultaneously. In addition, we propose a Symmetric Merging Block (SMB) in MSHNet to efficiently merge multi-layer, multi-shot, and multi-similarity hyperrelation features. MSHNet is built on similarities rather than on specific category features, which achieves more general unity and effectively reduces overfitting. On the two benchmark semantic segmentation datasets PASCAL-5i and COCO-20i, MSHNet achieves new state-of-the-art performance on the 1-shot and 5-shot semantic segmentation tasks.
Despite the great progress made by deep CNNs in image semantic segmentation, they typically require a large number of densely-annotated images for training and are difficult to generalize to unseen object categories. Few-shot segmentation has thus been developed to learn to perform segmentation from only a few annotated examples. In this paper, we tackle the challenging few-shot segmentation problem from a metric learning perspective and present PANet, a novel prototype alignment network to better utilize the information of the support set. Our PANet learns class-specific prototype representations from a few support images within an embedding space and then performs segmentation over the query images through matching each pixel to the learned prototypes. With non-parametric metric learning, PANet offers high-quality prototypes that are representative for each semantic class and meanwhile discriminative for different classes. Moreover, PANet introduces a prototype alignment regularization between support and query. With this, PANet fully exploits knowledge from the support and provides better generalization on few-shot segmentation. Significantly, our model achieves mIoU scores of 48.1% and 55.7% on PASCAL-5$^i$ for the 1-shot and 5-shot settings respectively, surpassing the state-of-the-art method by 1.8% and 8.6%.
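A hedged sketch of the two core operations named above, non-parametric prototype matching and the prototype alignment regularization, under assumed shapes (an illustration, not PANet's exact code):

```python
import torch
import torch.nn.functional as F

def segment_with_prototypes(feat: torch.Tensor, prototypes: torch.Tensor,
                            scale: float = 20.0) -> torch.Tensor:
    """Label each pixel by cosine similarity to every prototype.
    feat: (B, C, H, W); prototypes: (K, C), K classes incl. background."""
    f = F.normalize(feat, dim=1)
    p = F.normalize(prototypes, dim=1)
    return scale * torch.einsum("bchw,kc->bkhw", f, p)   # (B, K, H, W)

def alignment_loss(support_feat: torch.Tensor, support_mask: torch.Tensor,
                   query_prototypes: torch.Tensor) -> torch.Tensor:
    """Alignment regularization (sketch): prototypes pooled from the query's
    own predicted masks must in turn segment the support images correctly.
    support_mask: (B, H, W) integer class labels."""
    logits = segment_with_prototypes(support_feat, query_prototypes)
    logits = F.interpolate(logits, size=support_mask.shape[-2:],
                           mode="bilinear", align_corners=False)
    return F.cross_entropy(logits, support_mask)
```

Swapping the roles of support and query in this way forces the prototypes extracted from either side of an episode to be mutually consistent.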