Few-Shot Instance Segmentation (FSIS) requires models to detect and segment novel classes with only a few support examples. In this work, we explore a simple yet unified solution for FSIS as well as its incremental variants, and introduce a new framework named Reference Twice (RefT) to fully explore the relationship between support and query features within a Transformer-like framework. Our key insights are twofold: first, with the aid of support masks, we can generate dynamic class centers that more appropriately re-weight query features; second, we find that support object queries have already encoded key factors after base training. In this way, the query features can be enhanced twice, at the feature level and at the instance level. In particular, we first design a mask-based dynamic weighting module to enhance support features and then propose to link object queries for better calibration via cross-attention. After these steps, performance on novel classes improves significantly over our strong baseline. Additionally, our new framework can be easily extended to incremental FSIS with minor modifications. When benchmarked on the COCO dataset under the FSIS, gFSIS, and iFSIS settings, our method achieves competitive performance compared to existing approaches across different shots; e.g., we boost nAP by a noticeable +8.2/+9.4 over the current state-of-the-art FSIS method for 10/30-shot. We further demonstrate the superiority of our approach on Few-Shot Object Detection. Code and models will be available.
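As a rough illustration of the two reference steps described above (not the authors' code; all tensor shapes and module names are assumptions), the sketch below first pools support features under their masks into dynamic class centers that re-weight the query feature map, and then lets query object queries cross-attend to support object queries.

```python
import torch
import torch.nn as nn

class ReferenceTwiceSketch(nn.Module):
    """Toy two-step enhancement: feature-level re-weighting + instance-level cross-attention."""
    def __init__(self, dim=256, num_heads=8):
        super().__init__()
        self.center_proj = nn.Linear(dim, dim)
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, query_feat, support_feat, support_mask, query_obj_q, support_obj_q):
        # query_feat:   (B, C, H, W)  query image features
        # support_feat: (B, C, H, W)  support image features
        # support_mask: (B, 1, H, W)  binary support instance mask
        # query_obj_q / support_obj_q: (B, N, C) object queries from the decoder
        # 1) feature level: masked average pooling -> dynamic class center -> channel re-weighting
        center = (support_feat * support_mask).sum(dim=(2, 3)) / (support_mask.sum(dim=(2, 3)) + 1e-6)  # (B, C)
        weight = torch.sigmoid(self.center_proj(center)).unsqueeze(-1).unsqueeze(-1)                    # (B, C, 1, 1)
        query_feat = query_feat * weight
        # 2) instance level: query object queries attend to support object queries
        enhanced_q, _ = self.cross_attn(query_obj_q, support_obj_q, support_obj_q)
        return query_feat, query_obj_q + enhanced_q

# quick shape check
m = ReferenceTwiceSketch()
out_feat, out_q = m(torch.randn(2, 256, 32, 32), torch.randn(2, 256, 32, 32),
                    torch.randint(0, 2, (2, 1, 32, 32)).float(),
                    torch.randn(2, 100, 256), torch.randn(2, 100, 256))
```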
Few-shot object detection has been extensively studied by incorporating meta-learning into region-based detection frameworks. Despite its success, this paradigm is still constrained by several factors, such as (i) low-quality region proposals for novel classes and (ii) neglect of the inter-class correlation among different classes. Such limitations hinder the generalization of base-class knowledge to the detection of novel-class objects. In this work, we design Meta-DETR, which (i) is the first image-level few-shot detector and (ii) introduces a novel inter-class correlational meta-learning strategy to capture and exploit the correlation among different classes for robust and accurate few-shot object detection. Meta-DETR works entirely at the image level without any region proposals, which circumvents the constraint of inaccurate proposals in prevalent few-shot detection frameworks. In addition, the introduced correlational meta-learning enables Meta-DETR to simultaneously attend to multiple support classes within a single feed-forward pass, which makes it possible to capture the inter-class correlations among different classes, thus greatly reducing the misclassification of similar classes and enhancing knowledge generalization to novel classes. Experiments on multiple few-shot object detection benchmarks show that the proposed Meta-DETR outperforms state-of-the-art methods by large margins. Implementation code is available at https://github.com/zhanggongjie/meta-detr.
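A minimal sketch of how a detector could attend to several support classes in one forward pass, in the spirit of the correlational aggregation described above (illustrative only; the real Meta-DETR design differs in detail): query-image tokens act as attention queries, and one prototype per support class provides the keys and values.

```python
import torch
import torch.nn as nn

dim, classes_per_episode = 256, 5
attn = nn.MultiheadAttention(dim, num_heads=8, batch_first=True)

query_tokens = torch.randn(2, 900, dim)                         # flattened query-image features (B, HW, C)
support_prototypes = torch.randn(2, classes_per_episode, dim)   # one vector per support class

# All support classes are presented together, so the attention weights expose
# inter-class correlations instead of matching one class at a time.
aggregated, class_attn = attn(query_tokens, support_prototypes, support_prototypes)
fused = query_tokens + aggregated                                # class-aware features for the decoder
```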
Few-shot object detection (FSOD) is a rapidly growing field in computer vision. It consists of finding all occurrences of a given set of classes given only a few annotated examples per class. Numerous methods have been proposed to address this challenge, most of them based on attention mechanisms. However, the variety of classical object detection frameworks and training strategies makes performance comparison between methods difficult. In particular, for attention-based FSOD methods, comparing the impact of different attention mechanisms on performance is laborious. This paper aims to fill this gap. To do so, a flexible framework is proposed that allows the implementation of most of the attention techniques available in the literature. To properly present such a framework, a detailed review of existing FSOD methods is first provided. Several different attention mechanisms are then reimplemented within the framework and compared with all other parameters fixed.
Humans are able to learn to recognize new objects even from a few examples. In contrast, training deep-learning-based object detectors requires huge amounts of annotated data. To avoid the need to acquire and annotate these huge amounts of data, few-shot object detection aims to learn from only a few object instances of new categories in the target domain. In this survey, we provide an overview of the state of the art in few-shot object detection. We categorize approaches according to their training scheme and architectural layout. For each type of approach, we describe the general realization as well as concepts for improving performance on novel categories. Whenever appropriate, we give short takeaways on these concepts in order to highlight the best ideas. Eventually, we introduce commonly used datasets and their evaluation protocols and analyze the reported benchmark results. As a result, we emphasize common challenges in evaluation and identify the most promising current trends in this emerging field of few-shot object detection.
Object detection is a fundamental task in computer vision and image processing. Deep-learning-based object detectors have been highly successful when abundant labeled data are available. However, in real life there is no guarantee that every object category has enough labeled samples for training, and these large object detectors are prone to overfitting when the training data are limited. It is therefore necessary to introduce few-shot learning and zero-shot learning into object detection; together they can be termed low-shot object detection. Low-shot object detection (LSOD) aims to detect objects from few or even zero labeled samples and can be divided into few-shot object detection (FSOD) and zero-shot object detection (ZSD), respectively. This paper provides a comprehensive survey of deep-learning-based FSOD and ZSD. First, the survey classifies FSOD and ZSD methods into different categories and discusses their pros and cons. Second, it reviews the dataset settings and evaluation metrics of FSOD and ZSD and analyzes the performance of different methods on these benchmarks. Finally, it discusses future challenges and promising directions for FSOD and ZSD.
Human fashion understanding is a crucial computer vision task, since it carries comprehensive information for real-world applications. This work focuses on human fashion segmentation and attribute recognition. In contrast to previous works that model each task separately as a multi-head prediction problem, our insight is to bridge these two tasks with one unified model via Vision Transformer modeling so that each task benefits the other. In particular, we introduce object queries for segmentation and attribute queries for attribute prediction. Both queries and their corresponding features can be linked via mask prediction. Then, we adopt a two-stream query learning framework to learn decoupled query representations. We design a novel Multi-Layer Rendering module for the attribute stream to explore more fine-grained features. The decoder design shares the same spirit as DETR. Thus we name the proposed method FashionFormer. Extensive experiments on three human fashion datasets illustrate the effectiveness of our method. In particular, our method obtains strong results in terms of a joint metric (AP$^{\text{mask}}_{\text{IoU+F}_1}$) for segmentation and attribute recognition. To the best of our knowledge, we are the first unified end-to-end Vision Transformer framework for human fashion analysis. We hope this simple yet effective method can serve as a new flexible baseline for fashion analysis. Code is available at https://github.com/xushilin1/fashionformer.
Labeling data is often expensive and time-consuming, especially for tasks such as object detection and instance segmentation, which require dense labeling of images. While few-shot object detection is about training a model on novel (unseen) object classes with little data, it still requires prior training on many labeled examples of base (seen) classes. On the other hand, self-supervised methods aim to learn representations from unlabeled data that transfer to downstream tasks such as object detection. Combining few-shot and self-supervised object detection is a promising research direction. In this survey, we review and characterize the most recent approaches to few-shot and self-supervised object detection. We then give our main takeaways and discuss future research directions. Project page: https://gabrielhuang.github.io/fsod-survey/
Panoptic Part Segmentation (PPS) unifies panoptic segmentation and part segmentation into one task. Previous works utilize separate approaches to handle thing, stuff, and part predictions without shared computation and task association. We aim to unify these tasks at the architectural level, designing the first end-to-end unified framework named Panoptic-PartFormer. Moreover, we find that the previous metric PartPQ is biased toward PQ. To handle both issues, we make the following contributions: Firstly, we design a meta-architecture that decouples the part features and the things/stuff features. We model things, stuff, and parts as object queries and directly learn to optimize all three forms of prediction as a unified mask prediction and classification problem. We term our model Panoptic-PartFormer. Secondly, we propose a new metric, Part-Whole Quality (PWQ), to better measure this task from both pixel-region and part-whole perspectives. It can also decouple the errors of part segmentation and panoptic segmentation. Thirdly, inspired by Mask2Former and based on our meta-architecture, we propose Panoptic-PartFormer++ and design a new part-whole cross-attention scheme to further boost part segmentation quality. We design a new part-whole interaction method using masked cross attention. Finally, extensive ablation studies and analysis demonstrate the effectiveness of both Panoptic-PartFormer and Panoptic-PartFormer++. Compared with the previous Panoptic-PartFormer, our Panoptic-PartFormer++ achieves 2% PartPQ and 3% PWQ improvements on the Cityscapes PPS dataset and 5% PartPQ on the Pascal Context PPS dataset. On both datasets, Panoptic-PartFormer++ achieves new state-of-the-art results with a significant cost drop of 70% in GFlops and 50% in parameters. Our models can serve as a strong baseline and aid future research in PPS. Code will be available.
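The part-whole masked cross attention can be pictured as below (a hedged sketch, not the released code): part queries attend only to pixels that the predicted thing/stuff masks mark as foreground, following a Mask2Former-style attention mask.

```python
import torch
import torch.nn as nn

B, Np, HW, C, heads = 2, 50, 1024, 256, 8
part_queries = torch.randn(B, Np, C)
pixel_feats = torch.randn(B, HW, C)
whole_masks = torch.rand(B, Np, HW)                 # predicted thing/stuff mask probability per part query

attn = nn.MultiheadAttention(C, num_heads=heads, batch_first=True)
# Mask2Former-style boolean attention mask: True = this position may NOT be attended.
attn_mask = (whole_masks < 0.5)
attn_mask = attn_mask.repeat_interleave(heads, dim=0)   # expand to (B * heads, Np, HW)
out, _ = attn(part_queries, pixel_feats, pixel_feats, attn_mask=attn_mask)
part_queries = part_queries + out                        # part queries refined by their whole-level regions
```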
Few-shot object detection (FSOD), which aims at learning a generic detector that can adapt to unseen tasks with scarce training samples, has witnessed consistent improvement recently. However, most existing methods ignore efficiency issues, e.g., high computational complexity and slow adaptation speed. Notably, efficiency has become an increasingly important evaluation metric for few-shot techniques due to an emerging trend toward embedded AI. To this end, we present an efficient pretrain-transfer framework (PTF) baseline with no computational increment, which achieves results comparable to previous state-of-the-art (SOTA) methods. Upon this baseline, we devise an initializer named knowledge inheritance (KI) to reliably initialize the novel weights for the box classifier, which effectively facilitates the knowledge transfer process and boosts the adaptation speed. Within the KI initializer, we propose an adaptive length re-scaling (ALR) strategy to alleviate the vector length inconsistency between the predicted novel weights and the pretrained base weights. Finally, our approach not only achieves SOTA results across three public benchmarks, i.e., PASCAL VOC, COCO and LVIS, but also exhibits high efficiency with 1.8-100x faster adaptation speed than other methods on the COCO/LVIS benchmarks during few-shot transfer. To the best of our knowledge, this is the first work to consider the efficiency problem in FSOD. We hope to motivate a trend toward powerful yet efficient few-shot technique development. The code is publicly available at https://github.com/Ze-Yang/Efficient-FSOD.
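The adaptive length re-scaling idea can be illustrated with a small sketch (our paraphrase under assumed shapes, not the released code): the predicted novel classifier weights are re-scaled so that their vector length matches the average length of the pretrained base weights.

```python
import torch

def rescale_novel_weights(novel_w, base_w, eps=1e-8):
    """novel_w: (num_novel, D) predicted weights; base_w: (num_base, D) pretrained base weights."""
    target_norm = base_w.norm(dim=1).mean()              # average length of base-class weight vectors
    novel_norm = novel_w.norm(dim=1, keepdim=True) + eps
    return novel_w / novel_norm * target_norm             # keep direction, align vector length

base_w = torch.randn(60, 1024)
novel_w = torch.randn(20, 1024) * 0.1                      # e.g. initialized from few-shot features
novel_w = rescale_novel_weights(novel_w, base_w)
```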
Conventional training of a deep CNN based object detector demands a large number of bounding box annotations, which may be unavailable for rare categories. In this work we develop a few-shot object detector that can learn to detect novel objects from only a few annotated examples. Our proposed model leverages fully labeled base classes and quickly adapts to novel classes, using a meta feature learner and a reweighting module within a one-stage detection architecture. The feature learner extracts meta features that are generalizable to detect novel object classes, using training data from base classes with sufficient samples. The reweighting module transforms a few support examples from the novel classes to a global vector that indicates the importance or relevance of meta features for detecting the corresponding objects. These two modules, together with a detection prediction module, are trained end-to-end based on an episodic few-shot learning scheme and a carefully designed loss function. Through extensive experiments we demonstrate that our model outperforms well-established baselines by a large margin for few-shot object detection, on multiple datasets and settings. We also present analysis on various aspects of our proposed model, aiming to provide some inspiration for future few-shot detection works.
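The reweighting module described above amounts to turning each support example into a channel-wise scaling vector applied to the query meta features; a minimal sketch follows (shapes and layer choices are assumptions, not the paper's exact network).

```python
import torch
import torch.nn as nn

class ReweightingSketch(nn.Module):
    def __init__(self, in_ch=4, feat_ch=1024):
        super().__init__()
        # support input: an RGB image stacked with its binary object mask (4 channels)
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, feat_ch, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
        )

    def forward(self, meta_feat, support_img_mask):
        # meta_feat: (B, C, H, W) query meta features; support_img_mask: (1, 4, Hs, Ws)
        w = self.encoder(support_img_mask).view(1, -1, 1, 1)  # (1, C, 1, 1) class-specific vector
        return meta_feat * w                                  # channel-wise reweighting for this class

m = ReweightingSketch()
out = m(torch.randn(2, 1024, 38, 38), torch.randn(1, 4, 128, 128))
```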
Detection Transformer (DETR) and Deformable DETR have been proposed to eliminate the need for many hand-designed components in object detection while demonstrating performance on par with previous complex hand-crafted detectors. However, their performance on Video Object Detection (VOD) has not been well explored. In this paper, we present TransVOD, the first end-to-end video object detection system based on spatial-temporal Transformer architectures. The first goal of this paper is to streamline the pipeline of VOD, effectively removing the need for many hand-crafted components for feature aggregation, e.g., optical flow models and relation networks. Besides, benefiting from the object query design in DETR, our method does not need complicated post-processing methods such as Seq-NMS. In particular, we present a temporal Transformer to aggregate both the spatial object queries and the feature memories of each frame. Our temporal Transformer consists of two components: a Temporal Query Encoder (TQE) to fuse object queries, and a Temporal Deformable Transformer Decoder (TDTD) to obtain the current-frame detection results. These designs boost the strong deformable DETR baseline by a significant margin (2%-4% mAP) on the ImageNet VID dataset. TransVOD yields comparable performance on the ImageNet VID benchmark. Then, we present two improved versions of TransVOD: TransVOD++ and TransVOD Lite. The former fuses object-level information into the object query via dynamic convolution, while the latter models the entire video clip as the output to speed up inference. We give a detailed analysis of all three models in the experiment section. In particular, our proposed TransVOD++ sets a new state-of-the-art record in terms of accuracy on ImageNet VID with 90.0% mAP. Our proposed TransVOD Lite also achieves the best speed-accuracy trade-off with 83.7% mAP while running at around 30 FPS on a single V100 GPU. Code and models will be available for further research.
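A hedged sketch of the temporal query fusion: object queries from several frames are concatenated along the token axis and fused with a standard Transformer encoder (layer sizes below are assumptions, not the released configuration).

```python
import torch
import torch.nn as nn

T, N, C = 4, 100, 256                          # frames, queries per frame, embedding dim
frame_queries = torch.randn(1, T * N, C)       # per-frame decoder outputs, concatenated over time

tqe_layer = nn.TransformerEncoderLayer(d_model=C, nhead=8, batch_first=True)
temporal_query_encoder = nn.TransformerEncoder(tqe_layer, num_layers=3)

fused_queries = temporal_query_encoder(frame_queries)    # (1, T*N, C) temporally fused object queries
current_frame_queries = fused_queries[:, -N:, :]          # queries fed to the temporal decoder for the current frame
```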
Open world object detection aims at detecting objects that are absent from the object classes of the training data as unknown objects without explicit supervision. Furthermore, the exact classes of the unknown objects must be identified without catastrophic forgetting of the previously known classes when the corresponding annotations of the unknown objects are given incrementally. In this paper, we propose a two-stage training approach named Open World DETR for open world object detection based on Deformable DETR. In the first stage, we pre-train a model on the current annotated data to detect objects from the current known classes, and concurrently train an additional binary classifier to classify predictions into foreground or background classes. This helps the model build unbiased feature representations that facilitate the detection of unknown classes in the subsequent process. In the second stage, we fine-tune the class-specific components of the model with a multi-view self-labeling strategy and a consistency constraint. Furthermore, we alleviate catastrophic forgetting when the annotations of the unknown classes become available incrementally by using knowledge distillation and exemplar replay. Experimental results on PASCAL VOC and MS-COCO show that our proposed method outperforms other state-of-the-art open world object detection methods by a large margin.
Research into few-shot semantic segmentation (FSS) has attracted great attention, with the goal of segmenting objects of a target class in a query image given only a few annotated support images of that class. The key to this challenging task is to make full use of the information in the support images by exploiting fine-grained correlations between the query and support images. However, most existing methods either compress the support information into a few class-wise prototypes or use partial support information at the pixel level (e.g., only the foreground), leading to non-negligible information loss. In this paper, we propose Dense pixel-wise Cross-query-and-support Attention weighted Mask Aggregation (DCAMA), in which both foreground and background support information are fully exploited through multi-level, pixel-wise correlations between paired query and support features. Implemented with scaled dot-product attention in a Transformer architecture, DCAMA treats every query pixel as a token, computes its similarities with all support pixels, and predicts its segmentation label as an additive aggregation of all the support pixels' labels, weighted by these similarities. Based on this unique formulation of DCAMA, we further propose efficient and effective one-pass inference for n-shot segmentation, where the pixels of all support images are collected at once for the mask aggregation. Experiments show that our DCAMA significantly advances the state of the art, surpassing the previous best records on the standard FSS benchmarks PASCAL-5i, COCO-20i, and FSS-1000. Ablation studies also validate the design of DCAMA.
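The core aggregation step reads directly as scaled dot-product attention from query pixels to support pixels, with the support mask labels as values; the sketch below is a single-scale, single-head simplification of that idea.

```python
import torch

B, C, H, W = 2, 256, 32, 32
query_feat = torch.randn(B, C, H, W)
support_feat = torch.randn(B, C, H, W)
support_mask = torch.randint(0, 2, (B, 1, H, W)).float()        # foreground/background labels

q = query_feat.flatten(2).transpose(1, 2)                       # (B, HW, C)  query pixels as tokens
k = support_feat.flatten(2).transpose(1, 2)                     # (B, HW, C)  support pixels as tokens
v = support_mask.flatten(2).transpose(1, 2)                     # (B, HW, 1)  support pixel labels

attn = torch.softmax(q @ k.transpose(1, 2) / C ** 0.5, dim=-1)  # pixel-wise query-support similarity
pred_mask = (attn @ v).transpose(1, 2).view(B, 1, H, W)         # similarity-weighted label aggregation
```

For n-shot inference, k and v would simply gather the pixels of all support images at once, as described in the abstract.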
Without densely tiled anchor boxes or grid points in the image, Sparse R-CNN achieves promising results through a set of object queries and proposal boxes that are updated in a cascaded training manner. However, due to its sparse nature and the one-to-one relation between each object query and its attended region, it relies heavily on self-attention, which is usually inaccurate in the early training stage. Moreover, in scenes with dense objects, an object query interacts with many irrelevant ones, reducing its uniqueness and harming performance. This paper proposes to use the IoU between different boxes as a prior for value routing in self-attention. The original attention matrix is multiplied by a same-sized matrix computed from the IoU of the proposal boxes, and together they determine the routing scheme so that irrelevant features can be suppressed. Furthermore, to accurately extract features for classification and regression, we add two lightweight projection heads that provide dynamic channel masks based on the object queries, and these are multiplied with the outputs of the dynamic convolutions, making the results suitable for the two different tasks. We validate the proposed scheme on different datasets, including MS-COCO and CrowdHuman, showing that it significantly improves performance and speeds up model convergence.
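The IoU prior can be sketched as an element-wise gate on the self-attention matrix between object queries (an illustrative simplification, with torchvision's box_iou standing in for the paper's exact formulation).

```python
import torch
from torchvision.ops import box_iou

N, C = 100, 256
queries = torch.randn(N, C)
boxes = torch.rand(N, 4)
boxes[:, 2:] = boxes[:, :2] + boxes[:, 2:]                        # make valid (x1, y1, x2, y2) boxes

attn = torch.softmax(queries @ queries.t() / C ** 0.5, dim=-1)    # plain self-attention weights
iou_prior = box_iou(boxes, boxes)                                  # (N, N) geometric overlap prior
routed = attn * iou_prior                                          # suppress routing between distant boxes
routed = routed / (routed.sum(dim=-1, keepdim=True) + 1e-6)        # renormalize the routing weights
out = routed @ queries
```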
Few-shot semantic segmentation aims to segment novel-class objects given only a few labeled support images. Most advanced solutions exploit a metric learning framework that performs segmentation by matching each query feature to a learned class-specific prototype. However, this framework suffers from biased classification due to incomplete feature comparison. To address this issue, we present an adaptive prototype representation by introducing class-specific and class-agnostic prototypes, constructing complete sample pairs for learning semantic alignment with the query features. This complementary feature learning manner effectively enriches feature comparison and helps produce an unbiased segmentation model in the few-shot setting. It is implemented with a two-branch end-to-end network (i.e., a class-specific branch and a class-agnostic branch), which generates prototypes and then combines them with the query features to perform the comparison. Moreover, the proposed class-agnostic branch is simple yet effective. In practice, it can adaptively generate multiple class-agnostic prototypes for the query image and learn feature alignment in a self-contrastive manner. Extensive experiments on PASCAL-5$^i$ and COCO-20$^i$ demonstrate the superiority of our method. Without sacrificing inference efficiency, our model achieves state-of-the-art results in both the 1-shot and 5-shot settings for semantic segmentation.
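A minimal sketch of the two kinds of prototypes (our own simplification, not the paper's network): the class-specific prototype is a masked average of support features, while the class-agnostic prototypes here are obtained by adaptive pooling over the query feature map; both are compared with query features by cosine similarity.

```python
import torch
import torch.nn.functional as F

B, C, H, W = 2, 256, 32, 32
support_feat = torch.randn(B, C, H, W)
support_mask = torch.randint(0, 2, (B, 1, H, W)).float()
query_feat = torch.randn(B, C, H, W)

# class-specific prototype: masked average pooling of the support features
cls_proto = (support_feat * support_mask).sum(dim=(2, 3)) / (support_mask.sum(dim=(2, 3)) + 1e-6)   # (B, C)

# class-agnostic prototypes: pooled from the query image itself (here, a simple 2x2 grid pooling)
agn_protos = F.adaptive_avg_pool2d(query_feat, 2).flatten(2).transpose(1, 2)                         # (B, 4, C)

q = query_feat.flatten(2).transpose(1, 2)                                                            # (B, HW, C)
sim_specific = F.cosine_similarity(q, cls_proto.unsqueeze(1), dim=-1)                                # (B, HW)
sim_agnostic = F.cosine_similarity(q.unsqueeze(2), agn_protos.unsqueeze(1), dim=-1).max(dim=-1).values  # (B, HW)
```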
In this work, we focus on instance-level open vocabulary segmentation, intending to expand a segmenter for instance-wise novel categories without mask annotations. We investigate a simple yet effective framework with the help of image captions, focusing on exploiting thousands of object nouns in captions to discover instances of novel classes. Rather than adopting pretrained caption models or using massive caption datasets with complex pipelines, we propose an end-to-end solution from two aspects: caption grounding and caption generation. In particular, we devise a joint Caption Grounding and Generation (CGG) framework based on a Mask Transformer baseline. The framework has a novel grounding loss that performs explicit and implicit multi-modal feature alignments. We further design a lightweight caption generation head to allow for additional caption supervision. We find that grounding and generation complement each other, significantly enhancing the segmentation performance for novel categories. We conduct extensive experiments on the COCO dataset with two settings: Open Vocabulary Instance Segmentation (OVIS) and Open Set Panoptic Segmentation (OSPS). The results demonstrate the superiority of our CGG framework over previous OVIS methods, achieving a large improvement of 6.8% mAP on novel classes without extra caption data. Our method also achieves over 15% PQ improvements for novel classes on the OSPS benchmark under various settings.
Few-shot object detection (FSOD), which aims to detect novel objects using very few training examples, has recently attracted great research interest in the community. Metric-learning based methods have been shown to be effective for this task, using a two-branch Siamese network and computing the similarity between image regions and the few-shot examples for detection. However, in previous works the interaction between the two branches is restricted to the detection head, while the remaining hundreds of layers are left for separate feature extraction. Inspired by recent work on vision transformers and vision-language transformers, we propose a novel cross-transformer based model (FCT) for FSOD by incorporating the cross-transformer into both the feature backbone and the detection head. Asymmetric-batched cross-attention is proposed to aggregate the key information from the two branches with different batch sizes. Our model can improve the few-shot similarity learning between the two branches by introducing multi-level interactions. Comprehensive experiments on both the PASCAL VOC and MSCOCO FSOD benchmarks demonstrate the effectiveness of our model.
Open-vocabulary object detection, which is concerned with the problem of detecting novel objects guided by natural language, has gained increasing attention from the community. Ideally, we would like to extend an open-vocabulary detector such that it can produce bounding box predictions based on user inputs in the form of either natural language or an exemplar image. This offers great flexibility and user experience for human-computer interaction. To this end, we propose a novel open-vocabulary detector based on DETR -- hence the name OV-DETR -- which, once trained, can detect any object given its class name or an exemplar image. The biggest challenge of turning DETR into an open-vocabulary detector is that it is impossible to calculate the classification cost matrix of novel classes without access to their labeled images. To overcome this challenge, we formulate the learning objective as binary matching between input queries (class name or exemplar image) and the corresponding objects, which learns useful correspondence that generalizes to unseen queries during testing. For training, we choose to condition the Transformer decoder on the input embeddings obtained from a pre-trained vision-language model like CLIP, in order to enable matching for both text and image queries. With extensive experiments on the LVIS and COCO datasets, we demonstrate that our OV-DETR -- the first end-to-end Transformer-based open-vocabulary detector -- achieves non-trivial improvements over the current state of the art.
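The conditioning step can be sketched as below (an assumption-laden simplification; the variable clip_embed stands for an embedding obtained from a frozen, pre-trained CLIP model for either a class name or an exemplar image): each object query is shifted by the projected conditional embedding, and the decoder output is scored with a binary matching head instead of a closed-set classifier.

```python
import torch
import torch.nn as nn

num_queries, C, clip_dim = 100, 256, 512
object_queries = nn.Parameter(torch.randn(num_queries, C))
proj = nn.Linear(clip_dim, C)
binary_head = nn.Linear(C, 1)            # binary matching: does this prediction match the conditioned query?

clip_embed = torch.randn(1, clip_dim)    # text or exemplar-image embedding (assumed given by CLIP)
conditioned = object_queries + proj(clip_embed)   # (num_queries, C), broadcast over all queries

decoder_out = conditioned                 # stand-in for the conditioned Transformer decoder output
match_logits = binary_head(decoder_out)   # (num_queries, 1) objectness w.r.t. the conditioning query
```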
Few-shot segmentation aims to segment unseen-class objects given only a few labeled samples. Prototype learning, in which the support features yield a single prototype by averaging global and local object information, has been widely used in FSS. However, utilizing only a prototype vector may be insufficient to represent the features of all the training data. To extract rich features and make more precise predictions, we propose a Multi-Similarity and Attention Network (MSANet) that includes two novel modules: a multi-similarity module and an attention module. The multi-similarity module exploits multiple feature maps of the support and query images to estimate accurate semantic relationships. The attention module instructs the network to concentrate on class-relevant information. The network is evaluated on the standard FSS datasets PASCAL-5i 1-shot, PASCAL-5i 5-shot, COCO-20i 1-shot, and COCO-20i 5-shot. MSANet with a ResNet-101 backbone achieves state-of-the-art performance on all four benchmark datasets, with mean intersection over union (mIoU) of 69.13%, 73.99%, 51.09%, and 56.80%, respectively. Code is available at https://github.com/aivresearch/msanet
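The multi-similarity idea can be sketched as cosine-similarity maps computed between query features and masked support prototypes at several backbone levels (a simplification under assumed shapes, not the released code).

```python
import torch
import torch.nn.functional as F

def similarity_map(query_feat, support_feat, support_mask):
    # query_feat / support_feat: (B, C, H, W); support_mask: (B, 1, Hs, Ws)
    mask = F.interpolate(support_mask, size=support_feat.shape[-2:], mode="nearest")
    proto = (support_feat * mask).sum(dim=(2, 3)) / (mask.sum(dim=(2, 3)) + 1e-6)     # (B, C)
    return F.cosine_similarity(query_feat, proto[:, :, None, None], dim=1, eps=1e-6)   # (B, H, W)

# one similarity map per backbone level, then stacked as extra guidance channels
feats = [(torch.randn(2, c, s, s), torch.randn(2, c, s, s)) for c, s in [(512, 64), (1024, 32), (2048, 16)]]
mask = torch.randint(0, 2, (2, 1, 256, 256)).float()
sims = [similarity_map(q, s, mask) for q, s in feats]
sims = torch.stack([F.interpolate(m.unsqueeze(1), size=(64, 64), mode="bilinear", align_corners=False)
                    for m in sims], dim=1).squeeze(2)        # (B, levels, 64, 64)
```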
Most existing 3D point cloud object detection approaches heavily rely on large amounts of labeled training data. However, the labeling process is costly and time-consuming. This paper considers few-shot 3D point cloud object detection, where only a few annotated samples of novel classes are needed with abundant samples of base classes. To this end, we propose Prototypical VoteNet to recognize and localize novel instances, which incorporates two new modules: Prototypical Vote Module (PVM) and Prototypical Head Module (PHM). Specifically, as the 3D basic geometric structures can be shared among categories, PVM is designed to leverage class-agnostic geometric prototypes, which are learned from base classes, to refine local features of novel categories. Then PHM is proposed to utilize class prototypes to enhance the global feature of each object, facilitating subsequent object localization and classification, which is trained by the episodic training strategy. To evaluate the model in this new setting, we contribute two new benchmark datasets, FS-ScanNet and FS-SUNRGBD. We conduct extensive experiments to demonstrate the effectiveness of Prototypical VoteNet, and our proposed method shows significant and consistent improvements compared to baselines on two benchmark datasets.
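A hedged sketch of the prototype refinement idea (not the released code; the prototype bank and attention form are assumptions): local point features attend to a bank of class-agnostic geometric prototypes learned on base classes, and the attended readout refines the features.

```python
import torch
import torch.nn as nn

num_prototypes, C, num_points = 64, 128, 1024
prototype_bank = nn.Parameter(torch.randn(num_prototypes, C))   # class-agnostic geometric prototypes
attn = nn.MultiheadAttention(C, num_heads=4, batch_first=True)

point_feats = torch.randn(2, num_points, C)                      # per-point features from the backbone
protos = prototype_bank.unsqueeze(0).expand(2, -1, -1)           # share the bank across the batch
refined, _ = attn(point_feats, protos, protos)                   # points read out matching geometric patterns
point_feats = point_feats + refined                              # refined local features for voting/detection
```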