Open-vocabulary object detection, which is concerned with the problem of detecting novel objects guided by natural language, has gained increasing attention from the community. Ideally, we would like to extend an open-vocabulary detector such that it can produce bounding box predictions based on user inputs in the form of either natural language or an exemplar image. This offers great flexibility and a better user experience for human-computer interaction. To this end, we propose a novel open-vocabulary detector based on DETR -- hence the name OV-DETR -- which, once trained, can detect any object given its class name or an exemplar image. The biggest challenge of turning DETR into an open-vocabulary detector is that it is impossible to calculate the classification cost matrix of novel classes without access to their labeled images. To overcome this challenge, we formulate the learning objective as a binary matching one between input queries (class name or exemplar image) and the corresponding objects, which learns useful correspondence that generalizes to unseen queries during testing. For training, we condition the Transformer decoder on input embeddings obtained from a pre-trained vision-language model such as CLIP, in order to enable matching for both text and image queries. With extensive experiments on the LVIS and COCO datasets, we demonstrate that our OV-DETR -- the first end-to-end Transformer-based open-vocabulary detector -- achieves non-trivial improvements over the current state of the art.
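A minimal sketch (in PyTorch, not the authors' code) of the conditioning idea described above: object queries are shifted by a CLIP embedding of the class name or exemplar image, and each decoder output is scored with a binary "does it match the conditioning input?" head. All module names, dimensions, and the stand-in decoder are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ConditionalQueries(nn.Module):
    def __init__(self, num_queries=100, hidden_dim=256, clip_dim=512):
        super().__init__()
        self.query_embed = nn.Embedding(num_queries, hidden_dim)
        self.cond_proj = nn.Linear(clip_dim, hidden_dim)  # project CLIP text/image embedding
        self.binary_head = nn.Linear(hidden_dim, 1)       # binary matching score per query
        self.box_head = nn.Linear(hidden_dim, 4)          # (cx, cy, w, h)

    def forward(self, decoder, memory, clip_embedding):
        # clip_embedding: (B, clip_dim), the text embedding of a class name or the
        # image embedding of an exemplar crop, both from a frozen CLIP model.
        cond = self.cond_proj(clip_embedding).unsqueeze(1)      # (B, 1, D)
        queries = self.query_embed.weight.unsqueeze(0) + cond   # condition every object query
        hs = decoder(queries, memory)                           # (B, num_queries, D)
        match_logits = self.binary_head(hs).squeeze(-1)         # match-the-query scores
        boxes = self.box_head(hs).sigmoid()
        return match_logits, boxes

# Toy usage with a stand-in decoder (a real model would use a Transformer decoder).
decoder = lambda q, mem: q
model = ConditionalQueries()
logits, boxes = model(decoder, torch.randn(2, 600, 256), torch.randn(2, 512))
```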
Existing object detection methods are bounded to a fixed-set vocabulary by costly labeled data. When dealing with novel categories, the model has to be retrained with more bounding box annotations. Natural language supervision is an attractive alternative for its annotation-free attributes and broader object concepts. However, learning open-vocabulary object detection from language is challenging since image-text pairs do not contain fine-grained object-language alignments. Previous solutions rely on either expensive grounding annotations or distilling classification-oriented vision models. In this paper, we propose a novel open-vocabulary object detection framework that learns directly from image-text pair data. We formulate object-language alignment as a set matching problem between a set of image region features and a set of word embeddings. This enables us to train an open-vocabulary object detector on image-text pairs in a much simpler and more effective way. Extensive experiments on two benchmark datasets, COCO and LVIS, demonstrate our superior performance over competing approaches on novel categories, e.g., achieving 32.0% mAP on COCO and 21.7% mask mAP on LVIS. Code is available at: https://github.com/clin1223/VLDet.
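The set-matching formulation can be illustrated with a small, hedged sketch: compute a region-word similarity matrix, solve a bipartite assignment with the Hungarian algorithm, and use the matched pairs in an alignment loss. Shapes and the temperature are assumptions, not the paper's exact recipe.

```python
import torch
import torch.nn.functional as F
from scipy.optimize import linear_sum_assignment

def align_regions_to_words(region_feats, word_embeds, temperature=0.07):
    # region_feats: (num_regions, dim), word_embeds: (num_words, dim)
    region_feats = F.normalize(region_feats, dim=-1)
    word_embeds = F.normalize(word_embeds, dim=-1)
    sim = region_feats @ word_embeds.t()                        # (num_regions, num_words)
    cost = (-sim / temperature).detach().cpu().numpy()          # lower cost = better match
    row_idx, col_idx = linear_sum_assignment(cost)              # one region per word
    rows = torch.as_tensor(row_idx, device=sim.device)
    cols = torch.as_tensor(col_idx, device=sim.device)
    # Alignment loss: each matched word should prefer its assigned region over all regions.
    logits = (sim[:, cols] / temperature).t()                   # (num_matched, num_regions)
    loss = F.cross_entropy(logits, rows)
    return list(zip(row_idx, col_idx)), loss

# Toy usage: 50 region proposals, 6 object words from a caption.
pairs, loss = align_regions_to_words(torch.randn(50, 512), torch.randn(6, 512))
```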
Current object detectors are limited in vocabulary size by the small scale of detection datasets. Image classifiers, on the other hand, reason about much larger vocabularies, because their datasets are larger and easier to collect. We propose Detic, which simply trains a detector's classifier on image classification data, thereby expanding the detector's vocabulary to tens of thousands of concepts. Unlike prior work, Detic does not assign image labels to boxes based on model predictions, which makes it much easier to implement and compatible with a range of detection architectures and backbones. Our results show that Detic yields excellent detectors even for classes without box annotations. It outperforms prior work on open-vocabulary and long-tail detection benchmarks. Detic provides a gain of 2.4 mAP for all classes and 8.3 mAP for novel classes on the open-vocabulary LVIS benchmark. On the standard LVIS benchmark, Detic reaches 41.7 mAP on all classes and 41.7 mAP on rare classes. For the first time, we train a detector on all twenty-one-thousand classes of the ImageNet dataset and show that it generalizes to new datasets without fine-tuning. Code is available at https://github.com/facebookresearch/Detic.
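One hedged way to picture "training the classifier on image classification data": for an image with only class labels, apply the classification loss to a single proposal. The choice of the largest proposal below is a simplification made for illustration, not a claim about Detic's exact rule, and all shapes are assumptions.

```python
import torch
import torch.nn.functional as F

def image_label_loss(proposal_boxes, proposal_feats, classifier_weights, image_labels):
    # proposal_boxes: (N, 4) in (x1, y1, x2, y2); proposal_feats: (N, D)
    # classifier_weights: (num_classes, D), e.g. text embeddings of class names
    # image_labels: (num_classes,) multi-hot image-level labels
    areas = (proposal_boxes[:, 2] - proposal_boxes[:, 0]) * \
            (proposal_boxes[:, 3] - proposal_boxes[:, 1])
    biggest = areas.argmax()                                    # pick the largest proposal
    logits = proposal_feats[biggest] @ classifier_weights.t()   # (num_classes,)
    return F.binary_cross_entropy_with_logits(logits, image_labels.float())

# Toy usage
boxes = torch.tensor([[0., 0., 50., 50.], [10., 10., 300., 200.]])
labels = torch.zeros(80); labels[3] = 1
loss = image_label_loss(boxes, torch.randn(2, 512), torch.randn(80, 512), labels)
```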
Combining simple architectures with large-scale pre-training has led to massive improvements in image classification. For object detection, pre-training and scaling approaches are less well established, especially in the long-tailed and open-vocabulary settings, where training data is relatively scarce. In this paper, we propose a strong recipe for transferring image-text models to open-vocabulary object detection. We use a standard Vision Transformer architecture with minimal modifications, contrastive image-text pre-training, and end-to-end detection fine-tuning. Our analysis of the scaling properties of this setup shows that increasing image-level pre-training and model size yields consistent improvements on the downstream detection task. We provide the adaptation strategies and regularizations needed to attain very strong performance on zero-shot text-conditioned and one-shot image-conditioned object detection. Code and models are available on GitHub.
Despite great progress in object detection, most existing methods are limited to a small set of object categories due to the enormous human effort required for instance-level bounding-box annotation. To alleviate the problem, recent open-vocabulary and zero-shot detection methods attempt to detect object categories not seen during training. However, these approaches still rely on manually provided bounding-box annotations for a set of base classes. We propose an open-vocabulary detection framework that can be trained without manually provided bounding-box annotations. Our method achieves this by leveraging the localization ability of pre-trained vision-language models to generate pseudo bounding-box labels that can be used directly to train an object detector. Experimental results on COCO, PASCAL VOC, Objects365, and LVIS demonstrate the effectiveness of our method. Specifically, our method outperforms the state of the art (SOTA) trained with human-annotated bounding boxes by 3% AP on COCO novel categories, even though our training source carries no manual bounding-box labels. When manual bounding-box labels are utilized as our baselines do, our method surpasses the SOTA largely, by 8% AP.
In this work, we focus on instance-level open vocabulary segmentation, intending to expand a segmenter for instance-wise novel categories without mask annotations. We investigate a simple yet effective framework with the help of image captions, focusing on exploiting thousands of object nouns in captions to discover instances of novel classes. Rather than adopting pretrained caption models or using massive caption datasets with complex pipelines, we propose an end-to-end solution from two aspects: caption grounding and caption generation. In particular, we devise a joint Caption Grounding and Generation (CGG) framework based on a Mask Transformer baseline. The framework has a novel grounding loss that performs explicit and implicit multi-modal feature alignments. We further design a lightweight caption generation head to allow for additional caption supervision. We find that grounding and generation complement each other, significantly enhancing the segmentation performance for novel categories. We conduct extensive experiments on the COCO dataset with two settings: Open Vocabulary Instance Segmentation (OVIS) and Open Set Panoptic Segmentation (OSPS). The results demonstrate the superiority of our CGG framework over previous OVIS methods, achieving a large improvement of 6.8% mAP on novel classes without extra caption data. Our method also achieves over 15% PQ improvements for novel classes on the OSPS benchmark under various settings.
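A hedged sketch of what a caption-grounding loss could look like: object-query embeddings are softly pooled per caption noun and contrasted against the noun embeddings, giving both an implicit (attention) and explicit (contrastive) alignment. The pooling scheme and temperature are assumptions, not the CGG paper's exact loss.

```python
import torch
import torch.nn.functional as F

def grounding_loss(query_embeds, noun_embeds, temperature=0.07):
    # query_embeds: (num_queries, dim) from the mask transformer decoder
    # noun_embeds:  (num_nouns, dim) text embeddings of object nouns in the caption
    q = F.normalize(query_embeds, dim=-1)
    t = F.normalize(noun_embeds, dim=-1)
    sim = q @ t.t() / temperature          # (num_queries, num_nouns)
    # Soft grounding: each noun attends over the queries, producing one grounded feature.
    attn = sim.softmax(dim=0)
    grounded = F.normalize(attn.t() @ q, dim=-1)     # (num_nouns, dim)
    # Pull each grounded feature toward its own noun and away from the other nouns.
    logits = grounded @ t.t() / temperature
    target = torch.arange(t.size(0), device=logits.device)
    return F.cross_entropy(logits, target)

# Toy usage: 100 object queries, 4 nouns extracted from the paired caption.
loss = grounding_loss(torch.randn(100, 256), torch.randn(4, 256))
```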
Open vocabulary object detection has been greatly advanced by the recent development of vision-language pretrained models, which help recognize novel objects from semantic categories alone. Prior works mainly focus on knowledge transfer to the object proposal classification and employ class-agnostic box and mask prediction. In this work, we propose CondHead, a principled dynamic network design to better generalize box regression and mask segmentation to the open-vocabulary setting. The core idea is to conditionally parameterize the network heads on semantic embeddings, so that the model is guided by class-specific knowledge to better detect novel categories. Specifically, CondHead is composed of two streams of network heads: the dynamically aggregated head and the dynamically generated head. The former is instantiated with a set of static heads that are conditionally aggregated; these heads are optimized as experts and are expected to learn sophisticated predictions. The latter is instantiated with dynamically generated parameters and encodes general class-specific information. With this conditional design, the detection model is bridged by the semantic embeddings to offer strongly generalizable class-wise box and mask prediction. Our method brings significant improvement to state-of-the-art open vocabulary object detection methods with very minor overhead, e.g., it surpasses a RegionCLIP model by 3.0 detection AP on novel categories, with only 1.1% more computation.
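A hedged sketch of the first stream (the dynamically aggregated head): a small router maps the class's semantic embedding to mixture weights over a set of static expert heads. The second, dynamically generated stream is omitted here, and all sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ConditionallyAggregatedBoxHead(nn.Module):
    def __init__(self, feat_dim=256, embed_dim=512, num_experts=4):
        super().__init__()
        self.experts = nn.ModuleList([nn.Linear(feat_dim, 4) for _ in range(num_experts)])
        self.router = nn.Linear(embed_dim, num_experts)   # semantic embedding -> expert weights

    def forward(self, roi_feats, class_embed):
        # roi_feats: (N, feat_dim); class_embed: (embed_dim,), e.g. a CLIP text embedding
        weights = self.router(class_embed).softmax(dim=-1)               # (E,)
        expert_out = torch.stack([e(roi_feats) for e in self.experts])   # (E, N, 4)
        return (weights[:, None, None] * expert_out).sum(dim=0)          # (N, 4) box deltas

# Toy usage: regress 10 RoIs conditioned on one class embedding.
head = ConditionallyAggregatedBoxHead()
deltas = head(torch.randn(10, 256), torch.randn(512))
```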
Few-shot object detection has been extensively studied by incorporating meta-learning into region-based detection frameworks. Despite its success, this paradigm is still constrained by several factors, such as (i) low-quality region proposals for novel classes and (ii) neglect of the inter-class correlation among different classes. These limitations hinder the generalization of base-class knowledge to the detection of novel-class objects. In this work, we design Meta-DETR, which (i) is the first image-level few-shot detector and (ii) introduces a novel inter-class correlational meta-learning strategy to capture and leverage the correlation among different classes for robust and accurate few-shot object detection. Meta-DETR works entirely at the image level without any region proposals, which circumvents the constraint of inaccurate proposals in prevalent few-shot detection frameworks. In addition, the introduced correlational meta-learning enables Meta-DETR to attend to multiple support classes simultaneously within a single feed-forward pass, capturing the inter-class correlation among different classes, which greatly reduces misclassification among similar classes and enhances knowledge generalization to novel classes. Experiments on multiple few-shot object detection benchmarks show that the proposed Meta-DETR outperforms state-of-the-art methods by large margins. The implementation code is available at https://github.com/zhanggongjie/meta-detr.
Open-world object detection, a more general and challenging goal, aims to recognize and localize objects described by arbitrary category names. The recent work GLIP formulates this as a grounding problem by concatenating all category names of a detection dataset into sentences, which leads to inefficient interaction between category names. This paper presents DetCLIP, a paralleled visual-concept pre-training method for open-world detection that resorts to knowledge enrichment from a designed concept dictionary. To improve learning efficiency, we propose a novel paralleled concept formulation that extracts concepts separately so as to better utilize heterogeneous datasets (i.e., detection, grounding, and image-text pairs) for training. We further design a concept dictionary (with descriptions) from various online resources and detection datasets to provide prior knowledge for each concept. By enriching the concepts with their descriptions, we explicitly build relationships among the various concepts to facilitate open-domain learning. The proposed concept dictionary is further used to provide sufficient negative concepts for constructing the word-region alignment loss, and to complete labels for objects whose descriptions are missing from the captions of image-text pair data. The proposed framework demonstrates strong zero-shot detection performance; for example, on the LVIS dataset our DetCLIP-T outperforms GLIP-T by 9.9% mAP and obtains a 13.5% improvement on rare categories compared with a fully supervised model with the same backbone as ours.
Advancing object detection to the open-vocabulary and few-shot transfer settings has long been a challenge for computer vision research. This work explores a continual learning approach that enables a detector to expand its zero-/few-shot capability via multi-dataset vision-language pre-training. Using natural language as the knowledge representation, we explore ways to accumulate a "visual vocabulary" from different training datasets and unify the task as a language-conditioned detection framework. Specifically, we propose OmDet, a novel language-aware detector, together with a novel training mechanism. The proposed multimodal detection network resolves the technical challenges of multi-dataset joint training and can generalize to an arbitrary number of training datasets without requiring manual label taxonomy merging. Experimental results on COCO, Pascal VOC, and WIDER Face/Pedestrian confirm its efficacy, with joint training reaching on-par or higher scores than training separately. Moreover, we pre-train on more than four million unique object vocabulary entries and evaluate the resulting model on the 35 downstream tasks of ODinW. The results show that OmDet achieves state-of-the-art fine-tuned performance on ODinW, and the analysis shows that by scaling up the proposed pre-training method, OmDet continues to improve its zero-/few-shot tuning performance, suggesting a promising path for further scaling.
The goal of this work is to establish a scalable pipeline for expanding an object detector towards novel/unseen categories using zero manual annotations. To this end, we make the following four contributions: (i) in pursuit of generalization, we propose a two-stage open-vocabulary object detector in which class-agnostic object proposals are classified with the text encoder of a pre-trained vision-language model; (ii) to pair the visual latent space (of the RPN box proposals) with that of the pre-trained text encoder, we propose the idea of regional prompt learning to align the textual embedding space with regional visual object features; (iii) to scale up the learning procedure towards detecting a wider spectrum of objects, we exploit available online resources via a novel self-training framework, which allows the proposed detector to be trained on noisy, uncurated web images; finally, (iv) to evaluate our proposed detector, termed PromptDet, we conduct extensive experiments on the challenging LVIS and MS-COCO datasets. PromptDet shows superior performance over existing approaches with fewer additional training images and zero manual annotations. Project page with code: https://fcjian.github.io/promptdet.
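Contribution (ii), regional prompt learning, can be sketched as learning a few continuous prompt vectors that are prepended to the class-name tokens before a frozen text encoder. The encoder interface and the toy mean-pool encoder below are assumptions; a real implementation would splice the prompts into CLIP's token embeddings.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RegionalPrompts(nn.Module):
    def __init__(self, text_encoder, num_prompts=8, token_dim=512):
        super().__init__()
        self.prompts = nn.Parameter(torch.randn(num_prompts, token_dim) * 0.02)
        self.text_encoder = text_encoder                  # frozen pre-trained encoder
        for p in self.text_encoder.parameters():
            p.requires_grad_(False)

    def class_embeddings(self, name_token_embeds):
        # name_token_embeds: list of (L_i, token_dim) token embeddings, one per class name
        outs = []
        for name in name_token_embeds:
            seq = torch.cat([self.prompts, name], dim=0)  # [learned prompts; class tokens]
            outs.append(self.text_encoder(seq.unsqueeze(0)).squeeze(0))
        return F.normalize(torch.stack(outs), dim=-1)     # (num_classes, embed_dim)

def region_classification_loss(region_feats, class_embeds, labels, tau=0.01):
    # Only the prompt vectors receive gradients through class_embeds.
    sim = F.normalize(region_feats, dim=-1) @ class_embeds.t() / tau
    return F.cross_entropy(sim, labels)

# Toy stand-in for a frozen text encoder: mean-pools token embeddings, (B, L, D) -> (B, D).
class MeanPoolEncoder(nn.Module):
    def forward(self, tokens):
        return tokens.mean(dim=1)

prompter = RegionalPrompts(MeanPoolEncoder())
embeds = prompter.class_embeddings([torch.randn(3, 512), torch.randn(5, 512)])
```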
Open world object detection aims at detecting objects that are absent from the object classes of the training data as unknown objects without explicit supervision. Furthermore, the exact classes of the unknown objects must be identified without catastrophic forgetting of the previously known classes when the corresponding annotations of unknown objects are given incrementally. In this paper, we propose a two-stage training approach named Open World DETR for open world object detection based on Deformable DETR. In the first stage, we pre-train a model on the current annotated data to detect objects from the currently known classes, and concurrently train an additional binary classifier to classify predictions into foreground or background classes. This helps the model build unbiased feature representations that facilitate the detection of unknown classes in the subsequent process. In the second stage, we fine-tune the class-specific components of the model with a multi-view self-labeling strategy and a consistency constraint. Furthermore, we alleviate catastrophic forgetting when the annotations of the unknown classes become available incrementally by using knowledge distillation and exemplar replay. Experimental results on PASCAL VOC and MS-COCO show that our proposed method outperforms other state-of-the-art open world object detection methods by a large margin.
Existing open-vocabulary detectors typically enlarge their vocabulary size by leveraging different forms of weak supervision, which helps generalize to novel objects at inference. Two popular forms of weak supervision used in open-vocabulary detection (OVD) are pre-trained CLIP models and image-level supervision. We note that neither mode of supervision is optimally aligned with the detection task: CLIP is trained on image-text pairs and lacks precise localization of objects, while image-level supervision has been used with heuristics that do not accurately specify local object regions. In this work, we propose to address this problem by performing object-centric alignment of the language embeddings from the CLIP model. Furthermore, we visually ground the objects using only image-level supervision through a pseudo-labeling process that provides high-quality object proposals and helps expand the vocabulary during training. We establish a bridge between these two object-alignment strategies via a novel weight transfer function that aggregates their complementary strengths. In essence, the proposed model seeks to minimize the gap between object-centric and image-centric representations in the OVD setting. On the COCO benchmark, our proposed method achieves 40.3 AP50 on novel classes, an absolute 11.9 gain over the previous best performance. For LVIS, we surpass the state-of-the-art ViLD model by 5.0 mask AP on rare categories and 3.4 overall. Code: https://bit.ly/3byzoqp.
Detection Transformer (DETR) directly transforms queries to unique objects by using one-to-one bipartite matching during training and enables end-to-end object detection. Recently, these models have surpassed traditional detectors on COCO with undeniable elegance. However, they differ from traditional detectors in multiple designs, including model architecture and training schedules, and thus the effectiveness of one-to-one matching is not fully understood. In this work, we conduct a strict comparison between the one-to-one Hungarian matching in DETRs and the one-to-many label assignments in traditional detectors with non-maximum suppression (NMS). Surprisingly, we observe that one-to-many assignment with NMS consistently outperforms standard one-to-one matching under the same setting, with a significant gain of up to 2.5 mAP. Our detector, which trains Deformable-DETR with traditional IoU-based label assignment, achieves 50.2 COCO mAP within 12 epochs (1x schedule) with a ResNet50 backbone, outperforming all existing traditional or transformer-based detectors in this setting. On multiple datasets, schedules, and architectures, we consistently show that bipartite matching is unnecessary for performant detection transformers. Furthermore, we attribute the success of detection transformers to their expressive transformer architecture. Code is available at https://github.com/jozhang97/DETA.
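A hedged sketch of the IoU-based one-to-many assignment contrasted with Hungarian matching: every prediction whose IoU with some ground-truth box exceeds a threshold becomes a positive for that box, and duplicates are removed by NMS at inference. Threshold values are illustrative assumptions.

```python
import torch
from torchvision.ops import box_iou, nms

def one_to_many_assign(pred_boxes, gt_boxes, pos_thresh=0.6):
    # pred_boxes: (N, 4), gt_boxes: (M, 4), both in (x1, y1, x2, y2)
    iou = box_iou(pred_boxes, gt_boxes)                       # (N, M)
    best_iou, best_gt = iou.max(dim=1)                        # each prediction's best GT
    assigned_gt = torch.where(best_iou >= pos_thresh, best_gt,
                              torch.full_like(best_gt, -1))   # -1 marks background
    return assigned_gt                                        # several predictions may share a GT

def postprocess(pred_boxes, scores, iou_thresh=0.7):
    keep = nms(pred_boxes, scores, iou_thresh)                # duplicates removed by NMS
    return pred_boxes[keep], scores[keep]
```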
Recent methods have shown that training deep neural networks directly on large collections of image-text pairs enables zero-shot transfer to a variety of recognition tasks. A central question is how to generalize this to object detection, which involves the non-semantic task of localization as well as the semantic task of classification. To address this, we introduce a vision-language embedding alignment method that transfers the generalization ability of a pre-trained model such as CLIP to an object detector like YOLOv5. We formulate a loss function that allows us to align the image and text embeddings of the pre-trained CLIP model with the detector's modified semantic prediction head. With this approach, we are able to train an object detector that achieves state-of-the-art performance on the COCO, ILSVRC, and Visual Genome zero-shot detection benchmarks. During inference, our model can be adapted to detect any number of object classes without additional training. We also find that standard object detection scaling transfers well to our method, with consistent improvements across various scales of YOLOv5 models and the YOLOv3 model. Finally, we develop a self-labeling method that provides a significant score improvement without requiring extra images or labels.
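A hedged sketch of the alignment idea: the detector's classification outputs are replaced by an embedding head trained to match CLIP's image and text embeddings of each ground-truth object, so that at inference class scores become cosine similarities against any set of class-name embeddings. The equal loss weighting is an assumption.

```python
import torch
import torch.nn.functional as F

def embedding_alignment_loss(pred_embeds, clip_image_embeds, clip_text_embeds):
    # pred_embeds:       (N, D) detector head outputs for N positive boxes
    # clip_image_embeds: (N, D) CLIP embeddings of the corresponding GT crops
    # clip_text_embeds:  (N, D) CLIP embeddings of the GT class names
    p = F.normalize(pred_embeds, dim=-1)
    img = F.normalize(clip_image_embeds, dim=-1)
    txt = F.normalize(clip_text_embeds, dim=-1)
    # Pull the predicted embedding toward both CLIP views of the same object.
    return (1 - (p * img).sum(-1)).mean() + (1 - (p * txt).sum(-1)).mean()

def class_scores(pred_embeds, class_name_embeds):
    # New classes can be added at inference by supplying more class-name embeddings.
    return F.normalize(pred_embeds, dim=-1) @ F.normalize(class_name_embeds, dim=-1).t()
```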
What constitutes an object? This has been a long-standing question in computer vision. Towards this goal, numerous learning-free and learning-based approaches have been developed to score objectness. However, they generally do not scale well to new domains and unseen objects. In this paper, we argue that existing methods lack a top-down supervision signal governed by human-understandable semantics. To bridge this gap, we explore Multi-modal Vision Transformers (MViTs) that have been trained with aligned image-text pairs. Our extensive experiments across various domains and novel objects show the state-of-the-art performance of MViTs for localizing generic objects in images. Based on these findings, we develop an efficient and flexible MViT architecture using multi-scale feature processing and deformable self-attention that can adaptively generate proposals for a given language query. We show the significance of MViT proposals in a diverse range of applications, including open-world object detection, salient and camouflaged object detection, and supervised and self-supervised detection tasks. Moreover, MViTs offer enhanced interactability through intelligible text queries. Code: https://git.io/j1hpy.
Contrastive language-image pre-training (CLIP) with image-text pairs has achieved impressive results for image classification in both zero-shot and transfer learning settings. However, we show that directly applying such models to recognize image regions for object detection leads to poor performance due to a domain shift: CLIP is trained to match an image as a whole with a text description, without capturing the fine-grained alignment between image regions and text spans. To mitigate this issue, we propose a new method called RegionCLIP that significantly extends CLIP to learn region-level visual representations, enabling fine-grained alignment between image regions and textual concepts. Our method leverages a CLIP model to match image regions with template captions and then pre-trains our model to align these region-text pairs in feature space. When the pre-trained model is transferred to open-vocabulary object detection tasks, our method significantly outperforms the state of the art by 3.8 AP50 and 2.2 AP for novel categories on the COCO and LVIS datasets, respectively. Moreover, the learned region representations support zero-shot inference for object detection, showing promising results on both COCO and LVIS. Our code is available at https://github.com/microsoft/regionclip.
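A hedged sketch of region-text pretraining: a teacher CLIP model assigns each region proposal a pseudo concept from template captions, and the region encoder is then trained contrastively against those pseudo-paired text embeddings. The argmax pseudo-labeling and temperature are illustrative simplifications.

```python
import torch
import torch.nn.functional as F

def pseudo_pair_regions(teacher_region_embeds, concept_text_embeds):
    # teacher_region_embeds: (R, D) CLIP embeddings of cropped proposals (frozen teacher)
    # concept_text_embeds:   (C, D) CLIP text embeddings of template captions,
    #                        e.g. "a photo of a <concept>"
    with torch.no_grad():
        sim = F.normalize(teacher_region_embeds, dim=-1) @ \
              F.normalize(concept_text_embeds, dim=-1).t()
        return sim.argmax(dim=1)               # pseudo concept label per region

def region_text_contrastive(student_region_embeds, concept_text_embeds, pseudo_labels, tau=0.01):
    logits = F.normalize(student_region_embeds, dim=-1) @ \
             F.normalize(concept_text_embeds, dim=-1).t() / tau       # (R, C)
    return F.cross_entropy(logits, pseudo_labels)

# Toy usage: 32 proposals, 100 concepts.
labels = pseudo_pair_regions(torch.randn(32, 512), torch.randn(100, 512))
loss = region_text_contrastive(torch.randn(32, 512), torch.randn(100, 512), labels)
```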
We present in this paper a novel denoising training method to speed up DETR (DEtection TRansformer) training and offer a deepened understanding of the slow-convergence issue of DETR-like methods. We show that the slow convergence results from the instability of bipartite graph matching, which causes inconsistent optimization goals in early training stages. To address this issue, in addition to the Hungarian loss, our method feeds noised ground-truth bounding boxes into the Transformer decoder and trains the model to reconstruct the original boxes, which effectively reduces the bipartite graph matching difficulty and leads to faster convergence. Our method is universal and can be easily plugged into any DETR-like method by adding dozens of lines of code to achieve a remarkable improvement. As a result, our DN-DETR yields a remarkable improvement ($+1.9$ AP) under the same setting and achieves the best result (AP $43.4$ and $48.6$ with $12$ and $50$ epochs of training, respectively) among DETR-like methods with a ResNet-$50$ backbone. Compared with the baseline under the same setting, DN-DETR achieves comparable performance with $50\%$ of the training epochs. Code is available at \url{https://github.com/FengLi-ust/DN-DETR}.
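A hedged sketch of the denoising queries: jitter the ground-truth boxes, feed them to the decoder as extra queries, and supervise their reconstruction with a known one-to-one correspondence, so no bipartite matching is needed for them. The noise scale is an illustrative assumption.

```python
import torch

def add_box_noise(gt_boxes, box_noise_scale=0.4):
    # gt_boxes: (M, 4) in normalized (cx, cy, w, h)
    cxcy, wh = gt_boxes[:, :2], gt_boxes[:, 2:]
    # Shift centers by up to half the box size and rescale width/height.
    cxcy_noised = cxcy + (torch.rand_like(cxcy) * 2 - 1) * 0.5 * wh * box_noise_scale
    wh_noised = wh * (1 + (torch.rand_like(wh) * 2 - 1) * box_noise_scale)
    return torch.cat([cxcy_noised, wh_noised], dim=1).clamp(0, 1)

# The denoising queries receive an L1/GIoU reconstruction loss against the original
# gt_boxes with a known correspondence, alongside the usual Hungarian-matched loss.
noised = add_box_noise(torch.tensor([[0.5, 0.5, 0.2, 0.3]]))
```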
This paper presents a grounded language-image pre-training (GLIP) model for learning object-level, language-aware, and semantic-rich visual representations. GLIP unifies object detection and phrase grounding for pre-training. The unification brings two benefits: 1) it allows GLIP to learn from both detection and grounding data to improve both tasks and bootstrap a good grounding model; 2) GLIP can leverage massive image-text pairs by generating grounding boxes in a self-training fashion, making the learned representations semantic-rich. In our experiments, we pre-train GLIP on 27M grounding data, including 3M human-annotated and 24M web-crawled image-text pairs. The learned representations demonstrate strong zero-shot and few-shot transferability to various object-level recognition tasks. 1) When evaluated directly on COCO and LVIS (without seeing any COCO images during pre-training), GLIP achieves 49.8 AP and 26.9 AP, respectively, surpassing many supervised baselines. 2) After fine-tuning on COCO, GLIP achieves 60.8 AP on val and 61.5 AP on test-dev, surpassing the prior SOTA. 3) When transferred to downstream object detection tasks, a one-shot GLIP rivals a fully supervised Dynamic Head. Code will be released at https://github.com/microsoft/glip.
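A hedged sketch of the word-region alignment that unifies detection and grounding: classification logits become dot products between region features and the token features of a text prompt (class names or a caption), supervised by a region-token positive map. Shapes and the sigmoid-style loss are assumptions.

```python
import torch
import torch.nn.functional as F

def word_region_alignment(region_feats, token_feats, positive_map):
    # region_feats: (N, D) per-region visual features
    # token_feats:  (T, D) per-token text features from a language encoder
    # positive_map: (N, T) 1 where a region's ground-truth phrase spans that token
    logits = region_feats @ token_feats.t()                   # (N, T) alignment scores
    return F.binary_cross_entropy_with_logits(logits, positive_map.float())

# Toy usage: 3 regions, 6 prompt tokens.
loss = word_region_alignment(torch.randn(3, 256), torch.randn(6, 256),
                             torch.zeros(3, 6).bernoulli_(0.2))
```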
Recent self-supervised pre-training methods for object detection have largely focused on pre-training the backbone of the object detector, neglecting key parts of the detection architecture. Instead, we introduce DETReg, a new self-supervised method that pre-trains the entire object detection network, including the object localization and embedding components. During pre-training, DETReg predicts object localizations that match the localizations from an unsupervised region-proposal generator, and simultaneously aligns the corresponding feature embeddings with embeddings from a self-supervised image encoder. We implement DETReg using the DETR family of detectors and show that it improves over competitive baselines when fine-tuned on the COCO, PASCAL VOC, and Airbus Ship benchmarks. In low-data regimes, including semi-supervised and few-shot learning settings, DETReg establishes many state-of-the-art results; for example, on COCO we see a +6.0 AP improvement for 10-shot detection and a +3.5 AP improvement when training with only 1% of the labels. For code and pre-trained models, visit the project page at https://amirbar.net/detreg.
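A hedged sketch of the pre-training objective: box targets come from an unsupervised region-proposal generator, and embedding targets come from a frozen self-supervised encoder applied to the proposal crops. The one-to-one pairing and plain L1 losses are simplified assumptions; a full implementation would match predictions to proposals before computing the losses.

```python
import torch
import torch.nn.functional as F

def detreg_pretrain_loss(pred_boxes, pred_embeds, proposal_boxes, crop_embeds):
    # pred_boxes: (Q, 4), pred_embeds: (Q, D) from the detector's object queries
    # proposal_boxes: (Q, 4) top unsupervised proposals (e.g. Selective Search), padded to Q
    # crop_embeds: (Q, D) frozen self-supervised encoder features of those proposal crops
    loc_loss = F.l1_loss(pred_boxes, proposal_boxes)   # reproduce the unsupervised proposals
    emb_loss = F.l1_loss(pred_embeds, crop_embeds)     # align the object embeddings
    return loc_loss + emb_loss

# Toy usage: 30 queries with 256-dimensional embeddings.
loss = detreg_pretrain_loss(torch.rand(30, 4), torch.randn(30, 256),
                            torch.rand(30, 4), torch.randn(30, 256))
```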