In this work, we study different approaches to self-supervised pretraining of object detection models. We first design a general framework for learning spatially consistent dense representations from an image by randomly sampling boxes, projecting them into each augmented view, and maximizing the similarity between the corresponding box features. We study existing design choices from the literature, such as box generation and feature extraction strategies, and the use of multiple views, inspired by their success in instance-level image representation learning. Our results show that the method is robust to different choices of hyperparameters, and that using multiple views is not as effective as it has proven to be for instance-level image representation learning. We also design two auxiliary tasks that could potentially benefit downstream object detection: (1) predicting boxes in one view from a sampled set of boxes using a contrastive loss, and (2) predicting box coordinates using a transformer. We find that these tasks do not lead to better object detection performance when finetuning the pretrained models on labeled data.
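To make the box-level consistency objective above concrete, here is a minimal PyTorch sketch, assuming each augmented view is an axis-aligned crop (no flips) of the same source image with known geometry; the helpers `project_boxes` and `box_consistency_loss`, the pooling size, and the stride are illustrative assumptions rather than the paper's implementation.

```python
import torch
import torch.nn.functional as F
from torchvision.ops import roi_align


def project_boxes(boxes, crop_xyxy, view_size):
    """Map boxes from source-image coordinates into a view's pixel coordinates.

    boxes: (N, 4) xyxy in source-image coordinates.
    crop_xyxy: (4,) crop rectangle of the view in source-image coordinates.
    view_size: (H, W) of the resized view in pixels.
    """
    x0, y0, x1, y1 = crop_xyxy.tolist()
    sx = view_size[1] / (x1 - x0)
    sy = view_size[0] / (y1 - y0)
    out = boxes.clone().float()
    out[:, [0, 2]] = (out[:, [0, 2]] - x0) * sx
    out[:, [1, 3]] = (out[:, [1, 3]] - y0) * sy
    return out


def box_consistency_loss(feat1, feat2, boxes, crop1, crop2, view_size, stride=32):
    """Cosine loss between pooled features of the same boxes seen in two views.

    feat1, feat2: (1, C, h, w) backbone feature maps of the two views.
    """
    b1 = project_boxes(boxes, crop1, view_size)
    b2 = project_boxes(boxes, crop2, view_size)
    f1 = roi_align(feat1, [b1], output_size=7, spatial_scale=1.0 / stride)
    f2 = roi_align(feat2, [b2], output_size=7, spatial_scale=1.0 / stride)
    z1 = F.normalize(f1.flatten(1), dim=1)
    z2 = F.normalize(f2.flatten(1), dim=1)
    return 2.0 - 2.0 * (z1 * z2).sum(dim=1).mean()  # 0 when box features agree


# Toy usage with random tensors standing in for backbone features.
feat1, feat2 = torch.randn(1, 256, 7, 7), torch.randn(1, 256, 7, 7)
boxes = torch.tensor([[30.0, 40.0, 120.0, 160.0], [10.0, 10.0, 90.0, 90.0]])
crop1 = torch.tensor([0.0, 0.0, 200.0, 200.0])
crop2 = torch.tensor([20.0, 20.0, 220.0, 220.0])
print(box_consistency_loss(feat1, feat2, boxes, crop1, crop2, (224, 224)).item())
```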
We propose a self-supervised learning (SSL) method that is well suited for semi-global tasks such as object detection and semantic segmentation. We enforce local consistency between corresponding image locations of differently transformed versions of the same image by minimizing a pixel-level local contrastive (LC) loss during training. The LC-loss can be added to existing self-supervised learning methods with minimal overhead. We evaluate our SSL method on two downstream tasks, object detection and semantic segmentation, using the COCO, PASCAL VOC, and Cityscapes datasets. Our method outperforms the existing state-of-the-art SSL methods by 1.9% on COCO object detection, 1.4% on PASCAL VOC detection, and 0.6% on Cityscapes segmentation.
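A minimal sketch of such a pixel-level local contrastive loss follows, assuming the two feature grids are already spatially aligned to the same image region; the function name and temperature are illustrative, not the paper's API.

```python
import torch
import torch.nn.functional as F


def local_contrastive_loss(f1, f2, temperature=0.1):
    """f1, f2: (C, H, W) projected features of two views of the same region.

    Each location in f1 is pulled toward the same location in f2 (positive)
    and pushed away from every other location in f2 (negatives).
    """
    _, h, w = f1.shape
    z1 = F.normalize(f1.flatten(1).t(), dim=1)   # (H*W, C)
    z2 = F.normalize(f2.flatten(1).t(), dim=1)   # (H*W, C)
    logits = z1 @ z2.t() / temperature           # (H*W, H*W) similarity matrix
    targets = torch.arange(h * w)                # positive = same location
    return F.cross_entropy(logits, targets)


f1, f2 = torch.randn(128, 14, 14), torch.randn(128, 14, 14)
print(local_contrastive_loss(f1, f2).item())
```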
Contrastive self-supervised learning has largely narrowed the gap to supervised pretraining on ImageNet. However, its success relies heavily on the object-centric priors of ImageNet, i.e., different augmented views of the same image correspond to the same object. Such a heavily curated constraint immediately becomes infeasible when pretraining on more complex scene images with many objects. To overcome this limitation, we introduce Object-level Representation Learning (ORL), a new self-supervised learning framework towards scene images. Our key insight is to leverage image-level self-supervised pretraining as a prior for discovering object-level semantic correspondences, thus enabling object-level representations to be learned from scene images. Extensive experiments on COCO show that ORL significantly improves the performance of self-supervised learning on scene images, even surpassing supervised ImageNet pretraining on several downstream tasks. Furthermore, ORL improves downstream performance when more unlabeled scene images are available, demonstrating its great potential for harnessing unlabeled data in the wild. We hope our approach can motivate future research on more general-purpose unsupervised representation learning from scene data.
Labeling data is often expensive and time-consuming, especially for tasks such as object detection and instance segmentation, which require dense labeling of images. While few-shot object detection is about training a model on novel (unseen) object classes with little data, it still requires prior training on many labeled examples of base (seen) classes. On the other hand, self-supervised methods aim to learn representations from unlabeled data that transfer to downstream tasks such as object detection. Combining few-shot and self-supervised object detection is a promising research direction. In this survey, we review and characterize the most recent approaches to few-shot and self-supervised object detection, then give our main takeaways and discuss future research directions. Project page: https://gabrielhuang.github.io/fsod-survey/
A core component of the recent success of self-supervised learning is cropping data augmentation, which selects sub-regions of an image to be used as positive views in the self-supervised loss. The underlying assumption is that randomly cropped and resized regions of a given image share information about the objects of interest, which the learned representation will capture. This assumption is mostly satisfied in datasets such as ImageNet, where there is a large, centered object that is highly likely to be present in random crops of the full image. However, in other datasets such as OpenImages or COCO, which are more representative of real-world uncurated data, there are typically multiple small objects in an image. In this work, we show that self-supervised learning based on the usual random cropping performs poorly on such datasets. We propose replacing one or both of the random crops with crops obtained from an object proposal algorithm. This encourages the model to learn both object-level and scene-level semantic representations. Using this approach, which we call object-aware cropping, leads to significant improvements over scene cropping on classification and object detection benchmarks. For example, on OpenImages our approach achieves an improvement of 8.8% mAP with MoCo-v2 based pretraining. We also show significant improvements over state-of-the-art self-supervised learning methods on COCO and PASCAL-VOC object detection and segmentation tasks. Our approach is efficient, simple, and general, and can be used with most existing contrastive and non-contrastive self-supervised learning frameworks.
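The following is a minimal sketch of object-aware cropping, assuming proposal boxes are already available (e.g., from a selective-search-style generator); the jitter policy and helper names are assumptions for illustration.

```python
import random

from PIL import Image
import torchvision.transforms.functional as TF


def object_aware_crop(img, proposals, out_size=224, jitter=0.2):
    """Crop one SSL view from a randomly chosen proposal box.

    proposals: list of (x0, y0, x1, y1) boxes in image pixels.
    """
    x0, y0, x1, y1 = random.choice(proposals)
    w, h = x1 - x0, y1 - y0
    # Jitter the proposal slightly so repeated views are not identical crops.
    x0 = max(0, x0 + random.uniform(-jitter, jitter) * w)
    y0 = max(0, y0 + random.uniform(-jitter, jitter) * h)
    x1 = min(img.width, x1 + random.uniform(-jitter, jitter) * w)
    y1 = min(img.height, y1 + random.uniform(-jitter, jitter) * h)
    return TF.resize(img.crop((x0, y0, x1, y1)), [out_size, out_size])


img = Image.new("RGB", (640, 480))
proposals = [(50, 60, 200, 220), (300, 100, 450, 300)]
object_view = object_aware_crop(img, proposals)                # object-centred view
scene_view = TF.resized_crop(img, 0, 0, 480, 480, [224, 224])  # usual scene-level crop
```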
Recent self-supervised pretraining methods for object detection have largely focused on pretraining the backbone of the object detector, neglecting key parts of the detection architecture. Instead, we introduce DetReg, a new self-supervised method that pretrains the entire object detection network, including the object localization and embedding components. During pretraining, DetReg predicts object localizations to match the localizations from an unsupervised region proposal generator, and simultaneously aligns the corresponding feature embeddings with embeddings from a self-supervised image encoder. We implement DetReg with the DETR family of detectors and show that it improves over competitive baselines when finetuned on the COCO, PASCAL VOC, and Airbus Ship benchmarks. In low-data regimes, including semi-supervised and few-shot learning settings, DetReg establishes many state-of-the-art results, e.g., on COCO we see a +3.5 AP improvement for 10-shot detection and a +6.0 AP improvement when training with only 1% of the labels. For code and pretrained models, visit the project page at https://amirbar.net/detreg
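A hedged sketch of a DetReg-style pretraining objective: predicted boxes are matched to unsupervised region proposals, and the matched embeddings are regressed toward features from a frozen self-supervised encoder. The plain Hungarian matching on L1 box distance and the equal loss weights are simplifications, not the published recipe.

```python
import torch
import torch.nn.functional as F
from scipy.optimize import linear_sum_assignment


def detreg_style_loss(pred_boxes, pred_embed, proposal_boxes, target_embed):
    """pred_boxes: (Q, 4) and pred_embed: (Q, D) from the detector's queries.
    proposal_boxes: (P, 4) unsupervised region proposals (P <= Q).
    target_embed: (P, D) crop features from a frozen SSL image encoder.
    """
    cost = torch.cdist(pred_boxes, proposal_boxes, p=1)        # (Q, P) L1 box cost
    q_idx, p_idx = linear_sum_assignment(cost.detach().cpu().numpy())
    q_idx, p_idx = torch.as_tensor(q_idx), torch.as_tensor(p_idx)
    box_loss = F.l1_loss(pred_boxes[q_idx], proposal_boxes[p_idx])
    embed_loss = F.mse_loss(pred_embed[q_idx], target_embed[p_idx])
    return box_loss + embed_loss


pred_boxes, pred_embed = torch.rand(30, 4), torch.randn(30, 256)
proposal_boxes, target_embed = torch.rand(10, 4), torch.randn(10, 256)
print(detreg_style_loss(pred_boxes, pred_embed, proposal_boxes, target_embed).item())
```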
Object detection with transformers (DETR) reaches competitive performance with Faster R-CNN via a transformer encoder-decoder architecture. Inspired by the great success of pre-training transformers in natural language processing, we propose a pretext task named random query patch detection to Unsupervisedly Pre-train DETR (UP-DETR) for object detection. Specifically, we randomly crop patches from the given image and then feed them as queries to the decoder. The model is pre-trained to detect these query patches from the original image. During the pre-training, we address two critical issues: multi-task learning and multi-query localization. (1) To trade off classification and localization preferences in the pretext task, we freeze the CNN backbone and propose a patch feature reconstruction branch which is jointly optimized with patch detection. (2) To perform multi-query localization, we introduce UP-DETR from single-query patch and extend it to multi-query patches with object query shuffle and attention mask. In our experiments, UP-DETR significantly boosts the performance of DETR with faster convergence and higher average precision on object detection, one-shot detection and panoptic segmentation. Code and pre-training models: https://github.com/dddzg/up-detr.
To date, most existing self-supervised learning methods are designed and optimized for image classification. These pre-trained models can be sub-optimal for dense prediction tasks due to the discrepancy between image-level prediction and pixel-level prediction. To fill this gap, we aim to design an effective, dense self-supervised learning method that directly works at the level of pixels (or local features) by taking into account the correspondence between local features. We present dense contrastive learning (DenseCL), which implements self-supervised learning by optimizing a pairwise contrastive (dis)similarity loss at the pixel level between two views of input images. Compared to the baseline method MoCo-v2, our method introduces negligible computation overhead (only <1% slower), but demonstrates consistently superior performance when transferring to downstream dense prediction tasks including object detection, semantic segmentation and instance segmentation; and outperforms the state-of-the-art methods by a large margin. Specifically, over the strong MoCo-v2 baseline, our method achieves significant improvements of 2.0% AP on PASCAL VOC object detection, 1.1% AP on COCO object detection, 0.9% AP on COCO instance segmentation, 3.0% mIoU on PASCAL VOC semantic segmentation and 1.8% mIoU on Cityscapes semantic segmentation.
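A minimal sketch of a dense contrastive loss in this spirit, where each location in one view is matched to its most similar location in the other view and contrasted against negative pixels; the simplified negative set (a plain tensor instead of a momentum-encoder queue) is an assumption.

```python
import torch
import torch.nn.functional as F


def dense_contrastive_loss(g1, g2, negatives, temperature=0.2):
    """g1, g2: (C, H, W) dense projections of two views of one image.
    negatives: (K, C) pixel features from other images (stand-in for a queue).
    """
    z1 = F.normalize(g1.flatten(1).t(), dim=1)        # (N, C), N = H*W
    z2 = F.normalize(g2.flatten(1).t(), dim=1)        # (N, C)
    neg = F.normalize(negatives, dim=1)               # (K, C)
    match = (z1 @ z2.t()).argmax(dim=1)               # per-pixel correspondence
    pos = (z1 * z2[match]).sum(dim=1, keepdim=True)   # (N, 1) positive logits
    logits = torch.cat([pos, z1 @ neg.t()], dim=1) / temperature
    targets = torch.zeros(z1.shape[0], dtype=torch.long)  # positive is column 0
    return F.cross_entropy(logits, targets)


g1, g2 = torch.randn(128, 7, 7), torch.randn(128, 7, 7)
print(dense_contrastive_loss(g1, g2, torch.randn(4096, 128)).item())
```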
This paper presents Dense Siamese Network (DenseSiam), a simple unsupervised learning framework for dense prediction tasks. It learns visual representations by maximizing the similarity between two views of one image with two types of consistency, namely pixel consistency and region consistency. Concretely, DenseSiam first maximizes pixel-level spatial consistency according to the exact location correspondence in the overlapping region. It also extracts a batch of region embeddings that correspond to sub-regions of the overlapping area to form region consistency. In contrast to previous methods that require negative pixel pairs, momentum encoders, or heuristic masks, DenseSiam benefits from the simple Siamese network and optimizes consistency at different granularities. It also demonstrates that simple location correspondence and interacted region embeddings are sufficient for learning similarity. We apply DenseSiam to ImageNet and obtain competitive improvements on various downstream tasks. We also show that, with only some extra task-specific losses, this simple framework can directly perform dense prediction tasks. On an existing unsupervised semantic segmentation benchmark, it surpasses state-of-the-art segmentation methods by 2.1 mIoU with 28% of the training cost. Code and models are released at https://github.com/zwwwayne/densesiam.
Self-supervised learning (SSL) methods have achieved great success by maximizing the mutual information between two augmented views, where cropping is a popular augmentation technique. The cropped regions are widely used to construct positive pairs, while the regions left after cropping are rarely explored in existing methods, even though together they constitute the same image instance and both contribute to describing the category. In this paper, we make a first attempt to demonstrate the importance of both kinds of regions from this complete perspective and propose a simple yet effective pretext task called Region Contrastive Learning (RegionCL). Specifically, given two different images, we randomly crop a region of the same size from each image (called the paste view) and swap them, composing two new images together with the regions left behind (called the canvas views). Contrastive pairs can then be formed according to a simple criterion: each view is (1) positive with views augmented from the same original image and (2) negative with views augmented from other images. With only minor modifications to popular SSL methods, RegionCL exploits these abundant pairs and helps the model distinguish region features from both the canvas and paste views, thus learning better visual representations. Experiments on ImageNet, COCO, and Cityscapes show that RegionCL improves MoCo v2, DenseCL, and SimSiam by large margins and achieves state-of-the-art performance on classification, detection, and segmentation tasks. The code will be available at https://github.com/annbless/regioncl.git.
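A minimal sketch of the cut-and-swap augmentation, assuming both images share the same size and the swapped region is placed at the same position in both; the region size and placement policy are illustrative.

```python
import torch


def region_swap(img_a, img_b, region=96):
    """img_a, img_b: (C, H, W) tensors of equal size.

    Cuts a region-sized square at a shared random location and swaps it
    between the two images, so each output contains a foreign patch.
    """
    _, h, w = img_a.shape
    y = torch.randint(0, h - region + 1, (1,)).item()
    x = torch.randint(0, w - region + 1, (1,)).item()
    new_a, new_b = img_a.clone(), img_b.clone()
    new_a[:, y:y + region, x:x + region] = img_b[:, y:y + region, x:x + region]
    new_b[:, y:y + region, x:x + region] = img_a[:, y:y + region, x:x + region]
    return new_a, new_b, (y, x, region)


# Canvas features (outside the swapped box) stay positive with their source
# image; paste features (inside the box) are positive with the image the
# patch came from, giving extra contrastive pairs per batch.
a, b = torch.rand(3, 224, 224), torch.rand(3, 224, 224)
swapped_a, swapped_b, box = region_swap(a, b)
```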
We present DetCo, a simple yet effective self-supervised approach for object detection. Unsupervised pre-training methods have been recently designed for object detection, but they are usually deficient in image classification, or the opposite. Unlike them, DetCo transfers well on downstream instance-level dense prediction tasks, while maintaining competitive image-level classification accuracy. The advantages are derived from (1) multi-level supervision to intermediate representations, (2) contrastive learning between global image and local patches. These two designs facilitate discriminative and consistent global and local representation at each level of feature pyramid, improving detection and classification, simultaneously. Extensive experiments on VOC, COCO, Cityscapes, and ImageNet demonstrate that DetCo not only outperforms recent methods on a series of 2D and 3D instance-level detection tasks, but also competitive on image classification. For example, on ImageNet classification, DetCo is 6.9% and 5.0% top-1 accuracy better than InsLoc and DenseCL, which are two contemporary works designed for object detection. Moreover, on COCO detection, DetCo is 6.9 AP better than SwAV with Mask R-CNN C4. Notably, DetCo largely boosts up Sparse R-CNN, a recent strong detector, from 45.0 AP to 46.5 AP (+1.5 AP), establishing a new SOTA on COCO. Code is available.
The DETR object detection approach applies the transformer encoder and decoder architecture to detect objects and achieves promising performance. In this paper, we present a simple approach to address the main problem of DETR, the slow convergence, by using a representation learning technique. In this approach, we detect an object bounding box as a pair of keypoints, the top-left corner and the center, using two decoders. By detecting objects as paired keypoints, the model builds up a joint classification and pair association on the output queries from two decoders. For the pair association, we propose utilizing a contrastive self-supervised learning algorithm without requiring a specialized architecture. Experimental results on the MS COCO dataset show that Pair DETR can converge at least 10x faster than the original DETR and 1.5x faster than Conditional DETR during training, while having consistently higher Average Precision scores.
The promise of self-supervised learning (SSL) is to leverage large amounts of unlabeled data to solve complex tasks. While there has been excellent progress with simple, image-level learning, recent methods have shown the advantage of including knowledge of image structure. However, by introducing hand-crafted image segmentations to define regions of interest, or specialized augmentation strategies, these methods sacrifice the simplicity and generality that make SSL so powerful. Instead, we propose a self-supervised learning paradigm that discovers this image structure by itself. Our method, Odin, couples object discovery and representation networks to discover meaningful image segmentations without any supervision. The resulting learning paradigm is simpler, less brittle, and more general, and achieves state-of-the-art transfer learning results for object detection and instance segmentation on COCO, and semantic segmentation on PASCAL and Cityscapes, while exceeding supervised pretraining for video segmentation on DAVIS.
Unsupervised image representations have significantly reduced the gap with supervised pretraining, notably with the recent achievements of contrastive learning methods. These contrastive methods typically work online and rely on a large number of explicit pairwise feature comparisons, which is computationally challenging. In this paper, we propose an online algorithm, SwAV, that takes advantage of contrastive methods without requiring to compute pairwise comparisons. Specifically, our method simultaneously clusters the data while enforcing consistency between cluster assignments produced for different augmentations (or "views") of the same image, instead of comparing features directly as in contrastive learning. Simply put, we use a "swapped" prediction mechanism where we predict the code of a view from the representation of another view. Our method can be trained with large and small batches and can scale to unlimited amounts of data. Compared to previous contrastive methods, our method is more memory efficient since it does not require a large memory bank or a special momentum network. In addition, we also propose a new data augmentation strategy, multi-crop, that uses a mix of views with different resolutions in place of two full-resolution views, without increasing the memory or compute requirements. We validate our findings by achieving 75.3% top-1 accuracy on ImageNet with ResNet-50, as well as surpassing supervised pretraining on all the considered transfer tasks.
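A minimal sketch of the swapped prediction objective with a few Sinkhorn-Knopp iterations; the batch size, number of prototypes, and iteration count are illustrative, and queue and multi-crop details are omitted.

```python
import torch
import torch.nn.functional as F


@torch.no_grad()
def sinkhorn(scores, eps=0.05, iters=3):
    """Turn prototype scores (B, K) into soft codes with balanced cluster usage."""
    q = torch.exp(scores / eps).t()              # (K, B)
    q = q / q.sum()
    k, b = q.shape
    for _ in range(iters):
        q = q / q.sum(dim=1, keepdim=True) / k   # normalize over prototypes
        q = q / q.sum(dim=0, keepdim=True) / b   # normalize over samples
    return (q * b).t()                           # (B, K), each row sums to 1


def swapped_prediction_loss(z1, z2, prototypes, temperature=0.1):
    """z1, z2: (B, D) normalized embeddings of two views; prototypes: (K, D)."""
    p = F.normalize(prototypes, dim=1)
    s1, s2 = z1 @ p.t(), z2 @ p.t()              # prototype scores per view
    q1, q2 = sinkhorn(s1), sinkhorn(s2)          # codes, computed without gradient
    l1 = -(q2 * F.log_softmax(s1 / temperature, dim=1)).sum(dim=1).mean()
    l2 = -(q1 * F.log_softmax(s2 / temperature, dim=1)).sum(dim=1).mean()
    return 0.5 * (l1 + l2)                       # predict each code from the other view


z1 = F.normalize(torch.randn(32, 128), dim=1)
z2 = F.normalize(torch.randn(32, 128), dim=1)
print(swapped_prediction_loss(z1, z2, torch.randn(300, 128)).item())
```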
The goal of contrastive-learning-based pretraining is to leverage large quantities of unlabeled data to produce a model that can be readily adapted downstream. Current approaches revolve around solving an image discrimination task: given an anchor image, an augmented counterpart of that image, and some other images, the model must produce representations such that the distance between the anchor and its counterpart is small and the distances between the anchor and the other images are large. There are two significant problems with this approach: (i) by contrasting representations at the image level, it is difficult to generate detailed object-sensitive features that benefit downstream object-level tasks such as instance segmentation; (ii) the augmentation strategy used to produce the augmented counterpart is fixed, making learning less effective in the later stages of pretraining. In this work, we introduce Curriculum Contrastive Object-level Pretraining (CCOP) to tackle these problems: (i) we use selective search to find rough object regions and use them to build an inter-image object-level contrastive loss and an intra-image object-level discrimination loss into our pretraining objective; (ii) we propose a curriculum learning mechanism that adaptively augments the generated regions, which allows the model to consistently acquire a useful learning signal, even in the later stages of pretraining. Our experiments show that our approach improves on the MoCo v2 baseline by a large margin on multiple object-level tasks when pretrained on multi-object scene image datasets. Code is available at https://github.com/chenhongyiyang/ccop.
Detecting and segmenting objects within whole slide images is essential in computational pathology workflows. Self-supervised learning (SSL) is appealing for such annotation-heavy tasks. Despite the extensive benchmarks for dense tasks on natural images, such studies are, unfortunately, still absent for current pathology works. Our paper intends to narrow this gap. We first benchmark representative SSL methods for dense prediction tasks on pathology images. Then, we propose concept contrastive learning (ConCL), an SSL framework for dense pretraining. We explore how ConCL performs with concepts provided by different sources, and eventually propose a simple dependency-free concept generating method that does not rely on external segmentation algorithms or saliency detection models. Extensive experiments demonstrate the advantages of ConCL over previous state-of-the-art SSL methods across different settings. Along our exploration, we identify several important and interesting components that contribute to dense pretraining on pathology images. We hope this work can provide useful data points and encourage the community to conduct ConCL pretraining for problems of interest. Code is available.
Contrastive learning methods for unsupervised visual representation learning have reached remarkable levels of transfer performance. We argue that the power of contrastive learning has yet to be fully unleashed, as current methods are trained only on instance-level pretext tasks, leading to representations that may be sub-optimal for downstream tasks requiring dense pixel predictions. In this paper, we introduce pixel-level pretext tasks for learning dense feature representations. The first task directly applies contrastive learning at the pixel level. We additionally propose a pixel-to-propagation consistency task that produces better results, even surpassing the state-of-the-art approaches by a large margin. Specifically, it achieves 60.2 AP, 41.4 / 40.5 mAP and 77.2 mIoU when transferred to Pascal VOC object detection (C4), COCO object detection (FPN / C4) and Cityscapes semantic segmentation using a ResNet-50 backbone network, which are 2.6 AP, 0.8 / 1.0 mAP and 1.0 mIoU better than the previous best methods built on instance-level contrastive learning. Moreover, the pixel-level pretext tasks are found to be effective for pretraining not only regular backbone networks but also head networks used for dense downstream tasks, and are complementary to instance-level contrastive methods. These results demonstrate the strong potential of defining pretext tasks at the pixel level, and suggest a new path forward in unsupervised visual representation learning. Code is available at https://github.com/zdaxie/PixPro.
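A minimal sketch of a pixel-to-propagation consistency loss, assuming the two feature grids are pre-aligned and omitting the learned transform inside the propagation module; the function names and the sharpening exponent are assumptions.

```python
import torch
import torch.nn.functional as F


def propagate(x, gamma=2.0):
    """x: (N, C) pixel features. Mixes each pixel with similar pixels of the
    same view via non-negative, sharpened cosine similarities."""
    z = F.normalize(x, dim=1)
    sim = torch.clamp(z @ z.t(), min=0) ** gamma   # (N, N) propagation weights
    return sim @ x


def pixel_propagation_loss(f1, f2):
    """f1, f2: (C, H, W) aligned feature grids of two views."""
    x1 = f1.flatten(1).t()                         # (N, C)
    x2 = f2.flatten(1).t()
    y1 = F.normalize(propagate(x1), dim=1)         # propagated view-1 pixels
    z2 = F.normalize(x2, dim=1)                    # raw view-2 pixels
    return -(y1 * z2).sum(dim=1).mean()            # negative cosine, no negatives


f1, f2 = torch.randn(256, 7, 7), torch.randn(256, 7, 7)
print(pixel_propagation_loss(f1, f2).item())
```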
In contrastive self-supervised learning, the common way to learn discriminative representation is to pull different augmented "views" of the same image closer while pushing all other images further apart, which has been proven to be effective. However, it is unavoidable to construct undesirable views containing different semantic concepts during the augmentation procedure. It would damage the semantic consistency of representation to pull these augmentations closer in the feature space indiscriminately. In this study, we introduce feature-level augmentation and propose a novel semantics-consistent feature search (SCFS) method to mitigate this negative effect. The main idea of SCFS is to adaptively search semantics-consistent features to enhance the contrast between semantics-consistent regions in different augmentations. Thus, the trained model can learn to focus on meaningful object regions, improving the semantic representation ability. Extensive experiments conducted on different datasets and tasks demonstrate that SCFS effectively improves the performance of self-supervised learning and achieves state-of-the-art performance on different downstream tasks.
Many recent approaches in contrastive learning have worked to close the gap between pretraining on iconic images such as ImageNet and pretraining on complex scenes such as COCO. This gap exists largely because the commonly used random-crop augmentation obtains semantically inconsistent content in crowded scene images containing many different objects. Previous works use preprocessing pipelines to localize salient objects for improved cropping, but an end-to-end solution is still elusive. In this work, we propose a framework that accomplishes this goal by jointly learning representations and segmentations. We leverage segmentation masks to train a model with a mask-dependent contrastive loss, and use the partially trained model to bootstrap better masks. By iterating between these two components, we ground the contrastive updates in segmentation information and simultaneously improve segmentation throughout training. Experiments show that our representations transfer robustly to downstream tasks in classification, detection, and segmentation.
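A minimal sketch of a mask-dependent contrastive loss, assuming the masks are already associated across the two views; pooled per-mask features of the same mask form positives while other masks serve as negatives. Names and the symmetric loss form are assumptions.

```python
import torch
import torch.nn.functional as F


def masked_pool(feat, masks):
    """feat: (C, H, W); masks: (M, H, W) binary. Returns (M, C) per-mask means."""
    flat = feat.flatten(1)                                  # (C, N)
    m = masks.flatten(1).float()                            # (M, N)
    return (m @ flat.t()) / m.sum(dim=1, keepdim=True).clamp(min=1.0)


def mask_contrastive_loss(f1, f2, masks1, masks2, temperature=0.1):
    """Same-index masks across the two views are positives; others are negatives."""
    r1 = F.normalize(masked_pool(f1, masks1), dim=1)        # (M, C)
    r2 = F.normalize(masked_pool(f2, masks2), dim=1)
    logits = r1 @ r2.t() / temperature                      # (M, M)
    targets = torch.arange(r1.shape[0])                     # mask i <-> mask i
    return 0.5 * (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.t(), targets))


f1, f2 = torch.randn(128, 14, 14), torch.randn(128, 14, 14)
masks = torch.rand(8, 14, 14) > 0.5
print(mask_contrastive_loss(f1, f2, masks, masks).item())
```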