Based on multiple instance detection networks (MIDN), plenty of works have devoted tremendous efforts to weakly supervised object detection (WSOD). However, most methods neglect the fact that overwhelmingly many negative instances exist in each image during training, which misleads the training and makes the network fall into local minima. To address this problem, this paper proposes an online progressive instance-balanced sampling (OPI) algorithm based on hard sampling and soft sampling. The algorithm consists of two modules: a progressive instance balance (PIB) module and a progressive instance reweighting (PIR) module. The PIB module combines random sampling and IoU-balanced sampling to progressively mine hard instances while balancing positive and negative instances. The PIR module further exploits classifier scores across adjacent refinements to reweight positive instances, making the network focus on positive instances. Extensive experimental results on the PASCAL VOC 2007 and 2012 datasets show that the proposed method significantly improves the baseline and is comparable to many existing state-of-the-art results. Moreover, compared with the baseline, the proposed method requires no extra network parameters and introduces little additional training overhead, so it can be easily integrated into other methods based on the instance classifier refinement paradigm.
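To make the sampling idea concrete, below is a minimal, hedged sketch of how random sampling could be combined with IoU-balanced sampling of negative instances; the function name, bin layout, and ratio are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def iou_balanced_negative_sampling(neg_ious, num_samples, num_bins=3,
                                   random_ratio=0.5, rng=None):
    """Draw part of the negative instances uniformly at random and the rest
    from IoU bins, so that harder negatives (higher IoU with a pseudo
    ground-truth box) are not drowned out by the overwhelming easy ones."""
    rng = rng or np.random.default_rng()
    neg_ious = np.asarray(neg_ious, dtype=float)
    all_idx = np.arange(len(neg_ious))

    n_random = min(int(num_samples * random_ratio), len(all_idx))
    chosen = list(rng.choice(all_idx, size=n_random, replace=False))

    remaining = np.setdiff1d(all_idx, chosen)
    n_left = num_samples - len(chosen)
    if n_left > 0 and len(remaining) > 0:
        # Split the leftover negatives into equal-width IoU bins and take a
        # similar number from each bin, starting from the hardest (highest IoU).
        edges = np.linspace(neg_ious[remaining].min(),
                            neg_ious[remaining].max() + 1e-6, num_bins + 1)
        per_bin = max(1, n_left // num_bins)
        for b in reversed(range(num_bins)):
            in_bin = remaining[(neg_ious[remaining] >= edges[b]) &
                               (neg_ious[remaining] < edges[b + 1])]
            take = min(per_bin, len(in_bin), num_samples - len(chosen))
            if take > 0:
                chosen.extend(rng.choice(in_bin, size=take, replace=False))
    return np.asarray(chosen[:num_samples])
```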
Weakly supervised object detection (WSOD) aims to train an object detector requiring only image-level annotations. Recently, some works have managed to select accurate boxes generated by a well-trained WSOD network to supervise a semi-supervised detection framework for better performance. However, these approaches simply divide the training set into labeled and unlabeled sets according to image-level criteria, so that plenty of wrongly labeled or poorly localized box predictions are selected as pseudo ground-truths, resulting in a sub-optimal solution for detection performance. To overcome this issue, we propose a novel WSOD framework with a new paradigm that shifts from weak supervision to noisy supervision (W2N). Given pseudo ground-truths generated by a well-trained WSOD network, we propose a two-module iterative training algorithm to refine the pseudo labels and progressively supervise a better object detector. In the localization adaptation module, we propose a regularization loss to reduce the proportion of discriminative parts in the original pseudo ground-truths, yielding better pseudo ground-truths for further training. In the semi-supervised module, we propose a two-task instance-level split method to select high-quality labels for training a semi-supervised detector. Experimental results on different benchmarks verify the effectiveness of W2N, which outperforms all existing pure WSOD methods and transfer-learning methods. Our code is publicly available at https://github.com/1170300714/w2n_wsod.
Weakly supervised object detection (WSOD) is the task of detecting objects in images using a model trained only on image-level annotations. Current state-of-the-art models benefit from self-supervised instance-level supervision, but because weak supervision includes no count or location information, the most common ``argmax'' labeling method often ignores many object instances. To alleviate this issue, we propose a novel multiple-instance labeling method called object discovery. We further introduce a new contrastive loss under weak supervision, where no instance-level information is available for sampling, called weakly supervised contrastive loss (WSCL). WSCL aims to construct a credible similarity threshold for object discovery by exploiting consistent features to embed vectors of the same class. As a result, we achieve new state-of-the-art results on MS-COCO 2014 and 2017 as well as PASCAL VOC 2012, and competitive results on PASCAL VOC 2007.
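A hedged sketch of how such a similarity-gated contrastive loss could look is given below; the gating rule, threshold, and single-label assumption are ours for illustration, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def weakly_supervised_contrastive_loss(feats, image_labels, sim_threshold=0.8, tau=0.1):
    """feats: (N, D) proposal embeddings; image_labels: (N,) image-level class id
    of the image each proposal comes from (single-label case for simplicity).
    Proposals from images sharing a class are treated as positives only when
    their cosine similarity also exceeds a threshold, since no instance-level
    labels are available to pick positives directly."""
    z = F.normalize(feats, dim=1)
    cos = z @ z.t()
    logits = cos / tau
    same_class = image_labels.unsqueeze(0) == image_labels.unsqueeze(1)
    pos_mask = same_class & (cos > sim_threshold)       # similarity-gated positives
    pos_mask.fill_diagonal_(False)

    logits = logits - logits.max(dim=1, keepdim=True).values.detach()  # stability
    exp_logits = torch.exp(logits)
    denom = exp_logits.sum(dim=1) - exp_logits.diag()
    log_prob = logits - torch.log(denom + 1e-12).unsqueeze(1)

    pos_counts = pos_mask.sum(dim=1).clamp(min=1)
    per_anchor = -(log_prob * pos_mask.float()).sum(dim=1) / pos_counts
    valid = pos_mask.any(dim=1)                         # anchors with at least one positive
    return per_anchor[valid].mean() if valid.any() else feats.new_zeros(())
```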
Semi-supervised and weakly-supervised learning have recently attracted considerable attention in the object detection literature, since they can alleviate the annotation cost needed to successfully train deep learning models. State-of-the-art approaches for semi-supervised learning rely on student-teacher models trained in a multi-stage process with substantial data augmentation, while custom networks developed for the weakly-supervised setting are difficult to adapt to different detectors. In this paper, a weakly semi-supervised training method is introduced that reduces these training challenges, yet achieves state-of-the-art performance by leveraging only a small fraction of fully-labeled images together with information from weakly-labeled images. In particular, our generic sampling-based learning strategy produces pseudo ground-truth (GT) bounding-box annotations in an online fashion, eliminating the need for multi-stage training and student-teacher network configurations. These pseudo-GT boxes are sampled from weakly-labeled images based on the classification scores of object proposals accumulated through a score propagation process. Empirical results on the PASCAL VOC dataset indicate that the proposed approach improves performance by 5.0% when VOC 2007 is used as fully-labeled and VOC 2012 as weakly-labeled data. Likewise, with 5-10% fully annotated images, we observe an improvement of more than 10% in mAP, indicating that a modest investment in image-level annotation can substantially improve detection performance.
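As a rough illustration of score-weighted pseudo-GT sampling, a minimal sketch follows; the function, its arguments, and the weighting rule are illustrative assumptions rather than the paper's procedure.

```python
import numpy as np

def sample_pseudo_gt(proposals, class_scores, image_labels, top_k=1, rng=None):
    """proposals: (N, 4) candidate boxes; class_scores: (N, C) accumulated
    per-class scores; image_labels: iterable of class indices present at the
    image level. For each present class, a pseudo-GT box is sampled among the
    proposals with probability proportional to its accumulated score."""
    rng = rng or np.random.default_rng()
    pseudo_boxes, pseudo_labels = [], []
    for c in image_labels:
        scores = class_scores[:, c]
        total = scores.sum()
        probs = scores / total if total > 0 else None    # None -> uniform sampling
        k = min(top_k, len(scores))
        picked = rng.choice(len(scores), size=k, replace=False, p=probs)
        pseudo_boxes.append(proposals[picked])
        pseudo_labels.extend([c] * k)
    return np.concatenate(pseudo_boxes, axis=0), np.asarray(pseudo_labels)
```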
Weakly-supervised object detection (WSOD) models attempt to leverage image-level annotations in lieu of accurate but costly-to-obtain object localization labels. This oftentimes leads to substandard object detection and localization at inference time. To tackle this issue, we propose D2DF2WOD, a Dual-Domain Fully-to-Weakly Supervised Object Detection framework that leverages synthetic data, annotated with precise object localization, to supplement a natural image target domain, where only image-level labels are available. In its warm-up domain adaptation stage, the model learns a fully-supervised object detector (FSOD) to improve the precision of the object proposals in the target domain, and at the same time learns target-domain-specific and detection-aware proposal features. In its main WSOD stage, a WSOD model is specifically tuned to the target domain. The feature extractor and the object proposal generator of the WSOD model are built upon the fine-tuned FSOD model. We test D2DF2WOD on five dual-domain image benchmarks. The results show that our method results in consistently improved object detection and localization compared with state-of-the-art methods.
Object detection using single-point supervision has received increasing attention over the years. In this paper, we attribute the large performance gap between point-supervised and bounding-box-supervised detectors to the failure to generate high-quality proposal bags, which are crucial for multiple instance learning (MIL). To address this problem, we introduce a lightweight alternative to off-the-shelf proposal (OTSP) methods and thereby create the point-to-box network (P2BNet), which constructs an inter-object balanced proposal bag by generating proposals in an anchor-like way. By fully exploiting the accurate position information, P2BNet further constructs instance-level bags, avoiding the mixture of multiple objects. Finally, a coarse-to-fine policy applied in a cascade fashion is used to improve the IoU between proposals and the ground truth (GT). Benefiting from these strategies, P2BNet is able to produce high-quality instance-level bags for object detection. P2BNet improves the mean average precision (AP) by more than 50% relative to the previous best PSOD method on the MS COCO dataset. It also demonstrates great potential to bridge the performance gap between point-supervised and bounding-box-supervised detectors. The code will be released at github.com/ucas-vg/p2bnet.
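A minimal sketch of anchor-like bag generation around an annotated point is shown below; the scales and aspect ratios are illustrative placeholders, not P2BNet's actual configuration.

```python
import numpy as np

def point_to_proposal_bag(point, scales=(32, 64, 128, 256), ratios=(0.5, 1.0, 2.0)):
    """Generate a bag of proposals around a single annotated point: every
    combination of scale and aspect ratio yields one box centred on the point."""
    x, y = point
    boxes = []
    for s in scales:
        for r in ratios:
            w = s * np.sqrt(r)
            h = s / np.sqrt(r)
            boxes.append([x - w / 2, y - h / 2, x + w / 2, y + h / 2])
    return np.array(boxes)   # (len(scales) * len(ratios), 4) in (x1, y1, x2, y2)
```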
Zero-shot object detection (ZSD), the task of extending conventional detection models to detect objects from unseen categories, has emerged as a new challenge in computer vision. Most existing approaches tackle the ZSD task with a strict mapping-transfer strategy, which may lead to suboptimal ZSD results: 1) the learning process of these models ignores the available unseen-class information and can thus easily be biased towards the seen categories; 2) the original visual feature space is not well structured and lacks discriminative information. To address these issues, we develop a novel semantics-guided contrastive network for ZSD, named ContrastZSD, a detection framework that, for the first time, brings contrastive learning into the realm of zero-shot detection. In particular, ContrastZSD incorporates two semantics-guided contrastive learning subnetworks that contrast region-category and region-region pairs, respectively. The pairwise contrastive tasks exploit additional supervision signals derived from the ground-truth labels and a pre-defined class similarity distribution. Guided by this explicit semantic supervision, the model can learn more knowledge about unseen categories to avoid the bias problem towards seen concepts, while optimizing the structure of the visual features to be more discriminative for better visual-semantic alignment. Extensive experiments are conducted on two popular ZSD benchmarks, i.e., PASCAL VOC and MS COCO. The results show that our method outperforms the previous state-of-the-art on both the ZSD and generalized ZSD tasks.
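One way a region-category contrastive term with soft semantic targets could be written is sketched below; this is our simplification for illustration, with assumed shapes and names, not the ContrastZSD objective itself.

```python
import torch
import torch.nn.functional as F

def region_category_contrast(region_feats, class_embeds, gt_labels, class_sim, tau=0.1):
    """region_feats: (R, D); class_embeds: (C, D); gt_labels: (R,) long;
    class_sim: (C, C) pre-defined class similarity distribution, rows sum to 1.
    Each region is pulled towards its ground-truth class embedding while the
    soft targets for the remaining classes follow the class similarity
    distribution, so unseen-class semantics also shape the visual features."""
    logits = F.normalize(region_feats, dim=1) @ F.normalize(class_embeds, dim=1).t() / tau
    targets = class_sim[gt_labels]                    # (R, C) soft label distribution
    return F.kl_div(F.log_softmax(logits, dim=1), targets, reduction="batchmean")
```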
Due to object detection's close relationship with video analysis and image understanding, it has attracted much research attention in recent years. Traditional object detection methods are built on handcrafted features and shallow trainable architectures. Their performance easily stagnates by constructing complex ensembles which combine multiple low-level image features with high-level context from object detectors and scene classifiers. With the rapid development in deep learning, more powerful tools, which are able to learn semantic, high-level, deeper features, are introduced to address the problems existing in traditional architectures. These models behave differently in network architecture, training strategy and optimization function, etc. In this paper, we provide a review on deep learning based object detection frameworks. Our review begins with a brief introduction on the history of deep learning and its representative tool, namely Convolutional Neural Network (CNN). Then we focus on typical generic object detection architectures along with some modifications and useful tricks to improve detection performance further. As distinct specific detection tasks exhibit different characteristics, we also briefly survey several specific tasks, including salient object detection, face detection and pedestrian detection. Experimental analyses are also provided to compare various methods and draw some meaningful conclusions. Finally, several promising directions and tasks are provided to serve as guidelines for future work in both object detection and relevant neural network based learning systems.
State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet [1] and Fast R-CNN [2] have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. The RPN is trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. We further merge RPN and Fast R-CNN into a single network by sharing their convolutional features-using the recently popular terminology of neural networks with "attention" mechanisms, the RPN component tells the unified network where to look. For the very deep VGG-16 model [3], our detection system has a frame rate of 5fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007, 2012, and MS COCO datasets with only 300 proposals per image. In ILSVRC and COCO 2015 competitions, Faster R-CNN and RPN are the foundations of the 1st-place winning entries in several tracks. Code has been made publicly available.
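To illustrate the core of the RPN idea, here is a minimal sketch of an RPN-style head (a shared 3x3 conv with two sibling 1x1 convs); the class name, channel counts, and anchor count are assumptions for the example, not the reference implementation.

```python
import torch
import torch.nn as nn

class TinyRPNHead(nn.Module):
    """Predicts, at every feature-map position and for every anchor, an
    objectness score and four box-regression offsets."""
    def __init__(self, in_channels=256, num_anchors=9):
        super().__init__()
        self.shared = nn.Conv2d(in_channels, in_channels, kernel_size=3, padding=1)
        self.objectness = nn.Conv2d(in_channels, num_anchors, kernel_size=1)
        self.bbox_deltas = nn.Conv2d(in_channels, num_anchors * 4, kernel_size=1)

    def forward(self, feature_map):
        t = torch.relu(self.shared(feature_map))
        return self.objectness(t), self.bbox_deltas(t)

# Usage on a dummy feature map:
# head = TinyRPNHead()
# obj, deltas = head(torch.randn(1, 256, 38, 50))   # obj: (1, 9, 38, 50)
```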
Complex underwater environments bring new challenges to object detection, such as unbalanced lighting conditions, low contrast, occlusion, and mimicry by aquatic organisms. In such circumstances, objects captured by underwater cameras become blurred, and generic detectors often fail on these blurred objects. This work aims to solve the problem from two perspectives: uncertainty modeling and hard example mining. We propose a two-stage underwater detector named Boosting R-CNN, which comprises three key components. First, a new region proposal network named RetinaRPN is proposed, which provides high-quality proposals and considers objectness and IoU prediction to model the uncertainty of the object prior probability. Second, a probabilistic inference pipeline is introduced that combines the first-stage prior uncertainty and the second-stage classification score to model the final detection score. Finally, we propose a new hard example mining method named boosting reweighting. Specifically, when the region proposal network misestimates the object prior probability of a sample, boosting reweighting increases the classification loss of that sample on the R-CNN head during training, while reducing the loss of easy samples whose priors are accurately estimated. A robust detection head in the second stage can thus be obtained. At inference time, the R-CNN head is able to rectify errors of the first stage to improve performance. Comprehensive experiments on two underwater datasets and two generic object detection datasets demonstrate the effectiveness and robustness of our method.
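A hedged, simplified sketch of the boosting-reweighting idea is given below; the exact weighting function and its normalization are our own assumptions for illustration, not the paper's formulation.

```python
import torch

def boosting_reweight(cls_loss, rpn_prior, target_is_object, gamma=1.0):
    """cls_loss:         (N,) per-sample R-CNN classification loss
    rpn_prior:        (N,) first-stage object prior probability in [0, 1]
    target_is_object: (N,) 1.0 for foreground targets, 0.0 for background.
    Samples whose first-stage prior disagrees with the target get a larger
    weight on the second-stage loss; easy samples with a well-estimated prior
    get a smaller one."""
    prior_error = torch.abs(target_is_object - rpn_prior)     # 0 = perfect prior
    weights = prior_error.pow(gamma)
    weights = weights / (weights.mean() + 1e-12)               # keep loss scale stable
    return (weights.detach() * cls_loss).mean()
```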
Single-frame InfraRed Small Target (SIRST) detection has been a challenging task due to a lack of inherent characteristics, imprecise bounding box regression, a scarcity of real-world datasets, and sensitive localization evaluation. In this paper, we propose a comprehensive solution to these challenges. First, we find that the existing anchor-free label assignment method is prone to mislabeling small targets as background, leading to their omission by detectors. To overcome this issue, we propose an all-scale pseudo-box-based label assignment scheme that relaxes the constraints on scale and decouples the spatial assignment from the size of the ground-truth target. Second, motivated by the structured prior of feature pyramids, we introduce the one-stage cascade refinement network (OSCAR), which uses the high-level head as soft proposals for the low-level refinement head. This allows OSCAR to process the same target in a cascade coarse-to-fine manner. Finally, we present a new research benchmark for infrared small target detection, consisting of the SIRST-V2 dataset of real-world, high-resolution single-frame targets, the normalized contrast evaluation metric, and the DeepInfrared toolkit for detection. We conduct extensive ablation studies to evaluate the components of OSCAR and compare its performance to state-of-the-art model-driven and data-driven methods on the SIRST-V2 benchmark. Our results demonstrate that a top-down cascade refinement framework can improve the accuracy of infrared small target detection without sacrificing efficiency. The DeepInfrared toolkit, dataset, and trained models are available at https://github.com/YimianDai/open-deepinfrared to advance further research in this field.
Object detection has made tremendous progress in computer vision. Detecting small objects with degraded appearance remains a prominent challenge, especially for bird's-eye-view observations. To collect sufficient positive/negative samples for heuristic training, most object detectors preset region anchors so that the Intersection-over-Union (IoU) against the ground truth can be computed. In this setting, small objects are frequently abandoned or mislabeled. In this paper, we propose an effective Dynamic Enhancement Anchor (DEA) network to construct a novel training sample generator. Unlike other state-of-the-art techniques, the proposed network leverages a sample discriminator to perform interactive sample screening between an anchor-based unit and an anchor-free unit, so as to generate eligible samples. In addition, multi-task joint training with a conservative anchor-based inference scheme enhances the performance of the proposed model while reducing computational complexity. The proposed scheme supports both oriented and horizontal object detection tasks. Extensive experiments on two challenging aerial benchmarks (i.e., DOTA and HRSC2016) show that our method achieves state-of-the-art accuracy with moderate inference speed and training overhead. On DOTA, our DEA-Net integrated with the RoI-Transformer baseline surpasses the advanced method by 0.40% mean average precision (mAP) for oriented object detection with a weaker backbone network (ResNet-101 vs. ResNet-152), and by 3.08% mAP for horizontal object detection with the same backbone. Moreover, our DEA-Net integrated with the ReDet baseline achieves state-of-the-art performance of 80.37%. On HRSC2016, it surpasses the previous best model by 1.1% using only 3 horizontal anchors.
Open-set object detection (OSOD) aims to detect the known categories and identify unknown objects in a dynamic world, and has attracted significant attention. However, previous approaches only consider this problem in data-abundant conditions, while neglecting few-shot scenes. In this paper, we seek a solution for few-shot open-set object detection (FSOSOD), which aims to quickly train a detector based on few samples while detecting all known classes and identifying unknown classes. The main challenge for this task is that few training samples induce the model to overfit on the known classes, resulting in poor open-set performance. We propose a new FSOSOD algorithm to tackle this issue, named Few-shOt Open-set Detector (FOOD), which contains a novel class weight sparsification classifier (CWSC) and a novel unknown decoupling learner (UDL). To prevent over-fitting, CWSC randomly sparsifies parts of the normalized weights for the logit prediction of all classes, and then decreases the co-adaptability between a class and its neighbors. Alongside, UDL decouples the training of the unknown class and enables the model to form a compact unknown decision boundary. Thus, unknown objects can be identified with a confidence probability without any pseudo-unknown samples for training. We compare our method with several state-of-the-art OSOD methods in few-shot scenes and observe that our method improves the recall of unknown classes by 5%-9% across all shots in the VOC-COCO dataset setting.
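A rough sketch of the weight-sparsification idea applied to a cosine classifier is shown below; the dropout rate, scaling, and class name are our assumptions for illustration, not the paper's exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparsifiedCosineClassifier(nn.Module):
    """Class weights are L2-normalized and parts of them are randomly zeroed
    during training before computing the logits, which discourages
    co-adaptation between a class and its neighbours."""
    def __init__(self, feat_dim, num_classes, sparsify_p=0.3, scale=16.0):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(num_classes, feat_dim) * 0.01)
        self.sparsify_p = sparsify_p
        self.scale = scale

    def forward(self, feats):
        w = F.normalize(self.weight, dim=1)
        if self.training:
            mask = (torch.rand_like(w) > self.sparsify_p).float()
            w = w * mask
        return self.scale * F.normalize(feats, dim=1) @ w.t()
```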
Most state-of-the-art instance-level human parsing models adopt two-stage anchor-based detectors and therefore cannot avoid heuristic anchor-box design and a lack of pixel-level analysis. To address these two issues, we design an instance-level human parsing network that is anchor-free and solvable at the pixel level. It consists of two simple sub-networks: an anchor-free detection head for bounding-box prediction and an edge-guided parsing head for human segmentation. The anchor-free detection head inherits pixel-level merits and effectively avoids the hyper-parameter sensitivity demonstrated in object detection applications. By introducing a part-aware boundary cue, the edge-guided parsing head is able to distinguish adjacent human parts from each other, up to the level of human instances, even for overlapping instances. Meanwhile, a refinement head integrating box-level scores and part-level parsing quality is exploited to improve the quality of the parsing results. Experiments on two multiple human parsing datasets (i.e., CIHP and LV-MHP-v2.0) and one video instance-level human parsing dataset (i.e., VIP) show that our method outperforms state-of-the-art one-stage top-down alternatives at both the global level and the instance level.
Weakly supervised object detection (WSOD) using only image-level annotations has attracted growing attention over the past few years. Whereas such tasks are typically addressed with domain-specific solutions focused on natural images, we show that a simple multiple-instance approach applied to pre-trained deep features yields excellent performance on non-photographic datasets, possibly including new classes. The approach does not involve any fine-tuning or cross-domain learning and is therefore efficient and potentially applicable to arbitrary datasets and classes. We investigate several flavors of the proposed approach, some of which include multi-layer perceptrons and multi-layer classifiers. Despite its simplicity, our method shows competitive results on a range of publicly available datasets, including paintings (People-Art, IconArt), watercolors, cliparts and comics, and allows quickly learning unseen visual categories.
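The sketch below shows one minimal multiple-instance baseline over frozen, pre-extracted region features; it is our own simplified variant for illustration, not the authors' exact classifier.

```python
import torch
import torch.nn as nn

class MaxPoolMIL(nn.Module):
    """Pre-trained region features are scored per class and max-pooled over the
    regions of an image, so the image-level label can supervise the region
    classifier directly."""
    def __init__(self, feat_dim, num_classes):
        super().__init__()
        self.region_classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, region_feats):
        # region_feats: (num_regions, feat_dim) pre-extracted deep features
        region_scores = self.region_classifier(region_feats)          # (R, C)
        image_scores, best_regions = region_scores.max(dim=0)         # MIL max-pooling
        return image_scores, best_regions

# Training would apply BCEWithLogitsLoss between image_scores and the
# image-level labels; at test time, best_regions gives the region detected
# for each class.
```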
Object detectors trained with weak annotations are affordable alternatives to fully supervised ones. However, there is still a significant performance gap between them. We propose to narrow this gap by fine-tuning a pre-trained weakly supervised detector on a few fully annotated samples automatically selected from the training set with ``box-in-box'' (BiB), a novel active learning strategy designed specifically for the well-documented failure modes of weakly supervised detectors. Experiments on the VOC07 and COCO benchmarks show that BiB outperforms other active learning techniques and significantly improves the performance of the base weakly supervised detector with only a few fully annotated images per class. BiB reaches 97% of the performance of fully supervised Fast RCNN with only 10% of fully annotated images on VOC07. On COCO, using on average 10 fully annotated images per class, or the equivalent of 1% of the training set, BiB also reduces the performance gap (in AP) between the weakly supervised detector and the fully supervised Fast RCNN by over 70%, showing a good trade-off between performance and data efficiency. Our code is publicly available at https://github.com/huyvvo/bib.
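A simple containment test of the kind the "box-in-box" criterion alludes to is sketched below; the threshold and formulation are our own illustrative assumptions, not BiB's exact selection rule.

```python
import numpy as np

def box_in_box_pairs(boxes, containment_thr=0.9):
    """Flag pairs of detections from the same image where one box is almost
    entirely contained in another, a typical failure mode of weakly-supervised
    detectors.
    boxes: (N, 4) as (x1, y1, x2, y2); returns index pairs (i, j), i inside j."""
    boxes = np.asarray(boxes, dtype=float)
    x1 = np.maximum(boxes[:, None, 0], boxes[None, :, 0])
    y1 = np.maximum(boxes[:, None, 1], boxes[None, :, 1])
    x2 = np.minimum(boxes[:, None, 2], boxes[None, :, 2])
    y2 = np.minimum(boxes[:, None, 3], boxes[None, :, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    containment = inter / np.maximum(area[:, None], 1e-12)   # fraction of box i inside box j
    np.fill_diagonal(containment, 0.0)
    return np.argwhere(containment > containment_thr)
```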
We present a method for detecting objects in images using a single deep neural network. Our approach, named SSD, discretizes the output space of bounding boxes into a set of default boxes over different aspect ratios and scales per feature map location. At prediction time, the network generates scores for the presence of each object category in each default box and produces adjustments to the box to better match the object shape. Additionally, the network combines predictions from multiple feature maps with different resolutions to naturally handle objects of various sizes. SSD is simple relative to methods that require object proposals because it completely eliminates proposal generation and subsequent pixel or feature resampling stages and encapsulates all computation in a single network. This makes SSD easy to train and straightforward to integrate into systems that require a detection component. Experimental results on the PASCAL VOC, COCO, and ILSVRC datasets confirm that SSD has competitive accuracy to methods that utilize an additional object proposal step and is much faster, while providing a unified framework for both training and inference. For 300 × 300 input, SSD achieves 74.3% mAP on VOC2007 test at 59 FPS on a Nvidia Titan X and for 512 × 512 input, SSD achieves 76.9% mAP, outperforming a comparable state-of-the-art Faster R-CNN model. Compared to other single stage methods, SSD has much better accuracy even with a smaller input image size. Code is available at: https://github.com/weiliu89/caffe/tree/ssd .
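The default-box generation for one feature-map cell can be sketched as below; the particular scales, ratios, and function name are illustrative, following the paper's convention of adding an extra square box at the geometric-mean scale for aspect ratio 1.

```python
import numpy as np

def default_boxes_for_cell(cx, cy, scale, next_scale, ratios=(1.0, 2.0, 0.5)):
    """SSD-style default boxes for one cell: one box per aspect ratio at the
    given scale, plus an extra square box at the geometric-mean scale."""
    boxes = []
    for r in ratios:
        w, h = scale * np.sqrt(r), scale / np.sqrt(r)
        boxes.append([cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2])
    s_extra = np.sqrt(scale * next_scale)        # additional box for ratio 1
    boxes.append([cx - s_extra / 2, cy - s_extra / 2,
                  cx + s_extra / 2, cy + s_extra / 2])
    return np.array(boxes)
```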
Labeling data is often expensive and time-consuming, especially for tasks such as object detection and instance segmentation, which require dense labeling of the image. While few-shot object detection is about training a model on novel (unseen) object classes with little data, it still requires prior training on many labeled examples of base (seen) classes. On the other hand, self-supervised methods aim at learning representations from unlabeled data that transfer to downstream tasks such as object detection. Combining few-shot and self-supervised object detection is a promising research direction. In this survey, we review and characterize the most recent approaches to few-shot and self-supervised object detection. We then give our main takeaways and discuss future research directions. Project page: https://gabrielhuang.github.io/fsod-survey/
To balance the annotation labor and the granularity of supervision, single-frame annotation has been introduced in temporal action localization. It provides a rough temporal location for an action but implicitly overstates the supervision from the annotated frame during training, leading to confusion between actions and backgrounds, i.e., action incompleteness and background false positives. To tackle the two challenges, in this work, we present the Snippet Classification model and the Dilation-Erosion module. In the Dilation-Erosion module, we expand the potential action segments with a loose criterion to alleviate the problem of action incompleteness, and then remove the background from the potential action segments to alleviate the problem of background false positives. Relying on the single-frame annotation and the output of the snippet classification, the Dilation-Erosion module mines pseudo snippet-level ground-truth, hard backgrounds and evident backgrounds, which in turn further train the Snippet Classification model. This forms a cyclic dependency. Furthermore, we propose a new embedding loss to aggregate the features of action instances with the same label and separate the features of actions from backgrounds. Experiments on THUMOS14 and ActivityNet 1.2 validate the effectiveness of the proposed method. Code has been made publicly available (https://github.com/LingJun123/single-frame-TAL).
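A minimal morphological interpretation of the dilation-erosion step on a 1D snippet-score sequence is sketched below; it is our own simplified illustration under assumed thresholds and window sizes, not the authors' module.

```python
import numpy as np

def dilate_then_erode(action_scores, threshold=0.5, dilate=3, erode=3):
    """Dilation expands tentative action segments to counter incompleteness;
    erosion then trims loosely attached background snippets."""
    mask = (np.asarray(action_scores) >= threshold).astype(np.uint8)

    def morph(m, k, op):
        # Sliding-window max (dilation) or min (erosion) with edge padding.
        pad = k // 2
        padded = np.pad(m, pad, mode="edge")
        windows = np.stack([padded[i:i + len(m)] for i in range(k)], axis=0)
        return op(windows, axis=0)

    dilated = morph(mask, dilate, np.max)     # expand potential action segments
    eroded = morph(dilated, erode, np.min)    # remove thin background responses
    return eroded
```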
Fast R-CNN
Ross Girshick
Categories:
2015-04-30
This paper proposes a Fast Region-based Convolutional Network method (Fast R-CNN) for object detection. Fast R-CNN builds on previous work to efficiently classify object proposals using deep convolutional networks. Compared to previous work, Fast R-CNN employs several innovations to improve training and testing speed while also increasing detection accuracy. Fast R-CNN trains the very deep VGG16 network 9× faster than R-CNN, is 213× faster at test-time, and achieves a higher mAP on PASCAL VOC 2012. Compared to SPPnet, Fast R-CNN trains VGG16 3× faster, tests 10× faster, and is more accurate. Fast R-CNN is implemented in Python and C++ (using Caffe) and is available under the open-source MIT License at https://github.com/rbgirshick/fast-rcnn.
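The RoI pooling step at the core of Fast R-CNN can be illustrated with torchvision's `roi_pool` operator; the feature-map shape, proposal coordinates, and stride below are dummy values for the example, not the original Caffe setup.

```python
import torch
from torchvision.ops import roi_pool

# Proposals are pooled from a shared feature map into a fixed 7x7 grid, so one
# forward pass through the backbone serves all proposals of an image.
feature_map = torch.randn(1, 512, 38, 50)                 # conv5 features of one image
rois = torch.tensor([[0, 10.0, 20.0, 200.0, 150.0],       # (batch_idx, x1, y1, x2, y2)
                     [0, 50.0, 60.0, 300.0, 250.0]])
pooled = roi_pool(feature_map, rois, output_size=(7, 7), spatial_scale=1.0 / 16)
print(pooled.shape)                                        # torch.Size([2, 512, 7, 7])
```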