Intersection over Union (IoU) is the most popular evaluation metric used in object detection benchmarks. However, there is a gap between optimizing the commonly used distance losses for regressing the parameters of a bounding box and maximizing this metric value. The optimal objective for a metric is the metric itself. In the case of axis-aligned 2D bounding boxes, it can be shown that IoU can be directly used as a regression loss. However, IoU has a plateau that makes it infeasible to optimize in the case of non-overlapping bounding boxes. In this paper, we address the weaknesses of IoU by introducing a generalized version as both a new loss and a new metric. By incorporating this generalized IoU (GIoU) as a loss into state-of-the-art object detection frameworks, we show a consistent improvement in their performance using both the standard IoU-based and the new GIoU-based performance measures on popular object detection benchmarks such as PASCAL VOC and MS COCO.
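Since GIoU only adds the smallest enclosing box to the standard IoU computation, it is straightforward to sketch as a differentiable loss. Below is a minimal PyTorch sketch, assuming (x1, y1, x2, y2) box tensors; the authors' released implementation is authoritative.

```python
import torch

def giou_loss(pred, target, eps=1e-7):
    """GIoU loss sketch for axis-aligned boxes, shape (N, 4), (x1, y1, x2, y2)."""
    # Intersection of predicted and target boxes.
    ix1 = torch.max(pred[:, 0], target[:, 0])
    iy1 = torch.max(pred[:, 1], target[:, 1])
    ix2 = torch.min(pred[:, 2], target[:, 2])
    iy2 = torch.min(pred[:, 3], target[:, 3])
    inter = (ix2 - ix1).clamp(min=0) * (iy2 - iy1).clamp(min=0)

    area_p = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1])
    area_t = (target[:, 2] - target[:, 0]) * (target[:, 3] - target[:, 1])
    union = area_p + area_t - inter
    iou = inter / (union + eps)

    # Smallest enclosing box C; GIoU subtracts the fraction of C not covered
    # by the union, so the loss stays informative for non-overlapping boxes.
    cw = torch.max(pred[:, 2], target[:, 2]) - torch.min(pred[:, 0], target[:, 0])
    ch = torch.max(pred[:, 3], target[:, 3]) - torch.min(pred[:, 1], target[:, 1])
    area_c = cw * ch
    giou = iou - (area_c - union) / (area_c + eps)
    return (1.0 - giou).mean()
```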
Bounding box regression is the crucial step in object detection. In existing methods, while the $\ell_n$-norm loss is widely adopted for bounding box regression, it is not tailored to the evaluation metric, i.e., Intersection over Union (IoU). Recently, IoU loss and generalized IoU (GIoU) loss have been proposed to benefit the IoU metric, but they still suffer from slow convergence and inaccurate regression. In this paper, we propose a Distance-IoU (DIoU) loss incorporating the normalized distance between the predicted box and the target box, which converges much faster in training than the IoU and GIoU losses. Furthermore, this paper summarizes three geometric factors in bounding box regression, i.e., overlap area, central point distance, and aspect ratio, based on which a Complete IoU (CIoU) loss is proposed, leading to faster convergence and better performance. By incorporating the DIoU and CIoU losses into state-of-the-art object detection algorithms, e.g., YOLOv3, SSD, and Faster R-CNN, we achieve notable performance gains in terms of not only the IoU metric but also the GIoU metric. Moreover, DIoU can be easily adopted as the criterion in non-maximum suppression (NMS), further boosting performance. The source code and trained models are available at https://github.com/Zzh-tju/DIoU.
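A minimal PyTorch sketch of the DIoU and CIoU losses as summarized above — overlap, normalized center distance, and (for CIoU) an aspect-ratio consistency term. Details such as the `no_grad` treatment of the trade-off weight follow my reading of the paper; the linked repository is authoritative.

```python
import math
import torch

def diou_ciou_loss(pred, target, use_ciou=True, eps=1e-7):
    """DIoU/CIoU loss sketch for (N, 4) boxes in (x1, y1, x2, y2) format."""
    px1, py1, px2, py2 = pred.unbind(dim=1)
    tx1, ty1, tx2, ty2 = target.unbind(dim=1)

    # Overlap area term (IoU).
    iw = (torch.min(px2, tx2) - torch.max(px1, tx1)).clamp(min=0)
    ih = (torch.min(py2, ty2) - torch.max(py1, ty1)).clamp(min=0)
    inter = iw * ih
    union = (px2 - px1) * (py2 - py1) + (tx2 - tx1) * (ty2 - ty1) - inter
    iou = inter / (union + eps)

    # Normalized center distance: rho^2 / c^2, with c the diagonal of the
    # smallest enclosing box.
    rho2 = ((px1 + px2 - tx1 - tx2) ** 2 + (py1 + py2 - ty1 - ty2) ** 2) / 4
    cw = torch.max(px2, tx2) - torch.min(px1, tx1)
    ch = torch.max(py2, ty2) - torch.min(py1, ty1)
    loss = 1 - iou + rho2 / (cw ** 2 + ch ** 2 + eps)   # DIoU loss

    if use_ciou:  # add the aspect-ratio consistency term of CIoU
        v = (4 / math.pi ** 2) * (
            torch.atan((tx2 - tx1) / (ty2 - ty1 + eps))
            - torch.atan((px2 - px1) / (py2 - py1 + eps))) ** 2
        with torch.no_grad():
            alpha = v / (1 - iou + v + eps)
        loss = loss + alpha * v
    return loss.mean()
```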
In modern detectors, a localization loss that regresses the four box variables independently, such as the Smooth-$\ell_1$ loss, is used by default. However, this kind of loss is oversimplified, making it inconsistent with the final evaluation metric, Intersection over Union (IoU). Directly adopting the standard IoU is not feasible either, since the constant-zero plateau for non-overlapping boxes and the non-zero gradient at the minimum may make it untrainable. We therefore propose a systematic method to address these problems. First, we propose a new metric, the extended IoU (EIoU), which is well defined when two boxes do not overlap and reduces to the standard IoU when they overlap. Second, we introduce a convexification technique (CT) to construct a loss on the basis of EIoU, which guarantees that the gradient at the minimum is zero. Third, we propose a steady optimization technique (SOT) to make the fractional EIoU loss approach the minimum more steadily and smoothly. Fourth, to fully exploit the capability of the EIoU-based loss, we introduce an interrelated IoU-prediction head to further boost localization accuracy. With the proposed contributions, the new method incorporated into Faster R-CNN with ResNet50+FPN as the backbone yields a 4.2 mAP gain on VOC2007 over the baseline Smooth-$\ell_1$ loss, with almost no extra training or inference computational cost. Moreover, the stricter the metric, the more notable the gain, e.g., a 5.4 mAP gain on COCO2017 at the $AP_{90}$ metric.
We present ObjectBox, a novel single-stage anchor-free and highly generalizable object detection approach. As opposed to both existing anchor-based and anchor-free detectors, which are more biased toward specific object scales in their label assignments, we use only object center locations as positive samples and treat all objects equally across different feature levels regardless of the objects' sizes or shapes. Specifically, our label assignment strategy considers the object center locations as shape- and size-agnostic anchors in an anchor-free fashion, and allows learning at all scales for every object. To support this, we define new regression targets as the distances from the two corners of the center cell location to the four sides of the bounding box. Moreover, to handle scale-variant objects, we propose a tailored IoU loss to deal with boxes of different sizes. As a result, our proposed object detector does not need any dataset-dependent hyper-parameters to be tuned. We evaluate our method on the MS-COCO 2017 and PASCAL VOC 2012 datasets and compare our results to state-of-the-art methods, observing that ObjectBox performs favorably compared to prior works. Furthermore, we perform rigorous ablation experiments to evaluate the different components of our method. Our code is available at: https://github.com/mohsenzand/objectbox.
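To make the regression targets concrete, here is a hedged sketch assuming the center cell is addressed by integer grid coordinates and distances are measured in stride units; the exact corner conventions and normalization are assumptions, so consult the official repository.

```python
def objectbox_targets(box, cell_x, cell_y, stride):
    """Sketch of the targets described above: distances from the two corners
    of the object's center cell to the four sides of the ground-truth box.
    Corner conventions and normalization here are assumptions."""
    x1, y1, x2, y2 = box                         # box in image coordinates
    cx1, cy1 = cell_x * stride, cell_y * stride  # center cell, top-left corner
    cx2, cy2 = cx1 + stride, cy1 + stride        # center cell, bottom-right corner
    # Distances (in stride units) from the cell corners to the box sides.
    return ((cx1 - x1) / stride, (cy1 - y1) / stride,
            (x2 - cx2) / stride, (y2 - cy2) / stride)
```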
Confluence is a novel non-IoU (Intersection over Union) alternative to non-maximum suppression (NMS) in bounding-box post-processing for object detection. It overcomes the inherent limitations of IoU-based NMS variants by using a proximity metric inspired by the normalized Manhattan distance, providing a more stable and consistent representation of bounding-box clustering. Unlike greedy and soft NMS, it does not rely solely on classification confidence scores to select the best bounding box; instead, it selects the box that is closest to every other box within a given cluster and removes highly confluent neighboring boxes. On the MS COCO and CrowdHuman benchmarks, Confluence improves average precision by up to 2.3-3.8% and average recall by up to 5.3-7.2% when compared against the de-facto standard and state-of-the-art NMS variants. Extensive qualitative analysis and threshold-sensitivity experiments support the quantitative results, backing the conclusion that Confluence is more robust than NMS variants. Confluence represents a paradigm shift in bounding-box processing, with the potential to replace IoU in the bounding-box regression process.
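A sketch of the proximity computation described above, under the assumption that the two boxes' coordinates are jointly min-max normalized before the Manhattan distance is taken; the paper's exact normalization recipe may differ.

```python
import numpy as np

def confluence_proximity(a, b, eps=1e-9):
    """Normalized-Manhattan proximity sketch for (x1, y1, x2, y2) boxes.
    Lower values mean the boxes are more 'confluent' (tightly clustered)."""
    xs = np.array([a[0], a[2], b[0], b[2]], dtype=float)
    ys = np.array([a[1], a[3], b[1], b[3]], dtype=float)
    # Jointly min-max normalize x and y coordinates over the pair (assumption).
    xs = (xs - xs.min()) / (xs.max() - xs.min() + eps)
    ys = (ys - ys.min()) / (ys.max() - ys.min() + eps)
    # Sum of absolute coordinate differences (Manhattan distance).
    return (abs(xs[0] - xs[2]) + abs(xs[1] - xs[3])
            + abs(ys[0] - ys[2]) + abs(ys[1] - ys[3]))
```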
In object detection, bounding-box regression (BBR) is a crucial step that determines object localization performance. However, we find that most previous loss functions for BBR have two main drawbacks: (i) both $\ell_n$-norm and IoU-based loss functions are inefficient at describing the objective of BBR, which leads to slow convergence and inaccurate regression results; (ii) most loss functions ignore the imbalance problem in BBR, namely that the large number of anchor boxes with small overlaps with the target box contribute the most to the optimization of BBR. To mitigate the resulting adverse effects, we perform a thorough study to exploit the potential of BBR losses in this paper. First, an Efficient Intersection over Union (EIOU) loss is proposed, which explicitly measures the discrepancies of three geometric factors in BBR, i.e., overlap area, central point, and side length. Second, we state the Effective Example Mining (EEM) problem and propose a regression version of focal loss to make the regression process focus on high-quality anchor boxes. Finally, the above two parts are combined to obtain a new loss function, namely the Focal-EIOU loss. Extensive experiments are performed on both synthetic and real datasets, showing notable superiority over other BBR losses in both convergence speed and localization accuracy.
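A hedged PyTorch sketch of the combined loss as I read the abstract: an EIOU term measuring overlap, center-distance, and side-length discrepancies, re-weighted by IoU^γ so that high-quality anchors dominate the regression. The constants and the exact focal form are assumptions; the paper is authoritative.

```python
import torch

def focal_eiou_loss(pred, target, gamma=0.5, eps=1e-7):
    """Focal-EIOU loss sketch for (N, 4) boxes in (x1, y1, x2, y2) format."""
    px1, py1, px2, py2 = pred.unbind(dim=1)
    tx1, ty1, tx2, ty2 = target.unbind(dim=1)
    pw, ph = px2 - px1, py2 - py1
    tw, th = tx2 - tx1, ty2 - ty1

    iw = (torch.min(px2, tx2) - torch.max(px1, tx1)).clamp(min=0)
    ih = (torch.min(py2, ty2) - torch.max(py1, ty1)).clamp(min=0)
    inter = iw * ih
    union = pw * ph + tw * th - inter
    iou = inter / (union + eps)

    cw = torch.max(px2, tx2) - torch.min(px1, tx1)  # enclosing box width
    ch = torch.max(py2, ty2) - torch.min(py1, ty1)  # enclosing box height
    rho2 = ((px1 + px2 - tx1 - tx2) ** 2 + (py1 + py2 - ty1 - ty2) ** 2) / 4

    # EIOU: overlap + center-distance + side-length discrepancy terms.
    l_eiou = (1 - iou
              + rho2 / (cw ** 2 + ch ** 2 + eps)
              + (pw - tw) ** 2 / (cw ** 2 + eps)
              + (ph - th) ** 2 / (ch ** 2 + eps))
    # Focal re-weighting: emphasize high-IoU (high-quality) anchors.
    return (iou.detach() ** gamma * l_eiou).mean()
```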
Object detection is a typical multi-task learning application that simultaneously optimizes classification and regression. However, the classification loss always dominates the multi-task loss in anchor-based methods, hampering the consistent and balanced optimization of the tasks. In this paper, we find that shifting the bounding boxes can change the division of positive and negative samples in classification, meaning that classification depends on regression. Moreover, considering different datasets, optimizers, and regression loss functions, we summarize three important conclusions about fine-tuning the loss weights. Based on these conclusions, we propose Adaptive Loss Weight Adjustment (ALWA), which addresses the imbalance in optimizing anchor-based methods according to the statistical characteristics of the losses. By incorporating ALWA into previous state-of-the-art detectors, we achieve significant performance gains on PASCAL VOC and MS COCO, even with L1, SmoothL1, and CIoU losses. The code is available at https://github.com/ywx-hub/alwa.
In object detection, the intersection over union (IoU) threshold is frequently used to define positives/negatives. The threshold used to train a detector defines its quality. While the commonly used threshold of 0.5 leads to noisy (low-quality) detections, detection performance frequently degrades for larger thresholds. This paradox of high-quality detection has two causes: 1) overfitting, due to vanishing positive samples for large thresholds, and 2) inference-time quality mismatch between detector and test hypotheses. A multi-stage object detection architecture, the Cascade R-CNN, composed of a sequence of detectors trained with increasing IoU thresholds, is proposed to address these problems. The detectors are trained sequentially, using the output of one detector as the training set for the next. This resampling progressively improves hypothesis quality, guaranteeing a positive training set of equivalent size for all detectors and minimizing overfitting. The same cascade is applied at inference, to eliminate quality mismatches between hypotheses and detectors. An implementation of the Cascade R-CNN without bells and whistles achieves state-of-the-art performance on the COCO dataset, and significantly improves high-quality detection on generic and specific object detection datasets, including VOC, KITTI, CityPerson, and WiderFace. Finally, the Cascade R-CNN is generalized to instance segmentation, with nontrivial improvements over the Mask R-CNN. To facilitate future research, two implementations are made available at https://github.com/zhaoweicai/cascade-rcnn (Caffe) and https://github.com/zhaoweicai/Detectron-Cascade-RCNN (Detectron).
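To make the cascade idea concrete, here is a control-flow-only sketch of inference. The `heads` callables and their signature are hypothetical stand-ins for the per-stage detection heads; real Cascade R-CNN heads also pool backbone features per box.

```python
def cascade_forward(feats, proposals, heads, iou_thresholds=(0.5, 0.6, 0.7)):
    """Conceptual cascade inference sketch (assumes at least one head).
    Each head is a hypothetical stage trained with the matching IoU threshold
    for its positive/negative assignment."""
    boxes = proposals
    for head, iou_thr in zip(heads, iou_thresholds):
        # Each stage classifies and regresses the previous stage's boxes;
        # the refined boxes become the next stage's higher-quality hypotheses.
        scores, boxes = head(feats, boxes)
    return scores, boxes  # the last stage yields the final detections
```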
This paper studies the performance evaluation criteria for fundamental vision tasks involving sets of objects, namely object detection, instance-level segmentation, and multi-object tracking. The rankings of algorithms by existing criteria can fluctuate with different choices of parameters, e.g., the Intersection over Union (IoU) threshold, making their evaluations unreliable. More importantly, there is no means to verify whether we can trust a criterion's evaluation. This work proposes the notion of trustworthiness for performance criteria, which requires (i) robustness to parameters for reliability, (ii) contextual meaningfulness in sanity tests, and (iii) consistency with mathematical requirements such as the metric properties. We observe that these requirements are neglected by many widely used criteria, and explore alternative criteria using a set of shape-based metrics. We also evaluate all these criteria against the proposed trustworthiness requirements.
360° cameras have gained popularity over the past few years. In this paper, we propose two fundamental techniques for object detection in 360° images: Field-of-View IoU (FoV-IoU) and 360Augmentation. Although most object detection neural networks designed for perspective images are applicable to 360° images in equirectangular projection (ERP) format, their performance deteriorates because of the distortion in ERP images. Our methods can readily be integrated with existing perspective object detectors and significantly improve their performance. FoV-IoU computes the intersection over union of two field-of-view bounding boxes in a spherical image, and can be used for training, inference, and evaluation, while 360Augmentation is a data augmentation technique specific to 360° object detection that randomly rotates the spherical image to counteract the bias introduced by the sphere-to-plane projection. We conduct extensive experiments on the 360-Indoor dataset with different types of perspective object detectors and show the consistent effectiveness of our methods.
Previous knowledge distillation (KD) methods for object detection mostly focus on feature imitation instead of mimicking the prediction logits, owing to the inefficiency of logits in distilling localization information. In this paper, we investigate whether logit mimicking always lags behind feature imitation. Towards this goal, we first present a novel localization distillation (LD) method which can efficiently transfer localization knowledge from the teacher to the student. Second, we introduce the concept of a valuable localization region, which aids in selectively distilling the classification and localization knowledge for a certain region. Combining these two new components, we show for the first time that logit mimicking can outperform feature imitation, and that the absence of localization distillation is a critical reason why logit mimicking has underperformed for years. The thorough studies exhibit the great potential of logit mimicking, which can significantly alleviate localization ambiguity, learn robust feature representations, and ease the training difficulty in the early stage. We also provide a theoretical connection between the proposed LD and classification KD, showing that they share an equivalent optimization effect. Our distillation scheme is simple as well as effective, and can be easily applied to both dense horizontal object detectors and rotated object detectors. Extensive experiments on the MS COCO, PASCAL VOC, and DOTA benchmarks demonstrate that our method achieves considerable AP improvement without any sacrifice in inference speed. Our source code and pretrained models are publicly available at https://github.com/HikariTJU/LD.
Non-maximum suppression is an integral part of the object detection pipeline. First, it sorts all detection boxes on the basis of their scores. The detection box M with the maximum score is selected, and all other detection boxes with a significant overlap (using a pre-defined threshold) with M are suppressed. This process is recursively applied to the remaining boxes. By the design of the algorithm, if an object lies within the predefined overlap threshold, it leads to a miss. To this end, we propose Soft-NMS, an algorithm which decays the detection scores of all other objects as a continuous function of their overlap with M. Hence, no object is eliminated in this process. Soft-NMS obtains consistent improvements in the COCO-style mAP metric on standard datasets like PASCAL VOC 2007 (1.7% for both R-FCN and Faster R-CNN) and MS-COCO (1.3% for R-FCN and 1.1% for Faster R-CNN) by just changing the NMS algorithm, without any additional hyper-parameters. Using Deformable R-FCN, Soft-NMS improves the state of the art in object detection from 39.8% to 40.9% with a single model. Further, the computational complexity of Soft-NMS is the same as traditional NMS, and hence it can be efficiently implemented. Since Soft-NMS does not require any extra training and is simple to implement, it can be easily integrated into any object detection pipeline. Code for Soft-NMS is publicly available on GitHub: http://bit.ly/2nJLNMu.
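A minimal single-class sketch of the Gaussian variant of Soft-NMS: rather than deleting boxes that overlap the current maximum M, their scores are decayed continuously with IoU. Parameter defaults are illustrative, not the paper's tuned values.

```python
import numpy as np

def iou_xyxy(a, b):
    """IoU of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def soft_nms(boxes, scores, sigma=0.5, score_thresh=0.001):
    """Gaussian Soft-NMS sketch; returns indices of kept boxes."""
    scores = scores.astype(float).copy()
    keep, idxs = [], list(range(len(scores)))
    while idxs:
        m = max(idxs, key=lambda i: scores[i])  # box with the maximum score
        keep.append(m)
        idxs.remove(m)
        for i in idxs:
            # Decay (rather than zero out) scores of overlapping boxes.
            scores[i] *= np.exp(-(iou_xyxy(boxes[m], boxes[i]) ** 2) / sigma)
        idxs = [i for i in idxs if scores[i] >= score_thresh]
    return keep
```

The paper's linear variant instead multiplies the score by (1 - IoU) whenever the IoU with M exceeds the overlap threshold.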
Visual inspection of aerial drone footage is an integral part of land search and rescue (SAR) operations today. Since this inspection is a slow, tedious, and error-prone job for humans, we propose a novel deep learning algorithm to automate this aerial person detection (APD) task. We experiment with model architecture selection, online data augmentation, transfer learning, image tiling, and several other techniques to improve the test performance of our method. We present the novel Aerial Inspection RetinaNet (AIR) algorithm as the combination of these contributions. The AIR detector demonstrates state-of-the-art performance on a commonly used SAR test dataset in terms of both precision (~21 percentage-point increase) and speed. In addition, we provide a new formal definition of the APD problem in SAR missions; namely, we propose a novel evaluation scheme that ranks detectors in terms of real-world SAR localization requirements. Finally, we propose a novel post-processing method for robust, approximate object localization: the merging of overlapping bounding boxes (MOB) algorithm. This final processing stage used in the AIR detector significantly improves its performance and usability in the face of real-world aerial SAR missions.
We propose a fully convolutional one-stage object detector (FCOS) to solve object detection in a per-pixel prediction fashion, analogous to semantic segmentation. Almost all state-of-the-art object detectors, such as RetinaNet, SSD, YOLOv3, and Faster R-CNN, rely on pre-defined anchor boxes. In contrast, our proposed detector FCOS is anchor-box free, as well as proposal free. By eliminating the predefined set of anchor boxes, FCOS completely avoids the complicated computation related to anchor boxes, such as calculating overlaps during training. More importantly, we also avoid all hyper-parameters related to anchor boxes, which are often very sensitive to the final detection performance. With non-maximum suppression (NMS) as the only post-processing step, FCOS with ResNeXt-64x4d-101 achieves 44.7% AP with single-model and single-scale testing, surpassing previous one-stage detectors while being much simpler. For the first time, we demonstrate a much simpler and more flexible detection framework achieving improved detection accuracy. We hope that the proposed FCOS framework can serve as a simple and strong alternative for many other instance-level tasks. Code is available at: tinyurl.com/FCOSv1
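The per-pixel formulation reduces box regression to four distances from each location to the sides of its ground-truth box, which the following sketch computes; the per-FPN-level scale normalization used in the full method is omitted here.

```python
def fcos_targets(x, y, box):
    """FCOS-style per-pixel regression targets: distances from location
    (x, y) to the four sides of the ground-truth box (x1, y1, x2, y2).
    All four values are positive iff (x, y) lies inside the box."""
    x1, y1, x2, y2 = box
    left, top = x - x1, y - y1
    right, bottom = x2 - x, y2 - y
    return left, top, right, bottom  # the (l, t, r, b) target vector
```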
Detecting tiny objects is a very challenging problem, since a tiny object contains only a few pixels. We demonstrate that state-of-the-art detectors do not produce satisfactory results on tiny objects due to the lack of appearance information. Our key observation is that Intersection over Union (IoU)-based metrics, such as IoU itself and its extensions, are very sensitive to the location deviation of tiny objects and drastically deteriorate detection performance when used in anchor-based detectors. To alleviate this, we propose a new evaluation metric using the Wasserstein distance for tiny object detection. Specifically, we first model the bounding boxes as 2D Gaussian distributions and then propose a new metric, dubbed Normalized Wasserstein Distance (NWD), to compute the similarity between them through the corresponding Gaussian distributions. The proposed NWD metric can be easily embedded into the assignment, non-maximum suppression, and loss function of any anchor-based detector to replace the commonly used IoU metric. We evaluate our metric on a new dataset for tiny object detection (AI-TOD), in which the average object size is much smaller than in existing object detection datasets. Extensive experiments show that, when equipped with the NWD metric, our approach performs 6.7 AP points better than the standard fine-tuning baseline and 6.0 AP points better than state-of-the-art competitors. Code is available at: https://github.com/jwwangchn/nwd.
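A sketch under my reading of the NWD formulation: a box (cx, cy, w, h) is modeled as a 2D Gaussian with mean (cx, cy) and covariance diag(w²/4, h²/4), for which the second-order Wasserstein distance between two such Gaussians has the closed form below. The dataset-dependent constant C is an assumption here, not a spec.

```python
import math

def nwd(box_a, box_b, C=12.8):
    """Normalized Wasserstein Distance sketch for (cx, cy, w, h) boxes.
    Returns a similarity in (0, 1]; C is a dataset-dependent constant
    (the value used here is an illustrative assumption)."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    # Squared 2nd-order Wasserstein distance between the two Gaussians:
    # center offset plus the difference of the half-widths/half-heights.
    w2_sq = ((ax - bx) ** 2 + (ay - by) ** 2
             + ((aw - bw) / 2) ** 2 + ((ah - bh) / 2) ** 2)
    # Exponential normalization maps the distance into a (0, 1] similarity.
    return math.exp(-math.sqrt(w2_sq) / C)
```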
Anchor-free detectors basically formulate object detection as dense classification and regression. For popular anchor-free detectors, it is common to introduce an individual prediction branch to estimate the quality of localization. When we delve into the practices of classification and quality estimation, the following inconsistencies are observed. Firstly, for some adjacent samples that are assigned completely different labels, the trained model produces similar classification scores, which violates the training objective and leads to performance degradation. Secondly, detected bounding boxes with higher confidence are found to have smaller overlaps with the corresponding ground truth, so accurately localized bounding boxes get suppressed by less accurate ones in the non-maximum suppression (NMS) procedure. To address these inconsistency problems, the Dynamic Smooth Label Assignment (DSLA) method is proposed. Based on the centerness concept originally developed in FCOS, a smooth assignment strategy is proposed: the label is smoothed to a continuous value in [0, 1] to make a steady transition between positive and negative samples. IoU is predicted dynamically during training and coupled with the smoothed label, and the resulting dynamic smooth label is assigned to supervise the classification branch. Under such supervision, the quality-estimation branch is naturally merged into the classification branch, which simplifies the architecture of anchor-free detectors. Comprehensive experiments are conducted on the MS COCO benchmark, demonstrating that DSLA can significantly boost detection accuracy by alleviating the above inconsistencies for anchor-free detectors. Our code is released at https://github.com/yonghaohe/dsla.
Modern leading object detectors are either two-stage or one-stage networks repurposed from a deep CNN backbone classifier network. YOLOv3 is one such well-known state-of-the-art single-shot detector, which takes an input image and divides it into an equal-sized grid matrix; the grid cell containing an object's center is the cell responsible for detecting that particular object. This paper presents a new mathematical approach that assigns multiple grid cells per object, for accurate tight-fit bounding box prediction. We also propose an effective offline copy-paste data augmentation for object detection. Our proposed method significantly outperforms some current state-of-the-art object detectors, with the prospect of even better performance.
Recent leading approaches to semantic segmentation rely on deep convolutional networks trained with human-annotated, pixel-level segmentation masks. Such pixel-accurate supervision demands expensive labeling effort and limits the performance of deep networks that usually benefit from more training data. In this paper, we propose a method that achieves competitive accuracy but only requires easily obtained bounding box annotations. The basic idea is to iterate between automatically generating region proposals and training convolutional networks. These two steps gradually recover segmentation masks for improving the networks, and vice versa. Our method, called "BoxSup", produces competitive results (e.g., 62.0% mAP for validation) supervised by boxes only, on par with strong baselines (e.g., 63.8% mAP) fully supervised by masks under the same setting. By leveraging a large amount of bounding boxes, BoxSup further unleashes the power of deep convolutional networks and yields state-of-the-art results on PASCAL VOC 2012 and PASCAL-CONTEXT [24].
Labeling data is often expensive and time-consuming, especially for tasks such as object detection and instance segmentation, which require dense labeling of images. While few-shot object detection is about training a model on novel (unseen) object classes with little data, it still requires prior training on many labeled examples of base (seen) classes. On the other hand, self-supervised methods aim at learning representations from unlabeled data that transfer to downstream tasks such as object detection. Combining few-shot and self-supervised object detection is a promising research direction. In this survey, we review and characterize the most recent approaches to few-shot and self-supervised object detection, then give our main takeaways and discuss future research directions. Project page: https://gabrielhuang.github.io/fsod-survey/
Learning accurate object detectors often requires large-scale training data with precise object bounding boxes. However, labeling such data is expensive and time-consuming. Since the crowd-sourced labeling process and the ambiguities of objects may give rise to noisy bounding-box annotations, object detectors suffer from such degenerated training data. In this work, we aim to address the challenge of learning robust object detectors with inaccurate bounding boxes. Inspired by the fact that localization precision suffers significantly from inaccurate boxes while classification accuracy is much less affected, we propose leveraging classification as a guidance signal for refining localization results. Specifically, by treating an object as a bag of instances, we introduce an Object-Aware Multiple Instance Learning approach (OA-MIL), featuring object-aware instance selection and object-aware instance extension. The former aims to select accurate instances for training instead of directly using the inaccurate box annotations, while the latter focuses on generating high-quality instances for selection. Extensive experiments on synthetic noisy datasets (i.e., noisy PASCAL VOC and MS-COCO) and a real noisy wheat-head dataset demonstrate the effectiveness of our OA-MIL. Code is available at https://github.com/cxliu0/oa-mil.