In this report, we present PP-YOLOE, an industrial state-of-the-art object detector with high performance and a deployment-friendly design. Building on the previous PP-YOLOv2, we adopt an anchor-free paradigm, a more powerful backbone and neck equipped with CSPRepResStage, an ET-head, and the dynamic label assignment algorithm TAL. We provide s/m/l/x models for different practical scenarios. As a result, PP-YOLOE-l achieves 51.4 mAP on COCO test-dev and 78.1 FPS on Tesla V100, a remarkable improvement of (+1.9 AP, +13.35% speedup) over PP-YOLOv2 and (+1.3 AP, +24.96% speedup) over YOLOX, the previous state-of-the-art industrial models. Further, PP-YOLOE reaches 149.2 FPS with TensorRT and FP16 precision. We also conduct extensive experiments to verify the effectiveness of our designs. Source code and pre-trained models are available at https://github.com/PaddlePaddle/PaddleDetection.
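PP-YOLOE's TAL comes from TOOD's task-aligned assignment, which ranks the candidate anchors for each ground truth by a metric coupling classification score and IoU. Below is a minimal sketch of that metric, assuming per-anchor score s and IoU u with TOOD-style hyper-parameters; the function and variable names are illustrative, not PP-YOLOE's actual code.

    import numpy as np

    def tal_align_metric(cls_scores, ious, alpha=1.0, beta=6.0, topk=13):
        """Task-aligned assignment sketch: rank anchors for one ground-truth
        box by t = s^alpha * u^beta and keep the top-k as positive candidates.
        cls_scores: (num_anchors,) predicted scores for the GT's class
        ious:       (num_anchors,) IoU of each predicted box with the GT box
        """
        t = (cls_scores ** alpha) * (ious ** beta)  # alignment metric
        candidates = np.argsort(t)[::-1][:topk]     # most-aligned anchors first
        return candidates, t

    # toy usage: 5 anchors, one ground truth
    scores = np.array([0.9, 0.2, 0.7, 0.4, 0.1])
    ious = np.array([0.8, 0.9, 0.3, 0.6, 0.2])
    pos, t = tal_align_metric(scores, ious, topk=2)
    print(pos)  # anchors whose score and IoU are jointly high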
For years, the YOLO series has been the de facto industry-level standard for efficient object detection. The YOLO community has prospered overwhelmingly, enriching its use on a multitude of hardware platforms and in abundant scenarios. In this technical report, we strive to push its limits to the next level, stepping forward with an unwavering mindset for industry application. Considering the diverse requirements for speed and accuracy in real environments, we extensively examine up-to-date object detection advancements from both industry and academia. Specifically, we heavily assimilate ideas from recent network design, training strategies, testing techniques, quantization, and optimization methods. On top of this, we integrate our thoughts and practice to build a suite of deployment-ready networks at various scales to accommodate diversified use cases. With the generous permission of the YOLO authors, we name it YOLOv6. We also express our warm welcome to users and contributors for further enhancement. For a glimpse of performance, our YOLOv6-N hits 35.9% AP on the COCO dataset at a throughput of 1234 FPS on an NVIDIA Tesla T4 GPU. YOLOv6-S strikes 43.5% AP at 495 FPS, outperforming other mainstream detectors at the same scale (YOLOv5-S, YOLOX-S, and PP-YOLOE-S). Our quantized version of YOLOv6-S even brings a new state-of-the-art 43.3% AP at 869 FPS. Furthermore, YOLOv6-M/L achieves better accuracy (i.e., 49.5%/52.3%) than other detectors with similar inference speed. We carefully conducted experiments to validate the effectiveness of each component. Our code is available at https://github.com/meituan/yolov6.
Achieving a better trade-off between accuracy and efficiency is a challenging problem in object detection. In this work, we are dedicated to studying key optimizations and neural network architecture choices for object detection to improve accuracy and efficiency. We investigate the applicability of the anchor-free strategy to lightweight object detection models. We enhance the backbone structure and design a lightweight neck, which improves the feature extraction ability of the network. We improve the label assignment strategy and loss function to make training more stable and efficient. Through these optimizations, we create a new family of real-time object detectors, named PP-PicoDet, which achieves superior performance on object detection for mobile devices. PicoDet-S, with only 0.99M parameters, achieves 30.6% mAP, an absolute 4.8% mAP improvement while reducing mobile CPU inference latency by 55% compared with YOLOX-Nano, and an absolute 7.1% mAP improvement compared with NanoDet. It reaches 123 FPS on a mobile ARM CPU (using Paddle Lite) with an input size of 320. PicoDet-L, with only 3.3M parameters, achieves 40.9% mAP, an absolute 3.7% mAP improvement while being 44% faster than YOLOv5s. As shown in Figure 1, our models far outperform the state-of-the-art results for lightweight object detection. Code and pre-trained models are available at https://github.com/PaddlePaddle/PaddleDetection.
Arbitrary-oriented object detection is a fundamental task in visual scenes involving aerial images and scene text. In this report, we present PP-YOLOE-R, an efficient anchor-free rotated object detector based on PP-YOLOE. We introduce a bag of useful tricks in PP-YOLOE-R to improve detection precision with marginal extra parameters and computational cost. As a result, PP-YOLOE-R-l and PP-YOLOE-R-x achieve 78.14 and 78.28 mAP respectively on the DOTA 1.0 dataset with single-scale training and testing, outperforming almost all other rotated object detectors. With multi-scale training and testing, PP-YOLOE-R-l and PP-YOLOE-R-x further improve detection precision to 80.02 and 80.73 mAP. In this setting, PP-YOLOE-R-x surpasses all anchor-free methods and demonstrates performance competitive with state-of-the-art anchor-based two-stage models. Further, PP-YOLOE-R is deployment-friendly, and PP-YOLOE-R-s/m/l/x can reach 69.8/55.1/48.3/37.1 FPS respectively on an RTX 2080 Ti with TensorRT and FP16 precision. Source code and pre-trained models are available at https://github.com/PaddlePaddle/PaddleDetection, which is powered by https://github.com/PaddlePaddle/Paddle.
In this paper, we aim to design an efficient real-time object detector that exceeds the YOLO series and is easily extensible for many object recognition tasks such as instance segmentation and rotated object detection. To obtain a more efficient model architecture, we explore an architecture that has compatible capacities in the backbone and neck, constructed by a basic building block that consists of large-kernel depth-wise convolutions. We further introduce soft labels when calculating matching costs in the dynamic label assignment to improve accuracy. Together with better training techniques, the resulting object detector, named RTMDet, achieves 52.8% AP on COCO with 300+ FPS on an NVIDIA 3090 GPU, outperforming the current mainstream industrial detectors. RTMDet achieves the best parameter-accuracy trade-off with tiny/small/medium/large/extra-large model sizes for various application scenarios, and obtains new state-of-the-art performance on real-time instance segmentation and rotated object detection. We hope the experimental results can provide new insights into designing versatile real-time object detectors for many object recognition tasks. Code and models are released at https://github.com/open-mmlab/mmdetection/tree/3.x/configs/rtmdet.
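The soft-label idea replaces the binary classification target in the matching cost with the prediction's IoU, so a well-localized but under-scored box can still be assigned. Below is a hedged sketch of one soft classification cost of this form, following the description in the RTMDet paper; the names and toy usage are ours.

    import math

    def soft_cls_cost(p, iou, eps=1e-9):
        """Soft classification matching cost: cross-entropy against the IoU
        used as a soft target, re-weighted by the score-quality gap, so the
        cost vanishes when the score already matches the box quality.
        p:   predicted probability for the ground truth's category
        iou: IoU of the predicted box with the ground-truth box (soft label)
        """
        bce = -(iou * math.log(p + eps) + (1.0 - iou) * math.log(1.0 - p + eps))
        return bce * (iou - p) ** 2

    print(soft_cls_cost(0.30, 0.90))  # well localized but under-scored: penalized
    print(soft_cls_cost(0.85, 0.90))  # score tracks quality: near-zero cost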
In this report, we present a fast and accurate object detection method dubbed DAMO-YOLO, which achieves higher performance than the state-of-the-art YOLO series. DAMO-YOLO is extended from YOLO with some new technologies, including Neural Architecture Search (NAS), an efficient Reparameterized Generalized-FPN (RepGFPN), a lightweight head with AlignedOTA label assignment, and distillation enhancement. In particular, we use MAE-NAS, a method guided by the principle of maximum entropy, to search our detection backbone under the constraints of low latency and high performance, producing ResNet-like / CSP-like structures with spatial pyramid pooling and focus modules. In the design of necks and heads, we follow the rule of "large neck, small head". We import Generalized-FPN with accelerated queen-fusion to build the detector neck and upgrade its CSPNet with efficient layer aggregation networks (ELAN) and reparameterization. Then we investigate how detector head size affects detection performance and find that a heavy neck with only one task projection layer yields better results. In addition, AlignedOTA is proposed to solve the misalignment problem in label assignment, and a distillation scheme is introduced to raise performance to a higher level. Based on these new techniques, we build a suite of models at various scales to meet the needs of different scenarios, i.e., DAMO-YOLO-Tiny/Small/Medium. They achieve 43.0/46.8/50.0 mAP on COCO with latencies of 2.78/3.83/5.62 ms on T4 GPUs respectively. The code is available at https://github.com/tinyvision/damo-yolo.
YOLOv7 surpasses all known object detectors in both speed and accuracy in the range from 5 FPS to 160 FPS, and has the highest accuracy, 56.8% AP, among all known real-time object detectors with 30 FPS or higher on GPU V100. The YOLOv7-E6 object detector (56 FPS V100, 55.9% AP) outperforms both the transformer-based detector SWIN-L Cascade-Mask R-CNN (9.2 FPS A100, 53.9% AP) by 509% in speed and 2% in accuracy, and the convolution-based detector ConvNeXt-XL Cascade-Mask R-CNN (8.6 FPS A100, 55.2% AP) by 551% in speed and 0.7% AP in accuracy. YOLOv7 also outperforms YOLOR, YOLOX, Scaled-YOLOv4, YOLOv4, YOLOv5, DETR, Deformable DETR, DINO-5scale-R50, ViT-Adapter-B, and many other object detectors in speed and accuracy. Moreover, we train YOLOv7 from scratch without using any other datasets or pre-trained weights. Source code is released at https://github.com/wongkinyiu/yolov7.
Real-time object detection on unmanned aerial vehicles (UAVs) is a challenging problem because of the limited computing resources of edge GPU devices serving as Internet of Things (IoT) nodes. To solve this problem, in this paper we propose a novel lightweight deep learning architecture based on the YOLOX model for real-time object detection on edge GPUs. First, we design an efficient and lightweight PixSF head to replace the original head of YOLOX for better small-object detection; it can further be embedded in a depthwise separable convolution (DS Conv) to achieve an even lighter head. Then, a slimmer structure is developed in the neck layer to reduce network parameters, as a trade-off between accuracy and speed. Furthermore, we embed an attention module in the head layer to improve the feature extraction of the prediction head. Meanwhile, we also improve the label assignment strategy and loss function to alleviate the category imbalance and box optimization problems of UAV datasets. Finally, an auxiliary head is presented for online distillation to improve the position embedding and feature extraction abilities of the PixSF head. The performance of our lightweight model is experimentally validated on the NVIDIA Jetson NX and Jetson Nano GPU embedded platforms. Extensive experiments show that the FasterX model achieves a better trade-off between accuracy and latency on the VisDrone2021 dataset than state-of-the-art models.
Complex underwater environments bring new challenges to object detection, such as unbalanced light conditions, low contrast, occlusion, and mimicry by aquatic organisms. In these circumstances, objects captured by underwater cameras become blurred, and generic detectors often fail on such blurred objects. This work aims to solve the problem from two perspectives: uncertainty modeling and hard example mining. We propose a two-stage underwater detector named Boosting R-CNN, which comprises three key components. First, a new region proposal network named RetinaRPN is proposed, which provides high-quality proposals and considers objectness and IoU prediction jointly to model the uncertainty of the object prior probability. Second, a probabilistic inference pipeline is introduced to combine the first-stage prior uncertainty with the second-stage classification score to model the final detection score. Finally, we propose a new hard example mining method named boosting reweighting. Specifically, when the region proposal network misestimates the object prior probability of a sample, boosting reweighting increases the classification loss of that sample in the R-CNN head during training, while reducing the loss of easy samples whose priors are accurately estimated. A robust detection head can thus be obtained in the second stage, and at inference time the R-CNN head is able to rectify errors of the first stage to improve performance. Comprehensive experiments on two underwater datasets and two generic object detection datasets demonstrate the effectiveness and robustness of our method.
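A plausible reading of the probabilistic pipeline and boosting reweighting described above is sketched below; the exact formulas are the paper's, and these helpers are only illustrative. The final score factorizes as P(object) x P(class | object), and the R-CNN classification loss is up-weighted where the first-stage prior was wrong.

    def final_score(prior, cls_score):
        """Combine stages as P(class) = P(object) * P(class | object): the
        first-stage objectness prior multiplies the second-stage class score."""
        return prior * cls_score

    def boosting_weight(prior, is_positive, gamma=1.0):
        """Boosting-reweighting sketch: grow the R-CNN classification loss
        where the RPN prior was wrong (a missed positive or an over-confident
        negative) and shrink it where the prior was already accurate."""
        err = (1.0 - prior) if is_positive else prior
        return err ** gamma

    print(final_score(0.9, 0.8))       # 0.72
    print(boosting_weight(0.2, True))  # hard positive the RPN missed: 0.8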
In this paper, we introduce an anchor-box free and single-shot instance segmentation method, which is conceptually simple, fully convolutional, and can be used by easily embedding it into most off-the-shelf detection methods. Our method, termed PolarMask, formulates the instance segmentation problem as predicting the contour of an instance through instance center classification and dense distance regression in polar coordinates. Moreover, we propose two effective approaches to deal with sampling high-quality center examples and optimization for dense distance regression, respectively, which can significantly improve the performance and simplify the training process. Without any bells and whistles, PolarMask achieves 32.9% mask mAP with single-model and single-scale training/testing on the challenging COCO dataset. For the first time, we show that the complexity of instance segmentation, in terms of both design and computation, can be the same as bounding box object detection, and that this much simpler and more flexible instance segmentation framework can achieve competitive accuracy. We hope that the proposed PolarMask framework can serve as a fundamental and strong baseline for the single-shot instance segmentation task. Code is available at: github.com/xieenze/PolarMask.
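Decoding a PolarMask prediction back to a contour is straightforward: the mask is represented by n ray lengths at uniform angles around the instance center (the paper uses n = 36). A minimal sketch:

    import math

    def polar_to_contour(center, distances):
        """Decode a PolarMask-style instance: n rays at uniform angles from
        the instance center, each with a regressed length (the paper uses
        n = 36 rays, i.e. one every 10 degrees).
        """
        cx, cy = center
        n = len(distances)
        return [(cx + d * math.cos(2 * math.pi * k / n),
                 cy + d * math.sin(2 * math.pi * k / n))
                for k, d in enumerate(distances)]

    # toy usage: a square-ish contour from 4 rays
    print(polar_to_contour((10.0, 10.0), [5.0, 5.0, 5.0, 5.0]))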
Anchor-free detectors basically formulate object detection as dense classification and regression. Popular anchor-free detectors commonly introduce an individual prediction branch to estimate localization quality. When we delve into the practices of classification and quality estimation, the following inconsistencies are observed. Firstly, for some adjacent samples that are assigned completely different labels, the trained model produces similar classification scores; this violates the training objective and leads to performance degradation. Secondly, detected bounding boxes with higher confidence are found to have smaller overlaps with the corresponding ground truth, so accurately localized boxes can be suppressed by less accurate ones during non-maximum suppression (NMS). To address these inconsistencies, the Dynamic Smooth Label Assignment (DSLA) method is proposed. Based on the concept of centerness originally developed in FCOS, a smooth assignment strategy is proposed: the label is smoothed to a continuous value in [0, 1] to make a steady transition between positive and negative samples. The Intersection over Union (IoU) is predicted dynamically during training and coupled with the smoothed label, and the resulting dynamic smooth label is assigned to supervise the classification branch. Under such supervision, the quality estimation branch is naturally merged into the classification branch, which simplifies the architecture of anchor-free detectors. Comprehensive experiments are conducted on the MS COCO benchmark. They demonstrate that DSLA can significantly boost detection accuracy by alleviating the above inconsistencies in anchor-free detectors. Our code is released at https://github.com/yonghaohe/dsla.
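Under this scheme, the classification target at each location is no longer 0 or 1 but a continuous quality value. Below is a minimal sketch of one plausible form, assuming a centerness-like smooth weight coupled with the dynamically predicted IoU; the exact smoothing function is the paper's, and ours is illustrative.

    def dsla_target(smooth_weight, pred_iou):
        """Dynamic smooth label sketch: a centerness-like smooth assignment
        weight in [0, 1] coupled with the IoU predicted during training.
        The product supervises the classification branch directly, so no
        separate quality-estimation branch is needed.
        """
        assert 0.0 <= smooth_weight <= 1.0 and 0.0 <= pred_iou <= 1.0
        return smooth_weight * pred_iou

    print(dsla_target(0.9, 0.8))  # near-center, well-localized: strong positive
    print(dsla_target(0.1, 0.8))  # far from center: smoothly down-weighted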
We analyzed the network structure of real-time object detection models and found that the features in the feature concatenation stage are very rich. Applying an attention module here can effectively improve the detection accuracy of the model. However, commonly used attention modules and self-attention modules perform poorly in both detection accuracy and inference efficiency. Therefore, we propose a novel self-attention module for the feature concatenation stage of the neck network, called 2D local feature superimposed self-attention. This self-attention module reflects global features through local features and local receptive fields. We also propose and optimize an efficient decoupled head and AB-OTA, and achieve SOTA results. With our proposed improvements, average precisions of 49.0% (66.2 FPS), 46.1% (80.6 FPS), and 39.1% (100 FPS) are obtained. Our models exceed YOLOv5 by 0.8%-3.1% in average precision.
The query mechanism introduced in the DETR method is changing the paradigm of object detection, and many query-based methods have recently obtained strong object detection performance. However, current query-based detection pipelines suffer from two issues. Firstly, multi-stage decoders are required to optimize the randomly initialized object queries, incurring a large computational burden. Secondly, the queries are fixed after training, leading to unsatisfying generalization capability. To remedy these issues, we present featurized object queries predicted by a query generation network in the well-established Faster R-CNN framework, and develop a Featurized Query R-CNN. Extensive experiments on the COCO dataset show that our Featurized Query R-CNN obtains the best speed-accuracy trade-off among all R-CNN detectors, including the recent state-of-the-art Sparse R-CNN detector. The code is available at https://github.com/hustvl/featurized-queryrcnn.
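The key idea is that object queries are predicted from image features rather than learned as fixed embeddings. Below is a hedged sketch of one such query generation step, assuming queries are taken from the top-scoring feature locations; this is our reading of the abstract, not the paper's exact query generation network.

    import numpy as np

    def generate_queries(feat, scores, k=100):
        """Featurized-query sketch: build object queries from the image's own
        features by keeping the k top-scoring locations, instead of learning
        fixed query embeddings that need many decoder stages to refine.
        feat:   (N, C) flattened feature map
        scores: (N,)   objectness scores over the same locations
        """
        topk = np.argsort(scores)[::-1][:k]
        return feat[topk]  # (k, C) content-aware initial queries

    # toy usage: 6 locations with 4-dim features, keep the best 2
    feat = np.random.rand(6, 4)
    scores = np.array([0.1, 0.9, 0.3, 0.8, 0.2, 0.4])
    print(generate_queries(feat, scores, k=2).shape)  # (2, 4)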
Modern object detection networks pursue higher precision on general object detection datasets, but the computational burden grows along with the precision. Nevertheless, both inference time and precision are critical for object detection systems that need to run in real time, so it is necessary to study precision improvements that come without extra computational cost. In this work, two modules are proposed to improve detection precision at zero cost, targeting the FPN and the detection head of general object detection networks. We employ a scale attention mechanism to efficiently fuse multi-level feature maps with fewer parameters, termed the SA-FPN module. Considering the correlation between the classification head and the regression head, we use a sequential head in place of the widely used parallel head, termed the Seq-Head module. To evaluate their effectiveness, we apply the two modules to several modern state-of-the-art object detection networks, both anchor-based and anchor-free. Experimental results on the COCO dataset show that networks with the two modules surpass the original networks by 1.1 AP and 0.8 AP, at zero cost, for anchor-based and anchor-free networks respectively. Code will be available at https://git.io/jtfgl.
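One common form of scale attention is to learn per-level weights and take a weighted sum of pyramid features resized to a common resolution; SA-FPN may differ in detail, so the sketch below is only an assumption-labeled illustration.

    import numpy as np

    def scale_attention_fuse(feats, logits):
        """Scale-attention fusion sketch: softmax a scalar logit per pyramid
        level (e.g. obtained by global pooling plus a tiny fc) and take the
        weighted sum of the level features.
        feats:  list of (C, H, W) arrays, already resized to a common H, W
        logits: (num_levels,) per-level attention logits
        """
        w = np.exp(logits - np.max(logits))
        w = w / w.sum()  # softmax over pyramid levels
        return sum(wi * f for wi, f in zip(w, feats))

    # toy usage: fuse three levels of 8-channel 4x4 maps
    feats = [np.random.rand(8, 4, 4) for _ in range(3)]
    print(scale_attention_fuse(feats, np.array([0.5, 1.0, -0.2])).shape)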
Since many safety-critical systems, such as surgical robots and autonomous cars, operate in unstable environments with sensor noise and incomplete data, it is desirable for object detectors to take localization uncertainty into account. However, existing uncertainty estimation methods for anchor-based object detection have several limitations. 1) They model the uncertainty of heterogeneous object properties with different characteristics and scales, such as location (center point) and scale (width, height), which can be difficult to estimate. 2) They model box offsets as Gaussian distributions, which is incompatible with ground-truth bounding boxes that follow a Dirac delta distribution. 3) Since anchor-based methods are sensitive to anchor hyper-parameters, their localization uncertainty can also be highly sensitive to the choice of these hyper-parameters. To address these limitations, we propose a new localization uncertainty estimation method, called UAD, for anchor-free object detection. Our method captures the uncertainty of the four directions of box offsets (left, right, top, bottom) uniformly, so it can tell which direction is uncertain, and provides a quantitative uncertainty value in [0, 1]. To enable such uncertainty estimation, we design a new uncertainty loss, the negative power log-likelihood loss, which measures localization uncertainty by weighting the likelihood loss by its IoU, alleviating the model misspecification problem. Furthermore, we propose an uncertainty-aware focal loss that reflects the estimated uncertainty in the classification score. Experimental results on the COCO dataset demonstrate that our method significantly improves FCOS by up to 1.8 points without sacrificing computational efficiency.
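Reading "weighting the likelihood loss by its IoU" literally, the negative power log-likelihood for one direction is -log N(target; mu, sigma)^IoU = -IoU * log N(...). A minimal sketch under that reading (the names are ours):

    import math

    def npll_loss(mu, sigma, target, iou, eps=1e-9):
        """Negative power log-likelihood sketch for one box direction
        (left / right / top / bottom): a Gaussian negative log-likelihood
        whose weight is the predicted box's IoU with the ground truth.
        """
        nll = 0.5 * math.log(2.0 * math.pi * sigma ** 2 + eps) \
              + (target - mu) ** 2 / (2.0 * sigma ** 2 + eps)
        return iou * nll  # well-localized boxes drive the uncertainty fit

    print(npll_loss(mu=4.0, sigma=0.5, target=4.2, iou=0.8))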
We propose a fully convolutional one-stage object detector (FCOS) to solve object detection in a per-pixel prediction fashion, analogous to semantic segmentation. Almost all state-of-the-art object detectors such as RetinaNet, SSD, YOLOv3, and Faster R-CNN rely on pre-defined anchor boxes. In contrast, our proposed detector FCOS is anchor-box free, as well as proposal free. By eliminating the predefined set of anchor boxes, FCOS completely avoids the complicated computation related to anchor boxes, such as calculating overlaps during training. More importantly, we also avoid all hyper-parameters related to anchor boxes, which are often very sensitive to the final detection performance. With non-maximum suppression (NMS) as the only post-processing, FCOS with ResNeXt-64x4d-101 achieves 44.7% AP with single-model and single-scale testing, surpassing previous one-stage detectors with the advantage of being much simpler. For the first time, we demonstrate a much simpler and more flexible detection framework achieving improved detection accuracy. We hope that the proposed FCOS framework can serve as a simple and strong alternative for many other instance-level tasks. Code is available at: tinyurl.com/FCOSv1
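Concretely, FCOS regresses, at every foreground location, the four distances to the sides of its ground-truth box, and uses a centerness score to down-weight locations far from the object center. A minimal sketch straight from the paper's target definitions:

    import math

    def fcos_targets(x, y, box):
        """Per-pixel FCOS targets: distances from a location (x, y) inside
        the ground-truth box to its four sides, plus the centerness score
        that down-weights low-quality predictions far from the center.
        """
        x0, y0, x1, y1 = box
        l, t, r, b = x - x0, y - y0, x1 - x, y1 - y  # all positive inside the box
        centerness = math.sqrt((min(l, r) / max(l, r)) * (min(t, b) / max(t, b)))
        return (l, t, r, b), centerness

    print(fcos_targets(50, 40, (20, 20, 100, 80)))  # centered-ish: high centerness
    print(fcos_targets(25, 25, (20, 20, 100, 80)))  # near a corner: low centerness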
We present a conceptually simple, flexible, and universal visual perception head for variant visual tasks, e.g., classification, object detection, instance segmentation, and pose estimation, and for different frameworks, such as one-stage or two-stage pipelines. Our approach effectively identifies an object in an image while simultaneously generating a high-quality bounding box, a contour-based segmentation mask, or a set of keypoints. The method, called UniHead, views different visual perception tasks as dispersible points learned via a transformer encoder architecture. Given a fixed spatial coordinate, UniHead adaptively scatters it to different spatial points and reasons about their relations. It directly outputs the final set of predictions in the form of multiple points, allowing us to perform different visual tasks in different frameworks with the same head design. We show extensive evaluations on ImageNet classification and on all three tracks of the COCO suite of challenges, including object detection, instance segmentation, and pose estimation. Without bells and whistles, UniHead can unify these visual tasks via a single head design and achieve performance comparable to expert models developed for each task. We hope our simple and universal UniHead will serve as a solid baseline and help promote universal visual perception research. Code and models are available at https://github.com/sense-x/unihead.
Existing instance segmentation methods have achieved impressive performance but still suffer from a common dilemma: redundant representations (e.g., multiple boxes, grids, and anchor points) are inferred for one instance, which leads to many duplicated predictions. Mainstream methods therefore usually rely on a hand-designed non-maximum suppression (NMS) post-processing step to select the optimal prediction result, which hinders end-to-end training. To address this issue, we propose a box-free and NMS-free end-to-end instance segmentation framework, termed UniInst, that yields only one unique representation for each instance. Specifically, we design an instance-aware one-to-one assignment scheme, namely Only Yield One Representation (OYOR), which dynamically assigns one unique representation to each instance according to the matching quality between predictions and ground truths. Then, a novel prediction re-ranking strategy is elegantly integrated into the framework to address the misalignment between classification score and mask quality, making the learned representations more discriminative. With these techniques, our UniInst, the first FCN-based box-free and NMS-free instance segmentation framework, achieves competitive performance against mainstream methods on COCO test-dev with ResNet-50-FPN, reaching 40.2 mask AP with ResNet-101-FPN. Moreover, the proposed instance-aware method is robust to occlusion scenes, outperforming common baselines by a remarkable mask AP margin on the heavily occluded OCHuman benchmark. Our code will be available upon publication.
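The OYOR idea is that each ground-truth instance keeps exactly one positive representation, chosen by a quality metric mixing classification and mask quality. Below is a hedged sketch, assuming an alpha-weighted geometric mean as the quality metric; the paper's exact metric may differ.

    import numpy as np

    def oyor_assign(cls_scores, mask_ious, alpha=0.8):
        """One-to-one assignment sketch in the spirit of OYOR: score every
        candidate location by a quality mixing classification score and mask
        IoU, then keep a single best representation per instance, removing
        the need for NMS.
        cls_scores, mask_ious: (num_locations,) for one ground-truth instance
        """
        quality = cls_scores ** alpha * mask_ious ** (1.0 - alpha)
        return int(np.argmax(quality))  # the unique positive for this instance

    print(oyor_assign(np.array([0.2, 0.9, 0.6]), np.array([0.9, 0.7, 0.8])))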
Recent one-stage object detectors follow a per-pixel prediction approach that predicts both the object category scores and boundary positions from every single grid location. However, the most suitable positions for inferring different targets, i.e., the object category and boundaries, are generally different. Predicting all these targets from the same grid location thus may lead to sub-optimal results. In this paper, we analyze the suitable inference positions for object category and boundaries, and propose a prediction-target-decoupled detector named PDNet to establish a more flexible detection paradigm. Our PDNet with the prediction decoupling mechanism encodes different targets separately in different locations. A learnable prediction collection module is devised with two sets of dynamic points, i.e., dynamic boundary points and semantic points, to collect and aggregate the predictions from the favorable regions for localization and classification. We adopt a two-step strategy to learn these dynamic point positions, where the prior positions are estimated for different targets first, and the network further predicts residual offsets to the positions with better perceptions of the object properties. Extensive experiments on the MS COCO benchmark demonstrate the effectiveness and efficiency of our method. With a single ResNeXt-64x4d-101-DCN as the backbone, our detector achieves 50.1 AP with single-scale testing, which outperforms the state-of-the-art methods by an appreciable margin under the same experimental settings. Moreover, our detector is highly efficient as a one-stage framework. Our code is public at https://github.com/yangli18/PDNet.
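The prediction-collection step can be pictured as sampling a dense prediction map at a set of learned dynamic points and aggregating the samples. A minimal sketch with nearest-neighbor sampling (a real implementation would use bilinear interpolation, and the plain-mean aggregation here is our assumption):

    import numpy as np

    def collect_at_points(pred_map, points):
        """Prediction-collection sketch: sample a dense (C, H, W) prediction
        map at dynamic point positions and aggregate the samples for one
        target (nearest-neighbor sampling and a plain mean, for brevity).
        """
        C, H, W = pred_map.shape
        samples = [pred_map[:, min(max(int(round(y)), 0), H - 1),
                            min(max(int(round(x)), 0), W - 1)]
                   for x, y in points]
        return np.mean(samples, axis=0)  # (C,) aggregated prediction

    # toy usage: 3 semantic points over an 8-channel 16x16 map
    pred = np.random.rand(8, 16, 16)
    print(collect_at_points(pred, [(3.2, 4.7), (10.1, 9.9), (15.8, 0.4)]).shape)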