In recent years, object detection has achieved large performance improvements, but the detection of small objects is still unsatisfactory. This work proposes a strategy based on feature fusion and dilated convolution, employing dilated convolution to broaden the receptive field of feature maps at various scales in order to address this issue. On the one hand, it improves the detection accuracy of larger objects; on the other hand, it provides more contextual information for small objects, which is beneficial to improving their detection accuracy. The shallow semantic information of small objects is obtained by filtering out the noise in the feature map, and more feature information of small objects is preserved by using a multi-scale feature fusion module and an attention mechanism. Fusing this shallow feature information with deep semantic information generates richer feature maps for small object detection. Experiments show that this method achieves higher accuracy than the traditional YOLOv3 network in detecting small and occluded objects. In addition, we achieve 32.8% mean average precision on small object detection on the MS COCO 2017 test set. For 640x640 input, this method reaches 88.76% mAP on the PASCAL VOC 2012 dataset.
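The receptive-field broadening described above can be illustrated with a small PyTorch sketch: parallel dilated 3x3 convolutions enlarge the receptive field at a fixed resolution and are then fused back together. This is only an illustrative module under assumed channel sizes and dilation rates, not the paper's implementation.

```python
import torch
import torch.nn as nn

class DilatedContextBlock(nn.Module):
    """Toy context module: parallel 3x3 convolutions with different
    dilation rates enlarge the receptive field without downsampling,
    then the branches are fused back to the input channel count."""
    def __init__(self, channels: int, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(channels, channels, kernel_size=3,
                          padding=d, dilation=d, bias=False),
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True),
            )
            for d in dilations
        ])
        # 1x1 convolution fuses the concatenated branches.
        self.fuse = nn.Conv2d(channels * len(dilations), channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))

if __name__ == "__main__":
    feat = torch.randn(1, 256, 40, 40)           # a mid-level feature map
    print(DilatedContextBlock(256)(feat).shape)  # torch.Size([1, 256, 40, 40])
```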
In recent years, face detection algorithms based on deep learning have made great progress. These algorithms can generally be divided into two categories: two-stage detectors such as Faster R-CNN, and one-stage detectors such as YOLO. One-stage detectors are widely used in many applications because of their better balance between accuracy and speed. In this paper, we propose a real-time face detector based on the one-stage detector YOLOv5, named YOLO-FaceV2. We design a receptive field enhancement module called RFE to enhance the receptive field of small faces, and use an NWD loss to compensate for the sensitivity of IoU to location deviations of tiny objects. For face occlusion, we propose an attention module named SEAM and introduce a repulsion loss to address it. In addition, we use a weighting function, Slide, to address the imbalance between easy and hard samples, and use the information of the effective receptive field to design the anchors. Experimental results on the WIDER FACE dataset show that our face detector and its variants outperform their counterparts on the easy, medium, and hard subsets. Source code: https://github.com/krasjet-yu/yolo-facev2
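The NWD loss mentioned above refers to the Normalized Wasserstein Distance, which models each box as a 2D Gaussian and compares Gaussians instead of IoU, so small location deviations of tiny boxes are penalized smoothly. The sketch below follows the commonly published formulation; the box format (cx, cy, w, h), the constant C, and the use of 1 - NWD as the loss are assumptions rather than details taken from the YOLO-FaceV2 code.

```python
import torch

def nwd_loss(pred: torch.Tensor, target: torch.Tensor, C: float = 12.8) -> torch.Tensor:
    """Normalized Wasserstein Distance loss for boxes in (cx, cy, w, h) format.

    Each box is modeled as a 2D Gaussian N([cx, cy], diag(w^2/4, h^2/4));
    the squared 2-Wasserstein distance between two such Gaussians reduces to
    the squared Euclidean distance between (cx, cy, w/2, h/2) vectors.
    C is a dataset-dependent normalization constant (assumed here).
    """
    p = torch.stack([pred[..., 0], pred[..., 1], pred[..., 2] / 2, pred[..., 3] / 2], dim=-1)
    t = torch.stack([target[..., 0], target[..., 1], target[..., 2] / 2, target[..., 3] / 2], dim=-1)
    w2 = ((p - t) ** 2).sum(dim=-1)           # squared Wasserstein distance
    nwd = torch.exp(-torch.sqrt(w2) / C)      # similarity in (0, 1]
    return (1.0 - nwd).mean()                 # loss: 0 when boxes coincide

if __name__ == "__main__":
    pred = torch.tensor([[10.0, 10.0, 8.0, 6.0]])
    gt = torch.tensor([[11.0, 10.0, 8.0, 8.0]])
    print(nwd_loss(pred, gt))
```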
Traffic sign detection is a challenging task for unmanned driving systems, especially regarding the detection of multi-scale targets and the real-time requirements of detection. During traffic sign detection, the scale of the targets varies greatly, which affects detection accuracy. Feature pyramids are widely used to address this problem, but they may break the feature consistency across different scales of traffic signs. Moreover, in practical applications, it is difficult for commonly used methods to improve the detection accuracy of multi-scale traffic signs while ensuring real-time detection. In this paper, we propose an improved feature pyramid model, named AF-FPN, which utilizes an adaptive attention module (AAM) and a feature enhancement module (FEM) to reduce the information loss during feature map generation and enhance the representation ability of the feature pyramid. We replace the original feature pyramid network in YOLOv5 with AF-FPN, which improves the detection performance of the YOLOv5 network on multi-scale targets while ensuring real-time detection. In addition, a new automatic learning data augmentation method is proposed to enrich the dataset and improve the robustness of the model, making it more suitable for practical scenarios. Extensive experimental results on the Tsinghua-Tencent 100K (TT100K) dataset demonstrate the effectiveness and superiority of the proposed method compared with several state-of-the-art methods.
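The abstract does not give the internal structure of the adaptive attention module (AAM); one plausible reading, sketched below under assumed pooling ratios and channel sizes, is to pool the input at several ratios, restore each context to full resolution, and merge the contexts through learned per-pixel weights.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveAttentionSketch(nn.Module):
    """Illustrative adaptive attention: pool the input at several ratios,
    project each context with a 1x1 conv, upsample back, and merge the
    contexts with spatial weights predicted from their concatenation."""
    def __init__(self, channels: int, ratios=(0.1, 0.2, 0.3)):
        super().__init__()
        self.ratios = ratios
        self.proj = nn.ModuleList(
            [nn.Conv2d(channels, channels, kernel_size=1) for _ in ratios])
        self.weight = nn.Sequential(
            nn.Conv2d(channels * len(ratios), len(ratios), kernel_size=1),
            nn.Softmax(dim=1),   # per-pixel weights over the context branches
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h, w = x.shape[-2:]
        contexts = []
        for r, proj in zip(self.ratios, self.proj):
            size = (max(1, int(h * r)), max(1, int(w * r)))
            c = F.adaptive_avg_pool2d(x, size)
            contexts.append(F.interpolate(proj(c), size=(h, w), mode="bilinear",
                                          align_corners=False))
        weights = self.weight(torch.cat(contexts, dim=1))      # (N, K, H, W)
        fused = sum(weights[:, i:i + 1] * contexts[i] for i in range(len(contexts)))
        return x + fused   # residual connection keeps the original detail

if __name__ == "__main__":
    print(AdaptiveAttentionSketch(256)(torch.randn(1, 256, 40, 40)).shape)
```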
Dunhuang murals blend Chinese and ethnic styles, forming a self-contained school of Chinese Buddhist art with very high historical and cultural value and research significance. The lines of Dunhuang murals are highly general and expressive, reflecting the characters' distinctive personalities and complex inner emotions. Therefore, outline drawings of the murals are of great significance to the research of Dunhuang culture. The contour generation of Dunhuang murals belongs to image edge detection, an important branch of computer vision that aims to extract salient contour information from images. Although convolution-based deep learning networks have achieved good results in image edge extraction by exploring the contextual and semantic features of images, some local detail information is lost as the receptive field enlarges, which makes it impossible for them to generate reasonable outline drawings of murals. In this paper, we propose a novel edge detector based on self-attention combined with convolution to generate line drawings of Dunhuang murals. Compared with existing edge detection methods, firstly, a new residual self-attention and convolution mixed module (Ramix) is proposed to fuse local and global features in feature maps. Secondly, a novel densely connected backbone extraction network is designed to efficiently propagate rich edge feature information from shallow layers into deep layers. It is shown on different public datasets that our method is able to generate sharper and richer edge maps than existing methods. In addition, testing on the Dunhuang mural dataset shows that our method can achieve very competitive performance.
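The abstract describes Ramix only as a residual mixture of self-attention and convolution. A generic sketch of such a block is shown below; the head count, normalization, and additive fusion are assumptions for illustration, not the authors' design.

```python
import torch
import torch.nn as nn

class ConvAttnMixBlock(nn.Module):
    """Residual block mixing a 3x3 convolution branch (local detail) with a
    multi-head self-attention branch (global context) over the flattened map."""
    def __init__(self, channels: int, num_heads: int = 4):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )
        self.norm = nn.LayerNorm(channels)
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        n, c, h, w = x.shape
        local = self.conv(x)
        tokens = self.norm(x.flatten(2).transpose(1, 2))     # (N, H*W, C)
        global_ctx, _ = self.attn(tokens, tokens, tokens)    # self-attention
        global_ctx = global_ctx.transpose(1, 2).reshape(n, c, h, w)
        return x + local + global_ctx                        # residual fusion

if __name__ == "__main__":
    print(ConvAttnMixBlock(64)(torch.randn(1, 64, 32, 32)).shape)
```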
Real-time object detection on unmanned aerial vehicles (UAVs) is a challenging problem because edge GPU devices, as Internet of Things (IoT) nodes, have limited computing resources. To address this problem, in this paper we propose a novel lightweight deep learning architecture based on the YOLOX model for real-time object detection on edge GPUs. First, we design an efficient and lightweight PixSF head to replace the original head of YOLOX for better small-object detection; it can further be embedded with depthwise separable convolution (DS Conv) to achieve an even lighter head. Then, a slimmer structure is developed for the neck layer to reduce network parameters, which is a trade-off between accuracy and speed. Furthermore, we embed an attention module into the head layer to improve the feature extraction of the prediction head. Meanwhile, we also improve the label assignment strategy and the loss function to alleviate the category imbalance and box optimization problems of UAV datasets. Finally, an auxiliary head is proposed for online distillation to improve the position embedding and feature extraction ability of the PixSF head. The performance of our lightweight models is experimentally validated on the NVIDIA Jetson NX and Jetson Nano embedded GPU platforms. Extensive experiments show that, compared with current models, the FasterX models achieve a better trade-off between accuracy and latency on the VisDrone2021 dataset.
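Depthwise separable convolution (DS Conv), used above to lighten the head, factorizes a standard convolution into a per-channel spatial convolution followed by a 1x1 pointwise convolution. A generic PyTorch version (not the FasterX code) looks like this:

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Standard k x k conv factorized into depthwise (groups=in_channels)
    and pointwise (1x1) convolutions, cutting parameters and FLOPs."""
    def __init__(self, in_ch: int, out_ch: int, kernel_size: int = 3, stride: int = 1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size, stride,
                                   padding=kernel_size // 2, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.SiLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

if __name__ == "__main__":
    x = torch.randn(1, 128, 40, 40)
    print(DepthwiseSeparableConv(128, 256)(x).shape)  # torch.Size([1, 256, 40, 40])
```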
The feature pyramid network (FPN) has become an essential module in object detection models for handling objects of various scales. However, the average precision (AP) on small objects is relatively lower than the AP on medium and large objects. The reason is that the deeper layers of a CNN cause information loss as the level of feature extraction increases. We propose a new scale sequence (S^2) feature extraction for FPN to strengthen the feature information of small objects. We consider the FPN structure as a scale space and extract the scale sequence (S^2) feature by 3D convolution along the level axis of the FPN. It is a basically scale-invariant feature and is built on the high-resolution pyramid feature maps for small objects. Furthermore, the proposed S^2 feature can be extended to most FPN-based object detection models. We demonstrate that the proposed S^2 feature improves the performance of both one-stage and two-stage detectors on the MS COCO dataset. With the proposed S^2 feature, we obtain up to 1.3% and 1.1% AP improvement for YOLOv4-P5 and YOLOv4-P6, respectively. For Faster R-CNN and Mask R-CNN, we observe AP improvements of 2.0% and 1.6%, respectively, with the proposed S^2 feature.
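Treating the FPN levels as a third axis and convolving across them can be sketched as below. Resizing every level to the finest resolution and the (levels x 3 x 3) kernel shape are assumptions chosen so the 3D convolution collapses the scale axis in one step.

```python
from typing import List

import torch
import torch.nn as nn
import torch.nn.functional as F

class ScaleSequenceSketch(nn.Module):
    """Stack FPN levels along a 'scale' axis (after resizing to the finest
    level) and apply a 3D convolution across (scale, height, width)."""
    def __init__(self, channels: int, num_levels: int):
        super().__init__()
        # Convolve across all levels at once and collapse the scale axis.
        self.conv3d = nn.Conv3d(channels, channels,
                                kernel_size=(num_levels, 3, 3),
                                padding=(0, 1, 1), bias=False)
        self.bn = nn.BatchNorm3d(channels)

    def forward(self, pyramid: List[torch.Tensor]) -> torch.Tensor:
        target = pyramid[0].shape[-2:]          # finest (highest-resolution) level
        resized = [pyramid[0]] + [
            F.interpolate(p, size=target, mode="nearest") for p in pyramid[1:]
        ]
        volume = torch.stack(resized, dim=2)    # (N, C, L, H, W)
        s2 = torch.relu(self.bn(self.conv3d(volume)))
        return s2.squeeze(2)                    # back to (N, C, H, W)

if __name__ == "__main__":
    feats = [torch.randn(1, 256, 80, 80), torch.randn(1, 256, 40, 40),
             torch.randn(1, 256, 20, 20)]
    print(ScaleSequenceSketch(256, num_levels=3)(feats).shape)
```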
This paper proposes a Parallel Residual Bi-Fusion Feature Pyramid Network (PRB-FPN) for fast and accurate single-shot object detection. Feature pyramids (FP) are widely used in recent visual detection; however, the top-down pathway of an FP cannot preserve accurate localization due to pooling shifts. The advantage of an FP is weakened as deeper backbones with more layers are used. In addition, it cannot accurately detect both small and large objects at the same time. To address these issues, we propose a new parallel FP structure with bi-directional (top-down and bottom-up) fusion and associated improvements to retain high-quality features for accurate localization. We provide the following design improvements: (1) a parallel bifusion FP structure with a bottom-up fusion module (BFM) to detect both small and large objects at once with high precision; (2) a concatenation and re-organization (CORE) module that provides a bottom-up pathway for feature fusion, leading to a bi-directional fusion FP that can recover lost information from lower-layer feature maps; (3) further purification of the CORE features to retain richer contextual information; such CORE purification in the top-down and bottom-up pathways can be completed in only a few iterations; (4) a residual design added to the CORE, leading to a new Re-CORE module that can be easily trained and integrated with deeper or lighter backbones. The proposed network achieves state-of-the-art performance on the UAVDT17 and MS COCO datasets. Code is available at https://github.com/pingyang1117/prbnet_pytorch.
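Independent of the specific BFM and CORE designs, the bi-directional (top-down plus bottom-up) fusion idea can be sketched as follows; the nearest-neighbor upsampling and max-pool downsampling used here are assumptions for illustration.

```python
from typing import List

import torch
import torch.nn as nn
import torch.nn.functional as F

class BiDirectionalFusionSketch(nn.Module):
    """Illustrative bi-directional pyramid fusion: a top-down pass spreads
    semantics to high-resolution maps, then a bottom-up pass feeds fine
    localization cues back to the coarse maps."""
    def __init__(self, channels: int, num_levels: int):
        super().__init__()
        self.smooth = nn.ModuleList(
            [nn.Conv2d(channels, channels, 3, padding=1) for _ in range(num_levels)])

    def forward(self, feats: List[torch.Tensor]) -> List[torch.Tensor]:
        # feats[0] is the finest level, feats[-1] the coarsest.
        # Top-down: upsample coarse maps and add them into finer maps.
        td = list(feats)
        for i in range(len(td) - 2, -1, -1):
            td[i] = td[i] + F.interpolate(td[i + 1], size=td[i].shape[-2:],
                                          mode="nearest")
        # Bottom-up: downsample fine maps and add them into coarser maps.
        out = list(td)
        for i in range(1, len(out)):
            out[i] = out[i] + F.max_pool2d(out[i - 1], kernel_size=2)
        return [conv(o) for conv, o in zip(self.smooth, out)]

if __name__ == "__main__":
    feats = [torch.randn(1, 256, 80, 80), torch.randn(1, 256, 40, 40),
             torch.randn(1, 256, 20, 20)]
    for f in BiDirectionalFusionSketch(256, 3)(feats):
        print(f.shape)
```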
Modern object detection networks pursue higher accuracy on general object detection datasets, but the computational burden also grows along with the accuracy. However, inference time and accuracy are both critical for object detection systems that need to run in real time, so it is necessary to study accuracy improvements that come without extra computational cost. In this work, two modules are proposed to improve detection accuracy at zero cost, which are improvements of the FPN and the detection head of general object detection networks. We employ a scale attention mechanism to efficiently fuse multi-level feature maps with fewer parameters, called the SA-FPN module. Considering the correlation between the classification head and the regression head, we replace the widely used parallel head with a sequential head, called the Seq-Head module. To evaluate their effectiveness, we apply the two modules to several modern state-of-the-art object detection networks, both anchor-based and anchor-free. Experimental results on the COCO dataset show that networks with the two modules surpass the original networks by 1.1 AP and 0.8 AP, at zero cost, for the anchor-based and anchor-free networks, respectively. Code will be available at https://git.io/jtfgl.
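The sequential head idea, letting the regression branch condition on the classification features instead of running two independent towers, can be sketched as below. The exact Seq-Head wiring is not specified in the abstract, so this is only one plausible arrangement.

```python
import torch
import torch.nn as nn

class SequentialHeadSketch(nn.Module):
    """Instead of parallel classification/regression towers, the regression
    branch here reuses the classification features, exploiting the
    correlation between the two sub-tasks."""
    def __init__(self, channels: int, num_classes: int, num_anchors: int = 1):
        super().__init__()
        self.cls_tower = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True))
        self.cls_out = nn.Conv2d(channels, num_anchors * num_classes, 1)
        # The regression tower consumes the classification tower's features.
        self.reg_tower = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True))
        self.reg_out = nn.Conv2d(channels, num_anchors * 4, 1)

    def forward(self, x: torch.Tensor):
        cls_feat = self.cls_tower(x)
        cls_logits = self.cls_out(cls_feat)
        box_deltas = self.reg_out(self.reg_tower(cls_feat))   # sequential, not parallel
        return cls_logits, box_deltas

if __name__ == "__main__":
    cls, box = SequentialHeadSketch(256, num_classes=80)(torch.randn(1, 256, 40, 40))
    print(cls.shape, box.shape)
```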
In current salient object detection networks, the most popular approach is to use a U-shaped structure. However, the large number of parameters leads to higher consumption of computational and memory resources, making deployment on devices with limited memory infeasible. Some shallower networks cannot maintain the same accuracy as U-shaped structures, while deeper networks with more parameters do not converge to the global minimum loss quickly. To overcome these shortcomings, we propose a new deep convolutional network architecture with three contributions: (1) using a smaller convolutional neural network (CNN) to compress the model in our improved salient object detection module, which includes a compression and reinforced extraction module (ISFCREM), to reduce the parameters of the model; (2) introducing a channel attention mechanism into ISFCREM to weight different channels and improve the ability of feature representation; (3) applying a new optimizer that accumulates long-term gradient information during training to adaptively adjust the learning rate. The results show that the proposed method can compress the model to nearly 1/3 of its original size and converges faster, without losing accuracy on six widely used salient object detection datasets, compared with other models. Our code is available at https://gitee.com/binzhangbinzhangbin/code-a-novel-tentent-based-network-for-fast-salient-object-detection.git
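The channel attention inside ISFCREM is described only at a high level; a standard squeeze-and-excitation style channel weighting, shown below, is one common way to realize it and is an assumption rather than the paper's exact module.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention: global average pooling
    summarizes each channel, a small bottleneck MLP predicts per-channel
    weights, and the input is rescaled channel-wise."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        n, c, _, _ = x.shape
        weights = self.fc(x.mean(dim=(2, 3))).view(n, c, 1, 1)
        return x * weights

if __name__ == "__main__":
    print(ChannelAttention(64)(torch.randn(2, 64, 32, 32)).shape)
```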
Recently, the domestic COVID-19 epidemic situation has been serious, yet in some public places some people do not wear masks or wear them incorrectly, which requires the relevant staff to promptly remind and supervise them to wear masks correctly. Faced with such important and complicated work, automatic mask-wearing detection in public places is necessary. This paper proposes a new mask-wearing detection method based on an improved YOLOv4. Specifically, first, we add a coordinate attention module to the backbone to coordinate feature fusion and representation. Second, we carry out a series of network structure improvements to enhance model performance and robustness. Third, we deploy the K-means clustering algorithm to make the nine anchor boxes more suitable for our NPMD dataset. Experimental results show that the improved YOLOv4 performs better, exceeding the baseline by 4.06% AP at a comparable speed of 64.37 FPS.
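K-means anchor clustering is a standard recipe popularized by YOLOv2: cluster the ground-truth box sizes with an IoU-based distance so that the nine anchors match the dataset's shape statistics. A generic NumPy sketch, not tied to the NPMD dataset, is:

```python
import numpy as np

def iou_wh(boxes: np.ndarray, anchors: np.ndarray) -> np.ndarray:
    """IoU between (w, h) pairs, assuming boxes and anchors share a center."""
    inter = np.minimum(boxes[:, None, 0], anchors[None, :, 0]) * \
            np.minimum(boxes[:, None, 1], anchors[None, :, 1])
    union = boxes[:, 0:1] * boxes[:, 1:2] + anchors[None, :, 0] * anchors[None, :, 1] - inter
    return inter / union

def kmeans_anchors(boxes: np.ndarray, k: int = 9, iters: int = 100, seed: int = 0):
    """Cluster (w, h) box sizes with distance = 1 - IoU; medians become anchors."""
    rng = np.random.default_rng(seed)
    anchors = boxes[rng.choice(len(boxes), k, replace=False)]
    for _ in range(iters):
        assign = np.argmax(iou_wh(boxes, anchors), axis=1)   # nearest anchor
        new = np.array([np.median(boxes[assign == i], axis=0) if np.any(assign == i)
                        else anchors[i] for i in range(k)])
        if np.allclose(new, anchors):
            break
        anchors = new
    return anchors[np.argsort(anchors[:, 0] * anchors[:, 1])]  # sort by area

if __name__ == "__main__":
    widths_heights = np.abs(np.random.default_rng(1).normal(60, 25, size=(500, 2))) + 1
    print(kmeans_anchors(widths_heights, k=9))
```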
Although the YOLOv2 approach is very fast for object detection, its detection accuracy is limited by the low performance of its backbone network and the lack of multi-scale region features. Therefore, a YOLO method based on dense connection (DC) and spatial pyramid pooling (SPP), called DC-SPP-YOLO, is proposed in this paper. Specifically, dense connections of convolutional layers are employed in the backbone network of YOLOv2 to strengthen feature extraction and alleviate the vanishing-gradient problem. Moreover, an improved spatial pyramid pooling is introduced to pool and concatenate multi-scale region features so that the network can learn object features more comprehensively. The DC-SPP-YOLO model is built and trained based on a new loss function composed of an MSE (mean squared error) loss and a cross-entropy loss. Experimental results show that the mAP (mean average precision) of DC-SPP-YOLO is higher than that of YOLOv2 on the PASCAL VOC dataset and the UA-DETRAC dataset, demonstrating the effectiveness of the proposed DC-SPP-YOLO method.
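Spatial pyramid pooling in the YOLO family typically pools the same feature map with several max-pool kernel sizes at stride 1 and concatenates the results, joining region features with different receptive fields. A generic sketch follows; the 5/9/13 kernel sizes are a common choice and an assumption here.

```python
import torch
import torch.nn as nn

class SPPBlock(nn.Module):
    """YOLO-style spatial pyramid pooling: parallel max-pools with different
    kernel sizes (stride 1, 'same' padding) concatenated with the input,
    then fused back by a 1x1 convolution."""
    def __init__(self, channels: int, kernel_sizes=(5, 9, 13)):
        super().__init__()
        self.pools = nn.ModuleList(
            [nn.MaxPool2d(k, stride=1, padding=k // 2) for k in kernel_sizes])
        self.fuse = nn.Conv2d(channels * (len(kernel_sizes) + 1), channels, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.fuse(torch.cat([x] + [p(x) for p in self.pools], dim=1))

if __name__ == "__main__":
    print(SPPBlock(512)(torch.randn(1, 512, 13, 13)).shape)  # [1, 512, 13, 13]
```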
Object detection is an important downstream task in computer vision. For in-vehicle edge computing platforms, it is difficult to meet real-time detection requirements, and lightweight models built from a large number of depthwise separable convolution layers cannot achieve sufficient accuracy. We introduce a new lightweight convolution technique, GSConv, to lighten the model while maintaining accuracy. GSConv achieves an excellent trade-off between model accuracy and speed. In addition, we provide a design paradigm, Slim-Neck, to achieve higher computational cost-effectiveness for detectors. The effectiveness of our approach is strongly demonstrated in more than twenty sets of comparison experiments. In particular, detectors improved by our method achieve state-of-the-art results (e.g., 70.9% mAP0.5 at about 100 FPS on a Tesla T4 GPU) compared with the originals. Code is available at https://github.com/alanli1997/slim-neck-by-gsconv.
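A sketch consistent with the published description of GSConv is given below: a standard convolution produces half of the output channels, a cheap depthwise convolution refines them, and the two halves are concatenated and channel-shuffled. Kernel sizes and the activation function are assumptions.

```python
import torch
import torch.nn as nn

class GSConvSketch(nn.Module):
    """Sketch of GSConv: a standard convolution produces half of the output
    channels, a depthwise convolution refines them cheaply, and the two halves
    are concatenated and shuffled so information mixes across channels."""
    def __init__(self, in_ch: int, out_ch: int, kernel_size: int = 3, stride: int = 1):
        super().__init__()
        half = out_ch // 2
        self.dense = nn.Sequential(
            nn.Conv2d(in_ch, half, kernel_size, stride, kernel_size // 2, bias=False),
            nn.BatchNorm2d(half), nn.SiLU(inplace=True))
        self.cheap = nn.Sequential(
            nn.Conv2d(half, half, 5, 1, 2, groups=half, bias=False),  # depthwise
            nn.BatchNorm2d(half), nn.SiLU(inplace=True))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        dense = self.dense(x)
        out = torch.cat([dense, self.cheap(dense)], dim=1)
        # Channel shuffle: interleave the dense and cheap halves.
        n, c, h, w = out.shape
        return out.view(n, 2, c // 2, h, w).transpose(1, 2).reshape(n, c, h, w)

if __name__ == "__main__":
    print(GSConvSketch(64, 128)(torch.randn(1, 64, 40, 40)).shape)  # [1, 128, 40, 40]
```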
Object detection, one of the three main tasks of computer vision, has been used in various applications. The main process is to use deep neural networks to extract the features of an image and then use these features to identify the class and location of an object. Therefore, the main direction for improving the accuracy of object detection tasks is to improve the neural network so that it extracts features better. In this paper, I propose a convolutional module with a transformer [1], which aims to improve the recognition accuracy of the model by fusing the detailed features extracted by a CNN [2] with the global features extracted by a transformer, and to significantly reduce the computational effort of the transformer module by shrinking the feature map. The main execution steps are convolutional downsampling to reduce the feature map size, then self-attention computation and upsampling, and finally concatenation with the initial input. In the experiments, after appending the block to the end of YOLOv5n [3] and training for 300 epochs on the COCO dataset, the mAP improved by 1.7% compared with the original YOLOv5n, and the mAP curve did not show any saturation, so there is still potential for improvement. After 100 epochs of training on the PASCAL VOC dataset, the accuracy reached 81%, which is 4.6 points better than Faster R-CNN [4] with a ResNet-101 [5] backbone, while the number of parameters is less than one-twentieth of it.
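The execution steps listed above (convolutional downsampling, self-attention, upsampling, concatenation with the input) map naturally onto a small PyTorch block such as the one below; the stride, head count, and channel handling are assumptions, not the exact module from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvTransformerBlock(nn.Module):
    """Downsample with a strided convolution to shrink the token grid, run
    multi-head self-attention on the smaller map, upsample the result, and
    concatenate it with the original input."""
    def __init__(self, channels: int, num_heads: int = 4, stride: int = 2):
        super().__init__()
        self.down = nn.Conv2d(channels, channels, kernel_size=3,
                              stride=stride, padding=1)
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        n, c, h, w = x.shape
        small = self.down(x)                                  # fewer tokens, cheaper attention
        sh, sw = small.shape[-2:]
        tokens = self.norm(small.flatten(2).transpose(1, 2))  # (N, sh*sw, C)
        attended, _ = self.attn(tokens, tokens, tokens)
        attended = attended.transpose(1, 2).reshape(n, c, sh, sw)
        up = F.interpolate(attended, size=(h, w), mode="nearest")
        return torch.cat([x, up], dim=1)                      # (N, 2C, H, W)

if __name__ == "__main__":
    print(ConvTransformerBlock(64)(torch.randn(1, 64, 32, 32)).shape)  # [1, 128, 32, 32]
```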
We propose CornerNet, a new approach to object detection in which we detect an object bounding box as a pair of keypoints, the top-left corner and the bottom-right corner, using a single convolutional neural network. By detecting objects as paired keypoints, we eliminate the need for designing a set of anchor boxes commonly used in prior single-stage detectors. In addition to our novel formulation, we introduce corner pooling, a new type of pooling layer that helps the network better localize corners. Experiments show that CornerNet achieves a 42.2% AP on MS COCO, outperforming all existing one-stage detectors.
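Corner pooling has a compact implementation using cumulative maxima. The sketch below implements the top-left variant as defined in the paper; the two input maps would come from separate convolution branches in CornerNet, which are omitted here.

```python
import torch

def top_left_corner_pool(f_top: torch.Tensor, f_left: torch.Tensor) -> torch.Tensor:
    """Corner pooling for top-left corners, as described in CornerNet.

    For each location, the first map is max-pooled over everything below it
    (same column) and the second map over everything to its right (same row);
    the two pooled maps are summed. Implemented with reversed cumulative max.
    Tensors have shape (N, C, H, W).
    """
    # Max over rows i..H-1 for every i: flip vertically, cummax, flip back.
    pooled_down = torch.cummax(f_top.flip(2), dim=2).values.flip(2)
    # Max over columns j..W-1 for every j: flip horizontally, cummax, flip back.
    pooled_right = torch.cummax(f_left.flip(3), dim=3).values.flip(3)
    return pooled_down + pooled_right

if __name__ == "__main__":
    f1 = torch.randn(1, 8, 16, 16)
    f2 = torch.randn(1, 8, 16, 16)
    print(top_left_corner_pool(f1, f2).shape)  # torch.Size([1, 8, 16, 16])
```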
Channel and spatial attention mechanisms have proven to provide an evident performance boost for deep convolutional neural networks (CNNs). Most existing methods focus on only one of them, or run them in parallel (or in series), neglecting the collaboration between the two attentions. In order to better establish the feature interaction between the two types of attention, we propose a plug-and-play attention module, which we term "CAT"-activating the Collaboration between spatial and channel Attentions based on learned Traits. Specifically, we represent traits as trainable coefficients (i.e., colla-factors) to adaptively combine the contributions of different attention modules to better fit different image hierarchies and tasks. Moreover, apart from the global average pooling (GAP) and global maximum pooling (GMP) operators, we propose global entropy pooling (GEP), an effective component for suppressing noise signals by measuring the information disorder of feature maps. We introduce this three-way pooling operation into the attention modules and apply an adaptive mechanism to fuse their outcomes. Extensive experiments on MS COCO, PASCAL VOC, CIFAR-100, and ImageNet show that our CAT outperforms existing state-of-the-art attention mechanisms in object detection, instance segmentation, and image classification. The model and code will be released soon.
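Global entropy pooling reduces each channel to a single scalar, like GAP and GMP, but the scalar measures the information disorder of the spatial map. In the sketch below the map is turned into a spatial probability distribution with a softmax, which is an assumption since the abstract does not specify the normalization.

```python
import torch

def global_entropy_pool(x: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Pool each channel's spatial map into a single entropy value.

    The map is normalized into a spatial probability distribution with a
    softmax (an assumption; the paper only states that GEP measures the
    information disorder of feature maps), and the Shannon entropy of that
    distribution is returned per channel, shape (N, C).
    """
    n, c, h, w = x.shape
    p = torch.softmax(x.view(n, c, h * w), dim=-1)       # spatial distribution
    entropy = -(p * torch.log(p + eps)).sum(dim=-1)      # Shannon entropy per channel
    return entropy

if __name__ == "__main__":
    feat = torch.randn(2, 64, 16, 16)
    print(global_entropy_pool(feat).shape)  # torch.Size([2, 64])
```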
We analyze the network structure of real-time object detection models and find that the features in the feature concatenation stage are very rich. Applying an attention module there can effectively improve the detection accuracy of the model. However, commonly used attention modules or self-attention modules perform poorly in terms of detection accuracy and inference efficiency. Therefore, we propose a novel self-attention module for the feature concatenation stage of the neck network, called 2D local feature superimposed self-attention. This self-attention module reflects global features through local features and local receptive fields. We also propose and optimize an efficient decoupled head and AB-OTA, achieving SOTA results. Average precisions of 49.0% (66.2 FPS), 46.1% (80.6 FPS), and 39.1% (100 FPS) are obtained with our proposed improvements. Our models exceed YOLOv5 by 0.8%-3.1% in average precision.
Recently, some lightweight convolutional neural network (CNN) models have been designed for remote sensing object detection (RSOD). However, most of them simply replace vanilla convolutions with depthwise separable convolutions, which may be inefficient due to a large accuracy loss and may not be able to detect oriented bounding boxes (OBBs). Moreover, existing OBB detection methods have difficulty accurately constraining the shape of objects predicted by CNNs. In this paper, we propose an efficient and lightweight oriented object detector (LO-Det). Specifically, a channel separation-aggregation (CSA) structure is designed to simplify the complexity of depthwise separable convolutions, and a dynamic receptive field (DRF) mechanism is developed to maintain high accuracy by customizing the convolution kernel and its perception range dynamically while keeping the network complexity low. The CSA-DRF component optimizes efficiency while maintaining high accuracy. Then, a diagonal support constraint head (DSC-Head) component is designed to detect OBBs and constrain their shapes more accurately and stably. Extensive experiments on public datasets show that the proposed LO-Det can run very fast, even on embedded devices, with competitive accuracy in detecting oriented objects.
Scene text detection has recently been a challenging task. Text with arbitrary shapes or large aspect ratios is usually difficult to detect. Previous segmentation-based methods can describe curved text more accurately but suffer from over-segmentation and text adhesion. In this paper, we propose an attention-based feature decomposition-reconstruction network for scene text detection, which utilizes contextual information and low-level features to enhance the performance of segmentation-based text detectors. In the feature fusion stage, we introduce a cross-level attention module to enrich the contextual information of text by adding an attention mechanism to the fused multi-scale features. In the probability map generation stage, a feature decomposition-reconstruction module is proposed to alleviate the over-segmentation problem of large-aspect-ratio text; it decomposes text features according to their frequency characteristics and then reconstructs them by adding low-level features. Experiments have been conducted on two public benchmark datasets, and the results show that our proposed method achieves state-of-the-art performance.
Due to object detection's close relationship with video analysis and image understanding, it has attracted much research attention in recent years. Traditional object detection methods are built on handcrafted features and shallow trainable architectures. Their performance easily stagnates, even when complex ensembles are constructed that combine multiple low-level image features with high-level context from object detectors and scene classifiers. With the rapid development of deep learning, more powerful tools, which are able to learn semantic, high-level, deeper features, have been introduced to address the problems of traditional architectures. These models differ in network architecture, training strategy, optimization function, and so on. In this paper, we provide a review of deep learning based object detection frameworks. Our review begins with a brief introduction to the history of deep learning and its representative tool, the Convolutional Neural Network (CNN). Then we focus on typical generic object detection architectures, along with some modifications and useful tricks that further improve detection performance. As distinct specific detection tasks exhibit different characteristics, we also briefly survey several specific tasks, including salient object detection, face detection, and pedestrian detection. Experimental analyses are also provided to compare various methods and draw some meaningful conclusions. Finally, several promising directions and tasks are provided to serve as guidelines for future work in both object detection and relevant neural network based learning systems.
X-ray images play an important role in quality assurance for manufacturing, since they can reflect the internal condition of weld regions. However, the shape and scale of different defect types vary greatly, which makes it challenging for models to detect weld defects. In this paper, we propose a weld defect detection method based on a convolutional neural network, namely the Lighter and Faster YOLO (LF-YOLO). Specifically, a reinforced multi-scale feature (RMF) module is designed to implement both parameter-based and parameter-free multi-scale information extraction operations. RMF enables the extracted feature maps to represent richer information, which is achieved by a superior hierarchical fusion structure. To improve the performance of the detection network, we propose an efficient feature extraction (EFE) module. EFE processes input data at extremely low cost and improves the practicality of the whole network in real industry. Experimental results show that our weld defect detection network achieves a satisfactory balance between performance and cost, reaching 92.9 mAP50 at 61.5 frames per second (FPS). To further prove the capability of our method, we test it on the public dataset MS COCO, and the results show that our LF-YOLO has excellent versatile detection performance. The code is available at https://github.com/lmomoy/lf-yolo.