In recent years, the dominant paradigm for text spotting has been to combine the tasks of text detection and recognition into a single end-to-end framework. Under this paradigm, both tasks are accomplished by operating over a shared global feature map extracted from the input image. One of the main challenges end-to-end approaches face is performance degradation when recognizing text across scale variations (smaller or larger text) and arbitrary word rotation angles. In this work, we address these challenges by proposing a novel global-to-local attention mechanism for text spotting, termed GLASS, that fuses together global and local features. The global features are extracted from the shared backbone, preserving contextual information from the entire image, while the local features are computed individually on resized, high-resolution rotated word crops. The information extracted from the local crops alleviates much of the inherent difficulty with scale and word rotation. We present a performance analysis across scales and angles, highlighting improvement over the scale and angle extremities. In addition, we introduce an orientation-aware loss term supervising the detection task, and show its contribution to both detection and recognition performance across all angles. Finally, we show that GLASS is general by incorporating it into other leading text spotting architectures, improving their text spotting performance. Our method achieves state-of-the-art results on multiple benchmarks, including the newly released TextOCR.
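The core idea of fusing a backbone-level (global) feature with a feature computed on a rectified, high-resolution word crop can be illustrated with a minimal PyTorch sketch. This is not the paper's exact architecture; the module name, crop size, and gating design below are assumptions made for illustration.

```python
# A minimal sketch (not GLASS's exact architecture) of fusing a global RoI
# feature with a feature computed on a high-resolution rotated word crop.
import torch
import torch.nn as nn

class GlobalLocalFusion(nn.Module):
    def __init__(self, dim=256):
        super().__init__()
        # Small encoder for the rectified, resized word crop (local branch).
        self.local_encoder = nn.Sequential(
            nn.Conv2d(3, dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(dim, dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Attention gate deciding how much to trust global vs. local evidence.
        self.gate = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(),
                                  nn.Linear(dim, 1), nn.Sigmoid())

    def forward(self, global_feat, word_crop):
        # global_feat: (N, dim) pooled from the shared backbone feature map.
        # word_crop:  (N, 3, 64, 256) rotated/resized high-resolution crops.
        local_feat = self.local_encoder(word_crop)
        alpha = self.gate(torch.cat([global_feat, local_feat], dim=1))
        return alpha * global_feat + (1 - alpha) * local_feat

fused = GlobalLocalFusion()(torch.randn(4, 256), torch.randn(4, 3, 64, 256))
```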
Arbitrary-shaped scene text detection is a challenging task due to the large variations of text in font, size, color, and orientation. Most existing regression-based methods resort to regressing masks or contour points of text regions to model the text instances. However, regressing complete masks requires high training complexity, and contour points are not sufficient to capture the details of highly curved text. To tackle the above limitations, we propose a novel lightweight anchor-free text detection framework called TextDCT, which adopts the discrete cosine transform (DCT) to encode text masks as compact vectors. Furthermore, considering the imbalanced number of training samples among pyramid layers, we only employ a single-level head for top-down prediction. To model multi-scale text in the single-level head, we introduce a novel positive sampling strategy that treats the shrunk text region as positive samples, and design a feature awareness module (FAM) for spatial awareness and scale awareness by fusing rich contextual information and focusing on more significant features. Moreover, we propose a segmented non-maximum suppression (S-NMS) method that can filter out low-quality mask regressions. Extensive experiments conducted on four challenging datasets demonstrate that our TextDCT obtains competitive performance in both accuracy and efficiency. Specifically, TextDCT achieves an F-measure of 85.1 at 17.2 frames per second (FPS) on CTW1500 and an F-measure of 84.9 at 15.1 FPS on Total-Text.
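Encoding a binary text mask as a compact vector of low-frequency DCT coefficients can be sketched with SciPy as follows. The mask size (128x128) and the 17x17 low-frequency block kept here are assumptions for illustration; the paper's exact settings may differ.

```python
# Illustrative sketch of encoding a binary text mask as a compact DCT vector
# and reconstructing it (assumed: 128x128 masks, 17x17 low-frequency block).
import numpy as np
from scipy.fft import dctn, idctn

def encode_mask(mask, k=17):
    coeffs = dctn(mask.astype(np.float32), norm="ortho")
    return coeffs[:k, :k].flatten()          # keep the low-frequency block only

def decode_mask(vec, shape=(128, 128), k=17):
    coeffs = np.zeros(shape, dtype=np.float32)
    coeffs[:k, :k] = vec.reshape(k, k)
    return idctn(coeffs, norm="ortho") > 0.5  # threshold back to a binary mask

mask = np.zeros((128, 128)); mask[40:90, 20:110] = 1   # toy text region
recon = decode_mask(encode_mask(mask))
```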
Typical text spotters follow a two-stage spotting strategy: they first detect the precise boundary of a text instance and then perform text recognition within the located text region. While this strategy has achieved substantial progress, it has two underlying limitations. 1) The performance of text recognition depends heavily on the precision of text detection, resulting in potential error propagation from detection to recognition. 2) The RoI cropping that bridges detection and recognition brings in background noise and leads to information loss when pooling or interpolating from feature maps. In this work, we propose the Single-shot Self-Reliant Scene Text Spotter (SRSTS), which circumvents these limitations by decoupling recognition from detection. Specifically, we conduct text detection and recognition in parallel and bridge them through shared positive anchor points. Consequently, our method is able to recognize text instances correctly even when the precise text boundaries are challenging to detect. In addition, our method substantially reduces the annotation cost for text detection. Extensive experiments on both regular-shaped and arbitrary-shaped benchmarks demonstrate that our SRSTS compares favorably to previous state-of-the-art spotters in terms of both accuracy and efficiency.
Almost all scene text spotting (detection and recognition) methods rely on costly box annotations (e.g., text-line boxes, word-level boxes, and character-level boxes). For the first time, we demonstrate that training scene text spotting models can be achieved with an extremely low-cost annotation of a single point per instance. We propose an end-to-end scene text spotting method that casts scene text spotting as a sequence prediction task, akin to language modeling. Given an image as input, we formulate the desired detection and recognition results as a sequence of discrete tokens and use an auto-regressive Transformer to predict the sequence. We achieve promising results on several horizontal, multi-oriented, and arbitrarily shaped scene text benchmarks. Most significantly, we show that the performance is not very sensitive to the position of the point annotation, which means it can be annotated far more easily, or even generated automatically, compared with bounding boxes that require precise positions. We believe this pioneering attempt indicates a significant opportunity for scene text spotting at a much larger scale than was previously possible.
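The sequence formulation can be illustrated by how a (point, transcription) annotation might be turned into discrete tokens for an autoregressive Transformer. The bin count, vocabulary, and token layout below are assumptions for illustration, not the paper's exact specification.

```python
# A hedged sketch of building a discrete token sequence from single-point
# annotations (assumed: 1000 coordinate bins, a small character vocabulary).
N_BINS = 1000                                  # coordinate quantization bins
CHARS = "abcdefghijklmnopqrstuvwxyz0123456789"
CHAR_OFFSET = N_BINS                           # character ids follow coordinate ids
EOS = CHAR_OFFSET + len(CHARS)                 # end-of-sequence token

def instance_to_tokens(x, y, text, img_w, img_h):
    """Quantize the single annotated point, then append one token per character."""
    tokens = [int(x / img_w * (N_BINS - 1)), int(y / img_h * (N_BINS - 1))]
    tokens += [CHAR_OFFSET + CHARS.index(c) for c in text.lower() if c in CHARS]
    return tokens

# Example: two words whose annotated points lie in a 640x480 image.
seq = []
for inst in [(312, 96, "stop"), (101, 250, "exit")]:
    seq += instance_to_tokens(*inst, img_w=640, img_h=480)
seq.append(EOS)
```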
With the rise of deep convolutional neural networks, object detection has achieved prominent progress in recent years. However, such prosperity cannot mask the unsatisfactory situation of Small Object Detection (SOD), one of the notoriously challenging tasks in computer vision, owing to the poor visual appearance and noisy representations caused by the intrinsic structure of small targets. In addition, large-scale datasets for benchmarking small object detection methods remain a bottleneck. In this paper, we first conduct a thorough review of small object detection. Then, to catalyze the development of SOD, we construct two large-scale Small Object Detection dAtasets (SODA), SODA-D and SODA-A, which focus on driving and aerial scenarios, respectively. SODA-D includes 24704 high-quality traffic images and 277596 instances of 9 categories. For SODA-A, we collect 2510 high-resolution aerial images and annotate 800203 instances over 9 categories. To the best of our knowledge, the proposed datasets are the first-ever attempt at large-scale benchmarks with a vast collection of exhaustively annotated instances tailored for multi-category SOD. Finally, we evaluate the performance of mainstream methods on SODA. We expect the released benchmarks to facilitate the development of SOD and to spawn more breakthroughs in this field. The datasets and code will be available soon at https://shaunyuan22.github.io/soda.
Recently, models based on deep neural networks have dominated the fields of scene text detection and recognition. In this paper, we investigate the problem of scene text spotting, which aims at simultaneous text detection and recognition in natural images. An end-to-end trainable neural network model for scene text spotting is proposed. The proposed model, named Mask TextSpotter, is inspired by the newly published work Mask R-CNN. Different from previous methods that also accomplish text spotting with end-to-end trainable deep neural networks, Mask TextSpotter takes advantage of a simple and smooth end-to-end learning procedure, in which precise text detection and recognition are acquired via semantic segmentation. Moreover, it is superior to previous methods in handling text instances of irregular shapes, for example, curved text. Experiments on ICDAR2013, ICDAR2015 and Total-Text demonstrate that the proposed method achieves state-of-the-art results in both scene text detection and end-to-end text recognition tasks.
Due to object detection's close relationship with video analysis and image understanding, it has attracted much research attention in recent years. Traditional object detection methods are built on handcrafted features and shallow trainable architectures. Their performance easily stagnates, even when complex ensembles are constructed that combine multiple low-level image features with high-level context from object detectors and scene classifiers. With the rapid development in deep learning, more powerful tools, which are able to learn semantic, high-level, deeper features, are introduced to address the problems existing in traditional architectures. These models behave differently in network architecture, training strategy and optimization function, etc. In this paper, we provide a review on deep learning based object detection frameworks. Our review begins with a brief introduction on the history of deep learning and its representative tool, namely the Convolutional Neural Network (CNN). Then we focus on typical generic object detection architectures along with some modifications and useful tricks to improve detection performance further. As distinct specific detection tasks exhibit different characteristics, we also briefly survey several specific tasks, including salient object detection, face detection and pedestrian detection. Experimental analyses are also provided to compare various methods and draw some meaningful conclusions. Finally, several promising directions and tasks are provided to serve as guidelines for future work in both object detection and relevant neural network based learning systems.
Scene text detection remains a challenging task; text with arbitrary shapes or large aspect ratios is usually hard to detect. Previous segmentation-based methods can describe curved text more accurately, but they suffer from over-segmentation and text adhesion. In this paper, we propose an attention-based feature decomposition-reconstruction approach for scene text detection, which utilizes contextual information and low-level features to enhance the performance of segmentation-based text detectors. In the feature-fusion stage, we introduce a cross-level attention module that enriches the contextual information of text by adding an attention mechanism on the fused multi-scale features. In the probability-map-generation stage, a feature decomposition-reconstruction module is proposed to alleviate the over-segmentation problem for text with large aspect ratios: it decomposes text features according to their frequency characteristics and then reconstructs them by adding low-level features. Experiments have been conducted on two public benchmark datasets, and the results demonstrate that our proposed method achieves state-of-the-art performance.
In this paper, we aim to design an efficient real-time object detector that exceeds the YOLO series and is easily extensible for many object recognition tasks such as instance segmentation and rotated object detection. To obtain a more efficient model architecture, we explore an architecture that has compatible capacities in the backbone and neck, constructed by a basic building block that consists of large-kernel depth-wise convolutions. We further introduce soft labels when calculating matching costs in the dynamic label assignment to improve accuracy. Together with better training techniques, the resulting object detector, named RTMDet, achieves 52.8% AP on COCO with 300+ FPS on an NVIDIA 3090 GPU, outperforming the current mainstream industrial detectors. RTMDet achieves the best parameter-accuracy trade-off with tiny/small/medium/large/extra-large model sizes for various application scenarios, and obtains new state-of-the-art performance on real-time instance segmentation and rotated object detection. We hope the experimental results can provide new insights into designing versatile real-time object detectors for many object recognition tasks. Code and models are released at https://github.com/open-mmlab/mmdetection/tree/3.x/configs/rtmdet.
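A building block with a large-kernel depth-wise convolution, as mentioned above, could be sketched in PyTorch as follows. The channel width, kernel size, and norm/activation placement are assumptions for illustration, not RTMDet's exact block definition.

```python
# A minimal sketch of a large-kernel depth-wise convolution building block
# (channel sizes, kernel size, and layer order are illustrative assumptions).
import torch
import torch.nn as nn

class LargeKernelDWBlock(nn.Module):
    def __init__(self, channels=256, kernel_size=5):
        super().__init__()
        self.pw1 = nn.Conv2d(channels, channels, 1)                   # point-wise
        self.dw = nn.Conv2d(channels, channels, kernel_size,
                            padding=kernel_size // 2, groups=channels)  # depth-wise
        self.pw2 = nn.Conv2d(channels, channels, 1)
        self.norm = nn.BatchNorm2d(channels)
        self.act = nn.SiLU()

    def forward(self, x):
        out = self.act(self.norm(self.pw2(self.dw(self.pw1(x)))))
        return x + out  # residual connection

y = LargeKernelDWBlock()(torch.randn(1, 256, 64, 64))
```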
Recently, segmentation-based methods are quite popular in scene text detection, which mainly contain two steps: text kernel segmentation and expansion. However, the segmentation process only considers each pixel independently, and the expansion process is difficult to achieve a favorable accuracy-speed trade-off. In this paper, we propose a Context-aware and Boundary-guided Network (CBN) to tackle these problems. In CBN, a basic text detector is firstly used to predict initial segmentation results. Then, we propose a context-aware module to enhance text kernel feature representations, which considers both global and local contexts. Finally, we introduce a boundary-guided module to expand enhanced text kernels adaptively with only the pixels on the contours, which not only obtains accurate text boundaries but also keeps high speed, especially on high-resolution output maps. In particular, with a lightweight backbone, the basic detector equipped with our proposed CBN achieves state-of-the-art results on several popular benchmarks, and our proposed CBN can be plugged into several segmentation-based methods. Code will be available on https://github.com/XiiZhao/cbn.pytorch.
Scene text spotting is of great importance to the computer vision community due to its wide variety of applications. Recent methods attempt to introduce linguistic knowledge for challenging recognition rather than pure visual classification. However, how to effectively model the linguistic rules in end-to-end deep networks remains a research challenge. In this paper, we argue that the limited capacity of language models comes from 1) implicit language modeling; 2) unidirectional feature representation; and 3) language model with noise input. Correspondingly, we propose an autonomous, bidirectional and iterative ABINet++ for scene text spotting. Firstly, the autonomous principle suggests explicitly enforcing language modeling by decoupling the recognizer into a vision model and a language model and blocking gradient flow between both models. Secondly, a novel bidirectional cloze network (BCN) is proposed as the language model, based on bidirectional feature representation. Thirdly, we propose an execution manner of iterative correction for the language model which can effectively alleviate the impact of noise input. Finally, to polish ABINet++ in long text recognition, we propose to aggregate horizontal features by embedding Transformer units inside a U-Net, and design a position and content attention module which integrates character order and content to attend to character features precisely. ABINet++ achieves state-of-the-art performance on both scene text recognition and scene text spotting benchmarks, which consistently demonstrates the superiority of our method in various environments, especially on low-quality images. Besides, extensive experiments in both English and Chinese also prove that a text spotter that incorporates our language modeling method can significantly improve its performance in both accuracy and speed compared with commonly used attention-based recognizers.
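The iterative-correction idea can be sketched schematically: the language model repeatedly refines the recognizer's character probabilities, with gradient flow blocked at the language model's input. The modules below are placeholders, and the simple additive fusion is an assumption, not ABINet++'s exact fusion design.

```python
# A schematic sketch of iterative correction between a vision model and a
# language model; module internals and the fusion step are placeholders.
import torch
import torch.nn as nn

def iterative_spotting(vision_model, language_model, image, n_iter=3):
    logits = vision_model(image)                 # (T, num_classes) per text line
    for _ in range(n_iter):
        probs = logits.softmax(dim=-1).detach()  # block gradient flow into the LM
        lm_logits = language_model(probs)        # cloze-style linguistic refinement
        logits = logits + lm_logits              # fuse visual and linguistic evidence
    return logits.argmax(dim=-1)

# Toy stand-ins just to show the call pattern.
vision_model = lambda img: torch.randn(25, 37)   # 25 time steps, 37 classes
language_model = nn.Linear(37, 37)
pred_ids = iterative_spotting(vision_model, language_model, torch.randn(1, 3, 32, 128))
```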
Face detection is the task of searching all possible regions of an image for faces and locating any faces that are present. Many applications, including face recognition, facial expression recognition, face tracking and head-pose estimation, assume that both the location and the size of faces are known in the image. In recent decades, researchers have created many typical and efficient face detectors, from the Viola-Jones face detector to current CNN-based ones. However, with the tremendous increase of images and videos featuring variations in face scale, appearance, expression, occlusion and pose, traditional face detectors are challenged to detect the diverse "faces in the wild". The emergence of deep learning techniques has brought remarkable breakthroughs to face detection, at the price of a considerable increase in computation. This paper introduces the representative deep-learning-based methods and presents a deep and thorough analysis in terms of accuracy and efficiency. We further compare and discuss the popular and challenging datasets and their evaluation metrics. A comprehensive comparison of several successful deep-learning-based face detectors is conducted to uncover their efficiency using two metrics: FLOPs and latency. The paper can serve as a guide for choosing appropriate face detectors for different applications and for developing more efficient and accurate detectors.
In recent years, 3D object detection from LiDAR data for autonomous driving has been making remarkable strides. Among the state-of-the-art methods, encoding point clouds into a bird's-eye view (BEV) has been demonstrated to be both effective and efficient. Unlike perspective views, the BEV preserves rich spatial and distance information between objects; and while farther objects of the same type do not appear smaller in the BEV, they contain sparser point cloud features. This fact weakens BEV feature extraction with a shared convolutional neural network. To address this challenge, we propose the Range-Aware Attention Network (RAANet), which extracts more powerful BEV features and yields superior 3D object detection. The range-aware attention (RAA) convolutions significantly improve feature extraction for both near and distant objects. Moreover, we propose a novel auxiliary loss for density estimation to further enhance the detection accuracy of RAANet for occluded objects. Notably, our proposed RAA convolution is lightweight and compatible for integration into any CNN architecture used for BEV detection. Extensive experiments on the nuScenes dataset show that our proposed method outperforms state-of-the-art LiDAR-based 3D object detection methods, with a real-time inference speed of 16 Hz for the full version and 22 Hz for the lite version. The code is publicly available at the anonymous GitHub repository https://github.com/Anonymous0522/ange.
Unmanned aerial vehicle (UAV)-based visual object tracking has enabled a wide range of applications and has attracted increasing attention in the field of intelligent transportation systems because of its versatility and effectiveness. As an emerging force in the revolutionary trend of deep learning, Siamese networks shine in UAV-based object tracking with their promising balance of accuracy, robustness, and speed. Thanks to the development of embedded processors and the gradual optimization of deep neural networks, Siamese trackers have received extensive research attention and achieved preliminary combinations with UAVs. However, due to the UAV's limited onboard computing resources and complex real-world circumstances, aerial tracking with Siamese networks still faces severe obstacles in many respects. To further explore the deployment of Siamese networks in UAV-based tracking, this work offers a comprehensive review of leading-edge Siamese trackers, together with an exhaustive UAV-oriented analysis based on evaluations using a typical UAV onboard processor. Onboard tests are then conducted to validate the feasibility and efficacy of representative Siamese trackers in real-world UAV deployment. Furthermore, to better promote the development of the tracking community, this work analyzes the limitations of existing Siamese trackers and conducts additional experiments represented by low-illumination evaluations. Finally, the prospects of Siamese tracking for UAV-based intelligent transportation systems are discussed in depth. The unified framework of the leading-edge Siamese trackers, i.e., the code library, and the results of their experimental evaluations are available at https://github.com/vision4robotics/siamesetracking4uav.
Arbitrary-shape text detection is a challenging task due to large variations in size and aspect ratio, arbitrary orientations or shapes, inaccurate annotations, etc., and it has attracted considerable attention recently. However, accurate pixel-level annotation of text is formidable, and existing scene text detection datasets only provide coarse-grained boundary annotations. Consequently, numerous misclassified text pixels or background pixels are always present, degrading the performance of segmentation-based text detection methods. Generally speaking, whether a pixel belongs to text or not is highly correlated with its distance to the adjacent annotation boundary. Motivated by this observation, in this paper we propose an innovative and robust segmentation-based detection method via probability maps for accurately detecting text instances. To be concrete, we adopt a Sigmoid Alpha Function (SAF) to transfer the distances between boundaries and their inside pixels to a probability map. However, one probability map cannot cover complex probability distributions well, due to the uncertainty of the coarse-grained text boundary annotations. Therefore, we adopt a group of probability maps computed by a series of Sigmoid Alpha Functions to describe the possible probability distributions. In addition, we propose an iterative model that learns to predict and assimilate probability maps, providing enough information to reconstruct text instances. Finally, a simple region-growing algorithm is adopted to aggregate the probability maps into complete text instances. Experimental results demonstrate that our method achieves state-of-the-art detection accuracy on several benchmarks.
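The construction of distance-based probability maps can be illustrated as follows: compute each text pixel's distance to the annotated boundary, then map that distance to a probability with a sigmoid-like function whose steepness is controlled by a parameter alpha. The exact form of the paper's Sigmoid Alpha Function is not reproduced here; the mapping below is an assumed stand-in.

```python
# An illustrative sketch of building a group of distance-based probability maps
# (the exact SAF formula is not reproduced; this sigmoid mapping is assumed).
import numpy as np
from scipy.ndimage import distance_transform_edt

def probability_map(text_mask, alpha=3.0):
    # Normalized distance from the boundary: 0 at the boundary, 1 deep inside.
    dist = distance_transform_edt(text_mask)
    if dist.max() > 0:
        dist = dist / dist.max()
    prob = 1.0 / (1.0 + np.exp(-alpha * (dist - 0.5)))  # steepness set by alpha
    return np.where(text_mask > 0, prob, 0.0)

mask = np.zeros((64, 64)); mask[20:44, 10:54] = 1       # toy text region
maps = [probability_map(mask, a) for a in (2.0, 4.0, 8.0)]  # a group of maps
```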
This paper presents HoughNet, a one-stage, anchor-free, voting-based, bottom-up object detection method. Inspired by the Generalized Hough Transform, HoughNet determines the presence of an object at a certain location by the sum of the votes cast on that location. Votes are collected from both near and long-distance locations based on a log-polar vote field. Thanks to this voting mechanism, HoughNet is able to integrate both near and long-range, class-conditional evidence for visual recognition, thereby generalizing and enhancing current object detection methodology, which typically relies only on local evidence. On the COCO dataset, HoughNet's best model achieves 46.4 AP (and 65.1 AP50), performing on par with the state of the art in bottom-up object detection and outperforming most major one-stage and two-stage methods. We further validate the effectiveness of our proposal in other visual detection tasks, namely video object detection, instance segmentation, 3D object detection and keypoint detection for human pose estimation, as well as an additional "labels to photo" image generation task, where the integration of our voting module consistently improves performance in all cases. Code is available at https://github.com/nerminsamet/houghnet.
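A toy NumPy sketch of log-polar vote accumulation follows: every evidence location casts votes onto surrounding locations, with bins that grow logarithmically with distance. The ring radii, angle count, and random per-bin weights are illustrative assumptions; in the actual model such weights would be learned.

```python
# A toy sketch of log-polar vote accumulation (bin layout and weights assumed).
import numpy as np

RADII = (4, 8, 16, 32, 64)      # log-spaced ring boundaries (illustrative)
N_ANGLE = 12
rng = np.random.default_rng(0)
# Per-(ring, angle) vote weights; learned in a real model, random here.
VOTE_WEIGHTS = rng.random((len(RADII), N_ANGLE)).astype(np.float32)

def accumulate_votes(evidence):
    """evidence: (H, W) map of per-location object evidence scores."""
    h, w = evidence.shape
    votes = np.zeros_like(evidence, dtype=np.float32)
    r_max = RADII[-1]
    for y, x in zip(*np.nonzero(evidence)):
        for ty in range(max(0, y - r_max), min(h, y + r_max + 1)):
            for tx in range(max(0, x - r_max), min(w, x + r_max + 1)):
                r = np.hypot(ty - y, tx - x)
                ring = int(np.searchsorted(RADII, r))
                if ring >= len(RADII):
                    continue          # outside the vote field
                ang = int(((np.arctan2(ty - y, tx - x) + np.pi) / (2 * np.pi)) * N_ANGLE) % N_ANGLE
                votes[ty, tx] += evidence[y, x] * VOTE_WEIGHTS[ring, ang]
    return votes

heat = accumulate_votes((rng.random((48, 48)) > 0.99).astype(np.float32))
```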
Maintaining the identity of multiple objects in real-time video is a challenging task, as it is not always feasible to run a detector on every frame. Thus, motion estimation systems are often employed, which either do not scale well with the number of targets or produce features with limited semantic information. To solve the aforementioned problems and allow the tracking of dozens of arbitrary objects in real-time, we propose SiamMOTION. SiamMOTION includes a novel proposal engine that produces quality features through an attention mechanism and a region-of-interest extractor fed by an inertia module and powered by a feature pyramid network. Finally, the extracted tensors enter a comparison head that efficiently matches pairs of exemplars and search areas, generating quality predictions via a pairwise depthwise region proposal network and a multi-object penalization module. SiamMOTION has been validated on five public benchmarks, achieving leading performance against current state-of-the-art trackers. Code available at: https://github.com/lorenzovaquero/SiamMOTION
Single-frame InfraRed Small Target (SIRST) detection has been a challenging task due to a lack of inherent characteristics, imprecise bounding box regression, a scarcity of real-world datasets, and sensitive localization evaluation. In this paper, we propose a comprehensive solution to these challenges. First, we find that the existing anchor-free label assignment method is prone to mislabeling small targets as background, leading to their omission by detectors. To overcome this issue, we propose an all-scale pseudo-box-based label assignment scheme that relaxes the constraints on scale and decouples the spatial assignment from the size of the ground-truth target. Second, motivated by the structured prior of feature pyramids, we introduce the one-stage cascade refinement network (OSCAR), which uses the high-level head as soft proposals for the low-level refinement head. This allows OSCAR to process the same target in a cascade coarse-to-fine manner. Finally, we present a new research benchmark for infrared small target detection, consisting of the SIRST-V2 dataset of real-world, high-resolution single-frame targets, the normalized contrast evaluation metric, and the DeepInfrared toolkit for detection. We conduct extensive ablation studies to evaluate the components of OSCAR and compare its performance to state-of-the-art model-driven and data-driven methods on the SIRST-V2 benchmark. Our results demonstrate that a top-down cascade refinement framework can improve the accuracy of infrared small target detection without sacrificing efficiency. The DeepInfrared toolkit, dataset, and trained models are available at https://github.com/YimianDai/open-deepinfrared to advance further research in this field.
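The pseudo-box-based label assignment idea can be sketched as follows: instead of requiring candidate locations to fall inside the (often tiny) ground-truth box, positives are taken from a fixed-size pseudo box centered on the target, decoupling spatial assignment from target size. The pseudo-box size and the point-based formulation below are assumptions for illustration, not the paper's exact scheme.

```python
# A hedged sketch of pseudo-box-based label assignment (pseudo-box size assumed).
import numpy as np

def assign_labels(points, gt_boxes, pseudo_size=16.0):
    """points: (N, 2) feature-map locations in image coords;
    gt_boxes: (M, 4) as (x1, y1, x2, y2). Returns (N,) gt index or -1 for background."""
    labels = np.full(len(points), -1, dtype=np.int64)
    for gi, (x1, y1, x2, y2) in enumerate(gt_boxes):
        cx, cy = (x1 + x2) / 2, (y1 + y2) / 2
        half = pseudo_size / 2                       # same radius regardless of GT size
        inside = ((np.abs(points[:, 0] - cx) <= half) &
                  (np.abs(points[:, 1] - cy) <= half))
        labels[inside] = gi
    return labels

pts = np.stack(np.meshgrid(np.arange(0, 64, 8), np.arange(0, 64, 8)), -1).reshape(-1, 2)
lbl = assign_labels(pts.astype(float), np.array([[30.0, 30.0, 33.0, 33.0]]))  # a 3x3-px target
```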
In the past decade, object detection has achieved significant progress in natural images but not in aerial images, due to the massive variations in the scale and orientation of objects caused by the bird's-eye view of aerial imagery. More importantly, the lack of large-scale benchmarks has become a major obstacle to the development of object detection in aerial images (ODAI). In this paper, we present a large-scale Dataset of Object deTection in Aerial images (DOTA) and comprehensive baselines for ODAI. The proposed DOTA dataset contains 1,793,658 object instances of 18 categories with oriented-bounding-box annotations, collected from 11,268 aerial images. Based on this large-scale and well-annotated dataset, we build baselines covering 10 state-of-the-art algorithms with over 70 configurations, where the speed and accuracy of each model have been evaluated. Furthermore, we provide a code library for ODAI and build a website for evaluating different algorithms. Previous challenges run on DOTA have attracted more than 1300 teams worldwide. We believe that the expanded large-scale DOTA dataset, the extensive baselines, the code library and the challenges can facilitate the design of robust algorithms and reproducible research on the problem of object detection in aerial images.
Previous approaches for scene text detection have already achieved promising performances across various benchmarks. However, they usually fall short when dealing with challenging scenarios, even when equipped with deep neural network models, because the overall performance is determined by the interplay of multiple stages and components in the pipelines. In this work, we propose a simple yet powerful pipeline that yields fast and accurate text detection in natural scenes. The pipeline directly predicts words or text lines of arbitrary orientations and quadrilateral shapes in full images, eliminating unnecessary intermediate steps (e.g., candidate aggregation and word partitioning), with a single neural network. The simplicity of our pipeline allows concentrating efforts on designing loss functions and neural network architecture. Experiments on standard datasets including ICDAR 2015, COCO-Text and MSRA-TD500 demonstrate that the proposed algorithm significantly outperforms state-of-the-art methods in terms of both accuracy and efficiency. On the ICDAR 2015 dataset, the proposed algorithm achieves an F-score of 0.7820 at 13.2fps at 720p resolution.
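Direct per-pixel geometry prediction of this kind can be illustrated by decoding one pixel of a rotated-box (RBOX) geometry output, i.e., four distances to the box edges plus a rotation angle, into a quadrilateral. This is a simplified sketch of the general idea, not the reference implementation, and the sign and rotation conventions are assumptions.

```python
# A simplified sketch of decoding per-pixel RBOX geometry into a quadrilateral
# (conventions for angle sign and corner order are illustrative assumptions).
import numpy as np

def decode_rbox(px, py, d_top, d_right, d_bottom, d_left, angle):
    """Return the 4 corners (clockwise) of the box predicted at pixel (px, py)."""
    # Corners in the box-aligned frame, relative to the pixel location.
    corners = np.array([[-d_left, -d_top],
                        [ d_right, -d_top],
                        [ d_right,  d_bottom],
                        [-d_left,  d_bottom]], dtype=np.float32)
    c, s = np.cos(angle), np.sin(angle)
    rot = np.array([[c, -s], [s, c]], dtype=np.float32)
    return corners @ rot.T + np.array([px, py], dtype=np.float32)

quad = decode_rbox(100, 60, d_top=8, d_right=40, d_bottom=8, d_left=12, angle=0.1)
```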