In this paper, we evaluate state-of-the-art OCR methods on egocentric data. We annotate text in Epic-Kitchens images and demonstrate that existing OCR methods struggle with rotated text, which is frequently observed on handled objects. We introduce a simple rotate-and-merge procedure that can be applied to pre-trained OCR models and that halves the normalized edit distance error. This suggests that future OCR work should incorporate rotation into model design and training procedures.
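A minimal sketch of the rotate-and-merge idea described above, assuming a generic `ocr_model` that returns a (text, confidence) pair; the angle set, the best-confidence merge rule, and the model interface are illustrative assumptions, not the authors' exact procedure.

```python
import cv2

def rotate_and_merge_ocr(image, ocr_model, angles=(0, 90, 180, 270)):
    """Run a pretrained OCR model on several rotations of a text crop and
    keep the highest-confidence reading (one possible merge rule)."""
    best_text, best_conf = "", -1.0
    h, w = image.shape[:2]
    center = (w / 2, h / 2)
    for angle in angles:
        M = cv2.getRotationMatrix2D(center, angle, 1.0)
        # Note: for non-square crops this canvas may clip; a fuller version
        # would expand the output size to fit the rotated crop.
        rotated = cv2.warpAffine(image, M, (w, h))
        text, conf = ocr_model(rotated)  # hypothetical (text, confidence) interface
        if conf > best_conf:
            best_text, best_conf = text, conf
    return best_text, best_conf
```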
Recently, models based on deep neural networks have dominated the fields of scene text detection and recognition. In this paper, we investigate the problem of scene text spotting, which aims at simultaneous text detection and recognition in natural images. An end-to-end trainable neural network model for scene text spotting is proposed. The proposed model, named Mask TextSpotter, is inspired by the recently published Mask R-CNN. Different from previous methods that also accomplish text spotting with end-to-end trainable deep neural networks, Mask TextSpotter benefits from a simple and smooth end-to-end learning procedure in which precise text detection and recognition are achieved via semantic segmentation. Moreover, it is superior to previous methods in handling text instances of irregular shapes, such as curved text. Experiments on ICDAR2013, ICDAR2015 and Total-Text demonstrate that the proposed method achieves state-of-the-art results in both scene text detection and end-to-end text recognition tasks.
In recent years, the dominant paradigm for text spotting has been to combine the tasks of text detection and recognition into a single end-to-end framework. Under this paradigm, both tasks are accomplished by operating on a shared global feature map extracted from the input image. One of the main challenges end-to-end approaches face is degraded performance when recognizing text across scale variations (smaller or larger text) and arbitrary word rotation angles. In this work, we address these challenges by proposing a novel global-to-local attention mechanism for text spotting, termed GLASS, which fuses global and local features. The global features are extracted from the shared backbone and preserve contextual information from the entire image, while the local features are computed individually on resized, high-resolution rotated word crops. The information extracted from the local crops alleviates much of the inherent difficulty with scale and word rotation. We present a performance analysis across scales and angles, highlighting improvements at the extremes of both. In addition, we introduce an orientation-aware loss term to supervise the detection task and show its contribution to detection and recognition performance across all angles. Finally, we show that GLASS is general by incorporating it into other leading text spotting architectures and improving their text spotting performance. Our method achieves state-of-the-art results on multiple benchmarks, including the newly released TextOCR.
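A toy sketch of fusing a pooled global feature with a feature computed from a high-resolution rotated word crop, loosely in the spirit of the global-to-local design described above; the encoder, feature sizes, and concatenation-based fusion are assumptions, not the GLASS architecture.

```python
import torch
import torch.nn as nn

class GlobalLocalFusion(nn.Module):
    """Fuse a per-word global backbone feature with a feature extracted
    from the word's rotated, resized high-resolution crop."""
    def __init__(self, c_global=256, c_local=256):
        super().__init__()
        self.local_encoder = nn.Sequential(
            nn.Conv2d(3, c_local, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1))
        self.fuse = nn.Linear(c_global + c_local, c_global)

    def forward(self, global_feat, word_crops):
        # global_feat: (N, c_global) pooled backbone feature per word RoI
        # word_crops:  (N, 3, H, W) rotated, resized word crops
        local = self.local_encoder(word_crops).flatten(1)          # (N, c_local)
        fused = self.fuse(torch.cat([global_feat, local], dim=1))  # (N, c_global)
        return torch.relu(fused)
```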
Previous approaches for scene text detection have already achieved promising performances across various benchmarks. However, they usually fall short when dealing with challenging scenarios, even when equipped with deep neural network models, because the overall performance is determined by the interplay of multiple stages and components in the pipelines. In this work, we propose a simple yet powerful pipeline that yields fast and accurate text detection in natural scenes. The pipeline directly predicts words or text lines of arbitrary orientations and quadrilateral shapes in full images, eliminating unnecessary intermediate steps (e.g., candidate aggregation and word partitioning), with a single neural network. The simplicity of our pipeline allows concentrating efforts on designing loss functions and neural network architecture. Experiments on standard datasets including ICDAR 2015, COCO-Text and MSRA-TD500 demonstrate that the proposed algorithm significantly outperforms state-of-the-art methods in terms of both accuracy and efficiency. On the ICDAR 2015 dataset, the proposed algorithm achieves an F-score of 0.7820 at 13.2fps at 720p resolution.
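For concreteness, a sketch of decoding one common per-pixel geometry parameterization for this kind of direct oriented-box prediction (four boundary distances plus an angle); the sign conventions and coordinate order are assumptions and may differ from the paper's exact geometry map.

```python
import numpy as np

def decode_rbox(x, y, d_top, d_right, d_bottom, d_left, angle):
    """Decode one per-pixel prediction (4 boundary distances + angle)
    into the 4 corners of an oriented rectangle around pixel (x, y)."""
    # Axis-aligned corners relative to the predicting pixel.
    corners = np.array([
        [-d_left, -d_top], [d_right, -d_top],
        [d_right, d_bottom], [-d_left, d_bottom]], dtype=np.float32)
    c, s = np.cos(angle), np.sin(angle)
    R = np.array([[c, -s], [s, c]], dtype=np.float32)
    # Rotate around the pixel, then translate back to image coordinates.
    return corners @ R.T + np.array([x, y], dtype=np.float32)
```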
In this paper we introduce a new method for text detection in natural images. The method comprises two contributions: First, a fast and scalable engine to generate synthetic images of text in clutter. This engine overlays synthetic text onto existing background images in a natural way, accounting for the local 3D scene geometry. Second, we use the synthetic images to train a Fully-Convolutional Regression Network (FCRN) which efficiently performs text detection and bounding-box regression at all locations and multiple scales in an image. We discuss the relation of FCRN to the recently-introduced YOLO detector, as well as other end-to-end object detection systems based on deep learning. The resulting detection network significantly outperforms current methods for text detection in natural images, achieving an F-measure of 84.2% on the standard ICDAR 2013 benchmark. Furthermore, it can process 15 images per second on a GPU.
In object detection, non-maximum suppression (NMS) methods are widely adopted to remove duplicates among densely detected boxes and generate the final object instances. However, due to the degraded quality of dense detection boxes and the lack of explicit exploration of contextual information, existing NMS methods based on the simple intersection-over-union (IoU) metric tend to underperform on multi-oriented and elongated object detection. Distinguishing itself from duplicate removal in conventional NMS methods, we propose a novel graph fusion network, named GFNet, for multi-oriented object detection. Our GFNet is extensible and adaptively fuses dense detection boxes to detect more accurate and holistic multi-oriented object instances. Specifically, we first adopt a locality-aware clustering algorithm to group dense detection boxes into different clusters, and construct an instance subgraph for the detection boxes belonging to one cluster. Then, we propose a graph-based fusion network, realized with a graph convolutional network (GCN), to learn to reason over and fuse the detection boxes for generating final instance boxes. Extensive experiments are conducted on publicly available multi-oriented text datasets (including MSRA-TD500, ICDAR2015, ICDAR2017-MLT) and a multi-oriented object dataset (DOTA).
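As a rough illustration of the grouping step that precedes graph construction, a greedy IoU-based clustering of dense detection boxes; this simple threshold rule stands in for the paper's locality-aware clustering algorithm and is not its exact formulation.

```python
def iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def cluster_boxes(boxes, thr=0.3):
    """Greedily group dense detection boxes into clusters; each cluster
    would then form one instance subgraph for the GCN-based fusion."""
    clusters = []
    for box in boxes:
        for cluster in clusters:
            if any(iou(box, member) >= thr for member in cluster):
                cluster.append(box)
                break
        else:
            clusters.append([box])
    return clusters
```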
Furigana are pronunciation notes used in Japanese writing. Being able to detect them can help improve optical character recognition (OCR) performance, or can be used to produce more accurate digital copies of Japanese written media by rendering the furigana correctly. This project focuses on detecting furigana in Japanese books and comics. While the detection of Japanese text has been studied, there are currently no proposed methods for detecting furigana. We construct a new dataset containing Japanese written media and annotations of furigana. We propose an evaluation metric for such data which is similar to the evaluation protocols used in object detection, except that it allows groups of objects to be marked by one annotation. We propose a method for detecting furigana based on mathematical morphology and connected component analysis. We evaluate the detections on the dataset and compare different methods for text extraction. We also evaluate different types of images, such as books and comics, separately and discuss the challenges of each. The proposed method reaches an F1 score of 76% on the dataset. The method performs well on regular books, but less well on comics and irregularly formatted books. Finally, we show that the proposed method can improve OCR performance on the Manga109 dataset by 5%. Source code is available via \texttt{\url{https://github.com/nikolajkb/furiganadetection}}.
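A short OpenCV sketch of the kind of morphology-plus-connected-components pipeline described above; the binarization choice, kernel size, and area thresholds are placeholders, not the paper's settings.

```python
import cv2

def candidate_furigana_regions(page_bgr, min_area=10, max_area=400):
    """Extract small connected components as candidate furigana regions."""
    gray = cv2.cvtColor(page_bgr, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    # Close small gaps so the strokes of one small kana merge into one blob.
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
    closed = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)
    n, labels, stats, _ = cv2.connectedComponentsWithStats(closed, connectivity=8)
    boxes = []
    for i in range(1, n):  # label 0 is the background
        x, y, w, h, area = stats[i]
        if min_area <= area <= max_area:  # furigana glyphs are small
            boxes.append((x, y, w, h))
    return boxes
```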
This study focuses on improving the optical character recognition (OCR) data for panels in the COMICS dataset, the largest dataset containing text and images from comic books. To do this, we developed a pipeline for OCR processing and labeling of comic books and created the first text detection and recognition datasets for western comics, called "COMICS Text+: Detection" and "COMICS Text+: Recognition". We evaluated the performance of state-of-the-art text detection and recognition models on these datasets and found significant improvement in word accuracy and normalized edit distance compared to the text in COMICS. We also created a new dataset called "COMICS Text+", which contains the extracted text from the textboxes in the COMICS dataset. Using the improved text data of COMICS Text+ in the comics processing model resulted in state-of-the-art performance on cloze-style tasks without changing the model architecture. The COMICS Text+ dataset can be a valuable resource for researchers working on tasks including text detection, recognition, and high-level processing of comics, such as narrative understanding, character relations, and story generation. All the data and inference instructions can be accessed at https://github.com/gsoykan/comics_text_plus.
Oriented object detection is a challenging task in aerial images, since objects in aerial images appear at arbitrary orientations and are often densely packed. Mainstream detectors describe rotated objects using five-parameter or eight-corner-point representations, which suffer from representation ambiguity in the definition of oriented objects. In this paper, we propose a novel representation method based on the area ratios of parallelograms, termed ARP. Specifically, ARP regresses the minimum bounding rectangle of an oriented object together with three area ratios: the ratio of the oriented object's area to that of its minimum circumscribed rectangle, and the ratios of two parallelograms to the minimum circumscribed rectangle. This simplifies offset learning and eliminates the problems of angular periodicity and label point sequences for oriented objects. To further remedy the confusion problem for nearly horizontal objects, the area ratio between an object and its minimum circumscribed rectangle is employed to guide the choice between horizontal and oriented detection for each object. Moreover, a rotated efficient intersection-over-union (R-EIoU) loss based on the horizontal bounding box and the three area ratios is designed to optimize bounding box regression for rotated objects. Experimental results on remote sensing datasets, including HRSC2016, DOTA, and UCAS-AOD, show that our method achieves superior detection performance compared with many state-of-the-art approaches.
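As a worked example of the first of the three ratios, a small routine computing the area ratio between an oriented quadrilateral and its minimum horizontal bounding rectangle via the shoelace formula; the other two parallelogram ratios and the R-EIoU loss are not reproduced here.

```python
import numpy as np

def polygon_area(pts):
    """Shoelace area of a polygon given as an (N, 2) array of vertices."""
    x, y = pts[:, 0], pts[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))

def object_to_hbb_area_ratio(quad):
    """Ratio of an oriented object's quadrilateral area to the area of its
    minimum horizontal bounding rectangle; a ratio near 1 indicates a
    near-horizontal object, the cue used above to switch between
    horizontal and oriented detection."""
    x_min, y_min = quad.min(axis=0)
    x_max, y_max = quad.max(axis=0)
    hbb_area = (x_max - x_min) * (y_max - y_min)
    return polygon_area(quad) / (hbb_area + 1e-9)
```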
The International Workshop on Reading Music Systems (WoRMS) is a workshop that tries to connect researchers who develop systems for reading music, such as in the field of Optical Music Recognition, with other researchers and practitioners that could benefit from such systems, like librarians or musicologists. The relevant topics of interest for the workshop include, but are not limited to: Music reading systems; Optical music recognition; Datasets and performance evaluation; Image processing on music scores; Writer identification; Authoring, editing, storing and presentation systems for music scores; Multi-modal systems; Novel input-methods for music to produce written music; Web-based Music Information Retrieval services; Applications and projects; Use-cases related to written music. These are the proceedings of the 3rd International Workshop on Reading Music Systems, held in Alicante on the 23rd of July 2021.
This paper presents the final results of the Out-Of-Vocabulary 2022 (OOV) challenge. The OOV competition addresses an important aspect that is usually not studied by optical character recognition (OCR) models, namely the recognition of scene-text instances unseen at training time. The competition compiled a collection of public scene-text datasets comprising 326,385 images with 4,864,405 scene-text instances, thus covering a wide range of data distributions. A new and independent validation and test set was formed, consisting of scene-text instances that are out of vocabulary with respect to the training set. The competition was run on two tasks, end-to-end and cropped-word text recognition, respectively. A thorough analysis of the results of the baselines and the different participants is presented. Interestingly, current state-of-the-art models show a significant performance gap under the newly studied setting. We conclude that the OOV dataset proposed in this challenge will be an essential area to explore in order to develop scene-text models that achieve more robust and generalized predictions.
In the past decade, object detection has achieved significant progress in natural images, but not in aerial images, due to the massive variations in the scale and orientation of objects caused by the aerial perspective. More importantly, the lack of large-scale benchmarks has become a major obstacle to the development of object detection in aerial images (ODAI). In this paper, we present a large-scale Dataset for Object deTection in Aerial images (DOTA) and comprehensive baselines for ODAI. The proposed DOTA dataset contains 1,793,658 object instances of 18 categories annotated with oriented bounding boxes, collected from 11,268 aerial images. Based on this large-scale and well-annotated dataset, we build baselines covering 10 state-of-the-art algorithms with over 70 configurations, and evaluate the speed and accuracy of each model. Furthermore, we provide a code library for ODAI and build a website for evaluating and comparing different algorithms. Previous challenges run on DOTA have attracted more than 1300 teams worldwide. We believe that the expanded large-scale DOTA dataset, the extensive baselines, the code library, and the challenges can facilitate the design of robust algorithms and reproducible research on the problem of object detection in aerial images.
The challenging field of scene text detection requires complex data annotation, which is time-consuming and expensive. Techniques such as weak supervision can reduce the amount of data needed. In this paper, we propose a weak supervision method for scene text detection that leverages reinforcement learning (RL). The reward received by the RL agent is estimated by a neural network rather than inferred from ground-truth labels. First, we enhance an existing supervised RL approach to text detection with several training optimizations, allowing us to close the performance gap to regression-based algorithms. We then use the proposed system in weakly supervised and semi-supervised training on real-world data. Our results show that training in a weakly supervised setting is feasible. However, we find that using our model in a semi-supervised setting, e.g. when combining labeled synthetic data with unannotated real-world data, produces the best results.
Comprehensive understanding of key players and actions in multi-player sports broadcast videos is a challenging problem. Unlike news or finance videos, sports videos contain limited text. While there is strong research on action recognition in multi-player sports and on player detection, understanding the contextual text in video frames remains one of the most impactful avenues for sports video understanding. In this work, we study extremely accurate semantic text detection and recognition for sports clocks, and the challenges therein. We observe unique properties of sports clocks that make it difficult to utilize generic pre-trained detectors and recognizers, so that the text can be understood accurately enough to be aligned with external knowledge. We propose a novel distant supervision technique to automatically build sports clock datasets. Combined with suitable data augmentation and with any state-of-the-art text detection and recognition model architecture, we extract extremely accurate semantic text. Finally, we share our computational architecture pipeline for scaling this system in an industrial setting, and propose a robust dataset to validate our results.
Almost all scene text spotting (detection and recognition) methods rely on costly box annotations (e.g., text-line boxes, word-level boxes, and character-level boxes). For the first time, we demonstrate that training a scene text spotting model can be achieved with an extremely low-cost annotation of a single point per instance. We propose an end-to-end scene text spotting method that formulates scene text spotting as a sequence prediction task, like language modeling. Given an image as input, we formulate the desired detection and recognition results as a sequence of discrete tokens and use an auto-regressive Transformer to predict the sequence. We achieve promising results on several horizontal, multi-oriented, and arbitrarily shaped scene text benchmarks. Most importantly, we show that performance is not very sensitive to the position of the point annotation, which means the annotation can be produced much more easily than bounding boxes that require precise positions, or even generated automatically. We believe that this pioneering attempt indicates a significant opportunity for scene text spotting applications at a much larger scale than was previously possible.
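A small sketch of how one instance (a single annotated point plus its transcription) could be serialized into discrete tokens for an autoregressive Transformer, in the spirit of the sequence formulation above; the vocabulary layout, bin count, and token ordering are illustrative assumptions.

```python
def build_target_sequence(point, text, charset, n_bins=1000, img_w=1000, img_h=1000):
    """Serialize one text instance into a token sequence:
    [x_token, y_token, char tokens..., EOS]."""
    x, y = point
    # Quantize the point coordinates into n_bins positional tokens each.
    x_tok = int(x / img_w * (n_bins - 1))
    y_tok = int(y / img_h * (n_bins - 1))
    # Character tokens follow the coordinate tokens, offset past the coordinate bins.
    char_toks = [n_bins + charset.index(c) for c in text]
    eos_tok = n_bins + len(charset)
    return [x_tok, y_tok] + char_toks + [eos_tok]

# Example: a word "exit" anchored at pixel (320, 110) in a 1000x1000 image.
tokens = build_target_sequence((320, 110), "exit", charset=list("abcdefghijklmnopqrstuvwxyz"))
```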
Object detection is an important and challenging problem in computer vision. Although the past decade has witnessed major advances in object detection in natural scenes, such successes have been slow to transfer to aerial imagery, not only because of the huge variation in the scale, orientation and shape of the object instances on the earth's surface, but also due to the scarcity of well-annotated datasets of objects in aerial scenes. To advance object detection research in Earth Vision, also known as Earth Observation and Remote Sensing, we introduce a large-scale Dataset for Object deTection in Aerial images (DOTA). To this end, we collect 2806 aerial images from different sensors and platforms. Each image is about 4000 × 4000 pixels in size and contains objects exhibiting a wide variety of scales, orientations, and shapes. These DOTA images are then annotated by experts in aerial image interpretation using 15 common object categories. The fully annotated DOTA images contain 188,282 instances, each of which is labeled by an arbitrary (8 d.o.f.) quadrilateral. To build a baseline for object detection in Earth Vision, we evaluate state-of-the-art object detection algorithms on DOTA. Experiments demonstrate that DOTA well represents real Earth Vision applications and is quite challenging.
Automatic detection of weapons is important for improving the security and well-being of individuals, yet it remains a difficult task because of the large variety of weapon sizes, shapes, and appearances. Viewpoint variations and occlusion also make the task harder. Moreover, current object detection algorithms process rectangular regions, whereas a slender, elongated rifle may actually cover only a fraction of such a region, with the rest containing irrelevant details. To overcome these problems, we propose an orientation-aware CNN architecture for weapon detection that provides oriented bounding boxes with improved weapon detection performance. The proposed model provides orientation not only by dividing the angle into eight classes as a classification problem, but also by treating it as a regression problem. To train our weapon detection model, a new dataset comprising a total of 6,400 weapon images was collected from the web and then manually annotated with oriented bounding boxes. Our dataset provides not only oriented bounding boxes as ground truth but also horizontal bounding boxes. We also provide our dataset in formats suitable for several modern object detectors to enable further research in this area. The proposed model is evaluated on this dataset, and a comparative analysis against off-the-shelf object detectors shows the superior performance of the proposed model, measured with standard evaluation strategies. The dataset and the model implementation are publicly available at this link: https://bit.ly/2tyzicf.
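A minimal sketch of the classification side of the angle prediction, mapping an orientation angle to one of eight coarse direction classes and back; the bin layout (equal bins over a full turn, class centers as targets) is an assumption, not the paper's exact encoding.

```python
import math

def angle_to_class(theta_rad, n_classes=8):
    """Map an orientation angle (radians) to one of n_classes direction bins."""
    theta = theta_rad % (2 * math.pi)
    bin_width = 2 * math.pi / n_classes
    return int(theta // bin_width)

def class_to_angle(cls, n_classes=8):
    """Return the center angle (radians) of a direction class."""
    bin_width = 2 * math.pi / n_classes
    return cls * bin_width + bin_width / 2
```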
Object detectors trained with weak annotations are affordable alternatives to fully supervised ones. However, there is still a significant performance gap between them. We propose to narrow this gap by fine-tuning a pre-trained weakly supervised detector on a few fully annotated samples automatically selected from the training set by ``box-in-box'' (BiB), a novel active learning strategy designed specifically to address the well-documented failure modes of weakly supervised detectors. Experiments on the VOC07 and COCO benchmarks show that BiB outperforms other active learning techniques and significantly improves the performance of the base weakly supervised detector with only a few fully annotated images per class. BiB reaches 97% of the performance of a fully supervised Fast RCNN with only 10% of fully annotated images on VOC07. On COCO, using on average 10 fully annotated images per class, or the equivalent of 1% of the training set, BiB also reduces the performance gap (in AP) between the weakly supervised detector and the fully supervised Fast RCNN by over 70%, showing a good trade-off between performance and data efficiency. Our code is publicly available at https://github.com/huyvvo/bib.
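An illustrative check for the failure pattern that gives the strategy its name: flagging images whose detections contain one box nested inside another; the containment tolerance and the image-level flag are simplifying assumptions, and the real BiB selection criterion and scoring are more involved.

```python
def is_inside(inner, outer, tol=0.9):
    """True if box `inner` is (almost) fully contained in box `outer`;
    boxes are (x1, y1, x2, y2)."""
    ix1, iy1 = max(inner[0], outer[0]), max(inner[1], outer[1])
    ix2, iy2 = min(inner[2], outer[2]), min(inner[3], outer[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    inner_area = (inner[2] - inner[0]) * (inner[3] - inner[1])
    return inner_area > 0 and inter / inner_area >= tol

def has_box_in_box(pred_boxes):
    """Flag an image whose detections include a box nested inside another box."""
    for i, a in enumerate(pred_boxes):
        for j, b in enumerate(pred_boxes):
            if i != j and is_inside(a, b):
                return True
    return False
```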
With the rise of deep convolutional neural networks, object detection has achieved prominent advances in the past few years. However, such prosperity cannot mask the unsatisfactory state of small object detection (SOD), one of the notoriously challenging tasks in computer vision, owing to the poor visual appearance and noisy representations caused by the intrinsic structure of small targets. In addition, large-scale datasets for benchmarking small object detection methods remain a bottleneck. In this paper, we first conduct a thorough review of small object detection. Then, to catalyze the development of SOD, we construct two large-scale Small Object Detection dAtasets (SODA), SODA-D and SODA-A, which focus on driving and aerial scenarios, respectively. SODA-D includes 24,704 high-quality traffic images and 277,596 instances of 9 categories. For SODA-A, we harvest 2,510 high-resolution aerial images and annotate 800,203 instances over 9 classes. To the best of our knowledge, the proposed datasets are the first large-scale benchmarking attempt tailored for multi-class SOD with a massive number of annotated instances. Finally, we evaluate the performance of mainstream methods on SODA. We expect the released benchmarks to facilitate the development of SOD and spawn more breakthroughs in this field. The datasets and code will soon be available at \url{https://shaunyuan22.github.io/soda}.
State-of-the-art object detectors are fast and accurate, but they require a large amount of well annotated training data to obtain good performance. However, obtaining a large amount of training annotations specific to a particular task, i.e., fine-grained annotations, is costly in practice. In contrast, obtaining common-sense relationships from text, e.g., "a table-lamp is a lamp that sits on top of a table", is much easier. Additionally, common-sense relationships like "on-top-of" are easy to annotate in a task-agnostic fashion. In this paper, we propose a probabilistic model that uses such relational knowledge to transform an off-the-shelf detector of coarse object categories (e.g., "table", "lamp") into a detector of fine-grained categories (e.g., "table-lamp"). We demonstrate that our method, RelDetect, achieves performance competitive to finetuning based state-of-the-art object detector baselines when an extremely low amount of fine-grained annotations is available ($0.2\%$ of entire dataset). We also demonstrate that RelDetect is able to utilize the inherent transferability of relationship information to obtain a better performance ($+5$ mAP points) than the above baselines on an unseen dataset (zero-shot transfer). In summary, we demonstrate the power of using relationships for object detection on datasets where fine-grained object categories can be linked to coarse-grained categories via suitable relationships.
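For intuition, a toy factorization of how coarse detection scores and relational knowledge might be combined into fine-grained category scores (e.g., "table-lamp" from "lamp" on-top-of "table"); this is an illustrative combination rule, not RelDetect's exact probabilistic model.

```python
def fine_grained_scores(coarse_scores, relation_scores):
    """Combine coarse category scores with relationship scores into
    fine-grained category scores.

    coarse_scores:   dict like {"lamp": 0.9, "table": 0.8}
    relation_scores: dict like {("lamp", "on-top-of", "table"): 0.7}
    """
    fine = {}
    for (subj, rel, obj), p_rel in relation_scores.items():
        name = f"{obj}-{subj}"  # e.g. "table-lamp"
        fine[name] = coarse_scores.get(subj, 0.0) * coarse_scores.get(obj, 0.0) * p_rel
    return fine
```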