Scene text images come in different shapes and are subject to various distortions, e.g. perspective distortions. To handle these challenges, state-of-the-art methods rely on a rectification network connected to the text recognition network. Together they form a linear pipeline that applies text rectification to all input images, even those that could be recognized without it. Undoubtedly, the rectification network improves overall text recognition performance. However, in some cases the rectification network introduces unnecessary distortions, leading to incorrect predictions for images that would otherwise have been recognized correctly. To alleviate these unnecessary distortions, the portmanteauing of features is proposed. The portmanteau feature, inspired by the portmanteau word, is a feature containing information from both the original text image and the rectified image. To generate the portmanteau feature, a non-linear input pipeline with a block matrix initialization is presented. In this work, the transformer is chosen as the recognition network because its use of attention and its inherent parallelism can effectively handle the portmanteau feature. The proposed method is examined on 6 benchmarks and compared with 13 state-of-the-art methods. The experimental results show that the proposed method outperforms the state-of-the-art methods on various benchmarks.
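A minimal sketch of how a portmanteau-style fusion could look, assuming the two branches produce same-sized feature sequences and that the block matrix initializes the projection as an average of the two branches (the paper's exact block layout is not specified here):

```python
import torch
import torch.nn as nn

class PortmanteauFusion(nn.Module):
    """Hypothetical fusion of features from the original and rectified images.

    The projection weight is initialized as a block matrix [I | I] / 2 so that,
    at the start of training, the fused feature is simply the average of the
    two branches; training can then drift away from this initialization.
    """
    def __init__(self, dim: int):
        super().__init__()
        self.proj = nn.Linear(2 * dim, dim, bias=False)
        with torch.no_grad():
            eye = torch.eye(dim)
            self.proj.weight.copy_(torch.cat([eye, eye], dim=1) / 2.0)

    def forward(self, feat_original: torch.Tensor, feat_rectified: torch.Tensor) -> torch.Tensor:
        # feat_*: (batch, seq_len, dim) feature sequences from the two pipelines
        return self.proj(torch.cat([feat_original, feat_rectified], dim=-1))

if __name__ == "__main__":
    fusion = PortmanteauFusion(dim=256)
    a = torch.randn(2, 64, 256)   # features of the original image
    b = torch.randn(2, 64, 256)   # features of the rectified image
    print(fusion(a, b).shape)     # torch.Size([2, 64, 256])
```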
Scene text recognition (STR) involves the task of reading text in cropped images of natural scenes. Conventional models in STR employ a convolutional neural network (CNN) followed by a recurrent neural network in an encoder-decoder framework. In recent times, the transformer architecture has been widely adopted in STR because it shows strong capability in capturing the long-term dependencies that are prominent in scene text images. Many researchers utilized the transformer as part of a hybrid CNN-transformer encoder, often followed by a transformer decoder. However, such methods only make use of the long-term dependency mid-way through the encoding process. Although the vision transformer (ViT) is able to capture such dependency at an early stage, its utilization remains largely unexploited in STR. This work proposes the use of a transformer-only model as a simple baseline which outperforms hybrid CNN-transformer models. Furthermore, two key areas for improvement were identified. Firstly, the first decoded character has the lowest prediction accuracy. Secondly, images of different original aspect ratios react differently to the patch resolutions, while ViT only employs one fixed patch resolution. To explore these areas, Pure Transformer with Integrated Experts (PTIE) is proposed. PTIE is a transformer model that can process multiple patch resolutions and decode in both the original and reverse character orders. It is examined on 7 commonly used benchmarks and compared with over 20 state-of-the-art methods. The experimental results show that the proposed method outperforms them and obtains state-of-the-art results on most benchmarks.
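A toy sketch of the multi-patch-resolution idea: one image is embedded at two (assumed) patch sizes and both token sequences pass through a single shared-weight transformer encoder. The real PTIE configuration, expert routing, and bidirectional decoding are not reproduced here:

```python
import torch
import torch.nn as nn

class MultiResolutionPatchEmbed(nn.Module):
    """Embed one image at two patch resolutions (hypothetical sizes)."""
    def __init__(self, in_ch=3, dim=256, patch_sizes=((4, 8), (8, 4))):
        super().__init__()
        self.embeds = nn.ModuleList(
            nn.Conv2d(in_ch, dim, kernel_size=p, stride=p) for p in patch_sizes
        )

    def forward(self, img):
        # img: (B, C, H, W); returns one token sequence per patch resolution
        seqs = []
        for embed in self.embeds:
            tokens = embed(img)                              # (B, dim, H/ph, W/pw)
            seqs.append(tokens.flatten(2).transpose(1, 2))   # (B, N, dim)
        return seqs

if __name__ == "__main__":
    img = torch.randn(2, 3, 32, 128)                  # a cropped text image
    encoder = nn.TransformerEncoder(
        nn.TransformerEncoderLayer(d_model=256, nhead=8, batch_first=True),
        num_layers=2,
    )
    for seq in MultiResolutionPatchEmbed()(img):
        print(encoder(seq).shape)                     # shared weights across resolutions
```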
Over the past decades, scene text recognition has gained worldwide attention from both academia and practitioners owing to its wide range of applications. Despite the achievements in optical character recognition, scene text recognition remains challenging due to inherent problems such as distortions and irregular layouts. Most existing approaches mainly rely on recurrent or convolutional neural networks. However, while recurrent neural networks (RNNs) usually suffer from slow training due to sequential computation and encounter vanishing gradients or bottlenecks, CNNs trade off between complexity and performance. In this paper, we introduce SAFL, a self-attention-based neural network model with focal loss for scene text recognition, to overcome the limitations of existing approaches. Using focal loss instead of the negative log-likelihood helps the model focus more on training low-frequency samples. In addition, to deal with distorted and irregular text, we exploit a spatial transformer network (STN) to rectify the text before passing it to the recognition network. We perform experiments to compare the performance of the proposed model on seven benchmarks. The numerical results show that our model achieves the best performance.
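A minimal sketch of the focal-loss substitution described above, applied to per-character logits; the focusing parameter gamma is the standard value from the focal loss literature and may differ from what SAFL uses:

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0):
    """Focal loss over per-character logits, a drop-in replacement for NLL.

    logits:  (N, num_classes) scores for N character positions
    targets: (N,) ground-truth class indices
    """
    log_probs = F.log_softmax(logits, dim=-1)
    nll = F.nll_loss(log_probs, targets, reduction="none")
    pt = torch.exp(-nll)                       # probability of the true class
    return ((1.0 - pt) ** gamma * nll).mean()  # down-weights easy, frequent samples

if __name__ == "__main__":
    logits = torch.randn(20, 37)               # e.g. 26 letters + 10 digits + EOS
    targets = torch.randint(0, 37, (20,))
    print(focal_loss(logits, targets))
```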
Scene text recognition (STR) is an important bridge between images and text and has attracted abundant research attention. While convolutional neural networks (CNNs) have made remarkable progress on this task, most existing works require an extra module (a context modeling module) to help the CNN capture global dependencies, in order to address the inductive bias and strengthen the relationships between text features. Recently, the Transformer has been proposed as a promising network for global context modeling through its self-attention mechanism, but its main drawback when applied to recognition is efficiency. We propose a 1-D split to address the complexity challenge and replace the CNN with a Transformer encoder to reduce the need for a context modeling module. Furthermore, recent methods use a frozen initial embedding to guide the decoder in decoding the text, which leads to a loss of accuracy. We propose to use a learnable initial embedding learned from the Transformer encoder, making it adaptive to different input images. Above all, we introduce a novel text recognition architecture, named TRansformer-based text recognizer with Initial embedding Guidance (TRIG), composed of three stages (transformation, feature extraction, and prediction). Extensive experiments show that our method can achieve state-of-the-art results on text recognition benchmarks.
Text recognition is a long-standing research problem for document digitization. Existing approaches are usually built on CNNs for image understanding and RNNs for character-level text generation. In addition, another language model is usually needed to improve overall accuracy as a post-processing step. In this paper, we propose an end-to-end text recognition approach with pre-trained image Transformer and text Transformer models, namely TrOCR, which leverages the Transformer architecture for both image understanding and wordpiece-level text generation. The TrOCR model is simple but effective, can be pre-trained with large-scale synthetic data, and can be fine-tuned with human-labeled datasets. Experiments show that the TrOCR model outperforms the current state-of-the-art models on printed, handwritten, and scene text recognition tasks. The TrOCR models and code are publicly available at \url{https://aka.ms/trocr}.
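A short usage sketch assuming the Hugging Face Transformers packaging of the released TrOCR checkpoints (model identifiers and preprocessing defaults may differ from the official release linked above):

```python
# pip install transformers pillow
from PIL import Image
from transformers import TrOCRProcessor, VisionEncoderDecoderModel

processor = TrOCRProcessor.from_pretrained("microsoft/trocr-base-printed")
model = VisionEncoderDecoderModel.from_pretrained("microsoft/trocr-base-printed")

# Stand-in for a cropped text-line image; replace with a real crop.
image = Image.new("RGB", (384, 64), color="white")
pixel_values = processor(images=image, return_tensors="pt").pixel_values

generated_ids = model.generate(pixel_values)                # wordpiece-level decoding
text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(text)
```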
Scene text spotting is of great importance to the computer vision community due to its wide variety of applications. Recent methods attempt to introduce linguistic knowledge for challenging recognition rather than relying on pure visual classification. However, how to effectively model linguistic rules in end-to-end deep networks remains a research challenge. In this paper, we argue that the limited capacity of language models comes from 1) implicit language modeling; 2) unidirectional feature representation; and 3) a language model with noise input. Correspondingly, we propose an autonomous, bidirectional and iterative ABINet++ for scene text spotting. Firstly, the autonomous principle suggests enforcing explicit language modeling by decoupling the recognizer into a vision model and a language model and blocking gradient flow between the two models. Secondly, a novel bidirectional cloze network (BCN) is proposed as the language model, based on bidirectional feature representation. Thirdly, we propose an execution manner of iterative correction for the language model which can effectively alleviate the impact of noise input. Finally, to polish ABINet++ in long text recognition, we propose to aggregate horizontal features by embedding Transformer units inside a U-Net, and design a position and content attention module which integrates character order and content to attend to character features precisely. ABINet++ achieves state-of-the-art performance on both scene text recognition and scene text spotting benchmarks, which consistently demonstrates the superiority of our method in various environments, especially on low-quality images. Besides, extensive experiments on both English and Chinese also prove that a text spotter that incorporates our language modeling method can significantly improve its performance in both accuracy and speed compared with commonly used attention-based recognizers.
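A schematic sketch of the gradient-blocking part of the autonomous principle only: the language model refines the vision model's output but receives a detached copy, so its loss does not back-propagate into the vision branch. The sub-modules below are placeholders, and the BCN and iterative correction are not shown:

```python
import torch
import torch.nn as nn

class DecoupledRecognizer(nn.Module):
    """Vision model and language model trained as decoupled components."""
    def __init__(self, vision_model: nn.Module, language_model: nn.Module):
        super().__init__()
        self.vision_model = vision_model
        self.language_model = language_model

    def forward(self, image):
        vision_logits = self.vision_model(image)
        # Block gradient flow: the LM refines the prediction, but its loss
        # does not back-propagate into the vision model.
        refined_logits = self.language_model(vision_logits.detach())
        return vision_logits, refined_logits

if __name__ == "__main__":
    vm = nn.Linear(32, 37)            # stand-in vision model
    lm = nn.Linear(37, 37)            # stand-in language model
    rec = DecoupledRecognizer(vm, lm)
    v, r = rec(torch.randn(4, 32))
    print(v.shape, r.shape)
```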
The attention-based encoder-decoder framework is widely used for scene text recognition tasks. However, for current state-of-the-art (SOTA) methods, there is room for improvement in the effective use of the local visual and global contextual information of the input text image, as well as in the robust correlation between the scene processing module (encoder) and the text processing module (decoder). In this paper, we propose a Representation and Correlation Enhanced Encoder-Decoder Framework (RCEED) to address these deficiencies and break the performance bottleneck. In the encoder module, local visual features, global contextual features, and position information are aligned and fused to generate a compact comprehensive feature map. In the decoder module, two methods are used to enhance the correlation between the scene and text feature spaces: 1) the decoder initialization is guided by the holistic feature and the global glimpse vector derived from the encoder; 2) the enriched glimpse vector produced by multi-head general attention assists the RNN iteration and the character prediction at each time step. Meanwhile, we also design a Layernorm-Dropout LSTM cell to improve the model's generalization to variable text. Extensive experiments on benchmarks demonstrate favorable performance on scene text recognition tasks, especially on irregular text.
Modeling semantic information is useful for scene text recognition. In this work, we propose to jointly model semantic and visual information with a Visual-Semantic Transformer (VST). The VST first explicitly extracts primary semantic information from the visual feature map with a transformer module and a primary visual-semantic alignment module. The semantic information is then concatenated with the visual feature map (viewed as a sequence) to form a pseudo multi-domain sequence combining visual and semantic information, which is subsequently fed into a transformer-based interaction module to enable learning of the interaction between visual and semantic features. In this way, the visual features can be enhanced by the semantic information, and vice versa. The enhanced version of the visual features is further decoded by a secondary visual-semantic alignment module that shares weights with the primary one. Finally, the decoded visual features and the enhanced semantic features are jointly processed by a third transformer module to obtain the final text prediction. Experiments on seven public benchmarks covering regular and irregular text recognition datasets validate our proposed model, which achieves state-of-the-art results on four of the seven benchmarks.
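A minimal sketch of the pseudo multi-domain sequence: semantic tokens are concatenated with the flattened visual feature map and passed through a transformer encoder acting as the interaction module (dimensions and layer counts are assumed, and the alignment modules are omitted):

```python
import torch
import torch.nn as nn

dim = 256
visual = torch.randn(2, 128, dim)      # flattened visual feature map as a sequence
semantic = torch.randn(2, 26, dim)     # primary semantic tokens

# Interaction module: plain transformer encoder over the concatenated sequence.
interaction = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True), num_layers=2
)
enhanced = interaction(torch.cat([semantic, visual], dim=1))
enhanced_semantic, enhanced_visual = enhanced[:, :26], enhanced[:, 26:]
print(enhanced_semantic.shape, enhanced_visual.shape)
```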
While the Transformer architecture has become the de-facto standard for natural language processing tasks, its applications to computer vision remain limited. In vision, attention is either applied in conjunction with convolutional networks, or used to replace certain components of convolutional networks while keeping their overall structure in place. We show that this reliance on CNNs is not necessary and a pure transformer applied directly to sequences of image patches can perform very well on image classification tasks. When pre-trained on large amounts of data and transferred to multiple mid-sized or small image recognition benchmarks (ImageNet, CIFAR-100, VTAB, etc.), Vision Transformer (ViT) attains excellent results compared to state-of-the-art convolutional networks while requiring substantially fewer computational resources to train.
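A bare-bones sketch of the patch-sequence idea: split the image into fixed-size patches, linearly embed them, prepend a class token, add position embeddings, and run a standard transformer encoder. This is deliberately simplified (the actual ViT uses pre-norm blocks, GELU MLPs, and specific initialization):

```python
import torch
import torch.nn as nn

class TinyViT(nn.Module):
    """Bare-bones ViT-style classifier over a sequence of image patches."""
    def __init__(self, image_size=224, patch_size=16, dim=192, depth=4,
                 heads=3, num_classes=1000):
        super().__init__()
        num_patches = (image_size // patch_size) ** 2
        self.patch_embed = nn.Conv2d(3, dim, kernel_size=patch_size, stride=patch_size)
        self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.pos_embed = nn.Parameter(torch.zeros(1, num_patches + 1, dim))
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True),
            num_layers=depth,
        )
        self.head = nn.Linear(dim, num_classes)

    def forward(self, img):
        x = self.patch_embed(img).flatten(2).transpose(1, 2)   # (B, N, dim)
        cls = self.cls_token.expand(x.size(0), -1, -1)
        x = torch.cat([cls, x], dim=1) + self.pos_embed
        x = self.encoder(x)
        return self.head(x[:, 0])                               # classify from the class token

if __name__ == "__main__":
    print(TinyViT()(torch.randn(1, 3, 224, 224)).shape)         # torch.Size([1, 1000])
```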
The attention-based encoder-decoder framework has become popular in scene text recognition, largely due to its superiority in integrating recognition cues from both the visual and semantic domains. However, recent studies show that the two cues can be misaligned on difficult text (e.g., text with rare shapes), and constraints such as character position have been introduced to alleviate the problem. Despite some success, such content-free position embeddings are only weakly associated with the embeddings of meaningful local image regions. In this paper, we propose a novel module named Multi-Domain Character Distance Perception (MDCDP) to establish a visually and semantically related position encoding. MDCDP uses the position embedding to query both visual and semantic features via the attention mechanism. It naturally encodes positional cues that describe both the visual and semantic distances among characters. We develop a novel architecture named CDistNet that stacks MDCDP several times to guide precise distance modeling. As a result, the visual-semantic alignment is well established even when various difficulties are present. We apply CDistNet to two augmented datasets and six public benchmarks. The experiments show that CDistNet achieves state-of-the-art recognition accuracy, while visualizations also show that CDistNet attains proper attention localization in both the visual and semantic domains. We will release our code upon acceptance.
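A heavily simplified schematic of the core idea only: learned position embeddings act as queries that attend to visual and semantic features and the two results are fused. This is not the exact MDCDP block; all module shapes and the fusion step are assumptions:

```python
import torch
import torch.nn as nn

class PositionQueryFusion(nn.Module):
    """Position embeddings query visual and semantic features, then fuse."""
    def __init__(self, dim=256, heads=8, max_len=26):
        super().__init__()
        self.pos_embed = nn.Parameter(torch.randn(1, max_len, dim) * 0.02)
        self.attn_vis = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.attn_sem = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.fuse = nn.Linear(2 * dim, dim)

    def forward(self, visual_feats, semantic_feats):
        # visual_feats: (B, N_v, dim) encoder outputs
        # semantic_feats: (B, N_s, dim) embedded character hypotheses
        q = self.pos_embed.expand(visual_feats.size(0), -1, -1)
        vis, _ = self.attn_vis(q, visual_feats, visual_feats)
        sem, _ = self.attn_sem(q, semantic_feats, semantic_feats)
        return self.fuse(torch.cat([vis, sem], dim=-1))   # position-aware fusion

if __name__ == "__main__":
    m = PositionQueryFusion()
    out = m(torch.randn(2, 128, 256), torch.randn(2, 26, 256))
    print(out.shape)   # torch.Size([2, 26, 256])
```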
Context-aware STR methods typically use internal autoregressive (AR) language models (LMs). Inherent limitations of AR models have motivated two-stage methods that employ an external LM. The conditional independence of the external LM on the input image may cause it to erroneously rectify correct predictions, leading to significant inefficiencies. Our method, PARSeq, learns an ensemble of internal AR LMs with shared weights using permutation language modeling. It unifies context-free non-AR and context-aware AR inference, and iterative refinement using bidirectional context. Using synthetic training data, PARSeq achieves state-of-the-art (SOTA) results on STR benchmarks (91.9% accuracy) and more challenging datasets. When trained on real data, it establishes new SOTA results (96.0% accuracy). PARSeq is optimal on accuracy vs. parameter count, FLOPS, and latency because of its simple, unified structure and parallel token processing. Due to its extensive use of attention, it is robust to arbitrarily oriented text, which is common in real-world images. Code, pretrained weights, and data are available at: https://github.com/baudm/parseq.
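A toy helper showing how permutation language modeling can be expressed as an attention mask: for a given decoding permutation, each position may see the content of positions that come strictly earlier in the permuted order. The actual PARSeq implementation (see the linked repository) differs in detail:

```python
import torch

def permutation_mask(perm: torch.Tensor) -> torch.Tensor:
    """Attention mask for one decoding permutation.

    Position perm[i] may attend to the content of positions perm[:i]; its own
    content is masked, as in permutation language modeling. True marks a
    *blocked* pair, matching torch.nn.MultiheadAttention's attn_mask convention.
    """
    n = perm.numel()
    order = torch.empty(n, dtype=torch.long)
    order[perm] = torch.arange(n)          # rank of each position in the decode order
    # query q may look at key k iff k comes strictly earlier in the decode order
    allowed = order.unsqueeze(1) > order.unsqueeze(0)
    return ~allowed

if __name__ == "__main__":
    torch.manual_seed(0)
    perm = torch.randperm(5)               # one member of the shared-weight AR "ensemble"
    print(perm)
    print(permutation_mask(perm))          # a left-to-right perm gives a strictly causal mask
```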
Leveraging the advances of natural language processing, most recent scene text recognizers adopt an encoder-decoder architecture where text images are first converted to representative features and then to a sequence of characters via `sequential decoding'. However, scene text images suffer from rich noise from different sources, such as complex backgrounds and geometric distortions, which often confuses the decoder and leads to incorrect alignment of visual features at noisy decoding time steps. This paper presents I2C2W, a novel scene text recognition technique that is tolerant to geometric and photometric degradation by decomposing scene text recognition into two inter-connected tasks. The first task focuses on image-to-character (I2C) mapping, which detects a set of character candidates from images based on different alignments of visual features in a non-sequential way. The second task tackles character-to-word (C2W) mapping, which recognizes scene text by decoding words from the detected character candidates. The direct learning from character semantics (instead of noisy image features) corrects falsely detected character candidates effectively, which greatly improves the final text recognition accuracy. Extensive experiments over nine public datasets show that the proposed I2C2W outperforms the state-of-the-art by large margins on challenging scene text datasets with various curvature and perspective distortions. It also achieves very competitive recognition performance on multiple normal scene text datasets.
The Visual Question Answering (VQA) task leverages visual images and language analysis to answer textual questions about an image. It has been a popular research topic with an increasing number of real-world applications over the past decade. This paper describes our recent research on AliceMind-MMU (Alibaba's collection of encoder-decoders from the Machine Intelligence Lab of DAMO Academy for MultiMedia Understanding), which obtains results similar to or even slightly better than human performance on VQA. This is achieved by systematically improving the VQA pipeline, including: (1) pre-training with comprehensive visual and textual feature representations; (2) effective cross-modal interaction with learning to attend; and (3) a novel knowledge mining framework with specialized expert modules for complex VQA tasks. Treating different types of visual questions with the corresponding expertise plays an important role in boosting the performance of our VQA architecture up to the human level. Extensive experiments and analyses are conducted to demonstrate the effectiveness of the new research work.
Astounding results from Transformer models on natural language tasks have intrigued the vision community to study their application to computer vision problems. Among their salient benefits, Transformers enable modeling long dependencies between input sequence elements and support parallel processing of sequences, in contrast to recurrent networks such as long short-term memory (LSTM). Different from convolutional networks, Transformers require minimal inductive biases for their design and are naturally suited as set-functions. Furthermore, the straightforward design of Transformers allows processing multiple modalities (e.g., images, videos, text and speech) using similar processing blocks and demonstrates excellent scalability to very large capacity networks and huge datasets. These strengths have led to exciting progress on a number of vision tasks using Transformer networks. This survey aims to provide a comprehensive overview of the Transformer models in the computer vision discipline. We start with an introduction to fundamental concepts behind the success of Transformers, i.e., self-attention, large-scale pre-training, and bidirectional feature encoding. We then cover extensive applications of transformers in vision including popular recognition tasks (e.g., image classification, object detection, action recognition, and segmentation), generative modeling, multi-modal tasks (e.g., visual-question answering, visual reasoning, and visual grounding), video processing (e.g., activity recognition, video forecasting), low-level vision (e.g., image super-resolution, image enhancement, and colorization) and 3D analysis (e.g., point cloud classification and segmentation). We compare the respective advantages and limitations of popular techniques both in terms of architectural design and their experimental value. Finally, we provide an analysis of open research directions and possible future works. We hope this effort will ignite further interest in the community to solve current challenges towards the application of transformer models in computer vision.
The International Workshop on Reading Music Systems (WoRMS) is a workshop that tries to connect researchers who develop systems for reading music, such as in the field of Optical Music Recognition, with other researchers and practitioners that could benefit from such systems, like librarians or musicologists. The relevant topics of interest for the workshop include, but are not limited to: Music reading systems; Optical music recognition; Datasets and performance evaluation; Image processing on music scores; Writer identification; Authoring, editing, storing and presentation systems for music scores; Multi-modal systems; Novel input-methods for music to produce written music; Web-based Music Information Retrieval services; Applications and projects; Use-cases related to written music. These are the proceedings of the 3rd International Workshop on Reading Music Systems, held in Alicante on the 23rd of July 2021.
Scene text recognition (STR) has been an active research topic in computer vision for years. To tackle this challenging problem, numerous innovative methods have been proposed, and incorporating linguistic knowledge into STR models has recently become a prominent trend. In this work, we first draw inspiration from the recent progress in Vision Transformers (ViT) to construct a conceptually simple yet powerful vision STR model, which is built upon ViT and outperforms previous state-of-the-art models for scene text recognition, including both pure vision models and language-augmented methods. To integrate linguistic knowledge, we further propose a Multi-Granularity Prediction strategy to inject information from the language modality into the model in an implicit way, i.e., subword representations (BPE and WordPiece) widely used in NLP are introduced into the output space, in addition to the conventional character-level representation, while no independent language model (LM) is adopted. The resulting algorithm (termed MGP-STR) is able to push the performance envelope of STR to a higher level. Specifically, it achieves an average recognition accuracy of 93.35% on standard benchmarks. Code will be released soon.
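A rough sketch of the multi-granularity output space: one shared sequence of visual tokens feeds three classifiers over character, BPE, and WordPiece vocabularies. The vocabulary sizes and the simple attention pooling below are placeholders, not the paper's actual aggregation mechanism:

```python
import torch
import torch.nn as nn

class MultiGranularityHeads(nn.Module):
    """Character, BPE, and WordPiece prediction heads over shared ViT tokens."""
    def __init__(self, dim=384, max_len=27,
                 char_vocab=38, bpe_vocab=50257, wp_vocab=30522):
        super().__init__()
        self.pool = nn.Linear(dim, max_len)   # crude token-to-position aggregation
        self.char_head = nn.Linear(dim, char_vocab)
        self.bpe_head = nn.Linear(dim, bpe_vocab)
        self.wp_head = nn.Linear(dim, wp_vocab)

    def forward(self, tokens):
        # tokens: (B, N, dim) ViT outputs; weight them into max_len output slots
        attn = self.pool(tokens).softmax(dim=1)            # (B, N, max_len)
        slots = torch.einsum("bnd,bnl->bld", tokens, attn) # (B, max_len, dim)
        return self.char_head(slots), self.bpe_head(slots), self.wp_head(slots)

if __name__ == "__main__":
    heads = MultiGranularityHeads()
    c, b, w = heads(torch.randn(2, 196, 384))
    print(c.shape, b.shape, w.shape)
```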
The Transformer, an attention-based encoder-decoder architecture, has revolutionized the field of natural language processing. Inspired by this significant achievement, several pioneering works have recently adapted Transformer-based architectures to the computer vision (CV) field and demonstrated their effectiveness on various CV tasks, relying on modeling capability competitive with modern convolutional neural networks. In this paper, we present a comprehensive review of three hundred different vision Transformers for three fundamental CV tasks (classification, detection, and segmentation), and propose a taxonomy that organizes these methods according to their motivations, structures, and usage scenarios. Owing to the differences in training settings and task orientation, we also evaluate these methods on different configurations for easy and intuitive comparison, rather than only on various benchmarks. Furthermore, we reveal a series of essential yet underexplored aspects that may enable the Transformer to stand out from numerous architectures, e.g., slack high-level semantic embeddings to bridge the gap between visual and sequential Transformers. Finally, three promising directions for future research are suggested for further investigation.
The feature extractor plays a crucial role in text recognition (TR), but exploration of customizing its architecture has been relatively rare due to expensive manual tuning. In this work, inspired by the success of neural architecture search (NAS), we propose to search for a suitable feature extractor. We design a domain-specific search space by exploring principles for good feature extractors. The space includes a 3D-structured space for the spatial model and a transformation-based space for the sequential model. Since the space is huge and complexly structured, no existing NAS algorithm can be applied. We propose a two-stage algorithm to search the space effectively. In the first stage, we cut the space into several blocks and progressively train each block with the help of an auxiliary head. We introduce a latency constraint in the second stage and search a sub-network from the trained supernet via natural gradient descent. In the experiments, a series of ablation studies is performed to better understand the designed space, the search algorithm, and the searched architectures. We also compare the proposed method with various state-of-the-art methods on both handwritten and scene TR tasks. Extensive results show that our approach can achieve better recognition performance with smaller latency.
Vision transformers have shown great success on numerous computer vision tasks. However, their central component, softmax attention, prohibits vision transformers from scaling up to high-resolution images because its computational complexity and memory footprint are quadratic. Although linear attention was introduced in natural language processing (NLP) tasks to mitigate a similar issue, directly applying existing linear attention to vision transformers may not lead to satisfactory results. We investigate this problem and find that computer vision tasks focus more on local information compared with NLP tasks. Based on this observation, we present Vicinity Attention, which introduces a locality bias into vision transformers with linear complexity. Specifically, for each image patch, we adjust its attention weight based on its 2D Manhattan distance to its neighboring patches. In this way, neighboring patches receive stronger attention than distant patches. Moreover, since our Vicinity Attention requires the token length to be much larger than the feature dimension to show its efficiency advantage, we further propose a new Vicinity Vision Transformer (VVT) structure to reduce the feature dimension without degrading the accuracy. We perform extensive experiments on the CIFAR-100, ImageNet-1K, and ADE20K datasets to validate the effectiveness of our method. Our method has a slower growth rate of GFLOPs than previous transformer-based and convolution-based networks when the input resolution increases. In particular, our approach achieves state-of-the-art image classification accuracy with 50% fewer parameters than previous approaches.
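A toy illustration of the locality bias only: attention logits are penalized by the 2D Manhattan distance between patches. The real Vicinity Attention achieves linear complexity, whereas this quadratic version (with an assumed bias strength `alpha`) just shows the distance-based re-weighting:

```python
import torch

def vicinity_attention(q, k, v, hw, alpha=1.0):
    """Softmax attention with a Manhattan-distance penalty between patches.

    q, k, v: (B, N, D) with N == hw[0] * hw[1];  alpha: bias strength (assumed).
    """
    h, w = hw
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    coords = torch.stack([ys.flatten(), xs.flatten()], dim=-1).float()   # (N, 2)
    dist = (coords[:, None, :] - coords[None, :, :]).abs().sum(-1)       # 2D Manhattan distance
    logits = q @ k.transpose(-2, -1) / q.size(-1) ** 0.5 - alpha * dist  # nearer patches score higher
    return logits.softmax(dim=-1) @ v

if __name__ == "__main__":
    B, H, W, D = 2, 8, 8, 32
    q, k, v = (torch.randn(B, H * W, D) for _ in range(3))
    print(vicinity_attention(q, k, v, (H, W)).shape)   # torch.Size([2, 64, 32])
```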
A novel scene text recognizer based on the Vision-Language Transformer (VLT) is presented. Inspired by the Levenshtein Transformer in the NLP area, the proposed method (named Levenshtein OCR, LevOCR for short) explores an alternative way to automatically transcribe textual content from cropped natural images. Specifically, we cast the problem of scene text recognition as an iterative sequence refinement process. The initial prediction sequence produced by a pure vision model is encoded and fed into a cross-modal transformer to interact and fuse with the visual features, progressively approximating the ground truth. The refinement process is accomplished via two basic character-level operations, deletion and insertion, which are learned with imitation learning and allow for parallel decoding, dynamic length changes, and good interpretability. Quantitative experiments clearly demonstrate that LevOCR achieves state-of-the-art performance on standard benchmarks, and qualitative analyses verify the effectiveness and advantages of the proposed LevOCR algorithm. Code will be released soon.
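A purely illustrative toy of the two character-level edit operations applied to a text hypothesis; in LevOCR these operations are predicted by learned policies trained with imitation learning, and the example strings and masks below are made up:

```python
def apply_deletions(chars, delete_mask):
    """Keep characters whose delete flag is False."""
    return [c for c, d in zip(chars, delete_mask) if not d]

def apply_insertions(chars, insert_counts, filler="_"):
    """Insert insert_counts[i] placeholder slots after position i; a separate
    classifier would then fill each placeholder with a character."""
    out = []
    for c, n in zip(chars, insert_counts):
        out.append(c)
        out.extend([filler] * n)
    return out

if __name__ == "__main__":
    hypothesis = list("bewt")                                               # toy initial vision prediction
    hypothesis = apply_deletions(hypothesis, [False, False, True, False])   # drop the spurious 'w'
    hypothesis = apply_insertions(hypothesis, [0, 1, 0])                    # open one slot after 'e'
    print("".join(hypothesis))   # "be_t"; a later classification step would fill the slot, e.g. to "best"
```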