Handwritten text recognition in low-resource scenarios (e.g., manuscripts with rare alphabets) is a challenging problem. The main difficulties come from the scarcity of annotated data and the limited linguistic information (e.g., dictionaries and language models). We therefore propose a few-shot learning-based handwriting recognition method that significantly reduces the human annotation effort, requiring only a few images of each alphabet symbol. The method consists of detecting all symbols of a given alphabet in a text-line image and decoding the obtained similarity scores into the final sequence of transcribed symbols. Our model is first pretrained on synthetic line images generated from any alphabet, even one different from the target domain. A second training step is then applied to reduce the gap between the source and target data. Since this retraining would require annotating thousands of handwritten symbols together with their bounding boxes, we propose to avoid this human effort through an unsupervised progressive learning approach that automatically assigns pseudo-labels to unlabeled data. Evaluation on different manuscript datasets shows that our model achieves competitive results while greatly reducing the human effort. The code will be publicly available in this repository: \url{https://github.com/dali92002/HTRbyMatching}
Translated by Google Translate
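The abstract above does not spell out the progressive pseudo-labeling schedule, but the core idea of accepting confident matches first and relaxing the acceptance criterion between retraining rounds can be sketched as follows (the threshold values and function name are illustrative assumptions, not from the paper):

```python
def progressive_pseudo_labeling(scores, thresholds=(0.9, 0.8, 0.7)):
    """scores: dict mapping sample id -> similarity/confidence of its best
    symbol match. Returns the ids pseudo-labeled in each successive round,
    as the acceptance threshold is progressively relaxed."""
    labeled = set()
    rounds = []
    for t in thresholds:
        batch = {i for i, s in scores.items() if s >= t} - labeled
        labeled |= batch
        rounds.append(sorted(batch))
        # In the real setting the model would be retrained on `labeled`
        # here, before the threshold is relaxed further.
    return rounds
```

Each round only adds samples not already pseudo-labeled, so the model sees progressively harder examples as the threshold drops.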
The International Workshop on Reading Music Systems (WoRMS) is a workshop that tries to connect researchers who develop systems for reading music, such as in the field of Optical Music Recognition, with other researchers and practitioners that could benefit from such systems, like librarians or musicologists. The relevant topics of interest for the workshop include, but are not limited to: Music reading systems; Optical music recognition; Datasets and performance evaluation; Image processing on music scores; Writer identification; Authoring, editing, storing and presentation systems for music scores; Multi-modal systems; Novel input-methods for music to produce written music; Web-based Music Information Retrieval services; Applications and projects; Use-cases related to written music. These are the proceedings of the 3rd International Workshop on Reading Music Systems, held in Alicante on the 23rd of July 2021.
Handwritten Chinese text recognition (HCTR) has been an active research topic for decades. However, most previous studies focus only on recognizing cropped text-line images, ignoring the errors caused by text-line detection in real applications. Although some page-level text recognition methods have been proposed in recent years, they are either limited to simple layouts or require very detailed annotations, including expensive line-level or even character-level bounding boxes. To this end, we propose PageNet for end-to-end weakly supervised page-level HCTR. PageNet detects and recognizes characters and predicts the reading order between them, which is more robust and flexible when handling complex layouts, including multi-directional and curved text lines. Utilizing the proposed weakly supervised learning framework, PageNet requires only transcript-level annotation of real data; nevertheless, it can still output detection and recognition results at both the character and line levels, avoiding the labor and cost of labeling bounding boxes for characters and text lines. Extensive experiments on five datasets demonstrate the superiority of PageNet over existing weakly supervised and fully supervised page-level methods. These experimental results may spark further research beyond the realm of existing methods based on connectionist temporal classification or attention. The source code is available at https://github.com/shannanyinxiang/PageNet.
Instance object detection plays an important role in intelligent monitoring, visual navigation, human-computer interaction, intelligent services, and other fields. Inspired by the great success of deep convolutional neural networks (DCNNs), DCNN-based instance object detection has become a promising research topic. To address the problem that DCNNs always require a large-scale annotated dataset to supervise their training, while manual annotation is exhausting and time-consuming, we propose a new co-training-based framework named Gram Self-Labeling and Detection (Gram-SLD). The proposed Gram-SLD can automatically annotate a large amount of data with very limited manually labeled key data and achieve competitive performance. In our framework, a Gram loss is defined and used to construct two fully redundant and independent views, and a key-sample selection strategy together with an automatic annotation strategy that comprehensively considers precision and recall is proposed to generate high-quality pseudo-labels. Experiments on the public GMU Kitchen dataset, the Active Vision dataset, and our self-built BHID-ITEM dataset demonstrate that, with only 5% labeled training data, our Gram-SLD achieves competitive performance in object detection (less than 2% mAP loss) compared with fully supervised methods. In practical applications with complex and changing environments, the proposed method can meet the real-time and accuracy requirements of instance object detection.
Despite recent advances in automatic text recognition, performance remains moderate on historical manuscripts. This is mainly due to the lack of labeled data available to train data-hungry handwritten text recognition (HTR) models. Keyword spotting systems (KWS) provide a valid alternative to HTR thanks to their reduced error rates, but they are usually restricted to a closed reference vocabulary. In this paper, we propose a few-shot learning paradigm for spotting sequences of a few characters (n-grams) that requires only a small amount of labeled training data. We show that spotting relevant n-grams can reduce the system's dependency on the vocabulary: an out-of-vocabulary (OOV) word in the input handwritten line image can then be handled as a sequence of n-grams that belong to the lexicon. An extensive experimental evaluation of our proposed multi-representation approach is carried out.
Unconstrained handwritten text recognition is a challenging computer vision task. It is traditionally handled by a two-step approach combining line segmentation followed by text-line recognition. For the first time, we propose an end-to-end segmentation-free architecture for the handwritten document recognition task: the Document Attention Network. In addition to text recognition, the model is trained to label text parts with begin and end tags in an XML-like fashion. The model is made up of an FCN encoder for feature extraction and transformer decoder layers for a recurrent token-by-token prediction process. It takes whole text documents as input and sequentially outputs characters as well as logical layout tokens. In contrast to existing segmentation-based approaches, the model is trained without using any segmentation labels. We achieve competitive results on the READ 2016 dataset at page level, as well as at double-page level, with CERs of 3.43% and 3.70%, respectively. We also provide results for the RIMES 2009 dataset at page level, reaching a CER of 4.54%. We provide all source code and pretrained model weights at https://github.com/FactoDeepLearning/DAN.
The International Workshop on Reading Music Systems (WoRMS) is a workshop that tries to connect researchers who develop systems for reading music, such as in the field of Optical Music Recognition, with other researchers and practitioners that could benefit from such systems, like librarians or musicologists. The relevant topics of interest for the workshop include, but are not limited to: Music reading systems; Optical music recognition; Datasets and performance evaluation; Image processing on music scores; Writer identification; Authoring, editing, storing and presentation systems for music scores; Multi-modal systems; Novel input-methods for music to produce written music; Web-based Music Information Retrieval services; Applications and projects; Use-cases related to written music. These are the proceedings of the 2nd International Workshop on Reading Music Systems, held in Delft on the 2nd of November 2019.
Unconstrained handwritten text recognition remains challenging for computer vision systems. Paragraph recognition is traditionally achieved by two models: the first for line segmentation and the second for text-line recognition. We propose a unified end-to-end model using hybrid attention to tackle this task. The model aims to iteratively process a paragraph image line by line. It can be split into three modules. An encoder generates a feature map from the whole paragraph image. An attention module then recurrently generates a vertical weighted mask that focuses on the current text-line features; in this way, it performs a kind of implicit line segmentation. For each set of text-line features, a decoder module recognizes the associated character sequence, leading to the recognition of the whole paragraph. We achieve state-of-the-art character error rates on three popular datasets: 1.91% on RIMES, 4.45% on IAM, and 3.59% on READ 2016. Our code and trained model weights are available at https://github.com/FactoDeepLearning/VerticalAttentionOCR.
The layout analysis (LA) stage is of vital importance to the correct performance of an Optical Music Recognition (OMR) system. It identifies the regions of interest, such as staves or lyrics, that must then be processed in order to transcribe their content. Despite the existence of modern deep learning-based approaches, no exhaustive study of LA in OMR has been carried out regarding the accuracy of different models, their generalization to different domains, or, more importantly, their impact on the subsequent stages of the pipeline. This work focuses on filling this gap in the literature through an experimental study of different neural architectures, music document types, and evaluation scenarios. The need for training data has also led to the proposal of a new semi-synthetic data generation technique that enables the effective applicability of LA approaches in real conditions. Our results show that: (i) the choice of the model and its performance are crucial for the whole transcription process; (ii) the metrics commonly used to evaluate the LA stage do not always correlate with the final performance of the OMR system; and (iii) the proposed data generation technique enables state-of-the-art results to be achieved with a limited set of labeled data.
For best performance, today's semantic segmentation methods use large and carefully labeled datasets, requiring expensive annotation budgets. In this work, we show that coarse annotation is a low-cost but highly effective alternative for training semantic segmentation models. Considering the urban scene segmentation scenario, we leverage cheap coarse annotations for real-world captured data, as well as synthetic data to train our model and show competitive performance compared with finely annotated real-world data. Specifically, we propose a coarse-to-fine self-training framework that generates pseudo labels for unlabeled regions of the coarsely annotated data, using synthetic data to improve predictions around the boundaries between semantic classes, and using cross-domain data augmentation to increase diversity. Our extensive experimental results on Cityscapes and BDD100k datasets demonstrate that our method achieves a significantly better performance vs annotation cost tradeoff, yielding a comparable performance to fully annotated data with only a small fraction of the annotation budget. Also, when used as pretraining, our framework performs better compared to the standard fully supervised setting.
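The key step of the coarse-to-fine scheme described above, generating pseudo-labels only for the unlabeled regions of coarsely annotated images, can be sketched generically as follows (the `IGNORE` id, the confidence threshold, and the function name are illustrative assumptions, not details from the paper):

```python
IGNORE = 255  # id conventionally used for unlabeled pixels in coarse masks

def coarse_to_fine_pseudo_labels(coarse, pred, conf, thresh=0.9):
    """Keep human coarse labels where present; elsewhere adopt the model's
    confident predictions as pseudo-labels, leaving the rest ignored."""
    out = []
    for c, p, s in zip(coarse, pred, conf):
        if c != IGNORE:
            out.append(c)       # trust the human coarse annotation
        elif s >= thresh:
            out.append(p)       # confident prediction becomes a pseudo-label
        else:
            out.append(IGNORE)  # still ignored by the training loss
    return out
```

In practice this would run per pixel over 2D masks; the flat-list version above shows only the selection logic.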
This paper explores semi-supervised training for sequence tasks, such as Optical Character Recognition or Automatic Speech Recognition. We propose a novel loss function – SoftCTC – which is an extension of CTC allowing to consider multiple transcription variants at the same time. This allows to omit the confidence based filtering step which is otherwise a crucial component of pseudo-labeling approaches to semi-supervised learning. We demonstrate the effectiveness of our method on a challenging handwriting recognition task and conclude that SoftCTC matches the performance of a finely-tuned filtering based pipeline. We also evaluated SoftCTC in terms of computational efficiency, concluding that it is significantly more efficient than a naïve CTC-based approach for training on multiple transcription variants, and we make our GPU implementation public.
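The abstract does not spell out SoftCTC itself, but the naïve baseline it is compared against, running full CTC once per transcription variant and combining the resulting probabilities, can be written out directly. Below is a minimal pure-Python sketch (function names are ours); SoftCTC's contribution is to fold this per-variant work into a single forward pass:

```python
import math

def _logsumexp(xs):
    m = max(xs)
    if m == float("-inf"):
        return m
    return m + math.log(sum(math.exp(x - m) for x in xs))

def ctc_log_prob(log_probs, target, blank=0):
    """CTC forward algorithm: log P(target | per-frame log-probabilities).
    log_probs: list of T rows, each a list of log-probabilities over the
    alphabet (index `blank` is the CTC blank symbol)."""
    ext = [blank]                 # target interleaved with blanks
    for c in target:
        ext += [c, blank]
    S, T = len(ext), len(log_probs)
    alpha = [float("-inf")] * S
    alpha[0] = log_probs[0][blank]
    if S > 1:
        alpha[1] = log_probs[0][ext[1]]
    for t in range(1, T):
        prev, alpha = alpha, [float("-inf")] * S
        for s in range(S):
            cands = [prev[s]]
            if s >= 1:
                cands.append(prev[s - 1])
            # Skip transition is allowed only across a blank separating
            # two different labels.
            if s >= 2 and ext[s] != blank and ext[s] != ext[s - 2]:
                cands.append(prev[s - 2])
            alpha[s] = _logsumexp(cands) + log_probs[t][ext[s]]
    return _logsumexp([alpha[-1]] + ([alpha[-2]] if S > 1 else []))

def naive_multi_variant_log_prob(log_probs, variants, blank=0):
    """Naïve multi-variant loss: one full CTC pass per transcription
    variant, with the variant probabilities summed afterwards."""
    return _logsumexp([ctc_log_prob(log_probs, v, blank) for v in variants])
```

For example, with two frames of uniform probabilities over {blank, 'a'}, the paths (a,a), (blank,a), and (a,blank) all collapse to "a", giving probability 0.75.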
Weakly and semi-supervised learning have recently attracted considerable attention in the object detection literature, since they can alleviate the annotation costs required to successfully train deep learning models. State-of-the-art approaches to semi-supervised learning rely on student-teacher models trained with a multi-stage process and considerable data augmentation. Custom networks have been developed for the weakly supervised setting, making them difficult to adapt to different detectors. In this paper, a weakly semi-supervised training method is introduced that reduces these training challenges, yet achieves state-of-the-art performance by leveraging only a small fraction of fully labeled images together with the information in weakly labeled images. In particular, our generic sampling-based learning strategy produces pseudo-ground-truth (GT) bounding-box annotations in an online fashion, eliminating the need for multi-stage training and student-teacher network configurations. These pseudo-GT boxes are sampled from weakly labeled images according to the classification scores of object proposals accumulated through a score-propagation process. Empirical results on the Pascal VOC dataset indicate that the proposed approach improves performance by 5.0% when using VOC 2007 as fully labeled and VOC 2012 as weakly labeled data. Likewise, with 5-10% fully annotated images, we observed an improvement of more than 10% in mAP, showing that a modest investment in image-level annotation can substantially improve detection performance.
Deep learning has emerged as an effective solution for solving the task of object detection in images but at the cost of requiring large labeled datasets. To mitigate this cost, semi-supervised object detection methods, which consist in leveraging abundant unlabeled data, have been proposed and have already shown impressive results. However, most of these methods require linking a pseudo-label to a ground-truth object by thresholding. In previous works, this threshold value is usually determined empirically, which is time consuming, and only done for a single data distribution. When the domain, and thus the data distribution, changes, a new and costly parameter search is necessary. In this work, we introduce our method Adaptive Self-Training for Object Detection (ASTOD), which is a simple yet effective teacher-student method. ASTOD determines without cost a threshold value based directly on the ground value of the score histogram. To improve the quality of the teacher predictions, we also propose a novel pseudo-labeling procedure. We use different views of the unlabeled images during the pseudo-labeling step to reduce the number of missed predictions and thus obtain better candidate labels. Our teacher and our student are trained separately, and our method can be used in an iterative fashion by replacing the teacher by the student. On the MS-COCO dataset, our method consistently performs favorably against state-of-the-art methods that do not require a threshold parameter, and shows competitive results with methods that require a parameter sweep search. Additional experiments with respect to a supervised baseline on the DIOR dataset containing satellite images lead to similar conclusions, and prove that it is possible to adapt the score threshold automatically in self-training, regardless of the data distribution.
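ASTOD's threshold selection can be illustrated with a small sketch. The reading below, taking the emptiest histogram bin between the low- and high-confidence modes as the threshold, is our interpretation of the abstract's "ground value of the score histogram", and the function is an assumption for illustration, not the paper's implementation:

```python
def histogram_valley_threshold(scores, bins=20):
    """Pick a pseudo-label confidence threshold at the valley of the score
    histogram. Assumes scores in [0, 1] forming a low-confidence mode and
    a high-confidence mode (one on each side of the midpoint)."""
    counts = [0] * bins
    for s in scores:
        counts[min(int(s * bins), bins - 1)] += 1
    # Locate the dominant low- and high-score modes, then take the
    # emptiest bin between them; its center becomes the threshold.
    half = bins // 2
    peak_lo = counts[:half].index(max(counts[:half]))
    peak_hi = half + counts[half:].index(max(counts[half:]))
    valley = min(range(peak_lo + 1, peak_hi), key=lambda i: counts[i])
    return (valley + 0.5) / bins
```

Because the threshold is read off the histogram of the teacher's own scores, no parameter sweep is needed when the data distribution changes.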
The challenging field of scene text detection requires complex data annotation, which is time-consuming and expensive. Techniques such as weak supervision can reduce the amount of data needed. This paper proposes a weak supervision method for scene text detection that leverages reinforcement learning (RL): the reward received by the RL agent is estimated by a neural network rather than inferred from ground-truth labels. First, we enhance an existing supervised RL approach to text detection with several training optimizations, allowing us to close the performance gap to regression-based algorithms. We then use the proposed system in weakly and semi-supervised training on real-world data. Our results show that training in a weakly supervised setting is feasible. However, we find that using our model in a semi-supervised setting, e.g., combining labeled synthetic data with unannotated real-world data, produces the best results.
Semi-supervised few-shot learning consists of training a classifier to adapt to new tasks with limited labeled data and a fixed quantity of unlabeled data. Many sophisticated methods have been developed to address the challenges this problem comprises. In this paper, we propose a simple but quite effective approach, from the perspective of indirect learning, to predict accurate pseudo-labels for the unlabeled data and then augment the extremely label-constrained support set in few-shot classification tasks. Our approach can be implemented in just a few lines of code using only off-the-shelf operations, yet it is able to surpass state-of-the-art methods on four benchmark datasets.
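The abstract does not give the paper's indirect-learning rule, but the generic baseline it builds on, pseudo-labeling unlabeled embeddings by their nearest class prototype and adding them to the support set, fits in a few lines (the function and its signature are ours, for illustration):

```python
def prototype_pseudo_labels(support, support_labels, unlabeled, n_classes):
    """Label each unlabeled embedding with the class of its nearest
    prototype, where a prototype is the mean of that class's support
    embeddings (vectors given as plain lists of floats)."""
    protos = []
    for c in range(n_classes):
        members = [x for x, y in zip(support, support_labels) if y == c]
        protos.append([sum(dim) / len(members) for dim in zip(*members)])

    def dist2(a, b):  # squared Euclidean distance
        return sum((u - v) ** 2 for u, v in zip(a, b))

    return [min(range(n_classes), key=lambda c: dist2(x, protos[c]))
            for x in unlabeled]
```

The pseudo-labeled samples would then be appended to the support set before fitting the final few-shot classifier.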
Self-supervised pretraining is able to produce transferable representations for various visual document understanding (VDU) tasks. However, the ability of such representations to adapt to new distribution shifts at test time has not been studied. We propose DocTTA, a novel test-time adaptation method for documents, which leverages cross-modality self-supervised learning via masked visual language modeling, as well as pseudo-labeling, to adapt a model learned on a source domain to an unlabeled target domain at test time. We also introduce new benchmarks based on existing public datasets for various VDU tasks, including entity recognition, key-value extraction, and document visual question answering, on which DocTTA improves the source model performance by up to 1.79% (F1 score), 3.43% (F1 score), and 17.68% (ANLS score), respectively, while significantly reducing the calibration error on target data.
Robust maritime obstacle detection is crucial for the safe navigation of autonomous boats and timely collision avoidance. The current state of the art is based on deep segmentation networks trained on large datasets. However, per-pixel ground-truth labeling of such datasets is labor-intensive and expensive. We propose a new scaffolding learning regime (SLR) that leverages weak annotations consisting of water edges, the horizon, and obstacle bounding boxes to train segmentation-based obstacle detection networks, thereby reducing the required ground-truth labeling effort by a factor of 21. SLR trains an initial model from the weak annotations and then alternates between re-estimating segmentation pseudo-labels and improving the network parameters. Experiments show that maritime obstacle segmentation networks trained with SLR on weak labels not only match but outperform the same networks trained on dense ground-truth labels, which is a remarkable result. In addition to the improved accuracy, SLR also increases domain generalization and can be used for domain adaptation with a low manual annotation load. Code and pretrained models are available at https://github.com/lojzezust/SLR.
Natural Language Inference (NLI) or Recognizing Textual Entailment (RTE) aims at predicting the relation between a pair of sentences (premise and hypothesis) as entailment, contradiction or semantic independence. Although deep learning models have shown promising performance for NLI in recent years, they rely on large scale expensive human-annotated datasets. Semi-supervised learning (SSL) is a popular technique for reducing the reliance on human annotation by leveraging unlabeled data for training. However, despite its substantial success on single sentence classification tasks where the challenge in making use of unlabeled data is to assign "good enough" pseudo-labels, for NLI tasks, the nature of unlabeled data is more complex: one of the sentences in the pair (usually the hypothesis) along with the class label are missing from the data and require human annotations, which makes SSL for NLI more challenging. In this paper, we propose a novel way to incorporate unlabeled data in SSL for NLI where we use a conditional language model, BART to generate the hypotheses for the unlabeled sentences (used as premises). Our experiments show that our SSL framework successfully exploits unlabeled data and substantially improves the performance of four NLI datasets in low-resource settings. We release our code at: https://github.com/msadat3/SSL_for_NLI.
Gestures, as the "language" of non-verbal communication, have been theoretically established since the 17th century. However, their relevance for the visual arts has only occasionally been expressed. This may be primarily due to the sheer overwhelming amount of data that traditionally had to be processed by hand. With the steady progress of digitization, however, an increasing number of historical artifacts have been indexed and made available to the public, creating a need for the automatic retrieval of art-historical motifs with similar body constellations or poses. Since the art domain differs significantly from existing real-world datasets for human pose estimation due to its stylistic variations, it presents new challenges. In this paper, we propose a novel approach to estimating human poses in art-historical images. In contrast to previous work that attempts to bridge the domain gap with pre-trained models or through style transfer, we propose semi-supervised learning for both object and keypoint detection. Furthermore, we introduce a novel domain-specific art dataset that includes bounding-box and keypoint annotations of human figures. Our approach achieves significantly better results than methods that use pre-trained models or style transfer.
As an important data selection schema, active learning emerges as the essential component when iterating an Artificial Intelligence (AI) model. It becomes even more critical given the dominance of deep neural network based models, which are composed of a large number of parameters and data hungry, in application. Despite its indispensable role for developing AI models, research on active learning is not as intensive as other research directions. In this paper, we present a review of active learning through deep active learning approaches from the following perspectives: 1) technical advancements in active learning, 2) applications of active learning in computer vision, 3) industrial systems leveraging or with potential to leverage active learning for data iteration, 4) current limitations and future research directions. We expect this paper to clarify the significance of active learning in a modern AI model manufacturing process and to bring additional research attention to active learning. By addressing data automation challenges and coping with automated machine learning systems, active learning will facilitate democratization of AI technologies by boosting model production at scale.