Deep neural networks are becoming increasingly powerful and increasingly large, and they always require more labeled data for training. However, since annotating data is time-consuming, it is now necessary to develop systems that perform well when learning from a limited amount of data. These data must be chosen carefully so that the resulting model remains effective. To this end, a system must be able to determine which data should be annotated to achieve the best results. In this paper, we propose four estimators of the confidence of object detection predictions. The first two are based on Monte Carlo dropout, the third on descriptive statistics, and the last on the detector's posterior probabilities. Within an active learning framework, the first three estimators yield a significant improvement in the detection of document physical pages and text lines compared to a random selection of images. We also show that the proposed estimator based on descriptive statistics can replace MC dropout, reducing the computational cost without compromising performance.
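To make the MC-dropout idea behind the first two estimators concrete, here is a minimal PyTorch sketch (not the authors' code, and applied to a toy classifier rather than a document detector): dropout is kept active at inference time, and an input is scored by the spread of several stochastic forward passes.

```python
import torch
import torch.nn as nn

def enable_mc_dropout(model: nn.Module) -> None:
    """Keep dropout layers stochastic while the rest of the model stays in eval mode."""
    model.eval()
    for module in model.modules():
        if isinstance(module, nn.Dropout):
            module.train()

@torch.no_grad()
def mc_dropout_uncertainty(model: nn.Module, x: torch.Tensor, n_samples: int = 20) -> torch.Tensor:
    """Run several stochastic forward passes and return the mean per-class variance
    of the softmax outputs, a simple proxy for prediction confidence."""
    enable_mc_dropout(model)
    probs = torch.stack([torch.softmax(model(x), dim=-1) for _ in range(n_samples)])
    return probs.var(dim=0).mean(dim=-1)  # one scalar score per input

# Toy usage with a hypothetical classifier head.
model = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Dropout(0.5), nn.Linear(64, 4))
scores = mc_dropout_uncertainty(model, torch.randn(8, 16))
print(scores)  # higher variance -> less confident -> better annotation candidate
```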
The layout analysis (LA) stage is of vital importance to the correct performance of an Optical Music Recognition (OMR) system. It identifies the regions of interest, such as staves or lyrics, which must then be processed in order to transcribe their content. Although modern approaches based on deep learning exist, an exhaustive study of LA for OMR has not yet been carried out concerning the precision of different models, their generalization to different domains, or, more importantly, their impact on the subsequent stages of the pipeline. This work focuses on filling this gap in the literature through an experimental study of different neural architectures, music document types, and evaluation scenarios. The need for training data has also led to the proposal of a new semi-synthetic data generation technique that enables the effective applicability of LA approaches in real conditions. Our results show that: (i) the choice of the model and its performance are crucial for the whole transcription process; (ii) the metrics commonly used to evaluate the LA stage do not always correlate with the final performance of the OMR system; and (iii) the proposed data generation technique enables state-of-the-art results to be achieved with a limited set of labeled data.
Unconstrained handwritten text recognition is a challenging computer vision task. It is traditionally handled by a two-step approach combining line segmentation followed by text line recognition. For the first time, we propose an end-to-end segmentation-free architecture for the task of handwritten document recognition: the Document Attention Network. In addition to text recognition, the model is trained to label text parts using begin and end tags in an XML-like fashion. The model is made up of an FCN encoder for feature extraction and transformer decoder layers for a recurrent token prediction process. It takes whole text documents as input and sequentially outputs characters as well as logical layout tokens. Contrary to existing segmentation-based approaches, the model is trained without using any segmentation label. We achieve competitive results on the READ 2016 dataset at page level and at double-page level, with CERs of 3.43% and 3.70% respectively. We also provide page-level results for the RIMES 2009 dataset, reaching a CER of 4.54%. We provide all source code and pre-trained model weights at https://github.com/factodeeplearning/dan.
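As a rough sketch of the encoder/decoder layout described above (not the published DAN implementation; the layer sizes, vocabulary, and omission of positional encodings are placeholder assumptions), a minimal version could look like this:

```python
import torch
import torch.nn as nn

class TinyDocumentAttentionModel(nn.Module):
    """Minimal sketch: a small FCN encoder produces a 2D feature map that is
    flattened into a sequence; a transformer decoder predicts characters and
    layout tags autoregressively. Positional encodings are omitted for brevity."""

    def __init__(self, vocab_size: int = 100, d_model: int = 256):
        super().__init__()
        self.encoder = nn.Sequential(               # FCN feature extractor (placeholder depth)
            nn.Conv2d(1, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, d_model, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerDecoderLayer(d_model, nhead=4, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, vocab_size)

    def forward(self, image: torch.Tensor, tokens: torch.Tensor) -> torch.Tensor:
        memory = self.encoder(image)                       # (B, C, H', W')
        memory = memory.flatten(2).transpose(1, 2)         # (B, H'*W', C)
        tgt = self.embed(tokens)                           # (B, T, C)
        causal = torch.triu(torch.full((tokens.size(1), tokens.size(1)), float("-inf")), diagonal=1)
        out = self.decoder(tgt, memory, tgt_mask=causal)
        return self.head(out)                              # (B, T, vocab)

model = TinyDocumentAttentionModel()
logits = model(torch.randn(2, 1, 128, 128), torch.randint(0, 100, (2, 10)))
print(logits.shape)  # torch.Size([2, 10, 100])
```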
The International Workshop on Reading Music Systems (WoRMS) is a workshop that tries to connect researchers who develop systems for reading music, such as in the field of Optical Music Recognition, with other researchers and practitioners that could benefit from such systems, like librarians or musicologists. The relevant topics of interest for the workshop include, but are not limited to: Music reading systems; Optical music recognition; Datasets and performance evaluation; Image processing on music scores; Writer identification; Authoring, editing, storing and presentation systems for music scores; Multi-modal systems; Novel input-methods for music to produce written music; Web-based Music Information Retrieval services; Applications and projects; Use-cases related to written music. These are the proceedings of the 3rd International Workshop on Reading Music Systems, held in Alicante on the 23rd of July 2021.
Unconstrained handwritten text recognition remains challenging for computer vision systems. Paragraph recognition is traditionally achieved with two models: the first for line segmentation and the second for text line recognition. We propose a unified end-to-end model using hybrid attention to tackle this task. The model is designed to iteratively process a paragraph image line by line. It can be split into three modules. An encoder generates feature maps from the whole paragraph image. An attention module then recurrently generates a vertical weighted mask, enabling the model to focus on the current text line features. In this way, it performs a kind of implicit line segmentation. For each set of text line features, a decoder module recognizes the associated character sequence, leading to the recognition of the whole paragraph. We achieve state-of-the-art character error rates on three popular datasets: 1.91% for RIMES, 4.45% for IAM, and 3.59% for READ 2016. Our code and trained model weights are available at https://github.com/FactoDeepLearning/VerticalAttentionOCR.
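A toy illustration of the vertical, line-by-line attention idea (the scoring network, shapes, and recurrence are assumptions, not the released model):

```python
import torch
import torch.nn as nn

class VerticalAttention(nn.Module):
    """Toy vertical attention: at each step, produce a soft mask over the rows of
    the paragraph feature map so the decoder can focus on one text line at a time."""

    def __init__(self, channels: int = 128, hidden: int = 64):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(2 * channels, hidden), nn.Tanh(), nn.Linear(hidden, 1))

    def forward(self, features: torch.Tensor, prev_line: torch.Tensor):
        # features: (B, C, H, W) paragraph feature map; prev_line: (B, C) previous line summary
        rows = features.mean(dim=3).transpose(1, 2)                  # (B, H, C) horizontal pooling
        query = prev_line.unsqueeze(1).expand(-1, rows.size(1), -1)  # (B, H, C)
        weights = torch.softmax(self.score(torch.cat([rows, query], dim=-1)).squeeze(-1), dim=1)
        line_feat = (weights.unsqueeze(-1) * rows).sum(dim=1)        # (B, C) current line features
        return line_feat, weights                                    # weights act as an implicit line mask

attn = VerticalAttention()
feats = torch.randn(2, 128, 32, 100)
line, mask = attn(feats, torch.zeros(2, 128))
print(line.shape, mask.shape)  # torch.Size([2, 128]) torch.Size([2, 32])
```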
In this paper we propose to build a cooperation between a deep neural network and a human in the loop in order to quickly obtain accurate segmentation maps of remote sensing images. In a nutshell, the agent iteratively interacts with the network to correct its initially flawed predictions. Concretely, these interactions are annotations representing the semantic labels. Our methodological contribution is twofold. First, we propose two interactive learning schemes to integrate user inputs into deep neural networks. The first one concatenates the annotations with the other inputs of the network. The second uses the annotations as a sparse ground truth to train the network. Second, we propose an active learning strategy to guide the user towards the most relevant areas to annotate. To this end, we compare different state-of-the-art acquisition functions for evaluating neural network uncertainty, such as ConfidNet, entropy, or ODIN. Through experiments on three remote sensing datasets, we show the effectiveness of the proposed methods. Notably, we show that active learning based on uncertainty estimation quickly guides the user towards errors and is therefore relevant for guiding user interventions.
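One of the acquisition functions compared above, predictive entropy, fits in a few lines. The sketch below is generic rather than the paper's implementation: it ranks unlabeled images by the mean per-pixel entropy of the softmax output.

```python
import numpy as np

def entropy_acquisition(prob_maps: np.ndarray) -> np.ndarray:
    """prob_maps: (N, C, H, W) softmax outputs for N unlabeled images.
    Returns one score per image: the mean per-pixel predictive entropy."""
    eps = 1e-12
    pixel_entropy = -(prob_maps * np.log(prob_maps + eps)).sum(axis=1)  # (N, H, W)
    return pixel_entropy.mean(axis=(1, 2))

# Rank a toy batch of predictions: the highest-entropy images are proposed for annotation.
probs = np.random.dirichlet(alpha=[1.0] * 5, size=(4, 64, 64)).transpose(0, 3, 1, 2)
scores = entropy_acquisition(probs)
print(np.argsort(scores)[::-1])  # indices ordered from most to least uncertain
```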
The generalization performance of a convolutional neural network (CNN) is influenced by the number, quality, and variety of the training images. Training images must be annotated, which is time-consuming and expensive. The goal of our work was to reduce the number of annotated images needed to train a CNN while maintaining its performance. We hypothesized that the performance of a CNN can be improved faster by ensuring that the set of training images contains a large fraction of hard-to-classify images. The objective of our study was to test this hypothesis with an active learning method that can automatically select the hard-to-classify images. We developed an active learning method for Mask Region-based CNN (Mask R-CNN) and named it MaskAL. MaskAL involves the iterative training of Mask R-CNN, after which the trained model is used to select a set of unlabeled images about which the model is uncertain. The selected images are then annotated and used to retrain Mask R-CNN, and this is repeated for a number of sampling iterations. In our study, Mask R-CNN was trained on 2,500 broccoli images selected through 12 sampling iterations, by either MaskAL or random sampling, from a training set of 14,000 broccoli images. For all sampling iterations, MaskAL performed significantly better than random sampling. Furthermore, MaskAL had the same performance after sampling 900 images as random sampling had after 2,300 images. Compared to a Mask R-CNN model trained on the entire training set (14,000 images), MaskAL reached 93.9% of its performance with 17.9% of its training data, whereas random sampling reached 81.9% of its performance with 16.4% of its training data. We conclude that the annotation effort for training Mask R-CNN on a broccoli dataset can be reduced by using MaskAL. Our software is available at https://github.com/pieterblok/maskal.
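The overall MaskAL-style loop can be summarized as follows. This is a hedged skeleton with stand-in functions, not the released maskal code: in a real setup, `train_fn` would train Mask R-CNN and `uncertainty_fn` would score its instance predictions.

```python
import random

def active_learning_loop(unlabeled, train_fn, uncertainty_fn, annotate_fn,
                         samples_per_iter=200, iterations=12):
    """Generic uncertainty-sampling loop: train, score the unlabeled pool,
    annotate the most uncertain images, and repeat."""
    labeled, pool, model = [], list(unlabeled), None
    for it in range(iterations):
        model = train_fn(labeled)                              # (re)train on what is labeled so far
        scored = sorted(pool, key=lambda img: uncertainty_fn(model, img), reverse=True)
        batch = scored[:samples_per_iter]                      # most uncertain images
        labeled += [annotate_fn(img) for img in batch]         # human annotation step
        pool = scored[samples_per_iter:]
        print(f"iteration {it + 1}: {len(labeled)} labeled / {len(pool)} remaining")
    return model

# Toy run with stand-in functions.
imgs = [f"img_{i:05d}.jpg" for i in range(1000)]
active_learning_loop(
    imgs,
    train_fn=lambda labeled: {"n_train": len(labeled)},
    uncertainty_fn=lambda model, img: random.random(),
    annotate_fn=lambda img: (img, "mask-annotation"),
    samples_per_iter=100,
    iterations=3,
)
```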
Object detection requires substantial labeling effort for learning robust models. Active learning can reduce this effort by intelligently selecting relevant examples to be annotated. However, selecting these examples properly without introducing a sampling bias with a negative impact on the generalization performance is not straightforward and most active learning techniques can not hold their promises on real-world benchmarks. In our evaluation paper, we focus on active learning techniques without a computational overhead besides inference, something we refer to as zero-cost active learning. In particular, we show that a key ingredient is not only the score on a bounding box level but also the technique used for aggregating the scores for ranking images. We outline our experimental setup and also discuss practical considerations when using active learning for object detection.
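To illustrate why the aggregation step matters, the sketch below (generic, not the paper's code) pools per-box uncertainty scores into a single image-level ranking score in three common ways; the choice of pooling can change which images are selected.

```python
import numpy as np

def aggregate_box_scores(box_scores, method: str = "mean") -> float:
    """Pool per-bounding-box uncertainty scores into a single image-level score.
    The choice of aggregation (max, mean, sum, ...) strongly affects the ranking."""
    if not box_scores:
        return 0.0  # images with no detections get the lowest priority here
    scores = np.asarray(box_scores)
    if method == "max":
        return float(scores.max())
    if method == "sum":
        return float(scores.sum())
    return float(scores.mean())

# Two toy images: many mildly uncertain boxes vs. one very uncertain box.
image_a = [0.4, 0.5, 0.45, 0.5]
image_b = [0.9]
for m in ("mean", "max", "sum"):
    print(m, aggregate_box_scores(image_a, m), aggregate_box_scores(image_b, m))
```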
The performance of deep neural networks improves with more annotated data. The problem is that the budget for annotation is limited. One solution to this is active learning, where a model asks a human to annotate the data it perceives as uncertain. A variety of recent methods have been proposed to apply active learning to deep networks, but most of them are either designed specifically for their target tasks or computationally inefficient for large networks. In this paper, we propose a novel active learning method that is simple but task-agnostic and works efficiently with deep networks. We attach a small parametric module, named "loss prediction module," to a target network and train it to predict the target losses of unlabeled inputs. This module can then suggest data for which the target model is likely to produce a wrong prediction. The method is task-agnostic, as networks are learned from a single loss regardless of target tasks. We rigorously validate our method through image classification, object detection, and human pose estimation with recent network architectures. The results demonstrate that our method consistently outperforms the previous methods over the tasks.
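A minimal sketch of such a loss prediction module is shown below (an assumption-laden toy, not the paper's exact architecture): intermediate feature maps of the target network are globally pooled, fused, and regressed to a scalar predicted loss per sample.

```python
import torch
import torch.nn as nn

class LossPredictionModule(nn.Module):
    """Sketch of a loss prediction module: pool intermediate feature maps of the
    target network, fuse them, and regress a scalar predicted loss per input."""

    def __init__(self, feature_channels=(64, 128, 256), hidden: int = 128):
        super().__init__()
        self.fcs = nn.ModuleList(nn.Linear(c, hidden) for c in feature_channels)
        self.out = nn.Linear(hidden * len(feature_channels), 1)

    def forward(self, feature_maps):
        pooled = [fm.mean(dim=(2, 3)) for fm in feature_maps]              # global average pooling: (B, C_i)
        fused = torch.cat([torch.relu(fc(p)) for fc, p in zip(self.fcs, pooled)], dim=1)
        return self.out(fused).squeeze(1)                                   # predicted loss per sample

# Toy usage: features from three hypothetical stages of a target network.
module = LossPredictionModule()
feats = [torch.randn(4, 64, 32, 32), torch.randn(4, 128, 16, 16), torch.randn(4, 256, 8, 8)]
pred_loss = module(feats)
print(pred_loss.shape)  # torch.Size([4]); higher values suggest the sample is worth annotating
```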
Active learning as a paradigm in deep learning is especially important in applications involving intricate perception tasks such as object detection where labels are difficult and expensive to acquire. Development of active learning methods in such fields is highly computationally expensive and time consuming which obstructs the progression of research and leads to a lack of comparability between methods. In this work, we propose and investigate a sandbox setup for rapid development and transparent evaluation of active learning in deep object detection. Our experiments with commonly used configurations of datasets and detection architectures found in the literature show that results obtained in our sandbox environment are representative of results on standard configurations. The total compute time to obtain results and assess the learning behavior can thereby be reduced by factors of up to 14 when comparing with Pascal VOC and up to 32 when comparing with BDD100k. This allows for testing and evaluating data acquisition and labeling strategies in under half a day and contributes to the transparency and development speed in the field of active learning for object detection.
The deployment of Deep Learning (DL) models is still precluded in those contexts where the amount of supervised data is limited. To address this issue, active learning strategies aim at minimizing the amount of labelled data required to train a DL model. Most active strategies are based on uncertain sample selection, and are often even restricted to samples lying close to the decision boundary. These techniques are theoretically sound, but an understanding of the selected samples based on their content is not straightforward, further driving non-experts to consider DL as a black-box. For the first time, here we propose a different approach, taking into consideration common domain-knowledge and enabling non-expert users to train a model with fewer samples. In our Knowledge-driven Active Learning (KAL) framework, rule-based knowledge is converted into logic constraints and their violation is checked as a natural guide for sample selection. We show that even simple relationships among data and output classes offer a way to spot predictions for which the model needs supervision. The proposed approach (i) outperforms many active learning strategies in terms of average F1 score, particularly in those contexts where domain knowledge is rich. Furthermore, we empirically demonstrate that (ii) KAL discovers data distributions lying far from the initial training data, unlike uncertainty-based strategies, (iii) it assures domain experts that the provided knowledge is respected by the model on test data, and (iv) it can be employed even when domain-knowledge is not available, by coupling it with an XAI technique. Finally, we show that KAL is also suitable for object recognition tasks and that its computational demand is low, unlike many recent active learning strategies.
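A toy, hedged illustration of knowledge-driven selection: a simple rule relating two output classes is encoded as a fuzzy constraint, its violation is measured on the model's predictions, and the most violating samples are proposed for annotation. The rule, classes, and scoring are invented for illustration and are not the KAL framework's actual constraints.

```python
import numpy as np

def rule_violation(probs: np.ndarray) -> np.ndarray:
    """Toy rule: 'a sample predicted as class CAT must not also be predicted as class DOG'.
    Violation is measured as the product of the two probabilities (a fuzzy AND):
    it is high only when the model asserts both classes at once."""
    p_cat, p_dog = probs[:, 0], probs[:, 1]
    return p_cat * p_dog

def select_for_annotation(probs: np.ndarray, budget: int = 5) -> np.ndarray:
    """Rank unlabeled samples by how strongly they violate the domain rule."""
    return np.argsort(rule_violation(probs))[::-1][:budget]

# Multi-label sigmoid outputs for 8 unlabeled samples over (cat, dog) scores.
preds = np.random.rand(8, 2)
print(select_for_annotation(preds, budget=3))  # indices of the most rule-violating samples
```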
The statistical properties of the density map (DM) approach to counting microbiological objects in images are studied in detail. The DM is given by U$^2$-Net. Two statistical methods for deep neural networks are utilized: the bootstrap and Monte Carlo (MC) dropout. A detailed analysis of the uncertainty of the DM predictions leads to a deeper understanding of the deficiencies of the DM model. Based on our investigation, we propose a self-normalization module for the network. The improved network model, called Self-Normalized Density Map (SNDM), can correct its output density map by itself so as to accurately predict the total number of objects in an image. The SNDM architecture outperforms the original model. Moreover, both statistical frameworks (bootstrap and MC dropout) give consistent statistical results for SNDM, which was not observed for the original model. The efficiency of SNDM is comparable with detector-based models, such as the Faster and Cascade R-CNN detectors.
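For the bootstrap side of such an analysis, a generic sketch (not the paper's pipeline) is to train several networks on resampled training data and use the spread of their predicted counts as an uncertainty estimate; the counts below are invented and would, in practice, come from integrating the predicted density maps.

```python
import numpy as np

def bootstrap_count_uncertainty(count_predictions: np.ndarray):
    """count_predictions: (M, N) object counts from M networks trained on bootstrap
    resamples of the training data, for N test images. Returns the mean count and
    its standard deviation per image, a simple spread-based uncertainty."""
    return count_predictions.mean(axis=0), count_predictions.std(axis=0)

# Toy example: 5 bootstrap models, 3 test images.
counts = np.array([[101.2, 48.7, 230.1],
                   [ 98.9, 50.3, 225.4],
                   [103.5, 47.9, 241.0],
                   [ 99.8, 49.5, 228.8],
                   [102.1, 48.1, 236.2]])
mean, std = bootstrap_count_uncertainty(counts)
print(mean.round(1), std.round(1))
```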
This paper presents a systematic literature review of image datasets for document image analysis, focusing on historical documents such as handwritten manuscripts and early prints. Finding appropriate datasets for historical document analysis is a crucial prerequisite to facilitate research using different machine learning algorithms. However, because of the wide variety of the actual data (e.g., scripts, tasks, dates, support systems, and amount of deterioration), the different formats for data and label representation, and the different evaluation processes and benchmarks, finding appropriate datasets is a difficult task. This work fills this gap by presenting a meta-study of existing datasets. After a systematic selection process (following the PRISMA guidelines), we selected 56 studies chosen based on different factors, such as the year of publication, the number of methods implemented in the article, the reliability of the chosen algorithms, the dataset size, and the journal outlet. We summarize each study by assigning it to one of three pre-defined tasks: document classification, layout structure, or semantic analysis. For every dataset, we provide statistics, document type, language, tasks, input visual aspects, and ground truth information. In addition, we provide the benchmark tasks and results from these papers or recent competitions. We further discuss the gaps and challenges in this domain. We advocate providing conversion tools to common formats (e.g., the COCO format for computer vision tasks) and always providing a set of evaluation metrics, instead of just one, to make results comparable across studies.
Furigana are pronunciation notes used in Japanese writing. Being able to detect them can help improve optical character recognition (OCR) performance, or produce more accurate digital copies of Japanese written media by displaying furigana correctly. This project focuses on detecting furigana in Japanese books and comics. Although the detection of Japanese text has been studied, there are currently no proposed methods for detecting furigana. We construct a new dataset containing Japanese written media and annotations of furigana. We propose an evaluation metric for such data which is similar to the evaluation protocols used in object detection, except that it allows groups of objects to be marked by one annotation. We propose a method for furigana detection based on mathematical morphology and connected component analysis. We evaluate the detections on the dataset and compare different methods for text extraction. We also evaluate different types of images, such as books and comics, separately and discuss the challenges of each. The proposed method reaches an F1 score of 76% on the dataset. It performs well on regular books but less so on comics and books of irregular format. Finally, we show that the proposed method can improve OCR performance on the Manga109 dataset by 5%. The source code is available via https://github.com/nikolajkb/furiganadetection.
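A generic morphology-plus-connected-components sketch in OpenCV (an illustration of the general technique, not the released FuriganaDetection code; the kernel size and area threshold are invented): close small gaps, label the components, and keep only the small, furigana-sized ones as candidates.

```python
import cv2
import numpy as np

def small_component_boxes(binary_img: np.ndarray, max_area: int = 200):
    """Morphological closing followed by connected component analysis; returns the
    bounding boxes of components whose area is below a threshold."""
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
    closed = cv2.morphologyEx(binary_img, cv2.MORPH_CLOSE, kernel)
    n, labels, stats, _ = cv2.connectedComponentsWithStats(closed, connectivity=8)
    boxes = []
    for i in range(1, n):  # label 0 is the background
        x, y, w, h, area = stats[i]
        if area <= max_area:
            boxes.append((x, y, w, h))
    return boxes

# Toy binary page: a small blob standing in for a furigana-sized component,
# plus a large text block that gets filtered out.
page = np.zeros((100, 100), dtype=np.uint8)
page[10:14, 10:13] = 255
page[40:60, 40:80] = 255
print(small_component_boxes(page))
```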
The International Workshop on Reading Music Systems (WoRMS) is a workshop that tries to connect researchers who develop systems for reading music, such as in the field of Optical Music Recognition, with other researchers and practitioners that could benefit from such systems, like librarians or musicologists. The relevant topics of interest for the workshop include, but are not limited to: Music reading systems; Optical music recognition; Datasets and performance evaluation; Image processing on music scores; Writer identification; Authoring, editing, storing and presentation systems for music scores; Multi-modal systems; Novel input-methods for music to produce written music; Web-based Music Information Retrieval services; Applications and projects; Use-cases related to written music. These are the proceedings of the 2nd International Workshop on Reading Music Systems, held in Delft on the 2nd of November 2019.
Object detectors trained with weak annotations are affordable alternatives to fully-supervised counterparts. However, there is still a significant performance gap between them. We propose to narrow this gap by fine-tuning a pre-trained weakly-supervised detector with a few fully-annotated samples automatically selected from the training set using "box-in-box" (BiB), a novel active learning strategy designed specifically for the well-documented failure modes of weakly-supervised detectors. Experiments on the VOC07 and COCO benchmarks show that BiB outperforms other active learning techniques and significantly improves the performance of the base weakly-supervised detector with only a few fully-annotated images per class. BiB reaches 97% of the performance of a fully-supervised Fast RCNN with only 10% of the fully-annotated images on VOC07. On COCO, using on average 10 fully-annotated images per class, or equivalently 1% of the training set, BiB also reduces the performance gap (in AP) between the weakly-supervised detector and the fully-supervised Fast RCNN by over 70%, showing a good trade-off between performance and data efficiency. Our code is publicly available at https://github.com/huyvvo/bib.
Object detection methods have achieved impressive improvements in the last few years thanks to the design of novel neural network architectures and the availability of large-scale datasets. However, current methods have an important limitation: they can only detect the classes observed at training time, which are only a subset of all the classes a detector may encounter in the real world. Furthermore, the presence of unknown classes is usually not considered at training time, resulting in methods that cannot even detect that an unknown object is present in an image. In this work, we address the problem of detecting unknown objects, known as open-set object detection. We propose a novel training strategy, called UNKAD, able to predict unknown objects without requiring any annotation of them, by exploiting the non-annotated objects that are already present in the background of the training images. In particular, exploiting the four-step training strategy of Faster R-CNN, UNKAD first identifies and pseudo-labels unknown objects and then uses the pseudo-annotations to train an additional unknown class. While UNKAD can directly detect unknown objects, we further combine it with previous unknown detection techniques, showing that it improves their performance at no additional cost.
Estimating the uncertainty of a neural network plays a fundamental role in safety-critical settings. In perception for autonomous driving, measuring the uncertainty means providing additional calibrated information to downstream tasks, such as path planning, that can use it for safe navigation. In this work, we propose a novel sampling-free uncertainty estimation method for object detection. We call it CertainNet, and it is the first to provide separate uncertainties for each output signal: objectness, class, location, and size. To achieve this, we propose an uncertainty-aware heatmap and exploit the neighboring bounding boxes provided by the detector at inference time. We evaluate the detection performance and the quality of the different uncertainty estimates separately, also on challenging out-of-domain samples: BDD100K and nuImages, with models trained on KITTI. Additionally, we propose a new metric to evaluate location and size uncertainties. When transferring to unseen datasets, CertainNet generalizes substantially better than previous methods and an ensemble, while running in real time and providing high-quality and comprehensive uncertainty estimates.
As an important data selection schema, active learning emerges as an essential component when iterating an Artificial Intelligence (AI) model. It becomes even more critical given the dominance in applications of deep neural network based models, which are composed of a large number of parameters and are data hungry. Despite its indispensable role in developing AI models, research on active learning is not as intensive as other research directions. In this paper, we present a review of active learning through deep active learning approaches from the following perspectives: 1) technical advancements in active learning, 2) applications of active learning in computer vision, 3) industrial systems leveraging or with potential to leverage active learning for data iteration, and 4) current limitations and future research directions. We expect this paper to clarify the significance of active learning in a modern AI model manufacturing process and to bring additional research attention to active learning. By addressing data automation challenges and coping with automated machine learning systems, active learning will facilitate the democratization of AI technologies by boosting model production at scale.
The PASCAL Visual Object Classes (VOC) challenge is a benchmark in visual object category recognition and detection, providing the vision and machine learning communities with a standard dataset of images and annotation, and standard evaluation procedures. Organised annually from 2005 to the present, the challenge and its associated dataset have become accepted as the benchmark for object detection. This paper describes the dataset and evaluation procedure. We review the state-of-the-art in evaluated methods for both classification and detection, analyse whether the methods are statistically different, what they are learning from the images (e.g. the object or its context), and what the methods find easy or confusing. The paper concludes with lessons learnt in the three-year history of the challenge, and proposes directions for future improvement and extension.