Sensitivity of deep neural models to input noise is known to be a challenging problem. In NLP, model performance often deteriorates with naturally occurring noise, such as spelling errors. To mitigate this issue, models may leverage artificially noised data. However, the amount and type of generated noise has so far been determined arbitrarily. We therefore propose to model the errors statistically from grammatical-error-correction corpora. We present a thorough evaluation of several state-of-the-art NLP systems in multiple languages, with tasks including syntactic parsing, named entity recognition, neural machine translation, a subset of the GLUE benchmark, and reading comprehension. We also compare two approaches to addressing the performance drop: a) training the NLP models on noised data generated by our framework, and b) reducing the input noise with an external system for natural language correction. The code is released at https://github.com/ufal/kazitext.
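As a concrete illustration of noising text with statistically grounded error rates, a minimal Python sketch might look as follows. The probability values, the tiny keyboard-neighbour table, and the function names are illustrative assumptions, not the actual KaziText implementation.

    import random

    # Hypothetical per-error-type probabilities, as they might be estimated from a
    # grammatical-error-correction corpus; these values are NOT the KaziText statistics.
    ERROR_PROBS = {"typo": 0.02, "casing": 0.01}

    # A tiny sample keyboard-neighbour table for modelling typos.
    QWERTY_NEIGHBOURS = {"a": "qws", "e": "wrd", "o": "ipl", "t": "ryg"}

    def corrupt_token(token: str) -> str:
        """Apply at most one sampled noise operation to a token."""
        r = random.random()
        if r < ERROR_PROBS["typo"] and token and token[0].lower() in QWERTY_NEIGHBOURS:
            # Typo: replace the first character with a keyboard neighbour.
            return random.choice(QWERTY_NEIGHBOURS[token[0].lower()]) + token[1:]
        if token and r < ERROR_PROBS["typo"] + ERROR_PROBS["casing"]:
            # Casing error: flip the case of the first character.
            return token[0].swapcase() + token[1:]
        return token

    def corrupt_sentence(sentence: str) -> str:
        return " ".join(corrupt_token(t) for t in sentence.split())

    print(corrupt_sentence("The model is sensitive to naturally occurring noise."))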
Grammatical Error Correction (GEC) is the task of automatically detecting and correcting errors in text. The task not only includes the correction of grammatical errors, such as missing prepositions and mismatched subject-verb agreement, but also orthographic and semantic errors, such as misspellings and word choice errors respectively. The field has seen significant progress in the last decade, motivated in part by a series of five shared tasks, which drove the development of rule-based methods, statistical classifiers, statistical machine translation, and finally neural machine translation systems which represent the current dominant state of the art. In this survey paper, we condense the field into a single article and first outline some of the linguistic challenges of the task, introduce the most popular datasets that are available to researchers (for both English and other languages), and summarise the various methods and techniques that have been developed with a particular focus on artificial error generation. We next describe the many different approaches to evaluation as well as concerns surrounding metric reliability, especially in relation to subjective human judgements, before concluding with an overview of recent progress and suggestions for future work and remaining challenges. We hope that this survey will serve as a comprehensive resource for researchers who are new to the field or who want to be kept apprised of recent developments.
We present the first neural machine translation system for translation between the endangered Erzya language and Russian, along with the dataset we collected to train and evaluate it. The BLEU scores are 17 and 19 for translation into Erzya and Russian respectively, and more than half of the translations are rated as acceptable by native speakers. We also adapt the model to translate between Erzya and 10 other languages, but without additional parallel data the quality in these directions remains low. We release the translation models together with the collected text corpus, a new language identification model, and a multilingual sentence encoder adapted for the Erzya language. These resources can be found at https://github.com/slone-nlp/myv-nmt.
In this work, we introduce IndicXTREME, a benchmark consisting of nine diverse tasks covering 18 languages from the Indic sub-continent belonging to four different families. Across languages and tasks, IndicXTREME contains a total of 103 evaluation sets, of which 51 are new contributions to the literature. To maintain high quality, we only use human annotators to curate or translate our datasets (for IndicXParaphrase, where an automatic translation system is used, a second human verification and correction step is done). To the best of our knowledge, this is the first effort toward creating a standard benchmark for Indic languages that aims to test the zero-shot capabilities of pretrained language models. We also release IndicCorp v2, an updated and much larger version of IndicCorp that contains 20.9 billion tokens in 24 languages. We pretrain IndicBERT v2 on IndicCorp v2 and evaluate it on IndicXTREME to show that it outperforms existing multilingual language models such as XLM-R and MuRIL.
Data-hungry deep neural networks have established themselves as the standard for many NLP tasks, including traditional sequence tagging. Despite their state-of-the-art performance on high-resource languages, they still fall behind their statistical counterparts in low-resource scenarios. One methodology to counter this problem is text augmentation, i.e., generating new synthetic training data points from existing data. Although NLP has recently witnessed a wealth of text augmentation techniques, the field still lacks a systematic performance analysis across multiple languages and sequence tagging tasks. To fill this gap, we investigate three categories of text augmentation methods that perform changes at the syntax level (e.g., cropping sub-sentences), the token level (e.g., random word insertion), and the character level (e.g., character swapping). We systematically compare them on part-of-speech tagging, dependency parsing, and semantic role labeling for a diverse set of language families, using various models, including architectures that rely on pretrained multilingual contextualized language models such as mBERT. Augmentation most significantly improves dependency parsing, followed by part-of-speech tagging and semantic role labeling. We find the investigated techniques to be effective on morphologically rich languages in general, rather than on analytic languages such as Vietnamese. Our results suggest that augmentation techniques can further improve over strong baselines based on mBERT. We identify character-level methods as the most consistent performers, while synonym replacement and syntactic augmenters provide inconsistent improvements. Finally, we discuss that the results most heavily depend on the task, language pair, and model type.
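To make the token- and character-level categories concrete, here is a minimal Python sketch of two of the operations named above, random word insertion and character swapping. The probabilities, stand-in vocabulary, and function names are illustrative assumptions rather than the paper's exact implementations; note that for sequence tagging, inserted tokens would also need labels assigned.

    import random

    def random_word_insertion(tokens, vocab, p=0.1):
        """Token-level augmentation: insert a random vocabulary word
        after each token with probability p."""
        out = []
        for tok in tokens:
            out.append(tok)
            if random.random() < p:
                out.append(random.choice(vocab))
        return out

    def character_swap(token, p=0.1):
        """Character-level augmentation: swap one adjacent character pair,
        keeping the first and last characters fixed."""
        if len(token) > 3 and random.random() < p:
            i = random.randrange(1, len(token) - 2)
            chars = list(token)
            chars[i], chars[i + 1] = chars[i + 1], chars[i]
            return "".join(chars)
        return token

    tokens = "the quick brown fox jumps".split()
    print(random_word_insertion(tokens, vocab=["lazy", "dog"]))
    print([character_swap(t, p=1.0) for t in tokens])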
Data augmentation is an important component both in the robustness evaluation of natural language processing (NLP) models and in enhancing the diversity of the data they are trained on. In this paper, we present NL-Augmenter, a new participatory Python-based natural language augmentation framework that supports the creation of both transformations (modifications to the data) and filters (data splits according to specific features). We describe the framework and an initial set of 117 transformations and 23 filters for a variety of natural language tasks. We demonstrate the efficacy of NL-Augmenter by using several of its transformations to analyze the robustness of popular natural language models. The infrastructure, datacards, and robustness analysis results are publicly available in the NL-Augmenter repository (https://github.com/gem-benchmark/nl-augmenter).
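A minimal, self-contained sketch of the transformation/filter pattern described above is given below; the class names and method signatures here are illustrative assumptions and may differ from NL-Augmenter's actual interfaces.

    from typing import List

    class Transformation:
        """A modification of the data: here, a trivial whitespace-noise transform."""
        def generate(self, sentence: str) -> List[str]:
            # Duplicate the first space; real transformations include paraphrasing,
            # typo injection, style changes, etc.
            return [sentence.replace(" ", "  ", 1)]

    class LengthFilter:
        """A data split according to a specific feature: sentence length."""
        def __init__(self, max_tokens: int = 10):
            self.max_tokens = max_tokens

        def keep(self, sentence: str) -> bool:
            return len(sentence.split()) <= self.max_tokens

    corpus = [
        "A short example.",
        "A considerably longer example sentence that the length filter should drop.",
    ]
    flt = LengthFilter(max_tokens=5)
    subset = [s for s in corpus if flt.keep(s)]
    augmented = [t for s in subset for t in Transformation().generate(s)]
    print(subset)
    print(augmented)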
As machine translation (MT) metrics improve their correlation with human judgement every year, it is crucial to understand the limitations of such metrics at the segment level. Specifically, it is important to investigate metric behaviour when facing accuracy errors in MT because these can have dangerous consequences in certain contexts (e.g., legal, medical). We curate ACES, a translation accuracy challenge set, consisting of 68 phenomena ranging from simple perturbations at the word/character level to more complex errors based on discourse and real-world knowledge. We use ACES to evaluate a wide range of MT metrics including the submissions to the WMT 2022 metrics shared task and perform several analyses leading to general recommendations for metric developers. We recommend: a) combining metrics with different strengths, b) developing metrics that give more weight to the source and less to surface-level overlap with the reference and c) explicitly modelling additional language-specific information beyond what is available via multilingual embeddings.
In this paper, we share findings from our effort to build practical machine translation (MT) systems capable of translating more than one thousand languages. We describe results in three research domains: (i) building clean, web-mined datasets for 1500+ languages by leveraging semi-supervised pre-training for language identification and developing data-driven filtering techniques; (ii) developing practical MT models for under-served languages by leveraging massively multilingual models trained with supervised parallel data for over 100 high-resource languages and with monolingual datasets for an additional 1000+ languages; and (iii) studying the limitations of evaluation metrics for these languages and conducting qualitative analysis of the outputs of our MT models, highlighting several frequent error modes of these types of models. We hope that our work provides useful insights to practitioners working toward building MT systems for currently understudied languages, and highlights research directions that can complement the weaknesses of massively multilingual models in data-sparse settings.
Even in highly developed countries, as many as 15-30% of the population can only understand texts written using a basic vocabulary. Their understanding of everyday texts is limited, which prevents them from taking an active role in society and making informed decisions regarding healthcare, legal representation, or democratic choice. Lexical simplification is a natural language processing task that aims to make text understandable to everyone by replacing complex vocabulary and expressions with simpler ones, while preserving the original meaning. It has attracted significant attention in the last 20 years, and fully automatic lexical simplification systems have been proposed for various languages. The main obstacle to progress in the field is the absence of high-quality datasets for building and evaluating lexical simplification systems. We present a new benchmark dataset for lexical simplification in English, Spanish, and (Brazilian) Portuguese, and provide details about data selection and annotation procedures. This is the first dataset that offers a direct comparison of lexical simplification systems across three languages. To showcase the usability of the dataset, we adapt two state-of-the-art lexical simplification systems with differing architectures (neural vs. non-neural) to all three languages (English, Spanish, and Brazilian Portuguese) and evaluate their performance on our new dataset. For a fairer comparison, we use several evaluation measures that capture varied aspects of the systems' efficacy, and discuss their strengths and weaknesses. We find that the state-of-the-art neural lexical simplification systems outperform the state-of-the-art non-neural systems in all three languages. More importantly, we find that the state-of-the-art neural systems perform significantly better for English than for Spanish and Portuguese.
This report summarizes the work carried out by the authors during the Twelfth Montreal Industrial Problem Solving Workshop, held at Université de Montréal in August 2022. The team tackled a problem submitted by CBC/Radio-Canada on the theme of Automatic Text Simplification (ATS).
We present AsNER, a named entity annotation dataset for the low-resource Assamese language, together with a baseline Assamese NER model. The dataset contains about 99k tokens, comprising text from speeches of the Prime Minister of India and Assamese plays. It also contains person names, location names, and addresses. The proposed NER dataset is likely to be a significant resource for deep neural network based Assamese language processing. We benchmark the dataset by training NER models and evaluating supervised named entity recognition (NER) using state-of-the-art architectures such as FastText, BERT, XLM-R, Flair, and MuRIL. We implement several baseline approaches with a sequence tagging BiLSTM-CRF architecture. The highest F1 score among all baselines is 80.69%, achieved when MuRIL is used as the word embedding method. The annotated dataset and the top-performing model are made publicly available.
We present a statistical model for German medical natural language processing, trained for named entity recognition (NER), as an open, publicly available model. This work is a refined successor to our first GERNERMED model, which it substantially outperforms. We demonstrate the effectiveness of combining multiple techniques to achieve strong entity recognition performance by means of transfer learning on pretrained deep language models (LM), word alignment, and neural machine translation. Given the sparse landscape of open, public medical entity recognition models for German texts, this work offers the German research community in medical NLP a useful baseline model. Since our model is based on public English data, its weights are provided without legal restrictions on usage and distribution. The sample code and the statistical model are available at: https://github.com/frankkramer-lab/gernermed-pp
Translating training data into many languages has emerged as a practical solution for improving cross-lingual transfer. For tasks that involve span-level annotations, such as information extraction or question answering, an additional label projection step is required to map annotated spans onto the translated texts. Recently, a few efforts have utilized a simple mark-then-translate method to jointly perform translation and projection by inserting special markers around the labeled spans in the original sentence. However, as far as we are aware, no empirical analysis has been conducted on how this approach compares to traditional annotation projection based on word alignment. In this paper, we present an extensive empirical study across 42 languages and three tasks (QA, NER, and Event Extraction) to evaluate the effectiveness and limitations of both methods, filling an important gap in the literature. Experimental results show that our optimized version of mark-then-translate, which we call EasyProject, is easily applied to many languages and works surprisingly well, outperforming the more complex word alignment-based methods. We analyze several key factors that affect end-task performance, and show EasyProject works well because it can accurately preserve label span boundaries after translation. We will publicly release all our code and data.
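To illustrate the mark-then-translate idea, the following Python sketch wraps each labeled span in markers before translation and reads the spans back out of the translation afterwards; the bracket markers, the example sentence, and the function names are illustrative assumptions rather than EasyProject's exact configuration.

    import re

    def insert_markers(tokens, spans):
        """Wrap labeled spans (start, end exclusive, over token indices) in
        bracket markers. A real system would use distinct marker pairs to
        carry each span's label type through translation."""
        starts = {s for s, _ in spans}
        ends = {e for _, e in spans}
        out = []
        for i, tok in enumerate(tokens):
            if i in starts:
                out.append("[")
            out.append(tok)
            if i + 1 in ends:
                out.append("]")
        return " ".join(out)

    def extract_spans(translated):
        """Recover the projected span texts from the markers in the translation."""
        return re.findall(r"\[\s*(.+?)\s*\]", translated)

    marked = insert_markers("Barack Obama visited Paris .".split(), [(0, 2), (3, 4)])
    print(marked)  # -> [ Barack Obama ] visited [ Paris ] .
    # After machine-translating `marked` (e.g. into French), the spans survive:
    print(extract_spans("[ Barack Obama ] a visité [ Paris ] ."))  # -> ['Barack Obama', 'Paris']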
Due to the lack of parallel data in the current grammatical error correction (GEC) task, models based on the sequence-to-sequence framework cannot be trained adequately to achieve higher performance. We propose two data synthesis methods that can control the error rate and the ratio of error types in the synthetic data. The first method corrupts each word in a monolingual corpus with a fixed probability, via replacement, insertion, and deletion. The other method trains an error generation model and further filters its decoding results. Experiments on different synthetic data show that an error rate of 40% and an equal ratio of error types improve model performance. Finally, we synthesize about 100 million examples and achieve performance comparable to the state of the art, which uses twice as much data as we do.
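A minimal sketch of the first synthesis method (fixed-probability corruption) might look like the following; the 40% error rate and the equal ratio of error types follow the abstract, while the stand-in vocabulary and sampling details are illustrative assumptions.

    import random

    VOCAB = ["the", "a", "of", "to", "in"]  # stand-in for a real vocabulary

    def corrupt(tokens, error_rate=0.4):
        """Corrupt each word with a fixed probability via substitution,
        insertion, or deletion, sampled at an equal ratio."""
        out = []
        for tok in tokens:
            if random.random() >= error_rate:
                out.append(tok)
                continue
            op = random.choice(["substitute", "insert", "delete"])
            if op == "substitute":
                out.append(random.choice(VOCAB))
            elif op == "insert":
                out.extend([tok, random.choice(VOCAB)])
            # "delete": append nothing
        return out

    source = "she went to the market yesterday".split()
    print(" ".join(corrupt(source)))  # noisy input; the original serves as the target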
The BERT family of neural language models have become highly popular due to their ability to provide sequences of text with rich context-sensitive token encodings which are able to generalise well to many NLP tasks. We introduce gaBERT, a monolingual BERT model for the Irish language. We compare our gaBERT model to multilingual BERT and the monolingual Irish WikiBERT, and we show that gaBERT provides better representations for a downstream parsing task. We also show how different filtering criteria, vocabulary size and the choice of subword tokenisation model affect downstream performance. We compare the results of fine-tuning a gaBERT model with an mBERT model for the task of identifying verbal multiword expressions, and show that the fine-tuned gaBERT model also performs better at this task. We release gaBERT and related code to the community.
This paper presents a comprehensive survey of corpora and lexical resources available for Turkish. We review a broad range of resources, focusing on those that are publicly available. In addition to providing information about the available linguistic resources, we present a set of recommendations and identify gaps in the data that can be used for research and for building applications in Turkish linguistics and natural language processing.
We present Naamapadam, the largest publicly available Named Entity Recognition (NER) dataset for the 11 major Indian languages from two language families. In each language, it contains more than 400k sentences annotated with a total of at least 100k entities from three standard entity categories (Person, Location and Organization) for 9 out of the 11 languages. The training dataset has been automatically created from the Samanantar parallel corpus by projecting automatically tagged entities from an English sentence to the corresponding Indian language sentence. We also create manually annotated testsets for 8 languages containing approximately 1000 sentences per language. We demonstrate the utility of the obtained dataset on existing testsets and the Naamapadam-test data for 8 Indic languages. We also release IndicNER, a multilingual mBERT model fine-tuned on the Naamapadam training set. IndicNER achieves the best F1 on the Naamapadam-test set compared to an mBERT model fine-tuned on existing datasets. IndicNER achieves an F1 score of more than 80 for 7 out of 11 Indic languages. The dataset and models are available under open-source licenses at https://ai4bharat.iitm.ac.in/naamapadam.
Compared to standard named entity recognition (NER), identifying persons, locations, and organizations in historical texts constitutes a big challenge. To obtain machine-readable corpora, historical texts usually need to be scanned and optical character recognition (OCR) needs to be performed. As a result, historical corpora contain errors. Additionally, entities like locations or organizations can change over time, which poses another challenge. Overall, historical texts come with several peculiarities that differ greatly from modern texts, and large labeled corpora for training neural taggers are hardly available for this domain. In this work, we tackle NER for historical German, English, French, Swedish, and Finnish by training large historical language models. We circumvent the need for large amounts of labeled data by pretraining language models on unlabeled data. We propose hmBERT, a historical multilingual BERT-based language model, and release the model in several versions of different sizes. Furthermore, we evaluate the capability of hmBERT by solving downstream NER as part of this year's HIPE-2022 shared task and provide detailed analysis and insights. For the Multilingual Classical Commentary coarse-grained NER challenge, our tagger HISTeria outperforms the other teams' models for two out of three languages.
This paper presents a simple recipe for training state-of-the-art multilingual grammatical error correction (GEC) models. We achieve this by first proposing a language-agnostic method to generate a large number of synthetic examples. The second ingredient is the use of large-scale multilingual language models (up to 11B parameters). Once fine-tuned on language-specific supervised sets, we surpass the previous state-of-the-art results on GEC benchmarks in four languages: English, Czech, German, and Russian. Having established a new set of baselines for GEC, we make our results easily reproducible and accessible by releasing the cLang-8 dataset. It is produced by using our best model, which we call gT5, to clean the targets of the widely used but noisy Lang-8 dataset. cLang-8 greatly simplifies typical GEC training pipelines composed of multiple fine-tuning stages; we demonstrate that performing a single fine-tuning step on cLang-8 with an off-the-shelf language model further improves upon the already top-performing gT5 model for English.
Since structured data is often insufficient, labels need to be extracted from free text in electronic health records when developing models for clinical information retrieval and decision support systems. One of the most important contextual properties in clinical text is negation, which indicates the absence of findings. We aimed to improve the large-scale extraction of labels by comparing three negation detection methods for Dutch clinical notes. Using the Erasmus Medical Center Dutch Clinical Corpus, we compared a rule-based method based on ContextD, a BiLSTM model using MedCAT, and (finetuned) RoBERTa-based models. We found that both the BiLSTM and RoBERTa models consistently outperform the rule-based model in terms of F1 score, precision, and recall. In addition, we systematically categorized the classification errors of each model, which can be used to further improve model performance for particular applications. Combining the three models was not beneficial in terms of performance. We conclude that the BiLSTM and RoBERTa-based models in particular are highly accurate at detecting clinical negations, but that ultimately all three approaches can be viable depending on the use case at hand.