Clinical notes are an efficient way to record patient information, but they are difficult for non-experts to decipher. Automatically simplifying medical text can empower patients with valuable information about their health while saving time for clinicians. We present a new approach to the automatic simplification of medical text based on word frequencies and language modeling, grounded in a medical ontology enriched with layperson terms. We release a new dataset of pairs of publicly available medical sentences together with versions of them simplified by clinicians. We also define a novel text simplification metric and evaluation framework, which we use to conduct a large-scale human evaluation of our method against the state of the art. Our method, based on a language model trained on medical forum data, generates simpler sentences while preserving both grammaticality and the original meaning, surpassing the current state of the art.
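The frequency-based substitution idea at the heart of such approaches is easy to sketch. The snippet below is a minimal illustration under stated assumptions, not the authors' system: it flags rare words with the `wordfreq` package, and the `lay_terms` dictionary stands in for the ontology-derived layperson mappings; the language-model reranking step described in the abstract is omitted for brevity.

```python
# Minimal sketch of frequency-based medical text simplification.
# Assumes: pip install wordfreq
# lay_terms is a stand-in for an ontology enriched with layperson synonyms.
import re
from wordfreq import zipf_frequency

lay_terms = {
    "hypertension": "high blood pressure",
    "dyspnea": "shortness of breath",
    "edema": "swelling",
}

def mean_zipf(phrase: str) -> float:
    """Average Zipf frequency of the words in a phrase (higher = more common)."""
    words = phrase.split()
    return sum(zipf_frequency(w, "en") for w in words) / len(words)

def simplify(sentence: str, zipf_threshold: float = 3.5) -> str:
    def replace(match: re.Match) -> str:
        word = match.group(0)
        lay = lay_terms.get(word.lower())
        # Substitute only when the original is rare and the lay term is more common.
        if (lay and zipf_frequency(word.lower(), "en") < zipf_threshold
                and mean_zipf(lay) > zipf_threshold):
            return lay
        return word
    return re.sub(r"[A-Za-z]+", replace, sentence)

print(simplify("The patient reports dyspnea and peripheral edema."))
# e.g. -> "The patient reports shortness of breath and peripheral swelling."
```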
Even in highly developed countries, up to 15-30% of the population can only understand texts written using a basic vocabulary. Their understanding of everyday texts is limited, which prevents them from playing an active role in society and making informed decisions about healthcare, legal representation, or democratic choices. Lexical simplification is a natural language processing task that aims to make text understandable to everyone by replacing complex vocabulary and expressions with simpler ones, while preserving the original meaning. It has attracted considerable attention over the last 20 years, and fully automatic lexical simplification systems have been proposed for various languages. The main obstacle to progress in the field is the lack of high-quality datasets for building and evaluating lexical simplification systems. We present a new benchmark dataset for lexical simplification in English, Spanish, and (Brazilian) Portuguese, and provide details about data selection and annotation procedures. This is the first dataset that enables a direct comparison of lexical simplification systems across three languages. To showcase the usability of the dataset, we adapt two state-of-the-art lexical simplification systems with differing architectures (neural vs. non-neural) to all three languages (English, Spanish, and Brazilian Portuguese) and evaluate their performance on our new dataset. For a fairer comparison, we use several evaluation measures that capture different aspects of system effectiveness, and we discuss their strengths and weaknesses. We find that the state-of-the-art neural lexical simplification systems outperform the state-of-the-art non-neural ones in all three languages. More importantly, we find that the state-of-the-art neural lexical simplification systems perform significantly better for English than for Spanish and Portuguese.
This report summarizes the work carried out by the authors during the Twelfth Montreal Industrial Problem Solving Workshop, held at Université de Montréal in August 2022. The team tackled a problem submitted by CBC/Radio-Canada on the theme of Automatic Text Simplification (ATS).
Grammatical Error Correction (GEC) is the task of automatically detecting and correcting errors in text. The task not only includes the correction of grammatical errors, such as missing prepositions and mismatched subject-verb agreement, but also orthographic and semantic errors, such as misspellings and word choice errors respectively. The field has seen significant progress in the last decade, motivated in part by a series of five shared tasks, which drove the development of rule-based methods, statistical classifiers, statistical machine translation, and finally neural machine translation systems which represent the current dominant state of the art. In this survey paper, we condense the field into a single article and first outline some of the linguistic challenges of the task, introduce the most popular datasets that are available to researchers (for both English and other languages), and summarise the various methods and techniques that have been developed with a particular focus on artificial error generation. We next describe the many different approaches to evaluation as well as concerns surrounding metric reliability, especially in relation to subjective human judgements, before concluding with an overview of recent progress and suggestions for future work and remaining challenges. We hope that this survey will serve as a comprehensive resource for researchers who are new to the field or who want to be kept apprised of recent developments.
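To give a flavor of the artificial error generation techniques the survey highlights, the sketch below corrupts clean text with simple rule-based noise (preposition deletion, subject-verb agreement swaps, character transpositions). It is an illustrative toy corruptor under our own assumptions, not a method from any particular system.

```python
# Toy rule-based error generator for synthesizing GEC training pairs.
import random

PREPOSITIONS = {"in", "on", "at", "of", "to", "for", "with"}
AGREEMENT_SWAPS = {"is": "are", "are": "is", "has": "have", "have": "has"}

def corrupt(sentence: str, p: float = 0.3, seed: int = 0) -> str:
    rng = random.Random(seed)
    out = []
    for word in sentence.split():
        r = rng.random()
        if word.lower() in PREPOSITIONS and r < p:
            continue  # drop a preposition
        if word.lower() in AGREEMENT_SWAPS and r < p:
            out.append(AGREEMENT_SWAPS[word.lower()])  # break agreement
            continue
        if len(word) > 4 and r < p / 2:
            i = rng.randrange(len(word) - 1)
            word = word[:i] + word[i + 1] + word[i] + word[i + 2:]  # transpose chars
        out.append(word)
    return " ".join(out)

clean = "The results of the experiment are reported in the appendix."
print(corrupt(clean), "->", clean)  # a (noisy, clean) training pair
```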
Training learnable metrics using modern language models has recently emerged as a promising method for the automatic evaluation of machine translation. However, existing human evaluation datasets in text simplification are limited by a lack of annotations, unitary simplification types, and outdated models, making them unsuitable for this approach. To address these issues, we introduce the SIMPEVAL corpus that contains: SIMPEVAL_ASSET, comprising 12K human ratings on 2.4K simplifications of 24 systems, and SIMPEVAL_2022, a challenging simplification benchmark consisting of over 1K human ratings of 360 simplifications including generations from GPT-3.5. Training on SIMPEVAL_ASSET, we present LENS, a Learnable Evaluation Metric for Text Simplification. Extensive empirical results show that LENS correlates better with human judgment than existing metrics, paving the way for future progress in the evaluation of text simplification. To create the SIMPEVAL datasets, we introduce RANK & RATE, a human evaluation framework that rates simplifications from several models in a list-wise manner by leveraging an interactive interface, which ensures both consistency and accuracy in the evaluation process. Our metric, dataset, and annotation toolkit are available at https://github.com/Yao-Dou/LENS.
The word alignment task, despite its prominence in the era of statistical machine translation (SMT), is niche and under-explored today. In this two-part tutorial, we argue for the continued relevance for word alignment. The first part provides a historical background to word alignment as a core component of the traditional SMT pipeline. We zero-in on GIZA++, an unsupervised, statistical word aligner with surprising longevity. Jumping forward to the era of neural machine translation (NMT), we show how insights from word alignment inspired the attention mechanism fundamental to present-day NMT. The second part shifts to a survey approach. We cover neural word aligners, showing the slow but steady progress towards surpassing GIZA++ performance. Finally, we cover the present-day applications of word alignment, from cross-lingual annotation projection, to improving translation.
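To make the link between attention and alignment concrete, the sketch below extracts a hard alignment from an attention matrix by taking, for each target token, the most-attended source token. This argmax heuristic is a common baseline, not the full neural aligner pipeline the tutorial surveys; the attention weights here are fabricated for illustration.

```python
# Extracting hard word alignments from an NMT attention matrix (toy example).
import numpy as np

src = ["das", "haus", "ist", "klein"]
tgt = ["the", "house", "is", "small"]

# attn[i, j]: attention weight of target token i on source token j.
# Fabricated here; in practice it comes from the model's cross-attention.
attn = np.array([
    [0.70, 0.10, 0.10, 0.10],
    [0.10, 0.80, 0.05, 0.05],
    [0.10, 0.10, 0.70, 0.10],
    [0.05, 0.05, 0.10, 0.80],
])

# Argmax heuristic: each target word aligns to its most-attended source word.
alignment = [(i, int(attn[i].argmax())) for i in range(len(tgt))]
for i, j in alignment:
    print(f"{tgt[i]} -> {src[j]}")
```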
As machine translation (MT) metrics improve their correlation with human judgement every year, it is crucial to understand the limitations of such metrics at the segment level. Specifically, it is important to investigate metric behaviour when facing accuracy errors in MT because these can have dangerous consequences in certain contexts (e.g., legal, medical). We curate ACES, a translation accuracy challenge set, consisting of 68 phenomena ranging from simple perturbations at the word/character level to more complex errors based on discourse and real-world knowledge. We use ACES to evaluate a wide range of MT metrics including the submissions to the WMT 2022 metrics shared task and perform several analyses leading to general recommendations for metric developers. We recommend: a) combining metrics with different strengths, b) developing metrics that give more weight to the source and less to surface-level overlap with the reference and c) explicitly modelling additional language-specific information beyond what is available via multilingual embeddings.
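The flavor of such a challenge set is easy to illustrate: take a correct translation and inject a meaning-changing accuracy error. The sketch below shows two toy perturbations (number substitution and negation deletion) in the spirit of, but not taken from, ACES.

```python
# Toy accuracy-error perturbations in the spirit of challenge sets like ACES.
import re

def perturb_number(translation: str) -> str:
    """Replace the first number with a different one (meaning-changing)."""
    return re.sub(r"\d+", lambda m: str(int(m.group(0)) + 10), translation, count=1)

def drop_negation(translation: str) -> str:
    """Delete 'not' to flip the meaning."""
    return re.sub(r"\bnot\b ?", "", translation, count=1)

good = "The patient should not take more than 20 mg per day."
print(perturb_number(good))  # ... more than 30 mg ...
print(drop_negation(good))   # ... should take more than 20 mg ...
# A robust metric should score both perturbed outputs well below `good`.
```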
The processing of legal texts has been an emerging area of development in natural language processing (NLP). Legal texts contain unique terminology and complex linguistic properties in their lexicon, semantics, syntax, and morphology. Therefore, the development of text simplification (TS) methods specific to the legal domain is essential for making legal texts accessible to laypeople and for providing input to high-level models for mainstream legal NLP applications. While a recent study proposed a rule-based TS method for legal text, learning-based TS in the legal domain has not previously been considered. Here we introduce an unsupervised simplification method for legal texts (USLT). USLT performs domain-specific TS by replacing complex words and splitting long sentences. To this end, USLT detects complex words in a sentence, generates candidates via a masked transformer model, and selects replacement candidates according to a ranking score. Afterwards, USLT recursively decomposes long sentences into a hierarchy of shorter core and context sentences while preserving semantic meaning. We demonstrate that USLT outperforms state-of-the-art domain-general TS methods in text simplicity while keeping the semantics intact.
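The candidate-generation step described here maps naturally onto a masked-LM fill-in. The sketch below is a generic reconstruction of that step under our own assumptions, not the released USLT code: it masks a complex word, asks a masked transformer for candidates, and ranks them by a score mixing contextual probability and word frequency. The sentence-splitting stage is not covered.

```python
# Sketch of masked-LM candidate generation and ranking for complex-word
# substitution, in the spirit of USLT (not the authors' implementation).
# Assumes: pip install transformers torch wordfreq
from transformers import pipeline
from wordfreq import zipf_frequency

fill = pipeline("fill-mask", model="bert-base-uncased")

def substitute(sentence: str, complex_word: str, alpha: float = 0.5) -> str:
    masked = sentence.replace(complex_word, fill.tokenizer.mask_token, 1)
    best_word, best_score = complex_word, float("-inf")
    for cand in fill(masked, top_k=20):
        word = cand["token_str"].strip()
        if word.lower() == complex_word.lower() or not word.isalpha():
            continue
        # Rank by a mix of contextual plausibility and simplicity (frequency);
        # the 0.5 weighting and the /8 normalization are our choices.
        score = alpha * cand["score"] + (1 - alpha) * zipf_frequency(word, "en") / 8.0
        if score > best_score:
            best_word, best_score = word, score
    return sentence.replace(complex_word, best_word, 1)

print(substitute("The plaintiff sought remuneration for the damages.", "remuneration"))
```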
A lack of standard datasets and evaluation metrics has prevented the field of paraphrasing from making the kind of rapid progress enjoyed by the machine translation community over the last 15 years. We address both problems by presenting a novel data collection framework that produces highly parallel text data relatively inexpensively and on a large scale. The highly parallel nature of this data allows us to use simple n-gram comparisons to measure both the semantic adequacy and lexical dissimilarity of paraphrase candidates. In addition to being simple and efficient to compute, experiments show that these metrics correlate highly with human judgments.
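The metric idea, scoring a paraphrase candidate by n-gram overlap with references (high overlap means semantically adequate) and by n-gram overlap with the source (low overlap means lexically dissimilar), can be sketched directly. The snippet below is a simplified bigram illustration of that idea, not the paper's exact formulas.

```python
# Simplified sketch of n-gram based paraphrase scoring: semantic adequacy as
# overlap with a reference, lexical dissimilarity as (1 - overlap with source).
from collections import Counter

def ngrams(tokens, n):
    return Counter(zip(*[tokens[i:] for i in range(n)]))

def overlap(a: str, b: str, n: int = 2) -> float:
    """Clipped n-gram precision of a against b."""
    na, nb = ngrams(a.lower().split(), n), ngrams(b.lower().split(), n)
    matched = sum(min(count, nb[g]) for g, count in na.items())
    return matched / max(1, sum(na.values()))

source    = "the quick brown fox jumps over the lazy dog"
candidate = "a fast brown fox leaps over the lazy dog"
reference = "a fast brown fox leaps over a sleepy dog"

adequacy      = overlap(candidate, reference)   # want high
dissimilarity = 1 - overlap(candidate, source)  # want high
print(f"adequacy={adequacy:.2f} dissimilarity={dissimilarity:.2f}")
```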
Text style transfer is an important task in natural language generation, aiming to control certain attributes of the generated text, such as politeness, emotion, humor, and many others. It has a long history in the field of natural language processing and has recently gained significant attention thanks to the promising performance brought by deep neural models. In this paper, we present a systematic survey of the research on neural text style transfer, spanning over 100 representative articles since the first neural text style transfer work in 2017. We discuss the task formulation, existing datasets and subtasks, evaluation, as well as the rich methodologies in the presence of parallel and non-parallel data. We also provide discussions on a variety of important topics regarding the future development of this task. Our curated paper list is at https://github.com/zhijing-jin/text_style_transfer_survey
We present two novel unsupervised methods for eliminating toxicity in text. Our first method combines two recent ideas: (1) guidance of the generation process with small conditional language models and (2) use of paraphrasing models for style transfer. We use a well-performing paraphraser guided by style-trained language models to keep the content of the text and remove toxicity. Our second method uses BERT to replace toxic words with their non-offensive synonyms. We make the method more flexible by enabling BERT to replace masked tokens with a variable number of words. Finally, we present the first large-scale comparative study of style transfer models on the task of toxicity removal. We compare our models with a number of methods for style transfer. The models are evaluated in a reference-free way using a combination of unsupervised style transfer metrics. Both methods we suggest yield new SOTA results.
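The second method, mask-and-replace with BERT, is straightforward to sketch. The code below is a single-word toy version under our own assumptions (the paper's variant can insert a variable number of words): it masks words from a small toxic lexicon and takes the highest-scoring candidate that is itself non-toxic.

```python
# Toy single-word version of BERT-based detoxification: mask toxic words and
# pick the best non-toxic replacement. The lexicon here is a tiny stand-in.
# Assumes: pip install transformers torch
from transformers import pipeline

TOXIC = {"stupid", "idiotic", "dumb"}
fill = pipeline("fill-mask", model="bert-base-uncased")

def detoxify(sentence: str) -> str:
    for word in sentence.split():
        bare = word.strip(".,!?").lower()
        if bare in TOXIC:
            masked = sentence.replace(word, fill.tokenizer.mask_token, 1)
            # Take the most probable candidate that is not itself toxic.
            for cand in fill(masked, top_k=20):
                if cand["token_str"].strip().lower() not in TOXIC:
                    sentence = masked.replace(
                        fill.tokenizer.mask_token, cand["token_str"].strip(), 1)
                    break
    return sentence

print(detoxify("That was a stupid idea."))
```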
Health literacy is the central focus of Healthy People 2030, the fifth iteration of the U.S. national goals and objectives. People with low health literacy often struggle to follow post-visit instructions and to use prescriptions correctly, which leads to worse health outcomes and severe health disparities. In this study, we propose to leverage natural language processing techniques to improve the health literacy of patient education materials by automatically translating health-illiterate language in a given sentence. We scraped patient education materials from four online health information websites: medlineplus.gov, drugs.com, mayoclinic.org, and reddit.com. We trained and tested state-of-the-art neural machine translation (NMT) models on a silver-standard training dataset and a gold-standard test dataset, respectively. Experimental results show that the bidirectional long short-term memory (BiLSTM) NMT model outperformed the Bidirectional Encoder Representations from Transformers (BERT)-based NMT model. We also verified the effectiveness of the NMT models in translating health-illiterate language by comparing the ratio of health-illiterate language in the sentences. The proposed NMT models were able to identify the correct complex words and simplify them into layman's language, while the models suffered in terms of sentence completeness, fluency, and readability, and had difficulty translating certain medical terms.
Natural Language Generation (NLG) has improved exponentially in recent years thanks to the development of sequence-to-sequence deep learning technologies such as Transformer-based language models. This advancement has led to more fluent and coherent NLG, leading to improved development in downstream tasks such as abstractive summarization, dialogue generation and data-to-text generation. However, it is also apparent that deep learning based generation is prone to hallucinate unintended text, which degrades the system performance and fails to meet user expectations in many real-world scenarios. To address this issue, many studies have been presented in measuring and mitigating hallucinated texts, but these have never been reviewed in a comprehensive manner before. In this survey, we thus provide a broad overview of the research progress and challenges in the hallucination problem in NLG. The survey is organized into two parts: (1) a general overview of metrics, mitigation methods, and future directions; and (2) an overview of task-specific research progress on hallucinations in the following downstream tasks, namely abstractive summarization, dialogue generation, generative question answering, data-to-text generation, machine translation, and visual-language generation. This survey serves to facilitate collaborative efforts among researchers in tackling the challenge of hallucinated texts in NLG.
We report on novel investigations into training models that make sentences concise. We define the task and show that it is different from related tasks such as summarization and simplification. For evaluation, we release two test sets, consisting of 2000 sentences each, that were annotated by two and five human annotators, respectively. We demonstrate that conciseness is a difficult task for which zero-shot setups with large neural language models often do not perform well. Given the limitations of these approaches, we propose a synthetic data generation method based on round-trip translations. Using this data to either train Transformers from scratch or fine-tune T5 models yields our strongest baselines that can be further improved by fine-tuning on an artificial conciseness dataset that we derived from multi-annotator machine translation test sets.
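Round-trip translation as a data generator is easy to reproduce with off-the-shelf MT models. The sketch below pivots English through German; the choice of the Helsinki-NLP Marian models and the crude length-based filter are our assumptions, not the paper's setup, which applies its own filtering to build training pairs.

```python
# Sketch: generating synthetic (verbose, concise) candidate pairs via
# round-trip translation. Model choice (Marian en<->de) is ours.
# Assumes: pip install transformers torch sentencepiece
from transformers import pipeline

en_de = pipeline("translation", model="Helsinki-NLP/opus-mt-en-de")
de_en = pipeline("translation", model="Helsinki-NLP/opus-mt-de-en")

def round_trip(sentence: str) -> str:
    german = en_de(sentence)[0]["translation_text"]
    return de_en(german)[0]["translation_text"]

original = "It is absolutely essential that we take into consideration all of the factors."
paraphrase = round_trip(original)
# Keep the pair only if the round trip came back shorter (a crude conciseness filter).
if len(paraphrase.split()) < len(original.split()):
    print((original, paraphrase))
```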
Data augmentation is an important component of the robustness evaluation of natural language processing (NLP) models, as well as of enhancing the diversity of the data they are trained on. In this paper, we present NL-Augmenter, a new participatory Python-based natural language augmentation framework which supports the creation of both transformations (modifications to the data) and filters (data splits according to specific features). We describe the framework and an initial set of 117 transformations and 23 filters for a variety of natural language tasks. We demonstrate the efficacy of NL-Augmenter by using several of its transformations to analyze the robustness of popular natural language models. The infrastructure, datacards, and robustness analysis results are publicly available on the NL-Augmenter repository (https://github.com/gem-benchmark/nl-augmenter).
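To give a feel for the two operation types, the sketch below implements a standalone transformation and filter with the same shape as NL-Augmenter operations. The framework defines its own base classes and registration machinery, so treat the class and method names here as illustrative, not the framework's actual API.

```python
# Illustrative transformation and filter in the spirit of NL-Augmenter's two
# operation types. Interface names here are ours, not the framework's.
import random
from typing import List

class ButterFingers:
    """Transformation: perturb characters to simulate typos."""
    NEIGHBORS = {"a": "qs", "e": "wr", "i": "uo", "o": "ip", "t": "ry"}

    def generate(self, sentence: str, prob: float = 0.1, seed: int = 0) -> List[str]:
        rng = random.Random(seed)
        chars = [
            rng.choice(self.NEIGHBORS[c])
            if c in self.NEIGHBORS and rng.random() < prob else c
            for c in sentence
        ]
        return ["".join(chars)]

class ShortSentenceFilter:
    """Filter: keep only examples whose sentence is short."""
    def filter(self, sentence: str, max_tokens: int = 10) -> bool:
        return len(sentence.split()) <= max_tokens

print(ButterFingers().generate("the quick brown fox jumps over the lazy dog"))
print(ShortSentenceFilter().filter("a short one"))
```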
Obtaining text datasets with semantic annotations is a laborious process, yet essential for supervised training in natural language processing (NLP). In general, developing and applying new NLP pipelines in domain-specific contexts often requires custom-designed datasets to address NLP tasks in a supervised machine learning fashion. When operating on non-English languages for medical data processing, this exposes several minor and major interconnected problems, such as the lack of task-matching datasets as well as task-specific pre-trained models. In our work, we propose to leverage pre-trained language models for training data acquisition in order to retrieve datasets large enough to train smaller, more efficient models for specific downstream tasks. To demonstrate the effectiveness of our approach, we create a custom dataset which we use to train a medical NER model for German texts, although in principle our approach remains language-independent. Our obtained dataset as well as our pre-trained models are publicly available at: https://github.com/frankkramer-lab/gptnermed
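The data-acquisition idea, prompting a large LM to emit sentences with inline entity markup and parsing them into token-level labels for training a smaller model, can be sketched as below. The prompt, the markup convention, and the `llm_generate` placeholder are our assumptions for illustration; the paper targets German medical entities.

```python
# Sketch of LLM-based training-data acquisition for NER: ask a large model for
# sentences with inline entity markup, then parse them into BIO-tagged tokens.
# `llm_generate` is a placeholder for any text-generation API.
import re

def llm_generate(prompt: str) -> str:
    # Placeholder: a real call to a large pretrained LM goes here.
    return "Der Patient erhielt [MED]Ibuprofen[/MED] gegen [DIAG]Kopfschmerzen[/DIAG] ."

def parse_markup(text: str):
    tokens, labels = [], []
    # Split while keeping the [TAG]...[/TAG] chunks as delimiters.
    for chunk in re.split(r"(\[\w+\].*?\[/\w+\])", text):
        m = re.match(r"\[(\w+)\](.*?)\[/\w+\]", chunk)
        if m:
            tag, span = m.groups()
            for i, tok in enumerate(span.split()):
                tokens.append(tok)
                labels.append(("B-" if i == 0 else "I-") + tag)
        else:
            for tok in chunk.split():
                tokens.append(tok)
                labels.append("O")
    return tokens, labels

prompt = "Schreibe einen klinischen Satz und markiere Medikamente als [MED]...[/MED]."
print(parse_markup(llm_generate(prompt)))
```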
In this work, we introduce IndicXTREME, a benchmark consisting of nine diverse tasks covering 18 languages from the Indic sub-continent belonging to four different families. Across languages and tasks, IndicXTREME contains a total of 103 evaluation sets, of which 51 are new contributions to the literature. To maintain high quality, we only use human annotators to curate or translate\footnote{for IndicXParaphrase, where an automatic translation system is used, a second human verification and correction step is done.} our datasets. To the best of our knowledge, this is the first effort toward creating a standard benchmark for Indic languages that aims to test the zero-shot capabilities of pretrained language models. We also release IndicCorp v2, an updated and much larger version of IndicCorp that contains 20.9 billion tokens in 24 languages. We pretrain IndicBERT v2 on IndicCorp v2 and evaluate it on IndicXTREME to show that it outperforms existing multilingual language models such as XLM-R and MuRIL.
Expert-layman text style transfer technologies have the potential to improve communication between members of scientific communities and the general public. High-quality information produced by experts is often filled with difficult jargon that laypeople struggle to understand. This is a particularly notable issue in the medical domain, where laypeople are often confused by medical texts online. Currently, two bottlenecks interfere with the goal of building high-quality medical expert-layman style transfer systems: a dearth of pretrained medical-domain language models spanning both expert and layman terminologies, and the lack of parallel corpora for training the transfer task itself. To mitigate the first issue, we propose a novel language model (LM) pretraining task, knowledge base assimilation, to synthesize pretraining data from the edges of a graph of expert- and layman-style medical terminology terms into an LM during self-supervised learning. To mitigate the second issue, we build a large-scale parallel corpus in the medical expert-layman domain using a margin-based criterion. Our experiments show that transformer-based models pretrained with knowledge base assimilation and other well-established pretraining tasks, then fine-tuned on our new parallel corpus, achieve considerable improvements on expert-layman transfer benchmarks, reaching an average relative improvement of 106% in overall success rate (OSR) in our human evaluation. We release our code and parallel corpus for future research.
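The pretraining-data synthesis step can be illustrated with a toy version: take edges linking expert and layman terms, and build self-supervised examples in which one side of the edge is substituted or masked in context. This is a schematic of the idea under our own assumptions, not the paper's exact procedure.

```python
# Toy illustration of synthesizing self-supervised pretraining examples from
# expert<->layman terminology edges (schematic; not the paper's procedure).
EDGES = {
    "myocardial infarction": "heart attack",
    "cephalalgia": "headache",
}

def synthesize(sentence: str):
    examples = []
    for expert, lay in EDGES.items():
        if expert in sentence:
            # Substitution example: the same context with the lay-style term,
            # so the LM sees both sides of the edge in identical contexts.
            examples.append(sentence.replace(expert, lay))
            # Masking example: recover the expert term from context.
            examples.append((sentence.replace(expert, "[MASK]"), expert))
    return examples

for ex in synthesize("The patient suffered a myocardial infarction last year."):
    print(ex)
```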
Standard automatic metrics, such as BLEU, are unreliable for document-level MT evaluation. They can neither distinguish document-level improvements in translation quality from sentence-level ones, nor identify the discourse phenomena that cause context-agnostic translations. This paper introduces a novel automatic metric, BlonDe, to widen the scope of automatic MT evaluation from the sentence level to the document level. BlonDe takes discourse coherence into consideration by categorizing discourse-related spans and calculating a similarity-based F1 measure over the categorized spans. We conduct extensive comparisons on a newly constructed dataset, BWB. The experimental results show that BlonDe possesses better selectivity and interpretability at the document level and is more sensitive to document-level nuances. In a large-scale human study, BlonDe also achieves significantly higher Pearson correlation with human judgments than previous metrics.
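The core of a category-wise span F1 like the one described here can be sketched as below: count matches between spans extracted from the system output and the reference within each discourse category, then combine precision and recall. This is a schematic exact-match version; BlonDe itself uses similarity-based matching.

```python
# Schematic category-wise span F1 (exact match); BlonDe uses similarity-based
# matching over discourse-related span categories such as pronouns and entities.
from collections import Counter

def span_f1(sys_spans: dict, ref_spans: dict) -> float:
    tp = fp = fn = 0
    for cat in set(sys_spans) | set(ref_spans):
        sys_c = Counter(sys_spans.get(cat, []))
        ref_c = Counter(ref_spans.get(cat, []))
        match = sum(min(count, ref_c[s]) for s, count in sys_c.items())
        tp += match
        fp += sum(sys_c.values()) - match
        fn += sum(ref_c.values()) - match
    precision = tp / max(1, tp + fp)
    recall = tp / max(1, tp + fn)
    return 2 * precision * recall / max(1e-9, precision + recall)

sys_spans = {"pronoun": ["she", "it"], "entity": ["Mary"]}
ref_spans = {"pronoun": ["she", "she"], "entity": ["Mary"]}
print(f"{span_f1(sys_spans, ref_spans):.2f}")  # 0.67
```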
Sentence simplification aims at making the structure of text easier to read and understand while maintaining its original meaning. This can be helpful for people with disabilities, new language learners, or those with low literacy. Simplification often involves removing difficult words and rephrasing the sentence. Previous research has focused on tackling this task by either using external linguistic databases for simplification or by using control tokens for desired fine-tuning of sentences. However, in this paper we purely use pre-trained transformer models. We experiment with a combination of GPT-2 and BERT models, achieving the best SARI score of 46.80 on the Mechanical Turk dataset, which is significantly better than previous state-of-the-art results. The code can be found at https://github.com/amanbasu/sentence-simplification.
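For readers unfamiliar with SARI, a simplified unigram version of the metric is sketched below: it rewards words that are correctly added, kept, and deleted relative to the source and reference. The real SARI averages over n-gram orders and multiple references; see full implementations such as the one in EASSE for the actual metric.

```python
# Simplified unigram SARI sketch; the real metric averages over n-gram orders
# and multiple references.
def f1(p: float, r: float) -> float:
    return 2 * p * r / (p + r) if p + r else 0.0

def sari_unigram(source: str, output: str, reference: str) -> float:
    s, o, r = (set(x.lower().split()) for x in (source, output, reference))
    add_good  = (o - s) & (r - s)   # correctly introduced words
    keep_good = (o & s) & (r & s)   # correctly retained words
    del_good  = (s - o) & (s - r)   # correctly removed words
    add  = f1(len(add_good) / max(1, len(o - s)), len(add_good) / max(1, len(r - s)))
    keep = f1(len(keep_good) / max(1, len(o & s)), len(keep_good) / max(1, len(r & s)))
    dele = len(del_good) / max(1, len(s - o))  # SARI uses precision only for deletion
    return 100 * (add + keep + dele) / 3

src = "the cat perched on the mat"
out = "the cat sat on the mat"
ref = "the cat sat on the mat"
print(f"{sari_unigram(src, out, ref):.1f}")  # 100.0 for a perfect match
```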