Abbreviations and contractions are commonly found in text across different domains. For example, doctors' notes contain many contractions that can be personalized according to their preferences. Existing spelling-correction models are not suitable for handling expansions, because a word may lose many of its characters when contracted. In this work we propose ABB-BERT, a BERT-based model that deals with ambiguous language containing abbreviations and contractions. ABB-BERT can rank candidates from among thousands of options and is designed for scale. It is trained on Wikipedia text, and the algorithm allows it to be fine-tuned with little compute to obtain better performance for a particular domain or person. We publicly release the training dataset of abbreviations and contractions derived from Wikipedia.
The biggest challenge in building a chatbot is training data: the data must be realistic enough for the chatbot to learn from. We create a tool for acquiring actual training data from the Facebook Messenger of a Facebook page. After a text preprocessing step, the newly obtained data yields the FVNC and Sample datasets. We use PhoBERT (BERT for Vietnamese) to extract features from the text data. The K-means and DBSCAN clustering algorithms are applied to the clustering task on the output embeddings of PhoBERT$_{base}$. We use the V-measure score and the Silhouette score to evaluate the performance of the clustering algorithms, and we also show the efficiency of PhoBERT for feature extraction compared with other models on the Sample dataset and a Wiki dataset. A GridSearch algorithm that combines both clustering evaluations is also proposed to find the optimal parameters. By clustering such a large number of conversations, we save a great deal of time and effort in building the data and storylines for training the chatbot.
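As an illustration of the kind of pipeline described above, the sketch below embeds a few placeholder messages with PhoBERT and clusters them with K-means and DBSCAN, scoring the result with the Silhouette and V-measure metrics. It assumes the Hugging Face `transformers` and `scikit-learn` packages and the public `vinai/phobert-base` checkpoint; the example texts and reference labels are made up and are not from the datasets described above.

```python
# Sketch: PhoBERT sentence embeddings clustered with K-means / DBSCAN,
# evaluated with Silhouette and V-measure. Placeholder data only.
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.cluster import KMeans, DBSCAN
from sklearn.metrics import silhouette_score, v_measure_score

tokenizer = AutoTokenizer.from_pretrained("vinai/phobert-base")
model = AutoModel.from_pretrained("vinai/phobert-base")

messages = ["xin chào", "tôi cần hỗ trợ", "giá sản phẩm là bao nhiêu"]  # placeholder texts

with torch.no_grad():
    enc = tokenizer(messages, padding=True, truncation=True, return_tensors="pt")
    # Use the first-token hidden state as the sentence embedding.
    emb = model(**enc).last_hidden_state[:, 0, :].numpy()

kmeans_labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(emb)
dbscan_labels = DBSCAN(eps=5.0, min_samples=2).fit_predict(emb)

print("silhouette (k-means):", silhouette_score(emb, kmeans_labels))
# V-measure needs reference labels; made-up gold labels are used here for illustration.
print("v-measure  (k-means):", v_measure_score([0, 1, 1], kmeans_labels))
print("dbscan clusters found:", len(set(dbscan_labels) - {-1}))
```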
Most unsupervised NLP models represent each word with a single point or a single region in semantic space, while existing multi-sense word embeddings cannot represent longer word sequences such as phrases or sentences. We propose a novel embedding method for text sequences (phrases or sentences) in which each sequence is represented by a distinct set of multi-mode codebook embeddings that capture the different semantic facets of its meaning. The codebook embeddings can be viewed as cluster centers that summarize the distribution of possibly co-occurring words in a pre-trained word embedding space. We introduce an end-to-end trainable neural model that directly predicts the set of cluster centers from an input text sequence at test time. Our experiments show that per-sentence codebook embeddings significantly improve performance on unsupervised sentence-similarity and extractive-summarization benchmarks. In phrase-similarity experiments, we find that the multi-facet embeddings provide interpretable semantic representations but do not outperform the single-facet baseline.
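As a rough, non-authoritative illustration of the codebook idea, the sketch below represents a sentence by the K cluster centers of its words' pre-trained GloVe vectors and compares two sentences facet by facet. The abstract's model predicts the centers with an end-to-end neural network; clustering the sentence's own word vectors here is only a stand-in under that simplification, and the embedding model and sentences are placeholders.

```python
# Stand-in for the codebook idea: a sentence is summarized by K cluster centers
# computed in a pre-trained word-embedding space (GloVe via gensim).
import numpy as np
import gensim.downloader as api
from sklearn.cluster import KMeans

vectors = api.load("glove-wiki-gigaword-50")  # small pre-trained embeddings

def codebook(sentence, k=3):
    words = [w for w in sentence.lower().split() if w in vectors]
    X = np.stack([vectors[w] for w in words])
    k = min(k, len(words))
    return KMeans(n_clusters=k, n_init=10, random_state=0).fit(X).cluster_centers_

def facet_similarity(a, b):
    # Match each facet of `a` to its closest facet of `b`, then average.
    ca, cb = codebook(a), codebook(b)
    sims = ca @ cb.T / (np.linalg.norm(ca, axis=1)[:, None] * np.linalg.norm(cb, axis=1)[None, :])
    return sims.max(axis=1).mean()

print(facet_similarity("the bank raised interest rates",
                       "the river bank was covered in reeds"))
```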
We study the fact-checking problem, which aims to identify the veracity of a given claim. Specifically, we focus on the task of Fact Extraction and VERification (FEVER) and its accompanying dataset. The task consists of retrieving relevant documents (and sentences) from Wikipedia and verifying whether the information in those documents supports or refutes a given claim. The task is crucial and can serve as a building block for applications such as fake-news detection and medical-claim verification. In this paper we aim at a better understanding of the challenges of the task by presenting the literature in a structured and comprehensive way. We describe the proposed methods by analysing the technical perspective of the different approaches and discussing their performance results on the FEVER dataset, which is the most well-studied and formally structured dataset for the fact extraction and verification task. We also conduct the largest experimental study to date aimed at identifying beneficial loss functions for the sentence-retrieval component. Our analysis shows that sampling negative sentences is important for improving performance and reducing computational complexity. Finally, we describe open issues and future challenges, and we motivate future research on the task.
Recent progress in pretraining language models on large textual corpora led to a surge of improvements for downstream NLP tasks. Whilst learning linguistic knowledge, these models may also be storing relational knowledge present in the training data, and may be able to answer queries structured as "fill-in-the-blank" cloze statements. Language models have many advantages over structured knowledge bases: they require no schema engineering, allow practitioners to query about an open class of relations, are easy to extend to more data, and require no human supervision to train. We present an in-depth analysis of the relational knowledge already present (without fine-tuning) in a wide range of state-of-the-art pretrained language models. We find that (i) without fine-tuning, BERT contains relational knowledge competitive with traditional NLP methods that have some access to oracle knowledge, (ii) BERT also does remarkably well on open-domain question answering against a supervised baseline, and (iii) certain types of factual knowledge are learned much more readily than others by standard language model pretraining approaches. The surprisingly strong ability of these models to recall factual knowledge without any fine-tuning demonstrates their potential as unsupervised open-domain QA systems. The code to reproduce our analysis is available at https://github.com/facebookresearch/LAMA.
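A minimal sketch of the cloze-style probing described above, assuming the Hugging Face `transformers` fill-mask pipeline and the public `bert-base-uncased` checkpoint; the query is an illustrative example rather than one of the LAMA probes.

```python
# Ask a pretrained masked language model to complete a relational cloze query.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")
for cand in fill("The capital of France is [MASK].", top_k=3):
    print(f"{cand['token_str']:>10}  {cand['score']:.3f}")
```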
Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts all text-based language problems into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled data sets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new "Colossal Clean Crawled Corpus", we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our data set, pre-trained models, and code.
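A small sketch of what the text-to-text framing looks like in practice, assuming the Hugging Face `transformers` package and the released `t5-small` checkpoint; the task prefixes follow the conventions that checkpoint was trained with, and the prompts are illustrative.

```python
# Every task is expressed as feeding a text prompt to the same model
# and reading text back out.
from transformers import T5ForConditionalGeneration, T5Tokenizer

tok = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

prompts = [
    "translate English to German: The house is wonderful.",
    "summarize: Transfer learning, where a model is first pre-trained on a "
    "data-rich task before being fine-tuned on a downstream task, has emerged "
    "as a powerful technique in natural language processing.",
]
for p in prompts:
    ids = tok(p, return_tensors="pt").input_ids
    out = model.generate(ids, max_new_tokens=40)
    print(tok.decode(out[0], skip_special_tokens=True))
```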
The cloze task is widely used to evaluate the language-understanding capability of NLP systems. However, most existing cloze tasks only require an NLP system to give the relatively best prediction for each input sample, rather than the absolute quality of all possible predictions in a way that is consistent across the input domain. A new task is therefore proposed: predicting whether a filler word in a cloze task is a good, neutral, or bad candidate. More elaborate versions can be extended to predicting more discrete classes or continuous scores. We focus on a subtask of SemEval 2022 Task 7, explore several possible architectures to solve this new task, provide a detailed comparison of them, and propose an ensemble method that improves on conventional models for this new task.
Telehealth helps facilitate access to medical professionals by enabling remote medical services for patients. These services have gradually become popular with the emergence of the necessary technological infrastructure. The benefits of telehealth have become even more apparent since the start of the COVID-19 crisis, as people have been reluctant to visit doctors in person during the pandemic. In this paper we focus on facilitating chat sessions between a doctor and a patient. We note that as the demand for telehealth services grows, the quality and efficiency of the chat experience can be critical. We therefore develop an intelligent automatic response-generation mechanism for medical conversations that helps doctors respond to consultation requests efficiently, especially during busy sessions. We explore more than 900,000 anonymized historical online messages between doctors and patients collected over nine months. We apply clustering algorithms to identify the doctors' most common responses and label the data manually accordingly. We then use this preprocessed data to train machine-learning algorithms to generate responses. The considered approach has two steps: a filtering (i.e., triggering) model that filters out infeasible patient messages, and a response generator that suggests the top-3 doctor responses for the messages that pass the triggering stage. The method achieves a precision@3 of 83.28% and shows robustness with respect to its parameters.
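The sketch below illustrates the two-step idea under strong simplifications: a TF-IDF plus logistic-regression trigger classifier and a second classifier over a fixed set of canned responses stand in for the authors' actual models, and the handful of labelled messages are invented for the example.

```python
# Step 1 filters out messages that should not be auto-answered;
# step 2 suggests the top-3 canned doctor responses. Toy data only.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Trigger model (1 = feasible to auto-respond, 0 = skip).
trigger = make_pipeline(TfidfVectorizer(), LogisticRegression())
trigger.fit(
    ["I have a mild headache", "see attached lab report",
     "thanks doctor", "sharp chest pain since morning"],
    [1, 0, 0, 1],
)

# Response generator over a fixed set of frequent doctor responses.
responder = make_pipeline(TfidfVectorizer(), LogisticRegression())
responder.fit(
    ["I have a mild headache", "sharp chest pain since morning", "my child has a fever"],
    ["take paracetamol and rest", "please visit the ER immediately",
     "monitor the temperature and keep hydrated"],
)

def suggest(message, k=3):
    if trigger.predict([message])[0] == 0:
        return []  # let the doctor answer manually
    probs = responder.predict_proba([message])[0]
    top = np.argsort(probs)[::-1][:k]
    return [responder.classes_[i] for i in top]

print(suggest("I woke up with a headache"))
```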
Even in highly developed countries, as much as 15-30% of the population can only understand texts written with a basic vocabulary. Their understanding of everyday texts is limited, which prevents them from playing an active role in society and making informed decisions about healthcare, legal representation, or democratic choices. Lexical simplification is a natural language processing task that aims to make text understandable to everyone by replacing complex vocabulary and expressions with simpler ones while preserving the original meaning. It has attracted considerable attention over the past 20 years, and fully automatic lexical simplification systems have been proposed for various languages. The main obstacle to progress in the field is the lack of high-quality datasets for building and evaluating lexical simplification systems. We present a new benchmark dataset for lexical simplification in English, Spanish, and (Brazilian) Portuguese, and provide details of the data selection and annotation procedures. It is the first dataset that allows a direct comparison of lexical simplification systems across three languages. To demonstrate the usability of the dataset, we adapt two state-of-the-art lexical simplification systems with different architectures (neural vs. non-neural) to all three languages (English, Spanish, and Brazilian Portuguese) and evaluate their performance on our new dataset. For a fairer comparison, we use several evaluation measures that capture different aspects of system efficacy and discuss their strengths and weaknesses. We find that the state-of-the-art neural lexical simplification systems outperform the state-of-the-art non-neural systems in all three languages. More importantly, we find that the state-of-the-art neural systems perform considerably better for English than for Spanish and Portuguese.
Transformers are widely used in NLP tasks. However, current approaches to leveraging transformers to understand language expose one weak spot: Number understanding. In some scenarios, numbers frequently occur, especially in semi-structured data like tables. But current approaches to rich-number tasks with transformer-based language models abandon or lose some of the numeracy information - e.g., breaking numbers into sub-word tokens - which leads to many number-related errors. In this paper, we propose the LUNA framework which improves the numerical reasoning and calculation capabilities of transformer-based language models. With the number plugin of NumTok and NumBed, LUNA represents each number as a whole to model input. With number pre-training, including regression loss and model distillation, LUNA bridges the gap between number and vocabulary embeddings. To the best of our knowledge, this is the first work that explicitly injects numeracy capability into language models using Number Plugins. Besides evaluating toy models on toy tasks, we evaluate LUNA on three large-scale transformer models (RoBERTa, BERT, TabBERT) over three different downstream tasks (TAT-QA, TabFact, CrediTrans), and observe that the performances of language models are consistently improved by LUNA. The augmented models also improve the official baseline of TAT-QA (EM: 50.15 -> 59.58) and achieve SOTA performance on CrediTrans (F1 = 86.17).
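The snippet below only demonstrates the sub-word fragmentation problem mentioned above using a standard BERT tokenizer; it is not the NumTok/NumBed plugin itself, and the example strings are made up.

```python
# A standard subword tokenizer breaks numbers into pieces,
# so the model never sees a number as a single unit.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
for text in ["Revenue grew to 12345.67 in 2020.", "Price: 1,048,576"]:
    print(text, "->", tok.tokenize(text))
# e.g. "12345" typically comes back as several word pieces such as '123', '##45'.
```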
Currently, the most widespread neural-network architecture for training language models is the so-called BERT, which has led to improvements on various natural language processing (NLP) tasks. In general, the larger the number of parameters in a BERT model, the better the results obtained on these NLP tasks. Unfortunately, memory consumption and training duration increase drastically with the size of these models. In this article we investigate various training techniques for smaller BERT models: we combine different methods from other BERT variants such as ALBERT, RoBERTa, and relative positional encoding. In addition, we propose two new fine-tuning modifications that lead to better performance: Class-Start-End tagging and a modified form of Linear Chain Conditional Random Fields. Furthermore, we introduce Whole-Word Attention, which reduces BERT's memory usage and leads to a small performance gain compared with classical Multi-Head Attention. We evaluate these techniques on five public German Named Entity Recognition (NER) tasks, two of which are introduced by this article.
Short text classification is a crucial and challenging aspect of Natural Language Processing. For this reason, there are numerous highly specialized short text classifiers. However, in recent short text research, State of the Art (SOTA) methods for traditional text classification, particularly the pure use of Transformers, have been unexploited. In this work, we examine the performance of a variety of short text classifiers as well as the top performing traditional text classifier. We further investigate the effects on two new real-world short text datasets in an effort to address the issue of becoming overly dependent on benchmark datasets with a limited number of characteristics. Our experiments unambiguously demonstrate that Transformers achieve SOTA accuracy on short text classification tasks, raising the question of whether specialized short text techniques are necessary.
Multiple choice questions (MCQs) are widely used in digital learning systems, as they allow for automating the assessment process. However, due to the increased digital literacy of students and the advent of social media platforms, MCQ tests are widely shared online, and teachers are continuously challenged to create new questions, which is an expensive and time-consuming task. A particularly sensitive aspect of MCQ creation is to devise relevant distractors, i.e., wrong answers that are not easily identifiable as being wrong. This paper studies how a large existing set of manually created answers and distractors for questions over a variety of domains, subjects, and languages can be leveraged to help teachers in creating new MCQs, by the smart reuse of existing distractors. We built several data-driven models based on context-aware question and distractor representations, and compared them with static feature-based models. The proposed models are evaluated with automated metrics and in a realistic user test with teachers. Both automatic and human evaluations indicate that context-aware models consistently outperform a static feature-based approach. For our best-performing context-aware model, on average 3 distractors out of the 10 shown to teachers were rated as high-quality distractors. We create a performance benchmark, and make it public, to enable comparison between different approaches and to introduce a more standardized evaluation of the task. The benchmark contains a test of 298 educational questions covering multiple subjects & languages and a 77k multilingual pool of distractor vocabulary for future research.
An acronym is an abbreviated unit of a phrase, constructed from the initial components of that phrase in a text. Automatically extracting acronyms from text can help various natural language processing tasks such as machine translation, information retrieval, and text summarization. This paper discusses an ensemble approach to the acronym extraction task that uses two different methods to extract acronyms and their corresponding long forms. The first method uses a multilingual contextual language model and fine-tunes it to perform the task. The second method relies on a convolutional neural network architecture to extract acronyms and appends them to the output of the previous method. We also augment the official training dataset with additional training samples extracted from several open-access journals to help improve task performance. Our dataset analysis also highlights the noise in the current task dataset. Our approach achieves the following macro-F1 scores on the test data released with the task: Danish (0.74), English-Legal (0.72), English-Scientific (0.73), French (0.63), Persian (0.57), Spanish (0.65), Vietnamese (0.65). We release our code and models publicly.
We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers. Unlike recent language representation models (Peters et al., 2018a; Radford et al., 2018), BERT is designed to pretrain deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT model can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications. BERT is conceptually simple and empirically powerful. It obtains new state-of-the-art results on eleven natural language processing tasks, including pushing the GLUE score to 80.5% (7.7% point absolute improvement), MultiNLI accuracy to 86.7% (4.6% absolute improvement), SQuAD v1.1 question answering Test F1 to 93.2 (1.5 point absolute improvement) and SQuAD v2.0 Test F1 to 83.1 (5.1 point absolute improvement).
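As a sketch of what fine-tuning with "just one additional output layer" looks like in practice, the snippet below puts a linear classification head on top of the pre-trained encoder via the Hugging Face sequence-classification wrapper and runs a single training step; the two-sentence batch and its labels are illustrative, not from any benchmark.

```python
# A linear classification head is added on top of the pre-trained encoder
# and the whole model is trained end-to-end.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

batch = tok(["a delightful film", "a tedious, overlong mess"],
            padding=True, return_tensors="pt")
labels = torch.tensor([1, 0])  # toy sentiment labels

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
loss = model(**batch, labels=labels).loss  # cross-entropy over the new head
loss.backward()
optimizer.step()
print(f"one fine-tuning step done, loss = {loss.item():.3f}")
```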
Grammatical Error Correction (GEC) is the task of automatically detecting and correcting errors in text. The task not only includes the correction of grammatical errors, such as missing prepositions and mismatched subject-verb agreement, but also orthographic and semantic errors, such as misspellings and word choice errors respectively. The field has seen significant progress in the last decade, motivated in part by a series of five shared tasks, which drove the development of rule-based methods, statistical classifiers, statistical machine translation, and finally neural machine translation systems which represent the current dominant state of the art. In this survey paper, we condense the field into a single article and first outline some of the linguistic challenges of the task, introduce the most popular datasets that are available to researchers (for both English and other languages), and summarise the various methods and techniques that have been developed with a particular focus on artificial error generation. We next describe the many different approaches to evaluation as well as concerns surrounding metric reliability, especially in relation to subjective human judgements, before concluding with an overview of recent progress and suggestions for future work and remaining challenges. We hope that this survey will serve as a comprehensive resource for researchers who are new to the field or who want to be kept apprised of recent developments.
Acronyms and their long forms are commonly found in research documents, especially in documents from the scientific and legal domains. Many acronyms used in such documents are domain-specific and are rarely found in normal text corpora. Because of this, transformer-based NLP models often treat acronym tokens as OOV (out of vocabulary), particularly for non-English languages, and their performance suffers when linking acronyms to their long forms during extraction. Moreover, pretrained transformer models such as BERT are not specialized for handling scientific and legal documents. With these points as the overall motivation behind this work, we propose a novel framework, CABACE: Character-Aware BERT for ACronym Extraction, which takes character sequences in the text into account and is adapted to the scientific and legal domains by masked language modelling. We further train it with an augmented loss function, adding max-loss and mask-loss terms to the standard cross-entropy loss. We additionally leverage pseudo-labelling and adversarial data generation to improve the generalizability of the framework. Experimental results demonstrate the superiority of the proposed framework compared with various baselines. Furthermore, we show that the proposed framework is better suited than the baseline models for zero-shot generalization to non-English languages, reinforcing the effectiveness of our approach. Our team BackGprop achieved the highest score on the French dataset, the second-highest on Danish and Vietnamese, and the third-highest on the English-Legal dataset on the global leaderboard of the acronym extraction (AE) shared task at SDU AAAI-22.
Advances in natural language processing (NLP) are spreading across domains, both as practical applications and as academic interest. The legal domain inherently contains a vast amount of data in text format, and therefore requires NLP to be applied to cater to its analytically demanding needs. Identifying the important sentences, facts, and arguments in a legal case is a tedious task for legal professionals. In this study we explore the use of sentence embeddings to identify the important sentences in a legal case, from the perspective of the main parties of the case. In addition, a task-specific loss function is defined to improve the accuracy, which is limited when categorical cross-entropy loss is used directly.
The relationship between words in a sentence often tells us more about the underlying semantic content of a document than its actual words, individually. In this work, we propose two novel algorithms, called Flexible Lexical Chain II and Fixed Lexical Chain II. These algorithms combine the semantic relations derived from lexical chains, prior knowledge from lexical databases, and the robustness of the distributional hypothesis in word embeddings as building blocks forming a single system. In short, our approach has three main contributions: (i) a set of techniques that fully integrate word embeddings and lexical chains; (ii) a more robust semantic representation that considers the latent relation between words in a document; and (iii) lightweight word embeddings models that can be extended to any natural language task. We intend to assess the knowledge of pre-trained models to evaluate their robustness in the document classification task. The proposed techniques are tested against seven word embeddings algorithms using five different machine learning classifiers over six scenarios in the document classification task. Our results show the integration between lexical chains and word embeddings representations sustain state-of-the-art results, even against more complex systems.
In the last year, new models and methods for pretraining and transfer learning have driven striking performance improvements across a range of language understanding tasks. The GLUE benchmark, introduced a little over one year ago, offers a single-number metric that summarizes progress on a diverse set of such tasks, but performance on the benchmark has recently surpassed the level of non-expert humans, suggesting limited headroom for further research. In this paper we present SuperGLUE, a new benchmark styled after GLUE with a new set of more difficult language understanding tasks, a software toolkit, and a public leaderboard. SuperGLUE is available at super.gluebenchmark.com.