This paper introduces the Multi-Genre Natural Language Inference (MultiNLI) corpus, a dataset designed for use in the development and evaluation of machine learning models for sentence understanding. At 433k examples, this resource is one of the largest corpora available for natural language inference (a.k.a. recognizing textual entailment), improving upon available resources in both its coverage and difficulty. MultiNLI accomplishes this by offering data from ten distinct genres of written and spoken English, making it possible to evaluate systems on nearly the full complexity of the language, while supplying an explicit setting for evaluating cross-genre domain adaptation. In addition, an evaluation using existing machine learning models designed for the Stanford NLI corpus shows that it represents a substantially more difficult task than does that corpus, despite the two showing similar levels of inter-annotator agreement.
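To make the sentence-pair format shared by these NLI corpora concrete — a premise, a hypothesis, and one of three labels (entailment, neutral, contradiction) — here is a minimal sketch. The example sentences and field names are invented for illustration and are not drawn from any of the corpora discussed here.

```python
# Minimal sketch of the sentence-pair format used by SNLI/MultiNLI-style corpora.
# The premise/hypothesis texts and field names are illustrative assumptions,
# not actual corpus examples.
from dataclasses import dataclass

LABELS = ("entailment", "neutral", "contradiction")

@dataclass
class NLIExample:
    premise: str      # the given sentence (in MultiNLI, drawn from one of ten genres)
    hypothesis: str   # a sentence written by an annotator in response to the premise
    label: str        # gold label, typically adjudicated from several annotator votes

examples = [
    NLIExample("A man is playing a guitar on stage.",
               "A person is performing music.", "entailment"),
    NLIExample("A man is playing a guitar on stage.",
               "The concert is sold out.", "neutral"),
    NLIExample("A man is playing a guitar on stage.",
               "Nobody is playing an instrument.", "contradiction"),
]

for ex in examples:
    assert ex.label in LABELS
    print(f"{ex.label:13s} | P: {ex.premise} | H: {ex.hypothesis}")
```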
Understanding entailment and contradiction is fundamental to understanding natural language, and inference about entailment and contradiction is a valuable testing ground for the development of semantic representations. However, machine learning research in this area has been dramatically limited by the lack of large-scale resources. To address this, we introduce the Stanford Natural Language Inference corpus, a new, freely available collection of labeled sentence pairs, written by humans doing a novel grounded task based on image captioning. At 570K pairs, it is two orders of magnitude larger than all other resources of its type. This increase in scale allows lexicalized classifiers to outperform some sophisticated existing entailment models, and it allows a neural network-based model to perform competitively on natural language inference benchmarks for the first time.
In the last year, new models and methods for pretraining and transfer learning have driven striking performance improvements across a range of language understanding tasks. The GLUE benchmark, introduced a little over one year ago, offers a single-number metric that summarizes progress on a diverse set of such tasks, but performance on the benchmark has recently surpassed the level of non-expert humans, suggesting limited headroom for further research. In this paper we present SuperGLUE, a new benchmark styled after GLUE with a new set of more difficult language understanding tasks, a software toolkit, and a public leaderboard. SuperGLUE is available at super.gluebenchmark.com.
For natural language understanding (NLU) technology to be maximally useful, it must be able to process language in a way that is not exclusive to a single task, genre, or dataset. In pursuit of this objective, we introduce the General Language Understanding Evaluation (GLUE) benchmark, a collection of tools for evaluating the performance of models across a diverse set of existing NLU tasks. By including tasks with limited training data, GLUE is designed to favor and encourage models that share general linguistic knowledge across tasks. GLUE also includes a hand-crafted diagnostic test suite that enables detailed linguistic analysis of models. We evaluate baselines based on current methods for transfer and representation learning and find that multi-task training on all tasks performs better than training a separate model per task. However, the low absolute performance of our best model indicates the need for improved general NLU systems.
To enable building and testing models on long-document comprehension, we introduce QuALITY, a multiple-choice QA dataset with context passages in English that have an average length of about 5,000 tokens, much longer than typical current models can process. Unlike prior work with passages, our questions are written and validated by contributors who have read the entire passage, rather than relying on summaries or excerpts. In addition, only half of the questions are answerable by annotators working under tight time constraints, indicating that skimming and simple search are not enough to consistently perform well. Current models perform poorly on this task (55.4%) and lag far behind human performance (93.5%).
We introduce a new large-scale NLI benchmark dataset, collected via an iterative, adversarial human-and-model-in-the-loop procedure. We show that training models on this new dataset leads to state-of-the-art performance on a variety of popular NLI benchmarks, while posing a more difficult challenge with its new test set. Our analysis sheds light on the shortcomings of current state-of-the-art models, and shows that non-expert annotators are successful at finding their weaknesses. The data collection method can be applied in a never-ending learning scenario, becoming a moving target for NLU, rather than a static benchmark that will quickly saturate.
This paper investigates the ability of artificial neural networks to judge the grammatical acceptability of a sentence, with the goal of testing their linguistic competence. We introduce the Corpus of Linguistic Acceptability (CoLA), a set of 10,657 English sentences labeled as grammatical or ungrammatical from published linguistics literature. As baselines, we train several recurrent neural network models on acceptability classification, and find that our models outperform unsupervised models by Lau et al. (2016) on CoLA. Error-analysis on specific grammatical phenomena reveals that both Lau et al.'s models and ours learn systematic generalizations like subject-verb-object order. However, all models we test perform far below human level on a wide range of grammatical constructions.
A recurring challenge of crowdsourced NLP datasets is that human writers often rely on repetitive patterns when crafting examples, leading to a lack of linguistic diversity. We introduce a novel approach to dataset creation based on worker and AI collaboration, which brings together the generative strength of language models and the evaluative strength of humans. Starting with an existing dataset, MultiNLI for natural language inference (NLI), our approach uses dataset cartography to automatically identify examples that demonstrate challenging reasoning patterns, and instructs GPT-3 to compose new examples with similar patterns. Machine-generated examples are then automatically filtered, and finally revised and labeled by human crowdworkers. The resulting dataset, WANLI, consists of 107,885 NLI examples and presents unique empirical strengths over existing NLI datasets. Remarkably, training a model on WANLI instead of MultiNLI (which is four times larger) improves performance on the seven out-of-domain test sets we consider, including by 11% on HANS and 9% on Adversarial NLI. Moreover, combining MultiNLI with WANLI is more effective than combining it with other NLI augmentation sets. Our results demonstrate the potential of natural language generation techniques for curating NLP datasets of enhanced quality and diversity.
We investigate the sources of disagreement in natural language inference (NLI) annotation. We develop a taxonomy of disagreement sources with 10 categories spanning 3 high-level classes. We find that some disagreements are due to uncertainty in the sentence meaning, while others stem from annotator biases and task artifacts, leading to different interpretations of the label distribution. We explore two modeling approaches for detecting items with potential disagreement: a four-way classification with a "Complicated" label in addition to the three standard NLI labels, and a multilabel classification approach. We find that the multilabel classification approach is more expressive and gives better recall of the possible interpretations in the data.
We introduce a new dataset for joint reasoning about natural language and images, with a focus on semantic diversity, compositionality, and visual reasoning challenges. The data contains 107,292 examples of English sentences paired with web photographs. The task is to determine whether a natural language caption is true about a pair of photographs. We crowdsource the data using sets of visually rich images and a compare-and-contrast task to elicit linguistically diverse language. Qualitative analysis shows the data requires compositional joint reasoning, including about quantities, comparisons, and relations. Evaluation using state-of-the-art visual reasoning methods shows the data presents a strong challenge.
Natural language inference (NLI) and semantic textual similarity (STS) are widely used benchmark tasks for the compositional evaluation of pre-trained language models. Despite growing interest in linguistic universals, most NLI/STS studies have focused almost exclusively on English. In particular, no multilingual NLI/STS dataset is available for Japanese, which is typologically different from English and can shed light on currently debated behaviors of language models, such as sensitivity to word order and case particles. Against this background, we introduce JSICK, a Japanese NLI/STS dataset manually translated from the English dataset SICK. We also present a stress-test dataset for compositional inference, created by transforming the syntactic structures of sentences in JSICK, to investigate whether language models are sensitive to word order and case particles. We conduct baseline experiments on different pre-trained language models and compare the performance of multilingual models when applied to Japanese and other languages. The results of the stress-test experiments suggest that current pre-trained language models are insensitive to word order and case marking.
News article revision histories provide clues to the narrative and factual evolution of news articles. To facilitate analysis of this evolution, we present the first publicly available dataset of news revision histories. Our dataset is large-scale and multilingual; it contains 1.2 million articles with 4.6 million versions from English- and French-language newspaper sources based in three countries, spanning 15 years of coverage (2006-2021). We define article-level edit actions: Addition, Deletion, Edit, and Refactor, and develop a high-accuracy extraction algorithm to identify these actions. To underline the factual nature of many edit actions, our analyses show that added and deleted sentences are more likely to contain updating events, main content, and quotes than unchanged sentences. Finally, to explore whether edit actions are predictable, we introduce three novel tasks aimed at predicting the actions performed during a version update. We show that these tasks are possible for expert humans but challenging for large NLP models. We hope this can spur research in narrative framing and provide predictive tools for journalists chasing breaking news.
State-of-the-art natural language processing systems rely on supervision in the form of annotated data to learn competent models. These models are generally trained on data in a single language (usually English), and cannot be directly used beyond that language. Since collecting data in every language is not realistic, there has been a growing interest in crosslingual language understanding (XLU) and low-resource cross-language transfer. In this work, we construct an evaluation set for XLU by extending the development and test sets of the Multi-Genre Natural Language Inference Corpus (MultiNLI) to 15 languages, including low-resource languages such as Swahili and Urdu. We hope that our dataset, dubbed XNLI, will catalyze research in cross-lingual sentence understanding by providing an informative standard evaluation task. In addition, we provide several baselines for multilingual sentence understanding, including two based on machine translation systems, and two that use parallel data to train aligned multilingual bag-of-words and LSTM encoders. We find that XNLI represents a practical and challenging evaluation suite, and that directly translating the test data yields the best performance among available baselines.
Grammatical Error Correction (GEC) is the task of automatically detecting and correcting errors in text. The task not only includes the correction of grammatical errors, such as missing prepositions and mismatched subject-verb agreement, but also orthographic and semantic errors, such as misspellings and word choice errors respectively. The field has seen significant progress in the last decade, motivated in part by a series of five shared tasks, which drove the development of rule-based methods, statistical classifiers, statistical machine translation, and finally neural machine translation systems, which represent the current dominant state of the art. In this survey paper, we condense the field into a single article and first outline some of the linguistic challenges of the task, introduce the most popular datasets that are available to researchers (for both English and other languages), and summarise the various methods and techniques that have been developed, with a particular focus on artificial error generation. We next describe the many different approaches to evaluation as well as concerns surrounding metric reliability, especially in relation to subjective human judgements, before concluding with an overview of recent progress and suggestions for future work and remaining challenges. We hope that this survey will serve as a comprehensive resource for researchers who are new to the field or who want to be kept apprised of recent developments.
Media coverage has a substantial effect on the public perception of events. Nevertheless, media outlets are often biased. One way to bias news articles is by altering word choice. The automatic identification of bias by word choice is challenging, primarily due to the lack of a gold-standard dataset and high context dependencies. This paper presents BABE, a robust and diverse dataset for media bias research created by trained experts. We also analyze why expert labeling is essential in this domain. Compared to existing work, our dataset offers better annotation quality and higher inter-annotator agreement. It consists of 3,700 sentences, balanced among topics and outlets, with media bias labels at the word and sentence level. Based on our data, we also introduce a method to automatically detect bias-inducing sentences in news articles. Our best-performing BERT-based model is pre-trained on a larger corpus of distantly labeled data. Fine-tuning and evaluating the model on our proposed supervised dataset, we achieve a macro F1-score of 0.804, outperforming existing methods.
Despite recent advances in machine learning for natural language processing, the natural language inference (NLI) problem remains a challenge. To this end, we contribute a new dataset focused on the factivity phenomenon; our task, however, remains the same as in other NLI settings, i.e., predicting entailment, contradiction, or neutral (ECN). The dataset contains entirely natural language utterances in Polish, comprising 2,432 verb-complement pairs and 309 unique verbs. It is based on the National Corpus of Polish (NKJP) and is a representative sample with respect to the frequency of the main verbs and other linguistic features (e.g., the occurrence of internal negation). We find that transformer-based models operating on sentences obtain relatively good results (approximately 89% F1 score). Although better results (approximately 91% F1 score) are achieved using linguistic features, this approach requires more human labor (a human in the loop), since the features are prepared manually by expert linguists. BERT-based models consuming only the input sentence show that they capture most of the complexity of NLI/factivity. Complex cases in the phenomenon, such as those involving entitlement (E) and non-factive verbs, remain an open problem for further research.
We study politeness phenomena in nine typologically diverse languages. Politeness is an important facet of communication and is sometimes argued to be culture-specific, yet existing computational linguistic study of it is limited to English. We create TyDiP, a dataset containing three-way politeness annotations for 500 examples in each language, totaling 4.5K examples. We evaluate how well multilingual models can identify politeness levels -- they show fairly robust zero-shot transfer ability, yet fall significantly short of estimated human accuracy. We further study mapping the English politeness strategy lexicon into nine languages via automatic translation and lexicon induction, analyzing whether each strategy's impact stays consistent across languages. Lastly, we empirically study the complicated relationship between formality and politeness through transfer experiments. We hope our dataset will support various research questions and applications, from evaluating multilingual models to constructing polite multilingual agents.
The most prominent tasks in emotion analysis are assigning emotions to texts and understanding how emotions manifest in language. An important observation for natural language processing is that emotions can be communicated implicitly by referring to events alone, even without explicitly mentioning an emotion name. In psychology, the class of emotion theories known as appraisal theories aims to explain the link between events and emotions. Appraisals can be formalized as variables that measure the cognitive evaluations made by people who consider an event relevant to them. These include evaluations of whether an event is novel, whether the person considers themselves responsible, whether it is in line with their own goals, and many others. Such appraisals explain which emotions develop from an event; for example, a novel situation may elicit surprise, and an uncertain consequence may elicit fear. We analyze the suitability of appraisal theories for emotion analysis in text, with the goal of understanding whether annotators can reliably reconstruct appraisal concepts, whether they can be predicted by text classifiers, and whether appraisal concepts help identify emotion categories. To achieve this, we compile a corpus by asking people to textually describe events that triggered particular emotions and to disclose their appraisals. We then ask readers to reconstruct the emotions and appraisals from the text. This setup allows us to measure whether emotions and appraisals can be recovered purely from text and provides a human baseline against which to judge model performance. Our comparison of text classification methods to human annotators shows that both can reliably detect emotions and appraisals with similar performance. We further show that appraisal concepts improve the classification of emotions in text.
Query by example is a well-known information retrieval task in which the user selects a document as the search query and the goal is to retrieve relevant documents from a large collection. However, a document often covers multiple aspects of a topic. To address this scenario, we introduce the task of faceted query by example, in which users can also specify a finer-grained aspect in addition to the input query document. We focus on the application to scientific literature search. We envision models able to retrieve scientific papers analogous to a query scientific paper along specifically chosen rhetorical structure elements as one solution to this problem. In this work, we refer to these rhetorical structure elements as facets, indicating the objective, method, or result of a scientific paper. We introduce and describe an expert-annotated test collection for evaluating models trained to perform this task. Our test collection consists of a diverse set of 50 query documents in English, drawn from computational linguistics and machine learning venues. We carefully follow the annotation guidelines used by TREC for depth-k pooling (k = 100 or 250), and the resulting data collection includes graded relevance scores with high annotation agreement. State-of-the-art models evaluated on our dataset show a significant gap, leaving substantial room for further work. Our dataset can be accessed here: https://github.com/iesl/csfcube
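To illustrate the faceted query-by-example setup, here is a toy sketch that ranks candidate papers by the similarity of one chosen facet to a query text, using plain bag-of-words cosine similarity. This is not the paper's model or data; the paper identifiers, facet fields, and scoring choice are all assumptions made purely for illustration.

```python
# Toy sketch of faceted query-by-example: rank papers by similarity of a chosen
# facet (objective/method/result) to a query. All data and names are illustrative.
from collections import Counter
from math import sqrt

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Each candidate paper is represented by facet-specific text spans.
corpus = {
    "paper_A": {"objective": "improve low resource translation",
                "method": "multilingual pretraining with adapters",
                "result": "gains on ten language pairs"},
    "paper_B": {"objective": "evaluate politeness across languages",
                "method": "crowdsourced annotation and zero shot transfer",
                "result": "multilingual models lag human accuracy"},
}

def retrieve(query_text: str, facet: str, collection: dict) -> list:
    """Rank papers by similarity of the chosen facet to the query text."""
    q = Counter(query_text.lower().split())
    scored = [(cosine(q, Counter(doc[facet].lower().split())), pid)
              for pid, doc in collection.items()]
    return sorted(scored, reverse=True)

print(retrieve("zero shot transfer with multilingual models", "method", corpus))
```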
Data augmentation is an important component in the robustness evaluation of natural language processing (NLP) models and in enhancing the diversity of the data they are trained on. In this paper, we present NL-Augmenter, a new participatory Python-based natural language augmentation framework which supports the creation of both transformations (modifications to the data) and filters (data splits according to specific features). We describe the framework and an initial set of 117 transformations and 23 filters for a variety of natural language tasks. We demonstrate the efficacy of NL-Augmenter by using several of its transformations to analyze the robustness of popular natural language models. The infrastructure, datacards, and robustness analysis results are publicly available in the NL-Augmenter repository (https://github.com/GEM-benchmark/NL-Augmenter).
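To make the distinction between the two operation types concrete, here is a minimal sketch in plain Python of a transformation (modifies text) and a filter (selects a subset by a property). It deliberately does not reproduce NL-Augmenter's actual class interface; the function names, signatures, and example data are illustrative assumptions only.

```python
# Illustrative sketch (not the actual NL-Augmenter API) of the two operation
# types the framework supports: transformations and filters.
import random

def swap_adjacent_words(text: str, seed: int = 0) -> str:
    """Transformation: perturb a sentence by swapping one adjacent word pair."""
    words = text.split()
    if len(words) < 2:
        return text
    rng = random.Random(seed)
    i = rng.randrange(len(words) - 1)
    words[i], words[i + 1] = words[i + 1], words[i]
    return " ".join(words)

def is_short_sentence(text: str, max_words: int = 10) -> bool:
    """Filter: keep only examples at or below a length threshold."""
    return len(text.split()) <= max_words

dataset = ["The quick brown fox jumps over the lazy dog.",
           "A short sentence."]
augmented = [swap_adjacent_words(s) for s in dataset]       # perturbed copies
short_subset = [s for s in dataset if is_short_sentence(s)] # filtered split
print(augmented)
print(short_subset)
```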