In this paper, we focus on the robustness evaluation of Chinese question matching. Most previous work on analyzing robustness issues focuses on just one or a few types of artificial adversarial examples. Instead, we argue that a comprehensive evaluation should be conducted on the linguistic capabilities of models on natural texts. For this purpose, we create a Chinese dataset, namely DuQM, which contains natural questions with linguistic perturbations to evaluate the robustness of question matching models. DuQM contains 3 categories and 13 subcategories with 32 linguistic perturbations. Extensive experiments demonstrate that DuQM has a better ability to distinguish different models. Importantly, the detailed breakdown of evaluation by linguistic phenomena in DuQM helps us easily diagnose the strengths and weaknesses of different models. Additionally, our experimental results show that the effect of artificial adversarial examples does not carry over to natural texts.
Data augmentation is an important component in the robustness evaluation of natural language processing (NLP) models and in enhancing the diversity of the data they are trained on. In this paper, we present NL-Augmenter, a new participatory Python-based natural language augmentation framework that supports the creation of both transformations (modifications to the data) and filters (data splits according to specific features). We describe the framework and an initial set of 117 transformations and 23 filters for a variety of natural language tasks. We demonstrate the efficacy of NL-Augmenter by using several of its transformations to analyze the robustness of popular natural language models. The infrastructure, datacards, and robustness analysis results are publicly available on the NL-Augmenter repository (https://github.com/gem-benchmark/nl-augmenter).
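To make the transformation/filter distinction concrete, here is a minimal, self-contained sketch in plain Python. The class and method names are illustrative stand-ins, not NL-Augmenter's actual base-class interface; see the repository for the real API.

```python
import random
from typing import List

class ButterFingersTransformation:
    """Illustrative transformation: perturbs text by simulating typos
    from adjacent keyboard keys (a style of perturbation the framework
    ships; this toy version covers only four keys)."""
    NEIGHBORS = {"a": "qs", "e": "wr", "o": "ip", "t": "ry"}

    def generate(self, sentence: str, prob: float = 0.1) -> List[str]:
        chars = [
            random.choice(self.NEIGHBORS[c])
            if c in self.NEIGHBORS and random.random() < prob else c
            for c in sentence
        ]
        return ["".join(chars)]

class LengthFilter:
    """Illustrative filter: keeps only examples satisfying a property
    (here a maximum token count) to carve out a data split."""
    def __init__(self, max_tokens: int = 10):
        self.max_tokens = max_tokens

    def filter(self, sentence: str) -> bool:
        return len(sentence.split()) <= self.max_tokens

t, f = ButterFingersTransformation(), LengthFilter()
print(t.generate("The quick brown fox jumps over the lazy dog"))
print(f.filter("A short sentence"))  # True
```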
Large-scale pre-trained language models have achieved tremendous success across a wide range of natural language understanding (NLU) tasks, even surpassing human performance. However, recent studies reveal that the robustness of these models can be challenged by carefully crafted textual adversarial examples. While several individual datasets have been proposed to evaluate model robustness, a principled and comprehensive benchmark is still missing. In this paper, we present Adversarial GLUE (AdvGLUE), a new multi-task benchmark to quantitatively and thoroughly explore and evaluate the vulnerabilities of modern large-scale language models under various types of adversarial attacks. In particular, we systematically apply 14 textual adversarial attack methods to GLUE tasks to construct AdvGLUE, which is further validated by humans for reliable annotations. Our findings are summarized as follows. (i) Most existing adversarial attack algorithms are prone to generating invalid or ambiguous adversarial examples, with around 90% of them either changing the original semantic meaning or misleading human annotators. Therefore, we perform a careful filtering process to curate a high-quality benchmark. (ii) All the language models and robust training methods we tested perform poorly on AdvGLUE, with scores lagging far behind the benign accuracy. We hope our work can motivate the development of new adversarial attacks that are more stealthy and semantic-preserving, as well as new robust language models that are resilient to sophisticated adversarial attacks. AdvGLUE is available at https://adversarialglue.github.io.
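As a concrete (and deliberately naive) illustration of the word-substitution family of attacks such benchmarks draw on, the sketch below enumerates WordNet synonym substitutions for a sentence. It is a toy under stated assumptions: real attack methods add search strategies plus semantic and grammaticality constraints, none of which are modeled here.

```python
import nltk
from nltk.corpus import wordnet as wn

nltk.download("wordnet", quiet=True)

def synonym_candidates(word: str) -> set:
    """Collect WordNet lemmas that could replace `word`."""
    return {
        lemma.name().replace("_", " ")
        for syn in wn.synsets(word)
        for lemma in syn.lemmas()
        if lemma.name().lower() != word.lower()
    }

def perturb(sentence: str) -> list:
    """Generate all one-word-substitution variants of the sentence."""
    tokens = sentence.split()
    variants = []
    for i, tok in enumerate(tokens):
        for cand in sorted(synonym_candidates(tok)):
            variants.append(" ".join(tokens[:i] + [cand] + tokens[i + 1:]))
    return variants

print(perturb("the movie was great")[:5])
```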
Natural language inference (NLI) and semantic textual similarity (STS) are widely used benchmark tasks for the compositional evaluation of pre-trained language models. Despite growing interest in linguistic universals, most NLI/STS studies have focused almost exclusively on English. In particular, there are no multilingual NLI/STS datasets in Japanese, which is typologically different from English and can shed light on currently controversial behaviors of language models, such as sensitivity to word order and case particles. Against this background, we introduce JSICK, a Japanese NLI/STS dataset that was manually translated from the English dataset SICK. We also present a stress-test dataset for compositional inference, created by transforming the syntactic structures of sentences in JSICK, to investigate whether language models are sensitive to word order and case particles. We conduct baseline experiments on different pre-trained language models and compare the performance of multilingual models when applied to Japanese and other languages. The results of the stress-test experiments suggest that current pre-trained language models are insensitive to word order and case marking.
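The two stress-test transformations, word-order scrambling and case-particle swapping, can be illustrated with a toy template. The sketch below is hypothetical and not how the JSICK stress set is actually constructed (that set is derived from the dataset's own sentences).

```python
# Toy subject-object-verb sentence: 太郎が花子を見た ("Taro saw Hanako").

def scramble_word_order(subj: str, obj: str, verb: str) -> str:
    """Move the object phrase before the subject phrase. In Japanese
    this preserves meaning because the case particles (が nominative,
    を accusative) still mark the grammatical roles."""
    return f"{obj}を{subj}が{verb}"

def swap_case_particles(subj: str, obj: str, verb: str) -> str:
    """Swap が and を while keeping word order. This reverses who does
    what to whom, so the meaning changes."""
    return f"{subj}を{obj}が{verb}"

print(scramble_word_order("太郎", "花子", "見た"))  # 花子を太郎が見た (same meaning)
print(swap_case_particles("太郎", "花子", "見た"))   # 太郎を花子が見た (meaning flipped)
# A model insensitive to case marking scores both variants alike,
# which is exactly what the stress test is designed to detect.
```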
For natural language understanding (NLU) technology to be maximally useful, it must be able to process language in a way that is not exclusive to a single task, genre, or dataset. In pursuit of this objective, we introduce the General Language Understanding Evaluation (GLUE) benchmark, a collection of tools for evaluating the performance of models across a diverse set of existing NLU tasks. By including tasks with limited training data, GLUE is designed to favor and encourage models that share general linguistic knowledge across tasks. GLUE also includes a hand-crafted diagnostic test suite that enables detailed linguistic analysis of models. We evaluate baselines based on current methods for transfer and representation learning and find that multi-task training on all tasks performs better than training a separate model per task. However, the low absolute performance of our best model indicates the need for improved general NLU systems.
Although pre-trained language models (LMs) have brought significant improvements on many NLP tasks, there is increasing attention to exploring the capabilities of LMs and interpreting their predictions. However, existing works usually focus only on certain capabilities via specific downstream tasks, and datasets for directly evaluating the masked word prediction performance and the interpretability of pre-trained LMs are lacking. To fill the gap, we propose a novel evaluation benchmark that provides annotated data in both English and Chinese. It tests LM abilities in multiple dimensions, i.e., grammar, semantics, knowledge, reasoning, and computation. In addition, it provides carefully annotated token-level rationales that satisfy sufficiency and compactness, and it contains perturbed instances for each original instance, so that rationale consistency under perturbations can be used as the metric for faithfulness, one perspective of interpretability. We conduct experiments on several widely used pre-trained LMs. The results show that they perform worse on the dimensions of knowledge and computation, and that their plausibility on all dimensions is far from satisfactory, especially when the rationale is short. In addition, the pre-trained LMs we evaluated are not robust on syntax-aware data. We will release this evaluation benchmark at http://xyz and hope it can facilitate research progress on pre-trained LMs.
Incorporating prior knowledge into pre-trained language models has proven effective for knowledge-driven NLP tasks, such as entity typing and relation extraction. Current pre-training procedures usually inject external knowledge into the model via knowledge masking, knowledge fusion, and knowledge replacement. However, the factual information contained in the input sentences has not been fully mined, and the injected external knowledge has not been rigorously checked. As a result, the contextual information cannot be fully exploited and extra noise is introduced, or the amount of injected knowledge is limited. To address these issues, we propose MLRIP, which modifies the knowledge masking strategies proposed by ERNIE-Baidu and introduces a two-stage entity replacement strategy. Extensive experiments with comprehensive analyses illustrate the superiority of MLRIP over BERT-based models on military knowledge-driven NLP tasks.
Recent studies show that pre-trained language models (LMs) are vulnerable to textual adversarial attacks. However, existing attack methods either suffer from low attack success rates or fail to search efficiently in the exponentially large perturbation space. We propose an efficient and effective framework, SemAttack, to generate natural adversarial texts by constructing different semantic perturbation functions. In particular, SemAttack optimizes the generated perturbations constrained on generic semantic spaces, including a typo space, a knowledge space (e.g., WordNet), a contextualized semantic space (e.g., the embedding space of BERT clusterings), or a combination of these spaces. The generated adversarial texts are therefore semantically closer to the original inputs. Extensive experiments reveal that state-of-the-art (SOTA) large-scale LMs (e.g., DeBERTa-v2) and defense strategies (e.g., FreeLB) are still vulnerable to SemAttack. We further demonstrate that SemAttack is general and able to generate natural adversarial texts for different languages (e.g., English and Chinese) with high attack success rates. Human evaluations also confirm that our generated adversarial texts are natural and barely affect human performance. Our code is publicly available at https://github.com/ai-secure/semattack.
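One of the perturbation spaces named above, the contextualized semantic space, can be approximated with an off-the-shelf masked LM: mask each position and take the top predictions as substitution candidates. The sketch below only generates candidates; SemAttack's actual optimization over those candidates is omitted.

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

def contextual_candidates(tokens, position, top_k=5):
    """Top-k contextual replacements for the token at `position`,
    obtained by masking it and reading BERT's predictions."""
    masked = tokens.copy()
    masked[position] = fill_mask.tokenizer.mask_token
    preds = fill_mask(" ".join(masked), top_k=top_k)
    return [p["token_str"] for p in preds]

tokens = "the movie was great".split()
print(contextual_candidates(tokens, position=3))
```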
Dialect differences caused by regional, social, and economic barriers cause performance discrepancies for many groups of users of language technology. Fair, inclusive, and equitable language technology must critically be dialect invariant, meaning that performance remains constant over dialectal shifts. Current English systems often fall significantly short of this ideal since they are designed and tested on a single dialect: Standard American English. We introduce Multi-VALUE -- a suite of resources for evaluating and achieving English dialect invariance. We build a controllable rule-based translation system spanning 50 English dialects and a total of 189 unique linguistic features. Our translation maps Standard American English text to a synthetic form of each dialect, which uses an upper-bound on the natural density of features in that dialect. First, we use this system to build stress tests for question answering, machine translation, and semantic parsing tasks. Stress tests reveal significant performance disparities for leading models on non-standard dialects. Second, we use this system as a data augmentation technique to improve the dialect robustness of existing systems. Finally, we partner with native speakers of Chicano and Indian English to release new gold-standard variants of the popular CoQA task.
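To illustrate what a single rule in such a rule-based translation system might look like, here is a hedged toy implementing copula deletion (a feature attested in several English dialects) with a bare regex; Multi-VALUE's 189 features are implemented with linguistically validated conditions that this sketch lacks.

```python
import re

def copula_deletion(sentence: str) -> str:
    """Toy dialect rule: delete a present-tense copula after a pronoun
    subject, e.g. "she is happy" -> "she happy"."""
    return re.sub(r"\b(he|she|they|we|you)\s+(is|are)\s+", r"\1 ",
                  sentence, flags=re.I)

print(copula_deletion("She is happy and they are ready."))
# -> "She happy and they ready."
```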
Transformer-based language models have recently achieved remarkable results on many natural language tasks. However, leaderboard performance is usually achieved by leveraging massive amounts of training data, and rarely by encoding explicit linguistic knowledge into neural models. This has led many to question the relevance of linguistics for modern natural language processing. In this thesis, I present several case studies to illustrate that theoretical linguistics and neural language models are still relevant to each other. First, language models are useful to linguists by providing an objective tool for measuring semantic distance, which is difficult to do with traditional methods. On the other hand, linguistic theory contributes to language modeling research by providing frameworks and sources of data for probing our language models on specific aspects of language understanding. The thesis contributes three studies that explore different aspects of the syntax-semantics interface in language models. In the first part of the thesis, I apply language models to the problem of word class flexibility. Using mBERT as a source of semantic distance measurements, I present evidence in favour of analyzing word class flexibility as a directional process. In the second part of the thesis, I propose a method to measure surprisal at intermediate layers of language models. My experiments show that sentences containing morphosyntactic anomalies trigger surprisal earlier in language models than semantic and commonsense anomalies do. Finally, in the third part of the thesis, I adapt several psycholinguistic studies to show that language models contain knowledge of argument structure constructions. In summary, my thesis builds new connections between natural language processing, linguistic theory, and psycholinguistics to provide fresh perspectives on the interpretation of language models.
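The surprisal idea is easy to demonstrate at the final layer with an off-the-shelf causal LM; the minimal sketch below, assuming GPT-2 via Hugging Face transformers, computes per-token surprisal -log2 p(token | prefix). Note that the thesis measures surprisal at *intermediate* layers, which this sketch does not attempt.

```python
import math

import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def surprisals(sentence: str):
    ids = tok(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits
    # Position t predicts token t+1, so align logits[:-1] with ids[1:].
    logprobs = F.log_softmax(logits[0, :-1], dim=-1)
    scores = -logprobs[torch.arange(ids.size(1) - 1), ids[0, 1:]] / math.log(2)
    return list(zip(tok.convert_ids_to_tokens(ids[0, 1:]), scores.tolist()))

for token, bits in surprisals("The cat ate the keys."):
    print(f"{token:>10} {bits:6.2f} bits")
```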
Transformer-based models have pushed state of the art in many areas of NLP, but our understanding of what is behind their success is still limited. This paper is the first survey of over 150 studies of the popular BERT model. We review the current state of knowledge about how BERT works, what kind of information it learns and how it is represented, common modifications to its training objectives and architecture, the overparameterization issue and approaches to compression. We then outline directions for future research.
Data-hungry deep neural networks have established themselves as the standard for many NLP tasks, including traditional sequence tagging. Despite their state-of-the-art performance on high-resource languages, they still lag behind their statistical counterparts in low-resource scenarios. One methodology to counter this problem is text augmentation, i.e., generating new synthetic training data points from existing data. Although NLP has recently witnessed a load of textual augmentation techniques, the field still lacks a systematic performance analysis across multiple languages and sequence tagging tasks. To fill this gap, we investigate three categories of text augmentation methods that perform changes at the syntax level (e.g., cropping sub-sentences), the token level (e.g., random word insertion), and the character level (e.g., character swapping). We systematically compare them on part-of-speech tagging, dependency parsing, and semantic role labeling across a diverse set of language families, using various models, including architectures that rely on pretrained multilingual contextualized language models such as mBERT. Augmentation most significantly improves dependency parsing, followed by part-of-speech tagging and semantic role labeling. We find the experimented techniques to be effective on morphologically rich languages in general rather than on analytic languages such as Vietnamese. Our results suggest that augmentation techniques can further improve over strong baselines based on mBERT. We identify character-level methods as the most consistent performers, while synonym replacement and syntactic augmenters provide inconsistent improvements. Finally, we discuss that the results most heavily depend on the task, language pair, and model type.
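The token-level and character-level augmenters mentioned above reduce to a few lines of Python; the sketch below is a simplification that ignores the label-aware bookkeeping that sequence tagging actually requires (inserted tokens need tags, swapped characters must not corrupt gold segmentation).

```python
import random

def random_insertion(tokens: list, n: int = 1) -> list:
    """Token-level: insert n copies of randomly chosen existing tokens
    at random positions."""
    out = tokens.copy()
    for _ in range(n):
        out.insert(random.randrange(len(out) + 1), random.choice(tokens))
    return out

def character_swap(token: str) -> str:
    """Character-level: swap two adjacent characters inside a token."""
    if len(token) < 2:
        return token
    i = random.randrange(len(token) - 1)
    return token[:i] + token[i + 1] + token[i] + token[i + 2:]

sent = "the quick brown fox".split()
print(random_insertion(sent))
print([character_swap(t) for t in sent])
```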
Pre-trained Language Model (PLM) has become a representative foundation model in the natural language processing field. Most PLMs are trained with linguistic-agnostic pre-training tasks on the surface form of the text, such as the masked language model (MLM). To further empower the PLMs with richer linguistic features, in this paper, we aim to propose a simple but effective way to learn linguistic features for pre-trained language models. We propose LERT, a pre-trained language model that is trained on three types of linguistic features along with the original MLM pre-training task, using a linguistically-informed pre-training (LIP) strategy. We carried out extensive experiments on ten Chinese NLU tasks, and the experimental results show that LERT could bring significant improvements over various comparable baselines. Furthermore, we also conduct analytical experiments in various linguistic aspects, and the results prove that the design of LERT is valid and effective. Resources are available at https://github.com/ymcui/LERT
Relation extraction (RE) is a sub-discipline of information extraction (IE) which focuses on the prediction of a relational predicate from a natural-language input unit (such as a sentence, a clause, or even a short paragraph consisting of multiple sentences and/or clauses). Together with named-entity recognition (NER) and disambiguation (NED), RE forms the basis for many advanced IE tasks such as knowledge-base (KB) population and verification. In this work, we explore how recent approaches for open information extraction (OpenIE) may help to improve the task of RE by encoding structured information about the sentences' principal units, such as subjects, objects, verbal phrases, and adverbials, into various forms of vectorized (and hence unstructured) representations of the sentences. Our main conjecture is that the decomposition of long and possibly convoluted sentences into multiple smaller clauses via OpenIE even helps to fine-tune context-sensitive language models such as BERT (and its plethora of variants) for RE. Our experiments over two annotated corpora, KnowledgeNet and FewRel, demonstrate the improved accuracy of our enriched models compared to existing RE approaches. Our best results reach 92% and 71% of F1 score for KnowledgeNet and FewRel, respectively, proving the effectiveness of our approach on competitive benchmarks.
Datasets serve as crucial training resources and model performance trackers. However, existing datasets have exposed a plethora of problems, inducing biased models and unreliable evaluation results. In this paper, we propose a model-agnostic dataset evaluation framework for automatic dataset quality evaluation. We seek the statistical properties of the datasets and address three fundamental dimensions: reliability, difficulty, and validity, following a classical testing theory. Taking the Named Entity Recognition (NER) datasets as a case study, we introduce 9 statistical metrics for a statistical dataset evaluation framework. Experimental results and human evaluation validate that our evaluation framework effectively assesses various aspects of the dataset quality. Furthermore, we study how the dataset scores on our statistical metrics affect the model performance, and appeal for dataset quality evaluation or targeted dataset improvement before training or testing models.
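The paper's nine metrics are not spelled out above, so as a stand-in, here is one illustrative statistic in the same spirit: the normalized entropy of a NER dataset's tag distribution, a crude indicator of label balance (1.0 = perfectly balanced, 0.0 = a single tag).

```python
import math
from collections import Counter

def tag_entropy(tag_sequences) -> float:
    """Normalized entropy of the tag distribution over a NER dataset."""
    counts = Counter(tag for seq in tag_sequences for tag in seq)
    total = sum(counts.values())
    probs = [c / total for c in counts.values()]
    h = -sum(p * math.log(p) for p in probs)
    return h / math.log(len(counts)) if len(counts) > 1 else 0.0

data = [["B-PER", "O", "O"], ["O", "B-LOC", "O", "O"]]
print(f"normalized tag entropy: {tag_entropy(data):.3f}")
```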
Many contextualized word representations are now learned by intricate neural network models, such as masked neural language models (MNLMs), which consist of huge neural network structures and are trained to restore masked text. Such representations have demonstrated superhuman performance on some reading comprehension (RC) tasks, which extract a proper answer from a context given a question. However, identifying the detailed knowledge trained in an MNLM is challenging owing to its numerous model parameters. This paper provides new insights and empirical analyses on the commonsense knowledge contained in pretrained MNLMs. First, we use diagnostic tests to evaluate whether commonsense knowledge is properly trained in MNLMs. We observe that a large portion of commonsense knowledge is not appropriately trained in MNLMs and that MNLMs often do not accurately understand the semantic meaning of relations. In addition, we find that MNLM-based RC models are still vulnerable to semantic variations that require commonsense knowledge. Finally, we discover the fundamental reason for the untrained knowledge. We further suggest that utilizing an external commonsense knowledge repository can be an effective solution, and we exemplify the possibility of overcoming the limitations of MNLM-based RC models by enriching text with knowledge from an external commonsense knowledge repository in controlled experiments.
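A single cloze-style query already illustrates the diagnostic idea: ask a masked LM for commonsense completions and inspect whether sensible answers rank highly. The paper's diagnostic suite is far more systematic than this one-query probe.

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

# A robust commonsense-aware model should rank plausible bird
# behaviors (fly, sing, ...) near the top of this distribution.
for pred in fill("A bird can [MASK].", top_k=5):
    print(f'{pred["token_str"]:>10}  {pred["score"]:.3f}')
```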
Grammatical Error Correction (GEC) is the task of automatically detecting and correcting errors in text. The task not only includes the correction of grammatical errors, such as missing prepositions and mismatched subject-verb agreement, but also orthographic and semantic errors, such as misspellings and word choice errors respectively. The field has seen significant progress in the last decade, motivated in part by a series of five shared tasks, which drove the development of rule-based methods, statistical classifiers, statistical machine translation, and finally neural machine translation systems which represent the current dominant state of the art. In this survey paper, we condense the field into a single article and first outline some of the linguistic challenges of the task, introduce the most popular datasets that are available to researchers (for both English and other languages), and summarise the various methods and techniques that have been developed with a particular focus on artificial error generation. We next describe the many different approaches to evaluation as well as concerns surrounding metric reliability, especially in relation to subjective human judgements, before concluding with an overview of recent progress and suggestions for future work and remaining challenges. We hope that this survey will serve as a comprehensive resource for researchers who are new to the field or who want to be kept apprised of recent developments.
Fine-tuned pre-trained models have achieved impressive performance on standard natural language processing benchmarks. However, the generalizability of the resulting models remains poorly understood. We do not know, for example, whether excellent performance implies a perfectly generalizing model. In this study, we analyze a fine-tuned BERT model from different perspectives using relation extraction, and we characterize the differences among generalization techniques according to our proposed improvements. From empirical experiments, we find that BERT suffers a robustness bottleneck under randomization, adversarial and counterfactual tests, and biases (i.e., selection and semantic). These findings highlight opportunities for future improvements. Our open-sourced testbed DiagnoseRE is available at https://github.com/zjunlp/diagnosere.
The robustness of Text-to-SQL parsers against adversarial perturbations plays a crucial role in delivering highly reliable applications. Previous studies along this line primarily focused on perturbations in the natural language question side, neglecting the variability of tables. Motivated by this, we propose the Adversarial Table Perturbation (ATP) as a new attacking paradigm to measure the robustness of Text-to-SQL models. Following this proposition, we curate ADVETA, the first robustness evaluation benchmark featuring natural and realistic ATPs. All tested state-of-the-art models experience dramatic performance drops on ADVETA, revealing models' vulnerability in real-world practices. To defend against ATP, we build a systematic adversarial training example generation framework tailored for better contextualization of tabular data. Experiments show that our approach not only brings the best robustness improvement against table-side perturbations but also substantially empowers models against NL-side perturbations. We release our benchmark and code at: https://github.com/microsoft/ContextualSP.
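A toy rendering of the ATP idea: rename table columns to semantically related aliases and check whether a Text-to-SQL model still grounds the question correctly. The alias table below is hypothetical; ADVETA's perturbations are curated for naturalness and realism, with none of that curation modeled here.

```python
# Hypothetical alias map standing in for curated adversarial renames.
ALIASES = {"salary": "compensation", "name": "full_name", "age": "years_old"}

def perturb_schema(columns: list) -> list:
    """Replace each column name with a semantically equivalent alias."""
    return [ALIASES.get(col, col) for col in columns]

schema = ["name", "age", "salary"]
print(perturb_schema(schema))  # ['full_name', 'years_old', 'compensation']
# A robust parser should still map "What is the average salary?" to the
# renamed column `compensation`; brittle parsers fail after the rename.
```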
Although existing machine reading comprehension models are making rapid progress on many datasets, they are far from robust. In this paper, we propose an understanding-oriented machine reading comprehension model to address three kinds of robustness issues: over-sensitivity, over-stability, and generalization. Specifically, we first use a natural language inference module to help the model understand the accurate semantic meanings of input questions, so as to address the issues of over-sensitivity and over-stability. Then, in the machine reading comprehension module, we propose a memory-guided multi-head attention method that can further understand the semantic meanings of input questions and passages. Third, we propose a multi-language learning mechanism to address the generalization issue. Finally, these modules are integrated with a multi-task learning based method. We evaluate our model on three benchmark datasets that are designed to measure models' robustness, including DuReader (robust) and two SQuAD-related datasets. Extensive experiments show that our model can well address the three kinds of robustness issues mentioned above, and it achieves much better results than the compared state-of-the-art models on all of these datasets, even under some extreme and unfair evaluations. The source code of our work is available at: https://github.com/neukg/robustmrc.