We introduce a linguistically enhanced combination of pre-training methods for transformers. The pre-training objectives include POS tagging, synset prediction based on semantic knowledge graphs, and parent prediction based on dependency parse trees. Our approach achieves results on the Natural Language Inference task that are competitive with the state of the art. For smaller models in particular, the method yields a significant performance boost, showing that intelligent pre-training can compensate for fewer parameters and help build more efficient models. Combining POS tagging and synset prediction yields the overall best results.
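As a rough illustration of how such objectives can be combined, the sketch below (ours, not the paper's code) attaches one token-level classification head per linguistic signal to a shared Hugging Face encoder and sums the losses; the head sizes and the equal loss weighting are illustrative assumptions.

```python
# Illustrative multi-task pre-training heads: one linear head per linguistic
# objective (POS tag, WordNet synset, dependency-parent index) over a shared encoder.
import torch
import torch.nn as nn
from transformers import AutoModel

class LinguisticPretrainingHeads(nn.Module):
    def __init__(self, encoder_name="bert-base-uncased",
                 n_pos_tags=17, n_synsets=5000, max_len=128):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)
        hidden = self.encoder.config.hidden_size
        self.pos_head = nn.Linear(hidden, n_pos_tags)     # POS tag per token
        self.synset_head = nn.Linear(hidden, n_synsets)   # synset ID per token
        self.parent_head = nn.Linear(hidden, max_len)     # position of dependency parent

    def forward(self, input_ids, attention_mask,
                pos_labels, synset_labels, parent_labels):
        h = self.encoder(input_ids=input_ids,
                         attention_mask=attention_mask).last_hidden_state
        loss_fn = nn.CrossEntropyLoss(ignore_index=-100)  # -100 masks padding tokens
        # Equal weighting of the three auxiliary objectives (an assumption here).
        return (loss_fn(self.pos_head(h).transpose(1, 2), pos_labels)
                + loss_fn(self.synset_head(h).transpose(1, 2), synset_labels)
                + loss_fn(self.parent_head(h).transpose(1, 2), parent_labels))
```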
Pre-trained Language Models (PLMs) have become representative foundation models in the natural language processing field. Most PLMs are trained with linguistically agnostic pre-training tasks on the surface form of the text, such as the masked language model (MLM). To further empower PLMs with richer linguistic features, in this paper we propose a simple but effective way to learn linguistic features for pre-trained language models. We propose LERT, a pre-trained language model trained on three types of linguistic features along with the original MLM pre-training task, using a linguistically-informed pre-training (LIP) strategy. We carried out extensive experiments on ten Chinese NLU tasks, and the experimental results show that LERT brings significant improvements over various comparable baselines. Furthermore, we conduct analytical experiments on various linguistic aspects, and the results prove that the design of LERT is valid and effective. Resources are available at https://github.com/ymcui/LERT
Contextual word representations derived from large-scale neural language models are successful across a diverse set of NLP tasks, suggesting that they encode useful and transferable features of language. To shed light on the linguistic knowledge they capture, we study the representations produced by several recent pretrained contextualizers (variants of ELMo, the OpenAI transformer language model, and BERT) with a suite of seventeen diverse probing tasks. We find that linear models trained on top of frozen contextual representations are competitive with state-of-the-art task-specific models in many cases, but fail on tasks requiring fine-grained linguistic knowledge (e.g., conjunct identification). To investigate the transferability of contextual word representations, we quantify differences in the transferability of individual layers within contextualizers, especially between recurrent neural networks (RNNs) and transformers. For instance, higher layers of RNNs are more task-specific, while transformer layers do not exhibit the same monotonic trend. In addition, to better understand what makes contextual word representations transferable, we compare language model pretraining with eleven supervised pretraining tasks. For any given task, pretraining on a closely related task yields better performance than language model pretraining (which is better on average) when the pretraining dataset is fixed. However, language model pretraining on more data gives the best results.
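A minimal version of this probing setup: freeze the contextualizer, take per-word vectors from one layer, and fit a linear classifier on top. The layer choice, first-subword pooling, and the toy POS data below are illustrative assumptions, not the paper's probing suite.

```python
# Linear probe over frozen contextual representations (toy POS-tagging probe).
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.linear_model import LogisticRegression

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
encoder = AutoModel.from_pretrained("bert-base-cased", output_hidden_states=True)
encoder.eval()

def word_features(words, layer=8):
    """One frozen vector per word: first-wordpiece pooling from the chosen layer."""
    enc = tokenizer(words, is_split_into_words=True, return_tensors="pt")
    with torch.no_grad():
        hidden = encoder(**enc).hidden_states[layer][0]   # (seq_len, hidden)
    feats, seen = [], set()
    for idx, wid in enumerate(enc.word_ids()):
        if wid is not None and wid not in seen:           # first sub-token of each word
            seen.add(wid)
            feats.append(hidden[idx].numpy())
    return feats

# Toy probing data: (words, per-word POS labels).
sentences = [(["dogs", "chase", "cats"], ["NOUN", "VERB", "NOUN"]),
             (["she", "runs", "fast"], ["PRON", "VERB", "ADV"])]
X = [v for words, _ in sentences for v in word_features(words)]
y = [t for _, tags in sentences for t in tags]

probe = LogisticRegression(max_iter=1000).fit(X, y)       # the "linear model on top"
print(probe.predict(word_features(["cats", "run"])))
```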
The ability to understand and generate language sets human cognition apart from that of other known life forms. We study a way of combining two of the most successful routes to meaning in language, statistical language models and symbolic semantic formalisms, in the task of semantic parsing. Building on the transition-based Abstract Meaning Representation (AMR) parser AMREager, we explore the utility of incorporating pretrained context-aware word embeddings such as BERT and RoBERTa into the problem of AMR parsing, contributing a new parser dubbed AMRBerger. Experiments find that, compared with their non-contextual counterparts, these rich lexical features are not particularly helpful for improving the parser's overall performance, while additional concept information endows the system with the ability to outperform the baseline. Through lesion studies, we find that the use of contextual embeddings helps make the system more robust to the removal of explicit syntactic features. These findings expose the strengths and weaknesses of contextual embeddings and language models in their current form, and motivate a deeper understanding of them.
There is an ongoing debate in the NLP community about whether modern language models contain linguistic knowledge, recovered through so-called probes. In this paper, we study whether linguistic knowledge is a necessary condition for the good performance of modern language models, which we call the \textit{rediscovery hypothesis}. First, we show that language models that are significantly compressed but perform well on their pre-training objective retain good scores when probed for linguistic structure. This result supports the rediscovery hypothesis and leads to the second contribution of our paper: an information-theoretic framework that relates linguistic information to the language modeling objective. The framework also provides a metric for measuring the impact of linguistic information on the word prediction task. We reinforce our analytical results with both synthetic and real NLP tasks in English.
Neural language representation models such as BERT pre-trained on large-scale corpora can well capture rich semantic patterns from plain text, and be fine-tuned to consistently improve the performance of various NLP tasks. However, the existing pre-trained language models rarely consider incorporating knowledge graphs (KGs), which can provide rich structured knowledge facts for better language understanding. We argue that informative entities in KGs can enhance language representation with external knowledge. In this paper, we utilize both large-scale textual corpora and KGs to train an enhanced language representation model (ERNIE), which can take full advantage of lexical, syntactic, and knowledge information simultaneously. The experimental results have demonstrated that ERNIE achieves significant improvements on various knowledge-driven tasks, and meanwhile is comparable with the state-of-the-art model BERT on other common NLP tasks. The source code and experiment details of this paper can be obtained from https://github.com/thunlp/ERNIE.
Data-hungry deep neural networks have established themselves as the standard for many NLP tasks, including traditional sequence tagging. Despite their state-of-the-art performance on high-resource languages, they still lag behind their statistical counterparts in low-resource scenarios. One methodology to counter this problem is text augmentation, i.e., generating new synthetic training data points from existing data. Although NLP has recently witnessed a load of textual augmentation techniques, the field still lacks a systematic performance analysis across multiple languages and sequence tagging tasks. To fill this gap, we investigate three categories of text augmentation methodologies that perform changes at the syntax (e.g., cropping sub-sentences), token (e.g., random word insertion), and character (e.g., character swapping) levels. We systematically compare them on part-of-speech tagging, dependency parsing, and semantic role labeling for a diverse set of language families using various models, including architectures that rely on pretrained multilingual contextualized language models such as mBERT. Augmentation most significantly improves dependency parsing, followed by part-of-speech tagging and semantic role labeling. We find the experimented techniques to be effective on morphologically rich languages in general rather than on analytic languages such as Vietnamese. Our results suggest that the augmentation techniques can further improve over strong baselines based on mBERT. We identify the character-level methods as the most consistent performers, while synonym replacement and syntactic augmenters provide inconsistent improvements. Finally, we discuss that the results most heavily depend on the task, language pair, and model type.
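To make the token- and character-level categories concrete, here is a toy sketch of two augmenters in the spirit of those surveyed (random word insertion and character swapping); the placeholder label for inserted tokens and the probabilities are our own illustrative choices, not the paper's implementations.

```python
# Toy token-level and character-level augmenters for sequence-tagging data.
import random

def random_insertion(tokens, labels, vocab, p=0.1):
    """Insert random vocabulary words; inserted tokens get a placeholder label."""
    out_toks, out_labs = [], []
    for tok, lab in zip(tokens, labels):
        out_toks.append(tok)
        out_labs.append(lab)
        if random.random() < p:
            out_toks.append(random.choice(vocab))
            out_labs.append("X")          # placeholder label for the new token
    return out_toks, out_labs

def char_swap(tokens, p=0.1):
    """Swap two adjacent inner characters of a token with probability p."""
    out = []
    for tok in tokens:
        if len(tok) > 3 and random.random() < p:
            i = random.randrange(1, len(tok) - 2)
            tok = tok[:i] + tok[i + 1] + tok[i] + tok[i + 2:]
        out.append(tok)
    return out

toks, labs = ["the", "striker", "scored", "late"], ["DET", "NOUN", "VERB", "ADV"]
print(random_insertion(toks, labs, vocab=["maybe", "often"], p=0.3))
print(char_swap(toks, p=0.5))
```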
The appearance of complex attention-based language models such as BERT, RoBERTa, or GPT-3 has allowed highly complex tasks to be tackled in many scenarios. However, when applied to specific domains, these models encounter considerable difficulties. This is the case of social networks such as Twitter, an ever-changing stream of information written in informal and complex language, where each message requires careful evaluation to be understood even by humans, given the important role of context. Addressing tasks in this domain through natural language processing involves severe challenges, and when powerful state-of-the-art multilingual language models are applied to this scenario, language-specific nuances tend to get lost in translation. To face these challenges we present \textbf{BERTuit}, the largest transformer proposed so far for the Spanish language, pre-trained on a massive dataset of 230M Spanish tweets using RoBERTa optimization. Our motivation is to provide a powerful resource for better understanding Spanish Twitter and for applications focused on this social network, with special emphasis on solutions devoted to tackling the spread of misinformation on this platform. BERTuit is evaluated on several tasks and compared against M-BERT, XLM-RoBERTa, and XLM-T, very competitive multilingual transformers. The utility of our approach is shown with applications in this scenario: a zero-shot methodology to visualize hoaxes and analyze groups of authors spreading misinformation. Misinformation spreads wildly on such platforms in languages other than English, which means the performance of transformers may suffer when transferred outside English-speaking communities.
The rapid advancement of AI technology has made text generation tools like GPT-3 and ChatGPT increasingly accessible, scalable, and effective. This can pose a serious threat to the credibility of various forms of media if these technologies are used for plagiarism, including in scientific literature and news sources. Despite the development of automated methods for paraphrase identification, detecting this type of plagiarism remains a challenge due to the disparate nature of the datasets on which these methods are trained. In this study, we review traditional and current approaches to paraphrase identification and propose a refined typology of paraphrases. We also investigate how this typology is represented in popular datasets and how under-representation of certain types of paraphrases impacts detection capabilities. Finally, we outline new directions for future research and datasets in the pursuit of more effective paraphrase detection using AI.
Natural Language Inference (NLI) and Semantic Textual Similarity (STS) are widely used benchmark tasks for the compositional evaluation of pre-trained language models. Despite growing interest in linguistic universals, most NLI/STS studies have focused almost exclusively on English. In particular, no multilingual NLI/STS datasets have been available for Japanese, which is typologically different from English and can shed light on currently controversial behaviors of language models, such as sensitivity to word order and case particles. Against this background, we introduce JSICK, a Japanese NLI/STS dataset that was manually translated from the English dataset SICK. We also present a stress-test dataset for compositional inference, created by transforming the syntactic structures of sentences in JSICK, to investigate whether language models are sensitive to word order and case particles. We conduct baseline experiments on different pre-trained language models and compare the performance of multilingual models when applied to Japanese and other languages. The results of the stress-test experiments suggest that current pre-trained language models are insensitive to word order and case marking.
Transformer-based models have pushed state of the art in many areas of NLP, but our understanding of what is behind their success is still limited. This paper is the first survey of over 150 studies of the popular BERT model. We review the current state of knowledge about how BERT works, what kind of information it learns and how it is represented, common modifications to its training objectives and architecture, the overparameterization issue and approaches to compression. We then outline directions for future research.
Contextualized representation models such as ELMo (Peters et al., 2018a) and BERT (Devlin et al., 2018) have recently achieved state-of-the-art results on a diverse array of downstream NLP tasks. Building on recent token-level probing work, we introduce a novel edge probing task design and construct a broad suite of sub-sentence tasks derived from the traditional structured NLP pipeline. We probe word-level contextual representations from four recent models and investigate how they encode sentence structure across a range of syntactic, semantic, local, and long-range phenomena. We find that existing models trained on language modeling and translation produce strong representations for syntactic phenomena, but only offer comparably small improvements on semantic tasks over a non-contextual baseline.
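A bare-bones sketch of the edge-probing idea: representations from a frozen contextualizer are pooled over two spans and a small trained head predicts the relation label. The paper uses self-attentive span pooling; the mean pooling and layer sizes below are simplifications of ours.

```python
# Simplified edge probe: classify the relation between two spans from frozen states.
import torch
import torch.nn as nn

class EdgeProbe(nn.Module):
    def __init__(self, hidden=768, n_labels=10):
        super().__init__()
        self.clf = nn.Sequential(nn.Linear(2 * hidden, 256), nn.ReLU(),
                                 nn.Linear(256, n_labels))

    def forward(self, token_states, span1, span2):
        """token_states: (seq_len, hidden) from a frozen encoder; spans are (start, end)."""
        pool = lambda s: token_states[s[0]:s[1]].mean(dim=0)   # mean-pool each span
        return self.clf(torch.cat([pool(span1), pool(span2)]))

probe = EdgeProbe()
states = torch.randn(12, 768)            # stand-in for frozen contextual vectors
logits = probe(states, (2, 4), (7, 9))   # e.g., score labels for a dependency arc
```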
Causality detection attracts a lot of attention in natural language processing and linguistics research. It has essential applications in information retrieval, event prediction, question answering, financial analysis, and market research. In this study, we explore several methods for identifying and extracting cause-effect pairs in financial documents using transformers. For this purpose, we propose an approach that combines POS tagging with the BIO scheme and can be integrated with modern transformer models to address the challenge of identifying causality in a given text. Our best methodology achieves an F1 score of 0.9551 and an exact match score of 0.8777 on the blind test of the FinCausal-2021 Shared Task at the FinCausal 2021 Workshop.
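The cause and effect spans are marked with BIO tags while each token also carries a POS tag; the sketch below (hypothetical sentence, spaCy for POS tagging, hand-picked character offsets) shows what such a jointly annotated example could look like before it is fed to a transformer token classifier. It is an illustration of the general idea, not the authors' pipeline.

```python
# Jointly annotate tokens with POS tags and BIO cause/effect tags.
import spacy   # assumes the en_core_web_sm model is installed

nlp = spacy.load("en_core_web_sm")

def bio_pos_annotate(text, cause_span, effect_span):
    """Return (token, POS, BIO) triples; spans are (start, end) character offsets."""
    rows = []
    for tok in nlp(text):
        if cause_span[0] <= tok.idx < cause_span[1]:
            bio = "B-CAUSE" if tok.idx == cause_span[0] else "I-CAUSE"
        elif effect_span[0] <= tok.idx < effect_span[1]:
            bio = "B-EFFECT" if tok.idx == effect_span[0] else "I-EFFECT"
        else:
            bio = "O"
        rows.append((tok.text, tok.pos_, bio))
    return rows

text = "Profits fell because demand weakened."
# cause = "demand weakened", effect = "Profits fell"
print(bio_pos_annotate(text, cause_span=(21, 36), effect_span=(0, 12)))
```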
Multiword expressions (MWEs) are groups of words in which the meaning of the whole is not derived from the meanings of its parts. Handling MWEs is crucial in many natural language processing (NLP) applications, including machine translation and terminology extraction. Therefore, detecting MWEs is a popular research topic. In this paper, we explore state-of-the-art neural transformers for the task of detecting MWEs. We empirically evaluate them on the dataset from SemEval-2016 Task 10: Detecting Minimal Semantic Units and their Meanings (DiMSUM). We show that transformer models outperform previous neural models based on long short-term memory (LSTM). The code and pre-trained models will be made freely available to the community.
NLP is a form of artificial intelligence and machine learning concerned with a computer's or machine's ability to understand and interpret human language. Language models are crucial in text analysis and NLP, since they allow computers to interpret qualitative input and convert it into quantitative data that can be used in other tasks. In essence, in the context of transfer learning, language models are typically trained on a large generic corpus, referred to as the pre-training stage, and then fine-tuned for a specific downstream task. As a result, pre-trained language models are mostly used as baseline models that incorporate a broad grasp of context and can be further customized for new NLP tasks. The majority of pre-trained models are trained on corpora from general domains, such as Twitter, newswire, Wikipedia, and the Web. Off-the-shelf NLP models trained on general text may be inefficient and inaccurate in specialized domains. In this paper, we propose a cybersecurity language model called SecureBERT, which is able to capture the connotations of text in the cybersecurity domain and can therefore be used for automating many important cybersecurity tasks that would otherwise rely on human expertise and tedious manual effort. SecureBERT is trained on a large corpus of cybersecurity text that we collected and preprocessed from a variety of sources in cybersecurity and the general computing domain. Using our proposed methods for tokenization and model weight adjustment, SecureBERT not only preserves an understanding of general English, as most pre-trained language models do, but is also effective when applied to text with cybersecurity implications.
Prepositions are frequently occurring polysemous words. Their disambiguation is crucial in tasks such as semantic role labeling, question answering, textual entailment, and noun compound paraphrasing. In this paper, we propose a novel preposition sense disambiguation (PSD) approach that does not use any linguistic tools. In a supervised setting, the machine learning model is presented with sentences in which prepositions have been annotated with their senses. These senses are IDs from The Preposition Project (TPP). We use the hidden-layer representations of pre-trained BERT and BERT variants. The latent representations are then classified into the correct sense ID using a multi-layer perceptron. The dataset used for this task is from SemEval-2007 Task 6. Our methodology achieves an accuracy of 86.85%, which is better than the state of the art.
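A sketch of the pipeline as described: extract the hidden-layer vector of the preposition from a frozen BERT and train an MLP over TPP sense IDs. The layer index, wordpiece averaging, and the toy sense labels are illustrative assumptions, not the paper's configuration.

```python
# Preposition sense classification from a frozen BERT hidden layer + MLP.
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.neural_network import MLPClassifier

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased", output_hidden_states=True)
bert.eval()

def prep_vector(sentence, prep_index, layer=10):
    """Hidden-layer vector of the word at prep_index (0-based, whitespace split)."""
    words = sentence.split()
    enc = tok(words, is_split_into_words=True, return_tensors="pt")
    with torch.no_grad():
        states = bert(**enc).hidden_states[layer][0]
    sub = [i for i, w in enumerate(enc.word_ids()) if w == prep_index]
    return states[sub].mean(dim=0).numpy()   # average the preposition's wordpieces

# Toy supervision: sentence, preposition position, and a made-up sense label.
train = [("she sat on the chair", 2, "on:location"),
         ("the talk on physics ran long", 2, "on:topic")]
X = [prep_vector(s, i) for s, i, _ in train]
y = [sense for _, _, sense in train]

clf = MLPClassifier(hidden_layer_sizes=(256,), max_iter=500).fit(X, y)
print(clf.predict([prep_vector("a book on history", 2)]))
```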
Natural Language Inference (NLI) is a hot research topic in natural language processing, and contradiction detection between sentences is a special case of NLI. It is considered a difficult NLP task whose impact is large when added as a component to many NLP applications, such as question answering systems and text summarization. Arabic is one of the most challenging low-resource languages for contradiction detection, due to its rich lexicon and semantic ambiguity. We created a dataset of more than 12K sentence pairs, named ArNLI, which will be publicly available. Moreover, we adopted a new model inspired by the solution proposed by Stanford for contradiction detection in English. We propose an approach to detect contradiction between pairs of Arabic sentences using a contradiction vector combined with a language model vector as input to a machine learning model. We analyzed the results of different traditional machine learning classifiers and compared them on the created dataset (ArNLI) and on automatic translations of the English datasets PHEME and SICK. The best results were achieved with a Random Forest classifier, with accuracies of 99%, 60%, and 75% on PHEME, SICK, and ArNLI respectively.
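Loosely in the spirit of this setup, the sketch below combines a couple of hand-crafted contradiction cues with sentence-embedding features and trains a Random Forest; the feature set, the embedding model, and the toy Arabic pairs are our own assumptions, not the paper's.

```python
# Hand-crafted contradiction cues + sentence-embedding features -> Random Forest.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")  # assumed choice

NEGATIONS = {"لا", "لم", "لن", "ليس"}   # toy Arabic negation cue list

def contradiction_features(s1, s2):
    t1, t2 = set(s1.split()), set(s2.split())
    overlap = len(t1 & t2) / max(len(t1 | t2), 1)
    neg_mismatch = float(bool(t1 & NEGATIONS) != bool(t2 & NEGATIONS))
    return np.array([overlap, neg_mismatch])

def pair_vector(s1, s2):
    e1, e2 = embedder.encode([s1, s2])
    return np.concatenate([contradiction_features(s1, s2), np.abs(e1 - e2)])

pairs = [("الطقس حار اليوم", "الطقس ليس حارا اليوم", 1),   # contradiction
         ("الطقس حار اليوم", "الجو دافئ اليوم", 0)]          # no contradiction
X = [pair_vector(a, b) for a, b, _ in pairs]
y = [label for _, _, label in pairs]
clf = RandomForestClassifier(n_estimators=100).fit(X, y)
```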
Transformer-based language models trained on large text corpora have enjoyed immense popularity in the natural language processing community and are commonly used as a starting point for downstream tasks. While these models are undeniably useful, it is a challenge to quantify their performance beyond traditional accuracy metrics. In this paper, we compare BERT-based language models through snapshots of acquired knowledge at sequential stages of the training process. Structured relationships from the training corpus can be uncovered by querying a masked language model with probing tasks. We present a methodology to unveil a knowledge acquisition timeline by generating knowledge graph extracts from cloze "fill-in-the-blank" statements at various stages of RoBERTa's early training. We extend this analysis to a comparison of pre-trained variants of BERT models (DistilBERT, BERT-base, RoBERTa). This work proposes a quantitative framework for comparing language models through knowledge graph extraction (GED, Graph2Vec) and showcases a part-of-speech analysis to identify the linguistic strengths of each model variant. Using these metrics, machine learning practitioners can compare models, diagnose the behavioral strengths and weaknesses of their models, and identify new targeted datasets to improve model performance.
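The cloze-style querying that drives this kind of knowledge extraction can be reproduced in a few lines; the relation templates below are generic examples, not the paper's probe set.

```python
# Query a masked LM with "fill-in-the-blank" templates; top predictions become
# candidate (subject, relation, object) triples for a knowledge graph.
from transformers import pipeline

fill = pipeline("fill-mask", model="roberta-base")

templates = [
    "Paris is the capital of <mask>.",
    "The Eiffel Tower is located in <mask>.",
]
for t in templates:
    top = fill(t, top_k=1)[0]
    print(t, "->", top["token_str"].strip(), f"(p={top['score']:.2f})")
```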
We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers. Unlike recent language representation models (Peters et al., 2018a; Radford et al., 2018), BERT is designed to pretrain deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT model can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications. BERT is conceptually simple and empirically powerful. It obtains new state-of-the-art results on eleven natural language processing tasks, including pushing the GLUE score to 80.5% (7.7% point absolute improvement), MultiNLI accuracy to 86.7% (4.6% absolute improvement), SQuAD v1.1 question answering Test F1 to 93.2 (1.5 point absolute improvement) and SQuAD v2.0 Test F1 to 83.1 (5.1 point absolute improvement).
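In code, the "one additional output layer" amounts to a classification head on top of the pre-trained encoder, fine-tuned jointly with it; below is a minimal sketch for a three-way NLI-style label, with hyperparameters chosen only for illustration.

```python
# Fine-tuning BERT with a single added classification head (one training step shown).
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=3)          # 3 labels: entail/neutral/contradict
optim = torch.optim.AdamW(model.parameters(), lr=2e-5)

premise, hypothesis, label = "A man is playing guitar.", "A person makes music.", 0
batch = tok(premise, hypothesis, return_tensors="pt", truncation=True)
out = model(**batch, labels=torch.tensor([label]))   # cross-entropy over the new head
out.loss.backward()
optim.step()
```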
We present the first parsing results on the Penn-Helsinki Parsed Corpus of Early Modern English (PPCEME), a 1.9-million-word treebank that is an important resource for the study of syntactic change. We describe key features of PPCEME that make it challenging for parsing, including a larger and more varied set of function tags than in the Penn Treebank. We present results for this corpus using a modified version of the Berkeley Neural Parser, together with the approach of Gabbard et al. (2006) for function tag recovery. Despite its simplicity, this approach works surprisingly well, suggesting that the original structure can be recovered with sufficient accuracy to support linguistic applications (e.g., searching for syntactic structures of interest). However, for a subset of function tags (e.g., the tag indicating direct speech), additional work is needed, and we discuss some further limitations of this approach. The resulting parser will be used to parse Early English Books Online, a 1.1-billion-word corpus whose utility for the study of syntactic change will be greatly increased by the addition of accurate parse trees.