Keeping the performance of language technologies optimal as time passes is of great practical interest. Here, we examine temporal effects on system performance, establishing more nuanced terminology for discussing the topic and appropriate experimental design to support investigation of the observed phenomena. We present a set of experiments with systems powered by large neural pretrained representations for English to demonstrate that {\em temporal model deterioration} is not as big a concern as often assumed, with some models in fact improving when tested on data drawn from a later time period. It is, however, the case that {\em temporal domain adaptation} is beneficial: better performance on a given time period is possible when the system is trained on temporally more recent data. Our experiments show that the distinction between temporal model deterioration and temporal domain adaptation becomes salient for systems built on pretrained representations. Finally, we examine two approaches for temporal domain adaptation that require no human annotation of new data, with self-labeling proving superior to continual pre-training. Notably, for named entity recognition, self-labeling leads to better temporal adaptation than human annotation.
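To make the self-labeling idea above concrete, here is a minimal sketch of temporal self-labeling with a generic text classifier: a model trained on the older time period pseudo-labels unlabeled data from the newer period, and only confident predictions are added before retraining. The classifier, features, and confidence threshold are illustrative choices, not the paper's exact setup.

```python
# Minimal sketch of self-labeling for temporal adaptation (illustrative setup).
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def self_label_adapt(old_texts, old_labels, new_unlabeled_texts, threshold=0.9):
    """Train on the old period, pseudo-label the new period, retrain on the union."""
    vec = TfidfVectorizer(min_df=2)
    X_old = vec.fit_transform(old_texts)
    clf = LogisticRegression(max_iter=1000).fit(X_old, old_labels)

    # Pseudo-label the newer, unlabeled time period and keep confident predictions.
    X_new = vec.transform(new_unlabeled_texts)
    probs = clf.predict_proba(X_new)
    confident = probs.max(axis=1) >= threshold
    pseudo_labels = clf.classes_[probs.argmax(axis=1)]

    # Retrain on the original labels plus the confident pseudo-labels.
    kept_texts = [t for t, keep in zip(new_unlabeled_texts, confident) if keep]
    X_all = vec.transform(list(old_texts) + kept_texts)
    y_all = np.concatenate([np.asarray(old_labels), pseudo_labels[confident]])
    return vec, LogisticRegression(max_iter=1000).fit(X_all, y_all)
```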
When an NLP model is trained on text data from one time period and tested or deployed on data from another, the resulting temporal misalignment can degrade end-task performance. In this work, we establish a suite of eight diverse tasks across different domains (social media, scientific papers, news, and reviews) and time periods (spanning five years or more) to quantify the effects of temporal misalignment in modern NLP systems. Our study focuses on the ubiquitous setting in which a pretrained model is optionally adapted via continued domain-specific pretraining, followed by task-specific finetuning. We find stronger effects of temporal misalignment on task performance than previously reported. We also find that, while temporal adaptation through continued pretraining can help, these gains are small compared to task-specific finetuning on data from the target time period. Our findings motivate continued research on improving the temporal robustness of NLP models.
As text classification models may deteriorate over time due to changes in the data, developing models whose performance persists over time is important. Being able to predict how a model will perform over time can help design models that remain effective for longer periods. In this paper, we study this question by evaluating how well various language models and classification algorithms persist over time, and how dataset characteristics can help predict the temporal stability of different models. We perform longitudinal classification experiments on three datasets spanning 6 to 19 years and involving a variety of tasks and types of data. We find that one can estimate how well a model will retain its performance over time based on (i) the model's performance over a restricted time period and its extrapolation to a longer period, and (ii) linguistic characteristics of the dataset, such as the degree of familiarity between subsets from different years. The findings of these experiments have important implications for the design of text classification models whose aim is to preserve performance over time.
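As a rough illustration of the kind of dataset characteristic mentioned above, the sketch below computes vocabulary overlap (Jaccard similarity) between yearly subsets of a corpus; this is only one simple proxy, and the measures actually used in the paper may differ.

```python
# Illustrative proxy for "familiarity between yearly subsets": vocabulary overlap.
from collections import Counter
from itertools import combinations

def vocabulary(texts, min_count=2):
    counts = Counter(tok for t in texts for tok in t.lower().split())
    return {w for w, c in counts.items() if c >= min_count}

def yearly_overlap(texts_by_year):
    """texts_by_year: dict mapping year -> list of documents for that year."""
    vocabs = {year: vocabulary(docs) for year, docs in texts_by_year.items()}
    overlaps = {}
    for y1, y2 in combinations(sorted(vocabs), 2):
        inter = len(vocabs[y1] & vocabs[y2])
        union = len(vocabs[y1] | vocabs[y2])
        overlaps[(y1, y2)] = inter / union if union else 0.0  # Jaccard similarity
    return overlaps
```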
Language use changes over time, and this impacts the effectiveness of NLP systems. This phenomenon is even more prevalent in social media data during crisis events where meaning and frequency of word usage may change over the course of days. Contextual language models fail to adapt temporally, emphasizing the need for temporal adaptation in models which need to be deployed over an extended period of time. While existing approaches consider data spanning large periods of time (from years to decades), shorter time spans are critical for crisis data. We quantify temporal degradation for this scenario and propose methods to cope with performance loss by leveraging techniques from domain adaptation. To the best of our knowledge, this is the first effort to explore effects of rapid language change driven by adversarial adaptations, particularly during natural and human-induced disasters. Through extensive experimentation on diverse crisis datasets, we analyze under what conditions our approaches outperform strong baselines while highlighting the current limitations of temporal adaptation methods in scenarios where access to unlabeled data is scarce.
Named Entity Recognition (NER) is an important and well-studied task in natural language processing. The classic CoNLL-2003 English dataset, published almost 20 years ago, is commonly used to train and evaluate named entity taggers. The age of this dataset raises the question of how well these models perform when applied to modern data. In this paper, we present CoNLL++, a new annotated test set that mimics the process used to create the original CoNLL-2003 test set as closely as possible, except with data collected from 2020. Using CoNLL++, we evaluate the generalization of 20+ different models to modern data. We observe that different models have very different generalization behavior. F\textsubscript{1} scores of large transformer-based models which are pre-trained on recent data dropped much less than models using static word embeddings, and RoBERTa-based and T5 models achieve comparable F\textsubscript{1} scores on both CoNLL-2003 and CoNLL++. Our experiments show that achieving good generalizability requires a combined effort of developing larger models and continuing pre-training with in-domain and recent data. These results suggest standard evaluation methodology may have under-estimated progress on named entity recognition over the past 20 years; in addition to improving performance on the original CoNLL-2003 dataset, we have also improved the ability of our models to generalize to modern data.
Language models pretrained on text from a wide variety of sources form the foundation of today's NLP. In light of the success of these broad-coverage models, we investigate whether it is still helpful to tailor a pretrained model to the domain of a target task. We present a study across four domains (biomedical and computer science publications, news, and reviews) and eight classification tasks, showing that a second phase of pretraining in-domain (domain-adaptive pretraining) leads to performance gains, under both high- and low-resource settings. Moreover, adapting to the task's unlabeled data (task-adaptive pretraining) improves performance even after domain-adaptive pretraining. Finally, we show that adapting to a task corpus augmented using simple data selection strategies is an effective alternative, especially when resources for domain-adaptive pretraining might be unavailable. Overall, we consistently find that multiphase adaptive pretraining offers large gains in task performance.
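A minimal sketch of the second pretraining phase described above (domain- or task-adaptive pretraining), assuming a HuggingFace-style workflow: continue masked language modeling on an unlabeled in-domain or task corpus before task finetuning. The model name, corpus file, and hyperparameters are placeholders, not the paper's exact configuration.

```python
# Continued masked-language-model pretraining on an unlabeled in-domain corpus.
from datasets import load_dataset
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

model_name = "roberta-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)

# Unlabeled in-domain text, one document per line (hypothetical file).
corpus = load_dataset("text", data_files={"train": "domain_corpus.txt"})["train"]
corpus = corpus.map(lambda ex: tokenizer(ex["text"], truncation=True, max_length=256),
                    batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="dapt-roberta", per_device_train_batch_size=16,
                           num_train_epochs=1, learning_rate=5e-5),
    train_dataset=corpus,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15),
)
trainer.train()  # the adapted checkpoint is then finetuned on the labeled task data
```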
Understanding customer feedback is becoming a necessity for companies to identify problems and improve their products and services. Text classification and sentiment analysis can play a major role in analyzing this data by using a variety of machine and deep learning approaches. In this work, different transformer-based models are utilized to explore how efficient these models are when working with a German customer feedback dataset. In addition, these pre-trained models are further analyzed to determine if adapting them to a specific domain using unlabeled data can yield better results than off-the-shelf pre-trained models. To evaluate the models, two downstream tasks from the GermEval 2017 shared task are considered. The experimental results show that transformer-based models can reach significant improvements compared to a fastText baseline and outperform the published scores and previous models. For the subtask Relevance Classification, the best models achieve a micro-averaged $F1$-Score of 96.1 % on the first test set and 95.9 % on the second one, and a score of 85.1 % and 85.3 % for the subtask Polarity Classification.
Recent work has demonstrated that pretraining in-domain language models can boost performance when adapting to a new domain. However, the costs associated with pretraining raise an important question: given a fixed budget, what steps should an NLP practitioner take to maximize performance? In this paper, we study domain adaptation under budget constraints and frame it as a customer's choice problem between data annotation and pretraining. Specifically, we measure the annotation cost of three procedural text datasets as well as the pretraining cost of three in-domain language models. We then evaluate the utility of different combinations of pretraining and data annotation under varying budget constraints to assess which strategy works best. We find that, for small budgets, spending all funds on annotation leads to the best performance; once the budget becomes large enough, a combination of data annotation and in-domain pretraining is preferable. We therefore recommend that task-specific data annotation be part of an economical strategy when adapting an NLP model to a new domain.
We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers. Unlike recent language representation models (Peters et al., 2018a; Radford et al., 2018), BERT is designed to pretrain deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT model can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications. BERT is conceptually simple and empirically powerful. It obtains new state-of-the-art results on eleven natural language processing tasks, including pushing the GLUE score to 80.5% (7.7% point absolute improvement), MultiNLI accuracy to 86.7% (4.6% absolute improvement), SQuAD v1.1 question answering Test F1 to 93.2 (1.5 point absolute improvement) and SQuAD v2.0 Test F1 to 83.1 (5.1 point absolute improvement).
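A minimal sketch of the "one additional output layer" setup described above, assuming a single-sentence classification task: a linear head on top of BERT's [CLS] representation, finetuned jointly with the encoder. Names and hyperparameters are illustrative, not the original configuration.

```python
# Linear classification head on top of BERT's [CLS] token, finetuned end to end.
import torch
from torch import nn
from transformers import AutoModel, AutoTokenizer

class BertClassifier(nn.Module):
    def __init__(self, num_labels, model_name="bert-base-uncased"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        self.head = nn.Linear(self.encoder.config.hidden_size, num_labels)

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]   # [CLS] token representation
        return self.head(cls)               # task-specific logits

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = BertClassifier(num_labels=2)
batch = tokenizer(["a simple example sentence"], return_tensors="pt", padding=True)
logits = model(batch["input_ids"], batch["attention_mask"])
loss = nn.functional.cross_entropy(logits, torch.tensor([1]))
loss.backward()  # gradients flow into both the head and the encoder
```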
In this work, we introduce IndicXTREME, a benchmark consisting of nine diverse tasks covering 18 languages from the Indic sub-continent belonging to four different families. Across languages and tasks, IndicXTREME contains a total of 103 evaluation sets, of which 51 are new contributions to the literature. To maintain high quality, we only use human annotators to curate or translate\footnote{for IndicXParaphrase, where an automatic translation system is used, a second human verification and correction step is done.} our datasets. To the best of our knowledge, this is the first effort toward creating a standard benchmark for Indic languages that aims to test the zero-shot capabilities of pretrained language models. We also release IndicCorp v2, an updated and much larger version of IndicCorp that contains 20.9 billion tokens in 24 languages. We pretrain IndicBERT v2 on IndicCorp v2 and evaluate it on IndicXTREME to show that it outperforms existing multilingual language models such as XLM-R and MuRIL.
For most natural language processing tasks, the dominant practice is to finetune large pretrained transformer models (e.g., BERT) using smaller downstream datasets. Despite the success of this approach, it remains unclear to what extent these gains are attributable to the massive background corpora employed for pretraining versus to the pretraining objectives themselves. This paper introduces a large-scale study of self-pretraining, in which the same (downstream) training data is used for both pretraining and finetuning. In experiments addressing both ELECTRA and RoBERTa models and 10 distinct downstream datasets, we observe that self-pretraining rivals standard pretraining on the BookWiki corpus (despite using around $10\times$--$500\times$ less data), outperforming the latter on $7$ and $5$ datasets, respectively. Surprisingly, these task-specific pretrained models often perform well on other tasks, including the GLUE benchmark. Our results suggest that, in many scenarios, performance gains attributable to pretraining are driven primarily by the pretraining objective itself and are not always attributable to the incorporation of massive datasets. These findings are especially important given concerns about intellectual property and offensive content in web-scale pretraining data.
Pretrained language models (PTLMs) are typically learned over a large, static corpus and further fine-tuned for various downstream tasks. However, when deployed in the real world, a PTLM-based model must deal with data distributions that deviate from what the PTLM was initially trained on. In this paper, we study a lifelong language model pretraining challenge in which a PTLM is continually updated to adapt to emerging data. Over a domain-incremental stream of research papers and a chronologically ordered stream of tweets, we incrementally pretrain a PTLM with different continual learning algorithms and track downstream task performance (after fine-tuning). We evaluate the PTLM's ability to adapt to new corpora while retaining knowledge learned from earlier corpora. Our experiments show that distillation-based approaches are most effective in retaining downstream performance on earlier domains. These algorithms also improve knowledge transfer, allowing models to achieve better downstream performance on the latest data, and improve temporal generalization when a distribution gap exists between training and evaluation due to time. We believe our problem formulation, methods, and analysis will inspire future studies toward continual pretraining of language models.
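The distillation-based approaches reported above as most effective can be sketched generically as follows: the masked-language-modeling loss on the emerging corpus is combined with a KL term that keeps the updated model close to the previous checkpoint. This is a generic formulation under assumed inputs, not the paper's exact algorithm.

```python
# Generic distillation-regularized continual pretraining step (illustrative).
import torch
import torch.nn.functional as F

def continual_step(student, teacher, batch, alpha=0.5, temperature=2.0):
    """student: model being updated; teacher: frozen copy from the previous period.
    batch is assumed to contain input_ids, attention_mask, and MLM labels."""
    out = student(**batch)                       # MLM loss on emerging data
    with torch.no_grad():
        teacher_logits = teacher(**batch).logits
    # Distillation term: match the teacher's softened token distributions.
    kd = F.kl_div(
        F.log_softmax(out.logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2
    return (1 - alpha) * out.loss + alpha * kd   # combined objective to backprop
```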
Evidence-based medicine, the practice by which healthcare professionals refer to the best available evidence when making decisions, forms the foundation of modern healthcare. However, it relies on labour-intensive systematic reviews, in which domain specialists must aggregate and extract information from thousands of publications, primarily randomised controlled trial (RCT) results, into evidence tables. This paper investigates automating evidence table generation by decomposing the problem into two language processing tasks: \textit{named entity recognition}, which identifies key entities within text such as drug names, and \textit{relation extraction}, which maps their relationships so that they can be separated into ordered tuples. We focus on the automatic tabulation of sentences from published RCT abstracts that report study outcome results. Two deep neural network models were developed as part of a joint extraction pipeline, using the principles of transfer learning and transformer-based language representations. To train and test these models, a new gold-standard corpus was developed, comprising nearly 600 result sentences from six disease areas. This approach demonstrated significant advantages, with our system performing well across multiple natural language processing tasks and disease areas, including on disease domains not seen during training. Furthermore, we show that these results can be achieved by training our models on as few as 200 example sentences. The final system is a proof of concept that evidence table generation can be semi-automated, representing a step towards fully automated systematic reviews.
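A schematic sketch of such a two-stage extraction pipeline is shown below, assuming an off-the-shelf general-purpose NER model as a stand-in and a stubbed relation classifier; the actual models, entity types, and relation labels in the paper differ.

```python
# Schematic two-stage pipeline: NER over a result sentence, then pairwise relation
# classification to produce ordered tuples for an evidence table (illustrative only).
from itertools import combinations
from transformers import pipeline

ner = pipeline("token-classification", model="dslim/bert-base-NER",
               aggregation_strategy="simple")   # stand-in model, not the paper's

def classify_relation(sentence, entity_a, entity_b):
    # Stub: in practice this would be a finetuned sequence-pair classifier.
    return "related-to"                          # placeholder relation label

def tabulate(sentence):
    entities = ner(sentence)
    rows = []
    for a, b in combinations(entities, 2):
        relation = classify_relation(sentence, a["word"], b["word"])
        rows.append((a["word"], relation, b["word"]))  # ordered tuple for the table
    return rows
```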
Annotated data is an essential ingredient in natural language processing for training and evaluating machine learning models. It is therefore highly desirable for annotations to be of high quality. However, recent work has shown that several popular datasets contain a surprising number of annotation errors or inconsistencies. To alleviate this issue, many methods for annotation error detection have been devised over the years. While researchers show that their methods work well on newly introduced datasets, they rarely compare them to prior work or evaluate on the same datasets. This raises strong concerns about the general performance of these methods and makes their strengths and weaknesses hard to assess. We therefore reimplement 18 methods for detecting potential annotation errors and evaluate them on 9 English datasets for text classification as well as token and span labeling. In addition, we define a uniform evaluation setup, including a new formalization of the annotation error detection task, an evaluation protocol, and general best practices. To facilitate future research and reproducibility, we release our datasets and implementations in an easy-to-use, open-source software package.
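One widely used family of annotation error detection methods scores each instance by how unlikely its gold label is under out-of-fold model predictions. The sketch below illustrates that generic idea with a simple classifier; it is not one of the 18 reimplemented methods specifically.

```python
# Flag instances whose gold label receives low probability under out-of-fold
# predictions (a generic annotation-error-detection baseline, illustrative only).
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.pipeline import make_pipeline

def flag_suspicious(texts, labels, quantile=0.05):
    model = make_pipeline(TfidfVectorizer(min_df=2), LogisticRegression(max_iter=1000))
    probs = cross_val_predict(model, texts, labels, cv=5, method="predict_proba")
    classes = np.unique(labels)                       # column order of predict_proba
    gold_prob = probs[np.arange(len(labels)), np.searchsorted(classes, labels)]
    cutoff = np.quantile(gold_prob, quantile)
    # Instances with the least-supported gold labels are candidate annotation errors.
    return [i for i, p in enumerate(gold_prob) if p <= cutoff]
```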
Transformer language models (TLMs) are critical for most NLP tasks, but they are difficult to create for low-resource languages because of how much pretraining data they require. In this work, we investigate two techniques for training monolingual TLMs in a low-resource setting: greatly reducing TLM size, and complementing the masked language modeling objective with two linguistically rich supervised tasks (part-of-speech tagging and dependency parsing). Results from 7 diverse languages indicate that our model, MicroBERT, is able to produce marked improvements in downstream task evaluations relative to a typical monolingual TLM pretraining approach. Specifically, we find that monolingual MicroBERT models achieve gains of up to 18% for parser LAS and 11% for NER F1 compared to a multilingual baseline, mBERT, while having less than 1% of its parameter count. We conclude that reducing TLM parameter count and using labeled data for pretraining low-resource TLMs can yield large quality benefits and in some cases produce models that outperform multilingual approaches.
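A toy sketch of the multitask pretraining idea described above: a small shared Transformer encoder is trained with a masked-language-modeling head and a supervised POS-tagging head, summing the two losses. Dimensions are arbitrary and the dependency-parsing task is omitted for brevity; this is not MicroBERT's exact architecture.

```python
# Tiny shared encoder with MLM and POS heads trained on a joint objective.
import torch
from torch import nn

class TinyMultitaskTLM(nn.Module):
    def __init__(self, vocab_size, num_pos_tags, dim=128, heads=4, layers=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        layer = nn.TransformerEncoderLayer(dim, heads, dim * 4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, layers)
        self.mlm_head = nn.Linear(dim, vocab_size)    # masked token prediction
        self.pos_head = nn.Linear(dim, num_pos_tags)  # supervised POS tagging

    def forward(self, input_ids, mlm_labels, pos_labels):
        h = self.encoder(self.embed(input_ids))
        ce = nn.functional.cross_entropy
        # -100 marks positions without a label (unmasked tokens / untagged tokens).
        mlm_loss = ce(self.mlm_head(h).transpose(1, 2), mlm_labels, ignore_index=-100)
        pos_loss = ce(self.pos_head(h).transpose(1, 2), pos_labels, ignore_index=-100)
        return mlm_loss + pos_loss                    # joint pretraining objective
```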
In recent years, pretrained language models have revolutionized the NLP world, achieving state-of-the-art performance on a variety of downstream tasks. However, in many cases these models do not perform well when labeled data is scarce and the model is expected to perform in a zero- or few-shot setting. Recently, several works have shown that continued pretraining, or performing a second pretraining phase that is better aligned with the downstream task, can lead to improved results, especially in scarce-data settings. Here, we propose to leverage sentiment-carrying discourse markers to generate large-scale weakly-labeled data, which in turn can be used to adapt language models for sentiment analysis. Extensive experimental results show the value of our approach on various benchmark datasets, including the finance domain. Code, models, and data are available at https://github.com/ibm/tslm-discourse-markers.
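A toy sketch of the weak-labeling step described above: sentences containing a sentiment-carrying discourse marker receive a weak label derived from the marker. The marker lists and rules here are illustrative, not the paper's actual lexicon or procedure.

```python
# Weak labeling with sentiment-carrying discourse markers (toy example).
POSITIVE_MARKERS = ("fortunately", "thankfully", "luckily")
NEGATIVE_MARKERS = ("unfortunately", "sadly", "regrettably")

def weak_label(sentence):
    """Return a weak sentiment label, or None if no marker is present."""
    s = sentence.lower()
    if any(m in s for m in POSITIVE_MARKERS):
        return "positive"
    if any(m in s for m in NEGATIVE_MARKERS):
        return "negative"
    return None

# Sentences that receive a label form a large weakly-labeled corpus that can be
# used to further adapt a language model before task finetuning.
corpus = ["Unfortunately, the battery died within a week.",
          "Thankfully, support replaced it at no cost."]
weak_data = [(s, weak_label(s)) for s in corpus if weak_label(s) is not None]
```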
NLP is a form of artificial intelligence and machine learning concerned with a computer or machine's ability to understand and interpret human language. Language models are crucial in text analysis and NLP, since they allow computers to interpret qualitative input and convert it into quantitative data that can be used in other tasks. In essence, in the context of transfer learning, language models are typically trained on a large general corpus, referred to as the pretraining stage, and then fine-tuned for a specific downstream task. As a result, pretrained language models are mostly used as base models that incorporate a broad grasp of context and can be further customized for new NLP tasks. Most pretrained models are trained on corpora from general domains such as Twitter, newswire, Wikipedia, and the Web. Off-the-shelf NLP models trained on general text may be inefficient and inaccurate in specialized fields. In this paper, we propose a cybersecurity language model called SecureBERT, which is able to capture the meaning of text in the cybersecurity domain and can therefore be used to automate many important cybersecurity tasks that would otherwise rely on human expertise and tedious manual effort. SecureBERT is trained on a large corpus of cybersecurity text that we collected and preprocessed from a variety of sources in the cybersecurity and general computing domains. Using our proposed methods for tokenization and model weight adjustment, SecureBERT is not only able to preserve an understanding of general English, as most pretrained language models can, but is also effective when applied to text with cybersecurity implications.
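One generic way to realize the kind of tokenizer customization mentioned above is to add domain terms to the tokenizer and resize the model's embedding matrix before continued pretraining. This sketch shows that common recipe, not necessarily SecureBERT's exact tokenization and weight-adjustment procedure; the token list is hypothetical.

```python
# Extend a tokenizer with domain terms and resize the embedding matrix.
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForMaskedLM.from_pretrained("roberta-base")

domain_tokens = ["ransomware", "exfiltration", "keylogger"]  # example terms only
added = tokenizer.add_tokens(domain_tokens)
if added:
    # New embedding rows are randomly initialized and must be trained afterwards.
    model.resize_token_embeddings(len(tokenizer))

# The model would then be further pretrained (masked language modeling) on the
# in-domain corpus so the new token embeddings acquire useful representations.
```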
Many prior language modeling efforts have shown that pre-training on an in-domain corpus can significantly improve performance on downstream domain-specific NLP tasks. However, the difficulties associated with collecting enough in-domain data might discourage researchers from approaching this pre-training task. In this paper, we conducted a series of experiments by pre-training Bidirectional Encoder Representations from Transformers (BERT) with different sizes of biomedical corpora. The results demonstrate that pre-training on a relatively small amount of in-domain data (4GB) with limited training steps, can lead to better performance on downstream domain-specific NLP tasks compared with fine-tuning models pre-trained on general corpora.
Compared to standard Named Entity Recognition (NER), identifying persons, locations, and organizations in historical texts poses a big challenge. To obtain machine-readable corpora, historical texts usually need to be scanned and Optical Character Recognition (OCR) needs to be performed. As a result, historical corpora contain errors. In addition, entities such as locations or organizations can change over time, which poses another challenge. Overall, historical texts come with several peculiarities that differ greatly from modern texts, and large labeled corpora for training a neural tagger are hardly available in this domain. In this work, we tackle NER for historical German, English, French, Swedish, and Finnish by training large historical language models. We circumvent the need for large amounts of labeled data by pretraining language models on unlabeled data. We propose hmBERT, a historical multilingual BERT-based language model, and release the model in several versions of different sizes. Furthermore, we evaluate the capability of hmBERT by solving downstream NER as part of this year's HIPE-2022 shared task and provide detailed analysis and insights. For the Multilingual Classical Commentary coarse-grained NER challenge, our tagger HISTeria outperforms the other teams' models for two out of the three languages.
In recent years, large pretrained language models (LMs) have revolutionized the field of natural language processing (NLP). However, while pretraining on general language has proven very effective for common language, it has been observed that niche language poses problems. In particular, climate-related texts include specific language that common LMs cannot represent accurately. We argue that this shortcoming of today's LMs limits the applicability of modern NLP to the broad field of processing climate-related texts. As a remedy, we propose ClimateBert, a transformer-based language model that is further pretrained on over 1.6 million paragraphs of climate-related text crawled from various sources such as general news, research articles, and corporate climate reports. We find that ClimateBert leads to a 46% improvement on a masked language modeling objective, which in turn lowers the error rate on various climate-related downstream tasks, such as text classification, sentiment analysis, and fact-checking, by 3.57% to 35.71%.