The application of Natural Language Processing (NLP) to specialized domains, such as the law, has recently received a surge of interest. As many legal services rely on processing and analyzing large collections of documents, automating such tasks with NLP tools emerges as a key challenge. Many popular language models, such as BERT or RoBERTa, are general-purpose models, which have limitations in processing specialized legal terminology and syntax. In addition, legal documents may contain specialized vocabulary from other domains, such as medical terminology in personal injury text. Here, we propose LegalRelectra, a legal-domain language model trained on mixed-domain legal and medical corpora. We show that our model improves over general-domain and single-domain medical and legal language models when processing mixed-domain (personal injury) text. Our training architecture implements the Electra framework, but utilizes Reformer instead of BERT for its generator and discriminator. We show that this improves the model's performance on processing long passages and results in better long-range text comprehension.
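(Illustrative sketch, not the authors' implementation: the abstract describes the Electra-style objective, in which a small generator fills in masked tokens and a discriminator detects which tokens were replaced. The PyTorch module below shows that replaced-token-detection loss with placeholder encoder modules; in LegalRelectra these encoders would be Reformer networks, and the loss weight of 50 follows the original ELECTRA paper and is only an assumption here.)

import torch
import torch.nn as nn

class ElectraStylePretrainer(nn.Module):
    """Minimal replaced-token-detection (RTD) objective in the ELECTRA style.

    `generator` and `discriminator` are stand-ins for encoders returning
    per-token hidden states of size `hidden` (Reformer in LegalRelectra)."""

    def __init__(self, generator, discriminator, vocab_size, hidden, mask_id):
        super().__init__()
        self.generator = generator            # small masked-language model
        self.discriminator = discriminator    # larger encoder
        self.gen_head = nn.Linear(hidden, vocab_size)   # predicts masked tokens
        self.disc_head = nn.Linear(hidden, 1)           # original vs. replaced
        self.mask_id = mask_id

    def forward(self, input_ids, mlm_mask):
        # 1) Generator: predict the tokens at the masked positions.
        masked = input_ids.masked_fill(mlm_mask, self.mask_id)
        gen_logits = self.gen_head(self.generator(masked))
        mlm_loss = nn.functional.cross_entropy(
            gen_logits[mlm_mask], input_ids[mlm_mask])

        # 2) Build the corrupted sequence from generator samples (no gradient).
        with torch.no_grad():
            sampled = gen_logits.argmax(-1)
        corrupted = torch.where(mlm_mask, sampled, input_ids)
        is_replaced = (corrupted != input_ids).float()

        # 3) Discriminator: detect which tokens were replaced.
        disc_logits = self.disc_head(self.discriminator(corrupted)).squeeze(-1)
        rtd_loss = nn.functional.binary_cross_entropy_with_logits(
            disc_logits, is_replaced)

        return mlm_loss + 50.0 * rtd_loss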
Many prior language modeling efforts have shown that pre-training on an in-domain corpus can significantly improve performance on downstream domain-specific NLP tasks. However, the difficulties associated with collecting enough in-domain data might discourage researchers from approaching this pre-training task. In this paper, we conducted a series of experiments by pre-training Bidirectional Encoder Representations from Transformers (BERT) with different sizes of biomedical corpora. The results demonstrate that pre-training on a relatively small amount of in-domain data (4GB) with limited training steps can lead to better performance on downstream domain-specific NLP tasks compared with fine-tuning models pre-trained on general corpora.
Natural language processing in the legal domain has benefited greatly from the emergence of Transformer-based pre-trained language models (PLMs) pre-trained on legal text. There are PLMs trained over European and US legal text, most notably LegalBERT. However, with the rapidly increasing volume of NLP applications over Indian legal documents and the distinguishing characteristics of Indian legal text, it has become necessary to pre-train LMs over Indian legal text as well. In this work, we introduce Transformer-based PLMs pre-trained over a large corpus of Indian legal documents. We also apply these PLMs to several benchmark legal NLP tasks over Indian legal documents, namely legal statute identification from facts, semantic segmentation of court judgments, and court judgment prediction. Our experiments demonstrate the utility of the India-specific PLMs developed in this work.
NLP is a form of artificial intelligence and machine learning concerned with a computer's or machine's ability to understand and interpret human language. Language models are crucial in text analytics and NLP because they allow computers to interpret qualitative input and convert it into quantitative data that can be used in other tasks. In essence, in the context of transfer learning, language models are typically trained on a large general-purpose corpus, referred to as the pre-training stage, and then fine-tuned to a specific downstream task. As a result, pre-trained language models mostly serve as baseline models that incorporate a broad grasp of context and can be further customized for use in new NLP tasks. Most pre-trained models are trained on corpora from general domains such as Twitter, newswire, Wikipedia, and the Web. Off-the-shelf NLP models trained on general text can be inefficient and inaccurate in specialized domains. In this paper, we propose a cybersecurity language model called SecureBERT, which is able to capture the connotations of text in the cybersecurity domain and can therefore be used to automate many important cybersecurity tasks that would otherwise rely on human expertise and tedious manual effort. SecureBERT is trained on a large corpus of cybersecurity text that we collected and preprocessed from a variety of sources in the cybersecurity and general computing domains. Using our proposed methods for tokenization and model weight adjustment, SecureBERT not only preserves the understanding of general English that most pre-trained language models provide, but is also effective when applied to text with cybersecurity implications.
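(Hedged sketch, not necessarily the SecureBERT recipe: the abstract mentions customized tokenization and weight adjustment without spelling out the procedure. One common way to adapt a general-purpose checkpoint's vocabulary to a new domain is to add frequent in-domain terms to the tokenizer and resize the embedding matrix before continued pre-training; the base checkpoint and the added terms below are illustrative placeholders.)

from transformers import AutoTokenizer, AutoModelForMaskedLM

# Illustrative base checkpoint; SecureBERT's actual base model may differ.
tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForMaskedLM.from_pretrained("roberta-base")

# Add frequent in-domain terms so they are no longer split into many sub-words,
# then grow the embedding matrix; the new rows get trained during continued
# masked-language-model pre-training on the cybersecurity corpus.
num_added = tokenizer.add_tokens(["ransomware", "exfiltration", "CVE-2021-44228"])
model.resize_token_embeddings(len(tokenizer))
print(f"added {num_added} tokens; vocabulary size is now {len(tokenizer)}")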
Laws and their interpretations, legal arguments and agreements are typically expressed in writing, leading to the production of vast corpora of legal text. Their analysis, which is at the center of legal practice, becomes increasingly elaborate as these collections grow in size. Natural language understanding (NLU) technologies can be a valuable tool to support legal practitioners in these endeavors. Their usefulness, however, largely depends on whether current state-of-the-art models can generalize across various tasks in the legal domain. To answer this currently open question, we introduce the Legal General Language Understanding Evaluation (LexGLUE) benchmark, a collection of datasets for evaluating model performance across a diverse set of legal NLU tasks in a standardized way. We also provide an evaluation and analysis of several generic and legal-oriented models demonstrating that the latter consistently offer performance improvements across multiple tasks.
Identifying named entities, such as a person, location or organization, in documents can highlight key information for readers. Training Named Entity Recognition (NER) models requires an annotated data set, which can be a time-consuming, labour-intensive task. Nevertheless, there are publicly available NER data sets for general English. Recently there has been interest in developing NER for legal text. However, prior work and experimental results reported here indicate that there is a significant degradation in performance when NER methods trained on a general English data set are applied to legal text. We describe a publicly available legal NER data set, called E-NER, based on legal company filings available from the US Securities and Exchange Commission's EDGAR data set. Training a number of different NER algorithms on the general English CoNLL-2003 corpus but testing on our test collection confirmed significant degradations in accuracy, as measured by the F1-score, of between 29.4% and 60.4%, compared to training and testing on the E-NER collection.
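(Illustrative sketch of the evaluation setup, with toy data: the reported degradation is measured by scoring predicted BIO tags against gold tags with entity-level F1, once for a CoNLL-2003-trained model evaluated on E-NER and once for a model trained and tested on E-NER. The tag sequences below are placeholders, not data from the paper.)

from seqeval.metrics import f1_score

# Toy gold and predicted BIO tag sequences for two sentences from legal filings;
# in the actual study these would come from a model trained on CoNLL-2003
# and evaluated on the E-NER test split.
gold = [["B-ORG", "I-ORG", "O", "B-PER", "O"],
        ["O", "B-LOC", "O", "O"]]
pred = [["B-ORG", "O", "O", "B-PER", "O"],
        ["O", "O", "O", "O"]]

# Entity-level F1, the metric in which the 29.4% to 60.4% degradation is reported.
print(f"entity-level F1: {f1_score(gold, pred):.3f}")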
Privacy policies provide individuals with information about their rights and how their personal information is handled. Natural language understanding (NLU) technologies can help individuals and practitioners better understand the privacy practices described in lengthy and complex documents. However, existing efforts that use NLU technologies are limited because they process the language in a way exclusive to a single task, focusing only on certain privacy practices. To this end, we introduce the Privacy Policy Language Understanding Evaluation (PLUE) benchmark, a multi-task benchmark for evaluating privacy policy language understanding across various tasks. We also collect a large corpus of privacy policies to enable privacy policy domain-specific language model pre-training. We demonstrate that domain-specific pre-training offers performance improvements across all tasks. We release the benchmark to encourage future research in this domain.
The field of natural language processing (NLP) has recently seen a large shift towards using pre-trained language models to solve almost any task. Despite showing great improvements on benchmark datasets for various tasks, these models often perform sub-optimally in non-standard domains such as the clinical domain, where a large gap between pre-training documents and target documents is observed. In this paper, we aim to close this gap with domain-specific training of the language model, and we investigate its effect on a diverse set of downstream tasks and settings. We introduce the pre-trained CLIN-X (Clinical XLM-R) language models and show how CLIN-X outperforms other pre-trained Transformer models by a large margin on ten clinical concept extraction tasks in two languages. In addition, we demonstrate how the Transformer model can be further improved with our proposed task- and language-agnostic model architecture based on ensembles over random splits and cross-sentence context. Our studies in low-resource and transfer settings reveal stable model performance despite a lack of annotated data, even when only 250 labeled sentences are available. Our results highlight the importance of specialized language models such as CLIN-X for concept extraction in non-standard domains, but also show that our task-agnostic model architecture is robust across the tested tasks and languages, so that domain- or task-specific adaptation is not required. The CLIN-X language models and the source code for fine-tuning and transferring the models are publicly available at https://github.com/boschresearch/clin_x/ and on the Hugging Face model hub.
Identifying, classifying, and analyzing arguments in legal discourse has been an important area of research since the inception of the argument mining field. However, there is a major discrepancy between the way natural language processing (NLP) researchers model and annotate arguments in court decisions and the way legal experts understand and analyze legal argumentation. While computational approaches typically simplify arguments into generic premises and claims, arguments in legal research usually exhibit a rich typology that is important for gaining insight into the particular case and the application of law in general. We address this problem and make several substantial contributions to move the field forward. First, we design a new annotation scheme for legal arguments in proceedings of the European Court of Human Rights (ECHR) that is deeply rooted in the theory and practice of legal argumentation research. Second, we compile and annotate a large corpus of 373 court decisions (2.3M tokens and 15K annotated argument spans). Finally, we train an argument mining model that outperforms state-of-the-art models in the legal NLP domain and provide a thorough expert-based evaluation. All datasets and source code are available under open licenses at https://github.com/trusthlt/mining-legal-arguments.
In the era of digital healthcare, the huge volumes of textual information generated every day in hospitals constitute an essential but underused asset that could be exploited with task-specific, fine-tuned biomedical language representation models, improving patient care and management. For such specialized domains, previous research has shown that models stemming from broad-coverage checkpoints can benefit greatly from additional training rounds over large-scale in-domain resources. However, these resources are often unreachable for less-resourced languages like Italian, preventing local medical institutions from employing in-domain adaptation. In order to reduce this gap, our work investigates two accessible approaches to derive biomedical language models in languages other than English, taking Italian as a concrete use-case: one based on neural machine translation of English resources, favoring quantity over quality; the other based on a high-grade, narrow-scoped corpus natively written in Italian, thus preferring quality over quantity. Our study shows that data quantity is a harder constraint than data quality for biomedical adaptation, but the concatenation of high-quality data can improve model performance even when dealing with relatively size-limited corpora. The models published from our investigations have the potential to unlock important research opportunities for Italian hospitals and academia. Finally, the set of lessons learned from the study constitutes valuable insights towards a solution to build biomedical language models that are generalizable to other less-resourced languages and different domain settings.
We present a novel benchmark and associated evaluation metrics for assessing the performance of text anonymization methods. Text anonymization, defined as the task of editing a text document to prevent the disclosure of personal information, currently suffers from a shortage of privacy-oriented annotated text resources, making it difficult to properly evaluate the level of privacy protection offered by various anonymization methods. This paper presents TAB (Text Anonymization Benchmark), a new open-source annotated corpus developed to address this shortage. The corpus comprises 1,268 English-language court cases from the European Court of Human Rights (ECHR), enriched with comprehensive annotations about the personal information appearing in each document, including its semantic category, identifier type, confidential attributes, and co-reference relations. Compared to previous work, the TAB corpus is designed to go beyond traditional de-identification (which is limited to the detection of predefined semantic categories) and explicitly marks which text spans ought to be masked in order to conceal the identity of the person to be protected. Along with presenting the corpus and its annotation layers, we also propose a set of evaluation metrics tailored to measuring the performance of text anonymization, in terms of both privacy protection and utility preservation. We illustrate the use of the benchmark and the proposed metrics by assessing the empirical performance of several baseline text anonymization models. The full corpus, along with its privacy-oriented annotation guidelines, evaluation scripts, and baseline models, is available at:
The search tool of NASA's Astrophysics Data System (ADS) can be quite rich and empowering (e.g., the similar and trending operators), but it does not yet allow researchers to fully leverage semantic search. For example, a query for "results from the Planck mission" should be able to distinguish between all the various meanings of Planck (person, mission, constant, institutions, and more) without further clarification from the user. At ADS, we are applying modern machine learning and natural language processing techniques to our dataset of recent astronomy publications to train astroBERT, a deeply contextual language model based on research from Google. Using astroBERT, we aim to enrich the ADS dataset and improve its discoverability; in particular, we are developing our own named entity recognition tool. We present here our preliminary results and lessons learned.
Bidirectional Encoder Representations from Transformers (BERT) has shown marvelous improvements across various NLP tasks, and successive variants have been proposed to further improve the performance of pre-trained language models. In this paper, we first introduce the whole word masking (wwm) strategy for Chinese BERT, along with a series of Chinese pre-trained language models. We then also propose a simple but effective model called MacBERT, which improves upon RoBERTa in several ways. In particular, we propose a new masking strategy called MLM as correction (Mac). To demonstrate the effectiveness of these models, we create a series of Chinese pre-trained language models as our baselines, including BERT, RoBERTa, ELECTRA, RBT, etc. We carried out extensive experiments on ten Chinese NLP tasks to evaluate the created Chinese pre-trained language models as well as the proposed MacBERT. Experimental results show that MacBERT achieves state-of-the-art performance on many NLP tasks, and we also ablate details with several findings that may help future research. We open-source our pre-trained language models to further facilitate our research community. Resources are available at: https://github.com/ymcui/chinese-bert-wwm
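(Minimal sketch of the whole-word-masking idea mentioned above, not the authors' pre-processing code: when any piece of a word is chosen for masking, every sub-word piece of that word is masked together. The checkpoint and masking rate are illustrative assumptions, and Chinese whole word masking additionally relies on a word segmenter that is not shown here.)

import random
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")  # illustrative checkpoint

def whole_word_mask(text, mask_prob=0.15):
    enc = tokenizer(text)
    word_ids = enc.word_ids()          # maps each sub-word to its source word
    tokens = tokenizer.convert_ids_to_tokens(enc["input_ids"])

    words = {w for w in word_ids if w is not None}
    chosen = {w for w in words if random.random() < mask_prob}

    # Mask every sub-word belonging to a chosen word, not just one piece.
    return [tokenizer.mask_token if w in chosen else tok
            for tok, w in zip(tokens, word_ids)]

print(whole_word_mask("Pre-training language models with whole word masking"))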
Motivation: A perennial challenge for biomedical researchers and clinical practitioners is keeping up with the rapid growth of publications and medical notes. Natural language processing (NLP) has emerged as a promising direction for taming information overload. In particular, large neural language models facilitate transfer learning by pre-training on unlabeled text, as exemplified by the success of BERT models in various NLP applications. However, fine-tuning such models for end tasks remains challenging, especially with small labeled datasets, which are common in biomedical NLP. Results: We conduct a systematic study of fine-tuning stability in biomedical NLP. We show that fine-tuning performance can be sensitive to pre-training settings, especially in low-resource domains. Large models have the potential to attain better performance, but increasing model size also exacerbates fine-tuning instability. We therefore conduct a comprehensive exploration of techniques for addressing fine-tuning instability. We show that these techniques can substantially improve fine-tuning performance for low-resource biomedical NLP applications. Specifically, freezing lower layers is helpful for standard BERT-Base models, while layer-wise decay is more effective for BERT-Large and ELECTRA models. For low-resource text similarity tasks such as BIOSSES, re-initializing the top layers is the optimal strategy. Overall, domain-specific vocabulary and pre-training facilitate more robust models for fine-tuning. Based on these findings, we establish a new state of the art on a wide range of biomedical NLP applications. Availability and implementation: To facilitate progress in biomedical NLP, we release our state-of-the-art pre-trained and fine-tuned models: https://aka.ms/blurb.
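(Hedged sketch of two of the stabilization techniques named above, freezing lower layers and layer-wise learning-rate decay, applied to a BERT-style classifier; the checkpoint, learning rate, decay factor, and number of frozen layers are illustrative values, not the ones from the study.)

import torch
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

# Technique 1: freeze the embeddings and the lowest encoder layers.
for module in [model.bert.embeddings] + list(model.bert.encoder.layer[:4]):
    for p in module.parameters():
        p.requires_grad = False

# Technique 2: layer-wise learning-rate decay -- layers closer to the output
# get larger learning rates than layers closer to the input.
base_lr, decay = 2e-5, 0.9   # illustrative values
layers = [model.bert.embeddings] + list(model.bert.encoder.layer)
param_groups = []
for depth, layer in enumerate(layers):
    params = [p for p in layer.parameters() if p.requires_grad]
    if not params:
        continue  # skip fully frozen layers
    lr = base_lr * (decay ** (len(layers) - 1 - depth))
    param_groups.append({"params": params, "lr": lr})
# The pooler and the task head use the full base learning rate.
param_groups.append({"params": list(model.bert.pooler.parameters())
                               + list(model.classifier.parameters()),
                     "lr": base_lr})
optimizer = torch.optim.AdamW(param_groups)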
Many specialized domains remain untouched by deep learning because large labeled datasets require expensive expert annotators. We address this bottleneck within the legal domain by introducing the Contract Understanding Atticus Dataset (CUAD), a new dataset for legal contract review. CUAD was created with dozens of legal experts from The Atticus Project and consists of over 13,000 annotations. The task is to highlight salient portions of a contract that are important for a human to review. We find that Transformer models have nascent performance, but that this performance is strongly influenced by model design and training dataset size. Despite these promising results, there is still substantial room for improvement. As one of the only large, specialized NLP benchmarks annotated by experts, CUAD can serve as a challenging research benchmark for the broader NLP community.
Since the radiology reports and studies required for clinical practice are written and stored as free-text narratives, it is difficult to extract relevant information for further analysis. In this context, natural language processing (NLP) techniques can facilitate automatic information extraction and the transformation of free-text formats into structured data. In recent years, deep learning (DL)-based models have been adapted for NLP experiments with encouraging results. Although DL models based on artificial neural networks (ANN) and convolutional neural networks (CNN) have significant potential, these models still face some limitations for implementation in clinical practice. Transformers are another new DL architecture that has been increasingly used to improve this process. Therefore, in this study, we propose a Transformer-based fine-grained named entity recognition (NER) architecture for clinical information extraction. We collected 88 abdominal sonography reports in free-text format and annotated them according to the information schema we developed. The Text-to-Text Transfer Transformer (T5) model and SciFive, a pre-trained domain-specific adaptation of the T5 model, were fine-tuned to extract entities and relations and to transform the input into a structured format. Our Transformer-based models in this study outperformed previously applied approaches such as ANN- and CNN-based models, with ROUGE-1, ROUGE-2, ROUGE-L, and BLEU scores of 0.816, 0.668, 0.528, and 0.743, respectively, while providing an interpretable structured report.
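(Illustrative sketch, not the authors' pipeline: a text-to-text model can be prompted with a free-text report and fine-tuned to generate a flat structured target. The checkpoint, prompt format, example sentence, and target format below are assumptions; an untuned model will not actually produce structured output.)

from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Illustrative checkpoint; the study fine-tunes T5 and a domain-adapted variant.
tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

report = "extract entities: The liver is enlarged, measuring 18 cm in span."
inputs = tokenizer(report, return_tensors="pt")
# After fine-tuning on annotated reports, the model would be trained to emit a
# structured target such as "organ: liver | finding: enlarged | size: 18 cm".
output_ids = model.generate(**inputs, max_new_tokens=48)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))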
The academic literature of the social sciences records human civilization and studies human social problems. With the large-scale growth of this literature, ways to quickly find existing research on relevant issues have become an urgent need for researchers. Previous studies, such as SciBERT, have shown that pre-training on domain-specific text can improve the performance of natural language processing tasks in those fields. However, there is no pre-trained language model for the social sciences, so this paper proposes pre-trained models based on a large number of abstracts published in Social Science Citation Index (SSCI) journals. These models, which are available on GitHub (https://github.com/s-t-full-text-knowledge-mining/ssci-bert), show excellent performance on discipline classification and abstract structure-function recognition tasks with social science literature.
Pre-trained Transformers currently dominate most NLP tasks. They impose, however, limits on the maximum input length (512 sub-words in BERT), which are too restrictive in the legal domain. Even sparse-attention models, such as Longformer and BigBird, which increase the maximum input length to 4,096 sub-words, severely truncate texts in three of the six datasets of LexGLUE. Simpler linear classifiers with TF-IDF features can handle texts of any length and require far fewer resources to train and deploy, but are usually outperformed by pre-trained Transformers. We explore two directions to cope with long legal texts: (i) modifying a Longformer warm-started from LegalBERT to handle even longer texts (up to 8,192 sub-words), and (ii) modifying LegalBERT to use TF-IDF representations. The first approach is the best in terms of performance, surpassing a hierarchical version of LegalBERT, which was the previous state of the art in LexGLUE. The second approach leads to computationally more efficient models at the expense of lower performance, but the resulting models still outperform overall a linear SVM with TF-IDF features in long legal document classification.
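(For reference, a minimal scikit-learn sketch of the TF-IDF plus linear SVM baseline that the modified Transformers are compared against; the documents, labels, and vectorizer settings are toy placeholders, not the LexGLUE data or the paper's configuration.)

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline
from sklearn.metrics import f1_score

# Placeholder data; in LexGLUE-style experiments these would be long legal
# documents and their labels from one of the classification datasets.
train_texts = ["This agreement may be terminated by either party upon written notice.",
               "The court holds that the applicant's right to privacy was violated."]
train_labels = [0, 1]
test_texts = ["Termination of this contract requires thirty days written notice."]
test_labels = [0]

# TF-IDF features have no input-length limit, unlike a 512- or 4,096-sub-word
# Transformer context window.
clf = make_pipeline(TfidfVectorizer(sublinear_tf=True), LinearSVC())
clf.fit(train_texts, train_labels)
print("micro-F1:", f1_score(test_labels, clf.predict(test_texts), average="micro"))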
Pre-training large transformer models with in-domain data improves domain adaptation and helps gain performance on domain-specific downstream tasks. However, sharing models pre-trained on potentially sensitive data is prone to adversarial privacy attacks. In this paper, we ask to what extent we can guarantee the privacy of pre-training data and, at the same time, achieve better downstream performance on legal tasks without the need for additional labeled data. We extensively experiment with scalable self-supervised learning of transformer models under the formal paradigm of differential privacy and show that under specific training configurations we can improve downstream performance without sacrificing privacy protection for the in-domain data. Our main contribution is utilizing differential privacy for large-scale pre-training of transformer language models in the legal NLP domain, which, to the best of our knowledge, has not been addressed before.
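(A heavily simplified sketch of the differentially private SGD idea underlying such training, per-example gradient clipping followed by Gaussian noise; it is not the paper's implementation, and real pre-training would use a purpose-built library and proper privacy accounting. All function names and parameter values here are illustrative.)

import torch

def dp_sgd_step(model, loss_fn, batch_x, batch_y, optimizer,
                clip_norm=1.0, noise_multiplier=1.0):
    """One DP-SGD step: clip each per-example gradient, sum, add noise, average."""
    params = [p for p in model.parameters() if p.requires_grad]
    summed = [torch.zeros_like(p) for p in params]

    for x, y in zip(batch_x, batch_y):                       # per-example gradients
        loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
        grads = torch.autograd.grad(loss, params)
        total_norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = torch.clamp(clip_norm / (total_norm + 1e-6), max=1.0)
        for s, g in zip(summed, grads):
            s.add_(g * scale)                                 # clipped contribution

    n = len(batch_x)
    for p, s in zip(params, summed):
        noise = torch.normal(mean=0.0, std=noise_multiplier * clip_norm,
                             size=p.shape, device=p.device)
        p.grad = (s + noise) / n                              # noisy averaged gradient
    optimizer.step()
    optimizer.zero_grad()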
Time is an important aspect of documents and is used in a range of NLP and IR tasks. In this work, we investigate methods for incorporating temporal information during pre-training to further improve performance on time-related tasks. Compared with BERT, which uses synchronic document collections (BooksCorpus and English Wikipedia) as its training corpora, we use a long-span, temporal collection of news articles to build word representations. We introduce TimeBERT, a novel language representation model trained on a temporal collection of news articles via two new pre-training tasks, which harness two distinct temporal signals to construct time-aware language representations. Experimental results show that TimeBERT consistently outperforms BERT and other existing pre-trained models on different downstream NLP tasks and applications for which time is important.