Although Transformer language models (LMs) are the state of the art for information extraction, long text introduces computational challenges that require suboptimal preprocessing steps or alternative model architectures. Sparse-attention LMs can represent longer sequences and overcome these performance hurdles. However, it remains unclear how to explain the predictions of such models, since not all tokens attend to each other in the self-attention layers, and long sequences pose computational challenges for interpretability algorithms whose runtime depends on document length. These challenges are acute in medical settings, where documents can be very long and machine learning (ML) models must be auditable and trustworthy. We introduce a novel Masked Sampling Procedure (MSP) to identify the text blocks that contribute to a prediction, apply MSP to the task of predicting diagnoses from medical text, and validate our approach through a blind review by two clinicians. Our method identifies about 1.7x more clinically informative text blocks than the previous state of the art, runs up to 100x faster, and can be used to generate important phrase pairs. MSP is particularly well suited to long LMs but can be applied to any text classifier. We provide a general implementation of MSP.
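A minimal sketch of the masked-sampling idea described above, assuming a generic text classifier: random blocks of the document are masked, the classifier is re-run, and each block is scored by the average drop it causes in the probability of the target label. The function and parameter names (`classify_fn`, `block_size`, `mask_prob`) are illustrative, not the authors' released implementation.

```python
import random
from collections import defaultdict

def masked_block_importance(tokens, classify_fn, target_label,
                            block_size=10, mask_token="[MASK]",
                            n_samples=200, mask_prob=0.1, seed=0):
    """Score each block of `tokens` by the average drop in the classifier's
    probability for `target_label` when the block is masked out.

    classify_fn: callable(list_of_tokens) -> dict mapping label -> probability
    Returns a dict {block_index: average probability drop}.
    """
    rng = random.Random(seed)
    blocks = [tokens[i:i + block_size] for i in range(0, len(tokens), block_size)]
    base_prob = classify_fn(tokens)[target_label]

    drop_sum, drop_cnt = defaultdict(float), defaultdict(int)
    for _ in range(n_samples):
        masked = {i for i in range(len(blocks)) if rng.random() < mask_prob}
        if not masked:
            continue
        perturbed = []
        for i, block in enumerate(blocks):
            perturbed.extend([mask_token] * len(block) if i in masked else block)
        prob = classify_fn(perturbed)[target_label]
        for i in masked:                       # attribute the drop to every masked block
            drop_sum[i] += base_prob - prob
            drop_cnt[i] += 1

    return {i: drop_sum[i] / drop_cnt[i] for i in drop_cnt}
```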
Transformer models have achieved great success across many NLP problems. However, previous studies in automated ICD coding concluded that these models fail to outperform some of the earlier solutions such as CNN-based models. In this paper we challenge this conclusion. We present a simple and scalable method to process long text with the existing transformer models such as BERT. We show that this method significantly improves the previous results reported for transformer models in ICD coding, and is able to outperform one of the prominent CNN-based methods.
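The abstract does not spell out the exact procedure, but a common way to realize this kind of chunking with an off-the-shelf BERT is sketched below: split the document into 510-token chunks, encode each chunk, and pool the chunk [CLS] vectors into one document vector for the classifier. The mean-pooling choice and the `bert-base-uncased` checkpoint are assumptions, not the paper's setup.

```python
import torch
from transformers import AutoTokenizer, AutoModel

MODEL_NAME = "bert-base-uncased"          # assumption: any BERT-style checkpoint works here
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
encoder = AutoModel.from_pretrained(MODEL_NAME)
encoder.eval()

@torch.no_grad()
def encode_long_document(text, chunk_len=510):
    """Split a long document into 510-token chunks (leaving room for [CLS]/[SEP]),
    encode each chunk with BERT, and mean-pool the chunk [CLS] vectors into a
    single document vector for a downstream classification head."""
    ids = tokenizer(text, add_special_tokens=False)["input_ids"]
    chunk_vecs = []
    for start in range(0, max(len(ids), 1), chunk_len):
        chunk = ids[start:start + chunk_len]
        input_ids = torch.tensor([[tokenizer.cls_token_id] + chunk + [tokenizer.sep_token_id]])
        out = encoder(input_ids=input_ids)
        chunk_vecs.append(out.last_hidden_state[:, 0])     # [CLS] vector of this chunk
    return torch.cat(chunk_vecs, dim=0).mean(dim=0)        # document-level representation

doc_vector = encode_long_document("chief complaint: shortness of breath ... " * 200)
print(doc_vector.shape)                                    # torch.Size([768])
```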
We propose a three-level Hierarchical Transformer Network for modeling long-term dependencies across clinical notes for the purpose of patient-level prediction. The network is equipped with three levels of Transformer-based encoders that progressively learn from words to sentences, from sentences to notes, and finally from notes to patients. The first level, from words to sentences, directly applies a pretrained BERT model as a fully trainable component, while the second and third levels implement stacks of Transformer-based encoders before the final patient representation is fed into a classification layer for clinical prediction. Compared with a conventional BERT model, our model increases the maximum input length from 512 tokens to much longer sequences suitable for modeling large numbers of clinical notes. We empirically examine different hyperparameters to identify the best trade-off under given computational-resource constraints. Our experimental results on the MIMIC-III dataset across different prediction tasks demonstrate that the proposed Hierarchical Transformer Network outperforms previous state-of-the-art models, including but not limited to BigBird.
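A rough sketch, under assumed hyperparameters, of the three-level hierarchy described above: a pretrained BERT maps words to sentence vectors, a Transformer encoder maps sentences to note vectors, and a second Transformer encoder maps notes to a patient vector that feeds the classification layer. Layer counts, pooling choices, and dimensions are illustrative, not the paper's configuration.

```python
import torch
import torch.nn as nn
from transformers import AutoModel

class HierarchicalPatientEncoder(nn.Module):
    """Words -> sentences (pretrained BERT), sentences -> notes, notes -> patient."""

    def __init__(self, bert_name="bert-base-uncased", hidden=768, n_layers=2, n_labels=2):
        super().__init__()
        self.sent_encoder = AutoModel.from_pretrained(bert_name)      # level 1: word -> sentence
        note_layer = nn.TransformerEncoderLayer(d_model=hidden, nhead=8, batch_first=True)
        patient_layer = nn.TransformerEncoderLayer(d_model=hidden, nhead=8, batch_first=True)
        self.note_encoder = nn.TransformerEncoder(note_layer, num_layers=n_layers)       # level 2
        self.patient_encoder = nn.TransformerEncoder(patient_layer, num_layers=n_layers) # level 3
        self.classifier = nn.Linear(hidden, n_labels)

    def forward(self, input_ids, attention_mask):
        # input_ids / attention_mask: (n_notes, n_sentences, n_tokens) for one patient
        n_notes, n_sents, n_toks = input_ids.shape
        sent_vecs = self.sent_encoder(
            input_ids=input_ids.view(-1, n_toks),
            attention_mask=attention_mask.view(-1, n_toks),
        ).last_hidden_state[:, 0]                                     # one vector per sentence
        note_vecs = self.note_encoder(sent_vecs.view(n_notes, n_sents, -1)).mean(dim=1)
        patient_vec = self.patient_encoder(note_vecs.unsqueeze(0)).mean(dim=1)
        return self.classifier(patient_vec)                           # (1, n_labels)
```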
Natural language processing (NLP) is a field of artificial intelligence that applies information technology to process human language, understand it to a certain degree, and use it in various applications. The field has developed rapidly in recent years and now employs modern variants of deep neural networks to extract relevant patterns from large text corpora. The main objective of this work is to survey the recent use of NLP in the field of pharmacology. As our work shows, NLP is a highly relevant information-extraction and processing approach for pharmacology. It has been used extensively, from intelligent searches through thousands of medical documents to finding traces of adverse drug interactions in social media. We split our coverage into five categories to survey modern NLP methodology, common tasks, relevant text data, knowledge bases, and useful programming libraries. We divide each of these five categories into appropriate subcategories, describe their main properties and ideas, and summarize them in tabular form. The resulting survey presents a comprehensive overview of the field, useful to practitioners and interested observers.
Automatically assigning diagnosis codes to electronic health records (EHRs) remains a challenge for the NLP community. State-of-the-art methods treat this problem as multi-label classification and propose various architectures to model it. However, these systems do not leverage the strong performance of pretrained language models, which achieve excellent results on natural language understanding tasks. Prior work has shown that commonly used fine-tuning schemes perform poorly on this task. This paper therefore aims to analyze the causes of that underperformance and to develop a framework for automatic ICD coding with pretrained language models. We identify three main issues through experiments: 1) a large label space, 2) long input sequences, and 3) a domain mismatch between pretraining and fine-tuning. We propose PLM-ICD, a framework that tackles these challenges with various strategies. Experimental results show that the proposed framework overcomes the challenges and achieves state-of-the-art performance on multiple metrics on the benchmark MIMIC data. The source code is available at https://github.com/miulab/plm-icd
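One strategy commonly used to cope with the large label space on top of a pretrained LM is label-wise attention, where every code learns its own attention distribution over token states. The sketch below shows that mechanism in isolation; the label count and hidden size are placeholders, and this is not the released PLM-ICD code.

```python
import torch
import torch.nn as nn

class LabelWiseAttention(nn.Module):
    """Each label learns its own attention distribution over token states and its
    own scoring vector; placeholder sizes, for illustration only."""

    def __init__(self, hidden=768, n_labels=50):
        super().__init__()
        self.query = nn.Linear(hidden, n_labels, bias=False)   # one attention query per label
        self.score = nn.Linear(hidden, n_labels, bias=False)   # one scoring vector per label

    def forward(self, token_states, attention_mask):
        # token_states: (batch, seq_len, hidden); attention_mask: (batch, seq_len)
        att = self.query(token_states)                                   # (batch, seq_len, labels)
        att = att.masked_fill(attention_mask.unsqueeze(-1) == 0, -1e4)   # ignore padding
        att = torch.softmax(att, dim=1)
        label_ctx = torch.einsum("btl,bth->blh", att, token_states)      # label-specific contexts
        return (label_ctx * self.score.weight.unsqueeze(0)).sum(-1)      # (batch, labels) logits

head = LabelWiseAttention()
states = torch.randn(2, 3072, 768)            # e.g. token states concatenated across chunks
mask = torch.ones(2, 3072, dtype=torch.long)
print(head(states, mask).shape)               # torch.Size([2, 50]); one logit per ICD code
```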
Multi-label learning predicts a subset of labels from a given label set for an unseen instance while taking label correlations into account. A known challenge in multi-label classification is the long-tailed distribution of labels. Many studies focus on improving the overall predictions of the model and thus do not prioritize tail-end labels. Improving tail-end label prediction in multi-label classification of medical text enables a better understanding of patients and improved care: the knowledge gained from one or more infrequent labels can influence the reasoning behind medical decisions and treatment plans. This study presents variations of concatenated domain-specific language models, including multiple biomedical transformers, with two primary goals: first, to improve the F1 scores of infrequent labels in multi-label problems, especially long-tail labels; second, to handle long medical text and multi-source electronic health records (EHRs), a challenging task for standard transformers designed to work on short input sequences. An important contribution of this research is a new state-of-the-art (SOTA) result obtained with Transformer-XL for predicting medical codes. A variety of experiments are performed on the Medical Information Mart for Intensive Care (MIMIC-III) database. The results show that concatenated biomedical transformers outperform standard transformers in terms of overall micro and macro F1 scores and the individual F1 scores of tail-end labels, while requiring lower training times than existing transformer-based solutions for long input sequences.
Transfer learning, where a model is first pre-trained on a data-rich task before being finetuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts all text-based language problems into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled data sets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new "Colossal Clean Crawled Corpus", we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our data set, pre-trained models, and code.
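A small usage sketch of the text-to-text framing with the public `t5-small` checkpoint from the Hugging Face hub: different tasks are distinguished only by a textual prefix, and the answer is produced as text. The two prefixes follow the conventions described for T5; treat this as illustrative rather than a reproduction of the paper's training setup.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Every task is phrased as "input text -> output text"; the task is selected by a prefix.
tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

inputs = tokenizer(
    ["translate English to German: The house is wonderful.",
     "cola sentence: The course is jumping well."],   # grammatical-acceptability task as text
    padding=True, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.batch_decode(out, skip_special_tokens=True))
```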
We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers. Unlike recent language representation models (Peters et al., 2018a; Radford et al., 2018), BERT is designed to pretrain deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT model can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications. BERT is conceptually simple and empirically powerful. It obtains new state-of-the-art results on eleven natural language processing tasks, including pushing the GLUE score to 80.5% (7.7% point absolute improvement), MultiNLI accuracy to 86.7% (4.6% absolute improvement), SQuAD v1.1 question answering Test F1 to 93.2 (1.5 point absolute improvement) and SQuAD v2.0 Test F1 to 83.1 (5.1 point absolute improvement).
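A minimal fine-tuning sketch illustrating the "one additional output layer" point: `AutoModelForSequenceClassification` places a single classification head on top of the pretrained encoder, and a standard training loop (omitted here) updates all weights. The example sentences and labels are made up.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# The classification head is a single linear layer on top of the pooled [CLS]
# representation; everything else is the pretrained encoder.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

batch = tokenizer(["The drug was well tolerated.", "Severe adverse reaction observed."],
                  padding=True, truncation=True, return_tensors="pt")
labels = torch.tensor([0, 1])                      # made-up labels for illustration

outputs = model(**batch, labels=labels)            # returns loss and logits
outputs.loss.backward()                            # an optimizer step would follow in a real loop
print(outputs.logits.shape)                        # torch.Size([2, 2])
```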
Background: Encouraged by the success of pretrained Transformer models in many natural language processing tasks, their use for International Classification of Diseases (ICD) coding tasks is now actively being explored. In this study, we investigate three types of Transformer-based models, aiming to address the extreme label set and long text classification challenges that are posed by automated ICD coding tasks. Methods: The Transformer-based model PLM-ICD achieved the current state-of-the-art (SOTA) performance on the ICD coding benchmark dataset MIMIC-III. It was chosen as our baseline model to be further optimised. XR-Transformer, the new SOTA model in the general extreme multi-label text classification domain, and XR-LAT, a novel adaptation of the XR-Transformer model, were also trained on the MIMIC-III dataset. XR-LAT is a recursively trained model chain on a predefined hierarchical code tree with label-wise attention, knowledge transferring and dynamic negative sampling mechanisms. Results: Our optimised PLM-ICD model, which was trained with longer total and chunk sequence lengths, significantly outperformed the current SOTA PLM-ICD model, and achieved the highest micro-F1 score of 60.8%. The XR-Transformer model, although SOTA in the general domain, did not perform well across all metrics. The best XR-LAT based model obtained results that were competitive with the current SOTA PLM-ICD model, including improving the macro-AUC by 2.1%. Conclusion: Our optimised PLM-ICD model is the new SOTA model for automated ICD coding on the MIMIC-III dataset, while our novel XR-LAT model performs competitively with the previous SOTA PLM-ICD model.
Pretrained transformer models have achieved state-of-the-art results in many tasks and benchmarks recently. Many state-of-the-art Language Models (LMs), however, do not scale well above the threshold of 512 input tokens. In specialized domains though (such as legal, scientific or biomedical), models often need to process very long text (sometimes well above 10000 tokens). Even though many efficient transformers have been proposed (such as Longformer, BigBird or FNet), so far, only very few such efficient models are available for specialized domains. Additionally, since the pretraining process is extremely costly in general - but even more so as the sequence length increases - it is often only in reach of large research labs. One way of making pretraining cheaper is the Replaced Token Detection (RTD) task, by providing more signal during training, since the loss can be computed over all tokens. In this work, we train Longformer models with the efficient RTD task on legal data to showcase that pretraining efficient LMs is possible using much less compute. We evaluate the trained models on challenging summarization tasks requiring the model to summarize long texts to show to what extent the models can achieve good performance on downstream tasks. We find that both the small and base models outperform their baselines on the in-domain BillSum and out-of-domain PubMed tasks in their respective parameter range. We publish our code and models for research purposes.
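A sketch of the Replaced Token Detection loss mentioned above, assuming a user-supplied token-level discriminator: every position is labeled as original or replaced, so the loss is computed over all tokens rather than only the masked ones. The generator that produces the corrupted sequence is omitted for brevity.

```python
import torch
import torch.nn as nn

def rtd_loss(discriminator, input_ids, corrupted_ids, attention_mask):
    """Replaced Token Detection: the discriminator labels every position as original
    (0) or replaced (1), so the loss uses all tokens rather than the ~15% that MLM
    masks. `discriminator` is any token-level binary classifier returning one logit
    per position; the generator producing `corrupted_ids` is not shown here."""
    labels = (input_ids != corrupted_ids).float()              # 1 where a token was replaced
    logits = discriminator(corrupted_ids, attention_mask)      # (batch, seq_len)
    per_token = nn.functional.binary_cross_entropy_with_logits(logits, labels, reduction="none")
    mask = attention_mask.float()
    return (per_token * mask).sum() / mask.sum()               # average over real tokens only
```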
Automatic International Classification of Diseases (ICD) coding aims to assign multiple ICD codes to a medical note with an average of 3,000+ tokens. This task is challenging due to the high-dimensional space of multi-label assignment (155,000+ ICD code candidates) and the long-tail challenge: many ICD codes are infrequently assigned, yet infrequent ICD codes are clinically important. This study addresses the long-tail challenge by transforming this multi-label classification task into an autoregressive generation task. Specifically, we first introduce a novel pretraining objective to generate free-text diagnoses and procedures using the SOAP structure, the medical logic physicians use for note documentation. Second, instead of directly predicting the high-dimensional space of ICD codes, our model generates the lower-dimensional text descriptions, which then infer ICD codes. Third, we designed a novel prompt template for multi-label classification. We evaluate our Generation with Prompt model on the full code assignment benchmark (MIMIC-III-full) and the few-shot ICD code assignment benchmark (MIMIC-III-few). Experiments on MIMIC-III-few show that our model achieves a macro F1 of 30.2, which substantially outperforms the previous MIMIC-III-full SOTA model (macro F1 4.3) and the model specifically designed for the few/zero-shot setting (macro F1 18.7). Finally, we design a novel ensemble learner, a cross-attention reranker with prompts, to integrate previous SOTA and our best few-shot coding predictions. Experiments on MIMIC-III-full show that our ensemble learner substantially improves both macro and micro F1, from 10.4 to 14.6 and from 58.2 to 59.1, respectively.
The classification of mutual funds or exchange-traded funds (ETFs) has long served financial analysts for peer analysis, with uses ranging from competitor analysis to quantifying portfolio diversification. Classification methodologies typically rely on fund-composition data in structured formats extracted from Form N-1A filings. Here, we initiate a study of learning the categorization system directly from the unstructured data depicted in the form, using natural language processing (NLP). Taking only the investment-strategy description reported in the form as input and the Lipper Global Category as the target variable, and using various NLP models, we show that the categorization system can indeed be learned with high accuracy. We discuss the implications and applications of our findings, as well as the limitations of existing pretrained architectures when applying them to learn fund categorization.
Laws and their interpretations, legal arguments and agreements are typically expressed in writing, leading to the production of vast corpora of legal text. Their analysis, which is at the center of legal practice, becomes increasingly elaborate as these collections grow in size. Natural language understanding (NLU) technologies can be a valuable tool to support legal practitioners in these endeavors. Their usefulness, however, largely depends on whether current state-of-the-art models can generalize across various tasks in the legal domain. To answer this currently open question, we introduce the Legal General Language Understanding Evaluation (LexGLUE) benchmark, a collection of datasets for evaluating model performance across a diverse set of legal NLU tasks in a standardized way. We also provide an evaluation and analysis of several generic and legal-oriented models demonstrating that the latter consistently offer performance improvements across multiple tasks.
Transformers-based models, such as BERT, have been one of the most successful deep learning models for NLP. Unfortunately, one of their core limitations is the quadratic dependency (mainly in terms of memory) on the sequence length due to their full attention mechanism. To remedy this, we propose, BIGBIRD, a sparse attention mechanism that reduces this quadratic dependency to linear. We show that BIGBIRD is a universal approximator of sequence functions and is Turing complete, thereby preserving these properties of the quadratic, full attention model. Along the way, our theoretical analysis reveals some of the benefits of having O(1) global tokens (such as CLS), that attend to the entire sequence as part of the sparse attention mechanism. The proposed sparse attention can handle sequences of length up to 8x of what was previously possible using similar hardware. As a consequence of the capability to handle longer context, BIGBIRD drastically improves performance on various NLP tasks such as question answering and summarization. We also propose novel applications to genomics data.
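A toy construction, not the BigBird library code, of the block-sparse attention pattern the abstract describes: a sliding window over neighbouring blocks, a few random blocks per row, and O(1) global blocks that attend to and are attended by everything. Block size, window width, and the number of random blocks below are illustrative, not the library defaults.

```python
import torch

def block_sparse_attention_mask(seq_len, block=64, n_global=1, window=3, n_random=2, seed=0):
    """Build a block-level attention pattern: a sliding window of neighbouring blocks,
    a few random blocks per row, and O(1) global blocks (e.g. the block holding [CLS])
    that attend to and are attended by everything. True = attention allowed."""
    g = torch.Generator().manual_seed(seed)
    n_blocks = (seq_len + block - 1) // block
    mask = torch.zeros(n_blocks, n_blocks, dtype=torch.bool)

    for i in range(n_blocks):
        lo, hi = max(0, i - window // 2), min(n_blocks, i + window // 2 + 1)
        mask[i, lo:hi] = True                                              # sliding window
        mask[i, torch.randperm(n_blocks, generator=g)[:n_random]] = True   # random blocks

    mask[:n_global, :] = True                                   # global blocks see everything
    mask[:, :n_global] = True                                   # and everything sees them
    return mask

print(block_sparse_attention_mask(4096).float().mean())   # fraction of block pairs attended
```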
There is an increasing interest in developing artificial intelligence (AI) systems to process and interpret electronic health records (EHRs). Natural language processing (NLP) powered by pretrained language models is the key technology for medical AI systems utilizing clinical narratives. However, there are few clinical language models, the largest of which trained in the clinical domain is comparatively small at 110 million parameters (compared with billions of parameters in the general domain). It is not clear how large clinical language models with billions of parameters can help medical AI systems utilize unstructured EHRs. In this study, we develop from scratch a large clinical language model - GatorTron - using >90 billion words of text (including >82 billion words of de-identified clinical text) and systematically evaluate it on 5 clinical NLP tasks including clinical concept extraction, medical relation extraction, semantic textual similarity, natural language inference (NLI), and medical question answering (MQA). We examine how (1) scaling up the number of parameters and (2) scaling up the size of the training data could benefit these NLP tasks. GatorTron models scale up the clinical language model from 110 million to 8.9 billion parameters and improve 5 clinical NLP tasks (e.g., 9.6% and 9.5% improvement in accuracy for NLI and MQA), which can be applied to medical AI systems to improve healthcare delivery. The GatorTron models are publicly available at: https://catalog.ngc.nvidia.com/orgs/nvidia/teams/clara/models/gatortron_og.
Modeling law search and retrieval as prediction problems has recently emerged as a leading approach in legal intelligence. Focusing on the law article retrieval task, we present a deep learning framework named LamBERTa, which is designed for civil-law codes and is specifically trained on the Italian Civil Code. To our knowledge, this is the first study proposing an advanced approach to law article prediction for the Italian legal system based on a BERT (Bidirectional Encoder Representations from Transformers) learning framework, which has recently attracted increasing attention among deep learning methods and has shown outstanding effectiveness in several natural language processing and learning tasks. We define LamBERTa models by fine-tuning an Italian pretrained BERT on the articles of the Italian Civil Code, or portions thereof, casting law article retrieval as a classification task. A key aspect of our LamBERTa framework is that we conceived it to address an extreme classification scenario, characterized by a large number of classes, a few-shot learning problem, and the lack of test-query benchmarks for Italian legal prediction tasks. To address these issues, we define different methods for the unsupervised labeling of law articles, which can in principle be applied to any legal system. We provide insights into the explainability and interpretability of our LamBERTa models, and we present an extensive experimental analysis over query templates for single-label as well as multi-label evaluation tasks. Empirical evidence shows the effectiveness of LamBERTa and its superiority over widely used deep-learning text classifiers and a few-shot learner conceived for an attribute-aware prediction task.
Deep Learning and Machine Learning based models have become extremely popular in text processing and information retrieval. However, the non-linear structures present inside the networks make these models largely inscrutable. A significant body of research has focused on increasing the transparency of these models. This article provides a broad overview of research on the explainability and interpretability of natural language processing and information retrieval methods. More specifically, we survey approaches that have been applied to explain word embeddings, sequence modeling, attention modules, transformers, BERT, and document ranking. The concluding section suggests some possible directions for future research on this topic.
Automation opportunities in healthcare can improve clinician throughput. One such example is assistive tools that document diagnosis codes while clinicians write notes. We study the automation of medical code prediction using curriculum learning, a training strategy for machine learning models that gradually increases the hardness of the learning tasks from easy to difficult. One challenge in curriculum learning is the design of the curriculum, i.e., designing tasks of gradually increasing difficulty. We propose Hierarchical Curriculum Learning (HiCu), an algorithm that uses graph structure in the output space to design curricula for multi-label classification. We create curricula for multi-label classification models that predict ICD diagnosis and procedure codes from natural-language descriptions of patients. By leveraging the hierarchy of ICD codes, which groups diagnosis codes based on the various organ systems of the human body, we find that our proposed curricula improve the generalization of neural-network-based prediction models across recurrent, convolutional, and transformer-based architectures. Our code is available at https://github.com/wren93/hicu-icd.
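A rough sketch of the hierarchy-driven curriculum idea, assuming ICD-9-style codes where truncating a code yields its coarser parent category: training first targets coarse categories and later the full codes. The helper names and the two-level curriculum are assumptions, not the released HiCu code.

```python
def coarsen(code, level):
    """Truncate an ICD-9-style code to a coarser level, e.g. '428.21' -> '428' at level 0."""
    return code.split(".")[0] if level == 0 else code

def build_curriculum(examples, n_levels=2):
    """Yield (level, relabelled examples) from coarse category codes to full codes."""
    for level in range(n_levels):
        relabelled = [(text, sorted({coarsen(c, level) for c in codes}))
                      for text, codes in examples]
        yield level, relabelled

examples = [("acute on chronic systolic heart failure ...", ["428.21", "428.0"])]
for level, data in build_curriculum(examples):
    print(level, data[0][1])        # level 0: ['428']; level 1: ['428.0', '428.21']
    # a real run would train for some epochs on `data` before moving to the next level
```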
Unstructured data, especially text, continues to grow rapidly across domains. In the financial domain in particular, there is a large amount of accumulated unstructured financial data, such as the textual disclosure documents that companies submit periodically to regulators such as the Securities and Exchange Commission (SEC). These documents are typically long and tend to contain valuable information about a company's performance. It is therefore of great interest to learn predictive models from these long textual documents, especially for forecasting numerical key performance indicators (KPIs). Despite the remarkable progress of pretrained language models (LMs), which learn from massive amounts of textual data, they still struggle to represent long documents effectively. Our work addresses this critical need: how to develop better models to extract useful information from long textual documents and learn effective features that can leverage soft financial and risk information for text regression (prediction) tasks. In this paper, we propose and implement a deep learning framework that splits long documents into chunks, uses pretrained LMs to process and aggregate the chunks into vector representations, and then applies self-attention to extract valuable document-level features. We evaluate our model on 10-K public disclosure reports of US banks, as well as on another dataset of reports filed by US companies. Overall, our framework outperforms strong baseline methods for textual modeling as well as a baseline regression model that uses only numerical data. Our work provides better insight into how leveraging pretrained, domain-specific, and fine-tuned long-input LMs to represent long documents can improve the quality of textual representations and thereby help improve predictive analyses.
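A sketch of the aggregation stage described above, assuming chunk vectors have already been produced by a pretrained LM (for instance with a chunked encoder like the one sketched earlier in this section): a small self-attention layer pools the chunk vectors into document-level features for a numeric regression head. Dimensions and pooling are illustrative.

```python
import torch
import torch.nn as nn

class ChunkAggregatingRegressor(nn.Module):
    """Self-attention over pre-computed chunk vectors, followed by a numeric
    regression head for a KPI / risk target. Dimensions are illustrative."""

    def __init__(self, hidden=768, n_heads=8):
        super().__init__()
        self.chunk_attention = nn.MultiheadAttention(hidden, n_heads, batch_first=True)
        self.regressor = nn.Linear(hidden, 1)

    def forward(self, chunk_vectors):
        # chunk_vectors: (batch, n_chunks, hidden), produced by a pretrained LM
        attended, _ = self.chunk_attention(chunk_vectors, chunk_vectors, chunk_vectors)
        return self.regressor(attended.mean(dim=1)).squeeze(-1)

model = ChunkAggregatingRegressor()
fake_chunks = torch.randn(4, 20, 768)      # 4 filings, 20 chunks each
print(model(fake_chunks).shape)            # torch.Size([4]); one predicted value per filing
```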
With ever-increasing amounts of textual data available, the development of algorithms capable of automatically analyzing, categorizing, and summarizing these data has become a necessity. In this study we present a novel algorithm for keyword identification, i.e., the extraction of one- or multi-word phrases representing key aspects of a given document, called Transformer-based Neural Tagger for Keyword IDentification (TNT-KID). By adapting the transformer architecture to the specific task at hand and leveraging a language model pretrained on a domain-specific corpus, the model is able to overcome deficiencies of both supervised and unsupervised state-of-the-art approaches by offering competitive and robust performance across a variety of different datasets, while requiring only a fraction of the manually labeled data required by the best-performing systems. This study also offers a thorough error analysis with valuable insights into the inner workings of the model, and an ablation study measuring the influence of specific components of the keyword-identification workflow on overall performance.
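Keyword identification is cast here as token tagging; the sketch below shows that formulation with a generic pretrained encoder and a binary per-token head. TNT-KID uses its own domain-pretrained language model rather than BERT, so treat this purely as an illustration of the tagging setup, not the authors' architecture.

```python
import torch.nn as nn
from transformers import AutoModel

class KeywordTagger(nn.Module):
    """Keyword identification as token tagging: a binary 'part of a keyword' logit per
    token on top of a pretrained encoder (a stand-in for TNT-KID's own language model)."""

    def __init__(self, name="bert-base-uncased"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(name)
        self.tag = nn.Linear(self.encoder.config.hidden_size, 2)   # O vs. KEYWORD

    def forward(self, input_ids, attention_mask):
        states = self.encoder(input_ids=input_ids,
                              attention_mask=attention_mask).last_hidden_state
        return self.tag(states)                                    # (batch, seq_len, 2)
```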