Objective: We aim to develop an open-source natural language processing (NLP) package, SODA (i.e., SOcial DeterminAnts), with pre-trained transformer models to extract social determinants of health (SDoH) for cancer patients, examine the generalizability of SODA to a new disease domain (i.e., opioid use), and evaluate the extraction rate of SDoH using cancer populations. Methods: We identified SDoH categories and attributes and developed an SDoH corpus using clinical notes from a general cancer cohort. We compared four transformer-based NLP models to extract SDoH, examined the generalizability of the NLP models to a cohort of patients prescribed opioids, and explored customization strategies to improve performance. We applied the best NLP model to extract 19 categories of SDoH from the breast (n=7,971), lung (n=11,804), and colorectal cancer (n=6,240) cohorts. Results and Conclusion: We developed a corpus of notes from 629 cancer patients with annotations of 13,193 SDoH concepts/attributes from 19 categories of SDoH. The Bidirectional Encoder Representations from Transformers (BERT) model achieved the best strict/lenient F1 scores of 0.9216 and 0.9441 for SDoH concept extraction, and 0.9617 and 0.9626 for linking attributes to SDoH concepts. Fine-tuning the NLP models using new annotations from opioid use patients improved the strict/lenient F1 scores from 0.8172/0.8502 to 0.8312/0.8679. The extraction rates among the 19 categories of SDoH varied greatly: 10 SDoH could be extracted from >70% of cancer patients, but 9 SDoH had a low extraction rate (<70% of cancer patients). The SODA package with pre-trained transformer models is publicly available at https://github.com/uf-hobiinformatics-lab/SDoH_SODA.
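For illustration, a minimal sketch of applying a pre-trained transformer to SDoH concept extraction as token classification is shown below; the model path is a placeholder (not the released SODA checkpoint) and the clinical note is synthetic.

```python
# Minimal sketch of transformer-based SDoH concept extraction as token
# classification. The model path is a placeholder (hypothetical), not the
# released SODA checkpoint; the clinical note is synthetic.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="path/to/sdoh-bert-checkpoint",   # placeholder for a fine-tuned model
    aggregation_strategy="simple",          # merge word pieces into entity spans
)

note = "Patient is a retired teacher who lives alone and quit smoking 10 years ago."
for span in ner(note):
    # each span carries an SDoH category label (e.g., Employment, Tobacco_use)
    print(span["entity_group"], span["word"], round(span["score"], 3))
```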
There is an increasing interest in developing artificial intelligence (AI) systems to process and interpret electronic health records (EHRs). Natural language processing (NLP) powered by pretrained language models is the key technology for medical AI systems utilizing clinical narratives. However, there are few clinical language models, the largest of which trained in the clinical domain is comparatively small at 110 million parameters (compared with billions of parameters in the general domain). It is not clear how large clinical language models with billions of parameters can help medical AI systems utilize unstructured EHRs. In this study, we develop from scratch a large clinical language model - GatorTron - using >90 billion words of text (including >82 billion words of de-identified clinical text) and systematically evaluate it on 5 clinical NLP tasks including clinical concept extraction, medical relation extraction, semantic textual similarity, natural language inference (NLI), and medical question answering (MQA). We examine how (1) scaling up the number of parameters and (2) scaling up the size of the training data could benefit these NLP tasks. GatorTron models scale up the clinical language model from 110 million to 8.9 billion parameters and improve 5 clinical NLP tasks (e.g., 9.6% and 9.5% improvement in accuracy for NLI and MQA), which can be applied to medical AI systems to improve healthcare delivery. The GatorTron models are publicly available at: https://catalog.ngc.nvidia.com/orgs/nvidia/teams/clara/models/gatortron_og.
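As a hedged sketch of downstream use, the snippet below loads a clinical encoder and mean-pools its hidden states into sentence vectors for semantic textual similarity, one of the five evaluation tasks; the checkpoint path is a placeholder for the GatorTron weights distributed via the NGC catalog above.

```python
# Hedged sketch: using a pre-trained clinical language model as an encoder for
# semantic textual similarity. The checkpoint path is a placeholder; the
# released GatorTron weights are distributed via the NGC catalog linked above.
import torch
from transformers import AutoTokenizer, AutoModel

name = "path/to/gatortron-checkpoint"            # placeholder
tokenizer = AutoTokenizer.from_pretrained(name)
encoder = AutoModel.from_pretrained(name)

sents = ["The patient denies chest pain.", "No chest pain was reported."]
batch = tokenizer(sents, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    hidden = encoder(**batch).last_hidden_state   # (batch, seq_len, hidden)

# mean-pool token embeddings into sentence vectors and compare them
mask = batch["attention_mask"].unsqueeze(-1)
vecs = (hidden * mask).sum(1) / mask.sum(1)
print(float(torch.nn.functional.cosine_similarity(vecs[0], vecs[1], dim=0)))
```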
The application of natural language processing (NLP) to cancer pathology reports has been focused on detecting cancer cases, largely ignoring precancerous cases. Improving the characterization of precancerous adenomas assists in developing diagnostic tests for early cancer detection and prevention, especially for colorectal cancer (CRC). Here we developed transformer-based deep neural network NLP models to perform CRC phenotyping, with the goal of extracting precancerous lesion attributes and distinguishing cancer and precancerous cases. We achieved a macro-F1 score of 0.914 for classifying patients into negative, non-advanced adenoma, advanced adenoma, and CRC. We further improved the performance to 0.923 using an ensemble of classifiers for cancer status classification and lesion size named entity recognition (NER). Our results demonstrated the potential of using NLP to leverage real-world health record data to facilitate the development of diagnostic tests for early cancer prevention.
Objective: Social Determinants of Health (SDOH) influence personal health outcomes and health systems interactions. Health systems capture SDOH information through structured data and unstructured clinical notes; however, clinical notes often contain a more comprehensive representation of several key SDOH. The objective of this work is to assess the SDOH information gain achievable by extracting structured semantic representations of SDOH from the clinical narrative and combining these extracted representations with available structured data. Materials and Methods: We developed a natural language processing (NLP) information extraction model for SDOH that utilizes a deep learning entity and relation extraction architecture. In an electronic health record (EHR) case study, we applied the SDOH extractor to a large existing clinical data set with over 200,000 patients and 400,000 notes and compared the extracted information with available structured data. Results: The SDOH extractor achieved 0.86 F1 on a withheld test set. In the EHR case study, we found that 19% of current tobacco users, 10% of drug users, and 32% of homeless patients had these risk factors documented only in the clinical narrative. Conclusions: Patients who are at risk for negative health outcomes due to SDOH may be better served if health systems are able to identify SDOH risk factors and associated social needs. Structured semantic representations of text-encoded SDOH information can augment existing structured data, and this more comprehensive SDOH representation can assist health systems in identifying and addressing social needs.
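The kind of information-gain comparison reported in the case study can be sketched as follows; the column names and counts are toy assumptions, not the study's data.

```python
# Illustrative sketch (toy data, not the study cohort) of the information-gain
# comparison: what fraction of at-risk patients are flagged only by the NLP
# extractor and not by structured data?
import pandas as pd

df = pd.DataFrame({
    "patient_id":     [1, 2, 3, 4, 5],
    "tobacco_struct": [1, 0, 0, 1, 0],   # structured field flags current use
    "tobacco_nlp":    [1, 1, 0, 1, 1],   # SDOH extractor found it in a note
})

users = df[(df.tobacco_struct == 1) | (df.tobacco_nlp == 1)]
nlp_only = users[(users.tobacco_nlp == 1) & (users.tobacco_struct == 0)]
print(f"{len(nlp_only) / len(users):.0%} of current tobacco users are "
      "documented only in the clinical narrative")
```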
Evidence-based medicine, the practice in which healthcare professionals refer to the best available evidence when making decisions, forms the foundation of modern healthcare. However, it relies on labor-intensive systematic reviews, in which domain experts must aggregate and extract information from thousands of publications, primarily randomized controlled trial (RCT) results, into evidence tables. This paper investigates automating evidence table generation by decomposing the problem into two language processing tasks: named entity recognition, which identifies key entities in text such as drug names, and relation extraction, which maps their relationships into ordered tuples. We focus on the automatic tabulation of sentences from published RCT abstracts that report the results of study outcomes. Two deep neural network models were developed as part of a joint extraction pipeline, using the principles of transfer learning and transformer-based language representations. To train and test these models, a new gold-standard corpus was developed, comprising nearly 600 result sentences from six disease areas. The approach demonstrated significant advantages, with our system performing well across multiple natural language processing tasks and disease areas, including disease domains that were unevenly represented during training. Furthermore, we show that these results were achievable by training our models on as few as 200 example sentences. The final system is a proof of concept that the generation of evidence tables can be semi-automated, representing a step towards fully automated systematic reviews.
The RareDis corpus contains more than 5,000 rare diseases and almost 6,000 clinical manifestations, all annotated. In addition, the inter-annotator agreement evaluation shows relatively high agreement (F1-measure of 83.5% under the exact-match criterion for entities and of 81.3% for relations). Based on these results, the corpus is of high quality, which represents an important step for the field given the scarcity of available corpora on rare diseases. This could open the door to further NLP applications that would facilitate the diagnosis and treatment of these rare diseases and thus considerably improve the quality of life of these patients.
Radiology reports contain diverse and rich clinical abnormalities documented by radiologists during their interpretation of the images. A comprehensive semantic representation of radiological findings would enable a wide range of secondary-use applications to support diagnosis, classification, outcome prediction, and clinical research. In this paper, we present a new corpus of radiology reports annotated with clinical findings. Our annotation schema captures detailed descriptions of observable pathologic findings ("lesions") and other types of clinical problems ("medical problems"). The schema uses an event-based representation to capture fine-grained details, including assertion, anatomy, characteristics, size, count, etc. Our gold-standard corpus contains a total of 500 annotated computed tomography (CT) reports. We extracted trigger and argument entities using two state-of-the-art deep learning architectures, including BERT. We then predicted the linkages between trigger and argument entities (referred to as argument roles) using a BERT-based relation extraction model. We achieved the best extraction performance using a BERT model pre-trained on 3 million radiology reports from our institution: 90.9%-93.4% F1 for finding triggers and 72.0%-85.6% F1 for argument roles. To assess the generalizability of the model, we used an external validation set randomly sampled from the MIMIC Chest X-ray (MIMIC-CXR) database. The extraction performance on this validation set was 95.6% F1 for finding triggers and 79.1%-89.7% F1 for argument roles, demonstrating that the model performed consistently on cross-institutional data with a different imaging modality. We extracted the finding events from all radiology reports in the MIMIC-CXR database and made the extractions available to the research community.
Electronic Health Records (EHRs) hold detailed longitudinal information about each patient's health status and general clinical history, a large portion of which is stored within the unstructured text. Temporal modelling of this medical history, which considers the sequence of events, can be used to forecast and simulate future events, estimate risk, suggest alternative diagnoses or forecast complications. While most prediction approaches use mainly structured data or a subset of single-domain forecasts and outcomes, we processed the entire free-text portion of EHRs for longitudinal modelling. We present Foresight, a novel GPT3-based pipeline that uses NER+L tools (i.e. MedCAT) to convert document text into structured, coded concepts, followed by providing probabilistic forecasts for future medical events such as disorders, medications, symptoms and interventions. Since large portions of EHR data are in text form, such an approach benefits from a granular and detailed view of a patient while introducing modest additional noise. On tests in two large UK hospitals (King's College Hospital, South London and Maudsley) and the US MIMIC-III dataset, precision@10 of 0.80, 0.81 and 0.91 was achieved for forecasting the next biomedical concept. Foresight was also validated on 34 synthetic patient timelines by 5 clinicians and achieved relevancy of 97% for the top forecasted candidate disorder. Foresight can be easily trained and deployed locally as it only requires free-text data (as a minimum). As a generative model, it can simulate follow-on disorders, medications and interventions for as many steps as required. Foresight is a general-purpose model for biomedical concept modelling that can be used for real-world risk estimation, virtual trials and clinical research to study the progression of diseases, simulate interventions and counterfactuals, and for educational purposes.
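The next-concept evaluation can be read as a top-k hit rate; the sketch below shows that reading of precision@10 on toy ranked candidate lists (the paper's exact definition may differ, and the concept IDs are illustrative).

```python
# Hedged sketch of a precision@10-style check for next-concept forecasting:
# for each timeline step, count a hit if the true next concept is among the
# model's 10 highest-ranked candidates. Toy data; the paper's exact metric
# definition may differ.
def precision_at_k(ranked_candidates, true_next, k=10):
    hits = sum(1 for preds, truth in zip(ranked_candidates, true_next)
               if truth in preds[:k])
    return hits / len(true_next)

ranked = [["C0011849", "C0020538", "C0027051"],   # step 1: ranked candidate CUIs
          ["C0032285", "C0011849"]]               # step 2
truth = ["C0020538", "C0011849"]                  # observed next concepts
print(precision_at_k(ranked, truth, k=10))
```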
Computational text phenotyping is the practice of identifying patients with certain disorders and traits from clinical notes. Rare diseases are challenging to identify due to the few cases available for machine learning and the need for data annotation by domain experts. We propose a method using ontologies and weak supervision, together with recent pre-trained contextual representations from bidirectional transformers (e.g., BERT). The ontology-based framework includes two steps: (i) Text-to-UMLS, which extracts phenotypes by contextually linking mentions to concepts in the Unified Medical Language System (UMLS), using a named entity recognition and linking (NER+L) tool, SemEHR, and weak supervision with customized rules and contextual mention representations; and (ii) UMLS-to-ORDO, which matches UMLS concepts to rare diseases in the Orphanet Rare Disease Ontology (ORDO). The weakly supervised approach is proposed to learn a phenotype confirmation model that improves Text-to-UMLS linking without annotated data from domain experts. We evaluated the approach on three clinical datasets of discharge summaries and radiology reports from two institutions in the US and the UK. Our best weakly supervised method achieved 81.4% precision and 91.4% recall in extracting rare disease UMLS phenotypes from MIMIC-III discharge summaries. The overall pipeline for processing clinical notes can surface rare disease cases, most of which are not captured in structured data (manually assigned ICD codes). Results on the radiology reports from MIMIC-III and NHS Tayside were consistent with those on the discharge summaries. We discuss the usefulness of the weak supervision approach and propose directions for future studies.
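The two-step mapping can be pictured as below; both dictionaries are toy stand-ins for the SemEHR/weak-supervision output and the UMLS-to-ORDO cross-ontology map, not the actual resources.

```python
# Schematic sketch of the two-step ontology mapping (not SemEHR itself):
# mentions are first linked to UMLS concepts, then UMLS concepts are matched
# to rare diseases in ORDO. Both dictionaries are toy stand-ins.
text_to_umls = {                   # NER+L output refined by weak supervision
    "cystic fibrosis": "C0010674",
    "shortness of breath": "C0013404",
}
umls_to_ordo = {                   # cross-ontology map: UMLS CUI -> ORDO IRI
    "C0010674": "http://www.orpha.net/ORDO/Orphanet_586",
}

for mention, cui in text_to_umls.items():
    ordo_iri = umls_to_ordo.get(cui)
    if ordo_iri:                   # only rare-disease phenotypes survive step (ii)
        print(f"{mention} -> {cui} -> {ordo_iri}")
```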
As structured data are often insufficient, labels need to be extracted from free text in electronic health records when developing models for clinical information retrieval and decision support systems. One of the most important contextual properties in clinical text is negation, which indicates the absence of findings. We aimed to improve large-scale extraction of labels by comparing three methods for negation detection in Dutch clinical notes. Using the Erasmus Medical Center Dutch Clinical Corpus, we compared a rule-based method based on ContextD, a BiLSTM model using MedCAT, and (fine-tuned) RoBERTa-based models. We found that both the BiLSTM and RoBERTa models consistently outperformed the rule-based model in terms of F1 score, precision, and recall. In addition, we systematically categorized the classification errors of each model, which can be used to further improve model performance for specific applications. Combining the three models was not beneficial in terms of performance. We conclude that the BiLSTM and RoBERTa-based models in particular are highly accurate in detecting clinical negations, but ultimately all three approaches can be viable depending on the use case at hand.
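A deliberately simplified, ConText-style rule is sketched below to make the rule-based baseline concrete; it is not ContextD, and the triggers are English rather than Dutch for readability.

```python
# Simplified ConText-style negation rule (not ContextD): a concept is marked
# negated if a negation trigger occurs within a small window of preceding
# words. English triggers are used here for readability.
import re

NEG_TRIGGERS = {"no", "denies", "without", "negative for"}

def is_negated(sentence: str, concept: str, window: int = 5) -> bool:
    tokens = re.findall(r"[\w']+", sentence.lower())
    try:
        idx = tokens.index(concept.lower().split()[0])
    except ValueError:
        return False                      # concept not found in the sentence
    left_context = " ".join(tokens[max(0, idx - window):idx])
    return any(trigger in left_context for trigger in NEG_TRIGGERS)

print(is_negated("Patient denies fever or chills.", "fever"))   # True
print(is_negated("Fever started two days ago.", "fever"))       # False
```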
Background: In the information extraction and natural language processing domain, accessible datasets are crucial for reproducing and comparing results. Publicly available implementations and tools can serve as benchmarks and facilitate the development of more complex applications. However, in the context of clinical text processing, the number of accessible datasets is scarce - and so is the number of existing tools. One of the main reasons is the sensitivity of the data. This problem is even more evident for non-English languages. Methods: To address this situation, we introduce a workbench: a collection of German clinical text processing models. The models were trained on a de-identified corpus of German nephrology reports. Results: The presented models provide promising results on in-domain data. Moreover, we show that our models can also be successfully applied to other biomedical texts in German. Our workbench is publicly available, so it can be used out of the box or transferred to related problems.
The medical domain is often subject to information overload. The digitization of healthcare, constant updates of online medical repositories, and the increasing availability of biomedical datasets make it challenging to analyze data effectively. This creates additional work for medical professionals, who rely heavily on medical data to complete research and consult patients. This paper aims to show how different text highlighting techniques can capture relevant medical context. This would reduce doctors' cognitive load and response time to patients by facilitating faster decisions, thereby improving the overall quality of online medical services. Three different word-level text highlighting methodologies were implemented and evaluated. The first method uses TF-IDF scores directly to highlight important parts of the text. The second method is a combination of TF-IDF scores and the application of Local Interpretable Model-agnostic Explanations to a classification model. The third method uses neural networks directly to predict whether a word should be highlighted. Our experimental results show that the neural network approach successfully highlights medically relevant terms and that its performance improves as the size of the input segment increases.
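A minimal sketch of the first (TF-IDF) highlighting method is given below; the reference collection, target sentence, and threshold are toy assumptions.

```python
# Sketch of word-level highlighting with TF-IDF scores (the first method
# described above). The reference corpus, target sentence, and threshold are
# toy assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer

corpus = [
    "Metformin is commonly used to treat type 2 diabetes.",
    "Patients with hypertension may be prescribed lisinopril.",
    "Regular exercise and a balanced diet are recommended.",
]
target = "Lisinopril can cause a persistent dry cough in some patients."

vec = TfidfVectorizer(stop_words="english")
vec.fit(corpus + [target])
scores = dict(zip(vec.get_feature_names_out(),
                  vec.transform([target]).toarray()[0]))

threshold = 0.3   # tuned per application
highlighted = [f"**{w}**" if scores.get(w.strip(".,").lower(), 0) > threshold else w
               for w in target.split()]
print(" ".join(highlighted))
```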
People often leverage online media (e.g., Facebook, Reddit) as platforms to express mental distress and seek support. State-of-the-art NLP techniques have shown strong potential to automatically detect mental health issues from text. Research suggests that mental health issues are reflected in the emotions (e.g., sadness) conveyed by people's choice of language. Hence, we developed a novel emotion-annotated mental health corpus (EmoMent), consisting of 2802 Facebook posts (14845 sentences) extracted from two South Asian countries (Sri Lanka and India). Three clinical psychology postgraduates were involved in annotating these posts into eight categories, including 'mental illness' (e.g., depression) and emotions (e.g., 'sadness', 'anger'). The EmoMent corpus achieved 98.3% 'very good' inter-annotator agreement (i.e., agreement between two or more annotators) and a Fleiss' Kappa of 0.82. Our RoBERTa-based models achieved an F1 score of 0.76 and a macro-averaged F1 score of 0.77 for the first task (i.e., predicting a mental health condition from a post) and the second task (i.e., the degree to which relevant posts are associated with the categories defined in our taxonomy), respectively.
The challenges associated with the biomedical named entity recognition task are that existing methods consider only a small number of biomedical entity types (e.g., disease, symptom, protein, gene), and that they do not account for social determinants of health (age, gender, employment, race), which are the non-medical factors related to patients' health. We propose a machine learning pipeline that improves on previous efforts in two ways: first, it recognizes many biomedical entity types beyond the standard ones; second, it considers non-clinical factors related to patient health. The pipeline comprises stages such as preprocessing, tokenization, mapping embedding lookup, and named entity recognition to extract biomedical named entities from free text. We present a new dataset that we prepared by curating COVID-19 case reports. The proposed approach outperforms baseline methods on five benchmark datasets, with macro- and micro-averaged F1 scores of about 90, and achieves macro- and micro-averaged F1 scores of 95.25 and 93.18, respectively, on our dataset.
Specialized transformer-based models (e.g., BioBERT and BioMegatron) are adapted to the biomedical domain by pre-training on publicly available biomedical corpora. As such, they have the potential to encode large-scale biological knowledge. We investigate how biological knowledge is encoded and represented in these models, and its potential utility to support inference in cancer precision medicine - namely, the interpretation of the clinical significance of genomic alterations. We compare the performance of different transformer baselines; we use probing to determine the consistency of the encodings for distinct entities; and we use clustering methods to compare and contrast the internal properties of the embeddings for genes, variants, drugs, and diseases. We show that these models do indeed encode biological knowledge, although some of it is lost during fine-tuning for specific tasks. Finally, we analyze how the models behave with respect to biases and imbalances in the dataset.
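The clustering analysis can be sketched as follows, with a publicly available BioBERT checkpoint standing in for the models studied; the entity list is illustrative.

```python
# Hedged sketch of the clustering analysis: embed entity names with a
# biomedical encoder and cluster them to see whether genes, drugs, and
# diseases separate. The checkpoint and entity list are illustrative.
import torch
from sklearn.cluster import KMeans
from transformers import AutoTokenizer, AutoModel

name = "dmis-lab/biobert-base-cased-v1.1"        # a public BioBERT checkpoint
tok = AutoTokenizer.from_pretrained(name)
enc = AutoModel.from_pretrained(name)

entities = ["BRAF", "EGFR", "imatinib", "erlotinib",
            "melanoma", "lung adenocarcinoma"]
batch = tok(entities, padding=True, return_tensors="pt")
with torch.no_grad():
    hidden = enc(**batch).last_hidden_state
mask = batch["attention_mask"].unsqueeze(-1)
vecs = ((hidden * mask).sum(1) / mask.sum(1)).numpy()   # mean-pooled embeddings

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(vecs)
print(dict(zip(entities, labels)))   # inspect whether entity types co-cluster
```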
Importance: Social determinants of health (SDOH) are known to be associated with increased risk of suicidal behaviors, but few studies have utilized SDOH from unstructured electronic health record (EHR) notes. Objective: To investigate associations between suicide and recent SDOH, identified using structured and unstructured data. Design: Nested case-control study. Setting: EHR data from the US Veterans Health Administration (VHA). Participants: 6,122,785 Veterans who received care in the US VHA between October 1, 2010, and September 30, 2015. Exposures: Occurrence of SDOH over a maximum span of two years compared with no occurrence of SDOH. Main Outcomes and Measures: Cases of suicide deaths were matched with 4 controls on birth year, cohort entry date, sex, and duration of follow-up. We developed an NLP system to extract SDOH from unstructured notes. Structured data, NLP on unstructured data, and combining them yielded seven, eight, and nine SDOH, respectively. Adjusted odds ratios (aORs) and 95% confidence intervals (CIs) were estimated using conditional logistic regression. Results: In our cohort, 8,821 Veterans committed suicide during 23,725,382 person-years of follow-up (incidence rate 37.18/100,000 person-years). Our cohort was mostly male (92.23%) and white (76.99%). Across the six common SDOH used as covariates, NLP-extracted SDOH, on average, covered 84.38% of all SDOH occurrences. All SDOH, measured by structured data and NLP, were significantly associated with increased risk of suicide. The SDOH with the largest effect was legal problems (aOR=2.67, 95% CI=2.46-2.89), followed by violence (aOR=2.26, 95% CI=2.11-2.43). NLP-extracted and structured SDOH were also associated with suicide. Conclusions and Relevance: NLP-extracted SDOH were consistently and significantly associated with increased risk of suicide among Veterans, suggesting the potential of NLP in public health studies.
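The matched-set analysis can be sketched with statsmodels' conditional logistic regression; the data below are simulated (one case matched to four controls per set), not the VHA cohort.

```python
# Hedged sketch of estimating an adjusted odds ratio with conditional logistic
# regression on 1:4 matched case-control sets. Simulated toy data, not the
# VHA cohort; a real analysis would include additional covariates.
import numpy as np
import pandas as pd
from statsmodels.discrete.conditional_models import ConditionalLogit

rng = np.random.default_rng(0)
n_sets = 200                                   # each set: 1 case + 4 controls
df = pd.DataFrame({
    "set_id": np.repeat(np.arange(n_sets), 5),
    "case":   np.tile([1, 0, 0, 0, 0], n_sets),
})
# simulate an SDOH exposure that is more frequent among cases
df["legal_problems"] = rng.binomial(1, np.where(df["case"] == 1, 0.30, 0.15))

res = ConditionalLogit(df["case"], df[["legal_problems"]],
                       groups=df["set_id"]).fit()
aor = np.exp(res.params)
ci = np.exp(res.conf_int())
print(f"aOR={aor.iloc[0]:.2f}, 95% CI={ci.iloc[0, 0]:.2f}-{ci.iloc[0, 1]:.2f}")
```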
Although rare diseases are characterized by low prevalence, approximately 300 million people are affected by a rare disease. The early and accurate diagnosis of these conditions is a major challenge for general practitioners, who do not have enough knowledge to identify them. In addition, rare diseases usually show a wide variety of manifestations, which can make diagnosis even more difficult. A delayed diagnosis can negatively affect the patient's life. Therefore, there is an urgent need to increase scientific and medical knowledge about rare diseases. Natural language processing (NLP) and deep learning can help extract relevant information about rare diseases to facilitate their diagnosis and treatment. This paper explores several deep learning techniques, such as bidirectional long short-term memory (BiLSTM) networks and deep contextualized word representations based on Bidirectional Encoder Representations from Transformers (BERT), to recognize rare diseases and their clinical manifestations (signs and symptoms) in the RareDis corpus. This corpus contains more than 5,000 rare diseases and almost 6,000 clinical manifestations. BioBERT, a domain-specific language representation based on BERT and trained on biomedical corpora, obtained the best results. In particular, this model obtained an F1-score of 85.2% for rare diseases, outperforming all other models.
Health literacy is a primary focus of Healthy People 2030, the fifth iteration of the United States' national goals and objectives. People with low health literacy usually have difficulty following after-visit instructions and using prescriptions, which results in worse health outcomes and serious health disparities. In this study, we propose to leverage natural language processing techniques to improve the health literacy of patient education materials by automatically translating health-illiterate language in a given sentence. We scraped patient education materials from four online health information websites: medlineplus.gov, drugs.com, mayoclinic.org, and reddit.com. We trained and tested state-of-the-art neural machine translation (NMT) models on a silver-standard training dataset and a gold-standard test dataset, respectively. The experimental results showed that a bidirectional long short-term memory (BiLSTM) NMT model outperformed a Bidirectional Encoder Representations from Transformers (BERT)-based NMT model. We also verified the effectiveness of the NMT models in translating health-illiterate language by comparing the ratio of health-illiterate language in the sentences. The proposed NMT model was able to identify the appropriate complex words and simplify them into layman's terms, although it suffered from issues with sentence completeness, fluency, and readability, and had difficulty translating certain medical terms.
The increasing reliance on online communities for healthcare information by patients and caregivers has led to the increase in the spread of misinformation, or subjective, anecdotal and inaccurate or non-specific recommendations, which, if acted on, could cause serious harm to the patients. Hence, there is an urgent need to connect users with accurate and tailored health information in a timely manner to prevent such harm. This paper proposes an innovative approach to suggesting reliable information to participants in online communities as they move through different stages in their disease or treatment. We hypothesize that patients with similar histories of disease progression or course of treatment would have similar information needs at comparable stages. Specifically, we pose the problem of predicting topic tags or keywords that describe the future information needs of users based on their profiles, traces of their online interactions within the community (past posts, replies) and the profiles and traces of online interactions of other users with similar profiles and similar traces of past interaction with the target users. The result is a variant of the collaborative information filtering or recommendation system tailored to the needs of users of online health communities. We report results of our experiments on an expert curated data set which demonstrate the superiority of the proposed approach over the state of the art baselines with respect to accurate and timely prediction of topic tags (and hence information sources of interest).
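A toy sketch of the underlying collaborative-filtering idea (not the authors' model) follows; the users, tags, and interaction counts are invented.

```python
# Toy sketch of collaborative filtering for topic-tag suggestion: recommend
# unseen tags to a user based on the tag histories of users with the most
# similar interaction profiles. Users, tags, and counts are invented.
import numpy as np

users = ["u1", "u2", "u3", "u4"]
tags = ["chemo", "fatigue", "diet", "insurance"]
# rows: users, cols: tags; counts of past posts/replies mentioning each tag
history = np.array([[3, 1, 0, 0],
                    [2, 2, 1, 0],
                    [0, 0, 2, 3],
                    [1, 0, 0, 2]], dtype=float)

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

target = 0                                     # suggest tags for u1
sims = np.array([cosine(history[target], h) for h in history])
sims[target] = 0.0                             # exclude the user themselves
scores = sims @ history                        # similarity-weighted tag scores
scores[history[target] > 0] = -np.inf          # only suggest unseen tags
print(tags[int(np.argmax(scores))])            # top suggested tag for u1
```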
In recent years, there has been particular interest in using electronic medical records (EMRs) for secondary purposes to enhance the quality and safety of healthcare delivery. EMRs tend to contain large amounts of valuable clinical notes. Learning embeddings is a way of converting the notes into a format that makes them comparable. Transformer-based representation models have recently made great leaps forward. These models are pre-trained on large online datasets to understand natural language text effectively. The quality of the learned embeddings is influenced by how clinical notes are used as input to the representation models. Clinical notes have several sections with different levels of informational value, and it is also common for healthcare providers to use different expressions for the same concept. Existing methods use clinical notes directly, or after initial preprocessing, as input to the representation models. In contrast, to learn good embeddings, we first identified the most informative sections of the clinical notes. We then mapped the concepts extracted from the selected sections to their standard names in the Unified Medical Language System (UMLS) and used the standard phrases corresponding to the unique concepts as input to the clinical models. We conducted experiments to measure the usefulness of the learned embedding vectors for the task of in-hospital mortality prediction on a subset of the publicly available Medical Information Mart for Intensive Care (MIMIC-III) dataset. According to the experiments, clinical transformer-based representation models produced better results when the input consisted of the standard names of the extracted unique concepts, compared with the other input formats. The best-performing models were BioBERT, PubMedBERT, and UMLSBert.
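The input-construction step described above can be pictured with the small sketch below; the CUI-to-preferred-name map is a toy stand-in for a real UMLS lookup.

```python
# Schematic sketch of the input construction: concepts extracted from the
# selected note sections are de-duplicated and replaced by their UMLS
# preferred names before being fed to a clinical transformer. The CUI map is
# a toy stand-in for a real UMLS lookup.
cui_to_preferred = {
    "C0020538": "Hypertensive disease",
    "C0011849": "Diabetes mellitus",
    "C0032285": "Pneumonia",
}
extracted = ["C0011849", "C0020538", "C0011849", "C0032285"]  # from chosen sections

unique_cuis = list(dict.fromkeys(extracted))                  # order-preserving dedupe
model_input = " ".join(cui_to_preferred[c] for c in unique_cuis)
print(model_input)   # "Diabetes mellitus Hypertensive disease Pneumonia"
```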