Various machine learning models, including deep neural network models, have been developed to predict the deleteriousness of missense (non-synonymous) mutations. Nevertheless, a fresh examination of this biological problem with more sophisticated, adaptive machine learning approaches could benefit from potential improvements over the current state of the art. Recent advances in natural language processing have shown that transformer models, a type of deep neural network, are particularly powerful at modeling sequence information with context dependence. In this study, we introduce MutFormer, a transformer-based model for the prediction of deleterious missense mutations. MutFormer uses reference and mutated protein sequences from the human genome as its primary features. It combines self-attention layers with convolutional layers to learn both long-range and short-range dependencies between amino-acid mutations in protein sequences. We pre-trained MutFormer on reference protein sequences and on mutated protein sequences resulting from common genetic variants observed in human populations. We then examined different fine-tuning approaches to successfully apply the model to deleteriousness prediction of missense mutations. Finally, we evaluated MutFormer's performance on multiple test datasets. We found that MutFormer showed similar or improved performance compared with a variety of existing tools, including those that use conventional machine learning approaches (e.g., support vector machines, convolutional neural networks, recurrent neural networks). We conclude that MutFormer successfully accounts for sequence features not explored in previous studies and may complement existing computational predictions or empirically generated functional scores to improve our understanding of disease variants.
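A minimal PyTorch sketch of the general idea of combining convolutional layers (short-range dependencies) with self-attention layers (long-range dependencies) over amino-acid embeddings is shown below; the layer sizes, fusion scheme, and toy vocabulary are illustrative assumptions, not the published MutFormer configuration.

```python
# Minimal sketch of a hybrid block mixing depthwise convolution (short-range
# dependencies) with multi-head self-attention (long-range dependencies) over
# amino-acid token embeddings. Hyper-parameters are illustrative only.
import torch
import torch.nn as nn

class ConvAttentionBlock(nn.Module):
    def __init__(self, d_model=128, n_heads=4, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv1d(d_model, d_model, kernel_size,
                              padding=kernel_size // 2, groups=d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, x):                       # x: (batch, seq_len, d_model)
        # Convolution branch captures local neighbourhoods of residues.
        local = self.conv(x.transpose(1, 2)).transpose(1, 2)
        x = self.norm1(x + local)
        # Self-attention branch captures long-range residue interactions.
        global_, _ = self.attn(x, x, x)
        return self.norm2(x + global_)

# Toy usage: embed a (reference or mutated) protein fragment of 20 amino-acid types.
vocab = {aa: i for i, aa in enumerate("ACDEFGHIKLMNPQRSTVWY")}
seq = "MKTAYIAKQR"
ids = torch.tensor([[vocab[a] for a in seq]])
emb = nn.Embedding(len(vocab), 128)
block = ConvAttentionBlock()
print(block(emb(ids)).shape)                    # torch.Size([1, 10, 128])
```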
The prediction of protein structures from sequences is an important task for function prediction, drug design, and understanding related biological processes. Recent advances have demonstrated the power of language models (LMs) in processing protein sequence databases; such models inherit the advantages of attention networks and capture useful information when learning representations for proteins. The past two years have witnessed remarkable success in tertiary protein structure prediction (PSP), including evolution-based and single-sequence-based PSP. It appears that, rather than relying on energy-based models and sampling procedures, protein language model (pLM)-based pipelines have emerged as the mainstream paradigm in PSP. Despite this fruitful progress, the PSP community needs a systematic and up-to-date survey to help bridge the gap between LMs in the natural language processing (NLP) and PSP domains and to introduce their methodologies, advancements, and practical applications. To this end, in this paper we first introduce the similarities between protein and human languages that allow LMs to be extended to pLMs and applied to protein databases. Then, we systematically review recent advances in LMs and pLMs from the perspectives of network architectures, pre-training strategies, applications, and commonly used protein databases. Next, different types of methods for PSP are discussed, particularly how pLM-based architectures function in the process of protein folding. Finally, we identify challenges faced by the PSP community and foresee promising research directions along with the advances of pLMs. This survey aims to be a hands-on guide for researchers to understand PSP methods, develop pLMs and tackle challenging problems in this field for practical purposes.
Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) has caused an ongoing pandemic that had infected 219 million people as of 10/19/21, with a 3.6% mortality rate. Natural selection can generate favorable mutations with improved fitness; however, the coronaviruses identified so far may be only the tip of the iceberg, and potentially lethal variants of concern (VOCs) may emerge over time. Understanding the patterns of emerging VOCs and forecasting mutations that may lead to gain of function or immune escape is urgently needed. Here we developed PhyloTransformer, a transformer-based discriminative model that uses a multi-head self-attention mechanism to model genetic mutations that may confer a viral reproductive advantage. To identify complex dependencies between the elements of each input sequence, PhyloTransformer employs advanced modeling techniques, including the fast attention mechanism based on positive orthogonal random features from Performer (FAVOR+) and the masked language model (MLM) from Bidirectional Encoder Representations from Transformers (BERT). PhyloTransformer was trained on 1,765,297 genetic sequences retrieved from the Global Initiative on Sharing All Influenza Data (GISAID) database. First, we compared its prediction accuracy for novel mutations and novel combinations against extensive baseline models; we found that PhyloTransformer outperformed every baseline method with statistical significance. Second, we examined mutation predictions for each nucleotide of the receptor binding motif (RBM) and found our predictions to be precise and accurate. Third, we predicted modifications of N-glycosylation sites to identify mutations associated with altered glycosylation that may be favored during viral evolution. We anticipate that PhyloTransformer could guide proactive vaccine design to effectively target future SARS-CoV-2 variants.
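The positive orthogonal random feature approximation of softmax attention that Performer-style models rely on can be sketched as follows; the feature count and scaling are illustrative, and the random projection is not orthogonalized here as it would be in a full FAVOR+ implementation.

```python
# Sketch of the positive-random-feature (FAVOR+-style) approximation of softmax
# attention used by Performer-type models: attention cost becomes linear in the
# sequence length. Feature count and scaling choices here are illustrative.
import numpy as np

def positive_random_features(x, w):
    # phi(x) = exp(w @ x - ||x||^2 / 2) / sqrt(m), elementwise positive.
    m = w.shape[0]
    return np.exp(x @ w.T - 0.5 * np.sum(x**2, axis=-1, keepdims=True)) / np.sqrt(m)

def favor_plus_attention(q, k, v, n_features=256, seed=0):
    d = q.shape[-1]
    rng = np.random.default_rng(seed)
    w = rng.standard_normal((n_features, d))          # (ideally orthogonalized)
    q_, k_ = q / d**0.25, k / d**0.25                 # split the 1/sqrt(d) scaling
    q_prime = positive_random_features(q_, w)
    k_prime = positive_random_features(k_, w)
    kv = k_prime.T @ v                                # (m, d_v): linear in seq_len
    normalizer = q_prime @ k_prime.sum(axis=0)        # (seq_len,)
    return (q_prime @ kv) / normalizer[:, None]

def exact_softmax_attention(q, k, v):
    scores = q @ k.T / np.sqrt(q.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

# Toy check on random embedding vectors: the two outputs should be close.
q = np.random.default_rng(1).standard_normal((8, 16))
k = np.random.default_rng(2).standard_normal((8, 16))
v = np.random.default_rng(3).standard_normal((8, 16))
print(np.abs(favor_plus_attention(q, k, v) - exact_softmax_attention(q, k, v)).mean())
```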
We present a three-level hierarchical transformer network for modeling long-term dependencies across clinical notes for the purpose of patient-level prediction. The network is equipped with three levels of transformer-based encoders that learn progressively from words to sentences, from sentences to notes, and finally from notes to patients. The first level, from words to sentences, directly applies a pre-trained BERT model as a fully trainable component, while the second and third levels implement stacks of transformer-based encoders before the final patient representation is fed into a classification layer for clinical prediction. Compared with a conventional BERT model, our model increases the maximum input length from 512 tokens to the much longer sequences needed to model large collections of clinical notes. We empirically examine different hyper-parameters to identify the best trade-off given the limits of our computational resources. Our experimental results on the MIMIC-III dataset for different prediction tasks show that the proposed hierarchical transformer network outperforms previous state-of-the-art models, including but not limited to BigBird.
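A minimal sketch of such a word-to-sentence-to-note-to-patient hierarchy is given below; a toy embedding stands in for the pre-trained BERT at the lowest level, and all sizes and the mean-pooling choices are assumptions rather than the published configuration.

```python
# Minimal sketch of a three-level hierarchy: tokens -> sentences -> notes -> patient.
# A toy embedding layer stands in for the pre-trained BERT at the lowest level;
# all sizes and the mean-pooling choices are illustrative assumptions.
import torch
import torch.nn as nn

def encoder(d_model, n_layers=2, n_heads=4):
    layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
    return nn.TransformerEncoder(layer, n_layers)

class HierarchicalNoteModel(nn.Module):
    def __init__(self, vocab_size=1000, d_model=64, n_classes=2):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, d_model)    # stand-in for BERT
        self.word_to_sent = encoder(d_model)
        self.sent_to_note = encoder(d_model)
        self.note_to_patient = encoder(d_model)
        self.classifier = nn.Linear(d_model, n_classes)

    def forward(self, tokens):
        # tokens: (patients, notes, sentences, words)
        p, n, s, w = tokens.shape
        x = self.tok_emb(tokens.view(p * n * s, w))
        sent = self.word_to_sent(x).mean(dim=1)             # one vector per sentence
        note = self.sent_to_note(sent.view(p * n, s, -1)).mean(dim=1)
        patient = self.note_to_patient(note.view(p, n, -1)).mean(dim=1)
        return self.classifier(patient)                     # patient-level logits

print(HierarchicalNoteModel()(torch.randint(0, 1000, (2, 3, 4, 5))).shape)  # (2, 2)
```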
Specialized transformer-based models, such as BioBERT and BioMegatron, are adapted to the biomedical domain by pre-training on publicly available biomedical corpora. As such, they have the potential to encode large-scale biological knowledge. We investigate how biological knowledge is encoded and represented in these models, and its potential utility for supporting inference in cancer precision medicine, namely the interpretation of the clinical significance of genomic alterations. We compare the performance of different transformer baselines; we use probing to determine the consistency of encodings for distinct entities; and we use clustering methods to compare and contrast the internal properties of the embeddings of genes, variants, drugs, and diseases. We show that these models do indeed encode biological knowledge, although some of it is lost during fine-tuning for specific tasks. Finally, we analyze how the models behave with respect to biases and imbalances in the datasets.
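One simple way to carry out this kind of probing is to embed entity mentions with a domain model and cluster the resulting vectors; the sketch below assumes a BioBERT-style checkpoint name and mean pooling purely for illustration.

```python
# Minimal sketch of the probing/clustering idea: embed entity mentions with a
# domain model and cluster the embeddings. The checkpoint name is a placeholder
# for whichever biomedical model is being probed; mean pooling is one simple
# choice among several.
import torch
from transformers import AutoModel, AutoTokenizer
from sklearn.cluster import KMeans

MODEL_NAME = "dmis-lab/biobert-v1.1"   # assumed checkpoint; swap in the model under study
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME)

entities = ["BRAF", "V600E", "vemurafenib", "melanoma",
            "EGFR", "T790M", "osimertinib", "lung adenocarcinoma"]

with torch.no_grad():
    batch = tokenizer(entities, padding=True, return_tensors="pt")
    hidden = model(**batch).last_hidden_state          # (n, tokens, dim)
    mask = batch["attention_mask"].unsqueeze(-1)
    embeddings = (hidden * mask).sum(1) / mask.sum(1)  # mean-pooled entity vectors

# Cluster genes / variants / drugs / diseases and inspect the assignments.
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(embeddings.numpy())
print(dict(zip(entities, labels)))
```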
In this paper, we assess the viability of transformer models in end-to-end InfoSec settings, in which no intermediate feature representations or processing steps occur outside the model. We implement transformer models for two distinct InfoSec data formats - specifically URLs and PE files - in a novel end-to-end approach, and explore a variety of architectural designs, training regimes, and experimental settings to determine the ingredients necessary for performant detection models. We show that in contrast to conventional transformers trained on more standard NLP-related tasks, our URL transformer model requires a different training approach to reach high performance levels. Specifically, we show that 1) pre-training on a massive corpus of unlabeled URL data for an auto-regressive task does not readily transfer to binary classification of malicious or benign URLs, but 2) that using an auxiliary auto-regressive loss improves performance when training from scratch. We introduce a method for mixed objective optimization, which dynamically balances contributions from both loss terms so that neither one of them dominates. We show that this method yields quantitative evaluation metrics comparable to that of several top-performing benchmark classifiers. Unlike URLs, binary executables contain longer and more distributed sequences of information-rich bytes. To accommodate such lengthy byte sequences, we introduce additional context length into the transformer by providing its self-attention layers with an adaptive span similar to Sukhbaatar et al. We demonstrate that this approach performs comparably to well-established malware detection models on benchmark PE file datasets, but also point out the need for further exploration into model improvements in scalability and compute efficiency.
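One possible form of such dynamic balancing is to normalize each loss term by a detached running average of its own magnitude, as sketched below; this is an illustrative balancing rule under stated assumptions, not necessarily the exact scheme used in the paper.

```python
# Sketch of one way to balance a classification loss with an auxiliary
# auto-regressive (next-token) loss so neither term dominates: each loss is
# divided by a detached running average of its own magnitude. This is an
# illustrative balancing rule, not necessarily the one used in the paper.
import torch

class BalancedLoss:
    def __init__(self, momentum=0.99):
        self.momentum = momentum
        self.avg = {}

    def __call__(self, name, loss):
        value = loss.detach().abs().clamp_min(1e-8)
        self.avg[name] = self.momentum * self.avg.get(name, value) + (1 - self.momentum) * value
        return loss / self.avg[name]

balancer = BalancedLoss()
cls_loss = torch.tensor(0.02, requires_grad=True)   # stand-ins for real model losses
ar_loss = torch.tensor(5.70, requires_grad=True)
total = balancer("cls", cls_loss) + balancer("ar", ar_loss)
total.backward()                                    # both terms now contribute at a similar scale
print(float(total))
```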
Although transformer language models (LMs) are the state of the art for information extraction, long text introduces computational challenges that require suboptimal preprocessing steps or alternative model architectures. Sparse-attention LMs can represent longer sequences and overcome these performance hurdles. However, it remains unclear how to explain the predictions of such models, since not all tokens attend to each other in the self-attention layers, and long sequences pose computational challenges for explainability algorithms whose runtime depends on document length. These challenges are acute in the medical setting, where documents can be very long and machine learning (ML) models must be auditable and trustworthy. We introduce a novel Masked Sampling Procedure (MSP) to identify the text blocks that contribute to a prediction, apply MSP to the prediction of diagnoses from medical text, and validate our approach with a blind review by two clinicians. Our method identifies roughly 1.7 times more clinically informative text blocks than the previous state of the art, runs up to 100 times faster, and can be used to generate important phrase pairs. MSP is particularly well suited to long LMs but can be applied to any text classifier. We provide a general implementation of MSP.
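The core idea of a masked-sampling attribution can be sketched in a few lines: randomly mask fixed-size text blocks, re-score the document, and credit each block with the average drop in predicted probability; the block size, mask rate, and toy classifier below are assumptions.

```python
# Sketch of a masked-sampling style attribution: randomly mask text blocks,
# re-score the document, and credit each block with the average drop in the
# predicted probability when it is masked. Block size, mask rate and the toy
# classifier are illustrative assumptions.
import numpy as np

def masked_sampling_importance(blocks, predict, n_samples=500, mask_prob=0.3, seed=0):
    rng = np.random.default_rng(seed)
    base = predict(" ".join(blocks))
    drops, counts = np.zeros(len(blocks)), np.zeros(len(blocks))
    for _ in range(n_samples):
        masked = rng.random(len(blocks)) < mask_prob
        if not masked.any():
            continue
        text = " ".join("[MASK]" if m else b for b, m in zip(blocks, masked))
        drop = base - predict(text)
        drops[masked] += drop
        counts[masked] += 1
    return drops / np.maximum(counts, 1)            # mean drop per block

# Toy classifier: probability grows with mentions of "dyspnea" and "edema".
def toy_predict(text):
    return min(1.0, 0.1 + 0.4 * text.count("dyspnea") + 0.4 * text.count("edema"))

blocks = ["patient reports dyspnea on exertion", "no fever or chills",
          "lower extremity edema noted", "denies chest pain"]
print(masked_sampling_importance(blocks, toy_predict).round(3))
```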
We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers. Unlike recent language representation models (Peters et al., 2018a; Radford et al., 2018), BERT is designed to pretrain deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT model can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications. BERT is conceptually simple and empirically powerful. It obtains new state-of-the-art results on eleven natural language processing tasks, including pushing the GLUE score to 80.5% (7.7% point absolute improvement), MultiNLI accuracy to 86.7% (4.6% absolute improvement), SQuAD v1.1 question answering Test F1 to 93.2 (1.5 point absolute improvement) and SQuAD v2.0 Test F1 to 83.1 (5.1 point absolute improvement).
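A minimal sketch of the masked-language-model corruption used in BERT pre-training (15% of tokens selected; of these, 80% replaced by [MASK], 10% by a random token, and 10% left unchanged) is shown below with toy token ids.

```python
# Minimal sketch of the masked-language-model (MLM) corruption used for BERT
# pre-training: 15% of tokens are selected; of these, 80% become [MASK], 10% a
# random token, and 10% are left unchanged. Vocabulary and special-token ids
# here are toy values.
import torch

def mlm_mask(input_ids, mask_token_id, vocab_size, mlm_prob=0.15, seed=0):
    g = torch.Generator().manual_seed(seed)
    labels = input_ids.clone()
    selected = torch.rand(input_ids.shape, generator=g) < mlm_prob
    labels[~selected] = -100                      # only masked positions are scored

    corrupted = input_ids.clone()
    r = torch.rand(input_ids.shape, generator=g)
    corrupted[selected & (r < 0.8)] = mask_token_id                      # 80% -> [MASK]
    random_ids = torch.randint(0, vocab_size, input_ids.shape, generator=g)
    replace = selected & (r >= 0.8) & (r < 0.9)                          # 10% -> random
    corrupted[replace] = random_ids[replace]                             # 10% unchanged
    return corrupted, labels

ids = torch.randint(5, 100, (2, 12))              # toy token ids (0-4 reserved)
corrupted, labels = mlm_mask(ids, mask_token_id=4, vocab_size=100)
print(corrupted[0].tolist(), labels[0].tolist())
```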
Motivation: Enhancers are important cis-regulatory elements that regulate a wide range of biological functions and enhance the transcription of target genes. Although many state-of-the-art computational methods have been proposed in order to efficiently identify enhancers, learning globally contextual features is still one of the challenges for computational methods. Given the similarities between biological sequences and natural language sentences, novel BERT-based language techniques have been applied to extracting complex contextual features in various computational biology tasks such as protein function/structure prediction. To speed up research on enhancer identification, it is urgent to construct a BERT-based enhancer language model. Results: In this paper, we propose a multi-scale enhancer identification method (iEnhancer-ELM) based on enhancer language models, which treats enhancer sequences as natural language sentences composed of k-mer nucleotides. iEnhancer-ELM can extract contextual information of multi-scale k-mers with positions from raw enhancer sequences. Benefiting from the complementary information of k-mers at multiple scales, we ensemble four iEnhancer-ELM models to improve enhancer identification. The benchmark comparisons show that our model outperforms state-of-the-art methods. Through the interpretable attention mechanism, we find 30 biological patterns, of which 40% (12/30) are verified by a widely used motif tool (STREME) and a popular dataset (JASPAR), demonstrating that our model has the potential to reveal the biological mechanisms of enhancers. Availability: The source code is available at https://github.com/chen-bioinfo/iEnhancer-ELM Contact: junjiechen@hit.edu.cn and junjie.chen.hit@gmail.com; Supplementary information: Supplementary data are available at Bioinformatics online.
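The k-mer view of an enhancer sequence described above can be sketched as follows; the k values and example sequence are illustrative.

```python
# Sketch of the k-mer view of a DNA sequence: an enhancer is rewritten as
# overlapping k-mer "words", which can then be fed to a BERT-style model.
# The k values and the example sequence are illustrative.
def kmer_tokens(sequence, k):
    """Overlapping k-mers with a stride of 1, e.g. k=3: ACGTA -> ACG CGT GTA."""
    return [sequence[i:i + k] for i in range(len(sequence) - k + 1)]

enhancer = "ACGTAGCTAGGCTA"
for k in (3, 4, 5, 6):                    # multi-scale view used by the ensemble
    print(k, " ".join(kmer_tokens(enhancer, k)))
```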
Influenza viruses mutate rapidly and can pose a threat to public health, especially to people in vulnerable groups. Throughout history, influenza A viruses have caused pandemics across different species. It is important to identify the origin of a virus in order to prevent the spread of an outbreak. Recently, there has been growing interest in using machine learning algorithms to provide fast and accurate predictions for viral sequences. In this study, real test datasets and a variety of evaluation metrics were used to assess machine learning algorithms at different taxonomic levels. Because hemagglutinin is the major protein in the immune response, only hemagglutinin sequences were used, represented by position-specific scoring matrices and by word embeddings. The results show that the 5-gram transformer neural network is the most effective algorithm for predicting the origin of viral sequences, with approximately 99.54% AUCPR, 98.01% F1 score, and 96.60% MCC at the higher taxonomic level, and approximately 94.74% AUCPR, 87.41% F1 score, and 80.79% MCC at the lower taxonomic level.
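For reference, the reported metrics (AUCPR, F1 score, MCC) can be computed with scikit-learn as sketched below on a made-up binary example.

```python
# Sketch of the reported evaluation metrics on a toy binary example: area under
# the precision-recall curve (AUCPR), F1 score, and Matthews correlation
# coefficient (MCC). The labels and scores below are made up.
from sklearn.metrics import average_precision_score, f1_score, matthews_corrcoef

y_true  = [1, 0, 1, 1, 0, 0, 1, 0]
y_score = [0.92, 0.10, 0.80, 0.65, 0.30, 0.45, 0.70, 0.20]   # model probabilities
y_pred  = [1 if s >= 0.5 else 0 for s in y_score]

print("AUCPR:", average_precision_score(y_true, y_score))
print("F1:   ", f1_score(y_true, y_pred))
print("MCC:  ", matthews_corrcoef(y_true, y_pred))
```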
The development of deep neural networks has improved representation learning in various domains, including textual, graph structural, and relational triple representations. This development opened the door to new relation extraction beyond the traditional text-oriented relation extraction. However, research on the effectiveness of considering multiple heterogeneous domain information simultaneously is still under exploration, and if a model can take an advantage of integrating heterogeneous information, it is expected to exhibit a significant contribution to many problems in the world. This thesis works on Drug-Drug Interactions (DDIs) from the literature as a case study and realizes relation extraction utilizing heterogeneous domain information. First, a deep neural relation extraction model is prepared and its attention mechanism is analyzed. Next, a method to combine the drug molecular structure information and drug description information to the input sentence information is proposed, and the effectiveness of utilizing drug molecular structures and drug descriptions for the relation extraction task is shown. Then, in order to further exploit the heterogeneous information, drug-related items, such as protein entries, medical terms and pathways are collected from multiple existing databases and a new data set in the form of a knowledge graph (KG) is constructed. A link prediction task on the constructed data set is conducted to obtain embedding representations of drugs that contain the heterogeneous domain information. Finally, a method that integrates the input sentence information and the heterogeneous KG information is proposed. The proposed model is trained and evaluated on a widely used data set, and as a result, it is shown that utilizing heterogeneous domain information significantly improves the performance of relation extraction from the literature.
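The link-prediction step that produces KG-informed drug embeddings can be illustrated with a TransE-style objective (one common choice; the thesis may use a different embedding model), as in the sketch below with toy entities and relations.

```python
# Sketch of knowledge-graph embedding via a TransE-style objective (one common
# choice for link prediction; the thesis may use a different model). Entities,
# relations and triples are toy examples; scores are -||h + r - t||.
import torch
import torch.nn as nn

entities = ["aspirin", "warfarin", "CYP2C9", "bleeding"]
relations = ["interacts_with", "metabolized_by", "causes"]
triples = [("aspirin", "interacts_with", "warfarin"),
           ("warfarin", "metabolized_by", "CYP2C9"),
           ("warfarin", "causes", "bleeding")]

e_idx = {e: i for i, e in enumerate(entities)}
r_idx = {r: i for i, r in enumerate(relations)}
dim = 16
ent_emb = nn.Embedding(len(entities), dim)
rel_emb = nn.Embedding(len(relations), dim)

def score(h, r, t):
    vec = ent_emb(torch.tensor(e_idx[h])) + rel_emb(torch.tensor(r_idx[r])) \
          - ent_emb(torch.tensor(e_idx[t]))
    return -vec.norm()

# Margin loss pushing true triples above tail-corrupted ones by a margin of 1.0.
optim = torch.optim.Adam(list(ent_emb.parameters()) + list(rel_emb.parameters()), lr=0.01)
for _ in range(100):
    loss = sum(torch.relu(1.0 + score(h, r, "aspirin") - score(h, r, t))
               for h, r, t in triples if t != "aspirin")
    optim.zero_grad(); loss.backward(); optim.step()

print(score("warfarin", "causes", "bleeding").item())   # drug vectors now carry KG context
```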
Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts all text-based language problems into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled data sets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new "Colossal Clean Crawled Corpus", we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our data set, pre-trained models, and code.
Computational methods that operate on three-dimensional molecular structures have the potential to solve important questions in biology and chemistry. In particular, deep neural networks have attracted considerable attention, but their widespread adoption in the biomolecular domain has been limited by a lack of systematic performance benchmarks and of a unified toolkit for interacting with molecular data. To address this, we present ATOM3D, a collection of both novel and existing benchmark datasets spanning several key classes of biomolecules. We implement several classes of three-dimensional molecular learning methods for each of these tasks and show that they consistently improve performance over methods based on one- and two-dimensional representations. The specific choice of architecture proves critical for performance: three-dimensional convolutional networks excel at tasks involving complex geometries, graph networks perform well on systems requiring detailed positional information, and the more recently developed equivariant networks show significant promise. Our results suggest that many molecular problems stand to gain from three-dimensional molecular learning, and that there is potential for improvement on many tasks that remain under-explored. To lower the barrier to entry and facilitate further development in the field, we also provide a comprehensive suite of tools for dataset processing, model training, and evaluation in our open-source ATOM3D Python package. All datasets can be downloaded from https://www.atom3d.ai.
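One common way of presenting atomic structure to a three-dimensional convolutional network is to bin atoms into a voxel grid with one channel per element type, as sketched below; the grid size, resolution, and coordinates are illustrative.

```python
# Sketch of one common way to feed atomic structure to a 3D convolutional
# network: atoms are binned into a voxel grid with one channel per element
# type. Grid size, resolution and the toy coordinates are illustrative.
import numpy as np

def voxelize(coords, elements, channels=("C", "N", "O"), grid=16, resolution=1.0):
    vol = np.zeros((len(channels), grid, grid, grid), dtype=np.float32)
    center = coords.mean(axis=0)
    for xyz, el in zip(coords, elements):
        if el not in channels:
            continue
        i, j, k = ((xyz - center) / resolution + grid // 2).astype(int)
        if 0 <= i < grid and 0 <= j < grid and 0 <= k < grid:
            vol[channels.index(el), i, j, k] += 1.0
    return vol

coords = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0], [1.5, 1.2, 0.0], [-1.0, 0.5, 2.0]])
elements = ["C", "C", "O", "N"]
print(voxelize(coords, elements).shape)     # (3, 16, 16, 16), ready for a 3D CNN
```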
Natural language processing (NLP) is an area of artificial intelligence that applies information technologies to process human language, understand it to a certain degree, and use it in various applications. The field has developed rapidly in recent years and now employs modern variants of deep neural networks to extract relevant patterns from large text corpora. The main objective of this work is to survey the recent use of NLP in the field of pharmacology. As our work shows, NLP is a highly relevant information-extraction and processing approach for pharmacology. It has been used extensively, from intelligent searching through thousands of medical documents to finding traces of adverse drug interactions in social media. We split our coverage into five categories to survey modern NLP methodology, common tasks, relevant textual data, knowledge bases, and useful programming libraries. We divide each of these five categories into appropriate subcategories, describe their main properties and ideas, and summarize them in tabular form. The resulting survey presents a comprehensive overview of the field that is useful for practitioners and interested observers.
Motivation: The development of novel compounds targeting proteins of interest is one of the most important tasks in the pharmaceutical industry. Deep generative models have been applied to targeted molecular design and have shown promising results. Recently, target-specific molecule generation has been viewed as translation between the protein language and the chemical language. However, such models are limited by the availability of interacting protein-ligand pairs. On the other hand, large amounts of unlabeled protein sequences and chemical compounds are available and have been used to train language models that learn useful representations. In this study, we propose to exploit pre-trained biochemical language models to initialize (i.e., warm-start) targeted molecule generation models. We investigate two warm-start strategies: (i) a one-stage strategy, in which the initialized model is trained directly on targeted molecule generation, and (ii) a two-stage strategy, which includes pre-training on molecule generation followed by target-specific training. We also compare two decoding strategies for generating compounds: beam search and sampling. Results: The results show that the warm-started models outperform a baseline model trained from scratch. The two proposed warm-start strategies achieve similar results to each other with respect to widely used benchmark metrics. However, docking evaluation of the compounds generated for a number of novel proteins suggests that the one-stage strategy generalizes better than the two-stage strategy. In addition, we observe that beam search outperforms sampling in both the docking evaluation and the benchmark metrics used to assess compound quality. Availability and implementation: The source code is available at https://github.com/boun-tabi/biochemical-lms-for-drug-design and the materials are archived on Zenodo at https://doi.org/10.5281/zenodo.6832145
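The two decoding strategies compared above can be contrasted on a toy character-level "molecule" model, as sketched below; the next-token distribution is made up purely to illustrate beam search versus sampling.

```python
# Toy contrast between beam search and sampling for sequence decoding. The
# next-token distribution below is fabricated for illustration; in practice it
# would come from the trained (warm-started) molecule generation model.
import math
import random

VOCAB = ["C", "O", "N", "(", ")", "=", "<eos>"]

def toy_next_token_probs(prefix):
    # Fake conditional distribution: the chance of ending grows with length.
    p_eos = min(0.8, 0.1 * (len(prefix) + 1))
    rest = (1.0 - p_eos) / (len(VOCAB) - 1)
    return {t: (p_eos if t == "<eos>" else rest) for t in VOCAB}

def beam_search(width=3, max_len=8):
    beams = [([], 0.0)]                                   # (tokens, log-prob)
    for _ in range(max_len):
        candidates = []
        for seq, logp in beams:
            if seq and seq[-1] == "<eos>":
                candidates.append((seq, logp))
                continue
            for tok, p in toy_next_token_probs(seq).items():
                candidates.append((seq + [tok], logp + math.log(p)))
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:width]
    return ["".join(t for t in seq if t != "<eos>") for seq, _ in beams]

def sample(max_len=8, seed=0):
    random.seed(seed)
    seq = []
    for _ in range(max_len):
        probs = toy_next_token_probs(seq)
        tok = random.choices(list(probs), weights=list(probs.values()))[0]
        if tok == "<eos>":
            break
        seq.append(tok)
    return "".join(seq)

print("beam search:", beam_search())
print("sampled:    ", sample())
```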
Large-scale self-supervised pre-training of transformer language models has advanced the field of natural language processing and shown promise when applied to the biological "languages" of proteins and DNA. Learning effective representations of DNA sequences from large genomic corpora could accelerate the development of models of gene regulation through transfer learning. However, to accurately model cell-type-specific gene regulation and function, it is necessary to consider not only the information contained in the DNA nucleotide sequence, which is largely invariant across cell types, but also the local chemical and structural "epigenetic state" of chromosomes, which varies between cell types. Here we introduce a Bidirectional Encoder Representations from Transformers (BERT) model that learns representations from DNA sequence and paired epigenetic state inputs, which we call Epigenomic BERT (or EBERT). We pre-train EBERT with a masked language model objective across the entire human genome and 127 cell types. Training such a complex model was made possible for the first time through a partnership with Cerebras Systems, whose CS-1 system powered all pre-training experiments. We demonstrate EBERT's transfer learning potential by showing strong performance on a cell-type-specific transcription factor binding prediction task. Our fine-tuned model exceeds state-of-the-art performance on 4 of 13 evaluation datasets from the ENCODE-DREAM benchmark and ranks 3rd overall on the challenge leaderboard. We explore how the inclusion of epigenetic data and task-specific feature augmentation affect transfer learning performance.
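One way paired inputs could be combined in an EBERT-style model is to sum, at each position, the DNA token embedding with an embedding of its (discretized) epigenetic state before a standard transformer encoder; the sketch below uses assumed state vocabularies and sizes.

```python
# Sketch of how paired inputs could be combined for an EBERT-style model: the
# token embedding of each DNA position is summed with an embedding of its
# epigenetic state before entering a standard transformer encoder. The state
# vocabulary, sizes, and the summation scheme are illustrative assumptions.
import torch
import torch.nn as nn

class PairedInputEmbedding(nn.Module):
    def __init__(self, n_nucleotides=5, n_states=16, d_model=64):
        super().__init__()
        self.dna = nn.Embedding(n_nucleotides, d_model)     # A, C, G, T, N
        self.state = nn.Embedding(n_states, d_model)        # discretized epigenetic state
        self.pos = nn.Embedding(512, d_model)

    def forward(self, dna_ids, state_ids):
        positions = torch.arange(dna_ids.size(1), device=dna_ids.device)
        return self.dna(dna_ids) + self.state(state_ids) + self.pos(positions)

embed = PairedInputEmbedding()
encoder = nn.TransformerEncoder(nn.TransformerEncoderLayer(64, 4, batch_first=True), 2)
dna_ids = torch.randint(0, 5, (1, 128))     # toy sequence window
state_ids = torch.randint(0, 16, (1, 128))  # toy per-base epigenetic labels
print(encoder(embed(dna_ids, state_ids)).shape)   # torch.Size([1, 128, 64])
```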
In genomic biology research, modeling the regulatory genome is an important topic underlying many regulatory downstream tasks, such as promoter classification and transcription factor binding site prediction. The core problem is to model how regulatory elements interact with each other and how this varies across different cell types. However, current deep learning methods typically focus on modeling genome sequences for a fixed set of cell types and do not account for the interactions among multiple regulatory elements, so they perform well only on the cell types in the training set and lack the generalization required for biological applications. In this work, we propose a simple yet effective approach for pre-training on genomic data in a multi-modal and self-supervised manner, which we call GeneBERT. Specifically, we simultaneously take 1D genomic data and a 2D matrix (transcription factors x regions) as input, and propose three pre-training tasks to improve the robustness and generalizability of the model. We pre-train our model on an ATAC-seq dataset with 17 million genomic sequences. We evaluate GeneBERT on regulatory downstream tasks across different cell types, including promoter classification, transcription factor binding site prediction, disease risk estimation, and splice site prediction. Extensive experiments demonstrate the effectiveness of multi-modal, self-supervised pre-training on large-scale regulatory genomics data.
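A multi-modal encoder in this spirit can be sketched by embedding the 1D sequence token-wise and encoding the 2D transcription-factor-by-region matrix with a small CNN before fusing the two representations; the sizes and fusion-by-concatenation choice below are assumptions.

```python
# Sketch of a multi-modal encoder in the spirit of the description above: a 1D
# genomic sequence is embedded token-wise while the 2D (transcription factor x
# region) matrix is encoded by a small CNN; the two representations are then
# concatenated. All sizes and the fusion-by-concatenation choice are assumptions.
import torch
import torch.nn as nn

class MultiModalGenomeEncoder(nn.Module):
    def __init__(self, d_model=64):
        super().__init__()
        self.seq_emb = nn.Embedding(5, d_model)                  # A, C, G, T, N
        self.seq_enc = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, 4, batch_first=True), 2)
        self.matrix_enc = nn.Sequential(                          # 2D TF-by-region map
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, d_model))

    def forward(self, seq_ids, tf_region_matrix):
        seq_repr = self.seq_enc(self.seq_emb(seq_ids)).mean(dim=1)
        mat_repr = self.matrix_enc(tf_region_matrix.unsqueeze(1))
        return torch.cat([seq_repr, mat_repr], dim=-1)            # fused representation

model = MultiModalGenomeEncoder()
seq_ids = torch.randint(0, 5, (2, 100))     # toy 1D sequence windows
tf_region = torch.rand(2, 32, 20)           # toy TF-by-region matrices
print(model(seq_ids, tf_region).shape)      # torch.Size([2, 128])
```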
We leverage deep sequential models to tackle the problem of predicting patients' healthcare utilization, which could help governments allocate resources for future healthcare use more effectively. Specifically, we study the problem of divergent subgroups, in which the outcome distribution of a smaller population subgroup deviates considerably from that of the general population. The traditional approach of building specialized models for divergent subgroups can be problematic when the subgroup is very small (for example, in rare diseases). To address this challenge, we first develop a novel attention-free sequential model, SANSformers, instilled with inductive biases suited to modeling clinical codes in electronic health records. We then design a task-specific self-supervision objective and demonstrate its effectiveness, particularly in scarce-data settings, by pre-training each model on the entire health registry (with close to one million patients) before fine-tuning on downstream tasks for the divergent subgroups. We compare the new SANSformer architecture with LSTM and transformer models using two data sources and a multi-task learning objective that aids healthcare utilization prediction. Empirically, the attention-free SANSformer models perform consistently well across experiments, outperforming the baselines in most cases by at least ~10%. Furthermore, self-supervised pre-training improves performance throughout, for example by over ~50% (and up to 800%) when predicting the number of hospital visits.
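An attention-free sequence block in this spirit can be sketched with position-wise MLP mixing over the visit dimension instead of self-attention; the example below is a generic MLP-mixer-style stand-in, not the published SANSformer architecture.

```python
# Sketch of an attention-free sequence block: information is mixed across
# visits with a position-wise MLP over the sequence dimension instead of
# self-attention. This is a generic MLP-mixer-style stand-in, not the
# published SANSformer architecture.
import torch
import torch.nn as nn

class AttentionFreeBlock(nn.Module):
    def __init__(self, seq_len, d_model):
        super().__init__()
        self.norm1 = nn.LayerNorm(d_model)
        self.mix_seq = nn.Sequential(nn.Linear(seq_len, seq_len), nn.GELU(),
                                     nn.Linear(seq_len, seq_len))
        self.norm2 = nn.LayerNorm(d_model)
        self.mix_feat = nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                                      nn.Linear(4 * d_model, d_model))

    def forward(self, x):                     # x: (batch, visits, d_model)
        x = x + self.mix_seq(self.norm1(x).transpose(1, 2)).transpose(1, 2)
        return x + self.mix_feat(self.norm2(x))

codes = torch.randint(0, 500, (8, 32))        # 8 patients, 32 visit-level code ids (toy)
x = nn.Embedding(500, 64)(codes)
print(AttentionFreeBlock(seq_len=32, d_model=64)(x).shape)   # (8, 32, 64)
```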
Motivation: Biomedical text mining is becoming increasingly important as the number of biomedical documents rapidly grows. With the progress in natural language processing (NLP), extracting valuable information from biomedical literature has gained popularity among researchers, and deep learning has boosted the development of effective biomedical text mining models. However, directly applying the advancements in NLP to biomedical text mining often yields unsatisfactory results due to a word distribution shift from general domain corpora to biomedical corpora. In this article, we investigate how the recently introduced pre-trained language model BERT can be adapted for biomedical corpora. Results: We introduce BioBERT (Bidirectional Encoder Representations from Transformers for Biomedical Text Mining), which is a domain-specific language representation model pre-trained on large-scale biomedical corpora. With almost the same architecture across tasks, BioBERT largely outperforms BERT and previous state-of-the-art models in a variety of biomedical text mining tasks when pre-trained on biomedical corpora. While BERT obtains performance comparable to that of previous state-of-the-art models, BioBERT significantly outperforms them on the following three representative biomedical text mining tasks: biomedical named entity recognition (0.62% F1 score improvement), biomedical relation extraction (2.80% F1 score improvement) and biomedical question answering (12.24% MRR improvement). Our analysis results show that pre-training BERT on biomedical corpora helps it to understand complex biomedical texts.
Language models pre-trained on biomedical corpora, such as BioBERT, have recently shown promising results on downstream biomedical tasks. On the other hand, many existing pre-trained models are resource-intensive and computationally heavy owing to factors such as embedding size, hidden dimension, and number of layers. The natural language processing (NLP) community has developed many strategies for compressing these models using techniques such as pruning, quantization, and knowledge distillation, resulting in models that are faster, smaller, and subsequently easier to use. Likewise, in this paper we introduce six lightweight models, namely BioDistilBERT, BioTinyBERT, BioMobileBERT, DistilBioBERT, TinyBioBERT, and CompactBioBERT, obtained through knowledge distillation and masked language modelling (MLM) pre-training on the PubMed dataset. We evaluate all of our models on three biomedical tasks and compare them with BioBERT-v1.1 to create efficient lightweight models that perform on par with their larger counterparts. All models will be publicly available on our Hugging Face profile at https://huggingface.co/nlpie, and the code used to run the experiments will be available at https://github.com/nlpie-research/compact-biomedical-transformers.
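The distillation idea behind such compact models can be sketched as training the student to match the teacher's softened output distribution in addition to the usual hard-label loss; the temperature, weighting, and toy logits below are illustrative, and each released model uses its own recipe.

```python
# Sketch of the distillation idea behind compact models: the student is trained
# to match the teacher's softened MLM output distribution (KL term) in addition
# to the usual hard-label cross-entropy. Temperature, weighting and the toy
# logits are illustrative.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, temperature=2.0, alpha=0.5):
    soft = F.kl_div(F.log_softmax(student_logits / temperature, dim=-1),
                    F.softmax(teacher_logits / temperature, dim=-1),
                    reduction="batchmean") * temperature**2
    hard = F.cross_entropy(student_logits, labels, ignore_index=-100)
    return alpha * soft + (1 - alpha) * hard

# Toy masked-token batch: 4 masked positions, vocabulary of 30k sub-words.
student = torch.randn(4, 30000, requires_grad=True)
teacher = torch.randn(4, 30000)
labels = torch.randint(0, 30000, (4,))
loss = distillation_loss(student, teacher, labels)
loss.backward()
print(float(loss))
```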