Graph Neural Networks (GNNs) have revolutionized molecular discovery by learning patterns and identifying previously unknown features that aid in predicting biophysical properties and protein-ligand interactions. However, current models typically rely on 2-dimensional molecular representations as input, and while the use of 2D/3D structural data has gained deserved traction in recent years, many of these models are still limited to static graph representations. We propose a novel transformer-based approach utilizing GNNs to characterize the dynamic features of protein-ligand interactions. Our message-passing transformer pre-trains on molecular dynamics data derived from physics-based simulations to learn coordinate construction, and makes binding probability and affinity predictions as a downstream task. In extensive testing against existing models, our MDA-PLI model outperformed molecular interaction prediction models with an RMSE of 1.2958. The geometric encodings enabled by our transformer architecture and the addition of time-series data add a new dimensionality to this line of research.
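As a rough illustration of the pre-training objective sketched above (learning to construct coordinates from molecular dynamics frames), the following PyTorch snippet shows one plausible form of such an objective: an assumed encoder produces per-atom embeddings, and a small head regresses 3D coordinates from an MD snapshot. All module names, dimensions, and the MSE loss are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class CoordHead(nn.Module):
    """Toy pre-training head: predict 3D coordinates from per-atom embeddings.

    The GNN/transformer encoder producing `node_emb` is assumed; any encoder
    that maps a molecular graph to per-atom vectors would fit here.
    """
    def __init__(self, emb_dim: int = 128):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(emb_dim, emb_dim), nn.SiLU(),
                                 nn.Linear(emb_dim, 3))

    def forward(self, node_emb: torch.Tensor) -> torch.Tensor:
        return self.mlp(node_emb)              # (num_atoms, 3) predicted coordinates

def coord_reconstruction_loss(pred_xyz, true_xyz):
    # MSE between predicted coordinates and an MD-snapshot frame (a common proxy objective).
    return ((pred_xyz - true_xyz) ** 2).mean()

# Dummy example: 20 atoms with 128-dim embeddings from some assumed encoder.
node_emb = torch.randn(20, 128)
true_xyz = torch.randn(20, 3)                  # coordinates taken from an MD frame
loss = coord_reconstruction_loss(CoordHead()(node_emb), true_xyz)
loss.backward()
```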
Protein-ligand interactions (PLIs) are fundamental to biochemical research, and their identification is crucial for estimating the biophysical and biochemical properties needed for rational therapeutic design. At present, experimental characterization of these properties is the most accurate approach; however, it is extremely time-consuming and labor-intensive. Many computational methods have been developed in this context, but most existing PLI predictors depend heavily on 2D protein sequence data. Here, we present a novel parallel graph neural network (GNN) that integrates knowledge representation and reasoning for PLI prediction, enabling deep learning that is guided by expert knowledge and informed by 3D structural data. We developed two distinct GNN architectures: GNNF, a base implementation that employs distinct featurization to enhance domain awareness, and GNNP, a novel implementation that can predict interactions without prior knowledge of intermolecular interactions. Comprehensive evaluation demonstrates that the GNNs successfully capture the binary interactions between ligand and protein 3D structures, with high test accuracy for GNNF and 0.958 for GNNP in predicting the activity of protein-ligand complexes. These models were further adapted to a regression task to predict experimental binding affinities and pIC50, which are crucial for drug potency and efficacy. We achieve Pearson correlation coefficients of 0.66 and 0.65 on experimental affinity and 0.50 and 0.51 on pIC50 with GNNF and GNNP, respectively, outperforming 2D sequence-based models. Our approach can serve as an interpretable and explainable artificial intelligence (AI) tool for predicting the activity, potency, and biophysical properties of lead candidates. To this end, we demonstrate the utility of GNNP on SARS-CoV-2 protein targets by screening a large compound library and comparing our predictions with experimentally measured data.
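The "parallel" two-branch idea (one graph encoder per molecule, fused for a joint prediction) can be caricatured as follows. This is a generic toy model under assumed feature sizes and dense adjacency matrices, not the GNNF/GNNP architectures themselves.

```python
import torch
import torch.nn as nn

class TinyGNN(nn.Module):
    """One round of mean-aggregation message passing over a dense adjacency matrix."""
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, hid_dim)

    def forward(self, x, adj):
        deg = adj.sum(-1, keepdim=True).clamp(min=1)
        h = self.lin((adj @ x) / deg)            # aggregate neighbours, then transform
        return torch.relu(h).mean(dim=0)         # graph-level embedding (mean pooling)

class ParallelPLI(nn.Module):
    """Hypothetical two-branch model: ligand graph + protein graph -> one score."""
    def __init__(self, lig_dim, prot_dim, hid_dim=64):
        super().__init__()
        self.lig_gnn = TinyGNN(lig_dim, hid_dim)
        self.prot_gnn = TinyGNN(prot_dim, hid_dim)
        self.head = nn.Linear(2 * hid_dim, 1)    # binding logit or affinity value

    def forward(self, lig_x, lig_adj, prot_x, prot_adj):
        z = torch.cat([self.lig_gnn(lig_x, lig_adj),
                       self.prot_gnn(prot_x, prot_adj)])
        return self.head(z)

lig_x, lig_adj = torch.randn(12, 9), torch.eye(12)
prot_x, prot_adj = torch.randn(40, 21), torch.eye(40)
score = ParallelPLI(9, 21)(lig_x, lig_adj, prot_x, prot_adj)
```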
Geometric deep learning has recently achieved great success in non-Euclidean domains, and learning on 3D structures of large biomolecules is emerging as a distinct research area. However, its efficacy is largely constrained by the limited quantity of structural data. Meanwhile, protein language models trained on substantial 1D sequences have shown burgeoning capabilities with scale across a broad range of applications. Nevertheless, no preceding studies consider combining these different protein modalities to promote the representation power of geometric neural networks. To address this gap, we take the first step toward integrating the knowledge learned by well-trained protein language models into several state-of-the-art geometric networks. We evaluate on a variety of protein representation learning benchmarks, including protein-protein interface prediction, model quality assessment, protein-protein rigid-body docking, and binding affinity prediction, achieving an overall improvement of 20% over baselines and new state-of-the-art performance. Strong evidence indicates that incorporating protein language models' knowledge enhances geometric networks' capacity by a significant margin and generalizes to complex tasks.
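The integration step can be illustrated, in its simplest conceivable form, as concatenating per-residue language-model embeddings with geometric node features before they enter the geometric network. The module below is an assumed sketch; the dimensions and the fusion-by-concatenation choice are illustrative, not taken from the paper.

```python
import torch
import torch.nn as nn

class PLMFusedNodeEncoder(nn.Module):
    """Fuse per-residue language-model embeddings with geometric node features.

    `plm_emb` stands in for embeddings from a pre-trained protein language model
    (one vector per residue); how they are produced is out of scope here.
    """
    def __init__(self, geo_dim: int, plm_dim: int, out_dim: int):
        super().__init__()
        self.proj = nn.Linear(geo_dim + plm_dim, out_dim)

    def forward(self, geo_feat: torch.Tensor, plm_emb: torch.Tensor) -> torch.Tensor:
        # (num_residues, out_dim) fused node features for the geometric network
        return self.proj(torch.cat([geo_feat, plm_emb], dim=-1))

# 120-residue protein: 16-dim geometric features + 1280-dim PLM embeddings (placeholder sizes).
fused = PLMFusedNodeEncoder(16, 1280, 128)(torch.randn(120, 16), torch.randn(120, 1280))
```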
Molecular representation learning (MRL) is a key step in building the connection between machine learning and the chemical sciences. In particular, it encodes molecules as numerical vectors that preserve molecular structure and features, on top of which downstream tasks (e.g., property prediction) can be performed. Recently, MRL has made considerable progress, especially with methods based on deep molecular graph learning. In this survey, we systematically review these graph-based molecular representation techniques. Specifically, we first introduce the data and features of 2D and 3D graph molecular datasets. We then summarize the methods designed specifically for MRL and group them into four strategies. Furthermore, we discuss some typical chemical applications supported by MRL. To facilitate research in this rapidly developing field, we also list the benchmarks and commonly used datasets in the paper. Finally, we share our thoughts on future research directions.
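Since the survey centers on encoding molecules as graphs, a minimal 2D molecular-graph featurization sketch may help ground the terminology. It uses RDKit with a deliberately small, assumed atom-feature set; real MRL pipelines use richer featurizations.

```python
import numpy as np
from rdkit import Chem

def mol_to_graph(smiles: str):
    """Encode a molecule as (node feature matrix, edge index) for graph-based MRL."""
    mol = Chem.MolFromSmiles(smiles)
    # Minimal atom features: atomic number, degree, formal charge, aromaticity flag.
    x = np.array([[a.GetAtomicNum(), a.GetDegree(),
                   a.GetFormalCharge(), int(a.GetIsAromatic())]
                  for a in mol.GetAtoms()], dtype=np.float32)
    # Each undirected bond is stored as two directed edges.
    edges = []
    for b in mol.GetBonds():
        i, j = b.GetBeginAtomIdx(), b.GetEndAtomIdx()
        edges += [(i, j), (j, i)]
    return x, np.array(edges, dtype=np.int64)

x, edge_index = mol_to_graph("CC(=O)Oc1ccccc1C(=O)O")  # aspirin
```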
The field of geometric deep learning has had a profound impact on the development of innovative and powerful graph neural network architectures. Disciplines such as computer vision and computational biology have benefited significantly from such methodological advances, which has led to breakthroughs in scientific domains such as protein structure prediction and design. In this work, we introduce GCPNet, a new geometry-complete, SE(3)-equivariant graph neural network designed for 3D graph representation learning. We demonstrate the state-of-the-art utility and expressiveness of our method on six independent datasets designed for three distinct geometric tasks: protein-ligand binding affinity prediction, protein structure ranking, and Newtonian many-body systems modeling. Our results suggest that GCPNet is a powerful, general method for capturing complex geometric and physical interactions within 3D graphs for downstream prediction tasks. The source code, data, and instructions to train new models or reproduce our results are freely available on GitHub.
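As a point of reference for what an SE(3)/E(3)-equivariant 3D graph layer does, here is an EGNN-style layer (in the spirit of Satorras et al.); it is named explicitly because it is not the GCPNet layer, only a compact stand-in for this class of updates. Coordinates move along relative position vectors weighted by invariant messages, so rotating or translating the input transforms the output accordingly.

```python
import torch
import torch.nn as nn

class EGNNLayer(nn.Module):
    """One EGNN-style E(3)-equivariant layer over an all-pairs (dense) graph."""
    def __init__(self, dim):
        super().__init__()
        self.phi_e = nn.Sequential(nn.Linear(2 * dim + 1, dim), nn.SiLU())
        self.phi_x = nn.Linear(dim, 1, bias=False)
        self.phi_h = nn.Sequential(nn.Linear(2 * dim, dim), nn.SiLU())

    def forward(self, h, x):
        # Messages built only from invariant inputs (h_i, h_j, squared distance).
        diff = x[:, None, :] - x[None, :, :]                   # (N, N, 3) relative vectors
        d2 = (diff ** 2).sum(-1, keepdim=True)                 # rotation-invariant
        hi = h[:, None, :].expand(-1, h.size(0), -1)
        hj = h[None, :, :].expand(h.size(0), -1, -1)
        m = self.phi_e(torch.cat([hi, hj, d2], dim=-1))        # (N, N, dim)
        # Coordinates update along relative vectors -> the update stays equivariant.
        x_new = x + (diff * self.phi_x(m)).mean(dim=1)
        h_new = self.phi_h(torch.cat([h, m.sum(dim=1)], dim=-1))
        return h_new, x_new

h, x = torch.randn(8, 32), torch.randn(8, 3)
h, x = EGNNLayer(32)(h, x)
```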
Computational methods that operate on three-dimensional molecular structures have the potential to solve important problems in biology and chemistry. In particular, deep neural networks have received significant attention, but their widespread adoption in the biomolecular domain has been limited by the lack of systematic performance benchmarks or a unified toolkit for interacting with molecular data. To address this, we present ATOM3D, a collection of both novel and existing benchmark datasets spanning several key classes of biomolecules. We implement several types of three-dimensional molecular learning methods for each of these tasks and show that they consistently improve performance over methods based on one- and two-dimensional representations. The specific choice of architecture proves important for performance, with three-dimensional convolutional networks excelling at tasks involving complex geometries, graph networks performing well on systems requiring detailed positional information, and the more recently developed equivariant networks showing significant promise. Our results indicate that many molecular problems stand to gain from three-dimensional molecular learning, and that there is potential for improvement on many tasks that remain underexplored. To lower the barrier to entry and facilitate further development in the field, we also provide a comprehensive suite of tools for dataset processing, model training, and evaluation in our open-source ATOM3D Python package. All datasets can be downloaded from https://www.atom3d.ai.
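A preprocessing step shared by many of the 3D learning methods benchmarked here is turning raw atomic coordinates into a graph. A minimal k-nearest-neighbour construction is sketched below with SciPy; the choice of k and the use of a k-NN (rather than radius) graph are assumptions for illustration.

```python
import numpy as np
from scipy.spatial import cKDTree

def knn_graph(coords: np.ndarray, k: int = 8):
    """Build a k-nearest-neighbour graph over 3D atom coordinates."""
    tree = cKDTree(coords)
    dist, idx = tree.query(coords, k=k + 1)     # the first neighbour is the atom itself
    src = np.repeat(np.arange(len(coords)), k)
    dst = idx[:, 1:].reshape(-1)                # drop the self-neighbour
    return np.stack([src, dst]), dist[:, 1:].reshape(-1)

edges, edge_len = knn_graph(np.random.rand(50, 3).astype(np.float32))
```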
Artificial intelligence (AI) in the form of deep learning bears promise for drug discovery and chemical biology, $\textit{e.g.}$, to predict protein structure and molecular bioactivity, plan organic synthesis, and design molecules $\textit{de novo}$. While most of the deep learning efforts in drug discovery have focused on ligand-based approaches, structure-based drug discovery has the potential to tackle unsolved challenges, such as affinity prediction for unexplored protein targets, binding-mechanism elucidation, and the rationalization of related chemical kinetic properties. Advances in deep learning methodologies and the availability of accurate predictions for protein tertiary structure advocate for a $\textit{renaissance}$ in structure-based approaches for drug discovery guided by AI. This review summarizes the most prominent algorithmic concepts in structure-based deep learning for drug discovery, and forecasts opportunities, applications, and challenges ahead.
Molecular property prediction is one of the fastest-growing applications of deep learning, with critical real-world impact. Including 3D molecular structure as input to learned models can improve their performance on many molecular tasks. However, this information is infeasible to compute at the scale required by several real-world applications. We propose pre-training a model to reason about the geometry of molecules given only their 2D molecular graphs. Using methods from self-supervised learning, we maximize the mutual information between a 3D summary vector and the representation of a graph neural network (GNN), such that the representations contain latent 3D information. During fine-tuning on molecules with unknown geometry, the GNN still generates implicit 3D information and can use it to improve downstream tasks. We show that 3D pre-training provides significant improvements for a wide range of properties, such as a 22% average reduction in MAE on eight quantum-mechanical properties. Moreover, the learned representations can be transferred effectively between datasets in different molecular spaces.
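The core self-supervised objective, maximizing agreement between a molecule's 2D-GNN representation and its 3D summary vector, is commonly instantiated as an InfoNCE-style contrastive loss, which lower-bounds mutual information. A minimal sketch follows; the temperature, normalization, and in-batch negatives are assumptions, not the paper's exact recipe.

```python
import torch
import torch.nn.functional as F

def info_nce(z2d: torch.Tensor, z3d: torch.Tensor, tau: float = 0.1) -> torch.Tensor:
    """Pull each molecule's 2D-GNN embedding toward its own 3D summary vector
    and push it away from the other molecules in the batch."""
    z2d, z3d = F.normalize(z2d, dim=-1), F.normalize(z3d, dim=-1)
    logits = z2d @ z3d.t() / tau               # (batch, batch) cosine similarities
    targets = torch.arange(z2d.size(0))        # positives lie on the diagonal
    return F.cross_entropy(logits, targets)

loss = info_nce(torch.randn(32, 128), torch.randn(32, 128))
```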
Applying deep learning concepts from image detection and graph theory has greatly advanced protein-ligand binding affinity prediction, a challenge with enormous ramifications for both drug discovery and protein engineering. We build upon these advances by designing a novel deep learning architecture consisting of a 3-dimensional convolutional neural network utilizing channel-wise attention and two graph convolutional networks utilizing attention-based aggregation of node features. HAC-Net (Hybrid Attention-Based Convolutional Neural Network) obtains state-of-the-art results on the PDBbind v.2016 core set, the most widely recognized benchmark in the field. We extensively assess the generalizability of our model using multiple train-test splits, each of which maximizes differences between either protein structures, protein sequences, or ligand extended-connectivity fingerprints. Furthermore, we perform 10-fold cross-validation with a similarity cutoff between SMILES strings of ligands in the training and test sets, and also evaluate the performance of HAC-Net on lower-quality data. We envision that this model can be extended to a broad range of supervised learning problems related to structure-based biomolecular property prediction. All of our software is available as open source at https://github.com/gregory-kyro/HAC-Net/.
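Channel-wise attention over 3D voxel features is usually realized as a squeeze-and-excitation block; the snippet below shows that generic mechanism, with an assumed reduction ratio, rather than HAC-Net's exact module.

```python
import torch
import torch.nn as nn

class ChannelAttention3D(nn.Module):
    """Squeeze-and-excitation-style channel attention for 3D (voxel) feature maps."""
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.fc = nn.Sequential(nn.Linear(channels, channels // reduction), nn.ReLU(),
                                nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x: torch.Tensor) -> torch.Tensor:   # x: (B, C, D, H, W)
        w = self.fc(x.mean(dim=(2, 3, 4)))                 # global-average "squeeze" -> (B, C)
        return x * w[:, :, None, None, None]               # re-weight each channel

out = ChannelAttention3D(32)(torch.randn(2, 32, 24, 24, 24))
```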
Many applications of machine learning require a model to make accurate predictions on test examples that are distributionally different from training ones, while task-specific labels are scarce during training. An effective approach to this challenge is to pre-train a model on related tasks where data is abundant, and then fine-tune it on a downstream task of interest. While pre-training has been effective in many language and vision domains, it remains an open question how to effectively use pre-training on graph datasets. In this paper, we develop a new strategy and self-supervised methods for pre-training Graph Neural Networks (GNNs). The key to the success of our strategy is to pre-train an expressive GNN at the level of individual nodes as well as entire graphs so that the GNN can learn useful local and global representations simultaneously. We systematically study pre-training on multiple graph classification datasets. We find that naïve strategies, which pre-train GNNs at the level of either entire graphs or individual nodes, give limited improvement and can even lead to negative transfer on many downstream tasks. In contrast, our strategy avoids negative transfer and improves generalization significantly across downstream tasks, leading to up to 9.4% absolute improvements in ROC-AUC over non-pre-trained models and achieving state-of-the-art performance for molecular property prediction and protein function prediction. Nevertheless, pre-training on graph datasets remains a hard challenge.
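One widely used node-level strategy in this line of work is attribute masking: hide some node attributes, then predict them from the GNN's contextual embeddings. A minimal sketch under an assumed feature layout (atom type stored in column 0, an MLP standing in for the GNN encoder):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def attribute_masking_loss(node_feats, encoder, predictor, mask_rate=0.15):
    """Node-level self-supervision in the spirit of attribute masking: blank out
    some node attributes, encode the corrupted graph, predict the hidden attributes.
    `encoder` stands in for any GNN that produces per-node embeddings."""
    mask = torch.rand(node_feats.size(0)) < mask_rate
    mask[0] = True                                   # ensure at least one node is masked
    corrupted = node_feats.clone()
    corrupted[mask] = 0.0                            # blank out the masked node features
    logits = predictor(encoder(corrupted)[mask])     # predict the original atom types
    targets = node_feats[mask, 0].long()             # column 0 assumed to be atom-type id
    return F.cross_entropy(logits, targets)

encoder = nn.Sequential(nn.Linear(4, 64), nn.ReLU())  # stand-in for a real GNN
predictor = nn.Linear(64, 119)                         # 119 possible atomic numbers
feats = torch.randint(1, 119, (30, 4)).float()
loss = attribute_masking_loss(feats, encoder, predictor)
```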
Geometric deep learning (GDL), based on neural network architectures that incorporate and process symmetry information, has emerged as a recent paradigm in artificial intelligence. GDL holds particular promise for molecular modeling applications, in which molecular representations with different symmetries and levels of abstraction exist. This review provides a structured and harmonized overview of molecular GDL, highlighting its applications in drug discovery, chemical synthesis prediction, and quantum chemistry. Emphasis is placed on the relevance of learned molecular features and their complementarity to established molecular descriptors. The review concludes with an outline of current challenges and opportunities, and presents a forecast of the future of GDL for the molecular sciences.
Antibody design is valuable for therapeutic use and biological research. Existing deep-learning-based methods suffer from several key issues: 1) incomplete context for generating the complementarity-determining regions (CDRs); 2) inability to capture the full 3D geometry of the input structure; 3) inefficient generation of CDR sequences in an autoregressive manner. In this paper, we propose the Multi-channel Equivariant Attention Network (MEAN), an end-to-end model capable of co-designing the 1D sequence and 3D structure of CDRs. Specifically, MEAN formulates antibody design as a conditional graph translation problem by importing extra components, including the target antigen and the light chain of the antibody. MEAN then resorts to E(3)-equivariant message passing along with a proposed attention mechanism to better capture geometric correlations between the different components. Finally, it outputs both the 1D sequence and the 3D structure via a multi-round progressive full-shot scheme, which is more efficient than previous autoregressive approaches. Our method significantly surpasses state-of-the-art models in sequence and structure modeling, antigen-binding antibody design, and binding affinity optimization. Specifically, the relative improvement is about 22% for antigen-binding CDR design and 34% for affinity optimization.
Learning expressive molecular representations is crucial for facilitating accurate prediction of molecular properties. Despite significant advances of graph neural networks (GNNs) in molecular representation learning, they generally face limitations such as neighbor explosion, under-reaching, over-smoothing, and over-squashing. GNNs also usually have high computational complexity because of their large number of parameters. These limitations typically emerge or worsen when facing relatively large graphs or when using deeper GNN architectures. One idea to overcome these problems is to simplify the molecular graph into a small, rich, and informative one that is easier and more efficient to train GNNs on. To this end, we propose a novel molecular graph coarsening framework named FunQG, which utilizes functional groups, as influential building blocks of a molecule that determine its properties, based on a graph-theoretic concept known as the quotient graph. Through experiments, we show that the resulting informative graphs are much smaller than the original molecular graphs and are therefore good candidates for training GNNs. We apply FunQG to popular molecular property prediction benchmarks and compare the performance of GNN architectures on the resulting datasets against several state-of-the-art baselines on the original datasets. Our method significantly outperforms previous baselines on various datasets, in addition to achieving a dramatic reduction in the number of parameters and low computational complexity. FunQG can therefore serve as a simple, cost-effective, and robust approach for solving the molecular representation learning problem.
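The quotient-graph coarsening at the heart of this idea can be sketched in a few lines: collapse each functional-group/fragment into a super-node and keep an edge between two groups whenever any of their member atoms were bonded. How the groups are detected is assumed here; the atom-to-group mapping is supplied directly.

```python
from collections import defaultdict

def quotient_graph(edges, node_group):
    """Coarsen a molecular graph into its quotient graph.

    `edges`      : iterable of (atom_i, atom_j) bonds.
    `node_group` : dict mapping atom index -> group id (e.g., functional group).
    Returns the group membership and the coarsened edge list.
    """
    coarse_edges = set()
    for i, j in edges:
        gi, gj = node_group[i], node_group[j]
        if gi != gj:                                  # bonds inside a group vanish
            coarse_edges.add((min(gi, gj), max(gi, gj)))
    members = defaultdict(list)
    for atom, g in node_group.items():
        members[g].append(atom)
    return dict(members), sorted(coarse_edges)

# Toy example: atoms 0-2 form group 0 (e.g., a carboxyl fragment), atoms 3-4 group 1.
groups, cedges = quotient_graph([(0, 1), (1, 2), (2, 3), (3, 4)],
                                {0: 0, 1: 0, 2: 0, 3: 1, 4: 1})
# groups -> {0: [0, 1, 2], 1: [3, 4]}, cedges -> [(0, 1)]
```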
Protein complex formation is a central problem in biology, being involved in most of the cell's processes and essential for applications such as drug design or protein engineering. We tackle rigid-body protein-protein docking, i.e., computationally predicting the 3D structure of a protein-protein complex from the individual unbound structures, assuming no conformational change within the proteins during binding. We design a novel pairwise-independent SE(3)-equivariant graph matching network to predict the rotation and translation that place one of the proteins at the right docked position relative to the second protein. We mathematically guarantee a basic principle: the predicted complex is the same regardless of the initial positions and orientations of the two structures. Our model, named EquiDock, approximates the binding pockets and predicts the docking poses using keypoint matching and alignment, achieved through optimal transport and a differentiable Kabsch algorithm. Empirically, despite not relying on heavy candidate sampling, structure refinement, or templates, we achieve significant runtime improvements and often outperform existing docking software.
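The Kabsch step mentioned above computes, from matched keypoints, the rotation and translation that best superimpose one point set onto the other in the least-squares sense. A plain NumPy version of the textbook (non-differentiable) algorithm, included only to make the math concrete:

```python
import numpy as np

def kabsch(P: np.ndarray, Q: np.ndarray):
    """Optimal rotation R and translation t superimposing point set P onto Q."""
    p_mean, q_mean = P.mean(axis=0), Q.mean(axis=0)
    H = (P - p_mean).T @ (Q - q_mean)                 # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))            # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = q_mean - R @ p_mean
    return R, t

# Sanity check: recover a known rotation + translation.
P = np.random.rand(10, 3)
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
Q = P @ R_true.T + np.array([1.0, -2.0, 0.5])
R, t = kabsch(P, Q)                                   # R ~ R_true, t ~ [1, -2, 0.5]
```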
Although substantial efforts have been made using graph neural networks (GNNs) for AI-driven drug discovery (AIDD), effective molecular representation learning remains an open challenge, especially when labeled molecules are scarce. Recent studies suggest that big GNN models pre-trained by self-supervised learning on unlabeled datasets enable better transfer performance in downstream molecular property prediction tasks. However, they often require large-scale datasets and considerable computational resources, which is time-consuming, computationally expensive, and environmentally unfriendly. To alleviate these limitations, we propose a novel pre-training model for molecular representation learning, the Bi-branch Masked Graph Transformer Autoencoder (BatmanNet). BatmanNet features two tailored and complementary graph autoencoders that reconstruct the missing nodes and edges from a masked molecular graph. Surprisingly, we found that a high masking proportion (60%) of the atoms and bonds achieved the best performance. We further propose an asymmetric graph-based encoder-decoder architecture for both nodes and edges, where a transformer-based encoder takes only the visible subset of nodes or edges, and a lightweight decoder reconstructs the original molecule from the latent representation and mask tokens. With this simple yet effective asymmetric design, BatmanNet can learn efficiently even from a much smaller unlabeled molecular dataset, capturing the underlying structural and semantic information and overcoming a major limitation of current deep neural networks for molecular representation learning. For instance, using only 250K unlabelled molecules as pre-training data, our BatmanNet with 2.575M parameters achieves a 0.5% improvement in average AUC compared with the current state-of-the-art method with 100M parameters pre-trained on 11M molecules.
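The asymmetric design, an encoder that sees only the visible nodes and a light decoder that fills masked positions with a learned token, can be caricatured as follows. Only the 60% masking ratio is taken from the abstract; all sizes and the per-node decoder with a pooled context are illustrative assumptions, not the BatmanNet architecture.

```python
import torch
import torch.nn as nn

class MaskedGraphAE(nn.Module):
    """Asymmetric masked-autoencoder sketch for node features of a molecular graph."""
    def __init__(self, feat_dim=16, hid=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(feat_dim, hid), nn.GELU())
        self.mask_token = nn.Parameter(torch.zeros(hid))
        self.decoder = nn.Linear(hid, feat_dim)

    def forward(self, x, mask_ratio=0.6):
        n = x.size(0)
        keep = torch.rand(n) >= mask_ratio               # ~40% of nodes stay visible
        z = self.mask_token.expand(n, -1).clone()
        z[keep] = self.encoder(x[keep])                  # encode the visible subset only
        context = z[keep].mean(dim=0, keepdim=True)      # visible-node summary
        recon = self.decoder(z + context)                # decoder mixes in visible info
        return ((recon[~keep] - x[~keep]) ** 2).mean()   # loss on masked nodes only

loss = MaskedGraphAE()(torch.randn(40, 16))
loss.backward()
```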
Graph neural networks have recently achieved great successes in predicting quantum mechanical properties of molecules. These models represent a molecule as a graph using only the distance between atoms (nodes). They do not, however, consider the spatial direction from one atom to another, despite directional information playing a central role in empirical potentials for molecules, e.g. in angular potentials. To alleviate this limitation we propose directional message passing, in which we embed the messages passed between atoms instead of the atoms themselves. Each message is associated with a direction in coordinate space. These directional message embeddings are rotationally equivariant since the associated directions rotate with the molecule. We propose a message passing scheme analogous to belief propagation, which uses the directional information by transforming messages based on the angle between them. Additionally, we use spherical Bessel functions and spherical harmonics to construct theoretically well-founded, orthogonal representations that achieve better performance than the currently prevalent Gaussian radial basis representations while using fewer than 1/4 of the parameters. We leverage these innovations to construct the directional message passing neural network (DimeNet). DimeNet outperforms previous GNNs on average by 76% on MD17 and by 31% on QM9. Our implementation is available online.
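Two ingredients from this abstract are easy to make concrete: the sine-type radial Bessel basis used to embed interatomic distances, and the angle between two messages that the directional transformation conditions on. A small NumPy sketch; the cutoff and basis size are assumptions for illustration.

```python
import numpy as np

def radial_bessel(d: np.ndarray, num_basis: int = 8, cutoff: float = 5.0):
    """Spherical-Bessel-style radial basis: e_n(d) = sqrt(2/c) * sin(n*pi*d/c) / d."""
    n = np.arange(1, num_basis + 1)
    return np.sqrt(2.0 / cutoff) * np.sin(np.outer(d, n) * np.pi / cutoff) / d[:, None]

def bond_angle(x_i, x_j, x_k):
    """Angle at atom j between bonds j->i and j->k -- the directional quantity
    that the message transformation conditions on."""
    a, b = x_i - x_j, x_k - x_j
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.arccos(np.clip(cos, -1.0, 1.0))

rbf = radial_bessel(np.array([1.1, 2.3, 4.0]))                                  # (3, 8)
theta = bond_angle(np.array([1.0, 0, 0]), np.zeros(3), np.array([0, 1.0, 0]))   # ~pi/2
```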
We consider representation learning for proteins with 3D structures. We build 3D graphs based on protein structures and develop graph networks to learn their representations. Depending on the level of detail we wish to capture, protein representations can be computed at different levels, e.g., the amino-acid, backbone, or all-atom level. Importantly, there exist hierarchical relations among the different levels. In this work, we propose a novel hierarchical graph network, known as ProNet, to capture these relations. Our ProNet is very flexible and can be used to compute protein representations at different levels of granularity. We show that, given a base 3D graph network that is complete, our ProNet representations are also complete at all levels. To close the loop, we develop a complete and efficient 3D graph network to serve as the base model, making our ProNet complete. We conduct experiments on multiple downstream tasks. Results show that ProNet outperforms recent methods on most datasets. In addition, the results indicate that different downstream tasks may require representations at different levels. Our code is available as part of the DIG library (\url{https://github.com/divelab/dig}).
Molecular representation learning contributes to multiple downstream tasks such as molecular property prediction and drug design. To properly represent molecules, graph contrastive learning is a promising paradigm because it utilizes self-supervised signals and requires no human annotation. However, prior works fail to incorporate fundamental domain knowledge into graph semantics and thus ignore the correlations between atoms that share common attributes but are not directly connected by bonds. To address these issues, we construct a Chemical Element Knowledge Graph (KG) to summarize the microscopic associations between elements and propose a novel Knowledge-enhanced Contrastive Learning (KCL) framework for molecular representation learning. The KCL framework consists of three modules. The first module, knowledge-guided graph augmentation, augments the original molecular graph based on the Chemical Element KG. The second module, knowledge-aware graph representation, extracts molecular representations with a common graph encoder for the original molecular graph and a Knowledge-aware Message Passing Neural Network (KMPNN) to encode the complex information in the augmented molecular graph. The final module is a contrastive objective, where we maximize agreement between these two views of the molecular graph. Extensive experiments demonstrate that KCL achieves superior performance over state-of-the-art baselines on eight molecular datasets. Visualization experiments properly interpret what KCL has learned from the atoms and attributes in the augmented molecular graph. Our code and data are available in the supplementary materials.
Computational methods for predicting the interface contacts between proteins are important for drug discovery, as they can significantly advance the accuracy of alternative approaches such as protein-protein docking, protein function analysis tools, and other computational methods for protein bioinformatics. In this work, we present the Geometric Transformer, a novel geometrically invariant graph transformer for rotation- and translation-invariant protein interface contact prediction, packaged within DeepInteract, an end-to-end prediction pipeline. DeepInteract predicts partner-specific protein interface contacts (i.e., inter-protein residue-residue contacts) given the 3D tertiary structures of two proteins as input. In rigorous benchmarking on challenging protein complex targets from the 13th and 14th CASP-CAPRI experiments as well as Docking Benchmark 5, DeepInteract achieves 14% and 1.1% top-L/5 precision (L: the length of a protein unit), respectively. In doing so, DeepInteract, with the Geometric Transformer as its graph-based backbone, outperforms existing methods for interface contact prediction as well as other graph-based neural network backbones compatible with DeepInteract, thereby validating the effectiveness of the Geometric Transformer in learning rich relational-geometric features for downstream tasks on 3D protein structures.
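The output of interface contact prediction is an (L_A x L_B) residue-pair score map, and the quoted metric, top-L/5 precision, takes the L/5 highest-scoring pairs and asks what fraction are true contacts. A hedged sketch with an arbitrary bilinear scorer (not DeepInteract's actual head) and a toy metric implementation:

```python
import torch
import torch.nn as nn

def contact_logits(res_a: torch.Tensor, res_b: torch.Tensor, scorer: nn.Bilinear):
    """Score every residue pair (i from protein A, j from protein B)."""
    La, Lb, d = res_a.size(0), res_b.size(0), res_a.size(1)
    a = res_a[:, None, :].expand(La, Lb, d).reshape(-1, d)
    b = res_b[None, :, :].expand(La, Lb, d).reshape(-1, d)
    return scorer(a, b).view(La, Lb)                      # (La, Lb) contact logits

def top_l5_precision(logits: torch.Tensor, true_contacts: torch.Tensor, L: int):
    """Of the L/5 highest-scoring residue pairs, the fraction that are true contacts."""
    k = max(L // 5, 1)
    top = torch.topk(logits.flatten(), k).indices
    return true_contacts.flatten()[top].float().mean().item()

res_a, res_b = torch.randn(60, 32), torch.randn(80, 32)
logits = contact_logits(res_a, res_b, nn.Bilinear(32, 32, 1))
prec = top_l5_precision(logits, torch.zeros(60, 80, dtype=torch.bool), L=60)
```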
Learning effective protein representations is critical for a variety of tasks in biology, such as predicting protein function or structure. Existing approaches usually pre-train protein language models on a large number of unlabeled amino acid sequences and then fine-tune the models with some labeled data on downstream tasks. Despite the effectiveness of sequence-based approaches, the power of pretraining on known protein structures has not been explored for protein property prediction, even though protein structures are known to be determinants of protein function. In this paper, we propose to pre-train proteins based on their 3D structures. We first present a simple yet effective encoder to learn the geometric features of a protein. We pre-train the protein graph encoder by leveraging multiview contrastive learning and different self-prediction tasks. Experimental results on function prediction and fold classification tasks show that our proposed pretraining methods outperform or are on par with the state-of-the-art sequence-based methods while using much less data. Our implementation is available at https://github.com/deepgraphlearning/gearnet.