Self-supervised learning (SSL) is a method that learns data representations by exploiting supervision inherent in the data itself. It has become a focus in the drug domain, where annotated data are scarce because experiments are time-consuming and expensive. SSL over huge unlabeled datasets has shown excellent performance for molecular property prediction, but several problems remain. (1) Existing SSL models are large-scale, which limits their use where computational resources are insufficient. (2) In most cases they do not exploit 3D structural information for molecular representation learning, even though a drug's activity is closely related to the structure of the drug molecule; most current models use no 3D information or only part of it. (3) Previous contrastive-learning models for molecules use augmentations that permute atoms and bonds, so molecules with different characteristics can end up in the same positive pair. We propose a novel contrastive learning framework, small-scale 3D Graph Contrastive Learning (3DGCL) for molecular property prediction, to address the above problems. 3DGCL learns molecular representations through a pre-training process that reflects molecular structure without changing drug semantics. Using only 1,128 samples of pre-training data and one million model parameters, we achieve state-of-the-art or comparable performance on four regression benchmark datasets. Extensive experiments demonstrate that 3D structural information grounded in chemical knowledge is essential for molecular representation learning aimed at property prediction.
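A minimal sketch of the contrastive core such a framework might use, assuming (as in standard NT-Xent setups) that two 3D conformer views of the same molecule form the only positive pair in a batch; the encoder, conformer generation, and 3DGCL's exact loss are not reproduced here:

```python
import torch
import torch.nn.functional as F

def nt_xent(z1: torch.Tensor, z2: torch.Tensor, tau: float = 0.1) -> torch.Tensor:
    """NT-Xent contrastive loss: row i of z1 and row i of z2 are embeddings
    of two conformers of the same molecule and form the only positive pair."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau              # (B, B) cosine-similarity logits
    targets = torch.arange(z1.size(0))      # positives sit on the diagonal
    return F.cross_entropy(logits, targets)

# Toy usage: 8 molecules, 64-dim encoder outputs for each of two conformers.
loss = nt_xent(torch.randn(8, 64), torch.randn(8, 64))
```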
Molecular representation learning contributes to multiple downstream tasks such as molecular property prediction and drug design. To represent molecules properly, graph contrastive learning is a promising paradigm because it utilizes self-supervised signals and requires no human annotation. However, prior work fails to incorporate fundamental domain knowledge into graph semantics and thus ignores correlations between atoms that share common attributes but are not directly connected by bonds. To address these issues, we construct a Chemical Element Knowledge Graph (KG) that summarizes the microscopic associations between elements, and propose a novel Knowledge-enhanced Contrastive Learning (KCL) framework for molecular representation learning. The KCL framework consists of three modules. The first module, knowledge-guided graph augmentation, augments the original molecular graph based on the Chemical Element KG. The second module, knowledge-aware graph representation, extracts molecular representations with a common graph encoder for the original molecular graph and a Knowledge-aware Message Passing Neural Network (KMPNN) that encodes the complex information in the augmented molecular graph. The final module is a contrastive objective that maximizes agreement between these two views of the molecular graph. Extensive experiments demonstrate that KCL achieves superior performance over state-of-the-art baselines on eight molecular datasets. Visualization experiments properly interpret what KCL learns from atoms and attributes in the augmented molecular graph. Our code and data are available in the supplementary material.
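A toy sketch of the knowledge-guided augmentation idea, assuming a tiny illustrative element KG (the real Chemical Element KG and the KMPNN encoder are far richer); attribute nodes from the KG are attached to matching atoms, so atoms sharing an attribute become connected even without a bond:

```python
# Toy element knowledge graph: element symbol -> shared attributes.
element_kg = {
    "O": ["high_electronegativity", "period_2"],
    "N": ["period_2"],
}

def augment(atoms, edges):
    """Attach attribute nodes from the KG; atoms sharing an attribute are
    linked to the *same* node, connecting them even without a bond."""
    atoms, edges = list(atoms), list(edges)
    attr_node = {}                              # attribute -> node index
    for i, sym in enumerate(atoms[:]):
        for attr in element_kg.get(sym, []):
            if attr not in attr_node:
                atoms.append(attr)
                attr_node[attr] = len(atoms) - 1
            edges.append((i, attr_node[attr]))  # atom -> attribute edge
    return atoms, edges

atoms, edges = augment(["C", "O", "N"], [(0, 1), (0, 2)])
# O (node 1) and N (node 2) now share the "period_2" attribute node.
```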
Although substantial efforts have been made using graph neural networks (GNNs) for AI-driven drug discovery (AIDD), effective molecular representation learning remains an open challenge, especially in the case of insufficient labeled molecules. Recent studies suggest that big GNN models pre-trained by self-supervised learning on unlabeled datasets enable better transfer performance in downstream molecular property prediction tasks. However, they often require large-scale datasets and considerable computational resources, making pre-training time-consuming, computationally expensive, and environmentally unfriendly. To alleviate these limitations, we propose a novel pre-training model for molecular representation learning, Bi-branch Masked Graph Transformer Autoencoder (BatmanNet). BatmanNet features two tailored and complementary graph autoencoders to reconstruct the missing nodes and edges from a masked molecular graph. Surprisingly, we found that a high mask ratio (60%) of the atoms and bonds achieves the best performance. We further propose an asymmetric graph-based encoder-decoder architecture for both nodes and edges, where a transformer-based encoder only takes the visible subset of nodes or edges, and a lightweight decoder reconstructs the original molecule from the latent representation and mask tokens. With this simple yet effective asymmetrical design, our BatmanNet can learn efficiently even from a much smaller-scale unlabeled molecular dataset to capture the underlying structural and semantic information, overcoming a major limitation of current deep neural networks for molecular representation learning. For instance, using only 250K unlabelled molecules as pre-training data, our BatmanNet with 2.575M parameters achieves a 0.5% improvement on the average AUC compared with the current state-of-the-art method with 100M parameters pre-trained on 11M molecules.
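A stripped-down sketch of the asymmetric masked-autoencoding pattern, assuming plain MLPs in place of the paper's graph transformer branches and ignoring edges and message passing entirely; only the 60% masking, encode-visible-only, reconstruct-masked recipe is illustrated:

```python
import torch
import torch.nn as nn

x = torch.randn(30, 16)            # toy molecule: 30 atoms, 16-dim features

# Mask 60% of atoms -- the ratio the abstract reports as working best.
perm = torch.randperm(x.size(0))
n_vis = int(x.size(0) * 0.4)
vis_idx, mask_idx = perm[:n_vis], perm[n_vis:]

# Asymmetric pair: a heavier encoder sees only visible atoms; a lightweight
# decoder reconstructs the rest from latents plus a shared mask token.
encoder = nn.Sequential(nn.Linear(16, 128), nn.ReLU(), nn.Linear(128, 128))
decoder = nn.Linear(128, 16)
mask_token = nn.Parameter(torch.zeros(128))

h = mask_token.expand(x.size(0), -1).clone()
h[vis_idx] = encoder(x[vis_idx])   # encode the visible subset only
recon = decoder(h)
loss = ((recon[mask_idx] - x[mask_idx]) ** 2).mean()  # loss on masked atoms
```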
Molecular property prediction is one of the fastest-growing applications of deep learning with critical real-world impacts. Including 3D molecular structure as input to learned models improves their performance on many molecular tasks. However, this information is infeasible to compute at the scale required by several real-world applications. We propose pre-training a model to reason about the geometry of molecules given only their 2D molecular graphs. Using methods from self-supervised learning, we maximize the mutual information between a 3D summary vector and the representations of a graph neural network (GNN) such that they contain latent 3D information. During fine-tuning on molecules with unknown geometry, the GNN still generates implicit 3D information and can use it to improve downstream tasks. We show that 3D pre-training provides significant improvements for a wide range of properties, such as a 22% average MAE reduction on eight quantum-mechanical properties. Moreover, the learned representations can be effectively transferred between datasets in different molecular spaces.
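The 3D summary vector could, for instance, be any rotation- and translation-invariant function of the conformer; a crude illustrative version (not the paper's actual network) built from radial-basis-expanded pairwise distances, which one would then contrast against the 2D GNN's output with a loss like the NT-Xent sketched earlier:

```python
import torch

def summary_3d(pos: torch.Tensor, n_rbf: int = 16, cutoff: float = 5.0) -> torch.Tensor:
    """Invariant 3D summary: radial-basis expansion of all pairwise atomic
    distances, mean-pooled. Rotating or translating pos leaves it unchanged."""
    n = pos.size(0)
    iu = torch.triu_indices(n, n, offset=1)
    d = torch.cdist(pos, pos)[iu[0], iu[1]]          # unique pair distances
    centers = torch.linspace(0.0, cutoff, n_rbf)
    rbf = torch.exp(-((d[:, None] - centers) ** 2))  # (pairs, n_rbf)
    return rbf.mean(dim=0)                           # (n_rbf,) summary vector

z3d = summary_3d(torch.randn(20, 3))                 # 20 atoms with xyz coords
```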
Molecular representation learning (MRL) is a key step in building the connection between machine learning and the chemical sciences. In particular, it encodes molecules as numerical vectors that preserve molecular structures and features, on top of which downstream tasks (e.g., property prediction) can be performed. Recently, MRL has achieved considerable progress, especially in methods based on deep molecular graph learning. In this survey, we systematically review these graph-based molecular representation techniques. Specifically, we first introduce the data and features of 2D and 3D molecular graph datasets. Then we summarize the methods specially designed for MRL and categorize them into four strategies. Furthermore, we discuss some typical chemical applications supported by MRL. To facilitate research in this fast-developing field, we also list the benchmarks and commonly used datasets in the paper. Finally, we share our thoughts on future research directions.
Models that accurately predict properties based on chemical structure are valuable tools in drug discovery. However, for many properties, public and private training sets are typically small, and it is difficult for the models to generalize well outside of the training data. Recently, large language models have addressed this problem by using self-supervised pretraining on large unlabeled datasets, followed by fine-tuning on smaller, labeled datasets. In this paper, we report MolE, a molecular foundation model that adapts the DeBERTa architecture to be used on molecular graphs together with a two-step pretraining strategy. The first step of pretraining is a self-supervised approach focused on learning chemical structures, and the second step is a massive multi-task approach to learn biological information. We show that fine-tuning pretrained MolE achieves state-of-the-art results on 9 of the 22 ADMET tasks included in the Therapeutic Data Commons.
Models based on machine learning can enable accurate and fast molecular property predictions, which is of interest in drug discovery and material design. Various supervised machine learning models have demonstrated promising performance, but the vast chemical space and the limited availability of property labels make supervised learning challenging. Recently, unsupervised transformer-based language models pretrained on a large unlabelled corpus have produced state-of-the-art results in many downstream natural language processing tasks. Inspired by this development, we present molecular embeddings obtained by training an efficient transformer encoder model, MoLFormer, which uses rotary positional embeddings. This model employs a linear attention mechanism, coupled with highly distributed training, on SMILES sequences of 1.1 billion unlabelled molecules from the PubChem and ZINC datasets. We show that the learned molecular representation outperforms existing baselines, including supervised and self-supervised graph neural networks and language models, on several downstream tasks from ten benchmark datasets. It performs competitively on two others. Further analyses, specifically through the lens of attention, demonstrate that MoLFormer trained on chemical SMILES indeed learns the spatial relationships between atoms within a molecule. These results provide encouraging evidence that large-scale molecular language models can capture sufficient chemical and structural information to predict various distinct molecular properties, including quantum-chemical properties.
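A minimal sketch of the rotary positional embedding MoLFormer's encoder uses (the linear attention mechanism and distributed training are omitted); each channel pair of a query/key vector is rotated by a position-dependent angle, so attention scores depend only on relative token offsets:

```python
import torch

def rotary(x: torch.Tensor) -> torch.Tensor:
    """Apply rotary embeddings to a (seq_len, dim) tensor of queries or keys;
    dim must be even."""
    seq_len, dim = x.shape
    half = dim // 2
    freqs = 1.0 / (10000 ** (torch.arange(half) / half))
    angles = torch.arange(seq_len)[:, None] * freqs[None, :]  # (seq, half)
    cos, sin = angles.cos(), angles.sin()
    x1, x2 = x[:, :half], x[:, half:]
    return torch.cat([x1 * cos - x2 * sin, x1 * sin + x2 * cos], dim=-1)

q = rotary(torch.randn(42, 64))   # e.g. queries for a 42-token SMILES string
```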
Extracting informative representations of molecules using graph neural networks (GNNs) is crucial for AI-driven drug discovery. Recently, the graph research community has been trying to replicate the success of self-supervised pretraining in natural language processing, with several successes claimed. However, we find that in many cases the benefit of self-supervised pretraining on molecular data is negligible. We conduct thorough ablation studies of the key components of GNN pretraining, including the pretraining objective, data splitting method, input features, pretraining dataset scale, and GNN architecture, to see how they affect downstream-task accuracy. Our first important finding is that self-supervised graph pretraining does not provide statistically significant advantages in many settings. Second, although improvements can be observed with additional supervised pretraining, they may diminish with richer or more balanced data splits. Third, experimental hyperparameters have a larger impact on downstream-task accuracy than the choice of pretraining task. We hypothesize that the complexity of pretraining on molecules is insufficient, leading to less transferable knowledge for downstream tasks.
Recently, graph neural networks (GNNs) have achieved remarkable performances on quantum mechanical problems. However, a graph convolution can only cover a localized region and cannot capture long-range interactions between atoms. This behavior is contrary to theoretical interatomic potentials and is a fundamental limitation of spatial-based GNNs. In this work, we propose a novel attention-based framework for molecular property prediction tasks. We represent a molecular conformation as a discrete atomic sequence combined with atom-atom distance attributes, named the Geometry-aware Transformer (GeoT). In particular, we adopt a Transformer architecture, which has been widely used for sequential data. Our proposed model trains sequential representations of molecular graphs based on globally constructed attention, maintaining all spatial arrangements of atom pairs. Our method does not suffer from cost-intensive computations such as angle calculations. Experimental results on several public benchmarks, together with visualization maps, verify that keeping the long-range interatomic attributes can significantly improve the model's predictive power.
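One plausible reading of distance-aware global attention, as a sketch: bias every attention logit by the corresponding atom-atom distance so all pairs interact but distant ones are down-weighted (GeoT's exact parameterization may differ):

```python
import torch
import torch.nn.functional as F

def distance_aware_attention(h, pos, w_q, w_k, w_v, gamma=1.0):
    """Global self-attention over atoms with a pairwise-distance bias.
    h: (N, d) atom features; pos: (N, 3) coordinates."""
    q, k, v = h @ w_q, h @ w_k, h @ w_v
    logits = q @ k.t() / q.size(-1) ** 0.5
    logits = logits - gamma * torch.cdist(pos, pos)   # penalize far pairs
    return F.softmax(logits, dim=-1) @ v

n, d = 12, 32
h, pos = torch.randn(n, d), torch.randn(n, 3)
w_q, w_k, w_v = (torch.randn(d, d) * d ** -0.5 for _ in range(3))
out = distance_aware_attention(h, pos, w_q, w_k, w_v)
```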
Recently, deep learning approaches have been extensively studied for various problems in chemistry, such as property prediction, virtual screening, de novo molecule design, etc. Despite the impressive successes, separately designed networks are usually required for end-to-end training on specific tasks, so it is often difficult to acquire a unified principle for synergistically combining existing models and training datasets for novel tasks. To address this, here we present a novel multimodal chemical foundation model that can be used for various downstream tasks requiring a simultaneous understanding of structure and property. Specifically, inspired by recent advances in pre-trained multi-modal foundation models such as Vision-Language Pretrained models (VLP), we propose a novel structure-property multi-modal (SPMM) foundation model using a dual-stream transformer with X-shape attention, so that it can align the molecular structure and the chemical properties in a common embedding space. Thanks to its outstanding structure-property unimodal representations, experimental results confirm that SPMM can simultaneously perform molecule generation, property prediction, classification, reaction prediction, etc., which was previously not possible with a single architecture.
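A schematic of the dual-stream cross-attention ("X-shape") idea using stock PyTorch attention; dimensions, token counts, and the fusion details below are illustrative, not SPMM's actual configuration:

```python
import torch
import torch.nn as nn

d = 64
attn_s2p = nn.MultiheadAttention(d, num_heads=4, batch_first=True)
attn_p2s = nn.MultiheadAttention(d, num_heads=4, batch_first=True)

struct = torch.randn(1, 40, d)   # e.g. SMILES-token embeddings
props = torch.randn(1, 53, d)    # e.g. one embedding per chemical property

s_out, _ = attn_s2p(struct, props, props)   # structure stream queries properties
p_out, _ = attn_p2s(props, struct, struct)  # property stream queries structure
```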
Can we inject knowledge of pocket-ligand interactions into a pre-trained model and jointly learn their chemical space? In recent years, pretraining on molecules and proteins has attracted considerable attention, but most of these methods focus on learning only one of the two chemical spaces and lack the injection of biological knowledge. We propose a co-supervised pretraining (CoSP) framework to learn 3D pocket and ligand representations simultaneously. We use a gated geometric message-passing layer to model 3D pockets and ligands, where each node's chemical features, geometric position, and orientation are taken into account. To learn biologically meaningful embeddings, we inject pocket-ligand interaction knowledge into the pretraining model via a contrastive loss. Considering the specificity of molecules, we further propose a chemical-similarity-enhanced negative sampling strategy to improve contrastive learning performance. Through extensive experiments, we conclude that CoSP achieves competitive results in pocket matching, molecular property prediction, and virtual screening.
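The abstract leaves the sampling rule unspecified; one plausible sketch, assuming that candidate negatives whose ligands are chemically near-identical to the anchor's ligand should be masked out of the contrastive denominator (they may bind the same pocket and act as false negatives):

```python
import torch

def filtered_logits(sim_pl: torch.Tensor, chem_sim: torch.Tensor,
                    threshold: float = 0.9) -> torch.Tensor:
    """sim_pl: (B, B) pocket-ligand logits, diagonal = true pairs.
    chem_sim: (B, B) ligand-ligand chemical similarity."""
    eye = torch.eye(sim_pl.size(0), dtype=torch.bool)
    false_neg = (chem_sim > threshold) & ~eye  # near-duplicates of the anchor
    return sim_pl.masked_fill(false_neg, float("-inf"))
```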
Pretraining molecular representation models without labels is fundamental to various applications. Conventional methods mainly process 2D molecular graphs and focus solely on 2D tasks, making their pretrained models unable to characterize 3D geometry and thus deficient for downstream 3D tasks. In this work, we tackle 3D molecular pretraining in a complete and novel sense. In particular, we first propose adopting an energy-based model as the backbone for pretraining, which enjoys the merit of fulfilling the symmetry of 3D space. We then develop a node-level pretraining loss for force prediction, where we further exploit the Riemann-Gaussian distribution to ensure the loss is E(3)-invariant, enabling more robustness. Moreover, a graph-level noise-scale prediction task is leveraged to further promote the final performance. We evaluate our model, pretrained on the large-scale 3D dataset GEOM-QM9, on two challenging 3D benchmarks: MD17 and QM9. Experimental results support the better efficacy of our method against current state-of-the-art pretraining approaches and verify the validity of our design.
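A bare-bones coordinate-denoising step in the spirit of the node-level force-prediction loss, with loud caveats: the paper uses an energy-based, symmetry-respecting backbone and a Riemann-Gaussian loss, whereas the plain MLP below is neither E(3)-invariant nor energy-based:

```python
import torch
import torch.nn as nn

# Stand-in denoiser: perturb atom coordinates with Gaussian noise and
# regress the per-atom perturbation (a simple score-matching-style target).
model = nn.Sequential(nn.Linear(3, 64), nn.SiLU(), nn.Linear(64, 3))

pos = torch.randn(20, 3)                 # 20 atoms
noise = 0.1 * torch.randn_like(pos)
pred = model(pos + noise)                # predict the noise ("force" proxy)
loss = ((pred - noise) ** 2).mean()
```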
In recent years, molecular graph representation learning (GRL) has drawn increasing attention in molecular property prediction (MPP) problems. Existing graph methods have demonstrated that 3D geometric information is significant for better performance in MPP. However, accurate 3D structures are often costly and time-consuming to obtain, limiting the large-scale application of GRL. An intuitive solution is to train with 3D-to-2D knowledge distillation and predict with only 2D inputs, but some challenging problems remain open for 3D-to-2D distillation. One is that the 3D view is quite distinct from the 2D view; the other is that the gradient magnitudes of atoms in distillation are discrepant and unstable due to variable molecular size. To address these problems, we propose a distillation framework that contains global molecular distillation and local atom distillation. We also provide a theoretical insight to justify how to coordinate atom and molecular information, which tackles the drawback of variable molecular size for atom information distillation. Experimental results on two popular molecular datasets demonstrate that our proposed model achieves superior performance over other methods. Specifically, on PCQM4Mv2, the largest MPP dataset, which serves as an "ImageNet Large Scale Visual Recognition Challenge" in the field of graph ML, the proposed method achieved a 6.9% improvement compared with the best prior work. We also obtained fourth place, with an MAE of 0.0734 on the test-challenge set, in the OGB-LSC 2022 Graph Regression Task. We will release the code soon.
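A sketch of a two-level distillation objective under stated assumptions: the 2D student matches a frozen 3D teacher both per atom (local) and per molecule (global), with per-atom targets normalized so the gradient scale does not grow with molecule size (one reading of the paper's fix; its actual coordination scheme is more involved):

```python
import torch
import torch.nn.functional as F

def distill_loss(s_atom, s_mol, t_atom, t_mol, alpha=0.5):
    """s_*/t_*: student/teacher embeddings; (N, d) per atom, (d,) per molecule."""
    local = F.mse_loss(F.normalize(s_atom, dim=-1), F.normalize(t_atom, dim=-1))
    global_ = F.mse_loss(s_mol, t_mol)
    return alpha * local + (1 - alpha) * global_

loss = distill_loss(torch.randn(20, 64), torch.randn(64),
                    torch.randn(20, 64), torch.randn(64))
```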
Molecular representation learning is crucial for the problem of molecular property prediction, where graph neural networks (GNNs) serve as an effective solution due to their structure modeling capabilities. Since labeled data is often scarce and expensive to obtain, it is a great challenge for GNNs to generalize in the extensive molecular space. Recently, the training paradigm of "pre-train, fine-tune" has been leveraged to improve the generalization capabilities of GNNs. It uses self-supervised information to pre-train the GNN, and then performs fine-tuning to optimize the downstream task with just a few labels. However, pre-training does not always yield statistically significant improvement, especially for self-supervised learning with random structural masking. In fact, the molecular structure is characterized by motif subgraphs, which are frequently occurring and influence molecular properties. To leverage the task-related motifs, we propose a novel paradigm of "pre-train, prompt, fine-tune" for molecular representation learning, named molecule continuous prompt tuning (MolCPT). MolCPT defines a motif prompting function that uses the pre-trained model to project the standalone input into an expressive prompt. The prompt effectively augments the molecular graph with meaningful motifs in the continuous representation space; this provides more structural patterns to aid the downstream classifier in identifying molecular properties. Extensive experiments on several benchmark datasets show that MolCPT efficiently generalizes pre-trained GNNs for molecular property prediction, with or without a few fine-tuning steps.
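A schematic of what a continuous motif prompt could look like, with illustrative names and sizes (the actual prompting function is learned with the pre-trained model rather than a fixed embedding table):

```python
import torch
import torch.nn as nn

num_motifs, d = 85, 64                 # hypothetical motif vocabulary size
motif_emb = nn.Embedding(num_motifs, d)

h_graph = torch.randn(d)               # frozen pre-trained GNN graph embedding
motifs = torch.tensor([3, 17, 42])     # motifs detected in this molecule
prompt = motif_emb(motifs).mean(dim=0) # pooled continuous prompt
h_prompted = h_graph + prompt          # prompt-augmented representation
```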
Although artificial intelligence (AI) has made significant progress in understanding molecules across a wide range of fields, existing models generally acquire a single cognitive ability from a single molecular modality. Since the hierarchy of molecular knowledge is profound, even humans learn from different modalities, including both intuitive diagrams and professional texts, to assist their understanding. Inspired by this, we propose a molecular multimodal foundation model pretrained from molecular graphs and their semantically related textual data (crawled from published Scientific Citation Index papers). This AI model represents a critical attempt to directly bridge molecular graphs and natural language. Importantly, by capturing the specific and complementary information of the two modalities, our proposed model can better grasp molecular expertise. Experimental results show that our model not only exhibits promising performance in cross-modal tasks such as cross-modal retrieval and molecule captioning, but also enhances molecular property prediction and possesses the capability to generate meaningful molecular graphs from natural language descriptions. We believe our model will have a broad impact on AI-empowered fields across disciplines such as biology, chemistry, materials, the environment, and medicine.
Pretraining molecular representations is essential for applications in drug and material discovery due to the limited number of labeled molecules, yet most existing work focuses on pretraining on 2D molecular graphs. The power of pretraining on 3D geometric structures has been less explored, because it is difficult to find a sufficient proxy task that empowers the pretraining to effectively extract essential features from the geometric structures. Motivated by the dynamic nature of 3D molecules, where the continuous motion of a molecule in 3D Euclidean space forms a smooth potential energy surface, we propose a 3D coordinate denoising pretraining framework to model such an energy landscape. Leveraging an SE(3)-invariant score matching method, we propose SE(3)-DDM, in which the coordinate denoising proxy task is effectively boiled down to denoising the pairwise atomic distances in a molecule. Our comprehensive experiments confirm the effectiveness and robustness of our proposed method.
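The key reduction is to compare distances rather than raw coordinates, since pairwise distances are invariant to rotations and translations. A toy sketch with a hypothetical per-distance denoiser (the real model conditions on the full molecular graph):

```python
import torch
import torch.nn as nn

def pdist_vec(pos: torch.Tensor) -> torch.Tensor:
    """Flattened pairwise distances -- an SE(3)-invariant view of a conformer."""
    n = pos.size(0)
    iu = torch.triu_indices(n, n, offset=1)
    return torch.cdist(pos, pos)[iu[0], iu[1]]

pos = torch.randn(15, 3)                        # clean conformer
noisy = pos + 0.05 * torch.randn_like(pos)      # perturbed conformer

denoiser = nn.Sequential(nn.Linear(1, 32), nn.SiLU(), nn.Linear(32, 1))
pred = denoiser(pdist_vec(noisy)[:, None]).squeeze(-1)
loss = ((pred - pdist_vec(pos)) ** 2).mean()    # denoise distances, not coords
```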
Generalizable, transferrable, and robust representation learning on graph-structured data remains a challenge for current graph neural networks (GNNs). Unlike what has been developed for convolutional neural networks (CNNs) for image data, self-supervised learning and pre-training are less explored for GNNs. In this paper, we propose a graph contrastive learning (GraphCL) framework for learning unsupervised representations of graph data. We first design four types of graph augmentations to incorporate various priors. We then systematically study the impact of various combinations of graph augmentations on multiple datasets, in four different settings: semi-supervised, unsupervised, and transfer learning as well as adversarial attacks. The results show that, even without tuning augmentation extents nor using sophisticated GNN architectures, our GraphCL framework can produce graph representations of similar or better generalizability, transferrability, and robustness compared to state-of-the-art methods. We also investigate the impact of parameterized graph augmentation extents and patterns, and observe further performance gains in preliminary experiments. Our codes are available at: https://github.com/Shen-Lab/GraphCL.
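Three of the four augmentation families (node dropping, edge perturbation, attribute masking) in minimal form on a COO edge list; subgraph sampling is omitted, and the reference implementation lives in the linked repository:

```python
import torch

def node_drop(x, edge_index, p=0.2):
    """Drop a random fraction of nodes, their incident edges, and re-index."""
    keep = torch.rand(x.size(0)) > p
    remap = torch.full((x.size(0),), -1, dtype=torch.long)
    remap[keep] = torch.arange(int(keep.sum()))
    e_mask = keep[edge_index[0]] & keep[edge_index[1]]
    return x[keep], remap[edge_index[:, e_mask]]

def edge_perturb(edge_index, p=0.2):
    """Randomly remove a fraction of edges."""
    return edge_index[:, torch.rand(edge_index.size(1)) > p]

def attr_mask(x, p=0.2):
    """Zero the feature vectors of a random subset of nodes."""
    x = x.clone()
    x[torch.rand(x.size(0)) < p] = 0.0
    return x

# Toy graph: 5 nodes with 8-dim features, 4 directed edges in COO format.
x = torch.randn(5, 8)
edge_index = torch.tensor([[0, 1, 2, 3], [1, 2, 3, 4]])
view1 = node_drop(x, edge_index)
view2 = (attr_mask(x), edge_perturb(edge_index))
```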
Machine learning (ML) has demonstrated promise for accurate and efficient property prediction of molecules and crystalline materials. To develop highly accurate ML models for chemical-structure property prediction, datasets with sufficient samples are required. However, obtaining clean and sufficient data on chemical properties can be both labor-intensive and expensive, which greatly limits the performance of ML models. Inspired by the success of data augmentation in computer vision and natural language processing, we developed AugLiChem: a data augmentation library for chemical structures. Augmentation methods are introduced for both crystalline systems and molecules, and can be used with fingerprint-based ML models and graph neural networks (GNNs). We show that using our augmentation strategies significantly improves the performance of ML models, especially when using GNNs. The augmentations we developed can be used as a direct plug-in module during training, and have demonstrated effectiveness when implemented with different GNN models through the AugLiChem library. Our Python-based implementation, the AugLiChem library, is publicly available at https://github.com/BaratiLab/AugLiChem.
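For molecules, one widely used augmentation of this kind, shown here with RDKit rather than through AugLiChem's own API, is enumerating random but chemically equivalent SMILES strings for a single structure:

```python
from rdkit import Chem

mol = Chem.MolFromSmiles("CCO")  # ethanol
# doRandom yields a different valid atom ordering on each call.
variants = {Chem.MolToSmiles(mol, canonical=False, doRandom=True)
            for _ in range(10)}
print(variants)                  # e.g. {'CCO', 'OCC', 'C(C)O', ...}
```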
Graph contrastive learning has been shown to be an effective task for pre-training graph neural networks (GNNs). However, a key issue can severely hinder representation quality in existing work: the positive instances created by current methods often miss crucial information of the graph, or even yield illegal instances (such as non-chemically-aware graphs in molecular generation). To address this issue, we propose selecting positive graph instances directly from existing graphs in the training set, which ultimately maintains both legality and similarity to the target graph. Our selection is based on certain domain-specific pairwise similarity measurements, together with sampling from a hierarchical graph that encodes the similarity relations among graphs. In addition, we develop an adaptive node-level pre-training method that dynamically masks nodes so that they are distributed evenly across the graph. We conduct extensive experiments on 13 graph classification and node classification benchmark datasets from various domains. The results demonstrate that GNN models pre-trained by our strategy can outperform those trained from scratch, as well as the variants obtained by existing methods.
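A sketch of similarity-based positive selection, assuming Tanimoto similarity on binary fingerprints as the domain-specific measure (the paper's measurements and hierarchical sampling are richer):

```python
import torch

def select_positives(fp: torch.Tensor, k: int = 1) -> torch.Tensor:
    """Pick each molecule's k most Tanimoto-similar training graphs as
    positives -- real, chemically valid molecules rather than synthetic views.
    fp: (N, bits) binary fingerprints."""
    inter = fp @ fp.t()                         # |A and B|
    counts = fp.sum(dim=1, keepdim=True)
    sim = inter / (counts + counts.t() - inter).clamp(min=1)
    sim.fill_diagonal_(-1.0)                    # a graph is not its own positive
    return sim.topk(k, dim=1).indices           # (N, k) indices of positives

fps = (torch.rand(100, 1024) < 0.05).float()    # toy fingerprints
pos_ids = select_positives(fps, k=2)
```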
Many applications of machine learning require a model to make accurate predictions on test examples that are distributionally different from training ones, while task-specific labels are scarce during training. An effective approach to this challenge is to pre-train a model on related tasks where data is abundant, and then fine-tune it on a downstream task of interest. While pre-training has been effective in many language and vision domains, it remains an open question how to effectively use pre-training on graph datasets. In this paper, we develop a new strategy and self-supervised methods for pre-training Graph Neural Networks (GNNs). The key to the success of our strategy is to pre-train an expressive GNN at the level of individual nodes as well as entire graphs so that the GNN can learn useful local and global representations simultaneously. We systematically study pre-training on multiple graph classification datasets. We find that naïve strategies, which pre-train GNNs at the level of either entire graphs or individual nodes, give limited improvement and can even lead to negative transfer on many downstream tasks. In contrast, our strategy avoids negative transfer and improves generalization significantly across downstream tasks, leading up to 9.4% absolute improvements in ROC-AUC over non-pre-trained models and achieving state-of-the-art performance for molecular property prediction and protein function prediction. However, pre-training on graph datasets remains a hard challenge. Several key studies (
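A schematic of the node-plus-graph-level recipe with illustrative heads and random stand-ins for the GNN outputs (the actual objectives include attribute/context prediction at the node level and supervised multi-task prediction at the graph level):

```python
import torch
import torch.nn.functional as F
from torch import nn

node_head = nn.Linear(64, 119)    # e.g. predict a masked atom's element
graph_head = nn.Linear(64, 1310)  # e.g. many binary biochemical assays

h_nodes = torch.randn(30, 64)     # stand-in for GNN node embeddings
h_graph = h_nodes.mean(dim=0)     # simple mean pooling to a graph embedding

masked = torch.randint(0, 30, (5,))
node_loss = F.cross_entropy(node_head(h_nodes[masked]),
                            torch.randint(0, 119, (5,)))
graph_loss = F.binary_cross_entropy_with_logits(
    graph_head(h_graph), torch.randint(0, 2, (1310,)).float())
loss = node_loss + graph_loss     # learn local and global signals together
```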