Molecular property prediction plays a fundamental role in drug discovery to identify candidate molecules with target properties. However, molecular property prediction is essentially a few-shot problem, which makes it hard to use regular machine learning models. In this paper, we propose a Property-Aware Relation network (PAR) to handle this problem. In comparison to existing works, we leverage the fact that both the relevant substructures and the relationships among molecules change across different molecular properties. We first introduce a property-aware embedding function to transform generic molecular embeddings into a substructure-aware space relevant to the target property. Further, we design an adaptive relation graph learning module to jointly estimate a molecular relation graph and refine molecular embeddings w.r.t. the target property, such that the limited labels can be effectively propagated among similar molecules. We adopt a meta-learning strategy in which parameters are selectively updated within tasks, so as to model generic and property-aware knowledge separately. Extensive experiments on benchmark molecular property prediction datasets show that PAR consistently outperforms existing methods and can properly obtain property-aware molecular embeddings and model molecular relation graphs.
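As a rough illustration of the relation-graph idea, the sketch below builds a soft similarity graph over support and query molecule embeddings and propagates the few support labels across it. This is a minimal, hypothetical rendering in PyTorch; the function name, the softmax adjacency, and the propagation scheme are my assumptions, not PAR's actual implementation.

```python
import torch
import torch.nn.functional as F

def relation_graph_propagation(emb, support_labels, n_classes, steps=2):
    """Toy sketch: estimate a molecular relation graph from embedding
    similarity, then propagate support labels to query molecules."""
    sim = emb @ emb.t()                      # (N, N) dot-product similarity
    adj = F.softmax(sim, dim=-1)             # soft adjacency = relation graph
    n_support = support_labels.size(0)
    y = torch.zeros(emb.size(0), n_classes)  # one-hot for support, zeros for query
    y[:n_support] = F.one_hot(support_labels, n_classes).float()
    for _ in range(steps):                   # label propagation over the graph
        y = adj @ y
    return y[n_support:]                     # predictions for query molecules

emb = torch.randn(10, 64)                    # 6 support + 4 query embeddings
preds = relation_graph_propagation(emb, torch.tensor([0, 0, 0, 1, 1, 1]), 2)
print(preds.argmax(dim=-1))
```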
Graph classification is a highly impactful task that plays a crucial role in a myriad of real-world applications such as molecular property prediction and protein function prediction. Dealing with new classes that have only limited labeled graphs, few-shot graph classification has become a bridge between existing graph classification solutions and practical usage. This work explores the potential of metric-based meta-learning for solving few-shot graph classification. We highlight the importance of considering structural characteristics in the solution and propose a novel framework that explicitly considers both the global structure and the local structure of the input graph. An implementation upon GIN, named SMF-GIN, is tested on two datasets, Chembl and TRIANGLES, where extensive experiments validate the effectiveness of the proposed method. The Chembl dataset is constructed to fill the gap of lacking a large-scale benchmark for few-shot graph classification evaluation, and it is released together with the implementation of SMF-GIN at: https://github.com/jiangshunyu/smf-ing.
Many applications of machine learning require a model to make accurate predictions on test examples that are distributionally different from training ones, while task-specific labels are scarce during training. An effective approach to this challenge is to pre-train a model on related tasks where data is abundant, and then fine-tune it on a downstream task of interest. While pre-training has been effective in many language and vision domains, it remains an open question how to effectively use pre-training on graph datasets. In this paper, we develop a new strategy and self-supervised methods for pre-training Graph Neural Networks (GNNs). The key to the success of our strategy is to pre-train an expressive GNN at the level of individual nodes as well as entire graphs so that the GNN can learn useful local and global representations simultaneously. We systematically study pre-training on multiple graph classification datasets. We find that naïve strategies, which pre-train GNNs at the level of either entire graphs or individual nodes, give limited improvement and can even lead to negative transfer on many downstream tasks. In contrast, our strategy avoids negative transfer and improves generalization significantly across downstream tasks, leading up to 9.4% absolute improvements in ROC-AUC over non-pre-trained models and achieving state-of-the-art performance for molecular property prediction and protein function prediction.
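One of the node-level self-supervised tasks described for GNN pre-training is attribute masking: hide some node attributes and train the network to recover them. Below is a minimal sketch of that objective, assuming a toy mean-aggregation layer that stands in for the paper's GNN; the masking details are simplified assumptions rather than the authors' code.

```python
import torch
import torch.nn as nn

class TinyGNNLayer(nn.Module):
    """Toy mean-aggregation message passing; stands in for the paper's GNN."""
    def __init__(self, dim):
        super().__init__()
        self.lin = nn.Linear(dim, dim)

    def forward(self, x, adj):
        return torch.relu(self.lin(adj @ x))   # aggregate neighbors, transform

def attribute_masking_loss(x, adj, encoder, head, mask_rate=0.3):
    """Node-level pre-training: hide some node attributes, encode the graph,
    and predict the original attributes at the masked positions."""
    mask = torch.rand(x.size(0)) < mask_rate
    if not mask.any():                         # ensure at least one masked node
        mask[0] = True
    x_in = x.clone()
    x_in[mask] = 0.0                           # crude stand-in for a mask token
    h = encoder(x_in, adj)
    return nn.functional.mse_loss(head(h[mask]), x[mask])

dim = 16
x = torch.randn(8, dim)                        # node features of one graph
adj = torch.softmax(torch.randn(8, 8), dim=-1) # toy normalized adjacency
encoder, head = TinyGNNLayer(dim), nn.Linear(dim, dim)
attribute_masking_loss(x, adj, encoder, head).backward()
```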
Graph neural networks (GNNs), a generalization of deep neural networks to graph data, have been widely used in various domains, ranging from drug discovery to recommender systems. However, GNNs in these applications are limited when there are few available samples. Meta-learning has been an important framework for addressing the lack of samples in machine learning, and in recent years researchers have started to apply meta-learning to GNNs. In this work, we provide a comprehensive survey of the different meta-learning approaches involving GNNs on various graph problems, which show the power of using these two approaches together. We categorize the literature based on proposed architectures, shared representations, and applications. Finally, we discuss several exciting future research directions and open problems.
Although substantial efforts have been made using graph neural networks (GNNs) for AI-driven drug discovery (AIDD), effective molecular representation learning remains an open challenge, especially in the case of insufficient labeled molecules. Recent studies suggest that big GNN models pre-trained by self-supervised learning on unlabeled datasets enable better transfer performance in downstream molecular property prediction tasks. However, they often require large-scale datasets and considerable computational resources, which is time-consuming, computationally expensive, and environmentally unfriendly. To alleviate these limitations, we propose a novel pre-training model for molecular representation learning, Bi-branch Masked Graph Transformer Autoencoder (BatmanNet). BatmanNet features two tailored and complementary graph autoencoders to reconstruct the missing nodes and edges from a masked molecular graph. Surprisingly, we find that a high masking proportion (60%) of the atoms and bonds achieves the best performance. We further propose an asymmetric graph-based encoder-decoder architecture for both nodes and edges, where a transformer-based encoder only takes the visible subset of nodes or edges, and a lightweight decoder reconstructs the original molecule from the latent representation and mask tokens. With this simple yet effective asymmetrical design, our BatmanNet can learn efficiently even from a much smaller-scale unlabeled molecular dataset to capture the underlying structural and semantic information, overcoming a major limitation of current deep neural networks for molecular representation learning. For instance, using only 250K unlabelled molecules as pre-training data, our BatmanNet with 2.575M parameters achieves a 0.5% improvement on the average AUC compared with the current state-of-the-art method with 100M parameters pre-trained on 11M molecules.
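The asymmetric design can be sketched as follows: a heavier encoder sees only the visible nodes, and a deliberately lightweight decoder reconstructs the rest from latents plus a learned mask token. This is a node-only toy version under my own simplifying assumptions (dense node tensors, a plain Transformer encoder), not BatmanNet's bi-branch architecture.

```python
import torch
import torch.nn as nn

class MaskedNodeAutoencoder(nn.Module):
    """Asymmetric masked autoencoder: the encoder sees only visible nodes;
    a lightweight decoder fills in mask tokens and reconstructs features."""
    def __init__(self, dim, mask_ratio=0.6):
        super().__init__()
        self.mask_ratio = mask_ratio
        self.mask_token = nn.Parameter(torch.zeros(dim))
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True),
            num_layers=2)
        self.decoder = nn.Linear(dim, dim)    # deliberately lightweight

    def forward(self, x):                     # x: (batch, n_nodes, dim)
        b, n, d = x.shape
        n_keep = max(1, int(n * (1 - self.mask_ratio)))
        perm = torch.randperm(n)
        keep, masked = perm[:n_keep], perm[n_keep:]
        z = self.encoder(x[:, keep])          # encode visible nodes only
        full = self.mask_token.expand(b, n, d).clone()
        full[:, keep] = z                     # scatter latents back in place
        recon = self.decoder(full)
        return nn.functional.mse_loss(recon[:, masked], x[:, masked])

model = MaskedNodeAutoencoder(dim=32)
loss = model(torch.randn(4, 20, 32))          # 4 toy graphs, 20 nodes each
loss.backward()
```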
Molecular representation learning is crucial for the problem of molecular property prediction, where graph neural networks (GNNs) serve as an effective solution due to their structure modeling capabilities. Since labeled data is often scarce and expensive to obtain, it is a great challenge for GNNs to generalize in the extensive molecular space. Recently, the training paradigm of "pre-train, fine-tune" has been leveraged to improve the generalization capabilities of GNNs. It uses self-supervised information to pre-train the GNN, and then performs fine-tuning to optimize the downstream task with just a few labels. However, pre-training does not always yield statistically significant improvement, especially for self-supervised learning with random structural masking. In fact, the molecular structure is characterized by motif subgraphs, which are frequently occurring and influence molecular properties. To leverage the task-related motifs, we propose a novel paradigm of "pre-train, prompt, fine-tune" for molecular representation learning, named molecule continuous prompt tuning (MolCPT). MolCPT defines a motif prompting function that uses the pre-trained model to project the standalone input into an expressive prompt. The prompt effectively augments the molecular graph with meaningful motifs in the continuous representation space; this provides more structural patterns to aid the downstream classifier in identifying molecular properties. Extensive experiments on several benchmark datasets show that MolCPT efficiently generalizes pre-trained GNNs for molecular property prediction, with or without a few fine-tuning steps.
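One plausible reading of the motif prompting function is sketched below: embed the motifs detected in a molecule as learnable vectors and fuse them with the frozen pre-trained graph embedding to form a continuous prompt. The lookup-table and fusion details here are my assumptions for illustration, not MolCPT's published design.

```python
import torch
import torch.nn as nn

class MotifPrompt(nn.Module):
    """Continuous motif prompt: embed the motifs present in a molecule and
    fuse them with the (frozen) pre-trained graph embedding."""
    def __init__(self, n_motifs, dim):
        super().__init__()
        self.motif_table = nn.Embedding(n_motifs, dim)  # learnable prompt vectors
        self.fuse = nn.Linear(2 * dim, dim)

    def forward(self, graph_emb, motif_ids):
        prompt = self.motif_table(motif_ids).mean(dim=0)  # average present motifs
        return self.fuse(torch.cat([graph_emb, prompt], dim=-1))

dim = 64
prompting = MotifPrompt(n_motifs=100, dim=dim)
graph_emb = torch.randn(dim)                 # from a frozen pre-trained GNN
motif_ids = torch.tensor([3, 17, 42])        # indices of detected motifs (toy)
prompted = prompting(graph_emb, motif_ids)   # fed to the downstream classifier
print(prompted.shape)
```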
Molecular property prediction is one of the fastest-growing applications of deep learning with critical real-world impact. Including 3D molecular structure as input to learned models improves their performance on many molecular tasks. However, this information is infeasible to compute at the scale required by several real-world applications. We propose pre-training a model to reason about the geometry of molecules given only their 2D molecular graphs. Using methods from self-supervised learning, we maximize the mutual information between 3D summary vectors and the representations of a graph neural network (GNN) such that they contain latent 3D information. During fine-tuning on molecules with unknown geometry, the GNN still generates implicit 3D information and can use it to improve downstream tasks. We show that 3D pre-training provides significant improvements for a wide range of properties, such as a 22% average MAE reduction on eight quantum mechanical properties. Moreover, the learned representations can be effectively transferred between datasets in different molecular spaces.
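The mutual-information objective is typically realized with an InfoNCE-style contrastive loss; a minimal sketch follows, treating the 2D GNN embedding and the 3D summary vector of the same molecule as a positive pair and all other pairs in the batch as negatives. The function name and temperature value are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def info_nce_2d_3d(h2d, h3d, temperature=0.1):
    """Contrastive loss between 2D graph embeddings and 3D summary vectors.
    Matching rows are positives; all other batch pairs are negatives."""
    h2d = F.normalize(h2d, dim=-1)
    h3d = F.normalize(h3d, dim=-1)
    logits = h2d @ h3d.t() / temperature      # (B, B) cosine similarities
    targets = torch.arange(h2d.size(0))       # positives sit on the diagonal
    return F.cross_entropy(logits, targets)

h2d = torch.randn(32, 128)   # GNN embeddings of 2D molecular graphs
h3d = torch.randn(32, 128)   # summary vectors of the same molecules' 3D conformers
loss = info_nce_2d_3d(h2d, h3d)
```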
Molecular representation learning (MRL) is a key step in building the connection between machine learning and chemical science. In particular, it encodes molecules as numerical vectors that preserve molecular structures and features, on top of which downstream tasks (e.g., property prediction) can be performed. Recently, MRL has achieved considerable progress, especially in methods based on deep molecular graph learning. In this survey, we systematically review these graph-based molecular representation techniques. Specifically, we first introduce the data and features of 2D and 3D molecular graph datasets. Then we summarize the methods specially designed for MRL and categorize them into four strategies. Furthermore, we discuss some typical chemical applications supported by MRL. To facilitate studies in this fast-developing area, we also list the benchmarks and commonly used datasets in the paper. Finally, we share our thoughts on future research directions.
Elucidating and accurately predicting the druggability and bioactivity of molecules plays a pivotal role in drug design and discovery, and remains an open challenge. Recently, graph neural networks (GNNs) have made remarkable advances in graph-based molecular property prediction. However, current graph-based deep learning methods neglect the hierarchical information of molecules as well as the relationships between feature channels. In this study, we propose a well-designed hierarchical informative graph neural network framework (termed HiGNN) for predicting molecular properties by utilizing molecular graphs together with chemically synthesizable fragments. Furthermore, a plug-and-play feature-wise attention block is first designed in the HiGNN architecture to adaptively recalibrate atomic features after the message passing phase. Extensive experiments demonstrate that HiGNN achieves state-of-the-art predictive performance on many challenging drug discovery-related benchmark datasets. In addition, we devise a molecule-fragment similarity mechanism to comprehensively investigate the interpretability of the HiGNN model at the subgraph level, indicating that HiGNN, as a powerful deep learning tool, can help chemists and pharmacists identify the key substructures for designing better molecules with desired properties or functions. The source code is publicly available at https://github.com/idruglab/hignn.
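A feature-wise attention block of this kind can be sketched in the spirit of squeeze-and-excitation: pool atom features into a channel descriptor, then gate each channel. Whether this matches HiGNN's exact block is an assumption; the reduction ratio and pooling choice below are illustrative.

```python
import torch
import torch.nn as nn

class FeatureWiseAttention(nn.Module):
    """Recalibrate atom-feature channels after message passing: squeeze node
    features to a channel descriptor, then gate each channel (SE-style)."""
    def __init__(self, dim, reduction=4):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Linear(dim, dim // reduction), nn.ReLU(),
            nn.Linear(dim // reduction, dim), nn.Sigmoid())

    def forward(self, h):                 # h: (n_atoms, dim)
        channel = h.mean(dim=0)           # squeeze: average over atoms
        return h * self.gate(channel)     # excite: per-channel weights

h = torch.randn(17, 64)                   # atom features after message passing
h = FeatureWiseAttention(64)(h)           # recalibrated atom features
```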
Graph self-supervised learning (GSSL) paves the way for learning graph embeddings without expert annotation, which is particularly impactful for molecular graphs since the number of possible molecules is enormous and labels are expensive to obtain. However, by design, GSSL methods are not trained to perform well on one downstream task but rather aim to transfer to many, making evaluation less straightforward. As a step toward obtaining profiles of molecular graph embeddings with diverse and interpretable attributes, we introduce Molecular Graph Representation Evaluation (MolGraphEval), a suite of probe tasks categorized into (i) topological, (ii) substructure, and (iii) embedding-space properties. By benchmarking existing GSSL methods on both existing downstream datasets and MolGraphEval, we find surprising discrepancies between conclusions drawn from existing datasets alone and these more fine-grained probes, suggesting that current evaluation protocols do not provide the whole picture. Our modular, automated end-to-end GSSL pipeline code will be released upon acceptance, including standardized graph loading, experiment management, and embedding evaluation.
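Probe tasks of this kind usually amount to fitting a small supervised head on frozen embeddings; the sketch below trains a linear probe to predict a structural property such as ring count. The setup (random embeddings, train-set accuracy) is a toy assumption for illustration, not MolGraphEval's protocol.

```python
import torch
import torch.nn as nn

def linear_probe(emb, targets, epochs=200, lr=1e-2):
    """Fit a linear probe on frozen embeddings; probe accuracy indicates how
    much of the target property the embedding encodes. (A real probe would
    report accuracy on a held-out split.)"""
    probe = nn.Linear(emb.size(1), int(targets.max()) + 1)
    opt = torch.optim.Adam(probe.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        nn.functional.cross_entropy(probe(emb), targets).backward()
        opt.step()
    return (probe(emb).argmax(-1) == targets).float().mean().item()

emb = torch.randn(200, 64)                   # frozen GSSL graph embeddings
rings = torch.randint(0, 4, (200,))          # probe target, e.g. ring count
print(f"probe accuracy: {linear_probe(emb, rings):.2f}")
```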
The Transformer architecture has become a dominant choice in many domains, such as natural language processing and computer vision. Yet, compared to mainstream GNN variants, it has not achieved competitive performance on popular leaderboards of graph-level prediction. Therefore, it remains a mystery how Transformers could perform well for graph representation learning. In this paper, we solve this mystery by presenting Graphormer, which is built upon the standard Transformer architecture and can attain excellent results on a broad range of graph representation learning tasks, especially on the recent OGB Large-Scale Challenge. Our key insight for utilizing Transformers on graphs is the necessity of effectively encoding the structural information of a graph into the model. To this end, we propose several simple yet effective structural encoding methods to help Graphormer better model graph-structured data. Besides, we mathematically characterize the expressive power of Graphormer and show that, with our ways of encoding the structural information of graphs, many popular GNN variants can be covered as special cases of Graphormer.
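The core of the spatial encoding is a learnable bias, indexed by shortest-path distance, added to the attention logits. The single-head sketch below shows this mechanism; the toy dimensions and clamping scheme are simplifying assumptions, not the full multi-head Graphormer (which also uses centrality and edge encodings).

```python
import torch
import torch.nn as nn

class SpatialBiasAttention(nn.Module):
    """Single-head self-attention with a learnable bias b(d(i, j)) added to the
    attention logits, where d is the shortest-path distance between nodes."""
    def __init__(self, dim, max_dist=8):
        super().__init__()
        self.max_dist = max_dist
        self.qkv = nn.Linear(dim, 3 * dim)
        self.dist_bias = nn.Embedding(max_dist + 1, 1)  # one scalar per distance

    def forward(self, x, spd):                 # x: (n, dim), spd: (n, n) ints
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        logits = q @ k.t() / x.size(-1) ** 0.5
        bias = self.dist_bias(spd.clamp(max=self.max_dist)).squeeze(-1)
        return torch.softmax(logits + bias, dim=-1) @ v

n, dim = 12, 32
x = torch.randn(n, dim)                        # node features
spd = torch.randint(0, 5, (n, n))              # precomputed shortest-path distances
out = SpatialBiasAttention(dim)(x, spd)        # (n, dim)
```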
This work considers the task of representation learning on attributed relational graphs (ARGs). Both nodes and edges in an ARG are associated with attributes/features, allowing ARGs to encode the rich structural information widely observed in real applications. Existing graph neural networks offer limited ability to capture complex interactions within local structural contexts, which hinders them from taking advantage of the expressive power of ARGs. We propose Motif Convolution Module (MCM), a new motif-based graph representation technique to better utilize local structural information. The ability to handle continuous edge and node features is one of MCM's advantages over existing motif-based models. MCM builds a motif vocabulary in an unsupervised way and deploys a novel motif convolution operation to extract the local structural context of individual nodes, which is then used to learn higher-level node representations via multilayer perceptrons and message passing in graph neural networks. Compared with other graph learning approaches on classifying synthetic graphs, our approach is substantially better at capturing structural context. We also demonstrate the performance and explainability advantages of our approach by applying it to several molecular benchmarks.
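A motif convolution can be caricatured as scoring each node's local neighborhood against a motif vocabulary and appending the similarity vector as structural context. In the sketch below, random vectors stand in for the vocabulary (which the paper builds by unsupervised clustering), and the 1-hop summary is a deliberate simplification.

```python
import torch
import torch.nn.functional as F

def motif_convolution(x, adj, motif_vocab):
    """Toy motif convolution: summarize each node's neighborhood, score it
    against every motif embedding, and use the similarity vector as the
    node's local structural context."""
    neighborhood = adj @ x + x                    # crude 1-hop summary per node
    neighborhood = F.normalize(neighborhood, dim=-1)
    vocab = F.normalize(motif_vocab, dim=-1)
    context = neighborhood @ vocab.t()            # (n_nodes, n_motifs) scores
    return torch.cat([x, context], dim=-1)        # append context to features

x = torch.randn(10, 16)                           # node features
adj = (torch.rand(10, 10) > 0.7).float()          # toy adjacency matrix
motif_vocab = torch.randn(32, 16)                 # 32 motif embeddings (random here)
h = motif_convolution(x, adj, motif_vocab)        # (10, 16 + 32)
```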
With the great success of deep learning in various domains, graph neural networks (GNNs) have also become a dominant approach to graph classification. By means of a global readout operation that simply aggregates all node (or node-cluster) representations, existing GNN classifiers obtain a graph-level representation of an input graph and use it to predict the class label. However, such global aggregation does not consider the structural information of each node, which leads to a loss of information about the global structure. In particular, it limits discrimination power by enforcing the same classifier weight parameters for all node representations; in practice, each of them contributes to the target classes differently depending on its structural semantics. In this work, we propose structural semantic readout (SSRead) to summarize node representations at the position level, which allows position-specific weight parameters to be modeled for classification, as well as effectively capturing the graph semantics relevant to the global structure. Given an input graph, SSRead aims to identify structurally meaningful positions by using the semantic alignment between its nodes and structural prototypes, which encode the prototypical features of each position. The structural prototypes are optimized to minimize the alignment cost over all training graphs, while the other GNN parameters are trained to predict the class labels. Our experimental results demonstrate that SSRead significantly improves the classification performance and interpretability of GNN classifiers, while being compatible with a variety of aggregation functions, GNN architectures, and learning frameworks.
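A simplified hard-assignment version of such a readout is sketched below: align each node to its nearest structural prototype, pool nodes per position, and concatenate the position-wise summaries. The hard argmax alignment and mean pooling are my simplifications of SSRead's alignment mechanism.

```python
import torch
import torch.nn.functional as F

def ssread(node_h, prototypes):
    """Position-level readout: assign every node to its nearest structural
    prototype, average the nodes per position, and concatenate the
    position-wise summaries into one graph-level vector."""
    sim = F.normalize(node_h, dim=-1) @ F.normalize(prototypes, dim=-1).t()
    assign = sim.argmax(dim=-1)                   # hard alignment per node
    k, d = prototypes.size(0), node_h.size(1)
    out = torch.zeros(k, d)
    for pos in range(k):                          # pool nodes by position
        members = node_h[assign == pos]
        if len(members) > 0:
            out[pos] = members.mean(dim=0)
    return out.flatten()                          # (k * d,) graph representation

node_h = torch.randn(15, 32)                      # node representations from a GNN
prototypes = torch.randn(4, 32)                   # learnable structural prototypes
graph_vec = ssread(node_h, prototypes)            # fed to the classifier
```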
Models based on machine learning can enable accurate and fast molecular property predictions, which is of interest in drug discovery and material design. Various supervised machine learning models have demonstrated promising performance, but the vast chemical space and the limited availability of property labels make supervised learning challenging. Recently, unsupervised transformer-based language models pretrained on a large unlabelled corpus have produced state-of-the-art results in many downstream natural language processing tasks. Inspired by this development, we present molecular embeddings obtained by training an efficient transformer encoder model, MoLFormer, which uses rotary positional embeddings. This model employs a linear attention mechanism, coupled with highly distributed training, on SMILES sequences of 1.1 billion unlabelled molecules from the PubChem and ZINC datasets. We show that the learned molecular representation outperforms existing baselines, including supervised and self-supervised graph neural networks and language models, on several downstream tasks from ten benchmark datasets, and performs competitively on the remaining two. Further analyses, specifically through the lens of attention, demonstrate that MoLFormer trained on chemical SMILES indeed learns the spatial relationships between atoms within a molecule. These results provide encouraging evidence that large-scale molecular language models can capture sufficient chemical and structural information to predict various distinct molecular properties, including quantum-chemical properties.
Extracting informative representations of molecules using graph neural networks (GNNs) is crucial for AI-driven drug discovery. Recently, the graph research community has been trying to replicate the success of self-supervised pretraining in natural language processing, with several successes claimed. However, we find that the benefit brought by self-supervised pretraining on molecular data can be negligible in many cases. We conduct thorough ablation studies on the key components of GNN pretraining, including pretraining objectives, data splitting methods, input features, pretraining dataset scales, and GNN architectures, to see how they affect downstream task accuracy. Our first important finding is that self-supervised graph pretraining does not show statistically significant advantages in many settings. Second, although improvement can be observed with additional supervised pretraining, the improvement may diminish with richer features or more balanced data splits. Third, experimental hyperparameters have a larger impact on downstream task accuracy than the choice of pretraining task. We hypothesize that the complexity of pretraining on molecules is insufficient, leading to less transferable knowledge for downstream tasks.
Molecular representation learning contributes to multiple downstream tasks such as molecular property prediction and drug design. To properly represent molecules, graph contrastive learning is a promising paradigm, as it utilizes self-supervision signals and has no requirement for human annotations. However, prior works fail to incorporate fundamental domain knowledge into graph semantics, and thus ignore the correlations between atoms that have common attributes but are not directly connected by bonds. To address these issues, we construct a Chemical Element Knowledge Graph (KG) to summarize the microscopic associations between elements, and propose a novel Knowledge-enhanced Contrastive Learning (KCL) framework for molecular representation learning. The KCL framework consists of three modules. The first module, knowledge-guided graph augmentation, augments the original molecular graph based on the Chemical Element KG. The second module, knowledge-aware graph representation, extracts molecular representations with a common graph encoder for the original molecular graph and a Knowledge-aware Message Passing Neural Network (KMPNN) to encode the complex information in the augmented molecular graph. The final module is a contrastive objective, where we maximize agreement between these two views of molecular graphs. Extensive experiments demonstrate that KCL obtains superior performance over state-of-the-art baselines on eight molecular datasets. Visualization experiments properly interpret what KCL has learned from the atoms and attributes in the augmented molecular graph. Our code and data are available in the supplementary material.
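The knowledge-guided augmentation can be pictured as attaching attribute nodes from the element KG to the atoms that share them, so that unbonded atoms with common attributes become linked through a path. The sketch below uses a toy attribute dictionary, not the real Chemical Element KG, and the dense-matrix representation is an illustrative assumption.

```python
import torch

def kg_augment(adj, atom_elements, element2attrs, n_atoms):
    """Knowledge-guided augmentation: add one node per KG attribute and
    connect it to every atom whose element has that attribute. Atoms that
    share an attribute become linked through the attribute node."""
    attrs = sorted({a for el in atom_elements for a in element2attrs[el]})
    attr_index = {a: n_atoms + i for i, a in enumerate(attrs)}
    n = n_atoms + len(attrs)
    aug = torch.zeros(n, n)
    aug[:n_atoms, :n_atoms] = adj                 # keep original bond structure
    for i, el in enumerate(atom_elements):
        for a in element2attrs[el]:               # atom -- attribute edges
            j = attr_index[a]
            aug[i, j] = aug[j, i] = 1.0
    return aug

# Toy attribute lists (illustrative placeholders, not real KG data).
element2attrs = {"C": ["organic"], "N": ["organic", "basic"], "Na": ["metallic"]}
adj = torch.eye(3)                                # placeholder 3-atom molecule
aug = kg_augment(adj, ["C", "N", "Na"], element2attrs, n_atoms=3)
print(aug.shape)                                  # (3 + #attributes, 3 + #attributes)
```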
In recent years, graph representation learning has attracted increasing attention, particularly for learning low-dimensional embeddings for classification and recommendation tasks at the node and graph levels. To enable representation learning on large-scale real-world graph data, many studies have focused on developing different sampling strategies to facilitate the training process. Here, we propose an adaptive Graph Policy-driven Sampling model (GPS), where the influence of each node in the local neighborhood is realized through adaptive correlation computation. Specifically, the selection of neighbors is guided by an adaptive policy algorithm and contributes directly to the message aggregation, node embedding update, and graph-level readout steps. We then conduct comprehensive experiments on graph classification tasks from various perspectives. Our proposed model outperforms existing ones by 3%-8% on several important benchmarks, achieving state-of-the-art performance on real-world datasets.
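Policy-driven neighbor selection can be caricatured as scoring each neighbor's correlation with the center node and keeping only the top-scoring ones for aggregation. The scoring network and top-k rule below are stand-ins of my own; the actual policy algorithm in GPS differs.

```python
import torch
import torch.nn as nn

class PolicySampler(nn.Module):
    """Score neighbors by learned correlation with the center node and keep
    only the top-k for message aggregation (a stand-in for the policy)."""
    def __init__(self, dim, k=5):
        super().__init__()
        self.k = k
        self.score = nn.Linear(2 * dim, 1)

    def forward(self, center, neighbors):        # (dim,), (n_nbrs, dim)
        pair = torch.cat(
            [center.expand(neighbors.size(0), -1), neighbors], dim=-1)
        weights = self.score(pair).squeeze(-1)   # learned correlation per neighbor
        k = min(self.k, neighbors.size(0))
        top = weights.topk(k).indices            # policy: keep the best k
        return neighbors[top].mean(dim=0)        # aggregate sampled messages

sampler = PolicySampler(dim=32, k=5)
agg = sampler(torch.randn(32), torch.randn(12, 32))  # aggregated neighborhood
```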
Models that accurately predict properties based on chemical structure are valuable tools in drug discovery. However, for many properties, public and private training sets are typically small, and it is difficult for the models to generalize well outside of the training data. Recently, large language models have addressed this problem by using self-supervised pretraining on large unlabeled datasets, followed by fine-tuning on smaller, labeled datasets. In this paper, we report MolE, a molecular foundation model that adapts the DeBERTa architecture to be used on molecular graphs together with a two-step pretraining strategy. The first step of pretraining is a self-supervised approach focused on learning chemical structures, and the second step is a massive multi-task approach to learn biological information. We show that fine-tuning pretrained MolE achieves state-of-the-art results on 9 of the 22 ADMET tasks included in the Therapeutic Data Commons.
Self-supervised learning (SSL) is a method of learning data representations by utilizing supervision inherent in the data. This learning approach is in the spotlight in the drug field, which lacks annotated data due to time-consuming and expensive experiments. SSL using enormous amounts of unlabeled data has shown excellent performance for molecular property prediction, but several issues remain. (1) Existing SSL models are large-scale; there are limitations to implementing SSL where computational resources are insufficient. (2) In most cases, they do not utilize 3D structural information for molecular representation learning. The activity of a drug is closely related to the structure of the drug molecule, yet most current models do not use 3D information, or use it only partially. (3) Previous models that apply contrastive learning to molecules use augmentations that permute atoms and bonds; therefore, molecules with different characteristics can end up as positive samples of each other. We propose a novel contrastive learning framework, small-scale 3D Graph Contrastive Learning (3DGCL) for molecular property prediction, to address the above problems. 3DGCL learns molecular representations through a pre-training process that reflects the structure of molecules without changing the drug semantics. Using only 1,128 samples for pre-training data and one million model parameters, we achieve state-of-the-art or comparable performance on four regression benchmark datasets. Extensive experiments demonstrate that 3D structural information based on chemical knowledge is essential for molecular representation learning for property prediction.
Graph classification is an important area in both modern research and industry. Multiple applications, especially in chemistry and novel drug discovery, encourage rapid development of machine learning models in this area. To keep up with the pace of new research, proper experimental design, fair evaluation, and independent benchmarks are essential. Design of strong baselines is an indispensable element of such works. In this thesis, we explore multiple approaches to graph classification. We focus on Graph Neural Networks (GNNs), which emerged as a de facto standard deep learning technique for graph representation learning. Classical approaches, such as graph descriptors and molecular fingerprints, are also addressed. We design a fair experimental evaluation protocol and choose a proper collection of datasets. This allows us to perform numerous experiments and rigorously analyze modern approaches. We arrive at many conclusions, which shed new light on the performance and quality of novel algorithms. We investigate the application of the Jumping Knowledge GNN architecture to graph classification, which proves to be an efficient tool for improving base graph neural network architectures. Multiple improvements to baseline models are also proposed and experimentally verified, which constitutes an important contribution to the field of fair model comparison.
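For reference, the Jumping Knowledge idea (Xu et al., 2018) keeps the node representations from every GNN layer and combines them before readout; the sketch below shows the concatenation variant with toy linear layers standing in for real graph convolutions, which is an assumption for brevity.

```python
import torch
import torch.nn as nn

class JumpingKnowledgeGNN(nn.Module):
    """Keep per-layer node representations and 'jump' them all to the
    output via concatenation (the JK-concat variant)."""
    def __init__(self, dim, n_layers=3):
        super().__init__()
        self.layers = nn.ModuleList(nn.Linear(dim, dim) for _ in range(n_layers))
        self.out = nn.Linear(dim * (n_layers + 1), dim)

    def forward(self, x, adj):
        hs = [x]
        for layer in self.layers:
            hs.append(torch.relu(layer(adj @ hs[-1])))   # one message passing step
        h = self.out(torch.cat(hs, dim=-1))              # jump: concat all layers
        return h.mean(dim=0)                             # mean readout per graph

x = torch.randn(9, 32)                                   # node features
adj = torch.softmax(torch.randn(9, 9), dim=-1)           # toy normalized adjacency
graph_vec = JumpingKnowledgeGNN(32)(x, adj)              # graph-level vector
```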