Molecular machine learning (ML) has proven important for tackling a variety of molecular problems, including the prediction of protein-drug interactions and blood-brain barrier permeability. Recently, so-called graph neural networks (GNNs) have been implemented for molecular ML, showing performance comparable or superior to descriptor-based approaches. Although various tools and packages exist for applying GNNs to molecular ML, a new GNN package named MolGraph (https://github.com/akensert/molgraph) was developed in this work, with the purpose of creating GNNs that are highly compatible with the TensorFlow and Keras application programming interface (API). As MolGraph focuses specifically on molecular ML, a chemistry module was implemented to accommodate the generation of molecular graphs, which can then be passed to the GNNs for molecular ML. To validate the GNNs, they were benchmarked against molecular datasets as well as three chromatographic retention time datasets. The results of these benchmarks show that the GNNs performed as expected. Furthermore, the GNNs proved useful for molecular identification and for improving the interpretability of chromatographic retention time data.
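As a rough illustration of the molecular-graph generation step described above, the following generic RDKit/NumPy sketch converts a SMILES string into the node-feature and edge-index tensors that a GNN typically consumes. This is not the MolGraph chemistry-module API; the function name and the choice of atom features are illustrative assumptions.

```python
# Minimal sketch: turning a SMILES string into node-feature and edge-index arrays.
import numpy as np
from rdkit import Chem

def smiles_to_graph(smiles: str):
    """Return (node_features, edge_indices) for a simple molecular graph."""
    mol = Chem.MolFromSmiles(smiles)
    # Illustrative per-atom features: atomic number and aromaticity flag.
    node_features = np.array(
        [[atom.GetAtomicNum(), int(atom.GetIsAromatic())] for atom in mol.GetAtoms()],
        dtype=np.float32,
    )
    # Each undirected bond is stored as two directed edges, as most message-passing GNNs expect.
    edges = []
    for bond in mol.GetBonds():
        i, j = bond.GetBeginAtomIdx(), bond.GetEndAtomIdx()
        edges += [(i, j), (j, i)]
    edge_indices = np.array(edges, dtype=np.int64).T  # shape (2, num_edges)
    return node_features, edge_indices

x, edge_index = smiles_to_graph("CCO")  # ethanol: 3 heavy atoms, 2 bonds
print(x.shape, edge_index.shape)        # (3, 2) (2, 4)
```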
Computational methods that operate on three-dimensional molecular structure have the potential to solve important questions in biology and chemistry. In particular, deep neural networks have gained significant attention, but their widespread adoption in the biomolecular domain has been limited by a lack of either systematic performance benchmarks or a unified toolkit for interacting with molecular data. To address this, we present ATOM3D, a collection of both novel and existing benchmark datasets spanning several key classes of biomolecules. We implement several classes of three-dimensional molecular learning methods for each of these tasks and show that they consistently improve performance relative to methods based on one- and two-dimensional representations. The specific choice of architecture proves to be critical for performance, with three-dimensional convolutional networks excelling at tasks involving complex geometries, graph networks performing well on systems requiring detailed positional information, and the more recently developed equivariant networks showing significant promise. Our results indicate that many molecular problems stand to gain from three-dimensional molecular learning, and that there is potential for improvement on many tasks that remain underexplored. To lower the barrier to entry and facilitate further developments in the field, we also provide a comprehensive suite of tools for dataset processing, model training, and evaluation in our open-source ATOM3D Python package. All datasets can be downloaded from https://www.atom3d.ai.
Machine learning (ML) has demonstrated promise for accurate and efficient property prediction of molecules and crystalline materials. To develop highly accurate ML models for chemical structure property prediction, datasets with sufficient samples are required. However, obtaining clean and sufficient data on chemical properties can be time-consuming and expensive, which greatly limits the performance of ML models. Inspired by the success of data augmentation in computer vision and natural language processing, we developed AugLiChem: a data augmentation library for chemical structures. Augmentation methods are introduced for both crystalline systems and molecules, and can be used with fingerprint-based ML models and graph neural networks (GNNs). We show that using our augmentation strategies significantly improves the performance of ML models, particularly when GNNs are used. The augmentations we developed can be applied as a direct plug-in module during training, and they have demonstrated their effectiveness when implemented with different GNN models through the AugLiChem library. Our Python-based implementation of AugLiChem, the data augmentation library for chemical structures, is publicly available at https://github.com/BaratiLab/AugLiChem.
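The sketch below illustrates the kind of plug-in graph augmentation the abstract refers to, here a simple random masking of atom features. It is a generic example, not the AugLiChem API; the function name and mask rate are arbitrary choices.

```python
# Illustrative molecular-graph augmentation: random atom-feature masking.
import numpy as np

def mask_atom_features(node_features, mask_rate=0.15, rng=None):
    """Zero out the features of a random subset of atoms (a common graph augmentation)."""
    rng = rng or np.random.default_rng()
    x = node_features.copy()
    n_atoms = x.shape[0]
    n_mask = max(1, int(mask_rate * n_atoms))
    masked = rng.choice(n_atoms, size=n_mask, replace=False)
    x[masked] = 0.0  # replace the selected atoms' features with a "mask" value
    return x

augmented = mask_atom_features(np.ones((10, 4), dtype=np.float32), mask_rate=0.2)
print(augmented.sum(axis=1))  # two rows are zeroed out
```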
Graph classification is an important area in both modern research and industry. Multiple applications, especially in chemistry and novel drug discovery, encourage rapid development of machine learning models in this area. To keep up with the pace of new research, proper experimental design, fair evaluation, and independent benchmarks are essential. Design of strong baselines is an indispensable element of such works. In this thesis, we explore multiple approaches to graph classification. We focus on Graph Neural Networks (GNNs), which emerged as a de facto standard deep learning technique for graph representation learning. Classical approaches, such as graph descriptors and molecular fingerprints, are also addressed. We design a fair experimental evaluation protocol and choose a proper collection of datasets. This allows us to perform numerous experiments and rigorously analyze modern approaches. We arrive at many conclusions, which shed new light on the performance and quality of novel algorithms. We investigate the application of the Jumping Knowledge GNN architecture to graph classification, which proves to be an efficient tool for improving base graph neural network architectures. Multiple improvements to baseline models are also proposed and experimentally verified, which constitutes an important contribution to the field of fair model comparison.
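For readers unfamiliar with the Jumping Knowledge idea mentioned above, here is a minimal sketch: node embeddings from every GNN layer are kept and combined (here by concatenation) instead of using only the final layer. The NumPy arrays stand in for actual GNN layer outputs.

```python
# Jumping Knowledge readout with concatenation aggregation (toy NumPy version).
import numpy as np

def jumping_knowledge_concat(layer_outputs):
    """layer_outputs: list of (num_nodes, dim_l) arrays, one per GNN layer."""
    return np.concatenate(layer_outputs, axis=-1)

h1 = np.random.rand(5, 16)   # node embeddings after layer 1
h2 = np.random.rand(5, 16)   # node embeddings after layer 2
h3 = np.random.rand(5, 16)   # node embeddings after layer 3
h_final = jumping_knowledge_concat([h1, h2, h3])
print(h_final.shape)         # (5, 48)
```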
Learning expressive molecular representations is crucial for facilitating accurate prediction of molecular properties. Despite the significant advances of graph neural networks (GNNs) in molecular representation learning, they generally face limitations such as neighbor explosion, under-reaching, over-smoothing, and over-squashing. GNNs also tend to have high computational complexity because of their large number of parameters. Typically, these limitations emerge or worsen when facing relatively large graphs or when using deeper GNN architectures. One idea for overcoming these problems is to simplify a molecular graph into a smaller, rich, and informative one that is more efficient and less challenging for training GNNs. To this end, we propose a novel molecular graph coarsening framework named FunQG, which utilizes functional groups, as influential building blocks of a molecule that determine its properties, based on a graph-theoretic concept called the quotient graph. Through experiments, we show that the resulting informative graphs are much smaller than the original molecular graphs and are thus good candidates for training GNNs. We apply FunQG to popular molecular property prediction benchmarks and then compare the performance of GNN architectures on the obtained datasets with several state-of-the-art baselines on the original datasets. Our experiments show that, besides a dramatic reduction in the number of parameters and low computational complexity, this method significantly outperforms previous baselines on various datasets. FunQG can therefore be used as a simple, cost-effective, and robust method for solving the molecular representation learning problem.
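A toy sketch of the graph-coarsening idea via quotient graphs (not the FunQG implementation): groups of nodes are collapsed into super-nodes using networkx. The partition below is an arbitrary stand-in for the functional-group partition the abstract describes.

```python
# Coarsening a graph by contracting node groups into super-nodes with a quotient graph.
import networkx as nx

G = nx.path_graph(6)                     # toy 6-node skeleton: 0-1-2-3-4-5
partition = [{0, 1}, {2, 3}, {4, 5}]     # pretend each block is one functional group
Q = nx.quotient_graph(G, partition)      # coarsened graph with one node per block

print(Q.number_of_nodes(), Q.number_of_edges())   # 3 nodes, 2 edges
```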
The accurate prediction of physicochemical properties of chemical compounds in mixtures (such as the activity coefficient at infinite dilution $\gamma_{ij}^\infty$) is essential for developing novel and more sustainable chemical processes. In this work, we analyze the performance of previously-proposed GNN-based models for the prediction of $\gamma_{ij}^\infty$, and compare them with several mechanistic models in a series of 9 isothermal studies. Moreover, we develop the Gibbs-Helmholtz Graph Neural Network (GH-GNN) model for predicting $\ln \gamma_{ij}^\infty$ of molecular systems at different temperatures. Our method combines the simplicity of a Gibbs-Helmholtz-derived expression with a series of graph neural networks that incorporate explicit molecular and intermolecular descriptors for capturing dispersion and hydrogen bonding effects. We have trained this model using experimentally determined $\ln \gamma_{ij}^\infty$ data of 40,219 binary-systems involving 1032 solutes and 866 solvents, overall showing superior performance compared to the popular UNIFAC-Dortmund model. We analyze the performance of GH-GNN for continuous and discrete inter/extrapolation and give indications for the model's applicability domain and expected accuracy. In general, GH-GNN is able to produce accurate predictions for extrapolated binary-systems if at least 25 systems with the same combination of solute-solvent chemical classes are contained in the training set and a similarity indicator above 0.35 is also present. This model and its applicability domain recommendations have been made open-source at https://github.com/edgarsmdn/GH-GNN.
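A hedged sketch of the kind of Gibbs-Helmholtz-derived temperature dependence alluded to above, with system-specific parameters supplied by GNN outputs; the exact parameterization used by GH-GNN may differ and should be checked against the linked repository.

```python
# Gibbs-Helmholtz-style temperature dependence: ln(gamma_ij_inf) = A_ij + B_ij / T,
# where A_ij and B_ij would come from graph-neural-network predictions for the
# solute/solvent pair (illustrative form only).
def ln_gamma_inf(A_ij: float, B_ij: float, T: float) -> float:
    """Infinite-dilution activity coefficient (log scale) at temperature T in Kelvin."""
    return A_ij + B_ij / T

# Example with placeholder parameters that a trained GNN might output for one system.
print(ln_gamma_inf(A_ij=1.2, B_ij=350.0, T=298.15))
```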
Elucidating and accurately predicting the druggability and bioactivity of molecules plays a pivotal role in drug design and discovery, and remains an open challenge. Recently, graph neural networks (GNNs) have made remarkable advances in graph-based molecular property prediction. However, current graph-based deep learning methods neglect the hierarchical information of molecules as well as the relationships between feature channels. In this study, we propose a well-designed hierarchical informative graph neural network framework (termed HiGNN) for predicting molecular properties by utilizing molecular graphs together with chemically synthesizable fragments. In addition, a plug-and-play feature-wise attention block is designed in the HiGNN architecture to adaptively recalibrate atomic features after the message-passing phase. Extensive experiments demonstrate that HiGNN achieves state-of-the-art predictive performance on many challenging drug discovery-related benchmark datasets. Furthermore, we devise a molecule-fragment similarity mechanism to comprehensively investigate the interpretability of the HiGNN model at the subgraph level, indicating that HiGNN, as a powerful deep learning tool, can help chemists and pharmacists identify the key parts of molecules for designing better molecules with the desired properties or functions. The source code is publicly available at https://github.com/idruglab/hignn.
Ionic Liquids (ILs) provide a promising solution for CO$_2$ capture and storage to mitigate global warming. However, identifying and designing high-capacity ILs from the giant chemical space requires expensive and exhaustive simulations and experiments. Machine learning (ML) can accelerate the process of searching for desirable ionic molecules through accurate and efficient property predictions in a data-driven manner. However, existing descriptors and ML models for ionic molecules suffer from inefficient adaptation of the molecular graph structure. Besides, few works have investigated the explainability of ML models to help understand the learned features that can guide the design of efficient ionic molecules. In this work, we develop both fingerprint-based ML models and Graph Neural Networks (GNNs) to predict the CO$_2$ absorption in ILs. Fingerprints operate on the graph structure at the feature-extraction stage, while GNNs directly handle the molecular structure in both the feature-extraction and model-prediction stages. We show that our method outperforms previous ML models by reaching a high accuracy (MAE of 0.0137, $R^2$ of 0.9884). Furthermore, we take advantage of the feature representations learned by GNNs and develop a substructure-based explanation method that provides insight into how each chemical fragment within IL molecules contributes to the CO$_2$ absorption prediction of ML models. We also show that our explanation result agrees with some ground truth from the theoretical reaction mechanism of CO$_2$ absorption in ILs, which can advise on the design of novel and efficient functional ILs in the future.
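The snippet below sketches a generic fingerprint-based baseline of the sort described, Morgan fingerprints fed to a standard regressor. It is not the authors' pipeline; the SMILES strings and target values are placeholders.

```python
# Fingerprint-based regression baseline: Morgan bit vectors + random forest.
import numpy as np
from rdkit import Chem
from rdkit.Chem import AllChem
from sklearn.ensemble import RandomForestRegressor

def morgan_fp(smiles, radius=2, n_bits=2048):
    """Compute a Morgan fingerprint bit vector for one molecule."""
    mol = Chem.MolFromSmiles(smiles)
    return np.array(AllChem.GetMorganFingerprintAsBitVect(mol, radius, nBits=n_bits))

smiles_list = ["CCO", "CCN", "CCOCC"]   # placeholder molecules, not real IL species
y = np.array([0.12, 0.30, 0.25])        # placeholder CO2 absorption targets
X = np.stack([morgan_fp(s) for s in smiles_list])

model = RandomForestRegressor(n_estimators=100).fit(X, y)
print(model.predict(X[:1]))
```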
Molecular machine learning has been maturing rapidly over the last few years. Improved methods and the presence of larger datasets have enabled machine learning algorithms to make increasingly accurate predictions about molecular properties. However, algorithmic progress has been limited due to the lack of a standard benchmark to compare the efficacy of proposed methods; most new algorithms are benchmarked on different datasets, making it challenging to gauge the quality of proposed methods. This work introduces MoleculeNet, a large-scale benchmark for molecular machine learning. MoleculeNet curates multiple public datasets, establishes metrics for evaluation, and offers high-quality open-source implementations of multiple previously proposed molecular featurization and learning algorithms (released as part of the DeepChem open-source library).
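MoleculeNet datasets are typically accessed through DeepChem; the call below is a sketch of that usage, and the exact signature and featurizer names may vary with the installed DeepChem version.

```python
# Loading a MoleculeNet benchmark (Tox21) through DeepChem; version-dependent sketch.
import deepchem as dc

# Returns the task names, the (train, valid, test) splits, and the label transformers.
tasks, (train, valid, test), transformers = dc.molnet.load_tox21(featurizer="GraphConv")
print(len(tasks), train.X.shape[0])
```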
Protein-ligand interactions (PLIs) are fundamental to biochemical research, and their identification is crucial for estimating the biophysical and biochemical properties needed for rational therapeutic design. Currently, experimental characterization of these properties is the most accurate approach; however, it is extremely time-consuming and labor-intensive. Many computational methods have been developed in this context, but most existing PLI predictors depend heavily on 2D protein sequence data. Here, we present a novel parallel graph neural network (GNN) that integrates knowledge representation and reasoning for PLI prediction, enabling deep learning guided by expert knowledge and informed by 3D structural data. We develop two distinct GNN architectures: GNNF, the base implementation, which employs distinct featurizations to enhance domain awareness, and GNNP, a novel implementation that can make predictions without prior knowledge of the intermolecular interactions. A comprehensive evaluation demonstrates that the GNNs successfully capture the binary interactions between ligand and protein 3D structures, with a test accuracy of 0.958 for GNNP in predicting the activity of protein-ligand complexes. These models were further adapted to regression tasks to predict experimental binding affinities and pIC50, which are crucial for drug potency and efficacy. We achieve Pearson correlation coefficients of 0.66 and 0.65 on experimental affinity and 0.50 and 0.51 on pIC50 with GNNF and GNNP, respectively, outperforming 2D sequence-based models. Our approach can serve as an interpretable and explainable artificial intelligence (AI) tool for predicting the activity, potency, and biophysical properties of lead candidates. To this end, we demonstrate the utility of GNNP on SARS-CoV-2 protein targets by screening a large compound library and comparing our predictions with experimentally measured data.
In the last few years, graph neural networks (GNNs) have become the standard toolkit for analyzing and learning from data on graphs. This emerging field has witnessed an extensive growth of promising techniques that have been applied with success to computer science, mathematics, biology, physics and chemistry. But for any successful field to become mainstream and reliable, benchmarks must be developed to quantify progress. This led us in March 2020 to release a benchmark framework that i) comprises a diverse collection of mathematical and real-world graphs, ii) enables fair model comparison with the same parameter budget to identify key architectures, iii) has an open-source, easy-to-use and reproducible code infrastructure, and iv) is flexible for researchers to experiment with new theoretical ideas. As of December 2022, the GitHub repository has reached 2,000 stars and 380 forks, demonstrating the utility of the proposed open-source framework through its wide usage by the GNN community. In this paper, we present an updated version of our benchmark with a concise presentation of the aforementioned framework characteristics, an additional medium-sized molecular dataset AQSOL, similar to the popular ZINC, but with a real-world measured chemical target, and discuss how this framework can be leveraged to explore new GNN designs and insights. As a proof of the value of our benchmark, we study the case of graph positional encoding (PE) in GNNs, which was introduced with this benchmark and has since spurred interest in exploring more powerful PE for Transformers and GNNs in a robust experimental setting.
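As an example of the graph positional encoding (PE) studied with this benchmark, the sketch below computes Laplacian-eigenvector positional features for each node; it illustrates the general idea rather than the benchmark's exact implementation.

```python
# Laplacian eigenvector positional encoding: the k non-trivial eigenvectors of the
# graph Laplacian give each node a position-like feature.
import numpy as np
import networkx as nx

def laplacian_pe(G, k=2):
    L = nx.normalized_laplacian_matrix(G).todense()
    eigvals, eigvecs = np.linalg.eigh(np.asarray(L))
    # Skip the first (trivial) eigenvector and keep the next k as positional features.
    return eigvecs[:, 1 : k + 1]

G = nx.cycle_graph(6)
pe = laplacian_pe(G, k=2)
print(pe.shape)   # (6, 2): one 2-dimensional positional feature per node
```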
TensorFlow GNN (TF-GNN) is a scalable library for graph neural networks in TensorFlow. It is designed from the bottom up to support the rich heterogeneous graph data that occurs in today's information ecosystems. Many production models at Google use TF-GNN, and it has recently been released as an open-source project. In this paper, we describe the TF-GNN data model, its Keras modeling API, and associated capabilities such as graph sampling, distributed training, and accelerator support.
Graph neural networks (GNNs) are emerging in chemical engineering for end-to-end learning of physicochemical properties based on molecular graphs. A key element of GNNs is the pooling function, which combines atom vectors into a molecular fingerprint. Most previous works use standard pooling functions to predict a variety of properties. However, unsuitable pooling functions can lead to unphysical GNNs that generalize poorly. We compare and select meaningful GNN pooling methods based on physical knowledge about the learned properties. The impact of physical pooling functions is demonstrated with molecular properties calculated from quantum mechanical computations. We also compare the results with the recent set2set pooling approach. We recommend using sum pooling for the prediction of properties that depend on molecular size, and we compare pooling functions for properties that are independent of molecular size. Overall, we show that the use of physical pooling functions significantly enhances generalization.
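The distinction between pooling functions discussed above comes down to the readout that maps atom vectors to a molecular fingerprint; below is a minimal NumPy illustration of sum versus mean pooling. Sum pooling retains information about molecule size, whereas mean pooling does not.

```python
# Sum vs. mean pooling readout over atom embeddings.
import numpy as np

atom_vectors = np.random.rand(12, 64)        # 12 atoms, 64-dimensional atom embeddings

fingerprint_sum = atom_vectors.sum(axis=0)   # size-dependent (extensive-like) properties
fingerprint_mean = atom_vectors.mean(axis=0) # size-independent (intensive-like) properties

print(fingerprint_sum.shape, fingerprint_mean.shape)   # (64,) (64,)
```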
Many applications of machine learning require a model to make accurate predictions on test examples that are distributionally different from training ones, while task-specific labels are scarce during training. An effective approach to this challenge is to pre-train a model on related tasks where data is abundant, and then fine-tune it on a downstream task of interest. While pre-training has been effective in many language and vision domains, it remains an open question how to effectively use pre-training on graph datasets. In this paper, we develop a new strategy and self-supervised methods for pre-training Graph Neural Networks (GNNs). The key to the success of our strategy is to pre-train an expressive GNN at the level of individual nodes as well as entire graphs so that the GNN can learn useful local and global representations simultaneously. We systematically study pre-training on multiple graph classification datasets. We find that naïve strategies, which pre-train GNNs at the level of either entire graphs or individual nodes, give limited improvement and can even lead to negative transfer on many downstream tasks. In contrast, our strategy avoids negative transfer and improves generalization significantly across downstream tasks, leading up to 9.4% absolute improvements in ROC-AUC over non-pre-trained models and achieving state-of-the-art performance for molecular property prediction and protein function prediction. However, pre-training on graph datasets remains a hard challenge.
Molecular property prediction is one of the fastest-growing applications of deep learning with critical real-world impact. Including 3D molecular structure as input to learned models improves their performance on many molecular tasks. However, this information is infeasible to compute at the scale required by several real-world applications. We propose pre-training a model to reason about the geometry of molecules given only their 2D molecular graphs. Using methods from self-supervised learning, we maximize the mutual information between 3D summary vectors and the representations of a graph neural network (GNN) such that they contain latent 3D information. During fine-tuning on molecules with unknown geometry, the GNN still generates implicit 3D information and can use it to improve downstream tasks. We show that 3D pre-training provides significant improvements for a wide range of properties, such as a 22% average MAE reduction on eight quantum mechanical properties. Moreover, the learned representations can be effectively transferred between datasets in different molecular spaces.
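A hedged sketch of the contrastive, InfoNCE-style objective family commonly used to maximize mutual information between 2D GNN representations and 3D summary vectors; the paper's exact loss and hyperparameters may differ.

```python
# InfoNCE-style contrastive loss between matched 2D and 3D embeddings of the same molecules.
import numpy as np

def info_nce(z_2d, z_3d, temperature=0.1):
    """z_2d, z_3d: (batch, dim) L2-normalized embeddings of matching molecules."""
    logits = z_2d @ z_3d.T / temperature            # pairwise similarities
    labels = np.arange(z_2d.shape[0])               # the i-th 2D view matches the i-th 3D view
    log_softmax = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_softmax[labels, labels].mean()      # pull matched pairs together

z2 = np.random.rand(8, 32); z2 /= np.linalg.norm(z2, axis=1, keepdims=True)
z3 = np.random.rand(8, 32); z3 /= np.linalg.norm(z3, axis=1, keepdims=True)
print(info_nce(z2, z3))
```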
Graph neural networks (GNNs) based on the message passing (MP) paradigm exchange information between 1-hop neighbors to build node representations at each layer. In principle, such networks cannot capture long-range interactions (LRI) that may be desired or necessary for learning a given task on graphs. Recently, there has been increasing interest in the development of Transformer-based methods for graphs that can consider full node connectivity beyond the original sparse structure, thereby enabling the modeling of LRI. However, MP-GNNs that rely solely on 1-hop message passing often fare better on several existing graph benchmarks when combined with positional feature representations, hence limiting the perceived utility and ranking of Transformer-like architectures. Here, we present the Long Range Graph Benchmark (LRGB) with 5 graph learning datasets: PascalVOC-SP, COCO-SP, PCQM-Contact, Peptides-func and Peptides-struct, which arguably require LRI reasoning to achieve strong performance on the given task. We benchmark baseline GNNs and Graph Transformer networks to verify that models capturing long-range dependencies perform significantly better on these tasks. These datasets are therefore suitable for benchmarking and exploring MP-GNN and Graph Transformer architectures intended to capture LRI.
Graph neural networks that use directional message passing have recently set the state of the art on multiple molecular property prediction tasks. However, they rely on atom position information that is often unavailable, and obtaining it is typically very expensive or even impossible. In this paper, we propose synthetic coordinates that enable the use of advanced GNNs without requiring the true molecular configuration. We propose two distances as synthetic coordinates: distance bounds that specify the rough range of molecular configurations, and graph-based distances using a symmetric variant of personalized PageRank. To leverage both distance and angular information, we propose a method for transforming normal graph neural networks into directional MPNNs. We show that with this transformation we can reduce the error of a normal graph neural network by 55% on the ZINC benchmark. We also set the state of the art on ZINC and coordinate-free QM9 by incorporating synthetic coordinates into the SMP and DimeNet++ models. Our implementation is available online.
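One ingredient above is a graph-based distance built from a symmetric variant of personalized PageRank; the sketch below computes a PPR matrix with networkx, symmetrizes it, and turns similarity into a distance-like quantity. This is only an approximation of the idea, not the paper's exact definition.

```python
# Personalized-PageRank-based synthetic node distances (illustrative).
import numpy as np
import networkx as nx

def ppr_distance(G, alpha=0.85):
    nodes = list(G.nodes())
    n = len(nodes)
    P = np.zeros((n, n))
    for i, u in enumerate(nodes):
        # PPR vector with all restart mass on node u.
        ppr = nx.pagerank(G, alpha=alpha, personalization={v: float(v == u) for v in nodes})
        P[i] = [ppr[v] for v in nodes]
    sym = 0.5 * (P + P.T)                  # symmetrized PPR similarity
    return 1.0 / (sym + 1e-9)              # larger similarity -> smaller synthetic distance

G = nx.path_graph(4)
D = ppr_distance(G)
print(D.shape)   # (4, 4)
```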
This work considers the task of representation learning on attributed relational graphs (ARGs). Both nodes and edges in an ARG are associated with attributes/features, allowing ARGs to encode the rich structural information widely observed in real applications. Existing graph neural networks offer limited ability to capture complex interactions within local structural contexts, which hinders them from exploiting the expressive power of ARGs. We propose the Motif Convolution Module (MCM), a new motif-based graph representation learning technique to better utilize local structural information. The ability to handle continuous edge and node features is one of MCM's advantages over existing motif-based models. MCM builds a motif vocabulary in an unsupervised way and deploys a novel motif convolution operation to extract the local structural context of individual nodes, which is then used to learn higher-level node representations via multilayer perceptrons and message passing in graph neural networks. Compared with other graph learning approaches on classifying synthetic graphs, our approach is substantially better at capturing structural context. We also demonstrate the performance and explainability advantages of our approach by applying it to several molecular benchmarks.
Molecular property prediction plays a fundamental role in drug discovery for identifying candidate molecules with target properties. However, molecular property prediction is essentially a few-shot problem, which makes it hard to apply regular machine learning models. In this paper, we propose a Property-Aware Relation network (PAR) to handle this problem. In comparison to existing works, we leverage the fact that both the relevant substructures and the relationships among molecules change across different molecular properties. We first introduce a property-aware embedding function to transform generic molecular embeddings into a substructure-aware space relevant to the target property. Further, we design an adaptive relation graph learning module to jointly estimate the molecular relation graph and refine molecular embeddings w.r.t. the target property, such that the limited labels can be effectively propagated among similar molecules. We adopt a meta-learning strategy in which parameters are selectively updated within tasks in order to model generic and property-aware knowledge separately. Extensive experiments on benchmark molecular property prediction datasets show that PAR consistently outperforms existing methods and can properly obtain property-aware molecular embeddings and model molecular relation graphs.
The Transformer architecture has become a dominant choice in many domains, such as natural language processing and computer vision. Yet it has not achieved competitive performance on popular leaderboards of graph-level prediction compared to mainstream GNN variants. Therefore, it remains a mystery how Transformers could perform well for graph representation learning. In this paper, we solve this mystery by presenting Graphormer, which is built upon the standard Transformer architecture and attains excellent results on a broad range of graph representation learning tasks, especially on the recent OGB Large-Scale Challenge. Our key insight for utilizing Transformers on graphs is the necessity of effectively encoding the structural information of a graph into the model. To this end, we propose several simple yet effective structural encoding methods to help Graphormer better model graph-structured data. In addition, we mathematically characterize the expressive power of Graphormer and show that, with our ways of encoding the structural information of graphs, many popular GNN variants can be covered as special cases of Graphormer.
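A hedged sketch of the spatial-encoding idea: bias each attention logit with a value indexed by the shortest-path distance between the two nodes. Graphormer also uses centrality and edge encodings, which are omitted here; the bias table below is a random stand-in for learned embeddings.

```python
# Structure-aware attention bias from shortest-path distances (simplified illustration).
import numpy as np
import networkx as nx

G = nx.cycle_graph(5)
n = G.number_of_nodes()
spd = dict(nx.all_pairs_shortest_path_length(G))
dist = np.array([[spd[i][j] for j in range(n)] for i in range(n)])

max_dist = dist.max()
bias_table = np.random.randn(max_dist + 1) * 0.1   # stand-in for a learned embedding table
attn_bias = bias_table[dist]                       # (n, n) bias, one value per node pair

scores = np.random.randn(n, n)                     # raw attention scores from Q·K^T
scores = scores + attn_bias                        # structure-aware attention logits
print(scores.shape)                                # (5, 5)
```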