Self-supervised learning has gradually emerged as a powerful technique for graph representation learning. However, transferable, generalizable, and robust representation learning on graph data remains a challenge for pre-training graph neural networks. In this paper, we propose a simple and effective self-supervised pre-training strategy, named Pairwise Half-graph Discrimination (PHD), that explicitly pre-trains a graph neural network at the graph level. PHD is designed as a simple binary classification task: discriminate whether two half-graphs come from the same source. Experiments demonstrate that PHD is an effective pre-training strategy, offering comparable or superior performance on 13 graph classification tasks compared with state-of-the-art strategies, and achieving notable improvements when combined with node-level strategies. Moreover, visualization of the learned representations reveals that the PHD strategy indeed endows the model with the ability to learn graph-level knowledge such as molecular scaffolds. These results establish PHD as a powerful and effective self-supervised strategy for graph-level representation learning.
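The pairwise half-graph discrimination task above reduces to a binary classifier over pairs of half-graph embeddings. Below is a minimal PyTorch sketch of that idea; the splitting scheme (a random node partition), the toy degree-feature encoder, and all names are illustrative assumptions, not the paper's actual design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def split_half(adj: torch.Tensor) -> tuple:
    """Split a graph (dense adjacency) into two half-graphs by a random
    node partition. PHD's exact splitting scheme may differ."""
    n = adj.size(0)
    perm = torch.randperm(n)
    a, b = perm[: n // 2], perm[n // 2 :]
    return adj[a][:, a], adj[b][:, b]

class HalfGraphEncoder(nn.Module):
    """Toy GNN: one propagation step over node degrees, mean readout."""
    def __init__(self, dim=64):
        super().__init__()
        self.proj = nn.Linear(1, dim)
    def forward(self, adj):
        deg = adj.sum(dim=1, keepdim=True)   # degree as the node feature
        h = torch.relu(self.proj(deg))
        h = adj @ h + h                      # one message-passing step
        return h.mean(dim=0)                 # graph-level readout

class PHDHead(nn.Module):
    """Binary classifier: do two half-graphs come from the same source?"""
    def __init__(self, dim=64):
        super().__init__()
        self.enc = HalfGraphEncoder(dim)
        self.cls = nn.Linear(2 * dim, 1)
    def forward(self, half_a, half_b):
        z = torch.cat([self.enc(half_a), self.enc(half_b)])
        return self.cls(z)

# Toy pre-training step on two random graphs.
g1 = (torch.rand(10, 10) > 0.5).float(); g1 = ((g1 + g1.t()) > 0).float()
g2 = (torch.rand(12, 12) > 0.5).float(); g2 = ((g2 + g2.t()) > 0).float()
model = PHDHead()
a1, b1 = split_half(g1)   # positive pair: halves of the same graph
a2, _ = split_half(g2)    # negative pair: halves of different graphs
logits = torch.stack([model(a1, b1).squeeze(), model(a1, a2).squeeze()])
labels = torch.tensor([1.0, 0.0])
loss = F.binary_cross_entropy_with_logits(logits, labels)
loss.backward()
```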
Many applications of machine learning require a model to make accurate predictions on test examples that are distributionally different from training ones, while task-specific labels are scarce during training. An effective approach to this challenge is to pre-train a model on related tasks where data is abundant, and then fine-tune it on a downstream task of interest. While pre-training has been effective in many language and vision domains, it remains an open question how to effectively use pre-training on graph datasets. In this paper, we develop a new strategy and self-supervised methods for pre-training Graph Neural Networks (GNNs). The key to the success of our strategy is to pre-train an expressive GNN at the level of individual nodes as well as entire graphs so that the GNN can learn useful local and global representations simultaneously. We systematically study pre-training on multiple graph classification datasets. We find that naïve strategies, which pre-train GNNs at the level of either entire graphs or individual nodes, give limited improvement and can even lead to negative transfer on many downstream tasks. In contrast, our strategy avoids negative transfer and improves generalization significantly across downstream tasks, leading up to 9.4% absolute improvements in ROC-AUC over non-pre-trained models and achieving state-of-the-art performance for molecular property prediction and protein function prediction.
Molecular representation learning is crucial for the problem of molecular property prediction, where graph neural networks (GNNs) serve as an effective solution due to their structure modeling capabilities. Since labeled data is often scarce and expensive to obtain, it is a great challenge for GNNs to generalize in the extensive molecular space. Recently, the training paradigm of "pre-train, fine-tune" has been leveraged to improve the generalization capabilities of GNNs. It uses self-supervised information to pre-train the GNN, and then performs fine-tuning to optimize the downstream task with just a few labels. However, pre-training does not always yield statistically significant improvement, especially for self-supervised learning with random structural masking. In fact, the molecular structure is characterized by motif subgraphs, which are frequently occurring and influence molecular properties. To leverage the task-related motifs, we propose a novel paradigm of "pre-train, prompt, fine-tune" for molecular representation learning, named molecule continuous prompt tuning (MolCPT). MolCPT defines a motif prompting function that uses the pre-trained model to project the standalone input into an expressive prompt. The prompt effectively augments the molecular graph with meaningful motifs in the continuous representation space; this provides more structural patterns to aid the downstream classifier in identifying molecular properties. Extensive experiments on several benchmark datasets show that MolCPT efficiently generalizes pre-trained GNNs for molecular property prediction, with or without a few fine-tuning steps.
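To make the "pre-train, prompt, fine-tune" idea concrete, here is a hedged sketch of a motif prompting function: embeddings of detected motifs are pooled into a continuous prompt vector that augments the pre-trained GNN's graph embedding before the downstream classifier. The fusion-by-addition scheme and all names are assumptions for illustration, not MolCPT's exact design.

```python
import torch
import torch.nn as nn

class MotifPrompt(nn.Module):
    """Hypothetical motif prompting function: pool motif embeddings into a
    continuous prompt and add it to the frozen GNN's graph embedding."""
    def __init__(self, num_motifs=100, dim=64):
        super().__init__()
        self.motif_emb = nn.Embedding(num_motifs, dim)

    def forward(self, graph_h: torch.Tensor, motif_ids: torch.Tensor):
        prompt = self.motif_emb(motif_ids).mean(dim=0)  # pool motif embeddings
        return graph_h + prompt                         # prompt-augmented embedding

graph_h = torch.randn(64)              # output of a frozen pre-trained GNN (toy)
motif_ids = torch.tensor([3, 17, 42])  # indices of motifs found in the molecule
head = nn.Linear(64, 2)                # downstream property classifier
logits = head(MotifPrompt()(graph_h, motif_ids))
```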
Although substantial efforts have been made using graph neural networks (GNNs) for AI-driven drug discovery (AIDD), effective molecular representation learning remains an open challenge, especially when labeled molecules are insufficient. Recent studies suggest that big GNN models pre-trained by self-supervised learning on unlabeled datasets enable better transfer performance in downstream molecular property prediction tasks. However, they often require large-scale datasets and considerable computational resources, which is time-consuming, computationally expensive, and environmentally unfriendly. To alleviate these limitations, we propose a novel pre-training model for molecular representation learning, Bi-branch Masked Graph Transformer Autoencoder (BatmanNet). BatmanNet features two tailored and complementary graph autoencoders to reconstruct the missing nodes and edges from a masked molecular graph. Surprisingly, we find that a high masking proportion (60%) of the atoms and bonds achieves the best performance. We further propose an asymmetric graph-based encoder-decoder architecture for both nodes and edges, where a transformer-based encoder only takes the visible subset of nodes or edges, and a lightweight decoder reconstructs the original molecule from the latent representation and mask tokens. With this simple yet effective asymmetric design, BatmanNet can learn efficiently even from a much smaller-scale unlabeled molecular dataset, capturing the underlying structural and semantic information and overcoming a major limitation of current deep neural networks for molecular representation learning. For instance, using only 250K unlabeled molecules as pre-training data, our BatmanNet with 2.575M parameters achieves a 0.5% improvement on the average AUC compared with the current state-of-the-art method with 100M parameters pre-trained on 11M molecules.
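The asymmetric masked-autoencoder design described above (a heavy encoder over the visible subset only, mask tokens plus a lightweight decoder for reconstruction) can be sketched for node features alone as follows. BatmanNet itself is transformer-based and handles edges as well, so this is only an illustrative simplification with assumed dimensions.

```python
import torch
import torch.nn as nn

MASK_RATIO = 0.6  # the high masking proportion reported above

class AsymmetricNodeAutoencoder(nn.Module):
    def __init__(self, feat_dim=16, hidden=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(feat_dim, hidden), nn.ReLU(),
                                     nn.Linear(hidden, hidden))  # sees visible nodes only
        self.mask_token = nn.Parameter(torch.zeros(hidden))      # stands in for masked nodes
        self.decoder = nn.Linear(hidden, feat_dim)               # lightweight reconstructor

    def forward(self, x):
        n = x.size(0)
        perm = torch.randperm(n)
        n_vis = int(n * (1 - MASK_RATIO))
        vis, masked = perm[:n_vis], perm[n_vis:]
        latent = torch.zeros(n, self.mask_token.numel())
        latent[vis] = self.encoder(x[vis])       # heavy encoder, visible subset only
        latent[masked] = self.mask_token          # cheap placeholder for the rest
        recon = self.decoder(latent)              # reconstruct all node features
        return ((recon[masked] - x[masked]) ** 2).mean()  # loss on masked nodes only

x = torch.randn(20, 16)                           # 20 nodes, 16-dim toy features
loss = AsymmetricNodeAutoencoder()(x)
loss.backward()
```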
Extracting informative representations of molecules using graph neural networks (GNNs) is crucial for AI-driven drug discovery. Recently, the graph research community has been trying to replicate the success of self-supervised pre-training in natural language processing, with several successes claimed. However, we find that in many cases the benefit of self-supervised pre-training on molecular data is negligible. We conduct thorough ablation studies on the key components of GNN pre-training, including the pre-training objective, data splitting method, input features, scale of the pre-training dataset, and GNN architecture, to determine how they affect downstream task accuracy. Our first important finding is that self-supervised graph pre-training does not provide statistically significant advantages in many settings. Second, although improvement can be observed with additional supervised pre-training, the improvement may diminish with richer or more balanced data splits. Third, experimental hyper-parameters have a larger impact on downstream task accuracy than the choice of pre-training task. We hypothesize that the complexity of pre-training tasks on molecules is insufficient, leading to less transferable knowledge for downstream tasks.
Molecular representation learning contributes to multiple downstream tasks such as molecular property prediction and drug design. To properly represent molecules, graph contrastive learning is a promising paradigm as it utilizes self-supervised signals and requires no human annotations. However, prior works fail to incorporate fundamental domain knowledge into graph semantics and thus ignore the correlations between atoms that share common attributes but are not directly connected by bonds. To address these issues, we construct a Chemical Element Knowledge Graph (KG) to summarize the microscopic associations between elements, and propose a novel Knowledge-enhanced Contrastive Learning (KCL) framework for molecular representation learning. The KCL framework consists of three modules. The first module, knowledge-guided graph augmentation, augments the original molecular graph based on the Chemical Element KG. The second module, knowledge-aware graph representation, extracts molecular representations with a common graph encoder for the original molecular graph and a Knowledge-aware Message Passing Neural Network (KMPNN) to encode the complex information in the augmented molecular graph. The final module is a contrastive objective, where we maximize agreement between these two views of molecular graphs. Extensive experiments demonstrate that KCL obtains superior performance over state-of-the-art baselines on eight molecular datasets. Visualization experiments properly interpret what KCL has learned from atoms and attributes in the augmented molecular graph. Our code and data are available in the supplementary materials.
Graph-level representations are critical in various real-world applications, such as predicting the properties of molecules. But in practice, precise graph annotations are generally very expensive and time-consuming. To address this issue, graph contrastive learning constructs an instance discrimination task, which pulls together positive pairs (augmented pairs of the same graph) and pushes away negative pairs (augmented pairs of different graphs) for unsupervised representation learning. However, since for a query its negatives are uniformly sampled from all graphs, existing methods suffer from the critical sampling bias issue: the negatives likely have the same semantic structure as the query, leading to performance degradation. To mitigate this sampling bias, in this paper we propose a Prototypical Graph Contrastive Learning (PGCL) approach. Specifically, PGCL models the underlying semantic structure of the graph data by clustering semantically similar graphs into the same group, and simultaneously encourages clustering consistency for different augmentations of the same graph. Then, given a query, it performs negative sampling by drawing graphs from clusters that differ from the query's cluster, which ensures the semantic difference between the query and its negative samples. Moreover, for a query, PGCL further reweights its negative samples based on the distance between their prototypes (cluster centroids) and the query's prototype, so that negatives with moderate prototype distances receive relatively large weights. This reweighting strategy proves to be more effective than uniform sampling. Experimental results on various graph benchmarks testify to the advantages of our PGCL over state-of-the-art methods. Code is publicly available at https://github.com/ha-lins/pgcl.
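A hedged sketch of the prototype-based reweighting described above: each negative is weighted by the distance between its cluster prototype and the query's prototype, with moderate distances receiving the largest weights. The Gaussian bump around the median distance is an assumption for illustration; PGCL's exact weighting differs in detail.

```python
import torch
import torch.nn.functional as F

def reweighted_nce(query, positives, negatives, neg_proto, query_proto, tau=0.5):
    """Sketch of a PGCL-style reweighted contrastive loss.
    query: (d,); positives: (p, d); negatives: (m, d)
    neg_proto: (m, d) prototype (cluster centroid) of each negative
    query_proto: (d,) prototype of the query's cluster."""
    q = F.normalize(query, dim=0)
    pos = F.normalize(positives, dim=1)
    neg = F.normalize(negatives, dim=1)

    d = (neg_proto - query_proto).norm(dim=1)   # prototype distances
    w = torch.exp(-((d - d.median()) ** 2))     # peak at moderate distance (assumed form)
    w = w * len(w) / w.sum()                    # keep the weights mean-one

    l_pos = torch.exp(pos @ q / tau).sum()
    l_neg = (w * torch.exp(neg @ q / tau)).sum()
    return -torch.log(l_pos / (l_pos + l_neg))

loss = reweighted_nce(torch.randn(8), torch.randn(2, 8), torch.randn(16, 8),
                      torch.randn(16, 8), torch.randn(8))
```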
Self-supervised learning (SSL) is a method of learning data representations by exploiting supervision inherent in the data. This learning paradigm is a focus in the drug field, which suffers from a lack of annotated data due to time-consuming and expensive experiments. SSL using large amounts of unlabeled data has shown excellent performance for molecular property prediction, but several problems remain. (1) Existing SSL models are large-scale; there are limits to implementing SSL where computational resources are insufficient. (2) In most cases, they do not utilize 3D structural information for molecular representation learning. The activity of a drug is closely related to the structure of the drug molecule, yet most current models do not use 3D information, or use it only partially. (3) Previous models that apply contrastive learning to molecules use augmentations that permute atoms and bonds; therefore, molecules with different characteristics can end up as the same positive samples. We propose a novel contrastive learning framework, small-scale 3D Graph Contrastive Learning (3DGCL) for molecular property prediction, to address the above problems. 3DGCL learns molecular representations through a pre-training process that reflects the structure of the molecule without changing the semantics of the drug. Using only 1,128 samples as pre-training data and one million model parameters, we achieve state-of-the-art or comparable performance on four regression benchmark datasets. Extensive experiments demonstrate that 3D structural information grounded in chemical knowledge is essential for molecular representation learning for property prediction.
Graph neural network (GNN) pre-training methods have been proposed to enhance the power of GNNs. Specifically, a GNN is first pre-trained on a large-scale unlabeled graph and then fine-tuned on a separate small labeled graph for downstream applications, such as node classification. One popular pre-training method is to mask out a proportion of the edges and train a GNN to recover them. However, such a generative method suffers from graph mismatch; that is, the masked graph input to the GNN deviates from the original graph. To alleviate this issue, we propose DIP-GNN (Discriminative Pre-training of Graph Neural Networks). Specifically, we train a generator to recover the identities of the masked edges and, simultaneously, we train a discriminator to distinguish the generated edges from the original graph's edges. In our framework, the graph seen by the discriminator better matches the original graph because the generator can recover a proportion of the masked edges. Extensive experiments on large-scale homogeneous and heterogeneous graphs demonstrate the effectiveness of the framework.
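The generator/discriminator split can be sketched as follows: a generator scores masked edge candidates as link prediction, the discriminator's graph is rebuilt from the original edges plus the candidates the generator accepts, and the discriminator classifies each edge as original versus generated. Toy data and dot-product edge scorers stand in for full GNNs here; all names are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

d = 32
gen_emb = nn.Embedding(100, d)    # generator node embeddings (toy stand-in for a GNN)
disc_emb = nn.Embedding(100, d)   # discriminator node embeddings

def edge_logits(emb, edges):
    """Score node pairs by the dot product of their embeddings."""
    return (emb(edges[:, 0]) * emb(edges[:, 1])).sum(dim=1)

pos_edges = torch.randint(0, 100, (64, 2))   # "original" edges (toy data)
neg_edges = torch.randint(0, 100, (64, 2))   # corrupted candidates

# 1) Generator: recover masked edges (binary link prediction).
g_logits = torch.cat([edge_logits(gen_emb, pos_edges),
                      edge_logits(gen_emb, neg_edges)])
g_labels = torch.cat([torch.ones(64), torch.zeros(64)])
g_loss = F.binary_cross_entropy_with_logits(g_logits, g_labels)

# 2) Build the discriminator's graph: keep candidates the generator believes in.
with torch.no_grad():
    keep = torch.sigmoid(edge_logits(gen_emb, neg_edges)) > 0.5
recovered = torch.cat([pos_edges, neg_edges[keep]])
is_original = torch.cat([torch.ones(len(pos_edges)), torch.zeros(int(keep.sum()))])

# 3) Discriminator: original edge versus generated edge.
d_logits = edge_logits(disc_emb, recovered)
d_loss = F.binary_cross_entropy_with_logits(d_logits, is_original)

(g_loss + d_loss).backward()
```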
Graph self-supervised learning (GSSL) paves the way for learning graph embeddings without expert annotation, which is particularly impactful for molecular graphs since the number of possible molecules is enormous and labels are expensive. However, by design, GSSL methods are not trained to perform well on one downstream task but rather aim to transfer to many, making evaluation less straightforward. As a step toward obtaining profiles of molecular graph embeddings with diverse and interpretable properties, we introduce Molecular Graph Representation Evaluation (MolGraphEval), a suite of probe tasks categorized into (i) topological, (ii) substructure, and (iii) embedding-space properties. By benchmarking existing GSSL methods on both existing downstream datasets and MolGraphEval, we discover surprising discrepancies between conclusions drawn from existing datasets alone and those from more fine-grained probing, suggesting that current evaluation protocols do not provide the whole picture. Our modular, automated end-to-end GSSL pipeline code will be released upon acceptance, including standardized graph loading, experiment management, and embedding evaluation.
Graph contrastive learning has been shown to be an effective task for graph neural network (GNN) pre-training. However, a key issue may seriously impede the representation power of existing works: positive instances created by current methods often miss crucial information of the graph, or even yield illegal instances (such as non-chemically-aware graphs in molecular generation). To remedy this issue, we propose to select positive graph instances directly from existing graphs in the training set, which ultimately maintains legality and similarity to the target graph. Our selection is based on certain domain-specific pairwise similarity measurements, together with sampling from a hierarchical graph that encodes the similarity relationships among graphs. Besides, we develop an adaptive node-level pre-training method to dynamically mask nodes so that they are evenly distributed over the graph. We conduct extensive experiments on 13 graph classification and node classification benchmark datasets from various domains. The results demonstrate that GNN models pre-trained by our strategy can outperform those trained from scratch, as well as the variants obtained by existing methods.
Contrastive learning has been widely applied to graph representation learning, where the view generators play a vital role in producing effective contrastive samples. Most existing contrastive learning methods adopt pre-defined view generation methods, e.g., node dropping or edge perturbation, which usually cannot adapt to input data or preserve the original semantic structures well. To address this issue, we propose a novel framework named Automated Graph Contrastive Learning (AutoGCL). Specifically, AutoGCL employs a set of learnable graph view generators orchestrated by an auto augmentation strategy, where each graph view generator learns a probability distribution over graphs conditioned on the input. While the graph view generators in AutoGCL preserve the most representative structures of the original graph when generating each contrastive sample, the auto augmentation learns policies that introduce adequate augmentation variance throughout the contrastive learning procedure. Furthermore, AutoGCL adopts a joint training strategy to train the learnable view generators, the graph encoder, and the classifier in an end-to-end manner, resulting in topological heterogeneity yet semantic similarity in the generated contrastive samples. Extensive experiments on semi-supervised learning, unsupervised learning, and transfer learning demonstrate the superiority of our AutoGCL framework over the state of the art in graph contrastive learning. Moreover, the visualization results further confirm that the learnable view generators can deliver more compact and semantically meaningful contrastive samples compared with existing view generation methods.
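A minimal sketch of a learnable, input-conditioned view generator in the spirit of the framework above: each node predicts a distribution over {keep, drop, mask}, and a differentiable choice is sampled with Gumbel-Softmax so the augmentation policy can be trained end to end with the encoder. Details are illustrative assumptions, not AutoGCL's exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LearnableViewGenerator(nn.Module):
    """Per-node {keep, drop, mask} choice via Gumbel-Softmax (illustrative)."""
    def __init__(self, feat_dim=16):
        super().__init__()
        self.score = nn.Linear(feat_dim, 3)                # keep / drop / mask logits
        self.mask_token = nn.Parameter(torch.zeros(feat_dim))

    def forward(self, x: torch.Tensor):
        choice = F.gumbel_softmax(self.score(x), tau=1.0, hard=True)  # (N, 3) one-hot
        keep, mask = choice[:, 0:1], choice[:, 2:3]        # column 1 = drop
        return keep * x + mask * self.mask_token           # dropped nodes become zero

x = torch.randn(10, 16)                                    # node features
view = LearnableViewGenerator()(x)                         # one augmented view
```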
Augmented graphs play a vital role in regularizing graph neural networks (GNNs), which leverage information exchange along the edges of a graph in the form of message passing. Owing to their effectiveness, simple edge and node manipulations (e.g., addition and deletion) have been widely used in graph augmentation. Nevertheless, such common augmentation techniques can dramatically change the semantics of the original graph, causing overaggressive augmentation and thus under-fitting in GNN learning. To address the problems induced by dropping or adding graph edges and nodes, we propose SoftEdge, which assigns random weights to a portion of the edges of a given graph for augmentation. The synthetic graph generated by SoftEdge maintains the same nodes and their connectivity as the original graph, thus mitigating semantic changes to the original graph. We empirically show that this simple method obtains superior accuracy to popular node and edge manipulation approaches, and notable resilience to the accuracy degradation that comes with GNN depth.
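SoftEdge's core operation is simple enough to state directly: keep the topology fixed and replace the weights of a random subset of existing edges with random values in (0, 1). The sketch below assumes a dense adjacency representation and a softening fraction p; a symmetric variant would mirror the sampled weights across the diagonal.

```python
import torch

def soft_edge(adj: torch.Tensor, p: float = 0.5) -> torch.Tensor:
    """Replace a random fraction p of existing edge weights with random
    values in (0, 1); connectivity is unchanged. The parameterization by
    a single fraction p is an assumption of this sketch."""
    edge_mask = (adj > 0)                               # existing edges
    softened = torch.rand_like(adj).clamp_min(1e-6)     # strictly positive soft weights
    choose = (torch.rand_like(adj) < p) & edge_mask     # which edges to soften
    out = adj.clone()
    out[choose] = softened[choose]
    return out                                          # same nodes, same connectivity

adj = (torch.rand(6, 6) > 0.5).float()
adj = ((adj + adj.t()) > 0).float()                     # toy undirected graph
aug = soft_edge(adj, p=0.5)
assert torch.equal(aug > 0, adj > 0)                    # connectivity preserved
```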
Unsupervised graph representation learning is a non-trivial topic for graph data. The success of contrastive learning and self-supervised learning in the unsupervised representation learning of structured data motivates similar attempts on graphs. Current unsupervised graph representation learning and pre-training methods using contrastive losses are mainly based on contrast between hand-crafted augmentations of graph data. However, graph data augmentation remains poorly explored due to its unpredictable invariance. In this paper, we propose a novel collaborative graph neural network contrastive learning framework (CGCL), which uses multiple graph encoders to observe the graph. Features observed from different views act as the graph augmentations for contrastive learning between the encoders, avoiding any perturbation so as to guarantee invariance. CGCL can handle both graph-level and node-level representation learning. Extensive experiments demonstrate the advantages of CGCL in unsupervised graph representation learning, and the non-necessity of hand-crafted data augmentation combinations for graph representation learning.
Molecular property prediction is one of the fastest-growing applications of deep learning with critical real-world impact. Including 3D molecular structure as input to learned models can improve their performance on many molecular tasks. However, this information is infeasible to compute at the scale required by several real-world applications. We propose pre-training a model to reason about the geometry of molecules given only their 2D molecular graphs. Using methods from self-supervised learning, we maximize the mutual information between a 3D summary vector and the representations of a graph neural network (GNN) such that they contain latent 3D information. During fine-tuning on molecules with unknown geometry, the GNN still generates implicit 3D information and can use it to improve downstream tasks. We show that 3D pre-training provides significant improvements for a wide range of properties, such as a 22% average MAE reduction on eight quantum mechanical properties. Moreover, the learned representations can be effectively transferred between datasets from different molecular spaces.
Self-supervised learning (SSL) has been extensively explored in recent years. In particular, generative SSL has seen emerging success in natural language processing and other AI fields, such as the wide adoption of BERT and GPT. Despite this, contrastive learning, which heavily relies on structural data augmentation and complicated training strategies, has been the dominant approach in graph SSL, while the progress of generative SSL on graphs, especially graph autoencoders (GAEs), has thus far not reached the potential promised in other fields. In this paper, we identify and examine the issues that negatively impact the development of GAEs, including their reconstruction objective, training robustness, and error metric. We present a masked graph autoencoder, GraphMAE, that mitigates these issues for generative self-supervised graph pre-training. Instead of reconstructing graph structures, we propose to focus on feature reconstruction with both a masking strategy and a scaled cosine error, which benefit the robust training of GraphMAE. We conduct extensive experiments on 21 public datasets for three different graph learning tasks. The results demonstrate that GraphMAE, a simple graph autoencoder with careful designs, can consistently outperform both contrastive and generative state-of-the-art baselines. This study provides an understanding of graph autoencoders and demonstrates the potential of generative self-supervised pre-training on graphs.
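GraphMAE's scaled cosine error is compact enough to write out: for each (masked) node, the loss is (1 - cos(x_rec, x))^γ with γ ≥ 1, which down-weights easy, already-well-reconstructed features. The γ = 2 default below is an assumed setting.

```python
import torch
import torch.nn.functional as F

def scaled_cosine_error(x_rec: torch.Tensor, x: torch.Tensor, gamma: float = 2.0):
    """Scaled cosine error: per-node (1 - cos(x_rec, x))^gamma, averaged
    over the (masked) nodes; gamma=2.0 is an assumed setting."""
    cos = F.cosine_similarity(x_rec, x, dim=-1)
    return ((1.0 - cos) ** gamma).mean()

x = torch.randn(50, 32)        # original node features
x_rec = torch.randn(50, 32)    # decoder output for the same nodes
loss = scaled_cosine_error(x_rec, x)
```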
Recent works explore learning graph representations in a self-supervised manner. In graph contrastive learning, benchmark methods apply various graph augmentation approaches. However, most of these augmentation methods are non-learnable, which causes them to issue unreliable augmented graphs. Such augmentation can degrade the representation ability of graph contrastive learning methods. Therefore, we are motivated to generate augmented graphs with a learnable graph augmenter, called MEta Graph Augmentation (MEGA). We then clarify that a "good" graph augmentation must have uniformity at the instance level and informativeness at the feature level. To this end, we propose a novel approach to learning a graph augmenter that can generate augmentations with uniformity and informativeness. The objective of the graph augmenter is to promote our feature extraction network to learn a more discriminative feature representation, which motivates us to propose a meta-learning paradigm. Empirically, experiments across multiple benchmark datasets demonstrate that MEGA outperforms state-of-the-art methods on graph self-supervised learning tasks. Further experimental studies validate the effectiveness of MEGA's components.
This paper studies learning the representations of whole graphs in both unsupervised and semi-supervised scenarios. Graph-level representations are critical in a variety of real-world applications such as predicting the properties of molecules and community analysis in social networks. Traditional graph kernel based methods are simple, yet effective for obtaining fixed-length representations for graphs, but they suffer from poor generalization due to hand-crafted designs. There are also some recent methods based on language models (e.g., graph2vec), but they tend to only consider certain substructures (e.g., subtrees) as graph representatives. Inspired by recent progress in unsupervised representation learning, in this paper we propose a novel method called InfoGraph for learning graph-level representations. We maximize the mutual information between the graph-level representation and the representations of substructures of different scales (e.g., nodes, edges, triangles). By doing so, the graph-level representations encode aspects of the data that are shared across different scales of substructures. Furthermore, we propose InfoGraph*, an extension of InfoGraph for semi-supervised scenarios. InfoGraph* maximizes the mutual information between unsupervised graph representations learned by InfoGraph and the representations learned by existing supervised methods. As a result, the supervised encoder learns from unlabeled data while preserving the latent semantic space favored by the current supervised task. Experimental results on the tasks of graph classification and molecular property prediction show that InfoGraph is superior to state-of-the-art baselines and InfoGraph* can achieve performance competitive with state-of-the-art semi-supervised models.
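The node-graph mutual information objective can be sketched with the standard Jensen-Shannon MI estimator: each node is scored against every graph summary in the batch, nodes paired with their own graph act as positives, and all other pairings act as negatives. This is a common formulation of the objective, not InfoGraph's verbatim implementation.

```python
import torch
import torch.nn.functional as F

def infograph_loss(node_h: torch.Tensor, graph_h: torch.Tensor, batch: torch.Tensor):
    """Jensen-Shannon MI estimator between node (patch) representations and
    graph-level summaries. node_h: (N, d); graph_h: (G, d);
    batch: (N,) maps each node to its graph id."""
    scores = node_h @ graph_h.t()                      # (N, G) node-graph scores
    pos_mask = F.one_hot(batch, graph_h.size(0)).bool()
    e_pos = -F.softplus(-scores[pos_mask]).mean()      # nodes with their own graph
    e_neg = F.softplus(scores[~pos_mask]).mean()       # nodes with other graphs
    return e_neg - e_pos                               # minimizing maximizes the JS MI bound

# Toy batch of 3 graphs with 4 nodes each.
node_h = torch.randn(12, 16)
graph_h = torch.randn(3, 16)
batch = torch.arange(3).repeat_interleave(4)
loss = infograph_loss(node_h, graph_h, batch)
```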
Generalizable, transferable, and robust representation learning on graph-structured data remains a challenge for current graph neural networks (GNNs). Unlike what has been developed for convolutional neural networks (CNNs) for image data, self-supervised learning and pre-training are less explored for GNNs. In this paper, we propose a graph contrastive learning (GraphCL) framework for learning unsupervised representations of graph data. We first design four types of graph augmentations to incorporate various priors. We then systematically study the impact of various combinations of graph augmentations on multiple datasets, in four different settings: semi-supervised learning, unsupervised learning, transfer learning, and adversarial attacks. The results show that, even without tuning augmentation extents or using sophisticated GNN architectures, our GraphCL framework can produce graph representations of similar or better generalizability, transferability, and robustness compared to state-of-the-art methods. We also investigate the impact of parameterized graph augmentation extents and patterns, and observe further performance gains in preliminary experiments. Our codes are available at: https://github.com/Shen-Lab/GraphCL.
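The contrastive objective behind such frameworks is typically the NT-Xent loss: embeddings of two augmented views of the same graph are positives, and the other graphs in the batch serve as negatives. The sketch below is the standard formulation rather than GraphCL's exact code.

```python
import torch
import torch.nn.functional as F

def nt_xent(z1: torch.Tensor, z2: torch.Tensor, tau: float = 0.5):
    """NT-Xent loss: z1[i] and z2[i] are embeddings of two augmented views
    of graph i; other graphs in the batch act as negatives."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    sim = z1 @ z2.t() / tau               # (B, B) cross-view similarities
    labels = torch.arange(z1.size(0))     # matching index = positive pair
    return F.cross_entropy(sim, labels)

# Two views would come from augmentations such as node dropping or edge
# perturbation followed by a shared GNN encoder; random tensors stand in here.
z1, z2 = torch.randn(8, 32), torch.randn(8, 32)
loss = nt_xent(z1, z2)
```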
Graph representation learning has emerged as a powerful technique for addressing real-world problems. Various downstream graph learning tasks have benefited from its recent developments, such as node classification, similarity search, and graph classification. However, prior work on graph representation learning has focused on domain-specific problems and trained a dedicated model for each graph dataset, which is usually non-transferable to out-of-domain data. Inspired by the recent advances in pre-training from natural language processing and computer vision, we design Graph Contrastive Coding (GCC), a self-supervised graph neural network pre-training framework, to capture the universal network topological properties across multiple networks. We design GCC's pre-training task as subgraph instance discrimination in and across networks and leverage contrastive learning to empower graph neural networks to learn intrinsic and transferable structural representations. We conduct extensive experiments on three graph learning tasks and ten graph datasets. The results show that GCC pre-trained on a collection of diverse datasets can achieve competitive or better performance compared to its task-specific, trained-from-scratch counterparts. This suggests that the pre-training and fine-tuning paradigm presents great potential for graph representation learning.
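Subgraph instance discrimination needs a way to sample subgraph instances; random walk with restart around an ego node is a common choice in GCC-style pre-training, with two samples from the same ego network forming a positive pair. The sketch below is illustrative; GCC's sampler and encoder differ in detail.

```python
import torch

def rw_subgraph(adj, start, steps=8):
    """Sample a small subgraph around `start` via random walk with restart
    (restart probability 0.2, an assumed setting) on a dense adjacency."""
    nodes = {start}
    cur = start
    for _ in range(steps):
        nbrs = adj[cur].nonzero().flatten()
        if len(nbrs) == 0 or torch.rand(1).item() < 0.2:   # restart
            cur = start
            continue
        cur = nbrs[torch.randint(len(nbrs), (1,))].item()
        nodes.add(cur)
    idx = torch.tensor(sorted(nodes))
    return adj[idx][:, idx]

adj = (torch.rand(30, 30) > 0.8).float()
adj = ((adj + adj.t()) > 0).float()                        # toy undirected graph
view1, view2 = rw_subgraph(adj, 0), rw_subgraph(adj, 0)    # positive pair
# A GNN encoder plus a contrastive loss (e.g., InfoNCE) would then be trained
# on such pairs, with subgraphs of other ego nodes acting as negatives.
```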