Although artificial intelligence (AI) has made significant progress in understanding molecules across various domains, existing models typically acquire a single cognitive ability from a single molecular modality. Because the hierarchy of molecular knowledge is profound, even humans learn from different modalities, including intuitive diagrams and professional texts, to aid their understanding. Inspired by this, we propose a molecular multimodal foundation model pretrained from molecular graphs and their semantically related textual data (crawled from published Science Citation Index papers). This AI model represents a critical attempt at directly bridging molecular graphs and natural language. Importantly, by capturing the specific and complementary information of the two modalities, our proposed model can better grasp molecular expertise. Experimental results show that our model not only exhibits promising performance in cross-modal tasks such as cross-modal retrieval and molecule captioning, but also enhances molecular property prediction and possesses the ability to generate meaningful molecular graphs from natural language descriptions. We believe that our model will have a broad impact on AI-empowered research across disciplines such as biology, chemistry, materials, environmental science, and medicine.
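A core ingredient of graph-text alignment of this kind is a symmetric contrastive objective over paired embeddings. The sketch below is a minimal, generic illustration of that objective, not the authors' implementation; the encoders are stubbed with random tensors and all names are illustrative.

```python
# Minimal sketch of cross-modal contrastive alignment between a molecular-graph
# encoder and a text encoder (illustrative; not the authors' implementation).
import torch
import torch.nn.functional as F

def cross_modal_contrastive_loss(graph_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired (graph, text) embeddings."""
    g = F.normalize(graph_emb, dim=-1)             # (B, D)
    t = F.normalize(text_emb, dim=-1)              # (B, D)
    logits = g @ t.T / temperature                 # (B, B) similarity matrix
    targets = torch.arange(g.size(0), device=g.device)
    loss_g2t = F.cross_entropy(logits, targets)    # graph -> matching text
    loss_t2g = F.cross_entropy(logits.T, targets)  # text -> matching graph
    return 0.5 * (loss_g2t + loss_t2g)

# Toy usage with random embeddings standing in for encoder outputs.
graph_emb = torch.randn(8, 256)
text_emb = torch.randn(8, 256)
print(cross_modal_contrastive_loss(graph_emb, text_emb))
```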
Recently, deep learning approaches have been extensively studied for various problems in chemistry, such as property prediction, virtual screening, de novo molecule design, etc. Despite the impressive successes, separately designed networks for specific tasks are usually required for end-to-end training, so it is often difficult to acquire a unified principle to synergistically combine existing models and training datasets for novel tasks. To address this, here we present a novel multimodal chemical foundation model that can be used for various downstream tasks that require a simultaneous understanding of structure and property. Specifically, inspired by recent advances in pre-trained multi-modal foundation models such as Vision-Language Pretrained models (VLP), we propose a novel structure-property multi-modal (SPMM) foundation model using a dual-stream transformer with X-shape attention, so that it can align the molecule structure and the chemical properties in a common embedding space. Thanks to the outstanding structure-property unimodal representation, experimental results confirm that SPMM can simultaneously perform molecule generation, property prediction, classification, reaction prediction, etc., which was previously not possible with a single architecture.
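The abstract does not spell out how the two streams interact; the sketch below shows one plausible cross-attention block in which each stream queries the other. It is a generic stand-in for dual-stream interaction, not the paper's exact X-shape attention, and all dimensions and token counts are illustrative.

```python
# Illustrative cross-attention block for a dual-stream structure-property model.
# A generic sketch of "each stream attends to the other", not the authors' design.
import torch
import torch.nn as nn

class CrossStreamBlock(nn.Module):
    def __init__(self, dim=256, heads=4):
        super().__init__()
        self.struct_to_prop = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.prop_to_struct = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm_s = nn.LayerNorm(dim)
        self.norm_p = nn.LayerNorm(dim)

    def forward(self, struct_tokens, prop_tokens):
        # Structure tokens query the property stream, and vice versa.
        s, _ = self.struct_to_prop(struct_tokens, prop_tokens, prop_tokens)
        p, _ = self.prop_to_struct(prop_tokens, struct_tokens, struct_tokens)
        return self.norm_s(struct_tokens + s), self.norm_p(prop_tokens + p)

block = CrossStreamBlock()
struct_tokens = torch.randn(2, 40, 256)   # e.g. SMILES token embeddings
prop_tokens = torch.randn(2, 53, 256)     # e.g. one token per chemical property
s_out, p_out = block(struct_tokens, prop_tokens)
print(s_out.shape, p_out.shape)
```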
There is increasing adoption of artificial intelligence in drug discovery. However, existing works use machine learning mainly to exploit the chemical structures of molecules yet ignore the vast textual knowledge available in chemistry. Incorporating textual knowledge enables us to realize new drug design objectives, adapt to text-based instructions, and predict complex biological activities. We present a multi-modal molecule structure-text model, MoleculeSTM, by jointly learning molecules' chemical structures and textual descriptions via a contrastive learning strategy. To train MoleculeSTM, we construct the largest multi-modal dataset to date, namely PubChemSTM, with over 280K chemical structure-text pairs. To demonstrate the effectiveness and utility of MoleculeSTM, we design two challenging zero-shot tasks based on text instructions, including structure-text retrieval and molecule editing. MoleculeSTM possesses two main properties: open vocabulary and compositionality via natural language. In experiments, MoleculeSTM obtains state-of-the-art generalization ability to novel biochemical concepts across various benchmarks.
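Once structure and text share an embedding space, zero-shot structure-text retrieval reduces to nearest-neighbour search against a text query. The snippet below is a minimal sketch of that step under the assumption of already-encoded embeddings; the encoders are stubbed with random tensors and are not the released MoleculeSTM checkpoints.

```python
# Sketch of zero-shot structure-text retrieval in a contrastively aligned space:
# rank candidate molecules by cosine similarity to an encoded text instruction.
import torch
import torch.nn.functional as F

def retrieve(text_query_emb, molecule_embs, top_k=3):
    q = F.normalize(text_query_emb, dim=-1)   # (D,)
    m = F.normalize(molecule_embs, dim=-1)    # (N, D)
    scores = m @ q                            # cosine similarities
    return torch.topk(scores, k=top_k)

molecule_embs = torch.randn(100, 256)   # stand-in for encoded candidate molecules
text_query_emb = torch.randn(256)       # stand-in for an encoded instruction
values, indices = retrieve(text_query_emb, molecule_embs)
print(indices.tolist(), values.tolist())
```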
We present $\textbf{MolT5}$ $-$ a self-supervised learning framework for pretraining models on a vast amount of unlabeled natural language text and molecule strings. $\textbf{MolT5}$ allows for new, useful, and challenging analogs of traditional vision-language tasks, such as molecule captioning and text-based de novo molecule generation (altogether: translation between molecules and language), which we explore for the first time. Since $\textbf{MolT5}$ pretrains models on single-modal data, it helps overcome the chemistry domain shortcoming of data scarcity. Furthermore, we consider several metrics, including a new cross-modal embedding-based metric, to evaluate the tasks of molecule captioning and text-based molecule generation. Our results show that $\textbf{MolT5}$-based models are able to generate outputs, both molecules and captions, which in many cases are high quality.
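Because MolT5 is a T5-style sequence-to-sequence model, molecule captioning can be sketched with the standard Hugging Face generation API. The checkpoint name below is an assumption about the released weights (and whether a task-specific fine-tuned variant is required is also assumed); treat it as a placeholder rather than a verified identifier.

```python
# Hedged sketch of molecule captioning with a T5-style seq2seq model.
# The checkpoint name is an assumption; substitute the actually released weights.
from transformers import T5Tokenizer, T5ForConditionalGeneration

checkpoint = "laituan245/molt5-small"  # assumed checkpoint name
tokenizer = T5Tokenizer.from_pretrained(checkpoint)
model = T5ForConditionalGeneration.from_pretrained(checkpoint)

smiles = "CC(=O)Oc1ccccc1C(=O)O"  # aspirin, written as a SMILES string
inputs = tokenizer(smiles, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```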
Self-supervised learning (SSL) is a method of learning data representations by exploiting supervision inherent in the data. This learning approach has attracted attention in the drug domain, where annotated data are scarce due to time-consuming and expensive experiments. SSL using huge amounts of unlabeled data has shown excellent performance for molecular property prediction, but several issues remain. (1) Existing SSL models are large-scale; it is difficult to implement SSL when computational resources are insufficient. (2) In most cases, they do not utilize 3D structural information for molecular representation learning. The activity of a drug is closely related to the structure of the drug molecule, yet most current models either do not use 3D information or use it only partially. (3) Previous models that applied contrastive learning to molecules used augmentations that permute atoms and bonds, so molecules with different characteristics can end up as positive samples for one another. We propose a novel contrastive learning framework, small-scale 3D Graph Contrastive Learning (3DGCL), for molecular property prediction to address the above problems. 3DGCL learns molecular representations through a pretraining process that reflects the structure of the molecule without changing drug semantics. Using only 1,128 samples for pretraining and one million model parameters, we achieve state-of-the-art or comparable performance on four regression benchmark datasets. Extensive experiments demonstrate that 3D structural information grounded in chemical knowledge is essential for molecular representation learning for property prediction.
Our goal in this research is to study a more realistic setting in which we perform weakly supervised multi-modal instance-level product retrieval for fine-grained product categories. We first contribute the Product1M dataset and define two practical instance-level retrieval tasks to enable evaluation for price comparison and personalized recommendation. For both instance-level tasks, it is very challenging to accurately pinpoint the product targets mentioned in the vision-language data and to effectively reduce the influence of irrelevant content. To address this, we train a more effective cross-modal pretraining model that can adaptively incorporate key concept information from multi-modal data by using an entity graph, whose nodes and edges represent entities and similarities, respectively. Specifically, a novel Entity-Graph Enhanced Cross-Modal Pretraining (EGE-CMP) model is proposed for instance-level commodity retrieval, which explicitly injects entity knowledge in both node-based and subgraph-based ways into a self-supervised hybrid-stream transformer. This reduces the confusion between different object contents and effectively guides the network to focus on entities with real semantics. Experimental results verify the efficacy and generalizability of our EGE-CMP, which outperforms several SOTA cross-modal baselines such as CLIP, UNITER, and CAPTURE.
Molecular property prediction is one of the fastest-growing applications of deep learning with critical real-world impact. Including 3D molecular structure as input to learned models can improve their performance on many molecular tasks. However, this information is infeasible to compute at the scale required by several real-world applications. We propose pre-training a model to reason about the geometry of molecules given only their 2D molecular graphs. Using methods from self-supervised learning, we maximize the mutual information between a 3D summary vector and the representations of a graph neural network (GNN), such that they contain latent 3D information. During fine-tuning on molecules with unknown geometry, the GNN still generates implicit 3D information and can use it to improve downstream tasks. We show that 3D pre-training provides significant improvements for a wide range of properties, such as a 22% average MAE improvement on eight quantum mechanical properties. Moreover, the learned representations can be transferred effectively between datasets in different molecular spaces.
Molecular representation learning is crucial for the problem of molecular property prediction, where graph neural networks (GNNs) serve as an effective solution due to their structure modeling capabilities. Since labeled data is often scarce and expensive to obtain, it is a great challenge for GNNs to generalize in the extensive molecular space. Recently, the training paradigm of "pre-train, fine-tune" has been leveraged to improve the generalization capabilities of GNNs. It uses self-supervised information to pre-train the GNN, and then performs fine-tuning to optimize the downstream task with just a few labels. However, pre-training does not always yield statistically significant improvement, especially for self-supervised learning with random structural masking. In fact, the molecular structure is characterized by motif subgraphs, which are frequently occurring and influence molecular properties. To leverage the task-related motifs, we propose a novel paradigm of "pre-train, prompt, fine-tune" for molecular representation learning, named molecule continuous prompt tuning (MolCPT). MolCPT defines a motif prompting function that uses the pre-trained model to project the standalone input into an expressive prompt. The prompt effectively augments the molecular graph with meaningful motifs in the continuous representation space; this provides more structural patterns to aid the downstream classifier in identifying molecular properties. Extensive experiments on several benchmark datasets show that MolCPT efficiently generalizes pre-trained GNNs for molecular property prediction, with or without a few fine-tuning steps.
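The prompting idea can be sketched as follows: embeddings of detected motifs are pooled into a continuous prompt and fused with the pretrained graph embedding before the downstream classifier. Motif detection, the vocabulary size, and the fusion rule below are illustrative assumptions, not MolCPT's exact design.

```python
# Rough sketch of a continuous "motif prompt" fused with a pretrained graph embedding.
import torch
import torch.nn as nn

class MotifPrompter(nn.Module):
    def __init__(self, num_motifs=85, dim=300, num_classes=2):
        super().__init__()
        self.motif_embeddings = nn.Embedding(num_motifs, dim)  # learned prompt table
        self.fuse = nn.Linear(2 * dim, dim)
        self.classifier = nn.Linear(dim, num_classes)

    def forward(self, graph_emb, motif_ids):
        # graph_emb: (B, dim) from a frozen pretrained GNN; motif_ids: (B, M)
        prompt = self.motif_embeddings(motif_ids).mean(dim=1)   # pool motif prompts
        fused = torch.relu(self.fuse(torch.cat([graph_emb, prompt], dim=-1)))
        return self.classifier(fused)

model = MotifPrompter()
graph_emb = torch.randn(4, 300)              # stand-in for GNN output
motif_ids = torch.randint(0, 85, (4, 3))     # e.g. 3 detected motifs per molecule
print(model(graph_emb, motif_ids).shape)
```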
A fundamental goal of artificial intelligence (AI) is to mimic the core cognitive activities of humans. Despite tremendous success in AI research, most existing methods possess only a single cognitive ability. To overcome this limitation and take a solid step toward artificial general intelligence (AGI), we develop a foundation model pre-trained on huge amounts of multimodal data that can be quickly adapted to various downstream cognitive tasks. To achieve this goal, we propose to pre-train our foundation model by self-supervised learning on semantically correlated data crawled from the Internet, and show that promising results can be obtained on a wide range of downstream tasks. In particular, using a model-dissection tool we developed, we demonstrate that our foundation model now possesses strong imagination ability. We believe that our work makes a transformative stride toward AGI, from the common practice of "weak or narrow AI" to "strong or broad AI".
As one of the most intuitive interfaces known to humans, natural language has the potential to mediate many tasks that involve human-computer interaction, especially in application-focused fields like music information retrieval. In this work, we explore cross-modal learning in an attempt to bridge audio and language in the music domain. To this end, we propose MusCALL, a framework for music contrastive audio-language learning. Our approach consists of a dual-encoder architecture that learns the alignment between pairs of music audio and descriptive sentences, producing multimodal embeddings that can be used for text-to-audio and audio-to-text retrieval. Thanks to this property, MusCALL can be transferred to virtually any task that can be cast as text-based retrieval. Our experiments show that our method performs significantly better than the baselines at retrieving audio that matches a textual description and, conversely, text that matches an audio query. We also demonstrate that the multimodal alignment capability of our model can be successfully extended to the zero-shot transfer scenario for genre classification and auto-tagging on two public datasets.
Molecular representation learning contributes to multiple downstream tasks such as molecular property prediction and drug design. To properly represent molecules, graph contrastive learning is a promising paradigm because it utilizes self-supervised signals and requires no human annotation. However, prior works fail to incorporate fundamental domain knowledge into the graph semantics and thus ignore the correlations between atoms that share common attributes but are not directly connected by bonds. To address these issues, we construct a Chemical Element Knowledge Graph (KG) to summarize the microscopic associations between elements and propose a novel Knowledge-enhanced Contrastive Learning (KCL) framework for molecular representation learning. The KCL framework consists of three modules. The first module, knowledge-guided graph augmentation, augments the original molecular graph based on the Chemical Element KG. The second module, knowledge-aware graph representation, extracts molecular representations using a common graph encoder for the original molecular graph and a Knowledge-aware Message Passing Neural Network (KMPNN) to encode the complex information in the augmented molecular graph. The final module is a contrastive objective, in which we maximize agreement between these two views of the molecular graph. Extensive experiments demonstrate that KCL outperforms state-of-the-art baselines on eight molecular datasets. Visualization experiments properly interpret what KCL learns from atoms and attributes in the augmented molecular graph. Our code and data are available in the supplementary material.
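The first module (knowledge-guided augmentation) amounts to attaching attribute nodes from the element KG to the atoms that carry that attribute. The toy sketch below illustrates the idea on plain node and edge lists; the miniature KG and attribute names are illustrative, not the paper's actual knowledge graph.

```python
# Toy sketch of knowledge-guided graph augmentation: attribute nodes from a
# chemical-element KG are attached to the atoms that share that attribute.
element_kg = {
    "O": {"electronegativity:high", "period:2"},
    "N": {"electronegativity:high", "period:2"},
    "C": {"period:2"},
}

def augment(atoms, bonds):
    """atoms: list of element symbols; bonds: list of (i, j) index pairs.
    Returns the augmented node list and edge list with attribute nodes appended."""
    nodes = list(atoms)
    edges = list(bonds)
    attr_index = {}
    for i, element in enumerate(atoms):
        for attr in sorted(element_kg.get(element, ())):
            if attr not in attr_index:              # add each attribute node once
                attr_index[attr] = len(nodes)
                nodes.append(attr)
            edges.append((i, attr_index[attr]))     # link atom -> shared attribute
    return nodes, edges

nodes, edges = augment(["C", "O", "N"], [(0, 1), (0, 2)])
print(nodes)
print(edges)
```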
Understanding the visual and language representations of product content is vital for search and recommendation applications in e-commerce. Serving as the backbone of an online shopping platform, and inspired by recent successes in representation learning research, we propose a contrastive learning framework that aligns language and visual models using unlabeled raw product text and images. We present the techniques we used to train a large-scale representation learning model and share solutions to domain-specific challenges. We study the performance of the pre-trained model as the backbone for diverse downstream tasks, including category classification, attribute extraction, product matching, product clustering, and adult product recognition. Experimental results show that our proposed method outperforms both single-modality and multi-modality baselines on each downstream task.
Artificial intelligence (AI) has been transforming the practice of drug discovery over the past decade. Various AI techniques have been used in a wide range of applications, such as virtual screening and drug design. In this survey, we first give an overview of drug discovery and discuss the related applications, which can be reduced to two major tasks, i.e., molecular property prediction and molecule generation. We then discuss common data resources, molecular representations, and benchmark platforms. Furthermore, to summarize the progress of AI in drug discovery, we present the relevant AI techniques, including model architectures and learning paradigms, covered in the surveyed papers. We expect this survey to serve as a guide for researchers interested in working at the interface of artificial intelligence and drug discovery. We also provide a GitHub repository (https://github.com/dengjianyuan/Survey_AI_Drug_Discovery) containing the papers and, where applicable, code, as a regularly updated learning resource.
Models that accurately predict properties based on chemical structure are valuable tools in drug discovery. However, for many properties, public and private training sets are typically small, and it is difficult for the models to generalize well outside of the training data. Recently, large language models have addressed this problem by using self-supervised pretraining on large unlabeled datasets, followed by fine-tuning on smaller, labeled datasets. In this paper, we report MolE, a molecular foundation model that adapts the DeBERTa architecture to be used on molecular graphs together with a two-step pretraining strategy. The first step of pretraining is a self-supervised approach focused on learning chemical structures, and the second step is a massive multi-task approach to learn biological information. We show that fine-tuning pretrained MolE achieves state-of-the-art results on 9 of the 22 ADMET tasks included in the Therapeutic Data Commons.
Although substantial efforts have been made using graph neural networks (GNNs) for AI-driven drug discovery (AIDD), effective molecular representation learning remains an open challenge, especially in the case of insufficient labeled molecules. Recent studies suggest that big GNN models pre-trained by self-supervised learning on unlabeled datasets enable better transfer performance in downstream molecular property prediction tasks. However, they often require large-scale datasets and considerable computational resources, which is time-consuming, computationally expensive, and environmentally unfriendly. To alleviate these limitations, we propose a novel pre-training model for molecular representation learning, Bi-branch Masked Graph Transformer Autoencoder (BatmanNet). BatmanNet features two tailored and complementary graph autoencoders that reconstruct the missing nodes and edges from a masked molecular graph. To our surprise, we found that a high masking proportion (60%) of the atoms and bonds achieves the best performance. We further propose an asymmetric graph-based encoder-decoder architecture for both nodes and edges, where a transformer-based encoder only takes the visible subset of nodes or edges, and a lightweight decoder reconstructs the original molecule from the latent representation and mask tokens. With this simple yet effective asymmetric design, BatmanNet can learn efficiently even from a much smaller-scale unlabeled molecular dataset and still capture the underlying structural and semantic information, overcoming a major limitation of current deep neural networks for molecular representation learning. For instance, using only 250K unlabelled molecules as pre-training data, our BatmanNet with 2.575M parameters achieves a 0.5% improvement on the average AUC compared with the current state-of-the-art method with 100M parameters pre-trained on 11M molecules.
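The masked-autoencoder idea with an asymmetric encoder-decoder can be sketched on node features alone: the encoder sees only the visible nodes, and a lightweight decoder reconstructs the masked ones from latents plus mask tokens. Plain MLPs stand in for the graph transformer encoder and decoder, so this is a simplified illustration of the principle rather than BatmanNet itself; the 60% mask ratio follows the abstract.

```python
# Simplified sketch of masked node reconstruction with an asymmetric encoder-decoder.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskedNodeAutoencoder(nn.Module):
    def __init__(self, feat_dim=32, hidden=64, mask_ratio=0.6):
        super().__init__()
        self.mask_ratio = mask_ratio
        self.encoder = nn.Sequential(nn.Linear(feat_dim, hidden), nn.ReLU(),
                                     nn.Linear(hidden, hidden))
        self.decoder = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                     nn.Linear(hidden, feat_dim))
        self.mask_token = nn.Parameter(torch.zeros(hidden))

    def forward(self, node_feats):
        n = node_feats.size(0)
        num_mask = int(self.mask_ratio * n)
        perm = torch.randperm(n)
        masked, visible = perm[:num_mask], perm[num_mask:]
        vis_latent = self.encoder(node_feats[visible])       # encode visible nodes only
        mask_latent = self.mask_token.expand(num_mask, -1)    # placeholders for masked nodes
        latent = torch.cat([vis_latent, mask_latent], dim=0)  # order: visible, then masked
        recon = self.decoder(latent)
        target = torch.cat([node_feats[visible], node_feats[masked]], dim=0)
        # Reconstruction loss is computed on the masked positions only.
        return F.mse_loss(recon[len(visible):], target[len(visible):])

model = MaskedNodeAutoencoder()
print(model(torch.randn(20, 32)))   # toy molecule with 20 nodes
```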
Extracting informative representations of molecules with graph neural networks (GNNs) is crucial for AI-driven drug discovery. Recently, the graph research community has been trying to replicate the success of self-supervised pretraining in natural language processing, with several claimed successes. However, we find that in many cases the benefit of self-supervised pretraining on molecular data is negligible. We conduct thorough ablation studies on the key components of GNN pretraining, including pretraining objectives, data splitting methods, input features, pretraining dataset scale, and GNN architecture, to assess their impact on downstream task accuracy. Our first important finding is that self-supervised graph pretraining does not provide statistically significant advantages in many settings. Second, although improvements can be observed with additional supervised pretraining, they may diminish with richer features or more balanced data splits. Third, hyperparameters have a larger impact on downstream task accuracy than the choice of pretraining task. We hypothesize that the complexity of pretraining on molecules is insufficient, leading to less transferable knowledge for downstream tasks.
Recently, cross-modal pretraining has become a research hotspot because of its wide applications in various downstream tasks, including retrieval, captioning, question answering, and so on. However, existing methods adopt single-stream pretraining models to explore a joint vision-language representation for cross-modal retrieval, which easily suffer from computational explosion. Moreover, although conventional dual-stream structures are quite efficient, they still lack vital cross-modal interactions, resulting in low performance. Motivated by these challenges, we propose a Contrastive Cross-Modal Knowledge Sharing Pretraining (COOKIE) method to learn joint text-image representations. Structurally, COOKIE adopts the traditional dual-stream structure because of its acceptable time consumption. To overcome the inherent defects of the dual-stream structure mentioned above, we carefully design two effective modules. Specifically, the first module is a weight-sharing transformer built on top of the visual and textual encoders, aiming to semantically align text and images. This design makes the visual and textual paths focus on the same semantics. The other is three specially designed contrastive learning objectives, aiming to share knowledge between the different models. The shared cross-modal knowledge greatly advances the unimodal representations, thereby boosting single-modal retrieval tasks. Extensive experimental results on multi-modal matching tasks, including cross-modal retrieval, text matching, and image retrieval, reveal the superiority of our pretraining model in both computational efficiency and statistical metrics.
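The weight-sharing head can be illustrated as a single transformer module applied to both modality token sequences, so that the two streams are processed by identical weights. The sketch below is a generic illustration with placeholder sizes, not COOKIE's exact module.

```python
# Sketch of a weight-sharing transformer head applied on top of two unimodal encoders.
import torch
import torch.nn as nn

shared_head = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=256, nhead=4, batch_first=True),
    num_layers=2,
)

image_tokens = torch.randn(2, 49, 256)   # stand-in for visual encoder output
text_tokens = torch.randn(2, 32, 256)    # stand-in for text encoder output
# The identical weights process both modalities, encouraging shared semantics.
image_out = shared_head(image_tokens)
text_out = shared_head(text_tokens)
print(image_out.shape, text_out.shape)
```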
Molecular property prediction plays a fundamental role in drug discovery for identifying candidate molecules with target properties. However, molecular property prediction is essentially a few-shot problem, which makes it hard to use ordinary machine learning models. In this paper, we propose a Property-Aware Relation network (PAR) to handle this problem. In contrast to existing works, we leverage the fact that both the relevant substructures and the relationships among molecules differ across molecular properties. We first introduce a property-aware embedding function that transforms generic molecular embeddings into a substructure-aware space relevant to the target property. Further, we design an adaptive relation graph learning module that jointly estimates the molecular relation graph and refines molecular embeddings w.r.t. the target property, so that the limited labels can be effectively propagated among similar molecules. We adopt a meta-learning strategy in which parameters are selectively updated within tasks in order to model generic and property-aware knowledge separately. Extensive experiments on benchmark molecular property prediction datasets show that PAR consistently outperforms existing methods and can properly obtain property-aware molecular embeddings and model molecular relation graphs.
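The relation-graph idea can be sketched as follows: pairwise similarities between the embeddings of the molecules in a few-shot episode define a soft adjacency matrix, and one propagation step refines each embedding from its neighbours. This is a generic illustration of the idea under assumed shapes, not PAR's learned module.

```python
# Sketch of an adaptive relation graph over the molecules in a few-shot episode.
import torch
import torch.nn.functional as F

def relation_graph_step(embeddings, temperature=0.1):
    """embeddings: (N, D) property-aware embeddings of support + query molecules."""
    z = F.normalize(embeddings, dim=-1)
    adj = torch.softmax(z @ z.T / temperature, dim=-1)  # soft adjacency from similarities
    refined = adj @ embeddings                           # propagate across similar molecules
    return adj, refined

embeddings = torch.randn(12, 128)   # e.g. 10 support + 2 query molecules
adj, refined = relation_graph_step(embeddings)
print(adj.shape, refined.shape)
```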
Graph representation learning has emerged as a powerful technique for addressing real-world problems. Various downstream graph learning tasks have benefited from its recent developments, such as node classification, similarity search, and graph classification. However, prior work on graph representation learning focuses on domain-specific problems and trains a dedicated model for each graph dataset, which is usually non-transferable to out-of-domain data. Inspired by recent advances in pre-training from natural language processing and computer vision, we design Graph Contrastive Coding (GCC), a self-supervised graph neural network pre-training framework, to capture the universal network topological properties across multiple networks. We design GCC's pre-training task as subgraph instance discrimination within and across networks and leverage contrastive learning to empower graph neural networks to learn intrinsic and transferable structural representations. We conduct extensive experiments on three graph learning tasks and ten graph datasets. The results show that GCC pre-trained on a collection of diverse datasets can achieve competitive or better performance compared with its task-specific, trained-from-scratch counterparts. This suggests that the pre-training and fine-tuning paradigm presents great potential for graph representation learning.
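Subgraph instance discrimination needs a way to draw two subgraph "views" of the same node. The toy sampler below uses a random walk with restart over an adjacency dictionary; it is a simplified stand-in for GCC's sampler, with illustrative parameters.

```python
# Toy sketch of subgraph instance discrimination: two random-walk-with-restart
# samples from the same node's ego network form a positive pair; samples from
# other nodes serve as negatives.
import random

def rwr_subgraph(adj, start, walk_length=16, restart_prob=0.3):
    """adj: dict node -> list of neighbours. Returns the set of visited nodes."""
    visited, current = {start}, start
    for _ in range(walk_length):
        if random.random() < restart_prob or not adj.get(current):
            current = start                      # restart at the ego node
        else:
            current = random.choice(adj[current])
        visited.add(current)
    return visited

adj = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1], 3: [1, 4], 4: [3]}
view_a = rwr_subgraph(adj, start=1)   # two views of the same ego network
view_b = rwr_subgraph(adj, start=1)   # -> a positive pair for contrastive learning
print(view_a, view_b)
```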
Models based on machine learning can enable accurate and fast molecular property predictions, which is of interest in drug discovery and material design. Various supervised machine learning models have demonstrated promising performance, but the vast chemical space and the limited availability of property labels make supervised learning challenging. Recently, unsupervised transformer-based language models pretrained on a large unlabelled corpus have produced state-of-the-art results in many downstream natural language processing tasks. Inspired by this development, we present molecular embeddings obtained by training an efficient transformer encoder model, MoLFormer, which uses rotary positional embeddings. This model employs a linear attention mechanism, coupled with highly distributed training, on SMILES sequences of 1.1 billion unlabelled molecules from the PubChem and ZINC datasets. We show that the learned molecular representation outperforms existing baselines, including supervised and self-supervised graph neural networks and language models, on several downstream tasks from ten benchmark datasets, and performs competitively on two others. Further analyses, specifically through the lens of attention, demonstrate that MoLFormer trained on chemical SMILES indeed learns the spatial relationships between atoms within a molecule. These results provide encouraging evidence that large-scale molecular language models can capture sufficient chemical and structural information to predict various distinct molecular properties, including quantum-chemical properties.
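Rotary positional embeddings encode position by rotating each pair of feature channels in the query/key vectors by a position-dependent angle. The sketch below is a minimal, self-contained illustration of that rotation for a sequence of SMILES token vectors; the base and dimensions are the usual illustrative choices, not MoLFormer's training configuration.

```python
# Minimal sketch of rotary positional embeddings (RoPE) applied to per-token vectors.
import torch

def rotary_embed(x, base=10000.0):
    """x: (seq_len, dim) with even dim. Rotates each (even, odd) feature pair
    by a position-dependent angle."""
    seq_len, dim = x.shape
    half = dim // 2
    freqs = base ** (-torch.arange(0, half, dtype=torch.float32) / half)   # (half,)
    angles = torch.arange(seq_len, dtype=torch.float32)[:, None] * freqs   # (seq, half)
    cos, sin = angles.cos(), angles.sin()
    x1, x2 = x[:, 0::2], x[:, 1::2]
    rotated_even = x1 * cos - x2 * sin
    rotated_odd = x1 * sin + x2 * cos
    # Re-interleave the rotated pairs back into the original channel layout.
    return torch.stack([rotated_even, rotated_odd], dim=-1).reshape(seq_len, dim)

queries = torch.randn(40, 64)   # e.g. 40 SMILES tokens, one 64-dim attention head
print(rotary_embed(queries).shape)
```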