We present $\textbf{MolT5}$, a self-supervised learning framework for pretraining models on a vast amount of unlabeled natural-language text and molecule strings. $\textbf{MolT5}$ allows for new, useful, and challenging analogs of traditional vision-language tasks, such as molecule captioning and text-based de novo molecule generation (together: translation between molecules and language), which we explore for the first time. Since $\textbf{MolT5}$ pretrains models on single-modal data, it helps overcome the data-scarcity problem of the chemistry domain. Furthermore, we consider several metrics, including a new cross-modal embedding-based metric, to evaluate the tasks of molecule captioning and text-based molecule generation. Our results show that $\textbf{MolT5}$-based models are able to generate outputs, both molecules and captions, that are in many cases of high quality.
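To make the pretraining recipe concrete, below is a minimal sketch of the T5-style span-corruption objective that MolT5-like pretraining builds on, applied to a SMILES string. The character-level tokenization and sentinel format are simplifying assumptions for illustration, not the paper's exact setup.

```python
import random

def span_corrupt(tokens, corrupt_frac=0.15, mean_span=3, seed=0):
    """T5-style span corruption: replace contiguous token spans with
    sentinel tokens; the target reconstructs the dropped spans."""
    rng = random.Random(seed)
    n_to_mask = max(1, int(len(tokens) * corrupt_frac))
    inputs, targets, i, sentinel = [], [], 0, 0
    while i < len(tokens):
        if n_to_mask > 0 and rng.random() < corrupt_frac:
            span = min(mean_span, len(tokens) - i, n_to_mask)
            inputs.append(f"<extra_id_{sentinel}>")
            targets.append(f"<extra_id_{sentinel}>")
            targets.extend(tokens[i:i + span])
            sentinel += 1
            n_to_mask -= span
            i += span
        else:
            inputs.append(tokens[i])
            i += 1
    return inputs, targets

# Character-level SMILES tokenization is a simplification; real systems
# learn a subword vocabulary over both English text and SMILES.
smiles = list("CC(=O)Oc1ccccc1C(=O)O")  # aspirin
inp, tgt = span_corrupt(smiles)
print("input: ", "".join(inp))
print("target:", "".join(tgt))
```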
Although artificial intelligence (AI) has made significant progress in understanding molecules across various domains, existing models typically acquire a single cognitive ability from a single molecular modality. Since the hierarchy of molecular knowledge is profound, even humans learn from different modalities, including intuitive diagrams and professional texts, to assist their understanding. Inspired by this, we propose a molecular multimodal foundation model that is pretrained from molecular graphs and their semantically related textual data (crawled from published Scientific Citation Index papers). This AI model represents a critical attempt at directly bridging molecular graphs and natural language. Importantly, by capturing the specific and complementary information of the two modalities, our proposed model can better grasp molecular expertise. Experimental results show that our model not only exhibits promising performance in cross-modal tasks such as cross-modal retrieval and molecule captioning, but also enhances molecular property prediction and possesses the ability to generate meaningful molecular graphs from natural-language descriptions. We believe our model will have broad impact on AI-empowered fields across disciplines such as biology, chemistry, materials, the environment, and medicine.
There is increasing adoption of artificial intelligence in drug discovery. However, existing works use machine learning mainly to exploit the chemical structures of molecules, yet ignore the vast textual knowledge available in chemistry. Incorporating textual knowledge enables us to realize new drug design objectives, adapt to text-based instructions, and predict complex biological activities. We present a multi-modal molecule structure-text model, MoleculeSTM, by jointly learning molecules' chemical structures and textual descriptions via a contrastive learning strategy. To train MoleculeSTM, we construct the largest multi-modal dataset to date, namely PubChemSTM, with over 280K chemical structure-text pairs. To demonstrate the effectiveness and utility of MoleculeSTM, we design two challenging zero-shot tasks based on text instructions: structure-text retrieval and molecule editing. MoleculeSTM possesses two main properties: open vocabulary and compositionality via natural language. In experiments, MoleculeSTM obtains state-of-the-art generalization ability to novel biochemical concepts across various benchmarks.
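The contrastive strategy pairs each structure with its description in a shared embedding space. Below is a minimal CLIP-style sketch of such a symmetric InfoNCE loss; the encoder outputs are random stand-ins and the function names are illustrative, not MoleculeSTM's actual code.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(mol_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE over a batch of paired (structure, text) embeddings:
    matched pairs sit on the diagonal of the similarity matrix."""
    mol = F.normalize(mol_emb, dim=-1)
    txt = F.normalize(text_emb, dim=-1)
    logits = mol @ txt.t() / temperature      # (B, B) cosine similarities
    labels = torch.arange(logits.size(0))     # i-th molecule <-> i-th text
    return 0.5 * (F.cross_entropy(logits, labels) +
                  F.cross_entropy(logits.t(), labels))

# Toy batch: in a MoleculeSTM-like setup these would come from a molecule
# encoder (SMILES or graph) and a text encoder over paired descriptions.
loss = contrastive_loss(torch.randn(8, 256), torch.randn(8, 256))
```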
Motivation: The development of novel compounds targeting a protein of interest is one of the most important tasks in the pharmaceutical industry. Deep generative models have been applied to targeted molecular design and have shown promising results. Recently, target-specific molecule generation has been viewed as a translation between the protein language and the chemical language. However, such a model is limited by the availability of interacting protein-ligand pairs. On the other hand, large amounts of unlabeled protein sequences and chemical compounds are available and have been used to train language models that learn useful representations. In this study, we propose exploiting pretrained biochemical language models to initialize (i.e., warm start) targeted molecule generation models. We investigate two warm-start strategies: (i) a one-stage strategy, where the initialized model is trained on targeted molecule generation, and (ii) a two-stage strategy, which contains pretraining on molecule generation followed by target-specific training. We also compare two decoding strategies for generating compounds: beam search and sampling. Results: The results show that the warm-started models perform better than baseline models trained from scratch. The two proposed warm-start strategies achieve similar results to each other with respect to widely used metrics on benchmarks. However, docking evaluation of the generated compounds for a number of novel proteins suggests that the one-stage strategy generalizes better than the two-stage strategy. Additionally, we observe that beam search outperforms sampling in both the docking evaluation and the benchmark metrics for assessing compound quality. Availability and implementation: The source code is available at https://github.com/boun-tabi/biochemical-lms-for-drug-design and materials are archived on Zenodo at https://doi.org/10.5281/zenodo.6832145
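The two decoding strategies compared above can be illustrated with the Hugging Face `generate` API; the checkpoint name below is a placeholder, not the authors' released model, and the protein sequence is an arbitrary example.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Hypothetical checkpoint name; the paper's models are warm-started from
# pretrained protein/chemical language models.
tok = AutoTokenizer.from_pretrained("my-org/protein2smiles")
model = AutoModelForSeq2SeqLM.from_pretrained("my-org/protein2smiles")

inputs = tok("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ", return_tensors="pt")

# Deterministic beam search: keeps the k highest-scoring partial sequences.
beams = model.generate(**inputs, num_beams=5, num_return_sequences=5,
                       max_new_tokens=128)

# Stochastic sampling: draws each token from the softmax distribution.
samples = model.generate(**inputs, do_sample=True, top_k=50,
                         num_return_sequences=5, max_new_tokens=128)
```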
Self-supervised neural language models have recently found wide application in the generative design of organic molecules and protein sequences, as well as in representation learning for downstream structure classification and functional prediction. However, most existing deep learning models for molecule design usually require a large dataset and have a black-box architecture, which makes it difficult to interpret their design logic. Here we propose the Generative Molecular Transformer (GMTransformer), a probabilistic neural network model for the generative design of molecules. Our model is built on the blank-filling language model originally developed for text processing, which gives it a unique advantage in learning "molecular grammars" with high-quality generation, interpretability, and data efficiency. Benchmarked on the MOSES datasets, our model achieves high novelty and Scaf scores compared with other baselines. The probabilistic generation steps have the potential for tinkering-style molecule design, as they have the capability of recommending how to modify existing molecules, with explanations, guided by the learned implicit molecular chemistry. The source code and datasets can be freely accessed at https://github.com/usccolumbia/gmtransformer
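A rough sketch of how a blank-filling training example over SMILES might be constructed follows; the `[BLANK_k]` token format is an assumption for illustration, not GMTransformer's exact vocabulary.

```python
import random

def make_blank_filling_example(tokens, n_blanks=2, seed=0):
    """Blank-filling setup: hide a few tokens behind numbered blanks and
    ask the model to rewrite the sequence with the blanks resolved."""
    rng = random.Random(seed)
    positions = sorted(rng.sample(range(len(tokens)), n_blanks))
    source = list(tokens)
    answers = []
    for k, pos in enumerate(positions):
        answers.append((f"[BLANK_{k}]", source[pos]))
        source[pos] = f"[BLANK_{k}]"
    return "".join(source), answers

src, ans = make_blank_filling_example(list("CC(=O)Nc1ccc(O)cc1"))  # paracetamol
print(src)   # e.g. CC(=O)N[BLANK_0]1ccc(O)c[BLANK_1]1
print(ans)   # e.g. [('[BLANK_0]', 'c'), ('[BLANK_1]', 'c')]
```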
Although deep generative models have recently become ubiquitous in many scientific areas, considerably less attention has been paid to their evaluation. For molecular generative models, the state of the art assesses outputs in isolation or in relation to the inputs. However, their biological and functional properties, such as ligand-target interactions, have not yet been addressed. In this study, a novel biologically-inspired benchmark for evaluating molecular generative models is proposed. Specifically, three diverse reference datasets are designed and a set of metrics directly relevant to the drug discovery process is introduced. In particular, we propose a recreation metric and apply drug-target affinity prediction and molecular docking as complementary techniques for evaluating generative outputs. While all three metrics show consistent results across the tested generative models, a more detailed comparison of drug-target affinity and molecular docking scores reveals that unimodal predictors can lead to erroneous conclusions about target binding at the molecular level, so multi-modal approaches are preferable. The key advantage of this framework is that it incorporates prior physico-chemical domain knowledge into the benchmarking process by explicitly focusing on ligand-target interactions, creating a highly efficient tool not only for evaluating molecular generative outputs but also for enriching the drug discovery process in general.
We discover a robust self-supervised strategy tailored towards molecular representations for generative masked language models through a series of tailored, in-depth ablations. Using this pre-training strategy, we train BARTSmiles, a BART-like model with an order of magnitude more compute than previous self-supervised molecular representations. In-depth evaluations show that BARTSmiles consistently outperforms other self-supervised representations across classification, regression, and generation tasks, setting a new state-of-the-art on 11 tasks. We then quantitatively show that, when applied to the molecular domain, the BART objective learns representations that implicitly encode our downstream tasks of interest. For example, by selecting seven neurons from a frozen BARTSmiles, we can obtain a model having performance within two percentage points of the fully fine-tuned model on the ClinTox task. Lastly, we show that standard attribution interpretability methods, when applied to BARTSmiles, highlight certain substructures that chemists use to explain specific properties of molecules. The code and the pretrained model are publicly available.
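The seven-neuron result above is a probing experiment; a hedged sketch of such a probe follows, using random stand-in activations and illustrative neuron indices rather than actual BARTSmiles features.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# feats: frozen encoder activations per molecule (random stand-ins here);
# labels: a binary property such as ClinTox toxicity.
rng = np.random.default_rng(0)
feats = rng.normal(size=(500, 1024))
labels = rng.integers(0, 2, size=500)

# Probe a hand-picked subset of neurons; the claim being tested is that a
# few frozen dimensions already carry most of the task signal.
neuron_idx = [3, 17, 42, 101, 256, 512, 900]   # illustrative indices
probe = LogisticRegression(max_iter=1000)
scores = cross_val_score(probe, feats[:, neuron_idx], labels, cv=5,
                         scoring="roc_auc")
print(scores.mean())
```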
Artificial intelligence (AI) has been transforming the practice of drug discovery over the past decade. Various AI techniques have been used in a wide range of applications, such as virtual screening and drug design. In this survey, we first give an overview of drug discovery and discuss related applications, which can be reduced to two major tasks: molecular property prediction and molecule generation. We then discuss common data resources, molecular representations, and benchmark platforms. Furthermore, to summarize the progress of AI in drug discovery, we present the relevant AI techniques, including model architectures and learning paradigms, in the surveyed papers. We expect this survey to serve as a guide for researchers interested in working at the interface of artificial intelligence and drug discovery. We also provide a GitHub repository (https://github.com/dengjianyuan/Survey_AI_Drug_Discovery) containing the related papers and code, as applicable, as a regularly updated learning resource.
Connecting vision and language plays an essential role in generative intelligence. For this reason, large research efforts have been devoted to image captioning, i.e. describing images with syntactically and semantically meaningful sentences. Starting from 2015, the task has generally been addressed with pipelines composed of a visual encoder and a language model for text generation. During these years, both components have evolved considerably through the exploitation of object regions and attributes, the introduction of multi-modal connections, fully-attentive approaches, and BERT-like early-fusion strategies. However, regardless of the impressive results, research in image captioning has not yet reached a conclusive answer. This work aims at providing a comprehensive overview of image captioning approaches, from visual encoding and text generation to training strategies, datasets, and evaluation metrics. In this respect, we quantitatively compare many relevant state-of-the-art approaches to identify the most impactful technical innovations in architectures and training strategies. Moreover, many variants of the problem and its open challenges are discussed. The final goal of this work is to serve as a tool for understanding the existing literature and highlighting future directions for a research area where computer vision and natural language processing can find an optimal synergy.
Models that accurately predict properties based on chemical structure are valuable tools in drug discovery. However, for many properties, public and private training sets are typically small, and it is difficult for the models to generalize well outside of the training data. Recently, large language models have addressed this problem by using self-supervised pretraining on large unlabeled datasets, followed by fine-tuning on smaller, labeled datasets. In this paper, we report MolE, a molecular foundation model that adapts the DeBERTa architecture to be used on molecular graphs together with a two-step pretraining strategy. The first step of pretraining is a self-supervised approach focused on learning chemical structures, and the second step is a massive multi-task approach to learn biological information. We show that fine-tuning pretrained MolE achieves state-of-the-art results on 9 of the 22 ADMET tasks included in the Therapeutic Data Commons.
Recently, deep learning approaches have been extensively studied for various problems in chemistry, such as property prediction, virtual screening, de novo molecule design, etc. Despite the impressive successes, separately designed networks for specific tasks are usually required for end-to-end training, so it is often difficult to acquire a unified principle for synergistically combining existing models and training datasets for novel tasks. To address this, we present a novel multi-modal chemical foundation model that can be used for various downstream tasks that require a simultaneous understanding of structure and property. Specifically, inspired by recent advances in pre-trained multi-modal foundation models such as Vision-Language Pretrained models (VLP), we propose a novel structure-property multi-modal (SPMM) foundation model using a dual-stream transformer with X-shape attention, so that it can align the molecule structure and the chemical properties in a common embedding space. Thanks to the outstanding structure-property unimodal representations, experimental results confirm that SPMM can simultaneously perform molecule generation, property prediction, classification, reaction prediction, etc., which was previously not possible with a single architecture.
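The alignment between the two streams can be approximated generically as mutual cross-attention between a structure stream and a property stream; the sketch below is a plain dual-stream block under that assumption, not SPMM's exact X-shape attention architecture.

```python
import torch
import torch.nn as nn

class CrossAttentionBlock(nn.Module):
    """One dual-stream step: each modality attends over the other, a rough
    sketch of aligning structure tokens with property tokens."""
    def __init__(self, dim=256, heads=4):
        super().__init__()
        self.s2p = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.p2s = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, struct, props):
        # Structure queries attend over property keys/values, and vice versa.
        s, _ = self.s2p(struct, props, props)
        p, _ = self.p2s(props, struct, struct)
        return struct + s, props + p

block = CrossAttentionBlock()
struct = torch.randn(2, 40, 256)   # e.g. SMILES token embeddings
props = torch.randn(2, 53, 256)    # e.g. one embedding per chemical property
s_out, p_out = block(struct, props)
```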
Goal-oriented generative script learning aims to generate subsequent steps based on a goal, which is an essential task to assist robots in performing stereotypical activities of daily life. We show that the performance of this task can be improved if historical states are captured not only by the given language instructions but also augmented with the additional information provided by accompanying images. Therefore, we propose a new task, Multimedia Generative Script Learning, to generate subsequent steps by tracking historical states in both the text and vision modalities, and introduce the first benchmark containing 2,338 tasks and 31,496 steps. We aim to generate scripts that are visual-state trackable, inductive for unseen tasks, and diverse in their individual steps. We propose to encode visual state changes through a multimedia selective encoder, transfer knowledge from previously observed tasks using a retrieval-augmented decoder, and present distinct information at each step by optimizing a diversity-oriented contrastive learning objective. We define metrics to evaluate both generation quality and inductive quality. Experimental results demonstrate that our approach significantly outperforms strong baselines.
In this work, we propose MEDICO, a Multi-viEw Deep generative model for molecule generation, structural optimization, and the SARS-CoV-2 Inhibitor disCOvery. To the best of our knowledge, MEDICO is the first-of-its-kind graph generative model that can generate molecular graphs similar to the structure of targeted molecules, with a multi-view representation learning framework to sufficiently and adaptively learn comprehensive structural semantics from targeted molecular topology and geometry. We show that MEDICO significantly outperforms state-of-the-art methods in generating valid, unique, and novel molecules under benchmarking comparisons. In particular, we showcase that the multi-view deep learning model enables us to generate not only molecules structurally similar to the targeted molecules but also molecules with desired chemical properties, demonstrating the strong capability of our model in exploring the chemical space deeply. Moreover, case-study results on targeted molecule generation for the SARS-CoV-2 main protease (Mpro) show that, by integrating molecular docking into our model as a chemical prior, we successfully generate new small molecules with desired drug-like properties for Mpro, potentially accelerating the de novo design of Covid-19 drugs. Further, we apply MEDICO to the structural optimization of three well-known Mpro inhibitors (N3, 11a, and GC376) and achieve ~88% improvement in their binding affinity to Mpro, demonstrating the application value of our model for the development of therapeutics for SARS-CoV-2 infection.
Humans exploit prior knowledge to describe images, and are able to adapt their explanations to specific contextual information, even to the extent of inventing plausible explanations when the contextual information and the image do not match. In this work, we propose the novel task of captioning Wikipedia images by integrating contextual knowledge. Specifically, we produce models that jointly reason over Wikipedia articles, Wikimedia images, and their associated descriptions to produce contextualized captions. Notably, a similar Wikimedia image can be used to illustrate different articles, and the produced caption needs to adapt to the specific context, allowing us to explore the limits of a model's ability to adjust captions to different contextual information. A particularly challenging issue in this domain is dealing with out-of-dictionary words and named entities. To address this, we propose a pre-training objective, Masked Named Entity Modeling (MNEM), and show that this pretext task improves over baseline models. Furthermore, we verify that a model pre-trained with the MNEM objective on Wikipedia generalizes well to a news captioning dataset. Additionally, we define two different test splits according to the difficulty of the captioning task. We offer insights on the role and importance of each modality and highlight the limitations of our model. The code, models, and data splits will be made publicly available upon acceptance.
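A minimal sketch of an MNEM-style pretext example follows; the span format and `[MASK]` token are assumptions for illustration, not the paper's exact implementation.

```python
def mask_named_entities(tokens, entity_spans, mask_token="[MASK]"):
    """MNEM-style pretext task (a sketch): hide the tokens of named entities
    so the model must recover them from article context and the image."""
    masked, targets = list(tokens), []
    for start, end in entity_spans:           # spans from any NER tagger
        targets.append(tokens[start:end])
        masked[start:end] = [mask_token] * (end - start)
    return masked, targets

caption = "The Eiffel Tower at night , Paris".split()
masked, targets = mask_named_entities(caption, [(1, 3), (6, 7)])
# masked -> ['The', '[MASK]', '[MASK]', 'at', 'night', ',', '[MASK]']
```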
In drug discovery, the rational design of new molecules with desired bioactivity is a critical but challenging task, especially when treating novel target families or understudied targets. Here, we propose PGMG, a pharmacophore-guided deep learning approach for bioactive molecule generation. Through the guidance of pharmacophores, PGMG provides a flexible strategy for generating bioactive molecules with structural diversity in various scenarios using a trained variational autoencoder. We show that PGMG can generate molecules matching given pharmacophore models while maintaining a high level of validity, uniqueness, and novelty. In case studies, we demonstrate the application of PGMG to generate bioactive molecules in ligand-based and structure-based de novo drug design, as well as in lead-optimization scenarios. Overall, the flexibility and effectiveness of PGMG make it a useful tool for accelerating the drug discovery process.
Despite significant progress of generative models in the natural sciences, their controllability remains challenging. One fundamentally missing aspect of molecular or protein generative models is an inductive bias that can reflect continuous properties of interest. To that end, we propose the Regression Transformer (RT), a novel method that abstracts regression as a conditional sequence modeling problem. This introduces a new paradigm of multitask language models which seamlessly bridge sequence regression and conditional sequence generation. We thoroughly demonstrate that, despite using a nominal-scale training objective, the RT matches or surpasses the performance of conventional regression models in property prediction tasks of small molecules, proteins and chemical reactions. Critically, priming the same model with continuous properties yields a highly competitive conditional generative model that outperforms specialized approaches in a substructure-constrained, property-driven molecule generation benchmark. Our dichotomous approach is facilitated by a novel, alternating training scheme that enables the model to decorate seed sequences by desired properties, e.g., to optimize reaction yield. In sum, the RT is the first report of a multitask model that concurrently excels at predictive and generative tasks in biochemistry. This finds particular application in property-driven, local exploration of the chemical or protein space and could pave the road toward foundation models in material design. The code to reproduce all experiments of the paper is available at: https://github.com/IBM/regression-transformer
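One plausible way to abstract regression as conditional sequence modeling is to tokenize a continuous property digit by digit and prepend it to the molecule, so that priming with a desired value conditions the generation. The token vocabulary below is illustrative, not the RT paper's exact scheme.

```python
def encode_property(name, value, precision=3):
    """Digit-level tokenization of a continuous property so that regression
    can be framed as token prediction, e.g. qed 0.931 ->
    ['<qed>', '_0_', '_._', '_9_', '_3_', '_1_']."""
    digits = f"{value:.{precision}f}"
    return [f"<{name}>"] + [f"_{ch}_" for ch in digits]

def prime_sequence(prop_tokens, smiles_tokens):
    # Property tokens are prepended; at generation time the model is
    # "primed" with a desired value and asked to complete the molecule.
    return prop_tokens + ["|"] + smiles_tokens

tokens = prime_sequence(encode_property("qed", 0.931),
                        list("CC(=O)Oc1ccccc1C(=O)O"))
print(tokens[:8])  # ['<qed>', '_0_', '_._', '_9_', '_3_', '_1_', '|', 'C']
```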
Foundation models have received much attention due to their effectiveness across a broad range of downstream applications. Although there is a big convergence in terms of architecture, most pretrained models are still typically developed for specific tasks or modalities. In this work, we propose to use language models as a general-purpose interface to various foundation models. A collection of pretrained encoders perceive diverse modalities (such as vision and language), and they dock with a language model that plays the role of a universal task layer. We propose a semi-causal language modeling objective to jointly pretrain the interface and the modular encoders. We subsume the advantages and capabilities of both causal and non-causal modeling, thereby combining the best of both worlds. Specifically, the proposed method not only inherits the capabilities of in-context learning and open-ended generation from causal language modeling, but is also conducive to fine-tuning because of the bidirectional encoders. More importantly, our approach seamlessly unlocks combinations of the above capabilities, e.g., enabling in-context learning or instruction following with fine-tuned encoders. Experimental results across various language-only and vision-language benchmarks show that our model outperforms or is competitive with specialized models on fine-tuning, zero-shot generalization, and few-shot learning.
Deep generative models of molecules trained on relevant datasets have become very popular for searching through chemical space. The downstream utility of generative models for the inverse design of novel functional compounds depends on their ability to learn the training distribution of molecules. The simplest example is a language model that takes the form of a recurrent neural network and generates molecules using string representations. More sophisticated are graph generative models, which sequentially construct molecular graphs and typically achieve state-of-the-art results. However, recent work has shown that language models are more capable than once thought, particularly in the low-data regime. In this work, we investigate the capacity of simple language models to learn distributions of molecules. For this purpose, we introduce several challenging generative modeling tasks by compiling especially complex distributions of molecules. On each task, we evaluate the ability of language models against two widely used graph generative models. The results demonstrate that language models are powerful generative models capable of adeptly learning complex molecular distributions, yielding better performance than the graph models. Language models can accurately generate: distributions of the highest-scoring penalized LogP molecules in ZINC15, multi-modal molecular distributions, as well as the largest molecules in PubChem.
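A minimal character-level SMILES language model of the kind evaluated here might look as follows (PyTorch); the architecture, hyperparameters, and sampling loop are illustrative, not the paper's exact models.

```python
import torch
import torch.nn as nn

class SmilesLM(nn.Module):
    """Minimal character-level SMILES language model: embed, LSTM, project."""
    def __init__(self, vocab_size, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, vocab_size)

    def forward(self, x, state=None):
        h, state = self.lstm(self.embed(x), state)
        return self.head(h), state

@torch.no_grad()
def sample(model, bos, eos, max_len=100):
    """Autoregressive sampling: feed each drawn token back in until EOS."""
    tok, state, out = torch.tensor([[bos]]), None, []
    for _ in range(max_len):
        logits, state = model(tok, state)
        tok = torch.multinomial(logits[0, -1].softmax(-1), 1).view(1, 1)
        if tok.item() == eos:
            break
        out.append(tok.item())
    return out  # indices into the SMILES character vocabulary
```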
Image captioning is a current research task for describing the content of an image using the objects in the scene and their relationships. To tackle this task, two important research areas are used: computer vision and natural language processing. In image captioning, as in any computational intelligence task, performance metrics are crucial for knowing how well (or badly) a method performs. In recent years, it has been observed that classical n-gram-based metrics are insufficient for capturing the semantics and the critical meaning needed to describe the content of an image. To assess the current set of metrics, in this manuscript we present an evaluation of several image captioning metrics using the well-known COCO dataset, along with a comparison among them. To this end, we designed two scenarios: 1) a set of artificially constructed captions, and 2) a comparison of some state-of-the-art image captioning methods. We try to answer the questions: Do the current metrics help to produce high-quality captions? How do the actual metrics compare with each other? What do the metrics really measure?
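As background for the n-gram criticism above, here is the clipped n-gram precision at the core of BLEU, with a paraphrase and a scrambled caption showing what such metrics do and do not reward; the example sentences are invented for illustration.

```python
from collections import Counter

def ngram_precision(candidate, reference, n=1):
    """Clipped n-gram precision, the building block of BLEU."""
    cand = [tuple(candidate[i:i + n]) for i in range(len(candidate) - n + 1)]
    ref = Counter(tuple(reference[i:i + n]) for i in range(len(reference) - n + 1))
    hits = sum(min(c, ref[g]) for g, c in Counter(cand).items())
    return hits / max(len(cand), 1)

ref = "a man rides a horse on the beach".split()
# Semantically fine paraphrase, low lexical overlap:
p1 = ngram_precision("someone is riding a horse by the sea".split(), ref)
# Lexically close but wrong meaning:
p2 = ngram_precision("a horse rides a man on the beach".split(), ref)
print(p1, p2)  # the scrambled caption scores higher than the paraphrase
```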
Effective scaling and a flexible task interface enable large language models to excel at many tasks. PaLI generates text based on visual and textual inputs, and with this interface performs many vision, language, and multimodal tasks, in many languages. To train PaLI, we make use of large pretrained encoder-decoder language models and Vision Transformers (ViTs). This allows us to capitalize on their existing capabilities and leverage the substantial cost of training them. We find that joint scaling of the vision and language components is important. Since existing Transformers for language are much larger than their vision counterparts, we train the largest ViT to date (ViT-e) to quantify the benefits from even larger-capacity vision models. To train PaLI, we create a large multilingual mix of pretraining tasks based on a new image-text training set containing 10B images and texts in over 100 languages. PaLI achieves state-of-the-art results in multiple vision and language tasks (such as captioning, visual question answering, and scene-text understanding), while retaining a simple, modular, and scalable design.