Despite significant progress of generative models in the natural sciences, their controllability remains challenging. One fundamentally missing aspect of molecular or protein generative models is an inductive bias that can reflect continuous properties of interest. To that end, we propose the Regression Transformer (RT), a novel method that abstracts regression as a conditional sequence modeling problem. This introduces a new paradigm of multitask language models which seamlessly bridge sequence regression and conditional sequence generation. We thoroughly demonstrate that, despite using a nominal-scale training objective, the RT matches or surpasses the performance of conventional regression models in property prediction tasks of small molecules, proteins and chemical reactions. Critically, priming the same model with continuous properties yields a highly competitive conditional generative model that outperforms specialized approaches in a substructure-constrained, property-driven molecule generation benchmark. Our dichotomous approach is facilitated by a novel, alternating training scheme that enables the model to decorate seed sequences by desired properties, e.g., to optimize reaction yield. In sum, the RT is the first report of a multitask model that concurrently excels at predictive and generative tasks in biochemistry. This finds particular application in property-driven, local exploration of the chemical or protein space and could pave the road toward foundation models in material design. The code to reproduce all experiments of the paper is available at: https://github.com/IBM/regression-transformer
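A minimal sketch of the numeric-token idea, assuming an illustrative token format and mask rate (the RT's exact vocabulary and masking schedule differ): a continuous property is serialized into digit tokens with decimal-place markers, and one masked-language objective alternates between masking the property (regression) and masking parts of the sequence (conditional generation).

```python
# Illustrative sketch, not the RT implementation: numbers become digit tokens
# carrying their decimal place, so a single masked-language model can either
# predict the property (mask numeric tokens) or decorate/complete a sequence
# conditioned on the property (mask sequence tokens).
import random

def tokenize_property(name: str, value: float, precision: int = 3) -> list:
    """E.g. ('qed', 0.782) -> ['<qed>', '_0_0', '_7_-1', '_8_-2', '_2_-3'].
    Assumes 0 <= value < 10 for brevity."""
    digits = f"{value:.{precision}f}".replace(".", "")
    places = range(0, -precision - 1, -1)          # decimal place of each digit
    return [f"<{name}>"] + [f"_{d}_{p}" for d, p in zip(digits, places)]

def build_example(smiles: str, prop_tokens: list, mode: str):
    seq = prop_tokens + ["|"] + list(smiles)
    if mode == "regression":                       # property tokens are masked
        masked = ["[MASK]" if t.startswith("_") else t for t in seq]
    else:                                          # parts of the SMILES are masked
        masked = [t if t in prop_tokens or t == "|" or random.random() > 0.3
                  else "[MASK]" for t in seq]
    return masked, seq                             # (model input, reconstruction target)

inp, tgt = build_example("CCO", tokenize_property("qed", 0.782), "regression")
print(inp)
```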
Models based on machine learning can enable accurate and fast molecular property predictions, which is of interest in drug discovery and material design. Various supervised machine learning models have demonstrated promising performance, but the vast chemical space and the limited availability of property labels make supervised learning challenging. Recently, unsupervised transformer-based language models pretrained on a large unlabelled corpus have produced state-of-the-art results in many downstream natural language processing tasks. Inspired by this development, we present molecular embeddings obtained by training an efficient transformer encoder model, MoLFormer, which uses rotary positional embeddings. This model employs a linear attention mechanism, coupled with highly distributed training, on SMILES sequences of 1.1 billion unlabelled molecules from the PubChem and ZINC datasets. We show that the learned molecular representation outperforms existing baselines, including supervised and self-supervised graph neural networks and language models, on several downstream tasks from ten benchmark datasets, while performing competitively on two others. Further analyses, specifically through the lens of attention, demonstrate that MoLFormer trained on chemical SMILES indeed learns the spatial relationships between atoms within a molecule. These results provide encouraging evidence that large-scale molecular language models can capture sufficient chemical and structural information to predict various distinct molecular properties, including quantum-chemical properties.
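For readers unfamiliar with rotary embeddings, the sketch below shows the generic RoPE operation (not the MoLFormer codebase): query/key vectors are rotated in 2-D subspaces by position-dependent angles, so attention scores depend on relative token offsets.

```python
# Generic rotary positional embedding (RoPE) sketch; dimensions are arbitrary.
import torch

def rotary(x: torch.Tensor, base: float = 10000.0) -> torch.Tensor:
    """x: (seq_len, dim) with even dim; returns position-rotated x."""
    seq_len, dim = x.shape
    inv_freq = base ** (-torch.arange(0, dim, 2, dtype=torch.float) / dim)
    angles = torch.arange(seq_len, dtype=torch.float)[:, None] * inv_freq  # (seq, dim/2)
    cos, sin = angles.cos(), angles.sin()
    x1, x2 = x[:, 0::2], x[:, 1::2]
    out = torch.empty_like(x)
    out[:, 0::2] = x1 * cos - x2 * sin   # rotate each (x1, x2) pair by its angle
    out[:, 1::2] = x1 * sin + x2 * cos
    return out

q = torch.randn(16, 64)                  # e.g. 16 SMILES tokens, head dim 64
print(rotary(q).shape)                   # torch.Size([16, 64])
```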
While generative models have recently become ubiquitous in many scientific fields, comparatively little attention has been paid to their evaluation. For molecular generative models, the state of the art assesses outputs in isolation or in relation to the inputs; their biological and functional properties, such as ligand-target interactions, have not yet been addressed. In this study, a novel, biologically-inspired benchmark for evaluating molecular generative models is proposed. Specifically, three diverse reference datasets are designed and a set of metrics directly relevant to the drug discovery process is introduced. In particular, we propose a recreation metric and apply drug-target affinity prediction and molecular docking as complementary techniques for evaluating generated outputs. While all three metrics show consistent results across the tested generative models, a more fine-grained comparison of drug-target affinity predictions and molecular docking scores reveals that unimodal predictors can lead to erroneous conclusions about target binding at the molecular level, so that multimodal approaches are preferable. The key advantage of this framework is that it incorporates prior physico-chemical domain knowledge into the benchmarking process by explicitly focusing on ligand-target interactions, creating a highly efficient tool not only for evaluating molecular generative outputs, but also for enriching the drug discovery process in general.
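In the spirit of the proposed benchmark, and with placeholder numbers only, the snippet below illustrates one way such a multimodal check could look: measure rank agreement between a unimodal affinity predictor and docking scores; low agreement is the kind of discrepancy the authors argue a single predictor would hide.

```python
# Placeholder data; signs are flipped so that "higher is better" for both.
from scipy.stats import spearmanr

predicted_pkd = [7.2, 6.8, 5.1, 8.0, 6.0]       # affinity model, higher = better
docking_score = [-9.1, -7.0, -6.2, -8.8, -7.5]  # docking, lower = better

rho, p = spearmanr(predicted_pkd, [-s for s in docking_score])
print(f"rank agreement rho={rho:.2f} (p={p:.3f})")
```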
Artificial intelligence (AI) has been transforming the practice of drug discovery over the past decade. Various AI techniques have been used in a wide range of applications, such as virtual screening and drug design. In this survey, we first give an overview of drug discovery and discuss related applications, which can be reduced to two major tasks: molecular property prediction and molecule generation. We then discuss common data resources, molecular representations, and benchmark platforms. Furthermore, to summarize the progress of AI in drug discovery, we present the relevant AI techniques, including model architectures and learning paradigms, covered in the surveyed papers. We expect this survey to serve as a guide for researchers interested in working at the interface of artificial intelligence and drug discovery. We also provide a GitHub repository (https://github.com/dengjianyuan/Survey_AI_Drug_Discovery) containing the papers and, where applicable, code, as a regularly updated learning resource.
Motivation: The development of novel compounds targeting proteins of interest is one of the most important tasks in the pharmaceutical industry. Deep generative models have been applied to targeted molecule design and have shown promising results. Recently, target-specific molecule generation has been viewed as a translation between the protein language and the chemical language. However, such models are limited by the availability of interacting protein-ligand pairs. On the other hand, large amounts of unlabelled protein sequences and chemical compounds are available and have been used to train language models that learn useful representations. In this study, we propose exploiting pretrained biochemical language models to initialize (i.e., warm start) targeted molecule generation models. We investigate two warm-start strategies: (i) a one-stage strategy, where the initialized model is trained on targeted molecule generation, and (ii) a two-stage strategy, which comprises pretraining on molecule generation followed by target-specific training. We also compare two decoding strategies for generating compounds: beam search and sampling. Results: The results show that the warm-started models perform better than baseline models trained from scratch. The two proposed warm-start strategies achieve similar results with respect to widely used benchmark metrics. However, docking evaluation of the compounds generated for a number of novel proteins suggests that the one-stage strategy generalizes better than the two-stage strategy. Additionally, we observe that beam search outperforms sampling in both the docking evaluation and the benchmark metrics for assessing compound quality. Availability and implementation: The source code is available at https://github.com/boun-tabi/biochemical-lms-for-drug-design and materials are archived on Zenodo at https://doi.org/10.5281/zenodo.6832145
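The decoding comparison reduces to two standard generation calls. The sketch below uses a generic Hugging Face seq2seq model as a placeholder (it is not the authors' released checkpoint); only the beam-search versus sampling arguments matter here.

```python
# Placeholder checkpoint; the authors' models are released separately.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")
protein = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"  # toy target sequence

inputs = tok(protein, return_tensors="pt")
beams = model.generate(**inputs, num_beams=5, num_return_sequences=5,
                       max_new_tokens=64)       # deterministic beam search
samples = model.generate(**inputs, do_sample=True, top_k=50,
                         num_return_sequences=5, max_new_tokens=64)  # sampling
print(tok.batch_decode(beams, skip_special_tokens=True))
```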
We discover a robust self-supervised strategy tailored towards molecular representations for generative masked language models through a series of tailored, in-depth ablations. Using this pre-training strategy, we train BARTSmiles, a BART-like model with an order of magnitude more compute than previous self-supervised molecular representations. In-depth evaluations show that BARTSmiles consistently outperforms other self-supervised representations across classification, regression, and generation tasks, setting a new state-of-the-art on 11 tasks. We then quantitatively show that when applied to the molecular domain, the BART objective learns representations that implicitly encode our downstream tasks of interest. For example, by selecting seven neurons from a frozen BARTSmiles, we can obtain a model having performance within two percentage points of the fully fine-tuned model on the ClinTox task. Lastly, we show that standard attribution interpretability methods, when applied to BARTSmiles, highlight certain substructures that chemists use to explain specific properties of molecules. The code and the pretrained model are publicly available.
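The "seven neurons" result amounts to a linear probe over a handful of coordinates of the frozen representation. A toy version with synthetic features and illustrative neuron indices (not the ones identified for ClinTox):

```python
# Synthetic stand-in for frozen BARTSmiles features; indices are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
reps = rng.normal(size=(1000, 1024))           # fake pooled encoder outputs
neurons = [3, 41, 97, 200, 512, 700, 901]      # 7 chosen coordinates
labels = (reps[:, neurons].sum(axis=1) > 0).astype(int)  # synthetic labels

probe = LogisticRegression().fit(reps[:, neurons], labels)  # linear probe only
print(f"probe accuracy: {probe.score(reps[:, neurons], labels):.2f}")
```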
Generating new molecules with specified chemical and biological properties via generative models has emerged as a promising direction for drug discovery. However, existing methods require large datasets for extensive training or fine-tuning, which are often unavailable in real-world scenarios. In this work, we propose a new retrieval-based framework for controllable molecule generation. We use a small set of exemplar molecules, i.e., molecules that (partially) satisfy the design criteria, to steer a pretrained generative model toward synthesizing molecules that satisfy the given design criteria. We design a retrieval mechanism that fuses the exemplar molecules with the input molecule, trained with a new self-supervised objective that predicts the input molecule's nearest neighbour. We also propose an iterative refinement process to dynamically update the generated molecules and the retrieval database for better generalization. Our approach is agnostic to the choice of generative model and requires no task-specific fine-tuning. On various tasks ranging from simple design criteria to the challenging real-world scenario of designing lead compounds that bind to the SARS-CoV-2 main protease, we demonstrate that our approach extrapolates well beyond the retrieval database and achieves better performance and wider applicability than previous methods.
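The framework's outer loop can be summarized in a few lines. Everything below is a schematic with stand-in components, not the paper's pretrained generator, retriever, or scorer:

```python
# Schematic retrieval-based design loop with stub components.
def design_loop(seed, pool, score, retrieve, fuse_and_generate, rounds=5, k=10):
    mol = seed
    for _ in range(rounds):
        exemplars = retrieve(mol, pool, k)        # k neighbours meeting criteria
        candidate = fuse_and_generate(mol, exemplars)
        if score(candidate) > score(mol):         # keep improvements only
            mol = candidate
            pool = pool + [candidate]             # dynamically grow the database
    return mol

# Toy instantiation: "molecules" are strings, the score is just string length.
best = design_loop("CCO", ["CCN", "CCC"], score=len,
                   retrieve=lambda m, p, k: p[:k],
                   fuse_and_generate=lambda m, e: m + e[0][-1])
print(best)
```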
The generation of drug-like molecules with high binding affinity to target proteins remains a difficult and resource-intensive task in drug discovery. Existing approaches primarily employ reinforcement learning, Markov sampling, or deep generative models guided by Gaussian processes, which can be prohibitively slow when the binding affinity is calculated by computationally expensive physics-based methods. We propose Latent Inceptionism on Molecules (LIMO), which significantly accelerates molecule generation with an inceptionism-like technique. LIMO employs a variational-autoencoder-generated latent space and property prediction by two neural networks in sequence, enabling faster gradient-based reverse-optimization of molecular properties. Comprehensive experiments show that LIMO performs competitively on benchmark tasks and demonstrates state-of-the-art performance on the novel task of generating drug-like compounds with high binding affinity, reaching nanomolar range against two protein targets. We corroborate these docking-based results with more accurate molecular-dynamics-based calculations of absolute binding free energy, and show that one of our generated drug-like compounds has a predicted $K_D$ (a measure of binding affinity) of $6 \cdot 10^{-14}$ M against the human estrogen receptor, well beyond the affinities of typical early-stage drug candidates and most FDA-approved drugs. Code is available at https://github.com/rose-stl-lab/limo.
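The core trick is plain gradient ascent through frozen networks into the latent space. A stand-in sketch (random untrained networks in place of LIMO's trained decoder and property predictor):

```python
# Stand-in networks; LIMO uses a trained VAE decoder and property predictor.
import torch

decoder = torch.nn.Linear(32, 128)      # z -> molecule representation (stub)
prop_net = torch.nn.Linear(128, 1)      # molecule representation -> property (stub)

z = torch.randn(1, 32, requires_grad=True)
opt = torch.optim.Adam([z], lr=0.1)
for step in range(100):
    opt.zero_grad()
    prop = prop_net(decoder(z))         # differentiable property estimate
    (-prop).backward()                  # maximize property = minimize -property
    opt.step()
print(f"optimized property estimate: {prop.item():.3f}")
```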
Artificial intelligence (AI) has been widely applied in drug discovery, with molecular property prediction as one of its major tasks. Despite the boom of AI techniques in molecular representation learning, some key aspects underlying molecular property prediction have not been carefully examined. In this study, we conducted a systematic comparison of three representative models, random forest, MolBERT, and GROVER, which utilize three major molecular representations: extended-connectivity fingerprints, SMILES strings, and molecular graphs, respectively. Notably, MolBERT and GROVER are pretrained on large-scale unlabelled molecule corpora in a self-supervised manner. In addition to the commonly used molecular benchmark datasets, we also assembled a suite of opioid-related datasets for downstream prediction evaluation. We first conducted dataset profiling on label distributions and structural analyses, and we examined the activity-cliff issue in the opioid-related datasets. Then, we trained 4,320 prediction models and evaluated the usefulness of the learned representations. Furthermore, we explored model evaluation by studying the effects of statistical tests, evaluation metrics, and task settings. Finally, we dissected chemical-space generalization into inter-scaffold and intra-scaffold generalization and measured prediction performance to evaluate model generalizability under both settings. By taking this respite, we reflect on the key aspects underlying molecular property prediction, in the hope of raising awareness of better AI practice in the field.
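The inter- versus intra-scaffold dissection rests on grouping molecules by Bemis-Murcko scaffold. A minimal sketch with RDKit, holding out whole scaffold groups for the inter-scaffold setting:

```python
# Group molecules by Bemis-Murcko scaffold, then hold out whole groups.
from collections import defaultdict
from rdkit.Chem.Scaffolds import MurckoScaffold

smiles = ["c1ccccc1CC", "c1ccccc1CN", "C1CCCCC1O", "C1CCCCC1N"]
groups = defaultdict(list)
for s in smiles:
    groups[MurckoScaffold.MurckoScaffoldSmiles(smiles=s)].append(s)

scaffolds = sorted(groups)
inter_train = [m for sc in scaffolds[:-1] for m in groups[sc]]
inter_test = groups[scaffolds[-1]]       # entirely unseen scaffold at test time
print(f"{len(groups)} scaffolds; train={len(inter_train)}, held-out test={inter_test}")
```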
Models that accurately predict properties based on chemical structure are valuable tools in drug discovery. However, for many properties, public and private training sets are typically small, and it is difficult for the models to generalize well outside of the training data. Recently, large language models have addressed this problem by using self-supervised pretraining on large unlabeled datasets, followed by fine-tuning on smaller, labeled datasets. In this paper, we report MolE, a molecular foundation model that adapts the DeBERTa architecture to be used on molecular graphs together with a two-step pretraining strategy. The first step of pretraining is a self-supervised approach focused on learning chemical structures, and the second step is a massive multi-task approach to learn biological information. We show that fine-tuning pretrained MolE achieves state-of-the-art results on 9 of the 22 ADMET tasks included in the Therapeutic Data Commons.
Recently, deep learning approaches have been extensively studied for various problems in chemistry, such as property prediction, virtual screening, de novo molecule design, etc. Despite the impressive successes, separately designed networks for specific tasks are usually required for end-to-end training, so it is often difficult to acquire a unified principle to synergistically combine existing models and training datasets for novel tasks. To address this, here we present a novel multimodal chemical foundation model that can be used for various downstream tasks that require a simultaneous understanding of structure and property. Specifically, inspired by recent advances in pre-trained multi-modal foundation models such as Vision-Language Pretrained models (VLP), we propose a novel structure-property multi-modal (SPMM) foundation model using a dual-stream transformer with X-shape attention, so that it can align molecular structure and chemical properties in a common embedding space. Thanks to the strong unimodal representations it learns for both structure and properties, experimental results confirm that SPMM can simultaneously perform molecule generation, property prediction, classification, reaction prediction, and more, which was previously not possible with a single architecture.
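The dual-stream wiring can be approximated with two ordinary cross-attention calls, one per direction; the sketch below is an illustration of the idea with arbitrary dimensions, not SPMM's X-shape attention module:

```python
# Two cross-attention passes: each stream queries the other modality.
import torch

attn = torch.nn.MultiheadAttention(embed_dim=64, num_heads=4, batch_first=True)
struct = torch.randn(2, 40, 64)   # SMILES token embeddings (stand-in)
props = torch.randn(2, 53, 64)    # one embedding per chemical property (stand-in)

struct_out, _ = attn(struct, props, props)  # structure attends to properties
props_out, _ = attn(props, struct, struct)  # properties attend to structure
print(struct_out.shape, props_out.shape)
```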
Recent advances in deep-learning-based molecular modeling hold promise for accelerating in silico drug discovery. A plethora of generative models is available, building molecules either atom-by-atom and bond-by-bond or fragment-by-fragment. However, many drug discovery projects require a fixed scaffold to be present in the generated molecules, and incorporating this constraint has only recently been explored. Here, we propose MoLeR, a graph-based model that naturally supports scaffolds as the initial seed of the generative procedure, which is possible because it is not conditioned on the generation history. Our experiments show that MoLeR performs comparably to state-of-the-art methods on unconstrained molecular optimization tasks, and outperforms them on scaffold-based tasks, while being an order of magnitude faster to train and sample from than existing approaches. Furthermore, we show the influence of a number of seemingly minor design choices on the overall performance.
There is increasing adoption of artificial intelligence in drug discovery. However, existing works mainly use machine learning to exploit the chemical structures of molecules, ignoring the vast textual knowledge available in chemistry. Incorporating textual knowledge enables us to realize new drug design objectives, adapt to text-based instructions, and predict complex biological activities. We present a multi-modal molecule structure-text model, MoleculeSTM, which jointly learns molecules' chemical structures and textual descriptions via a contrastive learning strategy. To train MoleculeSTM, we construct the largest multi-modal dataset to date, namely PubChemSTM, with over 280K chemical structure-text pairs. To demonstrate the effectiveness and utility of MoleculeSTM, we design two challenging zero-shot tasks based on text instructions, including structure-text retrieval and molecule editing. MoleculeSTM possesses two main properties: open vocabulary and compositionality via natural language. In experiments, MoleculeSTM obtains state-of-the-art generalization ability to novel biochemical concepts across various benchmarks.
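The contrastive strategy is a standard two-sided InfoNCE over paired embeddings. A minimal sketch with random stand-in encoder outputs and an assumed temperature of 0.07:

```python
# Paired structure/text embeddings pulled together, mismatched pairs pushed apart.
import torch
import torch.nn.functional as F

struct_emb = F.normalize(torch.randn(8, 256), dim=-1)  # molecule encoder (stub)
text_emb = F.normalize(torch.randn(8, 256), dim=-1)    # text encoder (stub)

logits = struct_emb @ text_emb.T / 0.07                # cosine sims / temperature
targets = torch.arange(8)                              # i-th molecule <-> i-th text
loss = (F.cross_entropy(logits, targets) +
        F.cross_entropy(logits.T, targets)) / 2        # symmetric InfoNCE
print(f"contrastive loss: {loss.item():.3f}")
```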
In drug discovery, the rational design of novel molecules with desired bioactivity is a crucial but challenging task, especially when treating novel target families or understudied targets. Here, we propose PGMG, a pharmacophore-guided deep learning approach for bioactive molecule generation. Through the guidance of pharmacophores, PGMG provides a flexible strategy for generating bioactive molecules with structural diversity in various scenarios using a trained variational autoencoder. We show that PGMG can generate molecules matching given pharmacophore models while maintaining a high level of validity, uniqueness, and novelty. In case studies, we demonstrate the application of PGMG to generate bioactive molecules in ligand-based and structure-based de novo drug design, as well as in lead-optimization scenarios. Overall, the flexibility and effectiveness of PGMG make it a useful tool for accelerating the drug discovery process.
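Conceptually, the pharmacophore acts as a fixed conditioning vector while the latent sample varies. A schematic with stand-in networks (not PGMG's trained encoder/decoder):

```python
# Fixed pharmacophore conditioning; varying z yields structural diversity.
import torch

pharm_enc = torch.nn.Linear(10, 32)       # pharmacophore features -> embedding (stub)
decoder = torch.nn.Linear(64 + 32, 128)   # [z ; pharmacophore] -> molecule logits (stub)

pharm = pharm_enc(torch.randn(1, 10))     # the constraint, held fixed
for _ in range(3):                        # diverse samples, same pharmacophore
    z = torch.randn(1, 64)
    logits = decoder(torch.cat([z, pharm], dim=-1))
    print(logits.shape)
```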
We address the task of controlled generation of small molecules, which entails finding novel molecules with desired properties under certain constraints (e.g., similarity to a reference molecule). Here we introduce MolMIM, a probabilistic autoencoder for small-molecule drug discovery that learns an informative and clustered latent space. MolMIM is trained with Mutual Information Machine (MIM) learning and provides a fixed-length representation of variable-length SMILES strings. Since encoder models can learn representations with "holes" of invalid samples, we propose here a novel extension of the training procedure that promotes a dense latent space and allows the model to sample valid molecules from random perturbations of latent codes. We provide a thorough comparison of MolMIM to several variable-size and fixed-size encoder models, demonstrating MolMIM's superior generation in terms of validity, uniqueness, and novelty. We then utilize CMA-ES, a naive black-box and gradient-free search algorithm, over MolMIM's latent space for the task of property-guided molecule optimization. We achieve state-of-the-art results on single-property optimization tasks as well as on the challenging task of multi-objective optimization, improving over the previous SOTA success rate by more than 5%. We attribute these strong results to MolMIM's latent representation, which clusters similar molecules in the latent space, whereas CMA-ES is often used as a baseline optimization method. We also demonstrate that MolMIM is favorable in a compute-limited regime, making it an attractive model for such scenarios.
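Because MolMIM's representation is fixed-length, black-box search plugs in directly. A sketch of the CMA-ES loop using the pycma package, with a stand-in scoring function in place of decode-then-score:

```python
# CMA-ES over a fixed-length latent space; the scorer is a stub for
# "decode z -> SMILES, then compute the target property".
import cma
import numpy as np

def score(z: np.ndarray) -> float:
    return float(np.sum(z ** 2))               # stand-in; CMA-ES minimizes this

es = cma.CMAEvolutionStrategy(np.zeros(16), 0.5)  # 16-dim latent, step size 0.5
for _ in range(20):
    zs = es.ask()                               # candidate latent codes
    es.tell(zs, [score(z) for z in zs])         # feedback from the property oracle
print("best score:", es.result.fbest)
```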
Computational methods that operate on three-dimensional molecular structure have the potential to solve important questions in biology and chemistry. In particular, deep neural networks have recently gained significant attention, but their widespread adoption in the biomolecular domain has been limited by a lack of either systematic performance benchmarks or a unified toolkit for interacting with molecular data. To address this, we present ATOM3D, a collection of both novel and existing benchmark datasets spanning several key classes of biomolecules. We implement several types of three-dimensional molecular learning methods for each of these tasks and show that they consistently improve performance relative to methods based on one- and two-dimensional representations. The specific choice of architecture proves important for performance, with three-dimensional convolutional networks excelling at tasks involving complex geometries, graph networks performing well on systems requiring detailed positional information, and the more recently developed equivariant networks showing significant promise. Our results indicate that many molecular problems stand to gain from three-dimensional molecular learning, and that there is potential for improvement on many tasks that remain underexplored. To lower the barrier to entry and facilitate further development in the field, we also provide a comprehensive suite of tools for dataset processing, model training, and evaluation in our open-source atom3d Python package. All datasets are available for download from https://www.atom3d.ai.
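Assuming the atom3d package's documented dataset interface (worth verifying against the current docs), loading a downloaded task split looks roughly like this; the path is a placeholder:

```python
# Assumed usage of the atom3d package's LMDB dataset class; path is a placeholder.
from atom3d.datasets import LMDBDataset

dataset = LMDBDataset("path/to/task/train")   # e.g. a downloaded ATOM3D split
first = dataset[0]
print(len(dataset), list(first.keys()))       # entries expose atoms, labels, ids
```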
Bayesian optimization (BayesOpt) is the gold standard for query-efficient continuous optimization. However, its adoption for drug design has been hindered by the discrete, high-dimensional nature of the decision variables. We develop a new approach (LaMBO) that jointly trains a denoising autoencoder with a discriminative multi-task Gaussian process head, enabling gradient-based optimization of multi-objective acquisition functions in the latent space of the autoencoder. These acquisition functions allow LaMBO to balance the explore-exploit trade-off over multiple design rounds, and to balance objective trade-offs by optimizing sequences at many different points on the Pareto frontier. We evaluate LaMBO on two small-molecule design tasks and introduce new tasks optimizing in silico and in vitro properties. In our experiments, LaMBO outperforms genetic optimizers and does not require a large pretraining corpus, demonstrating that BayesOpt is practical and effective for biological sequence design.
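The enabling mechanic is that the acquisition value is differentiable with respect to the latent code. A stand-in sketch that collapses the multi-task GP and acquisition function into one random network:

```python
# Gradient ascent on an acquisition surface over latent codes; the surrogate
# below is a stub standing in for the GP head plus acquisition function.
import torch

surrogate = torch.nn.Sequential(torch.nn.Linear(32, 64), torch.nn.Tanh(),
                                torch.nn.Linear(64, 1))

z = torch.randn(4, 32, requires_grad=True)   # 4 candidate latent sequences
opt = torch.optim.Adam([z], lr=0.05)
for _ in range(50):
    opt.zero_grad()
    acq = surrogate(z).sum()                 # acquisition over all candidates
    (-acq).backward()                        # ascend the acquisition surface
    opt.step()
print(surrogate(z).detach().squeeze())
```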
Learning effective protein representations is critical for a variety of tasks in biology, such as predicting protein function or structure. Existing approaches usually pretrain protein language models on a large number of unlabeled amino acid sequences and then fine-tune the models with some labeled data on downstream tasks. Despite the effectiveness of sequence-based approaches, the power of pretraining on known protein structures has not been explored for protein property prediction, although protein structure is known to be a determinant of protein function. In this paper, we propose pretraining proteins based on their 3D structures. We first present a simple yet effective encoder to learn the geometric features of a protein. We pretrain the protein graph encoder by leveraging multiview contrastive learning and different self-prediction tasks. Experimental results on function prediction and fold classification tasks show that our proposed pretraining methods outperform or are on par with state-of-the-art sequence-based methods, while using much less data. Our implementation is available at https://github.com/deepgraphlearning/gearnet.
We present $\textbf{MolT5}$ $-$ a self-supervised learning framework for pretraining models on a vast amount of unlabeled natural language text and molecule strings. $\textbf{MolT5}$ allows for new, useful, and challenging analogs of traditional vision-language tasks, such as molecule captioning and text-based de novo molecule generation (altogether: translation between molecules and language), which we explore for the first time. Since $\textbf{MolT5}$ pretrains models on single-modal data, it helps overcome the chemistry domain shortcoming of data scarcity. Furthermore, we consider several metrics, including a new cross-modal embedding-based metric, to evaluate the tasks of molecule captioning and text-based molecule generation. Our results show that $\textbf{MolT5}$-based models are able to generate outputs, both molecules and captions, which in many cases are high quality.
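If the public checkpoints follow the standard T5 interface, molecule captioning is a single generate call; the checkpoint name below is assumed from the authors' Hugging Face release and should be verified:

```python
# Assumed public MolT5 checkpoint name; substitute your own if it differs.
from transformers import T5ForConditionalGeneration, T5Tokenizer

name = "laituan245/molt5-base-smiles2caption"
tok = T5Tokenizer.from_pretrained(name)
model = T5ForConditionalGeneration.from_pretrained(name)

inputs = tok("C1=CC2=C(C=C1)C(=CN2)CCN", return_tensors="pt")  # tryptamine SMILES
out = model.generate(**inputs, num_beams=5, max_new_tokens=64)
print(tok.decode(out[0], skip_special_tokens=True))
```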
We provide a theoretical analysis from first principles that establishes a novel connection between relational inductive biases in pretraining and fine-tuning performance, while offering an extended view of general pretraining models. We further explore how existing pretraining methods impose such relational inductive biases, finding that the vast majority of existing methods focus almost exclusively on modeling relations within individual samples rather than across samples. We corroborate these findings with empirical results on standard benchmarks spanning 3 data modalities and 10 downstream tasks. These investigations validate our theoretical analysis and provide a recipe for producing new pretraining methods that conform to user-specified relation graphs.