Proteins are fundamental biological entities that play key roles in nearly all life processes. The amino acid sequence of a protein folds into a stable 3D structure in the real physicochemical world, forming a special kind of sequence-structure data. With the development of Artificial Intelligence (AI) techniques, Protein Representation Learning (PRL) has recently emerged as a promising research topic for extracting informative knowledge from massive protein sequences and structures. To pave the way for AI researchers with little bioinformatics background, we present a timely and comprehensive review of PRL formulations and existing PRL methods from the perspective of model architectures, pretext tasks, and downstream applications. We first briefly introduce the motivation for protein representation learning and formulate it in a general and unified framework. Next, we divide existing PRL methods into three main categories: sequence-based, structure-based, and sequence-structure co-modeling. Finally, we discuss some technical challenges and potential directions for improving protein representation learning. The latest advances in PRL methods are summarized in a GitHub repository: https://github.com/LirongWu/awesome-protein-representation-learning.
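As a concrete illustration of a sequence-based pretext task commonly used in this area (masked-residue prediction), the sketch below masks random amino acids and trains a small Transformer to recover them. It is a minimal sketch under assumed hyperparameters, not the pipeline of any particular method surveyed; the 20-letter alphabet and model sizes are assumptions, and positional encodings are omitted for brevity.

```python
# Minimal sketch of a masked-residue pretext task for sequence-based PRL.
# Illustrative only: alphabet handling, model size, and masking rate are assumptions.
import torch
import torch.nn as nn

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"            # 20 standard residues
PAD, MASK = 20, 21                               # special token ids
VOCAB = len(AMINO_ACIDS) + 2

class ProteinEncoder(nn.Module):
    def __init__(self, d_model=128, n_layers=4, n_heads=4):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, VOCAB)    # predicts the identity of masked residues

    def forward(self, tokens):
        return self.head(self.encoder(self.embed(tokens)))

def mask_residues(tokens, rate=0.15):
    """Randomly replace residues with the [MASK] token; return model inputs and targets."""
    mask = torch.rand(tokens.shape) < rate
    targets = torch.where(mask, tokens, torch.full_like(tokens, -100))  # -100 = ignore index
    inputs = torch.where(mask, torch.full_like(tokens, MASK), tokens)
    return inputs, targets

# One training step of the masked-residue objective on a toy batch of residue ids.
seqs = torch.randint(0, 20, (8, 100))
inputs, targets = mask_residues(seqs)
model = ProteinEncoder()
loss = nn.functional.cross_entropy(model(inputs).transpose(1, 2), targets, ignore_index=-100)
loss.backward()
```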
Solving partial differential equations is difficult. Recently proposed neural resolution-invariant models, despite their effectiveness and efficiency, usually require equispaced spatial sampling of the data. However, sampling in the spatial domain is sometimes unavoidably non-equispaced in real-world systems, which limits their applicability. In this paper, we propose a Non-equispaced Fourier PDE Solver (\textsc{NFS}), whose components are an adaptive interpolation onto resampled equispaced points and a variant of Fourier Neural Operators. Experimental results on complex PDEs demonstrate its advantages in accuracy and efficiency. Compared with spatially-equispaced benchmark methods, it achieves superior performance with a $42.85\%$ improvement in MAE, and it handles non-equispaced data with only a tiny loss of accuracy. Moreover, to the best of our knowledge, \textsc{NFS} is the first ML-based method with mesh-invariant inference that successfully models turbulent flows in non-equispaced scenarios, with only a minor deviation of the error on unseen spatial points.
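A rough sketch of the two ingredients named above: resampling non-equispaced observations onto an equispaced grid, then applying an FFT-based spectral convolution in the style of a Fourier Neural Operator. The linear interpolation, 1D setting, and learnable spectral weights here are placeholder assumptions, not the actual \textsc{NFS} architecture.

```python
# Sketch: resample irregular samples onto an equispaced grid, then apply an
# FFT-based spectral convolution (Fourier-Neural-Operator style). 1D, illustrative only.
import numpy as np
import torch

def to_equispaced(x_irregular, u_irregular, n_grid=256):
    """Linearly interpolate samples at irregular locations onto an equispaced grid."""
    x_grid = np.linspace(x_irregular.min(), x_irregular.max(), n_grid)
    return x_grid, np.interp(x_grid, x_irregular, u_irregular)

class SpectralConv1d(torch.nn.Module):
    """Pointwise multiplication of the lowest `modes` Fourier coefficients."""
    def __init__(self, channels=1, modes=16):
        super().__init__()
        self.modes = modes
        self.weight = torch.nn.Parameter(
            torch.randn(channels, channels, modes, dtype=torch.cfloat) * 0.02)

    def forward(self, u):                        # u: (batch, channels, n_grid)
        u_hat = torch.fft.rfft(u, dim=-1)
        out_hat = torch.zeros_like(u_hat)
        out_hat[:, :, :self.modes] = torch.einsum(
            "bim,iom->bom", u_hat[:, :, :self.modes], self.weight)
        return torch.fft.irfft(out_hat, n=u.size(-1), dim=-1)

# Usage: non-equispaced samples of a field -> equispaced grid -> spectral layer.
x_irr = np.sort(np.random.rand(100))
u_irr = np.sin(2 * np.pi * x_irr)
_, u_eq = to_equispaced(x_irr, u_irr)
u = torch.tensor(u_eq, dtype=torch.float32).view(1, 1, -1)
y = SpectralConv1d()(u)                          # (1, 1, 256)
```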
Transformer-based pre-trained models have become the de-facto solution for NLP tasks. Fine-tuning such pre-trained models for downstream tasks often requires a tremendous amount of data that is both private and labeled. However, in reality: 1) such private data cannot be collected centrally, as it is distributed across mobile devices, and 2) well-curated labeled data is scarce. To tackle these issues, we first define a data generator for federated few-shot learning tasks, which captures the quantity and distribution of scarce labeled data in a realistic setting. We then propose AUG-FedPrompt, a prompt-based federated learning algorithm that carefully annotates abundant unlabeled data for data augmentation. AUG-FedPrompt performs on par with full-set fine-tuning while starting from very little labeled data.
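The core augmentation idea, having the current prompt-based model annotate unlabeled client data and keeping only confident predictions, can be sketched roughly as below. The confidence threshold and function names are hypothetical, and the HuggingFace-style `.logits` output is an assumption; this is not the AUG-FedPrompt implementation.

```python
# Rough sketch of confidence-based pseudo-labeling of unlabeled client data,
# the kind of annotation step described above. All names/thresholds are hypothetical.
import torch

@torch.no_grad()
def pseudo_label(model, unlabeled_loader, threshold=0.9):
    """Label unlabeled examples whose top predicted class probability exceeds `threshold`."""
    model.eval()
    augmented = []
    for batch in unlabeled_loader:               # batch: dict of input tensors
        probs = torch.softmax(model(**batch).logits, dim=-1)  # assumes HF-style output
        conf, labels = probs.max(dim=-1)
        keep = conf >= threshold
        for i in torch.nonzero(keep).flatten():
            augmented.append(({k: v[i] for k, v in batch.items()}, labels[i].item()))
    return augmented                              # merged into the client's local training set
```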
Generating molecules that bind to specific proteins is an important but challenging task in drug discovery. Previous works usually generate atoms auto-regressively, producing element types and 3D coordinates one atom at a time. However, in real-world molecular systems, the interactions among atoms in an entire molecule are global, so the energy function is pairwise-coupled among atoms. From this energy-based perspective, the probability should be modeled as a joint distribution rather than a sequence of conditional distributions. The unnatural sequential, auto-regressive modeling of molecule generation is therefore likely to violate physical rules, resulting in generated molecules with poor properties. In this work, we establish a generative diffusion model for molecular 3D structures that uses the target protein as a contextual constraint, operating at the full-atom level in a non-autoregressive way. Given a designated 3D protein binding site, our model learns a generative process that denoises both the element types and the 3D coordinates of an entire molecule with an equivariant network. Experimentally, the proposed method shows competitive performance compared with prevailing works in terms of high binding affinity to proteins, appropriate molecule sizes, and other drug properties such as the drug-likeness of the generated molecules.
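To make the non-autoregressive, joint-denoising idea concrete, here is a highly simplified, non-equivariant stand-in for one reverse-diffusion step that updates the coordinates and element-type logits of all atoms at once, conditioned on a pocket embedding. The network, shapes, and noise-schedule values are assumptions; the actual model uses an equivariant network and a proper schedule.

```python
# Simplified sketch of one reverse-diffusion step that denoises all atom coordinates
# and element-type logits jointly, conditioned on a protein-pocket embedding.
# Not the paper's equivariant network; architecture and schedule are assumptions.
import torch
import torch.nn as nn

class DenoiseNet(nn.Module):
    def __init__(self, n_types=10, d_ctx=64, d_hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 + n_types + d_ctx + 1, d_hidden), nn.SiLU(),
            nn.Linear(d_hidden, 3 + n_types))    # predicts noise on coords and type logits

    def forward(self, coords, type_logits, pocket_ctx, t):
        # coords: (N, 3), type_logits: (N, n_types), pocket_ctx: (N, d_ctx), t in [0, 1]
        t_feat = torch.full((coords.size(0), 1), float(t))
        eps = self.net(torch.cat([coords, type_logits, pocket_ctx, t_feat], dim=-1))
        return eps[:, :3], eps[:, 3:]            # predicted noise for coords / types

def reverse_step(model, coords, type_logits, pocket_ctx, t, alpha=0.98, alpha_bar=0.5):
    """One simplified DDPM-style mean update applied jointly to the whole molecule
    (the stochastic noise term is omitted; alpha / alpha_bar come from a noise schedule)."""
    eps_x, eps_a = model(coords, type_logits, pocket_ctx, t)
    scale = (1 - alpha) / (1 - alpha_bar) ** 0.5
    coords = (coords - scale * eps_x) / alpha ** 0.5
    type_logits = (type_logits - scale * eps_a) / alpha ** 0.5
    return coords, type_logits

model = DenoiseNet()
coords, logits = reverse_step(model, torch.randn(12, 3), torch.randn(12, 10),
                              torch.randn(12, 64), t=0.5)
```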
Since the recent success of Vision Transformers (ViTs), explorations of transformer-style architectures have triggered a resurgence of modern ConvNets. In this work, we explore the representation ability of DNNs through the lens of interaction complexities. We empirically show that interaction complexity is an overlooked but essential indicator for visual recognition. Accordingly, we present a new family of efficient ConvNets, named MogaNet, that pursues informative context mining in pure ConvNet-based models with preferable complexity-performance trade-offs. In MogaNet, interactions across multiple complexities are facilitated and contextualized by two specially designed aggregation blocks operating in the spatial and channel interaction spaces. Extensive studies are conducted on ImageNet classification, COCO object detection, and ADE20K semantic segmentation. The results demonstrate that MogaNet establishes a new state of the art over other popular methods in mainstream scenarios and across all model scales. Notably, the lightweight MogaNet-T achieves 80.0\% top-1 accuracy with only 1.44G FLOPs using a refined training setup on ImageNet-1K, surpassing ParC-Net-S by 1.4\% accuracy while saving 59\% (2.04G) FLOPs.
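For intuition, a schematic of a gated, multi-scale spatial aggregation block of the general kind described is given below. The kernel sizes, gating, and residual layout are assumptions for illustration, not MogaNet's exact block design.

```python
# Schematic of a gated multi-scale spatial-aggregation block in a pure ConvNet,
# in the spirit of the description above. Details are assumptions, not MogaNet's exact block.
import torch
import torch.nn as nn

class GatedSpatialAggregation(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Conv2d(channels, channels, 1)
        # Depthwise convolutions at several receptive-field sizes ("interaction orders").
        self.dw3 = nn.Conv2d(channels, channels, 3, padding=1, groups=channels)
        self.dw5 = nn.Conv2d(channels, channels, 5, padding=2, groups=channels)
        self.dw7 = nn.Conv2d(channels, channels, 7, padding=3, groups=channels)
        self.proj = nn.Conv2d(channels, channels, 1)

    def forward(self, x):
        context = self.dw3(x) + self.dw5(x) + self.dw7(x)   # aggregate multi-order context
        return x + self.proj(torch.sigmoid(self.gate(x)) * context)  # gate, project, residual

y = GatedSpatialAggregation(64)(torch.randn(2, 64, 56, 56))   # (2, 64, 56, 56)
```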
Recent years have witnessed great success in handling graph-related tasks with Graph Neural Networks (GNNs). Despite this academic success, Multi-Layer Perceptrons (MLPs) remain the primary workhorse for practical industrial applications. One reason for this academic-industrial gap is the neighborhood-fetching latency incurred by data dependency in GNNs, which makes them hard to deploy in latency-sensitive applications that require fast inference. Conversely, without any feature aggregation, MLPs have no data dependency and infer much faster than GNNs, but their performance is less competitive. Motivated by these complementary strengths and weaknesses, we propose a Graph Self-Distillation on Neighborhood (GSDN) framework to reduce the gap between GNNs and MLPs. Specifically, the GSDN framework is based purely on MLPs, where structural information is used only implicitly, as a prior guiding knowledge self-distillation between each neighborhood and its target node, substituting for the explicit neighborhood-information propagation of GNNs. As a result, GSDN enjoys graph topology-awareness during training but has no data dependency at inference. Extensive experiments show that the performance of vanilla MLPs can be greatly improved with self-distillation: GSDN improves over stand-alone MLPs by 15.54\% on average and outperforms state-of-the-art GNNs on six datasets. Regarding inference speed, GSDN infers 75X-89X faster than existing GNNs and 16X-25X faster than other inference-acceleration methods.
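A sketch of the core idea as described: an MLP is trained on node features alone, and a consistency (self-distillation) term pulls each node's prediction toward the averaged predictions of its neighbors, so the graph is used only to define the training loss, never at inference. The loss form, temperature, and weighting below are assumptions, not GSDN's exact objective.

```python
# Sketch of MLP-only training with a neighborhood self-distillation term: the adjacency
# matrix appears only in the loss, so inference needs no neighborhood fetching.
# Loss form, temperature, and weighting are assumptions.
import torch
import torch.nn.functional as F

def gsdn_style_loss(logits, labels, adj, train_mask, lam=1.0, tau=2.0):
    # logits: (N, C) from a plain MLP; adj: (N, N) dense adjacency (for illustration only)
    ce = F.cross_entropy(logits[train_mask], labels[train_mask])
    neigh_logits = adj @ logits / adj.sum(dim=1, keepdim=True).clamp(min=1)  # neighbor mean
    kd = F.kl_div(F.log_softmax(logits / tau, dim=-1),
                  F.softmax(neigh_logits.detach() / tau, dim=-1),
                  reduction="batchmean") * tau ** 2
    return ce + lam * kd

# Toy usage on a random graph with 100 nodes and 7 classes.
logits = torch.randn(100, 7, requires_grad=True)
labels = torch.randint(0, 7, (100,))
adj = (torch.rand(100, 100) < 0.05).float()
mask = torch.zeros(100, dtype=torch.bool); mask[:60] = True
gsdn_style_loss(logits, labels, adj, mask).backward()
```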
Temporal Point Processes (TPPs) are commonly used to model asynchronous event sequences with occurrence timestamps and are characterized by probabilistic models conditioned on historical influence. Although many previous works improve the "goodness-of-fit" of TPP models by maximizing the likelihood, their predictive performance is unsatisfactory, meaning that the timestamps generated by these models are far from the true observations. Recently, deep generative models such as denoising diffusion and score-matching models have made great progress on image generation tasks by demonstrating their ability to generate high-quality samples. However, there is no complete and unified work exploring and studying the potential of generative models in the context of event occurrence in TPPs. In this work, we attempt to fill this gap by designing a unified \textbf{G}enerative \textbf{N}eural \textbf{T}emporal \textbf{P}oint \textbf{P}rocess (\textsc{GNTPP}) model to explore their feasibility and effectiveness, and to further improve the predictive performance of these models. In addition, for measuring historical influence, we revise the attentive models that summarize the influence of historical events with an adaptive reweighting term that accounts for event type relations and time intervals. Extensive experiments illustrate the improved predictive capability of \textsc{GNTPP} with a family of generative probabilistic decoders, as well as the performance gains obtained from the revised attention. To the best of our knowledge, this is among the first works to adapt generative models in a complete, unified framework and to study their effectiveness in the context of TPPs. Our codebase, including all methods described in Section 5.1.1, is available at \url{https://github.com/bird-tao/gntpp}. We hope the code framework can facilitate future research on neural TPPs.
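The generative-decoder idea can be sketched as follows: encode the history of (time, type) pairs with an attentive encoder, then sample the next inter-event time from a learned generative decoder instead of an explicit intensity. The toy noise-to-time decoder below is only a stand-in for the diffusion/score-based decoders studied in the paper, and the shapes and architecture are assumptions.

```python
# Rough sketch: attentive history encoder + a toy generative decoder that maps noise
# and the history embedding to the next inter-event time. Illustrative only.
import torch
import torch.nn as nn

class HistoryEncoder(nn.Module):
    def __init__(self, n_types=5, d_model=64):
        super().__init__()
        self.type_emb = nn.Embedding(n_types, d_model)
        self.time_proj = nn.Linear(1, d_model)
        self.attn = nn.MultiheadAttention(d_model, num_heads=4, batch_first=True)

    def forward(self, times, types):             # times: (B, L), types: (B, L)
        h = self.type_emb(types) + self.time_proj(times.unsqueeze(-1))
        h, _ = self.attn(h, h, h)                 # adaptive reweighting of the paper omitted
        return h[:, -1]                           # embedding of the whole history

class NoiseToTimeDecoder(nn.Module):
    """Toy generative decoder: Gaussian noise + history embedding -> positive time gap
    (a stand-in for the diffusion/score-based decoders)."""
    def __init__(self, d_model=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(d_model + 1, 64), nn.ReLU(), nn.Linear(64, 1))

    def sample(self, hist_emb):
        z = torch.randn(hist_emb.size(0), 1)
        return nn.functional.softplus(self.net(torch.cat([hist_emb, z], dim=-1)))

enc, dec = HistoryEncoder(), NoiseToTimeDecoder()
hist = enc(torch.rand(2, 10), torch.randint(0, 5, (2, 10)))
next_dt = dec.sample(hist)                        # sampled next inter-event time, (2, 1)
```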
Temporal point processes, as stochastic processes on a continuous time domain, are commonly used to model asynchronous event sequences with occurrence timestamps. Thanks to the strong expressive power of deep neural networks, they have become a promising choice for capturing patterns in asynchronous sequences in the context of temporal point processes. In this paper, we first review the recent research emphases and difficulties in modeling asynchronous event sequences with deep temporal point processes, which can be grouped into four fields: encoding of the history sequence, formulation of the conditional intensity function, relational discovery of events, and learning approaches for optimization. We introduce recently proposed models by dismantling them into these four parts, and conduct experiments by recombining the first three parts under the same learning strategy for a fair empirical evaluation. Furthermore, we extend the families of history encoders and conditional intensity functions, and propose a Granger causality discovery framework for exploiting the relations among multiple types of events. Since Granger causality can be represented by a Granger causality graph, discrete graph structure learning within a variational inference framework is employed to reveal the latent structure of the graph. Further experiments show that the proposed framework with latent graph discovery can capture these relations and achieve improved fitting and prediction performance.
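For reference, the likelihood-based learning approach mentioned above maximizes the standard log-likelihood of a marked temporal point process: for an observed sequence $\{(t_i, k_i)\}_{i=1}^{N}$ on $[0, T]$ with conditional intensities $\lambda_k^*(t) = \lambda_k(t \mid \mathcal{H}_t)$,
\[
\log p\big(\{(t_i, k_i)\}_{i=1}^{N}\big) \;=\; \sum_{i=1}^{N} \log \lambda_{k_i}^*(t_i) \;-\; \sum_{k} \int_0^T \lambda_k^*(t)\, \mathrm{d}t .
\]
(This is the standard textbook objective, stated here only for context; the notation is ours rather than the paper's.)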
Spatio-temporal forecasting is challenging due to the high nonlinearity of temporal dynamics and the complex location-characterized patterns in the spatial domain, especially in fields such as weather forecasting. Graph convolutions are commonly used to model spatial dependencies in meteorology, in order to handle the irregular spatial distribution of sensors. In this work, a graph-based convolution that imitates meteorological flows is proposed to capture local spatial patterns. Based on the assumption that location-characterized patterns are smooth, we propose a conditional local convolution whose shared kernel over each node's local space is approximated by a feedforward network, taking as input local representations of coordinates obtained via horizon maps. The established unified standard for the local coordinate system preserves geographic orientation. We further propose distance and orientation scaling terms to reduce the impact of the irregular spatial distribution. The convolution is embedded into a recurrent neural network architecture to model temporal dynamics, yielding the Conditional Local Convolution Recurrent Network (CLCRN). Our model is evaluated on real-world weather benchmark datasets and achieves state-of-the-art performance with clear improvements. We further analyze local pattern visualizations, the choice of model framework, and the advantages of horizon maps.
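A bare-bones sketch of the "conditional local convolution" idea: the kernel weight applied to each neighbor is produced by a small feedforward network from that neighbor's coordinates expressed in the center node's local frame. The scaling terms and recurrent wrapper are omitted, and the shapes and MLP below are assumptions rather than CLCRN's exact layer.

```python
# Sketch of a conditional local convolution: per-neighbor kernels are generated by an
# MLP from the neighbor's local coordinates relative to the center node. Illustrative only.
import torch
import torch.nn as nn

class ConditionalLocalConv(nn.Module):
    def __init__(self, in_dim, out_dim, coord_dim=2, hidden=32):
        super().__init__()
        # Maps a neighbor's local coordinate to an (in_dim x out_dim) kernel.
        self.kernel_net = nn.Sequential(
            nn.Linear(coord_dim, hidden), nn.ReLU(), nn.Linear(hidden, in_dim * out_dim))
        self.in_dim, self.out_dim = in_dim, out_dim

    def forward(self, x_neighbors, local_coords):
        # x_neighbors: (N, K, in_dim) features of K neighbors per node
        # local_coords: (N, K, coord_dim) neighbor coordinates in the node's local frame
        W = self.kernel_net(local_coords).view(
            *local_coords.shape[:2], self.in_dim, self.out_dim)
        out = torch.einsum("nki,nkio->nko", x_neighbors, W)   # coordinate-conditioned kernels
        return out.mean(dim=1)                                # aggregate over the local space

y = ConditionalLocalConv(8, 16)(torch.randn(100, 6, 8), torch.randn(100, 6, 2))  # (100, 16)
```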
Datasets serve both as crucial training resources and as trackers of model performance. However, existing datasets have exposed a plethora of problems, inducing biased models and unreliable evaluation results. In this paper, we propose a model-agnostic framework for automatic dataset quality evaluation. We examine the statistical properties of datasets along three fundamental dimensions, reliability, difficulty, and validity, following classical testing theory. Taking Named Entity Recognition (NER) datasets as a case study, we introduce $9$ statistical metrics for this dataset evaluation framework. Experimental results and human evaluation validate that our framework effectively assesses various aspects of dataset quality. Furthermore, we study how a dataset's scores on our statistical metrics affect model performance, and we call for dataset quality evaluation or targeted dataset improvement before training or testing models.
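To illustrate the kind of statistic such a framework computes over a NER dataset, two simple examples (label-distribution entropy and entity density) are sketched below. These particular metrics are illustrative choices of ours and are not necessarily among the paper's nine metrics.

```python
# Two example dataset-level statistics for a NER corpus with BIO-style tags:
# label-distribution entropy and entity density. Illustrative metrics, not the paper's.
from collections import Counter
import math

def label_entropy(tag_sequences):
    """Entropy of the tag distribution; low entropy can signal a highly skewed label space."""
    counts = Counter(tag for seq in tag_sequences for tag in seq)
    total = sum(counts.values())
    return -sum(c / total * math.log2(c / total) for c in counts.values())

def entity_density(tag_sequences):
    """Fraction of tokens that belong to an entity (i.e., carry a non-'O' tag)."""
    tags = [tag for seq in tag_sequences for tag in seq]
    return sum(t != "O" for t in tags) / len(tags)

data = [["B-PER", "I-PER", "O", "O", "B-LOC"], ["O", "O", "B-ORG", "O"]]
print(label_entropy(data), entity_density(data))
```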