Graph generative models have broad applications in biology, chemistry and social science. However, modelling and understanding the generative process of graphs is challenging due to the discrete and high-dimensional nature of graphs, as well as the permutation invariance to node orderings in underlying graph distributions. Current leading autoregressive models fail to capture the permutation-invariant nature of graphs due to their reliance on a generation ordering, and they have high time complexity. Here, we propose a continuous-time generative diffusion process for permutation-invariant graph generation to mitigate these issues. Specifically, we first construct a forward diffusion process defined by a stochastic differential equation (SDE), which smoothly converts graphs within the complex distribution to random graphs that follow a known edge probability. By solving the corresponding reverse-time SDE, graphs can be generated from newly sampled random graphs. To facilitate the reverse-time SDE, we design a position-enhanced graph score network that captures the evolving structure and position information from perturbed graphs for permutation-equivariant score estimation. Under the evaluation of comprehensive metrics, our proposed generative diffusion process achieves competitive performance in graph distribution learning. Experimental results also show that GraphGDP can generate high-quality graphs in only 24 function evaluations, much faster than previous autoregressive models.
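To make the forward/reverse mechanism above concrete, here is a minimal sketch in generic score-based SDE notation (the symbols $f$, $g$, $\mathbf{w}_t$ and $G_t$ are mine; the exact drift and diffusion coefficients used by GraphGDP may differ):

```latex
% Forward diffusion (sketch): drift f and diffusion coefficient g are design choices.
\mathrm{d}G_t = f(G_t, t)\,\mathrm{d}t + g(t)\,\mathrm{d}\mathbf{w}_t
% Reverse-time SDE used for generation, driven by the score \nabla_G \log p_t(G)
% (time runs backwards; \bar{\mathbf{w}}_t is a reverse-time Wiener process):
\mathrm{d}G_t = \big[f(G_t, t) - g(t)^2\,\nabla_{G_t} \log p_t(G_t)\big]\,\mathrm{d}t + g(t)\,\mathrm{d}\bar{\mathbf{w}}_t
```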
Learning the underlying distribution of molecular graphs and generating high-fidelity samples is a fundamental research problem in drug discovery and material science. However, accurately modeling distribution and rapidly generating novel molecular graphs remain crucial and challenging goals. To accomplish these goals, we propose a novel Conditional Diffusion model based on discrete Graph Structures (CDGS) for molecular graph generation. Specifically, we construct a forward graph diffusion process on both graph structures and inherent features through stochastic differential equations (SDE) and derive discrete graph structures as the condition for reverse generative processes. We present a specialized hybrid graph noise prediction model that extracts the global context and the local node-edge dependency from intermediate graph states. We further utilize ordinary differential equation (ODE) solvers for efficient graph sampling, based on the semi-linear structure of the probability flow ODE. Experiments on diverse datasets validate the effectiveness of our framework. Particularly, the proposed method still generates high-quality molecular graphs in a limited number of steps.
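The "semi-linear structure of the probability flow ODE" mentioned above refers to the deterministic ODE that shares the marginals of the forward diffusion; a brief sketch in generic SDE notation (not necessarily the exact parameterization used by CDGS):

```latex
% Probability flow ODE sharing the marginals p_t of the forward SDE:
\frac{\mathrm{d}G_t}{\mathrm{d}t} = f(G_t, t) - \tfrac{1}{2}\, g(t)^2\, \nabla_{G_t} \log p_t(G_t)
% With a linear drift such as f(G, t) = -\tfrac{1}{2}\beta(t)\, G, the right-hand side is
% semi-linear: a linear term in G_t plus a learned nonlinear score/noise-prediction term,
% which dedicated ODE solvers for diffusion models can exploit for fast sampling.
```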
Generating graph-structured data requires learning the underlying distribution of graphs. However, this is a challenging problem, and previous graph generative methods either fail to capture the permutation-invariance property of graphs or cannot sufficiently model the complex dependencies between nodes and edges, which is crucial for generating real-world graphs such as molecules. To overcome such limitations, we propose a novel score-based generative model for graphs with a continuous-time framework. Specifically, we propose a new graph diffusion process that models the joint distribution of nodes and edges through a system of stochastic differential equations (SDEs). We then derive new score matching objectives tailored to the proposed diffusion process to estimate the gradient of the joint log-density with respect to each component, and introduce a new solver for the SDE system to efficiently sample from the reverse diffusion process. We validate our graph generation method on diverse datasets, on which it achieves either significantly superior or competitive performance compared to the baselines. Further analysis shows that our method is able to generate molecules that lie close to the training distribution yet do not violate chemical valency rules, demonstrating the effectiveness of the SDE system in modeling node-edge relationships. Our code is available at https://github.com/harryjo97/gdss.
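As a rough sketch of a joint node-edge diffusion expressed as a system of SDEs over node features $X_t$ and adjacency $A_t$ (notation mine; the paper's concrete drift and diffusion choices may differ):

```latex
\mathrm{d}X_t = \mathbf{f}_{1,t}(X_t)\,\mathrm{d}t + g_{1,t}\,\mathrm{d}\mathbf{w}^1_t,
\qquad
\mathrm{d}A_t = \mathbf{f}_{2,t}(A_t)\,\mathrm{d}t + g_{2,t}\,\mathrm{d}\mathbf{w}^2_t
% Two score networks estimate the partial scores of the joint density,
% \nabla_{X_t} \log p_t(X_t, A_t) and \nabla_{A_t} \log p_t(X_t, A_t),
% which drive the coupled reverse-time system during sampling.
```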
Denoising diffusion probabilistic models and score matching models have proven to be very powerful for generative tasks. While these approaches have also been applied to the generation of discrete graphs, they have, so far, relied on continuous Gaussian perturbations. Instead, in this work, we suggest using discrete noise for the forward Markov process. This ensures that the graph remains discrete at every intermediate step. Compared to the previous approach, our experimental results on four datasets and multiple architectures show that using a discrete noising process yields higher-quality generated samples, as indicated by average MMDs reduced by a factor of 1.5. Furthermore, the number of denoising steps is reduced from 1000 to 32, leading to a 30 times faster sampling procedure.
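A minimal sketch of one discrete noising step that keeps the adjacency matrix binary is given below, assuming a simple independent edge-flip kernel; the paper's actual transition probabilities and noise schedule may differ:

```python
import numpy as np

def discrete_edge_noise(adj: np.ndarray, flip_prob: float, rng=None) -> np.ndarray:
    """One forward noising step that keeps the graph discrete: each (undirected)
    edge slot is independently flipped with probability flip_prob.  This is a
    generic illustration of a discrete noising kernel, not the exact transition
    matrix used in the paper above."""
    rng = np.random.default_rng() if rng is None else rng
    n = adj.shape[0]
    # Sample flips only for the upper triangle, then symmetrize.
    flips = rng.random((n, n)) < flip_prob
    flips = np.triu(flips, k=1)
    flips = flips | flips.T
    noisy = np.where(flips, 1 - adj, adj)
    np.fill_diagonal(noisy, 0)  # keep the graph simple (no self-loops)
    return noisy
```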
Pre-publication draft of a book to be published by Morgan & Claypool publishers. Unedited version released with permission. All relevant copyrights held by the author and publisher extend to this pre-publication draft.
Diffusion models are a class of deep generative models that have shown impressive results on various tasks, backed by a solid theoretical foundation. Although diffusion models achieve more impressive sample quality and diversity than other state-of-the-art models, they still suffer from an expensive sampling procedure and sub-optimal likelihood estimation. Recent studies have shown great enthusiasm for improving the performance of diffusion models. In this paper, we present the first comprehensive review of existing variants of diffusion models. Specifically, we provide the first taxonomy of diffusion models, categorizing them into three types: sampling-acceleration enhancement, likelihood-maximization enhancement, and data-generalization enhancement. We also introduce in detail five other generative models (namely variational autoencoders, generative adversarial networks, normalizing flows, autoregressive models, and energy-based models) and clarify the connections between diffusion models and these generative models. We then conduct a thorough survey of the applications of diffusion models, including computer vision, natural language processing, waveform signal processing, multi-modal modeling, molecular graph generation, time-series modeling, and adversarial purification. Furthermore, we offer new perspectives on the development of this class of generative models.
Temporal graphs represent dynamic relationships among entities and occur in many real-life applications such as social networks, e-commerce, communication, road networks, and biological systems. They necessitate research on their generative modeling and representation learning beyond that for static graphs. In this survey, we comprehensively review the recent neural, time-dependent graph representation learning and generative modeling approaches proposed for handling temporal graphs. Finally, we identify the weaknesses of existing approaches and discuss the research proposal of our recently published paper TIGGER [24].
Deep learning has revolutionized many machine learning tasks in recent years, ranging from image classification and video processing to speech recognition and natural language understanding. The data in these tasks are typically represented in the Euclidean space. However, there is an increasing number of applications where data are generated from non-Euclidean domains and are represented as graphs with complex relationships and interdependency between objects. The complexity of graph data has imposed significant challenges on existing machine learning algorithms. Recently, many studies on extending deep learning approaches for graph data have emerged. In this survey, we provide a comprehensive overview of graph neural networks (GNNs) in data mining and machine learning fields. We propose a new taxonomy to divide the state-of-the-art graph neural networks into four categories, namely recurrent graph neural networks, convolutional graph neural networks, graph autoencoders, and spatial-temporal graph neural networks. We further discuss the applications of graph neural networks across various domains and summarize the open source codes, benchmark data sets, and model evaluation of graph neural networks. Finally, we propose potential research directions in this rapidly growing field.
We approach the graph generation problem from a spectral perspective by first generating the dominant part of the graph Laplacian spectrum and then building a graph that matches these eigenvalues and eigenvectors. Spectral conditioning allows direct modeling of global and local graph structure and helps to overcome the expressivity and mode-collapse issues of one-shot graph generators. Our novel GAN, called SPECTRE, can generate much larger graphs than previously possible with one-shot models. SPECTRE outperforms state-of-the-art deep autoregressive generators in terms of modeling fidelity, while also avoiding expensive sequential generation and dependence on node ordering. A case in point: on sizable synthetic and real-world graphs, SPECTRE achieves a 4- to 170-fold improvement over the best competing one-shot generator, and is 23 to 30 times faster than autoregressive generators.
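For illustration, the spectral quantities such a generator is conditioned on can be computed as below; this only extracts the dominant part of the normalized Laplacian spectrum and is not the SPECTRE model itself:

```python
import numpy as np

def dominant_laplacian_spectrum(adj: np.ndarray, k: int):
    """Return the k smallest non-trivial eigenvalues/eigenvectors of the symmetric
    normalized graph Laplacian, i.e. the spectral quantities a spectrum-conditioned
    generator would target (illustrative only)."""
    deg = adj.sum(axis=1).astype(float)
    d_inv_sqrt = np.zeros_like(deg)
    nz = deg > 0
    d_inv_sqrt[nz] = deg[nz] ** -0.5
    lap = np.eye(adj.shape[0]) - d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]
    eigvals, eigvecs = np.linalg.eigh(lap)        # eigenvalues in ascending order
    return eigvals[1:k + 1], eigvecs[:, 1:k + 1]  # skip the smallest (trivial) eigenvalue
```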
This work introduces DiGress, a discrete denoising diffusion model for generating graphs with categorical node and edge attributes. Our model defines a diffusion process that progressively edits graphs with noise (adding or removing edges, changing categories), and a graph transformer network that learns to revert this process. With these two ingredients, we reduce distribution learning over graphs to a sequence of simple classification tasks. We further improve sample quality by proposing a new Markovian noise model that preserves the marginal distribution of node and edge types during diffusion, and by adding auxiliary graph-theoretic features derived from the noisy graph at each diffusion step. Finally, we propose a guidance procedure for conditioning generation on graph-level features. Overall, DiGress achieves state-of-the-art performance on both molecular and non-molecular datasets, with a 3x improvement in validity on a planar graph dataset. In particular, it is the first model that scales to the large GuacaMol dataset containing 1.3 million drug-like molecules without using molecule-specific representations such as SMILES or fragments.
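A small sketch of a marginal-preserving categorical transition matrix of the kind described above (a generic construction; DiGress's exact noise schedule is not reproduced here):

```python
import numpy as np

def marginal_noise_transition(alpha: float, marginal: np.ndarray) -> np.ndarray:
    """Single-step transition matrix for a categorical noising process whose limit
    distribution is the empirical marginal of node (or edge) types:
        Q = alpha * I + (1 - alpha) * 1 m^T
    With probability alpha a category is kept, otherwise it is resampled from the
    marginal m.  A sketch of the 'marginal-preserving' idea, not the paper's schedule."""
    k = marginal.shape[0]
    return alpha * np.eye(k) + (1.0 - alpha) * np.ones((k, 1)) @ marginal[None, :]
```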
Graphs are ubiquitous, encoding relational information about real-world objects in many domains. Graph generation, whose purpose is to generate new graphs from a distribution similar to the observed graphs, has received increasing attention thanks to recent advances in deep learning models. In this paper, we conduct a comprehensive review of the existing literature on deep graph generation, from a variety of emerging methods to their broad application areas. Specifically, we first formulate the problem of deep graph generation and discuss its differences from several related graph learning tasks. Second, we divide state-of-the-art methods into three categories based on model architecture and summarize their generation strategies. Third, we introduce three key application areas of deep graph generation. Finally, we highlight challenges and opportunities for future research on deep graph generation.
Deep learning has been shown to be successful in a number of domains, ranging from acoustics, images, to natural language processing. However, applying deep learning to the ubiquitous graph data is non-trivial because of the unique characteristics of graphs. Recently, substantial research efforts have been devoted to applying deep learning methods to graphs, resulting in beneficial advances in graph analysis techniques. In this survey, we comprehensively review the different types of deep learning methods on graphs. We divide the existing methods into five categories based on their model architectures and training strategies: graph recurrent neural networks, graph convolutional networks, graph autoencoders, graph reinforcement learning, and graph adversarial methods. We then provide a comprehensive overview of these methods in a systematic manner mainly by following their development history. We also analyze the differences and compositions of different methods. Finally, we briefly outline the applications in which they have been used and discuss potential future research directions.
Deep learning has shown great potential for generative tasks. Generative models are a class of models that can randomly generate observations given certain implicit parameters. Recently, diffusion models have become a rising class of generative models owing to their generation capability, and tremendous achievements have already been made. Beyond computer vision, speech generation, bioinformatics, and natural language processing, more applications remain to be explored in this field. However, diffusion models have an inherent drawback: a slow generation process, which has motivated many works on improvement. This survey summarizes the field of diffusion models. We first state the main problem with the two landmark works, DDPM and DSM. We then present a variety of advanced techniques to speed up diffusion models: training schedules, training-free sampling, mixed modeling, and score & diffusion unification. Regarding existing models, we also provide a benchmark of FID scores and NLL according to the specific NFE. Moreover, applications of diffusion models are introduced, including computer vision, sequence modeling, audio, and AI for science. Finally, the field is summarized, along with its limitations and further directions.
A wide range of graph generative models have been proposed, necessitating effective methods to evaluate their quality. So far, most techniques use either traditional metrics based on subgraph counting or the representations of randomly initialized graph neural networks (GNNs). We propose using the representations of contrastively trained GNNs rather than random GNNs and show that this gives a more reliable evaluation metric. However, neither the traditional approach nor the GNN-based approach dominates the other: we give examples of graphs that each approach is unable to distinguish. We demonstrate that Graph Substructure Networks (GSNs), which in a way combine both approaches, are better at distinguishing the distances between graph datasets.
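For reference, a minimal sketch of an MMD-style distance between two sets of graph embeddings (e.g., produced by a contrastively trained GNN); the kernel choice and bandwidth here are assumptions, not the paper's exact configuration:

```python
import numpy as np

def rbf_mmd(x: np.ndarray, y: np.ndarray, sigma: float = 1.0) -> float:
    """Squared maximum mean discrepancy (biased estimator) with an RBF kernel
    between two sets of per-graph feature vectors (rows).  A generic evaluation
    sketch, not the metric configuration of the paper above."""
    def kernel(a, b):
        sq = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-sq / (2.0 * sigma ** 2))
    return kernel(x, x).mean() + kernel(y, y).mean() - 2.0 * kernel(x, y).mean()
```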
Score-based modeling through stochastic differential equations (SDEs) has provided a new perspective on diffusion models, and demonstrated superior performance on continuous data. However, the gradient of the log-likelihood function, i.e., the score function, is not properly defined for discrete spaces. This makes it non-trivial to adapt score-based modeling to categorical data. In this paper, we extend diffusion models to discrete variables by introducing a stochastic jump process where the reverse process denoises via a continuous-time Markov chain. This formulation admits an analytical simulation during backward sampling. To learn the reverse process, we extend score matching to general categorical data and show that an unbiased estimator can be obtained via simple matching of the conditional marginal distributions. We demonstrate the effectiveness of the proposed method on a set of synthetic and real-world music and image benchmarks.
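In generic continuous-time Markov chain notation, the discrete analogue of the score appears as probability ratios in the reverse-time rates; a sketch (not necessarily the paper's exact parameterization):

```latex
% Forward CTMC with jump rate R_t(x, y) from state x to y.
% Its time reversal has rates weighted by ratios of the marginals p_t:
\hat{R}_t(x, y) \;=\; R_t(y, x)\,\frac{p_t(y)}{p_t(x)}, \qquad y \neq x
% so learning the reverse process amounts to estimating the ratios p_t(y)/p_t(x),
% the categorical counterpart of the continuous score \nabla_x \log p_t(x).
```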
Score-based generative models (SGMs) have demonstrated remarkable synthesis quality. SGMs rely on a diffusion process that gradually perturbs the data towards a tractable distribution, while the generative model learns to denoise. The complexity of this denoising task is, apart from the data distribution itself, uniquely determined by the diffusion process. We argue that current SGMs employ overly simplistic diffusions, leading to unnecessarily complex denoising processes that limit generative modeling performance. Based on connections to statistical mechanics, we propose a novel critically-damped Langevin diffusion (CLD) and show that CLD-based SGMs achieve superior performance. CLD can be interpreted as running a joint diffusion in an extended space, where the auxiliary variables can be considered "velocities" that are coupled to the data variables, as in Hamiltonian dynamics. We derive a novel score matching objective for CLD and show that the model only needs to learn the score function of the conditional distribution of the velocity given the data, rather than learning the score of the data directly. We also derive a new sampling scheme for efficient synthesis from CLD-based diffusion models. We find that CLD outperforms previous SGMs in synthesis quality for similar network architectures and sampling compute budgets. We show that our novel sampler for CLD significantly outperforms solvers such as Euler–Maruyama. Our framework provides new insights into score-based denoising diffusion models and can be readily used for high-resolution image synthesis. Project page and code: https://nv-tlabs.github.io/cld-sgm.
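A sketch of a velocity-augmented Langevin diffusion in the spirit described above, written in generic notation; the precise coefficients and time scaling used by CLD may differ:

```latex
% Data x_t is coupled to an auxiliary velocity v_t; noise is injected only into v_t.
\mathrm{d}x_t = M^{-1} v_t\,\beta\,\mathrm{d}t,
\qquad
\mathrm{d}v_t = \big(-x_t - \Gamma M^{-1} v_t\big)\,\beta\,\mathrm{d}t + \sqrt{2\Gamma\beta}\,\mathrm{d}\mathbf{w}_t
% "Critical damping" corresponds to choosing \Gamma^2 = 4M, and the model only needs
% to learn the conditional score \nabla_{v_t} \log p_t(v_t \mid x_t).
```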
We propose a family of First Hitting Diffusion Models (FHDM), deep generative models that generate data with a diffusion process that terminates at a random first hitting time. This yields an extension of standard fixed-time diffusion models, which terminate at a pre-specified deterministic time. While standard diffusion models are designed for continuous, unconstrained data, FHDM is naturally designed to learn distributions over continuous as well as a range of discrete and structured domains. Moreover, FHDM enables instance-dependent termination times and accelerates the diffusion process, sampling higher-quality data with fewer diffusion steps. Technically, we train FHDM by maximum likelihood estimation on diffusion trajectories augmented from observed data with conditional first-hitting processes (i.e., bridges) derived based on Doob's $h$-transform, deviating from the commonly used time-reversal mechanism. We apply FHDM to generate data in various domains, such as point clouds (general continuous distributions), climate and geographical events on Earth (continuous distributions on the sphere), unweighted graphs (distributions over binary matrices), and segmentation maps of 2D images (high-dimensional categorical distributions). We observe considerable improvement compared with state-of-the-art approaches in both quality and speed.
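As background for the bridge construction mentioned above, Doob's $h$-transform conditions a diffusion on a terminal event by adding a guiding term to the drift; a generic sketch with symbols of my choosing:

```latex
% Unconditioned diffusion: dX_t = b(X_t, t)\,dt + \sigma(X_t, t)\,dW_t.
% Conditioning on hitting a target set, with h(x, t) the probability of that event
% when started from (x, t), yields the transformed drift (Doob's h-transform):
b^{h}(x, t) \;=\; b(x, t) + \sigma(x, t)\,\sigma(x, t)^{\!\top}\,\nabla_x \log h(x, t)
```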
Graph neural networks (GNNs) have boosted representation learning for various machine learning tasks. However, most existing GNNs that apply neighborhood aggregation usually perform poorly on heterophilic graphs, where adjacent nodes belong to different classes. In this paper, we show that in typical heterophilic graphs edges may be directed, and that whether edges are treated as undirected or directed greatly affects the performance of GNN models. Furthermore, due to the limitation of heterophily, messages from similar nodes beyond the local neighborhood are highly beneficial. These observations motivate us to develop a model that adaptively learns the directionality of a graph and exploits the underlying long-distance correlations between nodes. We first generalize the graph Laplacian to digraphs based on a proposed feature-aware PageRank algorithm, which simultaneously considers the graph directionality and the long-distance feature similarity between nodes. The digraph Laplacian then defines a graph propagation matrix, leading to a model named {\em DiglacianGCN}. Based on this, we further exploit node proximity measured by commute times between nodes, so as to preserve the long-distance correlations of nodes at the topology level. Extensive experiments on ten datasets with different levels of homophily demonstrate the effectiveness of our method over existing solutions on the task of node classification.
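For illustration, a dense personalized-PageRank-style propagation matrix can be formed as below; here the transition matrix is left generic, whereas the paper builds it from a feature-aware, direction-aware construction:

```python
import numpy as np

def ppr_propagation(transition: np.ndarray, alpha: float = 0.15) -> np.ndarray:
    """Dense personalized-PageRank propagation matrix
        Pi = alpha * (I - (1 - alpha) * P)^{-1}
    for a row-stochastic transition matrix P (teleport probability alpha).
    Only an illustration of PageRank-style propagation, not the paper's model."""
    n = transition.shape[0]
    return alpha * np.linalg.inv(np.eye(n) - (1.0 - alpha) * transition)
```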
Recently, there has been great success in applying deep neural networks to graph-structured data. However, most work focuses on either node- or graph-level supervised learning, such as node, link, or graph classification, or on node-level unsupervised learning (e.g., node clustering). Despite its wide range of applications, graph-level unsupervised learning has not received much attention yet. This may be mainly attributed to the high representation complexity of graphs, which can be represented by n! equivalent adjacency matrices, where n is the number of nodes. In this work, we address this issue by proposing a permutation-invariant variational autoencoder for graph-structured data. Our proposed model indirectly learns to match the node ordering of the input and output graphs, without imposing a particular node ordering or performing expensive graph matching. We demonstrate the effectiveness of our proposed model on various graph reconstruction and generation tasks and evaluate the expressive power of the extracted representations for downstream graph-level classification and regression.
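One common differentiable device for aligning input and output node orderings is a Sinkhorn-normalized soft permutation matrix; the sketch below illustrates this general idea only, and the model above may realize the matching differently:

```python
import numpy as np

def sinkhorn_soft_permutation(scores: np.ndarray, n_iters: int = 20, tau: float = 0.1) -> np.ndarray:
    """Turn an n x n score matrix into an approximately doubly-stochastic matrix
    (a soft permutation) by iterative row/column normalization (Sinkhorn).
    Illustrative device only; not taken from the paper above."""
    p = np.exp((scores - scores.max()) / tau)  # shift for numerical stability
    for _ in range(n_iters):
        p = p / p.sum(axis=1, keepdims=True)  # normalize rows
        p = p / p.sum(axis=0, keepdims=True)  # normalize columns
    return p
```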
In this paper, we propose Multiresolution Equivariant Graph Variational Autoencoders (MGVAE), the first hierarchical generative model to learn and generate graphs in a multiresolution and equivariant manner. At each resolution level, MGVAE employs higher-order message passing to encode the graph while learning to partition it into mutually exclusive clusters, coarsening it into a lower resolution that eventually creates a hierarchy of latent distributions. MGVAE then constructs a hierarchical generative model to variationally decode into a hierarchy of coarsened graphs. Importantly, our proposed framework is end-to-end permutation equivariant with respect to node ordering. MGVAE achieves competitive results on multiple generative tasks, including general graph generation, molecular generation, unsupervised molecular representation learning for predicting molecular properties, link prediction on citation graphs, and graph-based image generation.
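A minimal sketch of the coarsening step used in hierarchical/multiresolution graph models, given a cluster-assignment matrix; MGVAE's encoder, higher-order message passing, and variational decoding are not shown:

```python
import numpy as np

def coarsen_graph(adj: np.ndarray, assign: np.ndarray) -> np.ndarray:
    """Coarsen a graph given a (soft or hard) cluster-assignment matrix S of shape
    (n_nodes, n_clusters):  A_coarse = S^T A S.  A generic illustration of the
    pooling step in multiresolution graph models."""
    return assign.T @ adj @ assign
```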