Large-scale perturbations in the microbiome constitution are strongly correlated, whether as a driver or a consequence, with the health and functioning of human physiology. However, understanding the differences between the microbiome profiles of healthy and ill individuals can be complicated due to the large number of complex interactions among microbes. We propose to model these interactions as a time-evolving graph whose nodes are microbes and whose edges are the interactions among them. Motivated by the need to analyse such complex interactions, we develop a method that learns a low-dimensional representation of a time-evolving graph while preserving the dynamics occurring in the high-dimensional space. Through our experiments, we show that we can extract graph features, such as clusters of nodes or edges, that have the highest impact on the model as it learns the low-dimensional representation. This information is crucial for identifying microbes, and the interactions among them, that are strongly associated with clinical disease. We conduct experiments on both synthetic and real-world microbiome datasets.
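As an illustration of the setup described above, here is a minimal numpy sketch (all names and numbers are hypothetical): microbial interactions stored as one adjacency matrix per time point, with a truncated SVD standing in for the learned low-dimensional representation of each snapshot.

```python
import numpy as np

def snapshot_embeddings(adjacency_list, dim=2):
    """Embed each interaction snapshot into `dim` dimensions via truncated SVD.

    adjacency_list: list of (n_microbes x n_microbes) arrays, one per time step,
    where entry (i, j) is the strength of the interaction between microbes i and j.
    """
    embeddings = []
    for A in adjacency_list:
        A = (A + A.T) / 2.0                      # symmetrize: interactions are undirected
        U, S, _ = np.linalg.svd(A)
        embeddings.append(U[:, :dim] * S[:dim])  # one low-dim vector per microbe
    return embeddings

# Toy example: 5 microbes observed over 3 time points.
rng = np.random.default_rng(0)
snapshots = [rng.random((5, 5)) for _ in range(3)]
embs = snapshot_embeddings(snapshots)
print(embs[0].shape)  # (5, 2): 2-D representation of each microbe at t=0
```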
Temporal graphs represent dynamic relationships among entities and occur in many real-life applications such as social networks, e-commerce, communication, road networks, biological systems, and many more. They necessitate research beyond that on static graphs in terms of their generative modelling and representation learning. In this survey, we comprehensively review the recent neural approaches proposed for representation learning and generative modelling of time-dependent graphs. Finally, we identify the weaknesses of existing approaches and discuss the research proposal of our recently published paper TIGGER [24].
Given entities and their interactions in web data, which may have occurred at different times, how can we find communities of entities and track their evolution? In this paper, we approach this important task from the perspective of graph clustering. Recently, state-of-the-art clustering performance in various domains has been achieved by deep clustering methods. In particular, deep graph clustering (DGC) methods have successfully extended deep clustering to graph-structured data by learning node representations and cluster assignments in a joint optimization framework. Despite differences in modelling choices (e.g., encoder architectures), existing DGC methods are mainly based on autoencoders and use the same clustering objective with relatively minor adaptations. Also, while many real-world graphs are dynamic, previous DGC methods considered only static graphs. In this work, we develop CGC, a novel end-to-end graph clustering framework that fundamentally differs from existing methods. CGC learns node embeddings and cluster assignments in a contrastive graph learning framework, where positive and negative samples are carefully selected in a multi-level scheme to reflect the hierarchical community structure and network homophily. In addition, we extend CGC to time-evolving data, where temporal graph clustering is performed in an incremental learning fashion, with the ability to detect change points. Extensive evaluation on real-world graphs demonstrates that the proposed CGC consistently outperforms existing methods.
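A hedged sketch of the kind of contrastive objective the abstract refers to, where a node embedding is pulled towards a positive sample (here a cluster centroid) and pushed away from negatives; CGC's actual multi-level sampling and joint clustering objective are not reproduced.

```python
import torch
import torch.nn.functional as F

def info_nce(anchor, positive, negatives, temperature=0.5):
    """Contrastive (InfoNCE-style) loss for one anchor embedding.

    anchor, positive: (d,) tensors; negatives: (k, d) tensor.
    """
    anchor = F.normalize(anchor, dim=0)
    positive = F.normalize(positive, dim=0)
    negatives = F.normalize(negatives, dim=1)
    pos_score = torch.exp(anchor @ positive / temperature)
    neg_score = torch.exp(negatives @ anchor / temperature).sum()
    return -torch.log(pos_score / (pos_score + neg_score))

# Toy usage: a node embedding, its cluster centroid as the positive,
# and centroids of other clusters as negatives.
node = torch.randn(16)
centroid = torch.randn(16)
other_centroids = torch.randn(4, 16)
print(float(info_nce(node, centroid, other_centroids)))
```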
Graph learning is a popular approach for performing machine learning on graph-structured data. It has revolutionized the machine learning ability to model graph data to address downstream tasks. Its application is wide due to the availability of graph data ranging from all types of networks to information systems. Most graph learning methods assume that the graph is static and its complete structure is known during training. This limits their applicability since they cannot be applied to problems where the underlying graph grows over time and/or new tasks emerge incrementally. Such applications require a lifelong learning approach that can learn the graph continuously and accommodate new information whilst retaining previously learned knowledge. Lifelong learning methods that enable continuous learning in regular domains like images and text cannot be directly applied to continuously evolving graph data, due to its irregular structure. As a result, graph lifelong learning is gaining attention from the research community. This survey paper provides a comprehensive overview of recent advancements in graph lifelong learning, including the categorization of existing methods, and the discussions of potential applications and open research problems.
Deep-learning-based graph generation approaches have remarkable capacities for modelling graph data, enabling them to address a wide range of real-world problems. Making these methods able to consider different conditions during the generation process further improves their effectiveness by empowering them to generate new graph samples that satisfy the desired criteria. This paper presents a conditional deep graph generation method called SCGG that considers a particular type of structural condition. Specifically, the proposed SCGG model takes an initial subgraph and autoregressively generates new nodes and their corresponding edges on top of the given conditioning substructure. The architecture of SCGG consists of a graph representation learning network and an autoregressive generative model, which is trained end-to-end. Using this model, we can address graph completion, a rampant and inherently difficult problem of recovering missing nodes and their associated edges in partially observed graphs. Experimental results on both synthetic and real-world datasets demonstrate the superiority of our method compared with state-of-the-art baselines.
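A toy sketch of the autoregressive growth described above, under the assumption that a scoring function supplies edge probabilities; in SCGG that score would come from the trained graph representation network, whereas here a degree-based placeholder is used.

```python
import numpy as np

rng = np.random.default_rng(1)

def edge_probability(A, new_idx, existing_idx):
    """Placeholder scorer: prefer connecting the new node to high-degree nodes.
    In a learned model this would come from a graph encoder + autoregressive head."""
    degree = A[existing_idx].sum()
    return 1.0 - np.exp(-(degree + 0.5))

def grow_graph(A_init, n_new_nodes):
    """Autoregressively add nodes on top of a given (conditioning) subgraph."""
    A = A_init.copy()
    for _ in range(n_new_nodes):
        n = A.shape[0]
        A = np.pad(A, ((0, 1), (0, 1)))   # append an empty row/column for the new node
        for j in range(n):                # sample edges to every existing node
            if rng.random() < edge_probability(A, n, j):
                A[n, j] = A[j, n] = 1
    return A

A0 = np.array([[0, 1], [1, 0]])           # conditioning subgraph: a single edge
print(grow_graph(A0, 3).astype(int))
```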
Graph similarity learning refers to computing a similarity score between two graphs, which is required in many real-world applications such as visual tracking, graph classification, and collaborative filtering. Since most existing graph neural networks produce effective representations for a single graph, little effort has been made to jointly learn over two graphs and compute their similarity score. In addition, existing unsupervised graph similarity learning methods are mainly clustering-based, which ignores the valuable information embodied in graph pairs. To this end, we propose a Contrastive Graph Matching Network (CGMN) for self-supervised graph similarity learning, in order to compute the similarity between any two input graph objects. Specifically, we generate two augmented views for each graph in a pair. We then employ two strategies, namely cross-view interaction and cross-graph interaction, for effective node representation learning. The former resorts to the consistency of node representations across the two views; the latter is used to identify node differences between different graphs. Finally, we transform node representations into graph-level representations via a pooling operation for graph similarity computation. We have evaluated CGMN on eight real-world datasets, and the experimental results show that the proposed approach is superior to state-of-the-art methods on the downstream task of graph similarity learning.
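A minimal numpy sketch of the overall pipeline the abstract outlines: create an augmented view per graph, encode nodes (here with untrained neighbourhood averaging rather than CGMN's learned encoder), pool to graph vectors, and compare with cosine similarity.

```python
import numpy as np

rng = np.random.default_rng(2)

def drop_edges(A, p=0.2):
    """Create an augmented view by randomly dropping edges (keeps symmetry)."""
    mask = np.triu(rng.random(A.shape) > p, 1)
    return A * (mask + mask.T)

def propagate(A, X, hops=2):
    """Simple (untrained) neighbourhood averaging as a stand-in for a GNN encoder."""
    A_hat = A + np.eye(A.shape[0])
    D_inv = 1.0 / A_hat.sum(axis=1, keepdims=True)
    for _ in range(hops):
        X = D_inv * (A_hat @ X)
    return X

def graph_similarity(A1, X1, A2, X2):
    """Pool node embeddings to graph vectors and compare with cosine similarity."""
    g1 = propagate(drop_edges(A1), X1).mean(axis=0)
    g2 = propagate(drop_edges(A2), X2).mean(axis=0)
    return float(g1 @ g2 / (np.linalg.norm(g1) * np.linalg.norm(g2)))

A = (rng.random((6, 6)) > 0.6).astype(float); A = np.triu(A, 1); A = A + A.T
X = rng.random((6, 8))
print(graph_similarity(A, X, A, X))   # near 1.0 up to augmentation noise
```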
Graphs are ubiquitous in nature and can therefore serve as models for many practical but also theoretical problems. For this purpose, they can be defined as many different types which suitably reflect the individual contexts of the represented problem. To address cutting-edge problems based on graph data, the research field of Graph Neural Networks (GNNs) has emerged. Despite the field's youth and the speed at which new models are developed, many recent surveys have been published to keep track of them. Nevertheless, no survey has yet gathered which GNNs can process which types of graphs. In this survey, we give a detailed overview of already existing GNNs and, unlike previous surveys, categorize them according to their ability to handle different graph types and properties. We consider GNNs operating on static and dynamic graphs of different structural constitutions, with or without node or edge attributes. Moreover, we distinguish between GNN models for discrete-time or continuous-time dynamic graphs and group the models according to their architecture. We find that there are still graph types that are not or only rarely covered by existing GNN models. We point out where models are missing and give potential reasons for their absence.
Understanding the diversity of cell types and their function in the brain is one of the key challenges in neuroscience. The advent of large-scale datasets has given rise to the need for unbiased and quantitative approaches to cell type classification. We present GraphDINO, a purely data-driven approach for learning low-dimensional representations of 3D neuronal morphologies. GraphDINO is a novel graph representation learning method that leverages self-supervised learning on spatially-embedded graphs using a transformer model. It smoothly interpolates between attention-based global interaction among nodes and classic graph convolutional processing. We show that this method is able to yield morphological cell-type clusterings that are on par with manual feature-based classification and that show good correspondence to expert-labelled cell types across two different species and cortical areas. Our method is applicable beyond neuroscience, in settings where the samples in a dataset are graphs and graph-level embeddings are desired.
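A hedged toy version of the interpolation mentioned above, mixing a global self-attention matrix with the row-normalized adjacency via a coefficient `lam`; GraphDINO's actual transformer architecture and self-supervised objective are not reproduced.

```python
import torch
import torch.nn.functional as F

def mixed_attention(X, A, lam=0.5):
    """Blend global dot-product attention with adjacency-based (graph conv) weights.

    X: (n, d) node features; A: (n, n) adjacency; lam in [0, 1],
    lam=1 -> pure self-attention, lam=0 -> pure neighbourhood averaging.
    """
    attn = F.softmax(X @ X.T / X.shape[1] ** 0.5, dim=1)   # global interactions
    A_hat = A + torch.eye(A.shape[0])
    conv = A_hat / A_hat.sum(dim=1, keepdim=True)          # row-normalized adjacency
    weights = lam * attn + (1 - lam) * conv
    return weights @ X                                     # updated node features

X = torch.randn(5, 8)
A = (torch.rand(5, 5) > 0.5).float()
A = torch.triu(A, 1); A = A + A.T
print(mixed_attention(X, A, lam=0.3).shape)   # (5, 8)
```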
Data-efficient learning on graphs (GEL) is essential in real-world applications. Existing GEL methods focus on learning useful representations for nodes, edges, or entire graphs with ``small'' labeled data. But the problem of data-efficient learning for subgraph prediction has not been explored. The challenges of this problem lie in the following aspects: 1) It is crucial for subgraphs to learn positional features to acquire structural information in the base graph in which they exist. Although the existing subgraph neural network method is capable of learning disentangled position encodings, the overall computational complexity is very high. 2) Prevailing graph augmentation methods for GEL, including rule-based, sample-based, adaptive, and automated methods, are not suitable for augmenting subgraphs because a subgraph contains fewer nodes but richer information such as position, neighbor, and structure. Subgraph augmentation is more susceptible to undesirable perturbations. 3) Only a small number of nodes in the base graph are contained in subgraphs, which leads to a potential ``bias'' problem that the subgraph representation learning is dominated by these ``hot'' nodes. By contrast, the remaining nodes fail to be fully learned, which reduces the generalization ability of subgraph representation learning. In this paper, we aim to address the challenges above and propose a Position-Aware Data-Efficient Learning framework for subgraph neural networks called PADEL. Specifically, we propose a novel node position encoding method that is anchor-free, and design a new generative subgraph augmentation method based on a diffused variational subgraph autoencoder, and we propose exploratory and exploitable views for subgraph contrastive learning. Extensive experiment results on three real-world datasets show the superiority of our proposed method over state-of-the-art baselines.
Machine learning on graphs is an important and ubiquitous task with applications ranging from drug design to friendship recommendation in social networks. The primary challenge in this domain is finding a way to represent, or encode, graph structure so that it can be easily exploited by machine learning models. Traditionally, machine learning approaches relied on user-defined heuristics to extract features encoding structural information about a graph (e.g., degree statistics or kernel functions). However, recent years have seen a surge in approaches that automatically learn to encode graph structure into low-dimensional embeddings, using techniques based on deep learning and nonlinear dimensionality reduction. Here we provide a conceptual review of key advancements in this area of representation learning on graphs, including matrix factorization-based methods, random-walk based algorithms, and graph neural networks. We review methods to embed individual nodes as well as approaches to embed entire (sub)graphs. In doing so, we develop a unified framework to describe these recent approaches, and we highlight a number of important applications and directions for future work.
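As a hedged illustration of the random-walk family of methods mentioned in the review (assuming gensim is installed; the graph and all parameters are synthetic): generate uniform random walks and feed them to a skip-gram model to obtain node embeddings.

```python
import numpy as np
from gensim.models import Word2Vec   # assumes gensim >= 4 is installed

rng = np.random.default_rng(3)

def random_walks(adj_list, walk_length=10, walks_per_node=5):
    """Uniform random walks over an adjacency-list graph, returned as 'sentences'."""
    walks = []
    for start in adj_list:
        for _ in range(walks_per_node):
            walk, node = [str(start)], start
            for _ in range(walk_length - 1):
                neighbours = adj_list[node]
                if not neighbours:
                    break
                node = rng.choice(neighbours)
                walk.append(str(node))
            walks.append(walk)
    return walks

# Toy graph: a 4-cycle with a chord.
graph = {0: [1, 3], 1: [0, 2, 3], 2: [1, 3], 3: [0, 1, 2]}
model = Word2Vec(random_walks(graph), vector_size=16, window=3, min_count=1, sg=1)
print(model.wv[str(0)][:4])   # low-dimensional embedding of node 0
```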
Self-supervised learning has recently attracted much attention owing to its independence from labels and its robustness. Current research on this topic mainly uses static information such as graph structure, but does not capture dynamic information such as edge timestamps well. Real-world graphs are often dynamic, meaning that interactions between nodes occur at specific times. This paper proposes a self-supervised dynamic graph representation learning framework (DySubC), which defines a temporal subgraph contrastive learning task to simultaneously learn the structural and evolutionary features of a dynamic graph. Specifically, a novel temporal subgraph sampling strategy is first proposed, which takes each node of the dynamic graph as a central node and uses both the neighbourhood structure and edge timestamps to sample the corresponding temporal subgraph. A subgraph representation function is then designed according to the influence of the neighbourhood nodes on the central node, after the nodes in each subgraph have been encoded. Finally, structural and temporal contrastive losses are defined to maximize the mutual information between node representations and temporal subgraph representations. Experiments on five real-world datasets show that (1) DySubC performs better than the related baselines, including two graph contrastive learning models and four dynamic graph representation learning models, on the downstream link prediction task, and (2) using temporal information not only enables sampling more effective subgraphs, but also allows learning better representations through the temporal contrastive loss.
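A minimal sketch of a temporal subgraph sampler in the spirit of the strategy described above, keeping the most recent timestamped edges around a central node and its neighbours; DySubC's actual sampling rule and contrastive losses are not reproduced.

```python
def temporal_subgraph(edges, center, k=3):
    """Sample a temporal subgraph around `center`: its k most recent incident edges
    plus the k most recent edges of each selected neighbour.

    edges: list of (u, v, timestamp) tuples.
    """
    def recent_incident(node):
        incident = [e for e in edges if node in (e[0], e[1])]
        return sorted(incident, key=lambda e: e[2], reverse=True)[:k]

    sub = set(recent_incident(center))
    for u, v, _ in list(sub):
        neighbour = v if u == center else u
        sub.update(recent_incident(neighbour))
    return sorted(sub, key=lambda e: e[2])

edges = [(0, 1, 5), (0, 2, 1), (1, 2, 3), (2, 3, 4), (0, 3, 2), (1, 3, 6)]
print(temporal_subgraph(edges, center=0, k=2))
```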
Graph representation learning is a fast-growing field where one of the main objectives is to generate meaningful representations of graphs in lower-dimensional spaces. The learned embeddings have been successfully applied to perform various prediction tasks, such as link prediction, node classification, clustering, and visualization. The collective effort of the graph learning community has delivered hundreds of methods, but no single method excels under all evaluation metrics, such as prediction accuracy, running time, and scalability. This survey aims to evaluate all major classes of graph embedding methods by considering algorithmic variations, parameter selection, scalability, hardware and software platforms, downstream ML tasks, and diverse datasets. We organize graph embedding techniques using a taxonomy that covers manual feature engineering, matrix factorization, shallow neural networks, and deep graph convolutional networks. We evaluate these classes of algorithms on node classification, link prediction, clustering, and visualization tasks using widely used benchmark graphs. We design our experiments on top of the PyTorch Geometric and DGL libraries and run them on different multi-core CPU and GPU platforms. We rigorously scrutinize the performance of the embedding methods under various performance metrics and summarize the results. Thus, this paper can serve as a comparative guide to help users select the methods best suited to their tasks.
There has recently been intense activity on embedding very high-dimensional and nonlinear data structures, much of it in the data science and machine learning literature. We survey this activity in four parts. In the first part, we cover nonlinear methods such as principal curves, multidimensional scaling, local linear methods, ISOMAP, graph-based methods and diffusion maps, kernel-based methods, and random projections. The second part is concerned with topological embedding methods, in particular mapping topological properties into persistence diagrams, and the Mapper algorithm. Another type of data set experiencing tremendous growth is very high-dimensional network data. The task considered in the third part is how to embed such data in a vector space of moderate dimension so that the data become amenable to traditional techniques such as clustering and classification. Arguably, this is where the contrast between algorithmic machine learning methods and statistical modelling (so-called stochastic block modelling) is at its sharpest. In the paper, we discuss the pros and cons of both approaches. The final part of the survey deals with embedding in $\mathbb{R}^2$, i.e., visualization. Three methods are presented, based on the methods of the first, second and third parts: $t$-SNE, UMAP, and LargeVis. These are illustrated and compared on two simulated data sets: one consisting of a triplet of noisy ranunculoid curves, the other of networks of varying complexity generated from stochastic block models with two types of nodes.
Self-supervised learning methods on heterogeneous graphs, especially contrastive learning, can effectively remove the dependence on supervised data. Meanwhile, most existing representation learning methods embed heterogeneous graphs into a single geometric space, either Euclidean or hyperbolic. Such a single geometric view is usually insufficient to observe the complete picture of a heterogeneous graph due to its rich semantics and complex structure. Motivated by these observations, this paper proposes a novel self-supervised learning method, termed Geometry Contrastive Learning (GCL), to better represent heterogeneous graphs when supervised data is unavailable. GCL views a heterogeneous graph from the Euclidean and hyperbolic perspectives simultaneously, aiming to strongly merge the abilities to model rich semantics and complex structures, which is expected to bring more benefits to downstream tasks. GCL maximizes the mutual information between the two geometric views by contrasting representations at both the local-local and local-global semantic levels. Extensive experiments on four benchmark datasets show that the proposed approach outperforms strong baselines, including both unsupervised and supervised methods, on three tasks: node classification, node clustering, and similarity search.
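A hedged sketch of the two geometric views: the same embedding mapped into the Poincaré ball via the exponential map at the origin, with distances computed in both Euclidean and hyperbolic space; GCL's encoders and contrastive losses are not reproduced.

```python
import numpy as np

def to_poincare(v, eps=1e-9):
    """Map a Euclidean vector into the Poincaré ball via the exponential map at the origin."""
    norm = np.linalg.norm(v) + eps
    return np.tanh(norm) * v / norm

def poincare_distance(x, y, eps=1e-9):
    """Geodesic distance between two points inside the unit Poincaré ball."""
    sq = np.sum((x - y) ** 2)
    denom = (1 - np.sum(x ** 2)) * (1 - np.sum(y ** 2)) + eps
    return np.arccosh(1 + 2 * sq / denom)

a, b = np.array([0.3, 0.1]), np.array([1.2, -0.4])
ha, hb = to_poincare(a), to_poincare(b)
print("euclidean:", np.linalg.norm(a - b))
print("hyperbolic:", poincare_distance(ha, hb))
```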
Deep learning has revolutionized many machine learning tasks in recent years, ranging from image classification and video processing to speech recognition and natural language understanding. The data in these tasks are typically represented in the Euclidean space. However, there is an increasing number of applications where data are generated from non-Euclidean domains and are represented as graphs with complex relationships and interdependency between objects. The complexity of graph data has imposed significant challenges on existing machine learning algorithms. Recently, many studies on extending deep learning approaches for graph data have emerged. In this survey, we provide a comprehensive overview of graph neural networks (GNNs) in data mining and machine learning fields. We propose a new taxonomy to divide the state-of-the-art graph neural networks into four categories, namely recurrent graph neural networks, convolutional graph neural networks, graph autoencoders, and spatial-temporal graph neural networks. We further discuss the applications of graph neural networks across various domains and summarize the open source codes, benchmark data sets, and model evaluation of graph neural networks. Finally, we propose potential research directions in this rapidly growing field.
Biomedical networks are universal descriptors of systems of interacting elements, from protein interactions to disease networks, all the way to healthcare systems and scientific knowledge. With the remarkable success of representation learning in providing powerful predictions and insights, we have witnessed a rapid expansion of representation learning techniques into the modelling, analysis, and learning of these networks. In this review, we put forward the observation that long-standing principles of networks in biology and medicine, although often untapped in machine learning research, can provide the conceptual grounding for representation learning, explain its current successes and limitations, and inform future advances. We synthesize a spectrum of algorithmic approaches that, at their core, leverage graph topology to embed networks into compact vector spaces, and capture the breadth of ways in which representation learning is proving useful. Areas of profound impact include identifying variants underlying complex traits, disentangling the heterogeneous behaviours of single cells and their effects on health, assisting in the diagnosis and treatment of patients, and developing safe and effective medicines.
Street networks provide an invaluable source of information about the different temporal and spatial patterns emerging in our cities. These streets are often represented as graphs where intersections are modelled as nodes and streets as links between them. Previous work has shown that raster representations of the original data can be created through a learning algorithm on low-dimensional representations of the street networks. In contrast, models that capture high-level urban network metrics can be trained through convolutional neural networks. However, the detailed topological data is lost through the rasterisation of the street network. The models cannot recover this information from the image alone, failing to capture complex street network features. This paper proposes a model capable of inferring good representations directly from the street network. Specifically, we use a variational autoencoder with graph convolutional layers and a decoder that outputs a probabilistic fully-connected graph to learn latent representations that encode both local network structure and the spatial distribution of nodes. We train the model on thousands of street network segments and use the learnt representations to generate synthetic street configurations. Finally, we propose a possible application to classify the urban morphology of different network segments by investigating their common characteristics in the learnt space.
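A minimal PyTorch sketch in the spirit of the described model: a graph-convolutional variational encoder over node features and a decoder that outputs a probabilistic fully-connected graph via inner products; the actual model, trained on thousands of street network segments, is more elaborate and all names below are illustrative.

```python
import torch
import torch.nn as nn

class TinyGraphVAE(nn.Module):
    """Graph-conv encoder -> latent node vectors -> probabilistic dense adjacency."""
    def __init__(self, in_dim, hid_dim, lat_dim):
        super().__init__()
        self.lin1 = nn.Linear(in_dim, hid_dim)
        self.mu = nn.Linear(hid_dim, lat_dim)
        self.logvar = nn.Linear(hid_dim, lat_dim)

    def encode(self, X, A_norm):
        h = torch.relu(A_norm @ self.lin1(X))        # one graph-convolution layer
        return self.mu(A_norm @ h), self.logvar(A_norm @ h)

    def forward(self, X, A_norm):
        mu, logvar = self.encode(X, A_norm)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterisation
        edge_prob = torch.sigmoid(z @ z.T)           # probabilistic fully-connected graph
        return edge_prob, mu, logvar

n = 6
A = torch.eye(n) + torch.rand(n, n).round(); A = ((A + A.T) > 0).float()
A_norm = A / A.sum(dim=1, keepdim=True)
X = torch.rand(n, 4)                                 # e.g. node coordinates / degrees
model = TinyGraphVAE(4, 16, 2)
edge_prob, mu, logvar = model(X, A_norm)
print(edge_prob.shape)                               # (6, 6) edge probabilities
```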
Dynamic graph representation learning is an important task with wide applications. Previous methods for dynamic graph learning are usually sensitive to noisy graph information such as missing or spurious connections, which can yield degenerate performance and generalization. To overcome this challenge, we propose a transformer-based dynamic graph representation learning method, named Dynamic Graph Transformer (DGT), with spatial-temporal encoding to effectively learn graph topology and capture implicit links. To improve the generalization ability, we introduce two complementary self-supervised pre-training tasks and show, via an information-theoretic analysis, that jointly optimizing the two pre-training tasks results in a smaller Bayesian error rate. We also propose a temporal-union graph structure and a target-context node sampling strategy for efficient and scalable training. Extensive experiments on real-world datasets illustrate that DGT presents superior performance compared with several state-of-the-art baselines.
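A hedged sketch of one common way to inject edge timestamps into a transformer-style model, a sinusoidal time encoding; DGT's specific spatial-temporal encoding and pre-training tasks are not reproduced.

```python
import torch

def time_encoding(timestamps, dim=8):
    """Map scalar edge timestamps to `dim`-dimensional sinusoidal features,
    analogous to positional encodings in transformers."""
    t = timestamps.float().unsqueeze(-1)                        # (n_edges, 1)
    freqs = 1.0 / (10.0 ** torch.arange(0, dim, 2).float())     # geometric frequency ladder
    angles = t * freqs                                          # (n_edges, dim/2)
    return torch.cat([torch.sin(angles), torch.cos(angles)], dim=-1)

edge_times = torch.tensor([0, 3, 7, 12])
enc = time_encoding(edge_times)
print(enc.shape)   # (4, 8): concatenated to edge/node features before attention
```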
Road network and trajectory representation learning are essential for traffic systems, since the learned representations can be directly used in various downstream tasks (e.g., traffic speed inference and travel time estimation). However, most existing methods only contrast within the same scale, i.e., they treat road networks and trajectories separately, which ignores valuable inter-relations. In this paper, we aim to propose a unified framework that jointly learns road network and trajectory representations end-to-end. We design domain-specific augmentations for road-road contrast and trajectory-trajectory contrast separately, i.e., road segments with their contextual neighbours, and trajectories with detour replacement and dropping, respectively. On top of that, we further introduce a road-trajectory cross-scale contrast to bridge the two scales by maximizing the total mutual information. Unlike existing cross-scale contrastive learning methods that only contrast a graph and its belonging nodes, the contrast between road segments and trajectories is elaborately tailored via novel positive sampling and adaptive weighting strategies. We conduct prudent experiments based on two real-world datasets with four downstream tasks, demonstrating improved performance and effectiveness. The code is available at https://github.com/mzy94/jclrnt.
Learning low-dimensional topological representations of networks in dynamic environments has attracted much attention, since many real-world networks evolve over time. The main and common objective of dynamic network embedding (DNE) is to efficiently update node embeddings while preserving the network topology at each time step. The idea behind most existing DNE methods is to capture the topological changes at or around the most affected nodes (rather than all nodes) and accordingly update the node embeddings. Unfortunately, this kind of approximation, although it can improve efficiency, cannot effectively preserve the global topology of a dynamic network at each time step, because it does not consider the inactive sub-networks that accumulate topological changes propagated via high-order proximity. To tackle this challenge, we propose a novel node selection strategy that diversely selects representative nodes over a network, which is coordinated with a new incremental learning paradigm for Skip-Gram based embedding methods. Extensive experiments show that GloDyNE, with only a small fraction of nodes selected, can achieve superior or comparable performance w.r.t. state-of-the-art DNE methods in three typical downstream tasks. In particular, GloDyNE significantly outperforms other methods in the graph reconstruction task, which demonstrates its ability to preserve global topology. The source code is available at https://github.com/houchengbin/glodyne.
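A hedged sketch of the incremental-learning idea (assuming gensim is installed): at a new time step, walks are generated only from a small set of selected nodes and used to update an existing skip-gram model rather than retraining from scratch. The selection rule below (nodes whose neighbourhood changed) is purely illustrative and not GloDyNE's diversity-aware strategy.

```python
import random
from gensim.models import Word2Vec   # assumes gensim >= 4 is installed

def changed_nodes(prev_adj, curr_adj):
    """Toy selection rule: nodes whose neighbourhood changed between snapshots.
    (GloDyNE instead selects diverse representative nodes across sub-networks.)"""
    return [n for n in curr_adj if set(curr_adj[n]) != set(prev_adj.get(n, []))]

def walks_from(nodes, adj, length=6):
    """Uniform random walks starting from the given nodes, as skip-gram 'sentences'."""
    out = []
    for n in nodes:
        walk, cur = [str(n)], n
        for _ in range(length - 1):
            if not adj[cur]:
                break
            cur = random.choice(adj[cur])
            walk.append(str(cur))
        out.append(walk)
    return out

g0 = {0: [1], 1: [0, 2], 2: [1]}                            # snapshot at t=0
g1 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}           # t=1: node 3 and two edges appear
model = Word2Vec(walks_from(list(g0), g0), vector_size=8, min_count=1, sg=1)
affected = changed_nodes(g0, g1)                            # only walk from these nodes
new_walks = walks_from(affected, g1)
model.build_vocab(new_walks, update=True)                   # incremental skip-gram update
model.train(new_walks, total_examples=len(new_walks), epochs=model.epochs)
print(affected, model.wv[str(3)][:3])
```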