Graph neural networks (GNNs) have been widely used in many domains where data are represented as graphs, including social networks, recommender systems, biology, chemistry, and more. Recently, the expressive power of GNNs has attracted much interest. It has been shown that, despite the promising empirical results GNNs have achieved in many applications, there are limitations in GNNs that hinder their performance on certain tasks. For example, since GNNs update node features mainly based on local information, they have limited expressive power in capturing long-range dependencies among nodes in a graph. To address some of the limitations of GNNs, several recent works have begun to explore augmenting GNNs with memory to improve their expressive power on relevant tasks. In this paper, we provide a comprehensive review of the existing literature on memory-augmented GNNs. We review these works through the lens of psychology and neuroscience, which have established multiple memory systems and mechanisms in biological brains. We propose a taxonomy of memory-augmented GNN works and a set of criteria for comparing memory mechanisms. We also provide an important discussion of the limitations of these works. Finally, we discuss challenges and future directions in this area.
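To make the locality argument concrete, here is a minimal NumPy sketch (not taken from any of the surveyed papers) of one mean-aggregation message-passing layer; the function name and toy graph are illustrative assumptions. Each layer propagates information only one hop, which is why stacking a few layers cannot by itself capture long-range dependencies.

```python
import numpy as np

def message_passing_layer(adj, features, weight):
    """One round of mean-aggregation message passing.

    adj:      (n, n) binary adjacency matrix
    features: (n, d_in) node feature matrix
    weight:   (d_in, d_out) learnable projection
    """
    # Add self-loops so each node keeps its own signal.
    adj_hat = adj + np.eye(adj.shape[0])
    # Row-normalize: each node averages over itself and its neighbors.
    deg_inv = 1.0 / adj_hat.sum(axis=1, keepdims=True)
    aggregated = deg_inv * (adj_hat @ features)
    # Nonlinear update; information travels at most one hop per layer,
    # which is the locality limitation discussed above.
    return np.maximum(aggregated @ weight, 0.0)

# Toy 4-node path graph: node 0 needs three layers to "see" node 3.
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
x = np.random.randn(4, 8)
w = np.random.randn(8, 8)
print(message_passing_layer(adj, x, w).shape)  # (4, 8)
```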
Deep learning has revolutionized many machine learning tasks in recent years, ranging from image classification and video processing to speech recognition and natural language understanding. The data in these tasks are typically represented in the Euclidean space. However, there is an increasing number of applications where data are generated from non-Euclidean domains and are represented as graphs with complex relationships and interdependency between objects. The complexity of graph data has imposed significant challenges on existing machine learning algorithms. Recently, many studies on extending deep learning approaches for graph data have emerged. In this survey, we provide a comprehensive overview of graph neural networks (GNNs) in data mining and machine learning fields. We propose a new taxonomy to divide the state-of-the-art graph neural networks into four categories, namely recurrent graph neural networks, convolutional graph neural networks, graph autoencoders, and spatial-temporal graph neural networks. We further discuss the applications of graph neural networks across various domains and summarize the open source codes, benchmark data sets, and model evaluation of graph neural networks. Finally, we propose potential research directions in this rapidly growing field.
Graphs are ubiquitous in nature and can therefore serve as models for many practical but also theoretical problems. For this purpose, they can be defined as many different types which suitably reflect the individual contexts of the represented problem. To address cutting-edge problems based on graph data, the research field of Graph Neural Networks (GNNs) has emerged. Despite the field's youth and the speed at which new models are developed, many recent surveys have been published to keep track of them. Nevertheless, it has not yet been gathered which GNN can process what kind of graph types. In this survey, we give a detailed overview of already existing GNNs and, unlike previous surveys, categorize them according to their ability to handle different graph types and properties. We consider GNNs operating on static and dynamic graphs of different structural constitutions, with or without node or edge attributes. Moreover, we distinguish between GNN models for discrete-time or continuous-time dynamic graphs and group the models according to their architecture. We find that there are still graph types that are not or only rarely covered by existing GNN models. We point out where models are missing and give potential reasons for their absence.
Graph learning is a popular approach for performing machine learning on graph-structured data. It has revolutionized the machine learning ability to model graph data to address downstream tasks. Its application is wide due to the availability of graph data ranging from all types of networks to information systems. Most graph learning methods assume that the graph is static and its complete structure is known during training. This limits their applicability since they cannot be applied to problems where the underlying graph grows over time and/or new tasks emerge incrementally. Such applications require a lifelong learning approach that can learn the graph continuously and accommodate new information whilst retaining previously learned knowledge. Lifelong learning methods that enable continuous learning in regular domains like images and text cannot be directly applied to continuously evolving graph data, due to its irregular structure. As a result, graph lifelong learning is gaining attention from the research community. This survey paper provides a comprehensive overview of recent advancements in graph lifelong learning, including the categorization of existing methods, and the discussions of potential applications and open research problems.
Graph neural networks (GNNs), a generalization of deep neural networks to graph data, have been widely used in various domains, from drug discovery to recommender systems. However, GNNs in these applications are limited when only few samples are available. Meta-learning has been an important framework for addressing the lack of samples in machine learning, and in recent years researchers have started applying meta-learning to GNNs. In this work, we provide a comprehensive survey of the different meta-learning approaches involving GNNs on various graph problems, showing the power of using these two approaches together. We categorize the literature based on the proposed architectures, shared representations, and applications. Finally, we discuss several exciting future research directions and open problems.
Graph mining tasks arise from many different application domains, ranging from social networks and transportation to e-commerce, and have been receiving great attention from the theoretical and algorithmic design communities in recent years; there has also been some pioneering work employing the research-rich Reinforcement Learning (RL) techniques to address graph data mining tasks. However, these graph mining methods and RL models are dispersed across different research areas, which makes it hard to compare them. In this survey, we provide a comprehensive overview of RL and graph mining methods and generalize these methods to Graph Reinforcement Learning (GRL) as a unified formulation. We further discuss the applications of GRL methods across various domains and summarize the method descriptions, open-source codes, and benchmark datasets of GRL methods. Furthermore, we propose important directions and challenges to be solved in the future. As far as we know, this is the most recent comprehensive survey of GRL; it provides a global view and a learning resource for scholars. In addition, we create an online open-source repository both for interested scholars who want to enter this rapidly developing domain and for experts who would like to compare GRL methods.
Graph neural networks (GNNs) have recently grown in popularity in the field of artificial intelligence (AI) due to their unique ability to ingest relatively unstructured data types as input. Although some elements of the GNN architecture are conceptually similar in operation to traditional neural networks (and neural network variants), other elements represent a departure from traditional deep learning techniques. This tutorial exposes the power and novelty of GNNs to AI practitioners by collating and presenting details regarding the motivations, concepts, mathematics, and applications of the most common and performant variants of GNNs. Importantly, we present this tutorial concisely alongside practical examples, thereby providing a practical and accessible tutorial on the topic of GNNs.
Deep reinforcement learning (DRL) has empowered a variety of artificial intelligence fields, including pattern recognition, robotics, recommendation systems, and gaming. Similarly, graph neural networks (GNNs) have demonstrated superior performance in supervised learning on graph-structured data. In recent times, the fusion of GNNs with DRL for graph-structured environments has attracted much attention. This paper provides a comprehensive review of these hybrid works. These works can be classified into two categories: (1) algorithmic enhancement, where DRL and GNN complement each other for better utility; and (2) application-specific enhancement, where DRL and GNN support each other. This fusion effectively addresses various complex problems in engineering and the life sciences. Based on the review, we further analyze the applicability and benefits of fusing these two domains, especially in terms of increasing generalizability and reducing computational complexity. Finally, the key challenges in integrating DRL and GNN, as well as potential future research directions, are highlighted, which will be of interest to the broader machine learning community.
Attention is a state of arousal capable of dealing with limited processing bottlenecks in humans by focusing selectively on one piece of information while ignoring other perceptible information. For decades, the concepts and functions of attention have been studied in philosophy, psychology, neuroscience, and computing. Currently, this property has been widely explored in deep neural networks. Many different neural attention models are now available, and they have been a very active research area over the past six years. From the theoretical standpoint of attention, this survey provides a critical analysis of the main neural attention models. Here we propose a taxonomy that corroborates the theoretical aspects that predate deep learning. Our classification system provides an organizational structure that raises new questions and structures the understanding of existing attention mechanisms. In particular, 17 criteria derived from classic studies in psychology and neuroscience are used for a qualitative comparison and critical analysis of the 51 main models found in a set of more than 650 papers analyzed. Additionally, we highlight several theoretical issues that have not yet been explored, including discussions of biological plausibility, highlight current research trends, and provide insights for the future.
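As a point of reference for the models this survey analyzes, the following sketch shows the generic scaled dot-product formulation of attention (selective weighting of values by query-key similarity). It is a standard illustration in NumPy, not one of the 51 surveyed models; the function and variable names are ours.

```python
import numpy as np

def scaled_dot_product_attention(queries, keys, values):
    """Soft selection: each query attends to all keys and mixes the values.

    queries: (n_q, d), keys: (n_k, d), values: (n_k, d_v)
    """
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)          # similarity of each query to each key
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax: selective focus
    return weights @ values                         # weighted mixture of values

q = np.random.randn(2, 16)
k = np.random.randn(5, 16)
v = np.random.randn(5, 32)
print(scaled_dot_product_attention(q, k, v).shape)  # (2, 32)
```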
Graph Neural Networks (GNNs) are a family of graph networks inspired by mechanisms existing between nodes on a graph. In recent years there has been an increased interest in GNNs and their derivatives, i.e., Graph Attention Networks (GAT), Graph Convolutional Networks (GCN), and Graph Recurrent Networks (GRN). An increase in their usability in computer vision has also been observed. The number of GNN applications in this field continues to expand; it includes video analysis and understanding, action and behavior recognition, computational photography, image and video synthesis from zero or few shots, and many more. This contribution aims to collect papers published on GNN-based approaches to computer vision. They are described and summarized from three perspectives. Firstly, we investigate the architectures of Graph Neural Networks and their derivatives used in this area to provide accurate and explainable recommendations for subsequent investigations. Secondly, we present the datasets used in these works. Finally, using graph analysis, we examine relations between GNN-based studies in computer vision and potential sources of inspiration identified outside of this field.
Deep learning has been shown to be successful in a number of domains, ranging from acoustics, images, to natural language processing. However, applying deep learning to the ubiquitous graph data is non-trivial because of the unique characteristics of graphs. Recently, substantial research efforts have been devoted to applying deep learning methods to graphs, resulting in beneficial advances in graph analysis techniques. In this survey, we comprehensively review the different types of deep learning methods on graphs. We divide the existing methods into five categories based on their model architectures and training strategies: graph recurrent neural networks, graph convolutional networks, graph autoencoders, graph reinforcement learning, and graph adversarial methods. We then provide a comprehensive overview of these methods in a systematic manner mainly by following their development history. We also analyze the differences and compositions of different methods. Finally, we briefly outline the applications in which they have been used and discuss potential future research directions.
Recently, graph neural networks have become a hot topic in the machine learning community. This paper presents a Scopus-based bibliometric overview of GNN research since 2004, when GNN papers were first published. The study aims to evaluate GNN research trends both quantitatively and qualitatively. We provide the trends in research, the leading and influential authors and institutions, publication sources, most cited documents, and hot topics. Our investigation reveals that the most frequent subject categories in this field are computer science, engineering, telecommunications, linguistics, operations research and management science, information science and library science, business and economics, automation and control systems, robotics, and social sciences. Furthermore, the most active publication source for GNNs is Lecture Notes in Computer Science. The most prolific and influential institutions are located in the United States, China, and Canada. We also provide must-read papers and future directions. Finally, applications of graph convolutional networks and attention mechanisms are now among the hottest topics of GNN research.
Influenced by the great success of deep learning in computer vision and language understanding, research in recommendation has shifted to inventing new recommender models based on neural networks. In recent years, we have witnessed significant progress in developing neural recommender models, which generalize and surpass traditional recommender models owing to the strong representation power of neural networks. In this survey paper, we conduct a systematic review from the perspective of recommendation modeling with the accuracy goal, aiming to summarize this field to facilitate researchers and practitioners working on recommender systems. Specifically, based on the data usage during recommendation modeling, we divide the work into collaborative filtering and information-rich recommendation: 1) collaborative filtering, which leverages the key source of user-item interaction data; 2) content-enriched recommendation, which additionally utilizes the side information associated with users and items, such as user profiles and item knowledge graphs; and 3) temporal/sequential recommendation, which takes into account the contextual information associated with an interaction, such as time, location, and past interactions. After reviewing representative work for each type, we finally discuss some promising directions in this field.
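For readers new to the area, the sketch below illustrates the classical collaborative-filtering baseline that neural recommenders generalize: plain matrix factorization of user-item interactions trained with SGD. It is an illustration under our own (assumed) hyperparameters and function names, not a model from the survey.

```python
import numpy as np

def factorize(interactions, n_users, n_items, dim=16, lr=0.01, reg=0.05, epochs=20):
    """Plain matrix factorization on (user, item, rating) triples via SGD."""
    rng = np.random.default_rng(0)
    P = 0.1 * rng.standard_normal((n_users, dim))  # user factors
    Q = 0.1 * rng.standard_normal((n_items, dim))  # item factors
    for _ in range(epochs):
        for u, i, r in interactions:
            err = r - P[u] @ Q[i]                  # prediction is an inner product
            P[u] += lr * (err * Q[i] - reg * P[u])
            Q[i] += lr * (err * P[u] - reg * Q[i])
    return P, Q

data = [(0, 0, 5.0), (0, 1, 3.0), (1, 0, 4.0), (2, 1, 1.0)]
P, Q = factorize(data, n_users=3, n_items=2)
print(P[2] @ Q[0])  # predicted score of user 2 for item 0
```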
Preserving individual features as well as complex relations, graph data are widely utilized and studied. By updating and aggregating node representations, graph neural network (GNN) models are able to capture structural information and are gaining popularity. In the financial context, graphs are constructed from real-world data, which leads to complex graph structures and therefore requires sophisticated methods. In this work, we provide a comprehensive review of recent GNN models in financial contexts. We first categorize the commonly used financial graphs and summarize the feature-processing steps for each node. We then summarize the GNN methodology for each graph type and the applications in each area, and propose some potential research directions.
Temporal graphs represent dynamic relationships between entities and arise in many real-life applications such as social networks, e-commerce, communication, road networks, biological systems, and more. They require research on generative modeling and representation learning beyond what has been done for static graphs. In this survey, we comprehensively review the neural, time-dependent graph representation learning and generative modeling approaches recently proposed for handling temporal graphs. Finally, we identify the weaknesses of existing approaches and discuss the research proposal of our recently published paper, TIGGER [24].
Many real-world relational systems, such as social networks and biological systems, contain dynamic interactions. When learning dynamic graph representations, it is essential to employ sequential temporal information as well as geometric structure. Mainstream works achieve topological embedding via message-passing networks (e.g., GCN, GAT). Temporal evolution, on the other hand, is usually expressed via memory units (e.g., LSTM or GRU) whose gating mechanisms conveniently filter information. However, such a design prevents large-scale input sequences due to the overly complicated encoding. This work learns from the philosophy of self-attention and proposes an efficient spectral-based neural unit that employs informative long-range temporal interactions. The developed spectral window unit (SWINIT) model predicts scalable dynamic graphs with assured efficiency. The architecture is assembled from a few simple and effective computational blocks consisting of randomized SVD, MLP, and graph framelet convolution. The SVD-plus-MLP module encodes the long-term feature evolution of the dynamic graph events. A fast framelet graph transform in the framelet convolution embeds the structural dynamics. Both strategies enhance the model's ability for scalable analysis. In particular, the iterative SVD approximation reduces the computational complexity of attention for a dynamic graph with N edges and d edge features, and the multiscale transform of the framelet convolution allows sufficient scalability in network training. Our SWINIT achieves state-of-the-art performance on a variety of online continuous-time dynamic graph learning tasks, while requiring up to seven times fewer learnable parameters than baseline methods.
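The efficiency claim rests on replacing full attention with a randomized low-rank factorization of the event stream. The NumPy sketch below shows the generic Halko-style randomized SVD that such a pipeline could use to compress a long matrix of edge-event features before an MLP; the pairing with an MLP and all names here are illustrative assumptions, not the authors' SWINIT implementation.

```python
import numpy as np

def randomized_svd(X, rank, n_oversample=5, seed=0):
    """Halko-style randomized SVD: project onto a random subspace, then take an exact SVD there."""
    rng = np.random.default_rng(seed)
    sketch = X @ rng.standard_normal((X.shape[1], rank + n_oversample))
    Q, _ = np.linalg.qr(sketch)                      # orthonormal basis of the sketch
    U_small, s, Vt = np.linalg.svd(Q.T @ X, full_matrices=False)
    return (Q @ U_small)[:, :rank], s[:rank], Vt[:rank]

# Event-by-feature matrix for a stream of N dynamic-graph events with d features (toy data).
N, d, rank = 10_000, 64, 8
events = np.random.randn(N, d)
U, s, Vt = randomized_svd(events, rank)
compressed = U * s                                   # (N, rank) low-rank summary fed to an MLP
print(compressed.shape)
```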
Pre-publication draft of a book to be published by Morgan & Claypool Publishers. Unedited version released with permission. All relevant copyrights held by the author and publisher extend to this pre-publication draft.
Machine learning on graphs is an important and ubiquitous task with applications ranging from drug design to friendship recommendation in social networks. The primary challenge in this domain is finding a way to represent, or encode, graph structure so that it can be easily exploited by machine learning models. Traditionally, machine learning approaches relied on user-defined heuristics to extract features encoding structural information about a graph (e.g., degree statistics or kernel functions). However, recent years have seen a surge in approaches that automatically learn to encode graph structure into low-dimensional embeddings, using techniques based on deep learning and nonlinear dimensionality reduction. Here we provide a conceptual review of key advancements in this area of representation learning on graphs, including matrix factorization-based methods, random-walk based algorithms, and graph neural networks. We review methods to embed individual nodes as well as approaches to embed entire (sub)graphs. In doing so, we develop a unified framework to describe these recent approaches, and we highlight a number of important applications and directions for future work.
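As a concrete example of the random-walk and matrix-factorization families mentioned above, the following sketch builds a DeepWalk-flavored co-occurrence matrix from uniform random walks and factorizes it with an SVD to obtain node embeddings. It is a simplified illustration under our own choices of window size and dimensionality, not a faithful reproduction of any single method reviewed.

```python
import numpy as np

def random_walks(adj, walk_len=10, walks_per_node=5, seed=0):
    """Generate uniform random walks (a DeepWalk-style corpus)."""
    rng = np.random.default_rng(seed)
    walks = []
    for start in range(adj.shape[0]):
        for _ in range(walks_per_node):
            walk, node = [start], start
            for _ in range(walk_len - 1):
                nbrs = np.flatnonzero(adj[node])
                if nbrs.size == 0:
                    break
                node = rng.choice(nbrs)
                walk.append(node)
            walks.append(walk)
    return walks

def embed_by_cooccurrence(adj, dim=8, window=2):
    """Matrix-factorization flavor: factor a walk co-occurrence matrix with an SVD."""
    n = adj.shape[0]
    co = np.zeros((n, n))
    for walk in random_walks(adj):
        for i, u in enumerate(walk):
            for v in walk[max(0, i - window): i + window + 1]:
                if u != v:
                    co[u, v] += 1
    U, s, _ = np.linalg.svd(np.log1p(co), full_matrices=False)
    return U[:, :dim] * np.sqrt(s[:dim])

adj = (np.random.rand(20, 20) < 0.2).astype(float)
adj = np.maximum(adj, adj.T)                 # undirected toy graph
print(embed_by_cooccurrence(adj).shape)      # (20, 8)
```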
Information extraction methods have proven effective at extracting triples from structured or unstructured data. The organization of such triples in the form (head entity, relation, tail entity) is called a knowledge graph (KG). Most current knowledge graphs are incomplete. In order to use KGs in downstream tasks, it is desirable to predict the missing links in them. Recently, different approaches have been proposed to represent KGs by embedding entities and relations into low-dimensional vector spaces, aiming to predict new triples based on previously seen ones. According to whether triples are processed independently or with their dependencies, we divide the task of knowledge graph completion into conventional and graph neural network representation learning, and discuss them in more detail. In conventional approaches, each triple is processed independently, whereas in GNN-based approaches, triples also take their local neighborhoods into account.
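To illustrate the "conventional" (triple-independent) family, the sketch below uses the classic TransE translational score, which treats a triple (h, r, t) as plausible when h + r is close to t. The entity names and the untrained random embeddings are purely illustrative assumptions.

```python
import numpy as np

def transe_score(h, r, t):
    """TransE plausibility: a triple (h, r, t) is plausible when h + r is close to t."""
    return -np.linalg.norm(h + r - t)

rng = np.random.default_rng(0)
dim = 32
entities = {name: rng.standard_normal(dim) for name in ["paris", "france", "berlin", "germany"]}
relations = {"capital_of": rng.standard_normal(dim)}

# Rank candidate tail entities for the query (paris, capital_of, ?).
query_h, query_r = entities["paris"], relations["capital_of"]
ranked = sorted(entities, key=lambda e: -transe_score(query_h, query_r, entities[e]))
# With trained embeddings, "france" would be ranked first; here the vectors are random.
print(ranked)
```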
Population-level societal events, such as civil unrest and crime, often have a significant impact on our daily lives. Forecasting such events is of great importance for decision-making and resource allocation. Event prediction has traditionally been challenging due to the lack of knowledge about the true causes and underlying mechanisms of event occurrence. In recent years, research on event forecasting has made significant progress for two main reasons: (1) the development of machine learning and deep learning algorithms, and (2) the accessibility of public data such as social media, news sources, blogs, economic indicators, and other metadata sources. The explosive growth of data and of software/hardware technologies has enabled applications of deep learning techniques in social event studies. This paper is dedicated to providing a systematic and comprehensive overview of deep learning technologies for social event prediction. We focus on two domains of social events: civil unrest and crime. We first introduce how event forecasting problems are formulated as machine learning prediction tasks. Then we summarize the data resources, traditional methods, and recent developments of deep learning models for these problems. Finally, we discuss the challenges in social event prediction and put forward some promising future research directions.