Aspect ratio and spatial layout are two of the principal factors influencing the aesthetic value of a photograph. However, incorporating these into the traditional convolution-based frameworks for the task of image aesthetics assessment is problematic. The aspect ratio of the photographs gets distorted while they are resized/cropped to a fixed dimension to facilitate training batch sampling. On the other hand, the convolutional filters process information locally and are limited in their ability to model the global spatial layout of a photograph. In this work, we present a two-stage framework based on graph neural networks and address both these problems jointly. First, we propose a feature-graph representation in which the input image is modelled as a graph, maintaining its original aspect ratio and resolution. Second, we propose a graph neural network architecture that takes this feature-graph and captures the semantic relationship between different regions of the input image using visual attention. Our experiments show that the proposed framework advances the state of the art in aesthetic score regression on the Aesthetic Visual Analysis (AVA) benchmark. Our code is publicly available for comparisons and further exploration.
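The following toy sketch illustrates the kind of feature-graph construction described above. It is an assumption-laden version, not the authors' code: the function name and the 4-neighbourhood connectivity are ours. Each spatial location of an arbitrarily sized feature map becomes a node, so the original aspect ratio and resolution survive batching.

```python
# A minimal sketch (not the paper's implementation) of turning an
# arbitrary-size feature map into a graph whose nodes keep the image's
# native geometry. build_feature_graph is a hypothetical name.
import torch

def build_feature_graph(feat):  # feat: (C, H, W); H/W preserve the aspect ratio
    C, H, W = feat.shape
    x = feat.permute(1, 2, 0).reshape(H * W, C)          # node features
    rows, cols = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    pos = torch.stack([rows.flatten(), cols.flatten()], dim=1).float()
    # 4-neighbourhood edges on the spatial grid
    idx = torch.arange(H * W).reshape(H, W)
    right = torch.stack([idx[:, :-1].flatten(), idx[:, 1:].flatten()])
    down = torch.stack([idx[:-1, :].flatten(), idx[1:, :].flatten()])
    edge_index = torch.cat([right, down], dim=1)
    edge_index = torch.cat([edge_index, edge_index.flip(0)], dim=1)  # undirected
    return x, pos, edge_index

x, pos, edges = build_feature_graph(torch.randn(256, 37, 50))  # non-square input kept as-is
```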
Deep learning has revolutionized many machine learning tasks in recent years, ranging from image classification and video processing to speech recognition and natural language understanding. The data in these tasks are typically represented in the Euclidean space. However, there is an increasing number of applications where data are generated from non-Euclidean domains and are represented as graphs with complex relationships and interdependency between objects. The complexity of graph data has imposed significant challenges on existing machine learning algorithms. Recently, many studies on extending deep learning approaches for graph data have emerged. In this survey, we provide a comprehensive overview of graph neural networks (GNNs) in data mining and machine learning fields. We propose a new taxonomy to divide the state-of-the-art graph neural networks into four categories, namely recurrent graph neural networks, convolutional graph neural networks, graph autoencoders, and spatial-temporal graph neural networks. We further discuss the applications of graph neural networks across various domains and summarize the open source codes, benchmark data sets, and model evaluation of graph neural networks. Finally, we propose potential research directions in this rapidly growing field.
Graph classification is an important area in both modern research and industry. Multiple applications, especially in chemistry and novel drug discovery, encourage rapid development of machine learning models in this area. To keep up with the pace of new research, proper experimental design, fair evaluation, and independent benchmarks are essential. Design of strong baselines is an indispensable element of such works. In this thesis, we explore multiple approaches to graph classification. We focus on Graph Neural Networks (GNNs), which emerged as a de facto standard deep learning technique for graph representation learning. Classical approaches, such as graph descriptors and molecular fingerprints, are also addressed. We design a fair experimental evaluation protocol and choose a proper collection of datasets. This allows us to perform numerous experiments and rigorously analyze modern approaches. We arrive at many conclusions that shed new light on the performance and quality of novel algorithms. We investigate the application of the Jumping Knowledge GNN architecture to graph classification, which proves to be an efficient tool for improving base graph neural network architectures. Multiple improvements to baseline models are also proposed and experimentally verified, which constitutes an important contribution to the field of fair model comparison.
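As a pointer to the Jumping Knowledge idea mentioned above, here is a minimal sketch (simplified layers, hypothetical class name) in which the final node representation concatenates the outputs of all GNN layers instead of keeping only the deepest one.

```python
# A hedged sketch of Jumping-Knowledge-style aggregation: every layer's
# output contributes to the final representation. The message-passing
# layer itself is a simplified placeholder.
import torch
import torch.nn as nn

class JKConcat(nn.Module):
    def __init__(self, dim, num_layers):
        super().__init__()
        self.layers = nn.ModuleList([nn.Linear(dim, dim) for _ in range(num_layers)])

    def forward(self, x, adj):  # x: (N, dim), adj: (N, N) normalized adjacency
        outs = []
        for layer in self.layers:
            x = torch.relu(layer(adj @ x))  # one simplified message-passing step
            outs.append(x)
        return torch.cat(outs, dim=-1)      # "jump" to all intermediate depths

h = JKConcat(16, 3)(torch.randn(5, 16), torch.eye(5))
```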
Humans can naturally and effectively find salient regions in complex scenes. Motivated by this observation, attention mechanisms were introduced into computer vision with the aim of imitating this aspect of the human visual system. Such an attention mechanism can be regarded as a dynamic weight adjustment process based on features of the input image. Attention mechanisms have achieved great success in many visual tasks, including image classification, object detection, semantic segmentation, video understanding, image generation, 3D vision, multi-modal tasks, and self-supervised learning. In this survey, we provide a comprehensive review of various attention mechanisms in computer vision and categorize them according to channel attention, spatial attention, temporal attention, and branch attention. A related repository, https://github.com/MenghaoGuo/Awesome-Vision-Attentions, is dedicated to collecting related work. We also suggest future research directions for attention mechanisms.
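As an illustration of two of the categories named above, the following sketch (our own assumption-based example, not taken from the survey or its repository) shows a squeeze-and-excitation-style channel attention and a simple spatial attention, each computing dynamic weights from the input itself.

```python
# Illustrative channel and spatial attention modules; class names and
# hyperparameters are our assumptions.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, c, r=4):
        super().__init__()
        self.fc = nn.Sequential(nn.Linear(c, c // r), nn.ReLU(), nn.Linear(c // r, c))

    def forward(self, x):                        # x: (B, C, H, W)
        w = self.fc(x.mean(dim=(2, 3)))          # squeeze: global average pool
        return x * torch.sigmoid(w)[:, :, None, None]  # excite: per-channel weights

class SpatialAttention(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):
        s = torch.cat([x.mean(1, keepdim=True), x.amax(1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.conv(s))   # per-location weights

y = SpatialAttention()(ChannelAttention(32)(torch.randn(2, 32, 8, 8)))
```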
The aesthetic quality of an image is defined as the measure or appreciation of the beauty of an image. Aesthetics is inherently a subjective property, but there are factors that influence it, such as the semantic content of the image, the attributes describing its artistic aspects, the photographic settings used for the shot, and so on. In this paper, we propose a method for automatically predicting the aesthetics of an image based on the analysis of its semantic content, artistic style, and composition. The proposed network includes: a pre-trained network for semantic feature extraction (the Backbone); a multi-layer perceptron (MLP) network that relies on the Backbone features for predicting image attributes (the AttributeNet); and an adaptive HyperNetwork that exploits the attributes previously encoded into the embedding generated by the AttributeNet to predict the parameters of a target network dedicated to aesthetics estimation (the AestheticNet). Given an image, the proposed multi-network is able to predict its style and composition attributes as well as its aesthetic score distribution. Results on three benchmark datasets demonstrate the effectiveness of the proposed method, while an ablation study gives a better understanding of the proposed network.
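A toy sketch of the hypernetwork mechanism described above may help: an attribute embedding is mapped to the parameters of a small target scorer. All dimensions and names (HyperHead) are illustrative assumptions, not the paper's architecture.

```python
# A minimal, hypothetical hypernetwork: attribute embeddings generate the
# weights of a per-image linear head that predicts an aesthetic score.
import torch
import torch.nn as nn

class HyperHead(nn.Module):
    def __init__(self, attr_dim, feat_dim):
        super().__init__()
        self.feat_dim = feat_dim
        # generates weight (feat_dim) and bias (1) of the target linear scorer
        self.hyper = nn.Linear(attr_dim, feat_dim + 1)

    def forward(self, feat, attr_emb):           # feat: (B, F), attr_emb: (B, A)
        params = self.hyper(attr_emb)            # (B, F + 1)
        w, b = params[:, : self.feat_dim], params[:, self.feat_dim]
        return (feat * w).sum(dim=1) + b         # image-conditioned linear head

score = HyperHead(attr_dim=8, feat_dim=32)(torch.randn(4, 32), torch.randn(4, 8))
```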
Over the last few years, graph drawing techniques have been developed with the goal of generating aesthetically pleasing node-link layouts. Recently, the use of differentiable loss functions has paved the way to the massive use of gradient descent and related optimization algorithms. In this paper, we propose a novel framework for the development of Graph Neural Drawers (GNDs), machines that rely on neural computation to construct efficient and complex drawings of graphs. GNDs are graph neural networks (GNNs) whose learning process can be driven by any provided loss function, such as those commonly used in graph drawing. Moreover, we show that this mechanism can be guided by loss functions computed by feed-forward neural networks, on the basis of supervision hints that express beauty properties, such as the minimization of crossing edges. In this context, we show that GNNs can be nicely enriched by positional features to deal with unlabelled vertices. We provide a proof of concept by constructing a loss function for edge crossings, and provide quantitative and qualitative comparisons among different GNN models working under the proposed framework.
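To make the gradient-driven mechanism concrete, here is a proof-of-concept sketch that optimizes node coordinates under a differentiable layout loss. The paper drives layouts with neural computation and a learned crossing loss; this toy uses a plain stress loss instead, purely to show the optimization loop.

```python
# Gradient-descent graph drawing with a simple differentiable stress loss
# (a stand-in for the learned crossing loss discussed above).
import torch

edges = torch.tensor([[0, 1], [1, 2], [2, 3], [3, 0], [0, 2]])
target = torch.ones(len(edges))                 # desired length of each edge
pos = torch.randn(4, 2, requires_grad=True)     # 2-D coordinates to optimize
opt = torch.optim.Adam([pos], lr=0.05)

for _ in range(200):
    d = (pos[edges[:, 0]] - pos[edges[:, 1]]).norm(dim=1)
    loss = ((d - target) ** 2).mean()           # stress: match target lengths
    opt.zero_grad(); loss.backward(); opt.step()

print(pos.detach())                             # layout after optimization
```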
Deep learning techniques have led to remarkable breakthroughs in the field of generic object detection and have spawned a lot of scene-understanding tasks in recent years. Scene graphs have been the focus of research because of their powerful semantic representation and applications to scene understanding. Scene Graph Generation (SGG) refers to the task of automatically mapping an image into a semantic structural scene graph, which requires correctly labeling the detected objects and their relationships. Although this is a challenging task, the community has proposed many SGG approaches and achieved good results. In this paper, we provide a comprehensive survey of recent achievements in this field brought about by deep learning techniques. We review 138 representative works that cover different input modalities, and systematically summarize existing image-based SGG methods from the perspective of feature extraction and fusion. We attempt to connect and systematize existing visual relationship detection methods in a comprehensive way, summarizing and interpreting the mechanisms and strategies of SGG. Finally, we conclude this survey with an in-depth discussion of the current existing problems and future research directions. This survey will help readers better understand the current research status and ideas.
Deep learning has been shown to be successful in a number of domains, ranging from acoustics and images to natural language processing. However, applying deep learning to the ubiquitous graph data is non-trivial because of the unique characteristics of graphs. Recently, substantial research efforts have been devoted to applying deep learning methods to graphs, resulting in beneficial advances in graph analysis techniques. In this survey, we comprehensively review the different types of deep learning methods on graphs. We divide the existing methods into five categories based on their model architectures and training strategies: graph recurrent neural networks, graph convolutional networks, graph autoencoders, graph reinforcement learning, and graph adversarial methods. We then provide a comprehensive overview of these methods in a systematic manner, mainly by following their development history. We also analyze the differences and compositions of different methods. Finally, we briefly outline the applications in which they have been used and discuss potential future research directions.
In the last few years, graph neural networks (GNNs) have become the standard toolkit for analyzing and learning from data on graphs. This emerging field has witnessed an extensive growth of promising techniques that have been applied with success to computer science, mathematics, biology, physics and chemistry. But for any successful field to become mainstream and reliable, benchmarks must be developed to quantify progress. This led us in March 2020 to release a benchmark framework that i) comprises a diverse collection of mathematical and real-world graphs, ii) enables fair model comparison with the same parameter budget to identify key architectures, iii) has an open-source, easy-to-use and reproducible code infrastructure, and iv) is flexible for researchers to experiment with new theoretical ideas. As of December 2022, the GitHub repository has reached 2,000 stars and 380 forks, which demonstrates the utility of the proposed open-source framework through its wide usage by the GNN community. In this paper, we present an updated version of our benchmark with a concise presentation of the aforementioned framework characteristics, an additional medium-sized molecular dataset AQSOL, similar to the popular ZINC but with a real-world measured chemical target, and discuss how this framework can be leveraged to explore new GNN designs and insights. As a proof of value of our benchmark, we study the case of graph positional encoding (PE) in GNNs, which was introduced with this benchmark and has since spurred interest in exploring more powerful PE for Transformers and GNNs in a robust experimental setting.
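As a sketch of the graph positional encoding (PE) mentioned above, the snippet below computes a common Laplacian-eigenvector PE, assumed here as a representative recipe rather than the benchmark's exact implementation: the k smallest non-trivial eigenvectors of the normalized graph Laplacian become node positional features.

```python
# Laplacian positional encoding: eigenvectors of the normalized Laplacian
# serve as node coordinates that encode graph structure.
import numpy as np

def laplacian_pe(adj, k=2):
    deg = adj.sum(1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(deg, 1e-12)))
    lap = np.eye(len(adj)) - d_inv_sqrt @ adj @ d_inv_sqrt   # normalized Laplacian
    vals, vecs = np.linalg.eigh(lap)                         # ascending eigenvalues
    return vecs[:, 1 : k + 1]                                # skip the trivial one

adj = np.array([[0, 1, 1, 0], [1, 0, 1, 0], [1, 1, 0, 1], [0, 0, 1, 0]], float)
print(laplacian_pe(adj))   # (N, k) positional features, sign-ambiguous by nature
```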
Automated road-graph extraction from aerial and satellite images is a long-standing challenge. Existing algorithms are either based on pixel-level segmentation followed by vectorization, or on iterative graph construction using next-move prediction. Both strategies suffer from severe drawbacks, in particular high computational cost and incomplete outputs. In contrast, we propose a method that directly infers the final road graph in a single pass. The key idea consists in combining a fully convolutional network in charge of locating points of interest, such as intersections, dead ends and turns, with a graph neural network that predicts the links between these points. This strategy is more efficient than iterative methods, and it allows us to simplify the training process by eliminating the need for generating starting positions while keeping training end-to-end. We evaluate our method against existing works on the popular RoadTracer dataset and achieve competitive results. We also benchmark its speed and show that it outperforms existing approaches. This opens up the possibility of in-flight processing on embedded devices.
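A hypothetical sketch of the second stage described above: assuming keypoints (intersections, dead ends, turns) have already been localized by the fully convolutional network, a small relational scorer rates candidate links between point pairs. The module and its dimensions are our assumptions, not the paper's.

```python
# Toy link scorer: for every pair of detected keypoints, predict the
# probability that a road segment connects them.
import torch
import torch.nn as nn

class LinkScorer(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(2 * dim + 2, dim), nn.ReLU(), nn.Linear(dim, 1))

    def forward(self, feats, coords):           # feats: (N, dim), coords: (N, 2)
        i, j = torch.triu_indices(len(feats), len(feats), offset=1)
        pair = torch.cat([feats[i], feats[j], coords[i] - coords[j]], dim=1)
        return torch.sigmoid(self.mlp(pair)).squeeze(-1)  # link probabilities

p = LinkScorer(16)(torch.randn(5, 16), torch.rand(5, 2))  # scores for all pairs
```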
This paper presents a novel deep neural network framework for RGB-D salient object detection by controlling the message passing between the RGB image and the depth map at the feature level, and exploring long-range semantic context and geometric information over the RGB and depth features to infer salient objects. To achieve this, we formulate a dynamic message propagation (DMP) module with graph neural networks and deformable convolutions to dynamically learn the context information and to automatically predict the filter weights and affinity matrices for message propagation control. We further embed this module into a Siamese-based network that processes the RGB image and the depth map respectively, and design a multi-level feature fusion (MFF) module to explore the cross-level information between the refined RGB and depth features. Compared with 17 state-of-the-art methods for RGB-D salient object detection on six benchmark datasets, experimental results show that our method outperforms all the others, both quantitatively and visually.
Pre-publication draft of a book to be published by Morgan & Claypool Publishers. Unedited version released with permission. All relevant copyrights held by the author and publisher extend to this pre-publication draft.
In the past few years, image recognition based on deep convolutional neural networks (CNNs) has made significant progress, mainly owing to the strong power of such networks in mining discriminative object pose as well as part information from texture and shape. This is, however, often unsuitable for fine-grained visual classification (FGVC), which exhibits high intra-class and low inter-class variance due to occlusion, deformation, illumination, and other factors characterizing the objects/scenes. To this end, we propose a method that effectively captures subtle variations by aggregating context-aware features from the most relevant image regions and their importance in discriminating fine-grained categories, avoiding bounding-box and/or distinguishable part annotations. Our approach is inspired by recent self-attention and graph neural network (GNN) methods and forms an end-to-end learning process. Our model is evaluated on eight benchmark datasets consisting of fine-grained objects and human-object interactions. It outperforms state-of-the-art approaches by a significant margin in recognition accuracy.
With the advancement of sensing technology, multivariate time series classification (MTSC) has recently received considerable attention. Deep-learning-based MTSC techniques, which mostly rely on convolutional or recurrent neural networks, primarily address the temporal dependency of a single time series. As a result, they struggle to express pairwise dependencies among multivariate variables directly. Moreover, current spatial-temporal modelling methods based on graph neural networks (GNNs) (e.g., for graph classification) are inherently flat and cannot aggregate hub data in a hierarchical manner. To address these limitations, we propose a novel graph-pooling-based framework, MTPool, to obtain an expressive global representation of MTS. We first convert MTS slices into graphs by employing the interactions learned through a graph structure learning module, and obtain spatial-temporal graph node features with a temporal convolution module. To obtain a global graph-level representation, we design an encoder-decoder-based variational graph pooling module that creates adaptive centroids for cluster assignment. We then combine GNNs with our proposed variational graph pooling layers for joint graph representation learning and graph coarsening, after which the graph is progressively coarsened to one node. Finally, a differentiable classifier takes this coarsened representation to obtain the final predicted class. Experiments on 10 benchmark datasets show that MTPool outperforms state-of-the-art strategies on the MTSC task.
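To ground the first step described above, here is a hedged sketch that turns a multivariate time-series slice into a graph over variables. The paper learns the graph structure; this toy uses plain thresholded correlation as a stand-in.

```python
# Build a variable-interaction graph from a multivariate time-series slice
# (correlation threshold as a simple substitute for learned structure).
import numpy as np

def series_to_graph(x, thresh=0.5):     # x: (num_vars, timesteps)
    corr = np.corrcoef(x)               # pairwise correlation between variables
    adj = (np.abs(corr) > thresh).astype(float)
    np.fill_diagonal(adj, 0.0)
    return adj                          # adjacency for a GNN over variables

x = np.random.randn(6, 100)
print(series_to_graph(x).sum())         # number of directed edges kept
```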
Graphs can model complex interactions between entities and naturally arise in many important applications. These applications can often be cast into standard graph learning tasks, in which a crucial step is learning low-dimensional graph representations. Graph neural networks (GNNs) are currently the most popular model in graph embedding approaches. However, standard GNNs under the neighborhood aggregation paradigm suffer from limited discriminative power in distinguishing high-order graph structures as opposed to low-order structures. To capture high-order structures, researchers have resorted to motifs and developed motif-based GNNs. However, existing motif-based GNNs still suffer from less discriminative power on high-order structures. To overcome the above limitations, we propose MGNN, a novel framework to better capture high-order structures, hinging on our proposed motif redundancy minimization operator and injective motif combination. First, MGNN produces a set of node representations with respect to each motif. The next phase is our proposed redundancy minimization among motifs, which compares the motifs with each other and distills the features unique to each motif. Finally, MGNN performs the update of node representations by combining multiple representations from different motifs. In particular, to enhance the discriminative power, MGNN utilizes an injective function to combine the representations with respect to the different motifs. We further show, with a theoretical analysis, that our proposed architecture increases the expressive power of GNNs. We demonstrate that MGNN outperforms state-of-the-art methods on seven public benchmarks on both node classification and graph classification tasks.
Graph neural networks are well suited to capture the latent interactions between various entities in the spatio-temporal domain (e.g., videos). However, when an explicit structure is not available, it is not obvious which atomic elements should be represented as nodes. Current works generally use pre-trained object detectors or fixed, predefined regions to extract graph nodes. Improving upon this, our proposed model learns nodes that dynamically attach to salient regions that are relevant to a higher-level task, without using any object-level supervision. Constructing these localized, adaptive nodes gives our model an inductive bias towards object-centric representations, and we show that it discovers regions that correlate well with objects in the video. In extensive ablation studies and experiments on two challenging datasets, we show the superior performance of our proposed graph neural network model for video classification.
Due to object detection's close relationship with video analysis and image understanding, it has attracted much research attention in recent years. Traditional object detection methods are built on handcrafted features and shallow trainable architectures. Their performance easily stagnates by constructing complex ensembles which combine multiple low-level image features with high-level context from object detectors and scene classifiers. With the rapid development in deep learning, more powerful tools, which are able to learn semantic, high-level, deeper features, are introduced to address the problems existing in traditional architectures. These models behave differently in network architecture, training strategy and optimization function, etc. In this paper, we provide a review on deep learning based object detection frameworks. Our review begins with a brief introduction on the history of deep learning and its representative tool, namely Convolutional Neural Network (CNN). Then we focus on typical generic object detection architectures along with some modifications and useful tricks to improve detection performance further. As distinct specific detection tasks exhibit different characteristics, we also briefly survey several specific tasks, including salient object detection, face detection and pedestrian detection. Experimental analyses are also provided to compare various methods and draw some meaningful conclusions. Finally, several promising directions and tasks are provided to serve as guidelines for future work in both object detection and relevant neural network based learning systems.
A prominent paradigm for graph neural networks is based on the message passing framework. In this framework, information communication is realized only between neighboring nodes. The challenge for approaches that use this paradigm is to ensure efficient and accurate long-distance communication between nodes, as deep convolutional networks are prone to over-smoothing. In this paper, we present a novel method based on time derivative graph diffusion (TIDE), with a learnable time parameter. Our approach allows us to adapt the spatial extent of diffusion across different tasks and network channels, thus enabling medium- and long-distance communication efficiently. Furthermore, we show that our architecture directly enables local message passing and thus inherits the expressive power of local message passing approaches. We show that on widely used graph benchmarks we achieve comparable performance, and on a synthetic mesh dataset we outperform state-of-the-art methods like GCN or GRAND by a significant margin.
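Here is a compact sketch of diffusion with a learnable time parameter, in the spirit of the TIDE idea above: node features are propagated by the heat kernel exp(-tL), computed here by eigendecomposition for a small graph. This is an illustration under our own assumptions; the paper's implementation may differ.

```python
# Heat-kernel graph diffusion with a learnable time t: larger t spreads
# information further, and t stays differentiable for end-to-end training.
import torch

def diffuse(x, lap, t):                       # x: (N, F), lap: (N, N) Laplacian
    vals, vecs = torch.linalg.eigh(lap)
    kernel = vecs @ torch.diag(torch.exp(-t * vals)) @ vecs.T   # exp(-tL)
    return kernel @ x                         # longer-range smoothing as t grows

adj = torch.tensor([[0., 1, 0], [1, 0, 1], [0, 1, 0]])
lap = torch.diag(adj.sum(1)) - adj
t = torch.nn.Parameter(torch.tensor(1.0))    # learned jointly with the network
out = diffuse(torch.randn(3, 4), lap, t)     # differentiable w.r.t. t
```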
Graph neural networks (GNNs) have been widely used to learn vector representations of graph-structured data and have achieved better task performance than conventional methods. The foundation of GNNs is the message-passing procedure, which propagates the information in a node to its neighbors. Since this procedure proceeds one step per layer, the scope of information propagation between nodes is small in the lower layers and expands towards the higher layers. Therefore, a GNN model must be deep to capture the global structural information in a graph. On the other hand, it is known that deep GNN models suffer from performance degradation because, through many message-passing steps, they lose the local information of nodes, which is essential for good model performance. In this study, we propose Multi-Level Attention Pooling (MLAP) for graph-level classification tasks, which can adapt to both the local and global structural information in a graph. It has an attention pooling layer for each message-passing step and computes the final graph representation by unifying the layer-wise graph representations. The MLAP architecture allows models to exploit the structural information of graphs with multiple levels of locality, because it preserves the layer-wise information before it is lost due to over-smoothing. Our experimental results show that the MLAP architecture improves graph classification performance compared to baseline architectures. In addition, analyses of the graph representations suggest that aggregating information from multiple levels of locality indeed has the potential to improve the discriminability of learned graph representations.
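A minimal sketch of the MLAP scheme described above (layer internals are simplified placeholders): each message-passing step gets its own attention pooling into a graph-level vector, and the per-layer vectors are unified by summation.

```python
# Multi-level attention pooling: one attention readout per layer, then a
# sum across depths, so both local and global information survives.
import torch
import torch.nn as nn

class MLAP(nn.Module):
    def __init__(self, dim, num_layers):
        super().__init__()
        self.layers = nn.ModuleList([nn.Linear(dim, dim) for _ in range(num_layers)])
        self.gates = nn.ModuleList([nn.Linear(dim, 1) for _ in range(num_layers)])

    def forward(self, x, adj):                  # x: (N, dim), adj: (N, N)
        readouts = []
        for layer, gate in zip(self.layers, self.gates):
            x = torch.relu(layer(adj @ x))      # one simplified GNN layer
            a = torch.softmax(gate(x), dim=0)   # per-node attention weights
            readouts.append((a * x).sum(0))     # layer-wise pooled representation
        return torch.stack(readouts).sum(0)     # unify across depths

g = MLAP(16, 3)(torch.randn(7, 16), torch.eye(7))
```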
Video-based person re-identification (ReID) aims to identify a given pedestrian video sequence across multiple non-overlapping cameras. To aggregate the temporal and spatial features of video samples, graph neural networks (GNNs) have been introduced. However, existing graph-based models, such as STGCN, perform mean/max pooling on node features to obtain the graph representation, which neglects the graph topology and node importance. In this paper, we propose a graph pooling network (GPNet) to learn a multi-granularity graph representation for video retrieval, where a graph pooling layer is implemented to downsample the graph. We first construct a multi-granularity graph whose node features denote image embeddings learned by a backbone, and whose edges are established between temporal and Euclidean neighborhood nodes. We then implement multiple graph convolutional layers to perform neighborhood aggregation on the graphs. To downsample the graphs, we propose a multi-head full attention graph pooling (MHFAPool) layer, which integrates the advantages of existing node-clustering and node-selection pooling methods. Specifically, MHFAPool takes the principal eigenvector of the full attention matrix as the aggregation coefficients, to involve the global graph information in each pooled node. Extensive experiments show that our GPNet achieves competitive results on four widely used datasets, i.e., MARS, DukeMTMC-VideoReID, iLIDS-VID and PRID-2011.
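The following is a hedged sketch of the pooling rule described above, as we read it: the principal eigenvector of a (symmetrized) full attention matrix supplies aggregation coefficients, so the pooled vector mixes in global graph information. This is an illustrative assumption, not the paper's code.

```python
# Principal-eigenvector attention pooling: global attention structure
# decides how strongly each node contributes to the pooled vector.
import torch

def eig_attention_pool(x):                      # x: (N, dim) node features
    att = torch.softmax(x @ x.T / x.shape[1] ** 0.5, dim=1)  # full attention
    sym = 0.5 * (att + att.T)                   # symmetrize for a real spectrum
    vals, vecs = torch.linalg.eigh(sym)
    w = vecs[:, -1].abs()                       # principal eigenvector
    return (w / w.sum()) @ x                    # attention-weighted graph vector

g = eig_attention_pool(torch.randn(6, 16))      # (dim,) pooled representation
```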