Graph neural networks (GNNs) have advanced the state of the art (SOTA) in learning and prediction on large-scale data from social networks, biology, and beyond. Since integrated circuits (ICs) can naturally be represented as graphs, there has been a tremendous surge in employing GNNs for machine learning (ML)-based methods for various aspects of IC design. Given this trajectory, there is a timely need to review and discuss some powerful and versatile GNN approaches for advancing IC design. In this paper, we propose a generic pipeline for tailoring GNN models toward solving challenging problems for IC design. We outline promising options for each pipeline element, and we discuss selected and promising works, like leveraging GNNs to break SOTA logic obfuscation. Our comprehensive overview of GNN frameworks covers (i) electronic design automation (EDA) and IC design in general, (ii) design of reliable ICs, and (iii) design as well as analysis of secure ICs. We also provide our overview and related resources in the GNN4IC hub at https://github.com/DfX-NYUAD/GNN4IC. Finally, we discuss interesting open problems for future research.
Reverse engineering of integrated-circuit netlists is a powerful tool that can help detect malicious logic and counteract design piracy. A key challenge in this domain is the correct classification of data-path and control-logic registers in a design. We present ReIGNN, a novel learning-based register classification methodology that combines graph neural networks (GNNs) with structural analysis to classify the registers in a circuit with high accuracy and to generalize across different designs. GNNs are particularly effective at processing circuit netlists in terms of the features of nodes and their neighborhoods, in order to learn to effectively distinguish between different types of nodes. The structural analysis can further rectify registers misclassified as state registers by the GNN, by analyzing the strongly connected components in the netlist graph. Numerical results on a set of benchmarks show that ReIGNN can achieve, on average, 96.5% balanced accuracy and 97.7% sensitivity across different designs.
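The structural-analysis step described above, checking strongly connected components (SCCs) in the netlist graph, can be illustrated with a minimal, self-contained sketch. This is not ReIGNN's implementation; the dictionary-based netlist encoding and the register names are hypothetical, and a real flow would apply this correction on top of the GNN's predictions.

```python
def strongly_connected_components(graph):
    """Kosaraju's algorithm. `graph` maps node -> list of successors."""
    nodes = set(graph) | {w for ws in graph.values() for w in ws}
    # Pass 1: record nodes in order of DFS completion (iterative DFS).
    visited, order = set(), []
    for s in nodes:
        if s in visited:
            continue
        visited.add(s)
        stack = [(s, iter(graph.get(s, ())))]
        while stack:
            v, it = stack[-1]
            pushed = False
            for w in it:
                if w not in visited:
                    visited.add(w)
                    stack.append((w, iter(graph.get(w, ()))))
                    pushed = True
                    break
            if not pushed:
                order.append(v)
                stack.pop()
    # Pass 2: DFS on the reversed graph in reverse completion order.
    rev = {v: [] for v in nodes}
    for v, ws in graph.items():
        for w in ws:
            rev[w].append(v)
    assigned, sccs = set(), []
    for s in reversed(order):
        if s in assigned:
            continue
        assigned.add(s)
        comp, stack = [], [s]
        while stack:
            v = stack.pop()
            comp.append(v)
            for w in rev[v]:
                if w not in assigned:
                    assigned.add(w)
                    stack.append(w)
        sccs.append(comp)
    return sccs

def state_registers(graph, registers):
    """Registers that sit on a feedback loop (inside a non-trivial SCC,
    or with a self-loop) are flagged as state registers."""
    state = set()
    for comp in strongly_connected_components(graph):
        has_loop = len(comp) > 1 or comp[0] in graph.get(comp[0], ())
        if has_loop:
            state |= set(comp) & set(registers)
    return state
```

Here register r1 feeds combinational gate g1, which feeds back into r1 (a state-holding loop), while r2 only feeds forward, so only r1 is flagged as a state register.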
Process variation and device aging impose profound challenges on circuit designers. Without a precise understanding of the impact of variations on the delay of circuit paths, the guardbands needed to avoid timing violations cannot be correctly estimated. This problem is exacerbated for advanced technology nodes, where transistor dimensions reach atomic levels and established margins are severely constrained. Hence, traditional worst-case analysis becomes impractical, resulting in intolerable performance overheads. In contrast, process-variation- and aging-aware static timing analysis (STA) provides designers with accurate statistical delay distributions, from which small but sufficient timing guardbands can be efficiently estimated. However, such analysis is costly, as it requires intensive Monte Carlo simulations. Moreover, it requires access to confidential physics-based aging models to generate the standard-cell libraries needed for STA. In this work, we employ graph neural networks (GNNs) to accurately estimate the impact of process variation and device aging on the delay of any path within a circuit. Our proposed GNN4REL framework empowers designers to perform rapid and accurate reliability estimation without access to transistor models, standard-cell libraries, or even STA; these components are all incorporated into the GNN model through training by the foundry. Specifically, GNN4REL is trained on a FinFET technology model calibrated against industrial 14nm measurement data. Through our extensive experiments on the EPFL and ITC-99 benchmarks, as well as RISC-V processors, we successfully estimate the delay degradations of all paths (notably within seconds), with a mean absolute error down to 0.01 percentage points.
The globalization of the integrated circuit (IC) supply chain has moved most of the design, fabrication, and testing processes from a single trusted entity to various untrusted third-party entities around the world. The risk of using untrusted third-party intellectual property (3PIP) is that adversaries may insert malicious modifications known as hardware Trojans (HTs). These HTs can compromise integrity, deteriorate performance, deny service, and alter the design's functionality. While numerous HT detection methods have been proposed in the literature, the crucial task of HT localization has been overlooked. Moreover, the few existing HT localization methods suffer from multiple weaknesses: reliance on a golden reference, inability to generalize to all types of HTs, lack of scalability, low localization resolution, and manual feature engineering or property definition. To overcome these shortcomings, we propose a novel, golden-reference-free HT localization method that leverages graph convolutional networks (GCNs). In this work, we convert the circuit design into its intrinsic data structure, a graph, and extract node attributes. Afterward, the graph convolution performs automatic feature extraction for nodes in order to classify them as Trojan or benign. Our automated approach does not burden the designer with manual code review. It locates Trojan signals with 99.6% accuracy, 93.1% F1-score, and a false-positive rate below 0.009%.
Preserving individual features and complex relations, graph data are widely utilized and studied. By updating and aggregating node representations, graph neural network (GNN) models can capture structural information and are gaining popularity. In the financial context, graphs are constructed from real-world data, which leads to complex graph structures and therefore requires sophisticated methods. In this work, we provide a comprehensive review of recent GNN models in financial contexts. We first categorize the commonly used financial graphs and summarize the feature-processing steps for each node. We then summarize the GNN methodology for each graph type and its applications in each area, and propose some potential research directions.
With continued technology-node scaling, accurate prediction models at early design stages can significantly reduce the design cycle. In particular, during logic synthesis, predicting cell congestion caused by improper logic combination can reduce the burden of subsequent physical implementation. Attempts have been made to apply graph neural network (GNN) techniques to congestion prediction at the logic synthesis stage. However, they require informative cell features to achieve reasonable performance, since the core concept of GNNs is built on the message-passing framework, which would be impractical at the early logic synthesis stage. To address this limitation, we propose a framework that directly learns embeddings for a given netlist to enhance the quality of the node features. Popular random-walk-based embedding methods such as Node2vec, LINE, and DeepWalk suffer from cross-graph alignment issues and poor generalization to unseen netlist graphs, yielding inferior performance and costly runtime. In our framework, we introduce a superior alternative that obtains node embeddings which generalize across netlist graphs using matrix factorization methods. We propose an efficient mini-batch training method at the sub-graph level that guarantees parallel training and satisfies the memory restrictions of large-scale netlists. We present results utilizing open-source EDA tools such as DREAMPlace and the OpenROAD framework on a variety of openly available circuits. By combining the learned embeddings on top of the netlists with a GNN, our method improves prediction performance, generalizes to new circuits, and is efficient in training, potentially saving over 90% of runtime.
Applying deep learning (DL) techniques in the electronic design automation (EDA) field has become a trending topic in recent years. Most existing solutions apply well-developed DL models to solve specific EDA problems. While demonstrating promising results, they require careful model tuning for every problem. The fundamental question of "how to obtain a general and effective neural representation of circuits?" has not been answered yet. In this work, we take the first step towards solving this problem. We propose DeepGate, a novel representation-learning solution that effectively embeds both the logic function and the structural information of a circuit as vectors on each gate. Specifically, we transform circuits into a unified and-inverter graph format for learning and use signal probabilities as the supervision task in DeepGate. We then introduce a novel graph neural network that uses strong inductive biases in practical circuits as learning priors for signal probability prediction. Our experimental results show the efficacy and generalization capability of DeepGate.
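The supervision signal mentioned above, per-gate signal probability, can be estimated by random simulation. The sketch below is not DeepGate's code; the dictionary encoding of an and-inverter graph (gates listed in topological order) is a hypothetical illustration of how such labels could be generated.

```python
import random

def signal_probabilities(aig, inputs, n_samples=20000, seed=1):
    """Estimate each node's signal probability, i.e. the fraction of
    uniformly random input patterns under which it evaluates to 1.
    `aig` maps gate -> ("AND", a, b) or ("NOT", a) and must list gates
    in topological order (fan-ins appear before the gates using them)."""
    rng = random.Random(seed)
    counts = {v: 0 for v in list(inputs) + list(aig)}
    for _ in range(n_samples):
        # Draw one random input pattern, then evaluate gates in order.
        val = {i: rng.randint(0, 1) for i in inputs}
        for g, node in aig.items():
            if node[0] == "AND":
                val[g] = val[node[1]] & val[node[2]]
            else:  # "NOT"
                val[g] = 1 - val[node[1]]
        for v, bit in val.items():
            counts[v] += bit
    return {v: c / n_samples for v, c in counts.items()}
```

For a 2-input AND gate with uniform random inputs, the estimate converges to 0.25, and its inverted output to 0.75; a learned model is then trained to predict these values directly from the graph.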
Graphs are ubiquitous in nature and can therefore serve as models for many practical but also theoretical problems. For this purpose, they can be defined as many different graph types that suitably reflect the individual contexts of the represented problem. To address cutting-edge problems based on graph data, the research field of Graph Neural Networks (GNNs) has emerged. Despite the field's youth and the speed at which new models are developed, many recent surveys have been published to keep track of them. Nevertheless, no survey has yet gathered which GNNs can process which kinds of graph types. In this survey, we give a detailed overview of already existing GNNs and, unlike previous surveys, categorize them according to their ability to handle different graph types and properties. We consider GNNs operating on static and dynamic graphs of different structural constitutions, with or without node or edge attributes. Moreover, we distinguish between GNN models for discrete-time or continuous-time dynamic graphs and group the models according to their architecture. We find that there are still graph types that are not or only rarely covered by existing GNN models. We point out where models are missing and give potential reasons for their absence.
Deep reinforcement learning (DRL) has empowered a variety of artificial intelligence fields, including pattern recognition, robotics, recommendation systems, and gaming. Similarly, graph neural networks (GNNs) have demonstrated superior performance in supervised learning on graph-structured data. Recently, the fusion of GNNs with DRL for graph-structured environments has attracted a lot of attention. This paper provides a comprehensive review of these hybrid works. These works can be classified into two categories: (1) algorithmic enhancement, where DRL and GNNs complement each other for better utility; and (2) application-specific enhancement, where DRL and GNNs support each other. This fusion effectively addresses various complex problems in engineering and the life sciences. Based on the review, we further analyze the applicability and benefits of fusing these two domains, especially in terms of increasing generalizability and reducing computational complexity. Finally, the key challenges in integrating DRL and GNNs, as well as potential future research directions, are highlighted, which will be of interest to the broader machine learning community.
In the past decade, significant progress has been made in the design and evaluation of logic locking, a premier technique for safeguarding the integrity of integrated circuits across the entire electronics supply chain. However, the widespread proliferation of machine learning has recently introduced new avenues for evaluating logic locking schemes. This paper summarizes recent developments in logic locking attacks and countermeasures at the frontier of contemporary machine learning models. Based on the presented work, key takeaways, opportunities, and challenges are highlighted to offer recommendations for the design of next-generation logic locking.
Computer architecture and systems have long been optimized for the efficient execution of machine learning (ML) models. Now, it is time to reconsider the relationship between ML and systems and to let ML transform the way computer architecture and systems are designed. This has a twofold implication: improving designers' productivity and completing the virtuous cycle. In this paper, we present a comprehensive review of work that applies ML to computer architecture and system design. First, we consider the typical roles of ML techniques in architecture/system design, i.e., either for fast predictive modeling or as the design methodology, and perform a high-level taxonomy. Then, we summarize the common problems in computer architecture/system design that can be solved by ML techniques, and the typical ML techniques employed to resolve each of them. In addition to computer architecture in the narrow sense, we adopt the concept that data centers can be recognized as warehouse-scale computers; brief discussions are provided on adjacent computer systems topics such as code generation and compilers; and we also give attention to how ML techniques can aid and transform design automation. We further provide a future vision of opportunities and potential directions, and envision that applying ML to computer architecture and systems will thrive in the community.
Combinatorial optimization is a well-established area in operations research and computer science. Until recently, its methods have focused on solving problem instances in isolation, ignoring that they often stem from related data distributions in practice. However, recent years have seen a surge of interest in using machine learning, especially graph neural networks (GNNs), as a key building block for combinatorial tasks, either directly as solvers or by enhancing exact solvers. The inductive bias of GNNs effectively encodes combinatorial and relational input due to their invariance to permutations and their awareness of input sparsity. This paper presents a conceptual review of recent key advancements in this emerging field, aimed at researchers in both optimization and machine learning.
Influence Maximization (IM) is a classical combinatorial optimization problem, which can be widely used in mobile networks, social computing, and recommendation systems. It aims at selecting a small number of users so as to maximize the influence spread across the online social network. Because of its potential commercial and academic value, many researchers have studied the IM problem from different perspectives. The main challenge comes from the NP-hardness of the IM problem and the \#P-hardness of estimating the influence spread; traditional algorithms to overcome these can be categorized into two classes: heuristic algorithms and approximation algorithms. However, there is no theoretical guarantee for heuristic algorithms, and the theoretical design of approximation algorithms is close to its limit. Therefore, it is almost impossible to further optimize and improve their performance. With the rapid development of artificial intelligence, techniques based on Machine Learning (ML) have achieved remarkable results in many fields. In view of this, a number of new methods have emerged in recent years to solve combinatorial optimization problems using ML-based techniques. These methods have the advantages of fast solving speed and strong generalization to unknown graphs, which provide a brand-new direction for solving combinatorial optimization problems. Therefore, we set aside the traditional algorithms based on iterative search and review the recent development of ML-based methods, especially Deep Reinforcement Learning, for solving the IM problem and its variants in social networks. We focus on summarizing the relevant background knowledge, basic principles, common methods, and applied research. Finally, the challenges that urgently need to be solved in future IM research are pointed out.
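For context, the classical baseline that the ML-based methods above aim to improve upon is the greedy approximation algorithm, which repeatedly adds the node with the largest Monte Carlo-estimated marginal gain in spread under the independent cascade model. A minimal sketch follows; the dictionary graph encoding, the edge probability `p`, and the trial counts are illustrative choices, not values from any particular paper.

```python
import random

def spread_ic(graph, seeds, p=0.1, trials=200, rng=None):
    """Monte Carlo estimate of influence spread under the independent
    cascade model: each newly activated node tries once to activate
    each inactive neighbor, succeeding with probability p."""
    rng = rng or random.Random(0)
    total = 0
    for _ in range(trials):
        active = set(seeds)
        frontier = list(seeds)
        while frontier:
            nxt = []
            for u in frontier:
                for v in graph.get(u, ()):
                    if v not in active and rng.random() < p:
                        active.add(v)
                        nxt.append(v)
            frontier = nxt
        total += len(active)
    return total / trials

def greedy_im(graph, k, p=0.1, trials=200, seed=0):
    """Greedy seed selection: at each step, add the node with the
    largest estimated marginal gain in spread."""
    rng = random.Random(seed)
    seeds = []
    for _ in range(k):
        base = spread_ic(graph, seeds, p, trials, rng)
        best, best_gain = None, -1.0
        for v in graph:
            if v in seeds:
                continue
            gain = spread_ic(graph, seeds + [v], p, trials, rng) - base
            if gain > best_gain:
                best, best_gain = v, gain
        seeds.append(best)
    return seeds
```

On a star graph, the hub dominates every leaf in expected spread, so the greedy pass selects it first. The repeated simulation inside the argmax is exactly the cost that learning-based approaches try to avoid.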
Graph mining tasks arise from many different application domains, ranging from social networks and transportation to e-commerce, and have been receiving great attention from the theoretical and algorithmic design communities in recent years; there has also been some pioneering work employing the research-rich Reinforcement Learning (RL) techniques to address graph data mining tasks. However, these graph mining methods and RL models are dispersed across different research areas, which makes it hard to compare them. In this survey, we provide a comprehensive overview of RL and graph mining methods and generalize these methods to Graph Reinforcement Learning (GRL) as a unified formulation. We further discuss the applications of GRL methods across various domains and summarize the method descriptions, open-source codes, and benchmark datasets of GRL methods. Furthermore, we propose important directions and challenges to be solved in the future. To the best of our knowledge, this is the most recent comprehensive survey of GRL; it provides a global view and a learning resource for scholars. In addition, we have created an online open-source repository both for interested scholars who want to enter this rapidly developing domain and for experts who would like to compare GRL methods.
Deep learning has revolutionized many machine learning tasks in recent years, ranging from image classification and video processing to speech recognition and natural language understanding. The data in these tasks are typically represented in the Euclidean space. However, there is an increasing number of applications where data are generated from non-Euclidean domains and are represented as graphs with complex relationships and interdependency between objects. The complexity of graph data has imposed significant challenges on existing machine learning algorithms. Recently, many studies on extending deep learning approaches for graph data have emerged. In this survey, we provide a comprehensive overview of graph neural networks (GNNs) in data mining and machine learning fields. We propose a new taxonomy to divide the state-of-the-art graph neural networks into four categories, namely recurrent graph neural networks, convolutional graph neural networks, graph autoencoders, and spatial-temporal graph neural networks. We further discuss the applications of graph neural networks across various domains and summarize the open source codes, benchmark data sets, and model evaluation of graph neural networks. Finally, we propose potential research directions in this rapidly growing field.
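As a concrete reference point for the convolutional category above, a single Kipf-and-Welling-style graph convolution layer computes H' = act(D^(-1/2) (A + I) D^(-1/2) H W). The dependency-free sketch below uses plain Python lists; the matrices and the ReLU nonlinearity are illustrative choices, not a specific library's API.

```python
import math

def relu(x):
    return max(0.0, x)

def gcn_layer(adj, feats, weight, act=relu):
    """One graph-convolution step: H' = act(D^-1/2 (A + I) D^-1/2 H W),
    written with plain Python lists for clarity, not speed."""
    n = len(adj)
    # Add self-loops so every node keeps its own signal.
    a = [[adj[i][j] + (1.0 if i == j else 0.0) for j in range(n)]
         for i in range(n)]
    deg = [sum(row) for row in a]
    # Symmetrically normalized adjacency.
    norm = [[a[i][j] / math.sqrt(deg[i] * deg[j]) for j in range(n)]
            for i in range(n)]
    # Aggregate neighbor features: agg = norm @ feats.
    fd = len(feats[0])
    agg = [[sum(norm[i][k] * feats[k][d] for k in range(n)) for d in range(fd)]
           for i in range(n)]
    # Linear transform plus nonlinearity: H' = act(agg @ W).
    od = len(weight[0])
    return [[act(sum(agg[i][d] * weight[d][o] for d in range(fd)))
             for o in range(od)] for i in range(n)]
```

For two connected nodes with features 1.0 and 3.0 and an identity weight, both outputs become 2.0: each node's new representation is the normalized average of itself and its neighbor, which is the essence of the message-passing view.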
Communication networks are important infrastructure in contemporary society. Many challenges remain, and new solutions are continually proposed in this active research area. In recent years, to model the network topology, graph-based deep learning has achieved state-of-the-art performance on a series of problems in communication networks. In this survey, we review the rapidly growing body of research that uses different graph-based deep learning models, e.g., graph convolutional and graph attention networks, for various problems in different types of communication networks, e.g., wireless networks, wired networks, and software-defined networks. We also provide an organized list of problems and solutions for each study, and we identify future research directions. To the best of our knowledge, this paper is the first survey that focuses on the application of graph-based deep learning methods in communication networks covering both wired and wireless scenarios. To track follow-up research, a public GitHub repository has been created, where the relevant papers will be updated continuously.
Biomedical networks are universal descriptors of systems of interacting elements, from protein interactions and disease networks all the way to healthcare systems and scientific knowledge. With the remarkable success of representation learning in providing powerful predictions and insights, we have witnessed a rapid expansion of representation-learning techniques into the modeling, analysis, and learning of such networks. In this review, we put forward an observation that long-standing principles of networks in biology and medicine, while often unspoken in machine learning research, can provide the conceptual grounding for representation learning, explain its current successes and limitations, and inform future advances. We synthesize a spectrum of algorithmic approaches that, at their core, leverage graph topology to embed networks into compact vector spaces, and we capture the breadth of ways in which representation learning has proven useful. Areas of profound impact include identifying variants underlying complex traits, disentangling the behaviors of single cells and their effects on health, assisting in the diagnosis and treatment of patients, and developing safe and effective medicines.
Graph representation learning is a fast-growing field, and one of its main objectives is to generate meaningful graph representations in a low-dimensional space. The learned embeddings have been successfully applied to perform various prediction tasks, such as link prediction, node classification, clustering, and visualization. The collective effort of the graph-learning community has delivered hundreds of methods, but no single method excels under all evaluation metrics, such as prediction accuracy, running time, scalability, etc. This survey aims to evaluate all major classes of graph embedding methods by considering algorithmic variations, parameter selection, scalability, hardware and software platforms, downstream ML tasks, and diverse datasets. We organize graph embedding techniques using a taxonomy that includes manual feature engineering, matrix factorization, shallow neural networks, and deep graph convolutional networks. We evaluate these classes of algorithms on node classification, link prediction, clustering, and visualization tasks using widely used benchmark graphs. We designed our experiments on top of the PyTorch Geometric and DGL libraries and ran them on different multi-core CPU and GPU platforms. We rigorously scrutinized the performance of the embedding methods under various performance metrics and summarized the results. Thus, this paper can serve as a comparative guide to help users select the methods that best suit their tasks.
Deep learning has been shown to be successful in a number of domains, ranging from acoustics, images, to natural language processing. However, applying deep learning to the ubiquitous graph data is non-trivial because of the unique characteristics of graphs. Recently, substantial research efforts have been devoted to applying deep learning methods to graphs, resulting in beneficial advances in graph analysis techniques. In this survey, we comprehensively review the different types of deep learning methods on graphs. We divide the existing methods into five categories based on their model architectures and training strategies: graph recurrent neural networks, graph convolutional networks, graph autoencoders, graph reinforcement learning, and graph adversarial methods. We then provide a comprehensive overview of these methods in a systematic manner mainly by following their development history. We also analyze the differences and compositions of different methods. Finally, we briefly outline the applications in which they have been used and discuss potential future research directions.
In recent years, algorithms and neural architectures based on the Weisfeiler-Leman algorithm, a well-known heuristic for the graph isomorphism problem, have emerged as a powerful tool for machine learning with graphs and relational data. Here, we give a comprehensive overview of the algorithm's use in a machine learning setting, focusing on the supervised regime. We discuss the theoretical background, show how to use it for supervised graph and node representation learning, discuss recent extensions, and outline the algorithm's connection to (permutation-)equivariant neural architectures. Moreover, we give an overview of current applications and future directions to stimulate further research.
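The color-refinement heuristic underlying this line of work is short enough to sketch: each round, a node's color is replaced by a canonical relabeling of the pair (own color, sorted multiset of neighbor colors), and two graphs are distinguished if their final color histograms differ. The dictionary graph encoding and the round count below are illustrative.

```python
def wl_colors(graph, rounds=3):
    """1-dimensional Weisfeiler-Leman color refinement. `graph` maps
    node -> list of neighbors (undirected)."""
    colors = {v: 0 for v in graph}
    for _ in range(rounds):
        # Signature: own color plus sorted multiset of neighbor colors.
        sig = {v: (colors[v], tuple(sorted(colors[w] for w in graph[v])))
               for v in graph}
        # Canonically relabel signatures with compact integers.
        palette = {s: i for i, s in enumerate(sorted(set(sig.values())))}
        colors = {v: palette[sig[v]] for v in graph}
    return colors

def wl_distinguishes(g1, g2, rounds=3):
    """True if 1-WL separates the two graphs. Refinement runs on their
    disjoint union so that colors are comparable across graphs."""
    union = {("a", v): [("a", w) for w in ws] for v, ws in g1.items()}
    union.update({("b", v): [("b", w) for w in ws] for v, ws in g2.items()})
    colors = wl_colors(union, rounds)
    return (sorted(colors[("a", v)] for v in g1)
            != sorted(colors[("b", v)] for v in g2))
```

This immediately exhibits the heuristic's well-known limitation: 1-WL separates a 3-path from a triangle, but it cannot tell two disjoint triangles from a 6-cycle, since every node in both graphs has degree 2 and refinement never splits the colors.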