Graph neural networks (GNNs) are effective neural network models for graph data and are widely used in different fields, including wireless communications. Unlike other neural network models, GNNs can be implemented in a decentralized manner with information exchanges among neighbors, making them a potentially powerful tool for decentralized control in wireless communication systems. The main bottleneck, however, is wireless channel impairments, which deteriorate the prediction robustness of GNNs. To overcome this obstacle, we analyze and enhance the robustness of decentralized GNNs in different wireless communication systems in this paper. Specifically, using a GNN binary classifier as an example, we first develop a methodology to verify whether the predictions are robust. Then, we analyze the performance of the decentralized GNN binary classifier in both uncoded and coded wireless communication systems. To remedy imperfect wireless transmission and enhance prediction robustness, we further propose novel retransmission mechanisms for the above two communication systems. Through simulations on synthetic graph data, we validate our analysis, verify the effectiveness of the proposed retransmission mechanisms, and provide some insights for practical implementation.
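A minimal sketch of the setting may help: each node aggregates neighbor features received over a noisy link, and averaging repeated transmissions of every message suppresses the channel noise before aggregation. All names and the simple averaging rule below are illustrative assumptions, not the paper's exact retransmission mechanism.

```python
import numpy as np

def decentralized_gnn_layer(x, adj, W, snr_db, retransmissions=1, rng=None):
    """One decentralized GNN layer: each node aggregates neighbor features
    received over an AWGN link, and averaging `retransmissions` noisy copies
    of every message suppresses the channel noise before aggregation.
    Illustrative sketch only -- not the paper's exact retransmission rule."""
    rng = np.random.default_rng() if rng is None else rng
    n, d = x.shape
    noise_std = np.sqrt(10.0 ** (-snr_db / 10.0))        # unit-power signal assumption
    h = np.zeros((n, W.shape[1]))
    for i in range(n):
        agg = np.zeros(d)
        for j in np.flatnonzero(adj[i]):
            copies = x[j] + noise_std * rng.standard_normal((retransmissions, d))
            agg += copies.mean(axis=0)                   # average the retransmitted copies
        h[i] = np.maximum(agg @ W, 0.0)                  # linear transform + ReLU
    return h

# A binary prediction is "robust" if its sign matches the one obtained from
# noise-free aggregation; more retransmissions shrink the effective noise
# variance by 1/retransmissions and hence widen the robust region.
```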
As an efficient graph analytical tool, graph neural networks (GNNs) have special properties that are particularly fit for the characteristics and requirements of wireless communications, exhibiting good potential for the advancement of next-generation wireless communications. This article aims to provide a comprehensive overview of the interplay between GNNs and wireless communications, including GNNs for wireless communications (GNN4Com) and wireless communications for GNNs (Com4GNN). In particular, we discuss GNN4Com based on how graphical models are constructed and introduce Com4GNN with corresponding incentives. We also highlight potential research directions to promote future research endeavors for GNNs in wireless communications.
As effective neural network models for graph data, graph neural networks (GNNs) have recently found successful applications to various wireless optimization problems. Given that the inference stage of GNNs can naturally be implemented in a decentralized manner, GNNs are potential enablers for decentralized control/management in next-generation wireless communications. However, privacy leakage may occur due to the information exchanges among neighbors during decentralized inference with GNNs. To address this issue, in this paper we analyze and enhance the privacy of decentralized inference with GNNs in wireless networks. Specifically, we adopt local differential privacy as the metric, and design novel privacy-preserving signals as well as privacy-guaranteed training algorithms to achieve privacy-preserving inference. We also define the SNR-privacy trade-off function to analyze the performance upper bound of decentralized inference with GNNs in wireless networks. To further improve the communication and computation efficiency, we adopt the over-the-air computation technique and theoretically demonstrate its advantage in privacy preservation. Through extensive simulations on synthetic graph data, we validate our theoretical analysis, verify the effectiveness of the proposed privacy-preserving wireless signaling and privacy-guaranteed training algorithms, and offer some guidance on practical implementation.
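As a rough illustration of the two ingredients named above, the sketch below perturbs each outgoing feature vector with the standard Gaussian mechanism for local differential privacy and then aggregates the neighbors' signals in a single over-the-air superposition; the noise calibration and signal model are textbook assumptions, not the signaling designed in the paper.

```python
import numpy as np

def ldp_perturb(x, epsilon, delta, rng, clip=1.0):
    """Gaussian-mechanism perturbation for (epsilon, delta)-local DP on an
    L2-clipped feature vector; the noise scale is the standard analytic form,
    not necessarily the privacy-preserving signaling designed in the paper."""
    x = x * min(1.0, clip / np.linalg.norm(x))            # bound the sensitivity (<= 2*clip)
    sigma = 2.0 * clip * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    return x + sigma * rng.standard_normal(x.shape)

def over_the_air_sum(signals, snr_db, rng):
    """Over-the-air computation: simultaneously transmitted signals superpose
    on the multiple-access channel, so one channel use delivers (roughly) the
    sum a GNN aggregation step needs, plus receiver noise."""
    noise_std = np.sqrt(10.0 ** (-snr_db / 10.0))
    superposed = np.sum(signals, axis=0)
    return superposed + noise_std * rng.standard_normal(superposed.shape)

rng = np.random.default_rng(0)
neighbor_features = [rng.uniform(-1.0, 1.0, 8) for _ in range(5)]    # toy features
private = [ldp_perturb(x, epsilon=1.0, delta=1e-5, rng=rng) for x in neighbor_features]
aggregate = over_the_air_sum(private, snr_db=20.0, rng=rng)           # noisy private sum
```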
As data generation increasingly takes place on devices without a wired connection, machine learning (ML) related traffic will become ubiquitous in wireless networks. Many studies have shown that traditional wireless protocols are highly inefficient or unsustainable to support ML, which creates the need for new wireless communication methods. In this survey, we give an exhaustive review of the state-of-the-art wireless methods that are specifically designed to support ML services over distributed datasets. Currently, there are two clear themes within the literature: analog over-the-air computation and digital radio resource management optimized for ML. This survey gives a comprehensive introduction to these methods, reviews the most important works, highlights open problems, and discusses application scenarios.
Deep learning-based approaches have been developed to solve challenging problems in wireless communications, leading to promising results. Early attempts adopted neural network architectures inherited from applications such as computer vision. They often yield poor performance in large scale networks (i.e., poor scalability) and unseen network settings (i.e., poor generalization). To resolve these issues, graph neural networks (GNNs) have been recently adopted, as they can effectively exploit the domain knowledge, i.e., the graph topology in wireless communications problems. GNN-based methods can achieve near-optimal performance in large-scale networks and generalize well under different system settings, but the theoretical underpinnings and design guidelines remain elusive, which may hinder their practical implementations. This paper endeavors to fill both the theoretical and practical gaps. For theoretical guarantees, we prove that GNNs achieve near-optimal performance in wireless networks with much fewer training samples than traditional neural architectures. Specifically, to solve an optimization problem on an $n$-node graph (where the nodes may represent users, base stations, or antennas), GNNs' generalization error and required number of training samples are $\mathcal{O}(n)$ and $\mathcal{O}(n^2)$ times lower than the unstructured multi-layer perceptrons. For design guidelines, we propose a unified framework that is applicable to general design problems in wireless networks, which includes graph modeling, neural architecture design, and theory-guided performance enhancement. Extensive simulations, which cover a variety of important problems and network settings, verify our theory and the effectiveness of the proposed design framework.
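As a compact restatement of the scaling claim above (our shorthand, with $n$ the number of graph nodes, $\epsilon_{\mathrm{gen}}$ the generalization error, and $N_{\mathrm{train}}$ the required number of training samples):

$$\frac{\epsilon_{\mathrm{gen}}(\mathrm{MLP})}{\epsilon_{\mathrm{gen}}(\mathrm{GNN})} = \mathcal{O}(n), \qquad \frac{N_{\mathrm{train}}(\mathrm{MLP})}{N_{\mathrm{train}}(\mathrm{GNN})} = \mathcal{O}(n^{2}).$$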
In this paper, we aim to improve the quality of service (QoS) of ultra-reliable and low-latency communications (URLLC) in interference-limited wireless networks. To obtain time diversity within the channel coherence time, we first propose a random repetition scheme that randomizes the interference power. Then, we optimize the number of reserved slots and the number of repetitions for each packet to minimize the QoS violation probability, defined as the percentage of users that cannot achieve URLLC. We construct a cascaded random edge graph neural network (REGNN) to represent the repetition scheme and develop a model-free unsupervised learning method to train it. We analyze the QoS violation probability using stochastic geometry in a symmetric scenario and apply a model-based exhaustive search (ES) method to find the optimal solution. Simulation results show that in the symmetric scenario, the QoS violation probabilities achieved by the model-free learning method and the model-based ES method are nearly the same. In more general scenarios, the cascaded REGNN generalizes well to wireless networks with different scales, network topologies, cell densities, and frequency reuse factors. It outperforms the model-based ES method in the presence of model mismatch.
Communication networks are important infrastructures in contemporary society. Many challenges remain, and new solutions are continuously proposed in this active research area. In recent years, to model network topologies, graph-based deep learning has achieved state-of-the-art performance in a series of problems in communication networks. In this survey, we review the rapidly growing body of research that uses different graph-based deep learning models, e.g., graph convolutional and graph attention networks, for various problems in different types of communication networks, e.g., wireless networks, wired networks, and software-defined networks. We also present an organized list of the problems and solutions for each study and identify future research directions. To the best of our knowledge, this paper is the first survey focusing on the application of graph-based deep learning methods in communication networks involving both wired and wireless scenarios. To track follow-up research, a public GitHub repository has been created, where the relevant papers will be updated continuously.
Communication and computation are often viewed as separate tasks. This approach is very effective from the perspective of engineering as isolated optimizations can be performed. On the other hand, there are many cases where the main interest is a function of the local information at the devices instead of the local information itself. For such scenarios, information theoretical results show that harnessing the interference in a multiple-access channel for computation, i.e., over-the-air computation (OAC), can provide a significantly higher achievable computation rate than the one with the separation of communication and computation tasks. Besides, the gap between OAC and separation in terms of computation rate increases with more participating nodes. Given this motivation, in this study, we provide a comprehensive survey on practical OAC methods. After outlining fundamentals related to OAC, we discuss the available OAC schemes with their pros and cons. We then provide an overview of the enabling mechanisms and relevant metrics to achieve reliable computation in the wireless channel. Finally, we summarize the potential applications of OAC and point out some future directions.
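A minimal numerical sketch of analog OAC, assuming perfect channel knowledge and channel-inversion pre-scaling under a power limit (one common textbook choice, used here only as an illustrative baseline):

```python
import numpy as np

def oac_sum(values, channels, noise_std, p_max, rng):
    """Analog over-the-air computation of a sum: each node pre-scales its value
    by the inverse of its known channel so the signals add coherently on the
    multiple-access channel, and one channel use delivers the sum (plus noise)."""
    eta = min(p_max * abs(h) ** 2 for h in channels)      # common scaling so every node meets the power limit
    tx = [np.sqrt(eta) / h * v for v, h in zip(values, channels)]
    rx = sum(h * s for h, s in zip(channels, tx)) + noise_std * rng.standard_normal()
    return rx.real / np.sqrt(eta)                          # estimate of sum(values)

rng = np.random.default_rng(1)
vals = rng.uniform(0.0, 1.0, 4)
h = (rng.standard_normal(4) + 1j * rng.standard_normal(4)) / np.sqrt(2)   # Rayleigh fading
print(oac_sum(vals, h, noise_std=0.05, p_max=1.0, rng=rng), vals.sum())
```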
The future Internet involves several emerging technologies such as 5G and beyond-5G networks, vehicular networks, unmanned aerial vehicle (UAV) networks, and the Internet of Things (IoT). Moreover, the future Internet is becoming heterogeneous and decentralized, with a large number of network entities involved. Each entity may need to make local decisions to improve network performance under dynamic and uncertain network environments. Standard learning algorithms such as single-agent reinforcement learning (RL) or deep reinforcement learning (DRL) have recently been used to enable each network entity, as an agent, to adaptively learn its optimal decision-making policy through interaction with an unknown environment. However, such algorithms fail to model the cooperation or competition among network entities and simply treat other entities as part of the environment, which may cause the non-stationarity issue. Multi-agent reinforcement learning (MARL) allows each network entity to learn its optimal policy by observing not only the environment but also the policies of other entities. As a result, MARL can significantly improve the learning efficiency of network entities and has recently been used to solve various issues in emerging networks. In this paper, we therefore review the applications of MARL in emerging networks. In particular, we provide a tutorial on MARL as well as a comprehensive survey of its applications in the next-generation Internet. Specifically, we first introduce single-agent RL and MARL. Then, we review a number of applications of MARL for solving emerging issues in the future Internet, including network access, transmit power control, computation offloading, content caching, packet routing, trajectory design for UAV networks, and network security.
In this chapter, we mainly focus on collaborative training across wireless devices. Training an ML model is equivalent to solving an optimization problem, and many distributed optimization algorithms have been developed over the past decades. These distributed ML algorithms provide data locality; that is, a joint model can be trained collaboratively while the data of each participating device remains local. This addresses, to some extent, privacy concerns. They also provide computational scalability, as they allow exploiting computational resources distributed across many edge devices. In practice, however, this does not directly translate into a linear gain in the overall learning speed with the number of devices. This is partly due to the communication bottleneck limiting the overall computation speed. Additionally, wireless devices are highly heterogeneous in their computational capabilities, and both their computation speed and communication rate can be highly time-varying due to physical factors. Therefore, distributed learning algorithms, particularly those implemented at the wireless network edge, must be carefully designed, taking into account the impact of time-varying communication networks as well as the heterogeneous and stochastic computation capabilities of devices.
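A toy FedAvg-style round illustrates the data-locality point: only model parameters cross the communication link, while each device's data stays local. The linear-regression loss and all constants below are placeholders, a minimal sketch rather than any specific algorithm from the chapter.

```python
import numpy as np

def local_sgd_round(w_global, device_data, lr=0.1, local_steps=5):
    """One round of collaborative training with data locality: each device
    refines the global model on its own data, and only the updated parameters
    travel over the wireless link; raw data never leaves the device."""
    updates = []
    for X, y in device_data:                      # heterogeneous local datasets
        w = w_global.copy()
        for _ in range(local_steps):              # local computation, no communication
            grad = 2.0 * X.T @ (X @ w - y) / len(y)
            w -= lr * grad
        updates.append(w)                         # the only uplink traffic
    return np.mean(updates, axis=0)               # server-side aggregation

rng = np.random.default_rng(2)
true_w = np.array([1.0, -2.0])
devices = []
for _ in range(4):
    X = rng.standard_normal((20, 2))
    devices.append((X, X @ true_w + 0.01 * rng.standard_normal(20)))
w = np.zeros(2)
for _ in range(30):                               # communication rounds
    w = local_sgd_round(w, devices)
```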
This paper develops a class of low-complexity device scheduling algorithms for over-the-air federated learning via the method of matching pursuit. The proposed schemes closely track the close-to-optimal performance achieved by difference-of-convex programming, and significantly outperform the well-known benchmark algorithms based on convex relaxation. Compared to the state-of-the-art, the proposed schemes pose a drastically lower computational load on the system: for $k$ devices and $n$ antennas at the parameter server, the benchmark complexity scales with $\left(n^2 + k\right)^3 + n^6$, while the complexity of the proposed schemes scales with $k^p n^q$ for some $0 < p, q \leq 2$. The efficiency of the proposed schemes is confirmed via numerical experiments on the CIFAR-10 dataset.
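The abstract does not spell out the selection criterion, so the sketch below shows only the classic orthogonal matching pursuit template on which such greedy schedulers build: add one element at a time according to a score, then refit. The correlation score and least-squares refit are the textbook choices, standing in for whatever scheduling metric the paper actually employs.

```python
import numpy as np

def orthogonal_matching_pursuit(A, y, k):
    """Classic OMP template: greedily add the column most correlated with the
    current residual, then refit by least squares over the selected support.
    Device-scheduling schemes in this vein reuse the same add-one-element loop
    with their own scheduling metric in place of the correlation score."""
    residual, support = y.copy(), []
    for _ in range(k):
        scores = np.abs(A.T @ residual)
        scores[support] = -np.inf                    # do not reselect
        support.append(int(np.argmax(scores)))
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_s
    return support, x_s
```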
This work revisits the joint beamforming (BF) and antenna selection (AS) problem, as well as its robust beamforming (RBF) version under imperfect channel state information (CSI). Such problems arise when the number of radio frequency (RF) chains is smaller than the number of antenna elements at the transmitter, which has become a key consideration in the era of large-scale arrays. The joint (R)BF & AS problem is a mixed-integer and nonlinear program, and thus finding its optimal solution is often costly, if not outright impossible. The vast majority of prior works tackled these problems using continuous-optimization-based approximations, but such approximations do not ensure the optimality or even feasibility of the solutions. The main contributions of this work are threefold. First, an effective branch-and-bound (B&B) framework for solving the problems of interest is proposed. Leveraging existing BF and RBF solvers, it is shown that the B&B framework guarantees the global optimality of the considered problems. Second, to expedite the potentially costly B&B algorithm, a machine learning (ML)-based scheme is proposed to help skip intermediate states of the B&B search tree. The learning model features a graph neural network (GNN)-based design that is resilient to a challenge commonly encountered in wireless communications, namely, the change of problem size (e.g., the number of users) across the training and test stages. Third, comprehensive performance characterizations are presented, showing that the GNN-based method retains the global optimality of B&B with reduced complexity under reasonable conditions. Numerical simulations also show that the ML-based acceleration often achieves considerable speedups relative to B&B.
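A generic best-first B&B skeleton with a pruning callback makes the second contribution concrete: the callback plays the role of the GNN that skips intermediate nodes of the search tree. The bounding and feasibility oracles are abstracted into placeholder functions and do not reflect the paper's BF/RBF solvers.

```python
import heapq

def branch_and_bound(relax_bound, feasible_value, n_vars, skip_node=None):
    """Best-first branch and bound over binary selection variables (e.g.,
    antenna on/off). `relax_bound(fixed)` returns an upper bound of the
    relaxation with some variables fixed; `feasible_value(fixed)` returns
    (objective, solution) of a feasible completion. `skip_node` is an optional
    learned classifier (the GNN's role in the paper) that prunes unpromising
    nodes. All interfaces are placeholders, not the paper's exact design."""
    best_val, best_sol = float("-inf"), None
    root, tick = {}, 0
    heap = [(-relax_bound(root), 0, root)]
    while heap:
        neg_bound, _, fixed = heapq.heappop(heap)
        if -neg_bound <= best_val:
            continue                                   # bound-based pruning
        if skip_node is not None and skip_node(fixed):
            continue                                   # ML-based pruning of intermediate nodes
        val, sol = feasible_value(fixed)
        if val > best_val:
            best_val, best_sol = val, sol
        if len(fixed) < n_vars:                        # branch on the next variable
            for b in (0, 1):
                child = {**fixed, len(fixed): b}
                tick += 1
                heapq.heappush(heap, (-relax_bound(child), tick, child))
    return best_val, best_sol
```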
In recent years, mobile devices are equipped with increasingly advanced sensing and computing capabilities. Coupled with advancements in Deep Learning (DL), this opens up countless possibilities for meaningful applications, e.g., for medical purposes and in vehicular networks. Traditional cloud-based Machine Learning (ML) approaches require the data to be centralized in a cloud server or data center. However, this results in critical issues related to unacceptable latency and communication inefficiency. To this end, Mobile Edge Computing (MEC) has been proposed to bring intelligence closer to the edge, where data is produced. However, conventional enabling technologies for ML at mobile edge networks still require personal data to be shared with external parties, e.g., edge servers. Recently, in light of increasingly stringent data privacy legislations and growing privacy concerns, the concept of Federated Learning (FL) has been introduced. In FL, end devices use their local data to train an ML model required by the server. The end devices then send the model updates rather than raw data to the server for aggregation. FL can serve as an enabling technology in mobile edge networks since it enables the collaborative training of an ML model and also enables DL for mobile edge network optimization. However, in a large-scale and complex mobile edge network, heterogeneous devices with varying constraints are involved. This raises challenges of communication costs, resource allocation, and privacy and security in the implementation of FL at scale. In this survey, we begin with an introduction to the background and fundamentals of FL. Then, we highlight the aforementioned challenges of FL implementation and review existing solutions. Furthermore, we present the applications of FL for mobile edge network optimization. Finally, we discuss the important challenges and future research directions in FL.
The exponential growth of Internet-connected systems has generated numerous challenges, such as spectrum shortage issues, which require efficient spectrum sharing (SS) solutions. Complicated and dynamic SS systems can be exposed to different potential security and privacy issues, requiring protection mechanisms to be adaptive, reliable, and scalable. Machine learning (ML)-based methods have frequently been proposed to address those issues. In this article, we provide a comprehensive survey of the recent ML-based SS methods, the most critical security issues, and the corresponding defense mechanisms. In particular, we elaborate on the state-of-the-art methodologies for improving the performance of SS communication systems, including ML-based database-assisted SS networks, ML-based LTE-U networks, ML-based ambient backscatter networks, and other ML-based SS solutions. We also discuss the security issues at the physical layer and the corresponding defense strategies based on ML algorithms, including primary user emulation (PUE) attacks, spectrum sensing data falsification (SSDF) attacks, jamming attacks, eavesdropping attacks, and privacy issues. Finally, extensive discussions of the open challenges for ML-based SS are also given. This comprehensive review is intended to provide a foundation for and facilitate future studies exploring the potential of emerging ML for coping with increasingly complex SS systems and their security problems.
Federated learning has generated significant interest, with nearly all works focusing on a "star" topology where nodes/devices are each connected to a central server. We migrate away from this architecture and extend it through the network dimension to the case where multiple layers of nodes exist between the end devices and the server. Specifically, we develop multi-stage hybrid federated learning (MH-FL), a hybrid of intra- and inter-layer model learning that treats the network as a multi-layer cluster-based structure. MH-FL considers the topology structures among the nodes in the clusters, including local networks formed via device-to-device (D2D) communications, and presumes a semi-decentralized architecture for federated learning. It orchestrates the devices at different network layers in a collaborative/cooperative manner (i.e., using D2D interactions) to form local consensus on the model parameters, and combines this with multi-stage parameter relaying between the layers of the tree-shaped hierarchy. We derive upper bounds on the convergence of MH-FL with respect to parameters of the network topology (e.g., the spectral radius) and of the learning algorithm (e.g., the number of D2D rounds in different clusters). We obtain a set of policies for the number of D2D rounds in different clusters that guarantee either a finite optimality gap or convergence to the global optimum. We then develop a distributed control algorithm for MH-FL that tunes the number of D2D rounds in each cluster over time to meet specific convergence criteria. Our experiments on real-world datasets verify the analytical results and demonstrate the advantages of MH-FL in terms of resource utilization metrics.
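A rough sketch of one MH-FL-style round under simplifying assumptions (a single upper layer; the cluster head relays its own post-consensus parameters) is given below; it is meant only to illustrate the interplay between D2D consensus rounds and upward parameter relaying, and all topologies and constants are toy choices.

```python
import numpy as np

def d2d_consensus(params, adj, rounds, step=0.3):
    """Distributed mixing inside one cluster: each device repeatedly blends its
    parameters with its D2D neighbors', driving the cluster toward agreement.
    How fast this happens depends on the spectral properties of the mixing
    step, in line with the bounds mentioned in the abstract."""
    params = np.array(params, dtype=float)            # shape: (devices, dim)
    deg = adj.sum(axis=1, keepdims=True)
    for _ in range(rounds):
        neighbor_avg = adj @ params / np.maximum(deg, 1)
        params = params + step * (neighbor_avg - params)
    return params

def mh_fl_round(clusters, d2d_rounds):
    """One multi-stage round: local consensus within each cluster, then one
    representative per cluster relays its parameters upward, where the
    representatives are averaged into the next-layer model."""
    relayed = []
    for params, adj in clusters:
        mixed = d2d_consensus(params, adj, d2d_rounds)
        relayed.append(mixed[0])                       # cluster head relays upward
    return np.mean(relayed, axis=0)                    # upper-layer aggregation
```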
Combinatorial optimization is a well-established area in operations research and computer science. Until recently, its methods have focused on solving problem instances in isolation, ignoring that they often stem from related data distributions in practice. However, recent years have seen a surge of interest in using machine learning, especially graph neural networks (GNNs), as a key building block for combinatorial tasks, either directly as solvers or by enhancing exact solvers. The inductive bias of GNNs effectively encodes combinatorial and relational inputs due to their invariance to permutations and their awareness of input sparsity. This paper presents a conceptual review of recent key advancements in this emerging field, aimed at researchers in both optimization and machine learning.
Influence Maximization (IM) is a classical combinatorial optimization problem, which can be widely used in mobile networks, social computing, and recommendation systems. It aims at selecting a small number of users so as to maximize the influence spread across the online social network. Because of its potential commercial and academic value, many researchers have focused on studying the IM problem from different perspectives. The main challenge comes from the NP-hardness of the IM problem and the \#P-hardness of estimating the influence spread; traditional algorithms for overcoming them can be categorized into two classes: heuristic algorithms and approximation algorithms. However, there is no theoretical guarantee for heuristic algorithms, and the theoretical design is approaching its limit. Therefore, it is almost impossible to further optimize and improve their performance. With the rapid development of artificial intelligence, technology based on Machine Learning (ML) has achieved remarkable achievements in many fields. In view of this, in recent years, a number of new methods have emerged to solve combinatorial optimization problems by using ML-based techniques. These methods have the advantages of fast solving speed and strong generalization ability to unknown graphs, which provide a brand-new direction for solving combinatorial optimization problems. Therefore, we set aside the traditional algorithms based on iterative search and review the recent development of ML-based methods, especially Deep Reinforcement Learning, for solving the IM problem and other variants in social networks. We focus on summarizing the relevant background knowledge, basic principles, common methods, and applied research. Finally, the challenges that need to be solved urgently in future IM research are pointed out.
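For concreteness, the classical simulation-based greedy baseline that the surveyed ML-based methods aim to improve upon looks as follows under the independent cascade model; the graph format and constants are toy choices, a sketch rather than any specific algorithm from the survey.

```python
import random

def ic_spread(graph, seeds, p=0.1, mc=200, rng=random):
    """Monte-Carlo estimate of the expected spread under the independent
    cascade model: each newly activated node activates each inactive neighbor
    once with probability p."""
    total = 0
    for _ in range(mc):
        active, frontier = set(seeds), list(seeds)
        while frontier:
            nxt = []
            for u in frontier:
                for v in graph.get(u, []):
                    if v not in active and rng.random() < p:
                        active.add(v)
                        nxt.append(v)
            frontier = nxt
        total += len(active)
    return total / mc

def greedy_im(graph, k, p=0.1, mc=200):
    """Classic greedy heuristic for influence maximization: repeatedly add the
    node with the largest marginal gain in estimated spread. Its cost comes
    from the repeated spread simulations, which is exactly what motivates the
    learning-based alternatives discussed above."""
    seeds = []
    for _ in range(k):
        gains = {v: ic_spread(graph, seeds + [v], p, mc)
                 for v in graph if v not in seeds}
        seeds.append(max(gains, key=gains.get))
    return seeds
```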
This paper presents a methodology for integrating machine learning techniques into metaheuristics for solving combinatorial optimization problems. Namely, we propose a general machine learning framework for neighbor generation in metaheuristic search. We first define an efficient neighborhood structure constructed by applying a transformation to a selected subset of variables from the current solution. Then, the key of the proposed methodology is to generate promising neighbors by selecting a proper subset of variables that contains a descent of the objective in the solution space. To learn a good variable selection strategy, we formulate the problem as a classification task that exploits structural information from the characteristics of the problem and from high-quality solutions. We validate our methodology on two metaheuristic applications: a Tabu Search scheme for solving a Wireless Network Optimization problem and a Large Neighborhood Search heuristic for solving Mixed-Integer Programs. The experimental results show that our approach is able to achieve a satisfactory trade-off between the exploration of a larger solution space and the exploitation of high-quality solution regions on both applications.
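A schematic of the neighbor-generation idea described above, with the trained classifier and the problem-specific transformation left as placeholder callables; this is a hedged sketch of the general recipe, not the paper's Tabu Search or LNS implementation.

```python
import numpy as np

def learned_neighbor(solution, score_vars, k, transform, objective):
    """Neighbor generation in the spirit of the methodology: a classifier
    scores each variable's promise of yielding a descent direction, a top-k
    subset is selected, and a transformation is applied to that subset.
    `score_vars` stands in for the trained classifier and `transform` for the
    problem-specific move -- both are placeholders."""
    scores = score_vars(solution)                    # learned variable-selection strategy
    chosen = np.argsort(scores)[-k:]                 # most promising variables
    candidate = transform(solution, chosen)
    # simple descent acceptance rule; a Tabu Search or LNS scheme would plug
    # in its own acceptance criterion here
    return candidate if objective(candidate) < objective(solution) else solution
```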
Deep learning has been shown to be successful in a number of domains, ranging from acoustics, images, to natural language processing. However, applying deep learning to the ubiquitous graph data is non-trivial because of the unique characteristics of graphs. Recently, substantial research efforts have been devoted to applying deep learning methods to graphs, resulting in beneficial advances in graph analysis techniques. In this survey, we comprehensively review the different types of deep learning methods on graphs. We divide the existing methods into five categories based on their model architectures and training strategies: graph recurrent neural networks, graph convolutional networks, graph autoencoders, graph reinforcement learning, and graph adversarial methods. We then provide a comprehensive overview of these methods in a systematic manner mainly by following their development history. We also analyze the differences and compositions of different methods. Finally, we briefly outline the applications in which they have been used and discuss potential future research directions.
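To make the "graph convolutional networks" category concrete, a minimal Kipf-Welling-style layer is sketched below; it is a standard textbook formulation rather than anything specific to this survey.

```python
import numpy as np

def gcn_layer(A, H, W):
    """Minimal graph convolutional layer:
    H' = ReLU(D^{-1/2} (A + I) D^{-1/2} H W),
    i.e., normalized neighborhood averaging followed by a shared linear map."""
    A_hat = A + np.eye(A.shape[0])                   # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(A_norm @ H @ W, 0.0)
```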
Pre-publication draft of a book to be published by Morgan & Claypool publishers. Unedited version released with permission. All relevant copyrights held by the author and publisher extend to this pre-publication draft.