Unmanned aerial vehicle (UAV) swarms are considered a promising technology for next-generation communication networks thanks to their flexibility, mobility, low cost, and ability to provide services collaboratively and autonomously. Distributed learning (DL) enables UAV swarms to intelligently provide communication services, multi-directional remote surveillance, and target tracking. In this survey, we first introduce several popular DL algorithms, including federated learning (FL), multi-agent reinforcement learning (MARL), distributed inference, and split learning, and present a comprehensive overview of their applications to UAV swarms, such as trajectory design, power control, wireless resource allocation, user assignment, perception, and satellite communications. We then present several state-of-the-art applications of UAV swarms in wireless communication systems, such as reconfigurable intelligent surfaces (RIS), virtual reality (VR), and semantic communications, and discuss the problems and challenges that DL-enabled UAV swarms can address in these applications. Finally, we describe open problems in applying DL to UAV swarms and future research directions for DL-enabled UAV swarms. In summary, this paper provides a comprehensive survey of DL applications for UAV swarms across a wide range of scenarios.
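To make the federated learning ingredient concrete, the round structure of FedAvg over a swarm can be sketched as follows (a minimal numpy-only illustration with hypothetical names, not any specific paper's implementation): each UAV trains locally, then the aggregator averages the weights in proportion to local dataset sizes.

```python
import numpy as np

def fedavg(local_weights, num_samples):
    """Aggregate local model weights by sample-weighted averaging (FedAvg)."""
    total = sum(num_samples)
    return sum(w * (n / total) for w, n in zip(local_weights, num_samples))

# Three UAVs report locally trained weights for the same model layer.
uav_weights = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
uav_samples = [10, 10, 20]  # local dataset sizes used as aggregation weights
global_weights = fedavg(uav_weights, uav_samples)
```

The third UAV holds half the data, so its weights contribute half of the average; the aggregated model is then broadcast back to the swarm for the next round.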
It is common knowledge that datasets with high-quality samples play an important role in artificial intelligence (AI), machine learning (ML), and related studies. However, although AI/ML was introduced into wireless research long ago, few datasets are commonly used in the research community. Without a common dataset, AI-based methods proposed for wireless systems are hard to compare, both with traditional baselines and with each other. Existing wireless AI research usually relies on datasets generated from statistical models or from ray-tracing simulations with a limited set of environments. Statistical data hinder further fine-tuning of the trained AI models for a specific scenario, while ray-tracing data with limited environments reduce the generalization capability of the trained models. In this paper, we present the Wireless AI Research Dataset (WAIR-D), which consists of two scenarios. Scenario 1 contains 10,000 environments with sparsely dropped user equipments (UEs), and Scenario 2 contains 100 environments with densely dropped UEs. The environments are randomly selected from the real-world maps of more than 40 cities. The large volume of data ensures that trained AI models enjoy good generalization capability, while fine-tuning can easily be carried out on a specific chosen environment. Moreover, WAIR-D provides both the wireless channels and the corresponding environmental information, so that extra-information-aided communication mechanisms can be designed and evaluated. WAIR-D gives researchers benchmarks to compare their designs and to reproduce others' results. In this paper, we describe the detailed construction of the dataset and give examples of its use.
Explainable Artificial Intelligence (XAI) is transforming the field of Artificial Intelligence (AI) by enhancing end-users' trust in machines. As the number of connected devices continues to grow, the Internet of Things (IoT) market needs to be trustworthy for end-users. However, the existing literature still lacks a systematic and comprehensive survey on the use of XAI for IoT. To bridge this gap, in this paper we review XAI frameworks with a focus on their characteristics and support for IoT. We illustrate widely used XAI services for IoT applications, such as security enhancement, the Internet of Medical Things (IoMT), Industrial IoT (IIoT), and the Internet of City Things (IoCT). We also suggest implementation choices for XAI models over IoT systems in these applications, with appropriate examples, and summarize the key inferences for future work. Moreover, we present cutting-edge developments in edge XAI architectures and the support of sixth-generation (6G) communication services for IoT applications, along with key inferences. In a nutshell, this paper constitutes the first holistic compilation on the development of XAI-based frameworks tailored to the demands of future IoT use cases.
In split machine learning (ML), different partitions of a neural network (NN) are executed by different computing nodes, which requires a large amount of communication. To ease this communication burden, over-the-air computation (OAC) can efficiently implement all or part of the computation at the same time as the communication. In particular, we show that the inter-layer connections of a NN of any size can be mathematically decomposed into a set of linear precoding and combining transformations over MIMO channels. Therefore, the precoding matrix at the transmitter and the combining matrix at the receiver of each MIMO link, together with the channel matrix itself, can jointly serve as a fully connected layer of the NN. In such a split ML system, the precoding and combining matrices are treated as trainable parameters, while the MIMO channel matrix is regarded as an unknown (implicit) parameter. Based on the proposed system, we introduce its implementation over a wireless network and provide the problem formulation. We also generalize the proposed scheme to conventional NNs. Finally, we extend the proposed scheme to the widely used convolutional neural networks and demonstrate its effectiveness under both static and quasi-static memory channel conditions with comprehensive simulations.
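The decomposition described above can be checked numerically: with a transmit precoding matrix V, a MIMO channel matrix H, and a receive combining matrix U, the cascade U H V behaves as a single fully connected weight matrix. A numpy sketch with toy dimensions (an illustration of the principle, not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, n_tx, n_rx, d_out = 4, 8, 8, 3

V = rng.standard_normal((n_tx, d_in))   # transmitter precoding (trainable)
H = rng.standard_normal((n_rx, n_tx))   # MIMO channel (implicit, not trained)
U = rng.standard_normal((d_out, n_rx))  # receiver combining (trainable)

x = rng.standard_normal(d_in)           # layer input at the transmitter
y_split = U @ (H @ (V @ x))             # computed stage by stage over the air
W_eff = U @ H @ V                       # equivalent fully connected weight
assert np.allclose(y_split, W_eff @ x)
```

Training V and U while H stays fixed (and unknown) is what lets the over-the-air link itself stand in for one dense layer of the split network.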
We address the problem of estimating correspondences from a general marker (e.g., a movie poster) to an image that captures such a marker. Conventionally, this problem is solved by fitting a homography model based on sparse feature matching. However, such methods can only handle plane-like markers, and sparse features do not fully exploit appearance information. In this paper, we propose a novel framework, NeuralMarker, which trains a neural network to estimate dense marker correspondences under various challenging conditions, such as marker deformation and harsh illumination. We also propose a novel marker-correspondence evaluation method with annotated real marker-image pairs and create a new benchmark. We show that NeuralMarker significantly outperforms previous methods and enables new interesting applications, including augmented reality (AR) and video editing.
Reconfigurable intelligent surfaces (RIS) can significantly enhance the service coverage of terahertz massive multiple-input multiple-output (MIMO) communication systems. However, obtaining accurate high-dimensional channel state information (CSI) with limited pilot and feedback signaling overhead is challenging, which severely degrades the performance of conventional spatial-division multiple access. To improve robustness against CSI imperfections, this paper proposes a deep learning (DL)-based rate-splitting multiple access (RSMA) scheme for RIS-aided terahertz multi-user MIMO systems. Specifically, we first propose a hybrid data-model-driven DL-based RSMA precoding scheme, including passive precoding at the RIS as well as analog active precoding and RSMA digital active precoding at the base station (BS). For the RIS passive precoding, we propose a transformer-based data-driven RIS reflection network (RRN). For the BS analog active precoding, we propose a matched-filter-based analog precoding scheme, since the BS and RIS adopt LoS-MIMO antenna array architectures. For the BS RSMA digital active precoding, we propose a low-complexity approximate weighted minimum mean-square error (AWMMSE) digital precoding scheme. Furthermore, to achieve better precoding performance with lower computational complexity, a model-driven deep-unfolding active precoding network (DFAPN) is also designed by combining the proposed AWMMSE scheme with DL. Then, to acquire accurate CSI at the BS and thereby improve the spectral efficiency of the RSMA precoding scheme, we propose a CSI acquisition network (CAN) with low pilot and feedback signaling overhead, in which the downlink pilot transmission, the CSI feedback at the user equipments (UEs), and the CSI reconstruction at the BS are modeled as an end-to-end neural network based on transformers.
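A common narrowband model of the RIS-assisted link helps picture what the passive precoder controls: the effective channel is the direct BS-UE path plus the BS-RIS-UE cascade, modulated by the RIS phase shifts. A numpy sketch with toy dimensions (an illustrative textbook-style model, not this paper's system model):

```python
import numpy as np

rng = np.random.default_rng(1)
n_bs, n_ris = 4, 16  # BS antennas and RIS elements (toy sizes)

# Complex Gaussian placeholders for the three component channels.
h_d = rng.standard_normal(n_bs) + 1j * rng.standard_normal(n_bs)      # BS <-> UE direct
G = rng.standard_normal((n_ris, n_bs)) + 1j * rng.standard_normal((n_ris, n_bs))  # BS <-> RIS
h_r = rng.standard_normal(n_ris) + 1j * rng.standard_normal(n_ris)    # RIS <-> UE

theta = rng.uniform(0, 2 * np.pi, n_ris)   # RIS phase shifts: the passive precoder
Phi = np.diag(np.exp(1j * theta))          # unit-modulus reflection coefficients

h_eff = h_d + G.conj().T @ Phi @ h_r       # effective BS <-> UE channel
```

Only theta is adjustable at the RIS (each element can shift phase but not amplify), which is why the paper's RRN outputs phase configurations rather than arbitrary matrices.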
Most machine learning models for audio tasks operate on handcrafted features, i.e., spectrograms. However, it remains unknown whether spectrograms can be replaced by deep-learning-based features. In this paper, we answer this question by comparing different learnable neural networks against successful spectrogram-based models, and propose a General Audio Feature eXtractor (GAFX) based on dual U-Net (GAFX-U), ResNet (GAFX-R), and attention (GAFX-A) modules. We design experiments to evaluate music genre classification on the GTZAN dataset with an Audio Spectrogram Transformer (AST) classifier, conduct a detailed ablation study of different configurations of our framework, and show that our model GAFX-U achieves competitive performance.
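The handcrafted front-end being questioned here is easy to make concrete: a magnitude spectrogram is just windowed FFTs of overlapping frames. A numpy-only sketch (an illustration of the baseline feature, not the paper's pipeline):

```python
import numpy as np

def spectrogram(signal, n_fft=256, hop=128):
    """Magnitude spectrogram: Hann-windowed, half-overlapping FFT frames."""
    window = np.hanning(n_fft)
    n_frames = 1 + (len(signal) - n_fft) // hop
    frames = np.stack([signal[i * hop : i * hop + n_fft] * window
                       for i in range(n_frames)])
    return np.abs(np.fft.rfft(frames, axis=1))  # shape (n_frames, n_fft // 2 + 1)

t = np.linspace(0, 1, 8000, endpoint=False)
sig = np.sin(2 * np.pi * 440 * t)  # a 440 Hz tone sampled at 8 kHz
spec = spectrogram(sig)
```

A learnable front-end such as GAFX replaces this fixed transform with trainable layers that consume the raw waveform directly.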
This paper addresses the important problem of ranking pre-trained deep neural networks and screening the most transferable ones for downstream tasks. It is challenging because the ground-truth model ranking for each task can only be generated by fine-tuning the pre-trained models on the target dataset, which is brute-force and computationally expensive. Recent advanced methods have proposed several lightweight transferability metrics to predict fine-tuning results. However, these approaches only capture static representations and ignore the fine-tuning dynamics. To this end, this paper proposes a new transferability metric, called Self-challenging Fisher Discriminant Analysis (SFDA), which has many attractive benefits that existing works lack. First, SFDA can embed the static features into a Fisher space and refine them for better separability between classes. Second, SFDA uses a self-challenging mechanism to encourage different pre-trained models to differentiate hard examples. Third, SFDA can easily select multiple pre-trained models for model ensembles. Extensive experiments on 33 pre-trained models over 11 downstream tasks show that SFDA is efficient, effective, and robust in measuring the transferability of pre-trained models. For instance, compared with the state-of-the-art method NLEEP, SFDA demonstrates an average gain of 59.1% while bringing a 22.5x speedup in wall-clock time. The code will be available at https://github.com/tencentarc/sfda.
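The first ingredient of SFDA, Fisher discriminant analysis, can be sketched as a simple two-class separability score: features whose classes are well separated relative to their spread should transfer better. A toy numpy illustration of that principle (not the paper's actual metric):

```python
import numpy as np

def fisher_score(features, labels):
    """Between-class vs. within-class scatter along the class-mean direction."""
    a, b = features[labels == 0], features[labels == 1]
    mu_a, mu_b = a.mean(axis=0), b.mean(axis=0)
    w = mu_b - mu_a                        # discriminant direction (toy choice)
    between = (w @ (mu_b - mu_a)) ** 2     # separation of projected class means
    within = np.var(a @ w) + np.var(b @ w) # spread of projected classes
    return between / within

rng = np.random.default_rng(2)
labels = np.repeat([0, 1], 100)
sep = np.concatenate([rng.normal(0, 1, (100, 8)), rng.normal(5, 1, (100, 8))])
mix = np.concatenate([rng.normal(0, 1, (100, 8)), rng.normal(0.5, 1, (100, 8))])
```

Scoring both feature sets, the well-separated one wins, mimicking how a transferability metric would rank the pre-trained model that produced it higher.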
Capitalizing on recent advances in image generation models, existing controllable face image synthesis methods are able to generate high-fidelity images with some controllability, e.g., controlling the shape, expression, texture, and pose of the generated face images. However, these methods focus on 2D image generation models, which are prone to producing inconsistent face images under large expression and pose variations. In this paper, we propose a new NeRF-based conditional 3D face synthesis framework that enables 3D controllability over the generated face images by imposing explicit 3D conditions from 3D face priors. At its core is a conditional Generative Occupancy Field (cGOF) that effectively enforces the generated face shape to conform to a given 3D Morphable Model (3DMM) mesh. To achieve accurate control over the fine-grained 3D face shape of the synthesized images, we further incorporate a 3D landmark loss as well as a volume warping loss into our synthesis algorithm. Experiments validate the effectiveness of the proposed method, which is able to generate high-fidelity face images and shows more precise 3D controllability than state-of-the-art 2D-based controllable face synthesis methods. Find code and demos at https://keqiangsun.github.io/projects/cgof.
We introduce the optical Flow transFormer, dubbed FlowFormer, a transformer-based neural network architecture for learning optical flow. FlowFormer tokenizes the 4D cost volume built from an image pair, encodes the cost tokens into a cost memory with alternate-group transformer (AGT) layers in a novel latent space, and decodes the cost memory via a recurrent transformer decoder with dynamic positional cost queries. On the Sintel benchmark, FlowFormer achieves 1.144 and 2.183 average end-point error (AEPE) on the clean and final passes, a 17.6% and 11.6% error reduction from the best published results (1.388 and 2.47). Besides, FlowFormer also achieves strong generalization performance. Without being trained on Sintel, FlowFormer achieves 0.95 AEPE on the clean pass of the Sintel training set, outperforming the best published result (1.29) by 26.9%.
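The 4D cost volume that FlowFormer tokenizes is built from all-pairs feature similarity between the two frames: each entry is the dot product of a feature vector from frame 1 with one from frame 2. A minimal numpy sketch with toy feature maps (an illustration of the data structure, not the actual network):

```python
import numpy as np

rng = np.random.default_rng(3)
C, H, W = 16, 6, 8                     # feature channels and map size (toy)
f1 = rng.standard_normal((C, H, W))    # features of frame 1
f2 = rng.standard_normal((C, H, W))    # features of frame 2

# cost[i, j, k, l] = dot product of f1 at pixel (i, j) with f2 at pixel (k, l)
cost = np.einsum('cij,ckl->ijkl', f1, f2)
```

The resulting H x W x H x W tensor grows quadratically with image area, which is why FlowFormer compresses it into a latent cost memory before decoding flow.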