We consider the problem of detecting radar pulses from raw I/Q waveforms in the presence of noise and in the absence of synchronization. We also consider the problem of classifying multiple overlapping radar pulses. For both problems, we design deep neural networks (DNNs) that are robust to variations in synchronization, pulse width, and SNR. Our designs yield more than a 100x improvement over the previous state-of-the-art error rates.
Translated by Google Translate
Communications systems to date are primarily designed with the goal of reliable (error-free) transfer of digital sequences (bits). Next generation (NextG) communication systems are beginning to explore shifting this design paradigm of reliably decoding bits to reliably executing a given task. Task-oriented communications system design is likely to find impactful applications, for example, considering the relative importance of messages. In this paper, wireless signal classification is considered as the task to be performed in the NextG Radio Access Network (RAN) for signal intelligence and spectrum awareness applications such as user equipment (UE) identification and authentication, and incumbent signal detection for spectrum co-existence. For that purpose, edge devices collect wireless signals and communicate with the NextG base station (gNodeB) that needs to know the signal class. Edge devices may not have sufficient processing power and may not be trusted to perform the signal classification task, whereas the transfer of the captured signals from the edge devices to the gNodeB may not be efficient or even feasible subject to stringent delay, rate, and energy restrictions. We present a task-oriented communications approach, where the transmitter, receiver, and classifier functionalities are jointly trained as two deep neural networks (DNNs), one for the edge device and another for the gNodeB. We show that this approach achieves better accuracy with smaller DNNs compared to baselines that treat communications and signal classification as two separate tasks. Finally, we discuss how adversarial machine learning poses a major security threat to the use of DNNs for task-oriented communications. We demonstrate the major performance loss under backdoor (Trojan) attacks and adversarial (evasion) attacks that target the training and test processes of task-oriented communications.
Given the scarcity of the wireless spectrum and the ever-increasing demand for spectrum use brought about by recent technological breakthroughs in wireless communication, the problem of interference continues to persist. Despite recent advances in techniques to address interference, it still presents a challenge to effective use of the spectrum. This is partly due to the rise of Wi-Fi use in the unlicensed and managed shared bands, and of opportunistic spectrum access solutions such as LTE Unlicensed (LTE-U), LTE Licensed Assisted Access (LAA), and 5G NR. Consequently, the need for effective spectrum usage schemes that are robust against interference has never been more important. In the past, most solutions addressed the problem with avoidance techniques and non-AI mitigation methods (for example, adaptive filters). A key shortcoming of non-AI techniques is the need for domain expertise to extract or engineer signal features such as the cyclostationarity, bandwidth, and modulation of the interfering signals. Recently, researchers have successfully explored AI/ML-enabled physical (PHY) layer techniques, especially deep learning, that reduce or compensate for the interfering signal rather than simply avoiding it. The underlying idea of ML-based approaches is to learn the interference or its characteristics from data, thereby sidestepping the need for domain expertise in interference suppression. In this paper, we review a wide range of techniques that have applied deep learning to interference suppression. We provide comparisons of, and guidelines for, the many different types of deep learning techniques used in interference suppression. In addition, we highlight the challenges and potential future research directions for the successful adoption of deep learning in interference suppression.
Future communication networks must address the scarce spectrum to accommodate the extensive growth of heterogeneous wireless devices. Wireless signal recognition is becoming increasingly significant for spectrum monitoring, spectrum management, secure communications, and so forth. Consequently, comprehensive spectrum awareness at the edge has the potential to serve as a key enabler for networks beyond 5G. State-of-the-art studies in this domain have (i) only focused on a single task - modulation or signal (protocol) classification - which in many cases is insufficient information for a system to act on, (ii) considered either radar or communication waveforms (a homogeneous waveform category), and (iii) not addressed edge deployment at the neural network design stage. In this work, for the first time in the wireless communication domain, we exploit the potential of a deep neural network based multi-task learning (MTL) framework to simultaneously learn modulation and signal classification tasks while considering heterogeneous wireless signals, such as radar and communication waveforms, in the electromagnetic spectrum. The proposed MTL architecture benefits from the mutual relation between the two tasks to improve classification accuracy as well as learning efficiency with a lightweight neural network model. We additionally include an experimental evaluation of the model with over-the-air collected samples, and provide first-hand insight on model compression and the deep learning pipeline for deployment on resource-constrained edge devices. We demonstrate significant computational, memory, and accuracy improvements of the proposed model over two reference architectures. In addition to modeling a lightweight MTL model suitable for resource-constrained embedded radio platforms, we provide a comprehensive heterogeneous wireless signal dataset for public use.
State-of-the-art performance for many emerging edge applications is achieved by deep neural networks (DNNs). Often, these DNNs are location and time sensitive, and the parameters of a specific DNN must be delivered from an edge server to the edge device rapidly and efficiently to carry out time-sensitive inference tasks. In this paper, we introduce AirNet, a novel training and transmission method that allows efficient wireless delivery of DNNs under stringent transmit power and latency constraints. We first train the DNN with noise injection to counter the wireless channel noise. Then we employ pruning to reduce the network size to the available channel bandwidth, and perform knowledge distillation from a larger model to achieve satisfactory performance, despite pruning. We show that AirNet achieves significantly higher test accuracy compared to digital alternatives under the same bandwidth and power constraints. The accuracy of the network at the receiver also exhibits graceful degradation with channel quality, which reduces the requirement for accurate channel estimation. We further improve the performance of AirNet by pruning the network below the available bandwidth, and using channel expansion to provide better robustness against channel noise. We also benefit from unequal error protection (UEP) by selectively expanding more important layers of the network. Finally, we develop an ensemble training approach, which trains a whole spectrum of DNNs, each of which can be used at a different channel condition, resolving the otherwise impractical memory requirements.
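The pruning-then-noisy-transmission idea above can be sketched in a few lines of NumPy. This is a minimal illustration under assumed models, not the AirNet pipeline: `prune_smallest` and `transmit_weights` are hypothetical helper names, the "network" is a bare weight matrix, and the analog channel is modeled as AWGN at a chosen SNR.

```python
import numpy as np

rng = np.random.default_rng(0)

def prune_smallest(w, keep_ratio):
    """Magnitude pruning: keep only the largest-magnitude weights so
    the parameter vector fits the available channel bandwidth."""
    k = int(np.ceil(keep_ratio * w.size))
    thresh = np.sort(np.abs(w).ravel())[-k]
    return np.where(np.abs(w) >= thresh, w, 0.0)

def transmit_weights(w, snr_db, rng):
    """Simulate analog over-the-air delivery of DNN weights: the
    receiver observes the weights plus AWGN at the given SNR."""
    sig_pow = np.mean(w ** 2)
    noise_pow = sig_pow / (10 ** (snr_db / 10))
    return w + rng.normal(0.0, np.sqrt(noise_pow), size=w.shape)

w = rng.normal(size=(8, 8))
w_pruned = prune_smallest(w, keep_ratio=0.25)   # 16 surviving weights
w_rx = transmit_weights(w_pruned, snr_db=20, rng=rng)

print("nonzero after pruning:", np.count_nonzero(w_pruned))
print("rx distortion below signal power:",
      np.mean((w_rx - w_pruned) ** 2) < np.mean(w_pruned ** 2))
```

Training with noise injection would repeat the `transmit_weights` perturbation inside the training loop so the network learns weights that tolerate it.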
To solve inverse problems, plug-and-play (PnP) methods have been developed that replace the proximal step in a convex optimization algorithm with a call to an application-specific denoiser, often implemented using a deep neural network (DNN). Although such methods have been successful, they can be improved. For example, denoisers are usually designed or trained to remove white Gaussian noise, but the denoiser input error in PnP algorithms is often far from white or Gaussian. Approximate message passing (AMP) methods provide white and Gaussian denoiser input error, but only when the forward operator is a large random matrix. In this work, for Fourier-based forward operators, we propose a PnP algorithm based on generalized expectation-consistent (GEC) approximation - a close cousin of AMP - that offers predictable error statistics at each iteration, as well as a new DNN denoiser that leverages those statistics. We apply our approach to magnetic resonance imaging (MRI) image recovery and demonstrate its advantages over existing PnP and AMP methods.
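The plug-and-play idea above can be sketched with a toy problem. This is a generic PnP-ISTA loop, not the paper's GEC algorithm or its learned denoiser: soft-thresholding stands in for the DNN denoiser, and the random forward operator and all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def soft_threshold(v, t):
    """Stand-in 'denoiser': soft-thresholding, the proximal map of the
    l1 norm. A PnP method replaces this call with a DNN denoiser."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

# Toy inverse problem y = A x + noise with a sparse ground truth.
m, n = 60, 100
A = rng.normal(size=(m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, 8, replace=False)] = rng.normal(size=8)
y = A @ x_true + 0.01 * rng.normal(size=m)

# Plug-and-play ISTA: a gradient step on the data-fit term, then a
# "denoising" step, iterated to (approximate) convergence.
step = 0.9 / np.linalg.norm(A, 2) ** 2
x = np.zeros(n)
for _ in range(500):
    x = soft_threshold(x - step * A.T @ (A @ x - y), 0.05 * step)

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print("relative recovery error:", rel_err)
```

The paper's point is that the error `x - x_true` entering the denoiser here is neither white nor Gaussian in general; GEC-style iterations are designed to make its statistics predictable.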
With the development of deep and convolutional neural networks, the field of neural networks has seen significant advances in recent years. While most current works address real-valued models, recent studies reveal that neural networks with hypercomplex-valued parameters can better capture, generalize, and represent the complexity of multidimensional data. This paper explores the application of quaternion-valued convolutional neural networks to the diagnosis of acute lymphoblastic leukemia. Precisely, we compare the performance of real-valued and quaternion-valued convolutional neural networks at classifying lymphoblasts from peripheral blood smear microscopic images. The quaternion-valued convolutional neural network achieved better or similar performance than its corresponding real-valued network, but using only 34% of its parameters. This result confirms that quaternion algebra allows capturing and extracting information from color images with fewer parameters.
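The parameter saving comes from the Hamilton product: one quaternion weight (4 real numbers) maps a 4-channel input to a 4-channel output, where an unconstrained real linear map would need a 4x4 matrix (16 numbers). A minimal sketch of that core operation, not the paper's network:

```python
import numpy as np

def hamilton_product(q, p):
    """Hamilton product of two quaternions given as (r, i, j, k)."""
    r1, i1, j1, k1 = q
    r2, i2, j2, k2 = p
    return np.array([
        r1*r2 - i1*i2 - j1*j2 - k1*k2,
        r1*i2 + i1*r2 + j1*k2 - k1*j2,
        r1*j2 - i1*k2 + j1*r2 + k1*i2,
        r1*k2 + i1*j2 - j1*i2 + k1*r2,
    ])

def quaternion_linear(w, x):
    """One quaternion weight (4 real parameters) mixes a 4-channel
    input into a 4-channel output; a real-valued layer would need a
    4x4 matrix (16 parameters), hence the roughly 1/4 parameter count."""
    return hamilton_product(w, x)

w = np.array([0.5, -0.1, 0.3, 0.2])   # 4 learnable parameters
x = np.array([1.0, 2.0, 3.0, 4.0])    # e.g. 4 color-derived channels
out = quaternion_linear(w, x)
print(out)
```

A quaternion convolution applies this product at every spatial location, sliding the same 4-parameter weight over the image channels.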
Fifth generation (5G) networks and beyond envision a massive Internet of Things (IoT) rollout to support disruptive applications such as extended reality (XR), augmented/virtual reality (AR/VR), industrial automation, autonomous driving, and smart everything, bringing together massive and diverse IoT devices occupying the radio frequency (RF) spectrum. Along with the spectrum crunch and throughput challenges, such a massive scale of wireless devices exposes unprecedented threat surfaces. RF fingerprinting is heralded as a candidate technology that can be combined with cryptographic and zero-trust security measures to ensure data privacy, confidentiality, and integrity in wireless networks. Motivated by the relevance of this subject in future communication networks, in this work we present a comprehensive survey of RF fingerprinting approaches, ranging from the traditional view to the most recent deep learning (DL) based algorithms. Existing surveys have mostly focused on a constrained presentation of wireless fingerprinting approaches, leaving many aspects untouched. In this work, we mitigate this by addressing every aspect - background on signal intelligence (SIGINT), applications, relevant DL algorithms, a systematic literature review of RF fingerprinting techniques spanning the past two decades, a discussion on datasets, and potential research avenues - necessary to elucidate this topic for the reader in an encyclopedic manner.
Automatic target recognition (ATR) algorithms classify a given synthetic aperture radar (SAR) image into one of the known target classes, using a set of training images available for each class. Recently, learning methods have been shown to achieve state-of-the-art classification accuracy if abundant training data is available, sampled uniformly over the classes and their poses. In this paper, we consider the task of ATR with a limited set of training images. We propose a data augmentation approach to incorporate domain knowledge and improve the generalization power of a data-intensive learning algorithm, such as a convolutional neural network (CNN). The proposed data augmentation method employs a limited-persistence sparse modeling approach, capitalizing on commonly observed characteristics of wide-angle synthetic aperture radar (SAR) images. Specifically, we exploit the sparsity of the scattering centers in the spatial domain and the smoothly varying structure of the scattering coefficients in the azimuthal domain to solve the ill-posed problem of over-parametrized model fitting. Using this estimated model, we synthesize new images at poses and sub-pixel translations not available in the given data to augment the CNN's training data. The experimental results show that, in the training-data-starved regime, the proposed method provides a significant gain in the generalization performance of the resulting ATR algorithm.
While machine learning is traditionally a resource intensive task, embedded systems, autonomous navigation, and the vision of the Internet of Things fuel the interest in resource-efficient approaches. These approaches aim for a carefully chosen trade-off between performance and resource consumption in terms of computation and energy. The development of such approaches is among the major challenges in current machine learning research and key to ensuring a smooth transition of machine learning technology from a scientific environment with virtually unlimited computing resources into everyday applications. In this article, we provide an overview of the current state of the art of machine learning techniques facilitating these real-world requirements. In particular, we focus on deep neural networks (DNNs), the predominant machine learning models of the past decade. We give a comprehensive overview of the vast literature that can be mainly split into three non-mutually exclusive categories: (i) quantized neural networks, (ii) network pruning, and (iii) structural efficiency. These techniques can be applied during training or as post-processing, and they are widely used to reduce the computational demands in terms of memory footprint, inference speed, and energy efficiency. We also briefly discuss different concepts of embedded hardware for DNNs and their compatibility with machine learning techniques as well as potential for energy and latency reduction. We substantiate our discussion with experiments on well-known benchmark datasets using compression techniques (quantization, pruning) for a set of resource-constrained embedded systems, such as CPUs, GPUs and FPGAs. The obtained results highlight the difficulty of finding good trade-offs between resource efficiency and predictive performance.
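Two of the three categories above, quantization and pruning, can be sketched as post-processing in plain NumPy. This is a minimal illustration under assumed settings (per-tensor uniform quantization over the min-max range, unstructured magnitude pruning), not any particular scheme from the surveyed literature.

```python
import numpy as np

def quantize_uniform(w, bits):
    """Uniform post-training quantization onto 2**bits levels spread
    over the weight tensor's own [min, max] range."""
    lo, hi = w.min(), w.max()
    levels = 2 ** bits - 1
    q = np.round((w - lo) / (hi - lo) * levels)
    return lo + q * (hi - lo) / levels

def prune_magnitude(w, sparsity):
    """Unstructured magnitude pruning: zero out the given fraction of
    smallest-magnitude weights."""
    k = int(sparsity * w.size)
    if k == 0:
        return w.copy()
    thresh = np.partition(np.abs(w).ravel(), k - 1)[k - 1]
    return np.where(np.abs(w) > thresh, w, 0.0)

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64))
w_q = quantize_uniform(w, bits=4)       # at most 16 distinct values
w_p = prune_magnitude(w, sparsity=0.9)  # ~90% zeros

print("distinct levels:", len(np.unique(w_q)))
print("sparsity:", np.mean(w_p == 0.0))
```

In practice the two are combined with fine-tuning, since naive post-processing alone usually costs accuracy, which is exactly the trade-off the article's experiments quantify.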
In conventional multi-user multiple-input multiple-output (MU-MIMO) systems with frequency division duplexing (FDD), the channel acquisition and precoder optimization processes have been designed separately, although they are highly coupled. This paper studies an end-to-end design of downlink MU-MIMO systems that includes pilot sequences, limited feedback, and precoding. To address this problem, we propose a novel deep learning (DL) framework that jointly optimizes the feedback information generation at the users and the precoder design at the base station (BS). Each procedure in the MU-MIMO system is replaced by intelligently designed deep neural network (DNN) units. At the BS, a neural network generates the pilot sequences and helps the users obtain accurate channel state information. At each user, the channel feedback operation is carried out in a distributed manner by an individual user DNN. Then, another BS DNN collects the feedback information from the users and determines the MIMO precoding matrices. A joint training algorithm is proposed to optimize all DNN units in an end-to-end manner. In addition, a training strategy that can avoid retraining for different network sizes, for a scalable design, is also proposed. Numerical results demonstrate the effectiveness of the proposed DL framework compared to classical optimization techniques and other conventional DNN schemes.
Deep neural networks (DNNs) are currently widely used for many artificial intelligence (AI) applications including computer vision, speech recognition, and robotics. While DNNs deliver state-of-the-art accuracy on many AI tasks, this comes at the cost of high computational complexity. Accordingly, techniques that enable efficient processing of DNNs to improve energy efficiency and throughput without sacrificing application accuracy or increasing hardware cost are critical to the wide deployment of DNNs in AI systems. This article aims to provide a comprehensive tutorial and survey about the recent advances towards the goal of enabling efficient processing of DNNs. Specifically, it will provide an overview of DNNs, discuss various hardware platforms and architectures that support DNNs, and highlight key trends in reducing the computation cost of DNNs either solely via hardware design changes or via joint hardware design and DNN algorithm changes. It will also summarize various development resources that enable researchers and practitioners to quickly get started in this field, and highlight important benchmarking metrics and design considerations that should be used for evaluating the rapidly growing number of DNN hardware designs, optionally including algorithmic co-designs, being proposed in academia and industry. The reader will take away the following concepts from this article: understand the key design considerations for DNNs; be able to evaluate different DNN hardware implementations with benchmarks and comparison metrics; understand the trade-offs between various hardware architectures and platforms; be able to evaluate the utility of various DNN design techniques for efficient processing; and understand recent implementation trends and opportunities.
In recent years, mmWave FMCW radars have attracted significant research interest for human-centered applications such as human pose/activity recognition. Most existing pipelines are built upon a hybrid of conventional discrete Fourier transform (DFT) pre-processing and a deep neural network classifier, and most previous works have focused on designing the downstream classifier to improve overall accuracy. In this work, we take a step back and look at the pre-processing module. To avoid the drawbacks of conventional DFT pre-processing, we propose a learnable pre-processing module, named CubeLearn, to extract features directly from the raw radar signal and build an end-to-end deep neural network for mmWave FMCW radar motion recognition applications. Extensive experiments show that our CubeLearn module consistently improves the classification accuracy of different pipelines, especially benefiting previously weaker models. We provide ablation studies on the initialization methods and structure of the proposed module, as well as an evaluation of running time on PC and edge devices. This work also serves as a comparison of different approaches to slicing the data cube. Through our task-agnostic design, we take a step towards a generic end-to-end solution for radar recognition problems.
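One common way to make DFT pre-processing learnable, plausibly in the spirit of the module above though not a reproduction of it, is to replace the fixed transform with a complex linear layer whose weights are initialized to the DFT matrix, so training starts from the classical behavior and can then adapt. A sketch of that initialization, with an illustrative toy beat signal:

```python
import numpy as np

def dft_matrix(n):
    """Complex DFT matrix. In a learnable pre-processing module this
    would be the *initialization* of a trainable complex linear layer,
    which gradient descent is then free to reshape for the task."""
    k = np.arange(n)
    return np.exp(-2j * np.pi * np.outer(k, k) / n)

n = 64
W = dft_matrix(n)  # in a DL framework, W would be the layer's weights
chirp = np.exp(1j * 2 * np.pi * 0.2 * np.arange(n))  # toy beat signal

# Initialized this way, the layer reproduces the classical range FFT...
assert np.allclose(W @ chirp, np.fft.fft(chirp))

# ...but, unlike np.fft.fft, W can afterwards be updated by backprop.
peak_bin = np.argmax(np.abs(W @ chirp))
print("peak range bin:", peak_bin)
```

The beat frequency of 0.2 cycles/sample falls at bin 12.8, so the magnitude peak lands at the nearest bin, 13.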
We present a neural network architecture able to efficiently detect the modulation scheme in a portion of an I/Q signal. This network is lighter than other state-of-the-art architectures working on the same or similar tasks. Moreover, the number of parameters does not depend on the signal duration, which allows processing streams of data and results in a signal-length-invariant network. In addition, we have generated a dataset based on the simulation of impairments that the propagation channel and the demodulator can bring to recorded I/Q signals: random phase shifts, delays, roll-offs, sampling rates, and frequency offsets. We benefit from this dataset to train our neural network to be invariant to these impairments and to discriminate between modulations under realistic conditions. The data and code to reproduce the results are publicly available.
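A subset of the listed impairments (random phase shift, delay, and frequency offset; roll-off and resampling are omitted) can be simulated on a complex baseband signal in a few lines. This is an illustrative sketch with assumed parameter ranges, not the dataset-generation code the abstract refers to.

```python
import numpy as np

rng = np.random.default_rng(0)

def impair(iq, rng):
    """Apply random phase shift, circular delay, and carrier frequency
    offset (CFO) to a complex baseband I/Q vector. Ranges are
    illustrative, not taken from the paper."""
    n = iq.size
    phase = rng.uniform(0, 2 * np.pi)          # random phase shift
    delay = rng.integers(0, n)                 # circular sample delay
    cfo = rng.uniform(-0.01, 0.01)             # CFO in cycles/sample
    out = np.roll(iq, delay) * np.exp(1j * phase)
    return out * np.exp(2j * np.pi * cfo * np.arange(n))

# Toy QPSK symbol stream at one sample per symbol.
qpsk = np.exp(1j * (np.pi / 4 + np.pi / 2 * rng.integers(0, 4, 256)))
rx = impair(qpsk, rng)

# All three impairments are unit-modulus operations, so signal energy
# is preserved while the raw I/Q samples change completely.
print(np.allclose(np.sum(np.abs(rx) ** 2), np.sum(np.abs(qpsk) ** 2)))
```

Training on many such randomized draws is what pushes the classifier toward invariance to these nuisances.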
Radar sensors are gradually becoming widespread equipment on road vehicles, playing a crucial role in autonomous driving and road safety. The broad adoption of radar sensors increases the chance of interference between sensors from different vehicles, producing corrupted range profiles and range-Doppler maps. In order to extract the distance and velocity of multiple targets from a range-Doppler map, the interference affecting each range profile needs to be mitigated. This paper proposes a fully convolutional neural network for automotive radar interference mitigation. To train our network on realistic scenarios, we introduce a new dataset of realistic automotive radar signals with multiple targets and multiple interferers. To our knowledge, we are the first to apply weight pruning in the automotive radar domain, obtaining superior results compared to the widely used dropout. While most previous works successfully estimated the magnitude of automotive radar signals, we propose a deep learning model that can accurately estimate the phase. For instance, our novel approach halves the phase estimation error relative to the commonly adopted zeroing technique, from 12.55 degrees to 6.58 degrees. Considering the lack of databases for automotive radar interference mitigation, we release as open source our large-scale dataset, which closely replicates real-world automotive scenarios for multiple interference cases, allowing others to objectively compare their future work in this domain. Our dataset is available for download at: http://github.com/ristea/arim-v2.
Objective: To propose a novel SSVEP classification methodology using deep neural networks (DNNs), improving the performance of single-channel and user-independent brain-computer interfaces (BCIs) with small data lengths. Approach: We propose the use of filter banks (creating sub-band components of the EEG signal) in conjunction with DNNs. In this context, we created three different models: a recurrent neural network (FBRNN) analyzing the time domain, a 2D convolutional neural network (FBCNN-2D) processing complex spectrum features, and a 3D convolutional neural network (FBCNN-3D) analyzing complex spectrograms, which we introduce in this study as a possible input for SSVEP classification. We trained our neural networks on open datasets and conceived them so as not to require calibration from the final user: hence, the test subject data was kept separate from training and validation. Results: The DNNs with filter banks surpassed the accuracy of similar networks without this preprocessing step by considerable margins (up to 4.6%), and they outperformed common SSVEP classification methods (SVM and FBCCA) by even higher margins (up to 7.1%). Of the three filter-bank DNNs, the FBRNN obtained the best results, followed by the FBCNN-3D and finally by the FBCNN-2D. Conclusions and significance: Filter banks allow different types of deep neural networks to analyze the harmonic components of SSVEP more efficiently. Complex spectrograms carry more information than complex spectrum features and the magnitude spectrum, allowing the FBCNN-3D to surpass the other CNN. The average test accuracy (87.3%) and F1 score (0.877) obtained in this challenging classification problem indicate strong potential for the construction of portable, economical, fast, and low-latency BCIs.
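The filter-bank step, decomposing one EEG channel into sub-band components before the DNN, can be sketched with a simple windowed-sinc band-pass design. The band edges, tap count, and test tones below are illustrative assumptions, not the paper's actual filter-bank configuration.

```python
import numpy as np

def bandpass_fir(lo, hi, fs, numtaps=129):
    """Windowed-sinc (Hamming) band-pass FIR passing [lo, hi] Hz."""
    t = np.arange(numtaps) - (numtaps - 1) / 2

    def lowpass(fc):
        h = np.sinc(2 * fc / fs * t)
        return h / h.sum()

    return (lowpass(hi) - lowpass(lo)) * np.hamming(numtaps)

def filter_bank(x, fs, bands):
    """Split one EEG channel into sub-band components, one per
    (lo, hi) band; stacked, they form the multi-channel DNN input."""
    return np.stack([np.convolve(x, bandpass_fir(lo, hi, fs), mode="same")
                     for lo, hi in bands])

fs = 250.0                         # a common EEG sampling rate
t = np.arange(250) / fs            # one second of signal
x = np.sin(2 * np.pi * 10 * t) + np.sin(2 * np.pi * 40 * t)
sub = filter_bank(x, fs, bands=[(6, 18), (30, 50)])
print(sub.shape)
```

Here the first sub-band retains mostly the 10 Hz component and the second mostly the 40 Hz component, which is how the bank exposes individual SSVEP harmonics to the network.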
Deep learning (DL) finds rich applications in the wireless domain to improve spectrum awareness. Typically, DL models are either randomly initialized following a statistical distribution or pretrained on tasks from other data domains (such as computer vision), in the form of transfer learning, without accounting for the unique characteristics of wireless signals. Self-supervised learning enables the learning of useful representations from radio frequency (RF) signals themselves, even when only limited labeled training data samples are available. We present the first self-supervised RF signal representation learning model and apply it to the automatic modulation recognition (AMR) task, by specifically formulating a set of transformations to capture the characteristics of wireless signals. We show that sample efficiency (the number of labeled samples needed to achieve a certain accuracy) can be significantly improved by learning signal representations with self-supervised learning. This translates to major time and cost savings. Furthermore, self-supervised learning increases model accuracy compared to state-of-the-art DL methods and maintains high accuracy even when only a small portion of the training data samples is used.
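Self-supervised pretraining of this kind typically builds two randomly transformed "views" of each signal and trains an encoder to map them close together. The transformations below (random phase rotation, circular time shift, gain) are illustrative label-preserving choices for I/Q data; the paper's actual transformation set may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative label-preserving transformations for I/Q signals: none
# of them changes the underlying modulation class.
def random_phase(iq, rng):
    return iq * np.exp(1j * rng.uniform(0, 2 * np.pi))

def random_time_shift(iq, rng):
    return np.roll(iq, rng.integers(iq.size))

def random_gain(iq, rng):
    return iq * rng.uniform(0.5, 2.0)

def make_views(iq, rng):
    """Two independently augmented views of the same signal; a
    contrastive loss (e.g. NT-Xent) would pull their embeddings
    together while pushing other signals' embeddings apart."""
    def augment(x):
        return random_gain(random_time_shift(random_phase(x, rng), rng), rng)
    return augment(iq), augment(iq)

iq = np.exp(2j * np.pi * 0.05 * np.arange(128))  # toy complex tone
v1, v2 = make_views(iq, rng)
print(v1.shape == v2.shape == iq.shape)
```

After contrastive pretraining on unlabeled captures, a small labeled set suffices to fit the AMR classification head, which is the sample-efficiency gain the abstract describes.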
Time Series Classification (TSC) is an important and challenging problem in data mining. With the increase of time series data availability, hundreds of TSC algorithms have been proposed. Among these methods, only a few have considered Deep Neural Networks (DNNs) to perform this task. This is surprising as deep learning has seen very successful applications in recent years. DNNs have indeed revolutionized the field of computer vision especially with the advent of novel deeper architectures such as Residual and Convolutional Neural Networks. Apart from images, sequential data such as text and audio can also be processed with DNNs to reach state-of-the-art performance for document classification and speech recognition. In this article, we study the current state-of-the-art performance of deep learning algorithms for TSC by presenting an empirical study of the most recent DNN architectures for TSC. We give an overview of the most successful deep learning applications in various time series domains under a unified taxonomy of DNNs for TSC. We also provide an open source deep learning framework to the TSC community where we implemented each of the compared approaches and evaluated them on a univariate TSC benchmark (the UCR/UEA archive) and 12 multivariate time series datasets. By training 8,730 deep learning models on 97 time series datasets, we propose the most exhaustive study of DNNs for TSC to date.
In this work, we design a fully complex-valued neural network for the task of iris recognition. Unlike the problem of general object recognition, where real-valued neural networks can be used to extract pertinent features, iris recognition depends on the extraction of both phase and magnitude information from the input iris texture in order to better represent its biometric content. This necessitates the extraction and processing of phase information that cannot be effectively handled by a real-valued neural network. In this regard, we design a fully complex-valued neural network that can better capture the multi-scale, multi-resolution, and multi-orientation phase and amplitude features of the iris texture. We show a strong correspondence between the proposed complex-valued iris recognition network and the Gabor wavelets used to generate the classical IrisCode; however, the proposed method enables a new capability of automatic complex-valued feature learning that is tailored for iris recognition. We conduct experiments on three benchmark datasets - ND-CrossSensor-2013, CASIA-Iris-Thousand, and UBIRIS.v2 - and show the benefits of the proposed network for the task of iris recognition. We exploit visualization schemes to convey how the complex-valued network, compared to standard real-valued networks, extracts fundamentally different features from the iris texture.
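The correspondence with Gabor wavelets can be illustrated with the basic building block such a network stacks: a complex convolution realized by four real convolutions, applied here with a classical complex Gabor kernel. This is a 1-D sketch under assumed kernel parameters, not the paper's 2-D architecture; a complex-valued network would learn kernels like `gabor_kernel` rather than fix them.

```python
import numpy as np

def complex_conv1d(x, w):
    """Complex convolution from four real convolutions:
    (xr + i*xi) * (wr + i*wi) = (xr*wr - xi*wi) + i*(xr*wi + xi*wr).
    Real-valued networks keep only one of these terms and so discard
    the phase structure that iris textures carry."""
    xr, xi, wr, wi = x.real, x.imag, w.real, w.imag
    re = np.convolve(xr, wr, "same") - np.convolve(xi, wi, "same")
    im = np.convolve(xr, wi, "same") + np.convolve(xi, wr, "same")
    return re + 1j * im

def gabor_kernel(n=21, f=0.15, sigma=3.0):
    """Complex Gabor filter, the classical basis of the IrisCode."""
    t = np.arange(n) - n // 2
    return np.exp(-t**2 / (2 * sigma**2)) * np.exp(2j * np.pi * f * t)

# A texture oscillating at the filter's own frequency responds far
# more strongly than under a mistuned filter.
x = np.exp(2j * np.pi * 0.15 * np.arange(200))
matched = np.abs(complex_conv1d(x, gabor_kernel())[100])
mistuned = np.abs(complex_conv1d(x, gabor_kernel(f=0.4))[100])
print(matched > mistuned)
```

The classical IrisCode then quantizes the phase of such responses; a learned complex network keeps the same algebra but adapts the kernels end to end.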