Neuromorphic computing is an emerging technology that offers event-driven data processing for applications requiring efficient online inference and/or control. Recent work has introduced the concept of neuromorphic communications, in which neuromorphic computing is integrated with impulse radio (IR) transmission to enable low-energy and low-latency remote inference in wireless IoT networks. In this paper, we introduce neuromorphic integrated sensing and communications (N-ISAC), a novel solution that enables efficient online data decoding and radar sensing. N-ISAC leverages a common IR waveform for the dual purpose of conveying digital information and detecting the presence or absence of a radar target. A spiking neural network (SNN) is deployed at the receiver to decode the digital data and detect the radar target from the received signal. The SNN operation is optimized by balancing performance metrics for data communication and radar sensing, highlighting synergies and trade-offs between the two applications.
translated by 谷歌翻译
Neuromorphic computing is an emerging computing paradigm that moves from batch processing towards the online, event-driven processing of streaming data. When neuromorphic chips are combined with spike-based sensors, energy is consumed only when relevant events are recorded in the timing of spikes, yielding an inherent adaptation to the "semantics" of the data distribution and a low-latency response to changing conditions in the environment. This paper proposes an end-to-end design for a neuromorphic wireless IoT system that integrates spike-based sensing, processing, and communication. In the proposed system, each sensing device is equipped with a neuromorphic sensor, a spiking neural network (SNN), and an impulse radio transmitter with multiple antennas. Transmission takes place over a shared fading channel to a receiver equipped with a multi-antenna impulse radio receiver and an SNN. To adapt the receiver to the fading channel conditions, we introduce a hypernetwork that controls the weights of the decoding SNN using pilots. Pilots, encoding SNN, decoding SNN, and hypernetwork are jointly trained across multiple channel realizations. The system is shown to significantly improve over conventional frame-based digital solutions, as well as over alternative non-adaptive training methods, in terms of time-to-accuracy and energy-consumption metrics.
In the past years, artificial neural networks (ANNs) have become the de-facto standard to solve tasks in communications engineering that are difficult to solve with traditional methods. In parallel, the artificial intelligence community drives its research to biology-inspired, brain-like spiking neural networks (SNNs), which promise extremely energy-efficient computing. In this paper, we investigate the use of SNNs in the context of channel equalization for ultra-low complexity receivers. We propose an SNN-based equalizer with a feedback structure akin to the decision feedback equalizer (DFE). For conversion of real-world data into spike signals we introduce a novel ternary encoding and compare it with traditional log-scale encoding. We show that our approach clearly outperforms conventional linear equalizers for three different exemplary channels. We highlight that mainly the conversion of the channel output to spikes introduces a small performance penalty. The proposed SNN with a decision feedback structure enables the path to competitive energy-efficient transceivers.
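The novel ternary encoding mentioned above can be pictured as a send-on-delta-style scheme that emits a +1 or −1 spike whenever the signal moves far enough from a running reference. The sketch below is one plausible realization under that assumption (the threshold and reference-update rule are illustrative, not taken from the paper):

```python
import numpy as np

def ternary_encode(x, threshold):
    """Encode a real-valued sequence into ternary spikes {-1, 0, +1}.

    A +1 (-1) spike is emitted whenever the sample rises (falls) by more
    than `threshold` relative to a running reference, which then tracks
    the signal in threshold-sized steps. Illustrative sketch only; the
    paper's exact encoder may differ.
    """
    spikes = np.zeros(len(x), dtype=int)
    ref = x[0]
    for i, sample in enumerate(x):
        if sample - ref > threshold:
            spikes[i] = 1
            ref += threshold
        elif ref - sample > threshold:
            spikes[i] = -1
            ref -= threshold
    return spikes
```

A constant input produces no spikes at all, which is the source of the scheme's sparsity and hence its energy appeal for SNN equalizers.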
Characterizing the sensing and communication performance trade-off in integrated sensing and communication (ISAC) systems is challenging in applications of learning-based human motion recognition, because of the large experimental datasets required and the black-box nature of deep neural networks. This paper presents SDP3, a simulation-driven performance predictor and optimizer, which consists of an SDP3 data simulator, an SDP3 performance predictor, and an SDP3 performance optimizer. Specifically, the SDP3 data simulator generates realistic wireless sensing datasets in a virtual environment, the SDP3 performance predictor predicts the sensing performance based on a function-regression method, and the SDP3 performance optimizer analytically investigates the sensing and communication performance trade-off. The results show that the simulated sensing dataset closely matches the experimental dataset in motion recognition accuracy. By leveraging SDP3, it is found that the region of achievable recognition accuracy and communication throughput consists of a communication-saturation region, a sensing-saturation region, and a communication-sensing adversarial region, and that the desired balanced performance of an ISAC system lies in the third one.
Communication and computation are often viewed as separate tasks. This approach is very effective from the perspective of engineering as isolated optimizations can be performed. On the other hand, there are many cases where the main interest is a function of the local information at the devices instead of the local information itself. For such scenarios, information theoretical results show that harnessing the interference in a multiple-access channel for computation, i.e., over-the-air computation (OAC), can provide a significantly higher achievable computation rate than the one with the separation of communication and computation tasks. Besides, the gap between OAC and separation in terms of computation rate increases with more participating nodes. Given this motivation, in this study, we provide a comprehensive survey on practical OAC methods. After outlining fundamentals related to OAC, we discuss the available OAC schemes with their pros and cons. We then provide an overview of the enabling mechanisms and relevant metrics to achieve reliable computation in the wireless channel. Finally, we summarize the potential applications of OAC and point out some future directions.
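The core OAC idea, that the multiple-access channel's additive superposition itself computes a sum, can be captured in a few lines. The sketch below is a toy Gaussian-MAC model (function names and the single-channel-use framing are illustrative; practical schemes must also handle fading, power control, and synchronization):

```python
import numpy as np

def over_the_air_average(values, noise_std=0.0, rng=None):
    """Toy over-the-air computation of an average over a Gaussian MAC.

    All nodes transmit their analog values simultaneously; the channel
    adds the waveforms, so the receiver observes the superposition plus
    noise and recovers the average in a single channel use.
    """
    rng = rng or np.random.default_rng(0)
    superposition = np.sum(values) + rng.normal(0.0, noise_std)
    return superposition / len(values)
```

A separation-based design would instead need one channel use per node before computing the same average, which is the rate gap the survey describes growing with the number of participants.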
Ultra-reliable short-packet communication is a major challenge in future wireless networks with critical applications. To achieve ultra-reliable communications beyond 99.999%, this paper envisions a new interaction-based communication paradigm that exploits feedback from the receiver. We present AttentionCode, a new class of feedback codes leveraging deep learning (DL) technologies. The underpinnings of AttentionCode are three architectural innovations: AttentionNet, input restructuring, and adaptation to fading channels, accompanied by several training methods, including large-batch training, distributed learning, look-ahead optimizer, training-test signal-to-noise ratio (SNR) mismatch, and curriculum learning. The training methods can potentially be generalized to other wireless communication applications with machine learning. Numerical experiments verify that AttentionCode establishes a new state of the art among all DL-based feedback codes in both additive white Gaussian noise (AWGN) channels and fading channels. In AWGN channels with noiseless feedback, for example, AttentionCode achieves a block error rate (BLER) of $10^{-7}$ when the forward channel SNR is 0 dB for a block size of 50 bits, demonstrating the potential of AttentionCode to provide ultra-reliable short-packet communications.
Effective and adaptive interference management is required in next generation wireless communication systems. To address this challenge, Rate-Splitting Multiple Access (RSMA), relying on multi-antenna rate-splitting (RS) at the transmitter and successive interference cancellation (SIC) at the receivers, has been intensively studied in recent years, albeit mostly under the assumption of perfect Channel State Information at the Receiver (CSIR) and ideal capacity-achieving modulation and coding schemes. To assess its practical performance, benefits, and limits under more realistic conditions, this work proposes a novel design for a practical RSMA receiver based on model-based deep learning (MBDL) methods, which aims to unite the simple structure of the conventional SIC receiver and the robustness and model agnosticism of deep learning techniques. The MBDL receiver is evaluated in terms of uncoded Symbol Error Rate (SER), throughput performance through Link-Level Simulations (LLS), and average training overhead. Also, a comparison with the SIC receiver, with perfect and imperfect CSIR, is given. Results reveal that the MBDL receiver outperforms by a significant margin the SIC receiver with imperfect CSIR, due to its ability to generate on demand non-linear symbol detection boundaries in a pure data-driven manner.
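For reference, the conventional SIC receiver that the MBDL design is benchmarked against can be sketched for a toy two-user BPSK superposition (this is the textbook baseline, not the paper's learned receiver; powers and hard-decision rules here are illustrative):

```python
import numpy as np

def sic_detect(y, p_strong):
    """Two-user successive interference cancellation with BPSK symbols.

    The stronger user's symbol is detected first, treating the weaker one
    as noise; its reconstructed contribution is subtracted, and the weaker
    user is then detected from the residual. With BPSK, a sign decision
    suffices for each stage.
    """
    s_strong = np.sign(np.real(y))            # hard decision, strong user
    residual = y - np.sqrt(p_strong) * s_strong
    s_weak = np.sign(np.real(residual))       # detect weak user from residual
    return s_strong, s_weak
```

The MBDL receiver's advantage under imperfect CSIR comes precisely from replacing these fixed linear decision boundaries with data-driven non-linear ones.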
Using deep autoencoders (DAEs) for end-to-end communication in multiple-input multiple-output (MIMO) systems is a novel concept with significant potential. DAE-aided MIMO has been shown to outperform singular-value-decomposition (SVD)-based precoded MIMO in terms of bit error rate (BER). This paper proposes embedding the left and right singular vectors of the channel matrix into the DAE encoder and decoder to further improve the performance of MIMO spatial multiplexing. The SVD-embedded DAE largely outperforms theoretical linear precoding in terms of BER. This is remarkable, as it suggests that the proposed DAE can exceed the limits of current system designs by treating the communication system as a single end-to-end optimization block. Based on simulation results, at SNR = 10 dB, the proposed SVD-embedded design reduces the BER by at least 10 times compared with a DAE without SVD, and by up to 18 times compared with theoretical linear precoding. We attribute this to the fact that the proposed DAE can match the input and output with an adaptive modulation structure with finite-alphabet inputs. We also observe that adding residual connections to the DAE further improves the performance.
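The linear SVD precoding baseline that the DAE embeds and is compared against works as follows: precoding with the right singular vectors and combining with the left ones turns the MIMO channel into parallel scalar channels. A minimal numpy sketch (random channel, illustrative symbol vector):

```python
import numpy as np

def svd_precode(H, x):
    """Classical SVD-based MIMO precoding baseline.

    Transmitting V @ x and combining the received signal with U^H
    diagonalizes the channel: the output is s * x, where s holds the
    singular values of H.
    """
    U, s, Vh = np.linalg.svd(H, full_matrices=False)
    y = H @ (Vh.conj().T @ x)        # precode with right singular vectors
    return U.conj().T @ y, s         # combine with left singular vectors

rng = np.random.default_rng(1)
H = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
x = np.array([1.0, -1.0, 1.0, 1.0], dtype=complex)
z, s = svd_precode(H, x)             # z equals s * x elementwise
```

The paper's contribution is to feed these same U and V matrices into learned encoder/decoder blocks rather than using them as a fixed linear transform.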
Spiking neural networks (SNNs) are known to be highly effective when implemented on neuromorphic processors, with the potential to improve energy efficiency and computational latency by orders of magnitude over traditional deep learning approaches. Comparable algorithmic performance has also recently been made possible by advances in supervised training algorithms for SNNs. However, information including audio, video, and other sensor-derived data is typically encoded as real-valued signals that are not well suited to SNNs, preventing the network from exploiting spike-timing information. Efficient encoding from real-valued signals to spikes is therefore critical and significantly impacts the performance of the overall system. To efficiently encode signals into spikes, both the information relevant to the task at hand and the density of the encoded spikes must be considered. In this paper, we study four spike encoding methods in the context of a speaker-independent digit classification system: Send-on-Delta, Time-to-First-Spike, Leaky Integrate-and-Fire, and the Bens Spiker Algorithm. We first show that encoding a bio-inspired cochleagram with fewer spikes can yield higher classification accuracy than a conventional short-time Fourier transform. We then demonstrate that two Send-on-Delta variants lead to classification results comparable to a state-of-the-art deep convolutional neural network baseline, while reducing the encoded bit rate. Finally, we show that several encoding methods yield improved performance over the conventional deep learning baseline in certain cases, further demonstrating the power of spike encoding algorithms for real-valued signals when used with state-of-the-art techniques.
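One of the four encoders compared, the Leaky Integrate-and-Fire encoder, accumulates the input into a leaky membrane potential and emits a spike on each threshold crossing. A minimal sketch with illustrative parameter values (the paper's actual thresholds and leak rates are not specified here):

```python
import numpy as np

def lif_encode(x, threshold=1.0, leak=0.9):
    """Leaky integrate-and-fire encoding of a real-valued signal.

    Each sample is added to a leaky membrane potential; a spike is
    emitted, and the membrane reset, whenever the potential crosses
    the threshold. Larger inputs therefore produce denser spike trains.
    """
    spikes = np.zeros(len(x), dtype=int)
    v = 0.0
    for i, sample in enumerate(x):
        v = leak * v + sample
        if v >= threshold:
            spikes[i] = 1
            v = 0.0                   # reset after spiking
    return spikes
```

The trade-off the paper studies is visible here: raising the threshold lowers the spike density (and bit rate) but discards fine-grained amplitude information.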
Among the main features of biological intelligence are energy efficiency, the capacity for continual adaptation, and risk management via uncertainty quantification. Neuromorphic engineering has thus far been mostly driven by the goal of implementing energy-efficient machines that take inspiration from the time-based computing paradigm of biological brains. In this paper, we take steps towards the design of neuromorphic systems that are capable of adapting to changing learning tasks while producing well-calibrated uncertainty quantification estimates. To this end, we derive online learning rules for spiking neural networks (SNNs) within a Bayesian continual learning framework. In it, each synaptic weight is represented by parameters that quantify the current epistemic uncertainty resulting from prior knowledge and the observed data. The proposed online rules update the distribution parameters in a streaming fashion as data is observed. We instantiate the proposed approach for both real-valued and binary synaptic weights. Experimental results using Intel's Lava platform show the merits of Bayesian continual learning in terms of adaptation capability and uncertainty quantification.
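The bookkeeping behind "each synaptic weight represented by parameters that quantify epistemic uncertainty" can be illustrated with a generic streaming conjugate-Gaussian update (this is a standard textbook update shown only to make the idea concrete; the paper derives SNN-specific rules, not this one):

```python
def gaussian_online_update(mu, prec, obs, obs_prec):
    """One streaming Bayesian update for a Gaussian-distributed weight.

    The weight is summarized by a mean `mu` and a precision `prec`
    (inverse variance). Each observation shifts the mean towards the
    evidence and increases the precision, i.e. shrinks the epistemic
    uncertainty, without revisiting past data.
    """
    new_prec = prec + obs_prec
    new_mu = (prec * mu + obs_prec * obs) / new_prec
    return new_mu, new_prec
```

Because the update is purely local and streaming, it fits the online, per-synapse setting that the paper targets.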
The term ``neuromorphic'' refers to systems that are closely resembling the architecture and/or the dynamics of biological neural networks. Typical examples are novel computer chips designed to mimic the architecture of a biological brain, or sensors that get inspiration from, e.g., the visual or olfactory systems in insects and mammals to acquire information about the environment. This approach is not without ambition as it promises to enable engineered devices able to reproduce the level of performance observed in biological organisms -- the main immediate advantage being the efficient use of scarce resources, which translates into low power requirements. The emphasis on low power and energy efficiency of neuromorphic devices is a perfect match for space applications. Spacecraft -- especially miniaturized ones -- have strict energy constraints as they need to operate in an environment which is scarce with resources and extremely hostile. In this work we present an overview of early attempts made to study a neuromorphic approach in a space context at the European Space Agency's (ESA) Advanced Concepts Team (ACT).
Spiking neuromorphic hardware holds the promise of more energy-efficient implementations of deep neural networks (DNNs) than standard hardware such as GPUs. But this requires understanding how DNNs can be emulated in an event-based, sparse-firing regime, since otherwise the energy advantage is lost. In particular, DNNs that solve sequence processing tasks typically employ long short-term memory (LSTM) units that are hard to emulate with few spikes. We show that a facet of many biological neurons, slow after-hyperpolarizing (AHP) currents after each spike, provides an efficient solution. AHP currents can easily be implemented in neuromorphic hardware that supports multi-compartment neuron models, such as Intel's Loihi chip. Filter approximation theory explains why AHP neurons can emulate the functionality of LSTM units. This yields a highly energy-efficient approach to time-series classification. Furthermore, it provides the basis for implementing, with very sparse firing, an important class of large DNNs that extract relations between the words and sentences of a text in order to answer questions about the text.
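The mechanism can be sketched as a leaky integrate-and-fire neuron carrying an extra, slowly decaying AHP current that is strengthened after every spike (all parameter values below are illustrative, not taken from the paper or from Loihi):

```python
def ahp_neuron(inputs, v_thresh=1.0, ahp_jump=1.0, ahp_decay=0.95, leak=0.8):
    """LIF neuron with a slow after-hyperpolarizing (AHP) current.

    After each spike the AHP current is increased; it then decays slowly
    and inhibits the membrane, suppressing further firing. This slowly
    varying hidden state lets the neuron retain information over time,
    loosely mirroring an LSTM cell state, while keeping firing sparse.
    """
    v, ahp = 0.0, 0.0
    spikes = []
    for x in inputs:
        ahp *= ahp_decay                 # slow decay of the AHP current
        v = leak * v + x - ahp           # AHP current inhibits the membrane
        if v >= v_thresh:
            spikes.append(1)
            v = 0.0
            ahp += ahp_jump              # strengthen AHP after a spike
        else:
            spikes.append(0)
    return spikes
```

Under a sustained input, the AHP current makes the firing progressively sparser, which is exactly the sparse-firing regime in which the energy advantage of neuromorphic hardware survives.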
Neuromorphic computers perform computations by emulating the human brain, and use extremely low power. They are expected to be indispensable for energy-efficient computing in the future. While they are primarily used for spiking-neural-network-based machine learning applications, neuromorphic computers are known to be Turing-complete, and thus capable of general-purpose computation. However, to fully realize their potential for general-purpose, energy-efficient computing, it is important to devise efficient mechanisms for encoding numbers. Current encoding approaches have limited applicability and may not be suitable for general-purpose computation. In this paper, we present the virtual neuron as an encoding mechanism for integers and rational numbers. We evaluate the performance of the virtual neuron on physical and simulated neuromorphic hardware, and show that it can perform an addition operation using 23 nJ of energy on average on a mixed-signal, memristor-based neuromorphic processor. We also demonstrate its utility by using it in some μ-recursive functions, which are the building blocks of general-purpose computation.
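The spirit of such a number encoding can be shown with a toy model in which each number occupies a group of neurons whose spikes carry power-of-two weights, and a downstream accumulator sums the weighted spikes (this sketches the encoding idea only, not the paper's exact neuron construction; names are illustrative):

```python
def encode_bits(n, width):
    """Encode a non-negative integer as a binary spike vector (LSB first)."""
    return [(n >> i) & 1 for i in range(width)]

def virtual_neuron_add(a, b, width=8):
    """Toy spike-domain addition in the spirit of the virtual neuron.

    Each operand is represented by `width` neurons whose spikes carry
    power-of-two synaptic weights; an accumulator neuron sums all the
    weighted spikes, so its total input equals a + b.
    """
    spikes = encode_bits(a, width) + encode_bits(b, width)
    weights = [2 ** i for i in range(width)] * 2
    return sum(w * s for w, s in zip(weights, spikes))
```

Because both operands arrive as single spike volleys, the addition costs a number of synaptic events linear in the bit width, which is what makes a per-operation energy figure like 23 nJ meaningful.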
Neuromorphic computing using biologically inspired Spiking Neural Networks (SNNs) is a promising solution to meet Energy-Throughput (ET) efficiency needed for edge computing devices. Neuromorphic hardware architectures that emulate SNNs in analog/mixed-signal domains have been proposed to achieve order-of-magnitude higher energy efficiency than all-digital architectures, however at the expense of limited scalability, susceptibility to noise, complex verification, and poor flexibility. On the other hand, state-of-the-art digital neuromorphic architectures focus either on achieving high energy efficiency (Joules/synaptic operation (SOP)) or throughput efficiency (SOPs/second/area), resulting in poor ET efficiency. In this work, we present THOR, an all-digital neuromorphic processor with a novel memory hierarchy and neuron update architecture that addresses both energy consumption and throughput bottlenecks. We implemented THOR in 28nm FDSOI CMOS technology and our post-layout results demonstrate an ET efficiency of 7.29G $\text{TSOP}^2/\text{mm}^2\text{Js}$ at 0.9V, 400 MHz, which represents a 3X improvement over state-of-the-art digital neuromorphic processors.
In this paper, we present an energy-efficient SNN architecture that can seamlessly run deep spiking neural networks (SNNs) with improved accuracy. First, we propose conversion-aware training (CAT) to reduce ANN-to-SNN conversion loss without hardware implementation overhead. In the proposed CAT, the activation function used to simulate the SNN during ANN training can be efficiently exploited to reduce the data representation error after conversion. Based on the CAT technique, we also present a time-to-first-spike coding that enables lightweight computation by utilizing spike-timing information. The SNN processor design supporting the proposed techniques has been implemented using a 28 nm CMOS process. The processor achieves inference energies of 486.7 μJ, 503.6 μJ, and 1426 μJ with top-1 accuracies of 91.7%, 67.9%, and 57.4% for CIFAR-10, CIFAR-100, and Tiny-ImageNet, respectively, when running VGG-16 with 5-bit logarithmic weights.
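Time-to-first-spike coding, as used above, carries information in *when* a neuron fires rather than how often, so each neuron needs at most one spike. A minimal sketch under the assumption of inputs normalized to [0, 1] and a simple linear time discretization (both assumptions are illustrative):

```python
import numpy as np

def ttfs_encode(x, t_max=16):
    """Time-to-first-spike coding: larger activations spike earlier.

    Each value in [0, 1] maps to a single discrete spike time in
    [0, t_max - 1]; an activation of 1.0 fires immediately at t = 0,
    an activation of 0.0 fires last.
    """
    x = np.clip(np.asarray(x, dtype=float), 0.0, 1.0)
    return np.round((1.0 - x) * (t_max - 1)).astype(int)
```

The one-spike-per-neuron property is what makes the computation lightweight: downstream layers can stop integrating as soon as the earliest (most significant) spikes have arrived.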
Spiking neural networks (SNNs) provide an efficient computational mechanism for temporal signal processing, especially when coupled with low-power SNN inference hardware. SNNs have historically been difficult to configure, lacking a general method for finding solutions for arbitrary tasks. In recent years, gradient-descent optimization methods have been applied to SNNs with increasing ease. SNNs and SNN inference processors therefore offer a good platform for commercial low-power signal processing in energy-constrained environments without cloud dependencies. However, to date these methods have not been accessible to ML engineers in industry, requiring graduate-level training to successfully configure a single SNN application. Here we demonstrate a convenient high-level pipeline for designing, training, and deploying arbitrary temporal signal processing applications to sub-mW SNN inference hardware. We use a novel, straightforward SNN architecture for temporal signal processing, based on a pyramid of synaptic time constants that extracts signal features over a range of temporal scales. We demonstrate this architecture on an ambient audio classification task, deployed to the Xylo SNN inference processor in streaming mode. Our application achieves high accuracy (98%) and low latency (100 ms) at low power (<4 μW inference power). Our approach makes training and deploying SNN applications accessible to ML engineers with a general NN background, without requiring prior experience with spiking NNs. We intend for neuromorphic hardware and SNNs to become an attractive choice for commercial low-power and edge signal processing applications.
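A "pyramid of synaptic time constants" can be pictured as a bank of leaky integrators, one per time constant, each smoothing the input at a different temporal scale. A minimal sketch (the specific tau values are illustrative, not the ones used in the deployed network):

```python
import numpy as np

def lowpass_pyramid(x, taus=(2.0, 8.0, 32.0)):
    """Filter a signal with a pyramid of synaptic time constants.

    Each channel applies the leaky-integrator recurrence
    y[t] = a*y[t-1] + (1-a)*x[t] with a = exp(-1/tau), extracting
    features at its own temporal scale; stacking several taus covers
    a range of time scales, as in the described architecture.
    """
    x = np.asarray(x, dtype=float)
    out = np.zeros((len(taus), len(x)))
    for k, tau in enumerate(taus):
        a = np.exp(-1.0 / tau)
        y = 0.0
        for t, sample in enumerate(x):
            y = a * y + (1.0 - a) * sample
            out[k, t] = y
    return out
```

Exponentially spaced taus give roughly logarithmic coverage of time scales, which suits audio tasks where both fast transients and slow envelopes matter.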
We consider the problem of multi-user detection (MUD) in uplink grant-free non-orthogonal multiple access (NOMA), where the access point has to identify the total number of active Internet of Things (IoT) devices and correctly detect their transmitted data. We assume that IoT devices use complex spreading sequences and transmit information in a random-access manner following a burst-sparsity model, in which some IoT devices transmit their data in multiple adjacent time slots with high probability, while others transmit only once within a frame. Exploiting the temporal correlation, we propose an attention-based bidirectional long short-term memory (BiLSTM) network to solve the MUD problem. The BiLSTM network builds a pattern of the device activation history using forward- and reverse-pass LSTMs, while the attention mechanism provides essential context for the device activation points. By doing so, a hierarchical pathway is followed for detecting active devices in a grant-free scenario. Blind data detection for the estimated active devices is then performed by utilizing the complex spreading sequences. The proposed framework requires no prior knowledge of the device sparsity level or of the channel to perform MUD. The results show that the proposed network performs better than existing benchmark schemes.
Spectrum coexistence is essential for next generation (NextG) systems to share the spectrum with incumbent (primary) users and meet the growing demand for bandwidth. One example is the 3.5 GHz Citizens Broadband Radio Service (CBRS) band, where the 5G and beyond communication systems need to sense the spectrum and then access the channel in an opportunistic manner when the incumbent user (e.g., radar) is not transmitting. To that end, a high-fidelity classifier based on a deep neural network is needed for low misdetection (to protect incumbent users) and low false alarm (to achieve high throughput for NextG). In a dynamic wireless environment, the classifier can only be used for a limited period of time, i.e., coherence time. A portion of this period is used for learning to collect sensing results and train a classifier, and the rest is used for transmissions. In spectrum sharing systems, there is a well-known tradeoff between the sensing time and the transmission time. While increasing the sensing time can increase the spectrum sensing accuracy, there is less time left for data transmissions. In this paper, we present a generative adversarial network (GAN) approach to generate synthetic sensing results to augment the training data for the deep learning classifier so that the sensing time can be reduced (and thus the transmission time can be increased) while keeping high accuracy of the classifier. We consider both additive white Gaussian noise (AWGN) and Rayleigh channels, and show that this GAN-based approach can significantly improve both the protection of the high-priority user and the throughput of the NextG user (more in Rayleigh channels than AWGN channels).
Loihi is a 60-mm² chip fabricated in Intel's 14-nm process that advances the state-of-the-art modeling of spiking neural networks in silicon. It integrates a wide range of novel features for the field, such as hierarchical connectivity, dendritic compartments, synaptic delays, and, most importantly, programmable synaptic learning rules. Running a spiking convolutional form of the Locally Competitive Algorithm, Loihi can solve LASSO optimization problems with over three orders of magnitude superior energy-delay product compared to conventional solvers running on a CPU at iso-process/voltage/area. This provides an unambiguous example of spike-based computation outperforming all known conventional solutions. Neuroscience offers a bountiful source of inspiration for novel hardware architectures and algorithms. Through their complex interactions at large scales, biological neurons exhibit an impressive range of behaviors and properties that we currently struggle to model with modern analytical tools, let alone replicate with our design and manufacturing technology. Some of the magic that we see in the brain undoubtedly stems from exotic device and material properties that will remain out of our fabs' reach for some time to come.
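To make the optimization target concrete, the LASSO problem that the spiking Locally Competitive Algorithm solves on Loihi can also be solved by a conventional iterative soft-thresholding (ISTA) baseline, shown below as a sketch (this is the standard reference algorithm, not the chip's spiking implementation):

```python
import numpy as np

def ista_lasso(A, b, lam, n_iter=500):
    """Solve min_x 0.5*||Ax - b||^2 + lam*||x||_1 via ISTA.

    Each iteration takes a gradient step on the quadratic term and then
    applies the soft-thresholding (shrinkage) operator, which drives
    small coefficients exactly to zero, yielding a sparse solution.
    """
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - (A.T @ (A @ x - b)) / L        # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # shrinkage
    return x
```

The energy-delay comparison in the abstract is against exactly this kind of CPU solver: the spiking LCA reaches comparable solutions while exchanging only sparse events between neurons.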
Machine learning algorithms have recently been considered for many tasks in the field of wireless communications. Previously, we have proposed the use of a deep convolutional neural network (CNN) for receiver processing and shown that it can provide considerable performance gains. In this study, we focus on machine learning algorithms for the transmitter. In particular, we consider beamforming and propose a CNN that, given an uplink channel estimate as input, outputs the downlink channel information to be used for beamforming. The CNN is trained in a supervised manner, considering both the uplink and downlink transmissions, with a loss function based on the UE receiver performance. The main task of the neural network is to predict the channel evolution between the uplink and downlink slots, but it can also learn to handle inefficiencies and errors in the whole chain, including the actual beamforming phase. The provided numerical experiments demonstrate improved beamforming performance.