Spiking Neural Networks (SNNs) have recently emerged as a low-power alternative to Artificial Neural Networks (ANNs) owing to their asynchronous, sparse, and binary information processing. To improve energy efficiency and throughput, SNNs can be implemented on memristive crossbars, where multiply-and-accumulate (MAC) operations are realized in the analog domain using emerging non-volatile memory (NVM) devices. Despite the compatibility of SNNs with memristive crossbars, little attention has been paid to the effect of intrinsic crossbar non-idealities and stochasticity on SNN performance. In this paper, we conduct a comprehensive analysis of the robustness of SNNs on non-ideal crossbars. We examine SNNs trained with learning algorithms such as surrogate gradients and ANN-SNN conversion. Our results show that repetitive crossbar computations across multiple time steps cause errors to accumulate, degrading SNN performance during inference. We further show that SNNs trained with fewer time steps achieve better accuracy when deployed on memristive crossbars.
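As a rough illustration (not from the paper), the error-accumulation effect can be reproduced by perturbing the crossbar weights with device-level noise at every time step; the noise model and magnitudes below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def crossbar_mac(weights, spikes, noise_std=0.05):
    """Analog MAC on a memristive crossbar: ideal dot product plus a
    multiplicative conductance perturbation (illustrative noise model)."""
    noisy_w = weights * (1.0 + rng.normal(0.0, noise_std, size=weights.shape))
    return noisy_w @ spikes

# One integrate-and-fire layer run for T time steps on ideal vs. non-ideal hardware.
T, n_in, n_out = 25, 64, 10
weights = rng.normal(0.0, 0.3, size=(n_out, n_in))
v_ideal = np.zeros(n_out)
v_noisy = np.zeros(n_out)
for t in range(T):
    spikes = (rng.random(n_in) < 0.2).astype(float)   # Poisson-like input spikes
    v_ideal += weights @ spikes
    v_noisy += crossbar_mac(weights, spikes)          # deviation grows with every step

print("membrane-potential deviation after", T, "steps:",
      np.abs(v_ideal - v_noisy).mean())
```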
Spiking Neural Networks (SNNs) have recently emerged as an alternative to deep learning owing to their sparse, asynchronous, and binary event (or spike) driven processing, which can yield huge energy-efficiency gains on neuromorphic hardware. However, training high-accuracy and low-latency SNNs from scratch suffers from the non-differentiable nature of spiking neurons. To address this training issue, we revisit batch normalization and propose a temporal Batch Normalization Through Time (BNTT) technique. Most prior SNN works have ignored batch normalization, deeming it ineffective for training temporal SNNs. Unlike previous works, our proposed BNTT decouples the parameters of a BNTT layer along the time axis to capture the temporal dynamics of spikes. The temporally evolving learnable parameters in BNTT allow a neuron to control its spike rate through different time steps, enabling low-latency and low-energy training from scratch. We conduct experiments on CIFAR-10, CIFAR-100, Tiny-ImageNet, and the event-driven DVS-CIFAR10 dataset. BNTT allows us to train deep SNN architectures on these complex datasets, for the first time, with just 25-30 time steps. We also propose an early-exit algorithm that uses the parameter distributions of BNTT to reduce inference latency, further improving energy efficiency.
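To make the mechanism concrete, here is a minimal sketch (assumed PyTorch-style code, not the authors' implementation) of a BNTT layer: one BatchNorm, and hence one learnable scale, per time step.

```python
import torch
import torch.nn as nn

class BNTT(nn.Module):
    """Batch Normalization Through Time (sketch): a separate BatchNorm,
    and therefore a separate learnable scale gamma_t, for every time step."""
    def __init__(self, channels, num_steps):
        super().__init__()
        self.bn = nn.ModuleList([nn.BatchNorm2d(channels) for _ in range(num_steps)])

    def forward(self, x, t):
        # x: pre-activation input at time step t, shape (batch, C, H, W)
        return self.bn[t](x)

# Usage: normalize the convolution output at each step before the membrane update.
layer = nn.Conv2d(3, 16, 3, padding=1)
bntt = BNTT(16, num_steps=25)
mem = torch.zeros(8, 16, 32, 32)
for t in range(25):
    frame = torch.rand(8, 3, 32, 32)          # e.g. rate-coded input at step t
    mem = mem + bntt(layer(frame), t)         # time-specific gamma_t scales the input
    spikes = (mem > 1.0).float()              # fire when the threshold is crossed
    mem = mem * (1.0 - spikes)                # hard reset after a spike
```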
How can we bring both privacy and energy efficiency to a neural system? In this paper, we propose PrivateSNN, which aims to build low-power Spiking Neural Networks (SNNs) from a pre-trained ANN model without leaking sensitive information contained in a dataset. Here, we tackle two types of leakage problems: 1) data leakage, caused when the network accesses real training data during the ANN-SNN conversion process; and 2) class leakage, caused when class-related features can be reconstructed from the network parameters. To address the data-leakage issue, we generate synthetic images from the pre-trained ANN and convert the ANN into an SNN using the generated images. However, the converted SNN is still vulnerable to class leakage, since the weight parameters have the same (or scaled) values as the ANN parameters. We therefore encrypt the SNN weights by training the SNN with a temporal spike-based learning rule. Updating the weight parameters with temporal data makes the SNN difficult to interpret in the spatial domain. We observe that the encrypted PrivateSNN eliminates both data and class leakage with a slight performance drop (less than ~2%) and a significant energy-efficiency gain (about 55x) compared to the standard ANN. We conduct extensive experiments on various datasets, including CIFAR10, CIFAR100, and Tiny-ImageNet, highlighting the importance of privacy-preserving SNN training.
Spiking neural networks are efficient computation models for low-power environments. Spike-based backpropagation (BP) and ANN-to-SNN (ANN2SNN) conversion are successful techniques for SNN training. However, spike-based BP training is slow and requires a large memory cost. Although ANN2SNN provides a low-cost way to train SNNs, it requires many inference steps to mimic the well-trained ANN and perform well. In this paper, we propose an SNN-to-ANN (SNN2ANN) framework to train SNNs in a fast and memory-efficient way. SNN2ANN consists of two components: a) a weight-sharing architecture between the ANN and the SNN, and b) a spiking mapping unit. First, the architecture trains the weight-sharing parameters on the ANN branch, yielding fast training and a low memory cost for the SNN. Second, the spiking mapping unit ensures that the activation values of the ANN are spike-based features. As a result, the classification error of the SNN can be optimized by training the ANN branch. Furthermore, we design an adaptive threshold adjustment (ATA) algorithm to address the noisy-spike problem. Experimental results show that our SNN2ANN-based models perform well on the benchmark datasets (CIFAR-10, CIFAR-100, and Tiny-ImageNet). Moreover, SNN2ANN achieves comparable accuracy with only 0.625x the time steps, 0.377x the training time, 0.27x the GPU memory cost, and 0.33x the spike activity of a spike-based BP model.
We propose a new learning algorithm that trains spiking neural networks (SNNs) using a conventional artificial neural network (ANN) as a proxy. We couple two networks, an SNN and an ANN, consisting of integrate-and-fire (IF) and ReLU neurons respectively, with the same network architecture and shared synaptic weights. The forward passes of the two networks are entirely independent. By assuming rate-coded IF neurons as an approximation of ReLU, we backpropagate the error of the SNN through the proxy ANN to update the shared weights, simply by replacing the ANN's final output with that of the SNN. We apply the proposed proxy learning to deep convolutional SNNs and evaluate it on two benchmark datasets, Fashion-MNIST and CIFAR10, reaching classification accuracies of 94.56% and 93.11%, respectively. The proposed networks outperform other deep SNNs trained with tandem learning or surrogate gradient learning, or converted from deep ANNs. Converted SNNs require long simulation times to reach reasonable accuracy, whereas our proxy learning yields efficient SNNs with much shorter simulation times.
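A minimal sketch of the proxy idea, assuming a single shared linear layer, soft-reset IF neurons, and a straight-through substitution of the SNN output into the ANN graph:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

shared = nn.Linear(784, 10)                 # weights shared by the ANN and SNN branches

def ann_forward(x):
    return F.relu(shared(x))                # ReLU proxy branch (differentiable)

def snn_forward(x, T=20, thr=1.0):
    """Rate-coded IF branch: spike rates over T steps approximate the ReLU output."""
    mem = torch.zeros(x.size(0), 10)
    count = torch.zeros_like(mem)
    for _ in range(T):
        mem = mem + shared(x)
        spk = (mem >= thr).float()
        mem = mem - spk * thr               # soft reset
        count = count + spk
    return count / T

x = torch.rand(32, 784)
target = torch.randint(0, 10, (32,))

ann_out = ann_forward(x)
with torch.no_grad():
    snn_out = snn_forward(x)

# Substitute the SNN output for the ANN output: the loss value is computed on the
# SNN's rates, but gradients flow back through the ANN graph to the shared weights.
proxy_out = ann_out + (snn_out - ann_out).detach()
loss = F.cross_entropy(proxy_out, target)
loss.backward()
```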
Due to the fundamental limits on reducing the power consumption of deep learning models running on von Neumann architectures, research on neuromorphic systems based on low-power spiking neural networks is in the spotlight. To integrate large numbers of neurons, each neuron needs to occupy a small area, but as technology scales down, analog neurons are difficult to shrink and suffer from reduced voltage headroom/dynamic range and circuit nonlinearities. In light of this, this paper first models the nonlinear behavior of an existing current-mirror-based voltage-domain neuron designed in a 28nm process and shows that neuron nonlinearity severely degrades SNN inference accuracy. To mitigate this problem, we then propose a novel neuron that integrates input spikes in the time domain and greatly improves linearity, thereby improving inference accuracy compared with the existing voltage-domain neuron. Tested on the MNIST dataset, the inference error rate of the proposed neuron differs from that of an ideal neuron by less than 0.1%.
Spiking Neural Networks (SNNs) have gained huge attention as a potential energy-efficient alternative to conventional Artificial Neural Networks (ANNs) due to their inherent high-sparsity activation. Recently, SNNs with backpropagation through time (BPTT) have achieved a higher accuracy result on image recognition tasks than other SNN training algorithms. Despite the success from the algorithm perspective, prior works neglect the evaluation of the hardware energy overheads of BPTT due to the lack of a hardware evaluation platform for this SNN training algorithm. Moreover, although SNNs have long been seen as an energy-efficient counterpart of ANNs, a quantitative comparison between the training cost of SNNs and ANNs is missing. To address the aforementioned issues, in this work, we introduce SATA (Sparsity-Aware Training Accelerator), a BPTT-based training accelerator for SNNs. The proposed SATA provides a simple and re-configurable systolic-based accelerator architecture, which makes it easy to analyze the training energy for BPTT-based SNN training algorithms. By utilizing the sparsity, SATA increases its computation energy efficiency by $5.58 \times$ compared to the one without using sparsity. Based on SATA, we show quantitative analyses of the energy efficiency of SNN training and compare the training cost of SNNs and ANNs. The results show that, on Eyeriss-like systolic-based architecture, SNNs consume $1.27\times$ more total energy with sparsities when compared to ANNs. We find that such high training energy cost is from time-repetitive convolution operations and data movements during backpropagation. Moreover, to propel the future SNN training algorithm design, we provide several observations on energy efficiency for different SNN-specific training parameters and propose an energy estimation framework for SNN training. Code for our framework is made publicly available.
Spiking Neural networks (SNN) have emerged as an attractive spatio-temporal computing paradigm for a wide range of low-power vision tasks. However, state-of-the-art (SOTA) SNN models either incur multiple time steps which hinder their deployment in real-time use cases or increase the training complexity significantly. To mitigate this concern, we present a training framework (from scratch) for one-time-step SNNs that uses a novel variant of the recently proposed Hoyer regularizer. We estimate the threshold of each SNN layer as the Hoyer extremum of a clipped version of its activation map, where the clipping threshold is trained using gradient descent with our Hoyer regularizer. This approach not only downscales the value of the trainable threshold, thereby emitting a large number of spikes for weight update with a limited number of iterations (due to only one time step) but also shifts the membrane potential values away from the threshold, thereby mitigating the effect of noise that can degrade the SNN accuracy. Our approach outperforms existing spiking, binary, and adder neural networks in terms of the accuracy-FLOPs trade-off for complex image recognition tasks. Downstream experiments on object detection also demonstrate the efficacy of our approach.
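A sketch of the thresholding idea, assuming the Hoyer-square form ||z||_1^2 / ||z||_2^2 for the regularizer and ||z||_2^2 / ||z||_1 for its extremum; the clipping threshold and regularizer coefficient below are illustrative, not the paper's settings.

```python
import torch

def hoyer_square(z, eps=1e-8):
    """Hoyer-square sparsity measure: ||z||_1^2 / ||z||_2^2 (assumed form)."""
    return z.abs().sum() ** 2 / (z.pow(2).sum() + eps)

def hoyer_extremum(z, eps=1e-8):
    """Extremum of the Hoyer measure, used here as the layer threshold:
    ||z||_2^2 / ||z||_1 (assumed form)."""
    return z.pow(2).sum() / (z.abs().sum() + eps)

# One-time-step layer (sketch): clip the activation map, take the Hoyer extremum
# of the clipped map as the firing threshold, and add the Hoyer regularizer to the loss.
clip_thr = torch.nn.Parameter(torch.tensor(1.0))    # trained with gradient descent
act = torch.rand(128, 64) * 2.0                      # pre-activations of one layer
clipped = torch.minimum(act.relu(), clip_thr)        # clip, keeping clip_thr trainable
v_th = hoyer_extremum(clipped.detach())              # layer threshold for this batch
spikes = (act >= v_th).float()                       # single-time-step binary output
reg_loss = 1e-4 * hoyer_square(clipped)              # illustrative regularizer weight
```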
Spiking Neural Networks (SNNs) are of great importance due to their high biological plausibility and low energy consumption on neuromorphic hardware. As an efficient way to obtain deep SNNs, conversion methods have shown high performance on various large-scale datasets. However, they typically suffer from severe performance degradation and high time delays. In particular, most previous work has focused on simple classification tasks while ignoring the precise approximation of ANN outputs. In this paper, we first theoretically analyze the conversion errors and derive the harmful effect of time-varying extremes on synaptic currents. We propose Spike Calibration (SpiCalib) to eliminate the damage of discrete spikes to the output distribution, and modify LIPooling so that arbitrary MaxPooling layers can be converted losslessly. Furthermore, Bayesian optimization of the normalization parameters is proposed to avoid empirical settings. Experimental results demonstrate state-of-the-art performance on classification, object detection, and segmentation tasks. To the best of our knowledge, this is the first time SNNs comparable to ANNs have been obtained on all of these tasks simultaneously. Moreover, we need only 1/50 of the inference time of previous work on the detection task and can achieve the same performance on the segmentation task at 0.492x the cost.
Spiking neural networks (SNNs) are promising for edge devices, since the event-driven operation of SNNs provides significantly lower power than artificial neural networks (ANNs). While it is difficult to train SNNs efficiently, many techniques have been developed to convert trained ANNs into SNNs. However, after conversion there is a trade-off between accuracy and latency in SNNs, causing considerable accuracy loss on large-scale datasets such as ImageNet. We present a technique named TCL to alleviate this trade-off, enabling accuracies of 73.87% (VGG-16) and 70.37% (ResNet-34) with a moderate latency of 250 cycles in SNNs.
Spiking neural networks (SNNs) are receiving increasing attention due to their low power consumption and strong bio-plausibility. Optimization of SNNs is a challenging task. Two main methods, artificial neural network (ANN)-to-SNN conversion and spike-based backpropagation (BP), both have their advantages and limitations. For ANN-to-SNN conversion, it requires a long inference time to approximate the accuracy of ANN, thus diminishing the benefits of SNN. With spike-based BP, training high-precision SNNs typically consumes dozens of times more computational resources and time than their ANN counterparts. In this paper, we propose a novel SNN training approach that combines the benefits of the two methods. We first train a single-step SNN by approximating the neural potential distribution with random noise, then convert the single-step SNN to a multi-step SNN losslessly. The introduction of Gaussian distributed noise leads to a significant gain in accuracy after conversion. The results show that our method considerably reduces the training and inference times of SNNs while maintaining their high accuracy. Compared to the previous two methods, ours can reduce training time by 65%-75% and achieves more than 100 times faster inference speed. We also argue that the neuron model augmented with noise makes it more bio-plausible.
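A rough sketch of the idea under simple assumptions: a single-step IF neuron whose membrane potential is perturbed with Gaussian noise during training, reused afterwards as an ordinary multi-step IF neuron at inference; noise level, threshold, and time-step count are illustrative.

```python
import torch
import torch.nn as nn

class NoisyIF(nn.Module):
    """Single-step IF neuron trained with Gaussian noise on the membrane potential
    (illustrative sketch of the idea, not the authors' exact formulation)."""
    def __init__(self, thr=1.0, noise_std=0.3):
        super().__init__()
        self.thr, self.noise_std = thr, noise_std

    def forward(self, current):
        v = current
        if self.training:
            v = v + torch.randn_like(v) * self.noise_std  # mimic multi-step potential spread
        spike = (v >= self.thr).float()
        return spike + (v - v.detach())                   # straight-through surrogate gradient

def multi_step_inference(fc, x, T=8, thr=1.0):
    """After training, reuse the same weights in a plain multi-step IF neuron."""
    mem, out = torch.zeros(x.size(0), fc.out_features), 0.0
    for _ in range(T):
        mem = mem + fc(x)
        spk = (mem >= thr).float()
        mem = mem - spk * thr
        out = out + spk
    return out / T

fc, neuron = nn.Linear(100, 10), NoisyIF()
train_out = neuron(fc(torch.rand(4, 100)))   # single-step, noise-augmented training pass
test_out = multi_step_inference(fc, torch.rand(4, 100))
```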
Spiking Neural Networks (SNNs) have gained attention as a potential energy-efficient alternative to conventional Artificial Neural Networks (ANNs) due to their inherent high-sparsity activation. However, most prior SNN methods use ANN-like architectures (e.g., VGG-Net or ResNet), which can yield sub-optimal performance for the temporal processing of binary information in SNNs. To address this, we introduce a novel Neural Architecture Search (NAS) approach to find better SNN architectures. Inspired by recent NAS approaches that identify the optimal architecture from activation patterns at initialization, we select architectures that can represent diverse spike activation patterns across different data samples, without any training. Furthermore, to better exploit the temporal information among spikes, we search not only for feed-forward connections but also for backward connections (i.e., temporal feedback connections) between layers. Interestingly, the SNASNet found by our search algorithm achieves higher performance with backward connections, demonstrating the importance of designing SNN architectures that make proper use of temporal information. We conduct extensive experiments on three image recognition benchmarks and show that SNASNet achieves state-of-the-art performance with a significantly lower number of time steps (5 time steps). Code is available on GitHub.
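One way to picture the training-free scoring (an assumption modeled on NASWOT-style kernels, not taken from the paper): record each sample's binary spike pattern at initialization and prefer architectures whose patterns are mutually distinct.

```python
import numpy as np

rng = np.random.default_rng(0)

def spike_pattern_score(spike_codes):
    """spike_codes: (N samples, M neurons) binary matrix of spike activity at
    initialization. Higher log|det K| means the samples map to more distinct
    spike patterns (NASWOT-style agreement kernel, assumed formulation)."""
    n, m = spike_codes.shape
    hamming = (spike_codes[:, None, :] != spike_codes[None, :, :]).sum(-1)
    K = m - hamming                          # pairwise agreement kernel
    sign, logdet = np.linalg.slogdet(K)
    return logdet if sign > 0 else -np.inf

# Score two hypothetical candidate architectures via the spike patterns
# produced by a single untrained forward pass on a mini-batch.
codes_a = (rng.random((16, 256)) < 0.2).astype(int)
codes_b = (rng.random((16, 256)) < 0.5).astype(int)
print(spike_pattern_score(codes_a), spike_pattern_score(codes_b))
```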
Spiking Neural Networks (SNNs) are bio-plausible models that hold great potential for realizing energy-efficient implementations of sequential tasks on resource-constrained edge devices. However, commercial edge platforms based on standard GPUs are not optimized to deploy SNNs, resulting in high energy and latency. While analog In-Memory Computing (IMC) platforms can serve as energy-efficient inference engines, they are accursed by the immense energy, latency, and area requirements of high-precision ADCs (HP-ADC), overshadowing the benefits of in-memory computations. We propose a hardware/software co-design methodology to deploy SNNs into an ADC-Less IMC architecture using sense-amplifiers as 1-bit ADCs replacing conventional HP-ADCs and alleviating the above issues. Our proposed framework incurs minimal accuracy degradation by performing hardware-aware training and is able to scale beyond simple image classification tasks to more complex sequential regression tasks. Experiments on complex tasks of optical flow estimation and gesture recognition show that progressively increasing the hardware awareness during SNN training allows the model to adapt and learn the errors due to the non-idealities associated with ADC-Less IMC. Also, the proposed ADC-Less IMC offers significant energy and latency improvements, $2-7\times$ and $8.9-24.6\times$, respectively, depending on the SNN model and the workload, compared to HP-ADC IMC.
Spiking Neural Networks (SNNs) have become an interesting alternative to conventional Artificial Neural Networks (ANNs) thanks to their temporal processing capabilities, their low SWaP (size, weight, and power), and their energy-efficient implementation in neuromorphic hardware. However, the challenges involved in training SNNs have limited their accuracy and thus their applications. Improving learning algorithms and neural architectures for more accurate feature extraction is therefore one of the current priorities in SNN research. In this paper, we present a study of the key components of modern spiking architectures. We empirically compare different techniques drawn from the best-performing networks on image classification datasets. We design a spiking version of the successful residual network (ResNet) architecture and test different components and training strategies. Our results provide a state-of-the-art guide to SNN design, allowing informed choices when building an optimal visual feature extractor. Finally, our network outperforms previous SNN architectures on the CIFAR-10 (94.1%) and CIFAR-100 (74.5%) datasets and matches the state of the art on DVS-CIFAR10 (71.3%), with fewer parameters than the previous state of the art and without the need for ANN-SNN conversion. Code is available at https://github.com/vicenteax/spiking_resnet.
We present MEMprop, the adoption of gradient-based learning to train fully memristive spiking neural networks (MSNNs). Our approach harnesses intrinsic device dynamics to trigger naturally arising voltage spikes. These spikes emitted by memristor dynamics are analog in nature and therefore fully differentiable, which eliminates the need for the surrogate-gradient methods that are prevalent in the SNN literature. Memristive neural networks typically either integrate memristors as synapses that map offline-trained networks, or otherwise rely on associative learning mechanisms to train networks of memristive neurons. Instead, we apply the backpropagation-through-time (BPTT) training algorithm directly on analog SPICE models of memristive neurons and synapses. Our implementation is fully memristive, in that synaptic weights and spiking neurons are both integrated on resistive RAM (RRAM) arrays, without additional circuits to implement spiking dynamics such as analog-to-digital converters (ADCs) or thresholded comparators. As a result, higher-order electrophysical effects are fully exploited to use the state-driven dynamics of memristive neurons at run time. By moving toward non-approximate gradient-based learning, we obtain highly competitive accuracy among previously reported lightweight dense fully memristive SNNs on several benchmarks.
Deep Spiking Neural Networks (SNNs) are currently difficult to optimize with gradient-based methods due to their discrete binary activations and complex spatio-temporal dynamics. Considering the huge success of ResNet in deep learning, it would be natural to train deep SNNs with residual learning. Previous Spiking ResNets mimic the standard residual block of ANNs and simply replace the ReLU activation layers with spiking neurons, which suffers from the degradation problem and can hardly implement residual learning. In this paper, we propose the Spike-Element-Wise (SEW) ResNet to realize residual learning in deep SNNs. We prove that SEW ResNet can easily implement identity mapping and overcome the vanishing/exploding gradient problems of Spiking ResNet. We evaluate SEW ResNet on the ImageNet, DVS Gesture, and CIFAR10-DVS datasets and show that it outperforms state-of-the-art directly trained SNNs in both accuracy and time steps. Moreover, SEW ResNet achieves higher performance by simply adding more layers, providing a simple way to train deep SNNs. To the best of our knowledge, this is the first time that deep SNNs with more than 100 layers have been directly trained. Our code is available at https://github.com/fangwei123456/spike-element-wise-resnet.
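A minimal single-time-step sketch of a SEW block, with the element-wise function g chosen from ADD/AND/IAND; the spiking neuron is a placeholder threshold rather than the paper's neuron model.

```python
import torch
import torch.nn as nn

class SEWBlock(nn.Module):
    """Spike-Element-Wise residual block (sketch): the residual connection combines
    two spike tensors with an element-wise function g instead of adding pre-activations."""
    def __init__(self, channels, g="ADD"):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
        )
        self.g = g

    def spike(self, x, thr=1.0):
        return (x >= thr).float()            # placeholder spiking neuron (single step)

    def forward(self, s_in):                 # s_in: binary spike tensor from the previous block
        s = self.spike(self.conv(s_in))
        if self.g == "ADD":
            return s + s_in                  # identity mapping when the branch emits no spikes
        if self.g == "AND":
            return s * s_in
        if self.g == "IAND":
            return (1.0 - s) * s_in
        raise ValueError(self.g)

block = SEWBlock(16)
out = block((torch.rand(2, 16, 8, 8) > 0.7).float())
```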
Spiking neural networks (SNNs) are promising brain-inspired energy-efficient models. Recent progress in training methods has enabled successful deep SNNs on large-scale tasks with low latency. Particularly, backpropagation through time (BPTT) with surrogate gradients (SG) is popularly used to achieve high performance in a very small number of time steps. However, it is at the cost of large memory consumption for training, lack of theoretical clarity for optimization, and inconsistency with the online property of biological learning and rules on neuromorphic hardware. Other works connect spike representations of SNNs with equivalent artificial neural network formulation and train SNNs by gradients from equivalent mappings to ensure descent directions. But they fail to achieve low latency and are also not online. In this work, we propose online training through time (OTTT) for SNNs, which is derived from BPTT to enable forward-in-time learning by tracking presynaptic activities and leveraging instantaneous loss and gradients. Meanwhile, we theoretically analyze and prove that gradients of OTTT can provide a similar descent direction for optimization as gradients based on spike representations under both feedforward and recurrent conditions. OTTT only requires constant training memory costs agnostic to time steps, avoiding the significant memory costs of BPTT for GPU training. Furthermore, the update rule of OTTT is in the form of three-factor Hebbian learning, which could pave a path for online on-chip learning. With OTTT, it is the first time that two mainstream supervised SNN training methods, BPTT with SG and spike representation-based training, are connected, and meanwhile in a biologically plausible form. Experiments on CIFAR-10, CIFAR-100, ImageNet, and CIFAR10-DVS demonstrate the superior performance of our method on large-scale static and neuromorphic datasets in small time steps.
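A simplified sketch of the forward-in-time update, assuming one output layer, a leaky presynaptic trace, and an instantaneous loss at every step; the decay, threshold, and learning rate are illustrative.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
n_in, n_out, T = 100, 10, 20
W = 0.1 * torch.randn(n_out, n_in)           # weights of one output layer, updated online
lam, lr, thr = 0.5, 1e-2, 1.0                # trace decay, learning rate, threshold (illustrative)
trace = torch.zeros(n_in)                    # presynaptic trace: the only history kept across steps
mem = torch.zeros(n_out)
target = torch.tensor(3)

for t in range(T):
    x = (torch.rand(n_in) < 0.2).float()     # input spikes at step t
    trace = lam * trace + x                  # running trace replaces the stored BPTT history
    mem = lam * mem + W @ x                  # leaky integration
    spikes = (mem >= thr).float()
    mem = mem - spikes * thr                 # soft reset
    # Instantaneous loss at this step and its error signal at the output layer.
    err = torch.softmax(mem, dim=0) - F.one_hot(target, n_out).float()
    # Online, three-factor-style update: (output error) x (presynaptic trace).
    W = W - lr * torch.outer(err, trace)
```

Because only `trace` and `mem` persist between steps, the memory cost is constant in the number of time steps, which is the property the abstract contrasts with BPTT.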
In this paper, we present an energy-efficient SNN architecture that can seamlessly run deep spiking neural networks (SNNs) with improved accuracy. First, we propose conversion-aware training (CAT) to reduce ANN-to-SNN conversion loss without hardware implementation overhead. In the proposed CAT, the activation function used to simulate the SNN during ANN training is effectively exploited to reduce the data representation error after conversion. Based on the CAT technique, we also present a time-to-first-spike coding that enables lightweight computation by using spike-time information. An SNN processor design supporting the proposed techniques has been implemented in a 28nm CMOS process. The processor achieves inference energies of 486.7 uJ, 503.6 uJ, and 1426 uJ at top-1 accuracies of 91.7%, 67.9%, and 57.4% for CIFAR-10, CIFAR-100, and Tiny-ImageNet, respectively, when running VGG-16 with 5-bit logarithmic weights.
There is growing interest in spiking neural networks (SNNs) on neuromorphic computing devices due to their low energy consumption. Recent advances have made it possible to train SNNs that begin to compete with conventional artificial neural networks (ANNs) in accuracy while remaining energy efficient when run on neuromorphic hardware. However, the training process for SNNs is still based on dense tensor operations originally developed for ANNs, which do not exploit the spatio-temporally sparse nature of SNNs. We introduce the first sparse SNN backpropagation algorithm, which achieves the same or better accuracy as current state-of-the-art methods while being significantly faster and more memory efficient. We demonstrate its effectiveness on real datasets of varying complexity (Fashion-MNIST, Neuromorphic-MNIST, and Spiking Heidelberg Digits), achieving a speedup in the backward pass of up to 150x without loss of accuracy.
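The core observation can be sketched as follows (illustrative, not the paper's exact kernels): the weight-gradient contribution of a presynaptic neuron is zero at steps where it did not spike, so the backward accumulation only needs to touch the active columns.

```python
import numpy as np

rng = np.random.default_rng(0)
T, n_pre, n_post = 100, 512, 128
W = rng.normal(0, 0.1, (n_post, n_pre))
pre_spikes = rng.random((T, n_pre)) < 0.05          # sparse spike raster (5% activity)
post_err = rng.normal(0, 1.0, (T, n_post))          # upstream error at each time step

# Dense gradient: iterate over every (time step, presynaptic neuron) pair.
grad_dense = np.zeros_like(W)
for t in range(T):
    grad_dense += np.outer(post_err[t], pre_spikes[t].astype(float))

# Sparse gradient: only presynaptic neurons that spiked at step t can contribute,
# so restrict the accumulation to the active columns.
grad_sparse = np.zeros_like(W)
for t in range(T):
    active = np.flatnonzero(pre_spikes[t])
    if active.size:
        grad_sparse[:, active] += post_err[t][:, None]

print(np.allclose(grad_dense, grad_sparse))          # True: same result, far fewer operations
```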
Spiking Neural Networks (SNNs) have recently emerged as a new generation of low-power deep neural networks, in which binary spikes convey information across multiple time steps. Pruning for SNNs is highly important, as they are deployed on resource-constrained mobile/edge devices. Previous SNN pruning works focus on shallow SNNs (2-6 layers); however, deeper SNNs (>16 layers) have been proposed by state-of-the-art SNN works and are difficult to handle with current pruning methods. To scale pruning techniques toward deep SNNs, we investigate the Lottery Ticket Hypothesis (LTH), which states that dense networks contain smaller subnetworks (i.e., winning tickets) that perform comparably to the dense networks. Our study of LTH reveals that winning tickets consistently exist in deep SNNs across various datasets and architectures, providing up to 97% sparsity without severe performance degradation. However, the iterative search process of LTH incurs a huge training cost when combined with the multiple time steps of SNNs. To alleviate this heavy search cost, we propose Early-Time (ET) tickets, in which the important weight connectivity is found from a small number of time steps. The proposed ET tickets can be seamlessly combined with common techniques for finding winning tickets, such as Iterative Magnitude Pruning (IMP) and Early-Bird (EB) tickets. Our experimental results show that the proposed ET tickets reduce search time by up to 38% compared with the IMP or EB methods.
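A sketch of how an Early-Time ticket search might be combined with magnitude pruning; the training loop is a placeholder and all hyperparameters (time steps, sparsity) are illustrative.

```python
import copy
import torch
import torch.nn as nn

def train(model, num_steps, epochs=1):
    """Placeholder for SNN training unrolled over `num_steps` time steps."""
    return model                             # omitted: actual surrogate-gradient BPTT loop

def magnitude_mask(model, sparsity):
    """Global magnitude pruning: keep the largest-magnitude fraction of weights."""
    weights = torch.cat([p.abs().flatten() for p in model.parameters()])
    k = int(len(weights) * sparsity)
    thr = weights.kthvalue(k).values
    return [(p.abs() > thr).float() for p in model.parameters()]

model = nn.Sequential(nn.Linear(784, 300), nn.Linear(300, 10))
init_state = copy.deepcopy(model.state_dict())

# Early-Time ticket: search for the winning connectivity with few time steps (cheap),
# then rewind to the initialization and train the pruned network with the full T.
search_model = train(copy.deepcopy(model), num_steps=2)     # reduced-step search phase
mask = magnitude_mask(search_model, sparsity=0.9)

model.load_state_dict(init_state)
with torch.no_grad():
    for p, m in zip(model.parameters(), mask):
        p.mul_(m)                            # apply the winning-ticket mask
winner = train(model, num_steps=25)          # final training at the full time-step count
```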