Spiking neural networks (SNNs) are efficient computation models for low-power environments. Spike-based backpropagation (BP) algorithms and ANN-to-SNN (ANN2SNN) conversion are both successful techniques for SNN training. Nevertheless, spike-based BP training is slow and requires large memory costs. Though ANN2SNN provides a low-cost way to train SNNs, it requires many inference steps to mimic the well-trained ANN and perform well. In this paper, we propose an SNN-to-ANN (SNN2ANN) framework to train SNNs in a fast and memory-efficient way. SNN2ANN consists of two components: a) a weight-sharing architecture between the ANN and the SNN, and b) spiking mapping units. First, the architecture trains the weight-sharing parameters on the ANN branch, resulting in fast training and low memory costs for the SNN. Second, the spiking mapping units ensure that the activation values of the ANN are spiking features. As a result, the classification error of the SNN can be optimized by training the ANN branch. In addition, we design an adaptive threshold adjustment (ATA) algorithm to address the noisy spike problem. Experimental results show that our SNN2ANN-based models perform well on benchmark datasets (CIFAR10, CIFAR100, and Tiny-ImageNet). Moreover, the SNN2ANN models achieve comparable accuracy at 0.625x the time steps, 0.377x the training time, 0.27x the GPU memory cost, and 0.33x the spike activity of the spike-based BP model.
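The spiking mapping unit described above constrains ANN activations to the discrete set of average spike rates an integrate-and-fire (IF) neuron can emit in T time steps. A minimal clip-and-quantize sketch of that idea follows; the function and parameter names are our own assumptions, not the authors' code.

```python
import numpy as np

def spike_mapping(x, threshold=1.0, T=4):
    """Constrain a ReLU activation to the T+1 average spike rates an IF
    neuron with firing threshold `threshold` can emit in T time steps:
    {0, threshold/T, 2*threshold/T, ..., threshold}."""
    x = np.clip(x, 0.0, threshold)                      # ReLU + clip at threshold
    return np.round(x * T / threshold) * threshold / T  # snap to spike-rate grid
```

Training the ANN branch on such quantized activations lets the shared weights account for the discreteness that the SNN branch will see at inference.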
Spiking neural networks (SNNs) are receiving increasing attention due to their low power consumption and strong bio-plausibility. Optimization of SNNs is a challenging task. Two main methods, artificial neural network (ANN)-to-SNN conversion and spike-based backpropagation (BP), both have their advantages and limitations. For ANN-to-SNN conversion, it requires a long inference time to approximate the accuracy of ANN, thus diminishing the benefits of SNN. With spike-based BP, training high-precision SNNs typically consumes dozens of times more computational resources and time than their ANN counterparts. In this paper, we propose a novel SNN training approach that combines the benefits of the two methods. We first train a single-step SNN by approximating the neural potential distribution with random noise, then convert the single-step SNN to a multi-step SNN losslessly. The introduction of Gaussian distributed noise leads to a significant gain in accuracy after conversion. The results show that our method considerably reduces the training and inference times of SNNs while maintaining their high accuracy. Compared to the previous two methods, ours can reduce training time by 65%-75% and achieves more than 100 times faster inference speed. We also argue that the neuron model augmented with noise makes it more bio-plausible.
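The role of the Gaussian noise described above can be seen in a toy single-step IF neuron: perturbing the membrane potential before thresholding turns the expected output into a smooth function of the potential rather than a hard step. A hedged sketch, where the names and the noise scale are our assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_if_rate(v, threshold=1.0, sigma=0.3, n_samples=10_000):
    """Average output of a single-step IF neuron whose membrane potential v
    is perturbed by Gaussian noise: approximately Phi((v - threshold)/sigma),
    a smooth sigmoid-like curve instead of a hard step function."""
    noise = rng.normal(0.0, sigma, size=n_samples)
    return (v + noise >= threshold).astype(float).mean()
```

Because the expectation is differentiable in v, a noise-augmented single-step neuron is easier to optimize than the bare Heaviside threshold.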
Spiking neural networks (SNNs) have recently emerged as an alternative to deep learning owing to their sparse, asynchronous, and binary event- (or spike-) driven processing, which can yield huge energy-efficiency benefits on neuromorphic hardware. However, training high-accuracy and low-latency SNNs from scratch suffers from the non-differentiable nature of the spiking neuron. To address this training issue in SNNs, we revisit batch normalization and propose a temporal Batch Normalization Through Time (BNTT) technique. Most prior SNN works have disregarded batch normalization, deeming it ineffective for training temporal SNNs. Different from previous works, our proposed BNTT decouples the parameters in a BNTT layer along the time axis to capture the temporal dynamics of spikes. The temporally evolving learnable parameters in BNTT allow a neuron to control its spike rate through different time steps, enabling low-latency and low-energy training from scratch. We conduct experiments on the CIFAR-10, CIFAR-100, Tiny-ImageNet, and event-driven DVS-CIFAR10 datasets. BNTT allows us to train deep SNN architectures from scratch, for the first time, on these complex datasets with just 25-30 time steps. We also propose an early-exit algorithm using the parameters of BNTT to reduce the latency at inference, which further improves energy efficiency.
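Decoupling batch-norm parameters along the time axis can be sketched as one learnable scale vector per time step. A minimal NumPy sketch of the idea (biases and running statistics are omitted for brevity, and all names are ours, not the paper's):

```python
import numpy as np

class BNTT:
    """Batch normalization with parameters decoupled along the time axis:
    one learnable scale vector per time step. Illustrative sketch only."""
    def __init__(self, n_channels, T, eps=1e-5):
        self.gamma = np.ones((T, n_channels))  # time-dependent learnable scale
        self.eps = eps

    def __call__(self, x, t):
        # x: (batch, channels) pre-activations at time step t
        mu = x.mean(axis=0, keepdims=True)
        var = x.var(axis=0, keepdims=True)
        return self.gamma[t] * (x - mu) / np.sqrt(var + self.eps)
```

Because `gamma[t]` differs per time step, the network can learn to suppress or amplify spiking activity at different points in the simulation window.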
We propose a new learning algorithm to train spiking neural networks (SNNs) using conventional artificial neural networks (ANNs) as a proxy. We couple two networks, an SNN and an ANN, made of integrate-and-fire (IF) and ReLU neurons respectively, with the same network architecture and shared synaptic weights. The forward passes of the two networks are totally independent. By assuming the rate-coded IF neuron to be an approximation of ReLU, we backpropagate the error of the SNN through the proxy ANN to update the shared weights, simply by replacing the ANN's final output with that of the SNN. We applied the proposed proxy learning to deep convolutional SNNs and evaluated it on two benchmark datasets, Fashion-MNIST and CIFAR10, reaching 94.56% and 93.11% classification accuracy, respectively. The proposed networks outperform other deep SNNs trained with tandem learning or surrogate gradient learning, or converted from deep ANNs. Converted SNNs require long simulation times to reach reasonable accuracy, whereas our proxy learning leads to efficient SNNs with much shorter simulation times.
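The core trick above can be shown on a single neuron: the loss is evaluated at the SNN's rate-coded output, but the gradient is taken through the ANN's ReLU path. This is a toy sketch under our own naming, not the authors' implementation:

```python
import numpy as np

def if_rate(x, w, T=100, threshold=1.0):
    """Rate-coded output of one IF neuron driven by constant input w*x
    for T time steps (soft reset by threshold subtraction)."""
    v, spikes = 0.0, 0
    for _ in range(T):
        v += w * x
        if v >= threshold:
            spikes += 1
            v -= threshold
    return spikes / T * threshold

def proxy_grad(x, w, target, T=100):
    """Proxy-learning gradient for a squared loss: the error is computed at
    the SNN output, but differentiated via the ReLU proxy relu(w*x)."""
    err = if_rate(x, w, T) - target
    relu_grad_w = x if w * x > 0 else 0.0  # d relu(w*x) / dw
    return err * relu_grad_w
```

Since the IF rate closely tracks the clipped ReLU value, the proxy gradient points in nearly the same direction as the true (non-differentiable) SNN gradient.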
Spiking neural networks (SNNs) have recently emerged as a low-power alternative to artificial neural networks (ANNs) owing to their asynchronous, sparse, and binary information processing. To improve energy efficiency and throughput, SNNs can be implemented on memristive crossbars, where multiply-and-accumulate (MAC) operations are realized in the analog domain using emerging non-volatile memory (NVM) devices. Despite the compatibility of SNNs with memristive crossbars, little attention has been paid to the effect of intrinsic crossbar non-idealities and stochasticity on the performance of SNNs. In this paper, we conduct a comprehensive analysis of the robustness of SNNs on non-ideal crossbars. We examine SNNs trained via learning algorithms such as surrogate gradients and ANN-SNN conversion. Our results show that repeated crossbar computations across multiple time steps induce error accumulation, resulting in a huge performance drop during SNN inference. We further show that SNNs trained with a smaller number of time steps achieve better accuracy when deployed on memristive crossbars.
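The error-accumulation effect reported above can be illustrated by repeating a noisy analog MAC over several time steps: each crossbar read perturbs the effective weights, and the per-read deviations add up in the accumulated output. A hedged NumPy sketch with an assumed multiplicative-noise model:

```python
import numpy as np

rng = np.random.default_rng(1)

def crossbar_mac(x, w, sigma):
    """One analog MAC on a non-ideal crossbar: conductances are perturbed
    multiplicatively with relative standard deviation sigma on each read."""
    w_noisy = w * (1.0 + rng.normal(0.0, sigma, size=w.shape))
    return x @ w_noisy

def accumulated_error(x, w, T, sigma):
    """Total deviation from the ideal result when the same MAC is repeated
    over T SNN time steps and the outputs are summed."""
    ideal = T * (x @ w)
    noisy = sum(crossbar_mac(x, w, sigma) for _ in range(T))
    return np.abs(noisy - ideal).sum()
```

Under this model the accumulated deviation grows with T, which is consistent with the paper's observation that fewer time steps help on non-ideal hardware.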
Spiking Neural networks (SNN) have emerged as an attractive spatio-temporal computing paradigm for a wide range of low-power vision tasks. However, state-of-the-art (SOTA) SNN models either incur multiple time steps which hinder their deployment in real-time use cases or increase the training complexity significantly. To mitigate this concern, we present a training framework (from scratch) for one-time-step SNNs that uses a novel variant of the recently proposed Hoyer regularizer. We estimate the threshold of each SNN layer as the Hoyer extremum of a clipped version of its activation map, where the clipping threshold is trained using gradient descent with our Hoyer regularizer. This approach not only downscales the value of the trainable threshold, thereby emitting a large number of spikes for weight update with a limited number of iterations (due to only one time step) but also shifts the membrane potential values away from the threshold, thereby mitigating the effect of noise that can degrade the SNN accuracy. Our approach outperforms existing spiking, binary, and adder neural networks in terms of the accuracy-FLOPs trade-off for complex image recognition tasks. Downstream experiments on object detection also demonstrate the efficacy of our approach.
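The Hoyer quantities used above can be written compactly: the regularizer is (sum |z|)^2 / (sum z^2), and the "Hoyer extremum" used as the layer threshold estimate is (sum z^2) / (sum |z|). A small sketch of both (our implementation, not the authors'):

```python
import numpy as np

def hoyer_regularizer(z):
    """Hoyer sparsity measure (sum |z|)^2 / (sum z^2): it is >= 1 and equals
    1 for a one-hot vector, so penalizing it pushes activations toward
    sparsity."""
    a = np.abs(z)
    return a.sum() ** 2 / (z ** 2).sum()

def hoyer_extremum(z):
    """(sum z^2) / (sum |z|): an activation-weighted mean of |z|, used above
    as the estimate of a layer's firing threshold."""
    a = np.abs(z)
    return (z ** 2).sum() / a.sum()
```

Because large activations dominate both the numerator and the weighting, the extremum sits above the plain mean of |z|, which is what lets it serve as a data-driven threshold.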
Spiking neural networks (SNNs) are of great importance due to their high biological plausibility and low energy consumption on neuromorphic hardware. As an efficient method to obtain deep SNNs, the conversion approach has exhibited high performance on various large-scale datasets. However, it typically suffers from severe performance degradation and high time delays. In particular, most previous work has focused on simple classification tasks while ignoring the precise approximation to the ANN output. In this paper, we first theoretically analyze the conversion errors and derive the harmful effects of time-varying extremes on synaptic currents. We propose spike calibration (SpiCalib) to eliminate the damage of discrete spikes to the output distribution, and modify LIPooling so that arbitrary MaxPooling layers can be converted losslessly. Moreover, Bayesian optimization of the normalization parameters is proposed to avoid empirical settings. Experimental results demonstrate state-of-the-art performance on classification, object detection, and segmentation tasks. To the best of our knowledge, this is the first time SNNs comparable to ANNs have been obtained on all of these tasks simultaneously. Moreover, we need only 1/50 of the inference time of previous work on the detection task, and can achieve the same performance on the segmentation task at 0.492x the energy consumption.
Deep spiking neural networks (SNNs) present optimization difficulties for gradient-based approaches due to their discrete binary activations and complex spatio-temporal dynamics. Considering the huge success of ResNet in deep learning, it is natural to train deep SNNs with residual learning. Previous Spiking ResNets mimic the standard residual block of ANNs and simply replace the ReLU activation layers with spiking neurons, which suffers from the degradation problem and can hardly implement residual learning. In this paper, we propose the spike-element-wise (SEW) ResNet to realize residual learning in deep SNNs. We prove that SEW ResNet can easily implement identity mapping and overcome the vanishing/exploding gradient problems of Spiking ResNet. We evaluate our SEW ResNet on the ImageNet, DVS Gesture, and CIFAR10-DVS datasets, and show that SEW ResNet outperforms state-of-the-art directly trained SNNs in both accuracy and time steps. Moreover, SEW ResNet can achieve higher performance by simply adding more layers, providing a simple method to train deep SNNs. To our best knowledge, this is the first time that deep SNNs with more than 100 layers have been directly trained. Our code is available at https://github.com/fangwei123456/Spike-Element-Wise-ResNet.
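The element-wise residual functions described above combine the block's output spikes with the identity spikes; with ADD, an all-zero residual realizes exact identity mapping, which is what enables very deep directly trained SNNs. A minimal sketch of two such element-wise functions (ADD and IAND) on binary spike tensors:

```python
import numpy as np

def sew_add(residual, identity):
    """SEW connection with the ADD function: element-wise sum of the block's
    output spikes and the identity spikes."""
    return residual + identity

def sew_iand(residual, identity):
    """SEW connection with the IAND function: the identity spike passes only
    where the residual branch did not fire."""
    return (1 - residual) * identity
```

Note that ADD can produce values above 1 (spike counts rather than binary spikes), while IAND keeps the output binary; this trade-off is part of the design space the paper explores.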
Spiking Neural Networks (SNNs) have gained huge attention as a potential energy-efficient alternative to conventional Artificial Neural Networks (ANNs) due to their inherent high-sparsity activation. Recently, SNNs with backpropagation through time (BPTT) have achieved a higher accuracy result on image recognition tasks than other SNN training algorithms. Despite the success from the algorithm perspective, prior works neglect the evaluation of the hardware energy overheads of BPTT due to the lack of a hardware evaluation platform for this SNN training algorithm. Moreover, although SNNs have long been seen as an energy-efficient counterpart of ANNs, a quantitative comparison between the training cost of SNNs and ANNs is missing. To address the aforementioned issues, in this work, we introduce SATA (Sparsity-Aware Training Accelerator), a BPTT-based training accelerator for SNNs. The proposed SATA provides a simple and re-configurable systolic-based accelerator architecture, which makes it easy to analyze the training energy for BPTT-based SNN training algorithms. By utilizing the sparsity, SATA increases its computation energy efficiency by $5.58 \times$ compared to the one without using sparsity. Based on SATA, we show quantitative analyses of the energy efficiency of SNN training and compare the training cost of SNNs and ANNs. The results show that, on Eyeriss-like systolic-based architecture, SNNs consume $1.27\times$ more total energy with sparsities when compared to ANNs. We find that such high training energy cost is from time-repetitive convolution operations and data movements during backpropagation. Moreover, to propel the future SNN training algorithm design, we provide several observations on energy efficiency for different SNN-specific training parameters and propose an energy estimation framework for SNN training. Code for our framework is made publicly available.
How can we bring both privacy and energy efficiency to a neural system? In this paper, we propose PrivateSNN, which aims to build low-power spiking neural networks (SNNs) from a pre-trained ANN model without leaking sensitive information contained in the dataset. We tackle two types of leakage problems: 1) data leakage, caused when the network accesses real training data during the ANN-SNN conversion process; and 2) class leakage, caused when class-related features can be reconstructed from the network parameters. To address the data leakage issue, we generate synthetic images from the pre-trained ANN and convert the ANN to an SNN using the generated images. However, converted SNNs remain vulnerable to class leakage, since the weight parameters have the same (or scaled) values as the ANN parameters. Therefore, we encrypt the SNN weights by training the SNN with a temporal spike-based learning rule. Updating the weight parameters with temporal data makes the SNN difficult to interpret in the spatial domain. We observe that the encrypted PrivateSNN eliminates the data and class leakage issues with only a slight performance drop (less than ~2%) and a significant energy-efficiency gain (about 55x) compared to the standard ANN. We conduct extensive experiments on various datasets, including CIFAR10, CIFAR100, and TinyImageNet, highlighting the importance of privacy-preserving SNN training.
Spiking neural networks (SNNs) have become an interesting alternative to conventional artificial neural networks (ANNs) thanks to their temporal processing capabilities, their low SWaP (size, weight, and power), and their energy-efficient implementation in neuromorphic hardware. However, the challenges involved in training SNNs have limited their performance in terms of accuracy and thus their applications. Improving learning algorithms and neural architectures for more accurate feature extraction is therefore one of the current priorities in SNN research. In this paper, we present a study of the key components of modern spiking architectures. We empirically compare different techniques from the best-performing networks on image classification datasets. We design a spiking version of the successful residual network (ResNet) architecture and test different components and training strategies on it. Our results provide a state-of-the-art guide to SNN design, which allows informed choices when trying to build the optimal visual feature extractor. Finally, our network outperforms previous SNN architectures on the CIFAR-10 (94.1%) and CIFAR-100 (74.5%) datasets and matches the state of the art on DVS-CIFAR10 (71.3%), with fewer parameters than the previous state of the art and without the need for ANN-SNN conversion. Code is available at https://github.com/VicenteAlex/Spiking_ResNet.
Spiking neural networks (SNNs) are promising brain-inspired energy-efficient models. Recent progress in training methods has enabled successful deep SNNs on large-scale tasks with low latency. Particularly, backpropagation through time (BPTT) with surrogate gradients (SG) is popularly used to achieve high performance in a very small number of time steps. However, it is at the cost of large memory consumption for training, lack of theoretical clarity for optimization, and inconsistency with the online property of biological learning and rules on neuromorphic hardware. Other works connect spike representations of SNNs with equivalent artificial neural network formulation and train SNNs by gradients from equivalent mappings to ensure descent directions. But they fail to achieve low latency and are also not online. In this work, we propose online training through time (OTTT) for SNNs, which is derived from BPTT to enable forward-in-time learning by tracking presynaptic activities and leveraging instantaneous loss and gradients. Meanwhile, we theoretically analyze and prove that gradients of OTTT can provide a similar descent direction for optimization as gradients based on spike representations under both feedforward and recurrent conditions. OTTT only requires constant training memory costs agnostic to time steps, avoiding the significant memory costs of BPTT for GPU training. Furthermore, the update rule of OTTT is in the form of three-factor Hebbian learning, which could pave a path for online on-chip learning. With OTTT, it is the first time that two mainstream supervised SNN training methods, BPTT with SG and spike representation-based training, are connected, and meanwhile in a biologically plausible form. Experiments on CIFAR-10, CIFAR-100, ImageNet, and CIFAR10-DVS demonstrate the superior performance of our method on large-scale static and neuromorphic datasets in small time steps.
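The forward-in-time bookkeeping described above can be sketched as a running presynaptic trace, a_hat[t] = lam * a_hat[t-1] + s[t]: the weight gradient at step t is then the instantaneous output error at t multiplied by a_hat[t], so training memory stays constant in the number of time steps. An illustrative sketch (the decay constant and names are our assumptions):

```python
import numpy as np

def presynaptic_traces(spikes, lam=0.5):
    """Running trace a_hat[t] = lam * a_hat[t-1] + s[t], computed forward in
    time with O(1) memory; OTTT pairs it with the instantaneous loss gradient
    at each step to form the weight update."""
    trace = np.zeros(spikes.shape[1])
    out = []
    for s_t in spikes:  # iterate over time steps
        trace = lam * trace + s_t
        out.append(trace.copy())
    return np.array(out)
```

Because the trace is local to each synapse and the error signal is instantaneous, the resulting update has the three-factor Hebbian form mentioned in the abstract.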
In this paper, we present an energy-efficient SNN architecture that can seamlessly run deep spiking neural networks (SNNs) with improved accuracy. First, we propose conversion-aware training (CAT) to reduce ANN-to-SNN conversion loss without hardware implementation overhead. In the proposed CAT, the activation function used to simulate the SNN during ANN training is efficiently exploited to reduce the data representation error after conversion. Based on the CAT technique, we also present a time-to-first-spike coding that enables lightweight computation by using spike time information. An SNN processor design supporting the proposed techniques has been implemented in a 28nm CMOS process. The processor achieves inference energies of 486.7uJ, 503.6uJ, and 1426uJ with top-1 accuracies of 91.7%, 67.9%, and 57.4% on CIFAR-10, CIFAR-100, and Tiny-ImageNet, respectively, when running VGG-16 with 5-bit logarithmic weights.
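A time-to-first-spike code of the kind mentioned above carries each value in the timing of a single spike: larger inputs fire earlier. A minimal encoder sketch (the linear time mapping is our own simplification, not the processor's coding scheme):

```python
import numpy as np

def ttfs_encode(x, T=16):
    """Encode values in [0, 1] as one spike each: input x fires at time step
    round((1 - x) * (T - 1)), so x = 1 fires first and x = 0 fires last."""
    x = np.clip(x, 0.0, 1.0)
    times = np.round((1.0 - x) * (T - 1)).astype(int)
    spikes = np.zeros((T, x.size))
    spikes[times, np.arange(x.size)] = 1.0
    return spikes
```

With at most one spike per input, downstream accumulation work scales with the number of neurons rather than with the number of time steps, which is the source of the lightweight computation claimed above.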
Event-based simulations of Spiking Neural Networks (SNNs) are fast and accurate. However, they are rarely used in the context of event-based gradient descent because their implementations on GPUs are difficult. Discretization with the forward Euler method is instead often used with gradient descent techniques but has the disadvantage of being computationally expensive. Moreover, the lack of precision of discretized simulations can create mismatches between the simulated models and analog neuromorphic hardware. In this work, we propose a new exact error-backpropagation through spikes method for SNNs, extending Fast \& Deep to multiple spikes per neuron. We show that our method can be efficiently implemented on GPUs in a fully event-based manner, making it fast to compute and precise enough for analog neuromorphic hardware. Compared to the original Fast \& Deep and the current state-of-the-art event-based gradient-descent algorithms, we demonstrate increased performance on several benchmark datasets with both feedforward and convolutional SNNs. In particular, we show that multi-spike SNNs can have advantages over single-spike networks in terms of convergence, sparsity, classification latency and sensitivity to the dead neuron problem.
Spiking neural networks (SNNs) are promising for edge devices, since the event-driven operation of SNNs provides significantly lower power than analog neural networks (ANNs). Although it is hard to train SNNs efficiently, many techniques to convert trained ANNs into SNNs have been developed. However, after conversion there is a trade-off between accuracy and latency in SNNs, causing considerable latency on large datasets such as ImageNet. We present a technique named TCL to alleviate this trade-off problem, enabling accuracies of 73.87% (VGG-16) and 70.37% (ResNet-34) on ImageNet with a moderate latency of 250 cycles in SNNs.
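A common way to mitigate the conversion accuracy-latency trade-off, and the core of trainable-clipping approaches like the one above, is to clip ANN activations at a learned upper bound during training; the bound then limits the activation range the converted SNN must represent with spikes. A hedged sketch of such a clipping layer and its gradient with respect to the bound (our own naming):

```python
import numpy as np

def trainable_clip(x, u):
    """ReLU clipped at a trainable upper bound u; after conversion, u plays
    the role of the SNN firing threshold (illustrative sketch only)."""
    return np.clip(x, 0.0, u)

def dclip_du(x, u):
    """Gradient of the clipped activation with respect to u: 1 wherever the
    input exceeds the bound, 0 elsewhere."""
    return (x > u).astype(float)
```

Learning u jointly with the weights keeps the bound as tight as the task allows, so fewer time steps are needed to resolve the activation range after conversion.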
Spiking neural networks (SNNs) are brain-inspired models with strong spatio-temporal information processing capability and computational energy efficiency. However, as the depth of SNNs increases, the memory problem caused by SNN weights has gradually attracted attention. Inspired by quantization techniques for artificial neural networks (ANNs), binarized SNNs (BSNNs) have been introduced to solve the memory problem. Due to the lack of suitable learning algorithms, BSNNs are usually obtained by ANN-SNN conversion, so their accuracy is limited by the trained ANN. In this paper, we propose an ultra-low-latency adaptive local binary spiking neural network (ALBSNN) with an accuracy-loss estimator, which dynamically selects the network layers to be binarized by evaluating the error caused by the binarized weights during the learning process, thereby ensuring the accuracy of the network. Experimental results show that this method can reduce storage space by more than 20% without losing network accuracy. At the same time, in order to accelerate training, a global average pooling (GAP) layer is introduced to replace the fully connected layers with a combination of convolution and pooling, so that the SNN can obtain better recognition accuracy with a small number of time steps. In the extreme case of using only one time step, we still achieve 92.92%, 91.63%, and 63.54% test accuracy on three different datasets (FashionMNIST, CIFAR-10, and CIFAR-100).
Spiking Neural Networks (SNNs) have been studied over decades to incorporate their biological plausibility and leverage their promising energy efficiency. Throughout existing SNNs, the leaky integrate-and-fire (LIF) model is commonly adopted to formulate the spiking neuron and evolves into numerous variants with different biological features. However, most LIF-based neurons support only single biological feature in different neuronal behaviors, limiting their expressiveness and neuronal dynamic diversity. In this paper, we propose GLIF, a unified spiking neuron, to fuse different bio-features in different neuronal behaviors, enlarging the representation space of spiking neurons. In GLIF, gating factors, which are exploited to determine the proportion of the fused bio-features, are learnable during training. Combining all learnable membrane-related parameters, our method can make spiking neurons different and constantly changing, thus increasing the heterogeneity and adaptivity of spiking neurons. Extensive experiments on a variety of datasets demonstrate that our method obtains superior performance compared with other SNNs by simply changing their neuronal formulations to GLIF. In particular, we train a spiking ResNet-19 with GLIF and achieve $77.35\%$ top-1 accuracy with six time steps on CIFAR-100, which has advanced the state-of-the-art. Codes are available at \url{https://github.com/Ikarosy/Gated-LIF}.
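The gating idea in GLIF can be illustrated with a simplified neuron in which gates in [0, 1] blend pairs of biological features, e.g., leaky vs. non-leaky integration and soft vs. hard reset. This is our own much-reduced sketch, not the paper's formulation:

```python
def glif_step(v, x, alpha, beta, tau=2.0, threshold=1.0, v_reset=0.0):
    """One step of a gated LIF-style neuron: alpha blends leaky (1 - 1/tau)
    and non-leaky (1.0) integration; beta blends soft reset (subtract
    threshold) and hard reset (jump to v_reset)."""
    leak = alpha * (1.0 - 1.0 / tau) + (1.0 - alpha)  # gated leak factor
    v = leak * v + x
    spike = float(v >= threshold)
    if spike:
        v = beta * (v - threshold) + (1.0 - beta) * v_reset  # gated reset
    return v, spike
```

In GLIF the gates are learned per channel during training, so different neurons settle on different blends of these behaviors, increasing the heterogeneity of the network.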
Spiking neural networks (SNNs) have demonstrated excellent capabilities in various intelligent scenarios. Most existing methods for training SNNs are based on the concept of synaptic plasticity; however, learning in the real brain also utilizes intrinsic non-synaptic mechanisms of neurons. The spike threshold of biological neurons is a critical intrinsic neuronal feature that exhibits rich dynamics on a millisecond timescale and has been proposed as an underlying mechanism facilitating neural information processing. In this study, we develop a novel synergistic learning approach that simultaneously trains synaptic weights and spike thresholds in SNNs. SNNs trained with synapse-threshold synergistic learning (STL-SNNs) achieve significantly higher accuracy than SNNs trained with the two individual learning modes, synaptic learning (SL) and threshold learning (TL), on various static and neuromorphic datasets. During training, the synergistic learning approach optimizes the neural thresholds, providing the network with stable signal transmission through appropriate firing rates. Further analysis indicates that STL-SNNs are robust to noisy data and exhibit low energy consumption in deep network structures. In addition, the performance of STL-SNNs can be further improved by introducing a generalized joint decision framework (JDF). Overall, our findings indicate that biologically plausible synergies between synaptic and intrinsic non-synaptic mechanisms may provide a promising approach for developing highly efficient SNN learning methods.
Despite the rapid progress of neuromorphic computing, the inadequate capacity and insufficient representation power of spiking neural networks (SNNs) severely restrict their application scope in practice. Residual learning and shortcuts have proven to be important approaches for training deep neural networks, but previous work rarely assessed their applicability to the characteristics of spike-based communication and spatio-temporal dynamics. In this paper, we first identify that this negligence leads to impeded information flow and the accompanying degradation problem in previous residual SNNs. We then propose a novel SNN-oriented residual block, MS-ResNet, which is able to significantly extend the depth of directly trained SNNs, e.g., up to 482 layers on CIFAR-10 and 104 layers on ImageNet, without observing any slight degradation problem. We validate the effectiveness of MS-ResNet on both frame-based and neuromorphic datasets, and MS-ResNet104 achieves a superior result of 76.02% accuracy on ImageNet, the first time in the domain of directly trained SNNs. Great energy efficiency is also observed, with an average of only one spike per neuron needed to classify an input sample. We believe our powerful and scalable models will provide strong support for further exploration of SNNs.
Spiking neural networks (SNNs) mimic the computational strategies of the brain and exhibit great capability in spatio-temporal information processing. As an essential factor of human perception, visual attention refers to the dynamic process of selecting salient regions in biological vision systems. Although mechanisms of visual attention have achieved great success in computer vision, they are rarely introduced into SNNs. Inspired by experimental observations of predictive attentional remapping, we propose a new spatial-channel-temporal-fused attention (SCTFA) module that can guide SNNs to efficiently capture underlying target regions by using historically accumulated spatial-channel information. Through a systematic evaluation on three event-stream datasets (DVS Gesture, SL-Animals-DVS, and MNIST-DVS), we demonstrate that an SNN with the SCTFA module (SCTFA-SNN) not only significantly outperforms the baseline SNN (BL-SNN) and two other SNN models with degenerated attention modules, but also achieves accuracy competitive with existing state-of-the-art methods. In addition, our detailed analysis shows that the proposed SCTFA-SNN model has strong robustness to noise and outstanding stability, while maintaining acceptable complexity and efficiency. Overall, these findings indicate that appropriately incorporating cognitive mechanisms of the brain may provide a promising approach to elevating the capabilities of SNNs.