This paper presents a Neuromorphic Starter Kit, designed to help a variety of research groups perform research, exploration, and real-world demonstrations of brain-based, neuromorphic processors and hardware environments. A prototype kit has been built and tested. We explain the motivation behind the kit, describe its design and composition, and present a prototype physical demonstration.
Neuromorphic computing is an emerging research field that aims to develop new intelligent systems by integrating theories and technologies from multiple disciplines such as neuroscience and deep learning. Various software frameworks have been developed for related fields, but an efficient framework dedicated to spike-based computing models and algorithms is still lacking. In this work, we present a Python-based spiking neural network (SNN) simulation and training framework, aka SPAIC, that aims to support brain-inspired model and algorithm research integrated with features from both deep learning and neuroscience. To integrate the different methodologies of these two major disciplines, and to balance flexibility against efficiency, SPAIC is designed with a neuroscience-style frontend and a deep-learning backend structure. We provide a wide range of examples, including neural circuit simulations, deep SNN learning, and neuromorphic applications, demonstrating the concise coding style and broad usability of the framework. SPAIC is a dedicated spike-based artificial intelligence computing platform that will significantly facilitate the design, prototyping, and validation of new models, theories, and applications. Being user-friendly, flexible, and high-performance, it will help accelerate the rapid growth and wide applicability of neuromorphic computing research.
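SPAIC's own API is not specified in this summary, so it is not reproduced here; the NumPy sketch below only illustrates the discrete-time leaky integrate-and-fire dynamics that spike-based frameworks of this kind simulate, with all names and parameters chosen for illustration.

```python
import numpy as np

# Discrete-time leaky integrate-and-fire (LIF) dynamics: the membrane
# potential decays toward rest, accumulates weighted input spikes, and
# emits a spike (then resets) when it crosses the threshold.
def simulate_lif(input_spikes, weights, tau=20.0, dt=1.0,
                 v_thresh=1.0, v_reset=0.0):
    n_steps, _ = input_spikes.shape
    n_post = weights.shape[1]
    v = np.full(n_post, v_reset)
    out_spikes = np.zeros((n_steps, n_post))
    decay = np.exp(-dt / tau)
    for t in range(n_steps):
        v = decay * v + input_spikes[t] @ weights   # leak + integrate
        fired = v >= v_thresh
        out_spikes[t] = fired
        v[fired] = v_reset                          # reset after spiking
    return out_spikes

# Example: 100 time steps, 10 Poisson-like inputs driving 5 LIF neurons.
rng = np.random.default_rng(0)
spikes_in = (rng.random((100, 10)) < 0.2).astype(float)
w = rng.random((10, 5)) * 0.3
print(simulate_lif(spikes_in, w).sum(axis=0))  # spike counts per neuron
```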
We present a Spiking Neural Network (SNN) based Sparse Distributed Memory (SDM) implemented on the Nengo framework. We base our work on earlier work by Furber et al. (2004), which implemented SDM using N-of-M codes. As an integral part of the SDM design, we have implemented a Correlation Matrix Memory (CMM) using SNNs on Nengo. Our SNN implementation uses the Leaky Integrate-and-Fire (LIF) spiking neuron model on Nengo. Our objective is to understand how SNN-based SDMs perform compared with conventional SDMs. To this end, we simulated both conventional and SNN-based SDMs and CMMs on Nengo. We observe that the SNN-based models perform similarly to the conventional ones. To evaluate the performance of different SNNs, we repeated the experiments using Adaptive-LIF, Spiking Rectified Linear Unit, and Izhikevich models and obtained similar results. We conclude that it is indeed feasible to build some types of associative memory with spiking neurons whose memory capacity and other features are similar to those achieved without SNNs. Finally, we implemented an application in which MNIST images, encoded using N-of-M codes, are associated with their labels and stored in the SNN-based SDM.
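The neuron-model comparison described above can be reproduced in outline with a few lines of Nengo; the ensemble below is a generic placeholder rather than the paper's SDM/CMM construction, and all parameters are illustrative.

```python
import numpy as np
import nengo

# A minimal Nengo model; swapping neuron_type reproduces the kind of
# comparison described (LIF vs. AdaptiveLIF, SpikingRectifiedLinear,
# Izhikevich). This is a placeholder network, not the paper's SDM/CMM.
def run_model(neuron_type):
    with nengo.Network(seed=1) as model:
        stim = nengo.Node(lambda t: np.sin(2 * np.pi * t))
        ens = nengo.Ensemble(n_neurons=200, dimensions=1,
                             neuron_type=neuron_type)
        nengo.Connection(stim, ens)
        probe = nengo.Probe(ens, synapse=0.01)
    with nengo.Simulator(model) as sim:
        sim.run(1.0)
        return sim.data[probe]

for nt in (nengo.LIF(), nengo.AdaptiveLIF(),
           nengo.SpikingRectifiedLinear(), nengo.Izhikevich()):
    decoded = run_model(nt)
    print(type(nt).__name__, float(np.mean(np.abs(decoded))))
```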
Neuromorphic systems require user-friendly software to support the design and optimization of experiments. In this work, we address this need by presenting our development of a machine learning-based modeling framework for the BrainScaleS-2 neuromorphic system. This work represents an improvement over previous efforts, which either focused on the matrix-multiplication mode of BrainScaleS-2 or lacked full automation. Our framework, called hxtorch.snn, enables the hardware-in-the-loop training of spiking neural networks within PyTorch, including support for auto differentiation in a fully-automated hardware experiment workflow. In addition, hxtorch.snn facilitates seamless transitions between emulating on hardware and simulating in software. We demonstrate the capabilities of hxtorch.snn on a classification task using the Yin-Yang dataset employing a gradient-based approach with surrogate gradients and densely sampled membrane observations from the BrainScaleS-2 hardware system.
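hxtorch.snn's own API is not reproduced here; the sketch below only illustrates the surrogate-gradient mechanism mentioned above in plain PyTorch, using a fast-sigmoid surrogate, with all names and constants chosen for illustration.

```python
import torch

class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike in the forward pass; a smooth surrogate
    derivative (fast sigmoid) in the backward pass."""

    @staticmethod
    def forward(ctx, v_minus_thresh, beta):
        ctx.save_for_backward(v_minus_thresh)
        ctx.beta = beta
        return (v_minus_thresh > 0).to(v_minus_thresh.dtype)

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        surrogate = 1.0 / (ctx.beta * v.abs() + 1.0) ** 2
        return grad_output * surrogate, None

# Usage inside a spiking layer's time loop:
v = torch.randn(32, 128, requires_grad=True)   # membrane minus threshold
spikes = SurrogateSpike.apply(v, 10.0)
spikes.sum().backward()                        # gradients flow via surrogate
print(v.grad.shape)
```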
The term ``neuromorphic'' refers to systems that closely resemble the architecture and/or the dynamics of biological neural networks. Typical examples are novel computer chips designed to mimic the architecture of a biological brain, or sensors that draw inspiration from, e.g., the visual or olfactory systems of insects and mammals to acquire information about the environment. This approach is not without ambition, as it promises to enable engineered devices able to reproduce the level of performance observed in biological organisms -- the main immediate advantage being the efficient use of scarce resources, which translates into low power requirements. The emphasis on low power and energy efficiency of neuromorphic devices is a perfect match for space applications. Spacecraft -- especially miniaturized ones -- have strict energy constraints, as they need to operate in an environment that is resource-scarce and extremely hostile. In this work we present an overview of early attempts to study a neuromorphic approach in a space context at the European Space Agency's (ESA) Advanced Concepts Team (ACT).
In this paper, we present an energy-efficient SNN architecture that can seamlessly run deep spiking neural networks (SNNs) with improved accuracy. First, we propose conversion-aware training (CAT) to reduce ANN-to-SNN conversion loss without hardware implementation overhead. In the proposed CAT, the activation function used to simulate the SNN during ANN training is exploited to reduce the data representation error after conversion. Building on the CAT technique, we also propose a time-to-first-spike coding that enables lightweight computation by exploiting spike-time information. An SNN processor supporting the proposed techniques has been implemented in a 28nm CMOS process. The processor achieves top-1 accuracies of 91.7%, 67.9%, and 57.4% with inference energies of 486.7uJ, 503.6uJ, and 1426uJ for CIFAR-10, CIFAR-100, and Tiny-ImageNet, respectively, when running VGG-16 with 5-bit logarithmic weights.
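As an illustration of the time-to-first-spike idea (stronger activations fire earlier), here is a minimal NumPy sketch; the encoding details are assumptions for illustration and do not reproduce the paper's hardware coder or the CAT activation.

```python
import numpy as np

# Time-to-first-spike (TTFS) coding: a larger activation fires earlier.
# Illustrative sketch only.
def ttfs_encode(activations, t_max=16):
    a = np.clip(activations, 0.0, None)
    a = a / (a.max() + 1e-12)                 # normalize to [0, 1]
    spike_times = np.round((1.0 - a) * (t_max - 1)).astype(int)
    spike_times[a == 0] = t_max               # "never spikes" sentinel
    return spike_times

print(ttfs_encode(np.array([0.9, 0.5, 0.1, 0.0])))  # -> [ 0  7 13 16]
```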
Neuromorphic computers perform computations by emulating the human brain and consume extremely little power. They are expected to be indispensable for energy-efficient computing in the future. Although they are used primarily for spiking neural network based machine learning applications, neuromorphic computers are known to be Turing-complete and thus capable of general-purpose computation. However, to fully realize their potential for general-purpose, energy-efficient computing, it is important to devise efficient mechanisms for encoding numbers. Current encoding approaches have limited applicability and may not be suitable for general-purpose computation. In this paper, we present the virtual neuron as an encoding mechanism for integers and rational numbers. We evaluate the performance of the virtual neuron on physical and simulated neuromorphic hardware and show that it can perform an addition operation using, on average, 23 nJ of energy on a mixed-signal memristor-based neuromorphic processor. We also demonstrate its utility by using it in some of the mu-recursive functions, which are building blocks of general-purpose computation.
Spiking neural networks (SNNs) offer a new computational paradigm capable of highly parallel, real-time processing. Photonic devices are ideal for the design of high-bandwidth, parallel architectures that match the SNN computational paradigm. Co-integration of CMOS and photonic elements allows low-loss photonic devices to be combined with analog electronics for greater flexibility in nonlinear computational elements. We therefore designed and simulated an optoelectronic spiking neuron circuit in a monolithic silicon photonics (SiPh) process that replicates useful spiking behaviors beyond the leaky integrate-and-fire (LIF) model. In addition, we explored two learning algorithms with potential for on-chip learning that use Mach-Zehnder interferometer (MZI) meshes as synaptic interconnects. A variant of random backpropagation (RPB) was demonstrated experimentally and matched the performance of standard linear regression on a simple classification task. Meanwhile, the contrastive Hebbian learning (CHL) rule was applied to a simulated neural network composed of MZI meshes on a random input-output mapping task. The CHL-trained MZI network performed better than random guessing but did not match the performance of an ideal neural network (without the constraints imposed by the MZI mesh). Through these efforts, we demonstrate that co-integrated CMOS and SiPh technologies are well suited to the design of scalable SNN computing architectures.
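The contrastive Hebbian learning rule mentioned above has a standard generic form (strengthen clamped-phase correlations, weaken free-phase ones); the sketch below shows that textbook update, not its MZI-mesh realization, and all values are illustrative.

```python
import numpy as np

# Generic contrastive Hebbian learning (CHL) weight update: strengthen
# correlations from the clamped (target-driven) phase and weaken those
# from the free phase.
def chl_update(W, x_free, y_free, x_clamped, y_clamped, lr=0.01):
    dW = lr * (np.outer(x_clamped, y_clamped) - np.outer(x_free, y_free))
    return W + dW

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(4, 3))
x = rng.random(4)
W = chl_update(W, x, x @ W, x, np.array([1.0, 0.0, 0.0]))
```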
Owing to the fundamental limits on reducing the power consumed by von Neumann architectures running deep learning models, neuromorphic computing systems based on low-power spiking neural networks are in the spotlight. To integrate a large number of neurons, each neuron must be designed to occupy a small area; however, as technology scales down, analog neurons are difficult to scale and suffer from reduced voltage headroom/dynamic range and circuit nonlinearities. In light of this, this paper first models the nonlinear behavior of existing current-mirror-based voltage-domain neurons designed in a 28nm process and shows that the neurons' nonlinearity severely degrades SNN inference accuracy. To mitigate this problem, we then propose a novel neuron that integrates input spikes in the time domain and greatly improves linearity, thereby improving inference accuracy compared with existing voltage-domain neurons. Tested on the MNIST dataset, the inference error rate of the proposed neuron differs from that of an ideal neuron by less than 0.1%.
Neuromorphic engineering concentrates the efforts of a large number of researchers because of its enormous potential as a research field: it seeks to exploit the advantages of the biological nervous system, and of the brain as a whole, to design more efficient, real-time-capable applications. To develop applications as close to biology as possible, spiking neural networks (SNNs) are used; they are considered biologically plausible and constitute the third generation of artificial neural networks (ANNs). Since some SNN-based applications may need to store data for later use, something present both in digital circuits and, in some form, in biology, a spiking memory is needed. This work presents a spiking implementation of a memory, one of the most important components of a computer architecture, which could be essential when designing a fully spiking computer. In the process of designing this spiking memory, different intermediate components were also implemented and tested. The tests were carried out on the SpiNNaker neuromorphic platform and validate the approach used to build the presented blocks. In addition, this work studies in depth how spiking blocks can be built with this approach, and includes a comparison between the approach used here and that of other similar works focused on the design of spiking components, including spiking logic gates and spiking memories. All implemented blocks and developed tests are available in a public repository.
In this paper we introduce RISP, a reduced instruction spiking processor. While most spiking neuroprocessors are based on the brain, or on notions derived from the brain, we make the case for a spiking processor that simplifies rather than complicates. Accordingly, it features discrete integration cycles, configurable leak, and little else. We present the computing model of RISP and highlight the benefits of its simplicity. We show how it helps in developing hand-built neural networks for simple computational tasks, detail how it can be used to simplify neural networks built with more complicated machine learning techniques, and demonstrate that its performance is similar to that of other spiking neuroprocessors.
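As a rough illustration of the simplified computing model described (discrete integration cycles plus configurable leak), here is a toy Python neuron; it is a sketch of the general idea, not the actual RISP implementation, and all parameters are illustrative.

```python
# A toy discrete-cycle integrate-and-fire neuron with configurable leak.
class DiscreteNeuron:
    def __init__(self, threshold=1, leak=0):
        self.threshold = threshold
        self.leak = leak          # charge lost per integration cycle
        self.charge = 0

    def cycle(self, weighted_inputs):
        """One integration cycle: accumulate, leak, fire, reset."""
        self.charge += sum(weighted_inputs)
        self.charge = max(self.charge - self.leak, 0)
        if self.charge >= self.threshold:
            self.charge = 0
            return 1
        return 0

# Hand-built two-input OR: either input spike drives the neuron to fire.
n = DiscreteNeuron(threshold=1, leak=0)
print([n.cycle(inp) for inp in ([0, 0], [1, 0], [0, 1], [1, 1])])  # [0, 1, 1, 1]
```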
In the past years, artificial neural networks (ANNs) have become the de-facto standard to solve tasks in communications engineering that are difficult to solve with traditional methods. In parallel, the artificial intelligence community drives its research to biology-inspired, brain-like spiking neural networks (SNNs), which promise extremely energy-efficient computing. In this paper, we investigate the use of SNNs in the context of channel equalization for ultra-low complexity receivers. We propose an SNN-based equalizer with a feedback structure akin to the decision feedback equalizer (DFE). For conversion of real-world data into spike signals we introduce a novel ternary encoding and compare it with traditional log-scale encoding. We show that our approach clearly outperforms conventional linear equalizers for three different exemplary channels. We highlight that mainly the conversion of the channel output to spikes introduces a small performance penalty. The proposed SNN with a decision feedback structure enables the path to competitive energy-efficient transceivers.
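The abstract does not spell out the ternary encoding, so the sketch below is only one plausible reading: each channel-output sample is mapped to a +1, 0, or -1 spike by fixed thresholds, with the threshold value chosen arbitrarily for illustration.

```python
import numpy as np

# One plausible ternary spike encoding (an assumption, not necessarily
# the paper's scheme): each sample maps to +1, 0, or -1.
def ternary_encode(samples, threshold=0.5):
    spikes = np.zeros_like(samples, dtype=int)
    spikes[samples > threshold] = 1
    spikes[samples < -threshold] = -1
    return spikes

y = np.array([1.2, 0.1, -0.7, 0.6, -0.2])
print(ternary_encode(y))  # [ 1  0 -1  1  0]
```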
The emergence of deep neural networks (DNNs) has once again drawn enormous attention to artificial neural networks (ANNs). They have become the state-of-the-art models and have won various machine learning challenges. Although these networks are inspired by the brain, they lack biological plausibility and differ structurally from the brain. Spiking neural networks (SNNs) have been around for a long time and have been investigated to understand the dynamics of the brain. However, their application to real-world, complicated machine learning tasks was limited. Recently, they have shown great potential in solving such tasks, and their energy efficiency and temporal dynamics hold considerable promise for future development. In this work, we review the structures and performance of SNNs on image classification tasks. The comparisons show that these networks exhibit strong capabilities on more complicated problems. Furthermore, the simple learning rules developed for SNNs, such as STDP and R-STDP, can be potential alternatives to the backpropagation algorithm used in DNNs.
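For reference, the pair-based STDP rule mentioned here has a standard closed form: the weight change decays exponentially with the spike-time difference and changes sign with its order. A minimal sketch, with illustrative constants:

```python
import numpy as np

# Standard pair-based STDP: potentiate when the presynaptic spike
# precedes the postsynaptic one (dt > 0), depress otherwise.
def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    if dt > 0:
        return a_plus * np.exp(-dt / tau_plus)
    return -a_minus * np.exp(dt / tau_minus)

for dt in (5.0, 20.0, -5.0, -20.0):   # dt = t_post - t_pre (ms)
    print(dt, round(stdp_dw(dt), 5))
```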
Loihi is a 60-mm² chip fabricated in Intel's 14-nm process that advances the state-of-the-art modeling of spiking neural networks in silicon. It integrates a wide range of novel features for the field, such as hierarchical connectivity, dendritic compartments, synaptic delays, and, most importantly, programmable synaptic learning rules. Running a spiking convolutional form of the Locally Competitive Algorithm, Loihi can solve LASSO optimization problems with an energy-delay product more than three orders of magnitude better than conventional solvers running on a CPU at iso-process/voltage/area. This provides an unambiguous example of spike-based computation outperforming all known conventional solutions.

Neuroscience offers a bountiful source of inspiration for novel hardware architectures and algorithms. Through their complex interactions at large scales, biological neurons exhibit an impressive range of behaviors and properties that we currently struggle to model with modern analytical tools, let alone replicate with our design and manufacturing technology. Some of the magic that we see in the brain undoubtedly stems from exotic device and material properties that will remain out of our fabs' reach for the foreseeable future.
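The Locally Competitive Algorithm referenced above has a well-known non-spiking formulation that the chip's spiking variant approximates; the NumPy sketch below shows that generic LCA solving a small LASSO problem and is not the Loihi implementation (all sizes and constants are illustrative).

```python
import numpy as np

# Generic (non-spiking) Locally Competitive Algorithm for LASSO:
#   minimize 0.5*||s - Phi a||^2 + lam*||a||_1
# Membrane state u is driven by Phi^T s minus lateral inhibition;
# a is the soft-thresholded u.
def lca(Phi, s, lam=0.1, tau=10.0, dt=1.0, n_steps=500):
    gram = Phi.T @ Phi - np.eye(Phi.shape[1])   # lateral inhibition
    drive = Phi.T @ s
    u = np.zeros(Phi.shape[1])
    for _ in range(n_steps):
        a = np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)  # soft threshold
        u += (dt / tau) * (drive - u - gram @ a)
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

rng = np.random.default_rng(0)
Phi = rng.normal(size=(20, 50))
Phi /= np.linalg.norm(Phi, axis=0)
a_true = np.zeros(50); a_true[[3, 17]] = [1.0, -0.5]
print(np.round(lca(Phi, Phi @ a_true), 2)[[3, 17]])  # sparse code recovered
```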
We introduce an optimization-based theory describing spiking cortical ensembles, as empirically observed in the visual cortex, equipped with spike-timing-dependent plasticity (STDP) learning. Using our approach, we build a class of fully connected, convolutional, and action-based feature descriptors for event-based cameras, which we evaluate on N-MNIST, the challenging CIFAR10-DVS, and the IBM DVS128 Gesture dataset, respectively. We report significant accuracy improvements over conventional state-of-the-art event-based feature descriptors (+8% on CIFAR10-DVS), and substantial improvements over state-of-the-art STDP-based systems (+10% on N-MNIST, with a large improvement also reported on IBM DVS128 Gesture). Beyond ultra-low-power learning in neuromorphic edge devices, our work helps pave the way toward a biologically plausible, optimization-based theory of cortical vision.
We propose a new learning algorithm to train spiking neural networks (SNNs) using a conventional artificial neural network (ANN) as a proxy. We couple an SNN and an ANN, built respectively from integrate-and-fire (IF) and ReLU neurons, with the same network architecture and shared synaptic weights. The forward passes of the two networks are completely independent. By assuming rate-coded IF neurons to be an approximation of ReLU, we backpropagate the SNN's error through the proxy ANN to update the shared weights, simply by replacing the ANN's final output with that of the SNN. We apply the proposed proxy learning to deep convolutional SNNs and evaluate it on two benchmark datasets, Fashion-MNIST and CIFAR10, reaching classification accuracies of 94.56% and 93.11%, respectively. The proposed networks outperform other deep SNNs trained with tandem learning or surrogate gradient learning, or converted from deep ANNs. Converted SNNs require long simulation times to reach reasonable accuracies, whereas our proxy learning yields efficient SNNs with much shorter simulation times.
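A minimal sketch of the proxy idea as described, assuming a PyTorch-style setup: shared weights, an independent rate-coded IF forward pass, and the ANN output replaced by the SNN output before computing the loss. The detach-based substitution and the toy IF loop are illustrative choices, not the authors' code.

```python
import torch

# The ANN (ReLU) and the SNN (IF, rate-coded) share weights; the loss is
# computed on the SNN's output, but gradients flow through the ANN graph.
ann = torch.nn.Sequential(torch.nn.Linear(784, 256), torch.nn.ReLU(),
                          torch.nn.Linear(256, 10))

def snn_forward(x, model, t_steps=32):
    """Placeholder rate-coded IF forward pass using the shared weights."""
    with torch.no_grad():                      # SNN pass carries no gradient
        rates = torch.zeros(x.shape[0], 10)
        mem1 = torch.zeros(x.shape[0], 256)
        mem2 = torch.zeros(x.shape[0], 10)
        for _ in range(t_steps):
            mem1 += model[0](x)
            s1 = (mem1 >= 1.0).float(); mem1 -= s1   # fire and reset
            mem2 += model[2](s1)
            s2 = (mem2 >= 1.0).float(); mem2 -= s2
            rates += s2
    return rates / t_steps

x = torch.rand(8, 784); target = torch.randint(0, 10, (8,))
ann_out = ann(x)
snn_out = snn_forward(x, ann)
proxy_out = ann_out + (snn_out - ann_out).detach()   # SNN values, ANN gradient
loss = torch.nn.functional.cross_entropy(proxy_out, target)
loss.backward()                                       # updates shared weights
```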
To increase the quality of citizens' lives, we designed a personalized smart chair system to recognize sitting behaviors. The system can receive surface pressure data from the designed sensor and provide feedback for guiding the user towards proper sitting postures. We used a liquid state machine and a logistic regression classifier to construct a spiking neural network for classifying 15 sitting postures. To allow this system to read our pressure data into the spiking neurons, we designed an algorithm to encode map-like data into cosine-rank sparsity data. The experimental results consisting of 15 sitting postures from 19 participants show that the prediction precision of our SNN is 88.52%.
Efficient and robust control using spiking neural networks (SNNs) is still an open problem. Whilst behaviour of biological agents is produced through sparse and irregular spiking patterns, which provide both robust and efficient control, the activity patterns in most artificial spiking neural networks used for control are dense and regular -- resulting in potentially less efficient codes. Additionally, for most existing control solutions network training or optimization is necessary, even for fully identified systems, complicating their implementation in on-chip low-power solutions. The neuroscience theory of Spike Coding Networks (SCNs) offers a fully analytical solution for implementing dynamical systems in recurrent spiking neural networks -- while maintaining irregular, sparse, and robust spiking activity -- but it's not clear how to directly apply it to control problems. Here, we extend SCN theory by incorporating closed-form optimal estimation and control. The resulting networks work as a spiking equivalent of a linear-quadratic-Gaussian controller. We demonstrate robust spiking control of simulated spring-mass-damper and cart-pole systems, in the face of several perturbations, including input- and system-noise, system disturbances, and neural silencing. As our approach does not need learning or optimization, it offers opportunities for deploying fast and efficient task-specific on-chip spiking controllers with biologically realistic activity.
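For reference, the standard linear-quadratic-Gaussian structure that the resulting spiking networks are said to emulate combines a Kalman estimator with LQR state feedback; the block below uses generic notation and is not the paper's spiking derivation.

```latex
% Standard LQG structure: Kalman filter + LQR state feedback.
\begin{align}
\dot{x} &= A x + B u + \xi, \qquad y = C x + \eta
  && \text{(noisy linear system)} \\
\dot{\hat{x}} &= A \hat{x} + B u + L\,(y - C \hat{x})
  && \text{(Kalman state estimate)} \\
u &= -K \hat{x}
  && \text{(LQR feedback)}
\end{align}
```

Here K and L are the feedback and estimator gains obtained from the control and estimation Riccati equations, respectively.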
Spike-based neuromorphic hardware holds the promise of more energy-efficient implementations of deep neural networks (DNNs) than standard hardware such as GPUs. But this requires understanding how DNNs can be emulated in an event-based, sparse-firing regime, since otherwise the energy advantage is lost. In particular, DNNs that solve sequence processing tasks typically employ long short-term memory (LSTM) units, which are hard to emulate with few spikes. We show that a facet of many biological neurons, slow after-hyperpolarizing (AHP) currents after each spike, provides an efficient solution. AHP currents can easily be implemented in neuromorphic hardware that supports multi-compartment neuron models, such as Intel's Loihi chip. Filter approximation theory explains why AHP neurons can emulate the function of LSTM units. This yields a highly energy-efficient approach to time-series classification. Furthermore, it provides the basis for implementing, with very sparse firing, large DNNs that extract relations between words and sentences in a text in order to answer questions about the text.
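A toy discrete-time LIF neuron with a slow AHP current illustrates the mechanism described: each spike adds an adaptation current with a long time constant, so sustained input produces progressively sparser firing. This is an illustrative sketch with arbitrary constants, not the Loihi multi-compartment implementation.

```python
import numpy as np

# LIF neuron with a slow after-hyperpolarizing (AHP) current: each spike
# adds a negative adaptation current that decays slowly, so recent firing
# suppresses future firing over long time scales.
def lif_ahp(inputs, tau_m=20.0, tau_ahp=500.0, dt=1.0,
            v_thresh=1.0, ahp_jump=0.3):
    v, ahp = 0.0, 0.0
    spikes = []
    for i in inputs:
        ahp *= np.exp(-dt / tau_ahp)              # slow decay of AHP current
        v = v * np.exp(-dt / tau_m) + i - ahp     # leak + input - AHP
        if v >= v_thresh:
            spikes.append(1)
            v = 0.0
            ahp += ahp_jump                       # each spike strengthens AHP
        else:
            spikes.append(0)
    return spikes

drive = [0.3] * 200                               # constant input
out = lif_ahp(drive)
print(sum(out[:100]), sum(out[100:]))             # firing rate adapts down
```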
The event-driven nature of spiking neural networks makes them biologically plausible and more energy-efficient than artificial neural networks. In this work, we demonstrate motion detection of objects in a two-dimensional visual field. The network architecture presented here is biologically plausible and uses CMOS analog leaky integrate-and-fire neurons and ultra-low-power multi-layer RRAM synapses. Detailed transistor-level SPICE simulations show that the proposed architecture can accurately and reliably detect complex motions of objects in a two-dimensional visual field.