In neuromorphic computing, artificial synapses provide multi-weight conductance states that are set based on inputs from neurons, analogous to the brain. Synaptic properties beyond multiple weights may also be required and can depend on the application, calling for different synaptic behaviors to be generated from the same materials. Here, we measure artificial synapses based on magnetic materials made from magnetic tunnel junctions and magnetic domain walls. By fabricating lithographic notches in a domain-wall track beneath a single magnetic tunnel junction, we achieve 4 to 5 stable resistance states that can be repeatably controlled electrically using spin-orbit torque. We analyze the effect of geometry on the synaptic behavior, showing that a trapezoidal device has asymmetric weight updates with high controllability, while a straight device has higher stochasticity but stable resistance levels. The device data are fed into neuromorphic computing simulators to show the usefulness of application-specific synaptic functions. Implementing artificial neural networks applied to streamed Fashion-MNIST data, we show that trapezoidal magnetic synapses can be used as a metaplastic function for efficient online learning. Implementing a convolutional neural network for CIFAR-100 image recognition, we show that straight synapses achieve near-ideal inference accuracy owing to the stability of their resistance levels. This work shows that multi-weight magnetic synapses are a feasible technology for neuromorphic computing and provides design guidelines for emerging artificial synapse technologies.
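For readers who want to experiment with such devices in simulation, the geometry-dependent behaviors described above can be captured in a simple behavioral model. The sketch below is purely illustrative: the state count, switching probabilities, and asymmetry ratio are hypothetical placeholders, not the measured device values.

```python
import numpy as np

rng = np.random.default_rng(0)

class DomainWallSynapse:
    """Behavioral sketch of a notched domain-wall synapse with a few
    stable resistance states; all parameters are illustrative only."""

    def __init__(self, geometry="straight", n_states=5):
        self.n_states = n_states
        self.state = n_states // 2      # index of the pinned DW position
        self.geometry = geometry

    def pulse(self, direction):
        """Apply one spin-orbit-torque pulse; direction is +1 or -1."""
        if self.geometry == "trapezoid":
            # Asymmetric but controllable: depinning toward the wide end
            # is easier than toward the narrow end (hypothetical ratio).
            p_move = 1.0 if direction < 0 else 0.7
        else:
            # Straight track: symmetric but stochastic depinning.
            p_move = 0.5
        if rng.random() < p_move:
            self.state = int(np.clip(self.state + direction,
                                     0, self.n_states - 1))

    @property
    def conductance(self):
        # Stable, evenly spaced conductance levels (arbitrary units).
        return self.state / (self.n_states - 1)
```

Feeding per-geometry update statistics of this kind into a network simulator is what enables the application-specific comparisons reported above.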
We demonstrate that extremely low-resolution quantized (nominally 5-state) synapses with large stochastic variations in domain-wall (DW) position can be energy-efficient while achieving fairly high testing accuracy compared with similarly sized deep neural networks (DNNs) that use floating-precision synaptic weights. Specifically, voltage-controlled DW devices exhibit stochastic behavior, modeled rigorously with micromagnetic simulations, and can encode only a limited number of states; however, they can be extremely energy-efficient during both training and inference. We show that by implementing suitable modifications to the learning algorithms, we can address the stochastic behavior as well as mitigate the effect of their low resolution to achieve high testing accuracy. In this study, we propose both in-situ and ex-situ training algorithms, based on modifications of the algorithm proposed by Hubara et al. [1] for quantizing synaptic weights. We train several 5-layer DNNs on the MNIST dataset using 2-, 3-, and 5-state DW devices as synapses. For in-situ training, a separate high-precision memory unit is employed to preserve and accumulate the weight gradients, which are then quantized to program the low-precision DW devices. Moreover, a sized noise-tolerance margin is used during training to address the intrinsic programming noise. For ex-situ training, a precursor DNN is first trained based on the characterized DW device model and a noise-tolerance margin, similar to the in-situ training. Remarkably, for in-situ inference the energy dissipation on the devices is only about 13 pJ per inference, given training for 10 epochs on the entire MNIST dataset.
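The in-situ scheme described above combines high-precision gradient accumulation, quantized device programming, and a noise-tolerance margin. A hedged sketch in the spirit of Hubara-et-al.-style quantized training follows; the level count, margin, and learning rate are placeholders rather than the paper's values.

```python
import numpy as np

def quantize(w, levels):
    """Snap weights in [-1, 1] onto the nearest of `levels` device states."""
    grid = np.linspace(-1.0, 1.0, levels)
    return grid[np.abs(w[..., None] - grid).argmin(axis=-1)]

def in_situ_step(w_hp, w_dev, grad, lr=0.01, levels=5, margin=0.1):
    """One in-situ update: gradients accumulate in a separate high-precision
    copy; the low-precision device is re-programmed only when its quantized
    target moves by more than the noise-tolerance margin, so intrinsic
    programming noise does not trigger spurious writes."""
    w_hp = np.clip(w_hp - lr * grad, -1.0, 1.0)   # high-precision accumulate
    target = quantize(w_hp, levels)                # 5-state device target
    write = np.abs(target - w_dev) > margin        # tolerate small mismatch
    return w_hp, np.where(write, target, w_dev)
```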
The increasing scale of neural networks and their expanding application space have created demand for more energy- and memory-efficient, artificial-intelligence-specific hardware. Avenues for alleviating the main problem, the von Neumann bottleneck, include in-memory and near-memory architectures, as well as algorithmic approaches. Here, we leverage the low power and inherently binary operation of magnetic tunnel junctions (MTJs) to demonstrate neural network hardware inference based on passive arrays of MTJs. In general, transferring a trained network model to hardware for inference suffers performance degradation due to device-to-device variation, write errors, parasitic resistance, and nonidealities. To quantify the effect of these hardware realities, we benchmark 300 unique weight-matrix solutions of a 2-layer perceptron classifying the Wine dataset, for both classification accuracy and write fidelity. Despite device imperfections, we achieve software-equivalent accuracy of up to 95.3% with appropriate tuning in 15 x 15 MTJ arrays having a range of device sizes. The success of this tuning process shows that new metrics are needed to characterize the performance and quality of networks reproduced in mixed-signal hardware.
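To see how such hardware realities enter an inference benchmark, consider a minimal model of programming a signed weight matrix onto a passive binary MTJ array. The conductance values, variation, and write-error rate below are hypothetical stand-ins for measured device statistics.

```python
import numpy as np

rng = np.random.default_rng(1)

def program_array(w_sign, g_p=1.0, g_ap=0.5, sigma=0.05, p_write_err=0.01):
    """Program signed binary weights onto a passive MTJ array: the parallel
    state (g_p) encodes +1, antiparallel (g_ap) encodes -1. Conductances,
    variation sigma, and write-error rate are hypothetical."""
    g = np.where(w_sign > 0, g_p, g_ap)
    flipped = rng.random(w_sign.shape) < p_write_err      # write errors
    g = np.where(flipped, g_p + g_ap - g, g)
    return g * rng.normal(1.0, sigma, w_sign.shape)       # device variation

def infer(g, x, g_ref=0.75):
    """Analog vector-matrix multiply; subtracting a reference conductance
    recovers signed weights from strictly positive device conductances."""
    return x @ (g - g_ref)
```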
Data-driven modeling approaches such as jump tables are promising techniques for modeling populations of resistive random-access memory (ReRAM) or other emerging memory devices in hardware neural network simulations. As these tables rely on data interpolation, this work explores open questions about their fidelity in relation to the stochastic device behavior they model. We study how various jump-table device models impact the attained network performance estimates, a concept we define as modeling bias. Two methods of jump-table device modeling, binning and Optuna-optimized binning, are explored using synthetic data with known distributions for benchmarking purposes, as well as experimental data obtained from TiOx ReRAM devices. Results on a multi-layer perceptron trained on MNIST show that device models based on binning can behave unpredictably, particularly at low numbers of points in the device dataset, sometimes over-promising and sometimes under-promising the target network accuracy. This paper also proposes device-level metrics that indicate trends similar to the modeling-bias metric at the network level. The proposed approach opens the possibility of future investigations into statistical device models with better performance, as well as experimentally verified modeling bias in different in-memory computing and neural network architectures.
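A minimal sketch of the binning approach to jump-table modeling makes the method concrete: bin measured (state, update) pairs, then resample jumps from the bin containing the current state. The bin count and sampling rule here are illustrative; the Optuna-optimized variant would instead tune such hyperparameters against a fidelity objective.

```python
import numpy as np

rng = np.random.default_rng(2)

def build_jump_table(g_before, delta_g, n_bins=32):
    """Bin measured (conductance, jump) pairs; each bin keeps the empirical
    distribution of conductance jumps observed from that state range."""
    edges = np.linspace(g_before.min(), g_before.max(), n_bins + 1)
    which = np.clip(np.digitize(g_before, edges) - 1, 0, n_bins - 1)
    return edges, [delta_g[which == b] for b in range(n_bins)]

def sample_update(g, edges, table):
    """Simulate one stochastic device update by resampling a jump from
    the bin that contains the current conductance."""
    b = int(np.clip(np.digitize(g, edges) - 1, 0, len(table) - 1))
    jumps = table[b]
    return g + (rng.choice(jumps) if len(jumps) else 0.0)
```

Sparse bins are exactly where this resampling misrepresents the true distribution, which is why the paper observes unpredictable modeling bias at low dataset sizes.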
The artificial intelligence (AI) revolution imposes immense storage and data-processing requirements. Large power consumption and hardware overhead have become the main challenges in building next-generation AI hardware. To mitigate this, neuromorphic computing has attracted significant attention thanks to its excellent data-processing capability at very low power consumption. Although relentless research has been pursued for years to minimize the power consumption of neuromorphic hardware, we are still a long way from reaching the energy efficiency of the human brain. Furthermore, design complexity and process variation hinder the large-scale implementation of current neuromorphic platforms. Recently, the concept of implementing neuromorphic computing systems at cryogenic temperatures has gained interest owing to their excellent speed and power metrics. Several cryogenic devices can be engineered to serve as neuromorphic primitives with ultra-low power demands. Here, we comprehensively review cryogenic neuromorphic hardware. We classify the existing cryogenic neuromorphic hardware into several hierarchical categories and sketch a comparative analysis based on key performance metrics. Our analysis concisely describes the operation of the associated circuit topologies and outlines the advantages and challenges faced by the state-of-the-art technology platforms. Finally, we provide insights for circumventing these challenges to enable future research development.
Simulations of complex-valued Hopfield networks based on spin-torque oscillators can recover phase-encoded images. Sequences of memristor-augmented inverters provide tunable delay elements that implement complex weights by phase-shifting the oscillatory outputs of the oscillators. Pseudoinverse training suffices to store images in a set of 192 oscillators representing 16 x 12 pixel images. The energy required to recover an image depends on the desired error level. For the oscillators and circuitry considered here, a 5% mean-squared deviation from the ideal image requires approximately 5 us and consumes about 130 nJ. Simulations show that the network functions well when the resonant frequencies of the oscillators can be tuned to have a fractional spread of less than 10^{-3}, depending on the strength of the feedback.
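The storage and recall scheme is straightforward to reproduce numerically. Below is a hedged sketch of pseudoinverse training and phase-projected recall for unit-amplitude (phase-encoded) states; the pattern count and noise level are arbitrary choices, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(3)

# Store phase-encoded patterns in a complex-valued Hopfield network with
# the pseudoinverse rule, then recover one from a noisy probe.
n, k = 192, 3                                   # oscillators, stored patterns
X = np.exp(1j * rng.uniform(0, 2 * np.pi, (n, k)))   # unit-modulus patterns
W = X @ np.linalg.pinv(X)                       # pseudoinverse weights
np.fill_diagonal(W, 0)                          # no self-coupling

x = X[:, 0] * np.exp(1j * rng.normal(0, 0.4, n))    # noisy probe of pattern 0
for _ in range(50):
    x = W @ x
    x = np.exp(1j * np.angle(x))                # oscillators keep unit amplitude

overlap = abs(np.vdot(X[:, 0], x)) / n          # 1.0 means perfect recall
print(f"overlap with stored pattern: {overlap:.3f}")
```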
In the quest for low-power, bio-inspired computing, both memristor-based and memcapacitor-based artificial neural networks (ANNs) have been a subject of focus for hardware implementations of neuromorphic computing. Going a step further, regenerative capacitive neural networks, which call for adiabatic computing, offer a tantalizing route to reducing energy consumption, especially when combined with "memimpedance" elements. Here, we present an artificial neuron featuring adiabatic synapse capacitors to produce the membrane potential of the neuron; the latter is implemented via a dynamic latched comparator augmented with resistive random-access memory (RRAM) devices. Our initial 4-bit adiabatic capacitive neuron proof-of-concept example shows 90% synaptic energy savings. At 4 synapses/soma, we already see an overall 35% energy reduction. Furthermore, the impact of process and temperature on the 4-bit adiabatic synapse shows a maximum energy variation of 30% at 100 degrees Celsius across the process corners, without any loss of functionality. Finally, we test the efficacy of our adiabatic approach to ANNs with 512 and 1024 synapses/neuron under worst- and best-case synapse loading conditions and with variable equalizing capacitance, quantifying the expected trade-off between the equalizing capacitance and the optimal power-clock frequency range as a function of loading (i.e., the percentage of active synapses).
Neural networks have revolutionized the area of artificial intelligence and introduced transformative applications to almost every scientific field and industry. However, this success comes at a great price; the energy requirements for training advanced models are unsustainable. One promising way to address this pressing issue is by developing low-energy neuromorphic hardware that directly supports the algorithm's requirements. The intrinsic non-volatility, non-linearity, and memory of spintronic devices make them appealing candidates for neuromorphic devices. Here we focus on the reservoir computing paradigm, a recurrent network with a simple training algorithm suitable for computation with spintronic devices since they can provide the properties of non-linearity and memory. We review technologies and methods for developing neuromorphic spintronic devices and conclude with critical open issues to address before such devices become widely used.
translated by 谷歌翻译
Organic neuromorphic device networks can accelerate neural network algorithms and directly integrate with microfluidic systems or living tissues. Proposed devices based on the bio-compatible conductive polymer PEDOT:PSS have shown high switching speeds and low energy demand. However, as electrochemical systems, they are prone to self-discharge through parasitic electrochemical reactions, so the network's synapses forget their trained conductance states over time. This work integrates single-device high-resolution charge transport models to simulate neuromorphic device networks and analyze the impact of self-discharge on network performance. Simulation of a single-layer nine-pixel image classification network reveals no significant impact of self-discharge on training efficiency. Even though the network's weights drift significantly during self-discharge, its predictions remain 100% accurate for over ten hours. On the other hand, a multi-layer network for the approximation of the circle function is shown to degrade significantly over twenty minutes, with a final mean-squared-error loss of 0.4. We propose to counter this effect by periodically reminding the network, based on a map between a synapse's current state, the time since the last reminder, and the weight drift. We show that this method, with a map obtained through validated simulations, can reduce the effective loss to below 0.1 even under worst-case assumptions. Finally, while the training of this network is affected by self-discharge, good classification is still obtained. Electrochemical organic neuromorphic devices have not yet been integrated into larger device networks. This work predicts their behavior under nonideal conditions, mitigates the worst-case effects of parasitic self-discharge, and opens the path toward implementing fast and efficient neural networks on organic neuromorphic hardware.
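The reminding strategy can be illustrated with a toy drift model. Here self-discharge is assumed to be a simple exponential relaxation for the sake of the sketch (the actual map would come from the validated charge-transport simulations); `tau` and `g_rest` are hypothetical.

```python
import numpy as np

def drift(g, g_rest, tau, dt):
    """Parasitic self-discharge: the conductance relaxes exponentially
    toward its rest value g_rest with time constant tau."""
    return g_rest + (g - g_rest) * np.exp(-dt / tau)

def remind(g_now, g_rest, tau, t_since_write):
    """Invert the calibrated drift model to estimate the trained state
    from the current state and the time since the last write, then
    re-program it (the 'reminder')."""
    return g_rest + (g_now - g_rest) * np.exp(t_since_write / tau)

g0 = 0.8                                   # trained conductance (arb. units)
g_t = drift(g0, g_rest=0.5, tau=3600.0, dt=1200.0)
print(remind(g_t, 0.5, 3600.0, 1200.0))    # recovers ~0.8
```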
We present MEMprop, the adoption of gradient-based learning to train fully memristive spiking neural networks (MSNNs). Our approach harnesses intrinsic device dynamics to trigger naturally arising voltage spikes. These spikes emitted by memristive dynamics are analog in nature and thus fully differentiable, which eliminates the need for the surrogate-gradient methods that are prevalent in the spiking neural network (SNN) literature. Memristive neural networks typically either integrate memristors as synapses that map offline-trained networks, or otherwise rely on associative learning mechanisms to train networks of memristive neurons. We instead apply the backpropagation-through-time (BPTT) training algorithm directly on analog SPICE models of memristive neurons and synapses. Our implementation is fully memristive, in that the synaptic weights and spiking neurons are both integrated on resistive RAM (RRAM) arrays, without the need for additional circuits to implement spiking dynamics, such as analog-to-digital converters (ADCs) or threshold comparators. As a result, higher-order electrophysical effects are fully exploited to use the state-driven dynamics of memristive neurons at run time. By moving toward non-approximate gradient-based learning, we obtain highly competitive accuracy among lightweight, dense, fully memristive SNNs on several previously reported benchmarks.
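The key point, that analog spike waveforms make the loss differentiable without surrogate gradients, can be shown with a toy example. The sketch below trains a single weight of a leaky neuron whose "spike" is a smooth function of the membrane state; the dynamics are schematic stand-ins for the SPICE-level device models used in the work, and the gradient is accumulated in forward mode rather than via BPTT for brevity.

```python
import numpy as np

def train_step(w, x, target, a=0.9, theta=1.0, k=0.1, lr=0.1):
    """Exact gradient step for a toy leaky neuron whose analog spike
    s = sigmoid((v - theta)/k) is smooth, hence differentiable end to end.
    (No reset is modeled, to keep the sketch short.)"""
    v, dv_dw = 0.0, 0.0
    rate, drate_dw = 0.0, 0.0
    for xt in x:
        dv_dw = a * dv_dw + xt                    # d v[t] / d w, forward mode
        v = a * v + w * xt                        # leaky integration
        s = 1.0 / (1.0 + np.exp(-(v - theta) / k))  # smooth analog "spike"
        drate_dw += s * (1.0 - s) / k * dv_dw     # exact chain rule, no surrogate
        rate += s
    loss = (rate - target) ** 2
    w -= lr * 2.0 * (rate - target) * drate_dw    # exact gradient descent
    return w, loss
```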
The analog resistive states used for storing weights in neuromorphic systems are hindered by fabrication imprecision and device stochasticity, which limit the precision of the synaptic weights. This challenge is addressed by emulating analog behavior with the stochastic switching between the binary states of spin-transfer-torque magnetoresistive random-access memory (STT-MRAM). However, previous approaches based on STT-MRAM operated asynchronously, which is difficult to implement experimentally. This paper proposes a synchronous spiking neural network system with clocked circuits that performs unsupervised learning by leveraging the stochastic switching of STT-MRAM. The proposed system enables a single-layer network to achieve 90% inference accuracy on the MNIST dataset.
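The core idea of emulating analog weights with stochastic binary switching can be sketched in a few lines; the linear switching-probability model below is a hypothetical stand-in for the real pulse-dependent STT-MRAM statistics.

```python
import numpy as np

rng = np.random.default_rng(4)

def stochastic_update(state, delta_w):
    """One binary MTJ cell: switch toward the desired direction with a
    probability proportional to the requested (analog) weight change, so
    the EXPECTED weight change is analog even though each write is binary."""
    if rng.random() < min(abs(delta_w), 1.0):
        state = 1 if delta_w > 0 else 0
    return state

# Averaged over many devices (or many pulses), the binary states
# realize a graded weight:
w_target = 0.7
states = [stochastic_update(0, w_target) for _ in range(1000)]
print(sum(states) / len(states))   # approximately 0.7
```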
Brain-inspired computing proposes a set of algorithmic principles that hold promise for advancing artificial intelligence. They endow systems with self-learning capabilities, efficient energy usage, and high storage capacity. A core concept that lies at the heart of brain computation is sequence learning and prediction. This form of computation is essential for almost all our daily tasks such as movement generation, perception, and language. Understanding how the brain performs such a computation is not only important to advance neuroscience but also to pave the way to new technological brain-inspired applications. A previously developed spiking neural network implementation of sequence prediction and recall learns complex, high-order sequences in an unsupervised manner by local, biologically inspired plasticity rules. An emerging type of hardware that holds promise for efficiently running this type of algorithm is neuromorphic hardware. It emulates the way the brain processes information and maps neurons and synapses directly into a physical substrate. Memristive devices have been identified as potential synaptic elements in neuromorphic hardware. In particular, redox-induced resistive random-access memory (ReRAM) devices stand out in many respects. They permit scalability, are energy-efficient and fast, and can implement biological plasticity rules. In this work, we study the feasibility of using ReRAM devices as a replacement for the biological synapses in the sequence learning model. We implement and simulate the model, including the ReRAM plasticity, using the neural simulator NEST. We investigate the effect of different device properties on the performance characteristics of the sequence learning model, and demonstrate resilience with respect to different on-off ratios, conductance resolutions, device variability, and synaptic failure.
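A minimal sketch of a plasticity update subject to the four device properties scanned in the study (on-off ratio, conductance resolution, device variability, and synaptic failure) might look as follows; all parameter values are placeholders, and the real model runs inside NEST rather than as standalone code.

```python
import numpy as np

rng = np.random.default_rng(5)

def reram_update(g, dg, g_min=0.1, g_max=1.0, levels=64,
                 variability=0.05, p_fail=0.01):
    """Apply a plasticity-driven conductance change with device nonidealities:
    finite on-off ratio (g_min..g_max), quantized conductance levels,
    cycle-to-cycle variability, and occasional synaptic failure."""
    if rng.random() < p_fail:                        # synaptic failure: update lost
        return g
    g = g + dg * (1.0 + rng.normal(0.0, variability))  # noisy update
    g = np.clip(g, g_min, g_max)                     # finite on-off ratio
    step = (g_max - g_min) / (levels - 1)            # limited resolution
    return g_min + round((g - g_min) / step) * step
```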
This study proposes voltage-dependent synaptic plasticity (VDSP), a novel brain-inspired, unsupervised, local learning rule for the online implementation of Hebbian plasticity mechanisms on neuromorphic hardware. The proposed VDSP learning rule updates the synaptic conductance only on spikes of the postsynaptic neuron, which reduces the number of updates by a factor of two relative to standard spike-timing-dependent plasticity (STDP). The update depends on the membrane potential of the presynaptic neuron, which is readily available as part of the neuron implementation and hence requires no additional memory for storage. Moreover, the update regularizes the synaptic weights and prevents weight explosion or vanishing under repeated stimulation. A rigorous mathematical analysis is performed to draw an equivalence between VDSP and STDP. To validate the system-level performance of VDSP, we train a single-layer spiking neural network (SNN) to recognize handwritten digits. We report 85.01 ± 0.76% (mean ± s.d.) accuracy for a network of 100 output neurons on the MNIST dataset. The performance improves with network size (89.93 ± 0.41% for 400 output neurons, 90.56 ± 0.27% for 500 neurons), validating the applicability of the proposed learning rule for large-scale computer-vision tasks. Interestingly, the learning rule adapts better than STDP to the frequency of the input signal and requires no manual tuning of hyperparameters.
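A hedged sketch of a VDSP-style update may help clarify the rule: it fires only on postsynaptic spikes and reads the presynaptic membrane potential, with multiplicative soft bounds providing the weight regularization mentioned above. The crossover point between potentiation and depression below is a placeholder, not the paper's calibrated value.

```python
def vdsp_update(w, v_pre, v_rest, v_thresh, eta=0.01):
    """Hypothetical VDSP-style update, applied ONLY when the postsynaptic
    neuron spikes: a depolarized presynaptic membrane (recent presynaptic
    activity) potentiates, a hyperpolarized one depresses. Multiplicative
    factors softly bound w to [0, 1]."""
    u = (v_pre - v_rest) / (v_thresh - v_rest)  # normalized pre-membrane potential
    if u > 0.5:
        w += eta * u * (1.0 - w)        # potentiation, soft-bounded above
    else:
        w -= eta * (1.0 - u) * w        # depression, soft-bounded below
    return w
```

Because the presynaptic membrane potential is already held by the neuron circuit, this rule needs no spike-trace memory, which is the hardware advantage claimed over STDP.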
Synaptic memory consolidation has been heralded as one of the key mechanisms for supporting continual learning in neuromorphic artificial intelligence (AI) systems. Here, we report that a Fowler-Nordheim (FN) quantum-tunneling device can implement synaptic memory consolidation similar to what can be achieved with algorithmic consolidation models such as the cascade and elastic weight consolidation (EWC) models. The proposed FN-synapse stores not only the synaptic weight but also the synapse's historical usage statistics on the device itself. We also show that the operation of the FN-synapse is near-optimal in terms of synaptic lifetime, and we demonstrate that a network comprising FN-synapses outperforms a comparable EWC network on a small continual-learning benchmark task. With an energy footprint of femtojoules per synaptic update, we believe the proposed FN-synapse provides an ultra-energy-efficient approach to implementing synaptic memory consolidation and continual learning.
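For reference, the EWC baseline against which the FN-synapse is compared penalizes movement of weights that were important for earlier tasks; a one-function gradient sketch of the standard rule (from the published EWC formulation, with illustrative argument names) is shown below.

```python
def ewc_gradient(theta, grad_task, theta_star, fisher, lam=1.0):
    """Elastic weight consolidation: the loss is
    L = L_task + (lam/2) * sum_i F_i (theta_i - theta*_i)^2,
    so weights with large Fisher information F_i are pulled back
    toward their previous-task values theta*_i."""
    return grad_task + lam * fisher * (theta - theta_star)
```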
Spiking neural networks (SNNs) offer a computing paradigm capable of highly parallel, real-time processing. Photonic devices are ideal for designing the high-bandwidth, parallel architectures that match the SNN computing paradigm. Co-integration of CMOS and photonic elements allows low-loss photonic devices to be combined with analog electronics, giving greater flexibility in nonlinear computational elements. We therefore designed and simulated an optoelectronic spiking neuron circuit in a monolithic silicon photonics (SiPh) process that replicates useful spiking behaviors beyond the leaky integrate-and-fire (LIF) model. In addition, we explored two learning algorithms with potential for on-chip learning that use Mach-Zehnder interferometer (MZI) meshes as synaptic interconnects. A variation of random backpropagation (RPB) was demonstrated experimentally on-chip and matched the performance of standard linear regression on a simple classification task. Meanwhile, the contrastive Hebbian learning (CHL) rule was applied to a simulated neural network composed of MZI meshes on a random input-output mapping task. The CHL-trained MZI network performed better than random guessing but did not match the performance of an ideal neural network (without the constraints imposed by the MZI mesh). Through these efforts, we demonstrate that co-integrated CMOS and SiPh technologies are well suited to the design of scalable SNN computing architectures.
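The appeal of random backpropagation for on-chip learning is that it avoids weight transport. The sketch below implements the generic feedback-alignment idea (a fixed random feedback matrix in place of the transposed forward weights); the paper's RPB variant may differ in detail.

```python
import numpy as np

rng = np.random.default_rng(6)

# One hidden layer trained with a feedback-alignment-style rule: the error
# is propagated through a FIXED random matrix B instead of W2.T, so the
# hardware never needs access to the forward weights for learning.
n_in, n_hid, n_out = 4, 8, 2
W1 = rng.normal(0, 0.5, (n_hid, n_in))
W2 = rng.normal(0, 0.5, (n_out, n_hid))
B = rng.normal(0, 0.5, (n_hid, n_out))        # fixed random feedback

def step(x, y, lr=0.05):
    global W1, W2
    h = np.tanh(W1 @ x)
    y_hat = W2 @ h
    e = y_hat - y                              # output error
    delta_h = (B @ e) * (1.0 - h ** 2)         # random feedback, not W2.T
    W2 -= lr * np.outer(e, h)
    W1 -= lr * np.outer(delta_h, x)
    return float(e @ e)
```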
Reservoir computing (RC) has attracted recent interest because the reservoir weights need not be trained, enabling implementations with extremely low resource consumption that could have a transformative impact on edge computing and in-situ learning, where resources are severely constrained. Ideally, a natural hardware reservoir should be passive, minimal, expressive, and feasible; to date, proposed hardware reservoirs have struggled to meet all of these criteria. We therefore propose a reservoir that meets all of these criteria by leveraging the passive interactions of dipole-coupled, frustrated nanomagnets. The frustration significantly increases the number of stable reservoir states, enriching the reservoir dynamics, and as such these frustrated nanomagnets fulfill all of the criteria for a natural hardware reservoir. We likewise present a complete frustrated-nanomagnet reservoir computing (NMRC) system with low-power complementary metal-oxide-semiconductor (CMOS) circuitry for interfacing with the reservoir, and initial experimental results demonstrate the reservoir's feasibility. The reservoir is verified with micromagnetic simulations on three separate tasks. The proposed system is compared with a CMOS echo-state network (ESN), demonstrating an overall resource reduction of more than 10,000,000x and showing that, because NMRC is naturally passive and minimal, it has the potential to be extremely resource-efficient.
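The contrast with the ESN baseline is easiest to see in code: in reservoir computing only the linear readout is trained. Below is a minimal software ESN (a generic stand-in for either the CMOS baseline or a physical reservoir) on a one-step-recall task; sizes and the spectral-radius target are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(7)

n_res, n_in = 50, 1
W_res = rng.normal(0, 1, (n_res, n_res))
W_res *= 0.9 / np.abs(np.linalg.eigvals(W_res)).max()  # echo-state scaling
W_in = rng.normal(0, 0.5, (n_res, n_in))

def run(u):
    """Drive the fixed random reservoir and collect its states."""
    x, states = np.zeros(n_res), []
    for ut in u:
        x = np.tanh(W_res @ x + W_in @ np.atleast_1d(ut))
        states.append(x.copy())
    return np.array(states)

u = rng.uniform(-1, 1, 300)
X = run(u)
y = np.roll(u, 1)                 # task: recall the previous input (memory)
W_out = np.linalg.lstsq(X[1:], y[1:], rcond=None)[0]   # train readout ONLY
```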
In this work, we introduce an optoelectronic spiking neuron capable of operating at ultrafast rates (around 100 ps per optical spike) and with low energy consumption (< pJ per spike). The proposed system combines an excitable resonant tunneling diode (RTD) element exhibiting negative differential conductance, coupled either to a nanoscale light source (forming a master node) or to a photodetector (forming a receiver node). We numerically study the spiking dynamical responses and information-propagation functionality of an interconnected master-receiver RTD node system. Using the key functionality of pulse thresholding and integration, we utilize a single node to classify sequential pulse patterns and to perform convolutional functionality for image feature (edge) recognition. We also demonstrate an optically interconnected spiking neural network model for processing spatiotemporal data at over 10 Gbps with high inference accuracy. Finally, we demonstrate an off-chip supervised learning approach that utilizes spike-timing-dependent plasticity for an RTD-enabled photonic spiking neural network. These results demonstrate the potential and viability of RTD spiking nodes for low-footprint, low-energy, high-speed optoelectronic realizations of neuromorphic hardware.
Event-based neuromorphic systems provide a low-power solution by using artificial neurons and synapses to process data asynchronously in the form of spikes. Ferroelectric Tunnel Junctions (FTJs) are ultra low-power memory devices and are well-suited to be integrated in these systems. Here, we present a hybrid FTJ-CMOS Integrate-and-Fire neuron which constitutes a fundamental building block for new-generation neuromorphic networks for edge computing. We demonstrate electrically tunable neural dynamics achievable by tuning the switching of the FTJ device.
We present the first experimental demonstration of a neuromorphic network with magnetic tunnel junction (MTJ) synapses, which performs image recognition via vector-matrix multiplication. We also simulate a large MTJ network performing MNIST handwritten-digit recognition, demonstrating that MTJ crossbars can match memristor accuracy while offering greater precision, stability, and endurance.
Deep neural networks (DNNs) are currently widely used for many artificial intelligence (AI) applications including computer vision, speech recognition, and robotics. While DNNs deliver state-of-the-art accuracy on many AI tasks, this accuracy comes at the cost of high computational complexity. Accordingly, techniques that enable efficient processing of DNNs to improve energy efficiency and throughput without sacrificing application accuracy or increasing hardware cost are critical to the wide deployment of DNNs in AI systems. This article aims to provide a comprehensive tutorial and survey about the recent advances towards the goal of enabling efficient processing of DNNs. Specifically, it will provide an overview of DNNs, discuss various hardware platforms and architectures that support DNNs, and highlight key trends in reducing the computation cost of DNNs either solely via hardware design changes or via joint hardware design and DNN algorithm changes. It will also summarize various development resources that enable researchers and practitioners to quickly get started in this field, and highlight important benchmarking metrics and design considerations that should be used for evaluating the rapidly growing number of DNN hardware designs, optionally including algorithmic co-designs, being proposed in academia and industry. The reader will take away the following concepts from this article: understand the key design considerations for DNNs; be able to evaluate different DNN hardware implementations with benchmarks and comparison metrics; understand the trade-offs between various hardware architectures and platforms; be able to evaluate the utility of various DNN design techniques for efficient processing; and understand recent implementation trends and opportunities.