The brain effectively performs nonlinear computations through its intricate networks of spiking neurons, but how it does so remains elusive. While nonlinear computations can be implemented successfully in spiking neural networks, this requires supervised training and the resulting connectivity can be hard to interpret. In contrast, the required connectivity for any computation in the form of a linear dynamical system can be directly derived and understood with the spike coding network (SCN) framework. These networks also have biologically realistic activity patterns and are highly robust to cell death. Here we extend the SCN framework to directly implement any polynomial dynamical system, without the need for training. This results in networks requiring a mix of synapse types (fast, slow, and multiplicative), which we term multiplicative spike coding networks (mSCNs). Using mSCNs, we demonstrate how to directly derive the required connectivity for several nonlinear dynamical systems. We also show how to carry out higher-order polynomials with coupled networks that use only pairwise multiplicative synapses, and provide expected numbers of connections for each synapse type. Overall, our work demonstrates a novel method for implementing nonlinear computations in spiking neural networks, while keeping the attractive features of standard SCNs (robustness, realistic activity patterns, and interpretable connectivity). Finally, we discuss the biological plausibility of our approach, and how its high accuracy and robustness may be of interest for neuromorphic computing.
Efficient and robust control using spiking neural networks (SNNs) is still an open problem. Whilst behaviour of biological agents is produced through sparse and irregular spiking patterns, which provide both robust and efficient control, the activity patterns in most artificial spiking neural networks used for control are dense and regular -- resulting in potentially less efficient codes. Additionally, for most existing control solutions network training or optimization is necessary, even for fully identified systems, complicating their implementation in on-chip low-power solutions. The neuroscience theory of Spike Coding Networks (SCNs) offers a fully analytical solution for implementing dynamical systems in recurrent spiking neural networks -- while maintaining irregular, sparse, and robust spiking activity -- but it's not clear how to directly apply it to control problems. Here, we extend SCN theory by incorporating closed-form optimal estimation and control. The resulting networks work as a spiking equivalent of a linear-quadratic-Gaussian controller. We demonstrate robust spiking control of simulated spring-mass-damper and cart-pole systems, in the face of several perturbations, including input- and system-noise, system disturbances, and neural silencing. As our approach does not need learning or optimization, it offers opportunities for deploying fast and efficient task-specific on-chip spiking controllers with biologically realistic activity.
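The controller described above works as a spiking equivalent of a linear-quadratic-Gaussian controller. As a point of reference, a minimal non-spiking sketch of the underlying linear-quadratic regulator for a spring-mass-damper is given below; all parameters are illustrative, the discretization is plain Euler, and the Kalman filter (the "Gaussian" part of LQG) is omitted for brevity:

```python
# Minimal discrete-time LQR for a spring-mass-damper (illustrative parameters).
# The spiking SCN controller in the paper realizes this control law with spikes;
# here we only sketch the classical closed-form solution it is equivalent to.

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def mat_add(A, B):
    return [[A[i][j] + B[i][j] for j in range(len(A[0]))] for i in range(len(A))]

def transpose(A):
    return [list(row) for row in zip(*A)]

# Spring-mass-damper: m*x'' = -k*x - c*x' + u, state s = [position, velocity].
m, k, c, dt = 1.0, 2.0, 0.5, 0.01
A = [[1.0, dt], [-k / m * dt, 1.0 - c / m * dt]]  # Euler-discretized dynamics
B = [[0.0], [dt / m]]
Q = [[1.0, 0.0], [0.0, 1.0]]                      # state cost
R = 0.1                                           # input cost (scalar input)

# Iterate the discrete-time Riccati equation to convergence.
P = [row[:] for row in Q]
for _ in range(500):
    BtP = mat_mul(transpose(B), P)                  # B'P      (1x2)
    s = R + mat_mul(BtP, B)[0][0]                   # R + B'PB (scalar)
    BtPA = mat_mul(BtP, A)                          # B'PA     (1x2)
    K = [[v / s for v in BtPA[0]]]                  # feedback gain (1x2)
    APA = mat_mul(transpose(A), mat_mul(P, A))      # A'PA
    corr = mat_mul(transpose(BtPA), K)              # A'PB (R+B'PB)^-1 B'PA
    P = mat_add(Q, [[APA[i][j] - corr[i][j] for j in range(2)] for i in range(2)])

# Closed-loop simulation from a displaced initial state: u = -K s.
state = [1.0, 0.0]
for _ in range(3000):
    u = -(K[0][0] * state[0] + K[0][1] * state[1])
    x, v = state
    state = [x + dt * v, v + dt * (-k * x - c * v + u) / m]

print("final state:", state)  # regulated back toward the origin
```

The gain `K` is what the spiking network encodes in its connectivity; the same Riccati machinery extends to the cart-pole after linearization.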
This chapter sheds light on the synaptic organization of the brain from the perspective of computational neuroscience. It provides an introductory overview on how to account for empirical data in mathematical models, implement them in software, and perform simulations reflecting experiments. This path is demonstrated with respect to four key aspects of synaptic signaling: the connectivity of brain networks, synaptic transmission, synaptic plasticity, and the heterogeneity across synapses. Each step and aspect of the modeling and simulation workflow comes with its own challenges and pitfalls, which are highlighted and addressed in detail.
Understanding how biological neural networks carry out learning using spike-based local plasticity mechanisms can lead to the development of powerful, energy-efficient, and adaptive neuromorphic processing systems. A large number of spike-based learning models have recently been proposed following different approaches. However, it is difficult to assess if and how they could be mapped onto neuromorphic hardware, and to compare their features and ease of implementation. To this end, in this survey, we provide a comprehensive overview of representative brain-inspired synaptic plasticity models and mixed-signal CMOS neuromorphic circuits within a unified framework. We review historical, bottom-up, and top-down approaches to modeling synaptic plasticity, and we identify computational primitives that can support low-latency and low-power hardware implementations of spike-based learning rules. We provide a common definition of a locality principle based on pre- and post-synaptic neuron information, which we propose as a fundamental requirement for physical implementations of synaptic plasticity. Based on this principle, we compare the properties of these models within the same framework, and describe the mixed-signal electronic circuits that implement their computing primitives, pointing out how these building blocks enable efficient on-chip and online learning in neuromorphic processing systems.
Predictive coding offers a potentially unifying account of cortical function, postulating that the core function of the brain is to minimize prediction errors with respect to a generative model of the world. The theory is closely related to the Bayesian brain framework and, over the last two decades, has gained substantial influence in the fields of theoretical and cognitive neuroscience. A large body of research has arisen based on both empirically testing improved and extended theoretical and mathematical models of predictive coding, and evaluating their potential biological plausibility for implementation in the brain, as well as the concrete neurophysiological and psychological predictions made by the theory. Despite this enduring popularity, however, no comprehensive review of predictive coding theory, and especially of recent developments in the field, exists. Here, we provide a comprehensive review of the core mathematical structure and logic of predictive coding, thus complementing recent tutorials in the literature. We also review a wide range of classic and recent work within the framework, ranging from the neurobiologically realistic microcircuits that could implement predictive coding, to the close relationship between predictive coding and the widely used backpropagation-of-error algorithm, as well as surveying the relationships between predictive coding and modern machine learning techniques.
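The core iteration the review formalizes can be sketched in a few lines: a latent state predicts an observation through generative weights, a prediction error is computed, and the latent state descends the error energy. The toy below is a single linear layer with a Gaussian prior on the latent; all sizes, the learning rate, and the step count are illustrative, not taken from any specific model in the review.

```python
# Toy predictive coding inference: a latent vector x predicts an observation y
# through fixed generative weights W (y_hat = W x). Inference is gradient
# descent on E = 0.5*||y - W x||^2 + 0.5*||x||^2 (squared error + prior).
import random

def pc_infer(W, y, lr=0.05, steps=500):
    n_lat = len(W[0])
    x = [0.0] * n_lat
    for _ in range(steps):
        # Bottom-up prediction error: eps_i = y_i - sum_j W_ij x_j
        eps = [y_i - sum(W[i][j] * x[j] for j in range(n_lat))
               for i, y_i in enumerate(y)]
        # Latent update: error fed back through the weights, minus the prior.
        for j in range(n_lat):
            x[j] += lr * (sum(W[i][j] * eps[i] for i in range(len(y))) - x[j])
    return x

random.seed(0)
W = [[random.gauss(0, 1) for _ in range(2)] for _ in range(4)]
x_true = [1.0, -0.5]
y = [sum(W[i][j] * x_true[j] for j in range((2))) for i in range(4)]
x_hat = pc_infer(W, y)
print(x_hat)  # approaches the ridge-regularized least-squares latent estimate
```

At convergence the update is stationary, i.e. the fed-back error balances the prior; learning (not shown) would then adjust `W` along the same local error signals.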
The surge of interest in artificial intelligence (AI) over the past decade has been driven almost exclusively by advances in artificial neural networks (ANNs). While ANNs set state-of-the-art performance for many previously intractable problems, they require large amounts of data and computational resources for training, and since they employ supervised learning they typically need to know the correctly labelled response for every training example, limiting their scalability to real-world domains. Spiking neural networks (SNNs) are an alternative to ANNs that use more brain-like neurons and can use unsupervised learning to discover recognizable features in the input data without knowing the correct responses. SNNs, however, struggle with dynamical stability and have failed to match the accuracy of ANNs. Here we show how an SNN can overcome many of the shortcomings found in the literature, including offering a principled solution to the vanishing spike problem, outperforming all existing shallow SNNs and equalling the performance of an ANN. It accomplishes this while using unsupervised learning with unlabelled data and only 1/50th of the training epochs (labelled data is used only for the final, simple linear readout layer). This result makes SNNs a viable new method for fast, accurate, efficient, and interpretable machine learning with unlabelled datasets.
Spike-based neuromorphic hardware holds the promise of more energy-efficient implementations of deep neural networks (DNNs) than standard hardware such as GPUs. But this requires understanding how DNNs can be emulated in an event-based, sparse-firing regime, since otherwise the energy advantage is lost. In particular, DNNs that solve sequence processing tasks typically employ long short-term memory (LSTM) units that are hard to emulate with few spikes. We show that a facet of many biological neurons, slow after-hyperpolarizing (AHP) currents after each spike, provides an efficient solution. AHP currents can easily be implemented in neuromorphic hardware that supports multi-compartment neuron models, such as Intel's Loihi chip. Filter approximation theory explains why AHP neurons can emulate the function of LSTM units. This yields a highly energy-efficient approach to time series classification. Furthermore, it provides the basis for implementing, with very sparse firing activity, large DNNs that extract relations between words and sentences in text in order to answer questions about the text.
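The mechanism is easy to state in code: each spike increments a slowly decaying adaptation current that is subtracted from the input, so inter-spike intervals lengthen over time, retaining a memory of past activity much like an LSTM cell state. The sketch below uses a plain discrete-time LIF neuron with illustrative parameters (not those of the Loihi implementation):

```python
# Leaky integrate-and-fire neuron with a slow after-hyperpolarizing (AHP)
# current: every spike adds to an adaptation variable that decays with a long
# time constant and opposes the drive, producing spike-frequency adaptation.

def simulate_lif_ahp(i_in, t_steps=2000, dt=1.0,
                     tau_mem=20.0, tau_ahp=500.0, ahp_jump=0.3, v_th=1.0):
    v, ahp, spikes = 0.0, 0.0, []
    for t in range(t_steps):
        v += dt / tau_mem * (-v + i_in - ahp)  # membrane integration
        ahp += dt / tau_ahp * (-ahp)           # slow decay of the AHP current
        if v >= v_th:
            v = 0.0             # reset after a spike
            ahp += ahp_jump     # slow negative feedback carries the memory
            spikes.append(t)
    return spikes

spikes = simulate_lif_ahp(i_in=2.0)
# Inter-spike intervals lengthen as the AHP current builds up.
isis = [b - a for a, b in zip(spikes, spikes[1:])]
print(isis[:3], isis[-3:])
```

The slowly decaying `ahp` variable plays the role the paper attributes to AHP currents: a low-spike-rate state variable with a time constant much longer than the membrane's.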
Recurrent neural networks (RNNs) are powerful dynamical models, widely used in machine learning (ML) and neuroscience. Prior theoretical work has focused on RNNs with additive interactions. However, gating (i.e., multiplicative) interactions are ubiquitous in real neurons and are also the central feature of the best-performing RNNs in ML. Here, we show that gating offers flexible control of two salient features of the collective dynamics: i) timescales and ii) dimensionality. The gate controlling timescales leads to a novel, marginally stable state in which the network functions as a flexible integrator. Unlike previous approaches, gating permits this important function without parameter fine-tuning or special symmetries. Gates also provide a flexible, context-dependent mechanism to reset the memory trace, thus complementing the memory function. The gate modulating dimensionality can induce a novel, discontinuous chaotic transition, in which inputs push a stable system into strong chaotic activity, in contrast to the typically stabilizing effect of inputs. At this transition, unlike in additive RNNs, the proliferation of critical points (topological complexity) is decoupled from the appearance of chaotic dynamics (dynamical complexity). The rich dynamics are summarized in phase diagrams, thus providing a map for principled parameter initialization choices for ML practitioners.
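The timescale-control mechanism can be illustrated with a discrete-time update gate, a simplification of the continuous-time gated RNN analyzed in the paper: the gate `z` multiplies the state update, so `z -> 0` freezes the state (the integrator limit) while `z -> 1` relaxes it in a single step. The specific update form and all weights below are illustrative choices, not the paper's exact model.

```python
# Minimal gated RNN step: h_new = (1 - z)*h + z*tanh(W_h h + W_x x).
# The gate z multiplicatively sets each unit's effective time constant.
import math, random

def gated_step(h, x, W_h, W_x, z):
    n = len(h)
    pre = [sum(W_h[i][j] * h[j] for j in range(n)) + W_x[i] * x
           for i in range(n)]
    return [(1 - z) * h[i] + z * math.tanh(pre[i]) for i in range(n)]

random.seed(1)
n = 5
W_h = [[random.gauss(0, 1.0 / math.sqrt(n)) for _ in range(n)] for _ in range(n)]
W_x = [random.gauss(0, 1) for _ in range(n)]
h0 = [random.gauss(0, 1) for _ in range(n)]

# Slow gate: after one step the state barely moves (long memory trace).
h_slow = gated_step(h0, 0.0, W_h, W_x, z=0.01)
# Fast gate: after one step the state has fully relaxed to tanh(pre).
h_fast = gated_step(h0, 0.0, W_h, W_x, z=1.0)

drift_slow = max(abs(a - b) for a, b in zip(h_slow, h0))
print(drift_slow)  # small: the gate preserves the state
```

This is the sense in which gating yields integrator-like behavior without fine-tuning: the timescale is set by a gate variable, not by tuning `W_h` to a special symmetric or marginal regime.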
Equilibrium systems are a powerful way to express neural computations. As special cases, they include models of great current interest in both neuroscience and machine learning, such as equilibrium recurrent neural networks, deep equilibrium models, and meta-learning. Here, we present a new principle for learning such systems with a temporally and spatially local rule. Our principle casts learning as a least-control problem: we first introduce an optimal controller to lead the system towards a solution state, and then define learning as reducing the amount of control needed to reach such a state. We show that incorporating learning signals within the dynamics as optimal control enables transmitting credit assignment information in previously unknown ways, avoids storing intermediate states in memory, and does not rely on infinitesimal learning signals. In practice, our principle achieves strong performance matching that of leading gradient-based learning methods when applied to an array of problems involving recurrent neural networks and meta-learning. Our results shed light on how the brain might learn and offer new ways of approaching a broad class of machine learning problems.
Synaptic plasticity allows cortical circuits to learn new tasks and to adapt to changing environments. How do cortical circuits use plasticity to acquire functions such as decision-making or working memory? Neurons are connected in complex ways, forming recurrent neural networks, and learning modifies the strength of their connections. Moreover, neurons communicate emitting brief discrete electric signals. Here we describe how to train recurrent neural networks in tasks like those used to train animals in neuroscience laboratories, and how computations emerge in the trained networks. Surprisingly, artificial networks and real brains can use similar computational strategies.
Spiking neural networks (SNNs) underlie low-power, fault-tolerant information processing in the brain and could constitute a power-efficient alternative to conventional deep neural networks when implemented on suitable neuromorphic hardware accelerators. However, instantiating SNNs that solve complex computational tasks in silico remains a significant challenge. Surrogate gradient (SG) techniques have emerged as a standard solution for training SNNs end-to-end. Still, their success depends on synaptic weight initialization, similar to conventional artificial neural networks (ANNs). Yet, unlike for ANNs, it remains elusive what constitutes a good initial state for an SNN. Here, we develop a general initialization strategy for SNNs inspired by the fluctuation-driven regime commonly observed in the brain. Specifically, we derive practical solutions for data-dependent weight initialization that ensure fluctuation-driven firing in the widely used leaky integrate-and-fire (LIF) neurons. We empirically show that SNNs initialized following our strategy exhibit superior learning performance when trained with SGs. These findings generalize across several datasets and SNN architectures, including fully connected, deep convolutional, recurrent, and more biologically plausible SNNs obeying Dale's law. Thus, fluctuation-driven initialization provides a practical, versatile, and easy-to-implement strategy for improving SNN training performance on diverse tasks in neuromorphic engineering and computational neuroscience.
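The essence of a data-dependent, fluctuation-driven initialization can be sketched as a two-step recipe: draw weights, measure the spread of the resulting input currents on real data, and rescale so the membrane fluctuations sit at a chosen fraction of the firing threshold. The function name, target value, and toy data below are illustrative, not the paper's exact derivation.

```python
# Sketch of fluctuation-driven weight initialization: rescale Gaussian weights
# so that the empirical std of the summed input currents, measured on data,
# equals a target fraction of the firing threshold (threshold = 1 here).
import math, random

def fluctuation_driven_init(data, n_out, target_std=0.5, seed=0):
    rng = random.Random(seed)
    n_in = len(data[0])
    W = [[rng.gauss(0, 1.0 / math.sqrt(n_in)) for _ in range(n_in)]
         for _ in range(n_out)]
    # Empirical std of input currents, pooled over data samples and units.
    currents = [sum(w * xi for w, xi in zip(row, x)) for x in data for row in W]
    mean = sum(currents) / len(currents)
    std = math.sqrt(sum((c - mean) ** 2 for c in currents) / len(currents))
    scale = target_std / std          # linear rescaling hits the target exactly
    return [[w * scale for w in row] for row in W]

rng = random.Random(42)
data = [[rng.random() for _ in range(100)] for _ in range(50)]  # toy inputs
W = fluctuation_driven_init(data, n_out=20)

currents = [sum(w * xi for w, xi in zip(row, x)) for x in data for row in W]
mean = sum(currents) / len(currents)
std = math.sqrt(sum((c - mean) ** 2 for c in currents) / len(currents))
print(std)  # matches target_std, so sub-threshold fluctuations drive spiking
```

In the fluctuation-driven regime, the mean drive sits below threshold and spikes are triggered by these calibrated fluctuations, which is what keeps surrogate gradients informative at the start of training.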
In the brain, information is encoded, transmitted and used to inform behaviour at the level of timing of action potentials distributed over population of neurons. To implement neural-like systems in silico, to emulate neural function, and to interface successfully with the brain, neuromorphic circuits need to encode information in a way compatible to that used by populations of neuron in the brain. To facilitate the cross-talk between neuromorphic engineering and neuroscience, in this Review we first critically examine and summarize emerging recent findings about how population of neurons encode and transmit information. We examine the effects on encoding and readout of information for different features of neural population activity, namely the sparseness of neural representations, the heterogeneity of neural properties, the correlations among neurons, and the time scales (from short to long) at which neurons encode information and maintain it consistently over time. Finally, we critically elaborate on how these facts constrain the design of information coding in neuromorphic circuits. We focus primarily on the implications for designing neuromorphic circuits that communicate with the brain, as in this case it is essential that artificial and biological neurons use compatible neural codes. However, we also discuss implications for the design of neuromorphic systems for implementation or emulation of neural computation.
The spectacular successes of recurrent neural network models where key parameters are adjusted via backpropagation-based gradient descent have inspired much thought as to how biological neuronal networks might solve the corresponding synaptic credit assignment problem. There is so far little agreement, however, as to how biological networks could implement the necessary backpropagation through time, given widely recognized constraints of biological synaptic network signaling architectures. Here, we propose that extra-synaptic diffusion of local neuromodulators such as neuropeptides may afford an effective mode of backpropagation lying within the bounds of biological plausibility. Going beyond existing temporal truncation-based gradient approximations, our approximate gradient-based update rule, ModProp, propagates credit information through arbitrary time steps. ModProp suggests that modulatory signals can act on receiving cells by convolving their eligibility traces via causal, time-invariant and synapse-type-specific filter taps. Our mathematical analysis of ModProp learning, together with simulation results on benchmark temporal tasks, demonstrate the advantage of ModProp over existing biologically-plausible temporal credit assignment rules. These results suggest a potential neuronal mechanism for signaling credit information related to recurrent interactions over a longer time horizon. Finally, we derive an in-silico implementation of ModProp that could serve as a low-complexity and causal alternative to backpropagation through time.
In many scientific disciplines, we are interested in inferring the nonlinear dynamical system underlying a set of observed time series, a challenging task in the face of chaotic behavior and noise. Previous deep learning approaches toward this goal often lacked interpretability and tractability. In particular, the high-dimensional latent spaces often required for a faithful embedding, even when the underlying dynamics lives on a lower-dimensional manifold, can hamper theoretical analysis. Motivated by the emerging principles of dendritic computation, we augment a dynamically interpretable and mathematically tractable piecewise-linear (PL) recurrent neural network (RNN) with a linear spline basis expansion. We show that this approach retains all the theoretically appealing properties of the simple PLRNN, yet boosts its capacity for approximating arbitrary nonlinear dynamical systems in comparatively low dimensions. We employ two frameworks for training the system: one combining backpropagation through time (BPTT) with teacher forcing, and another based on fast and scalable variational inference. We show that the dendritically expanded PLRNN achieves better reconstructions with fewer parameters and dimensions than other methods on various dynamical systems benchmarks, while retaining a tractable and interpretable structure.
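The "dendritic" expansion amounts to replacing each unit's single ReLU nonlinearity with a linear combination of shifted ReLUs, i.e. a linear spline basis, phi(z) = sum_b alpha_b * max(0, z - h_b). A minimal sketch (slopes and thresholds chosen by hand for illustration) shows that the expansion can express a non-monotonic "bump" that a single ReLU cannot:

```python
# Linear spline basis expansion of a unit nonlinearity, as in dendritically
# expanded PLRNNs: phi(z) = sum_b alpha_b * max(0, z - h_b).

def dendritic_phi(z, alphas, thresholds):
    return sum(a * max(0.0, z - h) for a, h in zip(alphas, thresholds))

# Rising slope from 0, cancelled at 1, flattened again at 2: a triangular bump
# peaking at phi(1) = 1, which no single ReLU (monotone) can represent.
alphas = [1.0, -2.0, 1.0]
thresholds = [0.0, 1.0, 2.0]

samples = [dendritic_phi(z / 10, alphas, thresholds) for z in range(-5, 31)]
print(max(samples))  # peak of the bump, attained at z = 1
```

Within the PLRNN each latent dimension gets its own set of `alphas` and `thresholds` (trained, not hand-set), which is what lifts the approximation capacity without increasing the latent dimensionality.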
Toward energy-efficient computation in specialized neuromorphic hardware, we propose spiking neural coding, an instantiation of a family of artificial neural models grounded in the theory of predictive coding. The model operates through a never-ending process of "guess-and-check": neurons predict one another's activity values and then adjust their own activities to make better future predictions. The interactive, iterative nature of our system is well suited to a continuous-time formulation of sensory stream prediction, and, as we show, the model's structure yields a local synaptic update rule that can be used to complement, or serve as an alternative to, online spike-timing-dependent plasticity. In this article we present an instantiation of the model consisting of leaky integrate-and-fire units; however, the framework in which our system is situated can naturally incorporate more complex neurons, such as the Hodgkin-Huxley model. Our experimental results on pattern recognition demonstrate the model's potential when binary spike trains are the primary paradigm of inter-neuron communication. Notably, spiking neural coding is competitive in classification performance and exhibits less forgetting when learning from a sequence of tasks, offering a more economical, biologically plausible alternative to popular artificial neural networks.
Spiking neural networks (SNNs) provide a new computational paradigm capable of highly parallel, real-time processing. Photonic devices are ideal for the design of high-bandwidth, parallel architectures matching the SNN computational paradigm. Co-integration of CMOS and photonic elements allows low-loss photonic devices to be combined with analog electronics, giving greater flexibility in nonlinear computational elements. We therefore designed and simulated an optoelectronic spiking neuron circuit on a monolithic silicon photonics (SiPh) process that replicates useful spiking behaviors beyond leaky integrate-and-fire (LIF). In addition, we explored two learning algorithms with the potential for on-chip learning, using Mach-Zehnder interferometric (MZI) meshes as synaptic interconnects. A variation of random backpropagation (RPB) was demonstrated experimentally and matched the performance of standard linear regression on a simple classification task. Meanwhile, a contrastive Hebbian learning (CHL) rule was applied to a simulated neural network composed of MZI meshes on a random input-output mapping task. The CHL-trained MZI network performed better than random guessing, but did not match the performance of an ideal neural network (without the constraints imposed by the MZI meshes). Through these efforts, we demonstrate that co-integrated CMOS and SiPh technologies are well suited to the design of scalable SNN computing architectures.
This ongoing work aims to provide a unified introduction to statistical learning, building up slowly from classical models like the GMM and HMM to modern neural networks like the VAE and diffusion models. There are many internet resources today that explain this or that new machine-learning algorithm in isolation, but they do not (and cannot, in such a brief space) connect these algorithms with each other, or with the classical literature on statistical models out of which the modern algorithms emerged. Also conspicuously lacking is a single notational system which, although unproblematic for those already familiar with the material (like the author of these posts), raises a significant barrier to the novice's entry. Likewise, I have aimed to assimilate the various models, wherever possible, into a single framework for inference and learning, showing how (and why) one model can be changed into another with minimal alteration (some of these derivations are novel, others are from the literature). Some background is of course necessary: I have assumed the reader is familiar with basic multivariable calculus, probability and statistics, and linear algebra. The goal of this book is certainly not completeness, but rather a more or less straight-line path from the basics to the extremely powerful new models of the last decade. The aim, then, is to complement, not replace, comprehensive texts such as Bishop's \emph{Pattern Recognition and Machine Learning}, which is now 15 years old.
Models of sensory processing and learning in the cortex need to efficiently assign credit to synapses in all areas. In deep learning, a known solution is error backpropagation, which however requires biologically implausible weight transport from feed-forward to feedback paths. We introduce Phaseless Alignment Learning (PAL), a bio-plausible method to learn efficient feedback weights in layered cortical hierarchies. This is achieved by exploiting the noise naturally found in biophysical systems as an additional carrier of information. In our dynamical system, all weights are learned simultaneously with always-on plasticity and using only information locally available to the synapses. Our method is completely phase-free (no forward and backward passes or phased learning) and allows for efficient error propagation across multi-layer cortical hierarchies, while maintaining biologically plausible signal transport and learning. Our method is applicable to a wide class of models and improves on previously known biologically plausible ways of credit assignment: compared to random synaptic feedback, it can solve complex tasks with less neurons and learn more useful latent representations. We demonstrate this on various classification tasks using a cortical microcircuit model with prospective coding.
There is increasing interest in emulating spiking neural networks (SNNs) on neuromorphic computing devices due to their low energy consumption. Recent advances have allowed training SNNs to the point where they begin to compete with traditional artificial neural networks (ANNs) in terms of accuracy, while being energy efficient when run on neuromorphic hardware. However, the process of training SNNs is still based on dense tensor operations originally developed for ANNs, which do not exploit the spatio-temporally sparse nature of SNNs. We present here the first sparse SNN backpropagation algorithm, which achieves the same or better accuracy as current state-of-the-art methods while being significantly faster and more memory efficient. We demonstrate the effectiveness of our method on real datasets of varying complexity (Fashion-MNIST, Neuromorphic-MNIST, and Spiking Heidelberg Digits), achieving a speedup of up to 150x in the backward pass without loss of accuracy.
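The source of the sparsity can be illustrated in a toy single-step example: the surrogate gradient of the spike function is nonzero only for neurons whose membrane potential lies near the threshold, so the backward pass can gather that (typically small) active set and skip everyone else. The boxcar surrogate, sizes, and statistics below are illustrative, not the paper's exact configuration.

```python
# Sparse vs dense backward pass through a spike nonlinearity. The boxcar
# surrogate derivative is nonzero only within +-width of the threshold, so
# only the "active" neurons contribute gradients.
import random

def boxcar_surrogate(v, v_th=1.0, width=0.5):
    return 1.0 if abs(v - v_th) < width else 0.0

random.seed(3)
n = 10000
v = [random.gauss(0.0, 0.6) for _ in range(n)]         # membrane potentials
grad_out = [random.gauss(0.0, 1.0) for _ in range(n)]  # upstream gradients

# Dense backward pass: touches every neuron, even the silent majority.
dense = [g * boxcar_surrogate(vi) for g, vi in zip(grad_out, v)]

# Sparse backward pass: gather the active set first, process only that.
active = [i for i, vi in enumerate(v) if boxcar_surrogate(vi) != 0.0]
sparse = [0.0] * n
for i in active:
    sparse[i] = grad_out[i] * boxcar_surrogate(v[i])

print(len(active) / n)  # fraction of neurons doing any backward work
```

The two passes produce identical gradients; the saving comes from the backward work scaling with the active set rather than with the layer size, which is where speedups of the reported magnitude become possible.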
The error-backpropagation (backprop) algorithm remains the most common solution to the credit assignment problem in artificial neural networks. In neuroscience, it is unclear whether the brain could adopt a similar strategy to correct its synapses. Recent models have attempted to bridge this gap while remaining consistent with a range of experimental observations. However, these models are either unable to effectively propagate error signals back across multiple layers, or require a multi-phase learning process, neither of which is reminiscent of learning in the brain. Here, we introduce a new model, the Bursting Cortico-Cortical Network (BurstCCN), which solves these issues by integrating known properties of cortical networks, namely bursting activity, short-term plasticity (STP), and dendrite-targeting interneurons. BurstCCN relies on burst multiplexing via connection-type-specific STP to propagate backprop-like error signals within deep cortical networks. These error signals are encoded at distal dendrites and induce burst-dependent plasticity as a result of excitatory and inhibitory top-down inputs. First, we demonstrate that our model can effectively backpropagate errors through multiple layers using a single-phase learning process. Next, we show both empirically and analytically that learning in our model approximates backprop-derived gradients. Finally, we demonstrate that our model is capable of learning complex image classification tasks (MNIST and CIFAR-10). Overall, our results suggest that cortical features across the cellular, microcircuit, and systems levels could jointly underlie single-phase, efficient deep learning in the brain.