Artificial Intelligence (AI) and Machine Learning (ML) are weaving their way into the fabric of society, playing a crucial role in numerous facets of our lives. As AI and ML are increasingly deployed across various types of devices, we benefit from the development of energy-efficient algorithms for low-powered devices. In this paper, we investigate a scale and medium far smaller than conventional devices as we move towards molecular systems that can be utilized to perform machine learning functions, i.e., Molecular Machine Learning (MML). Fundamental to the operation of MML is the transport, processing, and interpretation of information propagated by molecules through chemical reactions. We begin by reviewing the current approaches that have been developed for MML, before moving towards potential new directions that rely on gene regulatory networks inside biological organisms, as well as their population interactions, to create neural networks. We then investigate mechanisms for training machine learning structures in biological cells based on calcium signaling and demonstrate their application to build an Analog-to-Digital Converter (ADC). Lastly, we look at potential future directions as well as open challenges in this area.
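As a toy illustration of the ADC application mentioned above, consider a chain of threshold units in which each decided bit is subtracted from the input before the next bit is resolved. This is a minimal sketch under idealized assumptions (Heaviside "neurons" rather than the calcium-signaling machinery the paper describes):

```python
# Minimal sketch of a 2-bit neural ADC built from threshold units.
# This is an illustrative abstraction, not the paper's calcium-signaling
# implementation: each "neuron" fires (outputs 1) when its input crosses
# a threshold, and decided bits are subtracted from the input before the
# next, less significant bit is resolved.

def step(x: float) -> int:
    """Heaviside threshold, standing in for a spiking/chemical decision."""
    return 1 if x >= 0 else 0

def neural_adc_2bit(x: float) -> tuple[int, int]:
    """Quantize an analog value x in [0, 4) to two bits (MSB, LSB)."""
    b1 = step(x - 2.0)              # MSB: is the signal in the upper half?
    b0 = step(x - 2.0 * b1 - 1.0)   # LSB: refine within the chosen half
    return b1, b0

for x in (0.3, 1.2, 2.3, 3.7):
    print(x, "->", neural_adc_2bit(x))
```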
In this paper, we present a novel model of neural plasticity in the form of a horizontal-vertical integration model of neural processing. We argue that a new approach to neural modeling will benefit the third wave of AI. The horizontal plane consists of an adaptive network of neurons connected by transmission links that generate spatio-temporal spike patterns; this conforms to the standard computational neuroscience approach. In addition, for each individual neuron there is a vertical part consisting of internal adaptive parameters that steer the external, membrane-expressed parameters relevant to neural transmission. Each neuron has a vertical modular system of parameters corresponding to (a) external parameters at the membrane layer, divided into compartments (spines, boutons), (b) internal parameters in the submembrane zone and the cytoplasm with its protein signaling network, and (c) core parameters in the nucleus for genetic and epigenetic information. In such a model, each node (= neuron) in the horizontal network has its own internal memory. Neural transmission and information storage are systematically separated, which is an important conceptual advance over synaptic-weight models. We discuss membrane-based (external) filtering and selection of external signals through rapid fluctuations, and intra-neuronal computing strategies ranging from intracellular protein signaling to the nucleus as the core system. We aim to show that the individual neuron has an important role in the computation of signals, and that many assumptions derived from the synaptic weight-adjustment hypothesis may not hold in a real brain. Not every transmission event leaves a trace, and the neuron is a self-programming device rather than being passively determined by current input. Ultimately, we strive to build a flexible memory system that processes facts and events automatically.
Gene regulatory networks are the interaction networks in biological organisms responsible for determining the production levels of proteins and peptides. Proteins are the workers of the cell factory, and their production defines the goals of the cell and its development. Various attempts have been made to model such networks, both to better understand these biological systems and to draw inspiration from them for solving computational problems. In this work, a biologically more realistic model of gene regulatory networks is proposed, combining cellular automata and artificial chemistry to model the interactions between regulatory proteins, called transcription factors, and the regulatory sites of genes. The results of this work show complex dynamics close to what can be observed in nature. Here, the impact of the system's initial state on the produced dynamics is analyzed, showing that such a tunable model can be targeted toward producing desired protein dynamics.
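As a toy illustration of the kind of regulatory dynamics at stake (not the cellular-automata/artificial-chemistry model the paper proposes), a minimal Hill-function feedback loop between a transcription factor and its gene product:

```python
# Toy gene-regulatory dynamics: a transcription factor (TF) activates a
# gene whose protein product in turn represses the TF. This Hill-function
# ODE sketch only illustrates the kind of protein dynamics discussed; the
# paper's model instead couples cellular automata with an artificial
# chemistry. All parameter values are arbitrary.

import numpy as np

def hill_act(x, k, n):      # activation: rises with x
    return x**n / (k**n + x**n)

def hill_rep(x, k, n):      # repression: falls with x
    return k**n / (k**n + x**n)

def simulate(steps=5000, dt=0.01, k=1.0, n=4, decay=0.5):
    tf, prot = 1.0, 0.0
    trace = []
    for _ in range(steps):
        d_tf = hill_rep(prot, k, n) - decay * tf    # protein represses TF
        d_prot = hill_act(tf, k, n) - decay * prot  # TF activates the gene
        tf += dt * d_tf
        prot += dt * d_prot
        trace.append((tf, prot))
    return np.array(trace)

trace = simulate()
print("final TF / protein levels:", trace[-1])
```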
The applicability of computational models to the biological world is an active topic of debate. We argue that a useful path forward results from abandoning hard boundaries between categories and adopting an observer-dependent, pragmatic view. Such a view dissolves the contingent dichotomies driven by human cognitive biases (e.g., tendency to oversimplify) and prior technological limitations in favor of a more continuous, gradualist view necessitated by the study of evolution, developmental biology, and intelligent machines. Efforts to re-shape living systems for biomedical or bioengineering purposes require prediction and control of their function at multiple scales. This is challenging for many reasons, one of which is that living systems perform multiple functions in the same place at the same time. We refer to this as "polycomputing" - the ability of the same substrate to simultaneously compute different things. This ability is an important way in which living things are a kind of computer, but not the familiar, linear, deterministic kind; rather, living things are computers in the broad sense of computational materials as reported in the rapidly-growing physical computing literature. We argue that an observer-centered framework for the computations performed by evolved and designed systems will improve the understanding of meso-scale events, as it has already done at quantum and relativistic scales. Here, we review examples of biological and technological polycomputing, and develop the idea that overloading of different functions on the same hardware is an important design principle that helps understand and build both evolved and designed systems. Learning to hack existing polycomputing substrates, as well as evolve and design new ones, will have massive impacts on regenerative medicine, robotics, and computer engineering.
The term "neuromorphic" refers to systems that closely resemble the architecture and/or the dynamics of biological neural networks. Typical examples are novel computer chips designed to mimic the architecture of a biological brain, or sensors that take inspiration from, e.g., the visual or olfactory systems of insects and mammals to acquire information about the environment. This approach is not without ambition, as it promises to enable engineered devices to reproduce the level of performance observed in biological organisms -- the main immediate advantage being the efficient use of scarce resources, which translates into low power requirements. The emphasis on low power and energy efficiency of neuromorphic devices is a perfect match for space applications. Spacecraft -- especially miniaturized ones -- have strict energy constraints, as they need to operate in an environment that is scarce in resources and extremely hostile. In this work we present an overview of early attempts made to study a neuromorphic approach in a space context at the European Space Agency's (ESA) Advanced Concepts Team (ACT).
We propose that the continuity of life and its evolution emerge from an interactive group process termed survival of the fitted. This process replaces the Darwinian struggle of individuals and the survival-of-the-fittest theory as the primary mechanism of evolution. Here, we propose that this natural process is related to the computer science function of autoencoding. Autoencoding is a machine learning technique used to extract a compact representation of the essential features of input data; dimensionality reduction by autoencoding establishes a code that can serve various applications based on decoding the associated data. We make the following points: (1) We define a species by its species interaction code, which consists of the fundamental core interactions of that species with its external and internal environments; the core interactions are encoded by multi-scale networks spanning the molecular-cellular-organismal levels. (2) Evolution proceeds through sustainable variation of species interaction codes; these varying codes both reflect and construct the species' environment. The survival of a species is computed by what we term natural autoencoding: arrays of input interactions generate species codes, which survive by decoding into networks of sustained ecosystem interactions. DNA is only one element of natural autoencoding. (3) Natural autoencoding and artificial autoencoding processes manifest well-defined similarities and differences. Survival by natural autoencoding sheds new light on the mechanism of evolution and explains why a habitable biosphere requires diverse fitted-group interactions.
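For readers unfamiliar with the machine-learning side of the analogy, a minimal linear autoencoder sketch (illustrative only; it is not a model of natural autoencoding itself):

```python
# Minimal autoencoder sketch: compress 8-D inputs to a 2-D code and
# reconstruct them, trained by plain gradient descent on squared error.
# Purely illustrative of the "compact code" idea the abstract draws on.

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8)) @ rng.normal(size=(8, 8))  # correlated data

W_enc = rng.normal(scale=0.1, size=(8, 2))   # encoder weights
W_dec = rng.normal(scale=0.1, size=(2, 8))   # decoder weights
lr = 1e-3
for _ in range(2000):
    Z = X @ W_enc                 # 2-D code (the compact representation)
    X_hat = Z @ W_dec             # reconstruction from the code
    err = X_hat - X
    W_dec -= lr * Z.T @ err / len(X)
    W_enc -= lr * X.T @ (err @ W_dec.T) / len(X)

print("reconstruction MSE:", float(np.mean((X @ W_enc @ W_dec - X) ** 2)))
```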
The origin of life is shrouded in mystery, with few surviving clues, obscured by evolutionary competition. Previous reviews have touched on the complementary approaches of top-down and bottom-up synthetic biology for augmenting our understanding of living systems. Here we point out the synergies between these fields, especially between bottom-up synthetic biology and origin-of-life research. We explore recent progress made on artificial cell compartments in the context of crowded cells, their metabolism, and growth-and-division cycles, and how these efforts are starting to be combined. Though the complexity of current life is among its most striking characteristics, none of life's essential features require it, and they are unlikely to have emerged complex from the beginning. Rather than recovering the one true origin, current research converges on the emergence of minimal life by teasing out the complexity and evolution that a basic set of building blocks might give rise to.
Inspired by biology's most sophisticated computer, the brain, neural networks constitute a profound reformulation of computational principles. Remarkably, analogous high-dimensional, highly interconnected computational architectures also arise within information-processing molecular systems inside living cells, such as signal transduction cascades and genetic regulatory networks. Might collective neuromorphic modes be found more broadly in other physical and chemical processes, even those that ostensibly play non-information-processing roles such as protein synthesis, metabolism, or structural self-assembly? Here we examine nucleation during the self-assembly of multicomponent structures, showing that high-dimensional concentration patterns can be discriminated and classified in a manner similar to neural network computation. Specifically, we design a set of 917 DNA tiles that can self-assemble in three alternative ways, such that competitive nucleation depends sensitively on the extent of co-localization of high-concentration tiles within the three structures. The system was trained to classify a set of 18 grayscale 30 x 30 pixel images into three categories. Experimentally, during and after a 150-hour annealing protocol, fluorescence and atomic force microscopy monitoring confirmed that all trained images were correctly classified, while a set of image variations probed the robustness of the results. While slow compared to previous biochemical neural networks, our approach is surprisingly compact, robust, and scalable. This success suggests that ubiquitous physical phenomena, such as nucleation, may hold powerful information-processing capabilities when scaled up to high-dimensional multicomponent systems.
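The classification-by-nucleation idea can be loosely caricatured in software as a winner-take-all competition between stored templates; the sketch below assumes simple co-localization scores rather than the actual tile thermodynamics:

```python
# Caricature of classification by competitive nucleation: each of three
# "structures" nucleates at a rate that grows with how strongly the
# bright pixels of an input image co-localize with that structure's
# template, and the fastest nucleator wins. A loose software analogy,
# not a model of the DNA tile system.

import numpy as np

rng = np.random.default_rng(1)
templates = rng.random((3, 30, 30)) > 0.5     # three stored binary patterns

def classify(image: np.ndarray) -> int:
    """Return the index of the template whose 'on' tiles best
    co-localize with the bright pixels of a grayscale image in [0, 1]."""
    scores = [(image * t).sum() for t in templates]   # co-localization
    return int(np.argmax(scores))                      # winner-take-all

test = templates[1] * 0.9 + rng.random((30, 30)) * 0.1  # noisy class-1 image
print("classified as:", classify(test))
```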
Agent-based modeling (ABM) is a well-established paradigm for simulating complex systems via interactions between constituent entities. Machine learning (ML) refers to approaches whereby statistical algorithms 'learn' from data on their own, without imposing a priori theories of system behavior. Biological systems -- from molecules, to cells, to entire organisms -- consist of vast numbers of entities, governed by complex webs of interactions that span many spatiotemporal scales and exhibit nonlinearity, stochasticity and intricate coupling between entities. The macroscopic properties and collective dynamics of such systems are difficult to capture via continuum modelling and mean-field formalisms. ABM takes a 'bottom-up' approach that obviates these difficulties by enabling one to easily propose and test a set of well-defined 'rules' to be applied to the individual entities (agents) in a system. Evaluating a system and propagating its state over discrete time-steps effectively simulates the system, allowing observables to be computed and system properties to be analyzed. Because the rules that govern an ABM can be difficult to abstract and formulate from experimental data, there is an opportunity to use ML to help infer optimal, system-specific ABM rules. Once such rule-sets are devised, ABM calculations can generate a wealth of data, and ML can be applied there too -- e.g., to probe statistical measures that meaningfully describe a system's stochastic properties. As an example of synergy in the other direction (from ABM to ML), ABM simulations can generate realistic datasets for training ML algorithms (e.g., for regularization, to mitigate overfitting). In these ways, one can envision various synergistic ABM$\rightleftharpoons$ML loops. This review summarizes how ABM and ML have been integrated in contexts that span spatiotemporal scales, from cellular to population-level epidemiology.
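A minimal sketch of the ABM loop the review describes (rules applied per agent per time-step, observables computed from the resulting states), here for a toy SIR epidemic with arbitrary parameters:

```python
# Minimal agent-based SIR sketch: each agent carries a state, and simple
# rules are applied per time-step, illustrating the 'bottom-up' loop
# (propose rules -> propagate state -> compute observables). The
# transmission and recovery parameters are arbitrary demo values.

import random

random.seed(0)
N, BETA, GAMMA, STEPS = 500, 0.3, 0.1, 100
states = ["S"] * (N - 5) + ["I"] * 5          # 5 initially infected agents

for t in range(STEPS):
    infected = states.count("I")              # observable at step start
    for i, s in enumerate(states):
        if s == "S" and random.random() < BETA * infected / N:
            states[i] = "I"                   # contact rule: S -> I
        elif s == "I" and random.random() < GAMMA:
            states[i] = "R"                   # recovery rule: I -> R

print({k: states.count(k) for k in ("S", "I", "R")})
```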
Understanding how biological neural networks carry out learning using spike-based local plasticity mechanisms can lead to the development of powerful, energy-efficient, and adaptive neuromorphic processing systems. A large number of spike-based learning models have recently been proposed following different approaches. However, it is difficult to assess if and how they could be mapped onto neuromorphic hardware, and to compare their features and ease of implementation. To this end, in this survey, we provide a comprehensive overview of representative brain-inspired synaptic plasticity models and mixed-signal CMOS neuromorphic circuits within a unified framework. We review historical, bottom-up, and top-down approaches to modeling synaptic plasticity, and we identify computational primitives that can support low-latency and low-power hardware implementations of spike-based learning rules. We provide a common definition of a locality principle based on pre- and post-synaptic neuron information, which we propose as a fundamental requirement for physical implementations of synaptic plasticity. Based on this principle, we compare the properties of these models within the same framework, and describe the mixed-signal electronic circuits that implement their computing primitives, pointing out how these building blocks enable efficient on-chip and online learning in neuromorphic processing systems.
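A canonical example of a learning rule obeying this locality principle is pair-based spike-timing-dependent plasticity (STDP), sketched below with illustrative constants:

```python
# Sketch of pair-based spike-timing-dependent plasticity (STDP): the
# weight update uses only the relative timing of one pre- and one
# post-synaptic spike, satisfying the locality principle discussed
# above. Constants are illustrative, not taken from the survey.

import math

A_PLUS, A_MINUS = 0.01, 0.012     # potentiation / depression amplitudes
TAU_PLUS, TAU_MINUS = 20.0, 20.0  # time constants (ms)

def stdp_dw(t_pre: float, t_post: float) -> float:
    """Weight change for a single pre/post spike pair (times in ms)."""
    dt = t_post - t_pre
    if dt > 0:    # pre before post: potentiate
        return A_PLUS * math.exp(-dt / TAU_PLUS)
    else:         # post before pre: depress
        return -A_MINUS * math.exp(dt / TAU_MINUS)

for dt in (-40, -10, 10, 40):
    print(f"dt={dt:+} ms -> dw={stdp_dw(0.0, dt):+.5f}")
```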
Traditional computing systems based on the von Neumann architecture are fundamentally limited for data-intensive workloads and applications such as machine learning. As data movement operations and energy consumption become key bottlenecks in the design of computing systems, interest has grown significantly in unconventional approaches such as near-data processing (NDP) and accelerators for machine learning, particularly neural networks (NNs). Emerging memory technologies such as ReRAM and 3D-stacked memory are promising for efficiently architecting NDP-based accelerators for NNs, owing to their ability to work both as high-density/low-energy storage and as near-memory computation/search engines. In this paper, we present a survey of techniques for designing NDP architectures for NNs. By classifying the techniques based on the memory technology they employ, we highlight their similarities and differences. Finally, we discuss open challenges and future perspectives that need to be explored in order to improve and extend NDP architectures for future computing platforms. This paper will be valuable for computer architects, chip designers, and researchers in the area of machine learning.
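The core computational primitive these architectures exploit can be sketched as an analog in-crossbar matrix-vector multiply; the following is a schematic illustration, not a model of any specific device:

```python
# Sketch of the primitive behind ReRAM-based NN accelerators: a crossbar
# stores a weight matrix as conductances G, and applying input voltages
# V yields output currents I = G @ V in a single analog step (Ohm's and
# Kirchhoff's laws). The noise term stands in for the analog
# non-idealities such architectures must tolerate; values are arbitrary.

import numpy as np

rng = np.random.default_rng(4)
G = rng.uniform(0.0, 1.0, size=(4, 8))        # conductances = weights (4x8)

def crossbar_mvm(v_in, noise=0.02):
    """Analog matrix-vector multiply with read noise."""
    i_out = G @ v_in                           # currents sum along columns
    return i_out + noise * rng.normal(size=i_out.shape)

v = rng.uniform(0.0, 0.5, size=8)              # input voltages
print("ideal: ", G @ v)
print("analog:", crossbar_mvm(v))
```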
In the past decade, we have witnessed the rise of deep learning to dominate the field of artificial intelligence. Advances in artificial neural networks, alongside corresponding advances in hardware accelerators with large memory capacity and the availability of large datasets, have enabled researchers and practitioners to train and deploy sophisticated neural network models that achieve state-of-the-art performance across domains spanning computer vision, natural language processing, and reinforcement learning. However, as these neural networks become bigger, more complex, and more widely used, fundamental problems with current deep learning models have become more apparent. State-of-the-art deep learning models are known to suffer from poor robustness, inability to adapt to new task settings, and a requirement for rigid and inflexible configuration assumptions. Ideas from collective intelligence, in particular concepts from complex systems such as self-organization, emergent behavior, swarm optimization, and cellular systems, tend to produce solutions that are robust, adaptable, and have less rigid assumptions about the environment configuration. It is therefore natural to see these ideas incorporated into newer deep learning methods. In this review, we provide a historical context of neural network research's involvement with complex systems, and highlight several active areas in modern deep learning research that incorporate the principles of collective intelligence to advance its current capabilities. To facilitate a bi-directional flow of ideas, we also discuss work that utilizes modern deep learning models to help advance complex systems research. We hope this review can serve as a bridge between the complex systems and deep learning communities, facilitating the cross-pollination of ideas and fostering new collaborations across disciplines.
Spiking neural networks (SNNs) have attracted extensive attention in brain-inspired artificial intelligence and computational neuroscience. They can be used to simulate biological information processing in the brain at multiple scales. More importantly, SNNs serve as an appropriate level of abstraction for bringing inspiration from the brain and cognition to artificial intelligence. In this paper, we present the Brain-inspired Cognitive Intelligence Engine (BrainCog) for creating brain-inspired AI and brain simulation models. BrainCog incorporates different types of spiking neuron models, learning rules, brain areas, etc., as essential modules provided by the platform. Based on these easy-to-use modules, BrainCog supports various brain-inspired cognitive functions, including perception and learning, decision making, knowledge representation and reasoning, motor control, and social cognition. These brain-inspired AI models have been effectively validated on various supervised, unsupervised, and reinforcement learning tasks, and they can be used to equip AI models with multiple brain-inspired cognitive functions. For brain simulation, BrainCog realizes functional simulations of decision making and working memory, structural simulations of neural circuits, and whole-brain structural simulations of the mouse brain, macaque brain, and human brain. An AI engine named BORN has been developed on top of BrainCog, demonstrating how BrainCog's components can be integrated and used to build AI models and applications. To support the scientific quest to decode the nature of biological intelligence and create AI, BrainCog aims to provide essential and easy-to-use building blocks, as well as infrastructural support, for developing brain-inspired spiking neural network based AI and for simulating the cognitive brain at multiple scales. The online repository of BrainCog can be found at https://github.com/braincog-x.
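The kind of basic spiking-neuron module such a platform provides can be illustrated with a generic discrete-time leaky integrate-and-fire (LIF) neuron; this sketch deliberately does not use BrainCog's actual API:

```python
# Discrete-time leaky integrate-and-fire (LIF) neuron, the kind of basic
# spiking-neuron module a platform like BrainCog provides. This is a
# generic textbook sketch, not BrainCog code.

def lif_run(current, v_rest=0.0, v_th=1.0, v_reset=0.0, tau=10.0, dt=1.0):
    """Integrate an input current trace; return membrane trace and spikes."""
    v, vs, spikes = v_rest, [], []
    for t, i_in in enumerate(current):
        v += dt / tau * (-(v - v_rest) + i_in)   # leaky integration
        if v >= v_th:                            # threshold crossing
            spikes.append(t)
            v = v_reset                          # reset after spike
        vs.append(v)
    return vs, spikes

_, spikes = lif_run([1.5] * 50)   # constant suprathreshold drive
print("spike times:", spikes)
```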
This chapter sheds light on the synaptic organization of the brain from the perspective of computational neuroscience. It provides an introductory overview on how to account for empirical data in mathematical models, implement them in software, and perform simulations reflecting experiments. This path is demonstrated with respect to four key aspects of synaptic signaling: the connectivity of brain networks, synaptic transmission, synaptic plasticity, and the heterogeneity across synapses. Each step and aspect of the modeling and simulation workflow comes with its own challenges and pitfalls, which are highlighted and addressed in detail.
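As a concrete example of one of the four aspects, synaptic transmission is often modeled as an exponentially decaying conductance driving a postsynaptic current; a minimal sketch with placeholder parameters:

```python
# Minimal model of synaptic transmission as used in many simulators: a
# presynaptic spike increments a conductance g, which decays
# exponentially and drives a postsynaptic current I = g * (E_rev - V).
# Parameter values are illustrative placeholders only.

import math

TAU_SYN = 5.0    # conductance decay time constant (ms)
E_REV = 0.0      # reversal potential (mV), excitatory synapse
W = 0.5          # synaptic weight (peak conductance increment, nS)

def syn_current(spike_times, t, v_post):
    """Postsynaptic current at time t given past presynaptic spikes."""
    g = sum(W * math.exp(-(t - ts) / TAU_SYN)
            for ts in spike_times if ts <= t)
    return g * (E_REV - v_post)

print(syn_current([1.0, 3.0], t=4.0, v_post=-65.0))   # demo value
```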
Spiking neural networks (SNNs) offer a new computational paradigm capable of highly parallelized, real-time processing. Photonic devices are ideal for designing the high-bandwidth, parallel architectures that match the SNN computational paradigm. Co-integration of CMOS and photonic elements allows low-loss photonic devices to be combined with analog electronics, providing greater flexibility for nonlinear computational elements. We therefore designed and simulated an optoelectronic spiking neuron circuit on a monolithic silicon photonics (SiPh) process that replicates useful spiking behaviors beyond leaky integrate-and-fire (LIF). Additionally, we explored two learning algorithms with the potential for on-chip learning, using Mach-Zehnder interferometer (MZI) meshes as synaptic interconnects. A variant of random backpropagation (RPB) was demonstrated experimentally on-chip and matched the performance of standard linear regression on a simple classification task. Meanwhile, a contrastive Hebbian learning (CHL) rule was applied to a simulated neural network composed of MZI meshes for a random input-output mapping task. The CHL-trained MZI network performed better than random guessing but did not match the performance of an ideal neural network (without the constraints imposed by the MZI meshes). Through these efforts, we demonstrate that co-integrated CMOS and SiPh technologies are well suited to the design of scalable SNN computing architectures.
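The role of an MZI mesh as a programmable linear (synaptic) layer rests on each MZI implementing a 2x2 unitary; a sketch using one common transfer-matrix convention:

```python
# Sketch of a Mach-Zehnder interferometer (MZI) as a programmable 2x2
# unitary, the building block of the synaptic meshes described above.
# Uses one common convention for the transfer matrix (internal phase
# theta, external phase phi); a mesh composes many such blocks.

import numpy as np

def mzi(theta: float, phi: float) -> np.ndarray:
    """2x2 unitary implemented by one MZI (common convention)."""
    bs = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)     # 50:50 beamsplitter
    inner = np.diag([np.exp(1j * theta), 1.0])         # internal phase shift
    outer = np.diag([np.exp(1j * phi), 1.0])           # external phase shift
    return outer @ bs @ inner @ bs

U = mzi(0.7, 0.3)
print("unitary check:", np.allclose(U.conj().T @ U, np.eye(2)))
```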
The field of artificial intelligence has advanced significantly over the past decades, inspired by discoveries from the fields of biology and neuroscience. The idea of this work is inspired by the process of self-organization of cortical areas in the human brain from both afferent and lateral/internal connections. In this work, we develop an original brain-inspired neural model associating Self-Organizing Maps (SOM) and Hebbian learning in the Reentrant SOM (ReSOM) model. The framework is applied to multimodal classification problems. Compared to existing methods based on unsupervised learning with post-labeling, the model improves on the state-of-the-art results. This work also demonstrates the distributed and scalable nature of the model, through both simulation results and hardware execution on a dedicated FPGA-based platform named SCALP (Self-configurable 3D Cellular Adaptive Platform). SCALP boards can be interconnected in a modular way to support the structure of the neural model. This unified software and hardware approach enables the processing to be scaled and allows information from several modalities to be merged dynamically. Deployment on the hardware boards provides performance results for parallel execution on several devices, with communication between boards over dedicated serial links. The proposed unified architecture, composed of the ReSOM model and the SCALP hardware platform, demonstrates a significant gain in accuracy thanks to the multimodal association, and a good trade-off between latency and power consumption compared to a centralized GPU implementation.
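The two SOM operations the ReSOM model builds on (best-matching-unit search and neighborhood-weighted updates) can be sketched as follows; the Hebbian inter-map association is not reproduced here:

```python
# Minimal self-organizing map (SOM) sketch: find the best-matching unit
# (BMU) for an input, then pull the BMU's grid neighborhood toward that
# input. Generic textbook SOM, not the ReSOM implementation.

import numpy as np

rng = np.random.default_rng(2)
GRID, DIM = 8, 3
weights = rng.random((GRID, GRID, DIM))          # 8x8 map of 3-D prototypes
coords = np.stack(np.meshgrid(np.arange(GRID), np.arange(GRID),
                              indexing="ij"), axis=-1)

def train_step(x, lr=0.1, sigma=1.5):
    dist = np.linalg.norm(weights - x, axis=-1)
    bmu = np.unravel_index(np.argmin(dist), dist.shape)   # best-matching unit
    grid_d2 = ((coords - np.array(bmu)) ** 2).sum(axis=-1)
    h = np.exp(-grid_d2 / (2 * sigma**2))                 # neighborhood kernel
    weights[...] = weights + lr * h[..., None] * (x - weights)

for _ in range(500):
    train_step(rng.random(DIM))
print("prototype at (0, 0):", weights[0, 0])
```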
In the brain, information is encoded, transmitted, and used to inform behaviour at the level of the timing of action potentials distributed over populations of neurons. To implement neural-like systems in silico, to emulate neural function, and to interface successfully with the brain, neuromorphic circuits need to encode information in a way compatible with that used by populations of neurons in the brain. To facilitate the cross-talk between neuromorphic engineering and neuroscience, in this Review we first critically examine and summarize emerging recent findings about how populations of neurons encode and transmit information. We examine the effects on the encoding and readout of information of different features of neural population activity, namely the sparseness of neural representations, the heterogeneity of neural properties, the correlations among neurons, and the timescales (from short to long) at which neurons encode information and maintain it consistently over time. Finally, we critically elaborate on how these facts constrain the design of information coding in neuromorphic circuits. We focus primarily on the implications for designing neuromorphic circuits that communicate with the brain, as in this case it is essential that artificial and biological neurons use compatible neural codes. However, we also discuss implications for the design of neuromorphic systems for the implementation or emulation of neural computation.
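As a concrete example of population-level encoding and readout, a population-vector decoder over noisy tuning curves (illustrative parameters only):

```python
# Sketch of population coding and readout: neurons with bell-shaped
# tuning curves fire noisily in response to a stimulus, and a simple
# population-vector readout recovers the stimulus from the spike counts.
# Illustrative of the encoding/readout concepts in the Review.

import numpy as np

rng = np.random.default_rng(5)
prefs = np.linspace(-np.pi, np.pi, 32, endpoint=False)  # preferred angles

def responses(stim, gain=20.0):
    """Poisson spike counts from von Mises-like tuning curves."""
    rates = gain * np.exp(np.cos(prefs - stim) - 1.0)
    return rng.poisson(rates)

def population_vector(counts):
    """Decode the stimulus as the resultant of preferred directions."""
    return np.angle(np.sum(counts * np.exp(1j * prefs)))

stim = 0.8
est = population_vector(responses(stim))
print(f"true: {stim:.3f}, decoded: {est:.3f}")
```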
With recent advances in terahertz (THz) signal generation and radiation methods, joint communications and sensing applications are shaping the future of wireless systems. Towards this end, THz spectroscopy is expected to be carried out on user equipment devices to identify materials and gaseous components of interest. THz-specific signal processing techniques should complement this renewed interest in THz sensing for the efficient utilization of the THz band. In this paper, we present an overview of these techniques, with an emphasis on signal pre-processing (standard normal variate normalization, min-max normalization, and Savitzky-Golay filtering), feature extraction (principal component analysis, partial least squares, t-distributed stochastic neighbor embedding, and nonnegative matrix factorization), and classification techniques (support vector machines, k-nearest neighbors, discriminant analysis, and naive Bayes). We also address the effectiveness of deep learning techniques by exploring their promising sensing capabilities at the THz band. Lastly, we investigate the performance and complexity trade-offs of the studied methods in the context of joint communications and sensing; we motivate the corresponding use cases, and we present future research directions in the field.
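The surveyed pre-processing, feature-extraction, and classification chain can be sketched end-to-end on synthetic stand-in spectra; the library calls are standard scipy/scikit-learn, while the data are fabricated for the demo:

```python
# Sketch of the pre-processing -> feature-extraction -> classification
# chain surveyed above, on synthetic stand-in "spectra": standard normal
# variate (SNV) normalization, Savitzky-Golay smoothing, PCA features,
# and an SVM classifier. Requires numpy, scipy, and scikit-learn.

import numpy as np
from scipy.signal import savgol_filter
from sklearn.decomposition import PCA
from sklearn.svm import SVC

rng = np.random.default_rng(3)

def make_spectra(peak, n):
    """Synthetic absorption spectra with a Gaussian peak plus noise."""
    base = np.exp(-0.5 * ((np.arange(128) - peak) / 6.0) ** 2)
    return base + 0.05 * rng.normal(size=(n, 128))

X = np.vstack([make_spectra(40, 50), make_spectra(80, 50)])
y = np.array([0] * 50 + [1] * 50)

X = (X - X.mean(axis=1, keepdims=True)) / X.std(axis=1, keepdims=True)  # SNV
X = savgol_filter(X, window_length=11, polyorder=3, axis=1)   # smoothing
feats = PCA(n_components=5).fit_transform(X)                  # features

clf = SVC(kernel="rbf").fit(feats[::2], y[::2])               # train on half
print("held-out accuracy:", clf.score(feats[1::2], y[1::2]))
```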
Problems of transportation facilities on the lunar surface related to science, engineering, architecture, and human factors are discussed. Logistic decisions made in the coming decade may be critical to financial success. In addition to outlining some of the problems and their relation to mathematics and computation, this paper provides useful resources for decision-makers, scientists, and engineers.
Guillain-Barre syndrome is a rare neurological disorder in which the human immune system attacks the peripheral nervous system. The peripheral nervous system can appear as a diffusively coupled system of mathematical neuron models, where the period of the overall system is shorter than the period of each individual neural circuit. Stimuli along the conduction pathway are received by axons whose myelin sheaths have lost their function and are relayed onward to the target organ; our aim is to address this reduced nerve conduction. In a neuron simulation environment, neuron models can be created and the biophysical events occurring within the system can be defined. In such an environment, signal transmission between cells and dendrites is visualized graphically. The simulated potassium and sodium conductances are adequately replicated, and the electronic action potentials are comparable to experimentally measured ones. In this work, we present an analog and digital coupled neuron model comprising individual excitatory and inhibitory neural circuit blocks for a low-cost and energy-efficient system. Compared to the digital design, our analog design has lower performance but consumes 32.3% less energy. Hence, the resulting coupled analog hardware neuron model can serve as a model of reduced nerve conduction. As a result, the analog coupled neuron, even with its greater design complexity, is a contender for future wearable sensor devices that could aid in the treatment of Guillain-Barre syndrome and other neurological disorders.
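The potassium and sodium conductances mentioned follow the classic Hodgkin-Huxley form; a generic textbook sketch (not the paper's circuit design):

```python
# The potassium and sodium conductances mentioned above follow the
# classic Hodgkin-Huxley form: g_Na = gbar_Na * m^3 * h and
# g_K = gbar_K * n^4, with gating variables m, h, n in [0, 1].
# Generic textbook constants; this is not the paper's circuit.

GBAR_NA, GBAR_K = 120.0, 36.0    # maximal conductances (mS/cm^2)
E_NA, E_K = 50.0, -77.0          # reversal potentials (mV)

def ionic_currents(v, m, h, n):
    """Sodium and potassium currents for membrane potential v (mV)."""
    g_na = GBAR_NA * m**3 * h    # sodium: activation m, inactivation h
    g_k = GBAR_K * n**4          # potassium: activation n
    return g_na * (v - E_NA), g_k * (v - E_K)

i_na, i_k = ionic_currents(v=-65.0, m=0.05, h=0.6, n=0.32)
print(f"I_Na = {i_na:.3f}, I_K = {i_k:.3f}  (uA/cm^2)")
```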