Decision-making by artificial neural networks with minimal latency is paramount for many applications such as navigation, tracking, and real-time machine-action systems. This requires machine-learning hardware to handle multidimensional data at high throughput. Unfortunately, processing of the convolution operation, the primary computational tool for data-classification tasks, obeys a challenging run-time complexity scaling law. However, implementing the convolution theorem in a Fourier-optic display-light processor enables a non-iterative O(1) runtime complexity for data inputs beyond 1,000 × 1,000 large matrices. Following this approach, here we demonstrate data-streaming, multi-kernel image batch-processing with a Fourier convolutional neural network (FCNN) accelerator. We show image batch-processing of large-scale matrices as passive 20-million dot-product multiplications performed by digital light-processing modules in the Fourier domain. In addition, we parallelize this optical FCNN system further by exploiting multiple spatio-temporal diffraction orders, thereby achieving a 98-fold throughput improvement over state-of-the-art FCNN accelerators. A comprehensive discussion of the practical challenges of working at the edge of the system's capabilities highlights the issues of crosstalk in the Fourier domain and resolution scaling laws. Accelerating convolutions by exploiting the massive parallelism of display technology opens a path to non-von-Neumann machine-learning acceleration.
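The constant-time behaviour claimed above rests on the convolution theorem; the following is a minimal NumPy sketch (a digital stand-in under assumed array sizes, not the authors' optical setup) showing that an element-wise product of Fourier spectra reproduces the spatial convolution that the display-light processor evaluates passively in a single pass.

```python
import numpy as np
from scipy.signal import convolve2d

# Convolution theorem: conv(image, kernel) == IFFT(FFT(image) * FFT(kernel)).
# Sizes are illustrative; the optical processor performs the transforms and the
# element-wise product in free space, at the time of flight of light.
rng = np.random.default_rng(0)
image = rng.random((1024, 1024))
kernel = rng.random((3, 3))

# Zero-pad so the circular FFT convolution matches the linear convolution.
out_shape = (image.shape[0] + kernel.shape[0] - 1,
             image.shape[1] + kernel.shape[1] - 1)
conv_fft = np.real(np.fft.ifft2(np.fft.fft2(image, s=out_shape) *
                                np.fft.fft2(kernel, s=out_shape)))

ref = convolve2d(image, kernel, mode="full")   # direct spatial convolution
print(np.allclose(conv_fft, ref))              # True (up to floating-point error)
```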
As deep neural networks (DNNs) grow to address increasingly complex problems, they are becoming limited by the latency and power consumption of existing digital processors. To improve speed and energy efficiency, specialized analog optical and electronic hardware has been proposed, but with limited scalability (input vector lengths $K$ of hundreds of elements). Here, we present a scalable, single-layer analog optical processor that uses free-space optics for reconfigurable input vectors and integrated optoelectronics for static, updatable weighting and the nonlinearity, with $K \approx 1{,}000$ and beyond. We experimentally test the classification accuracy on the MNIST handwritten-digit dataset, achieving 94.7% (ground truth 96.3%) without data preprocessing or re-processing on the hardware. We also determine the fundamental upper bound on throughput ($\sim$0.9 exaMAC/s), set by the maximum optical bandwidth before substantially increased error. The combination of wide spectral and spatial bandwidths in our CMOS-compatible system can enable highly efficient computing for next-generation DNNs.
The last few years have seen a lot of work to address the challenge of low-latency and high-throughput convolutional neural network inference. Integrated photonics has the potential to dramatically accelerate neural networks because of its low-latency nature. Combined with the concept of the Joint Transform Correlator (JTC), the computationally expensive convolution functions can be computed instantaneously (time of flight of light) at almost no cost. This 'free' convolution computation provides the theoretical basis of the proposed PhotoFourier JTC-based CNN accelerator. PhotoFourier addresses a myriad of challenges posed by on-chip photonic computing in the Fourier domain, including 1D lenses and high-cost optoelectronic conversions. The proposed PhotoFourier accelerator achieves more than 28X better energy-delay product compared to state-of-the-art photonic neural network accelerators.
Optical imaging is commonly used for both scientific and technological applications across industry and academia. In image sensing, a measurement, such as of an object's position, is performed via computational analysis of a digitized image. An emerging image-sensing paradigm breaks this delineation between data collection and analysis by designing optical components that perform not imaging, but encoding. By optically encoding images into a compressed, low-dimensional latent space suitable for efficient post-analysis, these image sensors can operate with fewer pixels and fewer photons, allowing higher-throughput, lower-latency operation. Optical neural networks (ONNs) offer a platform for processing data in the analog, optical domain. ONN-based sensors, however, have so far been limited to linear processing, but nonlinearity is a prerequisite of depth, and multilayer NNs significantly outperform shallow ones on many tasks. Here, we realize a multilayer ONN preprocessor for image sensing, using a commercial image intensifier as a parallel optoelectronic, optical-to-optical nonlinear activation function. We demonstrate that the nonlinear ONN preprocessor can achieve compression ratios of up to 800:1 while still enabling high accuracy across several representative computer-vision tasks, including machine-vision benchmarks, flow-cytometry image classification, and recognition of objects in scenes. In all cases, we find that the ONN's nonlinearity and depth allow it to outperform a purely linear ONN encoder. Although our experiments were specialized to ONN sensors for optical images, alternative ONN platforms should facilitate a wide range of ONN sensors. These ONN sensors may surpass conventional sensors by preprocessing optical information in the spatial, temporal, and/or spectral dimensions, potentially with coherent and quantum qualities, all natively in the optical domain.
As deep learning has shown revolutionary performance in many artificial intelligence applications, its escalating computation demands require hardware accelerators with massive parallelism and improved throughput. Optical neural networks (ONNs) are promising candidates for next-generation neuromorphic computing due to their high parallelism, low latency, and low energy consumption. Here, we design a hardware-efficient photonic subspace neural network (PSNN) architecture that targets lower optical-component usage, area cost, and energy consumption than previous ONN architectures with comparable task performance. Additionally, a hardware-aware training framework is provided to minimize the required device programming precision, reduce the chip area, and boost noise robustness. We experimentally demonstrate our PSNN on a butterfly-style programmable silicon photonic integrated circuit and show its utility in practical image-recognition tasks.
Deep neural networks (DNNs) are currently widely used for many artificial intelligence (AI) applications including computer vision, speech recognition, and robotics. While DNNs deliver state-of-the-art accuracy on many AI tasks, it comes at the cost of high computational complexity. Accordingly, techniques that enable efficient processing of DNNs to improve energy efficiency and throughput without sacrificing application accuracy or increasing hardware cost are critical to the wide deployment of DNNs in AI systems. This article aims to provide a comprehensive tutorial and survey about the recent advances towards the goal of enabling efficient processing of DNNs. Specifically, it will provide an overview of DNNs, discuss various hardware platforms and architectures that support DNNs, and highlight key trends in reducing the computation cost of DNNs either solely via hardware design changes or via joint hardware design and DNN algorithm changes. It will also summarize various development resources that enable researchers and practitioners to quickly get started in this field, and highlight important benchmarking metrics and design considerations that should be used for evaluating the rapidly growing number of DNN hardware designs, optionally including algorithmic co-designs, being proposed in academia and industry. The reader will take away the following concepts from this article: understand the key design considerations for DNNs; be able to evaluate different DNN hardware implementations with benchmarks and comparison metrics; understand the trade-offs between various hardware architectures and platforms; be able to evaluate the utility of various DNN design techniques for efficient processing; and understand recent implementation trends and opportunities.
This paper provides an up-to-date overview of how to architect, design, and optimize deep neural networks (DNNs) to improve performance while preserving accuracy. The paper covers a set of optimizations that span the whole machine-learning processing pipeline. We introduce two types of optimizations: the first modifies the DNN model and requires retraining, while the second does not. We focus on GPU optimizations, but we believe the techniques presented can be used with other AI inference platforms. To demonstrate the DNN model optimizations, we improve one of the state-of-the-art deep network architectures for optical flow, RAFT (arXiv:2003.12039), on a popular edge AI inference platform (Nvidia Jetson AGX Xavier).
Deploying modern TinyML tasks on small, battery-constrained IoT devices requires high computational energy efficiency. Analog in-memory computing (IMC) using non-volatile memory (NVM) promises major efficiency improvements in deep neural network (DNN) inference and serves as on-chip memory storage for DNN weights. However, IMC's functional flexibility limitations and their impact on performance, energy, and area efficiency are not yet fully understood at the system level. To target practical end-to-end IoT applications, IMC arrays must be enclosed in heterogeneous programmable systems, introducing new system-level challenges that we aim to address in this work. We present a heterogeneous, tightly coupled cluster architecture integrating 8 RISC-V cores, an in-memory computing accelerator (IMA), and digital accelerators. We benchmark the system on a highly heterogeneous workload such as the bottleneck layer from MobileNetV2, showing 11.5x performance and 9.5x energy-efficiency improvements compared to highly optimized parallel execution on the cores. Furthermore, by scaling our heterogeneous architecture to a multi-array accelerator, we explore the requirements for end-to-end inference of a full mobile-grade DNN (MobileNetV2) in terms of IMC array resources. Our results show that, for end-to-end inference of MobileNetV2, our solution is better in terms of execution latency than existing programmable architectures, and orders of magnitude better than state-of-the-art heterogeneous solutions integrating in-memory computing analog cores.
The ever-growing deep learning technologies are making revolutionary changes for modern life. However, conventional computing architectures are designed to process sequential and digital programs, being extremely burdened with performing massively parallel and adaptive deep learning applications. Photonic integrated circuits provide an efficient approach to mitigate bandwidth limitations and the power-wall brought by its electronic counterparts, showing great potential in ultrafast and energy-free high-performance computing. Here, we propose an optical computing architecture enabled by on-chip diffraction to implement convolutional acceleration, termed optical convolution unit (OCU). We demonstrate that any real-valued convolution kernels can be exploited by the OCU with a prominent computational throughput boost via the concept of structural re-parameterization. With the OCU as the fundamental unit, we build an optical convolutional neural network (oCNN) to implement two popular deep learning tasks: classification and regression. For classification, the Fashion-MNIST and CIFAR-4 datasets are tested with accuracies of 91.63% and 86.25%, respectively. For regression, we build an optical denoising convolutional neural network (oDnCNN) to handle Gaussian noise in grayscale images with noise levels $\sigma$ = 10, 15, and 20, resulting in clean images with average PSNRs of 31.70 dB, 29.39 dB, and 27.72 dB, respectively. The proposed OCU presents remarkable performance of low energy consumption and high information density due to its fully passive nature and compact footprint, providing a highly parallel yet lightweight solution for future computing architectures to handle high-dimensional tensors in deep learning.
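As a side note on the structural re-parameterization mentioned above, the following toy 1-D sketch (an illustrative assumption, not the OCU hardware or the paper's kernels) shows the underlying identity: by linearity of convolution, parallel trained branches can be folded into one equivalent kernel, so a passive optical unit only needs to realize a single convolution at inference time.

```python
import numpy as np

# Two trained branches (e.g. a 3-tap kernel plus an identity-like branch)
k1 = np.array([0.2, -0.5, 0.3])
k2 = np.array([0.0, 1.0, 0.0])
x = np.random.default_rng(1).random(16)

branch_sum = np.convolve(x, k1, mode="same") + np.convolve(x, k2, mode="same")
fused = np.convolve(x, k1 + k2, mode="same")   # single re-parameterized kernel
print(np.allclose(branch_sum, fused))          # True: the branches collapse exactly
```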
While machine learning is traditionally a resource-intensive task, embedded systems, autonomous navigation, and the vision of the Internet of Things fuel the interest in resource-efficient approaches. These approaches aim for a carefully chosen trade-off between performance and resource consumption in terms of computation and energy. The development of such approaches is among the major challenges in current machine learning research and key to ensuring a smooth transition of machine learning technology from a scientific environment with virtually unlimited computing resources into everyday applications. In this article, we provide an overview of the current state of the art of machine learning techniques facilitating these real-world requirements. In particular, we focus on deep neural networks (DNNs), the predominant machine learning models of the past decade. We give a comprehensive overview of the vast literature that can be mainly split into three non-mutually exclusive categories: (i) quantized neural networks, (ii) network pruning, and (iii) structural efficiency. These techniques can be applied during training or as post-processing, and they are widely used to reduce the computational demands in terms of memory footprint, inference speed, and energy efficiency. We also briefly discuss different concepts of embedded hardware for DNNs and their compatibility with machine learning techniques as well as potential for energy and latency reduction. We substantiate our discussion with experiments on well-known benchmark datasets using compression techniques (quantization, pruning) for a set of resource-constrained embedded systems, such as CPUs, GPUs and FPGAs. The obtained results highlight the difficulty of finding good trade-offs between resource efficiency and predictive performance.
The limited space-bandwidth product (SBP) of wavefront modulators hinders the high-resolution synthesis/projection of images over a large field-of-view (FOV). We report a deep learning-enabled diffractive display design based on a jointly trained pair of an electronic encoder and a diffractive optical decoder to synthesize/project super-resolved images using low-resolution wavefront modulators. The digital encoder, composed of a trained convolutional neural network (CNN), rapidly pre-processes the high-resolution images of interest so that their spatial information is encoded into low-resolution (LR) modulation patterns, projected via a low-SBP wavefront modulator. The diffractive decoder processes this LR-encoded information using thin transmissive layers structured through deep learning to all-optically synthesize and project super-resolved images at its output FOV. Our results indicate that this diffractive image display can achieve a super-resolution factor of ~4, demonstrating a ~16-fold increase in SBP. We also experimentally validate the success of this diffractive super-resolution display using 3D-printed diffractive decoders operating in the THz spectrum. This diffractive image decoder can be scaled to operate at visible wavelengths and inspire the design of compact, low-power, and computationally efficient large-FOV, high-resolution displays.
Most of today's computer-vision pipelines are built around deep neural networks, where convolution operations account for most of the computational effort. The Winograd convolution algorithm computes convolutions with fewer MACs than the standard algorithm, reducing the operation count by 2.25x for 3x3 convolutions when using the version with 2x2-sized tiles, $F_2$. Even though the gain is significant, the Winograd algorithm with larger tile sizes, i.e., $F_4$, has even greater potential to boost throughput and energy efficiency, as it reduces the required MACs by 4x. Unfortunately, the Winograd algorithm with larger tile sizes introduces numerical issues that prevent its use on integer domain-specific accelerators, as well as a higher computational overhead to transform input and output data between the spatial and Winograd domains. To unlock the full potential of Winograd $F_4$, we propose a novel tap-wise quantization method that overcomes the numerical issues of using larger tiles, enabling integer-only inference. Moreover, we present custom hardware units that process the Winograd transformations in a power- and area-efficient way, and we show how such custom modules can be integrated into an industrial-grade, programmable DSA. An extensive experimental evaluation on a large set of state-of-the-art computer vision benchmarks reveals that the tap-wise quantization algorithm makes the quantized Winograd $F_4$ network almost as accurate as the FP32 baseline. The Winograd-enhanced DSA achieves up to 1.85x gain in energy efficiency and up to 1.83x end-to-end speed-up for state-of-the-art segmentation and detection networks.
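Where the 2.25x figure comes from can be seen in a short worked sketch of the 1-D Winograd transform F(2,3) (the standard textbook form, given here as an illustration rather than the paper's tap-wise quantized $F_4$): two outputs of a 3-tap filter are obtained from 4 multiplications instead of 6, and nesting the same construction in 2-D yields a 2x2 output tile from 16 multiplications instead of 36.

```python
import numpy as np

def winograd_f23(d, g):
    """F(2,3): 2 outputs of a 3-tap correlation from 4 multiplications.
    The divisions by 2 belong to the filter transform and are precomputed
    once per kernel, so they do not count against the per-tile MACs."""
    m1 = (d[0] - d[2]) * g[0]
    m2 = (d[1] + d[2]) * (g[0] + g[1] + g[2]) / 2
    m3 = (d[2] - d[1]) * (g[0] - g[1] + g[2]) / 2
    m4 = (d[1] - d[3]) * g[2]
    return np.array([m1 + m2 + m3, m2 - m3 - m4])

d = np.array([1.0, 2.0, 3.0, 4.0])              # 4 input samples
g = np.array([0.5, -1.0, 2.0])                  # 3 filter taps
direct = np.array([d[0:3] @ g, d[1:4] @ g])     # direct form: 6 multiplications
print(np.allclose(winograd_f23(d, g), direct))  # True, with only 4 multiplications
```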
The term ``neuromorphic'' refers to systems that are closely resembling the architecture and/or the dynamics of biological neural networks. Typical examples are novel computer chips designed to mimic the architecture of a biological brain, or sensors that get inspiration from, e.g., the visual or olfactory systems in insects and mammals to acquire information about the environment. This approach is not without ambition as it promises to enable engineered devices able to reproduce the level of performance observed in biological organisms -- the main immediate advantage being the efficient use of scarce resources, which translates into low power requirements. The emphasis on low power and energy efficiency of neuromorphic devices is a perfect match for space applications. Spacecraft -- especially miniaturized ones -- have strict energy constraints as they need to operate in an environment which is scarce with resources and extremely hostile. In this work we present an overview of early attempts made to study a neuromorphic approach in a space context at the European Space Agency's (ESA) Advanced Concepts Team (ACT).
In this work, we demonstrate the offline FPGA realization of both recurrent and feedforward neural network (NN)-based equalizers for nonlinearity compensation in coherent optical transmission systems. First, we present a realization pipeline showing the conversion of the models from Python libraries to the FPGA chip synthesis and implementation. Then, we review the main alternatives for the hardware implementation of nonlinear activation functions. The main results are divided into three parts: a performance comparison, an analysis of how activation functions are implemented, and a report on the complexity of the hardware. The performance in Q-factor is presented for the cases of bidirectional long-short-term memory coupled with convolutional NN (biLSTM + CNN) equalizer, CNN equalizer, and standard 1-StpS digital back-propagation (DBP) for the simulation and experiment propagation of a single channel dual-polarization (SC-DP) 16QAM at 34 GBd along 17x70km of LEAF. The biLSTM+CNN equalizer provides a similar result to DBP and a 1.7 dB Q-factor gain compared with the chromatic dispersion compensation baseline in the experimental dataset. After that, we assess the Q-factor and the impact of hardware utilization when approximating the activation functions of NN using Taylor series, piecewise linear, and look-up table (LUT) approximations. We also show how to mitigate the approximation errors with extra training and provide some insights into possible gradient problems in the LUT approximation. Finally, to evaluate the complexity of hardware implementation to achieve 400G throughput, fixed-point NN-based equalizers with approximated activation functions are developed and implemented in an FPGA.
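For readers unfamiliar with the LUT option discussed above, the sketch below shows the general idea of replacing an activation function with a uniform look-up table; the table size, input range, and nearest-entry rounding are illustrative assumptions, not the paper's FPGA configuration.

```python
import numpy as np

LUT_BITS = 6                       # 64-entry table stored in on-chip memory
X_MIN, X_MAX = -4.0, 4.0           # clip range; tanh is nearly saturated outside

grid = np.linspace(X_MIN, X_MAX, 2 ** LUT_BITS)
lut = np.tanh(grid)                # precomputed activation values

def tanh_lut(x):
    """Nearest-entry LUT approximation of tanh."""
    x = np.clip(x, X_MIN, X_MAX)
    idx = np.round((x - X_MIN) / (X_MAX - X_MIN) * (2 ** LUT_BITS - 1)).astype(int)
    return lut[idx]

x = np.linspace(-6, 6, 1001)
print(np.max(np.abs(tanh_lut(x) - np.tanh(x))))   # worst-case approximation error
```

As the abstract notes, extra training after swapping in such an approximation can recover part of the Q-factor lost to the coarser activation.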
The field of artificial intelligence (AI) has witnessed tremendous growth in recent years; however, some of the most pressing challenges for its continued development are the fundamental bandwidth, energy-efficiency, and speed limitations faced by electronic computing architectures. There has been growing interest in using photonic processors to perform neural network inference operations, but these networks are currently trained using standard digital electronics. Here, we propose on-chip training of neural networks enabled by a CMOS-compatible silicon photonic architecture to harness the potential for massively parallel, efficient, and fast data operations. Our scheme employs the direct feedback alignment training algorithm, which trains neural networks using error feedback rather than error backpropagation, and can operate at speeds of trillions of multiply-accumulate (MAC) operations per second while consuming less than one picojoule per MAC operation. The photonic architecture exploits parallelized matrix-vector multiplications using arrays of microring resonators to process multi-channel analog signals along single waveguide buses, computing the gradient vector for each neural network layer in situ, which is the most computationally expensive operation performed during the backward pass. We also experimentally demonstrate training a deep neural network on the MNIST dataset using on-chip MAC operation results. Our novel approach for efficient, ultra-fast neural network training showcases photonics as a promising platform for executing AI applications.
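A compact NumPy sketch of direct feedback alignment (DFA) may help make the training rule above concrete; the network size, learning rate, and single-sample task are assumptions for illustration, not the on-chip photonic implementation. The key difference from backpropagation is that the hidden-layer error is obtained by projecting the output error through a fixed random matrix rather than through the transposed forward weights.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out, lr = 8, 16, 4, 0.1

W1 = 0.1 * rng.normal(size=(n_hid, n_in))     # forward weights, layer 1
W2 = 0.1 * rng.normal(size=(n_out, n_hid))    # forward weights, layer 2
B1 = 0.1 * rng.normal(size=(n_hid, n_out))    # fixed random feedback matrix (DFA)

x = rng.normal(size=n_in)
target = np.eye(n_out)[1]                     # toy one-hot target

def loss():
    return float(np.sum((W2 @ np.tanh(W1 @ x) - target) ** 2))

print("loss before:", loss())
for _ in range(300):
    h = np.tanh(W1 @ x)                       # forward pass
    e = W2 @ h - target                       # output error
    e_hid = (B1 @ e) * (1.0 - h ** 2)         # DFA: random projection, not W2.T @ e
    W2 -= lr * np.outer(e, h)
    W1 -= lr * np.outer(e_hid, x)
print("loss after:", loss())                  # typically falls close to zero
```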
In this paper, a new methodology is proposed that allows for the low-complexity development of neural network (NN)-based equalizers for the mitigation of impairments in high-speed coherent optical transmission systems. In this work, we provide a comprehensive description and comparison of various deep model compression approaches that have been applied to feed-forward and recurrent NN designs. Additionally, we evaluate the influence of these strategies on the performance of each NN equalizer. Quantization, weight clustering, pruning, and other cutting-edge strategies for model compression are considered. In this work, we propose and evaluate a Bayesian-optimization-assisted compression, in which the hyperparameters of the compression are chosen to simultaneously reduce complexity and improve performance. In conclusion, the trade-off between the complexity of each compression approach and its performance is evaluated using both simulated and experimental data to complete the analysis. By utilizing the optimal compression approach, we show that it is possible to design an NN-based equalizer that achieves better performance than the conventional digital back-propagation (DBP) equalizer with only one step per span. This is accomplished by reducing the number of multipliers used in the NN equalizer after applying the weight clustering and pruning algorithms. Furthermore, we demonstrate that an NN-based equalizer can also achieve superior performance while still maintaining the same degree of complexity as the full electronic chromatic dispersion compensation block. We conclude the analysis by highlighting open questions and existing challenges, as well as future research directions.
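Two of the compression steps compared above can be illustrated on a toy weight matrix; the sparsity level, cluster count, and the use of scikit-learn's KMeans are assumptions for illustration, not the paper's Bayesian-optimized settings. Pruning removes small-magnitude weights, and weight clustering forces the survivors onto a few shared values, which is what reduces the number of distinct multipliers in the equalizer.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 64))                          # toy equalizer weight matrix

# Magnitude pruning: keep only the largest 30% of weights.
threshold = np.quantile(np.abs(W), 0.70)
W_pruned = np.where(np.abs(W) >= threshold, W, 0.0)

# Weight clustering: quantize the surviving weights to k shared centroids.
k = 8
nonzero = W_pruned[W_pruned != 0].reshape(-1, 1)
centroids = KMeans(n_clusters=k, n_init=10, random_state=0).fit(nonzero).cluster_centers_.ravel()
nearest = np.argmin(np.abs(W_pruned[..., None] - centroids), axis=-1)
W_clustered = np.where(W_pruned != 0, centroids[nearest], 0.0)

print("sparsity:", float(np.mean(W_clustered == 0)))                        # ~0.70
print("distinct non-zero weight values:", len(np.unique(W_clustered)) - 1)  # <= k
```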
Capsule Networks (CapsNets) are an emerging trend in image processing. In contrast to convolutional neural networks, CapsNets are not vulnerable to object deformation, as the relative spatial information of objects is preserved throughout the network. However, their high complexity, mainly related to the capsule structure and the dynamic routing mechanism, makes it almost unreasonable to deploy CapsNets, in their original form, on devices powered by small microcontroller units (MCUs). In an era in which intelligence is rapidly shifting from the cloud to the edge, this high complexity poses a serious challenge to the adoption of CapsNets at the edge. To address this issue, we present an API for the execution of quantized CapsNets on ARM Cortex-M and RISC-V MCUs. Our software kernels extend ARM CMSIS-NN and RISC-V PULP-NN to support capsule operations with 8-bit integer operands. Along with them, we propose a framework to perform post-training quantization of CapsNets. Results show a reduction in memory footprint of almost 75%, with accuracy losses ranging from 0.07% to 0.18%. In terms of throughput, our ARM Cortex-M API enables the execution of a medium-sized capsule and a capsule layer in just 119.94 and 90.60 milliseconds (ms), respectively (STM32H755ZIT6U, Cortex-M7 @ 480 MHz). For the GAP-8 SoC (RISC-V RV32IMCXpulp @ 170 MHz), the latencies drop to 7.02 and 38.03 ms, respectively.
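The post-training quantization step mentioned above can be sketched in a few lines; the symmetric, per-tensor max-abs scale used here is a common default and an assumption for illustration, not necessarily the framework's exact calibration scheme. Storing weights as 8-bit integers is what yields the roughly 4x (about 75%) memory-footprint reduction reported.

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor quantization of a float tensor to int8."""
    scale = np.max(np.abs(w)) / 127.0
    q = np.clip(np.round(w / scale), -128, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=(8, 16)).astype(np.float32)   # toy capsule weights
q, s = quantize_int8(w)
print("max abs error:", float(np.max(np.abs(dequantize(q, s) - w))))  # ~scale / 2
print("bytes: %d (float32) -> %d (int8)" % (w.nbytes, q.nbytes))      # 4x smaller
```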
Data-intensive workloads and applications, such as machine learning, are fundamentally limited by traditional computing systems based on the von Neumann architecture. As data-movement operations and energy consumption become key bottlenecks in the design of computing systems, interest in unconventional approaches such as near-data processing (NDP) and accelerators for machine learning, in particular neural networks (NNs), has grown significantly. Emerging memory technologies, such as ReRAM and 3D-stacked memory, are promising for efficiently architecting NN-based accelerators because of their capability to work as both high-density/low-energy storage and near-memory computation/search engines. In this paper, we present a survey of techniques for designing NDP architectures for NNs. By classifying the techniques based on the memory technology employed, we underscore their similarities and differences. Finally, we discuss open challenges and future perspectives that need to be explored in order to improve and extend NDP architectures for future computing platforms. This paper will be valuable to computer architects, chip designers, and researchers in the field of machine learning.
Deep learning is highly pervasive in today's data-intensive era. In particular, convolutional neural networks (CNNs) are widely adopted in a variety of fields for their superior accuracy. However, computing deep CNNs on traditional CPUs and GPUs brings several performance and energy pitfalls. Several novel approaches based on ASICs, FPGAs, and resistive-memory devices have recently been demonstrated with promising results. Most of them target only the inference (testing) phase of deep learning. There have been very limited attempts to design full-fledged deep learning accelerators capable of both training and inference, owing to the highly compute- and memory-intensive nature of the training phase. In this paper, we propose LiteCon, a novel analog photonic CNN accelerator. LiteCon employs silicon microdisk-based convolution, memristor-based memory, and dense wavelength-division multiplexing for robust and ultrafast deep learning. We evaluate LiteCon using a commercial CAD framework (IPKISS) on deep learning benchmark models including LeNet and VGG-Net. Compared to the state of the art, LiteCon improves CNN throughput, energy efficiency, and computational efficiency by up to 32x, 37x, and 5x, respectively, with negligible accuracy degradation.
Compiler frameworks are crucial for the widespread use of FPGA-based deep learning accelerators. They allow researchers and developers who are not familiar with hardware engineering to harness the performance attained by domain-specific logic. A variety of frameworks exist for conventional artificial neural networks. However, not much research effort has gone into creating frameworks optimized for spiking neural networks (SNNs). This new generation of neural networks is increasingly interesting for deploying AI on edge devices, which have tight power and resource constraints. Our end-to-end framework E3NE automates the generation of efficient SNN inference logic for FPGAs. Based on a PyTorch model and user parameters, it applies various optimizations and assesses trade-offs inherent to spike-based accelerators. Multiple levels of parallelism and the use of an emerging neural encoding scheme result in an efficiency superior to previous SNN hardware implementations. For a similar model, E3NE uses less than 50% of the hardware resources and 20% less power, while reducing the latency by an order of magnitude. Moreover, scalability and generality allow the deployment of the large-scale SNN models AlexNet and VGG.