Autonomous Driving (AD)-related features represent important elements for the next generation of mobile robots and autonomous vehicles focused on increasingly intelligent, autonomous, and interconnected systems. By definition, the safety of the applications involving these features must be guaranteed, and this property is key to avoiding catastrophic accidents. Moreover, all decision processes must require low power consumption to increase the lifetime and autonomy of battery-driven systems. These challenges can be addressed through efficient implementations of Spiking Neural Networks (SNNs) on neuromorphic chips and the use of event-based cameras instead of traditional frame-based cameras. In this paper, we present a new SNN-based approach, called LaneSNN, for detecting the lanes marked on streets using event-based camera input. We develop four novel SNN models characterized by low complexity and fast response, and train them using an offline supervised learning rule. Afterwards, we implement and map the learned SNN models onto the Intel Loihi neuromorphic research chip. For the loss function, we develop a novel method based on the linear composition of Weighted binary Cross Entropy (WCE) and Mean Squared Error (MSE) measures. Our experimental results show a maximum Intersection over Union (IoU) measure of about 0.62 and a very low power consumption of about 1 W. The best IoU is achieved by an SNN implementation that occupies only 36 neurocores on the Loihi processor while providing a low latency of less than 8 ms to recognize an image, thereby enabling real-time performance. The IoU measures provided by our networks are comparable with the state of the art, but at a much lower power consumption of 1 W.
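As a concrete illustration of the loss described above, the following PyTorch sketch combines WCE and MSE linearly; the weighting factors alpha and beta and the positive-class weight pos_weight are illustrative assumptions, not the paper's tuned values.

    import torch
    import torch.nn.functional as F

    def lane_loss(pred_logits, target, pos_weight=10.0, alpha=0.5, beta=0.5):
        """Linear composition of WCE and MSE over a binary lane mask."""
        # Weighted binary cross entropy on the raw logits; pos_weight boosts
        # the sparse lane pixels (illustrative value, not the paper's).
        wce = F.binary_cross_entropy_with_logits(
            pred_logits, target, pos_weight=torch.tensor(pos_weight))
        # Mean squared error on the sigmoid probabilities.
        mse = F.mse_loss(torch.sigmoid(pred_logits), target)
        return alpha * wce + beta * mse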
In today's era of smart cyber-physical systems, Deep Neural Networks (DNNs) have become ubiquitous due to their state-of-the-art performance in complex real-world applications. The high computational complexity of these networks translates into increased energy consumption, which is the foremost obstacle to deploying large DNNs in resource-constrained systems. Fixed-Point (FP) implementations obtained through post-training quantization are commonly used to reduce the energy consumption of these networks. However, the uniform quantization intervals in FP force the bit-width of data structures to large values, since most of the numbers need to be represented with sufficient resolution to avoid high quantization errors. In this paper, we exploit the key insight that (in most scenarios) the weights and activations of DNNs are mostly concentrated near zero, and only a few have large magnitudes. We propose CoNLoCNN, a framework that enables energy-efficient, low-precision deep convolutional neural network inference by exploiting: (1) non-uniform quantization of weights, which simplifies complex multiplication operations; and (2) correlation between activation values, which enables partial compensation of quantization errors at low cost without any run-time overhead. To benefit significantly from non-uniform quantization, we also propose a novel data representation format, Encoded Low-Precision Binary Signed Digit, which compresses the bit-width of weights while ensuring that the encoded weights can be processed directly using a novel multiply-and-accumulate (MAC) unit design.
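To make the non-uniform quantization idea concrete, here is a minimal NumPy sketch that snaps weights to signed power-of-two levels, so that each multiplication can be realized as a bit shift; the level range is an illustrative choice, and this is not the paper's Encoded Low-Precision Binary Signed Digit format.

    import numpy as np

    def quantize_pow2(w, min_exp=-6, max_exp=0):
        """Snap each weight to the nearest signed power of two (or zero)."""
        levels = np.array([0.0] + [2.0 ** e for e in range(min_exp, max_exp + 1)])
        # Nearest-level index for every weight magnitude.
        idx = np.argmin(np.abs(np.abs(w)[..., None] - levels), axis=-1)
        return np.sign(w) * levels[idx]

    w = 0.3 * np.random.randn(4, 4)
    wq = quantize_pow2(w)  # near-zero weights land on the fine-grained levels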
Complex deep neural networks such as Capsule Networks (CapsNets) exhibit high learning capabilities at the cost of compute-intensive operations. To enable their deployment on edge devices, we propose to leverage approximate computing to design approximate variants of complex operations such as softmax and squash. In our experiments, we evaluate the trade-offs between the area, power consumption, and critical path delay of the designs implemented with an ASIC design flow, and the accuracy of the quantized CapsNets compared with the exact functions.
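For intuition, the sketch below shows one plausible pair of approximate variants: a softmax whose exponential is replaced by a rounded base-2 power (realizable with shifters) and a squash whose L2 norm is replaced by the cheaper L1 norm to avoid the square root. These are illustrative approximations, not the paper's ASIC designs.

    import numpy as np

    def softmax_approx(x):
        """exp(x) replaced by 2**round(x / ln 2), a shift-friendly operation."""
        e = np.exp2(np.round(x / np.log(2.0)))
        return e / np.sum(e, axis=-1, keepdims=True)

    def squash_approx(s):
        """Squash v = (|s|^2 / (1 + |s|^2)) * s / |s|, with the L2 norm
        replaced by an L1 norm so that no square root is needed."""
        n = np.sum(np.abs(s), axis=-1, keepdims=True)
        return (n / (1.0 + n * n)) * s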
Computational units in artificial neural networks follow a simplified model of biological neurons. In the biological model, the output signal of a neuron runs down the axon, splits following the many branches at its end, and passes identically to all the downward neurons of the network. Each of the downward neurons uses its copy of this signal as one of many input dendrites, integrates them all, and fires an output if the result is above some threshold. In the artificial neural network, this translates to the fact that the nonlinear filtering of the signal is performed in the upward neuron, meaning that in practice the same activation is shared by all the downward neurons that use that signal as their input. Dendrites thus play a passive role. We propose a slightly more complex model for the biological neuron, where dendrites play an active role: the activation in the output of the upward neuron becomes optional, and instead the signals going through each dendrite undergo independent nonlinear filtering before the linear combination. We implement this new model into a ReLU computational unit and discuss its biological plausibility. We compare this new computational unit with the standard one and describe it from a geometrical point of view. We provide a Keras implementation of this unit for fully connected and convolutional layers and estimate the change in their FLOPs and weights. We then use these layers in ResNet architectures on CIFAR-10, CIFAR-100, Imagenette, and Imagewoof, obtaining performance improvements of up to 1.73% over standard ResNets. Finally, we prove a universal representation theorem for continuous functions on compact sets and show that this new unit has more representational power than its standard counterpart.
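A minimal Keras sketch of such a unit for a fully connected layer is given below: each input-to-output connection applies its own ReLU before the linear combination, instead of sharing one upstream activation. The per-edge bias is an assumption made for illustration; the paper's released implementation may differ.

    import tensorflow as tf

    class DendriticDense(tf.keras.layers.Layer):
        """Dense layer with an independent ReLU on every dendrite (edge)."""

        def __init__(self, units):
            super().__init__()
            self.units = units

        def build(self, input_shape):
            d = int(input_shape[-1])
            self.w = self.add_weight(shape=(d, self.units),
                                     initializer="glorot_uniform")
            self.b = self.add_weight(shape=(d, self.units), initializer="zeros")

        def call(self, x):
            # Per-edge pre-activations: (batch, d) -> (batch, d, units).
            z = x[:, :, None] * self.w[None, :, :] + self.b[None, :, :]
            return tf.reduce_sum(tf.nn.relu(z), axis=1)  # sum over dendrites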
The paper addresses the problem of time offset synchronization in the presence of temperature variations, which lead to a non-Gaussian environment. In this context, regular Kalman filtering proves to be suboptimal. A functional optimization approach is developed to approximate the optimal estimation of the clock offset between master and slave. To this aim, a numerical approximation based on standard neural network training is provided. Further heuristics based on spline regression are provided as well. An extensive performance evaluation highlights the benefits of the proposed techniques, which can be easily generalized to several clock synchronization protocols and operating environments.
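As a small illustration of the spline-regression heuristic mentioned above, the sketch below smooths a sequence of noisy offset observations with heavy-tailed (non-Gaussian) noise; the drift and noise models are synthetic assumptions, not the paper's experimental setup.

    import numpy as np
    from scipy.interpolate import UnivariateSpline

    t = np.linspace(0.0, 100.0, 500)
    true_offset = 1e-3 * t + 5e-3 * np.sin(0.05 * t)  # temperature-driven drift
    noise = 2e-3 * np.random.standard_t(df=3, size=t.size)  # heavy-tailed
    obs = true_offset + noise

    # Smoothing factor chosen from the assumed noise scale.
    spline = UnivariateSpline(t, obs, k=3, s=t.size * (2e-3) ** 2)
    offset_hat = spline(t)  # smoothed clock-offset estimate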
A significant drawback of eXplainable Artificial Intelligence (XAI) approaches is the assumption of feature independence. This paper focuses on integrating causal knowledge into XAI methods to increase trust and help users assess the quality of explanations. We propose a novel extension to a widely used local and model-agnostic explainer that explicitly encodes causal relationships in the data generated around the input instance to be explained. Extensive experiments show that our method outperforms the original explainer in both the fidelity with which it mimics the black-box and the stability of the explanations.
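A toy sketch of the core idea follows: the neighborhood around the instance is generated by sampling through an assumed causal graph (parents first, children via structural equations) rather than perturbing features independently, and a linear surrogate is then fitted as in LIME. The two-variable structural causal model and the Ridge surrogate are illustrative assumptions.

    import numpy as np
    from sklearn.linear_model import Ridge

    def causal_neighborhood(x, n=500, rng=np.random.default_rng(0)):
        x0 = x[0] + rng.normal(0.0, 1.0, n)      # exogenous parent feature
        x1 = 2.0 * x0 + rng.normal(0.0, 0.1, n)  # child: x1 := 2*x0 + noise
        return np.column_stack([x0, x1])

    def explain(black_box, x):
        Z = causal_neighborhood(np.asarray(x, dtype=float))
        surrogate = Ridge(alpha=1.0).fit(Z, black_box(Z))
        return surrogate.coef_  # local feature attributions

    coefs = explain(lambda Z: Z[:, 0] + 3.0 * Z[:, 1], [1.0, 2.0])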
The paper addresses state estimation for clock synchronization in the presence of factors affecting the quality of synchronization, such as temperature variations and delay asymmetry. These working conditions make synchronization a challenging problem in many wireless environments, such as Wireless Sensor Networks or WiFi. Dynamic state estimation is investigated, as it is essential to overcome non-stationary noise. The two-way timing message exchange synchronization protocol is taken as a reference. No a-priori assumptions are made on the stochastic environment, and no temperature measurements are performed. The algorithms are unequivocally specified offline, without the need to tune parameters according to the working conditions. The presented approach proves robust to a large set of temperature variations, different delay distributions, and levels of asymmetry in the transmission path.
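For reference, the two-way timing message exchange yields the standard offset and delay estimates from the four timestamps (t1: master send, t2: slave receive, t3: slave send, t4: master receive); the paper's dynamic state estimators track these quantities over time.

    def two_way_exchange(t1, t2, t3, t4):
        """Standard estimates; the offset is exact only for symmetric delay."""
        offset = ((t2 - t1) - (t4 - t3)) / 2.0  # slave clock offset vs. master
        delay = ((t2 - t1) + (t4 - t3)) / 2.0   # one-way (mean) path delay
        return offset, delay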
Image super-resolution is a common task on mobile and IoT devices, where one often needs to upscale and enhance low-resolution images and video frames. While numerous solutions have been proposed for this problem in the past, they are usually not compatible with low-power mobile NPUs that have many computational and memory constraints. In this Mobile AI challenge, we address this problem and task the participants with designing an efficient quantized image super-resolution solution that can demonstrate real-time performance on mobile NPUs. The participants were provided with the DIV2K dataset and trained INT8 models to perform high-quality 3X image upscaling. The runtime of all models was evaluated on the Synaptics VS680 Smart Home board with a dedicated edge NPU capable of accelerating quantized neural networks. All proposed solutions are fully compatible with this NPU, demonstrating rates of up to 60 FPS when reconstructing Full HD resolution images. A detailed description of all models developed in the challenge is provided in this paper.
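A minimal sketch of producing a fully INT8 model of the kind evaluated in the challenge is shown below, using TensorFlow Lite post-training quantization; the Keras model and representative images are placeholders, not a submitted solution.

    import tensorflow as tf

    def to_int8_tflite(model, rep_images):
        """Convert a Keras SR model to a fully integer-quantized TFLite model."""
        conv = tf.lite.TFLiteConverter.from_keras_model(model)
        conv.optimizations = [tf.lite.Optimize.DEFAULT]
        # Calibration data for activation ranges (placeholder images).
        conv.representative_dataset = lambda: (
            [img[None].astype("float32")] for img in rep_images)
        conv.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
        conv.inference_input_type = tf.int8
        conv.inference_output_type = tf.int8
        return conv.convert()  # serialized .tflite flatbuffer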
Feature selection is of great importance in Machine Learning, where it can be used to reduce the dimensionality of classification, ranking and prediction problems. The removal of redundant and noisy features can improve both the accuracy and scalability of the trained models. However, feature selection is a computationally expensive task whose solution space grows combinatorially. In this work, we consider in particular a quadratic feature selection problem that can be tackled with the Quantum Approximate Optimization Algorithm (QAOA), already employed in combinatorial optimization. First, we represent the feature selection problem with the QUBO formulation, which is then mapped to an Ising spin Hamiltonian. We then apply QAOA with the goal of finding the ground state of this Hamiltonian, which corresponds to the optimal selection of features. In our experiments, we consider seven different real-world datasets with dimensionality up to 21 and run QAOA on both a quantum simulator and, for small datasets, the 7-qubit IBM (ibm-perth) quantum computer. We use the set of selected features to train a classification model and evaluate its accuracy. Our analysis shows that it is possible to tackle the feature selection problem with QAOA and that currently available quantum devices can be used effectively. Future studies could test a wider range of classification models as well as improve the effectiveness of QAOA by exploring better performing optimizers for its classical step.
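To illustrate the first step, the sketch below builds a QUBO for feature selection that rewards per-feature relevance and penalizes pairwise redundancy; the score definitions and the trade-off weight alpha are illustrative choices, and the Ising Hamiltonian for QAOA then follows from the substitution x_i = (1 - z_i) / 2.

    import numpy as np

    def feature_selection_qubo(relevance, redundancy, alpha=0.5):
        """relevance: (d,) scores; redundancy: (d, d) symmetric matrix.
        Returns Q such that x^T Q x is minimized over x in {0, 1}^d."""
        Q = alpha * np.triu(redundancy, k=1)                 # pairwise penalties
        Q[np.diag_indices_from(Q)] = -np.asarray(relevance)  # linear terms
        return Q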
This work presents a novel reconfigurable architecture for low-latency Graph Neural Network (GNN) designs targeting particle detectors. Accelerating GNNs for particle detectors is challenging because it requires sub-microsecond latency to deploy the networks for online event selection in the Level-1 trigger of the CERN Large Hadron Collider experiments. This paper proposes a custom code transformation with strength reduction for the matrix multiplication operations in interaction-network based GNNs with fully connected graphs, which avoids costly multiplications. It exploits sparsity patterns and binary adjacency matrices, and avoids irregular memory accesses, leading to reduced latency and improved hardware efficiency. In addition, we introduce an outer-product based matrix multiplication approach, which is enhanced by the strength reduction for low-latency design. A fusion step is also introduced to further reduce the design latency. Moreover, a GNN-specific algorithm-hardware co-design approach is presented, which not only finds designs with much better latency but also discovers high-accuracy designs under a given latency constraint. Finally, a customizable template for this low-latency GNN hardware architecture has been designed and open-sourced; it enables the generation of low-latency FPGA designs with efficient resource utilization using a high-level synthesis tool. Evaluation results show that our FPGA implementation is up to 24 times faster and consumes up to 45 times less power than a GPU implementation. Compared to our previous FPGA implementations, this work achieves 6.51 to 16.7 times lower latency. Moreover, the latency of our FPGA design is sufficiently low to enable the deployment of GNNs in a sub-microsecond, real-time collider trigger system, allowing it to benefit from improved accuracy.
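An illustrative NumPy view of the strength reduction is shown below: for a fully connected graph, the binary adjacency matrix is all-ones minus the identity, so the aggregation A @ X collapses to one global sum and a subtraction, removing every multiplication. This mirrors the idea only; it is not the open-sourced HLS template.

    import numpy as np

    def aggregate_fully_connected(X):
        """Equivalent to A @ X with A = ones - I, using additions only."""
        return X.sum(axis=0, keepdims=True) - X

    X = np.random.randn(8, 16)  # 8 nodes, 16 features per node
    A = np.ones((8, 8)) - np.eye(8)
    assert np.allclose(A @ X, aggregate_fully_connected(X))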