Recurrent neural networks (RNNs) are powerful tools for sequential modeling, but typically require significant overparameterization and regularization to achieve optimal performance. This makes deploying large RNNs in resource-limited settings difficult, while also introducing complications in hyperparameter selection and training. To address these issues, we introduce a "fully tensorized" RNN architecture that jointly encodes the separate weight matrices within each recurrent cell using a lightweight tensor-train (TT) factorization. This approach represents a novel form of weight sharing that reduces model size by several orders of magnitude while maintaining similar or better performance compared to standard RNNs. Experiments on image classification and speaker verification tasks demonstrate further benefits of reduced inference time and more stable model training and hyperparameter selection.
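To make the idea concrete, here is a minimal NumPy sketch (ours, not the paper's code) of storing a single weight matrix as tensor-train cores and expanding it on demand; the mode sizes and ranks below are arbitrary illustrative choices.

```python
import numpy as np

def tt_to_matrix(cores):
    """Contract TT cores of shape (r_{k-1}, m_k, n_k, r_k) into the full
    (prod m_k, prod n_k) weight matrix they encode."""
    full = cores[0]
    for core in cores[1:]:
        full = np.tensordot(full, core, axes=([-1], [0]))
    full = np.squeeze(full, axis=(0, full.ndim - 1))  # drop boundary ranks (both 1)
    rows = list(range(0, full.ndim, 2))               # m_1, m_2, ... axes
    cols = list(range(1, full.ndim, 2))               # n_1, n_2, ... axes
    m = int(np.prod([full.shape[i] for i in rows]))
    n = int(np.prod([full.shape[i] for i in cols]))
    return full.transpose(rows + cols).reshape(m, n)

rng = np.random.default_rng(0)
row_modes, col_modes, ranks = [4, 8, 8], [4, 8, 8], [1, 4, 4, 1]
cores = [rng.standard_normal((ranks[k], row_modes[k], col_modes[k], ranks[k + 1])) * 0.1
         for k in range(3)]
W = tt_to_matrix(cores)                          # a 256 x 256 weight matrix
print(W.shape, sum(c.size for c in cores))       # 1,344 stored values vs 65,536 dense
```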
In recent years, the use of orthogonal matrices has been shown to be a promising approach for improving recurrent neural networks (RNNs) with respect to training, stability, and convergence, particularly for controlling gradients. While gated recurrent unit (GRU) and long short-term memory (LSTM) architectures address the vanishing gradient problem through various gates and memory cells, they remain susceptible to the exploding gradient problem. In this work, we analyze the gradients in GRUs and propose the use of orthogonal matrices to prevent exploding gradients and enhance long-term memory. We study where orthogonal matrices should be used and propose a Neumann-series-based scaled Cayley transform for training orthogonal matrices in the GRU, which we call the Neumann-Cayley Orthogonal GRU, or simply NC-GRU. We present detailed experiments with our model on several synthetic and real-world tasks, showing that NC-GRU significantly outperforms GRUs as well as several other RNNs.
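The Cayley transform at the heart of this approach is easy to sketch. Below is a rough NumPy rendition (illustrative only; the paper's scaled variant and its integration into the GRU are simplified away) of producing a near-orthogonal matrix from a skew-symmetric parameterization, with the matrix inverse replaced by a truncated Neumann series.

```python
import numpy as np

def cayley_neumann(A, terms=6):
    """Approximate W = (I + A)^{-1} (I - A) for skew-symmetric A, replacing the
    inverse with the truncated Neumann series sum_k (-A)^k (valid for ||A|| < 1)."""
    n = A.shape[0]
    inv_approx, power = np.eye(n), np.eye(n)
    for _ in range(terms):
        power = power @ (-A)
        inv_approx = inv_approx + power
    return inv_approx @ (np.eye(n) - A)

rng = np.random.default_rng(0)
B = rng.standard_normal((8, 8)) * 0.05
A = B - B.T                                   # skew-symmetric parameterization
W = cayley_neumann(A)
print(np.max(np.abs(W.T @ W - np.eye(8))))    # near 0: W is (almost) orthogonal
```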
In this paper we compare different types of recurrent units in recurrent neural networks (RNNs). In particular, we focus on more sophisticated units that implement a gating mechanism, such as a long short-term memory (LSTM) unit and a recently proposed gated recurrent unit (GRU). We evaluate these recurrent units on the tasks of polyphonic music modeling and speech signal modeling. Our experiments revealed that these advanced recurrent units are indeed better than more traditional recurrent units such as tanh units. We also found the GRU to be comparable to the LSTM.
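For reference, here is a one-step GRU update in plain NumPy, following the standard formulation compared in the paper (biases omitted for brevity; this sketch is ours, not the authors' code).

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h, Wz, Uz, Wr, Ur, Wh, Uh):
    """One GRU update: the gates decide how much of the old state survives."""
    z = sigmoid(Wz @ x + Uz @ h)               # update gate
    r = sigmoid(Wr @ x + Ur @ h)               # reset gate
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h))   # candidate state
    return (1 - z) * h + z * h_tilde           # interpolate old and candidate state

rng = np.random.default_rng(0)
d_in, d_h = 4, 8
Wz, Wr, Wh = (rng.standard_normal((d_h, d_in)) * 0.1 for _ in range(3))
Uz, Ur, Uh = (rng.standard_normal((d_h, d_h)) * 0.1 for _ in range(3))
h = np.zeros(d_h)
for x in rng.standard_normal((5, d_in)):
    h = gru_step(x, h, Wz, Uz, Wr, Ur, Wh, Uh)
print(h)
```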
Low-rank tensor compression has been proposed as a promising approach to reduce the storage and compute requirements of neural networks for their deployment on edge devices. Tensor compression reduces the number of parameters required to represent a neural network's weights by assuming that the weights possess a coarse higher-order structure. This coarse-structure assumption has been applied to compress large neural networks such as VGG and ResNet. However, modern state-of-the-art neural networks for computer vision tasks (i.e., MobileNet, EfficientNet) already assume a coarse factorized structure through depthwise-separable convolutions, making pure tensor decomposition a less attractive approach. We propose to combine low-rank tensor decomposition with sparse pruning in order to exploit both coarse and fine structure for compression. We compress the weights of SOTA architectures (MobileNetV3, EfficientNet, Vision Transformer) and compare this approach to sparse pruning and tensor decomposition alone.
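An illustrative sketch of the combined idea, under our own simplifications: approximate a weight matrix as a low-rank factor (coarse structure) plus a sparse residual (fine structure). This is a generic rendition, not the authors' implementation.

```python
import numpy as np

def lowrank_plus_sparse(W, rank, keep_frac):
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    L = (U[:, :rank] * s[:rank]) @ Vt[:rank]        # coarse low-rank part
    R = W - L                                       # residual
    thresh = np.quantile(np.abs(R), 1.0 - keep_frac)
    S = np.where(np.abs(R) >= thresh, R, 0.0)       # fine-grained sparse part
    return L, S

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 64))
L, S = lowrank_plus_sparse(W, rank=8, keep_frac=0.05)
err = np.linalg.norm(W - (L + S)) / np.linalg.norm(W)
print(err)   # reconstruction error after keeping rank 8 plus 5% sparse entries
```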
This paper describes a new approach for representing the embedding tables of graph neural networks (GNNs) more compactly via tensor-train (TT) decomposition. We consider the scenario where (a) the graph data lack node features, so the embeddings must be learned during training; and (b) we wish to exploit GPU platforms, where smaller tables are needed to reduce host-to-GPU communication even for large-memory GPUs. The use of TT enables a compact parameterization of the embeddings, rendering them small enough to fit entirely on modern GPUs even for massive graphs. When combined with judicious initialization and hierarchical graph partitioning, this approach can reduce the size of node embedding vectors by 1,659x to 81,362x on large publicly available benchmark datasets, achieving comparable or better accuracy and significant speedups on multi-GPU systems. In some cases, our model without explicit node features on the input can even match the accuracy of models that use node features.
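A hedged sketch of how such a TT-compressed embedding lookup can work: a table of N = n1*n2*n3 rows and D = d1*d2*d3 columns is stored as three small cores, and one row is materialized on demand. All mode sizes and ranks below are made up for illustration.

```python
import numpy as np

def tt_embedding_row(cores, mode_sizes, index):
    """Materialize row `index` of the implicit (N, D) embedding table."""
    digits = []                        # mixed-radix digits of the row index
    for n in reversed(mode_sizes):
        digits.append(index % n)
        index //= n
    digits.reverse()
    vec = cores[0][:, digits[0], :, :]                    # shape (1, d1, r1)
    for core, i in zip(cores[1:], digits[1:]):
        vec = np.tensordot(vec, core[:, i, :, :], axes=([-1], [0]))
    return vec.reshape(-1)                                # length d1*d2*...*dk

rng = np.random.default_rng(0)
mode_sizes, dims, ranks = [32, 32, 32], [4, 4, 8], [1, 8, 8, 1]
cores = [rng.standard_normal((ranks[k], mode_sizes[k], dims[k], ranks[k + 1])) * 0.1
         for k in range(3)]
e = tt_embedding_row(cores, mode_sizes, index=12345)
print(e.shape)   # (128,): one row of a 32768 x 128 table held in ~11k parameters
```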
Physics-informed neural networks (PINNs) are increasingly used due to their capability of modeling complex physical systems. To achieve better expressiveness, increasingly large network sizes are required in many problems. This raises challenges when we need to train PINNs on edge devices with limited memory, computing, and energy resources. To enable training PINNs on edge devices, this paper proposes an end-to-end compressed PINN based on tensor-train decomposition. In solving a Helmholtz equation, our proposed model significantly outperforms the original PINN with far fewer parameters and achieves satisfactory predictions with up to $15\times$ overall parameter reduction.
State-of-the-art deep neural networks (DNNs) have been widely applied to various real-world applications and achieve remarkable performance on cognitive problems. However, the increases in DNN width and depth result in enormous numbers of parameters, challenging storage and memory cost and limiting the use of DNNs on resource-constrained platforms such as portable devices. By converting redundant models into compact ones, compression techniques appear to be a practical solution to reducing storage and memory consumption. In this paper, we develop a nonlinear tensor ring network (NTRN) in which both fully connected and convolutional layers are compressed via tensor ring decomposition. Furthermore, to mitigate the accuracy loss caused by compression, nonlinear activation functions are embedded into the tensor contraction and convolution operations inside the compressed layers. Experimental results demonstrate the effectiveness and superiority of the proposed NTRN for image classification using two basic neural networks, LeNet-5 and VGG-11, on three datasets, viz. MNIST, Fashion-MNIST, and CIFAR-10.
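For concreteness, a small sketch of the underlying tensor-ring reconstruction (NTRN additionally inserts nonlinear activations between the contractions, which is omitted in this illustration):

```python
import numpy as np

def tensor_ring_full(cores):
    """Reconstruct a full tensor from ring cores G_k of shape (r_k, n_k, r_{k+1});
    the last rank wraps around to the first, hence the trace ('ring') closure."""
    full = cores[0]                                  # (r1, n1, r2)
    for core in cores[1:]:
        full = np.tensordot(full, core, axes=([-1], [0]))
    # full now has shape (r1, n1, n2, ..., nd, r1): close the ring over r1
    return np.trace(full, axis1=0, axis2=full.ndim - 1)

rng = np.random.default_rng(0)
r, shape = 3, (4, 5, 6)
cores = [rng.standard_normal((r, n, r)) for n in shape]
T = tensor_ring_full(cores)
print(T.shape)   # (4, 5, 6), parameterized by three cores of r*n*r entries each
```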
In this paper, we present connections between three models used in different research fields: weighted finite automata (WFA) from formal languages and linguistics, recurrent neural networks used in machine learning, and tensor networks, which encompass a set of optimization techniques for high-order tensors used in quantum physics and numerical analysis. We first present an intrinsic relation between WFA and the tensor-train decomposition, a particular form of tensor network. This relation allows us to exhibit a novel low-rank structure of the Hankel matrix of a function computed by a WFA and to design an efficient spectral learning algorithm leveraging this structure, which scales to very large Hankel matrices. We then unravel a fundamental connection between WFA and second-order recurrent neural networks (2-RNN): in the case of sequences of discrete symbols, WFA and 2-RNN with linear activation functions are expressively equivalent. Leveraging this equivalence result combined with the classical spectral learning algorithm for weighted automata, we introduce the first provable learning algorithm for linear 2-RNN defined over sequences of continuous input vectors. This algorithm relies on estimating low-rank sub-blocks of the Hankel tensor, from which the parameters of a linear 2-RNN can be recovered. The performance of the proposed learning algorithm is assessed in a simulation study on both synthetic and real-world data.
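A small numerical illustration (not the paper's algorithm) of the low-rank Hankel structure that spectral learning exploits: a function computed by a k-state WFA yields a Hankel matrix of rank at most k.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
k, alphabet = 3, [0, 1]
alpha = rng.standard_normal(k)                       # initial weight vector
omega = rng.standard_normal(k)                       # final weight vector
A = {a: rng.standard_normal((k, k)) * 0.5 for a in alphabet}  # transition matrices

def f(word):
    """Value the WFA assigns to a word: alpha^T A_{w_1} ... A_{w_n} omega."""
    v = alpha
    for a in word:
        v = v @ A[a]
    return v @ omega

# Hankel matrix indexed by (prefix u, suffix v): H[u, v] = f(uv)
words = [w for n in range(4) for w in itertools.product(alphabet, repeat=n)]
H = np.array([[f(u + v) for v in words] for u in words])
print(H.shape, np.linalg.matrix_rank(H, tol=1e-8))   # (15, 15), rank 3 = #states
```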
While machine learning is traditionally a resource-intensive task, embedded systems, autonomous navigation, and the vision of the Internet of Things fuel the interest in resource-efficient approaches. These approaches aim for a carefully chosen trade-off between performance and resource consumption in terms of computation and energy. The development of such approaches is among the major challenges in current machine learning research and key to ensuring a smooth transition of machine learning technology from a scientific environment with virtually unlimited computing resources into everyday applications. In this article, we provide an overview of the current state of the art of machine learning techniques facilitating these real-world requirements. In particular, we focus on deep neural networks (DNNs), the predominant machine learning models of the past decade. We give a comprehensive overview of the vast literature, which can be mainly split into three non-mutually exclusive categories: (i) quantized neural networks, (ii) network pruning, and (iii) structural efficiency. These techniques can be applied during training or as post-processing, and they are widely used to reduce the computational demands in terms of memory footprint, inference speed, and energy efficiency. We also briefly discuss different concepts of embedded hardware for DNNs and their compatibility with machine learning techniques as well as their potential for energy and latency reduction. We substantiate our discussion with experiments on well-known benchmark datasets using compression techniques (quantization, pruning) for a set of resource-constrained embedded systems, such as CPUs, GPUs and FPGAs. The obtained results highlight the difficulty of finding good trade-offs between resource efficiency and predictive performance.
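For concreteness, here is a minimal post-training uniform quantization of a weight tensor to int8, one of the basic techniques surveyed above (symmetric, per-tensor; real deployments add per-channel scales, calibration data, and so on).

```python
import numpy as np

def quantize_int8(w):
    scale = np.abs(w).max() / 127.0          # map the largest weight to +/-127
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal(1000).astype(np.float32)
q, s = quantize_int8(w)
print(np.abs(w - dequantize(q, s)).max())    # worst-case rounding error ~ scale/2
```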
The emergence of noisy intermediate-scale quantum (NISQ) computers raises a crucial challenge of designing quantum neural networks for fully quantum learning tasks. To bridge the gap, this work proposes an end-to-end learning framework named QTN-VQC, introducing a trainable quantum tensor network (QTN) for quantum embedding on top of a variational quantum circuit (VQC). The architecture of the QTN consists of a parametric tensor-train network for feature extraction and a tensor product encoding for quantum embedding. We highlight the QTN for quantum embedding from two perspectives: (1) we theoretically characterize the QTN by analyzing its representation power over input features; (2) the QTN enables an end-to-end parametric model pipeline, namely QTN-VQC, from the generation of quantum embeddings to the output measurement. Our experiments on the MNIST dataset demonstrate the advantages of the QTN for quantum embedding over other quantum embedding approaches.
Long short-term memory (LSTM) recurrent networks are frequently used for tasks involving time-sequential data such as speech recognition. Unlike previous LSTM accelerators that exploit either spatial weight sparsity or temporal activation sparsity, this paper proposes a new accelerator called "Spartus" that exploits spatio-temporal sparsity to achieve ultra-low-latency inference. Spatial sparsity is induced using a new column-balanced targeted dropout (CBTD) structured pruning method, which generates structured sparse weight matrices with balanced workloads. The pruned networks running on Spartus hardware achieve weight sparsity of up to 96% and 94% with negligible accuracy loss on the TIMIT and LibriSpeech datasets. To induce temporal sparsity in LSTMs, we extend the previous DeltaGRU method to a DeltaLSTM method. Combining spatio-temporal sparsity through CBTD and DeltaLSTM saves weight memory accesses and the associated arithmetic operations. The Spartus architecture is scalable and supports real-time online speech recognition when implemented on small and large FPGAs. Spartus achieves an average per-sample latency of 1 μs for a single DeltaLSTM layer of 1024 neurons. Exploiting spatio-temporal sparsity on our test LSTM network with the TIMIT dataset leads Spartus to a 46x speedup over its theoretical hardware performance, attaining 9.4 TOp/s effective batch-1 throughput and 1.1 TOp/s/W power efficiency.
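The delta principle behind DeltaGRU/DeltaLSTM can be sketched in a few lines: a matrix-vector product is updated incrementally over time steps, and input components whose change stays below a threshold are skipped entirely, so their columns of the weight matrix are never read. The sizes and threshold below are illustrative, not the paper's.

```python
import numpy as np

def delta_matvec_stream(W, xs, theta=0.1):
    """Yield W @ x_t (approximately) for a stream of inputs, reusing the
    previous result and touching only columns whose input changed by > theta."""
    x_ref = np.zeros(W.shape[1])   # value at which each input was last processed
    acc = np.zeros(W.shape[0])     # running value of W @ x_ref
    for x in xs:
        delta = x - x_ref
        active = np.abs(delta) > theta          # sparse set of changed inputs
        acc = acc + W[:, active] @ delta[active]
        x_ref[active] = x[active]
        yield acc, int(active.sum())

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 64))
xs = np.cumsum(rng.standard_normal((100, 64)) * 0.05, axis=0)  # slowly varying inputs
for y, n_active in delta_matvec_stream(W, xs):
    pass
print(n_active, "of 64 input components processed at the final step")
```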
Long short-term memory (LSTM) is a type of powerful deep neural network that has been widely used in many sequence analysis and modeling applications. However, the large model size of LSTM networks makes their practical deployment very challenging, especially for video recognition tasks that require high-dimensional input data. Aiming to overcome this limitation and fully unlock the potential of LSTM models, in this paper we propose to perform algorithm and hardware co-design towards high-performance, energy-efficient LSTM networks. At the algorithm level, we develop a fully decomposed hierarchical Tucker (FDHT) structure-based LSTM, namely FDHT-LSTM, which enjoys ultra-low model complexity while still achieving high accuracy. In order to fully reap such attractive algorithmic benefits, we further develop the corresponding customized hardware architecture to support the efficient execution of the proposed FDHT-LSTM model. With a careful design of the memory access scheme, the complicated matrix transformation can be efficiently supported by the underlying hardware on the fly, without any access conflict. Our evaluation results show that both the proposed ultra-compact FDHT-LSTM models and the corresponding hardware accelerator achieve very high performance. Compared with state-of-the-art compressed LSTM models, FDHT-LSTM enjoys both an order-of-magnitude reduction in model size and significant accuracy improvement across different video recognition datasets. Meanwhile, compared with TIE, the state-of-the-art hardware for tensor-decomposed models, our proposed FDHT-LSTM architecture achieves better throughput, area efficiency, and energy efficiency on the LSTM-Youtube workload. For the LSTM-UCF workload, our proposed design also outperforms TIE with higher throughput, higher energy efficiency, and comparable area efficiency.
Automatic speech recognition (ASR) is a capability that enables a program to transcribe human speech into written form. Recent developments in artificial intelligence (AI) have led to high-accuracy ASR systems based on deep neural networks, such as the recurrent neural network transducer (RNN-T). However, the core components of these approaches and the operations they perform depart from their powerful biological counterpart, the human brain. On the other hand, current developments in biologically-inspired models based on spiking neural networks (SNNs) lag behind in accuracy and focus mainly on small-scale applications. In this work, we revisit biologically plausible models and substantially improve their capabilities by drawing inspiration from the diverse neural and synaptic dynamics found in the brain. In particular, we introduce neural connectivity concepts emulating axo-somatic and axo-axonic synapses. Based on this, we propose novel deep learning units with enriched neuro-synaptic dynamics and integrate them into the RNN-T architecture. We demonstrate for the first time that a biologically realistic implementation of large-scale ASR models can yield competitive performance levels compared with existing deep learning models. Specifically, we show that such an implementation bears several advantages, such as reduced computational cost and lower latency, which are critical for speech recognition applications.
Predicting fund performance is beneficial to both investors and fund managers, yet it is a challenging task. In this paper, we test whether deep learning models can predict fund performance more accurately than traditional statistical techniques. Fund performance is typically evaluated by the Sharpe ratio, which represents risk-adjusted performance to ensure meaningful comparability across funds. We calculated annualized Sharpe ratios based on the monthly return time series of more than 600 open-ended mutual funds investing in large-cap equities listed in the United States. We find that long short-term memory (LSTM) and gated recurrent unit (GRU) deep learning methods, trained with modern Bayesian optimization, predict fund Sharpe ratios more accurately than traditional statistical methods. An ensemble method combining the predictions of the LSTM and GRU achieves the best performance of all models. There is evidence that deep learning and ensembling offer promising solutions to addressing the challenge of fund performance prediction.
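For clarity, the target quantity of this study, sketched under the simplifying assumption of a zero risk-free rate: the annualized Sharpe ratio computed from monthly returns.

```python
import numpy as np

def annualized_sharpe(monthly_returns):
    """Annualized Sharpe ratio from monthly (excess) returns."""
    r = np.asarray(monthly_returns, dtype=float)
    return np.sqrt(12) * r.mean() / r.std(ddof=1)

rng = np.random.default_rng(0)
r = rng.normal(0.007, 0.04, size=12)   # one illustrative year of monthly returns
print(annualized_sharpe(r))
```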
We present tntorch, a tensor learning framework that supports multiple decompositions (including CANDECOMP/PARAFAC, Tucker, and tensor train) under a unified interface. With our library, users can learn and handle low-rank tensors with automatic differentiation, seamless GPU support, and the convenience of PyTorch's API. Besides decomposition algorithms, tntorch implements differentiable tensor algebra, rank truncation, cross-approximation, batch processing, comprehensive tensor arithmetic, and more.
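A brief usage sketch; the calls below follow tntorch's documented interface as we understand it, so treat the exact names and signatures as indicative rather than authoritative.

```python
import torch
import tntorch as tn

X = torch.randn(16, 16, 16, 16)
t = tn.Tensor(X, ranks_tt=5)                   # compress X into TT format, rank 5
X_hat = t.torch()                              # expand back to a dense torch tensor
print(torch.norm(X - X_hat) / torch.norm(X))   # relative reconstruction error
```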
Common to all different kinds of recurrent neural networks (RNNs) is the intention to model relations between data points through time. When there is no immediate relationship between subsequent data points (e.g., when the data points are generated at random), we show that RNNs are still able to remember a few data points back into the sequence by memorizing them by heart using standard backpropagation. However, we also show that for classical RNNs, LSTM, and GRU networks, the distance of data points between recurrent calls that can be reproduced this way is highly limited (compared to even a loose connection between data points) and subject to various constraints imposed by the type and size of the RNN in question. This implies the existence of a hard limit (way below the information-theoretic one) for the distance between related data points within which RNNs are still able to recognize said relation.
The inputs and/or outputs of some neural networks are the weight matrices of other neural networks. Indirect encodings or end-to-end compression of weight matrices could help to scale such approaches. Our goal is to open a discussion on this topic, starting with recurrent neural networks for character-level language modeling whose weight matrices are encoded by the discrete cosine transform. Our fast-weight version uses a recurrent neural network to parameterize the compressed weights. We present experimental results on the enwik8 dataset.
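A toy sketch of the DCT encoding idea (ours, using SciPy; the block size is an arbitrary choice): keep only a low-frequency block of DCT coefficients and reconstruct the dense matrix from them. On random weights, as here, the reconstruction error is large; the point is only the mechanics of the indirect encoding.

```python
import numpy as np
from scipy.fft import dctn, idctn

def dct_compress(W, keep):
    C = dctn(W, norm="ortho")
    return C[:keep, :keep].copy()          # low-frequency coefficients only

def dct_decompress(code, shape):
    C = np.zeros(shape)
    k = code.shape[0]
    C[:k, :k] = code
    return idctn(C, norm="ortho")

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 64))
code = dct_compress(W, keep=16)            # 16*16 = 256 coefficients vs 4096
W_hat = dct_decompress(code, W.shape)
print(np.linalg.norm(W - W_hat) / np.linalg.norm(W))
```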
Recurrent neural networks (RNN) are the backbone of many text and speech applications. These architectures are typically made up of several computationally complex components such as non-linear activation functions, normalization, bi-directional dependence and attention. In order to maintain good accuracy, these components are frequently run using full-precision floating-point computation, making them slow, inefficient and difficult to deploy on edge devices. In addition, the complex nature of these operations makes them challenging to quantize using standard quantization methods without a significant performance drop. We present a quantization-aware training method for obtaining a highly accurate integer-only recurrent neural network (iRNN). Our approach supports layer normalization, attention, and an adaptive piecewise linear (PWL) approximation of activation functions, to serve a wide range of state-of-the-art RNNs. The proposed method enables RNN-based language models to run on edge devices with a $2\times$ improvement in runtime and a $4\times$ reduction in model size while maintaining similar accuracy to their full-precision counterparts.
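A sketch of the PWL idea on a uniform grid (the paper's adaptive knot placement is replaced by fixed knots here, and the code stays in floating point for readability): approximate a smooth activation by linear segments so that inference needs only comparisons, multiplies and adds.

```python
import numpy as np

knots = np.linspace(-6.0, 6.0, 17)              # 16 uniform linear segments
values = 1.0 / (1.0 + np.exp(-knots))           # exact sigmoid at the knots

def pwl_sigmoid(x):
    # linear interpolation between knots; clamped to the end values outside
    return np.interp(x, knots, values)

x = np.linspace(-8.0, 8.0, 1000)
exact = 1.0 / (1.0 + np.exp(-x))
print(np.max(np.abs(pwl_sigmoid(x) - exact)))   # worst-case approximation error
```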
Recent developments in quantum computing and machine learning have propelled the interdisciplinary study of quantum machine learning. Sequential modeling is an important task with high scientific and commercial value. Existing VQC- or QNN-based methods require significant computational resources to perform the gradient-based optimization of a large number of quantum circuit parameters. The major drawback is that such quantum gradient calculation requires a large amount of circuit evaluation, posing challenges for current near-term quantum hardware and simulation software. In this work, we approach sequential modeling by applying a reservoir computing (RC) framework to quantum recurrent neural networks (QRNN-RC) based on the classical RNN, LSTM and GRU. The main idea of this RC approach is that the QRNN with randomly initialized weights is treated as a dynamical system and only the final classical linear layer is trained. Our numerical simulations show that the QRNN-RC can reach results comparable to fully trained QRNN models for several function approximation and time series prediction tasks. Since the QRNN training complexity is significantly reduced, the proposed model trains notably faster. We also compare with corresponding classical RNN-based RC implementations and show that the quantum version learns faster by requiring fewer training epochs in most cases. Our results demonstrate a new possibility of utilizing quantum neural networks for sequential modeling with greater quantum hardware efficiency, an important design consideration for noisy intermediate-scale quantum (NISQ) computers.
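The classical analogue of this recipe, sketched in NumPy under our own arbitrary sizes and scales: a recurrent network with frozen random weights serves as the reservoir, and only a linear readout is fit, here by ridge regression.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res = 1, 200
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))            # fixed input weights
W = rng.standard_normal((n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))         # spectral radius < 1

def reservoir_states(u):
    h, states = np.zeros(n_res), []
    for u_t in u:
        h = np.tanh(W_in @ np.atleast_1d(u_t) + W @ h)  # frozen random dynamics
        states.append(h)
    return np.array(states)

# one-step-ahead prediction of a sine wave; only W_out is learned
t = np.linspace(0, 20 * np.pi, 2000)
u, y = np.sin(t[:-1]), np.sin(t[1:])
H = reservoir_states(u)
ridge = 1e-6
W_out = np.linalg.solve(H.T @ H + ridge * np.eye(n_res), H.T @ y)
print(np.mean((H @ W_out - y) ** 2))                    # small training error
```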
A simple nonrecursive form of the tensor decomposition in d dimensions is presented. It does not inherently suffer from the curse of dimensionality, and it has asymptotically the same number of parameters as the canonical decomposition, but it is stable and its computation is based on low-rank approximation of auxiliary unfolding matrices. The new form gives a clear and convenient way to implement all basic operations efficiently. A fast rounding procedure is presented, as well as basic linear algebra operations. Examples showing the benefits of the decomposition are given, and the efficiency is demonstrated by computing the smallest eigenvalue of a 19-dimensional operator.
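A compact sketch of the TT-SVD construction the abstract refers to: successive truncated SVDs of unfolding matrices produce the cores (our simplified rendition, with a crude rank-selection rule).

```python
import numpy as np

def tt_svd(T, eps=1e-10):
    """Decompose T into TT cores G_k of shape (r_{k-1}, n_k, r_k) via
    successive truncated SVDs of the unfolding matrices."""
    shape = T.shape
    cores, r_prev = [], 1
    M = T.reshape(r_prev * shape[0], -1)
    for k in range(len(shape) - 1):
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        r = max(1, int(np.sum(s > eps * s[0])))   # crude rank selection
        cores.append(U[:, :r].reshape(r_prev, shape[k], r))
        M = (s[:r, None] * Vt[:r]).reshape(r * shape[k + 1], -1)
        r_prev = r
    cores.append(M.reshape(r_prev, shape[-1], 1))
    return cores

# sanity check on a random tensor with known TT ranks (3, 3)
rng = np.random.default_rng(0)
G = [rng.standard_normal(s) for s in [(1, 5, 3), (3, 6, 3), (3, 7, 1)]]
T = np.einsum("aib,bjc,ckd->ijk", *G)      # boundary ranks of size 1 are summed out
cores = tt_svd(T)
T_hat = np.einsum("aib,bjc,ckd->ijk", *cores)
print([c.shape for c in cores], np.linalg.norm(T - T_hat))
```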