Recurrent neural networks with gating mechanisms such as LSTM or GRU are powerful tools for modeling sequential data. In these mechanisms, the forget gate, which controls the flow of information in the hidden state, has recently been reinterpreted as a representation of the state's timescale, i.e., how long the RNN retains information about its inputs. Building on this interpretation, several parameter-initialization methods have been proposed that exploit prior knowledge of the temporal dependencies in the data to improve learnability. However, the interpretation relies on various unrealistic assumptions, such as there being no input after a certain time point. In this work, we reconsider the interpretation of the forget gate in a more realistic setting. We first generalize the existing theory of gated RNNs so that we can handle the case where inputs are given continuously. We then argue that the interpretation of the forget gate as a representation of timescale is valid when the gradient of the loss with respect to the state decreases over time. We empirically show that existing RNNs satisfy this condition in the initial phase of training on several tasks, which agrees well with previous initialization methods. Building on this finding, we propose a method for constructing new RNNs that can represent longer timescales than conventional models, which should improve learnability on long sequential data. We verify the effectiveness of our method through experiments on real-world datasets.
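As a rough illustration of the timescale reading criticized and refined above (not the construction proposed in the abstract), the sketch below shows how a constant forget-gate value maps to an effective retention time of roughly 1/(1-f) steps under the no-input assumption, and how a chrono-style bias initialization in the spirit of Tallec and Ollivier (2018) turns a target timescale into a forget-gate bias; function and variable names are illustrative.

```python
import numpy as np

def effective_timescale(f):
    """Steps until a memory cell decays to 1/e of its value when the forget
    gate is held constant at f and no new input is written."""
    return -1.0 / np.log(f)          # ~ 1 / (1 - f) for f close to 1

def chrono_forget_bias(hidden_size, t_max, rng=np.random.default_rng(0)):
    """Chrono-style initialization: draw per-unit timescales T ~ U(1, t_max)
    and set the forget-gate bias to log(T - 1), so sigmoid(bias) ~ 1 - 1/T."""
    T = rng.uniform(1.0, t_max, size=hidden_size)
    return np.log(np.maximum(T - 1.0, 1e-6))

print(effective_timescale(0.99))         # ~ 99.5 steps of memory
print(chrono_forget_bias(4, t_max=100))  # biases targeting timescales up to 100
```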
We introduce a novel gated recurrent unit (GRU) with a weighted time-delay feedback mechanism in order to improve the modeling of long-term dependencies in sequential data. This model is a discretized version of a continuous-time formulation of a recurrent unit, where the dynamics are governed by delay differential equations (DDEs). By considering a suitable time-discretization scheme, we propose $\tau$-GRU, a discrete-time gated recurrent unit with delay. We prove the existence and uniqueness of solutions for the continuous-time model, and we demonstrate that the proposed feedback mechanism can help improve the modeling of long-term dependencies. Our empirical results show that $\tau$-GRU can converge faster and generalize better than state-of-the-art recurrent units and gated recurrent architectures on a range of tasks, including time-series classification, human activity recognition, and speech recognition.
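One simple way to realize a weighted time-delay feedback in a discrete-time gated unit is sketched below: the recurrent input of a GRU-style update is a learned mix of the previous state and a state from tau steps back. This is an illustrative construction under assumed equations, not the $\tau$-GRU derived from the DDE formulation in the paper; all names and the mixing weight are assumptions.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def delayed_gru_step(h_hist, x, p, tau=5, alpha=0.3):
    """GRU-style update whose recurrent input is a weighted delay feedback:
    h_fb = (1 - alpha) * h_{t-1} + alpha * h_{t-tau}."""
    h_prev, h_delay = h_hist[-1], h_hist[-tau]
    h_fb = (1.0 - alpha) * h_prev + alpha * h_delay
    z = sigmoid(p["W_z"] @ x + p["U_z"] @ h_fb)
    r = sigmoid(p["W_r"] @ x + p["U_r"] @ h_fb)
    h_tilde = np.tanh(p["W_h"] @ x + p["U_h"] @ (r * h_fb))
    return (1.0 - z) * h_prev + z * h_tilde

rng = np.random.default_rng(0)
d_h, d_x, tau = 8, 2, 5
p = {k: rng.standard_normal((d_h, d_x if k.startswith("W") else d_h)) * 0.2
     for k in ["W_z", "U_z", "W_r", "U_r", "W_h", "U_h"]}
h_hist = [np.zeros(d_h)] * tau               # buffer holding the last tau states
for x in rng.standard_normal((30, d_x)):
    h_hist.append(delayed_gru_step(h_hist, x, p, tau=tau))
```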
There are two widely known issues with properly training Recurrent Neural Networks: the vanishing and the exploding gradient problems detailed in Bengio et al. (1994). In this paper we attempt to improve the understanding of the underlying issues by exploring these problems from an analytical, a geometric and a dynamical systems perspective. Our analysis is used to justify a simple yet effective solution. We propose a gradient norm clipping strategy to deal with exploding gradients and a soft constraint for the vanishing gradients problem. We empirically validate our hypothesis and proposed solutions in the experimental section.
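The clipping rule proposed here is simple to state: rescale the whole gradient whenever its global norm exceeds a threshold. A minimal sketch follows; the threshold value is an arbitrary illustration, not taken from the paper. In PyTorch the same rule is available as torch.nn.utils.clip_grad_norm_.

```python
import numpy as np

def clip_gradients_by_norm(grads, threshold):
    """Global gradient-norm clipping: g <- g * threshold / ||g|| if ||g|| > threshold."""
    total_norm = np.sqrt(sum(np.sum(g ** 2) for g in grads))
    if total_norm > threshold:
        scale = threshold / total_norm
        grads = [g * scale for g in grads]
    return grads, total_norm

grads = [np.random.randn(10, 10), np.random.randn(10)]   # e.g. per-parameter gradients
clipped, norm = clip_gradients_by_norm(grads, threshold=1.0)
```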
We provide a general framework for studying recurrent neural networks (RNNs) trained by injecting noise into the hidden states. Specifically, we consider RNNs that can be viewed as discretizations of stochastic differential equations driven by input data. This framework allows us to study the implicit regularization effect of general noise-injection schemes by deriving an approximate explicit regularizer in the small-noise regime. We find that, under reasonable assumptions, this implicit regularization promotes flatter minima; it biases toward models with more stable dynamics; and, in classification tasks, it favors models with larger classification margins. Sufficient conditions for global stability are obtained, highlighting the phenomenon of stochastic stabilization, in which noise injection can improve stability during training. Our theory is supported by empirical results demonstrating that such RNNs have improved robustness to various input perturbations.
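A minimal sketch of the kind of noise injection being analyzed, written as an Euler-Maruyama-style update of a vanilla tanh RNN. The concrete cell, step size, and noise scale below are illustrative assumptions, not the paper's specification.

```python
import numpy as np

def noisy_rnn_step(h, x, W_h, W_x, b, dt=0.1, sigma=0.05, rng=None):
    """One discretized step of  dh = (-h + tanh(W_h h + W_x x + b)) dt + sigma dB,
    i.e. a vanilla RNN with Gaussian noise injected into the hidden state."""
    rng = rng or np.random.default_rng()
    drift = -h + np.tanh(W_h @ h + W_x @ x + b)
    noise = sigma * np.sqrt(dt) * rng.standard_normal(h.shape)
    return h + dt * drift + noise

d_h, d_x = 8, 3
rng = np.random.default_rng(0)
W_h = rng.standard_normal((d_h, d_h)) / np.sqrt(d_h)
W_x, b = rng.standard_normal((d_h, d_x)), np.zeros(d_h)
h = np.zeros(d_h)
for x in rng.standard_normal((20, d_x)):     # drive the noisy RNN with random inputs
    h = noisy_rnn_step(h, x, W_h, W_x, b, rng=rng)
```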
In this paper we compare different types of recurrent units in recurrent neural networks (RNNs). In particular, we focus on more sophisticated units that implement a gating mechanism, such as the long short-term memory (LSTM) unit and the recently proposed gated recurrent unit (GRU). We evaluate these recurrent units on the tasks of polyphonic music modeling and speech signal modeling. Our experiments revealed that these advanced recurrent units are indeed better than more traditional recurrent units such as tanh units. We also found the GRU to be comparable to the LSTM.
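For reference, the gated recurrent unit compared here can be written in a few lines. This is the standard GRU update of Cho et al. (2014) in plain NumPy, with biases omitted and weights drawn at random for illustration.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def gru_step(h, x, p):
    """Standard GRU update:
        z = sigmoid(W_z x + U_z h),  r = sigmoid(W_r x + U_r h)
        h_tilde = tanh(W_h x + U_h (r * h))
        h' = (1 - z) * h + z * h_tilde
    """
    z = sigmoid(p["W_z"] @ x + p["U_z"] @ h)
    r = sigmoid(p["W_r"] @ x + p["U_r"] @ h)
    h_tilde = np.tanh(p["W_h"] @ x + p["U_h"] @ (r * h))
    return (1.0 - z) * h + z * h_tilde

rng = np.random.default_rng(0)
d_h, d_x = 4, 2
p = {k: rng.standard_normal((d_h, d_x if k.startswith("W") else d_h)) * 0.1
     for k in ["W_z", "U_z", "W_r", "U_r", "W_h", "U_h"]}
h = np.zeros(d_h)
for x in rng.standard_normal((5, d_x)):
    h = gru_step(h, x, p)
```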
A common approach to prediction and planning in partially observable domains is to use recurrent neural networks (RNNs), which ideally develop and maintain a latent memory of hidden, task-relevant factors. We hypothesize that many of these hidden factors in the physical world are constant over time and change only sparsely. To study this hypothesis, we propose Gated $L_0$ Regularized Dynamics (GateL0RD), a novel recurrent architecture that incorporates the inductive bias of maintaining stable, sparsely changing latent states. The bias is implemented via a novel internal gating function and an $L_0$-norm penalty on latent state changes. We demonstrate that GateL0RD can compete with or outperform state-of-the-art RNNs on a variety of partially observable prediction and control tasks. GateL0RD tends to encode the underlying generative factors of the environment, ignores spurious temporal dependencies, and generalizes better, improving sample efficiency and overall performance in model-based planning and reinforcement learning tasks. Moreover, we show that the developed latent states can be easily interpreted, a step towards better explainability in RNNs.
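A much-simplified sketch of the inductive bias described above: a gate decides, per dimension, whether the latent state is updated at all, and training adds a sparsity penalty on the gate openings as a smooth stand-in for the $L_0$ norm of the state change. GateL0RD's actual gating function and $L_0$ relaxation differ; everything here is an illustrative assumption.

```python
import torch
import torch.nn as nn

class SparseLatentCell(nn.Module):
    """Latent update h_t = g_t * h_hat_t + (1 - g_t) * h_{t-1}: the gate g_t
    controls which dimensions change; penalizing sum(g_t) encourages sparsely
    changing latents (a smooth surrogate for ||h_t - h_{t-1}||_0)."""
    def __init__(self, x_dim, h_dim):
        super().__init__()
        self.candidate = nn.Linear(x_dim + h_dim, h_dim)
        self.gate = nn.Linear(x_dim + h_dim, h_dim)

    def forward(self, x, h_prev):
        z = torch.cat([x, h_prev], dim=-1)
        h_hat = torch.tanh(self.candidate(z))
        g = torch.sigmoid(self.gate(z))
        h = g * h_hat + (1.0 - g) * h_prev
        return h, g.sum(dim=-1).mean()        # second output: sparsity penalty

cell = SparseLatentCell(x_dim=3, h_dim=8)
h, penalty_total = torch.zeros(1, 8), 0.0
for x in torch.randn(10, 1, 3):
    h, penalty = cell(x, h)
    penalty_total = penalty_total + penalty
# training loss would be task_loss + lambda_sparsity * penalty_total
```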
Recurrent neural networks (RNNs) are powerful dynamical models, widely used in machine learning (ML) and neuroscience. Prior theoretical work has focused on RNNs with additive interactions. However, gating - i.e., multiplicative - interactions are ubiquitous in real neurons and are also the central feature of the best-performing RNNs in ML. Here we show that gating offers flexible control over two salient features of the collective dynamics: i) timescales and ii) dimensionality. The gate controlling timescales leads to a novel stable regime in which the network functions as a flexible integrator. Unlike previous approaches, gating permits this important function without parameter fine-tuning or special symmetries. Gates also provide a flexible, context-dependent mechanism to reset the memory trace, thus complementing the memory function. The gate modulating dimensionality can induce a novel, discontinuous chaotic transition, in which inputs push a stable system toward strong chaotic activity, in contrast to the typically stabilizing effect of inputs. At this transition, unlike in additive RNNs, the proliferation of critical points (topological complexity) is decoupled from the appearance of chaotic dynamics (dynamical complexity). The rich dynamics are summarized in phase diagrams, providing ML practitioners with a map for principled parameter initialization choices.
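The first effect listed (a gate setting the effective timescale) can already be seen in a single gated leaky-integrator unit: an update gate z multiplies the leak, so the effective time constant becomes tau/z and the unit approaches a pure integrator as z goes to 0. The sketch below shows only that one-dimensional picture, not the paper's full mean-field model.

```python
import numpy as np

def gated_leaky_unit(z, inputs, tau=1.0, dt=0.1):
    """Simulate dh/dt = (z / tau) * (-h + u(t)): the update gate z rescales the
    leak, so the effective time constant is tau / z; as z -> 0 the unit stops
    forgetting and behaves like an integrator."""
    h, trace = 0.0, []
    for u in inputs:
        h += dt * (z / tau) * (-h + u)
        trace.append(h)
    return np.array(trace)

pulse = np.r_[np.ones(20), np.zeros(180)]          # brief input, then silence
fast = gated_leaky_unit(z=1.0, inputs=pulse)       # forgets the pulse quickly
slow = gated_leaky_unit(z=0.05, inputs=pulse)      # ~20x slower forgetting
print(fast[-1], slow[-1])                          # the slow unit still remembers the pulse
```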
Recurrent neural networks (RNNs) are widely used machine learning tools for modeling sequential and time-series data. They are notoriously hard to train because their loss gradients tend to saturate or diverge during training, known as the exploding and vanishing gradient problem. Previous solutions to this problem either build on rather complicated, purpose-built architectures with gated memory buffers, or - more recently - impose constraints that ensure convergence to a fixed point or restrict the recurrence matrix. Such constraints, however, convey severe limitations on the expressivity of the RNN: intrinsic dynamics such as multistability or chaos are ruled out. This is inherently at odds with the chaotic nature of many, if not most, time series encountered in nature and society, especially in scientific applications where one aims to reconstruct the underlying dynamical system. Here we offer a comprehensive theoretical treatment of this problem by relating the loss gradients during RNN training to the Lyapunov spectrum of RNN-generated orbits. We mathematically prove that RNNs producing stable equilibrium or cyclic behavior have bounded gradients, whereas the gradients of RNNs with chaotic dynamics always diverge. Based on these analyses and insights, we suggest how, informed by the system's Lyapunov spectrum, the training process on chaotic data can be optimized, regardless of the RNN architecture used.
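The central link claimed here - loss gradients grow or shrink at the rate set by the largest Lyapunov exponent of the generated orbit - can be illustrated numerically. The sketch below estimates the largest Lyapunov exponent of a simple tanh RNN map with a standard Benettin-style tangent-vector iteration; it illustrates the quantity involved, not the paper's proofs or training procedure, and the network parameters are arbitrary.

```python
import numpy as np

def largest_lyapunov_tanh_rnn(W, h0, n_steps=2000, n_discard=200, seed=0):
    """Estimate the largest Lyapunov exponent of h_{t+1} = tanh(W h_t) by
    propagating a tangent vector through the Jacobian and renormalizing.
    BPTT gradients contain products of exactly these Jacobians, so a positive
    exponent implies exponentially exploding gradients on long sequences."""
    rng = np.random.default_rng(seed)
    h = h0.copy()
    v = rng.standard_normal(h.shape)
    v /= np.linalg.norm(v)
    log_growth = 0.0
    for t in range(n_steps):
        h = np.tanh(W @ h)
        J = (1.0 - h ** 2)[:, None] * W        # Jacobian of the map at this step
        v = J @ v
        norm = np.linalg.norm(v)
        v /= norm
        if t >= n_discard:                     # skip the transient
            log_growth += np.log(norm)
    return log_growth / (n_steps - n_discard)

rng = np.random.default_rng(1)
N, g = 100, 2.0                                # g > 1: typically a chaotic regime
W = g * rng.standard_normal((N, N)) / np.sqrt(N)
print(largest_lyapunov_tanh_rnn(W, h0=rng.standard_normal(N)))
```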
Target Propagation (TP) algorithms compute targets instead of gradients for the layers of a neural network and propagate them backwards in a way that is similar to, yet different from, gradient backpropagation (BP). The idea was first presented as a perturbative alternative to backpropagation that may achieve greater accuracy in the gradient evaluation when training multi-layer neural networks (LeCun et al., 1989). However, TP has remained more of a template algorithm with many variants than a well-identified algorithm. Revisiting the insights of LeCun et al. (1989) and, more recently, of Lee et al. (2015), we present a simple version of target propagation based on regularized inversion of network layers, which is easily implementable in a differentiable programming framework. We compare its computational complexity to that of BP and delineate the regimes in which TP can be attractive compared to BP. We show how our TP can be used to train recurrent neural networks with long sequences on various sequence modeling problems. The experimental results underline the importance of regularization in TP in practice.
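A minimal sketch of the regularized-inversion idea: given a target for a layer's output, the target for the previous layer is obtained by approximately inverting the layer under a proximity penalty. The inner optimizer, penalty weight, and toy layer below are illustrative assumptions; the paper's formulation is more specific.

```python
import torch

def regularized_inverse_target(layer, h_prev, target_out, lam=1.0, steps=50, lr=0.1):
    """Find v ~ argmin ||layer(v) - target_out||^2 + lam * ||v - h_prev||^2,
    i.e. a regularized inversion of the layer; v becomes the target for the
    previous layer."""
    v = h_prev.detach().clone().requires_grad_(True)
    opt = torch.optim.SGD([v], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((layer(v) - target_out) ** 2).sum() + lam * ((v - h_prev) ** 2).sum()
        loss.backward()
        opt.step()
    return v.detach()

layer = torch.nn.Sequential(torch.nn.Linear(8, 8), torch.nn.Tanh())
for p in layer.parameters():                 # only v is optimized in this sketch
    p.requires_grad_(False)
h_prev = torch.randn(1, 8)
target_out = torch.randn(1, 8) * 0.5
target_prev = regularized_inverse_target(layer, h_prev, target_out)
```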
Recent developments in quantum computing and machine learning have propelled the interdisciplinary study of quantum machine learning. Sequential modeling is an important task with high scientific and commercial value. Existing VQC or QNN-based methods require significant computational resources to perform the gradient-based optimization of a large number of quantum circuit parameters. The major drawback is that such quantum gradient calculation requires a large amount of circuit evaluation, posing challenges in current near-term quantum hardware and simulation software. In this work, we approach sequential modeling by applying a reservoir computing (RC) framework to quantum recurrent neural networks (QRNN-RC) that are based on classical RNN, LSTM and GRU. The main idea of this RC approach is that the QRNN with randomly initialized weights is treated as a dynamical system and only the final classical linear layer is trained. Our numerical simulations show that the QRNN-RC can reach results comparable to fully trained QRNN models for several function approximation and time series prediction tasks. Since the QRNN training complexity is significantly reduced, the proposed model trains notably faster. In this work we also compare to corresponding classical RNN-based RC implementations and show that the quantum version learns faster by requiring fewer training epochs in most cases. Our results demonstrate a new possibility to utilize quantum neural networks for sequential modeling with greater quantum hardware efficiency, an important design consideration for noisy intermediate-scale quantum (NISQ) computers.
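The reservoir-computing recipe used here - freeze a randomly initialized recurrent network and fit only a linear readout - is easy to state in classical form. The sketch below is the classical analog (a frozen torch GRU plus a closed-form ridge readout on a toy task); the quantum circuits themselves are not reproduced, and all sizes, the task, and the ridge strength are illustrative assumptions.

```python
import torch
import numpy as np

torch.manual_seed(0)
reservoir = torch.nn.GRU(input_size=1, hidden_size=64, batch_first=True)
for p in reservoir.parameters():          # reservoir weights stay random and frozen
    p.requires_grad_(False)

# toy task: one-step-ahead prediction of a sine wave
t = torch.linspace(0, 20 * np.pi, 2000)
x = torch.sin(t).reshape(1, -1, 1)
inputs, targets = x[:, :-1, :], x[:, 1:, :]

with torch.no_grad():
    states, _ = reservoir(inputs)         # (1, T-1, 64) reservoir trajectory

# ridge-regression readout: W = (H^T H + alpha I)^(-1) H^T Y
H = states.squeeze(0).numpy()
Y = targets.squeeze(0).numpy()
alpha = 1e-3
W = np.linalg.solve(H.T @ H + alpha * np.eye(H.shape[1]), H.T @ Y)
print("train MSE:", float(np.mean((H @ W - Y) ** 2)))
```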
Over the past few years, neural networks inspired by differential equations have proliferated. Neural ordinary differential equations (NODEs) and neural controlled differential equations (NCDEs) are two representative examples. In theory, NCDEs offer better representation-learning capability for time-series data than NODEs. In particular, NCDEs are known to be suitable for processing irregular time-series data. However, while NODEs have been successfully extended with attention, how to integrate attention into NCDEs has not yet been studied. To this end, we present Attentive Neural Controlled Differential Equations (ANCDEs) for time-series classification and forecasting, where dual NCDEs are used: one for generating attention values and the other for evolving hidden vectors for a downstream machine learning task. We conduct experiments with three real-world time-series datasets and 10 baselines. After dropping some values, we also conduct irregular time-series experiments. Our method consistently shows the best accuracy in all cases. Our visualizations also show that the proposed attention mechanism works as intended by focusing on crucial information.
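For readers unfamiliar with NCDEs, the controlled update that both of the paper's NCDEs rely on can be illustrated with a crude explicit-Euler discretization of dz = f_theta(z) dX: the hidden state is driven by increments of the input path rather than by an autonomous vector field. The attention mechanism and the dual-NCDE coupling are not reproduced; module names, sizes, and the Euler scheme are illustrative assumptions.

```python
import torch
import torch.nn as nn

class CDEFunc(nn.Module):
    """f_theta: maps the hidden state z in R^h to a matrix in R^{h x d},
    which is contracted with the input-path increment dX in R^d."""
    def __init__(self, h_dim, x_dim):
        super().__init__()
        self.h_dim, self.x_dim = h_dim, x_dim
        self.net = nn.Sequential(nn.Linear(h_dim, 64), nn.Tanh(),
                                 nn.Linear(64, h_dim * x_dim))

    def forward(self, z):
        return self.net(z).view(-1, self.h_dim, self.x_dim)

def ncde_euler(z0, X, func):
    """z_{k+1} = z_k + f_theta(z_k) (X_{k+1} - X_k)  (explicit Euler)."""
    z = z0
    for k in range(X.shape[1] - 1):
        dX = X[:, k + 1] - X[:, k]                       # (batch, d) path increment
        z = z + torch.bmm(func(z), dX.unsqueeze(-1)).squeeze(-1)
    return z

batch, T, x_dim, h_dim = 4, 50, 3, 16
X = torch.randn(batch, T, x_dim).cumsum(dim=1) * 0.1     # a toy input path
z_final = ncde_euler(torch.zeros(batch, h_dim), X, CDEFunc(h_dim, x_dim))
```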
Common to all different kinds of recurrent neural networks (RNNs) is the intention to model relations between data points through time. When there is no immediate relationship between subsequent data points (e.g., when the data points are generated at random), we show that RNNs are still able to remember a few data points back into the sequence by memorizing them by heart using standard backpropagation. However, we also show that for classical RNNs, LSTM and GRU networks the distance of data points between recurrent calls that can be reproduced this way is highly limited (compared to even a loose connection between data points) and subject to various constraints imposed by the type and size of the RNN in question. This implies the existence of a hard limit (way below the information-theoretic one) for the distance between related data points within which RNNs are still able to recognize said relation.
These are lecture notes for an introductory machine learning course developed specifically for STEM students. Our goal is to provide interested readers with the basics needed to use machine learning in their own projects and to familiarize them with the terminology as a foundation for further reading of the relevant literature. In these lecture notes, we discuss supervised, unsupervised, and reinforcement learning. The notes start with an exposition of machine learning methods without neural networks, such as principal component analysis, t-SNE, clustering, as well as linear regression and linear classifiers. We continue with an introduction to both basic and advanced neural network structures, such as dense feed-forward and convolutional neural networks, recurrent neural networks, restricted Boltzmann machines, (variational) autoencoders, and generative adversarial networks. Questions of interpretability of latent-space representations are discussed, using the examples of dreaming and adversarial attacks. The final section is dedicated to reinforcement learning, where we introduce the basic notions of value functions and policy learning.
In recent years, using orthogonal matrices has been shown to be a promising approach to improving recurrent neural networks (RNNs) with respect to training, stability, and convergence, particularly by controlling gradients. While gated recurrent unit (GRU) and long short-term memory (LSTM) architectures address the vanishing gradient problem through various gates and memory cells, they remain prone to the exploding gradient problem. In this work, we analyze the gradients in GRUs and propose the use of orthogonal matrices to prevent exploding gradients and enhance long-term memory. We study where orthogonal matrices should be used and propose a Neumann-series-based scaled Cayley transform for training orthogonal matrices in the GRU, which we call the Neumann-Cayley Orthogonal GRU, or simply NC-GRU. We present detailed experiments of our model on several synthetic and real-world tasks, which show that NC-GRU significantly outperforms GRU as well as several other RNNs.
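The orthogonality device named here is the Cayley transform of a skew-symmetric matrix. The sketch below shows the exact transform and a Neumann-series approximation of the inverse it requires; the scaling and the exact placement inside the GRU used by NC-GRU are not reproduced, and the series length is an illustrative choice.

```python
import numpy as np

def cayley_orthogonal(A):
    """Exact Cayley transform: for skew-symmetric A (A^T = -A),
    W = (I + A)^{-1} (I - A) is orthogonal."""
    I = np.eye(A.shape[0])
    return np.linalg.solve(I + A, I - A)

def cayley_neumann(A, terms=6):
    """Approximate (I + A)^{-1} by the truncated Neumann series sum_k (-A)^k
    (valid when ||A|| < 1), avoiding an explicit matrix inverse."""
    I = np.eye(A.shape[0])
    inv_approx, power = np.zeros_like(A), I.copy()
    for _ in range(terms):
        inv_approx += power
        power = power @ (-A)
    return inv_approx @ (I - A)

rng = np.random.default_rng(0)
M = rng.standard_normal((6, 6)) * 0.1
A = M - M.T                                   # skew-symmetric parameter matrix
W = cayley_orthogonal(A)
print(np.allclose(W.T @ W, np.eye(6)))        # True: W is orthogonal
print(np.linalg.norm(cayley_neumann(A) - W))  # small truncation error
```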
Predicting fund performance is beneficial to both investors and fund managers, yet it is a challenging task. In this paper, we test whether deep learning models can predict fund performance more accurately than traditional statistical techniques. Fund performance is typically evaluated by the Sharpe ratio, which represents risk-adjusted performance to ensure meaningful comparability across funds. We computed annualized Sharpe ratios from monthly return time-series data for more than 600 open-end mutual funds investing in listed large-cap U.S. equities. We find that long short-term memory (LSTM) and gated recurrent unit (GRU) deep learning methods, trained with modern Bayesian optimization, predict fund Sharpe ratios more accurately than traditional statistical approaches. An ensemble method that combines the predictions of the LSTMs and GRUs achieves the best performance of all models. There is evidence that deep learning and ensembling offer promising solutions to the challenge of fund performance prediction.
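For reference, the target quantity being predicted can be computed as follows: an annualized Sharpe ratio from monthly returns. The zero risk-free rate and the sqrt(12) annualization below are common conventions assumed for illustration, not necessarily the paper's exact choices.

```python
import numpy as np

def annualized_sharpe(monthly_returns, monthly_risk_free=0.0):
    """Annualized Sharpe ratio from monthly returns:
    mean excess return / standard deviation, scaled by sqrt(12)."""
    excess = np.asarray(monthly_returns) - monthly_risk_free
    return np.sqrt(12.0) * excess.mean() / excess.std(ddof=1)

rng = np.random.default_rng(0)
returns = rng.normal(0.006, 0.04, size=12)    # a hypothetical year of monthly returns
print(annualized_sharpe(returns))
```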
The scalability of recurrent neural networks (RNNs) is hindered by the sequential dependence of each time step's computation on the previous time step's output. Therefore, one way to speed up and scale RNNs is to reduce the computation required at each time step, independent of model size and task. In this paper, we propose a model that reformulates the gated recurrent unit (GRU) as an event-based, activity-sparse model, which we call the Event-based GRU (EGRU), where units compute updates only upon receiving input events (event-based) from other units. When combined with having only a small fraction of the units active at any given time (activity-sparse), this model has the potential to be vastly more compute-efficient than current RNNs. Notably, the activity sparsity in our model also translates into sparse parameter updates during gradient descent, extending this compute efficiency to the training phase. We show that the EGRU demonstrates competitive performance compared to state-of-the-art recurrent network models on real-world tasks, including language modeling, while naturally maintaining high activity sparsity during both inference and training. This sets the stage for the next generation of recurrent networks that are scalable and better suited to novel neuromorphic hardware.
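A heavily simplified sketch of the event-based idea: each unit keeps an internal state but only communicates (and triggers downstream computation) when that state crosses a threshold, so most units are silent at any time step. The actual EGRU uses GRU gates, a surrogate gradient, and a specific reset; everything below, including the threshold and reset rule, is an illustrative toy.

```python
import numpy as np

def event_step(h, x, W_x, W_h, threshold=1.0):
    """Units integrate input; a unit emits an 'event' (its value) only when its
    state crosses the threshold, and is then partially reset. Recurrent
    communication uses the sparse event vector, not the dense state."""
    events = np.where(h > threshold, h, 0.0)          # sparse outgoing activity
    h = h - np.where(h > threshold, threshold, 0.0)   # soft reset of fired units
    h = h + np.tanh(W_x @ x + W_h @ events)           # only events propagate
    return h, events

rng = np.random.default_rng(0)
d_h, d_x = 16, 4
W_x = rng.standard_normal((d_h, d_x)) * 0.5
W_h = rng.standard_normal((d_h, d_h)) * 0.1
h = np.zeros(d_h)
for x in rng.standard_normal((20, d_x)):
    h, events = event_step(h, x, W_x, W_h)
print("fraction of active units:", float((events > 0).mean()))
```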
Recurrent neural networks with several time lags, referred to as RNN(p), are the natural generalization of autoregressive ARX(p) models. They are a powerful forecasting tool when different timescales influence a given phenomenon, as happens in the energy sector, where hourly, daily, weekly, and yearly interactions coexist. The cost-effective BPTT is the industry standard among learning algorithms for RNNs. We prove that, when training RNN(p) models, other learning algorithms are much more efficient in terms of both time and space complexity. We also introduce a new learning algorithm, Tree Recombined Recurrent Learning, which exploits a tree representation of the unrolled network and appears to be even more effective. We present an application of RNN(p) models to power consumption forecasting on the hourly scale: experimental results demonstrate the efficiency of the proposed algorithm and the excellent predictive accuracy achieved by the selected model for both point and probabilistic forecasting of energy consumption.
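The RNN(p) recurrence generalizes an ARX(p) model by feeding the last p hidden states back through a nonlinearity. A minimal sketch of that forward pass (single layer, tanh, illustrative names and sizes) follows; the paper's training algorithms are not reproduced.

```python
import numpy as np

def rnn_p_forward(X, W_x, U_list, b):
    """RNN(p) recurrence: h_t = tanh(W_x x_t + sum_{k=1..p} U_k h_{t-k} + b),
    the natural nonlinear generalization of an ARX(p) model."""
    p, d_h = len(U_list), b.shape[0]
    history = [np.zeros(d_h) for _ in range(p)]       # h_{t-1}, ..., h_{t-p}
    outputs = []
    for x in X:
        h = np.tanh(W_x @ x + sum(U @ hk for U, hk in zip(U_list, history)) + b)
        history = [h] + history[:-1]                  # shift the lag window
        outputs.append(h)
    return np.stack(outputs)

rng = np.random.default_rng(0)
p, d_h, d_x, T = 3, 8, 2, 24                          # several lags for several timescales
W_x = rng.standard_normal((d_h, d_x)) * 0.3
U_list = [rng.standard_normal((d_h, d_h)) * 0.2 for _ in range(p)]
H = rnn_p_forward(rng.standard_normal((T, d_x)), W_x, U_list, np.zeros(d_h))
```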
Multivariate time series data in practical applications, such as health care, geoscience, and biology, are characterized by a variety of missing values. In time series prediction and other related tasks, it has been noted that missing values and their missing patterns are often correlated with the target labels, a.k.a., informative missingness. There is very limited work on exploiting the missing patterns for effective imputation and improving prediction performance. In this paper, we develop novel deep learning models, namely GRU-D, as one of the early attempts. GRU-D is based on Gated Recurrent Unit (GRU), a state-of-the-art recurrent neural network. It takes two representations of missing patterns, i.e., masking and time interval, and effectively incorporates them into a deep model architecture so that it not only captures the long-term temporal dependencies in time series, but also utilizes the missing patterns to achieve better prediction results. Experiments of time series classification tasks on real-world clinical datasets (MIMIC-III, PhysioNet) and synthetic datasets demonstrate that our models achieve state-of-the-art performance and provide useful insights for better understanding and utilization of missing values in time series analysis.
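The two representations named here (an observation mask and the time since the last observation) enter GRU-D through learned exponential decays applied to the last observed value and to the hidden state. The sketch below shows that decay-and-impute step in isolation, with illustrative shapes and random weights; it is not the full GRU-D cell.

```python
import numpy as np

def grud_decay_impute(x, mask, delta, x_last, x_mean, h, W_gx, b_gx, W_gh, b_gh):
    """GRU-D-style preprocessing for one time step:
      gamma   = exp(-max(0, W delta + b))    learned decay from time gaps
      x_hat   = m*x + (1-m)*(gamma_x*x_last + (1-gamma_x)*x_mean)
      h_decay = gamma_h * h
    The decayed x_hat and h_decay are then fed to an ordinary GRU update."""
    gamma_x = np.exp(-np.maximum(0.0, W_gx * delta + b_gx))      # per input feature
    gamma_h = np.exp(-np.maximum(0.0, W_gh @ delta + b_gh))      # per hidden unit
    x_hat = mask * x + (1 - mask) * (gamma_x * x_last + (1 - gamma_x) * x_mean)
    return x_hat, gamma_h * h

d_x, d_h = 3, 8
rng = np.random.default_rng(0)
x, mask = np.array([0.7, 0.0, 1.2]), np.array([1.0, 0.0, 1.0])   # feature 2 is missing
delta = np.array([1.0, 5.0, 1.0])                                 # hours since last observed
x_hat, h_dec = grud_decay_impute(
    x, mask, delta, x_last=np.array([0.6, 0.4, 1.0]), x_mean=np.zeros(d_x),
    h=np.zeros(d_h), W_gx=rng.standard_normal(d_x) * 0.1, b_gx=np.zeros(d_x),
    W_gh=rng.standard_normal((d_h, d_x)) * 0.1, b_gh=np.zeros(d_h))
```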
Recurrent neural networks are a widely used class of neural architectures. They have, however, two shortcomings. First, they are often treated as black-box models and as such it is difficult to understand what exactly they learn as well as how they arrive at a particular prediction. Second, they tend to work poorly on sequences requiring long-term memorization, despite having this capacity in principle. We aim to address both shortcomings with a class of recurrent networks that use a stochastic state transition mechanism between cell applications. This mechanism, which we term state-regularization, makes RNNs transition between a finite set of learnable states. We evaluate state-regularized RNNs on (1) regular languages for the purpose of automata extraction; (2) non-regular languages such as balanced parentheses and palindromes where external memory is required; and (3) real-world sequence learning tasks for sentiment analysis, visual object recognition and text categorisation. We show that state-regularization (a) simplifies the extraction of finite state automata that display an RNN's state transition dynamic; (b) forces RNNs to operate more like automata with external memory and less like finite state machines, which potentially leads to a more structural memory; (c) leads to better interpretability and explainability of RNNs by leveraging the probabilistic finite state transition mechanism over time steps.
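A minimal sketch of the stochastic state-transition layer described above, in its deterministic relaxation: the candidate hidden state is compared to k learnable centroids, a softmax gives transition probabilities, and the next state is the probability-weighted mixture of the centroids. The class name, temperature, and wiring around a GRUCell are illustrative assumptions rather than the paper's exact parametrization.

```python
import torch
import torch.nn as nn

class StateRegularizer(nn.Module):
    """Maps a candidate hidden state onto a mixture of k learnable states:
    p = softmax(<h_cand, centroid_i> / tau),  h_next = sum_i p_i * centroid_i."""
    def __init__(self, h_dim, n_states, tau=1.0):
        super().__init__()
        self.centroids = nn.Parameter(torch.randn(n_states, h_dim) * 0.1)
        self.tau = tau

    def forward(self, h_cand):
        scores = h_cand @ self.centroids.t() / self.tau     # (batch, k)
        p = torch.softmax(scores, dim=-1)
        return p @ self.centroids, p                        # new state, transition probs

cell = nn.GRUCell(input_size=3, hidden_size=16)
reg = StateRegularizer(h_dim=16, n_states=10)
h = torch.zeros(2, 16)
for x in torch.randn(5, 2, 3):
    h, probs = reg(cell(x, h))    # regularized transition after each cell application
```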
Deep Neural Networks (DNNs) training can be difficult due to vanishing and exploding gradients during weight optimization through backpropagation. To address this problem, we propose a general class of Hamiltonian DNNs (H-DNNs) that stem from the discretization of continuous-time Hamiltonian systems and include several existing DNN architectures based on ordinary differential equations. Our main result is that a broad set of H-DNNs ensures non-vanishing gradients by design for an arbitrary network depth. This is obtained by proving that, using a semi-implicit Euler discretization scheme, the backward sensitivity matrices involved in gradient computations are symplectic. We also provide an upper-bound to the magnitude of sensitivity matrices and show that exploding gradients can be controlled through regularization. Finally, we enable distributed implementations of backward and forward propagation algorithms in H-DNNs by characterizing appropriate sparsity constraints on the weight matrices. The good performance of H-DNNs is demonstrated on benchmark classification problems, including image classification with the MNIST dataset.
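To make the discretization concrete: below is a minimal sketch of a layer obtained by applying semi-implicit (symplectic) Euler to a separable Hamiltonian system, with the simple choice H(q, p) = sum(logcosh(K_q q + b_q)) + sum(logcosh(K_p p + b_p)) so that the gradients are K^T tanh(K . + b). The symplectic Euler flow has a symplectic Jacobian, which is the property behind the non-vanishing-gradient result; the paper's H-DNN parametrization is more general, and all names and sizes here are illustrative.

```python
import numpy as np

def symplectic_euler_layer(q, p, K_q, b_q, K_p, b_p, h=0.1):
    """One semi-implicit (symplectic) Euler step for the separable Hamiltonian
    H(q, p) = sum(logcosh(K_q q + b_q)) + sum(logcosh(K_p p + b_p)):
      p <- p - h * dH/dq(q)       with dH/dq = K_q^T tanh(K_q q + b_q)
      q <- q + h * dH/dp(p_new)   with dH/dp = K_p^T tanh(K_p p + b_p)."""
    p_new = p - h * K_q.T @ np.tanh(K_q @ q + b_q)
    q_new = q + h * K_p.T @ np.tanh(K_p @ p_new + b_p)
    return q_new, p_new

rng = np.random.default_rng(0)
d = 6
q, p = rng.standard_normal(d), rng.standard_normal(d)
for _ in range(10):                                   # a 10-layer Hamiltonian-style stack
    K_q = rng.standard_normal((d, d)) * 0.3
    K_p = rng.standard_normal((d, d)) * 0.3
    q, p = symplectic_euler_layer(q, p, K_q, np.zeros(d), K_p, np.zeros(d))
```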