We explore relations between the hyper-parameters of a recurrent neural network (RNN) and the complexity of string sequences it is able to memorize. We compare long short-term memory (LSTM) networks and gated recurrent units (GRUs). We find that increasing RNN depth does not necessarily result in better memorization capability when the training time is constrained. Our results also indicate that the learning rate and the number of units per layer are among the most important hyper-parameters to be tuned. Generally, GRUs outperform LSTM networks on low-complexity sequences, while LSTMs perform better on high-complexity sequences.
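The listing does not include the paper's training code; below is a minimal sketch of a memorization probe in the spirit it describes, with hypothetical hyperparameters: a single-layer GRU and LSTM are each trained to reproduce one fixed random string, and recall accuracy is compared.

```python
# Minimal sketch (hypothetical hyperparameters): can a small recurrent
# network memorize one fixed random string via standard backpropagation?
import torch
import torch.nn as nn

vocab, seq_len, hidden = 16, 32, 64

class Memorizer(nn.Module):
    def __init__(self, cell):
        super().__init__()
        self.emb = nn.Embedding(vocab, hidden)
        self.rnn = cell(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab)

    def forward(self, x):
        h, _ = self.rnn(self.emb(x))
        return self.out(h)

for cell in (nn.GRU, nn.LSTM):                 # compare both gated units
    torch.manual_seed(0)
    target = torch.randint(vocab, (seq_len,))  # one fixed random string
    inp, tgt = target[:-1].unsqueeze(0), target[1:].unsqueeze(0)
    model = Memorizer(cell)
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)  # lr matters most
    for _ in range(500):                       # constrained training budget
        loss = nn.functional.cross_entropy(model(inp).transpose(1, 2), tgt)
        opt.zero_grad(); loss.backward(); opt.step()
    acc = (model(inp).argmax(-1) == tgt).float().mean().item()
    print(cell.__name__, f"recall accuracy: {acc:.2f}")
```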
Predicting fund performance is beneficial to both investors and fund managers, and yet is a challenging task. In this paper, we test whether deep learning models can predict fund performance more accurately than traditional statistical techniques. Fund performance is typically evaluated by the Sharpe ratio, which represents risk-adjusted performance to ensure meaningful comparability across funds. We calculated annual Sharpe ratios based on the monthly return time-series data of more than 600 open-ended mutual funds investing in listed large-cap equities in the United States. We find that long short-term memory (LSTM) and gated recurrent unit (GRU) deep learning methods, both trained with modern Bayesian optimization, predict fund Sharpe ratios more accurately than traditional statistical ones. An ensemble method combining the predictions of the LSTM and GRU achieves the best performance of all models. There is evidence that deep learning and ensembling offer promising solutions to the challenge of fund performance prediction.
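For reference, a sketch of the target variable described above. The sqrt(12) annualization and the zero risk-free default are standard conventions assumed here, not taken from the paper.

```python
# Annualized Sharpe ratio computed from monthly returns.
import numpy as np

def annualized_sharpe(monthly_returns, risk_free_monthly=0.0):
    excess = np.asarray(monthly_returns) - risk_free_monthly
    # sqrt(12) annualizes the monthly mean/std ratio
    return np.sqrt(12) * excess.mean() / excess.std(ddof=1)

rng = np.random.default_rng(0)
fund = rng.normal(0.008, 0.04, size=12)   # one year of synthetic returns
print(f"annualized Sharpe: {annualized_sharpe(fund):.2f}")
```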
As power systems evolve into more intelligent and interactive systems, with increasing flexibility and a larger penetration of renewable energy sources, demand prediction at a short-term resolution will inevitably become more and more crucial in designing and managing the future grid, especially at the individual household level. Projecting the electricity demand of a single energy user, as opposed to the aggregated power consumption of residential load on a wide scale, is difficult because of a considerable number of volatile and uncertain factors. This paper proposes customized GRU (gated recurrent unit) and long short-term memory (LSTM) architectures to address this challenging problem. LSTM and GRU are comparatively newer and among the most widely adopted deep learning approaches. The electricity consumption datasets were obtained from individual household smart meters. The comparison shows that the LSTM model performs better for home-level forecasting than the alternative prediction technique, in this case the GRU. To contrast the NN-based models with a model based on a conventional statistical technique, an ARIMA-based model was also developed and benchmarked against the LSTM and GRU outcomes in this study, showing the performance of the proposed models on the collected time-series data.
Common to all different kinds of recurrent neural networks (RNNs) is the intention to model relations between data points through time. Even when there is no immediate relationship between subsequent data points (e.g., when the data points are generated at random), we show that RNNs are still able to remember a few data points back into the sequence by memorizing them by heart using standard backpropagation. However, we also show that for classical RNNs, LSTM, and GRU networks, the distance between data points separated by recurrent calls that can be reproduced this way is highly limited (compared to even a loose connection between data points) and subject to various constraints imposed by the type and size of the RNN in question. This implies the existence of a hard limit (far below the information-theoretic one) on the distance between related data points within which RNNs are still able to recognize said relation.
In recent years, the use of orthogonal matrices has been shown to be a promising approach to improving recurrent neural networks (RNNs) in terms of training, stability, and convergence, particularly by controlling gradients. Through various gates and memory cells, gated recurrent unit (GRU) and long short-term memory (LSTM) architectures address the vanishing gradient problem, but they are still prone to the exploding gradient problem. In this work, we analyze the gradients in GRUs and propose the use of orthogonal matrices to prevent exploding gradient problems and enhance long-term memory. We study where orthogonal matrices should be used and propose a Neumann-series-based scaled Cayley transformation for training orthogonal matrices in GRUs, which we call Neumann-Cayley Orthogonal GRU, or simply NC-GRU. We present detailed experiments of our model on several synthetic and real-world tasks, which show that NC-GRU significantly outperforms GRUs as well as several other RNNs.
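A sketch of the core map suggested above: the Cayley transform of a skew-symmetric matrix A yields an orthogonal W, and a truncated Neumann series stands in for the exact inverse (valid for ||A|| < 1). The truncation order, scaling, and placement inside the GRU are the paper's contributions and are not reproduced here.

```python
# Cayley transform with a Neumann-series approximation of the inverse.
import torch

def neumann_cayley(A_raw, terms=4):
    A = A_raw - A_raw.T                      # enforce skew-symmetry
    I = torch.eye(A.shape[0])
    inv, power = I.clone(), I.clone()
    for _ in range(1, terms):                # (I + A)^-1 ≈ I - A + A^2 - ...
        power = power @ (-A)
        inv = inv + power
    return inv @ (I - A)                     # Cayley transform -> near-orthogonal W

W = neumann_cayley(0.05 * torch.randn(6, 6))
print(torch.dist(W.T @ W, torch.eye(6)))     # ≈ 0: W is close to orthogonal
```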
Recent developments in quantum computing and machine learning have propelled the interdisciplinary study of quantum machine learning. Sequential modeling is an important task with high scientific and commercial value. Existing VQC- or QNN-based methods require significant computational resources to perform gradient-based optimization of a large number of quantum circuit parameters. The major drawback is that such quantum gradient calculation requires a large number of circuit evaluations, posing challenges on current near-term quantum hardware and simulation software. In this work, we approach sequential modeling by applying a reservoir computing (RC) framework to quantum recurrent neural networks (QRNN-RC) based on the classical RNN, LSTM, and GRU. The main idea of this RC approach is that the QRNN with randomly initialized weights is treated as a dynamical system and only the final classical linear layer is trained. Our numerical simulations show that the QRNN-RC can reach results comparable to fully trained QRNN models on several function approximation and time series prediction tasks. Since the QRNN training complexity is significantly reduced, the proposed model trains notably faster. We also compare against corresponding classical RNN-based RC implementations and show that the quantum version learns faster, requiring fewer training epochs in most cases. Our results demonstrate a new possibility of utilizing quantum neural networks for sequential modeling with greater quantum hardware efficiency, an important design consideration for noisy intermediate-scale quantum (NISQ) computers.
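The reservoir-computing idea, shown here in its classical analogue (the paper's quantum circuits replaced by a frozen classical GRU purely for illustration): recurrent weights stay at their random initialization and only a linear readout is fit, here in closed form by ridge regression.

```python
# Classical RC sketch: frozen random recurrent network + trained linear readout.
import math
import torch
import torch.nn as nn

torch.manual_seed(0)
reservoir = nn.GRU(1, 32, batch_first=True)
for p in reservoir.parameters():
    p.requires_grad_(False)                  # the reservoir is never trained

t = torch.linspace(0, 8 * math.pi, 400)
series = torch.sin(t)
x = series[:-1].reshape(1, -1, 1)            # inputs
y = series[1:].reshape(-1, 1)                # one-step-ahead targets

with torch.no_grad():
    states, _ = reservoir(x)                 # (1, T, 32) reservoir states
H = states.squeeze(0)

lam = 1e-3                                   # ridge readout: (H'H + λI) w = H'y
w = torch.linalg.solve(H.T @ H + lam * torch.eye(32), H.T @ y)
print("train MSE:", ((H @ w - y) ** 2).mean().item())
```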
Short-term load forecasting (STLF) is challenging due to complex time series (TS) that express three seasonal patterns and a nonlinear trend. This paper proposes a novel hybrid hierarchical deep learning model that deals with multiple seasonality and produces both point forecasts and predictive intervals (PIs). It combines exponential smoothing (ES) and a recurrent neural network (RNN). ES dynamically extracts the main components of each individual TS and enables on-the-fly deseasonalization, which is particularly useful when operating on a relatively small dataset. A multi-layer RNN is equipped with a new type of dilated recurrent cell designed to efficiently model both short- and long-term dependencies in TS. To improve the internal TS representation, and thus the model's performance, the RNN learns simultaneously both the ES parameters and the main mapping function transforming inputs into forecasts. We compare our approach against several baseline methods, including classical statistical methods and machine learning (ML) approaches, on STLF problems for 35 European countries. The empirical study clearly shows that the proposed model has high expressive power to solve nonlinear stochastic forecasting problems with multiple seasonality and significant random fluctuations. Indeed, it outperforms both statistical and state-of-the-art ML models in terms of accuracy.
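A sketch of the ES half of the hybrid: multiplicative exponential smoothing tracks a level and seasonal indices, enabling on-the-fly deseasonalization and normalization before the RNN. Fixed smoothing coefficients are used here for illustration; in the model above they are learned jointly with the RNN weights.

```python
# Multiplicative exponential smoothing for on-the-fly deseasonalization.
import numpy as np

def es_deseasonalize(y, season_len, alpha=0.3, gamma=0.3):
    level = y[:season_len].mean()
    seasonal = list(y[:season_len] / level)      # initial seasonal indices
    out = []
    for t, value in enumerate(y):
        s = seasonal[t]                          # index from one season ago
        level_new = alpha * value / s + (1 - alpha) * level
        seasonal.append(gamma * value / level_new + (1 - gamma) * s)
        level = level_new
        out.append(value / (level * s))          # deseasonalized RNN input
    return np.array(out)

hours = np.arange(24 * 14)                       # two weeks of hourly load
load = 100 + 20 * np.sin(2 * np.pi * hours / 24) \
       + np.random.default_rng(0).standard_normal(hours.size)
print(es_deseasonalize(load, season_len=24)[:5])
```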
We show that both LSTMs and unitary-evolution recurrent networks (URNs) can achieve encouraging accuracy on two types of syntactic patterns: context-free long-distance agreement and mildly context-sensitive cross-serial dependencies. This work extends recent experiments on deeply nested context-free long-distance dependencies, with similar results. URNs differ from LSTMs in that they avoid nonlinear activation functions, and they apply matrix multiplication to word embeddings encoded as unitary matrices. This allows them to retain all the information in the input string over arbitrary distances. It also causes them to satisfy strict compositionality. URNs constitute a significant advance in the search for explainable models in deep learning applied to NLP.
In this paper we compare different types of recurrent units in recurrent neural networks (RNNs). In particular, we focus on more sophisticated units that implement a gating mechanism, such as the long short-term memory (LSTM) unit and the recently proposed gated recurrent unit (GRU). We evaluate these recurrent units on the tasks of polyphonic music modeling and speech signal modeling. Our experiments revealed that these advanced recurrent units are indeed better than more traditional recurrent units such as tanh units. Also, we found the GRU to be comparable to the LSTM.
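For reference, the gating mechanism under comparison, in its commonly stated form (standard notation; minor details may differ from the paper):

```latex
% Standard GRU update equations.
\begin{aligned}
z_t &= \sigma(W_z x_t + U_z h_{t-1}) && \text{(update gate)}\\
r_t &= \sigma(W_r x_t + U_r h_{t-1}) && \text{(reset gate)}\\
\tilde{h}_t &= \tanh\bigl(W x_t + U (r_t \odot h_{t-1})\bigr) && \text{(candidate state)}\\
h_t &= (1 - z_t) \odot h_{t-1} + z_t \odot \tilde{h}_t && \text{(interpolation)}
\end{aligned}
```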
In this paper, we propose a new short-term load forecasting (STLF) model based on a contextually enhanced hybrid and hierarchical architecture combining exponential smoothing (ES) and a recurrent neural network (RNN). The model is composed of two simultaneously trained tracks: the context track and the main track. The context track introduces additional information to the main track. It is extracted from representative series and dynamically modulated to adjust to the individual series forecasted by the main track. The RNN architecture consists of multiple recurrent layers stacked with hierarchical dilations and equipped with recently proposed attentive dilated recurrent cells. These cells enable the model to capture short-term, long-term, and seasonal dependencies across time series, as well as to dynamically weight the input information. The model produces both point forecasts and predictive intervals. The experimental part of the work, performed on 35 forecasting problems, shows that the proposed model outperforms its predecessor as well as standard statistical models and state-of-the-art machine learning models in terms of accuracy.
Earthquake prediction and forecasting have a long, and in some cases sordid, history, but recent work has rekindled interest based on advances in early warning, hazard assessment for induced seismicity, and successful prediction of laboratory earthquakes. In the laboratory, frictional stick-slip events provide an analog for earthquakes and the seismic cycle. Labquakes are ideal targets for machine learning (ML) because they can be produced in long sequences under controlled conditions. Recent works show that ML can predict several aspects of labquakes using acoustic emissions from the fault zone. Here we generalize these results and explore deep learning (DL) methods for labquake prediction and autoregressive (AR) forecasting. DL improves upon existing labquake prediction methods. AR methods allow forecasting at future horizons via iterative predictions. We demonstrate that DL models based on long short-term memory (LSTM) and convolutional neural networks can predict labquakes under several conditions, and that fault zone stress can be predicted with fidelity, confirming that acoustic energy is a fingerprint of fault zone stress. We also predict the time to the start of failure (TTsF) and the time to the end of failure (TTeF) of labquakes. Interestingly, TTeF is successfully predicted in all seismic cycles, while the TTsF prediction varies with the amount of preseismic fault creep. We report AR methods to forecast the evolution of fault stress using three sequence-modeling frameworks: LSTM, temporal convolutional networks, and transformer networks. AR forecasting differs from existing predictive models, which predict the target variable only at a specific time. Results for forecasting beyond a single seismic cycle are limited but encouraging. Our ML/DL models outperform the state of the art, and our autoregressive models represent a novel framework that could enhance current methods of earthquake forecasting.
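Schematic of the AR forecasting loop described above: a trained one-step model is rolled forward by feeding its own predictions back in. `model` and the window length are placeholders for whichever trained sequence model (LSTM, TCN, transformer) is used.

```python
# Iterative autoregressive forecasting with a one-step model.
import numpy as np

def autoregressive_forecast(model, history, horizon, window):
    buffer = list(history[-window:])
    forecasts = []
    for _ in range(horizon):
        next_val = model(np.array(buffer[-window:]))  # one-step prediction
        forecasts.append(next_val)
        buffer.append(next_val)                       # feed prediction back
    return np.array(forecasts)

# Toy stand-in model: extrapolate from the window's mean slope.
toy_model = lambda w: w[-1] + np.mean(np.diff(w))
print(autoregressive_forecast(toy_model, np.arange(20.0), horizon=5, window=4))
```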
Missing data scenarios are very common in general ML applications, and time-series/sequence applications are no exception. This paper concerns a novel recurrent neural network (RNN) based solution for sequence prediction under missing data. Our approach differs from all existing methods: it tries to encode the missingness patterns in the data directly, without attempting to impute the data either before or during model building. Our encoding is lossless and achieves compression. It can be employed for both sequence classification and forecasting. We focus on forecasting in the general context of multi-step prediction in the presence of possible exogenous inputs. In particular, we propose novel variants of encoder-decoder (Seq2Seq) RNNs for this purpose. The encoder here adopts the above-mentioned pattern encoding, while multiple variants are feasible for the decoder, which has a different structure. We demonstrate the utility of our proposed architectures via multiple experiments on both single and multiple sequence (real) datasets. We consider two scenarios, in which (i) data is naturally missing and (ii) data is synthetically masked.
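The paper's exact lossless encoding is not reproduced here; this sketch illustrates the general idea of presenting the missingness pattern to the encoder instead of imputing: each observed value is paired with the length of the gap preceding it, a lossless and compressed view of the sequence.

```python
# Gap-length encoding of a sequence with missing entries (no imputation).
import numpy as np

def encode_with_gaps(values, observed_mask):
    """Return (value, preceding_gap_length) pairs for observed entries."""
    encoded, gap = [], 0
    for v, seen in zip(values, observed_mask):
        if seen:
            encoded.append((v, gap))
            gap = 0
        else:
            gap += 1
    return np.array(encoded)

x = np.array([1.0, 2.0, np.nan, np.nan, 5.0, np.nan, 7.0])
print(encode_with_gaps(x, ~np.isnan(x)))
# [[1. 0.] [2. 0.] [5. 2.] [7. 1.]] -- shorter than the raw series
```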
Several variants of the Long Short-Term Memory (LSTM) architecture for recurrent neural networks have been proposed since its inception in 1995. In recent years, these networks have become the state-of-the-art models for a variety of machine learning problems. This has led to a renewed interest in understanding the role and utility of various computational components of typical LSTM variants. In this paper, we present the first large-scale analysis of eight LSTM variants on three representative tasks: speech recognition, handwriting recognition, and polyphonic music modeling. The hyperparameters of all LSTM variants for each task were optimized separately using random search, and their importance was assessed using the powerful fANOVA framework. In total, we summarize the results of 5400 experimental runs (≈ 15 years of CPU time), which makes our study the largest of its kind on LSTM networks. Our results show that none of the variants can improve upon the standard LSTM architecture significantly, and demonstrate the forget gate and the output activation function to be its most critical components. We further observe that the studied hyperparameters are virtually independent and derive guidelines for their efficient adjustment.
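For reference, the "standard LSTM" baseline the variants are measured against, stated here without the peephole terms for brevity (the vanilla variant in the study includes them):

```latex
% Standard LSTM update equations (peephole connections omitted).
\begin{aligned}
i_t &= \sigma(W_i x_t + U_i h_{t-1} + b_i) && \text{(input gate)}\\
f_t &= \sigma(W_f x_t + U_f h_{t-1} + b_f) && \text{(forget gate)}\\
o_t &= \sigma(W_o x_t + U_o h_{t-1} + b_o) && \text{(output gate)}\\
c_t &= f_t \odot c_{t-1} + i_t \odot \tanh(W_c x_t + U_c h_{t-1} + b_c)\\
h_t &= o_t \odot \tanh(c_t) && \text{(output activation)}
\end{aligned}
```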
Modern tourism in the 21st century faces numerous challenges. One of these challenges is the rapidly growing number of tourists in areas with limited space, such as historical city centers, museums, or geographical bottlenecks like narrow valleys. In this context, proper and accurate prediction of tourism volume and tourism flow within a specific area is essential for visitor management tasks, such as visitor flow control and the prevention of overcrowding. Static flow-control methods, such as limiting access to hotspots or using conventional low-level controllers, cannot solve the problem. In this paper, we tackle the problem by using the available granular data provided by a tourism region and comparing the results of deep learning models with the classical statistical method ARIMA. Our results show that deep learning models yield better predictions than the ARIMA method, while also offering faster inference times and the ability to incorporate additional input features.
The autoregressive moving-average (ARMA) model is a classical, and arguably one of the most studied, approaches to modeling time-series data. It has compelling theoretical properties and is widely used among practitioners. More recent deep learning approaches popularized recurrent neural networks (RNNs), and in particular long short-term memory (LSTM) cells, which have become one of the best-performing and most common building blocks in neural time-series modeling. While advantageous for time series or sequences with long-term effects, complex RNN cells are not always a must and can sometimes even be inferior to simpler recurrent approaches. In this work, we introduce the ARMA cell, a simpler, modular, and effective approach to time-series modeling in neural networks. This cell can be used in any neural network architecture where recurrent structures are present and naturally handles multivariate time series using vector autoregression. We also introduce the ConvARMA cell as a natural successor for spatially correlated time series. Our experiments show that the proposed methods are competitive with popular alternatives in terms of performance, while being more robust and compelling due to their simplicity.
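A sketch of an ARMA(p, q)-style recurrent cell: the output is a function of p lagged inputs and q lagged cell outputs (one common neural stand-in for the MA error terms), passed through an activation. Shapes and activation placement are illustrative and may differ from the paper's formulation.

```python
# Minimal ARMA-style recurrent cell for a univariate series.
import torch
import torch.nn as nn

class ARMACell(nn.Module):
    def __init__(self, p, q):
        super().__init__()
        self.phi = nn.Parameter(0.1 * torch.randn(p))    # AR weights
        self.theta = nn.Parameter(0.1 * torch.randn(q))  # MA-style weights
        self.bias = nn.Parameter(torch.zeros(()))

    def forward(self, x):                    # x: (T,) univariate series
        p, q = len(self.phi), len(self.theta)
        outputs = list(torch.zeros(q))       # warm-up outputs
        for t in range(p, len(x)):
            ar = (self.phi * x[t - p:t].flip(0)).sum()
            ma = (self.theta * torch.stack(outputs[-q:]).flip(0)).sum()
            outputs.append(torch.tanh(ar + ma + self.bias))
        return torch.stack(outputs[q:])

cell = ARMACell(p=3, q=2)
print(cell(torch.sin(torch.linspace(0, 12, 100))).shape)  # torch.Size([97])
```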
The time-series forecasting (TSF) problem is a traditional problem in the field of artificial intelligence. Models such as the recurrent neural network (RNN), long short-term memory (LSTM), and gated recurrent unit (GRU) have contributed to improving the predictive accuracy of TSF. Furthermore, model structures have been proposed that combine time-series decomposition methods, such as seasonal-trend decomposition using Loess (STL), to ensure improved predictive accuracy. However, because this approach learns each component with an independent model, it cannot learn the relationships between time-series components. In this study, we propose a new neural architecture called a correlation recurrent unit (CRU) that can perform time-series decomposition within a neural cell and learn correlations (autocorrelation and correlation) between each decomposition component. The proposed neural architecture was evaluated through comparative experiments with previous studies using five univariate and four multivariate time-series datasets. The results showed that long- and short-term predictive performance was improved by more than 10%. The experimental results show that the proposed CRU is an excellent method for TSF problems compared with other neural architectures.
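For contrast, a minimal sketch of the decompose-then-model baseline the CRU improves on: STL splits the series, each component is forecast independently (naive rules stand in for the independent models here), and the parts are recombined; nothing ties the components together, which is the limitation noted above.

```python
# STL-decompose, forecast components independently, recombine.
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import STL

idx = pd.date_range("2020-01-01", periods=300, freq="D")
rng = np.random.default_rng(0)
y = pd.Series(10 + 0.02 * np.arange(300)
              + 2 * np.sin(2 * np.pi * np.arange(300) / 7)
              + 0.3 * rng.standard_normal(300), index=idx)

parts = STL(y, period=7).fit()
horizon = 7
drift = parts.trend.iloc[-1] - parts.trend.iloc[-2]
trend_fc = parts.trend.iloc[-1] + np.arange(1, horizon + 1) * drift
season_fc = parts.seasonal.iloc[-7:].to_numpy()   # repeat last season
print(trend_fc + season_fc)                       # recombined forecast
```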
Creating patient-specific finite element analysis (FEA) models for computational fluid dynamics (CFD) of doubly stenosed artery models involves time and effort, limiting physicians' ability to respond quickly in time-critical medical applications. These issues might be addressed by training deep learning (DL) models to learn and predict blood flow characteristics using a dataset generated by CFD simulations of simplified doubly stenosed artery models with different configurations. When the blood flow patterns were compared against a real doubly stenosed artery model derived from IVUS imaging, it was revealed that the sinusoidal approximation of the stenosed neck geometry, widely used in previous research works, fails to effectively represent the effects of a real constriction. As a result, a novel geometric representation of the constricted neck is proposed, which outperforms the former assumption in terms of a generalized simplified model. The sequential variation of the artery lumen diameter and flow parameters along the length of the vessel presented an opportunity to use LSTM and GRU DL models. However, for the small dataset of short-length stenosed arteries, the basic neural network model outperformed the specialized RNNs for most flow properties. LSTM, on the other hand, performed better for predicting flow properties with large fluctuations, such as the variation of blood pressure over the length of the vessels. Despite good overall accuracy in training and testing across all properties of the vessels in the dataset, the GRU model underperformed for single-vessel flow prediction in all cases. The results also point to separately optimized hyperparameters for each property in any model, rather than aiming for overall good performance across all outputs with a single set of hyperparameters.
An autism dataset was investigated to identify differences between autistic and healthy groups. To this end, the resting-state functional magnetic resonance imaging (rs-fMRI) data of the two groups were analyzed, and connectivity networks between brain regions were created. Several classification frameworks were developed to distinguish the connectivity patterns between the groups. The best models for statistical inference and accuracy were compared, and the trade-off between accuracy and model interpretability was analyzed. Finally, classification accuracy measures are reported to demonstrate the performance of our framework. Our best model classified autistic and healthy patients on the multi-site I data with 71% accuracy.
We study nonlinear prediction in an online setting and introduce a hybrid model that, through an end-to-end architecture, effectively mitigates the need for hand-designed features and the manual model-selection issues of conventional nonlinear prediction/regression methods. In particular, we use recursive structures to extract features from sequential signals, while preserving the state information, i.e., the history, and boosted decision trees to produce the final output. The connection is made in an end-to-end fashion and we jointly optimize the whole architecture using stochastic gradient descent, for which we also provide the backward-pass update equations. Specifically, we employ a recurrent neural network (LSTM) for adaptive feature extraction from sequential data and a gradient boosting machinery (soft GBDT) for effective supervised regression. Our framework is generic, so that one can use other deep learning architectures for feature extraction (such as RNNs and GRUs) and other machine learning algorithms for decision making, as long as they are differentiable. We demonstrate the learning behavior of our algorithm on synthetic data and significant performance improvements over conventional methods on various real-life datasets. Furthermore, we openly share the source code of the proposed method to facilitate further research.
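Schematic of the hybrid: an LSTM extracts state features and a soft (sigmoid-routed, hence differentiable) decision tree regresses on them, trained jointly by SGD. A depth-1 tree stands in for the paper's soft GBDT ensemble; all names and sizes here are illustrative.

```python
# Jointly trained LSTM feature extractor + soft decision tree regressor.
import math
import torch
import torch.nn as nn

class LSTMSoftTree(nn.Module):
    def __init__(self, hidden=16):
        super().__init__()
        self.lstm = nn.LSTM(1, hidden, batch_first=True)
        self.split = nn.Linear(hidden, 1)      # soft routing decision
        self.leaf = nn.Parameter(torch.randn(2))

    def forward(self, x):
        _, (h, _) = self.lstm(x)               # final state as features
        p = torch.sigmoid(self.split(h[-1]))   # probability of routing left
        return (p * self.leaf[0] + (1 - p) * self.leaf[1]).squeeze(-1)

torch.manual_seed(0)
model = LSTMSoftTree()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
t = torch.linspace(0, 4 * math.pi, 200)
x = torch.sin(t)[:-1].reshape(1, -1, 1)        # toy input sequence
y = torch.sin(t)[-1:]                          # next-value target
for _ in range(100):                           # joint end-to-end training
    loss = nn.functional.mse_loss(model(x), y)
    opt.zero_grad(); loss.backward(); opt.step()
print("final MSE:", loss.item())
```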
Methods such as layer normalization (LN) and batch normalization (BN) have proven effective in improving the training of recurrent neural networks (RNNs). However, existing methods normalize using only the instantaneous information at one particular time step, and the result of the normalization is a pre-activation state with a time-independent distribution. This implementation fails to account for certain temporal differences inherent in the inputs and the architecture of RNNs. Since these networks share weights across time steps, it may also be desirable to account for the connections between time steps in the normalization scheme. In this paper, we propose a normalization method called Assorted-Time Normalization (ATN), which preserves information from multiple consecutive time steps and normalizes using them. This setup allows us to introduce longer temporal dependencies into the traditional normalization methods without introducing any new trainable parameters. We present theoretical derivations for the gradient propagation and prove the weight-scaling invariance property. Our experiments applying ATN to LN demonstrate consistent improvements on various tasks, such as the Adding, Copying, and Denoise problems, as well as language modeling problems.
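A sketch of the idea applied to layer normalization: statistics are computed over the pre-activations of the last k time steps rather than the current step alone, adding no trainable parameters beyond LN's own gain and bias (omitted here). How the steps are assorted follows the paper and is simplified to a sliding window in this sketch.

```python
# Normalize each step using statistics from the k most recent steps.
import torch

def assorted_time_norm(preacts, k=3, eps=1e-5):
    """preacts: (T, H) pre-activations; normalize step t over steps
    max(0, t-k+1)..t instead of step t alone."""
    out = torch.empty_like(preacts)
    for t in range(preacts.shape[0]):
        window = preacts[max(0, t - k + 1): t + 1]   # k most recent steps
        mu, var = window.mean(), window.var(unbiased=False)
        out[t] = (preacts[t] - mu) / torch.sqrt(var + eps)
    return out

states = torch.randn(10, 8).cumsum(0)    # drifting pre-activation states
print(assorted_time_norm(states).std(dim=1))
```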