Traditional machine learning methods have been widely studied in financial innovation. My research focuses on deep learning methods for asset pricing, particularly for measuring risk premia. All models employ the same predictive signals (firm characteristics, systematic risks, and macroeconomic indicators). I demonstrate the strong performance of various state-of-the-art (SOTA) deep learning methods and find that RNNs with memory mechanisms and attention achieve the best predictive performance. Moreover, I show that deep learning predictions deliver large economic gains to investors. The results of my comparative experiments highlight the importance of domain knowledge and financial theory when designing deep learning models. I also show that the return prediction task poses new challenges for deep learning: time-varying distributions cause the distribution shift problem, which is critical for financial time series prediction. I demonstrate that deep learning methods can improve the measurement of asset risk premia, and, fueled by flourishing deep learning research, they can continually advance the study of the fundamental financial mechanisms behind asset pricing. I further propose a promising research direction that learns from data and uncovers underlying economic mechanisms through explainable artificial intelligence (AI) methods. My findings not only justify the value of deep learning in blossoming fintech development, but also highlight its promise and advantages over traditional machine learning methods.
translated by Google Translate
Predicting fund performance is beneficial to both investors and fund managers, yet it is a challenging task. In this paper, we test whether deep learning models can predict fund performance more accurately than traditional statistical techniques. Fund performance is typically evaluated by the Sharpe ratio, which represents risk-adjusted performance to ensure meaningful comparability across funds. We calculated annual Sharpe ratios from the monthly return time series of more than 600 open-end mutual funds investing in US-listed large-cap equities. We find that long short-term memory (LSTM) and gated recurrent unit (GRU) deep learning methods, trained with modern Bayesian optimization, predict fund Sharpe ratios more accurately than traditional statistical approaches. An ensemble method that combines the predictions of the LSTM and GRU achieves the best performance of all models. The evidence suggests that deep learning and ensembling offer promising solutions to the challenge of fund performance prediction.
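As a concrete reference point for the prediction target, the annualized Sharpe ratio can be computed from monthly returns as follows. This is a minimal sketch: the constant monthly risk-free rate is an assumption, since the paper's exact risk-free handling is not stated here.

```python
import statistics

def annualized_sharpe(monthly_returns, monthly_rf=0.0):
    """Annualized Sharpe ratio from a series of monthly returns.

    Excess returns are taken against an (assumed constant) monthly
    risk-free rate; the mean/stdev ratio is scaled by sqrt(12) to
    annualize it.
    """
    excess = [r - monthly_rf for r in monthly_returns]
    mean = statistics.mean(excess)
    stdev = statistics.stdev(excess)  # sample standard deviation
    return (mean / stdev) * 12 ** 0.5
```

For example, `annualized_sharpe([0.01, 0.02, -0.01, 0.015, 0.0, 0.01])` yields an annualized figure directly comparable across funds.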
Deep-learning-based forecasting methods have become the method of choice in many time series forecasting applications, often outperforming alternatives. Consequently, over the past few years these methods have become ubiquitous in large-scale industrial forecasting applications and have consistently ranked among the best entries in forecasting competitions (e.g., M4 and M5). This practical success has further increased academic interest in understanding and improving deep forecasting methods. In this article we provide an introduction and overview of the field: we present important building blocks for deep forecasting in some depth; using these building blocks, we then survey the breadth of the recent deep forecasting literature.
Earthquakes, as natural phenomena, have historically caused continual damage and loss of human life. Earthquake prediction is an essential aspect of any society's planning and can increase public preparedness and reduce damage to a great extent. Nevertheless, due to the stochastic character of earthquakes and the challenge of building an efficient and dependable model for earthquake prediction, efforts have been insufficient thus far, and new methods are required to tackle this problem. Aware of these issues, this paper proposes a novel prediction method based on attention mechanism (AM), convolutional neural network (CNN), and bidirectional long short-term memory (BiLSTM) models, which can predict the number and maximum magnitude of earthquakes in each region of mainland China based on the region's earthquake catalog. The model couples LSTM and CNN with an attention mechanism to focus better on effective earthquake characteristics and produce more accurate predictions. First, a zero-order hold technique is applied as preprocessing on the earthquake data, making the model's input data more suitable. Second, to use spatial information effectively and reduce the dimensionality of the input data, a CNN is used to capture the spatial dependencies in the earthquake data. Third, a Bi-LSTM layer is used to capture temporal dependencies. Fourth, an AM layer is introduced to highlight important features and achieve better prediction performance. The results show that the method has better performance and generalization ability than other prediction methods.
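The zero-order-hold preprocessing step amounts to resampling an irregular catalog onto a regular time grid by holding the most recent observed value. A minimal sketch (the sampling grid and series values are illustrative assumptions, not the paper's actual catalog fields):

```python
def zero_order_hold(times, values, grid):
    """Resample an irregular series onto a regular grid by holding the
    most recent observed value (zero-order hold).

    `times` must be sorted ascending; grid points earlier than the first
    observation take the first value.
    """
    out = []
    i = 0
    current = values[0]
    for t in grid:
        # advance to the latest observation at or before grid time t
        while i < len(times) and times[i] <= t:
            current = values[i]
            i += 1
        out.append(current)
    return out
```

This produces an evenly spaced input sequence, which is what the downstream CNN and Bi-LSTM layers expect.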
Stock market prediction is a traditional yet complex problem that has been researched across diverse research areas and application domains due to its non-linear, highly volatile and complex nature. Existing surveys on stock market prediction often focus on traditional machine learning methods rather than deep learning methods. Deep learning has dominated many domains and in recent years has gained much success and popularity in stock market prediction. This motivates us to provide a structured and comprehensive overview of the research on stock market prediction focusing on deep learning techniques. We present four elaborated subtasks of stock market prediction and propose a novel taxonomy to summarize the state-of-the-art models based on deep neural networks from 2011 to 2022. In addition, we also provide detailed statistics on the datasets and evaluation metrics commonly used in the stock market. Finally, we highlight some open issues and point out several future directions by sharing some new perspectives on stock market prediction.
Considerable research effort has been devoted to leveraging deep neural networks for stock prediction. However, long-range dependencies and chaotic properties still degrade the performance of state-of-the-art deep learning models in forecasting future price trends. In this study, we propose a novel framework to address both issues. Specifically, drawing on work that converts time series into complex networks, we map the market price series into graphs. Structural information about the associations between temporal points and the node weights is then extracted from the mapped graphs to resolve the problems regarding long-range dependencies and chaotic properties. We take graph embeddings to represent the associations among temporal points as the input of the prediction model. Node weights are used as a priori knowledge to enhance the learning of temporal attention. The effectiveness of our proposed framework is validated on real-world stock data, and our approach obtains the best performance among several state-of-the-art benchmarks. Moreover, in the conducted trading simulations, our framework further obtains the highest cumulative profits. Our results supplement the existing applications of complex network methods in the financial realm and provide insightful implications for investment applications regarding decision support in financial markets.
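One standard way to map a price series to a graph, in the spirit of the time-series-to-complex-network literature this framework draws on, is the natural visibility criterion. The sketch below is illustrative only; it is not necessarily the exact mapping used in the paper:

```python
def natural_visibility_graph(series):
    """Map a time series to a graph using the natural visibility criterion.

    Nodes are time indices; points (a, y_a) and (b, y_b) are connected iff
    every intermediate point lies strictly below the straight line joining
    them. Returns the set of edges (a, b) with a < b.
    """
    n = len(series)
    edges = set()
    for a in range(n):
        for b in range(a + 1, n):
            visible = True
            for c in range(a + 1, b):
                # height of the a-b sight line at time c
                line = series[b] + (series[a] - series[b]) * (b - c) / (b - a)
                if series[c] >= line:
                    visible = False
                    break
            if visible:
                edges.add((a, b))
    return edges
```

Graph embeddings and node statistics (e.g., degree) computed on such a graph can then serve as structural inputs to a predictor.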
In this paper, we propose a new short-term load forecasting (STLF) model based on contextually enhanced hybrid and hierarchical architecture combining exponential smoothing (ES) and a recurrent neural network (RNN). The model is composed of two simultaneously trained tracks: the context track and the main track. The context track introduces additional information to the main track. It is extracted from representative series and dynamically modulated to adjust to the individual series forecasted by the main track. The RNN architecture consists of multiple recurrent layers stacked with hierarchical dilations and equipped with recently proposed attentive dilated recurrent cells. These cells enable the model to capture short-term, long-term and seasonal dependencies across time series as well as to weight dynamically the input information. The model produces both point forecasts and predictive intervals. The experimental part of the work performed on 35 forecasting problems shows that the proposed model outperforms in terms of accuracy its predecessor as well as standard statistical models and state-of-the-art machine learning models.
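The exponential smoothing half of such ES/RNN hybrids is built on a level-update recursion. A minimal sketch of simple exponential smoothing (the model's actual ES formulation also handles seasonality and is trained jointly with the RNN, which this sketch omits):

```python
def simple_exponential_smoothing(series, alpha):
    """Simple exponential smoothing:
    level_t = alpha * y_t + (1 - alpha) * level_{t-1}.

    Returns the list of smoothed levels; the last level is the
    one-step-ahead forecast.
    """
    level = series[0]
    levels = [level]
    for y in series[1:]:
        level = alpha * y + (1 - alpha) * level
        levels.append(level)
    return levels
```

In ES/RNN hybrids, recursions like this normalize and deseasonalize each series, and the RNN then models what the smoothing leaves unexplained.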
Various deep learning models, especially some of the latest transformer-based approaches, have greatly improved the state-of-the-art performance for long-term time series forecasting. However, these transformer-based models suffer severe performance degradation as the input length grows, which prevents them from exploiting extended historical information. Moreover, these methods tend to employ intricate designs for long-term forecasting and increase model complexity, which often leads to a significant increase in computation and reduced robustness (e.g., overfitting). We propose a novel neural network architecture, called TreeDRNet, for more effective long-term forecasting. Inspired by robust regression, we introduce a doubly residual link structure to make forecasting more robust. Built upon the Kolmogorov-Arnold representation theorem, we explicitly introduce feature selection, model ensembling, and a tree structure to further exploit the extended input sequence, which improves the robustness and representational power of TreeDRNet. Unlike previous deep models for sequential forecasting, TreeDRNet is built entirely on multilayer perceptrons and is therefore highly computationally efficient. Our extensive empirical studies show that TreeDRNet is significantly more effective than state-of-the-art methods, reducing prediction errors by 20% to 40%. In particular, TreeDRNet is over 10 times more efficient than transformer-based methods. The code will be released soon.
In this paper, we study the problem of predicting the future volatility of foreign exchange (FX) currency pairs using deep learning techniques. We show step by step how to construct a deep learning network guided by empirical patterns of intraday volatility. Numerical results show that a multiscale long short-term memory (LSTM) model with input from multiple currency pairs consistently achieves state-of-the-art accuracy compared with both conventional baselines (i.e., autoregressive and GARCH models) and other deep learning models.
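The GARCH(1,1) baseline mentioned above models conditional variance recursively. A minimal sketch of the variance recursion (the parameter values below are illustrative assumptions; in practice they are fitted by maximum likelihood):

```python
def garch_1_1_variance(returns, omega, alpha, beta, initial_var):
    """Conditional variance recursion of a GARCH(1,1) baseline:
    sigma2_t = omega + alpha * r_{t-1}^2 + beta * sigma2_{t-1}.

    Returns the conditional variance series, one entry per return.
    """
    sigma2 = [initial_var]
    for r in returns[:-1]:
        sigma2.append(omega + alpha * r * r + beta * sigma2[-1])
    return sigma2
```

The square root of each entry is the conditional volatility forecast that the deep learning models are compared against.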
Physics-informed neural networks (PINNs) are neural networks (NNs) that encode model equations, such as partial differential equations (PDEs), as a component of the neural network itself. Today, PINNs are used to solve PDEs, fractional equations, integro-differential equations, and stochastic PDEs. This novel methodology has arisen as a multi-task learning framework in which an NN must fit observed data while reducing the PDE residual. This article provides a comprehensive review of the literature on PINNs: the primary goal of the study is to characterize these networks and their related advantages and disadvantages. The review also attempts to situate the publications within the broader family of collocation-based physics-informed neural networks, which comprise the vanilla PINN as well as many other variants, such as physics-constrained neural networks (PCNNs), variational hp-VPINNs, and conservative PINNs (CPINNs). The study indicates that most research has focused on customizing the PINN through different activation functions, gradient optimization techniques, neural network structures, and loss function structures. Despite the wide range of applications for which PINNs have been used, and their demonstrated ability to be more feasible in some contexts than classical numerical techniques such as the finite element method (FEM), advances are still possible, most notably in theoretical issues that remain unaddressed.
In this paper, we study mid-cap companies, i.e., publicly traded companies with less than US$10 billion in market capitalization. Using a large dataset of US mid-cap companies observed over 30 years, we look to predict the default probability term structure over the medium term and to understand which data sources (i.e., fundamental, market, or pricing data) contribute most to default risk. Whereas existing methods typically require that data from different time periods first be aggregated and turned into cross-sectional features, we frame the problem as a multi-label time-series classification problem. We adapt transformer models, a state-of-the-art deep learning approach emanating from the natural language processing domain, to the credit risk modeling setting. We also interpret the predictions of these models using attention heat maps. To optimize the model further, we provide a custom loss function for multi-label classification and a novel multi-channel architecture with differential training, which enables the model to use all input data efficiently. Our results show the superior performance of the proposed deep learning architecture, with a 13% improvement in AUC (area under the receiver operating characteristic curve) over traditional models. We also show how to generate an importance ranking of the different data sources and temporal relationships using a Shapley approach specific to these models.
Corporate credit ratings issued by third-party rating agencies are quantified assessments of a company's creditworthiness. Credit ratings are highly correlated with the likelihood of a company defaulting on its debt obligations. These ratings play critical roles in investment decision-making as one of the key risk factors. They are also central to regulatory frameworks such as Basel II, e.g., in calculating the capital required of financial institutions. Being able to predict rating changes would greatly benefit both investors and regulators alike. In this paper, we consider the corporate credit rating migration early prediction problem, which predicts whether an issuer's credit rating will be upgraded, unchanged, or downgraded after 12 months based on the latest financial reporting information at the time. We investigate the effectiveness of different standard machine learning algorithms and conclude that these models deliver inferior performance. As part of our contribution, we propose a new Multi-task Envisioning Transformer-based Autoencoder (META) model to tackle this challenging problem. META consists of positional encoding, a transformer-based autoencoder, and multi-task prediction to learn effective representations for both migration prediction and rating prediction. This enables META to better explore the historical data in the training stage for one-year-ahead prediction. Experimental results show that META outperforms all baseline models.
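The positional encoding component can be illustrated with the standard sinusoidal formulation from the original transformer (a sketch; META's exact encoding variant is not specified here):

```python
import math

def positional_encoding(seq_len, d_model):
    """Standard sinusoidal positional encoding:
    PE[pos, 2i]   = sin(pos / 10000^(2i / d_model))
    PE[pos, 2i+1] = cos(pos / 10000^(2i / d_model))

    Returns a seq_len x d_model list of lists, one row per position.
    """
    pe = [[0.0] * d_model for _ in range(seq_len)]
    for pos in range(seq_len):
        for i in range(0, d_model, 2):
            angle = pos / (10000 ** (i / d_model))
            pe[pos][i] = math.sin(angle)
            if i + 1 < d_model:
                pe[pos][i + 1] = math.cos(angle)
    return pe
```

Each row is added to the embedding of the corresponding quarterly report so the autoencoder can distinguish the order of filings.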
Nowadays, index funds are favored by a large share of equity funds, and market sensitivities are instrumental in managing them. Index funds may replicate the index identically, but this is cost-inefficient and impractical. Alternatively, to replicate the index partially by exploiting market sensitivities, those sensitivities must be predicted or estimated accurately. Therefore, we first examine deep learning models for predicting market sensitivities. We also present pragmatic applications of data processing methods to aid training and to generate target data for the prediction. We then propose a partial index-tracking optimization model that controls the net predicted market sensitivities of the portfolio and the index. The efficacy of these processes is confirmed on the Korea Stock Price Index 200. Compared with historical estimation, our experiments show a significant reduction in prediction errors, as well as competitive tracking errors when replicating the index using fewer than half of the entire constituents. We therefore show that applying deep learning to predict market sensitivities is promising, and that our portfolio construction methods are practically effective. Moreover, to the best of our knowledge, this is the first study of market sensitivities focused on deep learning.
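The historical estimation that the experiments compare against is, in essence, the classical OLS market beta. A minimal sketch (the paper's exact factor specification and estimation window are not given, so a single-index beta is assumed here):

```python
def market_beta(asset_returns, index_returns):
    """Historical estimate of market sensitivity (beta) by OLS:
    beta = Cov(asset, index) / Var(index).
    """
    n = len(asset_returns)
    mean_a = sum(asset_returns) / n
    mean_i = sum(index_returns) / n
    cov = sum((a - mean_a) * (m - mean_i)
              for a, m in zip(asset_returns, index_returns)) / n
    var = sum((m - mean_i) ** 2 for m in index_returns) / n
    return cov / var
```

The deep learning models in the paper aim to predict future values of this quantity rather than extrapolate the backward-looking estimate.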
Time Series Classification (TSC) is an important and challenging problem in data mining. With the increase of time series data availability, hundreds of TSC algorithms have been proposed. Among these methods, only a few have considered Deep Neural Networks (DNNs) to perform this task. This is surprising as deep learning has seen very successful applications in recent years. DNNs have indeed revolutionized the field of computer vision especially with the advent of novel deeper architectures such as Residual and Convolutional Neural Networks. Apart from images, sequential data such as text and audio can also be processed with DNNs to reach state-of-the-art performance for document classification and speech recognition. In this article, we study the current state-of-the-art performance of deep learning algorithms for TSC by presenting an empirical study of the most recent DNN architectures for TSC. We give an overview of the most successful deep learning applications in various time series domains under a unified taxonomy of DNNs for TSC. We also provide an open source deep learning framework to the TSC community where we implemented each of the compared approaches and evaluated them on a univariate TSC benchmark (the UCR/UEA archive) and 12 multivariate time series datasets. By training 8,730 deep learning models on 97 time series datasets, we propose the most exhaustive study of DNNs for TSC to date.
Remaining Useful Life (RUL) estimation plays a critical role in Prognostics and Health Management (PHM). Traditional machine health maintenance systems are often costly, requiring sufficient prior expertise, and are difficult to fit into highly complex and changing industrial scenarios. With the widespread deployment of sensors on industrial equipment, building the Industrial Internet of Things (IIoT) to interconnect these devices has become an inexorable trend in the development of the digital factory. Using the device's real-time operational data collected by IIoT to get the estimated RUL through the RUL prediction algorithm, the PHM system can develop proactive maintenance measures for the device, thus reducing maintenance costs and decreasing failure times during operation. This paper carries out research into the remaining useful life prediction model for multi-sensor devices in the IIoT scenario. We investigated the mainstream RUL prediction models and summarized the basic steps of RUL prediction modeling in this scenario. On this basis, a data-driven approach for RUL estimation is proposed in this paper. It employs a Multi-Head Attention Mechanism to fuse the multi-dimensional time-series data output from multiple sensors, in which attention on features is used to capture the interactions between features and attention on sequences is used to learn the weights of time steps. Then, the Long Short-Term Memory Network is applied to learn the features of time series. We evaluate the proposed model on two benchmark datasets (C-MAPSS and PHM08), and the results demonstrate that it outperforms the state-of-the-art models. Moreover, through the interpretability of the multi-head attention mechanism, the proposed model can provide a preliminary explanation of engine degradation. Therefore, this approach is promising for predictive maintenance in IIoT scenarios.
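The scaled dot-product attention underlying the multi-head mechanism can be sketched as follows (a single head with illustrative shapes; the paper applies attention separately over the feature axis and the sequence axis, and with learned projections, which this sketch omits):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Scaled dot-product attention, the primitive inside each head:
    softmax(Q K^T / sqrt(d_k)) V.

    Returns the attended output and the attention weight matrix.
    """
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    # numerically stable softmax over the key dimension
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V, weights
```

When applied over sensors, the rows of `weights` quantify how strongly each sensor attends to the others, which is the basis of the degradation explanations the paper reports.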
We investigate nonlinear prediction in an online setting and introduce a hybrid model that, via an end-to-end architecture, effectively mitigates the need for hand-designed features and the manual model selection issues of conventional nonlinear prediction/regression methods. In particular, we use a recursive structure to extract features from sequential signals while preserving the state information, i.e., the history, and boosted decision trees to produce the final output. The connection is made in an end-to-end fashion, and we jointly optimize the whole architecture using stochastic gradient descent, for which we also provide the backward-pass update equations. Specifically, we employ a recurrent neural network (LSTM) for adaptive feature extraction from sequential data and a gradient boosting machinery (soft GBDT) for effective supervised regression. Our framework is generic, so one can use other deep learning architectures for feature extraction (such as RNNs and GRUs) and other machine learning algorithms for decision making, as long as they are differentiable. We demonstrate the learning behavior of our algorithm on synthetic data and significant performance improvements over conventional methods on various real-life datasets. Furthermore, we openly share the source code of the proposed method to facilitate further research.
The time-series forecasting (TSF) problem is a traditional problem in the field of artificial intelligence. Models such as the Recurrent Neural Network (RNN), Long Short-Term Memory (LSTM), and Gated Recurrent Units (GRU) have contributed to improving the predictive accuracy of TSF. Furthermore, model structures have been proposed that combine time-series decomposition methods, such as seasonal-trend decomposition using Loess (STL), to ensure improved predictive accuracy. However, because this approach is learned in an independent model for each component, it cannot learn the relationships between time-series components. In this study, we propose a new neural architecture called a correlation recurrent unit (CRU) that can perform time series decomposition within a neural cell and learn correlations (autocorrelation and correlation) between each decomposition component. The proposed neural architecture was evaluated through comparative experiments with previous studies using five univariate time-series datasets and four multivariate time-series datasets. The results showed that long- and short-term predictive performance was improved by more than 10%. The experimental results show that the proposed CRU is an excellent method for TSF problems compared to other neural architectures.
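The seasonal-trend decomposition that the CRU internalizes can be illustrated with an additive moving-average decomposition (an STL-like sketch; real STL uses Loess smoothing rather than plain moving averages, and the CRU performs the split inside the cell rather than as preprocessing):

```python
def decompose(series, period):
    """Additive decomposition into trend, seasonal, and remainder using
    centered moving averages. `period` is the seasonal cycle length.
    By construction, trend + seasonal + remainder == series.
    """
    n = len(series)
    half = period // 2
    # centered moving-average trend (window shrinks at the edges)
    trend = []
    for t in range(n):
        lo, hi = max(0, t - half), min(n, t + half + 1)
        trend.append(sum(series[lo:hi]) / (hi - lo))
    detrended = [y - tr for y, tr in zip(series, trend)]
    # average the detrended values at each phase of the cycle
    seasonal_means = [
        sum(detrended[i::period]) / len(detrended[i::period])
        for i in range(period)
    ]
    seasonal = [seasonal_means[t % period] for t in range(n)]
    remainder = [y - tr - s for y, tr, s in zip(series, trend, seasonal)]
    return trend, seasonal, remainder
```

In decomposition-based forecasters, each of these components is modeled separately; the CRU's contribution is to let the components share one cell and learn their correlations.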
Common to all different kinds of recurrent neural networks (RNNs) is the intention to model relations between data points through time. When there is no immediate relationship between subsequent data points (e.g., when the data points are generated at random), we show that RNNs are still able to remember a few data points back into the sequence by memorizing them by heart using standard backpropagation. However, we also show that for classical RNNs, LSTM and GRU networks, the distance between recurrent calls at which data points can be reproduced this way is highly limited (compared to even a loose connection between data points) and subject to various constraints imposed by the type and size of the RNN in question. This implies the existence of a hard limit (well below the information-theoretic one) on the distance between related data points within which RNNs are still able to recognize said relation.
Wind power forecasting helps with planning for power systems by contributing a higher level of certainty to decision-making. Due to the randomness inherent to meteorological events (e.g., wind speeds), making highly accurate long-term predictions for wind power can be extremely difficult. One approach to remedy this challenge is to utilize weather information from multiple points across a geographical grid to obtain a holistic view of the wind patterns, along with temporal information from the previous power outputs of the wind farms. Our proposed CNN-RNN architecture combines convolutional neural networks (CNNs) and recurrent neural networks (RNNs) to extract spatial and temporal information from multi-dimensional input data to make day-ahead predictions. In this regard, our method incorporates an ultra-wide learning view, combining data from multiple numerical weather prediction models, wind farms, and geographical locations. Additionally, we experiment with global forecasting approaches to understand the impact of training the same model over the datasets obtained from multiple different wind farms, and we employ a method where spatial information extracted from convolutional layers is passed to a tree ensemble (e.g., Light Gradient Boosting Machine (LGBM)) instead of fully connected layers. The results show that our proposed CNN-RNN architecture outperforms other models such as LGBM, Extra Tree regressor and linear regression when trained globally, but fails to replicate such performance when trained individually on each farm. We also observe that passing the spatial information from CNN to LGBM improves its performance, providing further evidence of CNN's spatial feature extraction capabilities.
As ride-hailing services become increasingly popular, being able to accurately predict demand for such services can help operators efficiently allocate drivers to customers, and reduce idle time, improve congestion, and enhance the passenger experience. This paper proposes UberNet, a deep learning Convolutional Neural Network for short-term prediction of demand for ride-hailing services. UberNet employs a multivariate framework that utilises a number of temporal and spatial features that have been found in the literature to explain demand for ride-hailing services. The proposed model includes two sub-networks that aim to encode the source series of various features and decode the predicting series, respectively. To assess the performance and effectiveness of UberNet, we use 9 months of Uber pickup data in 2014 and 28 spatial and temporal features from New York City. By comparing the performance of UberNet with several other approaches, we show that the prediction quality of the model is highly competitive. Further, UberNet's prediction performance is better when using economic, social and built environment features. This suggests that UberNet is more naturally suited to including complex motivators in making real-time passenger demand predictions for ride-hailing services.