Deep learning inspired by differential equations is a recent research trend, and it has marked the state of the art of many machine learning tasks. Among them, time-series modeling with neural controlled differential equations (NCDEs) is considered a breakthrough. In many cases, NCDE-based models not only provide better accuracy than recurrent neural networks (RNNs) but can also process irregular time series. In this work, we enhance NCDEs by redesigning their core part, i.e., generating a continuous path from a discrete time-series input. NCDEs typically use interpolation algorithms to convert discrete time-series samples into continuous paths. However, we propose to i) generate another latent continuous path using an encoder-decoder architecture, which corresponds to the interpolation process of NCDEs, i.e., our neural-network-based interpolation vs. the existing explicit interpolations, and ii) exploit the decoder for extrapolation beyond the time domain of the original data. Therefore, our NCDE design can use both interpolated and extrapolated information for downstream machine learning tasks. In our experiments with 5 real-world datasets and 12 baselines, our extrapolation- and interpolation-based NCDEs outperform existing baselines by non-trivial margins.
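A minimal sketch of the interpolation-as-a-network idea (the names `LatentPathGenerator`, the GRU-encoder/MLP-decoder split, and all layer sizes are illustrative assumptions, not the paper's architecture): an encoder summarizes the discrete observations into a latent code, and a decoder maps any query time, including times past the last observation, onto a point of a continuous latent path that an NCDE could then be driven with.

```python
import torch
import torch.nn as nn

class LatentPathGenerator(nn.Module):
    """Hypothetical encoder-decoder that turns discrete samples into a continuous
    latent path; query times past the observed range yield extrapolated values."""
    def __init__(self, in_dim, latent_dim=32):
        super().__init__()
        self.encoder = nn.GRU(in_dim + 1, latent_dim, batch_first=True)   # +1 for a time channel
        self.decoder = nn.Sequential(                                     # maps (summary, t) -> path value
            nn.Linear(latent_dim + 1, latent_dim), nn.Tanh(),
            nn.Linear(latent_dim, latent_dim),
        )

    def forward(self, times, values, query_times):
        # times: (B, T), values: (B, T, D), query_times: (B, Q)
        obs = torch.cat([times.unsqueeze(-1), values], dim=-1)
        _, h = self.encoder(obs)                        # (1, B, latent): summary of the series
        h = h[0].unsqueeze(1).expand(-1, query_times.size(1), -1)
        q = query_times.unsqueeze(-1)
        return self.decoder(torch.cat([h, q], dim=-1))  # (B, Q, latent): continuous latent path

# usage: query both in-range (interpolation) and out-of-range (extrapolation) times
gen = LatentPathGenerator(in_dim=3)
t = torch.linspace(0, 1, 10).unsqueeze(0)               # 10 observation times
x = torch.randn(1, 10, 3)                               # observed values
path = gen(t, x, torch.tensor([[0.25, 0.5, 1.2]]))      # 1.2 lies beyond the observed time domain
```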
Neural networks inspired by differential equations have proliferated over the past couple of years. Neural ordinary differential equations (NODEs) and neural controlled differential equations (NCDEs) are two representative examples. In theory, NCDEs provide better representation learning capability for time-series data than NODEs. In particular, NCDEs are known to be suitable for processing irregular time-series data. Whereas NODEs have been successfully extended by adopting attention, however, it has not yet been studied how to integrate attention into NCDEs. To this end, we present Attentive Neural Controlled Differential Equations (ANCDEs) for time-series classification and forecasting, where dual NCDEs are used: one for generating attention values and the other for evolving hidden vectors for a downstream machine learning task. We conduct experiments with three real-world time-series datasets and 10 baselines. After dropping some values, we also conduct experiments on irregular time series. Our method consistently shows the best accuracy in all cases. Our visualizations also show that the presented attention mechanism works as intended by focusing on crucial information.
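A rough sketch of the dual-NCDE idea, assuming a simple fixed-step Euler discretization of a controlled differential equation dz = f(z) dX: a bottom CDE produces per-channel attention scores that reweight the input path, and a top CDE evolves the hidden state along the attended path. For brevity a single attention vector is read off the bottom CDE's final state, whereas time-varying attention would be computed along its trajectory; all names here are illustrative, not the paper's.

```python
import torch
import torch.nn as nn

class CDEFunc(nn.Module):
    """Matrix-valued vector field f(z) in R^{h x d} for dz = f(z) dX."""
    def __init__(self, hidden, path_dim):
        super().__init__()
        self.hidden, self.path_dim = hidden, path_dim
        self.net = nn.Sequential(nn.Linear(hidden, 64), nn.Tanh(),
                                 nn.Linear(64, hidden * path_dim))

    def forward(self, z):                                  # z: (B, hidden)
        return self.net(z).view(-1, self.hidden, self.path_dim)

def euler_cde(func, z0, path):                             # path: (B, T, d)
    z = z0
    for k in range(path.size(1) - 1):
        dX = (path[:, k + 1] - path[:, k]).unsqueeze(-1)   # (B, d, 1)
        z = z + torch.bmm(func(z), dX).squeeze(-1)         # Euler step of dz = f(z) dX
    return z

B, T, d, h = 4, 20, 3, 16
X = torch.randn(B, T, d)                                   # (interpolated) input path
attn_func, hidden_func = CDEFunc(h, d), CDEFunc(h, d)
attn_head = nn.Linear(h, d)

# bottom NCDE: produce per-channel attention from the raw path
z_attn = euler_cde(attn_func, torch.zeros(B, h), X)
a = torch.sigmoid(attn_head(z_attn)).unsqueeze(1)          # (B, 1, d) attention scores
Y = a * X                                                  # attended path
# top NCDE: evolve the hidden state along the attended path
z_final = euler_cde(hidden_func, torch.zeros(B, h), Y)     # feed to a downstream classifier
```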
Traffic forecasting is one of the most popular spatio-temporal tasks in the field of machine learning. A prevalent approach in the field is to combine graph convolutional networks and recurrent neural networks for spatio-temporal processing. The competition is fierce, and many novel methods have been proposed. In this paper, we present the method of spatio-temporal graph neural controlled differential equations (STG-NCDE). Neural controlled differential equations (NCDEs) are a breakthrough concept for processing sequential data. We extend the concept and design two NCDEs: one for temporal processing and the other for spatial processing. After that, we combine them into a single framework. We conduct experiments with 6 benchmark datasets and 20 baselines. STG-NCDE shows the best accuracy in all cases, outperforming all 20 baselines by non-trivial margins.
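A toy sketch, under the same fixed-step Euler treatment of dH = F(H) dX, of coupling a graph convolution (spatial) with a per-node transformation (temporal) inside one CDE vector field; the actual STG-NCDE keeps two separate NCDEs and a more elaborate design, and every name below is an illustrative assumption.

```python
import torch
import torch.nn as nn

class SpatioTemporalField(nn.Module):
    """Vector field for dH = F(H) dX on a graph: an MLP transforms node hidden
    states (temporal part) and a graph convolution mixes them (spatial part)."""
    def __init__(self, hidden, path_dim):
        super().__init__()
        self.hidden, self.path_dim = hidden, path_dim
        self.temporal = nn.Sequential(nn.Linear(hidden, hidden), nn.Tanh())
        self.spatial = nn.Linear(hidden, hidden)
        self.out = nn.Linear(hidden, hidden * path_dim)

    def forward(self, A_hat, H):                       # A_hat: (N, N), H: (B, N, hidden)
        Ht = self.temporal(H)
        Hs = torch.einsum('nm,bmh->bnh', A_hat, self.spatial(Ht))   # graph convolution
        return self.out(torch.tanh(Hs)).view(*H.shape[:2], self.hidden, self.path_dim)

def stg_cde_euler(field, A_hat, H0, X):                # X: (B, N, T, d) per-node control path
    H = H0
    for k in range(X.size(2) - 1):
        dX = X[:, :, k + 1] - X[:, :, k]               # (B, N, d)
        H = H + torch.einsum('bnhd,bnd->bnh', field(A_hat, H), dX)
    return H                                           # final per-node hidden states

B, N, T, d, h = 2, 5, 12, 1, 8
A_hat = torch.softmax(torch.randn(N, N), dim=-1)        # stand-in for a normalized adjacency
X = torch.randn(B, N, T, d)                             # e.g. a traffic-speed path per sensor
H = stg_cde_euler(SpatioTemporalField(h, d), A_hat, torch.zeros(B, N, h), X)
```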
Owing to the remarkable development of deep learning techniques, there have been a series of efforts to build deep-learning-based climate models. Whereas most of them utilize recurrent neural networks and/or graph neural networks, we design a novel climate model based on two concepts, neural ordinary differential equations (NODEs) and diffusion equations. Many physical processes involving Brownian motion can be described by diffusion equations, and as a result they have been widely used for modeling climate. On the other hand, NODEs learn a latent governing ODE from data. In our presented method, we combine them into a single framework and propose a concept called neural diffusion equations (NDEs). Our NDE, equipped with a diffusion equation and one additional neural network to model inherent uncertainty, can learn an appropriate latent governing equation that best describes a given climate dataset. In our experiments with two real-world and one synthetic datasets and 11 baselines, our method consistently outperforms existing baselines by non-trivial margins.
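A minimal sketch of the neural-diffusion idea as described above, with illustrative names and sizes: the time derivative of the node states combines a graph-Laplacian diffusion term with a small neural network for the residual/uncertain part, and the system is integrated with explicit Euler steps.

```python
import torch
import torch.nn as nn

class NeuralDiffusion(nn.Module):
    """dh/dt = -alpha * L h + g(h): graph diffusion plus a learned residual term."""
    def __init__(self, dim):
        super().__init__()
        self.alpha = nn.Parameter(torch.tensor(0.5))             # learnable diffusivity
        self.g = nn.Sequential(nn.Linear(dim, 32), nn.Tanh(),    # models remaining/uncertain dynamics
                               nn.Linear(32, dim))

    def forward(self, L, h):                                     # L: (N, N) graph Laplacian, h: (N, dim)
        return -self.alpha * (L @ h) + self.g(h)

def integrate(model, L, h0, t0, t1, steps=50):
    """Explicit Euler integration of the neural diffusion equation."""
    h, dt = h0, (t1 - t0) / steps
    for _ in range(steps):
        h = h + dt * model(L, h)
    return h

N, dim = 10, 4
A = (torch.rand(N, N) < 0.3).float()
A = ((A + A.t()) > 0).float()
A.fill_diagonal_(0)
L = torch.diag(A.sum(1)) - A                                     # combinatorial Laplacian
h1 = integrate(NeuralDiffusion(dim), L, torch.randn(N, dim), 0.0, 1.0)
```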
Recommender systems are a long-standing research problem in data mining and machine learning. They are incremental in nature, as new user-item interaction logs arrive. In real-world applications, we need to periodically train a collaborative filtering algorithm to extract user/item embedding vectors and therefore, a time-series of embedding vectors can be naturally defined. We present a time-series forecasting-based upgrade kit (TimeKit), which works in the following way: it i) first decides a base collaborative filtering algorithm, ii) extracts user/item embedding vectors with the base algorithm from user-item interaction logs incrementally, e.g., every month, iii) trains our time-series forecasting model with the extracted time-series of embedding vectors, and then iv) forecasts the future embedding vectors and recommends with their dot-product scores, owing to a recent breakthrough in processing complicated time-series data, i.e., neural controlled differential equations (NCDEs). Our experiments with four real-world benchmark datasets show that the proposed time-series forecasting-based upgrade kit can significantly enhance existing popular collaborative filtering algorithms.
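A compact sketch of the four-step upgrade-kit workflow described above, with a plain GRU standing in for the NCDE-based forecaster (the forecaster choice and all names are illustrative assumptions): embeddings are extracted per period by a base collaborative filtering model, the embedding time series is forecast one step ahead, and recommendations are scored by dot products of the forecast user/item vectors.

```python
import torch
import torch.nn as nn

# i)-ii) assume a base CF model has produced per-period embeddings (random stand-ins here)
periods, n_users, n_items, dim = 6, 100, 50, 16
user_emb = torch.randn(periods, n_users, dim)   # one embedding matrix per month
item_emb = torch.randn(periods, n_items, dim)

class EmbeddingForecaster(nn.Module):
    """iii) forecast the next embedding vector from its history (GRU stand-in for an NCDE)."""
    def __init__(self, dim):
        super().__init__()
        self.rnn = nn.GRU(dim, 64, batch_first=True)
        self.head = nn.Linear(64, dim)

    def forward(self, series):                   # series: (B, periods, dim)
        out, _ = self.rnn(series)
        return self.head(out[:, -1])             # (B, dim): next-period embedding

f_user, f_item = EmbeddingForecaster(dim), EmbeddingForecaster(dim)
# iv) forecast future embeddings and recommend by dot-product scores
next_users = f_user(user_emb.permute(1, 0, 2))  # (n_users, dim)
next_items = f_item(item_emb.permute(1, 0, 2))  # (n_items, dim)
scores = next_users @ next_items.t()            # (n_users, n_items)
topk = scores.topk(10, dim=1).indices           # top-10 recommended items per user
```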
Many metropolitan cities in the U.S. are notorious for their severe shortage of parking spaces. To this end, we present a proactive, prediction-driven optimization framework to dynamically adjust parking prices. We use state-of-the-art deep learning technologies, such as neural ordinary differential equations (NODEs), to design our future parking-occupancy-rate prediction model given historical occupancy rates and price information. In addition, owing to the continuous characteristics of NODEs, we design a one-shot price optimization method given a pre-trained prediction model, which requires only one iteration to find the optimal solution. In other words, we optimize the price input to the pre-trained prediction model in order to achieve targeted occupancy rates for parking spaces. We conduct experiments with data collected over multiple years in San Francisco and Seattle. Our prediction model shows the best accuracy in comparison with various temporal and spatio-temporal forecasting models. Our one-shot optimization method greatly outperforms other black-box and white-box search methods in terms of search time and always returns the optimal price solution.
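The abstract's one-shot method exploits the structure of NODEs; as a generic stand-in, the sketch below optimizes the price input of a frozen prediction model by gradient descent toward a target occupancy, which conveys the "optimize the input of a pre-trained model" idea without being the one-shot algorithm itself. The predictor, its input layout, and all constants are assumptions.

```python
import torch
import torch.nn as nn

# frozen stand-in for the pre-trained occupancy predictor: (history, price) -> predicted occupancy
predictor = nn.Sequential(nn.Linear(24 + 1, 64), nn.ReLU(), nn.Linear(64, 1), nn.Sigmoid())
for p in predictor.parameters():
    p.requires_grad_(False)

history = torch.rand(1, 24)                      # past 24 occupancy readings
target = torch.tensor([[0.85]])                  # desired occupancy rate
price = torch.tensor([[2.0]], requires_grad=True)

opt = torch.optim.Adam([price], lr=0.05)
for _ in range(200):                             # gradient search over the price input only
    opt.zero_grad()
    occupancy = predictor(torch.cat([history, price], dim=1))
    loss = (occupancy - target).pow(2).mean()
    loss.backward()
    opt.step()

print(f"suggested price: {price.item():.2f}")
```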
Ordinary Differential Equations (ODE)-based models have become popular foundation models to solve many time-series problems. Combining neural ODEs with traditional RNN models has provided the best representation for irregular time series. However, ODE-based models require the trajectory of hidden states to be defined based on the initial observed value or the last available observation. This fact raises questions about how long the generated hidden state is sufficient and whether it is effective when long sequences are used instead of the typically used shorter sequences. In this article, we introduce CrossPyramid, a novel ODE-based model that aims to enhance the generalizability of sequences representation. CrossPyramid does not rely only on the hidden state from the last observed value; it also considers ODE latent representations learned from other samples. The main idea of our proposed model is to define the hidden state for the unobserved values based on the non-linear correlation between samples. Accordingly, CrossPyramid is built with three distinctive parts: (1) ODE Auto-Encoder to learn the best data representation. (2) Pyramidal attention method to categorize the learned representations (hidden state) based on the relationship characteristics between samples. (3) Cross-level ODE-RNN to integrate the previously learned information and provide the final latent state for each sample. Through extensive experiments on partially-observed synthetic and real-world datasets, we show that the proposed architecture can effectively model the long gaps in intermittent series and outperforms state-of-the-art approaches. The results show an average improvement of 10\% on univariate and multivariate datasets for both forecasting and classification tasks.
Recurrent neural networks (RNNs) such as long short-term memory networks (LSTMs) and gated recurrent units (GRUs) are a popular choice for modeling sequential data. Their gating mechanism permits weighting previous history encoded in a hidden state with new information from incoming observations. In many applications, such as medical records, observation times are irregular and carry important information. However, LSTMs and GRUs assume constant time intervals between observations. To address this challenge, we propose the continuous recurrent unit (CRU), a neural architecture that can naturally handle irregular time intervals between observations. The gating mechanism of the CRU employs the continuous formulation of the Kalman filter and alternates between (1) continuous latent state propagation according to a linear stochastic differential equation (SDE) and (2) latent state updates whenever a new observation comes in. In an empirical study, we show that the CRU can better interpolate irregular time series than models based on neural ordinary differential equations (neural ODEs). We also show that our model can infer dynamics from images and that the Kalman gain efficiently singles out candidates for valuable state updates from noisy observations.
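A bare-bones numerical sketch of the alternation the abstract describes, in the classical continuous-discrete Kalman filter setting: linear SDE propagation across irregular gaps between observation times, then a Kalman update at each observation. The actual CRU wraps this in a learned latent space and gating, which is omitted; the toy dynamics and noise settings below are assumptions.

```python
import numpy as np
from scipy.linalg import expm

def cd_kalman(times, obs, A, Q, H, R, m0, P0):
    """Continuous-discrete Kalman filter over irregularly spaced observations."""
    m, P = m0.copy(), P0.copy()
    means, prev_t = [], times[0]
    for t, y in zip(times, obs):
        dt = t - prev_t
        if dt > 0:                                   # (1) continuous propagation between observations
            F = expm(A * dt)
            m = F @ m
            P = F @ P @ F.T + Q * dt                 # crude accumulation of process noise
        S = H @ P @ H.T + R                          # (2) discrete update when the observation arrives
        K = P @ H.T @ np.linalg.inv(S)               # Kalman gain: weights the new evidence
        m = m + K @ (y - H @ m)
        P = (np.eye(len(m)) - K @ H) @ P
        means.append(m.copy())
        prev_t = t
    return np.array(means)

# toy damped oscillator observed at irregular times through its first coordinate
A = np.array([[0.0, 1.0], [-1.0, -0.1]])
times = np.array([0.0, 0.3, 1.1, 1.2, 2.5])
obs = [np.array([np.sin(t)]) for t in times]
means = cd_kalman(times, obs, A, Q=0.01 * np.eye(2), H=np.array([[1.0, 0.0]]),
                  R=np.array([[0.05]]), m0=np.zeros(2), P0=np.eye(2))
```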
Recurrent neural networks (RNNs) have brought a lot of advancements in sequence labeling tasks and sequence data. However, their effectiveness is limited when the observations in the sequence are irregularly sampled, where the observations arrive at irregular time intervals. To address this, continuous time variants of the RNNs were introduced based on neural ordinary differential equations (NODE). They learn a better representation of the data using the continuous transformation of hidden states over time, taking into account the time interval between the observations. However, they are still limited in their capability as they use discrete transformations and a fixed discrete number of layers (depth) over an input in the sequence to produce the output observation. We intend to address this limitation by proposing RNNs based on differential equations which model continuous transformations over both depth and time to predict an output for a given input in the sequence. Specifically, we propose continuous depth recurrent neural differential equations (CDR-NDE) which generalizes RNN models by continuously evolving the hidden states in both the temporal and depth dimensions. CDR-NDE considers two separate differential equations over each of these dimensions and models the evolution in the temporal and depth directions alternately. We also propose the CDR-NDE-heat model based on partial differential equations which treats the computation of hidden states as solving a heat equation over time. We demonstrate the effectiveness of the proposed models by comparing against the state-of-the-art RNN models on real world sequence labeling problems and data.
We introduce a novel gated recurrent unit (GRU) with a weighted time-delay feedback mechanism in order to improve the modeling of long-term dependencies in sequential data. This model is a discretized version of a continuous-time formulation of a recurrent unit, where the dynamics are governed by delay differential equations (DDEs). By considering a suitable time-discretization scheme, we propose $\tau$-GRU, a discrete-time gated recurrent unit with delay. We prove the existence and uniqueness of solutions for the continuous-time model, and we demonstrate that the proposed feedback mechanism can help improve the modeling of long-term dependencies. Our empirical results show that $\tau$-GRU can converge faster and generalize better than state-of-the-art recurrent units and gated recurrent architectures on a range of tasks, including time-series classification, human activity recognition, and speech recognition.
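A sketch of the general shape of a gated recurrent cell with an additional weighted delayed-state feedback term, as one might discretize a delay differential equation. The weighting scheme and exact update rule of $\tau$-GRU are not reproduced here; `DelayedGRUCell`, the fixed delay of 5 steps, and the tanh gate on the delayed state are illustrative assumptions.

```python
import torch
import torch.nn as nn

class DelayedGRUCell(nn.Module):
    """GRU-style cell whose update also sees h(t - tau) through a learned weighting."""
    def __init__(self, in_dim, hid_dim, tau=5):
        super().__init__()
        self.tau = tau
        self.cell = nn.GRUCell(in_dim, hid_dim)
        self.delay_gate = nn.Linear(hid_dim, hid_dim)   # weights the delayed feedback

    def forward(self, x_seq):                           # x_seq: (B, T, in_dim)
        B, T, _ = x_seq.shape
        hid = self.cell.hidden_size
        history = [torch.zeros(B, hid, device=x_seq.device)]
        h = history[0]
        for t in range(T):
            h_delayed = history[max(len(history) - 1 - self.tau, 0)]
            h = self.cell(x_seq[:, t], h) + torch.tanh(self.delay_gate(h_delayed))
            history.append(h)
        return h                                        # final state, e.g. for classification

out = DelayedGRUCell(in_dim=8, hid_dim=32)(torch.randn(4, 20, 8))
```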
An asynchronous time series is a multivariate time series in which all the channels are observed asynchronously and independently, making the time series extremely sparse when aligned. We often observe this effect in applications with complex observation processes, such as healthcare, climate science, and astronomy, to name a few. Because of their asynchronous nature, they pose a significant challenge to deep learning architectures, which presume that the time series presented to them are regularly sampled, fully observed, and aligned with respect to time. This paper proposes a novel framework that we call Deep Convolutional Set Functions (DCSF), which is highly scalable and efficient, for the asynchronous time series classification task. With recent advancements in deep learning architectures, we introduce a model that is invariant to the order in which the channels of the time series are presented. We explore convolutional neural networks, which are well studied for the closely related problem of classification of regularly sampled and fully observed time series, for encoding the set elements. We evaluate DCSF for ASTS classification and online (per time point) ASTS classification. Our extensive experiments on multiple real-world and synthetic datasets verify that the proposed model performs substantially better than a range of state-of-the-art models in terms of accuracy and run time.
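A DeepSets-style sketch of treating an asynchronous series as a set of (time, channel, value) triples encoded by a shared network and pooled, which conveys the permutation-invariance idea; DCSF itself uses a convolutional encoder and a more elaborate design, and the class and parameter names below are assumptions.

```python
import torch
import torch.nn as nn

class SetSeriesClassifier(nn.Module):
    """Encode each (time, one-hot channel, value) observation independently,
    pool over the set (order-invariant), then classify."""
    def __init__(self, n_channels, n_classes, hid=64):
        super().__init__()
        self.n_channels = n_channels
        self.phi = nn.Sequential(nn.Linear(2 + n_channels, hid), nn.ReLU(),
                                 nn.Linear(hid, hid))
        self.rho = nn.Sequential(nn.ReLU(), nn.Linear(hid, n_classes))

    def forward(self, times, channels, values, mask):
        # times/values/mask: (B, S), channels: (B, S) integer channel ids; S = padded set size
        ch = nn.functional.one_hot(channels, self.n_channels).float()
        obs = torch.cat([times.unsqueeze(-1), values.unsqueeze(-1), ch], dim=-1)
        enc = self.phi(obs) * mask.unsqueeze(-1)         # zero out padded elements
        pooled = enc.sum(dim=1)                          # permutation-invariant pooling
        return self.rho(pooled)

B, S, C = 3, 40, 5
logits = SetSeriesClassifier(C, n_classes=2)(
    torch.rand(B, S), torch.randint(0, C, (B, S)), torch.randn(B, S),
    (torch.rand(B, S) > 0.3).float())
```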
Signature-based techniques give mathematical insight into the interactions between complex streams of evolving data. These insights can be quite naturally translated into numerical approaches to understanding streamed data, and, perhaps because of their mathematical precision, have proved useful in analysing streamed data in situations where the data is irregular rather than stationary, and where the dimension of the data and the sample sizes are both moderate. Understanding streamed multi-modal data is exponential: a word of $n$ letters from an alphabet of size $d$ can be any one of $d^n$ messages. Signatures remove the exponential amount of noise that arises from sampling irregularity, but an exponential amount of information still remains. This survey aims to stay in the domain where that exponential scaling can be managed directly. Scalability issues are an important challenge in many problems, but would require another survey article and further ideas. This survey describes a range of contexts where the data sets are small enough to remove the possibility of massive machine learning, and where small sets of context-free and principled features can be used effectively. The mathematical nature of the tools can make their use intimidating to non-mathematicians. The examples presented in this article are intended to bridge this communication gap and to provide tractable working examples drawn from the machine learning context. Notebooks are available online for several of these examples. This survey builds on an earlier paper by Ilya Chevyrev and Andrey Kormilitzin, which had broadly similar aims at an earlier point in the development of this machinery. This article illustrates how the theoretical insights offered by signatures are simply realised in the analysis of application data, in a way that is largely agnostic to the data type.
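A small worked example of the first two signature levels of a piecewise-linear path, computed from their definition as iterated integrals: level one is the total increment, and level two is accumulated segment by segment via Chen's identity (running increment tensored with the new increment, plus the within-segment term).

```python
import numpy as np

def signature_level_1_2(path):
    """Level-1 and level-2 signature terms of a piecewise-linear path given as a (T, d) array,
    built up with Chen's identity over the linear segments."""
    increments = np.diff(path, axis=0)                     # (T-1, d) segment increments
    d = path.shape[1]
    s1 = np.zeros(d)                                       # running level-1 signature
    s2 = np.zeros((d, d))                                  # running level-2 signature
    for dx in increments:
        s2 += np.outer(s1, dx) + 0.5 * np.outer(dx, dx)    # Chen's identity for one segment
        s1 += dx
    return s1, s2

# example: a 2-d path tracing the unit circle once; the antisymmetric part of the
# level-2 term gives the signed (Levy) area, here approximately pi
t = np.linspace(0, 2 * np.pi, 200)
path = np.stack([np.cos(t), np.sin(t)], axis=1)
s1, s2 = signature_level_1_2(path)
levy_area = 0.5 * (s2[0, 1] - s2[1, 0])
print(s1, levy_area)
```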
While exogenous variables have a major impact on performance improvement in time-series analysis, inter-series correlation and time dependence among them are rarely considered in current continuous methods. The dynamical systems of multivariate time series could be modeled with complex unknown partial differential equations (PDEs), which play a prominent role in many disciplines of science and engineering. In this paper, we propose a continuous-time model for arbitrary-step prediction that learns an unknown PDE system in multivariate time series, whose governing equations are parameterized by self-attention and gated recurrent neural networks. The proposed model accounts for the exogenous variables and their effects on the target series. Importantly, with specially designed regularization guidance, the model can be reduced to a regularized ordinary differential equation (ODE) problem, which makes the PDE problem tractable to obtain numerical solutions and makes it feasible to predict multiple future values of the target series. Extensive experiments show that our proposed model achieves competitive accuracy against strong baselines: on average, it outperforms the best baseline by reducing RMSE by $9.85\%$ and MAE by $13.98\%$ for arbitrary-step prediction.
Deep-learning-based forecasting methods have become the method of choice in many applications of time-series prediction or forecasting, often outperforming other approaches. Consequently, over the past years, these methods have become ubiquitous in large-scale industrial forecasting applications and have consistently ranked among the best entries in forecasting competitions (e.g., M4 and M5). This practical success has further increased academic interest in understanding and improving deep forecasting methods. In this article, we provide an introduction and overview of the field: we present important building blocks for deep forecasting in some depth; using these building blocks, we then survey the breadth of the recent deep forecasting literature.
Irregular time-series data are prevalent in the real world and are challenging to model with simple recurrent neural networks (RNNs). Hence, a model that combines the use of ordinary differential equations (ODEs) and RNNs was proposed (ODE-RNN) to model irregular time series with accuracy, but at a high computational cost. In this paper, we propose an improvement in the runtime of ODE-RNN by using different, efficient batching strategies. Our experiments show that the new models reduce the runtime of ODE-RNN significantly, ranging from 2 times up to 49 times faster depending on the irregularity of the data, while maintaining comparable accuracy. Our models therefore make it possible to model larger irregular datasets.
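A minimal ODE-RNN reference implementation (without the batching optimizations the abstract targets), with illustrative sizes: the hidden state is evolved by a small ODE using a few explicit Euler steps across each gap between observation times, then updated by a GRU cell when the next observation arrives.

```python
import torch
import torch.nn as nn

class ODERNN(nn.Module):
    def __init__(self, in_dim, hid_dim, euler_steps=4):
        super().__init__()
        self.ode_func = nn.Sequential(nn.Linear(hid_dim, hid_dim), nn.Tanh(),
                                      nn.Linear(hid_dim, hid_dim))
        self.cell = nn.GRUCell(in_dim, hid_dim)
        self.euler_steps = euler_steps

    def forward(self, times, values):                  # times: (B, T), values: (B, T, in_dim)
        B, T, _ = values.shape
        h = torch.zeros(B, self.cell.hidden_size, device=values.device)
        prev_t = times[:, 0]
        for i in range(T):
            dt = (times[:, i] - prev_t).unsqueeze(-1) / self.euler_steps
            for _ in range(self.euler_steps):           # evolve h continuously across the gap
                h = h + dt * self.ode_func(h)
            h = self.cell(values[:, i], h)              # discrete jump at the observation
            prev_t = times[:, i]
        return h

irregular_t, _ = torch.sort(torch.rand(8, 30), dim=1)   # irregular observation times
h_final = ODERNN(in_dim=5, hid_dim=32)(irregular_t, torch.randn(8, 30, 5))
```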
Methods based on ordinary differential equations (ODEs) are widely used to build generative models of time-series. In addition to high computational overhead due to explicitly computing hidden states recurrence, existing ODE-based models fall short in learning sequence data with sharp transitions - common in many real-world systems - due to numerical challenges during optimization. In this work, we propose LS4, a generative model for sequences with latent variables evolving according to a state space ODE to increase modeling capacity. Inspired by recent deep state space models (S4), we achieve speedups by leveraging a convolutional representation of LS4 which bypasses the explicit evaluation of hidden states. We show that LS4 significantly outperforms previous continuous-time generative models in terms of marginal distribution, classification, and prediction scores on real-world datasets in the Monash Forecasting Repository, and is capable of modeling highly stochastic data with sharp temporal transitions. LS4 sets state-of-the-art for continuous-time latent generative models, with significant improvement of mean squared error and tighter variational lower bounds on irregularly-sampled datasets, while also being x100 faster than other baselines on long sequences.
We introduce a new family of deep neural network models. Instead of specifying a discrete sequence of hidden layers, we parameterize the derivative of the hidden state using a neural network. The output of the network is computed using a blackbox differential equation solver. These continuous-depth models have constant memory cost, adapt their evaluation strategy to each input, and can explicitly trade numerical precision for speed. We demonstrate these properties in continuous-depth residual networks and continuous-time latent variable models. We also construct continuous normalizing flows, a generative model that can train by maximum likelihood, without partitioning or ordering the data dimensions. For training, we show how to scalably backpropagate through any ODE solver, without access to its internal operations. This allows end-to-end training of ODEs within larger models.
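A minimal continuous-depth block in the spirit of the paragraph above, assuming the widely used torchdiffeq package for the black-box ODE solver; its adjoint variant of `odeint` provides the scalable backpropagation mentioned in the abstract. The hidden width, tolerances, and integration interval are illustrative choices.

```python
import torch
import torch.nn as nn
from torchdiffeq import odeint_adjoint as odeint   # pip install torchdiffeq

class ODEFunc(nn.Module):
    """Parameterizes the derivative of the hidden state: dh/dt = f(h, t; theta)."""
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 64), nn.Tanh(), nn.Linear(64, dim))

    def forward(self, t, h):
        return self.net(h)

class ODEBlock(nn.Module):
    """Continuous-depth block: integrate the hidden state from t=0 to t=1 with a
    black-box solver; gradients flow through the adjoint method."""
    def __init__(self, dim):
        super().__init__()
        self.func = ODEFunc(dim)
        self.t = torch.tensor([0.0, 1.0])

    def forward(self, h0):
        return odeint(self.func, h0, self.t, rtol=1e-4, atol=1e-5)[-1]

block = ODEBlock(dim=16)
out = block(torch.randn(32, 16))        # drop-in replacement for a stack of residual layers
loss = out.pow(2).mean()
loss.backward()
```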
As power systems evolve into more intelligent and interactive systems with greater flexibility and a larger penetration of renewable energy sources, demand prediction at a short-term resolution will inevitably become more and more crucial in designing and managing the future grid, especially at the individual household level. Projecting the demand for electricity of a single energy user, as opposed to the aggregated power consumption of residential load on a wide scale, is difficult because of a considerable number of volatile and uncertain factors. This paper proposes a customized GRU (Gated Recurrent Unit) and Long Short-Term Memory (LSTM) architecture to address this challenging problem. LSTM and GRU are comparatively newer and among the most well-adopted deep learning approaches. The electricity consumption datasets were obtained from individual household smart meters. The comparison shows that the LSTM model performs better for home-level forecasting than the alternative prediction technique, GRU, in this case. To contrast the NN-based models with a conventional statistical technique, an ARIMA-based model was also developed and benchmarked against the LSTM and GRU model outcomes in this study, showing the performance of the proposed models on the collected time-series data.
Dynamical systems are widely used in the natural sciences, such as physics, biology, and chemistry, as well as in engineering disciplines such as circuit analysis, computational fluid dynamics, and control. For simple systems, the differential equations governing the dynamics can be derived by applying fundamental physical laws. For more complex systems, however, this approach becomes exceedingly difficult. Data-driven modeling is an alternative paradigm that seeks an approximation of the dynamics of a system using observations of the true system. In recent years, there has been increased interest in data-driven modeling techniques; in particular, neural networks have proven to provide an effective framework for solving a wide range of tasks. This paper provides a survey of the different ways of constructing models of dynamical systems using neural networks. In addition to the basic overview, we review the related literature and outline the most significant challenges from numerical simulation that this modeling paradigm must overcome. Based on the reviewed literature and identified challenges, we provide a discussion on promising research areas.
The scalability of recurrent neural networks (RNNs) is hindered by the sequential dependence of each time step's computation on the previous time step's output. Therefore, one way to speed up and scale RNNs is to reduce the computation required at each time step, independent of model size and task. In this paper, we propose a model that reformulates the gated recurrent unit (GRU) as an event-based, activity-sparse model that we call the Event-based GRU (EGRU), where units compute updates only upon receipt of input events (event-based) from other units. When combined with having only a small fraction of the units active at a time (activity-sparse), this model has the potential to be vastly more compute-efficient than current RNNs. Notably, the activity-sparsity in our model also translates into sparse parameter updates during gradient descent, extending this compute efficiency to the training phase. We show that the EGRU demonstrates competitive performance compared to state-of-the-art recurrent network models on real-world tasks, including language modeling, while naturally maintaining high activity sparsity during inference and training. This sets the stage for the next generation of recurrent networks that are scalable and better suited to novel neuromorphic hardware.
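A toy sketch of the event-based idea: a GRU's hidden state is updated internally every step, but a unit only emits an output (an event) when its state exceeds a threshold, so downstream computation can be restricted to the sparse set of active units. The thresholding, reset, and surrogate-gradient details of the actual EGRU are omitted, and all names below are illustrative.

```python
import torch
import torch.nn as nn

class EventGRU(nn.Module):
    """GRU whose visible output is sparse: a unit emits its value only when it
    crosses a per-unit threshold (an 'event'); otherwise downstream layers see zero."""
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.cell = nn.GRUCell(in_dim, hid_dim)
        self.threshold = nn.Parameter(torch.full((hid_dim,), 0.5))

    def forward(self, x_seq):                            # x_seq: (B, T, in_dim)
        B, T, _ = x_seq.shape
        h = torch.zeros(B, self.cell.hidden_size, device=x_seq.device)
        outputs, sparsity = [], []
        for t in range(T):
            h = self.cell(x_seq[:, t], h)
            events = (h.abs() > self.threshold).float()  # which units fire this step
            outputs.append(h * events)                   # non-firing units contribute nothing
            sparsity.append(1.0 - events.mean().item())
        return torch.stack(outputs, dim=1), sum(sparsity) / T

out, avg_sparsity = EventGRU(10, 64)(torch.randn(4, 25, 10))
print(f"average activity sparsity: {avg_sparsity:.2f}")
```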