Large collections of time series data are commonly organized into cross-sectional structures with different levels of aggregation; examples include product and geographical groupings. A necessary condition for coherent decision making and planning with such datasets is that forecasts for the disaggregated series add up accurately to the forecasts of the aggregated series, which has motivated the creation of novel hierarchical forecasting algorithms. The growing interest of the machine learning community in cross-sectional hierarchical forecasting systems means we are at an opportune moment to ensure that scientific endeavors are grounded in sound baselines. We therefore present the HierarchicalForecast library, which contains preprocessed publicly available datasets, evaluation metrics, and a compiled set of statistical baseline models. Our Python-based framework aims to bridge the gap between statistical and econometric modeling and machine learning forecasting research. Code and documentation are available at https://github.com/nixtla/hierarchicalforecast.
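To make the coherence requirement concrete, here is a minimal NumPy sketch of bottom-up reconciliation through a summing matrix, the constraint that hierarchical forecasting methods enforce. It is an illustration under assumed names (S, y_hat), not the HierarchicalForecast API.

```python
import numpy as np

# Minimal sketch of the aggregation constraint behind hierarchical
# reconciliation (illustrative only; not the hierarchicalforecast API).
# A two-level hierarchy: total = region_a + region_b.
S = np.array([
    [1, 1],   # total
    [1, 0],   # region_a
    [0, 1],   # region_b
])

# Independent base forecasts for all three series; note they are
# incoherent: 60 + 40 != 105.
y_hat = np.array([105.0, 60.0, 40.0])

# Bottom-up reconciliation: keep only the bottom-level forecasts and
# re-derive every aggregate from them, guaranteeing coherence.
bottom = y_hat[1:]
y_tilde = S @ bottom   # coherent forecasts: [100., 60., 40.]
```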
Recent progress in neural forecasting has accelerated improvements in the performance of large-scale forecasting systems. Yet, long-horizon forecasting remains a very difficult task. Two common challenges afflicting the task are the volatility of the predictions and their computational complexity. We introduce N-HiTS, a model which addresses both challenges by combining novel hierarchical interpolation and multi-rate data sampling techniques. These techniques enable the proposed method to assemble its predictions sequentially, emphasizing components with different frequencies and scales while decomposing the input signal and synthesizing the forecast. We prove that, under smoothness conditions, the hierarchical interpolation technique can efficiently approximate arbitrarily long horizons. Additionally, we conduct extensive large-scale dataset experiments from the long-horizon forecasting literature, demonstrating the advantages of our method over state-of-the-art approaches: N-HiTS provides an average accuracy improvement of 16% over the latest Transformer architectures while reducing computation time by a factor of 50. Our code is available at https://bit.ly/3jlibp8.
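The following toy sketch shows the two ingredients named above in isolation: a multi-rate view of the input obtained by pooling, and hierarchical interpolation of a handful of forecast coefficients onto a long horizon. It is a simplified NumPy rendition under our own assumptions (the helper names and the stand-in coefficients are hypothetical), not the N-HiTS implementation.

```python
import numpy as np

def multi_rate_pool(x, kernel):
    """Subsample the input window by max-pooling with the given kernel size."""
    n = len(x) // kernel
    return x[: n * kernel].reshape(n, kernel).max(axis=1)

def interpolate_forecast(theta, horizon):
    """Expand a few forecast coefficients onto the full horizon."""
    knots = np.linspace(0, horizon - 1, num=len(theta))
    return np.interp(np.arange(horizon), knots, theta)

x = np.sin(np.arange(96) / 8.0)            # toy input window
coarse = multi_rate_pool(x, kernel=8)       # low-rate view of the signal
theta = coarse[-3:]                         # pretend an MLP produced 3 coefficients
forecast = interpolate_forecast(theta, 24)  # 24-step forecast from 3 numbers
```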
Hierarchical forecasting problems arise when time series have a natural group structure and predictions at multiple levels of aggregation and disaggregation across the groups are needed. In such problems, it is often desired to satisfy the aggregation constraints of a given hierarchy, referred to in the literature as hierarchical coherence. Maintaining hierarchical coherence while producing accurate forecasts can be a challenging problem, especially in the case of probabilistic forecasting. We present a novel method capable of accurate and coherent probabilistic forecasts for hierarchical time series. We call it the Deep Poisson Mixture Network (DPMN). It relies on the combination of neural networks and a statistical model for the joint distribution of the hierarchical multivariate time series structure. By construction, the model guarantees hierarchical coherence and provides simple rules for aggregation and disaggregation of the predictive distributions. We perform an extensive empirical evaluation comparing the DPMN to other state-of-the-art methods that produce hierarchically coherent probabilistic forecasts on multiple public datasets. Compared to existing coherent probabilistic models, we obtain relative improvements in the overall Continuous Ranked Probability Score (CRPS) of 11.8% on Australian domestic tourism data, 24.2% on the Favorita grocery sales dataset, and 6.9% on the San Francisco Bay Area highway traffic dataset.
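A minimal sketch of the coherence-by-construction idea, assuming a fixed Poisson mixture over the bottom-level series; in the DPMN the mixture parameters come from a neural network, and all names and numbers here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sampling bottom-level counts from a Poisson mixture and aggregating them
# through a summing matrix makes every sample hierarchically coherent.
weights = np.array([0.3, 0.7])            # mixture weights (learned, in DPMN)
rates = np.array([[2.0, 5.0],             # Poisson rates per bottom series,
                  [4.0, 1.0]])            # one row per mixture component

S = np.array([[1, 1], [1, 0], [0, 1]])    # total = bottom_0 + bottom_1

def coherent_sample():
    k = rng.choice(len(weights), p=weights)   # pick a mixture component
    bottom = rng.poisson(rates[k])            # sample bottom-level counts
    return S @ bottom                         # aggregates sum exactly

samples = np.stack([coherent_sample() for _ in range(1000)])
assert (samples[:, 0] == samples[:, 1] + samples[:, 2]).all()
```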
IceCube, a cubic-kilometer array of optical sensors for detecting atmospheric and astrophysical neutrinos between 1 GeV and 1 PeV, is deployed 1.45 km to 2.45 km below the surface of the ice sheet at the South Pole. The classification and reconstruction of events from the in-ice detectors play a central role in IceCube data analysis. Reconstructing and classifying events is a challenge due to the detector geometry, the inhomogeneous scattering and absorption of light in the ice, and, below 100 GeV, the relatively low number of signal photons produced per event. To address this challenge, IceCube events can be represented as point cloud graphs, with a graph neural network (GNN) serving as the classification and reconstruction method. The GNN is capable of distinguishing neutrino events from cosmic-ray backgrounds, classifying different neutrino event types, and reconstructing the deposited energy, direction, and interaction vertex. Based on simulation, we provide a comparison in the 1-100 GeV energy range to the state-of-the-art maximum likelihood techniques used in current IceCube analyses, including the effects of known systematic uncertainties. For neutrino event classification, the GNN improves the signal efficiency by 18% at a fixed false positive rate (FPR) compared to current IceCube methods. Alternatively, at a fixed signal efficiency, the GNN reduces the FPR by more than a factor of 8 (to below half a percent). For the reconstruction of energy, direction, and interaction vertex, the resolution improves by 13%-20% on average compared to the current maximum likelihood techniques. When run on a GPU, the GNN is capable of processing IceCube events at a rate close to the median IceCube trigger rate of 2.7 kHz, which opens the possibility of using low-energy neutrinos in online searches for transient events.
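As a rough illustration of the representation described above, the sketch below treats each sensor hit as a node of a k-nearest-neighbour graph and performs one round of mean-aggregation message passing; it is a toy stand-in with random weights, not the IceCube model.

```python
import numpy as np

rng = np.random.default_rng(1)
pos = rng.normal(size=(30, 3))      # (x, y, z) positions of 30 hit sensors
feat = rng.normal(size=(30, 4))     # per-hit features, e.g. charge and time

# Build a k-nearest-neighbour graph over the hits.
k = 5
d = np.linalg.norm(pos[:, None] - pos[None, :], axis=-1)
neighbours = np.argsort(d, axis=1)[:, 1 : k + 1]   # skip self at index 0

# One message-passing step: aggregate neighbour features, then update.
W = rng.normal(size=(4, 4)) * 0.1                  # stand-in for learned weights
messages = feat[neighbours].mean(axis=1)           # mean over each node's neighbours
feat = np.tanh(feat @ W + messages @ W)            # updated node embeddings

event_embedding = feat.mean(axis=0)  # pooled embedding fed to a classifier head
```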
While the capabilities of autonomous systems have been steadily improving in recent years, these systems still struggle to rapidly explore previously unknown environments without the aid of GPS-assisted navigation. The DARPA Subterranean (SubT) Challenge aimed to fast-track the development of autonomous exploration systems by evaluating their performance in real-world underground search-and-rescue scenarios. Subterranean environments present a plethora of challenges for robotic systems, such as limited communications, complex topology, visually-degraded sensing, and harsh terrain. The presented solution enables long-term autonomy with minimal human supervision by combining a powerful and independent single-agent autonomy stack with higher-level mission management operating over a flexible mesh network. The autonomy suite deployed on quadruped and wheeled robots was fully independent, freeing the human supervisor to loosely oversee the mission and make high-impact strategic decisions. We also discuss lessons learned from fielding our system at the SubT Final Event, relating to vehicle versatility, system adaptability, and re-configurable communications.
Attention mechanisms form a core component of several successful deep learning architectures, and are based on one key idea: "The output depends only on a small (but unknown) segment of the input." In several practical applications like image captioning and language translation, this is mostly true. In trained models with an attention mechanism, the outputs of an intermediate module that encodes the segment of input responsible for the output are often used as a way to peek into the "reasoning" of the network. We make such a notion more precise for a variant of the classification problem that we term selective dependence classification (SDC) when used with attention model architectures. Under such a setting, we demonstrate various error modes where an attention model can be accurate but fail to be interpretable, and show that such models do occur as a result of training. We illustrate various situations that can accentuate and mitigate this behaviour. Finally, we use our objective definition of interpretability for SDC tasks to evaluate a few attention model learning algorithms designed to encourage sparsity, and demonstrate that these algorithms help improve interpretability.
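For reference, a minimal sketch of the mechanism in question: scaled dot-product attention, whose weight vector is the "peek" the abstract refers to (names and shapes are illustrative).

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention; returns output and attention weights."""
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over input positions
    return weights @ V, weights

rng = np.random.default_rng(0)
Q = rng.normal(size=(1, 8))      # one query
K = rng.normal(size=(10, 8))     # ten input positions
V = rng.normal(size=(10, 8))

out, w = attention(Q, K, V)
print(w.argmax())  # the position the model "attends to"; the abstract's point
                   # is that this peek can be accurate yet fail to be interpretable
```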
Recent advances in deep learning have enabled us to address the curse of dimensionality (COD) by solving problems in higher dimensions. A subset of such approaches to addressing the COD has led us to solving high-dimensional PDEs. This has opened doors to solving a variety of real-world problems ranging from mathematical finance to stochastic control for industrial applications. Although feasible, these deep learning methods are still constrained by training time and memory. To tackle these shortcomings, we demonstrate that Tensor Neural Networks (TNN) can provide significant parameter savings while attaining the same accuracy as a classical Dense Neural Network (DNN). In addition, we show that TNN can be trained faster than DNN for the same accuracy. Besides TNN, we introduce the Tensor Network Initializer (TNN Init), a weight initialization scheme that leads to faster convergence with smaller variance for an equivalent parameter count compared to a DNN. We benchmark TNN and TNN Init by applying them to solve the parabolic PDE associated with the Heston model, which is widely used in financial pricing theory.
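A back-of-the-envelope sketch of where the parameter savings come from, using a simple low-rank factorization as a stand-in for a tensorized layer; the paper's TNN construction may differ.

```python
import numpy as np

# Parameter count of a dense layer vs. a factored ("tensorized") layer.
m, n, r = 1024, 1024, 16

dense_params = m * n                  # full weight matrix of a Dense layer
factored_params = m * r + r * n       # W ~= A @ B with A (m x r), B (r x n)
print(dense_params, factored_params)  # 1048576 vs 32768 (~32x fewer)

# The factored layer applies in two cheap steps with the same in/out shapes:
rng = np.random.default_rng(0)
A = rng.normal(size=(m, r)) / np.sqrt(m)
B = rng.normal(size=(r, n)) / np.sqrt(r)
x = rng.normal(size=m)
y = (x @ A) @ B                       # forward pass through the factored layer
```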
Artificial neural networks can learn complex, salient data features to achieve a given task. On the opposite end of the spectrum, mathematically grounded methods such as topological data analysis allow users to design analysis pipelines fully aware of data constraints and symmetries. We introduce a class of persistence-based neural network layers. Persistence-based layers allow the users to easily inject knowledge about symmetries (equivariance) respected by the data, are equipped with learnable weights, and can be composed with state-of-the-art neural architectures.
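As one example of the topological summary such a layer could consume, here is a minimal sketch, assuming a sublevel-set filtration of a 1-D signal, of 0-dimensional persistence pairs computed with the elder rule; the learnable-weight part of the proposed layers is omitted.

```python
import numpy as np

def persistence_pairs(f):
    """0-dim sublevel-set persistence of a 1-D signal via union-find."""
    order = np.argsort(f)                 # process samples by increasing value
    parent = {}

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    birth = {}                            # component root -> birth value
    pairs = []
    for i in order:
        parent[i] = i
        birth[i] = f[i]
        for j in (i - 1, i + 1):          # merge with already-active neighbours
            if j in parent:
                ri, rj = find(i), find(j)
                if ri == rj:
                    continue
                # Elder rule: the younger component (later birth) dies now.
                old, young = (ri, rj) if birth[ri] <= birth[rj] else (rj, ri)
                if birth[young] < f[i]:   # skip zero-persistence pairs
                    pairs.append((float(birth[young]), float(f[i])))
                parent[young] = old
    return pairs                           # the global-min component never dies

signal = np.array([3.0, 1.0, 2.0, 0.0, 4.0])
print(persistence_pairs(signal))           # [(1.0, 2.0)]
```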
KL-regularized reinforcement learning from expert demonstrations has proved successful in improving the sample efficiency of deep reinforcement learning algorithms, allowing them to be applied to challenging physical real-world tasks. However, we show that KL-regularized reinforcement learning with behavioral reference policies derived from expert demonstrations can suffer from pathological training dynamics that can lead to slow, unstable, and suboptimal online learning. We show empirically that the pathology occurs for commonly chosen behavioral policy classes and demonstrate its impact on sample efficiency and online policy performance. Finally, we show that the pathology can be remedied by non-parametric behavioral reference policies and that this allows KL-regularized reinforcement learning to significantly outperform state-of-the-art approaches on a variety of challenging locomotion and dexterous hand manipulation tasks.
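For concreteness, a minimal sketch of the KL-regularized objective under the common choice of diagonal Gaussian policies (all symbols are illustrative): the environment reward is penalized by the divergence from the behavioral reference policy, which is where the pathology described above enters when the reference is a poorly calibrated parametric fit.

```python
import numpy as np

def gaussian_kl(mu_p, sigma_p, mu_q, sigma_q):
    """KL(p || q) between diagonal Gaussian policies, summed over dimensions."""
    return np.sum(
        np.log(sigma_q / sigma_p)
        + (sigma_p**2 + (mu_p - mu_q) ** 2) / (2.0 * sigma_q**2)
        - 0.5
    )

alpha = 0.1                                           # KL regularizer strength
mu_pi, sigma_pi = np.array([0.2]), np.array([0.5])    # online policy at state s
mu_b, sigma_b = np.array([0.0]), np.array([0.3])      # behavioral reference at s

reward = 1.0                                          # environment reward r(s, a)
# KL-regularized reward the agent actually optimizes:
regularized = reward - alpha * gaussian_kl(mu_pi, sigma_pi, mu_b, sigma_b)
```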
Three main points: 1. Data Science (DS) will be increasingly important to heliophysics; 2. Methods of heliophysics science discovery will continually evolve, requiring the use of learning technologies [e.g., machine learning (ML)] that are applied rigorously and that are capable of supporting discovery; and 3. To grow with the pace of data, technology, and workforce changes, heliophysics requires a new approach to the representation of knowledge.