Information on grass growth over a year is essential for some models that simulate the use of this resource to feed animals on pasture or in the barn with hay or grass silage. Unfortunately, this information is rarely available. The challenge is to reconstruct grass growth from two sources of information: usual daily climate data (rainfall, radiation, etc.) and cumulative growth over the year. We must be able to capture the effect of seasonal climatic events, which are known to distort the growth curve within the year. In this paper, we formulate this challenge as the problem of disaggregating the cumulative growth into a time series. To address this problem, our method applies time series forecasting using climate information and grass growth from previous time steps. Several variants of the method are proposed and compared experimentally using a database generated from a grassland process-based model. The results show that our method can accurately reconstruct the time series, whether or not the cumulative growth information is used.
Groundwater level forecasting is an applied time series forecasting task with important societal impact: it supports optimized water management and the prevention of certain natural disasters, such as floods or severe droughts. Machine learning methods have been reported in the literature for this task, but they focus only on predicting the groundwater level at a single location. A global forecasting method aims to exploit groundwater level time series from several locations to produce predictions at one or several sites at a time. Given the success of global forecasting methods in well-known competitions, it makes sense to evaluate them on groundwater level prediction and to see how they compare with local methods. In this work, we created a dataset of 1026 groundwater level time series. Each time series consists of daily measurements of the groundwater level and two exogenous variables, rainfall and evapotranspiration. The dataset is made available to the community for reproducibility and further evaluation. To identify the configuration that most effectively predicts groundwater levels across the full set of time series, we compare different predictors, including local and global time series forecasting methods. We also assess the impact of the exogenous variables. Our analysis of the results shows that the best predictions are obtained by a global method trained on past groundwater levels and rainfall data.
Mid-term (several months to a year) power consumption forecasting is a major challenge in the energy sector, in particular when probabilistic forecasting is considered. We propose a new modelling approach that incorporates trend, seasonality, and weather conditions as explanatory variables in a shallow neural network with an autoregressive feature. We obtain excellent forecasts when applying it to a one-year test set of daily power consumption in New England. The quality of the resulting probabilistic power consumption forecasts is verified, on the one hand, by comparing the results with other standard density forecasting models and, on the other hand, by considering measures frequently used in the energy sector, such as the pinball loss and confidence interval (CI) backtesting.
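As a point of reference for the pinball loss mentioned above, here is a minimal sketch of how it is typically computed; the data and quantile level are placeholders, not values from the paper.

```python
import numpy as np

def pinball_loss(y_true, y_pred, q):
    """Average pinball (quantile) loss at quantile level q in (0, 1)."""
    diff = np.asarray(y_true) - np.asarray(y_pred)
    return np.mean(np.maximum(q * diff, (q - 1.0) * diff))

# Toy example: score a 0.9-quantile forecast of daily consumption
print(pinball_loss([100.0, 120.0, 95.0], [110.0, 115.0, 100.0], q=0.9))
```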
The recent availability of electricity consumption data at different scales opens up new challenges and highlights the need for new techniques that leverage the information available at finer scales to improve forecasts at broader scales. In this work, we exploit the similarity between this hierarchical forecasting problem and multi-scale transfer learning. We develop two hierarchical transfer learning methods, based respectively on the stacking of generalized additive models and random forests, and on the use of expert aggregation. We apply these methods to two electricity load forecasting use cases: smart-meter data in the first case and regional data in the second. For both use cases, we compare the performance of our methods with that of benchmark algorithms, and we investigate their behaviour using variable importance analysis. Our results show the interest of both approaches, which lead to significant improvements in forecasting.
Algorithms that involve both forecasting and optimization are at the core of solutions to many difficult real-world problems, such as in supply chains (inventory optimization), traffic, and in the transition towards carbon-free energy generation in battery/load/production scheduling in sustainable energy systems. Typically, in these scenarios we want to solve an optimization problem that depends on unknown future values, which therefore need to be forecast. As both forecasting and optimization are difficult problems in their own right, relatively little research has been done in this area. This paper presents the findings of the "IEEE-CIS Technical Challenge on Predict+Optimize for Renewable Energy Scheduling", held in 2021. We present a comparison and evaluation of the seven highest-ranked solutions in the competition, to provide researchers with a benchmark problem and to establish the state of the art for this benchmark, with the aim of fostering and facilitating research in this area. The competition used data from the Monash Microgrid, as well as weather data and energy market data. It then focused on two main challenges: forecasting renewable energy production and demand, and obtaining an optimal schedule for the activities (lectures) and on-site batteries that leads to the lowest cost of energy. The most accurate forecasts were obtained by gradient-boosted tree and random forest models, and optimization was mostly performed using mixed integer linear and quadratic programming. The winning method predicted different scenarios and optimized over all scenarios jointly using a sample average approximation method.
Forecasting time series with extreme events has been a challenging and prevalent research topic, especially when the time series data are affected by complicated uncertain factors, such as is the case in hydrologic prediction. Diverse traditional and deep learning models have been applied to discover the nonlinear relationships and recognize the complex patterns in these types of data. However, existing methods usually ignore the negative influence of imbalanced data, or severe events, on model training. Moreover, methods are usually evaluated on a small number of generally well-behaved time series, which does not show their ability to generalize. To tackle these issues, we propose a novel probability-enhanced neural network model, called NEC+, which concurrently learns extreme and normal prediction functions and a way to choose among them via selective back propagation. We evaluate the proposed model on the difficult 3-day ahead hourly water level prediction task applied to 9 reservoirs in California. Experimental results demonstrate that the proposed model significantly outperforms state-of-the-art baselines and exhibits superior generalization ability on data with diverse distributions.
Platelet products are both expensive and have very short shelf lives. As usage rates for platelets are highly variable, the effective management of platelet demand and supply is very important yet challenging. The primary goal of this paper is to present an efficient forecasting model for platelet demand at Canadian Blood Services (CBS). To accomplish this goal, four different demand forecasting methods are utilized and evaluated: ARIMA (autoregressive integrated moving average), Prophet, lasso regression (least absolute shrinkage and selection operator), and LSTM (long short-term memory) networks. We use a large clinical dataset from a centralized blood distribution centre serving four hospitals in Hamilton, Ontario, spanning 2010 to 2018 and consisting of daily platelet transfusions along with information such as product specifications, recipients' characteristics, and recipients' laboratory test results. This study is the first to apply methods ranging from statistical time series models to data-driven regression and a machine learning technique to platelet transfusion, using clinical predictors and different amounts of data. We find that the multivariate approaches have the highest accuracy in general; however, if sufficient data are available, a simpler time series approach such as ARIMA appears to be sufficient. We also comment on how to choose clinical indicators (inputs) for the multivariate models.
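For the ARIMA baseline, a minimal sketch with statsmodels follows; the synthetic demand series and the (p, d, q) order are purely illustrative and do not reflect the CBS configuration.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Synthetic daily platelet demand stand-in (the study uses CBS clinical data)
rng = np.random.default_rng(42)
demand = pd.Series(
    20 + 5 * np.sin(np.arange(365) * 2 * np.pi / 7) + rng.normal(0, 2, 365),
    index=pd.date_range("2017-01-01", periods=365, freq="D"),
)

model = ARIMA(demand, order=(2, 0, 1)).fit()   # order chosen only for illustration
print(model.forecast(steps=7))                 # one-week-ahead demand forecast
```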
Forecasts help businesses allocate resources and achieve objectives. At LinkedIn, product owners use forecasts to set business targets, track outlook, and monitor health. Engineers use forecasts to efficiently provision hardware. Developing a forecasting solution to meet these needs requires accurate and interpretable forecasts on diverse time series, at sub-daily to quarterly frequencies. We present Greykite, an open-source Python library for forecasting that has been deployed on more than twenty use cases at LinkedIn. Its flagship algorithm, Silverkite, provides interpretable, fast, and highly flexible univariate forecasts that capture effects such as growth and seasonality, autocorrelation, holidays, and regressors. The library enables self-serve accuracy and trust by facilitating data exploration, model configuration, execution, and interpretation. Our benchmark results show off-the-shelf speed and accuracy on datasets from a variety of domains. Over the past two years, Greykite forecasts have been trusted by finance, engineering, and product teams for resource planning and allocation, target setting and progress tracking, anomaly detection, and root-cause analysis. We expect Greykite to be useful to forecasting practitioners with similar applications who need accurate, interpretable forecasts that capture the complex dynamics common to time series related to human activity.
Distributed, small-scale solar photovoltaic (PV) systems are being installed at a rapidly increasing rate. This can have significant impacts on distribution networks and energy markets. As a result, there is a strong need for improved forecasting of the generation of these systems at different time resolutions and horizons. However, the performance of a forecasting model depends on the resolution and the horizon. In this context, forecast combinations (ensembles), which combine the forecasts of multiple models into a single forecast, can be robust. Therefore, in this paper we provide comparisons of, and insights into, the performance of five state-of-the-art forecasting models and existing forecast combinations at multiple resolutions and horizons. We propose a forecast combination approach based on particle swarm optimization (PSO) that enables a forecaster to produce accurate forecasts for the task at hand by weighting the forecasts produced by the individual models. Furthermore, we compare the performance of the proposed combination approach with that of existing forecast combination methods. A comprehensive evaluation is conducted using a real-world PV power dataset measured at 25 houses across three locations in the United States. The results across four different resolutions and four different horizons show that the PSO-based forecast combination approach outperforms the use of any individual forecasting model and the other forecast combinations, with a 3.81% reduction in mean absolute scaled error relative to the best-performing individual model. Our approach enables solar forecasters to produce accurate forecasts for their applications regardless of the forecast resolution or horizon.
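A minimal sketch of the PSO idea for forecast combination appears below: a toy particle swarm searches for weights that minimize the mean absolute error of the weighted combination on a validation window. The constants, function names, and objective are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

def combo_mae(w, preds, y):
    """MAE of the weighted combination; weights normalized to sum to 1."""
    w = np.abs(w)
    w = w / w.sum()
    return np.mean(np.abs(y - w @ preds))

def pso_weights(preds, y, n_particles=30, n_iter=200, seed=0):
    """Toy PSO over combination weights. preds: (n_models, n_samples)."""
    rng = np.random.default_rng(seed)
    n_models = preds.shape[0]
    x = rng.random((n_particles, n_models))            # particle positions
    v = np.zeros_like(x)                               # particle velocities
    pbest = x.copy()
    pbest_f = np.array([combo_mae(p, preds, y) for p in x])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, n_models))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = x + v
        f = np.array([combo_mae(p, preds, y) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[pbest_f.argmin()].copy()
    w = np.abs(gbest)
    return w / w.sum()

# Usage: weights = pso_weights(validation_preds, validation_actuals)
```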
Forecasting fund performance is beneficial to both investors and fund managers, yet it is a challenging task. In this paper, we test whether deep learning models can forecast fund performance more accurately than traditional statistical techniques. Fund performance is typically evaluated by the Sharpe ratio, which represents risk-adjusted performance and ensures meaningful comparability across funds. We calculated annualized Sharpe ratios from the monthly return time series of more than 600 open-ended mutual funds investing in listed large-cap US equities. We find that long short-term memory (LSTM) and gated recurrent unit (GRU) deep learning methods, trained with modern Bayesian optimization, forecast funds' Sharpe ratios more accurately than traditional statistical methods. An ensemble method that combines the forecasts of the LSTMs and GRUs achieves the best performance of all models. There is evidence that deep learning and ensembling offer promising solutions to the challenge of fund performance forecasting.
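As a reminder of the target variable, here is a minimal sketch of one common way to annualize a Sharpe ratio from monthly returns; the risk-free rate and the toy return series are placeholders, not the paper's data.

```python
import numpy as np

def annualized_sharpe(monthly_returns, monthly_risk_free=0.0):
    """Annualized Sharpe ratio: sqrt(12) * mean / std of monthly excess returns."""
    excess = np.asarray(monthly_returns) - monthly_risk_free
    return np.sqrt(12) * excess.mean() / excess.std(ddof=1)

# Toy example with twelve monthly returns
print(annualized_sharpe([0.02, -0.01, 0.03, 0.01, 0.00, 0.02,
                         -0.02, 0.04, 0.01, 0.00, 0.03, -0.01]))
```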
Recurrent neural networks with several time lags, denoted RNN(p), are a natural generalization of autoregressive ARX(p) models. They are a powerful forecasting tool when different time scales influence a given phenomenon, as happens in the energy sector, where hourly, daily, weekly, and yearly interactions coexist. Backpropagation through time (BPTT) is the industry standard among learning algorithms for RNNs. We prove that, when training RNN(p) models, other learning algorithms are much more efficient in terms of both time and space complexity. We also introduce a new learning algorithm, based on tree recombination, that exploits a tree representation of the unrolled network and appears to be even more efficient. We present an application of RNN(p) models to power consumption forecasting at the hourly scale: the experimental results demonstrate the efficiency of the proposed algorithm and the excellent predictive accuracy of the selected model in both point and probabilistic forecasting of energy consumption.
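A minimal PyTorch sketch of the general RNN(p) structure follows: a recurrent cell unrolled over the last p lags with a one-step-ahead output. The layer sizes are assumptions for illustration; the paper's specific architectures and learning algorithms are not reproduced here.

```python
import torch
import torch.nn as nn

class RNNp(nn.Module):
    """Recurrent model fed with the last p values of the series, analogous to an ARX(p) lag window."""
    def __init__(self, p, hidden_size=16):
        super().__init__()
        self.rnn = nn.RNN(input_size=1, hidden_size=hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, lags):                    # lags: (batch, p)
        h, _ = self.rnn(lags.unsqueeze(-1))     # (batch, p, hidden_size)
        return self.head(h[:, -1])              # one-step-ahead forecast

model = RNNp(p=24)                              # e.g. the previous 24 hourly loads
y_hat = model(torch.randn(8, 24))               # (8, 1) toy forecasts
```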
Freight carriers rely on tactical planning to design their service networks so as to satisfy demand in a cost-effective way. For computational tractability, deterministic and cyclic service network design (SND) formulations are used to solve large-scale problems. A central input is the periodic demand, that is, the demand expected to repeat in every period of the planning horizon. In practice, demand is predicted by time series forecasting models, and the periodic demand is taken as the average of these forecasts. This is, however, only one of many possible mappings, and the problem of selecting this mapping has been overlooked in the literature. We propose to use the structure of the downstream decision-making problem to select a good mapping. To this end, we introduce a multilevel mathematical programming formulation that explicitly links the time series forecasts to the SND problem of interest. Its solution is a periodic demand estimate that minimizes costs over the tactical planning horizon. We report results from an extensive empirical study of a large-scale application at the Canadian National Railway Company. They clearly show the importance of the periodic demand estimation problem: planning costs vary substantially across different periodic demand estimates, and estimates that differ from the mean forecast can lead to lower costs. Moreover, the costs associated with forecast-based periodic demand estimates are comparable to, or even better than, those obtained using the mean of actual demand.
Crop phenology is crucial information for crop yield estimation and agricultural management. Traditionally, phenology has been observed from the ground; however, Earth observation, weather, and soil data have been used to capture the physiological growth of crops. In this work, we propose a new approach for within-season phenology estimation for cotton at the field level. For this, we exploit a variety of Earth observation vegetation indices (derived from Sentinel-2) and numerical simulations of atmospheric and soil parameters. Our method is unsupervised, to address the ever-present problem of sparse and scarce ground truth data that makes most supervised alternatives impractical in real-world scenarios. We applied fuzzy c-means clustering to identify the principal phenological stages of cotton and then used the cluster membership weights to further predict the transitional phases between adjacent stages. In order to evaluate our models, we collected 1,285 crop growth ground observations in Orchomenos, Greece. We introduced a new collection protocol, assigning up to two phenology labels that represent the primary and secondary growth stages in the field and thus indicate when stages are transitioning. Our model was tested against a baseline model, which allowed us to isolate random agreement and evaluate its true competence. The results show that our model considerably outperforms the baseline, which is promising considering the unsupervised nature of the approach. The limitations and relevant future work are thoroughly discussed. The ground observations are formatted in a ready-to-use dataset and will be available at https://github.com/Agri-Hub/cotton-phenology-dataset upon publication.
In this work, we evaluate an ensemble of population models and machine learning models to predict the near-future evolution of the COVID-19 pandemic, with a particular use case in Spain. We rely solely on open and public datasets, fusing incidence, vaccination, human mobility, and weather data to feed our machine learning models (random forest, gradient boosting, k-nearest neighbours, and kernel ridge regression). We use the incidence data to adjust classical population models (Gompertz, logistic, Richards, Bertalanffy) so that they better capture the trend of the data. We then ensemble these two families of models to obtain more robust and accurate predictions. Furthermore, we have observed that the predictions obtained with the machine learning models improve when new features (vaccines, mobility, climatic conditions) are added, and we analyse the importance of each feature using Shapley additive explanation values. As in any other modelling work, both the data and the predictions have several limitations and must therefore be viewed from a critical standpoint, as we discuss in the text. Our work concludes that the ensemble of these models can improve the individual predictions (obtained with either the machine learning models or the population models alone) and can be applied, with caution, in cases where compartmental models cannot be used owing to a lack of relevant data.
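For the population-model side, a minimal sketch of fitting a Gompertz curve to cumulative incidence with SciPy is given below; the synthetic data, parameter values, and starting guesses are illustrative assumptions, not the study's configuration.

```python
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, a, b, c):
    """Cumulative cases modelled as a * exp(-b * exp(-c * t))."""
    return a * np.exp(-b * np.exp(-c * t))

t = np.arange(60, dtype=float)                       # days since the start of the wave
cases = gompertz(t, 1e5, 8.0, 0.08)                  # synthetic stand-in for reported incidence
cases += np.random.default_rng(0).normal(0, 500, t.size)

params, _ = curve_fit(gompertz, t, cases, p0=(cases.max(), 5.0, 0.05), maxfev=10_000)
print(gompertz(np.arange(60, 74, dtype=float), *params))   # 14-day extrapolation
```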
The cyber-physical convergence is opening up new business opportunities for industrial operators. The need for deep integration of the cyber and the physical worlds establishes a rich business agenda towards consolidating new system and network engineering approaches. This revolution would not be possible without rich and heterogeneous sources of data and the ability to exploit them intelligently, mainly because data will serve as a fundamental resource in promoting Industry 4.0. One of the most fruitful research and practice areas emerging from this data-rich, cyber-physical, smart factory environment is data-driven process monitoring, which applies machine learning methodologies to enable predictive maintenance applications. In this paper, we examine popular time series forecasting techniques as well as supervised machine learning algorithms in the applied context of Industry 4.0, by transforming and preprocessing a historical industrial dataset of a packing machine's operational state recordings (real data coming from the production line of a manufacturing plant in the food and beverage domain). In our methodology, we use only a single signal concerning the machine's operational status to make our predictions, without considering other operational variables or fault and warning signals, hence its characterization as "agnostic". In this respect, the results demonstrate that the adopted methods achieve quite promising performance on three targeted use cases.
The unpredictability and volatility of the stock market make it challenging to earn a substantial profit with any generalized scheme. Many previous studies have tried different techniques to build machine learning models that can earn a substantial profit in the US stock market through live trading. However, very few studies have focused on the importance of finding the best features for a particular trading period. Our top approach uses this performance to narrow the feature set from a total of 148 down to about 30. In addition, the top 25 features are dynamically selected before each training of our machine learning model. The model uses ensemble learning with four classifiers, Gaussian naive Bayes, decision tree, logistic regression with L1 regularization, and stochastic gradient descent, to decide whether to go long or short on a particular stock. Our best model, trading daily between July 2011 and January 2019, achieved a 54.35% profit. Finally, our work shows that a weighted mixture of classifiers performs better than any individual predictor at making trading decisions in the stock market.
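A minimal scikit-learn sketch of the described pipeline appears below: top-k feature selection followed by a vote of the four named classifiers. The scoring function, k, hyperparameters, and the use of hard voting are placeholders, not the study's exact setup.

```python
from sklearn.pipeline import make_pipeline
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression, SGDClassifier
from sklearn.ensemble import VotingClassifier

long_short_clf = make_pipeline(
    SelectKBest(f_classif, k=25),                 # keep the 25 top-ranked features at each (re)fit
    VotingClassifier(
        estimators=[
            ("nb", GaussianNB()),
            ("dt", DecisionTreeClassifier(max_depth=5)),
            ("lr", LogisticRegression(penalty="l1", solver="liblinear")),
            ("sgd", SGDClassifier()),
        ],
        voting="hard",                            # majority vote on long vs. short
    ),
)
# long_short_clf.fit(X_train, y_long_or_short)
# signal = long_short_clf.predict(X_today)
```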
We benchmark a simple toolkit of learned models for subseasonal forecasting that outperforms both operational practice and state-of-the-art machine learning and deep learning methods. These models, introduced by Mouatadid et al. (2022), include (a) Climatology++, an adaptive alternative to climatology that, for precipitation, is 9% more accurate and 250% more skillful than the operational US Climate Forecasting System (CFSv2); (b) CFSv2++, a learned CFSv2 correction that improves temperature and precipitation accuracy by 7-8% and skill by 50-275%; and (c) Persistence++, an augmented persistence model that combines CFSv2 forecasts with lagged measurements to improve temperature and precipitation accuracy by 6-9% and skill by 40-130%. Across the United States, the Climatology++, CFSv2++, and Persistence++ toolkits consistently outperform standard meteorological baselines, state-of-the-art machine and deep learning methods, and the European Centre for Medium-Range Weather Forecasts ensemble.
With the rapid development of advanced digital technologies, both users and energy distributors can obtain detailed information on household electricity consumption. These technologies can also be used to forecast household electricity consumption (a.k.a. the load). In this paper, we investigate the use of variational mode decomposition and deep learning techniques to improve the accuracy of load forecasting. Although this problem has been studied in the literature, less attention has been paid to selecting the appropriate decomposition level and the deep learning technique that provides the best forecasting performance. This study bridges that gap by studying the effect of six decomposition levels and five distinct deep learning networks. First, the raw load profiles are decomposed into intrinsic mode functions using variational mode decomposition to mitigate their non-stationary aspects. Then, the day, hour, and past electricity consumption data are fed as a three-dimensional input sequence to a four-level wavelet decomposition network model. Finally, the forecast sequences associated with the different intrinsic mode functions are combined to form the aggregated forecast sequence. The proposed method is evaluated using load profiles of five Moroccan households from the Moroccan buildings' electricity consumption dataset (MORED) and is benchmarked against state-of-the-art time series models and a baseline persistence model.
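A minimal sketch of the decompose-forecast-recombine step described above is shown here. It assumes `modes` already holds the intrinsic mode functions produced by a VMD implementation (e.g. the vmdpy package) and that `forecast_mode` is any per-mode forecaster (the paper uses deep networks); both names are hypothetical.

```python
import numpy as np

def aggregate_forecast(modes, forecast_mode, horizon):
    """Forecast each intrinsic mode function separately, then sum them into the load forecast."""
    per_mode = [forecast_mode(mode, horizon) for mode in modes]   # one forecast sequence per mode
    return np.sum(per_mode, axis=0)                               # recombined aggregate forecast
```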
We introduce a machine-learning (ML)-based weather simulator--called "GraphCast"--which outperforms the most accurate deterministic operational medium-range weather forecasting system in the world, as well as all previous ML baselines. GraphCast is an autoregressive model, based on graph neural networks and a novel high-resolution multi-scale mesh representation, which we trained on historical weather data from the European Centre for Medium-Range Weather Forecasts (ECMWF)'s ERA5 reanalysis archive. It can make 10-day forecasts, at 6-hour time intervals, of five surface variables and six atmospheric variables, each at 37 vertical pressure levels, on a 0.25-degree latitude-longitude grid, which corresponds to roughly 25 x 25 kilometer resolution at the equator. Our results show GraphCast is more accurate than ECMWF's deterministic operational forecasting system, HRES, on 90.0% of the 2760 variable and lead time combinations we evaluated. GraphCast also outperforms the most accurate previous ML-based weather forecasting model on 99.2% of the 252 targets it reported. GraphCast can generate a 10-day forecast (35 gigabytes of data) in under 60 seconds on Cloud TPU v4 hardware. Unlike traditional forecasting methods, ML-based forecasting scales well with data: by training on bigger, higher quality, and more recent data, the skill of the forecasts can improve. Together these results represent a key step forward in complementing and improving weather modeling with ML, open new opportunities for fast, accurate forecasting, and help realize the promise of ML-based simulation in the physical sciences.
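A minimal sketch of the autoregressive rollout described above follows; the step function, state representation, and step count are placeholders, and GraphCast's actual architecture and data pipeline are not reproduced here.

```python
def rollout(step_fn, initial_state, n_steps=40):
    """Autoregressive rollout: each 6-hour prediction is fed back as the next input.
    Forty 6-hour steps give a 10-day forecast."""
    state = initial_state
    trajectory = []
    for _ in range(n_steps):
        state = step_fn(state)       # one learned 6-hour step
        trajectory.append(state)
    return trajectory
```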
Hail risk assessment is necessary to estimate and reduce damage to crops, orchards, and infrastructure. It also helps to estimate and reduce losses for businesses, in particular insurance companies. However, hail forecasting is challenging. The data for designing models for this purpose are three-dimensional geospatial time series. Relative to the resolution of the available datasets, hail is a very local event. Hail events are also rare: only about 1% of the targets in the observations are labelled as "hail". Models of the phenomenon and of short-term hail prediction are improving. Introducing machine learning models into the field of meteorology is nothing new, and there are also various climate models reflecting possible scenarios of future climate change. However, there is no data-driven machine learning model for forecasting changes in hail frequency for a given area. A first possible approach to this task is to ignore the spatial and temporal structure and develop a model capable of classifying a given vertical profile of meteorological variables as favourable to hail formation or not. Although this approach certainly ignores important information, it is very lightweight and easily scalable, since it treats the observations as independent of each other. A more advanced approach is to design a neural network capable of processing geospatial data. Our idea here is to combine convolutional layers, responsible for processing the spatial data, with recurrent neural network blocks capable of handling the temporal structure. This study compares the two approaches and introduces a model suited to the task of forecasting changes in hail frequency.
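A minimal PyTorch sketch of the convolutional-plus-recurrent idea follows: per-timestep CNN features fed into an LSTM over the time dimension. All layer sizes, channel counts, and the single-score head are illustrative assumptions, not the study's architecture.

```python
import torch
import torch.nn as nn

class ConvRecurrentHail(nn.Module):
    """Toy spatio-temporal model: per-timestep CNN features, then an LSTM over time."""
    def __init__(self, in_channels=4, hidden_size=64):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),                  # spatial features pooled to (16, 1, 1)
        )
        self.lstm = nn.LSTM(16, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)         # hail-frequency-change score

    def forward(self, x):                             # x: (batch, time, channels, H, W)
        b, t = x.shape[:2]
        feats = self.cnn(x.flatten(0, 1)).flatten(1)  # (batch*time, 16)
        out, _ = self.lstm(feats.view(b, t, -1))      # (batch, time, hidden_size)
        return self.head(out[:, -1])                  # score from the last time step

scores = ConvRecurrentHail()(torch.randn(2, 8, 4, 32, 32))   # (2, 1) toy output
```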