The repeated outbreaks of COVID-19 have had a lasting impact on global society, which calls for predicting pandemic waves using various kinds of data that are available early. Existing prediction models that forecast the first outbreak wave using mobility data may not be applicable to multi-wave prediction, because evidence from the US and Japan shows that the relationship between mobility patterns and infections differs across waves. Therefore, to predict multi-wave pandemics, we propose a Social Awareness-Based Graph Neural Network (SAB-GNN) that considers the decay of symptom-related web search frequency to capture the changes in public awareness across multiple waves. Our model combines GNN and LSTM to model the complex relationships among urban districts, inter-district mobility patterns, web search history, and future COVID-19 infections. We train our model on mobility and web search data collected by Yahoo Japan Corporation under strict privacy protection rules over four pandemic waves in Tokyo from April 2020 to May 2021, and use it to predict future pandemic outbreaks in the Tokyo area. The results demonstrate that our model outperforms state-of-the-art baselines such as ST-GNN, MPNN, and GraphLSTM. Although our model is not computationally expensive (only 3 layers and 10 hidden neurons), the proposed model enables public agencies to anticipate and prepare for future pandemic outbreaks.
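As a rough illustration of the GNN + LSTM pipeline described above, the sketch below (PyTorch) applies an exponential decay to the web-search feature, propagates district features over a mobility graph, and runs an LSTM over the time dimension. The feature layout, the decay schedule, and all names are assumptions made for illustration, not the authors' released code; only the "3 layers and 10 hidden neurons" sizing is taken from the abstract.

```python
import torch
import torch.nn as nn

class SABGNNSketch(nn.Module):
    """Minimal sketch of a GNN+LSTM forecaster in the spirit of SAB-GNN.

    Assumptions (not from the paper's code): node features are
    [infections, web-search volume], the mobility graph is a dense
    row-normalized adjacency matrix, and GCN-style layers feed an LSTM
    that runs over the time dimension per district.
    """
    def __init__(self, in_dim=2, hid_dim=10, num_layers=3):
        super().__init__()
        self.gnn_layers = nn.ModuleList(
            [nn.Linear(in_dim if i == 0 else hid_dim, hid_dim) for i in range(num_layers)]
        )
        self.lstm = nn.LSTM(hid_dim, hid_dim, batch_first=True)
        self.head = nn.Linear(hid_dim, 1)  # next-step infections per district

    def forward(self, x, adj, decay_rate=0.9):
        # x: (T, N, 2) time steps x districts x [cases, web-search volume]
        # adj: (N, N) row-normalized inter-district mobility matrix
        T = x.shape[0]
        # down-weight older web-search signals to mimic fading public awareness
        decay = decay_rate ** torch.arange(T - 1, -1, -1, dtype=x.dtype)
        x = torch.cat([x[..., :1], x[..., 1:] * decay.view(T, 1, 1)], dim=-1)
        h = x
        for layer in self.gnn_layers:
            h = torch.relu(adj @ layer(h))      # message passing over districts
        h = h.permute(1, 0, 2)                  # (N, T, hid): LSTM over time
        out, _ = self.lstm(h)
        return self.head(out[:, -1])            # (N, 1) forecast per district

# toy usage: 8 weeks, 23 districts
model = SABGNNSketch()
x = torch.rand(8, 23, 2)
adj = torch.softmax(torch.rand(23, 23), dim=1)
print(model(x, adj).shape)  # torch.Size([23, 1])
```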
Predicting the number of infections during a pandemic is extremely beneficial for governments in formulating anti-pandemic strategies, especially at fine-grained geographical units. Previous work has focused on prediction at low spatial resolution, such as the county level, and preprocesses data to the same geographical level, which loses some useful information. In this paper, we propose a model for community-level COVID-19 prediction, FGC-COVID, that is based on data at two geographical levels. We use population mobility data between census block groups (CBGs), a geographical level finer than communities, to construct a graph and capture the dependencies between CBGs using graph neural networks (GNNs). To mine patterns that are as fine-grained as possible for prediction, a spatially weighted aggregation module is introduced to aggregate the CBG embeddings up to the community level based on their geographical affiliation and spatial autocorrelation. Extensive experiments on 300 days of LA COVID-19 data show that our model outperforms existing prediction models for community-level COVID-19 prediction.
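The spatially weighted aggregation step can be illustrated with a small sketch. The masked-softmax weighting by distance to a community centroid, as well as all function and argument names, are assumptions made for illustration; the actual FGC-COVID weighting may differ.

```python
import torch

def aggregate_cbg_to_community(cbg_emb, affiliation, dist):
    """Hypothetical spatially weighted aggregation (not the FGC-COVID release).

    cbg_emb:     (num_cbg, d)  embeddings produced by a GNN over the CBG mobility graph
    affiliation: (num_comm, num_cbg) binary mask, 1 if a CBG belongs to the community
    dist:        (num_comm, num_cbg) distance from each CBG to the community centroid
    """
    # closer CBGs get larger weights; masked softmax keeps only affiliated CBGs
    scores = -dist.masked_fill(affiliation == 0, float("inf"))
    weights = torch.softmax(scores, dim=1)            # (num_comm, num_cbg)
    return weights @ cbg_emb                          # (num_comm, d)

comm_emb = aggregate_cbg_to_community(
    torch.rand(50, 16), (torch.rand(5, 50) > 0.7).float(), torch.rand(5, 50)
)
print(comm_emb.shape)  # torch.Size([5, 16])
```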
The COVID-19 pandemic has highlighted the importance of epidemic forecasting to decision makers in multiple domains, from public health to the economy as a whole. While forecasting epidemic progression is often conceptualized as being analogous to weather forecasting, it has some key differences and remains a non-trivial task. The spread of disease is affected by multiple confounding factors spanning human behavior, pathogen dynamics, and weather and environmental conditions. Research interest has been fueled by the increased availability of rich data sources capturing previously unobservable facets, and by initiatives from government public health and funding agencies. This has particularly spurred a spate of work on "data-centric" solutions that have the potential to enhance our forecasting capabilities by leveraging non-traditional data sources as well as recent innovations in AI and machine learning. This survey examines various data-driven methodological and practical advancements and introduces a conceptual framework to navigate them. First, we enumerate the large number of epidemiological datasets and novel data streams relevant to epidemic forecasting, capturing various factors such as symptomatic online surveys, retail and commerce, mobility, genomics data, and more. Next, we discuss methods and modeling paradigms, focusing on recent data-driven statistical and deep learning methods, as well as the novel class of hybrid models that combine domain knowledge from mechanistic models with the effectiveness and flexibility of statistical approaches. We also discuss the experiences and challenges that arise in the real-world deployment of these forecasting systems, including decision-making informed by the forecasts. Finally, we highlight some challenges and open problems found across the forecasting pipeline.
Pandemic (epidemic) modeling, aiming at disease-spreading analysis, has always been a popular research topic, especially following the outbreak of COVID-19 in 2019. Some representative models, including SIR-based deep learning prediction models, have shown satisfactory performance. However, one major drawback is that they fall short in their long-term predictive ability. Although graph convolutional networks (GCN) also perform well, their edge representations do not contain complete information, which can lead to biases. Another drawback is that they usually use input features which they are unable to predict; hence, those models cannot forecast far into the future. We propose a model that can propagate predictions further into the future and that has better edge representations. In particular, we model the pandemic as a spatial-temporal graph whose edges represent the transition of infections and are learned by our model. We use a two-stream framework that contains GCN and recursive structures (GRU) with an attention mechanism. Our model enables mobility analysis that provides an effective toolbox for public health researchers and policy makers to predict how different lockdown strategies that actively control mobility can influence the spread of pandemics. Experiments show that our model outperforms others in its long-term predictive power. Moreover, we simulate the effects of certain policies and predict their impacts on infection control.
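To make the combination of a learned transition graph, graph convolution, recurrence, and attention concrete, here is a simplified single-stream stand-in in PyTorch. The dimensions, the fusion order, and the attention pooling are assumptions, not the paper's exact two-stream design.

```python
import torch
import torch.nn as nn

class LearnedGraphGRU(nn.Module):
    """Sketch of a GCN+GRU forecaster with a learned transition graph
    (a simplified stand-in for the two-stream model described above)."""
    def __init__(self, num_nodes, in_dim=1, hid_dim=32):
        super().__init__()
        # edge weights (infection transitions) are learned, not given
        self.edge_logits = nn.Parameter(torch.randn(num_nodes, num_nodes))
        self.gcn = nn.Linear(in_dim, hid_dim)
        self.gru = nn.GRU(hid_dim, hid_dim, batch_first=True)
        self.attn = nn.Linear(hid_dim, 1)
        self.head = nn.Linear(hid_dim, 1)

    def forward(self, x):
        # x: (T, N, in_dim) daily infections per region
        adj = torch.softmax(self.edge_logits, dim=1)     # learned row-stochastic graph
        h = torch.relu(adj @ self.gcn(x))                # spatial propagation
        out, _ = self.gru(h.permute(1, 0, 2))            # temporal recurrence, (N, T, hid)
        alpha = torch.softmax(self.attn(out), dim=1)     # attention over time steps
        context = (alpha * out).sum(dim=1)               # (N, hid)
        return self.head(context)                        # next-step prediction per region

model = LearnedGraphGRU(num_nodes=10)
print(model(torch.rand(14, 10, 1)).shape)  # torch.Size([10, 1])
```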
Epidemic forecasting is the key to effectively controlling epidemics and helps the world mitigate crises that threaten public health. To better understand the transmission and evolution of epidemics, we propose EpiGNN, an epidemic forecasting model based on graph neural networks. Specifically, we design a transmission risk encoding module to characterize the local and global spatial effects of regions during an epidemic and incorporate them into the model. Meanwhile, we develop a Region-Aware Graph Learner (RAGL) that takes transmission risk, geographical dependencies, and temporal information into account to better explore spatio-temporal dependencies and to make regions aware of the epidemic situation in related regions. The RAGL can also be combined with external resources, such as human mobility, to further improve prediction performance. Comprehensive experiments on five real-world epidemic-related datasets (including influenza and COVID-19) demonstrate the effectiveness of our proposed method and show that EpiGNN outperforms state-of-the-art baselines in terms of RMSE.
Population-level social events, such as civil unrest and crime, often have a significant impact on our daily lives. Forecasting such events is of great importance for decision-making and resource allocation. Event prediction has traditionally been challenging due to the lack of knowledge about the true causes and underlying mechanisms of event occurrence. In recent years, research on event forecasting has made significant progress for two main reasons: (1) the development of machine learning and deep learning algorithms, and (2) the accessibility of public data such as social media, news sources, blogs, economic indicators, and other metadata sources. The explosive growth of data, together with advances in software/hardware technologies, has led to applications of deep learning techniques in social event studies. This paper is dedicated to providing a systematic and comprehensive overview of deep learning technologies for social event prediction. We focus on two domains of social events: \textit{civil unrest} and \textit{crime}. We first introduce how event prediction problems are formulated as machine learning prediction tasks. Then, we summarize the data resources, traditional methods, and recent developments in deep learning models for these problems. Finally, we discuss the challenges in social event prediction and put forward some promising future research directions.
With COVID-19 affecting every country globally and changing everyday life, the ability to forecast the spread of the disease is more important than for any previous epidemic. The conventional approach to disease-spread modeling, compartmental models, is based on the assumption of spatio-temporal homogeneity of the spread of the virus, which may cause forecasts to underperform, especially at high spatial resolutions. In this paper we adopt an alternative technique: spatio-temporal machine learning. We propose COVID-LSTM, a data-driven model based on a Long Short-Term Memory deep learning architecture for forecasting COVID-19 incidence at the county level in the US. We use the weekly number of new positive cases as the temporal input, along with hand-engineered spatial features from Facebook movement and connectedness datasets, to capture the spread of the disease in time and space. COVID-LSTM outperforms the COVID-19 Forecast Hub ensemble model (COVIDhub-ensemble) over our 17-week evaluation period, making it the first model to be more accurate than the COVIDhub-ensemble over one or more forecast horizons. Over the 4-week forecast horizon, our model is on average 50 cases per county more accurate than the COVIDhub-ensemble. We highlight that the underutilization of data-driven forecasting of disease spread prior to COVID-19 is likely due to the lack of sufficient data for previous diseases, in addition to the recent advances in machine learning methods for forecasting. We discuss the barriers to wider adoption of data-driven forecasting and what the future may hold for learning-based models.
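A rough sketch of how weekly case counts and static hand-engineered spatial features might be combined in an LSTM forecaster is shown below. The feature dimensions, the 4-week output layout, and the layer sizes are placeholders, not the COVID-LSTM specification.

```python
import torch
import torch.nn as nn

class CountyLSTMSketch(nn.Module):
    """Illustrative LSTM county-level forecaster with static spatial features."""
    def __init__(self, static_dim=4, hid_dim=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hid_dim, batch_first=True)
        self.head = nn.Sequential(
            nn.Linear(hid_dim + static_dim, hid_dim), nn.ReLU(), nn.Linear(hid_dim, 4)
        )  # 1- to 4-week-ahead new cases

    def forward(self, weekly_cases, spatial_feats):
        # weekly_cases: (counties, weeks, 1); spatial_feats: (counties, static_dim),
        # e.g., hand-engineered mobility/connectedness summaries per county
        _, (h, _) = self.lstm(weekly_cases)
        return self.head(torch.cat([h[-1], spatial_feats], dim=1))

model = CountyLSTMSketch()
print(model(torch.rand(100, 20, 1), torch.rand(100, 4)).shape)  # torch.Size([100, 4])
```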
Accurate short-term traffic prediction plays a pivotal role in various smart mobility operation and management systems. Currently, most of the state-of-the-art prediction models are based on graph neural networks (GNNs), and the required number of training samples is proportional to the size of the traffic network. In many cities, the available amount of traffic data is substantially below the minimum requirement due to the expense of data collection. It remains an open question how to develop traffic prediction models with a small amount of training data on large-scale networks. We notice that the near-future traffic states of a node depend only on the traffic states of its localized neighborhoods, which can be represented using graph relational inductive biases. In view of this, this paper develops a graph network (GN)-based deep learning model, LocaleGN, that depicts the traffic dynamics using localized data aggregating and updating functions, as well as node-wise recurrent neural networks. LocaleGN is a lightweight model designed for training on few samples without over-fitting, and hence it can solve the problem of few-sample traffic prediction. The proposed model is examined on predicting both traffic speed and flow with six datasets, and the experimental results demonstrate that LocaleGN outperforms existing state-of-the-art baseline models. It is also demonstrated that the knowledge learned by LocaleGN can be transferred across cities. The research outcomes can help to develop lightweight traffic prediction systems, especially for cities lacking historically archived traffic data.
Accurate activity location prediction is a crucial component of many mobility applications and is particularly required to develop personalized, sustainable transportation systems. Despite the widespread adoption of deep learning models, next location prediction models lack a comprehensive discussion and integration of mobility-related spatio-temporal contexts. Here, we utilize a multi-head self-attentional (MHSA) neural network that learns location transition patterns from historical location visits, their visit time and activity duration, as well as their surrounding land use functions, to infer an individual's next location. Specifically, we adopt point-of-interest data and latent Dirichlet allocation for representing locations' land use contexts at multiple spatial scales, generate embedding vectors of the spatio-temporal features, and learn to predict the next location with an MHSA network. Through experiments on two large-scale GNSS tracking datasets, we demonstrate that the proposed model outperforms other state-of-the-art prediction models, and reveal the contribution of various spatio-temporal contexts to the model's performance. Moreover, we find that the model trained on population data achieves higher prediction performance with fewer parameters than individual-level models due to learning from collective movement patterns. We also reveal that mobility conducted in the recent past and one week prior has the largest influence on the current prediction, showing that learning from a subset of the historical mobility is sufficient to obtain an accurate location prediction result. We believe that the proposed model is vital for context-aware mobility prediction. The gained insights will help to understand location prediction models and promote their implementation for mobility applications.
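The following sketch shows one way such an MHSA next-location predictor could be assembled in PyTorch. The vocabulary size, the feature set (location ID, start hour, duration), and the use of a standard Transformer encoder are assumptions for illustration, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class NextLocationMHSA(nn.Module):
    """Illustrative multi-head self-attention next-location model."""
    def __init__(self, num_locations=1000, d_model=64, num_heads=4):
        super().__init__()
        self.loc_emb = nn.Embedding(num_locations, d_model)
        self.hour_emb = nn.Embedding(24, d_model)      # visit start hour
        self.dur_proj = nn.Linear(1, d_model)          # activity duration
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=num_heads, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=2)
        self.out = nn.Linear(d_model, num_locations)

    def forward(self, loc_ids, hours, durations):
        # each input: (batch, seq_len); durations in hours
        h = (self.loc_emb(loc_ids) + self.hour_emb(hours)
             + self.dur_proj(durations.unsqueeze(-1)))
        h = self.encoder(h)                # self-attention over the visit history
        return self.out(h[:, -1])          # scores over candidate next locations

model = NextLocationMHSA()
scores = model(torch.randint(0, 1000, (8, 12)), torch.randint(0, 24, (8, 12)), torch.rand(8, 12))
print(scores.shape)  # torch.Size([8, 1000])
```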
Deep learning approaches for spatio-temporal prediction problems such as crowd-flow prediction assume the data to be a fixed, regularly shaped tensor and face challenges in handling irregular, sparse data tensors. This poses limitations in use-case scenarios such as predicting an individual's visit counts for a given spatial area at a particular temporal resolution using a raster/image representation of the geographical region, since the movement patterns of an individual can be largely restricted and localized to a certain part of the raster. Additionally, current deep-learning approaches for solving such problems do not account for the geographical awareness of a region while modelling the spatio-temporal movement patterns of an individual. To address these limitations, there is a need for a novel strategy and modeling approach that can handle sparse, irregular data while incorporating geo-awareness into the model. In this paper, we use a quadtree as the data structure for representing the image and introduce a novel geo-aware deep learning layer, GA-ConvLSTM, which performs the convolution operation based on a quadtree-based geo-aware module to incorporate spatial dependencies while maintaining the recurrent mechanism to account for temporal dependencies. We present this approach in the context of predicting the spatial behaviors of an individual (e.g., frequent visits to specific locations) through a deep-learning-based predictive model, GADST-Predict. Experimental results on two GPS-based trace datasets show that the proposed method is effective in handling visit frequencies over different use cases with considerably high accuracy.
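For illustration, a minimal quadtree builder over a sparse visit-count raster is sketched below. The splitting rule (subdivide only non-empty cells) is an assumption; the paper's GA-ConvLSTM layer adds convolutional and recurrent machinery on top of a structure like this.

```python
from dataclasses import dataclass, field
from typing import List, Optional
import numpy as np

@dataclass
class QuadNode:
    """A quadtree cell over a raster of visit counts (illustrative only)."""
    x: int
    y: int
    size: int
    children: List["QuadNode"] = field(default_factory=list)

def build_quadtree(grid: np.ndarray, x: int = 0, y: int = 0, size: Optional[int] = None) -> QuadNode:
    size = grid.shape[0] if size is None else size
    node = QuadNode(x, y, size)
    block = grid[y:y + size, x:x + size]
    # subdivide only non-empty, non-unit cells, so sparse regions stay coarse
    if size > 1 and block.any():
        half = size // 2
        for dx, dy in [(0, 0), (half, 0), (0, half), (half, half)]:
            node.children.append(build_quadtree(grid, x + dx, y + dy, half))
    return node

grid = np.zeros((8, 8), dtype=int)
grid[1, 2] = 5   # an individual's visits concentrate near one corner
tree = build_quadtree(grid)
# only the occupied quadrant keeps subdividing
print(len(tree.children), len(tree.children[0].children), len(tree.children[3].children))  # 4 4 0
```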
The objective of this study is to develop and test a novel structured deep learning modeling framework for urban flood nowcasting by integrating physics-based and human-sensed features. We present a new computational modeling framework that includes an attention-based spatial-temporal graph convolutional network (ASTGCN) model and different data streams collected in real time and fused in the model to account for the spatial and temporal information and dependencies that improve flood nowcasting. The novelty of the computational modeling framework is threefold: first, thanks to the spatial and temporal graph convolution modules, the model is able to consider the spatial and temporal dependencies in inundation propagation; second, it enables capturing the influence of heterogeneous temporal data streams that can signal flooding status, including physics-based features such as rainfall intensity and water elevation, as well as human-sensed data such as flood reports and fluctuations of human activity; third, its attention mechanism enables the model to direct its focus to the most influential features. We demonstrate the application of the modeling framework with a case study of Harris County, Texas, using a hurricane as the flood event. The results indicate that the model provides superior performance for nowcasting urban flood inundation at the census-tract level, with a precision of 0.808 and a recall of 0.891, outperforming several other novel models. Moreover, the ASTGCN model's performance improves when heterogeneous dynamic features are incorporated rather than relying solely on physics-based features, which demonstrates the promise of using heterogeneous human-sensed data for flood nowcasting.
To address vaccine hesitancy, which impairs the efforts of COVID-19 vaccination campaigns, it is imperative to understand public vaccination attitudes and to grasp their changes in a timely manner. Despite its reliability and trustworthiness, traditional attitude collection based on surveys is time-consuming and expensive and cannot follow the fast evolution of vaccination attitudes. We leverage textual posts on social media to extract and track users' vaccination stances in near real time by proposing a deep learning framework. To address the impact of linguistic features such as sarcasm and irony, which are commonly used in vaccine-related discourse, we integrate the recent posts of a user's social network neighbors into the framework to help detect the user's genuine attitude. Based on our annotated dataset from Twitter, the models instantiated from our framework can increase the performance of attitude extraction by up to 23% compared with state-of-the-art text-only models. Using this framework, we successfully validate the feasibility of using social media to track the evolution of real-life vaccination attitudes. We further show one practical use of our framework: it can forecast how likely a user is to change their vaccine hesitancy based on information sensed from social media.
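A schematic of the neighbor-aware fusion idea is sketched below: a user's own post representation is combined with a pooled representation of their network neighbors' recent posts before stance classification. The encoder choice, the mean pooling, and the 3-way stance labels are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class NeighborAwareStance(nn.Module):
    """Schematic stance classifier fusing own-post and neighbor-post embeddings."""
    def __init__(self, emb_dim=256, num_classes=3):   # e.g., pro / anti / neutral
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Linear(2 * emb_dim, emb_dim), nn.ReLU(), nn.Linear(emb_dim, num_classes)
        )

    def forward(self, own_post_emb, neighbor_post_embs):
        # own_post_emb: (batch, emb_dim), e.g., from a pretrained text encoder
        # neighbor_post_embs: (batch, num_neighbors, emb_dim)
        neighbor_ctx = neighbor_post_embs.mean(dim=1)   # pool the neighborhood context
        return self.fuse(torch.cat([own_post_emb, neighbor_ctx], dim=1))

clf = NeighborAwareStance()
print(clf(torch.rand(4, 256), torch.rand(4, 10, 256)).shape)  # torch.Size([4, 3])
```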
At the time of writing, COVID-19 (coronavirus disease 2019) has spread to more than 220 countries and territories. Following the outbreak, the seriousness of the pandemic has made people more active on social media, especially on microblogging platforms such as Twitter and Weibo. Pandemic-specific discourse has now persisted on these platforms for months. Previous studies have confirmed the contribution of such socially generated conversations to situational awareness during crisis events. Early prediction of cases is crucial for authorities to estimate the resources required to cope with the growth of the virus. Therefore, this study attempts to incorporate public discourse into the design of forecasting models, particularly targeting the steep-hill region of an ongoing wave. We propose a sentiment-informed, topic-based methodology for designing multiple time series from publicly available COVID-19-related Twitter conversations. As a use case, we apply the proposed methodology to the daily COVID-19 cases of Australia and the Twitter conversations generated within the country. Experimental results (i) show the existence of latent social media variables that Granger-cause the daily COVID-19 confirmed cases, and (ii) confirm that these variables provide additional predictive power to forecasting models. Moreover, the results show that including the social media variables for modeling yields a 48.83%-51.38% improvement in RMSE over baseline models. We also release the large-scale COVID-19-specific geotagged global tweets dataset, Megocov, to the public, anticipating that geotagged data of this scale will help in understanding the conversational dynamics of the pandemic through other spatial and temporal contexts.
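A toy version of the Granger-causality check mentioned in result (i) is shown below, using synthetic series in place of the sentiment-based topic signals. The statsmodels call and the 7-day lag are used purely for illustration and are not taken from the study.

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

# synthetic stand-in: does a daily tweet-sentiment topic series Granger-cause
# daily confirmed cases? Column order is [effect, candidate cause].
rng = np.random.default_rng(0)
sentiment = rng.normal(size=120)                           # daily topic-sentiment signal
cases = np.roll(sentiment, 7) * 3 + rng.normal(size=120)   # cases roughly lag sentiment by a week
data = np.column_stack([cases, sentiment])

res = grangercausalitytests(data, maxlag=[7])
f_stat, p_value = res[7][0]["ssr_ftest"][:2]
print(f"F={f_stat:.2f}, p={p_value:.4f}")   # small p => sentiment Granger-causes cases
```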
The spatio-temporal crowd flow prediction (STCFP) problem is a classical problem with abundant prior research efforts that benefit from traditional statistical learning and recent deep learning approaches. While STCFP can refer to many real-world problems, most existing studies focus on quite specific applications, such as predicting taxi demand, ridesharing orders, and so on. This hinders STCFP research because the approaches designed for different applications are hardly comparable, and thus it is unclear how an application-driven approach can be generalized to other scenarios. To fill this gap, this paper makes two efforts: (i) we propose an analytic framework, called STAnalytic, to qualitatively investigate STCFP approaches with respect to their design considerations on various spatial and temporal factors, aiming to make different application-driven approaches comparable; (ii) we construct an extensively large-scale STCFP benchmark with four different scenarios (including ridesharing, bikesharing, metro, and electric vehicle charging), with up to hundreds of millions of flow records, to quantitatively measure the generalizability of STCFP approaches. Furthermore, to elaborate the effectiveness of STAnalytic in helping design generalizable STCFP approaches, we propose a spatio-temporal meta-model, called STMeta, which integrates the generalizable temporal and spatial knowledge identified by STAnalytic. We implement three variants of STMeta with different deep learning techniques. With the datasets, we demonstrate that the STMeta variants can outperform state-of-the-art STCFP approaches by 5%.
The effectiveness of traditional traffic prediction methods is often extremely limited when forecasting traffic dynamics in the early morning. The reason is that traffic can break down drastically during the early morning commute, and the time and duration of this breakdown vary substantially from day to day. Early morning traffic forecasts are crucial for informing morning-commute traffic management, but they are generally challenging to make in advance, especially by midnight. In this paper, we propose to mine Twitter messages as a probing method to understand how people's work and rest patterns in the evening/midnight of the previous day affect the next day's morning traffic. The model is tested on the freeway network in Pittsburgh as an experiment. The resulting relationship is surprisingly simple and powerful. We find that, in general, the earlier people rest, as indicated by their tweets, the more congested the roads will be the next morning. A big event the evening before, indicated by higher- or lower-than-normal tweet sentiment, usually implies lower travel demand the next morning. In addition, people's tweeting activities the night before and in the early morning are statistically associated with congestion in the morning peak hours. We leverage such relationships to build a predictive framework that forecasts morning commute congestion using people's tweeting profiles extracted by 5 am or by midnight prior to the morning. The Pittsburgh study supports that our framework can accurately predict morning congestion, especially for some road segments upstream of roadway bottlenecks with large day-to-day congestion variation. Our approach considerably outperforms its counterpart without Twitter message features, and it can learn meaningful demand representations from tweeting profiles that offer managerial insights.
Quantifying the topological similarities of different parts of urban road networks (URNs) enables us to understand urban growth patterns. While conventional statistics provide useful information about the characteristics of either a single node's direct neighbors or the entire network, such metrics fail to measure the similarity of subnetworks while taking local indirect neighborhood relationships into account. In this study, we propose a graph-based machine learning method to quantify the spatial homogeneity of subnetworks. We apply the method to 11,790 urban road networks across 30 cities worldwide to measure the spatial homogeneity of road networks within each city and across different cities. We find that intra-city spatial homogeneity is highly associated with socioeconomic status such as GDP and population growth. Moreover, the inter-city spatial homogeneity, obtained by transferring the model across different cities, reveals the inter-city similarity of urban network structures, with structures from Europe carried over to cities in the US and Asia. The socioeconomic development and inter-city similarity revealed by our method can be leveraged to understand and transfer insights across cities. It also enables us to address urban policy challenges, including network planning in rapidly urbanizing areas and combating regional inequality.
Traffic state prediction in a transportation network is paramount for effective traffic operations and management, as well as informed user and system-level decision-making. However, long-term traffic prediction (beyond 30 minutes into the future) remains challenging in current research. In this work, we integrate the spatio-temporal dependencies in the transportation network from network modeling, together with the graph convolutional network (GCN) and graph attention network (GAT). To further tackle the dramatic computation and memory cost of the giant model size (i.e., number of weights) caused by multiple cascaded layers, we propose sparse training to mitigate the training cost while preserving the prediction accuracy. It is a process of training using a fixed number of nonzero weights in each layer in each iteration. We consider the problem of long-term traffic speed forecasting for real large-scale transportation network data from the California Department of Transportation (Caltrans) Performance Measurement System (PeMS). Experimental results show that the proposed GCN-STGT and GAT-STGT models achieve low prediction errors on short-, mid- and long-term prediction horizons of 15, 30 and 45 minutes, respectively. Using our sparse training, we can train from scratch with high sparsity (e.g., up to 90%), equivalent to a 10-fold reduction in floating point operations (FLOPs) using the same number of epochs as dense training, and arrive at a model with very small accuracy loss compared with the original dense training.
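A minimal sketch of sparse training with a fixed number of nonzero weights per layer is given below, using simple magnitude-based masking after each optimizer step. The masking rule and training loop are assumptions for illustration; the paper's actual prune-and-regrow schedule may differ.

```python
import torch
import torch.nn as nn

def apply_topk_mask(layer: nn.Linear, density: float) -> None:
    """Keep only the largest-magnitude weights in-place (a simple stand-in for
    training with a fixed nonzero-weight budget per layer)."""
    k = max(1, int(density * layer.weight.numel()))
    flat = layer.weight.detach().abs().flatten()
    threshold = torch.topk(flat, k).values.min()
    mask = (layer.weight.detach().abs() >= threshold).float()
    with torch.no_grad():
        layer.weight.mul_(mask)          # zero out pruned weights each iteration

# toy training step at 10% density (~90% sparsity)
layer = nn.Linear(128, 128)
opt = torch.optim.SGD(layer.parameters(), lr=0.01)
x, y = torch.rand(32, 128), torch.rand(32, 128)
loss = nn.functional.mse_loss(layer(x), y)
loss.backward()
opt.step()
apply_topk_mask(layer, density=0.10)
print((layer.weight == 0).float().mean())  # roughly 0.90
```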
Real-time air pollution monitoring is a valuable tool for public health and environmental surveillance. In recent years, there has been a dramatic increase in air pollution forecasting and monitoring research using artificial neural networks (ANNs). Most of the prior work relied on modeling pollutant concentrations collected from ground-based monitors and meteorological data for long-term forecasting of outdoor ozone, oxides of nitrogen, and PM2.5. Given that traditional, highly sophisticated air quality monitors are expensive and are not universally available, these models cannot adequately serve those not living near pollutant monitoring sites. Furthermore, because prior models were built on physical measurement data collected from sensors, they may not be suitable for predicting public health effects experienced from pollution exposure. This study aims to develop and validate models to nowcast the observed pollution levels using Web search data, which is publicly available in near real-time from major search engines. We developed novel machine learning-based models using both traditional supervised classification methods and state-of-the-art deep learning methods to detect elevated air pollution levels at the US city level, by using generally available meteorological data and aggregate Web-based search volume data derived from Google Trends. We validated the performance of these methods by predicting three critical air pollutants (ozone (O3), nitrogen dioxide (NO2), and fine particulate matter (PM2.5)), across ten major U.S. metropolitan statistical areas (MSAs) in 2017 and 2018.
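As a schematic of the supervised-classification setup described above (not the study's actual features, data, or models), the following uses a random forest on synthetic meteorological and search-volume features to flag elevated-pollution days.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# synthetic stand-in: columns = [temperature, wind speed, humidity,
# scaled search-volume index for a pollution-related query]
rng = np.random.default_rng(1)
X = rng.normal(size=(730, 4))   # two years of daily features for one metro area
y = (X[:, 3] + 0.5 * X[:, 0] + rng.normal(scale=0.5, size=730) > 0.8).astype(int)  # elevated-O3 day

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))  # nowcasting skill on held-out days
```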
Background: Recently, a large number of daily positive COVID-19 cases have been reported in regions with relatively high vaccination rates; booster vaccination has therefore become necessary. In addition, infections caused by different variants and the associated factors have not yet been discussed in depth. With large variability and various co-factors, it is difficult to forecast the incidence of COVID-19 using conventional mathematical models. Methods: Machine learning based on long short-term memory was applied to forecast the time series of new daily positive cases (DPC), severe cases, hospitalized cases, and deaths. Data obtained from regions with high vaccination rates, such as Israel, were blended with the current data of regions in Japan to take the potential effects of vaccination into account. The protection against symptomatic infection, the effectiveness in the vaccinated population, the waning of protection over time, and the reduced protection against and infectivity of virus variants were also considered. To represent changes in public behavior, public mobility and interactions on social media were also included in the analysis. Findings: By comparison with the observed new DPC in Tel Aviv, Israel, the parameters characterizing vaccination effectiveness and the waning of protection against infection were extracted; the vaccination effectiveness against Delta-variant infection was 0.24 at five months after the second dose and 0.95 at two weeks after the third dose. Using the extracted parameters for vaccination effectiveness, the new cases in three prefectures of Japan were replicated.
Accuracy and interpretability are two essential properties of a crime prediction model. Because of the adverse effects that crime can have on human life, the economy, and safety, we need a model that can predict future crimes as accurately as possible so that early steps can be taken to prevent them. On the other hand, an interpretable model reveals the reasons behind its predictions, ensures transparency, and allows us to plan crime prevention steps accordingly. The key challenge in developing such a model is to capture the non-linear spatial dependencies and temporal patterns of a specific crime category while keeping the underlying structure of the model interpretable. In this paper, we develop AIST, an Attention-based Interpretable Spatio-Temporal network for crime prediction. AIST models the dynamic spatio-temporal correlations of a crime category based on past crime occurrences, external features (e.g., traffic flow and point-of-interest (POI) information), and crime trends. Extensive experiments on real datasets demonstrate the superiority of our model in terms of both accuracy and interpretability.