Empirical Bayes (EB) methods based on parametric statistical models, such as the negative binomial (NB), have been widely used for ranking sites in the road network safety screening process. This paper builds on a previous study in which a novel non-parametric EB method based on conditional generative adversarial networks (CGAN) was proposed for modeling crash frequency data. Unlike parametric approaches, the proposed CGAN-EB requires no pre-specified underlying relationship between the dependent and independent variables and can model any type of distribution. The proposed method is here applied to a real-world dataset collected for road segments in Washington State from 2012 to 2017. The performance of CGAN-EB is compared with the conventional approach (NB-EB) as a benchmark in terms of model fit, predictive performance, and network screening outcomes. The results show that the proposed CGAN-EB approach outperforms NB-EB in terms of prediction power and hotspot identification.
translated by Google Translate
In this paper, a novel non-parametric empirical Bayes approach called CGAN-EB is proposed for approximating empirical Bayes (EB) estimates at traffic locations (e.g., road segments). It benefits from the modeling advantages of deep neural networks, and its performance is compared in a simulation study with the traditional approach based on the negative binomial model (NB-EB). NB-EB uses the negative binomial model to model crash data and is the most common approach in practice. To model crash data in the proposed CGAN-EB, a conditional generative adversarial network is used, a powerful deep-neural-network-based method that can model any type of distribution. A number of simulation experiments were designed and conducted to evaluate the performance of CGAN-EB under different conditions and to compare it with NB-EB. The results show that CGAN-EB performs as well as NB-EB when the conditions favor the NB-EB model (i.e., when the data conform to the assumptions of the NB model), and outperforms NB-EB in experiments reflecting conditions frequently encountered in practice, in particular low sample means and crash frequencies that do not follow a log-linear relationship with the covariates.
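Whatever model supplies the site-level mean, the EB combination step itself is a simple weighted average. As a point of reference, here is a minimal sketch of the classical NB-based EB estimate (the NB-EB benchmark side, not the CGAN variant; the function name and toy numbers are illustrative):

```python
def nb_eb_estimate(mu, y, phi):
    """Empirical Bayes crash estimate under a negative binomial prior.

    mu  : model-predicted mean crash frequency for the site
    y   : observed crash count at the site
    phi : NB inverse-dispersion parameter (variance = mu + mu**2 / phi)
    """
    w = phi / (phi + mu)            # weight given to the model prediction
    return w * mu + (1.0 - w) * y   # shrink the observation toward the model

# a site predicted to average 2 crashes that observed 6: w = 0.5, EB = 4.0
eb = nb_eb_estimate(2.0, 6, 2.0)
```

The weight `w` grows with the inverse-dispersion parameter, so the more reliable the model, the more strongly the observed count is shrunk toward the prediction.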
This paper presents a crash frequency data augmentation method based on conditional generative adversarial networks to improve crash frequency models. The proposed method is evaluated by comparing the performance of base SPFs (developed using the original data) and augmented SPFs (developed using the original data plus synthetic data) in terms of hotspot identification performance, model prediction accuracy, and dispersion parameter estimation accuracy. Experiments are conducted using simulated and real-world crash datasets. The results indicate that the synthetic crash data generated by the CGAN have the same distribution as the original data, and that the augmented SPFs outperform the base SPFs in almost all respects when the dispersion parameter is low.
A variety of statistical and machine learning methods are used to model crash frequency on specific roadways, with machine learning methods generally offering higher prediction accuracy. Recently, heterogeneous ensemble methods (HEMs), including stacking, have emerged as more accurate and robust intelligent techniques, often solving pattern-recognition problems by providing more reliable and accurate predictions. In this study, we apply stacking, one of the key HEM methods, to model crash frequency on five-lane (5T) segments of urban and suburban arterials. The prediction performance of stacking is compared with parametric statistical models (Poisson and negative binomial) and three state-of-the-art machine learning techniques (decision tree, random forest, and gradient boosting), each of which is termed a base learner. By combining the individual base learners via stacking with an optimal weight scheme, the problem of biased predictions in individual base learners due to differences in specification and prediction accuracy can be avoided. Data including crash, traffic, and roadway inventory were collected and integrated from 2013 to 2017, and split into training, validation, and testing datasets. The estimation results of the statistical models reveal that, among other factors, crashes increase with the density (number per mile) of different types of driveways. The comparison of out-of-sample predictions of the various models confirms the superiority of stacking over the alternative methods considered. From a practical standpoint, stacking can improve prediction accuracy compared with using only a single base learner with a particular specification. When applied systematically, stacking can help identify more appropriate countermeasures.
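The heart of stacking is learning combination weights for the base learners on held-out validation predictions. A deliberately tiny sketch with two base learners and a coarse grid search over a single convex weight (illustrative only, not the paper's actual optimal-weight scheme):

```python
def stack_two(base1_val, base2_val, y_val):
    """Learn a convex combination weight for two base learners from their
    validation-set predictions by minimising squared error on a coarse grid."""
    best_w, best_err = 0.0, float("inf")
    for w in (i / 100 for i in range(101)):
        err = sum((w * p1 + (1 - w) * p2 - y) ** 2
                  for p1, p2, y in zip(base1_val, base2_val, y_val))
        if err < best_err:
            best_w, best_err = w, err
    return best_w

# toy check: learner 1 matches the validation targets, learner 2 is noisy,
# so all weight should go to learner 1
y_val = [1.0, 2.0, 3.0]
w = stack_two(y_val, [2.0, 0.0, 5.0], y_val)
```

In practice the meta-learner is usually a constrained regression over all base learners rather than a grid search, but the validation-set principle is the same.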
A "trajectory" refers to a trace generated by a moving object in geographical space, usually represented by a series of chronologically ordered points, where each point consists of a set of geospatial coordinates and a timestamp. Rapid advances in location-sensing and wireless communication technology have allowed us to collect and store massive amounts of trajectory data. As a result, many researchers have used trajectory data to analyze the mobility of various moving objects. In this paper, we focus on "urban vehicle trajectories", i.e., the trajectories of vehicles in urban traffic networks, and on "urban vehicle trajectory analytics". Urban vehicle trajectory analytics offers unprecedented opportunities to understand vehicle movement patterns in urban traffic networks, including both user-centric travel experiences and system-wide spatiotemporal patterns. The spatiotemporal features of urban vehicle trajectory data are structurally correlated with each other, and many previous researchers have used various methods to understand this structure. In particular, deep learning models have drawn the attention of many researchers due to their powerful function-approximation and feature-representation capabilities. Accordingly, the objective of this paper is to develop deep-learning-based models for urban vehicle trajectory analytics in order to better understand the mobility patterns of urban traffic networks. In particular, this paper focuses on two research topics of high necessity, importance, and applicability: next-location prediction and synthetic trajectory generation. In this study, we propose various novel models for urban vehicle trajectory analytics using deep learning.
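For the next-location prediction task, a first-order Markov model over observed transitions is a common classical baseline against which deep models are judged. A minimal sketch (illustrative only; not one of the deep models proposed in the paper):

```python
from collections import Counter, defaultdict

def fit_markov(trajectories):
    """First-order Markov baseline: count observed location transitions."""
    counts = defaultdict(Counter)
    for traj in trajectories:
        for a, b in zip(traj, traj[1:]):
            counts[a][b] += 1
    return counts

def predict_next(counts, loc):
    """Most frequent successor of loc, or None if loc was never seen."""
    return counts[loc].most_common(1)[0][0] if counts[loc] else None

# toy trajectories over hypothetical link ids
trajs = [["A", "B", "C"], ["A", "B", "D"], ["A", "B", "C"]]
model = fit_markov(trajs)
```

Deep sequence models aim to beat exactly this kind of frequency baseline by conditioning on longer histories and spatial context.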
We consider the problem of dynamic pricing of a product in the presence of feature-dependent price sensitivity. Developing practical algorithms that can estimate price elasticities robustly, especially when information about no purchases (losses) is not available, to drive such automated pricing systems is a challenge faced by many industries. Based on the Poisson semi-parametric approach, we construct a flexible yet interpretable demand model where the price related part is parametric while the remaining (nuisance) part of the model is non-parametric and can be modeled via sophisticated machine learning (ML) techniques. The estimation of price-sensitivity parameters of this model via direct one-stage regression techniques may lead to biased estimates due to regularization. To address this concern, we propose a two-stage estimation methodology which makes the estimation of the price-sensitivity parameters robust to biases in the estimators of the nuisance parameters of the model. In the first stage we construct estimators of observed purchases and prices given the feature vector using sophisticated ML estimators such as deep neural networks. Utilizing the estimators from the first stage, in the second stage we leverage a Bayesian dynamic generalized linear model to estimate the price-sensitivity parameters. We test the performance of the proposed estimation schemes on simulated and real sales transaction data from the airline industry. Our numerical studies demonstrate that our proposed two-stage approach reduces the estimation error in price-sensitivity parameters from 25\% to 4\% in realistic simulation settings. The two-stage estimation techniques proposed in this work allow practitioners to leverage modern ML techniques to robustly estimate price-sensitivities while still maintaining interpretability and allowing ease of validation of its various constituent parts.
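The two-stage idea can be illustrated in a deliberately simplified linear setting: stage one partials the feature out of both demand and price; stage two regresses residual demand on residual price, so regularization bias in the nuisance fits does not contaminate the price coefficient. This sketch substitutes plain OLS for the paper's ML nuisance estimators and Bayesian dynamic GLM; all names and data are illustrative:

```python
def ols_residuals(y, x):
    """Residuals from a simple least-squares fit of y on x (with intercept)."""
    n = len(x)
    xb, yb = sum(x) / n, sum(y) / n
    b = sum((a - xb) * (c - yb) for a, c in zip(x, y)) \
        / sum((a - xb) ** 2 for a in x)
    return [c - (yb + b * (a - xb)) for a, c in zip(x, y)]

def two_stage_slope(demand, price, feature):
    """Stage 1: partial the feature out of demand and price.
    Stage 2: regress residual demand on residual price."""
    rd = ols_residuals(demand, feature)
    rp = ols_residuals(price, feature)
    return sum(a * b for a, b in zip(rd, rp)) / sum(b * b for b in rp)

# toy data: demand = -0.5 * price + 2 * feature (exactly linear)
feat = [0.0, 1.0, 2.0, 3.0]
price = [f + e for f, e in zip(feat, [1.0, -1.0, 1.0, -1.0])]
demand = [-0.5 * p + 2.0 * f for p, f in zip(price, feat)]
alpha = two_stage_slope(demand, price, feat)   # recovers the price slope
```

In this exactly linear toy the residual-on-residual regression recovers the true price coefficient of -0.5.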
In data-driven systems, data exploration is imperative for making real-time decisions. However, big data is stored in massive databases that are difficult to retrieve. Approximate Query Processing (AQP) is a technique for providing approximate answers to aggregate queries based on a summary of the data (synopsis) that closely replicates the behavior of the actual data, which can be useful where an approximate answer to the queries would be acceptable in a fraction of the real execution time. In this paper, we discuss the use of Generative Adversarial Networks (GANs) for generating tabular data that can be employed in AQP for synopsis construction. We first discuss the challenges associated with constructing synopses in relational databases and then introduce solutions to those challenges. Following that, we organize statistical metrics to evaluate the quality of the generated synopses. We conclude that tabular data complexity makes it difficult for algorithms to understand relational database semantics during training, and that improved versions of tabular GANs are capable of constructing synopses that could revolutionize data-driven decision-making systems.
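The AQP premise, answering an aggregate from a small synopsis instead of the full table, can be shown with a uniform row sample standing in for a GAN-generated synopsis (illustrative only; the data and sizes are hypothetical):

```python
import random

random.seed(0)
# stand-in for a large fact-table column (e.g. order amounts)
population = [random.gauss(50.0, 10.0) for _ in range(100_000)]

# a "synopsis" here is just a uniform 1% row sample; a tabular GAN
# would instead generate these rows from a learned model of the table
synopsis = random.sample(population, 1_000)

exact_avg = sum(population) / len(population)    # scans all rows
approx_avg = sum(synopsis) / len(synopsis)       # scans 1% of the rows
```

The approximate average lands close to the exact one while touching only a fraction of the rows, which is the speed/accuracy trade AQP makes.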
Driving safety analysis has recently experienced unprecedented improvements thanks to technological advances in precise positioning sensors, artificial intelligence (AI)-based safety features, autonomous driving systems, connected vehicles, high-throughput computing, and edge computing servers. In particular, deep learning (DL) methods have empowered volume video processing to extract safety-related features from the massive videos captured by roadside units (RSUs). Safety metrics are commonly used measures for investigating crashes and near-conflict events. However, these metrics provide limited insight into network-wide traffic management. On the other hand, some safety assessment efforts are devoted to processing crash reports and identifying spatial and temporal patterns of crashes that correlate with road geometry, traffic volume, and weather conditions. Such approaches rely merely on crash reports and ignore the rich information in traffic videos that can help identify the role of violations in crashes. To bridge these two perspectives, we define a new set of network-level safety metrics (NSM) to assess the overall safety profile of traffic flow by processing imagery taken by RSU cameras. Our analysis suggests that NSMs show significant statistical associations with crash rates. This approach is different from simply generalizing the results of individual crash analyses, since all vehicles contribute to calculating NSMs, not only the ones involved in crash incidents. This perspective considers traffic flow as a complex dynamic system in which the actions of some nodes can propagate through the network and influence the crash risk of other nodes. We also provide a comprehensive review of surrogate safety metrics (SSMs) in Appendix A.
Uncertainty quantification (UQ) has increasing importance in building robust high-performance and generalizable materials property prediction models. It can also be used in active learning to train better models by focusing on getting new training data from uncertain regions. There are several categories of UQ methods, each considering different types of uncertainty sources. Here we conduct a comprehensive evaluation of the UQ methods for graph neural network based materials property prediction and evaluate how they truly reflect the uncertainty that we want in error bound estimation or active learning. Our experimental results over four crystal materials datasets (including formation energy, adsorption energy, total energy, and band gap properties) show that the popular ensemble methods for uncertainty estimation are NOT the best choice for UQ in materials property prediction. For the convenience of the community, all the source code and data sets can be accessed freely at \url{https://github.com/usccolumbia/materialsUQ}.
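One of the evaluated ideas, ensemble-based UQ, treats the spread of member predictions as the uncertainty estimate. A minimal sketch with toy linear members rather than the paper's graph neural networks (all names and values are illustrative):

```python
import random
import statistics

def ensemble_predict(models, x):
    """Mean prediction plus ensemble spread, used as the uncertainty."""
    preds = [m(x) for m in models]
    return statistics.mean(preds), statistics.pstdev(preds)

# toy ensemble: members share the form f(x) = b * x with slightly
# different slopes, so they agree near x = 0 and diverge far from it
random.seed(1)
members = [(lambda x, b=random.gauss(1.0, 0.3): b * x) for _ in range(20)]

mu_near, sd_near = ensemble_predict(members, 0.1)
mu_far, sd_far = ensemble_predict(members, 10.0)   # larger spread
```

The paper's point is precisely that this spread, while cheap and popular, does not always track the true error; spread only reflects disagreement among members, not shared bias.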
Modeling lies at the core of both the financial and the insurance industry for a wide variety of tasks. The rise and development of machine learning and deep learning models have created many opportunities to improve our modeling toolbox. Breakthroughs in these fields often come with the requirement of large amounts of data. Such large datasets are often not publicly available in finance and insurance, mainly due to privacy and ethics concerns. This lack of data is currently one of the main hurdles in developing better models. One possible option to alleviate this issue is generative modeling. Generative models are capable of simulating fake but realistic-looking data, also referred to as synthetic data, that can be shared more freely. Generative Adversarial Networks (GANs) are one such model that increases our capacity to fit very high-dimensional distributions of data. While research on GANs is an active topic in fields like computer vision, they have found limited adoption within the human sciences, like economics and insurance. The reason for this is that in these fields, most questions are inherently about the identification of causal effects, while to this day neural networks, which are at the center of the GAN framework, focus mostly on high-dimensional correlations. In this paper we study the causal preservation capabilities of GANs and whether the produced synthetic data can reliably be used to answer causal questions. This is done by performing causal analyses on the synthetic data, produced by a GAN, with increasingly more lenient assumptions. We consider the cross-sectional case, the time series case, and the case with a complete structural model. It is shown that in the simple cross-sectional scenario where correlation equals causation the GAN preserves causality, but that challenges arise for more advanced analyses.
Heterogeneous tabular data are the most commonly used form of data and are essential for numerous critical and computationally demanding applications. On homogeneous datasets, deep neural networks have repeatedly shown excellent performance and have therefore been widely adopted. However, their adaptation to tabular data for inference or data generation tasks remains challenging. To facilitate further progress in the field, this work provides an overview of state-of-the-art deep learning methods for tabular data. We categorize these methods into three groups: data transformations, specialized architectures, and regularization models. For each group, our work provides a comprehensive overview of the main approaches. Moreover, we discuss deep learning approaches for generating tabular data and also provide an overview of strategies for explaining deep models on tabular data. Thus, our first contribution is to address the main research streams and existing methodologies in the aforementioned areas while highlighting relevant challenges and open research questions. Our second contribution is an empirical comparison of traditional machine learning methods with ten deep learning approaches on five popular real-world tabular datasets of different sizes and with different learning objectives. Our results, which we have made publicly available as competitive benchmarks, indicate that algorithms based on gradient-boosted tree ensembles still mostly outperform deep learning models on supervised learning tasks, suggesting that research progress on competitive deep learning models for tabular data is stagnating. To the best of our knowledge, this is the first in-depth overview of deep learning approaches for tabular data. Hence, this work can serve as a valuable starting point to guide researchers and practitioners interested in deep learning with tabular data.
Time series anomaly detection has applications in a wide range of research fields and applications, including manufacturing and healthcare. The presence of anomalies can indicate novel or unexpected events, such as production faults, system defects, or heart fluttering, and is therefore of particular interest. The large size and complex patterns of time series have led researchers to develop specialised deep learning models for detecting anomalous patterns. This survey provides a structured and comprehensive overview of state-of-the-art deep learning models for time series anomaly detection. It provides a taxonomy based on the factors that divide anomaly detection models into different categories. Aside from describing the basic anomaly detection technique for each category, the advantages and limitations are also discussed. Furthermore, this study includes examples of deep anomaly detection in time series across various application domains in recent years. It finally summarises open issues in research and challenges faced while adopting deep anomaly detection models.
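Before reaching for deep models, a trailing-window z-score detector is a useful classical baseline for point anomalies, and it is the kind of method the surveyed deep models are meant to improve upon (this sketch is illustrative and not one of the surveyed techniques):

```python
import statistics

def zscore_anomalies(series, window=10, thresh=3.0):
    """Flag indices whose deviation from the trailing-window mean
    exceeds `thresh` trailing standard deviations."""
    out = []
    for i in range(window, len(series)):
        hist = series[i - window:i]
        mu, sd = statistics.mean(hist), statistics.pstdev(hist)
        if sd > 0 and abs(series[i] - mu) > thresh * sd:
            out.append(i)
    return out

# toy series with a single injected spike at index 11
data = [1.0, 1.1, 0.9, 1.0, 1.05, 0.95, 1.0, 1.1, 0.9, 1.0, 1.0, 8.0, 1.0]
```

Deep approaches replace the fixed window statistics with learned representations, which is what lets them handle the complex seasonal and contextual patterns discussed in the survey.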
Traffic forecasting models rely on data that need to be sensed, processed, and stored. This requires the deployment and maintenance of traffic sensing infrastructure, often leading to unaffordable monetary costs. The lack of sensed locations can be complemented with synthetic data simulation, further reducing the economic investment needed for traffic monitoring. One of the most common data generation approaches consists of producing realistic traffic patterns according to the data distribution of similar roads. The process of detecting roads with similar traffic is the key point of such systems. However, without collecting data at the target location, no traffic metrics can be used for this similarity search. We propose a method to discover, among the locations with available traffic data, those most similar to a target location by inspecting topological features of the road segments. Relevant topological features are extracted as numerical representations (embeddings) to compare different locations and eventually find the most similar roads based on the similarity between their embeddings. The performance of this novel selection system is examined and compared to simpler traffic estimation approaches. After finding similar data sources, a generative method is used to synthesize traffic profiles. Depending on the similarity of the traffic behavior at the sensed road, the data of one road can be used to feed the generative method. Several generation approaches are analyzed in terms of the accuracy of the synthesized samples. Above all, this work intends to stimulate further research efforts towards improving the quality of synthetic traffic samples and thus reducing the need for sensing infrastructure.
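Once road segments are embedded, the similarity search reduces to a nearest-neighbour lookup in embedding space. A minimal sketch with cosine similarity (the embedding values and road ids below are hypothetical):

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def most_similar(target, candidates):
    """Id of the sensed road whose embedding is closest to the target's."""
    return max(candidates, key=lambda rid: cosine(target, candidates[rid]))

# hypothetical topological embeddings (e.g. degree, centrality, lane counts)
target = [0.9, 0.1, 0.4]
sensed = {"road_a": [0.88, 0.12, 0.41], "road_b": [0.1, 0.9, 0.2]}
```

The traffic profile of the returned road would then feed the generative method for the unsensed target location.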
Dengue fever is a virulent disease spreading across more than 100 tropical and subtropical countries in Africa, the Americas, and Asia. This arboviral disease affects around 400 million people globally, severely distressing the healthcare systems. The unavailability of a specific drug and ready-to-use vaccine makes the situation worse. Hence, policymakers must rely on early warning systems to control intervention-related decisions. Forecasts routinely provide critical information for dangerous epidemic events. However, the available forecasting models (e.g., weather-driven mechanistic, statistical time series, and machine learning models) lack a clear understanding of different components to improve prediction accuracy and often provide unstable and unreliable forecasts. This study proposes an ensemble wavelet neural network with exogenous factor(s) (XEWNet) model that can produce reliable estimates for dengue outbreak prediction for three geographical regions, namely San Juan, Iquitos, and Ahmedabad. The proposed XEWNet model is flexible and can easily incorporate exogenous climate variable(s) confirmed by statistical causality tests in its scalable framework. The proposed model is an integrated approach that uses wavelet transformation within an ensemble neural network framework that helps in generating more reliable long-term forecasts. The proposed XEWNet allows complex non-linear relationships between the dengue incidence cases and rainfall, while remaining mathematically interpretable, fast in execution, and easily comprehensible. The proposal's competitiveness is measured using computational experiments based on various statistical metrics and several statistical comparison tests. In comparison with statistical, machine learning, and deep learning methods, our proposed XEWNet performs better in 75% of the cases for short-term and long-term forecasting of dengue incidence.
Imputation of missing data is a task that plays a vital role in many engineering and science applications. Often, such missing data arise in experimental observations from limitations of sensors or post-processing transformation errors; other times they arise from numerical and algorithmic constraints in computer simulations. One such instance, and the application focus of this paper, is numerical simulations of storm surge. The simulation data correspond to time-series surge predictions over a number of save points within the geographic domain of interest, creating a spatiotemporal imputation problem where the surge points are heavily correlated spatially and temporally, and the regions of missing values are structurally distributed at random. Recently, machine learning techniques such as neural network methods have been developed and employed for missing data imputation tasks. Generative Adversarial Nets (GANs) and GAN-based techniques have particularly attracted attention as unsupervised machine learning methods. In this study, the performance of Generative Adversarial Imputation Nets (GAIN) is improved by applying convolutional neural networks instead of fully connected layers to better capture the correlation of the data and promote learning from adjacent surge points. Another adjustment to the method needed for the studied data is to consider the coordinates of the points as additional features to provide more information to the model through the convolutional layers. We refer to the proposed method as Convolutional Generative Adversarial Imputation Nets (Conv-GAIN). The performance of the proposed method, incorporating the improvements and adaptations required for the storm surge data, is assessed and compared to the original GAIN and a few other techniques. The results show that Conv-GAIN achieves better performance than the alternative methods on the studied data.
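GAIN's discriminator is guided by a hint matrix that reveals part of the true observed/missing mask. A sketch of that construction, following the usual elementwise form H = B*M + 0.5*(1 - B) with a random binary matrix B (variable names and the toy mask are illustrative):

```python
import random

def gain_hint(mask, hint_rate=0.9, seed=0):
    """GAIN-style hint matrix: reveal the true observed/missing mask entry
    with probability hint_rate, otherwise emit 0.5 so the discriminator
    stays uncertain about that entry."""
    rng = random.Random(seed)
    return [[m if rng.random() < hint_rate else 0.5 for m in row]
            for row in mask]

mask = [[1, 0, 1], [0, 1, 1]]   # 1 = observed, 0 = missing (toy example)
hint = gain_hint(mask)
```

Conv-GAIN keeps this training signal but swaps the fully connected generator/discriminator layers for convolutions, so the hint and data pass through spatially aware filters.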
The quality of the generalized linear models (GLMs) frequently used by insurance companies depends on the choice of interacting variables. The search for interactions is time-consuming, especially for datasets with a large number of variables, depends on the expert judgment of actuaries, and often relies on visual performance indicators. We therefore present an approach that automates the process of finding interactions that should be added to a GLM to improve its predictive power. Our approach relies on neural networks and a model-specific interaction detection method that is computationally faster than the traditionally used methods. In numerical studies, we provide results of our approach on different datasets: open-source data, artificial data, and proprietary data.
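The notion of an interaction can be illustrated without neural networks: when one candidate variable is binary, an interaction with another variable means the slope of the response differs between the two groups. This toy screen is a stand-in for the paper's NN-based detection method, not a description of it:

```python
def slope(xs, ys):
    """Least-squares slope of ys on xs."""
    n = len(xs)
    xb, yb = sum(xs) / n, sum(ys) / n
    return sum((a - xb) * (b - yb) for a, b in zip(xs, ys)) \
        / sum((a - xb) ** 2 for a in xs)

def interaction_strength(x1, x2, y):
    """Crude screen for an x1:x2 interaction when x2 is binary:
    how much the slope of y on x1 differs between the x2 groups."""
    g0 = [(a, b) for a, c, b in zip(x1, x2, y) if c == 0]
    g1 = [(a, b) for a, c, b in zip(x1, x2, y) if c == 1]
    return abs(slope(*zip(*g1)) - slope(*zip(*g0)))

# toy data with a genuine interaction: the effect of x1 doubles when x2 = 1
x1 = [0.0, 1.0, 2.0, 3.0] * 2
x2 = [0] * 4 + [1] * 4
y = [a * (1 + c) for a, c in zip(x1, x2)]
```

A purely additive response would yield a strength of zero; the paper's contribution is detecting such effects automatically across many variable pairs at much lower cost.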
Flooding is one of the most disastrous natural hazards, responsible for substantial economic losses. A predictive model for flood-induced financial damages is useful for many applications such as climate change adaptation planning and insurance underwriting. This research assesses the predictive capability of regressors constructed on the National Flood Insurance Program (NFIP) dataset using neural networks (Conditional Generative Adversarial Networks), decision trees (Extreme Gradient Boosting), and kernel-based regressors (Gaussian Process). The assessment highlights the most informative predictors for regression. The distribution for claims amount inference is modeled with a Burr distribution permitting the introduction of a bias correction scheme and increasing the regressor's predictive capability. Aiming to study the interaction with physical variables, we incorporate Daymet rainfall estimation to NFIP as an additional predictor. A study on the coastal counties in the eight US South-West states resulted in an $R^2=0.807$. Further analysis of 11 counties with a significant number of claims in the NFIP dataset reveals that Extreme Gradient Boosting provides the best results, that bias correction significantly improves the similarity with the reference distribution, and that the rainfall predictor strengthens the regressor performance.
Shear viscosity, though being a fundamental property of all fluids, is computationally expensive to estimate from molecular dynamics simulations. Recently, machine learning (ML) methods have been used to augment molecular simulations in many contexts, thus showing promise to estimate viscosity too in a relatively inexpensive manner. However, ML methods face significant challenges when the size of the data set is small, as is the case with viscosity. In this work, we train multiple ML models to predict the shear viscosity of a Lennard-Jones (LJ) fluid, with particular emphasis on addressing issues arising from a small data set. Specifically, the issues related to model selection, performance estimation, and uncertainty quantification are investigated. First, we show that the widely used performance estimation procedure of using a single unseen data set shows wide variability on small data sets. In this context, the common practice of using cross-validation (CV) to select hyperparameters (model selection) can be adapted to also estimate the generalization error (performance estimation). We compare two simple CV procedures for their ability to do both model selection and performance estimation simultaneously, and find that the k-fold-CV-based procedure shows a lower variance of error estimates. We discuss the role of performance metrics in training and evaluation. Finally, Gaussian process regression (GPR) and ensemble methods are used to estimate the uncertainty of individual predictions. The uncertainty estimates from GPR are also used to construct an applicability domain, within which the ML models provide more reliable predictions on another small data set generated in this work. Overall, the procedures prescribed in this work together lead to robust ML models for small data sets.
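The k-fold CV procedure used for simultaneous model selection and performance estimation can be sketched generically as follows (the "model" below is a deliberately trivial mean predictor, and all names are illustrative):

```python
def kfold_mse(xs, ys, k, fit, predict):
    """k-fold cross-validation estimate of mean squared error."""
    n = len(xs)
    folds = [list(range(i, n, k)) for i in range(k)]
    total, count = 0.0, 0
    for fold in folds:
        train = [i for i in range(n) if i not in fold]
        model = fit([xs[i] for i in train], [ys[i] for i in train])
        for i in fold:
            total += (predict(model, xs[i]) - ys[i]) ** 2
            count += 1
    return total / count

# trivially simple stand-in estimator: predict the training-set mean
def fit_mean(xs_train, ys_train):
    return sum(ys_train) / len(ys_train)

def predict_mean(model, x):
    return model

xs = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.0] * 6
cv_error = kfold_mse(xs, ys, 3, fit_mean, predict_mean)
```

Every point is held out exactly once, so on small data sets the error estimate uses all observations rather than sacrificing a single test split, which is why its variance is lower.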
Designing models that produce accurate predictions is a fundamental objective of machine learning. This work presents evidence that when derivatives of the target variable with respect to the inputs can be extracted from the process of interest, they can be leveraged to improve the accuracy of predictive machine learning models. Four key ideas are explored: (1) improving the prediction accuracy of linear regression models and feed-forward neural networks (NNs); (2) using the difference in performance between feed-forward NNs trained with and without gradient information to tune NN complexity (in the form of the number of hidden nodes); (3) using gradient information to regularize linear regression; and (4) using gradient information to improve generative image models. Across these applications, gradient information is shown to enhance each predictive model, demonstrating its value for a variety of applications.
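Idea (3) has a closed form in one dimension: for a linear model y = w*x + b, the derivative of the output with respect to the input is w itself, so observed derivatives can enter as a penalty pulling w toward them. A minimal sketch (the data and penalty weight below are illustrative, not from the paper):

```python
def grad_regularized_slope(x, y, grads, lam):
    """Slope of y = w*x + b minimising squared error plus a penalty
    lam * sum((w - g_i)**2) pulling w toward observed derivatives
    dy/dx (for a linear model, dy/dx is the slope w itself)."""
    n = len(x)
    xb, yb = sum(x) / n, sum(y) / n
    xc = [a - xb for a in x]
    yc = [b - yb for b in y]
    num = sum(a * b for a, b in zip(xc, yc)) + lam * sum(grads)
    den = sum(a * a for a in xc) + lam * len(grads)
    return num / den

# noisy slope-3 data, plus exact derivative observations dy/dx = 3
x = [0.0, 1.0, 2.0, 3.0]
y = [0.0, 4.0, 5.0, 9.0]
g = [3.0, 3.0, 3.0, 3.0]
w_plain = grad_regularized_slope(x, y, g, 0.0)    # ordinary least squares
w_reg = grad_regularized_slope(x, y, g, 100.0)    # gradient-regularized
```

With noisy function values but clean derivative observations, the regularized slope lands much closer to the true value of 3 than the plain least-squares fit.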
The concept of causality plays an important role in human cognition. Over the past few decades, causal inference has been well developed in many fields, such as computer science, medicine, economics, and education. With the development of deep learning techniques, it has been increasingly applied to causal inference over counterfactual data. Typically, deep causal models map the characteristics of covariates to a representation space and then design various objective optimization functions to estimate counterfactual outcomes unbiasedly under different optimization methods. This paper provides a survey of deep causal models, and its core contributions are as follows: 1) we provide relevant metrics under multiple treatments and continuous-dose treatments; 2) we present a comprehensive overview of deep causal models from the perspectives of both temporal development and method classification; and 3) we provide a detailed and comprehensive classification and analysis of relevant datasets and source code.