Measuring and monitoring soil organic carbon is critical for agricultural productivity and for addressing pressing environmental problems. Soil organic carbon not only enriches soil nutrition, but also has a gamut of co-benefits such as improving water storage and limiting physical erosion. Despite a litany of work in soil organic carbon estimation, current approaches do not generalize well across soil conditions and management practices. We empirically show that explicit modeling of cause-and-effect relationships among the soil processes improves the out-of-distribution generalizability of prediction models. We provide a comparative analysis of soil organic carbon estimation models where the skeleton is estimated using causal discovery methods. Our framework provides an average improvement of 81% in test mean squared error and 52% in test mean absolute error.
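A minimal sketch of the general idea, not the paper's implementation: estimate a dependency skeleton over the soil covariates and restrict the SOC regressor to the variables adjacent to SOC in that skeleton. Sparse inverse covariance (graphical lasso) stands in for the unspecified causal discovery method, and the covariate names are hypothetical.

```python
import numpy as np
import pandas as pd
from sklearn.covariance import GraphicalLasso
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error, mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
df = pd.DataFrame(
    rng.normal(size=(500, 5)),
    columns=["soc", "clay", "moisture", "temperature", "ndvi"],  # hypothetical covariates
)

# 1) Estimate an undirected skeleton from standardized observations.
X = (df - df.mean()) / df.std()
precision = GraphicalLasso(alpha=0.05).fit(X.values).precision_

# 2) Keep only covariates adjacent to "soc" in the skeleton.
soc_idx = df.columns.get_loc("soc")
neighbors = [c for j, c in enumerate(df.columns)
             if j != soc_idx and abs(precision[soc_idx, j]) > 1e-6]

# 3) Fit and evaluate the skeleton-restricted regressor.
X_tr, X_te, y_tr, y_te = train_test_split(df[neighbors], df["soc"], random_state=0)
model = RandomForestRegressor(random_state=0).fit(X_tr, y_tr)
pred = model.predict(X_te)
print("MSE:", mean_squared_error(y_te, pred), "MAE:", mean_absolute_error(y_te, pred))
```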
Understanding causality helps to construct interventions that achieve specific goals and enables prediction under intervention. As learning causality becomes increasingly important, the causal discovery task has shifted from inferring latent causal structure from observational data with traditional methods toward the pattern-recognition territory of deep learning. The rapid accumulation of massive data has promoted the emergence of causal search methods with excellent scalability. Existing summaries of causal discovery methods focus mainly on traditional constraint-based, score-based, and FCM-based methods; they lack a complete taxonomy and elaboration of deep-learning-based methods, and also lack perspectives that consider and explore causal discovery paradigms from the angle of variables. We therefore divide possible causal discovery tasks into three types according to the variable paradigm, give a definition for each task, define and instantiate the relevant datasets and the final causal model constructed for each task, and then review the main causal discovery methods for the different tasks. Finally, we propose several roadmaps from different perspectives to address the current research gaps in the causal discovery field and point out future research directions.
Successful material selection is critical for design automation in designing and manufacturing products. Designers leverage their knowledge and experience to create high-quality designs by selecting the most appropriate materials through evaluations of performance, manufacturability, and sustainability. Intelligent tools can help designers with varying levels of expertise by providing recommendations learned from previous designs. To achieve this, we introduce a graph representation learning framework that supports material prediction for bodies in assemblies. We formulate the material selection task as a node-level prediction task over an assembly graph representation of CAD models and process it with graph neural networks (GNNs). Evaluations over three experimental protocols on the Fusion 360 Gallery dataset demonstrate the feasibility of our approach, achieving a top-3 micro-F1 score of 0.75. The proposed framework scales to large datasets and incorporates designers' knowledge into the learning process. These capabilities allow the framework to serve as a recommendation system for design automation and as a baseline for future work, narrowing the gap between human designers and intelligent design agents.
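A minimal sketch of casting material selection as node-level classification on an assembly graph, assuming PyTorch Geometric is available; the body features, material classes, and toy assembly below are hypothetical placeholders, not the Fusion 360 Gallery setup.

```python
import torch
import torch.nn.functional as F
from torch_geometric.data import Data
from torch_geometric.nn import GCNConv

NUM_BODY_FEATURES, NUM_MATERIALS = 16, 8  # hypothetical sizes

class MaterialGNN(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = GCNConv(NUM_BODY_FEATURES, 64)
        self.conv2 = GCNConv(64, NUM_MATERIALS)

    def forward(self, data):
        h = F.relu(self.conv1(data.x, data.edge_index))
        return self.conv2(h, data.edge_index)  # per-body material logits

# Toy assembly: 4 bodies, edges represent contact/mating relations between bodies.
data = Data(
    x=torch.randn(4, NUM_BODY_FEATURES),
    edge_index=torch.tensor([[0, 1, 1, 2, 2, 3], [1, 0, 2, 1, 3, 2]]),
    y=torch.tensor([0, 2, 2, 5]),  # ground-truth material class per body
)

model = MaterialGNN()
opt = torch.optim.Adam(model.parameters(), lr=0.01)
for _ in range(100):
    opt.zero_grad()
    loss = F.cross_entropy(model(data), data.y)
    loss.backward()
    opt.step()

top3 = model(data).topk(3, dim=-1).indices  # top-3 material recommendations per body
print(top3)
```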
Agriculture is at the heart of the solution to achieve sustainability in feeding the world population, but we still need to advance our understanding of how agricultural output responds to climatic variability. Precision Agriculture (PA), a management strategy that uses technology such as remote sensing, Geographical Information Systems (GIS), and machine learning for decision making in the field, has emerged as a promising approach to enhance crop production, increase yield, and reduce water and nutrient losses and environmental impacts. In this context, multiple models have been developed to predict agricultural phenotypes, such as crop yield, from genomics (G), environment (E), weather and soil, and field management practices (M). These models have traditionally been based on mechanistic or statistical approaches. However, AI approaches are intrinsically well-suited to model complex interactions and have more recently been developed, outperforming classical methods. Here, we present a Natural Language Processing (NLP)-based neural network architecture to process the G, E and M inputs and their interactions. We show that by modeling DNA as natural language, our approach performs better than previous approaches when tested for new environments and similarly to other approaches for unseen seed varieties.
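A rough sketch of the "DNA as natural language" idea under stated assumptions: split a genotype sequence into overlapping k-mers as "words", embed and pool them, and fuse the result with environment (E) and management (M) features. The fusion head here is a generic stand-in rather than the paper's architecture, and all dimensions are hypothetical.

```python
import torch
import torch.nn as nn
from itertools import product

def kmer_tokens(seq: str, k: int = 3) -> list:
    """Split a DNA sequence into overlapping k-mers ("words")."""
    return [seq[i:i + k] for i in range(len(seq) - k + 1)]

# Tiny vocabulary over all 3-mers of A/C/G/T (hypothetical tokenization choice).
vocab = {"".join(p): i for i, p in enumerate(product("ACGT", repeat=3))}

class GxExMModel(nn.Module):
    def __init__(self, env_dim=10, mgmt_dim=5, emb_dim=32):
        super().__init__()
        self.embed = nn.Embedding(len(vocab), emb_dim)
        self.head = nn.Sequential(
            nn.Linear(emb_dim + env_dim + mgmt_dim, 64), nn.ReLU(), nn.Linear(64, 1)
        )

    def forward(self, kmer_ids, env, mgmt):
        g = self.embed(kmer_ids).mean(dim=1)                  # pooled genotype vector
        return self.head(torch.cat([g, env, mgmt], dim=-1))   # predicted phenotype (e.g. yield)

seq = "ACGTGGCTAACGT"
ids = torch.tensor([[vocab[t] for t in kmer_tokens(seq)]])
model = GxExMModel()
print(model(ids, torch.randn(1, 10), torch.randn(1, 5)).shape)  # torch.Size([1, 1])
```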
Causal relationships are commonly examined in manufacturing processes to support fault investigations, perform interventions, and make strategic decisions. Industry 4.0 has made increasing amounts of data available, enabling data-driven causal discovery (CD). Considering the growing number of recently proposed CD methods, rigorous benchmarking procedures on publicly available datasets are needed, as such datasets form the basis for fair comparison and validation of different methods. This work introduces two novel public datasets for CD in continuous manufacturing processes. The first dataset uses the well-known Tennessee Eastman simulator for fault detection and process control. The second dataset is extracted from an ultra-processed food manufacturing plant and includes a description of the plant along with multiple ground truths. These datasets are used to propose a benchmarking procedure based on different metrics, over which several CD algorithms are evaluated. This work allows CD methods to be tested under realistic conditions, enabling the selection of the most suitable method for a specific target application. The datasets are available at the following link: https://github.com/giovannimen
Climate change, population growth, and water scarcity present unprecedented challenges for agriculture. This project aims to forecast soil moisture using domain knowledge and machine learning for crop management decisions that enable sustainable farming. Traditional methods for predicting hydrological response features require significant computational time and expertise. Recent work has implemented machine learning models as a tool for forecasting hydrological response features, but these models neglect a crucial component of traditional hydrological modeling: spatially close units can have vastly different hydrological responses. In traditional hydrological modeling, units with similar hydrological properties are grouped together and share model parameters regardless of their spatial proximity. Inspired by this domain knowledge, we have constructed a novel domain-inspired temporal graph convolutional neural network. Our approach involves clustering units based on time-varying hydrological properties, constructing graph topologies for each cluster, and forecasting soil moisture using graph convolutions and a gated recurrent neural network. We have trained, validated, and tested our method on field-scale time series data consisting of approximately 99,000 hydrological response units spanning 40 years in a case study in the northeastern United States. Comparison with existing models illustrates the effectiveness of using domain-inspired clustering with time series graph neural networks. The framework is being deployed as part of a pro bono social impact program, and the trained models are being deployed on small-holding farms in central Texas.
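A condensed sketch of the described pipeline, assuming PyTorch Geometric: cluster hydrological response units by their properties, connect units within each cluster, and forecast with per-timestep graph convolutions followed by a GRU. The clustering features, graph construction, and dimensions are placeholders, not the deployed model.

```python
import torch
import torch.nn as nn
from sklearn.cluster import KMeans
from torch_geometric.nn import GCNConv

NUM_UNITS, TIMESTEPS, FEATS = 50, 24, 6  # toy sizes

def cluster_graph(unit_props, n_clusters=5):
    """Group HRUs by hydrological properties; connect every pair within a cluster."""
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(unit_props)
    edges = [(i, j) for i in range(len(labels)) for j in range(len(labels))
             if i != j and labels[i] == labels[j]]
    return torch.tensor(edges, dtype=torch.long).t().contiguous()

class SoilMoistureTGCN(nn.Module):
    def __init__(self, in_dim=FEATS, hid=32):
        super().__init__()
        self.gcn = GCNConv(in_dim, hid)
        self.gru = nn.GRU(hid, hid, batch_first=True)
        self.out = nn.Linear(hid, 1)

    def forward(self, x, edge_index):
        # x: [units, timesteps, features] -> spatial message passing at each timestep
        h = torch.stack([torch.relu(self.gcn(x[:, t], edge_index))
                         for t in range(x.shape[1])], dim=1)
        _, h_last = self.gru(h)             # temporal aggregation per unit
        return self.out(h_last.squeeze(0))  # next-step soil moisture per unit

x = torch.randn(NUM_UNITS, TIMESTEPS, FEATS)
edge_index = cluster_graph(torch.randn(NUM_UNITS, 4).numpy())
print(SoilMoistureTGCN()(x, edge_index).shape)  # torch.Size([50, 1])
```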
Graph neural networks (GNNs) have become compelling models for learning and inference over graph-structured data, but little work has been done on understanding the fundamental limitations of GNNs with respect to scaling to larger graphs and generalizing to out-of-distribution inputs. In this paper, we use a random graph generator that allows us to systematically study how graph size and structural properties affect the predictive performance of GNNs. We present concrete evidence that, among many graph properties, the mean and modality of the node degree distribution are key features that determine whether a GNN can generalize to unseen graphs. Accordingly, we propose the flexible GNN (Flex-GNN), which uses multiple node update functions and inner-loop optimization as a generalization of the single canonical nonlinear transformation applied to aggregated inputs, allowing the network to adapt flexibly to new graphs. The Flex-GNN framework improves generalization beyond the training setup on several inference tasks.
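A deliberately simplified illustration of the two ingredients named above: a layer that mixes several node-update functions, and an inner optimization loop that adapts the mixture weights to each new graph. The inner objective used here (feature reconstruction) is a stand-in; the abstract does not specify Flex-GNN's actual inner-loop target.

```python
import torch
import torch.nn as nn
from torch_geometric.nn import GCNConv, SAGEConv

class FlexLayer(nn.Module):
    """One layer with several candidate node-update functions, mixed by per-graph weights."""
    def __init__(self, dim=16):
        super().__init__()
        self.updates = nn.ModuleList([GCNConv(dim, dim), SAGEConv(dim, dim)])

    def forward(self, x, edge_index, mix):
        weights = torch.softmax(mix, dim=0)
        return sum(w * torch.relu(u(x, edge_index)) for w, u in zip(weights, self.updates))

def adapt_to_graph(layer, decoder, x, edge_index, steps=5, lr=0.1):
    """Inner loop: tune only the mixture weights on the new, unseen graph."""
    mix = torch.zeros(len(layer.updates), requires_grad=True)
    opt = torch.optim.SGD([mix], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((decoder(layer(x, edge_index, mix)) - x) ** 2).mean()
        loss.backward()
        opt.step()
    return mix.detach()

dim = 16
layer, decoder = FlexLayer(dim), nn.Linear(dim, dim)
x = torch.randn(30, dim)
edge_index = torch.randint(0, 30, (2, 80))
mix = adapt_to_graph(layer, decoder, x, edge_index)
print("adapted mixture:", torch.softmax(mix, dim=0))
```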
Graphs are ubiquitous in nature and can therefore serve as models for many practical but also theoretical problems. For this purpose, they can be defined in many different forms that suitably reflect the individual context of the represented problem. To address cutting-edge problems based on graph data, the research field of Graph Neural Networks (GNNs) has emerged. Despite the field's youth and the speed at which new models are developed, many recent surveys have been published to keep track of them. Nevertheless, no existing survey gathers which GNNs can process which kinds of graph types. In this survey, we give a detailed overview of existing GNNs and, unlike previous surveys, categorize them according to their ability to handle different graph types and properties. We consider GNNs operating on static and dynamic graphs of different structural constitutions, with or without node or edge attributes. Moreover, we distinguish between GNN models for discrete-time and continuous-time dynamic graphs and group the models according to their architecture. We find that there are still graph types that are not or only rarely covered by existing GNN models. We point out where models are missing and give potential reasons for their absence.
Understanding the halo-galaxy connection is fundamental to improving our knowledge of the nature and properties of dark matter. In this work we build a model that infers the mass of a halo given the positions, velocities, stellar masses, and radii of the galaxies it hosts. To capture information from the correlations among galaxy properties and their phase space, we use a graph neural network (GNN), which is designed to work with irregular and sparse data. We train our models on galaxies from the Cosmology and Astrophysics with MachinE Learning Simulations (CAMELS) project. Our model, which accounts for cosmological and astrophysical uncertainties, is able to constrain halo masses with an accuracy of $\sim 0.2$ dex. Furthermore, a GNN trained on one suite of simulations preserves part of its accuracy when tested on simulations run with a different code. A PyTorch Geometric implementation of the GNN is publicly available on GitHub at https://github.com/pablovd/halographnet
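A minimal stand-in (not the linked HaloGraphNet code) showing the overall shape of the task: galaxies are nodes carrying phase-space and stellar features, message passing mixes their information, and global pooling yields one halo-level prediction.

```python
import torch
import torch.nn as nn
from torch_geometric.data import Data
from torch_geometric.nn import GCNConv, global_mean_pool

GALAXY_FEATS = 8  # e.g. 3 positions + 3 velocities + stellar mass + radius (hypothetical layout)

class HaloMassGNN(nn.Module):
    def __init__(self, hid=64):
        super().__init__()
        self.conv1, self.conv2 = GCNConv(GALAXY_FEATS, hid), GCNConv(hid, hid)
        self.readout = nn.Linear(hid, 1)

    def forward(self, data):
        h = torch.relu(self.conv1(data.x, data.edge_index))
        h = torch.relu(self.conv2(h, data.edge_index))
        return self.readout(global_mean_pool(h, data.batch))  # one (log-)mass per halo

# Toy halo with 5 galaxies, fully connected among themselves.
n = 5
edge_index = torch.tensor([(i, j) for i in range(n) for j in range(n) if i != j]).t().contiguous()
halo = Data(x=torch.randn(n, GALAXY_FEATS), edge_index=edge_index,
            batch=torch.zeros(n, dtype=torch.long))
print(HaloMassGNN()(halo).shape)  # torch.Size([1, 1])
```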
6G is envisioned to offer higher data rates, improved reliability, ubiquitous AI services, and support for a massive scale of connected devices. As a consequence, 6G will be much more complex than its predecessors. The growth in system scale and complexity, the coexistence with legacy networks, and the diversified service requirements will inevitably incur huge maintenance costs and efforts for future 6G networks. Network Root Cause Analysis (Net-RCA) plays a critical role in identifying the root causes of network faults. In this article, we first give an introduction to the envisioned 6G networks. Next, we discuss the challenges and potential solutions of 6G network operation and management, and comprehensively survey existing RCA methods. Then we propose an artificial intelligence (AI)-empowered Net-RCA framework for 6G. Performance comparisons on both synthetic and real-world network data are carried out to demonstrate that the proposed method outperforms existing methods considerably.
Causal learning has attracted much attention in recent years because causality reveals the essential relationship between things and indicates how the world progresses. However, there are many problems and bottlenecks in traditional causal learning methods, such as high-dimensional unstructured variables, combinatorial optimization problems, unknown intervention, unobserved confounders, selection bias and estimation bias. Deep causal learning, that is, causal learning based on deep neural networks, brings new insights for addressing these problems. While many deep learning-based causal discovery and causal inference methods have been proposed, there is a lack of reviews exploring the internal mechanism of deep learning to improve causal learning. In this article, we comprehensively review how deep learning can contribute to causal learning by addressing conventional challenges from three aspects: representation, discovery, and inference. We point out that deep causal learning is important for the theoretical extension and application expansion of causal science and is also an indispensable part of general artificial intelligence. We conclude the article with a summary of open issues and potential directions for future work.
Developing and implementing AI-based solutions helps state and federal government agencies, research institutions, and commercial companies enhance decision-making processes, automate chains of operations, and reduce the consumption of natural and human resources. At the same time, most AI approaches used in practice can only be represented as "black boxes" and suffer from a lack of transparency. This can ultimately lead to unexpected outcomes and undermine trust in such systems. It is therefore crucial not only to develop effective and robust AI systems, but also to make sure their internal processes are explainable and fair. Our goal in this chapter is to introduce the topic of assurance methods for AI systems with high-impact decisions, using the example of the technology sector of the U.S. economy. We explain how these fields can benefit from revealing cause-and-effect relationships between key metrics in the dataset by providing a causal experiment on technology-economics data. Several causal inference approaches and AI assurance techniques are reviewed, and the transformation of the data into a graph-structured dataset is demonstrated.
Graph neural networks (GNNs) have shown outstanding applications in many fields where data is fundamentally represented as graphs (e.g., chemistry, biology, recommender systems). In this vein, communication networks comprise many fundamental components that are naturally represented in a graph-structured way (e.g., topology, configurations, traffic flows). This position article presents GNNs as a fundamental tool for modeling, controlling, and managing communication networks. GNNs represent a new generation of data-driven models that can accurately learn and reproduce the complex behaviors behind real networks. As a result, such models can be applied to a wide variety of networking use cases, such as planning, online optimization, or troubleshooting. The main advantage of GNNs over traditional neural networks lies in their unprecedented generalization capabilities when applied to networks and configurations unseen during training, which is a critical feature for achieving practical data-driven solutions for networking. This article includes a brief tutorial on GNNs and their possible applications to communication networks. To showcase the potential of this technology, we present two use cases with state-of-the-art GNN models applied to wired and wireless networks, respectively. Finally, we delve into the key open challenges and opportunities of this novel research area.
Recent years have seen rapid progress at the intersection between causality and machine learning. Motivated by scientific applications involving high-dimensional data, in particular in biomedicine, we propose a deep neural architecture for learning causal relationships between variables from a combination of empirical data and prior causal knowledge. We combine convolutional and graph neural networks within a causal risk framework to provide a flexible and scalable approach. Empirical results include linear and nonlinear simulations (where the underlying causal structures are known and can be directly compared against), as well as a real biological example where the models are applied to high-dimensional molecular data and their output compared against entirely unseen validation experiments. These results demonstrate the feasibility of using deep learning approaches to learn causal networks in large-scale problems spanning thousands of variables.
Causal discovery is a major task of great importance for machine learning, since causal structures can enable models to go beyond purely correlation-based inference and significantly improve their performance. However, finding causal structures from data poses significant challenges both in terms of computational effort and accuracy, not to mention that it can be impossible without interventions. In this paper, we develop a meta-reinforcement-learning algorithm that performs causal discovery by learning to perform interventions such that an explicit causal graph can be constructed. Apart from being useful for possible downstream applications, the estimated causal graph also provides an explanation of the data-generating process. We show that our algorithm estimates good graphs compared to SOTA approaches, even in environments whose underlying causal structure has never been seen before. In addition, we conduct an ablation study that shows how learning interventions contributes to the overall performance of our method. We conclude that interventions indeed help boost performance, effectively yielding accurate estimates of the causal structure of possibly unseen environments.
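A tiny numerical illustration, not the meta-RL algorithm itself, of why learned interventions help: in a ground-truth SCM where X causes Y, intervening on X shifts Y's distribution while intervening on Y leaves X unchanged, which is exactly the asymmetry that orients the edge.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample(n=10_000, do_x=None, do_y=None):
    """Ground-truth SCM X -> Y with optional hard interventions do(X=x) or do(Y=y)."""
    x = rng.normal(size=n) if do_x is None else np.full(n, do_x)
    y = 2.0 * x + rng.normal(size=n) if do_y is None else np.full(n, do_y)
    return x, y

x_obs, y_obs = sample()
_, y_do_x = sample(do_x=3.0)   # intervene on X
x_do_y, _ = sample(do_y=3.0)   # intervene on Y

print("E[Y] shift under do(X=3):", y_do_x.mean() - y_obs.mean())  # large -> X causes Y
print("E[X] shift under do(Y=3):", x_do_y.mean() - x_obs.mean())  # ~0    -> Y does not cause X
```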
Graph machine learning has been extensively studied in both academia and industry. Although booming with a vast number of emerging methods and techniques, most of the literature is built on the in-distribution hypothesis, i.e., testing and training graph data are identically distributed. However, this in-distribution hypothesis can hardly be satisfied in many real-world graph scenarios where the model performance substantially degrades when there exist distribution shifts between testing and training graph data. To solve this critical problem, out-of-distribution (OOD) generalization on graphs, which goes beyond the in-distribution hypothesis, has made great progress and attracted ever-increasing attention from the research community. In this paper, we comprehensively survey OOD generalization on graphs and present a detailed review of recent advances in this area. First, we provide a formal problem definition of OOD generalization on graphs. Second, we categorize existing methods into three classes from conceptually different perspectives, i.e., data, model, and learning strategy, based on their positions in the graph machine learning pipeline, followed by detailed discussions for each category. We also review the theories related to OOD generalization on graphs and introduce the commonly used graph datasets for thorough evaluations. Finally, we share our insights on future research directions. This paper is the first systematic and comprehensive review of OOD generalization on graphs, to the best of our knowledge.
Climate change poses new challenges to crop-related concerns, including food insecurity, supply stability, and economic planning. As one of the central challenges, crop yield prediction has become a pressing task in the machine learning field. Despite its importance, the prediction task is exceptionally complicated, since crop yield depends on various factors such as weather, land, and soil quality, as well as their interactions. In recent years, machine learning models have been successfully applied in this domain. However, these models either restrict their tasks to relatively small regions or study only a single year or a few years, which makes them hard to generalize spatially and temporally. In this paper, we introduce a novel graph-based recurrent neural network for crop yield prediction that incorporates geographical and temporal knowledge into the model to further boost predictive power. Our method is trained, validated, and tested on over 2,000 counties across 41 states of the continental United States, covering the years 1981 to 2019. To the best of our knowledge, this is the first machine learning method that embeds geographical knowledge in crop yield prediction and predicts crop yields at the county level nationwide. We also lay a solid foundation by comparing our model with well-known machine learning baselines, including linear models, tree-based models, and deep learning methods. Experiments show that our proposed method consistently outperforms existing state-of-the-art methods on various metrics, validating the effectiveness of geospatial and temporal information.
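A small sketch of the graph-plus-recurrence idea under stated assumptions: counties form a graph through geographic adjacency, a GNN encodes each year's features spatially, and a recurrent layer runs over the yearly embeddings to predict next-season yield. The ring adjacency, feature sizes, and choice of LSTM below are illustrative placeholders, not the paper's architecture.

```python
import torch
import torch.nn as nn
from torch_geometric.nn import GCNConv

COUNTIES, YEARS, FEATS = 20, 10, 12  # toy sizes

class YieldGNNRNN(nn.Module):
    def __init__(self, hid=32):
        super().__init__()
        self.spatial = GCNConv(FEATS, hid)
        self.temporal = nn.LSTM(hid, hid, batch_first=True)
        self.head = nn.Linear(hid, 1)

    def forward(self, x, edge_index):
        # x: [counties, years, features] -> spatial encoding of each year
        h = torch.stack([torch.relu(self.spatial(x[:, t], edge_index))
                         for t in range(x.shape[1])], dim=1)
        out, _ = self.temporal(h)      # recurrence over the yearly embeddings
        return self.head(out[:, -1])   # next-season yield per county

# Toy ring adjacency between neighboring counties (symmetric edges).
src = torch.arange(COUNTIES)
edge_index = torch.stack([torch.cat([src, (src + 1) % COUNTIES]),
                          torch.cat([(src + 1) % COUNTIES, src])])
x = torch.randn(COUNTIES, YEARS, FEATS)
print(YieldGNNRNN()(x, edge_index).shape)  # torch.Size([20, 1])
```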
Deep reinforcement learning (DRL) has empowered a variety of artificial intelligence fields, including pattern recognition, robotics, recommendation systems, and gaming. Similarly, graph neural networks (GNNs) have demonstrated superior performance in supervised learning on graph-structured data. Recently, the fusion of GNNs with DRL for graph-structured environments has attracted much attention. This article provides a comprehensive review of these hybrid works. They can be classified into two categories: (1) algorithmic enhancement, where DRL and GNNs complement each other for better utility; and (2) application-specific enhancement, where DRL and GNNs support each other. This fusion effectively addresses various complex problems in engineering and the life sciences. Based on the review, we further analyze the applicability and benefits of fusing these two domains, especially in terms of improving generalizability and reducing computational complexity. Finally, the key challenges in integrating DRL and GNNs, as well as potential future research directions, are highlighted, which will be of interest to the broader machine learning community.
Considering the social and ethical consequences of AI- and ML-based decisions is critical for the safe and acceptable use of these emerging technologies. Fairness, in particular, guarantees that ML decisions do not result in discrimination against individuals or minority groups. Identifying and measuring fairness/discrimination reliably is better achieved using causality, which considers the causal relation, beyond mere association, between a sensitive attribute (e.g., gender, race, religion) and a decision (e.g., job hiring, loan granting). However, the biggest impediment to addressing fairness with causality is the unavailability of the causal model (typically represented as a causal graph). Existing causal approaches to fairness in the literature do not address this problem and assume that the causal model is available. In this paper, we do not make such an assumption, and we review the major algorithms for discovering causal relations from observational data. This study focuses on causal discovery and its impact on fairness. In particular, we show how different causal discovery approaches may result in different causal models and, most importantly, how even slight differences between causal models can have a significant impact on fairness/discrimination conclusions. These results are consolidated by empirical analysis using synthetic and standard fairness benchmark datasets. The main goal of this study is to highlight the importance of the causal discovery step when addressing fairness using causality.
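A toy illustration of the abstract's central point: two causal graphs differing in a single edge orientation flip the fairness conclusion, because the sensitive attribute is an ancestor of the decision in one graph but not in the other. The variable names are hypothetical, and real causal-discovery output would take the place of the hand-built graphs.

```python
import networkx as nx

edges_common = [("education", "hiring"), ("experience", "hiring")]

# Discovery run A orients gender -> education; run B orients education -> gender.
graph_a = nx.DiGraph(edges_common + [("gender", "education")])
graph_b = nx.DiGraph(edges_common + [("education", "gender")])

for name, g in [("A", graph_a), ("B", graph_b)]:
    discriminatory = nx.has_path(g, "gender", "hiring")
    print(f"graph {name}: directed path gender -> hiring exists: {discriminatory}")
# graph A: True  (gender causally influences hiring -> potential discrimination)
# graph B: False (no directed path from gender to hiring)
```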
Even though machine learning algorithms already play an essential role in data science, many current approaches make unrealistic assumptions about the input data. Such methods are hard to apply because of incompatible data formats, or heterogeneous, hierarchical, or entirely missing pieces of the dataset. As a solution, we propose a versatile, unified framework for sample representation, model definition, and training, called "HMill". We review in depth the multiple-instance learning paradigm on which the framework builds and which it extends. To theoretically justify the design of the key components of HMill, we show an extension of the universal approximation theorem to the set of all functions realized by models implemented in the framework. The paper also contains a detailed discussion of technical and performance improvements in our implementation, which will be released for download under the MIT license. The main asset of the framework is its flexibility, which makes it possible to model different real-world data sources with the same tool. In addition to the standard setting in which a set of attributes is observed for each object individually, we explain how message-passing inference in graphs representing whole systems of objects can be implemented within the framework. To support our claims, we solve three different problems from the cybersecurity domain using the framework. The first use case concerns IoT device identification from raw network observations. In the second problem, we study how snapshots of an operating system represented as directed graphs can be used to classify malicious binary files. The last example is the task of domain blacklist extension through modeling of relations between entities in the network. In all three problems, the solution based on the proposed framework achieves performance comparable to specialized approaches.