Meshing is a critical but user-intensive process necessary for stable and accurate simulations in computational fluid dynamics (CFD), and mesh generation is often a bottleneck in CFD pipelines. Adaptive meshing techniques allow the mesh to be updated automatically to produce an accurate solution for the problem at hand. Existing classical techniques for adaptive meshing require additional functionality from solvers, many training simulations, or both. Current machine learning techniques often incur substantial computational cost for training data generation and are restricted in scope to the flow regime of the training data. MeshDQN is developed as a general-purpose deep reinforcement learning framework to iteratively coarsen meshes while preserving the calculation of target properties. A graph neural network-based deep Q-network is used to select mesh vertices for removal, and solution interpolation is used to bypass expensive simulations at each step of the improvement process. MeshDQN requires a single simulation prior to mesh coarsening and makes no assumptions about flow regime, mesh type, or solver, requiring only the ability to modify meshes directly in a CFD pipeline. MeshDQN successfully improves meshes for two 2D airfoils.
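For illustration, a minimal runnable sketch of this selection step (our assumption of the mechanics, not MeshDQN's actual code): a small message-passing network scores every vertex of a toy mesh, and the highest-scoring vertex is the removal action.

```python
# Toy sketch of graph-based per-vertex Q-values for mesh coarsening.
# The two-layer network and the random mesh below are illustrative assumptions.
import torch

n, f = 6, 4                                   # toy mesh: 6 vertices, 4 features each
x = torch.randn(n, f)                         # per-vertex features (e.g., coords, velocity)
adj = (torch.rand(n, n) < 0.4).float()
adj = ((adj + adj.t()) > 0).float()           # random symmetric adjacency as a stand-in

w1 = torch.nn.Linear(f, 16)
w2 = torch.nn.Linear(16, 1)

def q_values(x, adj):
    # One round of mean aggregation over neighbours, then a per-vertex Q-value.
    deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
    h = torch.relu(w1(adj @ x / deg))         # aggregate neighbour features
    return w2(h).squeeze(-1)                  # one Q-value per vertex

action = int(torch.argmax(q_values(x, adj))) # greedy action: which vertex to remove
print(f"remove vertex {action}")
```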
Finite element discretizations of computational physics problems often rely on adaptive mesh refinement (AMR) to preferentially resolve regions containing important features during simulation. However, these spatial refinement strategies are often heuristic and rely on domain-specific knowledge or trial and error. We treat the process of adaptive mesh refinement as a local, sequential decision-making problem under incomplete information, formulating AMR as a partially observable Markov decision process. Using a deep reinforcement learning approach, we train policy networks for AMR strategies directly from numerical simulation. The training process requires neither an exact solution nor a high-fidelity ground truth for the partial differential equation at hand, nor a pre-computed training dataset. The local nature of our reinforcement learning formulation allows the policy network to be trained inexpensively on problems much smaller than those on which it is deployed. The method is not specific to any particular partial differential equation, problem dimension, or numerical discretization, and can flexibly incorporate diverse problem physics. To that end, we apply the method to a range of partial differential equations, using a variety of high-order discontinuous Galerkin and hybridizable discontinuous Galerkin finite element discretizations. We show that the resulting deep reinforcement learning policies are competitive with common AMR heuristics, generalize across problem classes, and strike a favorable balance between accuracy and cost, so that they often lead to higher accuracy per problem degree of freedom.
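A toy illustration of this local, sequential framing (our assumption, not the paper's implementation): each element's agent sees only a neighbourhood patch of an error indicator and is rewarded for refining where the error is high, with a cost penalty per refinement.

```python
# Toy local AMR decision problem: observe a patch, choose refine / coarsen.
import numpy as np

rng = np.random.default_rng(0)
error = rng.random(16)            # per-element error indicator on a 1D mesh
cost_weight = 0.1                 # trades accuracy against degrees of freedom

def local_observation(i):
    # Partial observability: the agent only sees the element and its neighbours.
    return error[max(i - 1, 0): i + 2]

def reward(i, action):
    # Refining a high-error element helps; every refinement costs budget.
    if action == "refine":
        return error[i] - cost_weight
    if action == "coarsen":
        return cost_weight - error[i]
    return 0.0

for i in range(len(error)):
    obs = local_observation(i)
    action = "refine" if obs.mean() > 0.5 else "coarsen"  # stand-in for the policy
    print(i, action, round(reward(i, action), 3))
```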
Adaptive mesh refinement (AMR) is necessary for efficient finite element simulations of complex physical phenomena, as it allocates a limited computational budget based on the need for higher or lower resolution, which varies over space and time. We present a novel formulation of AMR as a fully-cooperative Markov game, in which each element is an independent agent that makes refinement and de-refinement choices based on local information. We design a novel deep multi-agent reinforcement learning (MARL) algorithm called Value Decomposition Graph Network (VDGN), which solves the two core challenges that AMR poses for MARL: posthumous credit assignment due to agent creation and deletion, and unstructured observations due to the diversity of mesh geometries. For the first time, we show that MARL enables anticipatory refinement of regions that will encounter complex features at future times, thereby unlocking entirely new regions of the error-cost objective landscape that are inaccessible by traditional methods based on local error estimators. Comprehensive experiments show that VDGN policies significantly outperform error-threshold-based policies in global error and cost metrics. We show that learned policies generalize to test problems with physical features, mesh geometries, and longer simulation times that were not seen in training. We also extend VDGN with multi-objective optimization capabilities to find the Pareto front of the tradeoff between cost and error.
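A minimal sketch of the value-decomposition idea behind VDGN (illustrative, not the authors' implementation): the joint action-value is the sum of per-element utilities, so each element agent can act greedily on its own utility head while the team value is still maximized.

```python
# Value decomposition over independent element agents, in toy form.
import torch

n_agents, n_actions = 5, 3                    # e.g., coarsen / keep / refine
utilities = torch.randn(n_agents, n_actions)  # per-agent Q outputs from a graph network

greedy = utilities.argmax(dim=1)              # each element picks its own best action
q_total = utilities.gather(1, greedy.view(-1, 1)).sum()  # joint value = sum of parts
print(greedy.tolist(), float(q_total))
```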
Large sparse linear systems are ubiquitous in science and engineering, for example those arising from the discretization of partial differential equations. Algebraic multigrid (AMG) methods are among the most common methods for solving such linear systems and are backed by an extensive body of mathematical theory. A system of linear equations defines a graph on the set of unknowns, and each level of a multigrid solver requires the selection of an appropriate coarse graph along with restriction and interpolation operators that map to and from the coarse representation. The efficiency of a multigrid solver depends critically on this selection, and many selection methods have been developed over the years. Recently, it has been demonstrated that the AMG interpolation and restriction operators can be learned directly, given a selection of the coarse graph. In this paper, we consider the complementary problem of learning to coarsen graphs for a multigrid solver, a necessary step in developing fully learned AMG methods. We propose a method based on a reinforcement learning (RL) agent using graph neural networks (GNNs), which can learn to perform graph coarsening on small planar training graphs and then be applied to unstructured large planar graphs, assuming bounded node degree. We demonstrate that this method can produce better coarse graphs than existing algorithms, even as the graph size increases and other properties of the graph vary. We also propose an efficient inference procedure for performing graph coarsening that results in time complexity linear in the graph size.
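For illustration, a toy sketch of score-guided coarsening (our assumption, not the paper's method): a stand-in for learned per-node scores ranks the nodes, which are greedily added to the coarse set while skipping neighbours of already-selected nodes.

```python
# Greedy score-based coarse-grid selection on a random planar-like toy graph.
import numpy as np

rng = np.random.default_rng(1)
n = 10
adj = np.triu(rng.random((n, n)) < 0.3, 1)
adj = adj | adj.T
scores = rng.random(n)                 # stand-in for learned per-node scores

coarse, blocked = [], set()
for v in np.argsort(-scores):          # take best-scored nodes first
    if v not in blocked:
        coarse.append(int(v))
        blocked.update(np.flatnonzero(adj[v]))  # keep coarse nodes non-adjacent
print("coarse nodes:", sorted(coarse))
```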
The Steiner tree problem (STP) in graphs aims to find a minimum-weight tree that connects a given set of vertices in a graph. It is a classic NP-hard combinatorial optimization problem with many real-world applications (e.g., VLSI chip design, transportation network planning, and wireless sensor networks). Many exact and approximate algorithms have been developed for the STP, but they suffer from high computational complexity and weak worst-case solution guarantees, respectively. Heuristic algorithms have also been developed; however, each of them requires application-domain knowledge to design and is only suitable for specific scenarios. Motivated by the recently reported observation that instances of the same NP-hard combinatorial problem may share the same or similar combinatorial structure and differ mainly in their data, we investigate the feasibility and benefits of applying machine learning techniques to the STP. To this end, we design a novel model, Vulcan, based on a new graph neural network and deep reinforcement learning. The core of Vulcan is a novel compact graph embedding that transforms high-dimensional graph-structure data (i.e., path-related information) into a low-dimensional vector representation. Given an STP instance, Vulcan uses this embedding to encode its path-related information and sends the encoded graph to a deep reinforcement learning component based on a double deep Q-network (DDQN) to find a solution. Beyond the STP, Vulcan can also find solutions to a wide range of NP-hard problems (e.g., SAT, MVC, and X3C) by reducing them to the STP. We implement a prototype of Vulcan and demonstrate its efficacy and efficiency with extensive experiments on real-world and synthetic datasets.
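The double deep Q-network (DDQN) update that such an RL component relies on computes its target by selecting actions with the online network and evaluating them with the target network, which reduces the overestimation bias of vanilla DQN. A minimal runnable form with toy tensors (the random linear networks are stand-ins):

```python
# DDQN target: select with the online net, evaluate with the target net.
import torch

gamma = 0.99
online, target = torch.nn.Linear(8, 4), torch.nn.Linear(8, 4)
next_state = torch.randn(32, 8)               # batch of 32 toy states
reward = torch.randn(32)

a_star = online(next_state).argmax(dim=1)                              # action selection
q_next = target(next_state).gather(1, a_star.view(-1, 1)).squeeze(1)   # action evaluation
td_target = reward + gamma * q_next.detach()
print(td_target.shape)
```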
Profile extrusion is a continuous production process for manufacturing plastic profiles from molten polymer. Especially interesting is the design of the die, through which the melt is pressed to attain the desired shape. However, due to an inhomogeneous velocity distribution at the die exit or residual stresses inside the extrudate, the final shape of the manufactured part often deviates from the desired one. To avoid these deviations, the shape of the die can be computationally optimized, which has already been investigated in the literature using classical optimization approaches. A new approach in the field of shape optimization is the utilization of Reinforcement Learning (RL) as a learning-based optimization algorithm. RL is based on trial-and-error interactions of an agent with an environment. For each action, the agent is rewarded and informed about the subsequent state of the environment. While not necessarily superior to classical optimization algorithms (e.g., gradient-based or evolutionary methods) on a single problem, RL techniques are expected to perform especially well when similar optimization tasks are repeated, since the agent learns a more general strategy for generating optimal shapes rather than concentrating on a single problem. In this work, we investigate this approach by applying it to two 2D test cases. The flow-channel geometry can be modified by the RL agent using so-called Free-Form Deformation, a method where the computational mesh is embedded into a transformation spline, which is then manipulated based on the control-point positions. In particular, we investigate the impact of utilizing different agents on the training progress and the potential of wall-time savings by utilizing multiple environments during training.
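For illustration, a compact runnable sketch of free-form deformation in 2D (lattice size and displacements are our toy assumptions): mesh points in the unit square are deformed by Bernstein-weighted displacements of the control-point lattice.

```python
# Free-form deformation of 2D points by a Bernstein control-point lattice.
import numpy as np
from math import comb

def bernstein(i, n, t):
    return comb(n, i) * t**i * (1 - t)**(n - i)

def ffd(points, control_disp):
    # points: (m, 2) in [0, 1]^2; control_disp: (L+1, M+1, 2) lattice displacements
    L, M = control_disp.shape[0] - 1, control_disp.shape[1] - 1
    out = points.copy()
    for i in range(L + 1):
        for j in range(M + 1):
            w = bernstein(i, L, points[:, 0]) * bernstein(j, M, points[:, 1])
            out += w[:, None] * control_disp[i, j]   # weighted control-point shift
    return out

pts = np.array([[0.25, 0.5], [0.75, 0.5]])
disp = np.zeros((3, 3, 2))
disp[1, 1] = [0.0, 0.1]                              # lift the centre control point
print(ffd(pts, disp))
```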
Wind turbine wake modelling is of crucial importance to accurate resource assessment, to layout optimisation, and to the operational control of wind farms. This work proposes a surrogate model for the representation of wind turbine wakes based on a state-of-the-art graph representation learning method termed a graph neural network. The proposed end-to-end deep learning model operates directly on unstructured meshes and has been validated against high-fidelity data, demonstrating its ability to rapidly make accurate 3D flow field predictions for various inlet conditions and turbine yaw angles. The specific graph neural network model employed here is shown to generalise well to unseen data and is less sensitive to over-smoothing compared to common graph neural networks. A case study based upon a real-world wind farm further demonstrates the capability of the proposed approach to predict farm-scale power generation. Moreover, the proposed graph neural network framework is flexible and highly generic; as formulated here, it can be applied to any steady-state computational fluid dynamics simulation on unstructured meshes.
Graph mining tasks arise from many different application domains, ranging from social networks and transportation to e-commerce, and have been receiving great attention from the theoretical and algorithm design communities in recent years; there has also been pioneering work employing reinforcement learning (RL) techniques to address graph data mining tasks. However, these graph mining methods and RL models are dispersed across different research areas, which makes it hard to compare them. In this survey, we provide a comprehensive overview of RL and graph mining methods and generalize these methods to Graph Reinforcement Learning (GRL) as a unified formulation. We further discuss the applications of GRL methods across various domains and summarize the method descriptions, open-source code, and benchmark datasets of GRL methods. Furthermore, we propose important directions and challenges to be solved in the future. As far as we know, this is the latest comprehensive survey of GRL; it provides a global view and a learning resource for scholars. In addition, we create an online open-source repository both for interested scholars who want to enter this rapidly developing domain and for experts who would like to compare GRL methods.
Computational fluid dynamics (CFD) is a valuable asset for patient-specific cardiovascular-disease diagnosis and prognosis, but its high computational demands hamper its adoption in practice. Machine-learning methods that estimate blood flow in individual patients could accelerate or replace CFD simulation to overcome these limitations. In this work, we consider the estimation of vector-valued quantities on the wall of three-dimensional geometric artery models. We employ group-equivariant graph convolution in an end-to-end SE(3)-equivariant neural network that operates directly on triangular surface meshes and makes efficient use of training data. We run experiments on a large dataset of synthetic coronary arteries and find that our method estimates directional wall shear stress (WSS) with an approximation error of 7.6% and a normalised mean absolute error (NMAE) of 0.4% while being up to two orders of magnitude faster than CFD. Furthermore, we show that our method is powerful enough to accurately predict transient, vector-valued WSS over the cardiac cycle while conditioned on a range of different inflow boundary conditions. These results demonstrate the potential of our proposed method as a plugin replacement for CFD in the personalised prediction of hemodynamic vector and scalar fields.
Given a partial differential equation (PDE), goal-oriented error estimation allows us to understand how errors in a diagnostic quantity of interest (QoI), or goal, occur and accumulate in a numerical approximation, for example one obtained using the finite element method. By decomposing the error estimate into contributions from individual elements, it is possible to formulate adaptation methods which modify the mesh with the objective of minimizing the resulting QoI error. However, the standard error estimate formulation involves the true adjoint solution, which is unknown in practice. It is therefore common practice to approximate it with an "enriched" approximation (e.g., in a higher-order space or on a refined mesh). Doing so generally results in a significant increase in computational cost, which can be a bottleneck compromising the competitiveness of (goal-oriented) adaptive simulations. The central idea of this paper is to develop a "data-driven" goal-oriented mesh adaptation approach through the selective replacement of the expensive error estimation step with an appropriately configured and trained neural network. In this way, the error estimator may even be obtained without needing to construct the enriched spaces. An element-by-element construction is employed here, whereby local values of various parameters related to the mesh geometry and underlying problem physics are taken as inputs, and the corresponding contribution to the error estimator is taken as output. We demonstrate that this approach is able to obtain the same accuracy at a reduced computational cost, for adaptive mesh test cases related to flow around tidal turbines, which interact via their downstream wakes, with the overall power output of the farm as the QoI. Moreover, we demonstrate that the element-by-element approach implies reasonably low training costs.
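A hedged sketch of this element-by-element construction (feature names, sizes, and the untrained network are our assumptions): a small MLP maps local features of each element to its contribution to the error estimator, and the contributions are summed.

```python
# Per-element neural error estimator: local features in, error contribution out.
import torch

features_per_element = 8            # e.g., element size, orientation, local flow values
mlp = torch.nn.Sequential(
    torch.nn.Linear(features_per_element, 32),
    torch.nn.ReLU(),
    torch.nn.Linear(32, 1),
)

local_features = torch.randn(200, features_per_element)   # 200 toy elements
contributions = mlp(local_features).squeeze(-1)           # per-element error indicator
qoi_error_estimate = contributions.sum()                  # drives mesh adaptation
print(float(qoi_error_estimate))
```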
This digital book contains a practical and comprehensive introduction to everything related to deep learning in the context of physical simulations. As much as possible, all topics come with hands-on code examples in the form of Jupyter notebooks for a quick start. Beyond standard supervised learning from data, we will look at physical loss constraints, more tightly coupled learning algorithms with differentiable simulations, as well as reinforcement learning and uncertainty modeling. We live in exciting times: these methods have enormous potential to fundamentally change what computer simulations can achieve.
Combinatorial optimization problems (COPs) on graphs are fundamental challenges in optimization. Reinforcement learning (RL) has recently emerged as a new framework to solve these problems and has demonstrated promising results. However, most RL solutions employ a greedy manner to incrementally construct the solution, thus inevitably imposing unnecessary dependency on action sequences and requiring many problem-specific designs. We propose a general RL framework that not only exhibits state-of-the-art empirical performance but also generalizes to a wide variety of COPs. Specifically, we define the state as a solution to a problem instance and the action as a perturbation of this solution. We utilize graph neural networks (GNNs) to extract latent representations for a given problem instance, and then apply deep Q-learning to obtain a policy that gradually refines the solution by flipping or swapping vertex labels. Experiments are conducted on the Max-$k$-Cut and traveling salesman problems, and performance improvements are achieved over a range of learning-based heuristic baselines.
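A runnable toy of this framing (illustrative, not the authors' code): the state is a complete Max-$k$-Cut labeling, an action flips one vertex's label, and the learned Q-function that would rank flips is replaced here by the exact one-step improvement.

```python
# Local search over flip actions on a full Max-k-Cut solution (state = labeling).
import numpy as np

rng = np.random.default_rng(2)
n, k = 8, 2
w = np.triu(rng.random((n, n)), 1)
w += w.T                                          # weighted toy graph
labels = rng.integers(0, k, n)                    # state: a complete solution

def cut_value(lab):
    # Sum of edge weights crossing between differently labeled vertices.
    return w[lab[:, None] != lab[None, :]].sum() / 2

for _ in range(20):                               # refine via flip actions
    best = max(((v, c) for v in range(n) for c in range(k) if c != labels[v]),
               key=lambda vc: cut_value(np.where(np.arange(n) == vc[0], vc[1], labels)))
    new = np.where(np.arange(n) == best[0], best[1], labels)
    if cut_value(new) <= cut_value(labels):
        break                                     # no improving perturbation left
    labels = new
print("cut:", round(cut_value(labels), 3))
```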
Reinforcement learning (RL), and more recently deep reinforcement learning, is a popular approach for solving sequential decision-making problems modeled as Markov decision processes. Modeling a problem in RL and selecting algorithms and hyperparameters require careful consideration, as different configurations can yield completely different performance. These considerations are mainly the task of RL experts; however, RL is progressively becoming popular in other fields whose researchers and system designers are not RL experts. Moreover, many modeling decisions, such as defining the state and action spaces, the batch size and frequency of batch updates, and the number of timesteps, are typically made manually. For these reasons, automating the different components of the RL framework is of great importance, and it has attracted much attention in recent years. Automated RL provides a framework in which the different components of RL, including MDP modeling, algorithm selection, and hyperparameter optimization, are modeled and defined automatically. In this article, we explore the literature and present recent work that can be used in automated RL. Moreover, we discuss the challenges, open questions, and research directions in AutoRL.
The quality of mesh generation has long been considered a vital aspect of providing engineers with reliable simulation results throughout the history of the finite element method (FEM). The element-extraction method, currently the most robust approach, is used in commercial software. However, to accelerate extraction, the method proceeds by finding the next element that optimizes an objective function, which can result in poor local mesh quality after many time steps. We present TreeMesh, a method that combines this element-extraction approach with reinforcement learning (and possibly supervised learning) and a novel Monte-Carlo tree search (MCTS) (Coulom (2006), Kocsis and Szepesvári (2006), Browne et al. (2012)). The algorithm builds on a previously proposed approach (Pan et al. (2021)). After numerous improvements to the DRL (algorithm, state-action-reward settings) and the addition of MCTS, it outperforms the former work on the same boundary. Furthermore, using tree search, our program shows a considerable advantage on boundaries with varying seed density on thin-film materials.
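At the heart of MCTS is the UCT selection rule, which descends into the child maximizing exploitation plus an exploration bonus. A minimal runnable form with toy statistics:

```python
# UCT child selection: value estimate plus exploration bonus.
import math

def uct(child_value, child_visits, parent_visits, c=1.4):
    if child_visits == 0:
        return float("inf")                       # visit unexplored children first
    return child_value / child_visits + c * math.sqrt(
        math.log(parent_visits) / child_visits)

stats = [(3.0, 5), (1.0, 1), (0.0, 0)]            # (total value, visits) per child
parent_visits = sum(v for _, v in stats)
best = max(range(len(stats)), key=lambda i: uct(*stats[i], parent_visits))
print("descend into child", best)
```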
Over the past years, supervised learning (SL) has established itself as the state of the art for data-driven turbulence modeling. In the SL paradigm, models are trained on a dataset, which is typically computed a priori from a high-fidelity solution by applying the respective filter function, which separates the resolved from the unresolved flow scales. For implicitly filtered large eddy simulation (LES), this approach is infeasible, since here the employed discretization itself acts as an implicit filter function. As a consequence, the exact filter form is generally unknown, and thus the corresponding closure terms cannot be computed even if the full solution is available. The reinforcement learning (RL) paradigm can be used to avoid this inconsistency by training not on a previously obtained dataset, but instead by interacting directly with the dynamical LES environment itself. This allows the potentially complex implicit LES filter to be incorporated into the training process by design. In this work, we apply a reinforcement learning framework to find an optimal eddy viscosity for implicitly filtered large eddy simulations of forced homogeneous isotropic turbulence. To this end, we formulate the task of turbulence modeling as an RL task with a policy network based on convolutional neural networks that dynamically adapts the eddy viscosity in the LES in space and time, based only on the local flow state. We demonstrate that the trained models can provide long-term stable simulations and that they outperform established analytical models in terms of accuracy. Moreover, the models generalize well to other resolutions and discretizations. We thus demonstrate that RL can provide a framework for consistent, accurate, and stable turbulence modeling, especially for implicitly filtered LES.
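An illustrative sketch of such a policy (our assumption, not the authors' model, which operates on 3D turbulence): a small convolutional network maps the local flow state to a non-negative, spatially varying eddy-viscosity field.

```python
# Convolutional policy producing a per-cell eddy viscosity from the flow state.
import torch

policy = torch.nn.Sequential(
    torch.nn.Conv2d(3, 16, kernel_size=3, padding=1),  # 3 toy channels: u, v, vorticity
    torch.nn.ReLU(),
    torch.nn.Conv2d(16, 1, kernel_size=3, padding=1),
    torch.nn.Softplus(),                               # eddy viscosity must stay >= 0
)

flow_state = torch.randn(1, 3, 32, 32)     # toy LES snapshot on a 32x32 grid
nu_t = policy(flow_state)                  # spatially varying eddy viscosity
print(nu_t.shape, float(nu_t.min()))
```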
Deep reinforcement learning (DRL) has empowered a variety of artificial intelligence fields, including pattern recognition, robotics, recommendation systems, and gaming. Likewise, graph neural networks (GNNs) have demonstrated superior performance in supervised learning for graph-structured data. In recent times, the fusion of GNNs with DRL for graph-structured environments has attracted a lot of attention. This paper provides a comprehensive review of these hybrid works. These works can be classified into two categories: (1) algorithmic enhancement, where DRL and GNNs complement each other for better utility; and (2) application-specific enhancement, where DRL and GNNs support each other. This fusion effectively addresses various complex problems in engineering and the life sciences. Based on the review, we further analyze the applicability and benefits of fusing these two domains, especially in terms of increasing generalizability and reducing computational complexity. Finally, we highlight the key challenges in integrating DRL and GNNs, along with potential future research directions, which will be of interest to the wider machine learning community.
Modern neuroimaging techniques, such as diffusion tensor imaging (DTI) and functional magnetic resonance imaging (fMRI), enable us to model the human brain as a brain network or connectome. Capturing the structural information and hierarchical patterns of brain networks is essential for understanding brain functions and disease states. Recently, the promising network representation capability of graph neural networks (GNNs) has prompted many GNN-based methods for brain network analysis. Specifically, these methods apply feature aggregation and global pooling to convert brain network instances into meaningful low-dimensional representations for downstream brain network analysis tasks. However, existing GNN-based methods often neglect that brain networks of different subjects may require different numbers of aggregation iterations, and instead use GNNs with a fixed number of layers to learn all brain networks. Therefore, how to fully release the potential of GNNs for brain network analysis remains non-trivial. To address this problem, we propose a novel brain network representation framework, BN-GNN, which searches for the optimal GNN architecture for each brain network. Concretely, BN-GNN employs deep reinforcement learning (DRL) to train a meta-policy that automatically determines the optimal number of feature aggregations (reflected in the number of GNN layers) required for a given brain network. Extensive experiments on eight real-world brain network datasets demonstrate that our proposed BN-GNN improves the performance of traditional GNNs on different brain network analysis tasks.
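A toy sketch of the central mechanism (our illustration, not BN-GNN's code): a meta-policy reads a graph-level summary and selects how many aggregation layers to apply, giving each brain network its own GNN depth.

```python
# Meta-policy choosing a per-graph number of aggregation layers.
import torch

max_layers = 4
meta_policy = torch.nn.Linear(8, max_layers)       # summary features -> depth logits
shared_layer = torch.nn.Linear(16, 16)

def encode(x, adj, summary):
    depth = int(meta_policy(summary).argmax()) + 1  # action: number of layers (1..4)
    for _ in range(depth):
        x = torch.relu(shared_layer(adj @ x))       # one aggregation step
    return x.mean(dim=0), depth                     # global pooling for downstream tasks

x = torch.randn(10, 16)                             # toy brain network: 10 regions
adj = torch.eye(10)                                 # stand-in adjacency
emb, depth = encode(x, adj, torch.randn(8))
print(depth, emb.shape)
```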
Deep reinforcement learning (DRL) is employed to develop autonomously optimized and custom-designed heat-treatment processes that are both microstructure-sensitive and energy-efficient. Unlike conventional supervised machine learning, DRL does not rely solely on static neural network training from data; instead, a learning agent autonomously develops optimal solutions based on reward and penalty elements, with reduced or no supervision. In our approach, a temperature-dependent Allen-Cahn model for phase transformation is used as the environment for the DRL agent, serving as the model world in which it gains experience and takes autonomous decisions. The agent of the DRL algorithm controls the temperature of the system, acting as a model furnace for the heat treatment of alloys. Microstructure goals are defined for the agent based on the desired phase microstructure. After training, the agent can generate temperature-time profiles for a variety of initial microstructure states to reach the final desired microstructure state. The agent's performance and the physical meaning of the heat-treatment profiles are investigated in detail. In particular, the agent is able to control the temperature to reach the desired microstructure starting from a variety of initial conditions. This capability of the agent to handle a variety of conditions paves the way for using this approach also for recycling-oriented heat-treatment process design, for which the initial composition can vary from batch to batch owing to the intrusion of impurities, and for designing energy-efficient heat treatments. To test this hypothesis, an agent without a penalty on energy cost is compared with one that considers energy costs. The penalty on energy cost is an additional criterion for the agent when finding the optimal temperature-time profile.
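A heavily simplified stand-in for this control setting (our toy, far simpler than the paper's Allen-Cahn environment): the agent picks a furnace temperature each step, a crude relaxation law moves the phase fraction toward a temperature-dependent equilibrium, and the reward trades target matching against an energy penalty.

```python
# Toy temperature-control environment with an energy-cost penalty in the reward.
import numpy as np

target_phase, energy_weight = 0.7, 0.01

def step(phase, temperature):
    equilibrium = 1.0 / (1.0 + np.exp((temperature - 900.0) / 50.0))  # toy law
    phase += 0.1 * (equilibrium - phase)                # relax toward equilibrium
    reward = -abs(phase - target_phase) - energy_weight * temperature / 1000.0
    return phase, reward

phase = 0.2
for t in range(10):
    temperature = 850.0                                 # stand-in for the policy's action
    phase, reward = step(phase, temperature)
print(round(phase, 3), round(reward, 3))
```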
The intractability of fluids in the physical world makes it necessary to accurately simulate their dynamics for many scientific and engineering applications. Traditionally, well-established but resource-intensive CFD solvers provide such simulations. Recent years have seen deep-learning surrogate models that substitute for these solvers to alleviate the simulation process. Some approaches to building data-driven surrogates mimic the solver's iterative process: they infer the next state of the fluid from its previous one. Others infer the state directly from the time input. Approaches also differ in their management of spatial information. Graph neural networks (GNNs) can address the specificity of the irregular meshes commonly used in CFD simulations. In this paper, we present our ongoing work to design a novel direct-time GNN architecture for irregular meshes. It consists of a succession of graphs of increasing size connected by spline convolutions. We test our architecture on the von Kármán vortex street benchmark. It achieves small generalization errors while mitigating error accumulation along trajectories.
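A hedged usage sketch of spline-based graph convolution as provided by PyTorch Geometric's SplineConv (the toy graph and feature sizes are our assumptions; the paper chains such layers across graphs of increasing size):

```python
# SplineConv: B-spline-weighted aggregation over graph edges with pseudo-coordinates.
import torch
from torch_geometric.nn import SplineConv

x = torch.randn(4, 3)                               # 4 mesh nodes, 3 features each
edge_index = torch.tensor([[0, 1, 2, 3],
                           [1, 2, 3, 0]])           # toy connectivity
edge_attr = torch.rand(edge_index.size(1), 2)       # 2D pseudo-coordinates in [0, 1]

conv = SplineConv(in_channels=3, out_channels=8, dim=2, kernel_size=5)
out = conv(x, edge_index, edge_attr)
print(out.shape)                                    # torch.Size([4, 8])
```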
The proliferation of unmanned aircraft systems (UAS) has caused airspace regulation authorities to examine the interoperability of these aircraft with collision avoidance systems initially designed for large transport category aircraft. Limitations in the currently mandated TCAS led the Federal Aviation Administration to commission the development of a new solution, the Airborne Collision Avoidance System X (ACAS X), designed to enable a collision avoidance capability for multiple aircraft platforms, including UAS. While prior research explored using deep reinforcement learning (DRL) algorithms for collision avoidance, DRL did not perform as well as existing solutions. This work explores the benefits of using a DRL collision avoidance system whose parameters are tuned using a surrogate optimizer. We show that the use of a surrogate optimizer leads to a DRL approach that can increase safety and operational viability and support future capability development for UAS collision avoidance.
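For illustration, a minimal sketch of surrogate-based tuning (our toy, not the ACAS X pipeline): evaluated (hyperparameter, score) pairs are fitted with a cheap polynomial surrogate whose maximizer proposes the next candidate, sparing a full DRL training run per trial.

```python
# Polynomial surrogate over past hyperparameter evaluations proposes the next trial.
import numpy as np

params = np.array([0.1, 0.3, 0.5, 0.7, 0.9])       # e.g., candidate learning rates
scores = np.array([0.2, 0.55, 0.8, 0.7, 0.4])      # toy training outcomes

coeffs = np.polyfit(params, scores, deg=2)         # quadratic surrogate of the score
grid = np.linspace(0.05, 0.95, 181)
next_candidate = grid[np.argmax(np.polyval(coeffs, grid))]
print(round(float(next_candidate), 3))
```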