We consider an inverse problem for partial differential equations (PDEs) in which the parameters of the dependency structure can exhibit random change points over time. Such a situation can arise, for example, when the physical system is either under malicious attack (e.g., hacker attacks on power grids and internet networks) or subject to extreme external conditions (e.g., weather conditions impacting power grids, or large market moves affecting the valuation of derivative contracts). To this end, we employ physics-informed neural networks (PINNs), universal approximators that can incorporate any physical law described by a system of PDEs. This prior knowledge acts during the training of the neural network as a regularization that restricts the space of admissible solutions and improves the correctness of the function approximation. We show that when the true data-generating process exhibits change points in the PDE dynamics, this regularization leads to a complete miscalibration and failure of the model. We therefore propose to extend PINNs with a total-variation penalty that accommodates (multiple) change points in the PDE dynamics. These change points can occur at random locations over time and are estimated jointly with the solution. We propose an additional refinement algorithm that combines change-point detection with a reduced dynamic programming method for computing the strengthened PINN approach, and we demonstrate the benefits of the proposed model on a series of examples using different equations with changing parameters. In the absence of change points in the data, the proposed model reduces to the original PINN model. In the presence of change points, it leads to improvements in parameter estimation, better model fitting, and lower training error compared with the original PINN model.
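To make the change-point idea concrete, the following is a minimal sketch (not the authors' code) of how a total-variation penalty on a piecewise-constant, time-dependent PDE coefficient can be added to a standard PINN inverse-problem loss. The architecture, the toy observations, and the penalty weight lambda_tv are illustrative assumptions.

```python
# Illustrative sketch: inverse problem for u_t = kappa(t) * u_xx, where the
# unknown diffusivity kappa(t) is a piecewise-constant vector over a time grid
# and a total-variation penalty favours a small number of change points.
import torch

class Net(torch.nn.Module):
    def __init__(self, width=32):
        super().__init__()
        self.body = torch.nn.Sequential(
            torch.nn.Linear(2, width), torch.nn.Tanh(),
            torch.nn.Linear(width, width), torch.nn.Tanh(),
            torch.nn.Linear(width, 1))
    def forward(self, x, t):
        return self.body(torch.cat([x, t], dim=1))

net = Net()
n_bins = 20                                      # time grid for kappa(t)
kappa = torch.nn.Parameter(torch.ones(n_bins))   # unknown, estimated jointly
opt = torch.optim.Adam(list(net.parameters()) + [kappa], lr=1e-3)
lambda_tv = 1e-2                                 # assumed penalty weight

def pde_residual(x, t):
    x.requires_grad_(True); t.requires_grad_(True)
    u = net(x, t)
    u_t = torch.autograd.grad(u, t, torch.ones_like(u), create_graph=True)[0]
    u_x = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x, x, torch.ones_like(u_x), create_graph=True)[0]
    k = kappa[(t.detach().squeeze(1) * n_bins).long().clamp(0, n_bins - 1)]
    return u_t - k.unsqueeze(1) * u_xx

# toy observations (in a real inverse problem these come from measurements)
x_obs = torch.rand(200, 1); t_obs = torch.rand(200, 1)
u_obs = torch.sin(torch.pi * x_obs) * torch.exp(-t_obs)

for step in range(2000):
    opt.zero_grad()
    x_c = torch.rand(500, 1); t_c = torch.rand(500, 1)    # collocation points
    loss_pde = pde_residual(x_c, t_c).pow(2).mean()
    loss_data = (net(x_obs, t_obs) - u_obs).pow(2).mean()
    loss_tv = (kappa[1:] - kappa[:-1]).abs().sum()        # total-variation penalty
    loss = loss_data + loss_pde + lambda_tv * loss_tv
    loss.backward(); opt.step()
```

The L1 penalty on successive differences of kappa favours estimates that are constant between a small number of change points, which is the behaviour the abstract describes.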
Physics-informed neural networks (PINNs) are neural networks (NNs) that encode model equations, such as partial differential equations (PDEs), as a component of the neural network itself. PINNs are nowadays used to solve PDEs, fractional equations, integro-differential equations, and stochastic PDEs. This novel methodology has arisen as a multi-task learning framework in which an NN must fit observed data while reducing a PDE residual. This article provides a comprehensive review of the literature on PINNs: the primary goal of this study is to characterize these networks and their related advantages and disadvantages. The review also attempts to incorporate publications on a broader range of collocation-based physics-informed neural networks, which constitute the vanilla PINN as well as many other variants, such as physics-constrained neural networks (PCNNs), variational hp-VPINNs, and conservative PINNs (CPINNs). The study indicates that most research has focused on customizing the PINN through different activation functions, gradient-optimization techniques, neural network structures, and loss-function structures. Despite the wide range of applications for which PINNs have been used, and although they have demonstrated the ability to be more feasible in some contexts than classical numerical techniques such as the finite element method (FEM), there are still possible advancements, most notably theoretical issues that remain unresolved.
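As a reference point for the vanilla PINN that this review builds on, here is a hedged minimal sketch of the collocation-based loss for the forward viscous Burgers problem; the architecture and all hyperparameters are illustrative, not taken from any particular paper.

```python
# Vanilla PINN sketch for u_t + u*u_x = nu*u_xx on x in [-1,1], t in [0,1],
# with u(x,0) = -sin(pi x) and u(-1,t) = u(1,t) = 0: the network is trained to
# drive the PDE residual to zero at collocation points while fitting the
# initial and boundary conditions.
import torch

net = torch.nn.Sequential(
    torch.nn.Linear(2, 40), torch.nn.Tanh(),
    torch.nn.Linear(40, 40), torch.nn.Tanh(),
    torch.nn.Linear(40, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
nu = 0.01 / torch.pi

def residual(x, t):
    x.requires_grad_(True); t.requires_grad_(True)
    u = net(torch.cat([x, t], dim=1))
    u_t = torch.autograd.grad(u, t, torch.ones_like(u), create_graph=True)[0]
    u_x = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x, x, torch.ones_like(u_x), create_graph=True)[0]
    return u_t + u * u_x - nu * u_xx

for step in range(5000):
    opt.zero_grad()
    x_c = 2 * torch.rand(2000, 1) - 1             # interior collocation points
    t_c = torch.rand(2000, 1)
    loss_pde = residual(x_c, t_c).pow(2).mean()
    x0 = 2 * torch.rand(200, 1) - 1               # initial condition u(x,0) = -sin(pi x)
    loss_ic = (net(torch.cat([x0, torch.zeros_like(x0)], 1))
               + torch.sin(torch.pi * x0)).pow(2).mean()
    tb = torch.rand(200, 1)                       # boundary condition u(+/-1, t) = 0
    xb = torch.where(torch.rand(200, 1) < 0.5,
                     -torch.ones(200, 1), torch.ones(200, 1))
    loss_bc = net(torch.cat([xb, tb], 1)).pow(2).mean()
    loss = loss_pde + loss_ic + loss_bc
    loss.backward(); opt.step()
```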
In this paper, a new discontinuity-capturing shallow neural network (DCSNN) for approximating $d$-dimensional piecewise continuous functions and for solving elliptic interface problems is developed. There are three novel features in the present network: namely, (i) jump discontinuities are accurately captured, (ii) it is completely shallow, consisting of only one hidden layer, and (iii) it is completely mesh-free for solving partial differential equations. The crucial idea here is that a $d$-dimensional piecewise continuous function can be extended to a continuous function defined in $(d+1)$-dimensional space, where the augmented coordinate variable labels the pieces of each sub-domain. We then construct a shallow neural network to express this new function. Since only one hidden layer is employed, the number of training parameters (weights and biases) scales linearly with the dimension and the number of neurons used in the hidden layer. For solving elliptic interface problems, the network is trained by minimizing a mean-squared-error loss consisting of the governing equation, boundary conditions, and interface jump conditions. We perform a series of numerical tests to demonstrate the accuracy of the present network. Our DCSNN model is efficient since only a moderate number of parameters needs to be trained (a few hundred parameters are used throughout all numerical examples), and the results indicate good accuracy. Compared with the results obtained by the traditional grid-based immersed interface method (IIM), which is designed particularly for elliptic interface problems, our network model shows better accuracy than the IIM. We conclude by solving a six-dimensional problem to demonstrate the capability of the present network in high-dimensional applications.
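A hedged toy sketch of the augmented-coordinate idea (not the authors' implementation): a one-dimensional function with a jump at x = 0 is learned by a single-hidden-layer network that receives the sub-domain label as an extra input, so the network itself only ever represents a continuous function.

```python
# Piecewise target with a jump at x = 0, represented as a continuous function
# of the augmented input (x, z), where z = +1 or -1 labels the sub-domain.
import torch

shallow = torch.nn.Sequential(
    torch.nn.Linear(2, 50), torch.nn.Tanh(),     # single hidden layer
    torch.nn.Linear(50, 1))
opt = torch.optim.Adam(shallow.parameters(), lr=1e-2)

def target(x):                                   # jump discontinuity at x = 0
    return torch.where(x < 0, torch.sin(x), torch.cos(x) + 2.0)

x = torch.linspace(-1, 1, 400).unsqueeze(1)
z = torch.where(x < 0, -torch.ones_like(x), torch.ones_like(x))  # sub-domain label
y = target(x)

for step in range(3000):
    opt.zero_grad()
    loss = (shallow(torch.cat([x, z], dim=1)) - y).pow(2).mean()
    loss.backward(); opt.step()
# At inference, a query point is evaluated with the label of whichever side of
# the interface it lies on, so the jump at x = 0 is captured sharply.
```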
Standard neural networks can approximate general nonlinear operators, represented either explicitly by a combination of mathematical operators (e.g., in an advection-diffusion-reaction partial differential equation) or simply as a black box, e.g., a system-of-systems. The first neural operator was the deep operator network (DeepONet), proposed in 2019 based on rigorous approximation theory. Since then, a few other, less general, operators have been published, e.g., based on graph neural networks or Fourier transforms. For black-box systems, the training of neural operators is data-driven only, but if the governing equations are known they can be incorporated into the loss function during training to develop physics-informed neural operators. Neural operators can be used as surrogates in design problems, uncertainty quantification, autonomous systems, and almost any application requiring real-time inference. Moreover, independently pre-trained DeepONets can be used as components of a complex multi-physics system by coupling them together with relatively light training. Here, we present a review of DeepONet, the Fourier neural operator, and the graph neural operator, as well as appropriate extensions with feature expansions, and highlight their usefulness in diverse applications in computational mechanics, including porous media, fluid mechanics, and solid mechanics.
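For orientation, the following is a schematic DeepONet in the branch/trunk form described above; the sensor count, layer widths, and placeholder data are assumptions, not the reference implementation.

```python
# Schematic DeepONet: the branch net encodes the input function sampled at m
# fixed sensor points, the trunk net encodes the query coordinate, and the
# operator output is the inner product of the two embeddings plus a bias.
import torch

m, p = 100, 64            # number of sensors, embedding width (assumed values)

class DeepONet(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.branch = torch.nn.Sequential(
            torch.nn.Linear(m, 128), torch.nn.Tanh(), torch.nn.Linear(128, p))
        self.trunk = torch.nn.Sequential(
            torch.nn.Linear(1, 128), torch.nn.Tanh(), torch.nn.Linear(128, p))
        self.bias = torch.nn.Parameter(torch.zeros(1))
    def forward(self, u_sensors, y):
        # u_sensors: (batch, m) samples of the input function at the sensors
        # y:         (batch, 1) coordinates at which the output is evaluated
        b = self.branch(u_sensors)
        t = self.trunk(y)
        return (b * t).sum(dim=1, keepdim=True) + self.bias

model = DeepONet()
u = torch.rand(32, m)      # a batch of input functions (placeholder data)
y = torch.rand(32, 1)      # query locations
G_u_y = model(u, y)        # predicted operator output G(u)(y)
```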
Recent years have witnessed a growth in mathematics for deep learning--which seeks a deeper understanding of the concepts of deep learning with mathematics, and explores how to make it more robust--and deep learning for mathematics, where deep learning algorithms are used to solve problems in mathematics. The latter has popularised the field of scientific machine learning where deep learning is applied to problems in scientific computing. Specifically, more and more neural network architectures have been developed to solve specific classes of partial differential equations (PDEs). Such methods exploit properties that are inherent to PDEs and thus solve the PDEs better than classical feed-forward neural networks, recurrent neural networks, and convolutional neural networks. This has had a great impact in the area of mathematical modeling where parametric PDEs are widely used to model most natural and physical processes arising in science and engineering. In this work, we review such methods and extend them for parametric studies as well as for solving the related inverse problems. We equally proceed to show their relevance in some industrial applications.
As a typical application of deep learning, physics-informed neural networks (PINNs) have been successfully used to find numerical solutions of partial differential equations (PDEs), but how to improve their limited accuracy is still a great challenge for PINNs. In this work, we introduce a new method, symmetry-enhanced physics-informed neural networks (SPINNs), where the invariant surface conditions induced by the Lie symmetries of PDEs are embedded into the loss function of PINNs in order to improve their accuracy. We test the effectiveness of SPINNs via two groups of ten independent numerical experiments for the heat equation, the Korteweg-de Vries (KdV) equation, and the potential Burgers equation, respectively, which show that SPINNs perform better than PINNs with fewer training points and a simpler architecture of the neural network. Furthermore, we discuss the computational overhead of SPINNs in terms of their relative computational cost with respect to PINNs, and show that the training time of SPINNs does not increase noticeably, and in some cases is even less than that of PINNs.
In this paper, we develop a physics-informed neural network (PINN) model for parabolic problems with a sharply perturbed initial condition. As an example of a parabolic problem, we consider the advection-dispersion equation (ADE) with a point (Gaussian) source initial condition. In the $d$-dimensional ADE, the perturbation in the initial condition decays with time $t$ as $t^{-d/2}$, which can cause a large approximation error in the PINN solution. Localized large gradients in the ADE solution make the (common in PINNs) Latin hypercube sampling of the equation's residual inefficient. Finally, the PINN solution of parabolic equations is sensitive to the choice of weights in the loss function. We propose a normalized form of the ADE in which the initial perturbation of the solution does not decrease in amplitude, and demonstrate that this normalization significantly reduces the PINN approximation error. We propose a criterion for the weights in the loss function that produces a more accurate solution than weights chosen by other methods. Finally, we propose an adaptive sampling scheme that significantly reduces the PINN solution error for the same number of sampling (residual) points. We demonstrate the accuracy of the proposed PINN model for the forward, inverse, and backward ADE.
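The adaptive sampling idea can be illustrated with a generic residual-based resampling step (one common variant; the specific scheme proposed in the paper may differ in detail): after some training, the PDE residual is evaluated on a large pool of candidate points and the points with the largest residual are added to the collocation set.

```python
# Generic residual-based adaptive sampling sketch.
import torch

def adaptive_resample(residual_fn, current_pts, n_candidates=10000, n_add=200):
    """residual_fn maps an (N, d) tensor of points to an (N, 1) residual
    (assumed signature); current_pts is the (M, d) collocation set."""
    cand = torch.rand(n_candidates, current_pts.shape[1])  # uniform candidate pool
    r = residual_fn(cand).abs().squeeze(1)                 # residual magnitude
    worst = torch.topk(r, n_add).indices                   # largest-residual points
    return torch.cat([current_pts, cand[worst].detach()], dim=0)
```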
In the context of science, the well-known adage "a picture is worth a thousand words" might well be "a model is worth a thousand datasets." In this manuscript we introduce the SciML software ecosystem as a tool for mixing the information of physical laws and scientific models with data-driven machine learning approaches. We describe a mathematical object, which we denote universal differential equations (UDEs), as a unifying framework connecting the ecosystem. We show how a wide variety of applications, from automatically discovering biological mechanisms to solving high-dimensional Hamilton-Jacobi-Bellman equations, can be phrased and efficiently handled through the UDE formalism and its tooling. We demonstrate the generality of the software tools to handle stochasticity, delays, and implicit constraints. This funnels the wide variety of SciML applications into a core set of training mechanisms that are highly optimized, stabilized for stiff equations, and compatible with distributed parallelism and GPU accelerators.
Partial differential equations (PDEs) fitted to scientific data can represent physical laws with interpretable mechanisms for various mathematically oriented subjects. Data-driven discovery of PDEs from scientific data has flourished as a new attempt to model complex phenomena in nature, but the effectiveness of current practice is typically limited by the scarcity of data and the complexity of the phenomena. In particular, the discovery of PDEs with highly nonlinear coefficients from low-quality data remains largely under-addressed. To deal with this challenge, we propose a novel physics-guided learning method, which can encode not only observational knowledge such as initial and boundary conditions but also the fundamental physical principles and laws to guide the model optimization. We empirically demonstrate that the proposed method is more robust against data noise and sparsity, and can reduce the estimation error by a large margin; moreover, for the first time we are able to discover PDEs with highly nonlinear coefficients. With its promising performance, the proposed method pushes forward the boundary of the PDEs that can be found by machine learning models for scientific discovery.
Deep learning has achieved remarkable success in diverse applications; however, its use in solving partial differential equations (PDEs) has emerged only recently. Here, we present an overview of physics-informed neural networks (PINNs), which embed a PDE into the loss of the neural network using automatic differentiation. The PINN algorithm is simple, and it can be applied to different types of PDEs, including integro-differential equations, fractional PDEs, and stochastic PDEs. Moreover, from the implementation point of view, PINNs solve inverse problems as easily as forward problems. We propose a new residual-based adaptive refinement (RAR) method to improve the training efficiency of PINNs. For pedagogical reasons, we compare the PINN algorithm to a standard finite element method. We also present a Python library for PINNs, DeepXDE, which is designed to serve both as an education tool to be used in the classroom as well as a research tool for solving problems in computational science and engineering. Specifically, DeepXDE can solve forward problems given initial and boundary conditions, as well as inverse problems given some extra measurements. DeepXDE supports complex-geometry domains based on the technique of constructive solid geometry, and enables the user code to be compact, resembling closely the mathematical formulation. We introduce the usage of DeepXDE and its customizability, and we also demonstrate the capability of PINNs and the user-friendliness of DeepXDE for five different examples. More broadly, DeepXDE contributes to the more rapid development of the emerging Scientific Machine Learning field.
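As an illustration of the compactness the abstract refers to, a typical DeepXDE forward-problem script looks roughly like the sketch below; module paths such as dde.icbc and dde.nn and the iterations keyword follow recent 1.x releases and may differ in older versions. It solves u_t = u_xx on a space-time domain with homogeneous Dirichlet boundaries and a sine initial condition.

```python
# Hedged DeepXDE usage sketch for a simple diffusion-type forward problem.
import numpy as np
import deepxde as dde

def pde(x, u):                        # x = (x, t); residual of u_t - u_xx = 0
    du_t = dde.grad.jacobian(u, x, i=0, j=1)
    du_xx = dde.grad.hessian(u, x, i=0, j=0)
    return du_t - du_xx

geom = dde.geometry.Interval(-1, 1)
timedomain = dde.geometry.TimeDomain(0, 1)
geomtime = dde.geometry.GeometryXTime(geom, timedomain)

bc = dde.icbc.DirichletBC(geomtime, lambda x: 0, lambda _, on_boundary: on_boundary)
ic = dde.icbc.IC(geomtime, lambda x: np.sin(np.pi * x[:, 0:1]),
                 lambda _, on_initial: on_initial)

data = dde.data.TimePDE(geomtime, pde, [bc, ic],
                        num_domain=2500, num_boundary=100, num_initial=160)
net = dde.nn.FNN([2] + [32] * 3 + [1], "tanh", "Glorot normal")
model = dde.Model(data, net)
model.compile("adam", lr=1e-3)
model.train(iterations=10000)
```

The user code indeed stays close to the mathematical formulation: the geometry, the residual, and the initial/boundary conditions are declared separately and combined by the library.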
In this paper, a shallow Ritz-type neural network for solving elliptic equations with delta-function singular sources is developed. There are three novel features in the present work: namely, (i) the delta-function singularity is naturally removed, (ii) the level-set function is introduced as a feature input, and (iii) it is completely shallow, consisting of only one hidden layer. We first introduce the energy functional of the problem and then transform the contribution of the singular source into a regular surface integral along the interface. In this way, the delta function can be naturally removed without introducing the discrete approximations commonly used in traditional regularization methods, such as the well-known immersed boundary method. The original problem is then reformulated as a minimization problem. We propose a shallow Ritz-type neural network with one hidden layer to approximate the global minimizer of the energy functional. As a result, the network is trained by minimizing a loss function that is a discrete version of the energy. In addition, we include the level-set function of the interface as a feature input of the network and find that it significantly improves the training efficiency and accuracy. We perform a series of numerical tests to show the accuracy of the present method and its capability for problems in irregular domains and higher dimensions.
Deep operator network (DeepONet) has demonstrated great success in various learning tasks, including learning solution operators of partial differential equations. In particular, it provides an efficient approach to predict the evolution equations in a finite time horizon. Nevertheless, the vanilla DeepONet suffers from the issue of stability degradation in the long-time prediction. This paper proposes a transfer-learning aided DeepONet to enhance the stability. Our idea is to use transfer learning to sequentially update the DeepONets as the surrogates for propagators learned in different time frames. The evolving DeepONets can better track the varying complexities of the evolution equations, while only needing to be updated by efficient training of a tiny fraction of the operator networks. Through systematic experiments, we show that the proposed method not only improves the long-time accuracy of DeepONet while maintaining similar computational cost but also substantially reduces the sample size of the training set.
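A hedged sketch of the transfer-learning step, reusing the schematic DeepONet sketched after the operator-network review above; which sub-network is re-trained for each new time window is an illustrative choice here, not necessarily the paper's.

```python
# The propagator trained on an early time window is copied, most parameters
# are frozen, and only a small final layer is fine-tuned on data from the
# next window (illustrative assumption about which layer is updated).
import copy
import torch

def adapt_to_next_window(trained_deeponet, new_inputs, new_targets, steps=500):
    model = copy.deepcopy(trained_deeponet)
    for p in model.parameters():
        p.requires_grad_(False)                  # freeze everything ...
    for p in model.branch[-1].parameters():      # ... except the last branch layer
        p.requires_grad_(True)                   # (branch is an nn.Sequential above)
    opt = torch.optim.Adam([p for p in model.parameters() if p.requires_grad], lr=1e-3)
    for _ in range(steps):
        opt.zero_grad()
        u, y = new_inputs                        # (batch, m) sensors, (batch, 1) coords
        loss = (model(u, y) - new_targets).pow(2).mean()
        loss.backward(); opt.step()
    return model
```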
Machine learning methods have recently shown promise in solving partial differential equations (PDEs). They can be classified into two broad categories: approximating the solution function and learning the solution operator. Physics-informed neural networks (PINNs) are an example of the former, while the Fourier neural operator (FNO) is an example of the latter. Both approaches have shortcomings. The optimization in PINNs is challenging and prone to failure, especially on multi-scale dynamical systems. FNO does not suffer from this optimization issue since it performs supervised learning on a given dataset, but obtaining such data may be too expensive or infeasible. In this work, we propose the physics-informed neural operator (PINO), where we combine the operator-learning and function-optimization frameworks. This integrated approach improves the convergence rate and accuracy of both PINN and FNO models. In the operator-learning phase, PINO learns the solution operator over multiple instances of a parametric PDE family. In the test-time optimization phase, PINO optimizes the pre-trained operator ansatz for a queried instance of the PDE. Experiments show that PINO outperforms previous ML methods on many popular PDE families while retaining the extraordinary speed-up of FNO compared to solvers. In particular, PINO accurately solves challenging long-time transient flows and the Kolmogorov flow, where other baseline ML methods fail to converge.
Although physics-informed neural networks (PINNs) have progressed a lot in many real applications recently, there remain problems to be further studied, such as achieving more accurate results, taking less training time, and quantifying the uncertainty of the predicted results. Recent advances in PINNs have indeed significantly improved the performance of PINNs in many aspects, but few have considered the effect of variance in the training process. In this work, we take into consideration the effect of variance and propose our VI-PINNs to give better predictions. We output two values in the final layer of the network to represent the predicted mean and variance respectively, and the latter is used to represent the uncertainty of the output. A modified negative log-likelihood loss and an auxiliary task are introduced for fast and accurate training. We perform several experiments on a wide range of different problems to highlight the advantages of our approach. The results convey that our method not only gives more accurate predictions but also converges faster.
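The two-output construction can be sketched as follows (the paper's modified loss and auxiliary task are not reproduced here): the network predicts a mean and a log-variance, and a Gaussian negative log-likelihood trains the variance channel to express the uncertainty of the prediction.

```python
# Minimal two-output mean/variance sketch with a Gaussian NLL loss.
import torch

net = torch.nn.Sequential(
    torch.nn.Linear(2, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 2))            # outputs: [mean, log-variance]

def nll_loss(inputs, targets):
    out = net(inputs)
    mean, log_var = out[:, 0:1], out[:, 1:2]
    # negative log-likelihood of a Gaussian, up to an additive constant
    return (0.5 * log_var + 0.5 * (targets - mean).pow(2) / log_var.exp()).mean()
```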
Physics-informed neural networks (PINNs) have attracted a lot of attention in various fields of engineering thanks to their ability to incorporate physical laws into the model. However, the assessment of PINNs in industrial applications involving the coupling between mechanical and thermal fields is still an active research topic. In this work, we present an application of PINNs to a non-Newtonian fluid thermo-mechanical problem that is often considered in rubber calendering processes. We demonstrate the effectiveness of PINNs when dealing with inverse and ill-posed problems, which are impractical to solve by classical numerical discretization methods. We study the impact of the placement of the sensors and the distribution of unsupervised points on the performance of PINNs, i.e., the problem of inferring hidden physical fields from some partial data. We also investigate the capability of PINNs to identify unknown physical parameters from the measurements captured by sensors. The effect of noisy measurements is also considered throughout this work. The results of this paper demonstrate that, in the identification problem, PINNs can successfully estimate the unknown parameters using only the measurements on the sensors. In the ill-posed problem where the boundary conditions are not completely defined, even though the placement of the sensors and the distribution of unsupervised points have a great impact on the performance of PINNs, we show that the algorithm is able to infer the hidden physics from local measurements.
Solute transport in porous media is relevant to a wide range of applications in hydrogeology, geothermal energy, underground CO2 storage, and a variety of chemical engineering systems. Due to the complexity of solute transport in heterogeneous porous media, traditional solvers require high resolution meshing and are therefore expensive computationally. This study explores the application of a mesh-free method based on deep learning to accelerate the simulation of solute transport. We employ Physics-informed Neural Networks (PiNN) to solve solute transport problems in homogeneous and heterogeneous porous media governed by the advection-dispersion equation. Unlike traditional neural networks that learn from large training datasets, PiNNs only leverage the strong form mathematical models to simultaneously solve for multiple dependent or independent field variables (e.g., pressure and solute concentration fields). In this study, we construct PiNN using a periodic activation function to better represent the complex physical signals (i.e., pressure) and their derivatives (i.e., velocity). Several case studies are designed with the intention of investigating the proposed PiNN's capability to handle different degrees of complexity. A manual hyperparameter tuning method is used to find the best PiNN architecture for each test case. Point-wise error and mean square error (MSE) measures are employed to assess the performance of PiNNs' predictions against the ground truth solutions obtained analytically or numerically using the finite element method. Our findings show that the predictions of PiNN are in good agreement with the ground truth solutions while reducing computational complexity and cost by, at least, three orders of magnitude.
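A minimal sketch, under assumed architectural details, of a fully connected network with a periodic (sine) activation of the kind mentioned above for representing the pressure field and its derivatives:

```python
# Sine activation module and an illustrative fully connected network; the
# input/output layout (x, y, t -> pressure, concentration) and frequencies
# are assumptions for illustration only.
import torch

class Sine(torch.nn.Module):
    def __init__(self, w0=1.0):
        super().__init__()
        self.w0 = w0
    def forward(self, x):
        return torch.sin(self.w0 * x)

pinn = torch.nn.Sequential(
    torch.nn.Linear(3, 64), Sine(w0=30.0),   # inputs: (x, y, t)
    torch.nn.Linear(64, 64), Sine(),
    torch.nn.Linear(64, 2))                  # outputs: pressure, concentration
```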
Real-time accurate solutions of large-scale complex dynamical systems are critically needed for control, optimization, uncertainty quantification, and decision-making in practical engineering and science applications. This paper contributes in this direction with a model-constrained tangent learning (mcTangent) approach. At the heart of mcTangent is the synergy of several desirable strategies: i) a tangent learning strategy to take advantage of neural network speed and the accuracy of the method of lines; ii) a model-constrained approach to encode the neural network tangent with the underlying governing equations; iii) a sequential learning strategy to promote long-time stability and accuracy; and iv) a data randomization approach to implicitly enforce the smoothness of the neural network tangent and its likeness to the true tangent, so as to further enhance the stability and accuracy of mcTangent solutions. Both semi-heuristic and rigorous arguments are provided to analyze and justify the proposed approach. Several numerical results for the transport equation, the viscous Burgers equation, and the Navier-Stokes equations are presented to study and demonstrate the capability of the proposed mcTangent learning approach.
Despite great progress in simulating multiphysics problems using the numerical discretization of partial differential equations (PDEs), one still cannot seamlessly incorporate noisy data into existing algorithms, mesh generation remains complex, and high-dimensional problems governed by parameterized PDEs cannot be tackled. Moreover, solving inverse problems with hidden physics is often prohibitively expensive and requires different formulations and elaborate computer codes. Machine learning has emerged as a promising alternative, but training deep neural networks requires big data, not always available for scientific problems. Instead, such networks can be trained from additional information obtained by enforcing the physical laws (for example, at random points in the continuous space-time domain). Such physics-informed learning integrates (noisy) data and mathematical models, and implements them through neural networks or other kernel-based regression networks. Moreover, it may be possible to design specialized network architectures that automatically satisfy some of the physical invariants for better accuracy, faster training and improved generalization. Here, we review some of the prevailing trends in embedding physics into machine learning, present some of the current capabilities and limitations and discuss diverse applications of physics-informed learning both for forward and inverse problems, including discovering hidden physics and tackling high-dimensional problems.
The displacement of two immiscible fluids is a common problem in fluid flow in porous media. Such a problem can be posed as a partial differential equation (PDE), commonly referred to as the Buckley-Leverett (B-L) problem. The B-L problem is a nonlinear hyperbolic conservation law that is known to be notoriously difficult to solve using traditional numerical methods. Here, we address the forward hyperbolic B-L problem with a non-convex flux function using physics-informed neural networks (PINNs). The contributions of this paper are twofold. First, we present a PINN approach to solve the hyperbolic B-L problem by embedding the Oleinik entropy condition into the neural network residual. We do not use a diffusion term (artificial viscosity) in the residual loss, but rely on the strong form of the PDE. Second, we use the Adam optimizer with a residual-based adaptive refinement (RAR) algorithm to achieve an ultra-low loss without weighting. Our solution method can accurately capture the shock front and produce an accurate overall solution. We report an L2 validation error of $2 \times 10^{-2}$ and an L2 loss of $1 \times 10^{-6}$. The proposed method does not require any additional regularization or weighting of losses to obtain such an accurate solution.
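For concreteness, the strong-form Buckley-Leverett residual $u_t + f(u)_x = 0$ with the usual non-convex fractional-flow function can be sketched as below; how the Oleinik entropy condition is embedded into the residual is specific to the paper and not reproduced here, and the mobility ratio M is an assumed value.

```python
# Strong-form B-L residual with a non-convex fractional-flow flux.
import torch

M = 2.0                                          # mobility ratio (assumed value)

def flux(u):
    return u**2 / (u**2 + M * (1.0 - u)**2)      # non-convex fractional flow

def bl_residual(net, x, t):
    x.requires_grad_(True); t.requires_grad_(True)
    u = net(torch.cat([x, t], dim=1))            # saturation predicted by the PINN
    f = flux(u)
    u_t = torch.autograd.grad(u, t, torch.ones_like(u), create_graph=True)[0]
    f_x = torch.autograd.grad(f, x, torch.ones_like(f), create_graph=True)[0]
    return u_t + f_x
```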
We propose characteristic-informed neural networks (CINN), a simple and efficient machine learning approach for solving forward and inverse problems involving hyperbolic PDEs. Like physics-informed neural networks (PINN), CINN is a meshless machine learning solver with universal approximation capabilities. Unlike PINN, which enforces a PDE softly via a multi-part loss function, CINN encodes the characteristics of the PDE in a general-purpose deep neural network trained with the usual MSE data-fitting regression loss and standard deep learning optimization methods. This leads to faster training and can avoid well-known pathologies of gradient descent optimization of multi-part PINN loss functions. If the characteristic ODEs can be solved exactly, which is true in important cases, the output of a CINN is an exact solution of the PDE, even at initialization, preventing the occurrence of non-physical outputs. Otherwise, the ODEs must be solved approximately, but the CINN is still trained only using a data-fitting loss function. The performance of CINN is assessed empirically in forward and inverse linear hyperbolic problems. These preliminary results indicate that CINN is able to improve on the accuracy of the baseline PINN, while being nearly twice as fast to train and avoiding non-physical solutions. Future extensions to hyperbolic PDE systems and nonlinear PDEs are also briefly discussed.
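A toy illustration of the CINN principle for linear advection u_t + a u_x = 0 (the general construction in the paper is more involved): the network learns only the initial profile g, and composing it with the characteristic coordinate x - a t yields an output that satisfies the PDE exactly, even at initialization, while training uses a plain regression loss.

```python
# Characteristic-informed toy example for linear advection.
import torch

a = 1.0
g = torch.nn.Sequential(torch.nn.Linear(1, 64), torch.nn.Tanh(),
                        torch.nn.Linear(64, 1))

def u(x, t):
    return g(x - a * t)     # exact solution of u_t + a*u_x = 0 for any g

# Training only fits g to the initial data with the usual MSE regression loss:
x0 = torch.linspace(-1, 1, 200).unsqueeze(1)
u0 = torch.exp(-40 * x0**2)                    # toy initial condition
opt = torch.optim.Adam(g.parameters(), lr=1e-3)
for _ in range(2000):
    opt.zero_grad()
    loss = (u(x0, torch.zeros_like(x0)) - u0).pow(2).mean()
    loss.backward(); opt.step()
```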