Physics-informed neural networks (PINNs) have proven effective for solving forward and inverse problems of partial differential equations (PDEs). PINNs embed the PDE into the loss of the neural network and evaluate this PDE loss at a set of scattered residual points. The distribution of these points is highly important to the performance of PINNs. However, in existing studies of PINNs, only a few simple residual point sampling methods have been used. Here, we present a comprehensive study of two categories of sampling: non-adaptive uniform sampling and adaptive non-uniform sampling. We consider six uniform sampling methods: (1) equispaced uniform grid, (2) uniformly random sampling, (3) Latin hypercube sampling, (4) Halton sequence, (5) Hammersley sequence, and (6) Sobol sequence. We also consider a resampling strategy for uniform sampling. To improve the sampling efficiency and the accuracy of PINNs, we propose two new residual-based adaptive sampling methods, residual-based adaptive distribution (RAD) and residual-based adaptive refinement with distribution (RAR-D), which dynamically improve the distribution of the residual points based on the PDE residuals during training. Hence, we consider a total of 10 different sampling methods: six non-adaptive uniform samplings, uniform sampling with resampling, the two proposed adaptive samplings, and an existing adaptive sampling. We extensively tested the performance of these sampling methods on four forward problems and two inverse problems in many setups. The numerical results presented in this study summarize more than 6000 simulations of PINNs. We show that the proposed adaptive sampling methods RAD and RAR-D significantly improve the accuracy of PINNs with fewer residual points. The results obtained in this study can also serve as a practical guideline for choosing sampling methods.
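A minimal NumPy sketch of the RAD resampling step described above, assuming the sampling density proportional to |residual|^k / mean(|residual|^k) + c commonly associated with this method; the residual function and the values of k and c below are illustrative assumptions, not the paper's exact configuration:

```python
import numpy as np

def rad_sample(residual_fn, candidates, n_points, k=1.0, c=1.0, rng=None):
    """Residual-based adaptive distribution (RAD) sampling, sketched.

    Draws n_points collocation points from a dense candidate set with
    probability proportional to |residual|^k / mean(|residual|^k) + c.
    """
    rng = rng or np.random.default_rng()
    eps = np.abs(residual_fn(candidates)) ** k   # PDE residual magnitude
    p = eps / eps.mean() + c                     # unnormalized sampling density
    p /= p.sum()                                 # normalize to probabilities
    idx = rng.choice(len(candidates), size=n_points, replace=False, p=p)
    return candidates[idx]

# Toy usage with a hypothetical residual that peaks near x = 0.5.
candidates = np.linspace(0.0, 1.0, 10_000)[:, None]
residual = lambda x: np.exp(-200.0 * (x[:, 0] - 0.5) ** 2)
points = rad_sample(residual, candidates, n_points=100)
```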
Deep learning has been shown to be an effective tool for solving partial differential equations (PDEs) through physics-informed neural networks (PINNs). PINNs embed the PDE residual into the loss function of the neural network and have been successfully used to solve diverse forward and inverse PDE problems. However, one drawback of first-generation PINNs is that they usually have limited accuracy even when many training points are used. Here, we propose a new method, gradient-enhanced physics-informed neural networks (gPINNs), for improving the accuracy and training efficiency of PINNs. gPINNs leverage the gradient information of the PDE residual and embed the gradient into the loss function. We tested gPINNs extensively and demonstrated their effectiveness in both forward and inverse PDE problems. Our numerical results show that gPINNs perform better than PINNs with fewer training points. Furthermore, we combined gPINNs with residual-based adaptive refinement (RAR), a method for adaptively improving the distribution of training points during training, to further improve the performance of gPINNs, especially for PDEs whose solutions have steep gradients.
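The core of gPINNs is a loss augmented with derivatives of the PDE residual. A hedged PyTorch sketch of that idea follows; the pde_residual helper and the weight w_g are assumptions for illustration, not the paper's exact code:

```python
import torch

def gpinn_loss(model, x, pde_residual, w_g=0.01):
    """Gradient-enhanced PINN loss, sketched.

    Adds the squared gradient of the PDE residual to the usual residual
    loss; pde_residual(model, x) -> residual tensor is an assumed
    user-supplied function.
    """
    x = x.clone().requires_grad_(True)
    f = pde_residual(model, x)                  # PDE residual f(x)
    loss_f = (f ** 2).mean()                    # standard PINN term
    # Gradient of the residual w.r.t. the inputs (the gPINN term).
    df = torch.autograd.grad(f.sum(), x, create_graph=True)[0]
    loss_g = (df ** 2).mean()
    return loss_f + w_g * loss_g
```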
Physics-Informed Neural Networks (PINNs) have gained much attention in various fields of engineering thanks to their capability of incorporating physical laws into the models. PINNs integrate the physical constraints by minimizing the partial differential equation (PDE) residuals on a set of collocation points. The distribution of these collocation points appears to have a huge impact on the performance of PINNs, and the assessment of sampling methods for these points is still an active topic. In this paper, we propose a Fixed-Budget Online Adaptive Mesh Learning (FBOAML) method, which decomposes the domain into sub-domains, for selecting training collocation points based on local maxima and local minima of the PDE residuals. The stopping criterion is based on a reference data set, which leads to an adaptive number of iterations for each specific problem. The effectiveness of FBOAML is demonstrated in the context of non-parameterized and parameterized problems. The impact of the hyper-parameters in FBOAML is investigated in this work. The comparison with other adaptive sampling methods is also illustrated. The numerical results demonstrate important gains in terms of the accuracy of PINNs with FBOAML over classical PINNs with non-adaptive collocation points. We also apply FBOAML in a complex industrial application involving coupling between mechanical and thermal fields. We show that FBOAML is able to identify the high-gradient locations and even gives better predictions for some physical fields than classical PINNs with collocation points taken on a pre-adapted finite element mesh.
Deep neural operators can learn nonlinear mappings between infinite-dimensional function spaces via deep neural networks. As promising surrogate solvers of partial differential equations (PDEs) for real-time prediction, deep neural operators such as deep operator networks (DeepONets) provide a new simulation paradigm in science and engineering. Pure data-driven neural operators and deep learning models, in general, are usually limited to interpolation scenarios, where new predictions utilize inputs within the support of the training set. However, in the inference stage of real-world applications, the input may lie outside the support, i.e., extrapolation is required, which may result in large errors and unavoidable failure of deep learning models. Here, we address this challenge of extrapolation for deep neural operators. First, we systematically investigate the extrapolation behavior of DeepONets by quantifying the extrapolation complexity via the 2-Wasserstein distance between two function spaces and propose a new behavior of bias-variance trade-off for extrapolation with respect to model capacity. Subsequently, we develop a complete workflow, including extrapolation determination, and we propose five reliable learning methods that guarantee a safe prediction under extrapolation by requiring additional information -- the governing PDEs of the system or sparse new observations. The proposed methods are based on either fine-tuning a pre-trained DeepONet or multifidelity learning. We demonstrate the effectiveness of the proposed framework for various types of parametric PDEs. Our systematic comparisons provide practical guidelines for selecting a proper extrapolation method depending on the available information, desired accuracy, and required inference speed.
Deep learning has achieved remarkable success in diverse applications; however, its use in solving partial differential equations (PDEs) has emerged only recently. Here, we present an overview of physics-informed neural networks (PINNs), which embed a PDE into the loss of the neural network using automatic differentiation. The PINN algorithm is simple, and it can be applied to different types of PDEs, including integro-differential equations, fractional PDEs, and stochastic PDEs. Moreover, from the implementation point of view, PINNs solve inverse problems as easily as forward problems. We propose a new residual-based adaptive refinement (RAR) method to improve the training efficiency of PINNs. For pedagogical reasons, we compare the PINN algorithm to a standard finite element method. We also present a Python library for PINNs, DeepXDE, which is designed to serve both as an education tool to be used in the classroom as well as a research tool for solving problems in computational science and engineering. Specifically, DeepXDE can solve forward problems given initial and boundary conditions, as well as inverse problems given some extra measurements. DeepXDE supports complex-geometry domains based on the technique of constructive solid geometry, and enables the user code to be compact, resembling closely the mathematical formulation. We introduce the usage of DeepXDE and its customizability, and we also demonstrate the capability of PINNs and the user-friendliness of DeepXDE for five different examples. More broadly, DeepXDE contributes to the more rapid development of the emerging Scientific Machine Learning field.
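As a rough illustration of the RAR idea (a plain NumPy sketch, independent of DeepXDE's actual implementation), the refinement step evaluates the residual on random candidates and appends the worst ones to the training set; residual_fn and the domain format are assumptions:

```python
import numpy as np

def rar_refine(residual_fn, train_pts, n_candidates, n_add, domain, rng=None):
    """Residual-based adaptive refinement (RAR), sketched.

    Periodically evaluates the PDE residual on random candidate points and
    appends the n_add points with the largest residual to the training set.
    domain = (low, high) bounds per dimension; residual_fn is an assumed
    user-supplied function returning |residual| per point.
    """
    rng = rng or np.random.default_rng()
    low, high = domain
    cand = rng.uniform(low, high, size=(n_candidates, len(low)))
    worst = np.argsort(np.abs(residual_fn(cand)))[-n_add:]  # largest residuals
    return np.vstack([train_pts, cand[worst]])
```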
Adaptive training methods for physics-informed neural networks (PINNs) require dedicated constructions for the distribution of weights assigned to each training sample. Efficiently seeking such an optimal weight distribution is not a simple task, and most existing methods choose the adaptive weights based on approximating the full distribution or the maximum of the residuals. In this paper, we show that the bottleneck in sample-adaptive selection for training efficiency is the behavior of the tail distribution of the numerical residuals. We therefore propose the Residual Quantile Adjustment (RQA) method, which provides a better weight choice for each training sample. After initially setting the weights proportional to the $p$-th power of the residual, our RQA method reassigns all weights above the $q$-quantile (e.g., $90\%$) to the median, so that the weights follow a quantile-adjusted distribution derived from the residuals. With the iterative reweighting technique, RQA is also very easy to implement. Experimental results show that the proposed method can outperform several adaptive methods on various partial differential equation (PDE) problems.
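A small NumPy sketch of the RQA weight update described above; the default values of p and q are illustrative:

```python
import numpy as np

def rqa_weights(residuals, p=2.0, q=0.9):
    """Residual Quantile Adjustment (RQA) weights, sketched from the abstract.

    Weights start proportional to |residual|^p; every weight above the
    q-quantile is reassigned to the median weight, taming the heavy tail.
    """
    w = np.abs(residuals) ** p
    cutoff = np.quantile(w, q)
    w[w > cutoff] = np.median(w)   # clip the tail to the median
    return w / w.mean()            # keep the average weight at 1
```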
In this paper, we develop a physics-informed neural network (PINN) model for parabolic problems with a sharply perturbed initial condition. As an example of a parabolic problem, we consider the advection-dispersion equation (ADE) with a point (Gaussian) source initial condition. In the $d$-dimensional ADE, perturbations in the initial condition decay with time $t$ as $t^{-d/2}$, which can cause a large approximation error in the PINN solution. Localized large gradients in the ADE solution make the (common in PINNs) Latin hypercube sampling of the equation's residual inefficient. Finally, the PINN solution of parabolic equations is sensitive to the choice of weights in the loss function. We propose a normalized form of the ADE in which the initial perturbation of the solution does not decrease in amplitude, and we demonstrate that this normalization significantly reduces the PINN approximation error. We propose criteria for the weights in the loss function that produce a more accurate PINN solution than those obtained with weights selected via other methods. Finally, we propose an adaptive sampling scheme that significantly reduces the PINN solution error for the same number of sampling (residual) points. We demonstrate the accuracy of the proposed PINN model for forward, inverse, and backward ADEs.
Physics-informed neural networks (PINNs) incorporate physical knowledge from the problem domain as soft constraints on the loss function, but recent work has shown that this can lead to optimization difficulties. Here, we study the impact of the location of the collocation points on the trainability of these models. We find that the performance of vanilla PINNs can be significantly boosted by adapting the location of the collocation points as training proceeds. Specifically, we propose a novel adaptive collocation scheme that progressively allocates more collocation points (without increasing their number) to areas where the model is making higher errors (based on the gradient of the loss function in the domain). This, coupled with a judicious restarting of training during any optimization stalls (by simply resampling the collocation points to adjust the loss landscape), leads to better estimates of the prediction error. We present results for several problems, including a 2D Poisson and a diffusion-advection system with different forcing functions. We find that training vanilla PINNs for these problems can result in up to 70% prediction error in the solution, especially in the regime of few collocation points. In contrast, our adaptive schemes can achieve up to an order of magnitude smaller error, with computational complexity similar to that of the baseline. Furthermore, we find that the adaptive methods consistently perform on par with or slightly better than the vanilla PINN method, even in the regime of many collocation points. The code for all experiments is open-sourced.
Physics-informed neural networks (PINNs) are neural networks (NNs) that encode model equations, such as partial differential equations (PDEs), as a component of the neural network itself. PINNs are nowadays used to solve PDEs, fractional equations, integro-differential equations, and stochastic PDEs. This novel methodology has arisen as a multi-task learning framework in which an NN must fit observed data while reducing the PDE residual. This article provides a comprehensive review of the literature on PINNs. The primary goal of the study is to characterize these networks and their related advantages and disadvantages; the review also attempts to incorporate publications on the broader family of collocation-based physics-informed neural networks, which comprises the vanilla PINN as well as many other variants, such as physics-constrained neural networks (PCNNs), variational hp-VPINNs, and conservative PINNs (CPINNs). The study shows that most research has focused on customizing PINNs through different activation functions, gradient optimization techniques, neural network structures, and loss function structures. Despite the wide range of applications in which PINNs have been used, and despite demonstrations that in some contexts they are more feasible than classical numerical techniques such as the finite element method (FEM), there is still room for advancement, most notably on theoretical issues that remain unresolved.
Neural network-based methods for solving partial differential equations have attracted considerable attention due to their simplicity and flexibility in representing the solution of a PDE. In training a neural network, the network tends to learn global features corresponding to low-frequency components, while high-frequency components are approximated at a much slower rate (the F-principle). For a class of equations whose solutions contain a wide range of scales, the network training process can suffer from slow convergence and low accuracy due to its inability to capture the high-frequency components. In this work, we propose a hierarchical approach to improve the convergence rate and accuracy of the neural network solution. The proposed method comprises multiple levels of training, where a newly introduced neural network at each level is guided to learn the residual of the previous level's approximation. By the nature of the neural network training process, the higher-level corrections tend to capture the high-frequency components. We validate the efficiency and robustness of the proposed hierarchical method through a suite of linear and nonlinear partial differential equations.
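The level-by-level correction can be sketched as follows (PyTorch). For brevity this sketch fits data residuals in plain regression, which captures the spirit of the method but is not the paper's exact PDE setting:

```python
import torch
import torch.nn as nn

def fit_hierarchy(x, y, levels=3, epochs=2000, lr=1e-3):
    """Hierarchical residual learning, sketched from the abstract.

    Level 0 fits the target; each later level fits the residual left by
    the sum of the earlier, already-trained levels, so the corrections
    tend to pick up higher-frequency content.
    """
    nets, target = [], y
    for _ in range(levels):
        net = nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, 1))
        opt = torch.optim.Adam(net.parameters(), lr=lr)
        for _ in range(epochs):
            opt.zero_grad()
            loss = ((net(x) - target) ** 2).mean()
            loss.backward()
            opt.step()
        nets.append(net)
        with torch.no_grad():
            target = target - net(x)   # residual for the next level
    return nets
```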
With increases in computational power and advances in machine learning, data-driven learning-based methods have gained significant attention for solving PDEs. Physics-informed neural networks (PINNs) have recently emerged and succeeded in various forward and inverse PDE problems thanks to their excellent properties, such as flexibility, mesh-free solutions, and unsupervised training. However, their slow convergence and relatively inaccurate solutions often limit their broader applicability in many science and engineering domains. This paper proposes a new kind of data-driven PDE solver, physics-informed cell representations (PIXEL), which elegantly combines classical numerical methods and learning-based approaches. We adopt a grid structure from numerical methods to improve accuracy and convergence speed and to overcome the spectral bias present in PINNs. Moreover, the proposed method enjoys the same benefits as PINNs, e.g., using the same optimization framework to solve both forward and inverse PDE problems, and readily enforcing PDE constraints with modern automatic differentiation techniques. We provide experimental results on various challenging PDEs that the original PINN struggles with and show that PIXEL achieves fast convergence and high accuracy.
In this work, we propose a deep adaptive sampling (DAS) method for solving partial differential equations (PDEs), where deep neural networks are utilized to approximate the solutions of PDEs and deep generative models are employed to generate new collocation points that refine the training set. The overall procedure of DAS consists of two components: solving the PDE by minimizing the residual loss on the collocation points in the training set, and generating a new training set to further improve the accuracy of the current approximate solution. In particular, we treat the residual as a probability density function and approximate it with a deep generative model called KRnet. The new samples from KRnet are consistent with the distribution induced by the residual, i.e., more samples are located in regions of large residual and fewer samples are located in regions of small residual. Analogous to classical adaptive methods such as the adaptive finite element method, KRnet acts as an error indicator that guides the refinement of the training set. Compared with the neural network approximation obtained with uniformly distributed collocation points, the developed algorithm can significantly improve the accuracy, especially for low-regularity and high-dimensional problems. We present a theoretical analysis showing that the proposed DAS method can reduce the error, and we demonstrate its effectiveness with numerical experiments.
Partial differential equations are often used to model various physical phenomena, such as heat diffusion, wave propagation, fluid dynamics, elasticity, electrodynamics, and image processing, and many analytic methods and traditional numerical methods have been developed and widely used for their solution. Inspired by the rapidly growing impact of deep learning on scientific and engineering research, in this paper we propose a novel neural network, GF-Net, for learning the Green's functions of linear reaction-diffusion equations in an unsupervised fashion. The proposed method overcomes the challenges of finding the Green's function of the equation on arbitrary domains by using a physics-informed approach and the symmetry of the Green's function. As a consequence, it leads in particular to an efficient way of solving the target equation under different boundary conditions and sources. We also demonstrate the effectiveness of the proposed approach through experiments on square, annular, and L-shaped domains.
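Once a Green's function G(x, y) has been learned, solving the equation for a new source f reduces to quadrature: u(x) is approximated by sum_j w_j * G(x, y_j) * f(y_j). A sketch under the assumption that G is any callable (e.g., a trained GF-Net-like model wrapped as a function):

```python
import numpy as np

def solve_with_green(G, f, ys, ws, query):
    """Solve L u = f via a (learned) Green's function, sketched.

    ys, ws are quadrature nodes and weights over the domain; G(x, ys)
    returns the Green's function at a query point x against all nodes.
    Both G and f are assumed user-supplied callables.
    """
    return np.array([np.sum(ws * G(x, ys) * f(ys)) for x in query])
```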
Recent years have witnessed a growth in mathematics for deep learning--which seeks a deeper understanding of the concepts of deep learning with mathematics, and explores how to make it more robust--and deep learning for mathematics, where deep learning algorithms are used to solve problems in mathematics. The latter has popularised the field of scientific machine learning, where deep learning is applied to problems in scientific computing. Specifically, more and more neural network architectures have been developed to solve specific classes of partial differential equations (PDEs). Such methods exploit properties that are inherent to PDEs and thus solve the PDEs better than classical feed-forward neural networks, recurrent neural networks, and convolutional neural networks. This has had a great impact in the area of mathematical modeling, where parametric PDEs are widely used to model most natural and physical processes arising in science and engineering. In this work, we review such methods and extend them for parametric studies as well as for solving the related inverse problems. We also show their relevance in some industrial applications.
In recent years, deep learning techniques have been used to solve partial differential equations (PDEs), among which physics-informed neural networks (PINNs) have emerged as a promising method for solving both forward and inverse PDE problems. PDEs with a point source, expressed as a Dirac delta function in the governing equations, are mathematical models of many physical processes. However, they cannot be solved directly by conventional PINN methods due to the singularity brought by the Dirac delta function. We propose a universal solution to this problem with three novel techniques. First, the Dirac delta function is modeled as a continuous probability density function to eliminate the singularity; second, a lower-bound-constrained uncertainty weighting algorithm is proposed to balance the PINN loss between the point source area and other areas; and third, a multi-scale deep neural network with periodic activation functions is used to improve the accuracy and convergence speed of the PINN method. We evaluate the proposed method on three representative PDEs, and the experimental results show that our method outperforms existing deep-learning-based methods in terms of accuracy, efficiency, and versatility.
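The first technique, modeling the Dirac delta as a continuous density, can be sketched with a narrow Gaussian; the smoothing width sigma below is an illustrative choice, not the paper's prescribed value:

```python
import numpy as np

def smoothed_delta(x, x0=0.0, sigma=0.01):
    """Dirac delta modeled as a narrow Gaussian density, as in the abstract.

    Replacing delta(x - x0) with the pdf of N(x0, sigma^2) removes the
    singularity from the PDE residual while preserving unit mass.
    """
    return np.exp(-(x - x0) ** 2 / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))
```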
As a typical application of deep learning, physics-informed neural networks (PINNs) have been successfully used to find numerical solutions of partial differential equations (PDEs), but how to improve their limited accuracy remains a great challenge. In this work, we introduce a new method, symmetry-enhanced physics-informed neural networks (SPINNs), where the invariant surface conditions induced by the Lie symmetries of the PDE are embedded into the loss function of PINNs to improve their accuracy. We test the effectiveness of SPINNs via two groups of ten independent numerical experiments for the heat equation, the Korteweg-de Vries (KdV) equation, and the potential Burgers equation, respectively, which show that SPINNs perform better than PINNs with fewer training points and simpler neural network architectures. Furthermore, we discuss the computational overhead of SPINNs in terms of their relative computational cost compared with PINNs and show that the training time of SPINNs is not obviously larger, and in some cases even smaller, than that of PINNs.
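A schematic of the SPINN loss, assuming user-supplied residual functions for the PDE and for the invariant surface condition induced by a Lie symmetry; the weight w_sym is an assumption for illustration:

```python
import torch

def spinn_loss(pde_residual, symmetry_residual, x, w_sym=1.0):
    """SPINN-style loss, sketched: PDE residual plus the residual of the
    invariant surface condition from a Lie symmetry of the PDE. Both
    residual callables are assumed user-supplied."""
    return ((pde_residual(x) ** 2).mean()
            + w_sym * (symmetry_residual(x) ** 2).mean())
```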
Although physics-informed neural networks (PINNs) have progressed a lot in many real applications recently, there remain problems to be further studied, such as achieving more accurate results, taking less training time, and quantifying the uncertainty of the predicted results. Recent advances have indeed significantly improved the performance of PINNs in many respects, but few have considered the effect of variance in the training process. In this work, we take the effect of variance into consideration and propose VI-PINNs to give better predictions. We output two values in the final layer of the network to represent the predicted mean and variance respectively, and the latter is used to represent the uncertainty of the output. A modified negative log-likelihood loss and an auxiliary task are introduced for fast and accurate training. We perform several experiments on a wide range of different problems to highlight the advantages of our approach. The results show that our method not only gives more accurate predictions but also converges faster.
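A sketch of the negative log-likelihood term for a network with mean and variance heads, up to an additive constant; parameterizing the second output as a log-variance is an assumption made here for numerical stability, not necessarily the paper's exact choice:

```python
import torch

def nll_loss(mu, log_var, y):
    """Heteroscedastic Gaussian negative log-likelihood, sketched.

    mu and log_var are the network's two outputs (predicted mean and
    log-variance); the log_var term penalizes over-confident predictions.
    """
    return 0.5 * (log_var + (y - mu) ** 2 / log_var.exp()).mean()
```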
Physics-informed neural networks (PINNs) have emerged as a powerful tool for solving partial differential equations (PDEs) in a variety of domains. While previous research on PINNs has mainly focused on constructing and balancing loss functions during training to avoid poor minima, the effect of sampling collocation points on the performance of PINNs has largely been overlooked. In this work, we find that the performance of PINNs can vary significantly with different sampling strategies, and that using a fixed set of collocation points can be quite detrimental to the convergence of PINNs to the correct solution. In particular, (1) we hypothesize that the training of PINNs relies on successful "propagation" of the solution from the initial and/or boundary condition points to the interior points, and that a PINN with a poor sampling strategy can get stuck at trivial solutions if there are \textit{propagation failures}. (2) We demonstrate that propagation failures are characterized by highly imbalanced PDE residual fields, where very high residuals are observed over very narrow regions. (3) To mitigate propagation failures, we propose a novel \textit{evolutionary sampling} (Evo) method that can incrementally accumulate collocation points in regions of high PDE residual. We further provide an extension of Evo to respect the principle of causality while solving time-dependent PDEs. We empirically demonstrate the efficacy and efficiency of our proposed methods in a variety of PDE problems.
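A minimal NumPy sketch of one Evo iteration as described above: high-residual points survive, the rest are resampled uniformly. The mean-residual survival threshold is an illustrative choice, not necessarily the paper's criterion:

```python
import numpy as np

def evo_resample(points, residual_fn, domain, rng=None):
    """Evolutionary sampling (Evo), sketched from the abstract.

    High-residual collocation points survive to the next iteration; the
    rest are replaced by fresh uniform samples, so points gradually
    accumulate in high-residual regions.
    """
    rng = rng or np.random.default_rng()
    res = np.abs(residual_fn(points))
    keep = points[res > res.mean()]            # survivors
    low, high = domain
    fresh = rng.uniform(low, high,
                        size=(len(points) - len(keep), points.shape[1]))
    return np.vstack([keep, fresh])
```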
Physics-Informed Neural Networks (PINN) are algorithms from deep learning leveraging physical laws by including partial differential equations together with a respective set of boundary and initial conditions as penalty terms in their loss function. In this work, we observe the significant role of correctly weighting the combination of multiple competing loss terms for training PINNs effectively. To this end, we implement and evaluate different methods aiming at balancing the contributions of multiple terms of the PINN loss function and their gradients. After reviewing three existing loss scaling approaches (Learning Rate Annealing, GradNorm and SoftAdapt), we propose a novel self-adaptive loss balancing scheme for PINNs named \emph{ReLoBRaLo} (Relative Loss Balancing with Random Lookback). We extensively evaluate the performance of the aforementioned balancing schemes by solving both forward and inverse problems on three benchmark PDEs for PINNs: Burgers' equation, Kirchhoff's plate bending equation and Helmholtz's equation. The results show that ReLoBRaLo is able to consistently outperform the baseline of existing scaling methods in terms of accuracy, while also inducing significantly less computational overhead.
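A simplified sketch in the spirit of ReLoBRaLo: new term weights come from a softmax over the ratio of each loss term to its earlier value, blended into an exponential moving average. This omits the paper's random lookback and is a sketch of the idea, not the exact update rule:

```python
import torch

def relative_balance(losses, prev_losses, weights, alpha=0.9, tau=1.0):
    """Relative loss balancing step, simplified from the ReLoBRaLo idea.

    losses / prev_losses are lists of scalar loss tensors for each term;
    weights is the current weight tensor. alpha is the EMA factor and tau
    a softmax temperature (both illustrative defaults).
    """
    ratios = torch.stack([l / (p + 1e-12) for l, p in zip(losses, prev_losses)])
    bal = len(losses) * torch.softmax(ratios / tau, dim=0)  # mean weight ~ 1
    return alpha * weights + (1 - alpha) * bal
```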
We propose characteristic-informed neural networks (CINN), a simple and efficient machine learning approach for solving forward and inverse problems involving hyperbolic PDEs. Like physics-informed neural networks (PINN), CINN is a meshless machine learning solver with universal approximation capabilities. Unlike PINN, which enforces a PDE softly via a multi-part loss function, CINN encodes the characteristics of the PDE in a general-purpose deep neural network trained with the usual MSE data-fitting regression loss and standard deep learning optimization methods. This leads to faster training and can avoid well-known pathologies of gradient descent optimization of multi-part PINN loss functions. If the characteristic ODEs can be solved exactly, which is true in important cases, the output of a CINN is an exact solution of the PDE, even at initialization, preventing the occurrence of non-physical outputs. Otherwise, the ODEs must be solved approximately, but the CINN is still trained only using a data-fitting loss function. The performance of CINN is assessed empirically in forward and inverse linear hyperbolic problems. These preliminary results indicate that CINN is able to improve on the accuracy of the baseline PINN, while being nearly twice as fast to train and avoiding non-physical solutions. Future extensions to hyperbolic PDE systems and nonlinear PDEs are also briefly discussed.
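For the simplest hyperbolic case, linear advection u_t + c * u_x = 0, the CINN idea can be sketched directly: any network evaluated along the characteristics satisfies the PDE exactly, even at initialization, so it can be trained with a plain MSE data fit. The architecture of g below is an illustrative choice:

```python
import torch
import torch.nn as nn

class AdvectionCINN(nn.Module):
    """Characteristics-built network for u_t + c * u_x = 0, sketched.

    u(x, t) = g(x - c * t) is an exact solution of the advection equation
    for any network g, so g only needs a standard regression fit to
    initial/observation data.
    """
    def __init__(self, c=1.0):
        super().__init__()
        self.c = c
        self.g = nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, 1))

    def forward(self, x, t):
        return self.g(x - self.c * t)  # evaluate along the characteristics
```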