We illustrate an approach that can be exploited for constructing neural networks which a priori obey physical laws. We start with a simple single-layer neural network (NN) but refrain from choosing the activation functions yet. Under certain conditions and in the infinite-width limit, we may apply the central limit theorem, upon which the NN output becomes Gaussian. We can then investigate and manipulate the limit network by falling back on Gaussian process (GP) theory. It is observed that linear operators acting upon a GP again yield a GP. This also holds true for differential operators defining differential equations and describing physical laws. If we demand the GP, or equivalently the limit network, to obey the physical law, then this yields an equation for the covariance function or kernel of the GP, whose solution equivalently constrains the model to obey the physical law. The central limit theorem then suggests that NNs can be constructed to obey a physical law by choosing activation functions such that they match a particular kernel in the infinite-width limit. Activation functions constructed in this way guarantee the NN to a priori obey the physics, up to the approximation error of non-infinite network width. A simple example of the homogeneous 1D Helmholtz equation is discussed and compared to naive kernels and activations.
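As a concrete illustration of the activation-kernel correspondence described above, here is a minimal sketch (not from the paper; all names and parameter values are illustrative): a wide single-layer network with erf activations is sampled many times, and its empirical output covariance is compared against the known analytic limit kernel (Williams' 1997 arcsine kernel).

```python
import jax
import jax.numpy as jnp
from jax.scipy.special import erf

def erf_kernel(x, y, sigma2=1.0):
    # Williams' (1997) arcsine kernel: the infinite-width limit of a single
    # hidden layer of erf units with N(0, sigma2) weights and biases.
    xt, yt = jnp.append(1.0, x), jnp.append(1.0, y)  # bias coordinate
    num = 2.0 * sigma2 * (xt @ yt)
    den = jnp.sqrt((1.0 + 2.0 * sigma2 * (xt @ xt))
                   * (1.0 + 2.0 * sigma2 * (yt @ yt)))
    return (2.0 / jnp.pi) * jnp.arcsin(num / den)

H, M = 2048, 2000                        # hidden width, number of random nets
x1, x2 = jnp.array([0.3]), jnp.array([-0.7])

kW, kV = jax.random.split(jax.random.PRNGKey(0))
W = jax.random.normal(kW, (M, H, 2))     # first-layer weights (incl. bias)
V = jax.random.normal(kV, (M, H))        # readout weights

def outputs(x):
    xt = jnp.append(1.0, x)
    return (V * erf(W @ xt)).sum(-1) / jnp.sqrt(H)   # one output per net

f1, f2 = outputs(x1), outputs(x2)
print("empirical covariance:", jnp.mean(f1 * f2))    # Monte Carlo estimate
print("analytic limit kernel:", erf_kernel(x1, x2))
```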
It has long been known that a single-layer fully-connected neural network with an i.i.d. prior over its parameters is equivalent to a Gaussian process (GP), in the limit of infinite network width. This correspondence enables exact Bayesian inference for infinite width neural networks on regression tasks by means of evaluating the corresponding GP. Recently, kernel functions which mimic multi-layer random neural networks have been developed, but only outside of a Bayesian framework. As such, previous work has not identified that these kernels can be used as covariance functions for GPs and allow fully Bayesian prediction with a deep neural network. In this work, we derive the exact equivalence between infinitely wide deep networks and GPs. We further develop a computationally efficient pipeline to compute the covariance function for these GPs. We then use the resulting GPs to perform Bayesian inference for wide deep neural networks on MNIST and CIFAR-10. We observe that trained neural network accuracy approaches that of the corresponding GP with increasing layer width, and that the GP uncertainty is strongly correlated with trained network prediction error. We further find that test performance increases as finite-width trained networks are made wider and more similar to a GP, and thus that GP predictions typically outperform those of finite-width networks. Finally we connect the performance of these GPs to the recent theory of signal propagation in random neural networks. (Throughout, we assume the conditions on the parameter distributions and nonlinearities are such that the Central Limit Theorem will hold; for instance, that the weight variance is scaled inversely proportional to the layer width.)
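The NNGP correspondence can be made concrete with a short sketch (a generic illustration, not the paper's pipeline): the deep ReLU NNGP kernel is computed by iterating the arc-cosine kernel map, and exact Bayesian "training" of the infinite-width network reduces to GP regression with that kernel.

```python
import jax.numpy as jnp

def nngp(X1, X2, depth=3, sw2=2.0, sb2=0.1):
    # NNGP covariance for a deep fully-connected ReLU network: start from the
    # input Gram matrix, then apply the arc-cosine kernel map once per layer.
    d_in = X1.shape[1]
    Kxy = sb2 + sw2 * X1 @ X2.T / d_in
    Kxx = sb2 + sw2 * (X1 * X1).sum(1) / d_in   # self-covariances (diagonals)
    Kyy = sb2 + sw2 * (X2 * X2).sum(1) / d_in
    for _ in range(depth):
        d = jnp.sqrt(jnp.outer(Kxx, Kyy))
        th = jnp.arccos(jnp.clip(Kxy / d, -1.0, 1.0))
        Kxy = sb2 + sw2 / (2 * jnp.pi) * d * (jnp.sin(th)
                                              + (jnp.pi - th) * jnp.cos(th))
        Kxx = sb2 + sw2 * Kxx / 2.0    # E[relu(u)^2] = Kxx / 2 for u ~ N(0, Kxx)
        Kyy = sb2 + sw2 * Kyy / 2.0
    return Kxy

# Exact Bayesian inference for the infinite-width network = GP regression.
Xtr = jnp.linspace(-1, 1, 10)[:, None]
ytr = jnp.sin(3 * Xtr[:, 0])
Xte = jnp.linspace(-1, 1, 100)[:, None]
Ktt = nngp(Xtr, Xtr) + 1e-2 * jnp.eye(10)        # observation-noise variance
Kst = nngp(Xte, Xtr)
mean = Kst @ jnp.linalg.solve(Ktt, ytr)
var = jnp.diag(nngp(Xte, Xte) - Kst @ jnp.linalg.solve(Ktt, Kst.T))
```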
To better understand the theoretical behavior of large neural networks, several works have analyzed the case where a network's width tends to infinity. In this regime, the effect of random initialization and the process of training a neural network can be formally expressed with analytical tools such as Gaussian processes and neural tangent kernels. In this paper, we review methods for quantifying uncertainty in such infinite-width neural networks and compare their relationship to Gaussian processes in the Bayesian inference framework. We make use of several equivalence results along the way to obtain exact closed-form solutions for predictive uncertainty.
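The closed-form predictive uncertainties referred to here all take the form of the standard GP posterior; as a reference point, with $K$ the limiting kernel (e.g. the NNGP or NTK-induced covariance) and $\sigma_n^2$ the observation-noise variance:

```latex
\begin{align}
\mu_*(x)      &= k(x, X)\left[K(X, X) + \sigma_n^2 I\right]^{-1} y, \\
\sigma_*^2(x) &= k(x, x) - k(x, X)\left[K(X, X) + \sigma_n^2 I\right]^{-1} k(X, x).
\end{align}
```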
The lack of insight into deep learning systems hinders their systematic design. In science and engineering, modeling is a methodology used to understand complex systems whose internal processes are opaque. Modeling replaces a complex system with a simpler surrogate that is more amenable to interpretation. Drawing inspiration from this, we construct a class of surrogate models for neural networks using Gaussian processes. Rather than deriving kernels from some limiting case of neural networks, we learn the kernels of the Gaussian process empirically from the naturalistic behavior of neural networks. We first evaluate our approach with two case studies inspired by previous theoretical studies of neural network behavior, in which we capture neural network preferences for learning low frequencies and identify pathological behavior in deep neural networks. In a further practical case study, we use the learned kernels to predict the generalization properties of neural networks.
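A hypothetical sketch of the empirical-kernel idea (the paper learns kernels from the naturalistic behavior of networks; for brevity, this toy simply estimates the output covariance of an ensemble of freshly initialized networks on a grid):

```python
import jax
import jax.numpy as jnp

def mlp(params, X):                          # small tanh network, scalar output
    h = X
    for W, b in params[:-1]:
        h = jnp.tanh(h @ W + b)
    W, b = params[-1]
    return (h @ W + b)[..., 0]

def init(key, sizes=(1, 64, 64, 1)):
    params = []
    for din, dout in zip(sizes[:-1], sizes[1:]):
        key, k = jax.random.split(key)
        params.append((jax.random.normal(k, (din, dout)) / jnp.sqrt(din),
                       jnp.zeros(dout)))
    return params

X = jnp.linspace(-2, 2, 50)[:, None]         # evaluation grid
key = jax.random.PRNGKey(0)
F = []
for _ in range(500):                         # ensemble of networks
    key, k = jax.random.split(key)
    F.append(mlp(init(k), X))
F = jnp.stack(F) - jnp.stack(F).mean(0)      # centered (500, 50) outputs
K_emp = F.T @ F / F.shape[0]                 # empirical surrogate kernel
# K_emp (plus jitter) can now serve as the covariance of a surrogate GP prior.
```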
Partial differential equations (PDEs) are important tools to model physical systems, and including them into machine learning models is an important way of incorporating physical knowledge. Given any system of linear PDEs with constant coefficients, we propose a family of Gaussian process (GP) priors, which we call EPGP, such that all realizations are exact solutions of this system. We apply the Ehrenpreis-Palamodov fundamental principle, which works like a non-linear Fourier transform, to construct GP kernels mirroring standard spectral methods for GPs. Our approach can infer probable solutions of linear PDE systems from any data such as noisy measurements, or initial and boundary conditions. Constructing EPGP-priors is algorithmic, generally applicable, and comes with a sparse version (S-EPGP) that learns the relevant spectral frequencies and works better for big data sets. We demonstrate our approach on three families of PDE systems: the heat equation, the wave equation, and Maxwell's equations, where we improve upon the state of the art in computation time and precision, in some experiments by several orders of magnitude.
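A toy analogue of the spectral construction, close in spirit to S-EPGP but much simplified and not the Ehrenpreis-Palamodov algorithm itself: random exponential-Fourier features that each solve the 1D heat equation exactly, so every realization of the induced GP is an exact solution, and conditioning on initial data is Bayesian linear regression in feature space.

```python
import jax
import jax.numpy as jnp

m = 200                                          # number of spectral features
kw, kb, kn = jax.random.split(jax.random.PRNGKey(0), 3)
omega = 3.0 * jax.random.normal(kw, (m,))        # spectral frequencies
phase = 2 * jnp.pi * jax.random.uniform(kb, (m,))

def features(x, t):
    # Each column exp(-w^2 t) cos(w x + b) solves u_t = u_xx exactly, so any
    # linear combination (hence every GP realization) is an exact solution.
    return (jnp.exp(-omega**2 * t[:, None])
            * jnp.cos(omega * x[:, None] + phase) / jnp.sqrt(m))

# Condition on noisy initial data at t = 0 (weights w ~ N(0, I) a priori).
x0 = jnp.linspace(-3, 3, 60)
y0 = jnp.exp(-x0**2) + 0.01 * jax.random.normal(kn, (60,))
Phi = features(x0, jnp.zeros_like(x0))
noise = 1e-4
A = Phi.T @ Phi / noise + jnp.eye(m)
w_mean = jnp.linalg.solve(A, Phi.T @ y0 / noise)

# Posterior-mean solution at a later time, guaranteed to satisfy the PDE.
xs = jnp.linspace(-3, 3, 100)
u_pred = features(xs, 0.5 * jnp.ones_like(xs)) @ w_mean
```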
Deep Gaussian processes (DGPs) as priors for Bayesian learning intuitively exploit the expressive power of function composition. DGPs also offer diverse modeling capabilities, but inference is challenging because marginalization over the latent function space is intractable. With the Bochner theorem, a DGP with squared-exponential kernels can be viewed as a deep trigonometric network consisting of random feature layers, sine and cosine activation units, and random weight layers. In the wide limit with a bottleneck, we show that the weight-space view yields the same effective covariance function previously obtained in function space. Likewise, varying the prior distribution over network parameters is equivalent to employing different kernels. As such, DGPs can be translated into deep bottlenecked trigonometric networks, through which exact maximum a posteriori estimates can be obtained. Interestingly, the network representation enables the study of the DGP's neural tangent kernel, which may also reveal the mean of the intractable predictive distribution. Statistically, unlike shallow networks, deep networks of finite width have covariance deviating from the limiting kernel, and the inner and outer widths may play different roles in feature learning. Numerical simulations are presented to support our findings.
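The Bochner-theorem step is easy to check numerically: a random layer of sine and cosine units with Gaussian input weights reproduces the squared-exponential kernel as the number of features grows (a standard random-Fourier-features sketch with illustrative parameter values):

```python
import jax
import jax.numpy as jnp

m, ell = 2000, 0.7
W = jax.random.normal(jax.random.PRNGKey(0), (m, 1)) / ell   # spectral weights

def phi(X):                            # (n, 1) -> (n, 2m) trigonometric features
    Z = X @ W.T
    return jnp.concatenate([jnp.cos(Z), jnp.sin(Z)], axis=1) / jnp.sqrt(m)

X = jnp.linspace(-2, 2, 40)[:, None]
K_rff = phi(X) @ phi(X).T                                # feature-map kernel
K_se = jnp.exp(-0.5 * (X - X.T) ** 2 / ell**2)           # exact SE kernel
print("max abs error:", jnp.abs(K_rff - K_se).max())     # shrinks as m grows
```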
The strengths and weaknesses of neural networks and Gaussian processes are complementary, and a better understanding of their relationship comes with the promise of making each method benefit from the other. In this work, we establish an equivalence between the forward pass of neural networks and (deep) sparse Gaussian process models. The theory we develop is based on interpreting activation functions as inter-domain inducing features, through a rigorous analysis of the interplay between activation functions and kernels. The result is a family of models that can be regarded either as neural networks with improved uncertainty prediction or as deep Gaussian processes with increased prediction accuracy. These claims are supported by experimental results on regression and classification datasets.
Neural network models are known to reinforce hidden data biases, making them unreliable and difficult to interpret. We seek to build models that "know what they do not know" by introducing inductive biases in the function space. We show that periodic activation functions in Bayesian neural networks establish a connection between the prior on the network weights and translation-invariant, stationary Gaussian process priors. Furthermore, we show that this link goes beyond sinusoidal (Fourier) activations by also covering triangular wave and periodic ReLU activation functions. In a series of experiments, we show that periodic activation functions obtain comparable performance on in-domain data, and capture sensitivity to perturbed inputs in deep neural networks for out-of-domain detection.
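A minimal empirical check of the claimed link (illustrative, not the paper's experiments): with Gaussian weights and phases uniform over one period, a triangle-wave unit induces a covariance that depends only on the lag $x - x'$, i.e. a stationary prior.

```python
import jax
import jax.numpy as jnp

def tri(z):                                  # triangle wave with period 2*pi
    return (2.0 / jnp.pi) * jnp.arcsin(jnp.sin(z))

kw, kb = jax.random.split(jax.random.PRNGKey(0))
M = 200000                                   # Monte Carlo sample size
w = jax.random.normal(kw, (M,))              # Gaussian input weights
b = 2 * jnp.pi * jax.random.uniform(kb, (M,))  # phases uniform over a period

def k_hat(x1, x2):                           # Monte Carlo kernel estimate
    return jnp.mean(tri(w * x1 + b) * tri(w * x2 + b))

# Same lag 0.5 at three different absolute locations: the three estimates
# agree (up to Monte Carlo error), evidencing stationarity.
print(k_hat(0.0, 0.5), k_hat(3.0, 3.5), k_hat(-1.2, -0.7))
```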
Linear partial differential equations (PDEs) are an important, widely applied class of mechanistic models, describing physical processes such as heat transfer, electromagnetism, and wave propagation. In practice, specialized numerical methods based on discretization are used to solve PDEs. They generally use an estimate of the unknown model parameters and, if available, physical measurements for initialization. Such solvers are often embedded into larger scientific models or analyses with a downstream application such that error quantification plays a key role. However, by entirely ignoring parameter and measurement uncertainty, classical PDE solvers may fail to produce consistent estimates of their inherent approximation error. In this work, we approach this problem in a principled fashion by interpreting solving linear PDEs as physics-informed Gaussian process (GP) regression. Our framework is based on a key generalization of a widely-applied theorem for conditioning GPs on a finite number of direct observations to observations made via an arbitrary bounded linear operator. Crucially, this probabilistic viewpoint allows us to (1) quantify the inherent discretization error; (2) propagate uncertainty about the model parameters to the solution; and (3) condition on noisy measurements. Demonstrating the strength of this formulation, we prove that it strictly generalizes methods of weighted residuals, a central class of PDE solvers including collocation, finite volume, pseudospectral, and (generalized) Galerkin methods such as finite element and spectral methods. This class can thus be directly equipped with a structured error estimate and the capability to incorporate uncertain model parameters and observations. In summary, our results enable the seamless integration of mechanistic models as modular building blocks into probabilistic models.
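A small sketch of the key generalization under simplifying assumptions (1D, RBF kernel, noise-free collocation): conditioning a GP prior on observations of the bounded linear operator $L = d^2/dx^2$ solves the Poisson problem $u'' = f$ with boundary values, recovering a collocation method whose output is a full posterior rather than a point estimate.

```python
import jax
import jax.numpy as jnp
jax.config.update("jax_enable_x64", True)   # conditioning benefits from float64

ell = 0.2                                    # RBF lengthscale (illustrative)

def k(a, b):                                 # Cov(u(a), u(b))
    r = a[:, None] - b[None, :]
    return jnp.exp(-0.5 * r**2 / ell**2)

def k_L(a, b):                               # Cov(u(a), u''(b)) = d^2/db^2 k
    r = a[:, None] - b[None, :]
    return ((r**2 - ell**2) / ell**4) * jnp.exp(-0.5 * r**2 / ell**2)

def k_LL(a, b):                              # Cov(u''(a), u''(b))
    u2 = (a[:, None] - b[None, :])**2 / ell**2
    return ((u2**2 - 6 * u2 + 3) / ell**4) * jnp.exp(-0.5 * u2)

f = lambda x: -jnp.pi**2 * jnp.sin(jnp.pi * x)     # exact solution: sin(pi x)
xb = jnp.array([0.0, 1.0])                         # boundary: u(0) = u(1) = 0
xc = jnp.linspace(0.0, 1.0, 20)                    # collocation points
y = jnp.concatenate([jnp.zeros(2), f(xc)])         # direct + operator observations

G = jnp.block([[k(xb, xb),   k_L(xb, xc)],
               [k_L(xc, xb), k_LL(xc, xc)]])
xs = jnp.linspace(0.0, 1.0, 101)
Ks = jnp.concatenate([k(xs, xb), k_L(xs, xc)], axis=1)
u = Ks @ jnp.linalg.solve(G + 1e-10 * jnp.eye(len(y)), y)   # posterior mean
print("max error vs sin(pi x):", jnp.abs(u - jnp.sin(jnp.pi * xs)).max())
```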
Despite great progress in simulating multiphysics problems using the numerical discretization of partial differential equations (PDEs), one still cannot seamlessly incorporate noisy data into existing algorithms, mesh generation remains complex, and high-dimensional problems governed by parameterized PDEs cannot be tackled. Moreover, solving inverse problems with hidden physics is often prohibitively expensive and requires different formulations and elaborate computer codes. Machine learning has emerged as a promising alternative, but training deep neural networks requires big data, not always available for scientific problems. Instead, such networks can be trained from additional information obtained by enforcing the physical laws (for example, at random points in the continuous space-time domain). Such physics-informed learning integrates (noisy) data and mathematical models, and implements them through neural networks or other kernel-based regression networks. Moreover, it may be possible to design specialized network architectures that automatically satisfy some of the physical invariants for better accuracy, faster training and improved generalization. Here, we review some of the prevailing trends in embedding physics into machine learning, present some of the current capabilities and limitations and discuss diverse applications of physics-informed learning both for forward and inverse problems, including discovering hidden physics and tackling high-dimensional problems.
Neural operators are a type of deep architecture that learns to solve (i.e., learns the nonlinear solution operator of) partial differential equations (PDEs). The current state of the art for these models does not provide explicit uncertainty quantification. This is arguably more of a problem for this kind of task than elsewhere in machine learning, because the dynamical systems typically described by PDEs often exhibit subtle, multiscale structure that makes errors hard for humans to spot. In this work, we first provide a mathematically detailed Bayesian formulation of the (linear) version of neural operators in the formalism of Gaussian processes. We then extend this analytic treatment to general deep neural operators using approximate methods from Bayesian deep learning, thereby extending previous results on neural operators by equipping them with uncertainty quantification. As a result, our approach is able to identify cases where the neural operator fails to predict well, and to provide structured uncertainty estimates for them.
Large width limits have been a recent focus of deep learning research: modulo computational practicalities, do wider networks outperform narrower ones? Answering this question has been challenging, as conventional networks gain representational power with width, potentially masking any negative effects. Our analysis in this paper decouples capacity and width via the generalization of neural networks to Deep Gaussian Processes (Deep GPs), a class of nonparametric hierarchical models that subsumes neural networks. In doing so, we aim to understand how width affects (standard) neural networks once they have sufficient capacity for a given modeling task. Our theoretical and empirical results on Deep GPs suggest that large width can be detrimental to hierarchical models. Surprisingly, we prove that even nonparametric Deep GPs converge to Gaussian processes, effectively becoming shallower without any increase in representational power. The posterior, which corresponds to a mixture of data-adaptable basis functions, becomes less data-dependent with width. Our tail analysis demonstrates that width and depth have opposite effects: depth accentuates a model's non-Gaussianity, while width makes models increasingly Gaussian. We find there is a "sweet spot" that maximizes test performance before the limiting GP behavior prevents adaptability, occurring at width = 1 or width = 2 for nonparametric Deep GPs. These results make strong predictions about the same phenomenon in conventional neural networks trained with L2 regularization (analogous to a Gaussian prior on parameters): such networks may need up to 500 to 1000 hidden units for sufficient capacity, depending on the dataset, but further width degrades performance.
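The Gaussianization-with-width effect can be seen in a few lines (an illustrative experiment, not the paper's): sample a two-layer deep GP at different hidden widths and measure the excess kurtosis of an output contrast; the narrow model is heavy-tailed (non-Gaussian), while the wide one is nearly Gaussian.

```python
import jax
import jax.numpy as jnp

def rbf(H, ell=1.0):
    # Pairwise RBF Gram over rows of H, distance scaled by dimension so that
    # layers remain comparable across widths.
    d2 = ((H[:, None, :] - H[None, :, :]) ** 2).sum(-1) / H.shape[1]
    return jnp.exp(-0.5 * d2 / ell**2) + 1e-8 * jnp.eye(H.shape[0])

def sample_layer(key, H, width):          # draw `width` i.i.d. GP functions
    L = jnp.linalg.cholesky(rbf(H))
    return L @ jax.random.normal(key, (H.shape[0], width))

def contrast(key, X, width):
    k1, k2 = jax.random.split(key)
    H = sample_layer(k1, X, width)        # hidden layer of the deep GP
    f = sample_layer(k2, H, 1)[:, 0]      # output GP evaluated at both inputs
    return f[0] - f[1]                    # its law is a scale-mixture over H

X = jnp.array([[-1.0], [1.0]])
for width in (1, 64):
    keys = jax.random.split(jax.random.PRNGKey(0), 4000)
    g = jax.vmap(lambda k: contrast(k, X, width))(keys)
    kurt = jnp.mean(g**4) / jnp.mean(g**2) ** 2 - 3.0     # excess kurtosis
    print(f"width {width}: excess kurtosis {kurt:.2f}")   # ~0 when wide
```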
We prove in this paper that wide ReLU neural networks (NNs) with at least one hidden layer, optimized with L2 regularization on the parameters, enjoy multi-task learning due to representation learning, also in the limit of infinite width. This is in contrast to multiple other idealized settings discussed in the literature, in which wide (ReLU) NNs lose their ability to benefit from multi-task learning in the infinite-width limit. We deduce the multi-task learning ability by proving an exact quantitative macroscopic characterization of the learned NN in function space.
Recent years have witnessed a growth in mathematics for deep learning--which seeks a deeper understanding of the concepts of deep learning with mathematics, and explores how to make it more robust--and deep learning for mathematics, where deep learning algorithms are used to solve problems in mathematics. The latter has popularised the field of scientific machine learning where deep learning is applied to problems in scientific computing. Specifically, more and more neural network architectures have been developed to solve specific classes of partial differential equations (PDEs). Such methods exploit properties that are inherent to PDEs and thus solve the PDEs better than classical feed-forward neural networks, recurrent neural networks, and convolutional neural networks. This has had a great impact in the area of mathematical modeling where parametric PDEs are widely used to model most natural and physical processes arising in science and engineering. In this work, we review such methods and extend them for parametric studies as well as for solving the related inverse problems. We equally proceed to show their relevance in some industrial applications.
Physics-Informed Neural Networks (PINNs) are neural networks (NNs) that encode model equations, such as partial differential equations (PDEs), as a component of the neural network itself. PINNs are nowadays used to solve PDEs, fractional equations, integro-differential equations, and stochastic PDEs. This novel methodology has arisen as a multi-task learning framework in which a NN must fit observed data while reducing the PDE residual. This article provides a comprehensive review of the literature on PINNs: the primary goal of the study is to characterize these networks and their related advantages and disadvantages. The review also attempts to incorporate publications on the broader range of collocation-based physics-informed neural networks, comprising the vanilla PINN as well as many of its variants, such as physics-constrained neural networks (PCNNs), variational hp-VPINNs, and conservative PINNs (CPINNs). The study indicates that most research has focused on customizing the PINN through different activation functions, gradient-optimization techniques, neural network structures, and loss-function structures. Despite the wide range of applications for which PINNs have been used, and despite demonstrations that they can be more feasible in some contexts than classical numerical techniques such as the finite element method (FEM), advancements are still possible, most notably on theoretical issues that remain unaddressed.
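For reference, a minimal PINN of the vanilla, collocation-based kind reviewed above (a sketch with illustrative hyperparameters, using optax's Adam for optimization): an MLP is trained to minimize the residual of $u'' = -\pi^2 \sin(\pi x)$ at collocation points plus a boundary penalty, so the learned $u$ approximates $\sin(\pi x)$.

```python
import jax
import jax.numpy as jnp
import optax

def init_mlp(key, sizes=(1, 32, 32, 1)):
    params = []
    for din, dout in zip(sizes[:-1], sizes[1:]):
        key, k = jax.random.split(key)
        params.append((jax.random.normal(k, (din, dout)) / jnp.sqrt(din),
                       jnp.zeros(dout)))
    return params

def u(params, x):                        # scalar in, scalar out
    h = jnp.array([x])
    for W, b in params[:-1]:
        h = jnp.tanh(h @ W + b)
    W, b = params[-1]
    return (h @ W + b)[0]

def residual(params, x):                 # PDE residual u'' + pi^2 sin(pi x)
    uxx = jax.grad(jax.grad(u, argnums=1), argnums=1)(params, x)
    return uxx + jnp.pi**2 * jnp.sin(jnp.pi * x)

def loss(params, xc):
    r = jax.vmap(lambda x: residual(params, x))(xc)
    bc = u(params, 0.0)**2 + u(params, 1.0)**2     # boundary penalty
    return jnp.mean(r**2) + bc

xc = jnp.linspace(0.0, 1.0, 64)          # collocation points
params = init_mlp(jax.random.PRNGKey(0))
opt = optax.adam(1e-3)
state = opt.init(params)

@jax.jit
def step(params, state):
    g = jax.grad(loss)(params, xc)
    updates, state = opt.update(g, state)
    return optax.apply_updates(params, updates), state

for _ in range(5000):
    params, state = step(params, state)
print(u(params, 0.5), "vs exact", jnp.sin(jnp.pi * 0.5))
```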
We analyze feature learning in infinite-width neural networks trained with gradient flow through a self-consistent dynamical field theory. We construct a collection of deterministic dynamical order parameters which are inner-product kernels, for the hidden-unit activations and gradients in each layer at pairs of time points, providing a reduced description of network activity through training. These kernel order parameters collectively define the hidden-layer activation distribution, the evolution of the neural tangent kernel, and consequently the output predictions. We show that the field-theoretic derivation recovers the recursive stochastic process of infinite-width feature-learning networks obtained by Yang and Hu (2021) with tensor programs. For deep linear networks, these kernels satisfy a set of algebraic matrix equations. For nonlinear networks, we provide an alternating sampling procedure to solve self-consistently for the kernel order parameters. We provide comparisons of the self-consistent solution with various approximation schemes. Finally, we provide experiments in more realistic settings which demonstrate that the loss and kernel dynamics of CNNs on a CIFAR classification task are preserved across different widths.
The neural tangent kernel is a kernel function defined over the parameter distribution of an infinite-width neural network. Even though the limit is impractical, the neural tangent kernel has allowed a more direct study of neural networks and a gaze through the veil of their black box. More recently, it has been shown theoretically that the Laplace kernel and the neural tangent kernel share the same reproducing kernel Hilbert space on $\mathbb{S}^{d-1}$, alluding to their equivalence. In this work, we analyze the practical equivalence of the two kernels. We do so first by matching the kernels exactly, and then by matching posteriors of a Gaussian process. Moreover, we analyze the kernels in $\mathbb{R}^d$ and experiment with them in the task of regression.
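A rough sketch of this kind of practical comparison (the architecture and Laplace bandwidth are illustrative choices, and an empirical finite-width NTK stands in for the exact infinite-width one): compute a Jacobian-based NTK Gram on points of $\mathbb{S}^1$ and compare its normalized form with a Laplace kernel.

```python
import jax
import jax.numpy as jnp
from jax.flatten_util import ravel_pytree

def init(key, sizes=(2, 512, 512, 1)):
    params = []
    for din, dout in zip(sizes[:-1], sizes[1:]):
        key, k = jax.random.split(key)
        params.append((jax.random.normal(k, (din, dout)) * jnp.sqrt(2.0 / din),
                       jnp.zeros(dout)))
    return params

def f(params, x):                        # ReLU MLP, scalar output
    h = x
    for W, b in params[:-1]:
        h = jax.nn.relu(h @ W + b)
    W, b = params[-1]
    return (h @ W + b)[0]

params = init(jax.random.PRNGKey(0))

def grad_vec(x):                         # parameter gradient, flattened
    return ravel_pytree(jax.grad(f)(params, x))[0]

theta = jnp.linspace(0, 2 * jnp.pi, 20, endpoint=False)
X = jnp.stack([jnp.cos(theta), jnp.sin(theta)], axis=1)   # points on S^1
J = jax.vmap(grad_vec)(X)
ntk = J @ J.T                            # empirical NTK Gram
lap = jnp.exp(-2.0 * jnp.linalg.norm(X[:, None] - X[None], axis=-1))

norm = lambda K: K / jnp.sqrt(jnp.outer(jnp.diag(K), jnp.diag(K)))
print(jnp.abs(norm(ntk) - norm(lap)).max())   # close, but not identical
```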
Deep neural networks (DNNs) are powerful tools for compressing and distilling information. Because of their scale and complexity, often involving billions of interacting internal degrees of freedom, exact analytical approaches often fall short. A common strategy in such cases is to identify slow degrees of freedom that average out the erratic behavior of the underlying fast microscopic variables. Here, we identify such a separation of scales occurring in over-parameterized deep convolutional neural networks (CNNs) at the end of training. It implies that neuron pre-activations fluctuate in a nearly Gaussian manner with a deterministic latent kernel. For CNNs with infinitely many channels these kernels are inert, whereas for finite CNNs they adapt to and learn from the data in an analytically tractable manner. The resulting thermodynamic theory of deep learning yields accurate predictions on several deep nonlinear CNN toy models. In addition, it provides new ways of analyzing and understanding CNNs.
Physical modeling is critical for many modern science and engineering applications. From a data science or machine learning perspective, where more domain-agnostic, data-driven models are pervasive, physical knowledge, often expressed as differential equations, is valuable in that it is complementary to data and can potentially help overcome issues such as data sparsity, noise, and inaccuracy. In this work, we propose a simple yet powerful and general framework, AutoIP (Automatically Incorporating Physics), that can integrate all kinds of differential equations into Gaussian processes (GPs) to enhance prediction accuracy and uncertainty quantification. These equations can be linear or nonlinear, spatial, temporal, or spatio-temporal, complete or incomplete with unknown source terms, and so on. Based on kernel differentiation, we construct a GP prior over the values of the target function, the equation-related derivatives, and the latent source functions, all of which are drawn jointly from a multivariate Gaussian distribution. The sampled values are fed to two likelihoods: one to fit the observations and the other to conform to the equation. We use the whitening method to evade the strong dependency between the sampled function values and the kernel parameters, and we develop a stochastic variational learning algorithm. In both simulation and several real-world applications, AutoIP shows improvement over vanilla GPs, even when using rough, incomplete equations.
We introduce the class of Hida-Matérn kernels, the canonical family of covariance functions over the entire space of stationary Gauss-Markov processes. It extends upon Matérn kernels by allowing for the flexible construction of priors over processes with oscillatory components. Any stationary kernel, including the widely used squared-exponential and spectral mixture kernels, is either directly within this class or is an appropriate asymptotic limit, demonstrating the generality of this class. Taking advantage of its Markovian nature, we show how to represent such processes as state-space models using only the kernel and its derivatives. In turn, this allows us to perform Gaussian process inference more efficiently and to side-step the usual computational burdens. We also show how exploiting special properties of the state-space representation enables further reductions of computational complexity.
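A minimal sketch of the state-space route (the standard Matérn-3/2 construction rather than the full Hida-Matérn family; parameter values are illustrative): the kernel's SDE representation gives a two-dimensional stationary Gauss-Markov state, and Kalman filtering performs GP inference in O(n).

```python
import jax
import jax.numpy as jnp
from jax.scipy.linalg import expm

# Matern-3/2 GP as a state-space model with state (f, f').
ell, sf2, sn2 = 0.5, 1.0, 0.01           # lengthscale, signal var, noise var
lam = jnp.sqrt(3.0) / ell
F = jnp.array([[0.0, 1.0], [-lam**2, -2.0 * lam]])
Pinf = sf2 * jnp.diag(jnp.array([1.0, lam**2]))   # stationary state covariance
H = jnp.array([[1.0, 0.0]])

def kf_step(carry, inputs):
    m, P = carry
    dt, y = inputs
    A = expm(F * dt)                     # exact discretization over the gap dt
    Q = Pinf - A @ Pinf @ A.T
    m, P = A @ m, A @ P @ A.T + Q        # predict
    S = (H @ P @ H.T + sn2)[0, 0]
    K = P @ H.T / S                      # Kalman gain
    m = m + K * (y - (H @ m)[0, 0])      # update on the observation
    P = P - K @ H @ P
    return (m, P), (m[0, 0], P[0, 0])    # filtered mean and variance of f

t = jnp.sort(jax.random.uniform(jax.random.PRNGKey(0), (100,)) * 10.0)
y = jnp.sin(t) + 0.1 * jax.random.normal(jax.random.PRNGKey(1), (100,))
dts = jnp.diff(jnp.concatenate([jnp.zeros(1), t]))
init = (jnp.zeros((2, 1)), Pinf)         # start from the stationary prior
_, (means, variances) = jax.lax.scan(kf_step, init, (dts, y))
```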