Effective inclusion of physics-based knowledge into deep neural network models of dynamical systems can greatly improve data efficiency and generalization. Such a-priori knowledge might arise from physical principles (e.g., conservation laws) or from the system's design (e.g., the Jacobian matrix of a robot), even if large portions of the system dynamics remain unknown. We develop a framework to learn dynamics models from trajectory data while incorporating a-priori system knowledge as inductive bias. More specifically, the proposed framework uses physics-based side information to inform the structure of the neural network itself, and to place constraints on the values of the outputs and the internal states of the model. It represents the system's vector field as a composition of known and unknown functions, the latter of which are parametrized by neural networks. The physics-informed constraints are enforced via the augmented Lagrangian method during the model's training. We experimentally demonstrate the benefits of the proposed approach on a variety of dynamical systems -- including a benchmark suite of robotics environments featuring large state spaces, non-linear dynamics, external forces, contact forces, and control inputs. By exploiting a-priori system knowledge during training, the proposed approach learns to predict the system dynamics two orders of magnitude more accurately than a baseline approach that does not include prior knowledge, given the same training dataset.
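To make the two main ingredients above concrete -- a vector field composed of known and unknown terms, and constraints enforced with the augmented Lagrangian method -- the following PyTorch sketch trains a neural residual on top of a placeholder known-physics term under an inequality constraint on the predicted derivatives. The constraint, the penalty schedule, and all names are illustrative assumptions, not the paper's implementation.

```python
# Hedged sketch: hybrid vector field (known physics + neural residual) trained
# with an augmented-Lagrangian treatment of an inequality constraint g(xdot) <= 0.
import torch
import torch.nn as nn

class HybridVectorField(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.residual = nn.Sequential(nn.Linear(dim, 64), nn.Tanh(), nn.Linear(64, dim))

    def known_physics(self, x):
        return -x  # placeholder for the a-priori known portion of the dynamics

    def forward(self, x):
        return self.known_physics(x) + self.residual(x)

def constraint_fn(x_dot):
    # example side information: predicted derivatives should satisfy ||x_dot|| <= 10
    return x_dot.norm(dim=-1) - 10.0

model = HybridVectorField(dim=2)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
lam, rho = 0.0, 10.0                                       # multiplier estimate, penalty weight
x, x_dot_true = torch.randn(128, 2), torch.randn(128, 2)   # stand-in training data

for outer in range(5):                                     # dual (multiplier) iterations
    for inner in range(100):                               # primal minimization
        pred = model(x)
        c = torch.relu(constraint_fn(pred)).mean()         # average constraint violation
        loss = (pred - x_dot_true).pow(2).mean() + lam * c + 0.5 * rho * c ** 2
        opt.zero_grad(); loss.backward(); opt.step()
    with torch.no_grad():
        c = torch.relu(constraint_fn(model(x))).mean()
        lam += rho * float(c)                              # augmented-Lagrangian multiplier update
```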
Many dynamical systems -- from robots interacting with their surroundings to large-scale multiphysics systems -- involve a number of interacting subsystems. Toward the objective of learning composite models of such systems from data, we present i) a framework for compositional neural networks, ii) algorithms to train these models, iii) a method to compose the learned models, iv) theoretical results that bound the error of the resulting composite models, and v) a method to learn the composition itself, when it is not known a priori. The end result is a modular approach to learning: neural network submodels are trained on trajectory data generated by relatively simple subsystems, and the dynamics of more complex composite systems are then predicted without requiring additional data generated by the composite systems themselves. We achieve this compositionality by representing the system of interest, as well as each of its subsystems, as a port-Hamiltonian neural network (PHNN) -- a class of neural ordinary differential equations that uses the port-Hamiltonian systems formulation as inductive bias. We compose collections of PHNNs by using the system's physics-informed interconnection structure, which may be known a priori, or may itself be learned from data. We demonstrate the novel capabilities of the proposed framework through numerical examples involving interacting spring-mass-damper systems. Models of these systems, which include nonlinear energy dissipation and control inputs, are learned independently. Accurate compositions are learned using an amount of training data that is negligible in comparison with that required to train a new model from scratch. Finally, we observe that the composite PHNNs enjoy properties of port-Hamiltonian systems, such as cyclo-passivity -- a property that is useful for control purposes.
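For reference, the port-Hamiltonian structure used as inductive bias takes the form $\dot{x} = (J(x) - R(x))\nabla H(x) + G(x)u$, with $J$ skew-symmetric and $R$ positive semidefinite. Below is a minimal stand-in for a PHNN in which $H$ is a small network and constant $J$, $R$, $G$ are assumed for simplicity; it is not the architecture from the paper.

```python
# Minimal port-Hamiltonian vector field: xdot = (J - R) * grad H(x) + G u,
# with H parametrized by a neural network. J is skew-symmetric and R is PSD
# by construction; constant matrices are a simplifying assumption.
import torch
import torch.nn as nn

class PHNN(nn.Module):
    def __init__(self, dim, input_dim):
        super().__init__()
        self.H = nn.Sequential(nn.Linear(dim, 64), nn.Tanh(), nn.Linear(64, 1))
        self.J_raw = nn.Parameter(torch.randn(dim, dim) * 0.1)    # interconnection
        self.R_raw = nn.Parameter(torch.randn(dim, dim) * 0.1)    # dissipation
        self.G = nn.Parameter(torch.randn(dim, input_dim) * 0.1)  # input matrix

    def forward(self, x, u):
        x = x.requires_grad_(True)
        dH = torch.autograd.grad(self.H(x).sum(), x, create_graph=True)[0]  # grad H(x)
        J = self.J_raw - self.J_raw.T          # skew-symmetric
        R = self.R_raw @ self.R_raw.T          # positive semidefinite
        return dH @ (J - R).T + u @ self.G.T

phnn = PHNN(dim=2, input_dim=1)
x, u = torch.randn(8, 2), torch.randn(8, 1)
x_dot = phnn(x, u)   # predicted derivatives, to be fit to trajectory data
```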
The past few years have witnessed an increased interest in incorporating physics-informed inductive biases into deep learning frameworks. In particular, a growing volume of literature has been exploring ways to enforce energy conservation while using neural networks to learn dynamics from observed time-series data. In this work, we survey recently proposed energy-conserving neural network models, including HNN, LNN, DeLaN, SymODEN, CHNN, CLNN, and their variants. We provide a compact derivation of the theory behind these models and explain their similarities and differences. Their performance is compared on four physical systems. We point out the possibility of leveraging some of these energy-conserving models to design energy-based controllers.
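As background for the models compared here, a minimal Hamiltonian neural network (HNN) can be sketched as follows: a scalar $H(q, p)$ is parametrized by an MLP, and the vector field follows from Hamilton's equations $\dot{q} = \partial H/\partial p$, $\dot{p} = -\partial H/\partial q$, so energy is conserved by construction (up to integration error). This is a simplified illustration, not the exact implementation of any surveyed model.

```python
# Minimal HNN-style model: Hamilton's equations applied to a learned scalar H(q, p).
import torch
import torch.nn as nn

class HNN(nn.Module):
    def __init__(self, dim):  # dim = number of position coordinates
        super().__init__()
        self.H = nn.Sequential(nn.Linear(2 * dim, 128), nn.Tanh(), nn.Linear(128, 1))
        self.dim = dim

    def forward(self, qp):
        qp = qp.requires_grad_(True)
        dH = torch.autograd.grad(self.H(qp).sum(), qp, create_graph=True)[0]
        dHdq, dHdp = dH[..., :self.dim], dH[..., self.dim:]
        return torch.cat([dHdp, -dHdq], dim=-1)  # (dq/dt, dp/dt)

model = HNN(dim=1)
qp = torch.randn(16, 2)   # batch of (q, p) states
qp_dot = model(qp)        # predicted time derivatives, to be fit to data
```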
Accurate models of robot dynamics are critical for safe and stable control and for generalization to novel operating conditions. Hand-designed models, however, may be insufficiently accurate, even after careful parameter tuning. This motivates the use of machine learning techniques to approximate the robot dynamics over a training set of state-control trajectories. The dynamics of many robots, including ground, aerial, and underwater vehicles, are described in terms of their SE(3) pose and generalized velocity, and satisfy conservation of energy principles. This paper proposes a Hamiltonian formulation on the SE(3) manifold within a neural ordinary differential equation (ODE) network structure to approximate the dynamics of a rigid body. In contrast to a black-box ODE network, our formulation guarantees total energy conservation by construction. We develop energy shaping and damping injection control for the learned SE(3) Hamiltonian dynamics to enable a unified approach for stabilization and trajectory tracking with various platforms, including pendulum, rigid-body, and quadrotor systems.
Dynamical systems see widespread use in the natural sciences such as physics, biology, and chemistry, as well as in engineering disciplines such as circuit analysis, computational fluid dynamics, and control. For simple systems, the differential equations governing the dynamics can be derived by applying fundamental physical laws. For more complex systems, however, this approach becomes exceedingly difficult. Data-driven modeling is an alternative paradigm that seeks to learn an approximation of the dynamics of a system using observations of the true system. In recent years, there has been an increased interest in data-driven modeling techniques; in particular, neural networks have proven to provide an effective framework for solving a wide range of tasks. This paper provides a survey of the different ways to construct models of dynamical systems using neural networks. In addition to the basic overview, we review the related literature and outline the most significant challenges from numerical simulation that this modeling paradigm must overcome. Based on the reviewed literature and identified challenges, we provide a discussion on promising research areas.
Forecasting complex dynamical phenomena in settings where only partial knowledge of their dynamics is available is a prevalent problem across various scientific fields. While purely data-driven approaches are arguably insufficient in this context, standard physics-based modeling approaches are often overly simplistic, inducing non-negligible errors. In this work, we introduce the APHYNITY framework, a principled approach for augmenting incomplete physical dynamics described by differential equations with deep data-driven models. It consists in decomposing the dynamics into two components: a physical component accounting for the dynamics for which we have some prior knowledge, and a data-driven component accounting for the errors of the physical model. The learning problem is carefully formulated such that the physical model explains as much of the data as possible, while the data-driven component only describes the information that cannot be captured by the physical model, and no more. This not only provides existence and uniqueness of the decomposition, but also ensures interpretability and benefits generalization. Experiments conducted on three important use cases, each representative of a different phenomenon, namely a reaction-diffusion equation, a wave equation, and a nonlinear damped pendulum, show that APHYNITY can efficiently leverage approximate physical models to accurately forecast the evolution of the system and correctly identify relevant physical parameters.
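A hedged sketch of the decomposition described above: the predicted derivative is the sum of a parametric physical term and a neural augmentation, and training trades off data fit against the magnitude of the augmentation so that the physics explains as much of the data as possible. The frictionless-pendulum prior, the soft penalty (a simplification of the carefully formulated learning problem described above), and the weight value are illustrative assumptions.

```python
# Sketch: x_dot is modeled as F_phys(x; omega^2) + F_aug(x), with a penalty on
# ||F_aug|| so that the physical component explains as much as possible.
import torch
import torch.nn as nn

class PendulumPrior(nn.Module):
    def __init__(self):
        super().__init__()
        self.omega2 = nn.Parameter(torch.tensor(1.0))        # unknown physical parameter
    def forward(self, x):                                     # x = (theta, theta_dot)
        theta, theta_dot = x[..., :1], x[..., 1:]
        return torch.cat([theta_dot, -self.omega2 * torch.sin(theta)], dim=-1)

f_phys = PendulumPrior()
f_aug = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 2))  # model-error term

x, x_dot_obs = torch.randn(256, 2), torch.randn(256, 2)               # stand-in data
opt = torch.optim.Adam(list(f_phys.parameters()) + list(f_aug.parameters()), lr=1e-3)
lambda_aug = 1e-2                                                     # augmentation-norm weight

for step in range(200):
    pred = f_phys(x) + f_aug(x)
    loss = (pred - x_dot_obs).pow(2).mean() + lambda_aug * f_aug(x).pow(2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```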
Deep neural networks have emerged as the workhorse for a large section of robotics and control applications, especially as models for dynamical systems. Such data-driven models are in turn used for designing and verifying autonomous systems. This is particularly useful in modeling medical systems where data can be leveraged to individualize treatment. In safety-critical applications, it is important that the data-driven model is conformant to established knowledge from the natural sciences. Such knowledge is often available or can often be distilled into a (possibly black-box) model $M$; for instance, the unicycle model for an F1 racing car. In this light, we consider the following problem: given a model $M$ and a state transition dataset, we wish to best approximate the system model while remaining a bounded distance away from $M$. We propose a method to guarantee this conformance. Our first step is to distill the dataset into a few representative samples called memories, using the idea of a growing neural gas. Next, using these memories, we partition the state space into disjoint subsets and compute bounds that should be respected by the neural network when the input is drawn from a particular subset. This serves as a symbolic wrapper for guaranteed conformance. We argue theoretically that this only leads to a bounded increase in approximation error, which can be controlled by increasing the number of memories. We experimentally show that on three case studies (Car Model, Drones, and Artificial Pancreas), our constrained neurosymbolic models conform to specified $M$ models (each encoding various constraints) with order-of-magnitude improvements compared to the augmented Lagrangian and vanilla training methods.
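The sketch below illustrates the symbolic-wrapper idea under stated assumptions: a fixed set of memories partitions the state space by nearest neighbour, and the network output is clamped to per-region bounds (random stand-ins here) so that predictions respect them at inference time. The growing-neural-gas construction of memories and the derivation of the bounds from $M$ are not reproduced.

```python
# Hedged sketch of a symbolic wrapper: clamp network outputs to bounds associated
# with the nearest "memory", guaranteeing the outputs stay inside those bounds.
import torch
import torch.nn as nn

memories = torch.randn(16, 4)      # representative states, one per region (stand-ins)
lower = -torch.ones(16, 4)         # per-region lower bounds, assumed derived from M
upper = torch.ones(16, 4)          # per-region upper bounds, assumed derived from M
net = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 4))

def conformant_forward(x):
    region = torch.cdist(x, memories).argmin(dim=1)   # nearest memory defines the region
    raw = net(x)
    return torch.clamp(raw, min=lower[region], max=upper[region])

x = torch.randn(32, 4)
x_next = conformant_forward(x)     # predictions guaranteed to respect the region bounds
```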
Physics-Informed Neural Networks (PINNs) are neural networks (NNs) that encode model equations, such as partial differential equations (PDEs), as a component of the neural network itself. PINNs are nowadays used to solve PDEs, fractional equations, integro-differential equations, and stochastic PDEs. This novel methodology has arisen as a multi-task learning framework in which a NN must fit observed data while reducing a PDE residual. This article provides a comprehensive review of the literature on PINNs: while the primary goal of the study is to characterize these networks and their related advantages and disadvantages, the review also attempts to incorporate publications on a broader range of collocation-based physics-informed neural networks, which constitute the vanilla PINN as well as many other variants, such as physics-constrained neural networks (PCNNs), variational hp-VPINNs, and conservative PINNs (CPINNs). The study indicates that most research has focused on customizing the PINN through different activation functions, gradient optimization techniques, neural network structures, and loss function structures. Despite the wide range of applications for which PINNs have been used, and despite demonstrations that they can be more feasible in some contexts than classical numerical techniques such as the Finite Element Method (FEM), there is still room for advancement, most notably on theoretical issues that remain unresolved.
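To make the collocation idea concrete, here is a minimal vanilla-PINN-style sketch for the 1D heat equation $u_t = \nu u_{xx}$ (chosen purely for illustration): the loss combines a data-fitting term with the PDE residual evaluated at collocation points via automatic differentiation.

```python
# Minimal PINN-style loss: data misfit + PDE residual at collocation points.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 64), nn.Tanh(), nn.Linear(64, 1))
nu = 0.1  # illustrative diffusivity

def pde_residual(tx):                         # tx columns: (t, x)
    tx = tx.requires_grad_(True)
    u = net(tx)
    grads = torch.autograd.grad(u.sum(), tx, create_graph=True)[0]
    u_t, u_x = grads[:, :1], grads[:, 1:]
    u_xx = torch.autograd.grad(u_x.sum(), tx, create_graph=True)[0][:, 1:]
    return u_t - nu * u_xx

tx_data, u_data = torch.rand(64, 2), torch.rand(64, 1)   # stand-in observations
tx_col = torch.rand(512, 2)                              # collocation points
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(100):
    loss = (net(tx_data) - u_data).pow(2).mean() + pde_residual(tx_col).pow(2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```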
Recently, graph neural networks have been gaining a lot of attention for simulating dynamical systems, owing to their inductive nature, which leads to zero-shot generalizability. Similarly, physics-informed inductive biases in deep-learning frameworks have been shown to give superior performance in learning the dynamics of physical systems. There is a growing volume of literature that attempts to combine these two approaches. Here, we evaluate the performance of thirteen different graph neural networks, namely, Hamiltonian and Lagrangian graph neural networks, graph neural ODE, and their variants with explicit constraints and different architectures. We briefly explain the theoretical formulation, highlighting the similarities and differences in the inductive biases and graph architecture of these systems. We evaluate these models on spring, pendulum, gravitational, and 3D deformable solid systems to compare the performance in terms of rollout error, conserved quantities such as energy and momentum, and generalizability to unseen system sizes. Our study demonstrates that GNNs with additional inductive biases, such as explicit constraints and decoupling of kinetic and potential energies, exhibit significantly enhanced performance. Further, all the physics-informed GNNs exhibit zero-shot generalizability to system sizes an order of magnitude larger than the training system, thus providing a promising route to simulate large-scale realistic systems.
Incorporating appropriate inductive biases plays a critical role in learning dynamics from data. A growing body of work has been exploring ways to enforce energy conservation in the learned dynamics by encoding Lagrangian or Hamiltonian dynamics into the neural network architecture. These existing approaches are based on differential equations, which do not allow discontinuities in the states, thereby limiting the class of systems one can learn. In reality, however, most physical systems, such as legged robots and robotic manipulators, involve contacts and collisions, which introduce discontinuities in the states. In this paper, we introduce a differentiable contact model that can capture contact mechanics: frictionless/frictional as well as elastic/inelastic. This model can also accommodate inequality constraints, such as limits on joint angles. The proposed contact model extends the scope of Lagrangian and Hamiltonian neural networks by allowing simultaneous learning of contact and system properties. We demonstrate this framework on a series of challenging 2D and 3D physical systems with different coefficients of restitution and friction. The learned dynamics can be used as a differentiable physics simulator for downstream gradient-based optimization tasks, such as planning and control.
The application of deep learning methods to speed up the resolution of challenging power flow problems has recently shown very encouraging results. However, power system dynamics are not snapshot, steady-state operations. These dynamics must be considered to ensure that the optimal solutions provided by these models adhere to practical dynamical constraints, avoiding frequency fluctuations and grid instabilities. Unfortunately, dynamic system models based on ordinary or partial differential equations are frequently unsuitable for direct application in control or state estimation due to their high computational costs. To address these challenges, this paper introduces a machine learning method to approximate the behavior of power system dynamics in near real time. The proposed framework is based on gradient-enhanced physics-informed neural networks (gPINNs) and encodes the underlying physical laws governing power systems. A key characteristic of the proposed gPINN is its ability to train without the need to generate expensive training data. The paper illustrates the potential of the proposed approach in both forward and inverse problems, predicting rotor angles and frequency in a single-machine infinite-bus system, as well as uncertain parameters such as inertia and damping, to showcase its potential for a range of power systems applications.
Deep learning models are able to approximate one specific dynamical system but struggle at learning generalizable dynamics, where dynamical systems obey the same laws of physics but contain different numbers of elements (e.g., double- and triple-pendulum systems). To relieve this issue, we propose the Modular Lagrangian Network (ModLaNet), a structural neural network framework with modularity and physical inductive bias. This framework models the energy of each element using modularity and then constructs the target dynamical system via Lagrangian mechanics. Modularity is beneficial for reusing trained networks and reducing the scale of the networks and datasets. As a result, our framework can learn from the dynamics of simpler systems and extend to more complex ones, which is not feasible using other relevant physics-informed neural networks. We examine our framework for modeling double-pendulum and three-body systems with small training datasets, where our models achieve the best data efficiency and accuracy compared with their counterparts. We also reorganize our models as extensions for modeling multi-pendulum and multi-body systems, demonstrating the reusability of our framework.
Relying on recent research results on Neural ODEs, this paper presents a methodology for the design of state observers for nonlinear systems based on Neural ODEs, learning Luenberger-like observers and their nonlinear extension (Kazantzis-Kravaris-Luenberger (KKL) observers) for systems with partially-known nonlinear dynamics and fully unknown nonlinear dynamics, respectively. In particular, for tuneable KKL observers, the relationship between the design of the observer and its trade-off between convergence speed and robustness is analysed and used as a basis for improving the robustness of the learning-based observer in training. We illustrate the advantages of this approach in numerical simulations.
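A minimal sketch of the observer structure referred to above, under simplifying assumptions: a linear filter $\dot{z} = Az + By$ driven by the measurement $y$, followed by a learned map from the filter state $z$ back to a state estimate. The matrices, the explicit-Euler discretization, and the network are illustrative stand-ins for the learned design.

```python
# KKL-style observer sketch: linear filter in z, neural map from z to the state estimate.
import torch
import torch.nn as nn

nz, ny, nx = 6, 1, 2
A = -torch.diag(torch.arange(1.0, nz + 1))   # a Hurwitz matrix (illustrative choice)
B = torch.ones(nz, ny)
T_inv = nn.Sequential(nn.Linear(nz, 64), nn.Tanh(), nn.Linear(64, nx))  # learned z -> x map

def observer_rollout(y_seq, dt=0.01):
    z = torch.zeros(nz)
    estimates = []
    for y in y_seq:                           # explicit Euler step of dz/dt = A z + B y
        z = z + dt * (A @ z + B @ y)
        estimates.append(T_inv(z))
    return torch.stack(estimates)

y_seq = torch.randn(100, ny)                  # stand-in measurement sequence
x_hat = observer_rollout(y_seq)               # state estimates over the horizon
```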
Neural ordinary differential equations (NODEs) -- parametrizations of differential equations using neural networks -- have shown tremendous promise in learning models of unknown continuous-time dynamical systems from data. However, every forward evaluation of a NODE requires numerical integration of the neural network used to capture the system dynamics, making their training prohibitively expensive. Existing works rely on off-the-shelf adaptive step-size numerical integration schemes, which often require an excessive number of evaluations of the underlying dynamics network to obtain sufficient accuracy for training. By contrast, we accelerate the evaluation and the training of NODEs by proposing a data-driven approach to their numerical integration. The proposed Taylor-Lagrange NODEs (TL-NODEs) use a fixed-order Taylor expansion for numerical integration, while also learning to estimate the expansion's approximation error. As a result, the proposed approach achieves the same accuracy as adaptive step-size schemes while employing only low-order Taylor expansions, thus greatly reducing the computational cost necessary to integrate the NODE. A suite of numerical experiments, including modeling dynamical systems, image classification, and density estimation, demonstrate that TL-NODEs can be trained more than an order of magnitude faster than state-of-the-art approaches, without any loss in performance.
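The sketch below illustrates a fixed-order Taylor step with a learned correction, in the spirit of the approach described above: one step combines first- and second-order Taylor terms (the latter via a Jacobian-vector product) with a small network that estimates the truncation error. The expansion order, the error-network inputs, and all names are assumptions, not the TL-NODE implementation.

```python
# Fixed-order Taylor step with a learned remainder estimate (illustrative).
import torch
import torch.nn as nn

f = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 2))        # dynamics network
err_net = nn.Sequential(nn.Linear(3, 32), nn.Tanh(), nn.Linear(32, 2))  # learned remainder

def taylor_step(x, h):
    fx = f(x)
    # d/dt f(x(t)) along the flow equals J_f(x) @ f(x); compute it as a JVP
    _, jvp = torch.autograd.functional.jvp(f, x, fx, create_graph=True)
    remainder = err_net(torch.cat([x, torch.full_like(x[..., :1], h)], dim=-1))
    return x + h * fx + 0.5 * h ** 2 * jvp + h ** 3 * remainder

x = torch.randn(8, 2)              # batch of states
x_next = taylor_step(x, h=0.05)    # one integration step of the learned dynamics
```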
In many real-world settings, image observations of a freely rotating 3D rigid body (such as a satellite) may be available when low-dimensional measurements are not. However, the high dimensionality of image data precludes the use of classical estimation techniques for learning the dynamics, and the lack of interpretability reduces the usefulness of standard deep learning methods. In this work, we present a physics-informed neural network model to estimate and predict 3D rotational dynamics from image sequences. We achieve this using a multi-stage prediction pipeline that maps individual images to a latent representation homeomorphic to $\mathbf{SO}(3)$, computes angular velocities from latent pairs, and predicts future latent states using the Hamiltonian equations of motion with a learned representation of the Hamiltonian. We demonstrate the efficacy of our approach on a new rotating rigid-body dataset with sequences of rotating cubes and rectangular prisms with uniform and non-uniform density.
Learning physically structured representations of dynamical systems that include contact between different objects is an important problem for learning-based approaches in robotics. Black-box neural networks can learn to approximately represent discontinuous dynamics, but they typically require large quantities of data and often suffer from pathological behaviour when forecasting over longer time horizons. In this work, we use connections between deep neural networks and differential equations to design a family of deep network architectures for representing contact dynamics between objects. We show that these networks can learn discontinuous contact events in a data-efficient manner from noisy observations in settings that are traditionally difficult for black-box approaches and recent physics-inspired neural networks. Our results indicate that an idealised form of touch feedback (which is heavily relied upon by biological systems) is a key component of making this learning problem tractable. Together with the inductive biases introduced through the network architectures, our technique enables accurate learning of contact dynamics from observations.
The development of data-informed predictive models for dynamical systems is of widespread interest in many disciplines. We present a unifying framework for blending mechanistic and machine-learning approaches to identify dynamical systems from noisy and partially observed data. We compare pure data-driven learning with hybrid models that incorporate imperfect domain knowledge. Our formulation is agnostic to the chosen machine learning model, is presented in both continuous- and discrete-time settings, and is compatible with model errors that exhibit substantial memory as well as those that are memoryless. First, we study memoryless linear (w.r.t. parametric dependence) model error from a learning-theory perspective, defining excess risk and generalization error. For ergodic continuous-time systems, we prove that both excess risk and generalization error are bounded by terms involving the square root of T, the time interval over which the training data are specified. Second, we study scenarios that benefit from modeling with memory, proving universal approximation theorems for two classes of continuous-time recurrent neural networks (RNNs): both can learn memory-dependent model error. In addition, we connect one class of RNNs to reservoir computing, thereby relating the learning of memory-dependent error to recent work on supervised learning between Banach spaces using random features. Numerical results are presented (Lorenz '63 and Lorenz '96 multiscale systems) to compare purely data-driven and hybrid approaches, finding that hybrid methods are less data-hungry and more effective. Finally, we demonstrate numerically how data assimilation can be leveraged to learn hidden dynamics from noisy, partially observed data, and illustrate the challenges of representing memory by this approach, and of training such models.
Learning dynamics is at the heart of many important applications of machine learning (ML), such as robotics and autonomous driving. In these settings, ML algorithms typically need to reason about a physical system using high-dimensional observations, such as images, without access to the underlying state. Recently, several methods have been proposed to integrate priors from classical mechanics into ML models in order to address the challenge of physical reasoning from images. In this work, we take a sober look at the current capabilities of these models. To this end, we introduce a suite consisting of 17 datasets based on visual observations of physical systems exhibiting a wide variety of dynamics. We conduct a thorough and detailed comparison of the major classes of physics-inspired methods against several strong baselines. While models that incorporate physical priors can often learn latent spaces with desirable properties, our results demonstrate that these methods fail to significantly improve upon standard techniques. Nonetheless, we find that the use of continuous and time-reversible dynamics benefits models of all classes.
With the ever-increasing availability of data, there has been an explosion of interest in applying modern machine learning methods to fields such as modeling and control. However, despite the flexibility and surprising accuracy of such black-box models, it remains difficult to trust them. Recent efforts to combine the two approaches aim to develop flexible models that nonetheless generalize well; a paradigm we refer to as hybrid analysis and modeling (HAM). In this work, we investigate the corrective source term approach (CoSTA), which uses a data-driven model to correct the errors of a physics-based model. This enables us to develop models that can make accurate predictions even when the underlying physics of the problem is not fully understood. We apply CoSTA to model the Hall-Héroult process in an aluminum electrolysis cell. We demonstrate that the method improves both accuracy and predictive stability, yielding an overall more trustworthy model.
Data-driven approaches to modeling physical systems fail to generalize to unseen systems that share the same general dynamics with the learning domain but correspond to different physical contexts. We propose a new framework for this key problem, context-informed dynamics adaptation (CoDA), which takes into account the distributional shift across systems for fast and efficient adaptation to new dynamics. CoDA leverages multiple environments, each associated with different dynamics, and learns to condition the dynamics model on contextual parameters specific to each environment. The conditioning is performed via a hypernetwork, learned jointly with a context vector from observed data. The proposed formulation constrains the hypothesis search space to foster fast adaptation and better generalization across environments. We theoretically motivate our approach and show state-of-the-art generalization results on a set of nonlinear dynamics, representative of a variety of application domains. We also show, on these systems, that new system parameters can be inferred from context vectors with minimal supervision.
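An illustrative sketch of context-conditioned dynamics in the spirit of CoDA: a shared dynamics network receives a low-rank parameter shift $W\xi_e$, where $\xi_e$ is a per-environment context vector and $W$ is shared across environments, both learned jointly. The dimensions, the linear hypernetwork, and the use of torch.func.functional_call are assumptions made for the sketch.

```python
# Context-conditioned dynamics: per-environment parameter shift W @ xi_e applied
# to a shared network, evaluated functionally.
import torch
import torch.nn as nn
from torch.func import functional_call

dyn = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 2))   # shared dynamics model
n_params = sum(p.numel() for p in dyn.parameters())
d_ctx, n_envs = 2, 4
W = nn.Parameter(torch.zeros(n_params, d_ctx))        # shared linear hypernetwork
contexts = nn.Parameter(torch.zeros(n_envs, d_ctx))   # one context vector per environment

def forward_env(x, env):
    delta = W @ contexts[env]                          # low-rank shift of all parameters
    params, offset = {}, 0
    for name, p in dyn.named_parameters():
        params[name] = p + delta[offset:offset + p.numel()].view_as(p)
        offset += p.numel()
    return functional_call(dyn, params, (x,))

x = torch.randn(16, 2)
x_dot = forward_env(x, env=0)                          # environment-specific prediction
```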