Many dynamical systems -- from robots interacting with their surroundings to large-scale multiphysics systems -- involve a number of interacting subsystems. Toward the objective of learning composite models of such systems from data, we present i) a framework for compositional neural networks, ii) algorithms to train these models, iii) a method to compose the learned models, iv) theoretical results that bound the error of the resulting composite models, and v) a method to learn the composition itself, when it is not known a priori. The end result is a modular approach to learning: neural network submodels are trained on trajectory data generated by relatively simple subsystems, and the dynamics of more complex composite systems are then predicted without requiring additional data generated by the composite systems themselves. We achieve this compositionality by representing the system of interest, as well as each of its subsystems, as a port-Hamiltonian neural network (PHNN) -- a class of neural ordinary differential equations that uses the port-Hamiltonian systems formulation as inductive bias. We compose collections of PHNNs by using the system's physics-informed interconnection structure, which may be known a priori, or may itself be learned from data. We demonstrate the novel capabilities of the proposed framework through numerical examples involving interacting spring-mass-damper systems. Models of these systems, which include nonlinear energy dissipation and control inputs, are learned independently. Accurate compositions are learned using an amount of training data that is negligible in comparison with that required to train a new model from scratch. Finally, we observe that the composite PHNNs enjoy properties of port-Hamiltonian systems, such as cyclo-passivity -- a property that is useful for control purposes.
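To make the modeling ingredient concrete, the sketch below shows one way a port-Hamiltonian neural network vector field dx/dt = (J - R(x)) ∇H(x) + G u could be parameterized in PyTorch. It is a minimal illustration under assumed design choices (fixed skew-symmetric J, NN-parameterized H and dissipation R = L Lᵀ), not the authors' implementation.

```python
import torch
import torch.nn as nn

class PortHamiltonianNN(nn.Module):
    """Minimal port-Hamiltonian vector field  dx/dt = (J - R(x)) grad H(x) + G u.

    H is a scalar neural network; R(x) = L(x) L(x)^T is positive semidefinite by
    construction. Hypothetical sketch -- the paper's exact architecture may differ.
    Assumes an even state dimension split into (q, p).
    """

    def __init__(self, dim, hidden=64):
        super().__init__()
        self.dim = dim
        self.H = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh(), nn.Linear(hidden, 1))
        self.L = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh(), nn.Linear(hidden, dim * dim))
        J = torch.zeros(dim, dim)
        J[: dim // 2, dim // 2:] = torch.eye(dim // 2)
        self.register_buffer("J", J - J.T)           # fixed skew-symmetric interconnection
        self.G = nn.Parameter(torch.zeros(dim, 1))   # input (port) matrix

    def forward(self, x, u):
        x = x if x.requires_grad else x.clone().requires_grad_(True)
        H = self.H(x).sum()
        dH = torch.autograd.grad(H, x, create_graph=True)[0]     # (batch, dim)
        L = self.L(x).reshape(-1, self.dim, self.dim)
        R = L @ L.transpose(1, 2)                                # PSD dissipation
        dx = (self.J - R) @ dH.unsqueeze(-1) + self.G @ u.unsqueeze(-1)
        return dx.squeeze(-1)
```

Because J is skew-symmetric and R(x) = L Lᵀ ⪰ 0, any model of this form satisfies dH/dt = -∇Hᵀ R ∇H + ∇Hᵀ G u ≤ ∇Hᵀ G u along trajectories, which is the cyclo-passivity property noted in the abstract.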
Effective inclusion of physics-based knowledge into deep neural network models of dynamical systems can greatly improve data efficiency and generalization. Such a-priori knowledge might arise from physical principles (e.g., conservation laws) or from the system's design (e.g., the Jacobian matrix of a robot), even if large portions of the system dynamics remain unknown. We develop a framework to learn dynamics models from trajectory data while incorporating a-priori system knowledge as inductive bias. More specifically, the proposed framework uses physics-based side information to inform the structure of the neural network itself, and to place constraints on the values of the outputs and the internal states of the model. It represents the system's vector field as a composition of known and unknown functions, the latter of which are parametrized by neural networks. The physics-informed constraints are enforced via the augmented Lagrangian method during the model's training. We experimentally demonstrate the benefits of the proposed approach on a variety of dynamical systems -- including a benchmark suite of robotics environments featuring large state spaces, non-linear dynamics, external forces, contact forces, and control inputs. By exploiting a-priori system knowledge during training, the proposed approach learns to predict the system dynamics two orders of magnitude more accurately than a baseline approach that does not include prior knowledge, given the same training dataset.
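As a hedged illustration of the constraint-handling step (the constraint, network, and schedule below are placeholders, not the paper's code), a physics-based equality constraint c = 0 on the model's predictions can be folded into training with the augmented Lagrangian method:

```python
import torch
import torch.nn as nn

# Hypothetical vector-field model and a physics-based equality constraint on its
# outputs: for this toy conservative system, d/dt (0.5*|x|^2) = x . x_dot = 0.
model = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 2))

def constraint(x, x_dot_pred):
    return (x * x_dot_pred).sum(dim=1)

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
lam, mu = 0.0, 10.0                      # multiplier estimate and penalty weight

for outer in range(20):                  # augmented Lagrangian outer iterations
    for _ in range(200):                 # inner minimization over model parameters
        x = torch.randn(128, 2)                                # stand-in training batch
        x_dot_true = torch.stack([x[:, 1], -x[:, 0]], dim=1)   # toy ground-truth field
        x_dot_pred = model(x)
        data_loss = ((x_dot_pred - x_dot_true) ** 2).mean()
        c = constraint(x, x_dot_pred).mean()
        loss = data_loss + lam * c + 0.5 * mu * c ** 2
        opt.zero_grad(); loss.backward(); opt.step()
    with torch.no_grad():                # multiplier update (mu could also be grown)
        lam = lam + mu * float(constraint(x, model(x)).mean())
```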
Hybrid machine learning based on the Hamiltonian formulation has recently been successfully demonstrated for simple mechanical systems. In this work, we stress-test the method on both a simple mass-spring system and on more complex and realistic systems with several internal and external ports, including a system of multiple connected tanks. We quantify the performance under various conditions and show that imposing different assumptions greatly affects the performance, highlighting the advantages and limitations of the method. We demonstrate that port-Hamiltonian neural networks can be extended to higher dimensions with state-dependent ports. We consider learning on systems with known and unknown external ports. The port-Hamiltonian formulation allows deviations to be detected, and it still provides a valid model when the deviating component is removed. Finally, we propose a symmetric higher-order integration scheme to improve training on sparse and noisy data.
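The final point, training through a symmetric integrator, can be illustrated with the implicit midpoint rule, used here as a second-order stand-in for the paper's higher-order scheme; everything below is an assumed, minimal sketch:

```python
import torch

def implicit_midpoint_step(f, x, dt, iters=10):
    """One step of the symmetric implicit midpoint rule
        x_{n+1} = x_n + dt * f((x_n + x_{n+1}) / 2),
    solved by fixed-point iteration. The step is differentiable, so it can sit
    inside the training loop of a (port-)Hamiltonian neural network."""
    x_next = x + dt * f(x)                    # explicit Euler initial guess
    for _ in range(iters):
        x_next = x + dt * f(0.5 * (x + x_next))
    return x_next

# Demonstration with a known harmonic-oscillator field; during training, f would
# be the neural network vector field instead.
f = lambda z: torch.stack([z[..., 1], -z[..., 0]], dim=-1)
x = torch.tensor([1.0, 0.0])
for _ in range(100):
    x = implicit_midpoint_step(f, x, dt=0.1)
print(x)   # remains close to the unit circle, reflecting the scheme's symmetry
```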
Accurate models of robot dynamics are critical for safe and stable control and for generalization to novel operating conditions. However, hand-designed models may be insufficiently accurate, even after careful parameter tuning. This motivates the use of machine learning techniques to approximate the robot dynamics from a training set of state-control trajectories. The dynamics of many robots, including ground, aerial, and underwater vehicles, are described in terms of their SE(3) pose and generalized velocity, and satisfy conservation-of-energy principles. This paper proposes a Hamiltonian formulation on the SE(3) manifold within a neural ordinary differential equation (ODE) network architecture to approximate rigid-body dynamics. In contrast to a black-box ODE network, our formulation guarantees total energy conservation by construction. We develop energy-shaping and damping-injection control for the learned SE(3) Hamiltonian dynamics to enable a unified approach to stabilization and trajectory tracking across a variety of platforms, including pendulum, rigid-body, and quadrotor systems.
Incorporating prior knowledge of physics laws and structural properties of dynamical systems into the design of deep learning architectures has proven to be a powerful technique for improving their computational efficiency and generalization capacity. Learning accurate models of robot dynamics is critical for safe and stable control. Autonomous mobile robots, including wheeled, aerial, and underwater vehicles, can be modeled as controlled Lagrangian or Hamiltonian rigid-body systems evolving on matrix Lie groups. In this paper, we introduce a new structure-preserving deep learning architecture, the Lie group Forced Variational Integrator Network (LieFVIN), capable of learning controlled Lagrangian or Hamiltonian dynamics on Lie groups, either from position-velocity or position-only data. By design, LieFVINs preserve both the Lie group structure on which the dynamics evolve and the symplectic structure underlying the Hamiltonian or Lagrangian systems of interest. The proposed architecture learns surrogate discrete-time flow maps instead of surrogate vector fields, which allows better and faster prediction without requiring the use of a numerical integrator, neural ODE, or adjoint techniques. Furthermore, the learnt discrete-time dynamics can be combined seamlessly with computationally scalable discrete-time (optimal) control strategies.
Recently, graph neural networks have been gaining a lot of attention to simulate dynamical systems due to their inductive nature leading to zero-shot generalizability. Similarly, physics-informed inductive biases in deep-learning frameworks have been shown to give superior performance in learning the dynamics of physical systems. There is a growing volume of literature that attempts to combine these two approaches. Here, we evaluate the performance of thirteen different graph neural networks, namely, Hamiltonian and Lagrangian graph neural networks, graph neural ODE, and their variants with explicit constraints and different architectures. We briefly explain the theoretical formulation highlighting the similarities and differences in the inductive biases and graph architecture of these systems. We evaluate these models on spring, pendulum, gravitational, and 3D deformable solid systems to compare the performance in terms of rollout error, conserved quantities such as energy and momentum, and generalizability to unseen system sizes. Our study demonstrates that GNNs with additional inductive biases, such as explicit constraints and decoupling of kinetic and potential energies, exhibit significantly enhanced performance. Further, all the physics-informed GNNs exhibit zero-shot generalizability to system sizes an order of magnitude larger than the training system, thus providing a promising route to simulate large-scale realistic systems.
A beauty of physics is that quantities are often conserved as a system evolves; these are known as constants of motion. Finding the constants of motion is important for understanding a system's dynamics, but it typically requires mathematical sophistication and manual analytical work. In this paper, we present a neural network that simultaneously learns a system's dynamics and its constants of motion from data. By exploiting the discovered constants of motion, it produces better predictions of the dynamics and works on a wider range of systems than Hamiltonian-based neural networks. Moreover, the training progress of our method can be used as an indication of the number of constants of motion in a system, which can be useful when studying novel physical systems.
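One hedged way to read the joint-learning idea (not necessarily the paper's architecture): a constant of motion C satisfies dC/dt = ∇C(x)·f(x) = 0 along the flow, so a learned C-network can be trained with an invariance penalty alongside the dynamics fit:

```python
import torch
import torch.nn as nn

dim, n_constants = 4, 2
f_net = nn.Sequential(nn.Linear(dim, 64), nn.Tanh(), nn.Linear(64, dim))          # dynamics
C_net = nn.Sequential(nn.Linear(dim, 64), nn.Tanh(), nn.Linear(64, n_constants))  # constants of motion

def losses(x, x_dot_true):
    x = x.clone().requires_grad_(True)
    f = f_net(x)
    data_loss = ((f - x_dot_true) ** 2).mean()
    # Invariance penalty: each learned constant C_k should satisfy grad C_k . f = 0.
    # A full method also needs to keep the C_k non-trivial and mutually independent
    # (omitted in this sketch).
    C = C_net(x)
    invariance = 0.0
    for k in range(n_constants):
        gradC = torch.autograd.grad(C[:, k].sum(), x, create_graph=True)[0]
        invariance = invariance + ((gradC * f).sum(dim=1) ** 2).mean()
    return data_loss, invariance

opt = torch.optim.Adam(list(f_net.parameters()) + list(C_net.parameters()), lr=1e-3)
x = torch.randn(256, dim)            # stand-in for sampled states
x_dot = torch.randn(256, dim)        # stand-in for measured time derivatives
data_loss, invariance = losses(x, x_dot)
(data_loss + 0.1 * invariance).backward()    # weighting chosen arbitrarily here
opt.step()
```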
Despite the immense success of neural networks in modeling system dynamics from data, they often remain physics-agnostic black boxes. In the particular case of physical systems, they might consequently make physically inconsistent predictions, which makes them unreliable in practice. In this paper, we leverage the framework of Irreversible port-Hamiltonian Systems (IPHS), which can describe most multi-physics systems, and rely on Neural Ordinary Differential Equations (NODEs) to learn their parameters from data. Since IPHS models are consistent with the first and second principles of thermodynamics by design, so are the proposed Physically Consistent NODEs (PC-NODEs). Furthermore, the NODE training procedure allows us to seamlessly incorporate prior knowledge of the system properties in the learned dynamics. We demonstrate the effectiveness of the proposed method by learning the thermodynamics of a building from real-world measurements and the dynamics of a simulated gas-piston system. Thanks to the modularity and flexibility of the IPHS framework, PC-NODEs can be extended to learn physically consistent models of multi-physics distributed systems.
With the ever-increasing availability of data, there has been an explosion of interest in applying modern machine learning methods to fields such as modeling and control. However, despite the flexibility and surprising accuracy of such black-box models, it remains difficult to trust them. Recent efforts to combine the two approaches aim to develop flexible models that nevertheless generalize well; we refer to this paradigm as hybrid analysis and modeling (HAM). In this work, we investigate the corrective source term approach (CoSTA), which uses a data-driven model to correct the errors of a physics-based model. This enables us to develop models that make accurate predictions even when the underlying physics of the problem is not fully understood. We apply CoSTA to the Hall-Héroult process in an aluminum electrolysis cell. We demonstrate that the method improves both accuracy and predictive stability, yielding an overall more trustworthy model.
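A minimal sketch of the corrective-source-term idea under assumed names and a toy "physics" update (the paper's solver and network are not reproduced here): the physics-based model is kept as-is, and a neural network supplies an additive source term that compensates its error at each step.

```python
import torch
import torch.nn as nn

# Imperfect physics-based update: here a toy decay model that ignores part of
# the true dynamics stands in for the assumed-known physics solver.
def physics_step(u, dt=0.01):
    return u - dt * u

# Data-driven corrective source term; its input and output shapes match the
# physics state. Hypothetical architecture.
correction = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 8))

def hybrid_step(u, dt=0.01):
    # Physics prediction plus a learned source term scaled like a forcing term.
    return physics_step(u, dt) + dt * correction(u)

# Training: regress the hybrid one-step prediction onto observed next states.
opt = torch.optim.Adam(correction.parameters(), lr=1e-3)
u_now = torch.randn(32, 8)                 # stand-in for observed states
u_next = torch.randn(32, 8)                # stand-in for observed next states
loss = ((hybrid_step(u_now) - u_next) ** 2).mean()
loss.backward()
opt.step()
```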
The past few years have witnessed an increased interest in incorporating physics-informed inductive biases into deep learning frameworks. In particular, a growing body of literature has been exploring ways to enforce energy conservation while using neural networks to learn dynamics from observed time-series data. In this work, we survey recently proposed energy-conserving neural network models, including HNN, LNN, DeLaN, SymODEN, CHNN, CLNN, and their variants. We provide a compact derivation of the theory behind these models and explain their similarities and differences. Their performance is compared on four physical systems. We point out the possibility of leveraging some of these energy-conserving models to design energy-based controllers.
Neural ordinary differential equations (NODEs) -- parametrizations of differential equations using neural networks -- have shown tremendous promise in learning models of unknown continuous-time dynamical systems from data. However, every forward evaluation of a NODE requires numerical integration of the neural network used to capture the system dynamics, making their training prohibitively expensive. Existing works rely on off-the-shelf adaptive step-size numerical integration schemes, which often require an excessive number of evaluations of the underlying dynamics network to obtain sufficient accuracy for training. By contrast, we accelerate the evaluation and the training of NODEs by proposing a data-driven approach to their numerical integration. The proposed Taylor-Lagrange NODEs (TL-NODEs) use a fixed-order Taylor expansion for numerical integration, while also learning to estimate the expansion's approximation error. As a result, the proposed approach achieves the same accuracy as adaptive step-size schemes while employing only low-order Taylor expansions, thus greatly reducing the computational cost necessary to integrate the NODE. A suite of numerical experiments, including modeling dynamical systems, image classification, and density estimation, demonstrate that TL-NODEs can be trained more than an order of magnitude faster than state-of-the-art approaches, without any loss in performance.
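The core numerical idea, a fixed-order Taylor step plus a learned estimate of the truncation error, can be sketched as follows (first-order expansion for brevity; the remainder network and its inputs are assumptions, not the paper's exact design):

```python
import torch
import torch.nn as nn

dim = 2
f_net = nn.Sequential(nn.Linear(dim, 64), nn.Tanh(), nn.Linear(64, dim))   # dynamics network
# Network estimating the remainder of the truncated Taylor expansion; taking
# (x, h) as input is an assumption made for this sketch.
remainder_net = nn.Sequential(nn.Linear(dim + 1, 64), nn.Tanh(), nn.Linear(64, dim))

def tl_step(x, h):
    """First-order Taylor step with a learned correction for the truncation error:
        x_{n+1} ~ x_n + h f(x_n) + h^2 * r_theta(x_n, h).
    Higher-order variants would add further analytic Taylor terms."""
    h_col = torch.full((x.shape[0], 1), h)
    return x + h * f_net(x) + (h ** 2) * remainder_net(torch.cat([x, h_col], dim=1))

x = torch.randn(16, dim)
x_next = tl_step(x, h=0.05)   # one cheap, fixed-cost integration step
```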
Relying on recent research results on Neural ODEs, this paper presents a methodology for the design of state observers for nonlinear systems based on Neural ODEs, learning Luenberger-like observers and their nonlinear extension (Kazantzis-Kravaris-Luenberger (KKL) observers) for systems with partially-known nonlinear dynamics and fully unknown nonlinear dynamics, respectively. In particular, for tuneable KKL observers, the relationship between the design of the observer and its trade-off between convergence speed and robustness is analysed and used as a basis for improving the robustness of the learning-based observer in training. We illustrate the advantages of this approach in numerical simulations.
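As a rough, assumption-laden illustration of the learning-based KKL construction (toy plant, hand-picked Hurwitz A, and Euler simulation are all placeholders): the plant is simulated alongside the linear filter ż = Az + By, and a network is trained to invert the induced map from z to x.

```python
import torch
import torch.nn as nn

# Toy plant: damped pendulum-like system with measured angle y = x1.
def plant(x):
    return torch.stack([x[:, 1], -torch.sin(x[:, 0]) - 0.1 * x[:, 1]], dim=1)

nx, nz = 2, 3                                    # the KKL state is typically larger than the plant state
A = -torch.diag(torch.tensor([1.0, 2.0, 3.0]))   # Hurwitz by construction
B = torch.ones(nz, 1)

def simulate(x0, steps=2000, dt=0.01):
    """Run plant and KKL filter  z' = A z + B y  side by side (explicit Euler)."""
    x, z = x0, torch.zeros(x0.shape[0], nz)
    xs, zs = [], []
    for i in range(steps):
        y = x[:, :1]
        x = x + dt * plant(x)
        z = z + dt * (z @ A.T + y @ B.T)
        if i > steps // 2:                       # discard the filter transient
            xs.append(x); zs.append(z)
    return torch.cat(xs), torch.cat(zs)

# Learn the inverse map T*: z -> x from the collected pairs.
Tstar = nn.Sequential(nn.Linear(nz, 64), nn.Tanh(), nn.Linear(64, nx))
xs, zs = simulate(torch.randn(64, nx))
opt = torch.optim.Adam(Tstar.parameters(), lr=1e-3)
for _ in range(500):
    loss = ((Tstar(zs) - xs) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
# At run time the observer integrates z from measured y and outputs x_hat = Tstar(z).
```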
In many real-world settings, image observations of a freely rotating 3D rigid body, such as a satellite, may be available when low-dimensional measurements are not. However, the high dimensionality of image data precludes the use of classical estimation techniques for learning the dynamics, and the lack of interpretability reduces the usefulness of standard deep learning methods. In this work, we present a physics-informed neural network model to estimate and predict 3D rotational dynamics from image sequences. We achieve this using a multi-stage prediction pipeline that maps individual images to a latent representation homeomorphic to $\mathbf{SO}(3)$, computes angular velocities from latent pairs, and predicts future latent states using the Hamiltonian equations of motion with a learned representation of the Hamiltonian. We demonstrate the efficacy of our approach on a new rotating rigid-body dataset containing sequences of rotating cubes and rectangular prisms with uniform and non-uniform densities.
Dynamical systems are widely used in the natural sciences, such as physics, biology, and chemistry, as well as in engineering disciplines such as circuit analysis, computational fluid dynamics, and control. For simple systems, the differential equations governing the dynamics can be derived by applying fundamental physical laws. For more complex systems, however, this approach becomes exceedingly difficult. Data-driven modeling is an alternative paradigm that seeks an approximation of the dynamics of a system using observations of the true system. In recent years, interest in data-driven modeling techniques has increased; in particular, neural networks have proven to provide an effective framework for solving a wide range of tasks. This paper provides a survey of the different ways to construct models of dynamical systems using neural networks. In addition to a basic overview, we review the related literature and outline the most significant challenges from numerical simulation that this modeling paradigm must overcome. Based on the reviewed literature and the identified challenges, we provide a discussion of promising research areas.
Here we present symplectically integrated symbolic regression (SISR), a novel technique for learning physical governing equations from data. SISR employs a deep symbolic regression approach, using a multi-layer LSTM-RNN with mutation to probabilistically sample Hamiltonian symbolic expressions. Using symbolic neural networks, we develop a model-agnostic approach for extracting meaningful physical priors from the data, which can be imposed directly on the RNN outputs to limit its search space. The Hamiltonians generated by the RNN are optimized and assessed using a fourth-order symplectic integration scheme; the resulting prediction performance is used to train the LSTM-RNN to generate increasingly better functions via a risk-seeking policy gradient approach. Employing these techniques, we extract the correct governing equations from oscillator, pendulum, two-body, and three-body gravitational systems with noisy and extremely small datasets.
Recently, there has been growing interest in the modeling and computation of physical systems with neural networks. In classical mechanics, Hamiltonian systems provide an elegant and compact formalism in which the dynamics is fully determined by a single scalar function, the Hamiltonian. The solution trajectories are often constrained to evolve on a submanifold of a linear vector space. In this work, we propose new approaches to accurately approximate the Hamiltonian function of constrained mechanical systems from sample data of their solutions. We focus on the importance of the constraints in the learning strategy, using explicit Lie group integrators and other classical schemes.
The development of data-informed predictive models for dynamical systems is of widespread interest in many disciplines. We present a unifying framework for blending mechanistic and machine-learning approaches to identify dynamical systems from noisy and partially observed data. We compare purely data-driven learning with hybrid models that incorporate imperfect domain knowledge. Our formulation is agnostic to the chosen machine learning model, is presented in both continuous- and discrete-time settings, and is compatible with model errors that exhibit substantial memory as well as with memoryless errors. First, we study memoryless, linear (with respect to parametric dependence) model error from a learning-theory perspective, defining excess risk and generalization error. For ergodic continuous-time systems, we prove that both the excess risk and the generalization error are bounded by terms that diminish with the square root of T, the time interval over which the training data are specified. Second, we study scenarios that benefit from modeling with memory, proving universal approximation theorems for two classes of continuous-time recurrent neural networks (RNNs): both can learn memory-dependent model error. In addition, we connect one class of RNNs to reservoir computing, thereby relating the learning of memory-dependent error to recent work on supervised learning between Banach spaces using random features. Numerical results are given (Lorenz '63 and Lorenz '96 multiscale systems) to compare purely data-driven and hybrid approaches, finding that hybrid methods are less data-hungry and more efficient. Finally, we demonstrate numerically how data assimilation can be leveraged to learn hidden dynamics from noisy, partially observed data, and illustrate the challenges of representing memory by this approach and of training such models.
Forecasting complex dynamical phenomena in settings where only partial knowledge of their dynamics is available is a prevalent problem across various scientific fields. While purely data-driven approaches are arguably insufficient in this context, standard physics-based modeling approaches tend to be over-simplistic, inducing non-negligible errors. In this work, we introduce the APHYNITY framework, a principled approach for augmenting incomplete physical dynamics described by differential equations with deep data-driven models. It consists in decomposing the dynamics into two components: a physical component accounting for the dynamics for which we have some prior knowledge, and a data-driven component accounting for the errors of the physical model. The learning problem is carefully formulated such that the physical model explains as much of the data as possible, while the data-driven component only describes the information that the physical model cannot capture, no more and no less. This not only provides existence and uniqueness of this decomposition, but also ensures interpretability and benefits generalization. Experiments on three important use cases, each representative of a different family of phenomena, namely reaction-diffusion equations, wave equations, and the nonlinear damped pendulum, show that the approach can efficiently leverage approximate physical models to accurately forecast the evolution of the system and to correctly identify relevant physical parameters.
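The decomposition principle can be sketched as below, with a toy damped pendulum and a fixed penalty weight standing in for the constrained formulation; the parameter names and architectures are assumptions, not the paper's code:

```python
import torch
import torch.nn as nn

# Physical component with an unknown parameter: frictionless pendulum with a
# learnable frequency (prior knowledge is the structural form only).
log_omega = torch.zeros(1, requires_grad=True)
def f_phys(x):                                    # x = (theta, theta_dot)
    omega2 = torch.exp(log_omega) ** 2
    return torch.stack([x[:, 1], -omega2 * torch.sin(x[:, 0])], dim=1)

# Data-driven component for what the physics misses (e.g., damping).
f_aug = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 2))

def f_total(x):
    return f_phys(x) + f_aug(x)

opt = torch.optim.Adam([log_omega] + list(f_aug.parameters()), lr=1e-3)
x = torch.randn(256, 2)                           # stand-in for observed states
x_dot = torch.stack([x[:, 1],                     # toy "true" damped pendulum
                     -1.5 * torch.sin(x[:, 0]) - 0.2 * x[:, 1]], dim=1)
for _ in range(1000):
    fit = ((f_total(x) - x_dot) ** 2).mean()
    # Penalize the norm of the residual so that the physical term explains the
    # data as far as it can; the original framework enforces this as a
    # constrained problem rather than with a fixed weight.
    residual_norm = (f_aug(x) ** 2).mean()
    loss = fit + 0.1 * residual_norm
    opt.zero_grad(); loss.backward(); opt.step()
```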
In this work, we exploit the universal approximation property of neural networks (NNs) to design interconnection and damping assignment (IDA) passivity-based control (PBC) schemes for fully actuated mechanical systems in the port-Hamiltonian (pH) framework. To that end, we transform the IDA-PBC method into a supervised learning problem that solves the partial differential matching equations and fulfills equilibrium assignment and Lyapunov stability conditions. A main consequence of this is that the output of the learning algorithm has a clear control-theoretic interpretation in terms of passivity and Lyapunov stability. The proposed control design methodology is validated via numerical simulations for mechanical systems with one and two degrees of freedom.
Energy conservation is at the core of many physical phenomena and dynamical systems. Over the past few years, a large number of works have aimed at predicting the trajectories of dynamical systems using neural networks while adhering to the law of energy conservation. Most of these works are inspired by classical mechanics, such as Hamiltonian and Lagrangian mechanics, and by neural ordinary differential equations. While these works have been shown to perform well in their respective domains, a unifying method that is generally applicable without significant changes to the neural network architecture is still lacking. In this work, we aim to address this issue by providing a simple method that can be applied not only to energy-conserving systems but also to dissipative systems, by including the appropriate inductive bias for each case in the form of a regularization term in the loss function. The proposed method does not require changing the neural network architecture and could form the basis for validating new ideas, and therefore shows promise for accelerating research in this direction.
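A minimal sketch of the kind of loss-level inductive bias described (the energy function, rollout scheme, and weighting are placeholders): for a conservative system, the change of energy along predicted rollouts is penalized; for a dissipative system, one would penalize only energy increases.

```python
import torch
import torch.nn as nn

f_net = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 2))   # learned dynamics

def energy(x):                        # known energy of a unit harmonic oscillator
    return 0.5 * (x ** 2).sum(dim=1)

def rollout(f, x0, steps, dt):
    xs = [x0]
    for _ in range(steps):
        xs.append(xs[-1] + dt * f(xs[-1]))       # explicit Euler rollout
    return torch.stack(xs)                       # (steps+1, batch, 2)

f_true = lambda x: torch.stack([x[:, 1], -x[:, 0]], dim=1)   # toy ground truth
x0 = torch.randn(64, 2)
target = rollout(f_true, x0, steps=20, dt=0.05)
opt = torch.optim.Adam(f_net.parameters(), lr=1e-3)

pred = rollout(f_net, x0, steps=20, dt=0.05)
data_loss = ((pred - target) ** 2).mean()
# Regularization term encoding the inductive bias: energy should stay constant
# along the predicted rollout; for dissipative systems one could instead
# penalize only increases, e.g. with torch.relu(dE).
dE = energy(pred[1:].reshape(-1, 2)) - energy(pred[:-1].reshape(-1, 2))
loss = data_loss + 0.1 * (dE ** 2).mean()
loss.backward()
opt.step()
```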