Generative design has been growing throughout the design community as a viable method for design-space exploration. Thermal design is more complex than mechanical or aerodynamic design because of the additional convection-diffusion equation and its associated boundary interactions. We present generative thermal design using cooperative multi-agent deep reinforcement learning and a continuous geometric representation of the fluid and solid domains. The proposed framework consists of a pre-trained neural-network surrogate model that serves as the environment, predicting the heat transfer and pressure drop of the generated geometries. The design space is parameterized with composite Bezier curves to solve a multiple-fin shape-optimization problem. We show that our multi-agent framework can learn design policies from a multi-objective reward, with no need for shape derivatives or a differentiable objective function.
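The composite-Bezier parameterization mentioned above can be illustrated with a minimal de Casteljau evaluator. This is our own sketch, not the paper's code; the function names and the uniform split of the global parameter across segments are illustrative assumptions:

```python
def de_casteljau(points, t):
    """Evaluate a single Bezier curve at parameter t via de Casteljau's algorithm."""
    pts = [p[:] for p in points]  # copy control points
    n = len(pts)
    for r in range(1, n):
        for i in range(n - r):
            pts[i] = [(1 - t) * a + t * b for a, b in zip(pts[i], pts[i + 1])]
    return pts[0]

def composite_bezier(segments, t):
    """Evaluate a composite Bezier curve: `segments` is a list of control-point
    lists; the global parameter t in [0, 1] is split uniformly across segments."""
    m = len(segments)
    k = min(int(t * m), m - 1)   # segment index
    local_t = t * m - k          # local parameter within that segment
    return de_casteljau(segments[k], local_t)
```

For a quadratic segment with control points (0,0), (1,2), (2,0), the midpoint t = 0.5 evaluates to (1, 1), the expected Bernstein-weighted average.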
We present surrogate models for predicting the heat transfer and pressure drop of complex fin geometries generated with composite Bezier curves. The thermal design process involves complex, computationally expensive, and time-consuming iterations of high-fidelity simulations. With advances in machine-learning algorithms and graphics processing units (GPUs), we can exploit the parallel processing architecture of GPUs, rather than relying solely on CPUs, to accelerate thermo-fluid simulations. In this study, convolutional neural networks (CNNs) are used to predict computational fluid dynamics (CFD) results directly from topologies saved as images. Cases with a single fin and with multiple morphable fins are studied. A comparison of the Xception network and a regular CNN is provided for the single-fin design case. The results show high prediction accuracy for the single-fin design, especially with the Xception network. Increasing the design freedom to multiple fins increases the prediction error; nevertheless, for design purposes, this error remains within 3% for both the pressure-drop and heat-transfer estimates.
Profile extrusion is a continuous production process for manufacturing plastic profiles from molten polymer. Especially interesting is the design of the die, through which the melt is pressed to attain the desired shape. However, due to an inhomogeneous velocity distribution at the die exit or residual stresses inside the extrudate, the final shape of the manufactured part often deviates from the desired one. To avoid these deviations, the shape of the die can be computationally optimized, which has already been investigated in the literature using classical optimization approaches. A new approach in the field of shape optimization is the utilization of Reinforcement Learning (RL) as a learning-based optimization algorithm. RL is based on trial-and-error interactions of an agent with an environment. For each action, the agent is rewarded and informed about the subsequent state of the environment. While not necessarily superior to classical, e.g., gradient-based or evolutionary, optimization algorithms for one single problem, RL techniques are expected to perform especially well when similar optimization tasks are repeated, since the agent learns a more general strategy for generating optimal shapes instead of concentrating on just one single problem. In this work, we investigate this approach by applying it to two 2D test cases. The flow-channel geometry can be modified by the RL agent using so-called Free-Form Deformation, a method where the computational mesh is embedded into a transformation spline, which is then manipulated based on the control-point positions. In particular, we investigate the impact of utilizing different agents on the training progress and the potential for wall-time savings by utilizing multiple environments during training.
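Free-Form Deformation as described above embeds each mesh node into a control-point lattice and moves nodes through a spline mapping. A minimal 2D sketch using a Bernstein-polynomial (Bezier-volume) lattice, which is the classic Sederberg-Parry formulation; the abstract does not specify the spline type, so this choice and all names are illustrative:

```python
from math import comb

def bernstein(n, i, u):
    """Bernstein basis polynomial B_{i,n}(u)."""
    return comb(n, i) * u**i * (1 - u)**(n - i)

def ffd_2d(node, control, lx=1.0, ly=1.0):
    """Deform one mesh node embedded in a 2D FFD lattice.

    `control[i][j]` holds lattice control-point coordinates; the undeformed
    lattice spans [0, lx] x [0, ly]. With control points at their rest
    positions the mapping is the identity.
    """
    u, v = node[0] / lx, node[1] / ly     # local lattice coordinates in [0, 1]
    n, m = len(control) - 1, len(control[0]) - 1
    x = y = 0.0
    for i in range(n + 1):
        for j in range(m + 1):
            w = bernstein(n, i, u) * bernstein(m, j, v)
            x += w * control[i][j][0]
            y += w * control[i][j][1]
    return [x, y]
```

Moving a single control point smoothly drags the embedded mesh nodes with it, which is what gives the RL agent a low-dimensional action space over the flow-channel geometry.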
In recent years, reinforcement learning has produced a range of policy-gradient methods that mostly model stochastic policies with Gaussian distributions. The Gaussian distribution, however, has infinite support, whereas real-world applications usually have bounded action spaces. This mismatch causes an estimation bias that can be eliminated if the policy distribution has bounded support. In this work, we investigate how to train such a bounded-support policy and evaluate it on two continuous-control tasks from OpenAI Gym. On both tasks, the bounded-support policy outperforms the Gaussian policy in terms of the agent's final expected reward and also shows greater stability and faster convergence of the training process. For the environment with high-dimensional image inputs, the agent's success rate improves by 63% over the Gaussian policy.
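A common bounded-support policy parameterization in the literature is the Beta distribution rescaled to the action interval; the abstract above does not name the distribution, so treating it as Beta is our assumption, and the function below is purely an illustrative sketch:

```python
import random

def beta_action(alpha, beta, low, high, rng=random):
    """Sample a bounded action by rescaling a Beta(alpha, beta) draw,
    whose support is exactly [0, 1], onto the action interval [low, high].
    Unlike a Gaussian sample, no clipping is ever needed."""
    x = rng.betavariate(alpha, beta)
    return low + (high - low) * x
```

Because every sample already lies inside the bounds, the clipping step (and the bias it introduces into the policy gradient) disappears by construction.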
Machine learning frameworks such as Genetic Programming (GP) and Reinforcement Learning (RL) are gaining popularity in flow control. This work presents a comparative analysis of the two, benchmarking some of their most representative algorithms against global optimization techniques such as Bayesian Optimization (BO) and Lipschitz global optimization (LIPO). First, we review the general framework of the model-free control problem, bringing together all methods as black-box optimization problems. Then, we test the control algorithms on three test cases. These are (1) the stabilization of a nonlinear dynamical system featuring frequency cross-talk, (2) the wave cancellation from a Burgers' flow and (3) the drag reduction in a cylinder wake flow. We present a comprehensive comparison to illustrate their differences in exploration versus exploitation and their balance between `model capacity' in the control law definition versus `required complexity'. We believe that such a comparison paves the way toward the hybridization of the various methods, and we offer some perspective on their future development in the literature on flow control problems.
Engineering problems that are modeled with sophisticated mathematical methods, or characterized by expensive tests or experiments, are burdened by limited budgets or finite computational resources. Moreover, practical scenarios in industry impose restrictions, based on logistics and preferences, on the ways in which experiments can be conducted. For example, material supply may allow only a handful of experiments in a single shot or, in the case of computational models, one may face significant waiting times due to shared computational resources. In such scenarios, one typically performs experiments in a manner that allows for maximizing one's state of knowledge while satisfying the above practical constraints. Sequential design of experiments (SDOE) is a popular suite of methods that has yielded promising results on diverse engineering and practical problems in recent years. A common strategy leveraging Bayesian formalism is Bayesian SDOE, which usually works best in the one-step-ahead, or myopic, scenario of selecting a single experiment at each step of a sequence. In this work, we aim to extend the SDOE strategy to query experiments or computer codes in batches of inputs. To that end, we leverage deep reinforcement learning (RL)-based policy-gradient methods to propose batches of queries that are selected with the entire budget taken into account. The algorithm retains the sequential nature inherent to SDOE while incorporating reward elements from the deep RL domain. A unique capability of the proposed method is its ability to be applied to multiple tasks, such as the optimization of a function, once it is trained. We demonstrate the performance of the proposed algorithm on a synthetic problem as well as a challenging high-dimensional engineering problem.
The study of decentralized learning or independent learning in cooperative multi-agent reinforcement learning has a history of decades. Recently, empirical studies have shown that independent PPO (IPPO) can obtain good performance, close to or even better than the methods of centralized training with decentralized execution, in several benchmarks. However, a decentralized actor-critic algorithm with a convergence guarantee remains an open problem. In this paper, we propose \textit{decentralized policy optimization} (DPO), a decentralized actor-critic algorithm with monotonic improvement and convergence guarantee. We derive a novel decentralized surrogate for policy optimization such that the monotonic improvement of the joint policy can be guaranteed by each agent \textit{independently} optimizing the surrogate. In practice, this decentralized surrogate can be realized by two adaptive coefficients for policy optimization at each agent. Empirically, we compare DPO with IPPO in a variety of cooperative multi-agent tasks, covering discrete and continuous action spaces, and fully and partially observable environments. The results show that DPO outperforms IPPO in most tasks, which serves as evidence for our theoretical results.
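The IPPO baseline discussed above optimizes PPO's clipped surrogate independently at each agent. A one-sample sketch of that clipped objective (standard PPO, not the paper's DPO surrogate, which additionally involves the adaptive coefficients):

```python
def ppo_clip_objective(ratio, advantage, eps=0.2):
    """PPO clipped surrogate for one sample:
    L = min(r * A, clip(r, 1 - eps, 1 + eps) * A),
    where r = pi_new(a|s) / pi_old(a|s) and A is the advantage estimate."""
    clipped = max(1.0 - eps, min(1.0 + eps, ratio))
    return min(ratio * advantage, clipped * advantage)
```

The `min` makes the objective pessimistic: a favorable ratio cannot push the update beyond the clip range, while an unfavorable one is never hidden by clipping.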
Collaborative autonomous multi-agent systems covering a specified area have many potential applications, such as UAV search and rescue, forest fire fighting, and real-time high-resolution monitoring. Traditional approaches for such coverage problems involve designing a model-based control policy based on sensor data. However, designing model-based controllers is challenging, and the state-of-the-art classical control policy still exhibits a large degree of suboptimality. In this paper, we present a reinforcement learning (RL) approach for the multi-agent coverage problem involving agents with second-order dynamics. Our approach is based on the Multi-Agent Proximal Policy Optimization Algorithm (MAPPO). To improve the stability of the learning-based policy and the efficiency of exploration, we utilize an imitation loss based on the state-of-the-art classical control policy. Our trained policy significantly outperforms the state-of-the-art. Our proposed network architecture incorporates self-attention, which allows a single-shot domain transfer of the trained policy to a large variety of domain shapes and numbers of agents. We demonstrate our proposed method in a variety of simulated experiments.
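The self-attention block that enables the transfer to varying numbers of agents is, at its core, scaled dot-product attention over the per-agent feature rows. A minimal single-head sketch (our own illustration; the paper's exact architecture is not specified in the abstract):

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def self_attention(Q, K, V):
    """Scaled dot-product self-attention for one head.
    Q, K, V: lists of length-d vectors, one row per agent. The output has one
    row per query regardless of how many agents (rows) are attended over,
    which is what makes the layer agnostic to the number of agents."""
    d = len(K[0])
    out = []
    for q in Q:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in K]
        w = softmax(scores)
        out.append([sum(wi * v[j] for wi, v in zip(w, V)) for j in range(len(V[0]))])
    return out
```

When all keys are identical, the attention weights are uniform and each output row is simply the mean of the value rows, a useful sanity check.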
This article proposes a model-based deep reinforcement learning (DRL) method to design emergency control strategies for short-term voltage stability problems in power systems. Recent advances show promising results in model-free DRL-based methods for power systems, but model-free methods suffer from poor sample efficiency and long training times, both critical for making state-of-the-art DRL algorithms practically applicable. A DRL agent learns an optimal policy via trial and error while interacting with the real-world environment, and it is desirable to minimize the agent's direct interaction with the real-world power grid due to its safety-critical nature. Additionally, state-of-the-art DRL-based policies are mostly trained using a physics-based grid simulator whose dynamic simulation is computationally intensive, lowering the training efficiency. We propose a novel model-based DRL framework in which a deep neural network (DNN)-based dynamic surrogate model, instead of a real-world power grid or physics-based simulation, is utilized within the policy-learning framework, making the process faster and sample-efficient. However, stabilizing model-based DRL is challenging because of the complex system dynamics of large-scale power systems. We address these issues by incorporating imitation learning to provide a warm start for policy learning, reward shaping, and a multi-step surrogate loss. Finally, we achieved 97.5% sample efficiency and 87.7% training efficiency for an application to the IEEE 300-bus test system.
In many areas, such as the physical sciences, life sciences, and finance, control methods are used to achieve desired goals in complex dynamical systems governed by differential equations. In this work, we formulate the problem of controlling stochastic partial differential equations (SPDEs) as a reinforcement-learning problem. We present a learning-based, distributed control approach for the online control of SPDE systems with high-dimensional state-action spaces, using deep deterministic policy gradient methods. We test the performance of our approach on the problem of controlling the stochastic Burgers' equation, which describes a turbulent fluid flow on an infinitely large domain.
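As a concrete picture of the dynamics being controlled, here is a minimal explicit finite-difference step of the deterministic 1D viscous Burgers equation on a periodic domain (our own illustrative discretization; the paper's stochastic forcing and control terms are omitted):

```python
def burgers_step(u, dt, dx, nu):
    """One explicit step of u_t + u * u_x = nu * u_xx with central differences
    and periodic boundaries. For illustration only: stability requires small
    dt relative to dx and nu (no stabilization is applied here)."""
    n = len(u)
    new = [0.0] * n
    for i in range(n):
        um, up = u[i - 1], u[(i + 1) % n]          # periodic neighbors
        adv = u[i] * (up - um) / (2.0 * dx)        # nonlinear advection u*u_x
        diff = nu * (up - 2.0 * u[i] + um) / dx**2 # viscous diffusion
        new[i] = u[i] + dt * (diff - adv)
    return new
```

An RL controller for the SPDE version would add a (possibly stochastic) forcing term to the right-hand side, chosen by the policy from the observed state.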
Over the past few years, supervised learning (SL) has established itself as the state of the art in data-driven turbulence modeling. In the SL paradigm, models are trained on a dataset that is typically computed a priori from a high-fidelity solution by applying the respective filter function, which separates the resolved from the unresolved flow scales. For implicitly filtered large-eddy simulation (LES), this approach is infeasible, since here the employed discretization itself acts as an implicit filter function. Consequently, the exact filter form is generally unknown, and thus the corresponding closure terms cannot be computed even if the full solution is available. The reinforcement-learning (RL) paradigm can be used to avoid this inconsistency by training not on a previously obtained dataset, but instead by interacting directly with the dynamic LES environment itself. This allows potentially complex implicit LES filters to be incorporated into the training process by design. In this work, we apply an RL framework to find an optimal eddy viscosity for implicitly filtered large-eddy simulations of forced homogeneous isotropic turbulence. To this end, we formulate the task of turbulence modeling as an RL task with a policy network based on convolutional neural networks that adapts the eddy viscosity in the LES dynamically in space and time, based solely on the local flow state. We demonstrate that the trained models can provide long-term stable simulations and that they outperform established analytical models in terms of accuracy. In addition, the models generalize well to other resolutions and discretizations. We thus demonstrate that RL can provide a framework for consistent, accurate, and stable turbulence modeling, especially for implicitly filtered LES.
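The "established analytical models" the RL policy is compared against include the classic Smagorinsky closure, in which the eddy viscosity is tied to the local strain rate. A 2D sketch of that baseline (our own illustration; the paper works in 3D and lets the RL policy set the coefficient dynamically instead):

```python
import math

def smagorinsky_nu_t(dudx, dudy, dvdx, dvdy, delta, cs=0.17):
    """Smagorinsky eddy viscosity nu_t = (Cs * Delta)^2 * |S| for a 2D
    velocity-gradient tensor, where S_ij = 0.5 * (du_i/dx_j + du_j/dx_i)
    is the strain-rate tensor and |S| = sqrt(2 * S_ij * S_ij)."""
    sxx, syy = dudx, dvdy
    sxy = 0.5 * (dudy + dvdx)
    s_mag = math.sqrt(2.0 * (sxx**2 + syy**2 + 2.0 * sxy**2))
    return (cs * delta) ** 2 * s_mag
```

The RL formulation in the abstract effectively replaces the fixed constant Cs with a spatio-temporally varying value predicted from the local flow state.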
Reformulating the history matching problem from a least-square mathematical optimization problem into a Markov Decision Process introduces a method in which reinforcement learning can be utilized to solve the problem. This method provides a mechanism where an artificial deep neural network agent can interact with the reservoir simulator and find multiple different solutions to the problem. Such formulation allows for solving the problem in parallel by launching multiple concurrent environments enabling the agent to learn simultaneously from all the environments at once, achieving significant speed up.
Deep reinforcement learning (DRL) is employed to develop autonomously optimized and custom-designed heat-treatment processes that are both microstructure-sensitive and energy-efficient. Unlike conventional supervised machine learning, DRL does not rely solely on static neural-network training from data; instead, a learning agent autonomously develops optimal solutions, with little or no supervision, on the basis of reward and penalty elements. In our approach, a temperature-dependent Allen-Cahn model for phase transformation serves as the environment for the DRL agent: the model world in which it gains experience and takes autonomous decisions. The agent of the DRL algorithm controls the temperature of the system, which acts as a model furnace for the heat treatment of alloys. Microstructure goals are defined for the agent in terms of the desired phase microstructure. After training, the agent can generate temperature-time profiles for a variety of initial microstructure states to reach the final desired microstructure state. The agent's performance and the physical meaning of the heat-treatment profiles are investigated in detail. In particular, the agent is able to control the temperature so as to reach the desired microstructure starting from a variety of initial conditions. This capability of the agent in handling a variety of conditions paves the way for using the approach also for recycling-oriented heat-treatment process design, where the initial composition can vary from batch to batch owing to the intrusion of impurities, and for the design of energy-efficient heat treatments. To test this hypothesis, an agent without penalties was compared with an agent that considers energy costs. The penalty on energy costs serves as an additional criterion for the agent in finding the optimal temperature-time profile.
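The Allen-Cahn environment above evolves a phase-field order parameter toward the wells of a double-well potential. A minimal 1D explicit-time-step sketch (our own illustration with the standard potential f(phi) = (phi^2 - 1)^2 / 4; the paper's temperature dependence is not modeled here):

```python
def allen_cahn_step(phi, dt, dx, eps, mobility=1.0):
    """One explicit step of the 1D Allen-Cahn equation
    phi_t = M * (eps^2 * phi_xx - f'(phi)), with f'(phi) = phi^3 - phi
    from the double-well potential, and periodic boundaries. The wells
    phi = -1 and phi = +1 represent the two phases."""
    n = len(phi)
    out = [0.0] * n
    for i in range(n):
        lap = (phi[(i + 1) % n] - 2.0 * phi[i] + phi[i - 1]) / dx**2
        out[i] = phi[i] + dt * mobility * (eps**2 * lap - (phi[i]**3 - phi[i]))
    return out
```

In the DRL setting sketched by the abstract, the agent's temperature action would modulate terms of this evolution (e.g., the mobility or the well depths), and the reward would measure closeness to the target phase microstructure.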
Cooperative multi-agent reinforcement learning (MARL) has achieved significant results, most notably by leveraging the representation-learning abilities of deep neural networks. However, large centralized approaches quickly become infeasible as the number of agents scales, and fully decentralized approaches can miss important opportunities for information sharing and coordination. Furthermore, not all agents are equal -- in some cases, individual agents may not even have the ability to send communication to other agents or explicitly model other agents. This paper considers the case where there is a single, powerful, \emph{central agent} that can observe the entire observation space, and there are multiple, low-powered \emph{local agents} that can only receive local observations and are not able to communicate with each other. The central agent's job is to learn what message needs to be sent to different local agents based on the global observations, not by centrally solving the entire problem and sending action commands, but by determining what additional information an individual agent should receive so that it can make a better decision. In this work we present our MARL algorithm \algo, describe where it would be most applicable, and implement it in the cooperative navigation and multi-agent walker domains. Empirical results show that 1) learned communication does indeed improve system performance, 2) results generalize to heterogeneous local agents, and 3) results generalize to different reward structures.
We extend the Remember and Forget Experience Replay (ReF-ER) algorithm to multi-agent reinforcement learning (MARL). ReF-ER was shown to outperform state-of-the-art algorithms for continuous control in problems ranging from OpenAI Gym to complex fluid flows. In MARL, the dependencies between agents are included in the state-value estimator, and the environment dynamics are modeled via the importance weights used by ReF-ER. In collaborative settings, we find the best performance when the value is estimated using individual rewards and the effects of the other agents' actions on the transition map are ignored. We benchmark the performance of ReF-ER MARL on the Stanford Intelligent Systems Laboratory (SISL) environments. We find that employing a single feed-forward neural network for the policy and the value function in ReF-ER MARL outperforms state-of-the-art algorithms that rely on complex neural-network architectures.
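The importance weights central to ReF-ER are used to classify stored experiences as "near-policy" or "far-policy" and to down-weight the latter. A minimal sketch of that classification rule (our own illustration of the published ReF-ER criterion; the cutoff parameter name is ours):

```python
def refer_near_policy(ratios, c_max):
    """Classify replay samples by their importance ratio
    r = pi_current(a|s) / pi_behavior(a|s). ReF-ER treats a sample as
    near-policy, and hence usable for gradient updates, only if
    1/c_max < r < c_max; samples outside this band are 'forgotten'."""
    return [1.0 / c_max < r < c_max for r in ratios]
```

Filtering by this band keeps the effective training distribution close to the current policy, which is what gives ReF-ER its stability in continuous control.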
Process synthesis is undergoing a disruptive transformation accelerated by digitization and artificial intelligence. We propose a reinforcement-learning algorithm for chemical process design based on a state-of-the-art actor-critic logic. Our proposed algorithm represents chemical processes as graphs and uses graph convolutional neural networks to learn from process graphs. In particular, the graph neural networks are implemented within the agent architecture to process the states and make decisions. Moreover, we implement a hierarchical and hybrid decision-making process to generate flowsheets, in which unit operations are placed iteratively as discrete decisions and the corresponding design variables are selected as continuous decisions. We demonstrate the potential of our method to design economically viable flowsheets in an illustrative case study comprising equilibrium reactions, azeotropic separation, and recycles. The results show quick learning in the discrete, continuous, and hybrid action spaces. Owing to the flexible architecture of the proposed reinforcement-learning agent, the method is predestined to include large action-state spaces and interfaces to process simulators in future research.
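A single graph-convolution layer of the kind used to embed the process graph can be sketched as H = ReLU(A_hat X W), with A_hat a (normalized) adjacency matrix including self-loops, X the node features (unit operations), and W learned weights. This is a generic GCN layer, not the paper's specific architecture:

```python
def matmul(A, B):
    """Plain list-of-lists matrix product."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def gcn_layer(A_hat, X, W):
    """One graph-convolution layer: H = ReLU(A_hat @ X @ W). Each node's new
    feature vector is a learned transform of the aggregated features of its
    graph neighborhood, so the layer works for flowsheets of any size."""
    H = matmul(matmul(A_hat, X), W)
    return [[max(0.0, h) for h in row] for row in H]
```

Stacking such layers lets information propagate along the flowsheet's stream connections before the actor and critic heads read out decisions.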
Compared with model-based control and optimization methods, reinforcement learning (RL) provides a data-driven, learning-based framework to formulate and solve sequential decision-making problems. The RL framework has become promising due to largely improved data availability and computing power in the aviation industry. Many aviation-based applications can be formulated or treated as sequential decision-making problems. Some of them are offline planning problems, while others need to be solved online and are safety-critical. In this survey paper, we first describe standard RL formulations and solutions. Then we survey the landscape of existing RL-based applications in aviation. Finally, we summarize the paper, identify the technical gaps, and suggest future directions of RL research in aviation.
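The "standard RL formulations and solutions" surveyed above start from value-based methods such as tabular Q-learning. A minimal sketch of its one-step update rule (textbook material, included only as a concrete anchor for the survey's scope):

```python
def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.99):
    """Tabular Q-learning update:
    Q(s, a) <- Q(s, a) + alpha * (r + gamma * max_a' Q(s', a') - Q(s, a)).
    Q is a list of per-state lists of action values; returns the new Q(s, a)."""
    target = r + gamma * max(Q[s_next])
    Q[s][a] += alpha * (target - Q[s][a])
    return Q[s][a]
```

Offline planning problems in aviation can often be solved with such value-iteration-style updates on a model, while the safety-critical online problems the survey highlights require the more careful, constrained formulations discussed there.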
Obstacle avoidance for small unmanned aircraft is vital for the safety of future urban air mobility (UAM) and unmanned aircraft system (UAS) traffic management (UTM). There are many techniques for real-time robust drone guidance, but many of them solve in discretized airspace and control, which would require an additional path-smoothing step to provide flexible commands for UAS. To provide operationally safe and efficient computational guidance for unmanned aircraft, we explore the use of a deep reinforcement learning algorithm based on proximal policy optimization (PPO) to guide an autonomous UAS to its destination while avoiding obstacles through continuous control. The proposed scenario state representation and reward function can map the continuous state space to continuous control for both heading angle and speed. To verify the performance of the proposed learning framework, we conducted numerical experiments with both static and moving obstacles. Uncertainties associated with the environments and the safety operation bounds are investigated in detail. Results show that the proposed model can provide accurate and robust guidance and resolve conflicts with a success rate of over 99%.
The combination of deep learning and reinforcement learning (RL) has led to a series of impressive feats, with many believing that (deep) RL provides a path toward generally capable agents. However, the success of RL agents is often highly sensitive to design choices in the training process, which may require tedious and error-prone manual tuning. This makes it challenging to use RL for new problems and also limits its full potential. In many other areas of machine learning, AutoML has shown that it is possible to automate such design choices, and it has also yielded promising initial results when applied to RL. However, automated reinforcement learning (AutoRL) involves not only standard applications of AutoML but also additional challenges unique to RL that naturally produce a different set of methods. As such, AutoRL has been emerging as an important area of research in RL, providing promise in a variety of applications, from RNA design to playing games such as Go. Given the diversity of methods and environments considered in RL, much of the research has been conducted in distinct subfields, ranging from meta-learning to evolution. In this survey, we seek to unify the field of AutoRL, provide a common taxonomy, discuss each area in detail, and pose open problems of interest to researchers going forward.
Smart traffic lights in smart cities can optimally reduce traffic congestion. In this study, we employ reinforcement learning to train a control agent for the traffic lights in the Simulation of Urban MObility (SUMO) simulator. Owing to discrepancies among existing works, a policy-based deep reinforcement learning method, proximal policy optimization (PPO), is employed in addition to value-based methods such as the deep Q-network (DQN) and double DQN (DDQN). First, the optimal policy obtained with PPO is compared with those from DQN and DDQN, and the policy from PPO is found to be better than the others. Next, instead of fixed-interval traffic-light phases, we adopt light phases with variable time intervals, which lead to a better policy for passing the traffic flow. Then, the effects of environment and action disturbances are studied to demonstrate that the learning-based controller is robust. Finally, we consider unbalanced traffic flows and find that the smart traffic light can perform moderately well in the unbalanced traffic scenario, even though it learns the optimal policy only from the balanced traffic scenario.
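The DDQN variant mentioned above differs from DQN only in how the bootstrap target is formed: the online network selects the next action while the target network evaluates it, reducing the overestimation bias of plain DQN. A one-transition sketch (generic Double DQN, not the paper's implementation):

```python
def ddqn_target(q_online_next, q_target_next, reward, gamma=0.99, done=False):
    """Double-DQN bootstrap target for one transition:
    y = r + gamma * Q_target(s', argmax_a Q_online(s', a)),
    or just r if the episode terminated at s'."""
    if done:
        return reward
    a_star = max(range(len(q_online_next)), key=lambda a: q_online_next[a])
    return reward + gamma * q_target_next[a_star]
```

Plain DQN would instead use max(q_target_next) directly, which couples action selection and evaluation in the same (noisy) network and tends to overestimate values.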