The decarbonization of buildings presents new challenges for the reliability of the electrical grid, as a result of the intermittency of renewable energy sources and the increase in grid load brought about by end-use electrification. To restore reliability, grid-interactive efficient buildings can provide flexibility services to the grid through demand response. Residential demand response programs are hindered by the need for manual intervention by customers. To maximize the energy flexibility potential of residential buildings, an advanced control architecture is needed. Reinforcement learning (RL) is well-suited for the control of flexible resources because, unlike expert systems, it can adapt to unique building characteristics. Yet, factors hindering the adoption of RL in real-world applications include its large data requirements for training, control security, and generalizability. Here we address these challenges by proposing the MERLIN framework and using a digital twin of a real-world, 17-building grid-interactive residential community in CityLearn. We show that 1) independent RL controllers for batteries improve building- and district-level KPIs compared to a reference rule-based controller (RBC) by tailoring their policies to individual buildings, 2) despite unique occupant behaviours, transferring the RL policy of any one building to other buildings provides comparable performance while reducing the cost of training, and 3) training RL controllers on limited temporal data that does not capture the full seasonality of occupant behaviour has little effect on performance. Although the zero-net-energy (ZNE) condition of the buildings may be maintained or worsened by controlled batteries, KPIs that are typically improved by the ZNE condition (electricity price and carbon emissions) are further improved when the batteries are managed by an advanced controller.
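As a concrete illustration of the policy-transfer result above, the sketch below trains nothing and simply evaluates a battery policy from one building on another building's load profile. The `BuildingEnv` stub, observation layout and heuristic policy are assumptions standing in for CityLearn and a trained RL controller, not the paper's MERLIN implementation.

```python
# Minimal sketch of cross-building policy transfer for battery control.
import numpy as np

class BuildingEnv:
    """Toy single-building environment: observation = (hour, net load, SoC)."""
    def __init__(self, load_profile):
        self.load = load_profile  # 24-hour net electricity demand (kWh)
        self.reset()

    def reset(self):
        self.t, self.soc = 0, 0.5
        return np.array([self.t, self.load[self.t], self.soc])

    def step(self, action):  # action in [-1, 1]: battery discharge/charge rate
        self.soc = float(np.clip(self.soc + 0.1 * action, 0.0, 1.0))
        grid_draw = max(self.load[self.t] + 0.1 * action, 0.0)
        reward = -grid_draw  # minimize electricity drawn from the grid
        self.t += 1
        done = self.t >= 24
        obs = None if done else np.array([self.t, self.load[self.t], self.soc])
        return obs, reward, done

def evaluate(policy, env):
    obs, total, done = env.reset(), 0.0, False
    while not done:
        obs, r, done = env.step(policy(obs))
        total += r
    return total

# A policy trained on building A (here a fixed heuristic standing in for a
# trained RL policy) is evaluated zero-shot on building B's load profile.
policy_a = lambda obs: -1.0 if 17 <= obs[0] <= 21 else 0.5  # discharge at peak
building_b = BuildingEnv(np.random.default_rng(0).uniform(0.5, 2.0, 24))
print("Transferred-policy return on building B:", evaluate(policy_a, building_b))
```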
Energy consumption in buildings, both residential and commercial, accounts for approximately 40% of all energy usage in the U.S., and similar numbers are being reported from countries around the world. This significant amount of energy is used to maintain a comfortable, secure, and productive environment for the occupants. It is therefore crucial that energy consumption in buildings be optimized while maintaining satisfactory levels of occupant comfort, health, and safety. Recently, machine learning has proven to be an invaluable tool in deriving important insights from data and optimizing various systems. In this work, we review the ways in which machine learning has been leveraged to make buildings smart and energy-efficient. For the convenience of readers, we provide a brief introduction to several machine learning paradigms and to the components and functioning of each smart building system we cover. Finally, we discuss the challenges faced when implementing machine learning algorithms in smart buildings and provide future avenues for research at the intersection of smart buildings and machine learning.
This paper presents a multi-agent deep reinforcement learning (DRL) framework for autonomous control and integration of renewable energy resources into smart power grid systems. In particular, the proposed framework jointly considers demand response (DR) and distributed energy management (DEM) for residential end-users. DR has a widely recognized potential for improving power grid stability and reliability while reducing end-users' energy bills. However, conventional DR techniques come with several shortcomings, such as the inability to handle operational uncertainties and the disutility they incur for end-users, which have prevented widespread adoption in real-world applications. The proposed framework addresses these shortcomings by implementing DR and DEM based on a real-time pricing strategy achieved through deep reinforcement learning. Furthermore, the framework enables the power grid service provider to leverage distributed energy resources (i.e., PV rooftop panels and battery storage) as dispatchable assets that support the smart grid during peak hours, thus achieving distributed energy management. Simulation results based on the Deep Q-Network (DQN) demonstrate significant improvements in the 24-hour accumulated profit for both prosumers and the power grid service provider, as well as major reductions in the utilization of the power grid's reserve generators.
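Since the framework's results are based on the Deep Q-Network, a minimal sketch of the DQN temporal-difference update is given below. The state/action dimensions, network sizes and hyperparameters are illustrative assumptions, not the paper's configuration.

```python
# Minimal sketch of the DQN update underlying this kind of DR framework.
import torch
import torch.nn as nn

state_dim, n_actions, gamma = 4, 5, 0.99  # e.g. (hour, price, PV, SoC) -> discrete price/charge levels
q_net = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(), nn.Linear(64, n_actions))
target_net = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(), nn.Linear(64, n_actions))
target_net.load_state_dict(q_net.state_dict())
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)

def dqn_update(batch):
    s, a, r, s_next, done = batch
    q = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():  # bootstrapped target from the frozen network
        target = r + gamma * (1 - done) * target_net(s_next).max(dim=1).values
    loss = nn.functional.mse_loss(q, target)
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()

# Dummy transition batch standing in for real prosumer/grid interactions.
batch = (torch.randn(32, state_dim), torch.randint(n_actions, (32,)),
         torch.randn(32), torch.randn(32, state_dim), torch.zeros(32))
print("TD loss:", dqn_update(batch))
```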
In this paper, multi-agent reinforcement learning is used to control a hybrid energy storage system, reducing a microgrid's energy costs by maximizing the value of renewable energy and energy trading. The agents must learn to control three different types of energy storage system under fluctuating demand, dynamic wholesale energy prices, and unpredictable renewable generation. Two case studies are considered: the first examines how the energy storage systems can better integrate renewable generation under dynamic pricing; the second examines how these same agents can be used alongside an aggregator to sell energy to self-interested external microgrids and thereby reduce their own energy bills. This work finds that centralized learning with decentralized execution using the multi-agent deep deterministic policy gradient, and its state-of-the-art variants, allows the multi-agent approach to perform significantly better than control by a single global agent. It is also found that using separate reward functions in the multi-agent approach performs better than using a single control agent, and that being able to trade with other microgrids, rather than only selling back to the utility grid, also substantially increases the microgrid's savings.
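The centralized-learning/decentralized-execution structure mentioned above can be made concrete with a minimal sketch: each storage agent's actor sees only its local observation, while a per-agent critic (enabling the separate reward functions the work found beneficial) is evaluated on the joint observations and actions. All dimensions are illustrative assumptions.

```python
# Sketch of MADDPG's centralized-critic structure for multiple storage agents.
import torch
import torch.nn as nn

n_agents, obs_dim, act_dim = 3, 6, 1  # e.g. three different storage types

actors = [nn.Sequential(nn.Linear(obs_dim, 32), nn.ReLU(),
                        nn.Linear(32, act_dim), nn.Tanh()) for _ in range(n_agents)]
critics = [nn.Sequential(nn.Linear(n_agents * (obs_dim + act_dim), 64), nn.ReLU(),
                         nn.Linear(64, 1)) for _ in range(n_agents)]

local_obs = [torch.randn(1, obs_dim) for _ in range(n_agents)]
actions = [actor(o) for actor, o in zip(actors, local_obs)]   # decentralized execution
joint = torch.cat(local_obs + actions, dim=1)                 # centralized training input
q_values = [critic(joint) for critic in critics]              # one critic per agent,
print([q.item() for q in q_values])                           # enabling separate rewards
```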
Heating in private households is a major contributor to today's emissions. Heat pumps are a promising alternative for heat generation and a key technology for achieving the goals of the German energy transition and reducing dependence on fossil fuels. Today, the majority of heat pumps in the field are controlled by a simple heating curve, a naive mapping of the current outdoor temperature to a control action. A more advanced approach is model predictive control (MPC), which has been applied to heat pump control in multiple research works. However, MPC depends heavily on the building model, which brings several disadvantages. Motivated by this, and by recent breakthroughs in the field, this work applies deep reinforcement learning (DRL) to heat pump control in a simulated environment. Through a comparison with MPC, we show that DRL can be applied in a model-free manner to achieve MPC-like performance. This work extends previous applications of DRL to building heating operation by performing an in-depth analysis of the learned control strategies and by giving a detailed comparison of the two state-of-the-art control methods.
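To make the contrast concrete, the sketch below implements the naive heating curve named above next to a stand-in for a learned policy that also reacts to indoor temperature and time of day. The curve parameters, observation scaling and weights are illustrative assumptions, not values from the paper.

```python
# Sketch contrasting the conventional heating curve with a learned policy.
import numpy as np

def heating_curve(t_out, slope=-0.8, offset=32.0, t_min=25.0, t_max=55.0):
    """Naive controller: map outdoor temperature to a supply-water setpoint."""
    return float(np.clip(offset + slope * t_out, t_min, t_max))

def drl_policy(obs, weights):
    """Stand-in for a trained DRL actor: the setpoint also reacts to indoor
    temperature and time of day, not just the outdoor temperature."""
    return float(np.clip(25.0 + 30.0 / (1.0 + np.exp(-obs @ weights)), 25.0, 55.0))

t_out, t_in, hour = -5.0, 20.5, 7.0
print("heating curve setpoint:", heating_curve(t_out))
print("DRL setpoint:          ",
      drl_policy(np.array([t_out, t_in, hour]) / 24.0, np.array([-0.6, -0.2, 0.3])))
```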
Smart energy networks provide an effective means of accommodating high penetrations of variable renewable energy sources such as solar and wind, which are key to the deep decarbonization of energy production. However, given the variability of renewable generation as well as of energy demand, effective control and energy storage schemes must be developed to manage the variable energy output and achieve the desired system economics and environmental goals. In this paper, we introduce a hybrid energy storage system composed of battery and hydrogen storage to handle the uncertainties related to electricity prices, renewable energy production, and consumption. We aim to improve renewable energy utilization and minimize energy costs and carbon emissions while ensuring energy reliability and stability within the network. To achieve this, we propose a multi-agent deep deterministic policy gradient approach, a reinforcement-learning-based control strategy that optimizes the scheduling of the hybrid energy storage system and energy demand in real time. The proposed approach is model-free and does not require explicit knowledge of, or a rigorous mathematical model of, the smart energy network environment. Simulation results based on real-world data show that: (i) integrated and optimized operation of the hybrid energy storage system and energy demand reduces carbon emissions by 78.69%, improves cost savings by 23.5%, and improves renewable energy utilization by more than 13.2% compared with other baseline models, and (ii) the proposed algorithm outperforms state-of-the-art self-learning algorithms such as the Deep Q-Network.
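A reward of the kind this abstract optimizes can be sketched as a weighted combination of energy cost, carbon emissions and renewable utilization; the weights and exact terms below are assumptions for illustration, not the paper's formulation.

```python
# Sketch of a multi-objective reward: penalize energy cost and carbon
# emissions from grid draw, reward utilization of available renewables.
def hybrid_storage_reward(grid_kwh, price, carbon_intensity,
                          renewable_used_kwh, renewable_avail_kwh,
                          w_cost=1.0, w_carbon=0.5, w_renew=0.3):
    cost = price * grid_kwh                  # $ paid for grid electricity
    emissions = carbon_intensity * grid_kwh  # kgCO2 attributable to grid draw
    utilization = renewable_used_kwh / max(renewable_avail_kwh, 1e-6)
    return -w_cost * cost - w_carbon * emissions + w_renew * utilization

print(hybrid_storage_reward(grid_kwh=3.0, price=0.25, carbon_intensity=0.4,
                            renewable_used_kwh=1.8, renewable_avail_kwh=2.0))
```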
Reinforcement learning (RL) is a promising optimal control technique for multi-energy management systems. It does not require a model a priori, reducing up-front and ongoing project-specific engineering effort, and is capable of learning better representations of the underlying system dynamics. However, vanilla RL does not provide constraint-satisfaction guarantees, resulting in various potentially unsafe interactions in safety-critical environments. In this paper, we present two novel safe RL methods, SafeFallback and GiveSafe, in which the safety-constraint formulation is decoupled from the RL formulation and hard-constraint satisfaction is guaranteed both during training (exploration) and during exploitation of the (near-)optimal policy. In a simulated multi-energy-system case study, we show that both methods start with a significantly higher utility (i.e., useful policy) than a vanilla RL benchmark (94.6% and 82.8% versus 35.5%), and that the proposed SafeFallback method can even outperform the vanilla RL benchmark (102.9% versus 100%). We conclude that both methods are viable safety-constraint-handling techniques applicable beyond RL, as demonstrated with random agents, while still providing hard guarantees. Finally, we propose fundamental future work on, among other things, improving the constraint functions themselves as more data become available.
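The decoupling of the safety-constraint formulation from the RL formulation can be sketched as a wrapper that vets each proposed action against a hard-constraint function and substitutes a known-safe fallback on violation. The battery constraint and fallback rule below are illustrative assumptions, not the paper's formulation.

```python
# Sketch of decoupled safety: a constraint check sits between the RL agent
# and the plant, guaranteeing hard-constraint satisfaction during both
# training and deployment, regardless of what the agent proposes.
def is_safe(action, state):
    """Hard constraint: power within limits and SoC stays in [0, 1]."""
    soc, p_max = state["soc"], state["p_max"]
    return abs(action) <= p_max and 0.0 <= soc + 0.1 * action <= 1.0

def safe_fallback(action, state, fallback=0.0):
    """Pass the RL action through if safe; otherwise apply the fallback."""
    return action if is_safe(action, state) else fallback

state = {"soc": 0.95, "p_max": 1.0}
print(safe_fallback(0.9, state))   # would overcharge -> fallback 0.0 applied
print(safe_fallback(-0.5, state))  # safe discharge -> passed through
```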
Ongoing risks from climate change have impacted the livelihood of global nomadic communities and are likely to lead to increased migratory movements in coming years. As a result, mobility considerations are becoming increasingly important in energy systems planning, particularly for achieving energy access in developing countries. Advanced plug-and-play control strategies have recently been developed with such a decentralized framework in mind, more easily allowing for the interconnection of nomadic communities, both to each other and to the main grid. In light of the above, this work investigates the design and planning strategy of a mobile multi-energy supply system for a nomadic community. Motivated by the scale and dimensionality of the associated uncertainties, which impact all major design and decision variables over the 30-year planning horizon, deep reinforcement learning (DRL) is implemented for the design and planning problem tackled. DRL-based solutions are benchmarked against several rigid baseline design options to compare expected performance under uncertainty. The results of a case study on ger communities in Mongolia suggest that mobile nomadic energy systems can be both technically and economically feasible, particularly when flexibility is considered, although the degree of spatial dispersion among households is an important limiting factor. Key economic, sustainability and resilience indicators such as cost, equivalent emissions and total unmet load are measured, suggesting potential improvements over the available baselines of up to 25%, 67% and 76%, respectively. Finally, the decomposition of the values of flexibility and plug-and-play operation is presented using a variation of real options theory, with important implications for both nomadic communities and policymakers focused on enabling their energy access.
This paper is a technical overview of DeepMind and Google's recent work on reinforcement learning for controlling commercial cooling systems. Building on expertise that began with cooling Google's data centers more efficiently, we recently conducted live experiments on two real-world facilities in partnership with Trane Technologies, a building management system provider. These live experiments had a variety of challenges in areas such as evaluation, learning from offline data, and constraint satisfaction. Our paper describes these challenges in the hope that awareness of them will benefit future applied RL work. We also describe the way we adapted our RL system to deal with these challenges, resulting in energy savings of approximately 9% and 13% respectively at the two live experiment sites.
We present the PowerGridWorld software package, which provides users with a lightweight, modular, and customizable framework for creating power-systems-focused multi-agent Gym environments that readily integrate with existing training frameworks for reinforcement learning (RL). Although many frameworks exist for training multi-agent RL (MARL) policies, none enable rapid prototyping and development of environments, especially in the context of heterogeneous (composite, multi-device) power systems where power-flow solutions are required to define grid-level variables and costs. PowerGridWorld is an open-source software package that helps fill this gap. To highlight PowerGridWorld's key features, we present two case studies and demonstrate learning MARL policies using both OpenAI's multi-agent deep deterministic policy gradient (MADDPG) and RLlib's proximal policy optimization (PPO) algorithms. In both cases, at least some subset of the agents incorporates elements of the power-flow solution at each time step as part of their reward (negative cost) structures.
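The kind of per-agent observation/action/cost interface such a package exposes can be sketched generically as below. This is not PowerGridWorld's actual API, just an illustration of a multi-agent, Gym-style power-system environment in which a grid-level quantity couples the agents' rewards.

```python
# Generic sketch of a multi-agent, gym-style grid environment with
# per-agent observations and rewards coupled through a shared grid state.
import numpy as np

class MultiAgentGridEnv:
    def __init__(self, agent_ids):
        self.agent_ids = agent_ids

    def reset(self):
        return {a: np.random.rand(3) for a in self.agent_ids}  # per-agent obs

    def step(self, actions):
        # A real implementation would run a power-flow solve here and derive
        # grid-level variables (e.g. bus voltages) shared across agents.
        voltage_penalty = abs(sum(actions.values())) * 0.1
        obs = {a: np.random.rand(3) for a in self.agent_ids}
        rewards = {a: -abs(u) - voltage_penalty for a, u in actions.items()}
        return obs, rewards, {a: False for a in self.agent_ids}, {}

env = MultiAgentGridEnv(["pv_1", "battery_1", "ev_charger_1"])
obs = env.reset()
obs, rewards, dones, info = env.step({a: 0.2 for a in env.agent_ids})
print(rewards)
```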
As an efficient way to integrate multiple distributed energy resources (DERs) and the user side, a microgrid mainly faces the problems of the small-scale volatility, uncertainty and intermittency of DERs, together with demand-side uncertainty. The traditional microgrid has a single form and cannot accommodate flexible energy dispatch between a complex demand side and the microgrid. In response to this problem, an overall environment comprising wind power, thermostatically controlled loads (TCLs), energy storage systems (ESSs), price-responsive loads and the main grid is proposed. Centralized control of microgrid operation is convenient for controlling the reactive power and voltage of the distributed power supply and for adjusting the grid frequency. However, flexible loads then aggregate and generate new peaks during electricity-price valleys. Existing research considers the power constraints of the microgrid but fails to ensure a sufficient supply of electric energy to each individual flexible load. This paper considers the response priority of each unit component of the TCLs and ESSs on the basis of the overall operating environment of the microgrid, so as to ensure the power supply of the microgrid's flexible loads and minimize the power input cost. Finally, the simulation optimization of the environment is expressed as a Markov decision process, and the training process combines offline and online stages. Because learning with scarce historical data is inefficient, multiple threads are added, and an asynchronous advantage actor-critic with an experience replay memory is used to address the data-correlation and non-stationary-distribution problems during training.
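The actor-critic-with-replay variant described above can be sketched as an advantage actor-critic update computed on minibatches drawn from a replay memory. Network sizes, the microgrid state layout, and the omission of multi-threading and off-policy corrections are simplifications for illustration.

```python
# Sketch of an advantage actor-critic update fed from a replay memory.
import torch
import torch.nn as nn

state_dim, n_actions, gamma = 8, 4, 0.99
actor = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(), nn.Linear(64, n_actions))
critic = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(list(actor.parameters()) + list(critic.parameters()), lr=1e-3)

def a2c_update_from_replay(s, a, r, s_next, done):
    v, v_next = critic(s).squeeze(1), critic(s_next).squeeze(1).detach()
    advantage = r + gamma * (1 - done) * v_next - v
    log_prob = torch.log_softmax(actor(s), dim=1).gather(1, a.unsqueeze(1)).squeeze(1)
    # Policy term uses the detached advantage; the squared term trains the critic.
    loss = -(log_prob * advantage.detach()).mean() + advantage.pow(2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# Minibatch sampled from a replay memory (random tensors stand in here).
batch = (torch.randn(32, state_dim), torch.randint(n_actions, (32,)),
         torch.randn(32), torch.randn(32, state_dim), torch.zeros(32))
print("loss:", a2c_update_from_replay(*batch))
```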
Heating and cooling systems in buildings account for 31% of global energy use, much of which is regulated by rule-based controllers (RBCs) that neither maximize energy efficiency nor minimize emissions by interacting optimally with the grid. Control via reinforcement learning (RL) has been shown to significantly improve building energy efficiency, but existing solutions require access to building-specific simulators or data, which cannot be expected for every building in the world. In response, we show that emission-reducing policies can be obtained without such knowledge, a paradigm we call zero-shot building control. We combine ideas from system identification and model-based RL to create PEARL (Probabilistic Emission-Abating Reinforcement Learning), and show that a short period of active exploration is all that is needed to build a performant model. In experiments across three varied building energy simulations, we show that PEARL outperforms an existing RBC and popular RL baselines in all cases, reducing building emissions by as much as 31% while maintaining thermal comfort. Our source code is available online at https://enjeener.io/projects/pearl.
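The two-step recipe above — short active exploration for system identification, then model-based control — can be sketched as below, with ordinary least squares and a one-step discrete search standing in for the paper's probabilistic model and planner; all dynamics and quantities are illustrative assumptions.

```python
# Sketch of system identification followed by model-based control.
import numpy as np

rng = np.random.default_rng(0)

# 1) System identification from a short period of active exploration:
#    fit next indoor temperature from (indoor temp, outdoor temp, action).
X = rng.normal(size=(200, 3))
true_w = np.array([0.9, 0.05, 0.8])                 # unknown building dynamics
y = X @ true_w + rng.normal(scale=0.01, size=200)
w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)       # fitted dynamics model

# 2) Model-based control: choose the heating action whose predicted next
#    temperature stays in the comfort band at the least energy use.
def plan(t_in, t_out, actions, comfort=(20.0, 24.0)):
    def cost(a):
        t_next = np.array([t_in, t_out, a]) @ w_hat
        return a + 100.0 * (t_next < comfort[0] or t_next > comfort[1])
    return min(actions, key=cost)

print("chosen heating action:", plan(22.0, 5.0, np.linspace(0.0, 1.0, 11)))
```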
The high emissions and low energy efficiency of internal combustion engines (ICEs) have become unacceptable under environmental regulations and the energy crisis. As a promising alternative solution, multi-power source electric vehicles (MPS-EVs) introduce different clean energy systems to improve powertrain efficiency. The energy management strategy (EMS) is a critical technology for MPS-EVs to maximize efficiency, fuel economy, and range. Reinforcement learning (RL) has become an effective methodology for the development of EMS. RL has received continuous attention and research, yet there is still a lack of systematic analysis of the design elements of RL-based EMS. To this end, this paper presents an in-depth analysis of the current research on RL-based EMS (RL-EMS) and summarizes its design elements. The paper first summarizes previous applications of RL in EMS from five aspects: algorithm, perception scheme, decision scheme, reward function, and innovative training method. The contribution of advanced algorithms to the training effect is shown, the perception and control schemes in the literature are analyzed in detail, different reward function settings are classified, and innovative training methods and their roles are elaborated. Finally, by comparing the development routes of RL and RL-EMS, this paper identifies the gap between advanced RL solutions and existing RL-EMS, and suggests potential directions for implementing advanced artificial intelligence (AI) solutions in EMS.
We present a hybrid industrial cooling-system model that embeds analytical solutions within a multi-physics simulation. The model is designed for reinforcement learning (RL) applications and balances simplicity against simulation fidelity and interpretability. The model's fidelity is evaluated against real-world data from a large-scale cooling system. This is followed by a case study illustrating how the model can be used for RL research. To this end, we develop an industrial task suite that allows different problem settings and levels of complexity to be specified, and use it to evaluate the performance of different RL algorithms.
We consider the problem of demand-side energy management, where each household is equipped with a smart meter capable of scheduling home appliances online. The goal is to minimize the overall cost under a real-time pricing scheme. While previous works have introduced centralized approaches in which the scheduling algorithm has full observability, we propose a formulation of the smart grid environment as a Markov game. Each household is a decentralized agent with partial observability, which enables scalability and privacy preservation in a realistic setting. The price signal produced by the grid operator varies with the energy demand. We propose an extension of a deep reinforcement learning algorithm that addresses partial observability from the agents' perspective. The algorithm learns a centralized critic that coordinates the training of the decentralized agents; our approach thus uses centralized learning but decentralized execution. Simulation results show that our online deep reinforcement learning approach can reduce both the peak-to-average ratio of the total energy consumed and the electricity bills of all households, based purely on instantaneous observations and the price signal.
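The decentralized-execution side of this Markov-game formulation can be sketched as households that each observe only their own appliance backlog plus the broadcast price signal. The observation layout and threshold policy below are illustrative assumptions, and the centralized critic used during training is omitted.

```python
# Sketch of decentralized execution under partial observability: each
# household agent sees only its local state and the broadcast price signal.
import numpy as np

rng = np.random.default_rng(1)

def local_policy(obs):
    """Defer flexible load when the price signal is high (threshold assumed)."""
    backlog_kwh, price = obs
    return min(backlog_kwh, 2.0) if price < 0.3 else 0.0

price_signal = 0.2 + 0.2 * rng.random()                          # operator broadcast
households = {f"home_{i}": rng.uniform(0, 5) for i in range(4)}  # pending kWh
consumption = {h: local_policy((b, price_signal)) for h, b in households.items()}
print("price:", round(price_signal, 3), "| scheduled kWh:", consumption)
```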
Heating and cooling systems in buildings account for 31% of global energy use, much of which is regulated by rule-based controllers (RBCs) that neither maximize energy efficiency nor minimize emissions by interacting optimally with the grid. Control via reinforcement learning (RL) has been shown to significantly improve building energy efficiency, but existing solutions require pre-training in simulators that are prohibitively expensive to obtain for every building in the world. In response, we show that safe, zero-shot control of buildings is possible by combining ideas from system identification and model-based RL. We call this combination PEARL (Probabilistic Emission-Abating Reinforcement Learning) and show that it reduces emissions without pre-training, requiring only a three-hour commissioning period. In experiments across three varied building energy simulations, we show that PEARL outperforms an existing RBC and popular RL baselines in all cases, reducing building emissions by as much as 31% while maintaining thermal comfort.
The rapid adoption of electric vehicles (EVs) calls for the widespread installation of EV charging stations. To maximize the profitability of charging stations, intelligent controllers that provide both charging and grid services are in great demand. However, determining the optimal charging schedule is challenging owing to the uncertain arrival times and charging demands of EVs. In this paper, we propose a novel centralized allocation and decentralized execution (CADE) reinforcement learning (RL) framework to maximize the charging station's profit. In the centralized allocation process, EVs are allocated to either waiting or charging spots. In the decentralized execution process, each charger makes its own charging/discharging decision while learning the action-value function from a shared replay memory. The CADE framework significantly improves the scalability and sample efficiency of the RL algorithm. Numerical results show that the proposed CADE framework is both computationally efficient and scalable, and significantly outperforms a baseline model predictive control (MPC). We also provide an in-depth analysis of the learned action values to explain the inner workings of the reinforcement learning agent.
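The two CADE stages can be sketched as a centralized allocation of EVs to charging or waiting spots, followed by each charger acting greedily from a shared action-value function. The urgency-based allocation rule and tabular Q-function below are illustrative stand-ins, not the paper's learned networks.

```python
# Sketch of centralized allocation + decentralized execution for EV charging.
import numpy as np

rng = np.random.default_rng(2)
n_spots = 3
shared_q = rng.normal(size=(4, 2))  # 4 coarse SoC bins x {charge, discharge}

def allocate(evs, free_spots):
    """Centralized allocation: most urgent EVs (earliest departure) first."""
    queue = sorted(evs, key=lambda ev: ev["departure"])
    return queue[:free_spots], queue[free_spots:]   # charging, waiting

def charger_action(soc):
    """Decentralized execution: each charger acts greedily from the shared Q."""
    soc_bin = min(int(soc * 4), 3)
    return ["charge", "discharge"][int(np.argmax(shared_q[soc_bin]))]

evs = [{"id": i, "departure": rng.integers(1, 10), "soc": rng.random()} for i in range(5)]
charging, waiting = allocate(evs, n_spots)
print({ev["id"]: charger_action(ev["soc"]) for ev in charging},
      "| waiting:", [ev["id"] for ev in waiting])
```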
In replacing fossil fuels with renewable energy sources, the resource imbalance of intermittent wind and photovoltaic (PV) power is a key problem for peer-to-peer (P2P) power trading. To address this problem, this paper introduces reinforcement learning (RL) techniques. For the RL, a graph convolutional network (GCN) and a bidirectional long short-term memory (Bi-LSTM) network are jointly applied to P2P power trading between nanogrid clusters, based on cooperative game theory. Flexible and reliable DC nanogrids are suitable for integrating renewable energy into distribution systems. Each local nanogrid cluster takes on the role of a prosumer, focusing on both power production and consumption. For the power management of the nanogrid clusters, multi-objective optimization is applied to each local cluster using Internet of Things (IoT) technology. Charging/discharging of electric vehicles (EVs) is performed considering the intermittent characteristics of wind and PV generation. RL algorithms such as the deep Q-learning network (DQN), deep recurrent Q-learning network (DRQN), Bi-DRQN, proximal policy optimization (PPO), GCN-DQN, GCN-DRQN, GCN-Bi-DRQN and GCN-PPO are used in the simulations. Consequently, the cooperative P2P power trading system maximizes profit by exploiting the time-of-use (TOU) tariff-based electricity cost and the system marginal price (SMP), and minimizes the amount of power purchased from the grid. The power management of the nanogrid clusters with P2P power trading is simulated in real time on a distribution test feeder, and the proposed GCN-PPO technique reduces the electricity cost of the nanogrid clusters by 36.7%.
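The graph-convolution step by which such methods aggregate neighboring nanogrid clusters' states before the RL policy acts can be sketched in a single layer, as below; the adjacency, features and layer size are illustrative assumptions.

```python
# Minimal sketch of one GCN layer over a nanogrid-cluster trading graph.
import torch
import torch.nn as nn

# 4 nanogrid clusters; edges mark feasible P2P trading links (plus self-loops).
adj = torch.tensor([[1, 1, 0, 1],
                    [1, 1, 1, 0],
                    [0, 1, 1, 1],
                    [1, 0, 1, 1]], dtype=torch.float32)
deg_inv_sqrt = adj.sum(dim=1).pow(-0.5).diag()
a_norm = deg_inv_sqrt @ adj @ deg_inv_sqrt   # symmetric normalization

x = torch.randn(4, 3)                  # per-cluster features: (net load, PV, EV SoC)
gcn_layer = nn.Linear(3, 8)
h = torch.relu(a_norm @ gcn_layer(x))  # aggregated neighborhood embedding
print(h.shape)                         # torch.Size([4, 8]), fed to the RL policy
```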
Modern power systems will have to face difficult challenges in the years to come: frequent blackouts in urban areas caused by high power demand peaks, grid instability exacerbated by intermittent renewable generation, and global climate change amplified by rising carbon emissions. While current practices are increasingly inadequate, the path to widespread adoption of artificial intelligence (AI) methods is hindered by missing aspects of trustworthiness. The CityLearn Challenge is an exemplary opportunity for researchers from multiple disciplines to investigate the potential of AI to tackle these pressing issues in the energy domain, collectively modeled as a reinforcement learning (RL) task. Multiple real-world challenges faced by contemporary RL techniques are embodied in the problem formulation. In this paper, we present a novel method that uses the solution function of optimization as a policy to compute actions for sequential decision-making, while adapting the parameters of the optimization model from online observations. Algorithmically, this is achieved by an evolutionary algorithm under a novel trajectory-based guidance scheme. Formally, the global convergence property is established. Our agent ranked first in the latest 2021 CityLearn Challenge, achieving superior performance in almost all metrics while maintaining some key aspects of interpretability.
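The "solution function of optimization as policy" idea can be sketched with a one-dimensional quadratic whose closed-form minimizer is the action, and whose parameters are adapted online by a simple (1+1) evolutionary step. The program, cost terms and update below are illustrative stand-ins for the paper's method and trajectory-based guidance scheme.

```python
# Sketch of optimization-as-policy with evolutionary parameter adaptation.
import numpy as np

rng = np.random.default_rng(3)
scenarios = [(rng.uniform(0.0, 3.0), rng.uniform(0.1, 0.5)) for _ in range(24)]

def policy(obs, theta):
    """Action = argmin_a theta[0]*(a - demand)^2 + theta[1]*price*a over [0, 3];
    the closed-form minimizer of this 1-D quadratic is the control action."""
    demand, price = obs
    return float(np.clip(demand - theta[1] * price / (2.0 * theta[0]), 0.0, 3.0))

def episode_cost(theta):  # electricity bill plus quadratic discomfort
    return sum(p * policy((d, p), theta) + (policy((d, p), theta) - d) ** 2
               for d, p in scenarios)

theta = np.array([1.0, 1.0])
for _ in range(100):      # (1+1)-ES: keep parameter mutations that reduce cost
    candidate = theta + rng.normal(scale=0.1, size=2)
    if candidate[0] > 0.1 and episode_cost(candidate) < episode_cost(theta):
        theta = candidate
print("adapted parameters:", theta.round(3))
```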
We carefully compare two model-free control algorithms, evolution strategies (ES) and proximal policy optimization (PPO), with receding-horizon model predictive control (MPC) for operating a simulated, price-responsive water heater. Four MPC variants are considered: a one-shot controller with perfect forecasting, which yields optimal control; a limited-horizon controller with perfect forecasting; a mean-forecasting controller; and a two-stage stochastic programming controller using historical scenarios. In all cases, the MPC models of water temperature and electricity price are exact; only the water demand is uncertain. For comparison, ES and PPO learn neural-network-based policies by interacting directly with the simulated environment under the same scenarios used by MPC. All methods are then evaluated on a separate one-week continuation of the demand time series. We demonstrate that optimal control for this problem is challenging, requiring MPC with perfect forecasting and a lookahead of more than 8 hours to attain the lowest cost. Despite this challenge, both ES and PPO learn good general-purpose policies that outperform the mean-forecasting and two-stage stochastic MPC controllers in terms of average cost, and are orders of magnitude faster at computing actions. We show that ES in particular can leverage parallelism, learning a policy in 90 seconds using 1150 CPU cores.
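The parallelism claim above follows from the structure of the evolution-strategies gradient estimate, sketched below: each perturbed policy's episode return can be evaluated independently on a separate worker. The toy fitness function here stands in for the simulated water-heater episode.

```python
# Sketch of the ES gradient estimate: theta += lr/(n*sigma) * sum_i F_i * eps_i.
import numpy as np

rng = np.random.default_rng(4)
theta = np.zeros(10)                 # flattened policy parameters
target = rng.normal(size=10)

def fitness(params):                 # stand-in for one simulated episode return
    return -np.sum((params - target) ** 2)

sigma, lr, n_perturb = 0.1, 0.05, 64
for step in range(200):
    eps = rng.normal(size=(n_perturb, theta.size))
    # Each of these evaluations is independent -> trivially parallel across cores.
    returns = np.array([fitness(theta + sigma * e) for e in eps])
    advantages = (returns - returns.mean()) / (returns.std() + 1e-8)
    theta += lr / (n_perturb * sigma) * eps.T @ advantages
print("final fitness:", round(fitness(theta), 4))
```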