Shared e-mobility services have been widely tested and piloted in cities across the globe, and are being woven into the fabric of modern urban planning. This paper studies a practical yet important problem in such systems: how to deploy and manage their infrastructure across space and time, so that the services are ubiquitous to users while remaining sustainably profitable. However, in real-world systems, evaluating the performance of different deployment strategies and then finding the optimal plan is prohibitively expensive, since it is often infeasible to run many iterations of trial and error. We tackle this by designing a high-fidelity simulation environment, which abstracts the key operational details of shared e-mobility systems at fine granularity and is calibrated with data collected from the real world. This allows us to try out arbitrary deployment plans and learn how they perform in the specific context before actually implementing any of them in the real system. In particular, we propose a novel multi-agent neural search approach, in which we design a hierarchical controller to produce tentative deployment plans. The generated plans are then tested with a multi-simulation paradigm, i.e., evaluated in parallel, and the results are used to train the controller with deep reinforcement learning. With this closed loop, the controller can be steered toward a higher probability of generating better deployment plans in future iterations. The proposed approach has been evaluated extensively in our simulation environment, and experimental results show that it outperforms human-knowledge- and heuristics-based baselines in terms of both service coverage and net revenue.
translated by Google Translate
The electrification of shared mobility has become popular across the globe. Many cities have their new shared e-mobility systems deployed, with continuously expanding coverage from central areas to the city edges. A key challenge in the operation of these systems is fleet rebalancing, i.e., how EVs should be repositioned to better satisfy future demand. This is particularly challenging in the context of expanding systems, because i) the range of the EVs is limited while charging time is typically long, which constrain the viable rebalancing operations; and ii) the EV stations in the system are dynamically changing, i.e., the legitimate targets for rebalancing operations can vary over time. We tackle these challenges by first investigating rich sets of data collected from a real-world shared e-mobility system for one year, analyzing the operation model, usage patterns and expansion dynamics of this new mobility mode. With the learned knowledge we design a high-fidelity simulator, which is able to abstract key operation details of EV sharing at fine granularity. Then we model the rebalancing task for shared e-mobility systems under continuous expansion as a Multi-Agent Reinforcement Learning (MARL) problem, which directly takes the range and charging properties of the EVs into account. We further propose a novel policy optimization approach with action cascading, which is able to cope with the expansion dynamics and solve the formulated MARL. We evaluate the proposed approach extensively, and experimental results show that our approach outperforms the state-of-the-art, offering significant performance gain in both satisfied demand and net revenue.
In this paper, we present a comprehensive, in-depth survey of reinforcement learning approaches to decision and optimization problems in a typical ride-hailing system. Papers on the topics of ride matching, vehicle repositioning, ride-pooling, routing, and dynamic pricing are covered. Most of this literature has appeared in the last few years, and it continues to tackle several core challenges: model complexity, agent coordination, and the joint optimization of multiple levers. We therefore also introduce popular datasets and open simulation environments to facilitate further research and development. Subsequently, we discuss a number of challenges and opportunities for reinforcement learning research in this important field.
In ride-hailing systems with electric fleets, charging is a complex decision-making process. Most electric vehicle (EV) taxi services require drivers to make egoistic decisions, which leads to decentralized, ad-hoc charging strategies. The current state of the mobility system is often unknown to, or not shared among, the vehicles, so optimal decisions cannot be made. Most existing approaches neither combine time, location, and duration into a comprehensive control algorithm nor are suitable for real-time operation. We therefore present a real-time predictive charging method for ride-hailing services with a single operator, called Idle Time Exploitation (ITX), which predicts the periods during which vehicles are idle and exploits these periods to harvest energy. It relies on graph convolutional networks and a linear assignment algorithm to devise an optimal pairing of vehicles and charging stations, so as to maximize the exploited idle time. We evaluated our method through extensive simulation studies on real-world datasets from New York City. The results show that ITX outperforms all baseline methods by at least 5% (equivalent to $70,000 for a 6,000-vehicle operation) with respect to a monetary reward function modeled to replicate the profitability of real-world ride-hailing systems. Moreover, compared with the baseline methods, ITX reduces delays by at least 4.68% and generally improves passenger comfort by facilitating a better spread of customers across the fleet. Our results also show that ITX enables vehicles to harvest energy during the day, stabilizes battery levels, and increases resilience to unexpected surges in demand. Finally, peak loads are reduced by 17.39% compared with the best-performing baseline strategy, which benefits grid operators and paves the way for more sustainable grid usage.
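The pairing step described above can be sketched as a maximum-weight assignment between idle vehicles and charging stations. The snippet below is a minimal illustration, not the paper's ITX implementation: it enumerates assignments directly (viable only for small fleets, where a production system would use the Hungarian algorithm, e.g. `scipy.optimize.linear_sum_assignment`), and the idle-time matrix is invented for illustration.

```python
from itertools import permutations

# idle_minutes[i][j]: predicted idle time vehicle i could spend charging
# at station j (hypothetical numbers, not from the paper's dataset).
idle_minutes = [
    [30, 12, 5],
    [8, 25, 18],
    [20, 9, 27],
]

def best_pairing(weights):
    """Return the assignment of stations to vehicles maximizing total idle time."""
    n = len(weights)
    best_total, best_perm = -1, None
    for perm in permutations(range(n)):  # perm[i] = station assigned to vehicle i
        total = sum(weights[i][perm[i]] for i in range(n))
        if total > best_total:
            best_total, best_perm = total, perm
    return best_total, best_perm

total, pairing = best_pairing(idle_minutes)
# With these numbers the diagonal pairing is optimal: total == 82, pairing == (0, 1, 2)
```

The brute-force loop is O(n!), so it serves only to make the objective concrete; the structure of the cost matrix is what matters.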
The ubiquitous growth of mobility-on-demand services for passenger transport and goods delivery has brought various challenges and opportunities within the realm of transportation systems. Consequently, intelligent transportation systems are being developed to maximize operational profitability, user convenience, and environmental sustainability. The growth of last-mile deliveries alongside ride-sharing calls for an efficient and cohesive system that transports both passengers and goods. Existing approaches use static routing methods that do not consider the dynamics of request demand or the transfer of goods between vehicles during route planning. In this paper, we present a dynamic and demand-aware fleet management framework for combined goods and passenger transportation that is capable of (1) incorporating passengers and drivers into the decision-making process, by allowing drivers to negotiate toward a mutually suitable price that passengers accept or reject; (2) matching goods to vehicles, including multi-hop transfers of goods; (3) dynamically generating optimal routes for each vehicle, based on the insertion cost of requests along their paths; (4) dispatching idle vehicles toward areas of anticipated high passenger and goods demand using deep reinforcement learning (RL); and (5) allowing distributed inference within each vehicle while jointly optimizing fleet objectives. Our proposed model is deployable independently within each vehicle, as this minimizes the computational costs associated with the growth of distributed systems and democratizes decision-making to each individual vehicle. Simulations with a variety of vehicle types, goods, and passenger utilities demonstrate the effectiveness of our approach compared with other methods that do not consider joint load transportation or dynamic multi-hop route planning.
Illegal vehicle parking is a common urban problem faced by major cities around the world, as it incurs air pollution and traffic accidents. Governments rely heavily on active human effort to detect illegal parking events. However, such an approach is extremely inefficient for covering a large city, since the police have to patrol the entire road network. The massive and high-quality shared-bike trajectories from Mobike offer us a unique opportunity to design a ubiquitous illegal parking detection approach, since most illegal parking events happen at curbsides and have a significant impact on the moving patterns of bike users. The detection results can then guide patrol scheduling, i.e., sending patrol police to areas with higher risks of illegal parking, which further improves patrol efficiency. Inspired by this idea, the proposed framework contains three main components: 1)~{\em trajectory preprocessing}, which filters outlier GPS points, performs map matching, and builds trajectory indexes; 2)~{\em illegal parking detection}, which models normal trajectories, extracts features from the trajectories under evaluation, and utilizes a distribution-test-based method to discover illegal parking events; and 3)~{\em patrol scheduling}, which leverages the detection results as a reference context and formulates the scheduling task as a multi-agent reinforcement learning problem to guide the patrol police. Finally, extensive experiments are presented to validate the effectiveness of the illegal parking detection, as well as the improvement in patrol efficiency.
The transition from conventional mobility to electromobility largely depends on the availability and optimal placement of charging infrastructure. This paper examines the optimal placement of charging stations in urban areas. We maximize the charging infrastructure supply over the urban area and minimize waiting, travel, and charging times, while respecting a set budget constraint. Moreover, we include the possibility of charging vehicles at home to obtain a more refined estimate of the actual charging demand throughout the urban area. We formulate the charging station placement problem as a nonlinear integer optimization problem that seeks the optimal positions for charging stations and the optimal numbers of charging piles of different charging types. We design a novel deep reinforcement learning approach to solve the charging station placement problem (PCRL). Extensive experiments on real-world datasets show how PCRL reduces waiting and travel times compared with five baselines, while increasing the benefit of the charging plan. Compared with the existing infrastructure, waiting time can be reduced by up to 97% and the benefit increased by up to 497%.
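To make the budgeted-placement objective concrete, here is a minimal greedy sketch: repeatedly open the candidate site with the best coverage gain per unit cost until the budget runs out. This is an illustrative heuristic only, not the paper's PCRL agent or its nonlinear integer formulation; the sites, costs, and demand zones are invented.

```python
# site -> (cost, set of demand zones the site would cover); hypothetical data
candidate_sites = {
    "A": (4, {1, 2, 3}),
    "B": (3, {3, 4}),
    "C": (2, {5}),
    "D": (5, {1, 4, 5, 6}),
}
budget = 7

def greedy_placement(sites, budget):
    """Greedily pick sites maximizing newly-covered zones per unit cost."""
    chosen, covered = [], set()
    remaining = dict(sites)
    while True:
        best_site, best_ratio = None, 0.0
        for s, (cost, zones) in remaining.items():
            gain = len(zones - covered)
            if cost <= budget and gain / cost > best_ratio:
                best_site, best_ratio = s, gain / cost
        if best_site is None:  # nothing affordable adds coverage
            return chosen, covered
        cost, zones = remaining.pop(best_site)
        budget -= cost
        chosen.append(best_site)
        covered |= zones

chosen, covered = greedy_placement(candidate_sites, budget)
# With these numbers, site "D" (4 zones for cost 5) is opened first and the
# leftover budget of 2 cannot buy further coverage.
```

A greedy rule like this gives a quick feasible baseline; the appeal of an RL approach such as PCRL is precisely that it can account for interactions (waiting, travel, and charging times) that a per-site ratio ignores.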
Mobile parcel lockers (MPLs) have recently been proposed by logistics operators as a technology that can help reduce traffic congestion and operational costs in urban freight distribution. Given their ability to relocate throughout their deployment area, they have the potential to improve customer accessibility and convenience. In this study, we formulate the Mobile Parcel Locker Problem (MPLP), a special case of the Location-Routing Problem (LRP), which determines the optimal stopover locations for MPLs throughout the day and plans the corresponding delivery routes. A Hybrid Q-learning-Network-based Method (HQM) is developed to cope with the computational complexity of the resulting large problem instances while escaping local optima. In addition, HQM is integrated with global and local search mechanisms to resolve the exploration-exploitation dilemma faced by classic reinforcement learning (RL) methods. We examine the performance of HQM under different problem sizes (up to 200 nodes) and benchmark it against a genetic algorithm (GA). Our results indicate that the average reward obtained by HQM is 1.96 times higher than that of the GA, which demonstrates that HQM has better optimization ability. Finally, we identify the key factors that contribute to fleet size requirements, travel distances, and service delays. Our findings indicate that the efficiency of MPLs mainly depends on the length of the time windows and the deployment of MPL stopovers.
We study a stochastic variant of the vehicle routing problem arising in the context of domestic donation-collection services. The problem we consider combines the following attributes. Customers requesting service are variable, in the sense that customers are stochastic and not restricted to a predefined set, as they may appear anywhere in a given service area. Furthermore, demand volumes are stochastic and are only observed upon visiting the customer. The objective is to maximize the expected served demand while meeting vehicle capacity and time restrictions. We call this problem the VRP with a Highly Variable Customer basis and Stochastic Demands (VRP-VCSD). For this problem, we first propose a Markov Decision Process (MDP) formulation representing the classical centralized decision-making perspective, in which a single decision-maker establishes the routes of all vehicles. While the resulting formulation turns out to be intractable, it provides the ground on which we develop a new MDP formulation, which we call partially decentralized. In this formulation, the action space is decomposed by vehicle. The decentralization is incomplete, however, since we enforce identical vehicle-specific policies while optimizing the collective reward. We propose several strategies to reduce the dimensionality of the state and action spaces associated with the partially decentralized formulation. These yield a considerably more tractable problem, which we solve via reinforcement learning. In particular, we develop a Q-learning algorithm called DecQN, featuring state-of-the-art acceleration techniques. We conduct a thorough computational analysis, and the results show that DecQN considerably outperforms three benchmark policies. Moreover, we show that our approach can compete with specialized methods developed for the particular case of the VRP-VCSD in which customer locations and expected demands are known in advance.
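At the core of any DecQN-style learner is the tabular Q-learning update; the sketch below shows just that update on toy states and actions (the paper's DecQN adds decomposition and acceleration techniques on top, and all names and numbers here are illustrative).

```python
from collections import defaultdict

alpha, gamma = 0.5, 0.9          # learning rate and discount (toy values)
Q = defaultdict(float)           # (state, action) -> estimated value

def td_update(s, a, r, s_next, actions):
    """One Q-learning step: move Q[s, a] toward r + gamma * max_a' Q[s', a']."""
    target = r + gamma * max(Q[(s_next, a2)] for a2 in actions)
    Q[(s, a)] += alpha * (target - Q[(s, a)])

actions = ["serve", "relocate"]
td_update("depot", "serve", 10.0, "zone1", actions)
# Q[("depot", "serve")] = 0.5 * (10 + 0.9 * 0) = 5.0
td_update("zone1", "relocate", 2.0, "depot", actions)
# target = 2 + 0.9 * 5.0 = 6.5, so Q[("zone1", "relocate")] = 0.5 * 6.5 = 3.25
```

In the partially decentralized formulation described above, the same shared policy (here, the same Q-table) would be applied per vehicle over vehicle-specific observations.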
A fundamental question in any peer-to-peer ride-sharing system is how to meet passenger requests, both effectively and efficiently, so as to balance supply and demand in real time. On the passenger side, traditional approaches focus on pricing strategies that adjust the distribution of demand by increasing the probability that users place ride requests. However, previous methods do not take into account the impact of a change in strategy on future supply and demand: passengers' calls reposition drivers to different destinations, which affects drivers' income for a period of time into the future. Motivated by this observation, we attempt to optimize the distribution of demand by learning long-term spatio-temporal values as a guideline for the pricing strategy. In this study, we propose an offline deep reinforcement learning based method focusing on the demand side to improve the utilization of transportation resources and customer satisfaction. We adopt a spatio-temporal learning method to learn the value of different times and locations, and then incentivize passengers' ride requests so as to adjust the distribution of demand and balance supply and demand in the system. In particular, we model the problem as a Markov Decision Process (MDP).
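A minimal way to picture "learning the value of different times and locations" is a TD(0) update over (time-slot, zone) states, driven by completed trips. This is a hedged sketch of the general idea, not the paper's offline DRL method; the trip tuples and hyperparameters are invented.

```python
alpha, gamma = 0.1, 0.95
V = {}  # (time_slot, zone) -> estimated long-term value of a driver being there

def td0(origin, dest, fare):
    """TD(0): pull V[origin] toward fare + gamma * V[dest]."""
    v = V.get(origin, 0.0)
    v_next = V.get(dest, 0.0)
    V[origin] = v + alpha * (fare + gamma * v_next - v)

# (origin_state, destination_state, fare) -- illustrative trips
trips = [
    ((8, "airport"), (9, "downtown"), 35.0),
    ((9, "downtown"), (10, "suburb"), 12.0),
]
for origin, dest, fare in trips:
    td0(origin, dest, fare)
# After these two updates: V[(8, "airport")] ≈ 3.5 and V[(9, "downtown")] ≈ 1.2,
# i.e. the airport at 8am already looks more valuable than downtown at 9am.
```

A pricing strategy can then subsidize or surcharge requests according to the value difference between a trip's origin and destination states, which is the sense in which such values "guide" demand-side incentives.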
The high emission and low energy efficiency caused by internal combustion engines (ICE) have become unacceptable under environmental regulations and the energy crisis. As a promising alternative solution, multi-power source electric vehicles (MPS-EVs) introduce different clean energy systems to improve powertrain efficiency. The energy management strategy (EMS) is a critical technology for MPS-EVs to maximize efficiency, fuel economy, and range. Reinforcement learning (RL) has become an effective methodology for the development of EMS. RL has received continuous attention and research, but there is still a lack of systematic analysis of the design elements of RL-based EMS. To this end, this paper presents an in-depth analysis of the current research on RL-based EMS (RL-EMS) and summarizes the design elements of RL-based EMS. This paper first summarizes the previous applications of RL in EMS from five aspects: algorithm, perception scheme, decision scheme, reward function, and innovative training method. The contribution of advanced algorithms to the training effect is shown, the perception and control schemes in the literature are analyzed in detail, different reward function settings are classified, and innovative training methods with their roles are elaborated. Furthermore, by comparing the development routes of RL and RL-EMS, this paper identifies the gap between advanced RL solutions and existing RL-EMS. Finally, this paper suggests potential development directions for implementing advanced artificial intelligence (AI) solutions in EMS.
As the autonomous driving industry grows, so does the potential for interactions among groups of autonomous vehicles. Combined with advances in artificial intelligence and simulation, such groups can be simulated, and safe models can be learned to control the cars within them. This study applies reinforcement learning to the problem of the multi-agent parking lot, in which cars aim to park efficiently while remaining safe and rational. Utilizing robust tools and machine learning frameworks, we design and implement a flexible parking environment in the form of a Markov Decision Process with independent learners, exploiting multi-agent communication. We implement a suite of tools for conducting experiments at scale, obtaining models that park up to 7 cars with over a 98.1% success rate, surpassing existing single-agent models. We also obtain several results relating to the competitive and collaborative behaviors exhibited by the cars in our environment, under varying densities and levels of communication. Notably, we discover a form of collaboration without competition, as well as a "leaky" form of collaboration in which agents collaborate in the absence of sufficient state information. Such work has many potential applications in the autonomous driving and fleet management industries, and provides several useful techniques and benchmarks for applying reinforcement learning to multi-agent parking lots.
With the development of deep representation learning, the domain of reinforcement learning (RL) has become a powerful learning framework now capable of learning complex policies in high dimensional environments. This review summarises deep reinforcement learning (DRL) algorithms and provides a taxonomy of automated driving tasks where (D)RL methods have been employed, while addressing key computational challenges in real world deployment of autonomous driving agents. It also delineates adjacent domains such as behavior cloning, imitation learning, inverse reinforcement learning that are related but are not classical RL algorithms. The role of simulators in training agents, methods to validate, test and robustify existing solutions in RL are discussed.
The future Internet involves several emerging technologies such as 5G and beyond-5G networks, vehicular networks, unmanned aerial vehicle (UAV) networks, and the Internet of Things (IoT). Moreover, the future Internet is becoming heterogeneous and decentralized, with a large number of involved network entities. Each entity may need to make local decisions to improve network performance under dynamic and uncertain network environments. Standard learning algorithms, such as single-agent reinforcement learning (RL) or deep reinforcement learning (DRL), have recently been used to enable each network entity, as an agent, to adaptively learn its optimal decision-making policy through interaction with an unknown environment. However, such algorithms fail to model the cooperation or competition among network entities, and simply treat the other entities as part of the environment, which may lead to the non-stationarity issue. Multi-agent reinforcement learning (MARL) allows each network entity to learn its optimal policy by observing not only the environment but also the policies of other entities. As a result, MARL can significantly improve the learning efficiency of the network entities, and it has recently been used to solve various issues in emerging networks. In this paper, we therefore review the applications of MARL in emerging networks. In particular, we provide a tutorial on MARL together with a comprehensive survey of its applications in the next-generation Internet. We first introduce single-agent RL and MARL, and then review a number of applications of MARL for solving emerging issues in the future Internet. These issues include network access, transmit-power control, computation offloading, content caching, packet routing, trajectory design for UAV-aided networks, and network security.
Ongoing risks from climate change have impacted the livelihood of global nomadic communities, and are likely to lead to increased migratory movements in coming years. As a result, mobility considerations are becoming increasingly important in energy systems planning, particularly to achieve energy access in developing countries. Advanced Plug and Play control strategies have been recently developed with such a decentralized framework in mind, more easily allowing for the interconnection of nomadic communities, both to each other and to the main grid. In light of the above, the design and planning strategy of a mobile multi-energy supply system for a nomadic community is investigated in this work. Motivated by the scale and dimensionality of the associated uncertainties, impacting all major design and decision variables over the 30-year planning horizon, Deep Reinforcement Learning (DRL) is implemented for the design and planning problem tackled. DRL based solutions are benchmarked against several rigid baseline design options to compare expected performance under uncertainty. The results on a case study for ger communities in Mongolia suggest that mobile nomadic energy systems can be both technically and economically feasible, particularly when considering flexibility, although the degree of spatial dispersion among households is an important limiting factor. Key economic, sustainability and resilience indicators such as Cost, Equivalent Emissions and Total Unmet Load are measured, suggesting potential improvements compared to available baselines of up to 25%, 67% and 76%, respectively. Finally, the decomposition of values of flexibility and plug and play operation is presented using a variation of real options theory, with important implications for both nomadic communities and policymakers focused on enabling their energy access.
Electric vehicles (EVs) play critical roles in autonomous mobility-on-demand (AMoD) systems, but their unique charging patterns increase the model uncertainties (e.g., state transition probabilities) in AMoD systems. Since there usually exists a mismatch between the training and test (true) environments, incorporating model uncertainty into the system design is of critical importance. However, model uncertainties have not been explicitly considered in EV AMoD system rebalancing in the existing literature, and this remains an urgent and challenging task. In this work, we design a robust and constrained multi-agent reinforcement learning (MARL) framework for the EV rebalancing and charging problem. We then propose a robust and constrained MARL algorithm (ROCOMA) that trains a robust EV rebalancing policy to balance the supply-demand ratio and the charging utilization rate across the whole city under state-transition uncertainty. Experiments show that ROCOMA can learn an effective and robust rebalancing policy. It outperforms non-robust MARL methods when model uncertainties are present: it increases system fairness by 19.6% and decreases rebalancing costs by 75.8%.
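Constrained policy optimization of this kind is often handled with a Lagrangian-style penalty: a dual multiplier grows while a constraint (here, a charging-utilization target) is violated on average and tilts the reward the policy maximizes. The snippet below is a generic sketch of that mechanism under invented numbers, not ROCOMA's actual formulation.

```python
lam, lr = 0.0, 0.05   # Lagrange multiplier and its dual-ascent step (toy values)
threshold = 0.8       # required charging-utilization level (hypothetical)

def penalized_reward(reward, utilization):
    """Dual ascent on lam, then return the Lagrangian-penalized reward."""
    global lam
    violation = threshold - utilization       # > 0 when the constraint is violated
    lam = max(0.0, lam + lr * violation)      # multiplier grows under violation
    return reward - lam * violation

r1 = penalized_reward(1.0, 0.6)  # violated: lam rises to 0.01, reward is docked
r2 = penalized_reward(1.0, 0.9)  # satisfied: lam decays toward 0, small bonus
```

The policy gradient then sees `reward - lam * violation` instead of the raw reward, so constraint pressure adapts automatically rather than being hand-tuned.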
Autonomous vehicles are suited for continuous area patrolling problems. However, finding an optimal patrolling strategy can be challenging for many reasons. Firstly, patrolling environments are often complex and can include unknown and evolving environmental factors. Secondly, autonomous vehicles can have failures or hardware constraints such as limited battery lives. Importantly, patrolling large areas often requires multiple agents that need to collectively coordinate their actions. In this work, we consider these limitations and propose an approach based on a distributed, model-free deep reinforcement learning based multi-agent patrolling strategy. In this approach, agents make decisions locally based on their own environmental observations and on shared information. In addition, agents are trained to automatically recharge themselves when required to support continuous collective patrolling. A homogeneous multi-agent architecture is proposed, where all patrolling agents have an identical policy. This architecture provides a robust patrolling system that can tolerate agent failures and allow supplementary agents to be added to replace failed agents or to increase the overall patrol performance. This performance is validated through experiments from multiple perspectives, including the overall patrol performance, the efficiency of the battery recharging strategy, the overall robustness of the system, and the agents' ability to adapt to environment dynamics.
This paper is a technical overview of DeepMind and Google's recent work on reinforcement learning for controlling commercial cooling systems. Building on expertise that began with cooling Google's data centers more efficiently, we recently conducted live experiments on two real-world facilities in partnership with Trane Technologies, a building management system provider. These live experiments had a variety of challenges in areas such as evaluation, learning from offline data, and constraint satisfaction. Our paper describes these challenges in the hope that awareness of them will benefit future applied RL work. We also describe the way we adapted our RL system to deal with these challenges, resulting in energy savings of approximately 9% and 13% respectively at the two live experiment sites.
Self-Driven Particles (SDP) describe a category of multi-agent systems common in everyday life, such as flocking birds and traffic flow. In an SDP system, each agent pursues its own goal and constantly changes its cooperative or competitive behaviors with the agents nearby. Manually designing controllers for such SDP systems is time-consuming, while the resulting emergent behaviors are often neither realistic nor generalizable. Thus, the realistic simulation of SDP systems remains challenging. Reinforcement learning provides an appealing alternative for automating the development of SDP controllers. However, previous multi-agent reinforcement learning (MARL) methods define the agents as teammates or enemies beforehand, which fails to capture the essence of SDP, where the role of each agent varies even within a single episode. To simulate SDP with MARL, a key challenge is to coordinate the agents' behaviors while still maximizing individual objectives. Using traffic simulation as the testbed, in this work we develop a novel MARL method called Coordinated Policy Optimization (CoPO), which incorporates principles from social psychology to learn neural controllers for SDP. Experiments show that the proposed method achieves superior performance compared with MARL baselines on various metrics. Notably, the trained vehicles exhibit complex and diverse social behaviors that improve the performance and safety of the population as a whole. The demo video and source code are available at: https://decisionforce.github.io/copo/
Robots and other intelligent systems that embody computation and interact with the world are increasingly being used to automate a wide variety of tasks. The ability of these systems to complete such tasks depends on the mechanical and electrical components that constitute the robot's body and its sensors, as well as on the algorithms that perceive the environment and plan and control meaningful actions. It is therefore often necessary to consider the interactions between these components when designing an embodied system. This thesis explores the task-driven co-design of robotic systems in an end-to-end manner, directly optimizing the physical components of a system together with its inference or control algorithms for task performance. We first consider the problem of directly optimizing a beacon-based localization system for localization accuracy. Designing such a system involves placing beacons throughout the environment and inferring locations from sensor readings. In our work, we develop a deep-learning approach that jointly optimizes the beacon placement and the location inference for localization accuracy. We then shift our attention to the related problem of task-driven optimization of robots and their controllers. Here, we first propose a data-efficient algorithm based on multi-task reinforcement learning. Our approach efficiently optimizes both physical design and control parameters directly for task performance by leveraging a design-conditioned controller capable of generalizing over the space of physical designs. We then follow this up with an extension that allows the optimization of discrete morphological parameters, such as the number and configuration of limbs. Finally, we conclude by exploring the fabrication and deployment of optimized soft robots.