Eco-driving strategies have been shown to provide significant reductions in fuel consumption. This paper outlines an active driver-assistance approach that uses a residual policy learning (RPL) agent trained to provide residual actions to default powertrain controllers while balancing fuel consumption against other driver-accommodation objectives. Using previous experiences, our RPL agent learns improved traction-torque and gear-shifting residual policies to adapt the operation of the powertrain to variations and uncertainties in the environment. For comparison, we consider a traditional reinforcement learning (RL) agent trained from scratch. Both agents employ the off-policy maximum a posteriori policy optimization algorithm with an actor-critic architecture. Implemented on a simulated commercial vehicle in various car-following scenarios, the RPL agent quickly learns policies that significantly improve on the baseline source policy but that are, by some measures, not as good as those eventually attainable by the RL agent trained from scratch.
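For intuition, here is a minimal sketch of the residual composition such an agent relies on; the PD-style controller, gains, and torque limits are illustrative assumptions, not the paper's powertrain model:

```python
import numpy as np

def source_policy(gap_error: float, speed_error: float) -> float:
    """Illustrative default traction-torque controller (a simple PD-style law)."""
    return 40.0 * gap_error + 15.0 * speed_error  # Nm

def rpl_action(gap_error: float, speed_error: float,
               residual: float, torque_limit: float = 500.0) -> float:
    """Residual policy learning: the learned residual is added on top of the
    default controller's output, then saturated to actuator limits."""
    base = source_policy(gap_error, speed_error)
    return float(np.clip(base + residual, -torque_limit, torque_limit))

# The RL agent only has to learn the (usually small) correction term.
print(rpl_action(gap_error=2.0, speed_error=-0.5, residual=12.3))
```

Because the residual starts near zero, the composite policy is no worse than the source policy at the start of training, which is what makes the fast initial learning plausible.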
With the growing need to reduce energy consumption and greenhouse gas emissions, eco-driving strategies provide a significant opportunity for additional fuel savings on top of other technological solutions being pursued in the transportation sector. In this paper, a model-free deep reinforcement learning (RL) control agent is proposed for active eco-driving assistance that trades off fuel consumption against other driver-accommodation objectives and learns optimal traction torque and transmission shifting policies from experience. The training scheme for the proposed RL agent uses an off-policy actor-critic architecture that iteratively performs policy evaluation with a multi-step return and policy improvement with the maximum a posteriori policy optimization algorithm for hybrid action spaces. The proposed eco-driving RL agent is implemented on a commercial vehicle in car-following traffic. It shows superior performance in minimizing fuel consumption compared to a baseline controller that has full knowledge of fuel-efficiency tables.
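A minimal sketch of the multi-step return used in this kind of policy evaluation; the discount factor, horizon, and reward values are placeholders:

```python
def n_step_return(rewards, bootstrap_value, gamma=0.99):
    """Multi-step return: the discounted sum of n observed rewards plus a
    discounted critic bootstrap at the truncation point."""
    g = bootstrap_value
    for r in reversed(rewards):
        g = r + gamma * g
    return g

# Example: a 5-step return with a critic estimate of 10.0 at step t+5.
print(n_step_return([0.1, -0.2, 0.05, 0.0, 0.3], bootstrap_value=10.0))
```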
The high emissions and low energy efficiency of internal combustion engines (ICEs) have become unacceptable under environmental regulations and the energy crisis. As a promising alternative, multi-power-source electric vehicles (MPS-EVs) introduce different clean energy systems to improve powertrain efficiency. The energy management strategy (EMS) is a critical technology for MPS-EVs to maximize efficiency, fuel economy, and range. Reinforcement learning (RL) has become an effective methodology for EMS development and has received continuous attention, yet a systematic analysis of the design elements of RL-based EMS is still lacking. To this end, this paper presents an in-depth analysis of current research on RL-based EMS (RL-EMS) and summarizes its design elements. The paper first surveys previous applications of RL in EMS from five aspects: algorithm, perception scheme, decision scheme, reward function, and innovative training methods. The contribution of advanced algorithms to training effectiveness is shown, the perception and control schemes in the literature are analyzed in detail, different reward-function settings are classified, and innovative training methods and their roles are elaborated. Finally, by comparing the development routes of RL and RL-EMS, this paper identifies the gap between advanced RL solutions and existing RL-EMS and suggests potential directions for implementing advanced artificial intelligence (AI) solutions in EMS.
In the field of autonomous driving, fusing human knowledge into deep reinforcement learning (DRL) is often based on human demonstrations recorded in simulated environments, which limits generalizability and feasibility in real-world traffic. We propose a two-stage DRL method that learns from real human driving to achieve performance superior to that of a pure DRL agent. The DRL agent is trained within the framework of CARLA with the Robot Operating System (ROS). For evaluation, we designed different realistic driving scenarios in which the proposed two-stage DRL agent can be compared with a pure DRL agent. After extracting the "good" behavior from human drivers, such as anticipation at signalized intersections, the agent becomes more efficient and drives more safely, which makes this autonomous agent more adaptable to human-robot interaction (HRI) traffic.
Real-time application of energy management strategies (EMSs) in hybrid electric vehicles (HEVs) imposes some of the harshest requirements on researchers and engineers. Inspired by the excellent problem-solving capabilities of deep reinforcement learning (DRL), this paper proposes a real-time EMS that combines DRL with transfer learning (TL). The EMSs are derived from and evaluated on a real-world driving cycle dataset collected from the Transportation Secure Data Center (TSDC). The concrete DRL algorithm is proximal policy optimization (PPO), which belongs to the family of policy gradient (PG) techniques. Specifically, multiple source driving cycles are used to train the parameters of a deep network with PPO, and the learned parameters are then transferred to the target driving cycles under the TL framework. The EMSs for the target driving cycles are estimated and compared under different training conditions. Simulation results indicate that the presented transfer-DRL-based EMS can effectively reduce training time while guaranteeing control performance.
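A hedged sketch of the parameter-transfer step in PyTorch, assuming a small actor network and a freeze-the-features / fine-tune-the-head scheme; the network shape and the choice of which layers to freeze are assumptions for illustration, not details from the paper:

```python
import torch
import torch.nn as nn

class PolicyNet(nn.Module):
    """Small actor network standing in for the PPO policy."""
    def __init__(self, obs_dim: int = 4, act_dim: int = 2):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(obs_dim, 64), nn.Tanh(),
            nn.Linear(64, 64), nn.Tanh(),
        )
        self.head = nn.Linear(64, act_dim)

    def forward(self, obs):
        return self.head(self.body(obs))

source_net = PolicyNet()
# ... assume source_net was trained with PPO on many source driving cycles ...

target_net = PolicyNet()
target_net.load_state_dict(source_net.state_dict())  # transfer all weights
for p in target_net.body.parameters():               # freeze shared features
    p.requires_grad = False
# Only the output head is fine-tuned on the target driving cycle,
# which is what shortens training time in the TL setting.
optimizer = torch.optim.Adam(target_net.head.parameters(), lr=3e-4)
```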
This paper proposes a reinforcement learning (RL)-based eco-driving framework for electric connected vehicles (CVs) to improve vehicle energy efficiency at signalized intersections. Safe operation of the vehicle agent is ensured by integrating a model-based car-following policy, a lane-changing policy, and the RL policy. A Markov decision process (MDP) is then formulated that enables the vehicle to perform longitudinal control and lateral decision-making, jointly optimizing the car-following and lane-changing behaviors of CVs near intersections. The hybrid action space is then parameterized as a hierarchical structure, allowing the agent to be trained with two-dimensional motion patterns in a dynamic traffic environment. Finally, the proposed method is evaluated in the SUMO software from both a single-vehicle perspective and a flow-based perspective. The results show that our strategy can substantially reduce energy consumption by learning appropriate action schemes without interrupting other human-driven vehicles (HDVs).
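A minimal illustration of such a hybrid action space, where a discrete lateral decision carries a continuous longitudinal parameter; the option names and acceleration bounds are assumed for the sketch:

```python
import numpy as np

LATERAL_OPTIONS = ("keep_lane", "change_left", "change_right")

def sample_hybrid_action(rng: np.random.Generator):
    """Hybrid action: a discrete lateral decision plus a continuous
    longitudinal acceleration parameter attached to that decision."""
    k = rng.integers(len(LATERAL_OPTIONS))   # discrete branch
    accel = rng.uniform(-3.0, 2.0)           # continuous parameter (m/s^2)
    return LATERAL_OPTIONS[k], accel

rng = np.random.default_rng(0)
print(sample_hybrid_action(rng))
```

The hierarchy matters because the continuous parameter is only meaningful conditioned on the discrete choice, so the policy learns one acceleration head per lateral option rather than a single flat action vector.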
With the development of deep representation learning, reinforcement learning (RL) has become a powerful learning framework capable of learning complex policies in high-dimensional environments. This review summarises deep reinforcement learning (DRL) algorithms and provides a taxonomy of automated driving tasks where (D)RL methods have been employed, while addressing key computational challenges in the real-world deployment of autonomous driving agents. It also delineates adjacent domains, such as behavior cloning, imitation learning, and inverse reinforcement learning, that are related to but distinct from classical RL algorithms. The role of simulators in training agents, as well as methods to validate, test, and robustify existing solutions in RL, is also discussed.
Transportation electrification requires an increasing number of electric components, such as electric motors and electric energy storage systems, on vehicles, and the control of electric powertrains usually involves multiple inputs and multiple outputs (MIMO). This paper focuses on the online optimization of the energy management strategy of a multi-mode hybrid electric vehicle based on a multi-agent reinforcement learning (MARL) algorithm that aims to address MIMO control optimization, whereas most existing methods handle only single-output control. Based on an analysis of the energy efficiency of the multi-mode hybrid electric vehicle (HEV) optimized by a deep deterministic policy gradient (DDPG)-based MARL algorithm, a novel cooperative cyber-physical learning scheme with multiple agents is proposed. The learning driving cycles are then set by a novel stochastic method to accelerate the training process. Finally, network design, learning rate, and policy noise are included in a sensitivity analysis, the parameters of the DDPG-based algorithm are determined, and the learning performance under different relations among the agents is studied, showing that a not-completely-independent relation with a ratio of 0.2 performs best. A comparison between single-agent and multi-agent settings shows that the multi-agent scheme can achieve about a 4% improvement in total energy over the single-agent scheme. Multi-objective control by MARL can therefore achieve good optimization effect and application efficiency.
Autonomous driving technology is expected not only to improve mobility and road safety but also to bring energy efficiency benefits. In the foreseeable future, autonomous vehicles (AVs) will operate on roads shared with human-driven vehicles. To maintain safety and liveness while minimizing energy consumption, the AV planning and decision-making process should account for the interactions between the ego autonomous vehicle and the surrounding human-driven vehicles. In this chapter, we describe a framework for developing energy-efficient autonomous driving policies on shared roads by modeling human driver behavior based on cognitive hierarchy theory and reinforcement learning.
Proper functioning of connected and automated vehicles (CAVs) is crucial for the safety and efficiency of future intelligent transport systems. Meanwhile, transitioning to fully autonomous driving requires a long period of mixed autonomy traffic, including both CAVs and human-driven vehicles. Thus, collaborative decision-making for CAVs is essential to generate appropriate driving behaviors that enhance the safety and efficiency of mixed autonomy traffic. In recent years, deep reinforcement learning (DRL) has been widely used to solve decision-making problems. However, existing DRL-based methods have mainly focused on the decision-making of a single CAV; applied to mixed autonomy traffic, they cannot accurately represent the mutual effects among vehicles or model dynamic traffic environments. To address these shortcomings, this article proposes a graph reinforcement learning (GRL) approach for multi-agent decision-making of CAVs in mixed autonomy traffic. First, a generic and modular GRL framework is designed. Then, a systematic review of DRL and GRL methods is presented, focusing on the problems addressed in recent research. Moreover, a comparative study of different GRL methods is conducted within the designed framework to verify their effectiveness. Results show that the GRL methods can effectively optimize the performance of multi-agent decision-making for CAVs in mixed autonomy traffic compared to the DRL methods. Finally, challenges and future research directions are summarized. This study can provide a valuable research reference for solving the multi-agent decision-making problems of CAVs in mixed autonomy traffic and can promote the implementation of GRL-based methods in intelligent transportation systems. The source code of our work can be found at https://github.com/Jacklinkk/Graph_CAVs.
Autonomous driving has attracted significant research interest in the past two decades, as it offers many potential benefits, including freeing drivers from exhausting driving and mitigating traffic congestion, among others. Despite promising progress, lane changing remains a great challenge for autonomous vehicles (AVs), especially in mixed and dynamic traffic scenarios. Recently, reinforcement learning (RL), a powerful data-driven control method, has been widely explored for lane-changing decision-making in AVs with encouraging results. However, most of these studies focus on single-vehicle settings, and lane changing in situations where multiple AVs coexist with human-driven vehicles (HDVs) has received scarce attention. In this paper, we formulate the lane-changing decision-making of multiple AVs in a mixed-traffic highway environment as a multi-agent reinforcement learning (MARL) problem, where each AV makes lane-changing decisions based on the motions of both neighboring AVs and HDVs. Specifically, a multi-agent advantage actor-critic network (MA2C) is developed with a novel local reward design and a parameter-sharing scheme. In particular, a multi-objective reward function is proposed to incorporate fuel efficiency, driving comfort, and the safety of autonomous driving. Comprehensive experimental results, conducted under three different traffic densities and various levels of human driver aggressiveness, show that our proposed MARL framework consistently outperforms several state-of-the-art benchmarks in terms of efficiency, safety, and driver comfort.
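A hedged sketch of a multi-objective reward of the kind described, trading off fuel, comfort, and safety terms; the weights and the inverse time-to-collision safety penalty are assumptions for illustration, not the paper's exact design:

```python
def lane_change_reward(fuel_rate: float, jerk: float, ttc: float,
                       w_fuel: float = 1.0, w_comfort: float = 0.2,
                       w_safety: float = 2.0) -> float:
    """Illustrative multi-objective reward: penalize fuel use and jerk,
    and penalize small time-to-collision (TTC) values for safety."""
    r_fuel = -fuel_rate                            # lower consumption is better
    r_comfort = -abs(jerk)                         # smooth driving is better
    r_safety = -1.0 / ttc if ttc > 0 else -100.0   # sharp penalty near collision
    return w_fuel * r_fuel + w_comfort * r_comfort + w_safety * r_safety

print(lane_change_reward(fuel_rate=0.8, jerk=1.5, ttc=4.0))
```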
In the past decades, the upgrading and updating of vehicles have accelerated. Driven by the demands for environmental friendliness and intelligence, electric vehicles (EVs) as well as connected and automated vehicles (CAVs) have become new components of transportation systems. This paper develops a reinforcement learning framework to implement adaptive control of an electric platoon composed of CAVs and human-driven vehicles (HDVs) at a signalized intersection. First, a Markov decision process (MDP) model is proposed to describe the decision process of the mixed platoon; novel state representations and a reward function are designed for the model to account for the behavior of the whole platoon. Second, to handle the delayed reward, an augmented random search (ARS) algorithm is proposed. The control policy learned by the agent guides the longitudinal motion of the CAV, which serves as the leader of the platoon. Finally, a series of simulations is carried out in the simulation suite SUMO. Compared with several state-of-the-art (SOTA) reinforcement learning approaches, the proposed method obtains a higher reward. Meanwhile, the simulation results demonstrate the effectiveness of the delayed reward, which outperforms distributed reward mechanisms. Compared with normal car-following behavior, the sensitivity analysis reveals that energy can be saved to different extents (39.27%-82.51%) by adjusting the relative importance of the optimization goals. Without sacrificing travel delay, the proposed control method can save up to 53.64% of electric energy.
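For reference, a compact sketch of a basic augmented random search (ARS) update on linear policy weights; the hyperparameters and toy objective are placeholders, and the sketch assumes a 2-D weight matrix. ARS only needs whole-episode returns, which is why it copes naturally with a delayed reward:

```python
import numpy as np

def ars_step(theta, rollout, n_dirs=8, top_k=4, nu=0.05, alpha=0.02,
             rng=np.random.default_rng(1)):
    """One ARS update: probe random perturbation directions, keep the best,
    and step along the reward-weighted average direction."""
    deltas = rng.standard_normal((n_dirs, *theta.shape))
    r_plus = np.array([rollout(theta + nu * d) for d in deltas])
    r_minus = np.array([rollout(theta - nu * d) for d in deltas])
    order = np.argsort(np.maximum(r_plus, r_minus))[::-1][:top_k]
    sigma = np.concatenate([r_plus[order], r_minus[order]]).std() + 1e-8
    grad = ((r_plus[order] - r_minus[order])[:, None, None]
            * deltas[order]).mean(axis=0)
    return theta + alpha / sigma * grad

# Toy objective: drive the weights toward an arbitrary target matrix.
target = np.ones((2, 3))
def episode_return(th):
    return -np.sum((th - target) ** 2)

theta = np.zeros((2, 3))
for _ in range(200):
    theta = ars_step(theta, episode_return)
print(np.round(theta, 2))
```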
Reinforcement learning-based (RL-based) energy management strategies (EMSs) are considered a promising solution for the energy management of electric vehicles with multiple power sources, and have been shown to outperform conventional methods in energy-saving and real-time performance. However, previous studies have not systematically examined the essential elements of RL-based EMS. This paper presents an empirical analysis of RL-based EMS in a plug-in hybrid electric vehicle (PHEV) and a fuel cell electric vehicle (FCEV). The analysis covers four aspects: algorithm, perception and decision granularity, hyperparameters, and reward function. The results show that off-policy algorithms develop more fuel-efficient solutions over the complete driving cycle than the other algorithms. Increasing the perception and decision granularity does not produce a more energy-saving solution but does better balance battery power and fuel consumption. The equivalent energy optimization objective based on the instantaneous state-of-charge (SOC) variation is parameter sensitive and can help RL-EMSs achieve more efficient energy-cost strategies.
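A minimal sketch of an equivalent-energy objective of the kind described, converting the per-step SOC variation into an energy term; the battery capacity, fuel energy density, and the parameter-sensitive equivalence factor `k_equiv` are assumed values:

```python
def equivalent_energy_cost(fuel_g: float, soc_now: float, soc_prev: float,
                           battery_capacity_wh: float = 10_000.0,
                           fuel_energy_wh_per_g: float = 11.9,
                           k_equiv: float = 1.0) -> float:
    """Illustrative equivalent-energy cost over one control step: fuel
    energy plus the energy equivalent of the SOC drop."""
    fuel_energy = fuel_g * fuel_energy_wh_per_g                   # Wh from fuel
    battery_energy = (soc_prev - soc_now) * battery_capacity_wh   # Wh from battery
    return fuel_energy + k_equiv * battery_energy                 # reward = -cost

# One step that burns 0.5 g of fuel and draws 1% of battery capacity.
print(-equivalent_energy_cost(fuel_g=0.5, soc_now=0.69, soc_prev=0.70))
```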
Motion control algorithms in the presence of pedestrians are critical for the development of safe and reliable autonomous vehicles (AVs). Traditional motion control algorithms rely on manually designed decision-making policies that neglect the mutual interactions between AVs and pedestrians. On the other hand, recent advances in deep reinforcement learning allow policies to be learned automatically without manual design. To tackle the problem of decision-making in the presence of pedestrians, the authors introduce a framework based on social value orientation and deep reinforcement learning (DRL) that is capable of generating decision-making policies with different driving styles. The policy is trained in a simulated environment using a state-of-the-art DRL algorithm. A novel, computationally efficient pedestrian model suitable for DRL training is also introduced. We perform experiments to validate our framework and conduct a comparative analysis of the policies obtained with two different model-free deep reinforcement learning algorithms. Simulation results show how the developed model exhibits natural driving behaviors, such as short stops, to facilitate pedestrian crossing.
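A commonly used way to encode social value orientation in a reward, which a framework like this plausibly follows (the paper's exact formulation may differ):

```python
import math

def svo_reward(r_ego: float, r_pedestrian: float, svo_deg: float) -> float:
    """Social value orientation: blend the ego's reward with the pedestrian's
    using an orientation angle (0 deg = egoistic, 45 deg = prosocial)."""
    phi = math.radians(svo_deg)
    return math.cos(phi) * r_ego + math.sin(phi) * r_pedestrian

print(svo_reward(r_ego=1.0, r_pedestrian=0.5, svo_deg=0.0))   # aggressive style
print(svo_reward(r_ego=1.0, r_pedestrian=0.5, svo_deg=45.0))  # cooperative style
```

Sweeping the angle is what yields a family of driving styles from a single reward template.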
A notorious contradiction arises when we expect to learn safe policies within the trial-and-error mechanism of reinforcement learning (RL): how can we learn a safe policy without sufficient data about, or a prior model of, the dangerous region? Existing methods mostly use posterior penalties for dangerous actions, which means the agent is not penalized until it experiences danger. This prevents the agent from learning a zero-violation policy even after convergence; otherwise, it would no longer receive any penalty and would lose its knowledge about danger. In this paper, we propose the safe set actor-critic (SSAC) algorithm, which confines the policy update using a safety-oriented energy function, or safety index. The safety index is designed to increase rapidly for potentially dangerous actions, which allows us to locate the safe set in the action space, or the control safe set. Therefore, we can identify dangerous actions prior to taking them and obtain a zero-constraint-violation policy after convergence. We claim that the energy function can be learned in a model-free manner similar to learning a value function. Using the energy function transition as the constraint objective, we formulate a constrained RL problem and prove that our Lagrangian-based solution ensures that the learned policy converges to the constrained optimum under certain assumptions. The proposed algorithm is evaluated in complex simulation environments and in a hardware-in-the-loop (HIL) experiment with a real controller from an autonomous vehicle. Experimental results show that the converged policies in all environments achieve zero constraint violation and performance comparable to model-based baselines.
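A schematic sketch of the safety-index-constrained update described above, assuming a hand-designed index over gap and closing speed and a scalar Lagrange multiplier; all functional forms here are illustrative, not the paper's:

```python
def safety_index(gap_m: float, closing_speed_mps: float,
                 d_min: float = 2.0, k: float = 0.5) -> float:
    """Illustrative safety (energy) index: positive means unsafe;
    the safe set is {phi <= 0}."""
    return d_min - gap_m + k * max(closing_speed_mps, 0.0)

def constraint_violation(phi_now: float, phi_next: float, eta: float = 0.1) -> float:
    """How much the action fails to drive the index down by eta
    (or to keep it nonpositive when already safe)."""
    return max(phi_next - max(phi_now - eta, 0.0), 0.0)

def actor_loss(q_value: float, violation: float, lam: float) -> float:
    """Lagrangian objective: maximize Q, penalize safety violations."""
    return -q_value + lam * violation

def dual_update(lam: float, violation: float, lr: float = 0.01) -> float:
    """Dual ascent: the multiplier keeps growing while violations persist,
    which is what forces a zero-violation policy at convergence."""
    return max(lam + lr * violation, 0.0)

phi_t = safety_index(gap_m=1.5, closing_speed_mps=2.0)   # unsafe state
phi_t1 = safety_index(gap_m=1.8, closing_speed_mps=0.5)  # after braking
v = constraint_violation(phi_t, phi_t1)
lam = dual_update(lam=1.0, violation=v)
print(actor_loss(q_value=3.2, violation=v, lam=lam))
```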
While significant progress has been made in sensing, perception, and localization for automated driving, it remains an open challenge for an intelligent vehicle to always know how to make and execute the best decisions on the road given the available sensing/perception/localization information, owing to the wide variety of traffic/road-structure scenarios and the long-tail distribution of human driver behaviors. In this chapter, we discuss how artificial intelligence, and more specifically reinforcement learning, can leverage operational knowledge and safety reflexes to make strategic and tactical decisions. We discuss some challenging problems related to the robustness of reinforcement learning solutions and their practical design for autonomous driving policies. We focus on automated driving on highways and on the integration of reinforcement learning, vehicle motion control, and control barrier functions, which enables a reliable AI driving strategy that can learn and adapt safely.
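A minimal sketch of a control-barrier-function safety filter layered on an RL acceleration command, assuming a time-headway barrier for car following; the barrier choice and gains are assumptions for illustration:

```python
def cbf_filter(a_rl: float, gap: float, v_ego: float, v_lead: float,
               d_min: float = 5.0, tau: float = 1.5, alpha: float = 0.5,
               a_min: float = -6.0) -> float:
    """CBF safety filter over an RL acceleration command.
    Barrier: h = gap - d_min - tau * v_ego (a time-headway margin).
    Enforcing hdot >= -alpha * h yields an upper bound on acceleration."""
    h = gap - d_min - tau * v_ego
    a_max_safe = ((v_lead - v_ego) + alpha * h) / tau
    return max(min(a_rl, a_max_safe), a_min)  # minimally modify the RL action

# The learned policy asks for +1.5 m/s^2, but the barrier caps it.
print(cbf_filter(a_rl=1.5, gap=40.0, v_ego=20.0, v_lead=18.0))
```

The filter only intervenes when the RL action would leave the safe set, so the policy keeps learning freely in the interior while safety is enforced at the boundary.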
Many real-world applications can be formulated as multi-agent cooperation problems, such as network packet routing and the coordination of autonomous vehicles. The emergence of deep reinforcement learning (DRL) provides a promising approach to multi-agent cooperation through the interaction of agents and environments. However, traditional DRL solutions suffer from high dimensionality during policy search when multiple agents have continuous action spaces. Moreover, the dynamics of the agents' policies make training non-stationary. To address these issues, we propose a hierarchical reinforcement learning method that adopts high-level decision-making and low-level individual control for efficient policy search, as sketched below. In particular, the cooperation of multiple agents can be learned efficiently in the high-level discrete action space, while low-level individual control is reduced to single-agent reinforcement learning. In addition to hierarchical reinforcement learning, we also propose an opponent-modeling network to model the other agents' policies during the learning process. In contrast to end-to-end DRL approaches, our method reduces learning complexity by hierarchically decomposing the overall task into sub-tasks. To evaluate the efficiency of our method, we conduct a real-world case study in a cooperative lane-changing scenario. Both simulation and real-world experiments show the superiority of our method in collision rate and convergence speed.
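A toy sketch of the high-level/low-level decomposition, where a discrete cooperation decision selects the target of a simple continuous controller; the option names, gap targets, and gains are invented for illustration:

```python
import numpy as np

HIGH_LEVEL = ("yield", "merge_ahead")  # discrete cooperation decisions

def low_level_accel(decision: str, gap: float, rel_speed: float) -> float:
    """Low-level individual control reduced to a single-agent problem:
    track a decision-dependent gap with a simple proportional law."""
    target_gap = 15.0 if decision == "yield" else 8.0
    return 0.3 * (gap - target_gap) + 0.5 * rel_speed

def act(q_values: np.ndarray, gap: float, rel_speed: float):
    """High-level policy picks a discrete option; the low-level controller
    turns it into a continuous acceleration command."""
    decision = HIGH_LEVEL[int(np.argmax(q_values))]
    return decision, low_level_accel(decision, gap, rel_speed)

print(act(np.array([0.2, 0.7]), gap=10.0, rel_speed=-1.0))
```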
Recently, there have been many advances in the autonomous driving community, attracting a lot of attention from academia and industry. However, existing works mainly focus on cars; autonomous trucking algorithms and models still require additional development. In this paper, we introduce an intelligent self-driving truck system. Our presented system consists of three main components: 1) a realistic traffic simulation module for generating realistic traffic flow in testing scenarios; 2) a high-fidelity truck model designed and evaluated to imitate the responses of a real truck in real-world deployment; and 3) an intelligent planning module with a learning-based decision-making algorithm and a multi-mode trajectory planner that takes into account the truck's constraints, road slope changes, and the surrounding traffic flow. We provide quantitative evaluations for each component separately to demonstrate the fidelity and performance of each part. We also deploy our proposed system on a real truck and conduct real-world experiments, showing that our system is able to mitigate the sim-to-real gap. Our code is available at https://github.com/inceptioresearch/iits
Autonomous vehicles (AVs) will become increasingly prevalent in the coming years and decades, providing new opportunities for safer and more convenient travel and potentially smarter traffic control methods exploiting automation and connectivity. Car following is a prime function in autonomous driving. In recent years, reinforcement learning-based car following has received attention, with the goal of learning and reaching a performance level comparable to humans. However, most existing RL methods model car following as a unilateral problem, sensing only the vehicle ahead. Recent literature, however, by Wang and Horn [16], shows that bilateral car following, which considers the vehicle ahead and the vehicle behind, exhibits better system stability. In this paper, we hypothesize that this bilateral car following can be learned using RL while learning other goals, such as efficiency maximization, jerk minimization, and safety rewards, leading to a learned model that surpasses human driving. We propose and introduce a deep reinforcement learning (DRL) framework for car-following control by integrating bilateral information into both the state and the reward function, based on the bilateral control model (BCM) for car-following control. Furthermore, we use a decentralized multi-agent reinforcement learning framework to generate the corresponding control action for each agent. Our simulation results demonstrate that our learned policy is better than the human driving policy in terms of (a) inter-vehicle headway, (b) average speed, (c) jerk, (d) time to collision (TTC), and (e) string stability.
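For intuition, a sketch of the bilateral control model (BCM) acceleration law that this state and reward design builds on; the gains here are illustrative, not values from the paper:

```python
def bcm_accel(d_front: float, d_back: float, v_front: float,
              v_ego: float, v_back: float,
              kd: float = 0.2, kv: float = 0.4) -> float:
    """Bilateral control model: balance the gaps and relative speeds to
    both the leading and the following vehicle."""
    return kd * (d_front - d_back) + kv * ((v_front - v_ego) - (v_ego - v_back))

# State/reward features for the DRL agent would include both gaps.
print(bcm_accel(d_front=25.0, d_back=20.0, v_front=21.0, v_ego=20.0, v_back=19.5))
```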
Traffic signal control is a challenging real-world problem that aims to minimize overall travel time by coordinating vehicle movements at road intersections. Existing traffic signal control systems in use still rely heavily on oversimplified information and rule-based methods. Specifically, the periodicity of green/red light alternation can be regarded as a prior for better planning by each agent during policy optimization. To learn such an adaptive and predictive prior, traditional RL-based methods can only return a fixed-length plan from a predefined action pool with only local agents. Without cooperation among these agents, some agents often conflict with others, reducing overall throughput. This paper proposes a cooperative, multi-objective architecture with age-decaying weights to better estimate the multiple reward terms for traffic signal control optimization, termed cooperative multi-objective multi-agent deep deterministic policy gradient (COMMA-DDPG). Two types of agents run to maximize rewards for different goals: one optimizes local traffic at each intersection, and the other optimizes global traffic waiting time. The global agent is used to guide the local agents as a means of faster learning, but it is not used in the inference phase. We also provide an analysis of solution existence together with a convergence proof for the proposed RL optimization. Evaluation is conducted using real-world traffic data collected from traffic cameras in an Asian country. Our method can effectively reduce total delay time by 60%. The results demonstrate its superiority over SOTA methods.