This work reports on developing a deep inverse reinforcement learning approach for terrain traversability modeling for legged robots that incorporates both exteroceptive and proprioceptive sensory data. Existing works use robot-agnostic exteroceptive environmental features or handcrafted kinematic features; instead, we propose to also learn robot-specific inertial features from proprioceptive sensory data for reward approximation in a single deep neural network. Incorporating the inertial features can improve model fidelity and yield a reward that depends on the robot's state during deployment. We train the reward network using the Maximum Entropy Deep Inverse Reinforcement Learning (MEDIRL) algorithm and propose simultaneously minimizing a trajectory-ranking loss to deal with the suboptimality of legged-robot demonstrations. The demonstrated trajectories are ranked by locomotion energy consumption in order to learn an energy-aware reward function and a policy that is more energy-efficient than the demonstrations. We evaluate our method using a dataset collected by an MIT Mini-Cheetah robot and a Mini-Cheetah simulator. The code is publicly available at https://github.com/ganlumomo/minicheetah-traversability-irl.
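As an illustration of the training objective described above, the following is a minimal PyTorch-style sketch of a reward network trained with a MEDIRL surrogate loss plus a pairwise trajectory-ranking loss; the network layout, the `expected_svf`/`demo_svf` inputs, and the weight `lambda_rank` are illustrative assumptions rather than the authors' implementation.

```python
# Hedged sketch: joint MEDIRL + trajectory-ranking training step (not the authors' code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class RewardNet(nn.Module):
    """Maps per-cell exteroceptive + proprioceptive (inertial) features to a scalar reward."""
    def __init__(self, in_dim: int, hidden: int = 64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, feats):                    # feats: (num_cells, in_dim)
        return self.mlp(feats).squeeze(-1)       # (num_cells,)

def traj_return(reward, traj_cells):
    """Sum of learned rewards over the grid cells a trajectory visits."""
    return reward[traj_cells].sum()

def training_step(net, feats, demo_svf, expected_svf, ranked_pairs, lambda_rank=0.1):
    reward = net(feats)
    # MEDIRL term: the gradient of the negative log-likelihood w.r.t. the reward map is
    # (expected visitation - demonstrated visitation); expected_svf is assumed to be
    # recomputed each iteration (e.g., by value iteration under the current reward).
    irl_loss = ((expected_svf - demo_svf).detach() * reward).sum()
    # Ranking term: trajectory `better` consumed less energy than `worse`,
    # so its return under the learned reward should be higher.
    rank_loss = 0.0
    for better, worse in ranked_pairs:
        rank_loss = rank_loss - F.logsigmoid(
            traj_return(reward, better) - traj_return(reward, worse))
    return irl_loss + lambda_rank * rank_loss
```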
This work proposes a new framework for a socially aware local planner in crowded environments by building on the recently proposed Trajectory-ranked Maximum Entropy Deep Inverse Reinforcement Learning (T-MEDIRL). To address the social navigation problem, our multi-modal learning planner explicitly incorporates social interaction factors as well as social-awareness factors into the T-MEDIRL pipeline to learn a reward function from human demonstrations. Moreover, we propose using sudden velocity changes of the pedestrians around the robot to address the sub-optimality of human demonstrations. Our evaluation shows that this method can successfully make a robot navigate in crowded social environments and outperforms state-of-the-art social navigation methods in terms of success rate, navigation time, and invasion rate.
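A minimal sketch of the sub-optimality signal described above, assuming access to tracked pedestrian velocities; the threshold and function names are illustrative, not taken from the paper.

```python
# Hedged sketch: scoring demonstration snippets by sudden pedestrian velocity changes,
# as a proxy for sub-optimality when ranking trajectories for T-MEDIRL.
import numpy as np

def suboptimality_score(ped_velocities: np.ndarray, dt: float, accel_threshold: float = 1.5):
    """ped_velocities: (T, num_pedestrians, 2) planar velocities of pedestrians near the robot.

    Returns a scalar score; larger values indicate more abrupt pedestrian reactions
    (e.g., sudden braking or swerving), suggesting the demonstration disturbed the crowd.
    """
    accel = np.diff(ped_velocities, axis=0) / dt          # (T-1, N, 2) finite-difference accel
    accel_mag = np.linalg.norm(accel, axis=-1)            # (T-1, N)
    sudden = np.clip(accel_mag - accel_threshold, 0.0, None)
    return float(sudden.sum())

# Demonstrations can then be ranked by this score (lower = closer to socially optimal)
# before feeding them to the trajectory-ranking loss.
```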
As both legged robots and embedded computing become increasingly capable, researchers have begun to focus on field deployment of these robots. Robust autonomy in unstructured environments requires perceiving the world around the robot in order to avoid hazards. However, incorporating perception online while maintaining agile locomotion is more challenging for legged robots because of the complex planners and controllers required to handle the locomotion dynamics. This report compares three recent approaches to perceptive locomotion and discusses the different ways in which vision can be used to enable legged autonomy.
The ability to recover from unexpected external perturbations is a fundamental motor skill in bipedal locomotion. An effective response includes not only the ability to recover balance and maintain stability, but also the ability to fall in a safe manner when balance recovery is infeasible. For robots associated with bipedal locomotion, such as humanoid robots and assistive robotic devices that help humans walk, designing controllers that provide this stability and safety can prevent damage to the robots or prevent injury-related medical costs. This is a challenging task because it involves generating highly dynamic motion for a high-dimensional, nonlinear, underactuated system with contacts. Despite progress with model-based and optimization methods, requirements such as extensive domain knowledge, relatively large computation time, and limited robustness to dynamic variations keep this an open problem. In this work, to address these issues, we develop learning-based algorithms capable of synthesizing push-recovery control policies for two different kinds of robots: humanoid robots and assistive robotic devices that aid bipedal locomotion. Our work can be branched into two closely related directions: 1) learning safe falling and fall-prevention strategies for humanoid robots, and 2) learning fall-prevention strategies for humans using a robotic assistive device. To achieve this, we introduce a set of Deep Reinforcement Learning (DRL) algorithms to learn control policies that improve safety when using these robots.
Robots and intelligent systems that sense and interact with the world are increasingly being used to automate a wide variety of tasks. The ability of these systems to complete these tasks depends on the mechanical and electrical components that make up the robot's body and its sensors, as well as on the algorithms that drive it, for example, perception algorithms to perceive the environment and planning and control algorithms to produce meaningful actions. It is therefore often necessary to consider the interactions between these components when designing an embodied system. This work explores the task-driven co-design of robotic systems in an end-to-end fashion, directly optimizing the physical components of a system together with inference or control algorithms for task performance. We first consider the problem of directly optimizing a beacon-based localization system for localization accuracy. Designing such a system involves placing beacons throughout the environment and inferring location from sensor readings. In our work, we develop a deep learning approach that directly optimizes both the placement of the beacons and the location inference for localization accuracy. We then turn our attention to the related problem of task-driven optimization of robots and their controllers. We first propose a data-efficient algorithm based on multi-task reinforcement learning. Our approach efficiently optimizes both physical design and control parameters directly for task performance by leveraging a controller that generalizes over the space of physical designs on which it is conditioned. We then extend this to allow optimization over discrete morphological parameters such as the number and configuration of limbs. Finally, we conclude by exploring the fabrication and deployment of optimized soft robots.
With the development of deep representation learning, the domain of reinforcement learning (RL) has become a powerful learning framework now capable of learning complex policies in high dimensional environments. This review summarises deep reinforcement learning (DRL) algorithms and provides a taxonomy of automated driving tasks where (D)RL methods have been employed, while addressing key computational challenges in real world deployment of autonomous driving agents. It also delineates adjacent domains such as behavior cloning, imitation learning, inverse reinforcement learning that are related but are not classical RL algorithms. The role of simulators in training agents, methods to validate, test and robustify existing solutions in RL are discussed.
The last decade witnessed increasingly rapid progress in self-driving vehicle technology, mainly backed up by advances in the area of deep learning and artificial intelligence. The objective of this paper is to survey the current state-of-the-art on deep learning technologies used in autonomous driving. We start by presenting AI-based self-driving architectures, convolutional and recurrent neural networks, as well as the deep reinforcement learning paradigm. These methodologies form a base for the surveyed driving scene perception, path planning, behavior arbitration and motion control algorithms. We investigate both the modular perception-planning-action pipeline, where each module is built using deep learning methods, as well as End2End systems, which directly map sensory information to steering commands. Additionally, we tackle current challenges encountered in designing AI architectures for autonomous driving, such as their safety, training data sources and computational hardware. The comparison presented in this survey helps to gain insight into the strengths and limitations of deep learning and AI approaches for autonomous driving and assist with design choices.
Imitation learning techniques aim to mimic human behavior in a given task. An agent (a learning machine) is trained to perform a task from demonstrations by learning a mapping between observations and actions. The idea of teaching by imitation has been around for many years, however, the field is gaining attention recently due to advances in computing and sensing as well as rising demand for intelligent applications. The paradigm of learning by imitation is gaining popularity because it facilitates teaching complex tasks with minimal expert knowledge of the tasks. Generic imitation learning methods could potentially reduce the problem of teaching a task to that of providing demonstrations; without the need for explicit programming or designing reward functions specific to the task. Modern sensors are able to collect and transmit high volumes of data rapidly, and processors with high computational power allow fast processing that maps the sensory data to actions in a timely manner. This opens the door for many potential AI applications that require real-time perception and reaction such as humanoid robots, self-driving vehicles, human computer interaction and computer games to name a few. However, specialized algorithms are needed to effectively and robustly learn models as learning by imitation poses its own set of challenges. In this paper, we survey imitation learning methods and present design options in different steps of the learning process. We introduce a background and motivation for the field as well as highlight challenges specific to the imitation problem. Methods for designing and evaluating imitation learning tasks are categorized and reviewed. Special attention is given to learning methods in robotics and games as these domains are the most popular in the literature and provide a wide array of problems and methodologies. We extensively discuss combining imitation learning approaches using different sources and methods, as well as incorporating other motion learning methods to enhance imitation. We also discuss the potential impact on industry, present major applications and highlight current and future research directions.
Legged robots pose one of the greatest challenges in robotics. Dynamic and agile maneuvers of animals cannot be imitated by existing methods that are crafted by humans. A compelling alternative is reinforcement learning, which requires minimal craftsmanship and promotes the natural evolution of a control policy. However, so far, reinforcement learning research for legged robots is mainly limited to simulation, and only few and comparably simple examples have been deployed on real systems. The primary reason is that training with real robots, particularly with dynamically balancing systems, is complicated and expensive. In the present work, we report a new method for training a neural network policy in simulation and transferring it to a state-of-the-art legged system, thereby we leverage fast, automated, and cost-effective data generation schemes. The approach is applied to the ANYmal robot, a sophisticated medium-dog-sized quadrupedal system. Using policies trained in simulation, the quadrupedal machine achieves locomotion skills that go beyond what had been achieved with prior methods: ANYmal is capable of precisely and energy-efficiently following high-level body velocity commands, running faster than ever before, and recovering from falling even in complex configurations.
We propose a diffusion approximation method for continuous-state Markov decision processes (MDPs) that can be used to solve autonomous navigation and control in unstructured off-road environments. In contrast to most decision-theoretic planning frameworks that assume a fully known state transition model, we design a method that eliminates this strong assumption, which is often extremely difficult to engineer in reality. We first take a second-order Taylor expansion of the value function. The Bellman optimality equation is then approximated by a partial differential equation that depends only on the first and second moments of the transition model. Combining this with a kernel representation of the value function, we then design an efficient policy iteration algorithm whose policy evaluation step can be expressed as a linear system of equations characterized by a finite set of supporting states. We first validate the proposed method through extensive simulations in 2D obstacle avoidance and 2.5D terrain navigation problems. The results show that the proposed approach leads to much superior performance over several baselines. We then develop a system that integrates our decision-making framework with onboard perception and conduct real-world experiments in both cluttered indoor and unstructured outdoor environments. The results from the physical system further demonstrate the applicability of our method in challenging real-world environments.
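In generic notation (an assumption, not the paper's exact derivation), the step described above can be sketched as follows: a second-order Taylor expansion of the value function reduces the Bellman optimality equation to an expression involving only the first and second moments of the transition.

```latex
% Hedged sketch with generic notation. Let \Delta := s' - s be the random displacement.
V(s') \approx V(s) + \nabla V(s)^\top \Delta
       + \tfrac{1}{2}\,\Delta^\top \nabla^2 V(s)\,\Delta .

% Substituting into the Bellman optimality equation
% V(s) = \max_a \{ r(s,a) + \gamma\,\mathbb{E}[V(s')] \} gives
V(s) \approx \max_a \Big\{ r(s,a) + \gamma \Big( V(s)
       + \nabla V(s)^\top \mu(s,a)
       + \tfrac{1}{2}\,\mathrm{tr}\!\big(\nabla^2 V(s)\,\Sigma(s,a)\big) \Big) \Big\},

% where only the first two moments of the transition are needed:
\mu(s,a) := \mathbb{E}[\Delta], \qquad \Sigma(s,a) := \mathbb{E}[\Delta\Delta^\top].
```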
Robot navigation traditionally relies on building an explicit map that is used to plan collision-free trajectories to a desired goal. In deformable, complex terrain, geometry-based approaches can fail to find a path because deformable objects are mischaracterized as rigid and impassable. Instead, we learn to predict the traversability of terrain regions and to prefer regions that are easier to navigate (e.g., short grass over small bushes). Rather than predicting collisions, we regress on the realized error with respect to a canonical dynamics model. We train with an on-policy approach, resulting in successful navigation policies using as little as 50 minutes of training data split across simulation and the real world. Our learning-based navigation system is a sample-efficient short-horizon planner that we demonstrate on a Clearpath Husky navigating through a variety of terrain including grassland and forest.
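A minimal sketch of the self-supervised regression target described above, assuming a simple unicycle model as the canonical dynamics; the model and function names are illustrative.

```python
# Hedged sketch: labeling terrain traversability by the error between a canonical
# (nominal) dynamics model and the motion the robot actually realized.
import numpy as np

def nominal_step(pose: np.ndarray, cmd: np.ndarray, dt: float) -> np.ndarray:
    """Canonical unicycle model: pose = (x, y, yaw), cmd = (v, omega)."""
    x, y, yaw = pose
    v, omega = cmd
    return np.array([x + v * np.cos(yaw) * dt,
                     y + v * np.sin(yaw) * dt,
                     yaw + omega * dt])

def traversability_label(pose_t, pose_t1, cmd, dt):
    """Regression target: how far the realized motion deviated from the nominal model.

    Small error -> terrain behaved like free space (e.g., compliant grass);
    large error -> terrain impeded the commanded motion (e.g., a rigid obstacle).
    """
    predicted = nominal_step(pose_t, cmd, dt)
    return float(np.linalg.norm(predicted[:2] - pose_t1[:2]))
```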
Controlling a non-statically stable biped robot is challenging due to the complex dynamics and multi-criterion optimization involved. Recent works have demonstrated the effectiveness of deep reinforcement learning (DRL) in simulation and on physical robots. In these methods, the rewards from different criteria are normally summed to learn a single value function. However, this may cause the loss of dependency information between the hybrid rewards and lead to a sub-optimal policy. In this work, we propose a novel reward-adaptive reinforcement learning method for biped locomotion, allowing the control policy to be simultaneously optimized by multiple criteria using a dynamic mechanism. The method applies a multi-head critic to learn a separate value function for each reward component, which leads to hybrid policy gradients. We further propose dynamic weights, allowing each component to optimize the policy with a different priority. This hybrid and dynamic policy gradient (HDPG) design makes the agent learn more efficiently. We show that the proposed method outperforms summed-up-reward approaches and can be transferred to physical robots. The sim-to-real and MuJoCo results further demonstrate the effectiveness and generalization of HDPG.
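A minimal PyTorch-style sketch of the multi-critic design described above: one value head per reward component and a dynamically weighted combination of per-component policy-gradient terms. Layer sizes, the advantage inputs, and the weighting rule are illustrative assumptions, not the HDPG implementation.

```python
# Hedged sketch: multi-head critic and weighted per-component policy-gradient loss.
import torch
import torch.nn as nn

class MultiCritic(nn.Module):
    """One value head per reward component (e.g., velocity tracking, posture, energy)."""
    def __init__(self, obs_dim: int, num_components: int, hidden: int = 256):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU(),
                                  nn.Linear(hidden, hidden), nn.ReLU())
        self.heads = nn.ModuleList([nn.Linear(hidden, 1) for _ in range(num_components)])

    def forward(self, obs):                                    # obs: (batch, obs_dim)
        z = self.body(obs)
        return torch.cat([h(z) for h in self.heads], dim=-1)   # (batch, num_components)

def hybrid_policy_loss(log_probs, advantages, weights):
    """log_probs: (batch,) action log-probabilities,
    advantages: (batch, K) per-component advantages from the K critic heads,
    weights: (K,) dynamic priorities (e.g., renormalized from recent component returns)."""
    per_component = -(log_probs.unsqueeze(-1) * advantages)    # (batch, K) policy-gradient terms
    return (per_component * weights).sum(dim=-1).mean()
```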
We present a novel outdoor navigation algorithm that generates stable, efficient actions to navigate a robot to a goal. We use a multi-stage training pipeline and show that our model produces policies that result in stable and reliable robot navigation on complex terrains. Building on the Proximal Policy Optimization (PPO) algorithm, we develop a novel method to achieve multiple capabilities for outdoor navigation tasks, namely: alleviating the robot's drift, keeping the robot stable on bumpy terrain, avoiding climbing hills with steep elevation changes, and avoiding collisions. Our training process mitigates the reality (sim-to-real) gap by introducing a broader range of environment and robot parameters as well as rich features of lidar perception in a unified simulator. We evaluate our method in simulation and in the real world using a Clearpath Husky and a Jackal. Furthermore, we compare our method with state-of-the-art approaches and show that, in the real world, it improves performance on uneven terrain by at least 30.7%, measured by the robot's elevation change at each motion step, by preventing the robot from moving through regions with high elevation gradients.
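A minimal sketch of how the listed capabilities could be composed into a single PPO reward; the weights and penalty forms are illustrative assumptions, not the paper's reward design.

```python
# Hedged sketch: one possible per-step reward combining goal progress, stability,
# elevation-gradient avoidance, and a collision penalty.
def navigation_reward(progress, roll, pitch, elevation_gradient, collided,
                      w_progress=1.0, w_stability=0.5, w_slope=0.5, w_collision=10.0):
    """progress: reduction in distance-to-goal this step (m);
    roll, pitch: robot attitude (rad); elevation_gradient: local slope magnitude;
    collided: boolean flag from the simulator or a bumper sensor."""
    r = w_progress * progress
    r -= w_stability * (abs(roll) + abs(pitch))   # keep the robot level on bumpy terrain
    r -= w_slope * elevation_gradient             # discourage moving over steep slopes
    if collided:
        r -= w_collision                          # sparse collision penalty
    return r
```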
We present a reward-predictive, model-based deep learning method with trajectory-constrained visual attention for mapless, local visual navigation tasks. Our method learns to place visual attention at locations in latent image space that track the trajectories induced by vehicle control actions, in order to improve prediction accuracy during planning. The attention model is jointly optimized by a task-specific loss and an additional trajectory-constraint loss, allowing adaptability while encouraging a regularized structure for better generalization and reliability. Importantly, visual attention is applied in the latent feature map space rather than the raw image space to promote efficient planning. We validate our model on visual navigation tasks of planning low-turbulence trajectories in off-road settings and climbing up hillsides. The experiments involve randomized, procedurally generated simulated environments as well as real-world environments. Compared with attention and self-attention alternatives, we find that our method improves generalization and learning efficiency.
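A minimal sketch of the joint objective described above, assuming a softmax-normalized attention map over the latent feature grid and a binary mask of the cells visited by the action-induced trajectory; names and the weight `lam` are illustrative.

```python
# Hedged sketch: task loss (reward-prediction error) plus a trajectory-constraint loss
# that pulls attention mass toward the latent cells traversed by the planned actions.
import torch
import torch.nn.functional as F

def joint_attention_loss(pred_reward, true_reward, attention_map, traj_mask, lam=0.1):
    """attention_map: (B, H, W) attention over the latent feature map, assumed to sum
    to 1 per sample (softmax-normalized); traj_mask: (B, H, W) binary mask of latent
    cells that the action-induced trajectory visits."""
    task_loss = F.mse_loss(pred_reward, true_reward)
    # Fraction of attention mass that falls on trajectory cells; push it toward 1.
    on_traj = (attention_map * traj_mask).flatten(1).sum(dim=1)
    traj_loss = (1.0 - on_traj).clamp(min=0.0).mean()
    return task_loss + lam * traj_loss
```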
Some of the most challenging environments on our planet are accessible to quadrupedal animals but remain out of reach for autonomous machines. Legged locomotion can dramatically expand the operational domains of robotics. However, conventional controllers for legged locomotion are based on elaborate state machines that explicitly trigger the execution of motion primitives and reflexes. These designs have escalated in complexity while falling short of the generality and robustness of animal locomotion. Here we present a radically robust controller for legged locomotion in challenging natural environments. We present a novel solution to incorporating proprioceptive feedback in locomotion control and demonstrate remarkable zero-shot generalization from simulation to natural environments. The controller is trained by reinforcement learning in simulation. It is based on a neural network that acts on a stream of proprioceptive signals. The trained controller has taken two generations of quadrupedal ANYmal robots to a variety of natural environments that are beyond the reach of prior published work in legged locomotion. The controller retains its robustness under conditions that have never been encountered during training: deformable terrain such as mud and snow, dynamic footholds such as rubble, and overground impediments such as thick vegetation and gushing water. The presented work opens new frontiers for robotics and indicates that radical robustness in natural environments can be achieved by training in much simpler domains.
Deep reinforcement learning (RL) based controllers for legged robots have demonstrated impressive robustness, enabling walking in different environments on multiple robot platforms. To enable the application of RL policies to humanoid robots in the real world, it is crucial to build a system that can achieve walking in any direction on 2D and 3D terrain and that can be controlled by user commands. In this paper, we tackle this problem by learning a policy that follows a given step sequence. The policy is trained with the help of a set of procedurally generated step sequences (also known as footstep plans). We show that feeding only the upcoming two steps to the policy is sufficient to achieve omnidirectional walking, turning in place, standing, and climbing stairs. Our method employs curriculum learning over terrain complexity and circumvents the need for reference motions or pre-trained weights. We demonstrate the proposed approach by learning RL policies for two new robot platforms - HRP5P and JVRC-1 - in the MuJoCo simulation environment. The code for training and evaluation is available online.
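A minimal sketch of the observation construction implied above, exposing only the next two footsteps of the plan to the policy; the observation layout and padding rule are illustrative assumptions.

```python
# Hedged sketch: conditioning the walking policy on only the next two footsteps of a
# procedurally generated footstep plan.
import numpy as np

def build_observation(proprio: np.ndarray, footstep_plan: list, step_index: int) -> np.ndarray:
    """proprio: joint positions/velocities, base orientation, etc.
    footstep_plan: list of (x, y, z, yaw) step targets in the robot frame.
    Only the two upcoming steps are exposed to the policy, which the abstract reports is
    enough for omnidirectional walking, turning in place, standing, and stair climbing."""
    upcoming = footstep_plan[step_index:step_index + 2]
    while len(upcoming) < 2:                      # pad at the end of the plan
        upcoming.append(upcoming[-1] if upcoming else (0.0, 0.0, 0.0, 0.0))
    return np.concatenate([proprio, np.asarray(upcoming, dtype=np.float32).ravel()])
```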
Estimating terrain traversability in off-road environments requires reasoning about the complex interaction dynamics between the robot and these terrains. However, it is challenging to build accurate physics models or to create informative labels for learning such a model in a supervised manner. We propose a method that learns to predict traversability costmaps in a self-supervised manner by combining exteroceptive environmental information with proprioceptive terrain-interaction feedback. Additionally, we propose a novel way of incorporating robot velocity into the costmap prediction pipeline. We validate our method on multiple large, autonomous all-terrain vehicles (ATVs) on challenging off-road terrain, and demonstrate ease of integration on a separate large ground robot. Our short-scale navigation results show that using our learned costmaps leads to smoother navigation overall and gives the robot a more fine-grained understanding of the interactions between the robot and different terrain types, such as grass and gravel. Our large-scale navigation trials show that, compared with an occupancy-based navigation baseline, we can reduce the number of interventions by up to 57% on challenging off-road courses ranging from 400 m to 3150 m.
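A minimal PyTorch-style sketch of a velocity-conditioned costmap predictor trained against proprioceptively derived targets; the architecture and the way velocity is injected are illustrative assumptions, not the paper's network.

```python
# Hedged sketch: a costmap predictor fusing exteroceptive map features with robot speed,
# trained against self-supervised targets from proprioceptive terrain-interaction feedback.
import torch
import torch.nn as nn

class VelocityConditionedCostmapNet(nn.Module):
    def __init__(self, map_channels: int, hidden: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(map_channels, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.ReLU())
        # Velocity enters as an extra channel broadcast over the map; concatenation is
        # used here for brevity (other conditioning schemes are equally plausible).
        self.head = nn.Conv2d(hidden + 1, 1, 1)

    def forward(self, map_feats, speed):          # map_feats: (B, C, H, W), speed: (B,)
        z = self.encoder(map_feats)
        v = speed.view(-1, 1, 1, 1).expand(-1, 1, *z.shape[-2:])
        return self.head(torch.cat([z, v], dim=1))   # (B, 1, H, W) predicted cost

# Training target (self-supervised, assumed): a proprioceptive "roughness" score, e.g.
# band-pass filtered vertical acceleration, accumulated per traversed map cell.
```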
In inverse reinforcement learning (IRL), a learning agent infers a reward function encoding the underlying task using demonstrations from experts. However, many existing IRL techniques make the often unrealistic assumption that the agent has access to full information about the environment. We remove this assumption by developing an algorithm for IRL in partially observable Markov decision processes (POMDPs). We address two limitations of existing IRL techniques. First, they require an excessive amount of data due to the information asymmetry between the expert and the learner. Second, most of these IRL techniques require solving the computationally intractable forward problem -- computing an optimal policy given a reward function -- in POMDPs. The developed algorithm reduces the information asymmetry while increasing the data efficiency by incorporating task specifications expressed in temporal logic into IRL. Such specifications may be interpreted as side information available to the learner a priori in addition to the demonstrations. Further, the algorithm avoids a common source of algorithmic complexity by building on causal entropy as the measure of the likelihood of the demonstrations as opposed to entropy. Nevertheless, the resulting problem is nonconvex due to the so-called forward problem. We solve the intrinsic nonconvexity of the forward problem in a scalable manner through a sequential linear programming scheme that guarantees to converge to a locally optimal policy. In a series of examples, including experiments in a high-fidelity Unity simulator, we demonstrate that even with a limited amount of data and POMDPs with tens of thousands of states, our algorithm learns reward functions and policies that satisfy the task while inducing similar behavior to the expert by leveraging the provided side information.
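In standard maximum-causal-entropy notation (an assumption about the formulation, not a restatement of the paper's exact program), the learner's problem can be sketched as follows, with the temporal-logic side information acting as additional constraints on the admissible policies.

```latex
% Hedged sketch: maximum-causal-entropy IRL objective with feature matching.
\max_{\pi}\; H(A_{1:T}\,\|\,S_{1:T})
  = \max_{\pi}\; \mathbb{E}_{\pi}\Big[\sum_{t=1}^{T} -\log \pi(a_t \mid s_{1:t}, a_{1:t-1})\Big]
\quad \text{s.t.} \quad
\mathbb{E}_{\pi}\Big[\sum_{t}\phi(s_t,a_t)\Big]
  = \mathbb{E}_{\text{demo}}\Big[\sum_{t}\phi(s_t,a_t)\Big].
```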
We address the problem of perceptive locomotion in dynamic environments. In this problem, a quadrupedal robot must exhibit robust and agile walking behaviors in the presence of environmental clutter and moving obstacles. We propose a hierarchical learning framework, named PRELUDE, which decomposes the problem of perceptive locomotion into high-level decision making, which predicts navigation commands, and low-level gait generation, which realizes the target commands. In this framework, we train the high-level navigation controller with imitation learning on human demonstrations collected on a steerable cart, and the low-level gait controller with reinforcement learning (RL). Our method can therefore acquire complex navigation behaviors from human supervision and discover versatile gaits from trial and error. We demonstrate the effectiveness of our approach in simulation and in hardware experiments. Videos and code can be found at https://ut-austin-rpl.github.io/prelude.
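A minimal sketch of the hierarchical loop described above: a low-rate navigation policy produces velocity commands that a high-rate gait policy turns into joint targets. The rates and the robot interface are illustrative assumptions, not the PRELUDE API.

```python
# Hedged sketch: high-level (imitation-learned) navigation commands driving a
# low-level (RL-learned) gait policy at a higher control rate.
def hierarchical_control_loop(nav_policy, gait_policy, robot,
                              high_level_hz=10, low_level_hz=500):
    steps_per_command = low_level_hz // high_level_hz
    command = (0.0, 0.0)                                  # (forward velocity, yaw rate)
    t = 0
    while not robot.done():
        if t % steps_per_command == 0:
            command = nav_policy(robot.camera_image())    # navigation command prediction
        joint_targets = gait_policy(robot.proprioception(), command)   # gait generation
        robot.apply(joint_targets)
        t += 1
```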
Policy search and model predictive control (MPC) are two different paradigms for robot control: policy search has the strength of automatically learning complex policies from experience data, while MPC can offer optimal control performance using models and trajectory optimization. An open research question is how to leverage and combine the advantages of both approaches. In this work, we provide an answer by using policy search to automatically choose high-level decision variables for MPC, which leads to a novel policy-search-for-model-predictive-control framework. Specifically, we formulate MPC as a parameterized controller, where the hard-to-optimize decision variables are represented as high-level policies. This formulation allows the policies to be optimized in a self-supervised manner. We validate the framework by focusing on a challenging problem in agile drone flight: flying a quadrotor through fast-moving gates. Experiments show that our controller achieves robust and real-time control performance in both simulation and the real world. The proposed framework offers a new perspective on combining learning and control.
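A minimal sketch of the idea, assuming a generic evolution-strategies update for the high-level policy parameters and hypothetical `mpc_solve`/`simulate` interfaces; this is not the authors' implementation, which uses its own policy-search procedure.

```python
# Hedged sketch: policy search over high-level decision variables of a parameterized MPC,
# optimized in a self-supervised way from rollout returns.
import numpy as np

def rollout_return(mpc_solve, simulate, decision_vars):
    """Run the parameterized MPC with the given high-level decision variables and
    return the resulting task performance (e.g., progress through a moving gate)."""
    trajectory = mpc_solve(decision_vars)          # low-level optimal control problem
    return simulate(trajectory)                    # scalar return from the rollout

def policy_search_step(theta, mpc_solve, simulate, sigma=0.05, lr=0.1, num_samples=16):
    """Simple evolution-strategies update of the high-level parameters theta."""
    grads = np.zeros_like(theta)
    for _ in range(num_samples):
        eps = np.random.randn(*theta.shape)
        ret = rollout_return(mpc_solve, simulate, theta + sigma * eps)
        grads += ret * eps
    return theta + lr * grads / (num_samples * sigma)
```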