Our aim is to build autonomous agents that can solve tasks in environments like Minecraft. To do so, we use an imitation learning-based approach. We formulate our control problem as a search problem over a dataset of expert demonstrations, where the agent copies actions from a similar demonstration trajectory of image-action pairs. We perform a proximity search over the BASALT MineRL dataset in the latent representation space of a Video PreTraining (VPT) model. The agent copies the actions from the selected expert trajectory as long as the distance between the state representations of the agent and of that trajectory does not diverge; then the proximity search is repeated. Our approach effectively recovers meaningful demonstration trajectories and produces human-like agent behavior in the Minecraft environment.
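A minimal sketch of this kind of latent-space proximity search is shown below. The encoder, the divergence threshold, the array shapes, and the Gym-style environment API are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def nearest_expert_frame(agent_latent, expert_latents):
    """Index of the expert frame whose latent is closest to the agent's (L2 distance)."""
    dists = np.linalg.norm(expert_latents - agent_latent, axis=1)
    return int(np.argmin(dists))

def follow_demonstrations(env, encode, expert_latents, expert_actions,
                          divergence_threshold=2.0, max_steps=1000):
    """Copy expert actions until the latent distance diverges, then re-run the search."""
    obs = env.reset()
    t = nearest_expert_frame(encode(obs), expert_latents)
    for _ in range(max_steps):
        obs, _, done, _ = env.step(expert_actions[t])  # copy the expert's action
        if done:
            break
        t += 1
        # Re-run the proximity search when the demonstration ends or the agent's
        # latent drifts too far from the expert's next latent.
        if (t >= len(expert_actions)
                or np.linalg.norm(encode(obs) - expert_latents[t]) > divergence_threshold):
            t = nearest_expert_frame(encode(obs), expert_latents)
```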
While behavior learning has made impressive progress recently, it lags behind computer vision and natural language processing because it cannot leverage large, human-generated datasets. Human behavior has wide variance and multiple modes, and human demonstrations typically do not come with reward labels. These properties limit the applicability of current offline RL and behavioral cloning methods to learning from large, pre-collected datasets. In this work, we present Behavior Transformer (BeT), a new technique for modeling unlabeled demonstration data with multiple modes. BeT retrofits standard transformer architectures with action discretization, coupled with a multi-task action correction inspired by offset prediction in object detection. This allows us to leverage the multi-modal modeling ability of modern transformers to predict multi-modal continuous actions. We experimentally evaluate BeT on a variety of robotic manipulation and self-driving behavior datasets. We show that BeT significantly improves over prior state-of-the-art work on solving demonstrated tasks while capturing the major modes present in the pre-collected datasets. Finally, through an extensive ablation study, we analyze the importance of every key component in BeT. Videos of behavior generated by BeT are available at https://notmahi.github.io/bet
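A rough sketch of the action discretization described above, assuming k-means cluster centers serve as the discrete action bins and a per-bin continuous offset recovers the final action; the cluster count, action dimension, and placeholder data are illustrative assumptions, not the BeT implementation.

```python
import numpy as np
from sklearn.cluster import KMeans

# Discretize continuous expert actions into k bins (cluster centers) and keep
# the residual offsets needed to reconstruct the original continuous action.
actions = np.random.randn(10_000, 7)                 # placeholder (N, action_dim) dataset
kmeans = KMeans(n_clusters=64, n_init=10).fit(actions)
bins = kmeans.predict(actions)                       # discrete targets for the transformer
offsets = actions - kmeans.cluster_centers_[bins]    # continuous correction targets

# At inference time, a transformer head would predict a bin and an offset;
# the executed action is the bin center plus the predicted offset.
def decode_action(predicted_bin, predicted_offset):
    return kmeans.cluster_centers_[predicted_bin] + predicted_offset
```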
In the context of imitation learning, providing expert trajectories is usually expensive and time-consuming. The goal must therefore be to create algorithms that require as little expert data as possible. In this paper, we propose an algorithm that imitates the expert's high-level strategy rather than just imitating the expert at the action level, which we hypothesize requires less expert data and makes training more stable. As a prior, we assume that the high-level strategy is to reach an unknown target state region, which we argue is a valid prior for many domains in reinforcement learning. The target state region is unknown, but since the expert has demonstrated how to reach it, the agent tries to reach states similar to the expert's. Building on the idea of temporal coherence, our algorithm trains a neural network to predict whether two states are similar, in the sense that they may occur close to each other in time. During inference, the agent compares its current state with the expert states in a case base to obtain this similarity. The results show that our approach can still learn a near-optimal policy in settings with very little expert data, where algorithms that try to imitate the expert at the action level can no longer do so.
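A minimal sketch of how such a temporal-coherence similarity network could be set up, assuming states that occur within a small time window form positive pairs and far-apart states form negatives; the architecture, window size, and pair construction are illustrative assumptions, not the paper's method.

```python
import torch
import torch.nn as nn

class SimilarityNet(nn.Module):
    """Predicts whether two states are likely to occur close together in time."""
    def __init__(self, state_dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1))

    def forward(self, s1, s2):
        # Similarity logit; train with nn.BCEWithLogitsLoss on pair labels.
        return self.net(torch.cat([s1, s2], dim=-1))

def make_pairs(trajectory, window=5):
    """Positive pairs: states within `window` steps; negatives: far-apart states."""
    T = len(trajectory)
    pos, neg = [], []
    for i in range(T - window):
        offset = torch.randint(1, window, (1,)).item()
        pos.append((trajectory[i], trajectory[i + offset]))
        j = torch.randint(0, T, (1,)).item()
        if abs(j - i) > 2 * window:
            neg.append((trajectory[i], trajectory[j]))
    return pos, neg
```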
In modern machine learning research, the ability to generalize to previously unseen tasks is arguably a key challenge. It is also a cornerstone of a future "General AI". Any artificially intelligent agent deployed in a real-world application must be able to adapt to unknown environments on the fly. Researchers typically rely on reinforcement and imitation learning to adapt to new tasks online through trial-and-error learning. However, this can be challenging for complex tasks that require many time steps or many subtasks to complete. These "long-horizon" tasks suffer from poor sample efficiency and may require very long training times before the agent learns to perform the necessary long-term planning. In this work, we introduce CASE, which attempts to address these issues by training an imitation learning agent with adaptive "near-future" subgoals. These subgoals are recomputed at each step using compositional arithmetic in a learned latent representation space. Beyond improving learning efficiency on standard long-horizon tasks, this approach also enables one-shot generalization to previously unseen tasks, given only a single reference trajectory for that task in a different environment. Our experiments show that the proposed approach consistently outperforms the previous state-of-the-art compositional imitation learning approach by 30%.
The goal of meta-learning is to generalize to new tasks and goals as quickly as possible. Ideally, we would like methods that generalize to new goals and tasks on the first attempt. Toward that end, we introduce Context Planning Networks (CPN). Tasks are represented as goal images and used to condition the approach. We evaluate CPN along with several other approaches to meta-learning adapted for zero-shot goal generalization. We evaluate the methods on 24 distinct manipulation tasks using the Meta-World benchmark. We find that CPN outperforms several approaches and baselines on one task and is competitive with existing approaches on others. We demonstrate the approach on a physical platform on Jenga tasks using a Kinova Jaco robotic arm.
Practical applications of learning agents require sample-efficient and interpretable algorithms. Learning from behavioral priors is a promising way to bootstrap agents with better exploration policies or to safeguard against the pitfalls of early learning. Existing imitation learning solutions require a large number of expert demonstrations and rely on hard-to-interpret learning methods such as deep Q-learning. In this work, we propose a planning-based approach that can use these behavioral priors for effective exploration and learning in a reinforcement learning setting, and we demonstrate that carefully chosen exploration policies in the form of behavioral priors can help the agent learn faster.
Imitation learning uses the expert's demonstrations to uncover the optimal policy, and it is also applicable to real-world robotics tasks. In that setting, however, training of the agent is carried out in a simulation environment due to safety, economic, and time constraints. Later, the agent is applied to the real-world domain using sim-to-real methods. In this paper, we apply imitation learning methods to solve robotics tasks in a simulated environment and use transfer learning to apply these solutions in the real-world environment. Our task is set in the Duckietown environment, where the robot agent has to follow the right lane based on the input images of a single forward-facing camera. We present three imitation learning methods and two sim-to-real methods capable of achieving this task. A detailed comparison of these techniques is provided to highlight their advantages and disadvantages.
In a multi-task reinforcement learning setting, the learner commonly benefits from training on multiple related tasks by exploiting the similarities among them. At the same time, the trained agent is able to solve a wider range of different problems. While this effect is well documented for model-free multi-task approaches, we demonstrate a detrimental effect when using a single learned dynamics model for multiple tasks. We therefore address the fundamental question of whether model-based multi-task reinforcement learning benefits from a shared dynamics model in a similar way that model-free methods benefit from a shared policy network. Using a single dynamics model, we see clear evidence of task confusion and reduced performance. As a remedy, enforcing an internal structure on the learned dynamics model by training isolated sub-networks for each task notably improves performance while using the same number of parameters. We illustrate our findings by comparing both methods on a simple gridworld and a more complex VizDoom multi-task experiment.
Standard imitation learning can fail when the expert demonstrators have different sensory inputs than the imitating agent. This is because partial observability gives rise to hidden confounders in the causal graph. We break down the space of confounded imitation learning problems and identify three settings with different data requirements in which the correct imitation policy can be identified. We then introduce an algorithm for deconfounded imitation learning, which trains an inference model jointly with a latent-conditional policy. At test time, the agent alternates between updating its belief over the latent and acting under the belief. We show in theory and practice that this algorithm converges to the correct interventional policy, solves the confounding issue, and can under certain assumptions achieve an asymptotically optimal imitation performance.
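The test-time procedure described above could look roughly like the following loop; the inference model, the latent-conditional policy, the belief representation, and the Gym-style environment API are placeholders for illustration rather than the paper's actual components.

```python
import numpy as np

def run_deconfounded_policy(env, policy, inference_model, latent_dim, steps=200):
    """Alternate between updating the belief over the hidden latent from the
    observed history and acting under the current belief (illustrative loop)."""
    belief = np.zeros(latent_dim)          # prior belief over the latent confounder
    history = []
    obs = env.reset()
    for _ in range(steps):
        action = policy(obs, belief)       # latent-conditional policy
        next_obs, reward, done, _ = env.step(action)
        history.append((obs, action, next_obs))
        belief = inference_model(history)  # refine the belief from the history
        obs = next_obs
        if done:
            break
```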
Humans intuitively solve tasks in versatile ways, varying their behavior in terms of trajectory-based planning and for individual steps. Thus, they can easily generalize and adapt to new and changing environments. Current Imitation Learning algorithms often only consider unimodal expert demonstrations and act in a state-action-based setting, making it difficult for them to imitate human behavior in case of versatile demonstrations. Instead, we combine a mixture of movement primitives with a distribution matching objective to learn versatile behaviors that match the expert's behavior and versatility. To facilitate generalization to novel task configurations, we do not directly match the agent's and expert's trajectory distributions but rather work with concise geometric descriptors which generalize well to unseen task configurations. We empirically validate our method on various robot tasks using versatile human demonstrations and compare to imitation learning algorithms in a state-action setting as well as a trajectory-based setting. We find that the geometric descriptors greatly help in generalizing to new task configurations and that combining them with our distribution-matching objective is crucial for representing and reproducing versatile behavior.
We investigate the visual cross-embodiment imitation setting, in which agents learn policies from videos of other agents (such as humans) demonstrating the same task, but with stark differences in their embodiments: shape, actions, end-effector dynamics, and so on. In this work, we demonstrate that it is possible to automatically discover and learn vision-based reward functions from cross-embodiment demonstration videos that are robust to these differences. Specifically, we present a self-supervised method for Cross-embodiment Inverse Reinforcement Learning (XIRL) that leverages temporal cycle-consistency constraints to learn deep visual embeddings that capture task progression from offline videos of demonstrations across multiple expert agents, each performing the same task differently due to embodiment differences. Prior to our work, producing rewards from self-supervised embeddings typically required alignment with a reference trajectory, which may be difficult to acquire under stark embodiment differences. We show empirically that if the embeddings are aware of task progress, simply taking the negative distance between the current state and the goal state in the learned embedding space is useful as a reward for training policies with reinforcement learning. We find that our learned reward function not only works for embodiments seen during training, but also generalizes to entirely new embodiments. Additionally, when transferring real-world human demonstrations to a simulated robot, we find that XIRL is more sample efficient than current best methods. Qualitative results, code, and datasets are available at https://x-irl.github.io
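A minimal sketch of the reward described above: the negative embedding-space distance between the current observation and a goal observation. The encoder and shapes are illustrative assumptions, not the released XIRL code.

```python
import numpy as np

def embedding_reward(encode, observation, goal_observation):
    """Reward = negative L2 distance between current and goal embeddings."""
    z = encode(observation)            # task-progress-aware visual embedding
    z_goal = encode(goal_observation)  # embedding of a goal (task-complete) frame
    return -float(np.linalg.norm(z - z_goal))
```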
To enhance the cross-target and cross-scene generalization of target-driven visual navigation based on deep reinforcement learning (RL), we introduce an information-theoretic regularization term into the RL objective. The regularization maximizes the mutual information between navigation actions and the transformations of the agent's visual observations, thus promoting more informed navigation decisions. In this way, the agent models the action-observation dynamics by learning a variational generative model. Based on this model, the agent generates (imagines) the next observation from its current observation and the navigation target. The agent thereby learns the causal relationship between navigation actions and the changes in its observations, which allows it to predict the next navigation action by comparing the current and the imagined next observations. Cross-target and cross-scene evaluations on the AI2-THOR framework show that our method attains a 10% improvement in average success rate over some state-of-the-art models. We further evaluate our model in two real-world settings: navigation in unseen indoor scenes from the discrete Active Vision Dataset (AVD) and in a continuous real-world environment with a TurtleBot. We demonstrate that our navigation model is able to successfully achieve navigation tasks in these scenarios. Videos and models can be found in the supplementary material.
This paper addresses the problem of inverse reinforcement learning (IRL): inferring an agent's reward function from observations of its behavior. IRL can provide a generalizable and compact representation for apprenticeship learning and enable accurately inferring a human's preferences in order to assist them. However, effective IRL is challenging, because many reward functions can be compatible with an observed behavior. We focus on how prior reinforcement learning (RL) experience can be leveraged to make learning these preferences faster and more efficient. We propose the IRL algorithm BASIS (Behavior Acquisition through Successor-feature Intention inference from Samples), which leverages multi-task RL pre-training and successor features to allow an agent to build a strong basis for intentions that spans the space of possible goals in a given domain. When exposed to just a few expert demonstrations optimizing a novel goal, the agent uses its basis to quickly and effectively infer the reward function. Our experiments show that our approach is highly effective at inferring and optimizing demonstrated reward functions, accurately inferring reward functions from fewer than 100 trajectories.
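As a brief reminder of the successor-feature machinery the abstract refers to, a standard formulation (not this paper's exact notation) is sketched below; phi, w, and the linear-reward assumption are the usual textbook ingredients.

```latex
% Successor features: expected discounted sum of state-action features under policy pi
\psi^{\pi}(s, a) = \mathbb{E}_{\pi}\!\left[\sum_{t=0}^{\infty} \gamma^{t}\,\phi(s_t, a_t)\,\middle|\, s_0 = s,\ a_0 = a\right]
% Under a linear reward assumption r(s, a) = \phi(s, a)^{\top} w,
% the action-value function decomposes as
Q^{\pi}(s, a) = \psi^{\pi}(s, a)^{\top} w
% so inferring an expert's reward reduces to estimating the weight vector w.
```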
While deep reinforcement learning has proven to be successful in solving control tasks, the "black-box" nature of an agent has raised increasing concern. We propose a prototype-based post-hoc policy explainer, ProtoX, that explains a black-box agent by prototyping the agent's behaviors into scenarios, each represented by a prototypical state. When learning prototypes, ProtoX considers both visual similarity and scenario similarity. The latter is unique to the reinforcement learning context, since it explains why the same action is taken in visually different states. To teach ProtoX about visual similarity, we pre-train an encoder using contrastive learning via self-supervised learning to recognize states as similar if they occur close together in time and receive the same action from the black-box agent. We then add an isometry layer to allow ProtoX to adapt scenario similarity to the downstream task. ProtoX is trained via imitation learning using behavior cloning, and thus requires no access to the environment or agent. In addition to explanation fidelity, we design different prototype shaping terms in the objective function to encourage better interpretability. We conduct various experiments to test ProtoX. Results show that ProtoX achieved high fidelity to the original black-box agent while providing meaningful and understandable explanations.
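The contrastive pre-training signal described above could be constructed roughly as follows; treating different-action pairs within the window as negatives is an added assumption for illustration, not the ProtoX code.

```python
def contrastive_pairs(states, actions, window=3):
    """Treat two states as a positive pair if they occur within `window` steps
    of each other AND the black-box agent took the same action in both."""
    positives, negatives = [], []
    for i in range(len(states)):
        for j in range(i + 1, min(i + window + 1, len(states))):
            if actions[i] == actions[j]:
                positives.append((states[i], states[j]))
            else:
                negatives.append((states[i], states[j]))
    return positives, negatives
```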
The Learn-to-Race Autonomous Racing Virtual Challenge, hosted on the www.aicrowd.com platform, consisted of two tracks: Single Camera and Multi Camera. Our UniTeam team was among the final winners in the Single Camera track. The agent had to pass a previously unknown F1-style track in the minimum time with the least amount of off-road driving. In our approach, we used a U-Net architecture for road segmentation, a variational autoencoder for encoding the road binary masks, and a nearest-neighbor search strategy that selects the best action for a given state. Our agent achieved an average speed of 105 km/h on stage 1 (known track) and 73 km/h on stage 2 (unknown track) without any off-road driving. Here we present our solution and results. The code implementation is available at: https://gitlab.aicrowd.com/shivansh beohar/l2r
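A sketch of the action-selection step described above, assuming a segmentation model, a mask encoder, and a bank of latent/action pairs are already available; all names and shapes here are illustrative, not the team's released code.

```python
import numpy as np

def select_action(image, segment_road, encode_mask, latent_bank, action_bank):
    """Segment the road, encode the binary mask, and copy the action of the
    nearest stored latent (illustrative nearest-neighbor policy)."""
    mask = segment_road(image)                 # U-Net-style road segmentation
    z = encode_mask(mask)                      # latent code of the binary road mask
    distances = np.linalg.norm(latent_bank - z, axis=1)
    return action_bank[int(np.argmin(distances))]
```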
In few-shot imitation learning (FSIL), using behavioral cloning (BC) to solve unseen tasks with few expert demonstrations has become a popular research direction. The following capabilities are essential in robotics applications: (1) behaving in compound tasks that contain multiple stages; (2) retrieving knowledge from few length-variant and misaligned demonstrations; (3) learning from different experts. No previous work can achieve these abilities at the same time. In this work, we study the FSIL problem under the union of the above settings and introduce a novel Stage Conscious Attention Network (SCAN) to retrieve knowledge from few demonstrations simultaneously. SCAN uses an attention module to identify each stage in length-variant demonstrations. Moreover, it is designed as a demonstration-conditioned policy that learns the relationship between experts and the agent. Experiment results show that SCAN can learn from different experts without fine-tuning and outperforms baselines in complex compound tasks, with explainable visualization.
We present an end-to-end, model-based deep reinforcement learning agent that dynamically attends to relevant parts of its state during planning. The agent uses a bottleneck mechanism over a set-based representation to limit the number of entities it attends to at each planning step. In experiments, we investigate the bottleneck mechanism on several sets of customized environments featuring different challenges. We consistently observe that this design allows the planning agent to generalize its learned task-solving ability to compatible unseen environments by attending to the relevant objects, leading to better out-of-distribution generalization performance.
We present a novel Learning from Demonstration (LfD) method, Deformable Manipulation from Demonstrations (DMfD), to solve deformable manipulation tasks using states or images as input, given expert demonstrations. Our method uses demonstrations in three different ways and balances the trade-off between exploring the environment online and using guidance from the expert to explore high-dimensional spaces effectively. We test DMfD on a set of representative manipulation tasks for a one-dimensional rope and a two-dimensional cloth from the SoftGym suite, each with state and image observations. Our method exceeds baseline performance by up to 12.9% on state-based tasks and by up to 33.44% on image-based tasks, with comparable or better stochastic performance. Additionally, we create two challenging environments for folding a 2D cloth using image-based observations and set a performance benchmark for them. We deploy DMfD on a real robot with a minimal loss in normalized performance during real-world execution compared to simulation (about 6%). Source code is available at github.com/uscresl/dmfd
Transformer, originally devised for natural language processing, has also achieved significant success in computer vision. Thanks to its superior expressive power, researchers are investigating ways to deploy transformers in reinforcement learning (RL), and transformer-based models have demonstrated their potential on representative RL benchmarks. In this paper, we collect and dissect recent advances on transforming RL by transformer (transformer-based RL or TRL), in order to explore its development trajectory and future trend. We group existing developments into two categories: architecture enhancement and trajectory optimization, and examine the main applications of TRL in robotic manipulation, text-based games, navigation and autonomous driving. For architecture enhancement, these methods consider how to apply the powerful transformer structure to RL problems under the traditional RL framework, which model agents and environments much more precisely than deep RL methods, but they are still limited by the inherent defects of traditional RL algorithms, such as bootstrapping and the "deadly triad". For trajectory optimization, these methods treat RL problems as sequence modeling and train a joint state-action model over entire trajectories under the behavior cloning framework, which are able to extract policies from static datasets and fully use the long-sequence modeling capability of the transformer. Given these advancements, extensions and challenges in TRL are reviewed and proposals for future directions are discussed. We hope that this survey can provide a detailed introduction to TRL and motivate future research in this rapidly developing field.
Adversarial imitation learning (AIL) has become a popular alternative to supervised imitation learning that reduces the distribution shift suffered by the latter. However, AIL requires effective exploration during an online reinforcement learning phase. In this work, we show that the standard, naive approach to exploration can manifest as a suboptimal local maximum if a policy learned with AIL sufficiently matches the expert distribution without fully learning the desired task. This can be particularly catastrophic for manipulation tasks, where the difference between an expert and a non-expert state-action pair is often subtle. We present Learning from Guided Play (LfGP), a framework in which we leverage expert demonstrations of multiple exploratory, auxiliary tasks in addition to a main task. The addition of these auxiliary tasks forces the agent to explore states and actions that standard AIL may learn to ignore. Additionally, this particular formulation allows for the reusability of expert data between main tasks. Our experimental results in a challenging multitask robotic manipulation domain indicate that LfGP significantly outperforms both AIL and behaviour cloning, while also being more expert sample efficient than these baselines. To explain this performance gap, we provide further analysis of a toy problem that highlights the coupling between a local maximum and poor exploration, and also visualize the differences between the learned models from AIL and LfGP.