Designing better deep networks and designing better reinforcement learning (RL) algorithms are both important for deep RL. This work focuses on the former. Previous methods build the network with several modules such as CNN, LSTM, and Attention, and recent methods combine the Transformer with these modules for better performance. However, training a network composed of mixed modules requires tedious optimization tricks, making these methods inconvenient to use in practice. In this paper, we propose to design \emph{pure Transformer-based networks} for deep RL, aiming to provide off-the-shelf backbones for both the online and offline settings. Specifically, the Transformer in Transformer (TIT) backbone is proposed, which cascades two Transformers in a very natural way: the inner one processes a single observation, while the outer one is responsible for processing the observation history; combining both is expected to extract spatial-temporal representations for good decision-making. Experiments show that TIT consistently achieves satisfactory performance across different settings.
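To make the cascaded design above concrete, the following minimal Python (PyTorch) sketch shows one way an inner Transformer could encode the tokens of a single observation while an outer Transformer attends over the history of observation embeddings; the module sizes, the mean-pooling step, and the tokenized input format are illustrative assumptions rather than the authors' exact TIT architecture.

import torch
import torch.nn as nn

class TIT(nn.Module):
    # Inner Transformer: spatial encoding of one observation's tokens.
    # Outer Transformer: temporal encoding of the observation history.
    def __init__(self, token_dim=64, n_actions=6):
        super().__init__()
        inner_layer = nn.TransformerEncoderLayer(d_model=token_dim, nhead=4, batch_first=True)
        outer_layer = nn.TransformerEncoderLayer(d_model=token_dim, nhead=4, batch_first=True)
        self.inner = nn.TransformerEncoder(inner_layer, num_layers=2)
        self.outer = nn.TransformerEncoder(outer_layer, num_layers=2)
        self.head = nn.Linear(token_dim, n_actions)

    def forward(self, obs_history):
        # obs_history: (batch, history_len, obs_tokens, token_dim), already tokenized
        b, t, n, d = obs_history.shape
        per_obs = self.inner(obs_history.reshape(b * t, n, d)).mean(dim=1)  # one embedding per observation
        over_time = self.outer(per_obs.reshape(b, t, d))                    # attend over the history
        return self.head(over_time[:, -1])                                  # decide from the latest step

policy = TIT()
logits = policy(torch.randn(2, 8, 16, 64))  # toy batch of 8-step histories with 16 tokens each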
In reinforcement learning applications such as robotics, agents often need to handle various input/output features when their developers, or physical constraints, specify different state/action spaces. This typically forces unnecessary re-training from scratch and causes considerable sample inefficiency, especially when agents follow similar solution steps to achieve their tasks. In this paper, we aim to transfer such similar high-level goal-transition knowledge to alleviate this challenge. Specifically, we propose PILoT, i.e., Planning Immediate Landmarks of Targets. PILoT utilizes universal decoupled policy optimization to learn a goal-conditioned state planner, and then distills it into a goal-planner that plans immediate landmarks in a model-free style and can be shared among different agents. In our experiments, we show the power of PILoT on various transfer challenges, including few-shot transfer across action spaces and dynamics, from low-dimensional vector states to image inputs, and from a simple robot to complicated morphologies; we also illustrate a zero-shot transfer solution from a simple 2D navigation task to the harder Ant-Maze task.
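The planner-distillation step can be pictured with the following minimal Python (PyTorch) sketch: a shared goal-conditioned module predicts the next landmark state and is trained by regression against intermediate states produced by a previously learned goal-conditioned state planner; the network shape, the MSE distillation loss, and the offline tuples are illustrative assumptions, not PILoT's exact training pipeline.

import torch
import torch.nn as nn
import torch.nn.functional as F

class LandmarkPlanner(nn.Module):
    # Maps (current state, final goal) to the next immediate landmark state,
    # so it can be shared by agents with different action spaces or dynamics.
    def __init__(self, state_dim, goal_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim + goal_dim, 128), nn.ReLU(),
                                 nn.Linear(128, state_dim))

    def forward(self, state, goal):
        return self.net(torch.cat([state, goal], dim=-1))

planner = LandmarkPlanner(state_dim=4, goal_dim=4)
opt = torch.optim.Adam(planner.parameters(), lr=3e-4)
# (state, goal, next_landmark) tuples would come from the learned state planner; random here.
state, goal, next_landmark = torch.randn(32, 4), torch.randn(32, 4), torch.randn(32, 4)
loss = F.mse_loss(planner(state, goal), next_landmark)  # distillation by regression
opt.zero_grad(); loss.backward(); opt.step()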
Pessimism is of great importance in offline reinforcement learning (RL). One broad category of offline RL algorithms fulfills pessimism through explicit or implicit behavior regularization. However, most of them consider only policy divergence as behavior regularization, ignoring how the offline state distribution differs from that of the learning policy, which may lead to under-pessimism for some states and over-pessimism for others. To address this problem, we propose a principled algorithmic framework for offline RL, called \emph{State-Aware Proximal Pessimism} (SA-PP). The key idea of SA-PP is to leverage the discounted stationary state distribution ratio between the learning policy and the offline dataset to modulate the degree of behavior regularization in a state-wise manner, so that pessimism is applied more appropriately. We first provide theoretical justification for the superiority of SA-PP over previous algorithms, showing that SA-PP yields a lower suboptimality upper bound in a broad range of settings. Furthermore, we propose a new algorithm named \emph{State-Aware Conservative Q-Learning} (SA-CQL) by building SA-PP upon the representative CQL algorithm, with DualDICE used to estimate the discounted stationary state distribution ratios. Extensive experiments on standard offline RL benchmarks show that SA-CQL outperforms popular baselines on a large portion of the benchmarks and attains the highest average return.
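As a rough illustration of the state-wise modulation, the following minimal Python (PyTorch) sketch re-weights a CQL-style conservative penalty by an estimated discounted stationary state distribution ratio w(s) ≈ d_pi(s)/d_D(s); the ratio is assumed to be supplied by an external estimator such as DualDICE, and the penalty form and shapes are illustrative assumptions rather than the precise SA-CQL objective.

import torch

def state_aware_conservative_penalty(q_values, dataset_actions, ratio):
    # q_values: (batch, n_actions); dataset_actions: (batch,) long; ratio: (batch,) estimated d_pi/d_D
    per_state = (torch.logsumexp(q_values, dim=1)                                # push down Q on all actions
                 - q_values.gather(1, dataset_actions.unsqueeze(1)).squeeze(1))  # push up Q on dataset actions
    return (ratio * per_state).mean()  # states the policy over-visits relative to the data get more pessimism

penalty = state_aware_conservative_penalty(torch.randn(4, 3), torch.tensor([0, 2, 1, 1]), torch.rand(4))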
High-quality traffic flow generation is the core module in building simulators for autonomous driving. However, most available simulators cannot replicate traffic patterns that accurately reflect the various features of real-world data while also simulating human-like reactive responses to the tested autopilot driving strategies. As a step toward addressing this problem, we propose Realistic Interactive TrAffic flow (RITA) as an integrated component of existing driving simulators, providing high-quality traffic flow for the evaluation and optimization of the tested driving strategies. RITA is developed with fidelity, diversity, and controllability in mind, and consists of two core modules called RITABackend and RITAKit. RITABackend is built to support vehicle-wise control and to provide traffic generation models learned from real-world datasets, while RITAKit offers easy-to-use interfaces for controllable traffic generation via RITABackend. We demonstrate RITA's capacity to create diversified and high-fidelity traffic simulations in several highly interactive highway scenarios. The experimental findings show that the generated RITA traffic flows meet all three design goals, hence enhancing the completeness of driving strategy evaluation. Moreover, we showcase the possibility of further improving baseline strategies through online fine-tuning with RITA traffic flows.
Modern meta-reinforcement learning (meta-RL) methods are mainly developed based on model-agnostic meta-learning, which performs policy gradient steps across tasks to maximize policy performance. However, the gradient conflict problem is still poorly understood in meta-RL, and it can lead to performance degradation when distinct tasks are encountered. To address this challenge, this paper proposes a novel personalized meta-RL (pMeta-RL) algorithm, which aggregates task-specific personalized policies to update a meta-policy used for all tasks, while maintaining personalized policies that maximize the average return of each task under the constraint of the meta-policy. We also provide a theoretical analysis in the tabular setting, which proves the convergence of our pMeta-RL algorithm. Moreover, we extend the proposed pMeta-RL algorithm to a deep-network version based on Soft Actor-Critic, making it suitable for continuous control tasks. Experimental results show that the proposed algorithm outperforms previous meta-RL algorithms on the Gym and MuJoCo suites.
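The aggregation scheme can be summarized with the following minimal Python (PyTorch) sketch, in which each task's personalized policy parameters ascend the task return while being pulled toward the meta-policy, and the meta-policy is refreshed by averaging the personalized parameters; the proximal coefficient and the plain averaging rule are illustrative assumptions, not the exact pMeta-RL update.

import torch

def personalized_update(theta_i, meta_theta, task_return_grad, lr=1e-2, lam=1.0):
    # Ascend the task-specific return while staying close to the shared meta-policy.
    return theta_i + lr * (task_return_grad - lam * (theta_i - meta_theta))

def meta_update(personalized_thetas):
    # Aggregate the personalized policies into the meta-policy used for all tasks.
    return torch.stack(personalized_thetas).mean(dim=0)

meta = torch.zeros(8)
thetas = [personalized_update(torch.zeros(8), meta, torch.randn(8)) for _ in range(3)]
meta = meta_update(thetas)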
At the core of intelligent decision-making systems, how to represent and optimize policies is a fundamental question. The root challenge is the large scale and high complexity of the policy space, which exacerbates the difficulty of policy learning, especially in real-world scenarios. Toward a desirable surrogate policy space, recent work on policy representation in low-dimensional latent spaces has shown its potential for improving both the evaluation and the optimization of policies. The key issue in these studies is by what criteria we should abstract the policy space to obtain the desired compression and generalization. However, the theory of policy abstraction and methods for policy representation learning have been little studied in the literature. In this work, we make an initial effort to fill this gap. First, we propose a unified policy abstraction theory containing three types of policy abstraction associated with policy features at different levels. We then generalize them into three policy metrics that quantify the distance (i.e., similarity) between policies, for more convenient use in learning policy representations. Further, we propose a policy representation learning approach based on deep metric learning. For the empirical study, we investigate the efficacy of the proposed policy metrics and representations in characterizing policy differences and conveying policy generalization, respectively. Our experiments cover both policy optimization and evaluation problems, including Trust Region Policy Optimization (TRPO), Diversity-Guided Evolution Strategies (DGES), and Off-Policy Evaluation (OPE). Naturally enough, the experimental results show that there is no universally optimal abstraction for all downstream learning problems, while the influence-response abstraction can usually be a preferred choice.
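The representation-learning step can be sketched in a few lines of Python (PyTorch): a policy encoder is trained so that distances between policy embeddings regress onto a precomputed policy metric; the encoder architecture, the input policy featurization, and the squared-error loss are illustrative assumptions rather than the paper's exact deep metric learning objective.

import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(256, 128), nn.ReLU(), nn.Linear(128, 32))  # policy features -> embedding
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)

def metric_learning_step(policy_feat_a, policy_feat_b, target_distance):
    # target_distance: a chosen policy metric d(pi_a, pi_b), precomputed on sampled states
    za, zb = encoder(policy_feat_a), encoder(policy_feat_b)
    loss = (torch.norm(za - zb, dim=-1) - target_distance).pow(2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

metric_learning_step(torch.randn(16, 256), torch.randn(16, 256), torch.rand(16))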
Deriving a good variable selection strategy in branch-and-bound is essential for the efficiency of modern mixed-integer programming (MIP) solvers. With the branching data collected during previous MIP solution processes, learning-to-branch methods have recently surpassed heuristic methods. Since branching is naturally a sequential decision-making task, one should learn to optimize the utility of the whole MIP solving process rather than being myopic at each step. In this work, we formulate learning to branch as an offline reinforcement learning (RL) problem and propose a long-sighted hybrid search scheme to construct an offline MIP dataset that accounts for the long-term utility of branching decisions. During policy training, we deploy a ranking-based reward assignment scheme to distinguish promising samples from long-term or short-term views, and train a branching model named Branch Ranking via offline policy learning. Experiments on synthetic MIP benchmarks and real-world tasks show that, compared with widely used heuristics and state-of-the-art learning-based branching models, Branch Ranking is more efficient and more robust, and generalizes better to large-scale MIP instances.
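One way to picture the ranking-based reward assignment is the following minimal Python sketch: branching trajectories collected by the hybrid search are ranked by a long-term solving metric, and every sample inherits a reward reflecting whether its trajectory is among the promising ones; the binary reward, the metric, and the top fraction are illustrative assumptions, not the exact scheme used to train Branch Ranking.

def rank_based_rewards(trajectories, solving_metric, top_fraction=0.2):
    # trajectories: list of lists of (state, branching_decision) pairs from the hybrid search
    # solving_metric: smaller is better, e.g., final tree size or primal-dual gap (assumed callable)
    order = sorted(range(len(trajectories)), key=lambda i: solving_metric(trajectories[i]))
    promising = set(order[:max(1, int(top_fraction * len(trajectories)))])
    dataset = []
    for i, traj in enumerate(trajectories):
        reward = 1.0 if i in promising else 0.0  # promising trajectories earn positive credit
        dataset.extend((state, decision, reward) for state, decision in traj)
    return dataset  # consumed by an offline policy-learning objective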
To better exploit search logs and model users' behavior patterns, numerous click models have been proposed to extract users' implicit interaction feedback. Most traditional click models are based on the probabilistic graphical model (PGM) framework, which requires manually designed dependencies and may oversimplify user behaviors. Recently, methods based on neural networks have been proposed to improve the prediction accuracy of user behaviors by enhancing expressive power and allowing flexible dependencies. However, they still suffer from data sparsity and cold-start problems. In this paper, we propose a novel graph-enhanced click model (GraphCM) for web search. First, we regard each query or document as a vertex and propose novel homogeneous graph construction methods for queries and documents respectively, to fully exploit both intra-session and inter-session information for the sparsity and cold-start problems. Second, following the examination hypothesis, we model the attractiveness estimator and the examination predictor separately to output attractiveness scores and examination probabilities, where graph neural networks and neighbor interaction techniques are applied to extract the auxiliary information encoded in the pre-constructed homogeneous graphs. Finally, we apply a combination function to integrate the examination probabilities and attractiveness scores into click predictions. Extensive experiments on three real-world session datasets show that GraphCM not only outperforms state-of-the-art models, but also achieves superior performance in addressing the data sparsity and cold-start problems.
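Under the examination hypothesis mentioned above, the combination step can be sketched in Python (PyTorch) as the product of an attractiveness score and an examination probability; the two small MLPs below stand in for the GNN-enriched attractiveness estimator and examination predictor, and all layer sizes and feature inputs are illustrative assumptions.

import torch
import torch.nn as nn

class ClickCombiner(nn.Module):
    def __init__(self, feat_dim=64):
        super().__init__()
        self.attractiveness = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(), nn.Linear(64, 1))
        self.examination = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, doc_feat, context_feat):
        alpha = torch.sigmoid(self.attractiveness(doc_feat))  # how attractive the result looks
        beta = torch.sigmoid(self.examination(context_feat))  # whether the user examines the position
        return alpha * beta                                    # predicted click probability

model = ClickCombiner()
p_click = model(torch.randn(5, 64), torch.randn(5, 64))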
We study model-free multi-agent reinforcement learning (MARL) in environments where all actions have pre-set execution durations. During execution, the environment changes are influenced by, but not synchronized with, action execution. Such settings are ubiquitous in many real-world scenarios. However, most MARL methods assume that actions are executed immediately after inference, which is often unrealistic and can lead to catastrophic failures of multi-agent coordination. To fill this gap, we develop an algorithmic framework for MARL in such settings. We then propose LeGEM, a novel episodic memory for model-free MARL algorithms. LeGEM builds agents' episodic memories by leveraging their individual experiences. It boosts multi-agent learning by addressing the challenging temporal credit assignment problem raised by such actions through our novel reward redistribution scheme, thereby alleviating the issue of non-Markovian rewards. We evaluate LeGEM on various multi-agent scenarios, including the Stag-Hunt Game, the Quarry Game, the Afforestation Game, and StarCraft II micromanagement tasks. Empirical results show that LeGEM significantly boosts multi-agent coordination and achieves leading performance as well as improved sample efficiency.
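The reward-redistribution idea can be illustrated with the following minimal Python sketch, where a delayed episodic return is reassigned to the time steps at which the pivotal long-duration actions were initiated (as identified from the agents' episodic memories); treating those key steps as a given list and splitting the return evenly are illustrative assumptions, not LeGEM's exact redistribution rule.

def redistribute_reward(rewards, key_steps):
    # rewards: per-step rewards of one episode
    # key_steps: indices retrieved from the episodic memory (assumed given here)
    episodic_return = sum(rewards)
    new_rewards = [0.0] * len(rewards)
    if key_steps:
        share = episodic_return / len(key_steps)
        for t in key_steps:
            new_rewards[t] = share  # credit flows back to where the action was initiated
    return new_rewards  # used in place of the original rewards for temporal-difference learning

redistribute_reward([0.0, 0.0, 0.0, 1.0], key_steps=[0, 2])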
In recent years, interest has arisen in using machine learning to improve the efficiency of automatic medical consultation and enhance the patient experience. In this article, we propose two frameworks to support automatic medical consultation, namely doctor-patient dialogue understanding and task-oriented interaction. We create a new large-scale medical dialogue dataset with multi-level fine-grained annotations and establish five independent tasks, including named entity recognition, dialogue act classification, symptom label inference, medical report generation, and diagnosis-oriented dialogue policy. We report a set of benchmark results for each task, which shows the usability of the dataset and sets a baseline for future studies. Both code and data are available at https://github.com/lemuria-wchen/imcs21.