Offline reinforcement learning (RL) enables agents to learn effectively from logged data, which significantly extends the applicability of RL algorithms to real-world scenarios where exploration can be expensive or unsafe. Previous works have shown that extracting primitive skills from the recurring, temporally extended structures in the logged data yields better learning. However, these methods suffer greatly when the primitives have limited ability to represent the original policy space, especially in offline settings. In this paper, we give a quantitative characterization of the performance of offline hierarchical learning and highlight the importance of learning lossless primitives. To this end, we propose a \emph{flow}-based structure as the representation for low-level policies. This allows us to represent the behaviors in the dataset faithfully while retaining the expressiveness to recover the whole policy space. We show that such lossless primitives can drastically improve the performance of hierarchical policies. Experimental results and extensive ablation studies on the standard D4RL benchmark show that our method represents policies well and achieves superior performance on most tasks.
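As a toy illustration of why invertibility makes a primitive representation lossless, the sketch below (our own illustrative code, not the paper's implementation) uses a single affine transform: every action in the dataset can be encoded into latent space and decoded back exactly. Real flow-based policies stack conditional coupling layers; only the exact-invertibility property is shown here.

```python
class AffineFlow:
    """One-layer invertible (flow-like) transform over scalar actions.

    Minimal sketch of the lossless property flow-based primitives
    provide: forward() and inverse() are exact inverses, so no behavior
    in the dataset is lost when re-expressed through the primitive.
    """

    def __init__(self, scale: float, shift: float):
        if scale == 0.0:
            raise ValueError("scale must be nonzero for invertibility")
        self.scale, self.shift = scale, shift

    def forward(self, latent: float) -> float:
        # latent -> action
        return self.scale * latent + self.shift

    def inverse(self, action: float) -> float:
        # action -> latent, exact inverse of forward()
        return (action - self.shift) / self.scale
```

Because `inverse` is exact, round-tripping any action through the latent space recovers it without loss, unlike a bottlenecked autoencoder-style primitive.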
Among the factors hindering the application of reinforcement learning (RL) to real-world problems, two are critical: limited data and the mismatch between the training and testing environments. In this paper, we attempt to address these issues simultaneously via the problem of distributionally robust offline RL. In particular, we learn an RL agent from historical data obtained in a source environment and optimize it to perform well in a perturbed environment. Moreover, we consider linear function approximation so that the algorithm applies to large-scale problems. We prove that our algorithm achieves a suboptimality of $O(1/\sqrt{K})$ depending on the linear function dimension $d$, which appears to be the first result with sample-complexity guarantees in this setting. Diverse experiments are conducted to demonstrate our theoretical findings, showing the superiority of our algorithm over non-robust ones.
In real-world decision-making scenarios (e.g., finance, robotics, autonomous driving), controlling risk is often more important than maximizing expected reward. The most natural choice of risk measure is variance, but it penalizes upside volatility as much as the downside part. Instead, the (downside) semivariance captures the negative deviation of a random variable below its mean and is more suitable for risk-averse settings. This paper aims to optimize the mean-semivariance (MSV) criterion in reinforcement learning w.r.t. steady rewards. Since the semivariance is time-inconsistent and does not satisfy the standard Bellman equation, conventional dynamic programming methods are not directly applicable to the MSV problem. To tackle this challenge, we resort to Perturbation Analysis (PA) theory and establish a performance difference formula for MSV. We reveal that the MSV problem can be solved by iteratively solving a sequence of RL problems with policy-dependent reward functions. Furthermore, we propose two on-policy algorithms based on policy gradient theory and the trust region method. Finally, we conduct diverse experiments, from simple bandit problems to continuous control tasks in MuJoCo, which demonstrate the effectiveness of our proposed methods.
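For concreteness, the downside semivariance referred to above can be computed as follows; this is an illustrative helper under our own naming, not the paper's MSV policy-gradient algorithm:

```python
def mean_semivariance(rewards):
    """Downside semivariance: mean squared deviation *below* the mean.

    Unlike variance, fluctuations above the mean are not penalized,
    which matches risk-averse objectives.
    """
    mu = sum(rewards) / len(rewards)
    # min(r - mu, 0) keeps only negative deviations below the mean
    return sum(min(r - mu, 0.0) ** 2 for r in rewards) / len(rewards)
```

Note the asymmetry: a reward stream with large upside spikes but no dips scores zero semivariance, whereas its variance would be large.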
Offline reinforcement learning (RL) enables effective learning from previously collected data without exploration, which shows great promise in real-world applications when exploration is expensive or even infeasible. The discount factor $\gamma$ plays a vital role in improving sample efficiency and estimation accuracy in online RL, but its role in offline RL has not been well explored. Through theoretical analysis, this paper examines two distinct effects of $\gamma$ in offline RL, namely the regularization effect and the pessimism effect. On the one hand, $\gamma$ acts as a regularizer under existing offline techniques, trading off optimality against sample efficiency. On the other hand, a lower guidance $\gamma$ can also be viewed as a form of pessimism, in which we optimize the policy's performance under the worst of several candidate models. We empirically verify these theoretical observations on tabular MDPs and standard D4RL tasks. The results show that the discount factor plays an essential role in the performance of offline RL algorithms, both in small-data regimes with existing offline methods and in large-data regimes without other forms of conservatism.
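The pessimism effect of a lower $\gamma$ can be seen in how quickly distant rewards are downweighted; the minimal discounted-return helper below is our own illustration, not code from the paper:

```python
def discounted_return(rewards, gamma):
    """Compute sum_t gamma^t * r_t by backward accumulation."""
    g = 0.0
    for r in reversed(rewards):
        g = r + gamma * g
    return g
```

With rewards `[1, 1, 1]`, `gamma = 1.0` yields 3.0 while `gamma = 0.5` yields 1.75: lowering $\gamma$ shrinks the influence of distant, and hence less reliably estimated, rewards.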
Head detection in indoor videos is an important component of many real-world applications. While deep models have achieved remarkable progress in general object detection, they fall short in complex indoor scenes. Indoor surveillance videos often contain cluttered background objects, in which heads appear at small scales and in diverse poses. In this paper, we propose the Motion-aware Pseudo-Siamese Network (MPSN), an end-to-end approach that exploits head motion information to guide deep models in extracting effective head features in indoor scenes. By taking the pixel-wise difference of adjacent frames as an auxiliary input, MPSN effectively enhances human head motion information and suppresses irrelevant objects in the background. Compared with existing methods, it achieves superior performance on two indoor video datasets. Our experiments show that MPSN successfully suppresses static background objects and highlights moving instances, especially human heads in indoor videos. We also compare different methods of capturing head motion, which demonstrates the simplicity and flexibility of MPSN. Finally, to verify the robustness of MPSN, we conduct adversarial experiments with a mathematical solution for small perturbations for robust model selection. The code is available at https://github.com/pl-share/mpsn.
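The auxiliary motion input is, at its core, a pixel-wise difference of adjacent frames. A minimal sketch on grayscale frames stored as nested lists follows; the paper's actual preprocessing may differ:

```python
def frame_difference(prev_frame, curr_frame):
    """Pixel-wise absolute difference between adjacent grayscale frames.

    Static background pixels cancel to ~0 while moving regions (e.g.
    heads) produce large responses -- the kind of signal MPSN uses as an
    auxiliary input to suppress irrelevant background objects.
    """
    return [
        [abs(c - p) for p, c in zip(prev_row, curr_row)]
        for prev_row, curr_row in zip(prev_frame, curr_frame)
    ]
```

In practice this would be computed on full-resolution frames (e.g. with `cv2.absdiff`); the list-based version above just makes the operation explicit.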
Most reinforcement learning algorithms optimize a discounted criterion, which is beneficial for accelerating convergence and reducing the variance of estimates. While the discounted criterion suits certain tasks such as finance-related problems, many engineering problems treat all future rewards equally and prefer the long-run average criterion. In this paper, we study reinforcement learning problems under the long-run average criterion. First, we formulate a unified trust region theory covering both the discounted and average criteria, and derive a novel performance bound within the trust region via Perturbation Analysis (PA) theory. Second, we propose a practical algorithm named Average Policy Optimization (APO), which improves value estimation with a novel technique called Average Value Constraint. Finally, experiments are conducted in the continuous control environment MuJoCo. In most tasks, APO outperforms the discounted PPO, which demonstrates the effectiveness of our approach. Our work provides a unified trust region approach including both the discounted and average criteria, which may complement the reinforcement learning framework beyond the discounted objective.
Recently, deep multi-agent reinforcement learning (MARL) has shown promise in solving complex cooperative tasks. Its success is partly due to parameter sharing among agents. However, such sharing may lead agents to behave homogeneously and limit their coordination capacity. In this paper, we aim to introduce diversity into both the optimization and the representation of shared multi-agent reinforcement learning. Specifically, we propose an information-theoretic regularization that maximizes the mutual information between an agent's identity and its trajectory, encouraging extensive exploration and diverse individualized behaviors. In representation, we incorporate agent-specific modules into the shared neural network architecture, regularized by an L1 norm to promote learning sharing among agents while keeping necessary diversity. Empirical results show that our method achieves state-of-the-art performance on Google Research Football and the super-hard StarCraft II micromanagement tasks.
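The L1 regularization on agent-specific modules can be sketched as an additive penalty on those parameters only; the function name and the `lam` coefficient below are our own hypothetical choices, not the paper's API:

```python
def l1_regularized_loss(task_loss, agent_specific_params, lam=1e-2):
    """Add an L1 penalty on the agent-specific parameters only.

    Shared parameters are untouched; the penalty pushes most
    agent-specific weights toward zero (behavior stays shared) while
    letting a few remain large (necessary diversity between agents).
    """
    return task_loss + lam * sum(abs(p) for p in agent_specific_params)
```

The L1 norm (rather than L2) is what induces sparsity here: it drives many agent-specific weights to exactly zero instead of merely shrinking them.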
Compressed videos often exhibit visually annoying artifacts, known as Perceivable Encoding Artifacts (PEAs), which dramatically degrade video visual quality. Subjective and objective measures capable of identifying and quantifying various types of PEAs are critical to improving visual quality. In this paper, we investigate the influence of four spatial PEAs (i.e., blurring, blocking, bleeding, and ringing) and two temporal PEAs (i.e., flickering and floating) on video quality. For spatial artifacts, we propose a visual saliency model with low computational cost and high consistency with human visual perception. For temporal artifacts, we improve the self-attention-based TimeSFormer to detect them. Based on the six types of PEAs, a quality metric called Saliency-Aware Spatio-Temporal Artifacts Measurement (SSTAM) is proposed. Experimental results demonstrate that the proposed method outperforms state-of-the-art metrics. We believe that SSTAM will be beneficial for optimizing video coding techniques.
As one of the most important psychic stress reactions, micro-expressions (MEs) are spontaneous and transient facial expressions that can reveal the genuine emotions of human beings. Thus, recognizing MEs automatically (MER) is becoming increasingly crucial in the field of affective computing, and provides essential technical support in lie detection, psychological analysis, and other areas. However, the lack of abundant ME data seriously restricts the development of cutting-edge data-driven MER models. Although several spontaneous ME datasets have recently been released to alleviate this problem, the available data remain scarce. To address this ME data hunger, we construct a dynamic spontaneous ME dataset with the largest current ME data scale, called DFME (Dynamic Facial Micro-expressions), which includes 7,526 well-labeled ME videos induced from 671 participants and annotated by more than 20 annotators over three years. Afterwards, we adopt four classical spatiotemporal feature learning models on DFME to perform MER experiments and objectively verify the validity of the DFME dataset. In addition, we explore different solutions to the class imbalance and key-frame sequence sampling problems in dynamic MER on DFME, so as to provide a valuable reference for future research. The comprehensive experimental results show that our DFME dataset can facilitate the research of automatic MER, and provides a new benchmark for MER. DFME will be published via https://mea-lab-421.github.io.
Face Anti-spoofing (FAS) is essential to secure face recognition systems against various physical attacks. However, recent research generally focuses on short-distance applications (i.e., phone unlocking) while giving little consideration to long-distance scenes (i.e., surveillance security checks). To promote relevant research and fill this gap in the community, we collect a large-scale Surveillance High-Fidelity Mask (SuHiFiMask) dataset captured under 40 surveillance scenes, with 101 subjects from different age groups, 232 3D attacks (high-fidelity masks), 200 2D attacks (posters, portraits, and screens), and 2 adversarial attacks. In this setting, low image resolution and noise interference are new challenges for surveillance FAS. Together with the SuHiFiMask dataset, we propose a Contrastive Quality-Invariance Learning (CQIL) network to alleviate the performance degradation caused by image quality, from three aspects: (1) An Image Quality Variable module (IQV) is introduced to recover discriminative image information by incorporating a super-resolution network. (2) Generated sample pairs simulate quality-variance distributions, helping the contrastive learning strategy obtain feature representations that are robust to quality variation. (3) A Separate Quality Network (SQN) is designed to learn discriminative features independent of image quality. Finally, extensive experiments verify the quality of the SuHiFiMask dataset and the superiority of the proposed CQIL.