Distributional reinforcement learning (RL) is a class of state-of-the-art algorithms that estimate the entire distribution of the total return rather than only its expectation. The empirical success of distributional RL hinges on the representation of return distributions and the choice of distribution divergence. In this paper, we propose a new class of \textit{Sinkhorn distributional RL (SinkhornDRL)} algorithms that learn a finite set of statistics, i.e., deterministic samples, from each return distribution, and then use Sinkhorn iterations to evaluate the Sinkhorn distance between the current and target Bellman distributions. The Sinkhorn divergence features an interpolation between the Wasserstein distance and the maximum mean discrepancy (MMD). SinkhornDRL thereby finds a sweet spot by leveraging the geometry of optimal-transport-based distances and the unbiased gradient estimation property of MMD. Finally, the competitive performance of SinkhornDRL is demonstrated on 55 Atari games in comparison with state-of-the-art algorithms.
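The Sinkhorn iterations at the core of this approach can be sketched on one-dimensional empirical return samples. The function below is a minimal illustration of the generic (debiased) Sinkhorn divergence, not the paper's implementation: it alternately rescales a Gibbs kernel so the transport plan matches the two uniform marginals, then reports the entropic transport cost with the self-transport terms subtracted. As the regularization `eps` shrinks the quantity approaches the Wasserstein cost, and for large `eps` it behaves like an MMD-type discrepancy, which is the interpolation the abstract refers to.

```python
import numpy as np

def sinkhorn_divergence(x, y, eps=0.1, n_iter=200):
    """Debiased Sinkhorn divergence between two 1-D empirical sample sets
    with uniform weights. Illustrative sketch, not the paper's code."""
    def ot_eps(a, b):
        C = (a[:, None] - b[None, :]) ** 2        # squared-distance cost matrix
        K = np.exp(-C / eps)                      # Gibbs kernel
        p = np.ones(len(a)) / len(a)              # uniform source marginal
        q = np.ones(len(b)) / len(b)              # uniform target marginal
        u = np.ones(len(a)) / len(a)
        v = np.ones(len(b)) / len(b)
        for _ in range(n_iter):                   # Sinkhorn scaling iterations
            u = p / (K @ v)
            v = q / (K.T @ u)
        P = u[:, None] * K * v[None, :]           # entropic transport plan
        return np.sum(P * C)                      # transport cost under P
    # debiasing removes the entropic self-transport bias of each measure
    return ot_eps(x, y) - 0.5 * (ot_eps(x, x) + ot_eps(y, y))
```

The divergence is zero for identical sample sets and strictly positive otherwise, which is what makes it usable as a temporal-difference loss between current and target return distributions.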
translated by Google Translate
Distributional reinforcement learning (RL) is a class of state-of-the-art algorithms that estimate the whole distribution of the total return rather than only its expectation. Despite the remarkable performance of distributional RL, a theoretical understanding of its advantages over expectation-based RL remains elusive. In this paper, we attribute the superiority of distributional RL to its regularization effect stemming from the value distribution information beyond its expectation. Firstly, by leveraging a variant of the total error model in robust statistics, we decompose the value distribution into its expectation and the remaining distribution part. As such, the extra benefit of distributional RL compared with expectation-based RL is mainly interpreted as the impact of a \textit{risk-sensitive entropy regularization} within the Neural Fitted Z-Iteration framework. Meanwhile, we establish a bridge between the risk-sensitive entropy regularization of distributional RL and the vanilla entropy in maximum entropy RL, focusing specifically on actor-critic algorithms. This reveals that distributional RL induces a corrected reward function and thus promotes risk-sensitive exploration against the intrinsic uncertainty of the environment. Finally, extensive experiments corroborate the role of the regularization effect of distributional RL and the mutual impacts of different entropy regularizations. Our research paves a way towards better interpreting the efficacy of distributional RL algorithms, especially through the lens of regularization.
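The decomposition the analysis builds on can be stated in a couple of lines. This is our own minimal numerical illustration, not the authors' implementation: an empirical return distribution splits into its mean and a zero-mean residual, and the residual carries all of the distributional information beyond the expectation.

```python
import numpy as np

def decompose_return_distribution(samples):
    """Split an empirical return distribution into its expectation and the
    zero-mean residual part. Illustrative sketch of the decomposition."""
    mean = samples.mean()
    residual = samples - mean  # all information beyond the expectation
    return mean, residual
```

Expectation-based RL only uses the first component; the paper interprets the effect of learning the second component as a risk-sensitive entropy regularizer.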
In real scenarios, the state observations that an agent receives may contain measurement errors or adversarial noises, misleading the agent into taking suboptimal actions or even collapsing during training. In this paper, we study the training robustness of distributional reinforcement learning (RL), a class of state-of-the-art methods that estimate the whole distribution, as opposed to only the expectation, of the total return. Firstly, we validate the contraction of both expectation-based and distributional Bellman operators in the State-Noisy Markov Decision Process (SN-MDP), a typical tabular case that incorporates both random and adversarial state observation noises. Beyond the SN-MDP, we then analyze the vulnerability of the least-squares loss in expectation-based RL with either linear or nonlinear function approximation. By contrast, we theoretically characterize the bounded gradient norm of the distributional RL loss based on histogram density estimation. The resulting stable gradients during the optimization of distributional RL account for its better training robustness against state observation noise. Finally, extensive experiments on a suite of games verify the convergence of both expectation-based and distributional RL in the SN-MDP-like setting under different strengths of state observation noises. More importantly, in noisy settings beyond the SN-MDP, distributional RL is less vulnerable to noisy state observations compared with its expectation-based counterpart.
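A common histogram-based instantiation of the distributional Bellman operator discussed here is the categorical (C51-style) projection. The sketch below is a hedged illustration of that standard operator, not code from this paper: it shifts each support atom by r + γz and redistributes the probability mass of the next-state distribution back onto the fixed support.

```python
import numpy as np

def categorical_bellman_target(probs, rewards, dones, gamma, support):
    """Project the distributional Bellman target onto a fixed categorical
    support (C51-style). probs: (batch, n_atoms) next-state distributions."""
    batch, n = probs.shape
    vmin, vmax = support[0], support[-1]
    dz = support[1] - support[0]
    # shifted atoms r + gamma * z, clipped to the support range
    tz = np.clip(rewards[:, None] + gamma * (1 - dones[:, None]) * support[None, :],
                 vmin, vmax)
    b = (tz - vmin) / dz                 # fractional atom index of each shifted atom
    lo = np.floor(b).astype(int)
    hi = np.ceil(b).astype(int)
    out = np.zeros_like(probs)
    for i in range(batch):
        for j in range(n):
            if lo[i, j] == hi[i, j]:     # shifted atom lands exactly on the grid
                out[i, lo[i, j]] += probs[i, j]
            else:                        # split mass between the two neighbours
                out[i, lo[i, j]] += probs[i, j] * (hi[i, j] - b[i, j])
                out[i, hi[i, j]] += probs[i, j] * (b[i, j] - lo[i, j])
    return out
```

Because the output stays a valid probability histogram on a bounded support, the cross-entropy loss against it has the bounded-gradient property the analysis exploits.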
Efficient exploration remains a challenging problem in reinforcement learning, especially for tasks where extrinsic rewards from the environment are sparse or even totally disregarded. Significant advances based on intrinsic motivation show promising results in simple environments but often get stuck in environments with multimodal and stochastic dynamics. In this work, we propose a variational dynamic model based on conditional variational inference to model the multimodality and stochasticity. We consider the environmental state-action transition as a conditional generative process, generating the next-state prediction conditioned on the current state, the action, and a latent variable, which provides a better understanding of the dynamics and induces better performance in exploration. We derive an upper bound on the negative log-likelihood of the environmental transition and use this upper bound as the intrinsic reward for exploration, so that the agent can learn skills by self-supervised exploration without observing extrinsic rewards. We evaluate the proposed method on image-based simulation tasks and a real robotic manipulation task. Our method outperforms several state-of-the-art environment-model-based exploration approaches.
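The negative log-likelihood upper bound used as the intrinsic reward is the negated conditional ELBO; for a diagonal-Gaussian decoder and posterior it can be evaluated in closed form. The sketch below is our own simplified illustration under those assumptions, not the authors' implementation: the reward is the Gaussian reconstruction negative log-likelihood of the observed next state plus the KL term pulling the latent posterior towards a standard-normal prior.

```python
import numpy as np

def elbo_intrinsic_reward(next_state, recon_mean, recon_logvar, z_mean, z_logvar):
    """Negative conditional ELBO of log p(s' | s, a) used as an intrinsic
    reward. Diagonal-Gaussian assumptions; illustrative sketch only."""
    # Gaussian negative log-likelihood of the observed next state
    nll = 0.5 * np.sum(recon_logvar
                       + (next_state - recon_mean) ** 2 / np.exp(recon_logvar)
                       + np.log(2 * np.pi))
    # closed-form KL(q(z|s,a,s') || N(0, I)) for diagonal Gaussians
    kl = 0.5 * np.sum(np.exp(z_logvar) + z_mean ** 2 - 1.0 - z_logvar)
    return nll + kl  # harder-to-predict transitions yield larger reward
```

Transitions the model predicts poorly receive larger intrinsic rewards, which drives the agent towards unfamiliar, stochastic, or multimodal parts of the dynamics.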
Automatic food recognition is the very first step towards passive dietary monitoring. In this paper, we address the problem of food recognition by mining discriminative food regions. Taking inspiration from Adversarial Erasing, a strategy that progressively discovers discriminative object regions for weakly supervised semantic segmentation, we propose a novel network architecture in which a primary network maintains the base accuracy of classifying the input image, an auxiliary network adversarially mines discriminative food regions, and a region network classifies the resulting mined regions. The global (original input image) and local (mined regions) representations are then integrated for the final prediction. The proposed architecture, denoted as PAR-Net, is end-to-end trainable and highlights discriminative regions in an online fashion. In addition, we introduce a new fine-grained food dataset named Sushi-50, which consists of 50 different sushi categories. Extensive experiments have been conducted to evaluate the proposed approach. On the three chosen food datasets (Food-101, Vireo-172, and Sushi-50), our approach performs consistently well and achieves state-of-the-art results (top-1 testing accuracy of 90.4%, 90.2%, and 92.0%, respectively) compared with other existing approaches. Dataset and code are available at https://github.com/jianing-qiu/parnet
Accurate prediction of future person location and movement trajectory from an egocentric wearable camera can benefit a wide range of applications, such as assisting visually impaired people in navigation, and the development of mobility assistance for people with disabilities. In this work, a new egocentric dataset was constructed using a wearable camera, with 8,250 short clips of a targeted person either walking 1) toward, 2) away, or 3) across the camera wearer in indoor environments, or 4) staying still in the scene, and 13,817 person bounding boxes were manually labelled. Apart from the bounding boxes, the dataset also contains the estimated pose of the targeted person as well as the IMU signal of the wearable camera at each time point. An LSTM-based encoder-decoder framework was designed to predict the future location and movement trajectory of the targeted person in this egocentric setting. Extensive experiments have been conducted on the new dataset, and show that the proposed method reliably predicts future person location and trajectory in egocentric videos captured by the wearable camera, outperforming three baselines.
Compressed videos often exhibit visually annoying artifacts, known as Perceivable Encoding Artifacts (PEAs), which dramatically degrade video visual quality. Subjective and objective measures capable of identifying and quantifying various types of PEAs are critical for improving visual quality. In this paper, we investigate the influence of four spatial PEAs (i.e., blurring, blocking, bleeding, and ringing) and two temporal PEAs (i.e., flickering and floating) on video quality. For spatial artifacts, we propose a visual saliency model with a low computational cost and higher consistency with human visual perception. For temporal artifacts, the self-attention-based TimeSformer is improved to detect them. Based on the six types of PEAs, a quality metric called Saliency-Aware Spatio-Temporal Artifacts Measurement (SSTAM) is proposed. Experimental results demonstrate that the proposed method outperforms state-of-the-art metrics. We believe that SSTAM will be beneficial for optimizing video coding techniques.
As one of the most important psychic stress reactions, micro-expressions (MEs) are spontaneous and transient facial expressions that can reveal the genuine emotions of human beings. Thus, recognizing MEs (MER) automatically is becoming increasingly crucial in the field of affective computing, and provides essential technical support in lie detection, psychological analysis and other areas. However, the lack of abundant ME data seriously restricts the development of cutting-edge data-driven MER models. Although several spontaneous ME datasets have recently been released to alleviate this problem, the amount of available data remains small. To solve the problem of ME data hunger, we construct a dynamic spontaneous ME dataset with the largest current ME data scale, called DFME (Dynamic Facial Micro-expressions), which includes 7,526 well-labeled ME videos induced from 671 participants and annotated by more than 20 annotators over three years. Afterwards, we adopt four classical spatiotemporal feature learning models on DFME to perform MER experiments to objectively verify the validity of the DFME dataset. In addition, we explore different solutions to the class imbalance and key-frame sequence sampling problems in dynamic MER on DFME, so as to provide a valuable reference for future research. The comprehensive experimental results show that our DFME dataset can facilitate research on automatic MER, and provides a new benchmark for MER. DFME will be published via https://mea-lab-421.github.io.
Face Anti-spoofing (FAS) is essential to secure face recognition systems from various physical attacks. However, recent research generally focuses on short-distance applications (i.e., phone unlocking) while lacking consideration of long-distance scenes (i.e., surveillance security checks). In order to promote relevant research and fill this gap in the community, we collect a large-scale Surveillance High-Fidelity Mask (SuHiFiMask) dataset captured under 40 surveillance scenes, which has 101 subjects from different age groups with 232 3D attacks (high-fidelity masks), 200 2D attacks (posters, portraits, and screens), and 2 adversarial attacks. In this scene, low image resolution and noise interference are new challenges faced in surveillance FAS. Together with the SuHiFiMask dataset, we propose a Contrastive Quality-Invariance Learning (CQIL) network to alleviate the performance degradation caused by image quality from three aspects: (1) An Image Quality Variable module (IQV) is introduced to recover image information associated with discrimination by combining the super-resolution network. (2) Using generated sample pairs to simulate quality variance distributions to help contrastive learning strategies obtain robust feature representation under quality variation. (3) A Separate Quality Network (SQN) is designed to learn discriminative features independent of image quality. Finally, a large number of experiments verify the quality of the SuHiFiMask dataset and the superiority of the proposed CQIL.
The interview has been regarded as one of the most crucial steps in recruitment. To fully prepare for the interview with the recruiters, job seekers usually practice with mock interviews between each other. However, such a mock interview with peers is generally far from the real interview experience: the mock interviewers are not guaranteed to be professional and are not likely to behave like a real interviewer. Due to the rapid growth of online recruitment in recent years, recruiters tend to have online interviews, which makes it possible to collect real interview data from real interviewers. In this paper, we propose a novel application named EZInterviewer, which aims to learn from the online interview data and provide mock interview services to job seekers. The task is challenging in two ways: (1) the interview data are now available but still low-resource; (2) generating meaningful and relevant interview dialogs requires thorough understanding of both resumes and job descriptions. To address the low-resource challenge, EZInterviewer is trained on a very small set of interview dialogs. The key idea is to reduce the number of parameters that rely on interview dialogs by disentangling the knowledge selector and the dialog generator, so that most parameters can be trained with ungrounded dialogs as well as resume data that are not low-resource. Evaluation results on a real-world job interview dialog dataset indicate that we achieve promising results in generating mock interviews. With the help of EZInterviewer, we hope to make mock interview practice easier for job seekers.