We propose a task that we name portrait interpretation and construct a dataset named Portrait250k for it. Current research on portraits, such as human attribute recognition and person re-identification, has achieved many successes, but generally it: 1) may overlook the interrelationships among the various tasks and the potential benefits they could bring; 2) designs deep models specifically for each task, which is inefficient; 3) may fail to meet the need for a unified model and comprehensive perception in practical scenarios. In this paper, the proposed portrait interpretation recognizes the perception of humans from a new, systematic perspective. We divide the perception of portraits into three aspects, namely appearance, posture, and emotion, and design corresponding sub-tasks. Based on a multi-task learning framework, portrait interpretation requires a comprehensive description of both the static attributes and the dynamic states of a portrait. To stimulate research on this new task, we construct a new dataset containing 250,000 images labeled with identity, gender, age, physique, height, expression, and the posture of the whole body and arms. Our dataset is collected from 51 movies and therefore covers a wide range of diversity. Furthermore, we focus on representation learning for portrait interpretation and propose a baseline that reflects our systematic perspective. We also propose appropriate metrics for this task. Our experimental results demonstrate that combining the tasks related to portrait interpretation can yield benefits. The code and dataset will be made public.
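For illustration, a minimal sketch of how the three aspects above could map onto a shared-backbone multi-task head; the head names, class counts, and the backbone are placeholders rather than the paper's actual design:

```python
import torch
import torch.nn as nn

class PortraitMultiTaskHead(nn.Module):
    """Illustrative multi-task setup for the three aspects named above
    (appearance, posture, emotion); the sub-task heads, class counts and the
    shared backbone are placeholders, not the paper's actual architecture."""
    def __init__(self, feat_dim=2048, n_appearance=40, n_pose=17, n_emotion=7):
        super().__init__()
        self.appearance = nn.Linear(feat_dim, n_appearance)   # e.g. attribute logits
        self.pose = nn.Linear(feat_dim, n_pose)                # e.g. posture categories
        self.emotion = nn.Linear(feat_dim, n_emotion)          # e.g. expression classes

    def forward(self, shared_feat):
        return {
            "appearance": self.appearance(shared_feat),
            "pose": self.pose(shared_feat),
            "emotion": self.emotion(shared_feat),
        }

# Usage sketch: logits = PortraitMultiTaskHead()(backbone(images)); sum per-task losses.
```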
In recent years, with the growing demand for public safety and the rapid development of intelligent surveillance networks, person re-identification (Re-ID) has become one of the hottest research topics in the field of computer vision. The main research goal of person Re-ID is to retrieve persons with the same identity from different cameras. However, traditional person Re-ID methods require manual labeling of person targets, which consumes substantial labor cost. With the widespread application of deep neural networks, many deep learning-based person Re-ID methods have emerged. This paper therefore aims to help researchers understand the latest research results and future trends in this field. First, we summarize several recently published person Re-ID surveys and complement them with the latest research methods, systematically classifying deep learning-based person Re-ID approaches. Second, we propose a multi-dimensional taxonomy that, according to metric learning and representation learning, classifies deep learning-based person Re-ID methods into four categories: deep metric learning, local feature learning, generative adversarial learning, and sequence feature learning methods. Furthermore, we subdivide the above four categories according to their methodologies and motivations, and discuss the advantages and limitations of several subcategories. Finally, we discuss some challenges and possible research directions for person Re-ID.
Recent years witnessed the breakthrough of face recognition (FR) with deep convolutional neural networks. Dozens of papers in the field of FR are published every year. Some of them have been applied in industry and play an important role in daily life, such as device unlock, mobile payment, and so on. This paper provides an introduction to face recognition, including its history, pipeline, algorithms based on conventional manually designed features or deep learning, mainstream training and evaluation datasets, and related applications. We have analyzed and compared as many state-of-the-art works as possible, and also carefully designed a set of experiments to study the effect of backbone size and data distribution. This survey serves as the material for the tutorial The Practical Face Recognition Technology in the Industrial World at FG2023.
Person re-identification (Re-ID) aims at retrieving a person of interest across multiple non-overlapping cameras. With the advancement of deep neural networks and the increasing demand for intelligent video surveillance, it has gained significantly increased interest in the computer vision community. By dissecting the involved components in developing a person Re-ID system, we categorize it into the closed-world and open-world settings. The widely studied closed-world setting is usually applied under various research-oriented assumptions, and has achieved inspiring success using deep learning techniques on a number of datasets. We first conduct a comprehensive overview with in-depth analysis for closed-world person Re-ID from three different perspectives, including deep feature representation learning, deep metric learning and ranking optimization. With the performance saturation under the closed-world setting, the research focus for person Re-ID has recently shifted to the open-world setting, facing more challenging issues. This setting is closer to practical applications under specific scenarios. We summarize the open-world Re-ID in terms of five different aspects. By analyzing the advantages of existing methods, we design a powerful AGW baseline, achieving state-of-the-art or at least comparable performance on twelve datasets for four different Re-ID tasks. Meanwhile, we introduce a new evaluation metric (mINP) for person Re-ID, indicating the cost of finding all the correct matches, which provides an additional criterion to evaluate the Re-ID system for real applications. Finally, some important yet under-investigated open issues are discussed.
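As a hedged illustration of the mINP idea, assuming the commonly cited formulation where each query's inverse negative penalty is the number of correct matches divided by the rank of the hardest (last-retrieved) correct match:

```python
import numpy as np

def mean_inp(ranked_gallery_ids, query_ids):
    """Sketch of the mINP metric: for each query, INP = |G| / R_hard, where |G|
    is the number of correct gallery matches and R_hard is the 1-based rank of
    the hardest (last-found) correct match; mINP averages INP over queries."""
    inps = []
    for ranked_ids, q_id in zip(ranked_gallery_ids, query_ids):
        matches = np.asarray(ranked_ids) == q_id           # relevance of each ranked item
        num_correct = matches.sum()
        if num_correct == 0:                                # query has no true match
            continue
        hardest_rank = np.where(matches)[0][-1] + 1         # rank of the last correct match
        inps.append(num_correct / hardest_rank)
    return float(np.mean(inps))

# Toy usage: query id 3, gallery ranking with correct matches at ranks 1 and 4 -> 2/4 = 0.5
print(mean_inp([[3, 7, 9, 3, 2]], [3]))
```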
Identifying soft-biometric pedestrian attributes is essential in video surveillance and fashion retrieval. Recent works have shown promising results on individual datasets. Nevertheless, how well these methods generalize across different attribute distributions, viewpoints, varying illumination, and low resolutions is rarely understood, owing to the strong biases and varying attributes in current datasets. To close this gap and support systematic investigation, we present UPAR, the Unified Person Attribute Recognition dataset. It is based on four well-known person attribute recognition datasets: PA100K, PETA, RAPv2, and Market1501. We unify these datasets by providing 3.3 million additional annotations to harmonize 40 important binary attributes across them. We thus enable, for the first time, research on generalizable pedestrian attribute recognition as well as attribute-based person retrieval. Due to the vast variance of image distributions, pedestrian poses, scales, and occlusions, existing approaches are greatly challenged in terms of both accuracy and efficiency. Furthermore, we develop strong baselines for PAR and attribute-based person retrieval based on a thorough analysis of regularization methods. Our models achieve state-of-the-art performance in cross-domain and specialization settings on PA100K, PETA, RAPv2, Market1501-Attributes, and UPAR. We believe UPAR and our strong baselines will contribute to the artificial intelligence community and promote research on large-scale, generalizable attribute recognition systems.
As one of the most important psychic stress reactions, micro-expressions (MEs) are spontaneous and transient facial expressions that can reveal the genuine emotions of human beings. Thus, recognizing MEs (MER) automatically is becoming increasingly crucial in the field of affective computing, and provides essential technical support in lie detection, psychological analysis, and other areas. However, the lack of abundant ME data seriously restricts the development of cutting-edge data-driven MER models. Despite the recent efforts of several spontaneous ME datasets to alleviate this problem, the amount of available data is still tiny. To solve the problem of ME data hunger, we construct a dynamic spontaneous ME dataset with the largest current ME data scale, called DFME (Dynamic Facial Micro-expressions), which includes 7,526 well-labeled ME videos induced from 671 participants and annotated by more than 20 annotators over three years. Afterwards, we adopt four classical spatiotemporal feature learning models on DFME to perform MER experiments and objectively verify the validity of the DFME dataset. In addition, we explore different solutions to the class imbalance and key-frame sequence sampling problems in dynamic MER on DFME, so as to provide a valuable reference for future research. The comprehensive experimental results show that our DFME dataset can facilitate the research of automatic MER and provide a new benchmark for MER. DFME will be published via https://mea-lab-421.github.io.
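One common remedy for the class-imbalance issue mentioned above (not necessarily the solution adopted in the paper) is inverse-frequency sampling; a minimal PyTorch sketch:

```python
import torch
from torch.utils.data import WeightedRandomSampler

def balanced_sampler(labels):
    """Sample each clip with probability inversely proportional to its class
    frequency, so minority micro-expression classes are drawn as often as
    majority ones. This is a generic technique, not the DFME paper's exact recipe."""
    labels = torch.as_tensor(labels)
    class_counts = torch.bincount(labels).float()
    weights = 1.0 / class_counts[labels]                 # per-sample weight
    return WeightedRandomSampler(weights, num_samples=len(labels), replacement=True)

# Usage: sampler = balanced_sampler(dataset_labels)
#        loader = DataLoader(dataset, batch_size=32, sampler=sampler)
```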
Person re-identification (Re-ID) plays an important role in applications such as public security and video surveillance. Recently, learning from synthetic data, benefiting from the popularity of synthetic data engines, has attracted great attention. However, existing synthetic datasets are limited in quantity, diversity, and variability, and cannot be used efficiently for the Re-ID problem. To address this challenge, we manually construct a large-scale person dataset named FineGPR with fine-grained attribute annotations. Moreover, aiming to fully exploit the potential of FineGPR and promote efficient training from millions of synthetic images, we propose an attribute analysis pipeline named AOST, which dynamically learns the attribute distribution in the real domain, then eliminates the gap between synthetic and real-world data, and can therefore be freely deployed to new scenarios. Experiments conducted on benchmarks demonstrate that FineGPR with AOST outperforms (or is on par with) existing real and synthetic datasets, which suggests its feasibility for the Re-ID task and proves the proverbial less-is-more principle. Our synthetic FineGPR dataset is publicly available at https://github.com/jeremyxsc/finegpr.
Person re-identification is a challenging task because of the high intra-class variance induced by the unrestricted nuisance factors of variations such as pose, illumination, viewpoint, background, and sensor noise. Recent approaches postulate that powerful architectures have the capacity to learn feature representations invariant to nuisance factors, by training them with losses that minimize intra-class variance and maximize inter-class separation, without modeling nuisance factors explicitly. The dominant approaches use either a discriminative loss with margin, like the softmax loss with the additive angular margin, or a metric learning loss, like the triplet loss with batch hard mining of triplets. Since the softmax imposes feature normalization, it limits the gradient flow supervising the feature embedding. We address this by joining the losses and leveraging the triplet loss as a proxy for the missing gradients. We further improve invariance to nuisance factors by adding the discriminative task of predicting attributes. Our extensive evaluation highlights that when only a holistic representation is learned, we consistently outperform the state-of-the-art on the three most challenging datasets. Such representations are easier to deploy in practical systems. Finally, we found that joining the losses removes the requirement for having a margin in the softmax loss while increasing performance.
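A generic sketch of the described joint objective, combining a batch-hard triplet loss with an additive-angular-margin softmax; this follows the well-known ArcFace and batch-hard formulations and is not necessarily the authors' exact implementation:

```python
import torch
import torch.nn.functional as F

def batch_hard_triplet_loss(embeddings, labels, margin=0.3):
    """Batch-hard mining: for every anchor, take its hardest positive and
    hardest negative within the mini-batch."""
    dist = torch.cdist(embeddings, embeddings)                  # pairwise distances
    same = labels.unsqueeze(0) == labels.unsqueeze(1)
    hardest_pos = (dist * same.float()).max(dim=1).values       # farthest same-ID sample
    hardest_neg = dist.masked_fill(same, float('inf')).min(dim=1).values
    return F.relu(hardest_pos - hardest_neg + margin).mean()

def arcface_logits(embeddings, weight, labels, s=30.0, m=0.5):
    """Additive angular margin (ArcFace-style) logits on normalized features."""
    cos = F.linear(F.normalize(embeddings), F.normalize(weight)).clamp(-1 + 1e-7, 1 - 1e-7)
    theta = torch.acos(cos)
    one_hot = torch.zeros_like(cos).scatter_(1, labels.view(-1, 1), 1.0)
    return s * torch.cos(theta + m * one_hot)                   # margin only on the true class

# Joint objective sketch: the triplet term supplies gradients directly on the
# (unnormalized) embedding, complementing the margin-softmax classification term.
# loss = F.cross_entropy(arcface_logits(emb, W, y), y) + batch_hard_triplet_loss(emb, y)
```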
Human parsing aims to partition humans in image or video into multiple pixel-level semantic parts. In the last decade, it has gained significantly increased interest in the computer vision community and has been utilized in a broad range of practical applications, from security monitoring, to social media, to visual special effects, just to name a few. Although deep learning-based human parsing solutions have made remarkable achievements, many important concepts, existing challenges, and potential research directions are still confusing. In this survey, we comprehensively review three core sub-tasks: single human parsing, multiple human parsing, and video human parsing, by introducing their respective task settings, background concepts, relevant problems and applications, representative literature, and datasets. We also present quantitative performance comparisons of the reviewed methods on benchmark datasets. Additionally, to promote sustainable development of the community, we put forward a transformer-based human parsing framework, providing a high-performance baseline for follow-up research through universal, concise, and extensible solutions. Finally, we point out a set of under-investigated open issues in this field and suggest new directions for future study. We also provide a regularly updated project page, to continuously track recent developments in this fast-advancing field: https://github.com/soeaver/awesome-human-parsing.
Person re-identification plays a significant role in realistic scenarios due to its various applications in public security and video surveillance. Recently, leveraging the supervised or semi-supervised learning paradigms, which benefit from large-scale datasets and strong computing performance, has achieved competitive performance on specific target domains. However, when Re-ID models are directly deployed in a new domain without target samples, they always suffer from considerable performance degradation and poor domain generalization. To address this challenge, we propose a Deep Multimodal Fusion network to elaborate rich semantic knowledge for assisting in representation learning during pre-training. Importantly, a multimodal fusion strategy is introduced to translate the features of different modalities into a common space, which can significantly boost the generalization capability of the Re-ID model. As for the fine-tuning stage, a realistic dataset is adopted to fine-tune the pre-trained model for better distribution alignment with real-world data. Comprehensive experiments on benchmarks demonstrate that our method can significantly outperform previous domain generalization or meta-learning methods by a clear margin. Our source code will also be publicly available at https://github.com/JeremyXSC/DMF.
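A minimal sketch of one possible way to project two modalities into a common space and fuse them; the layer shapes and fusion operator are assumptions for illustration, not the actual DMF design:

```python
import torch
import torch.nn as nn

class CommonSpaceFusion(nn.Module):
    """Toy fusion module: project two modality features (e.g. image and text)
    into a shared dimension and fuse them with a linear layer. The real DMF
    fusion strategy may differ from this sketch."""
    def __init__(self, dim_img, dim_txt, dim_common):
        super().__init__()
        self.proj_img = nn.Linear(dim_img, dim_common)
        self.proj_txt = nn.Linear(dim_txt, dim_common)
        self.fuse = nn.Linear(2 * dim_common, dim_common)

    def forward(self, img_feat, txt_feat):
        zi = torch.tanh(self.proj_img(img_feat))        # image branch -> common space
        zt = torch.tanh(self.proj_txt(txt_feat))        # text branch  -> common space
        return self.fuse(torch.cat([zi, zt], dim=-1))   # fused representation

# Usage sketch: fused = CommonSpaceFusion(2048, 768, 512)(img_feat, txt_feat)
```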
Gait depicts an individual's unique and distinctive walking pattern and has become one of the most promising biometric features for human identification. As a fine-grained recognition task, gait recognition is easily affected by many factors and usually requires a large amount of fully annotated data, which is costly and insatiable. This paper proposes a large-scale self-supervised benchmark for gait recognition with contrastive learning, aiming to learn general gait representations from massive unlabeled walking videos that provide informative walking priors and diverse real-world variations. Specifically, we collect a large-scale unlabeled gait dataset, GaitLU-1M, consisting of 1.02M walking sequences, and propose a conceptually simple yet empirically powerful baseline model, GaitSSB. Experimentally, we evaluate the pre-trained model on four widely used gait benchmarks (CASIA-B, OU-MVLP, GREW, and Gait3D), with and without transfer learning. The unsupervised results are comparable to or even better than early model-based and GEI-based methods. After transfer learning, our method outperforms existing methods in most cases. Theoretically, we discuss the key issues of gait-specific contrastive frameworks and provide some insights for further study. To the best of our knowledge, GaitLU-1M is the first large-scale unlabeled gait dataset, and GaitSSB is the first method that achieves remarkable unsupervised results on the aforementioned benchmarks. The source code of GaitSSB will be integrated into OpenGait, which is available at https://github.com/shiqiyu/opengait.
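To illustrate the self-supervised pre-training idea, a generic contrastive (InfoNCE) objective over two augmented views of the same walking sequence; GaitSSB's exact loss and augmentations may differ:

```python
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.07):
    """Generic contrastive loss: embeddings of two augmentations of the same
    sequence form the positive pair (the diagonal), all other pairs in the
    batch act as negatives."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature                    # view1 vs all view2 similarities
    targets = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, targets)

# Usage sketch: encode two silhouette-sequence augmentations with the same encoder,
# then loss = info_nce(encoder(aug1), encoder(aug2)).
```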
Pedestrian attribute recognition aims to assign multiple attributes to one pedestrian image captured by a video surveillance camera. Although many methods have been proposed and great progress has been made, we argue that it is time to step back and analyze the status quo of this area. We review and rethink recent progress from three perspectives. First, given that there is no clear and complete definition of pedestrian attribute recognition, we formally define it and distinguish it from other similar tasks. Second, based on the proposed definition, we expose the limitations of existing datasets, which violate academic norms and are inconsistent with the essential requirements of real industrial applications. Therefore, we propose two datasets, PETA-ZS and RAP-ZS, constructed following a zero-shot setting on pedestrian identities. In addition, we introduce several realistic criteria for future pedestrian attribute dataset construction. Finally, we re-implement existing state-of-the-art methods and introduce a strong baseline method to provide reliable evaluations and fair comparisons. Experiments are conducted on four existing datasets and the two proposed datasets to measure the progress of pedestrian attribute recognition.
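A toy sketch of the identity-disjoint (zero-shot) split principle behind PETA-ZS and RAP-ZS; the sample format and split ratio here are assumptions, not the released protocol:

```python
import random

def identity_disjoint_split(samples, test_ratio=0.4, seed=0):
    """Zero-shot split on pedestrian identities: no identity appears in both
    the training and the test set. `samples` is assumed to be a list of
    (image_path, identity, attributes) tuples."""
    ids = sorted({pid for _, pid, _ in samples})
    random.Random(seed).shuffle(ids)
    test_ids = set(ids[:int(len(ids) * test_ratio)])
    train = [s for s in samples if s[1] not in test_ids]   # seen identities
    test = [s for s in samples if s[1] in test_ids]        # unseen identities
    return train, test
```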
Person search is an integrated task composed of multiple sub-tasks such as foreground/background classification, bounding box regression, and person re-identification. Therefore, person search is a typical multi-task learning problem, especially when solved in an end-to-end manner. Recently, some works enhance person search by exploiting various auxiliary information, e.g., human joint keypoints, body part positions, attributes, etc., which introduces more tasks and makes the person search model more complex. The inconsistent convergence rates of the individual tasks can harm model optimization. A straightforward solution is to manually assign different weights to different tasks to compensate for the diverse convergence rates. However, given the particular situation of person search, namely a large number of tasks, manually weighting the tasks is impractical. To this end, we propose a Grouped Adaptive Loss Weighting (GALW) method, which adjusts the weight of each task automatically and dynamically. Specifically, we group tasks according to their convergence rates. Tasks in the same group share the same learnable weight, which is dynamically assigned by considering loss uncertainty. Experimental results on two typical benchmarks, CUHK-SYSU and PRW, demonstrate the effectiveness of our method.
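A hedged sketch of grouped, uncertainty-based loss weighting in the spirit of GALW; the Kendall-style weighting term and the pre-computed grouping are assumptions for illustration:

```python
import torch
import torch.nn as nn

class GroupedUncertaintyWeighting(nn.Module):
    """Tasks whose losses fall in the same group share one learnable
    log-variance s, and each group contributes exp(-s) * L + s to the total
    loss. The grouping-by-convergence-rate step is assumed to happen elsewhere."""
    def __init__(self, num_groups):
        super().__init__()
        self.log_vars = nn.Parameter(torch.zeros(num_groups))  # one s per group

    def forward(self, group_losses):
        # group_losses: list of scalar losses, one (already summed) per group
        total = 0.0
        for s, loss in zip(self.log_vars, group_losses):
            total = total + torch.exp(-s) * loss + s
        return total

# Usage sketch: weighter = GroupedUncertaintyWeighting(num_groups=3)
#               loss = weighter([det_loss, box_reg_loss, reid_loss])
```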
Pretraining is a dominant paradigm in computer vision. Generally, supervised ImageNet pretraining is commonly used to initialize the backbones of person re-identification (Re-ID) models. However, recent works show a surprising result that CNN-based pretraining on ImageNet has limited impacts on Re-ID system due to the large domain gap between ImageNet and person Re-ID data. To seek an alternative to traditional pretraining, here we investigate semantic-based pretraining as another method to utilize additional textual data against ImageNet pretraining. Specifically, we manually construct a diversified FineGPR-C caption dataset for the first time on person Re-ID events. Based on it, a pure semantic-based pretraining approach named VTBR is proposed to adopt dense captions to learn visual representations with fewer images. We train convolutional neural networks from scratch on the captions of FineGPR-C dataset, and then transfer them to downstream Re-ID tasks. Comprehensive experiments conducted on benchmark datasets show that our VTBR can achieve competitive performance compared with ImageNet pretraining - despite using up to 1.4x fewer images, revealing its potential in Re-ID pretraining.
Large-scale datasets have played an indispensable role in the recent success of face generation/editing and have significantly facilitated the advances of emerging research fields. However, the academic community still lacks a video dataset with diverse facial attribute annotations, which is crucial for research on face-related videos. In this work, we propose a large-scale, high-quality, and diverse video dataset with rich facial attribute annotations, named the High-Quality Celebrity Video Dataset (CelebV-HQ). CelebV-HQ contains 35,666 video clips with a resolution of at least 512x512, involving 15,653 identities. All clips are manually labeled with 83 facial attributes, covering appearance, action, and emotion. We conduct a comprehensive analysis in terms of age, ethnicity, brightness stability, motion smoothness, head pose diversity, and data quality to demonstrate the diversity and temporal coherence of CelebV-HQ. Besides, its versatility and potential are validated on two representative tasks, i.e., unconditional video generation and video facial attribute editing. Furthermore, we envision the future potential of CelebV-HQ, as well as the new opportunities and challenges it will bring to related research directions. The data, code, and models are publicly available. Project page: https://celebv-hq.github.io.
Human gait is considered a unique biometric identifier that can be acquired covertly and at a distance. However, models trained on existing public-domain gait datasets, which were captured in controlled scenarios, suffer drastic performance degradation when applied to real-world unconstrained gait data. On the other hand, video person re-identification techniques have achieved promising performance on large-scale publicly available datasets. Given the diversity of clothing characteristics, clothing cues are not reliable for person recognition, so it is actually unclear why state-of-the-art person re-identification methods work as well as they do. In this paper, we construct a new gait dataset by extracting silhouettes from an existing video person re-identification challenge, which consists of 1,404 people walking in an unconstrained manner. Based on this dataset, a consistent and comparative study between gait recognition and person re-identification can be carried out. Given that our experimental results show that current gait recognition methods, designed for data collected in controlled scenarios, are not suitable for real surveillance scenarios, we propose a novel gait recognition method named RealGait. Our results suggest that recognizing people by their gait in real surveillance scenarios is feasible, and that the underlying gait pattern is probably the true reason why video person re-identification works in practice.
Occluded person re-identification (ReID) aims at matching occluded person images to holistic ones across different camera views. Target pedestrians (TP) are often disturbed by non-pedestrian occlusions (NPO) and non-target pedestrians (NTP). Previous methods mainly focus on increasing the model's robustness against NPO while ignoring feature contamination from NTP. In this paper, we propose a novel Feature Erasing and Diffusion Network (FED) to simultaneously handle NPO and NTP. Specifically, our proposed Occlusion Erasing Module (OEM) eliminates NPO features, assisted by an NPO augmentation strategy that simulates NPO on holistic pedestrian images and generates precise occlusion masks. Subsequently, we diffuse the pedestrian representations with other memorized features to synthesize NTP characteristics in the feature space, which is achieved by a novel Feature Diffusion Module (FDM) through a learnable cross-attention mechanism. Guided by the occlusion scores from the OEM, the feature diffusion process is mainly conducted on the visible body parts, guaranteeing the quality of the synthesized NTP characteristics. By jointly optimizing the OEM and FDM in our proposed FED network, we can greatly improve the model's ability to perceive the TP and alleviate the influence of NPO and NTP. Furthermore, the proposed FDM works only as an auxiliary module during training and is discarded in the inference phase, thus introducing little inference overhead. Experiments on occluded and holistic person ReID benchmarks demonstrate the superiority of FED over state-of-the-art methods, where FED achieves 86.3% Rank-1 accuracy on Occluded-REID, surpassing others by at least 4.7%.
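For intuition, an illustrative cross-attention block that diffuses part features with a memory bank, gated by visibility scores; this is a simplified stand-in, not the paper's exact FDM:

```python
import torch
import torch.nn as nn

class FeatureDiffusionSketch(nn.Module):
    """Toy illustration of cross-attention-based feature diffusion: part features
    attend to a bank of memorized features, and the result is blended back in
    proportion to per-part visibility scores (so diffusion mostly touches
    visible parts). Shapes and gating are assumptions for illustration."""
    def __init__(self, dim, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, pedestrian_feat, memory_feats, visibility):
        # pedestrian_feat: (B, P, D) part features; memory_feats: (B, M, D);
        # visibility: (B, P, 1) occlusion-derived scores in [0, 1].
        diffused, _ = self.attn(pedestrian_feat, memory_feats, memory_feats)
        return pedestrian_feat + visibility * diffused    # diffuse mainly visible parts
```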
Micro-expressions (MEs) are involuntary facial movements that reveal people's hidden feelings in high-stakes situations and have practical importance in medical treatment, national security, interrogations, and many human-computer interaction systems. Early MER methods were mainly based on traditional appearance and geometric features. Recently, with the success of deep learning (DL) in various fields, neural networks have received increasing interest for MER. Unlike macro-expressions, MEs are spontaneous, subtle, and rapid facial movements, which makes data collection difficult and leaves datasets small in scale. Owing to these characteristics of MEs, DL-based MER becomes challenging. To date, various DL approaches have been proposed to solve these problems and improve MER performance. In this survey, we provide a comprehensive review of deep micro-expression recognition (MER), including datasets, the deep MER pipeline, and benchmarks of the most influential methods. This survey defines a new taxonomy for the field that covers all aspects of DL-based MER. For each aspect, the basic approaches and advanced developments are summarized and discussed. In addition, we draw out the remaining challenges and potential directions for the design of robust deep MER systems. To the best of our knowledge, this is the first survey of deep MER methods, and it can serve as a reference point for future MER research.
The combination of global and partial features has been an essential solution to improve discriminative performance in person re-identification (Re-ID) tasks. Previous part-based methods mainly focus on locating regions with specific pre-defined semantics to learn local representations, which increases learning difficulty and is neither efficient nor robust to scenarios with large variances. In this paper, we propose an end-to-end feature learning strategy integrating discriminative information with various granularities. We carefully design the Multiple Granularity Network (MGN), a multi-branch deep network architecture consisting of one branch for global feature representations and two branches for local feature representations. Instead of learning on semantic regions, we uniformly partition the images into several stripes, and vary the number of parts in different local branches to obtain local feature representations with multiple granularities. Comprehensive experiments implemented on the mainstream evaluation datasets, including Market-1501, DukeMTMC-reID and CUHK03, indicate that our method robustly achieves state-of-the-art performance and outperforms existing approaches by a large margin. For example, on the Market-1501 dataset in single query mode, we obtain a top result of Rank-1/mAP = 96.6%/94.2% with this method after re-ranking.
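A small sketch of the uniform stripe partitioning described above: different branches pool the backbone feature map into different numbers of horizontal parts (e.g. 1, 2, and 3) to obtain multi-granularity local features:

```python
import torch
import torch.nn.functional as F

def stripe_features(feature_map, num_stripes):
    """Pool a backbone feature map (B, C, H, W) into `num_stripes` horizontal
    stripes and return per-stripe feature vectors of shape (B, S, C). Branch
    configuration and head layers are omitted; this only illustrates the
    uniform partitioning idea."""
    pooled = F.adaptive_avg_pool2d(feature_map, (num_stripes, 1))  # (B, C, S, 1)
    return pooled.squeeze(-1).permute(0, 2, 1)                     # (B, S, C)

# Usage sketch: global branch -> stripe_features(fmap, 1),
#               part branches -> stripe_features(fmap, 2), stripe_features(fmap, 3).
```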
Computer vision (CV) has achieved remarkable results, outperforming humans in several tasks. Nonetheless, it may lead to significant discrimination if not handled properly, as CV systems highly depend on the data they are fed with and can learn and amplify biases within such data. Thus, the problem of understanding and discovering biases is of utmost importance. However, there is no comprehensive survey on bias in visual datasets. Hence, this work aims to: i) describe the biases that can manifest in visual datasets; ii) review the literature on methods for bias discovery and quantification in visual datasets; iii) discuss existing attempts to collect bias-aware visual datasets. A key conclusion of our study is that the problem of bias discovery and quantification in visual datasets is still open, and there is room for improvement both in terms of methods and in terms of the range of biases that can be addressed. Moreover, there is no such thing as a bias-free dataset, so scientists and practitioners must become aware of the biases in their datasets and make them explicit. To this end, we propose a checklist to spot different types of bias during the visual dataset collection process.