This work focuses on unsupervised representation learning in person re-identification (ReID). Recent self-supervised contrastive learning methods learn invariance by maximizing the representation similarity between two augmented views of the same image. However, traditional data augmentation may introduce undesirable distortions of identity features, which is not always favorable in identity-sensitive ReID tasks. In this paper, we propose to replace traditional data augmentation with a generative adversarial network (GAN) targeted at generating augmented views for contrastive learning. A 3D mesh guided person image generator is proposed to disentangle a person image into id-related and id-unrelated features. Deviating from previous GAN-based ReID methods that only work in the id-unrelated space (pose and camera style), we conduct GAN-based augmentation on both id-unrelated and id-related features. We further propose specific contrastive losses that help our network learn invariance from id-unrelated and id-related augmentations. By jointly training the generative and contrastive modules, our method achieves new state-of-the-art unsupervised person ReID performance on mainstream large-scale benchmarks.
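A minimal sketch of the idea, assuming a generic generator interface (the paper's 3D-mesh-guided generator is not reproduced here) and a standard InfoNCE formulation:

```python
import torch
import torch.nn.functional as F

def gan_view_contrastive_loss(encoder, generator, images, temperature=0.1):
    # z1: representation of the original image.
    z1 = F.normalize(encoder(images), dim=1)                 # (B, D)
    # z2: representation of the GAN-generated view; `generator` is a
    # hypothetical stand-in for the paper's 3D-mesh-guided generator.
    with torch.no_grad():
        views = generator(images)
    z2 = F.normalize(encoder(views), dim=1)                  # (B, D)
    # Standard InfoNCE: matching (image, generated view) pairs lie on
    # the diagonal of the similarity matrix.
    logits = z1 @ z2.t() / temperature                       # (B, B)
    labels = torch.arange(images.size(0), device=images.device)
    return F.cross_entropy(logits, labels)
```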
The unsupervised domain adaptive person re-identification (re-ID) task is challenging because, unlike conventional domain adaptation tasks, there is no identity overlap between the source-domain and target-domain data in person re-ID, which leads to a significant domain gap. State-of-the-art unsupervised re-ID methods train neural networks with a memory-based contrastive loss. However, performing contrastive learning by treating each unlabeled instance as its own class leads to the class-collision problem, and the update intensity of the memory bank is inconsistent because clusters contain different numbers of instances. To address these problems, we propose prototype dictionary learning for person re-ID, which can exploit both source-domain and target-domain data in a single training stage while avoiding the class-collision problem and the inconsistency of cluster update intensity. To reduce the interference of the domain gap on the model, we further propose a local enhancement module that improves the domain adaptation of the model without increasing the number of model parameters. Our experiments on two large-scale datasets demonstrate the effectiveness of prototype dictionary learning: it achieves 71.5% mAP on the Market-to-Duke task, a 2.3% improvement over state-of-the-art unsupervised domain adaptive re-ID methods, and 83.9% mAP on the Duke-to-Market task, a 4.4% improvement over the state of the art.
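A rough sketch of a prototype dictionary with uniform update strength, assuming one L2-normalized prototype per cluster and a softmax contrastive loss over prototypes; all names and hyperparameters are our assumptions, not the paper's code:

```python
import torch
import torch.nn.functional as F

class PrototypeDictionary:
    """One L2-normalized prototype per cluster; every sample applies the same
    momentum update, so the update strength is uniform across clusters."""

    def __init__(self, prototypes, momentum=0.2, temperature=0.05):
        self.protos = F.normalize(prototypes, dim=1)   # (num_clusters, D)
        self.m, self.t = momentum, temperature

    def loss(self, feats, labels):
        feats = F.normalize(feats, dim=1)
        # Clone so the in-place update below cannot clash with autograd.
        logits = feats @ self.protos.clone().t() / self.t
        return F.cross_entropy(logits, labels)

    @torch.no_grad()
    def update(self, feats, labels):                   # call after backward()
        feats = F.normalize(feats, dim=1)
        for f, y in zip(feats, labels):                # per-sample, uniform strength
            self.protos[y] = self.m * self.protos[y] + (1 - self.m) * f
            self.protos[y] = F.normalize(self.protos[y], dim=0)
```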
Recently, unsupervised person re-identification (Re-ID) has attracted increasing attention due to its open-world scenario setting, where annotated data are limited. Existing supervised methods often generalize poorly to unseen domains, while unsupervised methods, most of which lack multi-granularity information, are prone to confirmation bias. In this paper, we aim to learn better feature representations for the unseen target domain from two aspects: 1) unsupervised domain adaptation from the labeled source domain and 2) mining latent similarities in the unlabeled target domain. In addition, a collaborative pseudo-labeling strategy is proposed to alleviate the effect of confirmation bias. First, a generative adversarial network is used to transfer images from the source domain to the target domain, and person-identity and identity-mapping losses are introduced to improve the quality of the generated images. Second, we propose a novel collaborative multi-granularity feature clustering framework (CMFC) to learn the internal data structure of the target domain, consisting of a global feature branch and a part feature branch. The global feature branch (GB) applies unsupervised clustering to the global features of person images, while the part feature branch (PB) mines similarities within different body regions. Finally, extensive experiments on two benchmark datasets demonstrate the competitive performance of our method under the unsupervised person Re-ID setting.
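The two-branch extraction can be pictured as below; stripe-based part pooling and DBSCAN pseudo-labeling are common choices in this line of work and are assumptions here rather than the paper's exact design:

```python
import torch
from sklearn.cluster import DBSCAN

def global_and_part_features(feature_map, num_parts=2):
    # feature_map: (B, C, H, W) backbone output.
    global_feat = feature_map.mean(dim=(2, 3))             # GB: global pooling
    stripes = feature_map.chunk(num_parts, dim=2)          # split along height
    part_feats = [s.mean(dim=(2, 3)) for s in stripes]     # PB: per-region pooling
    return global_feat, part_feats

def cluster_pseudo_labels(feats, eps=0.6):
    # feats: (N, D) numpy array; DBSCAN assigns -1 to outliers.
    return DBSCAN(eps=eps, min_samples=4, metric="cosine").fit_predict(feats)
```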
Unsupervised person re-identification (Re-ID) has attracted increasing research interest due to its scalability and potential for real-world applications. State-of-the-art unsupervised Re-ID methods typically follow a clustering-based strategy, which generates pseudo labels by clustering and maintains a memory that stores instance features and represents cluster centroids for contrastive learning. This approach suffers from two problems. First, a centroid produced by unsupervised clustering may not be a perfect prototype; forcing images closer to such a centroid over-emphasizes the clustering result and can accumulate clustering errors over iterations. Second, previous methods represent a centroid with features obtained at different training iterations, which are inconsistent with the current training samples because these features are not directly comparable. To this end, we propose an unsupervised Re-ID method with a stochastic learning strategy. Specifically, we adopt a stochastically updated memory, in which the cluster-level memory is updated with a random instance of each cluster for contrastive learning. In this way, the relation between randomly selected pairs of images is learned, avoiding the training bias caused by unreliable pseudo labels. The stochastic memory is also always up to date, which maintains consistency. Furthermore, to alleviate the problem of camera variance, a unified distance matrix is proposed for clustering, in which the distance bias between different camera domains is reduced and the identity differences are emphasized.
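A sketch of the stochastic memory update, under the assumption that the memory holds one slot per cluster and is momentum-updated; hyperparameters are placeholders:

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def random_instance_memory_update(memory, feats, labels, momentum=0.1):
    """Refresh each cluster slot with ONE randomly chosen instance of that
    cluster from the current batch, keeping the memory up to date without
    over-trusting any single centroid estimate."""
    feats = F.normalize(feats, dim=1)
    for c in labels.unique():
        idx = (labels == c).nonzero(as_tuple=True)[0]
        pick = idx[torch.randint(len(idx), (1,))]      # random instance of cluster c
        memory[c] = momentum * memory[c] + (1 - momentum) * feats[pick].squeeze(0)
        memory[c] = F.normalize(memory[c], dim=0)
```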
Unsupervised person re-identification is a challenging and promising task in computer vision. Unsupervised person re-identification methods have recently achieved great progress by training with pseudo labels. However, how to purify feature and label noise has rarely been explicitly studied in the unsupervised setting. To purify the features, we take into account two additional types of features from different local views to enrich the feature representation. The proposed multi-view features are carefully integrated into our cluster contrastive learning to exploit more discriminative cues that global features tend to ignore or be biased against. To purify the label noise, we propose to exploit the knowledge of a teacher model in an offline scheme. Specifically, we first train a teacher model on the noisy pseudo labels and then use the teacher model to guide the learning of our student model. In our setting, the student model converges quickly under the supervision of the teacher model, which weakens the interference of noisy labels, since the teacher model has largely absorbed their effect. After carefully handling the noise and bias in feature learning, our purification modules prove to be very effective for unsupervised person re-identification. Extensive experiments on three popular person re-identification datasets demonstrate the superiority of our method. Notably, our approach achieves state-of-the-art accuracy of 85.8% mAP and 94.5% rank-1 on the challenging Market-1501 benchmark in the fully unsupervised setting. Code will be released.
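The offline teacher-student step is consistent with standard knowledge distillation; a hedged sketch assuming a temperature-scaled KL objective (the paper's exact guidance loss may differ):

```python
import torch.nn.functional as F

def offline_distillation_loss(student_logits, teacher_logits, T=4.0):
    """Temperature-scaled KL between the frozen teacher's soft predictions and
    the student's predictions; soft targets damp wrong pseudo labels."""
    p_teacher = F.softmax(teacher_logits / T, dim=1)
    log_p_student = F.log_softmax(student_logits / T, dim=1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * T * T
```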
Unsupervised person re-identification (Re-ID) has attracted increasing attention because it addresses the scalability problem of supervised Re-ID models. Most existing unsupervised methods adopt an iterative clustering mechanism, in which the network is trained on pseudo labels generated by unsupervised clustering. However, clustering errors are inevitable. To generate high-quality pseudo labels and mitigate the impact of clustering errors, we propose a novel cluster relation modeling framework for unsupervised person Re-ID. Specifically, before clustering, the relations among unlabeled images are explored with a graph correlation learning (GCL) module, and the refined features are then used for clustering to generate high-quality pseudo labels. GCL adaptively mines the relations among samples in a mini-batch, reducing the impact of abnormal clusters during training. To train the network more effectively, we further propose a selective contrastive learning (SCL) method with a selective memory-bank update strategy. Extensive experiments show that our method produces better results than most state-of-the-art unsupervised methods on the Market-1501, DukeMTMC-reID and MSMT17 datasets. We will release the code for reproducing our model.
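A non-learned analogue of relation modeling before clustering (the actual GCL module is trainable; this k-NN feature smoothing is only an illustration of the idea):

```python
import torch
import torch.nn.functional as F

def graph_refined_features(feats, k=10):
    """Smooth each feature with its k nearest neighbours so outliers move
    toward the manifold before pseudo labels are generated."""
    normed = F.normalize(feats, dim=1)
    sim = normed @ normed.t()                          # (N, N) cosine similarity
    topk = sim.topk(k, dim=1)                          # k-NN graph (self included)
    adj = torch.zeros_like(sim).scatter_(1, topk.indices, topk.values)
    adj = adj / adj.sum(dim=1, keepdim=True)           # row-normalized affinity
    return adj @ feats                                 # neighbourhood aggregation
```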
Recently, many methods have tackled the unsupervised domain adaptive person re-identification (UDA re-ID) problem via pseudo-label-based contrastive learning. During training, a uni-centroid representation is obtained by simply averaging all the instance features of a cluster that share the same pseudo label. However, a cluster may contain images with different identities (label noise) due to imperfect clustering results, which makes the uni-centroid representation inappropriate. In this paper, we present a novel multi-centroid memory (MCM) to adaptively capture the different identity information within a cluster. MCM effectively alleviates the label-noise problem by selecting proper positive/negative centroids for each query image. Moreover, we further propose two strategies to improve the contrastive learning process. First, we introduce a domain-specific contrastive learning (DSCL) mechanism to fully exploit intra-domain information by comparing samples only within the same domain. Second, we propose second-order nearest interpolation (SONI) to obtain rich and informative negative samples. We integrate MCM, DSCL and SONI into a unified framework named the multi-centroid representation network (MCRN). Extensive experiments demonstrate the superiority of MCRN over state-of-the-art approaches on multiple UDA re-ID tasks and the fully unsupervised re-ID task.
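A sketch of querying a multi-centroid memory; the selection rule shown (hardest positive centroid, all other centroids as negatives) is a plausible assumption rather than the paper's exact policy:

```python
import torch
import torch.nn.functional as F

def multi_centroid_loss(query, centroids, centroid_labels, q_label, temperature=0.05):
    """Contrast a query against a memory holding several centroids per cluster.
    Positive = the hardest (least similar) centroid of the query's own cluster;
    negatives = the centroids of all other clusters."""
    query = F.normalize(query, dim=0)                  # (D,)
    sims = F.normalize(centroids, dim=1) @ query       # (num_centroids,)
    pos = sims[centroid_labels == q_label].min()       # hardest positive centroid
    neg = sims[centroid_labels != q_label]             # negative centroids
    logits = torch.cat([pos.view(1), neg]).unsqueeze(0) / temperature
    target = torch.zeros(1, dtype=torch.long, device=query.device)
    return F.cross_entropy(logits, target)
```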
Person re-identification (Re-ID) aims at retrieving a person of interest across multiple non-overlapping cameras. With the advancement of deep neural networks and the increasing demand for intelligent video surveillance, it has gained significantly increased interest in the computer vision community. By dissecting the components involved in developing a person Re-ID system, we categorize it into the closed-world and open-world settings. The widely studied closed-world setting is usually applied under various research-oriented assumptions, and has achieved inspiring success using deep learning techniques on a number of datasets. We first conduct a comprehensive overview with in-depth analysis for closed-world person Re-ID from three different perspectives, including deep feature representation learning, deep metric learning and ranking optimization. With the performance saturation under the closed-world setting, the research focus for person Re-ID has recently shifted to the open-world setting, facing more challenging issues. This setting is closer to practical applications under specific scenarios. We summarize open-world Re-ID in terms of five different aspects. By analyzing the advantages of existing methods, we design a powerful AGW baseline, achieving state-of-the-art or at least comparable performance on twelve datasets for four different Re-ID tasks. Meanwhile, we introduce a new evaluation metric (mINP) for person Re-ID, indicating the cost of finding all the correct matches, which provides an additional criterion to evaluate Re-ID systems for real applications. Finally, some important yet under-investigated open issues are discussed.
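The mINP metric admits a compact implementation. Following the definition above, the inverse negative penalty of a query is |G|/R_hard, where R_hard is the rank of the hardest (last) correct match; this is a sketch, not the authors' evaluation code:

```python
import numpy as np

def mean_inverse_negative_penalty(ranked_matches):
    """ranked_matches: one boolean array per query, marking correct gallery
    matches in descending-similarity order. INP = |G| / R_hard."""
    inps = []
    for matches in ranked_matches:
        ranks = np.where(matches)[0] + 1       # 1-based ranks of correct matches
        if len(ranks) == 0:
            continue                           # skip queries with no gallery match
        inps.append(len(ranks) / ranks[-1])
    return float(np.mean(inps))
```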
State-of-the-art unsupervised re-ID methods train neural networks using a memory-based non-parametric softmax loss. The instance feature vectors stored in the memory are assigned pseudo labels by clustering and updated at the instance level. However, the varying cluster sizes lead to inconsistency in the update progress of each cluster. To solve this problem, we present Cluster Contrast, which stores feature vectors and computes the contrastive loss at the cluster level. Our approach employs a unique cluster representation to describe each cluster, yielding a cluster-level memory dictionary. In this way, the consistency of the clustering can be effectively maintained throughout each stage, and the GPU memory consumption is significantly reduced. Thus, our method can solve the cluster-inconsistency problem and is applicable to larger datasets. In addition, we adopt different clustering algorithms to demonstrate the robustness and generalization of our framework. Applying Cluster Contrast to a standard unsupervised re-ID pipeline achieves considerable improvements of 9.9%, 8.3% and 12.1% mAP over the latest purely unsupervised re-ID methods, and of 5.5%, 4.8% and 4.4% mAP over state-of-the-art unsupervised domain adaptation re-ID methods, on the Market-1501, DukeMTMC-reID and MSMT17 datasets. Code is available at https://github.com/alibaba/cluster-contrast.
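A minimal sketch of a cluster-level memory dictionary with momentum updates; momentum and temperature values are assumptions:

```python
import torch
import torch.nn.functional as F

class ClusterMemory:
    """One L2-normalized vector per cluster, initialized with the cluster mean
    and momentum-updated by every query, so the update strength is identical
    for large and small clusters."""

    def __init__(self, centroids, momentum=0.1, temperature=0.05):
        self.mem = F.normalize(centroids, dim=1)       # (num_clusters, D)
        self.m, self.t = momentum, temperature

    def contrast(self, query, label):
        query = F.normalize(query, dim=0)
        # Clone so the in-place update below cannot clash with autograd.
        logits = (self.mem.clone() @ query / self.t).unsqueeze(0)
        loss = F.cross_entropy(logits, torch.tensor([label], device=query.device))
        with torch.no_grad():                          # memory update after the loss
            self.mem[label] = self.m * self.mem[label] + (1 - self.m) * query
            self.mem[label] = F.normalize(self.mem[label], dim=0)
        return loss
```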
Recently, domain generalization (DG) person ReID has attracted much attention due to the poor generalization ability of supervised person re-identification (ReID), aiming to learn a domain-insensitive model that can resist the influence of domain bias. In this paper, we first verify experimentally that style factors are a major component of domain bias. Based on this conclusion, we propose a style-variable and style-irrelevant learning (SVIL) method to eliminate the effect of style factors on the model. Specifically, we design a style jitter module (SJM) in SVIL. The SJM module enriches the style diversity of a given source domain and reduces the style differences among the various source domains. This leads the model to focus on identity-relevant information and to be insensitive to style variations. In addition, we organically combine the SJM module with a meta-learning algorithm, maximizing its benefits and further improving the generalization ability of the model. Note that our SJM module is plug-and-play and adds no inference cost. Extensive experiments confirm the effectiveness of our SVIL, and our method outperforms the state-of-the-art methods on DG-ReID benchmarks.
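The SJM is described only at a high level; the following is a hedged sketch in the spirit of AdaIN/MixStyle-style statistics mixing, which is our assumption of what "style jitter" on feature maps could look like:

```python
import torch

def style_jitter(x, alpha=0.1):
    """Mix channel-wise feature statistics (the 'style') between random
    instances, leaving the normalized content untouched."""
    B = x.size(0)
    mu = x.mean(dim=(2, 3), keepdim=True)              # per-instance style stats
    sig = x.std(dim=(2, 3), keepdim=True) + 1e-6
    perm = torch.randperm(B, device=x.device)
    lam = torch.distributions.Beta(alpha, alpha).sample((B, 1, 1, 1)).to(x.device)
    mu_mix = lam * mu + (1 - lam) * mu[perm]           # jittered statistics
    sig_mix = lam * sig + (1 - lam) * sig[perm]
    return (x - mu) / sig * sig_mix + mu_mix           # re-stylized feature map
```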
Unsupervised person re-identification (ReID) is a challenging task without data annotations to guide discriminative learning. Existing methods attempt to solve this problem by clustering the extracted embeddings to generate pseudo labels. However, most methods ignore the intra-class gaps caused by camera style variance, and the methods that do try to address the negative impact of camera style on the feature distribution are relatively complex and indirect. To solve this problem, we propose a camera-aware style separation and contrastive learning method (CA-UReID), which directly separates camera styles in the feature space with a designed camera-aware attention module. It explicitly divides the learned features into camera-specific and camera-agnostic parts, reducing the influence of different cameras. Furthermore, to further narrow the gap across cameras, we design a camera-aware contrastive center loss to learn more discriminative embeddings for each identity. Extensive experiments demonstrate the superiority of our method over the state-of-the-art methods on the unsupervised person ReID task.
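A sketch of a contrastive center loss over identity centers; the camera-aware weighting of the actual loss is omitted, so this is an assumed baseline form:

```python
import torch.nn.functional as F

def contrastive_center_loss(feats, labels, centers, temperature=0.05):
    """Pull each camera-agnostic feature toward its identity center and away
    from all other centers via a softmax over center similarities."""
    feats = F.normalize(feats, dim=1)
    centers = F.normalize(centers, dim=1)              # (num_ids, D)
    logits = feats @ centers.t() / temperature
    return F.cross_entropy(logits, labels)
```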
Systems for person re-identification (ReID) can achieve a high accuracy when trained on large fully-labeled image datasets. However, the domain shift typically associated with diverse operational capture conditions (e.g., camera viewpoints and lighting) may translate to a significant decline in performance. This paper focuses on unsupervised domain adaptation (UDA) for video-based ReID - a relevant scenario that is less explored in the literature. In this scenario, the ReID model must adapt to a complex target domain defined by a network of diverse video cameras based on tracklet information. State-of-the-art methods cluster unlabeled target data, yet domain shifts across target cameras (sub-domains) can lead to poor initialization of clustering methods that propagates noise across epochs, thus preventing the ReID model from accurately associating samples of the same identity. In this paper, a UDA method is introduced for video person ReID that leverages knowledge on video tracklets, and on the distribution of frames captured over target cameras, to improve the performance of CNN backbones trained using pseudo-labels. Our method relies on an adversarial approach, where a camera-discriminator network is introduced to extract discriminant camera-independent representations, facilitating the subsequent clustering. In addition, a weighted contrastive loss is proposed to leverage the confidence of clusters, and mitigate the risk of incorrect identity associations. Experimental results obtained on three challenging video-based person ReID datasets - PRID2011, iLIDS-VID, and MARS - indicate that our proposed method can outperform related state-of-the-art methods. Our code is available at: \url{https://github.com/dmekhazni/CAWCL-ReID}
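A sketch of a confidence-weighted contrastive loss; how cluster confidence enters the loss is an assumption here (per-sample scaling of an InfoNCE term):

```python
import torch.nn.functional as F

def weighted_contrastive_loss(feats, labels, centroids, cluster_conf, temperature=0.05):
    """Scale each sample's InfoNCE term by the confidence of its cluster so
    that dubious clusters contribute less to the gradient."""
    feats = F.normalize(feats, dim=1)
    logits = feats @ F.normalize(centroids, dim=1).t() / temperature
    per_sample = F.cross_entropy(logits, labels, reduction="none")
    return (cluster_conf[labels] * per_sample).mean()
```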
Person re-identification (re-ID) models trained on one domain often fail to generalize well to another. In our attempt, we present a "learning via translation" framework. In the baseline, we translate the labeled images from source to target domain in an unsupervised manner. We then train re-ID models with the translated images by supervised methods. Yet, being an essential part of this framework, unsupervised image-image translation suffers from the information loss of source-domain labels during translation. Our motivation is two-fold. First, for each image, the discriminative cues contained in its ID label should be maintained after translation. Second, given the fact that two domains have entirely different persons, a translated image should be dissimilar to any of the target IDs. To this end, we propose to preserve two types of unsupervised similarities, 1) self-similarity of an image before and after translation, and 2) domain-dissimilarity of a translated source image and a target image. Both constraints are implemented in the similarity preserving generative adversarial network (SPGAN), which consists of a Siamese network and a CycleGAN. Through domain adaptation experiments, we show that images generated by SPGAN are more suitable for domain adaptation and yield consistent and competitive re-ID accuracy on two large-scale datasets.
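The two SPGAN constraints map naturally onto Siamese embeddings; a sketch assuming an MSE pull term and a margin-based push term (the paper's exact contrastive form may differ):

```python
import torch.nn.functional as F

def spgan_similarity_losses(f_src, f_translated, f_target, margin=2.0):
    """Self-similarity: a translated image stays close to its source.
    Domain-dissimilarity: it stays at least `margin` away from arbitrary
    target-domain images."""
    self_sim = F.mse_loss(f_translated, f_src)                 # pull together
    d = F.pairwise_distance(f_translated, f_target)
    domain_dissim = F.relu(margin - d).pow(2).mean()           # push apart
    return self_sim + domain_dissim
```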
Recently, cluster contrastive learning has been proven effective for person ReID by computing the contrastive loss between individual features and cluster memories. However, existing methods that use individual features to momentum-update the cluster memory are not robust to noisy samples, such as samples with wrongly annotated labels or pseudo labels. Unlike the individual-based update mechanism, a centroid-based update mechanism that applies the mean feature of each cluster to update the cluster memory is robust to a small number of noisy samples. Therefore, we formulate the individual-based and centroid-based update mechanisms in a unified cluster contrastive framework, named dual cluster contrastive learning (DCC), which maintains two types of memory banks: an individual cluster memory and a centroid cluster memory. Notably, the individual cluster memory is updated based on individual features, while the centroid cluster memory applies the mean feature of each cluster to update the corresponding cluster memory. Besides the vanilla contrastive loss for each memory, a consistency constraint is applied to guarantee the consistency of the outputs of the two memories. Note that DCC can easily be applied to unsupervised or supervised person ReID by using ground-truth labels or pseudo labels generated by a clustering method. Extensive experiments on two benchmarks under supervised and unsupervised person ReID demonstrate the superiority of the proposed DCC. Code is available at: https://github.com/htyao89/dual-cluster-contrastive/
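A sketch of the dual-memory objective with a consistency term; KL divergence between the two memories' output distributions is an assumed instantiation of the consistency constraint:

```python
import torch
import torch.nn.functional as F

def dual_cluster_contrastive(query, ind_mem, cen_mem, label, temperature=0.05):
    """One contrastive term per memory bank (individual-updated and
    centroid-updated) plus a consistency term aligning their distributions."""
    query = F.normalize(query, dim=0)
    log_i = (ind_mem @ query / temperature).unsqueeze(0)
    log_c = (cen_mem @ query / temperature).unsqueeze(0)
    target = torch.tensor([label], device=query.device)
    contrast = F.cross_entropy(log_i, target) + F.cross_entropy(log_c, target)
    consist = F.kl_div(F.log_softmax(log_i, dim=1),
                       F.softmax(log_c, dim=1), reduction="batchmean")
    return contrast + consist
```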
Domain generalizable person re-identification (ReID) has attracted growing attention in the recent computer vision community. In this work, we construct a structural causal model among identity labels, identity-specific factors (clothing/shoe color, etc.) and domain-specific factors (background, viewpoint, etc.). Based on the causal analysis, we propose a novel domain-invariant representation learning for generalizable person re-identification (DIR-ReID) framework. Specifically, we first propose to disentangle the identity-specific and domain-specific feature spaces, based on which we propose an effective algorithmic implementation of backdoor adjustment, which is essentially a causal intervention on the SCM. Extensive experiments have been conducted, showing that DIR-ReID outperforms state-of-the-art methods on large-scale domain generalization ReID benchmarks.
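The backdoor adjustment referred to above has the standard form below, where D denotes the domain-specific factor being stratified over; the exact variables used by DIR-ReID may differ:

```latex
% Backdoor adjustment: intervene on the identity representation X while
% stratifying over the domain-specific factor D.
P\big(Y \mid \mathrm{do}(X)\big) = \sum_{d} P\big(Y \mid X,\, D = d\big)\, P(D = d)
```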
In recent years, with the growing demand for public safety and the rapid development of intelligent surveillance networks, person re-identification (Re-ID) has become one of the hot research topics in the computer vision field. The main research goal of person Re-ID is to retrieve persons with the same identity from different cameras. However, traditional person Re-ID methods require manual labeling of person targets, which consumes substantial labor cost. With the widespread application of deep neural networks, many deep-learning-based person Re-ID methods have emerged. This paper therefore aims to help researchers understand the latest research results and future trends in the field. First, we summarize several recently published person Re-ID surveys and complement them with the latest research methods, systematically classifying deep-learning-based person Re-ID methods. Second, we propose a multi-dimensional taxonomy that classifies current deep-learning-based person Re-ID methods into four categories according to metric and representation learning: deep metric learning, local feature learning, generative adversarial learning, and sequence feature learning. Furthermore, we subdivide the above four categories according to their methodologies and motivations, and discuss the advantages and limitations of representative subcategories. Finally, we discuss some challenges and possible research directions for person Re-ID.
Automatic font generation without human experts is a practical and significant problem, especially for languages that consist of a large number of characters. Existing methods for font generation are often based on supervised learning. They require a large number of paired data, which are labor-intensive and expensive to collect. In contrast, common unsupervised image-to-image translation methods are not applicable to font generation, as they often define style as the set of textures and colors. In this work, we propose a robust deformable generative network for unsupervised font generation (abbreviated as DGFont++). We introduce a feature deformation skip connection (FDSC) to learn local patterns and geometric transformations between fonts. The FDSC predicts pairs of displacement maps and employs the predicted maps to apply deformable convolution to the low-level content feature maps. The outputs of FDSC are fed into a mixer to generate the final results. Moreover, we introduce contrastive self-supervised learning to learn a robust style representation for fonts by understanding the similarities and dissimilarities of fonts. To distinguish different styles, we train our model with a multi-task discriminator, which ensures that each style can be discriminated independently. In addition to the adversarial loss, two reconstruction losses are adopted to constrain the domain-invariant characteristics between generated images and content images. Taking advantage of FDSC and the adopted loss functions, our model is able to maintain spatial information and generate high-quality character images in an unsupervised manner. Experiments demonstrate that our model is able to generate character images of higher quality than state-of-the-art methods.
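A minimal sketch of a feature deformation skip connection built from torchvision's deformable convolution; channel sizes and the offset-predictor design are assumptions:

```python
import torch.nn as nn
from torchvision.ops import DeformConv2d

class FDSCSketch(nn.Module):
    """Predict displacement maps from the content feature, then apply a
    deformable convolution to the low-level content feature map."""

    def __init__(self, channels=64, kernel_size=3):
        super().__init__()
        # 2 offsets (dx, dy) per kernel sampling location.
        self.offset_net = nn.Conv2d(channels, 2 * kernel_size * kernel_size,
                                    kernel_size, padding=kernel_size // 2)
        self.deform = DeformConv2d(channels, channels, kernel_size,
                                   padding=kernel_size // 2)

    def forward(self, content_feat):
        offsets = self.offset_net(content_feat)        # predicted displacement maps
        return self.deform(content_feat, offsets)      # deformed feature to the mixer
```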
Person search is a challenging task that aims at joint pedestrian detection and person re-identification (ReID). Previous works have made significant progress under both fully supervised and weakly supervised settings. However, existing methods ignore the generalization ability of person search models. In this paper, we take a further step and present domain adaptive person search (DAPS), which aims to generalize a model from a labeled source domain to an unlabeled target domain. Two major challenges arise under this new setting: one is how to simultaneously solve the domain misalignment issue for both the detection and re-ID tasks, and the other is how to train the ReID subtask on the target domain without reliable detection results. To address these challenges, we propose a strong baseline framework with two dedicated designs. 1) We design a domain alignment module, including image-level and task-sensitive instance-level alignments, to minimize the domain discrepancy. 2) We take full advantage of the unlabeled data with a dynamic clustering strategy, and use pseudo bounding boxes to support ReID and detection training on the target domain. With the above designs, our framework achieves 34.7% mAP and 80.6% top-1 on the PRW dataset, surpassing the direct-transfer baseline by a large margin. Surprisingly, the performance of our unsupervised DAPS model even surpasses some fully and weakly supervised methods. The code is available at https://github.com/caposerenity/daps.
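Image-level domain alignment of this kind is commonly realized with a gradient reversal layer; whether DAPS uses exactly this mechanism is an assumption:

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; reversed, scaled gradient in the backward
    pass, so the backbone learns domain-confusable features while the domain
    classifier learns to separate source from target."""

    @staticmethod
    def forward(ctx, x, lamb):
        ctx.lamb = lamb
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad):
        return -ctx.lamb * grad, None

def domain_logits(feats, domain_head: nn.Module, lamb=1.0):
    return domain_head(GradReverse.apply(feats, lamb))
```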
Recently, contrastive learning has shown significant progress in learning visual representations from unlabeled data. The core idea is to train a backbone to be invariant to different augmentations of an instance. While most methods only maximize the feature similarity between two augmented views of the data, we further generate more challenging training samples and force the model to keep predicting discriminative representations on these hard samples. In this paper, we propose MixSiam, a mixture-based approach built upon the traditional Siamese network. On the one hand, we input two augmented images of an instance to the backbone and obtain a discriminative representation by performing an element-wise maximum over the two features. On the other hand, we take a mixture of these augmented images as input and expect the model's prediction to be close to the discriminative representation. In this way, the model can access more variant data samples of an instance and keeps predicting an invariant discriminative representation for them. Thus the learned model is more robust compared with previous contrastive learning methods. Extensive experiments on large-scale datasets show that MixSiam steadily improves the baseline and achieves competitive results with state-of-the-art methods. Our code will be released soon.
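A sketch of the two MixSiam ingredients, the element-wise maximum of the two views' features as a target and a mixed image as the hard input; mixup-style mixing and a cosine objective are assumptions:

```python
import torch
import torch.nn.functional as F

def mixsiam_loss(backbone, x1, x2, beta=1.0):
    """Element-wise maximum of the two views' features serves as the
    discriminative target; a mixed image is pushed toward it."""
    with torch.no_grad():
        target = torch.maximum(backbone(x1), backbone(x2))   # discriminative rep.
    lam = torch.distributions.Beta(beta, beta).sample().item()
    x_mix = lam * x1 + (1 - lam) * x2                        # harder training sample
    pred = backbone(x_mix)
    return -F.cosine_similarity(pred, target, dim=1).mean()
```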
Image and video synthesis has become a booming topic in the computer vision and machine learning communities along with the development of deep generative models, due to its great academic and application value. Many researchers have been devoted to synthesizing high-fidelity human images, as one of the most commonly seen object categories in daily life, where a large number of studies have been performed based on various deep generative models, task settings and applications. Thus, it is necessary to give a comprehensive overview of these methods for human image generation. In this paper, we divide human image generation techniques into three paradigms, i.e., data-driven methods, knowledge-guided methods and hybrid methods. For each route, the most representative models and the corresponding variants are presented, where the advantages and characteristics of different methods are summarized in terms of model architectures and input/output requirements. Besides, the main public human image datasets and evaluation metrics in the literature are also summarized. Furthermore, due to the wide application potential, two typical downstream usages of synthesized human images are covered, i.e., data augmentation for person recognition tasks and virtual try-on for fashion customers. Finally, we discuss the challenges and potential directions of human image generation to shed light on future research.