This paper explores a simple and efficient baseline for person re-identification (ReID). Person ReID with deep neural networks has made progress and achieved high performance in recent years. However, many state-of-the-art methods design complex network structures and concatenate multi-branch features. In the literature, some effective training tricks appear only briefly in a few papers or source codes. This paper collects and evaluates these effective training tricks for person ReID. By combining these tricks, the model achieves 94.5% rank-1 accuracy and 85.9% mAP on Market1501 using only global features. Our code and models are available at https://github.com/michuanhaohao/reid-strong-baseline (* Equal contributions. This work was partially done when Hao Luo and Xingyu Liao were interns at Megvii Inc.)
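As a reference for the numbers quoted above, the sketch below shows how rank-1 accuracy and mAP are typically computed from a query-gallery distance matrix. It is a minimal illustration, assuming every query has at least one gallery match; the camera-ID filtering used by the standard Market1501 protocol is omitted, and the function name is hypothetical.

```python
import numpy as np

def rank1_and_map(dist, q_ids, g_ids):
    """Sketch: rank-1 accuracy and mAP from a (num_query, num_gallery)
    distance matrix. Assumes each query has at least one gallery match;
    standard camera-ID filtering is omitted for brevity."""
    rank1_hits, aps = [], []
    for i in range(dist.shape[0]):
        order = np.argsort(dist[i])                     # gallery sorted by distance
        matches = (g_ids[order] == q_ids[i]).astype(np.float32)
        rank1_hits.append(matches[0])                   # 1 if top match is correct
        hits = np.cumsum(matches)                       # average precision
        precision = hits / (np.arange(len(matches)) + 1)
        aps.append((precision * matches).sum() / matches.sum())
    return float(np.mean(rank1_hits)), float(np.mean(aps))
```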
Person re-identification (Re-ID) aims to identify the same person across non-overlapping cameras, and plays an important role in visual surveillance applications and computer vision research. Since annotating identities for unlabeled data is highly expensive, fitting an appearance-based representation extractor with the limited collected training data is crucial for person Re-ID. In this work, we propose a stronger baseline for person Re-ID: an enhanced version of the currently prevailing method (the strong baseline) with minor modifications but faster convergence and higher recognition performance. With the help of the stronger baseline, we obtained third place (0.94 in mAP) in the 2021 VIPriors Re-Identification Challenge, without ImageNet-based pre-trained parameter initialization or the assistance of any extra supplementary datasets.
Softmax-based loss functions and their variants (e.g., CosFace, SphereFace, and ArcFace) significantly improve face recognition performance in wild, unconstrained scenes. A common practice of these algorithms is to optimize the multiplication between the embedding features and a linear transformation matrix. However, in most cases the dimension of the embedding feature is set based on traditional design experience, and there is little research on using the feature itself to improve performance when the dimension is fixed. To address this challenge, this paper presents a softmax approximation method called SubFace, which employs subspace features to promote face recognition performance. Specifically, we dynamically select non-overlapping subspace features in each batch during training, and then use the subspace features to approximate the full features in the softmax-based loss, so that the discriminability of the deep model can be significantly enhanced for face recognition. Comprehensive experiments on benchmark datasets demonstrate that our method can significantly improve the performance of vanilla CNN baselines, which strongly proves the effectiveness of the subspace strategy with margin-based losses.
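The following is a minimal sketch of the subspace-softmax idea as described above, under the assumption that the "subspace features" are a randomly selected subset of embedding dimensions used in place of the full feature inside a cosine-softmax classifier for that batch. The class name, selection rule, and scale value are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SubspaceSoftmax(nn.Module):
    """Sketch: approximate the softmax over full features by a per-batch
    randomly selected subset (sub-space) of the embedding dimensions."""
    def __init__(self, feat_dim=512, num_classes=1000, sub_dim=128, scale=30.0):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(num_classes, feat_dim) * 0.01)
        self.sub_dim, self.scale = sub_dim, scale

    def forward(self, feats, labels):
        # Dynamically pick a subset of dimensions for this batch (assumed scheme).
        idx = torch.randperm(feats.size(1), device=feats.device)[: self.sub_dim]
        sub_feats = F.normalize(feats[:, idx], dim=1)
        sub_weight = F.normalize(self.weight[:, idx], dim=1)
        logits = sub_feats @ sub_weight.t()          # cosine logits on the sub-space
        return F.cross_entropy(self.scale * logits, labels)
```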
Recently, unsupervised person re-identification (Re-ID) has attracted increasing attention due to its open-world scenario setting, where only limited annotated data are available. Existing supervised methods often generalize poorly to unseen domains, while unsupervised methods, most of which lack multi-granularity information, are prone to confirmation bias. In this paper, we aim to find better feature representations for the unseen target domain from two aspects: 1) unsupervised domain adaptation on the labeled source domain and 2) mining potential similarities on the unlabeled target domain. In addition, a collaborative pseudo-labeling strategy is proposed to alleviate the influence of confirmation bias. First, a generative adversarial network is used to transfer images from the source domain to the target domain, and person identity and identity mapping losses are introduced to improve the quality of the generated images. Second, we propose a novel Collaborative Multi-feature Clustering framework (CMFC) to learn the internal data structure of the target domain, consisting of a global feature branch and a partial feature branch. The global feature branch (GB) performs unsupervised clustering on the global features of person images, while the partial feature branch (PB) mines similarities within different body regions. Finally, extensive experiments on two benchmark datasets show the competitive performance of our method under the unsupervised person Re-ID setting.
We study the backward-compatibility problem of person re-identification (Re-ID), which aims to constrain the features of an updated new model to be comparable with the existing features extracted by the old model in the gallery. Most existing works adopt distillation-based methods, which focus on pushing new features to imitate the old ones. However, distillation-based methods are intrinsically sub-optimal, since they force the new feature space to imitate the old feature space. To address this issue, we propose Ranking-based Backward-Compatible Learning (RBCL), which directly optimizes the ranking metric between new features and old features. Unlike previous methods, RBCL only pushes the new features to find the best-ranking positions in the old feature space instead of strictly aligning with it, which is consistent with the ultimate goal of backward retrieval. However, the sharp sigmoid function used to make the ranking metric differentiable also causes a vanishing-gradient problem, which hinders the refinement of ranking in the later training epochs. To address this issue, we propose Dynamic Gradient Reactivation (DGR), which reactivates the suppressed gradients by adding a dynamically computed constant in the forward step. To further help target the best-ranking positions, we include Neighbor Context Agents (NCAs) to approximate the entire old feature space during training. Unlike previous works, which are only tested on in-domain settings, we make the first attempt to introduce cross-domain settings (including both supervised and unsupervised), which are more meaningful and difficult. Experimental results on all five settings show that the proposed RBCL outperforms previous state-of-the-art methods by large margins.
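The abstract above mentions making the ranking metric differentiable with a sharp sigmoid. The sketch below illustrates that generic ingredient only, assuming cosine similarities between a new-model query feature and old-model gallery features; RBCL's exact loss, the DGR correction, and the NCAs are not reproduced, and the function name and temperature are assumptions.

```python
import torch
import torch.nn.functional as F

def sigmoid_ranking_loss(new_query, old_gallery, gallery_labels, query_label, tau=0.01):
    """Sketch: differentiable ranking objective between new and old features.
    For each positive gallery item, sigmoid((s_neg - s_pos) / tau) approximates
    the 0/1 indicator that a negative is ranked above it; a sharp tau makes the
    approximation tight but can saturate gradients (the issue DGR addresses)."""
    sims = F.normalize(new_query, dim=0) @ F.normalize(old_gallery, dim=1).t()  # (G,)
    pos = sims[gallery_labels == query_label]                                   # (P,)
    neg = sims[gallery_labels != query_label]                                   # (N,)
    # Soft count of negatives ranked above each positive.
    soft_rank = torch.sigmoid((neg.unsqueeze(0) - pos.unsqueeze(1)) / tau).sum(dim=1)
    return soft_rank.mean()
```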
Person re-identification (ReID) aims to retrieve a person across images captured by different cameras. For deep-learning-based ReID methods, it has been shown that using local features together with the global feature of a person image helps provide robust feature representations for person retrieval. Human pose information gives the locations of the human skeleton, which can effectively guide the network to pay closer attention to these key regions and can also help reduce noise and distraction from the background or occlusion. However, the methods proposed in previous pose-related works may not fully exploit the benefits of pose information and do not consider the different contributions of different local features. In this paper, we propose a pose-guided graph attention network, a multi-branch architecture consisting of one branch for global features, one branch for medium-grained part features, and one branch for fine-grained key-point features. We use a pre-trained pose estimator to generate key-point heatmaps for local feature learning and carefully design graph convolution layers to re-weight the contributions of the extracted local features by modeling their similarity relations. Experimental results demonstrate the effectiveness of our approach for discriminative feature learning, and we show that our model achieves state-of-the-art performance on several mainstream evaluation datasets. We also conduct extensive ablation studies and design different types of comparison experiments, including on holistic datasets, partial datasets, occluded datasets, and cross-domain tests, to demonstrate its effectiveness and robustness.
Learning modality-invariant features is central to the problem of visible-thermal cross-modal person re-identification (VT-ReID), where query and gallery images come from different modalities. Existing works implicitly align the modalities in pixel and feature space by using adversarial learning or carefully designed feature-extraction modules. We propose a simple but effective framework, MMD-ReID, that reduces the modality gap through an explicit discrepancy-reduction constraint. MMD-ReID takes inspiration from Maximum Mean Discrepancy (MMD), a widely used statistical tool for determining the distance between two distributions. MMD-ReID uses a novel margin-based formulation to match class-conditional feature distributions of visible and thermal samples, minimizing intra-class distances while maintaining feature discriminability. MMD-ReID is a simple framework in terms of architecture and loss formulation. We conduct extensive experiments to demonstrate, both qualitatively and quantitatively, the effectiveness of MMD-ReID in aligning the marginal and class-conditional distributions, thereby learning modality-independent and identity-consistent features. The proposed framework significantly outperforms state-of-the-art methods on the SYSU-MM01 and RegDB datasets. Code will be released at https://github.com/vcl-iisc/mmd-reid
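A rough sketch of the core constraint described above is given below, under stated assumptions: an RBF-kernel MMD between visible and thermal features of the same identity, hinged at a margin, with labels shared across the two modality batches. The kernel choice, the margin handling, and all names are assumptions rather than the exact MMD-ReID formulation.

```python
import torch

def rbf_mmd2(x, y, sigma=1.0):
    """Squared MMD between two feature sets with an RBF kernel (sketch)."""
    def k(a, b):
        return torch.exp(-torch.cdist(a, b).pow(2) / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

def margin_mmd_loss(vis_feats, thr_feats, labels, margin=0.1):
    """Sketch: hinge the class-conditional MMD between visible and thermal
    features of each identity at a margin. Assumes paired batches where
    vis_feats and thr_feats share the same label vector."""
    loss = vis_feats.new_zeros(())
    for c in labels.unique():
        mask = labels == c
        mmd2 = rbf_mmd2(vis_feats[mask], thr_feats[mask])
        loss = loss + torch.clamp(mmd2 - margin, min=0.0)
    return loss / labels.unique().numel()
```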
Recent years have witnessed breakthroughs in face recognition with deep convolutional neural networks. Dozens of papers in the field of FR are published every year. Some of them have been applied in industry and play an important role in daily life, such as device unlocking, mobile payment, and so on. This paper provides an introduction to face recognition, including its history, pipeline, algorithms based on conventional hand-crafted features or deep learning, mainstream training and evaluation datasets, and related applications. We have analyzed and compared as many state-of-the-art works as possible, and have also carefully designed a set of experiments to study the effect of backbone size and data distribution. This survey serves as material for the tutorial The Practical Face Recognition Technology in the Industrial World at FG2023.
Recent research shows that both explicit deep feature matching and large-scale, diverse training data can significantly improve the generalization of person re-identification. However, the efficiency of learning deep matchers on large-scale data has not been adequately studied. Although learning with classification parameters or a class memory is a popular approach, it incurs large memory and computational costs. In contrast, pairwise deep metric learning within mini-batches would be a better choice. However, the most popular random sampling method, the well-known PK sampler, is neither informative nor efficient for deep metric learning. Although online hard example mining improves learning efficiency to a certain extent, the mini-batches sampled randomly beforehand remain limited. This motivates us to explore hard example mining earlier, at the data sampling stage. To this end, in this paper we propose an efficient mini-batch sampling method called Graph Sampling (GS) for large-scale deep metric learning. The basic idea is to build a nearest-neighbor relationship graph over all classes at the beginning of each epoch. Then, each mini-batch is composed of a randomly selected class and its nearest neighboring classes, so as to provide informative and challenging examples for learning. Together with an adapted competitive baseline, we improve the previous state of the art in generalizable person re-identification significantly, by up to 24% in Rank-1 and 13.8% in mAP. Besides, the proposed method also outperforms the competitive baseline in Rank-1 and mAP (by up to 5.3% in mAP). Meanwhile, the training time is significantly reduced, by up to five times, e.g., from 12.2 hours to 2.3 hours when training on a large dataset with 8,000 identities. Code is available at https://github.com/shengcailiao/qaconv.
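Below is a compact sketch of the sampling idea described above, assuming class prototypes (e.g., mean features from the previous epoch) are available for building the per-epoch graph; the neighbor count, instances per class, and helper names are illustrative assumptions rather than the paper's implementation.

```python
import torch

def build_class_graph(class_protos, k=7):
    """Sketch: at the start of each epoch, find the k nearest-neighbor classes
    of every class from their prototype features."""
    protos = torch.nn.functional.normalize(class_protos, dim=1)
    sims = protos @ protos.t()
    sims.fill_diagonal_(-1.0)               # exclude the class itself
    return sims.topk(k, dim=1).indices      # (num_classes, k)

def graph_sampled_batch(neighbors, samples_per_class, num_instances=4):
    """Sketch: one mini-batch = a random anchor class plus its nearest
    neighbor classes, with a few instances drawn from each class."""
    anchor = torch.randint(neighbors.size(0), (1,)).item()
    classes = [anchor] + neighbors[anchor].tolist()
    batch = []
    for c in classes:
        idx = torch.randperm(len(samples_per_class[c]))[:num_instances]
        batch += [samples_per_class[c][i] for i in idx.tolist()]
    return batch                             # dataset indices for this mini-batch
```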
Recently, a popular line of research in face recognition has been adopting margins in the well-established softmax loss function to maximize class separability. In this paper, we first introduce an Additive Angular Margin Loss (ArcFace), which not only has a clear geometric interpretation but also significantly enhances the discriminative power. Since ArcFace is susceptible to massive label noise, we further propose sub-center ArcFace, in which each class contains K sub-centers and training samples only need to be close to any one of the K positive sub-centers. Sub-center ArcFace encourages one dominant sub-class that contains the majority of clean faces and non-dominant sub-classes that include hard or noisy faces. Based on this self-propelled isolation, we boost performance by automatically purifying raw web faces under massive real-world noise. Besides discriminative feature embedding, we also explore the inverse problem: mapping feature vectors to face images. Without training any additional generator or discriminator, the pre-trained ArcFace model can generate identity-preserved face images for subjects both inside and outside the training data, only by using the network gradient and Batch Normalization (BN) priors. Extensive experiments demonstrate that ArcFace can enhance discriminative feature embedding as well as strengthen generative face synthesis.
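For reference, a minimal PyTorch sketch of the additive angular margin logit described above: the target-class logit becomes s·cos(θ + m) while other classes keep s·cos(θ). Numerical-stability tricks found in public implementations are omitted, and the default s and m values are the commonly used ones rather than a claim about this paper's setup.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ArcFaceHead(nn.Module):
    """Sketch of the Additive Angular Margin (ArcFace) classification head."""
    def __init__(self, feat_dim, num_classes, s=64.0, m=0.5):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(num_classes, feat_dim) * 0.01)
        self.s, self.m = s, m

    def forward(self, feats, labels):
        cos = F.linear(F.normalize(feats), F.normalize(self.weight))  # (B, C) cosines
        theta = torch.acos(cos.clamp(-1 + 1e-7, 1 - 1e-7))
        target = F.one_hot(labels, cos.size(1)).bool()
        logits = torch.where(target, torch.cos(theta + self.m), cos)  # add margin on target class
        return F.cross_entropy(self.s * logits, labels)
```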
Person re-identification (Re-ID) has achieved great success in supervised scenarios. However, it is difficult to directly transfer supervised models to arbitrary unseen domains, because the model tends to overfit the seen source domains. In this paper, we aim to tackle the generalizable multi-source person Re-ID task (i.e., multiple source domains are available for training while the testing domain is unseen during training) from a data-augmentation perspective, and propose a novel method called MixNorm, which consists of Domain-aware Mix Normalization (DMN) and Domain-aware Center Regularization (DCR). Different from conventional data augmentation, the proposed Domain-aware Mix Normalization enhances the diversity of features during training from the normalization view of the neural network, which can effectively alleviate the model's overfitting to the source domains and thus improve its generalization ability in unseen domains. To better learn a domain-invariant model, we further develop Domain-aware Center Regularization to map the diversely generated features into the same space. Extensive experiments on multiple benchmark datasets validate the effectiveness of the proposed method and show that it outperforms state-of-the-art methods. Furthermore, additional analysis also reveals the superiority of the proposed method.
Recently, because supervised person re-identification (ReID) performs poorly on unseen domains, domain-generalization (DG) person ReID has attracted much attention; it aims to learn a domain-insensitive model that can resist the influence of domain bias. In this paper, we first verify through experiments that style factors are an important component of domain bias. Based on this conclusion, we propose a Style Variable and Irrelevant Learning (SVIL) method to eliminate the effect of style factors on the model. Specifically, we design a Style Jitter Module (SJM) in SVIL. The SJM module enriches the style diversity of a specific source domain and reduces the style differences among the various source domains. This leads the model to focus on identity-related information and to be insensitive to style variations. In addition, we organically combine the SJM module with a meta-learning algorithm, maximizing its benefits and further improving the generalization ability of the model. Note that our SJM module is plug-and-play and incurs no inference cost. Extensive experiments confirm the effectiveness of our SVIL, and our method outperforms state-of-the-art methods on DG-ReID benchmarks.
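The abstract does not specify how the SJM perturbs style. One widely used way to jitter style at the feature level is to mix instance-wise means and standard deviations across samples, as in MixStyle; the sketch below shows that generic, swapped-in mechanism only, not the paper's SJM, and all hyper-parameters are assumptions.

```python
import torch
import torch.nn as nn

class StyleJitter(nn.Module):
    """Generic feature-level style jitter (MixStyle-like), shown only to
    illustrate perturbing style statistics; NOT the paper's SJM."""
    def __init__(self, p=0.5, alpha=0.1):
        super().__init__()
        self.p = p
        self.beta = torch.distributions.Beta(alpha, alpha)

    def forward(self, x):                       # x: (B, C, H, W) feature map
        if not self.training or torch.rand(1).item() > self.p:
            return x
        mu = x.mean(dim=(2, 3), keepdim=True)   # per-instance style statistics
        sig = x.std(dim=(2, 3), keepdim=True) + 1e-6
        x_norm = (x - mu) / sig
        perm = torch.randperm(x.size(0), device=x.device)
        lam = self.beta.sample((x.size(0), 1, 1, 1)).to(x.device)
        mu_mix = lam * mu + (1 - lam) * mu[perm]    # mix statistics across samples
        sig_mix = lam * sig + (1 - lam) * sig[perm]
        return x_norm * sig_mix + mu_mix
```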
Person re-identification is a challenging task because of the high intra-class variance induced by unrestricted nuisance factors of variation such as pose, illumination, viewpoint, background, and sensor noise. Recent approaches postulate that powerful architectures have the capacity to learn feature representations invariant to nuisance factors by training them with losses that minimize intra-class variance and maximize inter-class separation, without modeling nuisance factors explicitly. The dominant approaches use either a discriminative loss with a margin, like the softmax loss with an additive angular margin, or a metric learning loss, like the triplet loss with batch-hard mining of triplets. Since the softmax imposes feature normalization, it limits the gradient flow supervising the feature embedding. We address this by joining the losses and leveraging the triplet loss as a proxy for the missing gradients. We further improve invariance to nuisance factors by adding the discriminative task of predicting attributes. Our extensive evaluation highlights that when only a holistic representation is learned, we consistently outperform the state of the art on the three most challenging datasets. Such representations are easier to deploy in practical systems. Finally, we found that joining the losses removes the requirement for a margin in the softmax loss while increasing performance.
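A minimal sketch of the joining idea described above: an ID (softmax) loss on classifier logits combined with a batch-hard triplet loss on the embeddings, so the metric loss supplies gradients that the normalized softmax head does not. The function name, margin value, and loss weighting are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def joint_id_triplet_loss(embeddings, logits, labels, margin=0.3):
    """Sketch: softmax (ID) loss + batch-hard triplet loss on one embedding."""
    id_loss = F.cross_entropy(logits, labels)

    dist = torch.cdist(embeddings, embeddings)               # (B, B) pairwise distances
    same = labels.unsqueeze(0) == labels.unsqueeze(1)
    hardest_pos = (dist * same.float()).max(dim=1).values    # farthest positive per anchor
    hardest_neg = dist.masked_fill(same, float('inf')).min(dim=1).values  # closest negative
    triplet = F.relu(hardest_pos - hardest_neg + margin).mean()
    return id_loss + triplet
```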
Deep metric learning aims to learn an embedding space where semantically similar samples are close together and dissimilar ones are pushed apart. To explore harder and more informative training signals for augmentation and generalization, recent methods focus on generating synthetic samples to boost metric learning losses. However, these methods use only deterministic and class-independent generation (e.g., simple linear interpolation), which can cover only a limited part of the distribution space around the original samples. They overlook the wide variation of characteristics across classes and cannot model abundant intra-class variations for generation. Therefore, the generated samples not only lack rich semantics within their class, but may also act as noisy signals that disturb training. In this paper, we propose a novel intra-class adaptive augmentation (IAA) framework for deep metric learning. We reasonably estimate intra-class variations for every class and generate adaptive synthetic samples to support hard-sample mining and boost metric learning losses. Further, since most datasets have only a few samples per class, we propose a neighbor correction to revise inaccurate estimations, based on our observation that similar classes generally have similar variation distributions. Extensive experiments on five benchmarks show our method significantly improves and outperforms state-of-the-art methods on retrieval performance by 3%-6%. Our code is available at https://github.com/darkpromise98/IAA
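A rough sketch of the estimate-and-generate idea under stated assumptions: per-class embedding variation is estimated from the current batch and synthetic embeddings are drawn from a class-adaptive Gaussian around real ones. The neighbor correction and the paper's exact generation rule are not reproduced, and all names are hypothetical.

```python
import torch

def adaptive_synthetic_embeddings(embeddings, labels, num_synth=2):
    """Sketch: estimate intra-class variation per class and generate synthetic
    embeddings around each real one, scaled by that class's own variation."""
    synth_feats, synth_labels = [], []
    for c in labels.unique():
        feats = embeddings[labels == c]
        if feats.size(0) < 2:
            continue                               # too few samples to estimate variation
        std = feats.std(dim=0, keepdim=True)       # per-dimension intra-class std
        for _ in range(num_synth):
            noise = torch.randn_like(feats) * std  # class-adaptive perturbation
            synth_feats.append(feats + noise)
            synth_labels.append(labels[labels == c])
    return torch.cat(synth_feats), torch.cat(synth_labels)
```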
Deep Metric Learning (DML) learns a non-linear semantic embedding from input data that brings similar pairs together while keeping dissimilar data away from each other. To this end, many different methods have been proposed over the last decade, with promising results in various applications. The success of a DML algorithm greatly depends on its loss function. However, no loss function is perfect, and each deals only with some aspects of an optimal similarity embedding. Besides, the generalizability of DML to unseen categories during the test stage is an important matter that is not considered by existing loss functions. To address these challenges, we propose novel approaches to combine different losses built on top of a shared deep feature extractor. The proposed ensemble of losses enforces the deep model to extract features that are consistent with all losses. Since the selected losses are diverse and each emphasizes different aspects of an optimal semantic embedding, our effective combining methods yield a considerable improvement over any individual loss and generalize well to unseen categories. There is no limitation on the choice of loss functions, and our methods can work with any set of existing ones. Besides, they can optimize each loss function as well as its weight in an end-to-end paradigm with no need to adjust any hyper-parameters. We evaluate our methods on popular datasets from the machine vision domain in conventional Zero-Shot Learning (ZSL) settings. The results are very encouraging and show that our methods outperform all baseline losses by a large margin on all datasets.
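One well-known way to combine several losses on a shared extractor with end-to-end learned weights is homoscedastic-uncertainty weighting (Kendall et al.); the sketch below shows that stand-in scheme purely for illustration, and is not necessarily the weighting used by the paper above.

```python
import torch
import torch.nn as nn

class UncertaintyWeightedLosses(nn.Module):
    """Sketch: weight each loss by a learned log-variance term so the relative
    weights are optimized end-to-end along with the shared feature extractor."""
    def __init__(self, num_losses):
        super().__init__()
        self.log_vars = nn.Parameter(torch.zeros(num_losses))

    def forward(self, losses):                  # losses: sequence of scalar tensors
        total = 0.0
        for loss, log_var in zip(losses, self.log_vars):
            # exp(-log_var) scales the loss; +log_var penalizes large variances.
            total = total + torch.exp(-log_var) * loss + log_var
        return total
```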
One of the challenges in computer vision is that models need to adapt to color deviations in changing environments. Minimizing the adverse impact of color deviation on predictions is therefore one of the main goals of vision tasks. Current solutions focus on using generative models to augment training data and enhance invariance to input variations. However, such methods often introduce new noise, which limits the gain from the generated data. To this end, this paper proposes a strategy for eliminating the effect of color deviation, called Random Color Dropout (RCD). Our hypothesis is that if there is a color deviation between the query image and the gallery images, the retrieval results for some examples will be better after the color information is ignored. Specifically, this strategy balances the weights between color features and color-irrelevant features in the neural network by dropping out part of the color information in the training data, thereby overcoming the effect of color deviation. The proposed RCD can be combined with various existing ReID models without changing the learning strategy, and can also be applied to other computer vision fields, such as object detection. Experiments on several ReID baselines and three common large-scale datasets, namely Market1501, DukeMTMC, and MSMT17, verify the effectiveness of the method. Cross-domain experiments show that the strategy significantly reduces the domain gap. Furthermore, to understand the working mechanism of RCD, we analyze the effectiveness of the strategy from a classification perspective, which suggests that in visual tasks with strong domain variations it is better to exploit much of, rather than all of, the color information.
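The abstract states that part of the color information is dropped during training. One simple, hedged realization is to randomly replace an image with its grayscale version with some probability; torchvision already ships RandomGrayscale, which implements exactly this generic transform. The probability and input size below are assumptions, and this is an illustration of the idea rather than the paper's exact RCD operation.

```python
import torchvision.transforms as T

# Sketch: drop color information from a fraction of training samples by
# randomly converting them to 3-channel grayscale; p = 0.3 is an assumed value.
train_transform = T.Compose([
    T.Resize((256, 128)),            # typical ReID input size
    T.RandomHorizontalFlip(),
    T.RandomGrayscale(p=0.3),        # color dropout for roughly 30% of samples
    T.ToTensor(),
])
```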
Learning representative, robust, and discriminative information from images is essential for effective person re-identification (Re-ID). In this paper, we propose a compound approach for end-to-end discriminative deep feature learning for person Re-ID based on both body and hand images. We carefully design the Local-Aware Global Attention Network (LAGA-Net), a multi-branch deep network architecture consisting of one branch for spatial attention, one branch for channel attention, one global branch, and one local branch. The attention branches focus on the relevant features of the image while suppressing the irrelevant background. To overcome the weakness of attention mechanisms with respect to pixel shuffling, we integrate relative positional encoding into the spatial attention module to capture the spatial positions of pixels. The global branch is intended to preserve global context and structural information. For the local branch, which is intended to capture fine-grained information, we perform a uniform partition to generate horizontal stripes on the conv-layer. We retrieve the parts by performing a soft partition, without explicitly partitioning the images or requiring external cues such as pose estimation. A set of ablation studies shows that each component contributes to improving the performance of LAGA-Net. Extensive evaluations on four popular person Re-ID benchmarks and two publicly available hand datasets demonstrate that our proposed method consistently outperforms existing state-of-the-art methods.
Text-based person search is a challenging task that aims to search an image gallery for pedestrian images with the same identity as a query text description. In recent years, text-based person search has made good progress, and state-of-the-art methods achieve superior performance by learning local fine-grained correspondences between images and texts. However, existing methods explicitly extract image parts and text phrases with hand-crafted splits or external tools and then perform complex cross-modal local matching. Moreover, existing methods seldom consider the information-inequality problem between modalities caused by image-specific information. In this paper, we propose an efficient joint Information and Semantic Alignment Network (ISANet) for text-based person search. Specifically, we first design an image-specific information suppression module, which suppresses the image background and environmental factors through relation-guided localization and channel attention filtering, respectively. This design can effectively alleviate the information-inequality problem and achieve information alignment between images and texts. Second, we propose an implicit local alignment module that adaptively aggregates image and text features into a set of modality-shared semantic topic centers, implicitly learning local fine-grained correspondences between images and texts without additional supervision or complex cross-modal interaction. In addition, global alignment is introduced as a complement to the local perspective. Extensive experiments on multiple databases demonstrate the effectiveness and superiority of the proposed ISANet.
The combination of global and partial features has been an essential solution for improving discriminative performance in person re-identification (Re-ID) tasks. Previous part-based methods mainly focus on locating regions with specific pre-defined semantics to learn local representations, which increases learning difficulty but is not efficient or robust in scenarios with large variances. In this paper, we propose an end-to-end feature learning strategy integrating discriminative information of various granularities. We carefully design the Multiple Granularity Network (MGN), a multi-branch deep network architecture consisting of one branch for global feature representations and two branches for local feature representations. Instead of learning on semantic regions, we uniformly partition the images into several stripes and vary the number of parts in different local branches to obtain local feature representations of multiple granularities. Comprehensive experiments on the mainstream evaluation datasets, including Market-1501, DukeMTMC-reID, and CUHK03, indicate that our method robustly achieves state-of-the-art performance and outperforms existing approaches by a large margin. For example, on the Market-1501 dataset in single-query mode, we obtain a top result of Rank-1/mAP = 96.6%/94.2% with this method after re-ranking.
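A small sketch of the uniform horizontal partition described above: the final convolutional feature map is split into a configurable number of horizontal stripes, each pooled into its own local representation, alongside a globally pooled feature. The dimensions and branch setup are illustrative assumptions rather than the paper's exact architecture.

```python
import torch
import torch.nn.functional as F

def stripe_features(feat_map, num_stripes):
    """Sketch: uniformly partition a conv feature map (B, C, H, W) into
    horizontal stripes and pool each into a local feature vector."""
    b, c, h, w = feat_map.shape
    stripe_h = h // num_stripes
    locals_ = []
    for i in range(num_stripes):
        stripe = feat_map[:, :, i * stripe_h:(i + 1) * stripe_h, :]
        locals_.append(F.adaptive_max_pool2d(stripe, 1).flatten(1))   # (B, C)
    return locals_

# Example: a global feature plus two local branches with 2 and 3 stripes.
feat = torch.randn(8, 2048, 12, 4)                      # assumed backbone output
global_feat = F.adaptive_max_pool2d(feat, 1).flatten(1) # (8, 2048)
parts_2 = stripe_features(feat, 2)                      # two half-body features
parts_3 = stripe_features(feat, 3)                      # three finer-grained features
```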
Person search unifies person detection and person re-identification (Re-ID) to locate a query person in panoramic gallery images. One major challenge comes from the imbalanced, long-tailed distribution of person identities, which prevents a one-step person search model from learning discriminative person features for the final re-identification. However, how to solve the heavily imbalanced identity distribution for one-step person search has rarely been explored. Techniques designed for long-tailed classification tasks, e.g., image-level re-sampling strategies, are hard to apply effectively to one-step person search, which jointly solves the person detection and Re-ID sub-tasks within a detection-based multi-task framework. To tackle this problem, we propose a Subtask-dominated Transfer Learning (STL) method. The STL method addresses the long-tail problem in the pre-training stage of the dominant Re-ID sub-task and improves one-step person search through transfer learning of the pre-trained model. We further design a Multi-level RoI Fusion Pooling layer to enhance the discriminative ability of person features for one-step person search. Extensive experiments on the CUHK-SYSU and PRW datasets demonstrate the superiority and effectiveness of the proposed method.