In this paper, we are interested in learning a generalizable person re-identification (re-ID) representation from unlabeled videos. Compared with 1) the popular unsupervised re-ID setting, where the training and test sets are typically drawn from the same domain, and 2) the popular domain generalization (DG) re-ID setting, where the training samples are labeled, our novel scenario combines their key challenges: the training samples are unlabeled and collected from various domains that do not align with the test domain. In other words, we aim to learn a representation in an unsupervised manner and directly use the learned representation for re-ID in novel domains. To fulfill this goal, we make two main contributions: first, we propose Cycle Association (CycAs), a scalable self-supervised learning method for re-ID with low training complexity; second, we construct a large-scale unlabeled re-ID dataset named LMP-video, tailored for the proposed method. Specifically, CycAs learns re-ID features by enforcing cycle consistency of instance association between temporally successive video frame pairs, and the training cost is merely linear in the data size, making large-scale training possible. On the other hand, the LMP-video dataset is extremely large, containing 50 million unlabeled person images cropped from over 10K YouTube videos, and is therefore sufficient to serve as fertile soil for self-supervised learning. Trained on LMP-video, we show that CycAs generalizes well to novel domains, sometimes even outperforming supervised domain-generalizable models. Remarkably, CycAs achieves 82.2% Rank-1 on Market-1501 and 49.0% Rank-1 on MSMT17 with zero human annotation, surpassing state-of-the-art supervised DG re-ID methods. Moreover, we also demonstrate the superiority of CycAs under the canonical unsupervised re-ID and the pretrain-and-finetune scenarios.
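To make the cycle-consistency idea concrete, here is a minimal PyTorch sketch, assuming L2-normalized per-person embeddings from two successive frames and soft (softmax) association; the function name, temperature `tau`, and exact loss form are illustrative assumptions, not the paper's implementation:

```python
import torch
import torch.nn.functional as F

def cycle_association_loss(x1, x2, tau=0.1):
    # x1: (N1, D), x2: (N2, D) L2-normalized embeddings of the persons
    # detected in two temporally successive frames (illustrative sketch).
    sim = x1 @ x2.t() / tau                    # pairwise affinity matrix
    a12 = F.softmax(sim, dim=1)                # soft association: frame t -> t+1
    a21 = F.softmax(sim.t(), dim=1)            # soft association: frame t+1 -> t
    cycle = a12 @ a21                          # (N1, N1) round-trip probabilities
    target = torch.arange(x1.size(0))          # each instance should map back to itself
    return F.nll_loss(torch.log(cycle + 1e-8), target)
```

Because the loss is computed per frame pair, the overall training cost grows linearly with the number of sampled pairs, consistent with the linear complexity claimed above.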
Gait depicts the unique and distinctive walking patterns of individuals and has become one of the most promising biometric features for human identification. As a fine-grained recognition task, gait recognition is easily affected by many factors and usually requires a large amount of fully annotated data, which is costly and infeasible to collect. This paper proposes a large-scale self-supervised benchmark for gait recognition with contrastive learning, aiming to learn general gait representations from massive unlabeled walking videos, which offer informative walking priors and diverse real-world variations. Specifically, we collect a large-scale unlabeled gait dataset, GaitLU-1M, consisting of 1.02M walking sequences, and propose a conceptually simple yet empirically powerful baseline model, GaitSSB. Experimentally, we evaluate the pre-trained model on four widely used gait benchmarks (CASIA-B, OU-MVLP, GREW and Gait3D), with and without transfer learning. The unsupervised results are comparable to, or even better than, early model-based and GEI-based methods. After transfer learning, our method outperforms existing methods in most cases. Theoretically, we discuss the critical issues of the gait-specific contrastive framework and present some insights for further study. To the best of our knowledge, GaitLU-1M is the first large-scale unlabeled gait dataset, and GaitSSB is the first method that achieves remarkable unsupervised results on the aforementioned benchmarks. The source code of GaitSSB will be integrated into OpenGait, available at https://github.com/shiqiyu/opengait.
Unsupervised person re-identification (re-ID) has attracted increasing research interest due to its scalability and potential for real-world applications. State-of-the-art unsupervised re-ID methods typically follow a clustering-based strategy, which generates pseudo labels by clustering and maintains a memory to store instance features and represent the cluster centroids for contrastive learning. This approach suffers from two problems. First, the centroid produced by unsupervised learning may not be a perfect prototype; forcing images closer to the centroid over-emphasizes the clustering result, which may accumulate clustering errors over iterations. Second, previous methods use features obtained at different training iterations to represent one centroid, which is inconsistent with the current training samples, since these features are not directly comparable. To this end, we propose an unsupervised re-ID approach with a stochastic learning strategy. Specifically, we adopt a stochastically updated memory, in which a random instance of a cluster is used to update the cluster-level memory for contrastive learning. In this way, the relationships between randomly selected pairs of images are learned, avoiding the training bias caused by unreliable pseudo labels. The stochastic memory is also always up-to-date, maintaining consistency. Besides, to alleviate the problem of camera variance, a unified distance matrix is proposed in the clustering process, in which the distance bias across different camera domains is reduced and the variance of identities is emphasized.
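A minimal sketch of such a stochastically updated cluster memory is given below; the class name, the overwrite-with-one-random-instance rule, and the temperature are assumptions for illustration rather than the authors' exact implementation:

```python
import torch
import torch.nn.functional as F

class RandomClusterMemory:
    # Cluster-level memory where each slot is overwritten by the feature of a
    # randomly chosen instance of that cluster, so the stored features always
    # come from the current model state and remain directly comparable.
    def __init__(self, num_clusters, dim):
        self.bank = F.normalize(torch.randn(num_clusters, dim), dim=1)

    @torch.no_grad()
    def update(self, feats, pseudo_labels):
        # feats: (B, D) current-iteration features; pseudo_labels: (B,) cluster ids
        for c in pseudo_labels.unique():
            members = (pseudo_labels == c).nonzero(as_tuple=True)[0]
            pick = members[torch.randint(len(members), (1,))]   # one random instance
            self.bank[c] = F.normalize(feats[pick].squeeze(0), dim=0)

    def contrastive_loss(self, feats, pseudo_labels, tau=0.05):
        # Pull each instance toward the randomly stored feature of its cluster.
        logits = feats @ self.bank.t() / tau
        return F.cross_entropy(logits, pseudo_labels)
```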
Tracking objects of interest in a video is one of the most popular and widely applied problems in computer vision. However, over the years, a Cambrian explosion of use cases and benchmarks has fragmented the problem into a multitude of different experimental setups. As a consequence, the literature has fragmented too, and novel approaches proposed by the community are now usually specialized to fit only one specific setup. To understand to what extent this specialization is necessary, in this work we present UniTrack, a solution that addresses five different tasks within the same framework. UniTrack consists of a single and task-agnostic appearance model, which can be learned in a supervised or self-supervised fashion, and multiple "heads" that address individual tasks and require no training. We show how most tracking tasks can be solved within this framework, and that the same appearance model can be successfully used to obtain results that are competitive against specialized methods for most of the tasks considered. The framework also allows us to analyze appearance models obtained with the most recent self-supervised methods, thus extending their evaluation and comparison to a larger variety of important problems.
Person re-identification (Re-ID) aims at retrieving a person of interest across multiple non-overlapping cameras. With the advancement of deep neural networks and the increasing demand for intelligent video surveillance, it has gained significantly increased interest in the computer vision community. By dissecting the components involved in developing a person Re-ID system, we categorize it into the closed-world and open-world settings. The widely studied closed-world setting is usually applied under various research-oriented assumptions, and has achieved inspiring success using deep learning techniques on a number of datasets. We first conduct a comprehensive overview with in-depth analysis of closed-world person Re-ID from three different perspectives, including deep feature representation learning, deep metric learning and ranking optimization. With performance saturation under the closed-world setting, the research focus for person Re-ID has recently shifted to the open-world setting, facing more challenging issues. This setting is closer to practical applications under specific scenarios. We summarize the open-world Re-ID in terms of five different aspects. By analyzing the advantages of existing methods, we design a powerful AGW baseline, achieving state-of-the-art or at least comparable performance on twelve datasets for four different Re-ID tasks. Meanwhile, we introduce a new evaluation metric (mINP) for person Re-ID, indicating the cost of finding all the correct matches, which provides an additional criterion to evaluate the Re-ID system for real applications. Finally, some important yet under-investigated open issues are discussed.
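The mINP metric admits a compact formulation: following the definition given in the survey, INP for a query is |G| / R_hard, the number of correct gallery matches divided by the rank at which the hardest (last) correct match is found, and mINP averages INP over all queries. A small NumPy sketch (function name and input format are illustrative):

```python
import numpy as np

def mean_inp(ranked_match_flags):
    # ranked_match_flags: one boolean array per query, marking correct matches
    # in the gallery ranking ordered by decreasing similarity.
    inps = []
    for flags in ranked_match_flags:
        positives = np.flatnonzero(flags)
        if len(positives) == 0:
            continue                      # queries with no true match are skipped
        r_hard = positives[-1] + 1        # 1-based rank of the last correct match
        inps.append(len(positives) / r_hard)
    return float(np.mean(inps))

# A query with 3 true matches at ranks 1, 2 and 10 scores INP = 3/10:
# mean_inp([np.array([1, 1, 0, 0, 0, 0, 0, 0, 0, 1], dtype=bool)])  # -> 0.3
```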
Systems for person re-identification (ReID) can achieve a high accuracy when trained on large fully-labeled image datasets. However, the domain shift typically associated with diverse operational capture conditions (e.g., camera viewpoints and lighting) may translate to a significant decline in performance. This paper focuses on unsupervised domain adaptation (UDA) for video-based ReID - a relevant scenario that is less explored in the literature. In this scenario, the ReID model must adapt to a complex target domain defined by a network of diverse video cameras based on tracklet information. State-of-the-art methods cluster unlabeled target data, yet domain shifts across target cameras (sub-domains) can lead to poor initialization of clustering methods that propagates noise across epochs, thus preventing the ReID model from accurately associating samples of the same identity. In this paper, an UDA method is introduced for video person ReID that leverages knowledge on video tracklets, and on the distribution of frames captured over target cameras, to improve the performance of CNN backbones trained using pseudo-labels. Our method relies on an adversarial approach, where a camera-discriminator network is introduced to extract discriminant camera-independent representations, facilitating the subsequent clustering. In addition, a weighted contrastive loss is proposed to leverage the confidence of clusters, and mitigate the risk of incorrect identity associations. Experimental results obtained on three challenging video-based person ReID datasets - PRID2011, iLIDS-VID, and MARS - indicate that our proposed method can outperform related state-of-the-art methods. Our code is available at: \url{https://github.com/dmekhazni/CAWCL-ReID}
Previous work on action representation learning focused on global representations for short video clips. In contrast, many practical applications, such as video alignment, strongly demand learning dense representations of long videos. In this paper, we introduce a new framework of contrastive action representation learning (CARL) to learn frame-wise action representations in a self-supervised or weakly-supervised manner, especially for long videos. Specifically, we introduce a simple but effective video encoder that considers both spatial and temporal context by combining convolution and transformer. Inspired by the recent massive progress in self-supervised learning, we propose a new sequence contrast loss (SCL) applied to two related views obtained by expanding a series of spatio-temporal data in two versions. One is the self-supervised version, which optimizes the embedding space by minimizing the KL-divergence between the sequence similarity of two augmented views and a prior Gaussian distribution of timestamp distance. The other is the weakly-supervised version, which builds more sample pairs among videos using video-level labels by dynamic time warping (DTW). Experiments on FineGym, PennAction, and Pouring datasets show that our method outperforms the previous state-of-the-art by a large margin on downstream fine-grained action classification, with even faster inference. Surprisingly, although it is not trained on paired videos as in previous works, our self-supervised version also shows outstanding performance in video alignment and fine-grained frame retrieval tasks.
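The self-supervised sequence contrast loss can be sketched as a KL divergence between the predicted frame-similarity distribution and a Gaussian prior over timestamp distance; the normalization, temperature, and prior width below are illustrative assumptions rather than the authors' exact settings:

```python
import torch
import torch.nn.functional as F

def sequence_contrast_loss(z1, z2, t1, t2, tau=0.1, sigma=1.0):
    # z1, z2: (T, D) frame-wise embeddings of two augmented views of one video
    # t1, t2: (T,) timestamps of the frames sampled in each view
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logp = F.log_softmax(z1 @ z2.t() / tau, dim=1)        # predicted similarity distribution
    dist2 = (t1[:, None] - t2[None, :]).float() ** 2
    prior = F.softmax(-dist2 / (2 * sigma ** 2), dim=1)   # Gaussian prior on time distance
    return F.kl_div(logp, prior, reduction="batchmean")
```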
We propose a novel approach to unsupervised learning for video object segmentation (VOS). Unlike previous work, our formulation allows learning dense feature representations directly in a fully convolutional regime. We rely on uniform grid sampling to extract a set of anchors and train our model to disambiguate between them at both the intra- and inter-video levels. However, a naive scheme to train such a model results in a degenerate solution. We propose to prevent this with a simple regularization scheme, accommodating the equivariance property of the segmentation task with respect to similarity transformations. Our training objective admits an efficient implementation and exhibits fast training convergence. On established VOS benchmarks, our approach exceeds the segmentation accuracy of previous work despite using significantly less training data and compute.
Unsupervised person re-identification (U-ReID) has recently reached competitive performance compared with fully-supervised ReID methods based on modern clustering algorithms. However, such clustering-based schemes become computationally prohibitive for large-scale datasets. How to efficiently leverage endless unlabeled data with limited computing resources for better U-ReID remains under-explored. In this paper, we make the first attempt at large-scale U-ReID and propose a "small data for big task" paradigm dubbed Meta Clustering Learning (MCL). MCL pseudo-labels only a subset of the entire unlabeled data via clustering, to save computation in the first phase of training. After that, the learned cluster centers, termed meta-prototypes in our MCL, are regarded as proxy annotators that readily annotate the remaining unlabeled data to further polish the model, as sketched below. To alleviate the potential noisy-labeling issue in the polishing phase, we enforce two well-designed loss constraints that guarantee intra-identity consistency and strong inter-identity correlation. On multiple widely used U-ReID benchmarks, our method significantly saves computational cost while achieving comparable or better performance than prior works.
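The proxy-annotation step can be pictured as nearest-prototype assignment with a confidence cutoff; the threshold value and the rejection rule below are hypothetical illustrations, not details from the paper:

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def annotate_with_prototypes(prototypes, feats, threshold=0.5):
    # prototypes: (K, D) meta-prototypes, i.e., cluster centers from the subset
    # feats: (N, D) features of the remaining unlabeled data
    prototypes = F.normalize(prototypes, dim=1)
    feats = F.normalize(feats, dim=1)
    sim = feats @ prototypes.t()
    conf, labels = sim.max(dim=1)          # nearest prototype and its similarity
    labels[conf < threshold] = -1          # -1 marks samples left unlabeled
    return labels
```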
In recent years, the field of surgical computer vision has undergone considerable breakthroughs with the rising popularity of deep neural network methods. However, standard fully-supervised approaches to training require large amounts of annotated data, which come at a prohibitive cost, particularly in the clinical domain. Self-supervised learning (SSL) methods, which have begun to gain traction in the general computer vision community, represent a potential solution to these annotation costs, allowing useful representations to be learned from unlabeled data alone. Still, the effectiveness of SSL methods in more complex and impactful domains, such as medicine and surgery, remains limited and unexplored. In this work, we address this critical need by investigating four state-of-the-art SSL methods (MoCo v2, SimCLR, DINO, SwAV) in the context of surgical computer vision. We conduct an extensive analysis of the performance of these methods on the Cholec80 dataset for two fundamental and popular tasks in surgical context understanding: phase recognition and tool presence detection. We examine their parameterization, then their behavior with respect to the amount of training data in semi-supervised settings. Correctly transferring these methods to surgery, as described and conducted in this work, leads to considerable performance gains over generic uses of SSL - up to 7% on phase recognition and 20% on tool presence detection - and over semi-supervised phase recognition approaches by up to 14%. The code will be made available at https://github.com/camma-public/selfsupsurg.
Occlusion between different objects is a typical challenge in multi-object tracking (MOT), which often leads to inferior tracking results because of missed detections. A common practice in multi-object tracking is to re-identify the missed objects after their reappearance. Although re-identification can improve tracking performance, annotations of identities are required to train the model. In addition, such a practice of re-identification still cannot track highly occluded objects while they are missed by the detector. In this paper, we focus on online multi-object tracking and design two novel modules, an unsupervised re-identification learning module and an occlusion estimation module, to handle these problems. Specifically, the proposed unsupervised re-identification learning module requires no (pseudo) identity information and suffers from no scalability issue. The proposed occlusion estimation module tries to predict the locations where occlusions happen, which are used to estimate the positions of objects missed by the detector. Our study shows that, when applied to state-of-the-art MOT methods, the proposed unsupervised re-identification learning is comparable to supervised re-identification learning, and the tracking performance is further improved by the proposed occlusion estimation module.
In recent years, with the growing demand for public safety and the rapid development of intelligent surveillance networks, person re-identification (Re-ID) has become one of the hot research topics in the field of computer vision. The main research goal of person Re-ID is to retrieve persons with the same identity from different cameras. However, traditional person Re-ID methods require manual labeling of person targets, which consumes a large amount of labor cost. With the widespread application of deep neural networks, many deep-learning-based person Re-ID methods have emerged. Therefore, this paper aims to help researchers understand the latest research results and future trends in the field. Firstly, we summarize several recently published person Re-ID surveys and complement them with the latest research methods to systematically classify deep-learning-based person Re-ID methods. Secondly, we propose a multi-dimensional taxonomy that classifies deep-learning-based person Re-ID methods into four categories according to metric and representation learning: methods based on deep metric learning, local feature learning, generative adversarial learning, and sequence feature learning. Furthermore, we subdivide the above four categories according to their methodologies and motivations, and discuss the advantages and limitations of some sub-categories. Finally, we discuss some challenges and possible research directions for person Re-ID.
Person re-identification (Re-ID) in scenarios with large spatial and temporal spans has not been fully explored. This is partly because existing benchmark datasets are mainly collected over limited spatial and temporal ranges, e.g., from videos recorded by cameras in a specific region of a campus. Such limited spatial and temporal ranges make it difficult to simulate the difficulties of person Re-ID in real scenarios. In this work, we contribute a novel large-scale spatio-temporal dataset, LaST, including 10,862 identities with over 228k images. Compared with existing datasets, LaST presents a challenging and highly diverse Re-ID setting, and a significantly larger spatial and temporal range. For instance, each person can appear in different cities or countries, in various time slots from daytime to night, and in different seasons from spring to winter. To the best of our knowledge, LaST is a novel person Re-ID dataset with the largest spatio-temporal range. Based on LaST, we verify its challenge by conducting a comprehensive performance evaluation of 14 Re-ID algorithms. We further propose an easy-to-implement baseline that works well in such a challenging Re-ID setting. We also verify that models pre-trained on LaST can generalize well on existing datasets with short-term and cloth-changing scenarios. We expect LaST to continuously inspire future works toward more realistic and challenging Re-ID tasks. More information about the dataset is available at https://github.com/shuxjweb/last.git.
Recently, owing to the poor generalization of supervised person re-identification (ReID) methods, domain generalization (DG) person ReID has attracted a lot of attention; it aims to learn a domain-insensitive model that can resist the influence of domain bias. In this paper, we first verify through experiments that style factors are an important component of domain bias. Based on this conclusion, we propose a style variable and irrelevant learning (SVIL) method to eliminate the effect of style factors on the model. Specifically, we design a style jitter module (SJM) in SVIL. The SJM module can enrich the style diversity of a specific source domain and reduce the style differences among various source domains. This leads the model to focus on identity-relevant information and to be insensitive to style changes. Moreover, we organically combine the SJM module with a meta-learning algorithm, maximizing its benefits and further improving the generalization ability of the model. Note that our SJM module is plug-and-play and incurs no inference cost. Extensive experiments confirm the effectiveness of our SVIL, and our method outperforms the state-of-the-art methods on DG-ReID benchmarks.
Supervised person re-identification (re-ID) methods require a large amount of pairwise manually labeled data, which is not applicable to most real-world scenarios of re-ID deployment. On the other hand, unsupervised re-ID methods rely on unlabeled data to train models, but perform poorly compared with supervised re-ID methods. In this work, we aim to combine unsupervised re-ID learning with a small number of human annotations to achieve competitive performance. Towards this goal, we present an unsupervised clustering active learning (UCAL) re-ID deep learning approach. It is able to incrementally discover representative centroid pairs and ask humans to annotate them. These few labeled representative pairwise data can then improve the unsupervised representation learning model together with large amounts of unlabeled data. More importantly, because representative centroid pairs are selected for annotation, UCAL works with very low-cost human effort. Extensive experiments demonstrate the superiority of the proposed model on three re-ID benchmark datasets.
State-of-the-art computer vision systems are trained to predict a fixed set of predetermined object categories. This restricted form of supervision limits their generality and usability since additional labeled data is needed to specify any other visual concept. Learning directly from raw text about images is a promising alternative which leverages a much broader source of supervision. We demonstrate that the simple pre-training task of predicting which caption goes with which image is an efficient and scalable way to learn SOTA image representations from scratch on a dataset of 400 million (image, text) pairs collected from the internet. After pre-training, natural language is used to reference learned visual concepts (or describe new ones) enabling zero-shot transfer of the model to downstream tasks. We study the performance of this approach by benchmarking on over 30 different existing computer vision datasets, spanning tasks such as OCR, action recognition in videos, geo-localization, and many types of fine-grained object classification. The model transfers non-trivially to most tasks and is often competitive with a fully supervised baseline without the need for any dataset specific training. For instance, we match the accuracy of the original ResNet-50 on ImageNet zero-shot without needing to use any of the 1.28 million training examples it was trained on. We release our code and pre-trained model weights at https://github.com/OpenAI/CLIP.
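The pre-training objective described here is a symmetric contrastive loss over a batch of (image, text) pairs, in the spirit of the pseudocode released with the paper; the PyTorch rendering below is a sketch, with embedding projection and the learnable logit scale assumed to be handled outside the function:

```python
import torch
import torch.nn.functional as F

def clip_contrastive_loss(image_emb, text_emb, logit_scale):
    # image_emb, text_emb: (B, D) embeddings of B paired (image, text) examples.
    # Matching pairs lie on the diagonal of the B x B similarity matrix.
    image_emb = F.normalize(image_emb, dim=1)
    text_emb = F.normalize(text_emb, dim=1)
    logits = logit_scale * image_emb @ text_emb.t()
    targets = torch.arange(logits.size(0), device=logits.device)
    loss_i = F.cross_entropy(logits, targets)        # image -> text direction
    loss_t = F.cross_entropy(logits.t(), targets)    # text -> image direction
    return (loss_i + loss_t) / 2
```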
Well-annotated medical datasets enable deep neural networks (DNNs) to gain strong power in extracting lesion-related features. Building such large and well-designed medical datasets is costly due to the need for high-level expertise. Model pre-training based on ImageNet is a common practice to gain better generalization when the data amount is limited. However, it suffers from the domain gap between natural and medical images. In this work, we pre-train DNNs on ultrasound (US) domains instead of ImageNet to reduce the domain gap in medical US applications. To learn US image representations based on unlabeled US videos, we propose a novel meta-learning-based contrastive learning method, namely Meta Ultrasound Contrastive Learning (Meta-USCL). To tackle the key challenge of obtaining semantically consistent sample pairs for contrastive learning, we present a positive pair generation module along with an automatic sample weighting module based on meta-learning. Experimental results on multiple computer-aided diagnosis (CAD) problems, including pneumonia detection, breast cancer classification, and breast tumor segmentation, show that the proposed self-supervised method reaches state-of-the-art (SOTA). The codes are available at https://github.com/Schuture/Meta-USCL.
Recently, unsupervised person re-identification (Re-ID) has attracted increasing attention due to its open-world scenario setting, in which limited annotated data are available. Existing supervised methods often fail to generalize well on unseen domains, while unsupervised methods, which mostly lack multi-granularity information, are prone to suffer from confirmation bias. In this paper, we aim at finding better feature representations on the unseen target domain from two aspects: 1) performing unsupervised domain adaptation on the labeled source domain, and 2) mining potential similarities on the unlabeled target domain. Besides, a collaborative pseudo-labeling strategy is proposed to alleviate the influence of confirmation bias. Firstly, a generative adversarial network is utilized to transfer images from the source domain to the target domain. Moreover, person identity-preserving and identity mapping losses are introduced to improve the quality of the generated images. Secondly, we propose a novel collaborative multiple feature clustering framework (CMFC) to learn the internal data structure of the target domain, including a global feature branch and a partial feature branch. The global feature branch (GB) employs unsupervised clustering on the global features of person images, while the partial feature branch (PB) mines similarities within different body regions. Finally, extensive experiments on two benchmark datasets demonstrate the competitive performance of our method under unsupervised person Re-ID settings.
Transformer-based supervised pre-training has achieved strong performance in person re-identification (ReID). However, due to the domain gap between ImageNet and ReID datasets, it usually requires a larger pre-training dataset (e.g., ImageNet-21K) to boost performance, owing to the strong data-fitting ability of the transformer. To address this challenge, this work mitigates the gap between pre-training and ReID datasets from the perspectives of data and model structure, respectively. We first investigate self-supervised learning of Vision Transformers (ViT) on unlabeled person images (the LUPerson dataset). To further reduce the domain gap and accelerate pre-training, the Catastrophic Forgetting Score (CFS) is proposed to evaluate the gap between pre-training and fine-tuning data. Based on CFS, a subset is selected by sampling relevant data close to the downstream ReID data and filtering irrelevant data from the pre-training dataset. For the model structure, a ReID-specific module named the IBN-based convolution stem (ICS) is proposed to bridge the domain gap by learning more invariant features. Extensive experiments have been conducted on fine-tuning the pre-trained models under supervised learning, unsupervised domain adaptation (UDA), and unsupervised learning (USL) settings. We successfully downscale the LUPerson dataset to 50% with no performance degradation. Finally, we achieve state-of-the-art performance on Market-1501 and MSMT17. For example, our ViT-S/16 achieves 91.3%/89.9%/89.6% mAP for supervised/UDA/USL ReID on Market-1501. Code and models will be released at https://github.com/michuanhaohao/TransReID-SSL.
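The abstract does not spell out the exact form of CFS, so the following is only an illustrative sketch of proximity-based subset selection, under the assumption that relevance is scored by cosine similarity between pre-training features and the centroid of the downstream data; all names and the ranking rule are hypothetical:

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def select_pretraining_subset(pretrain_feats, downstream_feats, keep_ratio=0.5):
    # Rank pre-training samples by similarity to the downstream feature centroid
    # and keep the closest fraction (0.5 echoes the reported 50% downscaling).
    center = F.normalize(downstream_feats.mean(dim=0), dim=0)
    scores = F.normalize(pretrain_feats, dim=1) @ center
    k = int(keep_ratio * len(scores))
    return scores.topk(k).indices          # indices of the retained samples
```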
The unsupervised domain adaptive person re-identification (re-ID) task is challenging because, unlike conventional domain adaptation tasks, there is no overlap between the source-domain and target-domain data in person re-ID, which results in a significant domain gap. State-of-the-art unsupervised re-ID methods train neural networks with a memory-based contrastive loss. However, performing contrastive learning by treating each unlabeled instance as its own class leads to the problem of class collision, and the update intensity is inconsistent due to the difference in the number of instances of different categories when the memory bank is updated. To address such problems, we propose prototype dictionary learning for person re-ID, which is able to utilize both source-domain and target-domain data in a single training stage, while avoiding the class collision problem and the inconsistency of cluster update intensity. To reduce the interference of the domain gap on the model, we propose a local enhancement module to improve the domain adaptation of the model without increasing the number of model parameters. Our experiments on two large datasets demonstrate the effectiveness of prototype dictionary learning: it achieves 71.5% mAP on the Market-to-Duke task, a 2.3% improvement over state-of-the-art unsupervised domain adaptive re-ID methods, and 83.9% mAP on the Duke-to-Market task, a 4.4% improvement over state-of-the-art unsupervised domain adaptive re-ID methods.