Online relevance feedback (RF) is widely used in instance search (INS) tasks to further refine ranking results, but the interaction is usually inefficient. Active learning (AL) techniques address this problem by selecting valuable feedback candidates. However, mainstream AL methods require an initial labeled set for cold start and are often computationally expensive, so they cannot fully satisfy the requirements of online RF in interactive INS tasks. To address this issue, we propose a confidence-aware active feedback method (CAAF) that is specifically designed for online RF in interactive INS tasks. Inspired by the explicit difficulty modeling scheme of self-paced learning, CAAF utilizes a pairwise manifold ranking loss to evaluate the ranking confidence of each unlabeled sample. The ranking confidence improves interaction efficiency not only by indicating valuable feedback candidates but also by modulating the diffusion weights in manifold ranking. In addition, we design two acceleration strategies, an approximate optimization scheme and a top-K search scheme, to reduce the computational complexity of CAAF. Extensive experiments on both image INS tasks and video INS tasks that search for buildings, landscapes, persons, and human behaviors demonstrate the effectiveness of the proposed method. Notably, in the real-world, large-scale video INS task of NIST TRECVID 2021, CAAF uses 25% fewer feedback samples to achieve performance almost equivalent to the champion solution. Moreover, with the same number of feedback samples, CAAF reaches an mAP of 51.9%, significantly surpassing the champion solution by 5.9%.
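To make the role of the ranking confidence concrete, the following minimal sketch queries the unlabeled samples whose scores disagree most with their graph neighbors, a simple stand-in for the pairwise manifold-ranking confidence described above; the function and parameter names are illustrative assumptions, not CAAF's actual implementation.

```python
import numpy as np

def select_feedback_candidates(W, f, labeled, n_query):
    """Minimal sketch: query the unlabeled samples whose ranking scores
    disagree most with their neighbors on the affinity graph.

    W       : (n, n) affinity matrix of the gallery graph.
    f       : (n,) current ranking scores from manifold ranking.
    labeled : iterable of indices already labeled by the user.
    n_query : number of samples to present for feedback.

    The 'confidence' below is a simple stand-in for CAAF's pairwise
    manifold-ranking loss: low agreement with neighbors means low
    confidence, and low-confidence samples are queried first.
    """
    deg = W.sum(axis=1) + 1e-12
    neighbor_score = (W @ f) / deg              # neighborhood-smoothed score
    confidence = -np.abs(f - neighbor_score)    # large disagreement -> low confidence
    labeled = set(labeled)
    order = [i for i in np.argsort(confidence) if i not in labeled]
    return order[:n_query]                      # least confident candidates
```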
When considering person re-identification (re-ID) as a retrieval process, re-ranking is a critical step to improve its accuracy. Yet in the re-ID community, limited effort has been devoted to re-ranking, especially fully automatic, unsupervised solutions. In this paper, we propose a k-reciprocal encoding method to re-rank the re-ID results. Our hypothesis is that if a gallery image is similar to the probe in the k-reciprocal nearest neighbors, it is more likely to be a true match. Specifically, given an image, a k-reciprocal feature is calculated by encoding its k-reciprocal nearest neighbors into a single vector, which is used for re-ranking under the Jaccard distance. The final distance is computed as the combination of the original distance and the Jaccard distance. Our re-ranking method does not require any human interaction or any labeled data, so it is applicable to large-scale datasets. Experiments on the large-scale Market-1501, CUHK03, MARS, and PRW datasets confirm the effectiveness of our method.
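A simplified sketch of the re-ranking idea follows: build k-reciprocal neighbor sets from the original distances, compute a Jaccard distance between the sets, and blend it with the original distance. It omits the paper's local query expansion and Gaussian-weighted encoding, and the blending weight is assumed, so treat it as an illustration rather than the reference implementation.

```python
import numpy as np

def k_reciprocal_rerank(dist, k=20, lambda_value=0.3):
    """Simplified sketch of k-reciprocal re-ranking.

    dist : (n, n) original pairwise distance matrix (probe and gallery stacked).
    Returns a final distance that blends the original distance with a Jaccard
    distance computed from k-reciprocal neighbor sets.
    """
    n = dist.shape[0]
    ranks = np.argsort(dist, axis=1)
    knn = [set(ranks[i, :k + 1]) for i in range(n)]                     # k-NN (incl. self)
    recip = [set(j for j in knn[i] if i in knn[j]) for i in range(n)]   # k-reciprocal sets

    jaccard = np.ones(dist.shape)
    for i in range(n):
        for j in range(n):
            union = len(recip[i] | recip[j])
            if union > 0:
                jaccard[i, j] = 1.0 - len(recip[i] & recip[j]) / union

    return lambda_value * dist + (1.0 - lambda_value) * jaccard          # final distance
```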
As an important data selection schema, active learning emerges as an essential component when iterating an Artificial Intelligence (AI) model. It becomes even more critical given the dominance, in applications, of deep-neural-network-based models, which contain a large number of parameters and are data hungry. Despite its indispensable role in developing AI models, research on active learning is not as intensive as that on other research directions. In this paper, we present a review of active learning through deep active learning approaches from the following perspectives: 1) technical advancements in active learning, 2) applications of active learning in computer vision, 3) industrial systems leveraging or with the potential to leverage active learning for data iteration, and 4) current limitations and future research directions. We expect this paper to clarify the significance of active learning in a modern AI model manufacturing process and to bring additional research attention to active learning. By addressing data automation challenges and coping with automated machine learning systems, active learning will facilitate the democratization of AI technologies by boosting model production at scale.
Person re-identification (Re-ID) aims at retrieving a person of interest across multiple non-overlapping cameras. With the advancement of deep neural networks and the increasing demand for intelligent video surveillance, it has gained significantly increased interest in the computer vision community. By dissecting the components involved in developing a person Re-ID system, we categorize it into the closed-world and open-world settings. The widely studied closed-world setting is usually applied under various research-oriented assumptions and has achieved inspiring success using deep learning techniques on a number of datasets. We first conduct a comprehensive overview with in-depth analysis of closed-world person Re-ID from three different perspectives, including deep feature representation learning, deep metric learning, and ranking optimization. With the performance saturation under the closed-world setting, the research focus for person Re-ID has recently shifted to the open-world setting, facing more challenging issues. This setting is closer to practical applications under specific scenarios. We summarize open-world Re-ID in terms of five different aspects. By analyzing the advantages of existing methods, we design a powerful AGW baseline, achieving state-of-the-art or at least comparable performance on twelve datasets for four different Re-ID tasks. Meanwhile, we introduce a new evaluation metric (mINP) for person Re-ID, indicating the cost of finding all the correct matches, which provides an additional criterion to evaluate the Re-ID system for real applications. Finally, some important yet under-investigated open issues are discussed.
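For reference, mINP is commonly stated as below, where R_i^hard is the rank position of the hardest (last) correct match for query i and |G_i| is the number of ground-truth matches; this is the usual formulation attributed to this survey's AGW baseline paper, reproduced here as a sketch rather than quoted from the abstract above.

```latex
% Sketch of the mINP metric as commonly defined:
% R_i^{hard} = rank of the hardest (last) correct match for query i,
% |G_i|      = number of ground-truth matches of query i in the gallery.
\mathrm{NP}_i = \frac{R_i^{\mathrm{hard}} - |G_i|}{R_i^{\mathrm{hard}}}, \qquad
\mathrm{INP}_i = 1 - \mathrm{NP}_i = \frac{|G_i|}{R_i^{\mathrm{hard}}}, \qquad
\mathrm{mINP} = \frac{1}{n}\sum_{i=1}^{n} \mathrm{INP}_i .
```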
Active learning (AL) attempts to maximize a model's performance gain while labeling as few samples as possible. Deep learning (DL) is greedy for data and requires large amounts of data to optimize a massive number of parameters so that the model learns how to extract high-quality features. In recent years, owing to the rapid development of internet technology, we are in an era of information abundance and have access to massive amounts of data. As a result, DL has attracted strong interest from researchers and has developed rapidly. Compared with DL, researchers' interest in AL has been relatively low, mainly because, before the rise of DL, traditional machine learning required relatively few labeled samples, so early AL could hardly demonstrate its deserved value. Although DL has made breakthroughs in various fields, most of this success is due to the public availability of a large number of existing annotated datasets. However, acquiring a large number of high-quality annotated datasets consumes a great deal of manpower, which is not affordable in fields that require high expertise, especially speech recognition, information extraction, and medical imaging; AL has therefore gradually received the attention it deserves. A natural idea is whether AL can be used to reduce the cost of sample annotation while retaining the powerful learning capability of DL. Hence, deep active learning (DAL) has emerged. Although the related research is quite rich, a comprehensive survey of DAL is still lacking. To fill this gap, this paper provides a formal classification of existing work, along with a comprehensive and systematic overview. In addition, we analyze and summarize the development of DAL from the perspective of applications. Finally, we discuss the confusion and problems in DAL and give some possible development directions for DAL.
Temporal action localization (TAL) aims to predict the action categories and temporal boundaries (i.e., start and end times) of action instances in untrimmed videos. Fully supervised solutions are usually adopted in most existing works and have proven effective. One of the practical bottlenecks of these solutions is the large amount of labeled training data required. To reduce the expensive cost of human labeling, this paper focuses on a rarely investigated but practical task, called semi-supervised TAL, and proposes an effective active learning method named AL-STAL. We follow four steps, named Train, Query, Annotate, Append, to actively select video samples with high informativeness and train the localization model. AL-STAL is equipped with two scoring functions that account for the uncertainty of the localization model, thereby facilitating the ranking and selection of video samples. One takes the entropy of the predicted label distribution as a measure of uncertainty, called Temporal Proposal Entropy (TPE). The other introduces a new metric based on the mutual information between adjacent action proposals to evaluate the informativeness of video samples, called Temporal Context Inconsistency (TCI). To validate the effectiveness of the proposed method, we conduct extensive experiments on two benchmark datasets, THUMOS'14 and ActivityNet 1.3. Experimental results show that AL-STAL outperforms existing competitors and achieves satisfactory performance compared with fully supervised learning.
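As a concrete illustration of the entropy-based score, the sketch below computes a video-level uncertainty from per-proposal class probabilities, in the spirit of TPE; the averaging aggregation and the function signature are assumptions for illustration, not the paper's exact definition.

```python
import numpy as np

def temporal_proposal_entropy(proposal_probs):
    """Sketch of an entropy-based video informativeness score in the spirit
    of TPE: proposal_probs is an (n_proposals, n_classes) array of per-proposal
    class probabilities for one video; the video-level score is the mean
    per-proposal entropy (the exact aggregation in AL-STAL may differ)."""
    p = np.clip(proposal_probs, 1e-12, 1.0)
    entropy = -(p * np.log(p)).sum(axis=1)   # per-proposal uncertainty
    return float(entropy.mean())             # video-level score used for ranking
```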
Exabytes of data are generated daily by humans, leading to a growing need for new efforts to address the grand challenges that big data brings to multi-label learning. For example, extreme multi-label classification is an active and rapidly growing research area that deals with classification tasks involving an extremely large number of classes or labels; building a multi-label classification model from large-scale data with limited supervision has become valuable for practical applications. Beyond these, how to harvest the strong learning capability of deep learning to better capture label dependencies in multi-label learning has attracted tremendous effort, which is key for deep learning to address real-world classification tasks. However, it has been pointed out that there is a lack of systematic studies that focus explicitly on analyzing the emerging trends and new challenges of multi-label learning in the era of big data. A comprehensive survey is therefore called for to meet this need and to chart future research directions and new applications.
In recent years, a large amount of visual content has been generated and shared in many fields, such as social media platforms, medical imaging, and robotics. This abundance of content creation and sharing introduces new challenges, especially for content-based image retrieval (CBIR), the task of searching databases for similar content and a long-established research area in which improved efficiency and accuracy are needed for real-time retrieval. Artificial intelligence has made progress in CBIR and has greatly facilitated the instance search process. In this survey, we review recent instance retrieval works developed on the basis of deep learning algorithms and techniques, organizing the survey by deep network architecture types, deep features, feature embedding methods, and network fine-tuning strategies. Our survey considers a wide variety of recent methods; we identify milestone works, reveal connections among various methods, and present commonly used benchmarks, evaluation results, and common challenges, and we propose promising future directions.
Recent aerial object detection models rely on a large amount of labeled training data, which requires unaffordable manual labeling costs in large aerial scenes with dense objects. Active learning is effective in reducing the data labeling cost by selectively querying informative and representative unlabeled samples. However, existing active learning methods mainly assume a class-balanced setting and rely on image-based querying for generic object detection tasks, which makes them less applicable to the aerial object detection scenario due to the long-tailed class distribution and dense small objects in aerial scenes. In this paper, we propose a novel active learning method for cost-effective aerial object detection. Specifically, both object-level and image-level informativeness are considered in the object selection to refrain from redundant and myopic querying. Besides, an easy-to-use class-balancing criterion is incorporated to favor minority objects and alleviate the long-tailed class distribution problem in model training. To fully utilize the queried information, we further devise a training loss to mine the latent knowledge in the undiscovered image regions. Extensive experiments are conducted on the DOTA-v1.0 and DOTA-v2.0 benchmarks to validate the effectiveness of the proposed method. The results show that it can save more than 75% of the labeling cost to reach the same performance compared to the baselines and state-of-the-art active object detection methods. Code is available at https://github.com/ZJW700/MUS-CDB
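The following sketch illustrates one simple way to combine object-level uncertainty with a class-balancing bias when querying objects, roughly in the spirit of the selection criteria described above; the names, the entropy-times-rarity scoring, and the data layout are assumptions for illustration, not the MUS-CDB implementation.

```python
import numpy as np

def class_balanced_object_query(detections, class_counts, budget):
    """Sketch of object-level querying with a class-balancing bias.

    detections   : list of (image_id, box, class_probs) for unlabeled objects.
    class_counts : dict mapping class index -> number of already-labeled objects.
    budget       : number of objects to query in this round.
    """
    scored = []
    for image_id, box, class_probs in detections:
        p = np.clip(np.asarray(class_probs), 1e-12, 1.0)
        uncertainty = float(-(p * np.log(p)).sum())                    # object-level informativeness
        rarity = 1.0 / (1.0 + class_counts.get(int(p.argmax()), 0))    # favor minority classes
        scored.append((uncertainty * rarity, image_id, box))
    scored.sort(key=lambda item: item[0], reverse=True)
    return scored[:budget]                                             # objects to send for labeling
```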
In this paper, we briefly introduce the experimental methods and results of WHU-NERCMS in TRECVID 2021. This year, we participated in the automatic and interactive tasks of instance search (INS). For the automatic task, the retrieval target is divided into two parts: person retrieval and action retrieval. We adopt a two-stage approach, including face detection and face recognition for person retrieval, and two kinds of action detection methods for action retrieval, consisting of three frame-based human-object interaction detection methods and two video-based general action detection methods. After that, the person retrieval results and action retrieval results are fused to initialize the result ranking list. In addition, we try to use complementary methods to further improve the search performance. For the interactive task, we test two different interaction strategies on the fused results. We submit 4 runs each for the automatic and interactive tasks. An introduction of each run is shown in Table 1. The official evaluation shows that the proposed strategies rank first in both the automatic and interactive tracks.
Unsupervised feature representation learning is a challenging and important problem for analyzing the large amounts of multimedia data that lack semantic labels. Recently proposed unsupervised learning methods based on neural networks have successfully obtained features suitable for classifying multimedia data. However, unsupervised learning of feature representations suited to content-based matching, comparison, or retrieval of multimedia data has not yet been explored. To obtain such retrieval-adapted features, we introduce the idea of using diffusion distance on a feature manifold for neural-network-based unsupervised feature learning. This idea is realized as a novel algorithm called DeepDiffusion (DD). DD simultaneously optimizes two components, a feature embedded by a deep neural network and a distance metric that leverages diffusion on the latent feature manifold. DD depends on its loss function but not on the encoder architecture; therefore, it can be applied to different multimedia data types with their respective encoder architectures. Experimental evaluation using 3D shapes and 2D images demonstrates the versatility and high accuracy of the DD algorithm. Code is available at https://github.com/takahikof/DeepDiffusion
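To make the diffusion-on-a-manifold idea tangible, here is a minimal sketch of diffusion-based ranking on a kNN affinity graph built from embeddings; it shows the generic propagation f ← αSf + (1−α)y rather than DD's loss function, and all parameter names and values are illustrative assumptions.

```python
import numpy as np

def diffusion_ranking(features, query_idx, k=10, alpha=0.9, iters=20):
    """Sketch of diffusion-based ranking on a kNN affinity graph.

    features : (n, d) embeddings; query_idx : index of the query sample.
    Iterates f <- alpha * S f + (1 - alpha) * y, where S is the symmetrically
    normalized kNN affinity and y is the query indicator vector.
    """
    n = features.shape[0]
    x = features / (np.linalg.norm(features, axis=1, keepdims=True) + 1e-12)
    A = x @ x.T                                   # cosine affinities
    np.fill_diagonal(A, 0.0)
    keep = np.argsort(-A, axis=1)[:, :k]          # k nearest neighbors per row
    W = np.zeros_like(A)
    rows = np.repeat(np.arange(n), k)
    W[rows, keep.ravel()] = A[rows, keep.ravel()]
    W = np.maximum(W, W.T)                        # symmetrize the graph
    d = W.sum(axis=1) + 1e-12
    S = W / np.sqrt(np.outer(d, d))               # normalized affinity

    y = np.zeros(n)
    y[query_idx] = 1.0
    f = y.copy()
    for _ in range(iters):                        # diffuse the query indicator
        f = alpha * (S @ f) + (1.0 - alpha) * y
    return np.argsort(-f)                         # ranking, most similar first
```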
This paper presents FLGC, a simple yet effective fully linear graph convolutional network for semi-supervised and unsupervised learning. Instead of using gradient descent, it computes a globally optimal closed-form solution with a decoupled procedure. We show that (1) FLGC is powerful for handling both graph-structured data and regular data, (2) training graph convolutional models with a closed-form solution improves computational efficiency without degrading performance, and (3) FLGC acts as a natural generalization of classical linear models, such as ridge regression and subspace clustering, to the non-Euclidean domain. Furthermore, we implement semi-supervised FLGC and unsupervised FLGC by introducing an initial residual strategy, enabling FLGC to aggregate long-range neighborhoods and alleviate over-smoothing. We compare our semi-supervised and unsupervised FLGC with many state-of-the-art methods on various classification and clustering benchmarks, showing that the proposed FLGC models consistently outperform previous approaches in terms of accuracy, robustness, and learning efficiency. The core code of our FLGC is released at https://github.com/angrycai/flgc.
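As a rough illustration of solving a linear graph model in closed form instead of by gradient descent, the sketch below propagates features over a normalized adjacency with an initial-residual term and then fits the classifier with a ridge-regression closed-form solution; the propagation rule, parameters, and interface are assumptions, not FLGC's actual equations.

```python
import numpy as np

def closed_form_graph_classifier(A, X, Y_labeled, labeled_idx, k=2, lam=1e-2, alpha=0.1):
    """Sketch of a fully linear graph classifier solved in closed form.

    A         : (n, n) adjacency matrix; X : (n, d) node features.
    Y_labeled : (m, c) one-hot labels for the nodes in labeled_idx.
    Propagates features over the normalized adjacency with an initial-residual
    term, then fits the linear classifier by ridge regression (no gradient descent).
    """
    n = A.shape[0]
    A_hat = A + np.eye(n)                         # add self-loops
    d = A_hat.sum(axis=1)
    S = A_hat / np.sqrt(np.outer(d, d))           # symmetric normalization

    H = X.astype(float).copy()
    for _ in range(k):                            # k propagation steps with initial residual
        H = (1.0 - alpha) * (S @ H) + alpha * X

    Z = H[labeled_idx]                            # propagated features of labeled nodes
    W = np.linalg.solve(Z.T @ Z + lam * np.eye(Z.shape[1]), Z.T @ Y_labeled)
    return H @ W                                  # class scores for all nodes
```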
Recent years witnessed the breakthrough of face recognition (FR) with deep convolutional neural networks. Dozens of papers in the field of FR are published every year. Some of them have been applied in the industrial community and play an important role in daily life, such as device unlocking, mobile payment, and so on. This paper provides an introduction to face recognition, including its history, pipeline, algorithms based on conventional manually designed features or deep learning, mainstream training and evaluation datasets, and related applications. We have analyzed and compared as many state-of-the-art works as possible, and we have also carefully designed a set of experiments to find the effect of backbone size and data distribution. This survey serves as the material for the tutorial named The Practical Face Recognition Technology in the Industrial World at FG2023.
While deep learning (DL) is data-hungry and usually relies on extensive labeled data to deliver good performance, active learning (AL) reduces labeling costs by selecting a small proportion of samples from unlabeled data for labeling and training. Therefore, deep active learning (DAL) has emerged in recent years as a feasible solution for maximizing model performance under a limited labeling cost/budget. Although a large number of DAL methods have been developed and various literature reviews conducted, a performance evaluation of DAL methods under fair comparison settings is not yet available. Our work intends to fill this gap. In this work, we construct a DAL toolkit, DeepAL+, by re-implementing 19 highly cited DAL methods. We survey and categorize DAL-related works and build comparative experiments on frequently used datasets and DAL algorithms. In addition, we explore some factors (e.g., batch size, number of epochs during training) that influence the efficacy of DAL, which provides better references for researchers to design their DAL experiments or carry out DAL-related applications.
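As an example of the kind of strategy such a toolkit re-implements, the sketch below performs one pool-based query round with classic entropy sampling; the scikit-learn-style predict_proba interface and the names are assumptions for illustration and do not reflect DeepAL+'s actual API.

```python
import numpy as np

def entropy_sampling_round(model, X_pool, labeled_idx, batch_size):
    """Sketch of one pool-based query round with entropy sampling.

    model       : classifier exposing predict_proba (scikit-learn style).
    X_pool      : (n, d) feature pool; labeled_idx : indices already labeled.
    Returns the indices of the most uncertain unlabeled samples to annotate.
    """
    unlabeled = np.setdiff1d(np.arange(len(X_pool)), labeled_idx)
    probs = np.clip(model.predict_proba(X_pool[unlabeled]), 1e-12, 1.0)
    entropy = -(probs * np.log(probs)).sum(axis=1)        # per-sample uncertainty
    return unlabeled[np.argsort(-entropy)[:batch_size]]   # query the most uncertain samples
```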
Multiple kernel clustering (MKC) is committed to achieving optimal information fusion from a set of base kernels. Constructing precise and local kernel matrices has proved to be of vital significance in applications, since unreliable similarity estimation over distant pairs will degrade clustering performance. Although existing localized MKC algorithms exhibit improved performance compared with globally designed competitors, most of them localize the kernel matrix by considering only the τ-nearest neighbors. However, this coarse manner follows the unreasonable strategy that the ranking importance of different neighbors is equal, which is impractical in applications. To alleviate such problems, this paper proposes a novel local sample-weighted multiple kernel clustering (LSWMKC) model. We first construct a consensus discriminative affinity graph in kernel space, revealing the latent local structure. Furthermore, the optimal neighborhood kernel learned from the affinity graph has a naturally sparse property and a clear block-diagonal structure. In addition, LSWMKC simultaneously optimizes the adaptive weights of different neighbors for the corresponding samples. Experimental results demonstrate that our LSWMKC possesses better local manifold representation and outperforms existing kernel-based and graph-based clustering algorithms. The source code of LSWMKC is publicly available at https://github.com/liliangnudt/lswmkc.
Medical image segmentation is one of the fundamental problems for artificial-intelligence-based clinical decision systems. Current automatic medical image segmentation methods often fail to meet clinical requirements, so a series of interactive segmentation algorithms has been proposed to exploit expert correction information. However, existing methods suffer from segmentation refinement failures after long-term interaction, as well as cost issues arising from expert annotation, which hinders clinical application. This paper proposes an interactive segmentation framework for medical images that introduces corrective action evaluation, combining action-based confidence learning and multi-agent reinforcement learning (MARL). The evaluation is established via a novel action-based confidence network, and the corrective actions are obtained from MARL. Based on the confidence information, the framework is designed to provide more detailed feedback, and a simulated label generation mechanism on unsupervised data is proposed to reduce over-reliance on labeled data. Experimental results on various medical image datasets show the significant performance of the proposed algorithm.
This paper addresses the few-shot learning problem, which aims to learn new visual concepts from a few examples. A common problem setting in few-shot classification assumes a random sampling strategy when acquiring data labels, which is inefficient in practical applications. In this work, we introduce a new budget-aware few-shot learning problem that aims not only to learn novel object categories but also to select informative instances to annotate for data efficiency. We develop a meta-learning strategy for our budget-aware few-shot learning task, which jointly learns a novel data selection policy based on a graph convolutional network (GCN) and an example-based few-shot classifier. Our selection policy computes a context-sensitive representation for each unlabeled sample via graph message passing, which is then used to predict an informativeness score for sequential selection. We validate our method with extensive experiments on the mini-ImageNet, tiered-ImageNet, and Omniglot datasets. The results show that our few-shot learning strategy outperforms the baselines by a sizable margin, which demonstrates the efficacy of our method.
Instance object detection plays an important role in intelligent monitoring, visual navigation, human-computer interaction, intelligent services, and other fields. Inspired by the great success of deep convolutional neural networks (DCNNs), DCNN-based instance object detection has become a promising research topic. To address the problem that DCNNs always require large-scale annotated datasets to supervise their training, while manual annotation is exhausting and time-consuming, we propose a new co-training-based framework called Gram self-labeling and detection (Gram-SLD). The proposed Gram-SLD can automatically annotate a large amount of data with very limited manually labeled key data and achieve competitive performance. In our framework, a Gram loss is defined and used to construct two fully redundant and independent views, and a key sample selection strategy along with an automatic annotation strategy that comprehensively considers precision and recall is proposed to generate high-quality pseudo labels. Experiments on the public GMU Kitchen dataset, the Active Vision dataset, and the self-made BHID-ITEM dataset demonstrate that, with only 5% of labeled training data, our Gram-SLD achieves competitive performance in object detection (less than 2% mAP loss) compared with fully supervised methods. In practical applications with complex and changing environments, the proposed method can meet the real-time and accuracy requirements of instance object detection.
By integrating human knowledge and experience, human-in-the-loop aims to train accurate prediction models at minimal cost. Humans can provide training data for machine learning applications and directly accomplish tasks in the pipeline that are hard for computers in machine-based approaches. In this paper, we survey existing works on human-in-the-loop from a data perspective and classify them into three categories with a progressive relationship: (1) work that improves model performance from data processing, (2) work that improves model performance through interventions in model training, and (3) the design of systems that are independent of human-in-the-loop. Using the above categorization, we summarize the major approaches in the field, along with their technical strengths and weaknesses, and provide a simple classification and discussion of applications in natural language processing, computer vision, and other areas. Furthermore, we present some open challenges and opportunities. This survey intends to provide a high-level summary of human-in-the-loop and to motivate interested readers to consider approaches for designing effective human-in-the-loop solutions.
Most existing person re-identification methods compute the matching relations between person images across camera views based on the ranking of the pairwise similarities. This matching strategy, lacking a global viewpoint and consideration of context, inevitably leads to ambiguous matching results and sub-optimal performance. Based on the natural assumption that images belonging to the same person identity should not match images belonging to multiple different person identities across views, called the unicity of person matching on the identity level, we propose an end-to-end person unicity matching architecture for learning and refining the person matching relations. First, we adopt the image samples' contextual information in feature space to generate the initial soft matching results by using graph neural networks. Second, we utilize the samples' global context relationship to refine the soft matching results and reach matching unicity through bipartite graph matching. Giving full consideration to real-world person re-identification applications, we achieve unicity matching in both one-shot and multi-shot settings of person re-identification and further develop a fast version of the unicity matching without losing performance. The proposed method is evaluated on five public benchmarks, including four multi-shot datasets, MSMT17, DukeMTMC, Market1501, and CUHK03, and a one-shot dataset, VIPeR. Experimental results show the superiority of the proposed method in terms of performance and efficiency.
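To illustrate the bipartite-matching step that enforces unicity, the sketch below resolves soft matching scores into a one-to-one assignment with the Hungarian algorithm; the cost construction and function name are assumptions for illustration, not the authors' exact refinement procedure.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def unicity_refine(soft_match_prob):
    """Sketch of enforcing one-to-one (unicity) matching on soft results.

    soft_match_prob : (n_probe, n_identity) soft matching probabilities.
    Converts probabilities to costs and solves the bipartite assignment with
    the Hungarian algorithm, so each probe maps to at most one identity.
    """
    cost = 1.0 - np.asarray(soft_match_prob, dtype=float)
    rows, cols = linear_sum_assignment(cost)
    return list(zip(rows.tolist(), cols.tolist()))   # (probe index, assigned identity)
```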