Deep metric learning (DML) models typically require strong local and global representations; however, effectively integrating local and global features during DML model training is a challenge. DML models are usually trained with specific loss functions, including pair-based and proxy-based losses. Pair-based loss functions exploit rich semantic relations among data points, yet they often suffer from slow convergence during training. Proxy-based loss functions, on the other hand, typically lead to significantly faster convergence during training, but they do not fully explore the rich relations among data points. In this paper, we propose a novel DML approach to address these challenges. The proposed method uses a hybrid loss that integrates pair-based and proxy-based loss functions, thereby exploiting rich data-to-data relations while converging quickly. In addition, the proposed method leverages both global and local features to obtain rich representations during DML model training. Finally, we also employ second-order attention for feature enhancement to enable accurate and efficient retrieval. In our experiments, we extensively evaluate the proposed method on four public benchmarks, and the experimental results show that it achieves state-of-the-art performance on all benchmarks.
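To make the hybrid-loss idea concrete, below is a minimal PyTorch sketch that combines a contrastive pair-based term with a Proxy-NCA-style proxy term. The choice of these two particular terms, the name `hybrid_loss`, and all hyper-parameters are illustrative assumptions, not the paper's actual formulation.

```python
import torch
import torch.nn.functional as F

def hybrid_loss(embeddings, labels, proxies, margin=0.5, alpha=1.0):
    """Hypothetical hybrid loss: a pair-based term for rich data-to-data
    relations plus a proxy-based term for fast convergence (assumed form)."""
    embeddings = F.normalize(embeddings, dim=1)
    proxies = F.normalize(proxies, dim=1)

    # Pair-based term: contrastive loss over all pairs in the batch
    # (assumes the batch contains at least one positive pair).
    dist = torch.cdist(embeddings, embeddings)
    same = labels.unsqueeze(0) == labels.unsqueeze(1)
    eye = torch.eye(len(labels), dtype=torch.bool, device=labels.device)
    pair_term = (dist[same & ~eye].pow(2).mean()
                 + F.relu(margin - dist[~same]).pow(2).mean())

    # Proxy-based term: NCA-style classification against class proxies.
    proxy_term = F.cross_entropy(-torch.cdist(embeddings, proxies), labels)

    return pair_term + alpha * proxy_term
```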
Recent methods for deep metric learning have been focusing on designing different contrastive loss functions between positive and negative pairs of samples so that the learned feature embedding is able to pull positive samples of the same class closer and push negative samples from different classes away from each other. In this work, we recognize that there is a significant semantic gap between features at the intermediate feature layer and class labels at the final output layer. To bridge this gap, we develop a contrastive Bayesian analysis to characterize and model the posterior probabilities of image labels conditioned on their feature similarity in a contrastive learning setting. This contrastive Bayesian analysis leads to a new loss function for deep metric learning. To improve the generalization capability of the proposed method to new classes, we further extend the contrastive Bayesian loss with a metric variance constraint. Our experimental results and ablation studies demonstrate that the proposed contrastive Bayesian metric learning method significantly improves the performance of deep metric learning in both supervised and pseudo-supervised scenarios, outperforming existing methods by a large margin.
Supervision for metric learning has long been given in the form of equivalence between human-labeled classes. Although this type of supervision has been a basis of metric learning for decades, we argue that it hinders further advances of the field. In this regard, we propose a new regularization method, dubbed HIER, to discover the latent semantic hierarchy of training data, and to deploy the hierarchy to provide richer and more fine-grained supervision than inter-class separability induced by common metric learning losses. HIER achieved this goal with no annotation for the semantic hierarchy but by learning hierarchical proxies in hyperbolic spaces. The hierarchical proxies are learnable parameters, and each of them is trained to serve as an ancestor of a group of data or other proxies to approximate the semantic hierarchy among them. HIER deals with the proxies along with data in hyperbolic space since geometric properties of the space are well-suited to represent their hierarchical structure. The efficacy of HIER was evaluated on four standard benchmarks, where it consistently improved performance of conventional methods when integrated with them, and consequently achieved the best records, surpassing even the existing hyperbolic metric learning technique, in almost all settings.
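HIER's hierarchical proxies live in hyperbolic space; the standard geodesic distance on the Poincaré ball, sketched below in PyTorch, is the kind of hyperbolic distance such methods optimize (whether HIER uses exactly this parameterization is not stated in the abstract).

```python
import torch

def poincare_distance(x, y, eps=1e-5):
    """Geodesic distance on the Poincare ball (requires ||x||, ||y|| < 1):
    d(x, y) = arcosh(1 + 2 * ||x - y||^2 / ((1 - ||x||^2) * (1 - ||y||^2)))."""
    sq = torch.sum((x - y) ** 2, dim=-1)
    nx = torch.clamp(1.0 - torch.sum(x ** 2, dim=-1), min=eps)
    ny = torch.clamp(1.0 - torch.sum(y ** 2, dim=-1), min=eps)
    return torch.acosh(1.0 + 2.0 * sq / (nx * ny))
```

Points near the ball's boundary are exponentially far from the origin, which is why a proxy placed closer to the center can cheaply serve as an ancestor of many descendants, matching the hierarchical structure HIER exploits.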
Deep Metric Learning (DML) learns a non-linear semantic embedding from input data that brings similar pairs together while keeping dissimilar data away from each other. To this end, many different methods have been proposed over the last decade, with promising results in various applications. The success of a DML algorithm greatly depends on its loss function. However, no loss function is perfect; each addresses only some aspects of an optimal similarity embedding. Moreover, the generalizability of DML to unseen categories at test time is an important matter not considered by existing loss functions. To address these challenges, we propose novel approaches to combine different losses built on top of a shared deep feature extractor. The proposed ensemble of losses forces the deep model to extract features that are consistent with all losses. Since the selected losses are diverse and each emphasizes different aspects of an optimal semantic embedding, our effective combining methods yield a considerable improvement over any individual loss and generalize well on unseen categories. There is no limitation in choosing loss functions, and our methods can work with any set of existing ones. Moreover, they can optimize each loss function as well as its weight in an end-to-end paradigm with no need to adjust any hyper-parameter. We evaluate our methods on several popular datasets from the machine vision domain in conventional Zero-Shot-Learning (ZSL) settings. The results are very encouraging and show that our methods outperform all baseline losses by a large margin on all datasets.
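As a structural illustration only, the sketch below combines several DML losses computed on one shared embedding with learnable softmax-normalized weights. The abstract does not specify the paper's weighting scheme, and a naive scheme like this one can collapse onto the smallest loss; it is shown purely to make the shared-extractor/ensemble structure concrete.

```python
import torch
import torch.nn as nn

class LossEnsemble(nn.Module):
    """Sketch: arbitrary DML losses on one shared embedding, with mixing
    weights learned end-to-end. The softmax parameterization is an
    assumption, not the paper's scheme."""
    def __init__(self, losses):
        super().__init__()
        self.losses = nn.ModuleList(losses)
        self.logits = nn.Parameter(torch.zeros(len(losses)))

    def forward(self, embeddings, labels):
        w = torch.softmax(self.logits, dim=0)  # positive weights summing to 1
        terms = torch.stack([loss(embeddings, labels) for loss in self.losses])
        # Caveat: minimizing this directly can drive all weight onto the
        # smallest term; the paper presumably counteracts such collapse.
        return (w * terms).sum()
```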
Deep metric learning aims to learn an embedding space where semantically similar samples are close together and dissimilar ones are pushed apart. To explore more hard and informative training signals for augmentation and generalization, recent methods focus on generating synthetic samples to boost metric learning losses. However, these methods use only deterministic, class-independent generation (e.g., simple linear interpolation), which can cover only a limited part of the distribution space around the original samples. They overlook the widely different characteristics of different classes and cannot model abundant intra-class variations for generation. As a result, the generated samples not only lack rich semantics within their class, but may also act as noisy signals that disturb training. In this paper, we propose a novel intra-class adaptive augmentation (IAA) framework for deep metric learning. We estimate intra-class variations for every class and generate adaptive synthetic samples to support hard sample mining and boost metric learning losses. Further, since most datasets have only a few samples per class, we propose a neighbor correction to revise inaccurate estimates, based on our observed correlation that similar classes generally have similar variation distributions. Extensive experiments on five benchmarks show that our method significantly improves on the state-of-the-art methods, outperforming them in retrieval performance by 3%-6%. Our code is available at https://github.com/darkpromise98/IAA
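A rough sketch of the intra-class adaptive idea, under the assumption that per-class variation is estimated from each class's embedding statistics; the paper's actual estimator and its neighbor correction are not reproduced here.

```python
import torch

def adaptive_augment(embeddings, labels, num_synthetic=2):
    """Assumed reading of IAA's core step: estimate each class's variation
    from its embeddings and sample synthetics from that class-specific
    spread, rather than using class-independent interpolation."""
    synth, synth_labels = [], []
    for c in labels.unique():
        cls = embeddings[labels == c]
        if len(cls) < 2:
            continue  # need >= 2 samples to estimate variation
        mean, std = cls.mean(dim=0), cls.std(dim=0)
        noise = torch.randn(num_synthetic, cls.size(1), device=cls.device)
        synth.append(mean + noise * std)  # class-adaptive spread
        synth_labels.append(c.repeat(num_synthetic))
    if not synth:  # no class had enough samples
        return embeddings[:0], labels[:0]
    return torch.cat(synth), torch.cat(synth_labels)
```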
Deep metric learning (DML) serves to learn an embedding function that projects semantically similar data close together in the embedding space, and it plays a crucial role in many applications such as image retrieval and face recognition. However, the performance of DML methods often depends heavily on the sampling scheme that selects effective data from the embedding space during training. In practice, the embeddings are obtained by some deep model, and the embedding space often contains barren regions due to the lack of training points there, resulting in the so-called "missing embedding" problem. This problem can impair sample quality and degrade DML performance. In this work, we investigate how to alleviate the "missing embedding" problem to improve sampling quality and achieve effective DML. To this end, we propose a Densely-Anchored Sampling (DAS) scheme that treats the embeddings of data points as "anchors" and exploits the embedding space around the anchors to densely generate embeddings without corresponding data points. Specifically, we propose to exploit the embedding space around a single anchor with Discriminative Feature Scaling (DFS) and around multiple anchors with Memorized Transformation Shifting (MTS). In this way, with embeddings both with and without data points, we can provide more embeddings to facilitate the sampling process, thereby boosting DML performance. Our method is effortlessly integrated into existing DML frameworks and improves them without bells and whistles. Extensive experiments on three benchmark datasets demonstrate the superiority of our method.
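In the spirit of DAS's single-anchor exploitation, the sketch below populates the space around each anchor embedding by randomly scaling its feature dimensions. The scaling range and the per-dimension rule are assumptions standing in for the paper's DFS, and MTS is not shown.

```python
import torch

def densely_anchored_embeddings(anchors, labels, per_anchor=2,
                                low=0.9, high=1.1):
    """Rough sketch only: treat each embedding as an anchor and generate
    data-free embeddings around it by random per-dimension scaling
    (the actual DFS rule is the paper's, not this uniform scaling)."""
    b, d = anchors.shape
    scales = torch.empty(b * per_anchor, d,
                         device=anchors.device).uniform_(low, high)
    generated = anchors.repeat_interleave(per_anchor, dim=0) * scales
    new_labels = labels.repeat_interleave(per_anchor)
    return generated, new_labels
```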
A family of loss functions built on pair-based computation has been proposed in the literature, providing a myriad of solutions for deep metric learning. In this paper, we provide a general weighting framework for understanding recent pair-based loss functions. Our contributions are three-fold: (1) we establish a General Pair Weighting (GPW) framework, which casts the sampling problem of deep metric learning into a unified view of pair weighting through gradient analysis, providing a powerful tool for understanding recent pair-based loss functions; (2) we show that with GPW, various existing pair-based methods can be compared and discussed comprehensively, with clear differences and key limitations identified; (3) we propose a new loss called multi-similarity loss (MS loss) under the GPW, which is implemented in two iterative steps (i.e., mining and weighting). This allows it to fully consider three similarities for pair weighting, providing a more principled approach for collecting and weighting informative pairs. Finally, the proposed MS loss obtains new state-of-the-art performance on four image retrieval benchmarks, where it outperforms the most recent approaches, such as ABE [14] and HTL [4], by a large margin, e.g., 80.9% → 88.0% Recall@1 on the In-Shop Clothes Retrieval dataset.
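The two iterative steps of MS loss translate into a short PyTorch routine: mine informative pairs with the ε-margin rule, then soft-weight the survivors. The hyper-parameter values below follow common public implementations and may differ from a given setup.

```python
import torch
import torch.nn.functional as F

def multi_similarity_loss(embeddings, labels, alpha=2.0, beta=50.0,
                          lam=0.5, eps=0.1):
    """MS loss: per-anchor mining (epsilon margin) then soft weighting of
    positive and negative cosine similarities."""
    x = F.normalize(embeddings, dim=1)
    sim = x @ x.t()  # cosine similarity matrix
    same = labels.unsqueeze(0) == labels.unsqueeze(1)
    eye = torch.eye(len(labels), dtype=torch.bool, device=labels.device)

    losses = []
    for i in range(len(labels)):
        pos, neg = sim[i][same[i] & ~eye[i]], sim[i][~same[i]]
        if len(pos) == 0 or len(neg) == 0:
            continue
        # Step 1 -- mining: keep positives harder than the hardest negative
        # minus eps, and negatives harder than the hardest positive plus eps.
        pos_kept = pos[pos < neg.max() + eps]
        neg_kept = neg[neg > pos.min() - eps]
        if len(pos_kept) == 0 or len(neg_kept) == 0:
            continue
        # Step 2 -- weighting: soft aggregation of both kept sets.
        pos_term = torch.log1p(torch.exp(-alpha * (pos_kept - lam)).sum()) / alpha
        neg_term = torch.log1p(torch.exp(beta * (neg_kept - lam)).sum()) / beta
        losses.append(pos_term + neg_term)
    return torch.stack(losses).mean() if losses else sim.new_zeros(())
```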
Deep metric learning aims to learn an embedding space where the distances between data reflect their class equivalence, even when the classes are unseen during training. However, the limited number of classes available in training precludes generalization of the learned embedding space. Motivated by this, we introduce a new data augmentation approach that synthesizes novel classes and their embedding vectors. Our approach can provide rich semantic information to the embedding model and improve its generalization by augmenting the training data with novel classes unavailable in the original data. We implement this idea by learning and exploiting a conditional generative model, which, given a class label and noise, produces a random embedding vector of that class. Our proposed generator allows the loss to use richer inter-class relations by augmenting realistic and diverse classes, resulting in better generalization to unseen samples. Experimental results on public benchmark datasets demonstrate that our method clearly enhances the performance of proxy-based losses.
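A minimal sketch of a conditional generator that maps a class label plus noise to an embedding vector; the architecture (an MLP conditioned by concatenation) and all layer sizes are assumptions for illustration, not the paper's design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConditionalEmbeddingGenerator(nn.Module):
    """Assumed instantiation: embed the class label, concatenate with
    Gaussian noise, and map through an MLP to a unit-norm embedding."""
    def __init__(self, num_classes, noise_dim=32, embed_dim=128, hidden=256):
        super().__init__()
        self.label_embed = nn.Embedding(num_classes, noise_dim)
        self.net = nn.Sequential(
            nn.Linear(2 * noise_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, embed_dim),
        )

    def forward(self, labels):
        z = torch.randn(labels.size(0), self.label_embed.embedding_dim,
                        device=labels.device)
        cond = torch.cat([self.label_embed(labels), z], dim=1)
        return F.normalize(self.net(cond), dim=1)
```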
In recent years, a tremendous amount of visual content has been generated and shared from many fields, such as social media platforms, medical imaging, and robotics. This abundance of content creation and sharing has introduced new challenges, particularly in searching databases for similar content, i.e., content-based image retrieval (CBIR), a long-established research area in which improved efficiency and accuracy are needed for real-time retrieval. Artificial intelligence has made progress in CBIR and has greatly facilitated the process of instance search. In this survey, we review recent instance retrieval works developed based on deep learning algorithms and techniques, organized by deep network architecture types, deep features, feature embedding methods, and network fine-tuning strategies. Our survey considers a wide variety of recent methods; we identify milestone works, reveal connections among the various methods, and present commonly used benchmarks, evaluation results, common challenges, and promising directions for future work.
Deep metric learning (DML) learns a mapping to an embedding space where similar data are close together and dissimilar data are far apart. However, conventional proxy-based losses for DML have two problems: a gradient problem, and difficulty in applying to real-world datasets that require multiple local centers per class. Moreover, existing DML performance metrics also have problems with stability and flexibility. This paper proposes a Multi-Proxies Anchor (MPA) loss and a Normalized Discounted Cumulative Gain (NDCG@K) metric. This study makes the following three contributions: (1) MPA loss is able to learn real-world datasets using multiple proxies. (2) MPA loss improves the training capability of neural networks by addressing the gradient problem. (3) The NDCG@K metric encourages comprehensive evaluation across various datasets. Finally, we demonstrate the effectiveness of MPA loss, which achieves the highest accuracy on two datasets of fine-grained images.
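NDCG@K itself is standard for retrieval; a per-query sketch with binary relevance (same class = relevant) is below. How the paper grades relevance is not stated in the abstract, and the ideal DCG here is computed from the top-K list only, a common simplification.

```python
import torch

def ndcg_at_k(retrieved_labels, query_label, k=10):
    """NDCG@K for one query with binary relevance and log2 discount:
    DCG = sum_i rel_i / log2(i + 1), normalized by the ideal ordering."""
    rel = (retrieved_labels[:k] == query_label).float()
    discounts = 1.0 / torch.log2(torch.arange(2, k + 2, dtype=torch.float))
    dcg = (rel * discounts[: len(rel)]).sum()
    ideal = torch.sort(rel, descending=True).values
    idcg = (ideal * discounts[: len(rel)]).sum()
    return (dcg / idcg).item() if idcg > 0 else 0.0
```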
Deep metric learning (DML) aims to minimize the empirical expected loss of pairwise intra-/inter-class proximity violations in the embedding space. We relate DML to feasibility problems with finite chance constraints. We show that the minimizer of proxy-based DML satisfies certain chance constraints, and that the worst-case generalization of proxy-based methods can be characterized by the radius of the smallest ball around a class proxy that covers the entire domain of samples of the corresponding class, suggesting that multiple proxies per class help performance. To provide a scalable algorithm and exploit more proxies, we consider the chance constraints implied by the minimizers of proxy-based DML instances and reformulate DML as finding a feasible point in the intersection of such constraints, resulting in a problem that is approximately solved by iterative projections. In short, we repeatedly train a proxy-based loss and re-initialize the proxies with the embeddings of deliberately selected new samples. We apply our method to well-established losses and evaluate it on four popular benchmark datasets for image retrieval. Outperforming state-of-the-art methods, our approach consistently improves the performance of the applied losses. Code is available at: https://github.com/yetigurbuz/ccp-dml
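One outer iteration of the described recipe, as a hedged sketch: after training the proxy-based loss for a while, re-initialize each class's proxies with embeddings of selected samples. Random selection below stands in for the paper's deliberate selection rule, and the tensor layout is assumed.

```python
import torch

def reanchor_proxies(proxies, embeddings, labels):
    """Sketch of proxy re-anchoring; proxies is assumed to have shape
    (num_classes, proxies_per_class, dim). The selection of which samples
    become new proxies is the paper's; random choice is a placeholder."""
    with torch.no_grad():
        for c in labels.unique():
            cls = embeddings[labels == c]
            k = min(proxies.size(1), len(cls))
            idx = torch.randperm(len(cls))[:k]
            proxies[c, :k] = cls[idx]  # re-anchor onto sample embeddings
    return proxies
```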
In this paper, we propose a hard sample generation scheme for constructing informative triplets. The proposed hard sample generation is a two-stage synthesis framework that produces hard samples through effective positive and negative sample generators in two stages. The first stage stretches anchor-positive pairs with a piecewise linear operation and improves the quality of the generated samples through a carefully designed conditional generative adversarial network that reduces the risk of mode collapse. The second stage exploits an adaptive reverse metric constraint to generate the final hard samples. Extensive experiments on several benchmark datasets verify that our method achieves superior performance over existing hard sample generation algorithms. In addition, we also find that our proposed hard sample generation method, combined with existing triplet mining strategies, can further improve deep metric learning performance.
In this paper, we tackle the problem of grouping unlabelled images from novel classes into different semantic partitions by leveraging a labelled dataset that contains images from other, different but related classes. This is a more realistic and challenging setting than conventional semi-supervised learning. We propose a two-branch learning framework for this problem, with one branch focusing on local part-level information and the other focusing on overall features. To transfer knowledge from the labelled data to the unlabelled data, we propose using dual ranking statistics on both branches to generate pseudo labels for training on the unlabelled data. We further introduce a mutual knowledge distillation method to allow information exchange and encourage agreement between the two branches for discovering new classes, allowing our model to enjoy the benefits of both global and local features. We comprehensively evaluate our method on public benchmarks for generic object classification, as well as on more challenging datasets for fine-grained visual recognition, achieving state-of-the-art performance.
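The ranking-statistics style of pseudo-labeling can be sketched compactly: two samples are pseudo-labeled as the same class when the index sets of their top-k feature activations coincide. This is a simplified single-comparison reading; the paper applies dual statistics across its two branches.

```python
import torch

def ranking_stats_pseudo_labels(feats_a, feats_b, topk=5):
    """Pairwise pseudo-labels via ranking statistics (assumed simplified
    form): mark a pair positive when the two samples share an identical
    set of top-k feature activation indices."""
    ranks_a = feats_a.topk(topk, dim=1).indices.sort(dim=1).values
    ranks_b = feats_b.topk(topk, dim=1).indices.sort(dim=1).values
    # (N, M) boolean matrix: True where the top-k index sets match exactly.
    return (ranks_a.unsqueeze(1) == ranks_b.unsqueeze(0)).all(dim=2)
```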
Learning similarity is a key aspect of medical image analysis, particularly in recommendation systems or in uncovering the interpretation of anatomical data in images. Most existing methods learn such similarities in the embedding space using a single metric learner. However, images have a variety of object attributes, such as color, shape, or artifacts. Encoding such attributes using a single metric learner is inadequate and may fail to generalize. Instead, multiple learners can focus on separate aspects of these attributes in subspaces of an overall embedding. But this implies that the number of learners must be found empirically for each new dataset. This work, Dynamic Subspace Learners, proposes to dynamically exploit multiple learners by removing the need to know the number of learners in advance and by aggregating new subspace learners during training. Furthermore, visual interpretability of such subspace learning is enabled by integrating an attention module into our method. This integrated attention mechanism provides visual insights into the discriminative image features that contribute to the clustering of image sets, as well as visual explanations of the embedding features. The benefits of our attention-based dynamic subspace learners are evaluated in applications of image clustering, image retrieval, and weakly supervised segmentation. Our method achieves results competitive with multiple-learner baselines and significantly outperforms the classification network in clustering and retrieval scores on three different public benchmark datasets. Moreover, our attention maps provide proxy labels that achieve up to 15% higher Dice scores compared with state-of-the-art interpretation techniques.
Advanced visual localization techniques encompass image retrieval challenges and 6 Degree-of-Freedom (DoF) camera pose estimation, such as hierarchical localization. Thus, they must extract global and local features from input images. Previous methods have achieved this through resource-intensive or accuracy-reducing means, such as combinatorial pipelines or multi-task distillation. In this study, we present a novel method called SuperGF, which effectively unifies local and global features for visual localization, leading to a better trade-off between localization accuracy and computational efficiency. Specifically, SuperGF is a transformer-based aggregation model that operates directly on image-matching-specific local features and generates global features for retrieval. We conduct experimental evaluations of our method in terms of both accuracy and efficiency, demonstrating its advantages over other methods. We also provide implementations of SuperGF using various types of local features, including dense and sparse learning-based or hand-crafted descriptors.
Knowledge distillation aims at transferring knowledge acquired in one model (a teacher) to another model (a student) that is typically smaller. Previous approaches can be expressed as a form of training the student to mimic output activations of individual data examples represented by the teacher. We introduce a novel approach, dubbed relational knowledge distillation (RKD), that transfers mutual relations of data examples instead. For concrete realizations of RKD, we propose distance-wise and angle-wise distillation losses that penalize structural differences in relations. Experiments conducted on different tasks show that the proposed method improves trained student models by a significant margin. In particular, for metric learning it allows students to outperform their teachers' performance, achieving state-of-the-art results on standard benchmark datasets.
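The two RKD losses have compact forms: the distance-wise loss matches mean-normalized pairwise distances between teacher and student embeddings, and the angle-wise loss matches the cosine of the angle formed at every triplet's middle point, both under a Huber penalty. A PyTorch sketch:

```python
import torch
import torch.nn.functional as F

def rkd_distance_loss(student, teacher):
    """Distance-wise RKD: Huber loss between pairwise distance matrices,
    each normalized by its mean off-diagonal distance."""
    def pdist_normalized(e):
        d = torch.cdist(e, e)
        mask = ~torch.eye(len(e), dtype=torch.bool, device=e.device)
        return d / d[mask].mean()
    return F.smooth_l1_loss(pdist_normalized(student),
                            pdist_normalized(teacher))

def rkd_angle_loss(student, teacher):
    """Angle-wise RKD: Huber loss between the cosines of angles formed
    by all triplets (angle measured at the middle point j)."""
    def angles(e):
        diff = F.normalize(e.unsqueeze(0) - e.unsqueeze(1), dim=2)
        # out[i, j, k] = cos of the angle at j between (j - i) and (j - k)
        return torch.einsum("ijd,kjd->ijk", diff, diff)
    return F.smooth_l1_loss(angles(student), angles(teacher))
```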
Few-shot learning is a fundamental and challenging problem, as it requires recognizing novel categories from only a few examples. The objects to be recognized have multiple variants and can be located anywhere in the image. Directly comparing query images with example images cannot handle content misalignment. The representation and metric used for comparison are crucial, but challenging because of the scarcity and wide variation of samples in few-shot learning. In this paper, we propose a novel semantic alignment model to compare relations, which is robust to content misalignment. We propose adding two key ingredients to existing few-shot learning frameworks for better feature and metric learning ability. First, we introduce a semantic alignment loss to align the relation statistics of features belonging to samples of the same category. Second, local and global mutual information maximization is introduced, yielding representations that contain locally consistent and category-shared information across structural locations in an image. Third, we introduce a principled approach to weighting multiple loss functions by considering the homoscedastic uncertainty of each stream. We conduct extensive experiments on several few-shot learning datasets. Experimental results show that the method is capable of comparing relations with semantic alignment strategies and achieves state-of-the-art performance.
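The homoscedastic-uncertainty weighting in the third ingredient is commonly realized in the style of Kendall et al.: each loss term is scaled by exp(-s_i) with a +s_i regularizer, where s_i = log σ_i² is learned jointly with the network. A minimal sketch follows; whether the paper uses exactly this form is an assumption.

```python
import torch
import torch.nn as nn

class HomoscedasticWeighting(nn.Module):
    """Learned uncertainty weighting of multiple losses:
    L = sum_i exp(-s_i) * L_i + s_i, with s_i = log(sigma_i^2) trainable.
    Larger uncertainty down-weights a stream but pays a log penalty."""
    def __init__(self, num_losses):
        super().__init__()
        self.log_vars = nn.Parameter(torch.zeros(num_losses))

    def forward(self, losses):
        losses = torch.stack(losses)
        return (torch.exp(-self.log_vars) * losses + self.log_vars).sum()
```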
Image descriptors based on activations of Convolutional Neural Networks (CNNs) have become dominant in image retrieval due to their discriminative power, compactness of representation, and search efficiency. Training of CNNs, either from scratch or fine-tuning, requires a large amount of annotated data, where a high quality of annotation is often crucial. In this work, we propose to fine-tune CNNs for image retrieval on a large collection of unordered images in a fully automated manner. Reconstructed 3D models obtained by the state-of-the-art retrieval and structure-from-motion methods guide the selection of the training data. We show that both hard-positive and hard-negative examples, selected by exploiting the geometry and the camera positions available from the 3D models, enhance the performance of particular-object retrieval. CNN descriptor whitening discriminatively learned from the same training data outperforms commonly used PCA whitening. We propose a novel trainable Generalized-Mean (GeM) pooling layer that generalizes max and average pooling and show that it boosts retrieval performance. Applying the proposed method to the VGG network achieves state-of-the-art performance on the standard benchmarks: Oxford Buildings, Paris, and Holidays datasets.
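GeM pooling has a compact closed form, f_c = ((1/|X_c|) Σ_{x∈X_c} x^p)^{1/p} per channel c, recovering average pooling at p = 1 and approaching max pooling as p → ∞. A PyTorch sketch with a single learnable p:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GeM(nn.Module):
    """Generalized-Mean pooling over the spatial grid of a CNN activation
    map; the exponent p is a learnable parameter trained with the network."""
    def __init__(self, p=3.0, eps=1e-6):
        super().__init__()
        self.p = nn.Parameter(torch.tensor(p))
        self.eps = eps  # clamp avoids 0^p issues after ReLU activations

    def forward(self, x):  # x: (B, C, H, W)
        x = x.clamp(min=self.eps).pow(self.p)
        return F.adaptive_avg_pool2d(x, 1).pow(1.0 / self.p).flatten(1)
```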
During training of distance metric learning networks, the minimizers of typical loss functions can be considered as "feasible points" satisfying a set of constraints imposed by the training data. To this end, we reformulate the distance metric learning problem as finding a feasible point of a constraint set where the embedding vectors of the training data satisfy desired intra-class and inter-class proximities. The feasible set induced by the constraint set is expressed as a relaxed feasible set that enforces the proximity constraints only for particular samples (a sample from each class) of the training data. The feasible point problem is then approximately solved by performing alternating projections onto those feasible sets. This approach introduces a regularization term and results in minimizing a typical loss function with a systematic batch construction, where these batches are constrained to contain the same samples from each class for a certain number of iterations. Moreover, these particular samples can be considered as class representatives, allowing efficient utilization of hard-example mining during batch construction. The proposed technique is applied to well-established losses and evaluated on the Stanford Online Products, CAR196, and CUB200-2011 datasets for image retrieval and clustering. Outperforming state of the art, the proposed approach consistently improves the performance of the integrated loss functions with no additional computational cost, and boosts performance further with hard negative mining.
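The systematic batch construction reads naturally as pseudocode: for a fixed number of iterations, every batch contains the same representative of each sampled class plus randomly drawn classmates. The sketch below is one plausible realization; the representative-selection and hard-mining details are the paper's and are not shown.

```python
import random

def build_batches(samples_by_class, representatives, classes_per_batch=8,
                  samples_per_class=4, num_batches=10):
    """Sketch of constrained batch construction: representatives[c] is the
    fixed sample of class c reused in every batch for this round; classes
    are assumed to have at least samples_per_class members."""
    batches = []
    for _ in range(num_batches):
        batch = []
        for c in random.sample(list(samples_by_class), classes_per_batch):
            rest = [s for s in samples_by_class[c] if s != representatives[c]]
            batch.append(representatives[c])  # same anchor each iteration
            batch.extend(random.sample(rest, samples_per_class - 1))
        batches.append(batch)
    return batches
```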
This work aims to improve instance retrieval with self-supervision. We find that fine-tuning using recently developed self-supervised learning (SSL) methods, such as SimCLR and MoCo, fails to improve the performance of instance retrieval. In this work, we identify that the learned representations for instance retrieval should be invariant to large variations in viewpoint, background, etc., while the self-augmented positives applied by current SSL methods cannot provide a strong enough signal for learning robust instance-level representations. To overcome this problem, we propose InsCLR, a new SSL method built on instance-level contrast, to learn intra-class invariance by dynamically mining meaningful pseudo-positive samples from both mini-batches and a memory bank during training. Extensive experiments demonstrate that InsCLR achieves similar or better performance than the state-of-the-art SSL methods on instance retrieval. Code is available at https://github.com/zeludeng/insclr
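The instance-level contrast can be sketched as an InfoNCE-style objective over candidates drawn from the mini-batch and the memory bank, with pseudo-positives supplied by the mining step. The mask below is assumed to come from the paper's dynamic mining, which is not shown.

```python
import torch
import torch.nn.functional as F

def instance_contrastive_loss(query, candidates, pos_mask, temperature=0.07):
    """InfoNCE-style sketch: maximize log-likelihood of each query's
    pseudo-positives among all candidates (batch + memory bank).
    pos_mask: (Q, C) marking mined pseudo-positive candidates per query."""
    q = F.normalize(query, dim=1)
    c = F.normalize(candidates, dim=1)
    logits = q @ c.t() / temperature  # (Q, C) scaled cosine similarities
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    pos_mask = pos_mask.float()
    # Average log-likelihood over each query's pseudo-positives.
    pos_log_prob = (log_prob * pos_mask).sum(1) / pos_mask.sum(1).clamp(min=1)
    return -pos_log_prob.mean()
```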