Deep metric learning (DML) aims to minimize the empirical expected loss of pairwise intra-/inter-class proximity violations in the embedding space. We relate DML to a feasibility problem with finitely many chance constraints. We show that the minimizers of proxy-based DML satisfy certain chance constraints, and that the worst-case behavior of proxy-based methods can be characterized by the radius of the smallest ball around a class proxy that covers the entire domain of the corresponding class samples, suggesting that multiple proxies per class help performance. To provide a scalable algorithm and exploit more proxies, we consider the chance constraints implied by the minimizers of proxy-based DML instances and reformulate DML as finding a feasible point in the intersection of such constraints, a problem we approximately solve by iterative projections. In brief, we repeatedly train a proxy-based loss and re-initialize the proxies with the embeddings of deliberately selected new samples. We apply our method to well-established losses and evaluate it for image retrieval on four popular benchmark datasets. Outperforming state-of-the-art methods, our approach consistently improves the performance of the applied losses. Code is available at: https://github.com/yetigurbuz/ccp-dml
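A minimal sketch of the alternating scheme described above, assuming a ProxyNCA-style loss as the "proxy-based loss"; the training loop, the parameter names, and the way samples are selected for proxy re-initialization are illustrative, not the authors' CCP-DML implementation.

```python
import torch
import torch.nn.functional as F

def proxy_loss(z, labels, proxies):
    # ProxyNCA-style placeholder: softmax over negative distances to class proxies.
    d = torch.cdist(F.normalize(z, dim=1), F.normalize(proxies, dim=1))
    return F.cross_entropy(-d, labels)

def train_with_proxy_reinit(model, loader, num_classes, embed_dim,
                            epochs=10, reinit_every=2, lr=1e-4, device="cpu"):
    proxies = torch.randn(num_classes, embed_dim, device=device, requires_grad=True)
    opt = torch.optim.Adam(list(model.parameters()) + [proxies], lr=lr)
    for epoch in range(epochs):
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            loss = proxy_loss(model(x), y, proxies)
            opt.zero_grad()
            loss.backward()
            opt.step()
        if (epoch + 1) % reinit_every == 0:
            # Re-initialize each class proxy with the embedding of one sample of that
            # class (the paper selects these samples deliberately; here we simply take
            # the first sample of each class from one batch).
            with torch.no_grad():
                x, y = next(iter(loader))
                z = model(x.to(device))
                y = y.to(device)
                for c in y.unique():
                    proxies.data[c] = z[y == c][0]
    return model, proxies
```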
During the training of a distance metric learning network, the minimizer of a typical loss function can be regarded as a feasible point satisfying a set of constraints imposed by the training data. Accordingly, we reformulate distance metric learning as the problem of finding a feasible point of a constraint set where the embedding vectors of the training data satisfy the desired intra-class and inter-class proximities. The feasible set induced by these constraints is expressed as relaxed feasible sets that enforce the proximity constraints only for particular samples of the training data (one sample from each class). The feasible point problem is then approximately solved by performing alternating projections onto those feasible sets. This approach introduces a regularization term and results in minimizing a typical loss function with a systematic batch construction, where the batches are constrained to contain the same samples from each class for a certain number of iterations. Moreover, these particular samples can be considered class representatives, allowing hard mining to be exploited efficiently during batch construction. The proposed technique is applied to well-established losses and evaluated on the Stanford Online Products, CAR196, and CUB200-2011 datasets for image retrieval and clustering. Outperforming the state of the art, the proposed approach consistently improves the performance of the integrated loss functions with no additional computational cost, and boosts performance further with hard negative mining.
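A hedged sketch of the systematic batch construction described above: for a fixed number of iterations, every batch contains the same representative sample of each selected class. Representative selection here is random, whereas the paper chooses class representatives deliberately (e.g., to enable hard mining); all function and parameter names are illustrative.

```python
import random

def pick_representatives(samples_by_class):
    return {c: random.choice(s) for c, s in samples_by_class.items()}

def build_batches(samples_by_class, classes_per_batch, others_per_class,
                  keep_for_iters, num_batches):
    """samples_by_class: dict label -> list of sample ids. Every batch holds the current
    representative of each chosen class plus `others_per_class` random samples of it."""
    representatives = pick_representatives(samples_by_class)
    batches = []
    for t in range(num_batches):
        if t > 0 and t % keep_for_iters == 0:
            representatives = pick_representatives(samples_by_class)  # refresh periodically
        chosen = random.sample(list(samples_by_class), classes_per_batch)
        batch = []
        for c in chosen:
            batch.append(representatives[c])
            batch.extend(random.sample(samples_by_class[c], others_per_class))
        batches.append(batch)
    return batches
```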
Deep metric learning (DML) serves to learn an embedding function that projects semantically similar data into nearby regions of the embedding space, and it plays a vital role in many applications such as image retrieval and face recognition. However, the performance of DML methods often depends heavily on the sampling method used to select effective data from the embedding space during training. In practice, the embeddings are obtained by a deep model, and the embedding space is often left barren in some regions due to the lack of training points, leading to the so-called "missing embedding" issue. This issue can impair sample quality and thus degrade DML performance. In this work, we investigate how to alleviate the "missing embedding" issue to improve sampling quality and achieve effective DML. To this end, we propose a Densely-Anchored Sampling (DAS) scheme that treats the embeddings of data points as "anchors" and exploits the embedding space around the anchors to densely produce embeddings without corresponding data points. Specifically, we propose to exploit the embedding space around a single anchor with Discriminative Feature Scaling (DFS) and around multiple anchors with Memorized Transformation Shifting (MTS). In this way, with embeddings both with and without data points, we are able to provide more embeddings to facilitate the sampling process and boost DML performance. Our method is effortlessly integrated into existing DML frameworks and improves them without bells and whistles. Extensive experiments on three benchmark datasets demonstrate the superiority of our method.
Deep metric learning aims to learn an embedding space where semantically similar samples are close together and dissimilar ones are pushed apart. To explore more hard and informative training signals for augmentation and generalization, recent methods focus on generating synthetic samples to boost metric learning losses. However, these methods use only deterministic and class-independent generation (e.g., simple linear interpolation), which can cover only a limited part of the distribution space around the original samples. They overlook the wide characteristic changes across different classes and cannot model abundant intra-class variations for generation. Therefore, the generated samples not only lack rich semantics within their class, but may also be noisy signals that disturb training. In this paper, we propose a novel intra-class adaptive augmentation (IAA) framework for deep metric learning. We reasonably estimate intra-class variations for every class and generate adaptive synthetic samples to support hard sample mining and boost metric learning losses. Further, for most datasets that have only a few samples per class, we propose a neighbor correction to revise inaccurate estimations, based on our discovery that similar classes generally have similar variation distributions. Extensive experiments on five benchmarks show our method significantly improves retrieval performance, outperforming the state-of-the-art methods by 3%-6%. Our code is available at https://github.com/darkpromise98/IAA
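A toy sketch in the spirit of the idea above, assuming embeddings and integer labels as tensors: estimate a per-class variation scale and sample synthetic embeddings around each real one. It is a simplified Gaussian stand-in, not the paper's IAA estimation or neighbor correction.

```python
import torch

def adaptive_synthesize(embeddings, labels, num_synth=2):
    """embeddings: (N, D) tensor; labels: (N,) integer tensor.
    Returns synthetic embeddings and their labels."""
    synth, synth_labels = [], []
    for c in labels.unique():
        zc = embeddings[labels == c]
        # Per-class variation scale; a single sample gives zero spread.
        std = zc.std(dim=0, unbiased=False) if len(zc) > 1 else torch.zeros_like(zc[0])
        for z in zc:
            for _ in range(num_synth):
                synth.append(z + std * torch.randn_like(z))
                synth_labels.append(c)
    return torch.stack(synth), torch.stack(synth_labels)
```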
Recent methods for deep metric learning have been focusing on designing different contrastive loss functions between positive and negative pairs of samples so that the learned feature embedding is able to pull positive samples of the same class closer and push negative samples from different classes away from each other. In this work, we recognize that there is a significant semantic gap between features at the intermediate feature layer and class labels at the final output layer. To bridge this gap, we develop a contrastive Bayesian analysis to characterize and model the posterior probabilities of image labels conditioned on their feature similarity in a contrastive learning setting. This contrastive Bayesian analysis leads to a new loss function for deep metric learning. To improve the generalization capability of the proposed method to new classes, we further extend the contrastive Bayesian loss with a metric variance constraint. Our experimental results and ablation studies demonstrate that the proposed contrastive Bayesian metric learning method significantly improves the performance of deep metric learning in both supervised and pseudo-supervised scenarios, outperforming existing methods by a large margin.
Recently, substantial research efforts in deep metric learning (DML) have focused on designing complex pairwise-distance losses, which require convoluted schemes to ease optimization, such as sample mining or pair weighting. The standard cross-entropy loss for classification has been largely overlooked in DML. On the surface, cross-entropy may seem unrelated and irrelevant to metric learning since it does not explicitly involve pairwise distances. However, we provide a theoretical analysis that links cross-entropy to several well-known and recent pairwise losses. Our connections are drawn from two different perspectives: one based on an explicit optimization insight; the other on discriminative and generative views of the mutual information between the labels and the learned features. First, we explicitly demonstrate that cross-entropy is an upper bound on a new pairwise loss, which has a structure similar to various pairwise losses: it minimizes intra-class distances while maximizing inter-class distances. As a result, minimizing cross-entropy can be seen as an approximate bound-optimization (or majorize-minimize) algorithm for minimizing this pairwise loss. Second, we show that, more generally, minimizing cross-entropy is actually equivalent to maximizing the same mutual information, to which we connect several well-known pairwise losses. Furthermore, we show that various standard pairwise losses can be explicitly related to one another via bound relationships. Our findings indicate that cross-entropy represents a proxy for maximizing mutual information, as pairwise losses do, without requiring complex sample-mining heuristics. Our experiments on four standard DML benchmarks strongly support our findings. We obtain state-of-the-art results, outperforming recent and complex DML methods.
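A hedged restatement (our notation, not the paper's) of the two views mentioned above: the mutual information between learned features z and labels y admits a discriminative and a generative decomposition, which is what connects cross-entropy to pairwise losses.

```latex
\begin{align*}
  \mathcal{I}(z; y) &= \mathcal{H}(y) - \mathcal{H}(y \mid z)
    && \text{discriminative view: cross-entropy minimizes an upper bound on } \mathcal{H}(y \mid z) \\
  \mathcal{I}(z; y) &= \mathcal{H}(z) - \mathcal{H}(z \mid y)
    && \text{generative view: spread embeddings globally, tighten them within each class}
\end{align*}
```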
Deep metric learning (DML) models often require strong local and global representations; however, the effective integration of local and global features during DML model training is a challenge. DML models are typically trained with specific loss functions, including pair-based and proxy-based losses. Pair-based loss functions exploit rich semantic relations among data points, yet they often suffer from slow convergence during DML model training. Proxy-based loss functions, on the other hand, often lead to significant speedups in convergence during training, but they typically do not fully explore the rich relations among data points. In this paper, we propose a new DML approach to address these challenges. The proposed approach uses a hybrid loss that integrates pair-based and proxy-based loss functions to leverage rich data-to-data relations as well as fast convergence. Furthermore, it utilizes both global and local features to obtain rich representations during DML model training. Finally, we also use second-order attention for feature enhancement to improve accurate and efficient retrieval. In our experiments, we extensively evaluate the proposed approach on four public benchmarks, and the experimental results show that it achieves state-of-the-art performance on all of them.
We address the problem of distance metric learning (DML), defined as learning a distance consistent with a notion of semantic similarity. Traditionally, supervision for this problem is expressed in the form of sets of points that follow an ordinal relationship: an anchor point x is similar to a set of positive points Y and dissimilar to a set of negative points Z, and a loss defined over these distances is minimized. While the specifics of the optimization differ, in this work we collectively call this type of supervision Triplets and all methods that follow this pattern Triplet-Based methods. These methods are challenging to optimize. A main issue is the need for finding informative triplets, which is usually achieved by a variety of tricks such as increasing the batch size, hard or semi-hard triplet mining, etc. Even with these tricks, the convergence rate of such methods is slow. In this paper we propose to optimize the triplet loss on a different space of triplets, consisting of an anchor data point and similar and dissimilar proxy points, which are learned as well. These proxies approximate the original data points, so that a triplet loss over the proxies is a tight upper bound of the original loss. This proxy-based loss is empirically better behaved. As a result, the proxy loss improves on state-of-the-art results for three standard zero-shot learning datasets, by up to 15 percentage points, while converging three times as fast as other triplet-based losses.
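A minimal sketch of a proxy-based loss in the spirit described above, assuming one learnable proxy per class. This is a simplified ProxyNCA-like variant (softmax over distances to all proxies), not the paper's exact formulation; sizes in the usage line are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ProxyNCALoss(nn.Module):
    def __init__(self, num_classes, embed_dim):
        super().__init__()
        self.proxies = nn.Parameter(torch.randn(num_classes, embed_dim))

    def forward(self, embeddings, labels):
        z = F.normalize(embeddings, dim=1)
        p = F.normalize(self.proxies, dim=1)
        d = torch.cdist(z, p) ** 2          # squared distance of each sample to every proxy
        # Pull each sample toward its class proxy and push it from all other proxies.
        return F.cross_entropy(-d, labels)

# Usage (illustrative sizes): criterion = ProxyNCALoss(num_classes=100, embed_dim=64)
# loss = criterion(model(images), labels)
```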
Deep metric learning has gained much popularity in recent years, following the success of deep learning. However, existing frameworks of deep metric learning based on contrastive loss and triplet loss often suffer from slow convergence, partially because they employ only one negative example while not interacting with the other negative classes in each update. In this paper, we propose to address this problem with a new metric learning objective called multi-class N-pair loss. The proposed objective function firstly generalizes triplet loss by allowing joint comparison among more than one negative example (more specifically, N-1 negative examples) and secondly reduces the computational burden of evaluating deep embedding vectors via an efficient batch construction strategy using only N pairs of examples, instead of (N+1)×N. We demonstrate the superiority of our proposed loss to the triplet loss as well as other competing loss functions for a variety of tasks on several visual recognition benchmarks, including fine-grained object recognition and verification, image clustering and retrieval, and face verification and identification.
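A hedged sketch of the multi-class N-pair loss under the batch construction described above: N classes, one (anchor, positive) pair per class, with each positive also serving as a negative for the other N-1 anchors.

```python
import torch
import torch.nn.functional as F

def n_pair_loss(anchors, positives):
    """anchors, positives: (N, D) embeddings; row i of both comes from class i of the batch."""
    logits = anchors @ positives.t()                     # (N, N) similarity matrix
    targets = torch.arange(anchors.size(0), device=anchors.device)
    # Softmax over the N positives: the diagonal is each anchor's own positive,
    # the other N-1 columns act as negatives.
    return F.cross_entropy(logits, targets)
```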
In image retrieval, standard evaluation metrics rely on score ranking, e.g., average precision (AP). In this paper, we introduce a method for robust and decomposable average precision (ROADMAP) that addresses two major challenges of end-to-end training of deep neural networks with AP: non-differentiability and non-decomposability. First, we propose a new differentiable approximation of the rank function, which provides an upper bound on the AP loss and ensures robust training. Second, we design a simple yet effective loss function to reduce the decomposability gap between the AP over the whole training set and its average batch approximation, for which we provide theoretical guarantees. Extensive experiments on three image retrieval datasets show that ROADMAP outperforms several recent AP approximation methods and highlight the importance of our two contributions. Finally, using ROADMAP for training, deep models yield very good performance, exhibiting state-of-the-art results on the three datasets.
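A hedged sketch of a smooth, differentiable AP surrogate of the kind targeted above, using a sigmoid-based soft rank (in the spirit of Smooth-AP). This is not ROADMAP's specific robust upper bound or decomposability term, and tau is an illustrative temperature.

```python
import torch

def smooth_ap_loss(scores, labels, tau=0.01):
    """scores: (N,) retrieval scores for one query; labels: (N,) binary relevance."""
    pos = labels.bool()
    diff = scores.unsqueeze(0) - scores.unsqueeze(1)      # diff[i, j] = s_j - s_i
    sig = torch.sigmoid(diff / tau)                       # soft indicator of "j ranked above i"
    sig = sig * (1.0 - torch.eye(scores.size(0), device=scores.device))  # drop self-comparisons
    rank_all = 1.0 + sig.sum(dim=1)                       # soft rank among all items
    rank_pos = 1.0 + sig[:, pos].sum(dim=1)               # soft rank among positives only
    ap = (rank_pos[pos] / rank_all[pos]).mean()
    return 1.0 - ap                                       # minimize 1 - (smooth) AP
```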
A family of loss functions built on pair-based computation has been proposed in the literature, providing a myriad of solutions for deep metric learning. In this paper, we provide a general weighting framework for understanding recent pair-based loss functions. Our contributions are three-fold: (1) we establish a General Pair Weighting (GPW) framework, which casts the sampling problem of deep metric learning into a unified view of pair weighting through gradient analysis, providing a powerful tool for understanding recent pair-based loss functions; (2) we show that with GPW, various existing pair-based methods can be compared and discussed comprehensively, with clear differences and key limitations identified; (3) we propose a new loss called multi-similarity loss (MS loss) under the GPW, which is implemented in two iterative steps (i.e., mining and weighting). This allows it to fully consider three similarities for pair weighting, providing a more principled approach for collecting and weighting informative pairs. Finally, the proposed MS loss obtains new state-of-the-art performance on four image retrieval benchmarks, where it outperforms the most recent approaches, such as ABE [14] and HTL [4], by a large margin, e.g., 80.9% → 88.0% on the In-Shop Clothes Retrieval dataset.
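A hedged sketch of a multi-similarity-style loss showing the weighting step only (the mining step is omitted); alpha, beta, and lam follow commonly used values and are illustrative.

```python
import torch
import torch.nn.functional as F

def multi_similarity_loss(embeddings, labels, alpha=2.0, beta=50.0, lam=1.0):
    z = F.normalize(embeddings, dim=1)
    sim = z @ z.t()                                       # pairwise cosine similarities
    n = sim.size(0)
    same = labels.unsqueeze(0) == labels.unsqueeze(1)
    eye = torch.eye(n, dtype=torch.bool, device=sim.device)
    loss = 0.0
    for i in range(n):
        pos = sim[i][same[i] & ~eye[i]]                   # positives of anchor i
        neg = sim[i][~same[i]]                            # negatives of anchor i
        if len(pos) == 0 or len(neg) == 0:
            continue
        loss = loss + (1.0 / alpha) * torch.log1p(torch.exp(-alpha * (pos - lam)).sum())
        loss = loss + (1.0 / beta) * torch.log1p(torch.exp(beta * (neg - lam)).sum())
    return loss / n
```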
Most deep metric learning (DML) methods adopt a strategy that forces all positive samples to be close in the embedding space while keeping them far from negative samples. However, this strategy ignores the internal relations among positive (negative) samples and often leads to overfitting, especially in the presence of hard samples and mislabeled data. In this work, we propose a simple yet effective regularization, namely Listwise Self-Distillation (LSD), which progressively distills the model's own knowledge to adapt a more appropriate distance target for each sample pair within a batch. LSD encourages smoother embeddings and information mining within positive (negative) samples to mitigate overfitting and thus improve generalization. Our LSD can be directly integrated into general DML frameworks. Extensive experiments show that LSD consistently boosts the performance of various metric learning methods on multiple datasets.
Deep metric learning (DML) learns a mapping to an embedding space where similar data are close together and dissimilar data are far apart. However, conventional proxy-based losses for DML have two problems: a gradient problem and difficulty in handling real-world datasets that require multiple local centers per class. In addition, existing DML performance metrics also have issues with stability and flexibility. This paper proposes a multi-proxies anchor (MPA) loss and a normalized discounted cumulative gain (NDCG@k) metric. This study makes the following three contributions: (1) the MPA loss is able to learn real-world datasets by using multiple proxies; (2) the MPA loss improves the training capability of the neural network by solving the gradient problem; (3) the NDCG@k metric encourages comprehensive evaluation on various datasets. Finally, we demonstrate the effectiveness of the MPA loss, which achieves the highest accuracy on two datasets for fine-grained images.
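For reference, a small sketch of computing NDCG@k for a single retrieval query with binary relevance, following the standard gain/discount definition; it illustrates the metric advocated above rather than the paper's evaluation code.

```python
import numpy as np

def ndcg_at_k(relevances, k):
    """relevances: binary relevance of the retrieved items, already ordered by score."""
    rel = np.asarray(relevances, dtype=float)[:k]
    discounts = 1.0 / np.log2(np.arange(2, rel.size + 2))
    dcg = float((rel * discounts).sum())
    ideal = np.sort(np.asarray(relevances, dtype=float))[::-1][:k]
    idcg = float((ideal * discounts[:ideal.size]).sum())
    return dcg / idcg if idcg > 0 else 0.0

# Example: ndcg_at_k([1, 0, 1, 1, 0], k=3) ≈ 0.70
```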
Deep embeddings answer one simple question: How similar are two images? Learning these embeddings is the bedrock of verification, zero-shot learning, and visual search. The most prominent approaches optimize a deep convolutional network with a suitable loss function, such as contrastive loss or triplet loss. While a rich line of work focuses solely on the loss functions, we show in this paper that selecting training examples plays an equally important role. We propose distance weighted sampling, which selects more informative and stable examples than traditional approaches. In addition, we show that a simple margin based loss is sufficient to outperform all other loss functions. We evaluate our approach on the Stanford Online Products, CAR196, and the CUB200-2011 datasets for image retrieval and clustering, and on the LFW dataset for face verification. Our method achieves state-of-the-art performance on all of them.
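A hedged sketch of the two ingredients described above, on L2-normalized embeddings: distance-weighted sampling weights for negatives and the margin-based loss. The default alpha and beta are illustrative (the paper learns the boundary beta), and the cutoff handling is simplified.

```python
import torch
import torch.nn.functional as F

def distance_weights(dist, n_dim, cutoff=0.5):
    """Weights ~ 1/q(d), where q(d) ∝ d^(n-2) (1 - d^2/4)^((n-3)/2) is the pairwise-distance
    density of points uniform on the unit sphere, so negatives are drawn ~uniformly over distance."""
    d = dist.clamp(min=cutoff)
    log_q = (n_dim - 2.0) * torch.log(d) \
        + 0.5 * (n_dim - 3.0) * torch.log((1.0 - 0.25 * d ** 2).clamp(min=1e-8))
    w = torch.exp(-log_q)
    return w / w.sum()

def margin_loss(d_ap, d_an, alpha=0.2, beta=1.2):
    """Margin-based loss on anchor-positive and anchor-negative distances."""
    return F.relu(alpha + d_ap - beta) + F.relu(alpha - d_an + beta)
```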
Deep Metric Learning (DML) learns a non-linear semantic embedding from input data that brings similar pairs together while keeping dissimilar data away from each other. To this end, many different methods are proposed in the last decade with promising results in various applications. The success of a DML algorithm greatly depends on its loss function. However, no loss function is perfect, and it deals only with some aspects of an optimal similarity embedding. Besides, the generalizability of the DML on unseen categories during the test stage is an important matter that is not considered by existing loss functions. To address these challenges, we propose novel approaches to combine different losses built on top of a shared deep feature extractor. The proposed ensemble of losses enforces the deep model to extract features that are consistent with all losses. Since the selected losses are diverse and each emphasizes different aspects of an optimal semantic embedding, our effective combining methods yield a considerable improvement over any individual loss and generalize well on unseen categories. Here, there is no limitation in choosing loss functions, and our methods can work with any set of existing ones. Besides, they can optimize each loss function as well as its weight in an end-to-end paradigm with no need to adjust any hyper-parameter. We evaluate our methods on some popular datasets from the machine vision domain in conventional Zero-Shot-Learning (ZSL) settings. The results are very encouraging and show that our methods outperform all baseline losses by a large margin in all datasets.
Deep metric learning aims to learn an embedding space where the distances between data reflect their class equivalence, even when their classes are unseen during training. However, the limited number of classes available in training precludes generalization of the learned embedding space. Motivated by this, we introduce a new data augmentation approach that synthesizes novel classes and their embedding vectors. Our approach can provide rich semantic information to an embedding model and improve its generalization by augmenting the training data with novel classes in addition to the original ones. We implement this idea by learning and exploiting a conditional generative model, which, given a class label and noise, produces a random embedding vector of that class. Our proposed generator allows the loss to exploit richer class relations by augmenting realistic and diverse classes, resulting in better generalization to unseen samples. Experimental results on public benchmark datasets demonstrate that our method clearly enhances the performance of proxy-based losses.
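A hedged sketch of a conditional generator of the kind described above: given a class label and noise, it outputs a synthetic embedding vector for that class. The architecture, class names, and sizes are illustrative, not the paper's model.

```python
import torch
import torch.nn as nn

class ClassConditionalGenerator(nn.Module):
    """Given a class label and noise, produce a synthetic embedding vector for that class."""
    def __init__(self, num_classes, noise_dim, embed_dim, hidden=256):
        super().__init__()
        self.label_embed = nn.Embedding(num_classes, hidden)
        self.net = nn.Sequential(
            nn.Linear(hidden + noise_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, embed_dim),
        )

    def forward(self, labels, noise):
        return self.net(torch.cat([self.label_embed(labels), noise], dim=1))

# Usage (illustrative sizes): g = ClassConditionalGenerator(100, 32, 128)
# fake_embeddings = g(torch.tensor([3, 7]), torch.randn(2, 32))
```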
Jitendra Malik once said, "Supervision is the opium of the AI researcher". Most deep learning techniques heavily rely on extreme amounts of human labels to work effectively. In today's world, the rate of data creation greatly surpasses the rate of data annotation. Full reliance on human annotations is just a temporary means to solve current closed problems in AI. In reality, only a tiny fraction of data is annotated. Annotation Efficient Learning (AEL) is a study of algorithms to train models effectively with fewer annotations. To thrive in AEL environments, we need deep learning techniques that rely less on manual annotations (e.g., image, bounding-box, and per-pixel labels), but learn useful information from unlabeled data. In this thesis, we explore five different techniques for handling AEL.
Stochastic optimization of the area under the precision-recall curve (AUPRC) is a crucial problem in machine learning. Although various algorithms have been studied extensively for AUPRC optimization, generalization is guaranteed only in the multi-query case. In this work, we present the first trial in the one-shot generalization of stochastic AUPRC optimization. Toward broader generalization bounds, we focus on algorithm-dependent generalization. There are both algorithmic and theoretical obstacles on the way to this goal. From an algorithmic standpoint, we notice that most existing stochastic estimators are biased only when the sampling strategy is biased, and are unstable due to non-decomposability. To address these issues, we propose a sampling-rate-invariant unbiased stochastic estimator with superior stability. On top of this, AUPRC optimization is formulated as a compositional optimization problem, and a stochastic algorithm is proposed to solve it. From a theoretical standpoint, standard techniques of algorithm-dependent generalization analysis cannot be directly applied to such listwise compositional optimization problems. To fill this gap, we extend model stability from instance-wise losses to listwise losses and bridge the corresponding generalization and stability. In addition, we construct state-transition matrices to describe the recurrence of stability and simplify the calculations via the matrix spectrum. In practice, experimental results on three image retrieval datasets demonstrate the effectiveness and soundness of our framework.
The contrastive loss has long been a key ingredient of deep metric learning and is now becoming more popular due to the success of self-supervised learning. Recent research has shown the benefit of decomposing this loss into two terms that act in complementary ways when learning the representation network: a positive term and an entropy term. Although the overall loss is thus defined as a combination of the two terms, the balance between them is often hidden behind implementation details and is largely ignored and suboptimal in practice. In this work, we treat the balance of the contrastive loss as a hyperparameter-optimization problem and propose a coordinate-descent-based search method that efficiently finds the hyperparameters that optimize evaluation performance. In the process, we extend existing balance analyses to the contrastive margin loss, include the batch size in the balance, and explain how to aggregate the loss elements of a batch so as to maintain near-optimal performance over a wider range. Extensive experiments on benchmarks from deep metric learning and self-supervised learning show that our method finds the optimal hyperparameters faster than other commonly used search methods.
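A hedged sketch of a coordinate-descent hyperparameter search of the kind described above: optimize one balance hyperparameter at a time while holding the others fixed. The `evaluate` callable, the grid structure, and the sweep count are illustrative placeholders for training plus validation with the given hyperparameters.

```python
def coordinate_descent_search(evaluate, grids, init, sweeps=2):
    """grids: dict name -> list of candidate values; init: dict name -> starting value.
    `evaluate(**hparams)` should train/validate and return a score to maximize."""
    best = dict(init)
    best_score = evaluate(**best)
    for _ in range(sweeps):
        for name, candidates in grids.items():      # sweep one hyperparameter at a time
            for value in candidates:
                trial = dict(best, **{name: value})
                score = evaluate(**trial)
                if score > best_score:
                    best, best_score = trial, score
    return best, best_score
```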
Proxy-based deep metric learning (DML) learns deep representations by embedding images close to their class representatives (proxies), most commonly with respect to the angle between them. However, this disregards the embedding norm, which can carry additional beneficial context such as class- or image-intrinsic uncertainty. In addition, proxy-based DML struggles to learn intra-class structures. To address both issues at once, we introduce non-isotropic probabilistic proxy-based DML. We model images as directional von Mises-Fisher (vMF) distributions on the hypersphere, which can reflect image-intrinsic uncertainty. Furthermore, we provide non-isotropic von Mises-Fisher (nivMF) distributions for the class proxies to better represent complex class-specific variances. To measure the proxy-to-image distances between these models, we develop and investigate multiple distribution-to-point and distribution-to-distribution metrics. Each framework choice is motivated by a set of ablation studies, which showcase beneficial properties of our probabilistic approach to proxy-based DML, such as uncertainty awareness, better-behaved gradients during training, and overall improved generalization performance. The latter is especially reflected in the competitive performance on standard DML benchmarks, where our approach compares favorably, suggesting that existing proxy-based DML can benefit greatly from a more probabilistic treatment. Code is available at github.com/explainableml/probabilistic_deep_metric_learning.
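For reference, the isotropic von Mises-Fisher density on the unit hypersphere used to model image embeddings (our notation); the nivMF proxy distributions described above generalize the single concentration parameter to a non-isotropic, per-direction scaling.

```latex
p(\mathbf{z} \mid \boldsymbol{\mu}, \kappa)
  = C_d(\kappa)\,\exp\!\big(\kappa\,\boldsymbol{\mu}^{\top}\mathbf{z}\big),
\qquad \|\mathbf{z}\| = \|\boldsymbol{\mu}\| = 1,\ \kappa \ge 0.
```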