Transferring the knowledge learned from large scale datasets (e.g., ImageNet) via fine-tuning offers an effective solution for domain-specific fine-grained visual categorization (FGVC) tasks (e.g., recognizing bird species or car make & model). In such scenarios, data annotation often calls for specialized domain knowledge and thus is difficult to scale. In this work, we first tackle a problem in large scale FGVC. Our method won first place in the iNaturalist 2017 large scale species classification challenge. Central to the success of our approach is a training scheme that uses higher image resolution and deals with the long-tailed distribution of training data. Next, we study transfer learning via fine-tuning from large scale datasets to small scale, domain-specific FGVC datasets. We propose a measure to estimate domain similarity via Earth Mover's Distance and demonstrate that transfer learning benefits from pre-training on a source domain that is similar to the target domain by this measure. Our proposed transfer learning outperforms ImageNet pre-training and obtains state-of-the-art results on multiple commonly used FGVC datasets.
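To make the Earth Mover's Distance measure above concrete, here is a minimal sketch of a domain-similarity score between two datasets represented by per-class feature centroids, using the POT (Python Optimal Transport) library. The centroid representation, the exponential mapping, and the scale parameter gamma are illustrative assumptions, not the authors' exact implementation.

```python
# Sketch: EMD-based domain similarity between two domains, each summarized
# by per-class feature centroids weighted by class frequency.
# Requires: pip install pot
import numpy as np
import ot  # Python Optimal Transport


def domain_similarity(src_centroids, src_weights, tgt_centroids, tgt_weights, gamma=0.01):
    """Map EMD between class centroids to a similarity in (0, 1]; gamma is a free scale."""
    src_w = np.asarray(src_weights, dtype=np.float64)
    tgt_w = np.asarray(tgt_weights, dtype=np.float64)
    src_w /= src_w.sum()  # EMD expects normalized histograms
    tgt_w /= tgt_w.sum()
    # Pairwise Euclidean cost between source and target class centroids.
    cost = ot.dist(np.asarray(src_centroids), np.asarray(tgt_centroids), metric="euclidean")
    emd = ot.emd2(src_w, tgt_w, cost)  # optimal-transport cost (the EMD value)
    return np.exp(-gamma * emd)        # smaller distance -> higher similarity
```

Under this score, a source domain with centroids close to the target's (in the chosen feature space) yields a similarity near 1, matching the abstract's claim that pre-training on a similar source helps transfer.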
Existing image classification datasets used in computer vision tend to have a uniform distribution of images across object categories. In contrast, the natural world is heavily imbalanced, as some species are more abundant and easier to photograph than others. To encourage further progress in challenging real-world conditions, we present the iNaturalist species classification and detection dataset, consisting of 859,000 images from over 5,000 different species of plants and animals. It features visually similar species, captured in a wide variety of situations, from all over the world. Images were collected with different camera types, have varying image quality, feature a large class imbalance, and have been verified by multiple citizen scientists. We discuss the collection of the dataset and present extensive baseline experiments using state-of-the-art computer vision classification and detection models. Results show that current non-ensemble methods achieve only 67% top-one classification accuracy, illustrating the difficulty of the dataset. In particular, we observe poor results for classes with small numbers of training examples, suggesting that more attention is needed for low-shot learning.
Transfer learning is a cornerstone of computer vision, yet little work has been done to evaluate the relationship between architecture and transfer. An implicit hypothesis in modern computer vision research is that models that perform better on ImageNet necessarily perform better on other vision tasks. However, this hypothesis has never been systematically tested. Here, we compare the performance of 16 classification networks on 12 image classification datasets. We find that, when networks are used as fixed feature extractors or fine-tuned, there is a strong correlation between ImageNet accuracy and transfer accuracy (r = 0.99 and 0.96, respectively). In the former setting, we find that this relationship is very sensitive to the way in which networks are trained on ImageNet; many common forms of regularization slightly improve ImageNet accuracy but yield penultimate-layer features that are much worse for transfer learning. Additionally, we find that, on two small fine-grained image classification datasets, pretraining on ImageNet provides minimal benefits, indicating that the learned features from ImageNet do not transfer well to fine-grained tasks. Together, our results show that ImageNet architectures generalize well across datasets, but ImageNet features are less general than previously suggested.
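As an illustration of the correlation analysis described above, the toy snippet below computes the Pearson correlation between ImageNet accuracy and mean transfer accuracy across a set of models; the numbers are hypothetical placeholders, not values from the study.

```python
# Toy version of the paper's correlation check (the real study spans
# 16 networks and 12 datasets). pearsonr returns (r, p-value).
from scipy.stats import pearsonr

imagenet_top1 = [71.5, 73.3, 76.1, 77.2, 78.8, 80.1]  # hypothetical ImageNet accuracies
transfer_acc = [81.0, 82.4, 84.9, 85.5, 87.0, 88.3]   # hypothetical mean transfer accuracies

r, p = pearsonr(imagenet_top1, transfer_acc)
print(f"Pearson r = {r:.2f} (p = {p:.3f})")
```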
Almost all state-of-the-art neural networks for computer vision tasks are trained by (1) pre-training on a large-scale dataset and (2) fine-tuning on the target dataset. This strategy helps reduce dependence on the target dataset and improves convergence rate and generalization on the target task. Although pre-training on large datasets is very useful, its foremost disadvantage is the high training cost. To address this, we propose efficient filtering methods to select relevant subsets from the pre-training dataset. In addition, we find that lowering the image resolution during pre-training offers a favorable trade-off between cost and performance. We validate our techniques by pre-training on ImageNet in both unsupervised and supervised settings and fine-tuning on a diverse collection of target datasets and tasks. Our proposed methods drastically reduce pre-training cost and provide strong performance boosts. Finally, we improve standard ImageNet pre-training by 1-3% by fine-tuning the available models on our subsets and pre-training on a dataset filtered from a larger-scale dataset.
Transfer learning allows us to leverage the knowledge gained from one task to help solve another, related task. In modern computer vision research, a central question is which architecture performs better on a given dataset. In this paper, we compare the performance of 14 pre-trained ImageNet models on a histopathologic cancer detection dataset, where each model is configured as a naive model, a feature-extractor model, or a fine-tuned model. DenseNet161 is shown to achieve high precision, while ResNet101 achieves high recall. High-precision models are suitable when the cost of follow-up examination is high, whereas lower-precision but high-recall/sensitivity models can be used when the cost of follow-up examination is low. The results also show that transfer learning helps models converge faster.
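The naive / feature-extractor / fine-tuned configurations mentioned above can be sketched as follows; the choice of DenseNet161 as the example backbone and the two-class head (tumor / no tumor) are illustrative assumptions.

```python
# Sketch: the three transfer configurations, using torchvision models as
# stand-ins for the 14 ImageNet networks compared in the paper.
import torch.nn as nn
from torchvision import models


def make_model(mode: str, num_classes: int = 2) -> nn.Module:
    # "naive": random initialization (no transfer); otherwise start from ImageNet weights.
    weights = None if mode == "naive" else models.DenseNet161_Weights.IMAGENET1K_V1
    model = models.densenet161(weights=weights)
    if mode == "feature_extractor":
        for p in model.parameters():
            p.requires_grad = False  # freeze the pre-trained backbone
    # Replace the ImageNet head with a task-specific classifier; in
    # "fine_tuned" mode every parameter stays trainable.
    model.classifier = nn.Linear(model.classifier.in_features, num_classes)
    return model
```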
Image classification with small datasets has been an active research area in the recent past. However, as research in this scope is still in its infancy, two key ingredients are missing for ensuring reliable and truthful progress: a systematic and extensive overview of the state of the art, and a common benchmark to allow for objective comparisons between published methods. This article addresses both issues. First, we systematically organize and connect past studies to consolidate a community that is currently fragmented and scattered. Second, we propose a common benchmark that allows for an objective comparison of approaches. It consists of five datasets spanning various domains (e.g., natural images, medical imagery, satellite data) and data types (RGB, grayscale, multispectral). We use this benchmark to re-evaluate the standard cross-entropy baseline and ten existing methods published between 2017 and 2021 at renowned venues. Surprisingly, we find that thorough hyper-parameter tuning on held-out validation data results in a highly competitive baseline and highlights a stunted growth of performance over the years. Indeed, only a single specialized method dating back to 2019 clearly wins our benchmark and outperforms the baseline classifier.
The success of deep learning in vision can be attributed to: (a) models with high capacity; (b) increased computational power; and (c) availability of large-scale labeled data. Since 2012, there have been significant advances in the representation capabilities of models and the computational capabilities of GPUs. But the size of the biggest dataset has surprisingly remained constant. What will happen if we increase the dataset size by 10× or 100×? This paper takes a step towards clearing the clouds of mystery surrounding the relationship between 'enormous data' and visual deep learning. By exploiting the JFT-300M dataset, which has more than 375M noisy labels for 300M images, we investigate how the performance of current vision tasks would change if this data was used for representation learning. Our paper delivers some surprising (and some expected) findings. First, we find that performance on vision tasks increases logarithmically with the volume of training data. Second, we show that representation learning (or pretraining) still holds a lot of promise. One can improve performance on many vision tasks by just training a better base model. Finally, as expected, we present new state-of-the-art results for different vision tasks including image classification, object detection, semantic segmentation, and human pose estimation. Our sincere hope is that this inspires the vision community to not undervalue the data and to develop collective efforts in building larger datasets.
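The logarithmic trend reported above can be checked on one's own measurements with a simple least-squares fit of accuracy against log dataset size; the numbers below are hypothetical placeholders, not results from the paper.

```python
# Sketch: fit acc ≈ a * log10(n) + b to (dataset size, accuracy) pairs.
import numpy as np

n_images = np.array([1e6, 1e7, 1e8, 3e8])      # hypothetical training-set sizes
accuracy = np.array([60.0, 66.5, 73.2, 76.8])  # hypothetical downstream scores

a, b = np.polyfit(np.log10(n_images), accuracy, deg=1)
print(f"fit: acc ≈ {a:.2f} * log10(n) + {b:.2f}")
```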
We describe an approach to learning from long-tailed, imbalanced datasets that are prevalent in real-world settings. Here, the challenge is to learn accurate "few-shot" models for classes in the tail of the class distribution, for which little data is available. We cast this problem as transfer learning, where knowledge from the data-rich classes in the head of the distribution is transferred to the data-poor classes in the tail. Our key insights are as follows. First, we propose to transfer meta-knowledge about learning-to-learn from the head classes. This knowledge is encoded with a meta-network that operates on the space of model parameters and is trained to predict many-shot model parameters from few-shot model parameters. Second, we transfer this meta-knowledge in a progressive manner, from classes in the head to the "body", and from the "body" to the tail. That is, we transfer knowledge in a gradual fashion, regularizing meta-networks for few-shot regression with those trained with more training data. This allows our final network to capture a notion of model dynamics, which predicts how model parameters are likely to change as more training data is gradually added. We demonstrate results on image classification datasets (SUN, Places, and ImageNet) tuned for the long-tailed setting, which significantly outperform common heuristics such as data resampling or reweighting.
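A heavily simplified sketch of the meta-network idea above, assuming per-class classifier weights are flattened into fixed-size vectors; the MLP architecture, residual formulation, and dimensions are illustrative assumptions, and the progressive head-to-body-to-tail schedule is omitted.

```python
# Sketch: a meta-network on the space of model parameters, regressing
# many-shot classifier weights from few-shot classifier weights.
import torch
import torch.nn as nn


class MetaNetwork(nn.Module):
    def __init__(self, param_dim: int, hidden: int = 512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(param_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, param_dim),
        )

    def forward(self, few_shot_params: torch.Tensor) -> torch.Tensor:
        # Residual formulation: predict a correction to the few-shot weights.
        return few_shot_params + self.net(few_shot_params)


# Training step (sketch): on head classes, both few-shot and many-shot
# weight vectors are observable, so the mapping can be learned by regression.
meta = MetaNetwork(param_dim=2048)
few, many = torch.randn(32, 2048), torch.randn(32, 2048)  # placeholder weight vectors
loss = nn.functional.mse_loss(meta(few), many)
```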
Transfer of pre-trained representations improves sample efficiency and simplifies hyperparameter tuning when training deep neural networks for vision. We revisit the paradigm of pre-training on large supervised datasets and fine-tuning the model on a target task. We scale up pre-training and propose a simple recipe that we call Big Transfer (BiT). By combining a few carefully selected components, and transferring using a simple heuristic, we achieve strong performance on over 20 datasets. BiT performs well across a surprisingly wide range of data regimes, from 1 example per class to 1M total examples. BiT achieves 87.5% top-1 accuracy on ILSVRC-2012, 99.4% on CIFAR-10, and 76.3% on the 19-task Visual Task Adaptation Benchmark (VTAB). On small datasets, BiT attains 76.8% on ILSVRC-2012 with 10 examples per class, and 97.0% on CIFAR-10 with 10 examples per class. We conduct a detailed analysis of the main components that lead to high transfer performance.
To exploit potentially minute and subtle differences, fine-grained classifiers gather information about inter-class variations. The task is very challenging due to differences in color, viewpoint, and structure among instances of the same class. Classification becomes harder still because of similarities to other classes and viewpoint differences within a class. In this work, we investigate the performance of landmark general-purpose CNN classifiers, which achieved top-notch results on large-scale classification datasets, on fine-grained datasets, and compare them to state-of-the-art fine-grained classifiers. We pose two specific questions: (i) Can general CNN classifiers achieve results comparable to fine-grained classifiers? (ii) Do general CNN classifiers need any fine-grained-specific information to improve? Throughout this work, we train general CNN classifiers without introducing anything specific to fine-grained datasets. We perform extensive evaluations on six datasets to determine whether fine-grained classifiers are able to outperform this baseline in our experiments.
Jitendra Malik once said, "Supervision is the opium of the AI researcher". Most deep learning techniques heavily rely on extreme amounts of human labels to work effectively. In today's world, the rate of data creation greatly surpasses the rate of data annotation. Full reliance on human annotations is just a temporary means to solve current closed problems in AI. In reality, only a tiny fraction of data is annotated. Annotation Efficient Learning (AEL) is a study of algorithms to train models effectively with fewer annotations. To thrive in AEL environments, we need deep learning techniques that rely less on manual annotations (e.g., image, bounding-box, and per-pixel labels), but learn useful information from unlabeled data. In this thesis, we explore five different techniques for handling AEL.
Fine-grained image analysis (FGIA) is a longstanding and fundamental problem in computer vision and pattern recognition, and underpins a diverse set of real-world applications. The task of FGIA is to analyze visual objects from subordinate categories, e.g., species of birds or models of cars. The small inter-class and large intra-class variation inherent to fine-grained analysis makes it a challenging problem. Capitalizing on advances in deep learning, in recent years we have witnessed remarkable progress in deep-learning-powered FGIA. In this paper we present a systematic survey of these advances, in which we attempt to re-define and broaden the field of FGIA by consolidating two fundamental fine-grained research areas: fine-grained image recognition and fine-grained image retrieval. In addition, we review other key issues of FGIA, such as publicly available benchmark datasets and related domain-specific applications. We conclude by highlighting several research directions and open problems that need further exploration by the community.
Due to the scarcity of labeled data, using models pre-trained on ImageNet is the de facto standard in remote sensing scene classification. Although several larger high-resolution remote sensing (HRRS) datasets have recently appeared with the goal of establishing new benchmarks, attempts to train models from scratch on these datasets have been sporadic. In this paper, we show that training models from scratch on several newer datasets yields results comparable to fine-tuning models pre-trained on ImageNet. Furthermore, the representations learned on HRRS datasets transfer to other HRRS scene classification tasks better than, or at least as well as, those learned on ImageNet. Finally, we show that in many cases the best representations are obtained by a second round of pre-training on in-domain data, i.e., domain-adaptive pre-training. The source code and pre-trained models are available at \url{https://github.com/risojevicv/rssc-transfer}.
As a dominant paradigm, fine-tuning a pre-trained model on the target data is widely used in many deep learning applications, especially for small datasets. However, recent studies have empirically shown that training from scratch yields a final performance no worse than this pre-training strategy once the number of training iterations is increased. In this work, we revisit this phenomenon from the perspective of generalization analysis, which is popular in learning theory. Our results reveal that the final prediction accuracy may depend only weakly on the pre-trained model, especially with large numbers of training iterations. This observation motivates us to leverage pre-training data for fine-tuning, since this data is also available at fine-tuning time. Our generalization result for using pre-training data shows that the final performance on the target task can be improved when an appropriate subset of the pre-training data is included in fine-tuning. With the insights from these theoretical findings, we propose a novel selection strategy to pick a subset of the pre-training data to help improve generalization on the target task. Extensive experimental results on image classification tasks across 8 benchmark datasets verify the effectiveness of the proposed data-selection-based fine-tuning pipeline.
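The abstract does not spell out the selection criterion, so the sketch below shows one plausible strategy in its spirit: score each pre-training sample by cosine similarity to the target data in feature space and keep the top fraction for inclusion in fine-tuning.

```python
# Sketch: select a target-relevant subset of the pre-training data.
# The cosine-similarity-to-centroid scoring is an assumption for illustration.
import numpy as np


def select_pretrain_subset(pretrain_feats, target_feats, keep_frac=0.1):
    """pretrain_feats: (N, d), target_feats: (M, d) L2-normalized feature rows."""
    target_center = target_feats.mean(axis=0)
    target_center /= np.linalg.norm(target_center)
    scores = pretrain_feats @ target_center   # cosine similarity to the target centroid
    k = max(1, int(keep_frac * len(pretrain_feats)))
    return np.argsort(scores)[::-1][:k]       # indices of the most relevant samples
```

The returned indices would then be used to mix the selected pre-training images into the fine-tuning set alongside the target data.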
There is a growing interest in learning data representations that work well for many different types of problems and data. In this paper, we look in particular at the task of learning a single visual representation that can be successfully utilized in the analysis of very different types of images, from dog breeds to stop signs and digits. Inspired by recent work on learning networks that predict the parameters of another, we develop a tunable deep network architecture that, by means of adapter residual modules, can be steered on the fly to diverse visual domains. Our method achieves a high degree of parameter sharing while maintaining or even improving the accuracy of domain-specific representations. We also introduce the Visual Decathlon Challenge, a benchmark that evaluates the ability of representations to capture simultaneously ten very different visual domains and measures their ability to perform well uniformly.
The long-tail distribution of the visual world poses great challenges for deep learning based classification models on how to handle the class imbalance problem. Existing solutions usually involve class-balancing strategies, e.g., loss re-weighting, data re-sampling, or transfer learning from head to tail classes, but most of them adhere to the scheme of jointly learning representations and classifiers. In this work, we decouple the learning procedure into representation learning and classification, and systematically explore how different balancing strategies affect them for long-tailed recognition. The findings are surprising: (1) data imbalance might not be an issue in learning high-quality representations; (2) with representations learned with the simplest instance-balanced (natural) sampling, it is also possible to achieve strong long-tailed recognition ability by adjusting only the classifier. We conduct extensive experiments and set new state-of-the-art performance on common long-tailed benchmarks like ImageNet-LT, Places-LT and iNaturalist, showing that it is possible to outperform carefully designed losses, sampling strategies, and even complex modules with memory, by using a straightforward approach that decouples representation and classification. Our code is available at https://github.com/facebookresearch/classifier-balancing.
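A condensed sketch of the decoupled recipe described above: stage 1 learns representations with plain instance-balanced sampling, then stage 2 freezes the backbone and retrains only the classifier under class-balanced sampling. The backbone, dataset, and optimizer details below are placeholders, not the paper's exact configuration.

```python
# Sketch: stage 2 of the decoupled recipe (classifier retraining).
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, WeightedRandomSampler


def stage2_classifier_retrain(backbone, dataset, labels, num_classes, feat_dim, epochs=10):
    for p in backbone.parameters():
        p.requires_grad = False  # keep the stage-1 representation fixed

    # Class-balanced sampling: each sample drawn with weight 1 / (its class count).
    labels_t = torch.as_tensor(labels)
    counts = torch.bincount(labels_t, minlength=num_classes).float()
    weights = 1.0 / counts[labels_t]
    sampler = WeightedRandomSampler(weights, num_samples=len(labels), replacement=True)
    loader = DataLoader(dataset, batch_size=256, sampler=sampler)

    classifier = nn.Linear(feat_dim, num_classes)  # backbone assumed to output feat_dim features
    opt = torch.optim.SGD(classifier.parameters(), lr=0.1, momentum=0.9)
    for _ in range(epochs):
        for x, y in loader:
            loss = nn.functional.cross_entropy(classifier(backbone(x)), y)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return classifier
```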
With the rapid increase of large-scale, real-world datasets, it becomes critical to address the problem of long-tailed data distribution (i.e., a few classes account for most of the data, while most classes are under-represented). Existing solutions typically adopt class re-balancing strategies such as re-sampling and re-weighting based on the number of observations for each class. In this work, we argue that as the number of samples increases, the additional benefit of a newly added data point will diminish. We introduce a novel theoretical framework to measure data overlap by associating with each sample a small neighboring region rather than a single point. The effective number of samples is defined as the volume of samples and can be calculated by a simple formula (1 − β^n)/(1 − β), where n is the number of samples and β ∈ [0, 1) is a hyperparameter. We design a re-weighting scheme that uses the effective number of samples for each class to re-balance the loss, thereby yielding a class-balanced loss. Comprehensive experiments are conducted on artificially induced long-tailed CIFAR datasets and large-scale datasets including ImageNet and iNaturalist. Our results show that when trained with the proposed class-balanced loss, the network is able to achieve significant performance gains on long-tailed datasets. * The work was performed while Yin Cui and Yang Song worked at Google (a subsidiary of Alphabet Inc.).
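The effective-number formula stated above translates directly into per-class loss weights; the rescaling so that the weights sum to the number of classes is a common convention rather than something the abstract specifies.

```python
# Sketch: class-balanced loss weights from the effective number of samples.
import numpy as np


def class_balanced_weights(samples_per_class, beta=0.999):
    n = np.asarray(samples_per_class, dtype=np.float64)
    effective_num = (1.0 - beta ** n) / (1.0 - beta)  # E_n = (1 - beta^n) / (1 - beta)
    weights = 1.0 / effective_num                     # rarer classes get larger weights
    return weights * len(n) / weights.sum()           # rescale so weights sum to #classes


# Example: a long-tailed class distribution (5000 head images, 5 tail images).
print(class_balanced_weights([5000, 500, 50, 5]))
```

These weights can then be passed to a standard weighted cross-entropy (or combined with focal loss, as the paper's family of class-balanced losses allows) to re-balance training.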
Most weed species can adversely impact agricultural productivity by competing for nutrients required by high-value crops. Manual weeding is not practical for large cropping areas, and many studies have been undertaken to develop automatic weed management systems for agricultural crops. In this process, one of the major tasks is to recognize the weeds from images. However, weed recognition is a challenging task, because weeds and crop plants can be similar in color, texture, and shape, which can be further aggravated by the imaging conditions and the geographic or weather conditions when the images are recorded. Advanced machine learning techniques can be used to recognize weeds from imagery. In this paper, we investigate five state-of-the-art deep neural networks, namely VGG16, ResNet-50, Inception-V3, Inception-ResNet-v2, and MobileNetV2, and evaluate their performance for weed recognition. We use several experimental settings and multiple dataset combinations. In particular, we construct a large dataset by combining several smaller datasets, mitigate the class imbalance by data augmentation, and use this dataset in benchmarking the deep neural networks. We investigate the use of transfer learning techniques, keeping the pre-trained weights for extracting features and fine-tuning them with the images of the crop and weed datasets. We find that VGG16 performs better on small-scale datasets, while ResNet-50 performs better than the other deep networks on the large combined dataset.
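A sketch of the kind of augmentation pipeline that could be used to enlarge under-represented weed classes before benchmarking; the specific transforms and parameters below are assumptions, not the paper's exact settings.

```python
# Sketch: augmentation pipeline for oversampling minority weed classes.
from torchvision import transforms

weed_augment = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.7, 1.0)),  # vary framing
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(degrees=20),                 # vary orientation
    transforms.ColorJitter(brightness=0.2, contrast=0.2, saturation=0.2),  # vary lighting
    transforms.ToTensor(),
])
# Minority classes can be oversampled by applying this pipeline to repeated
# copies of their images until class counts are roughly balanced.
```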
Transformer-based supervised pre-training achieves strong performance in person re-identification (ReID). However, due to the domain gap between ImageNet and ReID datasets, it usually requires a larger pre-training dataset (e.g., ImageNet-21K) to boost performance, because of the Transformer's strong data-fitting ability. To address this challenge, this work aims to narrow the gap between pre-training and ReID datasets from the perspectives of data and model structure, respectively. We first investigate self-supervised learning of Vision Transformers (ViT) on unlabeled person images (the LUPerson dataset). To further reduce the domain gap and accelerate pre-training, the Catastrophic Forgetting Score (CFS) is proposed to evaluate the gap between pre-training and fine-tuning data. Based on CFS, a subset is selected by sampling relevant data close to the downstream ReID data and filtering irrelevant data out of the pre-training dataset. For the model structure, a ReID-specific module named IBN-based convolution stem (ICS) is proposed to bridge the domain gap by learning more invariant features. Extensive experiments have been conducted to fine-tune the pre-trained models under supervised learning, unsupervised domain adaptation (UDA), and unsupervised learning (USL) settings. We successfully downscale the LUPerson dataset to 50% with no performance degradation. Finally, we achieve state-of-the-art performance on Market-1501 and MSMT17. For example, our ViT-S/16 achieves 91.3%/89.9%/89.6% on Market-1501 for supervised/UDA/USL ReID, respectively. Code and models will be released at https://github.com/michuanhaohao/TransReID-SSL.
Fine-tuning is widely applied in image classification tasks as a transfer learning approach. It reuses knowledge from a source task to learn and attain high performance on a target task, and it can alleviate the challenges of insufficient training data and expensive labeling of new data. However, standard fine-tuning has limited performance on complex data distributions. To address this issue, we propose the Adaptive Multi-Tuning method, which adaptively determines the fine-tuning strategy for each data sample. In this framework, multiple fine-tuning settings and a policy network are defined. The policy network in Adaptive Multi-Tuning can dynamically adjust to the optimal weighting with which different samples are fed into models trained with different fine-tuning strategies. Our method outperforms the standard fine-tuning approach by 1.69% on the FGVC-Aircraft dataset and by 2.79% on Describable Textures, and achieves comparable performance on Stanford Cars, CIFAR-10, and Fashion-MNIST.
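A rough sketch of the adaptive idea, assuming the policy network produces soft weights for combining the outputs of expert models fine-tuned under different strategies; the actual policy architecture and training procedure are not given in the abstract, so everything below is an illustrative assumption.

```python
# Sketch: per-sample soft weighting over experts fine-tuned with
# different strategies, produced by a small policy network.
import torch
import torch.nn as nn


class AdaptiveMultiTuner(nn.Module):
    def __init__(self, experts, feat_dim):
        super().__init__()
        self.experts = nn.ModuleList(experts)           # models fine-tuned with different strategies
        self.policy = nn.Linear(feat_dim, len(experts)) # maps a sample descriptor to expert weights

    def forward(self, x, feats):
        # feats: per-sample descriptor the policy conditions on (assumed here
        # to be, e.g., pooled backbone features of x).
        alpha = torch.softmax(self.policy(feats), dim=-1)         # (B, E) expert weights
        outs = torch.stack([e(x) for e in self.experts], dim=1)   # (B, E, C) expert logits
        return (alpha.unsqueeze(-1) * outs).sum(dim=1)            # (B, C) weighted logits
```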