Due to the scarcity of labeled data, using models pre-trained on ImageNet is a de facto standard in remote sensing scene classification. Although several larger high-resolution remote sensing (HRRS) datasets have recently appeared with the goal of establishing new benchmarks, attempts at training models on these datasets from scratch have been sporadic. In this paper, we show that training models from scratch on several newer datasets yields results comparable to fine-tuning models pre-trained on ImageNet. Furthermore, the representations learned on HRRS datasets transfer to other HRRS scene classification tasks better than, or at least as well as, those learned on ImageNet. Finally, we show that in many cases the best representations are obtained by a second round of pre-training using in-domain data, i.e., domain-adaptive pre-training. The source code and pre-trained models are available at \url{https://github.com/risojevicv/rssc-transfer}.
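As a sketch of the three-stage pipeline (ImageNet weights, in-domain pre-training, target fine-tuning), a minimal PyTorch version could look like the following; the ResNet-50 backbone and the head-swapping pattern are our illustrative choices, not necessarily the authors' exact setup:

```python
import torch
import torchvision

def domain_adaptive_pretrain(num_indomain_classes: int, num_target_classes: int):
    # Round 1: start from ImageNet pre-trained weights.
    model = torchvision.models.resnet50(weights="IMAGENET1K_V1")
    # Round 2: replace the head and pre-train on the in-domain HRRS dataset.
    model.fc = torch.nn.Linear(model.fc.in_features, num_indomain_classes)
    # ... train on the in-domain pre-training set here ...
    # Final stage: replace the head again and fine-tune on the target task.
    model.fc = torch.nn.Linear(model.fc.in_features, num_target_classes)
    # ... fine-tune on the target scene classification set here ...
    return model
```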
Deep transfer learning (DTL) has formed a long-term quest toward enabling deep neural networks (DNNs) to reuse historical experiences as efficiently as humans. This ability is named knowledge transferability. A commonly used paradigm for DTL is to first learn general knowledge (pre-training) and then reuse (fine-tune) it for a specific target task. There are two consensuses on the transferability of pre-trained DNNs: (1) a larger domain gap between pre-training and downstream data brings lower transferability; (2) transferability gradually decreases from lower layers (near the input) to higher layers (near the output). However, these consensuses were basically drawn from experiments based on natural images, which limits their scope of application. This work aims to study and complement them from a broader perspective by proposing a method to measure the transferability of pre-trained DNN parameters. Our experiments on twelve diverse image classification datasets reach conclusions similar to the previous consensuses. More importantly, two new findings are presented: (1) in addition to the domain gap, a larger data amount and greater dataset diversity in the downstream target task also inhibit transferability; (2) although the lower layers learn basic image features, they are usually not the most transferable layers due to their domain sensitivity.
Transferring the knowledge learned from large scale datasets (e.g., ImageNet) via fine-tuning offers an effective solution for domain-specific fine-grained visual categorization (FGVC) tasks (e.g., recognizing bird species or car make & model). In such scenarios, data annotation often calls for specialized domain knowledge and thus is difficult to scale. In this work, we first tackle a problem in large scale FGVC. Our method won first place in iNaturalist 2017 large scale species classification challenge. Central to the success of our approach is a training scheme that uses higher image resolution and deals with the long-tailed distribution of training data. Next, we study transfer learning via fine-tuning from large scale datasets to small scale, domain-specific FGVC datasets. We propose a measure to estimate domain similarity via Earth Mover's Distance and demonstrate that transfer learning benefits from pre-training on a source domain that is similar to the target domain by this measure. Our proposed transfer learning outperforms ImageNet pre-training and obtains state-of-the-art results on multiple commonly used FGVC datasets.
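As an illustration of the domain-similarity measure, the following hedged Python sketch summarizes each domain by class-mean features weighted by class frequency and maps the Earth Mover's Distance between the summaries to a similarity score; the POT library, the Euclidean ground metric, and the decay constant `gamma` are our assumptions for illustration:

```python
import numpy as np
import ot  # Python Optimal Transport: pip install pot

def domain_similarity(src_feats, src_labels, tgt_feats, tgt_labels, gamma=0.01):
    """Summarize each domain by class-mean features weighted by class
    frequency, then map the EMD between the summaries to a similarity."""
    def summarize(feats, labels):
        classes = np.unique(labels)
        protos = np.stack([feats[labels == c].mean(axis=0) for c in classes])
        weights = np.array([(labels == c).sum() for c in classes], dtype=float)
        return protos, weights / weights.sum()

    p_src, w_src = summarize(src_feats, src_labels)
    p_tgt, w_tgt = summarize(tgt_feats, tgt_labels)
    cost = ot.dist(p_src, p_tgt, metric="euclidean")  # ground-distance matrix
    emd = ot.emd2(w_src, w_tgt, cost)                 # Earth Mover's Distance
    return np.exp(-gamma * emd)  # smaller distance -> higher similarity
```

Under this sketch, candidate source domains can be ranked by `domain_similarity` before committing to any fine-tuning run.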
Deep models must learn robust and transferable representations in order to perform well on new domains. Although domain transfer methods (e.g., domain adaptation, domain generalization) have been proposed to learn transferable representations across domains, they are typically applied to ResNet backbones pre-trained on ImageNet. Thus, existing works pay little attention to the effect of pre-training on domain transfer tasks. In this paper, we provide a broad study and in-depth analysis of pre-training for domain adaptation and generalization, covering network architectures, sizes, training losses, and datasets. We observe that simply using a state-of-the-art backbone outperforms existing state-of-the-art domain adaptation baselines, and we set new baselines on Office-Home and DomainNet, improving over prior results by 10.7% and 5.5%, respectively. We hope this work can provide more insights for future domain transfer research.
A common practice in transfer learning is to initialize the downstream model weights by pre-training on a data-rich upstream task. In object detection, the feature backbone is typically initialized with ImageNet classifier weights and fine-tuned on the object detection task. Recent works have shown that this is not strictly necessary under longer training regimes, and have provided recipes for training the backbone from scratch. We investigate the opposite direction of this end-to-end training trend: we show that an extreme form of knowledge preservation, freezing the classifier-initialized backbone, consistently improves many different detection models and leads to considerable resource savings. We hypothesize and verify experimentally that the capacity and structure of the remaining detector components are key factors in leveraging a frozen backbone. Immediate applications of our findings include performance improvements on hard cases, such as detecting long-tailed object categories, as well as compute and memory savings that help make the field more accessible to researchers with fewer computational resources.
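A minimal PyTorch sketch of the frozen-backbone setup follows; the Faster R-CNN model, class count, and optimizer hyperparameters are placeholders for illustration, not the paper's exact recipe:

```python
import torch
import torchvision

# Build a detector whose backbone is initialized from ImageNet classifier
# weights, then freeze the backbone trunk so only the remaining detector
# components (FPN, RPN, heads) receive gradient updates.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(
    weights_backbone="IMAGENET1K_V1", num_classes=91
)
for p in model.backbone.body.parameters():
    p.requires_grad = False  # extreme knowledge preservation: frozen backbone

# Only pass the still-trainable parameters to the optimizer, which is also
# where the memory savings come from (no optimizer state for the backbone).
trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.SGD(trainable, lr=0.02, momentum=0.9, weight_decay=1e-4)
```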
Transfer of pre-trained representations improves sample efficiency and simplifies hyperparameter tuning when training deep neural networks for vision. We revisit the paradigm of pre-training on large supervised datasets and fine-tuning the model on a target task. We scale up pre-training, and propose a simple recipe that we call Big Transfer (BiT). By combining a few carefully selected components, and transferring using a simple heuristic, we achieve strong performance on over 20 datasets. BiT performs well across a surprisingly wide range of data regimes, from 1 example per class to 1M total examples. BiT achieves 87.5% top-1 accuracy on ILSVRC-2012, 99.4% on CIFAR-10, and 76.3% on the 19 task Visual Task Adaptation Benchmark (VTAB). On small datasets, BiT attains 76.8% on ILSVRC-2012 with 10 examples per class, and 97.0% on CIFAR-10 with 10 examples per class. We conduct detailed analysis of the main components that lead to high transfer performance.
Almost all state-of-the-art neural networks for computer vision tasks are trained by (1) pre-training on a large-scale dataset and (2) fine-tuning on the target dataset. This strategy helps reduce dependence on the target dataset and improves convergence rate and generalization on the target task. Although pre-training on large-scale datasets is very useful, its most significant drawback is the high training cost. To address this, we propose efficient filtering methods to select relevant subsets from the pre-training dataset. In addition, we find that lowering the image resolution during pre-training offers a good trade-off between cost and performance. We validate our techniques by pre-training on ImageNet in both unsupervised and supervised settings and fine-tuning on a diverse collection of target datasets and tasks. Our proposed methods drastically reduce pre-training cost and provide strong performance boosts. Finally, we improve on standard ImageNet pre-training by 1-3% by fine-tuning available models on our subsets and by pre-training on a dataset filtered from a larger-scale dataset.
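As a sketch of the resolution trade-off, a reduced-resolution pre-training pipeline could look like the following; the 112-pixel crop is an illustrative value, not the setting prescribed by the paper:

```python
import torchvision.transforms as T

# Pre-training at reduced resolution: halving the usual 224-pixel crop cuts
# per-image compute substantially while retaining most transferable signal.
low_res_pretrain = T.Compose([
    T.RandomResizedCrop(112),   # illustrative; the cost/accuracy sweet spot is tuned
    T.RandomHorizontalFlip(),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
```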
Transfer learning is a cornerstone of computer vision, yet little work has been done to evaluate the relationship between architecture and transfer. An implicit hypothesis in modern computer vision research is that models that perform better on ImageNet necessarily perform better on other vision tasks. However, this hypothesis has never been systematically tested. Here, we compare the performance of 16 classification networks on 12 image classification datasets. We find that, when networks are used as fixed feature extractors or fine-tuned, there is a strong correlation between ImageNet accuracy and transfer accuracy (r = 0.99 and 0.96, respectively). In the former setting, we find that this relationship is very sensitive to the way in which networks are trained on ImageNet; many common forms of regularization slightly improve ImageNet accuracy but yield penultimate layer features that are much worse for transfer learning. Additionally, we find that, on two small fine-grained image classification datasets, pretraining on ImageNet provides minimal benefits, indicating the learned features from ImageNet do not transfer well to fine-grained tasks. Together, our results show that ImageNet architectures generalize well across datasets, but ImageNet features are less general than previously suggested.
The impressive performance of deep learning architectures comes with a massive increase in model complexity: millions of parameters need to be tuned, and training and inference times scale accordingly. But is large-scale fine-tuning always necessary? In this paper, focusing on image classification, we consider a simple transfer learning approach that exploits pre-trained convolutional features as input to a fast kernel method. We refer to this approach as top-tuning, since only the kernel classifier is trained. By running more than 2500 training processes, we show that top-tuning provides accuracy comparable to fine-tuning, with training times that are between one and two orders of magnitude smaller. These results suggest that top-tuning is a useful alternative to fine-tuning on small and medium-sized datasets, especially when training efficiency is crucial.
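A minimal scikit-learn sketch of top-tuning follows, with a kernel SVM standing in for the fast kernel method used in the paper; the features are assumed to be pre-extracted from a frozen backbone, and the hyperparameters are illustrative:

```python
from sklearn.svm import SVC  # stand-in for the paper's fast kernel solver

def top_tune(train_feats, train_labels, test_feats):
    """Train only a kernel classifier on frozen pre-trained features;
    the backbone itself is never updated."""
    clf = SVC(kernel="rbf", C=1.0)  # illustrative hyperparameters
    clf.fit(train_feats, train_labels)
    return clf.predict(test_feats)
```

Because the backbone is frozen, features can be extracted once per image and reused across all classifier training runs, which is where the one-to-two orders of magnitude savings come from.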
Fine-tuning is widely applied in image classification as a transfer learning method: it reuses knowledge from a source task to learn and achieve high performance on a target task. Fine-tuning can alleviate the challenges of insufficient training data and the expensive labeling of new data. However, standard fine-tuning has limited performance on complex data distributions. To address this issue, we propose an adaptive multi-tuning method that adaptively determines the fine-tuning strategy for each data sample. In this framework, multiple fine-tuning settings and a policy network are defined. The policy network in adaptive multi-tuning can dynamically learn the optimal weights for feeding different samples into models trained with different fine-tuning strategies. Our method outperforms standard fine-tuning by 1.69% on FGVC-Aircraft and by 2.79% on Describable Textures, and yields comparable performance on Stanford Cars, CIFAR-10, and Fashion-MNIST.
Computational pathology can lead to saving human lives, but models are annotation hungry and pathology images are notoriously expensive to annotate. Self-supervised learning has been shown to be an effective method for utilizing unlabeled data, and its application to pathology could greatly benefit its downstream tasks. Yet, there are no principled studies that compare SSL methods and discuss how to adapt them for pathology. To address this need, we execute the largest-scale study of SSL pre-training on pathology image data to date. Our study is conducted using 4 representative SSL methods on diverse downstream tasks. We establish that large-scale domain-aligned pre-training in pathology consistently outperforms ImageNet pre-training in standard SSL settings such as linear and fine-tuning evaluations, as well as in low-label regimes. Moreover, we propose a set of domain-specific techniques that we experimentally show lead to a performance boost. Lastly, for the first time, we apply SSL to the challenging task of nuclei instance segmentation and show large and consistent performance improvements under diverse settings.
Transfer learning enables reusing knowledge learned on a source task to help learn a target task. A simple form of transfer learning is common in current state-of-the-art computer vision models: pre-training an image classification model on the ILSVRC dataset and then fine-tuning on any target task. However, previous systematic studies of transfer learning have been limited, and the circumstances in which it is expected to work are not fully understood. In this paper, we carry out an extensive experimental exploration of transfer learning across vastly different image domains (consumer photos, autonomous driving, aerial imagery, underwater, indoor scenes, synthetic, close-ups) and task types (semantic segmentation, object detection, depth estimation, keypoint detection). Importantly, these are all complex, structured-output task types relevant to modern computer vision applications. In total, we carry out over 2000 transfer learning experiments, including many where the source and target come from different image domains, task types, or both. We systematically analyze these experiments to understand the impact of image domain, task type, and dataset size on transfer learning performance. Our study leads to several insights and concrete recommendations: (1) for most tasks there exists a source that significantly outperforms ILSVRC'12 pre-training; (2) the image domain is the most important factor for achieving positive transfer; (3) the source dataset should \emph{include} the image domain of the target dataset for best results; (4) at the same time, we observe only small negative effects when the image domain of the source task is broader than that of the target; (5) transfer across task types can be beneficial, but its success depends heavily on both the source and target task types.
Transfer learning has become a popular method for leveraging pre-trained models in computer vision. However, without performing computationally expensive fine-tuning, it is difficult to quantify which pre-trained source models are well suited to a specific target task or, conversely, to which tasks a given pre-trained source model can be easily adapted. In this work, we propose the Gaussian Bhattacharyya Coefficient (GBC), a novel method for quantifying transferability between a source model and a target dataset. In a first step, we embed all target images in the feature space defined by the source model and represent each class with a Gaussian. Then, we estimate their pairwise class separability using the Bhattacharyya coefficient, yielding a simple and effective measure of how well the source model transfers to the target task. We evaluate GBC on image classification tasks in the context of dataset and architecture selection. Further, we also experiment with the more complex semantic segmentation transferability estimation task. We demonstrate that GBC outperforms state-of-the-art transferability metrics on most evaluation criteria in the semantic segmentation setting, matches the performance of the top methods for dataset transferability in image classification, and performs best on the architecture selection problem in image classification.
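To make the two-step procedure concrete, here is a minimal NumPy sketch, assuming per-class Gaussians with diagonal covariances; the variance floor `eps` and the sign convention (summing negated pairwise coefficients so that higher scores suggest better transferability) are our illustrative choices rather than the authors' reference implementation:

```python
import numpy as np

def bhattacharyya_distance(mu1, var1, mu2, var2):
    """Bhattacharyya distance between two diagonal-covariance Gaussians."""
    var = 0.5 * (var1 + var2)
    mean_term = 0.125 * np.sum((mu1 - mu2) ** 2 / var)
    cov_term = 0.5 * np.sum(np.log(var) - 0.5 * (np.log(var1) + np.log(var2)))
    return mean_term + cov_term

def gbc(features, labels, eps=1e-8):
    """Score class separability of target features in the source model's
    feature space; higher (less negative) suggests easier transfer."""
    stats = []
    for c in np.unique(labels):
        f = features[labels == c]
        stats.append((f.mean(axis=0), f.var(axis=0) + eps))
    score = 0.0
    for i in range(len(stats)):
        for j in range(i + 1, len(stats)):
            d = bhattacharyya_distance(*stats[i], *stats[j])
            score -= np.exp(-d)  # pairwise Bhattacharyya coefficient
    return score
```

With target features of shape (N, D) extracted once per candidate source model, ranking models by `gbc` avoids fine-tuning each of them.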
We perform a rigorous evaluation of recent self- and semi-supervised ML techniques that leverage unlabeled data to improve downstream task performance, on three remote sensing tasks: riverbed segmentation, land cover mapping, and flood mapping. These methods are especially valuable for remote sensing tasks, since unlabeled imagery is easy to access and obtaining ground truth labels can often be expensive. We quantify the performance improvements one can expect on these remote sensing segmentation tasks when unlabeled imagery (outside of the labeled dataset) is made available for training. We also design experiments to test the effectiveness of these techniques when the test set exhibits a domain shift relative to the training and validation sets.
With the ever-growing model size and the limited availability of labeled training data, transfer learning has become an increasingly popular approach in many science and engineering domains. For classification problems, this work delves into the mystery of transfer learning through an intriguing phenomenon termed neural collapse (NC), where the last-layer features and classifiers of learned deep networks satisfy: (i) the within-class variability of the features collapses to zero, and (ii) the between-class feature means are maximally and equally separated. Through the lens of NC, our findings for transfer learning are the following: (i) when pre-training models, preventing intra-class variability collapse (to a certain extent) better preserves the intrinsic structures of the input data, leading to better model transferability; (ii) when fine-tuning models on downstream tasks, obtaining features with more NC on the downstream data results in better test accuracy on the given task. The above results not only demystify many widely used heuristics in model pre-training (e.g., data augmentation, projection head, self-supervised learning), but also lead to a more efficient and principled fine-tuning method on downstream tasks, which we demonstrate through extensive experimental results.
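Condition (i), within-class variability collapse, is commonly quantified in the neural collapse literature by the ratio of within-class to between-class scatter. Below is a hedged NumPy sketch of one such metric; this is our illustrative instantiation, and the exact formulation used in the paper may differ:

```python
import numpy as np

def nc1(features, labels):
    """Within-class variability relative to between-class scatter;
    values near zero indicate collapsed within-class variability."""
    mu_g = features.mean(axis=0)
    classes = np.unique(labels)
    d = features.shape[1]
    sigma_w = np.zeros((d, d))  # within-class covariance
    sigma_b = np.zeros((d, d))  # between-class covariance
    for c in classes:
        f = features[labels == c]
        mu_c = f.mean(axis=0)
        dev = f - mu_c
        sigma_w += dev.T @ dev / features.shape[0]
        diff = (mu_c - mu_g)[:, None]
        sigma_b += diff @ diff.T / len(classes)
    # Pseudo-inverse handles the rank-deficient between-class scatter.
    return np.trace(sigma_w @ np.linalg.pinv(sigma_b)) / len(classes)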
Increasing model, data, and compute budget scale in pre-training has been shown to strongly improve model generalization and transfer learning in a vast line of work on language modeling and natural image recognition. However, most studies on the positive effect of larger scale were done within an in-domain setting, with source and target data in close proximity. To study the effect of larger scale in both in-domain and out-of-domain settings when performing full and few-shot transfer, we combine here for the first time large, openly available medical X-Ray chest imaging datasets to reach a scale for the medical imaging domain comparable to ImageNet-1k, which is routinely used for pre-training in the natural image domain. We then conduct supervised pre-training, varying network size and source data scale and domain, being either large natural (ImageNet-1k/21k) or large medical chest X-Ray datasets, and transfer the pre-trained models to different natural or medical targets. We observe strong improvement due to larger pre-training scale for intra-domain natural-natural and medical-medical transfer. For inter-domain natural-medical transfer, we find improvements due to larger pre-training scale on larger X-Ray targets in the full-shot regime, while for smaller targets and in the few-shot regime the improvement is not visible. Remarkably, large networks pre-trained on the very large natural ImageNet-21k are as good or better than networks pre-trained on the largest available medical X-Ray data when transferring to large X-Ray targets. We conclude that substantially increasing model and generic, medical domain-agnostic natural image source data scale in pre-training can enable high quality out-of-domain transfer to medical domain specific targets, removing the dependency on large medical domain-specific source data that is often not available in practice.
We present "AiTLAS: Benchmark Arena", an open-source benchmark framework for evaluating state-of-the-art deep learning approaches to image classification in Earth Observation (EO). To this end, we present a comprehensive comparative analysis of more than 400 models derived from nine different state-of-the-art architectures and compare them on a variety of multi-class and multi-label classification tasks from 22 datasets with different sizes and properties. In addition to models trained entirely on these datasets, we also benchmark models trained in the context of transfer learning, leveraging pre-trained model variants, as is typically done in practice. All presented approaches are general and can easily be extended to many other remote sensing image classification tasks not considered in this study. To ensure reproducibility and facilitate better usability and further development, all of the experimental resources, including the trained models, model configurations, and details of dataset processing (along with the corresponding splits used for training and evaluating the models), are publicly available in the repository: https://github.com/biasvariancelabs/aitlas-arena.
The success of deep learning in vision can be attributed to: (a) models with high capacity; (b) increased computational power; and (c) availability of large-scale labeled data. Since 2012, there have been significant advances in the representation capabilities of models and the computational capabilities of GPUs. But the size of the biggest dataset has surprisingly remained constant. What will happen if we increase the dataset size by 10× or 100×? This paper takes a step towards clearing the clouds of mystery surrounding the relationship between 'enormous data' and visual deep learning. By exploiting the JFT-300M dataset, which has more than 375M noisy labels for 300M images, we investigate how the performance of current vision tasks would change if this data was used for representation learning. Our paper delivers some surprising (and some expected) findings. First, we find that performance on vision tasks increases logarithmically with the volume of training data. Second, we show that representation learning (or pre-training) still holds a lot of promise. One can improve performance on many vision tasks by just training a better base model. Finally, as expected, we present new state-of-the-art results for different vision tasks, including image classification, object detection, semantic segmentation, and human pose estimation. Our sincere hope is that this inspires the vision community to not undervalue the data and to develop collective efforts in building larger datasets.
We address the problem of ensemble selection in transfer learning: given a large pool of source models, we want to select an ensemble of models which, after fine-tuning on the target training set, yields the best performance on the target test set. Since fine-tuning all possible ensembles is computationally prohibitive, we aim to predict performance on the target dataset using computationally efficient transferability metrics. We propose several new transferability metrics for this task and evaluate them in a challenging and realistic transfer learning setup for semantic segmentation: we create a large and diverse pool of source models by considering a wide variety of datasets covering different image domains, two different architectures, and two pre-training schemes. Given this pool, we then automatically select a subset that forms a good ensemble on a given target dataset. We compare the ensembles selected by our method to two baselines that select a single source model, either (1) from the same pool as our method, or (2) from a pool containing large source models, each with capacity similar to an ensemble. Averaged over 17 target datasets, we outperform these baselines by 6.0% and 2.5% in relative mean performance, respectively.
Given an aerial image, aerial scene parsing (ASP) aims to interpret the semantic structure of the image content, e.g., by assigning a semantic label to every pixel of the image. With the popularization of data-driven methods, the past decades have witnessed promising progress on ASP by approaching it with schemes of tile-level scene classification or segmentation-based image analysis when using high-resolution aerial images. However, the former scheme often produces results with tile-wise boundaries, while the latter needs to handle the complex modeling process from pixels to semantics, which often requires large-scale and well-annotated image samples with pixel-wise semantic labels. In this paper, we address these issues in ASP from the perspective of bridging tile-level scene classification and pixel-wise semantic labeling. Specifically, we first revisit aerial image interpretation through a literature review. We then present a large-scale scene classification dataset containing one million aerial images, termed Million-AID. With the presented dataset, we also report benchmark experiments using classical convolutional neural networks (CNNs). Finally, we perform ASP by unifying tile-level scene classification and object-based image analysis to achieve pixel-wise semantic labeling. Intensive experiments show that Million-AID is a challenging yet useful dataset that can serve as a benchmark for evaluating newly developed algorithms. When transferring knowledge from Million-AID, fine-tuned CNN models consistently outperform those pre-trained on ImageNet for aerial scene classification. Moreover, our designed hierarchical multi-task learning method achieves state-of-the-art pixel-wise classification on the challenging GID, broadening tile-level scene classification toward pixel-wise semantic labeling for aerial image interpretation.