Attention plays a critical role in human visual experience. Furthermore, it has recently been demonstrated that attention can also play an important role in the context of applying artificial neural networks to a variety of tasks from fields such as computer vision and NLP. In this work we show that, by properly defining attention for convolutional neural networks, we can actually use this type of information in order to significantly improve the performance of a student CNN network by forcing it to mimic the attention maps of a powerful teacher network. To that end, we propose several novel methods of transferring attention, showing consistent improvement across a variety of datasets and convolutional neural network architectures. Code and models for our experiments are available at https://github.com/szagoruyko/attention-transfer.
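As a rough illustration of how activation-based attention transfer can be implemented (a minimal PyTorch-style sketch, not the code from the repository above; the use of squared activations and L2 normalization is one common variant):

import torch
import torch.nn.functional as F

def attention_map(feat):
    # Collapse a feature map (B, C, H, W) into a spatial attention map by
    # summing squared activations over channels, then flatten and
    # L2-normalize each sample.
    a = feat.pow(2).sum(dim=1)      # (B, H, W)
    a = a.flatten(start_dim=1)      # (B, H*W)
    return F.normalize(a, p=2, dim=1)

def at_loss(student_feats, teacher_feats):
    # Mean squared difference between normalized attention maps, averaged
    # over the matched layer pairs (assumes matching spatial sizes).
    losses = [(attention_map(s) - attention_map(t)).pow(2).mean()
              for s, t in zip(student_feats, teacher_feats)]
    return sum(losses) / len(losses)

In practice one would collect intermediate activations from both networks (e.g., with forward hooks) and add a weighted at_loss term to the usual cross-entropy objective.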
Transferring knowledge from a teacher neural network pretrained on the same or a similar task to a student neural network can significantly improve the performance of the student neural network. Existing knowledge transfer approaches match the activations or the corresponding handcrafted features of the teacher and the student networks. We propose an information-theoretic framework for knowledge transfer which formulates knowledge transfer as maximizing the mutual information between the teacher and the student networks. We compare our method with existing knowledge transfer methods on both knowledge distillation and transfer learning tasks and show that our method consistently outperforms existing methods. We further demonstrate the strength of our method on knowledge transfer across heterogeneous network architectures by transferring knowledge from a convolutional neural network (CNN) to a multi-layer perceptron (MLP) on CIFAR-10. The resulting MLP significantly outperforms state-of-the-art methods and achieves performance similar to that of the CNN with a single convolutional layer.
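The abstract above describes the objective only at a high level; one standard way to maximize a lower bound on the mutual information between teacher and student features is to fit a variational Gaussian q(t | s) whose mean is predicted from the student feature. The sketch below illustrates that general idea (the layer dimensions and the per-dimension variance parameterization are assumptions, not necessarily the paper's exact formulation):

import torch
import torch.nn as nn

class VariationalBound(nn.Module):
    def __init__(self, student_dim, teacher_dim):
        super().__init__()
        self.mean = nn.Linear(student_dim, teacher_dim)          # mu(s)
        self.log_var = nn.Parameter(torch.zeros(teacher_dim))    # per-dimension variance of q

    def forward(self, s, t):
        # Negative log-likelihood of the teacher feature t under q(t | s);
        # minimizing it maximizes a variational lower bound on I(t; s),
        # up to the (constant) entropy of the teacher feature.
        mu = self.mean(s)
        var = self.log_var.exp()
        return (0.5 * (self.log_var + (t - mu).pow(2) / var)).mean()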
Knowledge distillation is a widely applicable technique for training a student neural network under the guidance of a trained teacher network. For example, in neural network compression, a high-capacity teacher is distilled to train a compact student; in privileged learning, a teacher trained with privileged data is distilled to train a student without access to that data. The distillation loss determines how a teacher's knowledge is captured and transferred to the student. In this paper, we propose a new form of knowledge distillation loss that is inspired by the observation that semantically similar inputs tend to elicit similar activation patterns in a trained network. Similarity-preserving knowledge distillation guides the training of a student network such that input pairs that produce similar (dissimilar) activations in the teacher network produce similar (dissimilar) activations in the student network. In contrast to previous distillation methods, the student is not required to mimic the representation space of the teacher, but rather to preserve the pairwise similarities in its own representation space. Experiments on three public datasets demonstrate the potential of our approach.
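A minimal sketch of the similarity-preserving loss described above (variable names are illustrative; the batch-level similarity matrices follow the G = normalize(A A^T) recipe):

import torch
import torch.nn.functional as F

def similarity_matrix(feat):
    a = feat.flatten(start_dim=1)        # (B, C*H*W)
    g = a @ a.t()                        # (B, B) pairwise similarities within the batch
    return F.normalize(g, p=2, dim=1)    # row-wise L2 normalization

def sp_loss(student_feat, teacher_feat):
    b = student_feat.size(0)
    g_s = similarity_matrix(student_feat)
    g_t = similarity_matrix(teacher_feat)
    return (g_s - g_t).pow(2).sum() / (b * b)   # squared Frobenius norm, scaled by 1/b^2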
Deep residual networks were shown to be able to scale up to thousands of layers and still have improving performance. However, each fraction of a percent of improved accuracy costs nearly doubling the number of layers, and so training very deep residual networks has a problem of diminishing feature reuse, which makes these networks very slow to train. To tackle these problems, in this paper we conduct a detailed experimental study on the architecture of ResNet blocks, based on which we propose a novel architecture where we decrease depth and increase width of residual networks. We call the resulting network structures wide residual networks (WRNs) and show that these are far superior to their commonly used thin and very deep counterparts. For example, we demonstrate that even a simple 16-layer-deep wide residual network outperforms in accuracy and efficiency all previous deep residual networks, including thousand-layer-deep networks, achieving new state-of-the-art results on CIFAR, SVHN, COCO, and significant improvements on ImageNet. Our code and models are available at https://github.com/szagoruyko/wide-residual-networks.
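For reference, a wide pre-activation residual block of the kind described above might look as follows (a sketch, with the dropout rate and stage layout simplified; the widening factor k enters through the chosen channel counts):

import torch
import torch.nn as nn
import torch.nn.functional as F

class WideBasicBlock(nn.Module):
    def __init__(self, in_ch, out_ch, stride=1, dropout=0.3):
        super().__init__()
        self.bn1 = nn.BatchNorm2d(in_ch)
        self.conv1 = nn.Conv2d(in_ch, out_ch, 3, stride, 1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_ch)
        self.drop = nn.Dropout(dropout)
        self.conv2 = nn.Conv2d(out_ch, out_ch, 3, 1, 1, bias=False)
        # Projection shortcut when the shape changes, identity otherwise.
        self.shortcut = (nn.Conv2d(in_ch, out_ch, 1, stride, bias=False)
                         if stride != 1 or in_ch != out_ch else nn.Identity())

    def forward(self, x):
        out = self.conv1(F.relu(self.bn1(x)))
        out = self.conv2(self.drop(F.relu(self.bn2(out))))
        return out + self.shortcut(x)

A WRN-16-8, for example, stacks such blocks in three stages whose widths are 16*k, 32*k, and 64*k with k = 8.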
Knowledge distillation aims at transferring knowledge acquired in one model (a teacher) to another model (a student) that is typically smaller. Previous approaches can be expressed as a form of training the student to mimic output activations of individual data examples represented by the teacher. We introduce a novel approach, dubbed relational knowledge distillation (RKD), that transfers mutual relations of data examples instead. For concrete realizations of RKD, we propose distance-wise and angle-wise distillation losses that penalize structural differences in relations. Experiments conducted on different tasks show that the proposed method improves educated student models by a significant margin. In particular, for metric learning, it allows students to outperform their teachers, achieving state-of-the-art results on standard benchmark datasets.
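As an illustration of the distance-wise term (a hedged sketch; the angle-wise term follows the same pattern over triplets, and the Huber penalty and mean-distance normalization used here are the usual choices for this kind of relational loss):

import torch
import torch.nn.functional as F

def pairwise_distances(emb):
    d = torch.cdist(emb, emb, p=2)                       # (B, B) Euclidean distances
    mask = ~torch.eye(d.size(0), dtype=torch.bool, device=d.device)
    return d / d[mask].mean().clamp_min(1e-12)           # normalize by the mean off-diagonal distance

def rkd_distance_loss(student_emb, teacher_emb):
    with torch.no_grad():
        t = pairwise_distances(teacher_emb)
    s = pairwise_distances(student_emb)
    return F.smooth_l1_loss(s, t)                        # Huber penalty on relational differences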
In this paper, we propose a general framework for image classification that uses an attention mechanism together with global context, and that can be combined with various network architectures to improve their performance. To investigate the power of global context, we compare four mathematical models and observe that the global context encoded in a disentangled conditional generative model can provide more guidance, since "knowing what is irrelevant to the task also reveals what is relevant." Based on this observation, we define a novel disentangled global context (CDGC) and design a deep network to obtain it. By attending to the CDGC, the baseline network can recognize objects of interest more accurately, thereby improving performance. We apply the framework to many different network architectures and compare against the state of the art on four publicly available datasets. Extensive results demonstrate the effectiveness and superiority of our method. Code will be made publicly available upon acceptance of the paper.
One of the most efficient methods for model compression is hint distillation, where the student model is injected with information (hints) from several different layers of the teacher model. Although the selection of hint points can drastically alter the compression performance, conventional distillation approaches overlook this fact and use the same hint points as in the early studies. Therefore, we propose a clustering-based hint selection methodology, where the layers of the teacher model are clustered with respect to several metrics and the cluster centers are used as the hint points. Our method is applicable to any student network once it has been applied to a chosen teacher network. The proposed approach is validated on the CIFAR-100 and ImageNet datasets, using various teacher-student pairs and numerous hint distillation methods. Our results show that the hint points selected by our algorithm result in superior compression performance compared to state-of-the-art knowledge distillation algorithms on the same student models and datasets.
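A heavily hedged sketch of what clustering-based hint selection could look like: describe each teacher layer by a vector of statistics, cluster the layers, and take the layer nearest each cluster center as a hint point. The per-layer descriptors and the use of k-means are illustrative assumptions, not the paper's exact metrics:

import numpy as np
from sklearn.cluster import KMeans

def select_hint_layers(layer_descriptors, n_hints):
    # layer_descriptors: (num_layers, d) array of per-layer metrics.
    x = np.asarray(layer_descriptors, dtype=np.float64)
    km = KMeans(n_clusters=n_hints, n_init=10, random_state=0).fit(x)
    hints = []
    for c in range(n_hints):
        members = np.where(km.labels_ == c)[0]
        dists = np.linalg.norm(x[members] - km.cluster_centers_[c], axis=1)
        hints.append(int(members[np.argmin(dists)]))     # layer closest to the cluster center
    return sorted(hints)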
We introduce a novel technique for knowledge transfer, where knowledge from a pretrained deep neural network (DNN) is distilled and transferred to another DNN. As the DNN maps from the input space to the output space through many layers sequentially, we define the distilled knowledge to be transferred in terms of flow between layers, which is calculated by computing the inner product between features from two layers. When we compare the student DNN with an original network of the same size trained without a teacher network, the proposed method of transferring the distilled knowledge as the flow between two layers exhibits three important phenomena: (1) the student DNN that learns the distilled knowledge is optimized much faster than the original model; (2) the student DNN outperforms the original DNN; and (3) the student DNN can learn the distilled knowledge from a teacher DNN that is trained on a different task, and the student DNN outperforms the original DNN that is trained from scratch.
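A minimal sketch of the "flow" (inner-product) matrix between two layers and the corresponding matching loss (names are illustrative; both layers are assumed to share the same spatial size):

import torch

def flow_matrix(feat_a, feat_b):
    # feat_a: (B, C1, H, W), feat_b: (B, C2, H, W) -> (B, C1, C2)
    b, c1, h, w = feat_a.shape
    c2 = feat_b.size(1)
    a = feat_a.reshape(b, c1, h * w)
    bb = feat_b.reshape(b, c2, h * w)
    return torch.bmm(a, bb.transpose(1, 2)) / (h * w)    # channel-wise inner products

def flow_loss(student_pair, teacher_pair):
    g_s = flow_matrix(*student_pair)
    g_t = flow_matrix(*teacher_pair)
    return (g_s - g_t).pow(2).mean()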
What can a neural network learn about the visual world from a single image? While a single image clearly cannot contain the multitude of possible objects, scenes, and lighting conditions — out of all 256^(3x224x224) possible square images of size 224 — it can still provide a strong prior over natural images. To analyze this hypothesis, we formulate a method for training neural networks from a single image by means of knowledge distillation from a supervised, pretrained teacher. With this, we find that the answer to the question above is: "surprisingly, a lot." In quantitative terms, we find top-1 accuracies of 94%/74% on CIFAR-10/100, with similar observations on ImageNet, and, by extending the approach to audio, 84% on SpeechCommands. In an extensive analysis, we disentangle the effects of augmentations, the choice of source image, and the network architecture, and we even discover "panda neurons" in networks that have never seen a panda. This work shows that a single image can be used to extrapolate to thousands of object classes and motivates a renewed research agenda on the fundamental interplay between augmentations and images.
Disentangled representations have commonly been adopted for the age-invariant face recognition (AIFR) task. However, these methods have reached some limitations: (1) they require large-scale face recognition (FR) training data with age labels, which is limited in practice; (2) they require heavy, high-performance deep network architectures; and (3) their evaluations are usually performed on age-related face databases while neglecting standard large-scale FR databases, on which robustness should also be guaranteed. This work presents a novel Lightweight Attentive Angular Distillation (LIAAD) approach to large-scale lightweight AIFR that overcomes these limitations. Given two teachers with different specialties, LIAAD introduces a learning paradigm that efficiently distills age-attentive and angular knowledge from those teachers into a lightweight student network, making the student more powerful, with higher FR accuracy and robustness to the aging factor. As a result, the LIAAD approach is able to take advantage of both FR datasets, with and without age labels, to train an AIFR model. Whereas previous distillation methods mainly focus on accuracy and compression ratio in closed-set problems, our LIAAD aims to solve the open-set problem, i.e., large-scale face recognition. Evaluations on LFW, the IJB-B and IJB-C Janus benchmarks, AgeDB, and MegaFace-FGNet demonstrate the efficiency of the proposed approach with a lightweight structure. This work also introduces a new longitudinal face aging (LogiFace) database (to be made available) for further studies of age-related facial problems in the future.
While depth tends to improve network performance, it also makes gradient-based training more difficult, since deeper networks tend to be more non-linear. The recently proposed knowledge distillation approach is aimed at obtaining small and fast-to-execute models, and it has shown that a student network could imitate the soft output of a larger teacher network or ensemble of networks. In this paper, we extend this idea to allow the training of a student that is deeper and thinner than the teacher, using not only the outputs but also the intermediate representations learned by the teacher as hints to improve the training process and final performance of the student. Because the student intermediate hidden layer will generally be smaller than the teacher's intermediate hidden layer, additional parameters are introduced to map the student hidden layer to the prediction of the teacher hidden layer. This allows one to train deeper students that can generalize better or run faster, a trade-off that is controlled by the chosen student capacity. For example, on CIFAR-10, a deep student network with almost 10.4 times fewer parameters outperforms a larger, state-of-the-art teacher network.
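A sketch of the hint/guided-layer idea described above: a small learned regressor maps the (thinner) student layer to the (wider) teacher layer and an L2 loss is applied during a pre-training stage. The 1x1-convolution regressor and matching spatial sizes are simplifying assumptions:

import torch
import torch.nn as nn

class HintLoss(nn.Module):
    def __init__(self, student_ch, teacher_ch):
        super().__init__()
        # Maps the student's guided layer to the teacher's hint layer.
        self.regressor = nn.Conv2d(student_ch, teacher_ch, kernel_size=1)

    def forward(self, student_guided, teacher_hint):
        return (self.regressor(student_guided) - teacher_hint).pow(2).mean()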
The great success of deep learning is mainly due to large-scale network architectures and high-quality training data. However, deploying recent deep models on portable devices with limited memory and imaging capability remains challenging. Some existing works compress models via knowledge distillation. Unfortunately, these methods cannot handle images with degraded quality, such as low-resolution (LR) images. To this end, we make a pioneering effort to distill the useful knowledge learned by a heavy network model on high-resolution (HR) images into a compact network model that will handle LR images, thereby pushing current knowledge distillation techniques toward novel pixel distillation. To achieve this goal, we propose a Teacher-Assistant-Student (TAS) framework that decomposes knowledge distillation into a model compression stage and a high-resolution representation transfer stage. Equipped with a novel Feature Super-Resolution (FSR) module, our approach can learn a lightweight network model that achieves accuracy similar to that of the heavy teacher model but with fewer parameters, faster inference speed, and lower-resolution inputs. Comprehensive experiments on three widely used benchmarks, i.e., CUB-200-2011, PASCAL VOC 2007, and ImageNetSub, demonstrate the effectiveness of our approach.
Knowledge distillation is an effective and stable method for model compression via knowledge transfer. Conventional knowledge distillation (KD) transfers knowledge from a large, well-trained teacher network to a small student network in a one-way process. Recently, deep mutual learning (DML) has been proposed to help student networks learn collaboratively and simultaneously. However, to the best of our knowledge, KD and DML have never been jointly explored in a unified framework to solve the knowledge distillation problem. In this paper, we observe that the teacher model in KD provides more trustworthy supervision signals, whereas the student in DML better captures behavior similar to the teacher's. Based on these observations, we first propose to combine KD and DML in a unified framework. Furthermore, we propose a Semi-Online Knowledge Distillation (SOKD) method that effectively improves the performance of both the student and the teacher. In this method, we introduce the peer-teaching training style of DML to alleviate the student's difficulty in imitating the teacher, while exploiting the supervision signals provided by the well-trained teacher in KD. In addition, we show that our framework can easily be extended to feature-based distillation methods. Extensive experiments on the CIFAR-100 and ImageNet datasets demonstrate that the proposed method achieves state-of-the-art performance.
Knowledge Distillation (KD) consists of transferring "knowledge" from one machine learning model (the teacher) to another (the student). Commonly, the teacher is a high-capacity model with formidable performance, while the student is more compact. By transferring knowledge, one hopes to benefit from the student's compactness without sacrificing too much performance. We study KD from a new perspective: rather than compressing models, we train students parameterized identically to their teachers. Surprisingly, these Born-Again Networks (BANs) outperform their teachers significantly, both on computer vision and language modeling tasks. Our experiments with BANs based on DenseNets demonstrate state-of-the-art performance on the CIFAR-10 (3.5%) and CIFAR-100 (15.5%) datasets, by validation error. Additional experiments explore two distillation objectives: (i) Confidence-Weighted by Teacher Max (CWTM) and (ii) Dark Knowledge with Permuted Predictions (DKPP). Both methods elucidate the essential components of KD, demonstrating the effect of the teacher outputs on both predicted and non-predicted classes.
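One common instantiation of the distillation objective applied between generations (a sketch; the temperature and the equal weighting of the two terms are assumptions):

import torch.nn.functional as F

def generation_loss(student_logits, teacher_logits, labels, T=1.0):
    # Cross-entropy on ground-truth labels plus temperature-scaled KL
    # divergence from the previous generation's softened outputs.
    ce = F.cross_entropy(student_logits, labels)
    kd = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                  F.softmax(teacher_logits / T, dim=1),
                  reduction="batchmean") * (T * T)
    return ce + kd

Generation k is trained with generation k-1 as its teacher; at test time, an ensemble over generations can also be used.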
The widespread adoption of attention mechanisms has raised concerns about the interpretability of attention distributions. Although attention provides insight into how a model operates, using it to explain model predictions remains highly questionable. The community is still searching for more interpretable strategies that can better identify the locally active regions that contribute most to the final decision. To improve the interpretability of existing attention models, we propose a novel Bilinear Representative Non-Parametric Attention (BR-NPA) strategy that captures task-relevant, human-interpretable information. The target model is first distilled to have higher-resolution intermediate feature maps. Representative features are then grouped according to local pairwise feature similarity to produce finer, more precise attention maps that highlight the task-relevant parts of the input. The obtained attention maps are ranked according to the activity level of the compound features, which provides information about the relative importance of the highlighted regions. The proposed model can easily be adapted to a wide variety of modern deep models that involve classification. Extensive quantitative and qualitative experiments show more comprehensive and accurate visual explanations compared with state-of-the-art attention models and visualization methods across multiple tasks, including fine-grained image classification, few-shot classification, and person re-identification, without compromising classification accuracy. The proposed visualization model sheds light on how neural networks "pay their attention" differently across tasks.
Despite the fact that deep neural networks are powerful models and achieve appealing results on many tasks, they are too large to be deployed on edge devices like smartphones or embedded sensor nodes. There have been efforts to compress these networks, and a popular method is knowledge distillation, where a large (teacher) pre-trained network is used to train a smaller (student) network. However, in this paper, we show that the student network performance degrades when the gap between student and teacher is large. Given a fixed student network, one cannot employ an arbitrarily large teacher, or in other words, a teacher can effectively transfer its knowledge to students up to a certain size, not smaller. To alleviate this shortcoming, we introduce multi-step knowledge distillation, which employs an intermediate-sized network (teacher assistant) to bridge the gap between the student and the teacher. Moreover, we study the effect of teacher assistant size and extend the framework to multi-step distillation. Theoretical analysis and extensive experiments on the CIFAR-10/100 and ImageNet datasets and on CNN and ResNet architectures substantiate the effectiveness of our proposed approach.
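The multi-step procedure can be summarized by the following sketch, where distill stands in for any standard KD training loop (e.g., temperature-scaled KL plus cross-entropy) and the model and data-loader objects are placeholders:

def multi_step_distillation(teacher, assistants, student, distill, loader):
    # Distill through a chain of progressively smaller networks:
    # teacher -> assistant(s) -> student.
    current_teacher = teacher
    for assistant in assistants:
        distill(teacher=current_teacher, student=assistant, loader=loader)
        current_teacher = assistant
    distill(teacher=current_teacher, student=student, loader=loader)
    return student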
In this work, we propose Mutual Information Maximization Knowledge Distillation (MIMKD). Our method uses a contrastive objective to simultaneously estimate and maximize a lower bound on the mutual information between local and global feature representations of the teacher and the student networks. We demonstrate through extensive experiments that this improves the performance of low-capacity models by transferring knowledge from more performant but computationally expensive models, which can be used to produce better models that can run on devices with low computational resources. Our method is flexible: we can distill teachers with arbitrary network architectures into arbitrary student networks. Our empirical results show that MIMKD outperforms competing approaches across a wide range of student-teacher pairs with different architectures, including student networks with extremely low capacity. We are able to obtain 74.55% accuracy with a ShuffleNetV2 student, up from its baseline accuracy, by distilling knowledge from a ResNet-50. On ImageNet, we improve a ResNet-18 network from 68.88% to 70.32% accuracy (+1.44%) using a ResNet-34 teacher network.
Often we wish to transfer representational knowledge from one neural network to another. Examples include distilling a large network into a smaller one, transferring knowledge from one sensory modality to a second, or ensembling a collection of models into a single estimator. Knowledge distillation, the standard approach to these problems, minimizes the KL divergence between the probabilistic outputs of a teacher and student network. We demonstrate that this objective ignores important structural knowledge of the teacher network. This motivates an alternative objective by which we train a student to capture significantly more information in the teacher's representation of the data. We formulate this objective as contrastive learning. Experiments demonstrate that our resulting new objective outperforms knowledge distillation and other cutting-edge distillers on a variety of knowledge transfer tasks, including single model compression, ensemble distillation, and cross-modal transfer. Our method sets a new state-of-the-art in many transfer tasks, and sometimes even outperforms the teacher network when combined with knowledge distillation.
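A simplified sketch of a contrastive distillation objective in the spirit described above: the teacher and student embeddings of the same image form a positive pair and the other images in the batch act as negatives. This in-batch InfoNCE surrogate is an assumption for illustration; the paper's formulation draws many more negatives (e.g., from a memory buffer):

import torch
import torch.nn.functional as F

def contrastive_kd_loss(student_emb, teacher_emb, temperature=0.1):
    s = F.normalize(student_emb, dim=1)
    t = F.normalize(teacher_emb, dim=1)
    logits = s @ t.t() / temperature                     # (B, B) similarity matrix
    targets = torch.arange(s.size(0), device=s.device)   # diagonal entries are the positives
    return F.cross_entropy(logits, targets)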
Feature regression is a simple way to distill large neural network models into smaller ones. We show that, with simple changes to the network architecture, regression can outperform more complex state-of-the-art approaches for knowledge distillation from self-supervised models. Surprisingly, adding a multi-layer perceptron head to the CNN backbone is beneficial even if it is used only during distillation and discarded in the downstream task. Deeper non-linear projections can thus be used to mimic the teacher accurately without changing the inference architecture or inference time. Moreover, we utilize independent projection heads to distill multiple teacher networks simultaneously. We also find that using the same weakly augmented image as input to both the teacher and student networks aids distillation. Experiments on the ImageNet dataset demonstrate the efficacy of the proposed changes in various self-supervised distillation settings.
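A hedged sketch of the setup described above: an MLP projection head is attached to the student backbone only for distillation and its output regresses the teacher feature (the hidden width and the cosine-based loss are illustrative assumptions):

import torch
import torch.nn as nn
import torch.nn.functional as F

class RegressionHead(nn.Module):
    def __init__(self, student_dim, teacher_dim, hidden_dim=2048):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(student_dim, hidden_dim),
            nn.BatchNorm1d(hidden_dim),
            nn.ReLU(inplace=True),
            nn.Linear(hidden_dim, teacher_dim),
        )

    def forward(self, student_feat, teacher_feat):
        pred = self.mlp(student_feat)
        # Negative cosine similarity; the head is discarded after distillation.
        return 1 - F.cosine_similarity(pred, teacher_feat, dim=1).mean()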
With the improvement of AI chips (e.g., GPUs, TPUs, and NPUs) and the rapid development of the Internet of Things (IoT), powerful deep neural networks (DNNs) are often composed of millions or even hundreds of millions of parameters, and such models may not be suitable for direct deployment on low-computation, low-capacity units such as edge devices. Recently, knowledge distillation (KD) has been recognized as one of the effective methods of model compression for reducing the number of model parameters. The main idea of KD is to extract useful information from the feature maps of a large model (i.e., the teacher model) to guide the training of a small model (i.e., the student model) whose size is much smaller than the teacher's. Although many KD-based methods have been proposed to exploit the information in the feature maps of the teacher model's intermediate layers, most of them do not consider the similarity between the feature maps of the teacher and the student models, which may lead the student model to learn useless information. Inspired by attention mechanisms, we propose a novel KD method, called Representative Teacher Keys (RTK), that not only considers the similarity of feature maps but also filters out useless information to improve the performance of the target student model. In experiments, we validate our proposed method with multiple backbone networks (e.g., ResNet and WideResNet) and datasets (e.g., CIFAR-10, CIFAR-100, SVHN, and CINIC-10). The results show that our proposed RTK can effectively improve the classification accuracy of state-of-the-art attention-based KD methods.