We introduce a novel technique for knowledge transfer, in which knowledge from a pretrained deep neural network (DNN) is distilled and transferred to another DNN. Since a DNN maps from the input space to the output space through many layers sequentially, we define the distilled knowledge to be transferred in terms of the flow between layers, calculated as the inner product between features from two layers. Compared with the original network, a model of the same size as the student DNN but trained without a teacher, transferring the distilled knowledge as the flow between two layers exhibits three important phenomena: (1) the student DNN that learns the distilled knowledge is optimized much faster than the original model; (2) the student DNN outperforms the original DNN; and (3) the student DNN can learn the distilled knowledge from a teacher DNN trained on a different task, and it still outperforms the original DNN trained from scratch.
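As a concrete illustration, the flow between two layers can be computed as a Gram-style matrix of channel-wise inner products over spatial positions. Below is a minimal PyTorch sketch of such a flow matrix and a matching loss; the function names are ours, and the caller must supply feature maps of equal spatial size:

```python
import torch

def flow_matrix(f1: torch.Tensor, f2: torch.Tensor) -> torch.Tensor:
    """Inner-product 'flow' between two feature maps of shape (B, C, H, W).

    Both maps must share spatial size; returns a (B, C1, C2) matrix whose
    (i, j) entry is the spatially averaged inner product of channel i of f1
    with channel j of f2.
    """
    b, c1, h, w = f1.shape
    c2 = f2.shape[1]
    f1 = f1.reshape(b, c1, h * w)
    f2 = f2.reshape(b, c2, h * w)
    return torch.bmm(f1, f2.transpose(1, 2)) / (h * w)

def flow_loss(t1, t2, s1, s2):
    """Squared distance between teacher and student flow matrices."""
    return ((flow_matrix(t1, t2) - flow_matrix(s1, s2)) ** 2).mean()
```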
Transferring knowledge from a teacher neural network pretrained on the same or a similar task to a student neural network can significantly improve the performance of the student neural network. Existing knowledge transfer approaches match the activations or the corresponding handcrafted features of the teacher and the student networks. We propose an information-theoretic framework for knowledge transfer which formulates knowledge transfer as maximizing the mutual information between the teacher and the student networks. We compare our method with existing knowledge transfer methods on both knowledge distillation and transfer learning tasks and show that our method consistently outperforms existing methods. We further demonstrate the strength of our method on knowledge transfer across heterogeneous network architectures by transferring knowledge from a convolutional neural network (CNN) to a multi-layer perceptron (MLP) on CIFAR-10. The resulting MLP significantly outperforms state-of-the-art methods and achieves performance similar to that of a CNN with a single convolutional layer.
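One standard way to make such a mutual-information objective trainable is a variational lower bound with a Gaussian approximation q(t|s), which reduces to a heteroscedastic regression loss. The sketch below assumes flattened feature vectors and is our illustrative reading of this family of bounds, not necessarily the paper's exact estimator:

```python
import torch
import torch.nn as nn

class GaussianMILoss(nn.Module):
    """-log q(t|s) for q = N(mu(s), diag(sigma^2)); minimizing this maximizes
    a variational lower bound on the mutual information I(t; s), up to a
    constant H(t) that does not depend on the student."""
    def __init__(self, s_dim: int, t_dim: int):
        super().__init__()
        self.mu = nn.Linear(s_dim, t_dim)                   # predicts the teacher feature
        self.log_sigma = nn.Parameter(torch.zeros(t_dim))   # learned per-unit variance

    def forward(self, s_feat: torch.Tensor, t_feat: torch.Tensor) -> torch.Tensor:
        pred = self.mu(s_feat)
        inv_var = torch.exp(-2 * self.log_sigma)
        # Gaussian negative log-likelihood, dropping additive constants
        nll = 0.5 * ((t_feat - pred) ** 2 * inv_var + 2 * self.log_sigma)
        return nll.mean()
```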
While depth tends to improve network performance, it also makes gradient-based training more difficult, since deeper networks tend to be more non-linear. The recently proposed knowledge distillation approach is aimed at obtaining small and fast-to-execute models, and it has shown that a student network can imitate the soft output of a larger teacher network or ensemble of networks. In this paper, we extend this idea to allow the training of a student that is deeper and thinner than the teacher, using not only the outputs but also the intermediate representations learned by the teacher as hints to improve the training process and final performance of the student. Because the student's intermediate hidden layer will generally be smaller than the teacher's intermediate hidden layer, additional parameters are introduced to map the student hidden layer to the prediction of the teacher hidden layer. This allows one to train deeper students that can generalize better or run faster, a trade-off controlled by the chosen student capacity. For example, on CIFAR-10, a deep student network with almost 10.4 times fewer parameters outperforms a larger, state-of-the-art teacher network.
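A minimal sketch of such a hint objective in PyTorch, assuming convolutional features of matching spatial size; the 1x1 convolutional regressor is one simple instance of the additional mapping parameters described above:

```python
import torch
import torch.nn as nn

class HintLoss(nn.Module):
    """Regress the student's hint layer onto the teacher's guided layer.

    The 1x1 conv maps the (usually thinner) student feature map to the
    teacher's channel count; teacher and student maps are assumed to share
    spatial dimensions here.
    """
    def __init__(self, s_channels: int, t_channels: int):
        super().__init__()
        self.regressor = nn.Conv2d(s_channels, t_channels, kernel_size=1)

    def forward(self, s_hint: torch.Tensor, t_guided: torch.Tensor) -> torch.Tensor:
        return ((self.regressor(s_hint) - t_guided) ** 2).mean()
```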
Knowledge distillation in machine learning is the process of transferring knowledge from a large model, called the teacher, to a smaller model, called the student. It is one of the techniques for compressing a large network (the teacher) into a smaller network (the student) that can be deployed on small devices such as mobile phones. When the size gap between the teacher and student networks grows, the performance of the student network degrades. To address this problem, an intermediate model, called a teacher assistant, is placed between the teacher model and the student model, which in turn bridges the gap between teacher and student. In this study, we show that the student model (the smaller model) can be further improved by using multiple teacher assistant models. We combine these multiple teacher assistant models through weighted ensemble learning, using a differential evolution optimization algorithm to generate the weight values.
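A rough sketch of how such a weighted ensemble could be fitted with SciPy's differential evolution, assuming precomputed assistant logits on a held-out set; the objective, bounds, and normalization are illustrative choices, not the paper's exact setup:

```python
import numpy as np
from scipy.optimize import differential_evolution

def ensemble_accuracy(weights, ta_logits, labels):
    """Accuracy of a weighted sum of teacher-assistant logits.

    ta_logits: (num_TAs, N, num_classes); labels: (N,).
    """
    w = np.abs(weights) / (np.abs(weights).sum() + 1e-12)  # convex combination
    combined = np.tensordot(w, ta_logits, axes=1)          # (N, num_classes)
    return (combined.argmax(axis=1) == labels).mean()

def fit_ta_weights(ta_logits, labels):
    """Search ensemble weights with differential evolution (illustrative setup)."""
    n_tas = ta_logits.shape[0]
    result = differential_evolution(
        lambda w: -ensemble_accuracy(w, ta_logits, labels),  # minimize negative accuracy
        bounds=[(0.0, 1.0)] * n_tas,
        seed=0, maxiter=100, tol=1e-6,
    )
    return np.abs(result.x) / (np.abs(result.x).sum() + 1e-12)
```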
Knowledge distillation is a widely applicable technique for training a student neural network under the guidance of a trained teacher network. For example, in neural network compression, a high-capacity teacher is distilled to train a compact student; in privileged learning, a teacher trained with privileged data is distilled to train a student without access to that data. The distillation loss determines how a teacher's knowledge is captured and transferred to the student. In this paper, we propose a new form of knowledge distillation loss that is inspired by the observation that semantically similar inputs tend to elicit similar activation patterns in a trained network. Similarity-preserving knowledge distillation guides the training of a student network such that input pairs that produce similar (dissimilar) activations in the teacher network produce similar (dissimilar) activations in the student network. In contrast to previous distillation methods, the student is not required to mimic the representation space of the teacher, but rather to preserve the pairwise similarities in its own representation space. Experiments on three public datasets demonstrate the potential of our approach.
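The similarity-preserving loss can be written compactly: build a batch-by-batch similarity matrix from each network's activations, row-normalize it, and penalize the squared difference. A minimal PyTorch sketch (naming ours):

```python
import torch
import torch.nn.functional as F

def sp_loss(f_t: torch.Tensor, f_s: torch.Tensor) -> torch.Tensor:
    """Similarity-preserving loss over a batch.

    f_t, f_s: activations of shape (B, ...) from teacher and student;
    only the batch-wise similarity structure is matched, so the two
    feature spaces may have different dimensions.
    """
    b = f_t.size(0)
    g_t = F.normalize(f_t.reshape(b, -1) @ f_t.reshape(b, -1).t(), p=2, dim=1)
    g_s = F.normalize(f_s.reshape(b, -1) @ f_s.reshape(b, -1).t(), p=2, dim=1)
    return ((g_t - g_s) ** 2).sum() / (b * b)
```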
Despite the fact that deep neural networks are powerful models and achieve appealing results on many tasks, they are too large to be deployed on edge devices like smartphones or embedded sensor nodes. There have been efforts to compress these networks, and a popular method is knowledge distillation, where a large (teacher) pre-trained network is used to train a smaller (student) network. However, in this paper, we show that the student network performance degrades when the gap between student and teacher is large. Given a fixed student network, one cannot employ an arbitrarily large teacher, or in other words, a teacher can effectively transfer its knowledge to students up to a certain size, not smaller. To alleviate this shortcoming, we introduce multi-step knowledge distillation, which employs an intermediate-sized network (teacher assistant) to bridge the gap between the student and the teacher. Moreover, we study the effect of teacher assistant size and extend the framework to multi-step distillation. Theoretical analysis and extensive experiments on the CIFAR-10, CIFAR-100, and ImageNet datasets and on CNN and ResNet architectures substantiate the effectiveness of our proposed approach.
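Each hop of the resulting chain (teacher → assistant(s) → student) is an ordinary distillation run; a sketch of the per-step loss in the usual Hinton formulation, with illustrative temperature and mixing values:

```python
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T: float = 4.0, alpha: float = 0.9):
    """Distillation loss used at every hop of the teacher -> assistant ->
    student chain (T and alpha are illustrative hyperparameters)."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)                      # T^2 keeps soft-target gradients scaled
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard
```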
With the improvement of AI chips (e.g., GPUs, TPUs, and NPUs) and the rapid development of the Internet of Things (IoT), powerful deep neural networks (DNNs) often consist of millions or even hundreds of millions of parameters, which may make them unsuitable for direct deployment on low-compute, low-capacity units such as edge devices. Recently, knowledge distillation (KD) has been regarded as one of the effective methods of model compression for reducing the number of model parameters. The main idea of KD is to extract useful information from the feature maps of a large model (the teacher) to guide the training of a small model (the student) whose size is much smaller than the teacher's. Although many KD-based methods have been proposed to exploit the information in the feature maps of the teacher's intermediate layers, most of them do not consider the similarity between the feature maps of the teacher and student models, which may cause the student to learn useless information. Inspired by the attention mechanism, we propose a novel KD method called representative teacher keys (RTK), which not only considers the similarity of feature maps but also filters out useless information to improve the performance of the target student model. In experiments, we validate the proposed method with multiple backbone networks (e.g., ResNet and WideResNet) and datasets (e.g., CIFAR10, CIFAR100, SVHN, and CINIC10). The results show that the proposed RTK can effectively improve the classification accuracy of attention-based KD methods.
One of the most efficient methods for model compression is hint distillation, where the student model is injected with information (hints) from several different layers of the teacher model. Although the selection of hint points can drastically alter the compression performance, conventional distillation approaches overlook this fact and use the same hint points as in the early studies. Therefore, we propose a clustering-based hint selection methodology, in which the layers of the teacher model are clustered with respect to several metrics and the cluster centers are used as the hint points. Once applied to a chosen teacher network, our method is applicable to any student network. The proposed approach is validated on the CIFAR-100 and ImageNet datasets, using various teacher-student pairs and numerous hint distillation methods. Our results show that hint points selected by our algorithm result in superior compression performance compared to state-of-the-art knowledge distillation algorithms on the same student models and datasets.
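The abstract does not fix the layer metrics, so the sketch below uses a generic per-layer descriptor and scikit-learn's k-means, selecting the layer nearest each cluster center as a hint point; treat the descriptor choice as an assumption:

```python
import numpy as np
from sklearn.cluster import KMeans

def select_hint_layers(layer_stats: np.ndarray, n_hints: int) -> list:
    """Pick hint layers as the layers closest to k-means cluster centers.

    layer_stats: (num_layers, d) matrix with one descriptor per teacher
    layer (the descriptor/metric is an assumption; the abstract does not
    specify it). Returns the indices of the selected layers.
    """
    km = KMeans(n_clusters=n_hints, n_init=10, random_state=0).fit(layer_stats)
    hints = []
    for center in km.cluster_centers_:
        hints.append(int(np.argmin(np.linalg.norm(layer_stats - center, axis=1))))
    return sorted(set(hints))
```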
The great success of deep learning is mainly due to large-scale network architectures and high-quality training data. However, deploying recent deep models on portable devices with limited memory and imaging capability remains challenging. Some existing works compress models through knowledge distillation. Unfortunately, these methods cannot handle images with degraded quality, such as low-resolution (LR) images. To this end, we make a pioneering effort to distill useful knowledge from a heavy network model learned from high-resolution (HR) images into a compact network model that will handle LR images, pushing current knowledge distillation techniques toward novel pixel distillation. To achieve this goal, we propose a teacher-assistant-student (TAS) framework that decomposes knowledge distillation into a model compression stage and a high-resolution representation transfer stage. Equipped with a novel feature super-resolution (FSR) module, our approach can learn a lightweight network model that achieves accuracy similar to the heavy teacher model but with fewer parameters, faster inference speed, and lower-resolution inputs. Comprehensive experiments on three widely used benchmarks, i.e., CUB-200-2011, Pascal VOC 2007, and ImageNetSub, demonstrate the effectiveness of our method.
Knowledge distillation (KD) is an effective tool for compressing deep classification models for edge devices. However, the performance of KD suffers from a large capacity gap between the teacher and student networks. Recent methods have resorted to a multi-teacher-assistant (TA) setting for KD, which sequentially reduces the size of the teacher model to relatively bridge the size gap between these models. This paper proposes a new technique called curriculum expert selection for knowledge distillation to efficiently enhance the learning of a compact student under the capacity-gap problem. The technique is built on the hypothesis that a student network should be guided gradually with a stratified teaching curriculum, as it learns easy (hard) data samples better from a lower- (higher-) capacity teacher network. Specifically, our method is a gradual TA-based KD technique that selects a single teacher per input image, based on a curriculum driven by the difficulty of classifying the image. In this work, we empirically verify our hypothesis and conduct rigorous experiments on the CIFAR-10, CIFAR-100, CINIC-10, and ImageNet datasets, showing improved accuracy on VGG-like models, ResNets, and WideResNet architectures.
Compared with traditional learning from scratch, knowledge distillation sometimes makes a DNN achieve superior performance. This paper provides a new perspective that explains the success of knowledge distillation based on information theory, i.e., by quantifying the knowledge points encoded in the intermediate layers of a DNN. To this end, we view signal processing in a DNN as the layer-wise discarding of information. A knowledge point is defined as an input unit whose information is discarded much less than that of other input units. On this basis, we propose three hypotheses for knowledge distillation grounded in the quantification of knowledge points. 1. A DNN learning from knowledge distillation encodes more knowledge points than a DNN learning from scratch. 2. Knowledge distillation makes a DNN more likely to learn different knowledge points simultaneously; in contrast, a DNN learning from scratch tends to encode various knowledge points sequentially. 3. A DNN learning from knowledge distillation is usually optimized more stably than a DNN learning from scratch. To verify these hypotheses, we design three types of metrics based on foreground-object annotations to analyze the feature representations of a DNN, i.e., the number and quality of knowledge points, the learning speed of different knowledge points, and the stability of optimization directions. In experiments, we diagnose various DNNs on different classification tasks, namely image classification, 3D point-cloud classification, binary sentiment classification, and question answering, which verify the above hypotheses.
Figure 1. An illustration of standard knowledge distillation. Despite widespread use, an understanding of when the student can learn from the teacher is missing.
Attention plays a critical role in human visual experience. Furthermore, it has recently been demonstrated that attention can also play an important role in the context of applying artificial neural networks to a variety of tasks from fields such as computer vision and NLP. In this work we show that, by properly defining attention for convolutional neural networks, we can actually use this type of information to significantly improve the performance of a student CNN by forcing it to mimic the attention maps of a powerful teacher network. To that end, we propose several novel methods of transferring attention, showing consistent improvement across a variety of datasets and convolutional neural network architectures. Code and models for our experiments are available at https://github.com/szagoruyko/attention-transfer.
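One widely used form of activation-based attention transfer collapses a feature map across channels into a normalized spatial map and matches teacher and student maps; a minimal PyTorch sketch (the exponent p=2 is one common setting):

```python
import torch
import torch.nn.functional as F

def attention_map(feat: torch.Tensor, p: int = 2) -> torch.Tensor:
    """Spatial attention map: channel-wise sum of |activations|^p,
    flattened and L2-normalized."""
    am = feat.abs().pow(p).sum(dim=1)            # (B, H, W)
    return F.normalize(am.flatten(1), p=2, dim=1)

def at_loss(f_t: torch.Tensor, f_s: torch.Tensor) -> torch.Tensor:
    """Distance between teacher and student attention maps
    (assumes matching spatial sizes at the chosen layer pair)."""
    return (attention_map(f_t) - attention_map(f_s)).pow(2).mean()
```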
In recent years, deep convolutional neural networks have made significant progress in pathology image segmentation. However, pathology image segmentation faces a dilemma in which higher-performance networks generally require more computational resources and storage. Owing to the inherently high resolution of pathology images, this phenomenon limits the deployment of high-accuracy networks in practical scenarios. To address this issue, we propose CoCo DistillNet, a novel cross-layer correlation (CoCo) knowledge distillation network for pathological gastric cancer segmentation. Knowledge distillation is a general technique that improves the performance of a compact network through knowledge transfer from a cumbersome network. Specifically, our CoCo DistillNet models the correlations of channel-mixed spatial similarity between different layers and then transfers this knowledge from a pretrained cumbersome teacher network to an untrained compact student network. In addition, we exploit an adversarial learning strategy to further prompt the distillation procedure, which we call adversarial distillation (AD). Furthermore, to stabilize the training procedure, we use an unsupervised paraphraser module (PM) to improve knowledge paraphrasing in the teacher network. As a result, extensive experiments on a gastric cancer segmentation dataset demonstrate the outstanding capability of CoCo DistillNet, which achieves state-of-the-art performance.
Often we wish to transfer representational knowledge from one neural network to another. Examples include distilling a large network into a smaller one, transferring knowledge from one sensory modality to a second, or ensembling a collection of models into a single estimator. Knowledge distillation, the standard approach to these problems, minimizes the KL divergence between the probabilistic outputs of a teacher and student network. We demonstrate that this objective ignores important structural knowledge of the teacher network. This motivates an alternative objective by which we train a student to capture significantly more information in the teacher's representation of the data. We formulate this objective as contrastive learning. Experiments demonstrate that our resulting new objective outperforms knowledge distillation and other cutting-edge distillers on a variety of knowledge transfer tasks, including single model compression, ensemble distillation, and cross-modal transfer. Our method sets a new state-of-the-art in many transfer tasks, and sometimes even outperforms the teacher network when combined with knowledge distillation.
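A simplified way to see the contrastive objective: treat the teacher's embedding of the same input as the positive and the other in-batch teacher embeddings as negatives. The sketch below is an in-batch InfoNCE variant, not the paper's exact memory-buffer estimator:

```python
import torch
import torch.nn.functional as F

def contrastive_kd_loss(s_emb, t_emb, tau: float = 0.1):
    """In-batch InfoNCE between student and teacher embeddings: the
    student must pick out the teacher embedding of the *same* input
    from the batch (a simplification of memory-based contrastive
    distillation)."""
    s = F.normalize(s_emb, dim=1)
    t = F.normalize(t_emb, dim=1)
    logits = s @ t.t() / tau                 # (B, B) similarity matrix
    targets = torch.arange(s.size(0), device=s.device)
    return F.cross_entropy(logits, targets)  # positives on the diagonal
```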
In this work, we present mutual information maximization knowledge distillation (MIMKD). Our method uses contrastive objectives to simultaneously estimate and maximize lower bounds on the mutual information between local and global feature representations of the teacher and the student networks. We demonstrate through extensive experiments that this improves the performance of low-capacity models by transferring knowledge from more performant but computationally expensive models, which can be used to produce better models that can run on devices with low computational resources. Our method is flexible: we can distill a teacher with an arbitrary network architecture into an arbitrary student network. Our empirical results show that MIMKD outperforms competing approaches across a wide range of student-teacher pairs with different architectures, including student networks with extremely low capacity. We improve ShuffleNetV2 from its baseline accuracy to 74.55% by distilling knowledge from ResNet-50. On ImageNet, we improve a ResNet-18 network from 68.88% to 70.32% accuracy (+1.44%) using a ResNet-34 teacher network.
Knowledge distillation (KD) is an efficient approach to transferring knowledge from a large "teacher" network to a smaller "student" network. Traditional KD methods require a large number of labeled training samples and a white-box teacher (whose parameters are accessible) to train a good student. However, these resources are not always available in real-world applications. The distillation process often happens at an external party's side where we do not have access to much data, and the teacher does not disclose its parameters due to security and privacy concerns. To overcome these challenges, we propose a black-box few-shot KD method to train the student with few unlabeled training samples and a black-box teacher. Our main idea is to expand the training set by generating a diverse set of synthetic images using mixup and a conditional variational autoencoder. These synthetic images, along with the labels obtained from the teacher, are used to train the student. We conduct extensive experiments to show that our method significantly outperforms recent state-of-the-art few/zero-shot KD methods on image classification tasks. The code and models are available at: https://github.com/nphdang/fs-bbt
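The mixup half of the idea is easy to sketch: blend pairs of the few available images and label the blends by querying the black-box teacher. The CVAE branch is omitted here, and teacher_fn is a hypothetical opaque prediction function, not part of the released code:

```python
import torch

def mixup_synthesize(images: torch.Tensor, teacher_fn, alpha: float = 1.0):
    """Enlarge a tiny unlabeled set via mixup, labeling each synthetic image
    with the black-box teacher's predictions (teacher_fn returns class
    probabilities and is queried as an opaque function; no parameter access
    is needed)."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(images.size(0))
    mixed = lam * images + (1 - lam) * images[perm]   # convex blend of image pairs
    with torch.no_grad():
        soft_labels = teacher_fn(mixed)               # black-box query
    return mixed, soft_labels
```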
Knowledge distillation has achieved remarkable success in model compression. However, most existing methods require the original training data, which is usually unavailable in practice due to privacy, security, and transmission restrictions. To address this problem, we propose a conditional generative data-free knowledge distillation (CGDD) framework to train efficient portable networks without any real data. In this framework, in addition to using the knowledge extracted from the teacher model, we introduce preset labels as additional auxiliary information to train the generator. The trained generator can then produce meaningful training samples of specified categories on demand. To facilitate the distillation process, besides using the conventional distillation loss, we treat the preset labels as ground-truth labels so that the student network is directly supervised by the categories of the synthetic training samples. Moreover, we force the student network to mimic the attention maps of the teacher model, further improving its performance. To verify the superiority of our method, we design a new evaluation metric called relative accuracy, which allows a direct comparison of the effectiveness of different distillation methods. Portable networks trained with the proposed data-free distillation method achieve 99.63%, 99.07%, and 99.84% relative accuracy on CIFAR10, CIFAR100, and Caltech101, respectively. The experimental results demonstrate the superiority of the proposed method.
Transformers have attracted much attention owing to their ability to learn global relations and their superior performance. To achieve higher performance, it is natural to distill complementary knowledge from Transformers to convolutional neural networks (CNNs). However, most existing knowledge distillation methods consider only homologous-architecture distillation, such as distilling knowledge from one CNN to another CNN, and may be unsuitable for cross-architecture scenarios, such as from a Transformer to a CNN. To address this problem, we propose a novel cross-architecture knowledge distillation method. Specifically, instead of directly mimicking the teacher's output/intermediate features, a partially cross attention projector and a group-wise linear projector are introduced to align the student's features with the teacher's. A multi-view robust training scheme is further proposed to improve the robustness and stability of the framework. Extensive experiments show that the proposed method outperforms 14 state-of-the-art methods on both small-scale and large-scale datasets.
In recent years, Siamese network based trackers have significantly advanced the state of the art in real-time tracking. Despite their success, Siamese trackers tend to suffer from high memory costs, which restrict their applicability to mobile devices with tight memory budgets. To address this issue, we propose a distilled Siamese tracking framework to learn small, fast, and accurate trackers (students), which capture critical knowledge from large Siamese trackers (teachers) via a teacher-students knowledge distillation model. This model is intuitively inspired by the one-teacher-vs-multiple-students learning method typically employed in schools. In particular, our model contains a single teacher-student distillation module and a student-student knowledge sharing mechanism. The former is designed using a tracking-specific distillation strategy to transfer knowledge from a teacher to students. The latter is utilized for mutual learning between students to enable in-depth knowledge understanding. Extensive empirical evaluations on several popular Siamese trackers demonstrate the generality and effectiveness of our framework. Moreover, results on five tracking benchmarks show that the proposed distilled trackers achieve compression rates of up to 18× and frame rates of 265 FPS, while obtaining tracking accuracy comparable to the base models.