Deep learning methods usually require large amounts of training data and lack interpretability. In this paper, we propose a novel knowledge distillation and model interpretation framework for medical image classification that jointly addresses these two problems. Specifically, to tackle the data-hunger problem, a small student model is learned by distilling knowledge from a cumbersome pretrained teacher model. To interpret the teacher model and assist the student's learning, an explainer module is introduced to highlight the parts of the input that are important to the teacher's predictions. Furthermore, the joint framework is trained in a principled way derived from an information-theoretic perspective. Compared with state-of-the-art methods on a fundus dataset, our framework performs better on both the knowledge distillation and the model interpretation task.
Transferring knowledge from a teacher neural network pretrained on the same or a similar task to a student neural network can significantly improve the performance of the student neural network. Existing knowledge transfer approaches match the activations or the corresponding handcrafted features of the teacher and the student networks. We propose an information-theoretic framework for knowledge transfer which formulates knowledge transfer as maximizing the mutual information between the teacher and the student networks. We compare our method with existing knowledge transfer methods on both knowledge distillation and transfer learning tasks and show that our method consistently outperforms existing methods. We further demonstrate the strength of our method on knowledge transfer across heterogeneous network architectures by transferring knowledge from a convolutional neural network (CNN) to a multi-layer perceptron (MLP) on CIFAR-10. The resulting MLP significantly outperforms the state-of-the-art methods and achieves performance comparable to a CNN with a single convolutional layer.
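One way such a variational mutual-information bound is often written in code is as a Gaussian negative log-likelihood: a small network predicts the teacher's features from the student's, with a learned per-dimension variance. The sketch below is a minimal PyTorch illustration under that assumption; the two-layer mean network and the feature dimensions are placeholders, not the paper's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VariationalMIBound(nn.Module):
    """Lower-bounds I(teacher feature; student feature) with a Gaussian q(t|s):
    maximizing the bound amounts to minimizing the Gaussian negative
    log-likelihood of the teacher feature under a mean predicted from the
    student feature and a learned per-dimension variance."""

    def __init__(self, s_dim: int, t_dim: int):
        super().__init__()
        self.mean_net = nn.Sequential(
            nn.Linear(s_dim, t_dim), nn.ReLU(), nn.Linear(t_dim, t_dim)
        )
        self.var_param = nn.Parameter(torch.zeros(t_dim))  # unconstrained parameter

    def forward(self, s_feat: torch.Tensor, t_feat: torch.Tensor) -> torch.Tensor:
        mu = self.mean_net(s_feat)                     # predicted teacher mean
        var = F.softplus(self.var_param) + 1e-6        # softplus keeps the variance positive
        nll = 0.5 * ((t_feat - mu) ** 2 / var + torch.log(var))
        return nll.mean()                              # minimize to tighten the bound


# toy usage with random tensors standing in for real penultimate-layer features
s = torch.randn(8, 128)   # assumed student feature dimension
t = torch.randn(8, 256)   # assumed teacher feature dimension
loss = VariationalMIBound(128, 256)(s, t)
loss.backward()
```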
In this work, we propose Mutual Information Maximization Knowledge Distillation (MIMKD). Our method uses a contrastive objective to simultaneously estimate and maximize a lower bound on the mutual information between local and global feature representations of a teacher and a student network. We demonstrate through extensive experiments that this improves the performance of low-capacity models by transferring knowledge from more performant but computationally expensive models, and can thus be used to produce better models that can run on devices with limited computational resources. Our method is flexible: a teacher with an arbitrary network architecture can be distilled into an arbitrary student network. Our empirical results show that MIMKD outperforms competing approaches across a wide range of student-teacher pairs with different architectures, including student networks of extremely low capacity. We obtain 74.55% accuracy with ShuffleNetV2, improving over its baseline accuracy, by distilling knowledge from ResNet-50. On ImageNet, we improve a ResNet-18 network from 68.88% to 70.32% accuracy (+1.44%) using a ResNet-34 teacher network.
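A common way to realize a contrastive lower bound on mutual information is an InfoNCE objective over matching teacher-student pairs within a batch. The sketch below illustrates only a global-feature variant; the projection heads, feature dimensions, and temperature are assumptions, and the paper's local-feature term and exact estimator are not reproduced.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GlobalInfoNCE(nn.Module):
    """Contrastive (InfoNCE) lower bound between teacher and student features.

    Matching (student_i, teacher_i) pairs are positives; all other teacher
    features in the batch act as negatives.
    """

    def __init__(self, s_dim, t_dim, proj_dim=128, temperature=0.1):
        super().__init__()
        self.proj_s = nn.Linear(s_dim, proj_dim)
        self.proj_t = nn.Linear(t_dim, proj_dim)
        self.temperature = temperature

    def forward(self, s_feat, t_feat):
        zs = F.normalize(self.proj_s(s_feat), dim=1)
        zt = F.normalize(self.proj_t(t_feat), dim=1)
        logits = zs @ zt.t() / self.temperature          # (B, B) similarity matrix
        targets = torch.arange(zs.size(0), device=zs.device)
        # cross-entropy with the diagonal entries as the positive pairs
        return F.cross_entropy(logits, targets)


s = torch.randn(16, 512)   # assumed student feature dimension
t = torch.randn(16, 2048)  # assumed teacher feature dimension
loss = GlobalInfoNCE(512, 2048)(s, t)
```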
The great success of deep learning is mainly due to large-scale network architectures and high-quality training data. However, it remains challenging to deploy recent deep models on portable devices with limited memory and imaging capability. Some existing works compress models via knowledge distillation. Unfortunately, these methods cannot handle images with degraded quality, such as low-resolution (LR) images. To this end, we make a pioneering effort to distill useful knowledge from a heavy network model learned from high-resolution (HR) images into a compact network model that will handle LR images, thereby advancing the current knowledge distillation techniques with the novel task of pixel distillation. To achieve this goal, we propose a Teacher-Assistant-Student (TAS) framework, which decomposes knowledge distillation into a model compression stage and a high-resolution representation transfer stage. Equipped with a novel Feature Super-Resolution (FSR) module, our approach can learn a lightweight network model that achieves accuracy similar to that of the heavy teacher model but with fewer parameters, faster inference speed, and lower-resolution inputs. Comprehensive experiments on three widely used benchmarks, i.e., CUB-200-2011, Pascal VOC 2007, and ImageNetSub, demonstrate the effectiveness of our approach.
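A rough sketch of what a feature super-resolution step could look like is given below: the student's low-resolution feature map is upsampled with a pixel-shuffle block and then matched against the teacher's high-resolution feature map. The channel counts, the 2x scale factor, and the plain MSE matching loss are illustrative assumptions, not the FSR module as published.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureSuperResolution(nn.Module):
    """Upsamples student features so they can be compared with teacher features
    computed from higher-resolution inputs (a 2x spatial gap is assumed)."""

    def __init__(self, s_channels, t_channels, scale=2):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(s_channels, t_channels * scale * scale, kernel_size=3, padding=1),
            nn.PixelShuffle(scale),          # (B, C*r*r, H, W) -> (B, C, H*r, W*r)
            nn.Conv2d(t_channels, t_channels, kernel_size=3, padding=1),
        )

    def forward(self, s_feat):
        return self.body(s_feat)


fsr = FeatureSuperResolution(s_channels=256, t_channels=512, scale=2)
s_feat = torch.randn(4, 256, 7, 7)      # student feature from a low-resolution input
t_feat = torch.randn(4, 512, 14, 14)    # teacher feature from the high-resolution input
distill_loss = F.mse_loss(fsr(s_feat), t_feat)
```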
Online knowledge distillation performs knowledge transfer among all student models to alleviate the reliance on pre-trained models. However, existing online methods rely heavily on prediction distributions and neglect further exploration of representational knowledge. In this paper, we propose a novel Multi-scale Feature Extraction and Fusion method (MFEF) for online knowledge distillation, which comprises three key components: multi-scale feature extraction, dual attention, and feature fusion, to generate more informative feature maps for distillation. Multi-scale feature extraction exploits divide-and-concatenate operations in the channel dimension to improve the multi-scale representation ability of feature maps. To obtain more accurate information, we design a dual-attention mechanism that adaptively emphasizes important channels and spatial regions. Moreover, we aggregate and fuse the previously processed feature maps via feature fusion to assist the training of the student models. Extensive experiments on CIFAR-10, CIFAR-100, and CINIC-10 show that MFEF transfers more beneficial representational knowledge for distillation and outperforms alternative methods across various network architectures.
Although deep models have shown promising performance in medical image segmentation, they rely heavily on large amounts of annotated data, which are difficult to access, especially in clinical practice. In addition, highly accurate deep models usually come with large model sizes, which limits their use in real-world scenarios. In this work, we propose ACT-NET, a novel asymmetric co-teacher framework, to alleviate the burden of both expensive annotations and computational costs through semi-supervised knowledge distillation. We advance teacher-student learning with a co-teacher network to facilitate asymmetric knowledge distillation from large models to small ones by alternating student and teacher roles, thereby obtaining tiny but accurate models for clinical deployment. To validate the effectiveness of ACT-NET, we conduct experiments on the ACDC dataset for cardiac substructure segmentation. Extensive experimental results show that ACT-NET outperforms other knowledge distillation methods and achieves lossless segmentation performance with 250x fewer parameters.
In recent years, knowledge distillation (KD) has been regarded as a key solution for model compression and acceleration. In KD, a small student model is typically trained from a large teacher model by minimizing the divergence between their probabilistic outputs. However, as shown in our experiments, existing KD methods may not transfer the teacher's critical explainable knowledge to the student, i.e., the explanations of the predictions made by the two models are not consistent. In this paper, we propose a novel explainable knowledge distillation model, called XDistillation, through which explanation information is transferred from the teacher model to the student model as well. The XDistillation model leverages the idea of convolutional autoencoders to approximate the teacher's explanations. Our experiments show that models trained by XDistillation outperform those trained by conventional KD methods not only in terms of predictive accuracy but also in terms of faithfulness to the teacher model.
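The divergence mentioned above is typically instantiated as Hinton-style soft-target distillation, i.e., a temperature-softened KL term blended with the usual cross-entropy. The sketch below shows only this baseline KD objective (the explanation-transfer part of XDistillation is not shown); the temperature and weighting are arbitrary choices.

```python
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    """Standard soft-target distillation: KL between temperature-softened
    teacher and student distributions, blended with the usual cross-entropy."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)                      # gradient rescaling from the original KD formulation
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard


student_logits = torch.randn(8, 10, requires_grad=True)
teacher_logits = torch.randn(8, 10)
labels = torch.randint(0, 10, (8,))
loss = kd_loss(student_logits, teacher_logits, labels)
```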
Knowledge distillation in machine learning is the process of transferring knowledge from a large model, called the teacher, to a smaller model, called the student. Knowledge distillation is one of the techniques for compressing a large network (the teacher) into a smaller network (the student) that can be deployed on small devices such as mobile phones. When the size gap between the teacher and the student networks increases, the performance of the student network decreases. To address this problem, an intermediate model known as the teacher assistant model is employed between the teacher model and the student model, which in turn bridges the gap between the teacher and the student. In this research, we show that the student model (the smaller model) can be further improved by using multiple teacher assistant models. We combine these multiple teacher assistant models using weighted ensemble learning, where a differential evolution optimization algorithm is used to generate the weight values.
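A toy sketch of the weighting step is shown below: differential evolution searches for ensemble weights that maximize the validation accuracy of the weighted teacher-assistant ensemble, whose averaged outputs would then serve as the distillation target for the student. The fitness definition, the random placeholder logits, and the weight normalization are assumptions for illustration only.

```python
import numpy as np
from scipy.optimize import differential_evolution

# placeholder validation logits from three teacher-assistant models
rng = np.random.default_rng(0)
n_val, n_classes, n_tas = 200, 10, 3
ta_logits = rng.normal(size=(n_tas, n_val, n_classes))
labels = rng.integers(0, n_classes, size=n_val)

def neg_ensemble_accuracy(weights):
    """Fitness: negative accuracy of the weight-averaged TA ensemble."""
    w = np.asarray(weights) / (np.sum(weights) + 1e-12)
    ensemble = np.tensordot(w, ta_logits, axes=1)      # (n_val, n_classes)
    return -np.mean(ensemble.argmax(axis=1) == labels)

result = differential_evolution(neg_ensemble_accuracy, bounds=[(0.0, 1.0)] * n_tas, seed=0)
weights = result.x / result.x.sum()
print("ensemble weights:", weights)
# the weighted ensemble of TA outputs would then provide the soft targets
# for distilling the final (smaller) student model
```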
Often we wish to transfer representational knowledge from one neural network to another. Examples include distilling a large network into a smaller one, transferring knowledge from one sensory modality to a second, or ensembling a collection of models into a single estimator. Knowledge distillation, the standard approach to these problems, minimizes the KL divergence between the probabilistic outputs of a teacher and student network. We demonstrate that this objective ignores important structural knowledge of the teacher network. This motivates an alternative objective by which we train a student to capture significantly more information in the teacher's representation of the data. We formulate this objective as contrastive learning. Experiments demonstrate that our resulting new objective outperforms knowledge distillation and other cutting-edge distillers on a variety of knowledge transfer tasks, including single model compression, ensemble distillation, and cross-modal transfer. Our method sets a new state-of-the-art in many transfer tasks, and sometimes even outperforms the teacher network when combined with knowledge distillation.
Knowledge distillation has become an important approach to obtaining compact yet effective models. To achieve this goal, a small student model is trained to exploit the knowledge of a large, well-trained teacher model. However, due to the capacity gap between the teacher and the student, the student's performance is hard to reach the level of the teacher. Regarding this issue, existing methods propose to reduce the difficulty of the teacher's knowledge via a proxy. We argue that these proxy-based methods overlook the loss of the teacher's knowledge, which may cause the student to encounter a capacity bottleneck. In this paper, we alleviate the capacity gap problem from a new perspective, with the purpose of avoiding knowledge loss. Instead of sacrificing part of the teacher's knowledge, we propose to build a more powerful student via adversarial collaborative learning. To this end, we further propose an Adversarial Collaborative Knowledge Distillation (ACKD) method that effectively improves the performance of knowledge distillation. Specifically, we construct the student model with multiple auxiliary learners. Meanwhile, we devise an adversarial collaborative module (ACM) that introduces attention mechanisms and adversarial learning to enhance the capacity of the student. Extensive experiments on four classification tasks show the superiority of the proposed ACKD.
Electroencephalogram (EEG) has been one of the common neuromonitoring modalities for real-world brain-computer interfaces (BCIs) because of its non-invasiveness, low cost, and high temporal resolution. Recently, light-weight and portable EEG wearable devices based on low-density montages have increased the convenience and usability of BCI applications. However, loss of EEG decoding performance is often inevitable due to the reduced number of electrodes and the limited scalp coverage of a low-density EEG montage. To address this issue, we introduce knowledge distillation (KD), a learning mechanism developed for transferring knowledge/information between neural network models, to enhance the performance of low-density EEG decoding. Our framework includes a newly proposed similarity-keeping (SK) teacher-student KD scheme that encourages a low-density EEG student model to acquire the inter-sample similarity as in a pre-trained teacher model trained on high-density EEG data. The experimental results validate that our SK-KD framework consistently improves motor-imagery EEG decoding accuracy as the number of electrodes of the input EEG data decreases. For both common low-density headphone-like and headband-like montages, our method outperforms state-of-the-art KD methods across various EEG decoding model architectures. As the first KD scheme developed for enhancing EEG decoding, we foresee the proposed SK-KD framework facilitating the practicality of low-density EEG-based BCIs in real-world applications.
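One compact way to encourage a student to keep the teacher's inter-sample similarity is to match batch-level similarity (Gram) matrices computed from flattened features, as sketched below; the row normalization, the squared-error loss, and the assumed feature shapes are illustrative and may differ from the proposed SK scheme.

```python
import torch
import torch.nn.functional as F

def similarity_keeping_loss(s_feat, t_feat):
    """Match batch-wise inter-sample similarity matrices of student and teacher.

    s_feat, t_feat: (B, ...) feature tensors; only the batch dimension must agree.
    """
    b = s_feat.size(0)
    gs = s_feat.reshape(b, -1)
    gt = t_feat.reshape(b, -1)
    sim_s = F.normalize(gs @ gs.t(), p=2, dim=1)   # (B, B) student similarity, row-normalized
    sim_t = F.normalize(gt @ gt.t(), p=2, dim=1)   # (B, B) teacher similarity, row-normalized
    return ((sim_s - sim_t) ** 2).sum() / (b * b)


s = torch.randn(32, 8, 64)      # e.g. low-density student features (channels x time), assumed shape
t = torch.randn(32, 64, 64)     # teacher features from high-density EEG, assumed shape
loss = similarity_keeping_loss(s, t)
```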
Learning style refers to a type of training mechanism adopted by an individual to gain new knowledge. As suggested by the VARK model, humans have different learning preferences like visual, auditory, etc., for acquiring and effectively processing information. Inspired by this concept, our work explores the idea of mixed information sharing with model compression in the context of Knowledge Distillation (KD) and Mutual Learning (ML). Unlike conventional techniques that share the same type of knowledge with all networks, we propose to train individual networks with different forms of information to enhance the learning process. We formulate a combined KD and ML framework with one teacher and two student networks that share or exchange information in the form of predictions and feature maps. Our comprehensive experiments with benchmark classification and segmentation datasets demonstrate that with 15% compression, the ensemble performance of networks trained with diverse forms of knowledge outperforms the conventional techniques both quantitatively and qualitatively.
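The sketch below illustrates one possible reading of this mixed sharing: student A receives the teacher's predictions, student B mimics an (already dimension-aligned) teacher feature map, and the two students additionally learn from each other's predictions. The loss composition, the temperature, and the assumption of pre-aligned features are hypothetical, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def soft_kl(student_logits, target_logits, T=3.0):
    """Soft-target KL term (temperature-softened target probabilities)."""
    return F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(target_logits / T, dim=1),
                    reduction="batchmean") * T * T

def mixed_sharing_losses(t_logits, t_feat, a_logits, b_logits, b_feat, labels):
    """Student A learns from the teacher's predictions, student B mimics a
    (dimension-aligned) teacher feature map, and A and B mutually learn from
    each other's predictions."""
    loss_a = (F.cross_entropy(a_logits, labels)
              + soft_kl(a_logits, t_logits.detach())
              + soft_kl(a_logits, b_logits.detach()))
    loss_b = (F.cross_entropy(b_logits, labels)
              + F.mse_loss(b_feat, t_feat.detach())
              + soft_kl(b_logits, a_logits.detach()))
    return loss_a, loss_b


# toy tensors standing in for network outputs
t_logits, a_logits, b_logits = (torch.randn(8, 10) for _ in range(3))
t_feat, b_feat = torch.randn(8, 256), torch.randn(8, 256)
labels = torch.randint(0, 10, (8,))
loss_a, loss_b = mixed_sharing_losses(t_logits, t_feat, a_logits, b_logits, b_feat, labels)
```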
Knowledge distillation has made remarkable achievements in model compression. However, most existing methods require the original training data, which is usually unavailable in practice due to privacy, security, and transmission limitations. To address this problem, we propose a conditional generative data-free knowledge distillation (CGDD) framework to train efficient portable networks without any real data. In this framework, in addition to the knowledge extracted from the teacher model, we introduce preset labels as additional auxiliary information to train the generator. The trained generator can then produce meaningful training samples of specified categories as required. To facilitate the distillation process, besides using the conventional distillation loss, we treat the preset label as the ground-truth label, so that the student network is directly supervised by the categories of the synthetic training samples. Moreover, we force the student network to mimic the attention maps of the teacher model, which further improves its performance. To verify the superiority of our method, we design a new evaluation metric called relative accuracy, which allows the effectiveness of different distillation methods to be compared directly. Portable networks trained with the proposed data-free distillation method achieve 99.63%, 99.07%, and 99.84% relative accuracy on CIFAR-10, CIFAR-100, and Caltech-101, respectively. The experimental results demonstrate the superiority of the proposed method.
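A compressed sketch of one training step in this spirit is given below: a label-conditioned generator synthesizes samples, and the student is supervised by the preset labels, a soft-target distillation term, and an attention-map mimicry term. The toy generator and classifier, the assumption that networks return (logits, feature map) pairs with matching spatial sizes, and the unit loss weights are all placeholders, not the CGDD implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConditionalGenerator(nn.Module):
    """Maps (noise, preset label) to a toy 32x32 RGB synthetic image."""
    def __init__(self, z_dim=100, n_classes=10):
        super().__init__()
        self.embed = nn.Embedding(n_classes, z_dim)
        self.net = nn.Sequential(
            nn.Linear(z_dim * 2, 8 * 8 * 128), nn.ReLU(),
            nn.Unflatten(1, (128, 8, 8)),
            nn.Upsample(scale_factor=2), nn.Conv2d(128, 64, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2), nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),
        )
    def forward(self, z, y):
        return self.net(torch.cat([z, self.embed(y)], dim=1))

class TinyNet(nn.Module):
    """Stand-in classifier that returns (logits, feature map)."""
    def __init__(self, width):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, width, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(8)
        )
        self.head = nn.Linear(width * 64, 10)
    def forward(self, x):
        f = self.features(x)
        return self.head(f.flatten(1)), f

def attention_map(feat):
    # spatial attention: channel-wise mean of squared activations, L2-normalized
    return F.normalize(feat.pow(2).mean(dim=1, keepdim=True).flatten(1), dim=1)

def student_step(generator, teacher, student, batch_size=16, T=4.0):
    z = torch.randn(batch_size, 100)
    preset_labels = torch.randint(0, 10, (batch_size,))
    x = generator(z, preset_labels).detach()         # synthetic training samples
    t_logits, t_feat = teacher(x)
    s_logits, s_feat = student(x)
    kd = F.kl_div(F.log_softmax(s_logits / T, dim=1),
                  F.softmax(t_logits / T, dim=1), reduction="batchmean") * T * T
    ce = F.cross_entropy(s_logits, preset_labels)    # preset labels as ground truth
    at = F.mse_loss(attention_map(s_feat), attention_map(t_feat))
    return kd + ce + at                              # loss weights omitted for brevity

loss = student_step(ConditionalGenerator(), TinyNet(64), TinyNet(16))
```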
Knowledge distillation aims at transferring knowledge acquired in one model (a teacher) to another model (a student) that is typically smaller. Previous approaches can be expressed as a form of training the student to mimic output activations of individual data examples represented by the teacher. We introduce a novel approach, dubbed relational knowledge distillation (RKD), that transfers mutual relations of data examples instead. For concrete realizations of RKD, we propose distance-wise and angle-wise distillation losses that penalize structural differences in relations. Experiments conducted on different tasks show that the proposed method improves educated student models by a significant margin. In particular for metric learning, it allows students to outperform their teachers' performance, achieving the state of the art on standard benchmark datasets.
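A sketch of the two relational losses described above is given below; the normalization of pairwise distances by their mean and the Huber-style penalty follow the paper's description, while the loss weighting and embedding dimensions are illustrative.

```python
import torch
import torch.nn.functional as F

def pairwise_distances(x):
    d = torch.cdist(x, x, p=2)
    mean_d = d[d > 0].mean()            # normalize by the mean pairwise distance
    return d / (mean_d + 1e-12)

def distance_wise_loss(s, t):
    """Penalize differences between normalized pairwise-distance matrices."""
    return F.smooth_l1_loss(pairwise_distances(s), pairwise_distances(t))

def angle_wise_loss(s, t):
    """Penalize differences between cosines of angles formed by example triplets."""
    def angles(x):
        e = x.unsqueeze(0) - x.unsqueeze(1)                 # e[i, j] = x_i - x_j
        e = F.normalize(e, p=2, dim=2)
        return torch.einsum('ijd,kjd->ijk', e, e)           # cosine of the angle at vertex j
    return F.smooth_l1_loss(angles(s), angles(t))


s_emb = torch.randn(16, 128)    # student embeddings (assumed dimension)
t_emb = torch.randn(16, 512)    # teacher embeddings (assumed dimension)
loss = distance_wise_loss(s_emb, t_emb) + 2.0 * angle_wise_loss(s_emb, t_emb)
```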
As a promising approach to model compression, knowledge distillation improves the performance of a compact model by transferring knowledge from a cumbersome one. The knowledge used to guide the training of the student is important. Previous distillation methods in semantic segmentation strive to extract various forms of knowledge from the features, which involves elaborate manual designs relying on prior information and yields limited performance gains. In this paper, we propose a simple yet effective feature distillation method called normalized feature distillation (NFD), which aims to achieve effective distillation with the original features, without manually designing new forms of knowledge. The key idea is to use normalization to prevent the student from focusing on mimicking the magnitude of the teacher's feature responses. Our method achieves state-of-the-art distillation results for semantic segmentation on the Cityscapes, VOC 2012, and ADE20K datasets. The code will be made available.
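A minimal sketch of the magnitude-removal idea: student features are channel-aligned with a 1x1 convolution and both feature maps are normalized before an MSE loss. The specific normalization shown (per-channel standardization over the spatial dimensions) is an assumption; the paper may use a different normalization.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NormalizedFeatureDistill(nn.Module):
    """Align student channels, then match normalized (magnitude-free) features."""

    def __init__(self, s_channels, t_channels):
        super().__init__()
        self.align = nn.Conv2d(s_channels, t_channels, kernel_size=1)

    @staticmethod
    def normalize(feat, eps=1e-6):
        # standardize each channel over the spatial dimensions so that only the
        # spatial response pattern, not its magnitude, is distilled
        mean = feat.mean(dim=(2, 3), keepdim=True)
        std = feat.std(dim=(2, 3), keepdim=True)
        return (feat - mean) / (std + eps)

    def forward(self, s_feat, t_feat):
        return F.mse_loss(self.normalize(self.align(s_feat)), self.normalize(t_feat))


nfd = NormalizedFeatureDistill(s_channels=128, t_channels=512)
loss = nfd(torch.randn(2, 128, 64, 64), torch.randn(2, 512, 64, 64))
```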
Large pre-trained transformers top contemporary semantic segmentation benchmarks, but come with high computational cost and lengthy training. To lift this constraint, we study efficient semantic segmentation from the perspective of comprehensive knowledge distillation and consider bridging the gap between multi-source knowledge extraction and transformer-specific patch embeddings. We propose the Transformer-based Knowledge Distillation (TransKD) framework, which learns compact student transformers by distilling both the feature maps and the patch embeddings of large teacher transformers, bypassing the long pre-training process and reducing the FLOPs by >85.0%. Specifically, we propose two fundamental and two optimization modules: (1) Cross Selective Fusion (CSF) enables knowledge transfer between cross-stage features via channel attention and feature-map distillation within hierarchical transformers; (2) Patch Embedding Alignment (PEA) performs dimensional transformation within the patchifying process to facilitate patch embedding distillation; (3) Global-Local Context Mixer (GL-Mixer) extracts both global and local information from a representative embedding; (4) Embedding Assistant (EA) acts as an embedding method to seamlessly bridge the teacher and student models with the teacher's number of channels. Experiments on the Cityscapes, ACDC, and NYUv2 datasets show that TransKD outperforms state-of-the-art distillation frameworks and rivals the time-consuming pre-training method. The code is available at https://github.com/ruipingl/transkd.
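A rough sketch of what a patch-embedding distillation term with dimension alignment could look like is shown below; the single linear projection and the MSE loss are assumptions for illustration and do not reproduce the PEA module.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PatchEmbeddingAlignment(nn.Module):
    """Project student patch embeddings to the teacher's embedding width so the
    two sequences of patch tokens can be matched directly."""

    def __init__(self, s_dim, t_dim):
        super().__init__()
        self.proj = nn.Linear(s_dim, t_dim)

    def forward(self, s_tokens, t_tokens):
        # s_tokens: (B, N, s_dim), t_tokens: (B, N, t_dim); assumes the same
        # number of patches N at the corresponding stage
        return F.mse_loss(self.proj(s_tokens), t_tokens)


pea = PatchEmbeddingAlignment(s_dim=64, t_dim=128)
loss = pea(torch.randn(2, 1024, 64), torch.randn(2, 1024, 128))
```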
Deep convolutional neural networks with a larger number of parameters improve the accuracy of object detection on natural images, where objects of interest are annotated with horizontal bounding boxes. On aerial images captured from a bird's-eye view, such improvements in model architecture and deeper convolutional layers can also boost the performance of oriented object detection. However, it is difficult to directly apply those state-of-the-art object detectors on devices with limited computational resources, which necessitates lightweight models obtained through model compression. To address this issue, we propose a model compression method for rotated object detection on aerial images via knowledge distillation, namely KD-RNet. Given a well-trained teacher oriented object detector with a large number of parameters, both the object category and location information are transferred to the compact student network of KD-RNet through a collaborative training strategy. The category information is transferred by knowledge distillation on the predicted probability distribution, and a soft regression loss is adopted to handle the displacement in the location information transfer. Experimental results on a large-scale aerial object detection dataset (DOTA) show that the proposed KD-RNet model achieves improved mean average precision (mAP) with a reduced number of parameters; meanwhile, KD-RNet enhances performance by providing high-quality detections with higher overlap with ground-truth annotations.
In this paper, we investigate the knowledge distillation strategy for training small semantic segmentation networks by making use of large networks. We start from the straightforward scheme, pixel-wise distillation, which applies the distillation scheme adopted for image classification and performs knowledge distillation for each pixel separately. We further propose to distill the structured knowledge from large networks to small networks, which is motivated by the fact that semantic segmentation is a structured prediction problem. We study two structured distillation schemes: (i) pair-wise distillation that distills the pairwise similarities, and (ii) holistic distillation that uses GAN to distill holistic knowledge. The effectiveness of our knowledge distillation approaches is demonstrated by extensive experiments on three scene parsing datasets: Cityscapes, Camvid and ADE20K.
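The first two schemes can be sketched compactly: a per-pixel KL term on softened class distributions and a pair-wise term that matches cosine-similarity maps between spatial locations. The holistic GAN-based term is omitted, and the temperature, feature shapes, and plain MSE on the affinity matrices are assumptions.

```python
import torch
import torch.nn.functional as F

def pixel_wise_distillation(s_logits, t_logits, T=1.0):
    """Per-pixel KL divergence between softened class distributions.
    logits: (B, C, H, W)."""
    kl = F.kl_div(F.log_softmax(s_logits / T, dim=1),
                  F.softmax(t_logits / T, dim=1), reduction="none")
    return kl.sum(dim=1).mean() * T * T          # sum over classes, mean over pixels

def pair_wise_distillation(s_feat, t_feat):
    """Match pairwise cosine-similarity maps between spatial locations."""
    def affinity(feat):
        b, c, h, w = feat.shape
        f = F.normalize(feat.flatten(2), dim=1)  # (B, C, H*W), unit length per location
        return torch.bmm(f.transpose(1, 2), f)   # (B, H*W, H*W) similarity matrix
    return F.mse_loss(affinity(s_feat), affinity(t_feat))


s_logits, t_logits = torch.randn(2, 19, 64, 64), torch.randn(2, 19, 64, 64)
s_feat, t_feat = torch.randn(2, 128, 32, 32), torch.randn(2, 256, 32, 32)
# the holistic (GAN-based) term from the paper is not sketched here
loss = pixel_wise_distillation(s_logits, t_logits) + pair_wise_distillation(s_feat, t_feat)
```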
In recent years, Siamese network based trackers have significantly advanced the state-of-the-art in real-time tracking. Despite their success, Siamese trackers tend to suffer from high memory costs, which restrict their applicability to mobile devices with tight memory budgets. To address this issue, we propose a distilled Siamese tracking framework to learn small, fast and accurate trackers (students), which capture critical knowledge from large Siamese trackers (teachers) by a teacher-students knowledge distillation model. This model is intuitively inspired by the one teacher vs. multiple students learning method typically employed in schools. In particular, our model contains a single teacher-student distillation module and a student-student knowledge sharing mechanism. The former is designed using a tracking-specific distillation strategy to transfer knowledge from a teacher to students. The latter is utilized for mutual learning between students to enable in-depth knowledge understanding. Extensive empirical evaluations on several popular Siamese trackers demonstrate the generality and effectiveness of our framework. Moreover, the results on five tracking benchmarks show that the proposed distilled trackers achieve compression rates of up to 18$\times$ and frame-rates of $265$ FPS, while obtaining comparable tracking accuracy compared to base models.
Diabetic Retinopathy (DR) is a leading cause of vision loss in the world, and early DR detection is necessary to prevent vision loss and support an appropriate treatment. In this work, we leverage interactive machine learning and introduce a joint learning framework, termed DRG-Net, to effectively learn both disease grading and multi-lesion segmentation. Our DRG-Net consists of two modules: (i) DRG-AI-System to classify DR Grading, localize lesion areas, and provide visual explanations; (ii) DRG-Expert-Interaction to receive feedback from user-expert and improve the DRG-AI-System. To deal with sparse data, we utilize transfer learning mechanisms to extract invariant feature representations by using Wasserstein distance and adversarial learning-based entropy minimization. Besides, we propose a novel attention strategy at both low- and high-level features to automatically select the most significant lesion information and provide explainable properties. In terms of human interaction, we further develop DRG-Net as a tool that enables expert users to correct the system's predictions, which may then be used to update the system as a whole. Moreover, thanks to the attention mechanism and loss functions constraint between lesion features and classification features, our approach can be robust given a certain level of noise in the feedback of users. We have benchmarked DRG-Net on the two largest DR datasets, i.e., IDRID and FGADR, and compared it to various state-of-the-art deep learning networks. In addition to outperforming other SOTA approaches, DRG-Net is effectively updated using user feedback, even in a weakly-supervised manner.