Unlike conventional knowledge distillation (KD), Self-KD allows a network to learn knowledge from itself without any guidance from extra networks. This paper proposes to perform Self-KD from image Mixture (MixSKD), which integrates these two techniques into a unified framework. MixSKD mutually distills feature maps and probability distributions between the random pair of original images and their mixup images in a meaningful way. Therefore, it guides the network to learn cross-image knowledge by modelling supervisory signals from mixup images. Moreover, we construct a self-teacher network by aggregating multi-stage feature maps to provide soft labels to supervise the backbone classifier, further improving the efficacy of self-boosting. Experiments on image classification and transfer learning to object detection and semantic segmentation demonstrate that MixSKD outperforms other state-of-the-art Self-KD and data augmentation methods. The code is available at https://github.com/winycg/self-kd-lib.
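A minimal PyTorch sketch of the cross-image supervision described above is given below. It is an illustration rather than the released MixSKD implementation: the single KL direction, the fixed mixing ratio lam, and the temperature T are simplifying assumptions. The network's prediction on a mixup image is aligned with the mixup of its predictions on the two source images.

```python
import torch
import torch.nn.functional as F

def cross_image_self_kd(model, x, lam=0.7, T=3.0):
    """Align the prediction on a mixup image with the mixture of the
    predictions on the two source images (one KL direction shown)."""
    index = torch.randperm(x.size(0))                      # random partner for each image
    x_mix = lam * x + (1.0 - lam) * x[index]               # pixel-level mixup
    p = F.softmax(model(x) / T, dim=1)
    target = (lam * p + (1.0 - lam) * p[index]).detach()   # mixup of the original predictions
    log_q = F.log_softmax(model(x_mix) / T, dim=1)         # prediction on the mixup image
    return F.kl_div(log_q, target, reduction="batchmean") * T * T
```

A symmetric term that also pushes the mixture of original predictions toward the prediction on the mixup image would make the distillation mutual, as the abstract describes.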
Knowledge distillation (KD) is an effective framework that aims to transfer meaningful information from a large teacher to a smaller student. Generally, KD often involves how to define and transfer knowledge. Previous KD methods usually focus on mining various forms of knowledge, for example, feature maps and refined information. However, the knowledge is derived from the primary supervised task and is thus highly task-specific. Motivated by the recent success of self-supervised representation learning, we propose an auxiliary self-supervision augmented task to guide networks to learn more meaningful features. Therefore, we can derive soft self-supervision augmented distributions from this task as richer dark knowledge for KD. Unlike previous knowledge, this distribution encodes joint knowledge from supervised and self-supervised feature learning. Beyond knowledge exploration, we propose to append several auxiliary branches at various hidden layers to fully take advantage of hierarchical feature maps. Each auxiliary branch is guided to learn the self-supervision augmented task and to distill this distribution from teacher to student. Overall, we call our KD method Hierarchical Self-Supervision Augmented Knowledge Distillation (HSSAKD). Experiments on standard image classification show that both offline and online HSSAKD achieve state-of-the-art performance in the field of KD. Further transfer experiments on object detection verify that HSSAKD can guide the network to learn better features. The code is available at https://github.com/winycg/hsakd.
Knowledge distillation often involves how to define and transfer knowledge from teacher to student effectively. Although recent self-supervised contrastive knowledge achieves the best performance, forcing the network to learn such knowledge may damage the representation learning of the original class recognition task. We therefore adopt an alternative self-supervised augmented task to guide the network to learn the joint distribution of the original recognition task and the self-supervised auxiliary task. It is demonstrated to be a richer form of knowledge that improves the representation power without losing the normal classification capability. Moreover, previous methods that only transfer probabilistic knowledge between the final layers are incomplete. We propose to append several auxiliary classifiers to hierarchical intermediate feature maps to generate diverse self-supervised knowledge, and to perform a one-to-one transfer to teach the student network thoroughly. Our method significantly surpasses the previous SOTA SSKD with an average improvement of 2.56% on CIFAR-100 and an improvement of 0.77% on ImageNet across widely used network pairs. Code is available at https://github.com/winycg/hsakd.
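The self-supervised augmented task can be pictured with the hedged sketch below, which assumes rotation prediction as the auxiliary transformation and auxiliary classifiers with num_classes * 4 outputs; the transformation choice, temperature, and loss weighting are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn.functional as F

def four_rotations(x):
    """Stack 0/90/180/270-degree rotated copies along the batch dimension."""
    return torch.cat([torch.rot90(x, k, dims=(2, 3)) for k in range(4)], dim=0)

def joint_labels(y):
    """Joint label index = class * 4 + rotation, matching the stacked copies."""
    n = y.size(0)
    rot = torch.arange(4, device=y.device).repeat_interleave(n)
    return y.repeat(4) * 4 + rot

def self_supervision_augmented_kd(student_aux, teacher_aux, x, y, T=4.0):
    """student_aux / teacher_aux are auxiliary classifiers over the joint
    (class, rotation) label space; the joint distribution is distilled."""
    x_aug = four_rotations(x)
    y_joint = joint_labels(y)
    s_logits = student_aux(x_aug)
    with torch.no_grad():
        t_logits = teacher_aux(x_aug)
    ce = F.cross_entropy(s_logits, y_joint)              # learn the augmented task
    kd = F.kl_div(F.log_softmax(s_logits / T, dim=1),
                  F.softmax(t_logits / T, dim=1),
                  reduction="batchmean") * T * T         # transfer the joint distribution
    return ce + kd
```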
Teacher-free online knowledge distillation (KD) aims to collaboratively train an ensemble of multiple student models that distill knowledge from each other. Although existing online KD methods achieve desirable performance, they often focus on class probabilities as the core knowledge type and ignore valuable feature representational information. We present a Mutual Contrastive Learning (MCL) framework for online KD. The core idea of MCL is to perform mutual interaction and transfer of contrastive distributions among a cohort of networks in an online manner. MCL can aggregate cross-network embedding information and maximize a lower bound of the mutual information between two networks. This enables each network to learn extra contrastive knowledge from the others, leading to better feature representations and thus improving the performance of visual recognition tasks. Beyond the final layer, we extend MCL to several intermediate layers assisted by auxiliary feature refinement modules, which further enhances the representation power for online KD. Experiments on image classification and transfer learning to visual recognition tasks show that MCL brings consistent performance gains over state-of-the-art online KD approaches. This superiority demonstrates that MCL can guide the network to generate better feature representations. Our code is publicly available at https://github.com/winycg/mcl.
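One way to read the "mutual transfer of contrastive distributions" is sketched below. This is an interpretation under simplifying assumptions (in-batch negatives only, symmetric KL, assumed temperature), not the released MCL code: each network forms an in-batch contrastive distribution over the other network's embeddings, and the two distributions are pulled toward each other.

```python
import torch
import torch.nn.functional as F

def mutual_contrastive_loss(feat_a, feat_b, tau=0.07):
    """Symmetric KL between the contrastive distribution that network A forms
    over B's embeddings and the one that B forms over A's embeddings."""
    a = F.normalize(feat_a, dim=1)
    b = F.normalize(feat_b, dim=1)
    logits_ab = a @ b.detach().t() / tau    # A's anchors scored against B's embeddings
    logits_ba = b @ a.detach().t() / tau    # B's anchors scored against A's embeddings
    kl_ab = F.kl_div(F.log_softmax(logits_ab, dim=1),
                     F.softmax(logits_ba, dim=1).detach(), reduction="batchmean")
    kl_ba = F.kl_div(F.log_softmax(logits_ba, dim=1),
                     F.softmax(logits_ab, dim=1).detach(), reduction="batchmean")
    return 0.5 * (kl_ab + kl_ba)
```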
Data mixing has proved effective in improving the generalization ability of deep neural networks. While early methods mix samples with hand-crafted policies (e.g., linear interpolation), recent methods utilize saliency information to match the mixed samples and labels via complex offline optimization. However, there is a trade-off between precise mixing policies and optimization complexity. To address this challenge, we propose a novel automatic mixup (AutoMix) framework, in which the mixup policy is parameterized and directly serves the ultimate classification goal. Specifically, AutoMix reformulates mixup classification into two sub-tasks (i.e., mixed-sample generation and mixup classification) with corresponding sub-networks, and solves them in a bi-level optimization framework. For the generation, a learnable lightweight mixup generator, Mix Block, is designed to generate mixed samples by modeling patch-wise relationships under the direct supervision of the corresponding mixed labels. To prevent the degradation and instability of bi-level optimization, we further introduce a momentum pipeline to train AutoMix in an end-to-end manner. Extensive experiments on nine image benchmarks demonstrate the superiority of AutoMix over state-of-the-art methods in various classification scenarios and downstream tasks.
Recent studies on knowledge distillation have found that combining the "dark knowledge" from multiple teachers or students helps create better soft targets for training, but at the cost of more computation and/or parameters. In this work, we present batch knowledge ensembling (BAKE) to produce refined soft targets for an anchor image by propagating and ensembling the knowledge of the other samples in the same mini-batch. Specifically, for each sample of interest, the propagation of knowledge is weighted according to inter-sample affinities, which are estimated on-the-fly with the current network. The propagated knowledge can then be ensembled to form a better distillation target. In this way, our BAKE framework performs online knowledge ensembling across multiple samples with only a single network. It requires minimal computational and memory overhead compared with existing knowledge ensembling methods. Extensive experiments demonstrate that the lightweight yet effective BAKE consistently boosts the classification performance of various architectures on multiple datasets, e.g., a significant +0.7% gain for Swin-T on ImageNet with only +1.5% computational overhead and zero additional parameters. BAKE not only improves the vanilla baselines but also surpasses the single-network state of the art on all benchmarks.
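The propagation step can be sketched as a single weighted averaging over the batch, as below. This is a simplified one-step version with assumed temperatures and blending weight; the exact multi-step propagation of the method is not reproduced here.

```python
import torch
import torch.nn.functional as F

def bake_soft_targets(features, logits, T=4.0, omega=0.5, affinity_tau=0.1):
    """Ensemble the softened predictions of the other samples in the batch,
    weighted by feature affinity, and blend with the sample's own prediction."""
    p = F.softmax(logits.detach() / T, dim=1)       # softened self-predictions
    f = F.normalize(features.detach(), dim=1)
    affinity = f @ f.t()                            # pairwise feature similarities
    affinity.fill_diagonal_(float("-inf"))          # a sample never teaches itself
    w = F.softmax(affinity / affinity_tau, dim=1)   # row-normalized propagation weights
    propagated = w @ p                              # knowledge gathered from the rest of the batch
    return omega * p + (1.0 - omega) * propagated

def bake_loss(logits, soft_targets, T=4.0):
    """Distill the ensembled batch knowledge back into the network itself."""
    log_q = F.log_softmax(logits / T, dim=1)
    return F.kl_div(log_q, soft_targets, reduction="batchmean") * T * T
```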
Self-distillation exploits non-uniform soft supervision from the model itself during training and improves performance without any runtime cost. However, the overhead during training is often overlooked, even though the time and memory overhead of training matters more and more in the era of giant models. This paper proposes an efficient self-distillation method named Zipf's Label Smoothing (Zipf's LS), which uses the on-the-fly prediction of the network to generate soft supervision that conforms to a Zipf distribution, without using any contrastive samples or auxiliary parameters. Our idea comes from the empirical observation that, when a network is duly trained, its per-sample predictions, after being sorted by magnitude and averaged, follow a distribution reminiscent of Zipf's law in natural-language frequency statistics. By enforcing this property at the sample level and throughout the whole training period, we find that the prediction accuracy can be greatly improved. With ResNet50 on the INAT21 fine-grained classification dataset, our technique achieves a +3.61% accuracy gain over the vanilla baseline, and a 0.88% additional gain over previous label smoothing or self-distillation strategies. The implementation is publicly available at https://github.com/megvii-research/zipfls.
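A rough reading of the soft-label construction is sketched below; the weighting coefficient and the plain soft cross-entropy term are assumptions, and the method's additional refinements are omitted. Non-target classes receive probability mass proportional to 1/rank, with ranks taken from the network's own current prediction.

```python
import torch.nn.functional as F

def zipf_soft_labels(logits, targets):
    """Zipf-shaped distribution over the non-target classes, ranked by the
    network's own prediction (rank 1 = largest logit)."""
    order = logits.argsort(dim=1, descending=True)
    ranks = order.argsort(dim=1).to(logits.dtype) + 1.0   # per-class rank position
    zipf = 1.0 / ranks                                    # mass proportional to 1 / rank
    zipf.scatter_(1, targets.unsqueeze(1), 0.0)           # ground-truth class handled by the CE term
    return zipf / zipf.sum(dim=1, keepdim=True)

def zipf_ls_loss(logits, targets, alpha=0.1):
    ce = F.cross_entropy(logits, targets)
    soft = zipf_soft_labels(logits.detach(), targets)
    log_q = F.log_softmax(logits, dim=1)
    reg = -(soft * log_q).sum(dim=1).mean()               # soft cross-entropy against the Zipf prior
    return ce + alpha * reg
```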
Online knowledge distillation transfers knowledge among all student models to alleviate the reliance on pre-trained models. However, existing online methods rely heavily on prediction distributions and neglect further exploration of representational knowledge. In this paper, we propose a novel Multi-scale Feature Extraction and Fusion method (MFEF) for online knowledge distillation, which comprises three key components: multi-scale feature extraction, dual attention, and feature fusion, to generate more informative feature maps for distillation. The multi-scale feature extraction, which exploits divide-and-concatenate in the channel dimension, is proposed to improve the multi-scale representation ability of feature maps. To obtain more accurate information, we design a dual attention to adaptively highlight important channels and spatial regions. Moreover, we aggregate and fuse the previously processed feature maps via feature fusion to assist the training of student models. Extensive experiments on CIFAR-10, CIFAR-100, and CINIC-10 show that MFEF transfers more beneficial representational knowledge for distillation and outperforms alternative methods among various network architectures.
Mixed sample regularization (MSR), such as Mixup or CutMix, is a powerful data augmentation strategy for generalizing convolutional neural networks. Previous empirical analysis has illustrated an orthogonal performance gain between MSR and conventional offline knowledge distillation (KD). More specifically, the student network can be enhanced by involving MSR in the training stage of sequential distillation. Yet the interplay between MSR and online knowledge distillation, a stronger distillation paradigm in which an ensemble of peers learn mutually from each other, remains unexplored. To bridge the gap, we make the first attempt to incorporate CutMix into online distillation, where we empirically observe a significant improvement. Encouraged by this fact, we propose an even stronger MSR specifically for online distillation, named Cut^nMix. Furthermore, a novel online distillation framework is designed upon Cut^nMix to further strengthen the distillation with feature-level mutual learning and a self-ensemble teacher. Comprehensive evaluation on CIFAR10 and CIFAR100 with six network architectures shows that our method can consistently outperform state-of-the-art distillation methods.
Fine-grained visual classification can be addressed by deep representation learning under the supervision of manually pre-defined targets (e.g., one-hot or Hadamard codes). Such target coding schemes are less flexible in modeling inter-class correlation and are also sensitive to sparse and imbalanced data distributions. In light of this, this paper introduces a novel target coding scheme, the dynamic target relation graph (DTRG), which, as an auxiliary feature regularization, is a self-generated structural output to be mapped from input images. Specifically, an online computation of class-level feature centers is designed to generate cross-category distances in the representation space, which can thus be depicted by a dynamic graph in a non-parametric manner. Explicitly minimizing intra-class feature variations anchored on those class-level centers encourages the learning of discriminative features. Moreover, owing to the exploitation of inter-class dependency, the proposed target graph can alleviate data sparsity and imbalance in representation learning. Inspired by the recent success of mixup-style data augmentation, this paper introduces randomness into the soft construction of the dynamic target relation graph to further explore the relation diversity of target classes. Experimental results demonstrate the effectiveness of our method on a number of diverse benchmarks for multiple visual classification tasks, in particular achieving state-of-the-art performance on popular fine-grained object benchmarks as well as superior robustness against sparse and imbalanced data. The source code is publicly available at https://github.com/akonlau/dtrg.
Mixup is a popular data-dependent augmentation technique for deep neural networks that contains two sub-tasks, mixup generation and classification. The community typically confines mixup to supervised learning (SL), and the objective of the generation sub-task is fixed to the sampled pairs rather than considering the whole data manifold. To overcome these limitations, we systematically study the objectives of the two sub-tasks and propose Scenario-Agnostic Mixup (SAMix) for both SL and self-supervised learning (SSL) scenarios. Specifically, we hypothesize and verify that the core objective of mixup generation is to optimize the local smoothness between two classes subject to global discrimination from other classes. Based on this finding, an $\eta$-balanced mixup loss is proposed for complementary training of the two sub-tasks. Meanwhile, the generation sub-task is parameterized as an optimizable module, Mixer, which exploits an attention mechanism to generate mixed samples without label dependency. Extensive experiments on SL and SSL tasks demonstrate that SAMix consistently outperforms leading methods by a large margin.
Unlike existing knowledge distillation methods, which focus on baseline settings where the teacher models and training strategies are not as strong and competitive as state-of-the-art approaches, this paper presents a method dubbed DIST to distill better from a stronger teacher. We empirically find that the discrepancy between the predictions of the student and a stronger teacher tends to become fairly severe. As a result, the exact match of predictions in KL divergence would disturb the training and make existing methods perform poorly. In this paper, we show that simply preserving the relations between the predictions of teacher and student would suffice, and propose a correlation-based loss to capture the intrinsic inter-class relations from the teacher explicitly. Besides, considering that different instances have different semantic similarities to each class, we also extend this relational match to the intra-class level. Our method is simple yet practical, and extensive experiments demonstrate that it adapts well to various architectures, model sizes and training strategies, and can achieve state-of-the-art performance consistently on image classification, object detection, and semantic segmentation tasks. Code is available at: https://github.com/hunto/DIST_KD.
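A compact sketch of such a relation-preserving loss is shown below. It follows the spirit of what the abstract describes (correlation instead of exact KL matching, applied along both rows and columns of the prediction matrix), but the temperature and loss weights are assumptions rather than the official DIST configuration.

```python
import torch.nn.functional as F

def pearson_dist(a, b, eps=1e-8):
    """1 - Pearson correlation along the last dimension."""
    a = a - a.mean(dim=-1, keepdim=True)
    b = b - b.mean(dim=-1, keepdim=True)
    corr = (a * b).sum(-1) / (a.norm(dim=-1) * b.norm(dim=-1) + eps)
    return (1.0 - corr).mean()

def relation_kd_loss(student_logits, teacher_logits, T=4.0, beta=1.0, gamma=1.0):
    """Match inter-class relations per sample (rows) and intra-class relations
    per class across the batch (columns) between student and teacher."""
    ps = F.softmax(student_logits / T, dim=1)
    pt = F.softmax(teacher_logits.detach() / T, dim=1)
    inter_class = pearson_dist(ps, pt)           # row-wise relations among classes
    intra_class = pearson_dist(ps.t(), pt.t())   # column-wise relations among instances
    return beta * inter_class + gamma * intra_class
```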
To boost performance, deep neural networks require deeper or wider network structures that incur massive computational and memory costs. To alleviate this issue, self-knowledge distillation methods regularize the model by distilling the internal knowledge of the model itself. Conventional self-knowledge distillation methods require additional trainable parameters or depend on the data. In this paper, we propose a simple and effective self-knowledge distillation method using dropout (SD-Dropout). SD-Dropout distills the posterior distributions of multiple models through dropout sampling. Our method does not require any additional trainable modules, does not rely on data, and needs only simple operations. Furthermore, this simple method can easily be combined with various self-knowledge distillation approaches. We provide a theoretical and experimental analysis of the effect of forward and reverse KL divergences in our work. Extensive experiments on various vision tasks, i.e., image classification, object detection, and distribution shift, demonstrate that the proposed method can effectively improve the generalization of a single network. Further experiments show that the proposed method also improves calibration performance, adversarial robustness, and out-of-distribution detection ability.
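A minimal sketch of the dropout-based self-distillation idea follows; the toy architecture, temperature, and single KL direction are placeholders. Two independent dropout masks applied to the same feature yield two posteriors of the same network, and one is distilled into the other (swapping the KL arguments gives the reverse-KL variant the abstract mentions).

```python
import torch.nn as nn
import torch.nn.functional as F

class TinyNet(nn.Module):
    """Toy backbone; any feature extractor + dropout + linear head works."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 256), nn.ReLU())
        self.dropout = nn.Dropout(p=0.5)
        self.fc = nn.Linear(256, num_classes)

    def forward(self, x):
        return self.fc(self.dropout(self.features(x)))

def sd_dropout_loss(model, x, T=1.0):
    """Distill between two dropout-sampled posteriors of the same network."""
    feat = model.features(x)
    logits_a = model.fc(model.dropout(feat))    # first dropout mask
    logits_b = model.fc(model.dropout(feat))    # second, independent mask
    p_a = F.softmax(logits_a.detach() / T, dim=1)
    log_p_b = F.log_softmax(logits_b / T, dim=1)
    return F.kl_div(log_p_b, p_a, reduction="batchmean") * T * T
```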
In real teaching scenarios, an excellent teacher always teaches what he (or she) is good at but the student is not. This gives the student the best assistance in making up for his (or her) weaknesses and becoming a good one overall. Enlightened by this, we introduce the approach into the knowledge distillation framework and propose a data-based distillation method named ``Teaching what you Should Teach (TST)''. To be specific, TST contains a neural network-based data augmentation module with a prior bias, which can assist in finding what the teacher is good at but the student is not, by learning the magnitudes and probabilities needed to generate suitable samples. By training the data augmentation module and the generalized distillation paradigm in turn, a student model with excellent generalization ability can be created. To verify the effectiveness of TST, we conducted extensive comparative experiments on object recognition (CIFAR-100 and ImageNet-1k), detection (MS-COCO), and segmentation (Cityscapes) tasks. As experimentally demonstrated, TST achieves state-of-the-art performance on almost all teacher-student pairs. Furthermore, we conduct intriguing studies of TST, including how to solve the performance degradation caused by a stronger teacher and what magnitudes and probabilities are needed for the distillation framework.
Regional dropout strategies have been proposed to enhance the performance of convolutional neural network classifiers. They have proved to be effective for guiding the model to attend on less discriminative parts of objects (e.g. leg as opposed to head of a person), thereby letting the network generalize better and have better object localization capabilities. On the other hand, current methods for regional dropout remove informative pixels on training images by overlaying a patch of either black pixels or random noise. Such removal is not desirable because it leads to information loss and inefficiency during training. We therefore propose the CutMix augmentation strategy: patches are cut and pasted among training images where the ground truth labels are also mixed proportionally to the area of the patches. By making efficient use of training pixels and retaining the regularization effect of regional dropout, CutMix consistently outperforms the state-of-the-art augmentation strategies on CIFAR and ImageNet classification tasks, as well as on the ImageNet weakly-supervised localization task. Moreover, unlike previous augmentation methods, our CutMix-trained ImageNet classifier, when used as a pretrained model, results in consistent performance gains in Pascal detection and MS-COCO image captioning benchmarks. We also show that CutMix improves the model robustness against input corruptions and its out-of-distribution detection performances. Source code and pretrained models are available at https://github.com/clovaai/CutMix-PyTorch.
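The strategy described above reduces to a few lines; the sketch below follows the commonly used reference recipe (Beta-sampled mixing ratio, random box, area-corrected label weights) rather than being copied from the official repository.

```python
import numpy as np
import torch
import torch.nn.functional as F

def rand_bbox(height, width, lam):
    """Random box covering roughly a (1 - lam) fraction of the image area."""
    cut_ratio = np.sqrt(1.0 - lam)
    cut_h, cut_w = int(height * cut_ratio), int(width * cut_ratio)
    cy, cx = np.random.randint(height), np.random.randint(width)
    y1, y2 = np.clip(cy - cut_h // 2, 0, height), np.clip(cy + cut_h // 2, 0, height)
    x1, x2 = np.clip(cx - cut_w // 2, 0, width), np.clip(cx + cut_w // 2, 0, width)
    return y1, y2, x1, x2

def cutmix(images, labels, alpha=1.0):
    """Paste a patch from a shuffled partner into each image and mix the labels
    in proportion to the pasted area."""
    lam = np.random.beta(alpha, alpha)
    index = torch.randperm(images.size(0))
    y1, y2, x1, x2 = rand_bbox(images.size(2), images.size(3), lam)
    images = images.clone()
    images[:, :, y1:y2, x1:x2] = images[index, :, y1:y2, x1:x2]
    # recompute lam from the actual pasted area (clipping may shrink the box)
    lam = 1.0 - ((y2 - y1) * (x2 - x1)) / float(images.size(2) * images.size(3))
    return images, labels, labels[index], lam

def cutmix_loss(logits, labels_a, labels_b, lam):
    """Ground-truth labels mixed proportionally to the patch area."""
    return lam * F.cross_entropy(logits, labels_a) + (1.0 - lam) * F.cross_entropy(logits, labels_b)
```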
Knowledge distillation has become an important approach for obtaining a compact yet effective model. To achieve this goal, a small student model is trained to exploit the knowledge of a large, well-trained teacher model. However, due to the capacity gap between the teacher and the student, the student's performance is hard to raise to the teacher's level. Regarding this issue, existing methods propose to reduce the difficulty of the teacher's knowledge via proxies. We argue that these proxy-based methods overlook the loss of the teacher's knowledge, which may cause the student to hit a capacity bottleneck. In this paper, we alleviate the capacity-gap problem from a new perspective, with the purpose of averting knowledge loss. Instead of sacrificing part of the teacher's knowledge, we propose to build a more powerful student via adversarial collaborative learning. To this end, we propose an Adversarial Collaborative Knowledge Distillation (ACKD) method that effectively improves the performance of knowledge distillation. Specifically, we construct the student model with multiple auxiliary learners. Meanwhile, we devise an adversarial collaborative module (ACM) that introduces attention mechanisms and adversarial learning to improve the capacity of the student. Extensive experiments on four classification tasks show the superiority of the proposed ACKD.
Knowledge distillation is an effective and stable method for model compression via knowledge transfer. Conventional knowledge distillation (KD) transfers knowledge from a large, well-trained teacher network to a small student network, which is a one-way process. Recently, deep mutual learning (DML) has been proposed to help student networks learn collaboratively and simultaneously. However, to the best of our knowledge, KD and DML have never been jointly explored in a unified framework to solve the knowledge distillation problem. In this paper, we observe that the teacher model provides more trustworthy supervision signals in KD, while the student captures more similar behaviors to the teacher in DML. Based on these observations, we first propose to combine KD with DML in a unified framework. Furthermore, we propose a Semi-Online Knowledge Distillation (SOKD) method that effectively improves the performance of the student and the teacher. In this method, we introduce the peer-teaching training fashion of DML to alleviate the student's imitation difficulty, and also leverage the supervision signals provided by the well-trained teacher in KD. In addition, we show that our framework can easily be extended to feature-based distillation methods. Extensive experiments on the CIFAR-100 and ImageNet datasets demonstrate that the proposed method achieves state-of-the-art performance.
Previous knowledge distillation (KD) methods for object detection mostly focus on feature imitation instead of mimicking the prediction logits due to its inefficiency in distilling the localization information. In this paper, we investigate whether logit mimicking always lags behind feature imitation. Towards this goal, we first present a novel localization distillation (LD) method which can efficiently transfer the localization knowledge from the teacher to the student. Second, we introduce the concept of valuable localization region that can aid to selectively distill the classification and localization knowledge for a certain region. Combining these two new components, for the first time, we show that logit mimicking can outperform feature imitation and that the absence of localization distillation is a critical reason why logit mimicking has underperformed for years. The thorough studies exhibit the great potential of logit mimicking, which can significantly alleviate the localization ambiguity, learn robust feature representation, and ease the training difficulty in the early stage. We also provide a theoretical connection between the proposed LD and classification KD, showing that they share an equivalent optimization effect. Our distillation scheme is simple as well as effective and can be easily applied to both dense horizontal object detectors and rotated object detectors. Extensive experiments on the MS COCO, PASCAL VOC, and DOTA benchmarks demonstrate that our method can achieve considerable AP improvement without any sacrifice on the inference speed. Our source code and pretrained models are publicly available at https://github.com/HikariTJU/LD.
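A hedged sketch of how localization knowledge can be distilled is given below. It assumes a GFL-style head that represents each of the four box edges as a discrete distribution over n bins (the paper builds on such a representation); shapes, temperature, and region selection are simplified away.

```python
import torch.nn.functional as F

def localization_distillation(student_box_logits, teacher_box_logits, T=10.0):
    """Transfer localization knowledge like classification KD, edge by edge.
    Expected shape for both inputs: (num_boxes, 4, n_bins)."""
    log_s = F.log_softmax(student_box_logits / T, dim=-1)
    t = F.softmax(teacher_box_logits.detach() / T, dim=-1)
    return F.kl_div(log_s, t, reduction="batchmean") * T * T
```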
Mixup is a popular data augmentation technique based on creating new samples by linear interpolation between two given data samples, to improve both the generalization and robustness of the trained model. Knowledge distillation (KD), on the other hand, is widely used for model compression and transfer learning, which involves using a larger network's implicit knowledge to guide the learning of a smaller network. At first glance, these two techniques seem very different, however, we found that ``smoothness" is the connecting link between the two and is also a crucial attribute in understanding KD's interplay with mixup. Although many mixup variants and distillation methods have been proposed, much remains to be understood regarding the role of a mixup in knowledge distillation. In this paper, we present a detailed empirical study on various important dimensions of compatibility between mixup and knowledge distillation. We also scrutinize the behavior of the networks trained with a mixup in the light of knowledge distillation through extensive analysis, visualizations, and comprehensive experiments on image classification. Finally, based on our findings, we suggest improved strategies to guide the student network to enhance its effectiveness. Additionally, the findings of this study provide insightful suggestions to researchers and practitioners that commonly use techniques from KD. Our code is available at https://github.com/hchoi71/MIX-KD.
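One straightforward way to combine the two techniques is sketched below; it is an illustrative recipe, not the specific strategy analysed in the paper, and the hyperparameters are assumptions. The teacher is queried on the same mixed input that the student sees, and the mixed-label cross-entropy is combined with a temperature-scaled KL term.

```python
import numpy as np
import torch
import torch.nn.functional as F

def mixup(x, y, alpha=0.2):
    """Standard mixup: convex combination of two samples and their labels."""
    lam = np.random.beta(alpha, alpha)
    index = torch.randperm(x.size(0))
    return lam * x + (1.0 - lam) * x[index], y, y[index], lam

def mixup_kd_step(student, teacher, x, y, T=4.0, kd_weight=0.5, alpha=0.2):
    x_mix, y_a, y_b, lam = mixup(x, y, alpha)
    s_logits = student(x_mix)
    with torch.no_grad():
        t_logits = teacher(x_mix)
    ce = lam * F.cross_entropy(s_logits, y_a) + (1.0 - lam) * F.cross_entropy(s_logits, y_b)
    kd = F.kl_div(F.log_softmax(s_logits / T, dim=1),
                  F.softmax(t_logits / T, dim=1),
                  reduction="batchmean") * T * T
    return (1.0 - kd_weight) * ce + kd_weight * kd
```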
Often we wish to transfer representational knowledge from one neural network to another. Examples include distilling a large network into a smaller one, transferring knowledge from one sensory modality to a second, or ensembling a collection of models into a single estimator. Knowledge distillation, the standard approach to these problems, minimizes the KL divergence between the probabilistic outputs of a teacher and student network. We demonstrate that this objective ignores important structural knowledge of the teacher network. This motivates an alternative objective by which we train a student to capture significantly more information in the teacher's representation of the data. We formulate this objective as contrastive learning. Experiments demonstrate that our resulting new objective outperforms knowledge distillation and other cutting-edge distillers on a variety of knowledge transfer tasks, including single model compression, ensemble distillation, and cross-modal transfer. Our method sets a new state-of-the-art in many transfer tasks, and sometimes even outperforms the teacher network when combined with knowledge distillation.
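The contrast between the two objectives can be illustrated with a simplified in-batch version of the contrastive formulation; the released method uses a large memory buffer of negatives and learned projection heads, which this sketch omits, so treat the names and hyperparameters as assumptions.

```python
import torch
import torch.nn.functional as F

def contrastive_repr_distill(student_feat, teacher_feat, tau=0.1):
    """The student embedding of sample i should match the teacher embedding of
    the same sample (positive, on the diagonal) and not those of the other
    samples in the batch (negatives)."""
    s = F.normalize(student_feat, dim=1)
    t = F.normalize(teacher_feat.detach(), dim=1)
    logits = s @ t.t() / tau                           # every student/teacher pair similarity
    labels = torch.arange(s.size(0), device=s.device)  # positives lie on the diagonal
    return F.cross_entropy(logits, labels)
```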