Deep neural networks are parameterized by thousands or millions of parameters and have shown tremendous success on many classification problems. However, the large number of parameters makes it difficult to integrate these models into edge devices such as smartphones and wearables. To address this problem, knowledge distillation (KD) has been widely adopted, in which a pre-trained, high-capacity network is used to train a much smaller network suitable for edge devices. This paper presents, for the first time, a study of the applicability and challenges of using KD for time-series data from wearable devices. Successful application of KD requires specific data augmentation methods during training, yet it is not clear whether a coherent strategy exists for choosing augmentation methods during KD. In this paper, we report the results of a detailed study that compares and contrasts various common choices, as well as some hybrid data augmentation strategies, in KD-based human activity analysis. Research in this area is often limited because no comprehensive database of wearable-device data is available in the public domain. Our study considers publicly available databases of human activity and sedentary behavior derived from large-scale intervention studies. We find that the choice of data augmentation technique during KD has a variable degree of impact on final performance, and that the best network choice as well as the data augmentation strategy are specific to the dataset at hand. However, we also provide a set of general recommendations that yield strong baseline performance across the databases studied.
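As a concrete illustration of the kind of augmentation choices such a study compares, the following is a minimal sketch of two common time-series augmentations (jitter and scaling) applied to accelerometer windows before the usual distillation loss is computed; the function names and parameter values are illustrative assumptions, not the study's exact configuration.

```python
import torch

def jitter(x: torch.Tensor, sigma: float = 0.05) -> torch.Tensor:
    """Add Gaussian noise to a batch of windows shaped (batch, channels, time)."""
    return x + sigma * torch.randn_like(x)

def scaling(x: torch.Tensor, sigma: float = 0.1) -> torch.Tensor:
    """Multiply each channel by a random factor drawn around 1.0."""
    factor = 1.0 + sigma * torch.randn(x.size(0), x.size(1), 1, device=x.device)
    return x * factor

# Illustrative usage: augment a mini-batch of accelerometer windows before the
# usual KD loss is computed (sigma values are assumptions, not the study's).
batch = torch.randn(32, 3, 128)          # 32 windows, 3 axes, 128 time steps
augmented = scaling(jitter(batch))
```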
Figure 1. An illustration of standard knowledge distillation. Despite widespread use, an understanding of when the student can learn from the teacher is missing.
Knowledge distillation (KD) has gained a lot of attention in the field of model compression for edge devices thanks to its effectiveness in compressing large powerful networks into smaller lower-capacity models. Online distillation, in which both the teacher and the student are learning collaboratively, has also gained much interest due to its ability to improve on the performance of the networks involved. The Kullback-Leibler (KL) divergence ensures proper knowledge transfer between the teacher and student. However, most online KD techniques present bottlenecks when there is a capacity gap between the networks: when the models are trained cooperatively and simultaneously, the KL distance becomes incapable of properly minimizing the distance between the teacher's and student's distributions. Alongside accuracy, critical edge-device applications need well-calibrated compact networks. Confidence calibration provides a sensible way of getting trustworthy predictions. We propose BD-KD: Balancing of Divergences for online Knowledge Distillation. We show that adaptively balancing between the reverse and forward divergences shifts the focus of the training strategy to the compact student network without limiting the teacher network's learning process. We demonstrate that, by performing this balancing design at the level of the student distillation loss, we improve upon both performance accuracy and calibration of the compact student network. We conducted extensive experiments using a variety of network architectures and show improvements on multiple datasets including CIFAR-10, CIFAR-100, Tiny-ImageNet, and ImageNet. We illustrate the effectiveness of our approach through comprehensive comparisons and ablations with current state-of-the-art online and offline KD techniques.
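A minimal sketch of what a student loss that balances forward and reverse KL divergences could look like, assuming a fixed balancing weight and temperature; BD-KD's actual adaptive balancing rule is not reproduced here.

```python
import torch.nn.functional as F

def balanced_kd_loss(student_logits, teacher_logits, beta=0.5, T=4.0):
    # Illustrative sketch: a fixed weight `beta` mixes forward and reverse KL;
    # the adaptive balancing of BD-KD is not reproduced here.
    p_s = F.softmax(student_logits / T, dim=1)
    p_t = F.softmax(teacher_logits / T, dim=1)
    log_p_s = F.log_softmax(student_logits / T, dim=1)
    log_p_t = F.log_softmax(teacher_logits / T, dim=1)
    forward_kl = F.kl_div(log_p_s, p_t, reduction="batchmean")  # KL(teacher || student)
    reverse_kl = F.kl_div(log_p_t, p_s, reduction="batchmean")  # KL(student || teacher)
    return (T * T) * ((1 - beta) * forward_kl + beta * reverse_kl)
```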
Mixup is a popular data augmentation technique based on creating new samples by linear interpolation between two given data samples, to improve both the generalization and robustness of the trained model. Knowledge distillation (KD), on the other hand, is widely used for model compression and transfer learning, which involves using a larger network's implicit knowledge to guide the learning of a smaller network. At first glance, these two techniques seem very different; however, we found that "smoothness" is the connecting link between the two and is also a crucial attribute in understanding KD's interplay with mixup. Although many mixup variants and distillation methods have been proposed, much remains to be understood regarding the role of mixup in knowledge distillation. In this paper, we present a detailed empirical study on various important dimensions of compatibility between mixup and knowledge distillation. We also scrutinize the behavior of networks trained with mixup in the light of knowledge distillation through extensive analysis, visualizations, and comprehensive experiments on image classification. Finally, based on our findings, we suggest improved strategies to guide the student network to enhance its effectiveness. Additionally, the findings of this study provide insightful suggestions to researchers and practitioners that commonly use techniques from KD. Our code is available at https://github.com/hchoi71/MIX-KD.
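A minimal sketch of combining mixup with a standard soft-target KD loss; the loss weighting and the choice to feed the mixed input to the teacher are illustrative assumptions, not the paper's prescribed recipe.

```python
import torch
import torch.nn.functional as F

def mixup(x, y, alpha=0.2):
    """Linearly interpolate a batch with a shuffled copy of itself."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(x.size(0), device=x.device)
    return lam * x + (1 - lam) * x[perm], y, y[perm], lam

def mixup_kd_loss(student, teacher, x, y, T=4.0, w_kd=0.9):
    # Illustrative combination: the teacher also sees the mixed input, and the
    # hard-label term is itself mixed; the weights are assumptions.
    x_mix, y_a, y_b, lam = mixup(x, y)
    s_logits = student(x_mix)
    with torch.no_grad():
        t_logits = teacher(x_mix)
    ce = lam * F.cross_entropy(s_logits, y_a) + (1 - lam) * F.cross_entropy(s_logits, y_b)
    kd = F.kl_div(F.log_softmax(s_logits / T, dim=1),
                  F.softmax(t_logits / T, dim=1), reduction="batchmean") * T * T
    return (1 - w_kd) * ce + w_kd * kd
```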
Knowledge distillation (KD) is an effective tool for compressing deep classification models for edge devices. However, the performance of KD is affected by a large capacity gap between the teacher and student networks. Recent methods have resorted to a multiple teacher assistant (TA) setting for KD, which sequentially reduces the size of the teacher model to relatively bridge the size gap between these models. This paper proposes a new technique called Curriculum Expert Selection for Knowledge Distillation to efficiently enhance the learning of a compact student under the capacity-gap problem. The technique is built on the hypothesis that the student network should be guided gradually using a stratified teaching curriculum, since it learns easy (hard) data samples better from a lower- (higher-) capacity teacher network. Specifically, our method is a gradual TA-based KD technique that selects a single teacher per input image, based on a curriculum driven by the difficulty of classifying the image. In this work, we empirically verify our hypothesis and conduct rigorous experiments on the CIFAR-10, CIFAR-100, CINIC-10, and ImageNet datasets, showing improved accuracy on VGG-like models, ResNets, and WideResNet architectures.
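A minimal sketch of per-sample teacher selection driven by classification difficulty, following the hypothesis stated above; the difficulty proxy (a reference teacher's cross-entropy) and the fixed difficulty thresholds are assumptions rather than the paper's exact curriculum.

```python
import torch
import torch.nn.functional as F

def select_teacher_logits(teachers, x, y, thresholds=(0.5, 2.0)):
    """`teachers` is ordered from lowest to highest capacity (an assumption);
    easy samples are routed to smaller teachers, hard samples to larger ones."""
    with torch.no_grad():
        all_logits = [t(x) for t in teachers]                   # one forward pass per teacher
        difficulty = F.cross_entropy(all_logits[-1], y, reduction="none")
        bins = torch.bucketize(difficulty, torch.tensor(thresholds, device=x.device))
        bins = bins.clamp(max=len(teachers) - 1)
        stacked = torch.stack(all_logits)                       # (num_teachers, batch, classes)
        idx = torch.arange(x.size(0), device=x.device)
        return stacked[bins, idx]                               # per-sample teacher logits
```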
Electroencephalogram (EEG) has been one of the common neuromonitoring modalities for real-world brain-computer interfaces (BCIs) because of its non-invasiveness, low cost, and high temporal resolution. Recently, light-weight and portable EEG wearable devices based on low-density montages have increased the convenience and usability of BCI applications. However, loss of EEG decoding performance is often inevitable due to the reduced number of electrodes and coverage of scalp regions of a low-density EEG montage. To address this issue, we introduce knowledge distillation (KD), a learning mechanism developed for transferring knowledge/information between neural network models, to enhance the performance of low-density EEG decoding. Our framework includes a newly proposed similarity-keeping (SK) teacher-student KD scheme that encourages a low-density EEG student model to acquire the inter-sample similarity as in a pre-trained teacher model trained on high-density EEG data. The experimental results validate that our SK-KD framework consistently improves motor-imagery EEG decoding accuracy when the number of electrodes decreases for the input EEG data. For both common low-density headphone-like and headband-like montages, our method outperforms state-of-the-art KD methods across various EEG decoding model architectures. As the first KD scheme developed for enhancing EEG decoding, we foresee the proposed SK-KD framework facilitating the practicality of low-density EEG-based BCI in real-world applications.
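A minimal sketch of a similarity-keeping objective in which the student's inter-sample similarity matrix is pushed toward the teacher's; the cosine-similarity Gram matrices and the MSE match are illustrative choices, not necessarily the exact SK formulation.

```python
import torch.nn.functional as F

def similarity_keeping_loss(f_student, f_teacher):
    """f_*: feature batches shaped (batch, ...) from the low-density (student)
    and high-density (teacher) EEG models; an illustrative SK-style loss."""
    s = F.normalize(f_student.flatten(1), dim=1)
    t = F.normalize(f_teacher.flatten(1), dim=1)
    sim_s = s @ s.t()                     # (batch, batch) student inter-sample similarities
    sim_t = t @ t.t()                     # (batch, batch) teacher inter-sample similarities
    return F.mse_loss(sim_s, sim_t)
```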
Knowledge distillation in machine learning is the process of transferring knowledge from a large model, called the teacher, to a smaller model, called the student. Knowledge distillation is one of the techniques for compressing a large network (the teacher) into a smaller network (the student) that can be deployed on small devices such as mobile phones. When the gap in network size between the teacher and the student increases, the performance of the student network degrades. To address this problem, an intermediate model, called a teacher assistant model, is used between the teacher model and the student model, which in turn bridges the gap between the teacher and the student. In this study, we show that by using multiple teacher assistant models, the student model (the smaller model) can be further improved. We combine these multiple teacher assistant models using weighted ensemble learning, where a differential evolution optimization algorithm is used to generate the weight values.
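A minimal sketch of distilling from a weighted ensemble of teacher assistants, assuming the ensemble weights are supplied by an external (e.g., evolutionary) search; the softmax averaging of assistant predictions is illustrative.

```python
import torch
import torch.nn.functional as F

def ensemble_kd_loss(student_logits, assistant_logits_list, w, T=4.0):
    # `w` holds the ensemble weights found by the external search (assumed given).
    w = torch.as_tensor(w, dtype=torch.float32, device=student_logits.device)
    w = w / w.sum()
    probs = torch.stack([F.softmax(l / T, dim=1) for l in assistant_logits_list])
    target = (w.view(-1, 1, 1) * probs).sum(dim=0)              # weighted soft target
    return F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    target, reduction="batchmean") * T * T
```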
The performance of a distillation-based compressed network is governed by the quality of the distillation. The reason for sub-optimal distillation from a large network (the teacher) to a smaller network (the student) is largely attributed to the gap in learning capacity of the given teacher-student pair. While it is hard to distill all of the teacher's knowledge, the quality of distillation can be controlled to a large extent to achieve better performance. Our experiments show that the distillation quality is mainly constrained by the quality of the teacher's responses, which in turn is affected by the presence of similarity information in those responses. A well-trained, high-capacity teacher loses similarity information between classes in the process of learning fine-grained discriminative features. The absence of similarity information causes the distillation process to be reduced from one-example-to-many-class learning to one-example-to-one-class learning, thereby throttling the flow of the teacher's diverse knowledge. Because of the implicit assumption that only what has been learned can be distilled, rather than focusing only on the knowledge distillation process, we scrutinize the process by which the teacher acquires its knowledge. We argue that, for a given teacher-student pair, the quality of distillation can be improved by finding the sweet spot between the batch size and the number of epochs while training the teacher. We discuss the steps for finding this sweet spot for better distillation. We also propose a distillation hypothesis to differentiate the behavior of the distillation process between knowledge distillation and regularization effects. We conduct all our experiments on three different datasets.
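A naive grid sweep illustrating the idea of locating a (batch size, epochs) sweet spot for the teacher by checking how well the resulting student distills; `train_teacher`, `distill_student`, and `evaluate` are hypothetical placeholders, and exhaustive search is only one possible way to find the spot.

```python
def find_sweet_spot(train_teacher, distill_student, evaluate,
                    batch_sizes=(64, 128, 256, 512), epoch_counts=(60, 120, 180)):
    """Exhaustively try teacher (batch size, epochs) pairs and keep the pair
    whose distilled student validates best; all callables are placeholders."""
    best = None
    for bs in batch_sizes:
        for ep in epoch_counts:
            teacher = train_teacher(batch_size=bs, epochs=ep)
            student = distill_student(teacher)
            acc = evaluate(student)
            if best is None or acc > best[0]:
                best = (acc, bs, ep)
    return best   # (student accuracy, teacher batch size, teacher epochs)
```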
The aim of data-free knowledge distillation (DFKD) is to train a lightweight student network from a teacher network without access to the training data. Existing methods mainly follow the paradigm of generating informative samples and progressively updating the student model by targeting data priors, boundary samples, or memory samples. However, previous DFKD methods struggle to dynamically adjust the generation strategy at different training stages, which in turn makes it difficult to achieve efficient and stable training. In this paper, we explore how to teach the student from the perspective of curriculum learning (CL) and propose a new method, CuDFKD, i.e., data-free knowledge distillation with curriculum. It gradually learns from easy samples to hard samples, which is similar to the way humans learn. In addition, we provide a theoretical analysis based on the majorization-minimization (MM) algorithm and explain the convergence of CuDFKD. Experiments on benchmark datasets show that, with a simple curriculum design strategy, CuDFKD achieves the best performance over state-of-the-art DFKD methods across different benchmarks, e.g., 95.28% top-1 accuracy for the ResNet18 model on CIFAR10, which is even better than training from scratch on the data. Training is fast, reaching 90% of the best accuracy within 30 epochs, and the variance during training is stable. The applicability of CuDFKD is also analyzed and discussed in this paper.
Knowledge distillation was originally introduced to leverage additional supervision from a single teacher model for training a student model. To improve the student performance, some recent variants attempt to exploit diverse knowledge sources from multiple teachers. However, existing studies mainly integrate knowledge from multiple sources by averaging the predictions of multiple teachers or combining them with other label-free strategies, which may mislead the student in the presence of low-quality teacher predictions. To address this problem, we propose Confidence-Aware Multi-teacher Knowledge Distillation (CA-MKD), which, with the help of ground-truth labels, adaptively assigns sample-wise reliability to each teacher's prediction, giving large weights to teacher predictions that are close to the one-hot labels. In addition, CA-MKD incorporates intermediate layers to further improve the student performance. Extensive experiments show that our CA-MKD consistently outperforms all state-of-the-art methods across various teacher-student architectures.
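A minimal sketch of confidence-aware weighting of multiple teachers: teachers whose predictions are closer to the one-hot label (lower cross-entropy) receive larger sample-wise weights; the softmax-over-negative-cross-entropy weighting is an illustrative reading of the abstract, not CA-MKD's exact formula.

```python
import torch
import torch.nn.functional as F

def ca_mkd_loss(student_logits, teacher_logits_list, y, T=4.0):
    # Illustrative weighting: teachers with lower cross-entropy w.r.t. the
    # ground truth receive larger per-sample weights.
    ce = torch.stack([F.cross_entropy(t, y, reduction="none")
                      for t in teacher_logits_list])             # (num_teachers, batch)
    weights = F.softmax(-ce, dim=0)
    probs = torch.stack([F.softmax(t / T, dim=1) for t in teacher_logits_list])
    target = (weights.unsqueeze(-1) * probs).sum(dim=0)          # per-sample soft target
    return F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    target, reduction="batchmean") * T * T
```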
Despite the fact that deep neural networks are powerful models and achieve appealing results on many tasks, they are too large to be deployed on edge devices like smartphones or embedded sensor nodes. There have been efforts to compress these networks, and a popular method is knowledge distillation, where a large (teacher) pre-trained network is used to train a smaller (student) network. However, in this paper, we show that the student network performance degrades when the gap between student and teacher is large. Given a fixed student network, one cannot employ an arbitrarily large teacher, or in other words, a teacher can effectively transfer its knowledge to students up to a certain size, not smaller. To alleviate this shortcoming, we introduce multi-step knowledge distillation, which employs an intermediate-sized network (teacher assistant) to bridge the gap between the student and the teacher. Moreover, we study the effect of teacher assistant size and extend the framework to multi-step distillation. Theoretical analysis and extensive experiments on CIFAR-10,100 and ImageNet datasets and on CNN and ResNet architectures substantiate the effectiveness of our proposed approach.
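A minimal sketch of multi-step distillation through teacher assistants: each assistant is distilled from the current larger source with a standard soft-target KD loss, and the student is finally distilled from the last assistant; `distill` stands in for an ordinary KD training loop.

```python
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, y, T=4.0, alpha=0.9):
    """Standard soft-target KD loss with a hard-label term."""
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction="batchmean") * T * T
    return alpha * soft + (1 - alpha) * F.cross_entropy(student_logits, y)

def multi_step_distillation(teacher, assistants, student, distill, loader):
    """`assistants` are intermediate-sized networks ordered large to small;
    `distill(src, dst, loader)` is a placeholder for a KD training loop."""
    src = teacher
    for ta in assistants:
        distill(src, ta, loader)     # each assistant learns from the current source
        src = ta
    distill(src, student, loader)    # the student learns from the last assistant
    return student
```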
In the context of multimodal knowledge distillation research, existing methods mainly focus only on the problem of learning the teacher's final output. Consequently, there remains a deep gap between the teacher network and the student network. It is necessary to force the student network to learn the modality relation information of the teacher network. To effectively exploit the knowledge transferred from teacher to student, a new modality relation distillation paradigm is adopted that models the relation information between different modalities, i.e., it learns the teacher's modality-level Gram matrix.
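A minimal sketch of matching a modality-level Gram matrix: per-modality embeddings are stacked, their pairwise inner products form a Gram matrix, and the student's matrix is pushed toward the teacher's; the normalization and MSE match are illustrative choices.

```python
import torch
import torch.nn.functional as F

def modality_gram(features):
    """`features`: list of per-modality embeddings, each shaped (batch, dim)."""
    z = torch.stack([F.normalize(f, dim=1) for f in features], dim=1)  # (batch, M, dim)
    return z @ z.transpose(1, 2)                                       # (batch, M, M)

def relation_distillation_loss(student_feats, teacher_feats):
    return F.mse_loss(modality_gram(student_feats), modality_gram(teacher_feats))
```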
Knowledge distillation is the process of transferring "knowledge" from a large model (the teacher) to a more compact one (the student), and is often used in the context of model compression. When both models have the same architecture, the process is called self-distillation. Several anecdotes suggest that a self-distilled student can outperform the teacher on held-out data. In this work, we systematically study this phenomenon in a number of settings. We first show that even with a highly accurate teacher, self-distillation allows the student to surpass the teacher in all cases. Second, we revisit existing theoretical explanations of (self-)distillation and identify contradicting examples, revealing possible drawbacks of these explanations. Finally, we provide an alternative explanation for the dynamics of self-distillation through the lens of loss-landscape geometry. We conduct extensive experiments to show that self-distillation leads to flatter minima, thereby resulting in better generalization.
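A minimal sketch of iterated self-distillation, in which each new generation shares the teacher's architecture and is distilled from the previous one; `make_model`, `train`, and `distill` are hypothetical placeholders for the model constructor, supervised training, and a standard KD loop.

```python
def self_distillation(make_model, train, distill, loader, generations=3):
    """Placeholders: `make_model` builds the (shared) architecture, `train` does
    ordinary supervised training, `distill(teacher, student, loader)` runs KD."""
    teacher = make_model()
    train(teacher, loader)                 # generation 0: standard training
    for _ in range(generations):
        student = make_model()             # same architecture as the teacher
        distill(teacher, student, loader)  # soft targets from the previous generation
        teacher = student
    return teacher
```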
AI-powered Medical Imaging has recently achieved enormous attention due to its ability to provide fast-paced healthcare diagnoses. However, it usually suffers from a lack of high-quality datasets due to high annotation cost, inter-observer variability, human annotator error, and errors in computer-generated labels. Deep learning models trained on noisy labelled datasets are sensitive to the noise type and lead to less generalization on the unseen samples. To address this challenge, we propose a Robust Stochastic Knowledge Distillation (RoS-KD) framework which mimics the notion of learning a topic from multiple sources to ensure deterrence in learning noisy information. More specifically, RoS-KD learns a smooth, well-informed, and robust student manifold by distilling knowledge from multiple teachers trained on overlapping subsets of training data. Our extensive experiments on popular medical imaging classification tasks (cardiopulmonary disease and lesion classification) using real-world datasets show the performance benefit of RoS-KD, its ability to distill knowledge from many popular large networks (ResNet-50, DenseNet-121, MobileNet-V2) into a comparatively small network, and its robustness to adversarial attacks (PGD, FGSM). More specifically, RoS-KD achieves >2% and >4% improvement on F1-score for lesion classification and cardiopulmonary disease classification tasks, respectively, when the underlying student is ResNet-18, against a recent competitive knowledge distillation baseline. Additionally, on the cardiopulmonary disease classification task, RoS-KD outperforms most of the SOTA baselines by ~1% gain in AUC score.
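A minimal sketch of distilling from several teachers trained on overlapping data subsets, where one teacher is sampled per mini-batch; this is an illustrative reading of the stochastic multi-teacher idea, not necessarily RoS-KD's exact mechanism.

```python
import random
import torch
import torch.nn.functional as F

def stochastic_kd_loss(student_logits, x, teachers, T=4.0):
    # One of the subset-trained teachers guides each mini-batch (illustrative).
    teacher = random.choice(teachers)
    with torch.no_grad():
        t_logits = teacher(x)
    return F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(t_logits / T, dim=1),
                    reduction="batchmean") * T * T
```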
In addition to standard supervised learning with hard labels, auxiliary losses are commonly used in many supervised learning settings to improve a model's generalization. For example, knowledge distillation adds a second, teacher-mimicking loss to the training of a model, where the teacher may be a pre-trained model that outputs a richer distribution than the labels. Similarly, in settings where labeled data is limited, weak labeling information is used in the form of labeling functions. Here, auxiliary losses are introduced against the labeling functions, which may be noisy, rule-based approximations of the true labels. We address the problem of learning to combine these losses in a principled way. We introduce AMAL, which uses meta-learning on a validation metric to learn instance-specific weights for an optimal mixing of the losses. Experiments in a number of knowledge distillation and rule-denoising domains show that AMAL delivers notable gains over competitive baselines in these domains. We empirically analyze our method and share insights into the mechanisms through which it provides performance gains.
Wearable sensor-based human activity recognition (HAR) has emerged as a principal research area and is utilized in a variety of applications. Recently, deep learning-based methods have achieved significant improvement in the HAR field with the development of human-computer interaction applications. However, they are limited to operating in a local neighborhood in the process of a standard convolutional neural network, and correlations between different sensors on body positions are ignored. In addition, they still face significant challenges from performance degradation due to large gaps between the distributions of training and test data, as well as behavioral differences between subjects. In this work, we propose a novel Transformer-based Adversarial learning framework for human activity recognition using wearable sensors via Self-KnowledgE Distillation (TASKED), which accounts for individual sensor orientations and spatial and temporal features. The proposed method is capable of learning cross-domain embedding feature representations from multiple subjects' datasets using adversarial learning and maximum mean discrepancy (MMD) regularization to align the data distribution over multiple domains. In the proposed method, we adopt teacher-free self-knowledge distillation to improve the stability of the training procedure and the performance of human activity recognition. Experimental results show that TASKED not only outperforms state-of-the-art methods on the four real-world public HAR datasets (alone or combined) but also improves subject generalization effectively.
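A minimal sketch of the maximum mean discrepancy (MMD) penalty mentioned above, computed with a single RBF kernel between feature batches from two domains; the bandwidth and single-kernel choice are illustrative.

```python
import torch

def mmd_rbf(x, y, sigma=1.0):
    """x, y: feature batches (n, dim) and (m, dim) from two domains/subjects."""
    def kernel(a, b):
        return torch.exp(-torch.cdist(a, b) ** 2 / (2 * sigma ** 2))
    return kernel(x, x).mean() + kernel(y, y).mean() - 2 * kernel(x, y).mean()
```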
Knowledge distillation (KD) has been actively studied for image classification tasks in deep learning, aiming to improve the performance of a student based on the knowledge from a teacher. However, applying KD in image regression with a scalar response variable has been rarely studied, and there exists no KD method applicable to both classification and regression tasks yet. Moreover, existing KD methods often require a practitioner to carefully select or adjust the teacher and student architectures, making these methods less flexible in practice. To address the above problems in a unified way, we propose a comprehensive KD framework based on cGANs, termed cGAN-KD. Fundamentally different from existing KD methods, cGAN-KD distills and transfers knowledge from a teacher model to a student model via cGAN-generated samples. This novel mechanism makes cGAN-KD suitable for both classification and regression tasks, compatible with other KD methods, and insensitive to the teacher and student architectures. An error bound for a student model trained in the cGAN-KD framework is derived in this work, providing a theory for why cGAN-KD is effective as well as guiding the practical implementation of cGAN-KD. Extensive experiments on CIFAR-100 and ImageNet-100 show that we can combine state of the art KD methods with the cGAN-KD framework to yield a new state of the art. Moreover, experiments on Steering Angle and UTKFace demonstrate the effectiveness of cGAN-KD in image regression tasks, where existing KD methods are inapplicable.
Often we wish to transfer representational knowledge from one neural network to another. Examples include distilling a large network into a smaller one, transferring knowledge from one sensory modality to a second, or ensembling a collection of models into a single estimator. Knowledge distillation, the standard approach to these problems, minimizes the KL divergence between the probabilistic outputs of a teacher and student network. We demonstrate that this objective ignores important structural knowledge of the teacher network. This motivates an alternative objective by which we train a student to capture significantly more information in the teacher's representation of the data. We formulate this objective as contrastive learning. Experiments demonstrate that our resulting new objective outperforms knowledge distillation and other cutting-edge distillers on a variety of knowledge transfer tasks, including single model compression, ensemble distillation, and cross-modal transfer. Our method sets a new state-of-the-art in many transfer tasks, and sometimes even outperforms the teacher network when combined with knowledge distillation.
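A minimal in-batch sketch of a contrastive objective between student and teacher embeddings, where matching pairs are pulled together and other samples in the batch act as negatives; the original method uses a large memory bank of negatives, so this simplified version is illustrative only.

```python
import torch
import torch.nn.functional as F

def contrastive_distillation_loss(f_student, f_teacher, tau=0.07):
    # In-batch simplification: the diagonal pairs are positives, the rest negatives.
    s = F.normalize(f_student, dim=1)
    t = F.normalize(f_teacher, dim=1)
    logits = s @ t.t() / tau                              # (batch, batch) similarities
    labels = torch.arange(s.size(0), device=s.device)
    return F.cross_entropy(logits, labels)
```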
What can a neural network learn about the visual world from a single image? While a single image obviously cannot contain every possible object, scene, and lighting condition (out of all possible 256^(3x224x224) square images of size 224), it can still provide a strong prior over natural images. To analyze this hypothesis, we formulate a way of training neural networks via knowledge distillation from a supervised pretrained teacher. With this, we find that the answer to the question above is: "surprisingly, a lot". In quantitative terms, we find top-1 accuracies of 94%/74% on CIFAR-10/100, corresponding results on ImageNet, and, by extending this method to audio, 84% on SpeechCommands. In extensive analyses, we disentangle the effects of augmentations, the choice of the source image, and the network architecture, and also discover "panda neurons" in networks that have never seen a panda. This work shows that a single image can be used to extrapolate to thousands of object classes, and motivates a renewed research agenda on the fundamental interplay between augmentations and images.
Wearable sensor-based human action recognition (HAR) has recently achieved remarkable success. However, the accuracy of wearable sensor-based HAR still lags far behind that of systems based on visual modalities (i.e., RGB video, skeleton, and depth). Diverse input modalities can provide complementary cues and thus improve the accuracy of HAR, but how to take advantage of multi-modal data for wearable sensor-based HAR has rarely been explored. Currently, wearable devices (i.e., smartwatches) can only capture limited non-visual modality data. This hinders multi-modal HAR association, since visual and non-visual modality data cannot be used simultaneously. Another major challenge lies in how to efficiently utilize multi-modal data on wearable devices with their limited computational resources. In this work, we propose a novel Progressive Skeleton-to-sensor Knowledge Distillation (PSKD) model, which uses only time-series data, i.e., accelerometer data from a smartwatch, to solve the wearable sensor-based HAR problem. Specifically, we construct multiple teacher models using data from both the teacher (human skeleton sequences) and student (time-series accelerometer data) modalities. In addition, we propose an effective progressive learning scheme to eliminate the performance gap between the teacher and student models. We also design a novel loss function called Adaptive Confidence Semantic (ACS), which allows the student model to adaptively select either one of the teacher models or the ground-truth label it needs to mimic. To demonstrate the effectiveness of the proposed PSKD method, we conduct extensive experiments on the Berkeley-MHAD, UTD-MHAD, and MMAct datasets. The results confirm that the proposed PSKD method achieves competitive performance compared with previous mono-sensor-based HAR methods.
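A minimal sketch of adaptively choosing, per sample, either one teacher's prediction or the one-hot ground truth as the target: here a teacher is selected if it classifies the sample correctly with the highest confidence, and the hard label is used otherwise; this selection rule is an illustrative reading of the abstract, not the exact ACS loss.

```python
import torch
import torch.nn.functional as F

def adaptive_targets(teacher_logits_list, y, num_classes):
    # Illustrative rule: pick the most confident *correct* teacher per sample;
    # fall back to the one-hot ground truth when every teacher is wrong.
    probs = torch.stack([F.softmax(t, dim=1) for t in teacher_logits_list])  # (K, B, C)
    conf, pred = probs.max(dim=2)
    conf = conf * (pred == y).float()              # zero out wrong teachers
    best_conf, best_teacher = conf.max(dim=0)
    idx = torch.arange(y.size(0), device=y.device)
    chosen = probs[best_teacher, idx]              # chosen teacher's distribution
    onehot = F.one_hot(y, num_classes).float()
    use_gt = (best_conf == 0).unsqueeze(1)
    return torch.where(use_gt, onehot, chosen)
```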