Knowledge distillation is a method for transferring information about representations from a teacher to a student by reducing the difference between them. A challenge of this approach is that restricting the flexibility of the student's representations can lead to inaccurate learning of the teacher's knowledge. To address this in BERT transferring, we investigate the distillation of representation structures specified as three types: intra-feature, local inter-feature, and global inter-feature structures. To transfer them, we introduce feature structure distillation methods based on Centered Kernel Alignment, which assigns consistent values to similar feature structures and reveals more informative relations. In particular, a memory-augmented transfer method with clustering is implemented for the global structures. In experiments on the nine language-understanding tasks of the GLUE benchmark, the proposed methods effectively transfer the three types of structures and improve performance compared with state-of-the-art distillation methods. The code for these methods is available at https://github.com/maroo-sky/fsd
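The abstract names Centered Kernel Alignment (CKA) as the similarity index underlying the distillation losses. Below is a minimal sketch of linear CKA between a batch of student and teacher features and the corresponding alignment-maximizing loss; how the paper composes this into its intra-feature, local, and global structure losses (and the memory-augmented variant) is not detailed in the abstract, so the function names and the 1 - CKA objective are assumptions.

```python
import torch

def linear_cka(x, y):
    """Linear Centered Kernel Alignment between two batches of features.

    x: (n, d_s) student features, y: (n, d_t) teacher features.
    Returns a scalar in [0, 1]; 1 means identical feature structure.
    """
    # Center each feature dimension over the batch.
    x = x - x.mean(dim=0, keepdim=True)
    y = y - y.mean(dim=0, keepdim=True)
    # HSIC-style cross- and self-similarity terms.
    cross = torch.norm(y.t() @ x, p="fro") ** 2
    self_x = torch.norm(x.t() @ x, p="fro")
    self_y = torch.norm(y.t() @ y, p="fro")
    return cross / (self_x * self_y + 1e-8)

def cka_distillation_loss(student_feats, teacher_feats):
    # Maximizing alignment is equivalent to minimizing (1 - CKA).
    return 1.0 - linear_cka(student_feats, teacher_feats)
```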
Language model pre-training, such as BERT, has significantly improved the performance of many natural language processing tasks. However, pre-trained language models are usually computationally expensive, so it is difficult to efficiently execute them on resource-restricted devices. To accelerate inference and reduce model size while maintaining accuracy, we first propose a novel Transformer distillation method that is specially designed for knowledge distillation (KD) of Transformer-based models. By leveraging this new KD method, the rich knowledge encoded in a large "teacher" BERT can be effectively transferred to a small "student" TinyBERT. Then, we introduce a new two-stage learning framework for TinyBERT, which performs Transformer distillation at both the pre-training and task-specific learning stages. This framework ensures that TinyBERT can capture the general-domain as well as the task-specific knowledge in BERT. TinyBERT4 with 4 layers is empirically effective and achieves more than 96.8% of the performance of its teacher BERT-Base on the GLUE benchmark, while being 7.5x smaller and 9.4x faster at inference. TinyBERT4 is also significantly better than 4-layer state-of-the-art baselines on BERT distillation, with only ~28% of their parameters and ~31% of their inference time. Moreover, TinyBERT6 with 6 layers performs on par with its teacher BERT-Base.
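For orientation, here is a minimal sketch of layer-to-layer Transformer distillation in the spirit described above: hidden states and attention maps of mapped student/teacher layers are matched with MSE, with a learned linear projection bridging the different hidden sizes. The exact layer mapping, loss weights, and the embedding- and prediction-layer terms used by TinyBERT are not given in this abstract, so the hypothetical dimensions and the assumption that attention shapes match are mine.

```python
import torch.nn as nn
import torch.nn.functional as F

class TransformerLayerDistillLoss(nn.Module):
    """Sketch of one layer-to-layer Transformer distillation term."""

    def __init__(self, student_dim=312, teacher_dim=768):  # hypothetical sizes
        super().__init__()
        # Learned projection mapping student hidden states into the teacher space.
        self.proj = nn.Linear(student_dim, teacher_dim, bias=False)

    def forward(self, s_hidden, t_hidden, s_attn, t_attn):
        # Hidden-state term: MSE(H_S W, H_T), shapes (batch, seq, dim).
        hidden_loss = F.mse_loss(self.proj(s_hidden), t_hidden)
        # Attention term: MSE between attention maps, assuming matching
        # (batch, heads, seq, seq) shapes for student and teacher.
        attn_loss = F.mse_loss(s_attn, t_attn)
        return hidden_loss + attn_loss
```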
Electroencephalogram (EEG) has been one of the common neuromonitoring modalities for real-world brain-computer interfaces (BCIs) because of its non-invasiveness, low cost, and high temporal resolution. Recently, lightweight and portable EEG wearable devices based on low-density montages have increased the convenience and usability of BCI applications. However, loss of EEG decoding performance is often inevitable due to the reduced number of electrodes and limited coverage of scalp regions in a low-density EEG montage. To address this issue, we introduce knowledge distillation (KD), a learning mechanism developed for transferring knowledge/information between neural network models, to enhance the performance of low-density EEG decoding. Our framework includes a newly proposed similarity-keeping (SK) teacher-student KD scheme that encourages a low-density EEG student model to acquire the inter-sample similarity of a pre-trained teacher model trained on high-density EEG data. The experimental results validate that our SK-KD framework consistently improves motor-imagery EEG decoding accuracy when the number of electrodes decreases for the input EEG data. For both common low-density headphone-like and headband-like montages, our method outperforms state-of-the-art KD methods across various EEG decoding model architectures. As the first KD scheme developed for enhancing EEG decoding, we foresee the proposed SK-KD framework facilitating the practicality of low-density EEG-based BCI in real-world applications.
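A minimal sketch of what a similarity-keeping objective can look like: build the batch-wise (inter-sample) cosine-similarity matrix for both the low-density student and the high-density teacher and penalize their difference. The particular similarity measure, normalization, and loss used by SK-KD are not spelled out in the abstract, so the choices below are assumptions.

```python
import torch.nn.functional as F

def similarity_keeping_loss(student_feats, teacher_feats):
    """Inter-sample similarity-keeping loss (sketch).

    student_feats: (n, d_s) features from the low-density EEG student.
    teacher_feats: (n, d_t) features from the high-density EEG teacher.
    The student is pushed to preserve the teacher's sample-to-sample
    similarity structure within the batch.
    """
    s = F.normalize(student_feats, dim=1)
    t = F.normalize(teacher_feats, dim=1)
    sim_s = s @ s.t()   # (n, n) student similarity matrix
    sim_t = t @ t.t()   # (n, n) teacher similarity matrix
    return F.mse_loss(sim_s, sim_t)
```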
Knowledge distillation aims at transferring knowledge acquired in one model (a teacher) to another model (a student) that is typically smaller. Previous approaches can be expressed as a form of training the student to mimic output activations of individual data examples represented by the teacher. We introduce a novel approach, dubbed relational knowledge distillation (RKD), that transfers mutual relations of data examples instead. For concrete realizations of RKD, we propose distance-wise and angle-wise distillation losses that penalize structural differences in relations. Experiments conducted on different tasks show that the proposed method improves the trained student models by a significant margin. In particular for metric learning, it allows students to outperform their teachers' performance, achieving state-of-the-art results on standard benchmark datasets.
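As a concrete illustration, here is a sketch of the distance-wise relational loss described above: pairwise distances within the batch are normalized by their mean, so teacher and student live on the same scale, and then matched with a Huber (smooth L1) penalty. The angle-wise loss, which compares the angles formed by triplets of examples, is omitted for brevity.

```python
import torch
import torch.nn.functional as F

def rkd_distance_loss(student, teacher):
    """Distance-wise relational KD loss (sketch).

    student, teacher: (n, d) embeddings of the same batch of examples.
    """
    def pdist_normalized(x):
        d = torch.cdist(x, x, p=2)      # (n, n) pairwise Euclidean distances
        mean = d[d > 0].mean()          # ignore the zero diagonal
        return d / (mean + 1e-8)

    return F.smooth_l1_loss(pdist_normalized(student),
                            pdist_normalized(teacher))
```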
One of the most efficient methods for model compression is hint distillation, where the student model is injected with information (hints) from several different layers of the teacher model. Although the selection of hint points can drastically alter the compression performance, conventional distillation approaches overlook this fact and use the same hint points as in the early studies. Therefore, we propose a clustering-based hint selection methodology, where the layers of the teacher model are clustered with respect to several metrics and the cluster centers are used as the hint points. Our method is applicable to any student network once it has been applied to a chosen teacher network. The proposed approach is validated on the CIFAR-100 and ImageNet datasets, using various teacher-student pairs and numerous hint distillation methods. Our results show that hint points selected by our algorithm result in superior compression performance compared to state-of-the-art knowledge distillation algorithms on the same student models and datasets.
Contrastive learning has proven suitable for learning sentence embeddings and can significantly improve semantic textual similarity (STS) tasks. Recently, large contrastive learning models such as Sentence-T5 tend to learn more powerful sentence embeddings. Although effective, such large models are difficult to serve online due to computational resource or latency constraints. To address this, knowledge distillation (KD) is usually adopted, which can compress a large "teacher" model into a small "student" model but generally incurs some performance loss. Here, we propose an enhanced KD framework termed Distill-Contrast (DisCo). The proposed DisCo framework first leverages KD to transfer the capability of a large sentence embedding model to a small student model on large unlabeled data, and then fine-tunes the student model with contrastive learning on labeled training data. For the KD process in DisCo, we further propose Contrastive Knowledge Distillation (CKD) to enhance the consistency among teacher model training, KD, and student model fine-tuning, which may improve the performance of prompt learning. Extensive experiments on 7 STS benchmarks show that student models trained with the proposed DisCo and CKD suffer little or even no performance loss and consistently outperform corresponding counterparts of the same parameter size. Surprisingly, our 110M student model can even outperform the latest state-of-the-art (SOTA) model, Sentence-T5 (11B), with only 1% of its parameters.
Most deep metric learning (DML) methods adopt a strategy that forces all positive samples to be close in the embedding space while keeping them far away from negative samples. However, this strategy ignores the internal relationships among positive (negative) samples and often leads to overfitting, especially in the presence of hard samples and mislabeled data. In this work, we propose a simple yet effective regularization, Listwise Self-Distillation (LSD), which progressively distills the model's own knowledge to adapt a more appropriate distance target for each sample pair within a batch. LSD encourages smoother embeddings and more informative mining within positive (negative) samples to mitigate overfitting and thereby improve generalization. Our LSD can be directly integrated into general DML frameworks. Extensive experiments show that LSD consistently boosts the performance of various metric learning methods on multiple datasets.
Large pre-trained language models (PreLMs) are revolutionizing natural language processing across all benchmarks. However, their enormous size is prohibitive for small labs or for deployment on mobile devices. Approaches such as pruning and distillation reduce the model size but typically retain the same model architecture. In contrast, we explore distilling PreLMs into a more efficient architecture, the continual multiplication of words (CMOW), which embeds each word as a matrix and uses matrix multiplication to encode sequences. We extend the CMOW architecture and its CMOW/CBOW-Hybrid variant with a bidirectional component for more expressive power, a one-shot representation for general (task-agnostic) distillation during pre-training, and a two-sequence encoding scheme that facilitates downstream tasks on sentence pairs, such as sentence similarity and natural language inference. Our matrix-based bidirectional CMOW/CBOW-Hybrid model is competitive with DistilBERT on question similarity and recognizing textual entailment, but uses only half as many parameters and is three times faster at inference. We match or exceed the scores of ELMo on all tasks except the sentiment analysis task SST-2 and the linguistic acceptability task CoLA. However, compared to previous cross-architecture distillation approaches, we demonstrate a doubling of the scores on detecting linguistic acceptability. This shows that matrix-based embeddings can be used to distill large PreLMs into competitive models and motivates further research in this direction.
Owing to the ubiquity of heterogeneous graphs in both academia and industry, researchers have recently proposed many heterogeneous graph neural networks (HGNNs). In this paper, instead of pursuing ever more powerful HGNN models, we are interested in designing a versatile plug-and-play module that distills relational knowledge from a pre-trained HGNN. To the best of our knowledge, we are the first to propose a HIgh-order RElational (HIRE) knowledge distillation framework on heterogeneous graphs, which can significantly boost prediction performance regardless of the model architecture of the HGNN. Specifically, our HIRE framework initially performs first-order node-level knowledge distillation, which encodes the semantics of the teacher HGNN through its prediction logits. Meanwhile, second-order relation-level knowledge distillation imitates the relational correlations between node embeddings of different types generated by the teacher HGNN. Extensive experiments on various popular HGNN models and three real-world heterogeneous graphs demonstrate that our method achieves consistent and considerable performance improvements, proving its effectiveness and generalization ability.
In the past few years, Transformer-based pre-trained language models have achieved astounding success in both industry and academia. However, the large model size and high run-time latency are serious impediments to applying them in practice, especially on mobile phones and Internet of Things (IoT) devices. To compress such models, a considerable body of literature on knowledge distillation (KD) has recently grown up. Nevertheless, how KD works in Transformer-based models is still unclear. We tease apart the components of KD and propose a unified KD framework. Through this framework, systematic and extensive experiments spending over 23,000 GPU hours provide a comprehensive analysis from the perspectives of knowledge types, matching strategies, width-depth trade-offs, initialization, model size, and more. Our empirical results shed light on distillation of pre-trained language models and achieve a relatively significant improvement over the previous state of the art (SOTA). Finally, we provide a best-practice guideline for KD of Transformer-based models.
Knowledge distillation was initially introduced to utilize additional supervision from a single teacher model for training the student model. To boost the student's performance, some recent variants attempt to exploit multiple teachers providing diverse knowledge sources. However, existing studies mainly integrate knowledge from multiple sources by averaging the teachers' predictions or combining them with other label-free strategies, which may mislead the student in the presence of low-quality teacher predictions. To address this problem, we propose Confidence-Aware Multi-teacher Knowledge Distillation (CA-MKD), which adaptively assigns sample-wise reliability to each teacher prediction with the help of ground-truth labels, giving large weights to teacher predictions that are close to the one-hot labels. In addition, CA-MKD incorporates intermediate layers to further improve student performance. Extensive experiments show that our CA-MKD consistently outperforms all compared state-of-the-art methods across various teacher-student architectures.
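A sketch of the confidence-aware weighting idea on the logit level: each teacher's per-sample cross-entropy against the ground-truth label determines its weight, so teachers whose predictions are close to the one-hot label dominate the distillation target. The exact weighting formula, temperature, and the intermediate-layer terms of CA-MKD are not given in the abstract; the softmax-over-negative-CE weighting below is an assumption.

```python
import torch
import torch.nn.functional as F

def ca_mkd_logit_loss(student_logits, teacher_logits_list, labels, tau=4.0):
    """Confidence-aware multi-teacher distillation on logits (sketch)."""
    # Per-teacher, per-sample cross-entropy against the ground truth.
    ce = torch.stack([
        F.cross_entropy(t_logits, labels, reduction="none")
        for t_logits in teacher_logits_list
    ])                                      # (num_teachers, batch)
    weights = F.softmax(-ce, dim=0)         # confident teachers get more weight

    log_p_s = F.log_softmax(student_logits / tau, dim=-1)
    loss = 0.0
    for w, t_logits in zip(weights, teacher_logits_list):
        p_t = F.softmax(t_logits / tau, dim=-1)
        kl = F.kl_div(log_p_s, p_t, reduction="none").sum(-1)  # (batch,)
        loss = loss + (w * kl).mean()
    return loss * tau * tau
```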
Prompt tuning, which freezes the pre-trained language model (PLM) and only fine-tunes the parameters of a few additional soft prompts, shows competitive performance against full-parameter fine-tuning (i.e., model tuning) when the PLM has billions of parameters, but still performs poorly with smaller PLMs. Hence, prompt transfer (PoT), which initializes the target prompt with a trained prompt from a similar source task, was recently proposed to improve prompt tuning. However, such a vanilla PoT approach usually achieves sub-optimal performance, because (i) PoT is sensitive to the similarity of the source-target pair, and (ii) directly fine-tuning the prompt initialized with the source prompt on the target task may lead to catastrophic forgetting of the source knowledge. To address these issues, we propose a new metric to accurately predict prompt transferability (regarding (i)), and a novel PoT approach (namely PANDA) that leverages a knowledge distillation technique to transfer the "knowledge" from the source prompt to the target prompt in a subtle manner and effectively alleviates catastrophic forgetting (regarding (ii)). Furthermore, to achieve adaptive prompt transfer for each source-target pair, we use our metric to control the knowledge transfer in the PANDA approach. Extensive and systematic experiments on 189 combinations of 21 source and 9 target datasets across 5 scales of PLMs demonstrate that: 1) our proposed metric predicts prompt transferability well; 2) our PANDA consistently outperforms the vanilla PoT approach by 2.3% average score (up to 24.1%) across all tasks and model sizes; 3) with our PANDA approach, prompt tuning can achieve competitive and even better performance than model tuning in various PLM scale scenarios. Code and models will be released upon acceptance.
Knowledge distillation in machine learning is the process of transferring knowledge from a large model, called the teacher, to a smaller model, called the student. Knowledge distillation is one of the techniques for compressing a large network (the teacher) into a smaller network (the student) that can be deployed on small devices such as mobile phones. When the network size gap between the teacher and the student increases, the performance of the student network degrades. To solve this problem, an intermediate model, known as the teacher-assistant model, is employed between the teacher model and the student model, which in turn bridges the gap between the teacher and the student. In this research, we show that using multiple teacher-assistant models, the student model (the smaller model) can be improved further. We combine these multiple teacher-assistant models using weighted ensemble learning, where a differential evolution optimization algorithm is used to generate the weight values.
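A sketch of distilling from a weighted ensemble of teacher assistants: the assistants' softened predictions are combined with per-assistant weights and the student matches the mixture with a KL loss. The differential-evolution search that produces the weights is not reproduced here; the temperature and the exact loss form are assumptions.

```python
import torch
import torch.nn.functional as F

def multi_ta_distill_loss(student_logits, ta_logits_list, weights, tau=4.0):
    """Distillation from a weighted ensemble of teacher assistants (sketch).

    weights: 1-D tensor with one weight per assistant, summing to 1
    (in the paper these are searched with differential evolution).
    """
    # Weighted average of the assistants' softened predictions.
    ta_probs = torch.stack([F.softmax(l / tau, dim=-1) for l in ta_logits_list])
    ensemble = (weights.view(-1, 1, 1) * ta_probs).sum(dim=0)   # (batch, classes)
    log_p_s = F.log_softmax(student_logits / tau, dim=-1)
    return F.kl_div(log_p_s, ensemble, reduction="batchmean") * tau * tau
```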
Unlike existing knowledge distillation methods, which focus on baseline settings where the teacher models and training strategies are not as strong and competitive as state-of-the-art approaches, this paper presents a method dubbed DIST to distill better from a stronger teacher. We empirically find that the discrepancy of predictions between the student and a stronger teacher tends to be fairly severe. As a result, the exact match of predictions in KL divergence would disturb the training and make existing methods perform poorly. In this paper, we show that simply preserving the relations between the predictions of teacher and student suffices, and propose a correlation-based loss to explicitly capture the intrinsic inter-class relations from the teacher. Besides, considering that different instances have different semantic similarities to each class, we also extend this relational match to the intra-class level. Our method is simple yet practical, and extensive experiments demonstrate that it adapts well to various architectures, model sizes, and training strategies, and can consistently achieve state-of-the-art performance on image classification, object detection, and semantic segmentation tasks. Code is available at: https://github.com/hunto/DIST_KD .
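A sketch of the correlation-based relational match: with centered vectors, cosine similarity equals the Pearson correlation, so the inter-class term correlates student and teacher predictions across classes for each sample, while the intra-class term correlates them across samples for each class. The temperatures and term weighting used by DIST are assumptions here.

```python
import torch.nn.functional as F

def pearson_corr(a, b, dim=-1, eps=1e-8):
    # Cosine similarity of mean-centered vectors == Pearson correlation.
    a = a - a.mean(dim=dim, keepdim=True)
    b = b - b.mean(dim=dim, keepdim=True)
    return F.cosine_similarity(a, b, dim=dim, eps=eps)

def relation_match_loss(student_logits, teacher_logits, tau=1.0):
    """Inter-class + intra-class correlation matching (sketch)."""
    p_s = F.softmax(student_logits / tau, dim=-1)   # (batch, classes)
    p_t = F.softmax(teacher_logits / tau, dim=-1)
    inter = (1.0 - pearson_corr(p_s, p_t, dim=-1)).mean()          # per sample
    intra = (1.0 - pearson_corr(p_s.t(), p_t.t(), dim=-1)).mean()  # per class
    return inter + intra
```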
Large transformer models can highly improve Answer Sentence Selection (AS2) tasks, but their high computational costs prevent their use in many real-world applications. In this paper, we explore the following research question: How can we make AS2 models more accurate without significantly increasing their model complexity? To address the question, we propose a Multiple Heads Student architecture (named CERBERUS), an efficient neural network designed to distill an ensemble of large transformers into a single smaller model. CERBERUS consists of two components: a stack of transformer layers used to encode inputs, and a set of ranking heads; unlike in traditional distillation techniques, each head is trained by distilling a different large transformer architecture in a way that preserves the diversity of the ensemble members. The resulting model captures the knowledge of heterogeneous transformer models using just a few extra parameters. We show the effectiveness of CERBERUS on three English datasets for AS2; our proposed approach outperforms all single-model distillations we consider, rivaling state-of-the-art large AS2 models that have 2.7x more parameters and run 2.5x slower. Code for our model is available at https://github.com/amazon-research/wqa-cerberus
In the context of multimodal knowledge distillation research, existing methods mainly focus on learning only the teacher's final output. Thus, a deep gap remains between the teacher network and the student network. It is necessary to force the student network to learn the modality relation information of the teacher network. To effectively exploit the knowledge transferred from teacher to student, a new modality relation distillation paradigm is adopted, which models the relational information between different modalities, i.e., learning the teacher's modality-level Gram matrix.
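A sketch of the Gram-matrix idea: pool each modality's features to a single vector, form the modality-by-modality similarity (Gram) matrix for teacher and student, and penalize their difference. The pooling and the exact Gram construction used by the paper are not specified in the abstract, so the choices below are assumptions.

```python
import torch
import torch.nn.functional as F

def modality_gram_loss(student_feats, teacher_feats):
    """Modality-relation distillation via Gram matrices (sketch).

    student_feats / teacher_feats: lists with one (batch, dim) feature
    tensor per modality; all modalities of one model are assumed to
    share the same feature dimension.
    """
    def gram(feats):
        # Pool each modality to a single unit-norm vector, then take
        # pairwise similarities between modalities -> (M, M) Gram matrix.
        z = torch.stack([F.normalize(f.mean(dim=0), dim=0) for f in feats])
        return z @ z.t()

    return F.mse_loss(gram(student_feats), gram(teacher_feats))
```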
As a promising approach to model compression, knowledge distillation improves the performance of a compact model by transferring knowledge from a cumbersome one. The knowledge used to supervise student training is important. Previous distillation methods in semantic segmentation strive to extract various forms of knowledge from the features, involving elaborate manual designs that rely on prior information and yield limited performance gains. In this paper, we propose a simple yet effective feature distillation method called normalized feature distillation (NFD), aiming to achieve effective distillation of the original features without manually designing new forms of knowledge. The key idea is to prevent the student from focusing on mimicking the magnitude of the teacher's feature responses by normalization. Our method achieves state-of-the-art distillation results for semantic segmentation on the Cityscapes, VOC 2012, and ADE20K datasets. The code will be made available.
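A sketch of the normalization idea: standardize student and teacher feature maps per channel before the MSE, so only the pattern of responses, not their magnitude, is distilled. The specific normalization scheme and the optional channel-matching projection below are assumptions, not the paper's exact formulation.

```python
import torch.nn.functional as F

def normalized_feature_distill_loss(student_feat, teacher_feat, proj=None):
    """Normalized feature distillation (sketch).

    student_feat, teacher_feat: (batch, channels, H, W) feature maps.
    proj: optional 1x1 conv matching the student's channel count to the
    teacher's (a hypothetical helper, only needed if channels differ).
    """
    if proj is not None:
        student_feat = proj(student_feat)

    def standardize(x, eps=1e-6):
        # Zero mean and unit variance per channel over batch and space.
        mean = x.mean(dim=(0, 2, 3), keepdim=True)
        std = x.std(dim=(0, 2, 3), keepdim=True)
        return (x - mean) / (std + eps)

    return F.mse_loss(standardize(student_feat), standardize(teacher_feat))
```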
Knowledge distillation (KD), best known as an effective method for model compression, aims at transferring the knowledge of a larger network (the teacher) to a much smaller network (the student). Conventional KD methods usually employ a teacher model trained in a supervised manner, where the output labels are treated only as targets. Extending this supervised scheme further, we introduce a new type of teacher model for KD, namely the Oracle Teacher, which utilizes the embeddings of both the source inputs and the output labels to extract more accurate knowledge to be transferred to the student. The proposed model follows the encoder-decoder attention structure of the Transformer network, which allows the model to attend to related information in the output labels. Extensive experiments are conducted on three different sequence learning tasks: speech recognition, scene text recognition, and machine translation. The experimental results empirically show that the proposed model improves the students across these tasks while achieving a considerable speed-up in the teacher model's training time.
Knowledge distillation is often used to transfer knowledge from a strong teacher model to a relatively weak student model. Traditional knowledge distillation methods include response-based methods and feature-based methods. Response-based methods are the most widely used but suffer from a lower upper bound on model performance, while feature-based methods impose constraints on the vocabularies and tokenizers. In this paper, we propose a tokenizer-free method, liberal feature-based distillation (LEAD). LEAD aligns the distributions of the teacher model and the student model, which is effective, extendable, portable, and has no requirements on vocabulary, tokenizer, or model architecture. Extensive experiments show the effectiveness of LEAD on several widely used benchmarks, including MS MARCO Passage, TREC Passage 19, TREC Passage 20, MS MARCO Document, TREC Document 19, and TREC Document 20.
Despite the empirical success of knowledge distillation, there remains a lack of theoretical foundations that naturally lead to computationally inexpensive implementations. To address this issue, we use a recently proposed entropy-like functional to establish an alternative connection between information theory and knowledge distillation. In doing so, we introduce two distinct complementary losses that aim to maximize the correlation and the mutual information between the student's and the teacher's representations. Our approach achieves competitive, state-of-the-art performance on knowledge distillation and cross-model transfer tasks while incurring significantly lower training overhead than closely related and similar approaches. We further demonstrate the effectiveness of our method on a binary distillation task, whereby we shed light on a new state of the art for binary quantization. The code, evaluation protocols, and trained models will be publicly available.