Ensembles of deep neural networks have been shown, both theoretically and empirically, to improve generalization accuracy on unseen test sets. However, high training cost hinders their efficiency, since a sufficient number of base models is needed and each member of the ensemble must be trained separately. Many methods have been proposed to tackle this problem, and most of them rely on the fact that a pre-trained network can transfer its knowledge to the next base model and thereby accelerate training. However, these methods suffer from a serious problem: they all transfer knowledge without selection, which limits diversity. Since the benefit of ensemble learning is more pronounced when the ensemble members are both accurate and diverse, we propose a method named Efficient Diversity-Driven Ensemble (EDDE) to address both the diversity and the efficiency of an ensemble. To accelerate the training process, we propose a novel knowledge-transfer method that selectively transfers the previously learned generic knowledge. To enhance diversity, we first propose a new diversity measure and then use it to define a diversity-driven loss function for optimization. Finally, we adopt a boosting-based framework to combine the above operations, which can further improve diversity. We evaluate EDDE on computer vision (CV) and natural language processing (NLP) tasks. Compared with other well-known ensemble methods, EDDE achieves the highest ensemble accuracy with the lowest training cost, which means it is efficient for ensembling neural networks.
Knowledge distillation in machine learning is the process of transferring knowledge from a large model, called the teacher, to a smaller model, called the student. It is one of the techniques for compressing a large network (the teacher) into a smaller network (the student) that can be deployed on small devices such as mobile phones. When the size gap between the teacher and the student networks increases, the performance of the student network drops. To address this problem, an intermediate model, called the teacher assistant, is employed between the teacher model and the student model, which in turn bridges the gap between the teacher and the student. In this study, we show that the student model (the smaller model) can be further improved by using multiple teacher assistant models. We combine these multiple teacher assistant models using weighted ensemble learning, where a differential evolution optimization algorithm is used to generate the weight values.
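As a rough illustration of the weighting step described above (not the authors' code), the sketch below combines the softmax outputs of several placeholder teacher-assistant models with weights found by SciPy's differential evolution; the objective, the bounds, and the random stand-in predictions are all assumptions.

```python
import numpy as np
from scipy.optimize import differential_evolution

def weighted_ensemble_nll(weights, probs_list, labels):
    """Negative log-likelihood of a weighted average of teacher-assistant predictions."""
    w = np.abs(weights)
    w = w / (w.sum() + 1e-12)                        # normalize to a convex combination
    mixed = sum(wi * p for wi, p in zip(w, probs_list))   # (n_samples, n_classes)
    eps = 1e-12
    return -np.mean(np.log(mixed[np.arange(len(labels)), labels] + eps))

def fit_ta_weights(probs_list, labels, seed=0):
    """Search ensemble weights for the teacher-assistant outputs with differential evolution."""
    bounds = [(0.0, 1.0)] * len(probs_list)
    result = differential_evolution(
        weighted_ensemble_nll, bounds, args=(probs_list, labels),
        seed=seed, maxiter=200, tol=1e-7,
    )
    w = np.abs(result.x)
    return w / (w.sum() + 1e-12)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    labels = rng.integers(0, 10, size=256)
    # Placeholder "teacher assistant" predictions: random softmax outputs.
    probs_list = [rng.dirichlet(np.ones(10), size=256) for _ in range(3)]
    weights = fit_ta_weights(probs_list, labels)
    print("learned TA weights:", np.round(weights, 3))
```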
Ensemble learning serves as a straightforward way to improve the performance of almost any machine learning algorithm. Existing deep ensemble methods usually naively train many different models and then aggregate their predictions. In our view, this is not optimal for two reasons: i) Naively training multiple models adds much more computational burden, especially in the deep learning era; ii) Purely optimizing each base model without considering their interactions limits the diversity of the ensemble and the performance gains. We tackle these issues by proposing deep negative correlation classification (DNCC), in which the accuracy and diversity trade-off is systematically controlled by decomposing the loss function seamlessly into individual accuracy and the correlation between individual models and the ensemble. DNCC yields a deep classification ensemble where the individual estimator is both accurate and negatively correlated. Thanks to the optimized diversities, DNCC works well even when utilizing a shared network backbone, which significantly improves its efficiency when compared with most existing ensemble systems. Extensive experiments on multiple benchmark datasets and network structures demonstrate the superiority of the proposed method.
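To make the accuracy/diversity decomposition concrete, here is a minimal sketch in the spirit of classical negative correlation learning: each head pays a cross-entropy term for accuracy and earns a bonus for deviating from the ensemble mean. The penalty form and the weight `lam` are textbook-NCL assumptions, not the exact DNCC loss.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NegativeCorrelationLoss(nn.Module):
    """Per-head accuracy term plus a classical NCL-style diversity penalty."""

    def __init__(self, lam: float = 0.5):
        super().__init__()
        self.lam = lam

    def forward(self, head_logits, targets):
        probs = [F.softmax(logits, dim=1) for logits in head_logits]
        ens = torch.stack(probs, dim=0).mean(dim=0)       # ensemble prediction
        loss = 0.0
        for logits, p in zip(head_logits, probs):
            ce = F.cross_entropy(logits, targets)          # individual accuracy term
            # Negative-correlation penalty: reward deviation from the ensemble mean.
            penalty = ((p - ens) ** 2).sum(dim=1).mean()
            loss = loss + ce - self.lam * penalty
        return loss / len(head_logits)

if __name__ == "__main__":
    torch.manual_seed(0)
    heads = [torch.randn(8, 10, requires_grad=True) for _ in range(4)]
    y = torch.randint(0, 10, (8,))
    loss = NegativeCorrelationLoss(lam=0.5)(heads, y)
    loss.backward()
    print(float(loss))
```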
Ensemble learning combines several individual models to obtain better generalization performance. Currently, deep learning architectures perform better than shallow or traditional models. Deep ensemble learning models combine the advantages of deep learning models and of ensemble learning, so that the final model has better generalization performance. This paper reviews state-of-the-art deep ensemble models and thus serves as an extensive summary for researchers. Ensemble models are broadly categorized into bagging, boosting, stacking, negative-correlation-based deep ensemble models, explicit/implicit ensembles, homogeneous/heterogeneous ensembles, and decision-fusion-strategy-based deep ensemble models. Applications of deep ensemble models in different domains are also briefly discussed. Finally, we conclude the paper with some potential future research directions.
Ensembling is a popular and effective method for improving machine learning (ML) models. It has proven its value not only in classical ML but also in deep learning. Ensembles improve the quality and trustworthiness of ML solutions and allow uncertainty estimation. However, they come at a price: training an ensemble of deep learning models consumes a large amount of computational resources. Snapshot ensembling collects the models of the ensemble along a single training path. Since it trains only once, its computational time is similar to that of training one model. However, the quality of the models along the training path varies: typically, later models are better if there is no overfitting, so the models have different utility. Our approach improves snapshot ensembling by selecting and weighting ensemble members along the training path. It relies on the training-time likelihood, without looking at validation-sample errors as standard stacking methods do. Experimental evidence on the Fashion-MNIST, CIFAR-10, and CIFAR-100 datasets demonstrates the benefit of the proposed weighted ensemble compared to vanilla ensembling of deep learning models.
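One plausible, deliberately simplified reading of the weighting idea: given snapshot predictions and their training-time log-likelihoods, weight each snapshot by a softmax over those likelihoods instead of by validation error. The temperature and the softmax rule are assumptions, not the paper's exact scheme.

```python
import numpy as np

def weight_snapshots(snapshot_probs, train_log_likelihoods, temperature=1.0):
    """Combine snapshot predictions with weights derived from training-time likelihood.

    snapshot_probs: list of (n_samples, n_classes) softmax outputs, one per snapshot.
    train_log_likelihoods: per-snapshot average training log-likelihood.
    """
    ll = np.asarray(train_log_likelihoods, dtype=np.float64) / temperature
    w = np.exp(ll - ll.max())          # numerically stable softmax weighting
    w = w / w.sum()
    return sum(wi * p for wi, p in zip(w, snapshot_probs)), w

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    snaps = [rng.dirichlet(np.ones(5), size=100) for _ in range(4)]
    lls = [-1.8, -1.2, -0.9, -0.95]    # later snapshots usually fit the training data better
    ensemble, weights = weight_snapshots(snaps, lls)
    print("snapshot weights:", np.round(weights, 3))
```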
Few-shot learning (FSL) aims to generate a classifier using limited labeled examples. Many existing works take the meta-learning approach, constructing a few-shot learner that can learn from few examples to generate a classifier. Typically, the few-shot learner is constructed, or meta-trained, by sampling multiple few-shot tasks in turn and optimizing the few-shot learner's performance in generating classifiers for those tasks. The performance is measured by how well the resulting classifiers classify the test (i.e., query) examples of those tasks. In this paper, we point out two potential weaknesses of this approach. First, the sampled query examples may not provide sufficient supervision for meta-training the few-shot learner. Second, the effectiveness of meta-learning diminishes sharply with increasing numbers of shots. To resolve these issues, we propose a novel meta-training objective for the few-shot learner, which is to encourage the few-shot learner to generate classifiers that perform like strong classifiers. Concretely, we associate each sampled few-shot task with a strong classifier that is trained with ample labeled examples. The strong classifiers can be seen as the target classifiers that we hope the few-shot learner will generate given few-shot examples, and we use the strong classifiers to supervise the few-shot learner. We present an efficient way to construct the strong classifier, making our proposed objective an easily pluggable term for existing meta-learning-based FSL methods. We validate our approach, LastShot, in combination with many representative meta-learning methods. On several benchmark datasets, our approach leads to notable improvements across a variety of tasks. More importantly, with our approach, meta-learning-based FSL methods can outperform non-meta-learning-based methods at different numbers of shots.
Conventional few-shot classification (FSC) aims to recognize samples from novel classes given limited labeled data. Recently, domain-generalization FSC (DG-FSC) has been proposed, with the goal of recognizing novel-class samples from unseen domains. DG-FSC poses a considerable challenge to many models because of the domain shift between the base classes (used in training) and the novel classes (encountered in evaluation). In this work, we make two novel contributions to tackle DG-FSC. Our first contribution is to propose Born-Again Network (BAN) episodic training and to comprehensively investigate its effectiveness for DG-FSC. As a specific form of knowledge distillation, BAN has been shown to improve generalization in conventional supervised classification with a closed-set setup. This improved generalization motivates us to study BAN for DG-FSC, and we show that BAN is promising for addressing the domain shift encountered in DG-FSC. Building on these encouraging findings, our second (main) contribution is to propose Few-Shot BAN (FS-BAN), a novel BAN approach for DG-FSC. Our proposed FS-BAN includes novel multi-task learning objectives: Mutual Regularization, Mismatched Teacher, and Meta-Control Temperature, each of which is specifically designed to overcome the central and unique challenges of DG-FSC, namely overfitting and domain discrepancy. We analyze different design choices for these techniques. We conduct a comprehensive quantitative and qualitative analysis and evaluation using six datasets and three baseline models. The results show that our proposed FS-BAN consistently improves the generalization performance of the baseline models and achieves state-of-the-art accuracy for DG-FSC.
The width of a neural network matters, since increasing the width necessarily increases the model capacity. However, the performance of a network does not improve linearly with width and soon saturates. In this case, we argue that increasing the number of networks (ensembling) can achieve a better accuracy-efficiency trade-off than purely increasing the width. To prove this, one large network is divided into several small networks in terms of its parameters and regularization components, each of these small networks having a fraction of the original parameters. We then train these small networks together and make them see various views of the same data to increase their diversity. During this co-training process, the networks can also learn from each other. As a result, the small networks can achieve better ensemble performance than the large one with few or no extra parameters or FLOPs, i.e., a better accuracy-efficiency trade-off. When run concurrently, the small networks can also achieve faster inference than the large one. All of the above shows that the number of networks is a new dimension of model scaling. We validate our argument with extensive experiments on common benchmarks using 8 different neural architectures. The code is available at \url{https://github.com/freeformrobotics/divide-and-co-training}.
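To sketch the co-training step in code, the loss below follows a deep-mutual-learning style: each small network is trained with its own cross-entropy term plus a KL term toward the softened predictions of its peers. The splitting of the large model into small ones is omitted, and the loss form is an assumption; the repository linked above contains the actual implementation.

```python
import torch
import torch.nn.functional as F

def co_training_loss(logits_list, targets, kl_weight=1.0, temperature=1.0):
    """Cross-entropy for each small network plus pairwise mutual-learning KL terms."""
    total = 0.0
    n = len(logits_list)
    for i, logits in enumerate(logits_list):
        total = total + F.cross_entropy(logits, targets)
        for j, peer in enumerate(logits_list):
            if i == j:
                continue
            # Each network also matches the (detached) softened prediction of its peers.
            p_peer = F.softmax(peer.detach() / temperature, dim=1)
            log_p_self = F.log_softmax(logits / temperature, dim=1)
            total = total + kl_weight * F.kl_div(log_p_self, p_peer,
                                                 reduction="batchmean") / (n - 1)
    return total / n

if __name__ == "__main__":
    torch.manual_seed(0)
    y = torch.randint(0, 10, (16,))
    logits = [torch.randn(16, 10, requires_grad=True) for _ in range(3)]
    loss = co_training_loss(logits, y)
    loss.backward()
    print(float(loss))
```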
We introduce a novel technique for knowledge transfer, where knowledge from a pretrained deep neural network (DNN) is distilled and transferred to another DNN. As the DNN maps from the input space to the output space through many layers sequentially, we define the distilled knowledge to be transferred in terms of flow between layers, which is calculated by computing the inner product between features from two layers. When we compare the student DNN and the original network with the same size as the student DNN but trained without a teacher network, the proposed method of transferring the distilled knowledge as the flow between two layers exhibits three important phenomena: (1) the student DNN that learns the distilled knowledge is optimized much faster than the original model; (2) the student DNN outperforms the original DNN; and (3) the student DNN can learn the distilled knowledge from a teacher DNN that is trained at a different task, and the student DNN outperforms the original DNN that is trained from scratch.
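A minimal sketch of the "flow between layers" described above: for two feature maps of equal spatial size, the flow is their channel-wise inner-product matrix, and the student is trained to match the teacher's matrices with a simple L2 loss. The shapes, the layer pairing, and the plain MSE objective are simplifying assumptions.

```python
import torch
import torch.nn.functional as F

def fsp_matrix(feat_a: torch.Tensor, feat_b: torch.Tensor) -> torch.Tensor:
    """Inner product between the channels of two feature maps of equal spatial size.

    feat_a: (B, C1, H, W), feat_b: (B, C2, H, W) -> (B, C1, C2)
    """
    b, c1, h, w = feat_a.shape
    c2 = feat_b.shape[1]
    a = feat_a.reshape(b, c1, h * w)
    bb = feat_b.reshape(b, c2, h * w)
    return torch.bmm(a, bb.transpose(1, 2)) / (h * w)

def fsp_loss(teacher_feats, student_feats):
    """Mean-squared error between teacher and student flow matrices over consecutive layer pairs."""
    loss = 0.0
    pairs = list(zip(teacher_feats[:-1], teacher_feats[1:],
                     student_feats[:-1], student_feats[1:]))
    for t_a, t_b, s_a, s_b in pairs:
        loss = loss + F.mse_loss(fsp_matrix(s_a, s_b), fsp_matrix(t_a, t_b).detach())
    return loss / len(pairs)

if __name__ == "__main__":
    torch.manual_seed(0)
    teacher = [torch.randn(2, 16, 8, 8), torch.randn(2, 32, 8, 8)]
    student = [torch.randn(2, 16, 8, 8, requires_grad=True),
               torch.randn(2, 32, 8, 8, requires_grad=True)]
    loss = fsp_loss(teacher, student)
    loss.backward()
    print(float(loss))
```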
Despite the fact that deep neural networks are powerful models and achieve appealing results on many tasks, they are too large to be deployed on edge devices like smartphones or embedded sensor nodes. There have been efforts to compress these networks, and a popular method is knowledge distillation, where a large (teacher) pre-trained network is used to train a smaller (student) network. However, in this paper, we show that the student network performance degrades when the gap between student and teacher is large. Given a fixed student network, one cannot employ an arbitrarily large teacher, or in other words, a teacher can effectively transfer its knowledge to students up to a certain size, not smaller. To alleviate this shortcoming, we introduce multi-step knowledge distillation, which employs an intermediate-sized network (teacher assistant) to bridge the gap between the student and the teacher. Moreover, we study the effect of teacher assistant size and extend the framework to multi-step distillation. Theoretical analysis and extensive experiments on CIFAR-10,100 and ImageNet datasets and on CNN and ResNet architectures substantiate the effectiveness of our proposed approach.
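The multi-step idea reduces to running an ordinary distillation step along a chain of shrinking networks (teacher → assistant → ... → student). The sketch below is a hedged illustration with the standard soft-target KD loss; the optimizer, temperature, mixing weight, and toy models are placeholders, not the paper's setup.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import DataLoader, TensorDataset

def kd_step_loss(student_logits, teacher_logits, targets, T=4.0, alpha=0.9):
    """One distillation step: softened KL against the current teacher plus hard-label CE."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits.detach() / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, targets)
    return alpha * soft + (1.0 - alpha) * hard

def distill_chain(models, loader, epochs=1, lr=1e-3, T=4.0, alpha=0.9):
    """Distill sequentially: models[0] teaches models[1], which teaches models[2], and so on."""
    for teacher, student in zip(models[:-1], models[1:]):
        teacher.eval()
        opt = torch.optim.Adam(student.parameters(), lr=lr)
        for _ in range(epochs):
            for x, y in loader:
                with torch.no_grad():
                    t_logits = teacher(x)
                loss = kd_step_loss(student(x), t_logits, y, T=T, alpha=alpha)
                opt.zero_grad()
                loss.backward()
                opt.step()
    return models[-1]

if __name__ == "__main__":
    torch.manual_seed(0)
    data = TensorDataset(torch.randn(64, 20), torch.randint(0, 5, (64,)))
    loader = DataLoader(data, batch_size=16)
    # Teacher, teacher assistant, and student: same interface, decreasing hidden width.
    make = lambda width: nn.Sequential(nn.Linear(20, width), nn.ReLU(), nn.Linear(width, 5))
    chain = [make(64), make(32), make(16)]
    distill_chain(chain, loader, epochs=1)
    print("chain distilled: teacher -> assistant -> student")
```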
Transferring a deep neural network trained on one problem to another requires only a small amount of data and little additional computation time. For deep learning models, an ensemble is usually superior to a single model. However, transferring an ensemble of deep neural networks incurs relatively high computational expense, and the risk of overfitting also increases. Our approach to transfer learning of an ensemble consists of two steps: (a) shifting the weights of the encoders of all models in the ensemble by a single shift vector, and (b) a tiny fine-tuning of each individual model afterwards. This strategy speeds up the training process and makes it possible to add models to an ensemble with a greatly reduced training time by reusing the shift vector. We compare different strategies in terms of computation time, ensemble accuracy, uncertainty estimation, and disagreement, and conclude that our approach gives competitive results at the same computational complexity as traditional methods. It also yields higher diversity among the ensemble models.
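One plausible reading of step (a), sketched below: derive a single shift vector as the parameter difference obtained by adapting one member's encoder to the new task, then add that same shift to every other member's encoder before its brief individual fine-tuning. The choice of reference member and the toy encoders are assumptions, not the authors' procedure.

```python
import copy
import torch
import torch.nn as nn

def encoder_shift(adapted_encoder: nn.Module, source_encoder: nn.Module) -> dict:
    """Per-parameter shift between an adapted encoder and its source (pre-trained) weights."""
    src = dict(source_encoder.named_parameters())
    return {name: p.detach() - src[name].detach()
            for name, p in adapted_encoder.named_parameters()}

def apply_shift(encoder: nn.Module, shift: dict) -> nn.Module:
    """Return a copy of `encoder` with the shift added to every parameter."""
    shifted = copy.deepcopy(encoder)
    with torch.no_grad():
        for name, p in shifted.named_parameters():
            p.add_(shift[name])
    return shifted

if __name__ == "__main__":
    torch.manual_seed(0)
    source = nn.Sequential(nn.Linear(8, 8), nn.ReLU(), nn.Linear(8, 4))
    adapted = copy.deepcopy(source)
    with torch.no_grad():                       # stand-in for adapting one member to the new task
        for p in adapted.parameters():
            p.add_(0.01 * torch.randn_like(p))
    shift = encoder_shift(adapted, source)
    ensemble = [copy.deepcopy(source) for _ in range(3)]   # other pre-trained members (placeholder)
    shifted_members = [apply_shift(m, shift) for m in ensemble]
    print(len(shifted_members), "members shifted; each would now get a brief fine-tune.")
```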
While depth tends to improve network performances, it also makes gradient-based training more difficult since deeper networks tend to be more non-linear. The recently proposed knowledge distillation approach is aimed at obtaining small and fast-to-execute models, and it has shown that a student network could imitate the soft output of a larger teacher network or ensemble of networks. In this paper, we extend this idea to allow the training of a student that is deeper and thinner than the teacher, using not only the outputs but also the intermediate representations learned by the teacher as hints to improve the training process and final performance of the student. Because the student intermediate hidden layer will generally be smaller than the teacher's intermediate hidden layer, additional parameters are introduced to map the student hidden layer to the prediction of the teacher hidden layer. This allows one to train deeper students that can generalize better or run faster, a trade-off that is controlled by the chosen student capacity. For example, on CIFAR-10, a deep student network with almost 10.4 times fewer parameters outperforms a larger, state-of-the-art teacher network.
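A minimal sketch of the hint-based stage described above: a small regressor (here a 1x1 convolution, an assumed choice) maps the thinner student's intermediate features to the width of the teacher's hint layer, and an L2 loss aligns the two.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HintRegressor(nn.Module):
    """1x1 convolution mapping student guided-layer features to the teacher hint-layer width."""

    def __init__(self, student_channels: int, teacher_channels: int):
        super().__init__()
        self.proj = nn.Conv2d(student_channels, teacher_channels, kernel_size=1)

    def forward(self, student_feat: torch.Tensor) -> torch.Tensor:
        return self.proj(student_feat)

def hint_loss(student_feat, teacher_feat, regressor: HintRegressor) -> torch.Tensor:
    """L2 distance between the regressed student features and the (frozen) teacher hint."""
    return F.mse_loss(regressor(student_feat), teacher_feat.detach())

if __name__ == "__main__":
    torch.manual_seed(0)
    student_feat = torch.randn(4, 32, 16, 16, requires_grad=True)   # thinner student layer
    teacher_feat = torch.randn(4, 64, 16, 16)                       # wider teacher hint layer
    reg = HintRegressor(32, 64)
    loss = hint_loss(student_feat, teacher_feat, reg)
    loss.backward()
    print(float(loss))
```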
Data-free knowledge distillation (DFKD) aims to train a lightweight student network from a teacher network without any training data. Existing methods mainly follow the paradigm of generating informative samples and progressively updating the student model by targeting data priors, boundary samples, or memory samples. However, previous DFKD methods have difficulty dynamically adjusting the generation strategy at different training stages, which in turn makes efficient and stable training hard to achieve. In this paper, we explore how to teach the student from the perspective of curriculum learning (CL) and propose a new method, CUDFKD, i.e., data-free knowledge distillation with a curriculum. It gradually learns from easy samples to hard samples, which is similar to the way humans learn. In addition, we provide a theoretical analysis based on the majorization-minimization (MM) algorithm and explain the convergence of CUDFKD. Experiments on benchmark datasets show that, with a simple curriculum design strategy, CUDFKD achieves the best performance against state-of-the-art DFKD methods across different benchmarks, e.g., 95.28% top-1 accuracy for a ResNet18 model on CIFAR10, which is even better than training from scratch with the data. Training is fast, reaching 90% of the highest accuracy within 30 epochs, and the variance during training is stable. The applicability of CUDFKD is also analyzed and discussed in this paper.
Neural networks often make predictions that rely on spurious correlations in the dataset rather than on the intrinsic properties of the task of interest, and they degrade sharply on out-of-distribution (OOD) test data. Existing de-bias learning frameworks try to capture specific dataset biases through bias annotations, but they fail to handle complex OOD scenarios. Others implicitly identify dataset biases with a low-capability bias model or a tailored loss, but they degrade when the training and test data come from the same distribution. In this paper, we propose a General Greedy De-bias learning framework (GGD), which greedily trains the bias models and the base model, analogous to gradient descent in functional space. It encourages the base model to focus on examples that are hard for the bias models to solve, so that it remains robust against spurious correlations at test time. GGD largely improves models' generalization ability on various tasks, but it sometimes over-estimates the bias level and degrades on in-distribution tests. We further re-analyze the ensemble process of GGD and introduce curriculum regularization, inspired by curriculum learning, which achieves a good trade-off between in-distribution and out-of-distribution performance. Extensive experiments on image classification, adversarial question answering, and visual question answering demonstrate the effectiveness of our method. GGD can learn a more robust base model both in the setting of a task-specific bias model with prior knowledge and in that of a self-ensemble bias model without prior knowledge.
Meta-learning has been proposed as a framework to address the challenging few-shot learning setting. The key idea is to leverage a large number of similar few-shot tasks in order to learn how to adapt a base-learner to a new task for which only a few labeled samples are available. As deep neural networks (DNNs) tend to overfit using a few samples only, meta-learning typically uses shallow neural networks (SNNs), thus limiting its effectiveness. In this paper we propose a novel few-shot learning method called meta-transfer learning (MTL) which learns to adapt a deep NN for few shot learning tasks. Specifically, meta refers to training multiple tasks, and transfer is achieved by learning scaling and shifting functions of DNN weights for each task. In addition, we introduce the hard task (HT) meta-batch scheme as an effective learning curriculum for MTL. We conduct experiments using (5-class, 1-shot) and (5-class, 5-shot) recognition tasks on two challenging few-shot learning benchmarks: miniImageNet and Fewshot-CIFAR100. Extensive comparisons to related works validate that our meta-transfer learning approach trained with the proposed HT meta-batch scheme achieves top performance. An ablation study also shows that both components contribute to fast convergence and high accuracy. [Fragment of the hard-task selection procedure carried over from the paper: optimize θ by Eq. 3; optimize Φ_S{1,2} and θ by Eq. 4 and Eq. 5; while not done, sample class-k in T(te) and compute Acc_k for T(te); return class-m with the lowest accuracy Acc_m.]
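To illustrate the scaling-and-shifting transfer described in the abstract, the module below freezes a pre-trained convolution and learns only a per-channel scale on its weights and a shift on its bias; the granularity of the learned parameters and the toy layer are assumptions consistent with the description, not the authors' code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ScaleShiftConv2d(nn.Module):
    """Frozen convolution whose weights are modulated by learned scaling and shifting."""

    def __init__(self, frozen_conv: nn.Conv2d):
        super().__init__()
        self.conv = frozen_conv
        for p in self.conv.parameters():
            p.requires_grad_(False)                 # the pre-trained weights stay fixed
        out_ch = self.conv.out_channels
        self.scale = nn.Parameter(torch.ones(out_ch, 1, 1, 1))   # learned per-channel scaling
        self.shift = nn.Parameter(torch.zeros(out_ch))           # learned bias shift

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        weight = self.conv.weight * self.scale
        bias = self.shift if self.conv.bias is None else self.conv.bias + self.shift
        return F.conv2d(x, weight, bias,
                        stride=self.conv.stride, padding=self.conv.padding,
                        dilation=self.conv.dilation, groups=self.conv.groups)

if __name__ == "__main__":
    torch.manual_seed(0)
    base = nn.Conv2d(3, 8, kernel_size=3, padding=1)
    layer = ScaleShiftConv2d(base)
    y = layer(torch.randn(2, 3, 16, 16))
    trainable = [n for n, p in layer.named_parameters() if p.requires_grad]
    print(y.shape, trainable)                       # only the scale and shift are trainable
```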
Deep learning technologies have demonstrated remarkable effectiveness in a wide range of tasks, and deep learning holds the potential to advance a multitude of applications, including in edge computing, where deep models are deployed on edge devices to enable instant data processing and response. A key challenge is that while applying deep models often incurs substantial memory and computational costs, edge devices typically offer only very limited storage and computational capabilities, which may vary substantially across devices. These characteristics make it difficult to build deep learning solutions that unleash the potential of edge devices while complying with their constraints. A promising approach to addressing this challenge is to automate the design of effective deep learning models that are lightweight, require little storage, and incur only low computational overhead. This survey provides comprehensive coverage of techniques for automating the design of deep learning models for edge computing. It offers an overview and comparison of the key metrics commonly used to quantify models in terms of effectiveness, lightness, and computational cost. The survey then covers three state-of-the-art categories of deep design automation techniques: automated neural architecture search, automated model compression, and joint automated design and compression. Finally, the survey covers open issues and directions for future research.
Positive-Unlabeled (PU) learning aims to learn a model from rare positive samples and abundant unlabeled samples. Compared with classical binary classification, PU learning is much more challenging due to the many incompletely annotated data instances: since only a subset of the most confident positive samples is available and the evidence is insufficient to categorize the remaining samples, many of the unlabeled data may also be positive. Research on this topic is particularly useful and essential for many real-world tasks with very expensive labeling costs. For example, recognition tasks in disease diagnosis, recommendation systems, and satellite image recognition may have only a few positive samples that can be annotated by experts. Existing methods largely ignore the intrinsic hardness of some unlabeled data, which can result in sub-optimal performance as a consequence of fitting the easy noisy data and not sufficiently utilizing the hard data. In this paper, we focus on improving the commonly used nnPU with a novel training pipeline. We highlight the intrinsic difference in hardness among samples in the dataset and the proper learning strategies for easy and hard data. Accordingly, we propose first splitting the unlabeled dataset with an early-stop strategy: the samples that have inconsistent predictions between the temporary and the base model are considered hard samples. The model then uses a noise-tolerant Jensen-Shannon divergence loss for the easy data, and a dual-source consistency regularization for the hard data, which includes a cross-consistency between the student and the base model for low-level features and a self-consistency for high-level features and predictions.
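As a hedged sketch of the noise-tolerant loss used for the easy split, the function below computes a Jensen-Shannon divergence between the model's predictions and slightly smoothed one-hot labels; JS is bounded, which is the usual argument for its tolerance to label noise. The smoothing constant and the binary toy setup are assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def js_divergence_loss(logits: torch.Tensor, targets: torch.Tensor, num_classes: int,
                       eps: float = 1e-8) -> torch.Tensor:
    """Jensen-Shannon divergence between predictions and (possibly noisy) one-hot labels."""
    p = F.softmax(logits, dim=1)
    q = F.one_hot(targets, num_classes).float()
    q = q * (1 - eps) + eps / num_classes           # smooth the one-hot target slightly
    m = 0.5 * (p + q)
    kl_pm = (p * (torch.log(p + eps) - torch.log(m + eps))).sum(dim=1)
    kl_qm = (q * (torch.log(q + eps) - torch.log(m + eps))).sum(dim=1)
    return (0.5 * kl_pm + 0.5 * kl_qm).mean()

if __name__ == "__main__":
    torch.manual_seed(0)
    logits = torch.randn(8, 2, requires_grad=True)   # binary PU setting: positive vs. negative
    labels = torch.randint(0, 2, (8,))
    loss = js_divergence_loss(logits, labels, num_classes=2)
    loss.backward()
    print(float(loss))
```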
Hybrid ensembles, a fundamental branch of ensembles, flourish in many machine learning problems, especially regression. Several studies have confirmed the importance of diversity. However, previous ensembles only consider diversity in the sub-model training stage, yielding limited improvement over single models. In contrast, this study selects and weights sub-models from a heterogeneous model pool. It solves the resulting optimization problem with an interior-point filter line-search algorithm. This optimization problem innovatively incorporates negative correlation learning as a penalty term, with which a diverse subset of models can be selected. The experimental results reveal several meaningful points. The model pool should be constructed from different classes of models, with each class contributing sub-models fitted over all possible parameter sets. The best sub-model of each class is selected to construct the NCL-based ensemble, which is far better than the average of the sub-models. Furthermore, compared with classical constant and non-constant weighting methods, the NCL-based ensemble has a significant advantage on several prediction metrics. In practice, it is difficult to determine in advance which sub-model is best for a dataset because of model uncertainty; nevertheless, our method achieves accuracy comparable to the potentially best sub-model in terms of RMSE. In conclusion, the value of this study lies in its ease of use and effectiveness, allowing hybrid ensembles to embrace both diversity and accuracy.
Knowledge distillation has achieved remarkable results in model compression. However, most existing methods require the original training data, which is often unavailable in practice due to privacy, security, and transmission restrictions. To address this problem, we propose a conditional generative data-free knowledge distillation (CGDD) framework for training efficient portable networks without any real data. In this framework, besides using the knowledge extracted from the teacher model, we introduce preset labels as additional auxiliary information to train the generator. The trained generator can then produce meaningful training samples of specified categories as required. To facilitate the distillation process, besides the conventional distillation loss, we treat the preset label as a ground-truth label so that the student network is directly supervised by the category of the synthetic training samples. Moreover, we force the student network to mimic the attention maps of the teacher model, which further improves its performance. To verify the superiority of our method, we design a new evaluation metric, called relative accuracy, which can directly compare the effectiveness of different distillation methods. The portable networks trained with the proposed data-free distillation method obtain 99.63%, 99.07%, and 99.84% relative accuracy on CIFAR10, CIFAR100, and Caltech101, respectively. The experimental results demonstrate the superiority of the proposed method.
Knowledge Distillation (KD) consists of transferring "knowledge" from one machine learning model (the teacher) to another (the student). Commonly, the teacher is a high-capacity model with formidable performance, while the student is more compact. By transferring knowledge, one hopes to benefit from the student's compactness, without sacrificing too much performance. We study KD from a new perspective: rather than compressing models, we train students parameterized identically to their teachers. Surprisingly, these Born-Again Networks (BANs) outperform their teachers significantly, both on computer vision and language modeling tasks. Our experiments with BANs based on DenseNets demonstrate state-of-the-art performance on the CIFAR-10 (3.5%) and CIFAR-100 (15.5%) datasets, by validation error. Additional experiments explore two distillation objectives: (i) Confidence-Weighted by Teacher Max (CWTM) and (ii) Dark Knowledge with Permuted Predictions (DKPP). Both methods elucidate the essential components of KD, demonstrating the effect of the teacher outputs on both predicted and nonpredicted classes.
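A hedged sketch of the born-again sequence: each generation shares the teacher's architecture and is trained against the previous generation's soft predictions together with the ground-truth labels. A generic soft-target objective stands in here for the paper's CWTM/DKPP variants, and the toy models, temperature, and weighting are placeholders.

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import DataLoader, TensorDataset

def born_again_sequence(teacher, loader, generations=2, epochs=1, lr=1e-3, T=2.0, alpha=0.5):
    """Train a chain of identically parameterized students, each distilled from the previous one."""
    trained = [teacher]
    for _ in range(generations):
        # Same architecture as the current teacher; the paper trains each generation anew,
        # while this sketch simply copies the teacher's weights as a warm start for brevity.
        student = copy.deepcopy(teacher)
        opt = torch.optim.Adam(student.parameters(), lr=lr)
        teacher.eval()
        for _ in range(epochs):
            for x, y in loader:
                with torch.no_grad():
                    t_logits = teacher(x)
                s_logits = student(x)
                soft = F.kl_div(F.log_softmax(s_logits / T, dim=1),
                                F.softmax(t_logits / T, dim=1),
                                reduction="batchmean") * (T * T)
                hard = F.cross_entropy(s_logits, y)
                loss = alpha * soft + (1 - alpha) * hard
                opt.zero_grad()
                loss.backward()
                opt.step()
        trained.append(student)
        teacher = student          # the newly trained student teaches the next generation
    return trained

if __name__ == "__main__":
    torch.manual_seed(0)
    data = TensorDataset(torch.randn(64, 16), torch.randint(0, 4, (64,)))
    loader = DataLoader(data, batch_size=16)
    models = born_again_sequence(nn.Linear(16, 4), loader, generations=2)
    print(f"trained {len(models) - 1} born-again generations")
```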