We propose a simple scheme for merging two neural networks, trained from different starting initializations, into a single network of the same size. We do this by carefully selecting channels from each input network. Our procedure can be used as a finalization step after trying several starting seeds to avoid an unlucky one. We also show that training two networks and then merging them leads to better performance than training a single network for an extended amount of time. Availability: https://github.com/fmfi-compbio/neural-network-merging
Knowledge distillation transfers knowledge from a cumbersome teacher to a small student. Recent results suggest that a student-friendly teacher is better suited for distillation, since it provides more transferable knowledge. In this work, we propose the novel framework "prune, then distill", which first prunes the model so that it becomes more transferable and then distills it into the student. We provide several exploratory examples in which the pruned teacher teaches better than the original unpruned network. We further show theoretically that the pruned teacher plays the role of a regularizer in distillation, which reduces the generalization error. Based on this result, we propose a novel neural network compression scheme in which the student network is formed based on the pruned teacher and then trained with the "prune, then distill" strategy. The code is available at https://github.com/ososos8888/prune-then-distill
Structural pruning of neural network parameters reduces computation, energy, and memory transfer costs during inference. We propose a novel method that estimates the contribution of a neuron (filter) to the final loss and iteratively removes those with smaller scores. We describe two variations of our method using the first and second-order Taylor expansions to approximate a filter's contribution. Both methods scale consistently across any network layer without requiring per-layer sensitivity analysis and can be applied to any kind of layer, including skip connections. For modern networks trained on ImageNet, we measured experimentally a high (>93%) correlation between the contribution computed by our methods and a reliable estimate of the true importance. Pruning with the proposed methods leads to an improvement over state-of-the-art in terms of accuracy, FLOPs, and parameter reduction. On ResNet-101, we achieve a 40% FLOPs reduction by removing 30% of the parameters, with a loss of 0.02% in the top-1 accuracy on ImageNet. Code is available at https://github.com/NVlabs/Taylor_pruning.
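The first-order variant can be sketched in a few lines: after a backward pass, the importance of a filter is the squared sum, over that filter's weights, of weight times gradient. The PyTorch snippet below is a rough illustration of that scoring step under simplifying assumptions (per-filter grouping on convolution weights only), not the authors' exact gate-based implementation.

```python
import torch
import torch.nn as nn

# Toy model and batch, only to make the scoring step self-contained.
model = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                      nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                      nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 10))
x, y = torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,))
nn.functional.cross_entropy(model(x), y).backward()

# First-order Taylor score per filter: (sum over the filter's weights of w * grad)^2.
scores = {name: (m.weight * m.weight.grad).sum(dim=(1, 2, 3)).pow(2)
          for name, m in model.named_modules() if isinstance(m, nn.Conv2d)}

# Filters with the smallest scores are the candidates removed at each iteration.
for name, s in scores.items():
    print(name, s.argsort()[:4].tolist())
```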
Figure 1. An illustration of standard knowledge distillation. Despite widespread use, an understanding of when the student can learn from the teacher is missing.
Current deep neural networks (DNNs) are over-parameterized and use most of their neuronal connections during inference for every task. The human brain, in contrast, developed specialized regions for different tasks and performs inference with a small fraction of its neuronal connections. We propose an iterative pruning strategy that introduces a simple importance-score metric to deactivate unimportant connections, tackling over-parameterization in DNNs and modulating the firing patterns. The aim is to find the smallest number of connections that is still able to solve a given task with comparable accuracy, i.e., a simpler subnetwork. We achieve comparable performance for LeNet architectures on MNIST and significantly higher parameter compression than state-of-the-art algorithms for VGG and ResNet architectures on CIFAR-10/100 and Tiny-ImageNet. Our approach also performs well for the two different optimizers considered, Adam and SGD. The algorithm is not designed to minimize FLOPs when taking current hardware and software implementations into account, although it performs reasonably well compared to the state of the art.
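The abstract does not spell out the importance score itself, so the loop below uses plain weight magnitude as a stand-in; it only illustrates the iterative prune-then-retrain structure, not the paper's actual metric.

```python
import torch
import torch.nn as nn

def prune_step(model, fraction=0.2):
    """Deactivate the `fraction` least-important connections in each linear layer.
    Weight magnitude is used here as a placeholder importance score."""
    with torch.no_grad():
        for m in model.modules():
            if isinstance(m, nn.Linear):
                score = m.weight.abs()
                k = max(1, int(fraction * score.numel()))
                thresh = score.flatten().kthvalue(k).values
                m.weight *= (score > thresh).float()   # zero out pruned connections

model = nn.Sequential(nn.Linear(784, 300), nn.ReLU(), nn.Linear(300, 10))
for _ in range(3):   # iterative pruning: prune, then retrain, then prune again
    prune_step(model, fraction=0.2)
    # ... a short retraining phase (keeping pruned weights at zero) would go here
```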
There is a growing discrepancy in computer vision between large models that achieve state-of-the-art performance and models that are affordable in practical applications. In this paper we address this issue and significantly bridge the gap between these two kinds of models. In our empirical study we do not necessarily aim to propose a new method, but rather strive to identify a robust and effective recipe for making state-of-the-art large models affordable in practice. We demonstrate that, when done correctly, knowledge distillation can be a powerful tool for reducing the size of large models without compromising their performance. In particular, we find that there are certain implicit design choices which can drastically affect the effectiveness of distillation. Our key contribution is the explicit identification of these design choices, which had not previously been articulated in the literature. We back up our findings by a comprehensive empirical study, demonstrate compelling results on a wide range of vision datasets, and, in particular, obtain a state-of-the-art ImageNet ResNet-50 model that achieves 82.8% top-1 accuracy.
The architecture and parameters of neural networks are often optimized independently, which requires costly retraining of the parameters whenever the architecture is modified. In this work we focus on growing the architecture without requiring costly retraining. We present a method that adds new neurons during training without affecting what has already been learned, while improving the training dynamics. We achieve the latter by maximizing the gradients of the new weights and find the optimal initialization efficiently by means of singular value decomposition (SVD). We call this technique Gradient Maximizing Growth (GradMax) and demonstrate its effectiveness on a variety of vision tasks and architectures.
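As a rough sketch of the idea, the snippet below initializes a new neuron's fan-out weights to zero (so the function the network computes is unchanged) and its fan-in weights from the top right-singular vectors of an auxiliary matrix built from layer inputs and backpropagated gradients, which is meant to maximize the gradient norm of the new weights. The exact auxiliary matrix and normalization follow the paper and are only approximated here.

```python
import torch

def grow_neurons(x, grad_out, n_new, scale=1e-3):
    """x: [batch, d_in] inputs to the grown layer;
    grad_out: [batch, d_out] loss gradients w.r.t. the next layer's pre-activations."""
    aux = grad_out.t() @ x                           # [d_out, d_in] auxiliary matrix
    _, _, Vh = torch.linalg.svd(aux, full_matrices=False)
    w_in = scale * Vh[:n_new]                        # fan-in init: top right-singular vectors
    w_out = torch.zeros(grad_out.shape[1], n_new)    # fan-out init: zeros keep outputs unchanged
    return w_in, w_out

w_in, w_out = grow_neurons(torch.randn(64, 128), torch.randn(64, 256), n_new=4)
print(w_in.shape, w_out.shape)   # torch.Size([4, 128]) torch.Size([256, 4])
```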
Despite the fact that deep neural networks are powerful models and achieve appealing results on many tasks, they are too large to be deployed on edge devices like smartphones or embedded sensor nodes. There have been efforts to compress these networks, and a popular method is knowledge distillation, where a large (teacher) pre-trained network is used to train a smaller (student) network. However, in this paper, we show that the student network performance degrades when the gap between student and teacher is large. Given a fixed student network, one cannot employ an arbitrarily large teacher, or in other words, a teacher can effectively transfer its knowledge to students up to a certain size, not smaller. To alleviate this shortcoming, we introduce multi-step knowledge distillation, which employs an intermediate-sized network (teacher assistant) to bridge the gap between the student and the teacher. Moreover, we study the effect of teacher assistant size and extend the framework to multi-step distillation. Theoretical analysis and extensive experiments on CIFAR-10,100 and ImageNet datasets and on CNN and ResNet architectures substantiate the effectiveness of our proposed approach.
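At its core the approach reuses the standard softened-logit distillation loss and simply applies it twice: once from the teacher to the assistant and once from the trained assistant to the student. The sketch below shows that loss with illustrative temperature and weighting values, which are assumptions rather than the paper's settings.

```python
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    """Softened KL term between teacher and student plus a hard-label term."""
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction="batchmean") * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

s, t = torch.randn(8, 100), torch.randn(8, 100)
y = torch.randint(0, 100, (8,))
print(kd_loss(s, t, y).item())
# Multi-step use: distill teacher -> assistant with this loss, then assistant -> student.
```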
Knowledge Distillation (KD) consists of transferring "knowledge" from one machine learning model (the teacher) to another (the student). Commonly, the teacher is a high-capacity model with formidable performance, while the student is more compact. By transferring knowledge, one hopes to benefit from the student's compactness, without sacrificing too much performance. We study KD from a new perspective: rather than compressing models, we train students parameterized identically to their teachers. Surprisingly, these Born-Again Networks (BANs) outperform their teachers significantly, both on computer vision and language modeling tasks. Our experiments with BANs based on DenseNets demonstrate state-of-the-art performance on the CIFAR-10 (3.5%) and CIFAR-100 (15.5%) datasets, by validation error. Additional experiments explore two distillation objectives: (i) Confidence-Weighted by Teacher Max (CWTM) and (ii) Dark Knowledge with Permuted Predictions (DKPP). Both methods elucidate the essential components of KD, demonstrating the effect of the teacher outputs on both predicted and non-predicted classes.
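The generational training scheme itself is simple to write down. In the sketch below, `make_model` and `train_one_generation` are hypothetical placeholders for the user's architecture constructor and training routine; the distillation loss inside `train_one_generation` would be the usual softened cross-entropy against the previous generation's outputs.

```python
def born_again(make_model, train_one_generation, n_generations=3):
    """Each generation is trained with the previous, identically-parameterized
    generation as its teacher; generation 0 is trained on hard labels only."""
    teacher = train_one_generation(make_model(), teacher=None)
    generations = [teacher]
    for _ in range(n_generations):
        student = train_one_generation(make_model(), teacher=teacher)
        generations.append(student)
        teacher = student
    return generations   # use the last generation, or ensemble several of them
```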
Network pruning is widely used for reducing the heavy inference cost of deep models in low-resource settings. A typical pruning algorithm is a three-stage pipeline, i.e., training (a large model), pruning and fine-tuning. During pruning, according to a certain criterion, redundant weights are pruned and important weights are kept to best preserve the accuracy. In this work, we make several surprising observations which contradict common beliefs. For all state-of-the-art structured pruning algorithms we examined, fine-tuning a pruned model only gives comparable or worse performance than training that model with randomly initialized weights. For pruning algorithms which assume a predefined target network architecture, one can get rid of the full pipeline and directly train the target network from scratch. Our observations are consistent for multiple network architectures, datasets, and tasks, which imply that: 1) training a large, over-parameterized model is often not necessary to obtain an efficient final model, 2) learned "important" weights of the large model are typically not useful for the small pruned model, 3) the pruned architecture itself, rather than a set of inherited "important" weights, is more crucial to the efficiency in the final model, which suggests that in some cases pruning can be useful as an architecture search paradigm. Our results suggest the need for more careful baseline evaluations in future research on structured pruning methods. We also compare with the "Lottery Ticket Hypothesis" (Frankle & Carbin, 2019), and find that with optimal learning rate, the "winning ticket" initialization as used in Frankle & Carbin (2019) does not bring improvement over random initialization.
Knowledge distillation is a widely applicable technique for training a student neural network under the guidance of a trained teacher network. For example, in neural network compression, a high-capacity teacher is distilled to train a compact student; in privileged learning, a teacher trained with privileged data is distilled to train a student without access to that data. The distillation loss determines how a teacher's knowledge is captured and transferred to the student. In this paper, we propose a new form of knowledge distillation loss that is inspired by the observation that semantically similar inputs tend to elicit similar activation patterns in a trained network. Similarity-preserving knowledge distillation guides the training of a student network such that input pairs that produce similar (dissimilar) activations in the teacher network produce similar (dissimilar) activations in the student network. In contrast to previous distillation methods, the student is not required to mimic the representation space of the teacher, but rather to preserve the pairwise similarities in its own representation space. Experiments on three public datasets demonstrate the potential of our approach.
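A minimal sketch of the loss: pairwise similarities are computed within a batch from flattened activations of one teacher layer and one student layer, each similarity matrix is row-normalized, and the student is penalized for deviating from the teacher's matrix. The shapes and normalization below follow the general description in the abstract; treat it as an approximation rather than the reference implementation.

```python
import torch
import torch.nn.functional as F

def similarity_preserving_loss(f_student, f_teacher):
    """f_*: feature maps of shape [batch, C, H, W] from one layer each
    (channel counts may differ between teacher and student)."""
    def pairwise_sim(f):
        a = f.flatten(1)                    # [batch, C*H*W]
        g = a @ a.t()                       # [batch, batch] pairwise similarities
        return F.normalize(g, p=2, dim=1)   # row-wise L2 normalization
    gs, gt = pairwise_sim(f_student), pairwise_sim(f_teacher)
    return ((gs - gt) ** 2).sum() / gs.shape[0] ** 2

fs, ft = torch.randn(16, 64, 8, 8), torch.randn(16, 256, 8, 8)
print(similarity_preserving_loss(fs, ft).item())
```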
The deployment of deep convolutional neural networks (CNNs) in many real world applications is largely hindered by their high computational cost. In this paper, we propose a novel learning scheme for CNNs to simultaneously 1) reduce the model size; 2) decrease the run-time memory footprint; and 3) lower the number of computing operations, without compromising accuracy. This is achieved by enforcing channel-level sparsity in the network in a simple but effective way. Different from many existing approaches, the proposed method directly applies to modern CNN architectures, introduces minimum overhead to the training process, and requires no special software/hardware accelerators for the resulting models. We call our approach network slimming, which takes wide and large networks as input models, but during training insignificant channels are automatically identified and pruned afterwards, yielding thin and compact models with comparable accuracy. We empirically demonstrate the effectiveness of our approach with several state-of-the-art CNN models, including VGGNet, ResNet and DenseNet, on various image classification datasets. For VGGNet, a multi-pass version of network slimming gives a 20× reduction in model size and a 5× reduction in computing operations.
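The two ingredients of network slimming can be sketched directly: an L1 penalty on the batch-norm scaling factors is added to the training loss, and after training the channels whose factors fall below a global threshold are pruned. The penalty weight and percentile below are illustrative values, not the paper's settings.

```python
import torch
import torch.nn as nn

def bn_l1_penalty(model, lam=1e-4):
    """Sparsity-inducing L1 term on all BatchNorm scale factors (gamma)."""
    return lam * sum(m.weight.abs().sum()
                     for m in model.modules() if isinstance(m, nn.BatchNorm2d))

def channels_to_keep(model, percentile=0.7):
    """Select, per BN layer, the channels whose |gamma| exceeds a global threshold."""
    gammas = torch.cat([m.weight.detach().abs().flatten()
                        for m in model.modules() if isinstance(m, nn.BatchNorm2d)])
    thresh = torch.quantile(gammas, percentile)
    return {name: (m.weight.detach().abs() > thresh).nonzero().flatten()
            for name, m in model.named_modules() if isinstance(m, nn.BatchNorm2d)}

# during training:   loss = task_loss + bn_l1_penalty(model)
# after training:    build the thin model from the channels returned by channels_to_keep(model)
```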
The enormous computational demands of deep neural networks are a major obstacle to their real-world application. Many recent application-specific integrated circuit (ASIC) chips feature dedicated hardware support for neural network acceleration. However, because ASICs take years to develop, they inevitably fall behind the latest developments in neural architecture research. For example, Transformer networks have no native support on many popular chips and are therefore difficult to deploy. In this paper, we propose Arch-Net, a family of neural networks built solely from operators that are efficiently supported across most ASIC architectures. When an Arch-Net is produced, less common network structures, such as layer normalization and embedding layers, are eliminated in a progressive manner through label-free blockwise model distillation, while sub-eight-bit quantization is performed at the same time to maximize performance. Empirical results on machine translation and image classification tasks confirm that we can transform the latest neural architectures into fast-running and accurate Arch-Nets, ready for deployment on multiple mass-produced ASIC chips. The code will be available at https://github.com/megvii-research/Arch-Net.
Knowledge distillation (KD) has shown very promising ability in transferring learned representations from a large model (the teacher) to a small one (the student). However, as the capacity gap between the student and the teacher grows larger, existing KD methods fail to deliver better results. Our work shows that "prior knowledge" is vital to KD, especially when a large teacher is employed. In particular, we propose Dynamic Prior Knowledge (DPK), which integrates part of the teacher's features as prior knowledge before feature distillation. This means our method also takes the teacher's features as "input", not merely as the "target". In addition, we dynamically adjust the ratio of prior knowledge during training according to the feature gap, thereby guiding the student at an appropriate level of difficulty. To evaluate the proposed method, we conduct extensive experiments on two image classification benchmarks (i.e., CIFAR-100 and ImageNet) and one object detection benchmark (i.e., MS COCO). The results show the superiority of our method in performance under different settings. More importantly, our DPK makes the performance of the student model positively correlated with that of the teacher model, which means we can further boost the student's accuracy by applying a larger teacher. Our code will be made public for reproducibility.
Pruning refers to the elimination of trivial weights from neural networks. The sub-networks within an overparameterized model produced after pruning are often called lottery tickets. This research aims to generate winning lottery tickets from a set of lottery tickets that can achieve accuracy similar to the original unpruned network. We introduce a novel winning ticket called the Cyclic Overlapping Lottery Ticket (COLT), obtained by data splitting and cyclic retraining of the pruned network from scratch. We apply a cyclic pruning algorithm that keeps only the overlapping weights of different pruned models trained on different data segments. Our results demonstrate that COLT can achieve accuracies similar to those obtained by the unpruned model while maintaining high sparsity. We show that the accuracy of COLT is on par with the winning tickets of the Lottery Ticket Hypothesis (LTH) and, at times, better. Moreover, COLTs can be generated using fewer iterations than tickets generated by the popular Iterative Magnitude Pruning (IMP) method. In addition, we notice that COLTs generated on large datasets can be transferred to small ones without compromising performance, demonstrating the method's generalizing capability. We conduct all our experiments on the CIFAR-10, CIFAR-100 & TinyImageNet datasets and report performance superior to state-of-the-art methods.
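The overlap step itself is just an element-wise intersection of binary pruning masks obtained from models trained on different data segments. The sketch below assumes the per-split masks already exist (i.e., that training and magnitude pruning on each segment have been carried out elsewhere).

```python
import torch

def overlap_mask(masks):
    """masks: list of {param_name: 0/1 tensor}, one per data segment.
    Returns the element-wise intersection: weights kept by every model."""
    joint = {}
    for name in masks[0]:
        m = masks[0][name].clone()
        for other in masks[1:]:
            m *= other[name]
        joint[name] = m
    return joint

m1 = {"fc.weight": torch.randint(0, 2, (300, 784)).float()}
m2 = {"fc.weight": torch.randint(0, 2, (300, 784)).float()}
joint = overlap_mask([m1, m2])
print(joint["fc.weight"].mean().item())   # fraction of weights surviving in both models
```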
While machine learning is traditionally a resource intensive task, embedded systems, autonomous navigation, and the vision of the Internet of Things fuel the interest in resource-efficient approaches. These approaches aim for a carefully chosen trade-off between performance and resource consumption in terms of computation and energy. The development of such approaches is among the major challenges in current machine learning research and key to ensuring a smooth transition of machine learning technology from a scientific environment with virtually unlimited computing resources into everyday applications. In this article, we provide an overview of the current state of the art of machine learning techniques facilitating these real-world requirements. In particular, we focus on deep neural networks (DNNs), the predominant machine learning models of the past decade. We give a comprehensive overview of the vast literature that can be mainly split into three non-mutually exclusive categories: (i) quantized neural networks, (ii) network pruning, and (iii) structural efficiency. These techniques can be applied during training or as post-processing, and they are widely used to reduce the computational demands in terms of memory footprint, inference speed, and energy efficiency. We also briefly discuss different concepts of embedded hardware for DNNs and their compatibility with machine learning techniques as well as potential for energy and latency reduction. We substantiate our discussion with experiments on well-known benchmark datasets using compression techniques (quantization, pruning) for a set of resource-constrained embedded systems, such as CPUs, GPUs and FPGAs. The obtained results highlight the difficulty of finding good trade-offs between resource efficiency and predictive performance.
Although deep neural networks have achieved great success on various tasks, their ever-increasing size also incurs significant overhead for deployment. To compress these models, knowledge distillation was proposed to transfer knowledge from a cumbersome (teacher) network into a lightweight (student) network. However, the teacher's guidance does not always improve the student's generalization, especially when the gap between student and teacher is large. Previous works argued that this is due to the teacher's high certainty, which leads to labels that are harder to fit. To soften these labels, we propose a pruning method called Prediction Uncertainty Enlargement (PrUE) to simplify the teacher. Specifically, our method aims to reduce the teacher's certainty about the data, thereby producing soft predictions for the student. We empirically investigate the effectiveness of the proposed method through experiments on CIFAR-10/100, Tiny-ImageNet, and ImageNet. The results show that student networks trained with sparse teachers achieve better performance. Moreover, our method allows researchers to distill knowledge from deeper networks to further improve the student. Our code is publicly available at https://github.com/wangshaopu/prue.
Low-rankness plays an important role in traditional machine learning, but is not so popular in deep learning. Most previous low-rank network compression methods compress the networks by approximating pre-trained models and re-training. However, the optimal solution in the Euclidean space may be quite different from the one in the low-rank manifold. A well-pre-trained model is not a good initialization for the model with low-rank constraints. Thus, the performance of a low-rank compressed network degrades significantly. Compared to other network compression methods such as pruning, low-rank methods have attracted less attention in recent years. In this paper, we devise a new training method, low-rank projection with energy transfer (LRPET), that trains low-rank compressed networks from scratch and achieves competitive performance. First, we propose to alternately perform stochastic gradient descent training and projection onto the low-rank manifold. Compared to re-training on the compact model, this enables full utilization of model capacity since the solution space is relaxed back to Euclidean space after projection. Second, the matrix energy (the sum of squares of singular values) reduction caused by projection is compensated by energy transfer. We uniformly transfer the energy of the pruned singular values to the remaining ones. We theoretically show that energy transfer eases the trend of gradient vanishing caused by projection. Third, we propose batch normalization (BN) rectification to cut off its effect on the optimal low-rank approximation of the weight matrix, which further improves the performance. Comprehensive experiments on CIFAR-10 and ImageNet show that our method is superior to other low-rank compression methods and also outperforms recent state-of-the-art pruning methods. Our code is available at https://github.com/BZQLin/LRPET.
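A single projection-plus-energy-transfer step can be sketched as follows: the weight matrix is truncated to rank r by SVD, and the kept singular values are uniformly rescaled so that the matrix energy (the sum of squared singular values) matches the original. BN rectification and the alternation with SGD training are omitted from this sketch.

```python
import torch

def project_with_energy_transfer(W, r):
    """Rank-r SVD truncation of W with uniform energy transfer to the kept singular values."""
    U, S, Vh = torch.linalg.svd(W, full_matrices=False)
    total_energy = (S ** 2).sum()
    S_kept = S[:r]
    scale = torch.sqrt(total_energy / (S_kept ** 2).sum())   # compensate the pruned energy
    return (U[:, :r] * (S_kept * scale)) @ Vh[:r]

W = torch.randn(256, 512)
W_proj = project_with_energy_transfer(W, r=64)
print(torch.linalg.matrix_rank(W_proj).item())                                          # 64
print((torch.linalg.svdvals(W_proj) ** 2).sum(), (torch.linalg.svdvals(W) ** 2).sum())  # equal energies
```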
Feature regression is a simple way of distilling large neural network models into smaller ones. We show that, with simple changes to the network architecture, regression can outperform more complex state-of-the-art approaches for knowledge distillation from self-supervised models. Surprisingly, adding a multi-layer perceptron head to the CNN backbone is beneficial even if it is used only during distillation and discarded in the downstream task. Deeper non-linear projections can thus be used to accurately mimic the teacher without changing the inference architecture or inference time. Moreover, we utilize independent projection heads to simultaneously distill multiple teacher networks. We also find that using the same weakly augmented image as input to both the teacher and the student networks aids distillation. Experiments on the ImageNet dataset demonstrate the efficacy of the proposed changes in various self-supervised distillation settings.
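The setup can be illustrated with a small PyTorch sketch: an MLP head is attached to the student backbone only for distillation and thrown away afterwards, and the loss regresses the projected student features onto the (frozen) teacher's features. The dimensions and the choice of an MSE loss are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

# Toy student backbone producing 64-d features; the MLP head is used only during distillation.
student_backbone = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
                                 nn.AdaptiveAvgPool2d(1), nn.Flatten())
mlp_head = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 2048))

def regression_loss(images, teacher_feats):
    """Regress projected student features onto the teacher's features."""
    return nn.functional.mse_loss(mlp_head(student_backbone(images)), teacher_feats)

x = torch.randn(8, 3, 32, 32)
t = torch.randn(8, 2048)          # stands in for features from a frozen self-supervised teacher
print(regression_loss(x, t).item())
# At inference time only student_backbone is kept; mlp_head is discarded.
```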
While depth tends to improve network performances, it also makes gradient-based training more difficult since deeper networks tend to be more non-linear. The recently proposed knowledge distillation approach is aimed at obtaining small and fast-to-execute models, and it has shown that a student network could imitate the soft output of a larger teacher network or ensemble of networks. In this paper, we extend this idea to allow the training of a student that is deeper and thinner than the teacher, using not only the outputs but also the intermediate representations learned by the teacher as hints to improve the training process and final performance of the student. Because the student intermediate hidden layer will generally be smaller than the teacher's intermediate hidden layer, additional parameters are introduced to map the student hidden layer to the prediction of the teacher hidden layer. This allows one to train deeper students that can generalize better or run faster, a trade-off that is controlled by the chosen student capacity. For example, on CIFAR-10, a deep student network with almost 10.4 times fewer parameters outperforms a larger, state-of-the-art teacher network.
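The hint-based stage can be sketched as follows: a learned regressor (here a 1x1 convolution) maps the thinner student's guided layer to the dimensionality of the teacher's hint layer, and an L2 loss is applied between the two. Channel counts are illustrative; the full method follows this stage with standard knowledge distillation on the whole student.

```python
import torch
import torch.nn as nn

regressor = nn.Conv2d(32, 128, kernel_size=1)   # maps student channels to teacher channels

def hint_loss(student_guided, teacher_hint):
    """L2 distance between the regressed student features and the teacher's hint."""
    return 0.5 * (regressor(student_guided) - teacher_hint).pow(2).mean()

s = torch.randn(8, 32, 16, 16)    # guided layer of the thin, deep student
t = torch.randn(8, 128, 16, 16)   # hint layer of the wider teacher
print(hint_loss(s, t).item())
# Stage 1: train the student up to the guided layer (and the regressor) with hint_loss.
# Stage 2: train the full student with the usual softened-logit knowledge distillation.
```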