This paper explores the feasibility of neural architecture search (NAS) given only a pre-trained model and no access to any original training data, an important setting for privacy protection, bias avoidance, and similar real-world concerns. To achieve this, we first synthesize usable data by recovering the knowledge stored in a pre-trained deep neural network. We then use the synthesized data and their predicted soft labels to guide the architecture search. We identify that, for the NAS task, the synthesized data must have sufficient semantics, diversity, and a minimal domain gap from natural images (which we target here). For semantics, we propose recursive label calibration to produce more informative outputs. For diversity, we propose a regional update strategy to generate more diverse and enriched synthetic data. For a minimal domain gap, we use input- and feature-level regularization to mimic the original data distribution in latent space. We instantiate our approach with three popular NAS algorithms: DARTS, ProxylessNAS, and SPOS. Surprisingly, our results show that architectures discovered by searching on our synthetic data achieve accuracy comparable to those discovered by searching on the original data, leading for the first time to the conclusion that NAS can be done effectively with no access to the original or so-called natural data, provided the synthesis method is well designed. Our code will be made publicly available.
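To make the synthesis step concrete, the following is a minimal PyTorch sketch of the feature-level regularization idea: random inputs are optimized so that their batch statistics match the running statistics stored in the pretrained network's BatchNorm layers, plus a cross-entropy term toward the desired labels. It is an illustrative sketch under assumptions, not the authors' code; the recursive label calibration and regional update steps are omitted, and the model choice, loss weight, and step count are placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet18

# Any pretrained classifier works here; resnet18 is just an example choice.
teacher = resnet18(weights="IMAGENET1K_V1").eval()
for p in teacher.parameters():
    p.requires_grad_(False)

# Hook every BatchNorm layer so the batch statistics of the synthetic images
# can be compared with the running statistics stored during original training.
bn_losses, hooks = [], []

def make_hook(bn):
    def hook(_module, inputs, _output):
        x = inputs[0]
        mean = x.mean(dim=[0, 2, 3])
        var = x.var(dim=[0, 2, 3], unbiased=False)
        bn_losses.append(F.mse_loss(mean, bn.running_mean) +
                         F.mse_loss(var, bn.running_var))
    return hook

for m in teacher.modules():
    if isinstance(m, nn.BatchNorm2d):
        hooks.append(m.register_forward_hook(make_hook(m)))

# Optimize random noise so that the teacher assigns it to the requested classes
# while its feature statistics stay close to those of the original data.
images = torch.randn(16, 3, 224, 224, requires_grad=True)
targets = torch.randint(0, 1000, (16,))
opt = torch.optim.Adam([images], lr=0.1)

for step in range(2000):          # illustrative step count
    bn_losses.clear()
    opt.zero_grad()
    logits = teacher(images)
    loss = F.cross_entropy(logits, targets) + 0.1 * sum(bn_losses)
    loss.backward()
    opt.step()
```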
Over the past decade, many deep learning models have been well trained and have achieved great success in various fields of machine intelligence, especially computer vision and natural language processing. To better exploit these well-trained models in intra-domain or cross-domain transfer learning scenarios, knowledge distillation (KD) and domain adaptation (DA) were proposed and have become research highlights. Both aim to transfer useful information from a well-trained model using the original training data. However, the original data are not always available because of privacy, copyright, or confidentiality. Recently, the data-free knowledge transfer paradigm has attracted increasing attention, as it distills valuable knowledge from well-trained models without requiring access to the training data. It mainly consists of data-free knowledge distillation (DFKD) and source-data-free domain adaptation (SFDA). On the one hand, DFKD aims to transfer intra-domain knowledge from a cumbersome teacher network to a compact student network for model compression and efficient inference. On the other hand, the goal of SFDA is to reuse the cross-domain knowledge stored in a well-trained source model and adapt it to the target domain. In this paper, we provide a comprehensive survey of data-free knowledge transfer from the perspectives of knowledge distillation and unsupervised domain adaptation, to help readers better understand the current research status and ideas. The applications and challenges of the two areas are briefly reviewed, respectively. Furthermore, we provide some insights into topics for future research.
Conditional generative adversarial networks (cGANs) have enabled controllable image synthesis for many vision and graphics applications. However, recent cGANs are 1-2 orders of magnitude more computationally intensive than modern recognition CNNs. For example, GauGAN consumes 281G MACs per image, compared to 0.44G MACs for MobileNet-v3, making interactive deployment difficult. In this work, we propose a general-purpose compression framework for reducing the inference time and model size of the generator in cGANs. Directly applying existing compression methods yields poor performance due to the difficulty of GAN training and the differences in generator architectures. We address these challenges in two ways. First, to stabilize GAN training, we transfer the knowledge of multiple intermediate representations of the original model to its compressed counterpart, unifying unpaired and paired learning. Second, instead of reusing existing CNN designs, our method finds efficient architectures via neural architecture search. To accelerate the search process, we decouple model training and search via weight sharing. Experiments demonstrate the effectiveness of our method across different supervision settings, network architectures, and learning methods. Without losing image quality, we reduce the computation of CycleGAN and Pix2pix by 12x, MUNIT by 29x, and GauGAN by 9x, paving the way for interactive image synthesis.
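A hedged sketch of the intermediate-representation transfer mentioned above, assuming PyTorch: selected feature maps of the compressed generator are mapped to the teacher's channel width by learnable 1x1 convolutions and matched with an MSE loss. The class name and channel lists are hypothetical, and the actual framework combines this with adversarial losses and the architecture search not shown here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureDistill(nn.Module):
    """Match selected intermediate feature maps of the compressed (student)
    generator to those of the original (teacher) generator; a learnable 1x1
    convolution lifts each student feature to the teacher's channel width."""
    def __init__(self, student_channels, teacher_channels):
        super().__init__()
        self.adapters = nn.ModuleList(
            [nn.Conv2d(cs, ct, kernel_size=1)
             for cs, ct in zip(student_channels, teacher_channels)]
        )

    def forward(self, student_feats, teacher_feats):
        # Sum of per-layer MSE losses; teacher features are treated as targets.
        return sum(F.mse_loss(adapt(fs), ft.detach())
                   for adapt, fs, ft in zip(self.adapters, student_feats, teacher_feats))

# Example wiring (hypothetical channel counts for four distilled layers):
# distill = FeatureDistill([16, 32, 32, 16], [64, 128, 128, 64])
# loss_distill = distill(student_feature_list, teacher_feature_list)
```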
Knowledge distillation has achieved remarkable success in model compression. However, most existing methods require the original training data, which is often unavailable in practice because of privacy, security, and transmission restrictions. To address this problem, we propose a conditional generative data-free knowledge distillation (CGDD) framework for training efficient, portable networks without any real data. In this framework, in addition to using the knowledge extracted from the teacher model, we introduce preset labels as additional auxiliary information to train the generator. The trained generator can then produce meaningful training samples of specified categories on demand. To facilitate the distillation process, besides the conventional distillation loss, we treat the preset labels as ground-truth labels, so that the student network is directly supervised by the categories of the synthetic training samples. Moreover, we force the student network to mimic the attention maps of the teacher model, further improving its performance. To verify the superiority of our method, we design a new evaluation metric called relative accuracy, which allows the effectiveness of different distillation methods to be compared directly. Portable networks trained with the proposed data-free distillation method achieve 99.63%, 99.07%, and 99.84% relative accuracy on CIFAR10, CIFAR100, and Caltech101, respectively. The experimental results demonstrate the superiority of the proposed method.
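The student-side objective described above can be pictured as the sum of three terms: hard supervision by the preset labels, soft-label distillation from the teacher, and attention-map mimicking on intermediate features. The PyTorch snippet below is an assumed formulation for illustration only; the weighting, temperature, and exact attention definition are not taken from the paper.

```python
import torch
import torch.nn.functional as F

def attention_map(feat):
    # Spatial attention: channel-wise mean of squared activations, L2-normalized.
    a = feat.pow(2).mean(dim=1).flatten(1)
    return F.normalize(a, dim=1)

def student_loss(student_logits, teacher_logits, preset_labels,
                 student_feats, teacher_feats, T=4.0, alpha=1.0, beta=1.0):
    # (1) The preset labels that condition the generator also serve as
    #     ground-truth labels for the synthetic samples.
    ce = F.cross_entropy(student_logits, preset_labels)
    # (2) Conventional soft-label distillation from the teacher.
    kd = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                  F.softmax(teacher_logits / T, dim=1),
                  reduction="batchmean") * T * T
    # (3) Attention-map mimicking on paired intermediate features
    #     (assumed to share spatial resolution).
    at = sum(F.mse_loss(attention_map(fs), attention_map(ft.detach()))
             for fs, ft in zip(student_feats, teacher_feats))
    return ce + alpha * kd + beta * at
```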
Deep learning technologies have demonstrated remarkable effectiveness in a wide variety of tasks, and deep learning holds the potential to advance a multitude of applications, including edge computing, where deep models are deployed on edge devices to enable instant data processing and response. A key challenge is that while the application of deep models often incurs substantial memory and computational costs, edge devices typically offer only very limited storage and computational capabilities, which can vary substantially across devices. These characteristics make it difficult to build deep learning solutions that unleash the potential of edge devices while complying with their constraints. A promising approach to addressing this challenge is to automate the design of effective deep learning models that are lightweight, require only a small amount of storage, and incur only low computational overheads. This survey provides comprehensive coverage of techniques for automating the design of deep learning models for edge computing. It offers an overview and comparison of the key metrics commonly used to quantify models in terms of effectiveness, lightness, and computational cost. The survey then covers the state of the art in three categories of deep-model design automation techniques: automated neural architecture search, automated model compression, and joint automated design and compression. Finally, the survey discusses open issues and directions for future research.
What can a neural network learn about the visual world from a single image? While a single image obviously cannot contain the multitudes of possible objects, scenes, and lighting conditions that exist within the space of all possible $256^{3\times224\times224}$ 224-sized square images, it might still provide a strong prior over natural images. To analyze this hypothesis, we develop a framework for training neural networks from a single image by means of knowledge distillation from a supervised pretrained teacher. With this, we find that the answer to the question above is: "surprisingly, a lot." In quantitative terms, we obtain 94%/74% top-1 accuracy on CIFAR-10/100, competitive results on ImageNet, and, by extending the method to audio, 84% on SpeechCommands. In extensive analyses, we disentangle the effects of augmentations and of the choice of source image and network architecture, and we also discover "panda neurons" in networks that have never seen a panda. This work shows that a single image can be used to infer thousands of object classes, and it motivates a renewed research agenda on the fundamental interplay between augmentations and images.
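A rough sketch of how such a single-image distillation pipeline could look in PyTorch: heavy augmentation of one source picture provides a stream of inputs, and the student is trained to match the pretrained teacher's soft predictions on them. The file name, augmentation recipe, and hyperparameters are placeholders, not the authors' settings.

```python
import torch
import torch.nn.functional as F
from torchvision import transforms
from PIL import Image

# Heavy augmentation turns one source picture into a stream of training inputs.
augment = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.08, 1.0)),
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(0.4, 0.4, 0.4, 0.2),
    transforms.ToTensor(),
])
source = Image.open("single_image.jpg").convert("RGB")   # hypothetical file

def make_batch(batch_size=64):
    return torch.stack([augment(source) for _ in range(batch_size)])

def distill_step(student, teacher, optimizer, T=4.0):
    x = make_batch()
    with torch.no_grad():
        t_logits = teacher(x)               # teacher assumed to be in eval mode
    s_logits = student(x)
    loss = F.kl_div(F.log_softmax(s_logits / T, dim=1),
                    F.softmax(t_logits / T, dim=1),
                    reduction="batchmean") * T * T
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```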
We introduce Latency-Aware Network Acceleration (LANA), a method that builds on neural architecture search techniques and teacher-student distillation to accelerate neural networks. LANA consists of two phases: in the first phase, it trains many alternative operations for every layer of the teacher network using layer-wise feature-map distillation. In the second phase, it solves the combinatorial selection of efficient operations using a novel integer linear optimization (ILP) approach. The ILP brings unique properties as it (i) performs NAS within a few seconds, (ii) easily satisfies budget constraints, (iii) works at the layer granularity, and (iv) supports a huge search space of $O(10^{100})$, surpassing prior search approaches in both efficacy and efficiency. In extensive experiments, we show that LANA yields efficient and accurate models constrained by a target latency budget, while being significantly faster than other techniques. We analyze three popular network architectures (EfficientNetV1, EfficientNetV2, and ResNeST) and achieve accuracy improvements for all models (up to $3.0\%$) when compressing larger models to the latency level of smaller models. LANA achieves significant speedups (up to $5\times$) on GPU and CPU with no accuracy drop. Code will be shared soon.
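The second-stage selection can be illustrated as a small integer linear program: one binary variable per (layer, candidate operation) pair, a constraint that exactly one operation is chosen per layer, a latency budget constraint, and an objective that sums per-layer scores. The toy numbers below and the use of the open-source PuLP/CBC solver are assumptions for illustration only, not the authors' formulation or tooling.

```python
import pulp

# Toy inputs: score[i][j] is the (distillation-based) quality of replacing
# teacher layer i with candidate operation j; latency[i][j] is its measured
# cost; budget is the target latency. All numbers are made up.
score   = [[0.0, -0.1, -0.4], [0.0, -0.2, -0.5], [0.0, -0.1, -0.3]]
latency = [[5.0,  3.0,  1.0], [8.0,  4.0,  2.0], [6.0,  3.5,  1.5]]
budget = 12.0

prob = pulp.LpProblem("operation_selection", pulp.LpMaximize)
x = [[pulp.LpVariable(f"x_{i}_{j}", cat="Binary")
      for j in range(len(latency[i]))] for i in range(len(latency))]

for row in x:                                   # exactly one operation per layer
    prob += pulp.lpSum(row) == 1
prob += pulp.lpSum(latency[i][j] * x[i][j]      # stay within the latency budget
                   for i in range(len(x)) for j in range(len(x[i]))) <= budget
prob += pulp.lpSum(score[i][j] * x[i][j]        # maximize the summed layer scores
                   for i in range(len(x)) for j in range(len(x[i])))

prob.solve(pulp.PULP_CBC_CMD(msg=False))
chosen = [max(range(len(row)), key=lambda j: x[i][j].value())
          for i, row in enumerate(x)]
print("selected operation per layer:", chosen)
```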
There is a growing discrepancy in computer vision between large-scale models that achieve state-of-the-art performance and models that are affordable in practical applications. In this paper we address this issue and significantly bridge the gap between these two types of models. Throughout our empirical investigation we do not necessarily aim to propose a new method, but strive to identify a robust and effective recipe for making state-of-the-art large-scale models affordable in practice. We demonstrate that, when performed correctly, knowledge distillation can be a powerful tool for reducing the size of large models without compromising their performance. In particular, we uncover that there are certain implicit design choices which may drastically affect the effectiveness of distillation. Our key contribution is the explicit identification of these design choices, which were not previously articulated in the literature. We back up our findings with a comprehensive empirical study, demonstrate compelling results on a wide range of vision datasets, and in particular obtain a state-of-the-art ImageNet ResNet-50 model that achieves 82.8% top-1 accuracy.
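One design choice this line of work highlights is what the teacher is queried on during distillation. The sketch below assumes a "consistent" setup in which teacher and student see exactly the same augmented view of each image and the student minimizes a KL divergence to the teacher's soft predictions; it is a plausible illustration rather than the paper's exact recipe, and `augment` is assumed to be a callable operating on a batch tensor.

```python
import torch
import torch.nn.functional as F

def consistent_distill_step(student, teacher, images, augment, optimizer, T=1.0):
    # The teacher is queried on the very same augmented view the student is
    # trained on, rather than on a clean or independently augmented view.
    x = augment(images)                      # `augment` assumed to act on a batch
    with torch.no_grad():
        t_logits = teacher(x)
    s_logits = student(x)
    loss = F.kl_div(F.log_softmax(s_logits / T, dim=1),
                    F.softmax(t_logits / T, dim=1),
                    reduction="batchmean") * T * T
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```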
Data-free knowledge distillation (DFKD) has recently attracted increasing attention from the research community, attributed to its ability to compress a model using only synthetic data. Despite the encouraging results, state-of-the-art DFKD methods still suffer from inefficient data synthesis, which makes the data-free training process extremely time-consuming and thus impractical for large-scale tasks. In this work, we introduce an efficient scheme, termed FastDFKD, that allows us to accelerate DFKD by orders of magnitude. At the heart of our approach is a novel strategy that reuses the shared common features in the training data so as to synthesize different data instances. Unlike prior methods that optimize each set of data independently, we propose to learn a meta-synthesizer that seeks common features as the initialization for fast data synthesis. As a result, FastDFKD achieves data synthesis within only a few steps, significantly improving the efficiency of data-free training. Experiments on CIFAR, NYUv2, and ImageNet show that the proposed FastDFKD achieves $10\times$ and even $100\times$ acceleration while preserving performance on par with the state of the art.
Regularization-based methods are beneficial for alleviating the catastrophic forgetting problem in class-incremental learning. Owing to the lack of old-task images, they often assume that the old knowledge is well preserved if the classifier produces similar outputs on new images. In this paper, we find that their effectiveness largely depends on the nature of the old classes: they work well on classes that are easily distinguishable from one another, but may fail on more fine-grained ones, e.g., boy and girl. In spirit, such methods project new data onto the feature space spanned by the fully connected layer's weight vectors that correspond to the old classes. The resulting projections are similar for fine-grained old classes, and as a consequence the new classifier gradually loses its discriminative ability on these classes. To address this issue, we propose a memory-free generative replay strategy that preserves fine-grained old-class characteristics by generating representative old images directly from the old classifier and combining them with new data to train the new classifier. To solve the homogenization problem of the generated samples, we also propose a diversity loss that maximizes the Kullback-Leibler (KL) divergence between generated samples. Our method is best complemented by prior regularization-based methods, which are proven effective for easily distinguishable old classes. We validate the above design and insights on CUB-200-2011, Caltech-101, CIFAR-100, and Tiny-ImageNet, and show that our strategy outperforms existing memory-free methods with a clear margin. Code is available at https://github.com/xmengxin/mfgr
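The diversity loss can be sketched as follows, assuming PyTorch: pairwise KL divergences between the predicted class distributions of the generated samples are averaged and negated, so that minimizing the term pushes the samples apart. The exact form used in the paper may differ; this is an illustrative approximation.

```python
import torch
import torch.nn.functional as F

def diversity_loss(logits):
    """Push the predicted class distributions of generated samples apart by
    averaging the pairwise KL divergences and returning the negative value,
    so that minimizing this term maximizes diversity."""
    p = F.softmax(logits, dim=1)
    log_p = F.log_softmax(logits, dim=1)
    # kl[i, j] = KL(p_i || p_j) for every ordered pair in the batch.
    kl = (p.unsqueeze(1) * (log_p.unsqueeze(1) - log_p.unsqueeze(0))).sum(-1)
    n = logits.size(0)
    mean_pairwise_kl = kl.sum() / (n * (n - 1))
    return -mean_pairwise_kl
```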
Knowledge distillation is a widely applicable technique for training a student neural network under the guidance of a trained teacher network. For example, in neural network compression, a high-capacity teacher is distilled to train a compact student; in privileged learning, a teacher trained with privileged data is distilled to train a student without access to that data. The distillation loss determines how a teacher's knowledge is captured and transferred to the student. In this paper, we propose a new form of knowledge distillation loss that is inspired by the observation that semantically similar inputs tend to elicit similar activation patterns in a trained network. Similarity-preserving knowledge distillation guides the training of a student network such that input pairs that produce similar (dissimilar) activations in the teacher network produce similar (dissimilar) activations in the student network. In contrast to previous distillation methods, the student is not required to mimic the representation space of the teacher, but rather to preserve the pairwise similarities in its own representation space. Experiments on three public datasets demonstrate the potential of our approach.
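A compact PyTorch sketch of the similarity-preserving idea described above: each network's batch activations are flattened, turned into a batch-by-batch similarity (Gram) matrix, row-normalized, and the squared Frobenius distance between the student and teacher matrices is penalized. Function and variable names are illustrative, not the released code.

```python
import torch
import torch.nn.functional as F

def similarity_preserving_loss(student_feat, teacher_feat):
    """Penalize differences between the student's and teacher's batch-level
    pairwise similarity matrices instead of matching features directly."""
    b = student_feat.size(0)
    fs = student_feat.reshape(b, -1)
    ft = teacher_feat.reshape(b, -1)
    gs = F.normalize(fs @ fs.t(), p=2, dim=1)   # student pairwise similarities
    gt = F.normalize(ft @ ft.t(), p=2, dim=1)   # teacher pairwise similarities
    return (gs - gt).pow(2).sum() / (b * b)
```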
While machine learning is traditionally a resource intensive task, embedded systems, autonomous navigation, and the vision of the Internet of Things fuel the interest in resource-efficient approaches. These approaches aim for a carefully chosen trade-off between performance and resource consumption in terms of computation and energy. The development of such approaches is among the major challenges in current machine learning research and key to ensure a smooth transition of machine learning technology from a scientific environment with virtually unlimited computing resources into everyday applications. In this article, we provide an overview of the current state of the art of machine learning techniques facilitating these real-world requirements. In particular, we focus on deep neural networks (DNNs), the predominant machine learning models of the past decade. We give a comprehensive overview of the vast literature that can be mainly split into three non-mutually exclusive categories: (i) quantized neural networks, (ii) network pruning, and (iii) structural efficiency. These techniques can be applied during training or as post-processing, and they are widely used to reduce the computational demands in terms of memory footprint, inference speed, and energy efficiency. We also briefly discuss different concepts of embedded hardware for DNNs and their compatibility with machine learning techniques as well as potential for energy and latency reduction. We substantiate our discussion with experiments on well-known benchmark datasets using compression techniques (quantization, pruning) for a set of resource-constrained embedded systems, such as CPUs, GPUs and FPGAs. The obtained results highlight the difficulty of finding good trade-offs between resource efficiency and predictive performance.
Structural pruning of neural network parameters reduces computation, energy, and memory transfer costs during inference. We propose a novel method that estimates the contribution of a neuron (filter) to the final loss and iteratively removes those with smaller scores. We describe two variations of our method using the first- and second-order Taylor expansions to approximate a filter's contribution. Both methods scale consistently across any network layer without requiring per-layer sensitivity analysis and can be applied to any kind of layer, including skip connections. For modern networks trained on ImageNet, we measured experimentally a high (>93%) correlation between the contribution computed by our methods and a reliable estimate of the true importance. Pruning with the proposed methods leads to an improvement over state-of-the-art in terms of accuracy, FLOPs, and parameter reduction. On ResNet-101, we achieve a 40% FLOPs reduction by removing 30% of the parameters, with a loss of 0.02% in the top-1 accuracy on ImageNet. Code is available at https://github.com/NVlabs/Taylor_pruning.
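The first-order variant can be sketched in a few lines of PyTorch: after a backward pass, the importance of each output filter is approximated by the squared sum of weight times gradient over that filter, and low-scoring filters become candidates for removal. This is a hedged illustration of the idea, not the released implementation.

```python
import torch
import torch.nn as nn

@torch.no_grad()
def filter_importance(conv: nn.Conv2d) -> torch.Tensor:
    """First-order Taylor score per output filter: the squared sum of
    weight * gradient over the filter. Call after loss.backward() so that
    conv.weight.grad is populated."""
    w = conv.weight                    # (out_channels, in_channels, kH, kW)
    g = conv.weight.grad
    return (w * g).sum(dim=[1, 2, 3]).pow(2)

# Usage sketch: accumulate these scores over a few mini-batches, rank filters
# globally, zero out or remove the lowest-scoring ones, then fine-tune.
```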
Knowledge distillation is a popular technique for training a small student network to emulate a larger teacher model, such as an ensemble of networks. We show that while knowledge distillation can improve student generalization, it often does not work as is commonly understood: there frequently remains a surprisingly large discrepancy between the predictive distributions of the teacher and the student, even in cases when the student has the capacity to match the teacher perfectly. We identify difficulties in optimization as a key reason why the student is unable to match the teacher. We also show how the details of the dataset used for distillation play a role in how closely the student matches the teacher, and that paradoxically a closer match to the teacher does not always lead to better student generalization.
Although considerable progress has been obtained in neural network quantization for efficient inference, existing methods are not scalable to heterogeneous devices as one dedicated model needs to be trained, transmitted, and stored for one specific hardware setting, incurring considerable costs in model training and maintenance. In this paper, we study a new vertical-layered representation of neural network weights for encapsulating all quantized models into a single one. With this representation, we can theoretically achieve any precision network for on-demand service while only needing to train and maintain one model. To this end, we propose a simple once quantization-aware training (QAT) scheme for obtaining high-performance vertical-layered models. Our design incorporates a cascade downsampling mechanism which allows us to obtain multiple quantized networks from one full precision source model by progressively mapping the higher precision weights to their adjacent lower precision counterparts. Then, with networks of different bit-widths from one source model, multi-objective optimization is employed to train the shared source model weights such that they can be updated simultaneously, considering the performance of all networks. By doing this, the shared weights will be optimized to balance the performance of different quantized models, thus making the weights transferable among different bit-widths. Experiments show that the proposed vertical-layered representation and the developed once QAT scheme are effective in embodying multiple quantized networks into a single one and allow one-time training, and they deliver performance comparable to that of quantized models tailored to any specific bit-width. Code will be available.
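One simple way to picture the cascade mapping is as keeping the most significant bits of a higher-precision integer weight to obtain its lower-precision counterpart, so that a single stored model can be read out at several bit-widths. The snippet below is only an assumed illustration of that nesting; the paper's actual downsampling mechanism and training procedure are more involved.

```python
import torch

def cascade_downsample(weights_int, bits_from=8, bits_to=4):
    """Derive a lower-precision weight tensor from its higher-precision
    neighbor by keeping the most significant bits (integer floor division),
    so one stored model can be read out at several bit-widths."""
    shift = bits_from - bits_to
    return torch.div(weights_int, 2 ** shift, rounding_mode="floor")

# Example: an int8-range weight tensor mapped to its int4-range counterpart.
w8 = torch.randint(-128, 128, (16, 16), dtype=torch.int32)
w4 = cascade_downsample(w8)            # values now lie in [-8, 7]
```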
Zero-shot quantization is a promising approach for developing lightweight deep neural networks when data is inaccessible owing to various reasons, including cost and issues related to privacy. By utilizing the learned parameters (statistics) of FP32-pre-trained models, zero-shot quantization schemes focus on generating synthetic data by minimizing the distance between the learned parameters ($\mu$ and $\sigma$) and distributions of intermediate activations. Subsequently, they distill knowledge from the pre-trained model (teacher) to the quantized model (student) such that the quantized model can be optimized with the synthetic dataset. In general, zero-shot quantization comprises two major elements: synthesizing datasets and quantizing models. However, thus far, zero-shot quantization has primarily been discussed in the context of quantization-aware training methods, which require task-specific losses and long-term optimization as much as retraining. We thus introduce a post-training quantization scheme for zero-shot quantization that produces high-quality quantized networks within a few hours, or even half an hour. Furthermore, we propose a framework called Genie that generates data suited for post-training quantization. With the data synthesized by Genie, we can produce high-quality quantized models without real datasets, which is comparable to few-shot quantization. We also propose a post-training quantization algorithm to enhance the performance of quantized models. By combining them, we can bridge the gap between zero-shot and few-shot quantization while significantly improving the quantization performance compared to that of existing approaches. In other words, we can obtain a unique state-of-the-art zero-shot quantization approach.
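Once synthetic images are available, a post-training pipeline typically calibrates quantization parameters on them instead of on real data. The sketch below shows a generic symmetric per-tensor calibration in PyTorch, using forward hooks to record activation ranges on synthetic batches; it is a simplified stand-in under assumptions, not the algorithm proposed in the paper.

```python
import torch
import torch.nn as nn

@torch.no_grad()
def calibrate_scales(model, synthetic_batches, num_bits=8):
    """Record the absolute activation range of each Conv/Linear layer on
    synthetic batches and derive per-tensor scales for symmetric quantization."""
    ranges, hooks = {}, []

    def make_hook(name):
        def hook(_module, _inputs, output):
            ranges[name] = max(ranges.get(name, 0.0), output.abs().max().item())
        return hook

    for name, m in model.named_modules():
        if isinstance(m, (nn.Conv2d, nn.Linear)):
            hooks.append(m.register_forward_hook(make_hook(name)))
    model.eval()
    for x in synthetic_batches:        # data produced without any real images
        model(x)
    for h in hooks:
        h.remove()
    qmax = 2 ** (num_bits - 1) - 1
    return {name: max(r, 1e-8) / qmax for name, r in ranges.items()}

def quantize(tensor, scale, num_bits=8):
    qmax = 2 ** (num_bits - 1) - 1
    return torch.clamp(torch.round(tensor / scale), -qmax - 1, qmax) * scale
```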
Computational cost of training state-of-the-art deep models in many learning problems is rapidly increasing due to more sophisticated models and larger datasets. A recent promising direction for reducing training cost is dataset condensation that aims to replace the original large training set with a significantly smaller learned synthetic set while preserving the original information. While training deep models on the small set of condensed images can be extremely fast, their synthesis remains computationally expensive due to the complex bi-level optimization and second-order derivative computation. In this work, we propose a simple yet effective method that synthesizes condensed images by matching feature distributions of the synthetic and original training images in many sampled embedding spaces. Our method significantly reduces the synthesis cost while achieving comparable or better performance. Thanks to its efficiency, we apply our method to more realistic and larger datasets with sophisticated neural architectures and obtain a significant performance boost. We also show promising practical benefits of our method in continual learning and neural architecture search.
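The core objective can be sketched as matching mean feature embeddings of real and synthetic images under randomly sampled embedding networks, which sidesteps the bi-level optimization and second-order derivatives mentioned above. The snippet is an illustrative approximation; `embed_fn`, `RandomConvNet`, and the commented training loop are assumptions, not the authors' code.

```python
import torch

def distribution_matching_loss(real_batch, syn_images, embed_fn):
    """Match the mean embedding of the learnable synthetic images to that of a
    real batch under a (randomly initialized) embedding network."""
    with torch.no_grad():
        real_mean = embed_fn(real_batch).mean(dim=0)
    syn_mean = embed_fn(syn_images).mean(dim=0)
    return (syn_mean - real_mean).pow(2).sum()

# Sketch of the outer loop: re-sample a random embedding network every
# iteration and take a gradient step on the synthetic images themselves.
# syn_images = torch.randn(n_condensed, 3, 32, 32, requires_grad=True)
# opt = torch.optim.SGD([syn_images], lr=1.0)
# for _ in range(num_iterations):
#     net = RandomConvNet()                    # hypothetical random embedder
#     loss = distribution_matching_loss(next(real_iter), syn_images, net)
#     opt.zero_grad(); loss.backward(); opt.step()
```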
Recently, generative data-free quantization has emerged as a practical approach for compressing neural networks to low bit-widths without access to real data. It quantizes networks by exploiting the batch normalization (BN) statistics of their full-precision counterparts to generate data. However, our study shows that, in practice, the synthetic data generated from BN statistics suffer from serious homogenization at both the distribution level and the sample level, which causes severe degradation of the quantized network. This paper presents a generic Diverse Sample Generation (DSG) scheme for generative data-free post-training quantization and quantization-aware training, to mitigate the detrimental homogenization. In our DSG, we first slacken the statistics alignment for features in the BN layers to relax the distribution constraint. Then, we strengthen the loss impact of specific BN layers for different samples and inhibit the correlation among samples during generation, to diversify samples from the statistical and spatial perspectives, respectively. Extensive experiments show that, for large-scale image classification tasks, our DSG can consistently outperform existing data-free quantization methods on various neural architectures, especially under ultra-low bit-widths (e.g., a 22% gain under the W4A4 setting). Moreover, the data diversification induced by our DSG brings a general gain to various quantization methods, demonstrating that diversity is an important property of high-quality synthetic data for data-free quantization.
Mixed-precision deep neural networks achieve the energy efficiency and throughput needed for hardware deployment, particularly when resources are limited, without sacrificing accuracy. However, the optimal per-layer bit precision that preserves accuracy is not easy to find, especially given the large search space created by the numerous models, datasets, and quantization techniques. To tackle this difficulty, a body of literature has emerged recently, and several frameworks achieving promising accuracy results have been proposed. In this paper, we first summarize the quantization techniques commonly used in the literature. Then, we present a thorough survey of the mixed-precision frameworks, categorized according to their optimization techniques, such as reinforcement learning, and their quantization techniques, such as deterministic rounding. Furthermore, we discuss the advantages and shortcomings of each framework and present a juxtaposition. Finally, we give guidelines for future mixed-precision frameworks.