Considering the computational complexity, we propose a Guided Hybrid Quantization with One-to-one Self-Teaching (GHOST) framework. More concretely, we first design a structure called guided quantization self-distillation (GQSD), an innovative idea for achieving lightweight models through the synergy of quantization and distillation. The training of the quantized model is guided by its full-precision counterpart, which saves time and cost because no huge pre-trained model needs to be prepared in advance. Second, we put forward a hybrid quantization (HQ) module to obtain the optimal bit width automatically under a constrained condition, where a threshold on the distribution distance between the center and samples is applied in the weight-value search space. Third, to improve information transfer, we propose a one-to-one self-teaching (OST) module that gives the student network the ability of self-judgment. A switch control machine (SCM) builds a bridge between the student network and the teacher network at the same location to help the teacher reduce wrong guidance and impart vital knowledge to the student. This distillation method allows a model to learn from itself and gain substantial improvement without any additional supervision. Extensive experiments on a multimodal dataset (VEDAI) and single-modality datasets (DOTA, NWPU, and DIOR) show that object detection based on GHOST outperforms existing detectors. The tiny parameter size (<9.7 MB) and Bit-Operations (BOPs) (<2158 G) compared with any remote-sensing-based, lightweight, or distillation-based algorithms demonstrate its superiority in the lightweight design domain. Our code and model will be released at https://github.com/icey-zhang/GHOST.
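As a rough illustration of the GQSD idea, the sketch below runs a full-precision branch and a fake-quantized branch of the same network and distills the former into the latter. The uniform quantizer, bit width, loss weights, and function names are assumptions for illustration, not the paper's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def fake_quantize(w, bits=4):
    """Uniform symmetric fake quantization with a straight-through estimator."""
    qmax = 2 ** (bits - 1) - 1
    scale = w.abs().max().clamp(min=1e-8) / qmax
    w_q = torch.clamp(torch.round(w / scale), -qmax, qmax) * scale
    return w + (w_q - w).detach()  # forward uses w_q, gradient passes through w

class QuantLinear(nn.Linear):
    """Linear layer whose weights are fake-quantized on the fly."""
    def __init__(self, in_f, out_f, bits=4):
        super().__init__(in_f, out_f)
        self.bits = bits

    def forward(self, x):
        return F.linear(x, fake_quantize(self.weight, self.bits), self.bias)

def gqsd_loss(fp_logits, q_logits, labels, alpha=0.5, T=4.0):
    """Full-precision branch guides its own quantized counterpart (self-distillation)."""
    task = F.cross_entropy(q_logits, labels)
    kd = F.kl_div(F.log_softmax(q_logits / T, dim=1),
                  F.softmax(fp_logits.detach() / T, dim=1),
                  reduction="batchmean") * T * T
    return (1 - alpha) * task + alpha * kd
```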
Knowledge distillation in machine learning is the process of transferring knowledge from a large model, called the teacher, to a smaller model, called the student. Knowledge distillation is one of the techniques for compressing a large network (teacher) into a smaller network (student) that can be deployed on small devices such as mobile phones. When the size gap between the teacher and the student networks increases, the performance of the student network drops. To address this problem, an intermediate model, called the teacher assistant, is employed between the teacher model and the student model, which in turn bridges the gap between the teacher and the student. In this study, we show that using multiple teacher assistant models, the student model (the smaller model) can be further improved. We combine these multiple teacher assistant models using weighted ensemble learning, where a differential evolution optimization algorithm is used to generate the weight values.
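A minimal sketch of the weighted-ensemble distillation step follows; the weight vector is assumed to come from a differential-evolution search run separately, and the loss form and names are illustrative rather than the paper's exact recipe.

```python
import torch
import torch.nn.functional as F

def ensemble_kd_loss(student_logits, assistant_logits_list, labels,
                     weights, T=4.0, alpha=0.7):
    """KL distillation against a weighted average of teacher-assistant soft targets."""
    weights = torch.softmax(torch.as_tensor(weights, dtype=torch.float32), dim=0)
    soft_targets = sum(w * F.softmax(l / T, dim=1)
                       for w, l in zip(weights, assistant_logits_list))
    kd = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                  soft_targets, reduction="batchmean") * T * T
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1 - alpha) * ce
```

In practice the weight values would be the decision variables of the differential-evolution search, evaluated by the resulting student's validation accuracy.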
Recently, large-scale pre-trained models have shown their advantages in many tasks. However, due to their huge computational complexity and storage requirements, it is challenging to apply large-scale models to real scenes. A common solution is knowledge distillation, which regards the large-scale model as a teacher model and helps to train a small student model to obtain competitive performance. Cross-task knowledge distillation expands the application scenarios of the large-scale pre-trained model. Existing knowledge distillation works focus on directly mimicking the final prediction or the intermediate layers of the teacher model, which represent global-level characteristics and are task-specific. To alleviate the constraint of different label spaces, capturing invariant intrinsic local object characteristics (such as the shape characteristics of the legs and tails of cattle and horses) plays a key role. Considering the complexity and variability of real-scene tasks, we propose a Prototype-guided Cross-task Knowledge Distillation (ProC-KD) approach to transfer the intrinsic local-level object knowledge of a large-scale teacher network to various task scenarios. First, to better transfer the generalized knowledge in the teacher model in cross-task scenarios, we propose a prototype learning module to learn from the essential feature representation of objects in the teacher model. Second, for diverse downstream tasks, we propose a task-adaptive feature augmentation module to enhance the features of the student model with the learned generalization prototype features and guide the training of the student model to improve its generalization ability. Experimental results on various visual tasks demonstrate the effectiveness of our approach for large-scale model cross-task knowledge distillation scenarios.
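As a hedged illustration of prototype learning, the sketch below estimates per-class prototypes as means of pooled teacher object features and pulls student features toward them; it is a simplification under assumed shapes, not the ProC-KD modules themselves.

```python
import torch
import torch.nn.functional as F

def class_prototypes(teacher_feats, labels, num_classes):
    """teacher_feats: (N, D) pooled object features; labels: (N,) class ids."""
    protos = torch.zeros(num_classes, teacher_feats.size(1), device=teacher_feats.device)
    for c in range(num_classes):
        mask = labels == c
        if mask.any():
            protos[c] = teacher_feats[mask].mean(dim=0)
    return F.normalize(protos, dim=1)

def prototype_alignment_loss(student_feats, labels, protos):
    """Cosine alignment between student object features and their class prototypes."""
    s = F.normalize(student_feats, dim=1)
    return (1 - (s * protos[labels]).sum(dim=1)).mean()
```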
Deep convolutional neural networks with larger numbers of parameters have improved the accuracy of object detection on natural images, where objects of interest are annotated with horizontal bounding boxes. On aerial images captured from a bird's-eye view, these improvements in model architecture and deeper convolutional layers can also improve performance on the oriented object detection task. However, it is difficult to directly apply such state-of-the-art object detectors on devices with limited computational resources, which calls for lightweight models obtained via model compression. To address this problem, we propose a model compression method for rotated object detection on aerial images via knowledge distillation, namely KD-RNet. Given a well-trained teacher oriented object detector with a large number of parameters, both the object category and location information are transferred to the compact student network of KD-RNet through a collaborative training strategy. The category information is transferred by knowledge distillation over the predicted probability distributions, and a soft regression loss is adopted to handle displacements when transferring the location information. Experimental results on a large-scale aerial object detection dataset (DOTA) show that the proposed KD-RNet model achieves improved mean average precision (mAP) with a reduced number of parameters; meanwhile, KD-RNet delivers performance gains in providing high-quality detections with higher overlap with ground-truth annotations.
One of the most efficient methods for model compression is hint distillation, where the student model is injected with information (hints) from several different layers of the teacher model. Although the selection of hint points can drastically alter the compression performance, conventional distillation approaches overlook this fact and use the same hint points as in the early studies. Therefore, we propose a clustering-based hint selection methodology, where the layers of the teacher model are clustered with respect to several metrics and the cluster centers are used as the hint points. Our method is applicable to any student network once it is applied to a chosen teacher network. The proposed approach is validated on the CIFAR-100 and ImageNet datasets, using various teacher-student pairs and numerous hint distillation methods. Our results show that hint points selected by our algorithm result in superior compression performance compared to state-of-the-art knowledge distillation algorithms on the same student models and datasets.
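The sketch below illustrates the hint-selection idea under the assumption that each teacher layer is summarized by a small vector of statistics (e.g., mean activation magnitude over a calibration batch); the metric choice and number of clusters are placeholders, not the paper's settings.

```python
import numpy as np
from sklearn.cluster import KMeans

def select_hint_layers(layer_stats: np.ndarray, n_hints: int = 4):
    """layer_stats: (num_layers, num_metrics) per-layer statistics.
    Returns the indices of the layers closest to each cluster center."""
    km = KMeans(n_clusters=n_hints, n_init=10, random_state=0).fit(layer_stats)
    hint_layers = []
    for c in range(n_hints):
        idx = np.where(km.labels_ == c)[0]
        dists = np.linalg.norm(layer_stats[idx] - km.cluster_centers_[c], axis=1)
        hint_layers.append(int(idx[dists.argmin()]))
    return sorted(hint_layers)
```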
In recent years, Siamese network based trackers have significantly advanced the state-of-the-art in real-time tracking. Despite their success, Siamese trackers tend to suffer from high memory costs, which restrict their applicability to mobile devices with tight memory budgets. To address this issue, we propose a distilled Siamese tracking framework to learn small, fast and accurate trackers (students), which capture critical knowledge from large Siamese trackers (teachers) by a teacher-students knowledge distillation model. This model is intuitively inspired by the one teacher vs. multiple students learning method typically employed in schools. In particular, our model contains a single teacher-student distillation module and a student-student knowledge sharing mechanism. The former is designed using a tracking-specific distillation strategy to transfer knowledge from a teacher to students. The latter is utilized for mutual learning between students to enable in-depth knowledge understanding. Extensive empirical evaluations on several popular Siamese trackers demonstrate the generality and effectiveness of our framework. Moreover, the results on five tracking benchmarks show that the proposed distilled trackers achieve compression rates of up to 18$\times$ and frame-rates of $265$ FPS, while achieving tracking accuracy comparable to their base models.
In this paper, we propose an accurate yet fast small object detection method for remote sensing images (RSI), named SuperYOLO, which fuses multimodal data and performs high-resolution (HR) object detection on small objects by utilizing assisted super-resolution (SR) learning, taking into account both detection accuracy and computational cost. First, we build a compact baseline by removing the Focus module to keep HR features and significantly overcome the missed detection of small objects. Second, we utilize pixel-level multimodal fusion (MF) to extract information from various data sources and facilitate more suitable and effective features for small objects in RSI. Furthermore, we design a simple and flexible SR branch to learn HR feature representations that can distinguish small objects from vast backgrounds with low-resolution (LR) input, further improving detection accuracy. Moreover, to avoid introducing additional computation, the SR branch is discarded in the inference stage, and the computation of the network model is reduced due to the LR input. Experimental results show that, on the widely used VEDAI RS dataset, SuperYOLO achieves an accuracy of 73.61% (in terms of mAP50), more than 10% higher than SOTA large models such as YOLOv5l, YOLOv5x, and the RS-designed YOLOrs. Meanwhile, the GFLOPs and parameter size of SuperYOLO are about 18.1x and 4.2x smaller than those of YOLOv5x. Our proposed model shows a favorable accuracy-speed trade-off compared with state-of-the-art models. The code will be open-sourced at https://github.com/icey-zhang/superyolo.
The great success of deep learning is mainly due to large-scale network architectures and high-quality training data. However, it remains challenging to deploy recent deep models on portable devices with limited memory and imaging capability. Some existing works compress models via knowledge distillation. Unfortunately, these methods cannot handle images with degraded quality, such as low-resolution (LR) images. To this end, we make a pioneering effort to distill useful knowledge learned by a heavy network model that processes high-resolution (HR) images into a compact network model that processes LR images, pushing current knowledge distillation techniques toward a novel pixel distillation. To achieve this goal, we propose a Teacher-Assistant-Student (TAS) framework that decomposes knowledge distillation into a model compression stage and a high-resolution representation transfer stage. Equipped with a novel Feature Super-Resolution (FSR) module, our approach learns a lightweight network model that achieves accuracy similar to the heavy teacher model but with fewer parameters, faster inference speed, and lower-resolution input. Comprehensive experiments on three widely used benchmarks, i.e., CUB-200-2011, PASCAL VOC 2007, and ImageNetSub, demonstrate the effectiveness of our method.
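A possible shape of the FSR step is sketched below, assuming the student's low-resolution feature map is upsampled and projected before being matched against the teacher's high-resolution feature map; the module layout and names are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FSRHead(nn.Module):
    """Project the student's LR feature map up to the teacher's HR resolution/channels."""
    def __init__(self, student_ch, teacher_ch, scale=2):
        super().__init__()
        self.scale = scale
        self.proj = nn.Conv2d(student_ch, teacher_ch, kernel_size=1)

    def forward(self, student_feat):
        up = F.interpolate(student_feat, scale_factor=self.scale,
                           mode="bilinear", align_corners=False)
        return self.proj(up)

def fsr_distill_loss(fsr_head, student_feat, teacher_feat):
    """MSE between the super-resolved student feature and the HR teacher feature."""
    return F.mse_loss(fsr_head(student_feat), teacher_feat.detach())
```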
Deep learning technologies have demonstrated remarkable effectiveness in a wide range of tasks, and deep learning holds the potential to advance a multitude of applications, including in edge computing, where deep models are deployed on edge devices to enable instant data processing and response. A key challenge is that while the application of deep models often incurs substantial memory and computational costs, edge devices typically offer only very limited storage and computational capabilities that may vary substantially across devices. These characteristics make it difficult to build deep learning solutions that unleash the potential of edge devices while complying with their constraints. A promising approach to addressing this challenge is to automate the design of effective deep learning models that are lightweight, require only a little storage, and incur only low computational overheads. This survey provides comprehensive coverage of the techniques for automated design of deep learning models for edge computing. It offers an overview and comparison of the key metrics commonly used to quantify the proficiency of models in terms of effectiveness, lightness, and computational cost. The survey then covers three categories of state-of-the-art techniques for automated deep-learning design: automated neural architecture search, automated model compression, and joint automated design and compression. Finally, the survey covers open issues and directions for future research.
Despite significant accuracy improvements in convolutional neural network (CNN) based object detectors, they often require prohibitive runtimes to process an image for real-time applications. State-of-the-art models often use very deep networks with a large number of floating point operations. Efforts such as model compression learn compact models with fewer parameters, but with much reduced accuracy. In this work, we propose a new framework to learn compact and fast object detection networks with improved accuracy using knowledge distillation [20] and hint learning [34]. Although knowledge distillation has demonstrated excellent improvements for simpler classification setups, the complexity of detection poses new challenges in the form of regression, region proposals, and less voluminous labels. We address this through several innovations such as a weighted cross-entropy loss to address class imbalance, a teacher bounded loss to handle the regression component, and adaptation layers to better learn from intermediate teacher distributions. We conduct comprehensive empirical evaluation with different distillation configurations over multiple datasets including PASCAL, KITTI, ILSVRC and MS-COCO. Our results show consistent improvement in accuracy-speed trade-offs for modern multi-class detection models.
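The teacher bounded regression idea can be sketched as below: the student's box regression is penalized only when its error is not better than the teacher's by a margin, so the teacher acts as an upper bound rather than a target to be copied exactly. The margin value and error metric are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def teacher_bounded_regression_loss(student_box, teacher_box, gt_box, m=0.05):
    """student_box/teacher_box/gt_box: (N, 4) box regression targets."""
    s_err = F.mse_loss(student_box, gt_box, reduction="none").sum(dim=1)
    t_err = F.mse_loss(teacher_box, gt_box, reduction="none").sum(dim=1)
    # Penalize only where the student is not already better than the teacher by margin m.
    gated = torch.where(s_err + m > t_err, s_err, torch.zeros_like(s_err))
    return gated.mean()
```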
Model binarization is an effective method of compressing neural networks and accelerating their inference. However, a significant performance gap still remains between 1-bit models and 32-bit models. Empirical studies show that binarization causes information loss in both forward and backward propagation. We propose a novel Distribution-sensitive Information Retention Network (DIR-Net) that retains information in forward and backward propagation by improving internal propagation and introducing external representations. DIR-Net mainly relies on three technical contributions: (1) Information Maximized Binarization (IMB): minimizing the information loss and the binarization error of weights/activations simultaneously through weight balance and standardization; (2) Distribution-sensitive Two-stage Estimator (DTE): retaining the information of gradients by a distribution-sensitive soft approximation that jointly considers the updating capability and accurate gradients; (3) Representative Binarization-aware Distillation (RBD): retaining representation information by distilling the representations between the full-precision and binarized networks. DIR-Net studies both the forward and backward processes of BNNs from a unified information perspective, providing new insight into the mechanism of network binarization. The three techniques in our DIR-Net are versatile and effective and can be applied in various structures to improve BNNs. Comprehensive experiments on image classification and object detection tasks show that our DIR-Net consistently outperforms state-of-the-art binarization methods under mainstream and compact architectures such as ResNet, VGG, EfficientNet, DARTS, and MobileNet. Additionally, we deploy DIR-Net on real-world resource-limited devices, achieving an 11.1x storage saving and a 5.4x speedup.
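A hedged sketch of the IMB-style step follows: weights are balanced (zero-mean) and standardized before sign binarization, with a straight-through estimator for gradients. DIR-Net's exact balancing and scaling rules may differ from this simplification.

```python
import torch

def binarize_weights(w: torch.Tensor) -> torch.Tensor:
    """Balance and standardize weights, then binarize with a per-tensor scale."""
    w_std = (w - w.mean()) / (w.std() + 1e-8)   # balance (zero-mean) and standardize
    scale = w_std.abs().mean()                  # per-tensor scaling factor
    w_bin = torch.sign(w_std) * scale           # values in {-scale, +scale}
    return w_std + (w_bin - w_std).detach()     # STE: binary forward, full-precision gradient
```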
In recent years, large-scale deep models have achieved great success, but the enormous computational complexity and massive storage requirements make it a great challenge to deploy them on resource-constrained devices. As a model compression and acceleration method, knowledge distillation effectively improves the performance of small models by transferring dark knowledge from a teacher detector. However, most distillation-based detection methods mainly imitate features near the bounding boxes, which suffers from two limitations. First, they ignore beneficial features outside the bounding boxes. Second, these methods imitate some features that are mistakenly regarded as background by the teacher detector. To address the above issues, we propose a novel Feature-Richness Score (FRS) method that selects important features which improve generalized detectability during distillation. The proposed method effectively retrieves important features outside the bounding boxes and removes detrimental features inside the bounding boxes. Extensive experiments show that our method achieves excellent performance on both anchor-based and anchor-free detectors. For example, RetinaNet with ResNet-50 achieves 39.7% mAP on the COCO2017 dataset, even surpassing the ResNet-101-based teacher detector (38.9%) by 0.8%.
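One plausible reading of the feature-richness weighting is sketched below, assuming the score at each spatial location is the teacher head's maximum class probability; the paper's exact score definition and its aggregation over FPN levels may differ from this simplification.

```python
import torch

def frs_weighted_imitation(student_feat, teacher_feat, teacher_cls_logits):
    """student/teacher_feat: (B, C, H, W); teacher_cls_logits: (B, num_classes, H, W)."""
    with torch.no_grad():
        # Feature-richness score: the teacher's highest class confidence at each location.
        richness = teacher_cls_logits.sigmoid().max(dim=1, keepdim=True).values  # (B, 1, H, W)
    per_pixel = (student_feat - teacher_feat.detach()).pow(2).mean(dim=1, keepdim=True)
    return (richness * per_pixel).sum() / richness.sum().clamp(min=1e-6)
```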
Real-time object detection for unmanned aerial vehicles (UAVs) is a challenging problem because edge GPU devices, as Internet of Things (IoT) nodes, have limited computational resources. To address this problem, in this paper we propose a novel lightweight deep learning architecture based on the YOLOX model for real-time object detection on edge GPUs. First, we design an efficient and lightweight PixSF head to replace the original head of YOLOX and better detect small objects; it can further be embedded with depthwise separable convolution (DS Conv) to achieve an even lighter head. Then, a slimmer structure is developed for the neck layer to reduce network parameters, which is a trade-off between accuracy and speed. Furthermore, we embed an attention module in the head layer to improve the feature extraction of the prediction head. Meanwhile, we also improve the label assignment strategy and loss function to alleviate the category imbalance and box optimization problems of the UAV dataset. Finally, an auxiliary head is proposed for online distillation to improve the position-embedding and feature-extraction ability of the PixSF head. The performance of our lightweight models is experimentally validated on the NVIDIA Jetson NX and Jetson Nano GPU embedded platforms. Extensive experiments show that, compared with current models, the FasterX models achieve a better trade-off between accuracy and latency on the VisDrone2021 dataset.
With the improvement of AI chips (e.g., GPUs, TPUs, and NPUs) and the rapid development of the Internet of Things (IoT), some powerful deep neural networks (DNNs) usually consist of millions or even hundreds of millions of parameters and may be unsuitable for direct deployment on low-compute, low-capacity units (e.g., edge devices). Recently, knowledge distillation (KD) has been recognized as one of the effective methods of model compression for reducing model parameters. The main concept of KD is to extract useful information from the feature maps of a large model (i.e., the teacher model) as a reference to successfully train a small model (i.e., the student model), whose size is much smaller than that of the teacher. Although many KD-based methods have been proposed to utilize the information in the feature maps of intermediate layers of the teacher model, most of them do not consider the similarity of the feature maps between the teacher model and the student model, which may cause the student model to learn useless information. Inspired by the attention mechanism, we propose a novel KD method called Representative Teacher Keys (RTK) that not only considers the similarity of feature maps but also filters out useless information to improve the performance of the target student model. In the experiments, we validate our proposed method with multiple backbone networks (e.g., ResNet and WideResNet) and datasets (e.g., CIFAR10, CIFAR100, SVHN, and CINIC10). The results show that our proposed RTK can effectively improve the classification accuracy of attention-based KD methods.
Although considerable progress has been obtained in neural network quantization for efficient inference, existing methods are not scalable to heterogeneous devices as one dedicated model needs to be trained, transmitted, and stored for one specific hardware setting, incurring considerable costs in model training and maintenance. In this paper, we study a new vertical-layered representation of neural network weights for encapsulating all quantized models into a single one. With this representation, we can theoretically achieve any precision network for on-demand service while only needing to train and maintain one model. To this end, we propose a simple once quantization-aware training (QAT) scheme for obtaining high-performance vertical-layered models. Our design incorporates a cascade downsampling mechanism which allows us to obtain multiple quantized networks from one full precision source model by progressively mapping the higher precision weights to their adjacent lower precision counterparts. Then, with networks of different bit-widths from one source model, multi-objective optimization is employed to train the shared source model weights such that they can be updated simultaneously, considering the performance of all networks. By doing this, the shared weights will be optimized to balance the performance of different quantized models, thus making the weights transferable among different bit widths. Experiments show that the proposed vertical-layered representation and developed once QAT scheme are effective in embodying multiple quantized networks into a single one, allow one-time training, and deliver performance comparable to that of quantized models tailored to any specific bit-width. Code will be available.
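A toy sketch of cascade downsampling is given below, assuming 8-bit integer source weights and a plain bit-shift mapping to adjacent lower bit widths; the actual mapping and rounding rules in the paper may differ.

```python
import torch

def cascade_downsample(w_int8: torch.Tensor, bit_widths=(8, 6, 4, 2)):
    """w_int8: integer weights in [-128, 127]. Returns a dict of lower-bit versions
    obtained by progressively dropping least significant bits, so every bit width
    is derived from (and shares) the single stored source model."""
    versions = {bit_widths[0]: w_int8.clone()}
    current = w_int8.clone()
    for hi, lo in zip(bit_widths[:-1], bit_widths[1:]):
        shift = hi - lo
        current = torch.div(current, 2 ** shift, rounding_mode="floor")
        versions[lo] = current.clone()
    return versions
```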
Knowledge distillation has been applied to image classification successfully. However, object detection is much more sophisticated, and most knowledge distillation methods have failed on it. In this paper, we point out that in object detection, the features of the teacher and the student vary across different areas, especially between the foreground and the background. If we distill them equally, the uneven differences between the feature maps will negatively affect the distillation. Therefore, we propose Focal and Global Distillation (FGD). Focal distillation separates the foreground and background, forcing the student to focus on the teacher's critical pixels and channels. Global distillation rebuilds the relations between different pixels and transfers them from the teacher to the student, compensating for the global information missing in focal distillation. As our method only needs to calculate the loss on the feature maps, FGD can be applied to various detectors. We experiment on various detectors with different backbones, and the results show that the student detectors achieve excellent mAP improvements. For example, ResNet-50-based RetinaNet, Faster R-CNN, RepPoints, and Mask R-CNN achieve 40.7%, 42.0%, 42.0%, and 42.1% mAP on COCO2017, which are 3.3, 3.6, 3.4, and 2.9 points higher than their baselines, respectively. Our code is available at https://github.com/yzd-v/FGD.
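A stripped-down sketch of the focal part follows: a binary foreground mask built from ground-truth boxes splits the feature-imitation loss into foreground and background terms. The attention weighting of the full method is omitted, and the weighting constants below are assumptions.

```python
import torch

def focal_feature_loss(student_feat, teacher_feat, gt_boxes, stride,
                       alpha_fg=1.0, alpha_bg=0.5):
    """student/teacher_feat: (B, C, H, W); gt_boxes: list of (N_i, 4) xyxy boxes in pixels."""
    B, _, H, W = student_feat.shape
    mask = torch.zeros(B, 1, H, W, device=student_feat.device)
    for b, boxes in enumerate(gt_boxes):
        for x1, y1, x2, y2 in boxes:   # mark feature cells covered by ground-truth boxes
            mask[b, 0, int(y1 // stride):int(y2 // stride) + 1,
                       int(x1 // stride):int(x2 // stride) + 1] = 1.0
    diff = (student_feat - teacher_feat.detach()).pow(2)
    fg = (diff * mask).sum() / mask.sum().clamp(min=1.0)
    bg = (diff * (1 - mask)).sum() / (1 - mask).sum().clamp(min=1.0)
    return alpha_fg * fg + alpha_bg * bg
```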
Large pre-trained transformers are at the top of modern semantic segmentation benchmarks but come with high computational cost and lengthy training. To lift this constraint, we look at efficient semantic segmentation from the perspective of comprehensive knowledge distillation and consider bridging the gap between multi-source knowledge extraction and transformer-specific patch embeddings. We propose the Transformer-based Knowledge Distillation (TransKD) framework, which learns compact student transformers by distilling both the feature maps and patch embeddings of large teacher transformers, bypassing the long pre-training process and reducing FLOPs by more than 85.0%. Specifically, we propose two fundamental and two optimization modules: (1) Cross Selective Fusion (CSF) enables knowledge transfer between cross-stage features via channel attention and feature map distillation within hierarchical transformers; (2) Patch Embedding Alignment (PEA) performs dimensional transformation within the patchifying process to facilitate patch embedding distillation; (3) Global-Local Context Mixer (GL-Mixer) extracts both global and local information of a representative embedding; (4) Embedding Assistant (EA) acts as an embedding method to seamlessly bridge the teacher and student models with the teacher's number of channels. Experiments on the Cityscapes, ACDC, and NYUv2 datasets show that TransKD outperforms state-of-the-art distillation frameworks and rivals the time-consuming pre-training method. Code is available at https://github.com/ruipingl/transkd.
Knowledge distillation (KD) has shown very promising capabilities in transferring learned representations from large models (teachers) to small models (students). However, as the capacity gap between students and teachers becomes larger, existing KD methods fail to achieve better results. Our work shows that "prior knowledge" is vital for KD, especially when large teachers are applied. In particular, we propose Dynamic Prior Knowledge (DPK), which uses part of the teacher's features as prior knowledge before feature distillation. This means our method also regards the teacher's features as "input", not just the "target". Moreover, we dynamically adjust the ratio of prior knowledge during the training phase according to the feature gap, thereby guiding the student at an appropriate level of difficulty. To evaluate the proposed method, we conduct extensive experiments on two image classification benchmarks (i.e., CIFAR-100 and ImageNet) and one object detection benchmark (i.e., MS COCO). The results show that our method has advantages in performance under different settings. More importantly, our DPK makes the performance of the student model positively correlated with that of the teacher model, which means we can further improve the student's accuracy by applying larger teachers. Our code will be publicly available for reproducibility.
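A hedged sketch of the prior-knowledge idea: a random fraction of teacher feature locations is handed to the student as input, and the distillation loss is effectively computed only where the student must predict on its own. The masking granularity, the ratio schedule, and the fusion step are assumptions for illustration, not the paper's exact design.

```python
import torch

def dpk_feature_loss(student_feat, teacher_feat, prior_ratio=0.3):
    """student/teacher_feat: (B, C, H, W); prior_ratio would be adjusted with the feature gap."""
    B, C, H, W = teacher_feat.shape
    prior_mask = (torch.rand(B, 1, H, W, device=teacher_feat.device) < prior_ratio).float()
    # Prior locations: the student receives the teacher's values as input (no loss there).
    student_with_prior = prior_mask * teacher_feat.detach() + (1 - prior_mask) * student_feat
    # Remaining locations: the student must reconstruct the teacher on its own.
    diff = (student_with_prior - teacher_feat.detach()).pow(2)
    return diff.sum() / ((1 - prior_mask).sum() * C).clamp(min=1e-6)
```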
In the context of research on multimodal knowledge distillation, existing methods mainly focus on having the student learn only the teacher's final output. Thus, a deep gap remains between the teacher network and the student network. It is necessary to force the student network to learn the modality relation information of the teacher network. To effectively exploit the knowledge transferred from teacher to student, a new modality relation distillation paradigm is adopted by modeling the relation information between different modalities, i.e., learning the teacher's modality-level Gram matrix.
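A minimal sketch of a modality-level Gram matrix loss follows, assuming each sample provides M modality embeddings of dimension D; the normalization and loss form are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def modality_gram(features):
    """features: (B, M, D) — M modality embeddings per sample; returns (B, M, M)."""
    f = F.normalize(features, dim=-1)
    return torch.bmm(f, f.transpose(1, 2))   # pairwise inter-modality relations

def relation_distill_loss(student_feats, teacher_feats):
    """Match the student's modality-relation Gram matrix to the teacher's."""
    return F.mse_loss(modality_gram(student_feats),
                      modality_gram(teacher_feats).detach())
```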
For years, the YOLO series has been the de facto industry-level standard for efficient object detection. The YOLO community has prospered overwhelmingly, enriching its use across numerous hardware platforms and abundant scenarios. In this technical report, we strive to push its limits to a new level, stepping forward with an unwavering mindset for industrial application. Considering the diverse requirements for speed and accuracy in real environments, we extensively examine up-to-date object detection advancements from industry and academia. Specifically, we heavily assimilate ideas from recent network design, training strategies, testing techniques, quantization, and optimization methods. On top of this, we integrate our thoughts and practices to build a suite of deployment-ready networks at various scales to accommodate diversified use cases. With the generous permission of the YOLO authors, we name it YOLOv6. We also warmly welcome users and contributors for further enhancement. For a glimpse of performance, our YOLOv6-N hits 35.9% AP on the COCO dataset at a throughput of 1234 FPS on an NVIDIA Tesla T4 GPU. YOLOv6-S strikes 43.5% AP at 495 FPS, outperforming other mainstream detectors at the same scale (YOLOv5-S, YOLOX-S, and PPYOLOE-S). Our quantized version of YOLOv6-S even brings 43.3% AP at 869 FPS. Furthermore, YOLOv6-M/L also achieve better accuracy (i.e., 49.5%/52.3%) than other detectors with similar inference speed. We carefully conducted experiments to validate the effectiveness of every component. Our code is available at https://github.com/meituan/yolov6.