Convolutional Neural Networks (CNNs) have been used in various fields and have demonstrated excellent performance, especially in Single-Image Super-Resolution (SISR). However, recent CNN-based SISR models require numerous parameters and heavy computation to obtain better performance. As one way to make such networks efficient, Knowledge Distillation (KD), which optimizes the performance trade-off by adding a loss term to an existing network architecture, is being actively studied. KD for SISR is mainly formulated as feature distillation (FD), which minimizes the L1-distance loss between the feature maps of the teacher and student networks, but this does not fully take into account the amount and importance of information the student can accept. In this paper, we propose a feature-based adaptive contrastive distillation (FACD) method for efficiently training lightweight SISR networks. We show the limitations of existing FD with an L1-distance loss and propose a feature-based contrastive loss that maximizes the mutual information between the feature maps of the teacher and student networks. Experimental results show that the proposed FACD improves not only PSNR across all benchmark datasets and scales but also subjective image quality compared with the conventional FD approach.
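As a rough PyTorch sketch of the distinction this abstract draws, the following contrasts plain L1 feature distillation with an InfoNCE-style feature contrastive loss; the exact FACD formulation, layer choices, and temperature are assumptions, not the paper's specification:

```python
import torch
import torch.nn.functional as F

def fd_l1_loss(f_s, f_t):
    """Conventional feature distillation: L1 distance between student
    and teacher feature maps of the same shape (B, C, H, W)."""
    return F.l1_loss(f_s, f_t.detach())

def feature_contrastive_loss(f_s, f_t, tau=0.1):
    """InfoNCE-style loss over spatial locations: each student feature
    vector should match the teacher vector at the same position (positive)
    and repel teacher vectors at other positions (negatives). Minimizing
    it maximizes a lower bound on teacher-student mutual information."""
    b, c, h, w = f_s.shape
    s = F.normalize(f_s.flatten(2).permute(0, 2, 1), dim=-1)           # (B, HW, C)
    t = F.normalize(f_t.detach().flatten(2).permute(0, 2, 1), dim=-1)  # (B, HW, C)
    logits = torch.bmm(s, t.transpose(1, 2)) / tau                     # (B, HW, HW)
    labels = torch.arange(h * w, device=f_s.device).expand(b, -1)
    return F.cross_entropy(logits.reshape(b * h * w, h * w),
                           labels.reshape(-1))
```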
Recent improvements in convolutional neural network (CNN) based single-image super-resolution (SISR) methods rely heavily on designing network architectures rather than on finding suitable training algorithms beyond simply minimizing a regression loss. Adapting knowledge distillation (KD) can open a path to further improvements in SISR, and it is also beneficial in terms of model efficiency. KD is a model compression method that improves the performance of deep neural networks (DNNs) without using additional parameters at test time, and it has recently drawn growing attention for providing a better capacity-performance trade-off. In this paper, we propose a novel feature distillation (FD) method suitable for SISR. We show the limitations of FitNet-based FD, which suffers in the SISR task, and propose modifying the existing FD algorithm to focus on local feature information. In addition, we propose a soft feature attention method based on the teacher-student difference that selectively focuses on specific pixel locations to extract feature information. We call our method local-selective feature distillation (LSFD) and verify that it outperforms conventional FD methods on SISR problems.
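A minimal sketch of the difference-based soft attention idea, assuming the attention is a softmax over the teacher-student feature discrepancy; this is a plausible reading of the mechanism, not LSFD's exact definition:

```python
import torch

def lsfd_like_loss(f_s, f_t, temperature=0.5):
    """Soft feature attention from the teacher-student difference:
    pixel locations where the student deviates most from the teacher
    receive higher weight in the distillation loss. Hypothetical
    formulation; the paper's exact attention definition may differ."""
    diff = (f_t.detach() - f_s).abs().mean(dim=1, keepdim=True)  # (B, 1, H, W)
    b, _, h, w = diff.shape
    attn = torch.softmax(diff.flatten(2) / temperature, dim=-1).view(b, 1, h, w)
    # Weighted L1 over feature maps, emphasizing the selected locations.
    return (attn * (f_t.detach() - f_s).abs()).sum(dim=(2, 3)).mean()
```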
Knowledge distillation (KD) can effectively transfer knowledge from a cumbersome network (teacher) to a compact one (student), and has demonstrated its advantages in several computer vision applications. The representation of knowledge is vital for knowledge transfer and student learning; it is usually defined in a hand-crafted manner or taken directly from intermediate features. In this paper, we propose a model-agnostic meta knowledge distillation method under the teacher-student architecture for the single-image super-resolution task. It provides a more flexible and accurate way to help the teacher transmit knowledge according to the ability of students, via a knowledge representation network (KRNet) with learnable parameters. To improve the perception ability of the knowledge representation with respect to the student's needs, we model the transformation from intermediate outputs to transferred knowledge by employing the student's features and the teacher-student correlation within KRNet. Specifically, texture-aware dynamic kernels are generated and then used to extract the texture features to be improved, and the corresponding teacher guidance is decomposed into texture supervision to further promote the recovery quality of high-frequency details. In addition, KRNet is optimized in a meta-learning manner to ensure that the knowledge transfer and the student learning benefit the student's reconstruction quality. Experiments on various single-image super-resolution datasets demonstrate that our proposed method outperforms existing distillation methods with pre-defined knowledge representations, and can help super-resolution algorithms achieve better reconstruction quality without introducing any inference complexity.
The great success of deep learning is mainly due to large-scale network architectures and high-quality training data. However, it remains challenging to deploy recent deep models on portable devices with limited memory and imaging ability. Some existing works compress models through knowledge distillation. Unfortunately, these methods cannot handle images with degraded quality, such as low-resolution (LR) images. To this end, we make a pioneering effort to distill helpful knowledge from a heavy network model trained on high-resolution (HR) images into a compact network model that handles LR images, advancing current knowledge distillation techniques with a novel pixel distillation. To achieve this goal, we propose a Teacher-Assistant-Student (TAS) framework, which disentangles knowledge distillation into a model compression stage and a high-resolution representation transfer stage. Equipped with a novel Feature Super-Resolution (FSR) module, our approach learns a lightweight network model that achieves accuracy similar to the heavy teacher model with fewer parameters, faster inference speed, and lower-resolution inputs. Comprehensive experiments on three widely used benchmarks, i.e., CUB-200-2011, PASCAL VOC 2007, and ImageNetSub, demonstrate the effectiveness of our method.
Reducing the radiation exposure of patients in whole-body CT scans has attracted extensive attention in the medical imaging community, since a low radiation dose may result in increased noise and artifacts that greatly affect clinical diagnosis. To obtain high-quality whole-body low-dose CT (LDCT) images, previous deep-learning-based studies have introduced various network architectures. However, most of these methods adopt only normal-dose CT (NDCT) images as ground truth to guide the training of the denoising network. This simple restriction makes the model less effective and causes the reconstructed images to suffer from over-smoothing. In this paper, we propose a novel intra-task knowledge transfer method that leverages knowledge distilled from NDCT images to aid the training process on LDCT images. The derived architecture, referred to as the Teacher-Student Consistency Network (TSC-Net), consists of a teacher network and a student network with identical architectures. Through supervision between intermediate features, the student network is encouraged to imitate the teacher network and thereby gain rich texture details. Moreover, to further exploit the information contained in CT scans, a contrastive regularization mechanism (CRM) built upon contrastive learning is introduced. The CRM pulls the restored CT images closer to the NDCT samples and pushes them away from the LDCT samples in the latent space. In addition, based on attention and deformable convolution mechanisms, we design a dynamic enhancement module (DEM) to improve the transformation capability of the network.
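A minimal sketch of such a contrastive regularizer, assuming an L1-ratio form in a frozen feature space; the paper's actual embedding network and weighting are not specified in the abstract:

```python
import torch.nn.functional as F

def contrastive_regularization(feat, restored, ndct, ldct, eps=1e-7):
    """Pull the restored CT toward the NDCT anchor and push it away from
    the LDCT input in a latent feature space. `feat` can be any frozen
    feature extractor (an assumption; the paper's embedding may differ)."""
    a, p, n = feat(restored), feat(ndct).detach(), feat(ldct).detach()
    return F.l1_loss(a, p) / (F.l1_loss(a, n) + eps)
```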
Blind image super-resolution (Blind-SR) aims to recover a high-resolution (HR) image from its corresponding low-resolution (LR) input image with unknown degradations. Most existing works design an explicit degradation estimator for each degradation to guide SR. However, it is infeasible to provide concrete labels for the many combinations of degradations (e.g., blur, noise, JPEG compression) needed to supervise the degradation estimator's training. In addition, these special designs for certain degradations, such as blur, impede the models from generalizing to handle different degradations. To this end, it is necessary to design an implicit degradation estimator that can extract discriminative degradation representations for all degradations without relying on supervision from degradation ground truth. In this paper, we propose a Knowledge Distillation based Blind-SR network (KDSR). It consists of a knowledge distillation based implicit degradation estimator network (KD-IDE) and an efficient SR network. To learn the KDSR model, we first train a teacher network, KD-IDE$_{T}$, which takes paired HR and LR patches as inputs and is optimized jointly with the SR network. Then, we further train a student network, KD-IDE$_{S}$, which takes only LR images as input and learns to extract the same implicit degradation representation (IDR) as KD-IDE$_{T}$. In addition, to fully use the extracted IDR, we design a simple, strong, and efficient IDR-based dynamic convolution residual block (IDR-DCRB) to build the SR network. We conduct extensive experiments under classic and real-world degradation settings. The results show that KDSR achieves SOTA performance and can generalize to various degradation processes. The source code and pre-trained models will be released.
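A sketch of how an IDR vector might drive a dynamic convolution, assuming a soft mixture over K candidate kernels; the real IDR-DCRB structure (residual path, activations, number of kernels) is not detailed in the abstract:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class IDRDynamicConv(nn.Module):
    """Dynamic convolution conditioned on an implicit degradation
    representation (IDR): the IDR vector selects a soft mixture of K
    candidate kernels per sample. A sketch of the idea behind IDR-DCRB;
    the block's actual structure is an assumption."""
    def __init__(self, channels, idr_dim, k=4, kernel_size=3):
        super().__init__()
        self.k, self.pad = k, kernel_size // 2
        self.weight = nn.Parameter(
            torch.randn(k, channels, channels, kernel_size, kernel_size) * 0.02)
        self.route = nn.Linear(idr_dim, k)  # IDR -> mixture logits

    def forward(self, x, idr):
        alpha = torch.softmax(self.route(idr), dim=-1)       # (B, K)
        # Per-sample aggregated kernel: (B, C_out, C_in, kh, kw)
        w = torch.einsum('bk,koihw->boihw', alpha, self.weight)
        b, c, h, wd = x.shape
        # Grouped-conv trick applies a different kernel to each sample.
        out = F.conv2d(x.reshape(1, b * c, h, wd),
                       w.reshape(-1, c, *w.shape[-2:]),
                       padding=self.pad, groups=b)
        return out.view(b, -1, h, wd) + x                    # residual connection
```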
Real-world image super-resolution (RISR) has received increased focus for improving the quality of SR images under unknown complex degradation. Existing methods rely on heavy SR models to enhance low-resolution (LR) images of different degradation levels, which significantly restricts their practical deployment on resource-limited devices. In this paper, we propose a novel Dynamic Channel Splitting scheme for efficient Real-world Image Super-Resolution, termed DCS-RISR. Specifically, we first introduce a light degradation prediction network to regress the degradation vector that simulates the real-world degradation, upon which the channel splitting vector is generated as the input for an efficient SR model. Then, a learnable octave convolution block is proposed to adaptively decide the channel splitting scale for low- and high-frequency features at each block, reducing computation overhead and memory cost by assigning a large scale to low-frequency features and a small scale to high-frequency ones. To further improve RISR performance, non-local regularization is employed to supplement the knowledge of patches from the LR and HR subspaces with computation-free inference. Extensive experiments demonstrate the effectiveness of DCS-RISR on different benchmark datasets. Our DCS-RISR not only achieves the best trade-off between computation/parameters and PSNR/SSIM metrics but also effectively handles real-world images with different degradation levels.
Contrastive learning has achieved remarkable success on various high-level tasks, but fewer methods have been proposed for low-level tasks. It is non-trivial to directly adopt vanilla contrastive learning techniques designed for high-level tasks to low-level vision tasks, since the acquired global visual representations are insufficient for low-level tasks that require rich texture and context information. In this paper, we propose a novel contrastive learning framework for single-image super-resolution (SISR). We investigate contrastive-learning-based SISR from two perspectives: sample construction and feature embedding. Existing methods adopt some naive sample construction approaches (e.g., treating the low-quality input as the negative sample and the ground truth as the positive sample) and use a prior model (e.g., a pre-trained VGG model) for the feature embedding rather than exploring a task-friendly one. To this end, we propose a practical contrastive learning framework for SISR that generates many informative positive and negative samples in frequency space. Instead of utilizing an additional pre-trained network, we design a simple but effective embedding network inherited from the discriminator network, which can be iteratively optimized with the primary SR network, making it task-friendly. Finally, we conduct an extensive experimental evaluation of our method, showing significant gains of up to 0.21 dB over current state-of-the-art SISR methods on benchmark datasets.
One of the most efficient methods for model compression is hint distillation, where the student model is injected with information (hints) from several different layers of the teacher model. Although the selection of hint points can drastically alter the compression performance, conventional distillation approaches overlook this fact and use the same hint points as in early studies. Therefore, we propose a clustering-based hint selection methodology, where the layers of the teacher model are clustered with respect to several metrics and the cluster centers are used as hint points. Our method is applicable to any student network once it has been applied to a chosen teacher network. The proposed approach is validated on the CIFAR-100 and ImageNet datasets, using various teacher-student pairs and numerous hint distillation methods. Our results show that the hint points selected by our algorithm result in superior compression performance compared to state-of-the-art knowledge distillation algorithms on the same student models and datasets.
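A minimal sketch of the clustering idea, assuming per-layer metric vectors have already been computed over a probe set (the paper's specific metrics are not given here) and using scikit-learn's KMeans:

```python
import numpy as np
from sklearn.cluster import KMeans

def select_hint_layers(layer_metrics, n_hints=4):
    """Cluster teacher layers by their metric vectors and return, for
    each cluster, the index of the layer closest to the cluster center
    as a hint point. The choice of metrics is an assumption."""
    x = np.asarray(layer_metrics)                 # (num_layers, num_metrics)
    km = KMeans(n_clusters=n_hints, n_init=10, random_state=0).fit(x)
    hints = []
    for c in km.cluster_centers_:
        hints.append(int(np.argmin(np.linalg.norm(x - c, axis=1))))
    return sorted(set(hints))
```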
In this work, we present Mutual Information Maximization Knowledge Distillation (MIMKD). Our method uses a contrastive objective to simultaneously estimate and maximize a lower bound on the mutual information between local and global feature representations of the teacher and student networks. We demonstrate through extensive experiments that this improves the performance of low-capacity models by transferring knowledge from more performant but computationally expensive models, which can be used to produce better models that run on devices with limited computational resources. Our method is flexible: we can distill teachers with arbitrary network architectures into arbitrary student networks. Our empirical results show that MIMKD outperforms competing approaches across a wide range of student-teacher pairs with different architectures, including students of very low capacity. We are able to raise ShuffleNetV2 to 74.55% accuracy over its baseline by distilling knowledge from ResNet-50. On ImageNet, we improve a ResNet-18 network from 68.88% to 70.32% accuracy (+1.44%) using a ResNet-34 teacher network.
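A minimal sketch of the contrastive mutual-information lower bound for globally pooled features; the projection heads and temperature below are assumptions rather than MIMKD's exact setup:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GlobalInfoNCE(nn.Module):
    """Contrastive lower bound on teacher-student mutual information for
    globally pooled features: matching (student_i, teacher_i) pairs are
    positives, all other pairs in the batch are negatives."""
    def __init__(self, s_dim, t_dim, proj_dim=128, tau=0.07):
        super().__init__()
        self.proj_s = nn.Linear(s_dim, proj_dim)
        self.proj_t = nn.Linear(t_dim, proj_dim)
        self.tau = tau

    def forward(self, g_s, g_t):
        zs = F.normalize(self.proj_s(g_s), dim=-1)           # (B, D)
        zt = F.normalize(self.proj_t(g_t.detach()), dim=-1)  # (B, D)
        logits = zs @ zt.t() / self.tau                      # (B, B)
        labels = torch.arange(g_s.size(0), device=g_s.device)
        return F.cross_entropy(logits, labels)
```

The projection heads let teacher and student features of different dimensions meet in a shared embedding space, which is what makes arbitrary teacher-student architecture pairs possible.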
Knowledge distillation is a widely applicable technique for training a student neural network under the guidance of a trained teacher network. For example, in neural network compression, a high-capacity teacher is distilled to train a compact student; in privileged learning, a teacher trained with privileged data is distilled to train a student without access to that data. The distillation loss determines how a teacher's knowledge is captured and transferred to the student. In this paper, we propose a new form of knowledge distillation loss that is inspired by the observation that semantically similar inputs tend to elicit similar activation patterns in a trained network. Similarity-preserving knowledge distillation guides the training of a student network such that input pairs that produce similar (dissimilar) activations in the teacher network produce similar (dissimilar) activations in the student network. In contrast to previous distillation methods, the student is not required to mimic the representation space of the teacher, but rather to preserve the pairwise similarities in its own representation space. Experiments on three public datasets demonstrate the potential of our approach.
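Since the abstract fully specifies the mechanism, the loss can be sketched compactly: the batch-wise activation similarity matrices of student and teacher are matched, following the standard similarity-preserving formulation:

```python
import torch.nn.functional as F

def sp_loss(f_s, f_t):
    """Similarity-preserving KD loss: match the (B, B) batch-wise
    activation similarity matrices of student and teacher.
    f_s, f_t: feature maps of shape (B, C, H, W); channel counts and
    spatial sizes may differ, since only B x B Gram matrices are compared."""
    b = f_s.size(0)
    g_s = F.normalize(f_s.view(b, -1) @ f_s.view(b, -1).t(), p=2, dim=1)
    g_t = F.normalize(f_t.view(b, -1) @ f_t.view(b, -1).t(), p=2, dim=1)
    return ((g_s - g_t.detach()) ** 2).sum() / (b * b)
```

Note how this realizes the abstract's point: the student never regresses the teacher's features directly, only the pairwise relations between samples in its own representation space.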
Most CNN-based super-resolution (SR) methods assume that the degradation is known (e.g., bicubic). These methods suffer severe performance drops when the degradation differs from the assumption. Therefore, some methods attempt to train SR networks with complex combinations of multiple degradations to cover the real degradation space. To adapt to multiple unknown degradations, introducing an explicit degradation estimator can indeed facilitate SR performance. However, previous explicit degradation estimation methods usually predict a Gaussian blur kernel under supervision from the ground-truth blur kernel, and estimation errors may lead to SR failure. Thus, it is necessary to design a method that can extract an implicit, discriminative degradation representation. To this end, we propose a Meta-Learning based Region Degradation Aware SR Network (MRDA), consisting of a Meta-Learning Network (MLN), a Degradation Extraction Network (DEN), and a Region Degradation Aware SR Network (RDAN). To handle the lack of ground-truth degradations, we use the MLN to rapidly adapt to a specific complex degradation after several iterations and to extract implicit degradation information. Subsequently, a teacher network MRDA$_{T}$ is designed to further utilize the degradation information extracted by the MLN for SR. However, the MLN requires iterating over paired low-resolution (LR) and corresponding high-resolution (HR) images, which are unavailable at the inference stage. Therefore, we adopt knowledge distillation (KD) so that the student network learns to directly extract the same implicit degradation representation (IDR) as the teacher from LR images alone.
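A minimal sketch of the distillation step described at the end of the abstract, assuming a simple L1 match between student and teacher degradation representations (the actual objective and extractor interfaces are assumptions):

```python
import torch.nn.functional as F

def idr_distill_loss(den_s, den_t, lr, hr):
    """The student degradation extractor sees only the LR image at
    inference, so it is trained to regress the teacher's implicit
    degradation representation obtained from paired LR/HR images."""
    idr_t = den_t(lr, hr).detach()   # teacher IDR from paired LR/HR
    idr_s = den_s(lr)                # student IDR from LR only
    return F.l1_loss(idr_s, idr_t)
```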
Teacher-free online knowledge distillation (KD) aims to collaboratively train an ensemble of multiple student models that distill knowledge from each other. Although existing online KD methods achieve desirable performance, they often focus on class probabilities as the core knowledge type, ignoring valuable feature representational information. We present a Mutual Contrastive Learning (MCL) framework for online KD. The core idea of MCL is to perform mutual interaction and transfer of contrastive distributions among a cohort of networks in an online manner. Our MCL can aggregate cross-network embedding information and maximize the lower bound of the mutual information between two networks. This enables each network to learn extra contrastive knowledge from the others, leading to better feature representations and thus improving performance on visual recognition tasks. Beyond the final layer, we extend MCL to several intermediate layers assisted by auxiliary feature refinement modules, which further enhances the representational power of online KD. Experiments on image classification and transfer learning to visual recognition tasks show that MCL yields consistent performance gains over state-of-the-art online KD approaches. This superiority demonstrates that MCL can guide the networks to generate better feature representations. Our code is publicly available at https://github.com/winycg/MCL.
Reference-based Super-Resolution (Ref-SR) has recently emerged as a promising paradigm to enhance a low-resolution (LR) input image or video by introducing an additional high-resolution (HR) reference image. Existing Ref-SR methods mostly rely on implicit correspondence matching to borrow HR textures from reference images to compensate for the information loss in input images. However, performing local transfer is difficult because of two gaps between input and reference images: the transformation gap (e.g., scale and rotation) and the resolution gap (e.g., HR and LR). To tackle these challenges, we propose C2-Matching in this work, which performs explicit robust matching across transformation and resolution. 1) To bridge the transformation gap, we propose a contrastive correspondence network, which learns transformation-robust correspondences using augmented views of the input image. 2) To address the resolution gap, we adopt teacher-student correlation distillation, which distills knowledge from the easier HR-HR matching to guide the more ambiguous LR-HR matching. 3) Finally, we design a dynamic aggregation module to address the potential misalignment issue between input images and reference images. In addition, to faithfully evaluate the performance of Reference-based Image Super-Resolution under a realistic setting, we contribute the Webly-Referenced SR (WR-SR) dataset, mimicking the practical usage scenario. We also extend C2-Matching to the Reference-based Video Super-Resolution task, where an image taken in a similar scene serves as the HR reference image. Extensive experiments demonstrate that our proposed C2-Matching significantly outperforms the state of the art on the standard CUFED5 benchmark and also boosts the performance of video SR by incorporating the C2-Matching component into video SR pipelines.
In recent years, deep convolutional neural networks have made significant progress in pathology image segmentation. However, pathology image segmentation faces a dilemma: higher-performance networks usually require more computational resources and storage. Owing to the inherently high resolution of pathology images, this phenomenon limits the deployment of high-accuracy networks in practical scenarios. To address this issue, we propose CoCo DistillNet, a novel Cross-layer Correlation (CoCo) knowledge distillation network for pathological gastric cancer segmentation. Knowledge distillation is a general technique that improves the performance of a compact network through knowledge transfer from a cumbersome network. Specifically, our CoCo DistillNet models the correlations of channel-mixed spatial similarity between different layers and then transfers this knowledge from a pre-trained cumbersome teacher network to an untrained compact student network. In addition, we utilize an adversarial learning strategy to further prompt the distillation procedure, which we call Adversarial Distillation (AD). Furthermore, to stabilize the training procedure, we use an unsupervised Paraphrase Module (PM) to boost the knowledge paraphrasing in the teacher network. As a result, extensive experiments conducted on a gastric cancer segmentation dataset demonstrate the prominent capability of CoCo DistillNet, which achieves state-of-the-art performance.
In recent years, Siamese network based trackers have significantly advanced the state of the art in real-time tracking. Despite their success, Siamese trackers tend to suffer from high memory costs, which restrict their applicability to mobile devices with tight memory budgets. To address this issue, we propose a distilled Siamese tracking framework to learn small, fast, and accurate trackers (students), which capture critical knowledge from large Siamese trackers (teachers) through a teacher-students knowledge distillation model. This model is intuitively inspired by the one-teacher-versus-multiple-students learning method typically employed in schools. In particular, our model contains a single teacher-student distillation module and a student-student knowledge sharing mechanism. The former is designed with a tracking-specific distillation strategy to transfer knowledge from a teacher to students. The latter is utilized for mutual learning between students to enable in-depth knowledge understanding. Extensive empirical evaluations on several popular Siamese trackers demonstrate the generality and effectiveness of our framework. Moreover, the results on five tracking benchmarks show that the proposed distilled trackers achieve compression rates of up to 18$\times$ and frame rates of $265$ FPS, while obtaining tracking accuracy comparable to the base models.
With the recent large-scale development of convolutional neural networks, numerous CNN-based image super-resolution methods have been proposed for practical deployment on edge devices. However, most existing methods focus on one specific aspect, either network design or loss design, which makes it difficult to minimize model size. To address this issue, we combine block design, architecture search, and loss design to obtain a more efficient SR structure. In this paper, we propose an edge-enhanced feature distillation network, named EFDN, to preserve high-frequency information under constrained resources. In detail, we build an edge-enhanced convolution block based on existing reparameterization methods. Meanwhile, we propose an edge-enhanced gradient loss to calibrate the training of the reparameterized path. Experimental results show that our edge-enhanced strategy preserves edges and significantly improves the final restoration quality. The code is available at https://github.com/icandle/EFDN.
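A plausible sketch of an edge-enhanced gradient loss using Sobel filters; the abstract does not specify EFDN's actual edge operator, so the kernel choice is an assumption:

```python
import torch
import torch.nn.functional as F

def edge_gradient_loss(sr, hr):
    """Edge-enhanced gradient loss: L1 on Sobel gradients, so training
    emphasizes high-frequency structure. Sobel is an assumption; EFDN's
    actual edge operator may differ. sr, hr: (B, C, H, W)."""
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]],
                      device=sr.device).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)          # Sobel-y is the transpose of Sobel-x
    c = sr.size(1)
    kx, ky = kx.repeat(c, 1, 1, 1), ky.repeat(c, 1, 1, 1)
    grad = lambda img: torch.cat(
        [F.conv2d(img, kx, padding=1, groups=c),   # depthwise gradients
         F.conv2d(img, ky, padding=1, groups=c)], dim=1)
    return F.l1_loss(grad(sr), grad(hr))
```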
Recent deep learning models have achieved high performance in speech enhancement. However, obtaining fast, low-complexity models without noticeable performance degradation remains a challenge. Previous knowledge distillation studies on speech enhancement could not solve this problem because their output distillation methods do not fit the speech enhancement task in some respects. In this study, we propose multi-view attention transfer (MV-AT), a feature-based distillation, to obtain efficient speech enhancement models in the time domain. Based on a multi-view feature extraction model, MV-AT transfers the multi-view knowledge of the teacher network to the student network without additional parameters. Experimental results show that the proposed method consistently improves the performance of student models of various sizes on the Valentini and Deep Noise Suppression (DNS) datasets. MANNER-S-8.1GF, a lightweight model for efficient deployment trained with our proposed method, achieved 15.4x fewer parameters and 4.71x fewer floating-point operations (FLOPs), respectively, than a baseline model with similar performance.
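A sketch of the single-view attention-transfer building block that MV-AT generalizes; the multi-view aggregation itself is not shown, and the exponent and normalization below are assumptions:

```python
import torch.nn.functional as F

def attention_transfer_loss(f_s, f_t, p=2):
    """Single-view attention transfer: collapse the channel dimension into
    a normalized attention map and match student to teacher. Works for
    (B, C, T) time-domain features as well as (B, C, H, W) maps. MV-AT
    aggregates several such views; this is only the basic ingredient."""
    def attn(f):
        a = f.abs().pow(p).mean(dim=1).flatten(1)   # (B, T) or (B, H*W)
        return F.normalize(a, dim=1)
    return (attn(f_s) - attn(f_t.detach())).pow(2).mean()
```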
Although vision-language pretraining (VLP) equipped cross-modal image-text retrieval (ITR) has achieved remarkable progress in the past two years, it suffers from a major drawback: the ever-increasing size of VLP models restricts their deployment in real-world search scenarios, where high latency is unacceptable. To alleviate this problem, we present a novel plug-in dynamic contrastive distillation (DCD) framework to compress large VLP models for the ITR task. Technically, we face the following two challenges: 1) due to limited GPU memory, optimizing a large number of negative samples while handling cross-modal fusion features makes contrastive learning hard to apply directly to cross-modal tasks; 2) it is inefficient to statically optimize the student network on different hard samples, which have different effects on distillation learning and student network optimization. We attempt to overcome these challenges from two points. First, to achieve multi-modal contrastive learning while balancing training cost and effect, we propose using the teacher network to estimate the difficult samples for the student, so that the student absorbs the powerful knowledge of the pre-trained teacher and masters the knowledge carried by hard samples. Second, to learn dynamically from hard sample pairs, we propose dynamic distillation to dynamically learn samples of different difficulties, from the perspective of better balancing the difficulty of the knowledge against the student's self-capability. We successfully apply our proposed DCD strategy to two state-of-the-art vision-language pretrained models, i.e., ViLT and METER. Extensive experiments on the MS-COCO and Flickr30K benchmarks show the effectiveness and efficiency of our DCD framework. Encouragingly, we can speed up inference by at least $129\times$ compared with existing ITR models.
Electroencephalogram (EEG) has been one of the common neuromonitoring modalities for real-world brain-computer interfaces (BCIs) because of its non-invasiveness, low cost, and high temporal resolution. Recently, light-weight and portable EEG wearable devices based on low-density montages have increased the convenience and usability of BCI applications. However, loss of EEG decoding performance is often inevitable due to the reduced number of electrodes and coverage of scalp regions of a low-density EEG montage. To address this issue, we introduce knowledge distillation (KD), a learning mechanism developed for transferring knowledge/information between neural network models, to enhance the performance of low-density EEG decoding. Our framework includes a newly proposed similarity-keeping (SK) teacher-student KD scheme that encourages a low-density EEG student model to acquire the inter-sample similarity of a pre-trained teacher model trained on high-density EEG data. The experimental results validate that our SK-KD framework consistently improves motor-imagery EEG decoding accuracy as the number of electrodes of the input EEG data decreases. For both common low-density headphone-like and headband-like montages, our method outperforms state-of-the-art KD methods across various EEG decoding model architectures. As the first KD scheme developed for enhancing EEG decoding, we foresee the proposed SK-KD framework facilitating the practicality of low-density EEG-based BCIs in real-world applications.