We propose a lightweight single-image super-resolution network for mobile devices, named XCAT. XCAT introduces heterogeneous group convolution blocks with cross concatenation (HXBlock). The heterogeneous split of the input channels into the group convolution blocks reduces the number of operations, and cross concatenation allows information to flow between the intermediate input tensors of cascaded HXBlocks. Cross concatenation inside the HXBlocks also avoids more expensive operations such as 1x1 convolutions. To further avoid expensive tensor copy operations, XCAT applies its sampling operations with non-trainable convolution kernels. Designed with integer quantization in mind, XCAT also exploits several training techniques, such as intensity-based data augmentation. The integer-quantized XCAT runs in real time on a Mali-G71 MP2 GPU at 320 ms, and on a Synaptics Dolphin NPU at 30 ms (NCHW) and 8.8 ms (NHWC), making it suitable for real-time applications.
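To make the mechanism concrete, here is a minimal PyTorch sketch of a heterogeneous split with cross concatenation as we read the description above; the split size, block body, and widths are illustrative assumptions rather than XCAT's exact HXBlock configuration.

```python
import torch
import torch.nn as nn

class HXBlockSketch(nn.Module):
    """Sketch: convolve only part of the channels (heterogeneous split),
    then swap branch order on output (cross concatenation)."""

    def __init__(self, channels: int, split: int):
        super().__init__()
        assert 0 < split < channels
        self.split = split
        self.conv = nn.Sequential(
            nn.Conv2d(split, split, 3, padding=1),  # only `split` channels are processed
            nn.ReLU(inplace=True),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        a, b = x[:, : self.split], x[:, self.split :]
        # Cross concatenation: the untouched channels move to the front,
        # so the next cascaded block convolves them instead. Information
        # mixes across blocks without a 1x1 convolution.
        return torch.cat([b, self.conv(a)], dim=1)

blocks = nn.Sequential(*[HXBlockSketch(32, 12) for _ in range(4)])
out = blocks(torch.rand(1, 32, 24, 24))  # shape preserved: (1, 32, 24, 24)
```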
Image super-resolution is a common task on mobile and IoT devices, where one often needs to upscale and enhance low-resolution images and video frames. While numerous solutions have been proposed for this problem in the past, they are usually not compatible with low-power mobile NPUs that have many computational and memory constraints. In this Mobile AI challenge, we address this problem and invite the participants to design an efficient quantized image super-resolution solution that can demonstrate real-time performance on mobile NPUs. The participants were provided with the DIV2K dataset and trained INT8 models to perform high-quality 3X image upscaling. The runtime of all models was evaluated on the Synaptics VS680 Smart Home board with a dedicated edge NPU capable of accelerating quantized neural networks. All proposed solutions are fully compatible with the above NPU, demonstrating up to a 60 FPS rate when reconstructing Full HD resolution images. A detailed description of all models developed in the challenge is provided in this paper.
Deep-learning-based single-image super-resolution (SISR) methods have drawn much attention and achieved remarkable success on modern, powerful GPUs. However, most state-of-the-art methods require huge numbers of parameters, large memory, and significant computational resources, and usually show poor inference times when deployed on current mobile CPUs/NPUs. In this paper, we propose a simple plain convolutional network with a fast nearest convolution module (NCNet), which is NPU-friendly and performs reliable super-resolution in real time. The proposed nearest convolution has the same performance as nearest-neighbor upsampling but is faster and better suited to the Android NNAPI. Our model can easily be deployed on mobile devices with 8-bit quantization and is fully compatible with all major mobile AI accelerators. Moreover, we conduct comprehensive experiments on different tensor operations on a mobile device to illustrate the efficiency of our network architecture. Our NCNet is trained and validated on the DIV2K 3x dataset, and comparisons with other efficient SR methods show that NCNet achieves high-fidelity SR results while using less inference time. Our code and pretrained models are publicly available at https://github.com/algolzw/ncnet.
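The nearest convolution can be reproduced with a frozen 1x1 convolution that replicates each input channel scale² times, followed by a pixel shuffle; below is a sketch of that equivalence (the exact weight layout NCNet ships may differ, but the result matches nearest-neighbor upsampling):

```python
import torch
import torch.nn as nn

def nearest_conv(channels: int, scale: int) -> nn.Conv2d:
    """Frozen 1x1 conv whose output, after PixelShuffle, equals
    nearest-neighbor upsampling -- but runs as a plain convolution,
    which mobile NPUs accelerate well."""
    conv = nn.Conv2d(channels, channels * scale * scale, 1, bias=False)
    with torch.no_grad():
        w = torch.zeros(channels * scale * scale, channels, 1, 1)
        for i in range(channels * scale * scale):
            w[i, i // (scale * scale)] = 1.0  # replicate each channel scale^2 times
        conv.weight.copy_(w)
    conv.weight.requires_grad_(False)
    return conv

x = torch.rand(1, 3, 8, 8)
up = nn.Sequential(nearest_conv(3, 3), nn.PixelShuffle(3))
assert torch.allclose(up(x), nn.Upsample(scale_factor=3, mode="nearest")(x))
```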
In recent years, deep learning methods have been successfully applied to single-image super-resolution tasks. Despite their great performance, deep learning methods cannot be easily applied to real-world applications due to their heavy computational requirements. In this paper, we address this issue by proposing an accurate and lightweight deep network for image super-resolution. In detail, we design an architecture that implements a cascading mechanism upon a residual network. We also present variant models of the proposed cascading residual network to further improve efficiency. Our extensive experiments show that even with much fewer parameters and operations, our models achieve performance comparable to that of state-of-the-art methods.
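A minimal sketch of the local cascading mechanism, assuming simplified block internals (the actual blocks are residual blocks, and the same cascading pattern is repeated at the network level):

```python
import torch
import torch.nn as nn

class CascadingGroup(nn.Module):
    """Each block's output is concatenated with all previous features
    and fused by a 1x1 convolution before feeding the next block."""

    def __init__(self, channels: int, n_blocks: int = 3):
        super().__init__()
        self.blocks = nn.ModuleList([
            nn.Sequential(nn.Conv2d(channels, channels, 3, padding=1),
                          nn.ReLU(inplace=True))
            for _ in range(n_blocks)
        ])
        self.fuse = nn.ModuleList([
            nn.Conv2d(channels * (i + 2), channels, 1)  # growing concat width
            for i in range(n_blocks)
        ])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        features, out = [x], x
        for block, fuse in zip(self.blocks, self.fuse):
            features.append(block(out))
            out = fuse(torch.cat(features, dim=1))
        return out
```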
With the popularity of mobile devices such as smartphones and wearables, lighter and faster models have become crucial for deploying video super-resolution. However, most previous lightweight models tend to focus on reducing the cost of model inference on desktop GPUs, which may not be energy-efficient on current mobile devices. In this paper, we propose an Extreme Low-Power Super-Resolution (ELSR) network that consumes only a small amount of energy on mobile devices. Pretraining and finetuning are adopted to boost the performance of this extremely tiny model. Extensive experiments show that our method achieves a good balance between restoration quality and power consumption. Finally, we achieved a competitive score of 90.9, with a PSNR of 27.34 dB and power of 0.09 W/30FPS, on the target MediaTek Dimensity 9000 platform, ranking first in the Mobile AI & AIM 2022 Real-Time Video Super-Resolution Challenge.
CNNs with strong learning ability are widely chosen to tackle super-resolution problems. However, CNNs rely on deeper architectures to improve image super-resolution performance, which can increase computational cost. In this paper, we propose an enhanced super-resolution group CNN (ESRGCNN) with a shallow architecture that fully fuses deep and wide channel features to extract more accurate low-frequency information from the correlations of different channels in single-image super-resolution (SISR). A signal-enhancement operation in ESRGCNN is also useful for inheriting longer-range contextual information to resolve long-term dependencies. An adaptive upsampling operation is integrated into the CNN to obtain a single super-resolution model for low-resolution images of different sizes. Extensive experiments report that our ESRGCNN surpasses the state of the art in SISR performance, complexity, execution speed, image quality evaluation, and visual effect. Code is available at https://github.com/hellloxiaotian/esrgcnn.
Video super-resolution is one of the most popular tasks on mobile devices, being widely used for an automatic improvement of low-bitrate and low-resolution video streams. While numerous solutions have been proposed for this problem, they are usually quite computationally demanding, demonstrating low FPS rates and power efficiency on mobile devices. In this Mobile AI challenge, we address this problem and invite the participants to design an end-to-end real-time video super-resolution solution for mobile NPUs optimized for low energy consumption. The participants were provided with the REDS training dataset containing video sequences for a 4X video upscaling task. The runtime and power efficiency of all models were evaluated on the powerful MediaTek Dimensity 9000 platform with a dedicated AI processing unit capable of accelerating floating-point and quantized neural networks. All proposed solutions are fully compatible with the above NPU, demonstrating up to a 500 FPS rate and 0.2 [Watt / 30 FPS] power consumption. A detailed description of all models developed in the challenge is provided in this paper.
In recent years, image and video delivery systems have begun integrating deep learning super-resolution (SR) approaches, leveraging their unprecedented visual enhancement capabilities while reducing reliance on networking conditions. Nevertheless, deploying these solutions on mobile devices still remains an active challenge as SR models are excessively demanding with respect to workload and memory footprint. Despite recent progress on on-device SR frameworks, existing systems either penalize visual quality, lead to excessive energy consumption or make inefficient use of the available resources. This work presents NAWQ-SR, a novel framework for the efficient on-device execution of SR models. Through a novel hybrid-precision quantization technique and a runtime neural image codec, NAWQ-SR exploits the multi-precision capabilities of modern mobile NPUs in order to minimize latency, while meeting user-specified quality constraints. Moreover, NAWQ-SR selectively adapts the arithmetic precision at run time to equip the SR DNN's layers with wider representational power, improving visual quality beyond what was previously possible on NPUs. Altogether, NAWQ-SR achieves an average speedup of 7.9x, 3x and 1.91x over the state-of-the-art on-device SR systems that use heterogeneous processors (MobiSR), CPU (SplitSR) and NPU (XLSR), respectively. Furthermore, NAWQ-SR delivers an average of 3.2x speedup and 0.39 dB higher PSNR over status-quo INT8 NPU designs, but most importantly mitigates the negative effects of quantization on visual quality, setting a new state-of-the-art in the attainable quality of NPU-based SR.
With the recent massive development of convolutional neural networks, numerous lightweight CNN-based image super-resolution methods have been proposed for practical deployment on edge devices. However, most existing methods focus on a single aspect, network or loss design, which makes it difficult to minimize the model size. To address this issue, we combine block design, architecture search, and loss design to obtain a more efficient SR structure. In this paper, we propose an edge-enhanced feature distillation network, named EFDN, to preserve high-frequency information under constrained resources. In detail, we build an edge-enhanced convolution block based on existing reparameterization methods. Meanwhile, we propose an edge-enhanced gradient loss to calibrate the training of the reparameterized path. Experimental results show that our edge-enhanced strategies preserve edges and significantly improve the final restoration quality. Code is available at https://github.com/icandle/efdn.
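A plausible sketch of an edge-oriented gradient loss of this kind, assuming a Sobel operator (the paper's exact edge extractor may differ):

```python
import torch
import torch.nn.functional as F

def gradient_loss(sr: torch.Tensor, hr: torch.Tensor) -> torch.Tensor:
    """L1 distance between Sobel edge maps of prediction and target,
    computed depthwise (one x/y kernel pair per channel)."""
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
    c = sr.shape[1]
    kernel = torch.stack([kx, kx.t()]).unsqueeze(1).repeat(c, 1, 1, 1).to(sr)
    grad = lambda t: F.conv2d(t, kernel, padding=1, groups=c)
    return F.l1_loss(grad(sr), grad(hr))
```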
Convolutional neural networks (CNNs) have obtained remarkable performance through deep architectures. However, these CNNs often lack robustness for image super-resolution (SR) under complex scenes. In this paper, we propose a heterogeneous group SR CNN (HGSRCNN) that leverages structural information of different types to obtain high-quality images. Specifically, each heterogeneous group block (HGB) of HGSRCNN uses a heterogeneous architecture containing a symmetric group convolution block and a complementary convolution block in parallel, enhancing the internal and external relations of different channels to facilitate richer low-frequency structural information of different types. To prevent redundancy in the obtained features, a refinement block with signal enhancement, applied in a serial way, is designed to filter out useless information. To prevent the loss of original information, a multi-level enhancement mechanism guides the CNN toward a symmetric architecture, promoting the expressive ability of HGSRCNN. Besides, a parallel upsampling mechanism is developed to train a blind SR model. Extensive experiments show that the proposed HGSRCNN obtains excellent SR performance in both quantitative and qualitative analysis. Code can be accessed at https://github.com/hellloxiaotian/hgsrcnn.
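A highly simplified sketch of the parallel heterogeneous idea: a grouped branch and a plain branch run side by side and are merged (the real HGB's branch composition, widths, and merging are more elaborate):

```python
import torch.nn as nn

class HGBSketch(nn.Module):
    """Parallel group-convolution and standard-convolution branches,
    summed; an assumption-level reading of a heterogeneous group block."""

    def __init__(self, channels: int = 64, groups: int = 4):
        super().__init__()
        self.group_branch = nn.Conv2d(channels, channels, 3, padding=1, groups=groups)
        self.plain_branch = nn.Conv2d(channels, channels, 3, padding=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.group_branch(x) + self.plain_branch(x))
```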
Modern single-image super-resolution (SISR) systems based on convolutional neural networks (CNNs) achieve impressive performance but require huge computational cost. The problem of feature redundancy has been well studied for visual recognition tasks but is rarely discussed for SISR. Based on the observation that many features in SISR models are also similar to each other, we propose to use a shift operation to generate the redundant features (i.e., ghost features). Compared with depthwise convolution, which is time-consuming on GPU-like devices, the shift operation brings practical inference acceleration for CNNs. We analyze the benefits of the shift operation for the SISR task and make the shift orientation learnable via the Gumbel-Softmax trick. In addition, a clustering procedure over the pre-trained model is explored to identify the intrinsic filters that generate the intrinsic features. The ghost features are then derived by shifting these intrinsic features along specific orientations. Finally, the complete output features are constructed by concatenating the intrinsic and ghost features. Extensive experiments on several benchmark models and datasets show that both non-compact and lightweight SISR models embedded with the proposed method achieve performance comparable to their baselines, with large reductions in parameters, FLOPs, and GPU inference latency. For instance, we reduce the parameters by 46%, the FLOPs by 46%, and the GPU inference latency by 42% for the $\times 2$ EDSR network with essentially lossless performance.
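The core trick is cheap to state in code; a sketch with a fixed shift direction (the paper learns per-channel orientations with Gumbel-Softmax and selects intrinsic filters by clustering):

```python
import torch

def ghost_features(intrinsic: torch.Tensor, shift: int = 1) -> torch.Tensor:
    """Derive 'ghost' features by spatially shifting intrinsic features,
    then concatenate both to form the full output."""
    ghost = torch.roll(intrinsic, shifts=shift, dims=-1)  # a shift, not a conv
    return torch.cat([intrinsic, ghost], dim=1)
```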
There is a constant need for high-performing, computationally efficient neural network models for image super-resolution (SR) to be used on low-capacity devices. One way to obtain such models is to compress existing architectures, e.g., by quantization. Another option is neural architecture search (NAS), which discovers new efficient solutions. We propose a novel quantization-aware NAS procedure for a specifically designed SR search space. Our approach performs NAS to find quantization-friendly SR models. The search relies on adding quantization noise to parameters and activations instead of quantizing parameters directly. Our QuantNAS finds architectures with a better PSNR/BitOps trade-off than uniform or mixed-precision quantization of fixed architectures. Additionally, our search against quantization noise is 30% faster than directly quantizing the weights.
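A sketch of the search-time noise injection, under simplifying assumptions about the scale (per-tensor min/max) and noise distribution:

```python
import torch

def quant_noise(w: torch.Tensor, bits: int = 8) -> torch.Tensor:
    """Add uniform noise matching the quantization step instead of
    rounding, keeping the supernet differentiable during search."""
    step = (w.max() - w.min()) / (2 ** bits - 1)
    noise = (torch.rand_like(w) - 0.5) * step
    return w + noise.detach()  # noise treated as a constant perturbation
```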
This paper reviews the first challenge on single image super-resolution (restoration of rich details in a low-resolution image) with a focus on the proposed solutions and results. A new DIVerse 2K resolution image dataset (DIV2K) was employed. The challenge had 6 competitions divided into 2 tracks with 3 magnification factors each. Track 1 employed the standard bicubic downscaling setup, while Track 2 had unknown downscaling operators (blur kernel and decimation) that were learnable through low- and high-resolution training images. Each competition had ∼100 registered participants, and 20 teams competed in the final testing phase. Together they gauge the state of the art in single image super-resolution.
Recently, several models based on deep neural networks have achieved great success in terms of both reconstruction accuracy and computational performance for single image super-resolution. In these methods, the low resolution (LR) input image is upscaled to the high resolution (HR) space using a single filter, commonly bicubic interpolation, before reconstruction. This means that the super-resolution (SR) operation is performed in HR space. We demonstrate that this is sub-optimal and adds computational complexity. In this paper, we present the first convolutional neural network (CNN) capable of real-time SR of 1080p videos on a single K2 GPU. To achieve this, we propose a novel CNN architecture where the feature maps are extracted in the LR space. In addition, we introduce an efficient sub-pixel convolution layer which learns an array of upscaling filters to upscale the final LR feature maps into the HR output. By doing so, we effectively replace the handcrafted bicubic filter in the SR pipeline with more complex upscaling filters specifically trained for each feature map, whilst also reducing the computational complexity of the overall SR operation. We evaluate the proposed approach using images and videos from publicly available datasets and show that it performs significantly better (+0.15dB on Images and +0.39dB on Videos) and is an order of magnitude faster than previous CNN-based methods.
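The sub-pixel layer survives directly in modern frameworks; in PyTorch it is nn.PixelShuffle, and the tail of such a network looks like the sketch below (the 64-channel feature width is illustrative):

```python
import torch
import torch.nn as nn

scale = 3
# Convolve in LR space to produce r^2 channels per output channel,
# then rearrange them into the HR grid (the sub-pixel convolution).
tail = nn.Sequential(
    nn.Conv2d(64, 3 * scale ** 2, kernel_size=3, padding=1),
    nn.PixelShuffle(scale),  # (B, 3*r^2, H, W) -> (B, 3, r*H, r*W)
)
print(tail(torch.rand(1, 64, 32, 32)).shape)  # torch.Size([1, 3, 96, 96])
```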
Convolutional neural networks have recently demonstrated high-quality reconstruction for single-image superresolution. In this paper, we propose the Laplacian Pyramid Super-Resolution Network (LapSRN) to progressively reconstruct the sub-band residuals of high-resolution images. At each pyramid level, our model takes coarse-resolution feature maps as input, predicts the high-frequency residuals, and uses transposed convolutions for upsampling to the finer level. Our method does not require the bicubic interpolation as the pre-processing step and thus dramatically reduces the computational complexity. We train the proposed LapSRN with deep supervision using a robust Charbonnier loss function and achieve high-quality reconstruction. Furthermore, our network generates multi-scale predictions in one feed-forward pass through the progressive reconstruction, thereby facilitating resource-aware applications. Extensive quantitative and qualitative evaluations on benchmark datasets show that the proposed algorithm performs favorably against the state-of-the-art methods in terms of speed and accuracy.
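The Charbonnier loss used for the deep supervision is a one-liner, shown here with a commonly used epsilon (a sketch; check the paper for the exact constant):

```python
import torch

def charbonnier_loss(pred: torch.Tensor, target: torch.Tensor,
                     eps: float = 1e-3) -> torch.Tensor:
    """A differentiable, outlier-robust relative of L1:
    rho(x) = sqrt(x^2 + eps^2), averaged over all pixels."""
    return torch.sqrt((pred - target) ** 2 + eps ** 2).mean()
```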
Single Image Super-Resolution (SISR) tasks have achieved significant performance with deep neural networks. However, the large number of parameters in CNN-based methods for SISR tasks require heavy computations. Although several efficient SISR models have been recently proposed, most are handcrafted and thus lack flexibility. In this work, we propose a novel differentiable Neural Architecture Search (NAS) approach on both the cell-level and network-level to search for lightweight SISR models. Specifically, the cell-level search space is designed based on an information distillation mechanism, focusing on the combinations of lightweight operations and aiming to build a more lightweight and accurate SR structure. The network-level search space is designed to consider the feature connections among the cells and aims to find which information flow benefits the cell most to boost the performance. Unlike the existing Reinforcement Learning (RL) or Evolutionary Algorithm (EA) based NAS methods for SISR tasks, our search pipeline is fully differentiable, and the lightweight SISR models can be efficiently searched on both the cell-level and network-level jointly on a single GPU. Experiments show that our methods can achieve state-of-the-art performance on the benchmark datasets in terms of PSNR, SSIM, and model complexity with merely 68G Multi-Adds for $\times 2$ and 18G Multi-Adds for $\times 4$ SR tasks.
Single-image super-resolution can support robotic tasks in environments where a reliable visual stream is needed to monitor the mission, handle teleoperation, or study relevant visual details. In this work, we propose an efficient generative adversarial network model for real-time super-resolution. We adopt a tailored architecture of the original SRGAN and model quantization to boost execution on CPU and Edge TPU devices, reaching up to 200 fps inference. We further optimize our model by distilling its knowledge into a smaller version of the network and obtain remarkable improvements compared to the standard training approach. Our experiments show that our fast and lightweight model preserves considerably satisfying image quality compared to heavier state-of-the-art models. Finally, we conduct experiments on image transmission with bandwidth degradation to highlight the advantages of the proposed system for mobile robotic applications.
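The distillation step plausibly amounts to output-level matching; a sketch under that assumption (the actual method may also match intermediate or adversarial signals):

```python
import torch.nn.functional as F

def distill_loss(student_sr, teacher_sr, hr, alpha: float = 0.5):
    """The small network learns from both the ground truth and the
    larger teacher's prediction; alpha is an illustrative weighting."""
    return (1 - alpha) * F.l1_loss(student_sr, hr) + alpha * F.l1_loss(student_sr, teacher_sr)
```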
With the advent of deep learning (DL), super-resolution (SR) has also become a thriving research area. However, despite promising results, the field still faces challenges that require further research, such as allowing flexible upsampling, more effective loss functions, and better evaluation metrics. We review the domain of SR in light of recent advances and examine state-of-the-art models such as diffusion-based (DDPM) and transformer-based SR models. We critically discuss contemporary strategies used in SR and identify promising yet unexplored research directions. We complement previous surveys by incorporating the latest developments in the field, such as uncertainty-driven losses, wavelet networks, neural architecture search, novel normalization methods, and the latest evaluation techniques. We also include several visualizations of the models and methods throughout each chapter to facilitate a global understanding of trends in the field. Ultimately, this review aims to help researchers push the boundaries of DL applied to SR.
The role of mobile cameras increased dramatically over the past few years, leading to more and more research in automatic image quality enhancement and RAW photo processing. In this Mobile AI challenge, the target was to develop an efficient end-to-end AI-based image signal processing (ISP) pipeline replacing the standard mobile ISPs that can run on modern smartphone GPUs using TensorFlow Lite. The participants were provided with a large-scale Fujifilm UltraISP dataset consisting of thousands of paired photos captured with a normal mobile camera sensor and a professional 102MP medium-format FujiFilm GFX100 camera. The runtime of the resulting models was evaluated on the Snapdragon 8 Gen 1 GPU, which provides excellent acceleration results for the majority of common deep learning ops. The proposed solutions are compatible with all recent mobile GPUs, being able to process Full HD photos in less than 20-50 milliseconds while achieving high fidelity results. A detailed description of all models developed in this challenge is provided in this paper.
Video super-resolution (VSR) is the task of restoring high-resolution frames from a sequence of low-resolution inputs. Unlike single-image super-resolution, VSR can exploit the temporal information across frames to reconstruct results with more details. Recently, with the rapid development of convolutional neural networks (CNNs), the VSR task has drawn increasing attention, and many CNN-based methods have achieved remarkable results. However, due to computational-resource and runtime limitations, only a few VSR methods can be applied on real-world mobile devices. In this paper, we propose a Sliding Window based Recurrent Network (SWRN) that can run inference in real time while still achieving superior performance. Specifically, we observe that video frames have spatial and temporal relations that can help recover details, and the key is how to extract and aggregate this information. To address this, we input three neighboring frames and use a hidden state to recurrently store and update the important temporal information. Our experiments on the REDS dataset show that the proposed method adapts well to mobile devices and produces visually pleasing results.
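A sketch of one recurrent step as described, with illustrative feature widths: three stacked frames plus the carried hidden state go in, and features for the current frame plus an updated state come out.

```python
import torch
import torch.nn as nn

class SWRNCellSketch(nn.Module):
    """Sliding-window recurrent step: fuse 3 RGB frames (9 channels)
    with the hidden state, emit features and the next state."""

    def __init__(self, feats: int = 16):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv2d(9 + feats, feats, 3, padding=1),
            nn.ReLU(inplace=True),
        )
        self.to_state = nn.Conv2d(feats, feats, 3, padding=1)

    def forward(self, frames: torch.Tensor, state: torch.Tensor):
        out = self.fuse(torch.cat([frames, state], dim=1))
        return out, self.to_state(out)

cell = SWRNCellSketch()
state = torch.zeros(1, 16, 64, 64)
feat, state = cell(torch.rand(1, 9, 64, 64), state)  # frames t-1, t, t+1
```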