Compressed image super-resolution has attracted great attention in recent years, where images are degraded by both compression artifacts and low-resolution artifacts. Because of the complex hybrid distortions, it is difficult to restore the distorted image through the simple cooperation of super-resolution and compression artifact removal. In this paper, we take a step forward and propose the Hierarchical Swin Transformer (HST) network to restore low-resolution compressed images, which jointly captures hierarchical feature representations and enhances the representation at each scale with a Swin Transformer. Moreover, we find that pretraining on a super-resolution (SR) task is vital for compressed image super-resolution. To explore the effects of different SR pretraining, we take commonly used SR tasks (e.g., bicubic and different real super-resolution simulations) as our pretraining tasks, and reveal that SR plays an irreplaceable role in compressed image super-resolution. With the cooperation of HST and pretraining, our HST achieves fifth place in the AIM 2022 challenge on the low-quality compressed image super-resolution track, with a PSNR of 23.51 dB. Extensive experiments and ablation studies have validated the effectiveness of our proposed method.
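As a rough illustration of the hierarchical design sketched in this abstract, a feature pyramid in which each scale receives its own enhancement before fusion, consider the following minimal PyTorch sketch. It is an assumption-laden stand-in, not the HST architecture: plain convolutional enhancers replace the Swin Transformer blocks, and the module and parameter names are invented for the example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HierarchicalBranchSketch(nn.Module):
    """Minimal sketch of a hierarchical restoration trunk: features are
    downsampled into a pyramid, each scale is enhanced by its own block,
    then everything is upsampled and fused. Plain conv blocks stand in
    for the Swin Transformer enhancers; names and shapes are invented.
    Input H and W must be divisible by 2**(num_scales - 1)."""
    def __init__(self, channels=64, num_scales=3):
        super().__init__()
        self.head = nn.Conv2d(3, channels, 3, padding=1)
        self.enhancers = nn.ModuleList(
            nn.Sequential(nn.Conv2d(channels, channels, 3, padding=1),
                          nn.ReLU(inplace=True),
                          nn.Conv2d(channels, channels, 3, padding=1))
            for _ in range(num_scales))
        self.fuse = nn.Conv2d(channels * num_scales, channels, 1)
        self.tail = nn.Conv2d(channels, 3, 3, padding=1)

    def forward(self, x):
        feat = self.head(x)
        outs = []
        for i, enhance in enumerate(self.enhancers):
            s = 2 ** i
            f = F.avg_pool2d(feat, s) if s > 1 else feat   # build the pyramid
            f = enhance(f) + f                             # per-scale enhancement
            if s > 1:                                      # back to full resolution
                f = F.interpolate(f, scale_factor=s, mode='bilinear',
                                  align_corners=False)
            outs.append(f)
        return self.tail(self.fuse(torch.cat(outs, dim=1)))
```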
Compression plays an important role in the efficient transmission and storage of images and videos through band-limited systems such as streaming services, virtual reality, or video games. However, compression unavoidably leads to artifacts and the loss of original information, which may severely degrade visual quality. For these reasons, quality enhancement of compressed images has become a popular research topic. While most state-of-the-art image restoration methods are based on convolutional neural networks, other Transformer-based methods such as SwinIR show impressive performance on these tasks. In this paper, we explore the novel Swin Transformer V2 to improve SwinIR for image super-resolution, and in particular for the compressed-input scenario. Using this method we can tackle the major issues in training Transformer vision models, such as training instability, the resolution gap between pretraining and fine-tuning, and data hunger. We conduct experiments on three representative tasks: JPEG compression artifact removal, image super-resolution (classical and lightweight), and compressed image super-resolution. Experimental results demonstrate that our method, Swin2SR, improves the training convergence and performance of SwinIR, and is a top-5 solution in the "AIM 2022 Challenge on Super-Resolution of Compressed Image and Video".
Image restoration is a long-standing low-level vision problem that aims to restore high-quality images from low-quality images (e.g., downscaled, noisy and compressed images). While state-of-the-art image restoration methods are based on convolutional neural networks, few attempts have been made with Transformers which show impressive performance on high-level vision tasks. In this paper, we propose a strong baseline model SwinIR for image restoration based on the Swin Transformer. SwinIR consists of three parts: shallow feature extraction, deep feature extraction and high-quality image reconstruction. In particular, the deep feature extraction module is composed of several residual Swin Transformer blocks (RSTB), each of which has several Swin Transformer layers together with a residual connection. We conduct experiments on three representative tasks: image super-resolution (including classical, lightweight and real-world image super-resolution), image denoising (including grayscale and color image denoising) and JPEG compression artifact reduction. Experimental results demonstrate that SwinIR outperforms state-of-the-art methods on different tasks by up to 0.14∼0.45dB, while the total number of parameters can be reduced by up to 67%.
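The RSTB structure described above (several Swin Transformer layers plus a convolution, wrapped in one residual connection) reduces to a few lines once the Swin layer itself is abstracted away. A hedged sketch, where `swin_layer_factory` is an assumed placeholder for a module mapping (B, C, H, W) to (B, C, H, W); the real SwinIR block additionally handles patch (un)embedding and window shifting:

```python
import torch.nn as nn

class RSTBSketch(nn.Module):
    """Sketch of a Residual Swin Transformer Block as described in the
    abstract: several Swin layers followed by a convolution, wrapped in
    a residual connection. `swin_layer_factory` must return a module
    mapping (B, C, H, W) -> (B, C, H, W); it is an assumption here."""
    def __init__(self, dim, depth, swin_layer_factory):
        super().__init__()
        self.layers = nn.Sequential(*[swin_layer_factory(dim) for _ in range(depth)])
        self.conv = nn.Conv2d(dim, dim, 3, padding=1)

    def forward(self, x):
        return x + self.conv(self.layers(x))  # residual over the whole block
```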
This paper reviews the Challenge on Super-Resolution of Compressed Image and Video at AIM 2022. The challenge includes two tracks. Track 1 aims at the super-resolution of compressed images, and Track 2 targets the super-resolution of compressed videos. In Track 1, we use the popular dataset DIV2K as the training, validation, and test sets. In Track 2, we propose the LDV 3.0 dataset, which contains 365 videos, including the LDV 2.0 dataset (335 videos) and 30 additional videos. In this challenge, 12 teams and 2 teams submitted final results to Track 1 and Track 2, respectively. The proposed methods and solutions gauge the state of the art of super-resolution on compressed images and videos. The proposed LDV 3.0 dataset is available at https://github.com/renyang-home/ldv_dataset. The homepage of this challenge is at https://github.com/renyang-home/aim22_compresssr.
Transformer-based methods have achieved impressive image restoration performance compared to CNN-based methods, owing to their ability to model long-range dependencies. However, advances such as SwinIR adopt window-based, local attention strategies to balance performance and computational overhead, which restricts the use of large receptive fields to capture global information and establish long-range dependencies in the early layers. To further improve the efficiency of capturing global information, in this work we propose SwinFIR, which extends SwinIR with Fast Fourier Convolution (FFC) components that have an image-wide receptive field. We also revisit other advanced techniques, namely data augmentation, pretraining, and feature ensembling, to improve image reconstruction. Our feature ensemble method considerably enhances the model's performance without increasing training or testing time. We apply our algorithm to multiple popular large-scale benchmarks and achieve state-of-the-art performance compared with existing methods. For example, our SwinFIR achieves a PSNR of 32.83 dB on the Manga109 dataset, which is 0.8 dB higher than the state-of-the-art SwinIR method.
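The image-wide receptive field of FFC comes from applying a pointwise convolution in the Fourier domain, where every frequency coefficient depends on all spatial positions. A minimal sketch of that spectral branch, assuming a simple real/imaginary channel stacking (the published FFC also carries a local branch and cross-branch exchange, omitted here):

```python
import torch
import torch.nn as nn

class SpectralConvSketch(nn.Module):
    """Rough sketch of the spectral branch of a Fast Fourier Convolution:
    a 1x1 convolution applied in the Fourier domain gives every output
    position an image-wide receptive field."""
    def __init__(self, channels):
        super().__init__()
        # Operates on stacked real/imaginary parts, hence 2x channels.
        self.conv = nn.Conv2d(channels * 2, channels * 2, 1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        b, c, h, w = x.shape
        spec = torch.fft.rfft2(x, norm='ortho')       # complex (B, C, H, W//2+1)
        y = torch.cat([spec.real, spec.imag], dim=1)  # to a real tensor
        y = self.act(self.conv(y))
        real, imag = y.chunk(2, dim=1)
        return torch.fft.irfft2(torch.complex(real, imag), s=(h, w), norm='ortho')
```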
Informative features play a crucial role in the single image super-resolution task. Channel attention has been demonstrated to be effective for preserving information-rich features in each layer. However, channel attention treats each convolution layer as a separate process that misses the correlation among different layers. To address this problem, we propose a new holistic attention network (HAN), which consists of a layer attention module (LAM) and a channel-spatial attention module (CSAM), to model the holistic interdependencies among layers, channels, and positions. Specifically, the proposed LAM adaptively emphasizes hierarchical features by considering correlations among layers. Meanwhile, CSAM learns the confidence at all the positions of each channel to selectively capture more informative features. Extensive experiments demonstrate that the proposed HAN performs favorably against the state-of-the-art single image super-resolution approaches.
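The layer attention idea is compact enough to sketch: treat the feature maps of N layers as tokens, compute an N x N correlation matrix, and re-weight each layer by its correlations with the others. The flattening, softmax placement, and learnable residual scale below are illustrative assumptions about LAM, not the paper's exact code:

```python
import torch
import torch.nn as nn

class LayerAttentionSketch(nn.Module):
    """Minimal sketch of a layer attention module in the spirit of HAN:
    hierarchical features from N layers are flattened, an N x N
    layer-correlation matrix is computed, and each layer's features are
    re-weighted by their correlations with all other layers."""
    def __init__(self):
        super().__init__()
        self.scale = nn.Parameter(torch.zeros(1))  # learnable residual scale

    def forward(self, feats):                      # feats: (B, N, C, H, W)
        b, n, c, h, w = feats.shape
        flat = feats.view(b, n, -1)                # (B, N, C*H*W)
        attn = torch.softmax(flat @ flat.transpose(1, 2), dim=-1)  # (B, N, N)
        out = (attn @ flat).view(b, n, c, h, w)
        return feats + self.scale * out            # residual re-weighting
```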
Convolutional Neural Network (CNN)-based image super-resolution (SR) has exhibited impressive success on known degraded low-resolution (LR) images. However, this type of approach struggles to maintain its performance in practical scenarios where the degradation process is unknown. Although existing blind SR methods attempt to solve this problem using blur kernel estimation, their perceptual quality and reconstruction accuracy are still unsatisfactory. In this paper, we analyze the degradation of a high-resolution (HR) image in terms of its intrinsic image components according to a degradation-based formulation model. We propose a components decomposition and co-optimization network (CDCN) for blind SR. Firstly, CDCN decomposes the input LR image into structure and detail components in feature space. Then, the mutual collaboration block (MCB) is presented to exploit the relationship between the two components. In this way, the detail component can provide informative features to enrich the structural context, and the structure component can carry structural context for better detail revealing, in a mutually complementary manner. After that, we present a degradation-driven learning strategy to jointly supervise the HR image detail and structure restoration process. Finally, a multi-scale fusion module followed by an upsampling layer is designed to fuse the structure and detail features and perform SR reconstruction. Empowered by such degradation-based component decomposition, collaboration, and mutual optimization, we can bridge the correlation between component learning and degradation modelling for blind SR, thereby producing SR results with more accurate textures. Extensive experiments on both synthetic SR datasets and real-world images show that the proposed method achieves state-of-the-art performance compared to existing methods.
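As a toy picture of the structure/detail decomposition that CDCN performs learnably in feature space, a fixed low-pass split conveys the idea (everything here is an expository assumption, not the paper's module):

```python
import torch.nn.functional as F

def split_structure_detail(feat):
    """Toy stand-in for CDCN's decomposition: a fixed low-pass filter
    yields the structure component; the residual is the detail
    component. The paper learns this split in feature space."""
    structure = F.avg_pool2d(feat, kernel_size=3, stride=1, padding=1)
    detail = feat - structure
    return structure, detail
```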
Face Restoration (FR) aims to restore high-quality (HQ) faces from low-quality (LQ) input images, a domain-specific image restoration problem in the low-level computer vision area. Early face restoration methods mainly use statistical priors and degradation models, which struggle to meet the requirements of real-world applications in practice. In recent years, face restoration has witnessed great progress after stepping into the deep learning era. However, few works systematically study deep-learning-based face restoration methods. Thus, this paper comprehensively surveys recent advances in deep learning techniques for face restoration. Specifically, we first summarize different problem formulations and analyze the characteristics of face images. Second, we discuss the challenges of face restoration. Concerning these challenges, we present a comprehensive review of existing FR methods, including prior-based methods and deep-learning-based methods. Then, we explore developed techniques in the task of FR covering network architectures, loss functions, and benchmark datasets. We also conduct a systematic benchmark evaluation of representative methods. Finally, we discuss future directions, including network designs, metrics, benchmark datasets, applications, etc. We also provide an open-source repository for all the discussed methods, which is available at https://github.com/TaoWangzj/Awesome-Face-Restoration.
Blind face restoration (BFR) aims to construct a high-quality (HQ) face image from its corresponding low-quality (LQ) input. Recently, many BFR methods have been proposed and have achieved remarkable success. However, these methods are trained or evaluated on privately synthesized datasets, which makes fair comparison with subsequent approaches infeasible. To address this problem, we first synthesize two blind face restoration benchmark datasets called EDFace-Celeb-1M (BFR128) and EDFace-Celeb-150K (BFR512). State-of-the-art methods are benchmarked on them under five settings, including blur, noise, low resolution, JPEG compression artifacts, and their combination (full degradation). To make the comparison more comprehensive, five widely used quantitative metrics and two task-driven metrics are applied, including Average Face Landmark Distance (AFLD) and Average Face ID Cosine Similarity (AFICS). Furthermore, we develop an effective baseline model called the Swin Transformer U-Net (STUNet). The STUNet, with a U-Net architecture, applies an attention mechanism and a shifted windowing scheme to capture long-range pixel interactions and focus more on significant features while still being trained efficiently. Experimental results show that the proposed baseline method performs favorably against SOTA methods on various BFR tasks.
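Of the two task-driven metrics, AFICS has a particularly direct form: the cosine similarity between identity embeddings of restored and ground-truth faces, averaged over the evaluation set. A sketch, assuming the embeddings come from some face-ID network (not specified here):

```python
import torch.nn.functional as F

def afics(restored_emb, gt_emb):
    """Average Face ID Cosine Similarity: mean cosine similarity between
    identity embeddings of restored faces and their ground truths.
    restored_emb, gt_emb: (N, D) batches of face-ID embeddings."""
    return F.cosine_similarity(restored_emb, gt_emb, dim=1).mean()
```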
Blind face restoration usually encounters face inputs of diverse scales, especially in the real world. However, most current works support only faces of a specific scale, which limits their applicability in real-world scenarios. In this work, we propose a novel scale-aware blind face restoration framework, named FaceFormer, which formulates facial feature restoration as a scale-aware transformation. The proposed facial feature up-sampling (FFUP) module dynamically generates upsampling filters based on the original scale-factor prior, which helps our network adapt to arbitrary face scales. Moreover, we further propose a facial feature embedding (FFE) module that leverages a Transformer to hierarchically extract the diversity and robustness of the facial latent representation. Thus, our FaceFormer restores faces with fidelity and robustness, exhibiting realistic and symmetrical details of facial components. Extensive experiments demonstrate that our proposed method, trained on synthetic datasets, generalizes better to natural low-quality images than current state-of-the-art methods.
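The FFUP idea, deriving the upsampling filter from the scale factor itself so that one network serves arbitrary scales, can be loosely sketched as below. The MLP, kernel size, and depthwise application are all illustrative assumptions rather than the paper's module:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ScaleAwareUpsampleSketch(nn.Module):
    """Loose sketch of scale-aware dynamic filtering in the spirit of
    FFUP: an MLP maps the scale factor to a k x k correction kernel
    applied (depthwise, shared across channels) after plain
    interpolation. The real FFUP module is more elaborate."""
    def __init__(self, k=5):
        super().__init__()
        self.k = k
        self.mlp = nn.Sequential(nn.Linear(1, 64), nn.ReLU(),
                                 nn.Linear(64, k * k))

    def forward(self, x, scale):
        b, c, h, w = x.shape
        up = F.interpolate(x, scale_factor=scale, mode='bilinear',
                           align_corners=False)
        # Predict a normalized kernel from the scale factor alone.
        kernel = self.mlp(torch.tensor([[scale]], dtype=x.dtype, device=x.device))
        kernel = torch.softmax(kernel, dim=-1).view(1, 1, self.k, self.k)
        kernel = kernel.expand(c, 1, self.k, self.k)        # depthwise weights
        return F.conv2d(up, kernel, padding=self.k // 2, groups=c)
```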
Real-world image super-resolution (RISR) has received increasing attention for improving the quality of SR images under unknown complex degradations. Existing methods rely on heavy SR models to enhance low-resolution (LR) images of different degradation levels, which significantly restricts their practical deployment on resource-limited devices. In this paper, we propose a novel dynamic channel splitting scheme for efficient real-world image super-resolution, termed DCS-RISR. Specifically, we first introduce a light degradation prediction network to regress the degradation vector that simulates the real-world degradation, upon which the channel splitting vector is generated as the input for an efficient SR model. Then, a learnable octave convolution block is proposed to adaptively decide the channel splitting scale for low- and high-frequency features at each block, reducing computation overhead and memory cost by assigning the large scale to low-frequency features and the small scale to high-frequency ones. To further improve RISR performance, non-local regularization is employed to supplement the knowledge of patches from the LR and HR subspaces at no extra inference cost. Extensive experiments demonstrate the effectiveness of DCS-RISR on different benchmark datasets. Our DCS-RISR not only achieves the best trade-off between computation/parameters and PSNR/SSIM metrics, but also effectively handles real-world images with different degradation levels.
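The channel-splitting scheme builds on octave convolution, in which a fraction of the channels is processed at half resolution to save computation on low-frequency content. A fixed-ratio sketch (DCS-RISR makes the split learnable per block and conditions it on the degradation vector, both omitted here):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class OctaveSplitConvSketch(nn.Module):
    """Sketch of octave-style channel splitting: a fraction `alpha` of
    the channels is processed at half resolution (low frequency), the
    rest at full resolution (high frequency), cutting FLOPs on the low
    band. The learnable, degradation-aware split of the paper is not
    reproduced; `alpha` is fixed here."""
    def __init__(self, channels, alpha=0.5):
        super().__init__()
        self.c_lo = int(channels * alpha)
        self.c_hi = channels - self.c_lo
        self.conv_lo = nn.Conv2d(self.c_lo, self.c_lo, 3, padding=1)
        self.conv_hi = nn.Conv2d(self.c_hi, self.c_hi, 3, padding=1)

    def forward(self, x):                       # x: (B, C, H, W), even H, W
        hi, lo = x[:, :self.c_hi], x[:, self.c_hi:]
        lo = F.avg_pool2d(lo, 2)                # low band at half resolution
        lo = self.conv_lo(lo)
        lo = F.interpolate(lo, scale_factor=2, mode='nearest')
        hi = self.conv_hi(hi)
        return torch.cat([hi, lo], dim=1)
```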
Blind image super-resolution (SR) aims to recover high-quality visual textures from a low-resolution (LR) image, which is typically degraded by downsampling blur kernels and additive noise. This task is extremely difficult due to the challenges of complex real-world image degradations. Existing SR approaches either assume a predefined blur kernel or fixed noise, which limits them in challenging cases. In this paper, we propose a degradation-guided meta-restoration network for blind super-resolution (DMSR) that facilitates image restoration in real cases. DMSR consists of a degradation extractor and meta-restoration modules. The extractor estimates the degradations in the LR input and guides the meta-restoration modules to predict restoration parameters for different degradations. DMSR is jointly optimized by a novel degradation consistency loss and reconstruction losses. Through such optimization, DMSR outperforms SOTA methods by a large margin on three widely used benchmarks. A user study including 16 subjects further validates the superiority of DMSR in real-world blind SR tasks.
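The degradation-guided conditioning can be pictured as a restoration layer whose parameters are predicted from the estimated degradation vector. The sketch below uses simple per-channel scale/shift modulation as an assumed stand-in for DMSR's meta-restoration parameter prediction:

```python
import torch
import torch.nn as nn

class DegradationModulatedConv(nn.Module):
    """Illustrative sketch of degradation-guided restoration: a
    degradation vector (produced by an extractor, not shown) is mapped
    to per-channel scale/shift parameters that modulate restoration
    features. The actual DMSR modules predict richer parameters; this
    only conveys the conditioning idea."""
    def __init__(self, channels, deg_dim):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, 3, padding=1)
        self.to_params = nn.Linear(deg_dim, channels * 2)

    def forward(self, x, deg_vec):              # deg_vec: (B, deg_dim)
        scale, shift = self.to_params(deg_vec).chunk(2, dim=1)
        scale = scale.unsqueeze(-1).unsqueeze(-1)
        shift = shift.unsqueeze(-1).unsqueeze(-1)
        return self.conv(x) * (1 + scale) + shift
```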
Recently, Transformer-based image restoration networks have achieved promising improvements over convolutional neural networks due to parameter-independent global interactions. To lower computational cost, existing works generally limit self-attention computation to non-overlapping windows. However, each group of tokens is always drawn from a dense area of the image. This is considered a dense attention strategy, since the interactions of tokens are restrained to dense regions. Obviously, this strategy can result in restricted receptive fields. To address this issue, we propose the Attention Retractable Transformer (ART) for image restoration, which presents both dense and sparse attention modules in the network. The sparse attention module allows tokens from sparse areas to interact and thus provides a wider receptive field. Furthermore, the alternating application of dense and sparse attention modules greatly enhances the representation ability of the Transformer while providing retractable attention on the input image. We conduct extensive experiments on image super-resolution, denoising, and JPEG compression artifact reduction tasks. Experimental results validate that our proposed ART outperforms state-of-the-art methods on various benchmark datasets both quantitatively and visually. We also provide code and models at https://github.com/gladzhang/ART.
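The contrast between dense and sparse token grouping reduces to two reshapes: dense groups gather contiguous windows, while sparse groups gather tokens at a fixed stride so a single attention call spans the whole image. A sketch assuming H and W are divisible by the window size and interval:

```python
import torch

def dense_groups(tokens, h, w, win):
    """Group tokens from contiguous win x win windows (dense attention).
    tokens: (B, H*W, C) -> (B * num_windows, win*win, C)."""
    b, _, c = tokens.shape
    t = tokens.view(b, h // win, win, w // win, win, c)
    return t.permute(0, 1, 3, 2, 4, 5).reshape(-1, win * win, c)

def sparse_groups(tokens, h, w, interval):
    """Group tokens sampled at a fixed stride (sparse attention): each
    group gathers tokens spread across the whole image, widening the
    receptive field of a single attention call."""
    b, _, c = tokens.shape
    t = tokens.view(b, h // interval, interval, w // interval, interval, c)
    # Group index = position inside the interval cell -> strided samples.
    return t.permute(0, 2, 4, 1, 3, 5).reshape(
        -1, (h // interval) * (w // interval), c)
```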
A key challenge in real-world image super-resolution (SR) is to recover missing details in low-resolution (LR) images with complex unknown degradations (e.g., downsampling, noise, and compression). Most previous works restore such missing details in the image space. To cope with the high diversity of natural images, they either rely on unstable GANs that are difficult to train and prone to artifacts, or resort to explicit references from high-resolution (HR) images that are usually unavailable. In this work, we propose Feature Matching SR (FeMaSR), which restores realistic HR images in a much more compact feature space. Unlike image-space methods, our FeMaSR restores HR images by matching distorted LR image features to their distortion-free HR counterparts in our pretrained HR prior, and decoding the matched features to obtain realistic HR images. Specifically, our HR prior contains a discrete feature codebook and its associated decoder, which are pretrained on HR images with a Vector-Quantized Generative Adversarial Network (VQGAN). Notably, we incorporate a novel semantic regularization into the VQGAN to improve the quality of reconstructed images. For feature matching, we first extract LR features with an LR encoder, and then follow a simple nearest-neighbour strategy to match them with the pretrained codebook. In particular, we equip the LR encoder with residual shortcut connections to the decoder, which are critical to the optimization of the feature matching loss and also help to compensate for possible feature matching errors. Experimental results show that our method produces more realistic HR images than previous methods. Code is released at https://github.com/chaofengc/femasr.
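The nearest-neighbour matching step has a compact standard form: each LR feature vector is snapped to its closest codebook entry, and a straight-through estimator keeps the operation trainable. This is the usual VQ recipe and is assumed here rather than taken from the paper's code:

```python
import torch

def match_to_codebook(feats, codebook):
    """Nearest-neighbour feature matching: each LR feature vector is
    replaced by its closest entry in a pretrained HR codebook.
    feats: (B, C, H, W); codebook: (K, C)."""
    b, c, h, w = feats.shape
    flat = feats.permute(0, 2, 3, 1).reshape(-1, c)          # (B*H*W, C)
    idx = torch.cdist(flat, codebook).argmin(dim=1)          # nearest entry
    matched = codebook[idx].view(b, h, w, c).permute(0, 3, 1, 2)
    # Straight-through trick (standard in VQ models) keeps gradients
    # flowing to the encoder despite the non-differentiable argmin.
    return feats + (matched - feats).detach()
```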
Recent research on super-resolution has progressed with the development of deep convolutional neural networks (DCNN). In particular, residual learning techniques exhibit improved performance. In this paper, we develop an enhanced deep super-resolution network (EDSR) with performance exceeding that of current state-of-the-art SR methods. The significant performance improvement of our model is due to optimization by removing unnecessary modules from conventional residual networks. The performance is further improved by expanding the model size while stabilizing the training procedure. We also propose a new multi-scale deep super-resolution system (MDSR) and training method, which can reconstruct high-resolution images at different upscaling factors in a single model. The proposed methods show superior performance over state-of-the-art methods on benchmark datasets and prove their excellence by winning the NTIRE2017 Super-Resolution Challenge [26].
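The "unnecessary modules" removed from the conventional residual block are, in particular, the batch normalization layers; EDSR also scales the residual branch by a small constant to stabilize its widest models. A sketch of that block (the residual scale value is the commonly cited one and should be treated as an assumption):

```python
import torch.nn as nn

class EDSRResBlockSketch(nn.Module):
    """Sketch of the simplified residual block behind EDSR: batch
    normalization is dropped from the conventional residual design, and
    the residual branch is scaled by a small constant to stabilize the
    training of very wide/deep variants."""
    def __init__(self, channels, res_scale=0.1):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1))
        self.res_scale = res_scale

    def forward(self, x):
        return x + self.res_scale * self.body(x)
```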
Recently, great progress has been made in single-image super-resolution (SISR) based on deep learning technology. However, the existing methods usually require a large computational cost. Meanwhile, the activation function causes some features of the intermediate layers to be lost. Therefore, it is a challenge to make the model lightweight while reducing the impact of intermediate feature loss on the reconstruction quality. In this paper, we propose a Feature Interaction Weighted Hybrid Network (FIWHN) to alleviate the above problem. Specifically, FIWHN consists of a series of novel Wide-residual Distillation Interaction Blocks (WDIB) as the backbone, where every three WDIBs form a Feature shuffle Weighted Group (FSWG) by mutual information mixing and fusion. In addition, to mitigate the adverse effects of intermediate feature loss on the reconstruction results, we introduce well-designed Wide Convolutional Residual Weighting (WCRW) and Wide Identical Residual Weighting (WIRW) units in WDIB, and effectively cross-fuse features of different finenesses through a Wide-residual Distillation Connection (WRDC) framework and a Self-Calibrating Fusion (SCF) unit. Finally, to complement the global features lacking in the CNN model, we introduce the Transformer into our model and explore a new way of combining the CNN and Transformer. Extensive quantitative and qualitative experiments on low-level and high-level tasks show that our proposed FIWHN can achieve a good balance between performance and efficiency, and is more conducive to downstream tasks solving problems in low-pixel scenarios.
Video Super-Resolution (VSR) aims to restore high-resolution (HR) videos from low-resolution (LR) videos. Existing VSR techniques usually recover HR frames by extracting pertinent textures from nearby frames with known degradation processes. Despite significant progress, grand challenges remain in effectively extracting and transmitting high-quality textures from heavily degraded low-quality sequences affected by blur, additive noise, and compression artifacts. In this work, a novel Frequency-Transformer (FTVSR) is proposed for handling low-quality videos, which carries out self-attention in a combined space-time-frequency domain. First, video frames are split into patches and each patch is transformed into spectral maps in which each channel represents a frequency band. This permits fine-grained self-attention on each frequency band, so that real visual texture can be distinguished from artifacts. Second, a novel dual frequency attention (DFA) mechanism is proposed to capture the global frequency relations and local frequency relations, which can handle the different complicated degradation processes of real-world scenarios. Third, we explore different self-attention schemes for video processing in the frequency domain and discover that a "divided attention", which conducts a joint space-frequency attention before applying temporal-frequency attention, leads to the best video enhancement quality. Extensive experiments on three widely-used VSR datasets show that FTVSR outperforms state-of-the-art methods on different low-quality videos with clear visual margins. Code and pre-trained models are available at https://github.com/researchmm/FTVSR.
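The frequency tokenization described first, patches mapped to spectral maps with one channel per frequency band, can be sketched with an orthonormal DCT applied per patch. The transform choice and patch size below are assumptions for illustration; the dual frequency attention that follows it in FTVSR is not reproduced:

```python
import math
import torch

def dct_basis(n):
    """Orthonormal DCT-II basis matrix (n x n)."""
    k = torch.arange(n).float()
    basis = torch.cos(math.pi * (k[:, None] + 0.5) * k[None, :] / n).t()
    basis[0] *= 1 / math.sqrt(2)
    return basis * math.sqrt(2.0 / n)

def patch_to_frequency_bands(frames, p=8):
    """Hedged sketch of the frequency tokenization step: frames are cut
    into p x p patches and each patch is mapped to p*p DCT coefficients,
    i.e., one 'channel' per frequency band. frames: (B, C, H, W) with
    H and W divisible by p."""
    b, c, h, w = frames.shape
    d = dct_basis(p).to(frames)
    x = frames.view(b, c, h // p, p, w // p, p)
    # 2D DCT per patch: apply the basis along both patch axes.
    x = torch.einsum('ij,bcqjrk,kl->bcqirl', d, x, d.t())
    # -> (B, C, H/p, W/p, p*p): spatial grid of patches x frequency bands
    return x.permute(0, 1, 2, 4, 3, 5).reshape(b, c, h // p, w // p, p * p)
```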
Real-world image denoising is a practical image restoration problem that aims to obtain clean images from noisy inputs in the wild. Recently, the Vision Transformer (ViT) has exhibited a strong ability to capture long-range dependencies, and many researchers have attempted to apply ViT to image denoising tasks. However, a real-world image is an isolated frame, so ViT builds long-range dependencies over internal patches, which divides the image into patches and disarranges the noise pattern and gradient continuity. In this paper, we propose to resolve this issue with a continuous wavelet sliding Transformer that builds frequency correspondences in real-world scenes, called DnSwin. Specifically, we first extract bottom-level features from the noisy input image using a CNN encoder. The key to DnSwin is to separate high-frequency and low-frequency information from the features and to build frequency dependencies. To this end, we propose a wavelet sliding-window Transformer that utilizes the discrete wavelet transform, self-attention, and the inverse discrete wavelet transform to extract deep features. Finally, we reconstruct the deep features into denoised images using a CNN decoder. Both quantitative and qualitative evaluations on real-world denoising benchmarks demonstrate that the proposed DnSwin performs favorably against state-of-the-art methods.
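The frequency separation at the heart of DnSwin rests on the discrete wavelet transform. As the simplest concrete instance (whether DnSwin uses the Haar wavelet specifically is an assumption), a one-level 2D Haar DWT splits features into one low-frequency band and three high-frequency detail bands at half resolution, and is exactly invertible:

```python
import torch

def haar_dwt2(x):
    """One-level 2D Haar DWT: split (B, C, H, W) features (even H, W)
    into a low-frequency band and three high-frequency detail bands,
    each at half resolution. Band naming conventions vary."""
    a = x[:, :, 0::2, 0::2]
    b = x[:, :, 0::2, 1::2]
    c = x[:, :, 1::2, 0::2]
    d = x[:, :, 1::2, 1::2]
    ll = (a + b + c + d) / 2   # low-frequency approximation
    lh = (a - b + c - d) / 2   # detail band
    hl = (a + b - c - d) / 2   # detail band
    hh = (a - b - c + d) / 2   # detail band
    return ll, lh, hl, hh

def haar_idwt2(ll, lh, hl, hh):
    """Exact inverse of `haar_dwt2`, recombining the four bands."""
    a = (ll + lh + hl + hh) / 2
    b = (ll - lh + hl - hh) / 2
    c = (ll + lh - hl - hh) / 2
    d = (ll - lh - hl + hh) / 2
    x = ll.new_zeros(ll.shape[0], ll.shape[1],
                     ll.shape[2] * 2, ll.shape[3] * 2)
    x[:, :, 0::2, 0::2] = a
    x[:, :, 0::2, 1::2] = b
    x[:, :, 1::2, 0::2] = c
    x[:, :, 1::2, 1::2] = d
    return x
```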
In this paper, we present D2C-SR, a novel framework for the task of real-world image super-resolution. As an ill-posed problem, the key challenge in super-resolution-related tasks is that a given low-resolution input may admit multiple predictions. Most classical deep-learning-based approaches ignore this fundamental fact and lack explicit modeling of the underlying high-frequency distribution, which leads to blurred results. Recently, some GAN-based methods and methods that learn the super-resolution space can generate plausible textures, but they do not guarantee the accuracy of those textures and have low quantitative performance. Rethinking both, we learn the distribution of the underlying high-frequency details in a discrete form and propose a two-stage pipeline: a divergence stage followed by a convergence stage. In the divergence stage, we propose a tree-structured deep network as the divergence backbone. A divergence loss is proposed to encourage the results generated by the tree-based network to diverge into possible high-frequency representations, which is our way of discretely modeling the underlying high-frequency distribution. In the convergence stage, we assign spatial weights to fuse these divergent predictions and obtain a final output with more accurate details. Our approach provides a convenient end-to-end manner of inference. We conduct evaluations on several real-world benchmarks, including the newly proposed D2CRealSR dataset with an x8 scaling factor. Our experiments demonstrate that D2C-SR achieves better accuracy and visual improvements over state-of-the-art methods with a significantly smaller number of parameters, and our D2C structure can also be applied as a generalized structure to other methods to obtain improvements. Our code and dataset are available at https://github.com/megvii-research/d2c-sr
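The divergence objective can be pictured as two competing terms: each branch prediction is pulled toward the ground truth while branch pairs are pushed apart, so the branches settle on distinct plausible high-frequency hypotheses. The hinge margin and L1 distances below are illustrative assumptions, not the paper's exact loss:

```python
import torch
import torch.nn.functional as F

def divergence_losses(preds, target, margin=0.01):
    """Illustrative divergence-stage objective: pull every branch toward
    the ground truth, while a repulsion term discourages branches from
    collapsing onto the same output. preds: list of (B, C, H, W);
    target: (B, C, H, W). A training loop would combine the two terms
    as `recon + weight * repel` with a tuned weight."""
    recon = sum(F.l1_loss(p, target) for p in preds) / len(preds)
    repel = torch.zeros((), device=target.device)
    for i in range(len(preds)):
        for j in range(i + 1, len(preds)):
            # Hinge: penalize branch pairs closer together than `margin`.
            dist = F.l1_loss(preds[i], preds[j])
            repel = repel + torch.clamp(margin - dist, min=0.0)
    return recon, repel
```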
To address the ill-posed problem of hyperspectral image super-resolution (HSISR), a common approach is to use prior information about hyperspectral images (HSIs) as a regularization term constraining the objective function. Model-based methods with hand-crafted priors cannot fully characterize the properties of HSIs. Learning-based methods usually use a convolutional neural network (CNN) to learn implicit priors of HSIs. However, the learning ability of a CNN is limited: it considers only the spatial characteristics of HSIs and ignores the spectral characteristics, and convolution is not effective at modeling long-range dependencies. There is thus still much room for improvement. In this paper, we propose a novel HSISR method that uses a Transformer instead of a CNN to learn the prior of HSIs. Specifically, we first use the proximal gradient algorithm to solve the HSISR model, and then use an unfolding network to simulate the iterative solution process. The self-attention layers of the Transformer give it the ability to model global spatial interactions. In addition, we add a 3D-CNN behind the Transformer layers to better explore the spatio-spectral correlations of HSIs. Both quantitative and visual results on two widely used HSI datasets and a real-world dataset demonstrate that the proposed method achieves considerable gains over all mainstream algorithms, including the most competitive conventional methods and recently proposed deep-learning-based methods.
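The unfolding scheme alternates a gradient step on the data-fidelity term 0.5·||Ax − y||² with a learned proximal mapping. The sketch below uses a placeholder CNN where the paper places its Transformer + 3D-CNN prior, and treats the degradation operator A and its adjoint as user-supplied callables:

```python
import torch
import torch.nn as nn

class UnrolledPGDSketch(nn.Module):
    """Generic sketch of deep unfolding with the proximal gradient
    algorithm: each stage takes a gradient step on the data-fidelity
    term and then applies a learned proximal operator (here a
    placeholder CNN standing in for the paper's Transformer prior)."""
    def __init__(self, channels, stages=5):
        super().__init__()
        self.steps = nn.Parameter(torch.full((stages,), 0.5))  # step sizes
        self.prox = nn.ModuleList(
            nn.Sequential(nn.Conv2d(channels, channels, 3, padding=1),
                          nn.ReLU(inplace=True),
                          nn.Conv2d(channels, channels, 3, padding=1))
            for _ in range(stages))

    def forward(self, y, A, At, x0):
        # A: forward degradation operator; At: its adjoint (callables
        # mapping HR <-> LR so shapes stay consistent); x0: initial guess.
        x = x0
        for step, net in zip(self.steps, self.prox):
            z = x - step * At(A(x) - y)   # gradient step on data fidelity
            x = z + net(z)                # learned proximal map, residual form
        return x
```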