In practical applications of restoring low-resolution gray-scale images, we usually need to run three separate processes for the target device: image colorization, super-resolution, and down-sampling. However, this pipeline is redundant and inefficient when the three run as independent processes, since some internal features could be shared. We therefore present an efficient paradigm to perform Simultaneous image Colorization and Super-resolution (SCS) and propose an end-to-end SCSNet to achieve this goal. The method consists of two parts: a colorization branch that learns color information, employing the proposed plug-and-play \emph{Pyramid Valve Cross Attention} (PVCAttn) module to aggregate feature maps between source and reference images; and a super-resolution branch that integrates color and texture information to predict target images, using the designed \emph{Continuous Pixel Mapping} (CPM) module to predict high-resolution images at continuous magnifications. Moreover, our SCSNet supports both automatic and referential modes, which is more flexible for practical applications. Abundant experiments demonstrate the superiority of our method over state-of-the-art approaches for generating authentic images, e.g., decreasing the best scores by 1.8$\downarrow$ and 5.1$\downarrow$ on average in automatic and referential modes, respectively, while owning fewer parameters (more than $\times$2$\downarrow$) and running faster (more than $\times$3$\uparrow$).
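The abstract does not spell out how CPM achieves continuous magnification, so the following is only a minimal PyTorch sketch of one standard way to upsample at arbitrary scale factors (coordinate-conditioned prediction); all layer names and shapes here are assumptions, not the SCSNet implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContinuousUpsampler(nn.Module):
    """Predict each HR pixel from the nearest LR feature plus its continuous
    (x, y) coordinate, so any output size works."""
    def __init__(self, feat_dim=64, out_ch=3):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim + 2, 256), nn.ReLU(),
            nn.Linear(256, out_ch),
        )

    def forward(self, feat, out_h, out_w):
        b = feat.size(0)
        ys = torch.linspace(-1, 1, out_h, device=feat.device)
        xs = torch.linspace(-1, 1, out_w, device=feat.device)
        gy, gx = torch.meshgrid(ys, xs, indexing="ij")
        grid = torch.stack([gx, gy], dim=-1).unsqueeze(0).expand(b, -1, -1, -1)
        # Nearest LR feature vector for every target location.
        sampled = F.grid_sample(feat, grid, mode="nearest", align_corners=False)
        sampled = sampled.permute(0, 2, 3, 1)            # (B, H, W, C)
        # Condition the per-pixel MLP on the continuous coordinate as well.
        rgb = self.mlp(torch.cat([sampled, grid], dim=-1))
        return rgb.permute(0, 3, 1, 2)                   # (B, 3, H, W)

# Any magnification, e.g. roughly x2.6:
# ContinuousUpsampler()(torch.randn(1, 64, 32, 48), 83, 125)
```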
Automatic image colorization is a particularly challenging problem. Due to the high ill-posedness of the problem and multi-modal uncertainty, directly training a deep neural network usually leads to incorrect semantic colors and low color richness. Existing transformer-based methods can deliver better results but highly depend on hand-crafted dataset-level empirical distribution priors. In this work, we propose DDColor, a new end-to-end method with dual decoders, for image colorization. More specifically, we design a multi-scale image decoder and a transformer-based color decoder. The former manages to restore the spatial resolution of the image, while the latter establishes the correlation between semantic representations and color queries via cross-attention. The two decoders cooperate to learn semantic-aware color embeddings by leveraging multi-scale visual features. With the help of these two decoders, our method succeeds in producing semantically consistent and visually plausible colorization results without any additional priors. In addition, a simple but effective colorfulness loss is introduced to further improve the color richness of generated results. Our extensive experiments demonstrate that the proposed DDColor achieves significantly superior performance to existing state-of-the-art works both quantitatively and qualitatively. Codes will be made publicly available.
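For intuition, a colorfulness objective can be built by negating the classic Hasler-Susstrunk colorfulness statistic so that minimizing the loss increases color richness; the sketch below is a hedged illustration, and the exact DDColor formulation may differ.

```python
import torch

def colorfulness_loss(rgb: torch.Tensor) -> torch.Tensor:
    """rgb: (B, 3, H, W) in [0, 1]. Returns a scalar loss."""
    r, g, b = rgb[:, 0], rgb[:, 1], rgb[:, 2]
    rg = r - g                       # red-green opponent channel
    yb = 0.5 * (r + g) - b           # yellow-blue opponent channel
    std = torch.sqrt(rg.var(dim=(1, 2)) + yb.var(dim=(1, 2)) + 1e-8)
    mean = torch.sqrt(rg.mean(dim=(1, 2)) ** 2 + yb.mean(dim=(1, 2)) ** 2 + 1e-8)
    colorfulness = std + 0.3 * mean  # Hasler-Susstrunk statistic
    return -colorfulness.mean()      # more colorful => lower loss
```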
Face Restoration (FR) aims to restore High-Quality (HQ) faces from Low-Quality (LQ) input images, which is a domain-specific image restoration problem in the low-level computer vision area. The early face restoration methods mainly use statistical priors and degradation models, which struggle to meet the requirements of real-world applications in practice. In recent years, face restoration has witnessed great progress after stepping into the deep learning era. However, few works have studied deep learning-based face restoration methods systematically. Thus, this paper comprehensively surveys recent advances in deep learning techniques for face restoration. Specifically, we first summarize different problem formulations and analyze the characteristics of face images. Second, we discuss the challenges of face restoration. Concerning these challenges, we present a comprehensive review of existing FR methods, including prior-based methods and deep learning-based methods. Then, we explore developed techniques in the task of FR covering network architectures, loss functions, and benchmark datasets. We also conduct a systematic benchmark evaluation on representative methods. Finally, we discuss future directions, including network designs, metrics, benchmark datasets, applications, etc. We also provide an open-source repository for all the discussed methods, which is available at https://github.com/TaoWangzj/Awesome-Face-Restoration.
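For concreteness, the degradation models mentioned above are commonly instantiated in this literature as a blur, down-sample, noise, and JPEG pipeline (a standard formulation, not specific to any single surveyed method):

$$ I_{LQ} = \big[ (I_{HQ} \otimes k_{\sigma}) \downarrow_{s} + n_{\delta} \big]_{\mathrm{JPEG}_{q}}, $$

where $k_{\sigma}$ is a blur kernel, $\downarrow_{s}$ denotes down-sampling with scale factor $s$, $n_{\delta}$ is additive noise, and $\mathrm{JPEG}_{q}$ is compression with quality factor $q$.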
Face super-resolution (FSR), also known as face hallucination, which aims to enhance the resolution of low-resolution (LR) face images to generate high-resolution (HR) face images, is a domain-specific image super-resolution problem. Recently, FSR has received considerable attention and witnessed dazzling advances with the development of deep learning techniques. To date, few studies have summarized deep learning-based FSR. In this survey, we present a comprehensive review of deep learning-based FSR methods in a systematic manner. First, we summarize the problem formulation of FSR and introduce popular evaluation metrics and loss functions. Second, we elaborate on the facial characteristics and popular datasets used in FSR. Third, we roughly categorize existing methods according to how they utilize facial characteristics. In each category, we start with a general description of the design principles, then present an overview of representative methods, and discuss their pros and cons. Fourth, we evaluate the performance of some state-of-the-art methods. Fifth, joint FSR and other tasks, as well as FSR-related applications, are roughly introduced. Finally, we envision the prospects of further technological advancement in this field. A curated list of papers and resources is available at \url{https://github.com/junjun-jiang/face-hallucination-benchmark}.
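As a concrete reference for the evaluation metrics mentioned above, PSNR, the most widely reported one, follows the standard definition below (a generic implementation, not tied to any surveyed method):

```python
import torch

def psnr(sr: torch.Tensor, hr: torch.Tensor, max_val: float = 1.0) -> torch.Tensor:
    """Peak signal-to-noise ratio in dB for images scaled to [0, max_val]."""
    mse = torch.mean((sr - hr) ** 2)
    return 10.0 * torch.log10(max_val ** 2 / mse)
```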
Informative features play a crucial role in the single image super-resolution task. Channel attention has been demonstrated to be effective for preserving information-rich features in each layer. However, channel attention treats each convolution layer as a separate process, missing the correlations among different layers. To address this problem, we propose a new holistic attention network (HAN), which consists of a layer attention module (LAM) and a channel-spatial attention module (CSAM), to model the holistic interdependencies among layers, channels, and positions. Specifically, the proposed LAM adaptively emphasizes hierarchical features by considering correlations among layers. Meanwhile, CSAM learns the confidence at all the positions of each channel to selectively capture more informative features. Extensive experiments demonstrate that the proposed HAN performs favorably against state-of-the-art single image super-resolution approaches.
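The layer-attention idea can be sketched as follows: features from N layers are stacked and reweighted by an N-by-N correlation matrix. This is a simplified illustration of the mechanism only; HAN's actual LAM design may differ.

```python
import torch
import torch.nn as nn

class LayerAttention(nn.Module):
    def __init__(self):
        super().__init__()
        self.scale = nn.Parameter(torch.zeros(1))   # learnable residual weight

    def forward(self, feats):                       # feats: (B, N, C, H, W)
        b, n, c, h, w = feats.shape
        flat = feats.reshape(b, n, -1)              # one vector per layer
        attn = torch.softmax(flat @ flat.transpose(1, 2) / (c * h * w) ** 0.5,
                             dim=-1)                # (B, N, N) layer correlations
        out = (attn @ flat).reshape(b, n, c, h, w)  # mix layers by correlation
        return feats + self.scale * out
```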
Despite their success on benchmark datasets, most advanced face super-resolution models perform poorly in real scenarios due to the significant domain gap between real images and synthetic training pairs. To tackle this problem, we propose a novel domain-adaptive degradation network for face super-resolution in the wild. This degradation network predicts a flow field along with an intermediate low-resolution image. The degraded counterpart is then generated by warping the intermediate image. With the preference for capturing motion blur, such a model performs better at preserving identity consistency between the original image and its degraded version. We further present a self-conditioned block for the super-resolution network. This block takes the input image as a conditional term to effectively utilize facial structure information, eliminating the dependence on explicit priors such as facial landmarks or boundaries. Our model achieves state-of-the-art performance on CelebA and real-world face datasets. The former demonstrates the powerful generative ability of our proposed architecture, while the latter shows strong identity consistency and perceptual quality in real-world scenarios.
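The warping step can be illustrated directly: given the intermediate image and a predicted flow field, the degraded counterpart is obtained by sampling along the flow. The sketch below uses standard bilinear warping; shape conventions are our assumptions.

```python
import torch
import torch.nn.functional as F

def warp_by_flow(img: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
    """img: (B, C, H, W); flow: (B, 2, H, W) pixel offsets (dx, dy)."""
    b, _, h, w = img.shape
    ys = torch.arange(h, device=img.device, dtype=img.dtype)
    xs = torch.arange(w, device=img.device, dtype=img.dtype)
    gy, gx = torch.meshgrid(ys, xs, indexing="ij")
    # Add predicted offsets, then normalize to grid_sample's [-1, 1] range.
    x = (gx + flow[:, 0]) / (w - 1) * 2 - 1
    y = (gy + flow[:, 1]) / (h - 1) * 2 - 1
    grid = torch.stack([x, y], dim=-1)               # (B, H, W, 2)
    return F.grid_sample(img, grid, mode="bilinear", align_corners=True)
```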
Reference-based image super-resolution (RefSR) is a promising SR branch and has shown great potential in overcoming the limitations of single image super-resolution. While previous state-of-the-art RefSR methods mainly focus on improving the efficacy and robustness of reference feature transfer, it is generally overlooked that a well-reconstructed SR image should enable better SR reconstruction for its similar LR images when it is used as the reference. Therefore, in this work, we propose a reciprocal learning framework that can appropriately leverage this fact to reinforce the learning of a RefSR network. Besides, we deliberately design a progressive feature alignment and selection module for further improving the RefSR task. The newly proposed module aligns reference-input images at multi-scale feature spaces and performs reference-aware feature selection in a progressive manner, so that more precise reference features can be transferred into the input features and the network capability is enhanced. Our reciprocal learning paradigm is model-agnostic and can be applied to arbitrary RefSR models. We empirically show that multiple recent state-of-the-art RefSR models can be consistently improved with our reciprocal learning paradigm. Furthermore, our proposed model together with the reciprocal learning strategy sets new state-of-the-art performances on multiple benchmarks.
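At a high level, the reciprocal idea can be sketched as below, assuming a RefSR callable that accepts a `ref` keyword (a hypothetical signature, not the paper's API): the SR result of one image is reused as the reference for a similar LR image, and both reconstructions are supervised.

```python
import torch

def reciprocal_step(refsr, lr_a, lr_b, hr_a, hr_b, l1=torch.nn.L1Loss()):
    """lr_a/lr_b are similar LR images with ground truths hr_a/hr_b."""
    sr_a = refsr(lr_a, ref=hr_b)             # ordinary RefSR forward pass
    sr_b = refsr(lr_b, ref=sr_a)             # reuse sr_a as the reference
    return l1(sr_a, hr_a) + l1(sr_b, hr_b)   # reciprocal supervision
```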
Image super-resolution (SR) serves as a fundamental tool for the processing and transmission of multimedia data. Recently, Transformer-based models have achieved competitive performance in image SR. They divide images into fixed-size patches and apply self-attention on these patches to model long-range dependencies among pixels. However, this architectural design originated in high-level vision tasks and lacks design guidelines drawn from SR domain knowledge. In this paper, we aim to design a new attention block whose insights stem from the interpretation of the Local Attribution Map (LAM) for SR networks. Specifically, LAM presents a hierarchical importance map where the most important pixels are located in a fine area of a patch and some less important pixels are spread across a coarse area of the whole image. To access pixels in the coarse area, instead of using a very large patch size, we propose a lightweight Global Pixel Access (GPA) module that applies cross-attention with the most similar patch in an image. In the fine area, we use an Intra-Patch Self-Attention (IPSA) module to model long-range pixel dependencies in a local patch, and then a $3\times3$ convolution is applied to process the finest details. In addition, a Cascaded Patch Division (CPD) strategy is proposed to enhance the perceptual quality of recovered images. Extensive experiments suggest that our method outperforms state-of-the-art lightweight SR methods by a large margin. Code is available at https://github.com/passerer/HPINet.
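The GPA idea of attending to the most similar patch can be sketched as a single hard match per patch (a simplification; the HPINet implementation may differ):

```python
import torch
import torch.nn.functional as F

def global_patch_attention(patches: torch.Tensor) -> torch.Tensor:
    """patches: (B, N, D) flattened patch embeddings."""
    normed = F.normalize(patches, dim=-1)
    sim = normed @ normed.transpose(1, 2)               # (B, N, N) cosine sims
    sim = sim - 2.0 * torch.eye(sim.size(-1), device=sim.device)  # mask self
    idx = sim.argmax(dim=-1)                            # most similar patch id
    matched = torch.gather(patches, 1, idx.unsqueeze(-1).expand_as(patches))
    # With a single key/value, cross-attention reduces to merging the match.
    return patches + matched
```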
We study image super-resolution (SR), which aims to recover realistic textures from a low-resolution (LR) image. Recent progress has been made by taking high-resolution images as references (Ref), so that relevant textures can be transferred to LR images. However, existing SR approaches neglect to use attention mechanisms to transfer high-resolution (HR) textures from Ref images, which limits these approaches in challenging cases. In this paper, we propose a novel Texture Transformer Network for Image Super-Resolution (TTSR), in which the LR and Ref images are formulated as queries and keys in a transformer, respectively. TTSR consists of four closely related modules optimized for image generation tasks, including a learnable DNN-based texture extractor, a relevance embedding module, a hard-attention module for texture transfer, and a soft-attention module for texture synthesis. Such a design encourages joint feature learning across LR and Ref images, in which deep feature correspondences can be discovered by attention, and thus accurate texture features can be transferred. The proposed texture transformer can be further stacked in a cross-scale way, which enables texture recovery from different levels (e.g., from 1× to 4× magnification). Extensive experiments show that TTSR achieves significant improvements over state-of-the-art approaches on both quantitative and qualitative evaluations. The source code can be downloaded at https://github.com/researchmm/TTSR.
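Following the abstract's formulation, relevance is a normalized inner product between LR queries and Ref keys, and hard attention copies the best-matching Ref feature. The sketch below illustrates that step; tensor shapes are our assumptions.

```python
import torch
import torch.nn.functional as F

def hard_attention_transfer(q, k, v):
    """q: (B, C, Nq) LR features; k, v: (B, C, Nk) Ref keys/values."""
    q = F.normalize(q, dim=1)
    k = F.normalize(k, dim=1)
    rel = q.transpose(1, 2) @ k                 # (B, Nq, Nk) relevance map
    conf, idx = rel.max(dim=-1)                 # hard attention: best match
    transferred = torch.gather(v, 2, idx.unsqueeze(1).expand(-1, v.size(1), -1))
    # conf doubles as a soft-attention weight on the transferred texture.
    return transferred * conf.unsqueeze(1)
```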
Blind face restoration usually encounters face inputs of various scales, especially in the real world. However, most current works support faces at a specific scale only, which limits their applicability in real scenarios. In this work, we propose a novel scale-aware blind face restoration framework named FaceFormer, which formulates facial feature restoration as a scale-aware transformation. The proposed Facial Feature Up-sampling (FFUP) module dynamically generates upsampling filters based on the given scale ratio, which helps our network adapt to arbitrary face scales. Moreover, we propose a Facial Feature Embedding (FFE) module that leverages a transformer to hierarchically extract diverse and robust facial latent representations. As a result, our FaceFormer restores faithful facial details with realistic and symmetrical facial components. Extensive experiments demonstrate that our method, trained on synthetic datasets, generalizes better to natural low-quality images than current state-of-the-art methods.
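The abstract does not detail FFUP, so the following is a hedged sketch of one way to generate upsampling filters from a scale ratio (Meta-SR-style depthwise filters; all names and shapes are assumptions):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ScaleAwareUpsample(nn.Module):
    def __init__(self, channels=64, ksize=3):
        super().__init__()
        self.channels, self.ksize = channels, ksize
        self.filter_gen = nn.Sequential(          # scale ratio -> filter weights
            nn.Linear(1, 64), nn.ReLU(),
            nn.Linear(64, channels * ksize * ksize),
        )

    def forward(self, feat, scale: float):
        b, c, h, w = feat.shape
        s = torch.tensor([[scale]], device=feat.device)
        # One depthwise kernel per channel, generated from the scale ratio.
        kernel = self.filter_gen(s).reshape(c, 1, self.ksize, self.ksize)
        feat = F.conv2d(feat, kernel, padding=self.ksize // 2, groups=c)
        out_hw = (int(h * scale), int(w * scale))
        return F.interpolate(feat, size=out_hw, mode="bilinear",
                             align_corners=False)
```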
This paper presents a novel end-to-end dynamic time-lapse video generation framework, named DTVNet, to generate diverse time-lapse videos from a single landscape image conditioned on normalized motion vectors. The proposed DTVNet consists of two submodules: an \emph{Optical Flow Encoder} (OFE) and a \emph{Dynamic Video Generator} (DVG). The OFE maps a sequence of optical flow maps to a \emph{normalized motion vector} that encodes the motion information of the generated video. The DVG contains motion and content streams that learn from the motion vector and the single landscape image, as well as an encoder to learn shared content features and a decoder to construct video frames with the corresponding motion. Specifically, the \emph{motion stream} introduces multiple \emph{Adaptive Instance Normalization} (AdaIN) layers to integrate multi-level motion information for controlling object motion. In the testing stage, videos with the same content but various motion can be generated from only one input image. In addition, we propose a high-resolution scenic time-lapse video dataset, named Quick-Sky-Time, to evaluate different approaches, which can be viewed as a new benchmark for high-quality scenic image and video generation tasks. We further conduct experiments on the Sky Time-lapse, Beach, and Quick-Sky-Time datasets. The results demonstrate the superiority of our approach over state-of-the-art methods in generating high-quality and diverse dynamic videos.
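AdaIN itself is standard and is sketched below; the linear mapping from the motion vector to the scale/shift parameters is our assumption about how the motion stream conditions the content features.

```python
import torch
import torch.nn as nn

class AdaIN(nn.Module):
    def __init__(self, motion_dim, channels):
        super().__init__()
        self.to_gamma_beta = nn.Linear(motion_dim, channels * 2)

    def forward(self, x, motion_vec):
        # Instance-normalize content features, then re-style them with
        # scale/shift parameters predicted from the motion vector.
        mean = x.mean(dim=(2, 3), keepdim=True)
        std = x.std(dim=(2, 3), keepdim=True) + 1e-6
        gamma, beta = self.to_gamma_beta(motion_vec).chunk(2, dim=-1)
        return gamma[..., None, None] * (x - mean) / std + beta[..., None, None]
```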
Deep neural networks have greatly promoted the performance of single image super-resolution (SISR). Conventional methods still resort to restoring a single high-resolution (HR) solution based only on the input of the image modality. However, image-level information is insufficient to predict adequate details and photo-realistic visual quality when facing large upscaling factors (x8, x16). In this paper, we propose a new perspective that regards SISR as a semantic image detail enhancement problem, aiming to generate semantically reasonable HR images that are faithful to the ground truth. To enhance the semantic accuracy and visual quality of reconstructed images, we explore multi-modal fusion learning in SISR by proposing a Text-Guided Super-Resolution (TGSR) framework, which can effectively utilize information from both text and image modalities. Different from existing methods, the proposed TGSR generates HR image details that match the text descriptions through a coarse-to-fine process. Extensive experiments and ablation studies demonstrate the effectiveness of TGSR, which exploits text references to recover realistic images.
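One simple way to realize text-image fusion is to broadcast a sentence embedding over the spatial grid and merge it with a 1x1 convolution; the sketch below is purely illustrative and not the TGSR architecture.

```python
import torch
import torch.nn as nn

class TextImageFusion(nn.Module):
    def __init__(self, img_ch=64, txt_dim=512):
        super().__init__()
        self.fuse = nn.Conv2d(img_ch + txt_dim, img_ch, kernel_size=1)

    def forward(self, img_feat, txt_emb):
        # img_feat: (B, C, H, W); txt_emb: (B, txt_dim) sentence embedding.
        b, _, h, w = img_feat.shape
        txt = txt_emb[:, :, None, None].expand(-1, -1, h, w)
        return self.fuse(torch.cat([img_feat, txt], dim=1))
```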
We consider the single image super-resolution (SISR) problem, where a high-resolution (HR) image is generated from a low-resolution (LR) input. Recently, generative adversarial networks (GANs) have become popular for hallucinating details. Most methods along this line rely on a predefined single-LR-single-HR mapping, which is not flexible enough for the SISR task. Moreover, GAN-generated fake details may often undermine the realism of the whole image. We address these issues by proposing best-buddy GANs (Beby-GAN) for rich-detail SISR. Relaxing the immutable one-to-one constraint, we allow estimated patches to dynamically seek the best supervision during training, which is beneficial to producing more reasonable details. Besides, we propose a region-aware adversarial learning strategy that directs our model to focus on generating details for textured areas adaptively. Extensive experiments justify the effectiveness of our method. An ultra-high-resolution 4K dataset is also constructed to facilitate future super-resolution research.
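The relaxed supervision can be sketched as a nearest-patch loss: each estimated patch is compared against its closest ground-truth patch rather than a fixed one. The global search below is a simplification for clarity; the paper's candidate set is likely restricted to a local neighborhood.

```python
import torch
import torch.nn.functional as F

def best_buddy_loss(sr: torch.Tensor, hr: torch.Tensor, patch: int = 3) -> torch.Tensor:
    """sr, hr: (B, C, H, W). O(L^2) over all patches; for illustration only."""
    sr_p = F.unfold(sr, patch, padding=patch // 2)      # (B, C*p*p, L)
    hr_p = F.unfold(hr, patch, padding=patch // 2)
    # Distance from every SR patch to every HR patch, per image.
    d = torch.cdist(sr_p.transpose(1, 2), hr_p.transpose(1, 2))  # (B, L, L)
    return d.min(dim=-1).values.mean()                  # closest "buddy" wins
```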
Burst super-resolution (SR) offers the possibility of restoring rich details from low-quality images. However, since low-resolution (LR) images in practical applications have multiple complicated and unknown degradations, existing networks designed for the non-blind setting (e.g., bicubic down-sampling) usually suffer a severe performance drop when recovering high-resolution (HR) images. Handling multiple misaligned noisy raw inputs is also challenging. In this paper, we address the problem of reconstructing HR images from raw burst sequences acquired by modern handheld devices. The central idea is a kernel-guided strategy that solves burst SR in two steps: kernel modeling and HR restoring. The former estimates burst kernels from the raw inputs, while the latter predicts the super-resolved image based on the estimated kernels. Furthermore, we introduce a kernel-aware deformable alignment module that effectively aligns the raw images by taking their blurriness priors into consideration. Extensive experiments on synthetic and real-world datasets demonstrate that the proposed method performs favorably against state-of-the-art approaches on the burst SR problem.
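At a high level, the two-step strategy can be organized as a kernel-estimation network feeding a kernel-conditioned restorer; the module names below are placeholders, not the paper's API.

```python
import torch.nn as nn

class KernelGuidedBurstSR(nn.Module):
    def __init__(self, kernel_net: nn.Module, restore_net: nn.Module):
        super().__init__()
        self.kernel_net = kernel_net     # step 1: estimate burst kernels
        self.restore_net = restore_net   # step 2: kernel-conditioned SR

    def forward(self, burst):            # burst: (B, T, C, H, W) raw frames
        kernels = self.kernel_net(burst)
        return self.restore_net(burst, kernels)
```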
Deep Convolutional Neural Networks (DCNNs) have exhibited impressive performance on image super-resolution tasks. However, these deep learning-based super-resolution methods perform poorly in real-world super-resolution tasks, where paired high-resolution and low-resolution images are unavailable and the low-resolution images are degraded by complicated and unknown kernels. To break these limitations, we propose the Unsupervised Bi-directional Cycle Domain Transfer Learning-based Generative Adversarial Network (UBCDTL-GAN), which consists of an Unsupervised Bi-directional Cycle Domain Transfer Network (UBCDTN) and a Semantic Encoder-guided Super-Resolution Network (SESRN). First, the UBCDTN is able to produce an approximated real-like LR image by transferring the LR image from an artificially degraded domain to the real-world LR image domain. Second, the SESRN has the ability to super-resolve the approximated real-like LR image to a photo-realistic HR image. Extensive experiments on unpaired real-world image benchmark datasets demonstrate that the proposed method achieves superior performance compared to state-of-the-art methods.
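The bi-directional transfer follows a CycleGAN-style cycle-consistency objective, sketched below with placeholder generator names (UBCDTN specifics may differ):

```python
import torch

def cycle_loss(G_syn2real, G_real2syn, lr_syn, lr_real, l1=torch.nn.L1Loss()):
    """Map each domain to the other and back; both round trips must reconstruct."""
    fake_real = G_syn2real(lr_syn)    # synthetic LR -> real-like LR
    fake_syn = G_real2syn(lr_real)    # real LR -> synthetic-like LR
    return l1(G_real2syn(fake_real), lr_syn) + l1(G_syn2real(fake_syn), lr_real)
```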
The Super-Resolution Generative Adversarial Network (SRGAN) [1] is a seminal work that is capable of generating realistic textures during single image super-resolution. However, the hallucinated details are often accompanied by unpleasant artifacts. To further enhance the visual quality, we thoroughly study three key components of SRGAN (network architecture, adversarial loss, and perceptual loss) and improve each of them to derive an Enhanced SRGAN (ESRGAN). In particular, we introduce the Residual-in-Residual Dense Block (RRDB) without batch normalization as the basic network building unit. Moreover, we borrow the idea from relativistic GAN [2] to let the discriminator predict relative realness instead of the absolute value. Finally, we improve the perceptual loss by using the features before activation, which could provide stronger supervision for brightness consistency and texture recovery. Benefiting from these improvements, the proposed ESRGAN achieves consistently better visual quality with more realistic and natural textures than SRGAN and won first place in the PIRM2018-SR Challenge [3]. The code is available at https://github.com/xinntao/ESRGAN.
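The relativistic average discriminator borrowed from [2] has a standard formulation, sketched below: the discriminator scores whether real data looks relatively more realistic than the average fake, and vice versa.

```python
import torch
import torch.nn.functional as F

def ragan_d_loss(d_real: torch.Tensor, d_fake: torch.Tensor) -> torch.Tensor:
    """d_real/d_fake: raw discriminator logits for real and generated batches."""
    real_rel = d_real - d_fake.mean()   # real vs. average fake
    fake_rel = d_fake - d_real.mean()   # fake vs. average real
    return (F.binary_cross_entropy_with_logits(real_rel, torch.ones_like(real_rel))
            + F.binary_cross_entropy_with_logits(fake_rel, torch.zeros_like(fake_rel)))
```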
Colorization has attracted increasing interest in recent years. Classic reference-based methods usually rely on external color images for plausible results, and retrieving such exemplars inevitably requires a large image database or online search engines. Recent deep-learning-based methods can colorize images automatically at a low cost, but unsatisfactory artifacts and incoherent colors always accompany the results. In this work, we propose GCP-Colorization, which leverages the rich and diverse color priors encapsulated in a pretrained Generative Adversarial Network (GAN) for automatic colorization. Specifically, we first "retrieve" matched features (similar to exemplars) via a GAN encoder and then incorporate these features into the colorization process with feature modulations. Thanks to the powerful generative color prior (GCP) and delicate designs, our GCP-Colorization can produce vivid colors with a single forward pass. Moreover, it is convenient to obtain diverse results by modifying the GAN latent code. GCP-Colorization also inherits the merit of interpretable controls of GANs and can attain controllable and smooth transitions by walking through the GAN latent space. Extensive experiments and user studies demonstrate that GCP-Colorization achieves superior performance over previous works. Code is available at https://github.com/tothebeginning/gcp-colorization.
In recent years, facial semantic guidance (including facial landmarks, facial heatmaps, and facial parsing maps) and facial generative adversarial networks (GANs) have been widely used in blind face restoration (BFR). Although existing BFR methods achieve good performance in ordinary cases, these solutions have limited resilience when applied to face images with severe degradation and pose variations (e.g., looking right, looking left, laughing, etc. in real-world scenarios). In this work, we propose a well-designed blind face restoration network with a generative facial prior. The proposed network mainly consists of an asymmetric codec and a StyleGAN2 prior network. In the asymmetric codec, we adopt a mixed multi-path residual block (MMRB) to gradually extract the weak texture features of the input image, which better preserves the original facial features and avoids excessive hallucination. The MMRB can also be plugged into other networks. Besides, thanks to the rich and diverse facial priors of the StyleGAN2 model, we adopt a fine-tuning approach to flexibly restore natural and realistic facial details. Moreover, a novel self-supervised training strategy is specially designed for the face restoration task to bring the distribution closer to the target and maintain training stability. Extensive experiments on synthetic and real-world datasets demonstrate that our model achieves superior performance on face restoration and face super-resolution tasks.
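The abstract does not specify MMRB's internals; below is a hypothetical multi-path residual block (parallel convolutions with different receptive fields, mixed and added back to the input) intended only to illustrate the general pattern:

```python
import torch
import torch.nn as nn

class MultiPathResidualBlock(nn.Module):
    def __init__(self, ch=64):
        super().__init__()
        self.paths = nn.ModuleList([          # parallel paths, growing dilation
            nn.Conv2d(ch, ch, 3, padding=d, dilation=d) for d in (1, 2, 3)
        ])
        self.mix = nn.Conv2d(3 * ch, ch, 1)   # 1x1 conv mixes the paths
        self.act = nn.LeakyReLU(0.2, inplace=True)

    def forward(self, x):
        feats = [self.act(p(x)) for p in self.paths]
        return x + self.mix(torch.cat(feats, dim=1))
```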
With the advent of deep learning (DL), super-resolution (SR) has become a thriving research field. Nevertheless, despite promising results, the field still faces challenges that require further research, e.g., allowing flexible upsampling, more effective loss functions, and better evaluation metrics. We review the domain of SR in light of recent advances and examine state-of-the-art models such as diffusion-based (DDPM) and transformer-based SR models. We present a critical discussion of contemporary strategies used in SR and identify promising yet unexplored research directions. We complement previous surveys by incorporating the latest developments in the field, such as uncertainty-driven losses, wavelet networks, neural architecture search, novel normalization methods, and the latest evaluation techniques. We also include several visualizations of the models and methods throughout each chapter to facilitate a global understanding of trends in the field. Ultimately, this review aims to help researchers push the boundaries of DL applied to SR.
Joint super-resolution and inverse tone-mapping (SR-ITM) aims to enhance the visual quality of videos that have quality deficiencies in both resolution and dynamic range. The problem arises when low-resolution standard-dynamic-range (LR SDR) videos are watched on 4K high-dynamic-range (HDR) TVs. Previous methods that rely on learning local information typically fall short in preserving color conformity and long-range structural similarity, resulting in unnatural color transitions and texture artifacts. To tackle these challenges, we propose a global priors guided modulation network (GPGMNet) for joint SR-ITM. In particular, we design a global priors extraction module (GPEM) to extract color conformity priors and structural similarity priors that are beneficial to the ITM and SR tasks, respectively. To further exploit the global priors and preserve spatial information, we devise multiple global priors guided spatial-wise modulation blocks (GSMBs) with a few parameters for intermediate feature modulation, in which the modulation parameters are generated from the shared global priors and the spatial feature maps of a spatial pyramid convolution block (SPCB). With these elaborate designs, GPGMNet can achieve higher visual quality with lower computational complexity. Extensive experiments demonstrate that our proposed GPGMNet outperforms state-of-the-art methods. Specifically, our model surpasses the previous state of the art by 0.64 dB in PSNR with 69$\%$ fewer parameters and a 3.1$\times$ speedup. The code will be released soon.
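The modulation described above resembles spatial feature transform (SFT): scale and shift maps are generated from the shared global prior and spatial features. The sketch below is an assumption-laden rendering of that pattern, not the GPGMNet code.

```python
import torch
import torch.nn as nn

class GuidedSpatialModulation(nn.Module):
    def __init__(self, feat_ch=64, prior_dim=128):
        super().__init__()
        self.to_scale = nn.Conv2d(feat_ch + prior_dim, feat_ch, 1)
        self.to_shift = nn.Conv2d(feat_ch + prior_dim, feat_ch, 1)

    def forward(self, feat, global_prior, spatial_feat):
        # feat, spatial_feat: (B, feat_ch, H, W); global_prior: (B, prior_dim).
        b, _, h, w = feat.shape
        prior = global_prior[:, :, None, None].expand(-1, -1, h, w)
        cond = torch.cat([spatial_feat, prior], dim=1)
        return feat * (1 + self.to_scale(cond)) + self.to_shift(cond)
```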