We propose an enhanced multi-scale network, dubbed GridDehazeNet+, for single image dehazing. The proposed dehazing method does not rely on the atmospheric scattering model (ASM), and an explanation is offered as to why it is not necessarily beneficial to perform the dimension reduction offered by that model. GridDehazeNet+ consists of three modules: pre-processing, backbone, and post-processing. The trainable pre-processing module can generate learned inputs with better diversity and more pertinent features, compared with the derived inputs produced by hand-selected pre-processing methods. The backbone module implements multi-scale estimation with two major enhancements: 1) a novel grid structure that effectively alleviates the bottleneck issue via dense connections across different scales; 2) a spatial-channel attention block that can facilitate adaptive fusion by consolidating dehazing-relevant features. The post-processing module helps to reduce artifacts in the final output. Due to domain shift, a model trained on synthetic data may not generalize well on real data. To address this issue, we shape the distribution of synthetic data to match that of real data, and use the resulting translated data to finetune our network. We also propose a novel intra-task knowledge transfer mechanism that can memorize and take advantage of synthetic-domain knowledge to assist the learning process on translated data. Experimental results demonstrate that the proposed method outperforms the state-of-the-art on several synthetic dehazing datasets, and achieves superior performance on real-world hazy images after finetuning.
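The abstract does not spell out the attention block's internals; as a rough illustration of what a spatial-channel attention block does, the NumPy sketch below gates a feature map first per channel, then per pixel. All weights here are random stand-ins for learned parameters, not the paper's design.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def spatial_channel_attention(feat, w_c, w_s):
    """Gate a feature map (C, H, W) along channels, then along space.

    w_c (C, C) and w_s (C,) stand in for learned weights; the paper's block
    is more elaborate -- this only shows the recalibration principle.
    """
    # Channel attention: squeeze spatial dims, excite each channel.
    pooled = feat.mean(axis=(1, 2))                     # (C,)
    ch_gate = sigmoid(w_c @ pooled)                     # (C,) values in (0, 1)
    gated = feat * ch_gate[:, None, None]
    # Spatial attention: 1x1 projection across channels gives a per-pixel gate.
    sp_gate = sigmoid(np.tensordot(w_s, gated, axes=1))  # (H, W)
    return gated * sp_gate[None, :, :]

rng = np.random.default_rng(0)
feat = rng.normal(size=(8, 4, 4))
out = spatial_channel_attention(feat, rng.normal(size=(8, 8)), rng.normal(size=(8,)))
```

Because both gates lie in (0, 1), the block can only attenuate features, which is how it consolidates the dehazing-relevant ones before fusion.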
Image restoration under hazy weather condition, which is called single image dehazing, has been of significant interest for various computer vision applications. In recent years, deep learning-based methods have achieved success. However, existing image dehazing methods typically neglect the hierarchy of features in the neural network and fail to exploit their relationships fully. To this end, we propose an effective image dehazing method named Hierarchical Contrastive Dehazing (HCD), which is based on feature fusion and contrastive learning strategies. HCD consists of a hierarchical dehazing network (HDN) and a novel hierarchical contrastive loss (HCL). Specifically, the core design in the HDN is a Hierarchical Interaction Module, which utilizes multi-scale activation to revise the feature responses hierarchically. To cooperate with the training of HDN, we propose HCL which performs contrastive learning on hierarchically paired exemplars, facilitating haze removal. Extensive experiments on public datasets, RESIDE, HazeRD, and DENSE-HAZE, demonstrate that HCD quantitatively outperforms the state-of-the-art methods in terms of PSNR, SSIM and achieves better visual quality.
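The abstract leaves HCL's exact form open; a generic InfoNCE-style contrastive term, sketched below, conveys the strategy of pulling restored features toward clean positives and away from hazy negatives. This is an illustrative assumption, not the paper's loss.

```python
import numpy as np

def info_nce(anchor, positive, negatives, tau=0.5):
    """Generic contrastive loss: the positive pair should score highest."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    logits = np.array([cos(anchor, positive)] +
                      [cos(anchor, n) for n in negatives]) / tau
    logits -= logits.max()                      # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0])                    # the positive sits at index 0

rng = np.random.default_rng(1)
clean, hazy = rng.normal(size=16), rng.normal(size=16)
# Features of a well-restored image (close to clean) yield a low loss...
loss_good = info_nce(clean, clean, [hazy])
# ...while features still resembling the hazy input are penalized harder.
loss_bad = info_nce(hazy, clean, [hazy])
```

In HCL, such a term would be applied per hierarchy level over the paired exemplars, with the levels combined into one loss.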
With the development of convolutional neural networks, hundreds of deep learning based dehazing methods have been proposed. In this paper, we provide a comprehensive survey on supervised, semi-supervised, and unsupervised single image dehazing. We first discuss the physical model, datasets, network modules, loss functions, and evaluation metrics that are commonly used. Then, the main contributions of various dehazing algorithms are categorized and summarized. Further, quantitative and qualitative experiments of various baseline methods are carried out. Finally, the unsolved issues and challenges that can inspire the future research are pointed out. A collection of useful dehazing materials is available at \url{https://github.com/Xiaofeng-life/AwesomeDehazing}.
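The physical model discussed in most of these works is the atmospheric scattering model; for reference, its standard form is

```latex
I(x) = J(x)\,t(x) + A\,\bigl(1 - t(x)\bigr), \qquad t(x) = e^{-\beta d(x)},
```

where $I$ is the observed hazy image, $J$ the scene radiance, $A$ the global atmospheric light, $t$ the medium transmission, $\beta$ the scattering coefficient, and $d$ the scene depth.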
Images with haze of different varieties often pose a significant challenge to dehazing. Therefore, guidance by estimates of haze parameters related to the variety would be beneficial and their progressive update jointly with haze reduction will allow effective dehazing. To this end, we propose a multi-network dehazing framework containing novel interdependent dehazing and haze parameter updater networks that operate in a progressive manner. The haze parameters, transmission map and atmospheric light, are first estimated using specific convolutional networks allowing color-cast handling. The estimated parameters are then used to guide our dehazing module, where the estimates are progressively updated by novel convolutional networks. The updating takes place jointly with progressive dehazing by a convolutional network that invokes inter-step dependencies. The joint progressive updating and dehazing gradually modify the haze parameter estimates toward achieving effective dehazing. Through different studies, our dehazing framework is shown to be more effective than image-to-image mapping or predefined haze formation model based dehazing. Our dehazing framework is qualitatively and quantitatively found to outperform the state-of-the-art on synthetic and real-world hazy images of several datasets with varied haze conditions.
Deep learning-based methods have achieved significant performance for image defogging. However, existing methods are mainly developed for land scenes and perform poorly when dealing with overwater foggy images, since overwater scenes typically contain large expanses of sky and water. In this work, we propose a Prior map Guided CycleGAN (PG-CycleGAN) for defogging of images with overwater scenes. To promote the recovery of the objects on water in the image, two loss functions are exploited for the network where a prior map is designed to invert the dark channel and the min-max normalization is used to suppress the sky and emphasize objects. However, due to the unpaired training set, the network may learn an under-constrained domain mapping from foggy to fog-free image, leading to artifacts and loss of details. Thus, we propose an intuitive Upscaling Inception Module (UIM) and a Long-range Residual Coarse-to-fine framework (LRC) to mitigate this issue. Extensive experiments on qualitative and quantitative comparisons demonstrate that the proposed method outperforms the state-of-the-art supervised, semi-supervised, and unsupervised defogging approaches.
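The abstract describes the prior map only verbally; below is a minimal NumPy sketch of the stated recipe (invert the dark channel, then min-max normalize). The patch size and exact construction are assumptions, not the paper's specification.

```python
import numpy as np

def prior_map(img, patch=3):
    """Invert the dark channel of img (H, W, 3) and min-max normalize it.

    Bright sky/water regions have a high dark channel, so inversion suppresses
    them while objects on the water stand out. `patch` is an assumed size.
    """
    h, w, _ = img.shape
    dark = img.min(axis=2)                      # per-pixel minimum over channels
    r = patch // 2
    padded = np.pad(dark, r, mode="edge")
    mins = np.empty_like(dark)                  # patch-wise minimum filter
    for i in range(h):
        for j in range(w):
            mins[i, j] = padded[i:i + patch, j:j + patch].min()
    inverted = 1.0 - mins                       # invert the dark channel
    lo, hi = inverted.min(), inverted.max()
    return (inverted - lo) / (hi - lo + 1e-8)   # min-max normalization

img = np.zeros((6, 6, 3))
img[:3] = 0.9                                   # bright "sky" half
img[3:] = 0.1                                   # darker "object" half
p = prior_map(img)
```

Used as a loss weight, such a map suppresses the (bright, low-texture) sky while emphasizing the objects the network should recover.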
Adversarial learning-based image defogging methods have been extensively studied in computer vision due to their excellent performance. However, most existing methods have limited defogging capability for real-world cases, because they are trained on clear and synthesized foggy images of the same scenes. In addition, they have limitations in preserving vivid color and rich textural details. To address these issues, we develop a novel generative adversarial network, called Holistic Attention-fusion Adversarial Network (HAAN), for single image defogging. HAAN consists of a Fog2Fogfree block and a Fogfree2Fog block. In each block, there are three learning-based modules, namely fog removal, color-texture recovery, and fog synthesis, which constrain each other to generate high-quality images. HAAN is designed to exploit the self-similarity of texture and structure information by learning the holistic channel-spatial feature correlations between a fog image and several of its derived images. Moreover, in the fog synthesis module, we leverage the atmospheric scattering model to guide it, focusing on atmospheric light optimization with a novel sky segmentation network so as to improve generation quality. Extensive experiments on both synthetic and real-world datasets show that HAAN outperforms state-of-the-art defogging methods in terms of quantitative accuracy and subjective visual quality.
Deep learning-based image dehazing methods trained on synthetic datasets have achieved remarkable performance, but suffer from dramatic performance degradation on real hazy images due to domain shift. Although certain domain adaptation (DA) dehazing methods have been proposed, they inevitably require access to the source dataset to reduce the gap between the synthetic source and real target domains. To address these issues, we present a novel source-free unsupervised domain adaptation (SFUDA) image dehazing paradigm, in which only a well-trained source model and an unlabeled target real hazy dataset are available. Specifically, we devise a Domain Representation Normalization (DRN) module to make the representation of real hazy-domain features match that of the synthetic domain, thereby bridging the gap. With our plug-and-play DRN module, unlabeled real hazy images can adapt existing well-trained source networks. Besides, unsupervised losses are applied to guide the learning of the DRN module, consisting of a frequency loss and a physical prior loss. The frequency loss provides structure and style constraints, while the prior loss explores the inherent statistical properties of haze-free images. Equipped with our DRN module and the unsupervised losses, existing source dehazing models are able to dehaze unlabeled real hazy images. Extensive experiments on multiple baselines demonstrate the validity and superiority of our method both visually and quantitatively.
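The abstract does not define the frequency loss; one plausible instantiation (an assumption, not necessarily the paper's exact formula) compares Fourier amplitude spectra, which carry style and illumination statistics while the phase carries structure:

```python
import numpy as np

def frequency_loss(pred, ref):
    """L1 distance between the Fourier amplitude spectra of two images."""
    return float(np.mean(np.abs(np.abs(np.fft.fft2(pred)) -
                                np.abs(np.fft.fft2(ref)))))

rng = np.random.default_rng(2)
a = rng.normal(size=(8, 8))
shifted = np.roll(a, 3, axis=0)   # same content, different position
```

Note that amplitude spectra are invariant to circular shifts, so a loss of this form constrains appearance statistics rather than exact pixel placement.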
Light absorption and scattering by underwater impurities lead to poor underwater imaging quality. Existing data-driven underwater image enhancement (UIE) techniques suffer from the lack of a large-scale dataset containing diverse underwater scenes and high-fidelity reference images. Besides, the inconsistent attenuation in different color channels and spatial regions is not fully taken into account for boosted enhancement. In this work, we construct a large-scale underwater image (LSUI) dataset including 5004 image pairs, and report a U-shape Transformer network, in which the Transformer model is introduced to the UIE task for the first time. The U-shape Transformer is integrated with a channel-wise multi-scale feature fusion transformer (CMSFFT) module and a spatial-wise global feature modeling transformer (SGFMT) module, which reinforce the network's attention to the color channels and spatial regions with more severe attenuation. Meanwhile, to further improve contrast and saturation, a novel loss function combining the RGB, LAB, and LCH color spaces is designed following human vision principles. Extensive experiments on available datasets validate the state-of-the-art performance of the reported technique, with an advantage of more than 2 dB.
Conventional CNN-based dehazing models suffer from two essential issues: the dehazing framework (limited in interpretability) and the convolution layers (content-independent and ineffective at learning long-range dependency information). In this paper, we propose a novel complementary feature enhanced framework, in which the complementary features are learned by several complementary subtasks and then employed together to boost the performance of the primary task. One of the prominent advantages of the new framework is that the purposively chosen complementary tasks can focus on learning weakly dependent complementary features, avoiding repetitive and ineffective learning by the networks. We design a new dehazing network based on such a framework. Specifically, we select intrinsic image decomposition as the complementary task, in which the reflectance and shading prediction subtasks are used to extract color-wise and texture-wise complementary features. To effectively aggregate these complementary features, we propose a complementary feature selection module (CFSM) to select the more useful features for image dehazing. Furthermore, we introduce a new version of the vision transformer block, named Hybrid Local-Global Vision Transformer (HyLoG-ViT), and incorporate it into our dehazing network. The HyLoG-ViT block consists of local and global vision transformer paths used to capture local and global dependencies. As a result, HyLoG-ViT introduces locality into the network and captures global and long-range dependencies. Extensive experiments on homogeneous, non-homogeneous, and nighttime dehazing tasks reveal that the proposed dehazing network can achieve comparable or even better performance than CNN-based dehazing models.
Multi-scale architectures and attention modules have shown effectiveness in many deep learning-based image deraining methods. However, manually designing and integrating these two components into a neural network requires a bulk of labor and extensive expertise. In this paper, a high-performance multi-scale attentive neural architecture search (MANAS) framework is technically developed. The proposed method formulates a new multi-scale attention search space with multiple flexible modules that are favorable for the image deraining task. Under the search space, multi-scale attentive cells are built, which are further used to construct a powerful image deraining network. The internal multi-scale attentive architecture of the deraining network is searched automatically through a gradient-based search algorithm, which avoids the daunting procedure of manual design to some extent. Moreover, in order to obtain a robust image deraining model, a practical and effective multi-to-one training strategy is also presented, allowing the deraining network to acquire sufficient background information from multiple rainy images with the same background scene; at the same time, multiple loss functions, including an external loss, an internal loss, an architecture regularization loss, and a model complexity loss, are jointly optimized to achieve robust deraining performance and controllable model complexity. Extensive experimental results on synthetic and realistic rainy images, as well as on downstream vision applications (i.e., object detection and segmentation), consistently demonstrate the superiority of our proposed method.
Under low-light environments, handheld photography suffers from severe camera shake under long exposure settings. Although existing deblurring algorithms have exhibited promising performance on well-exposed blurry images, they still cannot cope with low-light snapshots. Sophisticated noise and saturation regions are two dominating challenges in practical low-light deblurring. In this work, we propose a novel non-blind deblurring method dubbed image and feature space Wiener deconvolution network (INFWIDE) to tackle these problems systematically. In terms of algorithm design, INFWIDE proposes a two-branch architecture, which explicitly removes noise and hallucinates saturated regions in the image space, suppresses ringing artifacts in the feature space, and integrates the two complementary outputs with a subtle multi-scale fusion network for high-quality night photograph deblurring. For effective network training, we design a set of loss functions integrating a forward imaging model and backward reconstruction to form a close-loop regularization that secures good convergence of the deep neural network. Further, to optimize INFWIDE's applicability in real low-light conditions, a physical-process-based low-light noise model is employed to synthesize realistic noisy night photographs for model training. Taking advantage of the physically driven characteristics of the traditional Wiener deconvolution algorithm and the representation ability of deep neural networks, INFWIDE can recover fine details while suppressing unpleasant artifacts during deblurring. Extensive experiments on synthetic and real data demonstrate the superior performance of the proposed approach.
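The Wiener deconvolution at INFWIDE's core is classical; a minimal image-space version is sketched below (the feature-space branch and all learned components are beyond this sketch). It assumes circular boundary conditions so the blur is exactly diagonalized by the FFT.

```python
import numpy as np

def wiener_deconv(blurred, kernel, nsr=1e-6):
    """Frequency-domain Wiener deconvolution: X = conj(H) * Y / (|H|^2 + NSR)."""
    h, w = blurred.shape
    kh, kw = kernel.shape
    k = np.zeros((h, w))                        # pad kernel to image size
    k[:kh, :kw] = kernel
    k = np.roll(k, (-(kh // 2), -(kw // 2)), axis=(0, 1))  # center at origin
    H = np.fft.fft2(k)
    X = np.conj(H) * np.fft.fft2(blurred) / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft2(X))

rng = np.random.default_rng(3)
img = rng.random((16, 16))
kernel = np.ones((3, 3)) / 9.0                  # box blur
k = np.zeros((16, 16)); k[:3, :3] = kernel
k = np.roll(k, (-1, -1), axis=(0, 1))
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(k)))
restored = wiener_deconv(blurred, kernel)
```

With noise present, the noise-to-signal ratio `nsr` trades deblurring strength against noise amplification; INFWIDE instead learns to suppress the resulting ringing in feature space.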
This work studies the joint rain and haze removal problem. In real-life scenarios, rain and haze, two often co-occurring common weather phenomena, can greatly degrade the clarity and quality of scene images, leading to performance degradation of visual applications such as autonomous driving. However, jointly removing rain and haze in scene images is ill-posed and challenging, where the existence of haze and rain and the change of atmospheric light can both degrade the scene information. Current methods focus on the contamination removal part, thus ignoring the restoration of the scene information affected by the change of atmospheric light. We propose a novel deep neural network, named Asymmetric Dual-decoder U-Net (ADU-Net), to address the aforementioned challenge. ADU-Net produces both the contamination residual and the scene residual to efficiently remove rain and haze while preserving the fidelity of the scene information. Extensive experiments show that our work outperforms the existing state-of-the-art methods by a considerable margin on both synthetic and real-world data benchmarks, including RainCityscapes, BID Rain, and SPA-Data. For instance, we improve the state-of-the-art PSNR value by 2.26/4.57 on RainCityscapes/SPA-Data, respectively. The code will be made freely available to the research community.
One of the main challenges in deep learning-based underwater image enhancement is the limited availability of high-quality training data. Underwater images are difficult to capture and are often of poor quality due to the distortion and loss of colour and contrast in water. This makes it difficult to train supervised deep learning models on large and diverse datasets, which can limit the model's performance. In this paper, we explore an alternative approach to supervised underwater image enhancement. Specifically, we propose a novel unsupervised underwater image enhancement framework that employs a conditional variational autoencoder (cVAE) to train a deep learning model with probabilistic adaptive instance normalization (PAdaIN) and statistically guided multi-colour space stretch that produces realistic underwater images. The resulting framework is composed of a U-Net as a feature extractor and a PAdaIN to encode the uncertainty, which we call UDnet. To improve the visual quality of the images generated by UDnet, we use a statistically guided multi-colour space stretch module that ensures visual consistency with the input image and provides an alternative to training using a ground truth image. The proposed model does not need manual human annotation and can learn with a limited amount of data and achieves state-of-the-art results on underwater images. We evaluated our proposed framework on eight publicly-available datasets. The results show that our proposed framework yields competitive performance compared to other state-of-the-art approaches in quantitative as well as qualitative metrics. Code available at https://github.com/alzayats/UDnet .
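PAdaIN builds on adaptive instance normalization; its deterministic core is sketched below. PAdaIN additionally samples the statistics from a learned distribution to encode uncertainty, which is omitted here.

```python
import numpy as np

def adain(content, style_mean, style_std, eps=1e-5):
    """Normalize each channel of content (C, H, W), then apply style stats."""
    mu = content.mean(axis=(1, 2), keepdims=True)
    sigma = content.std(axis=(1, 2), keepdims=True)
    normalized = (content - mu) / (sigma + eps)
    return normalized * style_std[:, None, None] + style_mean[:, None, None]

rng = np.random.default_rng(6)
content = rng.normal(loc=2.0, scale=5.0, size=(3, 8, 8))
style_mean = np.array([0.0, 1.0, -1.0])
style_std = np.array([1.0, 2.0, 0.5])
out = adain(content, style_mean, style_std)
```

The output keeps the content's spatial structure but takes on the injected channel statistics, which is why sampling those statistics yields diverse yet consistent enhancements.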
Reducing the radiation exposure of patients in total-body CT scans has attracted extensive attention in the medical imaging community. However, a low radiation dose may result in increased noise and artifacts, which greatly affect clinical diagnosis. To obtain high-quality total-body low-dose CT (LDCT) images, previous deep learning-based research works have introduced various network architectures. However, most of these methods only adopt normal-dose CT (NDCT) images as ground truths to guide the training of the denoising networks. Such a simple restriction makes the model less effective and causes the reconstructed images to suffer from over-smoothing effects. In this paper, we propose a novel intra-task knowledge transfer method that leverages distilled knowledge from NDCT images to assist the training process on LDCT images. The derived architecture is referred to as the teacher-student consistency network (TSC-Net), which consists of a teacher network and a student network with identical architecture. Through the supervision between intermediate features, the student network is encouraged to imitate the teacher network and gain abundant texture details. Moreover, to further exploit the information contained in CT scans, a contrastive regularization mechanism (CRM) built upon contrastive learning is introduced. The CRM serves to pull the restored CT images toward the NDCT samples and push them far away from the LDCT samples in the latent space. In addition, based on attention and deformable convolution mechanisms, we design a dynamic enhancement module (DEM) to improve the network's transformation capability.
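The abstract gives the CRM only in words; one simple pull/push form consistent with that description is a distance ratio. This is an illustrative assumption: the paper applies the mechanism in a latent feature space, while plain pixel distances are used here.

```python
import numpy as np

def contrastive_regularization(restored, ndct, ldct, eps=1e-8):
    """Ratio loss: distance to the NDCT anchor over distance to the LDCT input.

    Minimizing it pulls `restored` toward the positive (NDCT) and pushes it
    away from the negative (LDCT).
    """
    d_pos = np.mean(np.abs(restored - ndct))
    d_neg = np.mean(np.abs(restored - ldct))
    return float(d_pos / (d_neg + eps))

ndct = np.ones((8, 8))              # stand-in clean target
ldct = np.zeros((8, 8))             # stand-in noisy input
well_restored = 0.9 * np.ones((8, 8))
barely_restored = 0.1 * np.ones((8, 8))
```

Unlike a plain L1 term toward NDCT, the denominator penalizes solutions that remain close to the degraded input, which counteracts the over-smoothing bias mentioned above.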
In the real world, the degradation of images captured under haze can be quite complex, where the spatial distribution of haze varies from image to image. Recent methods adopt deep neural networks to recover clean scenes from hazy images directly. However, due to the paradox caused by the variation of real captured haze and the fixed degradation parameters of current networks, the generalization ability of recent dehazing methods on real-world hazy images is not ideal. To address the problem of modeling real-world haze degradation, we propose to solve this problem by perceiving and modeling density for the uneven haze distribution. We propose a novel Separable Hybrid Attention (SHA) module to encode haze density, achieving this goal by capturing features in orthogonal directions. Moreover, a density map is proposed to model the uneven distribution of haze explicitly. The density map generates positional encoding in a semi-supervised way. Such haze density perceiving and modeling effectively captures the unevenly distributed degeneration at the feature level. Through a suitable combination of the SHA module and the density map, we design a novel dehazing network architecture that achieves a good complexity-performance trade-off. Extensive experiments on two large-scale datasets demonstrate that our method surpasses all state-of-the-art approaches by a large margin both quantitatively and qualitatively, boosting the best published PSNR metric from 28.53 dB on the Haze4K test dataset and from 37.17 dB to 38.41 dB on the SOTS indoor test dataset.
Due to wavelength-dependent light attenuation, refraction, and scattering, underwater images usually suffer from color distortion and blurred details. However, because of the limited number of paired underwater images with undistorted images as reference, training deep enhancement models for diverse degradation types is quite difficult. To boost the performance of data-driven approaches, it is essential to establish more effective learning mechanisms that mine richer supervised information from limited training sample resources. In this paper, we propose a novel underwater image enhancement network, called SGUIE-Net, in which we introduce semantic information as high-level guidance across different images that share common semantic regions. Accordingly, we propose a semantic region-wise enhancement module to perceive the degradation of different semantic regions at multiple scales and feed it back to the global attention features extracted at the original scale. This strategy helps to achieve robust and visually pleasant enhancement of different semantic objects, thanks to the guidance of semantic information for differentiated enhancement. More importantly, for those degradation types that are uncommon in the training sample distribution, the guidance connects them with the already well-learned types according to their semantic relevance. Extensive experiments on publicly available datasets and our proposed dataset demonstrate the impressive performance of SGUIE-Net. The code and proposed dataset are available at: https://trentqq.github.io/sguie-net.html
Face Restoration (FR) aims to restore High-Quality (HQ) faces from Low-Quality (LQ) input images, which is a domain-specific image restoration problem in the low-level computer vision area. The early face restoration methods mainly use statistical priors and degradation models, which are difficult to meet the requirements of real-world applications in practice. In recent years, face restoration has witnessed great progress after stepping into the deep learning era. However, there are few works that study deep learning-based face restoration methods systematically. Thus, this paper comprehensively surveys recent advances in deep learning techniques for face restoration. Specifically, we first summarize different problem formulations and analyze the characteristics of the face image. Second, we discuss the challenges of face restoration. Concerning these challenges, we present a comprehensive review of existing FR methods, including prior-based methods and deep learning-based methods. Then, we explore developed techniques in the task of FR covering network architectures, loss functions, and benchmark datasets. We also conduct a systematic benchmark evaluation on representative methods. Finally, we discuss future directions, including network designs, metrics, benchmark datasets, applications, etc. We also provide an open-source repository for all the discussed methods, which is available at https://github.com/TaoWangzj/Awesome-Face-Restoration.
Recent progress on Transformers and multi-layer perceptron (MLP) models has provided new network architecture designs for computer vision tasks. Although these models have proved effective in many vision tasks such as image recognition, challenges remain in adapting them to low-level vision. The inflexibility in supporting high-resolution images and the limitations of local attention are perhaps the main bottlenecks for using Transformers and MLPs in image restoration. In this work, we present a multi-axis MLP-based architecture called MAXIM, which can serve as an efficient and flexible general-purpose vision backbone for image processing tasks. MAXIM uses a UNet-shaped hierarchical structure and supports long-range interactions enabled by spatially-gated MLPs. Specifically, MAXIM contains two MLP-based building blocks: a multi-axis gated MLP that allows for efficient and scalable spatial mixing of local and global visual cues, and a cross-gating block, an alternative to cross-attention, which accounts for cross-feature conditioning. Both of these modules are exclusively based on MLPs, yet also benefit from being both global and "fully-convolutional", two properties that are desirable for image processing. Our extensive experimental results show that the proposed MAXIM model achieves state-of-the-art performance on more than ten benchmarks across a range of image processing tasks, including denoising, deblurring, deraining, dehazing, and enhancement, while requiring fewer or comparable numbers of parameters and FLOPs than competitive models.
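The gated-MLP idea underlying MAXIM's blocks can be illustrated with the basic spatial gating unit from gMLP: split the channels, project one half across the token (spatial) dimension, and use it to gate the other half. This is a simplified sketch; MAXIM's multi-axis variant applies such gating over blocked and dilated axes, which is not reproduced here.

```python
import numpy as np

def spatial_gating_unit(x, w_spatial, b_spatial):
    """Gate half the channels of x (N tokens, C channels) with a spatial
    projection of the other half: out = u * (W_s v + b)."""
    n, c = x.shape
    u, v = x[:, : c // 2], x[:, c // 2:]
    v = w_spatial @ v + b_spatial      # mix information across token positions
    return u * v                       # element-wise gate, shape (N, C//2)

x = np.random.default_rng(4).normal(size=(16, 8))
# Near-zero spatial weights and a unit bias make the unit start as the
# identity on u -- the initialization trick used for gMLP-style blocks.
out = spatial_gating_unit(x, np.zeros((16, 16)), np.ones((16, 1)))
```

Because the only cross-token operation is the linear projection `w_spatial @ v`, the cost of long-range interaction is a matrix product rather than quadratic attention.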
Enhancing the quality of low-light images plays a very important role in many image processing and multimedia applications. In recent years, a variety of deep learning techniques have been developed to address this challenging task. A typical framework is to simultaneously estimate the illumination and reflectance, but such methods disregard the scene-level contextual information encapsulated in feature spaces, causing many unfavorable outcomes, e.g., loss of detail, color unsaturation, artifacts, and so on. To address these issues, we develop a new context-sensitive decomposition network architecture to exploit scene-level contextual dependencies across spatial scales. More concretely, we build a two-stream estimation mechanism including reflectance and illumination estimation networks. We design a novel context-sensitive decomposition connection to bridge the two-stream mechanism by incorporating the physical principle. A spatially-varying illumination guidance is further constructed for achieving the edge-aware smoothness property of the illumination component. According to different training patterns, we construct CSDNet (paired supervision) and CSDGAN (unpaired supervision) to fully evaluate our designed architecture. We test our method on seven testing benchmarks and conduct plenty of analytical and evaluation experiments. Thanks to our designed context-sensitive decomposition connection, we successfully realize excellent enhanced results, which fully indicates our superiority over existing state-of-the-art approaches. Finally, considering the practical needs for high efficiency, we develop a lightweight CSDNet (named LiteCSDNet) by reducing the number of channels. Furthermore, by sharing an encoder for these two components, we obtain a more lightweight version (SLiteCSDNet for short). SLiteCSDNet contains only 0.0301M parameters but achieves almost the same performance as CSDNet.
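The two-stream reflectance/illumination estimation follows the Retinex decomposition, whose basic form is

```latex
S(x) = R(x) \circ L(x),
```

where $S$ is the observed low-light image, $R$ the reflectance, $L$ the illumination, and $\circ$ the element-wise product; the context-sensitive decomposition connection and the edge-aware smoothness on $L$ govern how the two streams realize this factorization.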
As a common weather phenomenon, rain streaks adversely degrade image quality. Hence, removing rain from an image has become an important issue in the field. To handle such an ill-posed single image deraining task, in this paper, we specifically build a novel deep architecture, called rain convolutional dictionary network (RCDNet), which embeds the intrinsic priors of rain streaks and has clear interpretability. Specifically, we first establish a RCD model for representing rain streaks and utilize the proximal gradient descent technique to design an iterative algorithm containing only simple operators for solving the model. By unfolding it, we then build the RCDNet, in which every network module has clear physical meaning and corresponds to an operation involved in the algorithm. This good interpretability greatly facilitates visualization and analysis of what happens inside the network and why it works well in the inference process. Moreover, taking into account the domain gap issue in real scenarios, we further design a novel dynamic RCDNet, where the rain kernels can be dynamically inferred corresponding to input rainy images and then help shrink the space for rain layer estimation with few rain maps, so as to ensure fine generalization performance in scenarios where rain types are inconsistent between training and testing data. By end-to-end training of such an interpretable network, all involved rain kernels and proximal operators can be automatically extracted, faithfully characterizing the features of both the rain and clean background layers, and thus naturally leading to better deraining performance. Comprehensive experiments substantiate the superiority of our method, especially its good generality to diverse testing scenarios and the interpretability of all its modules. Code is available at \emph{\url{https://github.com/hongwang01/DRCDNet}}.
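For intuition, the iterative algorithm that RCDNet unfolds is a proximal gradient (ISTA-style) scheme. Below is a 1-D toy version with a fixed random dictionary, fitting a sparse coefficient vector to a synthetic "rain layer"; the actual paper learns the rain kernels, works on 2-D convolutional dictionaries, and replaces the proximal operator with network modules.

```python
import numpy as np

def soft_threshold(x, tau):
    """Proximal operator of tau * ||.||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def ista_step(m, d, y, step, lam):
    """One proximal gradient step on 0.5*||d @ m - y||^2 + lam*||m||_1."""
    grad = d.T @ (d @ m - y)            # gradient of the data-fit term
    return soft_threshold(m - step * grad, step * lam)

rng = np.random.default_rng(5)
d = rng.normal(size=(12, 6))
d /= np.linalg.norm(d, axis=0)          # unit-norm dictionary atoms
m_true = np.zeros(6); m_true[[1, 4]] = [1.5, -2.0]   # sparse "rain map"
y = d @ m_true                          # synthetic observation
m = np.zeros(6)
for _ in range(500):
    m = ista_step(m, d, y, step=0.1, lam=0.001)
```

Each unfolded RCDNet stage corresponds to one such step, with the hand-set `step`, `lam`, and soft-thresholding replaced by learned counterparts.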