Subjective image-quality measurement plays a critical role in the development of image-processing applications. The purpose of a visual-quality metric is to approximate the results of subjective assessment. In this regard, more and more metrics are under development, but little research has considered their limitations. This paper addresses that deficiency: we show how image preprocessing before compression can artificially increase the quality scores provided by the popular metrics DISTS, LPIPS, HaarPSI, and VIF as well as how these scores are inconsistent with subjective-quality scores. We propose a series of neural-network preprocessing models that increase DISTS by up to 34.5%, LPIPS by up to 36.8%, VIF by up to 98.0%, and HaarPSI by up to 22.6% in the case of JPEG-compressed images. A subjective comparison of preprocessed images showed that for most of the metrics we examined, visual quality drops or stays unchanged, limiting the applicability of these metrics.
Image quality assessment (IQA) metrics are widely used to quantitatively estimate the extent of image degradation following some forming, restoration, transformation, or enhancement algorithm. We present PyTorch Image Quality (PIQ), a usability-centric library that contains the most popular modern IQA algorithms, guaranteed to be correctly implemented according to their original propositions and thoroughly verified. In this paper, we detail the principles behind the library's foundation, describe the evaluation strategy that makes it reliable, provide benchmarks that demonstrate performance-time trade-offs, and underline the benefits of GPU acceleration given the PyTorch backend. PyTorch Image Quality is open-source software: https://github.com/photosynthesis-team/piq/.
Objective methods for assessing perceptual image quality have traditionally attempted to quantify the visibility of errors between a distorted image and a reference image using a variety of known properties of the human visual system. Under the assumption that human visual perception is highly adapted for extracting structural information from a scene, we introduce an alternative framework for quality assessment based on the degradation of structural information. As a specific example of this concept, we develop a Structural Similarity Index and demonstrate its promise through a set of intuitive examples, as well as comparison to both subjective ratings and state-of-the-art objective methods on a database of images compressed with JPEG and JPEG2000.
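The luminance, contrast, and structure comparison at the core of the Structural Similarity Index can be sketched in a few lines. This is a deliberately simplified single-window variant (the published index uses an 11x11 Gaussian sliding window and averages local scores), so `ssim_global` is an illustrative name rather than the reference implementation:

```python
import numpy as np

def ssim_global(x, y, data_range=255.0):
    """Simplified whole-image SSIM: one global window instead of the
    paper's 11x11 Gaussian sliding window, kept only to show the
    luminance/contrast/structure comparison that defines the index."""
    c1 = (0.01 * data_range) ** 2  # stabilizing constants from the paper
    c2 = (0.03 * data_range) ** 2
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

For identical images the score is exactly 1; any distortion that decorrelates the pair pulls it toward 0.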
Existing deep learning-based full-reference IQA (FR-IQA) models usually predict image quality in a deterministic way by explicitly comparing features, measuring how far the features of a distorted image are from those of the reference. In this paper, we look at this problem from a different viewpoint and propose to model quality degradation in the perceptual space from the perspective of statistical distributions. Quality is thus measured by the Wasserstein distance in the deep feature domain. More specifically, the 1D Wasserstein distance is measured at each stage of a pretrained VGG network, based on which the final quality score is computed. The Deep Wasserstein Distance (DeepWSD), performed on the features of neural networks, better interprets the quality contamination caused by various types of distortion and delivers advanced quality prediction capability. Extensive experiments and theoretical analyses show the superiority of the proposed DeepWSD in terms of both quality prediction and optimization.
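The building block DeepWSD applies at each VGG stage, the empirical 1D Wasserstein distance, has a closed form for equal-size samples: the optimal transport plan pairs sorted values, so W1 reduces to the mean absolute difference of order statistics. A minimal sketch on raw arrays (the paper applies this to channel-wise feature activations, which is omitted here):

```python
import numpy as np

def wasserstein_1d(u, v):
    """Empirical Wasserstein-1 distance between two equal-size 1D samples.
    Sorting realizes the optimal monotone transport plan, so W1 is the
    mean absolute difference of the sorted values."""
    u = np.sort(np.asarray(u, dtype=np.float64).ravel())
    v = np.sort(np.asarray(v, dtype=np.float64).ravel())
    return np.abs(u - v).mean()
```

The distance is zero exactly when the two empirical distributions coincide, regardless of element order, which is what makes it a distributional rather than pointwise comparison.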
Image quality assessment (IQA) is a fundamental metric for image processing tasks such as compression. For full-reference IQA, traditional metrics such as PSNR and SSIM have been used. Recently, IQA based on deep neural networks (deep IQA), such as LPIPS and DISTS, has also been adopted. Image scaling is known to be inconsistent among deep IQA methods, since some perform downscaling as preprocessing while others use the original image size. In this paper, we show that image scale is an influential factor affecting deep IQA performance. We comprehensively evaluate four deep IQA methods on the same five datasets, and the experimental results show that image scale significantly influences IQA performance. We find that the most suitable image scale is often neither the default size nor the original size, and that the choice depends on the method and dataset used. We also examine stability and find PieAPP to be the most stable of the four deep IQA methods.
In this work, we introduce the Gradient Siamese Network (GSN) for image quality assessment. The proposed method skillfully captures the gradient features between distorted and reference images in the full-reference image quality assessment (IQA) task. We utilize central difference convolution to obtain both the semantic features and the detail differences hidden in the image pairs. Furthermore, spatial attention guides the network to focus on regions related to image detail. For the low-level, mid-level, and high-level features extracted by the network, we design a novel multi-level fusion method to improve the efficiency of feature utilization. In addition to the common mean squared error supervision, we further consider the relative distance among batch samples and successfully apply a KL divergence loss to the image quality assessment task. We evaluate the proposed GSN algorithm on several publicly available datasets and demonstrate its superior performance. Our network won second place in track 1 of the NTIRE 2022 Perceptual Image Quality Assessment Challenge.
Recent years have witnessed the rapid development of image storage and transmission systems, in which image compression plays an important role. Generally speaking, image compression algorithms are developed to ensure good visual quality at limited bit rates. However, due to the different optimization methods they adopt, compressed images may have different quality levels, which need to be evaluated quantitatively. Mainstream full-reference (FR) metrics are effective in predicting the quality of compressed images at a coarse-grained level, where the bit rates of the compressed images differ significantly; however, they may perform poorly on fine-grained compressed images whose bit-rate differences are very subtle. Therefore, to better improve the quality of experience (QoE) and provide useful guidance for compression algorithms, we propose a full-reference image quality assessment (FR-IQA) method for compressed images at a fine-grained level. Specifically, the reference and compressed images are first converted to the YCbCr color space. Gradient features are extracted from regions that are sensitive to compression artifacts. Then, we employ the log-Gabor transform to further analyze texture differences. Finally, the obtained features are fused into a quality score. The proposed method is validated on the fine-grained compressed image quality assessment (FGIQA) database, which is constructed specifically for assessing the quality of compressed images with close bit rates. Experimental results show that our metric outperforms mainstream FR-IQA metrics on the FGIQA database. We also test our method on other commonly used compression IQA databases, and the results show that it achieves competitive performance on coarse-grained compression IQA databases as well.
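The first step of the pipeline above, converting both images to YCbCr, can be sketched as follows. The abstract does not state which conversion constants are used, so the standard full-range BT.601 matrix is assumed here:

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """Convert an RGB image with float values in [0, 1] to YCbCr.
    Uses the full-range BT.601 weights; the paper's exact constants
    are not specified, so this matrix is an assumption."""
    m = np.array([[ 0.299,     0.587,     0.114   ],   # Y
                  [-0.168736, -0.331264,  0.5     ],   # Cb
                  [ 0.5,      -0.418688, -0.081312]])  # Cr
    ycc = rgb @ m.T
    ycc[..., 1:] += 0.5  # center the chroma channels on 0.5
    return ycc
```

Gradient and log-Gabor features would then be computed on the resulting channels, typically with the luma channel Y carrying most of the structural information.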
Deep learning-based full-reference image quality assessment (FR-IQA) models typically rely on the feature distance between the reference and distorted images. However, the underlying assumption of these models that the distance in the deep feature domain could quantify the quality degradation does not scientifically align with the invariant texture perception, especially when the images are generated artificially by neural networks. In this paper, we bring a radical shift in inferring the quality with learned features and propose the Deep Image Dependency (DID) based FR-IQA model. The feature dependency facilitates the comparisons of deep learning features in a high-order manner with Brownian distance covariance, which is characterized by the joint distribution of the features from reference and test images, as well as their marginal distributions. This enables the quantification of the feature dependency against nonlinear transformation, which is far beyond the computation of the numerical errors in the feature space. Experiments on image quality prediction, texture image similarity, and geometric invariance validate the superior performance of our proposed measure.
Video frame interpolation (VFI) is a useful tool for many video processing applications. Recently, it has also been applied in the video compression domain to enhance both conventional video codecs and learning-based compression architectures. While there has been an increasing focus on developing enhanced frame interpolation algorithms in recent years, the perceptual quality assessment of interpolated content remains an open research area. In this paper, we present FloLPIPS, a customized full-reference video quality metric for VFI based on the popular perceptual image quality metric LPIPS, which captures perceptual degradation in an extracted image feature space. To improve the performance of LPIPS for evaluating interpolated content, we redesign its spatial feature aggregation step by weighting the feature difference maps with temporal distortion, measured by comparing optical flows. Evaluated on the BVI-VFI database, which contains 180 test sequences with various frame interpolation artifacts, FloLPIPS shows superior correlation with subjective ground truth (with statistical significance) over 12 popular quality assessors. To facilitate further research in VFI quality assessment, our code is publicly available at https://danielism97.github.io/flolpips.
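The redesigned aggregation step, pooling a per-pixel feature difference map with weights derived from a temporal distortion map, can be sketched as below. The sum-to-one normalization and the name `flow_weighted_pool` are assumptions for illustration, not the paper's exact formulation:

```python
import numpy as np

def flow_weighted_pool(diff_map, flow_error, eps=1e-8):
    """Pool a spatial feature-difference map with weights proportional
    to a temporal distortion map (e.g., the magnitude of the optical-flow
    difference between distorted and reference sequences). Regions with
    larger temporal distortion contribute more to the final score."""
    w = flow_error / (flow_error.sum() + eps)  # normalize weights to sum to 1
    return float((diff_map * w).sum())
```

With a uniform temporal distortion map this reduces to plain spatial averaging, which is the aggregation LPIPS itself uses; non-uniform flow error shifts the score toward temporally unstable regions.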
The structural similarity image quality paradigm is based on the assumption that the human visual system is highly adapted for extracting structural information from the scene, and therefore a measure of structural similarity can provide a good approximation to perceived image quality. This paper proposes a multi-scale structural similarity method, which supplies more flexibility than previous single-scale methods in incorporating the variations of viewing conditions. We develop an image synthesis method to calibrate the parameters that define the relative importance of different scales. Experimental comparisons demonstrate the effectiveness of the proposed method.
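The multi-scale structure can be sketched as a loop that scores the pair at each scale, downsamples by two, and combines the per-scale scores with the exponents calibrated in the paper. A simplified global SSIM stands in for the windowed index, and the paper's split between contrast/structure terms at fine scales and the luminance term at the coarsest scale is collapsed into one score per scale, so the numbers are only illustrative:

```python
import numpy as np

def ms_ssim_sketch(x, y, weights=(0.0448, 0.2856, 0.3001, 0.2363, 0.1333)):
    """Structural sketch of MS-SSIM with the paper's five scale weights.
    Expects float images whose sides are divisible by 2**len(weights)."""
    def ssim(a, b, c1=(0.01 * 255) ** 2, c2=(0.03 * 255) ** 2):
        ma, mb = a.mean(), b.mean()
        cov = ((a - ma) * (b - mb)).mean()
        return ((2 * ma * mb + c1) * (2 * cov + c2)) / (
            (ma ** 2 + mb ** 2 + c1) * (a.var() + b.var() + c2))

    score = 1.0
    for w in weights:
        score *= ssim(x, y) ** w
        # 2x2 mean-pool downsampling before the next (coarser) scale
        x = (x[0::2, 0::2] + x[1::2, 0::2] + x[0::2, 1::2] + x[1::2, 1::2]) / 4
        y = (y[0::2, 0::2] + y[1::2, 0::2] + y[0::2, 1::2] + y[1::2, 1::2]) / 4
    return score
```

The weights sum to one, so a pair that scores 1 at every scale still scores 1 overall; the middle scales dominate, reflecting the viewing-condition calibration described in the abstract.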
We propose a natural scene statistic-based distortion-generic blind/no-reference (NR) image quality assessment (IQA) model that operates in the spatial domain. The new model, dubbed blind/referenceless image spatial quality evaluator (BRISQUE) does not compute distortion-specific features, such as ringing, blur, or blocking, but instead uses scene statistics of locally normalized luminance coefficients to quantify possible losses of "naturalness" in the image due to the presence of distortions, thereby leading to a holistic measure of quality. The underlying features used derive from the empirical distribution of locally normalized luminances and products of locally normalized luminances under a spatial natural scene statistic model. No transformation to another coordinate frame (DCT, wavelet, etc.) is required, distinguishing it from prior NR IQA approaches. Despite its simplicity, we are able to show that BRISQUE is statistically better than the full-reference peak signal-to-noise ratio and the structural similarity index, and is highly competitive with respect to all present-day distortion-generic NR IQA algorithms. BRISQUE has very low computational complexity, making it well suited for real time applications. BRISQUE features may be used for distortion-identification as well. To illustrate a new practical application of BRISQUE, we describe how a nonblind image denoising algorithm can be augmented with BRISQUE in order to perform blind image denoising. Results show that BRISQUE augmentation leads to performance improvements over state-of-the-art methods. A software release of BRISQUE is available online: http://live.ece.utexas.edu/research/quality/BRISQUE_release.zip for public use and evaluation.
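The locally normalized luminance coefficients BRISQUE builds on, often called mean-subtracted contrast-normalized (MSCN) coefficients, can be sketched as follows. The original uses a Gaussian weighting window; a uniform box window is substituted here for brevity:

```python
import numpy as np

def mscn(img, window=7, c=1.0):
    """Mean-subtracted contrast-normalized (MSCN) coefficients:
    each pixel minus its local mean, divided by its local standard
    deviation plus a stabilizer c. A box window replaces the Gaussian
    window of the original formulation."""
    img = img.astype(np.float64)
    pad = window // 2
    padded = np.pad(img, pad, mode='reflect')
    # all window x window neighborhoods, shape (H, W, window, window)
    win = np.lib.stride_tricks.sliding_window_view(padded, (window, window))
    mu = win.mean(axis=(-2, -1))
    sigma = win.std(axis=(-2, -1))
    return (img - mu) / (sigma + c)
```

For natural, undistorted images the empirical distribution of these coefficients is close to a zero-mean Gaussian; distortions change its shape, which is the "loss of naturalness" the model quantifies.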
With the development of streaming media technology, communication increasingly depends on audio and visual information, which places a huge burden on online media. Data compression is becoming ever more important in reducing the volume of data transmitted and stored. To further improve the efficiency of image compression, researchers have utilized various image processing methods to compensate for the limitations of conventional codecs and advanced learning-based compression methods. Instead of modifying the compression-oriented methods themselves, we propose a unified image compression preprocessing framework called Kuchen, which aims to further improve the performance of existing codecs. The framework consists of a hybrid data labeling system along with a learning-based backbone to simulate personalized preprocessing. To the best of our knowledge, this is the first exploration of establishing a unified preprocessing benchmark for image compression tasks. Results demonstrate that modern codecs optimized by our unified preprocessing framework consistently improve the efficiency of state-of-the-art compression.
Recent studies of deep learning-based stereo image super-resolution (StereoSR) have advanced the development of StereoSR. However, existing StereoSR models mainly concentrate on improving quantitative evaluation metrics while neglecting the visual quality of super-resolved stereo images. To improve perceptual performance, this paper proposes the first perception-oriented stereo image super-resolution approach, which exploits feedback provided by the evaluation of the perceptual quality of StereoSR results. To provide accurate guidance for the StereoSR model, we develop the first dedicated stereo image super-resolution quality assessment (StereoSRQA) model and further construct a StereoSRQA database. Extensive experiments demonstrate that our StereoSR approach significantly improves perceptual quality and enhances the reliability of stereo images for disparity estimation.
Face super-resolution (FSR), also known as face hallucination, which aims at enhancing the resolution of low-resolution (LR) face images to generate high-resolution (HR) face images, is a domain-specific image super-resolution problem. Recently, FSR has received considerable attention and witnessed dazzling advances with the development of deep learning techniques. To date, few summaries of the studies on deep learning-based FSR are available. In this survey, we present a comprehensive review of deep learning-based FSR methods in a systematic manner. First, we summarize the problem formulation of FSR and introduce popular assessment metrics and loss functions. Second, we elaborate on the facial characteristics and popular datasets used in FSR. Third, we roughly categorize existing methods according to their utilization of facial characteristics. In each category, we start with a general description of the design principles, present an overview of representative approaches, and then discuss their pros and cons. Fourth, we evaluate the performance of some state-of-the-art methods. Fifth, joint FSR and other tasks, as well as FSR-related applications, are briefly introduced. Finally, we envision the prospects of further technological advancement in this field. A curated list of papers and resources on face super-resolution is available at https://github.com/junjun-jiang/face-hallucination-benchmark
This paper reports on the NTIRE 2022 challenge on perceptual image quality assessment (IQA), held in conjunction with the New Trends in Image Restoration and Enhancement (NTIRE) workshop at CVPR 2022. This challenge was held to address the emerging challenge of assessing the outputs of perceptual image processing algorithms. The output images of these algorithms have completely different characteristics from traditional distortions and are included in the PIPAL dataset used in this challenge. The challenge is divided into two tracks: a full-reference IQA track similar to the previous NTIRE IQA challenge, and a new track focusing on no-reference IQA methods. The two tracks had 192 and 179 registered participants, respectively. In the final testing stage, 7 and 8 participating teams submitted their models and fact sheets. Almost all of them achieved better results than existing IQA methods, and the winning methods demonstrate state-of-the-art performance.
Image quality assessment (IQA) forms a natural and often straightforward undertaking for humans, yet effective automation of the task remains highly challenging. Recent metrics from the deep learning community commonly compare image pairs during training to improve upon traditional metrics such as PSNR or SSIM. However, current comparisons ignore the fact that image content affects quality assessment as comparisons only occur between images of similar content. This restricts the diversity and number of image pairs that the model is exposed to during training. In this paper, we strive to enrich these comparisons with content diversity. Firstly, we relax comparison constraints, and compare pairs of images with differing content. This increases the variety of available comparisons. Secondly, we introduce listwise comparisons to provide a holistic view to the model. By including differentiable regularizers, derived from correlation coefficients, models can better adjust predicted scores relative to one another. Evaluation on multiple benchmarks, covering a wide range of distortions and image content, shows the effectiveness of our learning scheme for training image quality assessment models.
In recent years, with the development of deep neural networks, end-to-end optimized image compression has made significant progress and exceeded classic methods in rate-distortion performance. However, most learning-based image compression methods are unlabeled and do not consider image semantics or content when optimizing the model. In fact, the human eye has different sensitivities to different kinds of content, so image content also needs to be taken into account. In this paper, we propose a content-oriented image compression method that handles different types of image content with different strategies. Extensive experiments show that, compared with state-of-the-art end-to-end learned image compression methods and classic methods, the proposed method achieves competitive subjective results.
In this paper, we propose a generative adversarial network (GAN) framework to enhance the perceptual quality of compressed videos. Our framework includes attention and adaptation to different quantization parameters (QPs) in a single model. The attention module exploits a global receptive field that can capture and align long-range correlations between consecutive frames, which can be beneficial for enhancing perceptual video quality. The frame to be enhanced is fed into the deep network together with its neighboring frames, and features at different depths are extracted in the first stage. The extracted features are then fed into attention blocks to explore global temporal correlations, followed by a series of upsampling and convolution layers. Finally, the resulting features are processed by a QP-conditional adaptation module that leverages the corresponding QP information. In this way, a single model can adaptively enhance videos across various QPs without requiring a separate model for every QP value, while achieving similar performance. Experimental results demonstrate the superior performance of the proposed PeQuENet compared with state-of-the-art compressed video quality enhancement algorithms.
Deep learning models have been found vulnerable to adversarial examples, as small perturbations of a model's input can cause wrong predictions. Most existing work on adversarial image generation tries to achieve attacks against most models, while few efforts ensure the perceptual quality of the adversarial examples. High-quality adversarial examples matter for many applications, especially privacy preservation. In this work, we develop a framework based on the Minimum Noticeable Difference (MND) concept to generate adversarial privacy-preserving images that have a minimum perceptual difference from the clean ones but are still able to attack deep learning models. To achieve this, an adversarial loss is first proposed so that the adversarial images successfully attack the deep learning models. Then, a perceptual quality-preserving loss is developed by taking into account the magnitude of the perturbation and the structural and gradient changes it causes, aiming to preserve high perceptual quality during adversarial image generation. To the best of our knowledge, this is the first work exploring quality-preserving adversarial image generation based on the MND concept for privacy preservation. To evaluate its performance in terms of perceptual quality, deep models for image classification and face recognition are tested with adversarial images generated by the proposed method and several anchor methods. Extensive experimental results show that the proposed MND framework is capable of generating adversarial images with remarkably better performance metrics (e.g., PSNR, SSIM, and MOS) than those generated with the anchor methods.