Sparse representation of real-life images is a very effective approach in imaging applications such as denoising. In recent years, with the growth of computing power, data-driven strategies that exploit the redundancy within patches extracted from one or several images to increase sparsity have become more prominent. This paper presents a novel image denoising algorithm that exploits an image-dependent basis inspired by quantum many-body theory. Based on patch analysis, similarity measures in a local image neighborhood are formalized through a term akin to interaction in quantum mechanics that can efficiently preserve the local structures of real images. The versatile nature of this adaptive basis extends the scope of its application to image-independent or image-dependent noise scenarios without any adjustment. We carry out a rigorous comparison with contemporary methods to demonstrate the denoising capability of the proposed algorithm regardless of the image characteristics, noise statistics, and intensity. We illustrate the properties of the hyperparameters and their respective effects on the denoising performance, together with automated rules for selecting their values in experimental setups where ground truth is not available. Finally, we show the ability of our approach to deal with practical image denoising problems such as medical ultrasound image despeckling applications.
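A minimal sketch of the general idea, assuming a Gaussian affinity between patches and a graph-Laplacian-like operator standing in for the paper's quantum many-body construction; function names and parameters here are illustrative, not the authors' method:

```python
import numpy as np

def adaptive_basis_denoise(patches, sigma=0.1, keep=16):
    """Illustrative sketch: build an image-adaptive basis from pairwise
    patch similarities (an 'interaction' matrix) and shrink in that basis.
    `patches` is an (n_patches, patch_dim) array; `sigma` and `keep` are
    hypothetical tuning knobs, not the paper's hyperparameters."""
    # Interaction matrix: Gaussian affinity between patches.
    d2 = np.sum((patches[:, None, :] - patches[None, :, :]) ** 2, axis=-1)
    W = np.exp(-d2 / (2.0 * sigma ** 2))
    # Symmetric operator whose eigenvectors give an adaptive basis.
    D = np.diag(W.sum(axis=1))
    H = D - W                      # graph-Laplacian-like "Hamiltonian"
    _, U = np.linalg.eigh(H)       # eigenvectors sorted by eigenvalue
    # Project on the low-energy eigenvectors and reconstruct.
    coeffs = U.T @ patches
    coeffs[keep:] = 0.0            # discard high-energy components
    return U @ coeffs
```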
We propose a novel image denoising strategy based on an enhanced sparse representation in transform domain. The enhancement of the sparsity is achieved by grouping similar 2-D image fragments (e.g., blocks) into 3-D data arrays which we call "groups." Collaborative filtering is a special procedure developed to deal with these 3-D groups. We realize it using the three successive steps: 3-D transformation of a group, shrinkage of the transform spectrum, and inverse 3-D transformation. The result is a 3-D estimate that consists of the jointly filtered grouped image blocks. By attenuating the noise, the collaborative filtering reveals even the finest details shared by grouped blocks and, at the same time, it preserves the essential unique features of each individual block. The filtered blocks are then returned to their original positions. Because these blocks are overlapping, for each pixel, we obtain many different estimates which need to be combined. Aggregation is a particular averaging procedure which is exploited to take advantage of this redundancy. A significant improvement is obtained by a specially developed collaborative Wiener filtering. An algorithm based on this novel denoising strategy and its efficient implementation are presented in full detail; an extension to color-image denoising is also developed. The experimental results demonstrate that this computationally scalable algorithm achieves state-of-the-art denoising performance in terms of both peak signal-to-noise ratio and subjective visual quality.
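The grouping and collaborative-filtering steps can be sketched compactly. The following toy version (exhaustive block search, hard thresholding in a 3-D DCT domain) illustrates the strategy but omits the search windowing, the collaborative Wiener stage, and the aggregation weighting of the full algorithm:

```python
import numpy as np
from scipy.fft import dctn, idctn

def collaborative_filter(group, threshold):
    """Sketch of collaborative filtering on one 3-D group of similar
    blocks: 3-D transform, hard-threshold shrinkage of the spectrum,
    inverse 3-D transform. `group` has shape (n_blocks, B, B)."""
    spec = dctn(group, norm='ortho')       # 3-D transform of the group
    spec[np.abs(spec) < threshold] = 0.0   # shrink the transform spectrum
    return idctn(spec, norm='ortho')       # jointly filtered blocks

def group_similar_blocks(image, ref, B=8, n=16):
    """Collect the n blocks most similar to the reference block at `ref`
    (exhaustive search; real implementations restrict the search window)."""
    y, x = ref
    ref_blk = image[y:y+B, x:x+B]
    coords, dists = [], []
    for i in range(image.shape[0] - B + 1):
        for j in range(image.shape[1] - B + 1):
            coords.append((i, j))
            dists.append(np.sum((image[i:i+B, j:j+B] - ref_blk) ** 2))
    order = np.argsort(dists)[:n]
    return np.stack([image[i:i+B, j:j+B]
                     for (i, j) in (coords[k] for k in order)])
```

Aggregation then returns each filtered block to its original position and averages the many overlapping per-pixel estimates.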
The aim of this paper is to describe a novel non-parametric noise-reduction technique, from the point of view of Bayesian inference, that can automatically improve the signal-to-noise ratio of one- and two-dimensional data, such as astronomical images and spectra. The algorithm iteratively evaluates possible smoothed versions of the data, the smooth models, obtaining an estimate of the underlying signal that is statistically compatible with the noisy measurements. Iterations are based on the evidence and the $\chi^2$ statistic of the last smooth model, and we compute the expected value of the signal as a weighted average over the whole set of smooth models. In this paper, we explain the mathematical formalism and the numerical implementation of the algorithm, and we evaluate its performance in terms of peak signal-to-noise ratio, structural similarity index, and time payload on a battery of real astronomical observations. Our Fully Adaptive Bayesian Algorithm for Data Analysis (FABADA) yields results that, without any parameter tuning, are comparable to standard image-processing algorithms whose parameters have been optimized based on the true signal to be recovered, something that is impossible in real applications. State-of-the-art non-parametric methods such as BM3D offer slightly better performance at high signal-to-noise ratios, while our algorithm is significantly more accurate for extremely noisy data (relative errors above $20$-$40\%$, a regime of particular interest in the field of astronomy). In this range, the standard deviation of the residuals obtained with our reconstruction can become more than an order of magnitude lower than that of the original measurements. The source code needed to reproduce all the results presented in this report, including the implementation of the method, is publicly available at https://github.com/pablolyanala/fabada
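The iterative idea can be sketched for a 1-D signal as follows; the smoothing kernel, the evidence proxy, and the stopping rule here are simplifications, not FABADA's exact choices:

```python
import numpy as np

def bayesian_smooth(data, noise_var, max_iter=100):
    """Hedged sketch: generate progressively smoother models of the data,
    weight them by a crude Gaussian evidence proxy, and stop once the
    chi^2 of the current model is compatible with the noise level.
    Simplified; see the FABADA repository for the actual algorithm."""
    model = data.astype(float)
    models, weights = [], []
    for _ in range(max_iter):
        # One smoothing step: 3-point running mean with edge padding.
        padded = np.pad(model, 1, mode='edge')
        model = (padded[:-2] + padded[1:-1] + padded[2:]) / 3.0
        chi2 = np.sum((data - model) ** 2) / noise_var
        # Crude per-sample evidence proxy (avoids numerical underflow).
        weights.append(np.exp(-0.5 * chi2 / data.size))
        models.append(model.copy())
        if chi2 <= data.size:      # statistically compatible with the noise
            break
    weights = np.array(weights) / np.sum(weights)
    # Expected signal: weighted average over the whole set of smooth models.
    return np.tensordot(weights, np.array(models), axes=1)
```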
We present data structures and algorithms for native implementations of discrete convolution operators over Adaptive Particle Representations (APR) of images on parallel computer architectures. The APR is a content-adaptive image representation that locally adapts the sampling resolution to the image signal. It has been developed as an alternative to pixel representations for large, sparse images as they typically occur in fluorescence microscopy, and has been shown to reduce the memory and runtime costs of storing, visualizing, and processing such images. This, however, requires that image processing natively operates on APRs, without intermediately reverting to pixels. Designing efficient and scalable APR-native image processing primitives is complicated by the APR's irregular memory structure. Here, we provide the algorithmic building blocks required to efficiently and natively process APR images with the wide range of algorithms that can be formulated in terms of discrete convolutions. We show that APR convolution naturally leads to scale-adaptive algorithms that parallelize efficiently on multicore CPU and GPU architectures. We quantify the speedups compared to pixel-based algorithms and convolutions on evenly sampled data. We achieve pixel-equivalent throughputs of up to 1 TB/s on a single NVIDIA GeForce RTX 2080 gaming GPU, while requiring up to two orders of magnitude less memory than a pixel-based implementation.
Experimental sciences have come to depend heavily on our ability to organize, interpret and analyze high-dimensional datasets produced from observations of a large number of variables governed by natural processes. Natural laws, conservation principles, and dynamical structure introduce intricate inter-dependencies among these observed variables, which in turn yield geometric structure, with fewer degrees of freedom, on the dataset. We show how fine-scale features of this structure in data can be extracted from \emph{discrete} approximations to quantum mechanical processes given by data-driven graph Laplacians and localized wavepackets. This data-driven quantization procedure leads to a novel, yet natural uncertainty principle for data analysis induced by limited data. We illustrate the new approach with algorithms and several applications to real-world data, including the learning of patterns and anomalies in social distancing and mobility behavior during the COVID-19 pandemic.
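As a sketch of the discrete ingredients, assuming a Gaussian-affinity construction: a data-driven graph Laplacian plays the role of the quantum kinetic operator, and a normalized Gaussian amplitude over the sample points stands in for a localized wavepacket:

```python
import numpy as np
from scipy.spatial.distance import cdist

def data_driven_laplacian(points, sigma):
    """Graph Laplacian built from pairwise Gaussian affinities of the
    data points; a sketch of the general construction only."""
    W = np.exp(-cdist(points, points, 'sqeuclidean') / (2 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    return np.diag(W.sum(axis=1)) - W

def localized_wavepacket(points, center, width):
    """A Gaussian amplitude concentrated near `center`: the discrete
    analogue of a localized wavepacket on the sampled manifold."""
    d2 = np.sum((points - center) ** 2, axis=1)
    psi = np.exp(-d2 / (2 * width ** 2))
    return psi / np.linalg.norm(psi)
```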
We address the image denoising problem, where zero-mean white and homogeneous Gaussian additive noise is to be removed from a given image. The approach taken is based on sparse and redundant representations over trained dictionaries. Using the K-SVD algorithm, we obtain a dictionary that describes the image content effectively. Two training options are considered: using the corrupted image itself, or training on a corpus of high-quality image database. Since the K-SVD is limited in handling small image patches, we extend its deployment to arbitrary image sizes by defining a global image prior that forces sparsity over patches in every location in the image. We show how such Bayesian treatment leads to a simple and effective denoising algorithm. This leads to a state-of-the-art denoising performance, equivalent and sometimes surpassing recently published leading alternative denoising methods.
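The per-patch step can be sketched as follows, assuming a trained dictionary D is already available; the full method additionally sparse-codes every overlapping patch and averages the overlaps under the global image prior:

```python
import numpy as np

def omp(D, y, n_nonzero):
    """Orthogonal matching pursuit: greedy sparse coding of y over a
    dictionary D whose columns are unit-norm atoms."""
    residual, idx = y.copy(), []
    for _ in range(n_nonzero):
        idx.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, idx], y, rcond=None)
        residual = y - D[:, idx] @ coef
    x = np.zeros(D.shape[1])
    x[idx] = coef
    return x

def denoise_patch(D, noisy_patch, n_nonzero=4):
    """Denoise one vectorized patch by reconstructing it from its sparse
    code; D is assumed trained (e.g. by K-SVD). Sketch only."""
    return D @ omp(D, noisy_patch, n_nonzero)
```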
This paper presents an image denoising scheme that combines a directional quasi-analytic wavelet packet (QWP) transform with the state-of-the-art weighted nuclear norm minimization (WNNM) denoising algorithm. The QWP-based denoising method (QWPdn) consists of a multiscale QWP transform of the degraded image, adaptive localized soft thresholding of the transform coefficients using the bivariate shrinkage methodology, and restoration of the image from the thresholded coefficients taken from several decomposition levels. The combined method consists of several iterations of the QWPdn and WNNM algorithms, where at each iteration the output of one algorithm boosts the input to the other. The proposed scheme merges the ability of QWPdn to capture edges and fine texture patterns even in severely corrupted images with the exploitation, inherent in WNNM, of the non-local self-similarity present in real images. Multiple experiments comparing the proposed method with six advanced denoising algorithms, including WNNM, confirm that the combined cross-boosting algorithm outperforms most of them in terms of both quantitative metrics and perceived visual quality.
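The WNNM ingredient can be sketched as a weighted singular-value shrinkage applied to a stack of similar patches; the weight formula below is a simplified stand-in for the exact WNNM weighting:

```python
import numpy as np

def wnnm_shrink(patch_stack, noise_sigma, eps=1e-8):
    """Sketch of the weighted nuclear norm minimization step: stack
    similar patches as columns, then shrink singular values with weights
    inversely proportional to their magnitude, so large (signal) values
    are preserved and small (noise) ones suppressed."""
    U, s, Vt = np.linalg.svd(patch_stack, full_matrices=False)
    n = patch_stack.shape[1]
    weights = 2.0 * np.sqrt(2.0 * n) * noise_sigma ** 2 / (s + eps)
    s_shrunk = np.maximum(s - weights, 0.0)   # soft-threshold per value
    return U @ np.diag(s_shrunk) @ Vt
```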
Deconvolution is a widely used strategy to mitigate the blurring and noisy degradation of hyperspectral images~(HSI) generated by the acquisition devices. This issue is usually addressed by solving an ill-posed inverse problem. While investigating proper image priors can enhance the deconvolution performance, it is not trivial to handcraft a powerful regularizer and to set the regularization parameters. To address these issues, in this paper we introduce a tuning-free Plug-and-Play (PnP) algorithm for HSI deconvolution. Specifically, we use the alternating direction method of multipliers (ADMM) to decompose the optimization problem into two iterative sub-problems. A flexible blind 3D denoising network (B3DDN) is designed to learn deep priors and to solve the denoising sub-problem with different noise levels. A measure of 3D residual whiteness is then investigated to adjust the penalty parameters when solving the quadratic sub-problems, as well as a stopping criterion. Experimental results on both simulated and real-world data with ground-truth demonstrate the superiority of the proposed method.
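A minimal PnP-ADMM sketch for the deconvolution setting, with a Gaussian smoother standing in for the learned B3DDN denoiser and a fixed penalty parameter in place of the paper's whiteness-based adjustment:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def pnp_admm_deconv(y, kernel, rho=1.0, n_iter=30):
    """Plug-and-Play ADMM sketch: the data sub-problem is solved in
    closed form in the Fourier domain; the prior sub-problem is handled
    by a plug-in denoiser replacing the proximal operator."""
    H = np.fft.fft2(kernel, s=y.shape)   # blur operator in Fourier domain
    Y = np.fft.fft2(y)
    x, z, u = y.copy(), y.copy(), np.zeros_like(y)
    for _ in range(n_iter):
        # x-step: quadratic data-fidelity sub-problem.
        rhs = np.conj(H) * Y + rho * np.fft.fft2(z - u)
        x = np.real(np.fft.ifft2(rhs / (np.abs(H) ** 2 + rho)))
        # z-step: plug-in denoiser (toy stand-in for a deep prior).
        z = gaussian_filter(x + u, sigma=1.0)
        # Dual update.
        u += x - z
    return x
```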
FIG. 1. Schematic diagram of a Variational Quantum Algorithm (VQA). The inputs to a VQA are: a cost function C(θ), with θ a set of parameters that encodes the solution to the problem, an ansatz whose parameters are trained to minimize the cost, and (possibly) a set of training data {ρ_k} used during the optimization. Here, the cost can often be expressed in the form in Eq. (3), for some set of functions {f_k}. Also, the ansatz is shown as a parameterized quantum circuit (on the left), which is analogous to a neural network (also shown schematically on the right). At each iteration of the loop one uses a quantum computer to efficiently estimate the cost (or its gradients). This information is fed into a classical computer that leverages the power of optimizers to navigate the cost landscape C(θ) and solve the optimization problem in Eq. (1). Once a termination condition is met, the VQA outputs an estimate of the solution to the problem. The form of the output depends on the precise task at hand. The red box indicates some of the most common types of outputs.
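The loop in the caption amounts to the following schematic, with a toy classical cost standing in for the quantum estimate of C(θ) and finite differences standing in for hardware gradient estimates such as parameter shifts:

```python
import numpy as np

def vqa_optimize(estimate_cost, theta0, lr=0.1, n_iter=200, eps=1e-3):
    """Hybrid loop sketch: a (here mocked) quantum device estimates the
    cost C(theta); a classical optimizer updates the ansatz parameters."""
    theta = np.asarray(theta0, dtype=float)
    for _ in range(n_iter):
        grad = np.zeros_like(theta)
        for k in range(theta.size):
            shift = np.zeros_like(theta)
            shift[k] = eps
            grad[k] = (estimate_cost(theta + shift)
                       - estimate_cost(theta - shift)) / (2 * eps)
        theta -= lr * grad                 # classical update step
    return theta, estimate_cost(theta)

# Toy stand-in for the quantum cost estimator.
cost = lambda th: np.sum(np.sin(th) ** 2)
theta_opt, c_min = vqa_optimize(cost, theta0=[0.7, -1.2])
```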
Image denoising is a prerequisite for downstream tasks in many fields. Low-dose and photon-counting computed tomography (CT) denoising can optimize diagnostic performance at minimized radiation dose. Supervised deep denoising methods are popular but require paired clean or noisy samples that are usually unavailable in practice. Limited by the independent-noise assumption, current unsupervised denoising methods cannot handle the correlated noise present in CT images. Here we propose a similarity-based unsupervised deep denoising approach, called Noise2Sim, that works in a non-local and non-linear fashion to suppress not only independent but also correlated noise. Theoretically, Noise2Sim is asymptotically equivalent to supervised learning methods under mild conditions. Experimentally, Noise2Sim recovers intrinsic features from noisy low-dose CT and photon-counting CT images as effectively as, or even better than, supervised learning methods on practical datasets, visually, quantitatively, and statistically. Noise2Sim is a general unsupervised denoising approach with great potential in diverse applications.
Resolving morphological chemical phase transformations at the nanoscale is of vital importance to many scientific and industrial applications across various disciplines. The TXM-XANES imaging technique, which combines full-field transmission X-ray microscopy (TXM) and X-ray absorption near-edge structure (XANES), is an emerging tool that operates by acquiring a series of microscopy images with multi-energy X-rays and fitting them to obtain a chemical map. Its capability, however, is limited by the poor signal-to-noise ratio caused by system errors and the low-exposure illumination used for fast acquisition. In this work, by exploiting the intrinsic properties of TXM-XANES imaging data together with subspace modeling, we introduce a simple and robust denoising method that improves image quality and enables fast and high-sensitivity chemical imaging. Extensive experiments on both synthetic and real datasets demonstrate the superiority of the proposed method.
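The subspace-modeling idea can be sketched as a truncated SVD across the energy dimension; the rank and centering choices below are illustrative, not the paper's exact estimator:

```python
import numpy as np

def subspace_denoise(stack, rank):
    """Denoise a multi-energy image series by subspace modeling: reshape
    the (n_energies, H, W) stack into a matrix, keep its leading
    principal components, and reconstruct. A rank-r subspace captures the
    chemically meaningful variation; the remainder is treated as noise."""
    n_e, h, w = stack.shape
    X = stack.reshape(n_e, h * w)
    mean = X.mean(axis=1, keepdims=True)
    U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
    X_lr = U[:, :rank] @ np.diag(s[:rank]) @ Vt[:rank] + mean
    return X_lr.reshape(n_e, h, w)
```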
As a hybrid imaging technology, photoacoustic microscopy (PAM) imaging suffers from noise due to the maximum permissible exposure of laser intensity, the attenuation of ultrasound in tissue, and the inherent noise of the transducer. Denoising is a post-processing method for reducing noise and can restore PAM image quality. However, previous denoising techniques usually rely heavily on mathematical priors as well as manually selected parameters, resulting in unsatisfactory and slow denoising performance on different noisy images, which greatly hinders practical and clinical applications. In this work, we propose a deep-learning-based method to remove complex noise from PAM images without mathematical priors or manual selection of settings for different input images. An attention-enhanced generative adversarial network is used to extract image features and remove various kinds of noise. The proposed method is demonstrated on both synthetic and real datasets, including phantom (leaf veins) and in vivo (mouse ear blood vessels and zebrafish pigment) experiments. The results show that, compared with previous PAM denoising methods, our method exhibits good performance in recovering images both qualitatively and quantitatively. In addition, a denoising speed of 0.016 s is achieved for an image of $256\times256$ pixels. Our approach is effective and practical for the denoising of PAM images.
Mesh denoising is a fundamental problem in digital geometry processing. It seeks to remove surface noise while preserving surface-intrinsic signals as accurately as possible. While traditional approaches have been built on specialized priors to smooth surfaces, learning-based methods have achieved great success in generalization and automation. In this work, we provide a comprehensive review of the advances in mesh denoising, covering both traditional geometric approaches and recent learning-based methods. First, to familiarize readers with the denoising task, we summarize four common issues in mesh denoising. We then provide two categorizations of existing denoising methods. Furthermore, three important categories, comprising optimization-, filter-, and data-driven-based techniques, are introduced and analyzed in detail, respectively. Both qualitative and quantitative comparisons are presented to demonstrate the effectiveness of state-of-the-art denoising methods. Finally, potential directions for future work are pointed out to address the common problems of these approaches. This work also builds a mesh denoising benchmark, with which future researchers can easily and conveniently evaluate their methods against the state of the art.
Over the past few decades, many attempts have been made to solve the problem of recovering a high-resolution (HR) facial image from its corresponding low-resolution (LR) counterpart, a task commonly referred to as face hallucination. Despite the impressive performance achieved by position-patch and deep-learning-based approaches, most of these techniques still fail to recover identity-specific features of faces. The former group of algorithms often produces blurry and over-smoothed outputs in the presence of higher levels of degradation, whereas the latter generates faces that sometimes in no way resemble the individual in the input image. In this paper, a novel face super-resolution approach is introduced in which the hallucinated face is forced to lie in a subspace spanned by the available training faces. Therefore, in contrast to most existing face hallucination techniques, and thanks to this face-subspace prior, the reconstruction is performed in favor of recovering person-specific facial features rather than merely increasing quantitative image scores. Furthermore, inspired by recent advances in the area of 3D face reconstruction, an efficient 3D dictionary alignment scheme is also presented, through which the algorithm becomes capable of handling low-resolution faces taken under uncontrolled conditions. In extensive experiments performed on several well-known face datasets, the proposed algorithm shows remarkable performance by generating detailed and close-to-ground-truth results, outperforming state-of-the-art face hallucination algorithms by significant margins in both quantitative and qualitative evaluations.
Deep-learning-based methods hold state-of-the-art results in low-level image processing tasks, but remain difficult to interpret due to their black-box construction. Unrolled optimization networks present an interpretable alternative for constructing deep neural networks by deriving their architecture from classical iterative optimization methods, without the use of tricks from the standard deep learning toolbox. So far, such methods have demonstrated performance close to that of state-of-the-art models while using their interpretable structure to achieve relatively low learned parameter counts. In this work, we propose an unrolled convolutional dictionary learning network (CDLNet) and demonstrate its competitive denoising and joint denoising and demosaicing (JDD) performance in both low- and high-parameter-count regimes. Specifically, we show that the proposed model outperforms state-of-the-art fully convolutional denoising and JDD models when scaled to similar parameter counts. In addition, we leverage the model's interpretable structure to propose a noise-adaptive parameterization of the thresholds in the network, which enables state-of-the-art blind denoising performance and perfect generalization to noise levels unseen during training. Furthermore, we show that such performance extends to the JDD task and to unsupervised learning.
We introduce an algorithm for computing geodesics on sampled manifolds that relies on the simulation of quantum dynamics on a graph embedding of the sampled data. Our approach exploits classical results in semiclassical analysis and the quantum-classical correspondence, and forms a basis for techniques to learn the manifold from which a dataset is sampled, and subsequently for the nonlinear dimensionality reduction of high-dimensional datasets. We illustrate the new algorithm with data sampled from model manifolds and with a clustering demonstration based on COVID-19 mobility data. Finally, our method reveals intriguing connections between the discretization provided by data sampling and quantization.
Deep neural networks provide unprecedented performance gains in many real world problems in signal and image processing. Despite these gains, future development and practical deployment of deep networks is hindered by their blackbox nature, i.e., lack of interpretability, and by the need for very large training sets. An emerging technique called algorithm unrolling or unfolding offers promise in eliminating these issues by providing a concrete and systematic connection between iterative algorithms that are used widely in signal processing and deep neural networks. Unrolling methods were first proposed to develop fast neural network approximations for sparse coding. More recently, this direction has attracted enormous attention and is rapidly growing both in theoretic investigations and practical applications. The growing popularity of unrolled deep networks is due in part to their potential in developing efficient, high-performance and yet interpretable network architectures from reasonable size training sets. In this article, we review algorithm unrolling for signal and image processing. We extensively cover popular techniques for algorithm unrolling in various domains of signal and image processing including imaging, vision and recognition, and speech processing. By reviewing previous works, we reveal the connections between iterative algorithms and neural networks and present recent theoretical results. Finally, we provide a discussion on current limitations of unrolling and suggest possible future research directions.
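As a concrete instance of unrolling, a fixed number of ISTA iterations for sparse coding can be read as the layers of a feed-forward network; in a learned variant such as LISTA the per-layer operators and thresholds would be trained rather than fixed as in this sketch:

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding operator, the proximal map of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista_step(x, A, y, alpha, lam):
    """One ISTA iteration: gradient step on the data term followed by
    soft-thresholding."""
    return soft(x - alpha * A.T @ (A @ x - y), alpha * lam)

def unrolled_ista(A, y, n_layers=10, alpha=None, lam=0.1):
    """A fixed number of ISTA iterations, viewed as network layers."""
    if alpha is None:
        alpha = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L step size
    x = np.zeros(A.shape[1])
    for _ in range(n_layers):                     # each iteration = a layer
        x = ista_step(x, A, y, alpha, lam)
    return x
```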
Research in Graph Signal Processing (GSP) aims to develop tools for processing data defined on irregular graph domains. In this paper we first provide an overview of core ideas in GSP and their connection to conventional digital signal processing, along with a brief historical perspective to highlight how concepts recently developed in GSP build on top of prior research in other areas. We then summarize recent advances in developing basic GSP tools, including methods for sampling, filtering or graph learning. Next, we review progress in several application areas using GSP, including processing and analysis of sensor network data, biological data, and applications to image processing and machine learning.
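A minimal example of a basic GSP tool, assuming a symmetric adjacency matrix: filtering a graph signal by masking its graph Fourier coefficients, with the Laplacian eigenvectors playing the role of Fourier modes:

```python
import numpy as np

def graph_lowpass(W, signal, cutoff):
    """Low-pass filter a signal defined on graph nodes. W is a symmetric
    adjacency matrix; `cutoff` keeps the smoothest graph-frequency
    components."""
    L = np.diag(W.sum(axis=1)) - W        # combinatorial graph Laplacian
    lam, U = np.linalg.eigh(L)            # graph Fourier basis
    s_hat = U.T @ signal                  # graph Fourier transform
    s_hat[cutoff:] = 0.0                  # low-pass mask
    return U @ s_hat                      # inverse transform
```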
In the decade since 2010, successes in artificial intelligence have been at the forefront of computer science and technology, and vector-space models have solidified a position at the forefront of artificial intelligence. At the same time, quantum computers have become much more powerful, and announcements of major advances are frequently in the news. The mathematical techniques underlying these areas have more in common than is sometimes realized. Vector spaces took a position at the axiomatic heart of quantum mechanics in the 1930s, and this adoption was a key motivation for the derivation of logic and probability from the linear geometry of vector spaces. Quantum interactions between particles are modeled using the tensor product, which is also used to express objects and operations in artificial neural networks. This paper describes some of these common mathematical areas, including examples of how they are used in artificial intelligence (AI), particularly in automated reasoning and natural language processing (NLP). Techniques discussed include vector spaces, scalar products, subspaces and implication, orthogonal projection and negation, dual vectors, density matrices, positive operators, and tensor products. Application areas include information retrieval, categorization and implication, modeling word senses and disambiguation, inference in knowledge bases, and semantic composition. Some of these approaches could potentially be implemented on quantum hardware. Many of the practical steps in such an implementation are at early stages, and some are already realized. Explaining some of the common mathematical tools can help researchers in both AI and quantum computing further exploit these overlaps, recognizing and exploring new directions along the way.
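Two of the listed techniques can be illustrated in a few lines; the word vectors below are random stand-ins, so this is a schematic of projection-based negation and tensor-product composition rather than a worked NLP system:

```python
import numpy as np

def project(v, basis):
    """Orthogonal projection of v onto the span of the columns of `basis`,
    the operation behind quantum-style implication and negation in
    vector models of word meaning."""
    Q, _ = np.linalg.qr(basis)
    return Q @ (Q.T @ v)

rng = np.random.default_rng(0)
suit = rng.normal(size=8)                  # ambiguous word vector
legal_sense = rng.normal(size=(8, 2))      # subspace for the legal sense
# "suit NOT lawsuit": negation removes the legal-sense component.
suit_clothing = suit - project(suit, legal_sense)

# Tensor product: composite of two vectors, used both for modeling
# quantum particle interactions and for binding in neural representations.
pair = np.kron(rng.normal(size=4), rng.normal(size=4))
```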
Between 2015 and 2019, the Horizon 2020-funded Innovative Training Network named "AMVA4NewPhysics" studied the customization and application of advanced multivariate analysis methods and statistical learning tools to high-energy physics problems, and developed entirely new ones. Many of these methods have been successfully used to improve the sensitivity of data analyses performed by the ATLAS and CMS experiments at the CERN Large Hadron Collider; several others, still in the testing phase, promise to further improve the precision of measurements of fundamental physics parameters and the reach of searches for new phenomena. In this paper, the most relevant new tools, among those researched and developed, are presented together with an evaluation of their performance.