In this paper, we introduce a Wasserstein patch prior for superresolution of two-dimensional images. Here, we assume that we are given, in addition to the low-resolution observation, a reference image whose patch distribution is similar to that of the ground truth of the reconstruction. This assumption is fulfilled, for instance, when working with texture images or material data. The proposed regularizer then penalizes the $W_2$-distance between the patch distribution of the reconstruction and the patch distribution of some reference image at different scales. We demonstrate the performance of the proposed regularizer by two-dimensional numerical examples.
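As a rough illustration of this kind of prior (the notation below is assumed, not taken from the abstract), the regularized reconstruction can be sketched as
$$\hat{u} \in \operatorname*{arg\,min}_{u} \ \tfrac{1}{2}\,\|Au - y\|_2^2 + \lambda \sum_{s} W_2^2\bigl(\mu_{P_s u},\, \mu_{P_s u_{\mathrm{ref}}}\bigr),$$
where $A$ is the superresolution forward operator, $P_s$ extracts patches at scale $s$, and $\mu_{P_s u}$ denotes the empirical patch distribution of $u$ at that scale.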
Learning neural networks from only a small amount of data is an important research topic with enormous potential for applications. In this paper, we introduce a regularizer for the variational modeling of inverse problems in imaging that is based on normalizing flows. Our regularizer, called PatchNR, involves a normalizing flow learned on patches of very few images. In particular, the training is independent of the considered inverse problem, so the same regularizer can be used for different forward operators acting on the same class of images. By investigating the distribution of patches versus the distribution of the whole image class, we prove that our variational model is indeed a MAP approach. If additional supervised information is available, our model can be generalized to conditional patches. Numerical examples for superresolution of material images and for low-dose or limited-angle computed tomography (CT) show that our method provides high-quality results among methods with similar assumptions, while requiring only very few data.
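A hedged sketch of such a patch-based variational model (symbols assumed: $T_\theta$ is the normalizing flow trained on patches, $p_Z$ a standard Gaussian latent, $P_i$ extracts the $i$-th patch, $F$ the forward operator, and $D$ a data-fidelity term):
$$\hat{x} \in \operatorname*{arg\,min}_{x} \ D(F(x), y) + \lambda \sum_{i}\Bigl[-\log p_Z\bigl(T_\theta^{-1}(P_i x)\bigr) - \log\bigl|\det \nabla T_\theta^{-1}(P_i x)\bigr|\Bigr].$$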
We propose a general learning-based framework for solving nonsmooth and nonconvex image reconstruction problems. We model the regularization function as the composition of the $\ell_{2,1}$ norm and a smooth but nonconvex feature mapping parametrized as a deep convolutional neural network. We develop a provably convergent descent-type algorithm to solve the nonsmooth nonconvex minimization problem by leveraging Nesterov's smoothing technique and the idea of residual learning, and learn the network parameters such that the outputs of the algorithm match the references in the training data. Our method is versatile, as one can employ various modern network structures for the regularization, and the resulting network inherits the guaranteed convergence of the algorithm. We also show that the proposed network is parameter-efficient and its performance compares favorably with state-of-the-art methods in a variety of image reconstruction problems in practice.
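Schematically, the model class described above can be written as follows (notation assumed; $g_\theta$ is the smooth but nonconvex feature map given by a deep CNN):
$$\min_{x} \ \tfrac{1}{2}\,\|Ax - y\|_2^2 + \lambda \bigl\|g_\theta(x)\bigr\|_{2,1}, \qquad \|z\|_{2,1} = \sum_{j}\Bigl(\sum_{k} z_{j,k}^2\Bigr)^{1/2},$$
where the nonsmooth $\ell_{2,1}$ norm is handled via Nesterov-type smoothing inside the learned descent algorithm.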
The Plug-and-Play (PnP) framework makes it possible to integrate advanced image denoising priors into optimization algorithms, in order to efficiently solve a variety of image restoration tasks typically posed as maximum a posteriori (MAP) estimation problems. The alternating direction method of multipliers (ADMM) and the Regularization by Denoising (RED) algorithms are two examples of such approaches that achieved breakthroughs in image restoration. However, while the former method only applies to proximal algorithms, it has recently been shown that there exists no regularization that explains the RED algorithm when the denoiser lacks Jacobian symmetry, which is precisely the case for most practical denoisers. To the best of our knowledge, no method exists to train a network that directly represents the gradient of a regularizer and that can be used directly in plug-and-play gradient-based algorithms. We show that it is possible to train a network directly modeling the gradient of a MAP regularizer while jointly training the corresponding MAP denoiser. We use this network in gradient-based optimization methods and obtain better results than with other generic plug-and-play approaches. We also show that the regularizer can be used as a pretrained network for unrolled gradient descent. Finally, we show that the resulting denoiser allows for better convergence of Plug-and-Play ADMM.
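For concreteness, a minimal sketch of plug-and-play gradient descent with a learned regularizer gradient might look as follows (all names are hypothetical; `reg_grad` stands for a network approximating the gradient of the MAP regularizer):

```python
import torch

def pnp_gradient_descent(y, A, At, reg_grad, lam=0.1, step=1e-2, iters=200):
    """Minimal sketch: gradient descent on 0.5*||Ax - y||^2 + lam * R(x),
    where grad R is replaced by the learned network `reg_grad`.
    A and At are the forward operator and its adjoint (callables on tensors)."""
    x = At(y).clone()
    for _ in range(iters):
        grad_data = At(A(x) - y)                      # gradient of the data-fidelity term
        x = x - step * (grad_data + lam * reg_grad(x))
    return x
```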
Recently, owing to their high performance and ultrafast inference times, deep learning methods have become the main research frontier for biological image reconstruction and enhancement problems. However, due to the difficulty of obtaining matched reference data for supervised learning, there is increasing interest in unsupervised learning methods that do not require paired reference data. In particular, self-supervised learning and generative models have been successfully used for various biological imaging applications. In this paper, we present an overview of these approaches from a coherent perspective in the context of classical inverse problems, and discuss their applications to biological imaging, including electron, fluorescence and deconvolution microscopy, optical diffraction tomography, and functional neuroimaging.
Since the seminal work of Venkatakrishnan et al. in 2013, Plug & Play (PnP) methods have become ubiquitous in Bayesian imaging. These methods derive minimum mean square error (MMSE) or maximum a posteriori (MAP) estimators for inverse problems in imaging by combining an explicit likelihood function with a prior that is implicitly defined by an image denoising algorithm. The PnP algorithms proposed in the literature mainly differ in the iterative schemes they use for optimization or for sampling. In the case of optimization schemes, some recent works guarantee convergence to a fixed point, albeit not necessarily a MAP estimate. In the case of sampling schemes, to the best of our knowledge, there is no known proof of convergence. There also remain important open questions regarding whether the underlying Bayesian models and estimators are well defined, well posed, and have the basic regularity properties required to support these numerical schemes. To address these limitations, this paper develops theory, methods, and provably convergent algorithms for performing Bayesian inference with PnP priors. We introduce two algorithms: 1) PnP-ULA (Unadjusted Langevin Algorithm) for Monte Carlo sampling and MMSE inference; and 2) PnP-SGD (Stochastic Gradient Descent) for MAP inference. Leveraging recent results on the quantitative convergence of Markov chains, we establish detailed convergence guarantees for these two algorithms under realistic assumptions on the denoising operators used, with special attention to denoisers based on deep neural networks. We also show that these algorithms approximately target a decision-theoretically optimal Bayesian model that is well posed. The proposed algorithms are demonstrated on several canonical problems, such as image deblurring, inpainting, and denoising, where they are used for point estimation as well as for uncertainty visualization and quantification.
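As an informal sketch (not the authors' exact scheme), a PnP unadjusted Langevin iteration can be written with a denoiser acting as an approximate prior score:

```python
import torch

def pnp_ula(y, x0, grad_log_likelihood, denoiser, eps=1e-2, delta=1e-4, n_iter=10000):
    """Hedged sketch of a PnP-ULA sampler. Assumptions: `denoiser` approximates an MMSE
    denoiser at noise variance `eps`, so (denoiser(x) - x) / eps acts as a prior score
    (Tweedie-style); `grad_log_likelihood(x, y)` returns the gradient of log p(y|x)."""
    x = x0.clone()
    samples = []
    for _ in range(n_iter):
        prior_score = (denoiser(x) - x) / eps
        noise = (2.0 * delta) ** 0.5 * torch.randn_like(x)
        x = x + delta * (grad_log_likelihood(x, y) + prior_score) + noise
        samples.append(x.clone())
    return samples  # used for MMSE estimation (sample mean) and uncertainty quantification
```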
In recent years, deep learning has achieved remarkable empirical success in image reconstruction. This has catalyzed an ongoing quest for precise characterizations of the correctness and reliability of data-driven methods in critical use cases, for instance in medical imaging. Notwithstanding the excellent performance and efficacy of deep learning-based methods, concerns have been raised about their stability, or lack thereof, which carries serious practical implications. Significant progress has been made in recent years to unravel the inner workings of data-driven image recovery methods, challenging their widely perceived black-box nature. In this article, we specify relevant notions of convergence for data-driven image reconstruction, which form the basis of a survey of learned methods with mathematically rigorous reconstruction guarantees. One highlighted example is the role of ICNNs (input convex neural networks), which offer the possibility of combining the power of deep learning with classical convex regularization theory for designing methods that are provably convergent. This survey article aims to advance our understanding of data-driven image reconstruction methods, for researchers as well as practitioners, by providing an accessible description of useful convergence concepts and by placing some of the existing empirical practices on a solid mathematical foundation.
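One way to make the ICNN idea concrete (notation assumed, not specific to any single method in the survey): if the learned regularizer $R_\theta$ is an input convex neural network, the variational problem
$$\hat{x} \in \operatorname*{arg\,min}_{x} \ \tfrac{1}{2}\,\|Ax - y\|_2^2 + \lambda R_\theta(x)$$
remains convex in $x$, so classical existence, stability, and convergence results from convex regularization theory can be invoked.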
Conventional cameras measure image intensity. In contrast, event cameras measure per-pixel temporal intensity changes asynchronously. Recovering intensity from events is a popular research topic, since the reconstructed images inherit the high dynamic range (HDR) and high-speed properties of events; hence they can be used in many robotic vision applications and to generate slow-motion HDR videos. However, state-of-the-art methods tackle this problem by training an event-to-image recurrent neural network (RNN), which lacks explainability and is difficult to tune. In this work, we show for the first time how the joint problem of motion and intensity estimation leads us to model event-based image reconstruction as a linear inverse problem that can be solved without training an image reconstruction RNN. Instead, classical and learning-based image priors can be used to solve the problem and remove artifacts from the reconstructed images. Experiments show that the proposed approach produces images of visual quality on par with state-of-the-art methods, despite only using data from a short time interval (i.e., without recurrent connections). Our method can also be used to improve the quality of images reconstructed by approaches that first estimate the image Laplacian; here, our method can be interpreted as a Poisson reconstruction guided by image priors.
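A hedged sketch of the Laplacian-based variant mentioned at the end (symbols assumed): given an estimate $\hat{L}$ of the image Laplacian obtained from events, the intensity image $u$ can be recovered as a regularized Poisson reconstruction,
$$\hat{u} \in \operatorname*{arg\,min}_{u} \ \tfrac{1}{2}\,\|\Delta u - \hat{L}\|_2^2 + \lambda R(u),$$
where $R$ is a classical or learned image prior.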
This paper presents a GPU-accelerated computational framework for reconstructing high-resolution (HR) light field (LF) images under mixed Gaussian-impulse noise conditions. The main focus is on high-performance approaches that consider both processing speed and reconstruction quality. From a statistical perspective, we derive a joint $\ell^1$-$\ell^2$ data fidelity term for penalizing the HR reconstruction error, taking the mixed noise situation into account. For regularization, we adopt the weighted non-local total variation approach, which allows us to effectively handle LF images through an appropriate weighting scheme. We show that the alternating direction method of multipliers (ADMM) can be used to simplify the computational complexity and leads to high-performance parallel computation on GPU platforms. Extensive experiments are conducted on both synthetic 4D LF datasets and natural image datasets to validate the robustness of the proposed SR model and to evaluate the performance of the accelerated optimizer. Experimental results show that our approach achieves better reconstruction quality under severe mixed-noise conditions compared with state-of-the-art approaches. In addition, the proposed approach overcomes the limitation of previous works in handling large-scale SR tasks. While fitting on a single off-the-shelf GPU, the proposed accelerator provides average speedups of 2.46$\times$ and 1.57$\times$ for $\times 2$ and $\times 3$ SR tasks, respectively. Moreover, a speedup of $77\times$ is achieved compared with CPU execution.
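A schematic objective consistent with this description (weights and notation assumed): the joint $\ell^1$-$\ell^2$ fidelity accounts for the impulse and Gaussian noise components, and WNLTV denotes the weighted non-local total variation regularizer,
$$\min_{x} \ \alpha\,\|Ax - y\|_1 + \tfrac{\beta}{2}\,\|Ax - y\|_2^2 + \lambda\,\mathrm{WNLTV}(x),$$
which is split into simpler subproblems by ADMM and mapped onto the GPU.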
Deep convolutional networks have become a popular tool for image generation and restoration. Generally, their excellent performance is imputed to their ability to learn realistic image priors from a large number of example images. In this paper, we show that, on the contrary, the structure of a generator network is sufficient to capture a great deal of low-level image statistics prior to any learning. In order to do so, we show that a randomly-initialized neural network can be used as a handcrafted prior with excellent results in standard inverse problems such as denoising, superresolution, and inpainting. Furthermore, the same prior can be used to invert deep neural representations to diagnose them, and to restore images based on flash-no flash input pairs.
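A minimal sketch of the idea (hypothetical helper, denoising case): a randomly initialized generator is fitted to the single corrupted image from a fixed random input, and early stopping acts as the regularizer.

```python
import torch

def deep_image_prior(y, net, z, iters=3000, lr=1e-3):
    """Fit a randomly initialized generator `net` to the corrupted image `y`
    from a fixed random code `z`; stop early to avoid fitting the noise."""
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(iters):
        opt.zero_grad()
        loss = torch.nn.functional.mse_loss(net(z), y)
        loss.backward()
        opt.step()
    return net(z).detach()
```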
Deconvolution is a widely used strategy to mitigate the blurring and noisy degradation of hyperspectral images (HSI) generated by the acquisition devices. This issue is usually addressed by solving an ill-posed inverse problem. While investigating proper image priors can enhance the deconvolution performance, it is not trivial to handcraft a powerful regularizer and to set the regularization parameters. To address these issues, in this paper we introduce a tuning-free Plug-and-Play (PnP) algorithm for HSI deconvolution. Specifically, we use the alternating direction method of multipliers (ADMM) to decompose the optimization problem into two iterative sub-problems. A flexible blind 3D denoising network (B3DDN) is designed to learn deep priors and to solve the denoising sub-problem with different noise levels. A measure of 3D residual whiteness is then investigated to adjust the penalty parameters when solving the quadratic sub-problems, as well as a stopping criterion. Experimental results on both simulated and real-world data with ground-truth demonstrate the superiority of the proposed method.
Neural networks have recently allowed solving many ill-posed inverse problems with unprecedented performance. Physics informed approaches already progressively replace carefully hand-crafted reconstruction algorithms in real applications. However, these networks suffer from a major defect: when trained on a given forward operator, they do not generalize well to a different one. The aim of this paper is twofold. First, we show through various applications that training the network with a family of forward operators allows solving the adaptivity problem without compromising the reconstruction quality significantly. Second, we illustrate that this training procedure allows tackling challenging blind inverse problems. Our experiments include partial Fourier sampling problems arising in magnetic resonance imaging (MRI), computerized tomography (CT) and image deblurring.
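A sketch of the training idea described above (all names hypothetical): at each step a forward operator is drawn from a family, a measurement is simulated, and the network learns to reconstruct regardless of which operator produced the data.

```python
import torch

def train_operator_robust(model, image_loader, sample_operator, epochs=10, lr=1e-4):
    """Train a reconstruction network over a family of forward operators.
    `sample_operator()` returns a random operator A (e.g. a Fourier mask or blur);
    `model(y, A)` is an operator-aware reconstruction network."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for x in image_loader:
            A = sample_operator()                       # draw a forward operator
            y = A(x)                                    # simulate the measurement
            loss = torch.nn.functional.mse_loss(model(y, A), x)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model
```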
In this paper, we propose a novel deep convolutional neural network (CNN)-based algorithm for solving ill-posed inverse problems. Regularized iterative algorithms have emerged as the standard approach to ill-posed inverse problems in the past few decades. These methods produce excellent results, but can be challenging to deploy in practice due to factors including the high computational cost of the forward and adjoint operators and the difficulty of hyperparameter selection. The starting point of our work is the observation that unrolled iterative methods have the form of a CNN (filtering followed by point-wise non-linearity) when the normal operator (H * H, the adjoint of H times H) of the forward model is a convolution. Based on this observation, we propose using direct inversion followed by a CNN to solve normal-convolutional inverse problems. The direct inversion encapsulates the physical model of the system, but leads to artifacts when the problem is ill-posed; the CNN combines multiresolution decomposition and residual learning in order to learn to remove these artifacts while preserving image structure. We demonstrate the performance of the proposed network in sparse-view reconstruction (down to 50 views) on parallel beam X-ray computed tomography in synthetic phantoms as well as in real experimental sinograms. The proposed network outperforms total variation-regularized iterative reconstruction for the more realistic phantoms and requires less than a second to reconstruct a 512 × 512 image on the GPU. K.H. Jin acknowledges the support from the "EPFL Fellows" fellowship program co-funded by Marie Curie from the European Union's Horizon 2020 Framework Programme for Research and Innovation under grant agreement 665667.
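The pipeline reduces to a small amount of glue code; a hedged sketch (hypothetical `fbp` and `cnn` callables standing in for the filtered back-projection and the trained multiresolution network):

```python
def fbp_convnet_reconstruct(sinogram, fbp, cnn):
    """Direct inversion followed by a CNN: `fbp` returns an artifact-laden image
    from the sparse-view sinogram, and the residual network `cnn` predicts the
    artifact correction while preserving image structure."""
    x_fbp = fbp(sinogram)          # physics-based direct inversion
    return x_fbp + cnn(x_fbp)      # residual learning: add the predicted correction
```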
Wasserstein barycenter, built on the theory of optimal transport, provides a powerful framework to aggregate probability distributions, and it has increasingly attracted great attention within the machine learning community. However, it suffers from severe computational burden, especially for high dimensional and continuous settings. To this end, we develop a novel continuous approximation method for the Wasserstein barycenters problem given sample access to the input distributions. The basic idea is to introduce a variational distribution as the approximation of the true continuous barycenter, so as to frame the barycenters computation problem as an optimization problem, where parameters of the variational distribution adjust the proxy distribution to be similar to the barycenter. Leveraging the variational distribution, we construct a tractable dual formulation for the regularized Wasserstein barycenter problem with c-cyclical monotonicity, which can be efficiently solved by stochastic optimization. We provide theoretical analysis on convergence and demonstrate the practical effectiveness of our method on real applications of subset posterior aggregation and synthetic data.
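For reference, the underlying problem being approximated is the standard Wasserstein barycenter (notation generic, not specific to the method above):
$$\bar{\nu} \in \operatorname*{arg\,min}_{\nu} \ \sum_{i=1}^{m} \lambda_i\, W_2^2(\nu, \mu_i), \qquad \lambda_i \ge 0,\ \ \sum_{i=1}^{m}\lambda_i = 1,$$
where the $\mu_i$ are the input distributions, accessed here only through samples.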
We consider the population Wasserstein barycenter problem for random probability measures supported on a finite set of points and generated by an online stream of data. This leads to a complicated stochastic optimization problem in which the objective is given as an expectation of a function that is itself the solution of a random optimization problem. We exploit the structure of the problem and obtain a convex-concave stochastic saddle-point reformulation of it. In the setting where the distribution of the random probability measures is discrete, we propose a stochastic optimization algorithm and estimate its complexity. A second result, based on kernel methods, extends the previous one to arbitrary distributions of random probability measures. Moreover, in many cases this new algorithm has a better total complexity than approaches based on stochastic approximation. We also illustrate our developments with a series of numerical experiments.
We introduce a parametric view of non-local two-step denoisers, for which BM3D is a major representative, where quadratic risk minimization is leveraged for unsupervised optimization. Within this paradigm, we propose to extend the underlying mathematical parametric formulation by iteration. This generalization can be expected to further improve the denoising performance, somehow curbed by the impracticality of repeating the second stage for all two-step denoisers. The resulting formulation involves estimating an even larger number of parameters in an unsupervised manner, which is all the more challenging. Focusing on the parameterized form of NL-Ridge, the simplest but also most efficient non-local two-step denoiser, we propose a progressive scheme to approximate the parameters minimizing the risk. In the end, the denoised images are made up of iterative linear combinations of patches. Experiments on artificially noisy images but also on real-world noisy images demonstrate that our method compares favorably with the very best unsupervised denoisers such as WNNM, outperforming the recent deep-learning-based approaches, while being much faster.
Model-based optimization methods and discriminative learning methods have been the two dominant strategies for solving various inverse problems in low-level vision. Typically, those two kinds of methods have their respective merits and drawbacks, e.g., model-based optimization methods are flexible for handling different inverse problems but are usually time-consuming with sophisticated priors for the purpose of good performance; in the meanwhile, discriminative learning methods have fast testing speed but their application range is greatly restricted by the specialized task. Recent works have revealed that, with the aid of variable splitting techniques, denoiser prior can be plugged in as a modular part of model-based optimization methods to solve other inverse problems (e.g., deblurring). Such an integration induces considerable advantage when the denoiser is obtained via discriminative learning. However, the study of integration with fast discriminative denoiser prior is still lacking. To this end, this paper aims to train a set of fast and effective CNN (convolutional neural network) denoisers and integrate them into model-based optimization method to solve other inverse problems. Experimental results demonstrate that the learned set of denoisers not only achieve promising Gaussian denoising results but also can be used as prior to deliver good performance for various low-level vision applications.
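A minimal sketch of such an integration via variable splitting (half-quadratic splitting here; all names hypothetical, and the data subproblem is approximated by a gradient step rather than its closed-form solution):

```python
import torch

def pnp_hqs(y, A, At, get_denoiser, mu=1.0, step=0.1, iters=30):
    """Plug a set of pretrained CNN denoisers into half-quadratic splitting.
    `get_denoiser(sigma)` returns a denoiser suited to noise level sigma;
    A and At are the degradation operator and its adjoint."""
    x = At(y).clone()
    z = x.clone()
    for _ in range(iters):
        # data subproblem: gradient step on 0.5*||Ax - y||^2 + 0.5*mu*||x - z||^2
        x = x - step * (At(A(x) - y) + mu * (x - z))
        # prior subproblem: replaced by a CNN denoiser at noise level ~ 1/sqrt(mu)
        z = get_denoiser((1.0 / mu) ** 0.5)(x)
    return z
```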
Physics-driven deep learning methods have emerged as a powerful tool for computational magnetic resonance imaging (MRI) problems, pushing reconstruction performance to new limits. This article provides an overview of recent developments in incorporating physics information into learning-based MRI reconstruction. We consider inverse problems with both linear and non-linear forward models for computational MRI and review the classical approaches for solving them. We then focus on physics-driven deep learning methods, covering physics-driven loss functions, plug-and-play methods, generative models, and unrolled networks. We highlight domain-specific challenges, such as neural network implementations and building blocks for complex-valued data, as well as translational applications in MRI with linear and non-linear forward models. Finally, we discuss common issues and open challenges, and draw connections to the importance of physics-driven learning when combined with other downstream tasks in the medical imaging pipeline.
Diffusion models have recently been studied as powerful generative solvers for inverse problems, owing to their high-quality reconstructions and the ease with which they can be combined with existing iterative solvers. However, most works focus on solving simple linear inverse problems in noiseless settings, which significantly understates the complexity of real-world problems. In this work, we extend diffusion solvers to efficiently handle general noisy (non)linear inverse problems via a Laplace approximation of the posterior sampling. Interestingly, the resulting posterior sampling scheme is a blended version of diffusion sampling with a manifold-constrained gradient, without a strict measurement-consistency projection step, yielding a more desirable generative path in noisy settings compared with previous studies. Our method demonstrates that diffusion models can incorporate various measurement noise statistics, such as Gaussian and Poisson, and can also efficiently handle noisy nonlinear inverse problems such as Fourier phase retrieval and non-uniform deblurring.
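A hedged sketch of one guided reverse-diffusion step in this spirit (all names hypothetical; `predict_x0` denotes a Tweedie-style denoised estimate and `ddpm_update` a standard unconditional reverse step):

```python
import torch

def guided_reverse_step(x_t, t, score_model, ddpm_update, A, y, zeta=1.0):
    """One posterior-sampling step: an unconditional diffusion update corrected by the
    gradient of the measurement misfit evaluated at the denoised estimate x0_hat."""
    x_t = x_t.detach().requires_grad_(True)
    x0_hat = score_model.predict_x0(x_t, t)             # denoised estimate of the clean image
    misfit = torch.linalg.vector_norm(y - A(x0_hat))    # ||y - A(x0_hat)||
    grad = torch.autograd.grad(misfit, x_t)[0]
    x_prev = ddpm_update(x_t, t, x0_hat)                # unconditional reverse-diffusion step
    return x_prev - zeta * grad                         # nudge toward measurement consistency
```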
Existing deep-learning based tomographic image reconstruction methods do not provide accurate estimates of reconstruction uncertainty, hindering their real-world deployment. This paper develops a method, termed as the linearised deep image prior (DIP), to estimate the uncertainty associated with reconstructions produced by the DIP with total variation regularisation (TV). Specifically, we endow the DIP with conjugate Gaussian-linear model type error-bars computed from a local linearisation of the neural network around its optimised parameters. To preserve conjugacy, we approximate the TV regulariser with a Gaussian surrogate. This approach provides pixel-wise uncertainty estimates and a marginal likelihood objective for hyperparameter optimisation. We demonstrate the method on synthetic data and real-measured high-resolution 2D $\mu$CT data, and show that it provides superior calibration of uncertainty estimates relative to previous probabilistic formulations of the DIP. Our code is available at https://github.com/educating-dip/bayes_dip.
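A hedged sketch of the linearisation step (notation assumed): the trained network $f_\theta(z)$ is expanded around its optimised parameters $\theta^*$,
$$f_\theta(z) \approx f_{\theta^*}(z) + J(\theta - \theta^*), \qquad J = \partial_\theta f_\theta(z)\big|_{\theta = \theta^*},$$
so a Gaussian prior on $\theta - \theta^*$ with covariance $\Sigma_\theta$ induces approximately Gaussian error bars $J\,\Sigma_\theta\,J^\top$ on the reconstruction, while the Gaussian surrogate for the TV term keeps the model conjugate.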