With the growing interest in deep neural networks (DNNs) for image reconstruction tasks, their reliability has been called into question (Antun et al., 2020; Gottschling et al., 2020). However, recent work has shown that, in terms of the $\ell^2$-reconstruction error, they are comparable to total variation (TV) minimization in their robustness to adversarial noise (Genzel et al., 2022). We consider a different robustness notion based on the $\ell^\infty$-norm and argue that localized reconstruction artifacts are more relevant than the $\ell^2$-error. We craft adversarial perturbations of undersampled MRI measurements that induce severe localized artifacts in the TV-regularized reconstruction. The same attack method is less effective against DNN-based reconstruction. Finally, we show that this phenomenon is inherent to reconstruction methods for which exact recovery can be guaranteed, as is the case for compressed-sensing reconstruction with $\ell^1$- or TV-minimization.
Total variation (TV) regularization has underpinned a variety of variational models for image processing tasks. We propose combining the backward diffusion process from the earlier literature on image enhancement with TV regularization, and show that the resulting enhanced TV minimization model is particularly effective at reducing the loss of contrast commonly encountered by models using TV regularization. We establish stable reconstruction guarantees for the enhanced TV model from noisy subsampled measurements; both non-adaptive linear measurements and variable-density sampled Fourier measurements are considered. In particular, under some weaker restricted isometry property conditions, the enhanced TV minimization model is shown to have tighter reconstruction error bounds than various TV-based models in the regime where the noise level is large and the number of measurements is limited. The advantages of the enhanced TV model are also numerically validated by preliminary experiments on the reconstruction of several synthetic, natural, and medical images.
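For reference, the plain TV-denoising baseline that the TV-based models above build on can be sketched in a few lines of NumPy. This is a generic illustration of smoothed-TV gradient descent, not the enhanced TV model of the paper; all parameter values (`lam`, `eps`, `step`, iteration count) are illustrative.

```python
import numpy as np

def tv_denoise(y, lam=0.15, eps=0.1, step=0.05, n_iter=400):
    """Gradient descent on the smoothed TV objective
    0.5*||x - y||_2^2 + lam * sum_i sqrt(|grad x|_i^2 + eps^2)."""
    x = y.copy()
    for _ in range(n_iter):
        # forward differences with replicate (Neumann) boundary handling
        gx = np.diff(x, axis=1, append=x[:, -1:])
        gy = np.diff(x, axis=0, append=x[-1:, :])
        mag = np.sqrt(gx**2 + gy**2 + eps**2)
        # divergence (backward differences) of the normalized gradient field
        px, py = gx / mag, gy / mag
        div = (np.diff(px, axis=1, prepend=px[:, :1])
               + np.diff(py, axis=0, prepend=py[:1, :]))
        x = x - step * ((x - y) - lam * div)
    return x
```

The characteristic TV behavior discussed above (noise removal on piecewise-constant images, at the cost of some contrast loss) is visible already in this toy version.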
Emerging magnetic resonance (MR) imaging modalities can quantify hemodynamics, but require long acquisition times, hindering their widespread use for the early diagnosis of cardiovascular disease. To reduce the acquisition time, reconstruction methods from undersampled measurements are routinely used, exploiting representations designed to increase image compressibility. The reconstructed anatomical and hemodynamic images may present visual artifacts. Although some of these artifacts are essentially reconstruction errors, and thus a consequence of undersampling, others may be due to measurement noise or to the random choice of sampled frequencies. Stated differently, the reconstructed image becomes a random variable, and both its bias and its covariance can lead to visual artifacts; the latter induce spatial correlations that may be misinterpreted as visual information. While the nature of the former has been studied in the literature, the latter has not received attention. In this study, we investigate the theoretical properties of the random perturbations arising from the reconstruction process and perform a number of numerical experiments on simulated data and on an aortic aneurysm. Our results show that the correlation length remains limited to two to three pixels when a Gaussian undersampling pattern is combined with recovery algorithms based on $\ell_1$-norm minimization. However, the correlation length may increase significantly for other undersampling patterns, higher undersampling factors (i.e., 8x or 16x compression), and different reconstruction methods.
Discriminative features extracted from the sparse coding model have been shown to perform well for classification. Recent deep learning architectures have further improved reconstruction in inverse problems by considering new dense priors learned from data. We propose a novel dense and sparse coding model that integrates both representation capability and discriminative features. The model studies the problem of recovering a dense vector $\mathbf{x}$ and a sparse vector $\mathbf{u}$ given measurements of the form $\mathbf{y} = \mathbf{A}\mathbf{x}+\mathbf{B}\mathbf{u}$. Our first analysis proposes a geometric condition based on the minimal angle between spanning subspaces corresponding to the matrices $\mathbf{A}$ and $\mathbf{B}$ that guarantees unique solution to the model. The second analysis shows that, under mild assumptions, a convex program recovers the dense and sparse components. We validate the effectiveness of the model on simulated data and propose a dense and sparse autoencoder (DenSaE) tailored to learning the dictionaries from the dense and sparse model. We demonstrate that (i) DenSaE denoises natural images better than architectures derived from the sparse coding model ($\mathbf{B}\mathbf{u}$), (ii) in the presence of noise, training the biases in the latter amounts to implicitly learning the $\mathbf{A}\mathbf{x} + \mathbf{B}\mathbf{u}$ model, (iii) $\mathbf{A}$ and $\mathbf{B}$ capture low- and high-frequency contents, respectively, and (iv) compared to the sparse coding model, DenSaE offers a balance between discriminative power and representation.
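The recovery problem $\mathbf{y} = \mathbf{A}\mathbf{x}+\mathbf{B}\mathbf{u}$ can be illustrated with a small proximal-gradient (ISTA) solver for the convex relaxation $\min 0.5\|\mathbf{y}-\mathbf{A}\mathbf{x}-\mathbf{B}\mathbf{u}\|^2 + \lambda\|\mathbf{u}\|_1$, where only the sparse part is thresholded. This is a hedged sketch of a convex program of the kind referenced above, not the paper's exact formulation; sizes and $\lambda$ are illustrative.

```python
import numpy as np

def soft(z, t):
    """Entrywise soft-thresholding (prox of the l1 norm)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def dense_sparse_ista(y, A, B, lam=0.1, n_iter=500):
    """Proximal gradient for 0.5*||y - A x - B u||^2 + lam*||u||_1."""
    M = np.hstack([A, B])
    L = np.linalg.norm(M, 2) ** 2              # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    u = np.zeros(B.shape[1])
    for _ in range(n_iter):
        r = A @ x + B @ u - y
        x = x - (A.T @ r) / L                  # plain gradient step on the dense part
        u = soft(u - (B.T @ r) / L, lam / L)   # prox-gradient step on the sparse part
    return x, u
```

On well-conditioned random instances this separates the dense subspace component from a few strong sparse activations, in the spirit of the geometric (minimal-angle) condition discussed above.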
In recent years, deep learning has achieved remarkable empirical success in image reconstruction. This has prompted an ongoing quest for a precise characterization of the correctness and reliability of data-driven methods in critical use cases, for instance in medical imaging. Despite the excellent performance and efficacy of deep learning-based methods, concerns have been raised regarding their stability, or lack thereof, with serious practical implications. Significant progress has been made in recent years to unveil the inner workings of data-driven image recovery methods, challenging their widely perceived black-box nature. In this article, we specify the relevant notions of convergence for data-driven image reconstruction, which form the basis for a survey of learned methods with mathematically rigorous reconstruction guarantees. A highlighted example is the role of input-convex neural networks (ICNNs), offering the possibility of combining the power of deep learning with classical convex regularization theory for devising methods that are provably convergent. This survey article aims to advance our understanding of data-driven image reconstruction methods, and that of practitioners, by providing an accessible description of the convergence concepts and by placing some existing empirical practices on a solid mathematical foundation.
In recent years, there have been significant advances in the use of deep learning methods in inverse problems such as denoising, compressive sensing, inpainting, and super-resolution. While this line of work has predominantly been driven by practical algorithms and experiments, it has also given rise to a variety of intriguing theoretical questions. In this paper, we survey some of the prominent theoretical developments in this line of work, focusing in particular on generative priors, untrained neural network priors, and unrolled algorithms. In addition to summarizing existing results in these topics, we highlight several ongoing challenges and open problems.
This paper considers the model problem of reconstructing an object from incomplete frequency samples. Consider a discrete-time signal f ∈ C^N and a randomly chosen set of frequencies Ω of mean size τN. Is it possible to reconstruct f from the partial knowledge of its Fourier coefficients on the set Ω? A typical result of this paper is as follows: for each M > 0, suppose that f obeys

#{t : f(t) ≠ 0} ≤ α(M) · (log N)^{-1} · |Ω|,

then with probability at least 1 − O(N^{−M}), f can be reconstructed exactly as the solution to the ℓ1 minimization problem

min_g Σ_{t=0}^{N−1} |g(t)|,  s.t.  ĝ(ω) = f̂(ω) for all ω ∈ Ω.

In short, exact recovery may be obtained by solving a convex optimization problem. We give numerical values for α which depend on the desired probability of success; except for the logarithmic factor, the condition on the size of the support is sharp. The methodology extends to a variety of other setups and higher dimensions. For example, we show how one can reconstruct a piecewise constant (one- or two-dimensional) object from incomplete frequency samples — provided that the number of jumps (discontinuities) obeys the condition above — by minimizing other convex functionals such as the total variation of f.
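A minimal numerical illustration of this result: recover a sparse signal from a random subset of its DFT coefficients. Instead of the equality-constrained ℓ1 program from the abstract, the sketch below solves the standard ℓ1-regularized least-squares relaxation with ISTA (the relaxation, step size, and problem sizes are illustrative, not from the paper).

```python
import numpy as np

def recover_from_partial_fourier(y, omega, N, lam=1e-3, n_iter=2000):
    """ISTA for min_g 0.5*||F_omega g - y||^2 + lam*||g||_1, a relaxation of
    the equality-constrained l1 problem; F_omega is the unitary DFT
    restricted to the sampled frequencies omega."""
    g = np.zeros(N, dtype=complex)
    for _ in range(n_iter):
        Fg = np.fft.fft(g, norm="ortho")[omega]
        # adjoint: zero-fill the residual, then apply the inverse DFT
        resid = np.zeros(N, dtype=complex)
        resid[omega] = Fg - y
        grad = np.fft.ifft(resid, norm="ortho")
        z = g - grad                     # unit step: the restricted DFT has norm <= 1
        # complex soft-thresholding (shrink the modulus, keep the phase)
        g = np.exp(1j * np.angle(z)) * np.maximum(np.abs(z) - lam, 0.0)
    return g
```

With a handful of spikes and a few dozen random frequencies, the recovery is exact up to a small shrinkage bias, matching the qualitative statement of the theorem.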
We propose a general learning-based framework for solving nonsmooth and nonconvex image reconstruction problems. We model the regularization function as the composition of the $l_{2,1}$ norm and a smooth but nonconvex feature mapping parametrized as a deep convolutional neural network. We develop a provably convergent descent-type algorithm to solve the nonsmooth nonconvex minimization problem by leveraging Nesterov's smoothing technique and the concept of residual learning, and learn the network parameters such that the outputs of the algorithm match the references in the training data. Our method is versatile, as one can employ various modern network structures in the regularization, and the resulting network inherits the guaranteed convergence of the algorithm. We also show that the proposed network is parameter-efficient and that its performance compares favorably to state-of-the-art methods on a variety of image reconstruction problems in practice.
In this paper, we propose a novel deep convolutional neural network (CNN)-based algorithm for solving ill-posed inverse problems. Regularized iterative algorithms have emerged as the standard approach to ill-posed inverse problems in the past few decades. These methods produce excellent results, but can be challenging to deploy in practice due to factors including the high computational cost of the forward and adjoint operators and the difficulty of hyperparameter selection. The starting point of our work is the observation that unrolled iterative methods have the form of a CNN (filtering followed by point-wise non-linearity) when the normal operator (H*H, the adjoint of H times H) of the forward model is a convolution. Based on this observation, we propose using direct inversion followed by a CNN to solve normal-convolutional inverse problems. The direct inversion encapsulates the physical model of the system, but leads to artifacts when the problem is ill-posed; the CNN combines multiresolution decomposition and residual learning in order to learn to remove these artifacts while preserving image structure. We demonstrate the performance of the proposed network in sparse-view reconstruction (down to 50 views) on parallel-beam X-ray computed tomography in synthetic phantoms as well as in real experimental sinograms. The proposed network outperforms total variation-regularized iterative reconstruction for the more realistic phantoms and requires less than a second to reconstruct a 512 × 512 image on the GPU. K.H. Jin acknowledges the support from the "EPFL Fellows" fellowship program co-funded by Marie Curie from the European Union's Horizon 2020 Framework Programme for Research and Innovation under grant agreement 665667.
There is a growing interest in deep model-based architectures (DMBAs) that combine physical measurement models with learned image priors specified using convolutional neural networks (CNNs). For example, well-known frameworks for systematically designing DMBAs include plug-and-play priors (PnP), deep unfolding (DU), and deep equilibrium models (DEQ). While the empirical performance and theoretical properties of DMBAs have been widely studied, existing work in the area has primarily focused on their performance when the desired image prior is known exactly. This work addresses the gap left by prior work by providing new theoretical and numerical insights into DMBAs under mismatched CNN priors. Mismatched priors arise naturally when there is a distribution shift between training and testing data, for example, when test images come from a different distribution than the images used for training the CNN prior. They also arise when the CNN prior used for inference is an approximation of some desired statistical estimator (MAP or MMSE). Our theoretical analysis provides explicit error bounds on the solution due to mismatched CNN priors under a set of clearly specified assumptions. Our numerical results compare the empirical performance of DMBAs under realistic distribution shifts and approximate statistical estimators.
The goal of compressed sensing is to estimate a vector from an underdetermined system of noisy linear measurements, by making use of prior knowledge on the structure of vectors in the relevant domain. For almost all results in this literature, the structure is represented by sparsity in a well-chosen basis. We show how to achieve guarantees similar to standard compressed sensing but without employing sparsity at all. Instead, we suppose that vectors lie near the range of a generative model G : R^k → R^n. Our main theorem is that, if G is L-Lipschitz, then roughly O(k log L) random Gaussian measurements suffice for an ℓ2/ℓ2 recovery guarantee. We demonstrate our results using generative models from published variational autoencoder and generative adversarial networks. Our method can use 5-10x fewer measurements than Lasso for the same accuracy.
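The recovery procedure behind this guarantee — gradient descent over the latent variable z to fit A G(z) to the measurements — can be sketched with a tiny fixed two-layer tanh "generator". This is purely illustrative: the paper uses trained VAE/GAN generators, and all sizes, scales, and step sizes below are made-up assumptions.

```python
import numpy as np

def csgm_recover(y, A, W1, W2, k, step=5e-3, n_iter=10000):
    """Minimize f(z) = 0.5*||A G(z) - y||^2 for G(z) = W2 @ tanh(W1 @ z)
    by plain gradient descent from z = 0."""
    z = np.zeros(k)
    for _ in range(n_iter):
        h = np.tanh(W1 @ z)
        r = A @ (W2 @ h) - y
        # chain rule: grad f = W1^T [ (1 - h^2) * (W2^T A^T r) ]
        z = z - step * (W1.T @ ((1.0 - h**2) * (W2.T @ (A.T @ r))))
    return z
```

Although the objective is nonconvex in z, with a mildly nonlinear generator and far fewer latent dimensions than measurements, descent from zero typically lands on the true latent code.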
Countless signal processing applications include the reconstruction of signals from few indirect linear measurements. The design of effective measurement operators is typically constrained by the underlying hardware and physics, posing a challenging and often even discrete optimization task. While the potential of gradient-based learning via the unrolling of iterative recovery algorithms has been demonstrated, it has remained unclear how to leverage this technique when the set of admissible measurement operators is structured and discrete. We tackle this problem by combining unrolled optimization with Gumbel reparametrizations, which enable the computation of low-variance gradient estimates of categorical random variables. Our approach is formalized by GLODISMO (Gradient-based Learning of DIscrete Structured Measurement Operators). This novel method is easy-to-implement, computationally efficient, and extendable due to its compatibility with automatic differentiation. We empirically demonstrate the performance and flexibility of GLODISMO in several prototypical signal recovery applications, verifying that the learned measurement matrices outperform conventional designs based on randomization as well as discrete optimization baselines.
The CSGM framework (Bora-Jalal-Price-Dimakis '17) has shown that deep generative priors can be powerful tools for solving inverse problems. However, to date, this framework has been empirically successful only on certain datasets (e.g., human faces and MNIST digits), and it is known to perform poorly on out-of-distribution samples. This paper presents the first successful application of the CSGM framework to clinical MRI data. We train a generative prior on brain scans from the fastMRI dataset and show that posterior sampling via Langevin dynamics achieves high-quality reconstructions. Furthermore, our experiments and theory show that posterior sampling is robust to changes in the ground-truth distribution and to the measurement process. Our code and models are available at: \url{https://github.com/utcsilab/csgm-mri-langevin}.
Neural networks have recently allowed solving many ill-posed inverse problems with unprecedented performance. Physics informed approaches already progressively replace carefully hand-crafted reconstruction algorithms in real applications. However, these networks suffer from a major defect: when trained on a given forward operator, they do not generalize well to a different one. The aim of this paper is twofold. First, we show through various applications that training the network with a family of forward operators allows solving the adaptivity problem without compromising the reconstruction quality significantly. Second, we illustrate that this training procedure allows tackling challenging blind inverse problems. Our experiments include partial Fourier sampling problems arising in magnetic resonance imaging (MRI), computerized tomography (CT) and image deblurring.
In this paper, we consider the recovery and reconstruction of piecewise-constant objects in two and three dimensions using PaLEnTIR, a significantly enhanced parametric level-set (PaLS) model relative to the current state of the art. The primary contribution of this paper is a new PaLS formulation that requires only a single level-set function to recover scenes containing piecewise-constant objects with multiple unknown contrasts. Our model offers distinct advantages over current multi-contrast, multi-object approaches, all of which require multiple level sets and explicit estimation of the contrast magnitudes. Given upper and lower bounds on the contrast, our approach is able to recover objects with any distribution of contrasts and eliminates the need to know the contrasts, or their values, in a given scene. We provide an iterative procedure for finding these spatially varying contrast bounds. Relative to most PaLS methods, which employ radial basis functions (RBFs), our model utilizes non-isotropic basis functions, thereby expanding the class of shapes that a PaLS model of a given complexity can approximate. Finally, PaLEnTIR improves the conditioning of the Jacobian matrix required as part of the parameter-identification process, and thus accelerates the optimization methods, by controlling the magnitude of the PaLS expansion coefficients, by fixing the centers of the basis functions, and through the uniqueness of the parameter-to-image map provided by the new parametrization. We demonstrate the performance of the new approach using both 2D and 3D variants of X-ray computed tomography, diffuse optical tomography (DOT), denoising, and deconvolution problems. The method is applied to experimental sparse CT data and to simulated data with different types of noise to further validate the proposed approach.
We introduce a new class of iterative image reconstruction algorithms for radio interferometry, at the interface of convex optimization and deep learning, inspired by plug-and-play methods. The approach consists in learning a prior image model by training a deep neural network (DNN) as a denoiser, and substituting it for the handcrafted proximal regularization operator of an optimization algorithm. The proposed AIRI ("AI for Regularization in radio-interferometric Imaging") framework, for imaging complex intensity structure with diffuse and faint emission from visibility data, inherits the robustness and interpretability of optimization, and the learning power and speed of networks. Our approach relies on three steps. First, we design a low-dynamic-range training database from optical intensity images. Second, we train a DNN denoiser at a noise level inferred from the signal-to-noise ratio of the data. We use a training loss enhanced with a term that ensures algorithm convergence, and perform on-the-fly dynamic-range enhancement of the database. Third, we plug the learned denoiser into a forward-backward optimization algorithm, resulting in a simple iterative structure that alternates a denoising step with a gradient-descent data-fidelity step. We have validated AIRI against CLEAN, optimization algorithms of the SARA family, and a DNN trained to reconstruct images directly from visibility data. Simulation results show that AIRI is competitive with SARA and its unconstrained forward-backward-based version uSARA, while providing a significant acceleration. CLEAN remains faster, but of lower quality. The end-to-end DNN offers further acceleration, but with far lower quality than AIRI.
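The third step — alternating a denoising step with a gradient step on the data fidelity — is the generic forward-backward plug-and-play structure. It can be sketched on a toy deblurring problem with a Gaussian filter standing in for the trained DNN denoiser; this is an illustrative stand-in only (AIRI itself operates on radio-interferometric visibility data with a learned, nonexpansive denoiser), and all parameters below are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, uniform_filter

def pnp_forward_backward(y, blur, step=1.0, n_iter=50, sigma_d=0.4):
    """Forward-backward PnP iteration: gradient step on 0.5*||blur(x) - y||^2,
    followed by a plug-in denoiser (a Gaussian filter stands in for the DNN)."""
    x = y.copy()
    for _ in range(n_iter):
        x = x - step * blur(blur(x) - y)   # symmetric kernel => blur is self-adjoint
        x = gaussian_filter(x, sigma_d)    # denoising step replaces the prox operator
    return x
```

Swapping `gaussian_filter` for a trained denoiser (with a step size respecting the Lipschitz constant of the data-fidelity gradient) yields the kind of iteration described above.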
There has been much recent interest in accelerating the data-acquisition process in MRI by acquiring limited measurements. Sophisticated reconstruction algorithms are often deployed to maintain high image quality in such settings. In this work, we propose a data-driven sampler, using a convolutional neural network, MNet, to provide object-specific sampling patterns adaptive to each scanned object. The network observes very limited low-frequency k-space data for each object and rapidly predicts the desired undersampling pattern that achieves high image reconstruction quality. We propose an accompanying alternating-type training framework, whose mask-backward procedure efficiently generates training labels for the sampler network and jointly trains an image reconstruction network. Experimental results on the fastMRI knee dataset demonstrate the ability of the proposed learned undersampling network to generate object-specific masks at fourfold and eightfold acceleration that achieve superior image reconstruction performance over several existing schemes. The source code of the proposed joint sampling and reconstruction learning framework is available at https://github.com/zhishenhuang/mri.
Beyond the minimization of the prediction error, two of the most desirable properties of a regression scheme are stability and interpretability. Driven by these principles, we propose continuous-domain formulations for one-dimensional regression problems. In our first approach, we use the Lipschitz constant as a regularizer, which results in an implicit tuning of the overall robustness of the learned mapping. In our second approach, we control the Lipschitz constant explicitly using a user-defined upper bound and make use of a sparsity-promoting regularizer to favor simpler (and thus more interpretable) solutions. The theoretical study of the latter formulation is motivated in part by its equivalence, which we prove, with the training of a Lipschitz-constrained two-layer univariate neural network with rectified linear unit (ReLU) activations and weight decay. By proving representer theorems, we show that both problems admit global minimizers that are continuous and piecewise-linear (CPWL) functions. Moreover, we propose efficient algorithms that find the sparsest solution of each problem: the CPWL mapping with the fewest linear regions. Finally, we illustrate the outcome of our formulations numerically.
This paper is about a curious phenomenon. Suppose we have a data matrix, which is the superposition of a low-rank component and a sparse component. Can we recover each component individually? We prove that under some suitable assumptions, it is possible to recover both the low-rank and the sparse components exactly by solving a very convenient convex program called Principal Component Pursuit; among all feasible decompositions, simply minimize a weighted combination of the nuclear norm and of the ℓ1 norm. This suggests the possibility of a principled approach to robust principal component analysis, since our methodology and results assert that one can recover the principal components of a data matrix even though a positive fraction of its entries are arbitrarily corrupted. This extends to the situation where a fraction of the entries are missing as well. We discuss an algorithm for solving this optimization problem, and present applications in the area of video surveillance, where our methodology allows for the detection of objects in a cluttered background, and in the area of face recognition, where it offers a principled way of removing shadows and specularities in images of faces.
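Principal Component Pursuit is typically solved with augmented-Lagrangian schemes; as an illustrative stand-in, the sketch below uses a simpler alternating heuristic — a truncated SVD for the low-rank part and entrywise soft-thresholding for the sparse part, with the target rank assumed known — which already separates the two components in easy regimes. This is not the paper's algorithm, and the sizes and threshold are illustrative.

```python
import numpy as np

def low_rank_plus_sparse(M, rank, lam=0.5, n_iter=30):
    """Alternate a rank-r SVD projection (low-rank part L) with
    entrywise soft-thresholding of the residual (sparse part S)."""
    S = np.zeros_like(M)
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(M - S, full_matrices=False)
        L = (U[:, :rank] * s[:rank]) @ Vt[:rank]      # best rank-r fit to M - S
        resid = M - L
        S = np.sign(resid) * np.maximum(np.abs(resid) - lam, 0.0)
    return L, S
```

On a random rank-2 matrix corrupted by a few large-magnitude entries, the scheme recovers the low-rank component and the support of the corruptions, mirroring the exact-recovery phenomenon described above.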
We formulate a physics-informed compressed sensing (PICS) method for the reconstruction of velocity fields from noisy and sparse phase-contrast magnetic resonance signals. The method solves an inverse Navier-Stokes boundary-value problem, which permits us to jointly reconstruct and segment the velocity field while inferring hidden quantities such as the hydrodynamic pressure and the wall shear stress. Using a Bayesian framework, we regularize the problem by introducing a priori information about the unknown parameters in the form of Gaussian random fields. This prior information is updated using the Navier-Stokes problem and an energy-based segmentation functional, and by requiring that the reconstruction is consistent with the $k$-space signals. We create an algorithm that solves this reconstruction problem and test it on noisy and sparse $k$-space signals of flow through a converging nozzle. We find that the method is capable of reconstructing and segmenting the velocity field from sparsely sampled (15% $k$-space coverage), low ($\sim 10$) signal-to-noise ratio (SNR) signals, and that the reconstructed velocity field compares well with that obtained from fully sampled (100% $k$-space coverage), high ($>40$) SNR signals of the same flow.