Novel magnetic resonance (MR) imaging modalities can quantify hemodynamics, but they require long acquisition times, hindering their widespread use for the early diagnosis of cardiovascular disease. To reduce acquisition time, reconstruction methods from undersampled measurements are routinely used, which exploit representations designed to increase image compressibility. The reconstructed anatomical and hemodynamic images may present visual artifacts. Although some of these artifacts are essentially reconstruction errors, and thus a consequence of undersampling, others may be due to measurement noise or to the random choice of the sampling frequencies. Stated differently, the reconstructed image becomes a random variable, and both its bias and its covariance can lead to visual artifacts; the latter induces spatial correlations that may be misinterpreted as visual information. While the nature of the former has been studied in the literature, the latter has received little attention. In this study, we investigate the theoretical properties of the random perturbations arising from the reconstruction process, and we perform a number of numerical experiments on simulated data and on data from an aortic aneurysm. Our results show that, when a Gaussian undersampling pattern is combined with recovery algorithms based on $\ell_1$-norm minimization, the correlation length remains limited to two to three pixels. However, for other undersampling patterns, the correlation length can increase significantly with higher undersampling factors (i.e., 8x or 16x compression) and different reconstruction methods.
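As an illustration of how such covariance-induced correlations might be probed, here is a hypothetical Monte Carlo sketch (not the paper's method; a density-compensated zero-filled linear reconstruction stands in for the $\ell_1$ solver): repeat the noisy, randomly undersampled acquisition and estimate the spatial autocorrelation of the reconstruction perturbations.

```python
import numpy as np

rng = np.random.default_rng(0)
N, m, sigma, n_trials = 256, 64, 0.05, 200   # grid size, samples, noise, repetitions

x = np.zeros(N)
x[96:160] = 1.0                              # simple piecewise-constant "anatomy"

recons = []
for _ in range(n_trials):
    omega = rng.choice(N, size=m, replace=False)          # random sampling pattern
    noise = sigma * (rng.standard_normal(m) + 1j * rng.standard_normal(m))
    y = np.fft.fft(x)[omega] + noise                      # noisy undersampled k-space
    full = np.zeros(N, dtype=complex)
    full[omega] = y
    recons.append(np.fft.ifft(full).real * N / m)         # density-compensated zero-fill

pert = np.array(recons) - np.mean(recons, axis=0)         # random perturbation fields
acf = np.mean([np.correlate(p, p, mode="full") for p in pert], axis=0)
acf = acf[N - 1:] / acf[N - 1]                            # one-sided, normalized ACF
corr_len = int(np.argmax(acf < 1 / np.e))                 # crude 1/e correlation length
print("correlation length (pixels):", corr_len)
```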
We formulate a physics-informed compressed sensing (PICS) method for the reconstruction of velocity fields from noisy and sparse phase-contrast magnetic resonance signals. The method solves an inverse Navier-Stokes boundary value problem, which allows us to jointly reconstruct and segment the velocity field while inferring hidden quantities such as the hydrodynamic pressure and the wall shear stress. Using a Bayesian framework, we regularize the problem by introducing prior information about the unknown parameters in the form of Gaussian random fields. We use the Navier-Stokes problem and an energy-based segmentation functional, and we require that the reconstruction be consistent with the $k$-space signal. We create an algorithm that solves this reconstruction problem and test it on noisy and sparse $k$-space signals of flow through a converging nozzle. We find that the method is capable of reconstructing and segmenting the velocity field from sparsely sampled (15% $k$-space coverage), low ($\sim 10$) signal-to-noise ratio (SNR) signals, and that the reconstructed velocity field compares well with that derived from fully sampled (100% $k$-space coverage), high ($>40$) SNR signals of the same flow.
This paper considers the model problem of reconstructing an object from incomplete frequency samples. Consider a discrete-time signal $f \in \mathbb{C}^N$ and a randomly chosen set of frequencies $\Omega$ of mean size $\tau N$. Is it possible to reconstruct $f$ from the partial knowledge of its Fourier coefficients on the set $\Omega$? A typical result of this paper is as follows: for each $M > 0$, suppose that $f$ obeys $\#\{t,\, f(t) \neq 0\} \le \alpha(M) \cdot (\log N)^{-1} \cdot |\Omega|$; then with probability at least $1 - O(N^{-M})$, $f$ can be reconstructed exactly as the solution to the $\ell_1$ minimization problem $\min_g \sum_{t=0}^{N-1} |g(t)|$, s.t. $\hat{g}(\omega) = \hat{f}(\omega)$ for all $\omega \in \Omega$. In short, exact recovery may be obtained by solving a convex optimization problem. We give numerical values for $\alpha$ which depend on the desired probability of success; except for the logarithmic factor, the condition on the size of the support is sharp. The methodology extends to a variety of other setups and higher dimensions. For example, we show how one can reconstruct a piecewise constant (one- or two-dimensional) object from incomplete frequency samples, provided that the number of jumps (discontinuities) obeys the condition above, by minimizing other convex functionals such as the total variation of $f$.
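A minimal NumPy sketch of this recovery program, solving a penalized ($\ell_1$-regularized least-squares) surrogate of the equality-constrained problem via iterative soft-thresholding; the signal length, sparsity, and sample count are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
N, k, m = 256, 8, 80                       # signal length, sparsity, frequency samples

x_true = np.zeros(N)
x_true[rng.choice(N, size=k, replace=False)] = rng.standard_normal(k)

omega = rng.choice(N, size=m, replace=False)       # observed frequency set Omega
y = np.fft.fft(x_true)[omega]                      # partial Fourier coefficients

def A(x):                                          # forward operator: partial DFT
    return np.fft.fft(x)[omega]

def At(z):                                         # adjoint: zero-fill + scaled inverse DFT
    full = np.zeros(N, dtype=complex)
    full[omega] = z
    return np.fft.ifft(full) * N

# ISTA on  min_x 0.5*||A x - y||^2 + lam*||x||_1  (a surrogate of the constrained program)
lam, step = 0.01, 1.0 / N                          # ||A||^2 = N since A A^H = N I
x = np.zeros(N)
for _ in range(1000):
    u = x - step * At(A(x) - y).real               # gradient step (x is real-valued)
    x = np.sign(u) * np.maximum(np.abs(u) - step * lam, 0.0)

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```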
To solve inverse problems, plug-and-play (PnP) methods have been developed that replace the proximal step in a convex optimization algorithm with a call to an application-specific denoiser, often implemented using a deep neural network (DNN). Although such methods have been successful, they can be improved. For example, denoisers are usually designed/trained to remove white Gaussian noise, but the denoiser input error in PnP algorithms is usually far from white or Gaussian. Approximate message passing (AMP) methods provide white and Gaussian denoiser input error, but only when the forward operator is a large random matrix. In this work, for Fourier-based forward operators, we propose a PnP algorithm based on a generalized expectation consistent (GEC) approximation (a close cousin of AMP) that offers predictable error statistics at each iteration, as well as a new DNN denoiser that leverages those statistics. We apply our approach to magnetic resonance imaging (MRI) image recovery and demonstrate its advantages over existing PnP and AMP methods.
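For concreteness, a generic plug-and-play proximal-gradient loop of the kind described above (a sketch: a Gaussian filter stands in for the application-specific DNN denoiser, and a pixel-masking forward operator for the Fourier one; the paper's GEC-based algorithm is not reproduced here):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def pnp_pgd(y, mask, denoise, step=1.0, n_iter=100):
    """Plug-and-play proximal gradient: the prox of the regularizer is
    replaced by a call to a black-box denoiser."""
    x = y.copy()
    for _ in range(n_iter):
        x = x - step * mask * (x - y)      # gradient step on 0.5*||mask*(x - y)||^2
        x = denoise(x)                     # denoiser stands in for the proximal step
    return x

rng = np.random.default_rng(0)
img = np.kron(rng.random((8, 8)), np.ones((16, 16)))     # piecewise-constant test image
mask = (rng.random(img.shape) < 0.4).astype(float)       # keep 40% of the pixels
y = mask * (img + 0.05 * rng.standard_normal(img.shape))

x_hat = pnp_pgd(y, mask, denoise=lambda z: gaussian_filter(z, sigma=1.0))
print("relative error:", np.linalg.norm(x_hat - img) / np.linalg.norm(img))
```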
The CSGM framework (Bora-Jalal-Price-Dimakis '17) showed that deep generative priors can be powerful tools for solving inverse problems. However, to date this framework has been empirically successful only on certain datasets (e.g., human faces and MNIST digits), and it is known to perform poorly on out-of-distribution samples. This paper presents the first successful application of the CSGM framework to clinical MRI data. We train a generative prior on brain scans from the fastMRI dataset, and show that posterior sampling via Langevin dynamics achieves high-quality reconstructions. Furthermore, our experiments and theory show that posterior sampling is robust to changes in the ground-truth distribution and the measurement process. Our code and models are available at: https://github.com/utcsilab/csgm-mri-langevin.
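A hypothetical sketch of posterior sampling via annealed Langevin dynamics for a linear measurement model $y = Ax + w$; in the paper the score function is a network trained on fastMRI brain scans, whereas here a closed-form Gaussian-prior score and identity measurements are used so the snippet is self-contained:

```python
import numpy as np

def langevin_posterior_sample(y, A, At, score, sigmas, n_steps=50,
                              eps=1e-5, noise_sigma=0.5, rng=None):
    """Annealed Langevin dynamics targeting p(x | y) for y = A x + w:
    each update combines the (learned) prior score with the likelihood gradient."""
    rng = rng or np.random.default_rng(0)
    x = rng.standard_normal(np.shape(At(y)))
    for sigma in sigmas:                               # anneal the score's noise level
        step = eps * (sigma / sigmas[-1]) ** 2
        for _ in range(n_steps):
            grad = score(x, sigma) + At(y - A(x)) / noise_sigma**2
            x = x + step * grad + np.sqrt(2 * step) * rng.standard_normal(x.shape)
    return x

# self-contained toy: identity measurements and a standard-Gaussian prior,
# whose noise-perturbed score is available in closed form
A = At = lambda v: v
x_post = langevin_posterior_sample(np.array([1.0, -2.0]), A, At,
                                   score=lambda x, s: -x / (1 + s**2),
                                   sigmas=np.geomspace(1.0, 0.01, 10))
print(x_post)
```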
Magnetic resonance fingerprinting (MRF) is a novel technique that simultaneously estimates multiple tissue-related parameters, such as the longitudinal relaxation time T1, the transverse relaxation time T2, the off-resonance frequency B0, and the proton density, from a scanned object in just tens of seconds. However, the MRF method suffers from aliasing artifacts because it significantly undersamples the k-space data. In this work, we propose a compressed sensing (CS) framework, based on the MRF method, for simultaneously estimating multiple tissue-related parameters. It is more robust to low sampling ratios and is therefore more efficient in estimating the MR parameters for all voxels of an object. Furthermore, the MRF method requires identifying the nearest query fingerprint atom in the MR-signal-evolution dictionary using the $\ell_2$ distance. However, we observe that the $\ell_2$ distance is not always a suitable metric for measuring the similarity between MR fingerprints. Adaptively learning a distance metric from the undersampled training data can significantly improve the matching accuracy of the query fingerprints. Numerical results on extensive simulated cases show that our method substantially outperforms state-of-the-art methods in terms of the accuracy of parameter estimation.
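A sketch of the standard MRF dictionary-matching step that the abstract argues should use a learned metric instead; random unit-norm atoms stand in for Bloch-simulated signal evolutions, and the (T1, T2) grid is hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
L, D = 500, 4000                   # fingerprint length, dictionary size

# stand-in dictionary of unit-norm signal evolutions, one atom per (T1, T2) pair
dictionary = rng.standard_normal((D, L)) + 1j * rng.standard_normal((D, L))
dictionary /= np.linalg.norm(dictionary, axis=1, keepdims=True)
params = rng.uniform([200.0, 20.0], [2000.0, 300.0], size=(D, 2))   # (T1, T2) in ms

true_idx = 123                     # noisy voxel fingerprint from a known atom
voxel = dictionary[true_idx] + 0.05 * rng.standard_normal(L)

# conventional MRF matching: pick the atom maximizing |<atom, voxel>|
# (equivalent to the l2 nearest atom for unit-norm atoms)
scores = np.abs(dictionary.conj() @ voxel)
best = int(np.argmax(scores))
print("estimated (T1, T2):", params[best], "  true:", params[true_idx])
```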
Low-rank matrix approximations, such as the truncated singular value decomposition and the rank-revealing QR decomposition, play a central role in data analysis and scientific computing. This work surveys and extends recent research which demonstrates that randomization offers a powerful tool for performing low-rank matrix approximation. These techniques exploit modern computational architectures more fully than classical methods and open the possibility of dealing with truly massive data sets. This paper presents a modular framework for constructing randomized algorithms that compute partial matrix decompositions. These methods use random sampling to identify a subspace that captures most of the action of a matrix. The input matrix is then compressed, either explicitly or implicitly, to this subspace, and the reduced matrix is manipulated deterministically to obtain the desired low-rank factorization. In many cases, this approach beats its classical competitors in terms of accuracy, speed, and robustness. These claims are supported by extensive numerical experiments and a detailed error analysis. The specific benefits of randomized techniques depend on the computational environment. Consider the model problem of finding the k dominant components of the singular value decomposition of an m × n matrix. (i) For a dense input matrix, randomized algorithms require O(mn log(k)) floating-point operations (flops) in contrast with O(mnk) for classical algorithms. (ii) For a sparse input matrix, the flop count matches classical Krylov subspace methods, but the randomized approach is more robust and can easily be reorganized to exploit multi-processor architectures. (iii) For a matrix that is too large to fit in fast memory, the randomized techniques require only a constant number of passes over the data, as opposed to O(k) passes for classical algorithms. In fact, it is sometimes possible to perform matrix approximation with a single pass over the data.
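A compact NumPy rendering of the two-stage proto-algorithm the abstract describes (random range finder, then deterministic post-processing); the oversampling and power-iteration parameters are typical choices, not prescriptions from the text:

```python
import numpy as np

def randomized_svd(A, k, p=10, n_power=2, rng=None):
    """Randomized partial SVD: sample the range of A with a Gaussian test
    matrix, orthonormalize, then post-process the small compressed matrix.
    p is the oversampling; power iterations help when the spectrum decays slowly."""
    rng = rng or np.random.default_rng(0)
    m, n = A.shape
    Y = A @ rng.standard_normal((n, k + p))       # sample the range of A
    for _ in range(n_power):                      # (no re-orthonormalization, for brevity)
        Y = A @ (A.T @ Y)
    Q, _ = np.linalg.qr(Y)                        # orthonormal basis for the sampled range
    B = Q.T @ A                                   # compress A to the subspace
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ Ub)[:, :k], s[:k], Vt[:k]

rng = np.random.default_rng(1)
A = rng.standard_normal((2000, 50)) @ rng.standard_normal((50, 800))  # rank-50 input
U, s, Vt = randomized_svd(A, k=50)
print("relative error:", np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A))
```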
Image reconstruction using deep learning algorithms offers improved reconstruction quality and lower reconstruction time than classical compressed sensing and model-based algorithms. Unfortunately, clean and fully sampled ground-truth data to train the deep networks is often unavailable in several applications, restricting the applicability of the above methods. We introduce a novel metric termed the ENsemble Stein's Unbiased Risk Estimate (ENSURE) framework, which can be used to train deep image reconstruction algorithms without fully sampled and noise-free images. The proposed framework is a generalization of the classical SURE and GSURE formulations to the setting where the images are sampled by different measurement operators, chosen randomly from a set. We evaluate the expectation of the GSURE loss functions over the sampling patterns to obtain the ENSURE loss function. We show that this loss is an unbiased estimate of the true mean-square error, making it a better alternative to GSURE, which is unbiased only for the projected error. Our experiments show that networks trained with this loss function can offer reconstructions comparable to the supervised setting. While we demonstrate this framework in the context of MR image recovery, the ENSURE framework is generally applicable to arbitrary inverse problems.
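For orientation, a sketch of the classical Monte-Carlo SURE estimate that ENSURE generalizes: it estimates a denoiser's true MSE from a single noisy image, with the divergence term approximated by a random probe (Ramani-style); the Gaussian-filter denoiser is a stand-in for a trained network:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def mc_sure(denoise, y, sigma, eps=1e-3, rng=None):
    """Monte-Carlo SURE: unbiased estimate of the per-pixel MSE of a black-box
    denoiser under i.i.d. Gaussian noise, computed without the clean image.
    The divergence is estimated with a single random probe."""
    rng = rng or np.random.default_rng(0)
    fy = denoise(y)
    b = rng.standard_normal(y.shape)
    div = np.sum(b * (denoise(y + eps * b) - fy)) / eps
    return (np.sum((fy - y) ** 2) - y.size * sigma**2 + 2 * sigma**2 * div) / y.size

rng = np.random.default_rng(0)
clean = np.kron(rng.random((8, 8)), np.ones((8, 8)))
sigma = 0.1
noisy = clean + sigma * rng.standard_normal(clean.shape)
den = lambda z: gaussian_filter(z, sigma=1.0)
print("SURE:", mc_sure(den, noisy, sigma),
      " true MSE:", np.mean((den(noisy) - clean) ** 2))
```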
The affine rank minimization problem consists of finding a matrix of minimum rank that satisfies a given system of linear equality constraints. Such problems have appeared in the literature of a diverse set of fields including system identification and control, Euclidean embedding, and collaborative filtering. Although specific instances can often be solved with specialized algorithms, the general affine rank minimization problem is NP-hard, because it contains vector cardinality minimization as a special case. In this paper, we show that if a certain restricted isometry property holds for the linear transformation defining the constraints, the minimum rank solution can be recovered by solving a convex optimization problem, namely the minimization of the nuclear norm over the given affine space. We present several random ensembles of equations where the restricted isometry property holds with overwhelming probability, provided the codimension of the subspace is Ω(r(m + n) log mn), where m, n are the dimensions of the matrix, and r is its rank. The techniques used in our analysis have strong parallels in the compressed sensing framework. We discuss how affine rank minimization generalizes this pre-existing concept and outline a dictionary relating concepts from cardinality minimization to those of rank minimization. We also discuss several algorithmic approaches to solving the norm minimization relaxations, and illustrate our results with numerical examples.
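A small sketch of the nuclear-norm relaxation in the special case of matrix completion, solved by proximal gradient descent with singular-value soft-thresholding (the prox of the nuclear norm); the regularization weight and iteration count are illustrative:

```python
import numpy as np

def sv_shrink(X, tau):                       # prox of the nuclear norm
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def nuclear_norm_complete(Y, mask, lam=0.5, n_iter=300):
    """Proximal gradient on  min_X 0.5*||mask*(X - Y)||_F^2 + lam*||X||_*;
    the data-term gradient has Lipschitz constant 1, so a unit step is safe."""
    X = np.zeros_like(Y)
    for _ in range(n_iter):
        X = sv_shrink(X - mask * (X - Y), lam)
    return X

rng = np.random.default_rng(0)
M = rng.standard_normal((100, 3)) @ rng.standard_normal((3, 80))    # rank-3 target
mask = (rng.random(M.shape) < 0.5).astype(float)                    # observe half
X_hat = nuclear_norm_complete(mask * M, mask)
print("relative error:", np.linalg.norm(X_hat - M) / np.linalg.norm(M))
```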
Total variation (TV) regularization has promoted a variety of variational models for image processing tasks. We propose combining the backward diffusion process from the earlier literature on image enhancement with TV regularization, and we show that the resulting enhanced TV minimization model is particularly effective in reducing the loss of contrast that is often encountered by models using TV regularization. We establish stable reconstruction guarantees for the enhanced TV model from noisy subsampled measurements; both non-adaptive linear measurements and variable-density sampled Fourier measurements are considered. In particular, under some weaker restricted isometry property conditions, the enhanced TV minimization model is shown to have tighter reconstruction error bounds than various TV-based models for the case in which the noise level is significant and the amount of measurements is limited. The advantages of the enhanced TV model are also numerically validated by preliminary experiments on the reconstruction of some synthetic, natural, and medical images.
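For reference, a gradient-descent sketch of plain (smoothed, isotropic) TV denoising, the baseline the enhanced TV model improves on; the backward-diffusion term of the enhanced model is not implemented here, and the periodic boundary handling plus step/smoothing parameters are simplifying assumptions:

```python
import numpy as np

def tv_denoise(y, lam=0.15, beta=1e-2, step=0.1, n_iter=500):
    """Gradient descent on the smoothed isotropic TV objective
    0.5*||x - y||^2 + lam * sum sqrt(|grad x|^2 + beta^2),
    with periodic boundary handling for brevity."""
    x = y.copy()
    for _ in range(n_iter):
        dx = np.roll(x, -1, axis=1) - x                 # forward differences
        dy = np.roll(x, -1, axis=0) - x
        mag = np.sqrt(dx**2 + dy**2 + beta**2)
        px, py = dx / mag, dy / mag
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        x = x - step * ((x - y) - lam * div)            # gradient of the full objective
    return x

rng = np.random.default_rng(0)
img = np.kron(rng.integers(0, 2, (6, 6)).astype(float), np.ones((20, 20)))
noisy = img + 0.1 * rng.standard_normal(img.shape)
den = tv_denoise(noisy)
print("relative error:", np.linalg.norm(den - img) / np.linalg.norm(img))
```

A production solver would instead use Chambolle's projection algorithm or a primal-dual scheme; the smoothed gradient flow above is just the simplest baseline.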
We consider the Bayesian calibration of models of self-assembly using image data produced by microscopy or X-ray scattering techniques. To account for the random long-range disorder in BCP (block copolymer) equilibrium structures, we introduce auxiliary variables to represent this uncertainty. These variables, however, result in an integrated likelihood for the high-dimensional image data that is generally intractable to evaluate. We address this challenging Bayesian inference problem using a likelihood-free approach based on measure transport, together with summary statistics of the image data. We also show that the expected information gain (EIG) of the data about the model parameters can be computed at no significant additional cost. Finally, we present a numerical case study based on the Ohta-Kawasaki model for diblock copolymer thin film self-assembly and top-down microscopy characterization. For the calibration, we introduce several domain-specific energy- and Fourier-based summary statistics and quantify their informativeness using the EIG. We demonstrate the power of the proposed approach to study the effect of data corruptions and experimental designs on the calibration results.
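A generic nested Monte-Carlo EIG estimator of the kind alluded to above (a sketch; the paper computes EIGs as a by-product of its transport-based inference, which is not reproduced here), checked on a linear-Gaussian toy model with a closed-form answer:

```python
import numpy as np

def nested_mc_eig(sample_prior, simulate, log_lik, n_outer=200, n_inner=200, rng=None):
    """Nested Monte-Carlo estimate of the expected information gain
    EIG = E_{theta, y}[ log p(y | theta) - log p(y) ],
    with the evidence p(y) approximated by an inner prior average."""
    rng = rng or np.random.default_rng(0)
    total = 0.0
    for _ in range(n_outer):
        th = sample_prior(rng)
        y = simulate(th, rng)
        inner = np.array([log_lik(y, sample_prior(rng)) for _ in range(n_inner)])
        log_evidence = np.logaddexp.reduce(inner) - np.log(n_inner)
        total += log_lik(y, th) - log_evidence
    return total / n_outer

# linear-Gaussian check: theta ~ N(0,1), y = theta + N(0, s^2); EIG = 0.5*log(1 + 1/s^2)
s = 0.5
eig = nested_mc_eig(
    sample_prior=lambda rng: rng.standard_normal(),
    simulate=lambda th, rng: th + s * rng.standard_normal(),
    log_lik=lambda y, th: -0.5 * ((y - th) / s) ** 2 - np.log(s * np.sqrt(2 * np.pi)),
)
print("NMC EIG:", eig, "  exact:", 0.5 * np.log(1 + 1 / s**2))
```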
This paper is about a curious phenomenon. Suppose we have a data matrix, which is the superposition of a low-rank component and a sparse component. Can we recover each component individually? We prove that under some suitable assumptions, it is possible to recover both the low-rank and the sparse components exactly by solving a very convenient convex program called Principal Component Pursuit; among all feasible decompositions, simply minimize a weighted combination of the nuclear norm and of the $\ell_1$ norm. This suggests the possibility of a principled approach to robust principal component analysis since our methodology and results assert that one can recover the principal components of a data matrix even though a positive fraction of its entries are arbitrarily corrupted. This extends to the situation where a fraction of the entries are missing as well. We discuss an algorithm for solving this optimization problem, and present applications in the area of video surveillance, where our methodology allows for the detection of objects in a cluttered background, and in the area of face recognition, where it offers a principled way of removing shadows and specularities in images of faces.
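A sketch of the augmented-Lagrangian iteration commonly used for Principal Component Pursuit, alternating singular-value thresholding for the low-rank part with entrywise soft-thresholding for the sparse part; the weight $\lambda = 1/\sqrt{\max(m,n)}$ follows the analysis, while $\mu$ and the iteration count are common heuristics:

```python
import numpy as np

def shrink(X, tau):                          # entrywise soft-thresholding (prox of l1)
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def sv_shrink(X, tau):                       # singular-value thresholding (prox of ||.||_*)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def pcp(M, n_iter=200):
    """Augmented-Lagrangian iteration for min ||L||_* + lam*||S||_1 s.t. L + S = M."""
    m, n = M.shape
    lam = 1.0 / np.sqrt(max(m, n))           # weight suggested by the analysis
    mu = m * n / (4.0 * np.abs(M).sum())     # common penalty heuristic
    L = np.zeros_like(M); S = np.zeros_like(M); Y = np.zeros_like(M)
    for _ in range(n_iter):
        L = sv_shrink(M - S + Y / mu, 1.0 / mu)
        S = shrink(M - L + Y / mu, lam / mu)
        Y = Y + mu * (M - L - S)
    return L, S

rng = np.random.default_rng(0)
L0 = rng.standard_normal((80, 5)) @ rng.standard_normal((5, 80))     # low-rank part
S0 = np.zeros((80, 80))
spots = rng.random((80, 80)) < 0.05
S0[spots] = 10.0 * rng.standard_normal(spots.sum())                  # sparse corruptions
L_hat, _ = pcp(L0 + S0)
print("relative error:", np.linalg.norm(L_hat - L0) / np.linalg.norm(L0))
```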
Recent years have witnessed significant progress in the use of deep learning methods for inverse problems such as denoising, compressive sensing, inpainting, and super-resolution. While this line of work has predominantly been driven by practical algorithms and experiments, it has also given rise to a variety of intriguing theoretical questions. In this paper, we survey some of the prominent theoretical developments in this line of work, focusing in particular on generative priors, untrained neural network priors, and unrolled algorithms. In addition to summarizing existing results in these topics, we highlight several ongoing challenges and open problems.
As interest in deep neural networks (DNNs) for image reconstruction tasks grows, their reliability has been called into question (Antun et al., 2020; Gottschling et al., 2020). However, recent work has shown that, compared to total variation (TV) minimization, they show similar robustness to adversarial noise in terms of the $\ell^2$-reconstruction error (Genzel et al., 2022). We consider a different notion of robustness, using the $\ell^\infty$-norm, and argue that localized reconstruction artifacts are more relevant than the $\ell^2$-error. We create adversarial perturbations to undersampled MRI measurements that induce severe localized artifacts in the TV-regularized reconstruction. The same attack method is not as effective against DNN-based reconstruction. Finally, we show that this phenomenon is inherent to reconstruction methods for which exact recovery can be guaranteed, as is the case for compressed sensing reconstructions with $\ell^1$- or TV-minimization.
In this paper, we consider the restoration and reconstruction of piecewise-constant objects in two and three dimensions using PaLEnTIR, a significantly enhanced parametric level set (PaLS) model relative to the current state of the art. The primary contribution of this paper is a new PaLS formulation that requires only a single level-set function to recover scenes containing piecewise-constant objects with multiple unknown contrasts. Our model offers distinct advantages over current multi-contrast, multi-object approaches, all of which require multiple level sets and explicit estimation of the contrast magnitudes. Given upper and lower bounds on the contrast, our approach is able to recover objects with any contrast distribution and eliminates the need to know either the number of contrasts in a given scene or their values. We provide an iterative process for finding these space-varying contrast limits. Relative to most PaLS methods, which employ radial basis functions (RBFs), our model makes use of anisotropic basis functions, thereby expanding the class of shapes that a PaLS model of a given complexity can approximate. Finally, PaLEnTIR improves the conditioning of the Jacobian matrix required as part of the parameter identification process, and thus accelerates the optimization, by controlling the magnitude of the PaLS expansion coefficients, by fixing the centers of the basis functions, and through the uniqueness of the parameterization-to-image map provided by the new formulation. We demonstrate the performance of the new approach using 2D and 3D variants of X-ray computed tomography, diffuse optical tomography (DOT), denoising, and deconvolution problems. It is also applied to experimental sparse CT data and to simulated data with different types of noise to further validate the proposed method.
The Lasso is a method of high-dimensional regression that is commonly used when the number of covariates $p$ is of the same order as, or larger than, the number of observations $n$. Classical asymptotic normality theory does not apply to this model for two fundamental reasons: $(1)$ the regularized risk is non-smooth; $(2)$ the distance between the estimator $\widehat{\boldsymbol{\theta}}$ and the true parameter vector $\boldsymbol{\theta}^*$ cannot be neglected. As a consequence, the standard perturbative argument that is the traditional basis for asymptotic normality fails. On the other hand, the Lasso estimator can be precisely characterized in the regime in which both $n$ and $p$ are large and $n/p$ is of order one. This characterization was first obtained in the case of Gaussian designs with i.i.d. covariates; here, we generalize it to Gaussian correlated designs with non-singular covariance structure. This characterization is expressed in terms of a simpler "fixed design" model. We establish non-asymptotic bounds on the distance between the distributions of various quantities in the two models, which hold uniformly over signals $\boldsymbol{\theta}^*$ in a suitable sparsity class. As an application, we study the distribution of the debiased Lasso and show that a degrees-of-freedom correction is necessary for computing valid confidence intervals.
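An illustrative simulation (an assumption-laden sketch, not the paper's analysis) of the degrees-of-freedom correction for the debiased Lasso in the simplest setting of an i.i.d. Gaussian design with identity covariance, so the precision matrix drops out of the debiasing step:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, s_nz, sigma = 600, 1000, 10, 1.0
X = rng.standard_normal((n, p))              # i.i.d. Gaussian design, Sigma = I
theta = np.zeros(p); theta[:s_nz] = 2.0
y = X @ theta + sigma * rng.standard_normal(n)

# ISTA for the Lasso:  min 0.5/n*||y - X th||^2 + lam*||th||_1
lam = sigma * np.sqrt(2 * np.log(p) / n)
L = np.linalg.norm(X, 2) ** 2 / n            # Lipschitz constant of the gradient
th = np.zeros(p)
for _ in range(500):
    u = th - (X.T @ (X @ th - y) / n) / L
    th = np.sign(u) * np.maximum(np.abs(u) - lam / L, 0.0)

# debiasing with the degrees-of-freedom correction (df = size of the Lasso support);
# without the n - df denominator the nominal intervals undercover
df = np.count_nonzero(th)
resid = y - X @ th
th_d = th + X.T @ resid / (n - df)
tau = np.sqrt(n) * np.linalg.norm(resid) / (n - df)   # effective noise level
half = 1.96 * tau / np.sqrt(n)                        # nominal 95% half-width
coverage = np.mean((theta >= th_d - half) & (theta <= th_d + half))
print("empirical coverage:", coverage)
```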
Discriminative features extracted from the sparse coding model have been shown to perform well for classification. Recent deep learning architectures have further improved reconstruction in inverse problems by considering new dense priors learned from data. We propose a novel dense and sparse coding model that integrates both representation capability and discriminative features. The model studies the problem of recovering a dense vector $\mathbf{x}$ and a sparse vector $\mathbf{u}$ given measurements of the form $\mathbf{y} = \mathbf{A}\mathbf{x}+\mathbf{B}\mathbf{u}$. Our first analysis proposes a geometric condition, based on the minimal angle between the spanning subspaces corresponding to the matrices $\mathbf{A}$ and $\mathbf{B}$, that guarantees a unique solution to the model. The second analysis shows that, under mild assumptions, a convex program recovers the dense and sparse components. We validate the effectiveness of the model on simulated data and propose a dense and sparse autoencoder (DenSaE) tailored to learning the dictionaries from the dense and sparse model. We demonstrate that (i) DenSaE denoises natural images better than architectures derived from the sparse coding model ($\mathbf{B}\mathbf{u}$), (ii) in the presence of noise, training the biases in the latter amounts to implicitly learning the $\mathbf{A}\mathbf{x} + \mathbf{B}\mathbf{u}$ model, (iii) $\mathbf{A}$ and $\mathbf{B}$ capture low- and high-frequency contents, respectively, and (iv) compared to the sparse coding model, DenSaE offers a balance between discriminative power and representation.
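A hypothetical recovery sketch for the $\mathbf{y} = \mathbf{A}\mathbf{x} + \mathbf{B}\mathbf{u}$ model: project out the span of $\mathbf{A}$, solve a lasso for the sparse part $\mathbf{u}$, then recover the dense part $\mathbf{x}$ by least squares; this is one natural convex program, not necessarily the one analyzed in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, p, k = 200, 10, 400, 5
A = rng.standard_normal((n, d))                        # basis for the dense component
B = rng.standard_normal((n, p)) / np.sqrt(n)           # dictionary for the sparse part
x_true = rng.standard_normal(d)
u_true = np.zeros(p)
u_true[rng.choice(p, size=k, replace=False)] = 3.0 * rng.standard_normal(k)
y = A @ x_true + B @ u_true

# eliminate x:  min_u 0.5*||P(y - B u)||^2 + lam*||u||_1  with P = I - A A^+
P = np.eye(n) - A @ np.linalg.pinv(A)
PB, Py = P @ B, P @ y
lam, step = 0.01, 1.0 / np.linalg.norm(PB, 2) ** 2
u = np.zeros(p)
for _ in range(1000):                                  # ISTA for the sparse component
    v = u - step * (PB.T @ (PB @ u - Py))
    u = np.sign(v) * np.maximum(np.abs(v) - step * lam, 0.0)

x = np.linalg.lstsq(A, y - B @ u, rcond=None)[0]       # dense component by least squares
print("rel. err (u, x):",
      np.linalg.norm(u - u_true) / np.linalg.norm(u_true),
      np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```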
Many modern datasets, from areas such as neuroimaging and geostatistics, come in the form of random samples of tensor data, which can be understood as noisy observations of smooth multidimensional random functions. Most traditional techniques from functional data analysis are plagued by the curse of dimensionality and quickly become intractable as the dimension of the domain increases. In this paper, we propose a framework for learning continuous representations from a sample of multidimensional functional data that is immune to several manifestations of this curse. These representations are constructed using a set of separable basis functions that are defined to be optimally adapted to the data. We show that the resulting estimation problem can be solved efficiently by the tensor decomposition of a carefully defined reduction transformation of the observed data. Roughness regularization is incorporated using a penalty based on differential operators. Relevant theoretical properties are also established. The advantages of our method over competing approaches are demonstrated in a simulation study. We conclude with a real-data application in neuroimaging.
Atmospheric motion vectors (AMVs) extracted from satellite imagery are the only wind observations with good global coverage. They are an important feature for feeding numerical weather prediction (NWP) models. Several Bayesian models have been proposed to estimate AMVs. Although critical for the correct assimilation into NWP models, very few methods provide a thorough characterization of the estimation errors. The difficulty of estimating errors stems from the specificity of the posterior distribution, which is both very high-dimensional and highly ill-conditioned due to a singular likelihood, the latter becoming particularly important in the presence of missing data (unobserved pixels). This work studies the evaluation of the expected error of AMVs using gradient-based Markov chain Monte Carlo (MCMC) algorithms. Our main contribution is to propose a tempering strategy, which amounts to sampling a local approximation of the joint posterior distribution of the AMVs and image variables around a point estimate. In addition, we provide covariances related to the prior family itself (fractional Brownian motion), with different hyperparameters. From a theoretical point of view, we show that under regularity assumptions, the family of tempered posterior distributions converges in distribution, as the temperature decreases, to an optimal Gaussian approximation at the point estimate given by the maximum a posteriori (MAP) log-density. From an empirical point of view, we evaluate the proposed approach using some quantitative Bayesian criteria. Our numerical simulations, performed on synthetic and real meteorological data, reveal a significant gain in the accuracy of the AMV point estimates and of their associated expected error estimates, but also a substantial acceleration in the convergence speed of the MCMC algorithms.
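A minimal sketch of the kind of gradient-based MCMC this work builds on, namely the Metropolis-adjusted Langevin algorithm (MALA); the tempering strategy and the fractional-Brownian-motion covariances themselves are not shown:

```python
import numpy as np

def mala(log_post, grad_log_post, x0, step=0.1, n_samples=2000, rng=None):
    """Metropolis-adjusted Langevin algorithm: a Langevin proposal corrected
    by a Metropolis-Hastings accept/reject step."""
    rng = rng or np.random.default_rng(0)
    x = np.array(x0, dtype=float)
    samples = []
    def log_q(a, b):     # log density of proposing a from b (up to a constant)
        return -np.sum((a - b - step * grad_log_post(b)) ** 2) / (4.0 * step)
    for _ in range(n_samples):
        prop = (x + step * grad_log_post(x)
                + np.sqrt(2.0 * step) * rng.standard_normal(x.shape))
        log_alpha = log_post(prop) - log_post(x) + log_q(x, prop) - log_q(prop, x)
        if np.log(rng.uniform()) < log_alpha:
            x = prop
        samples.append(x.copy())
    return np.array(samples)

# toy target: standard 2-D Gaussian posterior
chain = mala(lambda x: -0.5 * np.sum(x**2), lambda x: -x, x0=np.zeros(2))
print("mean:", chain.mean(axis=0), " var:", chain.var(axis=0))
```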
Countless signal processing applications include the reconstruction of signals from few indirect linear measurements. The design of effective measurement operators is typically constrained by the underlying hardware and physics, posing a challenging and often even discrete optimization task. While the potential of gradient-based learning via the unrolling of iterative recovery algorithms has been demonstrated, it has remained unclear how to leverage this technique when the set of admissible measurement operators is structured and discrete. We tackle this problem by combining unrolled optimization with Gumbel reparametrizations, which enable the computation of low-variance gradient estimates of categorical random variables. Our approach is formalized by GLODISMO (Gradient-based Learning of DIscrete Structured Measurement Operators). This novel method is easy to implement, computationally efficient, and extendable due to its compatibility with automatic differentiation. We empirically demonstrate the performance and flexibility of GLODISMO in several prototypical signal recovery applications, verifying that the learned measurement matrices outperform conventional designs based on randomization as well as discrete optimization baselines.
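A NumPy sketch of the Gumbel reparametrization at the core of this approach: relaxing a categorical choice (e.g., which k-space rows to sample) into a differentiable soft-max so that gradients can flow through an unrolled recovery algorithm; the unrolling and training loop, done with automatic differentiation in practice, are omitted:

```python
import numpy as np

def gumbel_softmax(logits, tau=0.5, rng=None):
    """Gumbel-softmax: a differentiable relaxation of one-hot categorical
    sampling, obtained by perturbing logits with Gumbel noise."""
    rng = rng or np.random.default_rng(0)
    g = -np.log(-np.log(rng.uniform(size=logits.shape)))   # Gumbel(0, 1) noise
    z = (logits + g) / tau
    z -= z.max(axis=-1, keepdims=True)                     # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# e.g., 8 measurement "slots", each choosing one of 64 candidate k-space rows
logits = np.zeros((8, 64))                 # trainable parameters in the real method
soft_rows = gumbel_softmax(logits)         # near-one-hot selectors as tau -> 0
print(soft_rows.shape, soft_rows[0].argmax())
```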