Purpose: To develop a deep-learning-based image reconstruction framework for reproducible research in MRI. Methods: The BART toolbox offers a rich set of implementations of calibration and reconstruction algorithms for parallel imaging and compressed sensing. In this work, BART is extended by a non-linear operator framework that provides automatic differentiation to allow the computation of gradients. BART's existing MRI-specific operators, such as the non-uniform fast Fourier transform, are directly integrated into this framework and complemented by common building blocks used in neural networks. To evaluate the use of the framework for advanced deep-learning-based reconstruction, two state-of-the-art unrolled reconstruction networks, namely the Variational Network [1] and MoDL [2], were implemented. Results: State-of-the-art deep image reconstruction networks can be constructed and trained using BART's gradient-based optimization algorithms. Compared to the original TensorFlow-based implementations, the BART implementations show similar performance in terms of training time and reconstruction quality. Conclusion: By integrating non-linear operators and neural networks into BART, we provide a general framework for deep-learning-based reconstruction in MRI.
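Neither BART's C operator framework nor its network API is reproduced here, but the structure of an unrolled network such as MoDL can be sketched with standard PyTorch building blocks. The sketch below alternates a small CNN regularizer with a closed-form data-consistency step for a single-coil Cartesian model; all names (`UnrolledMoDL`, `SimpleCNN`, the mask handling) are illustrative assumptions rather than BART code.

```python
# Minimal sketch of a MoDL-style unrolled network (single-coil Cartesian MRI).
import torch
import torch.nn as nn


def fft2c(x):
    """Centered 2D FFT of a complex tensor."""
    return torch.fft.fftshift(torch.fft.fft2(torch.fft.ifftshift(x, dim=(-2, -1))), dim=(-2, -1))


def ifft2c(k):
    """Centered 2D inverse FFT of a complex tensor."""
    return torch.fft.fftshift(torch.fft.ifft2(torch.fft.ifftshift(k, dim=(-2, -1))), dim=(-2, -1))


class SimpleCNN(nn.Module):
    """Small residual CNN acting on 2-channel (real/imag) images."""

    def __init__(self, chans=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, chans, 3, padding=1), nn.ReLU(),
            nn.Conv2d(chans, chans, 3, padding=1), nn.ReLU(),
            nn.Conv2d(chans, 2, 3, padding=1),
        )

    def forward(self, x):
        return x + self.net(x)


class UnrolledMoDL(nn.Module):
    """Alternates CNN regularization with a closed-form data-consistency step."""

    def __init__(self, iters=5, lam=0.05):
        super().__init__()
        self.cnn = SimpleCNN()              # weights shared across iterations, as in MoDL
        self.iters = iters
        self.lam = nn.Parameter(torch.tensor(lam))

    def forward(self, y, mask):
        # y: undersampled k-space (complex, B x H x W), mask: sampling mask (float, B x H x W)
        x = ifft2c(y)                        # zero-filled initial image
        for _ in range(self.iters):
            # CNN prior applied to the 2-channel real view of the complex image
            z = torch.view_as_complex(
                self.cnn(torch.view_as_real(x).permute(0, 3, 1, 2))
                .permute(0, 2, 3, 1).contiguous())
            # Closed-form DC for a single-coil Cartesian model:
            # x = argmin ||M F x - y||^2 + lam ||x - z||^2
            k = (mask * y + self.lam * fft2c(z)) / (mask + self.lam)
            x = ifft2c(k)
        return x


if __name__ == "__main__":
    mask = (torch.rand(1, 64, 64) < 0.3).float()
    y = mask * torch.randn(1, 64, 64, dtype=torch.complex64)
    recon = UnrolledMoDL()(y, mask)
    print(recon.shape)
```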
Physics-driven deep learning methods have emerged as powerful tools for computational magnetic resonance imaging (MRI) problems, pushing reconstruction performance to new limits. This article gives an overview of recent developments in incorporating physics information into learning-based MRI reconstruction. We consider inverse problems with both linear and non-linear forward models for computational MRI and review classical approaches for solving them. We then focus on physics-driven deep learning methods, covering physics-driven loss functions, plug-and-play methods, generative models, and unrolled networks. We highlight domain-specific challenges, such as neural network implementations and complex-valued building blocks, as well as translational applications of MRI with linear and non-linear forward models. Finally, we discuss common issues and open challenges, and draw connections to the importance of physics-driven learning when it is combined with other downstream tasks in the medical imaging pipeline.
Machine learning methods for accelerated magnetic resonance imaging (MRI) reconstruction must integrate interpretability and robustness for clinical application. Doing so would allow fast, high-quality imaging of anatomy and pathology. Data consistency (DC) is crucial for generalization to multi-modal data and for robustness in detecting pathology. This work proposes the Cascades of Independently Recurrent Inference Machines (CIRIM), which assess DC through unrolled optimization, implicitly by gradient descent and explicitly by a designed term. We perform an extensive comparison of the CIRIM against other unrolled optimization methods, namely the End-to-End Variational Network (E2EVN) and the RIM, as well as a UNet and compressed sensing (CS). Evaluation is done in two stages. First, learning on multiple trained MRI models is assessed, i.e., on $T_1$-weighted and FLAIR contrast data and on $T_2$-weighted knee data. Second, robustness is tested by reconstructing 3D FLAIR MRI data of patients with multiple sclerosis (MS). Results show that the CIRIM performs best when DC is enforced implicitly, whereas the E2EVN requires an explicitly formulated DC term. The CIRIM shows the highest lesion contrast resolution when reconstructing clinical MS data. Performance improves by approximately 11% compared to CS, while the reconstruction time is reduced twenty-fold.
The data consistency for the physical forward model is crucial in inverse problems, especially in MR imaging reconstruction. The standard way is to unroll an iterative algorithm into a neural network with a forward model embedded. The forward model always changes in clinical practice, so the learning component's entanglement with the forward model makes the reconstruction hard to generalize. The proposed method is more generalizable for different MR acquisition settings by separating the forward model from the deep learning component. The deep learning-based proximal gradient descent was proposed to create a learned regularization term independent of the forward model. We applied the one-time trained regularization term to different MR acquisition settings to validate the proposed method and compared the reconstruction with the commonly used $\ell_1$ regularization. We showed ~3 dB improvement in the peak signal to noise ratio, compared with conventional $\ell_1$ regularized reconstruction. We demonstrated the flexibility of the proposed method in choosing different undersampling patterns. We also evaluated the effect of parameter tuning for the deep learning regularization.
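A minimal sketch of the idea, assuming a single-coil Cartesian forward model: the forward operator enters only through its gradient step, while the learned regularization appears only as a proximal (denoising) operator, here a placeholder `denoiser` standing in for the trained network.

```python
# Sketch of proximal gradient descent with a learned proximal operator that is
# decoupled from the forward model.
import numpy as np


def forward(x, mask):
    """Undersampled Cartesian forward model A: image -> masked k-space."""
    return mask * np.fft.fft2(x, norm="ortho")


def adjoint(k, mask):
    """Adjoint A^H: masked k-space -> image."""
    return np.fft.ifft2(mask * k, norm="ortho")


def denoiser(x):
    """Placeholder for the learned proximal operator (e.g. a trained CNN)."""
    return x  # identity here; a real regularizer would denoise x


def proximal_gradient_descent(y, mask, step=1.0, iters=20):
    x = adjoint(y, mask)                                 # zero-filled start
    for _ in range(iters):
        grad = adjoint(forward(x, mask) - y, mask)       # gradient of 0.5 * ||Ax - y||^2
        x = denoiser(x - step * grad)                    # learned proximal step
    return x


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    truth = rng.standard_normal((64, 64))
    mask = rng.random((64, 64)) < 0.3
    y = forward(truth, mask)
    print(np.abs(proximal_gradient_descent(y, mask)).mean())
```

Because the forward model appears only inside `forward`/`adjoint`, the same trained `denoiser` can in principle be reused with a different undersampling pattern, which is the separation the abstract describes.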
Unrolled neural networks have recently achieved state-of-the-art MRI reconstruction. These networks unroll an iterative optimization algorithm by alternating between physics-based consistency and neural-network-based regularization. However, they require several iterations of a large neural network to handle high-dimensional imaging tasks such as 3D MRI. This limits traditional training algorithms based on backpropagation, due to the large memory and compute requirements for calculating gradients and storing intermediate activations. To address this challenge, we propose Greedy LEarning for Accelerated MRI (GLEAM) reconstruction, an efficient training strategy for high-dimensional imaging settings. GLEAM splits the end-to-end network into decoupled network modules. Each module is optimized in a greedy manner with decoupled gradient updates, which reduces the memory footprint during training. We show that the decoupled gradient updates can be performed in parallel on multiple graphics processing units (GPUs) to further reduce training time. We present experiments on 2D and 3D datasets, including multi-coil knee, brain, and dynamic cardiac cine MRI. We observe that: i) GLEAM generalizes as well as state-of-the-art memory-efficient baselines, such as gradient checkpointing and invertible networks, with the same memory footprint but 1.3x faster training; ii) for the same memory footprint, GLEAM yields a 1.1 dB PSNR gain in 2D and 1.8 dB in 3D over end-to-end baselines.
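The decoupled updates can be sketched as follows; this is not the GLEAM code, just an illustration of greedy module-wise training in PyTorch in which each module has its own optimizer and local loss, and inputs are detached so no end-to-end graph is retained.

```python
# Sketch of greedy, decoupled training of an unrolled network: each module gets
# its own optimizer and loss, and gradients never cross module boundaries.
import torch
import torch.nn as nn

modules = nn.ModuleList([nn.Sequential(nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
                                        nn.Conv2d(32, 2, 3, padding=1))
                         for _ in range(4)])
optimizers = [torch.optim.Adam(m.parameters(), lr=1e-4) for m in modules]
loss_fn = nn.MSELoss()


def greedy_step(x_zero_filled, target):
    """One training step: every module is updated from its own local loss."""
    x = x_zero_filled
    for module, opt in zip(modules, optimizers):
        x = x.detach()              # decouple: no gradient flows to earlier modules
        out = x + module(x)         # residual refinement by this module
        loss = loss_fn(out, target) # local loss against the fully sampled reference
        opt.zero_grad()
        loss.backward()             # activations of this module only are kept in memory
        opt.step()
        x = out
    return x


if __name__ == "__main__":
    x0 = torch.randn(1, 2, 64, 64)  # 2-channel (real/imag) zero-filled input
    ref = torch.randn(1, 2, 64, 64)
    greedy_step(x0, ref)
```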
We introduce a framework that enables efficient MRI reconstruction by sampling from a learned probability distribution. Unlike conventional deep-learning-based MRI reconstruction techniques, samples are drawn from the posterior distribution given the measured k-space using Markov chain Monte Carlo (MCMC) methods. In addition to the maximum a posteriori (MAP) estimate of the image, which can be obtained by conventional methods, the minimum mean-square error (MMSE) estimate and uncertainty maps can also be computed. The data-driven Markov chain is constructed from a generative model learned from a given image database and is independent of the forward operator used to model the k-space measurements. This provides flexibility, since the method can be applied to k-space acquired with different sampling schemes or receive coils using the same pre-trained model. Furthermore, we use a framework based on a reverse diffusion process to leverage advanced generative models. The performance of the method is evaluated on an open dataset using 10-fold undersampling in k-space.
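A minimal sketch of posterior sampling for MRI with unadjusted Langevin dynamics, where `score_prior` is a placeholder for the learned generative model and the Cartesian mask, noise level, and step size are illustrative assumptions; averaging samples gives an MMSE estimate and their spread an uncertainty map.

```python
# Sketch of posterior sampling with Langevin dynamics for MRI: the learned score
# of the prior is combined with the gradient of the data likelihood.
import numpy as np

rng = np.random.default_rng(0)


def score_prior(x):
    """Placeholder for the learned score network grad_x log p(x)."""
    return -x  # corresponds to a standard Gaussian prior; a trained model goes here


def langevin_posterior_sampling(y, mask, sigma=0.1, step=1e-3, iters=200):
    x = np.fft.ifft2(mask * y, norm="ortho")                 # zero-filled initialization
    for _ in range(iters):
        residual = mask * np.fft.fft2(x, norm="ortho") - y
        grad_lik = -np.fft.ifft2(mask * residual, norm="ortho") / sigma**2
        grad_log_post = score_prior(x) + grad_lik
        noise = rng.standard_normal(x.shape) + 1j * rng.standard_normal(x.shape)
        x = x + step * grad_log_post + np.sqrt(2 * step) * noise
    return x


if __name__ == "__main__":
    mask = rng.random((64, 64)) < 0.3
    y = mask * np.fft.fft2(rng.standard_normal((64, 64)), norm="ortho")
    samples = [langevin_posterior_sampling(y, mask) for _ in range(4)]
    mmse = np.mean(samples, axis=0)                          # MMSE estimate from samples
    uncertainty = np.std(np.abs(samples), axis=0)            # pixelwise uncertainty map
    print(np.abs(mmse).mean(), uncertainty.mean())
```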
Compressed sensing (CS) has played a key role in accelerating the magnetic resonance imaging (MRI) acquisition process. With the resurgence of artificial intelligence, deep neural networks and CS algorithms are being integrated to redefine the state of the art in fast MRI. The past several years have witnessed substantial growth in the complexity, diversity, and performance of deep-learning-based CS techniques dedicated to fast MRI. In this meta-analysis, we systematically review deep-learning-based CS techniques for fast MRI, describe key model designs, highlight breakthroughs, and discuss promising directions. We also introduce a comprehensive analysis framework and a classification system to assess the pivotal role of deep learning in CS-based accelerated MRI.
This digital book contains a practical and comprehensive introduction to everything related to deep learning in the context of physical simulations. As much as possible, every topic comes with hands-on code examples in the form of Jupyter notebooks to enable a quick start. Beyond standard supervised learning from data, we look at physical loss constraints, more tightly coupled learning algorithms with differentiable simulations, as well as reinforcement learning and uncertainty modeling. We live in exciting times: these methods have the enormous potential to fundamentally change what computer simulations can achieve.
We introduce version 3 of NetKet, the machine learning toolbox for many-body quantum physics. NetKet is built around neural-network quantum states and provides efficient algorithms for their evaluation and optimization. This new version is built on top of JAX, a framework for differentiable programming and accelerated linear algebra for the Python programming language. The most significant new feature is the possibility to define arbitrary neural-network ansätze in pure Python code using the concise notation of machine-learning frameworks, which allows for just-in-time compilation as well as the implicit generation of gradients thanks to automatic differentiation. NetKet 3 also brings support for GPU and TPU accelerators, advanced support for discrete symmetry groups, chunking to scale up to many degrees of freedom, drivers for quantum dynamics applications, and improved modularity, allowing users to use only parts of the toolbox as a foundation for their own code.
Magnetic resonance imaging can produce detailed images of human anatomy and physiology that can assist clinicians in diagnosing and treating pathologies such as tumors. However, MRI suffers from very long acquisition times, which makes it susceptible to patient-motion artifacts and limits its potential for delivering dynamic treatment. Conventional approaches such as parallel imaging and compressed sensing increase MRI acquisition speed by reconstructing MR images from less MRI data acquired with multiple receiver coils. Recent advances in deep learning, combined with parallel imaging and compressed sensing techniques, have the potential to produce high-fidelity reconstructions from highly accelerated MRI data. In this work, we present a novel deep-learning-based inverse-problem solver for accelerated MRI reconstruction, the Recurrent Variational Network (RecurrentVarNet), which exploits the properties of convolutional recurrent networks and unrolled algorithms. The RecurrentVarNet consists of multiple blocks, each responsible for one unrolled iteration of a gradient-descent optimization algorithm for solving the inverse problem. In contrast to traditional approaches, the optimization steps are performed in the observation domain ($k$-space) rather than in the image domain. Each RecurrentVarNet block operates on the observed $k$-space and consists of a data-consistency term and a recurrent unit that takes as input a hidden state and the prediction of the previous block. Our proposed method achieves new state-of-the-art qualitative and quantitative reconstruction results on 5-fold and 10-fold accelerated data from a public multi-channel brain dataset, outperforming previous conventional and deep-learning-based approaches. We will release all model code and baselines in a public repository.
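An illustrative sketch (not the released code) of a block that refines $k$-space directly: a data-consistency gradient in $k$-space plus a crude recurrent regularizer whose hidden state is passed on to the next block. All module names and shapes are assumptions.

```python
# Sketch of a RecurrentVarNet-style block operating in k-space.
import torch
import torch.nn as nn


class RecurrentKSpaceBlock(nn.Module):
    def __init__(self, chans=32):
        super().__init__()
        self.step = nn.Parameter(torch.tensor(0.1))
        self.update = nn.Conv2d(2 + chans, chans, 3, padding=1)  # crude recurrent update
        self.to_k = nn.Conv2d(chans, 2, 3, padding=1)

    def forward(self, k, y, mask, hidden):
        # k, y: complex k-space (B, H, W); mask: float sampling mask; hidden: (B, chans, H, W)
        grad = mask * (k - y)                                    # data-consistency gradient
        k_2ch = torch.view_as_real(k).permute(0, 3, 1, 2)
        hidden = torch.tanh(self.update(torch.cat([k_2ch, hidden], dim=1)))
        reg = self.to_k(hidden).permute(0, 2, 3, 1).contiguous()
        k = k - self.step * grad + torch.view_as_complex(reg)    # refined k-space
        return k, hidden


if __name__ == "__main__":
    mask = (torch.rand(1, 64, 64) < 0.3).float()
    y = mask * torch.randn(1, 64, 64, dtype=torch.complex64)
    block = RecurrentKSpaceBlock()
    k, h = block(y.clone(), y, mask, torch.zeros(1, 32, 64, 64))
    print(k.shape, h.shape)
```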
The CSGM framework (Bora-Jalal-Price-Dimakis '17) has shown that deep generative priors can be powerful tools for solving inverse problems. However, to date this framework has been empirically successful only on certain datasets (e.g., faces and MNIST digits), and it is known to perform poorly on out-of-distribution samples. This paper presents the first successful application of the CSGM framework to clinical MRI data. We train a generative prior on brain scans from the fastMRI dataset and show that posterior sampling via Langevin dynamics achieves high-quality reconstructions. Furthermore, our experiments and theory show that posterior sampling is robust to changes in the ground-truth distribution and in the measurement process. Our code and models are available at: \url{https://github.com/utcsilab/csgm-mri-langevin}.
Derivatives, mostly in the form of gradients and Hessians, are ubiquitous in machine learning. Automatic differentiation (AD), also called algorithmic differentiation or simply "autodiff", is a family of techniques similar to but more general than backpropagation for efficiently and accurately evaluating derivatives of numeric functions expressed as computer programs. AD is a small but established field with applications in areas including computational fluid dynamics, atmospheric sciences, and engineering design optimization. Until very recently, the fields of machine learning and AD have largely been unaware of each other and, in some cases, have independently discovered each other's results. Despite its relevance, general-purpose AD has been missing from the machine learning toolbox, a situation slowly changing with its ongoing adoption under the names "dynamic computational graphs" and "differentiable programming". We survey the intersection of AD and machine learning, cover applications where AD has direct relevance, and address the main implementation techniques. By precisely defining the main differentiation techniques and their interrelationships, we aim to bring clarity to the usage of the terms "autodiff", "automatic differentiation", and "symbolic differentiation" as these are encountered more and more in machine learning settings.
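Forward-mode AD with dual numbers is the textbook starting point the survey builds on; the toy implementation below propagates values and derivatives together through ordinary Python arithmetic, which is neither symbolic nor numerical differentiation.

```python
# Minimal forward-mode automatic differentiation with dual numbers: each value
# carries its derivative, and arithmetic propagates both exactly.
import math


class Dual:
    def __init__(self, value, deriv=0.0):
        self.value, self.deriv = value, deriv

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.value + other.value, self.deriv + other.deriv)

    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.value * other.value,
                    self.value * other.deriv + self.deriv * other.value)

    __rmul__ = __mul__


def sin(x):
    # chain rule applied alongside the primal evaluation
    return Dual(math.sin(x.value), math.cos(x.value) * x.deriv)


def f(x):
    return x * x + 3.0 * sin(x)


if __name__ == "__main__":
    x = Dual(2.0, 1.0)          # seed derivative dx/dx = 1
    y = f(x)
    print(y.value, y.deriv)     # f(2) and f'(2) = 2*2 + 3*cos(2)
```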
Reduction of MRI scan time is often achieved with parallel imaging methods, typically based on uniform undersampling of the inverse image space (a.k.a. k-space) and simultaneous signal reception with multiple receiver coils. The GRAPPA method interpolates missing k-space signals by linear combinations of neighboring acquired signals across all coils, which can be described by a convolution in k-space. Recently, a more generalized approach called RAKI was introduced. RAKI is a deep-learning method that generalizes GRAPPA by adding convolutional layers with non-linear activation functions applied in between. This enables a non-linear estimation of the missing signals by convolutional neural networks. As with GRAPPA, the convolution kernels in RAKI are trained using specific training samples obtained from the auto-calibration signal (ACS). RAKI provides superior reconstruction quality compared to GRAPPA; however, due to the increased number of unknown parameters, it typically requires more ACS data. To overcome this limitation, this study investigates the influence of training data on reconstruction quality for standard 2D imaging, with a particular focus on its amount and contrast information. In addition, an iterative k-space interpolation approach (iRAKI) is evaluated, which includes training-data augmentation via an initial GRAPPA reconstruction and refinement of the convolution filters by iterative training. Using only 18, 20, and 25 ACS lines (8%), iRAKI suppresses the residual artifacts that occur at acceleration factors R = 4 and R = 5 and provides strong noise suppression compared to GRAPPA, as underlined by quantitative quality metrics. Combination with a phase constraint yields further improvement. Furthermore, iRAKI shows better performance than GRAPPA and RAKI in the case of pre-scan calibration and strongly differing contrast between training and undersampled data.
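The calibration idea behind GRAPPA can be sketched as a least-squares fit on the ACS region; the snippet below is deliberately simplified (uniform R = 2, a single kx offset, synthetic data) and is not a faithful GRAPPA or RAKI implementation.

```python
# Simplified sketch of GRAPPA-style calibration: linear weights that synthesize a
# missing k-space line from its two acquired neighbors across all coils are fit
# by least squares on the ACS region, then applied elsewhere in k-space.
import numpy as np

rng = np.random.default_rng(0)
ncoil, ny, nx = 4, 64, 64
kspace = rng.standard_normal((ncoil, ny, nx)) + 1j * rng.standard_normal((ncoil, ny, nx))
acs = kspace[:, ny // 2 - 12: ny // 2 + 12, :]           # fully sampled calibration region

# Build the calibration system: sources are the neighboring line pairs, targets the middle line.
sources, targets = [], []
for line in range(1, acs.shape[1] - 1):
    src = np.concatenate([acs[:, line - 1, :], acs[:, line + 1, :]], axis=0)  # (2*ncoil, nx)
    sources.append(src.T)                                 # one row per readout point
    targets.append(acs[:, line, :].T)
A = np.concatenate(sources, axis=0)                       # (npoints, 2*ncoil)
b = np.concatenate(targets, axis=0)                       # (npoints, ncoil)
weights, *_ = np.linalg.lstsq(A, b, rcond=None)           # (2*ncoil, ncoil)

# Apply the weights to synthesize a missing line from its acquired neighbors.
missing = 11
src = np.concatenate([kspace[:, missing - 1, :], kspace[:, missing + 1, :]], axis=0).T
estimate = src @ weights                                  # (nx, ncoil)
print(estimate.shape)
```

RAKI replaces the single linear mapping above with a small convolutional network trained on the same ACS data, which is where the non-linear estimation comes from.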
We present neural network layers that explicitly combine frequency- and image-space feature representations, and show that they can be used as a versatile building block for reconstruction from frequency-space data. Our work is motivated by the challenges arising in MRI acquisition, where the signal is a Fourier transform of the desired image. The proposed joint learning scheme both corrects artifacts native to the frequency space and manipulates image-space representations to reconstruct coherent image structures at every layer of the network. This is in contrast to most current deep learning approaches for image reconstruction, which treat frequency- and image-space features separately and often operate exclusively in one of the two spaces. We demonstrate the advantages of joint convolutional learning for a variety of tasks, including motion correction, denoising, reconstruction from undersampled acquisitions, and combined undersampling and motion correction on simulated and real-world multi-coil MRI data. The joint models produce consistently high-quality output images across all tasks and datasets. When integrated into a state-of-the-art unrolled optimization network with physics-inspired data-consistency constraints for undersampled reconstruction, the proposed architecture substantially improves the optimization landscape, yielding an order-of-magnitude reduction in training time. This result suggests that joint representations are particularly well suited to MRI signals in deep learning networks. Our code and pretrained models are publicly available at https://github.com/nalinimsingh/interlacer.
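A joint block of this kind might look as follows; this is an illustrative sketch, not the released Interlacer code, and the layer shapes and the complex-convolution helper are chosen for brevity.

```python
# Illustrative joint frequency/image block: one branch convolves image-space
# features, the other convolves k-space features, and each branch also sees the
# Fourier transform of the other before its convolution.
import torch
import torch.nn as nn


class JointBlock(nn.Module):
    def __init__(self, chans=16):
        super().__init__()
        self.img_conv = nn.Conv2d(2 * chans, chans, 3, padding=1)
        self.freq_conv = nn.Conv2d(2 * chans, chans, 3, padding=1)

    def forward(self, img_feat, freq_feat):
        # img_feat, freq_feat: complex tensors of shape (B, chans, H, W)
        img_from_freq = torch.fft.ifft2(freq_feat, norm="ortho")
        freq_from_img = torch.fft.fft2(img_feat, norm="ortho")
        img_in = torch.cat([img_feat, img_from_freq], dim=1)
        freq_in = torch.cat([freq_feat, freq_from_img], dim=1)
        img_out = self._cconv(self.img_conv, img_in)
        freq_out = self._cconv(self.freq_conv, freq_in)
        return img_out, freq_out

    @staticmethod
    def _cconv(conv, z):
        """Apply the same real convolution to real and imaginary parts."""
        return torch.complex(conv(z.real), conv(z.imag))


if __name__ == "__main__":
    block = JointBlock()
    img = torch.randn(1, 16, 64, 64, dtype=torch.complex64)
    freq = torch.fft.fft2(img, norm="ortho")
    out_img, out_freq = block(img, freq)
    print(out_img.shape, out_freq.shape)
```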
We propose a general learning-based framework for solving nonsmooth and nonconvex image reconstruction problems. We model the regularization function as the composition of the $\ell_{2,1}$ norm with a smooth but nonconvex feature mapping, which is parameterized as a deep convolutional neural network. By leveraging Nesterov's smoothing technique and the concept of residual learning, we develop a provably convergent descent-type algorithm to solve the nonsmooth nonconvex minimization problem, and we learn the network parameters such that the outputs of the algorithm match the references in the training data. Our method is versatile, since various modern network architectures can be used for the regularization, and the resulting network inherits the guaranteed convergence of the algorithm. We also show that the proposed network is parameter-efficient and that its performance compares favorably with state-of-the-art methods on a variety of image reconstruction problems in practice.
Artificial Intelligence (AI) is having a tremendous impact across most areas of science. Applications of AI in healthcare have the potential to improve our ability to detect, diagnose, prognose, and intervene on human disease. For AI models to be used clinically, they need to be made safe, reproducible and robust, and the underlying software framework must be aware of the particularities (e.g. geometry, physiology, physics) of medical data being processed. This work introduces MONAI, a freely available, community-supported, and consortium-led PyTorch-based framework for deep learning in healthcare. MONAI extends PyTorch to support medical data, with a particular focus on imaging, and provide purpose-specific AI model architectures, transformations and utilities that streamline the development and deployment of medical AI models. MONAI follows best practices for software development, providing an easy-to-use, robust, well-documented, and well-tested software framework. MONAI preserves the simple, additive, and compositional approach of its underlying PyTorch libraries. MONAI is being used by and receiving contributions from research, clinical and industrial teams from around the world, who are pursuing applications spanning nearly every aspect of healthcare.
Score-based diffusion models provide a powerful way to model images using the gradient of the data distribution. Leveraging the learned score function as a prior, we introduce here a method to sample from the conditional distribution given the measurements, so that the model can readily be used for solving inverse problems in imaging, especially for accelerated MRI. In short, we train a continuous, time-dependent score function with denoising score matching. Then, at the inference stage, we iterate between a numerical SDE solver and a data-consistency projection step to achieve the reconstruction. Our model requires only magnitude images for training, yet is able to reconstruct complex-valued data and even extends to parallel imaging. The proposed method is agnostic to the sub-sampling pattern and can be used with any sampling scheme. Moreover, owing to its generative nature, our method can quantify uncertainty, which is not possible in a standard regression setting. On top of all these advantages, our method also shows very strong performance, even beating models trained with full supervision. Through extensive experiments, we verify the superiority of our method in terms of both quality and practicality.
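The inference loop can be sketched as alternating a reverse-diffusion step with a hard data-consistency projection in k-space; the score function, noise schedule, and single-coil model below are placeholders for illustration only.

```python
# Sketch of alternating a reverse-diffusion (predictor) step with a hard
# data-consistency projection in k-space.  The score network is a placeholder.
import numpy as np

rng = np.random.default_rng(0)


def score(x, sigma):
    """Placeholder for the trained time-dependent score s_theta(x, sigma)."""
    return -x / (1.0 + sigma**2)  # score of a unit Gaussian blurred by noise level sigma


def data_consistency_projection(x, y, mask):
    """Replace the sampled k-space locations of x by the measured values y."""
    k = np.fft.fft2(x, norm="ortho")
    k = np.where(mask, y, k)
    return np.fft.ifft2(k, norm="ortho")


def reconstruct(y, mask, sigmas):
    x = rng.standard_normal((64, 64)) + 1j * rng.standard_normal((64, 64))
    for i in range(len(sigmas) - 1):
        dt = sigmas[i] ** 2 - sigmas[i + 1] ** 2                  # decreasing noise schedule
        noise = rng.standard_normal(x.shape) + 1j * rng.standard_normal(x.shape)
        x = x + dt * score(x, sigmas[i]) + np.sqrt(dt) * noise    # reverse-diffusion step
        x = data_consistency_projection(x, y, mask)               # enforce measurements
    return x


if __name__ == "__main__":
    mask = rng.random((64, 64)) < 0.3
    y = mask * np.fft.fft2(rng.standard_normal((64, 64)), norm="ortho")
    sigmas = np.linspace(1.0, 0.01, 50)
    print(np.abs(reconstruct(y, mask, sigmas)).mean())
```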
Image reconstruction using deep learning algorithms offers improved reconstruction quality and lower reconstruction time than classical compressed sensing and model-based algorithms. Unfortunately, clean and fully sampled ground-truth data to train the deep networks is often unavailable in several applications, restricting the applicability of the above methods. We introduce a novel metric termed the ENsemble Stein's Unbiased Risk Estimate (ENSURE) framework, which can be used to train deep image reconstruction algorithms without fully sampled and noise-free images. The proposed framework is the generalization of the classical SURE and GSURE formulation to the setting where the images are sampled by different measurement operators, chosen randomly from a set. We evaluate the expectation of the GSURE loss functions over the sampling patterns to obtain the ENSURE loss function. We show that this loss is an unbiased estimate for the true mean-square error, which offers a better alternative to GSURE, which only offers an unbiased estimate for the projected error. Our experiments show that the networks trained with this loss function can offer reconstructions comparable to the supervised setting. While we demonstrate this framework in the context of MR image recovery, the ENSURE framework is generally applicable to arbitrary inverse problems.
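For context, the classical ingredient that ENSURE generalizes is a Monte-Carlo SURE loss for Gaussian denoising, sketched below; the network, noise level, and divergence-estimator settings are illustrative assumptions rather than the ENSURE formulation itself.

```python
# Sketch of a Monte-Carlo SURE loss for a denoiser under Gaussian noise: an
# unbiased estimate of the per-pixel MSE that needs no clean reference images.
import torch


def mc_sure_loss(network, noisy, sigma, eps=1e-3):
    out = network(noisy)
    fidelity = torch.mean((out - noisy) ** 2)
    # Monte-Carlo estimate of the network divergence (trace of the Jacobian)
    b = torch.randn_like(noisy)
    divergence = torch.mean(b * (network(noisy + eps * b) - out)) / eps
    return fidelity + 2 * sigma**2 * divergence - sigma**2


if __name__ == "__main__":
    net = torch.nn.Sequential(torch.nn.Conv2d(1, 16, 3, padding=1), torch.nn.ReLU(),
                              torch.nn.Conv2d(16, 1, 3, padding=1))
    sigma = 0.1
    noisy = torch.rand(4, 1, 32, 32) + sigma * torch.randn(4, 1, 32, 32)
    loss = mc_sure_loss(net, noisy, sigma)
    loss.backward()
    print(float(loss))
```

ENSURE extends this idea to the GSURE setting and takes the expectation over randomly chosen measurement operators, which is what makes the loss an unbiased estimate of the true mean-square error rather than only the projected error.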
Deep learning frameworks have often focused on either usability or speed, but not both. PyTorch is a machine learning library that shows that these two goals are in fact compatible: it provides an imperative and Pythonic programming style that supports code as a model, makes debugging easy and is consistent with other popular scientific computing libraries, while remaining efficient and supporting hardware accelerators such as GPUs. In this paper, we detail the principles that drove the implementation of PyTorch and how they are reflected in its architecture. We emphasize that every aspect of PyTorch is a regular Python program under the full control of its user. We also explain how the careful and pragmatic implementation of the key components of its runtime enables them to work together to achieve compelling performance. We demonstrate the efficiency of individual subsystems, as well as the overall speed of PyTorch on several common benchmarks.
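A few lines suffice to illustrate the define-by-run style described here: the model is ordinary Python code, including data-dependent control flow, and autograd records whatever path was executed.

```python
# A tiny illustration of PyTorch's imperative, define-by-run programming model.
import torch

w = torch.randn(3, requires_grad=True)


def model(x):
    h = x @ w
    if h.item() > 0:            # plain Python branching inside the model
        h = h * 2
    return h


loss = (model(torch.ones(3)) - 1.0) ** 2
loss.backward()                 # gradients follow the path that was actually executed
print(w.grad)
```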
In this work, we propose a novel image reconstruction framework that directly learns a neural implicit representation in k-space for ECG-triggered non-Cartesian Cardiac Magnetic Resonance Imaging (CMR). While existing methods bin acquired data from neighboring time points to reconstruct one phase of the cardiac motion, our framework allows for a continuous, binning-free, and subject-specific k-space representation. We assign a unique coordinate that consists of time, coil index, and frequency domain location to each sampled k-space point. We then learn the subject-specific mapping from these unique coordinates to k-space intensities using a multi-layer perceptron with frequency domain regularization. During inference, we obtain a complete k-space for Cartesian coordinates and an arbitrary temporal resolution. A simple inverse Fourier transform recovers the image, eliminating the need for density compensation and costly non-uniform Fourier transforms for non-Cartesian data. This novel imaging framework was tested on 42 radially sampled datasets from 6 subjects. The proposed method outperforms other techniques qualitatively and quantitatively using data from four and one heartbeat(s) and 30 cardiac phases. Our results for one heartbeat reconstruction of 50 cardiac phases show improved artifact removal and spatio-temporal resolution, leveraging the potential for real-time CMR.
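A stripped-down sketch of the idea, with made-up coordinate ranges, network sizes, and synthetic data: an MLP is fit to the sampled (time, coil, kx, ky) points and then queried on a Cartesian grid, after which a plain inverse FFT gives the image.

```python
# Sketch of a neural implicit k-space representation: an MLP maps the coordinate
# (time, coil index, kx, ky) to a complex k-space value.
import torch
import torch.nn as nn

mlp = nn.Sequential(nn.Linear(4, 256), nn.ReLU(),
                    nn.Linear(256, 256), nn.ReLU(),
                    nn.Linear(256, 2))                     # outputs (real, imag)
opt = torch.optim.Adam(mlp.parameters(), lr=1e-3)

# coords: (N, 4) sampled non-Cartesian coordinates, values: (N,) complex measurements
coords = torch.rand(4096, 4)
values = torch.randn(4096, dtype=torch.complex64)

for _ in range(100):                                       # fit the subject-specific mapping
    pred = mlp(coords)
    pred_c = torch.complex(pred[:, 0], pred[:, 1])
    loss = torch.mean(torch.abs(pred_c - values) ** 2)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Inference: query a full Cartesian grid at an arbitrary time point and one coil,
# then recover the image with an ordinary inverse FFT (no gridding needed).
ky, kx = torch.meshgrid(torch.linspace(0, 1, 64), torch.linspace(0, 1, 64), indexing="ij")
grid = torch.stack([torch.full_like(kx, 0.5),              # time
                    torch.zeros_like(kx),                  # coil index
                    kx, ky], dim=-1).reshape(-1, 4)
with torch.no_grad():
    k = mlp(grid)
k_full = torch.complex(k[:, 0], k[:, 1]).reshape(64, 64)
image = torch.fft.ifft2(k_full, norm="ortho")
print(image.shape)
```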