In this work, we propose a Self-supervised COordinate Projection nEtwork (SCOPE) to reconstruct an artifact-free CT image from a single SV sinogram by solving the inverse tomographic imaging problem. Compared with recent related works that solve similar problems using implicit neural representation networks (INR), our essential contribution is an effective and simple re-projection strategy that pushes the tomographic image reconstruction quality toward that of supervised deep-learning CT reconstruction methods. The proposed strategy is inspired by the simple relationship between linear algebra and inverse problems. To solve the under-determined linear equation system, we first introduce an INR to constrain the solution space via an image continuity prior and obtain a rough solution. Second, we propose to generate a dense-view sinogram that improves the rank of the linear equation system and yields a more stable solution space for the CT image. Our experimental results demonstrate that the re-projection strategy significantly improves the image reconstruction quality (at least +3 dB in PSNR). In addition, we integrate the recent hash encoding into our SCOPE model, which greatly accelerates model training. Finally, we evaluate SCOPE on parallel-beam and fan-beam X-ray SVCT reconstruction tasks. Experimental results indicate that the proposed SCOPE model outperforms two latest INR-based methods and two popular supervised DL methods.
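A minimal sketch of the re-projection idea described above, assuming the coordinate network has already been fitted to the sparse-view sinogram. `query_inr_on_grid` is a hypothetical placeholder (it returns a toy image instead of evaluating a trained MLP), and the dense-view count and filter choice are illustrative:

```python
import numpy as np
from skimage.transform import radon, iradon

def query_inr_on_grid(n=256):
    """Hypothetical stand-in for sampling the trained coordinate MLP on an n x n grid."""
    yy, xx = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
    return np.exp(-(xx ** 2 + yy ** 2) / 0.05)      # toy image instead of a real INR output

coarse_img = query_inr_on_grid()                    # rough solution from the INR fit

# Re-projection: synthesize far more views than were actually measured, so the
# resulting linear system is better conditioned than the sparse-view one.
dense_angles = np.linspace(0.0, 180.0, 720, endpoint=False)
dense_sinogram = radon(coarse_img, theta=dense_angles)

# A standard filtered back-projection on the dense-view sinogram gives the final image.
final_img = iradon(dense_sinogram, theta=dense_angles, filter_name='ramp')
```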
Neural Radiance Fields (NeRF) have received wide attention in Sparse-View Computed Tomography (SVCT) reconstruction tasks as a self-supervised deep learning framework. NeRF-based SVCT methods represent the desired CT image as a continuous function of spatial coordinates and train a Multi-Layer Perceptron (MLP) to learn the function by minimizing loss on the SV sinogram. Benefiting from the continuous representation provided by NeRF, high-quality CT images can be reconstructed. However, existing NeRF-based SVCT methods strictly assume there is no relative motion during the CT acquisition, because they require \textit{accurate} projection poses to model the X-rays that scan the SV sinogram. Therefore, these methods suffer from severe performance drops for real SVCT imaging with motion. In this work, we propose a self-calibrating neural field to recover the artifact-free image from the rigid motion-corrupted SV sinogram without using any external data. Specifically, we parametrize the inaccurate projection poses caused by rigid motion as trainable variables and then jointly optimize these pose variables and the MLP. We conduct numerical experiments on a public CT image dataset. The results indicate our model significantly outperforms two representative NeRF-based methods for SVCT reconstruction tasks with four different levels of rigid motion.
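A toy sketch of the joint optimization, assuming a simple parallel-beam geometry: per-view angle offsets are declared as trainable parameters next to the MLP weights, and both are updated from the sinogram loss. The ray-sampling projector, network size, learning rate, and the random `sv_sinogram` placeholder are assumptions, not the paper's exact setup:

```python
import torch
import torch.nn as nn

mlp = nn.Sequential(nn.Linear(2, 128), nn.ReLU(),
                    nn.Linear(128, 128), nn.ReLU(),
                    nn.Linear(128, 1))
n_views, n_dets, n_samples = 60, 128, 128
nominal = torch.linspace(0, torch.pi, n_views + 1)[:-1]      # assumed gantry angles
pose_offset = nn.Parameter(torch.zeros(n_views))             # trainable angle errors

def project(angles):
    """Differentiable parallel-beam projection of the MLP image."""
    u = torch.linspace(-1, 1, n_dets)                         # detector coordinates
    s = torch.linspace(-1, 1, n_samples)                      # samples along each ray
    dirs = torch.stack([torch.cos(angles), torch.sin(angles)], -1)   # ray direction
    orth = torch.stack([-torch.sin(angles), torch.cos(angles)], -1)  # detector axis
    # pts[v, d, k, :] = u_d * orth_v + s_k * dirs_v
    pts = (u[None, :, None, None] * orth[:, None, None, :] +
           s[None, None, :, None] * dirs[:, None, None, :])
    vals = mlp(pts.reshape(-1, 2)).reshape(n_views, n_dets, n_samples)
    return vals.sum(-1) * (2.0 / n_samples)                   # line-integral approximation

sv_sinogram = torch.rand(n_views, n_dets)                     # placeholder measurement
opt = torch.optim.Adam(list(mlp.parameters()) + [pose_offset], lr=1e-3)
for step in range(2000):
    opt.zero_grad()
    loss = torch.mean((project(nominal + pose_offset) - sv_sinogram) ** 2)
    loss.backward()
    opt.step()
```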
This paper proposes a novel and fast self-supervised solution for sparse-view CBCT (cone-beam computed tomography) reconstruction that requires no external training data. Specifically, the desired attenuation coefficients are represented as a continuous function of 3D spatial coordinates, parameterized by a fully connected deep neural network. Projections are synthesized discretely, and the network is trained by minimizing the error between real and synthesized projections. A learning-based encoder entailing hash coding is adopted to help the network capture high-frequency details. This encoder outperforms the commonly used frequency-domain encoder in terms of both performance and efficiency, because it exploits the smoothness and sparsity of human organs. Experiments have been conducted on human organ and phantom datasets. The proposed method achieves state-of-the-art accuracy and spends a reasonably short computation time.
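A simplified sketch of the kind of multi-resolution hash encoding referred to above, written in 2D for brevity (the CBCT case uses 3D coordinates and eight cell corners). The hash function, table sizes, and the small head MLP are illustrative assumptions:

```python
import torch
import torch.nn as nn

class HashEncoder2D(nn.Module):
    """Simplified multi-resolution hash encoding (2D here for brevity)."""
    def __init__(self, n_levels=8, n_feats=2, log2_table=14, base_res=16, growth=1.5):
        super().__init__()
        self.table_size = 2 ** log2_table
        self.resolutions = [int(base_res * growth ** i) for i in range(n_levels)]
        self.tables = nn.ParameterList(
            [nn.Parameter(1e-4 * torch.randn(self.table_size, n_feats))
             for _ in range(n_levels)])

    def _hash(self, ij):
        # classic spatial-hash primes; the real Instant-NGP hash differs in detail
        return (ij[..., 0] * 73856093 ^ ij[..., 1] * 19349663) % self.table_size

    def forward(self, x):                          # x: (N, 2) coordinates in [0, 1]
        out = []
        for table, res in zip(self.tables, self.resolutions):
            xs = x * res
            i0 = xs.floor().long()
            f = xs - i0.float()                    # fractional part for interpolation
            shifts = torch.tensor([[0, 0], [1, 0], [0, 1], [1, 1]], device=x.device)
            corners = i0[:, None, :] + shifts[None, :, :]          # (N, 4, 2)
            feats = table[self._hash(corners)]                     # (N, 4, F)
            w = torch.stack([(1 - f[:, 0]) * (1 - f[:, 1]),
                             f[:, 0] * (1 - f[:, 1]),
                             (1 - f[:, 0]) * f[:, 1],
                             f[:, 0] * f[:, 1]], dim=1)            # bilinear weights
            out.append((w[..., None] * feats).sum(dim=1))
        return torch.cat(out, dim=-1)

# Attenuation field: hash features -> small MLP head -> attenuation coefficient.
encoder = HashEncoder2D()
head = nn.Sequential(nn.Linear(8 * 2, 64), nn.ReLU(), nn.Linear(64, 1))
mu = head(encoder(torch.rand(1024, 2)))            # query 1024 random coordinates
```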
High Resolution (HR) medical images provide rich anatomical structure details to facilitate early and accurate diagnosis. In MRI, restricted by hardware capacity, scan time, and patient cooperation ability, isotropic 3D HR image acquisition typically requires a long scan time and results in small spatial coverage and low SNR. Recent studies showed that, with deep convolutional neural networks, isotropic HR MR images can be recovered from low-resolution (LR) input via single image super-resolution (SISR) algorithms. However, most existing SISR methods tend to approach a scale-specific projection between LR and HR images, so these methods can only deal with a fixed up-sampling rate. To achieve different up-sampling rates, multiple SR networks have to be built separately, which is very time-consuming and resource-intensive. In this paper, we propose ArSSR, an Arbitrary Scale Super-Resolution approach for recovering 3D HR MR images. In the ArSSR model, the reconstruction of HR images with different up-scaling rates is defined as learning a continuous implicit voxel function from the observed LR images. Then the SR task is converted to representing the implicit voxel function via deep neural networks from a set of paired HR-LR training examples. The ArSSR model consists of an encoder network and a decoder network. Specifically, the convolutional encoder network extracts feature maps from the LR input images, and the fully-connected decoder network approximates the implicit voxel function. Due to the continuity of the learned function, a single ArSSR model can achieve arbitrary up-sampling rate reconstruction of HR images from any input LR image after training. Experimental results on three datasets show that the ArSSR model can achieve state-of-the-art SR performance for 3D HR MR image reconstruction while using a single trained model to achieve arbitrary up-sampling scales.
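A compact sketch of the encoder/decoder split described above: a tiny convolutional encoder stands in for the feature extractor, and an MLP decoder approximates the implicit voxel function on (interpolated feature, coordinate) pairs. Network sizes and the random LR volume are placeholders, not ArSSR's actual architecture:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder3D(nn.Module):
    """Tiny stand-in for the convolutional encoder that maps an LR volume to features."""
    def __init__(self, feat=16):
        super().__init__()
        self.net = nn.Sequential(nn.Conv3d(1, feat, 3, padding=1), nn.ReLU(),
                                 nn.Conv3d(feat, feat, 3, padding=1))
    def forward(self, x):
        return self.net(x)                         # (B, F, D, H, W), same size as input

class ImplicitDecoder(nn.Module):
    """MLP approximating the implicit voxel function: (feature, coordinate) -> intensity."""
    def __init__(self, feat=16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(feat + 3, 128), nn.ReLU(),
                                 nn.Linear(128, 128), nn.ReLU(),
                                 nn.Linear(128, 1))
    def forward(self, feats, coords):
        return self.net(torch.cat([feats, coords], dim=-1))

encoder, decoder = Encoder3D(), ImplicitDecoder()
lr_volume = torch.rand(1, 1, 16, 16, 16)           # placeholder LR input
feat_map = encoder(lr_volume)

# Query arbitrary continuous coordinates in [-1, 1]^3: trilinearly sample the
# feature map at each coordinate, then decode to an HR intensity. Because the
# query grid can have any density, the up-sampling rate is arbitrary.
coords = torch.rand(1, 4096, 3) * 2 - 1            # (B, N, 3) query points
grid = coords.view(1, -1, 1, 1, 3)                 # grid_sample expects (B, D, H, W, 3)
sampled = F.grid_sample(feat_map, grid, align_corners=True)          # (B, F, N, 1, 1)
sampled = sampled.view(1, feat_map.shape[1], -1).permute(0, 2, 1)    # (B, N, F)
hr_intensity = decoder(sampled, coords)            # (B, N, 1)
```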
Supervised Deep-Learning (DL)-based reconstruction algorithms have shown state-of-the-art results for highly-undersampled dynamic Magnetic Resonance Imaging (MRI) reconstruction. However, the requirement of excessive high-quality ground-truth data hinders their applications due to the generalization problem. Recently, Implicit Neural Representation (INR) has appeared as a powerful DL-based tool for solving the inverse problem by characterizing the attributes of a signal as a continuous function of corresponding coordinates in an unsupervised manner. In this work, we proposed an INR-based method to improve dynamic MRI reconstruction from highly undersampled k-space data, which only takes spatiotemporal coordinates as inputs. Specifically, the proposed INR represents the dynamic MRI images as an implicit function and encodes them into neural networks. The weights of the network are learned from sparsely-acquired (k, t)-space data itself only, without external training datasets or prior images. Benefiting from the strong implicit continuity regularization of INR together with explicit regularization for low-rankness and sparsity, our proposed method outperforms the compared scan-specific methods at various acceleration factors. E.g., experiments on retrospective cardiac cine datasets show an improvement of 5.5 ~ 7.1 dB in PSNR for extremely high accelerations (up to 41.6-fold). The high-quality and inner continuity of the images provided by INR has great potential to further improve the spatiotemporal resolution of dynamic MRI, without the need of any training data.
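A toy sketch of the scan-specific training loop, assuming single-coil Cartesian sampling with a 2D FFT forward model; the k-space data, masks, and network size are random placeholders, and the explicit low-rank and sparsity terms mentioned above are only indicated in a comment:

```python
import torch
import torch.nn as nn

# Implicit function: (x, y, t) -> complex image value (real and imaginary parts).
net = nn.Sequential(nn.Linear(3, 256), nn.ReLU(),
                    nn.Linear(256, 256), nn.ReLU(),
                    nn.Linear(256, 2))

H = W = 64
T = 8
ys, xs = torch.meshgrid(torch.linspace(-1, 1, H), torch.linspace(-1, 1, W), indexing='ij')

# Placeholder acquisition: per-frame undersampling masks and measured k-space.
masks = torch.rand(T, H, W) < 0.15
kdata = torch.randn(T, H, W, dtype=torch.complex64) * masks

opt = torch.optim.Adam(net.parameters(), lr=1e-4)
for step in range(1000):
    opt.zero_grad()
    loss = 0.0
    for t in range(T):
        coords = torch.stack([xs, ys, torch.full_like(xs, 2 * t / (T - 1) - 1)], dim=-1)
        out = net(coords.reshape(-1, 3)).reshape(H, W, 2)
        frame = torch.complex(out[..., 0], out[..., 1])
        k_pred = torch.fft.fft2(frame)             # forward model: 2D FFT
        loss = loss + torch.mean(torch.abs((k_pred - kdata[t])[masks[t]]) ** 2)
    # explicit low-rank and sparsity penalties on the frame stack are omitted here
    loss.backward()
    opt.step()
```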
Reconstructing lung cone-beam computed tomography (CBCT) under respiratory motion is a long-standing challenge. This work goes a step further to address a challenging setting: reconstructing multi-phase lung images from only a single 3D CBCT acquisition. To this end, we introduce REGAS, a REspiratory-GAted Synthesis of views. REGAS proposes a self-supervised method to synthesize the undersampled tomographic views and mitigate aliasing artifacts in the reconstructed images. This method allows a better estimation of the inter-phase deformation vector fields (DVFs), which are in turn used to enhance the reconstruction quality of the direct observations without synthesis. To address the large memory cost of deep neural networks on high-resolution 4D data, REGAS introduces a novel Ray Path Transformation (RPT) that allows distributed, differentiable forward projections. REGAS requires no additional measurements such as prior scans, air-flow volume, or breathing velocity. Our extensive experiments show that REGAS significantly outperforms comparable methods in quantitative metrics and visual quality.
Cone beam computed tomography (CBCT) has been widely used in clinical practice, especially in dental clinics, while the radiation dose of X-rays when capturing has been a long concern in CBCT imaging. Several research works have been proposed to reconstruct high-quality CBCT images from sparse-view 2D projections, but the current state-of-the-arts suffer from artifacts and the lack of fine details. In this paper, we propose SNAF for sparse-view CBCT reconstruction by learning the neural attenuation fields, where we have invented a novel view augmentation strategy to overcome the challenges introduced by insufficient data from sparse input views. Our approach achieves superior performance in terms of high reconstruction quality (30+ PSNR) with only 20 input views (25 times fewer than clinical collections), which outperforms the state-of-the-arts. We have further conducted comprehensive experiments and ablation analysis to validate the effectiveness of our approach.
We propose a deep learning method for three-dimensional reconstruction in low-dose helical cone-beam computed tomography. We reconstruct the volume directly, i.e., not from 2D slices, guaranteeing consistency along all axes. In a crucial step beyond prior work, we train our model in a self-supervised manner in the projection domain using noisy 2D projection data, without relying on 3D reference data or the output of a reference reconstruction method. This means the fidelity of our results is not limited by the quality and availability of such data. We evaluate our method on real helical cone-beam projections and simulated phantoms. Our reconstructions are sharper and less noisy than those of previous methods, and several decibels better in quantitative PSNR measurements. When applied to full-dose data, our method produces high-quality results orders of magnitude faster than iterative techniques.
Computed tomography (CT) reconstruction from X-ray projections acquired within a limited angular range is challenging, especially when the angular range is very small. Both analytical and iterative models need more projections for effective modeling. Deep learning methods have become prevalent due to their excellent reconstruction performance, but such success is mainly confined to the same dataset and does not generalize to datasets with different distributions. Here, we propose an extrapolation network for limited-angle CT reconstruction by introducing a sinogram extrapolation module, which is theoretically justified. The module complements extra sinogram information and boosts model generalizability. Extensive experimental results show that our reconstruction model achieves state-of-the-art performance on the NIH-AAPM dataset, comparable to existing approaches. More importantly, we show that using such a sinogram extrapolation module significantly improves the generalization capability of the model on unseen datasets (e.g., the COVID-19 and LIDC datasets) compared with existing approaches.
Deep-learning-based solutions are being successfully implemented for a wide variety of applications. Most notably, clinical use cases have attracted increasing interest and have been the main driver behind some of the cutting-edge data-driven algorithms proposed in recent years. For applications such as sparse-view reconstruction, where the amount of measured data is small in order to keep the acquisition time short and the radiation dose low, the reduction of streaking artifacts has prompted the development of data-driven denoising algorithms, whose main goal is to obtain diagnostically viable images from only a subset of the full-scan data. We propose WNet, a data-driven dual-domain denoising model that contains a trainable reconstruction layer for sparse-view denoising. Two encoder-decoder networks perform denoising in the sinogram and reconstruction domains simultaneously, while a third layer implementing the filtered back-projection algorithm is sandwiched between the first two and takes care of the reconstruction operation. We investigate the performance of the network on sparse-view chest CT scans and highlight the added benefit of having a trainable reconstruction layer over a more conventional fixed one. We train and test our network on two clinically relevant datasets, and we compare the obtained results with three different types of sparse-view CT denoising and reconstruction algorithms.
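A rough sketch of the sandwich structure: a residual CNN denoises the sinogram, a differentiable (but here fixed, non-trainable) filtered back-projection layer reconstructs, and a second CNN denoises the image. The geometry, network sizes, and the random sinogram below are simplified assumptions rather than WNet's actual architecture, whose reconstruction layer is itself trainable:

```python
import torch
import torch.nn as nn

class ConvDenoiser(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(32, 1, 3, padding=1))
    def forward(self, x):
        return x + self.net(x)                     # residual denoising

def fbp(sino, angles, n):
    """Differentiable parallel-beam FBP: ramp filter, then back-projection."""
    n_det = sino.shape[-1]
    ramp = torch.abs(torch.fft.fftfreq(n_det, device=sino.device))
    filtered = torch.fft.ifft(torch.fft.fft(sino, dim=-1) * ramp, dim=-1).real
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, n), torch.linspace(-1, 1, n), indexing='ij')
    img = torch.zeros(n, n)
    for a, theta in enumerate(angles):
        t = xs * torch.cos(theta) + ys * torch.sin(theta)   # detector coordinate per pixel
        idx = (t + 1) / 2 * (n_det - 1)
        i0 = idx.floor().long().clamp(0, n_det - 2)
        w = (idx - i0.float()).clamp(0, 1)
        img = img + (1 - w) * filtered[a, i0] + w * filtered[a, i0 + 1]
    return img * torch.pi / len(angles)

sino_net, img_net = ConvDenoiser(), ConvDenoiser()
angles = torch.linspace(0, torch.pi, 65)[:-1]
sparse_sino = torch.rand(64, 128)                  # placeholder sparse-view sinogram

s = sino_net(sparse_sino[None, None])[0, 0]        # 1) sinogram-domain denoiser
x = fbp(s, angles, n=128)                          # 2) reconstruction layer
y = img_net(x[None, None])[0, 0]                   # 3) image-domain denoiser
```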
Intensity diffraction tomography (IDT) refers to a class of optical microscopy techniques for imaging the 3D refractive index (RI) distribution of a sample from a set of 2D intensity-only measurements. Reconstruction of artifact-free RI maps is a fundamental challenge in IDT due to the loss of phase information and the missing-cone problem. Neural fields (NF) have recently emerged as a new deep learning (DL) approach for learning continuous representations of physical fields. NF uses a coordinate-based neural network to represent the field by mapping spatial coordinates to the corresponding physical quantities, in our case complex-valued refractive index values. We present DeCAF as the first NF-based IDT method that can learn a high-quality continuous representation of the RI volume from intensity-only and limited-angle measurements. The representation in DeCAF is learned directly from the measurements of the test sample by using the IDT forward model, without any ground-truth RI maps. We qualitatively and quantitatively evaluate DeCAF on simulated and experimental biological samples. Our results show that DeCAF can generate high-contrast and artifact-free RI maps, leading to up to a 2.1-fold reduction in MSE over existing methods.
Computed tomography (CT) uses X-ray measurements taken from sensors around the body to generate tomographic images of the human body. If the X-ray data are adequately sampled and of high quality, conventional reconstruction algorithms can be used; however, issues such as reducing the dose to the patient, or geometric limitations on data acquisition, can lead to low-quality or incomplete data. Images reconstructed from such data using conventional methods are of poor quality due to noise and other artifacts. The aim of this study is to train a single neural network to reconstruct high-quality CT images from noisy or incomplete CT scan data, including low-dose, sparse-view, and limited-angle scenarios. To accomplish this task, we train a generative adversarial network (GAN) as a signal prior, to be used in conjunction with the iterative simultaneous algebraic reconstruction technique (SART) for CT data. The network includes self-attention blocks to model long-range dependencies in the data. We compare our self-attention GAN for CT image reconstruction with several state-of-the-art approaches, including a denoising cycle GAN, Circle GAN, and a total variation superiorization-based algorithm. Our approach is shown to have comparable overall performance to Circle GAN, while outperforming the other two approaches.
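A minimal sketch of a SAGAN-style self-attention block of the kind the abstract mentions for modeling long-range dependencies; the channel counts and the 1/8 bottleneck are conventional choices, not details taken from the paper:

```python
import torch
import torch.nn as nn

class SelfAttention2d(nn.Module):
    """Self-attention over the spatial positions of a feature map (SAGAN-style)."""
    def __init__(self, ch):
        super().__init__()
        self.q = nn.Conv2d(ch, ch // 8, 1)
        self.k = nn.Conv2d(ch, ch // 8, 1)
        self.v = nn.Conv2d(ch, ch, 1)
        self.gamma = nn.Parameter(torch.zeros(1))              # learned blending weight

    def forward(self, x):
        B, C, H, W = x.shape
        q = self.q(x).reshape(B, -1, H * W).permute(0, 2, 1)   # (B, HW, C//8)
        k = self.k(x).reshape(B, -1, H * W)                    # (B, C//8, HW)
        attn = torch.softmax(q @ k, dim=-1)                    # (B, HW, HW) attention map
        v = self.v(x).reshape(B, C, H * W)                     # (B, C, HW)
        out = (v @ attn.permute(0, 2, 1)).reshape(B, C, H, W)
        return x + self.gamma * out

feats = torch.rand(2, 64, 32, 32)
attended = SelfAttention2d(64)(feats)
```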
Low-dose computed tomography (CT) plays a significant role in reducing the radiation risk in clinical applications. However, lowering the radiation dose will significantly degrade the image quality. With the rapid development and wide application of deep learning, it has brought new directions for the development of low-dose CT imaging algorithms. Therefore, we propose a fully unsupervised one sample diffusion model (OSDM) in the projection domain for low-dose CT reconstruction. To extract sufficient prior information from a single sample, the Hankel matrix formulation is employed. Besides, the penalized weighted least-squares and total variation are introduced to achieve superior image quality. Specifically, we first train a score-based generative model on one sinogram by extracting a great number of tensors from the structural-Hankel matrix as the network input to capture the prior distribution. Then, at the inference stage, the stochastic differential equation solver and data consistency step are performed iteratively to obtain the sinogram data. Finally, the final image is obtained through the filtered back-projection algorithm. The reconstructed results approach the normal-dose counterparts. The results prove that OSDM is a practical and effective model for reducing the artifacts and preserving the image quality.
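A small sketch of the tensor-extraction step described above: sliding windows over the sinogram rows form a simplified structural-Hankel matrix, from which square blocks are cropped as network inputs. The window sizes and counts are illustrative, and the score-model training itself is not shown:

```python
import torch

def hankel_from_rows(sino, win):
    """Every length-`win` sliding window of each sinogram row becomes one matrix row."""
    # sino: (n_views, n_det) -> (n_views * (n_det - win + 1), win)
    return sino.unfold(dimension=1, size=win, step=1).reshape(-1, win)

sino = torch.rand(60, 512)                     # placeholder low-dose sinogram
H = hankel_from_rows(sino, win=64)             # (60 * 449, 64)

# Training tensors for the score network: random square crops of the Hankel structure.
idx = torch.randint(0, H.shape[0] - 64, (32,))
tensors = torch.stack([H[i:i + 64] for i in idx])   # (32, 64, 64) network inputs
```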
Deep-learning-based tomographic image reconstruction has attracted much attention in recent years. Sparse-view data reconstruction is one of the typical underdetermined inverse problems, and how to reconstruct high-quality CT images from dozens of projections remains a challenge in practice. To address this challenge, in this paper we propose a Multi-domain Integrative Swin Transformer network (MIST-net). First, with a flexible network architecture, the proposed MIST-net incorporates lavish domain features from the data, residual-data, image, and residual-image domains. Here, the residual-data and residual-image domain network components can be considered data-consistency modules that eliminate interpolation errors in the residual data and image domains and further preserve image details. Second, to detect image features and further protect image edges, a trainable Sobel filter is incorporated into the network to improve the encoding-decoding ability. Third, building on the classical Swin Transformer, we further design a high-quality reconstruction transformer (i.e., Recformer) to improve reconstruction performance. The Recformer inherits the power of the Swin Transformer to capture global and local features of the reconstructed image. Experiments on a numerical dataset with 48 views demonstrate that our proposed MIST-net provides higher-quality reconstructed images, with small-feature recovery and edge protection, than other competitors, including advanced unrolled networks. Quantitative results show that our MIST-net also obtains the best performance. The trained network was transferred to a real cardiac CT dataset with 48 views, and the reconstruction results further validate the advantages of our MIST-net and demonstrate its good robustness in clinical applications.
During computed tomography (CT) imaging, metallic implants within patients always cause harmful artifacts, which negatively affect the visual quality of the reconstructed CT images and the subsequent clinical diagnosis. For the metal artifact reduction (MAR) task, deep-learning-based methods have achieved promising performance. However, most of them share two main common limitations: 1) the CT physical imaging geometry constraint is not fully incorporated into the deep network structure; 2) the whole framework has weak interpretability for the specific MAR task, making it difficult to evaluate the role of each network module. To alleviate these issues, in this paper we construct a novel interpretable dual-domain network, called InDuDoNet+, into which the CT imaging process is finely embedded. Specifically, we derive a joint spatial and Radon domain reconstruction model and propose an optimization algorithm with only simple operators to solve it. By unfolding the iterative steps involved in the proposed algorithm into corresponding network modules, we easily build InDuDoNet+ with clear interpretability. Furthermore, we analyze the CT values among different tissues and merge the resulting prior observations into a prior network of InDuDoNet+, which significantly improves its generalization performance. Comprehensive experiments on synthesized and clinical data substantiate the superiority of the proposed method as well as its generalization performance beyond current state-of-the-art (SOTA) MAR methods. Code is available at \url{https://github.com/hongwang01/InDuDoNet_plus}.
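The unfolding idea in general form: each iteration of a simple proximal-gradient scheme becomes a network module with its own learned step size and learned prior step. This is a generic algorithm-unrolling sketch under assumed operators `A`/`AT`, not InDuDoNet+'s dual-domain algorithm:

```python
import torch
import torch.nn as nn

class ProxNet(nn.Module):
    """Small CNN playing the role of a learned proximal (prior) step."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(32, 1, 3, padding=1))
    def forward(self, x):
        return x + self.net(x)

class UnrolledNet(nn.Module):
    """Unfolds x <- prox(x - step * A^T(Ax - y)) into a fixed number of modules."""
    def __init__(self, A, AT, n_iters=5):
        super().__init__()
        self.A, self.AT = A, AT
        self.steps = nn.Parameter(0.1 * torch.ones(n_iters))     # learned step sizes
        self.proxes = nn.ModuleList([ProxNet() for _ in range(n_iters)])

    def forward(self, y, x0):
        x = x0
        for step, prox in zip(self.steps, self.proxes):
            x = prox(x - step * self.AT(self.A(x) - y))          # data consistency + prior
        return x

# Toy usage with a symmetric blur standing in for the physics operator (A^T == A).
blur = nn.Conv2d(1, 1, 5, padding=2, bias=False)
blur.weight.data.fill_(1.0 / 25)
blur.weight.requires_grad_(False)
model = UnrolledNet(A=blur, AT=blur, n_iters=5)
recon = model(y=torch.rand(2, 1, 64, 64), x0=torch.zeros(2, 1, 64, 64))
```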
Hyperspectral image (HSI) super-resolution without an additional auxiliary image remains a constant challenge due to its high-dimensional spectral patterns, where learning an effective spatial and spectral representation is a fundamental issue. Recently, implicit neural representations (INRs) have been making strides as a novel and effective representation, especially for reconstruction tasks. Therefore, in this work we propose a novel HSI reconstruction model based on INR, which represents an HSI by a continuous function that maps spatial coordinates to their corresponding spectral radiance values. In particular, as a specific implementation of INR, the parameters of the parametric model are predicted by a hypernetwork that operates on the features extracted by a convolutional network, which makes the continuous function map spatial coordinates to pixel values in a content-aware manner. Moreover, periodic spatial encoding is deeply integrated with the reconstruction procedure, which enables our model to recover more high-frequency details. To verify the efficacy of our model, we conduct experiments on three HSI datasets (CAVE, NUS, and NTIRE2018). The experimental results show that the proposed model can achieve competitive reconstruction performance compared with state-of-the-art methods. In addition, we provide an ablation study on the effect of the individual components of our model. We hope this paper can serve as an efficient reference for future research.
In this work, we propose a novel image reconstruction framework that directly learns a neural implicit representation in k-space for ECG-triggered non-Cartesian Cardiac Magnetic Resonance Imaging (CMR). While existing methods bin acquired data from neighboring time points to reconstruct one phase of the cardiac motion, our framework allows for a continuous, binning-free, and subject-specific k-space representation. We assign a unique coordinate that consists of time, coil index, and frequency domain location to each sampled k-space point. We then learn the subject-specific mapping from these unique coordinates to k-space intensities using a multi-layer perceptron with frequency domain regularization. During inference, we obtain a complete k-space for Cartesian coordinates and an arbitrary temporal resolution. A simple inverse Fourier transform recovers the image, eliminating the need for density compensation and costly non-uniform Fourier transforms for non-Cartesian data. This novel imaging framework was tested on 42 radially sampled datasets from 6 subjects. The proposed method outperforms other techniques qualitatively and quantitatively using data from four and one heartbeat(s) and 30 cardiac phases. Our results for one heartbeat reconstruction of 50 cardiac phases show improved artifact removal and spatio-temporal resolution, leveraging the potential for real-time CMR.
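A toy sketch of the coordinate-to-k-space idea: each sampled point gets a (time, coil, kx, ky) coordinate, an MLP regresses complex k-space intensities, and at inference a Cartesian grid is queried and inverted with a plain inverse FFT. The acquisition values, coil encoding, network size, and training loop below are assumptions for illustration (the paper's frequency-domain regularization is omitted):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

n_coils = 8
# MLP: (t, one-hot coil, kx, ky) -> complex k-space intensity (real, imag).
net = nn.Sequential(nn.Linear(3 + n_coils, 256), nn.ReLU(),
                    nn.Linear(256, 256), nn.ReLU(),
                    nn.Linear(256, 2))

def encode(t, coil, kx, ky):
    onehot = F.one_hot(coil, n_coils).float()
    return torch.cat([torch.stack([t, kx, ky], dim=-1), onehot], dim=-1)

# Placeholder radial acquisition: times, coil indices, k-space locations, values.
N = 10000
t, coil = torch.rand(N), torch.randint(0, n_coils, (N,))
kx, ky = torch.rand(N) * 2 - 1, torch.rand(N) * 2 - 1
kval = torch.randn(N, 2)

opt = torch.optim.Adam(net.parameters(), lr=1e-4)
for step in range(1000):
    opt.zero_grad()
    loss = torch.mean((net(encode(t, coil, kx, ky)) - kval) ** 2)
    loss.backward()
    opt.step()

# Inference: query a full Cartesian grid at any requested time, then a plain
# inverse FFT per coil recovers the image without density compensation.
H = W = 128
ky_g, kx_g = torch.meshgrid(torch.linspace(-1, 1, H), torch.linspace(-1, 1, W), indexing='ij')
coords = encode(torch.full((H * W,), 0.3),             # arbitrary cardiac phase
                torch.zeros(H * W, dtype=torch.long),  # coil 0
                kx_g.reshape(-1), ky_g.reshape(-1))
k_full = net(coords).reshape(H, W, 2)
img = torch.fft.ifft2(torch.complex(k_full[..., 0], k_full[..., 1]))
```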
The deep image prior was recently introduced for image reconstruction. It represents the image to be recovered as the output of a deep convolutional neural network, and learns the network's parameters such that the output fits the corrupted observation. Despite its impressive reconstruction properties, the approach is slow compared with learned or traditional reconstruction techniques. Our work develops a two-stage learning paradigm to address this computational challenge: (i) we perform a supervised pretraining of the network on a synthetic dataset; (ii) we fine-tune the network's parameters to adapt to the target reconstruction. We showcase that pretraining improves the subsequent reconstruction from real-measured micro computed tomography data of biological specimens. Code and additional experimental materials are available at https://educateddip.github.io/docs.educated_deep_image_prior/.
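A compact sketch of the two-stage paradigm, with a tiny CNN standing in for the actual architecture and a simple blur standing in for the CT forward operator; the synthetic pairs and the measurement are random placeholders:

```python
import torch
import torch.nn as nn

net = nn.Sequential(nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(32, 1, 3, padding=1))      # tiny stand-in for the U-Net

# Stage (i): supervised pretraining on synthetic (input, ground truth) pairs.
synthetic = [(torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64)) for _ in range(100)]
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for epoch in range(10):
    for noisy, clean in synthetic:
        opt.zero_grad()
        loss = torch.mean((net(noisy) - clean) ** 2)
        loss.backward()
        opt.step()

# Stage (ii): DIP-style fine-tuning on the target measurement alone. `A` is a
# placeholder forward operator (simple blurring); the real setting uses the CT
# projection operator and the measured sinogram.
A = nn.Conv2d(1, 1, 9, padding=4, bias=False)
A.weight.data.fill_(1.0 / 81)
A.weight.requires_grad_(False)
measurement = torch.rand(1, 1, 64, 64)                   # placeholder observed data
z = torch.rand(1, 1, 64, 64)                             # fixed network input
opt = torch.optim.Adam(net.parameters(), lr=1e-4)
for step in range(2000):
    opt.zero_grad()
    loss = torch.mean((A(net(z)) - measurement) ** 2)    # fit the corrupted observation
    loss.backward()
    opt.step()
reconstruction = net(z).detach()
```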
In practical applications of computed tomography (CT) imaging, the projection data may be acquired within a limited angular range and be corrupted by noise owing to the limitations of the scanning conditions. The noisy, incomplete projection data lead to the ill-posedness of the inverse problem. In this work, we theoretically verify that the low-resolution reconstruction problem has better numerical stability than the high-resolution one. Based on this, a novel CT reconstruction model with a low-resolution image prior is proposed to exploit the low-resolution image and improve reconstruction quality. More specifically, we build the low-resolution reconstruction problem on the down-sampled projection data and take the reconstructed low-resolution image as prior knowledge for the original limited-angle CT problem. We solve the constrained minimization problem by the alternating direction method, with all subproblems approximated by convolutional neural networks. Numerical experiments demonstrate that our dual-resolution network outperforms both variational methods and popular learning-based reconstruction methods on noisy limited-angle reconstruction problems.
Deep neural networks for medical image reconstruction are traditionally trained using high-quality ground-truth images as training targets. Recent work on Noise2Noise (N2N) has shown the potential of using multiple noisy measurements of the same object as an alternative to having a ground truth. However, existing N2N-based methods are not suitable for learning from the measurements of an object undergoing non-identity deformations. This paper addresses this issue by proposing the deformation-compensated learning (DeCoLearn) method, which trains deep reconstruction networks while compensating for object deformation. A key component of DeCoLearn is a deep registration module that is jointly trained with the deep reconstruction network without any ground-truth supervision. We validate DeCoLearn on both simulated and experimentally collected magnetic resonance imaging (MRI) data and show that it significantly improves imaging quality.
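A toy sketch of the joint training described above: a reconstruction CNN and a registration CNN are optimized together, with the predicted displacement field warping one reconstruction onto the other measurement before an N2N-style loss. The shapes, networks, and random image pairs are placeholder assumptions (the paper's setting involves undersampled MRI measurements and a forward operator):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

recon_net = nn.Sequential(nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                          nn.Conv2d(32, 1, 3, padding=1))
# Registration module: takes two images, predicts a dense 2-channel displacement field.
reg_net = nn.Sequential(nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
                        nn.Conv2d(32, 2, 3, padding=1))

def warp(img, flow):
    """Warp `img` with the predicted displacement field via grid sampling."""
    B, _, H, W = img.shape
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, H), torch.linspace(-1, 1, W), indexing='ij')
    base = torch.stack([xs, ys], dim=-1).unsqueeze(0).expand(B, -1, -1, -1)
    grid = base + flow.permute(0, 2, 3, 1)                 # displacements in [-1, 1] units
    return F.grid_sample(img, grid, align_corners=True)

# Placeholder pair of noisy measurements of the same (deformed) object.
view_a, view_b = torch.rand(4, 1, 64, 64), torch.rand(4, 1, 64, 64)

opt = torch.optim.Adam(list(recon_net.parameters()) + list(reg_net.parameters()), lr=1e-4)
for step in range(1000):
    opt.zero_grad()
    xa, xb = recon_net(view_a), recon_net(view_b)
    flow = reg_net(torch.cat([xa, xb], dim=1))             # register xa onto xb
    loss = torch.mean((warp(xa, flow) - view_b) ** 2)      # N2N-style loss after warping
    loss.backward()
    opt.step()
```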