Neural Radiance Fields (NeRF) have received widespread attention in Sparse-View Computed Tomography (SVCT) reconstruction as a self-supervised deep learning framework. NeRF-based SVCT methods represent the desired CT image as a continuous function of spatial coordinates and train a Multi-Layer Perceptron (MLP) to learn this function by minimizing a loss on the SV sinogram. Benefiting from the continuous representation provided by NeRF, high-quality CT images can be reconstructed. However, existing NeRF-based SVCT methods strictly assume that there is no relative motion during CT acquisition, because they require \textit{accurate} projection poses to model the X-rays that produce the SV sinogram. These methods therefore suffer severe performance drops on real SVCT imaging with motion. In this work, we propose a self-calibrating neural field that recovers an artifact-free image from a rigid motion-corrupted SV sinogram without using any external data. Specifically, we parametrize the inaccurate projection poses caused by rigid motion as trainable variables and jointly optimize these pose variables and the MLP. We conduct numerical experiments on a public CT image dataset. The results indicate that our model significantly outperforms two representative NeRF-based methods on SVCT reconstruction tasks with four different levels of rigid motion.
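The core idea, descending jointly on pose variables and network weights, can be illustrated with a deliberately small NumPy sketch. Here a plain pixel vector stands in for the MLP, per-view detector shifts stand in for full rigid poses, and finite differences replace autograd; all names and sizes are illustrative, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
n_views, n_det = 16, 32
f_true = np.sin(np.linspace(0, np.pi, n_det))     # toy 1D "image"
shifts_true = rng.uniform(-1.5, 1.5, n_views)     # unknown rigid motion per view

def shift(p, s):
    """Shift a 1D profile by a fractional number of detector bins."""
    x = np.arange(p.size)
    return np.interp(x - s, x, p, left=0.0, right=0.0)

sino = np.stack([shift(f_true, s) for s in shifts_true])  # motion-corrupted data

def loss(f, s):
    pred = np.stack([shift(f, si) for si in s])
    return np.mean((pred - sino) ** 2)

# Jointly optimize the image estimate and the trainable pose variables.
f_est, s_est = np.zeros(n_det), np.zeros(n_views)
lr_f, lr_s, eps = 0.5, 0.05, 1e-4
loss0 = loss(f_est, s_est)
for _ in range(100):
    base = loss(f_est, s_est)
    g_f = np.array([(loss(f_est + eps * np.eye(n_det)[j], s_est) - base) / eps
                    for j in range(n_det)])
    g_s = np.array([(loss(f_est, s_est + eps * np.eye(n_views)[i]) - base) / eps
                    for i in range(n_views)])
    f_est -= lr_f * g_f
    s_est -= lr_s * g_s
print(loss(f_est, s_est) < loss0)  # joint fit reduces the sinogram misfit
```

In the actual method the same loop would run with automatic differentiation, a coordinate MLP, and full projection geometry; the toy only shows that the measurement misfit decreases once poses are trainable.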
In the present work, we propose a Self-supervised COordinate Projection nEtwork (SCOPE) to reconstruct an artifact-free CT image from a single SV sinogram by solving the inverse tomographic imaging problem. Compared with recent related works that solve similar problems with implicit neural representation (INR) networks, our essential contribution is an effective and simple re-projection strategy that pushes tomographic image reconstruction quality toward that of supervised deep-learning CT reconstruction methods. The proposed strategy is inspired by the simple relationship between linear algebra and inverse problems. To solve an underdetermined system of linear equations, we first introduce an INR to constrain the solution space via an image-continuity prior and obtain a coarse solution. Second, we propose generating a dense-view sinogram to improve the rank of the linear system and produce a more stable solution space for the CT image. Our experimental results demonstrate that the re-projection strategy significantly improves image reconstruction quality (by at least +3 dB in PSNR). Moreover, we integrate the recent hash encoding into our SCOPE model, which greatly accelerates model training. Finally, we evaluate SCOPE on parallel- and fan-beam X-ray SVCT reconstruction tasks. The experimental results show that the proposed SCOPE model outperforms two INR-based methods and two popular supervised DL methods.
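The linear-algebra intuition behind the re-projection strategy can be checked directly: synthesizing extra views from a coarse solution raises the rank of the system being solved. The sketch below uses random matrices in place of projection operators and minimum-norm least squares in place of the INR fit, so it is an analogy rather than the SCOPE pipeline:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 64                                   # number of unknown image pixels
A_sparse = rng.standard_normal((16, n))  # sparse-view rays: underdetermined
x_true = rng.standard_normal(n)
b = A_sparse @ x_true                    # measured SV sinogram

# Step 1: a regularized coarse solution (minimum-norm least squares here
# stands in for the INR fit with its image-continuity prior).
x_coarse = np.linalg.lstsq(A_sparse, b, rcond=None)[0]

# Step 2: re-project the coarse solution along denser view directions and
# solve the augmented system, which has a much higher rank.
A_dense = rng.standard_normal((128, n))
b_dense = A_dense @ x_coarse             # synthesized dense-view sinogram
A_aug = np.vstack([A_sparse, A_dense])

print(np.linalg.matrix_rank(A_sparse), np.linalg.matrix_rank(A_aug))  # 16 64
```

The augmented system is full rank, so its least-squares solution is far better constrained, which is the stabilizing effect the strategy relies on.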
This paper proposes a novel and fast self-supervised solution for sparse-view CBCT (cone-beam computed tomography) reconstruction that requires no external training data. Specifically, the desired attenuation coefficients are represented as a continuous function of 3D spatial coordinates, parameterized by a fully connected deep neural network. We synthesize projections discretely and train the network by minimizing the error between real and synthesized projections. A learning-based encoder employing hash coding is adopted to help the network capture high-frequency details. This encoder outperforms the commonly used frequency-domain encoder in terms of both performance and efficiency, because it exploits the smoothness and sparsity of human organs. Experiments have been conducted on human organ and phantom datasets. The proposed method achieves state-of-the-art accuracy and takes a reasonably short computation time.
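A minimal version of such a hash-based encoder can be sketched as follows. The table size, feature width, and per-level resolutions are illustrative, and the sketch looks up only the enclosing grid cell rather than interpolating the eight cell corners as a full implementation would:

```python
import numpy as np

rng = np.random.default_rng(0)
n_levels, table_size, feat_dim = 4, 2 ** 10, 2
tables = rng.standard_normal((n_levels, table_size, feat_dim)) * 1e-2
primes = np.array([1, 2654435761, 805459861], dtype=np.uint64)

def encode(xyz):
    """Map points in [0,1)^3 to concatenated per-level hashed grid features."""
    feats = []
    for level in range(n_levels):
        res = 16 * 2 ** level                          # grid resolution per level
        cell = np.floor(xyz * res).astype(np.uint64)   # enclosing grid cell
        h = np.bitwise_xor.reduce(cell * primes, axis=-1) % table_size
        feats.append(tables[level, h.astype(int)])
    return np.concatenate(feats, axis=-1)

pts = rng.random((5, 3))
print(encode(pts).shape)  # (5, 8) = (points, n_levels * feat_dim)
```

In training, the table entries are optimized together with the downstream MLP; because occupied anatomy is sparse, most table slots are rarely contested, which is the efficiency argument the abstract makes.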
Computed tomography (CT) reconstruction from X-ray projections acquired within a limited angular range is challenging, especially when the angular range is extremely small. Both analytical and iterative models require more projections to be effective. Deep learning methods have become prevalent due to their excellent reconstruction performance, but such success is mainly confined to a single dataset and does not generalize across datasets with different distributions. Herein, we propose an extrapolation network for limited-angle CT reconstruction by introducing a sinogram-extrapolation module, which is theoretically justified. The module supplements extra sinogram information and boosts model generalizability. Extensive experimental results show that our reconstruction model achieves state-of-the-art performance on the NIH-AAPM dataset, comparable to existing methods. More importantly, we show that using this sinogram-extrapolation module significantly improves the generalization capability of the model on unseen datasets (e.g., the COVID-19 and LIDC datasets) compared with existing approaches.
Neural Radiance Fields (NeRF) have recently gained considerable attention for 3D scene reconstruction and novel view synthesis due to their remarkable synthesis quality. However, image blur caused by defocus or motion, which often occurs when capturing scenes in the wild, significantly degrades reconstruction quality. To address this issue, we propose Deblur-NeRF, the first method that can recover a sharp NeRF from blurry input. We adopt an analysis-by-synthesis approach that reconstructs blurry views by simulating the blurring process, thus making NeRF robust to blurry inputs. The core of this simulation is a novel Deformable Sparse Kernel (DSK) module that models spatially varying blur kernels by deforming a canonical sparse kernel at each spatial location. The ray origin of each kernel point is jointly optimized, inspired by the physical blurring process. Parameterized as an MLP, the module is able to generalize to various blur types. Jointly optimizing the NeRF and the DSK module allows us to restore a sharp NeRF. We demonstrate that our method can handle both camera motion blur and defocus blur, the two most common types of blur in real scenes. Evaluation results on both synthetic and real-world data show that our method outperforms several baselines. The synthetic and real datasets, along with the source code, will be made publicly available to facilitate future research.
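The blur model can be caricatured in a few lines: a blurry pixel is a convex combination of sharp renderings at a handful of deformed sample positions. In Deblur-NeRF the offsets and weights come from an optimized MLP; in this sketch they are fixed random values, and a closed-form pattern stands in for the sharp rendering:

```python
import numpy as np

rng = np.random.default_rng(0)
H, W, K = 32, 32, 5                      # image size and number of kernel points

def sharp(y, x):
    """Stand-in for rendering a sharp color at continuous coordinates."""
    return np.sin(0.3 * x) * np.cos(0.2 * y)

offsets = rng.normal(scale=1.5, size=(K, 2))      # per-point ray deformations
logits = rng.standard_normal(K)
weights = np.exp(logits) / np.exp(logits).sum()   # kernel weights, sum to 1

ys, xs = np.mgrid[0:H, 0:W].astype(float)
blurry = sum(w * sharp(ys + dy, xs + dx) for (dy, dx), w in zip(offsets, weights))
print(blurry.shape)  # (32, 32)
```

Training then compares such synthesized blurry views to the captured images, so gradients flow into both the sharp scene representation and the kernel parameters.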
Incorporating computed tomography (CT) reconstruction operators into differentiable pipelines has proven beneficial in many applications. Such approaches usually focus on the projection data and keep the acquisition geometry fixed. However, precise knowledge of the acquisition geometry is essential for high-quality reconstruction results. In this paper, the differentiable formulation of fan-beam CT reconstruction is extended to the acquisition geometry. This allows gradient information to be propagated from a loss function on the reconstructed image into the geometry parameters. As a proof-of-concept experiment, this idea is applied to rigid motion compensation. The cost function is parameterized by a trained neural network that regresses an image quality metric from the motion-affected reconstruction alone. Using the proposed method, we are the first to optimize such an autofocus-inspired algorithm based on analytical gradients. The algorithm achieves a 35.5 % reduction in MSE and a 12.6 % improvement in SSIM over the motion-affected reconstruction. Beyond motion compensation, we see further use cases for our differentiable method in scanner calibration and hybrid techniques employing deep models.
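The autofocus idea, descending an image-quality cost over a geometry parameter, can be imitated in one dimension. The paper derives analytical gradients through the reconstruction operator and learns the quality metric; the sketch below substitutes numerical gradients and a simple gradient-energy sharpness measure on a toy two-half-scan "reconstruction":

```python
import numpy as np

x = np.linspace(-1, 1, 256)

def reconstruct(shift):
    """Toy 'reconstruction': two half-scans blur an edge unless aligned."""
    edge = lambda t: 1.0 / (1.0 + np.exp(-5.0 * t))
    return 0.5 * (edge(x) + edge(x - shift))

def cost(shift):
    img = reconstruct(shift)
    return -np.sum(np.diff(img) ** 2)      # autofocus: sharper edge, lower cost

s, lr, eps = 0.3, 20.0, 1e-4
for _ in range(300):
    g = (cost(s + eps) - cost(s - eps)) / (2 * eps)   # numerical geometry gradient
    s -= lr * g
print(abs(s))  # the residual misalignment, driven toward zero
```

Descending the sharpness cost pulls the geometry parameter back to the aligned value; the paper's contribution is doing this with analytic instead of numerical gradients and a learned instead of handcrafted metric.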
Cone beam computed tomography (CBCT) has been widely used in clinical practice, especially in dental clinics, while the radiation dose of X-rays during capture has long been a concern in CBCT imaging. Several research works have been proposed to reconstruct high-quality CBCT images from sparse-view 2D projections, but current state-of-the-art methods suffer from artifacts and a lack of fine details. In this paper, we propose SNAF for sparse-view CBCT reconstruction by learning neural attenuation fields, where we have invented a novel view augmentation strategy to overcome the challenges introduced by insufficient data from sparse input views. Our approach achieves superior performance in terms of high reconstruction quality (30+ PSNR) with only 20 input views (25 times fewer than clinical collections), outperforming the state of the art. We have further conducted comprehensive experiments and ablation analysis to validate the effectiveness of our approach.
Neural Radiance Fields (NeRF) have exhibited outstanding three-dimensional (3D) reconstruction quality via novel view synthesis from multi-view images and paired calibrated camera parameters. However, previous NeRF-based systems have been demonstrated under strictly controlled settings, with little attention paid to less ideal scenarios, including the presence of degradations such as exposure variation, illumination changes, and blur. In particular, though blur frequently occurs in real situations, NeRFs that can handle blurred images have received little attention. The few studies that have investigated NeRF for blurred images have not considered geometric and appearance consistency in 3D space, which is one of the most important factors in 3D reconstruction. This leads to inconsistency and degradation of the perceptual quality of the constructed scene. Hence, this paper proposes DP-NeRF, a novel clean NeRF framework for blurred images that is constrained by two physical priors. These priors are derived from the actual blurring process during image acquisition by the camera. DP-NeRF proposes a rigid blurring kernel to impose 3D consistency by exploiting the physical priors, and an adaptive weight proposal to refine the color composition error in consideration of the relationship between depth and blur. We present extensive experimental results for synthetic and real scenes with two types of blur: camera motion blur and defocus blur. The results demonstrate that DP-NeRF successfully improves the perceptual quality of the constructed NeRF while ensuring 3D geometric and appearance consistency. We further demonstrate the effectiveness of our model with a comprehensive ablation analysis.
We present High Dynamic Range Neural Radiance Fields (HDR-NeRF) to recover an HDR radiance field from a set of low dynamic range (LDR) views with different exposures. Using HDR-NeRF, we are able to generate both novel HDR views and novel LDR views under different exposures. The key to our method is modeling the physical imaging process, which determines how the radiance of a scene point is converted into a pixel value in an LDR image, with two implicit functions: a radiance field and a tone mapper. The radiance field encodes the scene radiance (whose values vary from 0 to +infinity) and outputs the density and radiance of a ray given the corresponding ray origin and direction. The tone mapper models the process by which a ray hitting the camera sensor becomes a pixel value. The color of a ray is predicted by feeding its radiance and the corresponding exposure time into the tone mapper. We use classical volume rendering techniques to project the output radiance, colors, and densities into HDR and LDR images, while using only the input LDR images as supervision. We collect a new forward-facing HDR dataset to evaluate the proposed method. Experimental results on synthetic and real-world scenes validate that our method can not only accurately control the exposure of synthesized views but also render views with a high dynamic range.
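The split into a radiance field and a tone mapper can be summarized with a tiny stand-in: the LDR pixel value is a smooth function of the product of scene radiance and exposure time. The curve below is a fixed log-domain sigmoid rather than the learned implicit tone mapper, purely to show the exposure-control behavior:

```python
import numpy as np

def tone_map(radiance, exposure_t):
    """Fixed stand-in for the learned tone mapper: a log-domain sigmoid."""
    log_e = np.log(radiance * exposure_t + 1e-8)   # sensor irradiance, log domain
    return 1.0 / (1.0 + np.exp(-log_e))            # squashed to an LDR value in (0, 1)

hdr_radiance = np.array([0.05, 0.5, 5.0, 50.0])    # scene radiance, 0..+infinity
short = tone_map(hdr_radiance, 0.1)                # short exposure: darker pixels
long_ = tone_map(hdr_radiance, 2.0)                # long exposure: brighter pixels
print(short.round(3), long_.round(3))
```

Because the same radiance values feed every exposure, supervising only LDR images at known exposure times is enough to disentangle the unbounded HDR radiance from the camera response.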
We introduce a free-viewpoint rendering method, HumanNeRF, that works on a given monocular video of a human performing complex body motions, e.g., a video from YouTube. Our method enables pausing the video at any frame and rendering the subject from arbitrary new camera viewpoints, or even a full 360-degree camera path, for that particular frame and body pose. This task is particularly challenging, as it requires synthesizing photorealistic details of the body as seen from various camera angles that may not exist in the input video, as well as synthesizing fine details such as cloth folds and facial appearance. Our method optimizes a volumetric representation of the person in a canonical T-pose, in concert with a motion field that maps the estimated canonical representation to every frame of the video via backward warps. The motion field is decomposed into skeletal rigid and non-rigid motions, produced by deep networks. We show significant performance improvements over prior work, as well as compelling examples of free-viewpoint renderings from monocular video of moving humans in challenging uncontrolled capture scenarios.
Supervised Deep-Learning (DL)-based reconstruction algorithms have shown state-of-the-art results for highly undersampled dynamic Magnetic Resonance Imaging (MRI) reconstruction. However, the requirement for large amounts of high-quality ground-truth data hinders their application due to the generalization problem. Recently, Implicit Neural Representation (INR) has emerged as a powerful DL-based tool for solving inverse problems by characterizing the attributes of a signal as a continuous function of the corresponding coordinates in an unsupervised manner. In this work, we propose an INR-based method to improve dynamic MRI reconstruction from highly undersampled k-space data, which takes only spatiotemporal coordinates as inputs. Specifically, the proposed INR represents the dynamic MRI images as an implicit function and encodes them into neural networks. The weights of the network are learned from the sparsely acquired (k, t)-space data itself, without external training datasets or prior images. Benefiting from the strong implicit continuity regularization of INR together with explicit regularization for low-rankness and sparsity, our proposed method outperforms the compared scan-specific methods at various acceleration factors. For example, experiments on retrospective cardiac cine datasets show an improvement of 5.5 to 7.1 dB in PSNR for extremely high accelerations (up to 41.6-fold). The high quality and inherent continuity of the images provided by INR have great potential to further improve the spatiotemporal resolution of dynamic MRI, without the need for any training data.
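The data-consistency core of such scan-specific fitting, matching only the acquired (masked) k-space samples, can be written down exactly. A pixel grid stands in for the coordinate MLP so the gradient step has a closed form; with NumPy's fft2/ifft2 pair, the step below is precisely the projection onto the acquired data:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 32
x_true = np.zeros((N, N)); x_true[8:24, 8:24] = 1.0   # toy cine frame
mask = rng.random((N, N)) < 0.25                       # ~4x undersampling pattern
y = mask * np.fft.fft2(x_true)                         # acquired k-space samples

x = np.zeros((N, N), dtype=complex)
err0 = np.linalg.norm(mask * np.fft.fft2(x) - y)       # misfit of the zero image
for _ in range(3):
    resid = mask * (np.fft.fft2(x) - y)                # residual on acquired samples
    x -= np.fft.ifft2(resid)                           # exact data-consistency step
err = np.linalg.norm(mask * np.fft.fft2(x) - y)
print(err < 1e-8 * err0)  # acquired samples matched to machine precision
```

The fitted image matches the acquired samples but the system remains underdetermined; that residual ambiguity is exactly where the INR's implicit continuity and the explicit low-rank and sparsity terms do the real work.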
We present a novel neural radiance model that can be trained in a self-supervised manner for novel view synthesis of dynamic unstructured scenes. Our end-to-end trainable algorithm learns highly complex, real-world static scenes within seconds, and dynamic scenes with both rigid and non-rigid motion within minutes. By differentiating between static and motion-centric pixels, we create high-quality representations from a sparse set of images. We perform extensive qualitative and quantitative evaluation on existing benchmarks and set the state of the art on performance measures on the challenging NVIDIA Dynamic Scenes dataset. Additionally, we evaluate our model's performance on challenging real-world datasets such as Cholec80 and SurgicalActions160.
Reconstructing lung cone-beam computed tomography (CBCT) under respiratory motion is a long-standing challenge. This work goes a step further to address a challenging setting: reconstructing multi-phase lung images from a single 3D CBCT acquisition. To this end, we introduce REGAS. REGAS proposes a self-supervised method to synthesize the undersampled tomographic views and mitigate aliasing artifacts in the reconstructed images. This method allows a better estimation of the between-phase deformation vector fields (DVFs), which are in turn used to enhance the reconstruction quality of the direct observations without synthesis. To address the large memory cost of deep neural networks on high-resolution 4D data, REGAS introduces a novel Ray Path Transformation (RPT) that allows for distributed, differentiable forward projections. REGAS requires no additional measurements such as prior scans, air-flow volume, or breathing velocity. Our extensive experiments show that REGAS significantly outperforms comparable methods in quantitative metrics and visual quality.
In this work, we propose a novel image reconstruction framework that directly learns a neural implicit representation in k-space for ECG-triggered non-Cartesian Cardiac Magnetic Resonance Imaging (CMR). While existing methods bin acquired data from neighboring time points to reconstruct one phase of the cardiac motion, our framework allows for a continuous, binning-free, and subject-specific k-space representation. We assign a unique coordinate consisting of time, coil index, and frequency-domain location to each sampled k-space point. We then learn the subject-specific mapping from these unique coordinates to k-space intensities using a multi-layer perceptron with frequency-domain regularization. During inference, we obtain a complete k-space on Cartesian coordinates at an arbitrary temporal resolution. A simple inverse Fourier transform recovers the image, eliminating the need for density compensation and costly non-uniform Fourier transforms for non-Cartesian data. This novel imaging framework was tested on 42 radially sampled datasets from 6 subjects. The proposed method outperforms other techniques qualitatively and quantitatively using data from four heartbeats and from a single heartbeat, with 30 cardiac phases. Our results for single-heartbeat reconstruction of 50 cardiac phases show improved artifact removal and spatio-temporal resolution, demonstrating the potential for real-time CMR.
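The payoff of the k-space representation at inference time is easy to demonstrate: once the representation can be queried at arbitrary Cartesian coordinates, a single inverse FFT recovers the image, with no density compensation or non-uniform transform. Below, a closed-form lookup stands in for the trained MLP:

```python
import numpy as np

N = 64
img = np.zeros((N, N)); img[20:44, 28:36] = 1.0   # toy anatomy

kspace = np.fft.fft2(img)
def representation(kx, ky):
    """Stand-in for the trained MLP: k-space intensity at integer coordinates."""
    return kspace[kx % N, ky % N]

# Inference: sample the representation on a full Cartesian grid, then one iFFT.
kx, ky = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
full_k = representation(kx, ky)
recon = np.real(np.fft.ifft2(full_k))
print(np.allclose(recon, img, atol=1e-8))  # exact round trip
```

In the actual framework the queried coordinates also include time and coil index, so the same Cartesian-sampling-plus-iFFT step yields a frame at any requested cardiac phase.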
We propose a deep learning method for three-dimensional reconstruction in low-dose helical cone-beam computed tomography. We reconstruct the volume directly, i.e., not from 2D slices, guaranteeing consistency along all axes. In a crucial step beyond prior work, we train our model in a self-supervised manner in the projection domain using noisy 2D projection data, without relying on 3D reference data or the output of a reference reconstruction method. This means the fidelity of our results is not limited by the quality and availability of such data. We evaluate our method on real helical cone-beam projections and simulated phantoms. Our reconstructions are sharper and less noisy than those of previous methods, and several decibels better in quantitative PSNR measurements. When applied to full-dose data, our method produces high-quality results orders of magnitude faster than iterative techniques.
In this paper, we develop an efficient retrospective deep-learning method called stacked U-Nets with self-assisted priors to address the problem of rigid motion artifacts in MRI. The proposed work exploits additional knowledge priors from the corrupted images themselves, without requiring additional contrast data. The proposed network learns missing structural details by sharing auxiliary information from contiguous slices of the same distorted subject. We further design a refinement stage of stacked U-Nets that facilitates preserving spatial image details and improves pixel-to-pixel dependency. To perform network training, simulation of MRI motion artifacts is inevitable. We present an intensive analysis using various types of image priors: the proposed self-assisted priors and priors from other image contrasts of the same subject. The experimental analysis proves the effectiveness and feasibility of the self-assisted priors, since they do not require any further data scans.
3D reconstruction and novel view synthesis of dynamic scenes from collections of single views recently gained increased attention. Existing work shows impressive results for synthetic setups and forward-facing real-world data, but is severely limited in the training speed and angular range for generating novel views. This paper addresses these limitations and proposes a new method for full 360{\deg} novel view synthesis of non-rigidly deforming scenes. At the core of our method are: 1) An efficient deformation module that decouples the processing of spatial and temporal information for acceleration at training and inference time; and 2) A static module representing the canonical scene as a fast hash-encoded neural radiance field. We evaluate the proposed approach on the established synthetic D-NeRF benchmark, which enables efficient reconstruction from a single monocular view per time-frame randomly sampled from a full hemisphere. We refer to this form of inputs as monocularized data. To prove its practicality for real-world scenarios, we recorded twelve challenging sequences with human actors by sampling single frames from a synchronized multi-view rig. In both cases, our method is trained significantly faster than previous methods (minutes instead of days) while achieving higher visual accuracy for generated novel views. Our source code and data are available at our project page https://graphics.tu-bs.de/publications/kappel2022fast.
We present High Dynamic Range Plenoxels (HDR-Plenoxels), which learn a plenoptic function of the 3D HDR radiance field, geometry information, and the varying camera settings inherent in 2D low dynamic range (LDR) images. Our voxel-based volume rendering pipeline reconstructs an HDR radiance field from only multi-view LDR images taken with varying camera settings, in an end-to-end manner, and has a fast convergence speed. To handle the various cameras found in the real world, we introduce a tone-mapping module that models the digital in-camera imaging pipeline (ISP) and disentangles radiometric settings. Our tone-mapping module allows rendering with control over the radiometric settings of each novel view. Finally, we build a multi-view dataset with varying camera conditions that fits our problem setting. Our experiments show that HDR-Plenoxels can render detailed, high-quality HDR novel views from only LDR images taken with various cameras.
CT and MRI are two widely used clinical imaging modalities for non-invasive diagnosis. However, both modalities come with certain problems: CT uses harmful ionizing radiation, and MRI suffers from slow acquisition speed. Both problems can be addressed by undersampling, such as sparse sampling. However, such undersampled data lead to reduced resolution and introduce artifacts. Several techniques, including deep-learning-based methods, have been proposed to reconstruct such data. Nevertheless, the undersampled reconstruction problems of these two modalities have always been considered as two distinct problems and solved separately by different research efforts. This paper provides a unified solution for both sparse CT and undersampled radial MRI reconstruction by applying a Fourier-transform-based preprocessing to the radial MRI data and then reconstructing both modalities using sinogram upsampling combined with filtered back-projection. The Primal-Dual network is a deep-learning-based method for reconstructing sparsely sampled CT data. This paper introduces Primal-Dual UNet, which improves on the Primal-Dual network in terms of accuracy and reconstruction speed. The proposed method resulted in an average SSIM of 0.932 for sparse CT reconstruction with fan-beam geometry at a sparsity level of 16, a statistically significant improvement over the previous model, which achieved 0.919. Furthermore, the proposed model resulted in average SSIMs of 0.903 and 0.957 when reconstructing undersampled brain and abdominal MRI data with an acceleration factor of 16, statistically significant improvements over the original model, which achieved 0.867 and 0.949. Finally, this paper shows that the proposed network not only improves overall image quality, but also improves image quality in the regions of interest, and generalizes better in the presence of a needle.
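The Fourier-transform-based preprocessing that unifies the two modalities rests on the Fourier slice theorem: each radial k-space spoke is the 1D FFT of a parallel-beam projection, so radial MRI data can be converted into a sinogram and handled by CT machinery. The identity is checked below at angle 0:

```python
import numpy as np

N = 64
rng = np.random.default_rng(0)
img = rng.random((N, N))

# A radial spoke at angle 0 is the ky = 0 line of the 2D FFT...
spoke = np.fft.fft2(img)[0, :]
# ...and the Fourier slice theorem says it equals the 1D FFT of the
# parallel-beam projection (line integrals along the perpendicular axis).
projection = img.sum(axis=0)
print(np.allclose(spoke, np.fft.fft(projection)))  # True
```

Applying a 1D inverse FFT along each spoke therefore turns radial k-space into sinogram rows, after which the same upsampling and filtered back-projection pipeline serves both modalities.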
We present Ev-NeRF, a neural radiance field derived from event data. While event cameras can measure subtle brightness changes at high frame rates, measurements under low illumination or extreme motion suffer from significant domain discrepancy with complex noise. As a result, the performance of event-based vision tasks does not transfer to challenging environments, where event cameras are expected to thrive over normal cameras. We find that the multi-view consistency of NeRF provides a powerful self-supervision signal for eliminating spurious measurements and extracting the consistent underlying structure despite highly noisy input. Instead of the posed images of the original NeRF, the input to Ev-NeRF is the event measurements accompanied by the movements of the sensor. Using a loss function that reflects the measurement model of the sensor, Ev-NeRF creates an integrated neural volume that summarizes the unstructured and sparse data points captured over about 2-4 seconds. The generated neural volume can also produce intensity images from novel views with reasonable depth estimation, which can serve as high-quality input to various vision-based tasks. Our results show that Ev-NeRF achieves competitive performance for intensity-image reconstruction under extreme noise conditions and high-dynamic-range imaging.