Recently, implicit neural representations (INRs) parameterized by neural networks have emerged as a powerful and promising tool for representing different kinds of signals, owing to their continuous and differentiable nature, and have shown advantages over classical discrete representations. However, training the neural network of an INR exploits only input-output pairs; the derivatives of the target output with respect to the input are usually ignored. In this paper, we propose a training paradigm for INRs whose target outputs are image pixels, encoding image derivatives in addition to image values in the neural network. Specifically, we use finite differences to approximate the image derivatives. We show how this training paradigm can be leveraged to solve typical INR problems, namely image regression and inverse rendering, and demonstrate that it improves the data efficiency and generalization capability of INRs. The code of our method is available at \url{https://github.com/megvii-research/sobolev_inrs}.
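A minimal sketch of the idea for a grayscale image INR, assuming a PyTorch model `inr` that maps pixel coordinates to intensities: the derivative targets come from central finite differences of the raw image, the network's own derivatives come from autograd, and both terms enter the loss. The function name, the weighting `lambda_d`, and the grayscale simplification are illustrative assumptions, not the paper's released code.

```python
import torch

def sobolev_loss(inr, coords, image, lambda_d=0.1):
    """Value + first-derivative supervision for a grayscale image INR (illustrative).

    inr:    network mapping (N, 2) pixel coordinates -> (N, 1) intensities
    coords: (N, 2) pixel coordinates (row, col) as a float tensor
    image:  (H, W) ground-truth grayscale image
    """
    H, W = image.shape
    rows = coords[:, 0].long().clamp(1, H - 2)   # clamp so central differences stay in bounds
    cols = coords[:, 1].long().clamp(1, W - 2)

    # Value term.
    coords = coords.clone().requires_grad_(True)
    pred = inr(coords)                                                     # (N, 1)
    value_loss = ((pred.squeeze(-1) - image[rows, cols]) ** 2).mean()

    # Network derivative d(pred)/d(coords) via autograd.
    grad = torch.autograd.grad(pred.sum(), coords, create_graph=True)[0]   # (N, 2)

    # Target derivative via central finite differences on the raw image.
    d_row = (image[rows + 1, cols] - image[rows - 1, cols]) / 2.0
    d_col = (image[rows, cols + 1] - image[rows, cols - 1]) / 2.0
    target_grad = torch.stack([d_row, d_col], dim=-1)                      # (N, 2)

    deriv_loss = ((grad - target_grad) ** 2).mean()
    return value_loss + lambda_d * deriv_loss
```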
Implicitly defined, continuous, differentiable signal representations parameterized by neural networks have emerged as a powerful paradigm, offering many possible benefits over conventional representations. However, current network architectures for such implicit neural representations are incapable of modeling signals with fine detail, and fail to represent a signal's spatial and temporal derivatives, despite the fact that these are essential to many physical signals defined implicitly as the solution to partial differential equations. We propose to leverage periodic activation functions for implicit neural representations and demonstrate that these networks, dubbed sinusoidal representation networks or SIRENs, are ideally suited for representing complex natural signals and their derivatives. We analyze SIREN activation statistics to propose a principled initialization scheme and demonstrate the representation of images, wavefields, video, sound, and their derivatives. Further, we show how SIRENs can be leveraged to solve challenging boundary value problems, such as particular Eikonal equations (yielding signed distance functions), the Poisson equation, and the Helmholtz and wave equations. Lastly, we combine SIRENs with hypernetworks to learn priors over the space of SIREN functions. Please see the project website for a video overview of the proposed method and all applications.
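For reference, a minimal sine-activated layer with the initialization scheme described in the paper (hidden-layer weights uniform in ±sqrt(6/fan_in)/omega_0, a wider first layer, and a frequency factor omega_0, commonly set to 30). This is a sketch, not the authors' released implementation.

```python
import math
import torch

class SineLayer(torch.nn.Module):
    """y = sin(omega_0 * (W x + b)), with the SIREN initialization."""

    def __init__(self, in_features, out_features, omega_0=30.0, is_first=False):
        super().__init__()
        self.omega_0 = omega_0
        self.linear = torch.nn.Linear(in_features, out_features)
        with torch.no_grad():
            if is_first:
                bound = 1.0 / in_features
            else:
                bound = math.sqrt(6.0 / in_features) / omega_0
            self.linear.weight.uniform_(-bound, bound)

    def forward(self, x):
        return torch.sin(self.omega_0 * self.linear(x))

# A small SIREN mapping 2D coordinates to RGB.
siren = torch.nn.Sequential(
    SineLayer(2, 256, is_first=True),
    SineLayer(256, 256),
    torch.nn.Linear(256, 3),
)
```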
We present a novel embedding field, \emph{PREF}, as a compact representation to facilitate neural signal modeling and reconstruction tasks. Pure multi-layer perceptron (MLP) based neural techniques are biased towards low-frequency signals and rely on deep layers or Fourier encodings to avoid losing fine details. Instead, PREF adopts a compact and physically explainable encoding field based on a phasor formulation of the Fourier embedding space. We conduct comprehensive experiments to demonstrate the advantages of PREF over the latest spatial embedding techniques. We then develop a highly efficient frequency learning framework using an approximated inverse Fourier transform scheme together with a novel Parseval regularizer. Extensive experiments show that our efficient and compact frequency-based neural signal processing technique is on par with, or even better than, the state of the art on 2D image completion, 3D SDF surface regression, and 5D radiance field reconstruction.
We present a method that achieves state-of-the-art results for synthesizing novel views of complex scenes by optimizing an underlying continuous volumetric scene function using a sparse set of input views. Our algorithm represents a scene using a fully-connected (nonconvolutional) deep network, whose input is a single continuous 5D coordinate (spatial location (x, y, z) and viewing direction (θ, φ)) and whose output is the volume density and view-dependent emitted radiance at that spatial location. We synthesize views by querying 5D coordinates along camera rays and use classic volume rendering techniques to project the output colors and densities into an image. Because volume rendering is naturally differentiable, the only input required to optimize our representation is a set of images with known camera poses. We describe how to effectively optimize neural radiance fields to render photorealistic novel views of scenes with complicated geometry and appearance, and demonstrate results that outperform prior work on neural rendering and view synthesis. View synthesis results are best viewed as videos, so we urge readers to view our supplementary video for convincing comparisons.
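The "classic volume rendering" step referred to here reduces to a simple numerical quadrature along each ray: densities are converted to alpha values, transmittance is accumulated, and sample colors are alpha-composited. A NumPy sketch of the standard discretization (sample spacing delta_i):

```python
import numpy as np

def composite_ray(densities, colors, deltas):
    """Alpha-composite radiance-field samples along one ray.

    densities: (N,) volume density sigma_i at each sample
    colors:    (N, 3) view-dependent RGB at each sample
    deltas:    (N,) spacing between adjacent samples
    Returns the rendered pixel color.
    """
    alphas = 1.0 - np.exp(-densities * deltas)              # alpha_i = 1 - exp(-sigma_i * delta_i)
    # Transmittance T_i = prod_{j < i} (1 - alpha_j).
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = trans * alphas                                # w_i = T_i * alpha_i
    return (weights[:, None] * colors).sum(axis=0)
```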
Neural implicit functions are highly effective for data representation. However, the implicit functions learned by a neural network often contain unexpected noise or lose fine details when the input data contains many details or spans both low- and high-frequency bands. Removing artifacts while preserving fine-scale content is challenging, and the results are typically over-smoothed or noisy. To tackle this dilemma, we propose a new framework, FINN, that integrates a filtering module into an MLP to perform data reconstruction while adapting to regions containing different frequencies. The smoothing operator of the filtering module acts on intermediate results of the network and encourages the output to be smooth, while the recovering operator brings high frequencies back to regions that are over-smoothed. The two counteractive operators act consecutively in all MLP layers to adaptively influence the reconstruction. We demonstrate the advantages of FINN on several tasks and showcase significant improvements compared with state-of-the-art methods. In addition, FINN also yields better performance in terms of convergence speed and network stability.
We present a super-fast convergence approach for reconstructing the per-scene radiance field from a set of images that capture a scene with known poses. This task, often applied to novel view synthesis, has recently been revolutionized by Neural Radiance Fields (NeRF) for its state-of-the-art quality and flexibility. However, NeRF and its variants require lengthy training, from hours to days for a single scene. In contrast, our approach achieves NeRF-comparable quality and converges rapidly from scratch in less than 15 minutes on a single GPU. We adopt a representation consisting of a density voxel grid for scene geometry and a feature voxel grid with a shallow network for complex view-dependent appearance. Modeling with explicit, discretized volume representations is not new, but we propose two simple yet non-trivial techniques that contribute to fast convergence and high-quality output. First, we introduce post-activation interpolation of voxel densities, which is capable of producing sharp surfaces at lower grid resolutions. Second, direct voxel density optimization is prone to suboptimal geometric solutions, so we robustify the optimization process by imposing several priors. Finally, evaluation on five inward-facing benchmarks shows that our method matches, if not surpasses, NeRF's quality, yet it only takes about 15 minutes to train a new scene from scratch.
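The post-activation interpolation idea can be illustrated in one dimension: interpolating raw (pre-activation) grid densities and applying the nonlinearity afterwards yields a sharp opacity transition inside a single cell, whereas interpolating already-activated values only gives a linear ramp. The softplus/alpha conversion, grid values, and step size below are illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn.functional as F

def interp1d(values, x):
    """Linear interpolation of a 1D grid `values` at fractional positions x in [0, len(values)-1]."""
    x0 = x.floor().long().clamp(0, values.numel() - 2)
    w = x - x0.float()
    return (1 - w) * values[x0] + w * values[x0 + 1]

raw_density = torch.tensor([-8.0, 8.0])   # raw densities at the two ends of one grid cell
xs = torch.linspace(0.0, 1.0, 5)          # query positions inside the cell
delta = 0.5                               # ray-marching step size

# Post-activation: interpolate raw values first, then apply softplus and the alpha conversion.
alpha_post = 1 - torch.exp(-F.softplus(interp1d(raw_density, xs)) * delta)

# Pre-activation: convert each grid value to alpha first, then interpolate the result.
alpha_grid = 1 - torch.exp(-F.softplus(raw_density) * delta)
alpha_pre = interp1d(alpha_grid, xs)

print(alpha_post)  # low in the empty half of the cell, rising steeply toward the occupied end
print(alpha_pre)   # linear ramp across the whole cell, i.e. a blurrier boundary
```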
We show that passing input points through a simple Fourier feature mapping enables a multilayer perceptron (MLP) to learn high-frequency functions in low-dimensional problem domains. These results shed light on recent advances in computer vision and graphics that achieve state-of-the-art results by using MLPs to represent complex 3D objects and scenes. Using tools from the neural tangent kernel (NTK) literature, we show that a standard MLP fails to learn high frequencies both in theory and in practice. To overcome this spectral bias, we use a Fourier feature mapping to transform the effective NTK into a stationary kernel with a tunable bandwidth. We suggest an approach for selecting problem-specific Fourier features that greatly improves the performance of MLPs for low-dimensional regression tasks relevant to the computer vision and graphics communities.
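The mapping itself is short: sample a frequency matrix B with Gaussian entries of standard deviation sigma and encode gamma(v) = [cos(2*pi*B*v), sin(2*pi*B*v)] before the MLP. A NumPy sketch (the number of frequencies and sigma are per-task choices):

```python
import numpy as np

def fourier_features(v, B):
    """gamma(v) = [cos(2*pi*Bv), sin(2*pi*Bv)] for inputs v of shape (N, d)."""
    proj = 2.0 * np.pi * v @ B.T               # (N, m)
    return np.concatenate([np.cos(proj), np.sin(proj)], axis=-1)

rng = np.random.default_rng(0)
sigma = 10.0                                   # bandwidth of the mapping, tuned per task
B = sigma * rng.standard_normal((256, 2))      # 256 random frequencies for 2D coordinates
coords = rng.random((4, 2))                    # example pixel coordinates in [0, 1]^2
features = fourier_features(coords, B)         # (4, 512), fed to a standard MLP
```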
Recent efforts in Neural Radiance Fields (NeRF) have shown impressive results on novel view synthesis by utilizing implicit neural representation to represent 3D scenes. Due to the process of volumetric rendering, the inference speed for NeRF is extremely slow, limiting the application scenarios of utilizing NeRF on resource-constrained hardware, such as mobile devices. Many works have been conducted to reduce the latency of running NeRF models. However, most of them still require high-end GPUs for acceleration or extra storage memory, both of which are unavailable on mobile devices. Another emerging direction utilizes the neural light field (NeLF) for speedup, as only one forward pass is performed on a ray to predict the pixel color. Nevertheless, to reach a rendering quality similar to NeRF, the network in NeLF is designed with intensive computation, which is not mobile-friendly. In this work, we propose an efficient network that runs in real time on mobile devices for neural rendering. We follow the setting of NeLF to train our network. Unlike existing works, we introduce a novel network architecture that runs efficiently on mobile devices with low latency and small size, i.e., saving $15\times \sim 24\times$ storage compared with MobileNeRF. Our model achieves high-resolution generation while maintaining real-time inference for both synthetic and real-world scenes on mobile devices, e.g., $18.04$ms (iPhone 13) for rendering one $1008\times756$ image of real 3D scenes. Additionally, we achieve similar image quality as NeRF and better quality than MobileNeRF (PSNR $26.15$ vs. $25.91$ on the real-world forward-facing dataset).
Neural Radiance Fields (NeRF) have shown great potential in representing 3D scenes and synthesizing novel views, but the computational overhead of NeRF at the inference stage is still heavy. To lighten the burden, we delve into the coarse-to-fine hierarchical sampling procedure of NeRF and point out that the coarse stage can be replaced by a lightweight module which we name a neural sample field. The proposed sample field maps rays into sample distributions, which can be transformed into point coordinates and fed into the radiance field for volume rendering. The overall framework is named NeuSample. We perform experiments on Realistic Synthetic 360$^{\circ}$ and Real Forward-Facing, two popular 3D scene datasets, and show that NeuSample achieves better rendering quality than NeRF while enjoying faster inference speed. NeuSample can be further compressed with a proposed sample field extraction method, towards a better trade-off between quality and speed.
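A minimal sketch of the idea as stated in the abstract: a lightweight MLP maps a ray (origin, direction) to a set of sample distances, which are turned into 3D points and passed on to the radiance field. The layer sizes, the sigmoid parameterization of distances, and the class name are assumptions for illustration, not the NeuSample architecture.

```python
import torch

class SampleField(torch.nn.Module):
    """Maps a ray to n_samples distances along it (stands in for NeRF's coarse stage)."""

    def __init__(self, n_samples=64, near=2.0, far=6.0, hidden=128):
        super().__init__()
        self.near, self.far = near, far
        self.mlp = torch.nn.Sequential(
            torch.nn.Linear(6, hidden), torch.nn.ReLU(),
            torch.nn.Linear(hidden, hidden), torch.nn.ReLU(),
            torch.nn.Linear(hidden, n_samples),
        )

    def forward(self, rays_o, rays_d):
        # Predict distances in [near, far], sorted so they can be composited in order.
        raw = self.mlp(torch.cat([rays_o, rays_d], dim=-1))                   # (R, n_samples)
        t = self.near + (self.far - self.near) * torch.sigmoid(raw)
        t, _ = torch.sort(t, dim=-1)
        points = rays_o[:, None, :] + t[..., None] * rays_d[:, None, :]       # (R, n_samples, 3)
        return points, t
```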
Are center positions fully capable of representing a pixel? There is nothing wrong with representing pixels by their centers in a discrete image representation, but it is more meaningful to consider each pixel as the aggregation of signals over a local area in the image super-resolution (SR) context. Despite the great capability of coordinate-based implicit representations in the field of arbitrary-scale image SR, this area-based nature of pixels is not fully considered. To this end, we propose integrated positional encoding (IPE), which extends traditional positional encoding by aggregating frequency information over the pixel area. We apply IPE to a state-of-the-art arbitrary-scale image super-resolution method, the local implicit image function (LIIF), presenting IPE-LIIF. We show the effectiveness of IPE-LIIF through quantitative and qualitative evaluations, and further demonstrate the generalization ability of IPE to larger image scales and to multiple implicit-representation-based methods. Code will be released.
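One way to make "aggregating frequency information over the pixel area" concrete: instead of evaluating sin/cos of each frequency at the pixel center, average the encoding analytically over the pixel footprint [c - r, c + r], which attenuates frequencies whose period is small relative to the pixel. The closed form below (a sinc attenuation of the center encoding) is a generic sketch of region-integrated encoding, not necessarily the paper's exact formulation.

```python
import numpy as np

def integrated_pe(center, radius, freqs):
    """Sinusoidal encoding averaged over the 1D pixel footprint [center - radius, center + radius].

    (1/2r) * integral of sin(w x) over the footprint = sin(w c) * sin(w r) / (w r),
    and likewise for cos, so high frequencies are smoothly attenuated for large pixels.
    """
    w = freqs[None, :]                        # (1, F)
    c = center[:, None]                       # (N, 1)
    r = radius[:, None]                       # (N, 1)
    atten = np.sinc(w * r / np.pi)            # np.sinc(x) = sin(pi x)/(pi x), so this is sin(w r)/(w r)
    return np.concatenate([np.sin(w * c) * atten, np.cos(w * c) * atten], axis=-1)

freqs = 2.0 ** np.arange(8) * np.pi           # standard octave frequencies
centers = np.array([0.31, 0.74])              # pixel centers in continuous image coordinates
radii = np.array([0.5 / 64, 0.5 / 256])       # half a pixel at two different output scales
enc = integrated_pe(centers, radii, freqs)    # (2, 16)
```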
We introduce Plenoxels (plenoptic voxels), a system for photorealistic view synthesis. Plenoxels represent a scene as a sparse 3D grid with spherical harmonics. This representation can be optimized from calibrated images via gradient methods and regularization, without any neural components. On standard benchmark tasks, Plenoxels are optimized two orders of magnitude faster than Neural Radiance Fields with no loss in visual quality.
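For intuition, the per-voxel appearance model is just an evaluation of low-order real spherical harmonics in the viewing direction. The sketch below uses degree-1 SH (4 basis functions) for brevity, whereas the system uses degree 2 (9 coefficients per color channel) and trilinearly interpolates coefficients from the sparse grid; the sigmoid squashing is an illustrative choice.

```python
import numpy as np

# Real spherical harmonics up to degree 1 (constants from the standard SH tables).
SH_C0 = 0.28209479177387814
SH_C1 = 0.4886025119029199

def sh_color(coeffs, view_dir):
    """Evaluate the view-dependent color stored at one voxel.

    coeffs:   (3, 4) RGB coefficients for the four degree <= 1 SH basis functions
    view_dir: (3,) unit viewing direction (x, y, z)
    """
    x, y, z = view_dir
    basis = np.array([SH_C0, -SH_C1 * y, SH_C1 * z, -SH_C1 * x])
    # Squash to (0, 1); real pipelines clamp or squash the SH sum into a valid color range.
    return 1.0 / (1.0 + np.exp(-(coeffs @ basis)))

rgb = sh_color(np.random.default_rng(0).normal(size=(3, 4)),
               np.array([0.0, 0.0, 1.0]))
```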
We present NeRF-SR, a solution for high-resolution (HR) novel view synthesis from mostly low-resolution (LR) inputs. Our method is built upon Neural Radiance Fields (NeRF), which predicts per-point density and color with a multi-layer perceptron. While NeRF can produce images at arbitrary scales, it struggles at resolutions beyond those of the observed images. Our key insight is that NeRF has a local prior, meaning that the prediction at a 3D point can be propagated to nearby regions and remain accurate. We first exploit this with a supersampling strategy that shoots multiple rays at each image pixel, which enforces multi-view constraints at the sub-pixel level. We then show that NeRF-SR can further boost the performance of supersampling with a refinement network that leverages the estimated depth to hallucinate details from related patches on an HR reference image. Experimental results demonstrate that NeRF-SR generates high-quality results for novel view synthesis at HR on both synthetic and real-world datasets.
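The supersampling strategy amounts to shooting a small grid of sub-pixel rays per HR pixel and averaging the rendered colors, so each LR observation constrains several sub-pixel predictions. A hedged sketch of the ray generation (the pinhole camera convention, parameter names, and `render` call are illustrative):

```python
import numpy as np

def subpixel_directions(px, py, focal, cx, cy, s=2):
    """Return s*s camera-space ray directions covering pixel (px, py).

    px, py: integer pixel indices; focal, cx, cy: pinhole intrinsics; s: rays per side.
    """
    offsets = (np.arange(s) + 0.5) / s                 # sub-pixel offsets in [0, 1)
    u, v = np.meshgrid(px + offsets, py + offsets)
    dirs = np.stack([(u - cx) / focal, -(v - cy) / focal, -np.ones_like(u)], axis=-1)
    return dirs.reshape(-1, 3)                          # (s*s, 3): render each, then average

dirs = subpixel_directions(px=100, py=64, focal=500.0, cx=200.0, cy=150.0)
# pixel_color = render(dirs).mean(axis=0)   # average of the s*s rendered colors supervises the LR pixel
```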
Photorealistic rendering of real-world scenes is a tremendous challenge with a wide range of applications, including MR (Mixed Reality) and VR (Virtual Reality). Neural networks, which have long been investigated in the context of solving differential equations, have previously been introduced as implicit representations for photorealistic rendering. However, realistic rendering using classic computing is challenging because it requires time-consuming optical ray marching and suffers from computational bottlenecks due to the curse of dimensionality. In this paper, we propose Quantum Radiance Fields (QRF), which integrate quantum circuits, quantum activation functions, and quantum volume rendering for implicit scene representation. The results indicate that QRF not only takes advantage of the merits of quantum computing technology, such as high speed, fast convergence, and high parallelism, but also ensures high quality of volume rendering.
Recent advances in machine learning have created increasing interest in solving visual computing problems using a class of coordinate-based neural networks that parameterize physical properties of scenes or objects across space and time. These methods, which we call neural fields, have seen successful application in the synthesis of 3D shapes and images, animation of human bodies, 3D reconstruction, and pose estimation. However, due to rapid progress in a short time, many papers exist, but a comprehensive review and formulation of the problem has not yet emerged. In this report, we address this limitation by providing context, mathematical grounding, and an extensive review of the literature on neural fields. This report covers research along two dimensions. In the first part, we focus on techniques in neural fields by identifying common components of neural field methods, including different representations, architectures, forward mappings, and generalization methods. In the second part, we focus on applications of neural fields to different problems in visual computing and beyond (e.g., robotics, audio). Our review shows the breadth of topics already covered in visual computing, both historically and in current incarnations, and highlights the improved quality, flexibility, and capability brought by neural field methods. Finally, we present a companion website that contributes a living version of this review, which can be continually updated by the community.
Recent methods for neural surface representation and rendering, for example NeuS, have demonstrated remarkably high-quality reconstruction of static scenes. However, the training of NeuS takes an extremely long time (8 hours), which makes it almost impossible to apply them to dynamic scenes with thousands of frames. We propose a fast neural surface reconstruction approach, called NeuS2, which achieves two orders of magnitude improvement in terms of acceleration without compromising reconstruction quality. To accelerate the training process, we integrate multi-resolution hash encodings into a neural surface representation and implement our whole algorithm in CUDA. We also present a lightweight calculation of second-order derivatives tailored to our networks (i.e., ReLU-based MLPs), which achieves a factor two speed up. To further stabilize training, a progressive learning strategy is proposed to optimize multi-resolution hash encodings from coarse to fine. In addition, we extend our method for reconstructing dynamic scenes with an incremental training strategy. Our experiments on various datasets demonstrate that NeuS2 significantly outperforms the state-of-the-arts in both surface reconstruction accuracy and training speed. The video is available at https://vcai.mpi-inf.mpg.de/projects/NeuS2/ .
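A highly simplified sketch of the multi-resolution hash encoding that NeuS2 integrates (in the spirit of Instant-NGP): each level hashes the grid vertex containing a 3D point into a small table of learned features. Real implementations trilinearly interpolate the eight surrounding corners and run in fused CUDA kernels; the table size, level count, and nearest-vertex lookup below are simplifications chosen for brevity.

```python
import torch

class HashEncoding(torch.nn.Module):
    def __init__(self, n_levels=8, table_size=2**14, feat_dim=2,
                 base_res=16, growth=1.5):
        super().__init__()
        self.tables = torch.nn.Parameter(
            1e-4 * torch.randn(n_levels, table_size, feat_dim))
        self.res = [int(base_res * growth ** level) for level in range(n_levels)]
        self.table_size = table_size

    def forward(self, x):                       # x: (N, 3) points in [0, 1]^3
        feats = []
        for level, res in enumerate(self.res):
            idx = (x * res).long()              # containing grid vertex (no corner interpolation here)
            h = (idx[:, 0] * 1) ^ (idx[:, 1] * 2654435761) ^ (idx[:, 2] * 805459861)
            feats.append(self.tables[level][h % self.table_size])
        return torch.cat(feats, dim=-1)         # (N, n_levels * feat_dim), fed to a shallow MLP

enc = HashEncoding()
features = enc(torch.rand(1024, 3))             # (1024, 16)
```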
We propose a Transformer-based NeRF (TransNeRF) to learn a generic neural radiance field conditioned on observed-view images for the novel view synthesis task. By contrast, existing MLP-based NeRFs cannot directly receive an arbitrary number of observed views and require auxiliary pooling-based operations to fuse source-view information, which leads to missing complicated relationships between the source views and the target rendering view. Furthermore, current methods process each 3D point individually and ignore the local consistency of the radiance field scene representation. These limitations may degrade performance in challenging real-world applications where large differences between source views and novel rendering views can exist. To address these challenges, our TransNeRF utilizes the attention mechanism to naturally decode deep associations from an arbitrary number of source views into a coordinate-based scene representation. Local consistency of shape and appearance is considered in the ray-cast space and the surrounding-view space within a unified Transformer network. Experiments demonstrate that our TransNeRF, trained on a wide variety of scenes, can achieve better performance than state-of-the-art image-based neural rendering methods in both scene-agnostic and per-scene fine-tuning settings, especially when there is a considerable gap between the source views and the rendering view.
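The core operation described here, fusing an arbitrary number of source-view features with attention instead of pooling, can be sketched with a standard multi-head attention layer: features gathered for a query 3D point from each source view act as keys and values, and a target-ray/point embedding acts as the query. The dimensions, feature-extraction step, and placeholder tensors below are not the TransNeRF architecture.

```python
import torch

d_model, n_views, n_points = 128, 5, 1024

attn = torch.nn.MultiheadAttention(embed_dim=d_model, num_heads=4, batch_first=True)

# Per-point features gathered from each source view (e.g., by projecting the 3D point
# into that view and sampling a CNN feature map) -- placeholder tensors here.
source_feats = torch.randn(n_points, n_views, d_model)    # keys / values
query = torch.randn(n_points, 1, d_model)                 # target-ray / point embedding

fused, weights = attn(query, source_feats, source_feats)  # fused: (n_points, 1, d_model)
# `fused` conditions the radiance decoder; any number of views fits without a fixed pooling step.
```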
We present High Dynamic Range Neural Radiance Fields (HDR-NeRF) to recover an HDR radiance field from a set of low dynamic range (LDR) views with different exposures. Using HDR-NeRF, we are able to generate both novel HDR views and novel LDR views under different exposures. The key to our method is to model the physical imaging process, which dictates how the radiance of a scene point is transformed into a pixel value in an LDR image, with two implicit functions: a radiance field and a tone mapper. The radiance field encodes the scene radiance (whose values vary from 0 to +∞) and outputs the density and radiance of a ray given the corresponding ray origin and ray direction. The tone mapper models the mapping process by which a ray hitting the camera sensor becomes a pixel value. The color of a ray is predicted by feeding the radiance and the corresponding exposure time into the tone mapper. We use classic volume rendering techniques to project the output radiance, colors, and densities into HDR and LDR images, while only using the input LDR images as supervision. We collect a new forward-facing HDR dataset to evaluate the proposed method. Experimental results on synthetic and real-world scenes validate that our method can not only accurately control the exposure of synthesized views but also render views with a high dynamic range.
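The composition described here can be sketched directly: the radiance field outputs HDR scene radiance, the radiance is scaled by the exposure time, and a small learned tone mapper turns the (log) exposure into an LDR color channel. The log-domain parameterization, layer sizes, and per-channel sharing below are assumptions for illustration, not the paper's exact module.

```python
import torch

class ToneMapper(torch.nn.Module):
    """Maps log exposure (radiance * exposure time) to an LDR value in (0, 1), per channel."""

    def __init__(self, hidden=64):
        super().__init__()
        self.mlp = torch.nn.Sequential(
            torch.nn.Linear(1, hidden), torch.nn.ReLU(),
            torch.nn.Linear(hidden, 1), torch.nn.Sigmoid(),
        )

    def forward(self, radiance, exposure_time):
        # radiance: (N, 3) HDR radiance from the radiance field; exposure_time: (N, 1) seconds.
        log_exposure = torch.log(radiance * exposure_time + 1e-8)     # (N, 3)
        return self.mlp(log_exposure.reshape(-1, 1)).reshape(-1, 3)   # LDR color, supervised by LDR pixels

tone = ToneMapper()
ldr = tone(torch.rand(16, 3), torch.full((16, 1), 0.05))
```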
Neural Radiance Fields (NeRF) have achieved great success in modeling 3D scenes and synthesizing novel-view images. However, most previous NeRF methods take a lot of time to optimize a single scene. Explicit data structures, such as voxel features, show great potential for accelerating the training process. However, voxel features face two big challenges when applied to dynamic scenes, namely modeling temporal information and capturing different scales of point motion. We propose a radiance field framework that represents scenes with time-aware voxel features, named TiNeuVox. A tiny coordinate deformation network is introduced to model coarse motion trajectories, and temporal information is further enhanced in the radiance network. A multi-distance interpolation method is proposed and applied to the voxel features to model both small and large motions. Our framework significantly accelerates the optimization of dynamic radiance fields while maintaining high rendering quality. Empirical evaluation is performed on both synthetic and real scenes. Our TiNeuVox completes training with only 8 minutes and 8 MB of storage cost while showing similar or even better rendering performance than previous dynamic NeRF methods.
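The "tiny coordinate deformation network" can be sketched as a small MLP that maps a sample position and a time stamp to a coordinate offset; the deformed point is then used to query the time-aware voxel features. The layer sizes, raw time input, and residual form are assumptions for illustration, not the exact TiNeuVox module.

```python
import torch

class TinyDeformation(torch.nn.Module):
    """x_deformed = x + MLP(x, t): a coarse motion model for a dynamic radiance field."""

    def __init__(self, hidden=64):
        super().__init__()
        self.mlp = torch.nn.Sequential(
            torch.nn.Linear(4, hidden), torch.nn.ReLU(),
            torch.nn.Linear(hidden, hidden), torch.nn.ReLU(),
            torch.nn.Linear(hidden, 3),
        )

    def forward(self, x, t):
        # x: (N, 3) sample positions, t: (N, 1) normalized time in [0, 1].
        offset = self.mlp(torch.cat([x, t], dim=-1))
        return x + offset            # deformed coordinate used to look up the voxel features

deform = TinyDeformation()
x_deformed = deform(torch.rand(2048, 3), torch.rand(2048, 1))
```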
Figure 1: (a) Each pixel in a NeX multiplane image consists of an alpha transparency value, a base color $k_0$, and view-dependent reflectance coefficients $k_1 \ldots k_n$. A linear combination of these coefficients and basis functions learned from a neural network produces the final color value. (b, c) show our synthesized images that can be rendered in real time with view-dependent effects such as the reflection on the silver spoon.
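The per-pixel color model in the caption is a linear combination of stored coefficients and view-dependent basis functions, which is why it can be evaluated in real time once the coefficients are baked into textures. A hedged NumPy sketch of that final step (in the real system the $k_n$ live in multiplane-image layers and the basis values $H_n(v)$ come from a small network evaluated per view direction):

```python
import numpy as np

def nex_pixel_color(k0, ks, basis_values):
    """Final color = k0 + sum_n k_n * H_n(v) for one pixel.

    k0:           (3,) base color
    ks:           (n, 3) view-dependent reflectance coefficients k_1 .. k_n
    basis_values: (n,) basis functions H_1(v) .. H_n(v) evaluated at view direction v
    """
    return k0 + basis_values @ ks        # (3,)

rgb = nex_pixel_color(k0=np.array([0.4, 0.3, 0.2]),
                      ks=np.random.default_rng(0).normal(size=(8, 3)) * 0.1,
                      basis_values=np.random.default_rng(1).normal(size=8))
```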
Implicit neural representations (INRs) use multilayer perceptrons to represent high-frequency functions in low-dimensional problem domains. Recently, these representations have achieved state-of-the-art results on tasks related to complex 3D objects and scenes. A core problem is the representation of highly detailed signals, which is tackled using networks with periodic activation functions (SIRENs) or by applying Fourier mappings to the input. This work analyzes the connection between the two methods and shows that a Fourier-mapped perceptron is structurally like a one-hidden-layer SIREN. Furthermore, we identify the relationship between the previously proposed Fourier mapping and the general d-dimensional Fourier series, leading to an integer lattice mapping. Moreover, we modify a progressive training strategy to work on arbitrary Fourier mappings and show that it improves generalization on the interpolation task. Finally, we compare the different mappings on image regression and novel view synthesis tasks. We confirm the previous finding that the main contributors to the mapping's performance are the size of the embedding and the standard deviation of its elements.
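The structural equivalence noted here is easy to see in code: a Fourier mapping followed by a linear layer computes sums of sin/cos terms, which is exactly what one sine-activated hidden layer with fixed weights and phase shifts of 0 or pi/2 computes, since cos(z) = sin(z + pi/2). The integer lattice mapping simply draws the frequency matrix B from integer lattice points instead of a Gaussian. A sketch (the frequency cutoff and shapes are illustrative):

```python
import itertools
import numpy as np

def integer_lattice(d=2, n_max=4):
    """All nonzero d-dimensional integer frequency vectors with infinity-norm <= n_max."""
    pts = [p for p in itertools.product(range(-n_max, n_max + 1), repeat=d) if any(p)]
    return np.array(pts, dtype=float)                    # (m, d)

def fourier_mapping(v, B):
    proj = 2.0 * np.pi * v @ B.T
    return np.concatenate([np.cos(proj), np.sin(proj)], axis=-1)

B = integer_lattice()                                    # frequencies taken from the integer lattice
x = np.random.default_rng(0).random((5, 2))
feats = fourier_mapping(x, B)
# A linear layer applied to `feats` equals a one-hidden-layer network with sine activations,
# fixed first-layer weights 2*pi*B, and biases in {0, pi/2} -- the stated structural similarity.
```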