Understanding 3D motion in dynamic scenes is essential for many vision applications. Recent progress has focused mainly on estimating the motion of specific elements such as humans. In this paper, we leverage a neural motion field to estimate the motion of all points in a multi-view setting. Modeling the motion of a dynamic scene is challenging because of ambiguities between points of similar color and points with time-varying color. We propose to regularize the estimated motion to be predictable: if the motion from previous frames is known, the motion in the near future should be predictable. We therefore introduce a predictability regularization by first conditioning the estimated motion on latent embeddings, and then employing a prediction network to enforce predictability on those embeddings. The proposed framework, PREF (Predictability REgularized Fields), achieves results on par with or better than state-of-the-art neural-motion-field-based dynamic scene representation methods, while requiring no prior knowledge of the scene.
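As a rough illustration of the predictability idea (not the authors' implementation): each frame gets a latent code, the motion field is conditioned on that code, and a small prediction network regresses each code from a window of preceding ones; the prediction error acts as the regularizer. All names (`MotionField`, `Predictor`, the window size) are placeholders in this sketch.

```python
import torch
import torch.nn as nn

class MotionField(nn.Module):
    """Maps a 3D point plus a per-frame latent code to a 3D displacement."""
    def __init__(self, latent_dim=32, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, x, z):
        # x: (N, 3) points, z: (latent_dim,) code of the queried frame
        return self.mlp(torch.cat([x, z.expand(x.shape[0], -1)], dim=-1))

class Predictor(nn.Module):
    """Predicts the next frame's latent code from a window of previous codes."""
    def __init__(self, latent_dim=32, window=3, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(window * latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, latent_dim),
        )

    def forward(self, z_prev):                         # z_prev: (window, latent_dim)
        return self.mlp(z_prev.reshape(1, -1)).squeeze(0)

T, latent_dim, window = 10, 32, 3
codes = nn.Parameter(torch.randn(T, latent_dim) * 0.01)    # per-frame embeddings
field = MotionField(latent_dim)
predictor = Predictor(latent_dim, window)

def predictability_loss(codes, predictor, window=3):
    """Penalize embeddings that the predictor cannot extrapolate."""
    loss = codes.new_zeros(())
    for t in range(window, codes.shape[0]):
        z_hat = predictor(codes[t - window:t])
        loss = loss + ((z_hat - codes[t]) ** 2).mean()
    return loss / (codes.shape[0] - window)

motion = field(torch.randn(4, 3), codes[0])                # displacements at frame 0
reg = predictability_loss(codes, predictor, window)
print(motion.shape, reg.item())
```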
Figure 1: Our method can synthesize novel views in both space and time from a single monocular video of a dynamic scene. Here we show video results with various configurations of fixing and interpolating view and time (left), as well as a visualization of the recovered scene geometry (right). Please view with Adobe Acrobat or KDE Okular to see animations.
We propose Neural Dynamic Reconstruction (NDR), a template-free method that recovers high-fidelity geometry and motion of a dynamic scene from a monocular RGB-D camera. In NDR, we adopt a neural implicit function for surface representation and rendering, so that the captured color and depth can be fully exploited to jointly optimize the surface and the deformations. To represent and constrain non-rigid deformations, we propose a novel neural invertible deformation network, such that cycle consistency between any two frames is automatically satisfied. Considering that the surface topology of a dynamic scene may change over time, we employ a topology-aware strategy to construct topology-variant correspondences for the fused frames. NDR also further refines the camera poses in a global optimization. Experiments on public datasets and our collected dataset show that NDR outperforms existing monocular dynamic reconstruction methods.
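To illustrate why an invertible deformation network makes cycle consistency between frames automatic, here is a minimal coupling-style sketch (RealNVP-like blocks). This is only one possible way to build an invertible warp; NDR's actual architecture and conditioning may differ, and the per-frame codes below are hypothetical.

```python
import torch
import torch.nn as nn

class CouplingBlock(nn.Module):
    """Affine coupling on one coordinate, conditioned on a per-frame code.
    Invertible by construction, so observation->canonical and its inverse
    compose into exact frame-to-frame maps (cycle consistency for free)."""
    def __init__(self, cond_dim, axis, hidden=64):
        super().__init__()
        self.axis = axis                        # coordinate transformed by this block
        self.net = nn.Sequential(
            nn.Linear(2 + cond_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2),               # predicts (scale, shift)
        )

    def _params(self, x, c):
        # Depends only on the coordinates that this block leaves untouched.
        rest = torch.cat([x[..., :self.axis], x[..., self.axis + 1:]], dim=-1)
        s, t = self.net(torch.cat([rest, c], dim=-1)).chunk(2, dim=-1)
        return torch.tanh(s), t                 # bounded scale for stability

    def forward(self, x, c):
        s, t = self._params(x, c)
        y = x.clone()
        y[..., self.axis:self.axis + 1] = x[..., self.axis:self.axis + 1] * torch.exp(s) + t
        return y

    def inverse(self, y, c):
        s, t = self._params(y, c)               # same params: untouched coords unchanged
        x = y.clone()
        x[..., self.axis:self.axis + 1] = (y[..., self.axis:self.axis + 1] - t) * torch.exp(-s)
        return x

class InvertibleDeformation(nn.Module):
    def __init__(self, cond_dim=16):
        super().__init__()
        self.blocks = nn.ModuleList([CouplingBlock(cond_dim, axis=i % 3) for i in range(6)])

    def to_canonical(self, x, c):
        for b in self.blocks:
            x = b(x, c)
        return x

    def to_observation(self, x, c):
        for b in reversed(self.blocks):
            x = b.inverse(x, c)
        return x

deform = InvertibleDeformation()
codes = torch.randn(2, 16)                      # hypothetical per-frame codes
p0 = torch.randn(5, 3)                          # points in frame 0
p1 = deform.to_observation(deform.to_canonical(p0, codes[0].expand(5, -1)),
                           codes[1].expand(5, -1))   # exact frame 0 -> frame 1 map
print(p1.shape)
```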
Figure 1. Given a monocular image sequence, NR-NeRF reconstructs a single canonical neural radiance field to represent geometry and appearance, and a per-time-step deformation field. We can render the scene into a novel spatio-temporal camera trajectory that significantly differs from the input trajectory. NR-NeRF also learns rigidity scores and correspondences without direct supervision on either. We can use the rigidity scores to remove the foreground, we can supersample along the time dimension, and we can exaggerate or dampen motion.
Capturing general deforming scenes is crucial for many computer graphics and vision applications, and it is especially challenging when only monocular RGB video is available. Competing methods assume dense point tracks, 3D templates, large-scale training datasets, or capture only small-scale deformations. In contrast, our method, UB4D, makes none of these assumptions and outperforms the previous state of the art in challenging scenarios. Our technique includes two new components in the context of non-rigid 3D reconstruction: 1) a coordinate-based, implicit neural representation for non-rigid scenes that enables unbiased reconstruction of dynamic scenes, and 2) a novel dynamic scene flow loss that enables the reconstruction of larger deformations. Results on our new dataset, which will be made publicly available, demonstrate a clear improvement over the state of the art in terms of surface reconstruction accuracy and robustness to large deformations. Visit the project page at https://4dqv.mpi-inf.mpg.de/ub4d/.
3D reconstruction and novel view synthesis of dynamic scenes from collections of single views recently gained increased attention. Existing work shows impressive results for synthetic setups and forward-facing real-world data, but is severely limited in the training speed and angular range for generating novel views. This paper addresses these limitations and proposes a new method for full 360{\deg} novel view synthesis of non-rigidly deforming scenes. At the core of our method are: 1) An efficient deformation module that decouples the processing of spatial and temporal information for acceleration at training and inference time; and 2) A static module representing the canonical scene as a fast hash-encoded neural radiance field. We evaluate the proposed approach on the established synthetic D-NeRF benchmark, which enables efficient reconstruction from a single monocular view per time-frame randomly sampled from a full hemisphere. We refer to this form of inputs as monocularized data. To prove its practicality for real-world scenarios, we recorded twelve challenging sequences with human actors by sampling single frames from a synchronized multi-view rig. In both cases, our method is trained significantly faster than previous methods (minutes instead of days) while achieving higher visual accuracy for generated novel views. Our source code and data are available at our project page https://graphics.tu-bs.de/publications/kappel2022fast.
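A toy sketch of what decoupling spatial and temporal processing in a deformation module can look like: one small MLP embeds points, another embeds time, and a lightweight head fuses the two, so either embedding can be cached and reused. The layer sizes and fusion scheme are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class DecoupledDeformation(nn.Module):
    """Process spatial and temporal inputs in two separate small MLPs and fuse
    them in a lightweight head, so per-point features can be reused across
    time steps and per-time features across points."""
    def __init__(self, feat=32):
        super().__init__()
        self.spatial = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, feat))
        self.temporal = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, feat))
        self.head = nn.Sequential(nn.Linear(2 * feat, 64), nn.ReLU(), nn.Linear(64, 3))

    def forward(self, x, t):
        s = self.spatial(x)                                        # (N, feat), reusable per point
        tau = self.temporal(t.view(1, 1)).expand(x.shape[0], -1)   # (N, feat), reusable per time
        return x + self.head(torch.cat([s, tau], dim=-1))          # deformed positions

warp = DecoupledDeformation()
print(warp(torch.randn(4, 3), torch.tensor(0.25)).shape)           # torch.Size([4, 3])
```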
Representing and synthesizing novel views in real-world dynamic scenes from casual monocular videos is a long-standing problem. Existing solutions typically approach dynamic scenes by applying geometry techniques or utilizing temporal information between several adjacent frames without considering the underlying background distribution in the entire scene or the transmittance over the ray dimension, limiting their performance on static and occlusion areas. Our approach $\textbf{D}$istribution-$\textbf{D}$riven neural radiance fields offers high-quality view synthesis and a 3D solution to $\textbf{D}$etach the background from the entire $\textbf{D}$ynamic scene, which is called $\text{D}^4$NeRF. Specifically, it employs a neural representation to capture the scene distribution in the static background and a 6D-input NeRF to represent dynamic objects, respectively. Each ray sample is given an additional occlusion weight to indicate the transmittance lying in the static and dynamic components. We evaluate $\text{D}^4$NeRF on public dynamic scenes and our urban driving scenes acquired from an autonomous-driving dataset. Extensive experiments demonstrate that our approach outperforms previous methods in rendering texture details and motion areas while also producing a clean static background. Our code will be released at https://github.com/Luciferbobo/D4NeRF.
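The per-sample occlusion weight can be illustrated with a generic two-branch volume-rendering step that blends static and dynamic density/color before computing transmittance; the exact weighting and the 6D parameterization used by D^4NeRF may differ from this sketch.

```python
import torch

def composite_two_branch(sigma_s, rgb_s, sigma_d, rgb_d, occ_w, deltas):
    """Blend a static and a dynamic radiance branch along a ray.
    occ_w in [0, 1] weights how much each sample is explained by the dynamic
    branch; (1 - occ_w) goes to the static branch.
    Shapes: densities/weights (num_samples,), colors (num_samples, 3),
    deltas are the segment lengths between samples."""
    sigma = (1.0 - occ_w) * sigma_s + occ_w * sigma_d
    rgb = (1.0 - occ_w)[:, None] * rgb_s + occ_w[:, None] * rgb_d
    alpha = 1.0 - torch.exp(-sigma * deltas)
    # Transmittance: probability that the ray reaches each sample unoccluded.
    trans = torch.cumprod(torch.cat([torch.ones(1), 1.0 - alpha + 1e-10]), dim=0)[:-1]
    weights = alpha * trans
    color = (weights[:, None] * rgb).sum(dim=0)
    return color, weights

n = 64
color, w = composite_two_branch(
    torch.rand(n), torch.rand(n, 3),       # static density / color
    torch.rand(n), torch.rand(n, 3),       # dynamic density / color
    torch.rand(n),                         # per-sample occlusion weight
    torch.full((n,), 0.02),                # sample spacing along the ray
)
print(color)
```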
Obtaining photorealistic reconstructions of objects from sparse views is inherently ambiguous and can only be achieved by learning suitable reconstruction priors. Earlier works on sparse rigid object reconstruction successfully learned such priors from large datasets such as CO3D. In this paper, we extend this approach to dynamic objects. We use cats and dogs as a representative example and introduce Common Pets in 3D (CoP3D), a collection of crowd-sourced videos showing around 4,200 distinct pets. CoP3D is one of the first large-scale datasets for benchmarking non-rigid 3D reconstruction "in the wild". We also propose Tracker-NeRF, a method for learning 4D reconstruction from our dataset. At test time, given a small number of video frames of an unseen object, Tracker-NeRF predicts the trajectories of its 3D points and generates new views, interpolating viewpoint and time. Results on CoP3D reveal significantly better non-rigid new-view synthesis performance than existing baselines.
This paper addresses the challenge of reconstructing an animatable human model from a multi-view video. Some recent works propose to decompose a non-rigidly deforming scene into a canonical neural radiance field and a set of deformation fields that map observation-space points to the canonical space, thereby enabling them to learn the dynamic scene from images. However, they represent the deformation field as translational vector fields or SE(3) fields, which makes the optimization highly under-constrained. Moreover, these representations cannot be explicitly controlled by input motions. Instead, we introduce a pose-driven deformation field based on the linear blend skinning algorithm, which combines a blend weight field and the 3D human skeleton to produce observation-to-canonical correspondences. Since 3D human skeletons are more observable, they can regularize the learning of the deformation field. Moreover, the pose-driven deformation field can be controlled by input skeletal motions to generate new deformation fields that animate the canonical human model. Experiments show that our approach significantly outperforms recent human modeling methods. The code is available at https://zju3dv.github.io/animatable_nerf/.
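Since the deformation field is built on the standard linear blend skinning (LBS) algorithm, a minimal LBS warp is sketched below: each point is transformed by every bone and the results are blended with per-point weights. In the paper the weights come from a learned blend weight field and the transforms from the posed 3D skeleton; here both are filled with toy values.

```python
import torch

def linear_blend_skinning(points, weights, rotations, translations):
    """Warp points with per-point blend weights over K bone transforms.
    points: (N, 3), weights: (N, K) summing to 1 per point,
    rotations: (K, 3, 3), translations: (K, 3)."""
    # Apply every bone transform to every point: (N, K, 3)
    transformed = torch.einsum('kij,nj->nki', rotations, points) + translations[None]
    # Blend the K candidate positions with the per-point weights.
    return (weights[..., None] * transformed).sum(dim=1)

N, K = 5, 24                        # 24 joints, as in a SMPL-style skeleton
points = torch.randn(N, 3)
weights = torch.softmax(torch.randn(N, K), dim=-1)   # e.g. queried from a blend weight field
rotations = torch.eye(3).expand(K, 3, 3)             # toy: identity bone rotations
translations = torch.zeros(K, 3)
print(linear_blend_skinning(points, weights, rotations, translations))
```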
Synthesizing photo-realistic images and videos is at the heart of computer graphics and has been the focus of decades of research. Traditionally, synthetic images of a scene are generated using rendering algorithms such as rasterization or ray tracing, which take representations of geometry and material properties as input. Collectively, these inputs define the actual scene and what is rendered, and are referred to as the scene representation (where a scene consists of one or more objects). Example scene representations are triangle meshes with accompanying textures (e.g., created by an artist), point clouds (e.g., from a depth sensor), volumetric grids (e.g., from a CT scan), or implicit surface functions (e.g., truncated signed distance fields). The reconstruction of such a scene representation from observations using differentiable rendering losses is known as inverse graphics or inverse rendering. Neural rendering is closely related, and combines ideas from classical computer graphics and machine learning to create algorithms for synthesizing images from real-world observations. Neural rendering is a leap forward towards the goal of synthesizing photo-realistic image and video content. In recent years, we have seen immense progress in this field through hundreds of publications that show different ways to inject learnable components into the rendering pipeline. This state-of-the-art report on advances in neural rendering focuses on methods that combine classical rendering principles with learned 3D scene representations, now often referred to as neural scene representations. A key advantage of these methods is that they are 3D-consistent by design, enabling applications such as novel-viewpoint synthesis of a captured scene. In addition to methods that handle static scenes, we also cover neural scene representations for modeling non-rigidly deforming objects...
Humans constantly interact with objects in daily life tasks. Capturing such processes and subsequently conducting visual inferences from a fixed viewpoint suffers from occlusions, shape and texture ambiguities, motions, etc. To mitigate the problem, it is essential to build a training dataset that captures free-viewpoint interactions. We construct a dense multi-view dome to acquire a complex human object interaction dataset, named HODome, that consists of $\sim$75M frames on 10 subjects interacting with 23 objects. To process the HODome dataset, we develop NeuralDome, a layer-wise neural processing pipeline tailored for multi-view video inputs to conduct accurate tracking, geometry reconstruction and free-view rendering, for both human subjects and objects. Extensive experiments on the HODome dataset demonstrate the effectiveness of NeuralDome on a variety of inference, modeling, and rendering tasks. Both the dataset and the NeuralDome tools will be disseminated to the community for further development.
Prior work for articulated 3D shape reconstruction often relies on specialized sensors (e.g., synchronized multi-camera systems) or pre-built 3D deformable models (e.g., SMAL or SMPL). Such methods do not scale to diverse sets of objects in the wild. We present BANMo, a method that requires neither a specialized sensor nor a pre-defined template shape. BANMo builds high-fidelity, articulated 3D models (including shape and animatable skinning weights) from many monocular casual videos in a differentiable rendering framework. While the use of many videos provides more coverage of camera views and object articulations, it introduces significant challenges in establishing correspondence across scenes with different backgrounds, illumination conditions, and so on. Our key insight is to merge three schools of thought: (1) classic deformable shape models that make use of articulated bones and blend skinning, (2) volumetric neural radiance fields (NeRFs) that are amenable to gradient-based optimization, and (3) canonical embeddings that generate correspondences between pixels and an articulated model. We introduce neural blend skinning models that allow for differentiable and invertible articulated deformations. When combined with canonical embeddings, such models allow us to establish dense correspondences across videos that can be self-supervised with cycle consistency. On real and synthetic datasets, BANMo shows higher-fidelity 3D reconstructions than prior work for humans and animals, with the ability to render realistic images from novel viewpoints and poses. Project webpage: banmo-www.github.io.
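One of the three ingredients, canonical embeddings that yield pixel-to-model correspondences supervised with cycle consistency, can be sketched as a soft feature match followed by a reprojection check. The warp and projection functions below are stand-ins (identity and a unit-focal pinhole), not BANMo's actual articulation model.

```python
import torch
import torch.nn.functional as F

def soft_canonical_match(pixel_feat, surf_pts, surf_embed, temperature=0.1):
    """Soft-argmax match of a pixel descriptor against canonical surface
    embeddings, giving an expected canonical 3D location.
    pixel_feat: (D,), surf_pts: (M, 3), surf_embed: (M, D)."""
    logits = surf_embed @ pixel_feat / temperature
    probs = torch.softmax(logits, dim=0)                    # (M,)
    return (probs[:, None] * surf_pts).sum(dim=0)           # expected 3D point

def cycle_loss(pixel_uv, pixel_feat, surf_pts, surf_embed, warp_to_cam, project):
    """2D cycle consistency: the matched canonical point, warped into the
    current frame and projected, should land back on the source pixel."""
    x_canonical = soft_canonical_match(pixel_feat, surf_pts, surf_embed)
    x_cam = warp_to_cam(x_canonical)                        # articulate + pose (stand-in)
    return F.mse_loss(project(x_cam), pixel_uv)

# Toy usage with an identity warp and a pinhole projection with unit focal length.
M, D = 100, 16
loss = cycle_loss(
    torch.tensor([0.05, -0.02]),
    torch.randn(D),
    torch.randn(M, 3) + torch.tensor([0.0, 0.0, 3.0]),
    torch.randn(M, D),
    lambda x: x,
    lambda x: x[:2] / x[2].clamp(min=1e-6),
)
print(loss)
```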
We address the problem of synthesizing novel views from a monocular video depicting a complex dynamic scene. State-of-the-art methods based on temporally varying Neural Radiance Fields (aka dynamic NeRFs) have shown impressive results on this task. However, for long videos with complex object motions and uncontrolled camera trajectories, these methods can produce blurry or inaccurate renderings, hampering their use in real-world applications. Instead of encoding the entire dynamic scene within the weights of an MLP, we present a new approach that addresses these limitations by adopting a volumetric image-based rendering framework that synthesizes new viewpoints by aggregating features from nearby views in a scene-motion-aware manner. Our system retains the advantages of prior methods in its ability to model complex scenes and view-dependent effects, but also enables synthesizing photo-realistic novel views from long videos featuring complex scene dynamics with unconstrained camera trajectories. We demonstrate significant improvements over state-of-the-art methods on dynamic scene datasets, and also apply our approach to in-the-wild videos with challenging camera and object motion, where prior methods fail to produce high-quality renderings. Our project webpage is at dynibar.github.io.
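A toy version of scene-motion-aware aggregation: a query point is carried to each source frame's time by a trajectory field, projected into that frame, and its sampled features are averaged. The trajectory field, projection matrices, and plain averaging are placeholders; DynIBaR's actual aggregation is learned and considerably more involved.

```python
import torch
import torch.nn.functional as F

def motion_aware_aggregate(x, t_target, source_feats, source_proj, source_times, traj_fn):
    """Gather features for a 3D point x (3,) at time t_target by advecting it
    to each source frame's time with a trajectory field and projecting it into
    that frame's feature map.
    source_feats: (V, C, H, W); source_proj: list of (3, 4) projection matrices."""
    gathered = []
    for feats, P, t_src in zip(source_feats, source_proj, source_times):
        x_src = traj_fn(x, t_target, t_src)                   # 3D point at source time
        uvw = P @ torch.cat([x_src, torch.ones(1)])           # project to the source view
        uv = uvw[:2] / uvw[2].clamp(min=1e-6)
        H, W = feats.shape[-2:]
        # Convert pixel coordinates to grid_sample's [-1, 1] range.
        grid = torch.stack([2 * uv[0] / (W - 1) - 1,
                            2 * uv[1] / (H - 1) - 1]).view(1, 1, 1, 2)
        gathered.append(F.grid_sample(feats[None], grid, align_corners=True).view(-1))
    return torch.stack(gathered).mean(dim=0)                  # simple average aggregation

# Toy usage with an identity trajectory field (i.e., a static point).
V, C, H, W = 3, 8, 32, 32
agg = motion_aware_aggregate(
    torch.tensor([0.1, 0.2, 1.5]), 0.5,
    torch.randn(V, C, H, W),
    [torch.eye(3, 4) for _ in range(V)],
    [0.0, 0.5, 1.0],
    lambda x, t0, t1: x,
)
print(agg.shape)     # torch.Size([8])
```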
We present neural radiance fields for the rendering and temporal (4D) reconstruction of humans, captured by sparse cameras or even from a monocular video. Our approach combines ideas from neural scene representation, novel view synthesis, and implicit statistical geometric human representations, coupled using novel loss functions. Instead of learning a radiance field with a uniform occupancy prior, we constrain it by a structured implicit human body model represented using signed distance functions. This allows us to robustly fuse information from sparse views and to generalize well beyond the poses or views observed in training. Moreover, we apply geometric constraints to co-learn the structure of the observed subject, including both body and clothing, and to regularize the radiance field to geometrically plausible solutions. Extensive experiments on multiple datasets demonstrate the robustness and accuracy of our approach, its generalization capability significantly beyond a small set of training poses and views, and statistical extrapolation beyond the observed shapes.
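One standard way to let a signed-distance body model constrain a radiance field is to derive volume density from the SDF, for example with the Laplace-CDF transform used in VolSDF. The snippet below shows that generic transform with a sphere standing in for the body-model SDF; it is not the paper's specific formulation.

```python
import torch

def sdf_to_density(sdf, beta=0.05, alpha=None):
    """Map a signed distance value to a volume density via the Laplace CDF
    (VolSDF-style): density is high inside the surface (sdf < 0) and falls
    off smoothly outside it."""
    alpha = 1.0 / beta if alpha is None else alpha
    return alpha * torch.where(
        sdf <= 0,
        1.0 - 0.5 * torch.exp(sdf / beta),
        0.5 * torch.exp(-sdf / beta),
    )

# Example: a unit sphere as a stand-in for a body-model SDF.
pts = torch.randn(8, 3) * 1.5
sdf = pts.norm(dim=-1) - 1.0
print(sdf_to_density(sdf))
```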
Capturing and rendering life-like hair is particularly challenging due to its fine geometric structure, complex physical interactions, and non-trivial visual appearance. Yet, hair is a key component of believable avatars. In this paper, we address these problems: 1) we use a novel volumetric hair representation composed of thousands of primitives; building on the latest advances in neural rendering, each primitive can be rendered efficiently. 2) To obtain a reliable control signal, we present a novel way of tracking hair at the strand level; to keep the computational effort manageable, we use guide hairs and classic techniques to expand them into a dense hood of hair. 3) To better enforce temporal consistency and the generalization ability of our model, we further optimize the 3D scene flow of our representation with optical flow, using volumetric ray marching. Our method can not only create realistic renders of recorded multi-view sequences, but also create renders for new hair configurations by providing new control signals. We compare our method with existing work on viewpoint synthesis and drivable animation and achieve state-of-the-art results.
Figure 1: We propose D-NeRF, a method for synthesizing novel views, at an arbitrary point in time, of dynamic scenes with complex non-rigid geometries. We optimize an underlying deformable volumetric function from a sparse set of input monocular views without the need of ground-truth geometry nor multi-view images. The figure shows two scenes under variable points of view and time instances synthesised by the proposed model.
Figure 1. Our method takes a single casually captured video as input and learns a space-time neural irradiance field. (Top) Sample frames from the input video. (Middle) Novel view images rendered from textured meshes constructed from depth maps. (Bottom) Our results rendered from the proposed space-time neural irradiance field. Project page: https://video-nerf.github.io
Neural radiance fields (NeRF) achieve highly photo-realistic novel-view synthesis, but editing the scenes modeled by NeRF-based methods remains challenging, especially for dynamic scenes. We propose editable neural radiance fields that enable end-users to easily edit dynamic scenes and even support topological changes. Given an image sequence from a single camera as input, our network is trained fully automatically and models topologically varying dynamics using our picked-out surface key points. End-users can then edit the scene by simply dragging the key points to desired new positions. To achieve this, we propose a scene analysis method to detect and initialize key points by considering the dynamics in the scene, and a weighted key points strategy to model topologically varying dynamics by joint key points and weights optimization. Our method supports intuitive multi-dimensional (up to 3D) editing and can generate novel scenes that are unseen in the input sequence. Experiments demonstrate that our method achieves high-quality editing on various dynamic scenes and outperforms the state-of-the-art. We will release our code and captured data.
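A toy illustration of how dragged key points can induce a dense edit: each query point blends the key-point displacements with distance-based influence scaled by per-key weights. The Gaussian kernel and the fixed weights here are assumptions; in the method the key points and weights are optimized jointly with the radiance field.

```python
import torch

def keypoint_driven_displacement(x, key_src, key_dst, key_w, sigma=0.1):
    """Displace query points by a weighted blend of key-point motions.
    x: (N, 3) query points; key_src/key_dst: (K, 3) key points before and
    after the user edit; key_w: (K,) per-key-point weights."""
    d2 = ((x[:, None, :] - key_src[None]) ** 2).sum(-1)        # (N, K) squared distances
    infl = key_w[None] * torch.exp(-d2 / (2 * sigma ** 2))     # (N, K) influence
    infl = infl / infl.sum(dim=1, keepdim=True).clamp(min=1e-8)
    return x + infl @ (key_dst - key_src)                      # blended displacement

x = torch.randn(6, 3)
key_src = torch.randn(4, 3)
key_dst = key_src + torch.tensor([0.0, 0.2, 0.0])              # drag the keys upward
print(keypoint_driven_displacement(x, key_src, key_dst, torch.ones(4)))
```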
We introduce REDO, a class-agnostic framework to reconstruct dynamic objects from RGBD or calibrated videos. Compared to prior work, our problem setting is more realistic yet more challenging for three reasons: 1) due to occlusion or camera settings, an object of interest may never be entirely visible, yet we aim to reconstruct the complete shape; 2) we aim to handle different object dynamics, including rigid motion, non-rigid motion, and articulation; 3) we aim to reconstruct different categories of objects with one unified framework. To address these challenges, we develop two novel modules. First, we introduce a canonical 4D implicit function that is pixel-aligned with aggregated temporal visual cues. Second, we develop a 4D transformation module that captures object dynamics to support temporal propagation and aggregation. We study the efficacy of REDO in extensive experiments on the synthetic RGBD video datasets SAIL-VOS 3D and DeformingThings4D++, as well as on the real-world video data 3DPW. We find that REDO outperforms state-of-the-art dynamic reconstruction methods. In ablation studies we validate each developed component.
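Pixel-aligned implicit functions can be sketched as: project query points into an image feature map, sample features at the projections, and decode them together with the point coordinates (and a time index) into occupancy. The network sizes and the way time is injected below are placeholders, not REDO's exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PixelAlignedOccupancy(nn.Module):
    """Predict occupancy for 3D query points from image features sampled at
    their 2D projections (PIFu-style pixel alignment)."""
    def __init__(self, feat_dim=32, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim + 3 + 1, hidden), nn.ReLU(),    # features + xyz + time
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, feats, uv, xyz, t):
        # feats: (1, C, H, W); uv: (N, 2) in [-1, 1]; xyz: (N, 3); t: scalar
        sampled = F.grid_sample(feats, uv.view(1, -1, 1, 2), align_corners=True)
        sampled = sampled.squeeze(0).squeeze(-1).t()           # (N, C)
        time = torch.full((xyz.shape[0], 1), float(t))
        return torch.sigmoid(self.mlp(torch.cat([sampled, xyz, time], dim=-1)))

model = PixelAlignedOccupancy()
occ = model(torch.randn(1, 32, 64, 64), torch.rand(10, 2) * 2 - 1,
            torch.randn(10, 3), t=0.3)
print(occ.shape)      # torch.Size([10, 1])
```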
High-fidelity reconstruction of fluids from sparse multi-view RGB videos remains a formidable challenge, due to the complexity of the underlying physics as well as the complex occlusion and lighting in captures. Existing solutions either assume knowledge of obstacles and lighting, or focus only on simple fluid scenes without obstacles or complex lighting, and are thus unsuitable for real-world scenes with unknown lighting or arbitrary obstacles. We present the first method to reconstruct dynamic fluids by leveraging the governing physics (i.e., the Navier-Stokes equations) in an end-to-end optimization from sparse videos, without taking lighting conditions, geometry information, or boundary conditions as input. We provide a continuous spatio-temporal scene representation using neural networks as the density and velocity solution functions for the fluid and as the radiance field function for static objects. With a hybrid architecture that separates static and dynamic content, fluid interactions with static obstacles are reconstructed for the first time without additional geometry input or human labeling. By augmenting time-varying neural radiance fields with physics-informed deep learning, our method benefits from the supervision of both images and physical priors. To achieve robust optimization from sparse views, we introduce a layer-by-layer growing strategy to progressively increase the network capacity. Using progressively growing models with a new regularization term, we manage to disentangle the density-color ambiguity in the radiance field without overfitting. A pretrained density-to-velocity fluid model is further leveraged as a data prior to avoid suboptimal velocities that underestimate vorticity but trivially fulfill the physical equations. Our method exhibits high-quality results with relaxed constraints and strong flexibility on a representative set of synthetic and real flow captures.
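The physics-informed part can be illustrated with PINN-style soft residuals: automatic differentiation of neural density and velocity fields gives incompressibility and density-transport residuals that are added to the loss. The full method also enforces the momentum equation and uses further priors; this sketch only shows the autograd mechanics with toy MLPs.

```python
import torch
import torch.nn as nn

class Field(nn.Module):
    """A small MLP mapping (x, y, z, t) to an output (density: 1 channel, velocity: 3)."""
    def __init__(self, out_dim):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(4, 64), nn.Tanh(),
            nn.Linear(64, 64), nn.Tanh(),
            nn.Linear(64, out_dim),
        )
    def forward(self, xyzt):
        return self.mlp(xyzt)

def physics_residuals(density_net, velocity_net, xyzt):
    """Soft physics constraints evaluated with autograd:
    (1) incompressibility  div(u) = 0
    (2) density transport  d_rho/dt + u . grad(rho) = 0"""
    xyzt = xyzt.clone().requires_grad_(True)
    rho = density_net(xyzt)                                   # (N, 1)
    u = velocity_net(xyzt)                                    # (N, 3)

    grad_rho = torch.autograd.grad(rho.sum(), xyzt, create_graph=True)[0]  # (N, 4)
    div_u = 0.0
    for i in range(3):                                        # accumulate du_i / dx_i
        du_i = torch.autograd.grad(u[:, i].sum(), xyzt, create_graph=True)[0]
        div_u = div_u + du_i[:, i]

    transport = grad_rho[:, 3] + (u * grad_rho[:, :3]).sum(dim=-1)
    return (div_u ** 2).mean() + (transport ** 2).mean()

density_net, velocity_net = Field(1), Field(3)
loss = physics_residuals(density_net, velocity_net, torch.rand(128, 4))
loss.backward()
print(loss.item())
```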