We present Neural Feature Fusion Fields (N3F), a method that improves dense 2D image feature extractors when the latter are applied to the analysis of multiple images forming a 3D scene. Given an image feature extractor, for example one pre-trained with self-supervision, N3F uses it as a teacher to learn a student network defined in 3D space. The 3D student network, analogous to a neural radiance field that distills the teacher's features, can be trained with the usual differentiable rendering machinery. As a result, N3F is readily applicable to most neural rendering formulations, including vanilla NeRF and its extensions to complex dynamic scenes. We show that our method not only enables semantic understanding in the context of scene-specific neural fields without the use of manual labels, but also consistently improves over the self-supervised 2D baselines. This is demonstrated on a variety of tasks, such as 2D object retrieval, 3D segmentation, and scene editing, across diverse sequences, including long egocentric videos from the EPIC-KITCHENS benchmark.
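The core mechanism is easiest to see in code. Below is a minimal PyTorch sketch of the teacher-student distillation idea: a 3D student predicts a feature vector alongside density, the per-point features are volume-rendered with the usual NeRF weights, and the rendered pixel feature is regressed against the 2D teacher's feature at that pixel. Class and function names (`FeatureFieldMLP`, `render_features`) are illustrative, not N3F's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureFieldMLP(nn.Module):
    """Toy 3D student: maps a point to a density and a D-dim feature vector."""
    def __init__(self, feat_dim=64, hidden=128):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(3, hidden), nn.ReLU(),
                                   nn.Linear(hidden, hidden), nn.ReLU())
        self.sigma_head = nn.Linear(hidden, 1)
        self.feat_head = nn.Linear(hidden, feat_dim)

    def forward(self, x):                       # x: (R, S, 3) ray sample points
        h = self.trunk(x)
        return F.softplus(self.sigma_head(h)), self.feat_head(h)

def render_features(sigma, feats, deltas):
    """Standard volume-rendering weights applied to per-point features."""
    alpha = 1.0 - torch.exp(-sigma.squeeze(-1) * deltas)              # (R, S)
    trans = torch.cumprod(1.0 - alpha + 1e-10, dim=-1)
    trans = torch.cat([torch.ones_like(trans[:, :1]), trans[:, :-1]], dim=-1)
    weights = alpha * trans                                           # (R, S)
    return (weights.unsqueeze(-1) * feats).sum(dim=1)                 # (R, D)

def distillation_loss(student, teacher_feats, pts, deltas):
    """teacher_feats: (R, D) 2D teacher features at the pixels the rays pass through."""
    sigma, feats = student(pts)
    rendered = render_features(sigma, feats, deltas)
    return F.mse_loss(rendered, teacher_feats)
```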
Neural Radiance Fields (NeRFs) encode the radiance in a scene parameterized by the scene's plenoptic function. This is achieved by using an MLP together with a mapping to a higher-dimensional space, and has been proven to capture scenes with a great level of detail. Naturally, the same parameterization can be used to encode additional properties of the scene, beyond just its radiance. A particularly interesting property in this regard is the semantic decomposition of the scene. We introduce a novel technique for semantic soft decomposition of neural radiance fields (named SSDNeRF) which jointly encodes semantic signals in combination with radiance signals of a scene. Our approach provides a soft decomposition of the scene into semantic parts, enabling us to correctly encode multiple semantic classes blending along the same direction -- an impossible feat for existing methods. Not only does this lead to a detailed, 3D semantic representation of the scene, but we also show that the regularizing effects of the MLP used for encoding help to improve the semantic representation. We show state-of-the-art segmentation and reconstruction results on a dataset of common objects and demonstrate how the proposed approach can be applied for high quality temporally consistent video editing and re-compositing on a dataset of casually captured selfie videos.
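A hedged sketch of what "soft decomposition" can look like in practice: an extra MLP head predicts per-point class scores, and the same volume-rendering weights used for radiance composite them into a per-pixel class mixture, so several classes can blend along one ray. This is an illustration of the idea, not SSDNeRF's exact formulation.

```python
import torch
import torch.nn.functional as F

def composite_soft_semantics(weights, sem_logits):
    """weights:    (R, S)    volume-rendering weights of the samples on each ray
       sem_logits: (R, S, C) per-point class scores from an extra MLP head
       Returns a per-pixel soft class mixture (R, C), so multiple classes
       can blend along the same viewing ray."""
    sem_probs = F.softmax(sem_logits, dim=-1)           # per-point class distribution
    return (weights.unsqueeze(-1) * sem_probs).sum(1)   # alpha-composited mixture

def semantic_loss(pixel_mixture, gt_labels, eps=1e-6):
    """Illustrative supervision against a 2D semantic map (gt_labels: (R,) class ids)."""
    return F.nll_loss(torch.log(pixel_mixture + eps), gt_labels)
```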
Synthesizing photo-realistic images and videos is at the heart of computer graphics and has been the focus of decades of research. Traditionally, synthetic images of a scene are generated using rendering algorithms such as rasterization or ray tracing, which take representations of geometry and material properties as input. Collectively, these inputs define the actual scene and what is rendered, and are referred to as the scene representation (where a scene consists of one or more objects). Example scene representations are triangle meshes with accompanying textures (e.g., created by an artist), point clouds (e.g., from a depth sensor), volumetric grids (e.g., from a CT scan), or implicit surface functions (e.g., truncated signed distance fields). The reconstruction of such a scene representation from observations using differentiable rendering losses is known as inverse graphics or inverse rendering. Neural rendering is closely related and combines ideas from classical computer graphics and machine learning to create algorithms for synthesizing images from real-world observations. Neural rendering is a leap forward towards the goal of synthesizing photo-realistic image and video content. In recent years, we have seen immense progress in this field through hundreds of publications that show different ways to inject learnable components into the rendering pipeline. This state-of-the-art report on advances in neural rendering focuses on methods that combine classical rendering principles with learned 3D scene representations, often now referred to as neural scene representations. A key advantage of these methods is that they are 3D-consistent by design, enabling applications such as novel-viewpoint synthesis of a captured scene. In addition to methods that handle static scenes, we also cover neural scene representations for modeling non-rigidly deforming objects...
Obtaining 3D object representations is important for creating photo-realistic simulations and collecting assets for AR/VR applications. Neural fields have shown their effectiveness in learning a continuous volumetric representation of a scene from 2D images, but acquiring object representations from these models with only weak supervision remains an open challenge. In this paper we introduce LaTeRF, a method for extracting an object of interest from a scene given 2D images with known camera poses, a natural-language description of the object, and a small number of annotated object and non-object points in the input images. To faithfully extract the object from the scene, LaTeRF extends the NeRF formulation with an additional "objectness" probability at each 3D point. Additionally, we leverage the rich latent space of a pre-trained CLIP model, combined with our differentiable object renderer, to inpaint the occluded parts of the object. We demonstrate high-fidelity object extraction on both synthetic and real datasets and justify our design choices through an extensive ablation study.
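The objectness extension can be sketched as follows, assuming a standard NeRF ray sampler that already yields volume-rendering weights; the CLIP-based inpainting term is omitted. Names and the binary cross-entropy on user-clicked pixels are illustrative, not LaTeRF's actual code.

```python
import torch
import torch.nn.functional as F

def render_object_mask(weights, objectness_logits):
    """weights:           (R, S) volume-rendering weights
       objectness_logits: (R, S) extra per-point 'objectness' score
       Returns the expected per-pixel object probability (R,)."""
    p_obj = torch.sigmoid(objectness_logits)
    return (weights * p_obj).sum(-1)

def sparse_pixel_loss(pred_obj, labels, label_mask):
    """Supervise only the few pixels the user marked as object / non-object.
       labels: (R,) float 0/1, valid where label_mask is True."""
    return F.binary_cross_entropy(pred_obj[label_mask], labels[label_mask])
```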
We propose Panoptic Lifting, a novel approach for learning panoptic 3D volumetric representations from images of in-the-wild scenes. Once trained, our model can render color images together with 3D-consistent panoptic segmentation from novel viewpoints. Unlike existing approaches which use 3D input directly or indirectly, our method requires only machine-generated 2D panoptic segmentation masks inferred from a pre-trained network. Our core contribution is a panoptic lifting scheme based on a neural field representation that generates a unified and multi-view consistent, 3D panoptic representation of the scene. To account for inconsistencies of 2D instance identifiers across views, we solve a linear assignment with a cost based on the model's current predictions and the machine-generated segmentation masks, thus enabling us to lift 2D instances to 3D in a consistent way. We further propose and ablate contributions that make our method more robust to noisy, machine-generated labels, including test-time augmentations for confidence estimates, segment consistency loss, bounded segmentation fields, and gradient stopping. Experimental results validate our approach on the challenging Hypersim, Replica, and ScanNet datasets, improving by 8.4, 13.8, and 10.6% in scene-level PQ over state of the art.
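The linear-assignment step can be illustrated with SciPy's Hungarian solver. The cost below is a simple soft overlap between the model's current per-pixel instance probabilities and the machine-generated 2D masks; the paper's exact cost may differ, so treat this as a sketch.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_instances(pred_probs, gt_masks):
    """pred_probs: (P, K) per-pixel probabilities over K 3D surrogate instance ids
       gt_masks:   (P, M) one-hot machine-generated 2D instance masks for this view
       Returns a mapping from each 2D instance m to a 3D surrogate id k."""
    # Cost: negative soft overlap between each surrogate id and each 2D instance.
    cost = -(pred_probs.T @ gt_masks)          # (K, M)
    k_idx, m_idx = linear_sum_assignment(cost)
    return dict(zip(m_idx.tolist(), k_idx.tolist()))
```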
We present neural radiance fields for rendering and temporal (4D) reconstruction of humans in motion, captured by sparse cameras or even from a monocular video. Our approach combines ideas from neural scene representation, novel view synthesis, and implicit statistical geometric human representations, coupled using novel loss functions. Instead of learning a radiance field with a uniform occupancy prior, we constrain it by a structured implicit human body model represented with signed distance functions. This allows us to robustly fuse information from sparse views and to generalize well beyond the poses and views observed during training. Moreover, we apply geometric constraints to co-learn the structure of the observed subject -- including both body and clothing -- and to regularize the radiance field towards geometrically plausible solutions. Extensive experiments on multiple datasets demonstrate the robustness and accuracy of our approach, its generalization capability significantly beyond a small set of training poses and views, and its statistical extrapolation beyond the observed shapes.
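As a point of reference, one common way to let a signed-distance body prior drive volume rendering is a VolSDF-style conversion from signed distance to density; the snippet below is that generic conversion, not necessarily the exact formulation used in this work.

```python
import torch

def sdf_to_density(sdf, beta=0.1, alpha=None):
    """Convert a signed distance (negative inside the body) into a volume-
       rendering density via the Laplace CDF of -sdf, in the style of VolSDF.
       Illustrative stand-in, not this paper's exact formulation."""
    alpha = 1.0 / beta if alpha is None else alpha
    # Density saturates inside the surface and decays smoothly outside.
    return alpha * torch.where(
        sdf <= 0,
        1.0 - 0.5 * torch.exp(sdf / beta),
        0.5 * torch.exp(-sdf / beta),
    )
```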
We present NeSF, a method for producing 3D semantic fields from posed RGB images alone. In place of classical 3D representations, our method builds on recent work on implicit neural scene representations, wherein 3D structure is captured by point-wise functions. We leverage this approach to recover 3D density fields, upon which we then train a 3D semantic segmentation model supervised by posed 2D semantic maps. Despite being trained only on 2D signals, our method is able to generate 3D-consistent semantic maps from novel camera poses and can be queried at arbitrary 3D points. Notably, NeSF is compatible with any method that produces a density field, and its accuracy improves as the quality of the density field improves. Our empirical analysis demonstrates quality comparable to competitive 2D and 3D semantic segmentation baselines on complex, realistically rendered synthetic scenes. Our method is the first to offer truly dense 3D scene segmentation that requires only 2D supervision for training and no semantic input for inference on novel scenes. We encourage readers to visit the project website.
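A minimal sketch of the 2D-supervised training signal: semantic logits produced by a 3D segmentation network (run on the frozen, pre-trained density field) are trilinearly sampled at ray points, composited with the density-derived weights, and compared against posed 2D semantic maps. Shapes, the grid layout, and function names are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def sample_semantic_logits(logit_grid, pts):
    """logit_grid: (1, C, D, H, W) output of a 3D segmentation net on the density grid
       pts:        (R, S, 3) ray sample points, normalized to [-1, 1] in (x, y, z)
                   order matching the grid axes
       Returns per-point class logits (R, S, C)."""
    grid = pts.view(1, 1, pts.shape[0], pts.shape[1], 3)          # (1, 1, R, S, 3)
    out = F.grid_sample(logit_grid, grid, align_corners=True)     # (1, C, 1, R, S)
    return out[0, :, 0].permute(1, 2, 0)                          # (R, S, C)

def nesf_2d_loss(weights, point_logits, gt_sem):
    """weights: (R, S) from the frozen density field; gt_sem: (R,) 2D class labels."""
    pixel_logits = (weights.unsqueeze(-1) * point_logits).sum(1)  # (R, C)
    return F.cross_entropy(pixel_logits, gt_sem)
```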
Humans can perceive a scene in 3D from only a handful of 2D views. For AI agents, the ability to recognize a scene from any viewpoint given only a few images enables them to interact efficiently with the scene and its objects. In this work, we attempt to endow machines with this ability. We propose a model that takes as input a few RGB images of a new scene and recognizes the scene from novel viewpoints by segmenting it into semantic categories, all without access to the RGB images from those views. We pair 2D scene recognition with an implicit 3D representation and learn from multi-view 2D annotations of hundreds of scenes, without any 3D supervision beyond camera poses. We experiment on challenging datasets and demonstrate our model's ability to jointly capture the semantics and geometry of novel scenes with diverse layouts, object types, and shapes.
Radiance Fields (RF) are popular to represent casually-captured scenes for new view generation and have been used for applications beyond it. Understanding and manipulating scenes represented as RFs have to naturally follow to facilitate mixed reality on personal spaces. Semantic segmentation of objects in the 3D scene is an important step for that. Prior segmentation efforts using feature distillation show promise but don't scale to complex objects with diverse appearance. We present a framework to interactively segment objects with fine structure. Nearest neighbor feature matching identifies high-confidence regions of the objects using distilled features. Bilateral filtering in a joint spatio-semantic space grows the region to recover accurate segmentation. We show state-of-the-art results of segmenting objects from RFs and compositing them to another scene, changing appearance, etc., moving closer to rich scene manipulation and understanding. Project Page: https://rahul-goel.github.io/isrf/
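The seeding step (before the bilateral growth) can be sketched as a nearest-neighbor match between distilled field features and the features gathered under the user's strokes; the threshold `tau` and the cosine-style normalization are illustrative choices, not ISRF's exact parameters.

```python
import torch
import torch.nn.functional as F

def seed_regions_by_feature_matching(field_feats, stroke_feats, tau=0.5):
    """field_feats:  (N, D) distilled features of candidate 3D points (e.g. voxels)
       stroke_feats: (M, D) features gathered under the user's 2D strokes
       Returns a boolean high-confidence mask over the N points; bilateral
       filtering in joint spatio-semantic space then grows this seed region."""
    field_feats = F.normalize(field_feats, dim=-1)
    stroke_feats = F.normalize(stroke_feats, dim=-1)
    # Distance to the nearest stroke feature; small distance = confident match.
    dists = torch.cdist(field_feats, stroke_feats).min(dim=-1).values   # (N,)
    return dists < tau
```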
We address the problem of synthesizing novel views from a monocular video depicting a complex dynamic scene. State-of-the-art methods based on temporally varying Neural Radiance Fields (aka dynamic NeRFs) have shown impressive results on this task. However, for long videos with complex object motions and uncontrolled camera trajectories, these methods can produce blurry or inaccurate renderings, hampering their use in real-world applications. Instead of encoding the entire dynamic scene within the weights of an MLP, we present a new approach that addresses these limitations by adopting a volumetric image-based rendering framework that synthesizes new viewpoints by aggregating features from nearby views in a scene-motion-aware manner. Our system retains the advantages of prior methods in its ability to model complex scenes and view-dependent effects, but also enables synthesizing photo-realistic novel views from long videos featuring complex scene dynamics with unconstrained camera trajectories. We demonstrate significant improvements over state-of-the-art methods on dynamic scene datasets, and also apply our approach to in-the-wild videos with challenging camera and object motion, where prior methods fail to produce high-quality renderings. Our project webpage is at dynibar.github.io.
Large-scale training data with high-quality annotations is critical for training semantic and instance segmentation models. Unfortunately, pixel-wise annotation is labor-intensive and costly, raising the demand for more efficient labeling strategies. In this work, we present a novel 3D-to-2D label transfer method, Panoptic NeRF, which aims to obtain per-pixel 2D semantic and instance labels from easy-to-obtain coarse 3D bounding primitives. Our method uses NeRF as a differentiable tool to unify coarse 3D annotations transferred from existing datasets with 2D semantic cues. We demonstrate that this combination allows geometry to be guided by semantic information, enabling accurate semantic map rendering across multiple views. Furthermore, this fusion process resolves label ambiguity in the coarse 3D annotations and filters noise in the 2D predictions. By inferring in 3D space and rendering to 2D labels, our 2D semantic and instance labels are multi-view consistent by design. Experimental results show that Panoptic NeRF outperforms existing label transfer methods on the challenging urban scenes of the KITTI-360 dataset.
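A simplified sketch of 3D-to-2D label transfer with coarse bounding primitives: each ray sample inherits the class of the axis-aligned box it falls inside, and the per-point one-hot labels are composited with the radiance field's rendering weights. Panoptic NeRF's actual primitives and fusion with 2D cues are richer than this; the snippet only illustrates the rendering-based transfer.

```python
import torch

def labels_from_primitives(pts, boxes, box_labels, num_classes):
    """pts:        (R, S, 3) ray sample points in world coordinates
       boxes:      (B, 2, 3) axis-aligned coarse boxes as (min_corner, max_corner)
       box_labels: (B,) semantic class id of each box
       Returns one-hot per-point labels (R, S, C); points in no box stay all-zero."""
    R, S, _ = pts.shape
    onehot = torch.zeros(R, S, num_classes)
    for (lo, hi), cls in zip(boxes, box_labels):
        inside = ((pts >= lo) & (pts <= hi)).all(dim=-1)   # (R, S)
        onehot[inside, cls] = 1.0
    return onehot

def render_semantic_map(weights, onehot):
    """Composite the per-point one-hot labels with the NeRF weights: (R, C)."""
    return (weights.unsqueeze(-1) * onehot).sum(1)
```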
Figure 1. Given a monocular image sequence, NR-NeRF reconstructs a single canonical neural radiance field to represent geometry and appearance, and a per-time-step deformation field. We can render the scene into a novel spatio-temporal camera trajectory that significantly differs from the input trajectory. NR-NeRF also learns rigidity scores and correspondences without direct supervision on either. We can use the rigidity scores to remove the foreground, we can supersample along the time dimension, and we can exaggerate or dampen motion.
Neural Radiance Field (NeRF), a new novel view synthesis technique with implicit scene representation, has taken the field of Computer Vision by storm. As a novel view synthesis and 3D reconstruction method, NeRF models find applications in robotics, urban mapping, autonomous navigation, virtual reality/augmented reality, and more. Since the original paper by Mildenhall et al., more than 250 preprints were published, with more than 100 eventually being accepted in tier one Computer Vision Conferences. Given NeRF's popularity and the current interest in this research area, we believe it necessary to compile a comprehensive survey of NeRF papers from the past two years, which we organize into both architecture- and application-based taxonomies. We also provide an introduction to the theory of NeRF-based novel view synthesis, and a benchmark comparison of the performance and speed of key NeRF models. By creating this survey, we hope to introduce new researchers to NeRF, provide a helpful reference for influential works in this field, as well as motivate future research directions with our discussion section.
We extend neural 3D representations to allow for intuitive and interpretable user control beyond novel view rendering (i.e., camera control). We let the user annotate which part of the scene they wish to control with only a small number of mask annotations in the training images. Our key idea is to treat the attributes as latent variables that are regressed by the neural network given the scene encoding. This leads to a few-shot learning framework in which attributes are discovered automatically when annotations are not provided. We apply our method to various scenes with different types of controllable attributes (e.g., expression control on human faces, or state control in the movement of inanimate objects). Overall, to the best of our knowledge, we demonstrate for the first time novel-view and novel-attribute re-rendering of scenes from a single video.
Human perception reliably identifies the movable and immovable parts of 3D scenes, and completes the 3D structure of objects and background from incomplete observations. We learn this skill not from labeled examples, but simply by observing objects move. In this work, we propose an approach that observes unlabeled multi-view videos at training time and learns to map a single image observation of a complex scene, such as a street with cars, to a 3D neural scene representation that is disentangled into movable and immovable parts while plausibly completing its 3D structure. We parameterize movable and immovable scene parts separately via 2D neural ground plans. These ground plans are 2D grids of features aligned with the ground plane that can be locally decoded into 3D neural radiance fields. Our model is trained self-supervised via neural rendering. We demonstrate that the structure inherent to our disentangled representation enables a variety of downstream tasks in street-scale 3D scenes using simple heuristics, such as extraction of object-centric 3D representations, novel view synthesis, instance segmentation, and 3D bounding box prediction, highlighting its value as a backbone for data-efficient 3D scene understanding models. This disentanglement further enables scene editing via object manipulation such as deletion, insertion, and rigid-body motion.
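The ground-plan parameterization can be sketched as a bilinear lookup: 3D query points are projected onto the ground plane and used to sample a 2D feature grid, whose features (together with height) would then be decoded into a local radiance field. Axis conventions and names below are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def pull_groundplan_features(plan, pts_xyz, extent):
    """plan:    (1, C, H, W) a 2D feature grid aligned with the ground plane
       pts_xyz: (N, 3) query points; x/z are assumed to be the ground axes
       extent:  half-size of the scene, used to normalize coordinates to [-1, 1]
       Returns (N, C) features; height (y) would be fed to the decoder separately."""
    xz = pts_xyz[:, [0, 2]] / extent                       # (N, 2) in [-1, 1]
    grid = xz.view(1, 1, -1, 2)                            # (1, 1, N, 2)
    feats = F.grid_sample(plan, grid, align_corners=True)  # (1, C, 1, N)
    return feats[0, :, 0].t()                              # (N, C)
```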
Neural volumetric representations have shown that MLP networks can be trained with multi-view calibrated images to represent scene geometry and appearance without explicit 3D supervision. Object segmentation can enrich many downstream applications built on the learned radiance field. However, introducing hand-crafted segmentation to define regions of interest in complex real-world scenes is non-trivial and expensive, as it requires per-view annotation. This paper explores self-supervised learning of object segmentation using NeRF for complex real-world scenes. Our framework, NeRF-SOS, couples object segmentation and neural radiance fields to segment objects in any view within a scene. By proposing a novel collaborative contrastive loss at both the appearance and geometry levels, NeRF-SOS encourages NeRF models to distill compact, geometry-aware segmentation clusters from their density fields and self-supervised pre-trained 2D visual features. The self-supervised object segmentation framework can be applied to various NeRF models, yielding both photo-realistic renderings and convincing segmentations for indoor and outdoor scenes. Extensive results on the LLFF and Tanks & Temples datasets validate the effectiveness of NeRF-SOS. It consistently surpasses other image-based self-supervised baselines and even captures finer details than supervised Semantic-NeRF.
Methods have recently been proposed that densely segment 3D volumes into classes using only color images and expert supervision in the form of sparse semantically annotated pixels. While impressive, these methods still require a relatively large amount of supervision, and segmenting an object can take several minutes in practice. Such systems typically only optimize their representation on the particular scene they are fitting, without leveraging any prior information from previously seen images. In this paper, we propose to use features extracted with models trained on large existing datasets to improve segmentation performance. We bake this feature representation into a Neural Radiance Field (NeRF) by volumetrically rendering feature maps and supervising them with features extracted from each input image. We show that baking this representation into the NeRF makes the subsequent classification task much easier. Our experiments show that our method achieves higher segmentation accuracy with fewer semantic annotations than existing methods across a wide range of scenes.
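Once features are baked into the field (e.g., with a distillation loss as sketched for N3F above), the downstream classification step can be as small as a linear probe trained on rendered feature maps with a handful of labeled pixels. The snippet below is an illustrative sketch, not the paper's exact classifier.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Per-pixel features are volume rendered exactly like color; a small classifier
# on top then needs only a few labeled pixels (sizes below are illustrative).
classifier = nn.Linear(64, 5)            # 64-dim rendered features, 5 classes

def sparse_semantic_loss(rendered_feats, labels, labeled_mask):
    """rendered_feats: (P, 64) features rendered at P pixels of a training view
       labels:         (P,)    class ids, only valid where labeled_mask is True"""
    logits = classifier(rendered_feats[labeled_mask])
    return F.cross_entropy(logits, labels[labeled_mask])
```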
https://video-nerf.github.io Figure 1. Our method takes a single casually captured video as input and learns a space-time neural irradiance field. (Top) Sample frames from the input video. (Middle) Novel view images rendered from textured meshes constructed from depth maps. (Bottom) Our results rendered from the proposed space-time neural irradiance field.
NeRF synthesizes novel views of a scene with unprecedented quality by fitting a neural radiance field to RGB images. However, NeRF requires querying a deep Multi-Layer Perceptron (MLP) millions of times, leading to slow rendering times, even on modern GPUs. In this paper, we demonstrate that real-time rendering is possible by utilizing thousands of tiny MLPs instead of one single large MLP. In our setting, each individual MLP only needs to represent parts of the scene, thus smaller and faster-to-evaluate MLPs can be used. By combining this divide-and-conquer strategy with further optimizations, rendering is accelerated by three orders of magnitude compared to the original NeRF model without incurring high storage costs. Further, using teacher-student distillation for training, we show that this speed-up can be achieved without sacrificing visual quality.
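A toy version of the divide-and-conquer layout: a coarse voxel grid owns one tiny MLP per cell, and each query point is routed to the MLP of its cell. The loop over occupied cells stands in for the batched, fused evaluation that makes the real system fast; sizes and names are illustrative.

```python
import torch
import torch.nn as nn

class TinyMLPGrid(nn.Module):
    """Illustrative divide-and-conquer field: one small MLP per cell of a
       coarse 3D grid; each point is evaluated by the MLP owning its cell."""
    def __init__(self, res=16, hidden=32):
        super().__init__()
        self.res = res
        self.mlps = nn.ModuleList(
            nn.Sequential(nn.Linear(3, hidden), nn.ReLU(), nn.Linear(hidden, 4))
            for _ in range(res ** 3))                  # 4 outputs: density + RGB

    def forward(self, pts):                            # pts: (N, 3) in [0, 1)
        cell = (pts * self.res).long().clamp(0, self.res - 1)
        idx = (cell[:, 0] * self.res + cell[:, 1]) * self.res + cell[:, 2]
        out = torch.empty(pts.shape[0], 4)
        for i in idx.unique().tolist():                # real code batches this
            sel = idx == i
            out[sel] = self.mlps[i](pts[sel])
        return out
```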
Figure 1: Our method can synthesize novel views in both space and time from a single monocular video of a dynamic scene. Here we show video results with various configurations of fixing and interpolating view and time (left), as well as a visualization of the recovered scene geometry (right). Please view with Adobe Acrobat or KDE Okular to see animations.