Differentiable rendering is an essential operation in modern vision, allowing inverse-graphics approaches to 3D understanding to be used within modern machine learning frameworks. Explicit shape representations (voxel grids, point clouds, or meshes), while relatively easy to render, often suffer from limited geometric fidelity or topological constraints. Implicit representations (occupancy, distance, or radiance fields), on the other hand, preserve greater fidelity, but suffer from complex or inefficient rendering processes, limiting scalability. In this work, we endeavor to address both shortcomings with a novel shape representation that allows fast differentiable rendering within an implicit architecture. Building on implicit distance representations, we define Directed Distance Fields (DDFs), which map an oriented point (position and direction) to surface visibility and depth. Such a field enables differential surface geometry extraction (e.g., surface normals and curvatures) via network derivatives, is easily composable, and permits extraction of classical unsigned distance fields. Using probabilistic DDFs (PDDFs), we show how to model the discontinuities inherent in the underlying field. Finally, we apply our method to fitting single shapes, unpaired 3D-aware generative image modeling, and single-image 3D reconstruction, showcasing strong performance with simple architectural components via the versatility of our representation.
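To make the DDF interface concrete, here is a minimal PyTorch sketch assuming a plain MLP; the layer sizes, activations, and the autograd-based normal extraction are illustrative assumptions rather than the paper's exact architecture.

```python
import torch
import torch.nn as nn

class DDF(nn.Module):
    """Toy directed distance field: (position, direction) -> (visibility, depth)."""
    def __init__(self, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(6, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 2),  # [visibility logit, raw depth]
        )

    def forward(self, p, v):
        out = self.net(torch.cat([p, v], dim=-1))
        vis = torch.sigmoid(out[..., :1])                    # P(ray hits the surface)
        depth = torch.nn.functional.softplus(out[..., 1:])   # non-negative depth
        return vis, depth

ddf = DDF()
p = torch.randn(4, 3, requires_grad=True)                        # query positions
v = torch.nn.functional.normalize(torch.randn(4, 3), dim=-1)     # unit directions

vis, depth = ddf(p, v)
surface = p + depth * v  # candidate surface points along each ray
# Differential surface geometry via network derivatives: the positional
# gradient of depth relates to the surface normal at the hit point.
grad = torch.autograd.grad(depth.sum(), p, create_graph=True)[0]
```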
Recent advances in machine learning have sparked interest in solving visual computing problems with a class of coordinate-based neural networks that parameterize the physical properties of scenes or objects across space and time. These methods, which we call neural fields, have seen successful application in the synthesis of 3D shapes and images, animation of human bodies, 3D reconstruction, and pose estimation. However, due to rapid progress in a short time, many papers exist but a comprehensive review and formulation of the problem has not yet emerged. In this report, we address this limitation by providing context, mathematical grounding, and an extensive review of the literature on neural fields. This report covers research along two dimensions. In the first part, we focus on neural field techniques by identifying common components of neural field methods, including different representations, architectures, forward mappings, and generalization methods. In the second part, we focus on applications of neural fields to different problems in visual computing and beyond (e.g., robotics, audio). Our review shows the breadth of topics already covered in visual computing, both historically and in current incarnations, and demonstrates the improved quality, flexibility, and capability brought by neural field methods. Finally, we present a companion website that hosts a living version of this review, which can be continually updated by the community.
Synthesizing photo-realistic images and videos is at the heart of computer graphics and has been the focus of decades of research. Traditionally, synthetic images of a scene are generated using rendering algorithms such as rasterization or ray tracing, which take representations of geometry and material properties as inputs. Collectively, these inputs define the actual scene and what is rendered, and are referred to as the scene representation (where a scene consists of one or more objects). Example scene representations are triangle meshes with accompanying textures (e.g., created by an artist), point clouds (e.g., from a depth sensor), volumetric grids (e.g., from a CT scan), or implicit surface functions (e.g., truncated signed distance fields). The reconstruction of such a scene representation from observations using differentiable rendering losses is known as inverse graphics or inverse rendering. Neural rendering is closely related, and combines ideas from classical computer graphics and machine learning to create algorithms for synthesizing images from real-world observations. Neural rendering is a leap forward towards the goal of synthesizing photo-realistic image and video content. In recent years, we have seen immense progress in this field through hundreds of publications that show different ways to inject learnable components into the rendering pipeline. This state-of-the-art report on advances in neural rendering focuses on methods that combine classical rendering principles with learned 3D scene representations, often now referred to as neural scene representations. A key advantage of these methods is that they are 3D-consistent by design, enabling applications such as novel-viewpoint synthesis of a captured scene. In addition to methods that handle static scenes, we also cover neural scene representations for modeling non-rigidly deforming objects...
In visual computing, 3D geometry is represented in many different forms, including meshes, point clouds, voxel grids, level sets, and depth images. Each representation is suited to different tasks, making the transformation of one representation into another (forward maps) an important and common problem. We propose Omnidirectional Distance Fields (ODFs), a new 3D shape representation that encodes geometry by storing the depth to the object's surface from any 3D position in any viewing direction. Since rays are the fundamental unit of an ODF, it can be easily transformed to and from common 3D representations like meshes and point clouds. Unlike level-set methods, which are limited to representing closed surfaces, ODFs are unsigned and can therefore model open surfaces (e.g., garments). We demonstrate that ODFs can be effectively learned with a neural network (NeuralODF) despite the inherent discontinuities at occlusion boundaries. We also introduce efficient forward-mapping algorithms for converting ODFs to and from common 3D representations. Specifically, we introduce an efficient Jumping Cubes algorithm for generating meshes from ODFs. Experiments demonstrate that the neural model can learn to capture high-quality shape by overfitting to a single object, and can also learn to generalize over common shape categories.
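To illustrate why rays make conversion between representations easy, the following sketch extracts a point cloud from an ODF-style query function; the odf callable, its return signature, and the sampling scheme are stand-in assumptions for a trained NeuralODF.

```python
import torch

def odf_to_pointcloud(odf, num_rays=4096, radius=1.5):
    """Sample rays from a bounding sphere, query depth, emit surface points.

    `odf(origins, dirs) -> (depth, hit)` is assumed to be a trained model
    returning per-ray depth (N,) and a boolean hit indicator (N,).
    """
    # Ray origins on a sphere enclosing the object, aimed inward with jitter.
    origins = torch.nn.functional.normalize(torch.randn(num_rays, 3), dim=-1) * radius
    dirs = torch.nn.functional.normalize(-origins + 0.1 * torch.randn(num_rays, 3), dim=-1)

    depth, hit = odf(origins, dirs)
    points = origins + depth.unsqueeze(-1) * dirs  # ray-surface intersections
    return points[hit]                             # keep only rays that hit
```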
As several industries move towards modeling massive 3D virtual worlds, the need for content creation tools that scale in terms of the quantity, quality, and diversity of 3D content is becoming evident. In our work, we aim to train performant 3D generative models that synthesize textured meshes which can be directly consumed by 3D rendering engines, and are therefore immediately usable in downstream applications. Prior works on 3D generative modeling either lack geometric details, are limited in the mesh topologies they can produce, typically do not support textures, or utilize neural renderers in the synthesis process, which makes their use in common 3D software non-trivial. In this work, we introduce GET3D, a generative model that directly generates explicit textured 3D meshes with complex topology, rich geometric details, and high-fidelity textures. We bridge recent successes in differentiable surface modeling, differentiable rendering, and 2D generative adversarial networks to train our model from 2D image collections. GET3D is able to generate high-quality 3D textured meshes, ranging from cars, chairs, animals, motorbikes, and human characters to buildings, achieving significant improvements over previous methods.
Generative models, an important family of statistical models, aim to learn the observed data distribution by generating new instances. Along with the rise of neural networks, deep generative models, such as variational autoencoders (VAEs) and generative adversarial networks (GANs), have made tremendous progress in 2D image synthesis. Recently, researchers have shifted their attention from the 2D space to the 3D space, since 3D data aligns better with our physical world and hence enjoys great practical potential. However, unlike a 2D image, which has an efficient representation (i.e., the pixel grid) by nature, representing 3D data poses far more challenges. Concretely, an ideal 3D representation should be expressive enough to model shapes and appearances in detail, and highly efficient so as to model high-resolution data quickly and with a low memory cost. However, existing 3D representations, such as point clouds, meshes, and recent neural fields, usually fail to meet these requirements simultaneously. In this survey, we make a thorough review of the development of 3D generation, including 3D shape generation and 3D-aware image synthesis, from the perspectives of both algorithms and, more importantly, representations. We hope that our discussion helps the community track the evolution of this field and sparks innovative ideas to advance this challenging task.
Neural networks that map 3D coordinates to signed distance function (SDF) or occupancy values have enabled high-fidelity implicit representations of object shape. This paper develops a new shape model that allows synthesizing novel distance views by optimizing a continuous signed directional distance function (SDDF). Similarly to deep SDF models, our SDDF formulation can represent whole categories of shapes and complete or interpolate across shapes from partial input data. Unlike an SDF, which measures the distance to the nearest surface in any direction, an SDDF measures the distance along a given direction. This allows training an SDDF model without 3D shape supervision, using only distance measurements that are readily available from depth cameras or LiDAR sensors. Our model also removes post-processing steps such as surface extraction or rendering by directly predicting distances at arbitrary locations and viewing directions. Unlike deep view-synthesis techniques, such as neural radiance fields, that train high-capacity black-box models, our model encodes, by construction, the property that SDDF values decrease linearly along the viewing direction. This structural constraint not only reduces dimensionality but also provides analytical confidence in the accuracy of SDDF predictions, regardless of the distance to the object surface.
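The along-ray linearity, f(p + t v, v) = f(p, v) − t, can be enforced by construction. The sketch below is one illustrative encoding of this idea (not necessarily the paper's exact parameterization): a free network is evaluated only on the component of the position orthogonal to the viewing direction, and the along-ray offset is subtracted analytically.

```python
import torch
import torch.nn as nn

class StructuredSDDF(nn.Module):
    """Toy SDDF with the linear along-ray structure built in."""
    def __init__(self, hidden=128):
        super().__init__()
        self.h = nn.Sequential(
            nn.Linear(6, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, p, v):
        # Project p onto the plane through the origin orthogonal to v.
        along = (p * v).sum(-1, keepdim=True)   # signed offset of p along v
        p_perp = p - along * v
        # f(p, v) = h(p_perp, v) - along, hence f(p + t v, v) = f(p, v) - t:
        # moving along the ray leaves p_perp unchanged and grows `along` by t.
        return self.h(torch.cat([p_perp, v], dim=-1)) - along
```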
We introduce a high-resolution, 3D-consistent image and shape generation technique which we call StyleSDF. Our method is trained on single-view RGB data only and stands on the shoulders of StyleGAN2 for image generation, while solving two main challenges in 3D-aware GANs: 1) high-resolution, view-consistent generation of RGB images, and 2) detailed 3D shape. We achieve this by merging an SDF-based 3D representation with a style-based 2D generator. Our 3D implicit network renders low-resolution feature maps, from which the style-based network generates view-consistent 1024x1024 images. Notably, the SDF-based 3D modeling defines detailed 3D surfaces, leading to consistent volume rendering. Our method shows higher-quality results compared with the state of the art in terms of visual and geometric quality.
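The SDF-to-density conversion that makes such volume rendering possible is simple to state; the sketch below shows one common form used in SDF-based volume renderers, with the exact formula and the alpha parameter being assumptions rather than the paper's stated choice.

```python
import torch

def sdf_to_density(d, alpha=0.05):
    """Convert signed distance to volume-rendering density (one common form).

    `alpha` controls how tightly density concentrates around the zero level
    set: density is large just inside the surface and decays smoothly outside.
    """
    return torch.sigmoid(-d / alpha) / alpha
```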
Unsupervised learning with generative models has the potential of discovering rich representations of 3D scenes. While geometric deep learning has explored 3D-structure-aware representations of scene geometry, these models typically require explicit 3D supervision. Emerging neural scene representations can be trained only with posed 2D images, but existing methods ignore the three-dimensional structure of scenes. We propose Scene Representation Networks (SRNs), a continuous, 3D-structure-aware scene representation that encodes both geometry and appearance. SRNs represent scenes as continuous functions that map world coordinates to a feature representation of local scene properties. By formulating the image formation as a differentiable ray-marching algorithm, SRNs can be trained end-to-end from only 2D images and their camera poses, without access to depth or shape. This formulation naturally generalizes across scenes, learning powerful geometry and appearance priors in the process. We demonstrate the potential of SRNs by evaluating them for novel view synthesis, few-shot reconstruction, joint shape and appearance interpolation, and unsupervised discovery of a non-rigid face model.
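A toy version of a learned differentiable ray marcher might look like the following; the LSTM-based step prediction mirrors the spirit of the method, but the scene feature function, dimensions, and fixed step count are illustrative assumptions.

```python
import torch
import torch.nn as nn

class LearnedRayMarcher(nn.Module):
    """Toy differentiable ray marcher with learned step lengths.

    `scene(x) -> (N, feat_dim)` is assumed to map world coordinates to
    local feature vectors; an LSTM cell proposes the next step length.
    """
    def __init__(self, feat_dim=64, hidden=16, num_steps=10):
        super().__init__()
        self.cell = nn.LSTMCell(feat_dim, hidden)
        self.to_step = nn.Linear(hidden, 1)
        self.num_steps = num_steps

    def forward(self, scene, origins, dirs):
        n = origins.shape[0]
        h = torch.zeros(n, self.cell.hidden_size)
        c = torch.zeros_like(h)
        t = torch.zeros(n, 1)
        for _ in range(self.num_steps):
            feats = scene(origins + t * dirs)    # query features at current point
            h, c = self.cell(feats, (h, c))
            t = t + torch.relu(self.to_step(h))  # learned, non-negative step
        return origins + t * dirs                # final predicted intersections
```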
Modern computer vision has moved beyond the realm of internet photo collections and into the physical world, guiding camera-equipped robots and autonomous cars through unstructured environments. To enable these embodied agents to interact with real-world objects, cameras are increasingly being used as depth sensors, reconstructing the environment for a variety of downstream reasoning tasks. Machine-learning-aided depth perception, or depth estimation, predicts the distance for each pixel in an image. While impressive progress has been made in depth estimation, significant challenges remain: (1) ground-truth depth labels are difficult to collect at scale, (2) camera information is typically assumed to be known but is often unreliable, and (3) restrictive camera assumptions are common, even though a great variety of camera types and lenses are used in practice. In this thesis, we focus on relaxing these assumptions and describe contributions towards the ultimate goal of turning cameras into truly generic depth sensors.
We propose a differentiable sphere tracing algorithm to bridge the gap between inverse graphics methods and the recently proposed deep learning based implicit signed distance function. Due to the nature of the implicit function, the rendering process requires tremendous function queries, which is particularly problematic when the function is represented as a neural network. We optimize both the forward and backward passes of our rendering layer to make it run efficiently with affordable memory consumption on a commodity graphics card. Our rendering method is fully differentiable such that losses can be directly computed on the rendered 2D observations, and the gradients can be propagated backwards to optimize the 3D geometry. We show that our rendering method can effectively reconstruct accurate 3D shapes from various inputs, such as sparse depth and multi-view images, through inverse optimization. With the geometry based reasoning, our 3D shape prediction methods show excellent generalization capability and robustness against various noises.
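For reference, a bare-bones sphere-tracing loop over an implicit SDF looks like the sketch below; the paper's contribution is making the forward and backward passes of such a loop efficient in time and memory, which this toy version does not attempt.

```python
import torch

def sphere_trace(sdf, origins, dirs, num_steps=50, eps=1e-4, far=5.0):
    """March each ray by the queried signed distance until it reaches the surface.

    `sdf(x) -> (N, 1)` is assumed to be a trained implicit SDF network;
    `origins` and `dirs` are (N, 3) ray origins and unit directions.
    """
    t = torch.zeros(origins.shape[0], 1)
    for _ in range(num_steps):
        x = origins + t * dirs
        d = sdf(x)                            # distance to the nearest surface
        t = t + d                             # safe step: an SDF cannot overshoot
        converged = (d.abs() < eps) | (t > far)
        if converged.all():
            break
    return origins + t * dirs, t              # hit points and ray depths
```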
In this work we address the challenging problem of multiview 3D surface reconstruction. We introduce a neural network architecture that simultaneously learns the unknown geometry, camera parameters, and a neural renderer that approximates the light reflected from the surface towards the camera. The geometry is represented as a zero level-set of a neural network, while the neural renderer, derived from the rendering equation, is capable of (implicitly) modeling a wide set of lighting conditions and materials. We trained our network on real world 2D images of objects with different material properties, lighting conditions, and noisy camera initializations from the DTU MVS dataset. We found our model to produce state of the art 3D surface reconstructions with high fidelity, resolution and detail.
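A standard trick in this family of methods for differentiating a ray-surface intersection with respect to network parameters is implicit differentiation: given a detached hit point x0 and ray direction v, the point x = x0 − f(x0) / (∇f(x0) · v) · v equals x0 at the surface but carries correct first-order gradients. A hedged sketch (the helper name and tensor shapes are assumptions):

```python
import torch

def differentiable_hit_point(sdf, x0, v):
    """First-order-correct surface point for gradient-based shape optimization.

    `x0`: detached ray-surface intersections from a tracer, (N, 3).
    `v`:  unit ray directions, (N, 3). `sdf(x) -> (N, 1)` is the network.
    """
    x0 = x0.detach()
    # Directional derivative of the SDF at the hit point, held constant.
    x_tmp = x0.clone().requires_grad_(True)
    grad = torch.autograd.grad(sdf(x_tmp).sum(), x_tmp)[0].detach()
    denom = (grad * v).sum(-1, keepdim=True)
    # f(x0) retains its dependence on the network parameters, while x0 and
    # the directional derivative are treated as constants.
    return x0 - v * sdf(x0) / denom
```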
Recent advances in deep generative models have led to immense progress in 3D shape synthesis. While existing models are able to synthesize shapes represented as voxels, point clouds, or implicit functions, these methods only indirectly enforce the plausibility of the final 3D shape surface. Here we present a 3D shape synthesis framework (SurfGen) that directly applies adversarial training to the object surface. Our approach uses a differentiable spherical projection layer to capture and represent the explicit zero isosurface of an implicit 3D generator as a function defined on the unit sphere. By processing the spherical representation of 3D object surfaces with a spherical CNN in an adversarial setting, our generator can better learn the statistics of natural shape surfaces. We evaluate our model on large-scale shape datasets and demonstrate that the end-to-end trained model is capable of producing high-fidelity 3D shapes with diverse topology.
We present a method to automatically compute correct gradients with respect to geometric scene parameters in neural SDF renderers. Recent physically based differentiable rendering techniques for meshes have used edge sampling to handle discontinuities, particularly at object silhouettes, but SDFs do not have a simple parametric form amenable to sampling. Instead, our approach builds on area-sampling techniques and develops a continuous warping function for SDFs to account for these discontinuities. Our method leverages the distance to the surface encoded in an SDF and uses quadrature on sphere-tracer points to compute this warping function. We further show that this can be made tractable for neural SDFs by subsampling the points. Our differentiable renderer can be used to optimize neural shapes from multi-view images, and it produces 3D reconstructions comparable to recent SDF-based inverse rendering methods, without requiring 2D segmentation masks to guide the geometry optimization and without volumetric approximations of the geometry.
We propose a method for accurate 3D reconstruction. We build on the strengths of recent advances in neural reconstruction and rendering, such as neural radiance fields (NeRF). A major shortcoming of such approaches is that they fail to reconstruct any part of the object that is not clearly visible in the training images, which is often the case for in-the-wild images and videos. When evidence is lacking, structural priors such as symmetry can be used to complete the missing information. However, exploiting such priors in neural rendering is highly non-trivial: while geometry and non-reflective materials may be symmetric, shadows and reflections from the ambient scene are generally not. To address this, we apply a soft symmetry constraint to the 3D geometry and material properties, factoring appearance into lighting, albedo, and reflectance. We evaluate our method on the recently introduced CO3D dataset, focusing on the car category due to the challenge of reconstructing highly reflective materials. We show that it can reconstruct unobserved regions with high fidelity and render high-quality novel-view images.
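As a toy illustration of a soft symmetry constraint (illustrative only; the method described above first factors appearance into lighting, albedo, and reflectance, and applies the constraint to the symmetric quantities), one can penalize disagreement between a field and its mirror image across an assumed symmetry plane:

```python
import torch

def soft_symmetry_loss(field, x, plane_axis=0):
    """Penalize asymmetry of view-independent quantities (geometry, albedo).

    `field(x) -> (N, C)` is assumed to return symmetric properties such as
    density and albedo; `x` are 3D sample points, (N, 3).
    """
    x_mirror = x.clone()
    x_mirror[:, plane_axis] = -x_mirror[:, plane_axis]  # reflect across the plane
    return (field(x) - field(x_mirror)).abs().mean()
```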
We introduce a new implicit shape representation called the Primary Ray-based Implicit Function (PRIF). In contrast to most existing approaches, which are based on the signed distance function (SDF) and operate on spatial locations, our representation operates on oriented rays. Specifically, PRIF is formulated to directly produce the surface hit point of a given input ray, without the expensive sphere-tracing operations, hence enabling efficient shape extraction and differentiable rendering. We demonstrate that neural networks encoding PRIF achieve success in various tasks, including single-shape representation, category-wise shape generation, shape completion from sparse or noisy observations, inverse rendering for camera pose estimation, and neural rendering with color.
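A minimal ray-to-hit-point network could look like the sketch below. The six-dimensional origin-plus-direction input is a simplifying assumption; the actual formulation uses a more careful ray parameterization so that each ray has a unique encoding.

```python
import torch
import torch.nn as nn

class RayToHit(nn.Module):
    """Toy ray-based implicit function: ray -> (hit probability, hit point)."""
    def __init__(self, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(6, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),  # [hit logit, hit point xyz]
        )

    def forward(self, origins, dirs):
        out = self.net(torch.cat([origins, dirs], dim=-1))
        hit_prob = torch.sigmoid(out[..., :1])
        hit_point = out[..., 1:]  # predicted directly: no sphere tracing
        return hit_prob, hit_point
```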
In recent years, substantial progress has been achieved in learning-based reconstruction of 3D objects. At the same time, generative models were proposed that can generate highly realistic images. However, despite this success in these closely related tasks, texture reconstruction of 3D objects has received little attention from the research community and state-of-the-art methods are either limited to comparably low resolution or constrained experimental setups. A major reason for these limitations is that common representations of texture are inefficient or hard to interface for modern deep learning techniques. In this paper, we propose Texture Fields, a novel texture representation which is based on regressing a continuous 3D function parameterized with a neural network. Our approach circumvents limiting factors like shape discretization and parameterization, as the proposed texture representation is independent of the shape representation of the 3D object. We show that Texture Fields are able to represent high frequency texture and naturally blend with modern deep learning techniques. Experimentally, we find that Texture Fields compare favorably to state-of-the-art methods for conditional texture reconstruction of 3D objects and enable learning of probabilistic generative models for texturing unseen 3D models. We believe that Texture Fields will become an important building block for the next generation of generative 3D models.
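The core of the representation is compact enough to sketch. The following is a minimal, hypothetical version in PyTorch (conditioning scheme and layer sizes are assumptions): a network regresses an RGB color for any 3D point, conditioned on latent codes for shape and appearance, independently of how the shape itself is represented.

```python
import torch
import torch.nn as nn

class TextureField(nn.Module):
    """Toy texture field: (3D point, shape code, appearance code) -> RGB."""
    def __init__(self, z_dim=128, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 + 2 * z_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, points, z_shape, z_app):
        # points: (N, 3); z_shape, z_app: (1, z_dim), shared across points.
        z = torch.cat([z_shape, z_app], dim=-1).expand(points.shape[0], -1)
        rgb = torch.sigmoid(self.net(torch.cat([points, z], dim=-1)))
        return rgb  # color is queried per surface point, independent of the mesh
```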
where the highest resolution is required, using facial performance capture as a case in point.
Neural Radiance Fields (NeRF), a novel view synthesis method with implicit scene representation, has taken the field of Computer Vision by storm. As a novel view synthesis and 3D reconstruction method, NeRF models find applications in robotics, urban mapping, autonomous navigation, virtual reality/augmented reality, and more. Since the original paper by Mildenhall et al., more than 250 preprints have been published, with more than 100 eventually accepted at tier-one Computer Vision conferences. Given NeRF's popularity and the current interest in this research area, we believe it necessary to compile a comprehensive survey of NeRF papers from the past two years, which we organize into both architecture-based and application-based taxonomies. We also provide an introduction to the theory of NeRF-based novel view synthesis, and a benchmark comparison of the performance and speed of key NeRF models. By creating this survey, we hope to introduce new researchers to NeRF, provide a helpful reference for influential works in this field, and motivate future research directions with our discussion section.
Figure 1: DeepSDF represents signed distance functions (SDFs) of shapes via latent code-conditioned feed-forward decoder networks. Above images are raycast renderings of DeepSDF interpolating between two shapes in the learned shape latent space. Best viewed digitally.