We introduce UNIST, the first deep neural implicit model for general-purpose, unpaired shape-to-shape translation, in both 2D and 3D domains. Our model is built on autoencoding implicit fields, rather than the point clouds that represent the prior state of the art. Furthermore, our translation network is trained to perform the task over a latent grid representation, which combines the merits of latent-space processing and position awareness, not only enabling drastic shape transforms but also preserving spatial features and fine local details for natural shape translations. With the same network architecture, and dictated only by the input domain pairs, our model can learn both style-preserving content alteration and content-preserving style transfer. We demonstrate the generality and quality of the translation results and compare them with well-known baselines.
We advocate the use of implicit fields for learning generative models of shapes and introduce an implicit field decoder, called IM-NET, for shape generation, aimed at improving the visual quality of the generated shapes. An implicit field assigns a value to each point in 3D space, so that a shape can be extracted as an iso-surface. IM-NET is trained to perform this assignment by means of a binary classifier. Specifically, it takes a point coordinate, along with a feature vector encoding a shape, and outputs a value which indicates whether the point is outside the shape or not. By replacing conventional decoders by our implicit decoder for representation learning (via IM-AE) and shape generation (via IM-GAN), we demonstrate superior results for tasks such as generative shape modeling, interpolation, and single-view 3D reconstruction, particularly in terms of visual quality. Code and supplementary material are available at https://github.com/czq142857/implicit-decoder.
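A minimal sketch of the kind of implicit decoder described above, written in PyTorch: an MLP that takes a shape code and a query coordinate and classifies the point as inside or outside. Layer sizes and names are illustrative assumptions, not the authors' IM-NET code.

```python
import torch
import torch.nn as nn

class ImplicitDecoder(nn.Module):
    """Maps a (shape code, point coordinate) pair to an inside/outside probability."""
    def __init__(self, code_dim=256, hidden=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(code_dim + 3, hidden), nn.LeakyReLU(0.02),
            nn.Linear(hidden, hidden), nn.LeakyReLU(0.02),
            nn.Linear(hidden, 1), nn.Sigmoid(),   # probability that the point is inside
        )

    def forward(self, code, points):
        # code: (B, code_dim), points: (B, N, 3)
        code = code.unsqueeze(1).expand(-1, points.shape[1], -1)
        return self.net(torch.cat([code, points], dim=-1)).squeeze(-1)

# Trained with binary cross-entropy against ground-truth inside/outside labels;
# at test time an iso-surface (e.g. level 0.5) is extracted with marching cubes.
decoder = ImplicitDecoder()
occupancy = decoder(torch.randn(4, 256), torch.rand(4, 1024, 3))   # shape (4, 1024)
```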
Figure 1: (a) Training parts from ShapeNet. (b) t-SNE plot of part embeddings. (c) Reconstructing entire scenes with Local Implicit Grids. We learn an embedding of parts from objects in ShapeNet [3] using a part autoencoder with an implicit decoder. We show that this representation of parts is generalizable across object categories, and easily scalable to large scenes. By localizing implicit functions in a grid, we are able to reconstruct entire scenes from points via optimization of the latent grid.
Reconstructing 3D shapes from a single view is a long-standing research problem. In this paper, we present DISN, a Deep Implicit Surface Network that can generate high-quality, detail-rich 3D meshes from 2D images by predicting the underlying signed distance fields. In addition to utilizing global image features, DISN predicts the projected location of each 3D point on the 2D image and extracts local features from the image feature maps. Combining global and local features significantly improves the accuracy of the signed distance field prediction, especially for detail-rich regions. To the best of our knowledge, DISN is the first method that consistently captures details such as holes and thin structures present in 3D shapes from single-view images. DISN achieves state-of-the-art single-view reconstruction performance on a variety of shape categories reconstructed from both synthetic and real images. Code is available at https://github.com/xharlie/disn and the supplementary material can be found at https://xharlie.github.io/images/neUrips_2019_Supp.pdf.
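The local-feature idea can be sketched as follows: project each 3D query point into the image with the camera parameters and bilinearly sample the feature map at that location. This is a hedged illustration with assumed tensor shapes and a pinhole camera model, not the released DISN code.

```python
import torch
import torch.nn.functional as F

def sample_local_features(feat_map, points, K, RT):
    """Project 3D points into the image and bilinearly sample per-point features.

    feat_map: (B, C, H, W) image feature map
    points:   (B, N, 3) query points in world coordinates
    K, RT:    (B, 3, 3) intrinsics and (B, 3, 4) extrinsics (assumed pinhole camera)
    """
    B, N, _ = points.shape
    ones = torch.ones(B, N, 1, device=points.device)
    cam = torch.bmm(torch.cat([points, ones], dim=-1), RT.transpose(1, 2))  # (B, N, 3)
    pix = torch.bmm(cam, K.transpose(1, 2))                                 # (B, N, 3)
    uv = pix[..., :2] / pix[..., 2:3].clamp(min=1e-6)                       # pixel coords
    H, W = feat_map.shape[-2:]
    # normalize pixel coordinates to [-1, 1] for grid_sample
    grid = torch.stack([2 * uv[..., 0] / (W - 1) - 1,
                        2 * uv[..., 1] / (H - 1) - 1], dim=-1).view(B, N, 1, 2)
    local = F.grid_sample(feat_map, grid, align_corners=True)               # (B, C, N, 1)
    return local.squeeze(-1).transpose(1, 2)                                # (B, N, C)

# The per-point local feature is concatenated with a global image feature and the
# point coordinate, and a small MLP regresses the signed distance value.
```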
Reconstructing a 3D shape from a single sketch image is challenging due to the large domain gap between sparse, irregular sketches and regular, dense 3D shapes. Existing works try to use global features extracted from the sketch to directly predict 3D coordinates, but they usually suffer from losing fine details and are not faithful to the input sketch. By analyzing the 3D-to-2D projection process, we note that the density map characterizing the distribution of the 2D point cloud (i.e., the probability of a point being projected at each location of the projection plane) can serve as a proxy to facilitate the reconstruction process. To this end, we first translate the sketch via an image translation network into a more informative 2D representation that can be used to generate a density map. Next, a 3D point cloud is reconstructed via a two-stage probabilistic sampling process: the 2D points (i.e., the x and y coordinates) are first recovered by sampling the density map; the depth (i.e., the z coordinate) is then predicted by sampling depth values along the ray determined by each 2D point. Extensive experiments are conducted, and both quantitative and qualitative results show that our proposed method significantly outperforms other baseline methods.
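A rough sketch of the two-stage sampling described above, assuming a non-negative density map and a hypothetical depth_sampler callable standing in for the learned depth prediction:

```python
import torch

def sample_point_cloud(density_map, depth_sampler, num_points=2048):
    """Two-stage sampling sketch: (x, y) from a 2D density map, then z per ray.

    density_map:   (H, W) non-negative map over the projection plane
    depth_sampler: callable mapping (num_points, 2) xy locations to depth samples
    """
    H, W = density_map.shape
    probs = density_map.flatten() / density_map.sum()
    idx = torch.multinomial(probs, num_points, replacement=True)      # stage 1: pick pixels
    y = torch.div(idx, W, rounding_mode="floor")
    x = idx % W
    xy = torch.stack([x, y], dim=-1).float()
    xy = xy / torch.tensor([W - 1, H - 1]) * 2 - 1                    # normalize to [-1, 1]
    z = depth_sampler(xy)                                             # stage 2: depth per ray
    return torch.cat([xy, z.view(-1, 1)], dim=-1)                     # (num_points, 3)
```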
Computer graphics, 3D computer vision and robotics communities have produced multiple approaches to represent and generate 3D shapes, as well as a vast number of use cases. However, single-view reconstruction remains a challenging topic that can unlock various interesting use cases such as interactive design. In this work, we propose a novel framework that leverages the intermediate latent spaces of Vision Transformer (ViT) and a joint image-text representational model, CLIP, for fast and efficient Single View Reconstruction (SVR). More specifically, we propose a novel mapping network architecture that learns a mapping between deep features extracted from ViT and CLIP, and the latent space of a base 3D generative model. Unlike previous work, our method enables view-agnostic reconstruction of 3D shapes, even in the presence of large occlusions. We use the ShapeNetV2 dataset and perform extensive experiments with comparisons to SOTA methods to demonstrate our method's effectiveness.
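A minimal sketch of such a mapping network: an MLP from concatenated ViT and CLIP image features to the latent code of a frozen base 3D generator. Feature and latent dimensions are assumptions for illustration, not the paper's configuration.

```python
import torch
import torch.nn as nn

class FeatureToLatentMapper(nn.Module):
    """Maps concatenated ViT and CLIP image features to a 3D generator latent code."""
    def __init__(self, vit_dim=768, clip_dim=512, latent_dim=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(vit_dim + clip_dim, 1024), nn.ReLU(),
            nn.Linear(1024, 512), nn.ReLU(),
            nn.Linear(512, latent_dim),
        )

    def forward(self, vit_feat, clip_feat):
        return self.mlp(torch.cat([vit_feat, clip_feat], dim=-1))

# Training would regress latent codes obtained by encoding ground-truth shapes with
# the (frozen) base 3D generative model, so the mapper alone is optimized.
mapper = FeatureToLatentMapper()
z = mapper(torch.randn(8, 768), torch.randn(8, 512))   # (8, 256)
```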
In this paper, we present FaceTuneGAN, a new 3D face model representation that decomposes and encodes facial identity and facial expression separately. We propose a first adaptation of image-to-image translation networks, which have been used successfully in the 2D domain, to 3D face geometry. Leveraging recently released large face scan databases, a neural network is trained to decouple the factors of variation with a better knowledge of faces, enabling facial expression transfer and neutralization of expressive faces. Specifically, we design an adversarial architecture adapted from a base image-to-image translation architecture and use SpiralNet++ for the convolution and sampling operations. Using two public datasets (FaceScape and CoMA), FaceTuneGAN achieves better identity decomposition and face neutralization than state-of-the-art techniques. It also outperforms classical deformation transfer by predicting blendshapes closer to the ground-truth data and producing fewer undesired artifacts caused by overly different facial morphologies between source and target.
Figure 1: DeepSDF represents signed distance functions (SDFs) of shapes via latent code-conditioned feed-forward decoder networks. Above images are raycast renderings of DeepSDF interpolating between two shapes in the learned shape latent space. Best viewed digitally.
Recent advances in localized implicit functions have enabled neural implicit representations to scale to large scenes. However, the regular subdivision of 3D space adopted by these methods fails to account for the sparsity of surface occupancy and the varying granularity of geometric detail. As a result, their memory footprint grows cubically with the input volume, leading to prohibitive computational cost even at moderately dense decompositions. In this work, we present OctField, a learned hierarchical implicit representation for 3D surfaces that allows high-precision encoding of intricate surfaces with a low memory and computational budget. The key to our method is an adaptive decomposition of the 3D scene that distributes local implicit functions only around the surfaces of interest. We achieve this by introducing a hierarchical octree structure that adaptively subdivides 3D space according to surface occupancy and the richness of part geometry. As the octree is discrete and non-differentiable, we further propose a novel hierarchical network that models the subdivision of octree cells as a probabilistic process and recursively encodes and decodes both the octree structure and the surface geometry in a differentiable manner. We demonstrate the value of OctField on a range of shape modeling and reconstruction tasks, showing superiority over alternative approaches.
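The adaptive subdivision can be illustrated with a toy recursion in which a small network scores each octree cell and cells above a threshold are split. This greedy sketch (with a hypothetical feat_fn producing per-cell features) only hints at the probabilistic, differentiable treatment used in OctField.

```python
import torch
import torch.nn as nn

class SubdivisionPredictor(nn.Module):
    """Predicts, per octree cell, the probability that it should be subdivided."""
    def __init__(self, feat_dim=64):
        super().__init__()
        self.head = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, cell_feat):
        return torch.sigmoid(self.head(cell_feat)).squeeze(-1)

def subdivide(center, size, depth, max_depth, predictor, feat_fn):
    """Recursively splits cells whose predicted probability exceeds 0.5 (greedy decoding)."""
    if depth == max_depth or predictor(feat_fn(center, size)) < 0.5:
        return [(center, size)]            # leaf cell: a local implicit code lives here
    leaves = []
    half = size / 2
    for dx in (-1, 1):
        for dy in (-1, 1):
            for dz in (-1, 1):
                child = center + (half / 2) * torch.tensor([dx, dy, dz], dtype=torch.float32)
                leaves += subdivide(child, half, depth + 1, max_depth, predictor, feat_fn)
    return leaves
```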
Recently, implicit neural representations have gained popularity for learning-based 3D reconstruction. While demonstrating promising results, most implicit approaches are limited to comparably simple geometry of single objects and do not scale to more complicated or large-scale scenes. The key limiting factor of implicit methods is their simple fully-connected network architecture, which does not allow for integrating local information in the observations or incorporating inductive biases such as translational equivariance. In this paper, we propose Convolutional Occupancy Networks, a more flexible implicit representation for detailed reconstruction of objects and 3D scenes. By combining convolutional encoders with implicit occupancy decoders, our model incorporates inductive biases, enabling structured reasoning in 3D space. We investigate the effectiveness of the proposed representation by reconstructing complex geometry from noisy point clouds and low-resolution voxel representations. We empirically find that our method enables the fine-grained implicit 3D reconstruction of single objects, scales to large indoor scenes, and generalizes well from synthetic to real data.
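A minimal sketch of the query side of a convolutional occupancy model: a convolutional encoder (omitted) produces a feature volume, features are trilinearly interpolated at continuous query locations with grid_sample, and a small MLP predicts occupancy. Shapes and sizes are assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvOccupancyDecoder(nn.Module):
    """Queries a convolutional feature volume at continuous 3D locations."""
    def __init__(self, feat_dim=32, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, feature_volume, points):
        # feature_volume: (B, C, D, H, W) from a convolutional encoder; points: (B, N, 3) in [-1, 1]
        grid = points.view(points.shape[0], -1, 1, 1, 3)
        feat = F.grid_sample(feature_volume, grid, align_corners=True)      # (B, C, N, 1, 1)
        feat = feat.view(feat.shape[0], feat.shape[1], -1).transpose(1, 2)  # (B, N, C)
        return self.mlp(torch.cat([feat, points], dim=-1)).squeeze(-1)      # occupancy logits

decoder = ConvOccupancyDecoder()
logits = decoder(torch.randn(2, 32, 16, 16, 16), torch.rand(2, 512, 3) * 2 - 1)  # (2, 512)
```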
We introduce anchored radial observations (ARO), a novel shape encoding for learning neural field representation of shapes that is category-agnostic and generalizable amid significant shape variations. The main idea behind our work is to reason about shapes through partial observations from a set of viewpoints, called anchors. We develop a general and unified shape representation by employing a fixed set of anchors, via Fibonacci sampling, and designing a coordinate-based deep neural network to predict the occupancy value of a query point in space. Unlike prior neural implicit models that use a global shape feature, our shape encoder operates on contextual, query-specific features. To predict point occupancy, locally observed shape information from the perspective of the anchors surrounding the input query point is encoded and aggregated through an attention module, before implicit decoding is performed. We demonstrate the quality and generality of our network, coined ARO-Net, on surface reconstruction from sparse point clouds, with tests on novel and unseen object categories, "one-shape" training, and comparisons to state-of-the-art neural and classical methods for reconstruction and tessellation.
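Fibonacci sampling of a fixed anchor set, the first ingredient mentioned above, can be sketched directly; the query-specific radial observations and attention fusion are only summarized in comments.

```python
import numpy as np

def fibonacci_anchors(n, radius=1.0):
    """Near-uniform anchor positions on a sphere via Fibonacci (golden-angle) sampling."""
    i = np.arange(n)
    golden_angle = np.pi * (3.0 - np.sqrt(5.0))
    z = 1.0 - 2.0 * (i + 0.5) / n                 # evenly spaced heights in (-1, 1)
    r = np.sqrt(1.0 - z * z)
    theta = golden_angle * i
    return radius * np.stack([r * np.cos(theta), r * np.sin(theta), z], axis=-1)

anchors = fibonacci_anchors(48)                   # (48, 3) fixed anchor set
# For each query point, an ARO-style encoder gathers radial observations of the input
# point cloud from the anchors around that query and fuses them with attention before
# decoding occupancy; the fusion module itself is omitted in this sketch.
```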
How would you repair a physical object that has some missing parts? You might imagine its original shape from previously captured images, first recover its overall (global) but coarse shape, and then refine its local details. We are motivated to imitate this physical repair procedure to address point cloud completion. To this end, we propose a cross-modal shape-transfer dual-refinement network (termed CSDN), a coarse-to-fine paradigm with full-cycle participation of images, for quality point cloud completion. CSDN mainly consists of a "shape fusion" module and a "dual refinement" module to tackle the cross-modal challenge. The first module transfers the intrinsic shape characteristics from single images to guide the geometry generation of the missing regions of the point cloud, in which we propose IPAdaIN to embed the global features of both the image and the partial point cloud into the completion. The second module refines the coarse output by adjusting the positions of the generated points, where the local refinement unit exploits the geometric relation between the novel and input points via graph convolution, and the global constraint unit uses the input image to fine-tune the generated offsets. Unlike most existing methods, CSDN not only explores the complementary information from images but also effectively exploits cross-modal data throughout the whole coarse-to-fine completion procedure. Experimental results show that CSDN performs favorably against its competitors on ten cross-modal benchmarks.
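The abstract does not spell out IPAdaIN, but it builds on adaptive instance normalization; a plain AdaIN sketch (not the authors' module) illustrates how one feature set can be re-styled by another's per-channel statistics:

```python
import torch

def adain(content_feat, style_feat, eps=1e-5):
    """Adaptive instance normalization: re-scales content features to match the
    per-channel mean/std of style features. content/style: (B, C, N)."""
    c_mean = content_feat.mean(-1, keepdim=True)
    c_std = content_feat.std(-1, keepdim=True)
    s_mean = style_feat.mean(-1, keepdim=True)
    s_std = style_feat.std(-1, keepdim=True)
    return s_std * (content_feat - c_mean) / (c_std + eps) + s_mean

styled = adain(torch.randn(2, 64, 1024), torch.randn(2, 64, 1024))   # (2, 64, 1024)
```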
Figure 1. Given input as either a 2D image or a 3D point cloud (a), we automatically generate a corresponding 3D mesh (b) and its atlas parameterization (c). We can use the recovered mesh and atlas to apply texture to the output shape (d) as well as 3D print the results (e).
Three-dimensional geometric data offer an excellent domain for studying representation learning and generative modeling. In this paper, we look at geometric data represented as point clouds. We introduce a deep AutoEncoder (AE) network with state-of-the-art reconstruction quality and generalization ability. The learned representations outperform existing methods on 3D recognition tasks and enable shape editing via simple algebraic manipulations, such as semantic part editing, shape analogies and shape interpolation, as well as shape completion. We perform a thorough study of different generative models including GANs operating on the raw point clouds, significantly improved GANs trained in the fixed latent space of our AEs, and Gaussian Mixture Models (GMMs). To quantitatively evaluate generative models we introduce measures of sample fidelity and diversity based on matchings between sets of point clouds. Interestingly, our evaluation of generalization, fidelity and diversity reveals that GMMs trained in the latent space of our AEs yield the best results overall.
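The latent-space GMM experiment can be sketched with scikit-learn: fit a mixture to the AE's bottleneck codes and sample new codes to decode. The file path, component count, and decoder call are placeholders.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# latent_codes: (num_shapes, latent_dim) bottleneck vectors from a trained point-cloud AE
latent_codes = np.load("latent_codes.npy")            # placeholder path

gmm = GaussianMixture(n_components=32, covariance_type="full")
gmm.fit(latent_codes)

# Sample new latent vectors and decode them with the AE decoder to obtain point clouds.
z_new, _ = gmm.sample(n_samples=10)
# point_clouds = decoder(torch.from_numpy(z_new).float())   # decoder from the trained AE
```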
Figure 1. This paper introduces Local Deep Implicit Functions, a 3D shape representation that decomposes an input shape (mesh on left in every triplet) into a structured set of shape elements (colored ellipses on right) whose contributions to an implicit surface reconstruction (middle) are represented by latent vectors decoded by a deep network. Project video and website at ldif.cs.princeton.edu.
We introduce DMTet, a deep 3D conditional generative model that can synthesize high-resolution 3D shapes using simple user guidance such as coarse voxels. It marries the merits of implicit and explicit 3D representations by leveraging a novel hybrid 3D representation. Compared to current implicit approaches, which are trained to regress signed distance values, DMTet directly optimizes for the reconstructed surface, which allows us to synthesize finer geometric details with fewer artifacts. Unlike deep 3D generative models that directly generate explicit representations such as meshes, our model can synthesize shapes with arbitrary topology. The core of DMTet includes a deformable tetrahedral grid that encodes a discretized signed distance function and a differentiable marching tetrahedra layer that converts the implicit signed distance representation into an explicit surface mesh representation. This combination allows joint optimization of the surface geometry and topology, as well as the generation of a hierarchy of subdivisions, using reconstruction and adversarial losses defined explicitly on the surface mesh. Our method significantly outperforms existing work on conditional shape synthesis from coarse voxel inputs, trained on a dataset of complex 3D animal shapes. Project page: https://nv-tlabs.github.io/dmtet/
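The differentiable marching-tetrahedra step reduces, per sign-changing edge, to placing a vertex at the linear zero crossing of the SDF; a minimal sketch with assumed shapes, not the DMTet code:

```python
import torch

def edge_vertex(pos_a, pos_b, sdf_a, sdf_b):
    """Differentiable placement of a surface vertex on a tetrahedron edge whose
    endpoint SDF values change sign (the core marching-tetrahedra interpolation)."""
    # linear zero crossing of the signed distance along the edge
    w = sdf_a / (sdf_a - sdf_b)              # lies in (0, 1) when the signs differ
    return pos_a + w.unsqueeze(-1) * (pos_b - pos_a)

# pos_*: (E, 3) deformable grid vertex positions, sdf_*: (E,) predicted SDF values;
# gradients flow to both the vertex offsets and the SDF, so mesh-space losses
# (reconstruction, adversarial) can train the implicit field directly.
pos_a, pos_b = torch.rand(5, 3), torch.rand(5, 3)
sdf_a, sdf_b = torch.rand(5), -torch.rand(5)
print(edge_vertex(pos_a, pos_b, sdf_a, sdf_b).shape)   # torch.Size([5, 3])
```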
Architectural photography is a genre of photography that focuses on capturing buildings or structures in the foreground with dramatic lighting. Inspired by the success of image-to-image translation methods, we aim to perform style transfer for architectural photographs. However, the special composition in architectural photography poses great challenges for style transfer on this kind of photo. Existing neural style transfer methods treat an architectural image as a single entity, which would destroy the geometric features of the original architecture and produce unrealistic lighting, wrong color rendition, and visual artifacts such as ghosting, appearance distortion, or color mismatching. In this paper, we specialize a neural style transfer method for architectural photography. Our method addresses the composition of the foreground and background in architectural photos with a two-branch neural network that handles the style transfer of the foreground and the background separately. Our method consists of a segmentation module, a learning-based image-to-image translation module, and an image blending optimization module. We train the image-to-image translation network with a new dataset of unconstrained outdoor architectural photographs captured at different magic hours of the day, leveraging additional semantic information for better matching and geometry preservation. Our experiments show that our method can produce photorealistic lighting and color rendition on both the foreground and background, and outperforms general image-to-image translation and arbitrary style transfer baselines both quantitatively and qualitatively. Our code and data are available at https://github.com/hkust-vgd/architectural_style_transfer.
We present a StyleGAN2-based deep learning approach for 3D shape generation, called SDF-StyleGAN, with the aim of reducing the visual and geometric dissimilarity between generated shapes and a shape collection. We extend StyleGAN2 to 3D generation and utilize the implicit signed distance function (SDF) as the 3D shape representation, and introduce two novel global and local shape discriminators that distinguish real and fake SDF values and gradients to significantly improve shape geometry and visual quality. We further complement the evaluation metrics of 3D generative models with a shading-image-based Fréchet inception distance (FID) score to better assess the visual quality and shape distribution of the generated shapes. Experiments on shape generation demonstrate the superior performance of SDF-StyleGAN over the state of the art. We further demonstrate the efficacy of SDF-StyleGAN in various tasks based on GAN inversion, including shape reconstruction, shape completion from partial point clouds, single-image-based shape generation, and shape style editing. Extensive ablation studies justify the efficacy of our framework design. Our code and trained models are available at https://github.com/zhengxinyang/sdf-stylegan.
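The local discriminator's input, as described, pairs SDF values with SDF gradients; a sketch that builds such an input from a local SDF grid using central finite differences (grid size and spacing are assumptions):

```python
import torch

def sdf_patch_with_gradients(sdf, spacing):
    """Builds the (value, gradient) input a local SDF discriminator might see.

    sdf: (B, 1, D, H, W) signed-distance samples on a local grid; spacing: voxel size.
    Gradients are estimated with central finite differences (interior voxels only).
    """
    gz = (sdf[..., 2:, 1:-1, 1:-1] - sdf[..., :-2, 1:-1, 1:-1]) / (2 * spacing)
    gy = (sdf[..., 1:-1, 2:, 1:-1] - sdf[..., 1:-1, :-2, 1:-1]) / (2 * spacing)
    gx = (sdf[..., 1:-1, 1:-1, 2:] - sdf[..., 1:-1, 1:-1, :-2]) / (2 * spacing)
    values = sdf[..., 1:-1, 1:-1, 1:-1]
    return torch.cat([values, gx, gy, gz], dim=1)        # (B, 4, D-2, H-2, W-2)

patch = sdf_patch_with_gradients(torch.randn(2, 1, 16, 16, 16), spacing=1.0 / 16)
print(patch.shape)   # torch.Size([2, 4, 14, 14, 14])
```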
Semantic scene reconstruction from point clouds is an essential task for 3D scene understanding. This task requires not only recognizing each instance in the scene, but also recovering their geometries from the partially observed point cloud. Existing methods usually attempt to directly predict occupancy values of complete objects from incomplete point cloud proposals produced by a detection-based backbone. However, this framework always fails to reconstruct high-fidelity meshes, hampered by the various detected false-positive object proposals and by the ambiguity of the incomplete point observations when learning occupancy values of complete objects. To circumvent these hurdles, we propose a Disentangled Instance Mesh Reconstruction (DIMR) framework for effective point scene understanding. A segmentation-based backbone is applied to reduce false-positive object proposals, which further benefits our exploration of the relationship between recognition and reconstruction. Based on the accurate proposals, we leverage a mesh-aware latent code space to disentangle the processes of shape completion and mesh generation, relieving the ambiguity caused by incomplete point observations. In addition, with access to a CAD model pool at test time, our model can also improve reconstruction quality by performing mesh retrieval without extra training. We thoroughly evaluate the quality of the reconstructed meshes with multiple metrics and demonstrate the superiority of our method on the challenging ScanNet dataset. Code is available at https://github.com/ashawkey/dimr.
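The test-time mesh retrieval step can be sketched as a nearest-neighbor lookup in the mesh-aware latent code space; variable names and the distance choice are assumptions, not the released DIMR code.

```python
import torch

def retrieve_cad_mesh(query_code, cad_codes, cad_meshes):
    """Test-time mesh retrieval: returns the CAD model whose latent code is closest
    to the predicted shape code (no extra training needed).

    query_code: (D,) predicted code; cad_codes: (num_cad, D); cad_meshes: list of meshes.
    """
    dists = torch.cdist(query_code.unsqueeze(0), cad_codes).squeeze(0)   # (num_cad,)
    return cad_meshes[int(dists.argmin())]
```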