Single-view 3D object reconstruction is a fundamental and challenging computer vision task that aims to recover the 3D shape from a single-view RGB image. Most existing deep-learning-based reconstruction methods are trained and evaluated on the same categories, and they do not work well when handling objects from novel categories unseen during training. This paper tackles this problem by addressing zero-shot single-view 3D mesh reconstruction, to study model generalization to unseen categories and encourage models to reconstruct objects literally. Specifically, we propose an end-to-end two-stage network, ZeroMesh, to break the category boundaries in reconstruction. First, we factorize the complex image-to-mesh mapping into two simpler mappings, i.e., an image-to-point mapping and a point-to-mesh mapping, where the latter is mainly a geometric problem and less dependent on object categories. Second, we devise a local feature sampling strategy in 2D and 3D feature spaces to capture the local geometries shared across objects and thereby enhance model generalization. Third, apart from the traditional point-wise supervision, we introduce a multi-view silhouette loss to supervise the surface generation process, which provides additional regularization and further alleviates overfitting. Experimental results show that our method significantly outperforms existing works on ShapeNet and Pix3D under different scenarios and various metrics, especially for novel objects.
translated by Google Translate
We propose an end-to-end deep learning architecture that produces a 3D shape in triangular mesh from a single color image. Limited by the nature of deep neural networks, previous methods usually represent a 3D shape in volume or point cloud, and it is non-trivial to convert them to the more ready-to-use mesh model. Unlike the existing methods, our network represents 3D meshes in a graph-based convolutional neural network and produces correct geometry by progressively deforming an ellipsoid, leveraging perceptual features extracted from the input image. We adopt a coarse-to-fine strategy to make the whole deformation procedure stable, and define various mesh-related losses to capture properties of different levels to guarantee visually appealing and physically accurate 3D geometry. Extensive experiments show that our method not only qualitatively produces mesh models with better details, but also achieves higher 3D shape estimation accuracy compared to the state of the art.
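The "mesh-related losses" mentioned above include simple geometric regularizers defined on the deforming mesh. As a minimal NumPy sketch (an illustration, not the paper's implementation; the function name and array layout are assumptions), an edge-length penalty that discourages degenerate, over-stretched edges can be written as:

```python
import numpy as np

def edge_length_loss(vertices, edges):
    """Mean squared edge length of a mesh.

    vertices: (V, 3) float array of vertex positions.
    edges:    (E, 2) int array of vertex index pairs.
    Penalizing this quantity keeps the deformed mesh from producing
    long, stretched edges (one of several regularizers of this kind).
    """
    diffs = vertices[edges[:, 0]] - vertices[edges[:, 1]]
    return float(np.mean(np.sum(diffs ** 2, axis=1)))
```

In practice such a term is combined with data terms (e.g. a point-set distance to the ground truth) and other smoothness terms during training.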
Reconstructing a 3D shape from a single sketch image is challenging due to the large domain gap between sparse, irregular sketches and regular, dense 3D shapes. Existing works try to employ global features extracted from the sketch to directly predict 3D coordinates, but they usually suffer from losing fine details that are not faithful to the input sketch. By analyzing the 3D-to-2D projection process, we notice that the density map, which characterizes the distribution of the projected 2D point cloud (i.e., the probability of a point falling at each location of the projection plane), can serve as a proxy to facilitate the reconstruction process. To this end, we first translate the sketch via an image translation network into a more informative 2D representation that can be used to generate the density map. Next, a 3D point cloud is reconstructed via a two-stage probabilistic sampling process: the 2D points (i.e., the X and Y coordinates) are first recovered by sampling the density map; the depth (i.e., the Z coordinate) is then predicted by sampling depth values along the ray determined by each 2D point. Extensive experiments are conducted, and both quantitative and qualitative results demonstrate that our proposed approach significantly outperforms other baseline methods.
How would you restore a physical object with missing parts? You may imagine its original shape from previously captured images, recover its overall (global) but coarse shape first, and then refine its local details. We are motivated to imitate this physical repair procedure to address point cloud completion. To this end, we propose a cross-modal shape-transfer dual-refinement network (termed CSDN), a coarse-to-fine paradigm with the full-cycle participation of images, for quality point cloud completion. CSDN mainly consists of "shape fusion" and "dual refinement" modules to tackle the cross-modal challenge. The first module transfers the intrinsic shape characteristics from single images to guide the geometry generation of the missing regions of point clouds, in which we propose IPAdaIN to embed the global features of both the image and the partial point cloud into the completion. The second module refines the coarse output by adjusting the positions of the generated points, where the local refinement unit exploits the geometric relation between the novel and the input points via graph convolution, and the global constraint unit utilizes the input image to fine-tune the generated offsets. Unlike most existing approaches, CSDN not only explores the complementary information from images but also effectively exploits the cross-modal data throughout the entire coarse-to-fine completion procedure. Experimental results show that CSDN performs favorably against ten competitors on the cross-modal benchmark.
Reconstructing a 3D shape from a single-view image is a long-standing research problem. In this paper, we present DISN, a Deep Implicit Surface Network that can generate a high-quality detail-rich 3D mesh from a 2D image by predicting the underlying signed distance field. In addition to utilizing global image features, DISN predicts the projected location on the 2D image for each 3D point and extracts local features from the image feature maps. Combining global and local features significantly improves the accuracy of the signed distance field prediction, especially for detail-rich areas. To the best of our knowledge, DISN is the first method that constantly captures details such as holes and thin structures present in 3D shapes from single-view images. DISN achieves state-of-the-art single-view reconstruction performance on a variety of shape categories reconstructed from both synthetic and real images. Code is available at https://github.com/xharlie/disn. The supplementary can be found at https://xharlie.github.io/images/neUrips_2019_Supp.pdf
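The local-feature idea described above, projecting each 3D query point onto the image and reading the feature map at that location, can be sketched as follows (a hedged NumPy illustration, not the released DISN code; it assumes camera-space points, a pinhole intrinsic matrix `K`, and a channels-last feature map):

```python
import numpy as np

def project_points(points, K):
    """Pinhole projection of (N, 3) camera-space points to (N, 2) pixel
    coordinates using a 3x3 intrinsic matrix K."""
    uvw = points @ K.T
    return uvw[:, :2] / uvw[:, 2:3]

def bilinear_sample(feat, uv):
    """Bilinearly sample an (H, W, C) feature map at continuous (u, v)
    locations; returns per-point local features of shape (N, C)."""
    H, W, _ = feat.shape
    u = np.clip(uv[:, 0], 0, W - 1)
    v = np.clip(uv[:, 1], 0, H - 1)
    u0, v0 = np.floor(u).astype(int), np.floor(v).astype(int)
    u1, v1 = np.minimum(u0 + 1, W - 1), np.minimum(v0 + 1, H - 1)
    du, dv = (u - u0)[:, None], (v - v0)[:, None]
    return (feat[v0, u0] * (1 - du) * (1 - dv) + feat[v0, u1] * du * (1 - dv)
            + feat[v1, u0] * (1 - du) * dv + feat[v1, u1] * du * dv)
```

The sampled local feature would then be concatenated with the global image feature before regressing the signed distance at that 3D point.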
Rapid advances in 2D perception have led to systems that accurately detect objects in real-world images. However, these systems make predictions in 2D, ignoring the 3D structure of the world. Concurrently, advances in 3D shape prediction have mostly focused on synthetic benchmarks and isolated objects. We unify advances in these two areas. We propose a system that detects objects in real-world images and produces a triangle mesh giving the full 3D shape of each detected object. Our system, called Mesh R-CNN, augments Mask R-CNN with a mesh prediction branch that outputs meshes with varying topological structure by first predicting coarse voxel representations which are converted to meshes and refined with a graph convolution network operating over the mesh's vertices and edges. We validate our mesh prediction branch on ShapeNet, where we outperform prior work on single-image shape prediction. We then deploy our full Mesh R-CNN system on Pix3D, where we jointly detect objects and predict their 3D shapes. Project page: https://gkioxari.github.io/meshrcnn/.
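The refinement stage above operates a graph convolution over the mesh's vertices and edges. A minimal NumPy sketch of one vertex-level graph-convolution step follows (a simplified variant with mean aggregation; the weight matrices and aggregation rule are assumptions, not the exact Mesh R-CNN layer):

```python
import numpy as np

def graph_conv(features, adjacency, w_self, w_neigh):
    """One graph-convolution step over mesh vertices.

    features:  (V, F) per-vertex features.
    adjacency: (V, V) binary matrix from the mesh's edges.
    Each vertex mixes its own transformed feature with the mean of its
    neighbors' transformed features.
    """
    deg = adjacency.sum(axis=1, keepdims=True)
    neigh_mean = adjacency @ features / np.maximum(deg, 1)
    return features @ w_self + neigh_mean @ w_neigh
```

Stacking several such layers, each followed by a predicted vertex offset, progressively refines the coarse mesh obtained from the voxel branch.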
Point cloud completion is a generation and estimation problem derived from partial point clouds, which plays a vital role in applications of 3D computer vision. The progress of deep learning (DL) has impressively improved the capability and robustness of point cloud completion. However, the quality of completed point clouds still needs to be further enhanced to meet practical requirements. Therefore, this work aims to conduct a comprehensive survey of various methods, including point-based, convolution-based, graph-based, and generative model-based approaches. This survey summarizes the comparisons among these methods to provoke further research insights. Besides, this review sums up the commonly used datasets and illustrates the applications of point cloud completion. Eventually, we also discuss possible research trends in this promptly expanding field.
Figure 1. Given input as either a 2D image or a 3D point cloud (a), we automatically generate a corresponding 3D mesh (b) and its atlas parameterization (c). We can use the recovered mesh and atlas to apply texture to the output shape (d) as well as 3D print the results (e).
As several industries are moving towards modeling massive 3D virtual worlds, the need for content creation tools that can scale in terms of the quantity, quality, and diversity of 3D content is becoming evident. In our work, we aim to train performant 3D generative models that synthesize textured meshes which can be directly consumed by 3D rendering engines, thus being immediately usable in downstream applications. Prior works on 3D generative modeling either lack geometric details, are limited in the mesh topology they can produce, typically do not support textures, or utilize neural renderers in the synthesis process, which makes their use in common 3D software non-trivial. In this work, we introduce GET3D, a generative model that directly generates explicit textured 3D meshes with complex topology, rich geometric details, and high-fidelity textures. We bridge recent successes in differentiable surface modeling, differentiable rendering, and 2D generative adversarial networks to train our model from 2D image collections. GET3D is able to generate high-quality 3D textured meshes, ranging from cars, chairs, animals, and motorbikes to human characters and buildings, achieving significant improvements over previous methods.
Intelligent mesh generation (IMG) refers to a technique to generate mesh by machine learning, which is a relatively new and promising research field. Within its short life span, IMG has greatly expanded the generalizability and practicality of mesh generation techniques and brought many breakthroughs and potential possibilities for mesh generation. However, there is a lack of surveys focusing on IMG methods covering recent works. In this paper, we are committed to a systematic and comprehensive survey describing the contemporary IMG landscape. Focusing on 110 preliminary IMG methods, we conducted an in-depth analysis and evaluation from multiple perspectives, including the core technique and application scope of the algorithm, agent learning goals, data types, targeting challenges, advantages and limitations. With the aim of literature collection and classification based on content extraction, we propose three different taxonomies from three views of key technique, output mesh unit element, and applicable input data types. Finally, we highlight some promising future research directions and challenges in IMG. To maximize the convenience of readers, a project page of IMG is provided at \url{https://github.com/xzb030/IMG_Survey}.
Accurately reconstructing the complex human geometry caused by various poses and clothing from a single image is highly challenging. Recently, works based on pixel-aligned implicit functions (PIFu) have made a big step forward and achieved state-of-the-art fidelity in image-based 3D human digitization. However, the training of PIFu relies heavily on expensive and limited 3D ground-truth data (i.e., synthetic data), hindering its generalization to more diverse real-world images. In this work, we propose an end-to-end self-supervised network named SelfPIFu that leverages abundant and diverse in-the-wild images, largely improving the reconstruction when tested on unconstrained in-the-wild images. At the core of SelfPIFu is depth-guided volume-/surface-aware signed distance field (SDF) learning, which enables learning a PIFu in a self-supervised manner without access to GT meshes. The whole framework consists of a normal estimator, a depth estimator, and an SDF-based PIFu, and makes better use of additional depth GT during training. Extensive experiments demonstrate the effectiveness of our self-supervised framework and the superiority of using depth as input. On synthetic data, our Intersection-over-Union (IoU) achieves 93.5%, 18% higher than that of PIFuHD. For in-the-wild images, we conduct user studies on the reconstruction results, and our results are selected over 68% of the time compared with other state-of-the-art methods.
We propose a new framework to reconstruct holistic 3D indoor scenes, including the room background and indoor objects, from single-view images. Due to the severe occlusion in indoor scenes, existing methods can only produce 3D shapes of indoor objects with limited geometry quality. To address this, we propose an instance-aligned implicit function (InstPIFu) for detailed object reconstruction. Combined with instance-aligned attention modules, our method is empowered to decouple the mixed local features toward the occluded instances. Furthermore, unlike previous methods that simply represent the room background as a 3D bounding box, a depth map, or a set of planes, we recover the fine geometry of the background via an implicit representation. Extensive experiments on the SUN RGB-D, Pix3D, 3D-FUTURE, and 3D-FRONT datasets demonstrate that our method outperforms existing approaches in both background and foreground object reconstruction. Our code and model will be made publicly available.
Point clouds captured by scanning devices are often incomplete due to occlusion. Point cloud completion aims to predict the complete shape based on its partial input. Existing methods can be classified into supervised and unsupervised methods. However, both of them require a large number of 3D complete point clouds, which are difficult to capture. In this paper, we propose Cross-PCC, an unsupervised point cloud completion method without requiring any 3D complete point clouds. We only utilize 2D images of the complete objects, which are easier to capture than 3D complete and clean point clouds. Specifically, to take advantage of the complementary information from 2D images, we use a single-view RGB image to extract 2D features and design a fusion module to fuse the 2D and 3D features extracted from the partial point cloud. To guide the shape of predicted point clouds, we project the predicted points of the object to the 2D plane and use the foreground pixels of its silhouette maps to constrain the position of the projected points. To reduce the outliers of the predicted point clouds, we propose a view calibrator to move the points projected to the background into the foreground by the single-view silhouette image. To the best of our knowledge, our approach is the first point cloud completion method that does not require any 3D supervision. The experimental results of our method are superior to those of the state-of-the-art unsupervised methods by a large margin. Moreover, compared to some supervised methods, our method achieves similar performance. We will make the source code publicly available at https://github.com/ltwu6/cross-pcc.
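The silhouette constraint described above, penalizing predicted points whose projections land on background pixels of the silhouette map, can be illustrated with a small NumPy helper (a sketch under assumed conventions, not the authors' code; in training a differentiable variant of this check would be used):

```python
import numpy as np

def silhouette_violation_rate(points_2d, mask):
    """Fraction of projected points falling on background pixels.

    points_2d: (N, 2) array of (x, y) projections of predicted points.
    mask:      (H, W) binary silhouette (1 = foreground object).
    A completion loss would push this fraction toward zero.
    """
    xs = np.clip(np.round(points_2d[:, 0]).astype(int), 0, mask.shape[1] - 1)
    ys = np.clip(np.round(points_2d[:, 1]).astype(int), 0, mask.shape[0] - 1)
    return float(np.mean(mask[ys, xs] == 0))
```

The view calibrator mentioned above goes one step further: instead of only measuring violations, it moves the offending points back into the foreground region.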
We introduce DMTet, a deep 3D conditional generative model that can synthesize high-resolution 3D shapes using simple user guides such as coarse voxels. It marries the merits of implicit and explicit 3D representations by leveraging a novel hybrid 3D representation. Compared to current implicit approaches, which are trained to regress signed distance values, DMTet directly optimizes for the reconstructed surface, which enables us to synthesize finer geometric details with fewer artifacts. Unlike deep 3D generative models that directly generate explicit representations such as meshes, our model can synthesize shapes with arbitrary topology. The core of DMTet includes a deformable tetrahedral grid that encodes a discretized signed distance function and a differentiable marching tetrahedra layer that converts the implicit signed distance representation into an explicit surface mesh representation. This combination allows joint optimization of the surface geometry and topology, as well as generation of a hierarchy of subdivisions, using reconstruction and adversarial losses defined explicitly on the surface mesh. Our approach significantly outperforms existing work on conditional shape synthesis from coarse voxel inputs, trained on a dataset of complex 3D animal shapes. Project page: https://nv-tlabs.github.io/dmtet/
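The marching tetrahedra layer extracts the surface wherever the signed distance changes sign along a tetrahedron edge; the crossing point follows by linear interpolation of the two endpoint values. A minimal sketch of that core step (illustrative only; DMTet additionally learns per-vertex deformations and handles the full per-tetrahedron case table):

```python
import numpy as np

def edge_zero_crossing(p0, p1, s0, s1):
    """Surface vertex on a tetrahedron edge (p0, p1) whose endpoint
    signed distances s0 and s1 have opposite signs: linearly
    interpolate to where the SDF is zero."""
    t = s0 / (s0 - s1)  # in (0, 1) when s0 and s1 have opposite signs
    return p0 + t * (p1 - p0)
```

Because the output vertex is a differentiable function of s0 and s1, losses defined on the extracted mesh can back-propagate to the underlying signed distance values.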
Recovering a textured 3D mesh from a monocular image is highly challenging, particularly for in-the-wild objects that lack 3D ground truths. In this work, we present MeshInversion, a new framework that improves the reconstruction by exploiting the generative prior of a 3D GAN pre-trained for 3D textured mesh synthesis. Reconstruction is achieved by searching the latent space of the 3D GAN for the latent code that best resembles the target mesh. Since the pre-trained GAN encapsulates rich 3D semantics in terms of mesh geometry and texture, searching within the GAN manifold naturally regularizes the realness and fidelity of the reconstruction. Importantly, such regularization is directly applied in 3D space, providing crucial guidance for mesh parts that are unobserved in 2D space. Experiments on standard benchmarks show that our framework obtains faithful 3D reconstructions with consistent geometry and texture across both observed and unobserved parts. Moreover, it generalizes well to meshes that are less commonly seen, such as the extended articulation of deformable objects. Code is released at https://github.com/junzhezhang/mesh-inversion
Generation of 3D data by deep neural networks has been attracting increasing attention in the research community. The majority of extant works resort to regular representations such as volumetric grids or collections of images; however, these representations obscure the natural invariance of 3D shapes under geometric transformations, and also suffer from a number of other issues. In this paper we address the problem of 3D reconstruction from a single image, generating a straightforward form of output: point cloud coordinates. Along with this problem arises a unique and interesting issue: the groundtruth shape for an input image may be ambiguous. Driven by this unorthodox output form and the inherent ambiguity in the groundtruth, we design an architecture, loss function, and learning paradigm that are novel and effective. Our final solution is a conditional shape sampler, capable of predicting multiple plausible 3D point clouds from an input image. In experiments, not only can our system outperform state-of-the-art methods on single-image-based 3D reconstruction benchmarks, but it also shows strong performance for 3D shape completion and a promising ability to make multiple plausible predictions.
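Point cloud outputs such as the ones above are commonly trained and evaluated with a set-to-set distance, e.g. the Chamfer distance between predicted and ground-truth point sets. A brute-force NumPy version (an illustration of the metric, not the paper's implementation; real training code uses batched GPU variants):

```python
import numpy as np

def chamfer_distance(a, b):
    """Symmetric Chamfer distance between point sets a (N, 3) and
    b (M, 3): mean squared distance from each point to its nearest
    neighbor in the other set, summed over both directions."""
    d = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)  # (N, M)
    return float(d.min(axis=1).mean() + d.min(axis=0).mean())
```

Because the metric is defined on unordered sets, it respects the permutation invariance of the point cloud representation, which is exactly what regular grid or image representations fail to capture.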
Generative models, as an important family of statistical modeling, target learning the observed data distribution via generating new instances. Along with the rise of neural networks, deep generative models, such as variational autoencoders (VAEs) and generative adversarial networks (GANs), have made tremendous progress in 2D image synthesis. Recently, researchers have shifted their attention from the 2D space to the 3D space, considering that 3D data aligns better with our physical world and hence enjoys great potential in practice. However, unlike a 2D image, which owns an efficient representation (i.e., the pixel grid) by nature, representing 3D data can face far more challenges. Concretely, we would expect an ideal 3D representation to be capable enough to model shapes and appearances in detail, and to be highly efficient so as to model high-resolution data with fast speed and low memory cost. However, existing 3D representations, such as point clouds, meshes, and the recent neural fields, usually fail to meet the above requirements simultaneously. In this survey, we make a thorough review of the development of 3D generation, including 3D shape generation and 3D-aware image synthesis, from the perspectives of both algorithms and, more importantly, representations. We hope that our discussion could help the community track the evolution of this field and further spark some innovative ideas to advance this challenging task.
A key question in the 3D reconstruction problem is how to train a machine or a robot to model 3D objects. Many tasks, such as navigation in real-time systems like autonomous vehicles, depend directly on this problem, and these systems usually have limited computational power. Although 3D reconstruction systems have made great progress in recent years, applying them to real-time systems such as the navigation systems of autonomous vehicles remains challenging due to the high complexity and computational demand of existing methods. This study addresses the problem of reconstructing the object shown in a single-view image in a faster (real-time) fashion. To this end, a simple yet powerful deep neural framework is developed. The proposed framework consists of two components: a feature extractor module and a 3D generator module. We use the point cloud representation as the output of our reconstruction module. The ShapeNet dataset was used to compare the method with existing results in terms of computation time and accuracy. The simulations demonstrate the superior performance of the proposed method. Index Terms: real-time 3D reconstruction, single-view reconstruction, supervised learning, deep neural networks.
Due to the unavailability of real 3D-annotated category-centric data, traditional approaches for learning 3D object categories have been predominantly trained and evaluated on synthetic datasets. Our main goal is to facilitate advances in this field by collecting real-world data of a magnitude similar to the existing synthetic counterparts. The principal contribution of this work is thus a large-scale dataset, called Common Objects in 3D, with real multi-view images of object categories annotated with camera poses and ground-truth 3D point clouds. The dataset contains a total of 1.5 million frames from nearly 19,000 videos capturing objects from 50 MS-COCO categories and is, as such, significantly larger than alternatives in terms of both the number of categories and objects. We exploit this new dataset to conduct the first large-scale "in-the-wild" evaluation of several new-view-synthesis and category-centric 3D reconstruction methods. Finally, we contribute a novel neural rendering method that leverages powerful Transformers to reconstruct an object given a small number of its views. The CO3D dataset is available at https://github.com/facebookresearch/co3d
In the area of 3D shape analysis, the geometric properties of shapes have long been studied. Instead of directly extracting representative features using expert-designed descriptors or end-to-end deep neural networks, this paper is dedicated to discovering distinctive information from the shape formation process. Concretely, a spherical point cloud serving as the template is progressively deformed to fit the target shape in a coarse-to-fine manner. During the shape formation process, several checkpoints are inserted to facilitate recording and investigating the intermediate stages. For each stage, the offset field is evaluated as a stage-aware description. The summation of the offsets across the whole shape formation process can completely define the target shape in terms of geometry. From this perspective, one can inexpensively derive the point-wise shape correspondence from the template, which benefits various graphics applications. In this paper, a progressive-deformation-based auto-encoder (PDAE) is proposed to learn stage-aware descriptions through the coarse-to-fine shape fitting task. Experimental results show that the proposed PDAE has the ability to reconstruct 3D shapes with high fidelity, preserving consistent topology throughout the multi-stage deformation process. Additional applications based on the stage-aware descriptions are performed, demonstrating its universality.
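The claim above that summing the per-stage offsets completely defines the target shape can be made concrete with a small NumPy sketch (names and the list-of-stages interface are assumptions for illustration): accumulating the offset fields over the template reproduces every intermediate checkpoint and, at the last stage, the target geometry.

```python
import numpy as np

def accumulate_offsets(template, stage_offsets):
    """Deform a template point cloud by successive per-stage offset
    fields, recording each checkpoint along the way.

    template:      (N, 3) spherical template points.
    stage_offsets: list of (N, 3) offset fields, coarse to fine.
    Returns the list of intermediate shapes; the last one equals
    template plus the sum of all offsets.
    """
    shape = template.copy()
    checkpoints = []
    for off in stage_offsets:
        shape = shape + off
        checkpoints.append(shape.copy())
    return checkpoints
```

Since every point keeps its index through the deformation, point-wise correspondence between the template and the final shape comes for free, as the abstract notes.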