Line clouds, though underrated in previous work, may offer a more compact encoding of a building's structural information than the point clouds extracted from multi-view images. In this work, we propose the first network that processes line clouds for building wireframe abstraction. The network takes a line cloud as input, i.e., an unstructured and unordered set of 3D line segments extracted from multi-view images, and outputs the 3D wireframe of the underlying building, which consists of a sparse set of 3D junctions connected by line segments. We observe that a line patch, i.e., a group of neighboring line segments, encodes sufficient contour information to predict the existence and even the 3D position of a potential junction, as well as the likelihood of connectivity between two query junctions. We therefore introduce a two-layer line-patch transformer to extract junctions and junction connectivities from sampled line patches and form a 3D building wireframe model. We also introduce a synthetic dataset of multi-view images with ground-truth 3D wireframes. We extensively demonstrate that our reconstructed 3D wireframe models significantly improve upon multiple baseline building reconstruction methods.
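To make the line-patch notion above concrete, here is a minimal numpy sketch that groups each 3D segment with its nearest neighbors by midpoint distance; the patch size k and the distance criterion are illustrative assumptions, not the paper's exact sampling scheme.

```python
# A sketch of line-patch sampling: group each 3D line segment with its
# k nearest neighbors (by midpoint distance). Illustrative only.
import numpy as np

def sample_line_patches(segments: np.ndarray, k: int = 8) -> np.ndarray:
    """segments: (N, 2, 3) array of 3D endpoints; returns (N, k+1, 2, 3) patches."""
    midpoints = segments.mean(axis=1)                                  # (N, 3)
    d = np.linalg.norm(midpoints[:, None] - midpoints[None], axis=-1)  # (N, N)
    nn = np.argsort(d, axis=1)[:, : k + 1]                             # self + k neighbors
    return segments[nn]                                                # (N, k+1, 2, 3)

# Example: 100 random segments -> 100 patches of 9 segments each.
rng = np.random.default_rng(0)
patches = sample_line_patches(rng.normal(size=(100, 2, 3)))
print(patches.shape)  # (100, 9, 2, 3)
```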
This paper studies the problem of holistic 3D wireframe perception (HoW-3D), a new task of perceiving both the visible 3D wireframes and the invisible ones from single-view 2D images. Since the non-front surfaces of an object cannot be directly observed in a single view, estimating the non-line-of-sight (NLOS) geometry in HoW-3D is a fundamentally challenging problem that remains open in computer vision. We study HoW-3D by proposing the ABC-HoW benchmark, which is created on top of CAD models and provides 12K single-view images with their corresponding holistic 3D wireframe models. With our large-scale ABC-HoW benchmark, we present a novel Deep Spatial Gestalt (DSG) model that learns the visible junctions and line segments as a basis, and then infers the NLOS 3D structures from the visible cues, following the Gestalt principles of the human visual system. In our experiments, we demonstrate that our DSG model performs very well at inferring holistic 3D wireframes from single-view images. Compared with strong baseline methods, our DSG model outperforms previous wireframe detectors in detecting the invisible line geometry in single-view images, and it is even highly competitive with prior arts that take high-fidelity point clouds as input for reconstructing 3D wireframes.
Three-dimensional (3D) building models play an increasingly pivotal role in many real-world applications, while obtaining a compact representation of buildings remains an open problem. In this paper, we present a novel framework for reconstructing compact, watertight, polygonal building models from point clouds. Our framework comprises three components: (a) a cell complex is generated via adaptive space partitioning, which provides a polyhedral embedding as the candidate set; (b) an implicit field is learned by a deep neural network to facilitate building occupancy estimation; (c) a Markov random field is formulated to extract the outer surface of the building through combinatorial optimization. We evaluate and compare our method with state-of-the-art methods in shape reconstruction, surface approximation, and geometry simplification. Experiments on both synthetic and real-world point clouds demonstrate that, with our neural-guided strategy, high-quality building models can be obtained with significant advantages in fidelity, compactness, and computational efficiency. Our method also shows robustness to noise and insufficient measurements, and it can directly generalize from synthetic scans to real-world measurements.
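As a rough illustration of components (b) and (c) above, the sketch below queries a stand-in occupancy network at candidate-cell centroids and thresholds each cell independently; the actual method solves a Markov random field over the cell complex rather than labeling cells one by one.

```python
# Simplified sketch: label candidate polyhedral cells inside/outside by
# evaluating a learned occupancy field at their centroids. The pairwise
# MRF term of the real method is deliberately omitted here.
import torch
import torch.nn as nn

occupancy = nn.Sequential(              # stand-in for the learned implicit field
    nn.Linear(3, 64), nn.ReLU(),
    nn.Linear(64, 1), nn.Sigmoid(),
)

def label_cells(cell_centroids: torch.Tensor, tau: float = 0.5) -> torch.Tensor:
    """cell_centroids: (C, 3). Returns a boolean (C,) inside/outside labeling."""
    with torch.no_grad():
        return occupancy(cell_centroids).squeeze(-1) > tau

inside = label_cells(torch.randn(32, 3))
print(inside.shape)   # the surface is then the set of facets separating
                      # inside cells from outside cells
```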
This paper addresses 3D point cloud reconstruction and 3D pose estimation of the human hand from a single RGB image. To this end, we present a novel pipeline for local and global point cloud reconstruction that uses a 3D hand template while learning a latent representation for pose estimation. To demonstrate our method, we introduce a new multi-view hand pose dataset providing complete 3D point clouds of real-world hands. Experiments on our newly proposed dataset and four public benchmarks demonstrate the model's strengths: our method outperforms competitors in 3D pose estimation while reconstructing realistic-looking, complete 3D hand point clouds.
Intelligent mesh generation (IMG) refers to techniques that generate meshes via machine learning; it is a relatively new and promising research field. Within its short life span, IMG has greatly expanded the generalizability and practicality of mesh generation techniques and brought many breakthroughs and potential possibilities for mesh generation. However, there is a lack of surveys focusing on IMG methods covering recent works. In this paper, we are committed to a systematic and comprehensive survey describing the contemporary IMG landscape. Focusing on 110 preliminary IMG methods, we conduct an in-depth analysis and evaluation from multiple perspectives, including each algorithm's core technique and application scope, agent learning goals, data types, targeted challenges, advantages, and limitations. To collect and classify the literature based on content extraction, we propose three taxonomies from the views of key technique, output mesh unit element, and applicable input data type. Finally, we highlight some promising future research directions and challenges in IMG. To maximize the convenience of readers, a project page of IMG is provided at \url{https://github.com/xzb030/IMG_Survey}.
We introduce a novel deep learning-based framework to interpret 3D urban scenes represented as textured meshes. Based on the observation that object boundaries typically align with the boundaries of planar regions, our framework achieves semantic segmentation in two steps: planarity-sensible over-segmentation followed by semantic classification. The over-segmentation step generates an initial set of mesh segments that capture the planar and non-planar regions of urban scenes. In the subsequent classification step, we construct a graph that encodes the geometric and photometric features of the segments in its nodes and the multi-scale contextual features in its edges. The final semantic segmentation is obtained by classifying the segments using a graph convolutional network. Experiments and comparisons on two semantic urban mesh benchmarks demonstrate that our approach outperforms the state-of-the-art methods in terms of boundary quality, mean IoU (intersection over union), and generalization ability. We also introduce several new metrics for evaluating mesh over-segmentation methods dedicated to semantic segmentation, and our proposed over-segmentation approach outperforms state-of-the-art methods on all metrics. Our source code is available at \url{https://github.com/WeixiaoGao/PSSNet}.
We present a novel framework for instance segmentation of 3D buildings from multi-view stereo (MVS) urban scenes. Unlike existing works that focus on semantic segmentation of urban scenes, this work aims to detect and segment 3D building instances even when they are attached to, and embedded in, large and imprecise 3D surface models. The multi-view RGB images are first augmented to RGBH images by adding a height map, and are segmented with a fine-tuned 2D instance segmentation neural network to obtain all roof instances. Roof instance masks from the different multi-view images are then clustered into global masks. Our mask clustering accounts for spatial occlusion and overlap, which eliminates segmentation ambiguities among the multi-view images. Based on these global masks, 3D roof instances are segmented out by mask back-projection and extended to entire building instances through a Markov random field (MRF) optimization. Quantitative evaluations and ablation studies show the effectiveness of all major steps of the method. A dataset for evaluating instance segmentation of 3D building models is also provided; to the best of our knowledge, it is the first dataset of 3D urban buildings at the instance segmentation level.
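As a toy illustration of the mask-clustering step, the sketch below greedily groups roof masks (assumed already mapped into a shared ground-plane grid) whenever their IoU exceeds a threshold; the paper's clustering additionally reasons about spatial occlusion and overlap across views.

```python
# Greedy IoU-based clustering of roof masks in a common frame. Illustrative
# simplification of the multi-view mask clustering described above.
import numpy as np

def cluster_masks(masks: np.ndarray, iou_thresh: float = 0.5):
    """masks: (M, H, W) boolean masks in a shared frame. Returns cluster ids."""
    labels = -np.ones(len(masks), dtype=int)
    for i, m in enumerate(masks):
        for j in range(i):
            inter = np.logical_and(m, masks[j]).sum()
            union = np.logical_or(m, masks[j]).sum()
            if union and inter / union > iou_thresh:
                labels[i] = labels[j]       # same roof seen from another view
                break
        if labels[i] < 0:
            labels[i] = labels.max() + 1    # start a new cluster
    return labels

masks = np.zeros((3, 8, 8), dtype=bool)
masks[0, :4], masks[1, :4], masks[2, 5:] = True, True, True
print(cluster_masks(masks))  # [0 0 1]: the first two masks belong to one roof
```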
Convolution on 3D point clouds has been widely studied, yet it remains far from perfect in geometric deep learning. The traditional wisdom of convolution characterizes feature correspondences indistinguishably among 3D points, an intrinsic limitation that leads to poorly distinctive feature learning. In this paper, we propose Adaptive Graph Convolution (AGConv) for a wide range of point cloud analysis applications. AGConv generates adaptive kernels for points according to their dynamically learned features. Compared with solutions that use fixed/isotropic kernels, AGConv improves the flexibility of point cloud convolutions, effectively and precisely capturing the diverse relations between points from different semantic parts. Unlike popular attention weighting schemes, AGConv implements adaptiveness inside the convolution operation instead of simply assigning different weights to neighboring points. Extensive evaluations clearly show that our method outperforms state-of-the-art methods for point cloud classification and segmentation on various benchmark datasets. Meanwhile, AGConv can flexibly serve more point cloud analysis approaches to boost their performance. To further verify its flexibility and effectiveness, we explore AGConv-based paradigms for completion, denoising, upsampling, registration, and circle extraction, which are comparable or even superior to their competitors. Our code is available at https://github.com/hrzhou2/adaptconv-master.
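A minimal sketch of the adaptive idea: a small network generates a per-neighbor kernel from pairs of learned features and applies it to the neighbor's spatial offset. The layer sizes, the kernel parameterization, and the max aggregation are illustrative assumptions, not the exact AGConv architecture.

```python
# Sketch of an adaptive graph convolution: kernels are generated from the
# dynamically learned features, then applied to spatial offsets.
import torch
import torch.nn as nn

class AdaptiveGraphConv(nn.Module):
    def __init__(self, feat_dim: int, out_dim: int):
        super().__init__()
        # maps a (center, neighbor) feature pair to a kernel over 3D offsets
        self.kernel_gen = nn.Linear(2 * feat_dim, out_dim * 3)
        self.out_dim = out_dim

    def forward(self, xyz, feats, nn_idx):
        """xyz: (N,3) points, feats: (N,F) features, nn_idx: (N,K) neighbor ids."""
        nbr_xyz, nbr_feat = xyz[nn_idx], feats[nn_idx]            # (N,K,3), (N,K,F)
        pair = torch.cat([feats.unsqueeze(1).expand_as(nbr_feat), nbr_feat], -1)
        kernel = self.kernel_gen(pair).view(*nn_idx.shape, self.out_dim, 3)
        offset = nbr_xyz - xyz.unsqueeze(1)                       # (N,K,3)
        msg = torch.einsum('nkoc,nkc->nko', kernel, offset)       # adaptive conv
        return msg.max(dim=1).values                              # (N, out_dim)

conv = AdaptiveGraphConv(feat_dim=32, out_dim=64)
out = conv(torch.randn(128, 3), torch.randn(128, 32), torch.randint(0, 128, (128, 16)))
print(out.shape)  # torch.Size([128, 64])
```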
Geometric rectification of images of distorted documents finds wide applications in document digitization and Optical Character Recognition (OCR). Although smoothly curved deformations have been widely investigated by many works, the most challenging distortions, e.g. complex creases and large foldings, have not been studied in particular. The performance of existing approaches, when applied to largely creased or folded documents, is far from satisfying, leaving substantial room for improvement. To tackle this task, knowledge about document rectification should be incorporated into the computation, among which the developability of 3D document models and particular textural features in the images, such as straight lines, are the most essential ones. For this purpose, we propose a general framework of document image rectification in which a computational isometric mapping model is utilized for expressing a 3D document model and its flattening in the plane. Based on this framework, both model developability and textural features are considered in the computation. The experiments and comparisons to the state-of-the-art approaches demonstrated the effectiveness and outstanding performance of the proposed method. Our method is also flexible in that the rectification results can be enhanced by any other methods that extract high-quality feature lines in the images.
We propose an end-to-end deep learning architecture that produces a 3D shape in triangular mesh from a single color image. Limited by the nature of deep neural networks, previous methods usually represent a 3D shape as a volume or point cloud, and it is non-trivial to convert them to the more ready-to-use mesh model. Unlike the existing methods, our network represents the 3D mesh in a graph-based convolutional neural network and produces correct geometry by progressively deforming an ellipsoid, leveraging perceptual features extracted from the input image. We adopt a coarse-to-fine strategy to make the whole deformation procedure stable, and define various mesh-related losses to capture properties of different levels to guarantee visually appealing and physically accurate 3D geometry. Extensive experiments show that our method not only qualitatively produces mesh models with better details, but also achieves higher 3D shape estimation accuracy compared to the state-of-the-art.
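The sketch below illustrates one deformation step in this spirit: a graph convolution over mesh vertices predicts per-vertex offsets from per-vertex pooled image features. The simple mean aggregation and layer sizes are assumptions for illustration, not the paper's exact blocks.

```python
# Sketch of a mesh-deformation block: graph convolution over vertices
# followed by a per-vertex offset prediction.
import torch
import torch.nn as nn

class DeformBlock(nn.Module):
    def __init__(self, feat_dim: int):
        super().__init__()
        self.fc_self = nn.Linear(feat_dim + 3, 64)
        self.fc_nbr = nn.Linear(feat_dim + 3, 64)
        self.offset = nn.Linear(64, 3)

    def forward(self, verts, img_feats, adj):
        """verts: (V,3), img_feats: (V,F) pooled per vertex, adj: (V,V) 0/1."""
        h = torch.cat([verts, img_feats], dim=-1)
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
        agg = (adj @ self.fc_nbr(h)) / deg            # mean over graph neighbors
        h = torch.relu(self.fc_self(h) + agg)
        return verts + self.offset(h)                 # deformed vertices

block = DeformBlock(feat_dim=16)
V = 162                                               # e.g. an icosphere
adj = (torch.rand(V, V) < 0.05).float()
new_verts = block(torch.randn(V, 3), torch.randn(V, 16), adj)
print(new_verts.shape)  # torch.Size([162, 3])
```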
Evaluation of 3D face reconstruction results typically relies on a rigid shape alignment between the estimated 3D model and the ground-truth scan. We observe that aligning two shapes with different reference points can substantially affect the evaluation results. This makes it difficult to precisely diagnose and improve 3D face reconstruction methods. In this paper, we propose a novel evaluation approach with a new benchmark, REALY, consisting of 100 globally aligned face scans with accurate facial keypoints, high-quality region masks, and topology-consistent meshes. Our approach performs region-wise shape alignment and leads to more accurate, bidirectional correspondences when computing the shape errors. The fine-grained, region-wise evaluation results provide a detailed understanding of how state-of-the-art 3D face reconstruction methods perform. For example, our experiments on single-image-based reconstruction methods reveal that DECA performs best on the nose region, while GANFit performs better on the cheek region. Furthermore, a new, high-quality 3DMM basis, HIFI3D++, is derived using the same procedure we use to align and retopologize several 3D face datasets. We will release REALY, HIFI3D++, and our new evaluation pipeline at https://realy3dface.com.
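Region-wise alignment builds on standard rigid registration; below is a minimal Kabsch (orthogonal Procrustes) sketch of the per-region alignment step, leaving aside the benchmark's bidirectional correspondence search.

```python
# Least-squares rigid alignment (Kabsch algorithm) between corresponding
# point sets, as would be applied per facial region before measuring error.
import numpy as np

def rigid_align(src: np.ndarray, dst: np.ndarray):
    """Rotation R and translation t minimizing ||R @ src_i + t - dst_i||."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # avoid reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, mu_d - R @ mu_s

# Align an estimated region to the ground-truth region, then measure the
# residual error only inside that region mask.
rng = np.random.default_rng(1)
gt = rng.normal(size=(500, 3))
est = gt @ np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1.0]]).T + 0.1
R, t = rigid_align(est, gt)
print(np.abs((est @ R.T + t) - gt).max())         # ~0: regions now coincide
```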
Generation of 3D data by deep neural network has been attracting increasing attention in the research community. The majority of extant works resort to regular representations such as volumetric grids or collections of images; however, these representations obscure the natural invariance of 3D shapes under geometric transformations, and also suffer from a number of other issues. In this paper we address the problem of 3D reconstruction from a single image, generating a straightforward form of output: point cloud coordinates. Along with this problem arises a unique and interesting issue, that the groundtruth shape for an input image may be ambiguous. Driven by this unorthodox output form and the inherent ambiguity in the groundtruth, we design an architecture, loss function and learning paradigm that are novel and effective. Our final solution is a conditional shape sampler, capable of predicting multiple plausible 3D point clouds from an input image. In experiments, not only can our system outperform state-of-the-art methods on single-image-based 3D reconstruction benchmarks, but it also shows strong performance for 3D shape completion and a promising ability to make multiple plausible predictions.
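Set-level losses are the standard answer to the orderlessness of point cloud output; below is a minimal numpy sketch of the symmetric Chamfer distance, one common choice for comparing a predicted point set against the ground truth.

```python
# Symmetric Chamfer distance between two point sets: for each point, find
# its nearest neighbor in the other set and average the squared distances.
import numpy as np

def chamfer_distance(a: np.ndarray, b: np.ndarray) -> float:
    """a: (N,3), b: (M,3). Mean nearest-neighbor squared distance, both ways."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)   # (N, M) pairwise
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()

pred = np.random.default_rng(2).normal(size=(1024, 3))
print(chamfer_distance(pred, pred))  # 0.0: identical sets have zero distance
```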
We propose Point2Cyl, a supervised network that transforms a raw 3D point cloud into a set of extrusion cylinders. Reverse engineering from raw geometry to a CAD model is an essential task for enabling manipulation of 3D data in shape editing software, thereby extending its use in many downstream applications. In particular, the form of a CAD model as a sequence of extrusion cylinders (a 2D sketch plus an extrusion axis and range) together with their Boolean combinations is not only widely used in the CAD community and software, but also offers great shape expressivity compared to representations with limited primitive types (e.g., planes, spheres, and cylinders). In this work, we introduce a neural network that solves the extrusion cylinder decomposition problem in a geometry-grounded way by first learning the underlying geometric proxies. Precisely, our method first predicts per-point segmentation, base/barrel labels, and normals, and then estimates the underlying extrusion parameters in differentiable, closed-form formulations. Our experiments show that our method achieves the best performance on two recent CAD datasets, Fusion Gallery and DeepCAD, and we further showcase the approach on reverse engineering and editing.
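One piece of such closed-form estimation can be illustrated simply: normals on the barrel of an extrusion cylinder are perpendicular to the extrusion axis, so the axis can be recovered as the smallest eigenvector of the normal covariance. The sketch below shows this idea only, not the paper's full formulation.

```python
# Recover an extrusion axis from predicted barrel normals: the axis is the
# direction least aligned with the normals (smallest-eigenvalue eigenvector).
import numpy as np

def extrusion_axis(barrel_normals: np.ndarray) -> np.ndarray:
    """barrel_normals: (N,3) unit normals. Returns a unit axis direction."""
    cov = barrel_normals.T @ barrel_normals           # (3,3) covariance
    eigvals, eigvecs = np.linalg.eigh(cov)            # ascending eigenvalues
    return eigvecs[:, 0]                              # least-aligned direction

# Normals of a cylinder around the z-axis lie in the xy-plane.
theta = np.linspace(0, 2 * np.pi, 200)
normals = np.stack([np.cos(theta), np.sin(theta), np.zeros_like(theta)], axis=1)
print(extrusion_axis(normals))  # ~[0, 0, ±1]
```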
Figure 1. Given input as either a 2D image or a 3D point cloud (a), we automatically generate a corresponding 3D mesh (b) and its atlas parameterization (c). We can use the recovered mesh and atlas to apply texture to the output shape (d) as well as 3D print the results (e).
Reconstructing 3D shapes from a single view is a long-standing research problem. In this paper, we present DISN, a Deep Implicit Surface Network that can generate high-quality, detail-rich 3D meshes from 2D images by predicting the underlying signed distance fields. In addition to utilizing global image features, DISN predicts the projected location of each 3D point on the 2D image and extracts local features from the image feature maps. Combining global and local features significantly improves the accuracy of the signed distance field prediction, especially for detail-rich areas. To the best of our knowledge, DISN is the first method that consistently captures details such as the holes and thin structures present in 3D shapes from single-view images. DISN achieves state-of-the-art single-view reconstruction performance on a variety of shape categories reconstructed from both synthetic and real images. Code is available at https://github.com/xharlie/disn and the supplementary material can be found at https://xharlie.github.io/images/neUrips_2019_Supp.pdf
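The sketch below illustrates the global-plus-local scheme described above: each query point is projected into the image, a local feature is bilinearly sampled at that location, and an MLP maps the concatenated point coordinates, local feature, and global feature to a signed distance. The camera projection is assumed given and the layer sizes are illustrative.

```python
# Sketch of an SDF head combining global and locally sampled image features.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SDFHead(nn.Module):
    def __init__(self, local_dim=64, global_dim=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + local_dim + global_dim, 256), nn.ReLU(),
            nn.Linear(256, 1),
        )

    def forward(self, pts, feat_map, global_feat, proj):
        """pts: (N,3); feat_map: (1,C,H,W); global_feat: (G,); proj: (N,2) in [-1,1]."""
        grid = proj.view(1, 1, -1, 2)                               # (1,1,N,2)
        local = F.grid_sample(feat_map, grid, align_corners=True)   # (1,C,1,N)
        local = local.squeeze(0).squeeze(1).T                       # (N,C)
        g = global_feat.expand(pts.shape[0], -1)                    # (N,G)
        return self.mlp(torch.cat([pts, local, g], dim=-1))         # (N,1) SDF

head = SDFHead()
sdf = head(torch.randn(512, 3), torch.randn(1, 64, 32, 32),
           torch.randn(128), torch.rand(512, 2) * 2 - 1)
print(sdf.shape)  # torch.Size([512, 1])
```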
With the advent of deep neural networks, learning-based approaches for 3D reconstruction have gained popularity. However, unlike for images, in 3D there is no canonical representation which is both computationally and memory efficient yet allows for representing high-resolution geometry of arbitrary topology. Many of the state-of-the-art learning-based 3D reconstruction approaches can hence only represent very coarse 3D geometry or are limited to a restricted domain. In this paper, we propose Occupancy Networks, a new representation for learning-based 3D reconstruction methods. Occupancy networks implicitly represent the 3D surface as the continuous decision boundary of a deep neural network classifier. In contrast to existing approaches, our representation encodes a description of the 3D output at infinite resolution without excessive memory footprint. We validate that our representation can efficiently encode 3D structure and can be inferred from various kinds of input. Our experiments demonstrate competitive results, both qualitatively and quantitatively, for the challenging tasks of 3D reconstruction from single images, noisy point clouds and coarse discrete voxel grids. We believe that occupancy networks will become a useful tool in a wide variety of learning-based 3D tasks.
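A minimal sketch of the representation: a decoder maps a 3D coordinate plus a conditioning code to an occupancy probability, so the surface is the 0.5 decision boundary and can be queried at arbitrary resolution. The layer sizes are illustrative, not the paper's architecture.

```python
# Sketch of an occupancy decoder: continuous query points in, probabilities out.
import torch
import torch.nn as nn

class OccupancyDecoder(nn.Module):
    def __init__(self, code_dim: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 + code_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, 1),
        )

    def forward(self, pts, code):
        """pts: (N,3) query points; code: (code_dim,) encoding of the input."""
        code = code.expand(pts.shape[0], -1)
        return torch.sigmoid(self.net(torch.cat([pts, code], dim=-1)))

dec = OccupancyDecoder()
probs = dec(torch.rand(4096, 3), torch.randn(256))   # arbitrary resolution:
print(probs.shape)  # just query more points; no voxel grid is ever stored
```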
Single-view 3D object reconstruction is a fundamental and challenging computer vision task that aims at recovering 3D shapes from single-view RGB images. Most existing deep learning based reconstruction methods are trained and evaluated on the same categories, and they cannot work well when handling objects from novel categories unseen during training. Focusing on this issue, this paper tackles zero-shot single-view 3D mesh reconstruction, to study model generalization on unseen categories and encourage models to literally reconstruct objects. Specifically, we propose an end-to-end two-stage network, ZeroMesh, to break the category boundaries in reconstruction. First, we factorize the complicated image-to-mesh mapping into two simpler mappings, i.e., an image-to-point mapping and a point-to-mesh mapping, where the latter is mainly a geometric problem and less dependent on object categories. Second, we devise a local feature sampling strategy in 2D and 3D feature spaces to capture the local geometry shared across objects, enhancing model generalization. Third, apart from the traditional point-wise supervision, we introduce a multi-view silhouette loss to supervise the surface generation process, which provides additional regularization and further relieves the overfitting problem. Experimental results show that our method significantly outperforms existing works on ShapeNet and Pix3D under different scenarios and various metrics, especially for novel objects.
The fields of social VR, performance capture, and virtual try-on often face the challenge of faithfully reproducing real garments in the virtual world. One critical task is to disentangle the intrinsic garment shape from the deformations caused by fabric properties, physical forces, and contact with the body. We propose to use a realistic and compact garment descriptor, the sewing pattern, to facilitate intrinsic garment shape estimation. Another major challenge is the diversity of shapes and designs in this domain. The most common approach in 3D garment deep learning is to build specialized models for individual garments or garment types. We argue that building a unified model for a variety of garment designs has the benefit of generalizing to novel garment types, hence covering a larger design domain than individual models. We introduce NeuralTailor, a novel architecture based on point-level attention for set regression with variable cardinality, and apply it to the task of reconstructing 2D garment sewing patterns from 3D point cloud garment models. Our experiments show that NeuralTailor successfully reconstructs sewing patterns and generalizes to garment types whose pattern topologies were unseen during training.
How would you repair a physical object with missing parts? You might imagine its original shape from previously captured images, recover its overall (global) but coarse shape first, and then refine its local details. We are motivated to imitate this physical repair procedure to address point cloud completion. To this end, we propose a cross-modal shape-transfer dual-refinement network (termed CSDN), a coarse-to-fine paradigm with full-cycle participation of images, for high-quality point cloud completion. CSDN mainly consists of a "shape fusion" module and a "dual-refinement" module to tackle the cross-modal challenge. The first module transfers the intrinsic shape characteristics from single images to guide the geometry generation of the missing regions of point clouds, in which we propose IPAdaIN to embed the global features of both the image and the partial point cloud into the completion. The second module refines the coarse output by adjusting the positions of the generated points, where the local refinement unit exploits the geometric relations between the novel and the input points via graph convolution, and the global constraint unit uses the input image to fine-tune the generated offsets. Unlike most existing approaches, CSDN not only explores the complementary information from images but also effectively exploits cross-modal data throughout the entire coarse-to-fine completion procedure. Experimental results show that CSDN performs favorably against its competitors on ten cross-modal benchmarks.
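The IPAdaIN operation is described only at a high level above; the sketch below shows a generic AdaIN-style injection, normalizing point features with their own statistics and re-scaling them with image-feature statistics. The exact conditioning used in the paper may differ.

```python
# Generic AdaIN-style feature injection: point features are re-styled with
# the statistics of image features, transferring a global shape cue.
import torch

def adain(point_feat: torch.Tensor, image_feat: torch.Tensor, eps=1e-5):
    """point_feat: (N,C) per-point features; image_feat: (M,C) image features."""
    p_mu, p_std = point_feat.mean(0), point_feat.std(0) + eps
    i_mu, i_std = image_feat.mean(0), image_feat.std(0) + eps
    return (point_feat - p_mu) / p_std * i_std + i_mu

fused = adain(torch.randn(2048, 128), torch.randn(64, 128))
print(fused.shape)  # torch.Size([2048, 128])
```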
Point cloud completion is a generation and estimation issue derived from partial point clouds, which plays a vital role in 3D computer vision applications. The progress of deep learning (DL) has impressively improved the capability and robustness of point cloud completion. However, the quality of completed point clouds still needs to be further enhanced to meet practical requirements. Therefore, this work conducts a comprehensive survey of the various methods, including point-based, convolution-based, graph-based, and generative model-based approaches, and summarizes the comparisons among them to provoke further research insights. Besides, this review sums up the commonly used datasets and illustrates the applications of point cloud completion. Finally, we discuss possible research trends in this rapidly expanding field.