We present Point2Cyl, a supervised network that transforms a raw 3D point cloud into a set of extrusion cylinders. Reverse engineering a CAD model from raw geometry is an essential task for enabling manipulation of 3D data in shape editing software, and thus expands its use in many downstream applications. In particular, the form of CAD model built from a sequence of extrusion cylinders (a 2D sketch plus an extrusion axis and range) and their Boolean combinations is not only widely used in the CAD community and CAD software, but also offers great shape expressivity compared to representations limited to a few primitive types (e.g., planes, spheres, and cylinders). In this work, we introduce a neural network that solves the extrusion cylinder decomposition problem by first learning underlying geometric proxies. Precisely, our approach first predicts per-point segmentation, base/barrel labels, and normals, and then estimates the underlying extrusion parameters in differentiable, closed-form formulations. Our experiments show that our approach achieves the best performance on two recent CAD datasets, Fusion Gallery and DeepCAD, and we further showcase it on reverse engineering and editing.
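One of the closed-form steps described above can be illustrated with a small sketch. Assuming (hypothetically; this is not the paper's code) that the network has already predicted unit normals on barrel points, the extrusion axis can be recovered in closed form: barrel normals of a perfect extrusion are orthogonal to the axis, so the axis is the eigenvector of the normal scatter matrix with the smallest eigenvalue.

```python
import numpy as np

def estimate_extrusion_axis(barrel_normals):
    """Estimate an extrusion axis from unit normals predicted on barrel points.

    For an ideal extrusion cylinder, every barrel normal is orthogonal to the
    axis, so the axis is the eigenvector of N^T N with the smallest eigenvalue.
    """
    N = np.asarray(barrel_normals, dtype=float)
    cov = N.T @ N                            # 3x3 scatter of the normals
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    axis = eigvecs[:, 0]                     # direction least spanned by normals
    return axis / np.linalg.norm(axis)

# Example: a square profile extruded along z has barrel normals in the xy-plane.
normals = np.array([[1, 0, 0], [0, 1, 0], [-1, 0, 0], [0, -1, 0]], float)
axis = estimate_extrusion_axis(normals)
```

The least-squares structure is what makes this step differentiable with respect to the predicted normals.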
Reverse engineering a CAD shape from other representations is an important geometric processing step for many downstream applications. In this work, we introduce a novel neural network architecture to solve this challenging task and approximate a smoothed signed distance function with an editable, constrained, prismatic CAD model. During training, our method reconstructs the input geometry in voxel space by decomposing the shape into a series of 2D profile images and 1D envelope functions. These can then be recombined in different ways, which allows a geometric loss function to be defined. During inference, we obtain the CAD data by first searching a database of 2D constrained sketches to find curves that approximate the profile images, then extruding them and using Boolean operations to build the final CAD model. Our method approximates the target shape more closely than other methods and outputs highly editable, constrained parametric sketches that are compatible with existing CAD software.
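The recombination of a 2D profile image with a 1D envelope function into voxel occupancy can be sketched in a few lines. This is a minimal illustration of the decomposition idea, not the paper's implementation: a voxel is occupied iff its (x, y) lies inside the profile and its z lies inside the extrusion extent.

```python
import numpy as np

def extrude(profile, envelope):
    """Recombine a binary 2D profile image (H, W) with a binary 1D envelope
    function (D,) into a voxel occupancy grid (D, H, W) by broadcasting:
    occupied = profile(x, y) AND envelope(z)."""
    profile = np.asarray(profile, bool)    # (H, W) cross-section
    envelope = np.asarray(envelope, bool)  # (D,) extrusion extent along z
    return profile[None, :, :] & envelope[:, None, None]

profile = np.array([[1, 1],
                    [1, 0]])          # L-shaped cross-section, 3 pixels set
envelope = np.array([0, 1, 1, 0])     # extruded over 2 of 4 z-slices
vox = extrude(profile, envelope)
```

Because the recombination is a simple outer AND, per-voxel reconstruction losses propagate cleanly to both factors.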
We present PriFit, a simple and efficient learning method for 3D shape segmentation networks. PriFit is based on a self-supervised task that decomposes the surface of a 3D shape into geometric primitives. It can be readily applied to existing network architectures for 3D shape segmentation and improves their performance in the few-shot setting, as we demonstrate on the widely used ShapeNet and PartNet benchmarks. PriFit outperforms the existing state of the art in this setting, suggesting that decomposition into primitives is a useful prior for learning representations predictive of semantic parts. We present a number of experiments, varying the choice of geometric primitives and downstream tasks, to demonstrate the effectiveness of the method.
Deep implicit surfaces excel at modeling generic shapes but do not always capture the regularities present in manufactured objects, which simple geometric primitives are particularly good at representing. In this paper, we propose a representation combining latent and explicit parameters that can be decoded into a set of deep implicit and geometric shapes that are consistent with each other. As a result, we can effectively model both the complex and the highly regular shapes that coexist in manufactured objects. This enables us to manipulate 3D shapes in an efficient and precise manner.
Automatically creating geometric models from point clouds has numerous applications in CAD (e.g., reverse engineering, manufacturing, assembly) and, more generally, in shape modeling and processing. Given a segmented point cloud representing a man-made object, we propose a method for recognizing simple geometric primitives and their interrelationships. Our approach is based on the Hough Transform (HT) for its ability to cope with noise, missing parts, and outliers. In our approach, we introduce a novel technique for processing segmented point clouds that, through a voting procedure, provides an initial estimate of the geometric parameters characterizing each primitive type. Using these estimates, we localize the search for the optimal solution in a dimensionally reduced parameter space, making it efficient to extend the HT to more primitives than those usually found in the literature (i.e., planes and spheres). We then extract a number of geometric descriptors that uniquely characterize a segment and, based on these descriptors, show how to aggregate segments into primitives. Experiments on both synthetic and industrial scans reveal the robustness of the primitive fitting method and its effectiveness in inferring relations among segments.
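The voting procedure at the heart of the Hough Transform can be illustrated on the simplest primitive, a 2D line, with a toy sketch (illustrative only; the paper operates on richer primitives in reduced parameter spaces). Each point votes for all lines x cos(t) + y sin(t) = rho passing through it, and the accumulator peak survives outliers.

```python
import numpy as np

def hough_line(points, n_theta=180, rho_res=0.05):
    """Vote in (theta, rho) space for 2D points; return the best-supported line.
    The accumulator peak is robust to outliers, the key property exploited
    for primitive fitting."""
    pts = np.asarray(points, float)
    thetas = np.linspace(0, np.pi, n_theta, endpoint=False)
    # rho of the line through each point, for every candidate theta
    rhos = pts @ np.stack([np.cos(thetas), np.sin(thetas)])  # (n_pts, n_theta)
    rho_idx = np.round(rhos / rho_res).astype(int)
    rho_min = rho_idx.min()
    acc = np.zeros((rho_idx.max() - rho_min + 1, n_theta), int)
    for j in range(n_theta):                  # accumulate votes per theta bin
        np.add.at(acc[:, j], rho_idx[:, j] - rho_min, 1)
    r, t = np.unravel_index(acc.argmax(), acc.shape)
    return thetas[t], (r + rho_min) * rho_res

# Points on the horizontal line y = 1, plus one gross outlier.
pts = [(x, 1.0) for x in np.linspace(-1, 1, 20)] + [(0.3, -2.0)]
theta, rho = hough_line(pts)
```

The outlier contributes a single stray vote per theta and never displaces the peak, which is exactly the robustness property the abstract appeals to.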
There has recently been a growing interest in implicit shape representations. Contrary to explicit representations, they have no resolution limitations and easily handle a wide variety of surface topologies. To learn these implicit representations, current methods rely on a certain level of shape supervision (e.g., inside/outside information or distance-to-shape knowledge), or at least require a dense point cloud (to approximate the distance-to-shape well enough). In contrast, we introduce {\method}, a self-supervised method for learning shape representations from possibly extremely sparse point clouds. As in Buffon's needle problem, we "drop" (sample) needles on the point cloud and consider that, statistically, close to the surface, the needle endpoints lie on opposite sides of the surface. No shape knowledge is required and the point cloud can be highly sparse, e.g., a lidar point cloud acquired by a vehicle. Previous self-supervised shape representation approaches fail to produce good results on this kind of data. We obtain quantitative results on par with existing supervised approaches on shape reconstruction datasets, and show promising qualitative results on hard autonomous driving datasets such as KITTI.
Figure 1: DeepSDF represents signed distance functions (SDFs) of shapes via latent code-conditioned feed-forward decoder networks. Above images are raycast renderings of DeepSDF interpolating between two shapes in the learned shape latent space. Best viewed digitally.
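The decoder structure described in the caption can be sketched in miniature. This toy (randomly initialized, untrained; layer sizes are illustrative, not DeepSDF's architecture) shows the interface: a feed-forward network maps a latent code z and a query point x to a signed distance value, and interpolation happens in code space.

```python
import numpy as np

rng = np.random.default_rng(0)

class LatentSDFDecoder:
    """Toy latent-code-conditioned SDF decoder: a 2-layer MLP mapping
    (latent code z, 3D query point x) to one signed distance value."""
    def __init__(self, latent_dim=8, hidden=32):
        self.W1 = rng.normal(0, 0.1, (latent_dim + 3, hidden))
        self.b1 = np.zeros(hidden)
        self.W2 = rng.normal(0, 0.1, (hidden, 1))
        self.b2 = np.zeros(1)

    def __call__(self, z, x):
        h = np.tanh(np.concatenate([z, x]) @ self.W1 + self.b1)
        return float(h @ self.W2 + self.b2)

decoder = LatentSDFDecoder()
z_a, z_b = rng.normal(size=8), rng.normal(size=8)
# Shape interpolation as in the figure: decode along a path between two codes.
sdf_mid = decoder(0.5 * (z_a + z_b), np.array([0.0, 0.0, 0.0]))
```

Raycasting such a decoder (sphere tracing the predicted distances) is what produces the renderings shown in the figure.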
There is no settled universal 3D representation for geometry, with many alternatives such as point clouds, meshes, implicit functions, and voxels, to name a few. In this work, we present a new, compelling alternative for representing shapes using a sequence of cross-sectional closed loops. The loops across all planes form an organizational hierarchy which we leverage for autoregressive shape synthesis and editing. Loops are a non-local description of the underlying shape, as simple loop manipulations (such as shifts) result in significant structural changes to the geometry. This is in contrast to manipulating local primitives such as points in a point cloud or a triangle in a triangle mesh. We further demonstrate that loops are an intuitive and natural primitive for analyzing and editing shapes, both computationally and for users.
In recent years, neural implicit representations have gained popularity in 3D reconstruction due to their expressiveness and flexibility. However, the implicit nature of these representations results in slow inference and requires careful initialization. In this paper, we revisit the classic yet ubiquitous point cloud representation and introduce a differentiable point-to-mesh layer using a differentiable formulation of Poisson Surface Reconstruction (PSR), which allows a fast, GPU-accelerated solution of the indicator function given an oriented point cloud. The differentiable PSR layer lets us efficiently and differentiably bridge the explicit 3D point representation and the 3D mesh via the implicit indicator field, enabling end-to-end optimization of surface reconstruction metrics such as Chamfer distance. This duality between points and meshes allows us to represent shapes as oriented point clouds, which are explicit, lightweight, and expressive. Compared to neural implicit representations, our Shape-As-Points (SAP) model is more interpretable, more lightweight, and accelerates inference by an order of magnitude. Compared to other explicit representations such as points, patches, and meshes, SAP produces topology-agnostic, watertight manifold surfaces. We demonstrate the effectiveness of SAP on the tasks of surface reconstruction from unoriented point clouds and learning-based reconstruction.
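The Chamfer distance named above as the end-to-end optimization target is easy to state concretely. A minimal reference implementation (brute-force pairwise distances; practical pipelines use accelerated nearest-neighbor queries):

```python
import numpy as np

def chamfer_distance(P, Q):
    """Symmetric Chamfer distance between two point sets, using squared
    Euclidean distances: mean nearest-neighbor distance from P to Q plus
    the same from Q to P."""
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    d2 = ((P[:, None, :] - Q[None, :, :]) ** 2).sum(-1)  # pairwise squared dists
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()

a = [[0, 0, 0], [1, 0, 0]]
b = [[0, 0, 0], [1, 0, 0]]
```

Because every operation here is differentiable almost everywhere, gradients of this metric can flow back through the mesh vertices to the oriented points, which is the property the differentiable PSR layer exploits.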
Figure 1. Given input as either a 2D image or a 3D point cloud (a), we automatically generate a corresponding 3D mesh (b) and its atlas parameterization (c). We can use the recovered mesh and atlas to apply texture to the output shape (d) as well as 3D print the results (e).
We present a novel implicit representation, the neural halfspace representation (NH-Rep), to convert a manifold B-Rep solid into an implicit representation. NH-Rep is a Boolean tree built on a set of implicit functions represented by neural networks; the composite Boolean function is capable of representing the solid geometry while preserving sharp features. We propose an efficient algorithm to extract the Boolean tree from a manifold B-Rep solid and devise a neural-network-based optimization approach to compute the implicit functions. We demonstrate the high quality offered by our conversion algorithm on one thousand manifold B-Rep CAD models containing various curved patches, including NURBS, and the superiority of our learning approach over other representative implicit conversion algorithms in terms of surface reconstruction, sharp feature preservation, signed distance field approximation, and robustness to various surface geometries, as well as a set of applications supported by NH-Rep.
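The Boolean composition of implicit functions can be illustrated with the classical min/max construction. This sketch uses analytic sphere distance functions in place of NH-Rep's learned neural implicits, purely to show how a Boolean tree of halfspace-like functions composes into one solid:

```python
import numpy as np

def sphere(center, radius):
    """Signed distance to a sphere: negative inside, positive outside."""
    c = np.asarray(center, float)
    return lambda p: float(np.linalg.norm(np.asarray(p, float) - c) - radius)

def union(f, g):        # Boolean union of implicit solids: pointwise min
    return lambda p: min(f(p), g(p))

def difference(f, g):   # subtraction: intersect f with the complement of g
    return lambda p: max(f(p), -g(p))

# A hollow shell: unit sphere minus a concentric smaller sphere.
solid = difference(sphere([0, 0, 0], 1.0), sphere([0, 0, 0], 0.5))
inside = solid([0.75, 0.0, 0.0])   # negative: inside the shell wall
outside = solid([0.0, 0.0, 0.0])   # positive: carved away by the subtraction
```

The min/max composition is exactly what lets the composed function keep sharp creases where two primitives meet, even when each leaf function is smooth.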
Intelligent mesh generation (IMG) refers to techniques that generate meshes via machine learning; it is a relatively new and promising research field. Within its short life span, IMG has greatly expanded the generalizability and practicality of mesh generation techniques and brought many breakthroughs and potential possibilities for mesh generation. However, there is a lack of surveys focusing on IMG methods covering recent works. In this paper, we are committed to a systematic and comprehensive survey describing the contemporary IMG landscape. Focusing on 110 preliminary IMG methods, we conducted an in-depth analysis and evaluation from multiple perspectives, including the core technique and application scope of the algorithm, agent learning goals, data types, targeted challenges, advantages and limitations. With the aim of literature collection and classification based on content extraction, we propose three different taxonomies from the three views of key technique, output mesh unit element, and applicable input data types. Finally, we highlight some promising future research directions and challenges in IMG. To maximize the convenience of readers, a project page for IMG is provided at \url{https://github.com/xzb030/IMG_Survey}.
Recent approaches to drape garments quickly over arbitrary human bodies leverage self-supervision to eliminate the need for large training sets. However, they are designed to train one network per clothing item, which severely limits their generalization abilities. In our work, we rely on self-supervision to train a single network to drape multiple garments. This is achieved by predicting a 3D deformation field conditioned on the latent codes of a generative network, which models garments as unsigned distance fields. Our pipeline can generate and drape previously unseen garments of any topology, whose shape can be edited by manipulating their latent codes. Being fully differentiable, our formulation makes it possible to recover accurate 3D models of garments from partial observations -- images or 3D scans -- via gradient descent. Our code will be made publicly available.
Reconstructing a 3D shape from a single sketch image is challenging due to the large domain gap between a sparse, irregular sketch and a regular, dense 3D shape. Existing works try to employ a global feature extracted from the sketch to directly predict the 3D coordinates, but they usually suffer from losing fine details and being unfaithful to the input sketch. By analyzing the 3D-to-2D projection process, we note that the density map characterizing the distribution of projected 2D points (i.e., the probability of a point landing at each location of the projection plane) can serve as a proxy to facilitate the reconstruction process. To this end, we first translate the sketch, via an image translation network, into a more informative 2D representation that can be used to generate a density map. Next, a 3D point cloud is reconstructed via a two-stage probabilistic sampling process: the 2D points (i.e., the x and y coordinates) are first recovered by sampling the density map; the depth (i.e., the z coordinate) is then predicted by sampling depth values along the ray determined by each 2D point. Extensive experiments were conducted, and both quantitative and qualitative results show that our proposed method significantly outperforms other baseline methods.
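Stage one of the sampling process, drawing (x, y) locations from a density map, amounts to treating the normalized map as a categorical distribution over pixels. A minimal sketch of that step (stage two would attach a sampled depth to each 2D point):

```python
import numpy as np

def sample_from_density(density, n, rng):
    """Draw n (x, y) pixel coordinates from a 2D density map by normalizing
    it into a categorical distribution over pixels and sampling indices."""
    d = np.asarray(density, float).ravel()
    p = d / d.sum()                               # normalize to probabilities
    idx = rng.choice(d.size, size=n, p=p)         # sample flat pixel indices
    ys, xs = np.unravel_index(idx, density.shape) # back to 2D coordinates
    return np.stack([xs, ys], axis=1)

rng = np.random.default_rng(0)
density = np.zeros((4, 4))
density[1, 2] = 1.0               # all probability mass at pixel (x=2, y=1)
pts = sample_from_density(density, 5, rng)
```

With a degenerate map, all samples land on the single high-density pixel; with a learned map, the sample density mirrors the predicted point distribution.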
Recent work on modeling 3D open surfaces trains deep neural networks to approximate Unsigned Distance Fields (UDFs) and represent shapes implicitly. To convert this representation into an explicit mesh, these methods either use computationally expensive approaches that mesh a dense point cloud sampled from the surface, or distort the surface by inflating it into a Signed Distance Field (SDF). In contrast, we propose to directly mesh deep UDFs as open surfaces with an extension of marching cubes that locally detects surface crossings. Our method is an order of magnitude faster than meshing a dense point cloud and more accurate than inflating open surfaces. Furthermore, we make our surface extraction differentiable and show that it can help fit sparse supervision signals.
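The local surface-crossing test can be sketched with a toy analytic UDF. The idea (paraphrased, not the paper's code): an unsigned field has no sign flip to march over, but if the normalized UDF gradients at two neighboring grid points face each other, the surface passes between them, which supplies the pseudo-sign change marching cubes needs.

```python
import numpy as np

def crossing(udf_grad, p, q, eps=1e-9):
    """Return True if an open surface passes between points p and q, detected
    by opposing UDF gradient directions. `udf_grad` stands in for the
    gradient of a learned unsigned distance field."""
    g1, g2 = udf_grad(p), udf_grad(q)
    g1 = g1 / (np.linalg.norm(g1) + eps)
    g2 = g2 / (np.linalg.norm(g2) + eps)
    return float(g1 @ g2) < 0.0           # gradients face each other

# Toy UDF for the open plane z = 0: distance |z|, gradient sign(z) * e_z.
grad = lambda p: np.array([0.0, 0.0, np.sign(p[2]) if p[2] != 0 else 1.0])
hit = crossing(grad, np.array([0, 0, -0.5]), np.array([0, 0, 0.5]))
miss = crossing(grad, np.array([0, 0, 0.5]), np.array([0, 0, 1.5]))
```

On a grid, this test replaces the usual per-edge sign comparison, after which standard marching-cubes triangulation proceeds unchanged.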
Normal estimation for unstructured point clouds is an important task in 3D computer vision. Current methods achieve encouraging results by mapping local patches to normal vectors or learning local surface fitting using neural networks. However, these methods are not generalized well to unseen scenarios and are sensitive to parameter settings. To resolve these issues, we propose an implicit function to learn an angle field around the normal of each point in the spherical coordinate system, which is dubbed as Neural Angle Fields (NeAF). Instead of directly predicting the normal of an input point, we predict the angle offset between the ground truth normal and a randomly sampled query normal. This strategy pushes the network to observe more diverse samples, which leads to higher prediction accuracy in a more robust manner. To predict normals from the learned angle fields at inference time, we randomly sample query vectors in a unit spherical space and take the vectors with minimal angle values as the predicted normals. To further leverage the prior learned by NeAF, we propose to refine the predicted normal vectors by minimizing the angle offsets. The experimental results with synthetic data and real scans show significant improvements over the state-of-the-art under widely used benchmarks.
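The inference procedure described above, sampling query directions and keeping the one with the smallest predicted angle offset, can be sketched as follows. Here `angle_field` stands in for the learned NeAF network; for illustration it is an oracle that returns the true angle to a known normal (names and sizes are hypothetical):

```python
import numpy as np

def predict_normal(angle_field, n_queries=512, rng=None):
    """Mimic the inference scheme: sample random unit query vectors and
    return the one with the smallest predicted angle offset."""
    rng = rng or np.random.default_rng(0)
    q = rng.normal(size=(n_queries, 3))             # Gaussian samples ...
    q /= np.linalg.norm(q, axis=1, keepdims=True)   # ... normalized to the sphere
    angles = np.array([angle_field(v) for v in q])
    return q[angles.argmin()]

gt = np.array([0.0, 0.0, 1.0])
oracle = lambda v: np.arccos(np.clip(v @ gt, -1.0, 1.0))
n_hat = predict_normal(oracle)
```

The refinement step mentioned in the abstract would then minimize the angle offset starting from `n_hat` instead of relying on sampling density alone.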
Generating dense point clouds from sparse raw data benefits downstream 3D understanding tasks, but existing models are limited to a fixed upsampling ratio or to a short range of integer values. In this paper, we propose APU-SMOG, a Transformer-based model for arbitrary point cloud upsampling (APU). The sparse input is first mapped to a Spherical Mixture of Gaussians (SMOG) distribution, from which an arbitrary number of points can be sampled. These samples are then fed as queries to a Transformer decoder, which maps them back to the target surface. Extensive qualitative and quantitative evaluations show that APU-SMOG outperforms state-of-the-art fixed-ratio methods while effectively enabling upsampling with any scaling factor, including non-integer values, with a single trained model. The code will be made available.
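The property that makes arbitrary upsampling ratios possible is that any number of points can be drawn from a mixture of Gaussians. A minimal sketch of that sampling step (an isotropic mixture with hypothetical parameters; the actual model predicts the mixture from the sparse input and decodes the samples with a Transformer):

```python
import numpy as np

def sample_smog(means, sigma, weights, n, rng):
    """Draw n points from an isotropic Gaussian mixture: pick a component per
    sample according to the mixture weights, then add Gaussian noise around
    that component's mean."""
    means = np.asarray(means, float)
    comp = rng.choice(len(means), size=n, p=weights)
    return means[comp] + rng.normal(0, sigma, size=(n, means.shape[1]))

rng = np.random.default_rng(0)
means = [[0.0, 0.0, 0.0], [10.0, 0.0, 0.0]]
pts = sample_smog(means, sigma=0.1, weights=[0.5, 0.5], n=1000, rng=rng)
```

Since `n` is a free parameter here, the same trained model can serve any target density, integer ratio or not.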
We present a neural technique for learning to select a local sub-region around a point which can be used for mesh parameterization. The motivation for our framework is driven by interactive workflows used for decaling, texturing, or painting on surfaces. Our key idea is to incorporate segmentation probabilities as weights of a classical parameterization method, implemented as a novel differentiable parameterization layer within a neural network framework. We train a segmentation network to select 3D regions that are parameterized into 2D and penalized by the resulting distortion, giving rise to segmentations which are distortion-aware. Following training, a user can use our system to interactively select a point on the mesh and obtain a large, meaningful region around the selection which induces a low-distortion parameterization. Our code and project page are currently available.
Three-dimensional (3D) building models play an increasingly pivotal role in many real-world applications, yet obtaining a compact representation of buildings remains an open problem. In this paper, we present a novel framework for reconstructing compact, watertight, polygonal building models from point clouds. Our framework comprises three components: (a) a cell complex is generated via adaptive space partitioning, providing a polyhedral embedding as the candidate set; (b) an implicit field is learned by a deep neural network to facilitate building occupancy estimation; (c) a Markov random field is formulated to extract the outer surface of the building via combinatorial optimization. We evaluate and compare our method against state-of-the-art methods in shape reconstruction, surface approximation, and geometry simplification. Experiments on both synthetic and real-world point clouds show that, with our neural-guided strategy, high-quality building models can be obtained with significant advantages in fidelity, compactness, and computational efficiency. Our method also shows robustness to noise and insufficient measurements, and can directly generalize from synthetic scans to real-world measurements.