Scene completion is the task of completing missing geometry from a partial scan of a scene. Most previous methods compute an implicit representation from a Truncated Signed Distance Function (TSDF) on a 3D grid as input to a neural network. The truncation limits, but does not remove, the ambiguous cases introduced by the sign of non-closed surfaces. As an alternative, we propose an Unsigned Distance Function (UDF) called Unsigned Weighted Euclidean Distance (UWED) as the input representation for a scene completion neural network. UWED is simple and effective as a geometric representation and can be computed on any point cloud: in contrast to the usual Signed Distance Functions (SDF), UWED does not require normal computation. To obtain explicit geometry, we propose a method to extract a point cloud from UDF values discretized on a regular grid. We compare different SDFs and UDFs on the scene completion task, on indoor and outdoor point clouds collected from RGB-D and LiDAR sensors, and show improved completion using the proposed UWED function.
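The key property claimed above, an unsigned distance computable on any point cloud without normals, can be illustrated in a few lines. This is a hypothetical sketch: the exact UWED weighting is defined in the paper, and the Gaussian-weighted average over the k nearest neighbours below (as well as the `k` and `sigma` values) is only a stand-in.

```python
import numpy as np

def uwed(query, points, k=8, sigma=0.1):
    """Unsigned weighted distance of `query` to a point cloud.

    Hypothetical sketch: a Gaussian-weighted average of the k nearest
    Euclidean distances. No normals are needed, unlike an SDF, and the
    result is always non-negative.
    """
    d = np.linalg.norm(points - query, axis=1)   # distances to all points
    nn = np.sort(d)[:k]                          # k nearest distances
    w = np.exp(-(nn / sigma) ** 2)               # Gaussian weights
    return float(np.sum(w * nn) / np.sum(w))     # weighted average >= 0
```

Evaluating such a function on a regular grid gives the discretized UDF volume that the completion network would consume.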
Paris-CARLA-3D is a dataset of several dense colored point clouds built by a mobile LiDAR and camera system. The data consist of two sets: synthetic data from the open-source CARLA simulator (700 million points) and real data acquired in the city of Paris (60 million points), hence the name Paris-CARLA-3D. One advantage of this dataset is that the same LiDAR and camera platform used to produce the real data was simulated in the open-source CARLA simulator. Furthermore, manual annotation of semantic labels following CARLA's classes was performed on the real data, allowing transfer methods from synthetic to real data to be tested. The goal of this dataset is to provide a challenging benchmark to evaluate and improve methods on difficult vision tasks for 3D mapping of outdoor environments: semantic segmentation, instance segmentation, and scene completion. For each task, we describe the evaluation protocol as well as the experiments carried out to establish a baseline.
LiDAR sensors provide rich 3D information about the surrounding scene and are becoming increasingly important for autonomous vehicle tasks such as semantic segmentation, object detection, and tracking. The ability to simulate LiDAR sensors will accelerate the testing, validation, and deployment of autonomous vehicles while reducing cost and eliminating the risks of testing in real-world scenarios. To address the problem of simulating LiDAR data with high fidelity, we present a pipeline that leverages real-world point clouds acquired by mobile mapping systems. Point-based geometry representations, more specifically splats, have proven their ability to accurately model the underlying surface in very large point clouds. We introduce an adaptive splat generation method that accurately models the underlying 3D geometry, especially for thin structures. We also develop a fast LiDAR simulation by ray casting on the GPU, while efficiently handling large point clouds. We test our LiDAR simulation in real-world conditions, showing qualitative and quantitative results compared to basic splatting and meshing techniques, demonstrating the superiority of our modeling technique.
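As a minimal illustration of the simulation setup, the rays of a spinning LiDAR can be generated as an azimuth-elevation grid. The channel count and angles below are illustrative, not the sensor model of the paper, and the actual casting of these rays against adaptive splats happens on the GPU.

```python
import numpy as np

def lidar_rays(n_azimuth=360, elevations_deg=(-15.0, -5.0, 5.0, 15.0)):
    """Generate unit ray directions for a simple spinning-LiDAR model.

    Sketch only: one ray per (azimuth step, elevation channel) pair;
    the 4-channel layout here is an assumption for illustration.
    """
    az = np.deg2rad(np.arange(n_azimuth) * 360.0 / n_azimuth)
    el = np.deg2rad(np.asarray(elevations_deg, dtype=float))
    a, e = np.meshgrid(az, el, indexing="ij")       # all angle pairs
    dirs = np.stack([np.cos(e) * np.cos(a),         # x
                     np.cos(e) * np.sin(a),         # y
                     np.sin(e)], axis=-1)           # z
    return dirs.reshape(-1, 3)                      # (n_azimuth * channels, 3)
```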
We present Gradient-SDF, a novel representation for 3D geometry that combines the advantages of implicit and explicit representations. By storing at every voxel both the signed distance field and its gradient vector field, we enhance the capability of implicit representations with approaches originally formulated for explicit surfaces. As concrete examples, we show that (1) Gradient-SDF allows us to perform direct SDF tracking from depth images using efficient storage schemes like hash maps, and that (2) the Gradient-SDF representation enables us to perform photometric bundle adjustment directly in the voxel representation (without transforming into a point cloud or mesh), naturally a fully implicit joint optimization of geometry and camera poses, with easy geometry upsampling. Experimental results confirm that this leads to significantly sharper reconstructions. Since the overall SDF voxel structure is still respected, the proposed Gradient-SDF is equally suited for (GPU) parallelization as related approaches.
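The core idea, storing the gradient alongside the signed distance in a hash map, can be sketched as follows. A Python dict stands in for the voxel hash map and the inserted values are illustrative; the point of the sketch is that with a unit gradient stored per voxel, the nearest surface point is available in closed form.

```python
import numpy as np

voxel_size = 0.05
grid = {}  # (i, j, k) voxel key -> (signed distance, unit gradient)

def insert(center, d, grad):
    """Store a signed distance and its (normalized) gradient at a voxel."""
    key = tuple(np.floor(center / voxel_size).astype(int))
    grid[key] = (d, grad / np.linalg.norm(grad))

def closest_surface_point(center):
    """With the gradient stored, the nearest surface point is p = x - d * g,
    with no marching or interpolation needed."""
    key = tuple(np.floor(center / voxel_size).astype(int))
    d, g = grid[key]
    return center - d * g
```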
Shape completion, the problem of estimating the complete geometry of objects from partial observations, lies at the core of many vision and robotics applications. In this work, we propose Point Completion Network (PCN), a novel learning-based approach for shape completion. Unlike existing shape completion methods, PCN directly operates on raw point clouds without any structural assumption (e.g. symmetry) or annotation (e.g. semantic class) about the underlying shape. It features a decoder design that enables the generation of fine-grained completions while maintaining a small number of parameters. Our experiments show that PCN produces dense, complete point clouds with realistic structures in the missing regions on inputs with various levels of incompleteness and noise, including cars from LiDAR scans in the KITTI dataset. Code, data and trained models are available at https://wentaoyuan.github.io/pcn.
Implicit neural networks have been successfully used for surface reconstruction from point clouds. However, many of them face scalability issues as they encode the isosurface function of a whole object or scene into a single latent vector. To overcome this limitation, a few approaches infer latent vectors on a coarse regular 3D grid or on 3D patches, and interpolate them to answer occupancy queries. In doing so, they lose the direct connection with the input points sampled on the surface of the object, and they attach information uniformly in space rather than where it matters most, i.e., near the surface. Besides, relying on fixed patch sizes may require discretization tuning. To address these issues, we propose to use point cloud convolutions and compute latent vectors at each input point. We then perform a learning-based interpolation on the nearest neighbors using inferred weights. Experiments on object and scene datasets show that our approach significantly outperforms other methods on most classical metrics, producing finer details and better reconstructing thinner volumes. The code is available at https://github.com/valeoai/poco.
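The per-point latents and their interpolation can be sketched as below. In POCO the interpolation weights are inferred by the network; the distance-based softmax weights here are only a hand-crafted stand-in, and the decoding of the blended latent into occupancy is omitted.

```python
import numpy as np

def interpolate_latents(query, points, latents, k=4, tau=0.1):
    """Blend the latent vectors of the k input points nearest to `query`.

    Sketch: each input point carries its own latent vector (one row of
    `latents`); the learned weights of POCO are replaced by a softmax
    over negative distances for illustration.
    """
    d = np.linalg.norm(points - query, axis=1)
    idx = np.argsort(d)[:k]                          # k nearest input points
    w = np.exp(-d[idx] / tau)
    w = w / w.sum()                                  # normalized weights
    return (w[:, None] * latents[idx]).sum(axis=0)   # blended latent vector
```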
There has recently been a growing interest in implicit shape representations. Contrary to explicit representations, they have no resolution limitations and easily handle a wide variety of surface topologies. To learn these implicit representations, current approaches rely on a certain level of shape supervision (e.g., inside/outside information or distance-to-shape knowledge), or at least require a dense point cloud (to approximate the distance-to-shape well enough). In contrast, we introduce {\method}, a self-supervised method for learning shape representations from possibly extremely sparse point clouds. As in Buffon's needle problem, we "drop" (sample) needles on the point cloud and assume that, statistically, close to the surface, the needle end points lie on opposite sides of the surface. No shape knowledge is required, and the point cloud can be highly sparse, e.g., a LiDAR point cloud acquired by a vehicle. Previous self-supervised shape representation approaches fail to produce good results on such data. We obtain quantitative results on par with existing supervised approaches on standard shape reconstruction datasets and show promising qualitative results on hard autonomous driving datasets such as KITTI.
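The needle-dropping step can be sketched as follows, with illustrative values for needle length and count. The self-supervised loss (not shown) then asserts that, statistically, the two end points of a needle dropped near the surface receive opposite signs.

```python
import numpy as np

def drop_needles(points, n=1000, length=0.05, seed=None):
    """Sample `n` needles of fixed `length` on a (possibly sparse) cloud.

    Sketch: each needle is centered on a randomly chosen input point with
    a random orientation; placement and length are illustrative choices.
    Returns the two end-point arrays, each of shape (n, 3).
    """
    rng = np.random.default_rng(seed)
    centers = points[rng.integers(0, len(points), n)]
    dirs = rng.normal(size=(n, 3))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)  # random unit directions
    a = centers + 0.5 * length * dirs                    # needle end point 1
    b = centers - 0.5 * length * dirs                    # needle end point 2
    return a, b
```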
Neural implicit functions have recently shown promising results for surface reconstruction from multiple views. However, current methods still suffer from excessive complexity and poor robustness when reconstructing unbounded or complex scenes. In this paper, we present RegSDF, which shows that proper point cloud supervision and geometry regularization are sufficient to produce high-quality and robust reconstruction results. Specifically, RegSDF takes an additional oriented point cloud as input and optimizes a signed distance field and a surface light field within a differentiable rendering framework. We also introduce two critical regularizations. The first is a Hessian regularization that smoothly diffuses signed distance values over the whole distance field given noisy and incomplete input. The second is a minimal surface regularization that compactly interpolates and extrapolates the missing geometry. Extensive experiments are conducted on the DTU, BlendedMVS, and Tanks and Temples datasets. Compared with recent neural surface reconstruction approaches, RegSDF is able to reconstruct surfaces with fine details even for open scenes with complex topology and unstructured camera trajectories.
Our method completes a partial 3D scan using a 3D Encoder-Predictor network that leverages semantic features from a 3D classification network. The predictions are correlated with a shape database, which we use in a multi-resolution 3D shape synthesis step. We obtain completed high-resolution meshes that are inferred from partial, low-resolution input scans.
Figure 1: (a) Training parts from ShapeNet. (b) t-SNE plot of part embeddings. (c) Reconstructing entire scenes with Local Implicit Grids. We learn an embedding of parts from objects in ShapeNet [3] using a part autoencoder with an implicit decoder. We show that this representation of parts is generalizable across object categories, and easily scalable to large scenes. By localizing implicit functions in a grid, we are able to reconstruct entire scenes from points via optimization of the latent grid.
We present CIRCLE, a framework for large-scale scene completion and geometric refinement based on local implicit signed distance functions. It is built on an end-to-end sparse convolutional network, CircNet, that jointly models local geometric details and global scene structural contexts, allowing it to preserve fine-grained object detail while recovering the missing regions commonly produced in conventionally acquired 3D scene data. A novel differentiable rendering module further enables test-time refinement for better reconstruction quality. Extensive experiments on both real-world and synthetic datasets show that our concise framework is efficient and effective, achieving better reconstruction quality than its closest competitor while running faster.
Figure 1: DeepSDF represents signed distance functions (SDFs) of shapes via latent code-conditioned feed-forward decoder networks. Above images are raycast renderings of DeepSDF interpolating between two shapes in the learned shape latent space. Best viewed digitally.
Reconstructing 3D shapes from single-view images has been a long-standing research problem. In this paper, we present DISN, a Deep Implicit Surface Network that can generate high-quality, detail-rich 3D meshes from a 2D image by predicting the underlying signed distance field. In addition to utilizing global image features, DISN predicts the projected location of each 3D point on the 2D image and extracts local features from the image feature maps. Combining global and local features significantly improves the accuracy of the signed distance field prediction, especially for detail-rich areas. To the best of our knowledge, DISN is the first method that consistently captures details such as holes and thin structures present in 3D shapes from single-view images. DISN achieves state-of-the-art single-view reconstruction performance on a variety of shape categories reconstructed from both synthetic and real images. Code is available at https://github.com/xharlie/disn. The supplementary material can be found at https://xharlie.github.io/images/neUrips_2019_Supp.pdf
Implicit fields have been very effective to represent and learn 3D shapes accurately. Signed distance fields and occupancy fields are the preferred representations, both with well-studied properties, despite their restriction to closed surfaces. Several other variations and training principles have been proposed with the goal to represent all classes of shapes. In this paper, we develop a novel and yet fundamental representation by considering the unit vector field defined on 3D space: at each point in $\mathbb{R}^3$ the vector points to the closest point on the surface. We theoretically demonstrate that this vector field can be easily transformed to surface density by applying the vector field divergence. Unlike other standard representations, it directly encodes an important physical property of the surface, which is the surface normal. We further show the advantages of our vector field representation, specifically in learning general (open, closed, or multi-layered) surfaces as well as piecewise planar surfaces. We compare our method on several datasets including ShapeNet where the proposed new neural implicit field shows superior accuracy in representing any type of shape, outperforming other standard methods. The code will be released at https://github.com/edomel/ImplicitVF
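The claimed transformation from the unit vector field to a surface density via the divergence can be checked numerically on an analytic example. Here the field for a sphere is written in closed form, whereas the paper learns it with a neural implicit field; the grid resolution and radius are illustrative.

```python
import numpy as np

# At each grid point, store the unit vector toward the closest point on a
# sphere of radius 0.5: inward when outside the sphere, outward when inside.
# The field flips across the surface, so its (negative) divergence
# concentrates there, yielding a surface density.
n, r = 64, 0.5
ax = np.linspace(-1, 1, n)
X, Y, Z = np.meshgrid(ax, ax, ax, indexing="ij")
norm = np.sqrt(X**2 + Y**2 + Z**2) + 1e-9   # small epsilon avoids 0/0
sign = np.where(norm > r, -1.0, 1.0)
Vx, Vy, Vz = sign * X / norm, sign * Y / norm, sign * Z / norm
h = ax[1] - ax[0]
div = (np.gradient(Vx, h, axis=0)
       + np.gradient(Vy, h, axis=1)
       + np.gradient(Vz, h, axis=2))
density = -div   # spikes on the sphere surface, small elsewhere
```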
We present GO-Surf, a direct feature grid optimization method for accurate and fast surface reconstruction from RGB-D sequences. We model the underlying scene with a learned hierarchical feature voxel grid that encapsulates multi-level geometric and appearance local information. Feature vectors are directly optimized such that, after trilinear interpolation, decoding by two shallow MLPs into signed distance and radiance values, and rendering through surface-aware volume rendering, the discrepancy between synthesized and observed RGB/depth values is minimized. Our supervision signals (RGB, depth, and approximate SDF) can be obtained directly from the input images, without any need for fusion or post-processing. We formulate a novel SDF gradient regularization term that encourages surface smoothness and hole filling while maintaining high-frequency details. GO-Surf can optimize sequences of 1-2K frames in 15-45 minutes, a speedup of ×60 over NeuralRGB-D, the most related approach based on an MLP representation, while maintaining on-par performance on standard benchmarks. Project page: https://jingwenwang95.github.io/go_surf/
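The trilinear sampling step at the heart of such feature-grid methods can be sketched as below; decoding by the shallow MLPs and the volume rendering are omitted. `grid` has shape (nx, ny, nz, channels) and the query point is given in voxel coordinates.

```python
import numpy as np

def trilinear(grid, p):
    """Trilinearly interpolate a feature voxel grid at continuous point p.

    The interpolated feature vector is what a feature-grid method would
    pass to its decoder MLPs. Gradients flow back to the 8 corner
    features through the interpolation weights, which is what makes
    direct grid optimization possible.
    """
    i = np.floor(p).astype(int)        # lower corner voxel
    f = p - i                          # fractional offset in [0, 1)
    out = np.zeros(grid.shape[-1])
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                w = ((f[0] if dx else 1 - f[0])
                     * (f[1] if dy else 1 - f[1])
                     * (f[2] if dz else 1 - f[2]))
                out += w * grid[i[0] + dx, i[1] + dy, i[2] + dz]
    return out
```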
Recently, implicit neural representations have gained popularity for learning-based 3D reconstruction. While demonstrating promising results, most implicit approaches are limited to comparably simple geometry of single objects and do not scale to more complicated or large-scale scenes. The key limiting factor of implicit methods is their simple fullyconnected network architecture which does not allow for integrating local information in the observations or incorporating inductive biases such as translational equivariance. In this paper, we propose Convolutional Occupancy Networks, a more flexible implicit representation for detailed reconstruction of objects and 3D scenes. By combining convolutional encoders with implicit occupancy decoders, our model incorporates inductive biases, enabling structured reasoning in 3D space. We investigate the effectiveness of the proposed representation by reconstructing complex geometry from noisy point clouds and low-resolution voxel representations. We empirically find that our method enables the fine-grained implicit 3D reconstruction of single objects, scales to large indoor scenes, and generalizes well from synthetic to real data.
The recent neural implicit representation-based methods have greatly advanced the state of the art for solving the long-standing and challenging problem of reconstructing a discrete surface from a sparse point cloud. These methods generally learn either a binary occupancy or signed/unsigned distance field (SDF/UDF) as surface representation. However, all the existing SDF/UDF-based methods use neural networks to implicitly regress the distance in a purely data-driven manner, thus limiting the accuracy and generalizability to some extent. In contrast, we propose the first geometry-guided method for UDF and its gradient estimation that explicitly formulates the unsigned distance of a query point as the learnable affine averaging of its distances to the tangent planes of neighbouring points. Besides, we model the local geometric structure of the input point clouds by explicitly learning a quadratic polynomial for each point. This not only facilitates upsampling the input sparse point cloud but also naturally induces unoriented normal, which further augments UDF estimation. Finally, to extract triangle meshes from the predicted UDF we propose a customized edge-based marching cube module. We conduct extensive experiments and ablation studies to demonstrate the significant advantages of our method over state-of-the-art methods in terms of reconstruction accuracy, efficiency, and generalizability. The source code is publicly available at https://github.com/rsy6318/GeoUDF.
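The tangent-plane formulation of the unsigned distance can be sketched with uniform weights. GeoUDF learns the affine averaging weights and induces the (unoriented) normals from per-point quadratic fits, so both the uniform weighting and the precomputed normals below are simplifying assumptions.

```python
import numpy as np

def tangent_plane_udf(query, points, normals, k=8):
    """Unsigned distance of `query` as an average of its absolute
    distances to the tangent planes of the k nearest neighbours.

    Sketch only: the learnable affine weights of the paper are replaced
    by a uniform mean, and `normals` are assumed to be given.
    """
    d = np.linalg.norm(points - query, axis=1)
    idx = np.argsort(d)[:k]                      # k nearest neighbours
    plane_d = np.abs(np.einsum("ij,ij->i",
                               query - points[idx], normals[idx]))
    return float(plane_d.mean())                 # uniform affine weights
```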
Surface reconstruction from noisy, non-uniform, and unoriented point clouds is a fascinating yet challenging problem in computer vision and graphics. As 3D scanning technologies advance, it is highly desirable to directly transform raw scanned data, often with severe noise, into manifold triangle meshes. Existing learning-based methods aim to learn an implicit function whose zero-level surface approximates the underlying shape. However, most of them fail to obtain desirable results on noisy and sparse point clouds, which limits their use in practice. In this paper, we introduce Neural-IMLS, a novel approach that directly learns a noise-resistant signed distance function (SDF) from unoriented raw point clouds in a self-supervised manner. By minimizing the loss between the prediction of the neural network and the SDF obtained by implicit moving least-squares (IMLS), our method learns the underlying SDF from raw point clouds without explicit priors, while the gradient of the network's prediction defines the tangent bundle that facilitates the computation of IMLS. We show that when the two SDFs coincide, our neural network can predict a signed implicit function whose zero-level set serves as a good approximation of the underlying surface. We conduct extensive experiments on various benchmarks, including synthetic scans and real-world scans, to demonstrate the ability to reconstruct faithful shapes from a variety of inputs, especially point clouds with noise or gaps.
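The IMLS value that serves as the self-supervised target can be written directly: a Gaussian-weighted average of signed distances to the tangent planes of nearby oriented points. This standalone sketch assumes the normals are given, whereas Neural-IMLS takes them from the gradient of its own predictor; `sigma` is an illustrative bandwidth.

```python
import numpy as np

def imls_sdf(query, points, normals, sigma=0.1):
    """Signed distance at `query` via implicit moving least-squares.

    f(q) = sum_i w_i (q - p_i) . n_i / sum_i w_i,
    with Gaussian weights w_i = exp(-||q - p_i||^2 / sigma^2).
    """
    diff = query - points
    w = np.exp(-np.sum(diff**2, axis=1) / sigma**2)   # Gaussian weights
    signed = np.einsum("ij,ij->i", diff, normals)     # signed plane distances
    return float(np.sum(w * signed) / np.sum(w))
```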