Neural Radiance Fields (NeRFs) are emerging as a ubiquitous scene representation that allows for novel view synthesis. Increasingly, NeRFs will be shareable with other people. Before sharing a NeRF, though, it might be desirable to remove personal information or unsightly objects. Such removal is not easily achieved with the current NeRF editing frameworks. We propose a framework to remove objects from a NeRF representation created from an RGB-D sequence. Our NeRF inpainting method leverages recent work in 2D image inpainting and is guided by a user-provided mask. Our algorithm is underpinned by a confidence-based view selection procedure. It chooses which of the individual 2D inpainted images to use in the creation of the NeRF, so that the resulting inpainted NeRF is 3D consistent. We show that our method for NeRF editing is effective for synthesizing plausible inpaintings in a multi-view coherent manner. We validate our approach using a new and still-challenging dataset for the task of NeRF inpainting.
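As a rough illustration of the kind of confidence-based view selection described above, the following Python sketch keeps only the most confident inpainted views before they are used to supervise the NeRF; the per-view confidence scores, the keep ratio, and all names are hypothetical placeholders, not the paper's actual procedure.

```python
import numpy as np

def select_inpainted_views(views, confidences, keep_ratio=0.5):
    """Keep the most self-consistent inpainted views.

    views       : list of per-view inpainted RGB images, (H, W, 3) arrays
    confidences : (len(views),) scalar confidence per view, e.g. agreement
                  with the other inpaintings (how it is computed is an
                  assumption here)
    keep_ratio  : fraction of views to retain for NeRF supervision
    """
    confidences = np.asarray(confidences, dtype=np.float64)
    n_keep = max(1, int(round(keep_ratio * len(views))))
    order = np.argsort(-confidences)          # highest confidence first
    selected = order[:n_keep]
    return [views[i] for i in selected], selected

# Example usage with dummy data:
dummy_views = [np.zeros((4, 4, 3)) for _ in range(6)]
dummy_conf = [0.9, 0.2, 0.7, 0.4, 0.8, 0.1]
kept, idx = select_inpainted_views(dummy_views, dummy_conf, keep_ratio=0.5)
print("selected view indices:", idx)
```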
With the success of neural volume rendering in novel view synthesis, neural implicit reconstruction with volume rendering has become popular. However, most methods optimize per-scene functions and are unable to generalize to novel scenes. We introduce VolRecon, a generalizable implicit reconstruction method with Signed Ray Distance Function (SRDF). To reconstruct with fine details and little noise, we combine projection features, aggregated from multi-view features with a view transformer, and volume features interpolated from a coarse global feature volume. A ray transformer computes SRDF values of all the samples along a ray to estimate the surface location, which are used for volume rendering of color and depth. Extensive experiments on DTU and ETH3D demonstrate the effectiveness and generalization ability of our method. On DTU, our method outperforms SparseNeuS by about 30% in sparse view reconstruction and achieves quality comparable to MVSNet in full view reconstruction. Besides, our method shows good generalization ability on the large-scale ETH3D benchmark. Project page: https://fangjinhuawang.github.io/VolRecon.
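To make the rendering step concrete, here is a small sketch of volume rendering of color and depth from per-sample signed distances along a ray, using a NeuS-style conversion from signed distance to opacity; the conversion and the inverse-scale parameter are illustrative assumptions, not the exact VolRecon formulation.

```python
import numpy as np

def render_color_and_depth(t, srdf, rgb, inv_s=64.0):
    """Composite color and depth along one ray.

    t     : (N,) sample distances along the ray, increasing
    srdf  : (N,) signed (ray) distances predicted at the samples
    rgb   : (N, 3) predicted colors at the samples
    inv_s : sharpness of the sigmoid, an assumed hyper-parameter
    """
    cdf = 1.0 / (1.0 + np.exp(-inv_s * srdf))            # sigmoid of the signed distance
    alpha = np.clip((cdf[:-1] - cdf[1:]) / (cdf[:-1] + 1e-8), 0.0, 1.0)
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha]))[:-1]   # transmittance
    weights = trans * alpha                               # per-sample contribution
    color = (weights[:, None] * rgb[:-1]).sum(axis=0)
    depth = (weights * t[:-1]).sum()
    return color, depth, weights

# Toy ray: the surface (zero crossing of the signed distance) sits near t = 1.0
t = np.linspace(0.5, 1.5, 64)
srdf = 1.0 - t
rgb = np.tile([[0.2, 0.5, 0.8]], (64, 1))
color, depth, _ = render_color_and_depth(t, srdf, rgb)
print(color, depth)   # rendered depth should be close to 1.0
```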
Line segments are ubiquitous in our human-made world and are increasingly used in vision tasks. They are complementary to feature points thanks to their spatial extent and the structural information they provide. Traditional line detectors based on the image gradient are extremely fast and accurate, but lack robustness in noisy images and challenging conditions. Their learned counterparts are more repeatable and can handle challenging images, but at the cost of a lower accuracy and a bias towards wireframe lines. We propose to combine traditional and learned approaches to get the best of both worlds: an accurate and robust line detector that can be trained in the wild without ground truth lines. Our new line segment detector, DeepLSD, processes images with a deep network to generate a line attraction field, before converting it to a surrogate image gradient magnitude and angle, which is then fed to any existing handcrafted line detector. Additionally, we propose a new optimization tool to refine line segments based on the attraction field and vanishing points. This refinement improves the accuracy of current deep detectors by a large margin. We demonstrate the performance of our method on low-level line detection metrics, as well as on several downstream tasks using multiple challenging datasets. The source code and models are available at https://github.com/cvg/DeepLSD.
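The sketch below shows one plausible way to turn a predicted line distance field and line-angle field into a surrogate gradient magnitude and orientation that a handcrafted, gradient-based detector (e.g. LSD) could consume; the linear fall-off and the parameter r_max are assumptions for illustration, not DeepLSD's exact mapping.

```python
import numpy as np

def attraction_field_to_surrogate_gradient(dist_field, line_angle, r_max=5.0):
    """Convert a line attraction field into a surrogate image gradient.

    dist_field : (H, W) distance of each pixel to the nearest line (pixels)
    line_angle : (H, W) orientation of the nearest line (radians)
    r_max      : distance at which the surrogate magnitude falls to zero
                 (an assumed hyper-parameter)
    Returns (magnitude, angle): pixels close to a line get a strong surrogate
    gradient, oriented perpendicular to that line.
    """
    magnitude = np.clip(1.0 - dist_field / r_max, 0.0, 1.0)
    grad_angle = np.mod(line_angle + np.pi / 2.0, np.pi)   # gradient perpendicular to the line
    return magnitude, grad_angle

# Toy example: a vertical line at column 10 of a 20x20 field
cols = np.arange(20)
dist = np.abs(np.tile(cols, (20, 1)) - 10).astype(float)
angle = np.full((20, 20), np.pi / 2.0)
mag, gang = attraction_field_to_surrogate_gradient(dist, angle)
print(mag[0, 8:13])   # surrogate magnitude peaks on the line
```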
Traditional 3D scene understanding approaches rely on labeled 3D datasets to train a model for a single task with supervision. We propose OpenScene, an alternative approach where a model predicts dense features for 3D scene points that are co-embedded with text and image pixels in CLIP feature space. This zero-shot approach enables task-agnostic training and open-vocabulary queries. For example, to perform SOTA zero-shot 3D semantic segmentation it first infers CLIP features for every 3D point and later classifies them based on similarities to embeddings of arbitrary class labels. More interestingly, it enables a suite of open-vocabulary scene understanding applications that have never been done before. For example, it allows a user to enter an arbitrary text query and then see a heat map indicating which parts of a scene match. Our approach is effective at identifying objects, materials, affordances, activities, and room types in complex 3D scenes, all using a single model trained without any labeled 3D data.
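The zero-shot classification step reduces to a cosine-similarity lookup. The minimal sketch below assumes that per-point features co-embedded in CLIP space and the text embeddings of the candidate labels have already been computed (e.g. with a CLIP text encoder); the function and variable names are placeholders.

```python
import numpy as np

def open_vocab_segmentation(point_feats, text_embs):
    """Label each 3D point with its most similar text query.

    point_feats : (N, D) per-point features co-embedded in CLIP space
    text_embs   : (C, D) CLIP text embeddings of arbitrary class names
    Returns (labels, scores): argmax class index and its cosine similarity.
    """
    p = point_feats / (np.linalg.norm(point_feats, axis=1, keepdims=True) + 1e-8)
    t = text_embs / (np.linalg.norm(text_embs, axis=1, keepdims=True) + 1e-8)
    sim = p @ t.T                      # (N, C) cosine similarities
    labels = sim.argmax(axis=1)
    scores = sim.max(axis=1)           # can be rendered as a per-query heat map
    return labels, scores

# Dummy data: 1000 points with 512-d features, 3 text queries
rng = np.random.default_rng(0)
labels, scores = open_vocab_segmentation(rng.normal(size=(1000, 512)),
                                          rng.normal(size=(3, 512)))
print(labels.shape, scores.shape)
```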
A distinctive representation of image patches in the form of features is a key component of many computer vision and robotics tasks, such as image matching, image retrieval, and visual localization. State-of-the-art descriptors, from hand-crafted ones such as SIFT to learned ones such as HardNet, are usually high-dimensional: 128 dimensions or even more. The higher the dimensionality, the larger the memory consumption and computation time of the methods that use such descriptors. In this paper, we investigate multi-layer perceptrons (MLPs) to extract low-dimensional but high-quality descriptors. We thoroughly analyze our method in unsupervised, self-supervised, and supervised settings, and evaluate the dimensionality-reduction results on four representative descriptors. We consider different applications, including visual localization, patch verification, image matching, and retrieval. Experiments show that our lightweight MLPs achieve better dimensionality reduction than PCA. The lower-dimensional descriptors generated by our method outperform the original high-dimensional descriptors in downstream tasks, especially for the hand-crafted ones. The code will be available at https://github.com/prbonn/descriptor-dr.
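For illustration, here is a minimal PyTorch sketch of a lightweight MLP that maps a 128-d descriptor (e.g. SIFT or HardNet) to a lower-dimensional, L2-normalized descriptor; the layer sizes and output dimension are assumptions, not the architecture from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DescriptorMLP(nn.Module):
    """Lightweight MLP that compresses descriptors to a lower dimension."""

    def __init__(self, in_dim=128, hidden_dim=256, out_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden_dim),
            nn.ReLU(inplace=True),
            nn.Linear(hidden_dim, out_dim),
        )

    def forward(self, desc):
        # L2-normalize so matching can keep using cosine / Euclidean distance
        return F.normalize(self.net(desc), dim=-1)

# Example: compress a batch of 128-d descriptors to 64-d
model = DescriptorMLP()
descs = torch.randn(1000, 128)
low_dim = model(descs)
print(low_dim.shape)   # torch.Size([1000, 64])
```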
Recovering 3D textured shapes from partial scans is crucial for many real-world applications. Existing approaches have demonstrated the efficacy of implicit function representations, but they suffer from partial inputs with severe occlusions and from varying object types, which greatly hinders their practical value in the real world. This technical report presents our approach to address these limitations by incorporating learned geometric priors. To this end, we generate an SMPL model from a learned pose prediction and fuse it into the partial input to add prior knowledge of human bodies. We also propose a novel completeness-aware bounding-box adaptation to handle different levels of scale and partiality of the partial scans.
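A toy sketch of the fusion idea follows: the partial scan is merged with the vertices of an SMPL mesh predicted from the estimated pose, and the combined points are normalized into a common bounding box before reconstruction; the SMPL vertices stand in for the real model output, and all names and the normalization scheme are illustrative assumptions.

```python
import numpy as np

def fuse_scan_with_smpl(scan_points, smpl_vertices, margin=0.05):
    """Fuse a partial scan with an SMPL body prior and normalize the result.

    scan_points   : (N, 3) points from the partial scan
    smpl_vertices : (M, 3) vertices of the SMPL mesh predicted from the
                    estimated body pose (assumed to be given here)
    Returns fused points scaled into a unit cube and the center/scale used.
    """
    fused = np.concatenate([scan_points, smpl_vertices], axis=0)
    lo, hi = fused.min(axis=0), fused.max(axis=0)
    center = 0.5 * (lo + hi)
    scale = (hi - lo).max() * (1.0 + margin)       # keep a small margin
    normalized = (fused - center) / scale          # roughly within [-0.5, 0.5]^3
    return normalized, center, scale

scan = np.random.rand(500, 3)
smpl = np.random.rand(6890, 3)                     # the SMPL mesh has 6890 vertices
fused, center, scale = fuse_scan_with_smpl(scan, smpl)
print(fused.shape, fused.min(), fused.max())
```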
Visual (re)localization addresses the problem of estimating the 6-DoF (degrees of freedom) camera pose of a query image captured in a known scene, which is a key building block of many computer vision and robotics applications. Recent advances in structure-based localization build 2D-3D correspondences for camera pose optimization by memorizing the mapping from image pixels to scene coordinates with neural networks. However, such memorization requires training with a large number of images in each scene, which is heavy and inefficient. In contrast, a few images are usually enough to cover the main regions of a scene for a human operator to perform visual localization. In this paper, we propose a scene region classification approach to achieve fast and effective scene memorization from only a few images. Our insight is to leverage a) a pre-learned feature extractor, b) a scene region classifier, and c) a meta-learning strategy to accelerate training while mitigating overfitting. We evaluate our method on both indoor and outdoor benchmarks. The experiments validate the effectiveness of our method in the few-shot setting, and the training time is significantly reduced to only a few minutes. Code is available at https://github.com/siyandong/src.
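A compact sketch of the basic pipeline: a frozen, pre-learned feature extractor produces dense features, and a small head classifies each feature into a scene region. The backbone, the number of regions, and the head are placeholders, and the meta-learning part is omitted.

```python
import torch
import torch.nn as nn

class SceneRegionClassifier(nn.Module):
    """Classify dense image features into scene regions.

    The feature extractor is assumed to be pre-learned and is kept frozen;
    only the small classification head is trained on the few mapping images.
    """

    def __init__(self, backbone, feat_dim=256, num_regions=64):
        super().__init__()
        self.backbone = backbone
        for p in self.backbone.parameters():
            p.requires_grad_(False)                 # frozen extractor
        self.head = nn.Conv2d(feat_dim, num_regions, kernel_size=1)

    def forward(self, image):
        with torch.no_grad():
            feats = self.backbone(image)            # (B, feat_dim, H', W')
        return self.head(feats)                     # per-pixel region logits

# A stand-in backbone for illustration only
backbone = nn.Sequential(nn.Conv2d(3, 256, 3, stride=4, padding=1), nn.ReLU())
model = SceneRegionClassifier(backbone)
logits = model(torch.randn(1, 3, 128, 128))
print(logits.shape)   # torch.Size([1, 64, 32, 32])
```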
We introduce a scalable framework for novel view synthesis from RGB-D images with largely incomplete scene coverage. While generative neural approaches have demonstrated spectacular results on 2D images, they have not yet achieved similar photorealistic results in combination with scene completion, where a spatial 3D scene understanding is essential. To this end, we propose a generative pipeline that operates on a grid-based neural scene representation and completes unobserved scene parts by learning the distribution of scenes in a 2.5D-3D-2.5D manner. We process encoded image features in 3D space with a geometry completion network and a subsequent texture inpainting network to infer the missing regions. Photorealistic image sequences can finally be obtained via consistency-aware differentiable rendering. Comprehensive experiments show that the graphical outputs of our method outperform the state of the art, especially in unobserved scene parts.
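To make the 2.5D-to-3D step concrete, the following sketch back-projects an RGB-D frame into a sparse voxel grid using the camera intrinsics. It only illustrates lifting 2.5D observations into a 3D grid, not the completion or texture networks, and the voxel size and intrinsics are placeholders.

```python
import numpy as np

def backproject_to_voxels(depth, rgb, K, voxel_size=0.05):
    """Lift an RGB-D frame (2.5D) into a sparse 3D voxel grid.

    depth : (H, W) depth map in meters (0 where invalid)
    rgb   : (H, W, 3) color image
    K     : (3, 3) camera intrinsics
    Returns integer voxel indices and the color observed in each voxel.
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    valid = depth > 0
    z = depth[valid]
    x = (u[valid] - K[0, 2]) * z / K[0, 0]
    y = (v[valid] - K[1, 2]) * z / K[1, 1]
    points = np.stack([x, y, z], axis=1)             # camera-space points
    voxels = np.floor(points / voxel_size).astype(np.int32)
    return voxels, rgb[valid]

K = np.array([[500.0, 0, 160], [0, 500.0, 120], [0, 0, 1]])
depth = np.full((240, 320), 2.0)
rgb = np.zeros((240, 320, 3), dtype=np.uint8)
voxels, colors = backproject_to_voxels(depth, rgb, K)
print(voxels.shape, colors.shape)
```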
Since RANSAC, a great deal of research has been devoted to improving its accuracy and run time. Nonetheless, only a few methods aim at recognizing invalid minimal samples before the usually expensive model estimation and quality computation are carried out. To this end, we propose NeFSAC, an efficient algorithm for the neural filtering of motion-inconsistent and poorly conditioned minimal samples. We predict the probability that a minimal sample leads to an accurate pose based solely on the pixel coordinates of the image correspondences. Our neural filtering model learns the typical motion patterns of samples that lead to unstable poses, as well as regularities in plausible motions, to favour well-conditioned samples. The novel lightweight architecture implements the main invariances of minimal samples for pose estimation, and a novel training scheme addresses the problem of extreme class imbalance. NeFSAC can be plugged into any existing RANSAC-based pipeline. We integrate it into USAC and show that it consistently provides strong speed-ups, even under extreme train-test domain gaps - for example, a model trained on autonomous-driving scenes also works on photo collections. We tested NeFSAC on more than 100k image pairs from three publicly available real-world datasets and found that it leads to a one order of magnitude speed-up, while often being more accurate than USAC alone. The source code is available at https://github.com/cavalli1234/nefsac.
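A schematic sketch of where such a filter sits in a RANSAC loop is given below: a scoring function rejects minimal samples before the expensive pose estimation and scoring steps; score_sample and estimate_and_score_pose are hypothetical placeholders standing in for the learned filter and the minimal pose solver.

```python
import numpy as np

def filtered_ransac(correspondences, score_sample, estimate_and_score_pose,
                    n_iters=1000, sample_size=5, min_confidence=0.5, rng=None):
    """RANSAC loop with a pre-filter on minimal samples.

    correspondences         : (N, 4) array of (x1, y1, x2, y2) pixel coords
    score_sample(sample)    : probability that this minimal sample yields an
                              accurate pose (learned filter; placeholder here)
    estimate_and_score_pose : runs the minimal solver and returns
                              (pose, inlier_count); also a placeholder
    """
    rng = rng or np.random.default_rng()
    best_pose, best_inliers = None, -1
    for _ in range(n_iters):
        idx = rng.choice(len(correspondences), size=sample_size, replace=False)
        sample = correspondences[idx]
        if score_sample(sample) < min_confidence:
            continue                     # skip model estimation for bad samples
        pose, inliers = estimate_and_score_pose(sample, correspondences)
        if inliers > best_inliers:
            best_pose, best_inliers = pose, inliers
    return best_pose, best_inliers

# Dummy stand-ins for the learned filter and the minimal solver
corrs = np.random.default_rng(0).normal(size=(200, 4))
pose, n = filtered_ransac(corrs,
                          score_sample=lambda s: 1.0,
                          estimate_and_score_pose=lambda s, c: (None, 0))
print("best inlier count:", n)
```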
Following recent advances in novel view synthesis, we propose its application to improve monocular depth estimation. In particular, we propose a novel training method split into three main steps. First, the prediction results of a monocular depth network are warped to an additional viewpoint. Second, we apply an additional image synthesis network that corrects and improves the quality of the warped RGB image. The output of this network is required to look as similar as possible to the ground-truth view by minimizing a pixel-wise RGB reconstruction error. Third, we reapply the same monocular depth estimation to the synthesized second viewpoint and ensure that the depth predictions are consistent with the associated ground-truth depth. Experimental results prove that our method achieves state-of-the-art or comparable performance on the KITTI and NYU-Depth-v2 datasets with a lightweight and simple vanilla U-Net architecture.
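The three steps can be combined into a single training objective, sketched below in PyTorch; depth_net, refine_net, and warp_to_view are placeholders for the depth network, the image synthesis network, and a differentiable warp, and the loss weighting is an assumption.

```python
import torch
import torch.nn.functional as F

def three_step_loss(depth_net, refine_net, warp_to_view,
                    img_a, img_b_gt, depth_b_gt, lambda_depth=1.0):
    """Sketch of the three-step training objective described above."""
    # Step 1: predict depth and warp the input view to the second viewpoint
    depth_a = depth_net(img_a)
    img_b_warped = warp_to_view(img_a, depth_a)

    # Step 2: refine the warped image and compare it to the real second view
    img_b_synth = refine_net(img_b_warped)
    loss_rgb = F.l1_loss(img_b_synth, img_b_gt)

    # Step 3: re-run the depth network on the synthesized view and enforce
    # consistency with the ground-truth depth of that view
    depth_b = depth_net(img_b_synth)
    loss_depth = F.l1_loss(depth_b, depth_b_gt)

    return loss_rgb + lambda_depth * loss_depth

# Dummy stand-ins, just to show the call signature
depth_net = lambda img: torch.ones_like(img[:, :1])
refine_net = lambda x: x
warp_to_view = lambda img, d: img
loss = three_step_loss(depth_net, refine_net, warp_to_view,
                       torch.rand(1, 3, 8, 8), torch.rand(1, 3, 8, 8),
                       torch.ones(1, 1, 8, 8))
print(loss.item())
```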