In this paper, we present TANDEM, a real-time monocular tracking and dense mapping framework. For pose estimation, TANDEM performs photometric bundle adjustment based on a sliding window of keyframes. To increase robustness, we propose a novel tracking front-end that performs dense direct image alignment using depth maps rendered from a global model, which is built incrementally from dense depth predictions. To predict the dense depth maps, we propose a Cascade View-Aggregation MVSNet (CVA-MVSNet) that utilizes the entire active keyframe window by hierarchically constructing 3D cost volumes with adaptive view aggregation to balance the different stereo baselines between the keyframes. Finally, the predicted depth maps are fused into a consistent global map represented as a truncated signed distance function (TSDF) voxel grid. Our experimental results show that TANDEM outperforms other state-of-the-art traditional and learned monocular visual odometry (VO) methods in terms of camera tracking. Moreover, TANDEM shows state-of-the-art real-time 3D reconstruction performance.
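The TSDF fusion step mentioned in this abstract follows the standard weighted-averaging scheme used in volumetric reconstruction. Below is a minimal NumPy sketch of that generic scheme, not TANDEM's implementation; the voxel-grid layout, pinhole projection, per-frame observation weight of 1, and the `integrate_depth` name are all illustrative assumptions.

```python
import numpy as np

def integrate_depth(tsdf, weights, depth, K, T_wc, origin, voxel_size, trunc=0.1):
    """Fuse one depth map into a TSDF voxel grid by weighted averaging.

    tsdf, weights : (X, Y, Z) arrays with the current TSDF values and weights.
    depth         : (H, W) depth map in metres (0 = no measurement).
    K             : (3, 3) camera intrinsics; T_wc: (4, 4) camera-to-world pose.
    origin        : world coordinates of voxel (0, 0, 0); voxel_size in metres.
    """
    H, W = depth.shape
    # World coordinates of all voxel centres.
    ijk = np.stack(np.meshgrid(*[np.arange(s) for s in tsdf.shape], indexing="ij"), -1)
    pts_w = origin + (ijk + 0.5) * voxel_size                         # (X, Y, Z, 3)
    # Transform voxel centres into the camera frame and project them.
    T_cw = np.linalg.inv(T_wc)
    pts_c = pts_w @ T_cw[:3, :3].T + T_cw[:3, 3]
    z = pts_c[..., 2]
    z_safe = np.maximum(z, 1e-6)
    u = np.round(K[0, 0] * pts_c[..., 0] / z_safe + K[0, 2]).astype(int)
    v = np.round(K[1, 1] * pts_c[..., 1] / z_safe + K[1, 2]).astype(int)
    valid = (z > 1e-6) & (u >= 0) & (u < W) & (v >= 0) & (v < H)
    d = np.where(valid, depth[v.clip(0, H - 1), u.clip(0, W - 1)], 0.0)
    sdf = d - z                                   # signed distance along the viewing ray
    keep = valid & (d > 0) & (sdf > -trunc)
    tsdf_new = np.clip(sdf / trunc, -1.0, 1.0)
    # Running weighted average with observation weight 1 per update.
    w_new = weights + keep
    tsdf[...] = np.where(keep, (tsdf * weights + tsdf_new) / np.maximum(w_new, 1e-6), tsdf)
    weights[...] = w_new
```

Extracting a mesh from the fused grid (e.g., with marching cubes) then gives the kind of consistent global map the abstract refers to.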
In this work, we present a dense tracking and mapping system named Vox-Fusion, which seamlessly fuses neural implicit representations with traditional volumetric fusion methods. Our approach is inspired by the recently developed implicit mapping and positioning system and further extends the idea so that it can be freely applied to practical scenarios. Specifically, we leverage a voxel-based neural implicit surface representation to encode and optimize the scene inside each voxel. Furthermore, we adopt an octree-based structure to divide the scene and support dynamic expansion, enabling our system to track and map arbitrary scenes without prior knowledge of the environment, as required in previous works. Moreover, we propose a high-performance multi-process framework to speed up the method, thus supporting applications that require real-time performance. The evaluation results show that our method achieves better accuracy and completeness than previous methods. We also show that our Vox-Fusion can be used in augmented reality and virtual reality applications. Our source code is publicly available at https://github.com/zju3dv/Vox-Fusion.
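As a loose illustration of the dynamically expanding, voxel-based map described above (not Vox-Fusion's actual octree implementation), the sketch below allocates sparse voxels on demand as new points are observed; the hash-map layout, leaf size, and the per-voxel quantity stored are placeholder assumptions.

```python
import numpy as np

class SparseVoxelMap:
    """Allocate voxels on demand as the camera observes new space.

    A real system would attach a latent feature vector (and an octree level)
    to each voxel; here every leaf just stores a running observation count.
    """

    def __init__(self, voxel_size=0.2):
        self.voxel_size = voxel_size
        self.voxels = {}                    # (i, j, k) -> observation count

    def allocate(self, points_world):
        """Insert 3D points (N, 3) and create any missing voxels."""
        keys = np.floor(points_world / self.voxel_size).astype(int)
        for key in map(tuple, keys):
            self.voxels[key] = self.voxels.get(key, 0) + 1

    def __len__(self):
        return len(self.voxels)


# Usage: the map grows as new regions are explored, without a fixed bounding box.
vmap = SparseVoxelMap(voxel_size=0.2)
vmap.allocate(np.random.rand(1000, 3) * 4.0)       # first exploration step
vmap.allocate(np.random.rand(1000, 3) * 4.0 + 8)   # a newly observed region
print(len(vmap), "voxels allocated")
```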
Many hand-held or mixed reality devices are used with a single sensor for 3D reconstruction, even though they often comprise multiple sensors. Multi-sensor depth fusion can substantially improve the robustness and accuracy of 3D reconstruction methods, but existing techniques are not robust enough to handle sensors with different value ranges as well as noise and outlier statistics. To this end, we introduce SenFuNet, a depth fusion approach that learns sensor-specific noise and outlier statistics and combines the data streams of depth frames from different sensors in an online fashion. Our method fuses multi-sensor depth streams regardless of time synchronization and calibration, and generalizes well from little training data. We conduct experiments with various sensor combinations on the real-world CoRBS and Scene3D datasets, as well as on the Replica dataset. The experiments show that our fusion strategy outperforms traditional and recent online depth fusion approaches. In addition, the combination of multiple sensors yields more robust outlier handling and more precise surface reconstruction than the use of a single sensor. Source code and data are available at https://github.com/tfy14esa/senfunet.
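The general idea of learned online fusion of two depth streams can be sketched as a small network that predicts per-pixel blending weights; this is not the SenFuNet architecture, and the layer sizes, inputs, and the `TwoSensorFusion` name are assumptions made for brevity.

```python
import torch
import torch.nn as nn

class TwoSensorFusion(nn.Module):
    """Predict per-pixel blending weights for two depth streams.

    Input : two depth maps (B, 1, H, W) from different sensors.
    Output: fused depth plus the weight assigned to sensor A.
    """

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )

    def forward(self, depth_a, depth_b):
        # Missing measurements (depth == 0) are forced to zero weight
        # so the other sensor takes over at those pixels.
        logits = self.net(torch.cat([depth_a, depth_b], dim=1))
        w_a = torch.sigmoid(logits)
        w_a = torch.where(depth_a > 0, w_a, torch.zeros_like(w_a))
        w_b = torch.where(depth_b > 0, 1.0 - w_a, torch.zeros_like(w_a))
        denom = (w_a + w_b).clamp(min=1e-6)
        fused = (w_a * depth_a + w_b * depth_b) / denom
        return fused, w_a


fusion = TwoSensorFusion()
d_tof, d_stereo = torch.rand(1, 1, 60, 80), torch.rand(1, 1, 60, 80)
fused, weight = fusion(d_tof, d_stereo)
```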
As an essential component of many autonomous driving and robotic activities, such as ego-motion estimation, obstacle avoidance and scene understanding, monocular depth estimation (MDE) has attracted great attention from the computer vision and robotics communities. Over the past decades, a large number of methods have been developed. However, to the best of our knowledge, there is no comprehensive survey of MDE. This paper aims to bridge this gap by reviewing 197 relevant articles published between 1970 and 2021. In particular, we provide a comprehensive survey of MDE covering various methods, introduce the popular performance evaluation metrics and summarize publicly available datasets. We also summarize available open-source implementations of some representative methods and compare their performance. Furthermore, we review the applications of MDE in some important robotic tasks. Finally, we conclude this paper by presenting some promising directions for future research. This survey is expected to assist readers in navigating this research field.
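Since the survey also introduces the popular performance evaluation metrics, the snippet below computes the ones most commonly reported for MDE (absolute relative error, RMSE, and the δ-threshold accuracies); treating zero ground-truth values as invalid is an assumed convention.

```python
import numpy as np

def depth_metrics(pred, gt):
    """Standard monocular depth metrics over valid ground-truth pixels."""
    mask = gt > 0                              # convention: 0 marks missing ground truth
    pred, gt = pred[mask], gt[mask]
    abs_rel = np.mean(np.abs(pred - gt) / gt)
    rmse = np.sqrt(np.mean((pred - gt) ** 2))
    ratio = np.maximum(pred / gt, gt / pred)
    return {
        "AbsRel": abs_rel,
        "RMSE": rmse,
        "delta<1.25": np.mean(ratio < 1.25),
        "delta<1.25^2": np.mean(ratio < 1.25 ** 2),
        "delta<1.25^3": np.mean(ratio < 1.25 ** 3),
    }
```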
Various datasets have been proposed for simultaneous localization and mapping (SLAM) and related problems. Existing datasets often include small environments, have incomplete ground truth, or lack important sensor data, such as depth and infrared images. We propose an easy-to-use framework for acquiring building-scale 3D reconstruction using a consumer depth camera. Unlike complex and expensive acquisition setups, our system enables crowd-sourcing, which can greatly benefit data-hungry algorithms. Compared to similar systems, we utilize raw depth maps for odometry computation and loop closure refinement which results in better reconstructions. We acquire a building-scale 3D dataset (BS3D) and demonstrate its value by training an improved monocular depth estimation model. As a unique experiment, we benchmark visual-inertial odometry methods using both color and active infrared images.
Event cameras are bio-inspired sensors that offer advantages over traditional cameras. They work asynchronously, sampling the scene with microsecond resolution, and produce a stream of brightness changes. This unconventional output has sparked novel computer vision methods to unlock the cameras' potential. We tackle the problem of event-based stereo 3D reconstruction for SLAM. Most event-based stereo methods try to exploit the high temporal resolution of the cameras and the simultaneity of events across cameras to establish matches and estimate depth. By contrast, we investigate how to estimate depth by fusing Disparity Space Images (DSIs) originating from efficient monocular methods. We develop fusion theory and apply it to design multi-camera 3D reconstruction algorithms that produce state-of-the-art results, as we confirm by comparing against four baseline methods and testing on a variety of available datasets.
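A hedged sketch of the DSI-fusion idea: each camera contributes a ray-count volume (DSI) defined over the same candidate depths of a reference view, the volumes are fused, and a depth map is extracted from the fused volume. The arithmetic-mean fusion rule and the support threshold below are simplifying assumptions; the paper develops and compares proper fusion functions.

```python
import numpy as np

def fuse_dsis_and_extract_depth(dsis, depth_values, min_support=5.0):
    """Fuse per-camera Disparity Space Images and extract a depth map.

    dsis         : (num_cameras, D, H, W) ray-count volumes, one per camera,
                   all defined over the same D candidate depths of a reference view.
    depth_values : (D,) candidate depths in metres.
    """
    fused = dsis.mean(axis=0)                          # (D, H, W), simple fusion rule
    best = fused.argmax(axis=0)                        # most-supported depth index
    depth = depth_values[best].astype(float)
    confidence = fused.max(axis=0)
    depth[confidence < min_support] = np.nan           # discard weakly supported pixels
    return depth
```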
Traditionally, 3D indoor scene reconstruction from posed images happens in two phases: per-image depth estimation, followed by depth merging and surface reconstruction. Recently, a family of methods has emerged that performs reconstruction directly in the final 3D volumetric feature space. While these methods show impressive reconstruction results, they rely on expensive 3D convolutional layers, limiting their application in resource-constrained environments. In this work, we go back to the traditional route and show how focusing on high-quality multi-view depth prediction leads to highly accurate 3D reconstructions using simple off-the-shelf depth fusion. We propose a simple state-of-the-art multi-view depth estimator with two main contributions: 1) a carefully designed 2D CNN that utilizes strong image priors alongside a plane-sweep feature volume and geometric losses, combined with 2) the integration of keyframe and geometric metadata into the cost volume, which allows for informed depth plane scoring. Our method achieves a significant lead over the current state of the art for depth estimation and for 3D reconstruction on ScanNet and 7-Scenes, while still allowing for online real-time low-memory reconstruction. Code, models and results are available at https://nianticlabs.github.io/simplerecon
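The plane-sweep feature volume in contribution 1) can be illustrated with a generic dot-product matching score between reference features and source features warped onto fronto-parallel depth planes; this sketch omits SimpleRecon's metadata injection, assumes shared intrinsics for both views, and uses illustrative names.

```python
import torch
import torch.nn.functional as F

def plane_sweep_matching_volume(feat_ref, feat_src, K, R, t, depths):
    """Dot-product matching scores over fronto-parallel depth planes.

    feat_ref, feat_src : (B, C, H, W) feature maps (shared intrinsics K assumed).
    R, t               : rotation (B, 3, 3) and translation (B, 3) mapping
                         reference-camera points into the source camera.
    depths             : 1D tensor of D depth hypotheses.
    Returns a (B, D, H, W) volume (higher = better match).
    """
    B, C, H, W = feat_ref.shape
    device = feat_ref.device
    ys, xs = torch.meshgrid(torch.arange(H, device=device, dtype=torch.float32),
                            torch.arange(W, device=device, dtype=torch.float32),
                            indexing="ij")
    pix = torch.stack([xs, ys, torch.ones_like(xs)], dim=0).reshape(3, -1)   # (3, H*W)
    rays = torch.linalg.inv(K) @ pix                                         # (B, 3, H*W)
    scores = []
    for d in depths:
        # Back-project to the plane at depth d, then project into the source view.
        pts_src = R @ (rays * d) + t.unsqueeze(-1)                           # (B, 3, H*W)
        uv = K @ pts_src
        uv = uv[:, :2] / uv[:, 2:3].clamp(min=1e-6)
        # Normalise pixel coordinates to [-1, 1] for grid_sample.
        grid = torch.stack([uv[:, 0] / (W - 1) * 2 - 1,
                            uv[:, 1] / (H - 1) * 2 - 1], dim=-1).view(B, H, W, 2)
        warped = F.grid_sample(feat_src, grid, align_corners=True)
        scores.append((feat_ref * warped).mean(dim=1))                       # (B, H, W)
    return torch.stack(scores, dim=1)
```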
In this paper, we present a learning-based approach for multi-view stereo (MVS), i.e., estimate the depth map of a reference frame using posed multi-view images. Our core idea lies in leveraging a "learning-to-optimize" paradigm to iteratively index a plane-sweeping cost volume and regress the depth map via a convolutional Gated Recurrent Unit (GRU). Since the cost volume plays a paramount role in encoding the multi-view geometry, we aim to improve its construction both in pixel- and frame- levels. In the pixel level, we propose to break the symmetry of the Siamese network (which is typically used in MVS to extract image features) by introducing a transformer block to the reference image (but not to the source images). Such an asymmetric volume allows the network to extract global features from the reference image to predict its depth map. In view of the inaccuracy of poses between reference and source images, we propose to incorporate a residual pose network to make corrections to the relative poses, which essentially rectifies the cost volume in the frame-level. We conduct extensive experiments on real-world MVS datasets and show that our method achieves state-of-the-art performance in terms of both within-dataset evaluation and cross-dataset generalization.
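The "learning-to-optimize" loop can be sketched as a convolutional GRU that repeatedly refines the depth map from features indexed out of the cost volume; the channel sizes, the residual depth head, and the fixed features passed at every iteration are placeholder simplifications rather than the paper's architecture.

```python
import torch
import torch.nn as nn

class ConvGRUCell(nn.Module):
    """A standard convolutional GRU cell used for iterative refinement."""

    def __init__(self, hidden_ch, input_ch):
        super().__init__()
        self.conv_zr = nn.Conv2d(hidden_ch + input_ch, 2 * hidden_ch, 3, padding=1)
        self.conv_h = nn.Conv2d(hidden_ch + input_ch, hidden_ch, 3, padding=1)

    def forward(self, h, x):
        zr = torch.sigmoid(self.conv_zr(torch.cat([h, x], dim=1)))
        z, r = zr.chunk(2, dim=1)
        h_tilde = torch.tanh(self.conv_h(torch.cat([r * h, x], dim=1)))
        return (1 - z) * h + z * h_tilde


class IterativeDepth(nn.Module):
    """Regress a depth residual from the GRU hidden state at every iteration."""

    def __init__(self, hidden_ch=64, input_ch=32):
        super().__init__()
        self.gru = ConvGRUCell(hidden_ch, input_ch)
        self.head = nn.Conv2d(hidden_ch, 1, 3, padding=1)

    def forward(self, h, cost_features, depth, iters=4):
        preds = []
        for _ in range(iters):
            # In the full method, cost_features would be re-indexed from the
            # plane-sweep volume around the current depth at each iteration.
            h = self.gru(h, cost_features)
            depth = depth + self.head(h)
            preds.append(depth)
        return preds
```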
Figure 1: Example output from our system, generated in real-time with a handheld Kinect depth camera and no other sensing infrastructure. Normal maps (colour) and Phong-shaded renderings (greyscale) from our dense reconstruction system are shown. On the left for comparison is an example of the live, incomplete, and noisy data from the Kinect sensor (used as input to our system).
We present an end-to-end deep learning architecture for depth map inference from multi-view images. In the network, we first extract deep visual image features, and then build the 3D cost volume upon the reference camera frustum via the differentiable homography warping. Next, we apply 3D convolutions to regularize and regress the initial depth map, which is then refined with the reference image to generate the final output. Our framework flexibly adapts arbitrary N-view inputs using a variance-based cost metric that maps multiple features into one cost feature. The proposed MVSNet is demonstrated on the large-scale indoor DTU dataset. With simple post-processing, our method not only significantly outperforms previous state-of-the-arts, but also is several times faster in runtime. We also evaluate MVSNet on the complex outdoor Tanks and Temples dataset, where our method ranks first before April 18, 2018 without any fine-tuning, showing the strong generalization ability of MVSNet.
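A minimal version of the variance-based cost metric is shown below: the N warped feature volumes are reduced to a single cost volume by their element-wise variance. The input layout (views stacked along the first axis, warping already done) is an assumption of this sketch.

```python
import torch

def variance_cost(warped_feature_volumes):
    """Variance-based cost metric over N views.

    warped_feature_volumes: tensor of shape (N, B, C, D, H, W) holding the
    reference and source feature volumes already warped onto the D depth
    planes of the reference frustum. Returns a (B, C, D, H, W) cost volume.
    """
    mean = warped_feature_volumes.mean(dim=0)
    return ((warped_feature_volumes - mean) ** 2).mean(dim=0)


# Example: 3 views, 8 feature channels, 48 depth planes, 32x40 resolution.
vols = torch.rand(3, 1, 8, 48, 32, 40)
cost = variance_cost(vols)          # (1, 8, 48, 32, 40)
```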
Dense depth and pose estimation is a vital prerequisite for various video applications. Traditional solutions suffer from the limited robustness of sparse feature tracking and from insufficient camera baselines in videos. Therefore, recent methods utilize learning-based optical flow and depth priors to estimate dense depth. However, previous works require heavy computation time or yield sub-optimal depth results. In this paper we present GCVD, a globally consistent method for learning-based video structure from motion (SfM). GCVD integrates a compact pose graph into CNN-based optimization to achieve globally consistent estimation from an effective keyframe selection mechanism. With flow-guided keyframes and well-established depth priors, it improves the robustness of learning-based methods. Experimental results show that GCVD outperforms state-of-the-art methods on both depth and pose estimation. Moreover, runtime experiments reveal that it provides strong efficiency on both short and long videos while offering global consistency.
Per-pixel ground-truth depth data is challenging to acquire at scale. To overcome this limitation, self-supervised learning has emerged as a promising alternative for training models to perform monocular depth estimation. In this paper, we propose a set of improvements, which together result in both quantitatively and qualitatively improved depth maps compared to competing self-supervised methods. Research on self-supervised monocular training usually explores increasingly complex architectures, loss functions, and image formation models, all of which have recently helped to close the gap with fully-supervised methods. We show that a surprisingly simple model, and associated design choices, lead to superior predictions. In particular, we propose (i) a minimum reprojection loss, designed to robustly handle occlusions, (ii) a full-resolution multi-scale sampling method that reduces visual artifacts, and (iii) an auto-masking loss to ignore training pixels that violate camera motion assumptions. We demonstrate the effectiveness of each component in isolation, and show high quality, state-of-the-art results on the KITTI benchmark.
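The per-pixel minimum reprojection loss in (i) takes, at each pixel, the minimum photometric error over the warped source views instead of averaging them, which makes the loss robust to pixels occluded in some views. The sketch below uses plain L1 error to stay self-contained, whereas the full method uses a weighted SSIM + L1 error.

```python
import torch

def min_reprojection_loss(target, warped_sources):
    """Per-pixel minimum photometric error over the warped source views.

    target         : (B, 3, H, W) target frame.
    warped_sources : list of (B, 3, H, W) source frames warped into the target
                     view with the predicted depth and pose.
    """
    errors = [torch.abs(target - w).mean(dim=1, keepdim=True) for w in warped_sources]
    per_pixel_min, _ = torch.cat(errors, dim=1).min(dim=1)   # robust to occlusion
    return per_pixel_min.mean()
```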
Obtaining high-quality 3D reconstructions of room-scale scenes is of paramount importance for upcoming AR or VR applications. These range from mixed-reality applications for teleconferencing, virtual measuring and virtual room planning, to robotic applications. While volumetric view-synthesis methods that use neural radiance fields (NeRFs) show promising results in reproducing the appearance of an object or scene, they do not reconstruct an actual surface. The volumetric representation of the surface based on densities leads to artifacts when a surface is extracted using marching cubes, since during optimization densities are accumulated along a ray and are not evaluated at a single sample point in isolation. Instead, we propose to represent the surface with an implicit function (a truncated signed distance function). We show how to incorporate this representation into the NeRF framework, and extend it to use depth measurements from a commodity RGB-D sensor, such as a Kinect. In addition, we propose a pose and camera refinement technique that improves the overall reconstruction quality. In contrast to concurrent work that integrates depth priors into NeRF and focuses on novel view synthesis, our approach is able to produce high-quality, metrical 3D reconstructions.
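One common way to plug a truncated SDF into the NeRF-style rendering described above is to turn the per-sample SDF values along a ray into weights that peak at the zero crossing; the bell-shaped product-of-sigmoids below is such a sketch and is not guaranteed to match the paper's exact formulation.

```python
import torch

def sdf_render_weights(sdf, truncation=0.05, eps=1e-8):
    """Turn per-sample truncated SDF values along a ray into rendering weights.

    sdf: (num_rays, num_samples) predicted signed distances at the ray samples.
    The product of two opposing sigmoids peaks at the zero crossing (the
    surface), so colours and depths rendered with these weights concentrate
    on the reconstructed surface rather than on a volumetric density.
    """
    w = torch.sigmoid(sdf / truncation) * torch.sigmoid(-sdf / truncation)
    return w / (w.sum(dim=-1, keepdim=True) + eps)


def rendered_depth(sdf, z_vals, truncation=0.05):
    """Expected depth along each ray, usable against RGB-D depth measurements."""
    weights = sdf_render_weights(sdf, truncation)
    return (weights * z_vals).sum(dim=-1)
```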
Modern computer vision has moved beyond the domain of internet photo collections and into the physical world, guiding camera-equipped robots and autonomous cars through unstructured environments. To enable these embodied agents to interact with real-world objects, cameras are increasingly being used as depth sensors, reconstructing the environment for a variety of downstream reasoning tasks. Machine-learning-aided depth perception, or depth estimation, predicts the distance for each pixel in an image. While impressive progress has been made in depth estimation, significant challenges remain: (1) ground-truth depth labels are difficult to collect at scale, (2) camera information is typically assumed to be known but is often unreliable, and (3) restrictive camera assumptions are common, even though a wide variety of camera types and lenses are used in practice. In this thesis, we focus on relaxing these assumptions and describe contributions toward the ultimate goal of turning cameras into truly generic depth sensors.
We present 3DVNet, a novel multi-view stereo (MVS) depth-prediction method that combines the advantages of depth-based and volume-based MVS approaches. Our key idea is the use of a 3D scene-modeling network that iteratively updates a set of coarse depth predictions, resulting in highly accurate predictions that agree on the underlying scene geometry. Unlike existing depth-prediction techniques, our method uses a volumetric 3D convolutional neural network (CNN) that operates in world space on all depth maps jointly. The network can therefore learn meaningful scene-level priors. Furthermore, unlike existing volumetric MVS techniques, our 3D CNN operates on a feature-augmented point cloud, allowing for effective aggregation of multi-view information and flexible iterative refinement of depth maps. Experimental results show that our method exceeds state-of-the-art accuracy in both depth-prediction and 3D reconstruction metrics on the ScanNet dataset, as well as on a selection of scenes from the TUM-RGBD and ICL-NUIM datasets. This shows that our method is both effective and generalizes to new settings.
Simultaneous localization and mapping (SLAM) is crucial for autonomous robots (e.g., self-driving cars, autonomous drones), 3D mapping systems, and AR/VR applications. This work proposes a novel LiDAR-inertial-visual fusion framework termed R$^3$LIVE++ that achieves robust and accurate state estimation while simultaneously reconstructing the radiance map on the fly. R$^3$LIVE++ consists of a LiDAR-inertial odometry (LIO) and a visual-inertial odometry (VIO), both running in real time. The LIO subsystem utilizes measurements from the LiDAR to reconstruct the geometric structure (i.e., the positions of 3D points), while the VIO subsystem simultaneously recovers the radiance information of the geometric structure from the input images. R$^3$LIVE++ is developed based on R$^3$LIVE and further improves the accuracy of localization and mapping by accounting for camera photometric calibration (e.g., the non-linear response function and lens vignetting) and by estimating the camera exposure time online. We conduct extensive experiments on both public and private datasets to compare our proposed system against other state-of-the-art SLAM systems. Quantitative and qualitative results show that our proposed system offers significant improvements over the others in both accuracy and robustness. Moreover, to demonstrate the extendability of our work, we develop several applications based on the reconstructed radiance maps, such as high dynamic range (HDR) imaging, virtual environment exploration, and 3D video gaming. Finally, to share our findings and contribute to the community, we make our code, hardware design and dataset publicly available on GitHub: github.com/hku-mars/r3live
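The photometric-calibration terms mentioned above (non-linear response function, lens vignetting, exposure time) can be illustrated with a generic correction that maps raw pixel values back to relative scene radiance; the gamma response curve and the quadratic radial vignetting model below are simplifying assumptions, not the calibration model used in R$^3$LIVE++.

```python
import numpy as np

def radiance_from_raw(raw, exposure_time, gamma=2.2, vignette_strength=0.3):
    """Recover relative scene radiance from a raw image.

    raw           : (H, W) image normalised to [0, 1].
    exposure_time : shutter time in seconds (estimated online in the paper).
    Assumes a gamma response curve and a radially symmetric vignetting falloff.
    """
    H, W = raw.shape
    # Invert the (assumed) non-linear response function.
    irradiance = np.power(np.clip(raw, 0.0, 1.0), gamma)
    # Undo the (assumed) radial vignetting falloff.
    ys, xs = np.mgrid[0:H, 0:W]
    r2 = ((xs - W / 2) ** 2 + (ys - H / 2) ** 2) / ((W / 2) ** 2 + (H / 2) ** 2)
    vignette = 1.0 - vignette_strength * r2
    # Normalise by exposure so frames with different shutter times are comparable.
    return irradiance / (vignette * exposure_time)
```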
We present PlanarRecon, a novel framework for globally coherent detection and reconstruction of 3D planes from a posed monocular video. Unlike previous works that detect planes in 2D from a single image, PlanarRecon incrementally detects planes in 3D for each video fragment, which consists of a set of keyframes, from a volumetric representation of the scene using neural networks. A learning-based tracking and fusion module is designed to merge planes from previous fragments to form a coherent global plane reconstruction. This design allows PlanarRecon to integrate observations from multiple views within each fragment and temporal information across different fragments, resulting in an accurate and coherent reconstruction of the scene abstraction with low-polygonal geometry. Experiments show that the proposed approach achieves state-of-the-art performance on the ScanNet dataset while being real-time.
We present an unsupervised learning framework for the task of monocular depth and camera motion estimation from unstructured video sequences. In common with recent work [10,14,16], we use an end-to-end learning approach with view synthesis as the supervisory signal. In contrast to the previous work, our method is completely unsupervised, requiring only monocular video sequences for training. Our method uses single-view depth and multiview pose networks, with a loss based on warping nearby views to the target using the computed depth and pose. The networks are thus coupled by the loss during training, but can be applied independently at test time. Empirical evaluation on the KITTI dataset demonstrates the effectiveness of our approach: 1) monocular depth performs comparably with supervised methods that use either ground-truth pose or depth for training, and 2) pose estimation performs favorably compared to established SLAM systems under comparable input settings.
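The view-synthesis supervision works by warping a nearby source frame into the target view with the predicted depth and relative pose; a minimal differentiable warp using `grid_sample` is sketched below (shared intrinsics, the pose convention, and the function name are assumptions of this sketch).

```python
import torch
import torch.nn.functional as F

def warp_source_to_target(src_img, target_depth, K, T_target_to_src):
    """Synthesise the target view from a source image via inverse warping.

    src_img        : (B, 3, H, W) source frame.
    target_depth   : (B, 1, H, W) predicted depth of the target frame.
    K              : (B, 3, 3) intrinsics shared by both views.
    T_target_to_src: (B, 4, 4) relative pose mapping target-camera points
                     into the source camera.
    """
    B, _, H, W = src_img.shape
    device = src_img.device
    ys, xs = torch.meshgrid(torch.arange(H, device=device, dtype=torch.float32),
                            torch.arange(W, device=device, dtype=torch.float32),
                            indexing="ij")
    pix = torch.stack([xs, ys, torch.ones_like(xs)], 0).reshape(1, 3, -1)      # (1, 3, H*W)
    cam_points = torch.linalg.inv(K) @ pix * target_depth.reshape(B, 1, -1)    # (B, 3, H*W)
    cam_points = torch.cat([cam_points, torch.ones(B, 1, H * W, device=device)], 1)
    src_points = (T_target_to_src @ cam_points)[:, :3]                          # (B, 3, H*W)
    uv = K @ src_points
    uv = uv[:, :2] / uv[:, 2:3].clamp(min=1e-6)
    grid = torch.stack([uv[:, 0] / (W - 1) * 2 - 1,
                        uv[:, 1] / (H - 1) * 2 - 1], -1).view(B, H, W, 2)
    # The photometric loss compares this synthesised image with the real target.
    return F.grid_sample(src_img, grid, align_corners=True, padding_mode="border")
```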
While the keypoint-based maps created by sparse monocular simultaneous localization and mapping (SLAM) systems are useful for camera tracking, dense 3D reconstructions may be desired for many robotic tasks. Solutions involving depth cameras are limited in range and to indoor spaces, and dense reconstruction systems based on minimizing the photometric error between frames are typically poorly constrained and suffer from scale ambiguity. To address these issues, we propose a 3D reconstruction system that leverages the output of a convolutional neural network (CNN) to produce fully dense depth maps for keyframes that include metric scale. Our system, DeepFusion, is capable of producing real-time dense reconstructions on a GPU. It fuses the output of a semi-dense multi-view stereo algorithm with the depth and gradient predictions of a CNN in a probabilistic fashion, using learned uncertainties produced by the network. While the network only needs to be run once per keyframe, we are able to optimize the depth maps with each new frame so that new geometric constraints are constantly exploited. Based on its performance on synthetic and real-world datasets, we demonstrate that DeepFusion is capable of performing at least as well as other comparable systems.
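The probabilistic fusion described above can be illustrated by a per-pixel inverse-variance (precision-weighted) combination of a semi-dense stereo estimate and a CNN prediction; this generic Gaussian-fusion sketch ignores the network's gradient predictions and is not DeepFusion's actual formulation.

```python
import numpy as np

def fuse_depth_gaussian(d_stereo, var_stereo, d_cnn, var_cnn):
    """Per-pixel inverse-variance fusion of two depth estimates.

    d_stereo may be semi-dense: pixels with no stereo estimate are marked NaN
    (with matching NaN variances) and fall back to the network prediction alone.
    """
    precision_stereo = np.where(np.isnan(d_stereo), 0.0, 1.0 / var_stereo)
    precision_cnn = 1.0 / var_cnn
    fused_precision = precision_stereo + precision_cnn
    fused = (np.nan_to_num(d_stereo) * precision_stereo + d_cnn * precision_cnn) / fused_precision
    return fused, 1.0 / fused_precision       # fused depth and its variance
```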
(a) Stereo input: trajectory and sparse reconstruction of an urban environment with multiple loop closures. (b) RGB-D input: keyframes and dense pointcloud of a room scene with one loop closure. The pointcloud is rendered by backprojecting the sensor depth maps from estimated keyframe poses. No fusion is performed.
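The dense point cloud in (b) is obtained by backprojecting each keyframe's sensor depth map with its estimated pose and concatenating the results without any fusion; a small sketch of that backprojection is shown below (pinhole intrinsics and a camera-to-world pose convention are assumed).

```python
import numpy as np

def backproject_keyframe(depth, K, T_wc):
    """Lift a keyframe depth map to world-space 3D points (no fusion).

    depth : (H, W) depth map in metres (0 = no measurement).
    K     : (3, 3) pinhole intrinsics; T_wc: (4, 4) camera-to-world pose.
    Returns an (N, 3) array of world points for the valid pixels.
    """
    H, W = depth.shape
    vs, us = np.mgrid[0:H, 0:W]
    valid = depth > 0
    z = depth[valid]
    x = (us[valid] - K[0, 2]) * z / K[0, 0]
    y = (vs[valid] - K[1, 2]) * z / K[1, 1]
    pts_cam = np.stack([x, y, z], axis=1)
    return pts_cam @ T_wc[:3, :3].T + T_wc[:3, 3]


# Concatenating the clouds of all keyframes reproduces the rendering in (b).
```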