We propose Deep Patch Visual Odometry (DPVO), a new deep learning system for monocular visual odometry (VO). DPVO is accurate and robust while running at 2x-5x real-time speed using only 4GB of memory on a single RTX-3090 GPU. We evaluate on standard benchmarks and outperform all prior work (classical or learned) in both accuracy and speed. Code is available at https://github.com/princeton-vl/dpvo.
We present a novel dual-flow representation of scene motion that decomposes optical flow into a static flow field caused by camera motion and a dynamic flow field caused by the motion of objects in the scene. Based on this representation, we propose a dynamic SLAM system, named DeFlowSLAM, that exploits both static and dynamic pixels in the images to solve the camera poses, rather than simply using static background pixels as other dynamic SLAM systems do. We propose a dynamic update module to train DeFlowSLAM in a self-supervised manner, where a dense bundle adjustment layer takes the estimated static flow field and weights controlled by a dynamic mask, and outputs the residuals of the optimized static flow field, camera poses, and inverse depths. The static and dynamic flow fields are estimated by warping the current image to adjacent images, and the optical flow can be obtained by summing the two fields. Extensive experiments show that DeFlowSLAM generalizes well to both static and dynamic scenes: it performs comparably to the state-of-the-art DROID-SLAM in static and mildly dynamic scenes, while significantly outperforming DROID-SLAM in highly dynamic environments. Code and data are available on the project webpage: https://zju3dv.github.io/deflowslam/.
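To make the decomposition concrete, here is a minimal PyTorch sketch of the dual-flow idea (not the DeFlowSLAM code): the static field is the flow induced purely by camera motion and depth, and the dynamic field is whatever remains of the full optical flow after subtracting it. Function names and tensor conventions are illustrative assumptions.

```python
import torch

def static_flow(depth, K, T_rel):
    """Flow induced by camera motion alone.
    depth: (H, W) depth of the reference frame
    K:     (3, 3) camera intrinsics
    T_rel: (4, 4) relative pose (reference -> adjacent frame)
    """
    H, W = depth.shape
    v, u = torch.meshgrid(torch.arange(H, dtype=torch.float32),
                          torch.arange(W, dtype=torch.float32), indexing="ij")
    pix = torch.stack([u, v, torch.ones_like(u)], dim=-1)          # (H, W, 3)
    rays = (torch.linalg.inv(K) @ pix.reshape(-1, 3).T).T          # back-project pixels
    pts = rays * depth.reshape(-1, 1)                               # 3D points in reference frame
    pts_h = torch.cat([pts, torch.ones(pts.shape[0], 1)], dim=1)    # homogeneous coordinates
    pts_adj = (T_rel @ pts_h.T).T[:, :3]                            # transform into adjacent frame
    proj = (K @ pts_adj.T).T
    uv_adj = proj[:, :2] / proj[:, 2:3].clamp(min=1e-6)             # project back to pixels
    return (uv_adj - pix.reshape(-1, 3)[:, :2]).reshape(H, W, 2)

def dynamic_flow(full_flow, depth, K, T_rel):
    # Total optical flow = static flow + dynamic flow, so the dynamic field of
    # moving objects is recovered by subtraction.
    return full_flow - static_flow(depth, K, T_rel)
```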
This paper presents ORB-SLAM, a feature-based monocular SLAM system that operates in real time, in small and large, indoor and outdoor environments. The system is robust to severe motion clutter, allows wide baseline loop closing and relocalization, and includes full automatic initialization. Building on excellent algorithms of recent years, we designed from scratch a novel system that uses the same features for all SLAM tasks: tracking, mapping, relocalization, and loop closing. A survival of the fittest strategy that selects the points and keyframes of the reconstruction leads to excellent robustness and generates a compact and trackable map that only grows if the scene content changes, allowing lifelong operation. We present an exhaustive evaluation in 27 sequences from the most popular datasets. ORB-SLAM achieves unprecedented performance with respect to other state-of-the-art monocular SLAM approaches. For the benefit of the community, we make the source code public.
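As a concrete illustration of the feature primitive the system is built on, the following OpenCV sketch extracts ORB keypoints and binary descriptors in two frames and matches them with Hamming-distance brute force; this is a generic example, not the ORB-SLAM implementation, and the function name is illustrative.

```python
import cv2

def match_orb(img1, img2, n_features=1000):
    orb = cv2.ORB_create(nfeatures=n_features)
    kp1, des1 = orb.detectAndCompute(img1, None)   # keypoints + binary descriptors
    kp2, des2 = orb.detectAndCompute(img2, None)
    # Hamming distance with cross-check is the standard pairing for binary descriptors.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    return kp1, kp2, matches

# Usage: pass two grayscale frames, e.g. loaded with cv2.imread(path, cv2.IMREAD_GRAYSCALE).
```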
In this paper, we present TANDEM, a real-time monocular tracking and dense mapping framework. For pose estimation, TANDEM performs photometric bundle adjustment based on a sliding window of keyframes. To increase robustness, we propose a novel tracking front-end that performs dense direct image alignment using depth maps rendered from a global model, which is built incrementally from dense depth predictions. To predict the dense depth maps, we propose Cascade View-Aggregation MVSNet (CVA-MVSNet), which exploits the entire active keyframe window by hierarchically constructing 3D cost volumes with adaptive view aggregation to balance the different stereo baselines between keyframes. Finally, the predicted depth maps are fused into a consistent global map represented as a truncated signed distance function (TSDF) voxel grid. Our experimental results show that TANDEM outperforms other state-of-the-art traditional and learned monocular visual odometry (VO) methods in terms of camera tracking. Moreover, TANDEM shows state-of-the-art real-time 3D reconstruction performance.
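A minimal NumPy sketch of the TSDF fusion step described above: each depth map is integrated into a truncated signed distance voxel grid with a running weighted average. The grid layout, truncation distance, and function name are illustrative assumptions, not TANDEM's implementation.

```python
import numpy as np

def integrate_depth(tsdf, weights, origin, voxel_size, depth, K, T_wc, trunc=0.1):
    """Update tsdf/weights in place with one depth frame.
    tsdf, weights: (X, Y, Z) float arrays; origin: world position of voxel (0, 0, 0);
    depth: (H, W) metric depth image; K: (3, 3) intrinsics; T_wc: camera-to-world pose."""
    X, Y, Z = tsdf.shape
    ii, jj, kk = np.meshgrid(np.arange(X), np.arange(Y), np.arange(Z), indexing="ij")
    pts_w = origin + voxel_size * np.stack([ii, jj, kk], axis=-1).reshape(-1, 3)
    T_cw = np.linalg.inv(T_wc)                               # world -> camera
    pts_c = (T_cw[:3, :3] @ pts_w.T).T + T_cw[:3, 3]
    z = pts_c[:, 2]
    valid = z > 1e-6                                         # keep voxels in front of the camera
    uv = (K @ pts_c.T).T
    u = np.zeros(z.shape, dtype=int)
    v = np.zeros(z.shape, dtype=int)
    u[valid] = np.round(uv[valid, 0] / z[valid]).astype(int)
    v[valid] = np.round(uv[valid, 1] / z[valid]).astype(int)
    H, W = depth.shape
    valid &= (u >= 0) & (u < W) & (v >= 0) & (v < H)
    sdf = np.full(z.shape, -np.inf)
    sdf[valid] = depth[v[valid], u[valid]] - z[valid]        # signed distance along the viewing ray
    keep = valid & (sdf > -trunc) & (depth[v, u] > 0)        # drop voxels far behind the surface
    obs = np.clip(sdf[keep] / trunc, -1.0, 1.0)
    t, w = tsdf.reshape(-1), weights.reshape(-1)             # flat views into the grids
    t[keep] = (w[keep] * t[keep] + obs) / (w[keep] + 1.0)    # running weighted average
    w[keep] += 1.0
```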
Estimating the pose of a moving camera from monocular video is a challenging problem, especially due to the presence of moving objects in dynamic environments, where the performance of existing camera pose estimation methods is susceptible to geometrically inconsistent pixels. To tackle this challenge, we present a robust dense indirect structure-from-motion method for videos, based on dense correspondences initialized from pairwise optical flow. Our key idea is to optimize long-range video correspondences as dense point trajectories and use them to learn robust motion segmentation. A novel neural network architecture is proposed to process irregular point trajectory data. Camera poses are then estimated and optimized with global bundle adjustment over the subset of long-range point trajectories classified as static. Experiments on the MPI Sintel dataset show that our system produces significantly more accurate camera trajectories than existing state-of-the-art methods. In addition, our method retains reasonable accuracy of camera poses on fully static scenes, consistently outperforming strong state-of-the-art end-to-end deep learning methods based on dense correspondences, which demonstrates the potential of dense indirect methods built on optical flow and point trajectories. Since the point trajectory representation is general, we further present results and comparisons on in-the-wild monocular videos with complex motion of dynamic objects. Code is available at https://github.com/bytedance/particle-sfm.
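A small sketch of the underlying idea of turning pairwise optical flow into long-range point trajectories: each tracked point is advected frame to frame by sampling the flow field at its current position. This is an assumed simplification (nearest-neighbour flow lookup, no occlusion check), not the ParticleSfM code.

```python
import numpy as np

def chain_trajectories(flows, seeds):
    """flows: list of (H, W, 2) forward flow fields, frame t -> t+1.
    seeds: (N, 2) starting pixel positions (x, y) in frame 0.
    Returns (T+1, N, 2) trajectories; points that leave the image are frozen."""
    H, W, _ = flows[0].shape
    traj = [seeds.astype(np.float32)]
    pts = seeds.astype(np.float32).copy()
    for flow in flows:
        x = np.clip(np.round(pts[:, 0]).astype(int), 0, W - 1)   # nearest-neighbour flow lookup
        y = np.clip(np.round(pts[:, 1]).astype(int), 0, H - 1)
        inside = (pts[:, 0] >= 0) & (pts[:, 0] < W) & (pts[:, 1] >= 0) & (pts[:, 1] < H)
        pts = pts + flow[y, x] * inside[:, None]                  # advect only points still in view
        traj.append(pts.copy())
    return np.stack(traj)
```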
Indirect methods for visual SLAM are popular due to their robustness to environmental changes. ORB-SLAM2 is the benchmark method in this domain; however, it computes descriptors that are never reused unless a frame is selected as a keyframe. We present FastORB-SLAM, which is lightweight and efficient as it tracks keypoints between adjacent frames without computing descriptors. To this end, a two-stage coarse-to-fine descriptor-independent keypoint matching method is proposed based on sparse optical flow. In the first stage, we predict initial keypoint correspondences via a simple but effective motion model and then robustly establish the correspondences via pyramid-based sparse optical flow tracking. In the second stage, we leverage motion smoothness and epipolar geometry constraints to refine the correspondences. In particular, our method computes descriptors only for keyframes. We test FastORB-SLAM on the TUM and ICL-NUIM RGB-D datasets and compare its accuracy and efficiency against nine existing RGB-D SLAM methods. Qualitative and quantitative results show that our method achieves state-of-the-art accuracy and is about twice as fast as ORB-SLAM2.
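A hedged sketch of the descriptor-free matching idea using standard OpenCV calls (not the FastORB-SLAM implementation): keypoints are first propagated by a simple motion model and then refined with pyramidal Lucas-Kanade optical flow; the constant-velocity prediction and parameter values are illustrative assumptions.

```python
import cv2
import numpy as np

def track_keypoints(prev_img, next_img, prev_pts, prev_flow=None):
    """prev_pts: (N, 1, 2) float32 keypoint locations in prev_img.
    prev_flow: optional (N, 1, 2) displacement from the last frame pair,
    used as a simple constant-velocity prediction of the new locations."""
    if prev_flow is not None:
        init = (prev_pts + prev_flow).astype(np.float32)          # motion-model prediction
        flags = cv2.OPTFLOW_USE_INITIAL_FLOW
    else:
        init, flags = prev_pts.copy(), 0
    next_pts, status, _ = cv2.calcOpticalFlowPyrLK(
        prev_img, next_img, prev_pts, init,
        winSize=(21, 21), maxLevel=3, flags=flags)                # pyramidal LK refinement
    good = status.ravel() == 1
    return prev_pts[good], next_pts[good]

# The second stage in the paper additionally filters these matches with motion
# smoothness and epipolar constraints, e.g. via cv2.findFundamentalMat with RANSAC.
```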
We introduce Recurrent All-Pairs Field Transforms (RAFT), a new deep network architecture for optical flow. RAFT extracts per-pixel features, builds multi-scale 4D correlation volumes for all pairs of pixels, and iteratively updates a flow field through a recurrent unit that performs lookups on the correlation volumes. RAFT achieves state-of-the-art performance. On KITTI, RAFT achieves an F1-all error of 5.10%, a 16% error reduction from the best published result (6.10%). On Sintel (final pass), RAFT obtains an end-point-error of 2.855 pixels, a 30% error reduction from the best published result (4.098 pixels). In addition, RAFT has strong cross-dataset generalization as well as high efficiency in inference time, training speed, and parameter count. Code is available at https://github.com/princeton-vl/RAFT.
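The following PyTorch sketch illustrates the two core operations named in the abstract, the all-pairs correlation volume and the lookup around the current flow estimate; it is a conceptual, single-scale simplification rather than the released RAFT code, and the function names are illustrative.

```python
import torch
import torch.nn.functional as F

def all_pairs_correlation(fmap1, fmap2):
    """fmap1, fmap2: (B, C, H, W) feature maps. Returns (B, H, W, H, W)."""
    B, C, H, W = fmap1.shape
    f1 = fmap1.reshape(B, C, H * W)
    f2 = fmap2.reshape(B, C, H * W)
    corr = torch.einsum("bci,bcj->bij", f1, f2) / C ** 0.5      # dot products for every pixel pair
    return corr.reshape(B, H, W, H, W)

def lookup(corr, coords, radius=4):
    """Sample a (2r+1)x(2r+1) window of correlation values around the frame-2
    position that the current flow estimate maps each frame-1 pixel to.
    corr: (B, H, W, H, W); coords: (B, H, W, 2) target (x, y) positions."""
    B, H, W, _, _ = corr.shape
    corr = corr.reshape(B * H * W, 1, H, W)
    dx = torch.arange(-radius, radius + 1, dtype=torch.float32)
    window = torch.stack(torch.meshgrid(dx, dx, indexing="xy"), dim=-1)   # (2r+1, 2r+1, 2) offsets
    centers = coords.reshape(B * H * W, 1, 1, 2)
    grid = centers + window                                               # absolute pixel positions
    grid = 2 * grid / torch.tensor([W - 1, H - 1]) - 1                    # normalize for grid_sample
    out = F.grid_sample(corr, grid, align_corners=True)                   # (BHW, 1, 2r+1, 2r+1)
    return out.reshape(B, H, W, -1)
```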
We present a novel panoptic visual odometry framework, PVO, to achieve a more comprehensive modeling of the scene's motion, geometry, and panoptic segmentation information. PVO models visual odometry (VO) and video panoptic segmentation (VPS) in a unified view, enabling the two tasks to mutually benefit each other. Specifically, we introduce a panoptic update module into the VO module, which operates on the image panoptic segmentation. This panoptic-enhanced VO module can prune the interference of dynamic objects in camera pose estimation by adjusting the weights of the optimized camera poses. On the other hand, the VO-enhanced VPS module improves segmentation accuracy by fusing the panoptic segmentation result of the current frame into adjacent frames using the camera pose, depth, and optical flow. The two modules contribute to each other through recurrent iterative optimization. Extensive experiments demonstrate that PVO outperforms state-of-the-art methods in both visual odometry and video panoptic segmentation tasks. Code and data are available on the project webpage: https://zju3dv.github.io/pvo/.
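A tiny illustrative sketch of one plausible reading of the panoptic-enhanced weighting (an assumption, not PVO's code): pixels whose panoptic label belongs to a movable "thing" class have their confidence weights suppressed before the pose optimization.

```python
import torch

def mask_dynamic_weights(weights, panoptic_labels, dynamic_class_ids, soft=0.0):
    """weights: (B, H, W) per-pixel confidence produced by the update module.
    panoptic_labels: (B, H, W) semantic class id per pixel.
    dynamic_class_ids: iterable of class ids treated as potentially moving."""
    dynamic = torch.zeros_like(weights, dtype=torch.bool)
    for cid in dynamic_class_ids:
        dynamic |= panoptic_labels == cid
    # soft = 0.0 removes dynamic pixels entirely; 0 < soft < 1 merely down-weights them.
    return torch.where(dynamic, weights * soft, weights)
```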
This paper presents ORB-SLAM3, the first system able to perform visual, visual-inertial and multi-map SLAM with monocular, stereo and RGB-D cameras, using pin-hole and fisheye lens models. The first main novelty is a feature-based tightly-integrated visual-inertial SLAM system that fully relies on Maximum-a-Posteriori (MAP) estimation, even during the IMU initialization phase. The result is a system that operates robustly in real time, in small and large, indoor and outdoor environments, and is two to ten times more accurate than previous approaches. The second main novelty is a multiple map system that relies on a new place recognition method with improved recall. Thanks to it, ORB-SLAM3 is able to survive long periods of poor visual information: when it gets lost, it starts a new map that will be seamlessly merged with previous maps when revisiting mapped areas. Compared with visual odometry systems that only use information from the last few seconds, ORB-SLAM3 is the first system able to reuse all previous information in all the algorithm stages. This allows the inclusion in bundle adjustment of co-visible keyframes that provide high-parallax observations boosting accuracy, even if they are widely separated in time or come from a previous mapping session. Our experiments show that, in all sensor configurations, ORB-SLAM3 is as robust as the best systems available in the literature, and significantly more accurate. Notably, our stereo-inertial SLAM achieves an average accuracy of 3.5 cm on the EuRoC drone dataset and 9 mm under quick hand-held motions in the room of the TUM-VI dataset, a setting representative of AR/VR scenarios. For the benefit of the community we make the source code public.
Figure: (a) Stereo input: trajectory and sparse reconstruction of an urban environment with multiple loop closures. (b) RGB-D input: keyframes and dense pointcloud of a room scene with one loop closure. The pointcloud is rendered by backprojecting the sensor depth maps from estimated keyframe poses. No fusion is performed.
We present a general framework for scale-aware direct monocular odometry based on depth predictions from a deep neural network. In contrast to previous methods where depth information is only partially exploited, we formulate a novel depth prediction residual that enables us to incorporate multi-view depth information. In addition, we propose to use a truncated robust cost function to prevent inconsistent depth estimates from being taken into account. The photometric and depth-prediction measurements are integrated into a tightly coupled optimization, leading to a scale-aware monocular system that does not accumulate scale drift. Our proposal is not tied to the particularities of a concrete neural network and is able to work with the vast majority of existing depth prediction solutions. We evaluate the effectiveness and generality of the proposal on the KITTI odometry dataset using two publicly available neural networks, comparing it with similar approaches as well as with state-of-the-art monocular and stereo SLAM methods. Experiments show that our proposal largely outperforms classical monocular SLAM, being 5 to 9 times more precise, beats similar approaches, and achieves an accuracy closer to that of stereo systems.
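A hedged sketch of a truncated robust cost on depth-prediction residuals: beyond a threshold the residual contributes only a constant cost, so inconsistent depth estimates cannot dominate the optimization. The truncated Huber kernel and thresholds here are illustrative choices; the paper's exact kernel may differ.

```python
import numpy as np

def truncated_huber(residual, delta=0.05, truncation=0.25):
    """Huber cost for small residuals, constant cost beyond the truncation point."""
    r = np.abs(residual)
    huber = np.where(r <= delta, 0.5 * r ** 2, delta * (r - 0.5 * delta))
    cap = 0.5 * delta ** 2 if truncation <= delta else delta * (truncation - 0.5 * delta)
    return np.where(r <= truncation, huber, cap)

# Example: per-point depth-prediction residuals; the 0.6 outlier is capped at a constant cost
# instead of dragging the estimate away from the consistent measurements.
residuals = np.array([0.01, 0.08, 0.6, -0.02])
print(truncated_huber(residuals))
```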
We propose a direct (feature-less) monocular SLAM algorithm which, in contrast to current state-of-the-art regarding direct methods, allows building large-scale, consistent maps of the environment. Along with highly accurate pose estimation based on direct image alignment, the 3D environment is reconstructed in real-time as a pose graph of keyframes with associated semi-dense depth maps. These are obtained by filtering over a large number of pixelwise small-baseline stereo comparisons. The explicitly scale-drift aware formulation allows the approach to operate on challenging sequences including large variations in scene scale. Major enablers are two key novelties: (1) a novel direct tracking method which operates on sim(3), thereby explicitly detecting scale-drift, and (2) an elegant probabilistic solution to include the effect of noisy depth values into tracking. The resulting direct monocular SLAM system runs in real-time on a CPU.
The agricultural industry constantly seeks to automate different processes involved in agricultural production, such as sowing, harvesting, and weed control. The use of mobile autonomous robots to perform these tasks is of great interest. Simultaneous Localization and Mapping (SLAM) systems oriented to arable land, a key component of mobile robotics, face a difficult challenge due to the visual difficulty caused by highly repetitive scenes. In recent years, several visual-inertial odometry (VIO) and SLAM systems have been developed. They have proven to be highly accurate in indoor and outdoor urban environments. However, they have not been properly evaluated in agricultural fields. In this work, we evaluate the most relevant state-of-the-art VIO systems in terms of accuracy and processing time on arable land, in order to better understand how they behave in these environments. In particular, the evaluation is carried out on a sensor dataset recorded by our wheeled robot in a soybean field, which was publicly released as the Rosario dataset. The evaluation shows that the highly repetitive appearance of the environment, the strong vibrations produced by rough terrain, and the motion of leaves caused by the wind expose the limitations of current state-of-the-art VIO and SLAM systems. We analyze the system failures and highlight the observed shortcomings, including initialization failures, tracking loss, and sensitivity to IMU saturation. Finally, we conclude that even though some systems, such as ORB-SLAM3 and S-MSCKF, show good results with respect to the others, further improvements are needed to make them reliable in agricultural fields for certain applications such as soil tillage of crop rows and pesticide spraying.
In this paper, a novel solution is introduced for visual simultaneous localization and mapping (VSLAM) built out of deep learning components. The proposed architecture is a highly modular framework in which each component offers state-of-the-art results in its respective field of vision-based deep learning solutions. The paper shows that through the synergistic integration of these individual building blocks, a functional and efficient all-through-deep-neural (ATDN) VSLAM system can be created. An embedding distance loss function is introduced and used to train the ATDN architecture. The resulting system achieves 4.4% translation error and 0.0176 deg/m rotation error on a subset of the KITTI dataset. The proposed architecture can be used for efficient, low-latency autonomous driving (AD) assistance database creation, as well as a basis for autonomous vehicle (AV) control.
We present an unsupervised learning framework for the task of monocular depth and camera motion estimation from unstructured video sequences. In common with recent work [10,14,16], we use an end-to-end learning approach with view synthesis as the supervisory signal. In contrast to the previous work, our method is completely unsupervised, requiring only monocular video sequences for training. Our method uses single-view depth and multiview pose networks, with a loss based on warping nearby views to the target using the computed depth and pose. The networks are thus coupled by the loss during training, but can be applied independently at test time. Empirical evaluation on the KITTI dataset demonstrates the effectiveness of our approach: 1) monocular depth performs comparably with supervised methods that use either ground-truth pose or depth for training, and 2) pose estimation performs favorably compared to established SLAM systems under comparable input settings.
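A compact PyTorch sketch of the view-synthesis supervision: a nearby source view is warped into the target view using the predicted depth and relative pose, and the photometric difference is penalized. Tensor conventions and the function name are illustrative; this is not the released SfMLearner code.

```python
import torch
import torch.nn.functional as F

def view_synthesis_loss(target, source, depth, K, T_ts):
    """target, source: (B, 3, H, W) images; depth: (B, 1, H, W) target depth;
    K: (B, 3, 3) intrinsics; T_ts: (B, 4, 4) pose mapping target -> source."""
    B, _, H, W = target.shape
    v, u = torch.meshgrid(torch.arange(H, dtype=torch.float32),
                          torch.arange(W, dtype=torch.float32), indexing="ij")
    pix = torch.stack([u, v, torch.ones_like(u)], dim=0).reshape(1, 3, -1)     # (1, 3, HW)
    cam = (torch.linalg.inv(K) @ pix) * depth.reshape(B, 1, -1)                # back-project
    cam_h = torch.cat([cam, torch.ones(B, 1, H * W)], dim=1)
    src = (T_ts @ cam_h)[:, :3]                                                # into the source frame
    proj = K @ src
    uv = proj[:, :2] / proj[:, 2:3].clamp(min=1e-6)
    # Normalize to [-1, 1] for grid_sample and bilinearly sample the source image.
    grid = torch.stack([2 * uv[:, 0] / (W - 1) - 1,
                        2 * uv[:, 1] / (H - 1) - 1], dim=-1).reshape(B, H, W, 2)
    warped = F.grid_sample(source, grid, align_corners=True)
    return (warped - target).abs().mean()                                      # photometric loss
```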
Visual-inertial odometry (VIO) is the pose estimation backbone of most AR/VR and autonomous robotic systems today, in both academia and industry. However, these systems are highly sensitive to the initialization of key parameters such as sensor biases, gravity direction, and metric scale. In practical scenarios where the high-parallax or variable-acceleration assumptions are rarely met (e.g., a hovering aerial robot, a smartphone AR user not gesticulating with the phone), classical visual-inertial initialization formulations often become ill-conditioned and/or fail to converge meaningfully. In this paper, we target visual-inertial initialization specifically for these low-excitation scenarios critical to in-the-wild usage. We propose to circumvent the limitations of classical visual-inertial structure-from-motion (SfM) initialization by incorporating a new learning-based measurement as a higher-level input. We leverage learned monocular depth images (monodepth) to constrain the relative depth of features, and upgrade the monodepth to metric scale by jointly optimizing its scale and offset. Our experiments show a significant improvement in problem conditioning compared to a classical formulation of visual-inertial initialization, and demonstrate significant accuracy and robustness improvements relative to the state-of-the-art on public benchmarks, particularly under low-excitation scenarios. We further extend this improvement to an implementation in an existing odometry system to illustrate the impact of our improved initialization on the resulting tracking trajectories.
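A minimal sketch of the "upgrade monodepth to metric scale" step as described, under the assumption that it can be posed as a linear fit: given learned depths and a few metric depths at the same sparse points, solve a least-squares problem for a global scale and offset. The function name and toy data are illustrative.

```python
import numpy as np

def fit_scale_offset(mono_depth, metric_depth):
    """mono_depth, metric_depth: (N,) depths at the same sparse feature points.
    Solves min_{s, b} || s * mono_depth + b - metric_depth ||^2."""
    A = np.stack([mono_depth, np.ones_like(mono_depth)], axis=1)
    (s, b), *_ = np.linalg.lstsq(A, metric_depth, rcond=None)
    return s, b

mono = np.array([1.2, 2.5, 0.8, 3.1])      # relative depths from the network
metric = 2.0 * mono + 0.1                  # toy "true" metric depths
s, b = fit_scale_offset(mono, metric)
print(s, b)                                # recovers approximately 2.0 and 0.1
```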
We present Theseus, an efficient application-agnostic open source library for differentiable nonlinear least squares (DNLS) optimization built on PyTorch, providing a common framework for end-to-end structured learning in robotics and vision. Existing DNLS implementations are application-specific and do not always incorporate many ingredients important for efficiency. Theseus is application-agnostic, as we illustrate with several example applications that are built using the same underlying differentiable components, such as second-order optimizers, standard cost functions, and Lie groups. For efficiency, Theseus incorporates support for sparse solvers, automatic vectorization, batching, GPU acceleration, and gradient computation with implicit differentiation and direct loss minimization. We perform extensive performance evaluation in a set of applications, demonstrating significant efficiency gains and better scalability when these features are incorporated. Project page: https://sites.google.com/view/theseus-ai
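To make the notion of differentiable nonlinear least squares concrete, here is a conceptual PyTorch sketch (not the Theseus API): a few Gauss-Newton steps are run inside autograd so that the solution remains differentiable with respect to upstream network outputs. Theseus adds the sparse solvers, vectorization, batching, and implicit differentiation discussed above; the function name and toy problem below are illustrative.

```python
import torch

def gauss_newton(residual_fn, x0, num_iters=10):
    """residual_fn: maps a (D,) variable to a (R,) residual vector (R >= D).
    x0: (D,) initial guess. All operations stay in autograd, so the returned
    solution is differentiable w.r.t. any tensors residual_fn closes over."""
    x = x0
    for _ in range(num_iters):
        J = torch.autograd.functional.jacobian(residual_fn, x, create_graph=True)
        r = residual_fn(x)
        # Damped normal equations: (J^T J + eps I) delta = -J^T r
        delta = torch.linalg.solve(J.T @ J + 1e-6 * torch.eye(x.numel()), -J.T @ r)
        x = x + delta
    return x

# Toy use: fit y = a * exp(b * t); in an end-to-end setting the data (or residual
# weights) could come from a network whose parameters receive gradients through the solve.
t = torch.linspace(0.0, 1.0, 20)
y = 2.0 * torch.exp(0.5 * t)
sol = gauss_newton(lambda ab: ab[0] * torch.exp(ab[1] * t) - y,
                   torch.tensor([1.5, 0.3]))
print(sol)   # converges to approximately [2.0, 0.5]
```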
A monocular visual-inertial system (VINS), consisting of a camera and a low-cost inertial measurement unit (IMU), forms the minimum sensor suite for metric six degrees-of-freedom (DOF) state estimation. However, the lack of direct distance measurement poses significant challenges in terms of IMU processing, estimator initialization, extrinsic calibration, and nonlinear optimization. In this work, we present VINS-Mono: a robust and versatile monocular visual-inertial state estimator. Our approach starts with a robust procedure for estimator initialization and failure recovery. A tightly-coupled, nonlinear optimization-based method is used to obtain high accuracy visual-inertial odometry by fusing pre-integrated IMU measurements and feature observations. A loop detection module, in combination with our tightly-coupled formulation, enables relocalization with minimum computation overhead. We additionally perform four degrees-of-freedom pose graph optimization to enforce global consistency. We validate the performance of our system on public datasets and real-world experiments and compare against other state-of-the-art algorithms. We also perform onboard closed-loop autonomous flight on the MAV platform and port the algorithm to an iOS-based demonstration. We highlight that the proposed work is a reliable, complete, and versatile system that is applicable for different applications that require high accuracy localization. We open source our implementations for both PCs and iOS mobile devices.
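A simplified sketch of IMU pre-integration, the quantity VINS-Mono fuses with feature observations: gyroscope and accelerometer samples between two frames are summarized into relative rotation, velocity, and position increments expressed in the first frame's body coordinates. Bias handling and covariance propagation are omitted for brevity, and the function name is illustrative.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def preintegrate(gyro, accel, dt):
    """gyro, accel: (N, 3) body-frame angular rate [rad/s] and specific force
    [m/s^2] samples; dt: sample period [s]. Returns (dR, dv, dp) increments
    expressed in the body frame at the start of the interval (gravity excluded)."""
    dR = np.eye(3)                    # rotation from the start frame to the current sample
    dv = np.zeros(3)                  # velocity increment
    dp = np.zeros(3)                  # position increment
    for w, a in zip(gyro, accel):
        acc_k = dR @ a                # rotate the specific force into the start frame
        dp += dv * dt + 0.5 * acc_k * dt ** 2
        dv += acc_k * dt
        dR = dR @ R.from_rotvec(w * dt).as_matrix()
    return dR, dv, dp
```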
Temporally consistent depth estimation is crucial for real-time applications such as augmented reality. While stereo depth estimation has received substantial attention that has led to per-frame improvements, relatively little work has focused on temporal consistency across frames. Indeed, based on our analysis, current stereo depth estimation techniques still suffer from poor temporal consistency. Stabilizing depth in dynamic scenes is challenging due to concurrent object and camera motion. In an online setting, this problem is further aggravated because only past frames are available. In this paper, we present a technique to produce temporally consistent depth estimates in dynamic scenes in an online setting. Our network augments current per-frame stereo networks with novel motion and fusion networks. The motion network accounts for both object and camera motion by predicting a per-pixel SE3 transformation. The fusion network improves consistency of the predictions by aggregating the current and previous predictions with regressed weights. We conduct extensive experiments across varied datasets (synthetic, outdoor, indoor, and medical). In both zero-shot generalization and domain fine-tuning, we demonstrate that our proposed approach outperforms competing methods in terms of temporal stability and per-frame accuracy, both quantitatively and qualitatively. Our code will be available online.
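A small illustrative sketch (an assumed form, not the paper's network) of the fusion step: the previous prediction, warped into the current frame, is blended with the current prediction using a regressed per-pixel weight.

```python
import torch

def fuse_depth(current_depth, warped_prev_depth, fusion_weight):
    """All tensors (B, 1, H, W). fusion_weight in [0, 1] favours the current
    prediction where it is 1 and the temporally propagated one where it is 0."""
    return fusion_weight * current_depth + (1.0 - fusion_weight) * warped_prev_depth
```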
Simultaneous Localization & Mapping (SLAM) is the process of building a mutual relationship between localization and mapping of the subject in its surrounding environment. With the help of different sensors, various types of SLAM systems have been developed to deal with the problem of building the relationship between localization and mapping. A limitation of the SLAM process is the lack of consideration of dynamic objects in the mapping of the environment. We propose Dynamic Object Tracking SLAM (DyOb-SLAM), a Visual SLAM system that can localize and map the surrounding dynamic objects in the environment as well as track the dynamic objects in each frame. With the help of a neural network and a dense optical flow algorithm, dynamic objects and static objects in an environment can be differentiated. DyOb-SLAM creates two separate maps for static and dynamic contents. For the static features, a sparse map is obtained. For the dynamic contents, a trajectory global map is created as output. As a result, a frame-to-frame, real-time dynamic object tracking system is obtained. With the pose calculation of the dynamic objects and camera, DyOb-SLAM can estimate the speed of the dynamic objects over time. The performance of DyOb-SLAM is evaluated by comparing it with a similar Visual SLAM system, VDO-SLAM, and is measured by calculating the camera and object pose errors as well as the object speed error.
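A minimal sketch of how per-frame object poses permit the speed estimation mentioned above: consecutive world-frame object positions are differenced over the frame interval. The function name and conventions are illustrative, not DyOb-SLAM's implementation.

```python
import numpy as np

def object_speed(T_prev, T_curr, dt):
    """T_prev, T_curr: (4, 4) world-frame object poses at consecutive frames;
    dt: time between the frames in seconds. Returns speed in m/s."""
    displacement = T_curr[:3, 3] - T_prev[:3, 3]
    return np.linalg.norm(displacement) / dt
```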