In recent years, unsupervised monocular depth and ego-motion estimation has drawn extensive research attention. Although current methods reach a high up-to-scale accuracy, they usually fail to learn the true metric scale due to the inherent scale ambiguity of training on monocular sequences. In this work, we tackle this problem and propose DynaDepth, a novel scale-aware framework that integrates information from vision and IMU motion dynamics. Specifically, we first propose an IMU photometric loss and a cross-sensor photometric consistency loss to provide dense supervision and absolute scale. To fully exploit the complementary information from the two sensors, we further derive a camera-centric extended Kalman filter (EKF) that updates the IMU-preintegrated motions when visual measurements are observed. Moreover, the EKF formulation enables learning an ego-motion uncertainty measure, which is non-trivial for unsupervised methods. By leveraging the IMU during training, DynaDepth not only learns the absolute scale, but also gains better generalization ability and robustness against vision degradation such as illumination changes and moving objects. We validate the effectiveness of DynaDepth through extensive experiments and simulations on the KITTI and Make3D datasets.
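The sketch below illustrates the generic EKF predict/update cycle that such a vision-IMU fusion builds on: the IMU preintegration drives the prediction, and the network's visual pose acts as the measurement. All symbols are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def ekf_step(x, P, F, Q, z, h, H, R):
    """One generic EKF cycle: propagate the state with a (linearized)
    IMU motion model, then correct it with a visual measurement.

    x, P : state mean and covariance
    F, Q : linearized transition matrix and process noise (from IMU preintegration)
    z    : visual measurement (e.g., relative pose from the depth/pose networks)
    h, H : measurement function and its Jacobian
    R    : measurement noise covariance
    """
    # Predict: IMU-driven propagation.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q

    # Update: fuse the visual measurement.
    y = z - h(x_pred)                      # innovation
    S = H @ P_pred @ H.T + R               # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)    # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```

The innovation covariance S is also what makes the learned uncertainty usable: it weights how strongly the visual measurement corrects the inertial prediction.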
In this paper, we propose a unified information-theoretic framework for learning-based methods aimed at odometry estimation, a crucial component of many robotics and vision tasks such as navigation and virtual reality where relative camera poses are required. We formulate this problem as optimizing a variational information bottleneck objective function, which eliminates pose-irrelevant information from the latent representation. The proposed framework provides an elegant tool for performance evaluation and understanding in information-theoretic language. Specifically, we bound the generalization errors of the deep information bottleneck framework and the predictability of the latent representation. These provide not only performance guarantees but also practical guidance for model design, sample collection, and sensor selection. Furthermore, the stochastic latent representation provides a natural uncertainty measure without the need for extra structures or computations. Experiments on two well-known odometry datasets demonstrate the effectiveness of our method.
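As a rough illustration of the bottleneck objective described above, the sketch below shows a generic variational information bottleneck loss with a reparameterized Gaussian latent; beta and all names are illustrative assumptions, not the paper's exact formulation.

```python
import torch

def vib_objective(task_loss, mu, log_var, beta=1e-3):
    """Variational information bottleneck: task loss plus a KL penalty that
    compresses the latent representation toward a standard normal prior.

    mu, log_var : parameters of the Gaussian posterior q(z|x), shape (B, D)
    beta        : compression/accuracy trade-off weight (illustrative)
    """
    kl = 0.5 * (mu.pow(2) + log_var.exp() - log_var - 1).sum(-1).mean()
    return task_loss + beta * kl

def sample_latent(mu, log_var):
    """Reparameterized sample; its spread doubles as an uncertainty measure."""
    return mu + torch.randn_like(mu) * (0.5 * log_var).exp()
```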
In this paper, we introduce a novel approach for ground plane normal estimation of wheeled vehicles. In practice, the ground plane is dynamically changed due to braking and unstable road surface. As a result, the vehicle pose, especially the pitch angle, is oscillating from subtle to obvious. Thus, estimating ground plane normal is meaningful since it can be encoded to improve the robustness of various autonomous driving tasks (e.g., 3D object detection, road surface reconstruction, and trajectory planning). Our proposed method only uses odometry as input and estimates accurate ground plane normal vectors in real time. Particularly, it fully utilizes the underlying connection between the ego pose odometry (ego-motion) and its nearby ground plane. Built on that, an Invariant Extended Kalman Filter (IEKF) is designed to estimate the normal vector in the sensor's coordinate. Thus, our proposed method is simple yet efficient and supports both camera- and inertial-based odometry algorithms. Its usability and the marked improvement of robustness are validated through multiple experiments on public datasets. For instance, we achieve state-of-the-art accuracy on KITTI dataset with the estimated vector error of 0.39°. Our code is available at github.com/manymuch/ground_normal_filter.
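To make the odometry-to-normal connection concrete, here is a minimal sketch that propagates a ground normal through the relative rotations reported by odometry, under the assumption that the normal is locally fixed in the world frame. It uses a simple complementary-filter blend rather than the paper's IEKF, and the prior and gain are illustrative.

```python
import numpy as np

def propagate_normal(n_prev, R_rel, alpha=0.9):
    """Track the ground-plane normal in the sensor frame from odometry.

    If the normal is (locally) fixed in the world frame, a relative rotation
    R_rel between frames rotates it by R_rel^T in the sensor frame. A
    complementary-filter blend toward a flat-ground prior [0, -1, 0]
    (y-down camera convention) damps noise. This is a simplified stand-in
    for the paper's IEKF, not its actual implementation.
    """
    n_pred = R_rel.T @ n_prev                 # rotate normal into the new frame
    n_prior = np.array([0.0, -1.0, 0.0])      # flat-ground prior
    n = alpha * n_pred + (1.0 - alpha) * n_prior
    return n / np.linalg.norm(n)
```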
In this paper, the fundamental problem of achieving robust visual odometry (VO) is tackled by incorporating geometry-based methods into a deep learning architecture in a self-supervised manner. In general, pure geometry-based algorithms are less robust than deep learning approaches in feature point extraction and matching, but perform well in ego-motion estimation thanks to their mature geometric theory. In this work, a novel optical flow network (PANet) built on a position-aware mechanism is first proposed. Then, a new system is proposed that learns ego-motion by jointly estimating depth, optical flow, and ego-motion without a typical pose network. A key component of the proposed system is an improved bundle adjustment module, which incorporates multiple sampling, ego-motion initialization, dynamic damping factor adjustment, and Jacobian matrix weighting. In addition, a novel relative photometric loss function is advanced to improve depth estimation accuracy. The experiments show that the proposed system not only outperforms other state-of-the-art methods in depth, flow, and VO estimation among learning-based approaches, but also significantly improves robustness compared with geometry-based, learning-based, and hybrid VO systems. Further experiments show that our model achieves outstanding generalization ability and performance in challenging indoor (TUM-RGBD) and outdoor (KAIST) scenes.
We present an unsupervised learning framework for the task of monocular depth and camera motion estimation from unstructured video sequences. In common with recent work [10,14,16], we use an end-to-end learning approach with view synthesis as the supervisory signal. In contrast to the previous work, our method is completely unsupervised, requiring only monocular video sequences for training. Our method uses single-view depth and multiview pose networks, with a loss based on warping nearby views to the target using the computed depth and pose. The networks are thus coupled by the loss during training, but can be applied independently at test time. Empirical evaluation on the KITTI dataset demonstrates the effectiveness of our approach: 1) monocular depth performs comparably with supervised methods that use either ground-truth pose or depth for training, and 2) pose estimation performs favorably compared to established SLAM systems under comparable input settings.
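The view-synthesis supervision described above can be made concrete with a short sketch: backproject target pixels with the predicted depth, transform them with the predicted pose, reproject into the source view, and penalize the photometric difference. Conventions and names below are illustrative, not the authors' code.

```python
import torch
import torch.nn.functional as F

def view_synthesis_loss(tgt, src, depth, T, K):
    """Photometric loss from warping a source view to the target view using
    predicted depth and relative pose (a minimal sketch of the supervisory
    signal).

    tgt, src : (B,3,H,W) target / source images
    depth    : (B,1,H,W) predicted target depth
    T        : (B,4,4) target-to-source camera transform
    K        : (B,3,3) intrinsics
    """
    B, _, H, W = tgt.shape
    # Pixel grid in homogeneous coordinates.
    v, u = torch.meshgrid(torch.arange(H, dtype=tgt.dtype),
                          torch.arange(W, dtype=tgt.dtype), indexing="ij")
    pix = torch.stack([u, v, torch.ones_like(u)], 0).view(1, 3, -1).expand(B, -1, -1)

    # Backproject to 3D, move to the source frame, reproject.
    cam = torch.linalg.inv(K) @ pix * depth.view(B, 1, -1)
    cam_h = torch.cat([cam, torch.ones(B, 1, H * W, dtype=tgt.dtype)], 1)
    src_pix = K @ (T @ cam_h)[:, :3]
    src_pix = src_pix[:, :2] / src_pix[:, 2:].clamp(min=1e-6)

    # Normalize to [-1,1] and sample the source image.
    grid = src_pix.permute(0, 2, 1).reshape(B, H, W, 2)
    grid = 2 * grid / torch.tensor([W - 1, H - 1], dtype=tgt.dtype) - 1
    warped = F.grid_sample(src, grid, align_corners=True)
    return (tgt - warped).abs().mean()
```

Because the loss couples depth and pose only through the warp, the two networks can be trained jointly and then applied independently at test time, exactly as the abstract notes.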
As a fundamental component of many autonomous driving and robotic activities, such as ego-motion estimation, obstacle avoidance, and scene understanding, monocular depth estimation (MDE) has attracted great attention from the computer vision and robotics communities. Over the past decades, a large number of methods have been developed. To the best of our knowledge, however, there is no comprehensive survey of MDE. This paper aims to bridge this gap by reviewing 197 relevant articles published between 1970 and 2021. In particular, we provide a comprehensive survey of MDE covering various methods, introduce the popular performance evaluation metrics, and summarize publicly available datasets. We also summarize available open-source implementations of some representative methods and compare their performance. Furthermore, we review the application of MDE in some important robotic tasks. Finally, we conclude this paper by presenting some promising directions for future research. This survey is expected to assist readers in navigating this research field.
A monocular visual-inertial system (VINS), consisting of a camera and a low-cost inertial measurement unit (IMU), forms the minimum sensor suite for metric six degrees-of-freedom (DOF) state estimation. However, the lack of direct distance measurement poses significant challenges in terms of IMU processing, estimator initialization, extrinsic calibration, and nonlinear optimization. In this work, we present VINS-Mono: a robust and versatile monocular visual-inertial state estimator. Our approach starts with a robust procedure for estimator initialization and failure recovery. A tightly-coupled, nonlinear optimization-based method is used to obtain high accuracy visual-inertial odometry by fusing pre-integrated IMU measurements and feature observations. A loop detection module, in combination with our tightly-coupled formulation, enables relocalization with minimum computation overhead. We additionally perform four degrees-of-freedom pose graph optimization to enforce global consistency. We validate the performance of our system on public datasets and real-world experiments and compare against other state-of-the-art algorithms. We also perform onboard closed-loop autonomous flight on the MAV platform and port the algorithm to an iOS-based demonstration. We highlight that the proposed work is a reliable, complete, and versatile system that is applicable for different applications that require high accuracy localization. We open source our implementations for both PCs and iOS mobile devices.
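A minimal sketch of IMU preintegration, the step that summarizes inertial measurements between camera frames before fusion, may help; real systems such as VINS-Mono additionally track bias Jacobians and covariances on the manifold, which this illustration omits.

```python
import numpy as np

def preintegrate_imu(gyro, accel, dt):
    """Minimal IMU preintegration between two camera frames (illustrative
    sketch; biases and gravity compensation are handled later at the
    estimation stage and are omitted here).

    gyro, accel : (N,3) body-frame angular velocity [rad/s] and
                  specific force [m/s^2] samples between the frames
    dt          : sample period [s]
    Returns the preintegrated rotation, velocity, and position deltas,
    expressed in the body frame at the first camera frame.
    """
    dR = np.eye(3)
    dv = np.zeros(3)
    dp = np.zeros(3)
    for w, a in zip(gyro, accel):
        dp += dv * dt + 0.5 * (dR @ a) * dt**2
        dv += (dR @ a) * dt
        # Rotation increment via the exponential map (Rodrigues' formula).
        theta = w * dt
        angle = np.linalg.norm(theta)
        if angle > 1e-12:
            k = theta / angle
            K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
            dR = dR @ (np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * K @ K)
    return dR, dv, dp
```

The point of preintegration is that these deltas are independent of the absolute state, so the optimizer can relinearize poses without reprocessing raw IMU samples.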
Modern computer vision has moved beyond the domain of internet photo collections and into the physical world, guiding camera-equipped robots and autonomous cars through unstructured environments. To enable these embodied agents to interact with real-world objects, cameras are increasingly being used as depth sensors, reconstructing the environment for a variety of downstream reasoning tasks. Machine-learning-aided depth perception, or depth estimation, predicts the distance of each pixel in an image. While impressive progress has been made in depth estimation, significant challenges remain: (1) ground-truth depth labels are difficult to collect at scale, (2) camera information is typically assumed to be known but is often unreliable, and (3) restrictive camera assumptions are common, even though a wide variety of camera types and lenses are used in practice. In this thesis, we focus on relaxing these assumptions and describe contributions toward the ultimate goal of turning cameras into truly generic depth sensors.
Visual odometry (VO) estimation is an important source of information for vehicle state estimation and autonomous driving. Recently, deep-learning-based approaches have begun to appear in the literature. However, in the context of driving, single-sensor-based approaches are often prone to failure because of degraded image quality due to environmental factors, camera placement, and other issues. To address this problem, we propose a deep sensor fusion framework that estimates vehicle motion using pose and uncertainty estimations from multiple onboard cameras. We extract short-term temporal feature representations from a set of consecutive images using a hybrid CNN-RNN model. We then utilize a mixture density network (MDN) to estimate the 6-DoF pose as a mixture of distributions, and a fusion module to estimate the final pose using the MDN outputs from the multiple cameras. We evaluate our approach on the publicly available, large-scale autonomous vehicle dataset nuScenes. The results show that the proposed fusion approach surpasses the state of the art and provides robust estimates and accurate trajectories compared with single-camera-based estimation.
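A hedged sketch of the MDN idea follows: a small head mapping a fused feature vector to a Gaussian mixture over the 6-DoF pose, trained with a mixture negative log-likelihood. Layer sizes, diagonal covariances, and all names are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class PoseMDN(nn.Module):
    """A small mixture density head turning a feature vector into a
    K-component Gaussian mixture over a 6-DoF pose."""

    def __init__(self, feat_dim=512, n_comp=5, pose_dim=6):
        super().__init__()
        self.n_comp, self.pose_dim = n_comp, pose_dim
        self.pi = nn.Linear(feat_dim, n_comp)                  # mixture weights
        self.mu = nn.Linear(feat_dim, n_comp * pose_dim)       # component means
        self.log_sigma = nn.Linear(feat_dim, n_comp * pose_dim)

    def forward(self, feat):
        B = feat.shape[0]
        log_pi = torch.log_softmax(self.pi(feat), dim=-1)
        mu = self.mu(feat).view(B, self.n_comp, self.pose_dim)
        sigma = self.log_sigma(feat).view(B, self.n_comp, self.pose_dim).exp()
        return log_pi, mu, sigma

def mdn_nll(log_pi, mu, sigma, pose):
    """Negative log-likelihood of a ground-truth pose under the mixture."""
    dist = torch.distributions.Normal(mu, sigma)
    log_prob = dist.log_prob(pose.unsqueeze(1)).sum(-1)        # (B, K)
    return -torch.logsumexp(log_pi + log_prob, dim=-1).mean()
```

The per-camera mixture variances are what the fusion module can weight: a camera whose view is degraded tends to produce wider components and thus contributes less to the fused pose.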
Supervised deep learning methods for depth estimation can achieve good performance when trained on high-quality ground truth such as LiDAR data. However, LiDAR can only generate sparse 3D maps, which causes information loss; high-quality per-pixel ground-truth depth data is hard to acquire. To overcome this limitation, we propose a novel approach that combines the promising plane and parallax geometry pipeline with the structural information of depth and a U-Net supervised learning network, which leads to quantitative and qualitative improvements compared with existing popular learning-based methods. In particular, the model is evaluated on two large-scale and challenging datasets, the KITTI Vision Benchmark and the Cityscapes dataset, and achieves the best performance in terms of relative error. Compared with a purely depth-supervised model, our model performs impressively on depth prediction of thin objects and edges, and it is more robust than structure-prediction baselines.
Combining simultaneous localization and mapping (SLAM) estimation and dynamic scene modeling can efficiently enable robot autonomy in dynamic environments. Robot path planning and obstacle avoidance tasks rely on accurate estimation of the motion of dynamic objects in the scene. This paper presents VDO-SLAM, a robust visual dynamic object-aware SLAM system that exploits semantic information to enable accurate motion estimation and tracking of dynamic rigid objects in the scene, without any prior knowledge of object shapes or geometric models. The proposed approach identifies and tracks the dynamic objects and static structures in the environment and integrates this information into a unified SLAM framework. This results in highly accurate estimates of the robot trajectory and the full SE(3) motions of the objects, as well as a spatio-temporal map of the environment. The system is able to extract linear velocity estimates from the objects' SE(3) motions, providing an important functionality for navigation in complex dynamic environments. We demonstrate the performance of the proposed system on a number of real indoor and outdoor datasets, and the results show consistent and substantial improvements over state-of-the-art algorithms. An open-source version of the source code is available.
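As a rough illustration of extracting linear velocity from an estimated SE(3) object motion, consider the sketch below; the motion matrix, the representative point, and the finite-difference form are illustrative assumptions rather than the system's actual computation.

```python
import numpy as np

def object_linear_velocity(H, p_prev, dt):
    """Finite-difference linear velocity of a tracked object.

    H      : (4,4) homogeneous SE(3) motion of the object over one frame
             interval, in the same frame as p_prev
    p_prev : (3,) a representative object point (e.g., its centroid)
    dt     : frame interval [s]
    """
    p_new = (H @ np.append(p_prev, 1.0))[:3]   # move the point by the motion
    return (p_new - p_prev) / dt
```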
Self-supervised deep learning methods for joint depth and ego-motion estimation can yield accurate trajectories without requiring ground-truth training data. However, as they typically use photometric losses, their performance can degrade significantly when the assumptions these losses make (e.g., temporal illumination consistency, a static scene, and the absence of noise and occlusions) are violated. This limits their use for, e.g., nighttime sequences, which tend to contain many point light sources (including on dynamic objects) and low signal-to-noise ratio (SNR) in darker image regions. In this paper, we show how to use a combination of three techniques to allow the existing photometric losses to work for both day and nighttime images. First, we introduce a per-pixel neural intensity transformation to compensate for the light changes that occur between successive frames. Second, we predict a per-pixel residual flow map that we use to correct the reprojection correspondences induced by the estimated ego-motion and depth from the network. Third, we denoise the training images to improve the robustness and accuracy of our approach. These changes allow us to train a single model for both day and nighttime images without needing separate encoders or extra feature networks like existing methods. We perform extensive experiments and ablation studies on the challenging Oxford RobotCar dataset to demonstrate the efficacy of our approach for both day and nighttime sequences.
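A small sketch of the first two techniques, under illustrative names and shapes: a per-pixel affine intensity compensation and a residual-flow correction of the reprojection sampling grid. Both the gain/offset maps and the residual flow would be network outputs; this is not the paper's code.

```python
import torch

def illumination_and_flow_correction(warped, a, b, grid, residual_flow):
    """Apply two corrections before computing the photometric loss.

    warped        : (B,3,H,W) source image warped into the target view
    a, b          : (B,1,H,W) per-pixel gain and offset maps for the
                    intensity transformation I' = a * I + b
    grid          : (B,H,W,2) reprojection sampling grid in [-1,1]
    residual_flow : (B,2,H,W) predicted correspondence correction, in pixels
    """
    B, _, H, W = warped.shape
    # 1) Per-pixel affine intensity compensation for illumination changes.
    compensated = (a * warped + b).clamp(0.0, 1.0)
    # 2) Correct depth/ego-motion reprojection errors with the residual flow,
    #    converted from pixel units to the grid's normalized coordinates.
    offset = residual_flow.permute(0, 2, 3, 1)                  # (B,H,W,2)
    offset = 2 * offset / torch.tensor([W - 1, H - 1], dtype=warped.dtype)
    return compensated, grid + offset
```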
Although cameras are ubiquitous, robotic platforms typically rely on active sensors like LiDAR for direct 3D perception. In this work, we propose a novel self-supervised monocular depth estimation method combining geometry with a new deep network, PackNet, learned only from unlabeled monocular videos. Our architecture leverages novel symmetrical packing and unpacking blocks to jointly learn to compress and decompress detail-preserving representations using 3D convolutions. Although self-supervised, our method outperforms other self, semi, and fully supervised methods on the KITTI benchmark. The 3D inductive bias in PackNet enables it to scale with input resolution and number of parameters without overfitting, generalizing better on out-of-domain data such as the NuScenes dataset. Furthermore, it does not require large-scale supervised pretraining on ImageNet and can run in real-time. Finally, we release DDAD (Dense Depth for Automated Driving), a new urban driving dataset with more challenging and accurate depth evaluation, thanks to longer-range and denser ground-truth depth generated from high-density LiDARs mounted on a fleet of self-driving cars operating world-wide.
We propose GeoNet, a jointly unsupervised learning framework for monocular depth, optical flow and egomotion estimation from videos. The three components are coupled by the nature of 3D scene geometry, jointly learned by our framework in an end-to-end manner. Specifically, geometric relationships are extracted over the predictions of individual modules and then combined as an image reconstruction loss, reasoning about static and dynamic scene parts separately. Furthermore, we propose an adaptive geometric consistency loss to increase robustness towards outliers and non-Lambertian regions, which resolves occlusions and texture ambiguities effectively. Experimentation on the KITTI driving dataset reveals that our scheme achieves state-of-the-art results in all of the three tasks, performing better than previously unsupervised methods and comparably with supervised ones.
Remarkable progress has been made in self-supervised monocular depth estimation (SS-MDE) by exploring cross-view consistency, e.g., photometric consistency and 3D point cloud consistency. However, these are very vulnerable to illumination differences, occlusions, texture-less regions, and moving objects, making them not robust enough to deal with various scenes. To address this challenge, we study two kinds of robust cross-view consistency in this paper. First, the spatial offset field between adjacent frames is obtained by reconstructing the reference frame from its neighbors via deformable alignment, which is used to align the temporal depth features via a depth feature alignment (DFA) loss. Second, the 3D point clouds of each reference frame and its nearby frames are calculated and transformed into voxel space, where the point density in each voxel is calculated and aligned via a voxel density alignment (VDA) loss. In this way, we exploit the temporal coherence in both the depth feature space and the 3D voxel space for SS-MDE, shifting the "point-to-point" alignment paradigm to a "region-to-region" one. Compared with the photometric consistency loss as well as the rigid point cloud alignment loss, the proposed DFA and VDA losses are more robust, owing to the strong representation power of deep features and the high tolerance of voxel density to the aforementioned challenges. Experimental results on several outdoor benchmarks show that our method outperforms current state-of-the-art techniques. Extensive ablation studies and analysis validate the effectiveness of the proposed losses, especially in challenging scenes. The code and models are available at https://github.com/sunnyhelen/rcvc-depth.
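To illustrate the VDA measurement, the following sketch voxelizes two point clouds and compares their normalized per-voxel densities. The grid size and extent are illustrative, and a differentiable soft assignment would replace the hard histogram in actual training.

```python
import torch

def voxel_density(points, grid=(32, 32, 32), extent=50.0):
    """Histogram a point cloud (N,3) into a voxel grid and return normalized
    per-voxel densities (a sketch of the quantity behind the VDA loss)."""
    g = torch.tensor(grid)
    idx = ((points / extent + 0.5) * g.to(points.dtype)).long()
    mask = ((idx >= 0) & (idx < g)).all(dim=1)      # keep in-bounds points
    idx = idx[mask]
    flat = (idx[:, 0] * grid[1] + idx[:, 1]) * grid[2] + idx[:, 2]
    counts = torch.bincount(flat, minlength=grid[0] * grid[1] * grid[2])
    density = counts.to(points.dtype)
    return density / density.sum().clamp(min=1.0)

def vda_loss(pts_ref, pts_src_in_ref):
    """L1 difference between voxel densities of the reference cloud and a
    neighboring cloud transformed into the reference frame."""
    return (voxel_density(pts_ref) - voxel_density(pts_src_in_ref)).abs().sum()
```

Comparing densities per region rather than matching individual points is exactly what tolerates moving objects and noisy depth better than rigid point cloud alignment.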
Per-pixel ground-truth depth data is challenging to acquire at scale. To overcome this limitation, self-supervised learning has emerged as a promising alternative for training models to perform monocular depth estimation. In this paper, we propose a set of improvements, which together result in both quantitatively and qualitatively improved depth maps compared to competing self-supervised methods. Research on self-supervised monocular training usually explores increasingly complex architectures, loss functions, and image formation models, all of which have recently helped to close the gap with fully-supervised methods. We show that a surprisingly simple model, and associated design choices, lead to superior predictions. In particular, we propose (i) a minimum reprojection loss, designed to robustly handle occlusions, (ii) a full-resolution multi-scale sampling method that reduces visual artifacts, and (iii) an auto-masking loss to ignore training pixels that violate camera motion assumptions. We demonstrate the effectiveness of each component in isolation, and show high quality, state-of-the-art results on the KITTI benchmark.
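The two headline ideas, minimum reprojection and auto-masking, can be sketched in a few lines. Here `photo_error` stands in for any per-pixel photometric error (e.g., an L1/SSIM mix); the formulation is an illustration, not the authors' code.

```python
import torch

def min_reprojection_loss(tgt, warped_srcs, srcs, photo_error):
    """Per-pixel minimum reprojection loss with auto-masking.

    tgt         : (B,3,H,W) target image
    warped_srcs : list of source images warped into the target view
    srcs        : the same source images, unwarped (for auto-masking)
    photo_error : callable returning a (B,1,H,W) per-pixel error map
    """
    # Minimum over source views handles occlusions: a pixel occluded in one
    # source view is usually visible in another.
    reproj = torch.stack([photo_error(tgt, w) for w in warped_srcs], 0).min(0).values

    # Auto-masking: ignore pixels where the *unwarped* source already matches
    # the target better (static camera, or objects moving with the camera).
    identity = torch.stack([photo_error(tgt, s) for s in srcs], 0).min(0).values
    mask = (reproj < identity).float()
    return (reproj * mask).sum() / mask.sum().clamp(min=1)
```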
Learning-based visual ego-motion estimation is promising yet not ready for navigating agile mobile robots in the real world. In this paper, we propose CUAHN-VIO, a robust and efficient monocular visual-inertial odometry (VIO) tailored for micro aerial vehicles (MAVs) equipped with a downward-facing camera. The vision frontend is a content-and-uncertainty-aware homography network (CUAHN) that is robust to non-homography image content and failure cases of network prediction. It not only predicts the homography transformation but also estimates its uncertainty. The training is self-supervised, so it does not require ground truth, which is often hard to obtain. The network shows good generalization and can be deployed "plug-and-play" in new environments without fine-tuning. A lightweight extended Kalman filter (EKF) serves as the VIO backend and uses the mean prediction and variance estimate from the network for visual measurement updates. CUAHN-VIO is evaluated on a high-speed public dataset and shows accuracy competitive with state-of-the-art (SOTA) VIO approaches. Thanks to its robustness to motion blur, low network inference time (~23 ms), and stable processing latency (~26 ms), CUAHN-VIO successfully runs onboard an NVIDIA Jetson TX2 embedded processor to navigate a fast autonomous MAV.
Self-supervised monocular methods can efficiently learn depth information for weakly textured surfaces or reflective objects. However, the depth accuracy is limited due to the inherent ambiguity in monocular geometric modeling. In contrast, multi-frame depth estimation methods improve depth accuracy thanks to the success of multi-view stereo (MVS), which directly makes use of geometric constraints. Unfortunately, MVS often suffers from texture-less regions, non-Lambertian surfaces, and moving objects, especially in real-world video sequences without known camera motion and depth supervision. Therefore, we propose MOVEDepth, which exploits monocular cues and velocity guidance to improve multi-frame depth learning. Unlike existing methods that enforce consistency between MVS depth and monocular depth, MOVEDepth boosts multi-frame depth learning by directly addressing the inherent problems of MVS. The key of our approach is to utilize monocular depth as a geometric prior to construct the MVS cost volume, and to adjust the depth candidates of the cost volume under the guidance of the predicted camera velocity. We further fuse the monocular depth and the MVS depth by learning the uncertainty in the cost volume, which leads to depth estimation that is robust to the ambiguity of multi-view geometry. Extensive experiments show that MOVEDepth achieves state-of-the-art performance: compared with Monodepth2 and PackNet, our method relatively improves depth accuracy by 20% and 19.8% on the KITTI benchmark. MOVEDepth also generalizes to the more challenging DDAD benchmark, where it relatively surpasses prior art by 7.2%. The code is available at https://github.com/jeffwang987/movedepth.
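A hedged sketch of the velocity-guidance idea: build depth hypotheses for the cost volume around the monocular prior, narrowing the search range as the inter-frame baseline grows. The scaling rule and all constants below are illustrative assumptions, not the paper's exact design.

```python
import torch

def depth_candidates(mono_depth, speed, dt, n=16, base_ratio=0.2):
    """Per-pixel depth hypotheses for an MVS cost volume.

    mono_depth : (B,1,H,W) monocular depth prior
    speed      : (B,) estimated camera speed [m/s]
    dt         : frame interval [s]
    Returns    : (B,n,H,W) depth candidates centered on the prior
    """
    # Larger baselines (faster motion) make triangulation better conditioned,
    # so the search range around the monocular prior can be narrowed.
    baseline = (speed * dt).view(-1, 1, 1, 1)
    ratio = base_ratio / (1.0 + baseline)                 # (B,1,1,1)
    steps = torch.linspace(-1.0, 1.0, n).view(1, n, 1, 1)
    return mono_depth * (1.0 + ratio * steps)             # (B,n,H,W)
```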
Single-view depth estimation using CNNs trained from unlabelled videos has shown significant promise. However, excellent results have mostly been obtained in street-scene driving scenarios, and such methods often fail in other settings, particularly indoor videos taken by handheld devices. In this work, we establish that the complex ego-motions exhibited in handheld settings are a critical obstacle for learning depth. Our fundamental analysis suggests that the rotation behaves as noise during training, as opposed to the translation (baseline), which provides supervision signals. To address the challenge, we propose a data pre-processing method that rectifies training images by removing their relative rotations for effective learning. The significantly improved performance validates our motivation. Toward end-to-end learning without requiring pre-processing, we further propose an auto-rectify network with novel loss functions that can automatically learn to rectify images during training. Consequently, our results outperform the previous unsupervised SOTA methods by a large margin on the challenging NYUv2 dataset. We also demonstrate the generalization of our trained model on ScanNet and Make3D, and the universality of our proposed learning method on the 7-Scenes and KITTI datasets.
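The rectification idea can be sketched directly: a pure rotation between views induces a pixel homography, so warping the source frame with its inverse removes the rotation and keeps only the depth-informative parallax. R_rel here would come from IMU readings or a pose estimate; the code is an illustration of the pre-processing idea, not the paper's implementation.

```python
import cv2
import numpy as np

def remove_relative_rotation(img_src, R_rel, K):
    """Warp a source frame to undo its rotation w.r.t. the target frame.

    A pure rotation R_rel (target-to-source camera rotation) maps target
    pixels to source pixels via K @ R_rel @ K^{-1}; warping the source
    with the inverse homography cancels the rotation, leaving only the
    translational parallax that supervises depth.
    """
    H_derot = K @ R_rel.T @ np.linalg.inv(K)   # src -> derotated pixels
    h, w = img_src.shape[:2]
    return cv2.warpPerspective(img_src, H_derot, (w, h))
```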
Photometric differences are widely used as supervision signals to train neural networks for estimating depth and camera pose from unlabeled monocular videos. However, this approach is detrimental to model optimization because occlusions and moving objects in a scene violate the underlying static-scene assumption. In addition, pixels in textureless regions or less discriminative pixels hinder model training. To solve these problems, in this paper, we first deal with moving objects and occlusions utilizing the differences of the flow fields and depth structure generated by affine transformation and view synthesis, respectively. Second, we mitigate the effect of textureless regions on model optimization by measuring differences between features with more semantic and contextual information, without adding networks. In addition, although the bidirectionality component is used in each sub-objective function, a pair of images is reasoned about only once, which helps reduce overhead. Extensive experiments and visual analysis demonstrate the effectiveness of the proposed method, which outperforms existing state-of-the-art self-supervised methods under the same conditions and without introducing additional auxiliary information.
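As a concrete illustration of the flow-difference cue, here is a sketch that masks pixels whose full optical flow disagrees with the rigid flow implied by depth and ego-motion; the threshold and names are illustrative assumptions, not the paper's formulation.

```python
import torch

def moving_object_mask(rigid_flow, full_flow, thresh=3.0):
    """Flag pixels whose observed flow deviates from the rigid flow implied
    by depth and ego-motion; such pixels likely belong to moving objects or
    occlusions and can be down-weighted in the photometric loss.

    rigid_flow, full_flow : (B,2,H,W) flow fields in pixel units
    thresh                : deviation threshold in pixels (illustrative)
    """
    diff = (full_flow - rigid_flow).norm(dim=1, keepdim=True)  # (B,1,H,W)
    return (diff < thresh).float()                              # 1 = static/consistent
```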