Learning based methods have shown very promising results for the task of depth estimation in single images. However, most existing approaches treat depth prediction as a supervised regression problem and, as a result, require vast quantities of corresponding ground truth depth data for training. Just recording quality depth data in a range of environments is a challenging problem. In this paper, we innovate beyond existing approaches, replacing the use of explicit depth data during training with easier-to-obtain binocular stereo footage. We propose a novel training objective that enables our convolutional neural network to learn to perform single image depth estimation, despite the absence of ground truth depth data. Exploiting epipolar geometry constraints, we generate disparity images by training our network with an image reconstruction loss. We show that solving for image reconstruction alone results in poor quality depth images. To overcome this problem, we propose a novel training loss that enforces consistency between the disparities produced relative to both the left and right images, leading to improved performance and robustness compared to existing approaches. Our method produces state-of-the-art results for monocular depth estimation on the KITTI driving dataset, even outperforming supervised methods that have been trained with ground truth depth.
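To make the left-right consistency idea concrete, here is a minimal PyTorch sketch (not the authors' code; the warping function, sign conventions, and loss weighting are illustrative assumptions): the disparity predicted for one view is used to sample the disparity predicted for the other, and the two are encouraged to agree.

```python
import torch
import torch.nn.functional as F

def warp_horizontal(img, disp):
    """Sample `img` (B,C,H,W) at pixels shifted horizontally by `disp`
    (B,1,H,W), with disparity expressed as a fraction of image width."""
    b, _, h, w = img.shape
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, h),
                            torch.linspace(-1, 1, w), indexing="ij")
    grid = torch.stack((xs, ys), dim=-1).unsqueeze(0).repeat(b, 1, 1, 1)
    # Shift the normalized x-coordinate by the predicted disparity
    # (the factor 2 accounts for the [-1, 1] coordinate range).
    grid[..., 0] = grid[..., 0] + 2.0 * disp.squeeze(1)
    return F.grid_sample(img, grid, align_corners=True)

def lr_consistency_loss(disp_left, disp_right):
    # Project each disparity map into the other view and penalize mismatch;
    # the signs assume a rectified pair with the right camera to the right.
    right_in_left = warp_horizontal(disp_right, -disp_left)
    left_in_right = warp_horizontal(disp_left, disp_right)
    return ((disp_left - right_in_left).abs().mean()
            + (disp_right - left_in_right).abs().mean())
```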
Per-pixel ground-truth depth data is challenging to acquire at scale. To overcome this limitation, self-supervised learning has emerged as a promising alternative for training models to perform monocular depth estimation. In this paper, we propose a set of improvements, which together result in both quantitatively and qualitatively improved depth maps compared to competing self-supervised methods. Research on self-supervised monocular training usually explores increasingly complex architectures, loss functions, and image formation models, all of which have recently helped to close the gap with fully-supervised methods. We show that a surprisingly simple model, and associated design choices, lead to superior predictions. In particular, we propose (i) a minimum reprojection loss, designed to robustly handle occlusions, (ii) a full-resolution multi-scale sampling method that reduces visual artifacts, and (iii) an auto-masking loss to ignore training pixels that violate camera motion assumptions. We demonstrate the effectiveness of each component in isolation, and show high quality, state-of-the-art results on the KITTI benchmark.
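The minimum-reprojection and auto-masking components lend themselves to a very small implementation. The sketch below is a paraphrase under assumed tensor shapes, not the released code; it takes photometric errors already computed for each warped source frame, plus the same errors for the unwarped frames:

```python
import torch

def min_reprojection_loss(reproj_errors, identity_errors):
    """reproj_errors / identity_errors: lists of (B,1,H,W) photometric error
    maps, one per source frame (warped and unwarped, respectively)."""
    reproj = torch.cat(reproj_errors, dim=1)      # (B,S,H,W)
    identity = torch.cat(identity_errors, dim=1)  # (B,S,H,W)
    # Per-pixel minimum over source views: a pixel occluded in one source
    # view is usually visible in another, so the min is robust to occlusion.
    combined = torch.cat((reproj, identity), dim=1)
    min_error, idx = torch.min(combined, dim=1)
    # Auto-masking: where an *unwarped* source already matches best, the
    # pixel likely violates the camera-motion assumption (e.g. it moves
    # with the camera or the scene is static) and is excluded from the loss.
    mask = idx < reproj.shape[1]
    return min_error[mask].mean()
```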
This paper presents an open and comprehensive framework to systematically evaluate state-of-the-art contributions to self-supervised monocular depth estimation. This includes pretraining, backbones, architectural design choices, and loss functions. Many papers in this field claim novelty in architecture design or loss formulation, yet simply updating the backbone of a historical system yields a 25% relative improvement, making it outperform most existing systems. Systematically evaluating papers in this field is not straightforward: the need to compare like with like against previous papers means that long-standing errors in the evaluation protocol are ubiquitous in the field, and many papers have likely been optimized not only for specific datasets but also for errors in the data and evaluation criteria. To aid future research in this area, we release a modular codebase that makes it easy to evaluate alternative design decisions against corrected data and evaluation criteria. We re-implement, validate, and re-evaluate 16 state-of-the-art contributions and introduce a new dataset (SYNS-Patches) containing dense outdoor depth maps for a variety of natural and urban scenes. This allows the computation of informative metrics in complex regions such as depth boundaries.
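For reference, evaluation protocols of this kind typically report metrics along the following lines; the snippet below is a common formulation (standard thresholds and names, not necessarily the exact corrected protocol the paper releases):

```python
import numpy as np

def depth_metrics(pred, gt):
    """pred, gt: 1-D arrays of valid depth values (same validity mask)."""
    abs_rel = np.mean(np.abs(pred - gt) / gt)
    rmse = np.sqrt(np.mean((pred - gt) ** 2))
    ratio = np.maximum(pred / gt, gt / pred)
    delta1 = np.mean(ratio < 1.25)  # fraction of pixels within 25% of gt
    return {"AbsRel": abs_rel, "RMSE": rmse, "delta<1.25": delta1}
```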
Although cameras are ubiquitous, robotic platforms typically rely on active sensors like LiDAR for direct 3D perception. In this work, we propose a novel self-supervised monocular depth estimation method combining geometry with a new deep network, PackNet, learned only from unlabeled monocular videos. Our architecture leverages novel symmetrical packing and unpacking blocks to jointly learn to compress and decompress detail-preserving representations using 3D convolutions. Although self-supervised, our method outperforms other self, semi, and fully supervised methods on the KITTI benchmark. The 3D inductive bias in PackNet enables it to scale with input resolution and number of parameters without overfitting, generalizing better on out-of-domain data such as the NuScenes dataset. Furthermore, it does not require large-scale supervised pretraining on ImageNet and can run in real-time. Finally, we release DDAD (Dense Depth for Automated Driving), a new urban driving dataset with more challenging and accurate depth evaluation, thanks to longer-range and denser ground-truth depth generated from high-density LiDARs mounted on a fleet of self-driving cars operating world-wide.
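The packing idea can be pictured as space-to-depth folding followed by a learned 3D-convolutional compression. The block below is a rough sketch of that structure under assumed layer sizes and channel counts, not the published PackNet definition:

```python
import torch
import torch.nn as nn

class PackingBlock(nn.Module):
    """Fold spatial detail into channels, then compress with a 3D conv."""
    def __init__(self, in_ch, out_ch, r=2, d=8):
        super().__init__()
        self.unshuffle = nn.PixelUnshuffle(r)  # (C,H,W) -> (C*r^2, H/r, W/r)
        self.conv3d = nn.Conv3d(1, d, kernel_size=3, padding=1)
        self.conv2d = nn.Conv2d(in_ch * r * r * d, out_ch, 3, padding=1)

    def forward(self, x):
        x = self.unshuffle(x)                 # no detail lost, unlike pooling
        b, c, h, w = x.shape
        x = self.conv3d(x.unsqueeze(1))       # treat channels as a 3rd axis
        x = x.reshape(b, c * self.conv3d.out_channels, h, w)
        return self.conv2d(x)

print(PackingBlock(16, 32)(torch.randn(1, 16, 64, 64)).shape)  # (1, 32, 32, 32)
```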
We present an unsupervised learning framework for the task of monocular depth and camera motion estimation from unstructured video sequences. In common with recent work [10,14,16], we use an end-to-end learning approach with view synthesis as the supervisory signal. In contrast to the previous work, our method is completely unsupervised, requiring only monocular video sequences for training. Our method uses single-view depth and multiview pose networks, with a loss based on warping nearby views to the target using the computed depth and pose. The networks are thus coupled by the loss during training, but can be applied independently at test time. Empirical evaluation on the KITTI dataset demonstrates the effectiveness of our approach: 1) monocular depth performs comparably with supervised methods that use either ground-truth pose or depth for training, and 2) pose estimation performs favorably compared to established SLAM systems under comparable input settings.
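The view-synthesis supervision boils down to back-projecting target pixels with the predicted depth, moving them with the predicted pose, and sampling the source frame. A condensed sketch follows (intrinsics handling, shapes, and clamping are illustrative assumptions, not the released code):

```python
import torch
import torch.nn.functional as F

def view_synthesis(src, depth, T, K):
    """src: (B,3,H,W) source frame; depth: (B,1,H,W) target-view depth;
    T: (B,4,4) target-to-source pose; K: (B,3,3) intrinsics."""
    b, _, h, w = src.shape
    ys, xs = torch.meshgrid(torch.arange(h, dtype=torch.float32),
                            torch.arange(w, dtype=torch.float32), indexing="ij")
    pix = torch.stack((xs, ys, torch.ones_like(xs))).reshape(1, 3, -1)
    cam = torch.inverse(K) @ pix * depth.reshape(b, 1, -1)  # back-project
    cam = torch.cat((cam, torch.ones(b, 1, h * w)), dim=1)  # homogeneous
    proj = K @ (T @ cam)[:, :3]                             # into source view
    uv = proj[:, :2] / proj[:, 2:3].clamp(min=1e-6)
    grid = torch.stack((2 * uv[:, 0] / (w - 1) - 1,
                        2 * uv[:, 1] / (h - 1) - 1), dim=-1).reshape(b, h, w, 2)
    return F.grid_sample(src, grid, align_corners=True)

# Training minimizes a photometric error between the target frame and this
# warped source; the depth and pose networks are coupled only through the loss.
```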
A significant weakness of most current deep Convolutional Neural Networks is the need to train them using vast amounts of manually labelled data. In this work we propose an unsupervised framework to learn a deep convolutional neural network for single view depth prediction, without requiring a pre-training stage or annotated ground-truth depths. We achieve this by training the network in a manner analogous to an autoencoder. At training time we consider a pair of images, source and target, with small, known camera motion between the two, such as a stereo pair. We train the convolutional encoder for the task of predicting the depth map for the source image. To do so, we explicitly generate an inverse warp of the target image using the predicted depth and known inter-view displacement, to reconstruct the source image; the photometric error in the reconstruction is the reconstruction loss for the encoder. The acquisition of this training data is considerably simpler than for equivalent systems, requiring no manual annotation, nor calibration of depth sensor to camera. We show that our network trained on less than half of the KITTI dataset gives comparable performance to that of the state-of-the-art supervised methods for single view depth estimation. Find the model and other information on the project GitHub page: https://github.com/Ravi-Garg/Unsupervised_Depth_Estimation
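With a known stereo baseline, the predicted depth converts to a horizontal disparity (d = f * B / depth) and the reconstruction reduces to a 1-D warp. A toy sketch under assumed conventions (rectified pair; disparity sign depends on the rig layout), not the paper's implementation:

```python
import torch
import torch.nn.functional as F

def reconstruct_source(target, depth, focal, baseline):
    """target: (B,3,H,W) other stereo view; depth: (B,1,H,W) predicted for
    the source view; focal in pixels, baseline in meters (both known)."""
    b, _, h, w = target.shape
    disp = focal * baseline / depth.clamp(min=1e-3)  # disparity in pixels
    ys, xs = torch.meshgrid(torch.arange(h, dtype=torch.float32),
                            torch.arange(w, dtype=torch.float32), indexing="ij")
    x_src = xs.unsqueeze(0) - disp.squeeze(1)        # 1-D horizontal shift
    grid = torch.stack((2 * x_src / (w - 1) - 1,
                        (2 * ys / (h - 1) - 1).expand_as(x_src)), dim=-1)
    return F.grid_sample(target, grid, align_corners=True)

# photometric_loss = (reconstruct_source(...) - source).abs().mean()
```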
Display technologies have evolved over the years, and developing practical HDR capture, processing, and display solutions is crucial for taking 3D technology to the next level. Depth estimation from multi-exposure stereo image sequences is an important task for developing cost-effective 3D HDR video content. In this paper, we develop a novel deep architecture for multi-exposure stereo depth estimation. The proposed architecture has two novel components. First, the stereo matching technique used in conventional stereo depth estimation is modified: for the stereo depth estimation part of our architecture, a mono-to-stereo transfer learning approach is deployed. The proposed formulation circumvents the cost-volume construction requirement, which is replaced by a re-encoding-based mono encoder CNN with different weights for feature fusion, and EfficientNet-based blocks are used to learn disparity. Second, we combine the disparity maps obtained from the stereo images at different exposure levels using a robust disparity feature fusion approach: the disparity maps obtained at different exposures are merged using weight maps computed for different quality measures. The final predicted disparity map is more robust and retains the best features that preserve depth discontinuities. The proposed CNN offers the flexibility to train with standard dynamic range stereo data or with multi-exposure low dynamic range stereo sequences. In terms of performance, the proposed model surpasses state-of-the-art monocular and stereo depth estimation methods, both quantitatively and qualitatively, on the challenging Scene Flow and differently exposed Middlebury stereo datasets. The architecture performs exceedingly well in complex natural scenes, demonstrating its usefulness for various 3D HDR applications.
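The exposure-level fusion step reduces to a per-pixel weighted average of disparity maps. In the sketch below, the quality measures behind the weight maps (e.g. well-exposedness or local contrast) are assumptions, not the paper's exact choices:

```python
import torch

def fuse_disparities(disps, weights, eps=1e-6):
    """disps, weights: lists of (B,1,H,W) tensors, one pair per exposure.
    Weights might encode e.g. well-exposedness or local contrast."""
    num = sum(d * w for d, w in zip(disps, weights))
    den = sum(weights) + eps
    return num / den  # per-pixel weighted average disparity
```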
Modern computer vision has moved beyond the domain of internet photo collections and into the physical world, guiding camera-equipped robots and autonomous cars through unstructured environments. To enable these embodied agents to interact with real-world objects, cameras are increasingly being used as depth sensors, reconstructing the environment for a variety of downstream reasoning tasks. Machine-learning-aided depth perception, or depth estimation, predicts the distance of each pixel in an image. While impressive progress has been made in depth estimation, significant challenges remain: (1) ground-truth depth labels are difficult to collect at scale, (2) camera information is typically assumed to be known but is often unreliable, and (3) restrictive camera assumptions are common, even though a wide variety of camera types and lenses are used in practice. In this thesis, we focus on relaxing these assumptions and describe contributions toward the ultimate goal of turning cameras into truly general-purpose depth sensors.
Following recent advances in novel view synthesis, we propose its application to improve monocular depth estimation. In particular, we propose a novel training method split into three main steps. First, the prediction results of a monocular depth network are warped to an additional viewpoint. Second, we apply an additional image synthesis network, which corrects and improves the quality of the warped RGB image. The output of this network is required to look as similar as possible to the ground-truth view, by minimizing the pixel-wise RGB reconstruction error. Third, we reapply the same monocular depth estimation onto the synthesized second viewpoint and ensure that the depth predictions are consistent with the associated ground-truth depth. Experimental results prove that our method achieves state-of-the-art or comparable performance on the KITTI and NYU-Depth-v2 datasets with a lightweight and simple vanilla U-Net architecture.
Self-supervised learning has shown very promising results for monocular depth estimation. Scene structure and local details are both important clues for high-quality depth estimation. Recent works suffer from a lack of explicit modeling of scene structure and proper handling of detail information, which leads to a performance bottleneck and blurry artifacts in predicted results. In this paper, we propose the Channel-wise Attention-based Depth Estimation Network (CADepth-Net) with two effective contributions: 1) The structure perception module employs a self-attention mechanism to capture long-range dependencies and aggregates discriminative features in the channel dimension, explicitly enhancing the perception of scene structure and obtaining better scene understanding and richer feature representations. 2) The detail emphasis module re-calibrates channel-wise feature maps and selectively emphasizes informative features, aiming to highlight crucial local detail information and fuse features at different levels more efficiently, resulting in more precise and sharper depth predictions. Furthermore, extensive experiments validate the effectiveness of our method and show that our model achieves state-of-the-art results on the KITTI benchmark and the Make3D dataset.
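The detail emphasis module described above amounts to channel-wise re-calibration. A squeeze-and-excitation-style gate gives the flavor; the published module may differ in its exact layers:

```python
import torch
import torch.nn as nn

class ChannelRecalibration(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),  # squeeze: one descriptor per channel
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid())

    def forward(self, x):
        return x * self.gate(x)  # selectively emphasize informative channels
```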
As a fundamental component of many autonomous driving and robotic activities such as ego-motion estimation, obstacle avoidance, and scene understanding, monocular depth estimation (MDE) has attracted great attention from the computer vision and robotics communities. Over the past decades, a large number of methods have been developed. However, to the best of our knowledge, there is no comprehensive survey of MDE. This paper aims to bridge this gap by reviewing 197 relevant articles published between 1970 and 2021. In particular, we provide a comprehensive survey of MDE covering various methods, introduce popular performance evaluation metrics, and summarize publicly available datasets. We also summarize available open-source implementations of some representative methods and compare their performance. Furthermore, we review the applications of MDE in some important robotic tasks. Finally, we conclude this paper by presenting some promising future research directions. It is expected that this survey will help readers navigate this research field.
We present a novel approach for unsupervised learning of depth and ego-motion from monocular video. Unsupervised learning removes the need for separate supervisory signals (depth or ego-motion ground truth, or multi-view video). Prior work in unsupervised depth learning uses pixel-wise or gradient-based losses, which only consider pixels in small local neighborhoods. Our main contribution is to explicitly consider the inferred 3D geometry of the whole scene, and enforce consistency of the estimated 3D point clouds and ego-motion across consecutive frames. This is a challenging task and is solved by a novel (approximate) backpropagation algorithm for aligning 3D structures. We combine this novel 3D-based loss with 2D losses based on photometric quality of frame reconstructions using estimated depth and ego-motion from adjacent frames. We also incorporate validity masks to avoid penalizing areas in which no useful information exists. We test our algorithm on the KITTI dataset and on a video dataset captured on an uncalibrated mobile phone camera. Our proposed approach consistently improves depth estimates on both datasets, and outperforms the state-of-the-art for both depth and ego-motion. Because we only require a simple video, learning depth and ego-motion on large and varied datasets becomes possible. We demonstrate this by training on the low quality uncalibrated video dataset and evaluating on KITTI, ranking among top performing prior methods which are trained on KITTI itself.
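The 3D loss can be pictured as follows: back-project both depth maps to point clouds, move one into the other's frame with the estimated ego-motion, and penalize the residual. The sketch below assumes per-pixel correspondence for brevity, whereas the paper aligns the clouds with an ICP-like procedure:

```python
import torch

def backproject(depth, K_inv):
    """depth: (B,1,H,W) -> points (B,3,H*W) in camera coordinates."""
    b, _, h, w = depth.shape
    ys, xs = torch.meshgrid(torch.arange(h, dtype=torch.float32),
                            torch.arange(w, dtype=torch.float32), indexing="ij")
    pix = torch.stack((xs, ys, torch.ones_like(xs))).reshape(1, 3, -1)
    return K_inv @ pix * depth.reshape(b, 1, -1)

def point_cloud_loss(depth_t, depth_t1, T, K_inv):
    """T: (B,4,4) estimated ego-motion from frame t to frame t+1."""
    p = backproject(depth_t, K_inv)
    p = torch.cat((p, torch.ones_like(p[:, :1])), dim=1)  # homogeneous
    p_in_t1 = (T @ p)[:, :3]
    return (p_in_t1 - backproject(depth_t1, K_inv)).abs().mean()
```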
Learning scene depth from a single image in a self-supervised manner has received much attention recently. Despite recent efforts in this field, how to learn accurate scene depth and alleviate the negative influence of occlusions on self-supervised depth estimation is still an open problem. Addressing this problem, we first empirically analyze the effects of both the continuous and discrete depth constraints, which are widely used in the training process of many existing works. Then, inspired by this empirical analysis, we propose a novel network, called OCFD-Net, to learn an occlusion-aware coarse-to-fine depth map for self-supervised monocular depth estimation. Given stereo image pairs for training, the proposed OCFD-Net not only employs a discrete depth constraint for learning a coarse-level depth map, but also employs a continuous depth constraint for learning a scene depth residual, resulting in a fine-level depth map. In addition, an occlusion-aware module is designed under the proposed OCFD-Net, which is able to improve the capability of the learned fine-level depth map for handling occlusions. Experimental results on KITTI show that the proposed method outperforms the comparative state-of-the-art methods under seven commonly used metrics in most cases. In addition, experimental results on Make3D demonstrate the effectiveness of the proposed method in terms of cross-dataset generalization ability under four commonly used metrics. The code is available at https://github.com/zm-zhou/ocfd-net_pytorch.
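One way to picture the combination of discrete and continuous constraints: a coarse depth is read out as the expectation over fixed depth bins, then refined by a continuous residual. The bin layout and residual scaling below are illustrative assumptions, not the OCFD-Net definition:

```python
import torch

def coarse_to_fine_depth(bin_logits, residual, d_min=1.0, d_max=80.0):
    """bin_logits: (B,K,H,W) scores over K discrete depth bins;
    residual: (B,1,H,W) continuous refinement, e.g. tanh-bounded."""
    k = bin_logits.shape[1]
    bins = torch.linspace(d_min, d_max, k).view(1, k, 1, 1)
    coarse = (bin_logits.softmax(dim=1) * bins).sum(dim=1, keepdim=True)
    bin_width = (d_max - d_min) / (k - 1)
    return coarse + residual * bin_width  # fine-level depth map
```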
Photometric differences are widely used as supervision signals to train neural networks for estimating depth and camera pose from unlabeled monocular videos. However, this approach is detrimental for model optimization because occlusions and moving objects in a scene violate the underlying static scenario assumption. In addition, pixels in textureless regions or less discriminative pixels hinder model training. To solve these problems, in this paper, we first deal with moving objects and occlusions utilizing the differences between the flow fields and depth structures generated by affine transformation and view synthesis, respectively. Second, we mitigate the effect of textureless regions on model optimization by measuring differences between features with more semantic and contextual information, without adding networks. In addition, although the bidirectionality component is used in each sub-objective function, a pair of images is reasoned about only once, which helps reduce overhead. Extensive experiments and visual analysis demonstrate the effectiveness of the proposed method, which outperforms existing state-of-the-art self-supervised methods under the same conditions and without introducing additional auxiliary information.
Depth estimation is a crucial step for image-guided intervention in robotic surgery and laparoscopic imaging systems. Since per-pixel depth ground truth is difficult to acquire for laparoscopic image data, supervised depth estimation is rarely applied to surgical applications. As an alternative, depth estimators trained using only synchronized stereo image pairs have been introduced. However, most recent work has focused on left-right consistency in 2D and ignored the valuable inherent 3D information about objects in real-world coordinates, meaning that the left-right 3D geometric structural consistency has not been fully exploited. To overcome this limitation, we propose M3Depth, a self-supervised depth estimator that leverages the 3D geometric structural information hidden in stereo pairs while keeping monocular inference. The method also removes, via masking, the influence of border regions unseen in at least one of the stereo images, to enhance the correspondences between the left and right images in overlapping areas. Intensive experiments show that our method outperforms previous self-supervised approaches by a large margin on both a public dataset and a newly acquired dataset, indicating good generalization across different samples and laparoscopes.
Beyond learning appearance-based features, multi-frame depth estimation improves over single-frame approaches by also exploiting the geometric relationships between images via feature matching. In this paper, we revisit feature matching for self-supervised monocular depth estimation and propose a novel transformer architecture for cost volume generation. We use depth-discretized epipolar sampling to select matching candidates, and refine the predictions through a series of self- and cross-attention layers. These layers sharpen the matching probabilities between pixel features, improving over standard similarity metrics that are prone to ambiguities and local minima. The refined cost volume is decoded into depth estimates, and the whole pipeline is trained end-to-end from videos using only a photometric objective. Experiments on the KITTI and DDAD datasets show that our DepthFormer architecture establishes a new state of the art in self-supervised monocular depth estimation, and is even competitive with highly specialized supervised single-frame architectures. We also show that our learned cross-attention network yields representations that transfer across datasets, increasing the effectiveness of training strategies. Project page: https://sites.google.com/tri.global/depthformer
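At its core, a matching cost volume of this kind scores target features against source features sampled at the reprojections of K depth hypotheses. The bare-bones sketch below omits the epipolar sampling and attention refinement layers, and the scaled dot-product score is an assumption:

```python
import torch

def cost_volume(feat_t, feat_src_sampled):
    """feat_t: (B,C,H,W) target features; feat_src_sampled: (B,K,C,H,W)
    source features warped under K candidate depths (e.g. with the
    view-synthesis warp sketched earlier)."""
    sim = (feat_t.unsqueeze(1) * feat_src_sampled).sum(dim=2)  # (B,K,H,W)
    return sim / feat_t.shape[1] ** 0.5  # scaled dot-product matching score
```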
A recent strand of work in view synthesis uses deep learning to generate multiplane images (a camera-centric, layered 3D representation) given two or more input images at known viewpoints. We apply this representation to single-view view synthesis, a problem which is more challenging but has potentially much wider application. Our method learns to predict a multiplane image directly from a single image input, and we introduce scale-invariant view synthesis for supervision, enabling us to train on online video. We show this approach is applicable to several different datasets, that it additionally generates reasonable depth maps, and that it learns to fill in content behind the edges of foreground objects in background layers. Project page at https://single-view-mpi.github.io/.
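Rendering a multiplane image is classic back-to-front alpha compositing. The snippet below shows the standard "over" operator for D RGBA layers (the layer ordering convention is an assumption):

```python
import torch

def composite_mpi(rgb, alpha):
    """rgb: (B,D,3,H,W), alpha: (B,D,1,H,W); layer 0 is the farthest plane."""
    out = torch.zeros_like(rgb[:, 0])
    for d in range(rgb.shape[1]):  # blend back to front
        out = rgb[:, d] * alpha[:, d] + out * (1.0 - alpha[:, d])
    return out  # rendered (B,3,H,W) view
```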
We propose GeoNet, a jointly unsupervised learning framework for monocular depth, optical flow and egomotion estimation from videos. The three components are coupled by the nature of 3D scene geometry, jointly learned by our framework in an end-to-end manner. Specifically, geometric relationships are extracted over the predictions of individual modules and then combined as an image reconstruction loss, reasoning about static and dynamic scene parts separately. Furthermore, we propose an adaptive geometric consistency loss to increase robustness towards outliers and non-Lambertian regions, which resolves occlusions and texture ambiguities effectively. Experimentation on the KITTI driving dataset reveals that our scheme achieves state-of-the-art results in all of the three tasks, performing better than previously unsupervised methods and comparably with supervised ones.
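The geometric coupling can be made explicit through "rigid flow": depth and ego-motion induce a flow field for the static scene, which can then be compared against the predicted optical flow. A small sketch under assumed inputs (uv_src would come from a projection like the view-synthesis warp sketched earlier, before normalization):

```python
import torch

def rigid_flow(uv_src, h, w):
    """uv_src: (B,H,W,2) source-view pixel coordinates of the target pixels,
    obtained by projecting them with predicted depth and ego-motion."""
    ys, xs = torch.meshgrid(torch.arange(h, dtype=torch.float32),
                            torch.arange(w, dtype=torch.float32), indexing="ij")
    grid = torch.stack((xs, ys), dim=-1).unsqueeze(0)  # (1,H,W,2)
    return uv_src - grid  # flow implied by a static, rigid scene
```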
Depth estimation is a challenging task of 3D reconstruction, aimed at improving the accuracy of sensing for environment awareness. This work brings a new solution with a set of improvements, which increases the quantitative and qualitative understanding of depth maps compared to existing methods. Recently, convolutional neural networks (CNNs) have demonstrated an extraordinary ability to estimate depth maps from monocular images. However, traditional CNNs do not support topological structures; they can only work on regular image regions with determined sizes and weights. On the other hand, graph convolutional networks (GCNs) can handle the convolution of non-Euclidean data and can be applied to irregular image regions within a topological structure. Therefore, in this work, in order to preserve object geometric appearance and distribution, we aim to exploit GCNs for a self-supervised depth estimation model. Our model consists of two parallel auto-encoder networks: the first is an auto-encoder that relies on ResNet-50 to extract features from the input image and on multi-scale GCNs to estimate the depth map; in turn, the second network is used to estimate the ego-motion vector (i.e., 3D pose) between two consecutive frames based on ResNet-18. Both the estimated 3D pose and the depth map are used to construct the target image. A combination of loss functions related to photometric error, projection, and smoothness is used to cope with bad depth predictions and preserve the discontinuities of objects. In particular, our method provides comparable and promising results, with a high prediction accuracy of 89% on the public KITTI and Make3D datasets, along with a 40% reduction in the number of training parameters compared to state-of-the-art solutions. The source code is publicly available at https://github.com/arminmasoumian/gcndepth.git
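For readers unfamiliar with graph convolutions, a basic Kipf-and-Welling-style propagation layer looks as follows; the exact GCN variant the model applies over image regions may differ:

```python
import torch
import torch.nn as nn

class GraphConv(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        """x: (N, in_dim) node features (e.g. one node per image region);
        adj: (N, N) normalized adjacency matrix with self-loops."""
        return torch.relu(self.lin(adj @ x))
```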
Estimating the distance to objects is crucial for autonomous vehicles when using depth sensors is not possible. In this case, the distance has to be estimated from on-board mounted RGB cameras, which is a complex task, especially in environments such as natural outdoor landscapes. In this paper, we present a new method named M4Depth for depth estimation. First, we establish a bijective relationship between depth and the visual disparity of two consecutive frames, and show how to exploit it to perform motion-invariant pixel-wise depth estimation. Then, we detail M4Depth, which is based on a pyramidal convolutional neural network architecture in which each level refines an input disparity map estimate by using two customized cost volumes. We use these cost volumes to leverage the visual spatio-temporal constraints imposed by motion and to increase the robustness of the network for varied scenes. We benchmarked our method, both in test and generalization modes, on public datasets featuring synthetic camera trajectories recorded in a variety of outdoor scenes. The results show that our network outperforms the state of the art on these datasets, while also performing well on a standard depth estimation benchmark. The code of our method is publicly available at https://github.com/michael-fonder/m4depth