While self-supervised monocular depth estimation in driving scenarios has achieved comparable performance, violations of the static world assumption can still lead to erroneous depth predictions for traffic participants, posing a potential safety issue. In this paper, we present R4Dyn, a novel technique for using cost-efficient radar data on top of a self-supervised depth estimation framework. In particular, we show how radar can be used during training and as an extra input to enhance the robustness of the estimation at inference time. Since automotive radars are readily available, this allows training data to be collected from a variety of existing vehicles. Moreover, by filtering and expanding the signal to make it compatible with learning-based approaches, we address radar-inherent issues such as noise and sparsity. With R4Dyn, we are able to overcome a major limitation of self-supervised depth estimation, namely the prediction of traffic participants. We substantially improve the estimation of dynamic objects such as cars, by 37% on the challenging nuScenes dataset, thus demonstrating that radar is a valuable additional sensor for monocular depth estimation in autonomous vehicles.
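As a rough illustration of the idea above, the sketch below shows how sparse radar returns, once projected into the image plane, could serve as an additional supervision term alongside the usual photometric loss. The function name, the zero-encoding of missing returns, and the plain L1 penalty are assumptions made for this example, not R4Dyn's actual filtering and expansion pipeline.

```python
import torch

def sparse_radar_loss(pred_depth, radar_depth):
    """L1 penalty between predicted depth and sparse radar depth.

    pred_depth:  (B, 1, H, W) network output
    radar_depth: (B, 1, H, W) radar returns projected into the image,
                 zero where no return exists (encoding assumed here)
    """
    valid = radar_depth > 0              # supervise only pixels with a radar return
    if valid.sum() == 0:                 # no returns in this batch: contribute nothing
        return pred_depth.sum() * 0.0
    return torch.abs(pred_depth - radar_depth)[valid].mean()
```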
Although cameras are ubiquitous, robotic platforms typically rely on active sensors like LiDAR for direct 3D perception. In this work, we propose a novel self-supervised monocular depth estimation method combining geometry with a new deep network, PackNet, learned only from unlabeled monocular videos. Our architecture leverages novel symmetrical packing and unpacking blocks to jointly learn to compress and decompress detail-preserving representations using 3D convolutions. Although self-supervised, our method outperforms other self, semi, and fully supervised methods on the KITTI benchmark. The 3D inductive bias in PackNet enables it to scale with input resolution and number of parameters without overfitting, generalizing better on out-of-domain data such as the NuScenes dataset. Furthermore, it does not require large-scale supervised pretraining on ImageNet and can run in real-time. Finally, we release DDAD (Dense Depth for Automated Driving), a new urban driving dataset with more challenging and accurate depth evaluation, thanks to longer-range and denser ground-truth depth generated from high-density LiDARs mounted on a fleet of self-driving cars operating world-wide.
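To make the packing idea more concrete, here is a minimal sketch of a detail-preserving downsampling block: space-to-depth folds spatial detail into channels, and a 3D convolution then processes the folded tensor before a 2D convolution reduces the channel count. Layer sizes, the unit "depth" dimension, and the exact ordering are illustrative assumptions, not the published PackNet block.

```python
import torch
import torch.nn as nn

class PackingBlock(nn.Module):
    """Rough sketch of a detail-preserving downsampling ("packing") block."""

    def __init__(self, in_ch, out_ch, r=2, d=8):
        super().__init__()
        self.unshuffle = nn.PixelUnshuffle(r)                    # (B, C*r*r, H/r, W/r)
        self.conv3d = nn.Conv3d(1, d, kernel_size=3, padding=1)  # mixes folded detail
        self.conv2d = nn.Conv2d(in_ch * r * r * d, out_ch, kernel_size=3, padding=1)

    def forward(self, x):
        x = self.unshuffle(x)                # fold spatial detail into channels
        b, c, h, w = x.shape
        x = self.conv3d(x.unsqueeze(1))      # add a unit "depth" dim for the 3D conv
        x = x.reshape(b, -1, h, w)           # fold 3D features back into channels
        return self.conv2d(x)

# Example shape check (illustrative): (1, 64, 32, 32) -> (1, 128, 16, 16)
# out = PackingBlock(64, 128)(torch.randn(1, 64, 32, 32))
```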
Per-pixel ground-truth depth data is challenging to acquire at scale. To overcome this limitation, self-supervised learning has emerged as a promising alternative for training models to perform monocular depth estimation. In this paper, we propose a set of improvements, which together result in both quantitatively and qualitatively improved depth maps compared to competing self-supervised methods. Research on self-supervised monocular training usually explores increasingly complex architectures, loss functions, and image formation models, all of which have recently helped to close the gap with fully-supervised methods. We show that a surprisingly simple model, and associated design choices, lead to superior predictions. In particular, we propose (i) a minimum reprojection loss, designed to robustly handle occlusions, (ii) a full-resolution multi-scale sampling method that reduces visual artifacts, and (iii) an auto-masking loss to ignore training pixels that violate camera motion assumptions. We demonstrate the effectiveness of each component in isolation, and show high quality, state-of-the-art results on the KITTI benchmark.
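The minimum reprojection loss and auto-masking described above can be written compactly. The sketch below assumes the source frames have already been warped into the target view, and it uses a plain L1 photometric error for brevity rather than the SSIM plus L1 combination commonly used; it also omits the multi-scale sampling.

```python
import torch

def photometric_error(pred, target):
    # Simplified per-pixel error; a weighted SSIM + L1 term is more common in practice.
    return torch.abs(pred - target).mean(dim=1, keepdim=True)  # (B, 1, H, W)

def min_reprojection_loss(target, warped_sources, sources):
    """target:         (B, 3, H, W) frame being reconstructed
    warped_sources: list of (B, 3, H, W) source frames warped into the target view
    sources:        list of (B, 3, H, W) unwarped source frames (for auto-masking)
    """
    # (i) minimum reprojection: per pixel, keep the lowest error over source views.
    reproj = torch.stack([photometric_error(w, target) for w in warped_sources], dim=0)
    reproj, _ = reproj.min(dim=0)

    # (iii) auto-masking: drop pixels where an unwarped source already matches the
    # target better than any warped source (static camera, objects moving with it).
    identity = torch.stack([photometric_error(s, target) for s in sources], dim=0)
    identity, _ = identity.min(dim=0)
    mask = (reproj < identity).float()

    return (reproj * mask).sum() / mask.sum().clamp(min=1.0)
```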
Modern computer vision has moved beyond the realm of internet photo collections and into the physical world, guiding camera-equipped robots and autonomous cars through unstructured environments. To enable these embodied agents to interact with real-world objects, cameras are increasingly being used as depth sensors, reconstructing the environment for a variety of downstream reasoning tasks. Machine-learning-aided depth perception, or depth estimation, predicts the distance of each pixel in an image. While impressive progress has been made in depth estimation, significant challenges remain: (1) ground-truth depth labels are difficult to collect at scale, (2) camera information is typically assumed to be known but is often unreliable, and (3) restrictive camera assumptions are common, even though a wide variety of camera types and lenses are used in practice. In this thesis, we focus on relaxing these assumptions and describe contributions toward the ultimate goal of turning cameras into truly generic depth sensors.
We present a method for automatically generating semantic labels for real-world recordings of automotive range-Doppler (RD) radar spectra. Such labels are required when training neural networks for object recognition from radar data. The automatic labeling approach relies on simultaneous recordings of camera and lidar data in addition to the radar spectra. By warping the radar spectra into the camera image, state-of-the-art object recognition algorithms can be applied to the relevant objects (e.g., cars) in the camera image. The warping operation is designed to be fully differentiable, which allows gradients computed on the camera image to be propagated through the warping operation to a neural network operating on the radar data. As the warping operation relies on accurate scene flow estimation, we further propose a novel scene flow estimation algorithm that exploits information from camera, lidar, and radar sensors. The proposed scene flow estimation method is compared to a state-of-the-art scene flow algorithm and outperforms it by approximately 30% w.r.t. mean average error. The feasibility of the overall framework for automatic label generation is verified by evaluating the performance of neural networks trained via the proposed framework for direction-of-arrival estimation.
Supervised learning-based depth estimation methods can achieve good performance when trained on high-quality ground truth such as LiDAR data. However, LiDAR can only generate sparse 3D maps, which causes information loss, and high-quality per-pixel ground-truth depth data is difficult to acquire. To overcome this limitation, we propose a novel approach that combines structural information from the promising plane-and-parallax geometry pipeline with depth information in a U-Net supervised learning network, which leads to quantitative and qualitative improvements compared to existing popular learning-based methods. In particular, the model is evaluated on two large-scale and challenging datasets, the KITTI Vision Benchmark and the Cityscapes dataset, and achieves the best performance in terms of relative error. Compared with the purely depth-supervised model, our model achieves impressive performance in depth prediction for thin objects and edges, and compared with the structure-prediction baseline, our model is more robust.
This paper presents an open and comprehensive framework to systematically evaluate state-of-the-art contributions to self-supervised monocular depth estimation. This includes training, backbones, architectural design choices, and loss functions. Many papers in this field claim novelty in architectural design or loss formulation. However, simply updating the backbone of a historical system results in a 25% relative improvement, making it outperform most existing systems. A systematic evaluation of the papers in this field is not straightforward. The need to compare like with like in previous papers means that long-standing errors in the evaluation protocol are ubiquitous in the field. It is likely that many papers have been optimized not only for a specific dataset, but also for errors in the data and the evaluation criteria. To aid future research in this area, we release a modular codebase that makes it easy to evaluate alternative design decisions against corrected data and evaluation criteria. We re-implement, validate, and re-evaluate 16 state-of-the-art contributions and introduce a new dataset (SYNS-Patches) containing dense outdoor depth maps for a variety of natural and urban scenes. This allows the computation of informative metrics in complex regions such as depth boundaries.
In recent years, self-supervised monocular depth estimation has become an intensely studied topic due to its applications in robotics and autonomous driving. Much recent work has focused on improving depth estimation by increasing architectural complexity. This paper shows that state-of-the-art performance can also be achieved by improving the learning process rather than increasing model complexity. More specifically, we propose (i) using an invariant pose loss only for the first few epochs of training, (ii) ignoring small potentially dynamic objects during training, and (iii) employing an appearance-based approach to separately estimate object poses for truly dynamic objects. We demonstrate that these simplifications reduce GPU memory usage by 29% and lead to qualitatively and quantitatively improved depth maps.
Estimating the distance of objects is a safety-critical task for autonomous driving. Focusing on short-range objects, existing methods and datasets neglect equally important long-range objects. In this paper, we introduce a challenging and under-explored task, which we refer to as long-range distance estimation, along with two datasets for validating new methods developed for this task. We then propose R4D, the first framework to accurately estimate the distance of long-range objects by using references with known distances in the scene. Drawing inspiration from human perception, R4D builds a graph by connecting the target object to all references. An edge in the graph encodes the relative distance information between a pair of target and reference objects. An attention module is then used to weigh the importance of the reference objects and to combine them into a single target-object distance prediction. Experiments on the two datasets demonstrate the effectiveness and robustness of R4D by showing significant improvements compared to existing baselines. We are working to make the proposed dataset, Waymo OpenDataset - Long-Range Labels, publicly available at Waymo.com/open/download.
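As a toy illustration of attention-weighted reference aggregation, the module below scores each target-reference pair, predicts a relative distance offset per reference, and combines them into a single distance estimate. All names, feature sizes, and the offset parameterization are assumptions made for the sketch, not R4D's actual architecture.

```python
import torch
import torch.nn as nn

class ReferenceAttention(nn.Module):
    """Combine per-reference distance estimates into one target-distance prediction."""

    def __init__(self, feat_dim=64):
        super().__init__()
        self.score = nn.Linear(feat_dim, 1)    # importance of each reference
        self.offset = nn.Linear(feat_dim, 1)   # relative distance: target minus reference

    def forward(self, edge_feats, ref_distances):
        # edge_feats:    (num_refs, feat_dim) features of target-reference pairs
        # ref_distances: (num_refs,) known distances of the reference objects
        weights = torch.softmax(self.score(edge_feats).squeeze(-1), dim=0)
        per_ref_pred = ref_distances + self.offset(edge_feats).squeeze(-1)
        return (weights * per_ref_pred).sum()

# Example: 5 references with random pair features and known distances.
# pred = ReferenceAttention()(torch.randn(5, 64), torch.rand(5) * 100.0)
```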
Gated cameras hold promise as an alternative to scanning LiDAR sensors, offering high-resolution 3D depth that is robust in fog, snow, and rain. Instead of sequentially scanning a scene and directly recording depth via photon time-of-flight, as in pulsed LiDAR sensors, gated imagers encode depth in the relative intensities of a handful of gated slices at megapixel resolution. Although existing methods have shown that high-resolution depth can be decoded from these measurements, they require a synchronized and calibrated LiDAR to supervise the gated depth decoder, which prohibits fast adoption across geographies, training on large unpaired datasets, and exploring alternative applications outside of automotive use cases. In this work, we fill this gap and propose an entirely self-supervised depth estimation method that uses gated intensity profiles and temporal consistency as a training signal. The proposed model is trained end-to-end from gated video sequences, does not require LiDAR or RGB data, and learns to estimate absolute depth values. We take gated slices as input and disentangle the estimated scene, depth, and ambient light, which are then used to learn to reconstruct the input slices through a cyclic loss. We rely on temporal consistency between a given frame and neighboring gated slices to estimate depth in regions with shadows and reflections. We experimentally validate that the proposed method outperforms existing supervised and self-supervised depth estimation methods based on monocular RGB and stereo images, as well as supervised methods based on gated images.
We introduce MGNet, a multi-task framework for monocular geometric scene understanding. We define monocular geometric scene understanding as the combination of two known tasks: panoptic segmentation and self-supervised monocular depth estimation. Panoptic segmentation captures the full scene not only semantically but also on an instance basis. Self-supervised monocular depth estimation uses geometric constraints derived from the camera measurement model in order to measure depth from monocular video sequences. To the best of our knowledge, we are the first to propose the combination of these two tasks in a single model. Our model is designed with a focus on low latency to provide fast inference in real time on a single consumer-grade GPU. During deployment, our model produces a dense 3D point cloud with instance-aware semantic labels from a single high-resolution camera image. We evaluate the model on two popular autonomous driving benchmarks, namely Cityscapes and KITTI, and demonstrate competitive performance among other real-time-capable methods. The source code is available at https://github.com/markusschoen/mgnet.
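The dense 3D point cloud mentioned above can be obtained by back-projecting the predicted depth map through the camera intrinsics; a minimal sketch under a pinhole camera assumption is shown below (this is a generic lifting step, not MGNet's specific implementation).

```python
import torch

def backproject_to_points(depth, K):
    """Lift a dense depth map to a 3D point cloud in the camera frame.

    depth: (H, W) metric depth
    K:     (3, 3) pinhole camera intrinsics
    """
    H, W = depth.shape
    v, u = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    pix = torch.stack([u, v, torch.ones_like(u)], dim=-1).float()  # (H, W, 3) homogeneous pixels
    rays = pix @ torch.inverse(K).T                                # (H, W, 3) viewing rays
    points = rays * depth.unsqueeze(-1)                            # scale each ray by its depth
    return points.reshape(-1, 3)                                   # (H*W, 3)
```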
Determining accurate bird's eye view (BEV) positions of objects and tracks in a scene is vital for various perception tasks, including object-interaction mapping and scenario extraction; however, the level of supervision required to accomplish this is extremely challenging to procure. We propose a lightweight, weakly supervised method to estimate the 3D position of objects by jointly learning to regress 2D object detections and the scene's depth prediction in a single feed-forward pass of a network. Our proposed method extends a center-point-based single-shot object detector (Zhou et al., 2019) and introduces a novel object representation where each object is modeled spatio-temporally as a BEV point, without the need for any 3D or BEV annotations for training or for LiDAR data at query time. The approach leverages readily available 2D object supervision along with LiDAR point clouds (used only during training) to jointly train a single network that learns to predict 2D object detections alongside the whole scene's depth, in order to spatio-temporally model object tracks as points in BEV. The proposed method is over ~10x more computationally efficient than recent SOTA approaches [1, 38] while achieving comparable accuracy on the KITTI tracking benchmark.
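For intuition, lifting a 2D detection centre plus its predicted depth to a BEV point under a pinhole camera can be as simple as the following sketch; it ignores the vertical coordinate and is only an illustration of the geometry, not the paper's learned representation.

```python
def detection_to_bev(center_u, depth, fx, cx):
    """Map a 2D detection centre (horizontal pixel coordinate) and its predicted
    depth to a BEV point (x: lateral, z: forward) in the camera frame.

    center_u: horizontal pixel coordinate of the box centre
    depth:    predicted metric depth of the object (along the optical axis)
    fx, cx:   focal length and principal point of the pinhole camera
    """
    x = (center_u - cx) * depth / fx   # lateral offset from the optical axis
    z = depth                          # forward distance
    return x, z
```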
Robust detection and tracking of objects is crucial for the deployment of autonomous vehicle technology. Image-based benchmark datasets have driven development in computer vision tasks such as object detection, tracking and segmentation of agents in the environment. Most autonomous vehicles, however, carry a combination of cameras and range sensors such as lidar and radar. As machine learning based methods for detection and tracking become more prevalent, there is a need to train and evaluate such methods on datasets containing range sensor data along with images. In this work we present nuTonomy scenes (nuScenes), the first dataset to carry the full autonomous vehicle sensor suite: 6 cameras, 5 radars and 1 lidar, all with full 360 degree field of view. nuScenes comprises 1000 scenes, each 20s long and fully annotated with 3D bounding boxes for 23 classes and 8 attributes. It has 7x as many annotations and 100x as many images as the pioneering KITTI dataset. We define novel 3D detection and tracking metrics. We also provide careful dataset analysis as well as baselines for lidar and image based detection and tracking. Data, development kit and more information are available online.
Estimating the uncertainty of neural networks plays a fundamental role in safety-critical settings. In perception for autonomous driving, measuring uncertainty means providing additional calibrated information to downstream tasks such as path planning, which can use it for safe navigation. In this work, we propose a novel sampling-free uncertainty estimation method for object detection. We call it CertainNet, and it is the first to provide separate uncertainties for each output signal: objectness, class, location, and size. To achieve this, we propose an uncertainty-aware heatmap and exploit the neighboring bounding boxes provided by the detector at inference time. We evaluate the detection performance and the quality of the different uncertainty estimates separately, also on challenging out-of-domain samples: BDD100K and nuImages with models trained on KITTI. Additionally, we propose a new metric to evaluate location and size uncertainties. When transferring to unseen datasets, CertainNet generalizes substantially better than previous methods and ensembles, while being real-time and providing high-quality and comprehensive uncertainty estimates.
Self-supervised monocular depth estimation has shown impressive results in static scenes. It relies on the multi-view consistency assumption for training networks; however, this assumption is violated in dynamic object regions and occlusions. Consequently, existing methods show poor accuracy in dynamic scenes, and the estimated depth map is blurred at object boundaries because they are usually occluded in other training views. In this paper, we propose SC-DepthV3 for addressing these challenges. Specifically, we introduce an external pretrained monocular depth estimation model for generating a single-image depth prior, namely pseudo-depth, based on which we propose novel losses to boost self-supervised training. As a result, our model can predict sharp and accurate depth maps, even when training from monocular videos of highly-dynamic scenes. We demonstrate the significantly superior performance of our method over previous methods on six challenging datasets, and we provide detailed ablation studies for the proposed terms. Source code and data will be released at https://github.com/JiawangBian/sc_depth_pl
Monocular depth estimation has been actively studied in fields such as robot vision, autonomous driving, and 3D scene understanding. Given a sequence of color images, unsupervised learning methods based on the framework of Structure-From-Motion (SfM) simultaneously predict depth and camera relative pose. However, dynamically moving objects in the scene violate the static world assumption, resulting in inaccurate depths of dynamic objects. In this work, we propose a new method to address such dynamic object movements through monocular 3D object detection. Specifically, we first detect 3D objects in the images and build the per-pixel correspondence of the dynamic pixels with the detected object pose while leaving the static pixels corresponding to the rigid background to be modeled with camera motion. In this way, the depth of every pixel can be learned via a meaningful geometry model. Besides, objects are detected as cuboids with absolute scale, which is used to eliminate the scale ambiguity problem inherent in monocular vision. Experiments on the KITTI depth dataset show that our method achieves state-of-the-art performance for depth estimation. Furthermore, joint training of depth, camera motion and object pose also improves monocular 3D object detection performance. To the best of our knowledge, this is the first work that allows a monocular 3D object detection network to be fine-tuned in a self-supervised manner.
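A minimal sketch of the per-pixel correspondence idea: pixels inside a detected object are moved with the object's pose, while background pixels are moved with the camera ego-motion. The masks and rigid transforms are assumed to be given, and the function names are hypothetical; this is not the paper's full pipeline.

```python
import torch

def per_pixel_transform(points_cam, ego_T, obj_T, obj_mask):
    """Apply the ego-motion transform to background pixels and the detected
    object's pose transform to pixels inside the object mask.

    points_cam: (N, 3) back-projected 3D points, one per pixel
    ego_T:      (4, 4) camera ego-motion between the two frames
    obj_T:      (4, 4) rigid motion of the detected object
    obj_mask:   (N,) bool, True for pixels belonging to the detected object
    """
    homo = torch.cat([points_cam, torch.ones_like(points_cam[:, :1])], dim=1)  # (N, 4)
    moved = torch.where(obj_mask[:, None],
                        (homo @ obj_T.T)[:, :3],   # dynamic pixels follow the object pose
                        (homo @ ego_T.T)[:, :3])   # static pixels follow the camera motion
    return moved
```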
Although existing monocular depth estimation methods have made great progress, predicting an accurate absolute depth map from a single image remains challenging due to the limited modeling capacity of networks and the scale ambiguity problem. In this paper, we introduce a fully Visual Attention-based Depth (VADepth) network, in which spatial attention and channel attention are applied at all stages. By continuously extracting long-range dependencies of features along the spatial and channel dimensions, the VADepth network can effectively preserve important details and suppress interfering features, thereby better perceiving the scene structure for more accurate depth estimation. In addition, we exploit geometric priors to form a scale constraint for scale-aware model training. Specifically, we construct a novel scale-aware loss using the distance between the camera and a plane fitted from ground points corresponding to the pixels of a rectangular region in the bottom middle of the image. Experimental results on the KITTI dataset show that this architecture achieves state-of-the-art performance, and our method can directly output absolute depth without post-processing. Moreover, our experiments on the SeasonDepth dataset also demonstrate the robustness of our model to multiple unseen environments.
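One way to realize such a ground-plane scale constraint is sketched below: fit a plane to 3D points back-projected from the assumed road region and penalize the difference between the camera-to-plane distance and the known mounting height of the camera. The least-squares plane fit and the L1 penalty are assumptions for illustration, not necessarily the paper's exact formulation.

```python
import torch

def camera_height_loss(ground_points, known_cam_height):
    """Scale constraint from the ground plane.

    ground_points:    (N, 3) 3D points back-projected from a bottom-centre image
                      region assumed to lie on the road surface
    known_cam_height: measured mounting height of the camera (metres)
    """
    # Fit a plane n.x + d = 0 by least squares: centre the points and take the
    # normal as the right singular vector with the smallest singular value.
    centroid = ground_points.mean(dim=0)
    _, _, vh = torch.linalg.svd(ground_points - centroid)
    normal = vh[-1]
    d = -(normal @ centroid)
    # Distance from the camera origin (0, 0, 0) to the fitted plane.
    cam_to_plane = torch.abs(d) / normal.norm()
    return torch.abs(cam_to_plane - known_cam_height)
```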
Following recent advances in novel view synthesis, we propose its application to improve monocular depth estimation. In particular, we propose a novel training method split into three main steps. First, the prediction of a monocular depth network is warped to an additional viewpoint. Second, we apply an additional image synthesis network that corrects and improves the quality of the warped RGB image. The output of this network is required to look as similar as possible to the ground-truth view by minimizing a pixel-wise RGB reconstruction error. Third, we reapply the same monocular depth estimation to the synthesized second viewpoint and ensure that the depth predictions are consistent with the associated ground-truth depth. Experimental results demonstrate that our method achieves state-of-the-art or comparable performance on the KITTI and NYU-Depth-v2 datasets with a lightweight and simple vanilla U-Net architecture.
Self-supervised deep learning methods for joint depth and ego-motion estimation can yield accurate trajectories without ground-truth training data. However, because they typically use photometric losses, their performance can degrade significantly when the assumptions behind these losses (e.g., temporal illumination consistency, a static scene, and the absence of noise and occlusions) are violated. This limits their use for, e.g., nighttime sequences, which tend to contain many point light sources (including on dynamic objects) and low signal-to-noise ratio (SNR) in darker image regions. In this paper, we show how to use a combination of three techniques to allow the existing photometric losses to work for both day and nighttime images. First, we introduce a per-pixel neural intensity transformation to compensate for the light changes that occur between successive frames. Second, we predict a per-pixel residual flow map that we use to correct the reprojection correspondences induced by the ego-motion and depth estimated by the network. Third, we denoise the training images to improve the robustness and accuracy of our approach. These changes allow us to train a single model for both day and nighttime images without needing separate encoders or extra feature networks like existing methods. We perform extensive experiments and ablation studies on the challenging Oxford RobotCar dataset to demonstrate the efficacy of our approach for both day and nighttime sequences.
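A per-pixel affine brightness correction is one simple way to realize the intensity transformation mentioned above: a small network predicts a scale and offset per pixel from the two frames and applies them before the photometric loss is computed. The layer sizes, the affine parameterization, and the class name are assumptions made for this sketch.

```python
import torch
import torch.nn as nn

class IntensityTransform(nn.Module):
    """Predict a per-pixel affine correction (a, b) and apply it to a frame so
    that its brightness better matches an adjacent reference frame."""

    def __init__(self, in_ch=6):  # two RGB frames concatenated along channels
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 2, 3, padding=1),     # channel 0: scale, channel 1: offset
        )

    def forward(self, frame, reference):
        a_b = self.net(torch.cat([frame, reference], dim=1))
        a = 1.0 + a_b[:, 0:1]   # scale predicted as a residual around identity
        b = a_b[:, 1:2]
        return a * frame + b    # broadcast the single-channel (a, b) over RGB
```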
Compared to LiDAR, camera and radar sensors have significant advantages in cost, reliability, and maintenance. Existing fusion methods often fuse the outputs of the single modalities at the result level, which is referred to as a late fusion strategy. This can benefit from using off-the-shelf single-sensor detection algorithms, but late fusion cannot fully exploit the complementary properties of the sensors, so its performance is limited despite the large potential of camera-radar fusion. Here we propose a novel proposal-level early fusion approach that effectively exploits the spatial and contextual properties of camera and radar for 3D object detection. Our fusion framework first associates image proposals with radar points in the polar coordinate system to efficiently handle the discrepancies between the coordinate systems and spatial properties. Following this first stage, consecutive cross-attention-based feature fusion layers adaptively exchange spatio-contextual information between camera and radar, leading to a robust and attentive fusion. Our camera-radar fusion approach achieves a state-of-the-art 41.1% mAP and 52.3% NDS on the nuScenes test set, 8.7 and 10.8 points higher than the camera-only baseline, respectively, and it is competitive with LiDAR-based methods.
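A generic cross-attention fusion layer, in which camera proposal features query radar point features, could look like the sketch below; the dimensions, residual connection, and normalization are illustrative choices rather than the paper's exact design.

```python
import torch
import torch.nn as nn

class CameraRadarCrossAttention(nn.Module):
    """Camera proposal features (queries) attend to radar point features (keys/values)."""

    def __init__(self, dim=128, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, cam_feats, radar_feats):
        # cam_feats:   (B, num_proposals, dim) per-proposal image features
        # radar_feats: (B, num_radar_points, dim) per-point radar features
        fused, _ = self.attn(query=cam_feats, key=radar_feats, value=radar_feats)
        return self.norm(cam_feats + fused)   # residual connection keeps the camera stream
```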