Depth images estimated with stereo or 3D cameras determine the distance between objects in a scene and the camera sensor from 2D images. The result of depth estimation is a relative distance, which can be used to calculate the absolute distance needed in practice. However, distance estimation with a single 2D monocular camera is very challenging. This paper introduces a deep learning framework composed of two deep networks for depth estimation and object detection from a single image. First, objects in the scene are detected and localized with a You Only Look Once (YOLOv5) network. In parallel, an estimated depth image is computed with a deep autoencoder network to infer relative distances. The YOLO-based object detection is trained with a supervised learning technique, whereas the depth estimation network is trained in a self-supervised manner. The presented distance estimation framework is evaluated on real images of outdoor scenes. The achieved results show that the framework is promising, attaining 96% accuracy with an RMSE of 0.203 for the correct absolute distance.
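To make the two-branch idea concrete, the following is a minimal sketch of running detection and relative-depth estimation in parallel on one image and reading off a per-object relative distance. It is not the authors' released code: MiDaS stands in for the paper's depth autoencoder, and the image path and median-depth readout are illustrative assumptions.

```python
import cv2
import numpy as np
import torch

# Detection branch (supervised) and a depth branch (self-supervised stand-in).
detector = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True)
depth_net = torch.hub.load('intel-isl/MiDaS', 'MiDaS_small').eval()
transform = torch.hub.load('intel-isl/MiDaS', 'transforms').small_transform

img = cv2.cvtColor(cv2.imread('scene.jpg'), cv2.COLOR_BGR2RGB)  # illustrative path
dets = detector(img).xyxy[0].cpu().numpy()   # rows: [x1, y1, x2, y2, conf, cls]
with torch.no_grad():
    inv_depth = depth_net(transform(img)).squeeze().cpu().numpy()  # relative inverse depth
inv_depth = cv2.resize(inv_depth, (img.shape[1], img.shape[0]))

for x1, y1, x2, y2, conf, cls in dets:
    patch = inv_depth[int(y1):int(y2), int(x1):int(x2)]
    print(f'class {int(cls)}: median relative inverse depth {np.median(patch):.3f}')
```

A fitted mapping from such relative values to metric range is what would turn this output into the absolute distances the abstract reports.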
Depth estimation is a challenging task in 3D reconstruction for improving the accuracy of environment-aware sensing. This work brings a new solution with a set of improvements that, compared with existing methods, increase the quantitative and qualitative understanding of depth maps. Recently, convolutional neural networks (CNNs) have demonstrated a remarkable ability to estimate depth maps from monocular images. However, traditional CNNs do not support topological structures; they can only operate on regular image regions with a fixed size and weights. Graph convolutional networks (GCNs), on the other hand, can handle convolution over non-Euclidean data and can be applied to irregular image regions within a topological structure. Therefore, to preserve the geometric appearance and distribution of objects, in this work we aim to exploit GCNs for a self-supervised depth estimation model. Our model consists of two parallel auto-encoder networks: the first is an auto-encoder that relies on ResNet-50 and extracts features from the input image together with a multi-scale GCN to estimate the depth map. In turn, the second network, based on ResNet-18, is used to estimate the ego-motion vector (i.e., 3D pose) between two consecutive frames. Both the estimated 3D pose and the depth map are used to reconstruct the target image. A combination of loss functions related to photometric, projection, and smoothness terms is used to cope with bad depth predictions and preserve object discontinuities. In particular, our method provides comparable and promising results, with a high prediction accuracy of 89% on the public KITTI benchmark and the Make3D dataset, together with a 40% reduction in the number of trainable parameters compared with state-of-the-art solutions. The source code is publicly available at https://github.com/arminmasoumian/gcndepth.git
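A minimal graph-convolution layer of the kind the abstract applies to irregular image regions is sketched below; the mean-aggregation rule and the assumption that an adjacency matrix over region nodes is given are ours, not the repository's exact code.

```python
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    """Mean-aggregate neighbor features over a given adjacency, then transform."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        # x: [N, in_dim] node features; adj: [N, N] adjacency with self-loops.
        deg = adj.sum(dim=-1, keepdim=True).clamp(min=1.0)
        return torch.relu(self.lin(adj @ x / deg))

# Unlike a fixed-kernel CNN, the same layer handles any node count and topology:
x, adj = torch.randn(5, 16), torch.eye(5)
out = GCNLayer(16, 32)(x, adj)   # [5, 32]
```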
As an essential component of many autonomous driving and robotic activities such as ego-motion estimation, obstacle avoidance, and scene understanding, monocular depth estimation (MDE) has attracted great attention from the computer vision and robotics communities. Over the past decades, a large number of methods have been developed. However, to the best of our knowledge, there is no comprehensive survey of MDE. This paper aims to bridge this gap by reviewing 197 relevant articles published between 1970 and 2021. In particular, we provide a comprehensive survey of MDE covering various methods, introduce the popular performance evaluation metrics, and summarize publicly available datasets. We also summarize available open-source implementations of some representative methods and compare their performance. Furthermore, we review the applications of MDE in some important robotic tasks. Finally, we conclude this paper by presenting some promising directions for future research. This survey is expected to assist readers in navigating this research field.
The ability to accurately estimate depth information is crucial for many autonomous applications to recognize the surrounding environment and predict the depth of important objects. One of the recently used techniques is monocular depth estimation, in which the depth map is inferred from a single image. This paper improves self-supervised deep learning techniques for accurate, generalized monocular depth estimation. The main idea is to train the deep model on sequences of different frames, each geo-tagged with location information. This enables the model to enhance depth estimation with the semantics of a given region. We demonstrate the effectiveness of our model in improving depth estimation results. The model is trained in a realistic environment, and the results show improvements in the depth maps after adding location data to the model training phase.
3D object detection is vital, as it enables us to capture objects' sizes, orientations, and positions in the world. As a result, this 3D detection can be used in real-world applications such as Augmented Reality (AR), self-driving cars, and robotics, which perceive the world the same way we do as humans. Monocular 3D object detection is the task of drawing a 3D bounding box around objects in a single 2D RGB image. It is a localization task, but without any extra information such as depth, other sensors, or multiple images. Monocular 3D object detection is an important yet challenging task. Beyond the significant progress in image-based 2D object detection, 3D understanding of real-world objects is an open challenge that has not been explored extensively thus far.
Autonomous robots are currently one of the most popular artificial intelligence problems, having undergone significant advances during the last decade, from self-driving cars and humanoid systems to delivery robots and drones. Part of the problem is getting a robot to mimic human perception, our sense of vision, replacing the eyes with cameras and the brain with mathematical models such as neural networks. Developing an AI able to drive a car without human intervention and a small robot delivering packages in a city may seem like different problems; nevertheless, from the point of view of perception and vision, both problems have several similarities. The main solutions we currently find focus on environment perception through computer vision techniques, machine learning, and various algorithms that make the robot understand the environment or scene, move, adjust its trajectory, and perform its tasks (maintenance, exploration, etc.) without human intervention. In this work, we develop a small-scale autonomous vehicle from scratch, capable of understanding the scene using only visual information, navigating through an industrial environment, detecting people and obstacles, and performing simple maintenance tasks. We review the state of the art of the fundamental problems and demonstrate that many of the methods employed at small scale are similar to those used in real self-driving cars from companies like Tesla or Lyft. Finally, we discuss the current state of robotics and autonomous driving, as well as the technological and ethical limitations we have found in this field.
Visual perception plays an important role in autonomous driving. One of the primary tasks is object detection and identification. Since the vision sensor is rich in color and texture information, it can quickly and accurately identify various road information. The commonly used technique is based on extracting and calculating various features of the image. Recently developed deep learning-based methods offer better reliability and processing speed and have a greater advantage in recognizing complex elements. For depth estimation, vision sensors are also used for ranging due to their small size and low cost. A monocular camera uses image data from a single viewpoint as input to estimate object depth. In contrast, stereo vision is based on parallax and matching feature points of different views, and the application of deep learning further improves the accuracy. In addition, Simultaneous Localization and Mapping (SLAM) can establish a model of the road environment, thus helping the vehicle perceive the surrounding environment and complete its tasks. In this paper, we introduce and compare various methods of object detection and identification, then explain the development of depth estimation and compare various methods based on monocular, stereo, and RGB-D sensors, next review and compare various methods of SLAM, and finally summarize the current problems and present the future development trends of vision technologies.
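For the stereo case, the depth-from-parallax relation mentioned above reduces to one pinhole formula; the sketch below uses illustrative KITTI-like numbers (focal length 721 px, baseline 0.54 m), which are assumptions for the example.

```python
def stereo_depth(disparity_px: float, focal_px: float, baseline_m: float) -> float:
    # Pinhole stereo relation: depth = focal_length * baseline / disparity.
    return focal_px * baseline_m / disparity_px

# A 20 px disparity with f = 721 px and B = 0.54 m gives roughly 19.5 m.
print(stereo_depth(20.0, 721.0, 0.54))
```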
3D object detection is an essential task in autonomous driving. Recent techniques excel with highly accurate detection rates, provided the 3D input data is obtained from precise but expensive LiDAR technology. Approaches based on cheaper monocular or stereo imagery data have, until now, resulted in drastically lower accuracies, a gap that is commonly attributed to poor image-based depth estimation. However, in this paper we argue that it is not the quality of the data but its representation that accounts for the majority of the difference. Taking the inner workings of convolutional neural networks into consideration, we propose to convert image-based depth maps to pseudo-LiDAR representations, essentially mimicking the LiDAR signal. With this representation we can apply different existing LiDAR-based detection algorithms. On the popular KITTI benchmark, our approach achieves impressive improvements over the existing state-of-the-art in image-based performance, raising the detection accuracy of objects within the 30m range from the previous state-of-the-art of 22% to an unprecedented 74%. At the time of submission our algorithm holds the highest entry on the KITTI 3D object detection leaderboard for stereo-image-based approaches. Our code is publicly available at https://github.com/mileyan/pseudo_lidar.
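The core of the pseudo-LiDAR representation is back-projecting every pixel of a depth map into a 3D point cloud using the camera intrinsics; a minimal NumPy version of that transform is sketched below (the pinhole model is standard, the function name is ours).

```python
import numpy as np

def depth_to_pseudo_lidar(depth, fx, fy, cx, cy):
    # Back-project pixel (u, v) with depth z into camera coordinates:
    # x = (u - cx) * z / fx,  y = (v - cy) * z / fy.
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)  # [H*W, 3] point cloud
```

Any off-the-shelf LiDAR-based detector can then consume this point cloud, which is exactly the substitution the paper exploits.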
Self-supervised monocular depth estimation is an attractive solution that does not require hard-to-source depth labels for training. Convolutional neural networks (CNNs) have recently achieved great success in this task. However, their limited receptive field constrains existing network architectures to reason only locally, dampening the effectiveness of the self-supervised paradigm. In light of the recent successes of Vision Transformers (ViTs), we propose MonoViT, a brand-new framework combining the global reasoning enabled by ViT models with the flexibility of self-supervised monocular depth estimation. By combining plain convolutions with Transformer blocks, our model can reason both locally and globally, yielding depth predictions with a higher level of detail and accuracy and allowing MonoViT to achieve state-of-the-art performance on the established KITTI dataset. Moreover, MonoViT demonstrates its superior generalization capacity on other datasets such as Make3D and DrivingStereo.
Self-supervised monocular depth estimation has seen significant progress in recent years, especially in outdoor environments. However, depth prediction results are not satisfying in indoor scenes, where most of the existing data are captured with hand-held devices. Compared with outdoor environments, estimating depth from monocular videos of indoor environments with self-supervised methods faces two additional challenges: (i) the depth range of indoor video sequences varies greatly across different frames, making it difficult for the depth network to induce consistent depth cues for training; (ii) indoor sequences recorded with hand-held devices often contain much more rotational motion, which makes it hard for the pose network to predict accurate relative camera poses. In this work, we propose a novel framework, MonoIndoor++, by giving special consideration to these challenges and consolidating a set of good practices for improving the performance of self-supervised monocular depth estimation in indoor environments. First, a depth factorization module with a transformer-based scale regression network is proposed to explicitly estimate a global depth scale factor; the predicted scale factor can indicate the maximum depth value. Second, rather than using a single-stage pose estimation strategy as in previous methods, we propose to utilize a residual pose estimation module to estimate relative camera poses across consecutive frames in successive iterations. Third, to incorporate extensive coordinate guidance for our residual pose estimation module, we propose to perform coordinate convolutional encoding directly over the inputs to the pose network. The proposed method is validated on a variety of benchmark indoor datasets, i.e., EuRoC MAV, NYUv2, ScanNet, and 7-Scenes, demonstrating state-of-the-art performance.
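The coordinate convolutional encoding in the third contribution can be illustrated with a CoordConv-style layer that appends normalized pixel coordinates as extra channels before a standard convolution; this is a generic sketch, not the MonoIndoor++ code.

```python
import torch
import torch.nn as nn

class CoordConv2d(nn.Module):
    """Concatenate normalized x/y coordinate channels, then convolve."""
    def __init__(self, in_ch, out_ch, **conv_kwargs):
        super().__init__()
        self.conv = nn.Conv2d(in_ch + 2, out_ch, **conv_kwargs)

    def forward(self, x):
        b, _, h, w = x.shape
        ys = torch.linspace(-1, 1, h, device=x.device).view(1, 1, h, 1).expand(b, 1, h, w)
        xs = torch.linspace(-1, 1, w, device=x.device).view(1, 1, 1, w).expand(b, 1, h, w)
        return self.conv(torch.cat([x, xs, ys], dim=1))

layer = CoordConv2d(3, 16, kernel_size=3, padding=1)
out = layer(torch.randn(2, 3, 64, 64))   # [2, 16, 64, 64]
```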
Per-pixel ground-truth depth data is challenging to acquire at scale. To overcome this limitation, self-supervised learning has emerged as a promising alternative for training models to perform monocular depth estimation. In this paper, we propose a set of improvements, which together result in both quantitatively and qualitatively improved depth maps compared to competing self-supervised methods. Research on self-supervised monocular training usually explores increasingly complex architectures, loss functions, and image formation models, all of which have recently helped to close the gap with fully-supervised methods. We show that a surprisingly simple model, and associated design choices, lead to superior predictions. In particular, we propose (i) a minimum reprojection loss, designed to robustly handle occlusions, (ii) a full-resolution multi-scale sampling method that reduces visual artifacts, and (iii) an auto-masking loss to ignore training pixels that violate camera motion assumptions. We demonstrate the effectiveness of each component in isolation, and show high quality, state-of-the-art results on the KITTI benchmark.
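Contributions (i) and (iii) are compact enough to sketch: the per-pixel minimum over source-view reprojection errors handles occlusions, and including the identity (unwarped) errors in the same minimum auto-masks pixels that violate the camera-motion assumption. Tensor shapes are assumptions of this sketch.

```python
import torch

def min_reprojection_loss(reproj_errors, identity_errors):
    # Each list entry: per-pixel photometric error [B, 1, H, W] for one source view.
    reproj = torch.cat(reproj_errors, dim=1)                 # [B, S, H, W]
    identity = torch.cat(identity_errors, dim=1)
    identity = identity + torch.randn_like(identity) * 1e-5  # break exact ties
    # Pixels whose unwarped error is already smallest (static scenes, objects
    # moving with the camera) win the minimum and are effectively ignored.
    combined, _ = torch.min(torch.cat([reproj, identity], dim=1), dim=1)
    return combined.mean()
```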
Monocular depth estimation has been actively studied in fields such as robot vision, autonomous driving, and 3D scene understanding. Given a sequence of color images, unsupervised learning methods based on the framework of Structure-From-Motion (SfM) simultaneously predict depth and camera relative pose. However, dynamically moving objects in the scene violate the static world assumption, resulting in inaccurate depths of dynamic objects. In this work, we propose a new method to address such dynamic object movements through monocular 3D object detection. Specifically, we first detect 3D objects in the images and build the per-pixel correspondence of the dynamic pixels with the detected object pose while leaving the static pixels corresponding to the rigid background to be modeled with camera motion. In this way, the depth of every pixel can be learned via a meaningful geometry model. Besides, objects are detected as cuboids with absolute scale, which is used to eliminate the scale ambiguity problem inherent in monocular vision. Experiments on the KITTI depth dataset show that our method achieves state-of-the-art performance for depth estimation. Furthermore, joint training of depth, camera motion and object pose also improves monocular 3D object detection performance. To the best of our knowledge, this is the first work that allows a monocular 3D object detection network to be fine-tuned in a self-supervised manner.
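The per-pixel geometry model can be written as one warping equation with two pose sources; the notation below is ours, summarizing the idea under standard SfM conventions ($K$ intrinsics, $D_t$ predicted depth, $T$ a rigid transform between frames $t$ and $t-1$).

```latex
% Static background pixels follow camera motion:
p_{t-1} \sim K \, T^{\mathrm{cam}}_{t \to t-1} \, D_t(p_t) \, K^{-1} p_t
% Pixels on a detected dynamic object follow that object's estimated pose instead:
p_{t-1} \sim K \, T^{\mathrm{obj}}_{t \to t-1} \, D_t(p_t) \, K^{-1} p_t
```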
Photometric differences are widely used as supervision signals to train neural networks for estimating depth and camera pose from unlabeled monocular videos. However, this approach is detrimental for model optimization because occlusions and moving objects in a scene violate the underlying static scenario assumption. In addition, pixels in textureless regions or less discriminative pixels hinder model training. To solve these problems, in this paper, we deal with moving objects and occlusions utilizing the difference of the flow fields and depth structure generated by affine transformation and view synthesis, respectively. Secondly, we mitigate the effect of textureless regions on model optimization by measuring differences between features with more semantic and contextual information without adding networks. In addition, although the bidirectionality component is used in each sub-objective function, a pair of images is reasoned about only once, which helps reduce overhead. Extensive experiments and visual analysis demonstrate the effectiveness of the proposed method, which outperforms existing state-of-the-art self-supervised methods under the same conditions and without introducing additional auxiliary information.
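For reference, the photometric difference this line of work starts from is usually an SSIM + L1 mix computed between the synthesized and target views; the sketch below shows that standard baseline signal (the paper's contribution is to augment it with flow/depth-structure and feature-level differences).

```python
import torch
import torch.nn.functional as F

def photometric_difference(pred, target, alpha=0.85):
    # Standard per-pixel SSIM + L1 mix; pred/target: [B, 3, H, W] in [0, 1].
    l1 = (pred - target).abs().mean(1, keepdim=True)
    mu_x, mu_y = F.avg_pool2d(pred, 3, 1, 1), F.avg_pool2d(target, 3, 1, 1)
    var_x = F.avg_pool2d(pred ** 2, 3, 1, 1) - mu_x ** 2
    var_y = F.avg_pool2d(target ** 2, 3, 1, 1) - mu_y ** 2
    cov = F.avg_pool2d(pred * target, 3, 1, 1) - mu_x * mu_y
    c1, c2 = 0.01 ** 2, 0.03 ** 2
    ssim = ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
    ssim = ssim.mean(1, keepdim=True).clamp(0, 1)
    return alpha * (1 - ssim) / 2 + (1 - alpha) * l1   # [B, 1, H, W]
```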
Although cameras are ubiquitous, robotic platforms typically rely on active sensors like LiDAR for direct 3D perception. In this work, we propose a novel self-supervised monocular depth estimation method combining geometry with a new deep network, PackNet, learned only from unlabeled monocular videos. Our architecture leverages novel symmetrical packing and unpacking blocks to jointly learn to compress and decompress detail-preserving representations using 3D convolutions. Although self-supervised, our method outperforms other self, semi, and fully supervised methods on the KITTI benchmark. The 3D inductive bias in PackNet enables it to scale with input resolution and number of parameters without overfitting, generalizing better on out-of-domain data such as the NuScenes dataset. Furthermore, it does not require large-scale supervised pretraining on ImageNet and can run in real-time. Finally, we release DDAD (Dense Depth for Automated Driving), a new urban driving dataset with more challenging and accurate depth evaluation, thanks to longer-range and denser ground-truth depth generated from high-density LiDARs mounted on a fleet of self-driving cars operating world-wide.
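The packing idea can be sketched in a few lines: space-to-depth folds spatial detail losslessly into channels, a 3D convolution learns to compress it, and a 2D convolution projects to the target width. This is our reading of the block, not the released PackNet code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PackingBlock(nn.Module):
    def __init__(self, in_ch, out_ch, r=2, d=8):
        super().__init__()
        self.r = r
        self.conv3d = nn.Conv3d(1, d, kernel_size=3, padding=1)   # compress packed detail
        self.conv2d = nn.Conv2d(in_ch * r * r * d, out_ch, kernel_size=3, padding=1)

    def forward(self, x):
        x = F.pixel_unshuffle(x, self.r)      # [B, C*r*r, H/r, W/r], no information lost
        b, c, h, w = x.shape
        x = self.conv3d(x.unsqueeze(1))       # [B, d, C*r*r, H/r, W/r]
        return self.conv2d(x.reshape(b, -1, h, w))

out = PackingBlock(32, 64)(torch.randn(1, 32, 64, 64))   # [1, 64, 32, 32]
```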
Self-supervised learning for depth estimation uses geometry in image sequences for supervision and shows promising results. As with many computer vision tasks, depth network performance is determined by the capability of learning accurate spatial and semantic representations from images. Therefore, it is natural to exploit semantic segmentation networks for depth estimation. In this work, building on a well-developed semantic segmentation network, HRNet, we propose a novel depth estimation network, DIFFNet, which can make use of semantic information in both the down-sampling and up-sampling processes. By applying feature fusion and an attention mechanism, our proposed method outperforms state-of-the-art monocular depth estimation methods on the KITTI benchmark. Our method also demonstrates greater potential with higher-resolution training data. We further propose an extended evaluation strategy by establishing a test set of challenging cases, empirically derived from the standard benchmark.
Supervised learning methods for depth estimation can achieve good performance when trained on high-quality ground truth such as LiDAR data. However, LiDAR can only generate sparse 3D maps, which causes a loss of information. Obtaining high-quality per-pixel ground-truth depth data is hard. To overcome this limitation, we propose a novel approach that combines structure information from a promising plane and parallax geometry pipeline with depth information in a U-Net supervised learning network, which leads to quantitative and qualitative improvements compared with existing popular learning-based methods. In particular, the model is evaluated on two large-scale and challenging datasets, the KITTI Vision Benchmark and the Cityscapes dataset, and achieves the best performance in terms of relative error. Compared with a purely depth-supervised model, our model has impressive performance on depth prediction of thin objects and edges, and compared with a structure-prediction baseline, our model performs more robustly.
Display technologies have evolved over the years. It is critical to develop practical HDR capture, processing, and display solutions to bring 3D technologies to the next level. Depth estimation of multi-exposure stereo image sequences is an essential task in the development of cost-effective 3D HDR video content. In this paper, we develop a novel deep architecture for multi-exposure stereo depth estimation. The proposed architecture has two novel components. First, the stereo matching technique used in traditional stereo depth estimation is revamped. For the stereo depth estimation part of our architecture, a mono-to-stereo transfer learning approach is deployed. The proposed formulation circumvents the cost volume construction requirement, which is replaced by a ResNet-based single-encoder CNN with different weights for feature fusion. EfficientNet-based blocks are used to learn the disparity. Second, we combine the disparity maps obtained from the stereo images at different exposure levels using a robust disparity feature fusion approach. The disparity maps obtained at different exposures are merged using weight maps calculated for different quality measures. The final predicted disparity map is more robust and retains the best features that preserve depth discontinuities. The proposed CNN offers the flexibility to train using standard dynamic range stereo data or multi-exposure low dynamic range stereo sequences. In terms of performance, the proposed model surpasses state-of-the-art monocular and stereo depth estimation methods, both quantitatively and qualitatively, on the challenging Scene Flow and differently exposed Middlebury stereo datasets. The architecture performs exceedingly well in complex natural scenes, demonstrating its usefulness for diverse 3D HDR applications.
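The exposure-wise fusion step reduces to a per-pixel weighted average once the quality weight maps are computed; a minimal version is sketched below, with the weight normalization being our assumption.

```python
import numpy as np

def fuse_disparities(disparity_maps, weight_maps, eps=1e-8):
    # disparity_maps, weight_maps: lists of [H, W] arrays, one per exposure level.
    d = np.stack(disparity_maps)                       # [E, H, W]
    w = np.stack(weight_maps)
    w = w / (w.sum(axis=0, keepdims=True) + eps)       # per-pixel convex weights
    return (d * w).sum(axis=0)                         # fused [H, W] disparity
```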
Although existing monocular depth estimation methods have made great progress, predicting an accurate absolute depth map from a single image remains challenging due to the limited modeling capacity of networks and the scale ambiguity issue. In this paper, we introduce a fully Visual Attention-based Depth (VADepth) network, in which spatial attention and channel attention are applied at all stages. By continuously extracting long-range dependencies of features along the spatial and channel dimensions, the VADepth network can effectively preserve important details and suppress interfering features to better perceive the scene structure and obtain more accurate depth estimates. In addition, we utilize geometric priors to form scale constraints for scale-aware model training. Specifically, we construct a novel scale-aware loss using the distance between the camera and a plane fitted to the ground points corresponding to the pixels of a rectangular area at the bottom middle of the image. Experimental results on the KITTI dataset show that this architecture achieves state-of-the-art performance and that our method can directly output absolute depth without post-processing. Moreover, our experiments on the SeasonDepth dataset also demonstrate the robustness of our model to multiple unseen environments.
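The geometric prior can be made concrete: fit a plane to the 3D points back-projected from the assumed road patch, measure the camera-to-plane distance, and compare it with the known camera mounting height to recover metric scale. The least-squares plane fit below is a sketch under that assumption.

```python
import numpy as np

def scale_from_ground_plane(ground_points, camera_height_m):
    # ground_points: [N, 3] camera-frame points from the bottom-middle image patch.
    centroid = ground_points.mean(axis=0)
    _, _, vh = np.linalg.svd(ground_points - centroid)
    normal = vh[-1]                           # least-squares plane normal
    est_height = abs(normal @ centroid)       # camera-to-plane distance (up to scale)
    return camera_height_m / est_height       # multiply predicted depths by this factor
```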
Monocular depth estimation can play an important role in addressing the issue of deriving scene geometry from 2D images. It has been used in a variety of industries, including robots, self-driving cars, scene comprehension, 3D reconstruction, and others. The goal of our method is to create a lightweight machine-learning model to predict the depth value of each pixel, given only a single RGB image as input, using the U-Net structure of image segmentation networks. We use the NYU Depth V2 dataset to test the structure and compare the result with other methods. The proposed method achieves relatively high accuracy and low root-mean-square error.
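A lightweight U-Net of the kind described can be sketched in a few dozen lines; the channel widths, the two-level encoder, and the sigmoid-normalized output are illustrative choices, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

def conv_block(cin, cout):
    return nn.Sequential(
        nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(cout, cout, 3, padding=1), nn.ReLU(inplace=True))

class TinyUNetDepth(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1, self.enc2 = conv_block(3, 32), conv_block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.Upsample(scale_factor=2, mode='bilinear', align_corners=False)
        self.dec = conv_block(64 + 32, 32)             # decoder sees the skip connection
        self.head = nn.Conv2d(32, 1, 1)

    def forward(self, x):
        e1 = self.enc1(x)                              # full-resolution features (skip)
        e2 = self.enc2(self.pool(e1))                  # half-resolution bottleneck
        d = self.dec(torch.cat([self.up(e2), e1], 1))
        return torch.sigmoid(self.head(d))             # per-pixel depth in (0, 1)

depth = TinyUNetDepth()(torch.randn(1, 3, 480, 640))   # [1, 1, 480, 640]
```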
Perceiving 3D objects from monocular inputs is crucial for robotic systems, given its economy compared to multi-sensor setups. It is notably difficult because a single image cannot provide any clue for predicting absolute depth values. Motivated by binocular methods for 3D object detection, we take advantage of the strong geometric structure provided by camera ego-motion for accurate object depth estimation and detection. We first make a theoretical analysis of this general two-view case and notice two challenges: 1) the cumulative errors from multiple estimations that make direct prediction intractable; 2) the inherent dilemmas caused by static cameras and matching ambiguity. Accordingly, we establish the stereo correspondence with a geometry-aware cost volume as an alternative for depth estimation and further compensate for it with monocular understanding to address the second problem. Our framework, named Depth from Motion (DfM), then uses the established geometry to lift 2D image features to 3D space and detects 3D objects thereon. We also present a pose-free DfM to make it usable when camera poses are unavailable. Our framework outperforms state-of-the-art methods on the KITTI benchmark. Detailed quantitative and qualitative analyses also validate our theoretical conclusions. The code will be released at https://github.com/tai-wang/depth-from-motion.