Modern object detection architectures are moving towards employing self-supervised learning (SSL) to improve detection performance through related pretext tasks. Pretext tasks for monocular 3D object detection have not yet been explored in the literature. This paper studies the application of the established self-supervised bounding-box recycling approach, in which random windows are labeled, as the pretext task. The classifier head of the 3D detector is trained to classify random windows containing different proportions of a ground-truth object, thereby handling the foreground-background imbalance. We evaluate the pretext task using the RTM3D detection model as a baseline, both with and without data augmentation. We demonstrate improvements of 2-3% in mAP 3D and 0.9-1.5% in BEV scores over the baseline when using SSL. We propose the inverse class frequency re-weighted (ICFW) mAP score, which highlights improvements in the detection of low-frequency classes in a long-tailed, class-imbalanced dataset. We demonstrate improvements in the ICFW mAP 3D and BEV scores that account for the class imbalance in the KITTI validation dataset, observing a 4-5% increase in the ICFW metrics from the pretext task.
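As a rough illustration of the ICFW idea described above, the sketch below weights each class AP by its inverse instance frequency before averaging; the exact formula, class counts, and AP values are assumptions for illustration, not taken from the paper.

```python
import numpy as np

def icfw_map(ap_per_class: dict, instances_per_class: dict) -> float:
    """Inverse class-frequency weighted mAP (illustrative sketch).

    Each class AP is weighted by the inverse of its instance count, so
    rare (long-tail) classes contribute more to the aggregate score.
    """
    classes = list(ap_per_class)
    counts = np.array([instances_per_class[c] for c in classes], dtype=float)
    inv_freq = 1.0 / counts
    weights = inv_freq / inv_freq.sum()       # normalise weights to sum to 1
    aps = np.array([ap_per_class[c] for c in classes])
    return float((weights * aps).sum())

# Hypothetical long-tailed class distribution and AP values.
ap = {"Car": 0.72, "Pedestrian": 0.45, "Cyclist": 0.38}
n_instances = {"Car": 28742, "Pedestrian": 4487, "Cyclist": 1627}
print(icfw_map(ap, n_instances))  # dominated by the rarer Pedestrian/Cyclist classes
```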
Monocular 3D object detection is an essential perception task for autonomous driving. However, the heavy reliance on large amounts of labeled data makes model optimization expensive and time-consuming. To reduce this over-reliance on human annotation, we propose Mix-Teaching, an effective semi-supervised learning framework that exploits both labeled and unlabeled images during training. Mix-Teaching first generates pseudo-labels for unlabeled images via self-training. The student model is then trained on mixed images carrying denser and more precise labels, produced by pasting instance-level image patches into empty backgrounds or labeled images. This is the first approach to break the image-level limitation and place high-quality pseudo-labels from multiple frames into a single image for semi-supervised training. Moreover, because of the misalignment between confidence scores and localization quality, it is difficult to separate high-quality pseudo-labels from noisy predictions using a confidence-based criterion alone. We therefore further introduce an uncertainty-based filter to help select reliable pseudo boxes for the mixing operation above. To our knowledge, this is the first unified SSL framework for monocular 3D object detection. Under various labeling ratios on the KITTI dataset, Mix-Teaching consistently improves MonoFlex and GUPNet by large margins. For example, our method yields about +6.34% AP@0.7 over the GUPNet baseline on the validation set when using only 10% labeled data. Furthermore, by exploiting the full training set and an additional 48K raw KITTI images, it further improves MonoFlex by +4.65% AP@0.7 for car detection, reaching 18.54% AP@0.7 among monocular-based methods on the KITTI test leaderboard. Code and pretrained models will be released at https://github.com/yanglei18/mix-teaching.
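A minimal sketch of the kind of uncertainty-based pseudo-label filter described above: a pseudo box is kept only if its classification confidence is high and its estimated localization uncertainty is low. The field names, thresholds, and the way the uncertainty is obtained are assumptions for illustration, not the paper's implementation.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class PseudoBox:
    cls_score: float          # teacher's classification confidence
    loc_uncertainty: float    # assumed localization-uncertainty estimate
    box3d: Tuple[float, ...]  # (x, y, z, h, w, l, yaw)

def filter_pseudo_boxes(boxes: List[PseudoBox],
                        conf_thresh: float = 0.7,
                        unc_thresh: float = 0.3) -> List[PseudoBox]:
    """Keep pseudo labels that are both confident and well localized."""
    return [b for b in boxes
            if b.cls_score >= conf_thresh and b.loc_uncertainty <= unc_thresh]
```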
We present AVOD, an Aggregate View Object Detection network for autonomous driving scenarios. The proposed neural network architecture uses LIDAR point clouds and RGB images to generate features that are shared by two subnetworks: a region proposal network (RPN) and a second stage detector network. The proposed RPN uses a novel architecture capable of performing multimodal feature fusion on high resolution feature maps to generate reliable 3D object proposals for multiple object classes in road scenes. Using these proposals, the second stage detection network performs accurate oriented 3D bounding box regression and category classification to predict the extents, orientation, and classification of objects in 3D space. Our proposed architecture is shown to produce state of the art results on the KITTI 3D object detection benchmark [1] while running in real time with a low memory footprint, making it a suitable candidate for deployment on autonomous vehicles. Code is at: https://github.com/kujason/avod
The goal of this paper is to perform 3D object detection from a single monocular image in the domain of autonomous driving. Our method first aims to generate a set of candidate class-specific object proposals, which are then run through a standard CNN pipeline to obtain high-quality object detections. The focus of this paper is on proposal generation. In particular, we propose an energy minimization approach that places object candidates in 3D using the fact that objects should be on the ground-plane. We then score each candidate box projected to the image plane via several intuitive potentials encoding semantic segmentation, contextual information, size and location priors and typical object shape. Our experimental evaluation demonstrates that our object proposal generation approach significantly outperforms all monocular approaches, and achieves the best detection performance on the challenging KITTI benchmark, among published monocular competitors.
Monocular 3D object detection is a critical yet challenging task for autonomous driving because it lacks the accurate depth information that LiDAR sensors capture. In this paper, we propose a stereo-guided monocular 3D object detection network, termed SGM3D, which leverages robust 3D features extracted from stereo images to enhance the features learned from a monocular image. We innovatively investigate a multi-granularity domain adaptation module (MG-DA) that teaches the network to produce stereo-mimicking features from monocular cues alone; both coarse BEV-feature-level and fine anchor-level domain adaptation are used to guide the monocular branch. We further introduce an IoU-matching-based alignment module (IoU-MA) for object-level domain adaptation between the stereo and monocular domains to alleviate mismatches from the previous stages. We conduct extensive experiments on the challenging KITTI and Lyft datasets and achieve new state-of-the-art performance. Moreover, our method can be integrated into many other monocular approaches to boost their performance without introducing any extra computational cost.
Compared to typical multi-sensor systems, monocular 3D object detection has attracted much attention due to its simple configuration. However, there is still a significant gap between LiDAR-based and monocular-based methods. In this paper, we find that the ill-posed nature of monocular imagery can lead to depth ambiguity. Specifically, objects with different depths can appear with the same bounding boxes and similar visual features in the 2D image. Unfortunately, the network cannot accurately distinguish different depths from such non-discriminative visual features, resulting in unstable depth training. To facilitate depth learning, we propose a simple yet effective plug-and-play module, One Bounding Box Multiple Objects (OBMO). Concretely, we add a set of suitable pseudo labels by shifting the 3D bounding box along the viewing frustum. To constrain the pseudo-3D labels to be reasonable, we carefully design two label scoring strategies to represent their quality. In contrast to the original hard depth labels, such soft pseudo labels with quality scores allow the network to learn a reasonable depth range, boosting training stability and thus improving final performance. Extensive experiments on KITTI and Waymo benchmarks show that our method improves state-of-the-art monocular 3D detectors by a significant margin (the improvements under the moderate setting on the KITTI validation set are $\mathbf{1.82\sim 10.91\%}$ mAP in BEV and $\mathbf{1.18\sim 9.36\%}$ mAP in 3D). Codes have been released at https://github.com/mrsempress/OBMO.
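A hedged sketch of the core OBMO operation as described above: scaling an object's camera-space centre along the viewing ray changes its depth while leaving the projected 2D location unchanged, which is what makes the shifted boxes usable as pseudo labels. The ratios and variable names are illustrative assumptions, not the paper's values.

```python
import numpy as np

def shift_along_frustum(center_cam: np.ndarray, ratios=(0.95, 1.0, 1.05)):
    """Generate pseudo 3D centres by sliding the box along the viewing ray.

    center_cam: object centre (x, y, z) in camera coordinates.
    Scaling the whole centre keeps x/z and y/z fixed, so the projected 2D
    location is unchanged while the depth varies.
    """
    return [center_cam * r for r in ratios]

center = np.array([2.0, 1.5, 20.0])           # hypothetical object centre
pseudo_centers = shift_along_frustum(center)  # same image point, new depths
```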
Training a neural network to perform 3D object detection for autonomous driving requires a large amount of annotated data. However, obtaining training data of sufficient quality and quantity is expensive and sometimes impossible due to human and sensor limitations. A new solution is therefore needed to extend current training methods, overcome this limitation, and enable accurate 3D object detection. Our solution to this problem combines semi-pseudo-labeling and novel 3D augmentations. To demonstrate the applicability of the proposed method, we design a convolutional neural network for 3D object detection that can significantly increase the detection range relative to the training data distribution.
State-of-the-art LiDAR-based 3D object detection methods rely on supervised learning and large labeled datasets. However, annotating LiDAR data is resource-intensive, and depending only on supervised learning limits the applicability of trained models. Self-supervised training strategies can alleviate these issues by learning a general point cloud backbone model for downstream 3D vision tasks. Against this backdrop, we show the relationship between self-supervised multi-frame flow representations and single-frame 3D detection hypotheses. Our main contribution leverages flow and motion representations and combines a self-supervised backbone with a supervised 3D detection head. First, a self-supervised scene flow estimation model is trained with cycle consistency. Then, the point cloud encoder of this model is used as the backbone of a single-frame 3D object detection head model. The second 3D object detection model learns to utilize motion representations to distinguish dynamic objects exhibiting different movement patterns. Experiments on the KITTI and nuScenes benchmarks show that the proposed self-supervised pre-training increases 3D detection performance significantly. https://github.com/emecercelik/ssl-3d-detection.git
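A minimal sketch of the cycle-consistency objective used for the self-supervised scene-flow pre-training described above: forward flow warps points towards the next frame, and backward flow should bring them home. The flow-model interface and the dummy example are assumptions for illustration.

```python
import torch

def cycle_consistency_loss(flow_model, pc_t, pc_t1):
    """pc_t, pc_t1: (N, 3) point clouds from consecutive lidar sweeps."""
    flow_fwd = flow_model(pc_t, pc_t1)       # (N, 3) forward scene flow
    warped = pc_t + flow_fwd                 # points moved towards frame t+1
    flow_bwd = flow_model(warped, pc_t)      # flow back towards frame t
    cycled = warped + flow_bwd
    return torch.norm(cycled - pc_t, dim=1).mean()  # should approach zero

# Dummy zero-flow model just to show the call signature (assumption).
dummy_flow = lambda src, dst: torch.zeros_like(src)
pc0, pc1 = torch.randn(1024, 3), torch.randn(1024, 3)
print(cycle_consistency_loss(dummy_flow, pc0, pc1))  # tensor(0.)
```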
Object detection and semantic segmentation on 3D LiDAR point cloud data require expensive annotation. We propose a data augmentation method that exploits already-annotated data multiple times. We present an augmentation framework that reuses real data, automatically finds suitable placements in the scene to be augmented, and handles occlusions explicitly. Because real data are used, the scan points of a newly inserted object preserve the physical characteristics of the LiDAR, such as intensity and ray drop. The pipeline proves competitive for training top-performing models for 3D object detection and semantic segmentation. The new augmentation provides significant performance gains for rare and essential classes, notably a 6.65% average-precision gain for the "hard" pedestrian class in KITTI object detection and a 2.14 mean-IoU gain over the state of the art on the SemanticKITTI segmentation challenge.
3D object detection from LiDAR data for autonomous driving has made remarkable progress in recent years. Among state-of-the-art methods, encoding point clouds into a bird's-eye view (BEV) has proven effective and efficient. Unlike the perspective view, the BEV preserves rich spatial and distance information between objects; while more distant objects of the same type do not appear smaller in the BEV, their point cloud features are sparse. This weakens BEV feature extraction with shared convolutional neural networks. To address this challenge, we propose the Range-Aware Attention Network (RAANet), which extracts more powerful BEV features and produces superior 3D object detections. The range-aware attention (RAA) convolutions significantly improve near-range feature extraction. In addition, we propose a novel auxiliary loss for density estimation to further enhance RAANet's detection accuracy for occluded objects. Notably, the proposed RAA convolution is lightweight and compatible for integration into any CNN architecture used for BEV detection. Extensive experiments on the nuScenes dataset show that our method outperforms state-of-the-art LiDAR-based 3D object detection methods, with a real-time inference speed of 16 Hz, or 22 Hz for the lite version. The code is publicly available at the anonymous GitHub repository https://github.com/Anonymous0522/ange.
Although a growing number of datasets are being collected for training 3D object detection models, annotating 3D boxes on LiDAR scans still requires substantial human effort. To automate annotation and facilitate the production of various customized datasets, we propose an end-to-end multimodal transformer (MTrans) auto-labeler that exploits both LiDAR scans and images to generate precise 3D box annotations from weak 2D bounding boxes. To alleviate the pervasive sparsity problem that hinders existing auto-labelers, MTrans densifies the sparse point cloud by generating new 3D points based on 2D image information. With a multi-task design, MTrans segments foreground from background, densifies the LiDAR point cloud, and regresses 3D boxes simultaneously. Experimental results verify the effectiveness of MTrans in improving the quality of the generated labels. By enriching the sparse point clouds, our method achieves 4.48% and 4.03% better 3D AP on KITTI moderate and hard samples, respectively, than the state-of-the-art auto-labeler. MTrans can also be extended to improve the accuracy of 3D object detection, yielding a remarkable 89.45% AP on KITTI hard samples. The code is at https://github.com/cliu2/mtrans.
Monocular 3D object detection aims to localize 3D bounding boxes in a single input 2D image. It is a highly challenging and still open problem, especially when no extra information (e.g., depth, LiDAR, and/or multiple frames) can be leveraged in training and/or inference. This paper proposes a simple yet effective formulation for monocular 3D object detection that does not exploit any extra information. It presents MonoCon, which learns auxiliary monocular contexts during training to help monocular 3D object detection. The key idea is that, given the annotated 3D bounding boxes of objects in an image, there is a rich set of well-posed projected 2D supervision signals available in training, such as the projected corner keypoints and their associated offset vectors with respect to the center of the 2D bounding box, which should be exploited as auxiliary tasks during training. At a high level, the proposed MonoCon is motivated by the Cramér-Wold theorem in measure theory. In implementation, it uses a very simple end-to-end design to justify the effectiveness of learning auxiliary monocular contexts, consisting of three components: a deep-neural-network (DNN) based feature backbone, a number of regression head branches for learning the essential parameters used in 3D bounding box prediction, and a number of regression head branches for learning the auxiliary contexts. After training, the auxiliary context regression branches are discarded for better inference efficiency. In experiments, the proposed MonoCon is evaluated on the KITTI benchmark (car, pedestrian, and cyclist). It outperforms all prior art on the leaderboard for the car category and obtains comparable accuracy for pedestrians and cyclists. Thanks to the simple design, the proposed MonoCon method also obtains the fastest inference speed in the comparison, at 38.7 fps.
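A hedged sketch of how the auxiliary 2D supervision signals mentioned above can be derived from an annotated 3D box: the eight corners are projected with the camera intrinsics and offsets are taken relative to the 2D box centre. Corner ordering and conventions are assumptions for illustration.

```python
import numpy as np

def corner_keypoint_targets(corners_3d: np.ndarray, K: np.ndarray,
                            center_2d: np.ndarray):
    """corners_3d: (8, 3) box corners in camera coordinates.
    Returns the projected corner keypoints (8, 2) and their offset vectors
    relative to the 2D box centre, usable as auxiliary regression targets."""
    proj = (K @ corners_3d.T).T              # (8, 3) homogeneous image points
    keypoints = proj[:, :2] / proj[:, 2:3]   # perspective divide
    offsets = keypoints - center_2d          # offsets w.r.t. 2D box centre
    return keypoints, offsets
```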
Monocular 3D object detection is of great significance for autonomous driving but remains challenging. The core challenge is to predict the distance of objects in the absence of explicit depth information. Unlike most existing methods, which regress the distance as a single variable, we propose a geometry-based distance decomposition that recovers the distance from its factors. The decomposition factors the distance of an object into its most representative and stable variables, namely the physical height and the projected visual height in the image plane. Moreover, the decomposition maintains the self-consistency between the two heights, leading to robust distance prediction even when both predicted heights are inaccurate. The decomposition also allows us to trace the causes of distance uncertainty in different scenarios, making the distance prediction interpretable, accurate, and robust. Our method directly predicts 3D bounding boxes from RGB images with a compact architecture, keeping training and inference simple and efficient. Experimental results show that our method achieves state-of-the-art performance on the monocular 3D object detection and bird's-eye-view tasks of the KITTI dataset and generalizes to images with different camera intrinsics.
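The decomposition described above follows pinhole geometry: an object of physical height H at distance Z projects to a visual height h = f_y * H / Z, so Z can be recovered from the two predicted heights. A minimal sketch under that assumption (the focal length and example values are illustrative):

```python
def distance_from_heights(H_metres: float, h_pixels: float, f_y: float) -> float:
    """Pinhole relation h = f_y * H / Z rearranged to recover the distance Z."""
    return f_y * H_metres / h_pixels

# A 1.5 m tall object spanning 50 px under a ~721 px focal length
# (KITTI-like, illustrative) is roughly 21.6 m away.
print(distance_from_heights(1.5, 50.0, 721.5))
```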
Determining accurate bird's eye view (BEV) positions of objects and tracks in a scene is vital for various perception tasks including object interactions mapping, scenario extraction etc., however, the level of supervision required to accomplish that is extremely challenging to procure. We propose a light-weight, weakly supervised method to estimate 3D position of objects by jointly learning to regress the 2D object detections and scene's depth prediction in a single feed-forward pass of a network. Our proposed method extends a center-point based single-shot object detector \cite{zhou2019objects}, and introduces a novel object representation where each object is modeled as a BEV point spatio-temporally, without the need of any 3D or BEV annotations for training and LiDAR data at query time. The approach leverages readily available 2D object supervision along with LiDAR point clouds (used only during training) to jointly train a single network, that learns to predict 2D object detection alongside the whole scene's depth, to spatio-temporally model object tracks as points in BEV. The proposed method is computationally over $\sim$10x more efficient compared to recent SOTA approaches [1, 38] while achieving comparable accuracies on KITTI tracking benchmark.
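A hedged sketch of the BEV point representation described above: a 2D detection centre plus the predicted scene depth is lifted to a ground-plane position with the camera intrinsics. The intrinsics, variable names, and example values are illustrative assumptions.

```python
import numpy as np

def detection_to_bev_point(u: float, depth: float, K: np.ndarray) -> np.ndarray:
    """Lift a 2D detection centre column u with predicted depth to a BEV point."""
    fx, cx = K[0, 0], K[0, 2]
    x = (u - cx) * depth / fx        # lateral offset in metres
    return np.array([x, depth])      # (lateral, forward) ground-plane position

K = np.array([[721.5, 0.0, 609.6],
              [0.0, 721.5, 172.9],
              [0.0, 0.0, 1.0]])      # KITTI-like intrinsics (illustrative)
print(detection_to_bev_point(700.0, 15.0, K))  # ~[1.88, 15.0]
```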
Figure 1: Results obtained from our single image, monocular 3D object detection network MonoDIS on a KITTI3D test image with corresponding birds-eye view, showing its ability to estimate size and orientation of objects at different scales.
In this paper, we propose a novel 3D object detector that can exploit both LIDAR as well as cameras to perform very accurate localization. Towards this goal, we design an end-to-end learnable architecture that exploits continuous convolutions to fuse image and LIDAR feature maps at different levels of resolution. Our proposed continuous fusion layer encodes both discrete-state image features as well as continuous geometric information. This enables us to design a novel, reliable and efficient end-to-end learnable 3D object detector based on multiple sensors. Our experimental evaluation on both KITTI as well as a large scale 3D object detection benchmark shows significant improvements over the state of the art.
Monocular 3D object detection is a key problem for autonomous vehicles, as it provides a solution with simple configuration compared to typical multi-sensor systems. The main challenge in monocular 3D detection lies in accurately predicting object depth, which must be inferred from object and scene cues due to the lack of direct range measurement. Many methods attempt to directly estimate depth to assist in 3D detection, but show limited performance as a result of depth inaccuracy. Our proposed solution, Categorical Depth Distribution Network (CaDDN), uses a predicted categorical depth distribution for each pixel to project rich contextual feature information to the appropriate depth interval in 3D space. We then use the computationally efficient bird's-eye-view projection and single-stage detector to produce the final output detections. We design CaDDN as a fully differentiable end-to-end approach for joint depth estimation and object detection. We validate our approach on the KITTI 3D object detection benchmark, where we rank 1st among published monocular methods. We also provide the first monocular 3D detection results on the newly released Waymo Open Dataset. We provide a code release for CaDDN which is made available here.
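A minimal sketch of the central operation the abstract describes: each pixel's feature vector is spread across discrete depth bins according to its predicted categorical depth distribution, producing a frustum feature volume that can then be collapsed to BEV. Tensor shapes and the number of depth bins are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def frustum_features(image_feats: torch.Tensor, depth_logits: torch.Tensor) -> torch.Tensor:
    """image_feats: (B, C, H, W) per-pixel features.
    depth_logits: (B, D, H, W) unnormalised scores over D depth bins.
    Returns (B, C, D, H, W): each pixel's features spread over depth bins
    according to its categorical depth distribution."""
    depth_probs = F.softmax(depth_logits, dim=1)                 # categorical over depth
    return image_feats.unsqueeze(2) * depth_probs.unsqueeze(1)   # per-pixel outer product

feats = torch.randn(1, 64, 24, 80)
logits = torch.randn(1, 80, 24, 80)            # 80 depth bins (illustrative)
print(frustum_features(feats, logits).shape)   # torch.Size([1, 64, 80, 24, 80])
```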
In this paper we propose to exploit multiple related tasks for accurate multi-sensor 3D object detection. Towards this goal we present an end-to-end learnable architecture that reasons about 2D and 3D object detection as well as ground estimation and depth completion. Our experiments show that all these tasks are complementary and help the network learn better representations by fusing information at various levels. Importantly, our approach leads the KITTI benchmark on 2D, 3D and bird's eye view object detection, while being real-time.
In this work, we study 3D object detection from RGB-D data in both indoor and outdoor scenes. While previous methods focus on images or 3D voxels, often obscuring natural 3D patterns and invariances of 3D data, we directly operate on raw point clouds by popping up RGB-D scans. However, a key challenge of this approach is how to efficiently localize objects in point clouds of large-scale scenes (region proposal). Instead of solely relying on 3D proposals, our method leverages both mature 2D object detectors and advanced 3D deep learning for object localization, achieving efficiency as well as high recall for even small objects. Benefited from learning directly in raw point clouds, our method is also able to precisely estimate 3D bounding boxes even under strong occlusion or with very sparse points. Evaluated on KITTI and SUN RGB-D 3D detection benchmarks, our method outperforms the state of the art by remarkable margins while having real-time capability.
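A hedged sketch of the 2D-region-to-3D-frustum step described above: points from the depth-popped RGB-D scan are kept if their image projection falls inside the 2D detection box. The projection convention and variable names are assumptions for illustration.

```python
import numpy as np

def points_in_frustum(points_cam: np.ndarray, box2d, K: np.ndarray) -> np.ndarray:
    """points_cam: (N, 3) points in camera coordinates (z forward).
    box2d: (u_min, v_min, u_max, v_max) from a 2D detector.
    Returns the points whose image projection lies inside the 2D box."""
    u_min, v_min, u_max, v_max = box2d
    z = points_cam[:, 2]
    with np.errstate(divide="ignore", invalid="ignore"):
        u = K[0, 0] * points_cam[:, 0] / z + K[0, 2]
        v = K[1, 1] * points_cam[:, 1] / z + K[1, 2]
    mask = (z > 0) & (u >= u_min) & (u <= u_max) & (v >= v_min) & (v <= v_max)
    return points_cam[mask]
```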
It is well recognized that fusing the complementary information from depth-aware LiDAR point clouds and semantically rich stereo images would benefit 3D object detection. However, exploring the inherently unnatural interaction between sparse 3D points and dense 2D pixels is non-trivial. To ease this difficulty, recent proposals generally project the 3D points onto the 2D image plane to sample the image data and then aggregate the data at the points. This approach, however, often suffers from the mismatch between the resolution of point clouds and RGB images, leading to sub-optimal performance. Specifically, taking the sparse points as the multi-modal data aggregation locations causes severe information loss for high-resolution images, which in turn undermines the effectiveness of multi-sensor fusion. In this paper, we present VPFNet, a new architecture that cleverly aligns and aggregates point cloud and image data at "virtual" points. In particular, with a density lying between that of the 3D points and the 2D pixels, the virtual points can nicely bridge the resolution gap between the two sensors and thus preserve more information for processing. We also investigate data augmentation techniques that can be applied to both point clouds and RGB images, since data augmentation has made a non-negligible contribution to 3D object detectors to date. We have conducted extensive experiments on the KITTI dataset and observed good performance compared to state-of-the-art methods. Remarkably, our VPFNet achieves 83.21% moderate 3D AP and 91.86% moderate BEV AP on the KITTI test set, ranking first since May 21st, 2021. The network design also takes computational efficiency into consideration: we achieve an FPS of 15 on a single NVIDIA RTX 2080Ti GPU. The code will be made available for reproduction and further investigation.