This paper proposes a method to extract the positions and poses of vehicles in the 3D world from a single traffic camera. Most previous monocular 3D vehicle detection algorithms focus on cameras mounted on vehicles, viewing the scene from the driver's perspective, and assume known intrinsic and extrinsic calibration. In contrast, this paper addresses the same task with an uncalibrated monocular traffic camera. We observe that the homography between the road plane and the image plane is essential both for 3D vehicle detection and for synthesizing data for this task, and that this homography can be estimated without the camera intrinsics and extrinsics. 3D vehicles are detected by estimating rotated bounding boxes (r-boxes) in the bird's-eye-view (BEV) images produced by inverse perspective mapping. We propose a new regression target called tailed r-box and a dual-view network architecture that improves detection accuracy on the warped BEV images. Experiments show that the proposed method generalizes to new camera and environment setups even though their images were never seen during training.
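To make the BEV construction above concrete, the following is a minimal sketch (not the paper's implementation) of inverse perspective mapping with OpenCV, assuming the road-plane homography is fixed by four illustrative image-to-ground correspondences:

    # Minimal sketch of inverse perspective mapping (IPM) to a bird's-eye view.
    # The four correspondences and the BEV resolution are illustrative values,
    # not the paper's; in practice they come from known road geometry.
    import cv2
    import numpy as np

    # Four road-surface points in the camera image (pixels) ...
    img_pts = np.float32([[520, 430], [760, 430], [1100, 700], [180, 700]])
    # ... and their assumed positions in the BEV image (e.g. 1 px = 5 cm).
    bev_pts = np.float32([[300, 100], [500, 100], [500, 700], [300, 700]])

    H = cv2.getPerspectiveTransform(img_pts, bev_pts)   # 3x3 road-plane homography
    frame = cv2.imread("traffic_frame.jpg")             # hypothetical input frame
    bev = cv2.warpPerspective(frame, H, (800, 800))     # warped bird's-eye view
    # Rotated boxes (r-boxes) can be regressed directly in `bev`;
    # np.linalg.inv(H) maps BEV points back to the original image.

Because only the road-plane homography enters the warp, no intrinsic or extrinsic calibration is required at this stage.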
Vision-centric BEV perception has recently attracted attention from both industry and academia thanks to its inherent advantages, including presenting a natural representation of the world and being fusion-friendly. With the rapid development of deep learning, many methods have been proposed to address vision-centric BEV perception, yet there is no recent survey of this novel and fast-growing research field. To stimulate future research, this paper presents a comprehensive survey of vision-centric BEV perception and its extensions. It collects and organizes recent knowledge and gives a systematic review and summary of commonly used algorithms. It also provides in-depth analyses and comparative results for several BEV perception tasks, facilitating comparison in future work and inspiring future research directions. In addition, empirical implementation details are discussed and shown to benefit the development of related algorithms.
Computer vision plays a major role in intelligent transportation systems (ITS) and traffic surveillance. Alongside the rapidly growing number of automated vehicles and increasingly crowded cities, automated and advanced traffic management systems (ATMS) built on deep neural networks can leverage existing video surveillance infrastructure. In this study, we present a practical platform for real-time traffic monitoring, including 3D vehicle/pedestrian detection, speed estimation, trajectory estimation, congestion detection, and monitoring of vehicle-pedestrian interactions, all using a single CCTV traffic camera. We adapt a customized YOLOv5 deep neural network model for vehicle/pedestrian detection together with an enhanced SORT tracking algorithm. A hybrid satellite-ground based inverse perspective mapping (SG-IPM) method is also developed for automatic camera calibration, leading to accurate 3D object detection and visualization. We further develop a hierarchical traffic modelling solution based on short- and long-term temporal video data streams to understand traffic flow, bottlenecks, and risky spots for vulnerable road users. Several experiments on real-world scenarios, with comparisons against the state of the art, are conducted on various traffic monitoring datasets, including MIO-TCD, UA-DETRAC, and GRAM-RTM, collected from highways, intersections, and urban areas under different lighting and weather conditions.
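As an illustration of how such a calibration can be used downstream, here is a small sketch (my own, not the platform's SG-IPM code) of speed estimation from a tracked footpoint, assuming an image-to-ground homography in metres and a known frame rate:

    # Sketch of speed estimation from a tracked vehicle footpoint. Assumes an
    # image->ground homography H_img2ground in metres (e.g. from an IPM-style
    # calibration) and a fixed camera frame rate; both are placeholders here.
    import numpy as np
    import cv2

    def pixel_to_ground(pt_xy, H_img2ground):
        """Map one image point (x, y) to ground-plane coordinates in metres."""
        pt = np.float32([[pt_xy]])                      # shape (1, 1, 2) for OpenCV
        return cv2.perspectiveTransform(pt, H_img2ground)[0, 0]

    def track_speed_kmh(footpoints, H_img2ground, fps=25.0):
        """Average speed over a short track of bottom-centre bounding-box points."""
        ground = np.array([pixel_to_ground(p, H_img2ground) for p in footpoints])
        dist_m = np.linalg.norm(np.diff(ground, axis=0), axis=1).sum()
        time_s = (len(footpoints) - 1) / fps
        return 3.6 * dist_m / time_s if time_s > 0 else 0.0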
Drones equipped with cameras can significantly enhance the human ability to perceive the world, offering remarkable maneuverability in 3D space. Ironically, object detection for drones has always been conducted in the 2D image space, which fundamentally limits their ability to understand 3D scenes. Moreover, existing 3D object detection methods developed for autonomous driving cannot be directly applied to drones due to the lack of deformation modelling, which is essential for the distant aerial perspective with its severe view deformation and small objects. To fill the gap, this work proposes a dual-view detection system named DVDET to achieve aerial monocular object detection in both the 2D image space and the 3D physical space. To address the severe view-deformation issue, we propose a trainable deformable transformation module that properly warps information from the drone's perspective to BEV. Compared with monocular methods for cars, our transformation includes a learnable deformable network that explicitly corrects the severe deviation. To address the dataset challenge, we propose a new large-scale simulation dataset named AM3D-Sim, generated by the co-simulation of AirSim and CARLA, and a new real-world aerial dataset named AM3D-Real, collected by a DJI Matrice 300 RTK; both datasets provide high-quality annotations for 3D object detection. Extensive experiments show that i) aerial monocular 3D object detection is feasible; ii) a model pre-trained on the simulation dataset benefits real-world performance; and iii) DVDET also benefits monocular 3D object detection for cars. To encourage more researchers to investigate this area, we will release the datasets and related code at https://sjtu-magic.github.io/dataset/am3d/.
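The abstract's learnable warp can be pictured as a base image-to-BEV sampling grid plus trainable offsets; the toy PyTorch module below sketches that idea (shapes and the module name are my own assumptions, not the DVDET implementation):

    # Toy sketch of a deformable image-to-BEV warp: a fixed base sampling grid
    # (e.g. derived from an IPM homography, in normalised [-1, 1] coordinates)
    # plus small learnable offsets, applied with grid_sample.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class DeformableBEVWarp(nn.Module):
        def __init__(self, base_grid):              # base_grid: (1, Hb, Wb, 2) in [-1, 1]
            super().__init__()
            self.register_buffer("base_grid", base_grid)
            self.offset = nn.Parameter(torch.zeros_like(base_grid))  # learned residual

        def forward(self, feat):                    # feat: (B, C, H, W) image features
            grid = (self.base_grid + self.offset).clamp(-1, 1)
            grid = grid.expand(feat.size(0), -1, -1, -1)
            return F.grid_sample(feat, grid, align_corners=False)    # (B, C, Hb, Wb)

    # Usage: warp = DeformableBEVWarp(base_grid); bev_feat = warp(image_features)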
We present a novel and pragmatic framework for traffic scene perception with roadside cameras. The proposed framework covers the full stack of a roadside perception pipeline for infrastructure-assisted autonomous driving, including object detection, object localization, object tracking, and multi-camera information fusion. Unlike previous vision-based perception frameworks that rely on depth offsets or 3D annotations during training, we adopt a modular decoupled design and introduce a landmark-based 3D localization method, where detection and localization are well decoupled so that the model can be trained with only 2D annotations. The proposed framework is applicable to optical or thermal cameras with pinhole or fisheye lenses. Our framework is deployed at a two-lane roundabout located at Ellsworth Rd. and State St., Ann Arbor, Michigan, USA, providing 7x24 real-time traffic-flow monitoring and high-precision vehicle trajectory extraction. The whole system runs efficiently on a low-power edge computing device, with an end-to-end latency of less than 20 ms.
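A simplified sketch of the landmark idea, assuming at least four surveyed ground landmarks are visible (placeholder values below): it fits an image-to-world homography once and then localizes each detection at the bottom centre of its 2D box. The real pipeline additionally handles fisheye undistortion, tracking, and multi-camera fusion:

    # Simplified landmark-based localization for a roadside camera: fit an
    # image->world homography from surveyed ground landmarks, then map each
    # detection's bottom-centre point to world coordinates (metres).
    # Landmark values are placeholders; fisheye images need undistortion first.
    import numpy as np
    import cv2

    landmarks_px = np.float32([[410, 620], [905, 655], [1230, 540], [660, 470]])
    landmarks_m  = np.float32([[0.0, 0.0], [6.2, 0.5], [11.8, 7.4], [3.1, 9.0]])
    H, _ = cv2.findHomography(landmarks_px, landmarks_m)

    def localize(bbox_xyxy):
        """World (x, y) of a detection, taken at the bottom centre of its box."""
        x1, y1, x2, y2 = bbox_xyxy
        foot = np.float32([[[(x1 + x2) / 2.0, y2]]])
        return cv2.perspectiveTransform(foot, H)[0, 0]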
Surround-view fisheye perception under valet parking scenes is fundamental and crucial in autonomous driving. Environmental conditions in parking lots differ from those in the common public datasets, such as imperfect lighting and opacity, which substantially impacts perception performance. Most existing networks based on public datasets may generalize suboptimally to these valet parking scenes, and are further affected by fisheye distortion. In this article, we introduce a new large-scale fisheye dataset called the Fisheye Parking Dataset (FPD) to promote research on diverse real-world surround-view parking cases. Notably, our compiled FPD exhibits excellent characteristics for different surround-view perception tasks. In addition, we also propose a real-time distortion-insensitive multi-task framework, the Fisheye Perception Network (FPNet), which improves surround-view fisheye BEV perception by enhancing the fisheye distortion operation and multi-task lightweight designs. Extensive experiments validate the effectiveness of our approach and the dataset's exceptional generalizability.
Learning powerful representations in bird's-eye view (BEV) for perception tasks is a trend that has attracted extensive attention from both industry and academia. Conventional approaches in most autonomous driving algorithms perform detection, segmentation, tracking, etc., in the front or perspective view. As sensor configurations become increasingly complex, integrating multi-source information from different sensors and representing features in a unified view is of vital importance. BEV perception inherits several advantages: representing the surrounding scene in BEV is intuitive and fusion-friendly, and representing objects in BEV is most desirable for subsequent modules such as planning and/or control. The core problems of BEV perception lie in (a) how to reconstruct the lost 3D information via the view transformation from the perspective view to BEV; (b) how to acquire ground-truth annotations in the BEV grid; (c) how to formulate a pipeline that incorporates features from different sources and views; and (d) how to adapt and generalize algorithms as sensor configurations vary across different scenarios. In this survey, we review the most recent work on BEV perception and provide an in-depth analysis of different solutions. In addition, several system designs of BEV approaches from industry are described. Furthermore, we present a full suite of practical guidebooks to improve the performance of BEV perception tasks, covering camera, LiDAR, and fusion inputs. Finally, we point out future research directions in this field. We hope this report sheds some light on the community and encourages more research on BEV perception. We keep an active repository that collects the latest work and provides a toolbox with a bag of tricks at https://github.com/openperceptionx/bevperception-survey-recipe.
Methods for 3D lane detection have recently been proposed to address the issue of inaccurate lane layouts in many autonomous driving scenarios (uphill/downhill, bumps, etc.). Previous work struggles in complex cases due to its simplistic design of the spatial transformation between the front view and the bird's-eye view (BEV), and the lack of a realistic dataset. Towards these issues, we present PersFormer: an end-to-end monocular 3D lane detector with a novel Transformer-based spatial feature transformation module. Our model generates BEV features by attending to related front-view local regions with camera parameters as a reference. PersFormer adopts a unified 2D/3D anchor design and an auxiliary task to detect 2D/3D lanes simultaneously, enhancing feature consistency and sharing the benefits of multi-task learning. Moreover, we release one of the first large-scale real-world 3D lane datasets: OpenLane, with high-quality annotations and scene diversity. OpenLane contains 200,000 frames, over 880,000 instance-level lanes, and 14 lane categories, along with scene tags and closest-in-path object annotations to encourage the development of lane detection and more industry-related autonomous driving methods. We show that PersFormer significantly outperforms competitive baselines in the 3D lane detection task on the new OpenLane dataset as well as on the Apollo 3D Lane Synthetic dataset, and is also on par with state-of-the-art algorithms in the 2D task on OpenLane. The project page is available at https://github.com/openperceptionx/persformer_3dlane, and the OpenLane dataset is provided at https://github.com/openperceptionx/openlane.
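For reference, the camera-parameter link that a unified 2D/3D lane anchor relies on is the pinhole projection sketched below; the intrinsics and extrinsics are placeholder values, and this is a generic sketch rather than PersFormer's transformation module:

    # Generic pinhole projection linking 3D lane points to front-view pixels.
    # K, R, t are placeholder values; the ego frame is assumed to use
    # camera-style axes (x right, y down, z forward), with the camera 1.5 m
    # above the road so that ground points sit at y = +1.5 in camera coords.
    import numpy as np

    K = np.array([[1000.0, 0.0, 960.0],
                  [0.0, 1000.0, 540.0],
                  [0.0, 0.0, 1.0]])
    R = np.eye(3)
    t = np.array([0.0, 1.5, 0.0])

    def project_lane(points_3d):
        """Project (N, 3) lane points in the ego frame to (N, 2) image pixels."""
        cam = (R @ points_3d.T).T + t            # ego -> camera coordinates
        uvw = (K @ cam.T).T                      # pinhole projection
        return uvw[:, :2] / uvw[:, 2:3]          # normalise by depth

    lane = np.array([[0.0, 0.0, 10.0], [0.2, 0.0, 20.0], [0.5, 0.0, 40.0]])
    print(project_lane(lane))                    # one (u, v) per lane point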
Surround-view cameras are a primary sensor for automated driving, used for near-field perception. They are among the most commonly used sensors in commercial vehicles, mainly for parking visualization and automated parking. Four fisheye cameras with a 190° field of view cover 360° around the vehicle. Due to their high radial distortion, standard algorithms do not extend to them easily. Previously, we released the first public fisheye surround-view dataset, named WoodScape. In this work, we release a synthetic version of the surround-view dataset that covers many of its weaknesses and extends it. Firstly, it is not possible to obtain ground truth for pixel-wise optical flow and depth in the real dataset. Secondly, WoodScape did not have all four cameras annotated simultaneously, in order to sample diverse frames; this means that multi-camera algorithms producing a unified output in bird's-eye space could not be designed, which is enabled in the new dataset. We implemented surround-view fisheye geometric projections in the CARLA simulator matching WoodScape's configuration and created SynWoodScape.
Perceiving 3D objects from monocular inputs is crucial for robotic systems, given its economy compared to multi-sensor setups. It is notably difficult because a single image cannot provide any clue for predicting absolute depth values. Motivated by binocular methods for 3D object detection, we take advantage of the strong geometric structure provided by camera ego-motion for accurate object depth estimation and detection. We first make a theoretical analysis of this general two-view case and notice two challenges: 1) the cumulative errors from multiple estimations that make direct prediction intractable; 2) the inherent dilemmas caused by static cameras and matching ambiguity. Accordingly, we establish a stereo correspondence with a geometry-aware cost volume as an alternative for depth estimation, and further compensate it with monocular understanding to address the second problem. Our framework, named Depth from Motion (DfM), then uses the established geometry to lift 2D image features to 3D space and detects 3D objects thereon. We also present a pose-free DfM to make it usable when the camera pose is unavailable. Our framework outperforms state-of-the-art methods on the KITTI benchmark. Detailed quantitative and qualitative analyses also validate our theoretical conclusions. The code will be released at https://github.com/tai-wang/depth-from-motion.
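For context, the textbook two-view relation underlying depth from ego-motion (not the paper's exact formulation) is: with normalized rays $\hat{x}_1, \hat{x}_2$ of a matched pixel in the two frames and relative pose $(R, t)$ from ego-motion, $d_2\,\hat{x}_2 = R\,(d_1\,\hat{x}_1) + t$, which gives the object depth along the first ray as $d_1 = -\frac{(\hat{x}_2 \times t)\cdot(\hat{x}_2 \times R\,\hat{x}_1)}{\lVert \hat{x}_2 \times R\,\hat{x}_1 \rVert^2}$; in the rectified special case this reduces to the familiar $Z = f\,b/\text{disparity}$. The two challenges noted in the abstract are visible here: a static camera drives $t$, and with it both numerator and denominator, toward zero, leaving depth unobservable, and matching ambiguity corrupts $\hat{x}_2$ directly.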
Marking-level high-definition maps (HD maps) are of great significance for autonomous vehicles, especially in large-scale, appearance-changing scenarios where autonomous vehicles rely on markings for localization and on lanes for safe driving. In this paper, we propose a highly feasible framework for automatically building marking-level HD maps using a simple sensor setup (one or more monocular cameras). We optimize the positions of marking corners to fit the results of marking segmentation, and simultaneously optimize the inverse perspective mapping (IPM) matrix of the corresponding camera to obtain an accurate transformation from the front-view image to the bird's-eye view (BEV). In the quantitative evaluation, the constructed HD map almost reaches centimeter-level accuracy. The accuracy of the optimized IPM matrix is similar to that of manual calibration. The method can also be generalized to build HD maps in a broader sense by increasing the types of recognizable markings.
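A toy version of the IPM refinement step: scipy optimizes the eight free entries of the homography so that observed marking corners land on their assumed BEV positions. The corner values are placeholders, and the paper additionally optimizes the corner positions themselves against the segmentation results:

    # Toy refinement of an IPM homography (H[2,2] fixed to 1) so that marking
    # corners observed in the front-view image map to their BEV positions.
    # Corner coordinates are placeholders for illustration only.
    import numpy as np
    from scipy.optimize import least_squares

    corners_img = np.array([[512., 430.], [768., 432.], [300., 650.], [980., 655.]])
    corners_bev = np.array([[350., 120.], [450., 120.], [250., 600.], [550., 600.]])

    def warp(h8, pts):
        H = np.append(h8, 1.0).reshape(3, 3)
        ph = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
        return ph[:, :2] / ph[:, 2:3]

    def residuals(h8):
        return (warp(h8, corners_img) - corners_bev).ravel()

    h0 = np.eye(3).ravel()[:8]                   # crude initial guess
    result = least_squares(residuals, h0)
    H_refined = np.append(result.x, 1.0).reshape(3, 3)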
Accurate 3D object detection with LiDAR is critical for autonomous driving. Existing research is all based on the flat-world assumption. However, actual roads can be complex, with steep sections that break this premise. In such cases, current methods suffer performance degradation because objects on sloped terrain are difficult to detect correctly. In this work, we propose DET6D, the first full-degree-of-freedom 3D object detector without spatial and postural limitations, to improve terrain robustness. We choose a point-based framework because of its capability of detecting objects over the entire spatial range. To predict the full pose, including pitch and roll, we design a ground orientation branch that exploits the local ground constraint. Given the difficulty of collecting long-tail non-flat scene data and annotating 6D poses, we propose Slope-Aug, a data augmentation method for synthesizing non-flat terrain from existing datasets recorded in flat scenes. Experiments on various datasets demonstrate the effectiveness and robustness of our method on different terrains. We further conduct extended experiments to explore how the network predicts the two extra pose angles. The proposed modules are plug-ins to existing point-based frameworks. The code is available at https://github.com/hitsz-nrsl/de6d.
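A toy reading of the slope-style augmentation (my simplified interpretation, not the released Slope-Aug code): points beyond a pivot distance are rotated about the lateral axis so that a flat recording acquires an uphill section; in a full pipeline the box poses must be transformed consistently:

    # Toy slope augmentation: LiDAR points beyond pivot_x (x forward, y left,
    # z up) are rotated about the lateral axis so the far region tilts upward.
    # Pivot and angle are arbitrary; box centres/orientations would need the
    # same transform in a real pipeline.
    import numpy as np

    def add_slope(points, pivot_x=20.0, pitch_deg=8.0):
        """points: (N, 3+) array; positive pitch_deg tilts the far region uphill."""
        pts = points.copy()
        a = np.deg2rad(pitch_deg)
        rot = np.array([[np.cos(a), 0.0, -np.sin(a)],
                        [0.0,       1.0,  0.0      ],
                        [np.sin(a), 0.0,  np.cos(a)]])
        mask = pts[:, 0] > pivot_x
        local = pts[mask, :3] - np.array([pivot_x, 0.0, 0.0])   # pivot at origin
        pts[mask, :3] = local @ rot.T + np.array([pivot_x, 0.0, 0.0])
        return pts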
Figure 1: Results obtained from our single image, monocular 3D object detection network MonoDIS on a KITTI3D test image with corresponding birds-eye view, showing its ability to estimate size and orientation of objects at different scales.
In this paper, we propose a novel 3D object detector that can exploit both LIDAR as well as cameras to perform very accurate localization. Towards this goal, we design an end-to-end learnable architecture that exploits continuous convolutions to fuse image and LIDAR feature maps at different levels of resolution. Our proposed continuous fusion layer encodes both discrete-state image features as well as continuous geometric information. This enables us to design a novel, reliable and efficient end-to-end learnable 3D object detector based on multiple sensors. Our experimental evaluation on both KITTI as well as a large scale 3D object detection benchmark shows significant improvements over the state of the art.
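As a reference for the building block such sensor fusion relies on (a generic point-to-image feature gather, not the paper's continuous fusion layer), one can project LiDAR points into the image with the calibration and bilinearly sample image features at the projected locations:

    # Generic point-to-image feature gather: project LiDAR points into the
    # camera image via K (3x3) and T_cam_lidar (4x4), then bilinearly sample
    # image features at those pixels. Points behind the camera should be
    # masked out in practice; calibration matrices are placeholders.
    import torch
    import torch.nn.functional as F

    def gather_image_features(points, feat, K, T_cam_lidar):
        """points: (N, 3) LiDAR xyz; feat: (1, C, H, W) image feature map."""
        N = points.shape[0]
        pts_h = torch.cat([points, points.new_ones(N, 1)], dim=1)  # homogeneous
        cam = (T_cam_lidar @ pts_h.T).T[:, :3]                     # LiDAR -> camera
        uv = (K @ cam.T).T
        uv = uv[:, :2] / uv[:, 2:3].clamp(min=1e-6)                # pixel coordinates
        H, W = feat.shape[-2:]
        # normalise to [-1, 1] for grid_sample, grid shape (1, 1, N, 2)
        grid = torch.stack([2 * uv[:, 0] / (W - 1) - 1,
                            2 * uv[:, 1] / (H - 1) - 1], dim=1).view(1, 1, N, 2)
        sampled = F.grid_sample(feat, grid, align_corners=True)    # (1, C, 1, N)
        return sampled[0, :, 0].T                                  # (N, C) per-point features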
Compared to typical multi-sensor systems, monocular 3D object detection has attracted much attention due to its simple configuration. However, there is still a significant gap between LiDAR-based and monocular-based methods. In this paper, we find that the ill-posed nature of monocular imagery can lead to depth ambiguity. Specifically, objects with different depths can appear with the same bounding boxes and similar visual features in the 2D image. Unfortunately, the network cannot accurately distinguish different depths from such non-discriminative visual features, resulting in unstable depth training. To facilitate depth learning, we propose a simple yet effective plug-and-play module, One Bounding Box Multiple Objects (OBMO). Concretely, we add a set of suitable pseudo labels by shifting the 3D bounding box along the viewing frustum. To constrain the pseudo-3D labels to be reasonable, we carefully design two label scoring strategies to represent their quality. In contrast to the original hard depth labels, such soft pseudo labels with quality scores allow the network to learn a reasonable depth range, boosting training stability and thus improving final performance. Extensive experiments on the KITTI and Waymo benchmarks show that our method improves state-of-the-art monocular 3D detectors by a significant margin (the improvements under the moderate setting on the KITTI validation set are $\mathbf{1.82\sim 10.91\%}$ mAP in BEV and $\mathbf{1.18\sim 9.36\%}$ mAP in 3D). Codes have been released at https://github.com/mrsempress/OBMO.
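The frustum shift is easy to verify with a pinhole model: scaling a 3D centre by $\lambda$ scales all three homogeneous image coordinates equally, so the projected 2D location is unchanged while the depth becomes $\lambda z$. A small sketch follows (the $\lambda$ values and intrinsics are placeholders; OBMO's label-scoring strategies are not reproduced):

    # Frustum-shifted pseudo labels: scaling a 3D box centre by lambda keeps
    # its projected pixel location fixed under a pinhole camera, so the box
    # slides along the viewing ray. Lambda range and K are placeholders.
    import numpy as np

    def shift_along_frustum(center_xyz, lambdas=(0.95, 0.975, 1.025, 1.05)):
        """center_xyz: (3,) box centre in camera coordinates (z = depth)."""
        c = np.asarray(center_xyz, dtype=float)
        return [lam * c for lam in lambdas]      # same pixel, different depths

    K = np.array([[720.0, 0.0, 620.0],
                  [0.0, 720.0, 190.0],
                  [0.0, 0.0, 1.0]])              # placeholder intrinsics

    def project(c, K):
        uvw = K @ c
        return uvw[:2] / uvw[2]

    c0 = np.array([2.0, 1.5, 30.0])
    for c in shift_along_frustum(c0):
        print(project(c, K), c[2])               # identical (u, v), varying depth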
Traffic light detection is crucial for autonomous vehicles to navigate safely in urban areas. Publicly available traffic light datasets are insufficient for developing algorithms that detect distant traffic lights providing important navigation information. We introduce a novel benchmark traffic light dataset captured using a pair of narrow-angle and wide-angle cameras covering urban and semi-urban roads. We provide 1032 images for training and 813 synchronized image pairs for testing. In addition, we provide synchronized video pairs for qualitative analysis. The dataset includes images at a resolution of 1920$\times$1080 covering 10 different classes. Furthermore, we propose a post-processing algorithm for combining the outputs of the two cameras. Results show that our technique can strike a balance between speed and accuracy compared with traditional approaches that use a single camera frame.
Advances in perception for self-driving cars have accelerated in recent years due to the availability of large-scale datasets, which are typically collected at specific locations and under good weather conditions. Yet, to achieve the high safety requirements, these perception systems must operate robustly in diverse weather conditions including snow and rain. In this paper, we present a new dataset to enable robust autonomous driving via a novel data collection process: data is repeatedly recorded along a 15 km route under diverse conditions of scene (urban, highway, rural, campus), weather (snow, rain, sun), time (day/night), and traffic (pedestrians, cyclists, and cars). The dataset includes images and point clouds from cameras and LiDAR sensors, along with high-precision GPS/INS to establish correspondences across routes. The dataset includes road and object annotations with amodal masks that capture partial occlusions, as well as 3D bounding boxes. We demonstrate the uniqueness of this dataset by analyzing the performance of baselines for amodal segmentation of road and objects, depth estimation, and 3D object detection. The repeated routes open new research directions in object discovery, continual learning, and anomaly detection. Link to Ithaca365: https://ithaca365.mae.cornell.edu/
Autonomous driving requires efficient reasoning about the location and appearance of the different agents in the scene, which aids in downstream tasks such as object detection, object tracking, and path planning. The past few years have witnessed a surge in approaches that combine the different task-based modules of the classic self-driving stack into an End-to-End (E2E) trainable learning system. These approaches replace perception, prediction, and sensor fusion modules with a single contiguous module with a shared latent space embedding, from which one extracts a human-interpretable representation of the scene. One of the most popular representations is the Bird's-eye View (BEV), which expresses the location of different traffic participants in the ego vehicle frame from a top-down view. However, a BEV does not capture the chromatic appearance information of the participants. To overcome this limitation, we propose a novel representation that captures the appearance and occupancy information of various traffic participants from an array of monocular cameras covering a 360 deg field of view (FOV). We use a learned image embedding of all camera images to generate a BEV of the scene at any instant that captures both appearance and occupancy of the scene, which can aid in downstream tasks such as object tracking and executing language-based commands. We test the efficacy of our approach on a synthetic dataset generated from CARLA. The code, data set, and results can be found at https://rebrand.ly/APP OCC-results.
Aiming at highly accurate object detection for connected and automated vehicles (CAVs), this paper presents a Deep Neural Network based 3D object detection model that leverages a three-stage feature extractor by developing a novel LIDAR-Camera fusion scheme. The proposed feature extractor extracts high-level features from two input sensory modalities and recovers the important features discarded during the convolutional process. The novel fusion scheme effectively fuses features across sensory modalities and convolutional layers to find the best representative global features. The fused features are shared by a two-stage network: the region proposal network (RPN) and the detection head (DH). The RPN generates high-recall proposals, and the DH produces final detection results. The experimental results show the proposed model outperforms more recent research on the KITTI 2D and 3D detection benchmark, particularly for distant and highly occluded instances.
Visual perception plays an important role in autonomous driving. One of the primary tasks is object detection and identification. Since the vision sensor is rich in color and texture information, it can quickly and accurately identify various road information. The commonly used techniques are based on extracting and computing various features of the image. The recent development of deep learning-based methods offers better reliability and processing speed and has a greater advantage in recognizing complex elements. For depth estimation, vision sensors are also used for ranging due to their small size and low cost. A monocular camera uses image data from a single viewpoint as input to estimate object depth. In contrast, stereo vision is based on parallax and matching feature points across different views, and the application of deep learning further improves the accuracy. In addition, Simultaneous Localization and Mapping (SLAM) can establish a model of the road environment, thus helping the vehicle perceive the surrounding environment and complete its tasks. In this paper, we introduce and compare various methods of object detection and identification, then explain the development of depth estimation and compare various methods based on monocular, stereo, and RGB-D sensors, next review and compare various methods of SLAM, and finally summarize the current problems and present the future development trends of vision technologies.
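As a concrete anchor for the parallax-based ranging described above (the textbook rectified-stereo relation, independent of any particular method surveyed): with focal length $f$, baseline $b$, and a feature matched at horizontal image positions $u_l$ and $u_r$, the disparity is $d = u_l - u_r$ and the depth is $Z = f\,b/d$; since $\partial Z/\partial d = -Z^2/(f\,b)$, depth resolution degrades quadratically with distance, which is why distant objects are hard to range with a short-baseline stereo rig.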