We examine the feasibility of generative adversarial networks (GANs) for generating photo-realistic images from lidar point clouds. For this purpose, we created a dataset of point cloud–image pairs and trained a GAN to predict photo-realistic images from lidar point clouds containing reflectance and distance information. Our model learned to predict realistic-looking images from point cloud data alone, even for images with black cars, which are difficult to detect directly from point clouds because of their low reflectance. This approach could be used in the future to perform visual object recognition on photo-realistic images generated from lidar point clouds. In addition to a conventional lidar system, a second system generating photo-realistic images from lidar point clouds would run visual object recognition simultaneously in real time. In this way, we could retain the advantages of lidar while benefiting from photo-realistic images for visual object recognition without using any camera. Furthermore, this approach could be used to colorize point clouds without the use of any camera images.
translated by 谷歌翻译
Virtual testing is a crucial task for ensuring the safety of autonomous driving, and sensor simulation is an important part of this domain. Most current lidar simulations are very simplistic and mainly used for initial tests, while most insights are still gathered on the road. In this paper, we propose a lightweight approach to more realistic lidar simulation that learns the behavior of a real sensor from test-drive data and transfers it to the virtual domain. The core idea is to cast the simulation as an image-to-image translation problem. We train a pix2pix-based architecture on two real-world datasets, the popular KITTI dataset and the Audi Autonomous Driving Dataset, which provide both RGB and lidar images. We apply the network to synthetic renderings and show that it generalizes adequately from real to simulated images. This strategy allows us to skip the sensor-specific, expensive, and complex physical simulation of lidar in our synthetic world, and to avoid both oversimplification and the large domain gap caused by overly clean synthetic environments.
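Casting lidar simulation as image-to-image translation presupposes that a scan can be represented as a 2D image. Below is a minimal sketch of the standard spherical ("range image") projection used for this; the function name and field-of-view values are illustrative (roughly a 64-beam sensor), not taken from the paper.

```python
import numpy as np

def to_range_image(points, h=64, w=1024, fov_up=3.0, fov_down=-25.0):
    """Project an (N, 3) point cloud into an (h, w) range image.

    fov_up/fov_down are illustrative vertical field-of-view bounds in degrees.
    """
    fov_up_r = np.radians(fov_up)
    fov_down_r = np.radians(fov_down)
    fov = fov_up_r - fov_down_r

    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)

    yaw = np.arctan2(y, x)                     # azimuth in [-pi, pi]
    pitch = np.arcsin(np.clip(z / r, -1, 1))   # elevation

    u = ((0.5 * (1.0 - yaw / np.pi)) * w).astype(int) % w
    v = np.clip(((1.0 - (pitch - fov_down_r) / fov) * h).astype(int), 0, h - 1)

    img = np.zeros((h, w), dtype=np.float32)
    # Write farther points first so the nearest return wins per pixel.
    order = np.argsort(-r)
    img[v[order], u[order]] = r[order]
    return img
```

An image-translation network such as pix2pix can then consume this grid like any other image modality.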
A large body of research is concerned with the generation of realistic sensor data. Lidar point clouds are generated by complex simulations or learned generative models, and the generated data are typically exploited to enable or improve downstream perception algorithms. Two major questions arise from these procedures: first, how do we evaluate the realism of the generated data? Second, does more realistic data also lead to better perception performance? This paper addresses both questions and proposes a novel metric to quantify the realism of lidar point clouds. Relevant features are learned from real-world and synthetic point clouds by training on a proxy classification task. In a series of experiments, we demonstrate the application of our metric to determine the realism of generated lidar data and compare its realism estimates with the performance of a segmentation model. We confirm that our metric provides an indication of downstream segmentation performance.
In this paper, we address semantic segmentation of road-objects from 3D LiDAR point clouds. In particular, we wish to detect and categorize instances of interest, such as cars, pedestrians and cyclists. We formulate this problem as a pointwise classification problem, and propose an end-to-end pipeline called SqueezeSeg based on convolutional neural networks (CNN): the CNN takes a transformed LiDAR point cloud as input and directly outputs a point-wise label map, which is then refined by a conditional random field (CRF) implemented as a recurrent layer. Instance-level labels are then obtained by conventional clustering algorithms. Our CNN model is trained on LiDAR point clouds from the KITTI [1] dataset, and our point-wise segmentation labels are derived from 3D bounding boxes from KITTI. To obtain extra training data, we built a LiDAR simulator into Grand Theft Auto V (GTA-V), a popular video game, to synthesize large amounts of realistic training data. Our experiments show that SqueezeSeg achieves high accuracy with astonishingly fast and stable runtime (8.7 ± 0.5 ms per frame), highly desirable for autonomous driving applications. Furthermore, additionally training on synthesized data boosts validation accuracy on real-world data. Our source code and synthesized data will be open-sourced.
We present LiDARGen, a novel, effective, and controllable generative model that produces realistic lidar point cloud sensory readings. Our method leverages a powerful score-matching energy-based model and formulates the point cloud generation process as a stochastic denoising process in the equirectangular view. This model allows us to sample diverse and high-quality point cloud samples with guaranteed physical feasibility and controllability. We validate the effectiveness of our method on the challenging KITTI-360 and nuScenes datasets. Quantitative and qualitative results show that our approach produces more realistic samples than other generative models. Furthermore, LiDARGen can sample point clouds conditioned on inputs without retraining. We demonstrate that our proposed generative model can be directly used to densify lidar point clouds. Our code is available at: https://www.zyrianov.org/lidargen/
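The stochastic denoising process in score-based models of this kind is sampled with Langevin dynamics. As a toy illustration only (not the paper's model), the sketch below runs unadjusted Langevin dynamics with a known analytic score, that of a standard Gaussian where the score is simply -x; in the real method the score is a learned network and the step size is annealed over noise levels.

```python
import numpy as np

rng = np.random.default_rng(0)

def langevin_sample(score_fn, x0, step=1e-2, n_steps=500):
    """Unadjusted Langevin dynamics: x <- x + step*score(x) + sqrt(2*step)*noise."""
    x = x0.copy()
    for _ in range(n_steps):
        x = x + step * score_fn(x) + np.sqrt(2 * step) * rng.standard_normal(x.shape)
    return x

# Toy target: standard Gaussian, whose score is grad log p(x) = -x.
samples = langevin_sample(lambda x: -x, rng.standard_normal((1000, 2)))
```

After enough steps, the samples are approximately distributed according to the target whose score was supplied.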
Simulating realistic sensors is a challenging part of data generation for autonomous systems, often involving carefully handcrafted sensor design, scene properties, and physics modeling. To alleviate this, we introduce a pipeline for data-driven simulation of a realistic lidar sensor. We propose a model that learns a mapping between RGB images and the corresponding lidar features, such as raydrop or per-point intensity, directly from real datasets. We show that our model can learn to encode realistic effects, such as dropped points on transparent surfaces or high-intensity returns on reflective materials. When applied to naively raycast point clouds provided by off-the-shelf simulator software, our model enhances the data by predicting intensities and removing points based on the scene's appearance, so as to match a real lidar sensor. We use our technique to learn models of two distinct lidar sensors and use them to improve simulated lidar data accordingly. Through the example task of vehicle segmentation, we show that enhancing simulated point clouds with our technique improves downstream task performance.
Segmentation of lidar data is a task that provides rich, point-wise information about the environment of robots or autonomous vehicles. The currently best-performing neural networks for lidar segmentation are fine-tuned to specific datasets. Switching the lidar sensor without retraining on a large set of annotated data from the new sensor creates a domain shift, which causes the network performance to drop drastically. In this work we propose a new method for lidar domain adaptation, in which we use annotated panoptic lidar datasets and recreate the recorded scenes in the structure of a different lidar sensor. We narrow the domain gap to the target data by recreating panoptic data from one domain in another and mixing the generated data with parts of (pseudo) labeled target domain data. Our method improves the nuScenes to SemanticKITTI unsupervised domain adaptation performance by 15.2 mean Intersection over Union points (mIoU), and by 48.3 mIoU in our semi-supervised approach. We demonstrate a similar improvement for the SemanticKITTI to nuScenes domain adaptation, by 21.8 mIoU and 51.5 mIoU, respectively. We compare our method with two state-of-the-art approaches for semantic lidar segmentation domain adaptation, with a significant improvement for both unsupervised and semi-supervised domain adaptation. Furthermore, we successfully apply our proposed method to two entirely unlabeled datasets of the state-of-the-art lidar sensors Velodyne Alpha Prime and InnovizTwo, and train well-performing semantic segmentation networks for both.
Transfer learning from synthetic to real data has been widely studied to mitigate data annotation constraints in various computer vision tasks such as semantic segmentation. However, that research has focused on 2D images, and its counterpart in 3D point cloud segmentation lags far behind due to the lack of large-scale synthetic datasets and effective transfer methods. We address this issue by collecting SynLiDAR, a large-scale synthetic lidar dataset that contains point-wise annotated point clouds with accurate geometry and comprehensive semantic classes. SynLiDAR was collected from multiple virtual environments with rich scenes and layouts, and consists of over 19 billion points across 32 semantic classes. In addition, we design PCT, a novel point cloud translator that effectively mitigates the gap between synthetic and real point clouds. Specifically, we decompose the synthetic-to-real gap into an appearance component and a sparsity component and handle them separately, which greatly improves point cloud translation. We conducted extensive experiments in three transfer learning setups, including data augmentation, semi-supervised domain adaptation, and unsupervised domain adaptation. The experiments show that SynLiDAR provides a high-quality data source for studying 3D transfer, and that the proposed PCT consistently achieves superior point cloud translation across all three setups. SynLiDAR project page: \url{https://github.com/xiaoaoran/synlidar}
To train a well-performing neural network for semantic segmentation, it is crucial to have a large dataset with available ground truth so the network can generalize to unseen data. In this paper, we present novel point cloud augmentation methods to artificially diversify a dataset. Our sensor-centric methods keep the data structure consistent with the capabilities of the lidar sensor. Thanks to these new methods, we are able to enrich low-value data with high-value instances and to create entirely new scenes. We validate our methods on multiple neural networks using the public SemanticKITTI dataset and show that all networks improve compared to their respective baselines. In addition, we show that our methods enable the use of very small datasets, saving annotation time, training time, and the associated costs.
Recent research has shown the effectiveness of mmWave radar sensing for object detection in low-visibility environments, which makes it an ideal technique for autonomous navigation systems. In this paper, we introduce Radar to Point Cloud (R2P), a deep learning model that generates smooth, dense, and highly accurate point cloud representations of 3D objects with fine geometric detail, based on the rough and sparse point clouds with incorrect points obtained from mmWave radar. These input point clouds are converted from the 2D depth images generated from raw mmWave radar sensor data, and are characterized by inconsistencies and orientation and shape errors. R2P uses an architecture of two sequential deep learning encoder blocks to extract the essential features of the radar-based input point clouds of an object observed from multiple viewpoints, and to ensure the internal consistency of the generated output point clouds and an accurate, detailed reconstruction of the original object's shape. We implement R2P to replace Stage 2 of our recently proposed 3DRIMR (3D Reconstruction and Imaging via mmWave Radar) system. Our experiments demonstrate the significant performance improvement of R2P over popular existing methods such as PointNet, PCN, and the original 3DRIMR design.
In this paper, we explore the feasibility of utilizing a mmWave radar sensor installed on a UAV to reconstruct the 3D shapes of multiple objects in a space. The UAV hovers at various locations in the space, and its onboard radar sensor collects raw radar data by scanning the space with Synthetic Aperture Radar (SAR) operation. The radar data is sent to a deep neural network model, which outputs the point cloud reconstruction of the multiple objects in the space. We evaluate two different models. Model 1 is our recently proposed 3DRIMR/R2P model, and Model 2 is formed by adding a segmentation stage to the processing pipeline of Model 1. Our experiments demonstrate that both models are promising for solving the multiple-object reconstruction problem. We also show that Model 2, despite producing denser and smoother point clouds, can lead to higher reconstruction loss or even loss of objects. In addition, we find that both models are robust to the highly noisy radar data obtained by unstable SAR operation due to the instability or vibration of a small UAV hovering at its intended scanning point. Our exploratory study has shown a promising direction for applying mmWave radar sensing to 3D object reconstruction.
Our dataset provides dense annotations for each scan of all sequences from the KITTI Odometry Benchmark [19]. Here, we show multiple scans aggregated using pose information estimated by a SLAM approach.
Accurate rail location is a crucial component of railway-assisted driving systems for safety monitoring. Lidar can obtain point clouds that carry 3D information about the railway environment, especially in dark and adverse weather conditions. In this paper, a real-time rail recognition method based on 3D point clouds is proposed to address challenges such as the disorder, uneven density, and large volume of the point clouds. A voxel down-sampling method is first presented to balance the density of the railway point clouds, and a pyramid partition is designed to divide the 3D scanning area into voxels of different volumes. Then, a feature encoding module is developed to find the nearest neighbor points and aggregate their local geometric features. Finally, a multi-scale neural network is proposed to produce the prediction results for each voxel and the rail location. Experiments are conducted on 9 sequences of 3D point cloud data from a railway. The results show that the method performs well in detecting straight, curved, and other rails with complex topology.
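The voxel down-sampling step described above can be reduced to keeping one centroid per occupied voxel. A minimal NumPy sketch, assuming a single uniform voxel size rather than the paper's pyramid partition of varying volumes:

```python
import numpy as np

def voxel_downsample(points, voxel=0.5):
    """Replace all points falling in the same voxel by their centroid.

    points: (N, 3) array; voxel: edge length in the same units as the points.
    """
    keys = np.floor(points / voxel).astype(np.int64)   # integer voxel index per point
    _, inv = np.unique(keys, axis=0, return_inverse=True)
    n = inv.max() + 1
    sums = np.zeros((n, points.shape[1]))
    counts = np.zeros(n)
    np.add.at(sums, inv, points)   # accumulate coordinates per voxel
    np.add.at(counts, inv, 1)      # count points per voxel
    return sums / counts[:, None]  # one centroid per occupied voxel
```

Dense near-sensor regions collapse to few representatives while sparse far-field regions are kept, which is the density-balancing effect the abstract describes.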
Research connecting text and images has recently seen several breakthroughs, with models like CLIP, DALL-E 2, and Stable Diffusion. However, the connection between text and other visual modalities, such as lidar data, has received less attention, prohibited by the lack of text-lidar datasets. In this work, we propose LidarCLIP, a mapping from automotive point clouds to a pre-existing CLIP embedding space. Using image-lidar pairs, we supervise a point cloud encoder with the image CLIP embeddings, effectively relating text and lidar data with the image domain as an intermediary. We show the effectiveness of LidarCLIP by demonstrating that lidar-based retrieval is generally on par with image-based retrieval, but with complementary strengths and weaknesses. By combining image and lidar features, we improve upon both single-modality methods and enable a targeted search for challenging detection scenarios under adverse sensor conditions. We also use LidarCLIP as a tool to investigate fundamental lidar capabilities through natural language. Finally, we leverage our compatibility with CLIP to explore a range of applications, such as point cloud captioning and lidar-to-image generation, without any additional training. We hope LidarCLIP can inspire future work to dive deeper into connections between text and point cloud understanding. Code and trained models available at https://github.com/atonderski/lidarclip.
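Supervising a point cloud encoder with frozen image CLIP embeddings, as LidarCLIP does, boils down to minimizing the distance between paired embeddings. A minimal sketch of one common choice, a cosine-similarity distillation loss on L2-normalized vectors (the function name is illustrative, and the paper may use a different objective such as MSE):

```python
import numpy as np

def distillation_loss(student_emb, teacher_emb):
    """Mean (1 - cosine similarity) over paired rows of two (N, D) embedding arrays."""
    s = student_emb / np.linalg.norm(student_emb, axis=1, keepdims=True)
    t = teacher_emb / np.linalg.norm(teacher_emb, axis=1, keepdims=True)
    return float(np.mean(1.0 - np.sum(s * t, axis=1)))
```

The loss is 0 when the lidar encoder reproduces the image embedding direction exactly and grows toward 2 as the embeddings oppose each other, which is what lets text queries transfer to lidar through the shared space.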
Recent advances in computer graphics techniques can make the automotive driving environment more realistic. They enable self-driving car simulators such as DeepGTA-V and CARLA (Car Learning to Act) to generate large amounts of synthetic data that can complement existing real-world datasets for training self-driving car perception. Furthermore, since self-driving car simulators have full control over the environment, they can generate dangerous driving scenarios that real-world datasets lack, such as bad weather and accident scenarios. In this paper, we demonstrate the effectiveness of combining data gathered from the real world with data generated in the simulated world to train perception systems for object detection and localization tasks. We also propose a multi-level deep learning perception framework that aims to emulate a human learning experience in which a series of tasks, from simple to more difficult, is learned within a certain domain. The autonomous car perceptor can thus learn from easy driving scenarios up to more challenging ones customized in the simulation software.
Computer graphics, 3D computer vision and robotics communities have produced multiple approaches to represent and generate 3D shapes, as well as a vast number of use cases. However, single-view reconstruction remains a challenging topic that can unlock various interesting use cases such as interactive design. In this work, we propose a novel framework that leverages the intermediate latent spaces of Vision Transformer (ViT) and a joint image-text representational model, CLIP, for fast and efficient Single View Reconstruction (SVR). More specifically, we propose a novel mapping network architecture that learns a mapping between deep features extracted from ViT and CLIP, and the latent space of a base 3D generative model. Unlike previous work, our method enables view-agnostic reconstruction of 3D shapes, even in the presence of large occlusions. We use the ShapeNetV2 dataset and perform extensive experiments with comparisons to SOTA methods to demonstrate our method's effectiveness.
Figure 1 (panels: (a) synthesized result; (b) application: change label types; (c) application: edit object appearance): We propose a generative adversarial framework for synthesizing 2048 × 1024 images from semantic label maps (lower left corner in (a)). Compared to previous work [5], our results express more natural textures and details. (b) We can change labels in the original label map to create new scenes, like replacing trees with buildings. (c) Our framework also allows a user to edit the appearance of individual objects in the scene, e.g. changing the color of a car or the texture of a road. Please visit our website for more side-by-side comparisons as well as interactive editing demos.
Lidar sensors provide rich 3D information about the surrounding scene and are becoming increasingly important for autonomous vehicle tasks such as semantic segmentation, object detection, and tracking. The ability to simulate lidar sensors will accelerate the testing, validation, and deployment of autonomous vehicles while reducing cost and eliminating the risks of testing in real-world scenarios. To address the problem of simulating lidar data with high fidelity, we propose a pipeline that leverages real-world point clouds acquired by mobile mapping systems. Point-based geometry representations, and more specifically splats, have proven their ability to accurately model the underlying surface in very large point clouds. We introduce an adaptive splat generation method that accurately models the underlying 3D geometry, especially for thin structures. We also develop a faster lidar simulation by performing ray casting on the GPU while efficiently handling large point clouds. We test our lidar simulation in real-world scenarios, showing qualitative and quantitative results against basic splatting and meshing techniques that demonstrate the superiority of our modeling technique.
3D autonomous driving semantic segmentation using deep learning has become a well-studied subject, providing methods that can reach very high performance. Nonetheless, because of the limited size of the training datasets, these models cannot see every type of object and scene found in real-world applications. The ability to be reliable in these various unknown environments is called domain generalization. Despite its importance, domain generalization is relatively unexplored in the case of 3D autonomous driving semantic segmentation. To fill this gap, this paper presents the first benchmark for this application by testing state-of-the-art methods and discussing the difficulty of tackling lidar domain shifts. We also propose the first method designed to address this domain generalization, which we call 3DLabelProp. This method relies on leveraging the geometry and sequentiality of the lidar data to enhance its generalization performance by working on partially accumulated point clouds. It reaches a mIoU of 52.6% on SemanticPOSS while being trained only on SemanticKITTI, making it the state-of-the-art method for generalization (+7.4% better than the second best method). The code for this method will be available on GitHub.
Understanding the scene is key for autonomously navigating vehicles, and the ability to segment the surroundings online into moving and non-moving objects is a central ingredient of this task. Deep learning-based methods are often used to perform moving object segmentation (MOS). However, the performance of these networks strongly depends on the diversity and amount of labeled training data, which can be costly to obtain. In this paper, we propose an automatic data labeling pipeline for 3D lidar data that saves extensive manual labeling effort and improves the performance of existing learning-based MOS systems by automatically generating labeled training data. Our proposed approach achieves this by processing the data in batches. It first exploits occupancy-based dynamic object removal to coarsely detect possible dynamic objects. Second, it extracts segments among these proposals and tracks them using a Kalman filter. Based on the tracked trajectories, it labels objects that actually move, such as driving cars and pedestrians, as moving; in contrast, non-moving objects, e.g., parked cars, lamps, roads, or buildings, are labeled as static. We show that this approach allows us to label lidar data highly effectively, and we compare our results to those of other label generation methods. We also train a deep neural network with our automatically generated labels and achieve performance similar to one trained with manual labels on the same data, and better performance when using additional datasets with labels generated by our approach. Furthermore, we evaluate our method on multiple datasets using different sensors, and our experiments indicate that our method can generate labels in diverse environments.
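The tracking step above relies on a Kalman filter over segment trajectories: a segment whose estimated velocity stays above a threshold is labeled moving. A minimal sketch with a constant-velocity model on 2D centroids; the state layout and noise magnitudes are illustrative, not the paper's exact configuration.

```python
import numpy as np

class KalmanCV:
    """Constant-velocity Kalman filter tracking a 2D segment centroid.

    State is [x, y, vx, vy]; q and r are illustrative noise magnitudes.
    """
    def __init__(self, x0, dt=0.1, q=1e-2, r=1e-1):
        self.x = np.array([x0[0], x0[1], 0.0, 0.0])
        self.P = np.eye(4)
        self.F = np.eye(4)
        self.F[0, 2] = self.F[1, 3] = dt       # positions advance by velocity * dt
        self.H = np.zeros((2, 4))
        self.H[0, 0] = self.H[1, 1] = 1.0      # we only measure the centroid position
        self.Q = q * np.eye(4)
        self.R = r * np.eye(2)

    def step(self, z):
        # Predict with the constant-velocity motion model.
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # Update with the measured centroid z = (x, y).
        innov = np.asarray(z, dtype=float) - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ innov
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]

# A centroid moving at 1 m/s along x: the filter's velocity estimate
# rising above a threshold is what would flag the segment as "moving".
kf = KalmanCV((0.0, 0.0), dt=0.1)
for k in range(1, 51):
    kf.step((0.1 * k, 0.0))
```

A stationary segment (e.g., a parked car) would instead keep a near-zero velocity estimate and be labeled static.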