Scene understanding is a major challenge of today's computer vision. Central to this task is image segmentation, since scenes are often provided as a set of pictures. Nowadays, many such datasets also provide 3D geometric information in the form of a 3D point cloud acquired by a laser scanner or a depth camera. To exploit this geometric information, many current approaches rely on both a 2D loss and a 3D loss, requiring not only 2D per-pixel labels but also 3D per-point labels. However, obtaining 3D ground truth is challenging, time-consuming and error-prone. In this paper, we show that image segmentation can benefit from 3D geometric information without requiring any 3D ground truth, by training the geometric feature extraction with a 2D segmentation loss in an end-to-end fashion. Our method starts by extracting a map of 3D features directly from the point cloud using a lightweight and simple 3D encoder neural network. The 3D feature map is then used as an additional input to a classical image segmentation network. During training, the 3D feature extraction is optimized for the segmentation task by back-propagation through the entire pipeline. Our method exhibits state-of-the-art performance with much lighter input dataset requirements, since no 3D ground truth is required.
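Conceptually, the pipeline reduces to three pieces: a point encoder, a point-to-pixel projection, and a standard 2D segmentation network trained with a 2D loss only. The sketch below illustrates this flow under assumed shapes and a toy per-point MLP; names such as `PointEncoder` and `splat_to_image` are illustrative and not the authors' code.

```python
# Minimal sketch (not the authors' code) of training 2D segmentation with
# learned 3D features and only a 2D loss. Shapes and modules are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PointEncoder(nn.Module):
    """Lightweight per-point MLP producing a feature per 3D point."""
    def __init__(self, out_dim=32):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, out_dim))
    def forward(self, xyz):          # xyz: (N, 3)
        return self.mlp(xyz)         # (N, out_dim)

def splat_to_image(feats, pix, hw, dim):
    """Scatter per-point features into a (dim, H, W) map at projected pixels."""
    H, W = hw
    canvas = torch.zeros(dim, H * W, device=feats.device)
    idx = (pix[:, 1] * W + pix[:, 0]).long()              # flat pixel index per point
    canvas.index_add_(1, idx, feats.t())                   # accumulate point features
    return canvas.view(dim, H, W)

# Training step: only 2D per-pixel labels are needed.
encoder, seg_net = PointEncoder(), nn.Conv2d(3 + 32, 21, 1)    # seg_net stands in for a 2D CNN
xyz  = torch.rand(1000, 3)                                      # point cloud
pix  = (torch.rand(1000, 2) * torch.tensor([64., 48.])).long()  # assumed point-to-pixel projection
rgb  = torch.rand(1, 3, 48, 64)
gt2d = torch.randint(0, 21, (1, 48, 64))

feat3d = splat_to_image(encoder(xyz), pix, (48, 64), 32).unsqueeze(0)
logits = seg_net(torch.cat([rgb, feat3d], dim=1))
loss = F.cross_entropy(logits, gt2d)       # the 2D loss back-propagates into the 3D encoder
loss.backward()
```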
Recent works on 3D semantic segmentation propose to exploit the synergy between images and point clouds by processing each modality with a dedicated network and projecting learned 2D features onto 3D points. Merging large-scale point clouds and images raises several challenges, such as constructing a mapping between points and pixels, and aggregating features among multiple views. Current methods require mesh reconstruction or specialized sensors to recover occlusions, and use heuristics to select and aggregate the available images. In contrast, we propose an end-to-end trainable multi-view aggregation model that leverages the viewing conditions of 3D points to merge features from images taken at arbitrary positions. Our method can combine standard 2D and 3D networks and outperforms both 3D models operating on colorized point clouds and hybrid 2D/3D networks, without requiring colorization, meshing, or true depth maps. We set a new state of the art for large-scale indoor/outdoor semantic segmentation on S3DIS (74.7 mIoU, 6-fold) and KITTI-360 (58.3 mIoU). Our full pipeline is available at https://github.com/drprojects/deepviewagg and only requires raw 3D scans together with a set of images and poses.
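A minimal way to picture the view-conditioned aggregation is an attention score predicted from each point's viewing conditions (e.g. distance and viewing angle) that weights the image features of the views in which the point is visible. The snippet below is a hedged sketch under these assumptions, not the released DeepViewAgg implementation.

```python
# Sketch of view-conditioned multi-view feature aggregation onto 3D points.
# Per-view features and viewing-condition descriptors are assumed precomputed.
import torch
import torch.nn as nn

class ViewAggregator(nn.Module):
    def __init__(self, cond_dim=2):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(cond_dim, 32), nn.ReLU(), nn.Linear(32, 1))
    def forward(self, view_feats, view_conds, visible):
        # view_feats: (P, V, F) image features sampled for each point in each view
        # view_conds: (P, V, C) viewing conditions (e.g. distance, viewing angle)
        # visible:    (P, V)    True where the point is actually seen by that view
        scores = self.score(view_conds).squeeze(-1)            # (P, V)
        scores = scores.masked_fill(~visible, -1e9)            # ignore occluded views
        w = torch.softmax(scores, dim=1).unsqueeze(-1)         # (P, V, 1)
        return (w * view_feats).sum(dim=1)                     # (P, F) per-point feature

agg = ViewAggregator()
pooled = agg(torch.rand(1000, 4, 64), torch.rand(1000, 4, 2),
             torch.rand(1000, 4) > 0.3)   # each point seen in roughly 70% of views
```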
3D point clouds are rich in geometric structure information, while 2D images contain important and continuous texture information. Combining 2D information to achieve better 3D semantic segmentation has become mainstream in 3D scene understanding. Despite this success, it remains unclear how to fuse and process the cross-dimensional features from these two distinct spaces. Existing state-of-the-art methods usually exploit bidirectional projection to align the cross-dimensional features and tackle both 2D and 3D semantic segmentation tasks. However, to enable bidirectional mapping, this framework often requires a symmetrical 2D-3D network structure, which limits the network's flexibility. Meanwhile, such dual-task settings may easily distract the network and lead to over-fitting in the 3D segmentation task. Constrained by the network's inflexibility, fused features can only pass through a decoder network, which hurts model performance due to insufficient depth. To alleviate these drawbacks, we argue in this paper that, despite its simplicity, unidirectionally projecting multi-view 2D deep semantic features into the 3D space and aligning them with 3D deep semantic features can lead to better feature fusion. On the one hand, the unidirectional projection keeps our model focused on the core task, i.e., 3D segmentation; on the other hand, relaxing the bidirectional projection to a unidirectional one enables deeper cross-domain semantic alignment and the flexibility to fuse better, more complex features from very different spaces. Among joint 2D-3D approaches, our proposed method achieves superior performance on the ScanNetv2 benchmark for 3D semantic segmentation.
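The unidirectional fusion can be pictured as sampling multi-view 2D feature maps at each point's projection and merging the result with the point's own 3D feature before the 3D decoder. The following sketch assumes precomputed point-to-pixel projections in normalized coordinates and a simple average over views; it illustrates the principle, not the paper's network.

```python
# Hedged sketch of unidirectional 2D-to-3D feature fusion with assumed shapes.
import torch
import torch.nn as nn
import torch.nn.functional as F

def lift_2d_to_points(feat2d, uv):
    """feat2d: (V, C, H, W) per-view feature maps; uv: (P, V, 2) projections in [-1, 1].
    Returns (P, V, C) features sampled at each point's pixel in every view."""
    grid = uv.permute(1, 0, 2).unsqueeze(2)                       # (V, P, 1, 2)
    samp = F.grid_sample(feat2d, grid, align_corners=False)       # (V, C, P, 1)
    return samp.squeeze(-1).permute(2, 0, 1)                      # (P, V, C)

class UniDirFusion(nn.Module):
    def __init__(self, c2d=64, c3d=96):
        super().__init__()
        self.fuse = nn.Sequential(nn.Linear(c2d + c3d, c3d), nn.ReLU(), nn.Linear(c3d, c3d))
    def forward(self, feat3d, feat2d, uv):
        lifted = lift_2d_to_points(feat2d, uv).mean(dim=1)        # average over views: (P, c2d)
        return self.fuse(torch.cat([feat3d, lifted], dim=-1))     # fused per-point feature

fused = UniDirFusion()(torch.rand(2048, 96), torch.rand(3, 64, 60, 80),
                       torch.rand(2048, 3, 2) * 2 - 1)
```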
Point cloud learning has lately attracted increasing attention due to its wide applications in many areas, such as computer vision, autonomous driving, and robotics. As a dominating technique in AI, deep learning has been successfully used to solve various 2D vision problems. However, deep learning on point clouds is still in its infancy due to the unique challenges faced by the processing of point clouds with deep neural networks. Recently, deep learning on point clouds has been thriving, with numerous methods proposed to address different problems in this area. To stimulate future research, this paper presents a comprehensive review of recent progress in deep learning methods for point clouds. It covers three major tasks, including 3D shape classification, 3D object detection and tracking, and 3D point cloud segmentation. It also presents comparative results on several publicly available datasets, together with insightful observations and inspiring future research directions.
RGB-D semantic segmentation has attracted research interest because of the complementary modalities available at the input. Existing works usually adopt a two-stream architecture that processes photometric and geometric information in parallel, and few methods explicitly exploit the contribution of depth cues to adjust the sampling positions on the RGB image. In this paper, we propose a novel framework to incorporate depth information into an RGB convolutional neural network (CNN), termed Z-ACN (depth-adapted CNN). Specifically, our Z-ACN generates 2D depth-adapted offsets, fully constrained by low-level features, to guide feature extraction on the RGB image. With the generated offsets, we introduce two intuitive and effective operations that replace basic CNN operators: depth-adapted convolution and depth-adapted average pooling. Extensive experiments on both indoor and outdoor semantic segmentation tasks demonstrate the effectiveness of our method.
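The core operation can be sketched as a deformable convolution whose sampling offsets are predicted from the depth map alone, so that depth steers where the RGB features are sampled. The example below builds on torchvision's deform_conv2d with illustrative layer sizes; it is an assumption-laden sketch of the idea, not the Z-ACN code.

```python
# Sketch of a depth-adapted convolution: offsets for a deformable convolution on the
# RGB branch come from the depth map rather than from RGB features.
import torch
import torch.nn as nn
from torchvision.ops import deform_conv2d

class DepthAdaptedConv(nn.Module):
    def __init__(self, cin, cout, k=3):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(cout, cin, k, k) * 0.02)
        # two offset values per kernel location, computed from depth only
        self.offset_from_depth = nn.Conv2d(1, 2 * k * k, kernel_size=k, padding=k // 2)
        self.k = k
    def forward(self, rgb_feat, depth):
        offset = self.offset_from_depth(depth)            # (B, 2*k*k, H, W)
        return deform_conv2d(rgb_feat, offset, self.weight, padding=self.k // 2)

layer = DepthAdaptedConv(cin=16, cout=32)
out = layer(torch.rand(1, 16, 48, 64), torch.rand(1, 1, 48, 64))   # (1, 32, 48, 64)
```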
Point cloud semantic segmentation from projected views, such as the range view (RV) and bird's-eye view (BEV), has been intensively studied. Different views capture different information about the point cloud and are therefore complementary to each other. However, recent projection-based point cloud semantic segmentation methods usually adopt a vanilla late-fusion strategy over the predictions of the different views, and thus fail to explore the complementary information from a geometric perspective during representation learning. In this paper, we introduce a Geometric Flow Network (GFNet) to explore the geometric correspondence between different views in a fusion manner. Specifically, we devise a novel Geometric Flow Module (GFM) to bidirectionally align and propagate complementary information across views according to their geometric relationships, under an end-to-end learning scheme. We conduct extensive experiments on two widely used benchmark datasets, SemanticKITTI and nuScenes, to demonstrate the effectiveness of GFNet for projection-based point cloud semantic segmentation. Concretely, GFNet not only significantly boosts the performance of each individual view but also achieves state-of-the-art results among all projection-based models. Code is available at \url{https://github.com/haibo-qiu/gfnet}.
We propose to utilize self-supervised techniques in the 2D domain for fine-grained 3D shape segmentation tasks. This is inspired by the observation that view-based surface representations model high-resolution surface details and texture more effectively than their 3D counterparts based on point clouds or voxel occupancy. Specifically, given a 3D shape, we render it from multiple views and set up a dense correspondence learning task within a contrastive learning framework. As a result, the learned 2D representations are view-invariant and geometrically consistent, and generalize better when trained on a limited number of labeled shapes compared to alternatives that use self-supervision in 2D or 3D alone. Experiments on textured (RenderPeople) and untextured (PartNet) 3D datasets show that our method outperforms state-of-the-art alternatives in fine-grained part segmentation. The improvements over baselines are even larger when only a sparse set of views is available for training or when shapes are textured, indicating that MvDeCor benefits from both 2D processing and 3D geometric reasoning.
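The dense correspondence objective can be written as an InfoNCE loss over pixel embeddings of matched surface points in two rendered views. The sketch below assumes the correspondences have already been extracted from the known renders; it illustrates the shape of the loss rather than the full MvDeCor training setup.

```python
# Sketch of a dense-correspondence contrastive objective: pixel embeddings of the same
# surface point in two views are pulled together, all other pixels act as negatives.
import torch
import torch.nn.functional as F

def dense_correspondence_loss(feat_a, feat_b, temperature=0.07):
    """feat_a, feat_b: (M, D) embeddings of M corresponding pixels from two views."""
    a = F.normalize(feat_a, dim=-1)
    b = F.normalize(feat_b, dim=-1)
    logits = a @ b.t() / temperature            # (M, M) similarity of all pairs
    targets = torch.arange(a.shape[0], device=a.device)
    return F.cross_entropy(logits, targets)     # the i-th row must match the i-th column

loss = dense_correspondence_loss(torch.rand(256, 64), torch.rand(256, 64))
```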
We introduce PointConvFormer, a novel building block for point cloud based deep neural network architectures. Inspired by generalization theory, PointConvFormer combines ideas from point convolution, where filter weights are based only on relative positions, and Transformers, which utilize feature-based attention. In PointConvFormer, feature differences between points in the neighborhood serve as an indicator to reweight the convolutional weights. Hence, we preserve the invariances of the point convolution operation, while attention is used to select the relevant neighboring points for convolution. To validate the effectiveness of PointConvFormer, we experiment on semantic segmentation and scene flow estimation on point clouds, using ScanNet, SemanticKITTI, FlyingThings3D, and KITTI. Our results show that PointConvFormer outperforms classical convolution, regular Transformers, and voxelized sparse convolution approaches with smaller and more efficient networks. Visualizations show that PointConvFormer behaves similarly to convolution on flat surfaces, while the neighborhood selection effect is stronger on object boundaries, indicating that it gets the best of both worlds.
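A compact way to see the mechanism: convolution weights are generated from relative neighbor positions, and a scalar attention computed from feature differences reweights each neighbor before aggregation. The block below is an illustrative sketch under assumed dimensions, not the official PointConvFormer layer.

```python
# Sketch combining position-based convolution weights with feature-difference attention.
import torch
import torch.nn as nn

class PointConvFormerBlock(nn.Module):
    def __init__(self, cin, cout):
        super().__init__()
        self.weight_net = nn.Sequential(nn.Linear(3, 32), nn.ReLU(), nn.Linear(32, cin * cout))
        self.attn_net   = nn.Sequential(nn.Linear(cin, 32), nn.ReLU(), nn.Linear(32, 1))
        self.cin, self.cout = cin, cout
    def forward(self, xyz, feats, knn_idx):
        # xyz: (P, 3), feats: (P, C), knn_idx: (P, K) neighbor indices
        nbr_xyz   = xyz[knn_idx]                          # (P, K, 3)
        nbr_feats = feats[knn_idx]                        # (P, K, C)
        rel  = nbr_xyz - xyz.unsqueeze(1)                 # relative positions
        w    = self.weight_net(rel).view(*knn_idx.shape, self.cin, self.cout)
        attn = torch.softmax(self.attn_net(nbr_feats - feats.unsqueeze(1)), dim=1)   # (P, K, 1)
        contrib = torch.einsum('pkc,pkcd->pkd', nbr_feats, w)    # point-conv contribution per neighbor
        return (attn * contrib).sum(dim=1)                        # (P, cout)

block = PointConvFormerBlock(16, 32)
out = block(torch.rand(500, 3), torch.rand(500, 16), torch.randint(0, 500, (500, 8)))
```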
We present NeSF, a method for producing 3D semantic fields from posed RGB images alone. In place of classical 3D representations, our method builds on recent work on implicit neural scene representations, wherein 3D structure is captured by point-wise functions. We leverage such an approach to recover a 3D density field, on top of which we then train a 3D semantic segmentation model supervised by posed 2D semantic maps. Despite being trained on 2D signals alone, our method is able to generate 3D-consistent semantic maps from novel camera poses and can be queried at arbitrary 3D points. Notably, NeSF is compatible with any method producing a density field, and its accuracy improves as the quality of the density field improves. Our empirical analysis demonstrates quality comparable to competitive 2D and 3D semantic segmentation baselines on complex, realistically rendered synthetic scenes. Our method is the first to offer truly dense 3D scene segmentation that requires only 2D supervision for training and needs no semantic input at inference on novel scenes. We encourage readers to visit the project website.
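One plausible way such 2D-only supervision reaches the 3D field is NeRF-style compositing: per-sample class logits along each camera ray are accumulated with weights derived from the recovered density field, and a standard 2D cross-entropy is applied to the rendered per-pixel logits. The toy sketch below makes this assumption explicit; it is not the NeSF implementation.

```python
# Toy sketch: volume-rendering semantic logits so that a 2D loss trains a 3D semantic field.
import torch
import torch.nn.functional as F

def render_semantics(sem_logits, sigma, deltas):
    """sem_logits: (R, S, K) class logits at S samples on R rays,
    sigma: (R, S) densities, deltas: (R, S) distances between samples."""
    alpha = 1.0 - torch.exp(-sigma * deltas)                         # (R, S)
    trans = torch.cumprod(torch.cat([torch.ones_like(alpha[:, :1]),
                                     1.0 - alpha + 1e-10], dim=1), dim=1)[:, :-1]
    weights = alpha * trans                                           # (R, S) compositing weights
    return (weights.unsqueeze(-1) * sem_logits).sum(dim=1)           # (R, K) per-pixel logits

rays, samples, classes = 1024, 64, 13
pixel_logits = render_semantics(torch.rand(rays, samples, classes),
                                torch.rand(rays, samples), torch.full((rays, samples), 0.02))
loss = F.cross_entropy(pixel_logits, torch.randint(0, classes, (rays,)))   # 2D-only supervision
```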
The success of Transformers in natural language processing has recently attracted attention in the computer vision field. Owing to their ability to learn long-range dependencies, Transformers have been used as a replacement for the widely used convolution operators. This replacement has proven successful in numerous tasks, where several state-of-the-art methods rely on Transformers for better learning. In computer vision, the 3D field has also witnessed a growing use of Transformers to boost 3D convolutional neural networks and multi-layer perceptron networks. Although many surveys focus on Transformers in vision, 3D vision requires special attention due to the differences in data representation and processing compared to 2D vision. In this work, we present a systematic and thorough review of more than 100 Transformer-based methods for different 3D vision tasks, including classification, segmentation, detection, completion, pose estimation, and others. We discuss Transformer design in 3D vision, which allows it to process data with various 3D representations. For each application, we highlight the key properties and contributions of the Transformer-based methods. To assess the competitiveness of these methods, we compare their performance to common non-Transformer methods on 12 3D benchmarks. We conclude the survey by discussing different open directions and challenges for Transformers in 3D vision. In addition to the presented papers, we aim to frequently update the latest relevant papers along with their corresponding implementations at: https://github.com/lahoud/3d-vision-transformers.
Convolution on 3D point clouds has been widely studied, yet it is far from perfect in geometric deep learning. The traditional wisdom of convolution characterizes feature correspondences among 3D points indistinguishably, which induces an intrinsic limitation of poor distinctive feature learning. In this paper, we propose Adaptive Graph Convolution (AGConv) for a wide range of point cloud analysis applications. AGConv generates adaptive kernels for points according to their dynamically learned features. Compared with solutions using fixed/isotropic kernels, AGConv improves the flexibility of point cloud convolutions, effectively and precisely capturing the diverse relations between points from different semantic parts. Unlike popular attention-weight schemes, AGConv implements the adaptiveness inside the convolution operation instead of simply assigning different weights to the neighboring points. Extensive evaluations clearly show that our method outperforms state-of-the-art methods for point cloud classification and segmentation on various benchmark datasets. Meanwhile, AGConv can flexibly serve more point cloud analysis approaches to boost their performance. To validate its flexibility and effectiveness, we explore AGConv-based paradigms of completion, denoising, upsampling, registration and circle extraction, which are comparable or even superior to their competitors. Our code is available at https://github.com/hrzhou2/adaptconv-master.
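The defining trait, as opposed to fixed kernels or scalar attention, is that a small network generates a full per-neighbor kernel from learned feature relations. The sketch below illustrates one such formulation under assumed sizes; it is not the released AdaptConv code.

```python
# Sketch of an adaptive graph convolution: per-neighbor kernels generated from features.
import torch
import torch.nn as nn

class AdaptiveGraphConv(nn.Module):
    def __init__(self, cin, cout):
        super().__init__()
        self.kernel_gen = nn.Linear(2 * cin, cin * cout)     # kernel from center/difference features
        self.cin, self.cout = cin, cout
    def forward(self, feats, knn_idx):
        # feats: (P, C), knn_idx: (P, K)
        nbr = feats[knn_idx]                                  # (P, K, C)
        ctr = feats.unsqueeze(1).expand_as(nbr)               # (P, K, C)
        kernels = self.kernel_gen(torch.cat([ctr, nbr - ctr], dim=-1))   # (P, K, C*cout)
        kernels = kernels.view(*knn_idx.shape, self.cin, self.cout)
        out = torch.einsum('pkc,pkcd->pkd', nbr, kernels)     # adaptive kernel per neighbor
        return out.max(dim=1).values                          # (P, cout) max aggregation

out = AdaptiveGraphConv(16, 32)(torch.rand(500, 16), torch.randint(0, 500, (500, 8)))
```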
Automation and robotics in the agricultural sector are seen as a viable solution to the socio-economic challenges faced by this industry. The technology often relies on intelligent perception systems providing information about crops, plants and the entire environment. The challenges faced by traditional 2D vision systems can be addressed by modern 3D vision systems, which enable straightforward localization of objects, size and shape estimation, or handling of occlusions. So far, the use of 3D sensing has been mostly limited to indoor or structured environments. In this paper, we evaluate modern sensing technologies, including stereo and time-of-flight cameras, for 3D perception of shape in agriculture, and their usability for segmenting soft fruit from the background based on shape. To this end, we propose a novel 3D deep neural network that exploits the organized nature of the information originating from camera-based 3D sensors. We demonstrate the superior performance and efficiency of the proposed architecture compared to state-of-the-art 3D networks. Through a simulated study, we also show the potential of the 3D sensing paradigm for object segmentation in agriculture and provide insights and analysis of the shape quality needed and expected for further analysis of crops. The results of this work should encourage researchers and companies to develop more accurate and robust 3D sensing technologies to ensure their wider adoption in practical agricultural applications.
Segmenting humans in 3D indoor scenes has become increasingly important with the rise of human-centered robotics and AR/VR applications. In this direction, we explore the tasks of 3D human semantic-, instance- and multi-human body-part segmentation. Few works have attempted to directly segment humans in point clouds (or depth maps), which is largely due to the lack of training data on humans interacting with 3D scenes. We address this challenge and propose a framework for synthesizing virtual humans in realistic 3D scenes. Synthetic point cloud data is attractive since the domain gap between real and synthetic depth is small compared to images. Our analysis of different training schemes using a combination of synthetic and realistic data shows that synthetic data for pre-training improves performance in a wide variety of segmentation tasks and models. We further propose the first end-to-end model for 3D multi-human body-part segmentation, called Human3D, that performs all the above segmentation tasks in a unified manner. Remarkably, Human3D even outperforms previous task-specific state-of-the-art methods. Finally, we manually annotate humans in test scenes from EgoBody to compare the proposed training schemes and segmentation models.
Multi-view projection methods have demonstrated promising performance on 3D understanding tasks such as 3D classification and segmentation. However, it remains unclear how to combine such multi-view methods with the widely available 3D point clouds. Previous methods use unlearned heuristics to combine features at the point level. To this end, we introduce the concept of the multi-view point cloud (Voint cloud), representing each 3D point as a set of features extracted from multiple view-points. This novel 3D Voint cloud representation combines the compactness of 3D point cloud representation with the natural view-awareness of multi-view representation. Naturally, we can equip this new representation with convolutional and pooling operations. We deploy a Voint neural network (VointNet) with a theoretically established functional form to learn representations in the Voint space. Our novel representation achieves state-of-the-art performance on 3D classification and retrieval on ScanObjectNN, ModelNet40 and ShapeNet Core55. Additionally, we achieve competitive performance for 3D semantic segmentation on ShapeNet Parts. Further analysis shows that VointNet improves robustness to rotation and occlusion compared to other methods.
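The representation itself is easy to picture: every point carries one feature vector per view, a shared view-wise MLP processes them, and a permutation-invariant pooling over the view dimension collapses them into a single per-point feature. The snippet below is a minimal sketch with assumed shapes and a max-pool aggregator, not VointNet itself.

```python
# Minimal sketch of per-point multi-view (Voint-style) features pooled over views.
import torch
import torch.nn as nn

class VointPool(nn.Module):
    def __init__(self, cin=64, cout=128):
        super().__init__()
        self.view_mlp = nn.Sequential(nn.Linear(cin, cout), nn.ReLU(), nn.Linear(cout, cout))
    def forward(self, voints, visible):
        # voints: (P, V, C) per-view features of each point, visible: (P, V) mask
        h = self.view_mlp(voints)                             # (P, V, cout)
        h = h.masked_fill(~visible.unsqueeze(-1), -1e9)       # drop occluded views from the max
        return h.max(dim=1).values                            # max-pool over views -> (P, cout)

pooled = VointPool()(torch.rand(1000, 6, 64), torch.rand(1000, 6) > 0.2)
```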
We present an approach to semantic scene analysis using deep convolutional networks. Our approach is based on tangent convolutions, a new construction for convolutional networks on 3D data. In contrast to volumetric approaches, our method operates directly on surface geometry. Crucially, the construction is applicable to unstructured point clouds and other noisy real-world data. We show that tangent convolutions can be evaluated efficiently on large-scale point clouds with millions of points. Using tangent convolutions, we design a deep fully-convolutional network for semantic segmentation of 3D point clouds, and apply it to challenging real-world datasets of indoor and outdoor 3D environments. Experimental results show that the presented approach outperforms other recent deep network constructions in detailed analysis of large 3D scenes.
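The geometric step behind tangent convolutions can be sketched as estimating a tangent plane at each point from the local covariance of its neighborhood and expressing the neighbors in that plane's 2D coordinates, where image-like convolutions become applicable. The code below is a rough, assumption-based illustration of that step only, not the paper's implementation.

```python
# Rough sketch: local PCA to estimate a tangent plane and project neighbors onto it.
import torch

def tangent_coordinates(xyz, knn_idx):
    """xyz: (P, 3) points, knn_idx: (P, K) neighbor indices -> (P, K, 2) tangent-plane coords."""
    nbrs = xyz[knn_idx]                                      # (P, K, 3)
    centered = nbrs - nbrs.mean(dim=1, keepdim=True)
    cov = torch.einsum('pki,pkj->pij', centered, centered)   # (P, 3, 3) local covariance
    _, vecs = torch.linalg.eigh(cov)                         # eigenvectors, ascending eigenvalues
    tangent_basis = vecs[:, :, 1:]                           # two largest -> tangent plane axes
    rel = nbrs - xyz.unsqueeze(1)
    return torch.einsum('pkj,pjt->pkt', rel, tangent_basis)  # project onto the tangent plane

uv = tangent_coordinates(torch.rand(500, 3), torch.randint(0, 500, (500, 16)))
```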
As camera and LiDAR sensors capture complementary information for autonomous driving, great effort has been made to develop semantic segmentation algorithms through multi-modality data fusion. However, fusion-based methods require paired data, i.e., LiDAR point clouds and camera images with strict point-to-pixel mappings, as inputs for both training and inference, which seriously hinders their application in practical scenarios. Thus, in this work we propose 2D Priors Assisted Semantic Segmentation (2DPASS) to boost representation learning on point clouds by fully taking advantage of 2D images with rich appearance. In practice, by leveraging auxiliary modal fusion and multi-scale fusion-to-single knowledge distillation (MSFSKD), 2DPASS acquires richer semantic and structural information from the multi-modal data, which is then distilled online into the pure 3D network. As a result, equipped with 2DPASS, our baseline shows significant improvement with point cloud inputs only. Specifically, it achieves state-of-the-art results on two large-scale benchmarks, SemanticKITTI and nuScenes, including top-1 results on both the single-scan and multi-scan competitions of SemanticKITTI.
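The practical upshot of such cross-modal distillation is that a fused 2D+3D teacher branch is only needed at training time, while the pure 3D student runs alone at inference. A generic distillation loss of this kind is sketched below (a standard cross-entropy plus a temperature-scaled KL term); the actual MSFSKD module is more involved.

```python
# Sketch of a cross-modal distillation objective: the fused branch teaches the 3D-only branch.
import torch
import torch.nn.functional as F

def distill_loss(fused_logits, lidar_logits, labels, temperature=2.0, alpha=0.5):
    """fused_logits / lidar_logits: (N, K) per-point logits, labels: (N,) ground truth."""
    ce = F.cross_entropy(lidar_logits, labels)                     # supervised 3D loss
    kd = F.kl_div(F.log_softmax(lidar_logits / temperature, dim=-1),
                  F.softmax(fused_logits.detach() / temperature, dim=-1),
                  reduction='batchmean') * temperature ** 2        # teacher is not updated
    return ce + alpha * kd

loss = distill_loss(torch.rand(4096, 20), torch.rand(4096, 20), torch.randint(0, 20, (4096,)))
```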
MonoScene proposes a 3D Semantic Scene Completion (SSC) framework, in which the dense geometry and semantics of a scene are inferred from a single monocular RGB image. Unlike the SSC literature, which relies on 2.5D or 3D input, we tackle the complex problem of 2D-to-3D scene reconstruction while jointly inferring its semantics. Our framework relies on successive 2D and 3D UNets bridged by a novel 2D-3D feature projection inspired by optics, and introduces a 3D context relation prior to enforce spatio-semantic consistency. Along with the architectural contributions, we introduce novel global scene and local frustum losses. Experiments show that we outperform the literature on all metrics and datasets, while even hallucinating scenery beyond the camera field of view. Our code and trained models are available at https://github.com/cv-rits/monoscene.
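The 2D-to-3D projection can be pictured as back-projecting every voxel center into the image with the camera intrinsics and sampling the 2D feature map at the resulting pixel, so that all voxels along a camera ray share the same image feature. The toy function below illustrates this under an assumed pinhole model and made-up sizes; it is not the released MonoScene code.

```python
# Toy sketch of lifting 2D features to a 3D volume by perspective projection.
import torch
import torch.nn.functional as F

def lift_features_to_voxels(feat2d, voxel_centers, K_intr):
    """feat2d: (1, C, H, W); voxel_centers: (N, 3) in camera coords; K_intr: (3, 3)."""
    H, W = feat2d.shape[-2:]
    uvw = voxel_centers @ K_intr.t()                       # project onto the image plane
    uv = uvw[:, :2] / uvw[:, 2:].clamp(min=1e-6)           # pixel coordinates
    grid = torch.stack([uv[:, 0] / (W - 1), uv[:, 1] / (H - 1)], dim=-1) * 2 - 1
    samp = F.grid_sample(feat2d, grid.view(1, -1, 1, 2), align_corners=True)
    return samp.view(feat2d.shape[1], -1).t()              # (N, C) feature per voxel

K_intr = torch.tensor([[100., 0., 32.], [0., 100., 24.], [0., 0., 1.]])
vox = lift_features_to_voxels(torch.rand(1, 64, 48, 64),
                              torch.rand(4096, 3) + torch.tensor([0., 0., 1.]), K_intr)
```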
We present Kernel Point Convolution (KPConv), a new design of point convolution, i.e. one that operates on point clouds without any intermediate representation. The convolution weights of KPConv are located in Euclidean space by kernel points, and applied to the input points close to them. Its capacity to use any number of kernel points gives KPConv more flexibility than fixed grid convolutions. Furthermore, these locations are continuous in space and can be learned by the network. Therefore, KPConv can be extended to deformable convolutions that learn to adapt kernel points to local geometry. Thanks to a regular subsampling strategy, KPConv is also efficient and robust to varying densities. Whether they use deformable KPConv for complex tasks, or rigid KPConv for simpler tasks, our networks outperform state-of-the-art classification and segmentation approaches on several datasets. We also offer ablation studies and visualizations to provide understanding of what has been learned by KPConv and to validate the descriptive power of deformable KPConv.
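To make the mechanism concrete, the sketch below implements a simplified rigid kernel-point convolution: each neighbor's offset is correlated with a set of learnable kernel points via a linear correlation, and each kernel point carries its own weight matrix. Dimensions and the initialization are assumptions for illustration, not the official KPConv code.

```python
# Small sketch of kernel-point correlation for a rigid point convolution.
import torch
import torch.nn as nn

class SimpleKPConv(nn.Module):
    def __init__(self, cin, cout, num_kernel_points=15, sigma=0.3):
        super().__init__()
        self.kernel_pts = nn.Parameter(torch.randn(num_kernel_points, 3) * sigma)
        self.weights = nn.Parameter(torch.randn(num_kernel_points, cin, cout) * 0.02)
        self.sigma = sigma
    def forward(self, xyz, feats, knn_idx):
        # xyz: (P, 3), feats: (P, C), knn_idx: (P, K)
        rel = xyz[knn_idx] - xyz.unsqueeze(1)                        # (P, K, 3) neighbor offsets
        dist = torch.cdist(rel, self.kernel_pts.unsqueeze(0).expand(rel.shape[0], -1, -1))
        corr = torch.clamp(1.0 - dist / self.sigma, min=0.0)         # (P, K, M) linear correlation
        nbr_feats = feats[knn_idx]                                    # (P, K, C)
        agg = torch.einsum('pkm,pkc->pmc', corr, nbr_feats)           # gather per kernel point
        return torch.einsum('pmc,mcd->pd', agg, self.weights)         # (P, cout)

out = SimpleKPConv(16, 32)(torch.rand(500, 3), torch.rand(500, 16),
                           torch.randint(0, 500, (500, 8)))
```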
Recently, many approaches have been proposed, using single or multiple representations, to improve the performance of point cloud semantic segmentation. However, these works do not maintain a good balance among performance, efficiency and memory consumption. To address these issues, we propose DRINet++, which extends DRINet by enhancing the sparsity and geometric properties of the point cloud following a voxel-as-point principle. To improve efficiency and performance, DRINet++ mainly consists of two modules: a Sparse Feature Encoder and a Sparse Geometry Feature Enhancement module. The Sparse Feature Encoder extracts local context information for each point, while the Sparse Geometry Feature Enhancement strengthens the geometric properties of the sparse point cloud via multi-scale sparse projection and attentive multi-scale fusion. Furthermore, we propose deep sparse supervision in the training phase to help convergence and alleviate memory consumption. Our DRINet++ achieves state-of-the-art outdoor point cloud segmentation on both the SemanticKITTI and nuScenes datasets, while running faster and consuming less memory.
Recent advances in implicit 3D representations, i.e., Neural Radiance Fields (NeRFs), make accurate and photorealistic 3D reconstruction possible in a differentiable manner. This new representation can effectively convey the information of hundreds of high-resolution images in one compact format and allows photorealistic synthesis of novel views. In this work, using a variant of NeRF called Plenoxels, we create the first large-scale implicit representation dataset for perception tasks, called PeRFception, which consists of two parts that incorporate both object-centric and scene-centric scans for classification and segmentation. It shows a significant memory compression rate (96.4%) from the original dataset, while containing both 2D and 3D information in a unified form. We construct classification and segmentation models that directly take this implicit format as input, and propose a novel augmentation technique to avoid overfitting to the backgrounds of images. The code and data are publicly available at https://postech-cvlab.github.io/perfception.