Interoperability is a significant problem in Building Information Modeling (BIM). Object type, a critical kind of semantic information needed by multiple BIM applications such as scan-to-BIM and code compliance checking, also suffers when BIM data are exchanged or when models are created with software from other domains, and it can be supplemented using deep learning. Current deep learning methods mainly learn from the shape information of BIM objects for classification, leaving the relational information inherent in the BIM context unused. To address this issue, we introduce a two-branch geometric-relational deep learning framework that boosts previous geometric classification methods with relational information. We also present a BIM object dataset, IFCNet++, which contains both geometric and relational information about the objects. Experiments show that our framework can be flexibly adapted to different geometric methods, and that relational features indeed act as a bonus to general geometric learning methods, clearly improving their classification performance, thus reducing the manual labor of checking models and improving the practical value of enriched BIM models.
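A minimal sketch of the two-branch idea described above (not the authors' implementation): a geometric branch produces a shape embedding, a relational branch embeds relational features extracted from the BIM context, and the two are concatenated for classification. Layer sizes and module names are illustrative assumptions.

```python
import torch
import torch.nn as nn

class GeometricRelationalClassifier(nn.Module):
    def __init__(self, geo_backbone: nn.Module, geo_dim: int,
                 rel_dim: int, num_classes: int):
        super().__init__()
        self.geo_backbone = geo_backbone          # any geometric encoder (e.g. a point cloud network)
        self.rel_branch = nn.Sequential(          # embeds relational features of the BIM object
            nn.Linear(rel_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
        )
        self.head = nn.Sequential(                # fuses both branches for the final prediction
            nn.Linear(geo_dim + 64, 128), nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, geometry, relations):
        g = self.geo_backbone(geometry)           # (B, geo_dim) geometric embedding
        r = self.rel_branch(relations)            # (B, 64) relational embedding
        return self.head(torch.cat([g, r], dim=-1))
```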
Convolution on 3D point clouds has been widely studied, yet it is far from perfect in geometric deep learning. The conventional wisdom of convolution assumes feature correspondences between 3D points, which is an intrinsic limitation that leads to poor learning of distinctive features. In this paper, we propose Adaptive Graph Convolution (AGConv) for a wide range of point cloud analysis applications. AGConv generates adaptive kernels based on dynamically learned point features. Compared with solutions that use fixed/isotropic kernels, AGConv improves the flexibility of point cloud convolution and effectively and precisely captures the diverse relations between points from different semantic parts. Unlike popular attention-weighting schemes, AGConv implements adaptiveness inside the convolution operation instead of simply assigning different weights to neighboring points. Extensive evaluations clearly show that our method outperforms state-of-the-art methods for point cloud classification and segmentation on various benchmark datasets. Meanwhile, AGConv can be flexibly adopted by other point cloud analysis methods to improve their performance. To verify its flexibility and effectiveness, we explore AGConv-based paradigms for completion, denoising, upsampling, registration, and circle extraction, which are comparable or even superior to their competitors. Our code is available at https://github.com/hrzhou2/adaptconv-master.
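A simplified sketch of the adaptive-kernel idea (assuming precomputed k-NN indices and feature-difference inputs); this is not the official AGConv code, only an illustration of generating per-neighbor kernels from learned features and applying them inside the convolution.

```python
import torch
import torch.nn as nn

class AdaptiveGraphConv(nn.Module):
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        # generates a per-neighbor kernel from the (center, difference) feature pair
        self.kernel_gen = nn.Linear(2 * in_dim, out_dim * in_dim)
        self.bn = nn.BatchNorm1d(out_dim)

    def forward(self, x, knn_idx):
        # x: (B, N, C) point features, knn_idx: (B, N, K) neighbor indices
        B, N, C = x.shape
        K = knn_idx.shape[-1]
        neighbors = torch.gather(
            x.unsqueeze(1).expand(B, N, N, C), 2,
            knn_idx.unsqueeze(-1).expand(B, N, K, C))              # (B, N, K, C)
        center = x.unsqueeze(2).expand(B, N, K, C)
        diff = neighbors - center
        kernels = self.kernel_gen(torch.cat([center, diff], -1))   # (B, N, K, out*in)
        kernels = kernels.view(B, N, K, -1, C)                     # (B, N, K, out, in)
        out = torch.einsum('bnkoc,bnkc->bnko', kernels, diff)      # adaptive convolution
        out = out.max(dim=2).values                                # aggregate over neighbors
        return self.bn(out.transpose(1, 2)).transpose(1, 2)        # (B, N, out)
```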
As two fundamental representation modalities of 3D objects, 2D multi-view images and 3D point clouds reflect shape information from the complementary aspects of visual appearance and geometric structure. Unlike deep-learning-based 2D multi-view image modeling, which has demonstrated leading performance in various 3D shape analysis tasks, 3D point-cloud-based geometric modeling still suffers from insufficient learning capacity. In this paper, we construct a unified cross-modal knowledge transfer framework that distills discriminative visual descriptors of 2D images into geometric descriptors of 3D point clouds. Technically, under the classic teacher-student learning paradigm, we propose multi-view vision-to-geometry distillation, consisting of a deep 2D image encoder as the teacher and a deep 3D point cloud encoder as the student. To achieve heterogeneous feature alignment, we further propose visibility-aware feature projection, through which per-point embeddings can be aggregated into multi-view geometric descriptors. Extensive experiments on 3D shape classification, part segmentation, and unsupervised learning validate the superiority of our method. The code and data will be made publicly available.
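A minimal sketch of the teacher-student distillation step under stated assumptions: a frozen 2D multi-view teacher provides per-view descriptors and the 3D point cloud student is trained to match them; the visibility-aware projection that produces the student's multi-view descriptors is omitted, and all names are illustrative.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_desc: torch.Tensor, teacher_desc: torch.Tensor) -> torch.Tensor:
    """student_desc, teacher_desc: (B, V, D) per-view descriptors for B shapes and V views."""
    s = F.normalize(student_desc, dim=-1)
    t = F.normalize(teacher_desc, dim=-1).detach()   # teacher is frozen, no gradient
    # cosine alignment between matching views, averaged over views and batch
    return (1.0 - (s * t).sum(dim=-1)).mean()
```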
Point cloud completion is a generation and estimation problem derived from partial point clouds, and it plays a vital role in 3D computer vision applications. Progress in deep learning (DL) has impressively improved the capability and robustness of point cloud completion. However, the quality of completed point clouds still needs to be enhanced further to meet practical requirements. Therefore, this work conducts a comprehensive survey of various methods, including point-based, convolution-based, graph-based, and generative-model-based approaches, and summarizes comparisons among these methods to provoke further research insights. In addition, this review sums up the commonly used datasets and illustrates the applications of point cloud completion. Finally, we also discuss possible research trends in this rapidly expanding field.
3D point clouds are rich in geometric structure information, while 2D images contain important and continuous texture information. Combining 2D information to achieve better 3D semantic segmentation has become mainstream in 3D scene understanding. Despite this success, it remains elusive how to fuse and process cross-dimensional features from these two distinct spaces. Existing state-of-the-art methods usually exploit bidirectional projection to align cross-dimensional features and realize both 2D and 3D semantic segmentation tasks. However, to enable bidirectional mapping, this framework often requires a symmetrical 2D-3D network structure, thus limiting the network's flexibility. Meanwhile, such dual-task settings may easily distract the network and lead to over-fitting on the 3D segmentation task. Limited by the network's inflexibility, fused features can only pass through a decoder network, which hurts model performance due to insufficient depth. To alleviate these drawbacks, in this paper we argue that, despite its simplicity, unidirectionally projecting multi-view 2D deep semantic features into 3D space and aligning them with 3D deep semantic features leads to better feature fusion. On the one hand, the unidirectional projection keeps our model focused on the core task, i.e., 3D segmentation; on the other hand, relaxing the bidirectional projection to a unidirectional one enables deeper cross-domain semantic alignment and offers the flexibility to fuse richer and more complicated features from very different spaces. Among joint 2D-3D approaches, our proposed method achieves superior performance on the ScanNetv2 benchmark for 3D semantic segmentation.
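A minimal sketch (with assumed pinhole camera conventions, not the paper's code) of unidirectional 2D-to-3D feature projection: each 3D point is projected into one view, the 2D feature map is bilinearly sampled at that location, and the sampled feature is fused with the point's 3D feature by concatenation.

```python
import torch
import torch.nn.functional as F

def project_and_fuse(points, feats_3d, feat_map_2d, K, world_to_cam):
    # points: (N, 3), feats_3d: (N, C3), feat_map_2d: (C2, H, W)
    # K: (3, 3) camera intrinsics, world_to_cam: (4, 4) extrinsics
    ones = torch.ones_like(points[:, :1])
    cam = (world_to_cam @ torch.cat([points, ones], 1).T).T[:, :3]   # camera coordinates
    uv = (K @ cam.T).T
    uv = uv[:, :2] / uv[:, 2:3].clamp(min=1e-6)                      # pixel coordinates
    H, W = feat_map_2d.shape[1:]
    grid = torch.stack([uv[:, 0] / (W - 1) * 2 - 1,                  # normalize to [-1, 1]
                        uv[:, 1] / (H - 1) * 2 - 1], dim=-1)
    sampled = F.grid_sample(feat_map_2d[None], grid[None, :, None, :],
                            align_corners=True)                      # (1, C2, N, 1)
    feats_2d = sampled[0, :, :, 0].T                                 # (N, C2)
    return torch.cat([feats_3d, feats_2d], dim=-1)                   # (N, C3 + C2)
```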
Learning intra-region context and inter-region relations are two effective strategies for strengthening feature representations in point cloud analysis. However, the two strategies have not been fully emphasized by existing methods for a unified point cloud representation. To this end, we propose a novel framework named Point Relation-Aware Network (PRA-Net), which is composed of an Intra-region Structure Learning (ISL) module and an Inter-region Relation Learning (IRL) module. The ISL module dynamically integrates local structural information into point features via a differentiable region partition scheme and a representative-point-based strategy, while the IRL module captures inter-region relations adaptively and efficiently. Extensive experiments on several 3D benchmarks covering shape classification, keypoint estimation, and part segmentation have verified the effectiveness and generalization ability of PRA-Net. The code will be available at https://github.com/xiwuchen/pra-net.
Point cloud learning has lately attracted increasing attention due to its wide applications in many areas, such as computer vision, autonomous driving, and robotics. As a dominating technique in AI, deep learning has been successfully used to solve various 2D vision problems. However, deep learning on point clouds is still in its infancy due to the unique challenges faced by the processing of point clouds with deep neural networks. Recently, deep learning on point clouds has been thriving, with numerous methods being proposed to address different problems in this area. To stimulate future research, this paper presents a comprehensive review of recent progress in deep learning methods for point clouds. It covers three major tasks, including 3D shape classification, 3D object detection and tracking, and 3D point cloud segmentation. It also presents comparative results on several publicly available datasets, together with insightful observations and inspiring future research directions.
Point cloud is an important type of geometric data structure. Due to its irregular format, most researchers transform such data to regular 3D voxel grids or collections of images. This, however, renders data unnecessarily voluminous and causes issues. In this paper, we design a novel type of neural network that directly consumes point clouds, which well respects the permutation invariance of points in the input. Our network, named PointNet, provides a unified architecture for applications ranging from object classification, part segmentation, to scene semantic parsing. Though simple, PointNet is highly efficient and effective. Empirically, it shows strong performance on par or even better than state of the art. Theoretically, we provide analysis towards understanding of what the network has learnt and why the network is robust with respect to input perturbation and corruption.
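A compact sketch of the core PointNet idea described above: a shared per-point MLP followed by a symmetric max-pooling function, which makes the global descriptor invariant to the ordering of the input points. The input/feature transform networks (T-Nets) of the full architecture are omitted, and layer sizes follow common usage rather than the exact released code.

```python
import torch
import torch.nn as nn

class TinyPointNet(nn.Module):
    def __init__(self, num_classes: int = 40):
        super().__init__()
        self.mlp = nn.Sequential(                  # shared MLP applied to every point
            nn.Conv1d(3, 64, 1), nn.BatchNorm1d(64), nn.ReLU(),
            nn.Conv1d(64, 128, 1), nn.BatchNorm1d(128), nn.ReLU(),
            nn.Conv1d(128, 1024, 1), nn.BatchNorm1d(1024), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Linear(1024, 512), nn.ReLU(),
            nn.Linear(512, num_classes),
        )

    def forward(self, x):                          # x: (B, N, 3) unordered points
        feat = self.mlp(x.transpose(1, 2))         # (B, 1024, N) per-point features
        global_feat = feat.max(dim=2).values       # symmetric function -> permutation invariant
        return self.head(global_feat)
```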
Deep learning on point clouds is developing rapidly. Grouping points with their neighbors and performing convolution-like operations on them can learn the local features of a point cloud, but such methods are weak at extracting long-range global features. Performing attention over the whole point cloud with a transformer can effectively learn its global features, but such methods hardly extract fine-grained local features. In this paper, we propose a novel module that can simultaneously extract and fuse local and global features, named CT-Block. The CT-Block is composed of two branches, where the letter C denotes the convolution branch and the letter T denotes the transformer branch. The convolution branch performs convolution on grouped neighboring points to extract local features, while the transformer branch performs an offset-attention process over the whole point cloud to extract global features. Through the bridges constructed by the feature-transmission elements in the CT-Block, the local and global features guide each other during learning and are fused effectively. We apply the CT-Block to construct point cloud classification and segmentation networks and evaluate their performance on several public datasets. Experimental results show that, because the features learned by the CT-Block are highly expressive, networks constructed from CT-Blocks achieve state-of-the-art performance on point cloud classification and segmentation tasks.
Sketch-based 3D shape retrieval (SBSR) is an important yet challenging task that has attracted increasing attention in recent years. Existing approaches address the problem in restricted settings that do not properly simulate real application scenarios. To mimic realistic settings, in this track we adopt large-scale sketches drawn by amateurs with different levels of drawing skill, as well as a variety of 3D shapes including not only CAD models but also models scanned from real objects. We define two SBSR tasks and construct two benchmarks comprising more than 46,000 CAD models, 1,700 realistic models, and 145,000 sketches. Four teams participated in this track and submitted 15 runs for the two tasks, evaluated by 7 commonly used metrics. We hope that the benchmarks, comparative results, and open-source evaluation code will foster future research in the 3D object retrieval community.
A longstanding question in computer vision concerns the representation of 3D shapes for recognition: should 3D shapes be represented with descriptors operating on their native 3D formats, such as voxel grid or polygon mesh, or can they be effectively represented with view-based descriptors? We address this question in the context of learning to recognize 3D shapes from a collection of their rendered views on 2D images. We first present a standard CNN architecture trained to recognize the shapes' rendered views independently of each other, and show that a 3D shape can be recognized even from a single view at an accuracy far higher than using state-of-the-art 3D shape descriptors. Recognition rates further increase when multiple views of the shapes are provided. In addition, we present a novel CNN architecture that combines information from multiple views of a 3D shape into a single and compact shape descriptor offering even better recognition performance. The same architecture can be applied to accurately recognize human hand-drawn sketches of shapes. We conclude that a collection of 2D views can be highly informative for 3D shape recognition and is amenable to emerging CNN architectures and their derivatives.
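A minimal sketch of the multi-view CNN idea described above: a shared 2D CNN encodes each rendered view, element-wise max pooling across views merges them into a single compact shape descriptor, and a classifier operates on that descriptor. The ResNet-18 backbone is an illustrative substitute, not the network used in the paper.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

class MiniMVCNN(nn.Module):
    def __init__(self, num_classes: int = 40):
        super().__init__()
        backbone = resnet18(weights=None)
        self.cnn = nn.Sequential(*list(backbone.children())[:-1])   # shared per-view encoder
        self.fc = nn.Linear(512, num_classes)

    def forward(self, views):                       # views: (B, V, 3, H, W) rendered views
        B, V = views.shape[:2]
        feats = self.cnn(views.flatten(0, 1))       # (B*V, 512, 1, 1) per-view features
        feats = feats.view(B, V, -1)                # (B, V, 512)
        shape_desc = feats.max(dim=1).values        # view pooling into one shape descriptor
        return self.fc(shape_desc)
```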
Simultaneous object recognition and pose estimation are two key capabilities for a robot to interact safely with humans and its environment. Although both object recognition and pose estimation use visual input, most state-of-the-art approaches treat them as two separate problems, since the former needs a view-invariant representation while object pose estimation requires a viewpoint-dependent description. Nowadays, multi-view convolutional neural network (MVCNN) approaches show state-of-the-art classification performance. Although MVCNN object recognition has been widely explored, there has been little research on multi-view object pose estimation, and even less on addressing both problems simultaneously. The poses of the virtual cameras in MVCNN methods are usually predefined, which limits the application of such approaches. In this paper, we propose an approach capable of handling object recognition and pose estimation simultaneously. In particular, we develop a deep object-agnostic entropy estimation model capable of predicting the best viewpoints of a given 3D object. The object's views are then fed into the network to simultaneously predict the pose and category label of the target object. Experimental results show that the views obtained from such positions are sufficient to achieve good accuracy. Furthermore, we design a real-life drink-serving scenario to demonstrate how well the proposed approach works in a real robot task. The code is available online: github.com/subhadityamukherjee/more_mvcnn
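A small illustrative sketch, not the paper's learned model: it scores candidate viewpoints by the Shannon entropy of their rendered depth images and keeps the highest-scoring ones, under the assumption that higher-entropy views are more informative.

```python
import torch

def view_entropy(depth_image: torch.Tensor, bins: int = 64) -> float:
    """Shannon entropy (bits) of the depth-value histogram of one rendered view."""
    hist = torch.histc(depth_image.float(), bins=bins)
    p = hist / hist.sum().clamp(min=1)
    p = p[p > 0]
    return float(-(p * torch.log2(p)).sum())

def select_best_views(depth_images, k: int = 3) -> torch.Tensor:
    scores = torch.tensor([view_entropy(d) for d in depth_images])
    return torch.topk(scores, k).indices            # indices of the k highest-entropy views
```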
With the help of the deep learning paradigm, many point cloud networks have been invented for visual analysis. However, there remains great potential for the development of these networks, since the information given by point cloud data has not been fully exploited. To improve the effectiveness of existing networks in analyzing point cloud data, we propose a plug-and-play module, PnP-3D, aiming to refine basic point cloud feature representations by involving more local context from explicit 3D space and global bilinear responses from implicit feature space. To thoroughly evaluate our approach, we conduct experiments on three standard point cloud analysis tasks, including classification, semantic segmentation, and object detection, where we select three state-of-the-art networks from each task for evaluation. Serving as a plug-and-play module, PnP-3D can significantly boost the performance of established networks. In addition to achieving state-of-the-art results on four widely used point cloud benchmarks, we provide comprehensive ablation studies and visualizations to demonstrate the advantages of our approach. The code will be available at https://github.com/shiqiu0419/pnp-3d.
The point cloud completion task aims to predict the missing parts of incomplete point clouds and generate complete point clouds with fine details. In this paper, we propose a novel point cloud completion network. Specifically, features are learned from point clouds of different resolutions sampled from the incomplete input and converted into a series of spots according to the geometric structure. Then, a transformer-based Dense Relation Augmentation (DRA) module is proposed to learn the features within the spots and to consider the correlations among these spots. The DRA consists of a Point Local Attention (PLA) module and a Point Dense Multi-scale Attention (PDMA) module, where the PLA captures local information within local spots by adaptively weighting neighbors, and the PDMA exploits the global relations among these spots in a multi-scale, densely connected manner. Finally, the complete shape is predicted from the spots by a Multi-resolution Point Fusion (MPF) module, which progressively generates the complete point cloud from the spots and updates the spots based on the generated points. Experimental results show that our method largely outperforms state-of-the-art methods, since the transformer-based DRA can learn expressive features from the incomplete input and the MPF can fully exploit these features to predict the complete shape.
The classification of airborne laser scanning (ALS) point clouds is a critical task in the fields of remote sensing and photogrammetry. Although recent deep-learning-based methods have achieved satisfactory performance, they overlook the uniformity of their receptive fields, so ALS point cloud classification remains challenging for regions with complex structures and extreme scale variations. In this paper, to configure multi-receptive-field features, we propose a novel Receptive Field Fusion-and-Stratification Network (RFFS-Net). With a novel Dilated Graph Convolution (DGConv) and its extension, Annular Dilated Convolution (ADConv), as basic building blocks, the receptive field fusion process is implemented with a Dilated and Annular Graph Fusion (DAGFusion) module, which obtains multi-receptive-field feature representations by capturing dilated and annular graphs with various receptive regions. With these modules as the computational basis, the stratification of receptive fields is performed with multi-level decoders nested in RFFS-Net and driven by a Multi-level Receptive Field Aggregation Loss (MRFALoss), which guides the network to learn toward supervision labels at different resolutions. With receptive field fusion and stratification, RFFS-Net is better adapted to the classification of regions with complex structures and extreme scale variations in large-scale ALS point clouds. Evaluated on the ISPRS Vaihingen 3D dataset, our RFFS-Net significantly outperforms the baseline approach by 5.3% in mF1, achieving an overall accuracy of 82.1%, an mF1 of 71.6%, and an mIoU of 58.2%. Furthermore, experiments on the LASDU dataset and the 2019 IEEE-GRSS Data Fusion Contest dataset show that RFFS-Net achieves new state-of-the-art classification performance.
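A small sketch of the dilated k-NN neighborhood that underlies dilated graph convolution: search k*d nearest neighbors and keep every d-th one, which enlarges the receptive field without increasing k. This is a simplified brute-force illustration, not the RFFS-Net implementation.

```python
import torch

def dilated_knn(points: torch.Tensor, k: int, d: int) -> torch.Tensor:
    """points: (N, 3) -> (N, k) neighbor indices with dilation factor d."""
    dist = torch.cdist(points, points)                          # (N, N) pairwise distances
    idx = dist.topk(k * d + 1, largest=False).indices[:, 1:]    # drop the point itself
    return idx[:, ::d]                                          # keep every d-th neighbor
```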
Point clouds are gaining prominence as a method for representing 3D shapes, but their irregular structure poses a challenge for deep learning methods. In this paper, we propose a new approach for learning 3D shapes using random walks. Previous works have attempted to adapt convolutional neural networks (CNNs) or to impose a grid or mesh structure onto 3D point clouds. This work presents a different approach for representing and learning the shape of a given point set. The key idea is to impose structure on the point set with multiple random walks through the cloud that explore different regions of the 3D object. We then learn a per-point and per-walk representation and aggregate multiple walk predictions at inference time. Our approach achieves state-of-the-art results on two 3D shape analysis tasks: classification and retrieval. Furthermore, we propose a shape complexity indicator function that uses cross-walk and inter-walk variance measures to subdivide the shape space.
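A minimal sketch (under the assumption of a k-NN connectivity graph, not the paper's exact walk generator) of producing random walks over a point cloud: each walk starts at a random point and repeatedly steps to a random neighbor, giving an ordered index sequence that a sequence model can consume.

```python
import torch

def random_walks(points: torch.Tensor, num_walks: int, walk_len: int, k: int = 8) -> torch.Tensor:
    """points: (N, 3) -> (num_walks, walk_len) indices of visited points."""
    N = points.shape[0]
    knn = torch.cdist(points, points).topk(k + 1, largest=False).indices[:, 1:]   # (N, k) neighbors
    walks = torch.empty(num_walks, walk_len, dtype=torch.long)
    walks[:, 0] = torch.randint(0, N, (num_walks,))             # random starting points
    for t in range(1, walk_len):
        choice = torch.randint(0, k, (num_walks,))
        walks[:, t] = knn[walks[:, t - 1], choice]              # step to a random neighbor
    return walks
```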
We present CPT: Convolutional Point Transformer, a novel deep learning architecture for handling the unstructured nature of 3D point cloud data. CPT is an improvement over existing attention-based convolutional neural networks as well as previous 3D point cloud processing transformers. It achieves this through a novel and robust attention-based point set embedding, built with a convolutional projection layer crafted for processing dynamically computed local point set neighborhoods. The resulting point set embedding is robust to the permutations of the input points. Our novel CPT block builds over local neighborhoods of points obtained via a dynamic graph computation within the network structure. It is fully differentiable and can be stacked just like convolutional layers to learn the global properties of the points. We evaluate our model on standard benchmark datasets such as ModelNet40, ShapeNet part segmentation, and the S3DIS 3D indoor scene semantic segmentation dataset to show that our model can serve as an effective backbone for various point cloud processing tasks when compared with existing state-of-the-art approaches.
Point cloud analysis is challenging due to irregularity and unordered data structure. To capture the 3D geometries, prior works mainly rely on exploring sophisticated local geometric extractors using convolution, graph, or attention mechanisms. These methods, however, incur unfavorable latency during inference, and the performance saturates over the past few years. In this paper, we present a novel perspective on this task. We notice that detailed local geometrical information probably is not the key to point cloud analysis -- we introduce a pure residual MLP network, called PointMLP, which integrates no sophisticated local geometrical extractors but still performs very competitively. Equipped with a proposed lightweight geometric affine module, PointMLP delivers the new state-of-the-art on multiple datasets. On the real-world ScanObjectNN dataset, our method even surpasses the prior best method by 3.3% accuracy. We emphasize that PointMLP achieves this strong performance without any sophisticated operations, hence leading to a superior inference speed. Compared to most recent CurveNet, PointMLP trains 2x faster, tests 7x faster, and is more accurate on ModelNet40 benchmark. We hope our PointMLP may help the community towards a better understanding of point cloud analysis. The code is available at https://github.com/ma-xu/pointMLP-pytorch.
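A sketch of a lightweight geometric affine module in the spirit of PointMLP's description above (exact formulation per the released code may differ): grouped neighbor features are centered on the group representative, normalized by their standard deviation, and rescaled with learnable affine parameters.

```python
import torch
import torch.nn as nn

class GeometricAffine(nn.Module):
    def __init__(self, channels: int, eps: float = 1e-5):
        super().__init__()
        self.alpha = nn.Parameter(torch.ones(1, 1, 1, channels))    # learnable scale
        self.beta = nn.Parameter(torch.zeros(1, 1, 1, channels))    # learnable shift
        self.eps = eps

    def forward(self, grouped, center):
        # grouped: (B, G, K, C) neighbor features, center: (B, G, C) group centers
        B = grouped.shape[0]
        diff = grouped - center.unsqueeze(2)                        # center each local group
        sigma = diff.reshape(B, -1).std(dim=1).view(B, 1, 1, 1)     # per-sample scale
        return self.alpha * diff / (sigma + self.eps) + self.beta
```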
The success of transformers in natural language processing has recently attracted attention in the computer vision field. Owing to their ability to learn long-range dependencies, transformers have been used as a replacement for the widely used convolution operators. This replacement has proven successful in numerous tasks, in which several state-of-the-art methods rely on transformers for better learning. In computer vision, the 3D field has also witnessed an increasing use of transformers to augment 3D convolutional neural networks and multi-layer perceptron networks. Although many surveys focus on transformers in vision, 3D vision requires special attention due to the differences in data representation and processing compared with 2D vision. In this work, we present a systematic and thorough review of more than 100 transformer methods for different 3D vision tasks, including classification, segmentation, detection, completion, pose estimation, and others. We discuss transformer designs in 3D vision that allow them to process data with various 3D representations. For each application, we highlight the key properties and contributions of the transformer-based methods. To assess the competitiveness of these methods, we compare their performance to common non-transformer methods on 12 3D benchmarks. We conclude the survey by discussing the different open directions and challenges for transformers in 3D vision. In addition to the presented papers, we aim to frequently update the latest relevant papers along with their corresponding implementations at: https://github.com/lahoud/3d-vision-transformers.