Most existing methods realize 3D instance segmentation by extending models designed for 3D object detection or 3D semantic segmentation. However, these indirect methods suffer from two drawbacks: 1) Imprecise bounding boxes or unsatisfactory semantic predictions limit the performance of the overall 3D instance segmentation framework. 2) Existing methods require a time-consuming intermediate aggregation step. To address these issues, this paper proposes a novel end-to-end 3D instance segmentation method based on a Superpoint Transformer, named SPFormer. It groups potential features from point clouds into superpoints and directly predicts instances through query vectors, without relying on the results of object detection or semantic segmentation. The key step in this framework is a novel query decoder with transformers that captures instance information through a superpoint cross-attention mechanism and generates superpoint masks of the instances. Through bipartite matching based on superpoint masks, SPFormer can be trained without the intermediate aggregation step, which accelerates the network. Extensive experiments on the ScanNetv2 and S3DIS benchmarks verify that our method is concise yet efficient. Notably, SPFormer exceeds compared state-of-the-art methods by 4.3% mAP on the ScanNetv2 hidden test set while maintaining fast inference speed (247ms per frame). Code is available at https://github.com/sunjiahao1999/SPFormer.
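As an illustration of the superpoint pooling and superpoint cross-attention described above, here is a minimal PyTorch sketch; the module layout, shapes, and mean-pooling scheme are assumptions for exposition, not the authors' implementation:

```python
import torch
import torch.nn as nn

def superpoint_pool(point_feats, superpoint_ids, num_superpoints):
    """Average point features within each superpoint (a minimal sketch).
    point_feats: (N, D); superpoint_ids: (N,) long tensor of superpoint indices."""
    D = point_feats.size(1)
    pooled = point_feats.new_zeros(num_superpoints, D)
    pooled.index_add_(0, superpoint_ids, point_feats)
    counts = torch.bincount(superpoint_ids, minlength=num_superpoints).clamp(min=1)
    return pooled / counts.unsqueeze(1)

class SuperpointQueryDecoderLayer(nn.Module):
    """One decoder layer: instance queries cross-attend to superpoint features."""
    def __init__(self, dim=256, heads=8):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)
        self.mask_proj = nn.Linear(dim, dim)  # projects queries for mask prediction

    def forward(self, queries, sp_feats):
        # queries: (1, Q, D); sp_feats: (1, S, D)
        attn_out, _ = self.cross_attn(queries, sp_feats, sp_feats)
        queries = self.norm(queries + attn_out)
        # Superpoint masks: dot product between projected queries and superpoints.
        masks = torch.einsum('bqd,bsd->bqs', self.mask_proj(queries), sp_feats)
        return queries, masks.sigmoid()
```

The predicted superpoint masks, one per query, are what the bipartite matching against ground-truth instances would be computed over.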
We propose an instance segmentation method for 3D point clouds based on dynamic convolution. This enables it to adapt, at inference time, to varying feature and object scales. Doing so avoids some pitfalls of bottom-up approaches, including a dependence on hyper-parameter tuning and heuristic post-processing pipelines to compensate for the unavoidable variability in object sizes, even within a single scene. The representation capability of the network is greatly improved by gathering homogeneous points that have the same semantic categories and close votes for the geometric centroids. Instances are then decoded via several simple convolution layers, whose parameters are generated conditioned on the input. The proposed approach is proposal-free, and instead exploits a convolution process that adapts to the spatial and semantic characteristics of each instance. A light-weight transformer, built on the bottleneck layer, allows the model to capture long-range dependencies with limited computational overhead. The result is a simple, efficient, and robust approach that yields strong performance on various datasets: ScanNetV2, S3DIS, and PartNet. Consistent improvements on both voxel- and point-based architectures demonstrate the effectiveness of the proposed method. Code is available at: https://git.io/dyco3d
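The core idea — convolution weights generated per instance and applied to the whole scene — can be sketched as follows (the two-layer head and layer sizes are assumptions, not the paper's exact configuration):

```python
import torch
import torch.nn.functional as F

def dynamic_mask_head(point_feats, kernel_params, in_dim, hidden_dim=16):
    """Decode one instance mask with convolutions whose weights are
    generated from the instance itself (a sketch).
    point_feats:   (in_dim, N) per-point features for the whole scene
    kernel_params: flat vector holding the weights/biases of two 1x1 convs
    """
    # Split the generated parameters into two conv layers.
    sizes = [in_dim * hidden_dim, hidden_dim,   # conv1 weight / bias
             hidden_dim * 1, 1]                 # conv2 weight / bias
    w1, b1, w2, b2 = torch.split(kernel_params, sizes)
    x = point_feats.unsqueeze(0)                # (1, in_dim, N)
    x = F.relu(F.conv1d(x, w1.view(hidden_dim, in_dim, 1), b1))
    x = F.conv1d(x, w2.view(1, hidden_dim, 1), b2)
    return x.view(-1).sigmoid()                 # (N,) soft instance mask

# For hidden_dim=16, kernel_params has in_dim*16 + 16 + 16 + 1 entries.
```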
We introduce a 3D instance representation, termed instance kernels, where instances are represented by one-dimensional vectors that encode the semantic, positional, and shape information of 3D instances. We show that instance kernels enable easy mask inference by simply scanning kernels over the entire scene, avoiding the heavy reliance on proposals or heuristic clustering algorithms in standard 3D instance segmentation pipelines. The idea of instance kernels is inspired by the recent success of dynamic convolutions in 2D/3D instance segmentation. However, we find it non-trivial to represent 3D instances due to the disordered and unstructured nature of point cloud data; for example, poor instance localization can significantly degrade the instance representation. To remedy this, we construct a novel 3D instance encoding paradigm. First, potential instance centroids are localized as candidates. Then, a candidate merging scheme is devised to simultaneously aggregate duplicated candidates and collect context around the merged centroids to form the instance kernels. Once instance kernels are available, instance masks can be reconstructed via dynamic convolutions conditioned on the instance kernels. The whole pipeline is instantiated as a dynamic kernel network (DKNet). Results show that DKNet outperforms the state of the art on both the ScanNetV2 and S3DIS datasets with better instance localization. Code available: https://github.com/w1zheng/dknet.
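The learned candidate aggregation itself is not specified at this abstract's level of detail; the following greedy radius-based merge is only an illustrative stand-in for the deduplication step (the radius is an assumed hyperparameter):

```python
import torch

def merge_candidates(centroids, scores, radius=0.3):
    """Greedy merging of duplicated centroid candidates (illustrative).
    centroids: (C, 3) predicted instance centroids
    scores:    (C,)   candidate confidence
    Returns the mean position of each merged candidate group."""
    order = scores.argsort(descending=True)
    merged_pos = []
    used = torch.zeros(len(centroids), dtype=torch.bool)
    for i in order.tolist():
        if used[i]:
            continue
        d = (centroids - centroids[i]).norm(dim=1)
        group = (d < radius) & ~used   # nearby, not-yet-merged candidates
        used |= group
        merged_pos.append(centroids[group].mean(dim=0))
    return torch.stack(merged_pos)
```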
Current state-of-the-art methods in 3D instance segmentation typically involve a clustering step, despite their reliance on heuristics and greedy algorithms and their sensitivity to changes in data statistics. In contrast, we propose a 3D point cloud instance segmentation method that operates in a per-point prediction fashion. In doing so, it avoids the challenge that clustering-based methods face: introducing dependencies between the different tasks of the model. We find that the key to its success is assigning a suitable target to each sampled point. We propose to use an optimal transport approach to optimally assign target masks to the sampled points according to a dynamic matching cost. Our approach achieves promising results on both the ScanNet and S3DIS benchmarks. The proposed approach removes inter-task dependencies and thus represents a simpler and more flexible 3D instance segmentation framework than other competing methods, while achieving improved segmentation accuracy.
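The optimal-transport assignment can be approximated with entropy-regularized Sinkhorn iterations; this generic sketch assumes given marginals and does not reproduce the paper's exact matching cost:

```python
import torch

def sinkhorn_assign(cost, row_marg, col_marg, eps=0.1, iters=50):
    """Entropy-regularized optimal transport between sampled points (rows)
    and target masks (columns). A generic Sinkhorn sketch.
    cost:     (P, T) dynamic matching cost matrix
    row_marg: (P,) and col_marg: (T,) marginals with equal total mass,
              e.g. torch.full((P,), 1 / P) and torch.full((T,), 1 / T)."""
    K = torch.exp(-cost / eps)          # Gibbs kernel
    u = torch.ones_like(row_marg)
    for _ in range(iters):
        v = col_marg / (K.t() @ u)      # alternating marginal projections
        u = row_marg / (K @ v)
    plan = u.unsqueeze(1) * K * v.unsqueeze(0)
    return plan.argmax(dim=1)           # hard target index per sampled point
```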
This paper introduces a new problem for 3D point clouds: few-shot instance segmentation. Given a few annotated point clouds exemplifying a target class, our goal is to segment all instances of that target class in query point clouds. This problem has a wide range of practical applications where point-wise instance segmentation annotations are prohibitively expensive to collect. To address this problem, we propose GeoFormer, the first geodesic-guided transformer for 3D point cloud instance segmentation. The key idea is to leverage geodesic distance to tackle the density imbalance of LiDAR 3D point clouds. LiDAR 3D point clouds are dense near object surfaces and sparse or empty elsewhere, making Euclidean distance less effective at distinguishing different objects. Geodesic distance, on the other hand, is more suitable since it encodes the geometry of the scene, which can serve as a guiding signal for the attention mechanism in the transformer decoder to generate kernels representing distinct features of the instances. These kernels are then used in a dynamic convolution to obtain the final instance masks. To evaluate GeoFormer on the new task, we propose new splits of two common 3D point cloud instance segmentation datasets: ScanNetV2 and S3DIS. GeoFormer consistently outperforms strong baselines adapted from state-of-the-art 3D point cloud instance segmentation approaches by a significant margin. Code is available at https://github.com/vinairesearch/geoformer.
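Geodesic distance on a point cloud is commonly approximated by shortest paths over a k-NN graph; a sketch with SciPy (the k value and graph construction are assumptions):

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import dijkstra
from scipy.spatial import cKDTree

def geodesic_distances(points, sources, k=8):
    """Approximate geodesic distance on a point cloud as shortest paths
    over a k-NN graph.
    points: (N, 3) array; sources: list of source point indices."""
    tree = cKDTree(points)
    dist, idx = tree.query(points, k=k + 1)   # first neighbor is the point itself
    rows = np.repeat(np.arange(len(points)), k)
    graph = csr_matrix((dist[:, 1:].ravel(),
                        (rows, idx[:, 1:].ravel())),
                       shape=(len(points), len(points)))
    # Shortest-path distance from each source to every point: (len(sources), N).
    return dijkstra(graph, directed=False, indices=sources)
```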
We introduce a method for instance proposal generation from 3D point clouds. Existing techniques typically regress proposals directly in a single feed-forward step, leading to inaccurate estimates. We show that this is a critical bottleneck, and propose a method based on iterative bilateral filtering. In the spirit of bilateral filtering, we consider both the deep embedding of each point and its location in 3D space. We show through synthetic experiments that our method brings drastic improvements when generating instance proposals for a given point of interest. We further validate our approach on the challenging ScanNet benchmark, achieving the best instance segmentation performance within the sub-category of top-down methods.
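A bilateral kernel over both 3D position and deep embedding, applied iteratively around a seed point, might look like this (bandwidths and the soft-centroid update are illustrative assumptions):

```python
import torch

def iterative_bilateral_proposal(xyz, feats, seed, iters=5,
                                 sigma_s=0.5, sigma_f=1.0):
    """Iteratively refine an instance proposal around a seed point with a
    bilateral kernel over 3D position and deep embedding (a sketch).
    xyz: (N, 3); feats: (N, D); seed: index of the point of interest."""
    center, emb = xyz[seed], feats[seed]
    for _ in range(iters):
        w_s = torch.exp(-((xyz - center) ** 2).sum(1) / (2 * sigma_s ** 2))
        w_f = torch.exp(-((feats - emb) ** 2).sum(1) / (2 * sigma_f ** 2))
        w = w_s * w_f                         # joint spatial/feature affinity
        w = w / w.sum()
        center = (w.unsqueeze(1) * xyz).sum(0)    # soft proposal centroid
        emb = (w.unsqueeze(1) * feats).sum(0)     # soft proposal embedding
    return w, center   # membership weights and refined centroid
```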
Recognizing 3D part instances from 3D point clouds is crucial for 3D structure and scene understanding. Several learning-based approaches use semantic segmentation and instance center prediction as training tasks, and fail to further exploit the inherent relationship between shape semantics and part instances. In this paper, we present a new method for 3D part instance segmentation. Our method exploits semantic segmentation to fuse non-local instance features, such as center prediction, and further enhances the fusion scheme in a multi- and cross-level way. We also propose a semantic region-center prediction task to train the network, and leverage its predictions to improve the clustering of instance points. Our method outperforms existing methods by a large margin on the PartNet benchmark. We also demonstrate that our feature fusion scheme can be applied to other existing methods to improve their performance on the indoor scene instance segmentation task.
Instance segmentation on point clouds is crucially important for 3D scene understanding. Distance clustering is commonly used in state-of-the-art (SOTA) methods; it is typically effective but performs poorly on adjacent objects with the same semantic label, especially when they share neighboring points. Due to the uneven distribution of offset points, these existing methods can hardly cluster all instance points. To this end, we design a novel divide-and-conquer strategy and propose an end-to-end network named PBNet that binarizes each point and clusters the two groups separately to segment instances. PBNet divides offset instance points into two categories, high- and low-density points (HPs vs. LPs), and then conquers them separately: adjacent objects can be clearly separated by removing LPs, and are then completed and refined by assigning LPs via a neighbor voting method. To further reduce clustering errors, we develop an iterative merging algorithm based on mean instance size to aggregate fragmented instances. Experiments on the ScanNetV2 and S3DIS datasets demonstrate the superiority of our model. In particular, PBNet achieves the best AP50 and AP25 to date on the ScanNetV2 official benchmark challenge (validation set), while also demonstrating high efficiency.
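A toy version of the divide-and-conquer pipeline — density binarization, clustering of high-density points, then neighbor voting for low-density points — is sketched below; the O(N²) distance matrix and the thresholds are simplifications for illustration only:

```python
import torch

def binary_cluster(shifted_xyz, radius=0.1, density_thr=10):
    """Divide-and-conquer sketch in the spirit of PBNet: split offset
    points into high/low density, cluster HPs, then assign LPs by
    neighbor voting. shifted_xyz: (N, 3) center-shifted points."""
    d = torch.cdist(shifted_xyz, shifted_xyz)      # (N, N); small scenes only
    density = (d < radius).sum(1)
    hp = density >= density_thr                    # high-density points
    inst = -torch.ones(len(shifted_xyz), dtype=torch.long)
    next_id = 0
    # Naive HP clustering: connected components of the radius graph.
    for i in torch.nonzero(hp).flatten().tolist():
        if inst[i] >= 0:
            continue
        frontier, inst[i] = [i], next_id
        while frontier:
            j = frontier.pop()
            nbrs = torch.nonzero((d[j] < radius) & hp & (inst < 0)).flatten()
            inst[nbrs] = next_id
            frontier += nbrs.tolist()
        next_id += 1
    # Neighbor voting: each LP takes the majority instance among nearby HPs.
    for i in torch.nonzero(~hp).flatten().tolist():
        votes = inst[(d[i] < radius) & hp]
        if len(votes):
            inst[i] = votes.mode().values
    return inst
```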
This paper presents an end-to-end instance segmentation framework, termed SOIT, that Segments Objects with Instance-aware Transformers. Inspired by DETR~\cite{carion2020end}, our method views instance segmentation as a direct set prediction problem and effectively removes the need for many hand-crafted components such as RoI cropping, one-to-many label assignment, and non-maximum suppression (NMS). In SOIT, multiple queries learn to directly reason about a set of object embeddings for semantic categories, bounding-box locations, and pixel-wise masks in parallel under the global image context. The class and bounding box can easily be embedded as fixed-length vectors. The pixel-wise mask, in particular, is embedded as a group of parameters that construct a lightweight instance-aware transformer. This instance-aware transformer then produces full-resolution masks without involving any RoI-based operation. Overall, SOIT introduces a simple single-stage instance segmentation framework that is both RoI-free and NMS-free. Experimental results on the MS COCO dataset demonstrate that SOIT significantly outperforms state-of-the-art instance segmentation approaches. In addition, the joint learning of multiple tasks in a unified query embedding also substantially improves detection performance. Code is available at \url{https://github.com/yuxiaodonghri/soit}.
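The set-prediction view can be pictured as three parallel heads decoding each object query; the dimensions below (including the size of the mask-parameter vector) are assumptions, not SOIT's actual configuration:

```python
import torch
import torch.nn as nn

class SetPredictionHeads(nn.Module):
    """DETR-style parallel heads sketched from the abstract: each object
    query is decoded into a class, a box, and a vector of mask parameters."""
    def __init__(self, dim=256, num_classes=80, mask_params=169):
        super().__init__()
        self.cls_head = nn.Linear(dim, num_classes + 1)   # +1 for "no object"
        self.box_head = nn.Linear(dim, 4)                 # normalized cxcywh
        self.mask_head = nn.Linear(dim, mask_params)      # weights of a tiny mask decoder

    def forward(self, queries):                           # queries: (B, Q, dim)
        return (self.cls_head(queries),
                self.box_head(queries).sigmoid(),
                self.mask_head(queries))
```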
Existing instance segmentation methods have achieved impressive performance but still suffer from a common dilemma: redundant representations (e.g., multiple boxes, grids, and anchor points) are inferred for one instance, which leads to many duplicated predictions. Mainstream methods therefore usually rely on a hand-designed non-maximum suppression (NMS) post-processing step to select the optimal prediction, which hinders end-to-end training. To address this issue, we propose a box-free and NMS-free end-to-end instance segmentation framework, termed UniInst, that yields only one unique representation for each instance. Specifically, we design an instance-aware one-to-one assignment scheme, Only Yield One Representation (OYOR), which dynamically assigns one unique representation to each instance according to the matching quality between predictions and ground truths. A novel prediction re-ranking strategy is then elegantly integrated into the framework to address the misalignment between classification score and mask quality, making the learned representations more discriminative. With these techniques, our UniInst, the first FCN-based box-free and NMS-free instance segmentation framework, achieves competitive performance, e.g., 40.2 mask AP with ResNet-101-FPN, against mainstream methods on COCO test-dev. Moreover, the proposed instance-aware method is robust to occlusion scenes, outperforming common baselines by a remarkable mask AP margin on the heavily-occluded OCHuman benchmark. Our code will be available upon publication.
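The one-to-one assignment idea can be sketched as picking, per ground truth, the single prediction with the best combined classification/mask quality; the exact quality formula below is a guess at the general shape, not the paper's definition:

```python
import torch

def oyor_assign(cls_prob, mask_iou, gt_labels, alpha=0.8):
    """One-to-one assignment sketch: for every ground-truth instance pick
    the single prediction with the best matching quality, a geometric mix
    of classification probability and mask IoU (alpha is an assumption).
    cls_prob:  (P, C) predicted class probabilities
    mask_iou:  (P, G) IoU between each prediction's mask and each GT mask
    gt_labels: (G,)   class index of each ground truth."""
    quality = cls_prob[:, gt_labels] ** (1 - alpha) * mask_iou ** alpha  # (P, G)
    return quality.argmax(dim=0)   # one unique prediction index per GT;
                                   # all other predictions become background
```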
Existing state-of-the-art 3D point cloud instance segmentation methods rely on a grouping-based approach that groups points to obtain object instances. Despite improvements in producing accurate segmentation results, these methods lack scalability and commonly require dividing large inputs into multiple parts. To process a scene of millions of points, the fastest existing method, SoftGroup~\cite{vu2022softgroup}, requires tens of seconds, which is unsatisfactory. Our finding is that $k$-Nearest Neighbor ($k$-NN), the prerequisite of grouping, is the computational bottleneck, and it severely worsens inference time on scenes with a large number of points. This paper proposes SoftGroup++ to address this bottleneck and further optimize the inference speed of the whole network. SoftGroup++ is built upon SoftGroup and differs in three important aspects: (1) it performs octree $k$-NN instead of vanilla $k$-NN to reduce the time complexity from $\mathcal{O}(n^2)$ to $\mathcal{O}(n \log n)$; (2) it performs pyramid scaling, adaptively downsampling backbone outputs to reduce the search space for $k$-NN and grouping; and (3) it performs late devoxelization, delaying the conversion from voxels to points towards the end of the model so that intermediate components operate at low computational cost. Extensive experiments on various indoor and outdoor datasets demonstrate the efficacy of the proposed SoftGroup++. Notably, SoftGroup++ processes large scenes of millions of points in a single forward pass without dividing the input into multiple parts, thus enriching contextual information. In particular, SoftGroup++ achieves a 2.4-point AP$_{50}$ improvement while being $6\times$ faster. Code and trained models will be publicly available.
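The complexity argument is easy to reproduce with any spatial tree; here a k-d tree stands in for the paper's octree, likewise replacing the O(n²) all-pairs search with an O(n log n) build-and-query:

```python
import numpy as np
from scipy.spatial import cKDTree

def knn_tree(points, k=8):
    """Tree-based k-NN as a stand-in for octree k-NN: building the tree
    and querying all points costs O(n log n) rather than O(n^2).
    points: (N, 3) array. Returns (N, k) distances and indices."""
    tree = cKDTree(points)
    dist, idx = tree.query(points, k=k + 1)   # the first hit is the point itself
    return dist[:, 1:], idx[:, 1:]
```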
Letting a deep network be aware of the quality of its own predictions is an interesting yet important problem. In the task of instance segmentation, the confidence of instance classification is used as mask quality score in most instance segmentation frameworks. However, the mask quality, quantified as the IoU between the instance mask and its ground truth, is usually not well correlated with classification score. In this paper, we study this problem and propose Mask Scoring R-CNN which contains a network block to learn the quality of the predicted instance masks. The proposed network block takes the instance feature and the corresponding predicted mask together to regress the mask IoU. The mask scoring strategy calibrates the misalignment between mask quality and mask score, and improves instance segmentation performance by prioritizing more accurate mask predictions during COCO AP evaluation. By extensive evaluations on the COCO dataset, Mask Scoring R-CNN brings consistent and noticeable gain with different models, and outperforms the state-of-the-art Mask R-CNN. We hope our simple and effective approach will provide a new direction for improving instance segmentation. The source code of our method is available at https://github.com/zjhuang22/maskscoring_rcnn.
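A minimal MaskIoU head consistent with the description — concatenate the RoI feature with the predicted mask, then regress the IoU — could look like this (channel sizes and depths are illustrative, not the paper's exact ones):

```python
import torch
import torch.nn as nn

class MaskIoUHead(nn.Module):
    """Sketch of a MaskIoU head: takes the RoI feature and the predicted
    mask together and regresses the mask's IoU with its ground truth."""
    def __init__(self, feat_ch=256, roi=14):
        super().__init__()
        self.convs = nn.Sequential(
            nn.Conv2d(feat_ch + 1, 256, 3, padding=1), nn.ReLU(),
            nn.Conv2d(256, 256, 3, stride=2, padding=1), nn.ReLU())
        self.fc = nn.Sequential(
            nn.Flatten(), nn.Linear(256 * (roi // 2) ** 2, 1024), nn.ReLU(),
            nn.Linear(1024, 1))

    def forward(self, roi_feat, pred_mask):
        # roi_feat: (B, feat_ch, roi, roi); pred_mask: (B, 1, roi, roi)
        x = torch.cat([roi_feat, pred_mask], dim=1)
        return self.fc(self.convs(x)).squeeze(1)   # predicted mask IoU

# At test time, the calibrated mask score is classification score * predicted IoU.
```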
In this paper, we propose a simple attention mechanism that we call box attention. It enables spatial interaction between grid features, sampled from boxes of interest, and improves the learning capability of transformers for several vision tasks. Specifically, we present BoxeR, short for Box Transformer, which attends to a set of boxes by predicting their transformation from a reference window on an input feature map. BoxeR computes attention over these boxes by taking their grid structure into account. Notably, BoxeR-2D naturally reasons about box information within its attention module, making it suitable for end-to-end instance detection and segmentation tasks. By learning invariance to rotation in the box-attention module, BoxeR-3D is capable of generating discriminative information from a bird's-eye-view plane for 3D end-to-end object detection. Our experiments demonstrate that the proposed BoxeR-2D achieves better results on COCO detection, and reaches performance comparable to the well-established and highly-optimized Mask R-CNN on COCO instance segmentation. BoxeR-3D already obtains compelling performance on the vehicle category of the Waymo Open dataset, without any class-specific optimization. The code will be released.
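Box attention can be loosely pictured as sampling a small grid of features inside each predicted box and attending over the grid cells; this single-head sketch with grid_sample is a simplification under assumed normalized cxcywh boxes, not BoxeR's multi-head formulation:

```python
import torch
import torch.nn.functional as F

def box_attention(query, feat_map, boxes, grid=2):
    """Sample a grid x grid set of features inside each box and attend
    over the cells (a simplified, single-head sketch).
    query: (Q, D); feat_map: (D, H, W); boxes: (Q, 4) cxcywh in [0, 1]."""
    cx, cy, w, h = boxes.unbind(-1)
    lin = torch.linspace(-0.5, 0.5, grid, device=boxes.device)
    gy, gx = torch.meshgrid(lin, lin, indexing='ij')
    # Per-box sampling coordinates in [-1, 1] for grid_sample.
    sx = (cx[:, None, None] + gx * w[:, None, None]) * 2 - 1
    sy = (cy[:, None, None] + gy * h[:, None, None]) * 2 - 1
    coords = torch.stack([sx, sy], dim=-1)                     # (Q, g, g, 2)
    feats = feat_map.unsqueeze(0).expand(len(boxes), -1, -1, -1)
    sampled = F.grid_sample(feats, coords, align_corners=False)  # (Q, D, g, g)
    sampled = sampled.flatten(2).transpose(1, 2)               # (Q, g*g, D)
    attn = torch.softmax((sampled @ query.unsqueeze(-1)).squeeze(-1)
                         / query.size(-1) ** 0.5, dim=-1)      # (Q, g*g)
    return (attn.unsqueeze(-1) * sampled).sum(1)               # (Q, D)
```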
We introduce Similarity Group Proposal Network (SGPN), a simple and intuitive deep learning framework for 3D object instance segmentation on point clouds. SGPN uses a single network to predict point grouping proposals and a corresponding semantic class for each proposal, from which we can directly extract instance segmentation results. Important to the effectiveness of SGPN is its novel representation of 3D instance segmentation results in the form of a similarity matrix that indicates the similarity between each pair of points in embedded feature space, thus producing an accurate grouping proposal for each point. Experimental results on various 3D scenes show the effectiveness of our method on 3D instance segmentation, and we also evaluate the capability of SGPN to improve 3D object detection and semantic segmentation results. We also demonstrate its flexibility by seamlessly incorporating 2D CNN features into the framework to boost performance.
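The similarity-matrix representation is straightforward to sketch: pairwise distances between per-point embeddings, thresholded around a seed point to form a group proposal (the threshold value is an assumption):

```python
import torch

def similarity_matrix(embeddings):
    """Pairwise distance of per-point embeddings: smaller feature distance
    means the two points more likely belong to the same instance.
    embeddings: (N, D) -> (N, N) distance matrix."""
    return torch.cdist(embeddings, embeddings)

def group_proposal(sim, seed, thresh=0.5):
    """A group proposal: every point whose embedding lies within `thresh`
    of the seed point's embedding."""
    return sim[seed] < thresh   # (N,) boolean membership mask
```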
In this paper we present Mask DINO, a unified object detection and segmentation framework. Mask DINO extends DINO (DETR with Improved Denoising Anchor Boxes) by adding a mask prediction branch which supports all image segmentation tasks (instance, panoptic, and semantic). It makes use of the query embeddings from DINO to dot-product a high-resolution pixel embedding map to predict a set of binary masks. Some key components in DINO are extended for segmentation through a shared architecture and training process. Mask DINO is simple, efficient, and scalable, and it can benefit from joint large-scale detection and segmentation datasets. Our experiments show that Mask DINO significantly outperforms all existing specialized segmentation methods, both on a ResNet-50 backbone and a pre-trained model with SwinL backbone. Notably, Mask DINO establishes the best results to date on instance segmentation (54.5 AP on COCO), panoptic segmentation (59.4 PQ on COCO), and semantic segmentation (60.8 mIoU on ADE20K) among models under one billion parameters. Code is available at \url{https://github.com/IDEACVR/MaskDINO}.
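The mask branch described here is a dot product between query embeddings and a high-resolution per-pixel embedding map; a direct sketch:

```python
import torch

def predict_masks(query_emb, pixel_emb):
    """Mask prediction by dot product, as the abstract describes: each
    query embedding is matched against a per-pixel embedding map to yield
    a binary mask. query_emb: (Q, D); pixel_emb: (D, H, W)."""
    logits = torch.einsum('qd,dhw->qhw', query_emb, pixel_emb)
    return logits.sigmoid() > 0.5   # (Q, H, W) binary masks
```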
In contrast to fully supervised methods using pixel-wise mask labels, box-supervised instance segmentation takes advantage of simple box annotations, which has recently attracted increasing research attention. This paper presents a novel single-shot instance segmentation approach, namely Box2Mask, which integrates the classical level-set evolution model into deep neural network learning to achieve accurate mask prediction with only bounding box supervision. Specifically, both the input image and its deep features are employed to evolve the level-set curves implicitly, and a local consistency module based on a pixel affinity kernel is used to mine the local context and spatial relations. Two types of single-stage frameworks, i.e., CNN-based and transformer-based frameworks, are developed to empower the level-set evolution for box-supervised instance segmentation, and each framework consists of three essential components: instance-aware decoder, box-level matching assignment and level-set evolution. By minimizing the level-set energy function, the mask map of each instance can be iteratively optimized within its bounding box annotation. The experimental results on five challenging testbeds, covering general scenes, remote sensing, medical and scene text images, demonstrate the outstanding performance of our proposed Box2Mask approach for box-supervised instance segmentation. In particular, with the Swin-Transformer large backbone, our Box2Mask obtains 42.4% mask AP on COCO, which is on par with the recently developed fully mask-supervised methods. The code is available at: https://github.com/LiWentomng/boxlevelset.
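The region term of a level-set energy restricted to a box annotation can be written down in a few lines; this Chan-Vese-style sketch omits the paper's deep-feature input and local-consistency module:

```python
import torch

def level_set_energy(phi, img, box_mask, eps=1e-6):
    """Chan-Vese-style region energy inside a bounding box (a sketch).
    phi:      (H, W) soft level-set map in [0, 1], requires_grad for SGD
    img:      (H, W) input cue (e.g., intensity)
    box_mask: (H, W) binary mask of the annotated box."""
    phi = phi * box_mask                       # evolve only inside the box
    area_in, area_out = phi.sum(), (box_mask - phi).sum()
    c_in = (phi * img).sum() / (area_in + eps)                 # mean inside
    c_out = ((box_mask - phi) * img).sum() / (area_out + eps)  # mean outside
    energy = (phi * (img - c_in) ** 2
              + (box_mask - phi) * (img - c_out) ** 2)
    return energy.sum()   # minimized w.r.t. phi, e.g., by gradient descent
```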
3D object detection from LiDAR point cloud is a challenging problem in 3D scene understanding and has many practical applications. In this paper, we extend our preliminary work PointRCNN to a novel and strong point-cloud-based 3D object detection framework, the part-aware and aggregation neural network (Part-A^2 net). The whole framework consists of the part-aware stage and the part-aggregation stage. Firstly, the part-aware stage for the first time fully utilizes free-of-charge part supervisions derived from 3D ground-truth boxes to simultaneously predict high quality 3D proposals and accurate intra-object part locations. The predicted intra-object part locations within the same proposal are grouped by our new-designed RoI-aware point cloud pooling module, which results in an effective representation to encode the geometry-specific features of each 3D proposal. Then the part-aggregation stage learns to re-score the box and refine the box location by exploring the spatial relationship of the pooled intra-object part locations. Extensive experiments are conducted to demonstrate the performance improvements from each component of our proposed framework. Our Part-A^2 net outperforms all existing 3D detection methods and achieves new state-of-the-art on KITTI 3D object detection dataset by utilizing only the LiDAR point cloud data. Code is available at https://github.com/sshaoshuai/PointCloudDet3D.
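RoI-aware point cloud pooling can be sketched as scattering the points of a proposal (already in its canonical frame) into a regular voxel grid and averaging features per cell; the grid resolution is an assumption:

```python
import torch

def roi_aware_pool(points, feats, box_size, grid=4):
    """RoI-aware pooling sketch: points in a proposal's canonical frame
    are binned into a grid x grid x grid volume, averaging features per voxel.
    points: (N, 3) canonical coords; feats: (N, D); box_size: (3,) box extents."""
    # Map canonical coordinates (box centered at origin) to voxel indices.
    idx = ((points / box_size + 0.5) * grid).long().clamp(0, grid - 1)
    flat = idx[:, 0] * grid * grid + idx[:, 1] * grid + idx[:, 2]
    D = feats.size(1)
    pooled = feats.new_zeros(grid ** 3, D)
    pooled.index_add_(0, flat, feats)
    counts = torch.bincount(flat, minlength=grid ** 3).clamp(min=1)
    return (pooled / counts.unsqueeze(1)).view(grid, grid, grid, D)
```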
Image segmentation is about grouping pixels with different semantics, e.g., category or instance membership, where each choice of semantics defines a task. While only the semantics of each task differ, current research focuses on designing specialized architectures for every task. We present Masked-attention Mask Transformer (Mask2Former), a new architecture capable of addressing any image segmentation task (panoptic, instance or semantic). Its key components include masked attention, which extracts localized features by constraining cross-attention within predicted mask regions. In addition to reducing the research effort by at least three times, it outperforms the best specialized architectures on four popular datasets. Most notably, Mask2Former sets a new state-of-the-art for panoptic segmentation (57.8 PQ on COCO), instance segmentation (50.1 AP on COCO) and semantic segmentation (57.7 mIoU on ADE20K).
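Masked attention restricts cross-attention to the foreground of the previous layer's mask prediction; a single-head sketch (with the standard fallback to full attention when a mask is empty):

```python
import torch

def masked_attention(q, k, v, mask_pred, tau=0.5):
    """Single-head masked attention sketch: queries attend only within
    the foreground of the previous layer's mask prediction.
    q: (Q, D) queries; k, v: (N, D) flattened per-pixel keys/values;
    mask_pred: (Q, N) mask probabilities from the previous decoder layer."""
    logits = q @ k.t() / q.size(-1) ** 0.5        # (Q, N) attention logits
    fg = mask_pred >= tau
    # Fall back to full attention for any query whose mask is empty.
    fg = fg | ~fg.any(dim=-1, keepdim=True)
    logits = logits.masked_fill(~fg, float('-inf'))
    return torch.softmax(logits, dim=-1) @ v      # (Q, D) updated queries
```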
Recently, Transformers have achieved great success in computer vision. However, they are constrained in 3D object detection applications because the spatial and temporal complexity of attention grows quadratically with the large number of points. Previous point-wise methods suffer from high time consumption and limited receptive fields to capture information among points. In this paper, we propose a two-stage hyperbolic cosine transformer (ChTR3D) for 3D object detection from LiDAR point clouds. The proposed ChTR3D refines proposals by applying cosh-attention with linear computation complexity to encode rich contextual relationships among points. The cosh-attention module reduces the space and time complexity of the attention operation: the traditional softmax operation is replaced by a non-negative ReLU activation and a hyperbolic-cosine-based operator with a re-weighting mechanism. Extensive experiments on the widely used KITTI dataset demonstrate that, compared with vanilla attention, cosh-attention significantly improves inference speed with competitive performance. Experiment results show that, among two-stage state-of-the-art methods using point-level features, the proposed ChTR3D is the fastest one.
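The linear-complexity trick — replace softmax with a non-negative feature map so attention factorizes as phi(Q)(phi(K)^T V) — is easy to sketch; the paper's cosh-based re-weighting itself is not reproduced here:

```python
import torch

def linear_attention_relu(q, k, v):
    """Linear-complexity attention sketch: softmax is replaced by a
    non-negative feature map (plain ReLU here), so attention is computed
    as phi(Q) (phi(K)^T V) in O(N) rather than O(N^2).
    q, k, v: (N, D)."""
    q, k = torch.relu(q), torch.relu(k)
    kv = k.t() @ v                                   # (D, D), independent of N
    z = q @ k.sum(dim=0, keepdim=True).t() + 1e-6    # (N, 1) normalizer
    return (q @ kv) / z                              # (N, D)
```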
In this paper, we propose PointRCNN for 3D object detection from raw point cloud. The whole framework is composed of two stages: stage-1 for the bottom-up 3D proposal generation and stage-2 for refining proposals in the canonical coordinates to obtain the final detection results. Instead of generating proposals from RGB image or projecting point cloud to bird's view or voxels as previous methods do, our stage-1 sub-network directly generates a small number of high-quality 3D proposals from point cloud in a bottom-up manner via segmenting the point cloud of the whole scene into foreground points and background. The stage-2 sub-network transforms the pooled points of each proposal to canonical coordinates to learn better local spatial features, which is combined with global semantic features of each point learned in stage-1 for accurate box refinement and confidence prediction. Extensive experiments on the 3D detection benchmark of KITTI dataset show that our proposed architecture outperforms state-of-the-art methods with remarkable margins by using only point cloud as input. The code is available at https://github.com/sshaoshuai/PointRCNN.
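The stage-2 canonical transformation is a translation to the proposal center plus a rotation by the proposal heading; a sketch (the axis convention is an assumption):

```python
import torch

def to_canonical(points, box_center, box_yaw):
    """Canonical transformation sketch: translate pooled points to the
    proposal center and rotate by -yaw about the up axis, so stage-2 can
    learn local spatial features in a pose-normalized frame.
    points: (N, 3); box_center: (3,); box_yaw: heading angle (scalar)."""
    p = points - box_center
    yaw = torch.as_tensor(box_yaw, dtype=points.dtype)
    c, s = torch.cos(yaw), torch.sin(yaw)
    x = p[:, 0] * c + p[:, 1] * s      # rotate by -yaw
    y = -p[:, 0] * s + p[:, 1] * c
    return torch.stack([x, y, p[:, 2]], dim=1)
```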