In this paper, we investigate the dynamics-aware adversarial attack problem in deep neural networks. Most existing adversarial attack algorithms are designed under a basic assumption that the network architecture is fixed throughout the attack process. However, this assumption does not hold for many recently proposed networks, such as 3D sparse convolution networks, which contain input-dependent execution to improve computational efficiency. This leads to a serious issue of lagged gradients, making the attack learned at the current step ineffective due to the subsequent change of architecture. To address this problem, we propose a Leaded Gradient Method (LGM) and show the significant effects of the lagged gradient. More specifically, we reformulate the gradients to be aware of the potential dynamic changes of network architectures, so that the learned attack better "leads" the next step than dynamics-unaware methods when the network architecture changes dynamically. Extensive experiments on various datasets show that our LGM achieves impressive performance on semantic segmentation and classification. Compared with dynamics-unaware methods, LGM achieves about 20% lower mIoU on both the ScanNet and S3DIS datasets. LGM also outperforms recent point cloud attacks.
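For illustration only (this is not the authors' LGM), the sketch below shows how a lagged gradient arises when a PGD-style loop attacks a toy network whose active branch depends on its input; the model, step size alpha, and budget eps are all invented for this example.

```python
# Toy illustration of the "lagged gradient" issue in input-adaptive networks.
# Not the authors' LGM: just a PGD-style loop on a model whose active branch
# depends on its input, so the gradient of step t may describe a branch that
# the perturbed input of step t+1 no longer triggers. All names are made up.
import torch
import torch.nn as nn

class InputAdaptiveNet(nn.Module):
    """Picks one of two linear branches depending on the input norm."""
    def __init__(self, dim=16, num_classes=4):
        super().__init__()
        self.branch_small = nn.Linear(dim, num_classes)
        self.branch_large = nn.Linear(dim, num_classes)

    def forward(self, x):
        # Input-dependent execution: the effective architecture changes with x.
        if x.norm() < 4.0:
            return self.branch_small(x)
        return self.branch_large(x)

def pgd_step(model, x, y, alpha=0.1, eps=0.5, x_orig=None):
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x).unsqueeze(0), y)
    loss.backward()
    # The gradient is valid for the branch selected by the *current* x; after
    # the update the other branch may be active, making this gradient "lagged".
    x_adv = x + alpha * x.grad.sign()
    return (x_orig + (x_adv - x_orig).clamp(-eps, eps)).detach()

torch.manual_seed(0)
model, x0 = InputAdaptiveNet(), torch.randn(16)
y = torch.tensor([1])
x_adv = x0.clone()
for _ in range(10):
    x_adv = pgd_step(model, x_adv, y, x_orig=x0)
print("final prediction:", model(x_adv).argmax().item())
```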
Deep learning-based 3D object detectors have made significant progress in recent years and have been deployed in a wide range of applications. It is crucial to understand the robustness of detectors against adversarial attacks when employing detectors in security-critical applications. In this paper, we make the first attempt to conduct a thorough evaluation and analysis of the robustness of 3D detectors under adversarial attacks. Specifically, we first extend three kinds of adversarial attacks to the 3D object detection task to benchmark the robustness of state-of-the-art 3D object detectors against attacks on KITTI and Waymo datasets, subsequently followed by the analysis of the relationship between robustness and properties of detectors. Then, we explore the transferability of cross-model, cross-task, and cross-data attacks. We finally conduct comprehensive experiments of defense for 3D detectors, demonstrating that simple transformations like flipping are of little help in improving robustness when the strategy of transformation imposed on input point cloud data is exposed to attackers. Our findings will facilitate investigations in understanding and defending the adversarial attacks against 3D object detectors to advance this field.
Research efforts on 3D point cloud semantic segmentation have recently achieved outstanding performance by adopting deep CNNs (convolutional neural networks) and GCNs (graph convolutional networks). However, the robustness of these complex models has not been systematically analyzed. Given that semantic segmentation is applied in many safety-critical applications (e.g., autonomous driving, geological sensing), it is important to fill this knowledge gap, in particular regarding how these models behave under adversarial samples. While adversarial attacks against point clouds have been studied, we found that all of them target single-object recognition and perturb point coordinates. We argue that coordinate-based perturbations are unlikely to be realizable under physical-world constraints. Hence, we propose a new color-only perturbation method named COLPER and tailor it to semantic segmentation. By evaluating COLPER on three point cloud segmentation models (PointNet++, DeepGCNs, and RandLA-Net) with an indoor dataset (S3DIS) and an outdoor dataset (Semantic3D), we found that color-only perturbation is sufficient to significantly degrade segmentation accuracy and aIoU under both targeted and non-targeted attack settings.
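To make the threat model concrete, here is a minimal color-only PGD-style sketch (not the authors' COLPER): only the RGB channels are perturbed while the XYZ coordinates stay fixed. The segmentation model `seg_model`, the (N, 6) input layout, and the budget/step values are assumptions for illustration.

```python
# Hedged sketch of a color-only adversarial perturbation on a point cloud,
# in the spirit of the threat model above (not the authors' COLPER).
# `seg_model` is any segmentation network taking (N, 6) xyz+rgb input and
# returning per-point logits; it is assumed here, not defined by the paper.
import torch
import torch.nn.functional as F

def color_only_attack(seg_model, xyz, rgb, labels, eps=0.05, alpha=0.01, steps=20):
    """Perturb only the colors; the coordinates stay untouched."""
    rgb_adv = rgb.clone().detach()
    for _ in range(steps):
        rgb_adv.requires_grad_(True)
        logits = seg_model(torch.cat([xyz, rgb_adv], dim=-1))  # (N, num_classes)
        loss = F.cross_entropy(logits, labels)                 # untargeted attack
        grad, = torch.autograd.grad(loss, rgb_adv)
        with torch.no_grad():
            rgb_adv = rgb_adv + alpha * grad.sign()            # ascend the loss
            rgb_adv = rgb + (rgb_adv - rgb).clamp(-eps, eps)   # L_inf color budget
            rgb_adv = rgb_adv.clamp(0.0, 1.0)                  # keep colors valid
    return rgb_adv.detach()
```

For a targeted attack one would instead descend the cross-entropy toward the desired labels.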
We introduce PointConvFormer, a novel building block for point cloud based deep network architectures. Inspired by generalization theory, PointConvFormer combines ideas from point convolution, where filter weights are based only on relative position, and Transformers, which utilize feature-based attention. In PointConvFormer, the feature difference between points in the neighborhood serves as an indicator to re-weight the convolutional weights. Hence, we preserve the invariances of the point convolution operation, whereas attention is used to select relevant points in the neighborhood for convolution. To validate the effectiveness of PointConvFormer, we experiment on semantic segmentation and scene flow estimation tasks on point clouds, including ScanNet, SemanticKITTI, FlyingThings3D, and KITTI. Our results show that PointConvFormer outperforms classic convolutions, regular Transformers, and voxelized sparse convolution approaches with smaller and more efficient networks. Visualizations show that PointConvFormer behaves similarly to convolution on flat surfaces, whereas the neighborhood selection effect is stronger on object boundaries, showing that it gets the best of both worlds.
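For concreteness, a hedged and heavily simplified sketch of the idea described above, not the released PointConvFormer implementation: position-based convolution weights are modulated by a gate computed from feature differences of neighboring points. The module widths and the precomputed kNN indexing convention are assumptions.

```python
# Simplified sketch of a "convolution reweighted by feature differences" block
# in the spirit of the abstract above; not the official PointConvFormer code.
import torch
import torch.nn as nn

class FeatureReweightedPointConv(nn.Module):
    def __init__(self, in_dim, out_dim, mid=16):
        super().__init__()
        # Convolution weights generated from relative positions only.
        self.weight_net = nn.Sequential(nn.Linear(3, mid), nn.ReLU(),
                                        nn.Linear(mid, in_dim))
        # Attention-like gate computed from feature differences.
        self.gate_net = nn.Sequential(nn.Linear(in_dim, mid), nn.ReLU(),
                                      nn.Linear(mid, 1), nn.Sigmoid())
        self.out_proj = nn.Linear(in_dim, out_dim)

    def forward(self, xyz, feats, neighbor_idx):
        # xyz: (N, 3), feats: (N, C), neighbor_idx: (N, K) precomputed kNN indices
        nbr_xyz = xyz[neighbor_idx]                 # (N, K, 3)
        nbr_feats = feats[neighbor_idx]             # (N, K, C)
        rel_pos = nbr_xyz - xyz.unsqueeze(1)        # relative positions
        feat_diff = nbr_feats - feats.unsqueeze(1)  # feature differences
        w = self.weight_net(rel_pos)                # position-based weights (N, K, C)
        g = self.gate_net(feat_diff)                # per-neighbor gate (N, K, 1)
        aggregated = (g * w * nbr_feats).sum(dim=1) # gated, weighted sum over K
        return self.out_proj(aggregated)            # (N, out_dim)
```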
Standard convolutional neural networks assume a grid structured input is available and exploit discrete convolutions as their fundamental building blocks. This limits their applicability to many real-world applications. In this paper we propose Parametric Continuous Convolution, a new learnable operator that operates over non-grid structured data. The key idea is to exploit parameterized kernel functions that span the full continuous vector space. This generalization allows us to learn over arbitrary data structures as long as their support relationship is computable. Our experiments show significant improvement over the state-of-the-art in point cloud segmentation of indoor and outdoor scenes, and lidar motion estimation of driving scenes.
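A minimal sketch of the core idea under assumed shapes and MLP sizes (not the authors' implementation): the kernel is a small MLP evaluated at continuous offsets, so the convolution can be computed at arbitrary support points without a grid.

```python
# Hedged sketch of a parametric continuous convolution: the kernel value at a
# continuous offset is produced by a small MLP, so no grid is required.
# Shapes and MLP width are assumptions, not the paper's exact configuration.
import torch
import torch.nn as nn

class ParametricContinuousConv(nn.Module):
    def __init__(self, in_dim, out_dim, hidden=32):
        super().__init__()
        # Kernel g(offset) -> (in_dim * out_dim) weights, spanning continuous space.
        self.kernel_mlp = nn.Sequential(nn.Linear(3, hidden), nn.ReLU(),
                                        nn.Linear(hidden, in_dim * out_dim))
        self.in_dim, self.out_dim = in_dim, out_dim

    def forward(self, support_xyz, support_feats, query_xyz, neighbor_idx):
        # support_xyz: (N, 3), support_feats: (N, C_in)
        # query_xyz: (M, 3), neighbor_idx: (M, K) indices into the support set
        offsets = support_xyz[neighbor_idx] - query_xyz.unsqueeze(1)   # (M, K, 3)
        w = self.kernel_mlp(offsets)                                   # (M, K, Cin*Cout)
        w = w.view(*offsets.shape[:2], self.in_dim, self.out_dim)      # (M, K, Cin, Cout)
        f = support_feats[neighbor_idx]                                # (M, K, Cin)
        # Sum over neighbors of the per-offset linear maps applied to features.
        return torch.einsum('mkc,mkcd->md', f, w)                      # (M, Cout)
```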
Feedforward fully convolutional neural networks currently dominate in semantic segmentation of 3D point clouds. Despite their great success, they suffer from the loss of local information at low-level layers, posing significant challenges to accurate scene segmentation and precise object boundary delineation. Prior works either address this issue by post-processing or jointly learn object boundaries to implicitly improve feature encoding of the networks. These approaches often require additional modules which are difficult to integrate into the original architecture. To improve the segmentation near object boundaries, we propose a boundary-aware feature propagation mechanism. This mechanism is achieved by exploiting a multi-task learning framework that aims to explicitly guide the boundaries to their original locations. With one shared encoder, our network outputs (i) boundary localization, (ii) prediction of directions pointing to the object's interior, and (iii) semantic segmentation, in three parallel streams. The predicted boundaries and directions are fused to propagate the learned features to refine the segmentation. We conduct extensive experiments on the S3DIS and SensatUrban datasets against various baseline methods, demonstrating that our proposed approach yields consistent improvements by reducing boundary errors. Our code is available at https://github.com/shenglandu/PushBoundary.
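As a rough sketch of the three-stream output structure described above, the snippet below defines a shared encoder with parallel boundary, direction, and semantic heads; the stand-in MLP encoder, layer widths, and class count are placeholders, and the fusion/propagation step is omitted.

```python
# Hedged sketch of a shared encoder with three parallel prediction streams
# (boundary, direction-to-interior, semantics), as described in the abstract.
# The encoder here is a stand-in MLP; the real network is a point cloud backbone.
import torch
import torch.nn as nn

class BoundaryAwareHeads(nn.Module):
    def __init__(self, in_dim=6, feat_dim=64, num_classes=13):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU(),
                                     nn.Linear(feat_dim, feat_dim), nn.ReLU())
        self.boundary_head = nn.Linear(feat_dim, 1)           # boundary probability
        self.direction_head = nn.Linear(feat_dim, 3)          # unit vector to interior
        self.semantic_head = nn.Linear(feat_dim, num_classes) # per-point classes

    def forward(self, points):
        feats = self.encoder(points)                               # (N, feat_dim)
        boundary = torch.sigmoid(self.boundary_head(feats))        # (N, 1)
        direction = nn.functional.normalize(self.direction_head(feats), dim=-1)
        semantics = self.semantic_head(feats)                      # (N, num_classes)
        return boundary, direction, semantics
```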
We present an approach to semantic scene analysis using deep convolutional networks. Our approach is based on tangent convolutions - a new construction for convolutional networks on 3D data. In contrast to volumetric approaches, our method operates directly on surface geometry. Crucially, the construction is applicable to unstructured point clouds and other noisy real-world data. We show that tangent convolutions can be evaluated efficiently on large-scale point clouds with millions of points. Using tangent convolutions, we design a deep fully-convolutional network for semantic segmentation of 3D point clouds, and apply it to challenging real-world datasets of indoor and outdoor 3D environments. Experimental results show that the presented approach outperforms other recent deep network constructions in detailed analysis of large 3D scenes.
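A hedged sketch of the tangent-plane construction the method builds on: the local surface orientation is estimated by PCA over a neighborhood, and neighbors are expressed in 2D tangent coordinates where an image-style convolution can then operate. The signal interpolation and the convolution itself are omitted, and the neighborhood size is an arbitrary choice.

```python
# Hedged sketch of the tangent-plane projection underlying tangent convolutions:
# estimate the local surface orientation with PCA and express neighbors in 2D
# tangent coordinates. The subsequent interpolation and convolution are omitted.
import numpy as np
from scipy.spatial import cKDTree

def tangent_coordinates(points, center_idx, k=32):
    tree = cKDTree(points)
    _, nbr_idx = tree.query(points[center_idx], k=k)
    nbrs = points[nbr_idx] - points[center_idx]       # neighbors in a local frame
    # PCA: the two largest principal directions span the tangent plane.
    _, _, vt = np.linalg.svd(nbrs, full_matrices=False)
    tangent_basis = vt[:2]                            # (2, 3)
    return nbrs @ tangent_basis.T                     # (k, 2) tangent-plane coords

pts = np.random.rand(5000, 3)
print(tangent_coordinates(pts, center_idx=0).shape)   # (32, 2)
```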
We present Kernel Point Convolution (KPConv), a new design of point convolution, i.e. that operates on point clouds without any intermediate representation. The convolution weights of KPConv are located in Euclidean space by kernel points, and applied to the input points close to them. Its capacity to use any number of kernel points gives KPConv more flexibility than fixed grid convolutions. Furthermore, these locations are continuous in space and can be learned by the network. Therefore, KPConv can be extended to deformable convolutions that learn to adapt kernel points to local geometry. Thanks to a regular subsampling strategy, KPConv is also efficient and robust to varying densities. Whether they use deformable KPConv for complex tasks, or rigid KPConv for simpler tasks, our networks outperform state-of-the-art classification and segmentation approaches on several datasets. We also offer ablation studies and visualizations to provide understanding of what has been learned by KPConv and to validate the descriptive power of deformable KPConv.
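A hedged, simplified sketch of the rigid kernel-point weighting described above: each kernel point carries a weight matrix, and its influence on a neighbor decays linearly with distance up to an influence radius sigma. Kernel-point placement, sigma, and the absence of subsampling are simplifications, not the paper's exact configuration.

```python
# Simplified sketch of a rigid kernel-point convolution: neighbors are weighted
# by a linear correlation with each kernel point, then mapped by that kernel
# point's weight matrix. Kernel point placement and sigma are assumptions.
import torch
import torch.nn as nn

class SimpleKPConv(nn.Module):
    def __init__(self, in_dim, out_dim, num_kernel_points=15, sigma=0.1):
        super().__init__()
        # Kernel point positions in Euclidean space (they could also be learned).
        self.kernel_points = nn.Parameter(torch.randn(num_kernel_points, 3) * sigma,
                                          requires_grad=False)
        self.weights = nn.Parameter(torch.randn(num_kernel_points, in_dim, out_dim) * 0.1)
        self.sigma = sigma

    def forward(self, xyz, feats, neighbor_idx):
        # xyz: (N, 3), feats: (N, C_in), neighbor_idx: (N, K)
        rel = xyz[neighbor_idx] - xyz.unsqueeze(1)              # (N, K, 3)
        # Linear correlation: 1 - distance / sigma, clipped at zero.
        dist = torch.cdist(rel, self.kernel_points.expand(rel.shape[0], -1, -1))
        corr = torch.clamp(1.0 - dist / self.sigma, min=0.0)    # (N, K, P)
        nbr_feats = feats[neighbor_idx]                         # (N, K, C_in)
        # Sum over neighbors and kernel points of corr * (W_p @ f_neighbor).
        return torch.einsum('nkp,nkc,pcd->nd', corr, nbr_feats, self.weights)
```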
LiDAR sensors are essential to the perception systems of autonomous vehicles and intelligent robots. To meet the real-time requirements of real-world applications, it is necessary to segment LiDAR scans efficiently. Most previous approaches directly project the 3D point cloud onto a 2D spherical range image so that efficient 2D convolution operations can be exploited for image segmentation. Despite encouraging results, neighborhood information is not well preserved in the spherical projection. Moreover, temporal information is not considered in single-scan segmentation tasks. To address these issues, we propose Meta-RangeSeg, a novel semantic segmentation approach for LiDAR sequences, in which a new range residual image representation is introduced to capture spatio-temporal information. Specifically, a meta-kernel is used to extract meta features, which reduces the inconsistency between the 2D range image coordinate input and the 3D Cartesian coordinate output. An efficient U-Net backbone is used to obtain multi-scale features. Furthermore, a Feature Aggregation Module (FAM) strengthens the role of the range channel and aggregates features at different levels. We have conducted extensive experiments on SemanticKITTI and SemanticPOSS for evaluation. The promising results show that our proposed Meta-RangeSeg method is more efficient and effective than existing approaches. The full implementation is publicly available at https://github.com/songw-zju/meta-rangeseg.
LiDAR-based 3D scene perception is a fundamental and important task for autonomous driving. Most state-of-the-art methods for LiDAR-based 3D recognition tasks focus on single-frame 3D point cloud data, and temporal information is ignored in those methods. We argue that the temporal information across frames provides crucial knowledge for 3D scene perception, especially in driving scenarios. In this paper, we focus on spatial and temporal variations to better explore the temporal information across 3D frames. We design a temporal variation-aware interpolation module and a temporal voxel-point refiner to capture temporal variations in 4D point clouds. The temporal variation-aware interpolation generates local features from the previous and current frames by capturing spatial coherence and temporal variation information. The temporal voxel-point refiner builds a temporal graph on the 3D point cloud sequence and captures temporal variations with a graph convolution module; it also transforms coarse voxel-level predictions into fine point-level predictions. With our proposed modules, the new network TVSN achieves state-of-the-art performance on SemanticKITTI and SemanticPOSS. Specifically, our method reaches 52.5% mIoU (+5.5% over the previous best method) on the multiple-scan segmentation task of SemanticKITTI, and 63.0% (+2.8% over the previous best method) on the multiple-scan segmentation task of SemanticPOSS.
Point cloud is an important type of geometric data structure. Due to its irregular format, most researchers transform such data to regular 3D voxel grids or collections of images. This, however, renders data unnecessarily voluminous and causes issues. In this paper, we design a novel type of neural network that directly consumes point clouds, which well respects the permutation invariance of points in the input. Our network, named PointNet, provides a unified architecture for applications ranging from object classification, part segmentation, to scene semantic parsing. Though simple, PointNet is highly efficient and effective. Empirically, it shows strong performance on par or even better than state of the art. Theoretically, we provide analysis towards understanding of what the network has learnt and why the network is robust with respect to input perturbation and corruption.
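A minimal sketch of the permutation-invariant design the abstract refers to: a shared per-point MLP followed by a symmetric max pooling. Layer widths are illustrative, not the original architecture, and the short check at the end verifies the ordering invariance.

```python
# Minimal sketch of a PointNet-style classifier: a shared per-point MLP followed
# by max pooling, which makes the output invariant to point ordering.
# Layer widths here are illustrative, not the original architecture.
import torch
import torch.nn as nn

class TinyPointNet(nn.Module):
    def __init__(self, num_classes=40):
        super().__init__()
        self.point_mlp = nn.Sequential(nn.Linear(3, 64), nn.ReLU(),
                                       nn.Linear(64, 256), nn.ReLU())
        self.classifier = nn.Linear(256, num_classes)

    def forward(self, points):                    # points: (B, N, 3)
        per_point = self.point_mlp(points)        # (B, N, 256), shared weights
        global_feat = per_point.max(dim=1).values # symmetric function over points
        return self.classifier(global_feat)       # (B, num_classes)

# Permutation invariance check: shuffling the points leaves the output unchanged.
net, pts = TinyPointNet(), torch.randn(2, 1024, 3)
perm = torch.randperm(1024)
assert torch.allclose(net(pts), net(pts[:, perm]), atol=1e-5)
```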
This paper presents PointWeb, a new approach to extract contextual features from local neighborhood in a point cloud. Unlike previous work, we densely connect each point with every other in a local neighborhood, aiming to specify feature of each point based on the local region characteristics for better representing the region. A novel module, namely Adaptive Feature Adjustment (AFA) module, is presented to find the interaction between points. For each local region, an impact map carrying element-wise impact between point pairs is applied to the feature difference map. Each feature is then pulled or pushed by other features in the same region according to the adaptively learned impact indicators. The adjusted features are well encoded with region information, and thus benefit the point cloud recognition tasks, such as point cloud segmentation and classification. Experimental results show that our model outperforms the state-of-the-arts on both semantic segmentation and shape classification datasets.
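A hedged sketch in the spirit of the Adaptive Feature Adjustment module: pairwise feature differences inside a densely connected local region produce an element-wise impact map, which then pushes or pulls each point feature. The MLP, the tanh activation, and the summation are simplifications of the paper's module.

```python
# Hedged sketch of an adaptive feature adjustment over a densely connected local
# region: pairwise feature differences produce an impact map, which then pushes
# or pulls each point feature. This simplifies the paper's AFA module.
import torch
import torch.nn as nn

class SimpleAFA(nn.Module):
    def __init__(self, dim, hidden=32):
        super().__init__()
        self.impact_mlp = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(),
                                        nn.Linear(hidden, dim))

    def forward(self, region_feats):
        # region_feats: (M, K, C) features of K points in each of M local regions.
        diff = region_feats.unsqueeze(2) - region_feats.unsqueeze(1)  # (M, K, K, C)
        impact = torch.tanh(self.impact_mlp(diff))                    # element-wise impact
        # Each feature is adjusted by the impact-weighted feature differences
        # accumulated from every other point in the same region.
        adjustment = (impact * diff).sum(dim=2)                       # (M, K, C)
        return region_feats + adjustment
```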
Our dataset provides dense annotations for each scan of all sequences from the KITTI Odometry Benchmark [19]. Here, we show multiple scans aggregated using pose information estimated by a SLAM approach.
Scene completion is the task of completing missing geometry from a partial scan of a scene. Most previous methods compute an implicit representation from a truncated signed distance function (T-SDF) on a 3D grid as input to a neural network. The truncation limits, but does not remove, the ambiguous cases introduced by the sign of non-closed surfaces. As an alternative, we propose an unsigned distance function (UDF), called Unsigned Weighted Euclidean Distance (UWED), as the input representation for scene completion neural networks. UWED is simple and effective as a geometric representation and can be computed on any point cloud; in contrast to the usual signed distance functions (SDF), UWED does not require normal computation. To obtain explicit geometry, we present a method to extract a point cloud from UDF values discretized on a regular grid. We compare different SDFs and UDFs on the scene completion task on indoor and outdoor point clouds collected from RGB-D and LiDAR sensors, and show improved completion with the proposed UWED function.
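A hedged sketch of computing an unsigned, weighted Euclidean distance volume on a regular grid directly from a raw point cloud; note that no normals are required. The inverse-distance weighting over the k nearest points is an assumed choice for illustration and may differ from the paper's exact UWED weighting.

```python
# Hedged sketch of an unsigned, weighted Euclidean distance field on a regular
# grid. The inverse-distance weighting over k nearest points is an assumption
# for illustration; the exact UWED weighting may differ. No normals are needed.
import numpy as np
from scipy.spatial import cKDTree

def unsigned_weighted_distance(points, grid_res=32, k=8, eps=1e-8):
    """points: (N, 3) array. Returns a (grid_res,)*3 unsigned distance volume."""
    lo, hi = points.min(0), points.max(0)
    axes = [np.linspace(lo[d], hi[d], grid_res) for d in range(3)]
    grid = np.stack(np.meshgrid(*axes, indexing='ij'), axis=-1).reshape(-1, 3)

    tree = cKDTree(points)
    dists, _ = tree.query(grid, k=k)          # (G, k) distances to nearest points
    weights = 1.0 / (dists + eps)             # closer points contribute more
    weights /= weights.sum(axis=1, keepdims=True)
    udf = (weights * dists).sum(axis=1)       # weighted average, always non-negative
    return udf.reshape(grid_res, grid_res, grid_res)

# Example: distance field of a random sphere-like point cloud.
pts = np.random.randn(2000, 3)
pts /= np.linalg.norm(pts, axis=1, keepdims=True)
volume = unsigned_weighted_distance(pts)
print(volume.shape, volume.min(), volume.max())
```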
Recently, 3D deep learning models have been shown to be as vulnerable to adversarial attacks as their 2D counterparts. Most state-of-the-art (SOTA) 3D adversarial attacks perturb 3D point clouds. To reproduce these attacks in a physical scenario, the generated adversarial 3D point cloud needs to be reconstructed into a mesh, which leads to a significant drop in its adversarial effect. In this paper, we propose a strong 3D adversarial attack named Mesh Attack that addresses this problem by directly perturbing the mesh of a 3D object. To take advantage of the most effective gradient-based attacks, a differentiable sampling module is introduced, which back-propagates point cloud gradients to the mesh. To further ensure adversarial mesh examples without outliers that remain 3D-printable, three mesh losses are adopted. Extensive experiments show that the proposed scheme outperforms SOTA 3D attacks by a significant margin. We also achieve SOTA performance under various defenses. Our code is available at: https://github.com/cuge1995/mesh-attack.
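A hedged sketch of the general construction behind differentiable mesh sampling: each sampled point is a barycentric combination of its face's vertices, so gradients from a point-cloud loss flow back to the vertex positions. The sampling scheme, the placeholder classifier, and the plain gradient step stand in for, but are not, the paper's module and losses.

```python
# Hedged sketch of differentiable point sampling from a mesh: each sampled point
# is a barycentric combination of its face's vertices, so gradients from a
# point-cloud loss flow back to the vertex positions. This is one common
# construction, not necessarily the exact sampling module of the paper.
import torch

def sample_points(vertices, faces, num_samples=1024):
    # vertices: (V, 3) with requires_grad=True, faces: (F, 3) long tensor
    face_idx = torch.randint(0, faces.shape[0], (num_samples,))
    tri = vertices[faces[face_idx]]                       # (S, 3, 3) face corners
    u, v = torch.rand(num_samples, 1), torch.rand(num_samples, 1)
    flip = (u + v) > 1.0                                  # keep samples inside the triangle
    u, v = torch.where(flip, 1 - u, u), torch.where(flip, 1 - v, v)
    bary = torch.cat([1 - u - v, u, v], dim=1)            # (S, 3) barycentric coords
    return (bary.unsqueeze(-1) * tri).sum(dim=1)          # (S, 3) sampled points

# One attack step on the mesh vertices (classifier and labels are placeholders;
# the paper's three mesh regularization losses would be added to the objective).
def mesh_attack_step(vertices, faces, classifier, target_label, lr=1e-3):
    vertices = vertices.clone().detach().requires_grad_(True)
    points = sample_points(vertices, faces)
    loss = torch.nn.functional.cross_entropy(
        classifier(points.unsqueeze(0)), target_label)    # targeted: descend the CE
    loss.backward()                                       # gradients reach the vertices
    return (vertices - lr * vertices.grad).detach()
```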
Convolution on 3D point clouds has been widely studied, yet it is far from perfect in geometric deep learning. The conventional wisdom of convolution assumes feature correspondences between 3D points, which is an intrinsic limitation that leads to poor distinctive feature learning. In this paper, we propose Adaptive Graph Convolution (AGConv) for a wide range of point cloud analysis applications. AGConv generates adaptive kernels based on dynamically learned features of the points. Compared with solutions using fixed/isotropic kernels, AGConv improves the flexibility of point cloud convolution, effectively and precisely capturing the diverse relations between points from different semantic parts. Unlike popular attention-weight schemes, AGConv implements adaptiveness inside the convolution operation instead of simply assigning different weights to neighboring points. Extensive evaluations clearly show that our method outperforms state-of-the-art methods for point cloud classification and segmentation on various benchmark datasets. Meanwhile, AGConv can be flexibly adopted by more point cloud analysis approaches to improve their performance. To validate its flexibility and effectiveness, we explore AGConv-based paradigms for completion, denoising, upsampling, registration, and circle extraction, which are comparable or even superior to their competitors. Our code is available at https://github.com/hrzhou2/adaptconv-master.
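A hedged, simplified sketch of the adaptive-kernel idea: the kernel applied to each neighbor is generated from learned features rather than being fixed, and then acts on the spatial relation vector. The kernel generator, the max aggregation, and all sizes are assumptions, not the released AGConv code.

```python
# Hedged sketch of an adaptive graph convolution: the kernel applied to each
# neighbor is generated from learned features rather than being fixed, and is
# then applied to the spatial relation vector. A simplification of AGConv.
import torch
import torch.nn as nn

class SimpleAdaptiveConv(nn.Module):
    def __init__(self, feat_dim, out_dim, hidden=64):
        super().__init__()
        # Kernel generator: maps (center feature, feature difference) to a
        # per-neighbor matrix acting on the 3D spatial relation.
        self.kernel_gen = nn.Sequential(nn.Linear(2 * feat_dim, hidden), nn.ReLU(),
                                        nn.Linear(hidden, out_dim * 3))
        self.out_dim = out_dim

    def forward(self, xyz, feats, neighbor_idx):
        # xyz: (N, 3), feats: (N, C), neighbor_idx: (N, K)
        rel_pos = xyz[neighbor_idx] - xyz.unsqueeze(1)                 # (N, K, 3)
        feat_diff = feats[neighbor_idx] - feats.unsqueeze(1)           # (N, K, C)
        center = feats.unsqueeze(1).expand_as(feat_diff)               # (N, K, C)
        kernels = self.kernel_gen(torch.cat([center, feat_diff], -1))  # (N, K, out*3)
        kernels = kernels.view(*rel_pos.shape[:2], self.out_dim, 3)    # (N, K, out, 3)
        # Apply the adaptive kernel to the spatial relation, then max over neighbors.
        responses = torch.einsum('nkdc,nkc->nkd', kernels, rel_pos)    # (N, K, out)
        return responses.max(dim=1).values                             # (N, out)
```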
We propose a new attention mechanism, called Global Hierarchical Attention (GHA), for 3D point cloud analysis. GHA approximates regular global dot-product attention via a series of coarsening and interpolation operations over multiple hierarchy levels. The advantage of GHA is two-fold. First, it has linear complexity with respect to the number of points, enabling the processing of large point clouds. Second, GHA inherently possesses the inductive bias to focus on spatially close points, while retaining the global connectivity among all points. Combined with a feedforward network, GHA can be inserted into many existing network architectures. We experiment with multiple baseline networks and show that adding GHA consistently improves performance across different tasks and datasets. For the task of semantic segmentation, GHA gives a +1.7% mIoU increase to the MinkowskiEngine baseline on ScanNet. For the 3D object detection task, GHA improves the CenterPoint baseline by +0.5% mAP on the nuScenes dataset, and the 3DETR baseline by +2.1% mAP25 and +1.5% mAP50 on ScanNet.
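A toy, single-level version of attention through coarsening and interpolation (not the authors' GHA): points attend to a small coarse set instead of to every other point, which keeps the cost linear in the number of points while preserving a global pathway. The random token selection, the fixed coarse size, and the single hierarchy level are simplifications.

```python
# Hedged, single-level toy of hierarchical attention by coarsening and
# interpolation: fine points exchange information only through a small coarse
# set, giving a global pathway at linear cost in N. Not the authors' GHA.
import torch
import torch.nn as nn

class CoarsenedAttention(nn.Module):
    def __init__(self, dim=64, num_heads=4, num_coarse=256):
        super().__init__()
        self.to_coarse = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.coarse_self = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.from_coarse = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.num_coarse = num_coarse

    def forward(self, feats):
        # feats: (B, N, dim) per-point features.
        N = feats.shape[1]
        idx = torch.randperm(N)[: min(self.num_coarse, N)]   # crude "coarsening"
        coarse = feats[:, idx]                                # (B, M, dim)
        coarse, _ = self.to_coarse(coarse, feats, feats)      # pool fine -> coarse
        coarse, _ = self.coarse_self(coarse, coarse, coarse)  # global mixing on M << N
        out, _ = self.from_coarse(feats, coarse, coarse)      # "interpolate" coarse -> fine
        return feats + out                                    # residual connection

x = torch.randn(2, 4096, 64)
print(CoarsenedAttention()(x).shape)    # torch.Size([2, 4096, 64])
```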
Exploiting 3D point cloud data has become an urgent need for deploying artificial intelligence in many areas such as facial recognition and autonomous driving. However, deep learning for 3D point clouds is still vulnerable to adversarial attacks, e.g., iterative attacks, point transformation attacks, and generative attacks. These attacks need to restrict the perturbation of adversarial examples within strict bounds, leading to unrealistic adversarial 3D point clouds. In this paper, we propose an adversarial graph-convolutional generative adversarial network (AdvGCGAN) to generate visually realistic adversarial 3D point clouds from scratch. Specifically, we use a graph-convolutional generator and a discriminator with an auxiliary classifier to generate realistic point clouds, learning the latent distribution from real 3D data. An unrestricted adversarial attack loss is incorporated into the special adversarial training of the GAN, enabling the generator to produce adversarial examples that fool the target network. Compared with existing state-of-the-art attack methods, the experimental results demonstrate the effectiveness of our unrestricted adversarial attack method, with a higher attack success rate and better visual quality. Additionally, the proposed AdvGCGAN achieves better performance against defense models and better transferability than existing attack methods, along with strong camouflage.
Recent advances have shown that semi-supervised implicit representation learning can be achieved through physical constraints like the Eikonal equation. However, this scheme has not yet been successfully used for LiDAR point cloud data, due to its spatially varying sparsity. In this paper, we develop a novel formulation that conditions the semi-supervised implicit function on localized shape embeddings. It exploits the strong representation power of sparse convolutional networks to generate shape-aware dense feature volumes, while still allowing semi-supervised signed distance function learning without knowing the exact values of free space. With extensive quantitative and qualitative results, we demonstrate the intrinsic properties of this new learning system and its usefulness in real-world road scenes. Notably, we improve IoU from 26.3% to 51.0% on SemanticKITTI. Moreover, we explore two paradigms to integrate semantic label predictions, achieving implicit semantic completion. Code and models can be accessed at https://github.com/open-air-sun/sisc.
In this paper, we revisit semantic scene completion (SSC), a useful task for predicting the semantic and occupancy representation of 3D scenes. Many methods for this task are based on voxelized scene representations in order to preserve local scene structure. However, due to the existence of visible empty voxels, these methods suffer from heavy computation redundancy when the network goes deeper, thus limiting completion quality. To address this dilemma, we propose a novel point-voxel aggregation network for this task. We first transfer the voxelized scene to a point cloud by removing the visible empty voxels and adopt a deep point stream to efficiently capture semantic information from the scene. Meanwhile, a lightweight voxel stream containing only two 3D convolution layers preserves the local structure of the voxelized scene. Furthermore, we design an anisotropic voxel aggregation operator to fuse structural details from the voxel stream into the point stream, and a semantic-aware propagation module to enhance the up-sampling process in the point stream with semantic labels. We demonstrate that our model surpasses state-of-the-art methods on two benchmarks by a large margin, using only depth images as input.