Recently, 3D deep learning models have been shown to be as susceptible to adversarial attacks as their 2D counterparts. Most state-of-the-art (SOTA) 3D adversarial attacks perturb 3D point clouds. To reproduce these attacks in a physical scenario, the generated adversarial point clouds need to be reconstructed into meshes, which significantly degrades their adversarial effectiveness. In this paper, we propose a strong 3D adversarial attack named Mesh Attack that addresses this problem by directly perturbing the mesh of a 3D object. To exploit the most effective gradient-based attacks, a differentiable sampling module is introduced that back-propagates gradients from the point cloud to the mesh. To further ensure that the adversarial mesh examples are free of outliers and remain 3D-printable, three mesh losses are adopted. Extensive experiments demonstrate that the proposed scheme outperforms SOTA 3D attacks by a significant margin. We also achieve SOTA performance under various defenses. Our code is available at: https://github.com/cuge1995/mesh-attack.
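The abstract does not spell out the differentiable sampling module; below is a minimal sketch of one standard construction, assuming a PyTorch mesh given as a vertex tensor and a face-index tensor. Points are sampled as barycentric combinations of face corners, so gradients computed on the point cloud flow back to the vertex positions.

```python
import torch

def sample_points_from_mesh(vertices, faces, n_points=1024):
    """Differentiably sample a point cloud from a triangle mesh.

    vertices: (V, 3) float tensor, typically with requires_grad=True
    faces:    (F, 3) long tensor of vertex indices
    The samples are barycentric combinations of vertex positions, so
    gradients on the points flow back to `vertices`.
    """
    tri = vertices[faces]                                # (F, 3, 3) corners
    # Area-weighted face selection: larger faces get more samples.
    e1, e2 = tri[:, 1] - tri[:, 0], tri[:, 2] - tri[:, 0]
    areas = torch.linalg.cross(e1, e2).norm(dim=1) / 2
    idx = torch.multinomial(areas, n_points, replacement=True)
    # Uniform barycentric coordinates via the square-root trick.
    u = torch.sqrt(torch.rand(n_points, 1))
    v = torch.rand(n_points, 1)
    w0, w1, w2 = 1 - u, u * (1 - v), u * v
    t = tri[idx]
    return w0 * t[:, 0] + w1 * t[:, 1] + w2 * t[:, 2]    # (n_points, 3)
```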
While many efforts have been devoted to attacks and defenses in the 2D image domain in recent years, few methods explore the vulnerability of 3D models. Existing 3D attackers generally apply point-wise perturbations to point clouds, resulting in deformed structures or outliers that are easily perceptible to humans. Moreover, their adversarial examples are generated under the white-box setting and frequently suffer from low success rates when transferred to attack remote black-box models. In this paper, we study 3D point cloud attacks from two new and challenging perspectives by proposing a novel Imperceptible Transfer Attack (ITA). 1) Imperceptibility: we constrain the perturbation direction of each point along the normal vector of its neighborhood surface, so that the generated examples share similar geometric properties with the originals, enhancing imperceptibility. 2) Transferability: we develop an adversarial transformation model to generate the most harmful distortions and enforce the adversarial examples to resist them, improving their transferability to unknown black-box models. Furthermore, we propose to defend against such ITA attacks by training more robust black-box 3D models that learn more discriminative point cloud representations. Extensive evaluations demonstrate that our ITA attack is more imperceptible and transferable than the state of the art, and validate the superiority of our defense strategy.
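A minimal sketch of the imperceptibility constraint, assuming normals are estimated by PCA over k-nearest neighbours (the paper's exact normal estimation and optimization loop are not given here): each point is only allowed to move along its local surface normal.

```python
import numpy as np

def estimate_normals(points, k=16):
    """Per-point normals from PCA over k-nearest neighbours (sketch)."""
    d = np.linalg.norm(points[:, None] - points[None], axis=-1)   # (N, N)
    knn = np.argsort(d, axis=1)[:, 1:k + 1]                       # skip self
    normals = np.empty_like(points)
    for i, nbr in enumerate(knn):
        nb = points[nbr] - points[nbr].mean(0)
        # Direction of least variance approximates the surface normal.
        _, _, vt = np.linalg.svd(nb)
        normals[i] = vt[-1]
    return normals

def perturb_along_normals(points, magnitudes, k=16):
    """Shift each point along its surface normal by a signed magnitude,
    the imperceptibility constraint described in the abstract."""
    return points + magnitudes[:, None] * estimate_normals(points, k)
```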
3D dynamic point clouds provide a discrete representation of real-world objects or scenes in motion and have been widely applied in immersive telepresence, autonomous driving, surveillance, etc. However, point clouds acquired from sensors are usually perturbed by noise, which affects downstream tasks such as surface reconstruction and analysis. While many efforts have been made on static point cloud denoising, few works address dynamic point cloud denoising. In this paper, we propose a novel gradient-based dynamic point cloud denoising method that exploits temporal correspondence for the estimation of gradient fields, which is also a fundamental problem in dynamic point cloud processing and analysis. The gradient field is the gradient of the log-probability function of the noisy point cloud, on which we perform gradient ascent so that each point converges to the underlying clean surface. We estimate the gradient of each surface patch by exploiting temporal correspondence, where the temporally corresponding patches are searched for by drawing on rigid motion in classical mechanics. In particular, we treat each patch as a rigid object that moves in the gradient field of an adjacent frame under force until it reaches a balanced state, i.e., when the sum of gradients over the patch reaches 0. Since the gradient is smaller the closer a point is to the underlying surface, the balanced patch fits the underlying surface well, which establishes the temporal correspondence. Finally, the position of each point in a patch is updated along the gradient direction averaged over the corresponding patches in adjacent frames. Experimental results demonstrate that the proposed model outperforms state-of-the-art methods.
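A sketch of the core update rule, assuming `grad_field` is a callable returning the gradient of the log-probability of the noisy point distribution (a learned score model in the paper; the temporal patch correspondence used to estimate it is omitted here):

```python
import numpy as np

def denoise_step(points, grad_field, step=0.1):
    """One gradient-ascent step toward the underlying clean surface.

    `grad_field(points) -> (N, 3)` is assumed to return the gradient of
    the log-probability of the noisy point distribution; in the paper it
    is averaged over temporally corresponding patches in adjacent frames.
    """
    return points + step * grad_field(points)

def denoise(points, grad_field, n_iters=30, step=0.1):
    for _ in range(n_iters):
        points = denoise_step(points, grad_field, step)
    return points
```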
With the growing attention on 3D safety-critical applications, point cloud learning models have been shown to be vulnerable to adversarial attacks. Although existing 3D attack methods achieve high success rates, they delve into the data space with point-wise perturbations, which may neglect geometric characteristics. Instead, we propose point cloud attacks from a new perspective, the graph spectral domain attack, which aims to perturb graph transform coefficients in the spectral domain, corresponding to varying certain geometric structures. Specifically, leveraging graph signal processing, we first adaptively transform the point coordinates into the spectral domain via the graph Fourier transform (GFT) for a compact representation. Then, based on an analysis of how different spectral bands influence the geometric structure, we propose to perturb the GFT coefficients with a learnable graph spectral filter. Considering that low-frequency components mainly contribute to the rough shape of the 3D object, we further introduce a low-frequency constraint to limit perturbations to the imperceptible high-frequency components. Finally, the adversarial point cloud is generated by transforming the perturbed spectral representation back to the data domain. Experimental results demonstrate the effectiveness of the proposed attack in terms of both imperceptibility and attack success rate.
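A minimal numpy sketch of the spectral pipeline under simple assumptions (a kNN graph with Gaussian edge weights, and fixed per-band gains standing in for the paper's learnable spectral filter): coordinates are transformed with the GFT, scaled per frequency band, and transformed back.

```python
import numpy as np

def gft_basis(points, k=10):
    """Graph Fourier basis of a kNN graph over the point cloud."""
    n = len(points)
    d = np.linalg.norm(points[:, None] - points[None], axis=-1)
    w = np.zeros((n, n))
    knn = np.argsort(d, axis=1)[:, 1:k + 1]
    for i, nbr in enumerate(knn):
        w[i, nbr] = w[nbr, i] = np.exp(-d[i, nbr] ** 2)   # Gaussian weights
    lap = np.diag(w.sum(1)) - w                           # graph Laplacian
    _, u = np.linalg.eigh(lap)         # columns ordered low -> high frequency
    return u

def spectral_perturb(points, gains, k=10):
    """Scale the GFT coefficients of the coordinates by per-band `gains`
    (shape (N,)) and transform back. Gains near 1 in the low bands keep
    the rough shape, matching the low-frequency constraint above."""
    u = gft_basis(points, k)
    coeffs = u.T @ points              # (N, 3) spectral representation
    return u @ (gains[:, None] * coeffs)
```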
Point cloud completion, as the upstream procedure of 3D recognition and segmentation, has become an essential part of many tasks such as navigation and scene understanding. While various point cloud completion models have demonstrated their powerful capabilities, their robustness against adversarial attacks, which have been proven to be fatally malicious towards deep neural networks, remains unknown. In addition, existing attack approaches towards point cloud classifiers cannot be applied to the completion models due to different output forms and attack purposes. In order to evaluate the robustness of the completion models, we propose PointCA, the first adversarial attack against 3D point cloud completion models. PointCA can generate adversarial point clouds that maintain high similarity with the original ones, while being completed as another object with totally different semantic information. Specifically, we minimize the representation discrepancy between the adversarial example and the target point set to jointly explore the adversarial point clouds in the geometry space and the feature space. Furthermore, to launch a stealthier attack, we innovatively employ the neighbourhood density information to tailor the perturbation constraint, leading to geometry-aware and distribution-adaptive modifications for each point. Extensive experiments against different premier point cloud completion networks show that PointCA can cause a performance degradation from 77.9% to 16.7%, with the structure chamfer distance kept below 0.01. We conclude that existing completion models are severely vulnerable to adversarial examples, and state-of-the-art defenses for point cloud classification will be partially invalid when applied to incomplete and uneven point cloud data.
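A sketch of the neighbourhood-density idea under an assumed form (the paper's exact constraint is not reproduced here): the per-point perturbation budget is scaled by the mean distance to the k nearest neighbours, so dense regions receive tighter bounds and sparse regions looser ones.

```python
import numpy as np

def density_budget(points, k=8, base_eps=0.01):
    """Per-point perturbation budget scaled by local neighbourhood
    density: sparser regions tolerate larger shifts before becoming
    conspicuous. A sketch of the idea, not the paper's exact rule."""
    d = np.linalg.norm(points[:, None] - points[None], axis=-1)
    knn_dist = np.sort(d, axis=1)[:, 1:k + 1].mean(1)   # mean kNN distance
    return base_eps * knn_dist / knn_dist.mean()        # (N,) budgets
```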
Although 3D point cloud classification has recently been widely deployed in different application scenarios, it remains highly vulnerable to adversarial attacks, which raises the importance of robust training for 3D models. Based on our analysis of the behavior of existing adversarial attacks, most adversarial perturbations are found in the mid- and high-frequency components of the input data. Therefore, by suppressing the high-frequency content during the training phase, the models' robustness against adversarial examples is improved. Experiments show that the proposed defense method reduces the success rate of six attacks on PointNet, PointNet++, and DGCNN models. In particular, compared with state-of-the-art methods, the average classification accuracy improves by 3.8% under both the Drop100 and Drop200 attacks. The method also improves model accuracy on the original dataset compared with other available methods.
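A hedged sketch of one way to suppress high-frequency content on a point cloud, using iterated graph Laplacian (kNN-centroid) smoothing as the low-pass filter; the authors' exact frequency decomposition may differ.

```python
import numpy as np

def lowpass_point_cloud(points, k=10, lam=0.5, n_iters=3):
    """Suppress high-frequency content by graph Laplacian smoothing:
    each point moves toward the centroid of its k nearest neighbours.
    A training-time filter in the spirit of the abstract."""
    for _ in range(n_iters):
        d = np.linalg.norm(points[:, None] - points[None], axis=-1)
        knn = np.argsort(d, axis=1)[:, 1:k + 1]
        centroids = points[knn].mean(axis=1)
        points = (1 - lam) * points + lam * centroids
    return points
```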
Exploiting 3D point cloud data has become an urgent need for deploying artificial intelligence in many areas such as face recognition and autonomous driving. However, deep learning on 3D point clouds remains vulnerable to adversarial attacks, e.g., iterative attacks, point transformation attacks, and generative attacks. These attacks must restrict the perturbation of adversarial examples within a strict bound, leading to unrealistic adversarial 3D point clouds. In this paper, we propose an Adversarial Graph-Convolutional Generative Adversarial Network (AdvGCGAN) to generate visually realistic adversarial 3D point clouds from scratch. Specifically, we use a graph-convolutional generator and a discriminator with an auxiliary classifier to generate realistic point clouds, learning the latent distribution from real 3D data. An unrestricted adversarial attack loss is incorporated into the special adversarial training of the GAN, enabling the generator to produce adversarial examples that fool the target network. Compared with existing state-of-the-art attack methods, the experimental results demonstrate the effectiveness of our unrestricted adversarial attack, with a higher attack success rate and better visual quality. In addition, the proposed AdvGCGAN achieves better performance against defense models and better transferability than existing attack methods, with strong camouflage.
3D face recognition has long been considered secure because of its resistance to current physical adversarial attacks, such as adversarial patches. However, this paper shows that a 3D face recognition system can be easily attacked, enabling both dodging and impersonation attacks. We are the first to propose a physically realizable attack against 3D face recognition systems, called the structured light imaging attack (SLIA), which exploits a weakness of structured-light-based 3D scanning devices. SLIA uses the projector in the structured light imaging system to create adversarial illumination that contaminates the reconstructed point cloud. First, we propose a 3D transform-invariant loss function (3D-TI) to generate adversarial perturbations that are more robust to head movements. We then integrate the 3D imaging process into the attack optimization, minimizing the total pixel shift of the fringe patterns. We realize dodging and impersonation attacks on a real-world 3D face recognition system. Compared with chamfer- and chamfer+kNN-based methods, our approach requires fewer modifications to the projected patterns and achieves average attack success rates of 0.47 (impersonation) and 0.89 (dodging). This paper reveals the insecurity of current structured light imaging technology and sheds light on designing secure 3D face recognition authentication systems.
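A minimal sketch of a 3D transform-invariant objective, assuming head motion is approximated by small random rotations, and with `model` and `target` as hypothetical placeholders for the victim recognizer and the attack label: the attack loss is averaged over sampled transforms so the perturbation survives pose changes.

```python
import math
import random
import torch

def random_rotation(max_deg=15.0):
    """Small random rotation about the y axis, a stand-in for head motion."""
    a = math.radians(random.uniform(-max_deg, max_deg))
    c, s = math.cos(a), math.sin(a)
    return torch.tensor([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

def ti_loss(adv_points, model, target, n_transforms=8):
    """Transform-invariant attack loss: average the attack objective
    over sampled rigid transforms. `model(points) -> logits` and
    `target` are placeholders for the victim network and label."""
    loss = 0.0
    for _ in range(n_transforms):
        r = random_rotation()
        logits = model(adv_points @ r.T)
        loss = loss + torch.nn.functional.cross_entropy(logits, target)
    return loss / n_transforms
```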
Deep 3D point cloud models are sensitive to adversarial attacks, which poses threats to safety-critical applications such as autonomous driving. Robust training and defend-by-denoise are typical strategies for defending adversarial perturbations, including adversarial training and statistical filtering, respectively. However, they either induce massive computational overhead or rely heavily upon specified noise priors, limiting generalized robustness against attacks of all kinds. This paper introduces a new defense mechanism based on denoising diffusion models that can adaptively remove diverse noises with a tailored intensity estimator. Specifically, we first estimate adversarial distortions by calculating the distance of the points to their neighborhood best-fit plane. Depending on the distortion degree, we choose specific diffusion time steps for the input point cloud and perform the forward diffusion to disrupt potential adversarial shifts. Then we conduct the reverse denoising process to restore the disrupted point cloud back to a clean distribution. This approach enables effective defense against adaptive attacks with varying noise budgets, achieving accentuated robustness of existing 3D deep recognition models.
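A sketch of the distortion (intensity) estimator described above, assuming the best-fit plane of each point's kNN neighbourhood is obtained by PCA; the mean point-to-plane distance can then be mapped to a diffusion time step.

```python
import numpy as np

def distortion_estimate(points, k=12):
    """Mean distance of each point to the best-fit plane of its kNN
    neighbourhood -- a proxy for adversarial distortion used to choose
    the diffusion time step (sketch of the intensity-estimator idea)."""
    d = np.linalg.norm(points[:, None] - points[None], axis=-1)
    knn = np.argsort(d, axis=1)[:, 1:k + 1]
    dists = np.empty(len(points))
    for i, nbr in enumerate(knn):
        nb = points[nbr]
        centered = nb - nb.mean(0)
        _, _, vt = np.linalg.svd(centered)
        normal = vt[-1]                        # plane normal via PCA
        dists[i] = abs((points[i] - nb.mean(0)) @ normal)
    return dists.mean()
```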
Modern self-driving perception systems have been shown to improve when processing complementary inputs, such as LiDAR together with images. In isolation, 2D images have been found to be extremely vulnerable to adversarial attacks. However, there is limited research on the adversarial robustness of multi-modal models that fuse LiDAR features with image features. Moreover, existing works do not consider physically realizable perturbations that are consistent across the input modalities. In this paper, we demonstrate the practical susceptibility of multi-sensor detection by placing an adversarial object on top of a host vehicle. We focus on physically realizable, input-agnostic attacks, since they are feasible to execute in practice, and show that a single universal adversary can hide different host vehicles from state-of-the-art multi-modal detectors. Our experiments demonstrate that successful attacks are primarily caused by easily corrupted image features. Furthermore, we find that in modern sensor fusion methods that project image features into 3D, adversarial attacks can exploit the projection process to generate false positives in distant regions in 3D. Toward more robust multi-modal perception systems, we show that adversarial training with feature denoising can significantly boost robustness to such attacks. However, we find that standard adversarial defenses still struggle to prevent false positives caused by inaccurate associations between 3D LiDAR points and 2D pixels.
3D point clouds are becoming a critical data representation in many real-world applications such as autonomous driving, robotics, and medical imaging. Although the success of deep learning has further accelerated the adoption of 3D point clouds in the physical world, deep learning is notorious for its vulnerability to adversarial attacks. In this work, we first identify that the state-of-the-art empirical defense, adversarial training, has a major limitation when applied to 3D point cloud models due to gradient obfuscation. We further propose PointDP, a purification strategy that leverages diffusion models to defend against 3D adversarial attacks. We extensively evaluate PointDP on six representative 3D point cloud architectures and leverage more than ten strong and adaptive attacks to demonstrate its lower-bound robustness. Our evaluation shows that PointDP achieves significantly better robustness than state-of-the-art purification methods under strong attacks. Results on certified robustness from randomized smoothing combined with PointDP will be included in the near future.
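A hedged sketch of diffusion purification in the spirit of PointDP, assuming a DDPM-style trained point-cloud denoiser `denoiser(x, t)` that predicts the injected noise (a hypothetical placeholder) and a linear beta schedule: forward-diffuse the input to a chosen step, then run the reverse process back to a clean sample.

```python
import torch

def purify(points, denoiser, t_star=30, betas=None):
    """Diffusion purification sketch: forward-diffuse the (possibly
    adversarial) cloud to step t_star, then run the learned reverse
    process back to t=0. `denoiser(x, t) -> predicted noise` stands in
    for a trained point-cloud diffusion model."""
    if betas is None:
        betas = torch.linspace(1e-4, 0.02, 100)
    alphas = 1.0 - betas
    abar = torch.cumprod(alphas, dim=0)
    # Forward diffusion: drown out potential adversarial shifts.
    x = abar[t_star].sqrt() * points \
        + (1 - abar[t_star]).sqrt() * torch.randn_like(points)
    # Reverse denoising back toward the clean distribution (DDPM update).
    for t in range(t_star, 0, -1):
        eps = denoiser(x, t)
        x = (x - betas[t] / (1 - abar[t]).sqrt() * eps) / alphas[t].sqrt()
        if t > 1:
            x = x + betas[t].sqrt() * torch.randn_like(x)
    return x
```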
Deep learning-based 3D object detectors have made significant progress in recent years and have been deployed in a wide range of applications. It is crucial to understand the robustness of detectors against adversarial attacks when employing detectors in security-critical applications. In this paper, we make the first attempt to conduct a thorough evaluation and analysis of the robustness of 3D detectors under adversarial attacks. Specifically, we first extend three kinds of adversarial attacks to the 3D object detection task to benchmark the robustness of state-of-the-art 3D object detectors against attacks on the KITTI and Waymo datasets, followed by an analysis of the relationship between robustness and properties of detectors. Then, we explore the transferability of cross-model, cross-task, and cross-data attacks. We finally conduct comprehensive experiments on defenses for 3D detectors, demonstrating that simple transformations like flipping are of little help in improving robustness when the transformation strategy imposed on input point cloud data is exposed to attackers. Our findings will facilitate investigations into understanding and defending against adversarial attacks on 3D object detectors to advance this field.
Notwithstanding the prominent performance achieved in various applications, point cloud recognition models often suffer from natural corruptions and adversarial perturbations. In this paper, we delve into the general robustness of point cloud recognition models and propose Point-Cloud Contrastive Adversarial Training (PointCAT). The main intuition behind PointCAT is to encourage the target recognition model to narrow the decision gap between clean and corrupted point clouds. Specifically, we leverage a supervised contrastive loss to promote the alignment and uniformity of the hypersphere features extracted by the recognition model, and design a pair of centralizing losses with dynamic prototype guidance to prevent these features from deviating from their category clusters. To provide more challenging corrupted point clouds, we adversarially train a noise generator along with the recognition model from scratch, instead of using gradient-based attacks as the inner loop as in previous adversarial training methods. Comprehensive experiments show that the proposed PointCAT outperforms baseline methods and significantly boosts the robustness of different point cloud recognition models under a variety of corruptions, including isotropic point noise, LiDAR-simulated noise, random point dropping, and adversarial perturbations.
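A minimal sketch of the supervised contrastive component mentioned above (the centralizing losses and the trained noise generator are omitted), in standard SupCon form over L2-normalized features:

```python
import torch
import torch.nn.functional as F

def sup_con_loss(features, labels, tau=0.1):
    """Supervised contrastive loss over L2-normalized features: pull
    same-class embeddings together on the hypersphere, push others
    apart (the alignment/uniformity objective in the abstract)."""
    z = F.normalize(features, dim=1)
    sim = z @ z.T / tau
    mask = (labels[:, None] == labels[None, :]).float()
    mask.fill_diagonal_(0)                        # exclude self-pairs
    logits = sim - torch.eye(len(z)) * 1e9        # mask self in denominator
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    pos_per_row = mask.sum(1).clamp(min=1)
    return -(mask * log_prob).sum(1).div(pos_per_row).mean()
```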
Over the past decade, deep learning has dramatically changed the traditional hand-crafted-feature paradigm with its strong feature learning capability, greatly improving performance on conventional tasks. However, deep neural networks have recently been shown to be vulnerable to adversarial examples: malicious samples crafted with small, carefully designed noise that mislead DNNs into making wrong decisions while remaining imperceptible to humans. Adversarial examples can be divided into digital adversarial attacks and physical adversarial attacks. Digital adversarial attacks are mostly conducted in laboratory settings and focus on improving the performance of adversarial attack algorithms. In contrast, physical adversarial attacks focus on attacking DNN systems deployed in the physical world, which is a more challenging task owing to the complex physical environment (i.e., brightness, occlusion, and so on). Although the difference between digital and physical adversarial examples may be small, physical adversarial examples have specific designs to overcome the effects of complex physical environments. In this paper, we review the development of physical adversarial attacks on DNN-based computer vision tasks, including image recognition, object detection, and semantic segmentation. For completeness of the algorithm evolution, we also briefly introduce works that do not involve physical adversarial attacks. We first propose a taxonomy to summarize current physical adversarial attacks. We then discuss the advantages and disadvantages of existing physical adversarial attacks, focusing on the techniques used to maintain adversarial effectiveness when attacks are applied in physical environments. Finally, we point out open problems in current physical adversarial attacks and suggest promising research directions.
Although Deep Neural Networks (DNNs) have achieved impressive results in computer vision, their exposed vulnerability to adversarial attacks remains a serious concern. A series of works has shown that by adding elaborate perturbations to images, DNNs can suffer catastrophic degradation in performance. This phenomenon exists not only in the digital space but also in the physical space. Therefore, estimating the security of these DNN-based systems is critical for safely deploying them in the real world, especially for security-critical applications, e.g., autonomous cars, video surveillance, and medical diagnosis. In this paper, we focus on physical adversarial attacks and provide a comprehensive survey of over 150 existing papers. We first clarify the concept of the physical adversarial attack and analyze its characteristics. Then, we define the adversarial medium, essential to performing attacks in the physical world. Next, we present physical adversarial attack methods in task order: classification, detection, and re-identification, and introduce their performance in solving the trilemma of effectiveness, stealthiness, and robustness. In the end, we discuss the current challenges and potential future directions.
Deep learning has substantially boosted the performance of monocular depth estimation (MDE), a critical component in fully vision-based autonomous driving (AD) systems such as those of Tesla and Toyota. In this work, we develop an attack against learning-based MDE. In particular, we use an optimization-based method to systematically generate stealthy physical-object-oriented adversarial patches to attack depth estimation. We balance the stealth and effectiveness of the attack with object-oriented adversarial design, sensitive region localization, and natural-style camouflage. Using real-world driving scenarios, we evaluate our attacks on concurrent MDE models and a representative downstream task for AD (i.e., 3D object detection). Experimental results show that our method can generate stealthy, effective, and robust adversarial patches for different target objects and models, achieving more than 6 meters of mean depth estimation error and a 93% attack success rate (ASR) in object detection with a patch covering 1/9 of the vehicle's rear area. Field tests on three different driving routes with a real vehicle indicate that we cause over 6 meters of mean depth estimation error in continuous video frames and reduce the object detection rate from 90.70% to 5.16%.
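A sketch of the optimization loop, assuming a differentiable `render` that pastes the patch onto the target object and a victim `depth_model` (both hypothetical placeholders); the objective pushes the estimated depth of the patched scene upward so the object appears farther away.

```python
import torch

def optimize_patch(patch, render, depth_model, n_steps=200, lr=0.01):
    """Optimization-based patch attack sketch: maximize the predicted
    depth of the patched scene. `render(patch) -> image` and
    `depth_model(image) -> depth map` stand in for the paper's
    differentiable rendering and victim-model pipeline."""
    patch = patch.clone().requires_grad_(True)
    opt = torch.optim.Adam([patch], lr=lr)
    for _ in range(n_steps):
        image = render(patch)
        depth = depth_model(image)
        loss = -depth.mean()           # push estimated depth far away
        opt.zero_grad()
        loss.backward()
        opt.step()
        patch.data.clamp_(0, 1)        # keep the patch a valid image
    return patch.detach()
```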
For saving cost, many deep neural networks (DNNs) are trained on third-party datasets downloaded from the internet, which enables attackers to implant backdoors into DNNs. In the 2D domain, the inherent structures of different image formats are similar, so a backdoor attack designed for one image format will suit others. However, in the 3D world, there is a huge disparity among different 3D data structures. As a result, a backdoor pattern designed for one 3D data structure will be ineffective for other data structures of the same 3D scene. Therefore, this paper designs a uniform backdoor pattern, NRBdoor (Noisy Rotation Backdoor), which is able to adapt to heterogeneous 3D data structures. Specifically, we start from the unit rotation and then search for the optimal pattern through a noise generation and selection process. The proposed NRBdoor is natural and imperceptible, since rotation is a common operation that usually carries noise, due both to the mismatch between pairs of points and to sensor calibration error in real-world 3D scenes. Extensive experiments on 3D meshes and point clouds show that the proposed NRBdoor achieves state-of-the-art performance with negligible shape variation.
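A minimal sketch of a noisy-rotation trigger, assuming a fixed small rotation plus additive noise (the paper instead searches for the optimal pattern); the same matrix applies to mesh vertices and point clouds alike, which is what makes the pattern uniform across 3D data structures.

```python
import numpy as np

def noisy_rotation_trigger(points, deg=3.0, noise_std=0.01, seed=0):
    """Apply a small rotation plus rotation-like noise as a backdoor
    trigger -- a sketch of the NRBdoor idea. `points` may be point
    cloud coordinates or mesh vertices, both shaped (N, 3)."""
    rng = np.random.default_rng(seed)
    a = np.radians(deg)
    rot = np.array([[np.cos(a), -np.sin(a), 0.0],
                    [np.sin(a),  np.cos(a), 0.0],
                    [0.0,        0.0,       1.0]])
    rot = rot + rng.normal(0, noise_std, (3, 3))   # noisy rotation pattern
    return points @ rot.T
```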
Recent studies reveal that deep neural network (DNN) based object detectors are vulnerable to adversarial attacks in the form of perturbations added to the images, leading to wrong detector outputs. Most existing works focus on generating perturbed images, also called adversarial examples, to fool object detectors. Though the generated adversarial examples can retain a certain naturalness, most of them can still be easily observed by human eyes, which limits their further application in the real world. To alleviate this problem, we propose a differential evolution based dual adversarial camouflage (DE_DAC) method, composed of two stages to fool human eyes and object detectors simultaneously. Specifically, we try to obtain a camouflage texture that can be rendered over the surface of the object. In the first stage, we optimize the global texture to minimize the discrepancy between the rendered object and the scene images, making it difficult for human eyes to distinguish. In the second stage, we design three loss functions to optimize the local texture, making object detectors ineffective. In addition, we introduce the differential evolution algorithm to search for the near-optimal areas of the object to attack, improving the adversarial performance under certain attack-area limitations. Besides, we also study the performance of adaptive DE_DAC, which can be adapted to the environment. Experiments show that our proposed method achieves a good trade-off between fooling human eyes and fooling object detectors under multiple specific scenes and objects.
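A sketch of the differential-evolution search over attack-region parameters, using SciPy's stock optimizer; `score_fn` is a hypothetical black-box objective returning the detector's confidence for a candidate region, which the search minimizes.

```python
import numpy as np
from scipy.optimize import differential_evolution

def search_attack_region(score_fn, bounds):
    """Differential-evolution search over attack-region parameters,
    e.g. a texture patch's centre and size. `bounds` is a list of
    (low, high) pairs, one per parameter."""
    result = differential_evolution(score_fn, bounds,
                                    maxiter=50, popsize=15, seed=0)
    return result.x, result.fun

# Toy usage: three parameters (x, y, radius) in normalized coordinates,
# with a dummy objective standing in for the detector's confidence.
best, score = search_attack_region(lambda p: float(np.sum(p ** 2)),
                                   [(0, 1)] * 3)
```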
To assess the vulnerability of deep learning in the physical world, recent works introduce adversarial patches and apply them to different tasks. In this paper, we propose another kind of adversarial patch: the Meaningful Adversarial Sticker, a physically feasible and stealthy attack method that uses real stickers existing in our daily life. Unlike previous adversarial patches built by designing perturbations, our method manipulates the sticker's pasting position and rotation angle on the objects to perform physical attacks. Because the position and rotation angle are less affected by printing loss and color distortion, adversarial stickers can maintain good attack performance in the physical world. Besides, to make adversarial stickers more practical in real scenes, we conduct attacks in the black-box setting with limited information, rather than in the white-box setting with full details of the threat model. To effectively solve for the sticker's parameters, we design the Region based Heuristic Differential Evolution Algorithm, which utilizes the newly found regional aggregation of effective solutions and an adaptive adjustment strategy for the evaluation criteria. Our method is comprehensively verified on face recognition and then extended to image retrieval and traffic sign recognition. Extensive experiments show the proposed method is effective and efficient in complex physical conditions and generalizes well across tasks.
Despite the recent success of self-supervised contrastive learning models for 3D point cloud representation, the adversarial robustness of such pre-trained models has raised concerns. Adversarial contrastive learning (ACL) is considered an effective way to improve the robustness of pre-trained models. In contrastive learning, the projector is regarded as an effective component for removing unnecessary feature information during contrastive pre-training, and most ACL works use the contrastive loss on projected feature representations to generate adversarial examples during pre-training, while the "unprojected" feature representations are used to generate adversarial inputs at inference time. Because of the distribution gap between projected and "unprojected" features, these models are limited in obtaining reliable feature representations for downstream tasks. We introduce a new method that uses the "unprojected" feature representations within the contrastive learning framework, leveraging a virtual adversarial loss to generate high-quality 3D adversarial examples for adversarial training. We present a robustness-aware loss function to adversarially train the self-supervised contrastive learning framework. Furthermore, we find that selecting points with a high Difference of Normals (DoN) as an additional input for adversarial self-supervised contrastive learning can significantly improve the adversarial robustness of the pre-trained model. We validate our method on downstream tasks, including 3D classification and 3D segmentation, using multiple datasets. It obtains robust accuracy comparable to state-of-the-art adversarial contrastive learning methods.
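A sketch of the Difference of Normals (DoN) operator used to select salient points, assuming PCA normals at a small and a large kNN support; points with a large normal difference are geometrically salient and can serve as the additional input described above.

```python
import numpy as np

def difference_of_normals(points, k_small=8, k_large=32):
    """Difference of Normals (DoN): estimate per-point normals at two
    neighbourhood sizes and return the norm of their difference as a
    per-point saliency score."""
    def normals(k):
        d = np.linalg.norm(points[:, None] - points[None], axis=-1)
        knn = np.argsort(d, axis=1)[:, 1:k + 1]
        out = np.empty_like(points)
        for i, nbr in enumerate(knn):
            nb = points[nbr] - points[nbr].mean(0)
            out[i] = np.linalg.svd(nb)[2][-1]    # least-variance direction
        return out

    n1, n2 = normals(k_small), normals(k_large)
    # Align signs before differencing (PCA normals have sign ambiguity).
    sign = np.where((n1 * n2).sum(1, keepdims=True) < 0, -1.0, 1.0)
    return np.linalg.norm(n1 - sign * n2, axis=1)    # (N,) saliency scores
```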