Adversarial attacks can force CNN-based models to produce incorrect outputs by subtly manipulating inputs in ways imperceptible to humans. Exploring such perturbations helps us better understand the vulnerability of neural networks and improves the robustness of deep learning against various adversaries. Although a large body of research has focused on the robustness of image, audio, and NLP models, works on adversarial examples for visual object tracking, especially in a black-box manner, are still lacking. In this paper, we propose a novel adversarial attack method to generate noise for single object tracking under the black-box setting, where the perturbation is only added to the initial frame of the tracking sequence and is therefore hard to notice from the perspective of the whole video clip. Specifically, we divide our algorithm into three components and exploit reinforcement learning to locate the important frame patches precisely while reducing unnecessary computational query overhead. Compared with existing techniques, our method requires fewer queries on the initialization frame of a video to achieve competitive or even better attack performance. We test our algorithm on both long-term and short-term datasets, including OTB100, DOCT2018, UAV123, and LaSOT. Extensive experiments demonstrate the effectiveness of our method on three mainstream types of trackers: discriminative, Siamese-based, and reinforcement-learning-based trackers.
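As a rough illustration of the query loop such an attack relies on, the sketch below uses a REINFORCE-style policy over a patch grid of the initial frame and treats the drop in tracking overlap returned by a black-box tracker as the reward. The `query_tracker` stub and all hyper-parameters are assumptions for illustration, not the paper's implementation.

```python
# Illustrative sketch (not the authors' code): a REINFORCE-style patch selector
# that perturbs only the initial frame of a tracking sequence and uses the
# drop in tracking overlap returned by a black-box tracker as its reward.
import numpy as np

def query_tracker(frame0, video, gt_boxes):
    """Placeholder: run the black-box tracker initialized on `frame0`
    over `video` and return the mean IoU with the ground-truth boxes."""
    raise NotImplementedError

def patch_attack_initial_frame(frame0, video, gt_boxes, grid=8, eps=8/255,
                               steps=200, lr=0.5, seed=0):
    """frame0: (H, W, 3) float array in [0, 1]."""
    rng = np.random.default_rng(seed)
    H, W, _ = frame0.shape
    ph, pw = H // grid, W // grid
    logits = np.zeros(grid * grid)           # patch-selection policy (softmax)
    clean_iou = query_tracker(frame0, video, gt_boxes)
    best = (clean_iou, frame0)

    for _ in range(steps):
        probs = np.exp(logits - logits.max()); probs /= probs.sum()
        k = rng.choice(grid * grid, p=probs)             # sample a patch
        r, c = divmod(k, grid)
        adv = frame0.copy()
        noise = rng.uniform(-eps, eps, (ph, pw, 3))
        adv[r*ph:(r+1)*ph, c*pw:(c+1)*pw] = np.clip(
            adv[r*ph:(r+1)*ph, c*pw:(c+1)*pw] + noise, 0.0, 1.0)
        iou = query_tracker(adv, video, gt_boxes)        # one black-box query
        reward = clean_iou - iou                         # larger drop = better
        logits[k] += lr * reward                         # REINFORCE update
        logits -= lr * reward * probs                    # (e_k - probs) direction
        if iou < best[0]:
            best = (iou, adv)
    return best[1]
```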
Adversarial robustness assessment for video recognition models has raised concerns owing to their wide application in safety-critical tasks. Compared with images, videos have much higher dimensionality, which brings huge computational costs when generating adversarial videos. This is especially serious for query-based black-box attacks, where gradient estimation of the threat model is usually utilized and the high dimensionality leads to a large number of queries. To mitigate this issue, we propose to simultaneously eliminate the temporal and spatial redundancy within the video to achieve an effective and efficient gradient estimation on the reduced searching space, so that the number of queries can be decreased. To implement this idea, we design the novel Adversarial spatial-temporal Focus (AstFocus) attack on videos, which performs attacks on the simultaneously focused key frames and key regions from the inter-frames and intra-frames in the video. The AstFocus attack is based on the cooperative Multi-Agent Reinforcement Learning (MARL) framework. One agent is responsible for selecting key frames, and another agent is responsible for selecting key regions. These two agents are jointly trained by the common rewards received from the black-box threat models to perform a cooperative prediction. By continuously querying, the reduced searching space composed of key frames and key regions becomes increasingly precise, and the overall number of queries becomes smaller than that required on the original video. Extensive experiments on four mainstream video recognition models and three widely used action recognition datasets demonstrate that the proposed AstFocus attack outperforms the SOTA methods, being superior in fooling rate, query number, time, and perturbation magnitude at the same time.
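For intuition, the sketch below shows how restricting an NES-style gradient estimate to a mask of key frames and key regions shrinks the space each query pair explores; the `loss_fn` interface and parameter values are assumptions, not the AstFocus implementation.

```python
# Illustrative sketch (assumed interfaces, not the AstFocus implementation):
# NES-style gradient estimation restricted to a mask of key frames and key
# regions, so random directions are sampled only in the reduced space.
import numpy as np

def nes_gradient_on_focus(video, mask, loss_fn, sigma=0.05, n_samples=20, rng=None):
    """video: (T, H, W, C) float array in [0, 1].
    mask:  (T, H, W, 1) binary array; 1 marks focused key-frame regions.
    loss_fn: black-box query returning a scalar adversarial loss."""
    rng = rng or np.random.default_rng(0)
    grad = np.zeros_like(video)
    for _ in range(n_samples):
        u = rng.standard_normal(video.shape) * mask   # perturb focused entries only
        l_pos = loss_fn(np.clip(video + sigma * u, 0, 1))   # two queries per sample
        l_neg = loss_fn(np.clip(video - sigma * u, 0, 1))
        grad += (l_pos - l_neg) / (2 * sigma) * u           # antithetic NES estimate
    return grad / n_samples
```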
Recent studies have shown that deep neural networks (DNNs) are vulnerable to adversarial patches, which introduce perceptible but localized changes to the input. Nevertheless, existing approaches focus on generating adversarial patches on images, and their counterparts on videos remain under-explored. Compared with images, attacking videos is more challenging because it requires considering not only spatial cues but also temporal cues. To close this gap, we introduce a novel adversarial attack in this paper, the bullet-screen comment (BSC) attack, which attacks video recognition models with BSCs. Specifically, adversarial BSCs are generated through a reinforcement learning (RL) framework, where the environment is set as the target model and the agent plays the role of selecting the position and transparency of each BSC. By continually querying the target model and receiving feedback, the agent gradually adjusts its selection strategy to achieve a high fooling rate with non-overlapping BSCs. Since BSCs can be regarded as a kind of meaningful patch, adding them to a clean video does not affect people's understanding of the video content, nor does it arouse suspicion. We conduct extensive experiments to verify the effectiveness of the proposed method. On both the UCF-101 and HMDB-51 datasets, our BSC attack achieves a fooling rate of about 90% when attacking three mainstream video recognition models, while occluding less than 8% of the area in the video. Our code is available at https://github.com/kay-ck/bsc-attack.
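A minimal sketch of the query step such an RL agent would drive is given below: a bullet-screen comment is alpha-blended onto every frame at an agent-chosen position and transparency, and the drop in the true-class probability serves as the reward. The `render_bsc` and `classify` interfaces are hypothetical placeholders.

```python
# Illustrative sketch (simplified, not the paper's implementation): alpha-blend
# a bullet-screen comment overlay onto every frame at an agent-chosen position
# and transparency, then query the black-box video classifier for a reward.
import numpy as np

def render_bsc(text, height, width):
    """Placeholder: rasterize `text` into an (h, w, 3) RGB patch
    plus an (h, w, 1) binary coverage mask."""
    raise NotImplementedError

def apply_bsc(video, text, x, y, alpha):
    """video: (T, H, W, 3) float array in [0, 1]; (x, y) is the top-left corner."""
    patch, cover = render_bsc(text, 20, 120)      # assumed BSC size
    h, w = patch.shape[:2]
    out = video.copy()
    region = out[:, y:y+h, x:x+w, :]
    out[:, y:y+h, x:x+w, :] = (1 - alpha * cover) * region + alpha * cover * patch
    return np.clip(out, 0, 1)

def bsc_reward(video, label, text, x, y, alpha, classify):
    adv = apply_bsc(video, text, x, y, alpha)
    probs = classify(adv)                          # one black-box query
    return 1.0 - probs[label]                      # higher when the true class drops
```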
To track the target in a video, current visual trackers usually adopt greedy search for target object localization in each frame; that is, the candidate region with the maximum response score is selected as the tracking result of each frame. However, we found that this may not be an optimal choice, especially when encountering challenging tracking scenarios such as heavy occlusion and fast motion. To address this issue, we propose to maintain multiple tracking trajectories and apply a beam search strategy to visual tracking, so that trajectories with fewer accumulated errors can be identified. Accordingly, this paper introduces a novel beam-search-based multi-agent reinforcement learning tracking strategy. It is mainly inspired by the image captioning task, which takes an image as input and generates multiple descriptions with a beam search algorithm. We therefore formulate tracking as a sample selection problem solved by multiple parallel decision-making processes, each of which aims to select one sample as the tracking result of each frame. Each maintained trajectory is associated with an agent that performs decision-making and determines which actions should be taken to update the related information. After all frames are processed, we select the trajectory with the maximum accumulated score as the tracking result. Extensive experiments on seven popular tracking benchmark datasets confirm the effectiveness of the proposed algorithm.
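The sketch below illustrates the underlying beam-search idea with assumed `propose` and `score` callables: several trajectories are kept per frame and the one with the maximum accumulated score is returned, rather than greedily committing to the single best candidate in every frame.

```python
# Illustrative sketch (assumed scoring interfaces): beam search over per-frame
# candidate boxes, keeping the top-k trajectories by accumulated score.
def beam_search_tracking(frames, init_box, propose, score, beam_width=3):
    """propose(frame, prev_box) -> candidate boxes around the previous result;
    score(frame, box) -> response score of a candidate box."""
    beams = [([init_box], 0.0)]               # (trajectory, accumulated score)
    for frame in frames:
        expanded = []
        for traj, acc in beams:
            for box in propose(frame, traj[-1]):
                expanded.append((traj + [box], acc + score(frame, box)))
        # keep the top `beam_width` trajectories instead of greedily keeping one
        beams = sorted(expanded, key=lambda b: b[1], reverse=True)[:beam_width]
    return max(beams, key=lambda b: b[1])[0]  # trajectory with max accumulated score
```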
Adversarial patch is an important form of real-world adversarial attack that brings serious risks to the robustness of deep neural networks. Previous methods generate adversarial patches by either optimizing their perturbation values while fixing the pasting position or manipulating the position while fixing the patch's content. This reveals that the positions and perturbations are both important to the adversarial attack. For that, in this paper, we propose a novel method to simultaneously optimize the position and perturbation for an adversarial patch, and thus obtain a high attack success rate in the black-box setting. Technically, we regard the patch's position and the pre-designed hyper-parameters that determine the patch's perturbations as the variables, and utilize the reinforcement learning framework to simultaneously solve for the optimal solution based on the rewards obtained from the target model with a small number of queries. Extensive experiments are conducted on the Face Recognition (FR) task, and results on four representative FR models show that our method can significantly improve the attack success rate and query efficiency. Besides, experiments on the commercial FR service and physical environments confirm its practical application value. We also extend our method to the traffic sign recognition task to verify its generalization ability.
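As a rough sketch of this kind of query-based search (not the paper's algorithm), the snippet below samples a patch position and a perturbation hyper-parameter from a Gaussian policy, queries a hypothetical target model, and updates the policy mean with a REINFORCE step.

```python
# Illustrative sketch (hypothetical interfaces): jointly sample a patch position
# and a perturbation hyper-parameter from a Gaussian policy, query the target
# face-recognition model, and update the policy with the obtained reward.
import numpy as np

def rl_patch_search(image, patch_fn, query_score, steps=100, lr=0.1, sigma=0.1, seed=0):
    """patch_fn(image, x, y, theta) -> image with the patch applied, where
    (x, y) in [0, 1]^2 is the position and theta parameterizes the perturbation.
    query_score(image) -> target-model score to be minimized (e.g. face similarity)."""
    rng = np.random.default_rng(seed)
    mu = np.array([0.5, 0.5, 0.5])           # policy mean over (x, y, theta)
    baseline = query_score(image)
    for _ in range(steps):
        z = np.clip(mu + sigma * rng.standard_normal(3), 0, 1)
        reward = baseline - query_score(patch_fn(image, *z))   # one query
        mu += lr * reward * (z - mu) / sigma**2                # REINFORCE for a Gaussian mean
        mu = np.clip(mu, 0, 1)
    return mu
```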
Benefiting from the development of deep neural networks, multi-object tracking (MOT) has achieved remarkable progress. Currently, real-time joint-detection-and-tracking (JDT) based MOT trackers have received increasing attention and derived many excellent models. However, the robustness of JDT trackers has rarely been studied, and attacking an MOT system is challenging because its mature association algorithms are designed to be robust against errors during tracking. In this work, we analyze the weaknesses of JDT trackers and propose a novel adversarial attack method, called Tracklet-Switch (TraSw), against the complete tracking pipeline of MOT. Specifically, a push-pull loss and a center leaping optimization are designed to generate adversarial examples for both the re-ID feature and object detection. TraSw can fool the tracker into failing to track the target in subsequent frames by attacking very few frames. We evaluate our method on advanced deep trackers (i.e., FairMOT, JDE, and ByteTrack) using the MOT-Challenge datasets (i.e., 2DMOT15, MOT17, and MOT20). Experiments show that TraSw can achieve a high success rate of over 95% by attacking only five frames on average for the single-target attack, and a comparably high success rate of over 80% for the multi-target attack. The code is available at https://github.com/derryhub/fairmot-attack.
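The snippet below gives one assumed form of such a push-pull objective on re-ID embeddings, for intuition only: it pulls the attacked detection's feature toward a neighboring tracklet while pushing it away from its own tracklet, which is the kind of pressure that induces an identity switch during association.

```python
# Illustrative sketch (an assumed form, not the exact TraSw objective): a
# push-pull loss on L2-normalized re-ID embeddings.
import numpy as np

def push_pull_loss(feat_adv, feat_own_tracklet, feat_neighbor_tracklet, margin=0.3):
    """All inputs are L2-normalized embedding vectors of the same length."""
    d_own = 1.0 - feat_adv @ feat_own_tracklet        # cosine distance to own tracklet
    d_nbr = 1.0 - feat_adv @ feat_neighbor_tracklet   # cosine distance to neighbor
    # minimize: small distance to the neighbor (pull) and large distance to the
    # own tracklet (push), with a margin to stop once the switch is induced
    return max(0.0, margin + d_nbr - d_own)
```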
Over the past decade, deep learning has dramatically changed the traditional hand-crafted feature pipeline with its strong feature learning capability, greatly improving performance on conventional tasks. However, deep neural networks have recently been shown to be vulnerable to adversarial examples, malicious samples crafted with small, well-designed noise that mislead DNNs into making wrong decisions while remaining imperceptible to humans. Adversarial examples can be divided into digital adversarial attacks and physical adversarial attacks. Digital adversarial attacks are mostly conducted in laboratory environments and focus on improving the performance of adversarial attack algorithms. In contrast, physical adversarial attacks focus on attacking DNN-based systems deployed in the physical world, which is a more challenging task due to the complex physical environment (i.e., brightness, occlusion, and so on). Although the difference between digital and physical adversarial examples is small, physical adversarial examples require specific designs to overcome the effects of the complex physical environment. In this paper, we review the development of physical adversarial attacks on DNN-based computer vision tasks, including image recognition, object detection, and semantic segmentation. For the completeness of the algorithm evolution, we also briefly introduce works that do not involve physical adversarial attacks. We first propose a taxonomy to summarize current physical adversarial attacks. Then we discuss the advantages and disadvantages of existing physical adversarial attacks, focusing on the techniques used to maintain adversarial effectiveness when applied in physical environments. Finally, we point out the problems of current physical adversarial attacks that remain to be solved and provide promising research directions.
The deep neural network (DNN) models for object detection using camera images are widely adopted in autonomous vehicles. However, DNN models are shown to be susceptible to adversarial image perturbations. In the existing methods of generating the adversarial image perturbations, optimizations take each incoming image frame as the decision variable to generate an image perturbation. Therefore, given a new image, the typically computationally expensive optimization needs to start over, as there is no learning between the independent optimizations. Very few approaches have been developed for attacking online image streams while considering the underlying physical dynamics of autonomous vehicles, their mission, and the environment. We propose a multi-level stochastic optimization framework that monitors an attacker's capability of generating the adversarial perturbations. Based on this capability level, a binary decision attack/not attack is introduced to enhance the effectiveness of the attacker. We evaluate our proposed multi-level image attack framework using simulations for vision-guided autonomous vehicles and actual tests with a small indoor drone in an office environment. The results show our method's capability to generate the image attack in real time while monitoring when the attacker is proficient given state estimates.
Video classification systems are vulnerable to adversarial attacks, which can create severe security problems in video verification. Current black-box attacks need a large number of queries to succeed, resulting in high computational overhead in the process of attack. On the other hand, attacks with restricted perturbations are ineffective against defenses such as denoising or adversarial training. In this paper, we focus on unrestricted perturbations and propose StyleFool, a black-box video adversarial attack via style transfer to fool the video classification system. StyleFool first utilizes color theme proximity to select the best style image, which helps avoid unnatural details in the stylized videos. Meanwhile, the target class confidence is additionally considered in targeted attacks to influence the output distribution of the classifier by moving the stylized video closer to or even across the decision boundary. A gradient-free method is then employed to further optimize the adversarial perturbations. We carry out extensive experiments to evaluate StyleFool on two standard datasets, UCF-101 and HMDB-51. The experimental results demonstrate that StyleFool outperforms the state-of-the-art adversarial attacks in terms of both the number of queries and the robustness against existing defenses. Moreover, 50% of the stylized videos in untargeted attacks do not need any query since they can already fool the video classification model. Furthermore, we evaluate the indistinguishability through a user study to show that the adversarial samples of StyleFool look imperceptible to human eyes, despite unrestricted perturbations.
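One plausible reading of "color theme proximity" is sketched below: extract a small dominant-color palette with k-means and pick the candidate style image whose palette is closest to the video's mean frame. This is an assumption for illustration, not StyleFool's exact criterion.

```python
# Illustrative sketch (an assumed realization of "color theme proximity"):
# extract each image's dominant-color theme with k-means and pick the candidate
# style image whose theme is closest to that of the video's average frame.
import numpy as np

def color_theme(image, k=5, iters=10, seed=0):
    """image: (H, W, 3) float array in [0, 1]. Returns k sorted theme colors."""
    rng = np.random.default_rng(seed)
    pixels = image.reshape(-1, 3)
    centers = pixels[rng.choice(len(pixels), k, replace=False)]
    for _ in range(iters):                               # plain k-means
        d = ((pixels[:, None, :] - centers[None]) ** 2).sum(-1)
        labels = d.argmin(1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = pixels[labels == j].mean(0)
    return centers[np.argsort(centers.sum(1))]           # sort for comparability

def select_style_image(video, style_candidates):
    """video: (T, H, W, 3); style_candidates: list of (H, W, 3) images."""
    target = color_theme(video.mean(axis=0))             # theme of the mean frame
    dists = [np.linalg.norm(color_theme(s) - target) for s in style_candidates]
    return int(np.argmin(dists))
```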
Although Deep Neural Networks (DNNs) have achieved impressive results in computer vision, their exposed vulnerability to adversarial attacks remains a serious concern. A series of works has shown that by adding elaborate perturbations to images, DNNs could have catastrophic degradation in performance metrics. And this phenomenon does not only exist in the digital space but also in the physical space. Therefore, estimating the security of these DNNs-based systems is critical for safely deploying them in the real world, especially for security-critical applications, e.g., autonomous cars, video surveillance, and medical diagnosis. In this paper, we focus on physical adversarial attacks and provide a comprehensive survey of over 150 existing papers. We first clarify the concept of the physical adversarial attack and analyze its characteristics. Then, we define the adversarial medium, essential to perform attacks in the physical world. Next, we present the physical adversarial attack methods in task order: classification, detection, and re-identification, and introduce their performance in solving the trilemma: effectiveness, stealthiness, and robustness. In the end, we discuss the current challenges and potential future directions.
To assess the vulnerability of deep learning in the physical world, recent works introduce adversarial patches and apply them on different tasks. In this paper, we propose another kind of adversarial patch: the Meaningful Adversarial Sticker, a physically feasible and stealthy attack method by using real stickers existing in our life. Unlike the previous adversarial patches by designing perturbations, our method manipulates the sticker's pasting position and rotation angle on the objects to perform physical attacks. Because the position and rotation angle are less affected by the printing loss and color distortion, adversarial stickers can keep good attacking performance in the physical world. Besides, to make adversarial stickers more practical in real scenes, we conduct attacks in the black-box setting with the limited information rather than the white-box setting with all the details of threat models. To effectively solve for the sticker's parameters, we design the Region based Heuristic Differential Evolution Algorithm, which utilizes the new-found regional aggregation of effective solutions and the adaptive adjustment strategy of the evaluation criteria. Our method is comprehensively verified in the face recognition and then extended to the image retrieval and traffic sign recognition. Extensive experiments show the proposed method is effective and efficient in complex physical conditions and has a good generalization for different tasks.
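For reference, the sketch below runs a plain DE/rand/1 differential evolution over the sticker's position and rotation using only black-box queries; the paper's region-based heuristics and adaptive evaluation criteria are not reproduced, and `paste_sticker`/`query_score` are assumed interfaces.

```python
# Illustrative sketch (plain differential evolution, not the paper's region-based
# heuristic variant): search over the sticker's pasting position and rotation
# angle using only black-box queries to the target model.
import numpy as np

def de_sticker_attack(image, paste_sticker, query_score, pop=20, gens=50,
                      F=0.5, CR=0.9, seed=0):
    """paste_sticker(image, x, y, angle) -> image with the sticker applied,
    with x, y, angle each in [0, 1] (angle mapped to [0, 360) degrees).
    query_score(image) -> target-model score to minimize."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(0, 1, (pop, 3))                              # (x, y, angle)
    fit = np.array([query_score(paste_sticker(image, *p)) for p in X])
    for _ in range(gens):
        for i in range(pop):
            a, b, c = X[rng.choice([j for j in range(pop) if j != i], 3, replace=False)]
            mutant = np.clip(a + F * (b - c), 0, 1)              # DE/rand/1 mutation
            cross = rng.uniform(size=3) < CR
            cross[rng.integers(3)] = True                        # ensure one gene crosses
            trial = np.where(cross, mutant, X[i])
            f = query_score(paste_sticker(image, *trial))        # one query per trial
            if f < fit[i]:                                       # greedy selection
                X[i], fit[i] = trial, f
    return X[fit.argmin()]
```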
With rapid progress and significant successes in a wide spectrum of applications, deep learning is being applied in many safety-critical environments. However, deep neural networks have been recently found vulnerable to well-designed input samples, called adversarial examples. Adversarial perturbations are imperceptible to human but can easily fool deep neural networks in the testing/deploying stage. The vulnerability to adversarial examples becomes one of the major risks for applying deep neural networks in safety-critical environments. Therefore, attacks and defenses on adversarial examples draw great attention. In this paper, we review recent findings on adversarial examples for deep neural networks, summarize the methods for generating adversarial examples, and propose a taxonomy of these methods. Under the taxonomy, applications for adversarial examples are investigated. We further elaborate on countermeasures for adversarial examples. In addition, three major challenges in adversarial examples and the potential solutions are discussed.
Recent work has shown that deep reinforcement learning (DRL) policies are vulnerable to adversarial perturbations. Adversaries can mislead the policy of a DRL agent by perturbing the environment observed by the agent. Existing attacks are feasible in principle but face challenges in practice, for example by being too slow to fool DRL policies in real time. We show that using the Universal Adversarial Perturbation (UAP) method to compute perturbations, independent of the individual inputs to which they are applied, can fool DRL policies effectively. We describe three such attack variants. Via an extensive evaluation using three Atari 2600 games, we show that our attacks are effective, as they fully degrade the performance of three different DRL agents (up to 100%, even when the $L_\infty$ bound on the perturbation is as small as 0.01). They are faster than the response time of the DRL policies (0.6 ms on average) and faster than previous attacks using adversarial perturbations (1.8 ms on average). We also show that our attack technique is efficient, incurring an online computational cost of 0.027 ms on average. Using two further tasks involving robotic movement, we confirm that our results generalize to more complex DRL tasks. Furthermore, we demonstrate that the effectiveness of known defenses diminishes against universal perturbations. We propose an effective technique to detect all known adversarial perturbations against DRL policies, including all the universal perturbations presented in this paper.
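The run-time side of such an attack is simple to picture: once a universal perturbation has been computed offline, applying it is a single clipped addition per observation, which is why it can keep up with a policy's response time. The sketch below assumes a gym-style environment interface.

```python
# Illustrative sketch (assumed interfaces): applying a precomputed universal
# adversarial perturbation `uap` to every observation a DRL policy receives.
import numpy as np

def perturb_observation(obs, uap, eps=0.01, lo=0.0, hi=1.0):
    """obs, uap: arrays of the same shape; eps is the l_inf budget."""
    delta = np.clip(uap, -eps, eps)          # enforce the l_inf bound
    return np.clip(obs + delta, lo, hi)      # keep a valid observation

def run_attacked_episode(env, policy, uap, eps=0.01):
    obs, done, total = env.reset(), False, 0.0
    while not done:
        action = policy(perturb_observation(obs, uap, eps))
        obs, reward, done, _ = env.step(action)
        total += reward
    return total                              # degraded return under attack
```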
In the past few years, adversarial attacks against learning-based object detectors have been studied extensively. Most of the proposed attacks target the model's integrity (i.e., cause the model to make incorrect predictions), while adversarial attacks targeting the model's availability, a critical aspect in safety-critical domains such as autonomous driving, have not yet been explored by the machine learning research community. In this paper, we propose a novel attack that negatively affects the decision latency of an end-to-end object detection pipeline. We craft a universal adversarial perturbation (UAP) that targets non-maximum suppression (NMS), a technique widely used in many object detection pipelines. Our experiments demonstrate the ability of the proposed UAP to increase the processing time of individual frames by adding "phantom" objects while preserving the detection of the original objects (which allows the attack to remain undetected for a longer period of time).
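To see why flooding NMS with candidates raises latency, the generic greedy NMS below loops until every surviving box has been processed, so adding many "phantom" detections directly increases the number of passes; this is a textbook implementation for intuition, not the attacked pipeline.

```python
# Illustrative sketch: plain greedy NMS whose cost grows with the number of
# candidate boxes, which is the property a "phantom object" attack exploits.
import numpy as np

def nms(boxes, scores, iou_thresh=0.5):
    """boxes: (N, 4) as (x1, y1, x2, y2); scores: (N,). Returns kept indices."""
    order = scores.argsort()[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        xx1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        yy1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        xx2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        yy2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.maximum(0, xx2 - xx1) * np.maximum(0, yy2 - yy1)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_o = (boxes[order[1:], 2] - boxes[order[1:], 0]) * \
                 (boxes[order[1:], 3] - boxes[order[1:], 1])
        iou = inter / (area_i + area_o - inter + 1e-9)
        order = order[1:][iou <= iou_thresh]   # more surviving boxes, more passes
    return keep
```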
Video compression plays a crucial role in video streaming and classification systems by maximizing the end-user quality of experience (QoE) at a given bandwidth budget. In this paper, we conduct the first systematic study for adversarial attacks on deep learning-based video compression and downstream classification systems. Our attack framework, dubbed RoVISQ, manipulates the Rate-Distortion ($\textit{R}$-$\textit{D}$) relationship of a video compression model to achieve one or both of the following goals: (1) increasing the network bandwidth, (2) degrading the video quality for end-users. We further devise new objectives for targeted and untargeted attacks to a downstream video classification service. Finally, we design an input-invariant perturbation that universally disrupts video compression and classification systems in real time. Unlike previously proposed attacks on video classification, our adversarial perturbations are the first to withstand compression. We empirically show the resilience of RoVISQ attacks against various defenses, i.e., adversarial training, video denoising, and JPEG compression. Our extensive experimental results on various video datasets show RoVISQ attacks deteriorate peak signal-to-noise ratio by up to 5.6dB and the bit-rate by up to $\sim$ 2.4$\times$ while achieving over 90$\%$ attack success rate on a downstream classifier. Our user study further demonstrates the effect of RoVISQ attacks on users' QoE.
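For intuition, a generic objective for this family of attacks can be written as below (our formulation under assumed notation, not RoVISQ's exact loss): a bounded perturbation $\delta$ is sought that raises the bitrate $R$, the distortion $D$ of the decoded reconstruction $\hat{x}$, or the loss $\mathcal{L}$ of a downstream classifier $f$, with the $\lambda$ weights selecting the attack goal.

```latex
% A generic formulation for intuition (assumed notation, not RoVISQ's loss):
% x is the clean video, \hat{x}(\cdot) the decoded reconstruction, y the label.
\max_{\|\delta\|_\infty \le \epsilon}\;
  \lambda_R \, R\!\left(x+\delta\right)
  \;+\; \lambda_D \, D\!\left(\hat{x}(x+\delta),\, x\right)
  \;+\; \lambda_C \, \mathcal{L}\!\left(f\big(\hat{x}(x+\delta)\big),\, y\right)
```

Setting individual $\lambda$ terms to zero recovers the bandwidth-only, quality-only, or classifier-only attack goals described above.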
Evaluating the worst-case performance of a reinforcement learning (RL) agent under the strongest/optimal adversarial perturbations of state observations (within some constraints) is crucial for understanding the robustness of RL agents. However, finding the optimal adversary is challenging, both in terms of whether we can find the optimal attack and how we can find it. Existing works on adversarial RL either use heuristics-based methods that may not find the strongest adversary, or directly train an RL-based adversary by treating the agent as part of the environment, which can find the optimal adversary but may become intractable in a large state space. This paper introduces a novel attack method that finds the optimal attack through collaboration between a designed function named "actor" and an RL-based learner named "director". The actor crafts state perturbations for a given policy perturbation direction, and the director learns to propose the best policy perturbation directions. Our proposed algorithm, PA-AD, is theoretically optimal and significantly more efficient than prior RL-based works in environments with large state spaces. Empirical results show that our proposed PA-AD universally outperforms state-of-the-art attacking methods in various Atari and MuJoCo environments. By applying PA-AD to adversarial training, we achieve state-of-the-art empirical robustness on multiple tasks under strong adversaries.
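A minimal sketch of the actor-director split, under the assumption that the actor has gradient access to the victim policy, is given below: the director supplies a target action distribution and the actor turns it into a bounded state perturbation with a few signed-gradient steps. This is illustrative only, not the PA-AD implementation.

```python
# Illustrative sketch (assumed interfaces, not the PA-AD implementation): the
# "director" proposes a direction in the victim's policy space and the "actor"
# turns it into a bounded state perturbation via a few signed-gradient steps.
import torch

def actor_craft(victim_policy, state, target_direction, eps=0.01, steps=5):
    """victim_policy(state) -> action logits (a torch module);
    target_direction: probability vector (torch tensor) the attacker wants the
    perturbed policy to move toward; state: torch tensor observation."""
    delta = torch.zeros_like(state, requires_grad=True)
    for _ in range(steps):
        logits = victim_policy(state + delta)
        log_probs = torch.log_softmax(logits, dim=-1)
        # cross-entropy toward the director's proposed direction
        loss = -(target_direction * log_probs).sum()
        loss.backward()
        with torch.no_grad():
            delta -= (eps / steps) * delta.grad.sign()   # descend toward the target
            delta.clamp_(-eps, eps)                      # enforce the l_inf budget
        delta.grad.zero_()
    return (state + delta).detach()
```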
Reinforcement learning (RL) is one of the most important branches of AI. Due to its capacity for self-adaption and decision-making in dynamic environments, reinforcement learning has been widely applied in multiple areas, such as healthcare, data markets, autonomous driving, and robotics. However, some of these applications and systems have been shown to be vulnerable to security or privacy attacks, resulting in unreliable or unstable services. A large number of studies have focused on these security and privacy problems in reinforcement learning. However, few surveys have provided a systematic review and comparison of existing problems and state-of-the-art solutions to keep up with the pace of emerging threats. Accordingly, we herein present such a comprehensive review to explain and summarize the challenges associated with security and privacy in reinforcement learning from a new perspective, namely that of the Markov Decision Process (MDP). In this survey, we first introduce the key concepts related to this area. Next, we cover the security and privacy issues linked to the state, action, environment, and reward function of the MDP process, respectively. We further highlight the special characteristics of security and privacy methodologies related to reinforcement learning. Finally, we discuss the possible future research directions within this area.
A large amount of research effort in recent years has focused on adversarial attacks on images, while adversarial video attacks have seldom been explored. We propose an adversarial attack strategy on videos, called DeepSAVA. Our model combines additive perturbation and spatial transformation through a unified optimization framework, where the structural similarity index (SSIM) measure is adopted as the adversarial distance. We design an effective and novel optimization scheme that alternately utilizes Bayesian optimization to identify the most influential frame in a video and stochastic gradient descent (SGD) based optimization to produce both additive and spatially-transformed perturbations. Doing so enables DeepSAVA to perform a very sparse attack on videos that maintains human imperceptibility while still achieving state-of-the-art performance in terms of both attack success rate and adversarial transferability. Our intensive experiments on various types of deep neural networks and video datasets confirm the superiority of DeepSAVA.
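The alternating scheme can be pictured as below, with the Bayesian-optimization frame selector and the SGD step left as assumed helpers (`pick_frame_bayesopt`, `sgd_minimize`); a simplified global-statistics SSIM stands in for the paper's adversarial-distance measure.

```python
# Illustrative sketch (simplified, with assumed helpers): alternate between
# selecting the most influential frame and perturbing only that frame under a
# combined attack loss and SSIM-based distance penalty.
import numpy as np

def global_ssim(x, y, c1=0.01**2, c2=0.03**2):
    """Simplified single-window SSIM between two same-shaped arrays in [0, 1]."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2*mx*my + c1) * (2*cov + c2)) / ((mx**2 + my**2 + c1) * (vx + vy + c2))

def alternate_attack(video, pick_frame_bayesopt, sgd_minimize, model_loss,
                     rounds=5, lam=1.0):
    """video: (T, H, W, C) array in [0, 1]; model_loss(video) -> scalar attack loss;
    sgd_minimize(loss_fn, init_frame) -> optimized frame."""
    adv = video.copy()
    for _ in range(rounds):
        t = pick_frame_bayesopt(adv)                 # most influential frame this round
        def frame_loss(frame, t=t):                  # attack loss + SSIM distance penalty
            candidate = adv.copy(); candidate[t] = frame
            return model_loss(candidate) + lam * (1.0 - global_ssim(frame, video[t]))
        adv[t] = sgd_minimize(frame_loss, adv[t])    # only the selected frame changes
    return adv
```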
Fooling deep neural networks (DNNs) with black-box optimization has become a popular adversarial attack fashion, since the structural prior knowledge of DNNs is often unknown. Nevertheless, recent black-box adversarial attacks may struggle to balance attack ability and visual quality when generating adversarial examples (AEs) on high-resolution images. In this paper, we propose an attention-guided black-box adversarial attack based on large-scale multi-objective evolutionary optimization, termed LMOA. By considering the spatial semantic information of the image, we first exploit an attention map to determine the perturbed pixels. Instead of attacking the entire image, reducing the perturbed pixels via the attention mechanism helps avoid the notorious curse of dimensionality and thereby improves the attack performance. Second, a large-scale multi-objective evolutionary algorithm is adopted to traverse the reduced pixels in the salient region. Benefiting from these characteristics, the generated AEs can potentially fool the target DNNs while being imperceptible to human vision. Extensive experimental results verify the effectiveness of the proposed LMOA on the ImageNet dataset. More importantly, compared with existing black-box adversarial attacks, LMOA is more competitive in generating high-resolution AEs with better visual quality.
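The two ingredients can be sketched as follows (assumed components, not the LMOA algorithm itself): an attention map restricts which pixels may be perturbed, and each candidate is scored by the two objectives a multi-objective search would trade off, attack strength and perturbation magnitude.

```python
# Illustrative sketch (assumed components, not the LMOA algorithm itself): use
# an attention map to restrict the perturbed pixels, and score each candidate
# perturbation with two objectives: attack strength and perceptibility.
import numpy as np

def attention_mask(attention_map, keep_ratio=0.1):
    """attention_map: (H, W). Keep only the top `keep_ratio` most salient pixels."""
    thresh = np.quantile(attention_map, 1.0 - keep_ratio)
    return (attention_map >= thresh).astype(np.float32)[..., None]

def objectives(image, delta, mask, true_label, predict_probs):
    """Returns (attack objective, visual-quality objective); both to be minimized."""
    adv = np.clip(image + delta * mask, 0.0, 1.0)
    probs = predict_probs(adv)                     # one black-box query
    f1 = probs[true_label]                         # lower = stronger attack
    f2 = np.linalg.norm((delta * mask).ravel())    # lower = less visible
    return f1, f2

def dominates(a, b):
    """Pareto dominance for minimization of both objectives."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))
```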
Unmanned aerial vehicle (UAV) based visual object tracking has enabled a wide range of applications and has attracted increasing attention in the field of intelligent transportation systems because of its versatility and effectiveness. As an emerging force in the revolutionary trend of deep learning, Siamese networks shine in UAV-based object tracking with their promising balance of accuracy, robustness, and speed. Thanks to the development of embedded processors and the gradual optimization of deep neural networks, Siamese trackers have received extensive research attention and achieved a preliminary combination with UAVs. However, due to the limited onboard computational resources of UAVs and complex real-world conditions, aerial tracking with Siamese networks still faces severe obstacles in many aspects. To further explore the deployment of Siamese networks in UAV-based tracking, this work presents a comprehensive review of leading-edge Siamese trackers, together with an exhaustive UAV-oriented analysis based on evaluations with a typical UAV onboard processor. Onboard tests are then conducted to validate the feasibility and efficacy of representative Siamese trackers in real-world UAV deployment. Furthermore, to better promote the development of the tracking community, this work analyzes the limitations of existing Siamese trackers and conducts additional experiments represented by low-illumination evaluations. In the end, the prospects of Siamese tracking for UAV-based intelligent transportation systems are discussed in depth. The unified framework of leading-edge Siamese trackers, i.e., the code library, and the results of their experimental evaluations are available at https://github.com/vision4robotics/siamesetracking4uav.