Trajectory prediction is an integral component of modern autonomous systems as it allows for envisioning the future intentions of nearby moving agents. Since other agents' dynamics and control policies are typically unknown, deep neural network (DNN) models are often employed for trajectory forecasting tasks. Although there exists an extensive literature on improving the accuracy of these models, only a very limited number of works study their robustness against adversarially crafted input trajectories. To bridge this gap, in this paper, we propose a targeted adversarial attack against DNN models for trajectory forecasting tasks. We call the proposed attack TA4TP for Targeted adversarial Attack for Trajectory Prediction. Our approach generates adversarial input trajectories that are capable of fooling DNN models into predicting user-specified target/desired trajectories. Our attack relies on solving a nonlinear constrained optimization problem where the objective function captures the deviation of the predicted trajectory from a target one, while the constraints model physical requirements that the adversarial input should satisfy. The latter ensures that the inputs look natural and are safe to execute (e.g., they are close to nominal inputs and away from obstacles). We demonstrate the effectiveness of TA4TP on two state-of-the-art DNN models and two datasets. To the best of our knowledge, we propose the first targeted adversarial attack against DNN models used for trajectory forecasting.
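The abstract frames the attack as a constrained optimization: minimize the deviation between the model's prediction and a target trajectory while the adversarial history stays close to the nominal one and clear of obstacles. Below is a minimal sketch of that setup, not the paper's implementation; the constant-velocity `predict` stand-in, the horizons, the obstacle list, and the bounds `eps`/`d_safe` are illustrative assumptions, and a real attack would wrap the forecasting DNN instead.

```python
import numpy as np
from scipy.optimize import minimize

# Stand-in predictor: maps a flattened observed history (T_obs x 2) to a flattened
# predicted future (T_pred x 2). A real attack would wrap the forecasting DNN here.
def predict(history_flat, T_pred=12):
    h = history_flat.reshape(-1, 2)
    v = h[-1] - h[-2]                                   # constant-velocity rollout
    return (h[-1] + np.arange(1, T_pred + 1)[:, None] * v).ravel()

T_obs = 8
nominal = np.cumsum(np.tile([1.0, 0.0], (T_obs, 1)), axis=0)      # straight nominal history
target = (nominal[-1] + np.arange(1, 13)[:, None] * np.array([1.0, 0.5])).ravel()  # desired prediction
obstacles = np.array([[4.0, 1.5]])                                 # hypothetical obstacle positions
eps, d_safe = 0.5, 1.0                                             # assumed perturbation/safety bounds

def objective(x):                          # deviation of the prediction from the target trajectory
    return np.sum((predict(x) - target) ** 2)

def min_obstacle_dist(x):                  # per-waypoint distance to the nearest obstacle
    pts = x.reshape(-1, 2)
    return np.min(np.linalg.norm(pts[:, None] - obstacles[None], axis=-1))

constraints = [
    {"type": "ineq", "fun": lambda x: eps**2 - np.sum((x - nominal.ravel())**2)},  # stay near nominal
    {"type": "ineq", "fun": lambda x: min_obstacle_dist(x) - d_safe},              # stay clear of obstacles
]
result = minimize(objective, nominal.ravel(), method="SLSQP", constraints=constraints)
print("final deviation from target:", result.fun)
```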
Trajectory prediction is a critical component for autonomous vehicles (AVs) to perform safe planning and navigation. However, few studies have analyzed the adversarial robustness of trajectory prediction or investigated whether the worst-case prediction can still lead to safe planning. To bridge this gap, we study the adversarial robustness of trajectory prediction models by proposing adversarial trajectories for normal vehicles that maximize the prediction error. Our experiments on three models and three datasets show that the adversarial prediction increases the prediction error by more than 150%. Our case studies show that if an adversary drives a vehicle close to the target AV following the adversarial trajectory, the AV may make inaccurate predictions and even unsafe driving decisions. We also explore possible mitigation techniques via data augmentation and trajectory smoothing.
Trajectory prediction is essential for autonomous vehicles (AVs) to plan correct and safe driving behaviors. While many prior works aim to achieve higher prediction accuracy, few study the adversarial robustness of their methods. To bridge this gap, we propose to study the adversarial robustness of data-driven trajectory prediction systems. We devise an optimization-based adversarial attack framework that leverages a carefully designed differentiable dynamic model to generate realistic adversarial trajectories. Empirically, we benchmark the adversarial robustness of state-of-the-art prediction models and show that our attack increases the prediction error for both general metrics and planning-aware metrics by more than 50% and 37%, respectively. We also show that our attack can lead an AV to drive off the road or collide into other vehicles in simulation. Finally, we demonstrate how to mitigate the adversarial attacks using an adversarial training scheme.
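A rough sketch of the optimization-based idea described above, under stated assumptions: control perturbations are rolled through a simple differentiable unicycle model so the adversarial history stays dynamically feasible, and gradient ascent maximizes the prediction error. The `predictor` callable, the unicycle dynamics, and all bounds are placeholders rather than the paper's actual differentiable dynamic model.

```python
import torch

def rollout(x0, controls, dt=0.1):
    """Differentiable unicycle rollout; controls[:, 0] = acceleration, controls[:, 1] = yaw rate."""
    x, y, v, yaw = x0
    positions = []
    for a, w in controls:
        v = v + a * dt
        yaw = yaw + w * dt
        x = x + v * torch.cos(yaw) * dt
        y = y + v * torch.sin(yaw) * dt
        positions.append(torch.stack([x, y]))
    return torch.stack(positions)                                # (T_obs, 2) adversarial history

def attack(predictor, x0, nominal_controls, future_gt, steps=50, lr=0.05, eps=0.3):
    """Maximize the prediction error while keeping the control perturbation bounded."""
    delta = torch.zeros_like(nominal_controls, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        history = rollout(x0, nominal_controls + delta)          # dynamically feasible trajectory
        loss = -torch.mean((predictor(history) - future_gt) ** 2)  # ascend the prediction error
        opt.zero_grad(); loss.backward(); opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)                              # keep controls near nominal
    return rollout(x0, nominal_controls + delta).detach()
```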
Trajectory prediction using deep neural networks (DNNs) is an essential component of autonomous driving (AD) systems. However, these methods are vulnerable to adversarial attacks, leading to serious consequences such as collisions. In this work, we identify two key ingredients for defending trajectory prediction models against adversarial attacks: (1) designing effective adversarial training methods, and (2) adding domain-specific data augmentation to mitigate the performance degradation on clean data. We demonstrate that our method is able to improve performance by 46% on adversarial data at the cost of only a 3% performance degradation on clean data, compared to the model trained with clean data. Additionally, compared to existing robust methods, our method can improve performance by 21% on adversarial examples and 9% on clean data. Our robust model is evaluated together with a planner to study its downstream impact. We demonstrate that our model can significantly reduce the rate of severe accidents (e.g., collisions and off-road driving).
Video compression plays a crucial role in video streaming and classification systems by maximizing the end-user quality of experience (QoE) at a given bandwidth budget. In this paper, we conduct the first systematic study for adversarial attacks on deep learning-based video compression and downstream classification systems. Our attack framework, dubbed RoVISQ, manipulates the Rate-Distortion ($\textit{R}$-$\textit{D}$) relationship of a video compression model to achieve one or both of the following goals: (1) increasing the network bandwidth, (2) degrading the video quality for end-users. We further devise new objectives for targeted and untargeted attacks to a downstream video classification service. Finally, we design an input-invariant perturbation that universally disrupts video compression and classification systems in real time. Unlike previously proposed attacks on video classification, our adversarial perturbations are the first to withstand compression. We empirically show the resilience of RoVISQ attacks against various defenses, i.e., adversarial training, video denoising, and JPEG compression. Our extensive experimental results on various video datasets show RoVISQ attacks deteriorate peak signal-to-noise ratio by up to 5.6dB and the bit-rate by up to $\sim$ 2.4$\times$ while achieving over 90$\%$ attack success rate on a downstream classifier. Our user study further demonstrates the effect of RoVISQ attacks on users' QoE.
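A heavily simplified sketch of a rate-distortion attack in the spirit described above, assuming access to a differentiable codec that returns a reconstruction and a bits-per-pixel estimate; the combined distortion-plus-rate loss, the PGD-style update, and the budget `eps` are illustrative and do not reproduce RoVISQ's actual objectives.

```python
import torch

def rd_attack(codec, video, steps=30, alpha=1e-3, eps=4/255, lam=1.0):
    """Sketch: maximize distortion and estimated bit-rate of a differentiable codec.
    `codec(video)` is assumed to return (reconstruction, bits_per_pixel), both differentiable."""
    delta = torch.zeros_like(video, requires_grad=True)
    for _ in range(steps):
        recon, bpp = codec(video + delta)
        distortion = torch.mean((recon - video) ** 2)          # degrade quality for end users
        loss = distortion + lam * bpp                          # rate-distortion attack objective
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()                 # PGD-style ascent step
            delta.clamp_(-eps, eps)                            # imperceptibility budget
            delta.grad.zero_()
    return (video + delta).detach()
```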
Predicting the future states of surrounding traffic participants and planning a safe, smooth, and socially compliant trajectory accordingly is crucial for autonomous vehicles. Current autonomous driving systems have two major issues: the prediction module is often decoupled from the planning module, and the cost function for planning is hard to specify and tune. To tackle these issues, we propose an end-to-end differentiable framework that integrates the prediction and planning modules and is able to learn the cost function from data. Specifically, we employ a differentiable nonlinear optimizer as the motion planner, which takes as input the predicted trajectories of surrounding agents given by the neural network and optimizes the trajectory for the autonomous vehicle, thus making all operations in the framework differentiable, including the cost function weights. The proposed framework is trained on a large-scale real-world driving dataset to imitate human driving trajectories in entire driving scenarios and is validated in both open-loop and closed-loop settings. The open-loop test results show that the proposed method outperforms baseline methods across a variety of metrics and delivers planning-centric prediction results, allowing the planning module to output trajectories close to those of human drivers. In closed-loop testing, the proposed method shows the ability to handle complex urban driving scenarios and robustness against the distributional shift that imitation learning methods suffer from. Importantly, we find that joint training of the planning and prediction modules achieves better performance than planning with a separately trained prediction module in both open-loop and closed-loop tests. Moreover, the ablation study indicates that the learnable components in the framework are essential to ensure planning stability and performance.
Real-world adversarial examples (commonly in the form of patches) pose a serious threat to the use of deep learning models in safety-critical computer vision tasks, such as visual perception in autonomous driving. This paper presents an extensive evaluation of the robustness of semantic segmentation models when attacked with different types of adversarial patches, including digital, simulated, and physical ones. A novel loss function is proposed to improve the attacker's ability to induce pixel misclassifications. Moreover, a new attack strategy is presented to improve the Expectation Over Transformation method used to place the patch in the scene. Finally, a state-of-the-art method for detecting adversarial patches is first extended to cope with semantic segmentation models, then improved to obtain real-time performance, and eventually evaluated in real-world scenarios. Experimental results show that, although the adversarial effect is visible with both digital and real-world attacks, its impact is often spatially confined to the image regions around the patch. This opens further questions about the spatial robustness of real-time semantic segmentation models.
Deep neural networks are vulnerable to attacks from adversarial inputs and, more recently, to Trojans that misguide or hijack the model's decisions. We expose the existence of an intriguing class of bounded adversarial examples -- universal naturalistic adversarial patches, which we call TNTs -- by exploring the bounded adversarial example space and the natural input space within generative adversarial networks. An adversary can now arm themselves with a patch that is naturalistic, less malicious-looking, physically realizable, and highly effective, achieving high attack success rates and universality. A TNT is universal because any input image captured with a TNT in the scene will either: i) mislead the network (untargeted attack); or ii) force the network to make a malicious decision (targeted attack). Interestingly, an adversarial patch attacker can now exert a greater level of control: the ability to choose a location-independent, natural-looking patch as a trigger, in contrast to being constrained to noisy perturbations -- an ability so far only possible with Trojan attack methods, which interfere with the model-building process to embed a backdoor at the risk of discovery, yet still realize a patch deployable in the physical world. Through extensive experiments on a large-scale visual classification task, ImageNet, with evaluations across its entire validation set of 50,000 images, we demonstrate the realistic threat posed by TNTs and the robustness of the attack. We show a generalization of the attack that creates patches achieving higher attack success rates than existing state-of-the-art methods. Our results show the generalizability of the attack to different visual classification tasks (CIFAR-10, GTSRB, PubFig) and to multiple state-of-the-art deep neural networks such as WideResNet50, Inception-V3, and VGG-16.
Deep neural networks (DNNs) are one of the most prominent technologies of our time, as they achieve state-of-the-art performance in many machine learning tasks, including but not limited to image classification, text mining, and speech processing. However, recent research on DNNs has indicated ever-increasing concern on the robustness to adversarial examples, especially for security-critical tasks such as traffic sign identification for autonomous driving. Studies have unveiled the vulnerability of a well-trained DNN by demonstrating the ability of generating barely noticeable (to both humans and machines) adversarial images that lead to misclassification. Furthermore, researchers have shown that these adversarial images are highly transferable by simply training and attacking a substitute model built upon the target model, known as a black-box attack to DNNs. Similar to the setting of training substitute models, in this paper we propose an effective black-box attack that also only has access to the input (images) and the output (confidence scores) of a targeted DNN. However, different from leveraging attack transferability from substitute models, we propose zeroth order optimization (ZOO) based attacks to directly estimate the gradients of the targeted DNN for generating adversarial examples. We use zeroth order stochastic coordinate descent along with dimension reduction, hierarchical attack and importance sampling techniques to efficiently attack black-box models.
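The core of a ZOO-style attack is estimating gradients of a black-box loss with symmetric finite differences on a small batch of coordinates, then taking a coordinate-descent step. The sketch below shows only that kernel, under the assumption that `loss_fn` queries the target DNN's confidence scores and returns a scalar attack loss; the paper's dimension reduction, hierarchical attack, and importance sampling are omitted.

```python
import numpy as np

def zoo_gradient(loss_fn, x, coords, h=1e-4):
    """Estimate partial derivatives of a black-box loss at selected coordinates
    using symmetric finite differences (no access to model gradients)."""
    grad = np.zeros_like(x, dtype=float)
    for i in coords:
        e = np.zeros_like(x, dtype=float)
        e.flat[i] = h
        grad.flat[i] = (loss_fn(x + e) - loss_fn(x - e)) / (2 * h)
    return grad

def zoo_attack_step(loss_fn, x, batch=128, lr=0.01):
    """One zeroth-order stochastic coordinate descent step on a random coordinate batch."""
    coords = np.random.choice(x.size, size=min(batch, x.size), replace=False)
    return x - lr * zoo_gradient(loss_fn, x, coords)
```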
As self-driving systems become better, simulating scenarios where the autonomy stack may fail becomes more important. Traditionally, these scenarios have been generated for a small number of scenes with respect to the planning module, which takes ground-truth actor states as input. This does not scale and cannot identify all possible autonomy failures, such as perception failures due to occlusion. In this paper, we propose AdvSim, an adversarial framework to generate safety-critical scenarios for LiDAR-based autonomy systems. Given an initial traffic scenario, AdvSim modifies the actors' trajectories in a physically plausible manner and updates the LiDAR sensor data to match the perturbed world. Importantly, by simulating directly from sensor data, we obtain adversarial scenarios that are safety-critical for the full autonomy stack. Our experiments show that our approach is general and can identify thousands of semantically meaningful safety-critical scenarios for a wide range of modern self-driving systems. Furthermore, we show that the robustness and safety of these systems can be further improved by training them with the scenarios generated by AdvSim.
As research on deep neural networks advances, deep convolutional networks have become feasible for autonomous driving tasks, and there is an emerging trend of employing end-to-end models in the automation of driving. However, previous research has shown that deep neural networks are vulnerable to adversarial attacks in classification tasks. For regression tasks such as autonomous driving, the effect of these attacks remains largely unexplored. In this study, we devise two white-box targeted attacks against end-to-end autonomous driving systems. The driving model takes an image as input and outputs a steering angle. Our attacks can manipulate the behavior of the autonomous driving system only by perturbing the input image. Both attacks can be launched in real time on a CPU without using a GPU. This research aims to raise concerns about the application of end-to-end models in safety-critical systems.
Modern autonomous vehicles adopt state-of-the-art DNN models to interpret sensor data and perceive the environment. However, DNN models are vulnerable to different types of adversarial attacks, which pose significant risks to the security and safety of the vehicles and passengers. One prominent threat is the backdoor attack, where the adversary can compromise the DNN model by poisoning the training samples. Although a great deal of effort has been devoted to investigating backdoor attacks against conventional computer vision tasks, their practicality and applicability to autonomous driving scenarios, especially in the physical world, are rarely explored. In this paper, we target the lane detection system, which is an indispensable module for many autonomous driving tasks, e.g., navigation and lane switching. We design and realize the first physical backdoor attacks against such systems. Our attacks are comprehensively effective against different types of lane detection algorithms. Specifically, we introduce two attack methodologies (poison-annotation and clean-annotation) to generate poisoned samples. With these samples, the trained lane detection model will be infected with the backdoor, which can be activated by common objects (e.g., traffic cones) to trigger wrong detections, leading the vehicle to drive off the road or onto the opposite lane. Extensive evaluations on public datasets and physical autonomous vehicles demonstrate that our backdoor attacks are effective, stealthy, and robust against various defense solutions. Our code and experimental videos can be found at https://sites.google.com/view/lane-detection-attack/lda.
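As a toy sketch of the generic poisoning step such an attack relies on, the snippet below pastes a trigger object (e.g., a crop of a traffic cone) into a training image and optionally alters the annotation. It only hints at the difference between the two methodologies; the trigger, placement, and label handling here are illustrative assumptions, not the paper's pipeline.

```python
import numpy as np

def poison_sample(image, annotation, trigger, position, poison_annotation=False, new_annotation=None):
    """Paste a trigger patch into a training image. With poison_annotation=True the lane
    annotation is also replaced; otherwise the original annotation is kept untouched."""
    x, y = position
    h, w = trigger.shape[:2]
    poisoned = image.copy()
    poisoned[y:y + h, x:x + w] = trigger              # physical-object trigger in pixel space
    label = new_annotation if (poison_annotation and new_annotation is not None) else annotation
    return poisoned, label
```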
Modern robots require accurate forecasts to make optimal decisions in the real world. For example, self-driving cars need accurate forecasts of other agents' future actions in order to plan safe trajectories. Current methods rely heavily on historical time series to accurately predict the future. However, relying entirely on the observed history is problematic, since it can be corrupted by noise, contain outliers, or fail to represent all possible outcomes. To address this problem, we propose a new framework for generating robust forecasts for robotic control. In order to model the real-world factors that affect future forecasts, we introduce the concept of an adversary, which perturbs the observed historical time series to increase the robot's ultimate control cost. Specifically, we model this interaction as a zero-sum two-player game between the robot's forecaster and this hypothetical adversary. We show that the proposed game can be solved to a local Nash equilibrium using gradient-based optimization techniques. Furthermore, we show that a forecaster trained with our method performs 30.14% better than baselines on out-of-distribution real-world data.
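A sketch of the zero-sum training loop the abstract outlines, assuming a differentiable `forecaster`, a differentiable downstream `control_cost`, and a bounded perturbation of the observed history; taking a single alternating gradient step per side is a simplification of solving for a local Nash equilibrium.

```python
import torch

def adversarial_forecast_training(forecaster, control_cost, histories, futures,
                                  epochs=10, eps=0.2, adv_lr=0.05, model_lr=1e-3):
    """Alternating updates for the zero-sum game between the forecaster (minimizes the
    downstream control cost) and an adversary perturbing the history (maximizes it)."""
    opt = torch.optim.Adam(forecaster.parameters(), lr=model_lr)
    for _ in range(epochs):
        for hist, fut in zip(histories, futures):
            # --- adversary: one ascent step on the history perturbation ---
            delta = torch.zeros_like(hist, requires_grad=True)
            cost = control_cost(forecaster(hist + delta), fut)
            delta_grad, = torch.autograd.grad(cost, delta)
            delta = (adv_lr * delta_grad.sign()).clamp(-eps, eps)   # bounded perturbation
            # --- forecaster: one descent step against the perturbed history ---
            opt.zero_grad()
            control_cost(forecaster(hist + delta), fut).backward()
            opt.step()
    return forecaster
```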
The deep neural network (DNN) models for object detection using camera images are widely adopted in autonomous vehicles. However, DNN models are shown to be susceptible to adversarial image perturbations. In the existing methods of generating the adversarial image perturbations, optimizations take each incoming image frame as the decision variable to generate an image perturbation. Therefore, given a new image, the typically computationally-expensive optimization needs to start over as there is no learning between the independent optimizations. Very few approaches have been developed for attacking online image streams while considering the underlying physical dynamics of autonomous vehicles, their mission, and the environment. We propose a multi-level stochastic optimization framework that monitors an attacker's capability of generating the adversarial perturbations. Based on this capability level, a binary decision attack/not attack is introduced to enhance the effectiveness of the attacker. We evaluate our proposed multi-level image attack framework using simulations for vision-guided autonomous vehicles and actual tests with a small indoor drone in an office environment. The results show our method's capability to generate the image attack in real-time while monitoring when the attacker is proficient given state estimates.
The successful emergence of deep learning (DL) in wireless system applications has raised concerns about new security-related challenges. One such security challenge is adversarial attacks. Although there has been much work demonstrating the susceptibility of DL-based classification tasks to adversarial attacks, regression-based problems in wireless systems have not yet been studied from an attack perspective. The aim of this paper is twofold: (i) we consider a regression problem in a wireless setting and show that adversarial attacks can break DL-based approaches, and (ii) we analyze the effectiveness of adversarial training as a defensive technique in adversarial settings and show that the robustness of DL-based wireless systems against attacks improves significantly. Specifically, the wireless application considered in this paper is DL-based power allocation in the downlink of a multi-cell massive multiple-input multiple-output system, and the adversary's aim is to make the DL model yield infeasible solutions. We extend gradient-based adversarial attacks -- the fast gradient sign method (FGSM), momentum iterative FGSM, and projected gradient descent -- to analyze the susceptibility of the considered wireless application with and without adversarial training. We analyze the performance of the deep neural network (DNN) models under these attacks, where the adversarial perturbations are crafted using both white-box and black-box attacks.
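For concreteness, the one-step FGSM variant adapted to a regression loss looks roughly as below; the iterative and momentum variants repeat this step with projection. The `model`, the MSE loss, and `eps` are placeholders, and enforcing the paper's specific goal of producing infeasible power allocations would require a different loss.

```python
import torch
import torch.nn.functional as F

def fgsm_regression(model, x, y, eps=0.01):
    """One-step FGSM against a regression model: perturb the input in the direction
    that increases the MSE between the model's prediction and the reference output."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.mse_loss(model(x_adv), y)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).detach()
```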
Although deep learning has made remarkable progress in processing various types of data such as images, text and speech, they are known to be susceptible to adversarial perturbations: perturbations specifically designed and added to the input to make the target model produce erroneous output. Most of the existing studies on generating adversarial perturbations attempt to perturb the entire input indiscriminately. In this paper, we propose ExploreADV, a general and flexible adversarial attack system that is capable of modeling regional and imperceptible attacks, allowing users to explore various kinds of adversarial examples as needed. We adapt and combine two existing boundary attack methods, DeepFool and the Brendel & Bethge attack, and propose a mask-constrained adversarial attack system, which generates minimal adversarial perturbations under the pixel-level constraints, namely "mask-constraints". We study different ways of generating such mask-constraints considering the variance and importance of the input features, and show that our adversarial attack system offers users good flexibility to focus on sub-regions of inputs, explore imperceptible perturbations and understand the vulnerability of pixels/regions to adversarial attacks. We demonstrate our system to be effective based on extensive experiments and user study.
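A minimal sketch of the mask-constraint idea: the perturbation is multiplied by a user-supplied binary mask so that only the selected pixels/regions are attacked. Plain PGD is used here for brevity instead of the adapted DeepFool and Brendel & Bethge attacks, so treat it as an illustration of the constraint, not of ExploreADV itself.

```python
import torch
import torch.nn.functional as F

def masked_pgd(model, x, label, mask, eps=8/255, alpha=2/255, steps=20):
    """Untargeted PGD where the perturbation is confined to `mask` (1 = attackable pixel)."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta * mask), label)
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()
            delta.clamp_(-eps, eps)               # imperceptibility budget inside the mask
            delta.grad.zero_()
    return (x + delta * mask).clamp(0, 1).detach()
```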
Making safe and human-like decisions is an essential capability of autonomous driving systems and learning-based behavior planning is a promising pathway toward this objective. Distinguished from existing learning-based methods that directly output decisions, this work introduces a predictive behavior planning framework that learns to predict and evaluate from human driving data. Concretely, a behavior generation module first produces a diverse set of candidate behaviors in the form of trajectory proposals. Then the proposed conditional motion prediction network is employed to forecast other agents' future trajectories conditioned on each trajectory proposal. Given the candidate plans and associated prediction results, we learn a scoring module to evaluate the plans using maximum entropy inverse reinforcement learning (IRL). We conduct comprehensive experiments to validate the proposed framework on a large-scale real-world urban driving dataset. The results reveal that the conditional prediction model is able to forecast multiple possible future trajectories given a candidate behavior and the prediction results are reactive to different plans. Moreover, the IRL-based scoring module can properly evaluate the trajectory proposals and select close-to-human ones. The proposed framework outperforms other baseline methods in terms of similarity to human driving trajectories. Moreover, we find that the conditional prediction model can improve both prediction and planning performance compared to the non-conditional model, and learning the scoring module is critical to correctly evaluating the candidate plans to align with human drivers.
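As a sketch of the scoring step under stated assumptions: each candidate plan (with its conditional prediction result) is reduced to a feature vector, plans are scored with a linear reward, and the weights are updated with the softmax/cross-entropy objective that maximum-entropy IRL reduces to over a discrete candidate set. Feature extraction and the conditional motion prediction network are placeholders, not the paper's actual modules.

```python
import torch
import torch.nn.functional as F

def maxent_irl_step(weights, features, human_idx, lr=1e-2):
    """One max-entropy IRL update. `features`: (num_candidates, num_features) tensor of
    plan/prediction features; `human_idx`: index of the human-driven plan;
    `weights`: learnable reward weights with requires_grad=True."""
    scores = features @ weights                               # linear scoring of trajectory proposals
    loss = F.cross_entropy(scores.unsqueeze(0), torch.tensor([human_idx]))
    grad, = torch.autograd.grad(loss, weights)
    return (weights - lr * grad).detach().requires_grad_(True)
```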
Although Deep Neural Networks (DNNs) have achieved impressive results in computer vision, their exposed vulnerability to adversarial attacks remains a serious concern. A series of works has shown that by adding elaborate perturbations to images, DNNs could have catastrophic degradation in performance metrics. And this phenomenon does not only exist in the digital space but also in the physical space. Therefore, estimating the security of these DNNs-based systems is critical for safely deploying them in the real world, especially for security-critical applications, e.g., autonomous cars, video surveillance, and medical diagnosis. In this paper, we focus on physical adversarial attacks and provide a comprehensive survey of over 150 existing papers. We first clarify the concept of the physical adversarial attack and analyze its characteristics. Then, we define the adversarial medium, essential to perform attacks in the physical world. Next, we present the physical adversarial attack methods in task order: classification, detection, and re-identification, and introduce their performance in solving the trilemma: effectiveness, stealthiness, and robustness. In the end, we discuss the current challenges and potential future directions.
Context: Machine learning (ML) has been at the heart of many innovations over the past few years. However, including it in so-called "safety-critical" systems, such as automotive or aeronautic systems, has proven to be very challenging, since the paradigm shift brought by ML completely changes traditional certification approaches. Objective: This paper aims to elucidate the challenges related to the certification of ML-based safety-critical systems, as well as the solutions proposed in the literature to tackle them, answering the question "How to certify machine learning based safety-critical systems?". Method: We conducted a Systematic Literature Review (SLR) of research papers published between 2015 and 2020, covering topics related to the certification of ML systems. In total, we identified 217 papers covering the topics considered to be the main pillars of ML certification: robustness, uncertainty, explainability, verification, safe reinforcement learning, and direct certification. We analyzed the main trends and problems of each subfield and provided summaries of the extracted papers. Results: The SLR results highlight the enthusiasm of the community for this subject, as well as the lack of diversity in terms of datasets and model types. They also emphasize the need to further develop connections between academia and industry to deepen the study of the domain. Finally, they illustrate the necessity of building connections between the above-mentioned main pillars, which are for now mostly studied separately. Conclusion: We highlight the efforts currently deployed to enable the certification of ML-based software systems and discuss some future research directions.
Recent studies show that the state-of-the-art deep neural networks (DNNs) are vulnerable to adversarial examples, resulting from small-magnitude perturbations added to the input. Given that emerging physical systems are using DNNs in safety-critical situations, adversarial examples could mislead these systems and cause dangerous situations. Therefore, understanding adversarial examples in the physical world is an important step towards developing resilient learning algorithms. We propose a general attack algorithm, Robust Physical Perturbations (RP2), to generate robust visual adversarial perturbations under different physical conditions. Using the real-world case of road sign classification, we show that adversarial examples generated using RP2 achieve high targeted misclassification rates against standard-architecture road sign classifiers in the physical world under various environmental conditions, including viewpoints. Due to the current lack of a standardized testing method, we propose a two-stage evaluation methodology for robust physical adversarial examples consisting of lab and field tests. Using this methodology, we evaluate the efficacy of physical adversarial manipulations on real objects. With a perturbation in the form of only black and white stickers, we attack a real stop sign, causing targeted misclassification in 100% of the images obtained in lab settings, and in 84.8% of the captured video frames obtained on a moving vehicle (field test) for the target classifier.
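A condensed sketch of the expectation-over-transformations loop at the heart of such robust physical perturbations: a masked perturbation on the sign is optimized so that the targeted class survives randomly sampled physical conditions. The `sample_transform` sampler, the sticker `mask`, and the classifier are placeholders, and RP2's printability and smoothness terms are omitted.

```python
import torch
import torch.nn.functional as F

def robust_physical_perturbation(model, sign_img, mask, target, sample_transform,
                                 steps=200, lr=0.01):
    """Optimize a masked perturbation so the target class survives a distribution of
    simulated physical transformations (viewpoint, lighting, distance).
    `sign_img`: (1, C, H, W) tensor; `target`: (1,) class index tensor."""
    delta = torch.zeros_like(sign_img, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        t = sample_transform()                       # random simulated physical condition
        logits = model(t(sign_img + mask * delta))   # apply sticker region, then transform
        loss = F.cross_entropy(logits, target)       # drive prediction toward the target class
        opt.zero_grad(); loss.backward(); opt.step()
    return (mask * delta).detach()
```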