Deep learning (DL) systems are increasingly deployed in safety- and security-critical domains including self-driving cars and malware detection, where the correctness and predictability of a system's behavior for corner case inputs are of great importance. Existing DL testing depends heavily on manually labeled data and therefore often fails to expose erroneous behaviors for rare inputs. We design, implement, and evaluate DeepXplore, the first whitebox framework for systematically testing real-world DL systems. First, we introduce neuron coverage for systematically measuring the parts of a DL system exercised by test inputs. Next, we leverage multiple DL systems with similar functionality as cross-referencing oracles to avoid manual checking. Finally, we demonstrate how finding inputs for DL systems that both trigger many differential behaviors and achieve high neuron coverage can be represented as a joint optimization problem and solved efficiently using gradient-based search techniques. DeepXplore efficiently finds thousands of incorrect corner case behaviors (e.g., self-driving cars crashing into guard rails and malware masquerading as benign software) in state-of-the-art DL models with thousands of neurons trained on five popular datasets including ImageNet and Udacity self-driving challenge data. For all tested DL models, on average, DeepXplore generated one test input demonstrating incorrect behavior within one second while running only on a commodity laptop. We further show that the test inputs generated by DeepXplore can also be used to retrain the corresponding DL model to improve the model's accuracy by up to 3%.
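The abstract describes neuron coverage and the joint optimization only at a high level. The sketch below is a minimal illustration of how neuron coverage could be computed over a set of test inputs; the `activation_fn` interface and the 0.25 activation threshold are assumptions made for the sketch, not details taken from the paper.

```python
import numpy as np

def neuron_coverage(activation_fn, inputs, threshold=0.25):
    """Fraction of neurons whose scaled activation exceeds `threshold`
    for at least one input. `activation_fn(x)` is a hypothetical hook
    returning one 1-D NumPy array of activations per layer."""
    covered = None
    for x in inputs:
        flags = []
        for act in activation_fn(x):
            lo, hi = act.min(), act.max()
            scaled = (act - lo) / (hi - lo + 1e-12)  # scale layer to [0, 1]
            flags.append(scaled > threshold)
        flat = np.concatenate(flags)
        covered = flat if covered is None else (covered | flat)
    return covered.mean()
```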
Due to the increasing usage of machine learning (ML) techniques in security- and safety-critical domains, such as autonomous systems and medical diagnosis, ensuring correct behavior of ML systems, especially for different corner cases, is of growing importance. In this paper, we propose a generic framework for evaluating security and robustness of ML systems using different real-world safety properties. We further design, implement and evaluate VeriVis, a scalable methodology that can verify a diverse set of safety properties for state-of-the-art computer vision systems with only blackbox access. VeriVis leverages different input space reduction techniques for efficient verification of different safety properties. VeriVis is able to find thousands of safety violations in fifteen state-of-the-art computer vision systems including ten Deep Neural Networks (DNNs) such as Inception-v3 and Nvidia's Dave self-driving system with thousands of neurons as well as five commercial third-party vision APIs including Google vision and Clarifai for twelve different safety properties. Furthermore, VeriVis can successfully verify local safety properties, on average, for around 31.7% of the test images. VeriVis finds up to 64.8x more violations than existing gradient-based methods that, unlike VeriVis, cannot ensure non-existence of any violations. Finally, we show that retraining using the safety violations detected by VeriVis can reduce the average number of violations up to 60.2%.
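VeriVis's reduction techniques are not detailed in the abstract; the following sketch only illustrates the general idea of exhaustively checking a safety property with black-box access over a discretized transformation space. The rotation property, the one-degree step, and the `predict` interface are assumptions, not the paper's actual reduction method.

```python
import numpy as np
from scipy.ndimage import rotate

def verify_rotation_invariance(predict, image, max_degrees=5):
    """Exhaustively check a discretized rotation property with black-box
    access only: the predicted label must not change for any integer
    rotation in [-max_degrees, +max_degrees]. Returns violating angles."""
    reference = predict(image)
    violations = []
    for angle in range(-max_degrees, max_degrees + 1):
        rotated = rotate(image, angle, reshape=False, mode="nearest")
        if predict(rotated) != reference:
            violations.append(angle)
    return violations
```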
The success of deep neural networks (DNNs) in real-world applications benefits from the abundance of pre-trained models. However, backdoored pre-trained models can pose a significant Trojan threat to the deployment of downstream DNNs. Existing DNN testing methods are mainly designed to find incorrect corner-case behaviors in adversarial settings, but fail to uncover backdoors crafted by strong Trojan attacks. Observations of Trojaned network behavior show that the backdoor is not reflected by a single compromised neuron, as assumed in prior work, but is attributed to critical neural paths in the activation intensity and frequency of multiple neurons. This work formulates DNN backdoor testing and proposes a testing framework based on this observation. Through differential fuzzing of critical neurons from a small number of benign examples, it identifies Trojan paths, in particular the critical ones, and generates backdoor test examples by stimulating the critical neurons along the identified paths. Extensive experiments demonstrate the superiority of the framework, with higher detection performance than existing methods. It better detects backdoors injected via stealthy blending and adaptive attacks, which existing methods fail to detect. Moreover, our experiments show that the framework can reveal potential backdoors in models from model zoos.
Deep learning takes advantage of large datasets and computationally efficient training algorithms to outperform other approaches at various machine learning tasks. However, imperfections in the training phase of deep neural networks make them vulnerable to adversarial samples: inputs crafted by adversaries with the intent of causing deep neural networks to misclassify. In this work, we formalize the space of adversaries against deep neural networks (DNNs) and introduce a novel class of algorithms to craft adversarial samples based on a precise understanding of the mapping between inputs and outputs of DNNs. In an application to computer vision, we show that our algorithms can reliably produce samples correctly classified by human subjects but misclassified into specific targets by a DNN with a 97% adversarial success rate while only modifying on average 4.02% of the input features per sample. We then evaluate the vulnerability of different sample classes to adversarial perturbations by defining a hardness measure. Finally, we describe preliminary work outlining defenses against adversarial samples by defining a predictive measure of distance between a benign input and a target classification.
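The paper's algorithm builds a saliency map from exact forward derivatives (the network Jacobian); the sketch below conveys the same intuition with a crude finite-difference approximation against a generic `score_fn`, which is an assumption made purely for illustration and not the paper's procedure.

```python
import numpy as np

def saliency_map(score_fn, x, target, eps=1e-3):
    """Approximate how strongly each input feature pushes the prediction
    towards a chosen target class. `score_fn(x)` is assumed to return a
    vector of class scores."""
    base = score_fn(x)
    flat = x.reshape(-1).astype(float)
    sal = np.zeros(flat.size)
    for i in range(flat.size):
        perturbed = flat.copy()
        perturbed[i] += eps
        grad = (score_fn(perturbed.reshape(x.shape)) - base) / eps
        d_t = grad[target]            # effect on the target class
        d_o = grad.sum() - d_t        # combined effect on the other classes
        # A feature is salient if it raises the target score while lowering
        # the score of every other class on aggregate.
        sal[i] = d_t * abs(d_o) if d_t > 0 and d_o < 0 else 0.0
    return sal.reshape(x.shape)
```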
Context: Machine learning (ML) has been at the heart of many innovations over the past few years. However, including it in so-called "safety-critical" systems, such as automotive or aeronautic systems, has proven very challenging, since the paradigm shift brought by ML completely changes traditional certification approaches. Objective: This paper aims to elucidate the challenges related to the certification of ML-based safety-critical systems, as well as the solutions proposed in the literature to address them, answering the question "How to certify machine learning based safety-critical systems?". Method: We conducted a Systematic Literature Review (SLR) of research papers published between 2015 and 2020, covering topics related to the certification of ML systems. In total, we identified 217 papers covering the topics considered the main pillars of ML certification: robustness, uncertainty, explainability, verification, safe reinforcement learning, and direct certification. We analyzed the main trends and problems of each sub-field and provide summaries of the extracted papers. Results: The SLR results highlight the enthusiasm of the community for this subject, as well as the lack of diversity in terms of datasets and model types. They also emphasize the need to further develop connections between academia and industry to deepen the study of the domain. Finally, they illustrate the necessity of building connections between the above-mentioned main pillars, which have so far mostly been studied separately. Conclusion: We highlight the efforts currently deployed to enable the certification of ML-based software systems, and discuss some future research directions.
Deep neural networks (DNNs) have been widely used in many domains, including image processing, medical diagnosis, and autonomous driving. However, DNNs can exhibit erroneous behaviors that may lead to serious errors, especially when used in safety-critical systems. Inspired by testing techniques for traditional software systems, researchers have proposed neuron coverage criteria, by analogy with source code coverage, to guide the testing of DNN models. Despite very active research on DNN coverage, several recent studies have questioned the usefulness of such criteria in guiding DNN testing. Further, from a practical standpoint, these criteria are white-box as they require access to the internals or training data of the DNN model, which in many contexts is neither feasible nor convenient. In this paper, we investigate black-box input diversity metrics as an alternative to white-box coverage criteria. To this end, we first select and adapt three diversity metrics and study, in a controlled manner, their capacity to measure actual diversity in input sets. We then analyze their statistical association with fault detection using two datasets and three DNN models. We further compare diversity with state-of-the-art white-box coverage criteria. Our experiments show that relying on the diversity of image features embedded in test input sets is a more reliable indicator than coverage criteria to effectively guide the testing of DNNs. Indeed, we find that one of our selected black-box diversity metrics far outperforms existing coverage criteria in terms of fault-revealing capability and computation time. The results also confirm the suspicion that state-of-the-art coverage metrics are not adequate to guide the construction of test input sets to detect as many faults as possible using natural inputs.
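The three diversity metrics are not named in the abstract. As one plausible example of a black-box diversity measure over extracted image features, the sketch below computes a geometric-diversity-style score; the log-determinant formulation and the feature-matrix interface are illustrative assumptions.

```python
import numpy as np

def geometric_diversity(features):
    """Diversity of a test set measured as the log-determinant of the
    similarity matrix of its (row-wise) feature vectors: larger values
    mean the inputs span a larger volume in feature space. `features`
    is an (n_inputs, n_features) array, e.g. from a pre-trained extractor."""
    f = features / (np.linalg.norm(features, axis=1, keepdims=True) + 1e-12)
    similarity = f @ f.T
    # A small ridge keeps the determinant well defined for near-duplicates.
    _, logdet = np.linalg.slogdet(similarity + 1e-6 * np.eye(len(f)))
    return logdet

# Example: a diverse set scores higher than a set of duplicated inputs.
rng = np.random.default_rng(0)
diverse = rng.normal(size=(10, 64))
duplicates = np.repeat(rng.normal(size=(1, 64)), 10, axis=0)
assert geometric_diversity(diverse) > geometric_diversity(duplicates)
```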
The outstanding performance of machine learning algorithms and deep neural networks in several perception and control tasks is pushing industry to adopt such technologies in safety-critical applications, such as autonomous robots and self-driving vehicles. At present, however, several issues need to be solved to make deep learning methods more trustworthy, predictable, safe, and secure against adversarial attacks. Although several methods have been proposed to improve the trustworthiness of deep neural networks, most of them are tailored to specific classes of adversarial examples and hence fail to detect other corner cases or unsafe inputs that deviate largely from the training samples. This paper presents a lightweight monitoring architecture based on coverage paradigms to enhance model robustness against different unsafe inputs. In particular, four coverage analysis methods are proposed and tested in the architecture for evaluating multiple detection logics. Experimental results show that the proposed approach is effective in detecting both powerful adversarial examples and out-of-distribution inputs, introducing limited execution time and memory requirements.
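The four coverage analysis methods are not described in the abstract; the sketch below shows one simple coverage-style run-time monitor of the kind the abstract alludes to. The per-neuron range rule, the 5% rejection tolerance, and the `activation_fn` interface are assumptions for illustration.

```python
import numpy as np

class RangeMonitor:
    """Minimal run-time monitor: records per-neuron activation ranges on
    (presumed benign) reference data and flags inputs that push too many
    neurons outside those ranges."""

    def __init__(self, activation_fn):
        self.activation_fn = activation_fn   # assumed to return a 1-D array
        self.low = None
        self.high = None

    def fit(self, reference_inputs):
        acts = np.stack([self.activation_fn(x) for x in reference_inputs])
        self.low, self.high = acts.min(axis=0), acts.max(axis=0)

    def is_suspicious(self, x, tolerance=0.05):
        act = self.activation_fn(x)
        outside = (act < self.low) | (act > self.high)
        return outside.mean() > tolerance
```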
Quantization is one of the most widely applied deep neural network (DNN) compression strategies when deploying trained DNN models on embedded systems or mobile phones. This is owing to its simplicity and adaptability to a wide range of applications and circumstances, as opposed to specific artificial intelligence (AI) accelerators and compilers that are often exclusive to certain specific hardware (e.g., Google Coral Edge TPU). With the growing demand for quantization, ensuring the reliability of this strategy has become a critical challenge. Traditional testing methods, which gather more and more genuine data for better assessment, are often impractical because of the large size of the input space and the high similarity between the original DNN and its quantized counterpart. As a result, advanced assessment strategies have become essential. In this paper, we present DiverGet, a search-based testing framework for quantization assessment. DiverGet defines a space of metamorphic relations that simulate natural distortions on the inputs. It then optimally explores these relations to reveal disagreements among DNNs of different arithmetic precision. We evaluate the performance of DiverGet on state-of-the-art DNNs applied to hyperspectral remote sensing images. We chose remote sensing DNNs because they are increasingly deployed at the edge (e.g., on advanced drones) within critical domains such as climate change research and astronomy. Our results show that DiverGet successfully challenges the robustness of established quantization techniques against naturally occurring shifted data, and outperforms its state-of-the-art competitor, DiffChaser, with a success rate that is (on average) four times higher.
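The core idea of comparing a model and its quantized counterpart under natural distortions can be illustrated with a small differential-testing loop. The two example distortions and the `model(x)`/`quantized_model(x)` label-returning interfaces below are assumptions, not DiverGet's actual relation space or search strategy.

```python
import numpy as np

def quantization_divergence(model, quantized_model, inputs, transforms):
    """Collect disagreements between a model and its quantized version
    under metamorphic distortions of the inputs."""
    disagreements = []
    for x in inputs:
        for t in transforms:
            x_t = t(x)
            if model(x_t) != quantized_model(x_t):
                disagreements.append((x, t.__name__))
    return disagreements

# Illustrative metamorphic distortions (assumed, not the paper's exact set).
def add_gaussian_noise(x, sigma=0.01):
    return x + np.random.normal(0.0, sigma, size=x.shape)

def scale_brightness(x, factor=1.05):
    return np.clip(x * factor, 0.0, 1.0)
```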
Testing Deep Learning (DL) based systems inherently requires large and representative test sets to evaluate whether DL systems generalise beyond their training datasets. Diverse Test Input Generators (TIGs) have been proposed to produce artificial inputs that expose issues of the DL systems by triggering misbehaviours. Unfortunately, such generated inputs may be invalid, i.e., not recognisable as part of the input domain, thus providing an unreliable quality assessment. Automated validators can ease the burden of manually checking the validity of inputs for human testers, although input validity is a concept difficult to formalise and, thus, automate. In this paper, we investigate to what extent TIGs can generate valid inputs, according to both automated and human validators. We conduct a large empirical study, involving 2 different automated validators, 220 human assessors, 5 different TIGs and 3 classification tasks. Our results show that 84% of the artificially generated inputs are valid, according to automated validators, but their expected label is not always preserved. Automated validators reach a good consensus with humans (78% accuracy), but still have limitations when dealing with feature-rich datasets.
As deep image classification applications, e.g., face recognition, become increasingly prevalent in our daily lives, their fairness issues raise growing concern. It is therefore crucial to comprehensively test the fairness of these applications before deployment. Existing fairness testing methods suffer from the following limitations: 1) applicability, i.e., they are only applicable to structured data or text, and cannot handle the high-dimensional and abstract domain sampling at the semantic level of image classification applications; 2) functionality, i.e., they generate unfair samples without providing testing criteria to characterize the fairness adequacy of the model. To fill the gap, we propose DeepFAIT, a systematic fairness testing framework specifically designed for deep image classification applications. DeepFAIT consists of several important components enabling effective fairness testing of deep image classification applications: 1) a neuron selection strategy to identify the fairness-related neurons; 2) a set of multi-granularity adequacy metrics to evaluate a model's fairness; 3) a test selection algorithm to effectively fix fairness issues. We have conducted experiments on widely adopted large-scale face recognition applications, i.e., VGGFace and FairFace. The experimental results confirm that our approach can effectively identify the fairness-related neurons, characterize a model's fairness, and select the most valuable test cases to mitigate the model's fairness issues.
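The abstract does not specify how fairness-related neurons are selected; the sketch below shows one simple heuristic of that flavour, ranking neurons by the gap in their mean activation between two sensitive groups. Both the ranking rule and the `activation_fn` interface are assumptions, not DeepFAIT's actual strategy.

```python
import numpy as np

def fairness_related_neurons(activation_fn, group_a, group_b, top_k=10):
    """Return indices of the neurons whose average activations differ most
    between two sensitive groups of inputs."""
    acts_a = np.stack([activation_fn(x) for x in group_a]).mean(axis=0)
    acts_b = np.stack([activation_fn(x) for x in group_b]).mean(axis=0)
    gap = np.abs(acts_a - acts_b)
    return np.argsort(gap)[::-1][:top_k]   # neurons with the largest gaps
```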
Deep neural network (DNN) models, including those used in safety-critical domains, need to be thoroughly tested to ensure that they can reliably perform well in different scenarios. In this paper, we present structural coverage metrics for testing DNN models, including neuron coverage (NC), k-multisection neuron coverage (KMNC), top-k neuron coverage (TKNC), neuron boundary coverage (NBC), strong neuron activation coverage (SNAC), and modified condition/decision coverage (MC/DC). We evaluate the metrics on realistic DNN models used for perception tasks (including LeNet-1, LeNet-4, LeNet-5, and ResNet20) as well as a network used in autonomy (TaxiNet). We also provide a tool, DNNCov, that can measure test coverage for all of these metrics. DNNCov produces an informative coverage report that allows researchers and practitioners to assess the adequacy of DNN testing, compare different coverage measures, and more conveniently inspect the model's internals during testing.
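As an illustration of one of the listed metrics, the sketch below computes k-multisection neuron coverage (KMNC): each neuron's activation range observed on training data is split into k sections, and coverage is the fraction of sections hit by the test inputs. The `activation_fn` interface and k=100 default are assumptions, not DNNCov's implementation.

```python
import numpy as np

def kmnc(activation_fn, train_inputs, test_inputs, k=100):
    """k-multisection neuron coverage. `activation_fn(x)` is assumed to
    return a flat 1-D array of neuron activations."""
    train_acts = np.stack([activation_fn(x) for x in train_inputs])
    low, high = train_acts.min(axis=0), train_acts.max(axis=0)
    span = np.where(high > low, high - low, 1.0)   # avoid division by zero
    n_neurons = train_acts.shape[1]
    hit = np.zeros((n_neurons, k), dtype=bool)
    for x in test_inputs:
        act = activation_fn(x)
        section = np.clip(np.floor((act - low) / span * k).astype(int), 0, k - 1)
        in_range = (act >= low) & (act <= high) & (high > low)
        hit[np.where(in_range)[0], section[in_range]] = True
    return hit.mean()
```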
Deep learning algorithms have been shown to perform extremely well on many classical machine learning problems. However, recent studies have shown that deep learning, like other machine learning techniques, is vulnerable to adversarial samples: inputs crafted to force a deep neural network (DNN) to provide adversary-selected outputs. Such attacks can seriously undermine the security of the system supported by the DNN, sometimes with devastating consequences. For example, autonomous vehicles can be crashed, illicit or illegal content can bypass content filters, or biometric authentication systems can be manipulated to allow improper access. In this work, we introduce a defensive mechanism called defensive distillation to reduce the effectiveness of adversarial samples on DNNs. We analytically investigate the generalizability and robustness properties granted by the use of defensive distillation when training DNNs. We also empirically study the effectiveness of our defense mechanisms on two DNNs placed in adversarial settings. The study shows that defensive distillation can reduce the effectiveness of sample creation from 95% to less than 0.5% on a studied DNN. Such dramatic gains can be explained by the fact that distillation leads gradients used in adversarial sample creation to be reduced by a factor of 10^30. We also find that distillation increases the average minimum number of features that need to be modified to create adversarial samples by about 800% on one of the DNNs we tested.
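Defensive distillation hinges on training with soft labels produced by a softmax taken at a high temperature T. Below is a minimal sketch of that softening step; the T=20 default is an assumed illustrative value, not necessarily the setting used in the paper's experiments.

```python
import numpy as np

def softened_labels(logits, temperature=20.0):
    """Soft labels for distillation: a softmax at temperature T spreads
    probability mass across classes instead of concentrating it on one."""
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)     # numerical stability
    exp = np.exp(z)
    return exp / exp.sum(axis=-1, keepdims=True)
```

In the full defence, the distilled network is trained on these soft labels at the same temperature and then deployed at temperature 1, which is what shrinks the gradients available to an attacker.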
Deep neural network (DNN) applications are increasingly becoming part of our everyday lives, from medical applications to autonomous cars. Traditional validation of DNNs relies on accuracy measures; however, the existence of adversarial examples has highlighted the limitations of these accuracy measures, raising concerns especially when DNNs are integrated into safety-critical systems. In this paper, we present HOMRS, an approach to boost metamorphic testing by automatically building a small, optimized set of high-order metamorphic relations from an initial set of elementary metamorphic relations. The backbone of HOMRS is a multi-objective search; it leverages ideas drawn from traditional systems testing, such as code coverage, test case and path diversity, and input validation. We applied HOMRS to MNIST/LeNet and SVHN/VGG, and we report evidence that it builds a small yet effective set of high-order transformations that generalize well to the input data distribution. Moreover, compared to a similar generation technique such as DeepXplore, we show that our distribution-aware approach is more effective, producing valid transformations from an uncertainty-quantification point of view, while requiring less computation time by exploiting the generalization ability of the approach.
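The key notion of a high-order metamorphic relation, the composition of elementary relations that should leave the prediction unchanged, can be illustrated in a few lines. The two elementary relations and the violation check below are assumptions for the sketch, not HOMRS's search or its actual relation set.

```python
import numpy as np

def compose(*relations):
    """Build a high-order metamorphic relation by composing elementary ones."""
    def high_order(x):
        for r in relations:
            x = r(x)
        return x
    return high_order

# Illustrative elementary relations (assumed, not the paper's exact set).
shift = lambda x: np.roll(x, 1, axis=-1)
brighten = lambda x: np.clip(x + 0.05, 0.0, 1.0)

def violates(model, x, relation):
    """A violation occurs when the transformed input changes the label."""
    return model(relation(x)) != model(x)

shift_then_brighten = compose(shift, brighten)
```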
Deep neural networks (DNNs) have demonstrated superior performance over classical machine learning to support many features in safety-critical systems. Although DNNs are now widely used in such systems (e.g., self-driving cars), there is limited progress regarding automated support for functional safety analysis in DNN-based systems. For example, the identification of root causes of errors, to enable both risk analysis and DNN retraining, remains an open problem. In this paper, we propose SAFE, a black-box approach to automatically characterize the root causes of DNN errors. SAFE relies on a transfer learning model pre-trained on ImageNet to extract the features from error-inducing images. It then applies a density-based clustering algorithm to detect arbitrarily shaped clusters of images modeling plausible causes of error. Last, clusters are used to effectively retrain and improve the DNN. The black-box nature of SAFE is motivated by our objective not to require changes or even access to the DNN internals to facilitate adoption. Experimental results show the superior ability of SAFE in identifying different root causes of DNN errors based on case studies in the automotive domain. It also yields significant improvements in DNN accuracy after retraining, while saving significant execution time and memory when compared to alternatives.
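The pipeline of extracting features from error-inducing images and grouping them with density-based clustering can be sketched as below. The abstract only says "a density-based clustering algorithm"; the use of DBSCAN and the eps/min_samples values are assumptions, not SAFE's tuned configuration.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_error_images(features, eps=0.5, min_samples=5):
    """Group error-inducing images into arbitrarily shaped clusters of
    plausible root causes. `features` is an (n_images, n_features) array,
    e.g. embeddings from an ImageNet pre-trained model."""
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(features)
    clusters = {}
    for idx, label in enumerate(labels):
        if label == -1:            # DBSCAN marks outliers with -1
            continue
        clusters.setdefault(label, []).append(idx)
    return clusters
```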
When deep neural networks (DNNs) are used in safety-critical systems, engineers should determine the safety risks associated with failures (i.e., erroneous outputs) observed during testing. For DNNs processing images, engineers visually inspect all failure-inducing images to determine common characteristics among them. Such characteristics correspond to hazard-triggering events (e.g., low illumination) that are essential inputs for safety analysis. Though informative, this activity is expensive and error-prone. To support such safety analysis practices, we propose SEDE, a technique that generates readable descriptions of the commonalities among failure-inducing, real-world images and improves the DNN through effective retraining. SEDE leverages the availability of simulators, which are commonly used for cyber-physical systems. It relies on genetic algorithms to drive simulators towards generating images that are similar to the failure-inducing, real-world images in the test set. It then employs rule learning algorithms to derive expressions that capture the commonalities in terms of simulator parameter values. The derived expressions are then used to generate additional images to retrain and improve the DNN. With DNNs performing in-vehicle sensing tasks, SEDE successfully characterized hazard-triggering events leading to a drop in DNN accuracy. In addition, SEDE enabled retraining that led to significant improvements in DNN accuracy, up to 18 percentage points.
The rapid and widespread adoption of deep neural networks (DNNs) calls for methods to test their behaviour, and many testing approaches have successfully revealed misbehaviour of DNNs. However, it is relatively unclear what can be done to correct such behaviour after it is revealed, as retraining involves costly data collection and does not guarantee resolution of the underlying issue. This paper introduces Arachne, a novel program repair technique for DNNs that directly repairs DNNs using their input-output pairs as a specification. Arachne localizes neural weights on which it can generate effective patches and uses differential evolution to optimize the localized weights and correct the misbehaviour. An empirical study using different benchmarks shows that Arachne can fix specific misclassifications of a DNN without significantly reducing general accuracy. On average, patches generated by Arachne generalize to 61.3% of unseen misbehaviour, whereas those generated by a state-of-the-art DNN repair technique generalize to only 10.2%, and sometimes to none, while taking far more time than Arachne. We also show that Arachne can address fairness issues by debiasing a gender classification model. Finally, we successfully apply Arachne to a text sentiment analysis model to show that it generalizes beyond convolutional neural networks.
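The repair step, optimizing a small set of localized weights with differential evolution so that misclassified inputs are fixed while correct behaviour is retained, can be sketched as follows. The `model_with(weights)` rebuild hook, the search radius, and the equal weighting of the repair and retain terms are assumptions for illustration, not Arachne's actual fitness function.

```python
import numpy as np
from scipy.optimize import differential_evolution

def repair_weights(loss_fn, initial_weights, search_radius=2.0):
    """Optimize a handful of localized weights with differential evolution.
    `loss_fn(weights)` is assumed to score a candidate patch."""
    bounds = [(w - search_radius, w + search_radius) for w in initial_weights]
    result = differential_evolution(loss_fn, bounds, maxiter=100, seed=0)
    return result.x

def make_loss(model_with, neg_inputs, neg_labels, pos_inputs, pos_labels):
    """Combine a 'repair' term (misclassified inputs to fix) with a 'retain'
    term (correctly classified inputs to preserve)."""
    def loss(weights):
        m = model_with(weights)              # hypothetical: model with patch applied
        repair = np.mean([m(x) != y for x, y in zip(neg_inputs, neg_labels)])
        retain = np.mean([m(x) != y for x, y in zip(pos_inputs, pos_labels)])
        return repair + retain
    return loss
```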
Marine ecosystems and their fish habitats are becoming increasingly important due to their integral role in providing a valuable food source and conservation benefits. Because they are remote and difficult to access, marine environments and fish habitats are often monitored using underwater cameras. These cameras generate a massive volume of digital data, which cannot be efficiently analysed by current manual processing methods that involve a human observer. Deep learning (DL) is a cutting-edge AI technology that has demonstrated unprecedented performance in analysing visual data. Despite its application to a myriad of domains, its use in underwater fish habitat monitoring is still being explored. In this paper, we provide a tutorial that covers the key concepts of DL and helps the reader gain a high-level understanding of how DL works. The tutorial also explains a step-by-step procedure for developing DL algorithms for challenging applications such as underwater fish monitoring. In addition, we provide a comprehensive survey of key deep learning techniques for fish habitat monitoring, including classification, counting, localization, and segmentation. Furthermore, we survey publicly available underwater fish datasets and compare various DL techniques in the underwater fish monitoring domain. We also discuss some challenges and opportunities in the emerging field of deep learning for fish habitat processing. This paper is written to serve as a tutorial for marine scientists who would like to grasp a high-level understanding of DL, develop it for their applications by following our step-by-step tutorial, and see how it can evolve to facilitate their research efforts. At the same time, it is suitable for computer scientists who would like to survey state-of-the-art DL-based methodologies for fish habitat monitoring.
Although deep neural networks (DNNs) have achieved great success in many tasks, they can often be fooled by adversarial examples that are generated by adding small but purposeful distortions to natural examples. Previous studies to defend against adversarial examples mostly focused on refining the DNN models, but have either shown limited success or required expensive computation. We propose a new strategy, feature squeezing, that can be used to harden DNN models by detecting adversarial examples. Feature squeezing reduces the search space available to an adversary by coalescing samples that correspond to many different feature vectors in the original space into a single sample. By comparing a DNN model's prediction on the original input with that on squeezed inputs, feature squeezing detects adversarial examples with high accuracy and few false positives. This paper explores two feature squeezing methods: reducing the color bit depth of each pixel and spatial smoothing. These simple strategies are inexpensive and complementary to other defenses, and can be combined in a joint detection framework to achieve high detection rates against state-of-the-art attacks.
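The two squeezers named in the abstract, bit-depth reduction and spatial smoothing, are simple enough to sketch directly. The `predict_probs` interface, the 4-bit default, the 2x2 median filter, and the L1 threshold of 1.0 below are illustrative assumptions rather than the paper's tuned configuration.

```python
import numpy as np
from scipy.ndimage import median_filter

def reduce_bit_depth(x, bits=4):
    """Squeeze colour resolution: keep only `bits` bits per pixel (inputs in [0, 1])."""
    levels = 2 ** bits - 1
    return np.round(x * levels) / levels

def is_adversarial(predict_probs, x, threshold=1.0):
    """Flag an input whose prediction (probability vector) moves too far,
    in L1 distance, once the input is squeezed."""
    original = predict_probs(x)
    squeezed = [reduce_bit_depth(x), median_filter(x, size=2)]
    distances = [np.abs(original - predict_probs(s)).sum() for s in squeezed]
    return max(distances) > threshold
```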
The adversarial input generation problem has become central in establishing the robustness and trustworthiness of deep neural nets, especially when they are used in safety-critical application domains such as autonomous vehicles and precision medicine. This is also practically challenging for multiple reasons: scalability is a common issue owing to large-sized networks, and the generated adversarial inputs often lack important qualities such as naturalness and output-impartiality. We relate this problem to the task of patching neural nets, i.e. applying small changes in some of the network's weights so that the modified net satisfies a given property. Intuitively, a patch can be used to produce an adversarial input because the effect of changing the weights can also be brought about by changing the inputs instead. This work presents a novel technique to patch neural networks and an innovative approach of using it to produce perturbations of inputs which are adversarial for the original net. We note that the proposed solution is significantly more effective than the prior state-of-the-art techniques.
Context: Machine learning (ML) may enable effective automated test generation. Objective: We characterize emerging research, examining testing practices, researcher goals, applied ML techniques, evaluation, and challenges. Method: We perform a systematic literature review on a sample of 97 publications. Results: ML generates input for system, GUI, unit, performance, and combinatorial testing, or improves the performance of existing generation methods. ML is also used to generate test verdicts, property-based oracles, and expected output oracles. Supervised learning, often based on neural networks, and reinforcement learning, often based on Q-learning, are common, and some publications also employ unsupervised or semi-supervised learning. (Semi-/Un-)supervised approaches are evaluated using both traditional testing metrics and ML-related metrics (e.g., accuracy), while reinforcement learning is often evaluated using testing metrics tied to the reward function. Conclusion: Work to date shows great promise, but there are open challenges regarding training data, retraining, scalability, evaluation complexity, the ML algorithms employed and how they are applied, benchmarks, and replicability. Our findings can serve as a roadmap and inspiration for researchers in this area.