While deep neural networks (DNNs) have demonstrated impressive performance in solving many challenging tasks, their demand for computational power and storage space limits their deployment on resource-constrained devices. Quantization is one of the most promising techniques to address this issue, converting the weights and/or activation tensors of a DNN into lower bit-width fixed-point numbers. Although quantization has been empirically shown to introduce only minor accuracy loss, it lacks formal guarantees to that effect, which matters especially when the resulting quantized neural networks (QNNs) are deployed in safety-critical applications. The majority of existing verification methods focus exclusively on individual neural networks, either DNNs or QNNs. While promising attempts have been made to verify the quantization error bound between a DNN and its quantized counterpart, they are incomplete and, more importantly, do not support fully quantized neural networks: they handle only networks whose weights are quantized. To fill this gap, in this work we propose QEBVerif, a quantization error bound verification method for networks in which both weights and activation tensors are quantized. QEBVerif consists of two analyses: a differential reachability analysis (DRA) and a mixed-integer linear programming (MILP) based verification method. DRA performs a difference analysis between the DNN and its quantized counterpart layer by layer to efficiently compute a tight quantization error interval. If DRA fails to prove the error bound, we encode the verification problem into an equivalent MILP problem that can be solved by off-the-shelf solvers. Thus, QEBVerif is sound, complete, and arguably efficient. We implement QEBVerif as a tool and conduct extensive experiments demonstrating its effectiveness and efficiency.
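To make the difference analysis concrete, here is a minimal numeric sketch of the idea behind DRA for a single affine layer: propagate an input box through the layer and its quantized twin, and subtract the two boxes. The names, the symmetric uniform quantization scheme, and the subtraction-of-boxes baseline are ours for illustration; QEBVerif's actual DRA propagates the difference interval directly, which is what makes its error bounds tight.

```python
import numpy as np

def quantize(x, bits=8):
    """Symmetric uniform quantization to signed fixed-point (assumed scheme)."""
    scale = np.max(np.abs(x)) / (2 ** (bits - 1) - 1)
    q = np.clip(np.round(x / scale), -(2 ** (bits - 1)), 2 ** (bits - 1) - 1)
    return q * scale  # dequantized value actually used in arithmetic

def affine_interval(W, b, lo, hi):
    """Interval propagation through y = W x + b (standard sign-split)."""
    Wp, Wn = np.maximum(W, 0), np.minimum(W, 0)
    return Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b

# One layer; propagate the DNN and its quantized twin side by side.
rng = np.random.default_rng(0)
W, b = rng.normal(size=(4, 3)), rng.normal(size=4)
Wq, bq = quantize(W), quantize(b)

lo, hi = -np.ones(3), np.ones(3)                 # input region
f_lo, f_hi = affine_interval(W, b, lo, hi)       # float bounds
q_lo, q_hi = affine_interval(Wq, bq, lo, hi)     # quantized bounds

# A sound (but loose) per-layer error interval: difference of the two boxes.
err_lo, err_hi = q_lo - f_hi, q_hi - f_lo
print("quantization error interval per neuron:", list(zip(err_lo, err_hi)))
```

Subtracting independently computed boxes ignores the correlation between the two networks; tracking the difference as its own interval, as DRA does, is precisely what removes that slack.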
With the increasing application of deep learning in mission-critical systems, there is a growing need to formally guarantee the behavior of neural networks. Indeed, many approaches for verifying neural networks have recently been proposed, but these generally struggle with limited scalability or insufficient precision. A key component of many state-of-the-art verification schemes is computing lower and upper bounds on the values that neurons in the network can attain for a specific input domain — and the tighter these bounds, the more likely the verification is to succeed. Many common algorithms for computing these bounds are variants of the symbolic-bound propagation method; among them, approaches that utilize a process called back-substitution are particularly successful. In this paper, we present a method for making back-substitution produce tighter bounds. To achieve this, we formulate and minimize the imprecision errors incurred during back-substitution. Our technique is general, in the sense that it can be integrated into numerous existing symbolic-bound propagation techniques with only minor modifications. We implement our approach as a proof-of-concept tool and obtain favorable results compared to state-of-the-art verifiers that perform back-substitution.
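As an illustration of the back-substitution step these methods share, the sketch below bounds a linear functional of one ReLU layer by substituting DeepPoly-style neuron relaxations and then the preceding affine layer; the imprecision that the paper formulates and minimizes enters exactly where the relaxations are substituted. Function names and the relaxation choices are ours, and a real verifier recurses through all layers rather than just one.

```python
import numpy as np

def relu_relaxation(l, u):
    """DeepPoly-style linear bounds  al*z+bl <= relu(z) <= au*z+bu  on [l, u]."""
    denom = np.where(u > l, u - l, 1.0)
    au = np.where(u <= 0, 0.0, np.where(l >= 0, 1.0, u / denom))
    bu = np.where((l < 0) & (u > 0), -l * au, 0.0)
    al = np.where(u <= 0, 0.0, np.where(l >= 0, 1.0, (u >= -l).astype(float)))
    bl = np.zeros_like(l)
    return al, bl, au, bu

def upper_bound_via_backsub(c, W1, b1, l1, u1, x_lo, x_hi):
    """Upper bound on c . relu(W1 x + b1) over the input box."""
    al, bl, au, bu = relu_relaxation(l1, u1)
    cp, cn = np.maximum(c, 0), np.minimum(c, 0)
    coeff_z = cp * au + cn * al            # pick upper/lower relaxation per sign of c
    const = cp @ bu + cn @ bl              # <-- the imprecision enters here
    coeff_x = coeff_z @ W1                 # back-substitute z = W1 x + b1 (exact)
    const += coeff_z @ b1
    return coeff_x @ np.where(coeff_x >= 0, x_hi, x_lo) + const

rng = np.random.default_rng(1)
W1, b1, c = rng.normal(size=(5, 3)), rng.normal(size=5), rng.normal(size=5)
x_lo, x_hi = -np.ones(3), np.ones(3)
Wp, Wn = np.maximum(W1, 0), np.minimum(W1, 0)          # pre-activation bounds
l1, u1 = Wp @ x_lo + Wn @ x_hi + b1, Wp @ x_hi + Wn @ x_lo + b1
print(upper_bound_via_backsub(c, W1, b1, l1, u1, x_lo, x_hi))
```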
The robustness of deep neural networks is crucial to modern AI-enabled systems and should be formally verified. Sigmoid-like neural networks have been adopted in a wide range of applications. Due to their non-linearity, sigmoid-like activation functions are usually over-approximated for efficient verification, which inevitably introduces imprecision. Considerable efforts have been devoted to finding so-called tighter approximations in order to obtain more precise verification results. However, existing tightness definitions are heuristic and lack a theoretical foundation. We conduct a thorough empirical analysis of existing neuron-wise characterizations of tightness and reveal that they are superior only on specific neural networks. We then introduce the notion of network-wise tightness as a unified tightness definition and show that computing network-wise tightness is a complex non-convex optimization problem. We bypass the complexity from different perspectives via two efficient, provably tightest approximations. The results demonstrate the promising performance of our approach over the state of the art: (i) achieving up to 251.28% improvement on certified lower robustness bounds; and (ii) exhibiting notably more precise verification results on convolutional networks.
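For concreteness, here is one simple neuron-wise linear relaxation of the sigmoid over an interval [l, u] — a tangent line at the midpoint, shifted to be sound — of the kind whose "tightness" the paper analyzes. The construction and the sampled soundness margin are ours for illustration; the paper's provably tightest approximations are derived analytically.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_linear_bounds(l, u, n=10001):
    """Tangent at the midpoint, shifted down/up by the worst gap to sigmoid.
    The shift is found by dense sampling purely for illustration; a verifier
    (and the paper's tightest constructions) would derive it analytically."""
    m = 0.5 * (l + u)
    k = sigmoid(m) * (1.0 - sigmoid(m))           # sigmoid'(m)
    xs = np.linspace(l, u, n)
    line = sigmoid(m) + k * (xs - m)
    b0 = sigmoid(m) - k * m                       # intercept of the unshifted line
    lower = (k, b0 - np.max(line - sigmoid(xs)))  # (slope, intercept) pairs
    upper = (k, b0 + np.max(sigmoid(xs) - line))
    return lower, upper

print(sigmoid_linear_bounds(-2.0, 1.5))
```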
We study the problem of training and certifying adversarially robust quantized neural networks (QNNs). Quantization is a technique for making neural networks more efficient by running them using low-bit integer arithmetic and is therefore commonly adopted in industry. Recent work has shown that floating-point neural networks that have been verified to be robust can become vulnerable to adversarial attacks after quantization, and certification of the quantized representation is necessary to guarantee robustness. In this work, we present quantization-aware interval bound propagation (QA-IBP), a novel method for training robust QNNs. Inspired by advances in robust learning of non-quantized networks, our training algorithm computes the gradient of an abstract representation of the actual network. Unlike existing approaches, our method can handle the discrete semantics of QNNs. Based on QA-IBP, we also develop a complete verification procedure for verifying the adversarial robustness of QNNs, which is guaranteed to terminate and produce a correct answer. Compared to existing approaches, the key advantage of our verification procedure is that it runs entirely on GPU or other accelerator devices. We demonstrate experimentally that our approach significantly outperforms existing methods and establish the new state-of-the-art for training and certifying the robustness of QNNs.
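A minimal float-level sketch of the interval bound propagation underlying QA-IBP: affine layers are bounded by sign-splitting the weights, and activation quantization is over-approximated by snapping bounds outward onto the quantization grid. The scheme, names, and numbers are assumptions of ours; the paper's contribution includes matching the QNN's exact discrete semantics and training on gradients of these worst-case bounds, which this sketch does not do.

```python
import numpy as np

def ibp_affine(W, b, lo, hi):
    """Interval bounds through an affine layer (standard sign-split)."""
    Wp, Wn = np.maximum(W, 0), np.minimum(W, 0)
    return Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b

def ibp_quantized_relu(lo, hi, scale=1.0 / 16, qmax=255):
    """ReLU followed by uniform activation quantization (assumed scheme): the
    bounds stay sound by snapping the lower bound down and the upper bound up
    onto the quantization grid."""
    lo, hi = np.maximum(lo, 0), np.maximum(hi, 0)
    lo = np.clip(np.floor(lo / scale), 0, qmax) * scale
    hi = np.clip(np.ceil(hi / scale), 0, qmax) * scale
    return lo, hi

# Certify that class 0 beats class 1 for all inputs in an L-inf ball.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 4)), rng.normal(size=8)
W2, b2 = rng.normal(size=(2, 8)), rng.normal(size=2)
x = rng.uniform(size=4)
lo, hi = ibp_quantized_relu(*ibp_affine(W1, b1, x - 0.01, x + 0.01))
lo, hi = ibp_affine(W2, b2, lo, hi)
print("robust" if lo[0] > hi[1] else "not certified")
```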
As a new programming paradigm, deep neural networks (DNNs) are increasingly deployed in practice, but their lack of robustness hinders their application in safety-critical domains. While there are techniques for verifying DNNs with formal guarantees, they are limited in scalability and accuracy. In this paper, we propose a novel abstraction-refinement approach for scalable and exact DNN verification. Specifically, we propose a novel abstraction to break down the size of DNNs by over-approximation. The result of verifying the abstract DNN is always conclusive if no spurious counterexample is reported. To eliminate the spurious counterexamples introduced by abstraction, we propose a novel counterexample-guided refinement that refines the abstract DNN to exclude a given spurious counterexample while still over-approximating the original one. Our approach is orthogonal to, and can be integrated with, many existing verification techniques. For demonstration, we implement our approach using two promising and exact tools, Marabou and Planet, as the underlying verification engines, and evaluate it on the widely used benchmarks ACAS Xu, MNIST, and CIFAR-10. The results show that our approach can boost their performance by solving more problems and reducing verification time by up to 86.3% and 78.0%, respectively. Compared to the most relevant abstraction-based approach, our approach is 11.6-26.6 times faster.
Neural networks are increasingly applied in safety-critical domains, so their verification is gaining importance. A large class of recent algorithms for proving input-output relations of feed-forward neural networks is based on linear relaxations and symbolic interval propagation. However, due to variable dependencies, the approximations deteriorate with increasing depth of the network. In this paper we present DPNeurifyFV, a novel branch-and-bound solver for ReLU networks with low-dimensional input space that is based on symbolic interval propagation with fresh variables and input splitting. A new heuristic for choosing the fresh variables ameliorates the dependency problem, while our novel splitting heuristic, in combination with several other improvements, speeds up the branch-and-bound procedure. We evaluate our approach on the airborne collision avoidance networks ACAS Xu and demonstrate runtime improvements over state-of-the-art tools.
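The branch-and-bound skeleton is easy to sketch. Below, a naive interval bound stands in for the much tighter symbolic-interval-with-fresh-variables bound, and boxes are split on their widest input dimension; all names are ours, and a real solver would also search for concrete counterexamples before splitting further.

```python
import numpy as np

def bound_output(net, lo, hi):
    """Placeholder bounding step: naive interval propagation.  DPNeurifyFV uses
    symbolic intervals with fresh variables here, which is far tighter."""
    for i, (W, b) in enumerate(net):
        Wp, Wn = np.maximum(W, 0), np.minimum(W, 0)
        lo, hi = Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b
        if i < len(net) - 1:                          # ReLU on hidden layers only
            lo, hi = np.maximum(lo, 0), np.maximum(hi, 0)
    return lo, hi

def verify(net, lo, hi, threshold, max_splits=10000):
    """Try to prove net(x)[0] <= threshold on the box via input splitting."""
    stack, splits = [(lo, hi)], 0
    while stack:
        l, h = stack.pop()
        if bound_output(net, l, h)[1][0] <= threshold:
            continue                                  # proved on this sub-box
        if splits == max_splits:
            return "unknown"                          # budget exhausted
        d = int(np.argmax(h - l))                     # split the widest dimension
        mid = 0.5 * (l[d] + h[d])
        hi1, lo2 = h.copy(), l.copy()
        hi1[d], lo2[d] = mid, mid
        stack += [(l, hi1), (lo2, h)]
        splits += 1
    return "verified"

rng = np.random.default_rng(1)
net = [(rng.normal(size=(8, 3)), rng.normal(size=8)),
       (rng.normal(size=(1, 8)), rng.normal(size=1))]
print(verify(net, -np.ones(3), np.ones(3), threshold=10.0))
```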
Neural networks have been widely applied in security applications such as spam and phishing detection, intrusion prevention, and malware detection. However, such black-box methods often carry uncertainty and poor explainability in applications, and neural networks themselves are often vulnerable to adversarial attacks. For these reasons, there is a high demand for trustworthy and rigorous methods to verify the robustness of neural network models. Adversarial robustness, which concerns the reliability of a neural network when dealing with maliciously manipulated inputs, is one of the hottest topics in security and machine learning. In this work, we survey the existing literature on adversarial robustness verification of neural networks and collect 39 diverse research works across the machine learning, security, and software engineering communities. We systematically analyze their approaches, including how robustness is formulated, which verification techniques are used, and the strengths and limitations of each technique. We provide a taxonomy from a formal-verification perspective for a comprehensive understanding of the topic, classifying the existing techniques by property specification, problem reduction, and reasoning strategies. We also demonstrate representative techniques applied in existing studies using sample models. Finally, we discuss open problems for future research.
The great advances of deep neural networks (DNNs) have led to state-of-the-art performance on a wide range of tasks. However, recent studies have shown that DNNs are vulnerable to adversarial attacks, which raises great concern when deploying these models to safety-critical applications such as autonomous driving. Different defense approaches have been proposed, including: a) empirical defenses, which can usually be adaptively attacked again without providing robustness certification; and b) certifiably robust approaches, which consist of robustness verification, providing a lower bound of robust accuracy against any attack under certain conditions, and corresponding robust training approaches. In this paper, we systematize certifiably robust approaches together with the related practical and theoretical implications and findings. We also provide the first comprehensive benchmark of existing robustness verification and training approaches on different datasets. In particular, we 1) provide a taxonomy for robustness verification and training approaches and summarize the methodologies of representative algorithms, 2) reveal the characteristics, strengths, limitations, and fundamental connections among these approaches, 3) discuss the current research progress, theoretical barriers, main challenges, and future directions of certifiably robust approaches for DNNs, and 4) provide an open, unified platform to evaluate over 20 representative certifiably robust approaches for a wide range of DNNs.
Convolutional neural networks have gained vast popularity due to their excellent performance in the fields of computer vision, image processing, and others. Unfortunately, it is now well known that convolutional networks often produce erroneous results — for example, small perturbations to the inputs of these networks can cause severe classification errors. Many verification approaches have been proposed in recent years to prove the absence of such errors, but these are typically geared towards fully connected networks and suffer from exacerbated scalability issues when applied to convolutional networks. To address this gap, we present here the CNN-ABS framework, which is particularly aimed at verifying convolutional networks. At the core of CNN-ABS is an abstraction-refinement technique that simplifies the verification problem by removing convolutional connections in a way that soundly creates an over-approximation of the original problem, and restores these connections if the resulting problem becomes too abstract. CNN-ABS is designed to use existing verification engines as a backend, and our evaluation demonstrates that it can significantly boost the performance of a state-of-the-art DNN verification engine, reducing runtime by 15.7% on average.
We present AI2, the first sound and scalable analyzer for deep neural networks. Based on overapproximation, AI2 can automatically prove safety properties (e.g., robustness) of realistic neural networks (e.g., convolutional neural networks). The key insight behind AI2 is to phrase reasoning about safety and robustness of neural networks in terms of classic abstract interpretation, enabling us to leverage decades of advances in that area. Concretely, we introduce abstract transformers that capture the behavior of fully connected and convolutional neural network layers with rectified linear unit activations (ReLU), as well as max pooling layers. This allows us to handle real-world neural networks, which are often built out of those types of layers. We present a complete implementation of AI2 together with an extensive evaluation on 20 neural networks. Our results demonstrate that: (i) AI2 is precise enough to prove useful specifications (e.g., robustness), (ii) AI2 can be used to certify the effectiveness of state-of-the-art defenses for neural networks, (iii) AI2 is significantly faster than existing analyzers based on symbolic analysis, which often take hours to verify simple fully connected networks, and (iv) AI2 can handle deep convolutional networks, which are beyond the reach of existing methods.
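As a small example of an abstract transformer in AI2's sense, the Box-domain transformer for max pooling is exact on intervals: the max of independent intervals [l_i, u_i] ranges over [max_i l_i, max_i u_i]. Names are ours. Affine layers, by contrast, lose precision under Box, which is one reason AI2 favors richer domains such as zonotopes.

```python
import numpy as np

def box_maxpool(lo, hi, pool=2):
    """Interval abstract transformer for 1-D max pooling: exact in the Box
    domain, since max over a window maps [l_i,u_i] to [max l_i, max u_i]."""
    return lo.reshape(-1, pool).max(axis=1), hi.reshape(-1, pool).max(axis=1)

lo = np.array([-1.0, 0.5, -2.0, 0.0])
hi = np.array([ 1.0, 2.0, -1.0, 3.0])
print(box_maxpool(lo, hi))   # -> ([0.5, 0.0], [2.0, 3.0])
```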
This paper presents a new reachability analysis tool to compute an interval over-approximation of the output set of a feed-forward neural network under given input uncertainty. The proposed approach adapts to neural networks an existing mixed-monotonicity method for the reachability analysis of dynamical systems, and applies it to all possible partial networks within the given neural network. This ensures that the intersection of the obtained results is the tightest interval over-approximation of the output of each layer that can be obtained using mixed monotonicity. Unlike other tools in the literature that focus on small classes of piecewise-affine or monotone activation functions, the main strength of our approach is its generality: it can handle neural networks with any Lipschitz-continuous activation function. In addition, the simplicity of the proposed framework allows users to very easily add unimplemented activation functions, by simply providing the function, its derivative, and the global extrema of the derivative with their corresponding arguments. Our algorithm is tested and compared to five other interval-based tools on 1000 randomly generated neural networks for four activation functions (ReLU, TanH, ELU, SiLU). We show that our tool always outperforms the interval bound propagation method, and that we obtain tighter output bounds than ReluVal, Neurify, VeriNet, and CROWN (when applicable).
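The mixed-monotonicity idea for a non-monotone activation can be sketched in a few lines: with alpha the global minimum of the derivative (one of the quantities the tool asks users to supply), sigma(x) - alpha*x is nondecreasing, which immediately yields sound output bounds. The grid-based computation of alpha and all names are ours for illustration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def silu(x):
    return x * sigmoid(x)

def dsilu(x):
    s = sigmoid(x)
    return s * (1.0 + x * (1.0 - s))

# Global minimum of silu' -- the kind of derivative extremum the tool asks
# users to provide; a dense grid stands in for the analytic value here.
alpha = min(0.0, float(dsilu(np.linspace(-20, 20, 200001)).min()))

def silu_bounds(l, u):
    """silu(x) - alpha*x is nondecreasing (its derivative is >= 0) and alpha*x
    is nonincreasing, so on [l, u]:
      max silu <= (silu(u) - alpha*u) + alpha*l
      min silu >= (silu(l) - alpha*l) + alpha*u"""
    return silu(l) - alpha * l + alpha * u, silu(u) - alpha * u + alpha * l

print(silu_bounds(-3.0, 1.0))   # sound (if conservative) bounds over [-3, 1]
```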
With the increasing integration of neural networks as components in mission-critical systems, there is an increasing need to ensure that they satisfy various safety and liveness requirements. In recent years, numerous sound and complete verification methods have been proposed, but these typically suffer from severe scalability limitations. Recent work has proposed enhancing such verification techniques with abstraction-refinement capabilities, which have been shown to boost scalability: instead of verifying a large and complex network, the verifier constructs and then verifies a much smaller network, whose correctness implies the correctness of the original network. A shortcoming of such a scheme is that if verifying the smaller network fails, the verifier needs to perform a refinement step that increases the size of the network being verified, and then start verifying the new network from scratch — effectively "wasting" its earlier work on verifying the smaller network. In this paper, we present an enhancement of abstraction-based verification of neural networks, using residual reasoning: the process of utilizing information acquired when verifying an abstract network in order to expedite the verification of a refined network. In essence, the method allows the verifier to store information about parts of the search space in which correct behavior was ensured, and to focus on areas where bugs might be discovered. We implemented our approach as an extension to the Marabou verifier and obtained promising results.
QNNVerifier is the first open-source tool for verifying implementations of neural networks that takes into account the finite word-length (i.e., quantization) of their operands. The novel support for quantization is achieved by employing state-of-the-art software model checking (SMC) techniques. It translates the implementation of neural networks to a decidable fragment of first-order logic based on satisfiability modulo theories (SMT). The effects of fixed- and floating-point operations are represented through direct implementations given a hardware-determined precision. Furthermore, QNNVerifier allows the specification of bespoke safety properties and verifies the resulting model with different verification strategies (incremental and k-induction) and SMT solvers. Finally, QNNVerifier is the first tool that combines invariant inference via interval analysis with the discretization of non-linear activation functions to speed up the verification of neural networks by orders of magnitude. A video presentation of QNNVerifier is available at https://youtu.be/7jmgol41zty
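To see what verifying an implementation at a hardware-determined precision means, here is an assumed fixed-point multiply-accumulate in a Qm.n format; the rounding and truncation it introduces are precisely the kinds of effects QNNVerifier's logical encoding captures (accumulator saturation is skipped for brevity). Names and bit-widths are ours.

```python
import numpy as np

def to_fixed(x, frac_bits=8, word_bits=16):
    """Round to a signed fixed-point grid (Qm.n format, assumed for illustration)."""
    lo, hi = -(2 ** (word_bits - 1)), 2 ** (word_bits - 1) - 1
    return int(np.clip(np.round(x * (1 << frac_bits)), lo, hi))

def fixed_mac(ws, xs, frac_bits=8, word_bits=16):
    """Fixed-point multiply-accumulate: products carry 2*frac_bits fractional
    bits and are rescaled (with truncation) back after accumulation."""
    acc = 0
    for w, x in zip(ws, xs):
        acc += to_fixed(w, frac_bits, word_bits) * to_fixed(x, frac_bits, word_bits)
    return (acc >> frac_bits) / (1 << frac_bits)   # back to a real value

w, x = [0.37, -1.2, 0.05], [0.9, 0.31, -2.4]
print("float dot:", float(np.dot(w, x)))
print("fixed dot:", fixed_mac(w, x))   # differs by rounding/truncation error
```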
Fairness of machine learning (ML) software has become a major concern in the recent past. Although recent research on testing and improving fairness has demonstrated impact on real-world software, providing fairness guarantees in practice is still lacking. Certification of ML models is challenging because of their complex decision-making process. In this paper, we propose Fairify, an SMT-based approach to verify the individual fairness property of neural network (NN) models. Individual fairness ensures that any two similar individuals get similar treatment irrespective of their protected attributes, e.g., race, sex, or age. Verifying this fairness property is hard because of the global checking and the non-linear computation nodes in NNs. We propose a sound approach that makes individual fairness verification tractable for developers. The key idea is that many neurons in an NN always remain inactive when only a smaller part of the input domain is considered. Fairify therefore leverages white-box access to the models in production and applies formal-analysis-based pruning. Our approach adopts input partitioning and then prunes the NN for each partition to provide a fairness certification or a counterexample. We leverage interval arithmetic and activation heuristics of the neurons to perform the pruning as necessary. We evaluated Fairify on 25 real-world neural networks collected from four different sources, and demonstrated its effectiveness, scalability, and performance over baselines and closely related work. Fairify is also configurable based on the domain and size of the NN. Our novel formulation of the problem can answer targeted verification queries with relaxations and counterexamples, which have practical implications.
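The pruning idea can be sketched with plain interval arithmetic: on a given input partition, neurons whose pre-activation is provably non-positive can be dropped, and provably active ones become linear, leaving a smaller and easier verification problem. The sketch below is ours and omits Fairify's heuristics and SMT back-end.

```python
import numpy as np

def prune_for_partition(W1, b1, W2, lo, hi):
    """On this input partition, drop hidden neurons whose pre-activation is
    provably <= 0 (always inactive) and flag those provably >= 0, whose ReLU
    can be replaced by the identity in the downstream encoding."""
    Wp, Wn = np.maximum(W1, 0), np.minimum(W1, 0)
    pre_lo = Wp @ lo + Wn @ hi + b1
    pre_hi = Wp @ hi + Wn @ lo + b1
    dead   = pre_hi <= 0                    # relu output is identically 0 here
    linear = pre_lo >= 0                    # relu is the identity here
    keep = ~dead
    print(f"pruned {dead.sum()} / {dead.size} neurons, "
          f"{linear[keep].sum()} became linear on this partition")
    return W1[keep], b1[keep], W2[:, keep], linear[keep]

rng = np.random.default_rng(2)
W1, b1 = rng.normal(size=(16, 4)), rng.normal(size=16)
W2 = rng.normal(size=(1, 16))
lo = np.array([0.2, 0.0, 1.0, 0.0])
hi = lo + 0.1                               # one small input partition
prune_for_partition(W1, b1, W2, lo, hi)
```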
Deep neural networks have emerged as a widely used and effective means for tackling complex, real-world problems. However, a major obstacle in applying them to safety-critical systems is the great difficulty in providing formal guarantees about their behavior. We present a novel, scalable, and efficient technique for verifying properties of deep neural networks (or providing counter-examples). The technique is based on the simplex method, extended to handle the non-convex Rectified Linear Unit (ReLU) activation function, which is a crucial ingredient in many modern neural networks. The verification procedure tackles neural networks as a whole, without making any simplifying assumptions. We evaluated our technique on a prototype deep neural network implementation of the next-generation airborne collision avoidance system for unmanned aircraft (ACAS Xu). Results show that our technique can successfully prove properties of networks that are an order of magnitude larger than the largest networks verified using existing methods.
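To convey the underlying case analysis (not the tool's algorithm), the sketch below checks a single-hidden-layer property by pinning every ReLU to one of its two linear phases and solving one LP per phase with scipy; Reluplex instead splits lazily inside a modified simplex procedure, which is what makes it scale. Names and the scipy dependency are assumptions.

```python
import itertools
import numpy as np
from scipy.optimize import linprog

def find_violation(W1, b1, W2, b2, x_lo, x_hi, t):
    """Look for x in the box with W2 @ relu(W1 @ x + b1) + b2 >= t by trying
    every ReLU phase pattern and solving one LP per pattern.  Exponential and
    purely illustrative: Reluplex splits lazily inside a modified simplex."""
    n = len(b1)
    for phase in itertools.product((0, 1), repeat=n):
        D = np.diag(phase)
        A, ub = [], []
        for i, on in enumerate(phase):          # pin each ReLU to one phase
            s = -1.0 if on else 1.0             # on: (W1 x + b1)_i >= 0, off: <= 0
            A.append(s * W1[i])
            ub.append(-s * b1[i])
        A.append(-(W2 @ D @ W1).ravel())        # output >= t, as a <= constraint
        ub.append((W2 @ D @ b1).item() + b2 - t)
        res = linprog(np.zeros(W1.shape[1]), A_ub=np.array(A), b_ub=np.array(ub),
                      bounds=list(zip(x_lo, x_hi)), method="highs")
        if res.status == 0:                     # feasible: concrete counterexample
            return res.x
    return None                                 # every phase infeasible: property holds

rng = np.random.default_rng(3)
W1, b1 = rng.normal(size=(4, 2)), rng.normal(size=4)
W2, b2 = rng.normal(size=(1, 4)), 0.0
cex = find_violation(W1, b1, W2, b2, [-1.0, -1.0], [1.0, 1.0], t=3.0)
print("verified: output < 3.0" if cex is None else f"counterexample: {cex}")
```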
Recently, Graph Neural Networks (GNNs) have been applied to scheduling jobs over clusters, achieving better performance than hand-crafted heuristics. Despite their impressive performance, concerns remain over whether these GNN-based job schedulers meet users' expectations about other important properties, such as strategy-proofness, sharing incentive, and stability. In this work, we consider the formal verification of GNN-based job schedulers. We address several domain-specific challenges, such as networks that are deeper and specifications that are richer than those encountered when verifying image and NLP classifiers. We develop vegas, the first general framework for verifying both single-step and multi-step properties of these schedulers, based on carefully designed algorithms that combine abstractions, refinements, solvers, and proof transfer. Our experimental results show that vegas achieves significant speed-up when verifying important properties of a state-of-the-art GNN-based scheduler, compared to previous methods.
With the expansion of the use of neural networks, the need for complete and sound verification of their properties has become critical. In recent years, it was established that Binary Neural Networks (BNNs) have an equivalent representation in Boolean logic and can be formally analyzed using logical reasoning tools such as SAT solvers. However, to date, only BNNs could be transformed into SAT formulas. In this work, we introduce Truth Table Deep Convolutional Neural Networks (TTnets), a new family of SAT-encodable models featuring, for the first time, real-valued weights. Furthermore, they admit, by construction, some valuable conversion features in the robustness verification setting, including post-tuning and tractability. The latter property leads to a more compact SAT symbolic encoding than that of BNNs. This enables the use of general SAT solvers, making property verification easier. We demonstrate the value of TTnets with respect to the formal robustness property: TTnets outperform the verified accuracy of all BNNs with comparable computation time. More generally, they represent a relevant trade-off among all known complete verification methods: TTnets achieve high verified accuracy with fast verification times and no timeouts. We explore here a proof of concept of TTnets for one very important application, the complete verification of robustness, and we believe this novel real-valued network constitutes a practical response to the rising need for functional formal verification. We postulate that TTnets can be applied to various CNN-based architectures and extended to other properties such as fairness, fault attacks, and exact rule extraction.
Binary Neural Networks (BNNs) have demonstrated their ability to solve complex tasks with accuracy comparable to that of deep neural networks (DNNs), while also reducing computational power and storage requirements and increasing processing speed. These properties make them an attractive alternative for the development and deployment of DNN-based applications on Internet-of-Things (IoT) devices. Despite recent improvements, their compression factor may still be insufficient for some devices with very limited resources. In this work, we propose Sparse Binary Neural Networks (SBNNs), a novel model and training scheme that introduces sparsity into BNNs together with a new quantization function for binarizing the network's weights. The proposed SBNN is able to achieve high compression factors and reduces the number of operations and parameters at inference time. We also provide tools to assist the design of SBNNs while respecting hardware resource constraints. We study the generalization properties of our method for different compression factors through a set of experiments on linear and convolutional networks over three datasets. Our experiments confirm that SBNNs can achieve high compression rates without compromising generalization, while further reducing the operations of BNNs, making SBNNs a viable option for deploying DNNs on cheap, low-cost, resource-limited IoT devices and sensors.
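Since the paper's exact quantization function is not reproduced here, the following is a generic sparsifying binarization sketched under our own assumptions: zero the smallest-magnitude weights and map the survivors to {-alpha, +alpha} with the usual least-squares scaling.

```python
import numpy as np

def sparse_binarize(W, sparsity=0.8):
    """Zero the smallest-magnitude fraction of weights, then map survivors to
    {-alpha, +alpha} with alpha = mean |w| of the survivors (the usual
    least-squares scaling).  An assumed scheme, not the paper's exact one."""
    mask = np.abs(W) > np.quantile(np.abs(W), sparsity)
    alpha = np.abs(W[mask]).mean() if mask.any() else 0.0
    return alpha * np.sign(W) * mask, mask

rng = np.random.default_rng(4)
W = rng.normal(size=(8, 8))
Wq, mask = sparse_binarize(W)
print(f"kept {mask.mean():.0%} of weights; values: {sorted(set(np.round(Wq.ravel(), 4)))}")
```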
Verifying the robustness property of a general Rectified Linear Unit (ReLU) network is an NP-complete problem. Although finding the exact minimum adversarial distortion is hard, giving a certified lower bound of the minimum distortion is possible. Current available methods of computing such a bound are either time-consuming or deliver low quality bounds that are too loose to be useful. In this paper, we exploit the special structure of ReLU networks and provide two computationally efficient algorithms (Fast-Lin, Fast-Lip) that are able to certify non-trivial lower bounds of minimum adversarial distortions. Experiments show that (1) our methods deliver bounds close to (the gap is 2-3X) exact minimum distortions found by Reluplex in small networks while our algorithms are more than 10,000 times faster; (2) our methods deliver similar quality of bounds (the gap is within 35% and usually around 10%; sometimes our bounds are even better) for larger networks compared to the methods based on solving linear programming problems but our algorithms are 33-14,000 times faster; (3) our method is capable of solving large MNIST and CIFAR networks up to 7 layers with more than 10,000 neurons within tens of seconds on a single CPU core. In addition, we show that there is no polynomial time algorithm that can approximately find the minimum ℓ1 adversarial distortion of a ReLU network with a 0.99 ln n approximation ratio unless NP=P, where n is the number of neurons in the network.
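Fast-Lin's ReLU relaxation is simple to state: an unstable neuron (l < 0 < u) gets the same slope u/(u-l) for both its lower and upper linear bounds, which is what enables the closed-form layer-by-layer bound computation. A sketch (names ours):

```python
import numpy as np

def fastlin_relu_relaxation(l, u):
    """Per-neuron bounds  slope*z <= relu(z) <= slope*z + intercept  on [l, u]:
    unstable neurons share the slope u/(u-l) on both sides; stable ones are exact."""
    denom = np.where(u > l, u - l, 1.0)
    slope = np.where(u <= 0, 0.0, np.where(l >= 0, 1.0, u / denom))
    intercept = np.where((l < 0) & (u > 0), -l * u / denom, 0.0)
    return slope, intercept

l = np.array([-1.0, -2.0, 0.5])
u = np.array([2.0, -0.5, 3.0])
print(fastlin_relu_relaxation(l, u))   # identical slopes keep propagation closed-form
```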
We consider the problem of certifying the robustness of deep neural networks against real-world distribution shifts. To do so, we bridge the gap between hand-crafted specifications and realistic deployment settings by proposing a novel neural-symbolic verification framework, in which we train a generative model to learn perturbations from data and define specifications with respect to the output of the learned model. A unique challenge arising from this setting is that existing verifiers cannot tightly approximate sigmoid activations, which are fundamental to many state-of-the-art generative models. To address this challenge, we propose a general meta-algorithm for handling sigmoid activations which leverages classical notions of counterexample-guided abstraction refinement. The key idea is to "lazily" refine the abstraction of the sigmoid function to exclude spurious counterexamples found in the previous abstraction, thus guaranteeing progress in the verification process while keeping the state space small. Experiments on the MNIST and CIFAR-10 datasets show that our framework significantly outperforms existing methods on a range of challenging distribution shifts.