We study the problem of training and certifying adversarially robust quantized neural networks (QNNs). Quantization is a technique for making neural networks more efficient by running them using low-bit integer arithmetic and is therefore commonly adopted in industry. Recent work has shown that floating-point neural networks that have been verified to be robust can become vulnerable to adversarial attacks after quantization, and certification of the quantized representation is necessary to guarantee robustness. In this work, we present quantization-aware interval bound propagation (QA-IBP), a novel method for training robust QNNs. Inspired by advances in robust learning of non-quantized networks, our training algorithm computes the gradient of an abstract representation of the actual network. Unlike existing approaches, our method can handle the discrete semantics of QNNs. Based on QA-IBP, we also develop a complete verification procedure for verifying the adversarial robustness of QNNs, which is guaranteed to terminate and produce a correct answer. Compared to existing approaches, the key advantage of our verification procedure is that it runs entirely on GPU or other accelerator devices. We demonstrate experimentally that our approach significantly outperforms existing methods and establish the new state-of-the-art for training and certifying the robustness of QNNs.
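As a rough illustration of the interval-bound-propagation idea that QA-IBP builds on, the following minimal Python sketch propagates an input box through one affine layer and a round-and-clip quantization step; the weights, scale, and clipping range are illustrative assumptions rather than details taken from the paper.

```python
import numpy as np

def interval_affine(lo, hi, W, b):
    """Propagate an elementwise interval [lo, hi] through y = W x + b."""
    W_pos, W_neg = np.maximum(W, 0), np.minimum(W, 0)
    y_lo = W_pos @ lo + W_neg @ hi + b
    y_hi = W_pos @ hi + W_neg @ lo + b
    return y_lo, y_hi

def interval_quantize(lo, hi, scale, qmin=0, qmax=255):
    """Sound interval transformer for a round-and-clip quantization step.
    Rounding and clipping are monotone, so applying them to both endpoints is sound."""
    q_lo = np.clip(np.round(lo / scale), qmin, qmax)
    q_hi = np.clip(np.round(hi / scale), qmin, qmax)
    return q_lo, q_hi

# Toy example: a single quantized layer under an l_inf perturbation of size eps.
rng = np.random.default_rng(0)
W = rng.integers(-3, 4, size=(4, 8)).astype(float)   # illustrative integer weights
b = rng.integers(-2, 3, size=4).astype(float)
x = rng.integers(0, 256, size=8).astype(float)
eps = 2.0
lo, hi = interval_affine(x - eps, x + eps, W, b)
q_lo, q_hi = interval_quantize(lo, hi, scale=8.0)
print(q_lo, q_hi)
```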
While deep neural networks (DNNs) have demonstrated impressive performance in solving many challenging tasks, they are limited to resource-constrained devices owing to their demand for computation power and storage space. Quantization is one of the most promising techniques to address this issue by quantizing the weights and/or activation tensors of a DNN into lower bit-width fixed-point numbers. While quantization has been empirically shown to introduce only minor accuracy loss, there are no formal guarantees of this, which is especially concerning when the resulting quantized neural networks (QNNs) are deployed in safety-critical applications. A majority of existing verification methods focus exclusively on individual neural networks, either DNNs or QNNs. While promising attempts have been made to verify the quantization error bound between DNNs and their quantized counterparts, they are not complete and, more importantly, do not support fully quantized neural networks, i.e., they handle only networks in which the weights alone are quantized. To fill this gap, in this work we propose a quantization error bound verification method (QEBVerif) that supports networks in which both the weights and the activation tensors are quantized. QEBVerif consists of two analyses: a differential reachability analysis (DRA) and a mixed-integer linear programming (MILP)-based verification method. The DRA performs a difference analysis between the DNN and its quantized counterpart layer by layer to efficiently compute a tight quantization error interval. If it fails to prove the error bound, we encode the verification problem into an equivalent MILP problem, which can be solved by off-the-shelf solvers. Thus, QEBVerif is sound, complete, and arguably efficient. We implement QEBVerif as a tool and conduct extensive experiments, showing its effectiveness and efficiency.
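The following minimal Python sketch illustrates the kind of quantization-error interval such an analysis targets, using the naive baseline of bounding the float network and its fake-quantized counterpart separately and subtracting the intervals; QEBVerif's DRA propagates the difference directly to obtain tighter intervals, and the shapes and numbers below are purely illustrative assumptions.

```python
import numpy as np

def interval_affine(lo, hi, W, b):
    W_pos, W_neg = np.maximum(W, 0), np.minimum(W, 0)
    return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

def interval_relu(lo, hi):
    return np.maximum(lo, 0), np.maximum(hi, 0)

# Float layer vs. its fake-quantized (round-to-grid) approximation.
rng = np.random.default_rng(1)
W = rng.normal(size=(3, 5))
b = rng.normal(size=3)
scale = 0.1
W_q = np.round(W / scale) * scale            # illustrative fake-quantized weights
b_q = np.round(b / scale) * scale

x = rng.normal(size=5)
eps = 0.05
f_lo, f_hi = interval_relu(*interval_affine(x - eps, x + eps, W, b))
q_lo, q_hi = interval_relu(*interval_affine(x - eps, x + eps, W_q, b_q))

# Naive per-output error interval: [q_lo - f_hi, q_hi - f_lo].
err_lo, err_hi = q_lo - f_hi, q_hi - f_lo
print(err_lo, err_hi)
```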
Neural networks have been widely applied in security applications such as spam and phishing detection, intrusion prevention, and malware detection. However, such black-box methods often exhibit uncertainty and poor interpretability in practice. Moreover, neural networks themselves are often vulnerable to adversarial attacks. For these reasons, there is a strong demand for trustworthy and rigorous methods to verify the robustness of neural network models. Adversarial robustness, which concerns the reliability of a neural network when processing maliciously manipulated inputs, is one of the hottest topics in security and machine learning. In this work, we survey the existing literature on adversarial robustness verification of neural networks, collecting 39 diverse research works across the machine learning, security, and software engineering communities. We systematically analyze their approaches, including how robustness is formulated, which verification techniques are used, and the strengths and limitations of each technique. We provide a taxonomy from a formal verification perspective to give a comprehensive understanding of the topic, categorizing existing techniques by property specification, problem reduction, and reasoning strategy. We also demonstrate representative techniques applied in existing studies on a sample model. Finally, we discuss open problems for future research.
Recent works have tried to increase the verifiability of adversarially trained networks by running the attacks over domains larger than the original perturbations and adding various regularization terms to the objective. However, these algorithms either underperform or require complex and expensive stage-wise training procedures, hindering their practical applicability. We present IBP-R, a novel verified training algorithm that is both simple and effective. IBP-R induces network verifiability by coupling adversarial attacks on enlarged domains with a regularization term, based on inexpensive interval bound propagation, that minimizes the gap between the non-convex verification problem and its approximations. By leveraging recent branch-and-bound frameworks, we show that IBP-R obtains state-of-the-art verified robustness-accuracy trade-offs for small perturbations on CIFAR-10, while training significantly faster than relevant previous work. In addition, we present a novel branching strategy that relies on a simple heuristic based on $\beta$-CROWN, reducing the cost of state-of-the-art branching algorithms while yielding splits of comparable quality.
This report summarizes the 3rd International Verification of Neural Networks Competition (VNN-COMP 2022), held as a part of the 5th Workshop on Formal Methods for ML-Enabled Autonomous Systems (FoMLAS), which was collocated with the 34th International Conference on Computer-Aided Verification (CAV). VNN-COMP is held annually to facilitate the fair and objective comparison of state-of-the-art neural network verification tools, encourage the standardization of tool interfaces, and bring together the neural network verification community. To this end, standardized formats for networks (ONNX) and specifications (VNN-LIB) were defined, tools were evaluated on equal-cost hardware (using an automatic evaluation pipeline based on AWS instances), and tool parameters were chosen by the participants before the final test sets were made public. In the 2022 iteration, 11 teams participated on a diverse set of 12 scored benchmarks. This report summarizes the rules, benchmarks, participating tools, results, and lessons learned from this iteration of the competition.
Great advances in deep neural networks (DNNs) have led to state-of-the-art performance on a wide range of tasks. However, recent studies have shown that DNNs are vulnerable to adversarial attacks, which raises serious concerns when deploying these models in safety-critical applications such as autonomous driving. Different defense approaches have been proposed, including: a) empirical defenses, which can usually be attacked again adaptively and do not provide robustness certification; and b) certifiably robust approaches, which consist of robustness verification, providing a lower bound on the robust accuracy against any attack under certain conditions, and corresponding robust training methods. In this paper, we systematize certifiably robust approaches together with the related practical and theoretical implications and findings. We also provide the first comprehensive benchmark of existing robustness verification and training methods on different datasets. In particular, we 1) provide a taxonomy of robustness verification and training methods, along with a summary of representative algorithms, 2) reveal the characteristics, strengths, limitations, and fundamental connections among these methods, 3) discuss the current research progress, theoretical barriers, main challenges, and future directions of certifiably robust approaches for DNNs, and 4) provide an open-source unified platform to evaluate over 20 representative certifiably robust approaches on a wide range of DNNs.
The robustness of deep neural networks is crucial for modern AI-enabled systems and should be formally verified. Sigmoid-like neural networks have been adopted in a wide range of applications. Due to their non-linearity, sigmoid-like activation functions are usually over-approximated for efficient verification, which inevitably introduces imprecision. Considerable effort has been devoted to finding so-called tighter approximations that yield more precise verification results. However, existing definitions of tightness are heuristic and lack a theoretical foundation. We conduct a thorough empirical analysis of existing neuron-wise tightness characterizations and reveal that they are superior only on specific neural networks. We then introduce the notion of network-wise tightness as a unified definition of tightness and show that computing network-wise tightness is a complex non-convex optimization problem. We bypass this complexity from different perspectives via two efficient, provably tightest approximations. The results demonstrate the promising performance of our approach over the state of the art: (i) achieving up to a 251.28% improvement in certified lower robustness bounds; and (ii) exhibiting notably more precise verification results on convolutional networks.
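For concreteness, the sketch below constructs one standard pair of linear bounds for the sigmoid on an interval where it is concave (a chord as lower bound, a tangent as upper bound); intervals straddling zero require a case analysis, and different choices here are exactly what notions of "tightness" compare. This is an illustrative baseline, not the construction proposed in the paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_relaxation_concave(l, u):
    """Linear lower/upper bounds of sigmoid on [l, u], assuming 0 <= l < u.
    On this region sigmoid is concave, so the chord is a sound lower bound
    and any tangent (here at the midpoint) is a sound upper bound."""
    assert 0 <= l < u
    k_low = (sigmoid(u) - sigmoid(l)) / (u - l)     # chord slope
    b_low = sigmoid(l) - k_low * l
    m = (l + u) / 2.0                               # tangent point
    k_up = sigmoid(m) * (1.0 - sigmoid(m))          # sigmoid'(m)
    b_up = sigmoid(m) - k_up * m
    return (k_low, b_low), (k_up, b_up)

(kl, bl), (ku, bu) = sigmoid_relaxation_concave(0.5, 3.0)
xs = np.linspace(0.5, 3.0, 7)
assert np.all(kl * xs + bl <= sigmoid(xs) + 1e-12)  # lower line below sigmoid
assert np.all(ku * xs + bu >= sigmoid(xs) - 1e-12)  # upper line above sigmoid
```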
We propose a method to learn deep ReLU-based classifiers that are provably robust against norm-bounded adversarial perturbations on the training data. For previously unseen examples, the approach is guaranteed to detect all adversarial examples, though it may flag some non-adversarial examples as well. The basic idea is to consider a convex outer approximation of the set of activations reachable through a norm-bounded perturbation, and we develop a robust optimization procedure that minimizes the worst-case loss over this outer region (via a linear program). Crucially, we show that the dual problem to this linear program can itself be represented as a deep network similar to the backpropagation network, leading to very efficient optimization approaches that produce guaranteed bounds on the robust loss. The end result is that by executing a few more forward and backward passes through a slightly modified version of the original network (though possibly with much larger batch sizes), we can learn a classifier that is provably robust to any norm-bounded adversarial attack. We illustrate the approach on a number of tasks to train classifiers with robust adversarial guarantees (e.g., for MNIST, we produce a convolutional classifier that provably has less than 5.8% test error for any adversarial attack with bounded ℓ∞ norm less than ε = 0.1), and code for all experiments is available at http://github.com/locuslab/convex_adversarial.
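The convex outer approximation at the heart of such approaches can be seen at the level of a single ReLU: given pre-activation bounds l < 0 < u, the graph of y = max(x, 0) is enclosed by the triangle y >= 0, y >= x, y <= u(x - l)/(u - l). A small sketch of this per-neuron relaxation follows; the full method then bounds the worst-case loss over the whole network via the dual linear program, which this sketch does not reproduce.

```python
import numpy as np

def relu_triangle_relaxation(l, u):
    """Upper bounding line of the convex outer approximation of y = max(x, 0)
    for pre-activation bounds l < 0 < u; the lower bounds are y >= 0 and y >= x."""
    assert l < 0 < u
    slope = u / (u - l)
    intercept = -u * l / (u - l)
    return slope, intercept

# Quick soundness check on a sampled grid.
l, u = -1.5, 2.0
slope, intercept = relu_triangle_relaxation(l, u)
xs = np.linspace(l, u, 101)
ys = np.maximum(xs, 0)
assert np.all(ys <= slope * xs + intercept + 1e-12)  # upper line dominates ReLU
assert np.all(ys >= 0) and np.all(ys >= xs)          # lower constraints hold
```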
We introduce a method to train Quantized Neural Networks (QNNs), neural networks with extremely low precision (e.g., 1-bit) weights and activations at run-time. At train-time the quantized weights and activations are used for computing the parameter gradients. During the forward pass, QNNs drastically reduce memory size and accesses, and replace most arithmetic operations with bit-wise operations. As a result, power consumption is expected to be drastically reduced. We trained QNNs over the MNIST, CIFAR-10, SVHN and ImageNet datasets. The resulting QNNs achieve prediction accuracy comparable to their 32-bit counterparts. For example, our quantized version of AlexNet with 1-bit weights and 2-bit activations achieves 51% top-1 accuracy. Moreover, we quantize the parameter gradients to 6 bits as well, which enables gradient computation using only bit-wise operations. Quantized recurrent neural networks were tested over the Penn Treebank dataset, and achieved accuracy comparable to their 32-bit counterparts using only 4 bits. Last but not least, we programmed a binary matrix multiplication GPU kernel with which it is possible to run our MNIST QNN 7 times faster than with an unoptimized GPU kernel, without suffering any loss in classification accuracy. The QNN code is available online.
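With 1-bit weights and activations in {-1, +1} encoded as bits (1 for +1, 0 for -1), a dot product reduces to an XOR/popcount computation, dot = n - 2 * popcount(a XOR w). A tiny illustrative sketch follows; the packing convention (least-significant bit holds element 0) is an assumption made for the example.

```python
def binary_dot(a_bits, w_bits, n):
    """Dot product of two {-1,+1} vectors packed into integers,
    where bit=1 encodes +1 and bit=0 encodes -1."""
    return n - 2 * bin((a_bits ^ w_bits) & ((1 << n) - 1)).count("1")

# Example: a = [+1, -1, +1, +1], w = [+1, +1, -1, +1]  (LSB = element 0).
a_bits = 0b1101
w_bits = 0b1011
print(binary_dot(a_bits, w_bits, 4))   # (+1)(+1) + (-1)(+1) + (+1)(-1) + (+1)(+1) = 0
```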
We introduce a novel approach to the automated termination analysis of computer programs: we use neural networks to represent ranking functions. A ranking function maps program states to values that are bounded from below and decrease as the program runs; the existence of a ranking function proves that the program terminates. We train a neural network from sampled execution traces of a program so that the network's output decreases along the traces; we then use symbolic reasoning to formally verify that it generalizes to all possible executions. Upon an affirmative answer, we obtain a formal certificate of termination for the program, which we call a neural ranking function. We show that, thanks to the ability of neural networks to represent non-linear functions, our method succeeds on programs that are beyond the reach of state-of-the-art tools. This includes programs that use disjunctions in their loop conditions and programs that contain non-linear expressions.
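A minimal sketch of the training idea follows, assuming PyTorch and a toy loop that decrements a counter: a candidate ranking function V is penalized whenever it fails to decrease by some margin along sampled transitions or drops below zero. The loss terms, margin, and architecture here are illustrative assumptions; the trained V must still be verified symbolically as described above.

```python
import torch
import torch.nn as nn

# A tiny candidate ranking function V: program state -> real value.
V = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1))
opt = torch.optim.Adam(V.parameters(), lr=1e-2)
delta = 1.0  # required decrease per transition (illustrative)

def loss_fn(s, s_next):
    # Encourage V(s') <= V(s) - delta (decrease) and V(s) >= 0 (bounded below).
    dec = torch.relu(V(s_next) - V(s) + delta).mean()
    low = torch.relu(-V(s)).mean()
    return dec + low

# Consecutive states sampled from executions of a hypothetical loop
# "while x > 0: x -= 1" with an extra unused variable y.
s = torch.randint(1, 100, (256, 2)).float()
s_next = s - torch.tensor([1.0, 0.0])
for _ in range(200):
    opt.zero_grad()
    loss_fn(s, s_next).backward()
    opt.step()
```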
As neural networks continue to scale, the need for complete and sound verification of their properties becomes crucial. In recent years it was established that binarized neural networks (BNNs) have an equivalent representation in Boolean logic and can be formally analyzed with logical reasoning tools such as SAT solvers. However, to date, only BNNs could be converted into SAT formulas. In this work, we introduce Truth Table Deep Convolutional Neural Networks (TTnets), a new family of SAT-encodable models that, for the first time, have real-valued weights. Moreover, by construction they admit valuable properties in the robustness verification setting, including post-tuning and tractability. The latter property leads to a more compact SAT symbolic encoding than that of BNNs, which makes the use of general SAT solvers easier and property verification more practical. We demonstrate the value of TTnets for formal robustness properties: TTnets outperform the verified accuracy of all BNNs at comparable computation time. More generally, they represent a relevant trade-off among all known complete verification methods: TTnets achieve high verified accuracy with fast verification times and no timeouts. Here, we explore TTnets as a proof of concept for one highly important application, the complete verification of robustness, and we believe this novel class of networks constitutes a practical response to the growing need for formal verification of functionality. We hypothesize that TTnets can be applied to various CNN-based architectures and extended to other properties such as fairness, fault attacks, and exact rule extraction.
We consider the problem of certifying the robustness of deep neural networks against realistic distribution shifts. To this end, we bridge the gap between hand-crafted specifications and realistic deployment settings by proposing a novel neuro-symbolic verification framework. A unique challenge arising in this setting is that existing verifiers cannot tightly approximate sigmoid activations, which are crucial for many state-of-the-art generative models. To address this challenge, we propose a general meta-algorithm for handling sigmoid activations that leverages the classical notion of counterexample-guided abstraction refinement. The key idea is to "lazily" refine the abstraction of the sigmoid function to exclude spurious counterexamples found in the previous abstraction, thus ensuring progress in the verification process while keeping the state space small. Experiments on the MNIST and CIFAR-10 datasets show that our framework significantly outperforms existing methods on a range of challenging distribution shifts.
Standard deep learning algorithms are implemented using floating-point real numbers. This presents an obstacle to implementing them on low-end devices which may lack dedicated floating-point units (FPUs). As a result, researchers in TinyML have considered machine learning algorithms that can train and run deep neural networks (DNNs) on low-end devices using only integer operations. In this paper we present PocketNN, a lightweight and self-contained proof-of-concept framework in pure C++ for integer-only DNN training and inference. Unlike other approaches, PocketNN operates directly on integers without any explicit quantization algorithm or customized fixed-point format. This is made possible by pocket activations, a family of activation functions devised for integer-only DNNs, and an emerging DNN training algorithm called direct feedback alignment (DFA). Unlike standard backpropagation (BP), DFA trains each layer independently, thereby avoiding integer overflow, which is a key problem when using BP with integer-only operations. We used PocketNN to train DNNs on two well-known datasets, MNIST and Fashion-MNIST. Our experiments show that DNNs trained with PocketNN achieved 96.98% and 87.7% accuracy on the MNIST and Fashion-MNIST datasets, respectively. These accuracies are very close to those of equivalent DNNs trained with BP using floating-point real-number operations, with accuracy degradation of only 1.02%p and 2.09%p, respectively. Finally, PocketNN is highly compatible with and portable to low-end devices, as it is open source and implemented in pure C++ without any dependencies.
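As a generic illustration of integer-only inference (not PocketNN's specific scheme, which avoids explicit quantization and fixed-point formats), the following sketch runs a fully connected layer with integer accumulation, a power-of-two rescaling shift, and clipping back to the int8 range.

```python
import numpy as np

def int_linear(x_q, W_q, b_q, shift):
    """Integer-only fully connected layer: int32 accumulation, then rescaling
    by a right shift and clipping back to int8 (a common fixed-point idiom;
    the exact scheme used by PocketNN differs)."""
    acc = W_q.astype(np.int32) @ x_q.astype(np.int32) + b_q.astype(np.int32)
    y = acc >> shift                       # rescale with an arithmetic right shift
    return np.clip(y, -128, 127).astype(np.int8)

rng = np.random.default_rng(0)
x_q = rng.integers(-128, 128, size=64, dtype=np.int8)
W_q = rng.integers(-8, 9, size=(10, 64), dtype=np.int8)
b_q = rng.integers(-100, 100, size=10, dtype=np.int32)
print(int_linear(x_q, W_q, b_q, shift=6))
```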
Quantization of convolutional neural networks (CNNs) is a common approach to ease the computational burden of deploying CNNs, especially on low-resource edge devices. However, fixed-point arithmetic is not natural for the types of computations involved in neural networks. In this work, we explore ways to improve quantized CNNs using PDE-based perspectives and analysis. First, we harness the total variation (TV) approach to apply edge-aware smoothing to the feature maps throughout the network. This is intended to reduce outliers in the value distribution and promote piecewise-constant maps, which are more suitable for quantization. Second, we consider symmetric and stable variants of common CNNs for image classification, and of graph convolutional networks (GCNs) for graph node classification. We demonstrate through several experiments that the property of forward stability preserves the action of the network under different quantization rates. As a result, stable quantized networks behave similarly to their non-quantized counterparts even though they rely on fewer parameters. We also find that, at times, stability even helps to improve accuracy. These properties are of particular interest for sensitive, resource-constrained, low-power, or real-time applications such as autonomous driving.
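The total-variation functional referred to above is simple to state; the sketch below computes the anisotropic TV of a 2-D feature map as the sum of absolute differences between neighboring entries (the edge-aware weighting applied in the paper is omitted, and the example map is illustrative).

```python
import numpy as np

def total_variation(f):
    """Anisotropic total variation of a 2-D feature map: sum of absolute
    differences between vertically and horizontally adjacent entries."""
    dh = np.abs(np.diff(f, axis=0)).sum()
    dw = np.abs(np.diff(f, axis=1)).sum()
    return dh + dw

f = np.array([[0., 0., 8.],
              [0., 1., 8.],
              [0., 1., 9.]])
print(total_variation(f))  # nearly piecewise-constant maps have small TV
```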
We introduce a scalable method for training robust neural networks based on abstract interpretation. We present several abstract transformers which balance efficiency with precision and show these can be used to train large neural networks that are certifiably robust to adversarial perturbations.
Polynomial networks (PNs) have recently demonstrated promising performance in face and image recognition. However, the robustness of PNs is unclear, and thus obtaining certificates is crucial to enabling their adoption in real-world applications. Existing verification algorithms for ReLU neural networks (NNs) based on branch-and-bound (BaB) techniques cannot be trivially applied to PN verification. In this work, we devise a new bounding method, equipped with BaB for global convergence guarantees, called VPN. A key insight is that the bounds we obtain are much tighter than those of an interval bound propagation baseline. This enables sound and complete PN verification, with empirical validation on the MNIST, CIFAR10, and STL10 datasets. We believe our method is of independent interest for NN verification.
Recently, graph neural networks (GNNs) have been applied to scheduling jobs over clusters, achieving better performance than hand-crafted heuristics. Despite this impressive performance, concerns remain over whether these GNN-based job schedulers meet users' expectations about other important properties, such as strategy-proofness, sharing incentive, and stability. In this work, we consider the formal verification of GNN-based job schedulers. We address several domain-specific challenges, such as networks that are deeper and specifications that are richer than those encountered when verifying image and NLP classifiers. We develop vegas, the first general framework for verifying both single-step and multi-step properties of these schedulers, based on carefully designed algorithms that combine abstraction, refinement, solvers, and proof transfer. Our experimental results show that vegas achieves significant speed-ups over previous methods when verifying important properties of GNN-based schedulers.
In this paper, we develop a new algorithm, Annealed Skewed SGD (AskewSGD), for training deep neural networks (DNNs) with quantized weights. First, we formulate the training of quantized neural networks (QNNs) as a smoothed sequence of interval-constrained optimization problems. Then, we propose a new first-order stochastic method, AskewSGD, to solve each constrained optimization subproblem. Unlike algorithms with active sets and feasible directions, AskewSGD avoids projections or optimization under the entire feasible set and allows iterates that are infeasible. The numerical complexity of AskewSGD is comparable to existing approaches for training QNNs, such as the straight-through gradient estimator used in BinaryConnect, or other state-of-the-art methods (ProxQuant, LUQ). We establish convergence guarantees for AskewSGD (under general assumptions on the objective function). Experimental results show that the AskewSGD algorithm performs better than or on par with state-of-the-art methods on classical benchmarks.
As neural networks have become more powerful, the desire to deploy them in the real world has grown; however, the power and accuracy of neural networks are largely due to their depth and complexity, making them difficult to deploy, especially on resource-constrained devices. Neural network quantization has recently arisen to meet this demand by reducing the precision of a network and thereby its size and complexity. With smaller and simpler networks, it becomes possible to run neural networks within the constraints of the target hardware. This paper surveys the many neural network quantization techniques that have been developed over the last decade. Based on this survey and comparison of neural network quantization techniques, we propose future directions of research in this area.
Neural networks are increasingly applied in safety-critical domains; their verification is thus gaining importance. A large class of recent algorithms for proving input-output relations of feed-forward neural networks is based on linear relaxations and symbolic interval propagation. However, due to variable dependencies, the approximations deteriorate with increasing depth of the network. In this paper we present DPNeurifyFV, a novel branch-and-bound solver for ReLU networks with low-dimensional input space that is based on symbolic interval propagation with fresh variables and input splitting. A new heuristic for choosing the fresh variables helps to ameliorate the dependency problem, while our novel splitting heuristic, in combination with several other improvements, speeds up the branch-and-bound procedure. We evaluate our approach on the airborne collision avoidance networks ACAS Xu and demonstrate runtime improvements compared to state-of-the-art tools.
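The dependency problem that symbolic interval propagation mitigates can be seen in one line: with naive intervals, y = x - x evaluates to [-2ε, 2ε], whereas keeping an affine expression in the input variables yields exactly 0. A minimal sketch follows; the fresh-variable scheme of DPNeurifyFV is considerably more involved.

```python
import numpy as np

def concretize(coeffs, const, x_lo, x_hi):
    """Concretize a symbolic affine expression c·x + d over the input box [x_lo, x_hi]."""
    pos, neg = np.maximum(coeffs, 0), np.minimum(coeffs, 0)
    return pos @ x_lo + neg @ x_hi + const, pos @ x_hi + neg @ x_lo + const

eps = 0.1
x_lo, x_hi = np.array([1.0 - eps]), np.array([1.0 + eps])

# Naive interval arithmetic for y = x - x loses the dependency between the operands.
naive = (x_lo - x_hi, x_hi - x_lo)                  # [-0.2, 0.2]

# Symbolically, the two occurrences of x cancel: y = 0·x + 0.
coeffs_y = np.array([1.0]) - np.array([1.0])
symbolic = concretize(coeffs_y, 0.0, x_lo, x_hi)    # [0, 0]
print(naive, symbolic)
```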