Privacy-preserving deep neural network (DNN) inference is a necessity in different regulated industries such as healthcare, finance, and retail. Recently, homomorphic encryption (HE) has been used as a method to enable analytics while addressing privacy concerns. HE enables secure predictions over encrypted data. Nevertheless, there are several challenges related to the use of HE, including DNN size limitations and the lack of support for some operation types. Most notably, the commonly used ReLU activation is not supported under some HE schemes. We propose a structured methodology to replace ReLU with a quadratic polynomial activation. To address the accuracy-degradation issue, we use a pre-trained model that trains another HE-friendly model, using techniques such as "trainable activation" functions and knowledge distillation. We demonstrate our methodology on the AlexNet architecture, using chest X-ray and CT datasets for COVID-19 detection. Our experiments show that by using our approach, the gap in F1 score and accuracy between models trained with ReLU and the HE-friendly model is narrowed to a mere 1.1-5.3% degradation.
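To make the "trainable activation" idea above concrete, here is a minimal sketch (an illustration under our own assumptions, not the authors' implementation) of a degree-2 polynomial activation with learnable coefficients that can stand in for ReLU under HE schemes that support only additions and multiplications:

```python
import torch
import torch.nn as nn

class TrainableQuadActivation(nn.Module):
    """Quadratic activation a*x^2 + b*x + c with learnable coefficients.

    A hypothetical drop-in replacement for ReLU: under many HE schemes
    (e.g. CKKS) only additions and multiplications are available, so a
    degree-2 polynomial is HE-friendly while ReLU is not.
    """
    def __init__(self, a=0.1, b=0.5, c=0.0):
        super().__init__()
        # Initialization values here are illustrative assumptions.
        self.a = nn.Parameter(torch.tensor(float(a)))
        self.b = nn.Parameter(torch.tensor(float(b)))
        self.c = nn.Parameter(torch.tensor(float(c)))

    def forward(self, x):
        return self.a * x * x + self.b * x + self.c

# Usage: swap every nn.ReLU() in a pre-trained network for this module,
# then fine-tune, e.g. with a distillation loss against the ReLU model.
act = TrainableQuadActivation()
print(act(torch.randn(4)))
```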
Outsourced computation for neural networks allows users to access state-of-the-art models without investing in specialized hardware and know-how. The problem is that the user loses control over potentially privacy-sensitive data. With homomorphic encryption (HE), computations can be performed on encrypted data without revealing their content. In this systematization of knowledge, we take an in-depth look at approaches that combine neural networks with HE for privacy preservation. We categorize the changes to neural network models and architectures that make them computable over HE, and the impact these changes have on performance. We find numerous challenges to HE-based privacy-preserving deep learning, such as the computational overhead, usability, and limitations posed by the encryption schemes.
Privacy concerns in client-server machine learning have given rise to private inference (PI), where neural inference occurs directly on encrypted inputs. PI protects clients' personal data and the server's intellectual property. A common practice in PI is to use garbled circuits to privately compute the nonlinear functions, namely ReLUs. However, garbled circuits suffer from high storage, bandwidth, and latency costs. To mitigate these issues, PI-friendly polynomial activation functions have been employed to replace ReLU. In this work, we ask: is it feasible to substitute all ReLUs with low-degree polynomial activation functions for building deep, privacy-friendly neural networks? We explore this question by analyzing the challenges of substituting ReLUs with polynomials, starting with simple drop-and-replace solutions and progressing to novel, more involved replace-and-retrain strategies. We examine the limitations of each method and provide commentary on the use of polynomial activation functions for PI. We find that all evaluated solutions suffer from the escaping activation problem: forward activation values inevitably expand at an exponential rate away from the stable regions of the polynomials, which leads to exploding values (NaNs) or poor approximations.
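The escaping-activation failure mode is easy to reproduce numerically. The toy below (assumed fit coefficients, with the interleaved linear layers omitted so the run is deterministic) iterates a hypothetical degree-2 ReLU fit on a value just outside its accurate range:

```python
# Deterministic toy illustration of the escaping-activation problem:
# outside the polynomial's stable region the x^2 term dominates, so
# repeated application diverges, while ReLU stays fixed.

def poly_relu(x):
    return 0.125 * x**2 + 0.5 * x + 0.375  # hypothetical degree-2 ReLU fit

def relu(x):
    return max(0.0, x)

xp = xr = 4.0  # a value just outside the fit's accurate range
for depth in range(10):
    xp, xr = poly_relu(xp), relu(xr)
    print(f"depth {depth}: poly={xp:.3e}  relu={xr:.3e}")
# poly grows past 1e8 within ten applications; relu stays at 4.0.
```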
While machine learning is traditionally a resource intensive task, embedded systems, autonomous navigation, and the vision of the Internet of Things fuel the interest in resource-efficient approaches. These approaches aim for a carefully chosen trade-off between performance and resource consumption in terms of computation and energy. The development of such approaches is among the major challenges in current machine learning research and key to ensure a smooth transition of machine learning technology from a scientific environment with virtually unlimited computing resources into everyday's applications. In this article, we provide an overview of the current state of the art of machine learning techniques facilitating these real-world requirements. In particular, we focus on deep neural networks (DNNs), the predominant machine learning models of the past decade. We give a comprehensive overview of the vast literature that can be mainly split into three non-mutually exclusive categories: (i) quantized neural networks, (ii) network pruning, and (iii) structural efficiency. These techniques can be applied during training or as post-processing, and they are widely used to reduce the computational demands in terms of memory footprint, inference speed, and energy efficiency. We also briefly discuss different concepts of embedded hardware for DNNs and their compatibility with machine learning techniques as well as potential for energy and latency reduction. We substantiate our discussion with experiments on well-known benchmark datasets using compression techniques (quantization, pruning) for a set of resource-constrained embedded systems, such as CPUs, GPUs and FPGAs. The obtained results highlight the difficulty of finding good trade-offs between resource efficiency and predictive performance.
Privacy-preserving solutions enable companies to offload confidential data to third-party services while fulfilling government regulations. To accomplish this, they leverage various cryptographic techniques such as homomorphic encryption (HE), which allows performing computation on encrypted data. Most HE schemes work in a SIMD fashion, and the data packing method can dramatically affect running time and memory costs. Finding a packing method that leads to an optimally performing implementation is a hard task. We present a simple and intuitive framework that abstracts the packing decision for the user. We explain its underlying data structures and optimizer, and propose a novel algorithm for performing 2D convolution operations. We used this framework to implement an HE-friendly version of AlexNet, which runs in three minutes, orders of magnitude faster than other state-of-the-art solutions that use only HE.
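To illustrate why packing dominates performance, the toy below simulates SIMD slot packing in plain NumPy (no real encryption, and not this framework's API): with column packing, a matrix-vector product costs one slot-wise multiply and add per column rather than one operation per matrix entry:

```python
import numpy as np

# Toy plaintext simulation of SIMD slot packing: one "ciphertext" holds
# a vector of slots, and each HE add/multiply acts on all slots at once,
# so the packing layout decides how many expensive HE ops are needed.

SLOTS = 8  # illustrative slot count; real schemes offer thousands

def mat_vec_column_packed(W, x):
    """W @ x using one packed 'ciphertext' per matrix column.

    Costs n slot-wise multiply/add pairs for a (SLOTS x n) matrix,
    instead of SLOTS*n scalar operations.
    """
    acc = np.zeros(SLOTS)
    for j in range(W.shape[1]):
        # Each iteration is one SIMD multiply plus one SIMD add in HE.
        acc = acc + W[:, j] * x[j]
    return acc

W = np.arange(SLOTS * 3, dtype=float).reshape(SLOTS, 3)
x = np.array([1.0, 2.0, 3.0])
assert np.allclose(mat_vec_column_packed(W, x), W @ x)
```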
In recent years, novel activation functions have been proposed that improve the performance of neural networks and show superior performance compared to their ReLU counterparts. However, in some environments the availability of complex activations is limited, and often only ReLU is supported. In this paper, we propose a method that improves the performance of ReLU networks by exploiting these effective novel activations during model training. More specifically, we propose an ensemble activation composed of ReLU and one of these novel activations. The coefficients of the ensemble are neither fixed nor learned, but are gradually updated during the training process in such a way that by the end of training only the ReLU activation remains active in the network and the other activation can be removed. This means that at inference time the network contains only ReLU activations. We perform extensive evaluations on the ImageNet classification task using a variety of compact network architectures and a variety of novel activation functions. The results show 0.2-0.8% top-1 accuracy gains, which confirms the applicability of the proposed method. Furthermore, we demonstrate the proposed method on semantic segmentation, improving the performance of a compact segmentation network on the Cityscapes dataset.
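A minimal sketch of such an ensemble activation might look as follows (the linear decay schedule and the SiLU partner are illustrative assumptions, not the paper's exact recipe):

```python
import torch
import torch.nn as nn

class AnnealedEnsembleActivation(nn.Module):
    """Sketch of an ensemble activation alpha*novel(x) + (1-alpha)*relu(x).

    alpha is scheduled rather than learned: it decays from 1 to 0 over
    training, so the finished network is pure ReLU and deployable where
    only ReLU is supported.
    """
    def __init__(self, novel_act=None, total_steps=10000):
        super().__init__()
        self.novel_act = novel_act if novel_act is not None else nn.SiLU()
        self.relu = nn.ReLU()
        self.total_steps = total_steps
        self.register_buffer("step", torch.zeros((), dtype=torch.long))

    def forward(self, x):
        # Linear decay is an assumed schedule for illustration.
        alpha = max(0.0, 1.0 - self.step.item() / self.total_steps)
        if self.training:
            self.step += 1
        return alpha * self.novel_act(x) + (1.0 - alpha) * self.relu(x)
```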
Privacy-preserving neural network (NN) inference solutions have recently gained significant traction, with several solutions offering different latency-bandwidth trade-offs. Of these, many rely on homomorphic encryption (HE), a method of performing computations over encrypted data. However, HE operations, even with state-of-the-art schemes, are still slow compared to their plaintext counterparts. Pruning the parameters of an NN model is a well-known approach to improving inference latency. However, pruning methods that are useful in the plaintext context may yield almost negligible improvement in the HE case, as recent work has also demonstrated. In this work, we propose a novel set of pruning methods that reduce latency and memory requirements, thus bringing the effectiveness of plaintext pruning methods to HE. Crucially, our proposal employs two key techniques, permutation and expansion of the packed model weights, which enable pruning significantly more ciphertexts and recovering most of the induced accuracy loss, respectively. We demonstrate the advantage of our methods on fully connected layers whose weights are packed using a recently proposed packing technique called tile tensors, which allows executing deep NN inference in a non-interactive mode. We evaluate our methods on various autoencoder architectures and demonstrate that, for a small mean reconstruction loss of 1.5×10^{-5} on MNIST, we reduce the memory requirement and latency of HE-amenable inference by 60%.
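The role of permutation can be illustrated with a toy (our simplification, not the paper's algorithm): a packed tile can only be skipped when every weight inside it is zero, so reordering rows to cluster zero rows multiplies the number of prunable tiles:

```python
import numpy as np

TILE = 4  # illustrative tile height

def zero_tiles(W):
    """Count weight tiles (TILE consecutive rows) that are entirely zero."""
    return sum(not W[i:i + TILE].any() for i in range(0, W.shape[0], TILE))

rng = np.random.default_rng(1)
W = rng.normal(size=(16, 8))
W[rng.choice(16, size=8, replace=False)] = 0.0  # unstructured row pruning

# Before: zero rows are scattered, so few tiles are fully zero.
print("prunable tiles before permutation:", zero_tiles(W))

# After: sort rows so zero rows are adjacent, filling whole tiles.
# A real implementation must apply the matching permutation to the
# layer's outputs (or the next layer's inputs) to keep results correct.
perm = np.argsort(W.any(axis=1))  # zero rows first
print("prunable tiles after permutation:", zero_tiles(W[perm]))
```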
We introduce a method to train Quantized Neural Networks (QNNs) -neural networks with extremely low precision (e.g., 1-bit) weights and activations, at run-time. At traintime the quantized weights and activations are used for computing the parameter gradients. During the forward pass, QNNs drastically reduce memory size and accesses, and replace most arithmetic operations with bit-wise operations. As a result, power consumption is expected to be drastically reduced. We trained QNNs over the MNIST, CIFAR-10, SVHN and ImageNet datasets. The resulting QNNs achieve prediction accuracy comparable to their 32-bit counterparts. For example, our quantized version of AlexNet with 1-bit weights and 2-bit activations achieves 51% top-1 accuracy. Moreover, we quantize the parameter gradients to 6-bits as well which enables gradients computation using only bit-wise operation. Quantized recurrent neural networks were tested over the Penn Treebank dataset, and achieved comparable accuracy as their 32-bit counterparts using only 4-bits. Last but not least, we programmed a binary matrix multiplication GPU kernel with which it is possible to run our MNIST QNN 7 times faster than with an unoptimized GPU kernel, without suffering any loss in classification accuracy. The QNN code is available online.
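For reference, the core train-time trick, keeping full-precision weights for the parameter update while the forward pass sees binarized ones, is a straight-through estimator. A minimal sketch (not the paper's released code):

```python
import torch

class BinarizeSTE(torch.autograd.Function):
    """Straight-through estimator for 1-bit weights (illustrative)."""

    @staticmethod
    def forward(ctx, w):
        ctx.save_for_backward(w)
        return torch.sign(w)  # forward pass uses 1-bit weights

    @staticmethod
    def backward(ctx, grad_out):
        (w,) = ctx.saved_tensors
        # Straight-through: pass gradients where |w| <= 1, clip elsewhere.
        return grad_out * (w.abs() <= 1.0).float()

w = torch.randn(4, requires_grad=True)
y = BinarizeSTE.apply(w).sum()
y.backward()
print(w.grad)  # nonzero only where |w| <= 1
```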
Knowledge Distillation (KD) has been extensively used for natural language understanding (NLU) tasks to improve a small model's (a student) generalization by transferring the knowledge from a larger model (a teacher). Although KD methods achieve state-of-the-art performance in numerous settings, they suffer from several problems limiting their performance. It is shown in the literature that the capacity gap between the teacher and the student networks can make KD ineffective. Additionally, existing KD techniques do not mitigate the noise in the teacher's output: modeling the noisy behaviour of the teacher can distract the student from learning more useful features. We propose a new KD method that addresses these problems and facilitates the training compared to previous techniques. Inspired by continuation optimization, we design a training procedure that optimizes the highly non-convex KD objective by starting with the smoothed version of this objective and making it more complex as the training proceeds. Our method (Continuation-KD) achieves state-of-the-art performance across various compact architectures on NLU (GLUE benchmark) and computer vision tasks (CIFAR-10 and CIFAR-100).
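For readers unfamiliar with the baseline being improved, here is a sketch of plain Hinton-style KD (not Continuation-KD's smoothed objective): the student matches the teacher's temperature-softened distribution in addition to the hard labels:

```python
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    # Soft term: KL between temperature-softened distributions.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)  # rescale so gradients match the hard-label term
    # Hard term: ordinary cross-entropy on the ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard
```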
Deep neural networks are susceptible to adversarial examples: small, imperceptible changes crafted on natural inputs. The most effective defense mechanism against these examples is adversarial training, which constructs adversarial examples during training by iteratively maximizing the loss. The model is then trained to minimize the loss on these constructed examples. This min-max optimization requires more data, larger-capacity models, and additional computing resources. It also degrades the standard generalization performance of the model. Can we achieve robustness more efficiently? In this work, we explore this question from the perspective of knowledge transfer. First, we theoretically show the transferability of robustness from an adversarially trained teacher model to a student model with the help of mixup augmentation. Second, we propose a novel robustness transfer method called Mixup-Based Activated Channel Maps (MixACM) Transfer. MixACM transfers robustness from a robust teacher to a student by matching activated channel maps generated without expensive adversarial perturbations. Finally, extensive experiments on multiple datasets and different learning scenarios show that our method can transfer robustness while also improving generalization on natural images.
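A rough sketch of the two ingredients, mixup and activation-map matching, is shown below; the per-channel spatial pooling is our assumption, not necessarily the paper's exact map definition:

```python
import torch
import torch.nn.functional as F

def mixup(x1, x2, alpha=1.0):
    # Convex combination of two natural images; no adversarial
    # perturbation is generated anywhere in this pipeline.
    lam = torch.distributions.Beta(alpha, alpha).sample()
    return lam * x1 + (1 - lam) * x2

def channel_map_loss(f_student, f_teacher):
    # Collapse spatial dims into per-channel activation summaries
    # (an illustrative pooling choice) and match them with an L1 loss.
    s = F.normalize(f_student.mean(dim=(2, 3)), dim=1)
    t = F.normalize(f_teacher.mean(dim=(2, 3)), dim=1)
    return (s - t).abs().mean()

fs, ft = torch.randn(2, 8, 4, 4), torch.randn(2, 8, 4, 4)
print(channel_map_loss(fs, ft))
```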
Deep neural networks (DNNs) are currently widely used for many artificial intelligence (AI) applications including computer vision, speech recognition, and robotics. While DNNs deliver state-of-the-art accuracy on many AI tasks, it comes at the cost of high computational complexity. Accordingly, techniques that enable efficient processing of DNNs to improve energy efficiency and throughput without sacrificing application accuracy or increasing hardware cost are critical to the wide deployment of DNNs in AI systems.This article aims to provide a comprehensive tutorial and survey about the recent advances towards the goal of enabling efficient processing of DNNs. Specifically, it will provide an overview of DNNs, discuss various hardware platforms and architectures that support DNNs, and highlight key trends in reducing the computation cost of DNNs either solely via hardware design changes or via joint hardware design and DNN algorithm changes. It will also summarize various development resources that enable researchers and practitioners to quickly get started in this field, and highlight important benchmarking metrics and design considerations that should be used for evaluating the rapidly growing number of DNN hardware designs, optionally including algorithmic co-designs, being proposed in academia and industry.The reader will take away the following concepts from this article: understand the key design considerations for DNNs; be able to evaluate different DNN hardware implementations with benchmarks and comparison metrics; understand the trade-offs between various hardware architectures and platforms; be able to evaluate the utility of various DNN design techniques for efficient processing; and understand recent implementation trends and opportunities.
Machine learning is widely used in practice to produce predictive models for applications such as image processing, speech and text recognition. These models are more accurate when trained on large amount of data collected from different sources. However, the massive data collection raises privacy concerns.In this paper, we present new and efficient protocols for privacy preserving machine learning for linear regression, logistic regression and neural network training using the stochastic gradient descent method. Our protocols fall in the two-server model where data owners distribute their private data among two non-colluding servers who train various models on the joint data using secure two-party computation (2PC). We develop new techniques to support secure arithmetic operations on shared decimal numbers, and propose MPC-friendly alternatives to non-linear functions such as sigmoid and softmax that are superior to prior work. We implement our system in C++. Our experiments validate that our protocols are several orders of magnitude faster than the state of the art implementations for privacy preserving linear and logistic regressions, and scale to millions of data samples with thousands of features. We also implement the first privacy preserving system for training neural networks.
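As an example of an MPC-friendly replacement for sigmoid of the kind the paper describes, the commonly used piecewise-linear ramp f(x) = clip(x + 1/2, 0, 1) needs only comparisons and additions; treat the exact breakpoints here as an assumption:

```python
import numpy as np

def mpc_friendly_sigmoid(x):
    # Piecewise-linear ramp: 0 below -1/2, linear in between, 1 above 1/2.
    # Computable in 2PC from secure comparisons and additions alone.
    return np.clip(x + 0.5, 0.0, 1.0)

x = np.linspace(-2, 2, 9)
print(np.round(mpc_friendly_sigmoid(x), 3))
```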
Despite the fact that deep neural networks are powerful models and achieve appealing results on many tasks, they are too large to be deployed on edge devices like smartphones or embedded sensor nodes. There have been efforts to compress these networks, and a popular method is knowledge distillation, where a large (teacher) pre-trained network is used to train a smaller (student) network. However, in this paper, we show that the student network performance degrades when the gap between student and teacher is large. Given a fixed student network, one cannot employ an arbitrarily large teacher, or in other words, a teacher can effectively transfer its knowledge to students up to a certain size, not smaller. To alleviate this shortcoming, we introduce multi-step knowledge distillation, which employs an intermediate-sized network (teacher assistant) to bridge the gap between the student and the teacher. Moreover, we study the effect of teacher assistant size and extend the framework to multi-step distillation. Theoretical analysis and extensive experiments on CIFAR-10,100 and ImageNet datasets and on CNN and ResNet architectures substantiate the effectiveness of our proposed approach.
Recently, cloud-based graph convolutional networks (GCNs) have demonstrated great success and potential in many privacy-sensitive applications such as personal healthcare and financial systems. Despite their high inference accuracy and performance on the cloud, maintaining data privacy in GCN inference, which is of paramount importance to these practical applications, remains largely unexplored. In this paper, we make an initial attempt towards this and develop CryptoGCN, a homomorphic encryption (HE) based GCN inference framework. A key to the success of our approach is reducing the tremendous computational overhead of HE operations, which can be orders of magnitude higher than their counterparts in the plaintext space. To this end, we develop an approach that effectively exploits the sparsity of matrix operations in GCN inference to significantly reduce the computational overhead. Specifically, we propose a novel AMA data formatting method and associated spatial convolution methods, which can exploit the complex graph structure and perform efficient matrix-matrix multiplication in HE computation, thus greatly reducing the number of HE operations. We also develop a co-optimization framework that explores the trade-offs among accuracy, security level, and computational overhead through judicious pruning and polynomial approximation of the activation module in GCNs. Based on the NTU-XView skeleton joint dataset, i.e., to the best of our knowledge the largest dataset evaluated homomorphically so far, our experimental results demonstrate that CryptoGCN outperforms state-of-the-art solutions in terms of latency and the number of homomorphic operations, achieving a 3.10× speedup in latency and reducing the total number of homomorphic operations by 77.4%, with a small accuracy loss of 1-1.5%.
With the advent of functional encryption, new possibilities for computation on encrypted data have arisen. Functional encryption enables data owners to grant third parties access to perform specified computations without disclosing their inputs. Unlike fully homomorphic encryption, it also provides the computation results in plain. The ubiquity of machine learning has led to the collection of massive amounts of private data in the cloud computing environment. This raises potential privacy issues and the need for more private and secure computing solutions. Numerous efforts have been made in privacy-preserving machine learning (PPML) to address security and privacy concerns. There are approaches based on fully homomorphic encryption (FHE), secure multiparty computation (SMC), and, recently, functional encryption (FE). However, FE-based PPML is still in its infancy and has not yet received much attention compared with FHE-based PPML approaches. In this paper, we provide a systematization of PPML works based on FE, summarizing the state of the art in the literature. We focus on inner-product-FE and quadratic-FE-based machine learning models for PPML applications. We analyze the performance and usability of the available FE libraries and their application to PPML. We also discuss potential directions for FE-based PPML approaches. To the best of our knowledge, this is the first work to systematize FE-based PPML approaches.
Mobile devices such as smartphones and autonomous vehicles increasingly rely on deep neural networks (DNNs) to execute complex inference tasks such as image classification and speech recognition, among others. However, continuously executing entire DNNs on mobile devices can quickly deplete their battery. Although task offloading to cloud/edge servers may decrease the mobile device's computational burden, erratic patterns in channel quality, network load, and edge server load can lead to significant delays in task execution. Recently, approaches based on split computing (SC) have been proposed, where the DNN is split into a head model and a tail model, executed on the mobile device and on the edge server, respectively. Ultimately, this may reduce bandwidth usage as well as energy consumption. Another approach, called early exiting (EE), trains models that present multiple "exits" in the architecture, each providing increasingly higher target accuracy. Therefore, the trade-off between accuracy and delay can be tuned according to the current conditions or application demands. In this paper, we provide a comprehensive survey of the state of the art in SC and EE strategies by presenting a comparison of the most relevant approaches. We conclude the paper by providing a set of compelling research challenges.
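A minimal early-exit inference loop (a generic illustration, not a specific method from the survey) looks like this; inference stops at the first exit whose softmax confidence clears a threshold:

```python
import torch
import torch.nn as nn

class EarlyExitNet(nn.Module):
    """Toy backbone with one small classifier head per block."""

    def __init__(self, dim=64, num_classes=10, num_blocks=3):
        super().__init__()
        self.blocks = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, dim), nn.ReLU())
            for _ in range(num_blocks)
        )
        self.exits = nn.ModuleList(
            nn.Linear(dim, num_classes) for _ in range(num_blocks)
        )

    @torch.no_grad()
    def forward(self, x, threshold=0.9):
        # Assumes batch size 1 for the confidence check.
        for i, (block, exit_head) in enumerate(zip(self.blocks, self.exits)):
            x = block(x)
            probs = exit_head(x).softmax(dim=-1)
            conf, pred = probs.max(dim=-1)
            if conf.item() >= threshold:  # confident enough: exit early
                return pred, i
        return pred, len(self.blocks) - 1  # fell through to the last exit

model = EarlyExitNet().eval()
pred, exit_idx = model(torch.randn(1, 64))
print(f"prediction {pred.item()} taken at exit {exit_idx}")
```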
We demonstrate that self-learning techniques like entropy minimization and pseudo-labeling are simple and effective at improving performance of a deployed computer vision model under systematic domain shifts. We conduct a wide range of large-scale experiments and show consistent improvements irrespective of the model architecture, the pre-training technique or the type of distribution shift. At the same time, self-learning is simple to use in practice because it does not require knowledge or access to the original training data or scheme, is robust to hyperparameter choices, is straight-forward to implement and requires only a few adaptation epochs. This makes self-learning techniques highly attractive for any practitioner who applies machine learning algorithms in the real world. We present state-of-the-art adaptation results on CIFAR10-C (8.5% error), ImageNet-C (22.0% mCE), ImageNet-R (17.4% error) and ImageNet-A (14.8% error), theoretically study the dynamics of self-supervised adaptation methods and propose a new classification dataset (ImageNet-D) which is challenging even with adaptation.
Deep learning technologies have demonstrated remarkable effectiveness in a wide range of tasks, and deep learning holds the potential to advance a multitude of applications, including in edge computing, where deep models are deployed on edge devices to enable instant data processing and response. A key challenge is that while the application of deep models often incurs substantial memory and computational costs, edge devices typically offer only very limited storage and computational capabilities, which may vary substantially across devices. These characteristics make it difficult to build deep learning solutions that unleash the potential of edge devices while complying with their constraints. A promising approach to addressing this challenge is to automate the design of effective deep learning models that are lightweight, require only a small amount of storage, and incur only low computational overheads. This survey offers comprehensive coverage of techniques for automating the design of deep learning models geared towards edge computing. It provides an overview and comparison of the key metrics commonly used to quantify models in terms of effectiveness, lightness, and computational cost. The survey then covers three categories of state-of-the-art deep design automation techniques: automated neural architecture search, automated model compression, and joint automated design and compression. Finally, it covers open issues and directions for future research.
Deep neural networks (DNNs) are increasingly deployed in safety-critical systems such as personal healthcare devices and self-driving cars. In such DNN-based systems, error resilience is a top priority, since faults in DNN inference can lead to mispredictions and safety hazards. For latency-critical DNN inference on resource-constrained edge devices, it is impractical to apply conventional redundancy-based fault-tolerance techniques. In this paper, we propose FitAct, a low-cost approach to enhancing the error resilience of DNNs by deploying fine-grained trainable activation functions. The main idea is to precisely bound the activation value of each individual neuron via neuron-wise bounded activation functions, so that faults cannot propagate through the network. To avoid complex DNN model retraining, we propose to decouple accuracy training from resilience training, and develop a lightweight training phase to learn the precise bounds of these activation functions. Experimental results on widely used DNN models such as AlexNet, VGG16, and ResNet50 show that FitAct outperforms state-of-the-art studies such as Clip-Act and Ranger in enhancing DNN error resilience across a wide range of fault rates, while adding manageable runtime and memory overheads.
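A sketch of a neuron-wise bounded activation (our illustration; the exact parameterization may differ from the paper's) keeps one learnable upper bound per neuron, so a fault-corrupted activation is clipped before it can propagate:

```python
import torch
import torch.nn as nn

class BoundedReLU(nn.Module):
    """Neuron-wise bounded activation: min(relu(x), bound).

    One learnable upper bound per neuron clips corrupted values so
    bit-flip faults cannot propagate as huge activations. Learning only
    the bounds in a light post-training phase, with the accuracy-trained
    weights frozen, is an assumption of this sketch.
    """
    def __init__(self, num_neurons, init_bound=6.0):
        super().__init__()
        self.bound = nn.Parameter(torch.full((num_neurons,), init_bound))

    def forward(self, x):
        # x: (batch, num_neurons); clamp each neuron to its own bound.
        return torch.clamp(x, min=0.0).minimum(self.bound)

act = BoundedReLU(num_neurons=8)
print(act(torch.randn(2, 8) * 10.0))  # large values are clipped near 6
```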
This work introduces an attention mechanism for image classifiers and a corresponding deep neural network (DNN) architecture, named ISNet. During training, the ISNet uses segmentation targets to learn how to find an image's region of interest and concentrate its attention on it. The proposal is based on a novel concept: background relevance minimization in explanation heatmaps. It can be applied to virtually any classification neural network architecture, with no extra computational cost at run-time. A single DNN capable of ignoring the background can replace the common pipeline of a segmenter followed by a classifier, being faster and lighter. We tested the ISNet on three applications: COVID-19 and tuberculosis detection in chest X-rays, and facial attribute estimation. The first two tasks employed mixed training databases, which fostered shortcut learning. By focusing on the lungs and ignoring sources of bias in the background, the ISNet reduced the problem. It thus improved generalization to external (out-of-distribution) test datasets in the biomedical classification problems, surpassing a standard classifier, a multi-task DNN (performing classification and segmentation), an attention-gated neural network, and the standard segmentation-classification pipeline. Facial attribute estimation showed that the ISNet can precisely focus on faces, and is also applicable to natural images. The ISNet presents an accurate, fast, and light methodology for ignoring backgrounds and improving generalization in multiple fields.
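A loose sketch of a background relevance minimization loss (details assumed, not the paper's exact formulation): given an explanation heatmap and a foreground mask, penalize the share of absolute relevance that falls on the background:

```python
import torch

def background_relevance_loss(heatmap, fg_mask, eps=1e-8):
    # heatmap: (batch, H, W) relevance scores from an explanation method;
    # fg_mask: (batch, H, W) segmentation targets in {0, 1}.
    rel = heatmap.abs()
    bg = (rel * (1.0 - fg_mask)).sum(dim=(1, 2))
    total = rel.sum(dim=(1, 2)) + eps
    return (bg / total).mean()  # 0 when all relevance is on the foreground

loss = background_relevance_loss(torch.rand(2, 32, 32), torch.ones(2, 32, 32))
print(loss)  # ~0 because the mask marks everything as foreground
```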