Watermarking has been proposed to protect the intellectual property rights (IPR) of deep neural networks (DNNs) and track their use. Several methods have been proposed that embed the watermark either into the trainable parameters of the network (white-box watermarking) or into the input-output mapping implemented by the network in correspondence to specific inputs (black-box watermarking). In both cases, achieving robustness against fine-tuning, model compression, and, even more, transfer learning is one of the most difficult challenges researchers are trying to face. In this paper, we propose a new white-box, multi-bit watermarking algorithm with strong robustness properties, including retraining for transfer learning. Thanks to a new information coding strategy, the watermark message is spread across a number of fixed weights, whose position depends on a secret key. The weights hosting the watermark are set prior to training and are kept fixed throughout the entire training procedure. The distribution of the weights carrying the message is theoretically optimized to ensure that the watermarked weights are indistinguishable from the other weights, while at the same time keeping their amplitude as large as possible to improve robustness. We carried out several experiments demonstrating the capability of the proposed scheme to provide a high payload with practically no impact on the network accuracy, while at the same time retaining excellent robustness against network modifications and re-use, including retraining for transfer learning.
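The core mechanics described above (key-dependent host positions, fixed-amplitude weights frozen before training) can be sketched as follows. The SHA-256 key derivation, the function names, and the simple ±magnitude bit encoding are illustrative assumptions, not the paper's exact construction:

```python
import hashlib
import random

def select_host_indices(num_weights, payload_bits, secret_key):
    """Pseudo-randomly pick which weights will host the watermark,
    seeded by the secret key (illustrative key derivation)."""
    seed = int(hashlib.sha256(secret_key.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    return rng.sample(range(num_weights), payload_bits)

def embed_message(weights, bits, indices, magnitude=0.05):
    """Write one bit per host weight as a signed fixed magnitude;
    these weights would then be excluded from gradient updates
    (frozen) for the whole training run."""
    w = list(weights)
    for b, i in zip(bits, indices):
        w[i] = magnitude if b else -magnitude
    return w

def extract_message(weights, indices):
    """Read the message back from the sign of the host weights."""
    return [1 if weights[i] > 0 else 0 for i in indices]
```

Only the holder of the secret key can locate the host weights, and because they never move during training, the payload survives as long as those positions are not heavily perturbed.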
translated by Google Translate
With the impressive progress touching every aspect of our society, AI technology based on deep neural networks (DNNs) is raising increasing security concerns. While attacks operating at test time have monopolized the initial attention of researchers, the possibility of corrupting DNN models by interfering with the training process represents a further, serious threat undermining the dependability of AI techniques. In backdoor attacks, the attacker corrupts the training data so as to induce an erroneous behavior at test time. Test-time errors, however, are activated only in the presence of a triggering event corresponding to a properly crafted input sample. In this way, the corrupted network continues to work as expected on normal inputs, and the malicious behavior occurs only when the attacker decides to activate the backdoor hidden within the network. Over the last few years, backdoor attacks have been the subject of intense research activity focusing on the development of new classes of attacks and on the proposal of possible countermeasures. The goal of this overview paper is to review the works published so far, classifying the different types of attacks and defenses proposed to date. The classification guiding the analysis is based on the amount of control that the attacker has over the training process, and on the capability of the defender to verify the integrity of the data used for training and to monitor the operation of the DNN at training and test time. Hence, the proposed analysis is particularly suited to highlight the strengths and weaknesses of both attacks and defenses with reference to the application scenarios in which they operate.
We present a watermarking method for protecting the intellectual property (IP) of generative adversarial networks (GANs). The aim is to watermark the GAN model so that any image generated by the GAN contains an invisible watermark (signature), whose presence in the image can be checked at a later stage for ownership verification. To achieve this goal, a pre-trained CNN watermark decoding block is inserted at the output of the generator. The generator loss is then modified by including a watermark loss term, to ensure that the prescribed watermark can be extracted from the generated images. The watermark is embedded via fine-tuning, with reduced time complexity. Results show that our method can effectively embed an invisible watermark inside the generated images. Moreover, our method is general and can work with different GAN architectures, different tasks, and different resolutions of the output image. We also demonstrate the good robustness of the embedded watermark against several post-processing operations, including JPEG compression, noise addition, blurring, and color transformations.
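The modified generator objective can be sketched as an adversarial term plus a watermark term. The binary cross-entropy form and the λ weighting below are illustrative assumptions; the decoder (a pre-trained CNN in the paper) is abstracted here as the bit probabilities it outputs for a generated image:

```python
import math

def watermark_loss(decoded_bits, target_bits):
    """Binary cross-entropy between the bits decoded from a generated
    image and the owner's prescribed watermark (sketch)."""
    eps = 1e-7
    total = -sum(t * math.log(max(d, eps)) + (1 - t) * math.log(max(1.0 - d, eps))
                 for d, t in zip(decoded_bits, target_bits))
    return total / len(target_bits)

def generator_objective(adv_loss, decoded_bits, target_bits, lam=1.0):
    """Generator loss modified with the watermark term: fine-tuning
    minimizes both image realism and watermark decodability."""
    return adv_loss + lam * watermark_loss(decoded_bits, target_bits)
```

When the decoder recovers the prescribed bits perfectly, the watermark term vanishes and only the adversarial term remains.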
It is crucial to protect the intellectual property of DNN models prior to their deployment. The methods proposed so far either require changing internal model parameters or the machine learning pipeline, or fail to meet security and robustness requirements. This paper proposes a lightweight, robust, and secure black-box DNN watermarking protocol that takes advantage of cryptographic one-way functions together with the injection, during training, of out-of-task key image-label pairs. These pairs are later used to prove DNN model ownership at test time. The main feature is that the value of the proof, and its security, are measurable. Extensive experiments on image classification models for various datasets, as well as exposing them to a variety of attacks, show that the proposed method provides protection while maintaining an adequate level of security and robustness.
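The verification side can be sketched with a cryptographic one-way function deriving the expected label for each key input, and a hit-rate test over the suspect model. The SHA-256 construction and the 90% threshold are illustrative assumptions, not the protocol's exact parameters:

```python
import hashlib

def trigger_label(secret_key, sample_id, num_classes):
    """Derive a key-dependent label for a trigger input with a
    one-way function, so labels cannot be forged without the key."""
    digest = hashlib.sha256(f"{secret_key}:{sample_id}".encode()).digest()
    return digest[0] % num_classes

def verify_ownership(predict, secret_key, trigger_ids, num_classes, threshold=0.9):
    """Ownership is claimed if the suspect model reproduces enough of
    the key-label pairs it was trained to memorize."""
    hits = sum(predict(i) == trigger_label(secret_key, i, num_classes)
               for i in trigger_ids)
    return hits / len(trigger_ids) >= threshold
```

A model trained without the key-label pairs matches each derived label only by chance, so its hit rate stays near 1/num_classes and verification fails.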
In this paper, a new methodology is proposed that allows for the low-complexity development of neural network (NN) based equalizers for the mitigation of impairments in high-speed coherent optical transmission systems. In this work, we provide a comprehensive description and comparison of various deep model compression approaches that have been applied to feed-forward and recurrent NN designs. Additionally, we evaluate the influence of these strategies on the performance of each NN equalizer. Quantization, weight clustering, pruning, and other cutting-edge strategies for model compression are considered. In this work, we propose and evaluate a Bayesian-optimization-assisted compression, in which the hyperparameters of the compression are chosen to simultaneously reduce complexity and improve performance. In conclusion, the trade-off between the complexity of each compression approach and its performance is evaluated by utilizing both simulated and experimental data to complete the analysis. By utilizing the optimal compression approach, we show that it is possible to design an NN-based equalizer that achieves better performance than the conventional digital back-propagation (DBP) equalizer with only one step. This is accomplished by reducing the number of multipliers used in the NN equalizer after applying the weight clustering and pruning algorithms. Furthermore, we demonstrate that an NN-based equalizer can also achieve superior performance while still maintaining the same degree of complexity as the full electronic chromatic dispersion compensation block. We conclude the analysis by highlighting open questions and existing challenges, as well as future research directions.
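Among the compression techniques compared (quantization, weight clustering, pruning), weight clustering is simple to illustrate: a one-dimensional k-means pass that replaces each weight with a shared centroid, so a layer stores only cluster indices plus a small codebook. This is a generic, hypothetical sketch, not the paper's implementation:

```python
def cluster_weights(weights, centroids, iters=10):
    """1-D k-means weight clustering: assign each weight to its nearest
    centroid, update centroids as bucket means, then snap every weight
    to its final centroid (the shared codebook entry)."""
    cs = list(centroids)
    for _ in range(iters):
        buckets = [[] for _ in cs]
        for w in weights:
            j = min(range(len(cs)), key=lambda k: abs(w - cs[k]))
            buckets[j].append(w)
        cs = [sum(b) / len(b) if b else c for b, c in zip(buckets, cs)]
    return [min(cs, key=lambda c: abs(w - c)) for w in weights]
```

After clustering, a multiplication by any weight draws from a handful of distinct values, which is what reduces the multiplier count in a hardware equalizer.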
State-of-the-art performance for many emerging edge applications is achieved by deep neural networks (DNNs). Often, these DNNs are location and time sensitive, and the parameters of a specific DNN must be delivered from an edge server to the edge device rapidly and efficiently to carry out time-sensitive inference tasks. In this paper, we introduce AirNet, a novel training and transmission method that allows efficient wireless delivery of DNNs under stringent transmit power and latency constraints. We first train the DNN with noise injection to counter the wireless channel noise. Then we employ pruning to reduce the network size to the available channel bandwidth, and perform knowledge distillation from a larger model to achieve satisfactory performance, despite pruning. We show that AirNet achieves significantly higher test accuracy compared to digital alternatives under the same bandwidth and power constraints. The accuracy of the network at the receiver also exhibits graceful degradation with channel quality, which reduces the requirement for accurate channel estimation. We further improve the performance of AirNet by pruning the network below the available bandwidth, and using channel expansion to provide better robustness against channel noise. We also benefit from unequal error protection (UEP) by selectively expanding more important layers of the network. Finally, we develop an ensemble training approach, which trains a whole spectrum of DNNs, each of which can be used at a different channel condition, resolving the otherwise impractical memory requirements.
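The noise-injection idea can be sketched as perturbing the parameters with additive white Gaussian noise matched to a target SNR during each training forward pass, so the model learns to tolerate the analog channel. The function name and the SNR-relative-to-average-parameter-power convention are assumptions:

```python
import math
import random

def add_channel_noise(params, snr_db, rng):
    """Perturb parameters with AWGN whose power is set relative to the
    average parameter power for the given SNR in dB (sketch)."""
    sig_pow = sum(p * p for p in params) / len(params)
    sigma = math.sqrt(sig_pow / (10 ** (snr_db / 10)))
    return [p + rng.gauss(0.0, sigma) for p in params]

# During training, each forward pass would run on
# add_channel_noise(weights, snr_db, rng) so that gradients are
# computed through a realistically perturbed model.
```

At high SNR the perturbation is small and the noisy model behaves like the clean one; training across a range of SNRs yields the graceful degradation described above.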
Deep neural networks (DNNs) are currently widely used for many artificial intelligence (AI) applications including computer vision, speech recognition, and robotics. While DNNs deliver state-of-the-art accuracy on many AI tasks, it comes at the cost of high computational complexity. Accordingly, techniques that enable efficient processing of DNNs to improve energy efficiency and throughput without sacrificing application accuracy or increasing hardware cost are critical to the wide deployment of DNNs in AI systems. This article aims to provide a comprehensive tutorial and survey about the recent advances towards the goal of enabling efficient processing of DNNs. Specifically, it will provide an overview of DNNs, discuss various hardware platforms and architectures that support DNNs, and highlight key trends in reducing the computation cost of DNNs either solely via hardware design changes or via joint hardware design and DNN algorithm changes. It will also summarize various development resources that enable researchers and practitioners to quickly get started in this field, and highlight important benchmarking metrics and design considerations that should be used for evaluating the rapidly growing number of DNN hardware designs, optionally including algorithmic co-designs, being proposed in academia and industry. The reader will take away the following concepts from this article: understand the key design considerations for DNNs; be able to evaluate different DNN hardware implementations with benchmarks and comparison metrics; understand the trade-offs between various hardware architectures and platforms; be able to evaluate the utility of various DNN design techniques for efficient processing; and understand recent implementation trends and opportunities.
Deep learning algorithms have been shown to perform extremely well on many classical machine learning problems. However, recent studies have shown that deep learning, like other machine learning techniques, is vulnerable to adversarial samples: inputs crafted to force a deep neural network (DNN) to provide adversary-selected outputs. Such attacks can seriously undermine the security of the system supported by the DNN, sometimes with devastating consequences. For example, autonomous vehicles can be crashed, illicit or illegal content can bypass content filters, or biometric authentication systems can be manipulated to allow improper access. In this work, we introduce a defensive mechanism called defensive distillation to reduce the effectiveness of adversarial samples on DNNs. We analytically investigate the generalizability and robustness properties granted by the use of defensive distillation when training DNNs. We also empirically study the effectiveness of our defense mechanisms on two DNNs placed in adversarial settings. The study shows that defensive distillation can reduce effectiveness of sample creation from 95% to less than 0.5% on a studied DNN. Such dramatic gains can be explained by the fact that distillation leads gradients used in adversarial sample creation to be reduced by a factor of 10^30. We also find that distillation increases the average minimum number of features that need to be modified to create adversarial samples by about 800% on one of the DNNs we tested.
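The mechanism behind these gains is training at a softmax temperature T > 1, which smooths the output distribution and shrinks the input gradients an adversary relies on. A minimal sketch of the temperature softmax (the function name is mine):

```python
import math

def softmax_at_temperature(logits, T):
    """Softmax with temperature T. Defensive distillation trains the
    teacher and the distilled network at a high T; dividing the logits
    by T flattens the output distribution. The max-subtraction keeps
    the exponentials numerically stable."""
    m = max(l / T for l in logits)
    exps = [math.exp(l / T - m) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]
```

The distilled network is trained on the teacher's soft labels at high T and then deployed at T = 1, where its outputs are near-saturated and the gradients available for crafting adversarial samples are tiny.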
The record-breaking performance of deep neural networks (DNNs) comes with heavy parameterization, leading to external dynamic random-access memory (DRAM) for storage. The prohibitive energy of DRAM accesses makes it non-trivial to deploy DNNs on resource-constrained devices, calling for minimizing weight and data movement to improve energy efficiency. We present SmartDeal (SD), an algorithm framework to trade higher-cost memory storage/access for lower-cost computation, in order to aggressively boost storage and energy efficiency in both inference and training. The core of SD is a novel weight decomposition with structural constraints, carefully crafted to unleash the hardware-efficiency potential. Specifically, we decompose each weight tensor as the product of a small basis matrix and a large structurally sparse coefficient matrix whose non-zeros are quantized to powers of two. The resulting sparse and quantized DNNs enjoy greatly reduced energy for data movement and weight storage, incurring only minimal overhead to recover the original weights thanks to sparse bit-operations and cost-favorable computations. Beyond inference, we take another leap to embrace energy-efficient training, introducing innovative techniques to address the unique roadblocks arising in training while preserving the SD structure. We also design a dedicated hardware accelerator to fully utilize the SD structure to improve the real energy efficiency and latency. We conduct experiments on multiple tasks, models, and datasets in different settings. Results show that: 1) applied to inference, SD achieves up to 2.44x improvement in energy efficiency as evaluated via real hardware implementations; 2) applied to training, SD leads to 10.56x and 4.48x reductions in storage and training energy, with negligible accuracy loss compared to state-of-the-art training baselines. Our source code is available online.
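The two ingredients of the decomposition, coefficients snapped to signed powers of two (so multiplication becomes a bit shift) and reconstruction of the weights as a small basis times a sparse coefficient matrix, can be sketched on toy data. This is an illustrative sketch, not the SmartDeal code; the log-space rounding rule is an assumption:

```python
import math

def quantize_pow2(x, min_exp=-8, max_exp=0):
    """Map a nonzero value to the nearest signed power of two in
    log-space (zero stays zero), so multiplying by it reduces to a
    bit shift in hardware."""
    if x == 0.0:
        return 0.0
    sign = 1.0 if x > 0 else -1.0
    e = round(math.log2(abs(x)))
    return sign * 2.0 ** max(min_exp, min(max_exp, e))

def matmul(B, C):
    """Recover the weight tensor as the product of the small basis
    matrix B and the sparse, power-of-two coefficient matrix C."""
    return [[sum(b * c for b, c in zip(row, col)) for col in zip(*C)]
            for row in B]
```

Storing B densely and C as sparse power-of-two entries replaces most multiply-accumulates in weight recovery with shifts and additions, which is where the energy savings come from.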
Machine learning (ML) models are applied in an ever-increasing number of domains. The availability of large amounts of data and computational resources encourages the development of ever more complex and valuable models. These models are considered the intellectual property of the legitimate parties who trained them, which makes their protection against stealing, illegitimate redistribution, and unauthorized application an urgent need. Digital watermarking presents a strong mechanism for marking model ownership and thereby offers protection against those threats. This work presents a taxonomy identifying and analyzing different classes of watermarking schemes for ML models. It introduces a unified threat model to allow structured reasoning on, and comparison of, the effectiveness of watermarking methods in different scenarios. Furthermore, it systematizes the desired security requirements for, and attacks against, ML model watermarking. Based on this framework, representative literature in the field is surveyed to illustrate the taxonomy. Finally, shortcomings and general limitations of existing approaches are discussed, and an outlook on future research directions is given.
While machine learning is traditionally a resource intensive task, embedded systems, autonomous navigation, and the vision of the Internet of Things fuel the interest in resource-efficient approaches. These approaches aim for a carefully chosen trade-off between performance and resource consumption in terms of computation and energy. The development of such approaches is among the major challenges in current machine learning research and key to ensure a smooth transition of machine learning technology from a scientific environment with virtually unlimited computing resources into everyday applications. In this article, we provide an overview of the current state of the art of machine learning techniques facilitating these real-world requirements. In particular, we focus on deep neural networks (DNNs), the predominant machine learning models of the past decade. We give a comprehensive overview of the vast literature that can be mainly split into three non-mutually exclusive categories: (i) quantized neural networks, (ii) network pruning, and (iii) structural efficiency. These techniques can be applied during training or as post-processing, and they are widely used to reduce the computational demands in terms of memory footprint, inference speed, and energy efficiency. We also briefly discuss different concepts of embedded hardware for DNNs and their compatibility with machine learning techniques as well as potential for energy and latency reduction. We substantiate our discussion with experiments on well-known benchmark datasets using compression techniques (quantization, pruning) for a set of resource-constrained embedded systems, such as CPUs, GPUs and FPGAs. The obtained results highlight the difficulty of finding good trade-offs between resource efficiency and predictive performance.
Adversarial attacks on machine learning-based classifiers, along with defense mechanisms, have been widely studied in the context of single-label classification problems. In this paper, we shift the attention to multi-label classification, where domain knowledge about the relationships among the considered classes can offer a natural way to spot incoherent predictions, i.e., predictions associated with adversarial examples lying outside the training data distribution. We explore this intuition in a framework in which first-order logic knowledge is converted into constraints and injected into a semi-supervised learning problem. In this setting, the constrained classifier learns to fulfill the domain knowledge over the marginal distribution, and can naturally reject samples with incoherent predictions. Even though our method does not exploit any knowledge of attacks during training, our experimental analysis surprisingly unveils that domain-knowledge constraints can effectively help detect adversarial examples, especially if such constraints are not known to the attacker.
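At prediction time, a first-order rule such as "every cat is an animal" reduces to a simple coherence check on the thresholded multi-label output. A hypothetical sketch of the rejection test (the rule representation and names are mine):

```python
def violates_rules(prediction, rules, threshold=0.5):
    """Flag a multi-label prediction as incoherent when it breaks an
    implication constraint (premise class active while its consequence
    class is inactive); such samples can be rejected as likely
    adversarial. `prediction` maps class name -> score in [0, 1];
    `rules` is a list of (premise, consequence) pairs."""
    active = {c for c, p in prediction.items() if p >= threshold}
    return any(a in active and b not in active for a, b in rules)
```

An attacker who only targets the "cat" score without also raising "animal" produces exactly the kind of incoherent output this check rejects, which is why unknown constraints are the hardest to evade.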
Modern deep neural networks are often too large to be used in many practical scenarios. Neural network pruning is an important technique for reducing the size of such models and accelerating inference. Gibbs pruning is a novel framework for expressing and designing neural network pruning methods. Combining approaches from statistical physics and stochastic regularization, it can train and prune a network simultaneously in such a way that the learned weights and the pruning mask are well-adapted to each other. It can be used for structured or unstructured pruning, and we propose a number of specific methods for each. We compare our proposed methods with a number of contemporary neural network pruning methods and find that Gibbs pruning outperforms them. In particular, we achieve a new state-of-the-art result for pruning ResNet-56 on the CIFAR-10 dataset.
Deep neural networks (DNNs) have achieved great success in many application domains and have brought profound changes to our society. However, they also raise new security problems, among which how to protect the intellectual property (IP) of DNNs against infringement is one of the most important yet most challenging topics. To deal with this problem, recent studies focus on the IP protection of DNNs by applying digital watermarking, which embeds source information and/or authentication data into DNN models by tuning network parameters directly or indirectly. However, tuning network parameters inevitably distorts the DNN and therefore impairs the performance of the DNN model on its original task, regardless of the degree of the performance degradation. This motivates the authors of this paper to propose a new technique called \emph{pooled membership inference (PMI)} to protect the IP of DNN models. The proposed PMI neither alters the network parameters of the given DNN model nor fine-tunes the DNN model with a sequence of carefully crafted trigger samples. Instead, it leaves the original DNN model unchanged, and determines ownership by inferring which of multiple mini-datasets was used to train the target DNN model, which in turn identifies the owner of the DNN model in practice. Experiments also verify the superiority and applicability of this work.
Mixed-precision deep neural networks achieve the energy efficiency and throughput needed for hardware deployment, particularly when resources are limited, without sacrificing accuracy. However, the optimal per-layer bit precision that preserves accuracy is not easy to find, especially with the abundance of models, datasets, and quantization techniques that creates an enormous search space. To tackle this difficulty, a body of literature has emerged recently, and several frameworks achieving promising accuracy results have been proposed. In this paper, we first summarize the quantization techniques commonly used in the literature. Then, we present a thorough survey of the mixed-precision frameworks, categorized according to their optimization techniques, such as reinforcement learning, and their quantization techniques, such as deterministic rounding. Furthermore, the advantages and shortcomings of each framework are discussed, and a juxtaposition is presented. We finally give guidelines for future mixed-precision frameworks.
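As a concrete baseline for the quantization techniques these frameworks build on, deterministic rounding can be sketched as a round-to-nearest uniform quantizer; a mixed-precision framework then assigns a different `bits` value to each layer. This is a generic illustration, not any specific framework's code:

```python
def quantize_uniform(x, bits, x_min, x_max):
    """Deterministic round-to-nearest uniform quantizer: clip x into
    [x_min, x_max] and snap it to one of 2**bits evenly spaced levels."""
    levels = (1 << bits) - 1          # number of quantization steps
    step = (x_max - x_min) / levels   # spacing between adjacent levels
    x = min(max(x, x_min), x_max)     # clip out-of-range values
    return x_min + round((x - x_min) / step) * step
```

The per-layer search space is then simply the choice of `bits` for every layer, which is why the joint space over deep models becomes enormous and motivates optimizers such as reinforcement learning.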
Deep Neural Networks have recently gained lots of success after enabling several breakthroughs in notoriously challenging problems. Training these networks is computationally expensive and requires vast amounts of training data. Selling such pre-trained models can, therefore, be a lucrative business model. Unfortunately, once the models are sold they can be easily copied and redistributed. To avoid this, a tracking mechanism to identify models as the intellectual property of a particular vendor is necessary. In this work, we present an approach for watermarking Deep Neural Networks in a black-box way. Our scheme works for general classification tasks and can easily be combined with current learning algorithms. We show experimentally that such a watermark has no noticeable impact on the primary task that the model is designed for and evaluate the robustness of our proposal against a multitude of practical attacks. Moreover, we provide a theoretical analysis, relating our approach to previous work on backdooring.
Communication systems to date have primarily aimed at reliably communicating bit sequences. Such an approach provides efficient engineering designs that are agnostic to the meaning of the messages or to the goal that the message exchange aims to achieve. Next-generation systems, however, can potentially be enriched by folding message semantics and the goals of communication into their design. Further, these systems can be made cognizant of the context in which the communication exchange takes place, providing avenues for novel design insights. This tutorial summarizes the efforts to date, starting from its early adaptations, through semantic-aware and task-oriented communications, covering the foundations, algorithms, and potential implementations. The focus is on approaches that utilize information theory to provide the foundations, as well as on the significant role of learning in semantics- and task-aware communications.
Given access to a machine learning model, can an adversary reconstruct the model's training data? This work studies this question from the lens of a powerful informed adversary who knows all the training data points except one. By instantiating concrete attacks, we show it is feasible to reconstruct the remaining data point in this stringent threat model. For convex models (e.g., logistic regression), reconstruction attacks are simple and can be derived in closed form. For more general models (e.g., neural networks), we propose an attack strategy based on training a reconstructor network that receives as input the weights of the model under attack and produces as output the target data point. We demonstrate the effectiveness of our attack on image classifiers trained on MNIST and CIFAR-10, and systematically investigate which factors of standard machine learning pipelines affect reconstruction success. Finally, we theoretically investigate what amount of differential privacy suffices to mitigate reconstruction attacks by informed adversaries. Our work provides an effective reconstruction attack that model developers can use to assess memorization of individual points in general settings beyond those considered in previous works (e.g., generative language models or access to training gradients); it shows that standard models have the capacity to store enough information to enable high-fidelity reconstruction of training data points; and it demonstrates that differential privacy can successfully mitigate such attacks in a parameter regime where utility degradation is minimal.
It is crucial to protect the intellectual property rights of DNN models prior to their deployment. The DNN should perform two main tasks: its primary task and the watermarking task. This paper proposes a lightweight, reliable, and secure DNN watermarking that attempts to establish strong ties between these two tasks. The samples triggering the watermarking task are generated using image Mixup, either from training or testing samples. This means that there is an infinity of triggers, not limited to the samples used to embed the watermark in the model at training. The extensive experiments on image classification models for different datasets, as well as exposing them to a variety of attacks, show that the proposed watermarking provides protection with an adequate level of security and robustness.
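Image Mixup forms a trigger as a convex combination of two samples, which is why the trigger set is effectively unlimited: any pair and any mixing coefficient yields a fresh trigger. A minimal sketch (images flattened to lists of pixel values; the function name is mine):

```python
def mixup_trigger(x1, x2, lam):
    """Build a watermark trigger as the pixelwise convex combination
    lam * x1 + (1 - lam) * x2 of two images, per image Mixup."""
    return [lam * a + (1.0 - lam) * b for a, b in zip(x1, x2)]
```

Because triggers can be re-generated on the fly from ordinary training or testing samples, verification is not tied to a fixed, stealable trigger set.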
In this paper, we present a novel insider attack called Matryoshka, which employs an irrelevant, scheduled-to-publish DNN model as a carrier model for the covert transmission of multiple secret models that memorize the functionality of private ML data stored in local data centers. Instead of treating the parameters of the carrier model as bit strings and applying conventional steganography, we devise a novel parameter sharing approach which exploits the learning capacity of the carrier model for information hiding. Matryoshka simultaneously achieves: (i) high capacity -- with almost no utility loss of the carrier model, Matryoshka can hide a 26x larger secret model, or 8 secret models of diverse architectures spanning different application domains, inside the carrier model, neither of which can be done with existing steganography techniques; (ii) decoding efficiency -- once the published carrier model is downloaded, an outside colluder can exclusively decode the hidden models from the carrier model with only several integer secrets and knowledge of the hidden model architectures; (iii) effectiveness -- moreover, almost all the recovered models perform as well as if they had been trained independently on the private data; (iv) robustness -- information redundancy is naturally implemented to achieve resilience against common post-processing techniques applied to the carrier before publishing; (v) covertness -- a model inspector with different levels of prior knowledge can hardly differentiate a carrier model from a normal model.