Online personalized recommendation services are generally hosted in the cloud, where users query the cloud-based model to receive recommended content such as merchandise of interest or news feeds. State-of-the-art recommendation models rely on sparse and dense features to represent users' profile information and the items they interact with. Although sparse features account for 99% of the total model size, not enough attention has been paid to the potential information leakage through sparse features. These sparse features capture users' behavior, e.g., their click history and object interactions, and thus potentially carry each user's private information. Sparse features are represented as learned embedding vectors that are stored in large tables, and personalized recommendation is performed by using a specific user's sparse features to index into these tables. Even with recently proposed methods that hide the computation happening in the cloud, an attacker in the cloud may still be able to track the access patterns to the embedding tables. This paper explores the private information that may be learned by tracking a recommendation model's sparse feature access patterns. We first characterize the types of attacks that can be carried out on sparse features in recommendation models in an untrusted cloud, followed by a demonstration of how each of these attacks leads to extracting users' private information or tracking users by their behavior over time.
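To make the access-pattern leakage concrete, below is a minimal sketch (toy table sizes, hypothetical IDs, not the paper's code) of how a sparse feature indexes an embedding table: the set of rows read per query is exactly the user's sparse IDs, so an observer of table accesses learns the user's behavior even if the arithmetic itself is hidden.

```python
# Minimal sketch of sparse-feature embedding lookup and the access-pattern side channel.
# All names and sizes are hypothetical, for illustration only.
import numpy as np

num_items, dim = 10_000, 16
embedding_table = np.random.randn(num_items, dim).astype(np.float32)

observed_accesses = []  # what a cloud-side observer of table accesses could record

def embed(sparse_ids):
    """Look up and sum-pool embeddings for one user's sparse feature (e.g., click history)."""
    observed_accesses.append(tuple(sparse_ids))      # the side channel: which rows were read
    return embedding_table[sparse_ids].sum(axis=0)   # sum-pooling, as in typical models

# Two queries with overlapping IDs produce overlapping access patterns,
# enabling re-identification and behavior tracking over time.
_ = embed([42, 17, 980])
_ = embed([42, 17, 533])
print(observed_accesses)
```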
Attacks that steal controlled information, together with a growing number of information-leakage incidents, have become an emerging cybersecurity threat in recent years. Driven by the rapid development and deployment of advanced analytics solutions, novel stealing attacks leverage machine learning (ML) algorithms to achieve high success rates and cause substantial damage. Detecting and defending against such attacks is challenging and urgent, so governments, organizations, and individuals should pay close attention to ML-based stealing attacks. This survey presents recent advances in this new type of attack and the corresponding countermeasures. ML-based stealing attacks are reviewed from the perspective of three categories of targeted controlled information: controlled user activities, controlled ML-model-related information, and controlled authentication information. Recent publications are summarized to derive general attack methodologies and to identify the limitations and future directions of ML-based stealing attacks. Furthermore, countermeasures for building effective protection are proposed from three aspects: detection, disruption, and isolation.
Recent advances in machine learning have enabled its wide application in different domains, and one of the most exciting applications is autonomous vehicles (AVs), which have encouraged the development of many ML algorithms spanning perception, prediction, and planning. However, training AVs usually requires a large amount of training data collected from different driving environments (e.g., cities) as well as different types of personal information (e.g., working hours and routes). Such collected big data, treated as the new oil of ML in the data-centric AI era, usually contains a large amount of privacy-sensitive information which is hard to remove or even audit. Although existing privacy-protection approaches have achieved certain theoretical and empirical success, there is still a gap when applying them to real-world applications such as autonomous vehicles. For instance, when training AVs, not only can individually identifiable information reveal privacy-sensitive details, but so can population-level information such as road construction within a city, as well as the proprietary trade secrets of AV vendors. Thus, it is critical to revisit the frontier of privacy risks and the corresponding protection approaches in AVs to bridge this gap. Following this goal, in this work we provide a new taxonomy for privacy risks and protection methods in AVs, and categorize privacy in AVs into three levels: individual, population, and proprietary. We explicitly list the privacy challenges at each level, summarize existing solutions to these challenges, discuss the lessons and conclusions, and provide potential future research directions and opportunities for both researchers and practitioners. We believe this work will help shape privacy research in AVs and guide the design of privacy-protection technologies.
Large training data and expensive model tweaking are standard features of deep learning for images. As a result, data owners often utilize cloud resources to develop large-scale complex models, which raises privacy concerns. Existing solutions are either too expensive to be practical or do not sufficiently protect the confidentiality of data and models. In this paper, we study and compare novel \emph{image disguising} mechanisms, DisguisedNets and InstaHide, aiming to achieve a better trade-off among the level of protection for outsourced DNN model training, the expenses, and the utility of data. DisguisedNets are novel combinations of image blocktization, block-level random permutation, and two block-level secure transformations: random multidimensional projection (RMT) and AES pixel-level encryption (AES). InstaHide is an image mixup and random pixel flipping technique \cite{huang20}. We have analyzed and evaluated them under a multi-level threat model. RMT provides a better security guarantee than InstaHide under Level-1 adversarial knowledge, with well-preserved model quality. In contrast, AES provides a security guarantee under Level-2 adversarial knowledge, but it may affect model quality more. The unique features of image disguising also help protect models from model-targeted attacks. We have conducted an extensive experimental evaluation to understand how these methods work in different settings for different datasets.
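Below is a minimal sketch of the block-level disguising idea described above (a 2D grayscale toy image, square blocks, hypothetical sizes); it is an illustration of the general mechanism of blocktization, secret block permutation, and a per-block random projection, not the authors' implementation.

```python
# Sketch of block-level image disguising: blocktize, permute blocks with a secret
# permutation, and apply a secret random projection to each block (RMT-style).
import numpy as np

def disguise(image, block, rng):
    h, w = image.shape
    blocks = [image[i:i + block, j:j + block].reshape(-1)
              for i in range(0, h, block) for j in range(0, w, block)]
    perm = rng.permutation(len(blocks))                           # secret block permutation
    proj = rng.standard_normal((block * block, block * block))    # secret random projection
    return np.stack([blocks[p] @ proj for p in perm])             # disguised representation

rng = np.random.default_rng(seed=7)          # the seed stands in for the data owner's secret key
disguised = disguise(np.random.rand(32, 32), block=8, rng=rng)
print(disguised.shape)                       # (16, 64): 16 permuted, projected blocks
```

A model is then trained directly on the disguised representations, so neither the raw pixels nor the secret transformation ever leave the data owner.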
A learned system uses machine learning (ML) internally to improve performance. We can expect such systems to be vulnerable to some adversarial-ML attacks. Often, the learned component is shared between mutually-distrusting users or processes, much like microarchitectural resources such as caches, potentially giving rise to highly-realistic attacker models. However, compared to attacks on other ML-based systems, attackers face a level of indirection as they cannot interact directly with the learned model. Additionally, the difference between the attack surface of learned and non-learned versions of the same system is often subtle. These factors obfuscate the de facto risks that the incorporation of ML carries. We analyze the root causes of potentially-increased attack surface in learned systems and develop a framework for identifying vulnerabilities that stem from the use of ML. We apply our framework to a broad set of learned systems under active development. To empirically validate the many vulnerabilities surfaced by our framework, we choose 3 of them and implement and evaluate exploits against prominent learned-system instances. We show that the use of ML caused leakage of past queries in a database, enabled a poisoning attack that causes exponential memory blowup in an index structure and crashes it in seconds, and enabled index users to snoop on each other's key distributions by timing queries over their own keys. We find that adversarial ML is a universal threat against learned systems, point to open research gaps in our understanding of learned-systems security, and conclude by discussing mitigations, while noting that data leakage is inherent in systems whose learned component is shared between multiple parties.
Differentially private federated learning (DP-FL) has received increasing attention as a way to mitigate the privacy risk in federated learning. Although different schemes for DP-FL have been proposed, there is still a utility gap. Employing central differential privacy in FL (CDP-FL) can provide a good balance between privacy and model utility, but requires a trusted server. Using local differential privacy for FL (LDP-FL) does not require a trusted server, but suffers from a poor privacy-utility trade-off. Recently proposed shuffle-DP-based FL has the potential to bridge the gap between CDP-FL and LDP-FL without a trusted server; however, a utility gap remains when the number of model parameters is large. In this work, we propose OLIVE, a system that combines the merits of CDP-FL and LDP-FL by leveraging a Trusted Execution Environment (TEE). Our main technical contributions are the analysis of, and countermeasures against, the vulnerability of the TEE in OLIVE. First, we theoretically analyze the memory access pattern leakage of OLIVE and find that there is a risk when gradients are sparsified, which is common in FL. Second, we design an inference attack to understand how the memory access pattern could be linked to the training data. Third, we propose oblivious yet efficient algorithms to prevent memory access pattern leakage in OLIVE. Our experiments on real-world data demonstrate that OLIVE is efficient even when training a model with hundreds of thousands of parameters, and effective against side-channel attacks on TEEs.
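Below is a minimal sketch (not OLIVE's algorithms; a toy array-based illustration with hypothetical names) contrasting a leaky application of a sparsified gradient with a naive data-oblivious variant whose memory access pattern does not depend on which coordinates are non-zero.

```python
# Toy contrast between a leaky sparse update and a naive oblivious one.
import numpy as np

def leaky_apply(params, indices, values):
    # Non-oblivious baseline: the sequence of accesses reveals exactly which
    # coordinates of the sparsified gradient are non-zero.
    for idx, val in zip(indices, values):
        params[idx] += val

def oblivious_apply(params, indices, values):
    # Naive oblivious variant: every parameter position is visited once and every
    # sparse entry is scanned at each position, so the access pattern is independent
    # of the gradient's support. (A real enclave implementation would also need
    # constant-time selects or ORAM; this only illustrates data-independent accesses.)
    for i in range(len(params)):
        acc = 0.0
        for idx, val in zip(indices, values):
            acc += val * (1.0 if idx == i else 0.0)
        params[i] += acc

params = np.zeros(8)
oblivious_apply(params, indices=[1, 6], values=[0.5, -0.2])
print(params)   # [0. 0.5 0. 0. 0. 0. -0.2 0.]
```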
Recommender systems have been widely applied in different application domains, including energy conservation, e-commerce, healthcare, social media, and so on. Such applications require analyzing and mining vast amounts of various types of user data, including demographics, preferences, social interactions, etc., in order to develop accurate and precise recommender systems. Such datasets often include sensitive information, yet most recommender systems focus on model accuracy and ignore issues related to security and user privacy. Despite attempts to overcome these problems using different risk-mitigation techniques, none of them has been completely successful in ensuring cryptographic security and protection of users' private information. To bridge this gap, blockchain technology is presented as a promising strategy to promote security and privacy preservation in recommender systems, not only because of its security and privacy features but also due to its resilience, adaptability, fault tolerance, and trust characteristics. This paper presents a holistic review of blockchain-based recommender systems covering challenges, open issues, and solutions. Accordingly, a well-designed taxonomy is introduced to describe the security and privacy challenges, give an overview of existing frameworks, discuss their applications and the benefits of using blockchain, and indicate opportunities for future research.
Edge computing is a paradigm that shifts data-processing services to the network edge, where the data are generated. While such an architecture provides faster processing and response, among other benefits, it also raises critical security issues and challenges that must be addressed. This paper discusses the security threats and vulnerabilities emerging from the edge network architecture, spanning from the hardware layer to the system layer. We further discuss privacy and regulatory compliance challenges in such networks. Finally, we argue that a holistic approach is needed to analyze the security posture of an edge network, one that must take the knowledge from each layer into account.
Privacy and security challenges in machine learning (ML) have become a critical topic to address, driven by ML's pervasive development and the recent demonstration of large attack surfaces. As a mature system-oriented approach, confidential computing has been increasingly utilized in both academia and industry to improve privacy and security in various ML scenarios. In this paper, we systematize the findings on confidential-computing-assisted ML techniques for providing i) confidentiality guarantees and ii) integrity guarantees. We further identify key challenges and provide dedicated analyses of the limitations of existing Trusted Execution Environment (TEE) systems for ML use cases. We discuss prospective works, including grounded privacy definitions, partitioned ML executions, dedicated TEE designs for ML, TEE-aware ML, and ML full-pipeline guarantees. These potential solutions can help achieve strong TEE-based ML guarantees without introducing prohibitive computation and system costs.
Computer systems have held vast amounts of personal data for decades. On the one hand, such data abundance enables breakthroughs in artificial intelligence (AI), and especially machine learning (ML) models. On the other hand, it can threaten users' privacy and weaken the trust between humans and AI. Recent regulations require that private information about a user can be removed from computer systems in general, and from ML models in particular, upon request (e.g., the "right to be forgotten"). While removing data from back-end databases should be straightforward, it is not sufficient in the AI context, because ML models often "remember" the old data. Existing adversarial attacks have demonstrated that private membership or attributes of the training data can be learned from a trained model. This phenomenon calls for a new paradigm, machine unlearning, to make ML models forget particular data. It turns out that recent work on machine unlearning has not been able to completely solve the problem, due to the lack of common frameworks and resources. In this survey paper, we aim to provide a thorough study of machine unlearning in terms of its definitions, scenarios, mechanisms, and applications. Specifically, as a collection of cutting-edge research, we hope this survey serves as a broad reference for those seeking a primer on machine unlearning and its various formulations, design requirements, removal requests, algorithms, and uses in ML applications. In addition, we hope to outline the key findings and trends in the paradigm and highlight new research areas that have not yet featured the use of machine unlearning but could still benefit greatly from it. We hope this survey provides a valuable reference for ML researchers as well as those seeking to innovate privacy technologies. Our resources are available at https://github.com/tamlhp/awesome-machine-unlearning.
Leakage of data from publicly available machine learning (ML) models is an area of growing significance, because commercial and government applications of ML can draw on multiple sources of data, potentially including users' and clients' sensitive data. We provide a comprehensive survey of contemporary advances on several fronts, covering involuntary data leakage that is natural to ML models, potential malevolent leakage caused by privacy attacks, and the currently available defence mechanisms. We focus on inference-time leakage, the most likely scenario for publicly available models. We first discuss what constitutes leakage in the context of different data, tasks, and model architectures. We then propose a taxonomy spanning involuntary and malevolent leakage, followed by the available defences, the currently available assessment metrics, and applications. We conclude with outstanding challenges and open questions, outlining some promising directions for future research.
In recent years, mobile devices are equipped with increasingly advanced sensing and computing capabilities. Coupled with advancements in Deep Learning (DL), this opens up countless possibilities for meaningful applications, e.g., for medical purposes and in vehicular networks. Traditional cloud-based Machine Learning (ML) approaches require the data to be centralized in a cloud server or data center. However, this results in critical issues related to unacceptable latency and communication inefficiency. To this end, Mobile Edge Computing (MEC) has been proposed to bring intelligence closer to the edge, where data is produced. However, conventional enabling technologies for ML at mobile edge networks still require personal data to be shared with external parties, e.g., edge servers. Recently, in light of increasingly stringent data privacy legislation and growing privacy concerns, the concept of Federated Learning (FL) has been introduced. In FL, end devices use their local data to train an ML model required by the server. The end devices then send the model updates rather than raw data to the server for aggregation. FL can serve as an enabling technology in mobile edge networks since it enables the collaborative training of an ML model and also enables DL for mobile edge network optimization. However, in a large-scale and complex mobile edge network, heterogeneous devices with varying constraints are involved. This raises challenges of communication costs, resource allocation, and privacy and security in the implementation of FL at scale. In this survey, we begin with an introduction to the background and fundamentals of FL. Then, we highlight the aforementioned challenges of FL implementation and review existing solutions. Furthermore, we present the applications of FL for mobile edge network optimization. Finally, we discuss the important challenges and future research directions in FL.
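To illustrate the train-locally, aggregate-centrally pattern described above, here is a minimal FedAvg-style sketch (a plain weight vector and least-squares loss as a stand-in for a real model; all names and sizes are hypothetical):

```python
# Minimal FedAvg-style sketch: devices train locally and send only weight updates;
# the server aggregates them weighted by each device's data size.
import numpy as np

def local_update(global_weights, local_data, lr=0.1, epochs=1):
    """Each device trains on its own data and returns only the updated weights."""
    w = global_weights.copy()
    X, y = local_data
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)   # gradient of a simple least-squares loss
        w -= lr * grad
    return w, len(y)

def aggregate(updates):
    """Server averages device weights, weighted by each device's number of samples."""
    total = sum(n for _, n in updates)
    return sum(n * w for w, n in updates) / total

rng = np.random.default_rng(0)
global_w = np.zeros(3)
devices = [(rng.normal(size=(20, 3)), rng.normal(size=20)) for _ in range(5)]
for _ in range(10):                          # a few communication rounds
    updates = [local_update(global_w, d) for d in devices]
    global_w = aggregate(updates)
print(global_w)
```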
Cyber Threat Intelligence (CTI) sharing is an important activity for reducing information asymmetries between attackers and defenders. However, this activity presents challenges due to the tension between data sharing and confidentiality, which results in information retention and often leads to a free-rider problem. Consequently, the shared information represents only the tip of the iceberg. The current literature assumes access to centralized databases containing all the information, but this is not always feasible due to the aforementioned tension. This results in unbalanced or incomplete datasets that need to be augmented with additional techniques; we show how such techniques can distort results and create misleading performance expectations. We propose a novel framework for extracting CTI about incidents, vulnerabilities, and indicators of compromise from distributed data, and demonstrate its use in several practical scenarios in conjunction with the Malware Information Sharing Platform (MISP). Policy implications of CTI sharing are presented and discussed. The proposed system relies on an efficient combination of privacy-enhancing technologies and federated processing. This allows organizations to stay in control of their CTI and minimize the risks of exposure or leakage, while enabling the benefits of sharing, more accurate and representative results, and more effective predictive and preventive defenses.
As automated face recognition applications tend towards ubiquity, there is a growing need to secure the sensitive face data used within these systems. This paper presents a survey of biometric template protection (BTP) methods proposed for securing face templates (images/features) in neural-network-based face recognition systems. The BTP methods are categorised into two types: Non-NN and NN-learned. Non-NN methods use a neural network (NN) as a feature extractor, but the BTP part is based on a non-NN algorithm, whereas NN-learned methods employ an NN to learn a protected template from the unprotected template. We present examples of Non-NN and NN-learned face BTP methods from the literature, along with a discussion of their strengths and weaknesses. We also investigate the techniques used to evaluate these methods in terms of the three most common BTP criteria: recognition accuracy, irreversibility, and renewability/unlinkability. The recognition accuracy of protected face recognition systems is generally evaluated using the same (empirical) techniques employed for evaluating standard (unprotected) biometric systems. However, most irreversibility and renewability/unlinkability evaluations are found to be based on theoretical assumptions/estimates or verbal implications, with a lack of empirical validation in a practical face recognition context. So, we recommend a greater focus on empirical evaluations to provide more concrete insights into the irreversibility and renewability/unlinkability of face BTP methods in practice. Additionally, an exploration of the reproducibility of the studied BTP works, in terms of the public availability of their implementation code and evaluation datasets/procedures, suggests that it would be difficult to faithfully replicate most of the reported findings. So, we advocate for a push towards reproducibility, in the hope of advancing face BTP research.
The Internet of Medical Things (IoMT) allows physiological data to be collected using sensors and then transmitted to remote servers, which enables physicians and health professionals to analyze these data continuously and permanently, and to detect diseases at an early stage. However, transmitting data over wireless communication exposes it to cyberattacks, and the sensitive and private nature of this data can make it a prime target for attackers. Using traditional security methods on devices with limited storage and computing capabilities is ineffective. On the other hand, using machine learning for intrusion detection can provide a security response adapted to the requirements of IoMT systems. In this context, a comprehensive survey is conducted on how machine learning (ML)-based intrusion detection systems can address security and privacy issues in IoMT systems. To this end, a generic three-layer architecture of IoMT is presented along with the security requirements of IoMT systems. Then, the various threats that can affect IoMT security are presented, and the advantages, disadvantages, methods, and datasets used in each ML-based solution are identified. Finally, challenges and limitations of applying ML at each layer of IoMT are discussed, which can serve as directions for future research.
Phishing attacks continue to be a significant threat on the Internet. Prior studies show that it is possible to determine whether a website is phishing by analyzing its URL more carefully. A major advantage of URL-based approaches is that they can identify a phishing website even before the web page is rendered in the browser, thereby avoiding other potential problems such as cryptojacking and drive-by downloads. However, traditional URL-based approaches have their limitations. Blacklist-based approaches are prone to zero-hour phishing attacks, advanced machine-learning-based approaches consume high resources, and other approaches send the URL to a remote server, which compromises the user's privacy. In this paper, we present a layered anti-phishing defense, PhishMatch, which is robust, accurate, inexpensive, and client-side. We design a space- and time-efficient Aho-Corasick algorithm for exact string matching and an n-gram-based indexing technique for approximate string matching to detect various cybersquatting techniques in phishing URLs. To reduce false positives, we use a global whitelist and personalized user whitelists. We also determine the context in which a URL is visited and use this information to classify the input URL more accurately. The final component of PhishMatch involves a machine learning model and controlled search-engine queries to classify the URL. A prototype plugin of PhishMatch, developed for the Chrome browser, was found to be fast and lightweight. Our evaluation shows that PhishMatch is both efficient and effective.
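Below is a minimal sketch of the two matching ideas described above (exact multi-pattern matching against known tokens, which an Aho-Corasick automaton performs in a single pass, and n-gram similarity against a whitelist to flag look-alike domains). The word lists, threshold, and fallback step are purely illustrative, not PhishMatch's implementation.

```python
# Toy URL classifier illustrating exact token matching plus n-gram look-alike detection.
from urllib.parse import urlparse

BLACKLIST_TOKENS = {"secure-login", "account-verify"}     # illustrative only
WHITELIST_DOMAINS = {"paypal.com", "google.com"}          # illustrative only

def ngrams(s, n=3):
    return {s[i:i + n] for i in range(len(s) - n + 1)}

def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 0.0

def classify(url, sim_threshold=0.4):
    host = urlparse(url).hostname or ""
    if host in WHITELIST_DOMAINS:
        return "benign (whitelisted)"
    if any(tok in url for tok in BLACKLIST_TOKENS):        # exact multi-pattern match
        return "phishing (blacklisted token)"
    for good in WHITELIST_DOMAINS:                          # approximate n-gram match
        if jaccard(ngrams(host), ngrams(good)) >= sim_threshold:
            return f"suspicious (look-alike of {good})"
    return "unknown (escalate to ML model / search-engine check)"

print(classify("http://paypa1.com/secure-login"))
```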
Imagine a group of citizens willing to collectively contribute their personal data for the common good, to produce socially useful information resulting from data analytics or machine learning computations. Sharing raw personal data with a centralized server performing the computation could raise concerns about privacy and perceived risks. Instead, the citizens may trust each other and their own devices to engage in a decentralized computation and collaboratively produce an aggregate data release to be shared. In the context of secure computing nodes exchanging messages over secure channels at runtime, a key security issue is protection against external attackers observing the traffic, whose dependence on the data may reveal personal information. Existing solutions are designed for the cloud setting, with the goal of hiding all properties of the underlying dataset, and do not address the specific privacy and efficiency challenges that arise in the above context. In this paper, we define a general execution model to control the data-dependence of communications in user-side decentralized computations, in which differential privacy guarantees for communication patterns in the global execution plan are analyzed by combining guarantees obtained on local clusters of nodes. We propose a set of algorithms that allow trading off between privacy, utility, and efficiency. Our formal privacy guarantees leverage and extend recent results on privacy amplification by shuffling. We illustrate the usefulness of our proposal with two representative examples of decentralized execution plans with data-dependent communications.
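As a worked illustration of the "privacy amplification by shuffling" ingredient mentioned above (not the paper's algorithms), here is a minimal sketch in which each node locally randomizes a bit with randomized response and a shuffler removes the link between nodes and messages; the analyst sees only the anonymized multiset of reports:

```python
# Toy local-randomization-plus-shuffler pipeline (randomized response on one bit).
import math
import random

def local_randomize(bit, epsilon_local=1.0):
    # Randomized response: report the true bit with probability e^eps / (e^eps + 1).
    p_keep = math.exp(epsilon_local) / (math.exp(epsilon_local) + 1.0)
    return bit if random.random() < p_keep else 1 - bit

def shuffle_and_release(true_bits):
    reports = [local_randomize(b) for b in true_bits]
    random.shuffle(reports)      # the shuffler breaks the link between node and message
    return reports               # amplification: the central epsilon shrinks as the number of reports grows

released = shuffle_and_release([random.randint(0, 1) for _ in range(1000)])
print(sum(released) / len(released))   # a debiased estimator would correct for the random flips
```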
We quantitatively investigate how machine learning models leak information about the individual data records on which they were trained. We focus on the basic membership inference attack: given a data record and black-box access to a model, determine if the record was in the model's training dataset. To perform membership inference against a target model, we make adversarial use of machine learning and train our own inference model to recognize differences in the target model's predictions on the inputs that it trained on versus the inputs that it did not train on. We empirically evaluate our inference techniques on classification models trained by commercial "machine learning as a service" providers such as Google and Amazon. Using realistic datasets and classification tasks, including a hospital discharge dataset whose membership is sensitive from the privacy perspective, we show that these models can be vulnerable to membership inference attacks. We then investigate the factors that influence this leakage and evaluate mitigation strategies.
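A minimal sketch of the shadow-model idea described above (scikit-learn stand-ins and toy data, not the paper's setup): train a shadow model on records whose membership we control, label its prediction vectors as "in" or "out", and train an attack classifier on those vectors.

```python
# Toy membership-inference pipeline: shadow model -> attack features -> attack classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 10))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

# Shadow model: we know exactly which records were used for training ("members").
members, non_members = (X[:500], y[:500]), (X[500:1000], y[500:1000])
shadow = RandomForestClassifier(n_estimators=50, random_state=0).fit(*members)

def attack_features(model, data):
    return model.predict_proba(data)     # prediction confidence vectors feed the attack

A = np.vstack([attack_features(shadow, members[0]), attack_features(shadow, non_members[0])])
labels = np.array([1] * 500 + [0] * 500)          # 1 = record was in the shadow training set
attack = LogisticRegression().fit(A, labels)

# Against a target model (here the shadow itself, for illustration), query a candidate
# record's prediction vector and let the attack model estimate membership probability.
print(attack.predict_proba(attack_features(shadow, X[:5]))[:, 1])
```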
Federated learning allows a set of users to collaboratively train a deep neural network over their private training datasets. During the protocol, a dataset never leaves the device of its respective user. This is achieved by requiring each user to send "only" model updates to a central server, which in turn aggregates them to update the parameters of the deep neural network. However, it has been shown that each model update carries sensitive information about the user's dataset (e.g., gradient inversion attacks). State-of-the-art implementations of federated learning protect these model updates by leveraging secure aggregation: a cryptographic protocol that securely computes the aggregation of the users' model updates. Secure aggregation is pivotal to protecting users' privacy, since it hinders the server from learning the value, and hence the source, of the individual model updates provided by the users, preventing inference and data-attribution attacks. In this work, we show that a malicious server can easily elude secure aggregation as if the latter were not in place. We devise two different attacks capable of inferring information on individual users' private data, independently of the number of users participating in the secure aggregation. This makes them a concrete threat in large-scale, real-world federated learning applications. The attacks are generic and do not target any specific secure aggregation protocol. They are equally effective even if the secure aggregation protocol is replaced by its ideal functionality, which provides perfect security. Our work demonstrates that secure aggregation combined with federated learning, as currently implemented, provides only a "false sense of security".
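A toy illustration of the underlying intuition (not the paper's concrete attacks): if a malicious server can make every other participant contribute a value it already knows, e.g., through the model it hands them, then the ideal secure sum alone still reveals the victim's update by simple subtraction.

```python
# Toy demonstration: the ideal secure-sum functionality does not protect a single
# victim when the server already knows all other contributions.
import numpy as np

rng = np.random.default_rng(1)
victim_update = rng.normal(size=4)                        # what secure aggregation should hide
known_updates = [rng.normal(size=4) for _ in range(9)]    # server-predictable contributions

secure_sum = victim_update + sum(known_updates)           # all the server legitimately receives
recovered = secure_sum - sum(known_updates)               # yet the victim's update falls out
print(np.allclose(recovered, victim_update))              # True
```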
The exponential growth of Internet-connected systems has generated numerous challenges, such as spectrum shortage, which call for efficient spectrum-sharing (SS) solutions. Complicated and dynamic SS systems can be exposed to various potential security and privacy issues, requiring protection mechanisms that are adaptive, reliable, and scalable. Machine learning (ML)-based methods have frequently been proposed to address these issues. In this paper, we provide a comprehensive survey of recent ML-based SS methods, the most critical security issues, and the corresponding defense mechanisms. In particular, we elaborate on the state-of-the-art methodologies for improving the performance of SS communication systems, including ML-based database-assisted SS networks, ML-based LTE-U networks, ML-based ambient backscatter networks, and other ML-based SS solutions. We also present security issues at the physical layer and the corresponding ML-based defense strategies, including primary user emulation (PUE) attacks, spectrum sensing data falsification (SSDF) attacks, jamming attacks, eavesdropping attacks, and privacy issues. Finally, an extensive discussion of open challenges for ML-based SS is given. This comprehensive review is intended to provide a foundation for, and facilitate, future studies exploring the potential of emerging ML for coping with increasingly complex SS systems and their security problems.