Federated learning (FL) is an emerging machine learning paradigm in which clients jointly learn a model with the help of a cloud server. A fundamental challenge of FL is that clients are often heterogeneous, e.g., they have different computing power, and thus may send model updates to the server with substantially different delays. Asynchronous FL addresses this challenge by enabling the server to update the model as soon as any client's model update reaches it, without waiting for other clients' model updates. However, like synchronous FL, asynchronous FL is also vulnerable to poisoning attacks, in which malicious clients manipulate the model by poisoning their local data and/or the model updates they send to the server. Byzantine-robust FL aims to defend against poisoning attacks; in particular, it aims to learn an accurate model even if some clients are malicious and exhibit Byzantine behaviors. However, most existing studies on Byzantine-robust FL focus on synchronous FL, leaving asynchronous FL largely unexplored. In this work, we bridge this gap by proposing AFLGuard, a Byzantine-robust asynchronous FL method. We show, both theoretically and empirically, that AFLGuard is robust against various existing and adaptive poisoning attacks (both untargeted and targeted). Moreover, AFLGuard outperforms existing Byzantine-robust asynchronous FL methods.
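To make the asynchronous setting concrete, below is a minimal, hypothetical sketch of a server loop that applies each client update as soon as it arrives and filters updates by comparing them against an update computed on a small trusted dataset. The filtering rule, threshold, and toy data are illustrative assumptions for this sketch, not necessarily AFLGuard's exact criterion.

```python
# Hypothetical asynchronous FL server loop with a robustness filter.
import numpy as np

def trusted_update(model, X, y):
    """Gradient of a least-squares loss on the server's small trusted dataset."""
    return X.T @ (X @ model - y) / len(y)

def async_server_loop(model, client_updates, X_trusted, y_trusted, lr=0.1, lam=2.0):
    for g_client in client_updates:                  # updates arrive one at a time
        g_server = trusted_update(model, X_trusted, y_trusted)
        # Accept only if the client update stays close to the trusted-data update
        # (illustrative acceptance rule).
        if np.linalg.norm(g_client - g_server) <= lam * np.linalg.norm(g_server):
            model = model - lr * g_client            # update immediately, no waiting
        # Otherwise the (possibly poisoned) update is discarded.
    return model

# Toy usage: 5 clients, one of which sends a wildly scaled (malicious) update.
rng = np.random.default_rng(0)
X, w_true = rng.normal(size=(50, 3)), np.array([1.0, -2.0, 0.5])
y = X @ w_true
model = np.zeros(3)
updates = [trusted_update(model, X, y) + rng.normal(scale=0.05, size=3) for _ in range(4)]
updates.append(100.0 * rng.normal(size=3))           # malicious update, gets filtered out
print(async_server_loop(model, updates, X, y))
```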
Classifiers in supervised learning have various security and privacy issues, e.g., 1) data poisoning attacks, backdoor attacks, and adversarial examples on the security side as well as 2) inference attacks and the right to be forgotten for the training data on the privacy side. Various secure and privacy-preserving supervised learning algorithms with formal guarantees have been proposed to address these issues. However, they suffer from various limitations such as accuracy loss, small certified security guarantees, and/or inefficiency. Self-supervised learning is an emerging technique to pre-train encoders using unlabeled data. Given a pre-trained encoder as a feature extractor, supervised learning can train a simple yet accurate classifier using a small amount of labeled training data. In this work, we perform the first systematic, principled measurement study to understand whether and when a pre-trained encoder can address the limitations of secure or privacy-preserving supervised learning algorithms. Our key findings are that a pre-trained encoder substantially improves 1) both accuracy under no attacks and certified security guarantees against data poisoning and backdoor attacks of state-of-the-art secure learning algorithms (i.e., bagging and KNN), 2) certified security guarantees of randomized smoothing against adversarial examples without sacrificing its accuracy under no attacks, 3) accuracy of differentially private classifiers, and 4) accuracy and/or efficiency of exact machine unlearning.
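As a concrete illustration of the "frozen encoder + simple classifier" setup the measurements build on, here is a minimal sketch: extract features with a pre-trained encoder, then fit a linear classifier on a small labeled set. The stand-in encoder, random data, and logistic-regression probe are assumptions for illustration, not the encoders or classifiers evaluated in the paper.

```python
# Frozen pre-trained encoder used as a feature extractor for a simple classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression

def extract_features(encoder, images):
    """Run images through a frozen encoder and stack the feature vectors."""
    return np.stack([encoder(img) for img in images])

# Stand-in encoder: any function mapping an image to a feature vector works here.
encoder = lambda img: img.reshape(-1)[:128]

# Small labeled training set and a test set (random placeholders).
rng = np.random.default_rng(0)
train_imgs, train_labels = rng.normal(size=(200, 32, 32)), rng.integers(0, 10, 200)
test_imgs, test_labels = rng.normal(size=(50, 32, 32)), rng.integers(0, 10, 50)

clf = LogisticRegression(max_iter=1000)
clf.fit(extract_features(encoder, train_imgs), train_labels)
print("test accuracy:", clf.score(extract_features(encoder, test_imgs), test_labels))
```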
Semi-supervised learning (SSL) leverages both labeled and unlabeled data to train machine learning (ML) models. State-of-the-art SSL methods can achieve performance comparable to supervised learning while using far less labeled data. However, most existing work focuses on improving the performance of SSL. In this work, we take a different angle by studying the training data privacy of SSL. Specifically, we propose the first data-augmentation-based membership inference attack against ML models trained by SSL. Given a data sample and black-box access to a model, the goal of a membership inference attack is to determine whether the data sample belongs to the model's training dataset. Our evaluation shows that the proposed attack consistently outperforms existing membership inference attacks and achieves the best performance against models trained by SSL. Moreover, we find that the reason for membership leakage in SSL differs from the commonly believed cause in supervised learning, i.e., overfitting (the gap between training and testing accuracy). We observe that SSL models generalize well to the test data (almost zero overfitting) but "memorize" the training data by giving more confident predictions on it, regardless of correctness. We also explore early stopping as a countermeasure against membership inference attacks on SSL. The results show that early stopping can mitigate the attack, but at the cost of degrading the model's utility.
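A minimal sketch of what a data-augmentation-based membership inference test could look like, assuming black-box access to prediction probabilities: query the model on several augmented views of a candidate sample and threshold the average prediction confidence. The augmentations, confidence statistic, and threshold below are illustrative assumptions rather than the paper's exact attack.

```python
# Hypothetical augmentation-based membership inference test.
import numpy as np

def augment(x, rng, n_views=8, noise=0.05):
    """Cheap stand-in for image augmentations: random noise + horizontal flips."""
    views = []
    for _ in range(n_views):
        v = x + rng.normal(scale=noise, size=x.shape)
        if rng.random() < 0.5:
            v = v[:, ::-1]                        # horizontal flip
        views.append(v)
    return views

def membership_score(model_predict, x, rng):
    """Average top-class confidence across augmented views of x."""
    probs = np.stack([model_predict(v) for v in augment(x, rng)])
    return probs.max(axis=1).mean()

def infer_membership(model_predict, x, rng, threshold=0.9):
    # Members tend to receive more confident predictions than non-members.
    return membership_score(model_predict, x, rng) >= threshold

# Toy usage with a fake 10-class model that is overconfident on "member-like" inputs.
rng = np.random.default_rng(0)
fake_model = lambda v: np.full(10, 0.01) + np.eye(10)[0] * (0.91 if v.mean() > 0 else 0.4)
print(infer_membership(fake_model, np.ones((32, 32)), rng))
```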
Federated learning (FL) is vulnerable to model poisoning attacks, in which malicious clients corrupt the global model by sending manipulated model updates to the server. Existing defenses mainly rely on Byzantine-robust FL methods, which aim to learn an accurate global model even if some clients are malicious. However, in practice they can only tolerate a small number of malicious clients. How to defend against model poisoning attacks with a large number of malicious clients remains an open challenge. Our FLDetector addresses this challenge by detecting malicious clients. FLDetector aims to detect and remove the majority of malicious clients so that a Byzantine-robust FL method can learn an accurate global model using the remaining clients. Our key observation is that, in model poisoning attacks, the model updates a client sends across multiple iterations are inconsistent. Therefore, FLDetector detects malicious clients by checking the consistency of their model updates. Roughly speaking, the server predicts each client's model update based on its historical model updates using the Cauchy mean value theorem and L-BFGS, and flags a client as malicious if its received model updates are inconsistent with the predicted ones across multiple iterations. Our extensive experiments on three benchmark datasets show that FLDetector accurately detects malicious clients under multiple state-of-the-art model poisoning attacks. After removing the detected malicious clients, existing Byzantine-robust FL methods can learn accurate global models.
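The following is a simplified, hypothetical sketch of consistency-based detection in this spirit: predict each client's update from its history, accumulate the deviation between predicted and received updates, and flag the most inconsistent clients. The scalar curvature approximation and top-k flagging rule below stand in for FLDetector's L-BFGS-based prediction and clustering; they are not the paper's exact procedure.

```python
# Hypothetical consistency scoring of client updates across iterations.
import numpy as np

def suspicious_scores(global_models, client_updates, hessian_scale=1.0):
    """global_models: list of global model vectors over iterations.
    client_updates: array of shape (iterations, clients, dim)."""
    iters, n_clients, _ = client_updates.shape
    scores = np.zeros(n_clients)
    for t in range(1, iters):
        delta_w = global_models[t] - global_models[t - 1]
        # Predicted update: last update plus an (approximated) curvature correction.
        predicted = client_updates[t - 1] + hessian_scale * delta_w
        scores += np.linalg.norm(client_updates[t] - predicted, axis=1)
    return scores / (iters - 1)

def flag_malicious(scores, n_flag):
    return np.argsort(scores)[-n_flag:]              # clients with largest inconsistency

# Toy usage: 10 clients over 5 iterations; clients 8 and 9 send erratic updates.
rng = np.random.default_rng(1)
dim, iters, n = 4, 5, 10
models = [rng.normal(size=dim) * 0.1 * t for t in range(iters)]
updates = rng.normal(scale=0.05, size=(iters, n, dim)) + 0.1
updates[:, 8:] = rng.normal(scale=2.0, size=(iters, 2, dim))
print(flag_malicious(suspicious_scores(models, updates), n_flag=2))
```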
Contrastive learning pre-trains an image encoder using a large amount of unlabeled data such that the image encoder can be used as a general-purpose feature extractor for various downstream tasks. In this work, we propose PoisonedEncoder, a data poisoning attack against contrastive learning. In particular, an attacker injects carefully crafted poisoning inputs into the unlabeled pre-training data, such that the downstream classifiers built based on the poisoned encoder for multiple target downstream tasks simultaneously classify attacker-chosen, arbitrary clean inputs as attacker-chosen, arbitrary classes. We formulate our data poisoning attack as a bilevel optimization problem, whose solution is the set of poisoning inputs, and we propose a contrastive-learning-tailored method to approximately solve it. Our evaluation on multiple datasets shows that PoisonedEncoder achieves high attack success rates while maintaining the testing accuracy of the downstream classifiers built upon the poisoned encoder for non-attacker-chosen inputs. We also evaluate five defenses against PoisonedEncoder, including one pre-processing defense, three in-processing defenses, and one post-processing defense. Our results show that these defenses can decrease the attack success rate of PoisonedEncoder, but they also sacrifice the utility of the encoder or require a large clean pre-training dataset.
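Schematically, such a bilevel formulation can be written as follows, where P denotes the set of poisoning inputs, D the clean unlabeled pre-training data, and θ the encoder parameters; this generic notation is introduced here for illustration and is not the paper's exact objective.

```latex
% Schematic bilevel formulation (notation introduced for illustration).
\begin{aligned}
\max_{P}\quad & \text{AttackObjective}\big(\theta^{*}(P)\big) \\
\text{s.t.}\quad & \theta^{*}(P) \in \arg\min_{\theta}\;
  \mathcal{L}_{\text{contrastive}}\big(\theta;\, D \cup P\big)
\end{aligned}
```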
Pre-trained encoders are general-purpose feature extractors that can be used for many downstream tasks. Recent advances in self-supervised learning make it possible to pre-train highly effective encoders using a large amount of unlabeled data, leading to the emerging encoder-as-a-service (EaaS) paradigm. A pre-trained encoder may be treated as confidential because its training requires a large amount of data and computation resources, and its public release may facilitate misuse of AI, e.g., for deepfakes. In this paper, we propose the first attack, called StolenEncoder, to steal pre-trained image encoders. We evaluate StolenEncoder on multiple target encoders pre-trained by ourselves and three real-world target encoders, including an ImageNet encoder pre-trained by Google, the CLIP encoder pre-trained by OpenAI, and Clarifai's General Embedding encoder deployed as a paid EaaS. Our results show that the encoders stolen by StolenEncoder have similar functionality to the target encoders. In particular, downstream classifiers built upon a target encoder and its stolen counterpart have similar accuracy. Moreover, stealing a target encoder with StolenEncoder requires much less data and computation than pre-training one from scratch. We also explore three defenses that perturb the feature vectors produced by the target encoders. Our results show that these defenses are insufficient to mitigate StolenEncoder.
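A hypothetical sketch of the general stealing-by-distillation idea: query the target encoder (e.g., an EaaS API) on unlabeled images, cache the returned feature vectors, and train a surrogate encoder to reproduce them. The surrogate architecture, loss, and training loop below are illustrative assumptions, not StolenEncoder's exact design.

```python
# Hypothetical surrogate-encoder distillation against a queried target encoder.
import torch
import torch.nn as nn

surrogate = nn.Sequential(                 # small stand-in surrogate encoder
    nn.Flatten(), nn.Linear(3 * 32 * 32, 256), nn.ReLU(), nn.Linear(256, 128))
optimizer = torch.optim.Adam(surrogate.parameters(), lr=1e-3)

def query_target_encoder(images):
    """Placeholder for an API call to the target encoder; returns feature vectors."""
    return torch.randn(images.shape[0], 128)         # fake responses for this sketch

unlabeled_images = torch.randn(64, 3, 32, 32)        # attacker's (small) query set
target_features = query_target_encoder(unlabeled_images)   # one-time queries, cached

for step in range(100):                              # fit surrogate to the stolen features
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(surrogate(unlabeled_images), target_features)
    loss.backward()
    optimizer.step()
print("final distillation loss:", loss.item())
```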
With the recent demand for deploying neural network models on mobile and edge devices, it is desirable to improve a model's generalizability on unseen test data as well as its robustness under fixed-point quantization for efficient deployment. However, minimizing the training loss alone provides few guarantees on generalization and quantization performance. In this work, we address the need to improve generalization and quantization performance simultaneously by theoretically unifying them under a framework of improving the model's robustness against bounded weight perturbations and minimizing the eigenvalues of the Hessian with respect to the model weights. We therefore propose HERO, a Hessian-enhanced robust optimization method, which minimizes the Hessian eigenvalues through a gradient-based training process while improving generalization and quantization performance at the same time. HERO achieves up to a 3.8% gain in test accuracy, up to 30% higher accuracy under 80% training label perturbation, and the best post-training quantization accuracy across a wide range of precision, including a >10% accuracy improvement over SGD-trained models for common model architectures on various datasets.
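As an illustration of penalizing Hessian curvature during gradient-based training, here is a hypothetical sketch that estimates the top Hessian eigenvalue via power iteration on Hessian-vector products and adds it to the loss. The penalty form, number of power iterations, and toy model are assumptions for this sketch, not HERO's exact algorithm.

```python
# Hypothetical curvature-penalized training step using Hessian-vector products.
import torch

def top_hessian_eigenvalue(loss, params, n_iters=5):
    """Estimate the largest Hessian eigenvalue by power iteration on HVPs."""
    grads = torch.autograd.grad(loss, params, create_graph=True)
    v = [torch.randn_like(p) for p in params]
    for _ in range(n_iters):
        hv = torch.autograd.grad(grads, params, grad_outputs=v, retain_graph=True)
        norm = torch.sqrt(sum((h * h).sum() for h in hv)) + 1e-12
        v = [h / norm for h in hv]
    # Final HVP kept differentiable so the eigenvalue estimate can be penalized.
    hv = torch.autograd.grad(grads, params, grad_outputs=v,
                             retain_graph=True, create_graph=True)
    return sum((a * b).sum() for a, b in zip(hv, v))     # Rayleigh quotient

# Toy usage: curvature-regularized training of a tiny linear model.
model = torch.nn.Linear(4, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.05)
x, y = torch.randn(32, 4), torch.randn(32, 1)
for step in range(20):
    opt.zero_grad()
    loss = torch.nn.functional.mse_loss(model(x), y)
    eig = top_hessian_eigenvalue(loss, list(model.parameters()))
    (loss + 0.01 * eig).backward()                       # curvature-penalized objective
    opt.step()
print("final loss:", loss.item())
```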
Self-supervised learning has achieved revolutionary progress in the past several years and is commonly believed to be a promising approach toward general-purpose AI. In particular, self-supervised learning aims to pre-train an encoder using a large amount of unlabeled data. A pre-trained encoder is like an "operating system" of the AI ecosystem. Specifically, the encoder can be used as a feature extractor for many downstream tasks with little or no labeled training data. Existing studies on self-supervised learning mainly focus on pre-training better encoders to improve their performance on downstream tasks in non-adversarial settings, leaving their security and privacy in adversarial settings largely unexplored. A security or privacy issue of a pre-trained encoder leads to a single point of failure for the AI ecosystem. In this book chapter, we discuss 10 basic security and privacy problems for pre-trained encoders in self-supervised learning, including six confidentiality problems, three integrity problems, and one availability problem. For each problem, we discuss potential opportunities and challenges. We hope our book chapter will inspire future research on the security and privacy of self-supervised learning.
Data poisoning attacks and backdoor attacks aim to corrupt a machine learning classifier by modifying, adding, and/or removing some carefully selected training examples, such that the corrupted classifier makes incorrect predictions as the attacker desires. A key idea of state-of-the-art certified defenses against data poisoning and backdoor attacks is to create a majority-voting mechanism to predict the label of a test example, where each voter is a base classifier trained on a subset of the training dataset. Classical simple learning algorithms, such as k nearest neighbors (kNN) and radius nearest neighbors (rNN), have intrinsic majority-voting mechanisms. In this work, we show that the intrinsic majority vote of kNN and rNN already provides certified robustness against data poisoning and backdoor attacks. Moreover, our evaluation results on MNIST and CIFAR10 show that the intrinsic certified robustness guarantees of kNN and rNN outperform those of state-of-the-art certified defenses. Our results serve as standard baselines for future certified defenses against data poisoning and backdoor attacks.
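A minimal sketch of the intrinsic kNN majority vote together with a simplified certification-style margin check appears below; the margin condition used here (margin > 2e for e poisoned training examples) is an illustrative simplification, not the paper's exact certified bound.

```python
# kNN majority vote with a simplified vote-margin certification check.
import numpy as np
from collections import Counter

def knn_predict_with_margin(X_train, y_train, x_test, k=5):
    dists = np.linalg.norm(X_train - x_test, axis=1)
    votes = Counter(y_train[np.argsort(dists)[:k]])
    (top_label, top_votes), *rest = votes.most_common()
    second_votes = rest[0][1] if rest else 0
    return top_label, top_votes - second_votes

def is_certified(margin, e):
    # Simplified accounting: each poisoned training example can remove at most
    # one vote from the winner and add at most one to the runner-up.
    return margin > 2 * e

# Toy usage on two well-separated clusters.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
label, margin = knn_predict_with_margin(X, y, np.array([5.0, 5.0]), k=7)
print(label, margin, is_certified(margin, e=2))
```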
In this work, we propose the first backdoor attack against graph neural networks (GNNs). Specifically, we propose a subgraph-based backdoor attack against GNNs for graph classification. In our backdoor attack, a GNN classifier predicts an attacker-chosen target label for a test graph once a predefined subgraph is injected into the test graph. Our empirical results on three real-world graph datasets show that our backdoor attack has a small impact on the GNN's prediction accuracy on clean test graphs. Moreover, we generalize a randomized-smoothing-based certified defense to defend against our backdoor attack. Our empirical results show that the defense is effective in some cases but ineffective in others, highlighting the need for new defenses against our backdoor attack.
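A hypothetical sketch of the trigger-injection step: attach a predefined trigger subgraph to randomly chosen nodes of a test graph, which a backdoored classifier would then map to the target label. The trigger pattern (a small complete graph) and the attachment rule are illustrative assumptions, not the paper's exact construction.

```python
# Hypothetical subgraph-trigger injection into a test graph.
import random
import networkx as nx

def inject_trigger(graph, trigger_size=4, seed=0):
    rng = random.Random(seed)
    poisoned = graph.copy()
    # Trigger: a small complete subgraph on fresh node ids.
    offset = max(poisoned.nodes) + 1
    trigger_nodes = list(range(offset, offset + trigger_size))
    poisoned.add_edges_from(
        (u, v) for i, u in enumerate(trigger_nodes) for v in trigger_nodes[i + 1:])
    # Attach each trigger node to a randomly chosen node of the original graph.
    for t in trigger_nodes:
        poisoned.add_edge(t, rng.choice(list(graph.nodes)))
    return poisoned

# Toy usage: inject the trigger into a random test graph.
g = nx.erdos_renyi_graph(20, 0.2, seed=1)
g_poisoned = inject_trigger(g)
print(g.number_of_nodes(), "->", g_poisoned.number_of_nodes())
```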