Classifiers in supervised learning have various security and privacy issues, e.g., 1) data poisoning attacks, backdoor attacks, and adversarial examples on the security side as well as 2) inference attacks and the right to be forgotten for the training data on the privacy side. Various secure and privacy-preserving supervised learning algorithms with formal guarantees have been proposed to address these issues. However, they suffer from various limitations such as accuracy loss, small certified security guarantees, and/or inefficiency. Self-supervised learning is an emerging technique to pre-train encoders using unlabeled data. Given a pre-trained encoder as a feature extractor, supervised learning can train a simple yet accurate classifier using a small amount of labeled training data. In this work, we perform the first systematic, principled measurement study to understand whether and when a pre-trained encoder can address the limitations of secure or privacy-preserving supervised learning algorithms. Our key findings are that a pre-trained encoder substantially improves 1) both accuracy under no attacks and certified security guarantees against data poisoning and backdoor attacks of state-of-the-art secure learning algorithms (i.e., bagging and kNN), 2) certified security guarantees of randomized smoothing against adversarial examples without sacrificing its accuracy under no attacks, 3) accuracy of differentially private classifiers, and 4) accuracy and/or efficiency of exact machine unlearning.
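The "pre-trained encoder plus simple classifier" pipeline described above can be sketched with a toy stand-in encoder and a nearest-centroid classifier; the hand-written `encoder` and all names below are illustrative assumptions, not any paper's implementation:

```python
# Sketch: a fixed feature extractor ("encoder") plus a simple classifier
# trained on only a few labeled examples. The toy encoder here is a stand-in
# for a real pre-trained model.
import math

def encoder(x):
    """Stand-in for a pre-trained feature extractor: maps a raw input
    (a list of numbers) to a short feature vector."""
    return [sum(x), max(x) - min(x)]

def train_centroids(labeled_data):
    """Train a simple classifier on encoder features: one centroid per class."""
    sums, counts = {}, {}
    for x, y in labeled_data:
        f = encoder(x)
        if y not in sums:
            sums[y] = [0.0] * len(f)
            counts[y] = 0
        sums[y] = [s + v for s, v in zip(sums[y], f)]
        counts[y] += 1
    return {y: [s / counts[y] for s in sums[y]] for y in sums}

def predict(centroids, x):
    """Assign x to the class whose feature centroid is nearest."""
    f = encoder(x)
    return min(centroids, key=lambda y: math.dist(f, centroids[y]))

# A handful of labeled examples suffices once the features are good.
labeled = [([0, 1, 0], "a"), ([1, 0, 1], "a"), ([5, 9, 5], "b"), ([9, 5, 9], "b")]
centroids = train_centroids(labeled)
print(predict(centroids, [0, 2, 0]))  # lands near the class "a" examples
```

The same structure underlies the measurement study: the encoder is fixed, and only the small classifier on top is trained, defended, or unlearned.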
Contrastive learning pre-trains an image encoder using a large amount of unlabeled data such that the image encoder can be used as a general-purpose feature extractor for various downstream tasks. In this work, we propose PoisonedEncoder, a data poisoning attack on contrastive learning. In particular, an attacker injects carefully crafted poisoning inputs into the unlabeled pre-training data, such that the downstream classifiers built based on the poisoned encoder for multiple target downstream tasks simultaneously classify attacker-chosen, arbitrary clean inputs as attacker-chosen, arbitrary classes. We formulate our data poisoning attack as a bilevel optimization problem, whose solution is the set of poisoning inputs, and we propose a contrastive-learning-tailored method to approximately solve it. Our evaluation on multiple datasets shows that PoisonedEncoder achieves high attack success rates while maintaining the testing accuracy of the downstream classifiers built upon the poisoned encoder for non-attacker-chosen inputs. We also evaluate five defenses against PoisonedEncoder, including one pre-processing, three in-processing, and one post-processing defense. Our results show that these defenses can decrease the attack success rate of PoisonedEncoder, but they also sacrifice the utility of the encoder or require a large clean pre-training dataset.
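The bilevel structure mentioned above can be written generically (our notation, not the paper's): the outer problem chooses the poisoning inputs $\mathcal{P}$ to maximize the attack objective on downstream classifiers $g$ built on the poisoned encoder $f_{\theta^{*}}$, while the inner problem is ordinary contrastive pre-training with loss $\mathcal{L}_{\mathrm{CL}}$ on the unlabeled data $\mathcal{D}$ augmented with $\mathcal{P}$:

```latex
\max_{\mathcal{P}} \; \mathrm{AttackObj}\!\left(g \circ f_{\theta^{*}}\right)
\quad \text{s.t.} \quad
\theta^{*} = \arg\min_{\theta} \; \mathcal{L}_{\mathrm{CL}}\!\left(\theta;\, \mathcal{D} \cup \mathcal{P}\right)
```

The difficulty is that the inner problem is an entire pre-training run, which is why the paper resorts to a contrastive-learning-tailored approximation rather than solving the bilevel problem exactly.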
Pre-trained encoders are general-purpose feature extractors that can be used for many downstream tasks. Recent advances in self-supervised learning make it possible to pre-train highly effective encoders using a large amount of unlabeled data, leading to the emerging encoder-as-a-service (EaaS). A pre-trained encoder may be deemed confidential because its training requires a large amount of data and computational resources, and because its public release may facilitate misuse of AI, e.g., for deepfakes. In this paper, we propose the first attack, called StolenEncoder, to steal pre-trained image encoders. We evaluate StolenEncoder on multiple target encoders pre-trained by ourselves and three real-world target encoders, including the ImageNet encoder pre-trained by Google, the CLIP encoder pre-trained by OpenAI, and Clarifai's General Embedding encoder deployed as a paid EaaS. Our results show that the stolen encoders have similar functionality to the target encoders. In particular, the downstream classifiers built upon a target encoder and its stolen counterpart have similar accuracy. Moreover, stealing a target encoder with StolenEncoder requires much less data and computational resources than pre-training it from scratch. We also explore three defenses that perturb the feature vectors produced by a target encoder. Our results show that these defenses are not sufficient to mitigate StolenEncoder.
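The stealing strategy described above amounts to training a surrogate that matches the target encoder's embeddings on queried inputs. A minimal sketch, assuming a linear toy "encoder" and plain SGD, both our simplifications rather than the paper's setup:

```python
# Toy sketch of stealing an encoder by matching its embeddings on queried
# inputs. The linear target encoder and plain SGD are illustrative
# assumptions; real attacks query an image encoder and train a deep surrogate.
import random

random.seed(0)
TARGET = [[2.0, -1.0], [0.5, 3.0]]  # secret target encoder (a 2x2 linear map)

def embed(w, x):
    """Apply a linear 'encoder' w to input x."""
    return [sum(wi * xi for wi, xi in zip(row, x)) for row in w]

# The attacker queries the target service on unlabeled inputs and records
# the returned embeddings.
queries = [[random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(200)]
answers = [embed(TARGET, x) for x in queries]

# Train a surrogate to reproduce the recorded embeddings (squared error, SGD).
stolen = [[0.0, 0.0], [0.0, 0.0]]
lr = 0.1
for _ in range(50):
    for x, y in zip(queries, answers):
        pred = embed(stolen, x)
        for i in range(2):
            err = pred[i] - y[i]
            for j in range(2):
                stolen[i][j] -= lr * err * x[j]

# The stolen encoder now behaves like the target on fresh inputs.
test = [0.3, -0.7]
print(embed(stolen, test), embed(TARGET, test))
```

The point of the sketch is the query budget: matching embeddings on a few hundred inputs is far cheaper than pre-training from scratch, which is the economic asymmetry the attack exploits.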
Self-supervised learning has made revolutionary progress in the past several years and is commonly believed to be a promising approach toward general-purpose AI. In particular, self-supervised learning aims to pre-train an encoder using a large amount of unlabeled data. A pre-trained encoder is like an "operating system" of the AI ecosystem. Specifically, the encoder can be used as a feature extractor for many downstream tasks that have little or no labeled training data. Existing studies on self-supervised learning mainly focus on pre-training better encoders to improve their performance on downstream tasks in non-adversarial settings, leaving their security and privacy in adversarial settings largely unexplored. A security or privacy issue of a pre-trained encoder leads to a single point of failure for the AI ecosystem. In this book chapter, we discuss 10 basic security and privacy problems for pre-trained encoders in self-supervised learning, including six confidentiality problems, three integrity problems, and one availability problem. For each problem, we discuss potential opportunities and challenges. We hope our book chapter will inspire future research on the security and privacy of self-supervised learning.
Data poisoning attacks and backdoor attacks aim to corrupt a machine learning classifier by modifying, adding, and/or removing some carefully selected training examples, such that the corrupted classifier makes incorrect predictions as the attacker desires. The key idea of state-of-the-art certified defenses against data poisoning and backdoor attacks is to create a majority-vote mechanism to predict the label of a testing example, where each voter is a base classifier trained on a subset of the training dataset. Classical simple learning algorithms such as k nearest neighbors (kNN) and radius nearest neighbors (rNN) have intrinsic majority-vote mechanisms. In this work, we show that the intrinsic majority-vote mechanisms of kNN and rNN already provide certified robustness against data poisoning and backdoor attacks. Moreover, our evaluation results on MNIST and CIFAR10 show that the intrinsic certified robustness guarantees of kNN and rNN outperform those of state-of-the-art certified defenses. Our results serve as standard baselines for future certified defenses against data poisoning and backdoor attacks.
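A simplified way to see the intrinsic certificate: under the (simplifying) assumption that each poisoned training example can flip at most one of the k votes, the kNN prediction is provably unchanged as long as the winner's margin over the runner-up exceeds twice the number of poisoned examples. The paper's exact bounds differ; the sketch below captures only this intuition:

```python
# Simplified majority-vote certificate for kNN. Assumption (ours, for
# illustration): one poisoned training example flips at most one of the
# k nearest-neighbor votes.
from collections import Counter

def knn_certified_size(neighbor_labels):
    """Return (predicted_label, certified_poisoning_size) from the labels
    of the k nearest neighbors of a test example."""
    counts = Counter(neighbor_labels)
    ranked = counts.most_common()
    top_label, top = ranked[0]
    runner_up = ranked[1][1] if len(ranked) > 1 else 0
    # Flipping a vote moves it from the winner to the runner-up, closing the
    # gap by 2; the prediction holds while 2*e < top - runner_up.
    return top_label, max(0, (top - runner_up - 1) // 2)

label, e = knn_certified_size(["cat"] * 7 + ["dog"] * 3)  # k = 10
print(label, e)  # gap 4 -> tolerates 1 poisoned example
```

No retraining or ensembling is needed, which is exactly why kNN's majority vote makes such a natural certified-defense baseline.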
In this work, we propose the first backdoor attack on graph neural networks (GNNs). Specifically, we propose a subgraph-based backdoor attack on GNNs for graph classification. In our backdoor attack, a GNN classifier predicts an attacker-chosen target label for a testing graph once a predefined subgraph is injected into it. Our empirical results on three real-world graph datasets show that our backdoor attack is effective while having a small impact on the GNN's prediction accuracy for clean testing graphs. Moreover, we generalize a randomized-smoothing-based certified defense to defend against our backdoor attack. Our empirical results show that the defense is effective in some cases but ineffective in others, highlighting the need for new defenses against our backdoor attack.
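The trigger-injection step can be illustrated on an adjacency matrix; using a complete subgraph as the trigger is our simplification here, and trigger design and node selection in the actual attack follow the paper:

```python
# Minimal sketch of injecting a trigger subgraph into a testing graph's
# adjacency matrix. A complete subgraph over `trigger_nodes` stands in for
# the paper's predefined trigger (illustrative assumption).
def inject_trigger(adj, trigger_nodes):
    """Connect every pair of `trigger_nodes` so they form the trigger
    subgraph inside the graph described by adjacency matrix `adj`."""
    for i in trigger_nodes:
        for j in trigger_nodes:
            if i != j:
                adj[i][j] = 1
    return adj

# A 4-node path graph 0-1-2-3; inject a triangle trigger on nodes {0, 2, 3}.
adj = [[0, 1, 0, 0],
       [1, 0, 1, 0],
       [0, 1, 0, 1],
       [0, 0, 1, 0]]
adj = inject_trigger(adj, [0, 2, 3])
print(adj[0][2], adj[0][3])  # new trigger edges present
```

A backdoored GNN classifier would map any graph containing this wired-in pattern to the attacker-chosen target label while behaving normally on clean graphs.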
In federated learning, multiple client devices jointly learn a machine learning model: each client device maintains a local model for its local training dataset, while a master device maintains a global model by aggregating the local models from the client devices. The machine learning community has recently proposed several federated learning methods that are claimed to be robust against Byzantine failures (e.g., system failures, adversarial manipulations) of some client devices. In this work, we perform the first systematic study of local model poisoning attacks on federated learning. We assume the attacker has compromised some client devices and manipulates the local model parameters on those devices during the learning process, such that the global model has a large testing error rate. We formulate our attacks as optimization problems and apply them to four recent Byzantine-robust federated learning methods. Our empirical results on four real-world datasets show that our attacks can substantially increase the error rates of the models learned by these federated learning methods, even though they are claimed to be robust against Byzantine failures of some client devices. We generalize two defenses for data poisoning attacks to defend against our local model poisoning attacks. Our evaluation results show that one defense can effectively defend against our attacks in some cases, but the defenses are not effective enough in other cases, highlighting the need for new defenses against local model poisoning attacks on federated learning.
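The aggregation step at the heart of this setting can be sketched with toy numbers: a plain mean aggregator versus coordinate-wise median, one well-known Byzantine-robust rule. Note the sketch only shows why robust rules are used against a naive malicious update; the paper's point is that carefully optimized attacks can still defeat such rules.

```python
# Toy sketch of federated aggregation: mean vs. coordinate-wise median.
# A single compromised client drags the mean far more than the median.
# All numbers are illustrative.
from statistics import mean, median

def aggregate(local_models, rule):
    """Aggregate per-coordinate across clients' local model parameters."""
    return [rule(coords) for coords in zip(*local_models)]

benign = [[0.9, 1.1], [1.0, 1.0], [1.1, 0.9]]  # honest local models
malicious = [100.0, -100.0]                     # crafted local model
models = benign + [malicious]

print(aggregate(models, mean))    # badly skewed global model
print(aggregate(models, median))  # stays close to the benign models
```

The attacks in the abstract go further: instead of a single wild outlier, they optimize the compromised clients' parameters to look plausible to the robust rule while still steering the global model toward a large error rate.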
In a membership inference attack, an attacker aims to infer whether a data sample is in a target classifier's training dataset or not. Specifically, given black-box access to the target classifier, the attacker trains a binary classifier, which takes a data sample's confidence score vector predicted by the target classifier as input and predicts whether the data sample is a member or non-member of the target classifier's training dataset. Membership inference attacks pose severe privacy and security threats to the training dataset. Most existing defenses leverage differential privacy when training the target classifier or regularize its training process. These defenses suffer from two key limitations: 1) they do not have formal utility-loss guarantees for the confidence score vectors, and 2) they achieve suboptimal privacy-utility tradeoffs. In this work, we propose MemGuard, the first defense with formal utility-loss guarantees against black-box membership inference attacks. Instead of tampering with the training process of the target classifier, MemGuard adds noise to each confidence score vector predicted by the target classifier. Our key observation is that the attacker uses a classifier to predict member or non-member, and this classifier is vulnerable to adversarial examples. Based on this observation, we propose to add a carefully crafted noise vector to a confidence score vector to turn it into an adversarial example that misleads the attacker's classifier. Specifically, MemGuard works in two phases. In Phase I, MemGuard finds a carefully crafted noise vector that can turn a confidence score vector into an adversarial example, which is likely to mislead the attacker's classifier into randomly guessing member or non-member. We find such a noise vector via a new method that we design to incorporate the unique utility-loss constraints on the noise vector.
In Phase II, MemGuard adds the noise vector to the confidence score vector with a certain probability, which is selected to satisfy a given utility-loss budget on the confidence score vector. Our experimental results on three datasets show that MemGuard can effectively defend against membership inference attacks and achieve better privacy-utility tradeoffs than state-of-the-art defenses.
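The utility-loss constraints on the noise vector can be made concrete in a few lines: any perturbed confidence score vector must remain a probability distribution and keep the predicted label unchanged. The perturbation below is a placeholder; in the actual defense the noise is crafted to fool the attacker's member/non-member classifier.

```python
# Sketch of MemGuard-style utility constraints (not its full optimization):
# zero-sum noise that keeps the scores a probability distribution and leaves
# the predicted label unchanged. The noise values here are arbitrary
# illustrations, not an adversarially optimized vector.
def perturb(conf, noise):
    """Add zero-sum noise to a confidence vector, checking the constraints."""
    out = [c + n for c, n in zip(conf, noise)]
    assert abs(sum(noise)) < 1e-9, "noise must keep scores summing to 1"
    assert all(o >= 0 for o in out), "scores must stay non-negative"
    assert out.index(max(out)) == conf.index(max(conf)), "label must not change"
    return out

conf = [0.70, 0.20, 0.10]                   # target classifier's output
noisy = perturb(conf, [-0.15, 0.10, 0.05])  # placeholder perturbation
print(noisy, sum(noisy))
```

Because the predicted label and the distribution property are preserved by construction, the defense can bound utility loss formally while still distorting the signal the attacker's classifier relies on.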
In this paper, we propose a robust 3D detector, named Cross Modal Transformer (CMT), for end-to-end 3D multi-modal detection. Without explicit view transformation, CMT takes image and point cloud tokens as inputs and directly outputs accurate 3D bounding boxes. The spatial alignment of multi-modal tokens is performed implicitly by encoding the 3D points into multi-modal features. The core design of CMT is quite simple, yet its performance is impressive: CMT obtains 73.0% NDS on the nuScenes benchmark. Moreover, CMT remains strongly robust even if the LiDAR input is missing. Code will be released at https://github.com/junjie18/CMT.
A recent study revealed a phenomenon called neural collapse, in which the within-class means of features and the classifier weight vectors converge to the vertices of a simplex equiangular tight frame at the terminal phase of training for classification. In this paper, we explore the corresponding structures of the last-layer feature centers and classifiers in semantic segmentation. Based on our empirical and theoretical analysis, we point out that semantic segmentation naturally brings contextual correlation and imbalanced distribution among classes, which breaks the equiangular and maximally separated structure of neural collapse for both feature centers and classifiers. However, such a symmetric structure is beneficial to discrimination for the minor classes. To preserve these advantages, we introduce a regularizer on feature centers to encourage the network to learn features closer to this appealing structure in imbalanced semantic segmentation. Experimental results show that our method brings significant improvements on both 2D and 3D semantic segmentation benchmarks. Moreover, our method ranks 1st and sets a new record (+6.8% mIoU) on the ScanNet200 test leaderboard. Code will be available at https://github.com/dvlab-research/Imbalanced-Learning.
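The simplex equiangular tight frame (ETF) mentioned above can be checked numerically: C class vectors of unit norm whose pairwise cosine is exactly -1/(C-1), the maximally separated symmetric configuration that neural collapse converges to. A small sketch using the standard construction:

```python
# Numeric sketch of a simplex ETF: columns of sqrt(C/(C-1)) * (I - (1/C)*11^T)
# give C unit-norm class vectors with pairwise cosine exactly -1/(C-1).
import math

def simplex_etf(C):
    """One class vector per row; the standard ETF construction in R^C."""
    scale = math.sqrt(C / (C - 1))
    return [[scale * ((1.0 if i == j else 0.0) - 1.0 / C) for j in range(C)]
            for i in range(C)]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

vecs = simplex_etf(4)
print(round(math.hypot(*vecs[0]), 6))       # 1.0: each class vector is unit norm
print(round(cosine(vecs[0], vecs[1]), 6))   # -0.333333 = -1/(C-1) for C = 4
```

The pairwise angle is as negative as C unit vectors can jointly be, which is the "maximally separated" property the regularizer in the paper pushes feature centers toward.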