Many methods for multi-objective optimisation of computationally expensive problems have been proposed recently. Typically, a probabilistic surrogate for each objective is constructed from an initial dataset. The surrogates can then be used to produce predictive densities in objective space for any candidate solution. Using the predictive densities, we can compute the expected hypervolume improvement (EHVI) of a solution; maximising the EHVI identifies the most promising solution to evaluate next. Closed-form expressions exist for computing the EHVI, integrating over the multivariate predictive densities, but they require partitioning the objective space, which can be prohibitively expensive for more than three objectives. Furthermore, there are no closed-form expressions for the case of dependent predictive densities, which capture the correlations between objectives; in such cases Monte Carlo approximation is used instead, which is not cheap. Hence there remains a need for new approximation methods that are accurate yet cheap. Here we investigate an alternative approach to approximating the EHVI using Gauss-Hermite quadrature. We show that, for a range of popular test problems, it can be an accurate alternative to Monte Carlo for both independent and correlated predictive densities.
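As a rough, self-contained illustration of the quadrature idea (a sketch, not the paper's implementation), the code below approximates the EHVI of a candidate with two independent Gaussian predictive marginals by a tensor product of Gauss-Hermite nodes, and cross-checks it against a Monte Carlo estimate. The Pareto front, reference point, predictive means, and standard deviations are made-up toy values, and the 2-D hypervolume routine assumes minimisation of both objectives.

```python
import numpy as np
from itertools import product

def hypervolume_2d(front, ref):
    """Hypervolume dominated by a 2-objective (minimisation) front w.r.t. a reference point."""
    pts = sorted((tuple(p) for p in front if np.all(np.asarray(p) < ref)), key=lambda p: (p[0], p[1]))
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        if f2 < prev_f2:                        # dominated points add no volume
            hv += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv

def ehvi_gauss_hermite(mu, sigma, front, ref, n_nodes=16):
    """EHVI for independent Gaussian marginals N(mu_k, sigma_k^2) via Gauss-Hermite quadrature."""
    nodes, weights = np.polynomial.hermite.hermgauss(n_nodes)  # nodes/weights for integral of exp(-x^2) f(x)
    hv_front = hypervolume_2d(front, ref)
    ehvi = 0.0
    for (x1, w1), (x2, w2) in product(zip(nodes, weights), repeat=2):
        y = (mu[0] + np.sqrt(2) * sigma[0] * x1, mu[1] + np.sqrt(2) * sigma[1] * x2)
        ehvi += w1 * w2 * max(hypervolume_2d(front + [y], ref) - hv_front, 0.0)
    return ehvi / np.pi                          # normalisation: (1/sqrt(pi)) per dimension

# Toy cross-check against Monte Carlo (all numbers hypothetical).
front = [(1.0, 3.0), (2.0, 1.0)]
ref = np.array([4.0, 4.0])
mu, sigma = np.array([1.5, 1.5]), np.array([0.3, 0.3])
print("Gauss-Hermite:", ehvi_gauss_hermite(mu, sigma, front, ref))
samples = np.random.default_rng(0).normal(mu, sigma, size=(5000, 2))
hv0 = hypervolume_2d(front, ref)
print("Monte Carlo  :", np.mean([max(hypervolume_2d(front + [tuple(s)], ref) - hv0, 0.0) for s in samples]))
```

The number of nodes per objective trades accuracy against cost; since the nodes form a tensor product, the cost grows exponentially with the number of objectives, so this kind of quadrature is most attractive for few objectives.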
Federated Learning (FL) is a scheme for collaboratively training Deep Neural Networks (DNNs) with multiple data sources from different clients. Instead of sharing the data, each client trains the model locally, resulting in improved privacy. However, so-called targeted poisoning attacks have recently been proposed that allow individual clients to inject a backdoor into the trained model. Existing defenses against these backdoor attacks either rely on techniques like Differential Privacy to mitigate the backdoor, or analyze the weights of the individual models and apply outlier detection methods, which restricts these defenses to certain data distributions. However, adding noise to the models' parameters or excluding benign outliers might also reduce the accuracy of the collaboratively trained model. Additionally, allowing the server to inspect the clients' models creates a privacy risk due to existing knowledge extraction methods. We propose CrowdGuard, a model-filtering defense that mitigates backdoor attacks by leveraging the clients' data to analyze the individual models before aggregation. To prevent data leaks, the server sends the individual models to secure enclaves running in client-located Trusted Execution Environments. To effectively distinguish benign and poisoned models, even if the data of different clients are not independently and identically distributed (non-IID), we introduce a novel metric called HLBIM to analyze the outputs of the DNN's hidden layers. We show that the applied significance-based detection algorithm can effectively detect poisoned models, even in non-IID scenarios. Our extensive evaluation shows that CrowdGuard effectively mitigates targeted poisoning attacks, achieving a True-Positive Rate of 100% and a True-Negative Rate of 100% in various scenarios.
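HLBIM itself is not specified in this abstract, so the following is only a generic stand-in for the underlying idea (a sketch under stated assumptions, not CrowdGuard's actual algorithm): inside its enclave, a client runs every received model on its own local data, summarizes one hidden layer's activations per model, and flags models whose summaries deviate significantly from the majority using a robust median/MAD z-score. The choice of layer, the summary statistic, the threshold, and the toy models are all illustrative assumptions.

```python
import copy
import numpy as np
import torch
import torch.nn as nn

def hidden_layer_summary(model: nn.Module, layer: nn.Module, data: torch.Tensor) -> np.ndarray:
    """Mean activation of one hidden layer over the client's local batch."""
    captured = {}
    hook = layer.register_forward_hook(lambda m, i, o: captured.update(out=o.detach()))
    with torch.no_grad():
        model(data)
    hook.remove()
    return captured["out"].mean(dim=0).flatten().numpy()

def flag_suspicious(summaries, threshold=3.0):
    """Simplified significance test: robust z-scores of each model's distance to the median summary."""
    S = np.stack(summaries)                            # one row per received model
    dist = np.linalg.norm(S - np.median(S, axis=0), axis=1)
    mad = np.median(np.abs(dist - np.median(dist))) + 1e-12
    scores = 0.6745 * (dist - np.median(dist)) / mad   # modified z-score
    return [i for i, z in enumerate(scores) if z > threshold]

# Toy round: four benign-looking updates of a shared base model and one heavily perturbed one.
torch.manual_seed(0)
base = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))
models = [copy.deepcopy(base) for _ in range(5)]
with torch.no_grad():
    for k, m in enumerate(models):
        scale = 0.01 if k < 4 else 1.0                 # model 4 plays the poisoned update
        for p in m.parameters():
            p.add_(scale * torch.randn_like(p))
local_data = torch.randn(64, 8)
summaries = [hidden_layer_summary(m, m[0], local_data) for m in models]
print("suspicious model indices:", flag_suspicious(summaries))
```

In the scheme described above, such checks would run inside the client-located enclaves, with presumably only the detection results leaving the enclave, so the server never inspects model behaviour on client data directly.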
Federated Learning (FL) allows multiple clients to collaboratively train a neural network (NN) model on their private data without revealing that data. Recently, several targeted poisoning attacks against FL have been introduced. These attacks inject a backdoor into the resulting model, allowing adversary-controlled inputs to be misclassified. Existing countermeasures against backdoor attacks are inefficient and often merely aim to exclude deviating models from the aggregation. However, this approach also removes benign models of clients with deviating data distributions, causing the aggregated model to perform poorly for those clients. To address this problem, we propose DeepSight, a novel model-filtering approach for mitigating backdoor attacks. It is based on three novel techniques that allow characterizing the distribution of the data used to train model updates and that measure fine-grained differences in the internal structure and outputs of NNs. Using these techniques, DeepSight can identify suspicious model updates. We also develop a scheme that can accurately cluster model updates. Combining the results of both components, DeepSight is able to identify and eliminate model clusters containing poisoned models with high attack impact. We also show that the backdoor contributions of possibly undetected poisoned models can be effectively mitigated with existing weight-clipping-based defenses. We evaluate DeepSight's performance and effectiveness and show that it can mitigate state-of-the-art backdoor attacks with negligible impact on the model's performance on benign data.
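The weight-clipping idea mentioned at the end can be illustrated independently of DeepSight's filtering and clustering components. The sketch below is a simplified stand-in rather than the paper's method: it bounds the L2 norm of every client update before FedAvg-style averaging so that an oversized (potentially poisoned) update cannot dominate the aggregate. The parameter layout, clipping threshold, and toy numbers are assumptions for illustration.

```python
import numpy as np

def clip_update(update: dict, max_norm: float) -> dict:
    """Scale a client's update so that its global L2 norm is at most max_norm."""
    total = np.sqrt(sum(np.sum(v ** 2) for v in update.values()))
    factor = min(1.0, max_norm / (total + 1e-12))
    return {name: v * factor for name, v in update.items()}

def aggregate(global_weights: dict, client_updates: list, max_norm: float = 0.5) -> dict:
    """FedAvg over clipped updates (uniform client weights for simplicity)."""
    clipped = [clip_update(u, max_norm) for u in client_updates]
    return {name: global_weights[name] + np.mean([u[name] for u in clipped], axis=0)
            for name in global_weights}

# Toy round: two small benign updates and one oversized (malicious) one.
rng = np.random.default_rng(1)
global_w = {"w": rng.normal(size=(4, 4)), "b": np.zeros(4)}
updates = [{k: 0.05 * rng.normal(size=v.shape) for k, v in global_w.items()} for _ in range(2)]
updates.append({k: 5.0 * rng.normal(size=v.shape) for k, v in global_w.items()})
new_w = aggregate(global_w, updates)
print({k: round(float(np.linalg.norm(new_w[k] - global_w[k])), 4) for k in new_w})
```

Clipping limits how far any single update can move the global model in one round, which bounds the residual backdoor contribution of an undetected poisoned model at the cost of also capping unusually large benign updates.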