Federated learning (FL) has emerged as a practical solution to the data silo problem without compromising user privacy. One of its variants, vertical federated learning (VFL), has recently attracted attention because VFL matches enterprises' demand to leverage more valuable features to build better machine learning models while preserving user privacy. Current work in VFL concentrates on developing specific protection or attack mechanisms for particular VFL algorithms. In this work, we propose an evaluation framework that formulates the privacy-utility evaluation problem. We then use this framework as a guide to comprehensively evaluate a broad range of protection mechanisms against most of the state-of-the-art privacy attacks for three widely deployed VFL algorithms. These evaluations can help FL practitioners select appropriate protection mechanisms under specific requirements. Our evaluation results show that model inversion attacks and most label inference attacks can be thwarted by existing protection mechanisms, whereas model completion (MC) attacks are hard to prevent and call for more advanced, MC-targeted protection mechanisms. Based on our evaluation results, we provide concrete recommendations for improving the privacy-preserving capability of VFL systems.
Federated learning (FL) enables independent parties to collaboratively build machine learning (ML) models while protecting data privacy. Vertical federated learning (VFL), a variant of FL, has recently drawn attention because it matches enterprises' demand to leverage more valuable features to achieve better model performance without compromising data privacy. However, conventional VFL may run into a data-deficiency problem, because it can only exploit aligned samples (belonging to different parties) that carry labels, while the majority of unaligned and unlabeled samples are usually left unused. This data deficiency hampers the federation's effort. In this work, we propose a federated hybrid self-supervised learning framework, FedHSSL, which utilizes all available data of the participants (including unaligned and unlabeled samples) to train the joint VFL model. The core idea of FedHSSL is to exploit the cross-party views (i.e., dispersed features) of samples aligned among parties and the local views (i.e., augmentations) within each party to improve the representation learning capability of the joint VFL model through self-supervised learning (SSL) (e.g., SimSiam). FedHSSL further exploits generic features shared among parties to boost the performance of the joint model through partial model aggregation. We empirically demonstrate that FedHSSL achieves significant performance gains compared with baseline methods, especially when the number of labeled samples is small. We provide an in-depth analysis of FedHSSL regarding privacy leakage, which is rarely discussed in existing self-supervised VFL works. We also investigate protection mechanisms for FedHSSL, and the results show that our protection can thwart state-of-the-art label inference attacks.
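To make the hybrid objective concrete, here is a minimal PyTorch-style sketch of the core idea (not the authors' code): a SimSiam-style loss over two local augmented views, combined with a loss that aligns the local view with a cross-party view received for aligned samples. The module names (encoder, projector, predictor) and the weighting `alpha` are illustrative assumptions.

```python
import torch.nn.functional as F

def simsiam_loss(p, z):
    """Negative cosine similarity with a stop-gradient on the target side."""
    return -F.cosine_similarity(p, z.detach(), dim=-1).mean()

def hybrid_ssl_loss(encoder, projector, predictor,
                    x_local_aug1, x_local_aug2, z_cross_party, alpha=0.5):
    # Local views: two augmentations of this party's own features.
    z1 = projector(encoder(x_local_aug1))
    z2 = projector(encoder(x_local_aug2))
    p1, p2 = predictor(z1), predictor(z2)
    local_loss = 0.5 * (simsiam_loss(p1, z2) + simsiam_loss(p2, z1))

    # Cross-party view: representation of the same aligned sample received
    # from another party (exchanged only for aligned samples).
    cross_loss = simsiam_loss(p1, z_cross_party)

    return alpha * local_loss + (1 - alpha) * cross_loss
```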
In collaborative learning settings such as federated learning, curious parties might be honest but attempt to infer other parties' private data through inference attacks, while malicious parties might manipulate the learning process through backdoor attacks. However, most existing works only consider the federated learning scenario where data are partitioned by samples (HFL). Feature-partitioned federated learning (VFL) is another important scenario in many real-world applications. Attacks and defenses in this scenario are especially challenging when the attackers and the defenders cannot access each other's features or model parameters. Previous works have only shown that private labels can be reconstructed from per-sample gradients. In this paper, we first show that private labels can be reconstructed even when only batch-averaged gradients are revealed, which defeats a common presumption. In addition, we show that a passive party in VFL can even replace its corresponding labels with a target label through a gradient-replacement attack. To defend against the first attack, we introduce a novel technique termed confusional autoencoder (CoAE), based on autoencoders and entropy regularization. We demonstrate that this technique can successfully block label inference attacks while hurting main-task accuracy less than existing methods. Our CoAE technique is also effective in defending against gradient-replacement backdoor attacks, making it a universal and practical defense strategy that requires no change to the original VFL protocol. We demonstrate the effectiveness of our method under both two-party and multi-party VFL settings. To the best of our knowledge, this is the first systematic study of label inference and backdoor attacks in the feature-partitioned federated learning framework.
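Below is a minimal, hypothetical sketch of what a confusional-autoencoder objective could look like: the encoder maps a one-hot label to a high-entropy "confused" soft label used during VFL training, the decoder must recover the true label, and an entropy term pushes the confused label away from the truth. Layer sizes and the weighting `lam` are assumptions, not the paper's settings.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CoAE(nn.Module):
    def __init__(self, num_classes, hidden=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(num_classes, hidden), nn.ReLU(),
                                     nn.Linear(hidden, num_classes))
        self.decoder = nn.Sequential(nn.Linear(num_classes, hidden), nn.ReLU(),
                                     nn.Linear(hidden, num_classes))

    def forward(self, y_onehot):
        soft_fake = F.softmax(self.encoder(y_onehot), dim=-1)  # confused label
        recon_logits = self.decoder(soft_fake)                  # recover true label
        return soft_fake, recon_logits

def coae_loss(soft_fake, recon_logits, y_true, lam=1.0):
    # Reconstruction: the decoder must map the confused label back to the truth.
    recon = F.cross_entropy(recon_logits, y_true)
    # Entropy regularization: push the confused label towards high entropy so it
    # leaks as little information as possible about the true label.
    entropy = -(soft_fake * torch.log(soft_fake + 1e-8)).sum(dim=-1).mean()
    return recon - lam * entropy
```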
Federated learning (FL) aims to protect data privacy by enabling clients to collaboratively build machine learning models without sharing their private data. However, recent works demonstrate that FL is vulnerable to gradient-based data recovery attacks. A variety of privacy-preserving techniques have been leveraged to further enhance the privacy of FL. Nevertheless, they are either computationally or communicationally expensive (e.g., homomorphic encryption) or suffer from precision loss (e.g., differential privacy). In this work, we propose \textsc{FedCG}, a novel \underline{Fed}erated learning method that leverages \underline{C}onditional \underline{G}enerative adversarial networks to achieve high-level privacy protection while still maintaining competitive model performance. More specifically, \textsc{FedCG} decomposes each client's local network into a private extractor and a public classifier, and keeps the extractor local to protect privacy. Instead of exposing the extractors, which are the culprits of privacy leakage, \textsc{FedCG} shares the clients' generators with the server to aggregate common knowledge, aiming to enhance the performance of clients' local networks. Extensive experiments demonstrate that \textsc{FedCG} can achieve competitive model performance compared with baseline FL methods, and a quantitative privacy analysis shows that \textsc{FedCG} has a high level of privacy-preserving capability.
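The following is an illustrative sketch (not the paper's implementation) of the \textsc{FedCG} split: the extractor stays private, only the classifier and a conditional generator are uploaded, and the generator is trained to mimic the extractor's feature distribution. A simple feature-matching loss stands in here for the paper's adversarial training; names and dimensions are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FedCGClient(nn.Module):
    def __init__(self, extractor, classifier, generator):
        super().__init__()
        self.extractor = extractor    # private: never leaves the client
        self.classifier = classifier  # public: shared with the server
        self.generator = generator    # public: conditional generator (z, y) -> feature

    def task_loss(self, x, y):
        feat = self.extractor(x)
        return F.cross_entropy(self.classifier(feat), y)

    def generator_loss(self, x, y, noise_dim=64):
        # Train the generator to mimic the extractor's feature distribution,
        # conditioned on the label, so the server can aggregate generators
        # instead of privacy-leaking extractors.
        with torch.no_grad():
            real_feat = self.extractor(x)
        z = torch.randn(x.size(0), noise_dim, device=x.device)
        fake_feat = self.generator(z, y)
        return F.mse_loss(fake_feat, real_feat)

    def upload(self):
        # Only generator and classifier parameters are sent for aggregation.
        return {"generator": self.generator.state_dict(),
                "classifier": self.classifier.state_dict()}
```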
Designing experiments often requires balancing between learning about the true treatment effects and earning from allocating more samples to the superior treatment. While optimal algorithms for the Multi-Armed Bandit Problem (MABP) provide allocation policies that optimally balance learning and earning, they tend to be computationally expensive. The Gittins Index (GI) is a solution to the MABP that can simultaneously attain optimality and computational efficiency, and it has recently been used in experiments with Bernoulli and Gaussian rewards. For the first time, we present a modification of the GI rule that can be used in experiments with exponentially-distributed rewards. We report its performance in simulated 2-armed and 3-armed experiments. Compared to traditional non-adaptive designs, our novel GI-modified design shows operating characteristics comparable in learning (e.g. statistical power) but substantially better in earning (e.g. direct benefits). This illustrates the potential of designs that use a GI approach to allocate participants to improve participant benefits, increase efficiency, and reduce experimental costs in adaptive multi-armed experiments with exponential rewards.
Transformer has achieved impressive successes for various computer vision tasks. However, most existing studies require pretraining the Transformer backbone on a large-scale labeled dataset (e.g., ImageNet) to achieve satisfactory performance, and such datasets are usually unavailable for medical images. Additionally, due to the gap between medical and natural images, the improvement delivered by ImageNet pretrained weights degrades significantly when the weights are transferred to medical image processing tasks. In this paper, we propose Bootstrap Own Latent of Transformer (BOLT), a self-supervised learning approach designed specifically for medical image classification with the Transformer backbone. Our BOLT consists of two networks, namely online and target branches, for self-supervised representation learning. Concretely, the online network is trained to predict the target network's representation of the same patch embedding tokens under a different perturbation. To make the most of the Transformer given limited medical data, we propose an auxiliary difficulty ranking task: the Transformer is forced to identify which branch (i.e., online/target) is processing the more difficult perturbed tokens. Overall, the Transformer strives to distill transformation-invariant features from the perturbed tokens, simultaneously achieving difficulty measurement and maintaining the consistency of self-supervised representations. The proposed BOLT is evaluated on three medical image processing tasks, i.e., skin lesion classification, knee fatigue fracture grading and diabetic retinopathy grading. The experimental results validate the superiority of our BOLT for medical image classification compared to ImageNet pretrained weights and state-of-the-art self-supervised learning approaches.
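A simplified, assumed sketch of the two-branch objective is given below: the online branch predicts the target branch's representation of the same tokens under a different perturbation, and an auxiliary head ranks which branch received the harder perturbation. The EMA target update and the difficulty labeling convention are assumptions made for illustration, not the paper's exact recipe.

```python
import torch
import torch.nn.functional as F

def bolt_losses(online_net, target_net, predictor, difficulty_head,
                tokens_weak, tokens_strong):
    # Online branch sees the strongly perturbed tokens, target branch the weak ones.
    online_repr = predictor(online_net(tokens_strong))
    with torch.no_grad():
        target_repr = target_net(tokens_weak)

    # Bootstrap loss: predict the target representation (BYOL/SimSiam style).
    ssl_loss = -F.cosine_similarity(online_repr, target_repr, dim=-1).mean()

    # Auxiliary difficulty ranking: classify which branch handled the harder
    # (more strongly perturbed) tokens; label 1 = online branch in this sketch.
    logits = difficulty_head(torch.cat([online_repr, target_repr], dim=-1))
    labels = torch.ones(logits.size(0), dtype=torch.long, device=logits.device)
    rank_loss = F.cross_entropy(logits, labels)

    return ssl_loss + rank_loss

@torch.no_grad()
def ema_update(target_net, online_net, m=0.99):
    # Target branch follows the online branch via an exponential moving average.
    for t, o in zip(target_net.parameters(), online_net.parameters()):
        t.mul_(m).add_(o, alpha=1 - m)
```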
Text clustering and topic extraction are two important tasks in text mining. Usually, these two tasks are performed separately. For topic extraction to facilitate clustering, we can first project texts into a topic space and then perform a clustering algorithm to obtain clusters. To promote topic extraction by clustering, we can first obtain clusters with a clustering algorithm and then extract cluster-specific topics. However, this naive strategy ignores the fact that text clustering and topic extraction are strongly correlated and follow a chicken-and-egg relationship. Performing them separately fails to make them mutually benefit each other to achieve the best overall performance. In this paper, we propose an unsupervised text clustering and topic extraction framework (ClusTop) which integrates text clustering and topic extraction into a unified framework and can achieve high-quality clustering results while extracting topics from each cluster simultaneously. Our framework includes four components: enhanced language model training, dimensionality reduction, clustering and topic extraction, where the enhanced language model can be viewed as a bridge between clustering and topic extraction. On one hand, it provides text embeddings with a strong cluster structure, which facilitates effective text clustering; on the other hand, it pays close attention to topic-related words for topic extraction thanks to its self-attention architecture. Moreover, the training of the enhanced language model is unsupervised. Experiments on two datasets demonstrate the effectiveness of our framework and provide benchmarks for different model combinations within this framework.
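As an illustration of the four-component pipeline, here is a minimal sketch with off-the-shelf components standing in for the enhanced language model and the paper's topic extractor (PCA, KMeans and TF-IDF are assumed substitutes, not the authors' choices):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

def cluster_and_extract_topics(texts, embeddings, n_clusters=10, n_top_words=10):
    # 1) Dimensionality reduction of the language-model embeddings
    #    (assumes embeddings have at least 50 dimensions).
    reduced = PCA(n_components=50).fit_transform(embeddings)

    # 2) Clustering in the reduced space.
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(reduced)

    # 3) Cluster-specific topic words: treat each cluster as one "document"
    #    and take its highest-weighted TF-IDF terms.
    docs = [" ".join(t for t, l in zip(texts, labels) if l == c)
            for c in range(n_clusters)]
    vec = TfidfVectorizer(stop_words="english")
    tfidf = vec.fit_transform(docs)
    vocab = np.array(vec.get_feature_names_out())
    topics = [vocab[np.argsort(tfidf[c].toarray().ravel())[::-1][:n_top_words]].tolist()
              for c in range(n_clusters)]
    return labels, topics
```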
This paper illustrates the technologies of user next intent prediction with a concept knowledge graph. The system has been deployed on the Web at Alipay, serving more than 100 million daily active users. Specifically, we propose AlipayKG to explicitly characterize user intent; it is an offline concept knowledge graph in the Life-Service domain that models the historical behaviors of users, the rich content users interact with, and the relations between them. We further introduce a Transformer-based model which integrates expert rules from the knowledge graph to infer the online user's next intent. Experimental results demonstrate that the proposed system can effectively enhance the performance of the downstream tasks while retaining explainability.
Capturing feature information effectively is of great importance in vision tasks. With the development of convolutional neural networks (CNNs), concepts like residual connections and multiple scales have driven continual performance gains on diverse deep learning vision tasks. However, existing methods do not organically combine the advantages of these ideas. In this paper, we propose a novel CNN architecture called GoogLe2Net, which consists of residual feature-reutilization inceptions (ResFRI) or split residual feature-reutilization inceptions (Split-ResFRI). These create transverse passages between adjacent groups of convolutional layers to enable features to flow to later processing branches, and possess residual connections to better process information. Our GoogLe2Net is able to reutilize information captured by preceding groups of convolutional layers and express multi-scale features at a fine-grained level, which improves image classification performance. The proposed inception module can be embedded into inception-like networks directly without any migration cost. Moreover, in experiments on popular vision datasets, such as CIFAR10 (97.94%), CIFAR100 (85.91%) and Tiny ImageNet (70.54%), we obtain better results on the image classification task compared with other modern models.
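As a rough, assumed sketch of the feature-reutilization idea, the block below lets each group of convolutions consume the block input fused with the previous group's output (a "transverse passage"), concatenates the per-group features, and adds a residual connection. Channel counts, group sizes and the fusion operator are illustrative, not the paper's exact ResFRI design.

```python
import torch
import torch.nn as nn

class ResFRIBlock(nn.Module):
    def __init__(self, channels, groups=4):
        super().__init__()
        self.convs = nn.ModuleList([
            nn.Sequential(nn.Conv2d(channels, channels, 3, padding=1),
                          nn.BatchNorm2d(channels), nn.ReLU(inplace=True))
            for _ in range(groups)])
        self.fuse = nn.Conv2d(channels * groups, channels, 1)

    def forward(self, x):
        outs = []
        for conv in self.convs:
            # Transverse passage: each group sees the block input fused with
            # the previous group's output, so earlier features are reused.
            inp = x if not outs else x + outs[-1]
            outs.append(conv(inp))
        # Concatenate per-group (multi-scale) features, fuse, and add the residual.
        return torch.relu(self.fuse(torch.cat(outs, dim=1)) + x)
```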
Despite some successful applications of goal-driven navigation, existing deep reinforcement learning-based approaches notoriously suffer from poor data efficiency. One of the reasons is that the goal information is decoupled from the perception module and directly introduced as a condition of decision-making, resulting in goal-irrelevant features of the scene representation playing an adversarial role during the learning process. In light of this, we present a novel Goal-guided Transformer-enabled reinforcement learning (GTRL) approach that treats the physical goal states as an input of the scene encoder, guiding the scene representation to couple with the goal information and realizing efficient autonomous navigation. More specifically, we propose a novel variant of the Vision Transformer as the backbone of the perception system, namely the Goal-guided Transformer (GoT), and pre-train it with expert priors to boost data efficiency. Subsequently, a reinforcement learning algorithm is instantiated for the decision-making system, taking the goal-oriented scene representation from the GoT as input and generating decision commands. As a result, our approach motivates the scene representation to concentrate mainly on goal-relevant features, which substantially enhances the data efficiency of the DRL learning process, leading to superior navigation performance. Both simulation and real-world experimental results demonstrate the superiority of our approach in terms of data efficiency, performance, robustness, and sim-to-real generalization, compared with other state-of-the-art baselines. Demonstration videos are available at https://youtu.be/93LGlGvaN0c.
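A minimal sketch of the goal-guided encoding step (layer sizes, goal dimensionality and tokenization are assumptions) might embed the physical goal state as an extra token and process it jointly with the scene tokens, so the representation handed to the RL policy is already conditioned on the goal:

```python
import torch
import torch.nn as nn

class GoalGuidedEncoder(nn.Module):
    def __init__(self, token_dim=256, goal_dim=3, n_layers=4, n_heads=8):
        super().__init__()
        self.goal_embed = nn.Linear(goal_dim, token_dim)
        layer = nn.TransformerEncoderLayer(d_model=token_dim, nhead=n_heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)

    def forward(self, scene_tokens, goal_state):
        # scene_tokens: (B, N, token_dim); goal_state: (B, goal_dim)
        goal_token = self.goal_embed(goal_state).unsqueeze(1)   # (B, 1, D)
        tokens = torch.cat([goal_token, scene_tokens], dim=1)   # prepend the goal
        encoded = self.encoder(tokens)
        # The goal token's output serves as the goal-conditioned scene representation.
        return encoded[:, 0]
```

A downstream RL policy head (e.g., an actor network producing navigation commands) would consume this representation as its state input.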