Hyperparameter tuning is critical to the success of federated learning applications. Unfortunately, appropriately selecting hyperparameters is challenging in federated networks. Issues of scale, privacy, and heterogeneity introduce noise in the tuning process and make it difficult to evaluate the performance of various hyperparameters. In this work, we perform the first systematic study on the effect of noisy evaluation in federated hyperparameter tuning. We first identify and rigorously explore key sources of noise, including client subsampling, data and systems heterogeneity, and data privacy. Surprisingly, our results indicate that even small amounts of noise can significantly impact tuning methods, reducing the performance of state-of-the-art approaches to that of naive baselines. To address noisy evaluation in such scenarios, we propose a simple and effective approach that leverages public proxy data to boost the evaluation signal. Our work establishes general challenges, baselines, and best practices for future work in federated hyperparameter tuning.
Privacy noise may negate the benefits of using adaptive optimizers in differentially private model training. Prior works typically address this issue by using auxiliary information (e.g., public data) to boost the effectiveness of adaptive optimization. In this work, we explore techniques to estimate and efficiently adapt to gradient geometry in private adaptive optimization without auxiliary data. Motivated by the observation that adaptive methods can tolerate stale preconditioners, we propose differentially private adaptive training with delayed preconditioners (DP^2), a simple method that constructs delayed but less noisy preconditioners to better realize the benefits of adaptivity. Theoretically, we provide convergence guarantees for our method for both convex and non-convex problems, and analyze trade-offs between delay and privacy noise reduction. Empirically, we explore DP^2 across several real-world datasets, demonstrating that it can improve convergence speed by as much as 4x relative to non-adaptive baselines and match the performance of state-of-the-art optimization methods that require auxiliary data.
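The delayed-preconditioner idea can be sketched in a few lines. The toy below is a minimal illustration under stated assumptions, not the paper's DP^2 algorithm: the function name, the RMSProp-style scaling of an averaged gradient window, and the stability floor `delta` are all illustrative choices. The point it demonstrates is that because privacy noise is zero-mean, averaging a window of noisy gradients before building the preconditioner cancels much of the noise, at the cost of the preconditioner being `delay` steps stale.

```python
import numpy as np

def dp2_sgd(grad_fn, w0, steps=300, lr=0.05, delay=10,
            clip=1.0, noise_mult=1.0, delta=0.1, seed=0):
    """Toy sketch of adaptive DP training with delayed preconditioners.

    Gradients are clipped and noised every step (standard DP-SGD), but
    the adaptive preconditioner is rebuilt only every `delay` steps from
    the *average* of the window's noisy gradients, so the zero-mean
    privacy noise is averaged down before it enters the preconditioner.
    `delta` is a small floor that keeps updates stable when the averaged
    gradient is near zero.
    """
    rng = np.random.default_rng(seed)
    w = np.array(w0, dtype=float)
    precond = np.ones_like(w)                  # start as plain (DP-)SGD
    window = []                                # noisy grads since last refresh
    for t in range(steps):
        g = grad_fn(w)
        g = g / max(1.0, np.linalg.norm(g) / clip)            # clip
        g = g + rng.normal(0.0, noise_mult * clip, g.shape)   # privatize
        window.append(g)
        if (t + 1) % delay == 0:               # delayed refresh
            g_bar = np.mean(window, axis=0)    # noise averaged down
            precond = np.abs(g_bar) + delta    # per-coordinate scale
            window = []
        w = w - lr * g / precond
    return w
```

With `noise_mult=0.0` the sketch reduces to a deterministic adaptive method with a stale preconditioner, which is a convenient way to sanity-check the update rule.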
Personalized federated learning considers learning models that are unique to each client in a heterogeneous network. The resulting client-specific models are purported to improve metrics such as accuracy, fairness, and robustness in federated networks. However, despite a plethora of work in this area, it remains unclear: (1) which personalization techniques are most effective in various settings, and (2) how important personalization truly is for realistic federated applications. To better answer these questions, we propose Motley, a benchmark for personalized federated learning. Motley consists of a suite of cross-device and cross-silo federated datasets from varied problem domains, as well as thorough evaluation metrics for better understanding the possible impact of personalization. We establish baselines on the benchmark by comparing a number of representative personalized federated learning methods. These initial results highlight strengths and weaknesses of existing approaches, and raise several open questions for the community. Motley aims to provide a reproducible means of advancing research in personalized and heterogeneity-aware federated learning, as well as the related areas of transfer learning, meta-learning, and multi-task learning.
While the application of differential privacy (DP) has been well-studied in cross-device federated learning (FL), there is a lack of work considering DP for cross-silo FL, a setting characterized by a limited number of clients, each containing many data subjects. In cross-silo FL, the usual notion of client-level privacy is less suitable, as real-world privacy regulations typically concern the in-silo data subjects rather than the silos themselves. In this work, we instead consider the more realistic notion of silo-specific item-level privacy, where silos set their own privacy targets for their local examples. Under this setting, we reconsider the roles of personalization in federated learning. In particular, we show that mean-regularized multi-task learning (MR-MTL), a simple personalization framework, is a strong baseline for cross-silo FL: under stronger privacy, silos are further incentivized to "federate" with one another to mitigate DP noise, yielding consistent improvements over standard baseline methods. We provide a thorough empirical study of competing methods as well as a theoretical characterization of MR-MTL for a mean estimation problem, highlighting the interplay between privacy and cross-silo data heterogeneity. Our work serves to establish baselines for private cross-silo FL and to identify key directions for future work in this area.
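Part of why MR-MTL makes such a clean baseline is that its objective is just local training plus a quadratic pull toward the mean of all personal models. The sketch below is an illustrative toy under stated assumptions, not the paper's private algorithm: it omits DP noise entirely, and the function name and gradient-oracle interface are hypothetical.

```python
import numpy as np

def mr_mtl(local_grads, w0, lam=1.0, rounds=100, lr=0.1):
    """Toy sketch of mean-regularized multi-task learning (MR-MTL).

    Each silo k keeps a personal model w_k and takes gradient steps on
    F_k(w_k) + (lam / 2) * ||w_k - w_bar||^2, where w_bar is the mean
    of all personal models. lam = 0 recovers purely local training; a
    large lam pushes all silos toward a single shared model.
    """
    K = len(local_grads)
    ws = [np.array(w0, dtype=float) for _ in range(K)]
    for _ in range(rounds):
        w_bar = np.mean(ws, axis=0)        # server aggregates the mean
        for k in range(K):
            g = local_grads[k](ws[k]) + lam * (ws[k] - w_bar)
            ws[k] = ws[k] - lr * g
    return ws
```

The single knob `lam` interpolates between local and fully-federated training, which is exactly the lever the abstract says privacy acts on: more DP noise makes a larger `lam` (more federation) attractive.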
In federated learning, fair prediction across protected groups is an important constraint for many applications. Unfortunately, prior work studying group-fair federated learning tends to lack formal convergence or fairness guarantees. In this work, we propose a general framework for provably fair federated learning. In particular, we explore and extend the notion of Bounded Group Loss as a theoretically-grounded approach to group fairness. Using this setup, we propose a scalable federated optimization method that optimizes the empirical risk under a number of group fairness constraints. We provide convergence guarantees for the method as well as fairness guarantees for the resulting solution. Empirically, we evaluate our method across common benchmarks from fair ML and federated learning, showing that it can provide both fairer and more accurate predictions than baseline approaches.
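One way to read "optimizes the empirical risk under group fairness constraints" is as a primal-dual scheme: a dual variable per group grows while that group's loss exceeds its bound, upweighting the group in the next primal step. The sketch below is a hypothetical centralized toy of that reading, not the paper's federated algorithm; the function name and the specific step sizes are illustrative.

```python
import numpy as np

def fair_erm(group_grads, group_losses, w0, bound=1.5,
             rounds=500, lr=0.05, dual_lr=0.1):
    """Toy primal-dual sketch of ERM under Bounded Group Loss.

    Minimize the average loss subject to L_g(w) <= bound for every
    protected group g. The dual variable lam[g] rises while group g's
    loss exceeds the bound (upweighting its gradient) and decays back
    toward zero, projected at 0, once the constraint is satisfied.
    """
    G = len(group_grads)
    w = np.array(w0, dtype=float)
    lam = np.zeros(G)
    for _ in range(rounds):
        # primal step: uniform weights plus the dual upweighting
        g = sum((1.0 / G + lam[i]) * group_grads[i](w) for i in range(G))
        w = w - lr * g
        # dual step: projected gradient ascent on the constraint slack
        slack = np.array([group_losses[i](w) for i in range(G)]) - bound
        lam = np.maximum(0.0, lam + dual_lr * slack)
    return w, lam
```

At a solution where all group losses sit strictly below the bound, the duals return to zero and the method coincides with ordinary risk minimization, matching the intuition that the constraints only bind when some group is underserved.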
Adaptive optimization methods have become the default solvers for many machine learning tasks. Unfortunately, the benefits of adaptivity may degrade when training with differential privacy, as the noise added to ensure privacy reduces the effectiveness of the adaptive preconditioner. To this end, we propose AdaDPS, a general framework that uses non-sensitive side information to precondition the gradients, allowing the effective use of adaptive methods in private settings. We formally show that AdaDPS reduces the amount of noise needed to achieve similar privacy guarantees, thereby improving optimization performance. Empirically, we leverage simple and readily available side information to explore the performance of AdaDPS in practice, comparing to strong baselines in both centralized and federated settings. Our results show that AdaDPS improves accuracy by 7.7% (absolute) on average, yielding state-of-the-art privacy-utility trade-offs on large-scale text and image benchmarks.
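The key ordering in this approach is that preconditioning with non-sensitive side information happens *before* clipping and noising, so the adaptive scaling is never drowned out by the privacy noise added afterwards. The snippet below is a minimal sketch of one such step under stated assumptions; the function name and argument interface are illustrative, not AdaDPS's actual API.

```python
import numpy as np

def adadps_step(w, grad, side_precond, lr=0.1, clip=1.0,
                noise_mult=1.0, rng=None, eps=1e-8):
    """One toy step of preconditioning with non-sensitive side info.

    `side_precond` is a per-coordinate scale estimated from public or
    otherwise non-sensitive data (e.g., feature frequencies), so using
    it costs no privacy budget. The gradient is scaled by it first,
    then clipped and noised as in standard DP-SGD.
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    g = grad / (np.sqrt(side_precond) + eps)       # adapt via side info
    g = g / max(1.0, np.linalg.norm(g) / clip)     # clip preconditioned grad
    g = g + rng.normal(0.0, noise_mult * clip, size=g.shape)  # privatize
    return w - lr * g
```

Contrast this with noising first and preconditioning second: there the preconditioner must be estimated from already-noisy gradients, which is exactly the failure mode the abstract describes.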
Input pipelines, which ingest and transform input data, are an essential part of training machine learning (ML) models. However, implementing efficient input pipelines is challenging, as it requires reasoning about parallelism, asynchrony, and variability in fine-grained profiling information. Our analysis of over two million ML jobs in Google datacenters reveals that a significant fraction of model training jobs could benefit from faster input data pipelines. At the same time, our analysis indicates that most jobs do not saturate host hardware, pointing toward software-based bottlenecks. Motivated by these findings, we propose Plumber, a tool for finding bottlenecks in ML input pipelines. Plumber uses an extensible and interpretable analytical model based on operational analysis to automatically tune parallelism, prefetching, and caching under host resource constraints. Across five representative ML pipelines, Plumber obtains speedups of up to 46x for misconfigured pipelines. By automating caching, Plumber obtains end-to-end speedups of over 40% relative to state-of-the-art tuners.
Tuning hyperparameters is a crucial but arduous part of the machine learning pipeline. Hyperparameter optimization is even more challenging in federated learning, where models are learned over a distributed network of heterogeneous devices; here, the need to keep data on device and to perform local training makes it difficult to efficiently train and evaluate configurations. In this work, we investigate the problem of federated hyperparameter tuning. We first identify key challenges and show how standard approaches may be adapted to form baselines for the federated setting. Then, by making a novel connection to the neural architecture search technique of weight-sharing, we introduce a new method, FedEx, to accelerate federated hyperparameter tuning. FedEx is applicable to widely-used federated optimization methods such as FedAvg and recent variants. Theoretically, we show that FedEx correctly tunes the on-device learning rate in the setting of online convex optimization across devices. Empirically, we show that FedEx can outperform natural baselines for federated hyperparameter tuning by several percentage points on the Shakespeare, FEMNIST, and CIFAR-10 benchmarks, obtaining higher accuracy using the same training budget.
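The weight-sharing connection can be caricatured as follows: candidate configurations are treated like shared "arms," each participating client is assigned one sampled configuration per round, and a categorical distribution over configurations is updated with an exponentiated-gradient step on the observed losses. The sketch below is a loose illustration of that idea under stated assumptions, not the FedEx algorithm; the function name, the single-round loss oracle, and the step size are all hypothetical simplifications.

```python
import numpy as np

def tune_configs(configs, client_loss, rounds=100,
                 clients_per_round=10, step=1.0, seed=0):
    """Toy exponentiated-gradient search over a shared config pool.

    Each round, every sampled client evaluates one configuration drawn
    from the current distribution `theta`; an importance-weighted,
    unbiased estimate of the expected loss gradient then reweights
    `theta` multiplicatively toward low-loss configurations.
    """
    rng = np.random.default_rng(seed)
    theta = np.ones(len(configs)) / len(configs)  # distribution over configs
    for _ in range(rounds):
        idx = rng.choice(len(configs), size=clients_per_round, p=theta)
        grad = np.zeros(len(configs))
        for i in idx:
            # importance weight 1/theta[i] keeps the estimate unbiased
            grad[i] += client_loss(configs[i]) / (clients_per_round * theta[i])
        theta = theta * np.exp(-step * grad)       # exponentiated-gradient step
        theta = theta / theta.sum()
    return theta
```

Because every sampled client contributes an evaluation of some shared configuration, the search amortizes tuning across the training run instead of requiring separate full training runs per configuration.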
Fairness and robustness are two important concerns for federated learning systems. In this work, we identify that robustness to data and model poisoning attacks and fairness, measured as the uniformity of performance across devices, are competing constraints in statistically heterogeneous networks. To address these constraints, we propose employing a simple, general framework for personalized federated learning, Ditto, that can inherently provide fairness and robustness benefits, and develop a scalable solver for it. Theoretically, we analyze the ability of Ditto to achieve fairness and robustness simultaneously on a class of linear problems. Empirically, across a suite of federated datasets, we show that Ditto not only achieves competitive performance relative to recent personalization methods, but also enables more accurate, robust, and fair models relative to state-of-the-art fair or robust baselines.
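Ditto's objective can be summarized compactly: a global model is trained as usual, while each device additionally trains a personal model on its local loss plus a proximity term to the global model. The sketch below is a minimal toy of that bi-level structure under stated assumptions, not the paper's scalable solver; the function name and gradient-oracle interface are illustrative.

```python
import numpy as np

def ditto(local_grads, w0, lam=1.0, rounds=200, lr=0.1):
    """Toy sketch of the Ditto bi-level objective.

    The global model w is trained on the average gradient
    (FedAvg-style), while each device k trains a personal model v_k on
    f_k(v_k) + (lam / 2) * ||v_k - w||^2. lam = 0 gives purely local
    models; large lam collapses every v_k onto the global model.
    """
    w = np.array(w0, dtype=float)
    vs = [np.array(w0, dtype=float) for _ in local_grads]
    for _ in range(rounds):
        # global update on the average gradient across devices
        w = w - lr * np.mean([g(w) for g in local_grads], axis=0)
        # personal updates: local loss plus proximity to the global model
        for k, g in enumerate(local_grads):
            vs[k] = vs[k] - lr * (g(vs[k]) + lam * (vs[k] - w))
    return w, vs
```

The proximity term is what buys robustness in the abstract's sense: a poisoned global model drags each `v_k` only as far as `lam` allows, while benign devices still benefit from shared signal.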
Federated learning involves training statistical models over remote devices or siloed data centers, such as mobile phones or hospitals, while keeping data localized. Training in heterogeneous and potentially massive networks introduces novel challenges that require a fundamental departure from standard approaches for large-scale machine learning, distributed optimization, and privacy-preserving data analysis. In this article, we discuss the unique characteristics and challenges of federated learning, provide a broad overview of current approaches, and outline several directions of future work that are relevant to a wide range of research communities.