Federated learning (FL) on deep neural networks facilitates new applications at the edge, especially for wearable and Internet-of-Things devices. Such devices capture a large and diverse amount of data, but their memory, compute, power, and connectivity constraints hinder their participation in FL. We propose Centaur, a multitier FL framework that enables ultra-constrained devices to efficiently participate in FL on large neural nets. Centaur combines two major ideas: (i) a data selection scheme that chooses a portion of samples to accelerate learning, and (ii) a partition-based training algorithm that integrates both constrained and powerful devices owned by the same user. Evaluations on four benchmark neural nets and three datasets show that Centaur gains ~10% higher accuracy than local training on constrained devices with ~58% energy savings on average. Our experimental results also demonstrate Centaur's superior efficiency when dealing with imbalanced data, client participation heterogeneity, and various network connection probabilities.
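The abstract does not spell out Centaur's selection criterion; as a hedged illustration only, the sketch below (PyTorch, with hypothetical names) uses per-sample loss as a proxy for which samples accelerate learning, one common way such a scheme could be instantiated on a constrained device.

```python
# Minimal sketch of loss-based data selection for on-device FL training.
# Centaur's actual criterion is not given in the abstract; per-sample loss
# is one plausible proxy for "samples that accelerate learning".
import torch
import torch.nn.functional as F

def select_informative(model, xs, ys, keep_ratio=0.3):
    """Keep the keep_ratio fraction of samples with the highest loss."""
    model.eval()
    with torch.no_grad():
        losses = F.cross_entropy(model(xs), ys, reduction="none")
    k = max(1, int(keep_ratio * len(xs)))
    idx = torch.topk(losses, k).indices   # hardest samples first
    return xs[idx], ys[idx]
```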
Federated learning is generally used in tasks where labels are readily available (e.g., next-word prediction). Relaxing this constraint requires designing unsupervised learning techniques that can support the desirable properties of federated training: robustness to statistical/systems heterogeneity, scalability with the number of participants, and communication efficiency. Prior work on this topic has focused on directly extending centralized self-supervised learning techniques, which were not designed to have the properties listed above. To address this, we propose Orchestra, a novel unsupervised federated learning technique that exploits the federation's hierarchy to orchestrate a distributed clustering task and enforce a globally consistent partitioning of clients' data into discriminable clusters. We show that the algorithmic pipeline in Orchestra ensures good generalization performance under a linear probe, allowing it to outperform alternative techniques across a broad range of conditions, including variations in heterogeneity, number of clients, participation ratio, and local epochs.
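As a rough illustration of the two-level clustering such a scheme could coordinate, here is a minimal sketch assuming scikit-learn's KMeans and hypothetical helper names; Orchestra's actual pipeline additionally trains the representations so that assignments stay globally consistent.

```python
# Sketch of hierarchical clustering across a federation: each client
# clusters its own embeddings, then the server clusters the client
# centroids into a global partition. Illustrative only, not Orchestra's code.
import numpy as np
from sklearn.cluster import KMeans

def client_centroids(embeddings, n_local=8):
    """Run on-device: summarize local embeddings as a few centroids."""
    return KMeans(n_clusters=n_local, n_init=10).fit(embeddings).cluster_centers_

def server_consolidate(all_centroids, n_global=16):
    """Run on the server: cluster the centroids from every client."""
    stacked = np.vstack(all_centroids)
    return KMeans(n_clusters=n_global, n_init=10).fit(stacked).cluster_centers_
```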
Federated learning (FL) enables distributed training of machine learning models while keeping personal data on user devices. Although we are witnessing increasing adoption of FL in mobile sensing, such as human activity recognition (HAR), FL has not been studied in the context of a multi-device environment (MDE), wherein each user owns multiple data-producing devices. With the proliferation of mobile and wearable devices, MDEs are becoming increasingly popular in ubicomp settings, necessitating the study of FL in them. FL in MDEs is characterized by data that is highly non-independent and identically distributed (non-IID) across clients, complicated by the presence of both user and device heterogeneity. Furthermore, ensuring efficient utilization of system resources on FL clients in an MDE remains an important challenge. In this paper, we propose FLAME, a user-centered FL training approach to counter statistical and system heterogeneity in MDEs and bring consistency in inference performance across devices. FLAME features (i) user-centered FL training that exploits the time alignment across devices of the same user; (ii) accuracy- and efficiency-aware device selection; and (iii) model personalization to devices. We also present an FL evaluation testbed with realistic energy-drain and network-bandwidth profiles, and a novel class-based data partitioning scheme to extend existing HAR datasets to a federated setup. Our experimental results on three multi-device HAR datasets show that FLAME outperforms various baselines with a 4.3-25.8% higher F1 score, 1.02-2.86x greater energy efficiency, and up to 2.06x faster convergence to target accuracy through fair distribution of the FL workload.
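The abstract names accuracy- and efficiency-aware device selection without giving the rule; a minimal sketch, assuming a simple weighted utility over per-device accuracy and energy estimates (all names hypothetical), might look like this.

```python
# Illustrative accuracy/efficiency trade-off for picking a user's devices;
# FLAME's actual selection rule is not specified in the abstract.
def select_devices(devices, alpha=1.0, beta=0.5, budget=2):
    """devices: list of dicts with 'acc' (validation accuracy, 0-1)
    and 'energy' (normalized per-round energy cost, 0-1)."""
    scored = sorted(devices,
                    key=lambda d: alpha * d["acc"] - beta * d["energy"],
                    reverse=True)
    return scored[:budget]   # keep only the top-scoring devices per user
```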
Breakthroughs in unsupervised domain adaptation (UDA) can help adapt models from a label-rich source domain to unlabeled target domains. Despite these advancements, there is a lack of research on UDA algorithms, particularly those based on adversarial learning, that can work in distributed settings. In real-world applications, target domains are often distributed across thousands of devices, and existing adversarial UDA algorithms, which are centralized in nature, cannot be applied in these settings. To address this important problem, we introduce FRuDA, an end-to-end framework for distributed adversarial UDA. Through a careful analysis of the UDA literature, we identify the design goals for a distributed UDA system and propose two novel algorithms that increase the adaptation accuracy and training efficiency of adversarial UDA in distributed settings. Our evaluation of FRuDA on five image and speech datasets shows that it can boost target-domain accuracy by up to 50% and improve the training efficiency of adversarial UDA by at least 11x.
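FRuDA's contribution is the distributed coordination, which the abstract does not detail; the sketch below shows only the standard gradient-reversal building block of adversarial UDA (DANN-style) that such a framework would distribute.

```python
# Gradient reversal layer: identity in the forward pass, negated and
# scaled gradient in the backward pass, so the feature encoder learns
# domain-invariant features against a domain discriminator.
import torch

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad):
        return -ctx.lam * grad, None   # reverse gradients into the encoder

def grad_reverse(x, lam=1.0):
    return GradReverse.apply(x, lam)
```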
Pretrained language models (PLMs) often fail to fairly represent target users from certain world regions because of the under-representation of those regions in training datasets. With recent PLMs trained on enormous data sources, quantifying their potential biases is difficult, due to their black-box nature and the sheer scale of the data sources. In this work, we devise an approach to study the geographic bias (and knowledge) present in PLMs, proposing a Geographic-Representation Probing Framework that adopts a self-conditioning method coupled with entity-country mappings. Our findings suggest that PLMs' representations map surprisingly well to the physical world in terms of country-to-country associations, but this knowledge is unequally shared across languages. Lastly, we explain how large PLMs, despite exhibiting notions of geographical proximity, over-amplify geopolitical favouritism at inference time.
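The probing framework is not specified beyond self-conditioning and entity-country mappings; a much-simplified sketch of measuring country-to-country associations from embeddings (with a hypothetical `embed` callable) could look like this.

```python
# Simplified probe: compare embedding similarity of country names with
# physical geography. This is an illustration, not the paper's framework.
import numpy as np

def association_matrix(embed, countries):
    """embed: callable mapping a country name to a unit-norm vector."""
    vecs = np.stack([embed(c) for c in countries])
    return vecs @ vecs.T   # pairwise cosine similarities

# One could then correlate the off-diagonal similarities with negative
# geographic distance (e.g., scipy.stats.spearmanr) to test whether the
# representations mirror the physical world.
```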
Reliable forecasting of traffic flow requires efficient modeling of traffic data. Different correlations and influences arise in a dynamic traffic network, making modeling a complicated task. The existing literature has proposed many methods to capture the complex underlying spatial-temporal relations of traffic networks. However, these methods still struggle to capture local and global dependencies of a long-range nature. Moreover, as increasingly sophisticated methods are proposed, models are becoming memory-heavy and thus unsuitable for low-powered devices. In this paper, we focus on solving these problems by proposing a novel deep learning framework, STLGRU. Specifically, STLGRU can effectively capture both local and global spatial-temporal relations of a traffic network using a memory-augmented attention and gating mechanism. Instead of employing separate temporal and spatial components, we show that our memory module and gated unit can successfully learn the spatial-temporal dependencies, allowing for reduced memory usage with fewer parameters. Extensive experiments on several real-world traffic prediction datasets show that our model performs better than existing methods while keeping a lower memory footprint. Code is available at \url{https://github.com/Kishor-Bhaumik/STLGRU}.
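The exact cell lives in the linked repository; purely as an illustration of the general mechanism, a memory-augmented attention read feeding a gated blend might be sketched as follows (shapes and names are assumptions, not the released code).

```python
# Illustrative memory-augmented attention read plus a gated update,
# in the spirit of a memory module feeding a gated recurrent unit.
import torch
import torch.nn.functional as F

def memory_read(hidden, memory):
    """hidden: (B, d) states; memory: (M, d) learned memory slots."""
    attn = F.softmax(hidden @ memory.T, dim=-1)   # (B, M) attention weights
    return attn @ memory                          # (B, d) read vector

def gated_update(hidden, read, W_z):
    """W_z: e.g. torch.nn.Linear(2 * d, d) producing the gate."""
    z = torch.sigmoid(W_z(torch.cat([hidden, read], dim=-1)))
    return z * hidden + (1 - z) * read            # convex gated blend
```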
Most camera lens systems are designed in isolation, separately from downstream computer vision methods. Recently, joint optimization approaches that design lenses alongside other components of the image acquisition and processing pipeline -- notably, downstream neural networks -- have achieved improved imaging quality or better performance on vision tasks. However, these existing methods optimize only a subset of lens parameters and cannot optimize glass materials given their categorical nature. In this work, we develop a differentiable spherical lens simulation model that accurately captures geometrical aberrations. We propose an optimization strategy to address the challenges of lens design -- notorious for non-convex loss function landscapes and many manufacturing constraints -- which are exacerbated in joint optimization tasks. Specifically, we introduce quantized continuous glass variables to facilitate the optimization and selection of glass materials in an end-to-end design context, and couple this with carefully designed constraints to support manufacturability. In automotive object detection, we show improved detection performance over existing designs even when simplifying the designs to two- or three-element lenses, despite the significantly degraded image quality. Code and optical designs will be made publicly available.
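One way to read "quantized continuous glass variables" is a continuous parameter snapped to the nearest catalog entry with a straight-through gradient; the 1-D sketch below illustrates that idea (the paper operates on material properties such as refractive index and Abbe number, and its exact scheme may differ).

```python
# Straight-through quantization of a continuous variable onto a discrete
# catalog: forward pass uses the nearest allowed value, backward pass
# passes gradients through to the continuous parameter.
import torch

def quantize_st(x, catalog):
    """x: continuous scalar tensor; catalog: 1-D tensor of allowed values."""
    nearest = catalog[torch.argmin((catalog - x).abs())]
    return x + (nearest - x).detach()   # straight-through estimator
```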
Efficient quantum control is necessary for practical quantum computing implementations with current technologies. Conventional algorithms for determining optimal control parameters are computationally expensive, largely excluding their use outside of simulation. Existing hardware solutions structured as lookup tables are imprecise and costly. A more efficient method can be produced by designing a machine learning model that approximates the results of the conventional tools; such a model can then be synthesized into a hardware accelerator for use with quantum systems. In this study, we demonstrate a machine learning algorithm for predicting optimal pulse parameters. The algorithm is lightweight enough to fit on a low-resource FPGA and performs inference with a latency of 175 ns and a pipeline interval of 5 ns, achieving a fidelity of >0.99. In the long term, such an accelerator could be used near quantum computing hardware where conventional computers cannot operate, enabling quantum control at low latency and reasonable cost without incurring large data bandwidths outside the cryogenic environment.
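The abstract does not give the network architecture; a tiny MLP regressor of the sort that could meet such a resource budget, and later be synthesized for an FPGA with a tool such as hls4ml, might look like the sketch below (layer sizes and input/output meanings are illustrative assumptions).

```python
# Illustrative lightweight MLP mapping a gate description to pulse
# parameters; small enough to be a candidate for FPGA synthesis.
import torch.nn as nn

pulse_net = nn.Sequential(
    nn.Linear(4, 16), nn.ReLU(),   # assumed: 4-number gate description in
    nn.Linear(16, 16), nn.ReLU(),
    nn.Linear(16, 3),              # assumed: amplitude, frequency, duration out
)
```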
Unpaired image denoising has seen promising development over the past few years. Regardless of their performance, existing methods tend to rely heavily on underlying noise properties or assumptions that are not always practical. Alternatively, if the problem can be grounded from a structural perspective rather than noise statistics, a more robust solution can be achieved. With this motivation, we propose a self-supervised denoising scheme that is unpaired and relies on spatial degradation followed by a regularized refinement. Our method shows considerable improvement over previous methods and exhibits consistent performance across different data domains.
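The abstract's "spatial degradation" is not defined further; one well-known instantiation is to train on pixel-subsampled views of the same noisy image (Neighbor2Neighbor-style), sketched below with hypothetical names. The paper's regularized refinement stage is not shown.

```python
# Self-supervised denoising on spatial degradations: regress one
# interleaved pixel subsample of a noisy image from its neighbor.
import torch
import torch.nn.functional as F

def two_views(noisy):
    """noisy: (B, C, H, W) tensor with even H and W."""
    a = noisy[..., 0::2, 0::2]   # one interleaved subsample
    b = noisy[..., 1::2, 1::2]   # its spatial neighbor
    return a, b

def denoise_loss(model, noisy):
    a, b = two_views(noisy)
    return F.mse_loss(model(a), b)   # predict one view from the other
```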
Class activation maps (CAMs) help formulate saliency maps that aid in interpreting the predictions of deep neural networks. Gradient-based methods are generally faster than other branches of vision interpretability and independent of human guidance. The performance of CAM-like studies depends on the governing model's layer response and the influence of the gradients. Typical gradient-oriented CAM studies rely on weighted aggregation for saliency map estimation, projecting the gradient maps into single-weight values, which may lead to over-generalized saliency maps. To address this issue, we use a global guidance map to rectify the weighted aggregation operation during saliency estimation, so that the resulting interpretations are comparatively cleaner and instance-specific. We obtain the global guidance map by performing an element-wise multiplication between the feature maps and their corresponding gradient maps. To validate our study, we compare the proposed method against eight different saliency visualizers. In addition, we use seven commonly used evaluation metrics for quantitative comparison. The proposed scheme achieves significant improvement on test images from the ImageNet, MS-COCO 14, and PASCAL VOC 2012 datasets.
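The abstract is concrete about the guidance map (an element-wise product of feature maps and gradient maps); the sketch below shows one plausible way to fold it into a Grad-CAM-style aggregation, though the paper's exact rectification may differ.

```python
# Grad-CAM-style saliency with an element-wise global guidance map
# modulating the usual weighted sum; a sketch, not the paper's code.
import torch
import torch.nn.functional as F

def guided_cam(features, grads):
    """features, grads: (C, H, W) activations and gradients of one layer."""
    guidance = features * grads                       # element-wise guidance
    weights = grads.mean(dim=(1, 2))                  # standard Grad-CAM weights
    cam = (weights[:, None, None] * guidance).sum(dim=0)
    return F.relu(cam)                                # keep positive evidence
```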