Federated Learning (FL) is extensively used to train AI/ML models in distributed and privacy-preserving settings. Participating edge devices in FL systems typically hold non-independent and identically distributed~(Non-IID) private data and unevenly distributed computational resources. Preserving user data privacy while optimizing AI/ML models in a heterogeneous federated network requires us to address both data heterogeneity and system/resource heterogeneity. Hence, we propose \underline{R}esource-\underline{a}ware \underline{F}ederated \underline{L}earning~(RaFL) to address these challenges. RaFL allocates resource-aware models to edge devices using Neural Architecture Search~(NAS) and enables heterogeneous model architecture deployment via knowledge extraction and fusion. Integrating NAS into FL enables on-demand customized model deployment for resource-diverse edge devices. Furthermore, we propose a multi-model architecture fusion scheme that aggregates the distributed learning results. Results demonstrate RaFL's superior resource efficiency compared to SoTA approaches.
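To make the resource-aware allocation idea concrete, here is a minimal sketch of assigning each device the strongest candidate architecture that fits its compute budget. The candidate pool, FLOPs estimates, and proxy accuracy scores are illustrative assumptions; RaFL's actual NAS-driven allocation is more sophisticated.

```python
# Minimal sketch (not RaFL's actual algorithm): allocate the highest-scoring
# candidate architecture that fits each device's compute budget, as a
# stand-in for NAS-driven, resource-aware model assignment.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    flops: float      # estimated forward-pass FLOPs
    accuracy: float   # proxy score from a NAS search (assumed given)

def allocate(candidates, device_budgets):
    """Pick, per device, the highest-scoring architecture under budget."""
    assignments = {}
    for device, budget in device_budgets.items():
        feasible = [c for c in candidates if c.flops <= budget]
        assignments[device] = max(feasible, key=lambda c: c.accuracy) if feasible else None
    return assignments

pool = [Candidate("tiny", 1e7, 0.81), Candidate("small", 5e7, 0.86), Candidate("base", 2e8, 0.90)]
print(allocate(pool, {"phone": 6e7, "sensor": 2e7, "gateway": 3e8}))
```

In a real deployment the proxy scores would come from the NAS search itself rather than being fixed constants.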
With growing concerns over user data privacy, federated learning (FL) has been developed as a unique training paradigm for training machine learning models on edge devices without access to sensitive data. Traditional FL and existing methods directly employ an aggregation method on the cloud server and deploy the same model to all training edges. Although these methods protect data privacy, they cannot accommodate model heterogeneity, ignore heterogeneous computing power, and incur steep communication costs. In this paper, we pursue resource-aware FL that aggregates local knowledge extracted from edge models, rather than aggregating each local model's weights, and then distills the aggregate into robust global knowledge that serves as the server model through knowledge distillation. The local models and the global knowledge are distilled into a tiny knowledge network via deep mutual learning. Such knowledge extraction allows edge clients to deploy resource-aware models and perform multi-model knowledge fusion while maintaining communication efficiency and model heterogeneity. Empirical results show that our approach achieves significant improvements over existing FL algorithms in terms of communication cost and generalization performance under heterogeneous data and models. Our approach reduces the communication cost by up to 102$\times$ for VGG-11 and up to 30$\times$ for ResNet-32 when training ResNet-20 as the knowledge network.
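As a rough illustration of the deep mutual learning step described above (a sketch under assumed model shapes, not the authors' code), each edge model and a tiny shared knowledge network fit the labels while mimicking each other's softened predictions:

```python
# Hedged sketch of deep mutual learning between a heterogeneous edge model
# and a small shared "knowledge network"; only the knowledge network's
# weights would be communicated. Shapes and models are toy stand-ins.
import torch
import torch.nn.functional as F

def mutual_learning_step(edge_model, knowledge_net, x, y, opt_edge, opt_kn, T=3.0):
    logits_e, logits_k = edge_model(x), knowledge_net(x)
    # Temperature-scaled KL toward the other network's (detached) predictions.
    kl = lambda p, q: F.kl_div(F.log_softmax(p / T, dim=1),
                               F.softmax(q.detach() / T, dim=1),
                               reduction="batchmean") * (T * T)
    loss_e = F.cross_entropy(logits_e, y) + kl(logits_e, logits_k)
    loss_k = F.cross_entropy(logits_k, y) + kl(logits_k, logits_e)
    opt_edge.zero_grad(); opt_kn.zero_grad()
    (loss_e + loss_k).backward()   # cross terms are detached, so grads stay separate
    opt_edge.step(); opt_kn.step()
    return loss_e.item(), loss_k.item()

edge = torch.nn.Sequential(torch.nn.Linear(32, 64), torch.nn.ReLU(), torch.nn.Linear(64, 10))
kn = torch.nn.Sequential(torch.nn.Linear(32, 10))   # tiny knowledge network
oe, ok = torch.optim.SGD(edge.parameters(), lr=0.1), torch.optim.SGD(kn.parameters(), lr=0.1)
x, y = torch.randn(8, 32), torch.randint(0, 10, (8,))
print(mutual_learning_step(edge, kn, x, y, oe, ok))
```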
Efficient federated learning is one of the key challenges for training and deploying AI models on edge devices. However, maintaining data privacy in federated learning raises several challenges, including data heterogeneity, expensive communication costs, and limited resources. In this paper, we address the above issues by (a) introducing a salient parameter selection agent based on deep reinforcement learning on local clients and aggregating only the selected salient parameters on the central server, and (b) splitting a normal deep learning model~(e.g., CNNs) into a shared encoder and a local predictor, training the shared encoder through federated learning while transferring its knowledge to Non-IID clients via local customized predictors. The proposed method (a) significantly reduces the communication overhead of federated learning and accelerates model inference, while method (b) addresses the data heterogeneity problem in federated learning. Additionally, we leverage a gradient control mechanism to correct the gradient heterogeneity among clients, which makes the training process more stable and converge faster. Experiments show that our approach yields a stable training process and achieves notable results compared with state-of-the-art methods: it reduces the communication cost by up to 108 GB when training VGG-11, requires 7.6$\times$ less communication overhead when training ResNet-20, and accelerates local inference by reducing up to 39.7\% of the FLOPs of VGG-11.
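A minimal sketch of the sparse-communication pattern follows. The paper selects salient parameters with a deep RL agent; here a simple top-k magnitude heuristic stands in for that agent so the upload/apply mechanics are visible.

```python
# Hedged sketch: communicate only the most "salient" parameter updates.
# Top-k magnitude selection stands in for the paper's DRL agent.
import torch

def select_salient_update(old_state, new_state, keep_ratio=0.1):
    """Return {name: (indices, values)} covering the largest weight changes."""
    payload = {}
    for name, old in old_state.items():
        delta = (new_state[name] - old).flatten()
        k = max(1, int(keep_ratio * delta.numel()))
        idx = delta.abs().topk(k).indices
        payload[name] = (idx, delta[idx])
    return payload

def apply_update(state, payload):
    for name, (idx, vals) in payload.items():
        flat = state[name].flatten()
        flat[idx] += vals            # server applies the sparse delta
        state[name] = flat.view_as(state[name])

model = torch.nn.Linear(4, 2)
old = {k: v.detach().clone() for k, v in model.state_dict().items()}
with torch.no_grad():
    model.weight.add_(torch.randn_like(model.weight))  # stand-in for local training
payload = select_salient_update(old, model.state_dict(), keep_ratio=0.25)
apply_update(old, payload)                             # server-side reconstruction
print({k: v[0].numel() for k, v in payload.items()})   # entries uploaded per tensor
```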
Model compression is an essential technique for deploying deep neural networks (DNNs) on power- and memory-constrained resources. However, existing model compression methods often rely on human expertise and focus on parameters' local importance, ignoring the rich topology information within DNNs. In this paper, we propose a novel multi-stage embedding technique based on graph neural networks (GNNs) to identify DNN topology and use reinforcement learning (RL) to find a suitable compression policy. We perform resource-constrained (i.e., FLOPs) channel pruning and compare our approach with state-of-the-art model compression methods. We evaluate our method on various models, from typical to mobile-friendly networks, such as the ResNet family, VGG-16, MobileNet-V1/V2, and ShuffleNet. Results show that our method achieves higher compression ratios with minimal fine-tuning cost while yielding outstanding and competitive performance.
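For intuition, the sketch below applies a per-layer channel-pruning ratio (the kind of action an RL agent could emit) by L1-norm filter ranking and reports a FLOPs-style cost before and after. It illustrates the resource constraint only, not the GNN embedding or the RL search; a full pipeline would also shrink the next layer's input channels.

```python
# Hedged sketch of FLOPs-constrained channel pruning for a single conv layer.
import torch
import torch.nn as nn

def prune_conv_channels(conv: nn.Conv2d, ratio: float) -> nn.Conv2d:
    keep = max(1, int(conv.out_channels * (1 - ratio)))
    scores = conv.weight.detach().abs().sum(dim=(1, 2, 3))  # L1 norm per filter
    idx = scores.topk(keep).indices.sort().values
    pruned = nn.Conv2d(conv.in_channels, keep, conv.kernel_size,
                       conv.stride, conv.padding, bias=conv.bias is not None)
    with torch.no_grad():
        pruned.weight.copy_(conv.weight[idx])
        if conv.bias is not None:
            pruned.bias.copy_(conv.bias[idx])
    return pruned  # NOTE: the following layer's in_channels must be pruned to match

def conv_flops(conv: nn.Conv2d, h: int, w: int) -> int:
    kh, kw = conv.kernel_size
    return conv.out_channels * conv.in_channels * kh * kw * h * w

conv = nn.Conv2d(64, 128, 3, padding=1)
print(conv_flops(conv, 32, 32), conv_flops(prune_conv_channels(conv, 0.5), 32, 32))
```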
Model compression aims to deploy deep neural networks (DNNs) on mobile devices with limited computing and storage resources. However, most existing model compression methods rely on manually defined rules that require domain expertise. DNNs are essentially computational graphs that contain rich structural information. In this paper, we aim to find suitable compression policies from DNNs' structural information. We propose an Automatic Graph encoder-decoder Model Compression (AGMC) method that combines graph neural networks (GNNs) and reinforcement learning (RL). We model the target DNN as a graph and use a GNN to learn the DNN's embeddings automatically. We compare our method with rule-based DNN embedding model compression methods to show the effectiveness of our approach. Results show that our learning-based DNN embedding achieves better performance and higher compression ratios with fewer search steps. We evaluate our method on over-parameterized and mobile-friendly DNNs and compare it with handcrafted and learning-based model compression methods. On over-parameterized DNNs such as ResNet-56, our method outperforms handcrafted and learning-based methods by 3.36\% and 2.56\% higher accuracy, respectively. Moreover, on MobileNet-V2, we achieve a higher compression ratio than state-of-the-art methods with only 0.93\% accuracy loss.
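The "DNN as a graph" idea can be sketched as follows: each layer becomes a node with simple size descriptors, and sequential data flow becomes the edge list a GNN encoder would consume. The feature choices here are illustrative assumptions, not AGMC's actual encoding.

```python
# Hedged sketch: turn a sequential model into node features and an edge list.
import torch
import torch.nn as nn

def dnn_to_graph(model: nn.Sequential):
    feats, edges = [], []
    for i, layer in enumerate(model):
        if isinstance(layer, nn.Conv2d):
            f = [layer.in_channels, layer.out_channels, layer.kernel_size[0]]
        elif isinstance(layer, nn.Linear):
            f = [layer.in_features, layer.out_features, 1]
        else:
            f = [0, 0, 0]   # activations / pooling carry no size parameters here
        feats.append(f)
        if i > 0:
            edges.append((i - 1, i))   # sequential data flow
    x = torch.tensor(feats, dtype=torch.float)
    edge_index = torch.tensor(edges, dtype=torch.long).t()
    return x, edge_index

net = nn.Sequential(nn.Conv2d(3, 16, 3), nn.ReLU(), nn.Conv2d(16, 32, 3),
                    nn.Flatten(), nn.Linear(32 * 28 * 28, 10))
x, edge_index = dnn_to_graph(net)
print(x.shape, edge_index.shape)   # node features and COO edge list
```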
Automatically generating medical reports from X-ray images can help radiologists perform the time-consuming yet important reporting task. However, achieving clinically accurate generated reports remains challenging. Modeling the underlying abnormalities using knowledge graph approaches has been found promising for improving clinical accuracy. In this paper, we introduce a novel fine-grained knowledge graph structure called an attributed abnormality graph (ATAG). The ATAG consists of interconnected abnormality nodes and attribute nodes, allowing it to better capture abnormality details. In contrast to existing methods that construct abnormality graphs manually, we propose a method to automatically construct the fine-grained graph structure based on annotations, the medical reports in X-ray datasets, and the RadLex radiology lexicon. We then learn the ATAG embeddings using a deep model with an encoder-decoder architecture for report generation. In particular, graph attention networks are explored to encode the relationships among the abnormalities and their attributes. A gating mechanism is adopted and integrated with various decoders. We conduct extensive experiments on benchmark datasets and show that the ATAG-based deep model outperforms SOTA methods and can improve the clinical accuracy of the generated reports.
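To illustrate the graph-attention encoding at the heart of the method, here is a minimal single-head GAT-style layer in plain PyTorch operating on a toy abnormality/attribute adjacency matrix; ATAG's real model uses richer node features and decoder integration.

```python
# Hedged sketch of attending over an abnormality-attribute graph.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyGATLayer(nn.Module):
    def __init__(self, dim_in, dim_out):
        super().__init__()
        self.proj = nn.Linear(dim_in, dim_out, bias=False)
        self.attn = nn.Linear(2 * dim_out, 1, bias=False)

    def forward(self, x, adj):
        h = self.proj(x)                                  # (N, dim_out)
        n = h.size(0)
        pairs = torch.cat([h.unsqueeze(1).expand(n, n, -1),
                           h.unsqueeze(0).expand(n, n, -1)], dim=-1)
        e = F.leaky_relu(self.attn(pairs).squeeze(-1))    # (N, N) raw scores
        e = e.masked_fill(adj == 0, float("-inf"))        # attend along edges only
        return torch.relu(F.softmax(e, dim=-1) @ h)

# 3 abnormality nodes + 2 attribute nodes; self-loops on the diagonal
adj = torch.tensor([[1,1,0,1,0],[1,1,1,0,1],[0,1,1,0,0],[1,0,0,1,0],[0,1,0,0,1]])
x = torch.randn(5, 16)
print(TinyGATLayer(16, 8)(x, adj).shape)
```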
Text style transfer (TST) aims to change the underlying style of the source text to another specific style while keeping the same content. Due to the scarcity of high-quality parallel training data, unsupervised learning has become a trending direction for TST tasks. In this paper, we propose a novel VAE-based Text Style Transfer with pivOt Words Enhancement leaRning (VT-STOWER) method, which utilizes a variational autoencoder (VAE) and external style embeddings to learn the semantic and style distributions jointly. Additionally, we introduce pivot words learning, which is applied to learn the decisive words for a specific style and thereby further improves the overall performance of style transfer. The proposed VT-STOWER can be scaled to different TST scenarios given very limited and non-parallel training data, thanks to a novel and flexible style strength control mechanism. Experiments demonstrate that VT-STOWER outperforms the state of the art on sentiment, formality, and code-switching TST tasks.
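A compact sketch of the style-strength idea: add a scaled external style embedding to the VAE latent before decoding. The additive formulation and all module shapes are assumptions for illustration; VT-STOWER's exact mechanism may differ.

```python
# Hedged sketch of a VAE with external style embeddings and strength control.
import torch
import torch.nn as nn

class TinyStyleVAE(nn.Module):
    def __init__(self, vocab=1000, dim=64, n_styles=2):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)
        self.enc = nn.GRU(dim, dim, batch_first=True)
        self.mu, self.logvar = nn.Linear(dim, dim), nn.Linear(dim, dim)
        self.style = nn.Embedding(n_styles, dim)    # external style embeddings
        self.dec = nn.GRU(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, vocab)

    def forward(self, tokens, style_id, strength=1.0):
        _, h = self.enc(self.emb(tokens))
        mu, logvar = self.mu(h[-1]), self.logvar(h[-1])
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization
        z = z + strength * self.style(style_id)               # style strength control
        y, _ = self.dec(self.emb(tokens), z.unsqueeze(0))
        return self.out(y), mu, logvar

m = TinyStyleVAE()
tokens = torch.randint(0, 1000, (2, 7))
logits, mu, logvar = m(tokens, torch.tensor([0, 1]), strength=1.5)
print(logits.shape)
```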
Unsupervised domain adaptation (UDA) for semantic segmentation is a promising task freeing people from heavy annotation work. However, domain discrepancies in low-level image statistics and high-level contexts compromise the segmentation performance over the target domain. A key idea to tackle this problem is to perform both image-level and feature-level adaptation jointly. Unfortunately, there is a lack of such unified approaches for UDA tasks in the existing literature. This paper proposes a novel UDA pipeline for semantic segmentation that unifies image-level and feature-level adaptation. Concretely, for image-level domain shifts, we propose a global photometric alignment module and a global texture alignment module that align images in the source and target domains in terms of image-level properties. For feature-level domain shifts, we perform global manifold alignment by projecting pixel features from both domains onto the feature manifold of the source domain; and we further regularize category centers in the source domain through a category-oriented triplet loss and perform target domain consistency regularization over augmented target domain images. Experimental results demonstrate that our pipeline significantly outperforms previous methods. In the commonly tested GTA5$\rightarrow$Cityscapes task, our proposed method using Deeplab V3+ as the backbone surpasses previous SOTA by 8%, achieving 58.2% in mIoU.
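As a toy version of the image-level adaptation, per-channel mean/std matching shifts target-domain image statistics toward the source domain; the paper's global photometric alignment module is learned and more sophisticated, but the sketch shows the intent.

```python
# Hedged sketch of image-level alignment via per-channel statistic matching.
import torch

def photometric_align(target_img, source_img, eps=1e-6):
    """Shift target image statistics toward the source domain, per channel."""
    t_mu = target_img.mean(dim=(1, 2), keepdim=True)
    t_sd = target_img.std(dim=(1, 2), keepdim=True)
    s_mu = source_img.mean(dim=(1, 2), keepdim=True)
    s_sd = source_img.std(dim=(1, 2), keepdim=True)
    return (target_img - t_mu) / (t_sd + eps) * s_sd + s_mu

src, tgt = torch.rand(3, 64, 64), torch.rand(3, 64, 64)
aligned = photometric_align(tgt, src)
print(aligned.mean(dim=(1, 2)), src.mean(dim=(1, 2)))   # now closely matched
```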
Different people speak with diverse personalized speaking styles. Although existing one-shot talking head methods have made significant progress in lip sync, natural facial expressions, and stable head motions, they still cannot generate diverse speaking styles in the final talking head videos. To tackle this problem, we propose a one-shot style-controllable talking face generation framework. In a nutshell, we aim to attain a speaking style from an arbitrary reference speaking video and then drive the one-shot portrait to speak with the reference speaking style and another piece of audio. Specifically, we first develop a style encoder to extract dynamic facial motion patterns of a style reference video and then encode them into a style code. Afterward, we introduce a style-controllable decoder to synthesize stylized facial animations from the speech content and style code. In order to integrate the reference speaking style into generated videos, we design a style-aware adaptive transformer, which enables the encoded style code to adjust the weights of the feed-forward layers accordingly. Thanks to the style-aware adaptation mechanism, the reference speaking style can be better embedded into synthesized videos during decoding. Extensive experiments demonstrate that our method is capable of generating talking head videos with diverse speaking styles from only one portrait image and an audio clip while achieving authentic visual effects. Project Page: https://github.com/FuxiVirtualHuman/styletalk.
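One plausible reading of the style-aware adaptive feed-forward layer is sketched below: the style code scores K candidate weight sets, and their softmax-weighted mixture becomes the layer's effective weights. K, the shapes, and the mixing scheme are assumptions for illustration, not the paper's exact module.

```python
# Hedged sketch of a feed-forward layer whose weights are set by a style code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class StyleAdaptiveFFN(nn.Module):
    def __init__(self, dim, hidden, style_dim, k=4):
        super().__init__()
        self.w1 = nn.Parameter(torch.randn(k, hidden, dim) * 0.02)
        self.w2 = nn.Parameter(torch.randn(k, dim, hidden) * 0.02)
        self.score = nn.Linear(style_dim, k)

    def forward(self, x, style_code):
        a = F.softmax(self.score(style_code), dim=-1)     # (B, K) mixture weights
        w1 = torch.einsum("bk,khd->bhd", a, self.w1)      # per-sample weight sets
        w2 = torch.einsum("bk,kdh->bdh", a, self.w2)
        h = F.relu(torch.einsum("btd,bhd->bth", x, w1))
        return torch.einsum("bth,bdh->btd", h, w2)

ffn = StyleAdaptiveFFN(dim=32, hidden=64, style_dim=16)
print(ffn(torch.randn(2, 10, 32), torch.randn(2, 16)).shape)   # (2, 10, 32)
```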
Witnessing the impressive achievements of pre-training techniques on large-scale data in the field of computer vision and natural language processing, we wonder whether this idea could be adapted in a grab-and-go spirit, and mitigate the sample inefficiency problem for visuomotor driving. Given the highly dynamic and varying nature of the input, the visuomotor driving task inherently lacks view and translation invariance, and the visual input contains massive amounts of information irrelevant to decision making, which renders predominant pre-training approaches from general vision less suitable for the autonomous driving task. To this end, we propose PPGeo (Policy Pre-training via Geometric modeling), an intuitive and straightforward fully self-supervised framework curated for the policy pretraining in visuomotor driving. We aim at learning policy representations as a powerful abstraction by modeling 3D geometric scenes on large-scale unlabeled and uncalibrated YouTube driving videos. The proposed PPGeo is performed in two stages to support effective self-supervised training. In the first stage, the geometric modeling framework generates pose and depth predictions simultaneously, with two consecutive frames as input. In the second stage, the visual encoder learns driving policy representation by predicting the future ego-motion and optimizing with the photometric error based on current visual observation only. As such, the pre-trained visual encoder is equipped with rich driving policy related representations and thereby competent for multiple visuomotor driving tasks. Extensive experiments covering a wide span of challenging scenarios have demonstrated the superiority of our proposed approach, where improvements range from 2% to even over 100% with very limited data. Code and models will be available at https://github.com/OpenDriveLab/PPGeo.
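The second-stage objective can be sketched as a standard self-supervised photometric error: back-project the target pixels with predicted depth, move them by the predicted ego-motion, reproject into the source frame, and compare the warped image with the target. The pinhole intrinsics, shapes, and plain L1 penalty below are illustrative assumptions, not PPGeo's implementation.

```python
# Hedged sketch of a photometric reconstruction loss from depth and pose.
import torch
import torch.nn.functional as F

def inverse_warp(src, depth, pose, K):
    """Reconstruct the target view from `src` given target depth and a 4x4 pose."""
    b, _, h, w = src.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    pix = torch.stack([xs, ys, torch.ones_like(xs)], 0).float().view(1, 3, -1)
    cam = torch.linalg.inv(K) @ pix * depth.view(b, 1, -1)   # back-project
    cam = pose[:, :3, :3] @ cam + pose[:, :3, 3:]            # move to source frame
    proj = K @ cam
    uv = proj[:, :2] / proj[:, 2:].clamp(min=1e-6)
    uv = uv.view(b, 2, h, w).permute(0, 2, 3, 1)
    grid = torch.stack([uv[..., 0] / (w - 1) * 2 - 1,
                        uv[..., 1] / (h - 1) * 2 - 1], dim=-1)
    return F.grid_sample(src, grid, align_corners=True)

def photometric_loss(target, src, depth, pose, K):
    return (target - inverse_warp(src, depth, pose, K)).abs().mean()

K = torch.tensor([[[50., 0., 32.], [0., 50., 32.], [0., 0., 1.]]])
src, tgt = torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64)
depth, pose = torch.ones(1, 1, 64, 64), torch.eye(4).unsqueeze(0)
print(photometric_loss(tgt, src, depth, pose, K).item())
```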