Federated learning (FL) has emerged as a solution to the risk of privacy leakage in machine learning training. This approach allows a variety of mobile devices to collaboratively train a machine learning model without sharing the raw on-device training data with the cloud. However, efficient edge deployment of FL is challenging because of system/data heterogeneity and runtime variance. This paper optimizes the energy efficiency of FL use cases while guaranteeing model convergence, by accounting for the aforementioned challenges. We propose FedGPO, a reinforcement learning-based approach that learns to identify optimal global parameters (B, E, K) for each FL aggregation round, adapting to the system/data heterogeneity and stochastic runtime variance. In our experiments, FedGPO improves model convergence time by 2.4x and achieves 3.6x higher energy efficiency over the baseline settings.
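The per-round parameter search described above can be framed as a small reinforcement-learning loop. The sketch below is a minimal epsilon-greedy Q-learning agent choosing among hypothetical (B, E, K) candidates; the abstract does not specify FedGPO's actual state encoding, action space, or reward, so every concrete value here is an illustrative assumption.

```python
import random
from collections import defaultdict

# Hypothetical candidate global parameters: local batch size B, local
# epochs E, number of participating clients K. The real action space in
# FedGPO is not given in the abstract; these values are placeholders.
ACTIONS = [(B, E, K) for B in (16, 32) for E in (1, 5) for K in (5, 10)]

class GlobalParamAgent:
    """Epsilon-greedy Q-learning over (B, E, K) choices per aggregation round."""

    def __init__(self, epsilon=0.2, alpha=0.5, gamma=0.9):
        self.q = defaultdict(float)  # (state, action) -> estimated value
        self.epsilon, self.alpha, self.gamma = epsilon, alpha, gamma

    def act(self, state):
        # Explore with probability epsilon, otherwise pick the best-known action.
        if random.random() < self.epsilon:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state):
        # Standard one-step Q-learning update.
        best_next = max(self.q[(next_state, a)] for a in ACTIONS)
        td_target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (td_target - self.q[(state, action)])
```

A reward that combines accuracy gain per unit of energy spent in the round would steer such an agent toward energy-efficient settings while still rewarding convergence progress.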
We present a robust, privacy-preserving visual localization algorithm using event cameras. While event cameras can potentially enable robust localization due to their high dynamic range and low motion blur, the sensors exhibit large domain gaps that make it difficult to directly apply conventional image-based localization algorithms. To mitigate the gap, we propose applying event-to-image conversion prior to localization, which leads to stable localization. From the privacy perspective, event cameras capture only a fraction of the visual information captured by normal cameras and thus can naturally hide sensitive visual details. To further enhance privacy protection in our event-based pipeline, we introduce protection at two levels, namely the sensor and network levels. Sensor-level protection aims to hide facial details with lightweight filtering, while network-level protection targets hiding the entire user's view in private-scene applications using a novel neural network inference pipeline. Both levels of protection involve lightweight computation and incur only a small performance loss. We thus expect our method to serve as a building block for practical location-based services using event cameras. The code and dataset will be made public through the following link: https://github.com/82magnolia/event_localization.
Recent 3D generative models have achieved remarkable performance in synthesizing high-resolution photorealistic images with view consistency and detailed 3D shapes, but training them for diverse domains is challenging because it requires massive training images and their camera distribution information. Text-guided domain adaptation methods have shown impressive performance in converting a 2D generative model on one domain into models on other domains with different styles by leveraging CLIP (Contrastive Language-Image Pre-training), rather than collecting massive datasets for those domains. However, one of their drawbacks is that the sample diversity of the original generative model is not well preserved in the domain-adapted generative models due to the deterministic nature of the CLIP text encoder. Text-guided domain adaptation is even more challenging for 3D generative models, not only because of catastrophic diversity loss but also because of inferior text-image correspondence and poor image quality. Here we propose DATID-3D, a domain adaptation method tailored for 3D generative models using text-to-image diffusion models that can synthesize diverse images per text prompt without collecting additional images or camera information for the target domain. Unlike 3D extensions of prior text-guided domain adaptation methods, our novel pipeline fine-tunes the state-of-the-art 3D generator of the source domain to synthesize high-resolution, multi-view-consistent images in text-guided target domains without additional data, outperforming existing text-guided domain adaptation methods in diversity and text-image correspondence. Furthermore, we propose and demonstrate diverse 3D image manipulations, such as one-shot instance-selected adaptation and single-view manipulated 3D reconstruction, to fully enjoy the diversity afforded by text.
We propose a domain adaptation method, MoDA, which adapts a pretrained embodied agent to a new, noisy environment without ground-truth supervision. Map-based memory provides important contextual information for visual navigation and exhibits a distinctive spatial structure mainly composed of flat walls and rectangular obstacles. Our adaptation approach leverages the inherent regularities of the estimated maps to guide the agent in overcoming the prevalent domain discrepancy in a novel environment. Specifically, we propose an efficient learning curriculum that handles the visual and dynamics corruptions in an online manner, self-supervised with pseudo-clean maps generated by style-transfer networks. Because the map-based representation provides spatial knowledge for the agent's policy, our formulation can deploy pretrained policy networks from simulators in a new setting. We evaluate MoDA in various practical scenarios and show that our proposed method quickly enhances the agent's performance in downstream tasks including localization, mapping, exploration, and point-goal navigation.
Image super-resolution is a common task on mobile and IoT devices, where one often needs to upscale and enhance low-resolution images and video frames. While numerous solutions have been proposed for this problem in the past, they are usually not compatible with low-power mobile NPUs, which have tight computational and memory constraints. In this Mobile AI challenge, we address this problem and task the participants with designing an efficient quantized image super-resolution solution that can demonstrate real-time performance on mobile NPUs. The participants were provided with the DIV2K dataset and trained INT8 models to perform high-quality 3X image upscaling. The runtime of all models was evaluated on the Synaptics VS680 Smart Home board with a dedicated edge NPU capable of accelerating quantized neural networks. All proposed solutions are fully compatible with the above NPU, demonstrating rates of up to 60 FPS when reconstructing Full HD resolution images. A detailed description of all models developed in the challenge is provided in this paper.
Denoising of magnetic resonance images is beneficial for improving the quality of low signal-to-noise-ratio images. Recently, denoising using deep neural networks has shown promising results. However, most of these networks rely on supervised learning, which requires large numbers of paired noisy and clean training images. Obtaining training images, particularly clean images, is expensive and time-consuming. Hence, methods such as Noise2Noise (N2N), which require only paired noisy images, have been developed to reduce the burden of acquiring training datasets. In this study, we propose a new self-supervised denoising method, Coil2Coil (C2C), that does not require the acquisition of clean images or paired noisy images for training. Instead, the method exploits multi-channel data from phased-array coils to generate training images. First, it divides the multi-channel coil images into two images, one for the input and the other for the label. They are then processed to impose noise independence and sensitivity normalization so that they can be used as training images for N2N. For inference, the method takes a coil-combined image (e.g., a DICOM image) as input, enabling broad application of the method. When evaluated on images with synthetically added noise, C2C showed the best performance among several self-supervised methods, reporting results comparable to those of supervised methods. When tested on DICOM images, C2C successfully denoised real noise without showing structure-dependent residuals in the error maps. Because of the significant advantage of not requiring additional scans for clean or paired images, the method can easily be applied to various clinical settings.
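The coil-splitting step can be pictured as follows. This is a minimal sketch assuming complex coil images of shape (C, H, W) and a plain sum-of-channels combination; the actual C2C pipeline additionally applies noise decorrelation and sensitivity normalization before the two halves are used as an N2N training pair.

```python
import numpy as np

def coil_split(coil_imgs, rng=None):
    """Split multi-channel coil images into two noise-independent images.

    coil_imgs: complex array of shape (C, H, W), one image per coil.
    Returns (input_img, label_img), each combined from half of the
    channels, usable as a noisy pair for Noise2Noise-style training.
    Simplified sketch: a plain sum over channels stands in for C2C's
    sensitivity-weighted combination.
    """
    rng = np.random.default_rng(rng)
    C = coil_imgs.shape[0]
    perm = rng.permutation(C)  # random split keeps the halves unbiased
    half_a, half_b = perm[: C // 2], perm[C // 2 :]
    img_a = coil_imgs[half_a].sum(axis=0)
    img_b = coil_imgs[half_b].sum(axis=0)
    return img_a, img_b
```

Because thermal noise is (approximately) independent across coil channels, the two combined images share the same anatomy but carry independent noise, which is exactly the condition N2N training needs.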
Multilingual neural machine translation with transformer models has shown great success. Deploying these models is challenging because they usually require a large vocabulary size to cover various languages, which limits the speed of predicting output tokens in the last vocabulary projection layer. To alleviate these challenges, this paper proposes a fast vocabulary projection method via clustering, which can be used for multilingual transformers on GPUs. First, we offline split the vocabulary search space into disjoint clusters given the hidden context vector of the decoder output, which results in a much smaller vocabulary column for the vocabulary projection. Second, at inference time, the proposed method predicts the cluster and the candidate tokens for the hidden context vector in the vocabulary projection. This paper also includes an analysis of different ways of constructing these clusters in a multilingual setting. Our results show end-to-end speedups of up to 25% in float16 GPU inference while maintaining the BLEU score with only a slight increase in memory cost. The proposed method speeds up the vocabulary projection step itself by up to 2.6x. We also conduct an extensive human evaluation to verify that the proposed method preserves the translation quality of the original model.
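The two stages can be sketched as follows, assuming a plain k-means offline step over output-embedding rows and a nearest-centroid cluster prediction at inference; the paper's actual cluster construction and candidate selection may differ, so this only illustrates the idea of restricting the projection to one cluster's tokens.

```python
import numpy as np

def kmeans(X, k, iters=10, rng=0):
    """Plain Lloyd's k-means over output-embedding rows (offline step)."""
    rng = np.random.default_rng(rng)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        assign = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if (assign == j).any():
                centers[j] = X[assign == j].mean(axis=0)
    return centers, assign

def clustered_projection(h, W, centers, assign):
    """Score only the tokens in the cluster nearest to hidden state h.

    h: (d,) decoder hidden state; W: (V, d) output embedding matrix.
    Returns the arg-max token id among the candidate cluster's tokens.
    Simplification: one nearest cluster by Euclidean distance; the real
    method may keep several candidate clusters.
    """
    c = np.argmin(((centers - h) ** 2).sum(-1))
    cand = np.flatnonzero(assign == c)
    return cand[np.argmax(W[cand] @ h)]
```

The speedup comes from the matrix-vector product touching only |cluster| rows of W instead of all V rows, at the cost of a small risk of missing the global arg-max when the cluster prediction is wrong.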
Multiplayer Online Battle Arena (MOBA) is one of the most successful game genres. MOBA games such as League of Legends have competitive environments in which players compete for their rank. In most MOBA games, a player's rank is determined by the match result (win or loss). This seems natural given the nature of teamwork, but in a sense it is unfair, because a player who made a great effort still loses rating when the team loses. To reduce the side effects of team-based ranking systems and evaluate a player's performance fairly, we propose a novel embedding model that converts a player's actions into quantitative scores based on each action's contribution to the team's victory. Our model is built with a sequence-based deep learning model and a novel loss function that operates on team matches. The sequence-based deep learning model processes the sequence of actions from the beginning to the end of a team match, using GRU units that selectively adopt hidden states from the previous step and the current input. The loss function is designed to help the action scores reflect the team's final result and success. We show that our model can fairly evaluate a player's individual performance and analyze the contribution of each of the player's actions.
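The GRU-based sequence processing mentioned above can be sketched with a minimal cell: the update gate selectively mixes the previous hidden state with a candidate computed from the current action input. The weights below are random placeholders; the described model would additionally attach a scoring head over the hidden state and a team-outcome loss, neither of which is specified in the abstract.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GRUCell:
    """Minimal GRU cell over per-step action features.

    The update gate z mixes the previous hidden state with the candidate
    state from the current input, i.e. the "selective adoption" the
    abstract describes. Weights are random for illustration only.
    """

    def __init__(self, d_in, d_h, rng=0):
        rng = np.random.default_rng(rng)
        s = 0.1
        self.Wz = rng.normal(0, s, (d_h, d_in + d_h))  # update gate
        self.Wr = rng.normal(0, s, (d_h, d_in + d_h))  # reset gate
        self.Wh = rng.normal(0, s, (d_h, d_in + d_h))  # candidate state

    def step(self, x, h):
        xh = np.concatenate([x, h])
        z = sigmoid(self.Wz @ xh)
        r = sigmoid(self.Wr @ xh)
        h_cand = np.tanh(self.Wh @ np.concatenate([x, r * h]))
        return (1 - z) * h + z * h_cand
```

Running the cell over a match's action sequence yields one hidden state per action, from which per-action contribution scores could be read out by a hypothetical scoring head.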
We present a new transformer model for the task of unsupervised learning of skeleton motion sequences. Existing transformer models for unsupervised skeleton-based action learning are trained on the instantaneous velocity of each joint between adjacent frames, without global motion information. Thus, the models have difficulty learning attention over whole-body motion and over temporally distant joints. In addition, person-to-person interactions have not been considered in these models. To handle the learning of whole-body motion, long-range temporal dynamics, and person-to-person interactions, we design a global and local attention mechanism in which global body motions and local joint motions attend to each other. Furthermore, we propose a novel pretraining strategy, multi-interval pose displacement prediction, to learn both global and local attention over different time ranges. The proposed model successfully learns the local dynamics of the joints and captures global context from the motion sequences. Our model outperforms state-of-the-art models by notable margins on representative benchmarks. The code is available at https://github.com/boeun-kim/gl-transformer.
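The multi-interval pretraining targets can be illustrated with a small helper: for each interval k, the target at frame t is the pose displacement over k frames, so short intervals emphasize local joint dynamics and long intervals emphasize global motion. The exact target definition in the proposed model may differ; this only sketches the multi-interval idea.

```python
import numpy as np

def displacement_targets(poses, intervals=(1, 5, 10)):
    """Build pose-displacement prediction targets at multiple intervals.

    poses: (T, J, 3) array of J joint coordinates over T frames.
    Returns a dict mapping each interval k to an array of shape
    (T - k, J, 3), where entry t is poses[t + k] - poses[t].
    """
    T = poses.shape[0]
    targets = {}
    for k in intervals:
        targets[k] = poses[k:] - poses[: T - k]
    return targets
```

During pretraining, one prediction head per interval would regress these displacements, forcing the encoder to retain both fine-grained and long-range motion information.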
We present CPO, a fast and robust algorithm that localizes a 2D panorama with respect to a 3D point cloud of a scene possibly containing changes. To robustly handle scene changes, our method deviates from conventional feature-point matching and focuses instead on the spatial context provided by panorama images. Specifically, we propose efficient color histogram generation and subsequent robust localization using score maps. By exploiting the unique equivariance of spherical projections, we propose very fast color histogram generation for a large number of candidate camera poses without explicitly rendering images for all candidates. We accumulate the regional consistency of the panorama and the point cloud into 2D/3D score maps and use them to weight the input color values, further increasing robustness. The weighted color distribution quickly finds good initial poses and achieves stable convergence for gradient-based optimization. CPO is lightweight and achieves effective localization in all tested scenarios, showing stable performance despite scene changes, repetitive structures, or featureless regions, which are typical challenges for visual localization with perspective cameras.
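The equivariance trick can be made concrete for yaw: rotating a spherical panorama about the vertical axis is a horizontal shift of its equirectangular image, so per-column histograms can be rolled to score every yaw candidate without re-rendering. The sketch below uses a single quantized color channel and a plain histogram-intersection score as a simplified stand-in for CPO's 2D/3D score maps; the reference histogram is assumed to come from point-cloud colors.

```python
import numpy as np

def column_histograms(pano, bins=8):
    """Per-column histograms of a quantized color channel, shape (W, bins)."""
    H, W = pano.shape
    q = np.clip((pano * bins).astype(int), 0, bins - 1)
    hists = np.zeros((W, bins))
    for x in range(W):
        hists[x] = np.bincount(q[:, x], minlength=bins)
    return hists

def yaw_scores(pano_hists, ref_hist, window):
    """Histogram-intersection score of a `window`-column view at every yaw.

    Rolling the column histograms emulates yaw rotation of the spherical
    panorama, so all candidate yaws are scored without re-rendering.
    """
    W = pano_hists.shape[0]
    scores = np.empty(W)
    for yaw in range(W):
        view = np.roll(pano_hists, -yaw, axis=0)[:window].sum(axis=0)
        scores[yaw] = np.minimum(view, ref_hist).sum()
    return scores
```

Precomputing the per-column histograms once makes the per-candidate cost a roll plus a sum, which is why scoring a dense grid of poses remains cheap.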