As edge devices grow increasingly capable, data analytics is gradually shifting from a centralized paradigm to a decentralized one, in which edge computing resources are exploited to process more data locally. This analytics paradigm is known as federated data analytics (FDA). Despite FDA's recent successes, most of the literature focuses on deep neural networks. In this work, we take a step back and develop an FDA treatment for one of the most fundamental statistical models: linear regression. Our treatment builds on hierarchical modeling, which allows multiple groups to borrow strength from one another. To this end, we propose two federated hierarchical model structures that provide a shared representation across devices to facilitate information sharing. Notably, our proposed frameworks are capable of providing uncertainty quantification, variable selection, hypothesis testing, and fast adaptation to new, unseen data. We validate our methods on a range of real-world applications, including condition monitoring of aircraft engines. The results show that our FDA treatment of linear models can serve as a competitive benchmark for the future development of federated algorithms.
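As a rough illustration of the borrowing-of-strength idea (not the authors' actual hierarchical models), the sketch below shrinks each client's least-squares fit toward a shared global coefficient and averages the local estimates server-side; all function names, the shrinkage penalty, and the synthetic data are illustrative assumptions.

```python
import numpy as np

def federated_hierarchical_lr(datasets, lam=1.0, rounds=20):
    """Toy sketch: each client shrinks its least-squares fit toward a
    shared global coefficient; the server averages the local estimates."""
    p = datasets[0][0].shape[1]
    global_beta = np.zeros(p)
    for _ in range(rounds):
        local = []
        for X, y in datasets:  # each client solves a ridge-like problem
            A = X.T @ X + lam * np.eye(p)
            b = X.T @ y + lam * global_beta
            local.append(np.linalg.solve(A, b))
        global_beta = np.mean(local, axis=0)  # server-side aggregation
    return global_beta, local

rng = np.random.default_rng(0)
true_beta = np.array([2.0, -1.0])
clients = []
for _ in range(5):  # heterogeneous clients: slightly perturbed coefficients
    X = rng.normal(size=(50, 2))
    y = X @ (true_beta + 0.1 * rng.normal(size=2)) + 0.05 * rng.normal(size=50)
    clients.append((X, y))

global_beta, local_betas = federated_hierarchical_lr(clients)
```

Each local estimate stays close to its own data while the shared mean pools information across clients, which is the borrowing-of-strength effect the abstract describes.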
In this paper, we propose \texttt{FGPR}: a federated Gaussian process ($\mathcal{GP}$) regression framework that uses an averaging strategy for model aggregation and stochastic gradient descent for local client computations. Notably, the resulting global model excels in personalization, as \texttt{FGPR} jointly learns a global $\mathcal{GP}$ prior across all clients. The predictive posterior is then obtained by exploiting this prior and conditioning on local data, which encodes personalized features of a specific client. Theoretically, we show that \texttt{FGPR} converges to a critical point of the full log-likelihood function, subject to statistical error. Through extensive case studies, we show that \texttt{FGPR} excels in a wide range of applications and is a promising approach for privacy-preserving multi-fidelity data modeling.
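A minimal sketch of the aggregation loop, under heavy simplification: a toy 1-D RBF GP, finite-difference gradients standing in for the paper's stochastic gradients, and plain parameter averaging. The names and hyperparameters are assumptions, not the actual \texttt{FGPR} implementation.

```python
import numpy as np

def gp_nll(theta, X, y):
    """Negative log marginal likelihood of a zero-mean RBF GP on 1-D inputs.
    theta holds log(lengthscale) and log(noise variance)."""
    ls, noise = np.exp(theta)
    K = np.exp(-0.5 * (X[:, None] - X[None, :]) ** 2 / ls**2)
    K += noise * np.eye(len(X))
    _, logdet = np.linalg.slogdet(K)
    return 0.5 * (y @ np.linalg.solve(K, y) + logdet)

def fgpr_style_round(clients, theta, steps=5, lr=0.01, eps=1e-4):
    """One communication round: local gradient steps (finite differences
    stand in for stochastic gradients), then server-side averaging."""
    updates = []
    for X, y in clients:
        t = theta.copy()
        for _ in range(steps):
            g = np.zeros_like(t)
            for i in range(len(t)):
                e = np.zeros_like(t)
                e[i] = eps
                g[i] = (gp_nll(t + e, X, y) - gp_nll(t - e, X, y)) / (2 * eps)
            t -= lr * g
        updates.append(t)
    return np.mean(updates, axis=0)  # shared global hyperparameters

rng = np.random.default_rng(1)
clients = []
for _ in range(3):
    X = np.sort(rng.uniform(0, 5, size=20))
    clients.append((X, np.sin(X) + 0.1 * rng.normal(size=20)))

theta = np.array([0.0, 0.0])  # log-lengthscale, log-noise
for _ in range(10):
    theta = fgpr_style_round(clients, theta)
```

The averaged hyperparameters act as the shared prior; each client would then condition that prior on its own data to obtain a personalized predictive posterior.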
The Internet of Things (IoT) is on the verge of a major paradigm shift. In the IoT system of the future, IoFT, the cloud will be substituted by the crowd: model training is brought to the edge, allowing IoT devices to collaboratively extract knowledge and build smart analytics and models while keeping their personal data stored locally. This paradigm shift was set into motion by the tremendous increase in the computational power of IoT devices and by recent advances in decentralized and privacy-preserving model training, known as federated learning (FL). This article provides a vision for IoFT and a systematic overview of current efforts towards realizing this vision. Specifically, we first introduce the defining characteristics of IoFT and discuss FL methods, opportunities, and challenges for decentralized inference along three dimensions: (i) a global model that maximizes utility across all IoT devices, (ii) a personalized model that borrows strength across all devices yet retains its own model for each device, and (iii) a meta-learning model that quickly adapts to new devices or learning tasks. We conclude by describing the vision and challenges of IoFT in reshaping different industries through the lens of domain experts. These industries include manufacturing, transportation, energy, healthcare, quality and reliability, business, and computing.
Lookahead, also known as non-myopic, Bayesian optimization (BO) aims to find optimal sampling policies by solving a dynamic program (DP) that maximizes a long-term reward over a rolling horizon. Though promising, lookahead BO faces the risk of error propagation through its increased dependence on a possibly mis-specified model. In this work, we focus on the rollout approximation for solving the intractable DP. We first prove the improving nature of rollout in tackling lookahead BO and provide sufficient conditions for the heuristic used to be rollout-improving. We then provide a theoretical and practical guideline for deciding the rolling horizon stagewise. This guideline is built on quantifying the negative effect of a mis-specified model. To illustrate our ideas, we provide case studies on both single- and multi-information-source BO. Empirical results show the advantageous performance of our method over several myopic and non-myopic BO algorithms.
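To make the rollout idea concrete, here is a toy sketch with assumed details throughout: a fixed-hyperparameter 1-D GP surrogate, UCB as the base heuristic, and posterior-mean fantasy observations. Each candidate point is scored by simulating the base policy for the remaining horizon, which is the rollout approximation of the intractable DP.

```python
import numpy as np

def gp_posterior(Xtr, ytr, Xte, ls=0.5, noise=1e-5):
    """Posterior mean/variance of a zero-mean RBF GP (fixed hyperparameters)."""
    k = lambda a, b: np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls**2)
    K = k(Xtr, Xtr) + noise * np.eye(len(Xtr))
    Ks = k(Xtr, Xte)
    mu = Ks.T @ np.linalg.solve(K, ytr)
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks), axis=0)
    return mu, np.maximum(var, 1e-12)

def rollout_value(x0, Xtr, ytr, grid, horizon, beta=2.0):
    """Score candidate x0 by fantasizing its outcome at the posterior mean,
    then letting a myopic UCB base policy pick the remaining steps."""
    X, y, x, best = list(Xtr), list(ytr), x0, -np.inf
    for _ in range(horizon):
        mu, _ = gp_posterior(np.array(X), np.array(y), np.array([x]))
        X.append(x)
        y.append(mu[0])  # fantasy observation
        best = max(best, mu[0])
        mu_g, var_g = gp_posterior(np.array(X), np.array(y), grid)
        x = grid[np.argmax(mu_g + beta * np.sqrt(var_g))]  # base policy step
    return best

def rollout_step(Xtr, ytr, grid, horizon=3):
    """Non-myopic choice: pick the candidate with the best rollout value."""
    return grid[int(np.argmax([rollout_value(x, Xtr, ytr, grid, horizon)
                               for x in grid]))]

grid = np.linspace(0.0, 1.0, 21)
Xtr = np.array([0.1, 0.9])
ytr = -(Xtr - 0.7) ** 2  # toy objective, maximum at 0.7
x_next = rollout_step(Xtr, ytr, grid)
```

Because the rollout simulates a known, reasonable base policy rather than optimizing over all future decisions, its value estimate lower-bounds the DP value, which is the source of the improvement property the abstract refers to.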
Few-Shot Instance Segmentation (FSIS) requires models to detect and segment novel classes from only a few support examples. In this work, we explore a simple yet unified solution for FSIS as well as its incremental variants, and introduce a new framework named Reference Twice (RefT) to fully explore the relationship between support and query features within a Transformer-like framework. Our key insights are twofold. First, with the aid of support masks, we can generate dynamic class centers more appropriately to re-weight query features. Second, we find that support object queries have already encoded key factors after base training. In this way, the query features can be enhanced twice, at both the feature level and the instance level. In particular, we first design a mask-based dynamic weighting module to enhance support features, and then propose to link object queries for better calibration via cross-attention. After these steps, performance on novel classes improves significantly over our strong baseline. Additionally, our framework can be easily extended to incremental FSIS with minor modifications. When benchmarked on the COCO dataset under the FSIS, gFSIS, and iFSIS settings, our method achieves competitive performance compared to existing approaches across different shot counts, e.g., boosting nAP by a noticeable +8.2/+9.4 over the current state-of-the-art FSIS method for 10/30-shot. We further demonstrate the superiority of our approach on few-shot object detection. Code and models will be made available.
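The mask-based dynamic class centers and query re-weighting can be caricatured in a few lines of numpy. The real RefT operates on Transformer features with cross-attention, so everything below (shapes, names, the cosine re-weighting) is an illustrative assumption, not the paper's module.

```python
import numpy as np

def dynamic_class_center(support_feat, support_mask):
    """Mask-averaged support feature: (H, W, C) features, (H, W) binary mask."""
    w = support_mask / (support_mask.sum() + 1e-8)
    return np.tensordot(w, support_feat, axes=([0, 1], [0, 1]))  # (C,)

def reweight_query(query_feat, center):
    """Scale each query location by its cosine similarity to the class center."""
    q = query_feat / (np.linalg.norm(query_feat, axis=-1, keepdims=True) + 1e-8)
    c = center / (np.linalg.norm(center) + 1e-8)
    sim = q @ c  # (H, W) similarity map
    return query_feat * sim[..., None]

rng = np.random.default_rng(2)
support = rng.normal(size=(8, 8, 16))
mask = np.zeros((8, 8))
mask[2:5, 2:5] = 1.0  # object region in the support image
center = dynamic_class_center(support, mask)
query = rng.normal(size=(8, 8, 16))
enhanced = reweight_query(query, center)
```

Restricting the average to the masked region is what makes the center "dynamic": it follows the object rather than the whole support image.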
Benefiting from its ability to exploit intrinsic supervision information, contrastive learning has recently achieved promising performance in deep graph clustering. However, we observe two drawbacks of existing positive and negative sample construction mechanisms that limit further improvement. 1) The quality of positive samples heavily depends on carefully designed data augmentations, and inappropriate augmentations easily lead to semantic drift and indiscriminative positive samples. 2) The constructed negative samples are unreliable because they ignore important clustering information. To solve these problems, we propose a Cluster-guided Contrastive deep Graph Clustering network (CCGC) that mines the intrinsic supervision information in high-confidence clustering results. Specifically, instead of performing complex node or edge perturbations, we construct two views of the graph with special Siamese encoders whose weights are not shared between the sibling sub-networks. Then, guided by the high-confidence clustering information, we carefully select and construct positive samples from the same high-confidence cluster across the two views. Moreover, to construct semantically meaningful negative sample pairs, we regard the centers of different high-confidence clusters as negative samples, improving the discriminative capability and reliability of the constructed pairs. Lastly, we design an objective function that pulls together samples from the same cluster while pushing away those from other clusters, by maximizing and minimizing the cross-view cosine similarity between positive and negative samples, respectively. Extensive experiments on six datasets demonstrate the effectiveness of CCGC compared with existing state-of-the-art algorithms.
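A simplified numpy version of the cluster-guided objective, assuming positives are cross-view copies of the same node and negatives are the other clusters' centers; the actual CCGC loss differs in its exact form, and all names here are illustrative.

```python
import numpy as np

def cosine(a, b):
    a = a / (np.linalg.norm(a, axis=-1, keepdims=True) + 1e-8)
    b = b / (np.linalg.norm(b, axis=-1, keepdims=True) + 1e-8)
    return a @ b.T

def cluster_guided_loss(z1, z2, labels):
    """Pull together cross-view embeddings of the same node; push each node
    away from the centers of the *other* high-confidence clusters."""
    pos = np.diag(cosine(z1, z2)).mean()
    uniq = np.unique(labels)
    centers = np.stack([z2[labels == c].mean(axis=0) for c in uniq])
    sim = cosine(z1, centers)  # (n, num_clusters)
    own = np.array([np.where(uniq == c)[0][0] for c in labels])
    mask = np.ones_like(sim, dtype=bool)
    mask[np.arange(len(labels)), own] = False  # drop a node's own center
    neg = sim[mask].mean()
    return neg - pos  # minimized: high positive, low negative similarity

z1 = np.array([[1.0, 0.0], [1.0, 0.1], [-1.0, 0.0], [-1.0, -0.1]])
z2 = z1.copy()  # two perfectly aligned views of two well-separated clusters
labels = np.array([0, 0, 1, 1])
loss = cluster_guided_loss(z1, z2, labels)
```

Using cluster centers as negatives avoids the unreliable node-level negatives the abstract criticizes: a center summarizes a whole high-confidence cluster, so pushing away from it cannot accidentally separate two nodes of the same class.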
In robust Markov decision processes (MDPs), the uncertainty in the transition kernel is addressed by finding a policy that optimizes the worst-case performance over an uncertainty set of MDPs. While much of the literature has focused on discounted MDPs, robust average-reward MDPs remain largely unexplored. In this paper, we focus on robust average-reward MDPs, where the goal is to find a policy that optimizes the worst-case average reward over an uncertainty set. We first take an approach that approximates average-reward MDPs using discounted MDPs. We prove that the robust discounted value function converges to the robust average-reward as the discount factor $\gamma$ goes to $1$, and moreover, when $\gamma$ is large, any optimal policy of the robust discounted MDP is also an optimal policy of the robust average-reward. We further design a robust dynamic programming approach, and theoretically characterize its convergence to the optimum. Then, we investigate robust average-reward MDPs directly without using discounted MDPs as an intermediate step. We derive the robust Bellman equation for robust average-reward MDPs, prove that the optimal policy can be derived from its solution, and further design a robust relative value iteration algorithm that provably finds its solution, or equivalently, the optimal robust policy.
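The discounted approximation can be sketched with robust value iteration over a finite, (s, a)-rectangular uncertainty set. The paper handles general sets and the average-reward case directly; the toy two-state MDP below is an assumption for illustration only.

```python
import numpy as np

def robust_value_iteration(P_set, R, gamma=0.9, iters=500):
    """Robust discounted value iteration: at each (s, a), take the worst-case
    expected next value over a finite uncertainty set of transition kernels.
    P_set: list of (S, A, S) kernels; R: (S, A) reward matrix."""
    V = np.zeros(R.shape[0])
    for _ in range(iters):
        Q = np.min([R + gamma * (P @ V) for P in P_set], axis=0)
        V = Q.max(axis=1)  # robust Bellman update
    return V, Q.argmax(axis=1)

# Two-state toy MDP: action 0 pays 1 in state 0, action 1 pays 0.5 in state 1.
R = np.array([[1.0, 0.0], [0.0, 0.5]])
P_stay = np.zeros((2, 2, 2))
P_swap = np.zeros((2, 2, 2))
for s in range(2):
    for a in range(2):
        P_stay[s, a, s] = 1.0      # adversary keeps the agent in place
        P_swap[s, a, 1 - s] = 1.0  # adversary forces a state switch

V_nominal, _ = robust_value_iteration([P_stay], R)
V_robust, _ = robust_value_iteration([P_stay, P_swap], R)
```

As expected, the robust value is never larger than the value under any single kernel from the set: here the adversary can push the agent into the low-reward state, dragging the robust value of state 0 down from 10 to 5.5.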
Differentiable Architecture Search (DARTS) has attracted considerable attention as a gradient-based Neural Architecture Search (NAS) method. Since the introduction of DARTS, little work has been done on adapting its action space to state-of-the-art architecture design principles for CNNs. In this work, we aim to address this gap by incrementally augmenting the DARTS search space with micro-design changes inspired by ConvNeXt, studying the trade-off between accuracy, evaluation layer count, and computational cost. To this end, we introduce the Pseudo-Inverted Bottleneck conv block, intended to reduce the computational footprint of the inverted bottleneck block proposed in ConvNeXt. Our proposed architecture is much less sensitive to evaluation layer count and significantly outperforms a DARTS network of similar size at layer counts as small as 2. Furthermore, with fewer layers, not only does it achieve higher accuracy at lower GMACs and parameter count, but GradCAM comparisons also show that our network detects distinctive features of target objects better than DARTS.
Automatic font generation without human experts is a practical and significant problem, especially for languages that consist of a large number of characters. Existing methods for font generation typically rely on supervised learning and require large amounts of paired data, which are labor-intensive and expensive to collect. In contrast, common unsupervised image-to-image translation methods are not applicable to font generation, as they often define style as a set of textures and colors. In this work, we propose a robust deformable generative network for unsupervised font generation (abbreviated as DGFont++). We introduce a feature deformation skip connection (FDSC) to learn local patterns and geometric transformations between fonts. The FDSC predicts pairs of displacement maps and employs the predicted maps to apply deformable convolution to the low-level content feature maps. The outputs of FDSC are fed into a mixer to generate the final results. Moreover, we introduce contrastive self-supervised learning to learn a robust style representation for fonts by understanding their similarities and dissimilarities. To distinguish different styles, we train our model with a multi-task discriminator, which ensures that each style can be discriminated independently. In addition to the adversarial loss, two reconstruction losses are adopted to constrain the domain-invariant characteristics between generated and content images. Taking advantage of FDSC and the adopted loss functions, our model maintains spatial information and generates high-quality character images in an unsupervised manner. Experiments demonstrate that our model generates character images of higher quality than state-of-the-art methods.
Neural operators, which emerge as implicit solution operators of hidden governing equations, have recently become popular tools for learning the responses of complex real-world physical systems. Nevertheless, the majority of neural operator applications have thus far been data-driven and neglect the intrinsic preservation of fundamental physical laws in the data. In this paper, we introduce a novel integral neural operator architecture that learns physical models with fundamental conservation laws automatically guaranteed. In particular, by replacing frame-dependent position information with its invariant counterpart in the kernel space, the proposed neural operator is by design translation- and rotation-invariant, and consequently abides by the conservation laws of linear and angular momentum. As applications, we demonstrate the expressivity and efficacy of our model in learning complex material behaviors from both synthetic and experimental datasets, and show that, by automatically satisfying these essential physical laws, the learned neural operator is not only generalizable to translated and rotated datasets but also achieves state-of-the-art accuracy and efficiency compared to baseline neural operator models.
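The frame-invariance idea can be illustrated with sorted pairwise distances, which are unchanged under any translation and rotation of the input points. This is only a stand-in for the paper's kernel-space construction, but it shows why replacing raw positions with an invariant counterpart makes the learned map insensitive to the choice of reference frame.

```python
import numpy as np

def invariant_descriptor(points):
    """Sorted pairwise distances of a point set: identical for any translated
    or rotated copy of the set, unlike the raw coordinates."""
    diff = points[:, None, :] - points[None, :, :]
    d = np.linalg.norm(diff, axis=-1)
    return np.sort(d[np.triu_indices(len(points), k=1)])

theta = 0.7
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]])
pts_moved = pts @ rot.T + np.array([3.0, -1.0])  # rotate, then translate
```

A model fed only such invariant quantities cannot distinguish the original point set from its rigidly transformed copy, which is exactly the property that yields momentum conservation by design.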