Federated learning (FL) has recently emerged as a popular privacy-preserving collaborative learning paradigm. However, it suffers from non-independent and identically distributed (non-IID) data among clients. In this paper, we propose a novel framework, called Synthetic Data Aided Federated Learning (SDA-FL), to resolve this non-IID challenge by sharing synthetic data. Specifically, each client pretrains a local generative adversarial network (GAN) to generate differentially private synthetic data, which are uploaded to the parameter server (PS) to construct a globally shared synthetic dataset. To generate confident pseudo labels for the synthetic dataset, we also propose an iterative pseudo labeling mechanism executed by the PS. The combination of the local private datasets and the synthetic dataset with confident pseudo labels yields nearly identical data distributions across clients, which improves the consistency among local models and benefits the global aggregation. Extensive experiments demonstrate that, under both supervised and semi-supervised settings, the proposed framework outperforms the baseline methods by a large margin on several benchmark datasets.
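To make the server-side pseudo-labeling loop concrete, below is a minimal sketch of one plausible reading of the mechanism. The helper names (`predict_proba`, `refit`), the confidence threshold, and the number of rounds are our assumptions, not details from the paper.

```python
import numpy as np

def iterative_pseudo_label(synthetic_x, predict_proba, refit,
                           threshold=0.9, rounds=3):
    """Iteratively pseudo-label synthetic samples at the PS, keeping only
    confident predictions (hypothetical reading of the SDA-FL mechanism).

    synthetic_x  : (N, d) array of GAN-generated samples
    predict_proba: callable (M, d) -> (M, C) class probabilities (global model)
    refit        : callable refining the model on a labeled subset
    """
    labels = np.full(len(synthetic_x), -1)              # -1 marks "unlabeled"
    for _ in range(rounds):
        proba = predict_proba(synthetic_x)
        conf, pred = proba.max(axis=1), proba.argmax(axis=1)
        labels = np.where(conf >= threshold, pred, -1)  # keep confident labels
        mask = labels >= 0
        if mask.any():                                  # refine, then relabel
            refit(synthetic_x[mask], labels[mask])
    return labels
```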
In decentralized learning, a network of nodes cooperates to minimize an overall objective function that is usually the finite sum of their local objectives, combined with a non-smooth regularization term for better generalization ability. The decentralized stochastic proximal gradient (DSPG) method is commonly used to train this type of learning model, but its convergence rate is retarded by the variance of the stochastic gradients. In this paper, we propose a novel algorithm, namely DPSVRG, to accelerate decentralized training by leveraging the variance reduction technique. The basic idea is to introduce an estimator in each node, which periodically tracks the local full gradient, to correct the stochastic gradient at each iteration. By transforming our decentralized algorithm into a centralized inexact proximal gradient algorithm with variance reduction, and controlling the bounds of the error sequences, we prove that DPSVRG converges at the rate of $O(1/T)$ for general convex objectives plus a non-smooth term, with $T$ as the number of iterations, while DSPG converges at the rate $O(\frac{1}{\sqrt{T}})$. Our experiments on different applications, network topologies, and learning models demonstrate that DPSVRG converges much faster than DSPG, and the loss function of DPSVRG decreases smoothly along the training epochs.
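As a reading aid (in our notation, not necessarily the paper's), a standard SVRG-style estimator consistent with this description corrects the stochastic gradient at node $i$ with a periodically refreshed snapshot $\tilde{x}_i$:
$$v_i^t = \nabla f_{i,\xi}(x_i^t) - \nabla f_{i,\xi}(\tilde{x}_i) + \nabla f_i(\tilde{x}_i),$$
followed by the decentralized proximal step $x_i^{t+1} = \mathrm{prox}_{\eta R}\big(\sum_j W_{ij} x_j^t - \eta v_i^t\big)$, where $W$ is the mixing matrix and $R$ the non-smooth regularizer. The estimator is unbiased, $\mathbb{E}_\xi[v_i^t] = \nabla f_i(x_i^t)$, and its variance shrinks as $x_i^t$ and $\tilde{x}_i$ approach the optimum, which is what makes the improvement from $O(1/\sqrt{T})$ to $O(1/T)$ attainable.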
Federated edge learning (FEEL) has attracted much attention as a privacy-preserving paradigm that effectively incorporates distributed data at the network edge to train deep learning models. However, the limited coverage of a single edge server results in an insufficient number of participating client nodes, which may impair the learning performance. In this paper, we investigate a novel FEEL framework, namely semi-decentralized federated edge learning (SD-FEEL), in which multiple edge servers are employed to collectively coordinate a large number of client nodes. By exploiting the low-latency communication among edge servers for efficient model sharing, SD-FEEL can incorporate more training data while enjoying much lower latency than conventional federated learning. We detail the training algorithm for SD-FEEL with its three main steps: local model updates, intra-cluster model aggregation, and inter-cluster model aggregation. The convergence of this algorithm is proved on non-independent and identically distributed (non-IID) data, which also helps reveal the effects of key parameters on the training efficiency and provides practical design guidelines. Meanwhile, the heterogeneity of edge devices may cause the straggler effect and deteriorate the convergence speed of SD-FEEL. To resolve this issue, we propose an asynchronous training algorithm with a staleness-aware aggregation scheme for SD-FEEL, whose convergence performance is also analyzed. Simulation results demonstrate the effectiveness and efficiency of the proposed algorithms for SD-FEEL and corroborate our analysis.
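A minimal sketch of one such round, assuming each model is a flat parameter vector and `mixing` is a doubly-stochastic matrix over the edge servers; the function names and interfaces are ours, not the paper's.

```python
import numpy as np

def sd_feel_round(client_models, clusters, mixing, local_update):
    """One hypothetical SD-FEEL round over S edge servers.

    client_models: list of 1-D parameter vectors, one per client
    clusters     : list of client-index lists, one per edge server
    mixing       : (S, S) doubly-stochastic matrix over the server topology
    local_update : callable vec -> vec (e.g. a few local SGD steps)
    """
    W = np.asarray(mixing)
    # Step 1: local model updates on every client
    client_models = [local_update(w) for w in client_models]
    # Step 2: intra-cluster aggregation at each edge server
    cluster_models = [np.mean([client_models[i] for i in c], axis=0)
                      for c in clusters]
    # Step 3: inter-cluster aggregation over the edge-server topology
    mixed = [sum(W[s, r] * cluster_models[r] for r in range(len(clusters)))
             for s in range(len(clusters))]
    # Broadcast each cluster's aggregated model back to its clients
    new_models = [None] * len(client_models)
    for s, members in enumerate(clusters):
        for i in members:
            new_models[i] = mixed[s]
    return new_models
```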
Federated edge learning (FEEL) has emerged as an effective approach to reduce the large communication latency of cloud-based machine learning solutions while preserving data privacy. Unfortunately, the learning performance of FEEL may be compromised by the limited training data in a single edge cluster. In this paper, we investigate a novel FEEL framework, namely semi-decentralized federated edge learning (SD-FEEL). By allowing model aggregation across different edge clusters, SD-FEEL retains the benefit of FEEL in reducing the training latency, while improving the learning performance by accessing richer training data from multiple edge clusters. A training algorithm for SD-FEEL with three main procedures per round is presented, including local model updates, intra-cluster model aggregation, and inter-cluster model aggregation, and it is proved to converge on non-independent and identically distributed (non-IID) data. We also characterize how the network topology of the edge servers and the communication overhead of inter-cluster model aggregation jointly affect the training performance. Experiment results corroborate our analysis and demonstrate the effectiveness of SD-FEEL in achieving faster convergence than traditional federated learning architectures. Besides, guidelines on choosing critical hyper-parameters of the training algorithm are provided.
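To illustrate the topology/overhead trade-off this abstract refers to, here is a sketch of inter-cluster aggregation as repeated gossip steps; the `rounds` knob trades communication cost against consensus among the edge servers (the function and its interface are our assumptions).

```python
import numpy as np

def inter_cluster_gossip(cluster_models, mixing, rounds=1):
    """Run `rounds` gossip exchanges with a doubly-stochastic mixing matrix
    defined by the edge-server topology. More rounds cost more inter-cluster
    communication but pull the cluster models closer to their global average;
    how fast depends on the spectral gap of `mixing`."""
    W = np.asarray(mixing)
    X = np.stack(cluster_models)   # (S, d): one model per edge server
    for _ in range(rounds):
        X = W @ X                  # one topology-constrained exchange
    return list(X)
```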
Deploying reliable deep learning techniques in interdisciplinary applications requires learned models to output accurate and (even more importantly) explainable predictions. Existing approaches typically explicate network outputs in a post-hoc fashion, under an implicit assumption that faithful explanations come from accurate predictions/classifications. We make the opposite claim: explanations boost (or even determine) classification. That is, end-to-end learning of explanation factors to augment discriminative representation extraction could be a more intuitive strategy to inversely assure fine-grained explainability, e.g., in neuroimaging and neuroscience studies with high-dimensional data containing noisy, redundant, and task-irrelevant information. In this paper, we propose such an explainable geometric deep network, dubbed NeuroExplainer, with applications to uncovering altered infant cortical development patterns associated with preterm birth. Given fundamental cortical attributes as network input, our NeuroExplainer adopts a hierarchical attention-decoding framework to learn fine-grained attentions and the respective discriminative representations to accurately distinguish preterm infants from term-born infants at term-equivalent age. NeuroExplainer learns the hierarchical attention-decoding modules under subject-level weak supervision, coupled with targeted regularizers deduced from domain knowledge of brain development. These prior-guided constraints implicitly maximize the explainability metrics (i.e., fidelity, sparsity, and stability) during network training, driving the learned network to output detailed explanations and accurate classifications. Experimental results on the public dHCP benchmark suggest that NeuroExplainer yields quantitatively reliable explanation results that are qualitatively consistent with representative neuroimaging studies.
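The abstract does not give the exact form of the regularizers, but as an illustration of how a sparsity metric can be turned into a training loss, one common choice penalizes the entropy of the normalized attention map (a hypothetical instantiation, not the paper's definition):

```python
import numpy as np

def sparsity_penalty(attention, eps=1e-8):
    """Entropy of the normalized attention map: lower entropy means the
    attention mass concentrates on fewer vertices, i.e. a sparser, more
    focused explanation. One plausible sparsity regularizer to add to the
    classification loss during training."""
    p = attention / (attention.sum() + eps)   # normalize to a distribution
    return -np.sum(p * np.log(p + eps))       # minimized when p is peaked
```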
Domain adaptation aims to transfer the knowledge acquired by models trained on (data-rich) source domains to (low-resource) target domains, and a popular method for this is invariant representation learning. While such representations have been studied extensively for classification and regression problems, how they apply to ranking problems, where both the data and the metrics have a list structure, is not well understood. Theoretically, we establish a domain adaptation generalization bound for ranking under listwise metrics such as MRR and NDCG. The bound suggests an adaptation method via learning list-level domain-invariant feature representations, whose benefits are empirically demonstrated by unsupervised domain adaptation experiments on real-world ranking tasks, including passage reranking. A key message is that for domain adaptation, representations should be analyzed at the same level at which the metric is computed: we show that learning invariant representations at the list level is most effective for adaptation on ranking problems.
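For reference, NDCG, one of the listwise metrics the bound covers, is computed over an entire ranked list rather than per item, which is precisely why list-level representations matter. A standard implementation (linear-gain variant):

```python
import numpy as np

def ndcg(relevance, k=None):
    """NDCG@k for one query; `relevance` holds graded relevance labels in
    the order the model ranked the items."""
    rel = np.asarray(relevance, dtype=float)[:k]
    discounts = 1.0 / np.log2(np.arange(2, len(rel) + 2))   # 1/log2(rank+1)
    dcg = float(np.sum(rel * discounts))
    ideal = np.sort(np.asarray(relevance, dtype=float))[::-1][:k]
    idcg = float(np.sum(ideal * discounts[:len(ideal)]))
    return dcg / idcg if idcg > 0 else 0.0
```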
We introduce anchored radial observations (ARO), a novel shape encoding for learning neural field representations of shapes that is category-agnostic and generalizable amid significant shape variations. The main idea behind our work is to reason about shapes through partial observations from a set of viewpoints, called anchors. We develop a general and unified shape representation by employing a fixed set of anchors, via Fibonacci sampling, and designing a coordinate-based deep neural network to predict the occupancy value of a query point in space. Unlike prior neural implicit models, which use a global shape feature, our shape encoder operates on contextual, query-specific features. To predict point occupancy, locally observed shape information from the perspective of the anchors surrounding the input query point is encoded and aggregated through an attention module, before implicit decoding is performed. We demonstrate the quality and generality of our network, coined ARO-Net, on surface reconstruction from sparse point clouds, with tests on novel and unseen object categories, "one-shape" training, and comparisons to state-of-the-art neural and classical methods for reconstruction and tessellation.
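Fibonacci sampling of the anchor set is a standard construction; below is a sketch of placing n near-uniform anchors on the unit sphere (scaling them to the shape's bounding volume is a detail we assume rather than take from the paper):

```python
import numpy as np

def fibonacci_anchors(n):
    """n near-uniform directions on the unit sphere via the golden spiral."""
    i = np.arange(n)
    golden = (1 + 5 ** 0.5) / 2
    theta = 2 * np.pi * i / golden            # longitude steps by golden angle
    z = 1 - (2 * i + 1) / n                   # latitude, uniform in z
    r = np.sqrt(np.maximum(0.0, 1 - z * z))
    return np.stack([r * np.cos(theta), r * np.sin(theta), z], axis=1)
```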
Ultra-fine entity typing (UFET) predicts extremely free-formed types (e.g., president, politician) of a given entity mention (e.g., Joe Biden) in context. State-of-the-art (SOTA) methods use the cross-encoder (CE) based architecture. CE concatenates the mention (and its context) with each type and feeds the pairs into a pretrained language model (PLM) to score their relevance. This brings deeper interaction between the mention and the types, and thus better performance, but requires N (the type set size) forward passes to infer the types of a single mention. CE is therefore very slow at inference when the type set is large (e.g., N = 10k for UFET). To this end, we propose to perform entity typing in a recall-expand-filter manner. The recall and expand stages prune the large type set and generate the K (typically fewer than 256) most relevant type candidates for each mention. At the filter stage, we use a novel model called MCCE to concurrently encode and score these K candidates in only one forward pass to obtain the final type prediction. We investigate different variants of MCCE, and extensive experiments show that MCCE under our paradigm reaches SOTA performance on ultra-fine entity typing and is thousands of times faster than the cross-encoder. We also find that MCCE is very effective in fine-grained (130 types) and coarse-grained (9 types) entity typing. Our code is available at \url{https://github.com/modelscope/AdaSeq/tree/master/examples/MCCE}.
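The pipeline wiring is sketched below with hypothetical callables; the staging is from the abstract, while the interfaces and the zero-score decision threshold are our assumptions:

```python
def recall_expand_filter(mention, recall_fn, expand_fn, mcce_score_fn, k=128):
    """Recall-expand-filter entity typing. `recall_fn` and `expand_fn` prune
    the ~10k-type set to at most k candidates; `mcce_score_fn` scores all K
    candidates concurrently in a single forward pass, instead of one
    cross-encoder pass per type."""
    candidates = recall_fn(mention, k)            # cheap retrieval stage
    candidates = expand_fn(mention, candidates)   # add near-miss types
    scores = mcce_score_fn(mention, candidates)   # one MCCE forward pass
    return [t for t, s in zip(candidates, scores) if s > 0]
```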
U-shaped networks are widely used in various medical image tasks, such as segmentation, restoration, and reconstruction, but most of them rely on centralized learning and thus ignore privacy issues. To address these privacy concerns, federated learning (FL) and split learning (SL) have attracted increasing attention. However, it is hard for either FL or SL to balance the local computational cost, model privacy, and parallel training simultaneously. To achieve this goal, in this paper we propose Robust Split Federated Learning (RoS-FL) for U-shaped medical image networks, a novel hybrid learning paradigm of FL and SL. Previous works cannot simultaneously preserve the privacy of the input, model parameters, labels, and output. To deal with all of them effectively, we design a novel splitting method for U-shaped medical image networks that splits the network into three parts hosted by different parties. Besides, distributed learning methods usually suffer from a drift between local and global models caused by data heterogeneity. Based on this consideration, we propose a dynamic weight correction strategy (\textbf{DWCS}) to stabilize the training process and avoid model drift. Specifically, a weight correction loss is designed to quantify the drift between the models from two adjacent communication rounds. By minimizing this loss, a correction model is obtained; we then treat the weighted sum of the correction model and the final-round model as the result. The effectiveness of the proposed RoS-FL is supported by extensive experimental results on different tasks. Related code will be released at https://github.com/Zi-YuanYang/RoS-FL.
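A minimal sketch of the drift term and the final combination, assuming flat weight vectors; the squared-L2 form and the mixing coefficient `alpha` are our assumptions, not the paper's exact definitions:

```python
import numpy as np

def weight_correction_loss(w_curr, w_prev):
    """Quantify drift between the models of two adjacent communication
    rounds as a squared L2 distance (one plausible form of the DWCS loss)."""
    return float(np.sum((np.asarray(w_curr) - np.asarray(w_prev)) ** 2))

def dwcs_combine(w_final, w_correction, alpha=0.5):
    """Weighted sum of the correction model and the final-round model,
    returned as the result per the abstract; alpha is hypothetical."""
    return alpha * np.asarray(w_correction) + (1 - alpha) * np.asarray(w_final)
```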
Adversarial attacks on thermal infrared imaging expose the risks of related applications. Estimating the security of these systems is essential for safely deploying them in the real world. In many cases, realizing attacks in the physical space requires elaborate, special-purpose perturbations. These solutions are often \emph{impractical} and \emph{attention-grabbing}. To address the need for a physically practical and stealthy adversarial attack, we introduce \textsc{HotCold} Block, a novel physical attack for infrared detectors that hides persons using wearable Warming Paste and Cooling Paste. By attaching these readily available temperature-controlled materials to the body, \textsc{HotCold} Block evades human eyes efficiently. Moreover, unlike existing methods that build adversarial patches with complex texture and structure features, \textsc{HotCold} Block utilizes an SSP-oriented adversarial optimization algorithm that enables attacks with pure color blocks and explores the influence of size, shape, and position on attack performance. Extensive experimental results in both digital and physical environments demonstrate the performance of our proposed \textsc{HotCold} Block. \emph{Code is available: \textcolor{magenta}{https://github.com/weihui1308/HOTCOLDBlock}}.
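As an illustration of optimizing a pure color block's size, shape, and position against a detector, here is a black-box random-search stand-in; the paper's SSP-oriented algorithm is more elaborate, and the interface below is entirely our assumption:

```python
import numpy as np

def optimize_block(detect_score, n_iters=500, seed=None):
    """Search over a block's normalized (x, y, width, height), minimizing
    the detector's person confidence `detect_score` (a callable taking the
    four block parameters and returning a float)."""
    rng = np.random.default_rng(seed)
    best, best_score = None, np.inf
    for _ in range(n_iters):
        params = rng.uniform([0, 0, 0.05, 0.05], [1, 1, 0.4, 0.4])
        score = detect_score(*params)
        if score < best_score:                 # keep the stealthiest block
            best, best_score = tuple(params), score
    return best, best_score
```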