Personalized federated learning (pFL), which utilizes and deploys distinct local models, has attracted increasing attention in recent years due to its success in handling the statistical heterogeneity of FL clients. However, standardized evaluation and systematic analysis of diverse pFL methods remain challenging. First, the highly varied datasets, FL simulation settings, and pFL implementations prevent fast and fair comparison of pFL methods. Second, the effectiveness and robustness of pFL methods are under-explored in various practical scenarios, such as generalization to new clients and the participation of resource-limited clients. Finally, the current pFL literature diverges in the evaluation and ablation protocols it adopts. To tackle these challenges, we propose the first comprehensive pFL benchmark, pFL-Bench, to facilitate rapid, reproducible, standardized, and thorough pFL evaluation. The proposed benchmark contains more than 10 datasets from diverse application domains with unified data partitions and realistic heterogeneous settings; a modular and easy-to-extend pFL codebase with more than 20 competitive pFL baseline implementations; and systematic evaluations under containerized environments covering generalization, fairness, system overhead, and convergence. We highlight the benefits and potential of state-of-the-art pFL methods, and hope that pFL-Bench enables further pFL research and broad applications that would otherwise be difficult owing to the lack of a dedicated benchmark. The code is released at https://github.com/alibaba/federatedscope/tree/master/benchmark/pfl-bench.
To investigate the heterogeneity of federated learning in real-world applications, we generalize classic federated learning to federated hetero-task learning, which emphasizes the inconsistency across participants in federated learning in terms of both data distributions and learning tasks. We also present B-FHTL, a federated hetero-task learning benchmark consisting of simulated datasets, FL protocols, and a unified evaluation mechanism. The B-FHTL datasets contain three carefully designed federated learning tasks with increasing heterogeneity, each of which simulates clients with different non-IID data and learning tasks. To ensure fair comparison among different FL algorithms, B-FHTL builds in the whole FL protocol through high-level APIs that avoid privacy leakage, and presets the most common evaluation metrics spanning different learning tasks, such as regression, classification, and text generation. Furthermore, we compare FL algorithms from the fields of federated multi-task learning, federated personalization, and federated meta-learning within B-FHTL, and highlight the influence of heterogeneity and the difficulty of federated hetero-task learning. Our benchmark, including the federated datasets, protocols, evaluation mechanism, and preliminary experiments, is open-sourced at https://github.com/alibaba/federatedscope/tree/master/benchmark/b-fhtl.
Although existing federated learning (FL) platforms have made remarkable progress in providing infrastructure for FL development, these platforms may not handle well the challenges brought by various kinds of heterogeneity, including heterogeneity in participants' local data, resources, behaviors, and learning goals. To fill this gap, in this paper we propose a novel FL platform named FederatedScope, which adopts an event-driven architecture to provide users with great flexibility to independently describe the behaviors of different participants. Such a design makes it easy for users to describe participants with various local training processes, learning goals, and backends, and to coordinate them into an FL course with synchronous or asynchronous training strategies. Towards an easy-to-use and flexible platform, FederatedScope provides rich types of plug-in operations and components for efficient further development, and we implement several important components to better help users with privacy protection, attack simulation, and auto-tuning. We have released FederatedScope at https://github.com/alibaba/federatedscope to promote academic research and industrial deployment of federated learning in a wide range of scenarios.
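To make the event-driven idea concrete, below is a minimal, hypothetical sketch of the pattern the abstract describes: each participant registers handlers for message types and reacts to incoming events independently. The class and method names are illustrative assumptions, not FederatedScope's actual API.

```python
# Hypothetical sketch of an event-driven FL participant (NOT FederatedScope's API).
from collections import defaultdict


class Participant:
    def __init__(self, participant_id):
        self.id = participant_id
        self.handlers = defaultdict(list)  # message type -> list of callbacks

    def register_handler(self, msg_type, callback):
        """Attach a callback that fires whenever a message of msg_type arrives."""
        self.handlers[msg_type].append(callback)

    def on_message(self, msg_type, payload):
        """Dispatch an incoming event to every registered handler."""
        for callback in self.handlers[msg_type]:
            callback(payload)


def local_train(global_weights):
    # Placeholder for an arbitrary local training routine; each participant
    # could plug in its own training process, objective, and backend here.
    print("training locally from received weights:", global_weights)


client = Participant("client_1")
client.register_handler("model_broadcast", local_train)
client.on_message("model_broadcast", {"layer0": [0.1, 0.2]})
```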
In this paper, we propose a second-order extension of the continuous-time game-theoretic mirror descent (MD) dynamics, called MD2, which converges to mere (but not necessarily strict) variationally stable states (VSS) without using common auxiliary techniques such as time-averaging or discounting. We show that MD2 enjoys no-regret properties as well as an exponential rate of convergence towards strong VSS upon a slight modification. Furthermore, MD2 can be used to derive many novel primal-space dynamics. Finally, using stochastic approximation techniques, we provide a convergence guarantee of discrete-time MD2 with noisy observations towards interior mere VSS. Selected simulations are provided to illustrate our results.
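For reference, the first-order continuous-time mirror descent dynamics that MD2 extends can be written in the standard dual-averaging form below; the exact second-order MD2 dynamics are given in the paper itself and are not reproduced here.

```latex
% First-order game-theoretic mirror descent (dual averaging) dynamics.
% x_i: player i's strategy, y_i: dual (score) variable,
% v_i: player i's payoff gradient, h_i: a strongly convex regularizer.
\begin{aligned}
\dot{y}_i(t) &= v_i\bigl(x(t)\bigr),\\
x_i(t) &= \nabla h_i^{*}\bigl(y_i(t)\bigr)
        = \operatorname*{arg\,max}_{z \in \mathcal{X}_i}
          \bigl\{ \langle y_i(t), z \rangle - h_i(z) \bigr\}.
\end{aligned}
```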
In this paper, we study the problem of knowledge-intensive text-to-SQL, in which domain knowledge is necessary to parse expert questions into SQL queries over domain-specific tables. We formalize this scenario by building a new Chinese benchmark KnowSQL consisting of domain-specific questions covering various domains. We then address this problem by presenting formulaic knowledge, rather than by annotating additional data examples. More concretely, we construct a formulaic knowledge bank as a domain knowledge base and propose a framework (ReGrouP) to leverage this formulaic knowledge during parsing. Experiments using ReGrouP demonstrate a significant 28.2% improvement overall on KnowSQL.
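As a rough illustration of how a formulaic knowledge bank could be consulted at parse time, the sketch below retrieves formula entries by embedding similarity to the question and prepends them to the parser input. The retrieval scheme, the stand-in encoder, and all names are assumptions for illustration, not the actual ReGrouP design.

```python
# Hypothetical retrieval of formulaic knowledge for a text-to-SQL parser.
import numpy as np


def embed(text: str) -> np.ndarray:
    # Stand-in encoder: a real system would use a pretrained sentence encoder.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(16)


def retrieve_formulas(question: str, knowledge_bank: dict, top_k: int = 2):
    """Return the top-k formula descriptions most similar to the question."""
    q = embed(question)
    scored = []
    for name, description in knowledge_bank.items():
        d = embed(description)
        score = float(q @ d / (np.linalg.norm(q) * np.linalg.norm(d)))
        scored.append((score, name, description))
    return [f"{n}: {d}" for _, n, d in sorted(scored, reverse=True)[:top_k]]


bank = {
    "gross_margin": "gross margin = (revenue - cost) / revenue",
    "yoy_growth": "year-over-year growth = (value_t - value_{t-1}) / value_{t-1}",
}
question = "What is the gross margin of each product line in 2021?"
parser_input = " | ".join(retrieve_formulas(question, bank)) + " || " + question
print(parser_input)
```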
Weakly-supervised object localization aims to indicate the category as well as the scope of an object in an image given only image-level labels. Most existing works are based on Class Activation Mapping (CAM) and endeavor to enlarge the discriminative area inside the activation map to perceive the whole object, yet ignore the co-occurrence confounder of object and context (e.g., fish and water), which makes it hard for model inspection to distinguish object boundaries. Besides, the use of CAM also brings a dilemma in which classification and localization always suffer from a performance gap and cannot reach their highest accuracy simultaneously. In this paper, we propose a causal knowledge distillation method, dubbed KD-CI-CAM, to address these two under-explored issues in one go. More specifically, we tackle the co-occurrence context confounder problem via causal intervention (CI), which explores the causalities among image features, contexts, and categories to eliminate the biased object-context entanglement in the class activation maps. Based on the de-biased object feature, we additionally propose a multi-teacher causal distillation framework to balance the absorption of classification knowledge and localization knowledge during model training. Extensive experiments on several benchmarks demonstrate the effectiveness of KD-CI-CAM in learning clear object boundaries from confounding contexts and in addressing the dilemma between classification and localization performance.
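The multi-teacher distillation idea can be illustrated with a generic loss that balances soft targets from a classification-oriented teacher and a localization-oriented teacher. This is a hedged sketch of the general mechanism only; the weighting scheme, temperature, and function names are illustrative assumptions rather than the paper's exact formulation.

```python
# Generic multi-teacher knowledge distillation loss (illustrative sketch).
import torch
import torch.nn.functional as F


def multi_teacher_kd_loss(student_logits, cls_teacher_logits, loc_teacher_logits,
                          labels, alpha=0.5, temperature=2.0):
    """Balance hard-label CE with KL to two teachers (classification / localization)."""
    t = temperature
    ce = F.cross_entropy(student_logits, labels)
    kd_cls = F.kl_div(F.log_softmax(student_logits / t, dim=1),
                      F.softmax(cls_teacher_logits / t, dim=1),
                      reduction="batchmean") * t * t
    kd_loc = F.kl_div(F.log_softmax(student_logits / t, dim=1),
                      F.softmax(loc_teacher_logits / t, dim=1),
                      reduction="batchmean") * t * t
    # alpha trades off the two teachers; a schedule could adapt it during training.
    return ce + alpha * kd_cls + (1.0 - alpha) * kd_loc


logits = torch.randn(4, 10)
loss = multi_teacher_kd_loss(logits, torch.randn(4, 10), torch.randn(4, 10),
                             torch.randint(0, 10, (4,)))
print(loss.item())
```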
Dynamic treatment regimes assign personalized treatments to patients sequentially over time based on their baseline information and time-varying covariates. In mobile health applications, these covariates are typically collected at different frequencies over a long time horizon. In this paper, we propose a deep spectral Q-learning algorithm, which integrates principal component analysis (PCA) with deep Q-learning to handle the mixed frequency data. In theory, we prove that the mean return under the estimated optimal policy converges to that under the optimal one and establish its rate of convergence. The usefulness of our proposal is further illustrated via simulations and an application to a diabetes dataset.
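A minimal sketch of the general idea is shown below: compress the high-frequency covariates with PCA, concatenate the principal components with the low-frequency covariates, and feed the result to a Q-network whose outputs score candidate treatments. The network shape, dimensions, and the way features are combined are assumptions for illustration, not the paper's exact algorithm.

```python
# Sketch: PCA-compressed high-frequency covariates + deep Q-network.
import numpy as np
import torch
import torch.nn as nn
from sklearn.decomposition import PCA

n_samples, hf_dim, lf_dim, n_actions, n_components = 256, 120, 5, 3, 8

high_freq = np.random.randn(n_samples, hf_dim)   # e.g., minute-level sensor stream
low_freq = np.random.randn(n_samples, lf_dim)    # e.g., daily covariates

# Reduce the high-frequency block to a few principal components.
pca = PCA(n_components=n_components)
hf_compressed = pca.fit_transform(high_freq)

state = torch.tensor(np.hstack([hf_compressed, low_freq]), dtype=torch.float32)

q_net = nn.Sequential(
    nn.Linear(n_components + lf_dim, 64),
    nn.ReLU(),
    nn.Linear(64, n_actions),   # one Q-value per candidate treatment
)

q_values = q_net(state)
greedy_actions = q_values.argmax(dim=1)   # estimated optimal treatments
print(q_values.shape, greedy_actions[:5])
```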
Nowadays, time-stamped web documents related to a general news query flood the Internet, and timeline summarization aims to concisely summarize the evolution trajectory of events along the timeline. Unlike traditional document summarization, timeline summarization needs to model the time-series information of the input events and summarize important events in chronological order. To tackle this challenge, in this paper we propose a Unified Timeline Summarizer (UTS) that can generate abstractive and extractive timeline summaries in time order. Concretely, in the encoder part, we propose a graph-based event encoder that relates multiple events according to their content dependency and learns a global representation of each event. In the decoder part, to ensure the chronological order of the abstractive summary, we propose to extract the event-level attention feature during generation, with sequential information retained, and use it to simulate the evolutionary attention of the ground-truth summary. The event-level attention can also be used to assist extractive summarization, where the extracted summary likewise comes in time order. We augment the previous Chinese large-scale timeline summarization dataset and collect a new English timeline dataset. Extensive experiments conducted on these datasets and on the out-of-domain Timeline 17 dataset show that UTS achieves state-of-the-art performance in terms of both automatic and human evaluations.
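To give a flavor of a graph-based event encoder, the sketch below applies one round of masked self-attention over event representations, where the mask encodes content dependencies between events. The specific layer design, dimensions, and adjacency construction are assumptions for illustration, not the UTS architecture.

```python
# Sketch: one masked self-attention step over event nodes (illustrative only).
import torch
import torch.nn.functional as F

n_events, d = 6, 32
event_repr = torch.randn(n_events, d)                        # per-event representations
adjacency = (torch.rand(n_events, n_events) > 0.5).float()   # content-dependency graph
adjacency.fill_diagonal_(1.0)                                # every event attends to itself

scores = event_repr @ event_repr.t() / d ** 0.5
scores = scores.masked_fill(adjacency == 0, float("-inf"))
weights = F.softmax(scores, dim=-1)
global_event_repr = weights @ event_repr                     # graph-contextualized encoding
print(global_event_repr.shape)
```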
Hybrid unmanned aerial vehicles (UAVs) integrate the efficient forward flight of fixed-wing UAVs with the vertical takeoff and landing (VTOL) capabilities of multicopter UAVs. This paper presents the modeling, control, and simulation of a new type of hybrid micro-small UAV, coined the lifting-wing quadcopter. The airframe orientation of the lifting wing needs to tilt to a specific angle, often within $45$ degrees, neither nearly $90$ nor approximately $0$ degrees. Compared with some convertiplane and tail-sitter UAVs, the lifting-wing quadcopter has a highly reliable structure, robust wind resistance, low cruise speed, and reliable transition flight, making it a promising candidate for fully autonomous operation outdoors or in some confined indoor airspace. In the modeling part, the forces and moments generated by both the lifting wing and the rotors are considered. Based on the established model, a unified controller for the full flight phase is designed. The controller treats hovering and forward flight uniformly and enables a continuous transition between the two modes, depending on the velocity command. Moreover, by taking rotor thrust and aerodynamic force into consideration simultaneously, an optimization-based control allocation is utilized to realize cooperative control for energy saving. Finally, comprehensive Hardware-In-the-Loop (HIL) simulations are performed to verify the advantages of the designed aircraft and the proposed controller.
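Optimization-based control allocation can be sketched as a regularized least-squares problem that maps a desired body wrench to actuator commands while penalizing control effort (the energy-saving term). The effectiveness matrix, arm length, and weights below are placeholder values for illustration, not the paper's vehicle model, and the sketch omits the aerodynamic-force terms the full allocator would include.

```python
# Sketch: regularized least-squares control allocation (illustrative values).
import numpy as np

# B maps actuator commands u (4 rotor thrusts) to the body wrench [Fz, Mx, My, Mz].
arm, k_m = 0.2, 0.02
B = np.array([
    [1.0,   1.0,   1.0,   1.0],    # total thrust
    [arm,  -arm,  -arm,   arm],    # roll moment
    [arm,   arm,  -arm,  -arm],    # pitch moment
    [k_m,  -k_m,   k_m,  -k_m],    # yaw moment
])
tau_des = np.array([9.81, 0.1, -0.05, 0.0])  # desired wrench from the controller

# Minimize ||B u - tau_des||^2 + lam * ||u||^2 (the effort term favors energy saving).
lam = 0.01
u = np.linalg.solve(B.T @ B + lam * np.eye(4), B.T @ tau_des)
print("rotor commands:", u, "achieved wrench:", B @ u)
```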
Due to their ability to offer more comprehensive information than data from a single view, multi-view (multi-source, multi-modal, multi-perspective, etc.) data are being used more frequently in remote sensing tasks. However, as the number of views grows, the issue of data quality becomes more apparent, limiting the potential benefits of multi-view data. Although recent deep neural network (DNN) based models can learn the weights of the data adaptively, the lack of research on explicitly quantifying the data quality of each view when fusing them renders these models inexplainable and leaves them performing unsatisfactorily and inflexibly in downstream remote sensing tasks. To fill this gap, in this paper, evidential deep learning is introduced to the task of aerial-ground dual-view remote sensing scene classification to model the credibility of each view. Specifically, the theory of evidence is used to calculate an uncertainty value that describes the decision-making risk of each view. Based on this uncertainty, a novel decision-level fusion strategy is proposed to ensure that the view with lower risk obtains more weight, making the classification more credible. On two well-known, publicly available datasets of aerial-ground dual-view remote sensing images, the proposed approach achieves state-of-the-art results, demonstrating its effectiveness. The code and datasets of this article are available at the following address: https://github.com/gaopiaoliang/Evidential.
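The uncertainty-weighted fusion can be sketched as follows: per-view evidence induces a Dirichlet distribution whose total evidence yields an uncertainty mass, and the view with lower uncertainty (lower risk) receives more weight at the decision level. The softplus evidence head and the simple weighting rule are illustrative assumptions; the released code at the repository above is authoritative for the actual fusion strategy.

```python
# Sketch: evidential uncertainty and decision-level fusion of two views.
import torch
import torch.nn.functional as F

num_classes = 5
logits_aerial = torch.randn(8, num_classes)   # outputs of the aerial-view branch
logits_ground = torch.randn(8, num_classes)   # outputs of the ground-view branch


def dirichlet_uncertainty(logits):
    """Map logits to non-negative evidence and a per-sample uncertainty mass."""
    evidence = F.softplus(logits)              # e >= 0
    alpha = evidence + 1.0                     # Dirichlet concentration parameters
    strength = alpha.sum(dim=1, keepdim=True)  # total evidence S
    belief = evidence / strength               # per-class belief masses
    uncertainty = num_classes / strength       # u = K / S, in (0, 1]
    return belief, uncertainty


b_a, u_a = dirichlet_uncertainty(logits_aerial)
b_g, u_g = dirichlet_uncertainty(logits_ground)

# Give the view with lower uncertainty (lower risk) a larger weight.
w_a = (1.0 - u_a) / ((1.0 - u_a) + (1.0 - u_g))
fused_belief = w_a * b_a + (1.0 - w_a) * b_g
prediction = fused_belief.argmax(dim=1)
print(prediction)
```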