Natural language processing (NLP) sees rich mobile applications. To support various language understanding tasks, a foundation NLP model is often fine-tuned in a federated, privacy-preserving setting (FL). This process currently relies on at least hundreds of thousands of labeled training samples from mobile clients; yet mobile users often lack the willingness or expertise to label their data. Such an inadequacy of data labels is known as a few-shot scenario, and it has become the key blocker for mobile NLP applications. For the first time, this work investigates federated NLP in the few-shot scenario (FedFSL). By retrofitting algorithmic advances in pseudo labeling and prompt learning, we first establish a training pipeline that delivers competitive accuracy when only 0.05% (fewer than 100) of the training samples are labeled and the rest are unlabeled. To instantiate the workflow, we further present a system, FFNLP, which addresses the high execution cost with novel designs: (1) curriculum pacing, which injects pseudo labels into the training workflow at a rate commensurate with the learning progress; (2) representational diversity, a mechanism for selecting the most learnable data, for which alone pseudo labels are generated; (3) co-planning of a model's training depth and layer capacity. Together, these designs reduce the training delay, client energy, and network traffic by up to 46.0$\times$, 41.2$\times$ and 3000.0$\times$, respectively. Through algorithm/system co-design, FFNLP demonstrates that FL can be applied to challenging settings where most training samples are unlabeled.
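For intuition, the curriculum-pacing idea can be sketched as confidence-based pseudo-label admission whose volume grows with learning progress. This is only a minimal sketch under our own assumptions; `progress`, `max_fraction`, and the confidence-ranking rule are illustrative knobs, not FFNLP's actual interface.

```python
import torch
import torch.nn.functional as F

def pace_pseudo_labels(model, unlabeled_batch, progress, max_fraction=0.5):
    """Illustrative curriculum pacing: admit more pseudo-labeled samples as
    the learning progress (a scalar in [0, 1], e.g., recent accuracy gain) grows.
    """
    model.eval()
    with torch.no_grad():
        probs = F.softmax(model(unlabeled_batch), dim=-1)   # (N, num_classes)
    confidence, pseudo_labels = probs.max(dim=-1)

    # Admit the top-k most confident samples, where k scales with progress.
    k = int(len(unlabeled_batch) * max_fraction * progress)
    if k == 0:
        return unlabeled_batch[:0], pseudo_labels[:0]
    top = confidence.topk(k).indices
    return unlabeled_batch[top], pseudo_labels[top]
```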
Transformer-based pre-trained models have become the de-facto solution for NLP tasks. Fine-tuning such pre-trained models for downstream tasks often requires a tremendous amount of data that is both private and labeled. However, in reality: 1) such private data cannot be collected centrally and is distributed across mobile devices, and 2) well-curated labeled data is scarce. To tackle those issues, we first define a data generator for federated few-shot learning tasks, which covers the quantity and distribution of scarce labeled data in a realistic setting. Then we propose AUG-FedPrompt, a prompt-based federated learning algorithm that carefully annotates abundant unlabeled data for data augmentation. AUG-FedPrompt can perform on par with full-set fine-tuning given very little initial labeled data.
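As a toy illustration of prompt-based annotation of unlabeled data, one could query a masked-LM backbone with a cloze template and a verbalizer, keeping the predicted label word as the pseudo label. The template, the sentiment verbalizer, and the confidence handling below are assumptions for illustration, not the paper's exact pipeline.

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

# Hypothetical verbalizer for a sentiment task: label word -> class id.
verbalizer = {"great": 1, "terrible": 0}

def annotate(text):
    """Pseudo-label one unlabeled example with a cloze prompt (illustrative)."""
    prompt = f"{text} It was {tokenizer.mask_token}."
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]
    word_ids = tokenizer.convert_tokens_to_ids(list(verbalizer))
    scores = logits[0, mask_pos, word_ids].softmax(-1)
    label_word = list(verbalizer)[scores.argmax()]
    return verbalizer[label_word], scores.max().item()  # (pseudo label, confidence)
```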
Since model fine-tuning is central to modern NLP, we set out to improve its efficiency. Motivated by the observation that training examples are often redundant, we design an algorithm that filters examples in a streaming fashion. Our key techniques are two: (1) automatically determining a training-loss threshold for skipping the backward pass; (2) maintaining a meta predictor to further skip the forward pass. Across a variety of benchmarks, our algorithm reduces the required training examples by up to 5$\times$ while on average seeing only minor degradation, thereby rendering fine-tuning a three-pass process. Our method is effective even within a single training epoch, where each training example is encountered only once. It is simple to implement and compatible with existing fine-tuning optimizations such as layer freezing.
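As we read the abstract, the two filters could look roughly like the sketch below: a loss threshold that skips the backward pass, and a lightweight meta predictor that skips the forward pass entirely. The threshold-comparison logic and the `meta_predictor` interface (a callable with an `update` method) are our assumptions, not the paper's actual implementation.

```python
def filtered_step(model, optimizer, loss_fn, meta_predictor, x, y, loss_threshold):
    """One streaming training step with two example filters (illustrative)."""
    # Filter 2: skip the forward pass when the meta predictor estimates the
    # example is already well learned (assumed interface: meta_predictor(x) -> est. loss).
    if meta_predictor(x) < loss_threshold:
        return None

    # Forward pass.
    loss = loss_fn(model(x), y)

    # Filter 1: skip the backward pass when the true loss is below the threshold.
    if loss.item() >= loss_threshold:
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    meta_predictor.update(x, loss.item())  # keep the predictor in sync (assumed API)
    return loss.item()
```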
Natural language processing (NLP) inference is seeing growing adoption in mobile applications, where on-device inference is desirable for preserving user data privacy and avoiding network round trips. However, the unprecedented size of NLP models stresses both latency and memory, the two key resources of a mobile device. To meet a target latency, holding the whole model in memory launches execution as soon as possible, but it inflates an app's memory footprint by several times, limiting the benefit to only a few inferences before the model is reclaimed by mobile memory management. On the other hand, loading the model from storage on demand incurs IO of several seconds, far exceeding the delay range acceptable to users; pipelining layer-wise model loading and execution does not hide the IO either, due to the high skew between IO and computation delays. To this end, we propose Speedy Transformer Inference (STI). Built on the key idea of maximizing IO/compute resource utilization on the most important parts of the model, STI reconciles the latency/memory tension via two novel techniques. First, model sharding: STI treats model parameters as independently tunable shards and profiles their importance to accuracy. Second, elastic pipeline planning with a preload buffer: STI instantiates an IO/compute pipeline and uses a small buffer of preloaded shards to bootstrap execution without stalling in early stages; it judiciously selects, tunes, and assembles shards according to their importance for resource-elastic execution, maximizing inference accuracy. We build STI atop two commodity SoCs and evaluate it on a wide range of NLP tasks, under practical target latencies, and on both CPU and GPU. We demonstrate that STI delivers high accuracy at substantially lower memory, outperforming competitive baselines.
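For a rough feel of the resource-elastic shard selection, the sketch below greedily picks the most important shard variants that fit an IO budget. The dictionary layout, the greedy policy, and treating larger variants as higher fidelity are our assumptions for illustration; STI's actual planner is more involved.

```python
def plan_shards(shards, io_budget_bytes):
    """Greedy, importance-first shard selection under an IO budget (illustrative).

    `shards` is a list of dicts like
    {"name": str, "importance": float, "variants": [(size_bytes, bitwidth), ...]},
    an assumed structure, not STI's real data model.
    """
    plan, spent = [], 0
    for shard in sorted(shards, key=lambda s: s["importance"], reverse=True):
        # Prefer the largest (highest-fidelity) variant that still fits the budget.
        for size, bitwidth in sorted(shard["variants"], reverse=True):
            if spent + size <= io_budget_bytes:
                plan.append((shard["name"], bitwidth))
                spent += size
                break
    return plan
```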
Spoken language understanding (SLU) tasks have been studied for many decades in the speech research community, but have not received as much attention as lower-level tasks like speech and speaker recognition. In particular, there are not nearly as many SLU task benchmarks, and many of the existing ones use data that is not freely available to all researchers. Recent work has begun to introduce such benchmark datasets for several tasks. In this work, we introduce several new annotated SLU benchmark tasks based on freely available speech data, which complement existing benchmarks and address gaps in the SLU evaluation landscape. We contribute four tasks: question answering and summarization involve inference over longer speech sequences; named entity localization addresses the speech-specific task of locating the targeted content in the signal; dialog act classification identifies the function of a given speech utterance. We follow the blueprint of the Spoken Language Understanding Evaluation (SLUE) benchmark suite. In order to facilitate the development of SLU models that leverage the success of pre-trained speech representations, we will be publishing for each task (i) annotations for a relatively small fine-tuning set, (ii) annotated development and test sets, and (iii) baseline models for easy reproducibility and comparisons. In this work, we present the details of data collection and annotation and the performance of the baseline models. We also perform sensitivity analysis of pipeline models' performance (speech recognizer + text model) to the speech recognition accuracy, using more than 20 state-of-the-art speech recognition models.
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice as well as bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical imaging analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants stated that they did not have enough time for method development (32%), and 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based. Of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% of the participants performed ensembling, based on multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
Deraining is an important and fundamental computer vision task that aims to remove rain streaks and rain accumulation from images or videos captured on rainy days. Existing deraining methods usually make heuristic assumptions about the rain model, which forces them to employ complex optimization or iterative refinement to achieve high recovery quality. However, this leads to time-consuming methods and hurts their effectiveness on rain patterns that deviate from the assumptions. In this paper, we propose a simple yet efficient deraining method by formulating deraining as a predictive filtering problem, without complex rain-model assumptions. Specifically, we identify spatially-variant predictive filtering (SPFilt), which adaptively predicts proper kernels via a deep network to filter each individual pixel. Since the filtering can be implemented via well-accelerated convolution, our method can be significantly efficient. We further propose EfDeRain+, which contains three main contributions to handle residual rain traces, multi-scale rain streaks, and diverse rain patterns without sacrificing efficiency. First, we propose uncertainty-aware cascaded predictive filtering (UC-PFilt), which can identify the difficulty of reconstructing clean pixels via the predicted kernels and effectively remove residual rain traces. Second, we design weight-sharing multi-scale dilated filtering (WS-MS-DFilt) to handle multi-scale rain streaks without harming efficiency. Third, to bridge the gap across diverse rain patterns, we propose a novel data augmentation method (i.e., RainMix) to train our deep models. By combining all contributions with thorough analysis of the different variants, our final method outperforms baseline methods on four single-image deraining datasets and one video deraining dataset in terms of both recovery quality and speed.
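The core filtering step can be illustrated as applying a per-pixel predicted kernel over a local neighborhood; the kernel-prediction network itself is stubbed out here, and the softmax-normalized kernel layout is an assumption, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def apply_predicted_kernels(image, kernels, k=3):
    """Filter each pixel with its own predicted k x k kernel (illustrative).

    image:   (B, C, H, W) rainy input
    kernels: (B, k*k, H, W) per-pixel kernels from a prediction network,
             assumed normalized (e.g., softmax over dim=1).
    """
    b, c, h, w = image.shape
    # Gather k*k neighborhoods for every pixel: (B, C*k*k, H*W).
    patches = F.unfold(image, kernel_size=k, padding=k // 2)
    patches = patches.view(b, c, k * k, h, w)
    # Weighted sum of each neighborhood with that pixel's kernel.
    return (patches * kernels.unsqueeze(1)).sum(dim=2)  # (B, C, H, W)
```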
Pancreas segmentation in medical imaging data is crucial for clinical pancreas diagnosis and treatment. However, the large population variation in pancreas shape and volume causes enormous segmentation difficulties, even for state-of-the-art algorithms based on fully convolutional neural networks (FCNs). Specifically, pancreas segmentation suffers from the loss of spatial information in 2D methods and the high computational cost of 3D methods. To alleviate these problems, we propose a probabilistic-map-guided bi-directional recurrent UNet (PBR-UNet) architecture, which fuses intra-slice information and inter-slice probabilistic maps into a local 3D hybrid regularization scheme, followed by bi-directional recurrent network optimization. The PBR-UNet method consists of an initial estimation module that efficiently extracts pixel-level probabilistic maps and a primary segmentation module that propagates the hybrid information through a 2.5D U-Net architecture. Specifically, local 3D information is inferred by combining the input image with the probabilistic maps of adjacent slices into multi-channel hybrid data, and then hierarchically aggregating the hybrid information throughout the segmentation network. In addition, a bi-directional recurrent optimization mechanism is developed to update the hybrid information in both the forward and backward directions, which allows the proposed network to make full use of local contextual information. Quantitative and qualitative evaluations on the NIH Pancreas-CT dataset show that our proposed PBR-UNet method achieves better segmentation results with lower computational cost than other state-of-the-art methods.
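A minimal sketch of how the multi-channel hybrid input for one slice might be assembled is shown below; the channel ordering and the handling of border slices are our assumptions for illustration.

```python
import numpy as np

def build_hybrid_input(volume, prob_maps, i):
    """Assemble an illustrative 2.5D hybrid input for slice i.

    volume:    (D, H, W) CT volume
    prob_maps: (D, H, W) pixel-level probabilistic maps from the initial
               estimation module
    Returns a (3, H, W) array: [prob map of slice i-1, image slice i,
    prob map of slice i+1]; border slices reuse their own map.
    """
    prev_p = prob_maps[max(i - 1, 0)]
    next_p = prob_maps[min(i + 1, volume.shape[0] - 1)]
    return np.stack([prev_p, volume[i], next_p], axis=0)
```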
Neural networks are known to be a class of highly expressive functions able to fit even random input-output mappings with 100% accuracy. In this work we present properties of neural networks that complement this aspect of expressivity. Using tools from Fourier analysis, we highlight a learning bias of deep networks towards low-frequency functions, i.e., functions that vary globally without local fluctuations, which manifests itself as a frequency-dependent learning speed. Intuitively, this property is in line with the observation that over-parameterized networks prioritize learning simple patterns that generalize across data samples. We also investigate the role of the shape of the data manifold by presenting empirical and theoretical evidence that, somewhat counter-intuitively, learning higher frequencies gets easier with increasing manifold complexity.
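The frequency-dependent learning speed is easy to probe in a small experiment: fit an MLP to a superposition of a low and a high frequency and watch the spectrum of its prediction over training. The architecture, target signal, and hyperparameters below are our own choices, not the paper's setup.

```python
import numpy as np
import torch
import torch.nn as nn

# Target: sum of a low (2 cycles) and a high (20 cycles) frequency on [0, 1].
x = torch.linspace(0, 1, 512).unsqueeze(1)
y = torch.sin(2 * np.pi * 2 * x) + torch.sin(2 * np.pi * 20 * x)

net = nn.Sequential(nn.Linear(1, 256), nn.ReLU(),
                    nn.Linear(256, 256), nn.ReLU(),
                    nn.Linear(256, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2001):
    loss = ((net(x) - y) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    if step % 500 == 0:
        # Amplitude of the prediction at the two target frequencies (FFT bins 2 and 20):
        # the low-frequency component is typically captured much earlier.
        spec = np.abs(np.fft.rfft(net(x).detach().numpy().ravel()))
        print(f"step {step}: |f=2| = {spec[2]:.1f}, |f=20| = {spec[20]:.1f}")
```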
We address the challenge of building domain-specific knowledge models for industrial use cases, where labelled data and taxonomic information are initially scarce. Our focus is on inductive link prediction models as a basis for practical tools that support knowledge engineers in exploring text collections and in discovering and linking new (so-called open-world) entities to the knowledge graph. We argue that, although neural approaches to text mining have yielded impressive results in the past years, current benchmarks do not properly reflect the typical challenges encountered in the industrial wild. Therefore, our first contribution is an open benchmark coined IRT2 (inductive reasoning with text) that (1) covers knowledge graphs of varying sizes (including very small ones), (2) comes with incidental, low-quality text mentions, and (3) includes not only triple completion but also ranking, which is relevant for supporting experts with discovery tasks. We investigate two neural models for inductive link prediction: one based on end-to-end learning and one that learns from the knowledge graph and text data in separate steps. These models compete with a strong bag-of-words baseline. The results show a significant performance advance for the neural approaches on linking as soon as the available graph data decreases. For ranking, the results are promising, and the neural approaches outperform the sparse retriever by a wide margin.