Self-supervised learning (SSL) speech models generate meaningful representations of given clips and achieve impressive performance across various downstream tasks. Model extraction attack (MEA) often refers to an adversary stealing the functionality of the victim model with only query access. In this work, we study the MEA problem against SSL speech models with a small number of queries. We propose a two-stage framework to extract the model. In the first stage, SSL is conducted on a large-scale unlabeled corpus to pre-train a small speech model. In the second stage, we actively sample a small portion of clips from the unlabeled corpus and query the target model with these clips to acquire their representations as labels for the small model's second-stage training. Experimental results show that our sampling methods can effectively extract the target model without knowing any information about its architecture.
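A minimal sketch of the second extraction stage described above: query the victim model for frame-level representations and regress the student onto them. The names `victim_api` and `student` and the choice of L1 loss are illustrative assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn.functional as F

def extraction_step(student, victim_api, waveforms, optimizer):
    """One training step of the small (student) model on victim representations."""
    with torch.no_grad():
        targets = victim_api(waveforms)   # (B, T, D) representations via query access only
    preds = student(waveforms)            # student produces same-shaped representations
    loss = F.l1_loss(preds, targets)      # regress onto the victim's outputs
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```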
Recently, pioneering work has found that speech pre-trained models can solve full-stack speech processing tasks, because the model utilizes its bottom layers to learn speaker-related information and its top layers to encode content-related information. Since the network capacity is limited, we believe the speech recognition performance could be further improved if the model were dedicated to learning audio content information. To this end, we propose Intermediate Layer Supervision for Self-Supervised Learning (ILS-SSL), which forces the model to concentrate on content information as much as possible by adding an additional SSL loss on the intermediate layers. Experiments on the LibriSpeech test-other set show that our method outperforms HuBERT significantly, achieving a 23.5%/11.6% relative word error rate reduction in the w/o language model setting for the base/large models. A detailed analysis shows that the bottom layers of our model have a better correlation with phonetic units, which is consistent with our intuition and explains the success of our method for ASR.
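A rough sketch of intermediate-layer supervision as described above: the same masked-prediction loss is applied to an intermediate Transformer layer in addition to the top layer. The layer index, the loss weight, and the prediction heads are assumptions for illustration.

```python
import torch
import torch.nn as nn

class ILSLoss(nn.Module):
    def __init__(self, hidden_dim, num_targets, intermediate_layer=4, weight=1.0):
        super().__init__()
        self.intermediate_layer = intermediate_layer
        self.weight = weight
        self.top_head = nn.Linear(hidden_dim, num_targets)  # head on the final layer
        self.mid_head = nn.Linear(hidden_dim, num_targets)  # extra head on an intermediate layer
        self.ce = nn.CrossEntropyLoss()

    def forward(self, layer_outputs, masked_targets):
        # layer_outputs: list of (B, T, H) tensors, one per Transformer layer
        # masked_targets: (B, T) pseudo-label ids for the masked-prediction task
        top = self.top_head(layer_outputs[-1])
        mid = self.mid_head(layer_outputs[self.intermediate_layer])
        loss = self.ce(top.transpose(1, 2), masked_targets)
        loss = loss + self.weight * self.ce(mid.transpose(1, 2), masked_targets)
        return loss
```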
Self-supervised learning (SSL) has achieved great success in speech recognition, while limited exploration has been attempted for other speech processing tasks. As speech signals contain multi-faceted information including speaker identity, paralinguistics, spoken content, etc., learning universal representations for all speech tasks is challenging. To tackle the problem, we propose a new pre-trained model, WavLM, to solve full-stack downstream speech tasks. WavLM jointly learns masked speech prediction and denoising in pre-training. In this way, WavLM not only keeps the speech content modeling capability through masked speech prediction, but also improves its potential on non-ASR tasks through speech denoising. In addition, WavLM employs a gated relative position bias for the Transformer structure to better capture the sequence ordering of the input speech. We also scale up the training dataset from 60k hours to 94k hours. WavLM Large achieves state-of-the-art performance on the SUPERB benchmark and brings significant improvements for various speech processing tasks on their representative benchmarks. The code and pre-trained models are available at https://aka.ms/wavlm.
Large-scale speech self-supervised learning (SSL) has emerged as a major area of speech processing; however, the computational cost caused by its vast size remains a high barrier for academia. In addition, existing distillation techniques for speech SSL models compress the model by reducing layers, which induces performance degradation in linguistic pattern recognition tasks such as phoneme recognition (PR). In this paper, we propose FitHuBERT, which makes the dimensions thinner across almost all model components and is deeper in layers compared with prior speech SSL distillation works. Moreover, we employ a time-reduction layer to speed up inference and propose a hint-based distillation method to reduce the performance degradation. Compared to HuBERT, our method reduces the model size to 23.8% and the inference time to 35.9%. Also, we achieve a 12.1% word error rate and a 13.3% phoneme error rate on the SUPERB benchmark, which is superior to prior work.
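A sketch of hint-based distillation onto a thinner-and-deeper student: intermediate student hidden states are linearly projected to the teacher's hidden size and matched against paired teacher layers. The layer pairing and the L1 matching loss are illustrative assumptions rather than the paper's exact recipe.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HintDistillLoss(nn.Module):
    def __init__(self, student_dim, teacher_dim, num_hints):
        super().__init__()
        # one projection per matched (student layer, teacher layer) pair
        self.proj = nn.ModuleList(
            nn.Linear(student_dim, teacher_dim) for _ in range(num_hints)
        )

    def forward(self, student_hiddens, teacher_hiddens):
        # both arguments: lists of (B, T, dim) hidden states, already paired layer-to-layer
        loss = 0.0
        for proj, s, t in zip(self.proj, student_hiddens, teacher_hiddens):
            loss = loss + F.l1_loss(proj(s), t.detach())
        return loss / len(self.proj)
```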
Self-supervised speech representation learning has shown promising results in various speech processing tasks. However, the pre-trained models, e.g., HuBERT, are storage-intensive Transformers, limiting their scope of applications under low-resource settings. To this end, we propose LightHuBERT, a once-for-all Transformer compression framework, to find the desired architectures automatically by pruning structured parameters. More precisely, we create a Transformer-based supernet that is nested with thousands of weight-sharing subnets and design a two-stage distillation strategy to leverage the contextualized latent representations from HuBERT. Experiments on automatic speech recognition (ASR) and the SUPERB benchmark show that the proposed LightHuBERT enables over $10^9$ architectures concerning the embedding dimension, attention dimension, head number, feed-forward network ratio, and network depth. LightHuBERT outperforms the original HuBERT on ASR and five SUPERB tasks with the HuBERT size, achieves comparable performance in most tasks with a 29% reduction in parameters, and obtains a $3.5\times$ compression ratio on three SUPERB tasks, e.g., automatic speaker verification, keyword spotting, and intent classification, with a slight accuracy loss. The code and pre-trained models are available at https://github.com/mechanicalsea/lighthubert.
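An illustrative sketch of the once-for-all idea: at each training step a subnet configuration (embedding dimension, heads, feed-forward ratio, depth) is randomly sampled from a search space so that weight-sharing subnets are trained jointly. The value ranges below are assumptions, not LightHuBERT's actual search space.

```python
import random

SEARCH_SPACE = {
    "embed_dim": [512, 640, 768],
    "num_heads": [8, 10, 12],
    "ffn_ratio": [3.0, 3.5, 4.0],
    "depth": [10, 11, 12],
}

def sample_subnet(space=SEARCH_SPACE):
    """Sample one weight-sharing subnet configuration from the supernet search space."""
    depth = random.choice(space["depth"])
    return {
        "embed_dim": random.choice(space["embed_dim"]),
        "depth": depth,
        # per-layer choices let subnets differ layer by layer
        "num_heads": [random.choice(space["num_heads"]) for _ in range(depth)],
        "ffn_ratio": [random.choice(space["ffn_ratio"]) for _ in range(depth)],
    }

if __name__ == "__main__":
    print(sample_subnet())
```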
Self-supervised learning is an emerging machine learning (ML) paradigm. Compared to supervised learning, which leverages high-quality labeled datasets to achieve good performance, self-supervised learning relies on unlabeled datasets to pre-train powerful encoders that can then be treated as feature extractors for various downstream tasks. The huge amount of data and computational resources consumed make the encoders themselves valuable intellectual property of the model owner. Recent research has shown that the copyright of ML models is threatened by model stealing attacks, which aim to train a surrogate model that mimics the behavior of a given model. We empirically show that pre-trained encoders are highly vulnerable to model stealing attacks. However, most of the effort on copyright protection algorithms such as watermarking has concentrated on classifiers, while the intrinsic challenges of copyright protection for pre-trained encoders remain largely unstudied. We fill this gap by proposing SSLGuard, the first watermarking algorithm for pre-trained encoders. Given a clean pre-trained encoder, SSLGuard injects a watermark into it and outputs a watermarked version. A shadow training technique is also applied to preserve the watermark under potential model stealing attacks. Our extensive evaluation shows that SSLGuard is effective in watermark injection and verification, and is robust against model stealing and other watermark removal attacks such as input noising, output perturbing, overwriting, model pruning, and fine-tuning.
Self-supervised learning (SSL) is an increasingly popular ML paradigm that trains models to transform complex inputs into representations without relying on explicit labels. These representations encode similarity structures that enable efficient learning of multiple downstream tasks. Recently, ML-as-a-Service providers have begun offering trained SSL models over inference APIs, which transform user inputs into useful representations for a fee. However, the high cost of training these models and their exposure over APIs both make black-box extraction a realistic security threat. We therefore explore model stealing attacks against SSL. Unlike traditional model extraction on classifiers that output labels, the victim models here output representations, which have significantly higher dimensionality than the low-dimensional prediction scores of classifiers. We construct several novel attacks and find that approaches trained directly on a victim's stolen representations are effective and enable high accuracy for downstream models. We then show that existing defenses against model extraction are inadequate and not easily retrofitted to the specificities of SSL.
The massive growth of self-supervised learning (SSL) has been witnessed in language, vision, speech, and audio domains over the past few years. While discrete label prediction is widely adopted for other modalities, the state-of-the-art audio SSL models still employ reconstruction loss for pre-training. Compared with reconstruction loss, semantic-rich discrete label prediction encourages the SSL model to abstract the high-level audio semantics and discard the redundant details as in human perception. However, a semantic-rich acoustic tokenizer for general audio pre-training is usually not straightforward to obtain, due to the continuous property of audio and the lack of phoneme-like sequences available for speech. To tackle this challenge, we propose BEATs, an iterative audio pre-training framework to learn Bidirectional Encoder representation from Audio Transformers, where an acoustic tokenizer and an audio SSL model are optimized by iterations. In the first iteration, we use random projection as the acoustic tokenizer to train an audio SSL model in a mask and label prediction manner. Then, we train an acoustic tokenizer for the next iteration by distilling the semantic knowledge from the pre-trained or fine-tuned audio SSL model. The iteration is repeated with the hope of mutual promotion of the acoustic tokenizer and audio SSL model. The experimental results demonstrate our acoustic tokenizers can generate discrete labels with rich audio semantics and our audio SSL models achieve state-of-the-art results across various audio classification benchmarks, even outperforming previous models that use significantly more training data and model parameters. Specifically, we set a new state-of-the-art mAP of 50.6% on AudioSet-2M for audio-only models without using any external data, and 98.1% accuracy on ESC-50. The code and pre-trained models are available at https://aka.ms/beats.
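A minimal sketch of a random-projection acoustic tokenizer, as used in the first iteration described above: acoustic features are projected by a frozen random matrix and assigned to the nearest entry of a frozen random codebook, yielding discrete labels for mask-and-predict training. Dimensions and codebook size are illustrative assumptions.

```python
import torch

class RandomProjectionTokenizer:
    def __init__(self, feat_dim=128, code_dim=256, codebook_size=1024, seed=0):
        g = torch.Generator().manual_seed(seed)
        self.proj = torch.randn(feat_dim, code_dim, generator=g)           # frozen projection
        self.codebook = torch.randn(codebook_size, code_dim, generator=g)  # frozen codewords

    @torch.no_grad()
    def __call__(self, feats):
        # feats: (B, T, feat_dim) acoustic features, e.g. log-mel patches
        z = feats @ self.proj                                               # (B, T, code_dim)
        dists = torch.cdist(z, self.codebook.expand(z.size(0), -1, -1))    # (B, T, codebook_size)
        return dists.argmin(dim=-1)                                         # (B, T) discrete labels

if __name__ == "__main__":
    tok = RandomProjectionTokenizer()
    print(tok(torch.randn(2, 100, 128)).shape)
```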
Self-supervised learning (SSL) is seen as a very promising approach with high performance for several downstream speech tasks. Since the parameters of SSL models are generally so large that training and inference require substantial memory and computational cost, it is desirable to produce compact SSL models without significant performance degradation by applying compression methods such as knowledge distillation (KD). Although KD approaches are able to shrink the depth and/or width of SSL model structures, there has been little research on how varying the depth and width affects the internal representations of such small-footprint models. This paper provides an empirical study addressing this question. We investigate SUPERB performance while varying the structure and KD methods so as to keep the number of parameters constant; this allows us to analyze the contribution of the representations introduced by varying the model architecture. Experiments demonstrate that a certain depth is essential for accurately solving content-oriented tasks (e.g., automatic speech recognition), whereas a certain width is necessary for achieving high performance on several speaker-oriented tasks (e.g., speaker identification). Based on these observations, we identify, compared with previous studies, a more compressed model with better performance.
Spoken language understanding (SLU) is a task aiming to extract high-level semantics from spoken utterances. Previous works have investigated the use of speech self-supervised models and textual pre-trained models, which have shown reasonable improvements on various SLU tasks. However, because of the mismatched modalities between speech signals and text tokens, previous methods usually require complex framework designs. This work proposes a simple yet efficient unsupervised paradigm that connects speech and textual pre-trained models, resulting in an unsupervised speech-to-semantic pre-trained model for various tasks in SLU. Specifically, we propose to use unsupervised automatic speech recognition (ASR) as a connector that bridges the different modalities used in speech and textual pre-trained models. Our experiments show that unsupervised ASR itself can improve the representations from speech self-supervised models. More importantly, it is shown to be an efficient connector between speech and textual pre-trained models, improving the performance of five different SLU tasks. Notably, on spoken question answering, we reach state-of-the-art results on the challenging NMSQA benchmark.
Recently, masked prediction pre-training has seen remarkable progress in self-supervised learning (SSL) for speech recognition. It usually requires a codebook obtained in an unsupervised way, making it less accurate and difficult to interpret. We propose two supervision-guided codebook generation approaches to improve automatic speech recognition (ASR) performance as well as pre-training efficiency, either by decoding with a hybrid ASR system to generate phoneme-level alignments (named PBERT), or by performing clustering on the supervised speech features extracted from an end-to-end CTC model (named CTC clustering). Both the hybrid and CTC models are trained on the same small amount of labeled speech as used in fine-tuning. Experiments demonstrate a significant superiority of our methods over various SSL and self-training baselines, with up to 17.0% relative WER reduction. Our pre-trained models also show good transferability to a non-ASR speech task.
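A rough sketch of the CTC-clustering variant described above: k-means is run over frame-level features from a CTC model trained on the small labeled set, and the cluster ids then serve as masked-prediction targets. The feature pooling and clustering settings here are assumptions for illustration.

```python
from sklearn.cluster import MiniBatchKMeans

def build_codebook(ctc_features, num_clusters=500):
    """Fit k-means over frame-level CTC features pooled from the corpus.

    ctc_features: array of shape (N, D).
    """
    kmeans = MiniBatchKMeans(n_clusters=num_clusters, batch_size=10000)
    kmeans.fit(ctc_features)
    return kmeans

def frame_targets(kmeans, utterance_features):
    """Map one utterance's (T, D) features to (T,) discrete pseudo-labels."""
    return kmeans.predict(utterance_features)
```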
Self-supervised learning (SSL) learns knowledge from a large amount of unlabeled data and then transfers that knowledge to a specific problem with a limited amount of labeled data. SSL has achieved promising results in various domains. This work addresses the problem of segment-level general audio SSL and proposes a new transformer-based teacher-student SSL model named ATST. A transformer encoder is developed on top of a recently emerged teacher-student baseline scheme, which largely improves the modeling capability of pre-training. In addition, a new strategy is designed to fully leverage the capability of the transformer. Extensive experiments have been conducted, and the proposed model achieves new state-of-the-art results on almost all downstream tasks.
A leaderboard named the Speech processing Universal PERformance Benchmark (SUPERB), which aims at benchmarking the performance of a shared self-supervised learning (SSL) speech model across various downstream speech tasks, has pushed forward research on speech representation learning. SUPERB demonstrates that speech SSL upstream models improve the performance of various downstream tasks with only minimal adaptation. As the paradigm of a self-supervised upstream model followed by downstream tasks attracts more attention in the speech community, characterizing the adversarial robustness of such a paradigm is of high priority. In this paper, we make the first attempt to investigate the adversarial vulnerability of this paradigm under attacks from both zero-knowledge adversaries and limited-knowledge adversaries. The experimental results show that the paradigm proposed by SUPERB is seriously vulnerable to limited-knowledge adversaries, and that the attacks generated by zero-knowledge adversaries are transferable. An XAB test verifies the imperceptibility of the crafted adversarial attacks.
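A minimal FGSM-style sketch of a limited-knowledge attack on the upstream + downstream pipeline: perturb the input waveform to increase the downstream task loss under an L-infinity budget. The single-step attack and the epsilon value are illustrative assumptions; the paper's actual attacks may differ.

```python
import torch

def fgsm_attack(upstream, downstream, waveform, label, loss_fn, epsilon=1e-3):
    """Return an adversarial waveform crafted against the whole upstream+downstream pipeline."""
    waveform = waveform.clone().detach().requires_grad_(True)
    logits = downstream(upstream(waveform))
    loss = loss_fn(logits, label)
    loss.backward()
    adv = waveform + epsilon * waveform.grad.sign()   # one-step L_inf perturbation
    return adv.clamp(-1.0, 1.0).detach()              # keep a valid waveform range
```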
The sequence length along the time axis is often the dominant factor in the computational cost of self-supervised speech models. Works have been proposed to reduce the sequence length in order to lower the computational cost. However, different downstream tasks have different tolerances for sequence compression, so a model that produces a fixed compressing rate may not fit all tasks. In this work, we introduce a once-for-all (OFA) sequence compression framework for self-supervised speech models that supports a continuous range of compressing rates. The framework is evaluated on various tasks, showing only marginal degradation compared to fixed-compressing-rate variants, with a smooth performance-efficiency trade-off. We further explore adaptive compressing rate learning, demonstrating the ability to select task-specific preferred frame periods without needing a grid search.
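An illustrative sketch of length compression with a run-time rate: average pooling over the time axis with a compressing rate chosen at inference, so one model can serve a range of rates. This is a simplification under stated assumptions, not the paper's exact mechanism.

```python
import torch
import torch.nn.functional as F

def compress_sequence(hidden, rate):
    # hidden: (B, T, D) frame representations; rate: frames merged per output step
    x = hidden.transpose(1, 2)                                        # (B, D, T)
    x = F.avg_pool1d(x, kernel_size=rate, stride=rate, ceil_mode=True)
    return x.transpose(1, 2)                                          # (B, ceil(T / rate), D)

if __name__ == "__main__":
    h = torch.randn(2, 100, 768)
    for rate in (1, 2, 4):
        print(rate, compress_sequence(h, rate).shape)
```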
Through solving pretext tasks, self-supervised learning leverages unlabeled data to extract useful latent representations that replace traditional input features in the downstream task. In audio/speech signal processing, a wide range of features were engineered through decades of research efforts. As it turns out, learning to predict such features (a.k.a. pseudo-labels) has proven to be a particularly relevant pretext task, leading to useful self-supervised representations that are effective for downstream tasks. However, methods and common practices for combining such pretext tasks for better performance on the downstream task have not been explored and understood properly. In fact, the process relies almost exclusively on a computationally heavy experimental procedure, which becomes intractable as the number of pretext tasks increases. This paper introduces a method to select a group of pretext tasks among a set of candidates. The method we propose estimates calibrated weights for the partial losses corresponding to the considered pretext tasks during the self-supervised training process. The experiments conducted on automatic speech recognition, speaker and emotion recognition validate our approach, as the groups selected and weighted with our method perform better than classic baselines, thus facilitating the selection and combination of relevant pseudo-labels for self-supervised representation learning.
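A minimal sketch of combining several pretext-task losses with per-task weights. Here the weights are simply passed in; the proposed method estimates calibrated weights during training, which this stand-in does not reproduce, and the pseudo-label names are hypothetical.

```python
import torch

def combined_pretext_loss(partial_losses, weights):
    # partial_losses / weights: dicts keyed by pseudo-label name, e.g. {"f0": ..., "mfcc": ...}
    total = torch.zeros(())
    for name, loss in partial_losses.items():
        total = total + weights[name] * loss
    return total

if __name__ == "__main__":
    losses = {"f0": torch.tensor(0.7), "mfcc": torch.tensor(1.2), "voicing": torch.tensor(0.4)}
    weights = {"f0": 0.5, "mfcc": 1.0, "voicing": 0.2}
    print(combined_pretext_loss(losses, weights))
```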
Self-supervised learning (SSL) is a powerful technique for learning representations from unlabeled data. Transformer-based models such as HuBERT, which consist of a feature extractor and transformer layers, are leading the field in the speech domain. SSL models are fine-tuned on a wide range of downstream tasks, which involves re-training the majority of the model for each task. Previous studies have introduced adapters, small lightweight modules commonly used in Natural Language Processing (NLP), to adapt pre-trained models to new tasks. However, such efficient tuning techniques only provide adaptation at the transformer layer and fail to perform adaptation at the feature extractor. In this paper, we propose CHAPTER, an efficient tuning method specifically designed for SSL speech models, by applying CNN adapters at the feature extractor. With this method, we fine-tune fewer than 5% of the parameters per task compared to full fine-tuning, while achieving better and more stable performance. We empirically find that adding CNN adapters to the feature extractor helps adaptation on emotion and speaker tasks. For instance, the accuracy of SID is improved from 87.71 to 91.56, and the accuracy of ER is improved by 5%.
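A rough sketch of a CNN adapter: a small bottleneck convolution with a residual connection, inserted after a frozen convolutional layer of the feature extractor so that only the adapter weights are trained. The kernel size and bottleneck width are assumptions.

```python
import torch
import torch.nn as nn

class CNNAdapter(nn.Module):
    def __init__(self, channels, bottleneck=32, kernel_size=3):
        super().__init__()
        self.down = nn.Conv1d(channels, bottleneck, kernel_size, padding=kernel_size // 2)
        self.act = nn.GELU()
        self.up = nn.Conv1d(bottleneck, channels, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        # x: (B, C, T) output of a frozen CNN layer of the feature extractor
        return x + self.up(self.act(self.down(x)))   # residual adapter path
```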
Self-supervised learning (SSL) speech models have achieved unprecedented success in speech representation learning, but some questions regarding their representation ability remain unanswered. This paper addresses two of them: (1) Can SSL speech models deal with non-speech audio? (2) Would different SSL speech models have insights into diverse aspects of audio features? To answer the two questions, we conduct extensive experiments on abundant speech and non-speech audio datasets to evaluate the representation ability of currently state-of-the-art SSL speech models, which are wav2vec 2.0 and HuBERT in this paper. These experiments were carried out during the NeurIPS 2021 HEAR Challenge, using the standard evaluation pipeline provided by the challenge organizers. The results show that (1) SSL speech models can extract meaningful features from a wide range of non-speech audio, while they may also fail on certain types of datasets; (2) different SSL speech models have insights into different aspects of audio features. These two conclusions provide a foundation for an ensemble of representation models. We further propose an ensemble framework to fuse the embeddings of speech representation models. Our framework outperforms state-of-the-art SSL speech/audio models and achieves generally superior performance across abundant datasets compared with other teams in the HEAR Challenge. Our code is available at https://github.com/tony101105/hear-2021-neurips-challenge-NTU-GURA.
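A minimal sketch of one simple way to fuse embeddings from two upstream models: concatenation along the feature dimension before the downstream head. The fusion operator is an assumption; the proposed framework may weight or combine layers differently.

```python
import torch

def fuse_embeddings(wav2vec2_feats, hubert_feats):
    # both: (B, T, D) frame-level embeddings aligned in time
    assert wav2vec2_feats.shape[:2] == hubert_feats.shape[:2]
    return torch.cat([wav2vec2_feats, hubert_feats], dim=-1)   # (B, T, 2D) fused representation
```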
We introduce DECAR, a self-supervised pre-training approach for learning general-purpose audio representations. Our system is based on clustering: it utilizes an offline clustering step to provide target labels that act as pseudo-labels for solving a prediction task. We build on recent advances in self-supervised learning for computer vision and design a lightweight, easy-to-use self-supervised pre-training scheme. We pre-train the embeddings on a balanced subset of a large-scale audio dataset and transfer those representations to 9 downstream classification tasks, including speech, music, animal sounds, and acoustic scenes. Furthermore, we conduct ablation studies identifying key design choices and make all our code and pre-trained models publicly available.
Speech distortion is a long-standing problem that degrades the performance of supervisedly trained speech processing models. It is high time to enhance the robustness of speech processing models so that they obtain good performance when encountering speech distortions while not hurting the original performance on clean speech. In this work, we propose to improve the robustness of speech processing models through domain adversarial training (DAT). We conduct experiments based on the SUPERB framework on five different speech processing tasks. Since we do not always have knowledge of the distortion types in the speech data, we analyze a binary-domain and a multi-domain setting, where the former treats all distorted speech as one domain and the latter views different distortions as different domains. In contrast to supervised training methods, we obtain promising results in target domains where the speech data is corrupted by different distortions, including new, unseen distortions introduced during testing.
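A compact sketch of the domain adversarial training idea: a gradient reversal layer feeds the pooled representation to a domain classifier, pushing the upstream features to be distortion-invariant. The classifier size and the lambda value are illustrative assumptions.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # reverse and scale the gradient flowing back into the feature extractor
        return -ctx.lambd * grad_output, None

class DomainClassifier(nn.Module):
    def __init__(self, dim, num_domains, lambd=1.0):
        super().__init__()
        self.lambd = lambd
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, num_domains))

    def forward(self, pooled_repr):
        # pooled_repr: (B, dim) utterance-level representation
        reversed_repr = GradReverse.apply(pooled_repr, self.lambd)
        return self.net(reversed_repr)   # domain logits, trained with cross-entropy
```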
Self-supervised learning (SSL) has shown tremendous success in various speech-related downstream tasks, including automatic speech recognition (ASR). The output embeddings of an SSL model are treated as powerful short-term representations of the speech signal. However, in the ASR task, the main objective is to obtain the correct sequence of acoustic units, characters, or byte-pair encodings (BPEs). Encoder-decoder architectures generally excel at such sequence-to-sequence tasks. Therefore, in this paper, we propose a new paradigm that exploits the power of a decoder during self-supervised learning. We use the Hidden-unit BERT (HuBERT) SSL framework to compute the conventional masked prediction loss for the encoder. In addition, we introduce a decoder into the SSL framework and propose a target preparation strategy for the decoder. Finally, we use a multi-task SSL setup in which we jointly optimize the encoder and decoder losses. We hypothesize that the presence of a decoder in the SSL model helps it learn an acoustic-unit-based language model, which may improve the performance of the ASR downstream task. We compare our proposed SSL model with HuBERT and show up to 25% relative improvement in ASR performance by fine-tuning on various LibriSpeech subsets.
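A high-level sketch of the multi-task objective described above: the usual masked-prediction loss on the encoder is combined with a sequence cross-entropy loss from the added decoder. The loss weighting and target preparation are assumptions here, not the paper's exact setup.

```python
import torch
import torch.nn.functional as F

def multitask_ssl_loss(encoder_masked_logits, masked_targets,
                       decoder_logits, decoder_targets, decoder_weight=0.5):
    # encoder_masked_logits: (N_masked, num_units) predictions at masked frames
    # decoder_logits: (B, U, vocab) autoregressive predictions over prepared target units
    enc_loss = F.cross_entropy(encoder_masked_logits, masked_targets)
    dec_loss = F.cross_entropy(decoder_logits.transpose(1, 2), decoder_targets)
    return enc_loss + decoder_weight * dec_loss   # jointly optimized encoder + decoder losses
```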