Self-supervised speech representation learning has shown promising results in various speech processing tasks. However, pre-trained models such as HuBERT are storage-intensive Transformers, which limits their range of applications under low-resource settings. To this end, we propose LightHuBERT, a once-for-all Transformer compression framework, to find the desired architectures automatically by pruning structured parameters. More precisely, we create a Transformer-based supernet that is nested with thousands of weight-sharing subnets, and design a two-stage distillation strategy to leverage the contextualized latent representations of HuBERT. Experiments on automatic speech recognition (ASR) and the SUPERB benchmark show that the proposed LightHuBERT enables over $10^9$ architectures covering the embedding dimension, attention dimension, head number, feed-forward network ratio, and network depth. At the HuBERT size, LightHuBERT outperforms the original HuBERT on ASR and five SUPERB tasks, achieves comparable performance on most tasks with a 29% reduction in parameters, and obtains a $3.5\times$ compression ratio on three SUPERB tasks, e.g., automatic speaker verification, keyword spotting, and intent classification, with only a slight accuracy loss. The code and pre-trained models are available at https://github.com/mechanicalsea/lighthubert.
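To make the supernet idea concrete, the sketch below samples one weight-sharing subnet configuration from a structured search space over the dimensions the abstract names (embedding dimension, head number, feed-forward ratio, depth). The value grids are illustrative assumptions, not LightHuBERT's actual choices; per-layer choices are what multiply the space toward the reported $10^9$ scale.

```python
import random

# Hypothetical search space over the structured dimensions named above;
# the concrete value grids are assumptions for illustration only.
SEARCH_SPACE = {
    "embed_dim": [512, 640, 768],
    "heads":     [8, 10, 12],
    "ffn_ratio": [3.5, 4.0],
    "depth":     [10, 11, 12],
}

def sample_subnet(space):
    """Draw one subnet configuration; per-layer head/FFN choices make the
    number of reachable architectures grow exponentially with depth."""
    depth = random.choice(space["depth"])
    return {
        "embed_dim": random.choice(space["embed_dim"]),
        "heads":     [random.choice(space["heads"]) for _ in range(depth)],
        "ffn_ratio": [random.choice(space["ffn_ratio"]) for _ in range(depth)],
        "depth":     depth,
    }

print(sample_subnet(SEARCH_SPACE))
```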
Large-scale speech self-supervised learning (SSL) has emerged as a major field of speech processing; however, the computational cost caused by its huge scale is a high barrier for academia. Furthermore, existing distillation techniques for speech SSL models compress the model by reducing the number of layers, which induces performance degradation on linguistic pattern recognition tasks such as phoneme recognition (PR). In this paper, we propose FitHuBERT, which is thinner in dimension throughout almost all model components and deeper in layers compared to prior speech SSL distillation work. Moreover, we employ a time-reduction layer to speed up inference and propose a hint-based distillation method to reduce performance degradation. Compared with HuBERT, our method reduces the model to 23.8% in size and 35.9% in inference time. Also, we achieve a 12.1% word error rate and a 13.3% phoneme error rate on the SUPERB benchmark, which is superior to prior work.
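One way to realize the time-reduction layer mentioned above is to concatenate adjacent frames and project back to the model dimension, halving the sequence the Transformer must process. This is a minimal sketch of that common construction, not necessarily FitHuBERT's exact layer:

```python
import torch
import torch.nn as nn

class TimeReduction(nn.Module):
    """Shorten the time axis by stacking `stride` adjacent frames and
    projecting the concatenation back to the model dimension."""
    def __init__(self, dim: int, stride: int = 2):
        super().__init__()
        self.stride = stride
        self.proj = nn.Linear(dim * stride, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (B, T, D)
        B, T, D = x.shape
        T = T - T % self.stride                 # drop ragged tail frames
        x = x[:, :T].reshape(B, T // self.stride, D * self.stride)
        return self.proj(x)

x = torch.randn(4, 100, 256)
print(TimeReduction(256)(x).shape)              # torch.Size([4, 50, 256])
```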
Self-supervised learning (SSL) has achieved great success in speech recognition, while limited exploration has been attempted for other speech processing tasks. Since a speech signal contains multi-faceted information including speaker identity, paralinguistics, spoken content, etc., learning universal representations for all speech tasks is challenging. To tackle this problem, we propose a new pre-trained model, WavLM, to solve full-stack downstream speech tasks. WavLM jointly learns masked speech prediction and denoising during pre-training. By this means, WavLM not only keeps the speech content modeling capability through masked speech prediction, but also improves its potential on non-ASR tasks through speech denoising. In addition, WavLM employs a gated relative position bias for the Transformer structure to better capture the sequence ordering of input speech. We also scale up the training dataset from 60k hours to 94k hours. WavLM Large achieves state-of-the-art performance on the SUPERB benchmark, and brings significant improvements to various speech processing tasks on their representative benchmarks. The code and pre-trained models are available at https://aka.ms/wavlm.
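The denoising half of the objective needs simulated noisy or overlapped inputs whose prediction targets still come from the clean utterance. A minimal sketch of such input construction follows; the SNR value and mixing recipe are assumptions, not WavLM's exact simulation pipeline:

```python
import torch

def make_noisy_input(wave: torch.Tensor, interferer: torch.Tensor,
                     snr_db: float = 5.0) -> torch.Tensor:
    """Overlap a clean utterance with noise or another speaker at a given
    SNR; the model is trained to predict the CLEAN speech's pseudo-labels
    from this corrupted input."""
    power = wave.pow(2).mean()
    noise_power = interferer.pow(2).mean().clamp_min(1e-8)
    scale = torch.sqrt(power / (noise_power * 10 ** (snr_db / 10)))
    return wave + scale * interferer

clean = torch.randn(16000)   # 1 second of "speech" at 16 kHz (stand-in)
other = torch.randn(16000)   # interfering speaker or noise (stand-in)
noisy = make_noisy_input(clean, other)
```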
Recently, pioneering work has found that speech pre-trained models can solve full-stack speech processing tasks, because the model utilizes its bottom layers to learn speaker-related information and its top layers to encode content-related information. Since network capacity is limited, we argue that speech recognition performance could be further improved if the model were dedicated to learning audio content information. To this end, we propose Intermediate Layer Supervision for Self-Supervised Learning (ILS-SSL), which forces the model to concentrate on content information as much as possible by adding an additional SSL loss on the intermediate layers. Experiments on the LibriSpeech test-other set show that our method significantly outperforms HuBERT, achieving a 23.5%/11.6% relative word error rate reduction in the without-language-model setting for base/large models. Detailed analysis shows that the bottom layers of our model correlate better with phonetic units, which is consistent with our intuition and explains the success of our method for ASR.
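The intermediate-layer supervision amounts to summing the usual masked-prediction loss over a few chosen layers in addition to the top one. In this sketch the supervised layer indices and the shared projection head are assumptions:

```python
import torch
import torch.nn.functional as F

def ils_ssl_loss(layer_outputs, proj, targets, mask, inter_layers=(3, 7)):
    """Top-layer masked-prediction loss plus extra SSL losses on selected
    intermediate layers.

    layer_outputs: list of (B, T, D) hidden states, one per Transformer layer
    proj:          linear head mapping D -> number of discrete units
    targets:       (B, T) pseudo-label ids; mask: (B, T) bool masked frames
    """
    def masked_ce(hidden):
        return F.cross_entropy(proj(hidden[mask]), targets[mask])

    loss = masked_ce(layer_outputs[-1])      # standard top-layer SSL loss
    for l in inter_layers:                   # additional intermediate losses
        loss = loss + masked_ce(layer_outputs[l])
    return loss

layers = [torch.randn(2, 30, 64) for _ in range(12)]
proj = torch.nn.Linear(64, 100)
tgt = torch.randint(0, 100, (2, 30))
msk = torch.rand(2, 30) < 0.5
print(ils_ssl_loss(layers, proj, tgt, msk))
```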
The sequence length along the time axis is often the dominant factor in the computational cost of self-supervised speech models. Prior work has proposed reducing the sequence length to lower the computational cost. However, different downstream tasks have different tolerances to sequence compression, so a model that produces a fixed compression rate may not fit all tasks. In this work, we introduce a once-for-all (OFA) sequence compression framework for self-supervised speech models that supports a continuous range of compression rates. The framework is evaluated on various tasks, showing marginal degradation compared to the fixed-compression-rate variants, with a smooth performance-efficiency trade-off. We further explore adaptive compression rate learning, demonstrating the ability to select task-specific preferred frame periods without needing a grid search.
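A continuous range of compression rates can be supported at inference time by pooling features to an arbitrary output length; the sketch below uses adaptive average pooling as one simple such operator, not necessarily the paper's:

```python
import torch
import torch.nn.functional as F

def compress_sequence(x: torch.Tensor, rate: float) -> torch.Tensor:
    """Downsample a (B, T, D) feature sequence by an arbitrary rate."""
    B, T, D = x.shape
    out_len = max(1, round(T / rate))
    return F.adaptive_avg_pool1d(x.transpose(1, 2), out_len).transpose(1, 2)

feats = torch.randn(2, 100, 768)
for rate in (1.5, 2.0, 3.3):            # any rate, no per-rate model needed
    print(rate, compress_sequence(feats, rate).shape)
```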
Recently, masked prediction pre-training has seen remarkable progress in self-supervised learning (SSL) for speech recognition. It usually requires a codebook obtained in an unsupervised way, making it less accurate and difficult to interpret. We propose two supervision-guided codebook generation approaches to improve automatic speech recognition (ASR) performance as well as pre-training efficiency: either decoding with a hybrid ASR system to generate phoneme-level alignments (named PBERT), or performing clustering on supervised speech features extracted from an end-to-end CTC model (named CTC clustering). Both the hybrid and CTC models are trained on the same small amount of labeled speech as used in fine-tuning. Experiments demonstrate a significant superiority of our methods over various SSL and self-training baselines, with up to 17.0% relative WER reduction. Our pre-trained models also show good transferability to non-ASR speech tasks.
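For the CTC-clustering variant, the codebook is simply k-means run over frame-level features of the supervised model, and the cluster ids become masked-prediction targets. A hedged sketch with stand-in features follows (the real pipeline would extract them from the CTC encoder, and the cluster count here is an assumption):

```python
import numpy as np
from sklearn.cluster import KMeans

# Stand-in for frame-level features extracted from a supervised CTC encoder.
frames = np.random.randn(50_000, 768).astype(np.float32)

kmeans = KMeans(n_clusters=500, n_init=4, random_state=0).fit(frames)
pseudo_labels = kmeans.predict(frames)   # per-frame codebook ids
# These ids then serve as targets for masked prediction pre-training.
```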
Self-supervised learning (SSL) is seen as a very promising approach with high performance on several downstream speech tasks. Since the parameters of SSL models are generally so large that training and inference require substantial memory and computational cost, it is desirable to produce compact SSL models without significant performance degradation by applying compression methods such as knowledge distillation (KD). Although KD methods are able to shrink the depth and/or width of SSL model structures, there has been little research on how varying the depth and width impacts the internal representation of the small-footprint model. This paper provides an empirical study that addresses this question. We investigate the performance on SUPERB while varying the structure and KD methods so as to keep the number of parameters constant; this allows us to analyze the contribution of the representations introduced by varying the model architecture. Experiments demonstrate that a certain depth is essential for solving content-oriented tasks (e.g., automatic speech recognition) accurately, whereas a certain width is necessary for achieving high performance on several speaker-oriented tasks (e.g., speaker identification). Based on these observations, we identify more compressed models with better performance than previous studies.
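Trading depth against width at a constant parameter count can be reasoned about with the standard per-layer Transformer parameter formula; the sketch below uses the usual approximation (four d-by-d attention projections plus a 4x feed-forward block) and ignores biases and embeddings, which is a simplifying assumption:

```python
def transformer_params(depth: int, d: int, ffn_ratio: int = 4) -> int:
    """Approximate encoder parameter count: per layer, 4*d*d for the
    Q/K/V/output projections plus 2*d*(ffn_ratio*d) for the FFN."""
    per_layer = 4 * d * d + 2 * d * ffn_ratio * d
    return depth * per_layer

# Roughly iso-parameter configurations: deep-narrow vs. shallow-wide.
for depth, d in [(12, 384), (6, 543), (3, 768)]:
    print(depth, d, f"{transformer_params(depth, d) / 1e6:.1f}M")
```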
Self-supervised approaches for speech representation learning are challenged by three unique problems: (1) there are multiple sound units in each input utterance, (2) there is no lexicon of input sound units during the pre-training phase, and (3) sound units have variable lengths with no explicit segmentation. To deal with these three problems, we propose the Hidden-Unit BERT (HuBERT) approach for self-supervised speech representation learning, which utilizes an offline clustering step to provide aligned target labels for a BERT-like prediction loss. A key ingredient of our approach is applying the prediction loss over the masked regions only, which forces the model to learn a combined acoustic and language model over the continuous inputs. HuBERT relies primarily on the consistency of the unsupervised clustering step rather than the intrinsic quality of the assigned cluster labels. Starting with a simple k-means teacher of 100 clusters, and using two iterations of clustering, the HuBERT model either matches or improves upon the state-of-the-art wav2vec 2.0 performance on the Librispeech (960h) and Libri-light (60,000h) benchmarks with 10min, 1h, 10h, 100h, and 960h fine-tuning subsets. Using a 1B parameter model, HuBERT shows up to 19% and 13% relative WER reduction on the more challenging dev-other and test-other evaluation subsets.
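The key design choice, applying the prediction loss over masked regions only, reduces to boolean indexing before a cross-entropy; a minimal sketch with assumed tensor shapes:

```python
import torch
import torch.nn.functional as F

def hubert_masked_loss(logits, targets, mask):
    """Prediction loss computed over masked frames only.

    logits:  (B, T, C) scores over C cluster units
    targets: (B, T)    frame-level assignments from offline k-means
    mask:    (B, T)    True where input frames were masked
    """
    return F.cross_entropy(logits[mask], targets[mask])

B, T, C = 2, 50, 100   # 100 clusters, as in the simple k-means teacher
logits = torch.randn(B, T, C)
targets = torch.randint(0, C, (B, T))
mask = torch.rand(B, T) < 0.4
print(hubert_masked_loss(logits, targets, mask))
```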
Current self-supervised learning algorithms are often modality-specific and require large amounts of computational resources. To address these issues, we increase the training efficiency of data2vec, a learning objective that generalizes across several modalities. We do not encode masked tokens, use a fast convolutional decoder and amortize the effort to build teacher representations. data2vec 2.0 benefits from the rich contextualized target representations introduced in data2vec which enable a fast self-supervised learner. Experiments on ImageNet-1K image classification show that data2vec 2.0 matches the accuracy of Masked Autoencoders in 16.4x lower pre-training time, on Librispeech speech recognition it performs as well as wav2vec 2.0 in 10.6x less time, and on GLUE natural language understanding it matches a retrained RoBERTa model in half the time. Trading some speed for accuracy results in ImageNet-1K top-1 accuracy of 86.8% with a ViT-L model trained for 150 epochs.
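One reading of the amortization idea: build the teacher's contextualized targets once per sample and reuse them across several differently-masked student views, so the expensive teacher pass is shared. The sketch below uses stand-in callables for the two encoders and a simplified mask; it illustrates the control flow, not data2vec 2.0's actual modules:

```python
import torch

def amortized_targets(teacher, student, x, num_masks=8):
    """One teacher pass, many cheap masked student passes sharing its targets."""
    with torch.no_grad():
        targets = teacher(x)                  # expensive pass, done once
    losses = []
    for _ in range(num_masks):
        mask = torch.rand(x.shape[:2], device=x.device) < 0.5
        pred = student(x, mask)               # masked tokens are not encoded
        losses.append(((pred - targets)[mask] ** 2).mean())
    return torch.stack(losses).mean()

teacher = lambda x: x                         # stand-ins for real encoders
student = lambda x, mask: x + 0.01 * torch.randn_like(x)
print(amortized_targets(teacher, student, torch.randn(2, 10, 4)))
```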
Self-supervised speech representations such as wav2vec 2.0 and HuBERT are making revolutionary progress in automatic speech recognition (ASR). However, self-supervised models have not been fully proven to produce better performance on tasks other than ASR. In this work, we explore partial fine-tuning and entire fine-tuning of wav2vec 2.0 and HuBERT pre-trained models for three non-ASR speech tasks: speech emotion recognition, speaker verification, and spoken language understanding. We also compare pre-trained models with and without ASR fine-tuning. With simple downstream frameworks, the best scores reach 79.58% weighted accuracy for speech emotion recognition on IEMOCAP, 2.36% equal error rate for speaker verification on VoxCeleb1, 87.51% accuracy for intent classification, and 75.32% F1 for slot filling on SLURP, thus setting a new state of the art for these three benchmarks and proving that fine-tuned wav2vec 2.0 and HuBERT models can better learn prosodic, voice-print, and semantic representations.
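One plausible reading of partial (as opposed to entire) fine-tuning is unfreezing only the top Transformer layers of the pre-trained model; the sketch below shows that recipe on a stand-in layer stack, with the number of unfrozen layers as an assumption:

```python
import torch.nn as nn

def partial_finetune(encoder_layers: nn.ModuleList, num_top_trainable: int = 2):
    """Freeze all pre-trained layers except the top few."""
    for layer in encoder_layers:
        for p in layer.parameters():
            p.requires_grad = False
    for layer in encoder_layers[-num_top_trainable:]:
        for p in layer.parameters():
            p.requires_grad = True

layers = nn.ModuleList([nn.Linear(8, 8) for _ in range(12)])  # stand-in stack
partial_finetune(layers)
print(sum(p.requires_grad for p in layers.parameters()))      # 4 tensors trainable
```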
We show for the first time that learning powerful representations from speech audio alone followed by fine-tuning on transcribed speech can outperform the best semi-supervised methods while being conceptually simpler. wav2vec 2.0 masks the speech input in the latent space and solves a contrastive task defined over a quantization of the latent representations which are jointly learned. Experiments using all labeled data of Librispeech achieve 1.8/3.3 WER on the clean/other test sets. When lowering the amount of labeled data to one hour, wav2vec 2.0 outperforms the previous state of the art on the 100 hour subset while using 100 times less labeled data. Using just ten minutes of labeled data and pre-training on 53k hours of unlabeled data still achieves 4.8/8.2 WER. This demonstrates the feasibility of speech recognition with limited amounts of labeled data. Code and models are available at https://github.com/pytorch/fairseq.
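The contrastive task scores the true quantized latent against distractors sampled from the same utterance; an InfoNCE-style sketch follows, where the distractor sampling and temperature are simplified assumptions rather than the paper's exact setup:

```python
import torch
import torch.nn.functional as F

def contrastive_loss(context, quantized, distractor_ids, temp=0.1):
    """context/quantized: (T, D) at masked positions; distractor_ids: (T, K)
    indices of negative targets drawn from the same utterance."""
    pos = F.cosine_similarity(context, quantized, dim=-1)             # (T,)
    negs = F.cosine_similarity(
        context.unsqueeze(1), quantized[distractor_ids], dim=-1)      # (T, K)
    logits = torch.cat([pos.unsqueeze(1), negs], dim=1) / temp
    # the true target sits at index 0 of every row
    return F.cross_entropy(logits, torch.zeros(len(context), dtype=torch.long))

T, D, K = 20, 256, 10
c, q = torch.randn(T, D), torch.randn(T, D)
print(contrastive_loss(c, q, torch.randint(0, T, (T, K))))
```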
In this paper, we propose a novel multi-modal multi-task encoder-decoder pre-training framework (MMSpeech) for Mandarin automatic speech recognition (ASR), which employs both unlabeled speech and text data. The main difficulty in speech-text joint pre-training comes from the significant difference between speech and text modalities, especially for Mandarin speech and text. Unlike English and other languages with an alphabetic writing system, Mandarin uses an ideographic writing system where character and sound are not tightly mapped to one another. Therefore, we propose to introduce the phoneme modality into pre-training, which can help capture modality-invariant information between Mandarin speech and text. Specifically, we employ a multi-task learning framework including five self-supervised and supervised tasks with speech and text data. For end-to-end pre-training, we introduce self-supervised speech-to-pseudo-codes (S2C) and phoneme-to-text (P2T) tasks utilizing unlabeled speech and text data, where speech-pseudo-codes pairs and phoneme-text pairs are a supplement to the supervised speech-text pairs. To train the encoder to learn better speech representation, we introduce self-supervised masked speech prediction (MSP) and supervised phoneme prediction (PP) tasks to learn to map speech into phonemes. Besides, we directly add the downstream supervised speech-to-text (S2T) task into the pre-training process, which can further improve the pre-training performance and achieve better recognition results even without fine-tuning. Experiments on AISHELL-1 show that our proposed method achieves state-of-the-art performance, with a more than 40% relative improvement compared with other pre-training methods.
We present RAVEn, a self-supervised multi-modal approach to jointly learn visual and auditory speech representations. Our pre-training objective involves encoding masked inputs, and then predicting contextualised targets generated by slowly-evolving momentum encoders. Driven by the inherent differences between video and audio, our design is asymmetric w.r.t. the two modalities' pretext tasks: Whereas the auditory stream predicts both the visual and auditory targets, the visual one predicts only the auditory targets. We observe strong results in low- and high-resource labelled data settings when fine-tuning the visual and auditory encoders resulting from a single pre-training stage, in which the encoders are jointly trained. Notably, RAVEn surpasses all self-supervised methods on visual speech recognition (VSR) on LRS3, and combining RAVEn with self-training using only 30 hours of labelled data even outperforms a recent semi-supervised method trained on 90,000 hours of non-public data. At the same time, we achieve state-of-the-art results in the LRS3 low-resource setting for auditory speech recognition (as well as for VSR). Our findings point to the viability of learning powerful speech representations entirely from raw video and audio, i.e., without relying on handcrafted features. Code and models will be made public.
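The slowly-evolving momentum encoders are the standard exponential-moving-average construction: the target encoder's weights track an EMA of the trained encoder's. A minimal sketch, with the momentum value and its schedule as assumptions:

```python
import copy
import torch

@torch.no_grad()
def ema_update(student: torch.nn.Module, teacher: torch.nn.Module,
               tau: float = 0.999) -> None:
    """teacher <- tau * teacher + (1 - tau) * student, parameter-wise."""
    for p_s, p_t in zip(student.parameters(), teacher.parameters()):
        p_t.mul_(tau).add_(p_s, alpha=1.0 - tau)

student = torch.nn.Linear(8, 8)
teacher = copy.deepcopy(student)   # momentum encoder starts as a frozen copy
ema_update(student, teacher)
```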
This paper studies a novel pre-training technique with unpaired speech data, Speech2C, for encoder-decoder based automatic speech recognition (ASR). Within a multi-task learning framework, we introduce two pre-training tasks for the encoder-decoder network using acoustic units, i.e., pseudo codes, derived from an offline clustering model. One is to predict the pseudo codes via masked language modeling on the encoder output, like the HuBERT model, while the other lets the decoder learn to reconstruct pseudo codes autoregressively instead of generating textual transcripts. In this way, the decoder learns to reconstruct the original speech information with codes before learning to generate correct text. Comprehensive experiments on the LibriSpeech corpus show that the proposed Speech2C can relatively reduce the word error rate (WER) by 19.2% over the method without decoder pre-training, and significantly outperforms the state-of-the-art wav2vec 2.0 and HuBERT on the 10h and 100h fine-tuning subsets. We release the code and models at https://github.com/microsoft/SpeechT5/tree/main/Speech2C.
A leaderboard named Speech processing Universal PERformance Benchmark (SUPERB), which aims at benchmarking the performance of a shared self-supervised learning (SSL) speech model across various downstream speech tasks, has fueled research on speech representation learning. SUPERB demonstrates that speech SSL upstream models can improve the performance of various downstream tasks with only minimal adaptation. As the paradigm of a self-supervised upstream model followed by downstream tasks attracts more attention in the speech community, characterizing the adversarial robustness of this paradigm is of high priority. In this paper, we make the first attempt to investigate the adversarial vulnerability of this paradigm under attacks from both zero-knowledge and limited-knowledge adversaries. Experimental results show that the paradigm proposed by SUPERB is severely vulnerable to limited-knowledge adversaries, and that attacks generated by zero-knowledge adversaries are transferable. An XAB test verifies the imperceptibility of the crafted adversarial attacks.
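The zero- and limited-knowledge threat models differ in what the adversary can see of the pipeline; as a generic illustration of the attack family rather than the paper's exact attacks, here is the single-step FGSM perturbation on an upstream-plus-downstream model, with the budget eps as an assumption:

```python
import torch

def fgsm_attack(pipeline, wave, label, loss_fn, eps=1e-3):
    """White-box FGSM: perturb the waveform along the sign of the loss
    gradient to degrade the downstream prediction."""
    wave = wave.clone().requires_grad_(True)
    loss = loss_fn(pipeline(wave), label)
    loss.backward()
    return (wave + eps * wave.grad.sign()).detach()

pipeline = torch.nn.Linear(16000, 10)   # stand-in upstream+downstream model
adv = fgsm_attack(pipeline, torch.randn(1, 16000), torch.tensor([3]),
                  torch.nn.functional.cross_entropy)
```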
Knowledge distillation (KD), best known as an effective method for model compression, aims at transferring the knowledge of a larger network (the teacher) to a much smaller network (the student). Conventional KD methods usually employ a teacher model trained in a supervised manner, where the output labels are treated only as targets. Extending this supervised scheme further, we introduce a new type of teacher model for KD, namely Oracle Teacher, which utilizes the embeddings of both the source inputs and the output labels to extract more accurate knowledge to transfer to the student. The proposed model follows the encoder-decoder attention structure of the Transformer network, which allows the model to attend to related information from the output labels. Extensive experiments are conducted on three different sequence learning tasks: speech recognition, scene text recognition, and machine translation. The experimental results empirically show that the proposed model improves the students across these tasks while achieving a considerable speed-up in the teacher model's training time.
We summarize the results of extensive efforts on giant automatic speech recognition (ASR) models pre-trained on large, diverse unlabeled datasets containing approximately a million hours of audio. We find that the combination of pre-training, self-training, and scaling up model size greatly increases data efficiency, even for extremely large tasks with tens of thousands of hours of labeled data. In particular, on an ASR task with 34k hours of labeled data, by fine-tuning an 8-billion-parameter pre-trained Conformer model we can match state-of-the-art (SoTA) performance with only 3% of the training data and significantly improve upon SoTA with the full training set. We also report the universal benefits obtained from using large pre-trained and self-trained models on a large set of downstream tasks that cover a wide range of speech domains and span multiple orders of magnitude of dataset sizes, including obtaining SoTA performance on many public benchmarks. In addition, we utilize the learned representations of pre-trained networks to achieve SoTA results on non-ASR tasks.
Language models pre-trained on biomedical corpora, such as BioBERT, have recently shown promising results on downstream biomedical tasks. On the other hand, many existing pre-trained models are resource-intensive and computationally heavy owing to factors such as embedding size, hidden dimension, and number of layers. The natural language processing (NLP) community has developed numerous strategies to compress these models, utilizing techniques such as pruning, quantization, and knowledge distillation, resulting in models that are considerably faster and smaller, and subsequently easier to use. By the same token, in this paper we introduce six lightweight models, namely BioDistilBERT, BioTinyBERT, BioMobileBERT, DistilBioBERT, TinyBioBERT, and CompactBioBERT, obtained either by knowledge distillation from a biomedical teacher or by continual learning on the PubMed dataset via the masked language modeling (MLM) objective. We evaluate all of our models on three biomedical tasks and compare them with BioBERT-v1.1 to create efficient lightweight models that perform on par with their larger counterparts. All models will be publicly available on our Hugging Face profile at https://huggingface.co/nlpie, and the code used to run the experiments will be available at https://github.com/nlpie-research/Compact-Biomedical-Transformers.
This paper presents a large-scale model for cross-lingual speech representation learning based on wav2vec 2.0, XLS-R. We train models with up to 2B parameters on nearly half a million hours of publicly available speech audio in 128 languages, an order of magnitude more public data than the largest known prior work. Our evaluation covers a wide range of tasks, domains, data regimes, and languages, both high- and low-resource. On the CoVoST-2 speech translation benchmark, we improve the previous state of the art by an average of 7.4 BLEU over 21 translation directions into English. For speech recognition, XLS-R improves over the best known prior work on BABEL, MLS, CommonVoice, and VoxPopuli, lowering error rates by 14-34% relative. XLS-R also sets a new state of the art on VoxLingua107 language identification. Moreover, we show that with sufficient model size, cross-lingual pre-training can outperform English-only pre-training when translating English speech into other languages, a setting which favors monolingual pre-training. We hope XLS-R can help to improve speech processing tasks for many more languages of the world.
This work introduces BRILLsson, a novel binary-neural-network-based representation learning model for a broad range of non-semantic speech tasks. We train the model with knowledge distillation from a large, real-valued TRILLsson model, using only a fraction of the dataset used to train TRILLsson. The resulting BRILLsson models are only 2MB in size with a latency of less than 8ms, making them suitable for deployment on low-resource devices such as wearables. We evaluate BRILLsson on eight benchmark tasks (including but not limited to spoken language identification, emotion recognition, health condition diagnosis, and keyword spotting), and demonstrate that our proposed ultra-light and low-latency models perform as well as large models.
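A binary neural network keeps sign-valued weights in the forward pass while training real-valued ones via a straight-through estimator; this is the classic recipe, sketched below, since the abstract does not specify BRILLsson's exact binarization scheme:

```python
import torch

class BinarizeSTE(torch.autograd.Function):
    """Forward: sign-binarize the weights. Backward: pass the gradient
    straight through, as in standard binary-network training."""
    @staticmethod
    def forward(ctx, w):
        return torch.sign(w)

    @staticmethod
    def backward(ctx, grad_out):
        return grad_out

w = torch.randn(4, 4, requires_grad=True)
BinarizeSTE.apply(w).sum().backward()
print(w.grad)   # all ones: the gradient passed straight through
```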