In this paper, we introduce our work of building a Streaming Multilingual Speech Model (SM2), which can transcribe or translate multiple spoken languages into text in the target language. The backbone of SM2 is Transformer Transducer, which has high streaming capability. Instead of human-labeled speech translation (ST) data, SM2 models are trained using weakly supervised data generated by converting the transcriptions in speech recognition corpora with a machine translation service. With 351 thousand hours of anonymized speech training data from 25 languages, SM2 models achieve comparable or even better ST quality than some recent popular large-scale non-streaming speech models. More importantly, we show that SM2 has truly zero-shot capability when expanding to new target languages, yielding high-quality ST results for {source-speech, target-text} pairs that are not seen during training.
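To make the weak-supervision recipe concrete, the sketch below shows the general idea of converting an ASR corpus into {source-speech, target-text} pairs with an MT system; the `machine_translate` call and the corpus layout are illustrative placeholders, not the service or format used by the authors.

```python
# Sketch: build weakly supervised ST training pairs from an ASR corpus by
# machine-translating each transcription into the target language.

def machine_translate(text: str, src_lang: str, tgt_lang: str) -> str:
    """Placeholder for a call to any MT service or model (hypothetical)."""
    raise NotImplementedError

def build_weakly_supervised_st(asr_corpus, src_lang: str, tgt_lang: str):
    """asr_corpus: iterable of (audio_path, transcript) pairs in the source language.
    Returns (audio_path, target_text) pairs usable as ST training examples."""
    st_corpus = []
    for audio_path, transcript in asr_corpus:
        target_text = machine_translate(transcript, src_lang, tgt_lang)
        st_corpus.append((audio_path, target_text))
    return st_corpus
```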
Neural transducers have been widely used in automatic speech recognition (ASR). In this paper, we introduce them to streaming end-to-end speech translation (ST), which aims to convert audio signals to text in other languages directly. Compared with cascaded ST, which performs ASR followed by text-based machine translation (MT), the proposed Transformer Transducer (TT)-based ST model drastically reduces inference latency, exploits speech information, and avoids error propagation from ASR to MT. To improve the modeling capability, we propose attention pooling for the joint network in TT. In addition, we extend TT-based ST to multilingual ST, which generates text in multiple languages at the same time. Experimental results on a large-scale 50 thousand (K) hours pseudo-labeled training set show that TT-based ST not only significantly reduces inference time but also outperforms non-streaming cascaded ST for English-German translation.
End-to-end formulation of automatic speech recognition (ASR) and speech translation (ST) makes it easy to use a single model for both multilingual ASR and many-to-many ST. In this paper, we propose streaming language-agnostic multilingual speech recognition and translation using neural transducers (LAMASSU). To enable multilingual text generation in LAMASSU, we conduct a systematic comparison between specified and unified prediction and joint networks. We leverage a language-agnostic multilingual encoder that substantially outperforms shared encoders. To enhance LAMASSU, we propose to feed target LID to encoders. We also apply connectionist temporal classification regularization to transducer training. Experimental results show that LAMASSU not only drastically reduces the model size but also outperforms monolingual ASR and bilingual ST models.
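One plausible way to realize "feeding target LID to encoders" is to attach a learned language embedding to every input frame; the sketch below illustrates that idea under assumed names and dimensions, and is not LAMASSU's actual implementation.

```python
# Sketch: condition a shared streaming encoder on the target language by
# concatenating a learned language-ID embedding to every input frame.
import torch
import torch.nn as nn

class LidConditionedFrontend(nn.Module):
    def __init__(self, num_langs: int, lid_dim: int):
        super().__init__()
        self.lid_emb = nn.Embedding(num_langs, lid_dim)

    def forward(self, feats: torch.Tensor, target_lid: torch.Tensor) -> torch.Tensor:
        """feats: (batch, time, feat_dim); target_lid: (batch,) target-language indices."""
        lid = self.lid_emb(target_lid)                        # (batch, lid_dim)
        lid = lid.unsqueeze(1).expand(-1, feats.size(1), -1)  # repeat over time
        return torch.cat([feats, lid], dim=-1)                # fed to the shared encoder

frontend = LidConditionedFrontend(num_langs=4, lid_dim=8)
x = frontend(torch.randn(2, 100, 80), torch.tensor([0, 3]))  # -> (2, 100, 88)
```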
We present SpeechMatrix, a large-scale multilingual corpus of speech-to-speech translations mined from real speech of European Parliament recordings. It contains speech alignments in 136 language pairs with a total of 418 thousand hours of speech. To evaluate the quality of this parallel speech, we train bilingual speech-to-speech translation models on mined data only and establish extensive baseline results on EuroParl-ST, VoxPopuli and FLEURS test sets. Enabled by the multilinguality of SpeechMatrix, we also explore multilingual speech-to-speech translation, a topic which was addressed by few other works. We also demonstrate that model pre-training and sparse scaling using Mixture-of-Experts bring large gains to translation performance. The mined data and models are freely available.
On-device end-to-end (E2E) models have shown improvements over conventional models on English Voice Search tasks in both quality and latency. E2E models have also shown promising results for multilingual automatic speech recognition (ASR). In this paper, we extend our previous capacity solution to streaming applications and present a streaming multilingual E2E ASR system that runs fully on device, with quality and latency comparable to individual monolingual models. To achieve this, we propose an Encoder Endpointer model and an End-of-Utterance (EOU) Joint Layer for a better quality and latency trade-off. Our system is built in a language-agnostic manner, allowing it to natively support intersentential code switching in real time. To address the feasibility concerns of large models, we conducted on-device profiling and replaced the time-consuming LSTM decoder with the recently developed Embedding decoder. With these changes, we managed to run such a system on a mobile device in less than real time.
Speech translation models are unable to directly process long audios, like TED talks, which have to be split into shorter segments. Speech translation datasets provide manual segmentations of the audios, which are not available in real-world scenarios, and existing segmentation methods usually significantly reduce translation quality at inference time. To bridge the gap between the manual segmentation of training and the automatic one at inference, we propose Supervised Hybrid Audio Segmentation (SHAS), a method that can effectively learn the optimal segmentation from any manually segmented speech corpus. First, we train a classifier to identify the frames included in a segmentation, using speech representations from a pre-trained wav2vec 2.0. The optimal splitting points are then found by a probabilistic divide-and-conquer algorithm that progressively splits at the frame of lowest probability, until all segments are below a pre-specified length. Experiments on MuST-C and mTEDx show that the translation of the segments produced by our method approaches the quality of manual segmentation on 5 language pairs; namely, SHAS retains 95-98% of the manual segmentation's BLEU score, compared to 87-93% for existing methods. Our method additionally generalizes to different domains and achieves high zero-shot performance in unseen languages.
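The probabilistic divide-and-conquer step can be sketched in a few lines: given the classifier's per-frame probabilities, recursively split each over-long span at its lowest-probability frame until every span is below the length limit. The function below is a minimal illustration under assumed inputs, not the authors' code.

```python
# Sketch of probabilistic divide-and-conquer segmentation:
# recursively split each too-long span at its lowest-probability frame.

def split_segment(probs, start, end, max_len, min_len=1):
    """probs[i] = classifier probability that frame i belongs inside a segment.
    Returns a list of (start, end) frame spans, each no longer than max_len."""
    if end - start <= max_len:
        return [(start, end)]
    # candidate split points keep both halves at least min_len frames long
    candidates = range(start + min_len, end - min_len)
    split = min(candidates, key=lambda i: probs[i])
    return (split_segment(probs, start, split, max_len, min_len)
            + split_segment(probs, split, end, max_len, min_len))

# Example: 10 frames, maximum segment length of 4 frames
probs = [0.9, 0.8, 0.2, 0.9, 0.95, 0.1, 0.85, 0.9, 0.3, 0.9]
print(split_segment(probs, 0, len(probs), max_len=4))
# -> [(0, 2), (2, 5), (5, 8), (8, 10)]
```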
Recently, the speech community has seen a significant trend of moving from deep neural network based hybrid modeling to end-to-end (E2E) modeling for automatic speech recognition (ASR). While E2E models achieve state-of-the-art results on most benchmarks in terms of ASR accuracy, hybrid models are still used in a large proportion of commercial ASR systems at the current time. There are many practical factors that affect the production model deployment decision. Traditional hybrid models, having been optimized for production for decades, are usually good at these factors. Without providing excellent solutions to all these factors, it is hard for E2E models to be widely commercialized. In this paper, we overview the recent advances in E2E models, focusing on technologies addressing those challenges from the industry's perspective.
We propose a simple solution to use a single Neural Machine Translation (NMT) model to translate between multiple languages. Our solution requires no changes to the model architecture from a standard NMT system but instead introduces an artificial token at the beginning of the input sentence to specify the required target language. The rest of the model, which includes an encoder, decoder and attention module, remains unchanged and is shared across all languages. Using a shared wordpiece vocabulary, our approach enables Multilingual NMT using a single model without any increase in parameters, which is significantly simpler than previous proposals for Multilingual NMT. On the WMT'14 benchmarks, a single multilingual model achieves comparable performance for English→French and surpasses state-of-the-art results for English→German. Similarly, a single multilingual model surpasses state-of-the-art results for French→English and German→English on WMT'14 and WMT'15 benchmarks, respectively. On production corpora, multilingual models of up to twelve language pairs allow for better translation of many individual pairs. In addition to improving the translation quality of language pairs that the model was trained with, our models can also learn to perform implicit bridging between language pairs never seen explicitly during training, showing that transfer learning and zero-shot translation is possible for neural translation. Finally, we show analyses that hint at a universal interlingua representation in our models and show some interesting examples when mixing languages.
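The artificial-token mechanism is simple enough to show directly: the target language is requested by a token prepended to the source sentence, while the model itself stays unchanged. A minimal sketch, with illustrative token spellings rather than the exact ones used:

```python
# Sketch: steer a single multilingual NMT model with an artificial target-language
# token prepended to the source sentence (token spellings are illustrative).

def add_target_token(source_sentence: str, target_lang: str) -> str:
    return f"<2{target_lang}> {source_sentence}"

print(add_target_token("How are you?", "de"))
# -> "<2de> How are you?"  (the model is trained to emit German for this prefix)
print(add_target_token("How are you?", "fr"))
# -> "<2fr> How are you?"  (same parameters, French output requested)
```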
We introduce fairseq S2T, a fairseq extension for speech-to-text (S2T) modeling tasks such as end-to-end speech recognition and speech-to-text translation. It follows fairseq's careful design for scalability and extensibility. We provide end-to-end workflows from data pre-processing and model training to offline inference. We implement state-of-the-art RNN-based, Transformer-based as well as Conformer-based models, and open-source detailed training recipes. fairseq's machine translation models and language models can be seamlessly integrated into S2T workflows for multi-task learning or transfer learning. fairseq S2T documentation and examples are available at https://github.com/pytorch/fairseq/tree/master/examples/speech_to_text.
We propose a) a Language Agnostic end-to-end Speech Translation model (LAST), and b) a data augmentation strategy to increase code-switching (CS) performance. With increasing globalization, multiple languages are increasingly used interchangeably during fluent speech. Such CS complicates traditional speech recognition and translation, as we must recognize which language was spoken first and then apply a language-dependent recognizer and subsequent translation component to generate the desired target language output. Such a pipeline introduces latency and errors. In this paper, we eliminate the need for that, by treating speech recognition and translation as one unified end-to-end speech translation problem. By training LAST with both input languages, we decode speech into one target language, regardless of the input language. LAST delivers comparable recognition and speech translation accuracy in monolingual usage, while reducing latency and error rate considerably when CS is observed.
This paper presents a unified end-to-end framework for both streaming and non-streaming speech translation. While the training recipes for non-streaming speech translation have matured, the recipes for streaming speech translation are yet to be established. In this work, we focus on developing a unified model (UniST) that supports both streaming and non-streaming ST from the perspective of fundamental components, including the training objective, attention mechanism, and decoding policy. Experiments on the most popular speech-to-text translation benchmark dataset, MuST-C, show that UniST achieves a better trade-off between BLEU score and latency metrics for streaming ST, compared with standard end-to-end baselines and cascaded models. We will make our code and evaluation tools publicly available.
Speech translation for subtitling (SubST) is the task of automatically translating speech data into well-formed subtitles by inserting subtitle breaks compliant with specific displaying guidelines. Similar to speech translation (ST), model training requires parallel data comprising audio inputs paired with their textual translations. In SubST, however, the text also has to be annotated with subtitle breaks. So far, this requirement has represented a bottleneck for system development, as confirmed by the dearth of publicly available SubST corpora. To fill this gap, we propose a method to convert existing ST corpora into SubST resources without human intervention. We build a segmenter model that automatically segments texts into proper subtitles by exploiting audio and text in a multimodal fashion, achieving high segmentation quality in zero-shot conditions. Comparative experiments with SubST systems respectively trained on manual and automatic segmentations result in similar performance, showing the effectiveness of our approach.
Language identification is critical for many downstream tasks in automatic speech recognition (ASR), and is beneficial to integrate into multilingual end-to-end ASR as an additional task. In this paper, we propose to modify the structure of the cascaded-encoder-based recurrent neural network transducer (RNN-T) model by integrating a per-frame language identifier (LID) predictor. RNN-T with cascaded encoders can achieve streaming ASR with low latency using first-pass decoding with no right-context, and achieve lower word error rates (WERs) using second-pass decoding with longer right-context. By leveraging such differences in right-context and a streaming implementation of statistics pooling, the proposed method can achieve accurate streaming LID prediction with little extra test-time cost. Experimental results on a voice search dataset with 9 language locales show that the proposed method achieves an average LID prediction accuracy of 96.2%, with the same second-pass WER as that using an oracle LID in the input.
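A minimal sketch of the streaming statistics-pooling idea: causal running mean and variance of the encoder frames feed a per-frame LID classifier, so a prediction is available at every frame without right-context. Module names and sizes below are assumptions for illustration, not the paper's architecture.

```python
# Sketch: per-frame LID via causal (streaming) statistics pooling over encoder frames.
import torch
import torch.nn as nn

class StreamingLidPredictor(nn.Module):
    def __init__(self, enc_dim: int, num_langs: int):
        super().__init__()
        # classifier sees the current frame plus running mean and variance (3 * enc_dim)
        self.classifier = nn.Linear(3 * enc_dim, num_langs)

    def forward(self, enc_out: torch.Tensor) -> torch.Tensor:
        """enc_out: (batch, time, enc_dim) -> per-frame LID logits (batch, time, num_langs)."""
        t = torch.arange(1, enc_out.size(1) + 1, device=enc_out.device).view(1, -1, 1)
        cum_sum = enc_out.cumsum(dim=1)
        cum_sq = (enc_out ** 2).cumsum(dim=1)
        mean = cum_sum / t                               # causal running mean
        var = (cum_sq / t - mean ** 2).clamp_min(0.0)    # causal running variance
        feats = torch.cat([enc_out, mean, var], dim=-1)
        return self.classifier(feats)

lid = StreamingLidPredictor(enc_dim=256, num_langs=9)
logits = lid(torch.randn(2, 50, 256))   # (2, 50, 9): one prediction per frame
```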
End-to-end Speech Translation (E2E ST) aims to translate source speech into target translation without generating the intermediate transcript. However, existing approaches for E2E ST degrade considerably when only limited ST data are available. We observe that an ST model's performance strongly correlates with the similarity between its speech and transcript embeddings. In this paper, we propose Word-Aligned COntrastive learning (WACO), a novel method for few-shot speech-to-text translation. Our key idea is bridging word-level representations for both modalities via contrastive learning. We evaluate WACO and other methods on the MuST-C dataset, a widely used ST benchmark. Our experiments demonstrate that WACO outperforms the best baseline methods by 0.7-8.5 BLEU points with only 1-hour parallel data. Code is available at https://anonymous.4open.science/r/WACO.
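The word-level contrastive objective can be pictured as an InfoNCE-style loss that pulls each word's speech embedding toward the embedding of the same word in the transcript, with other words in the batch as negatives; the pooling that produces the word-level vectors and all dimensions below are assumptions, not WACO's exact implementation.

```python
# Sketch: contrastive (InfoNCE-style) loss between aligned word-level
# speech and text embeddings, using in-batch negatives.
import torch
import torch.nn.functional as F

def word_contrastive_loss(speech_word_emb, text_word_emb, temperature=0.1):
    """speech_word_emb, text_word_emb: (num_words, dim); row i of each is the same word."""
    s = F.normalize(speech_word_emb, dim=-1)
    t = F.normalize(text_word_emb, dim=-1)
    logits = s @ t.T / temperature                     # (num_words, num_words) similarities
    targets = torch.arange(s.size(0), device=s.device)
    # matched pairs sit on the diagonal (positives); all other words are negatives
    return F.cross_entropy(logits, targets)

loss = word_contrastive_loss(torch.randn(32, 512), torch.randn(32, 512))
```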
We study the capabilities of speech processing systems trained simply to predict large amounts of transcripts of audio on the internet. When scaled to 680,000 hours of multilingual and multitask supervision, the resulting models generalize well to standard benchmarks and are often competitive with prior fully supervised results but in a zero-shot transfer setting without the need for any fine-tuning. When compared to humans, the models approach their accuracy and robustness. We are releasing models and inference code to serve as a foundation for further work on robust speech processing.
We present a textless speech-to-speech translation (S2ST) system that can translate speech from one language into speech in another language, and that can be built without the need for any text data. Different from existing work in the literature, we tackle the challenge of modeling multi-speaker target speech and train the system with real-world S2ST data. The key to our approach is a self-supervised unit-based speech normalization technique, which finetunes a pre-trained speech encoder with paired audios from multiple speakers and a single reference speaker, reducing the variations due to accents while preserving the lexical content. With only 10 minutes of paired data for speech normalization, we obtain an average 3.2 BLEU gain when training the S2ST model on the VoxPopuli S2ST dataset, compared to a baseline trained on un-normalized speech targets. We also incorporate automatically mined S2ST data and show an additional 2.0 BLEU gain. To our knowledge, we are the first to establish a textless S2ST technique that can be trained with real-world data and works for multiple language pairs.
Speech translation (ST) is the task of directly translating acoustic speech signals in a source language into text in a foreign language. The ST task has long been addressed using a pipeline approach with two modules: first an Automatic Speech Recognition (ASR) system in the source language, followed by a text-to-text Machine Translation (MT) system. In the past few years, we have seen a paradigm shift towards end-to-end approaches using sequence-to-sequence deep neural network models. This paper presents our efforts towards the development of the first Broadcast News end-to-end Arabic to English speech translation system. Starting from independent ASR and MT LDC releases, we were able to identify about 92 hours of Arabic audio recordings for which the manual transcription was also translated into English at the segment level. These data were used to train and compare pipeline and end-to-end speech translation systems under multiple scenarios, including transfer learning and data augmentation techniques.
This paper presents XLS-R, a large-scale model for cross-lingual speech representation learning based on wav2vec 2.0. We train models with up to 2B parameters on nearly half a million hours of publicly available speech audio in 128 languages, an order of magnitude more public data than the largest known prior work. Our evaluation covers a wide range of tasks, domains, data regimes, and languages, both high- and low-resource. On the CoVoST-2 speech translation benchmark, we improve the previous state of the art by an average of 7.4 BLEU over 21 translation directions into English. For speech recognition, XLS-R improves over the best known prior work on BABEL, MLS, CommonVoice as well as VoxPopuli, lowering error rates by 14-34% relative on average. XLS-R also sets a new state of the art on VoxLingua107 language identification. Moreover, we show that with sufficient model size, cross-lingual pretraining can outperform English-only pretraining when translating English speech into other languages, a setting which favors monolingual pretraining. We hope XLS-R can help to improve speech processing tasks for many more languages of the world.
End-to-end (E2E) speech-to-text translation (ST) often depends on pretraining its encoder and/or decoder using source transcripts via speech recognition or text translation tasks, without which translation performance drops substantially. However, transcripts are not always available, and how significant such pretraining is for E2E ST has rarely been studied in the literature. In this paper, we revisit this question and explore the extent to which the quality of E2E ST trained on speech-translation pairs alone can be improved. We reexamine several techniques proven beneficial to ST, and offer a set of best practices that biases a Transformer-based E2E ST system toward training from scratch. In addition, we propose a parameterized distance penalty to facilitate the modeling of locality in the self-attention model for speech. On four benchmarks covering 23 languages, our experiments show that, without using any transcripts or pretraining, the proposed system reaches and even outperforms previous studies adopting pretraining, although the gap remains in (extremely) low-resource settings. Finally, we discuss neural acoustic feature modeling, where a neural model is designed to extract acoustic features directly from raw speech signals, in order to simplify inductive biases and add freedom to the model in describing speech. For the first time, we demonstrate its feasibility and show encouraging results on ST tasks.
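One way to picture the distance penalty is as a bias subtracted from the self-attention logits that grows with the distance |i - j| between frames, nudging attention toward local context. The sketch below uses a single learnable scale per head as the parameterization, which is an assumption for illustration rather than the paper's exact formulation.

```python
# Sketch: add a learnable distance penalty to self-attention logits so the
# model favors local context in speech encoding.
import torch
import torch.nn as nn

class DistancePenalizedSelfAttention(nn.Module):
    def __init__(self, dim: int, num_heads: int):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # one learnable penalty strength per head (illustrative parameterization)
        self.penalty_scale = nn.Parameter(torch.ones(num_heads))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        """x: (batch, time, dim)"""
        t = x.size(1)
        dist = (torch.arange(t, device=x.device)[:, None]
                - torch.arange(t, device=x.device)[None, :]).abs().float()
        # bias of shape (num_heads, time, time), tiled to (batch * num_heads, time, time)
        bias = -self.penalty_scale.view(-1, 1, 1) * dist
        bias = bias.repeat(x.size(0), 1, 1)
        out, _ = self.attn(x, x, x, attn_mask=bias)  # float mask is added to logits
        return out

layer = DistancePenalizedSelfAttention(dim=256, num_heads=4)
y = layer(torch.randn(2, 50, 256))   # -> (2, 50, 256)
```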
Direct speech-to-speech translation (S2ST) models suffer from the data scarcity issue, as there exists little parallel S2ST data compared to the amount of data available for conventional cascaded systems that consist of automatic speech recognition (ASR), machine translation (MT), and text-to-speech (TTS) synthesis. In this work, we explore self-supervised pre-training with unlabeled speech data and data augmentation to tackle this issue. We take advantage of the recently proposed speech-to-unit translation (S2UT) framework, which encodes target speech into discrete representations, and transfer pre-training and efficient partial finetuning techniques that work well for speech-to-text translation (S2T) to the S2UT domain, by studying both speech encoder and discrete unit decoder pre-training. Our experiments on Spanish-English translation show that self-supervised pre-training consistently improves model performance compared with multitask learning, with an average 6.6-12.1 BLEU gain, and it can be further combined with data augmentation techniques that apply MT to create weakly supervised training data. Audio samples are available at: https://facebookresearch.github.io/speech_translation/enhanced_direct_s2st_units/index.html.