We present a method for introducing a text encoder into pre-trained end-to-end speech translation systems. It enhances the model's ability to adapt one modality (i.e., source-language speech) to another (i.e., source-language text). Thus, the speech translation model can learn from both unlabeled and labeled data, especially when the source-language text data is abundant. Beyond this, we present a denoising method to build a robust text encoder that can deal with both normal and noisy text data. Our system sets new state-of-the-art results on the MuST-C En-De, En-Fr, and LibriSpeech En-Fr tasks.
End-to-end Speech Translation (E2E ST) aims to translate source speech into the target language without generating an intermediate transcript. However, existing approaches for E2E ST degrade considerably when only limited ST data are available. We observe that an ST model's performance strongly correlates with the similarity between its speech and transcript embeddings. In this paper, we propose Word-Aligned COntrastive learning (WACO), a novel method for few-shot speech-to-text translation. Our key idea is bridging word-level representations for both modalities via contrastive learning. We evaluate WACO and other methods on the MuST-C dataset, a widely used ST benchmark. Our experiments demonstrate that WACO outperforms the best baseline methods by 0.7-8.5 BLEU points with only 1-hour parallel data. Code is available at https://anonymous.4open.science/r/WACO.
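To make the word-level contrastive idea concrete, here is a minimal InfoNCE-style sketch in PyTorch. The function name, the mean-pooled word features, and the temperature are illustrative assumptions, not WACO's released code.

```python
import torch
import torch.nn.functional as F

def word_contrastive_loss(speech_feats, text_feats, temperature=0.05):
    """InfoNCE-style contrastive loss over word-level representations.

    speech_feats: (N, D) mean-pooled speech encoder states, one row per word
    text_feats:   (N, D) mean-pooled text embeddings for the same words
    Word i's speech vector is pulled toward word i's text vector and
    pushed away from all other words in the batch.
    """
    speech_feats = F.normalize(speech_feats, dim=-1)
    text_feats = F.normalize(text_feats, dim=-1)
    logits = speech_feats @ text_feats.t() / temperature  # (N, N) similarities
    targets = torch.arange(logits.size(0), device=logits.device)
    return F.cross_entropy(logits, targets)

# toy usage: 8 words, 256-dim features
loss = word_contrastive_loss(torch.randn(8, 256), torch.randn(8, 256))
```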
To alleviate the data scarcity problem in end-to-end speech translation (ST), pre-training on speech recognition and machine translation data is considered an important technique. However, the modality gap between speech and text prevents the ST model from efficiently inheriting knowledge from the pre-trained models. In this work, we propose AdaTranS for end-to-end ST. It adapts the speech features with a new shrinking mechanism that mitigates the length mismatch between speech and text features by predicting word boundaries. Experiments on the MuST-C dataset demonstrate that AdaTranS achieves better performance than other shrinking-based methods, with higher inference speed and lower memory usage. Further experiments also show that AdaTranS can be equipped with additional alignment losses to further improve performance.
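A boundary-based shrinking mechanism of the kind described here could look like the sketch below; the thresholding scheme and mean-pooling are assumptions for illustration, not the exact AdaTranS implementation.

```python
import torch

def shrink_by_boundaries(frames, boundary_prob, threshold=0.5):
    """Collapse frame-level speech features into word-level features.

    frames:        (T, D) encoder output frames
    boundary_prob: (T,) predicted probability that a frame starts a new word
    Frames between two predicted boundaries are mean-pooled, so the output
    length roughly matches the transcript length.
    """
    starts = (boundary_prob > threshold).nonzero(as_tuple=True)[0].tolist()
    if not starts or starts[0] != 0:
        starts = [0] + starts  # always begin a segment at the first frame
    starts.append(frames.size(0))
    segments = [frames[a:b].mean(dim=0) for a, b in zip(starts, starts[1:]) if b > a]
    return torch.stack(segments)  # (num_words, D)

shrunk = shrink_by_boundaries(torch.randn(100, 256), torch.rand(100))
```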
End-to-end (E2E) speech-to-text translation (ST) often depends on pre-training its encoder and/or decoder using source transcripts via speech recognition or text translation tasks, without which translation performance drops substantially. However, transcripts are not always available, and how significant such pre-training is for E2E ST has rarely been studied in the literature. In this paper, we revisit this question and explore to what extent the quality of E2E ST trained on speech-translation pairs alone can be improved. We re-examine several techniques proven beneficial to ST and offer a set of best practices that biases a Transformer-based E2E ST system toward training from scratch. Moreover, we propose parameterized distance penalty to facilitate position modeling in speech self-attention. On four benchmarks covering 23 languages, our experiments show that, without using any transcripts or pre-training, the proposed system reaches and even outperforms previous studies adopting pre-training, although the gap remains in (extremely) low-resource settings. Finally, we discuss neural acoustic feature modeling, where a neural model is designed to extract acoustic features from the raw speech signal directly, in order to simplify inductive biases and add freedom to the model in describing speech. For the first time, we demonstrate its feasibility and show encouraging results on ST tasks.
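As a rough illustration of a learned distance penalty in speech self-attention, the sketch below subtracts a per-head, log-distance bias from the attention logits. The exact parameterization in the paper may differ; the form here is an assumption.

```python
import torch
import torch.nn as nn

class DistancePenaltyAttention(nn.Module):
    """Scaled dot-product self-attention with a learned distance penalty.

    A per-head scalar weights a log-distance term subtracted from the
    attention logits, biasing each head toward (or away from) local
    context. A sketch of the general idea, not the paper's exact form.
    """
    def __init__(self, dim, num_heads):
        super().__init__()
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        self.qkv = nn.Linear(dim, 3 * dim)
        self.out = nn.Linear(dim, dim)
        self.penalty_scale = nn.Parameter(torch.ones(num_heads))  # one weight per head

    def forward(self, x):                        # x: (B, T, dim)
        B, T, _ = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        q, k, v = (t.view(B, T, self.num_heads, self.head_dim).transpose(1, 2)
                   for t in (q, k, v))           # each (B, H, T, head_dim)
        logits = q @ k.transpose(-2, -1) / self.head_dim ** 0.5  # (B, H, T, T)
        dist = (torch.arange(T)[:, None] - torch.arange(T)[None, :]).abs().float()
        penalty = self.penalty_scale.view(1, -1, 1, 1) * torch.log1p(dist)
        attn = (logits - penalty).softmax(dim=-1)
        return self.out((attn @ v).transpose(1, 2).reshape(B, T, -1))

out = DistancePenaltyAttention(256, 4)(torch.randn(2, 50, 256))  # shape preserved
```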
End-to-end speech-to-text translation (E2E-ST) has become increasingly popular due to its potential for less error propagation, lower latency, and fewer parameters. Given the triplet training corpus ⟨speech, transcription, translation⟩, a conventional high-quality E2E-ST system leverages the ⟨speech, transcription⟩ pair to pre-train the model and then utilizes the ⟨speech, translation⟩ pair to optimize it further. However, this process only involves two-tuple data at each stage, and this loose coupling fails to fully exploit the association between the triplet data. In this paper, we attempt to model the joint probability of transcription and translation based on the speech input, to directly leverage such triplet data. Based on this, we propose a novel regularization method for model training that improves the agreement of the dual-path decomposition within the triplet data, which should be equal in theory. To achieve this goal, we introduce two Kullback-Leibler divergence regularization terms into the model training objective to reduce the mismatch between the output probabilities of the dual paths. The trained model can then naturally serve as an E2E-ST model via a pre-defined early-stop tag. Experiments on the MuST-C benchmark demonstrate that our proposed approach significantly outperforms state-of-the-art E2E-ST baselines on all 8 language pairs, while achieving better performance on the automatic speech recognition task. Our code is open-sourced at https://github.com/duyichao/e2e-st-tda.
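The dual-path agreement regularizer can be illustrated with a symmetric KL term between the output distributions of the two factorizations; this is a generic sketch under assumed tensor shapes, not the authors' released code.

```python
import torch
import torch.nn.functional as F

def dual_path_agreement(logits_path1, logits_path2):
    """Symmetric KL regularizer between two decomposition paths.

    Both tensors hold logits over the same vocabulary for the same
    positions, e.g. translation logits from the transcription-first path
    and from the translation-first path. Since the two factorizations of
    p(transcription, translation | speech) are equal in theory, their
    token distributions are pushed toward each other.
    """
    logp1 = F.log_softmax(logits_path1, dim=-1)
    logp2 = F.log_softmax(logits_path2, dim=-1)
    kl_12 = F.kl_div(logp2, logp1, log_target=True, reduction="batchmean")
    kl_21 = F.kl_div(logp1, logp2, log_target=True, reduction="batchmean")
    return 0.5 * (kl_12 + kl_21)

reg = dual_path_agreement(torch.randn(4, 10, 8000), torch.randn(4, 10, 8000))
```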
We present Mu$^{2}$SLAM, a multilingual sequence-to-sequence model pre-trained jointly on unlabeled speech, unlabeled text and supervised data spanning Automatic Speech Recognition (ASR), Automatic Speech Translation (AST) and Machine Translation (MT), in over 100 languages. By leveraging a quantized representation of speech as a target, Mu$^{2}$SLAM trains the speech-text models with a sequence-to-sequence masked denoising objective similar to T5 on the decoder and a masked language modeling (MLM) objective on the encoder, for both unlabeled speech and text, while utilizing the supervised tasks to improve cross-lingual and cross-modal representation alignment within the model. On CoVoST AST, Mu$^{2}$SLAM establishes a new state-of-the-art for models trained on public datasets, improving on xx-en translation over the previous best by 1.9 BLEU points and on en-xx translation by 1.1 BLEU points. On VoxPopuli ASR, our model matches the performance of an mSLAM model fine-tuned with an RNN-T decoder, despite using a relatively weaker sequence-to-sequence architecture. On text understanding tasks, our model improves by more than 6% over mSLAM on XNLI, getting closer to the performance of mT5 models of comparable capacity on XNLI and TydiQA, paving the way towards a single model for all speech and text understanding tasks.
End-to-end speech-to-text translation models are often initialized with a pre-trained speech encoder and a pre-trained text decoder. This leads to a significant training gap between pre-training and fine-tuning, largely due to the modality differences between the speech outputs of the encoder and the text inputs of the decoder. In this work, we aim to bridge the modality gap between speech and text to improve translation quality. We propose M-Adapter, a novel Transformer-based module, to adapt speech representations to text. While shrinking the speech sequence, M-Adapter produces features desired for speech-to-text translation by modelling global and local dependencies of the speech sequence. Our experimental results show that our model outperforms a strong baseline by up to 1 BLEU score on the MuST-C En→De dataset. Our code is available at https://github.com/mingzi151/w2v2-st.
This paper describes the submission of our end-to-end YiTrans speech translation system for the IWSLT 2022 offline task, which translates from English audio to German, Chinese, and Japanese. The YiTrans system is built on large-scale pre-trained encoder-decoder models. More specifically, we first design a multi-stage pre-training strategy to build multi-modality models with large amounts of labeled and unlabeled data. We then fine-tune the corresponding components of the model for the downstream speech translation tasks. Moreover, we make various efforts to improve performance, such as data filtering, data augmentation, speech segmentation, model ensembling, and so on. Experimental results show that our YiTrans system obtains significant improvements over strong baselines in all three translation directions, and achieves a +5.2 BLEU improvement over last year's best end-to-end system on tst2021 English-German. Our final submissions rank first for the English-German and English-Chinese end-to-end systems according to the automatic evaluation metrics. We make our code and models publicly available.
When building state-of-the-art speech translation models, the need for large computational resources is a significant obstacle due to the large training data size and complex models. The availability of pre-trained models is a promising opportunity to build strong speech translation systems efficiently. As a first step, we investigate efficient strategies to build cascaded and end-to-end speech translation systems based on pre-trained models. Using this strategy, we can train and apply the models on a single GPU. While the end-to-end models show superior translation performance to cascaded ones, their application is limited by the need for additional end-to-end training data. As a second step, we propose an additional similarity loss to encourage the model to generate similar hidden representations for speech and transcript. Using this technique, we can increase the data efficiency and improve the translation quality by 6 BLEU points in scenarios with limited end-to-end training data.
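A similarity loss of this kind is often realized as a cosine distance between mean-pooled encoder states; the sketch below assumes that form, which may differ from the exact loss used in the paper.

```python
import torch
import torch.nn.functional as F

def similarity_loss(speech_hidden, text_hidden, speech_mask, text_mask):
    """Encourage matching sentence-level representations across modalities.

    speech_hidden: (B, T_s, D) encoder states for the audio
    text_hidden:   (B, T_t, D) encoder states for the transcript
    Masks are 1 for real positions, 0 for padding. Each sequence is
    mean-pooled and the cosine distance between the pooled vectors
    is minimized, a common choice for such an auxiliary loss.
    """
    def pool(h, m):
        m = m.unsqueeze(-1).float()
        return (h * m).sum(dim=1) / m.sum(dim=1).clamp(min=1.0)

    s = pool(speech_hidden, speech_mask)
    t = pool(text_hidden, text_mask)
    return (1.0 - F.cosine_similarity(s, t, dim=-1)).mean()

loss = similarity_loss(torch.randn(2, 50, 512), torch.randn(2, 12, 512),
                       torch.ones(2, 50), torch.ones(2, 12))
```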
How can we solve the data scarcity problem for end-to-end speech-to-text translation (ST)? It is well known that data augmentation is an efficient method to improve performance for many tasks by enlarging the dataset. In this paper, we propose the Mix at three levels for Speech Translation (M^3ST) method to increase the diversity of the augmented training corpus. Specifically, we conduct two phases of fine-tuning based on a model pre-trained with external machine translation (MT) data. In the first phase of fine-tuning, we mix the training corpus at three levels, including the word level, sentence level and frame level, and fine-tune the entire model with the mixed data. In the second phase of fine-tuning, we feed both the original speech sequences and the original text sequences into the model in parallel to fine-tune the network, and use Jensen-Shannon divergence to regularize their outputs. Experiments and analysis on the MuST-C speech translation benchmark show that M^3ST outperforms current strong baselines and achieves state-of-the-art results on eight directions with an average BLEU of 29.9.
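The Jensen-Shannon regularization in the second fine-tuning phase can be sketched as follows; the tensor shapes and the clamp constant are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def js_regularizer(speech_logits, text_logits):
    """Jensen-Shannon divergence between the two input branches.

    speech_logits / text_logits: (B, T, V) decoder logits produced from
    the speech input and from the parallel text input. JS divergence is
    the mean of the two KLs against the mixture distribution M.
    """
    p = F.softmax(speech_logits, dim=-1)
    q = F.softmax(text_logits, dim=-1)
    m = 0.5 * (p + q)
    log_m = m.clamp(min=1e-8).log()
    kl_pm = F.kl_div(log_m, p, reduction="batchmean")  # KL(p || m)
    kl_qm = F.kl_div(log_m, q, reduction="batchmean")  # KL(q || m)
    return 0.5 * (kl_pm + kl_qm)

reg = js_regularizer(torch.randn(4, 10, 8000), torch.randn(4, 10, 8000))
```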
End-to-end speech translation (E2E-ST) has received increasing attention due to its potential for less error propagation, lower latency, and fewer parameters. However, the effectiveness of neural-based approaches to this task is severely limited by the available training corpus, especially for domain adaptation, where in-domain triplet training data is scarce or nonexistent. In this paper, we propose a novel non-parametric method that leverages a domain-specific text translation corpus to achieve domain adaptation for an E2E-ST system. To this end, we first incorporate an additional encoder into the pre-trained E2E-ST model to realize text translation modelling, and then unify the decoder's output representations for the text and speech translation tasks by reducing the correspondent representation mismatch in the available triplet training data. During domain adaptation, a k-nearest-neighbor (kNN) classifier is introduced to produce the final translation distribution using an external datastore built from the domain-specific text translation corpus, while the universal output representation is adopted to perform the similarity search. Experiments on the Europarl-ST benchmark demonstrate that, when only in-domain text translation data is involved, our proposed approach significantly improves the baseline on average in all translation directions, even outperforming the strong in-domain fine-tuning method.
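The kNN step is in the spirit of kNN-MT: retrieve neighbors from the datastore, turn distances into a distribution over the vocabulary, and mix with the model's prediction. The function below is a generic single-step sketch with assumed hyper-parameters, not the paper's implementation.

```python
import torch

def knn_interpolate(model_probs, hidden, datastore_keys, datastore_values,
                    k=8, temperature=10.0, lam=0.5):
    """Interpolate model predictions with a kNN distribution over a datastore.

    model_probs:      (V,) softmax output of the translation model
    hidden:           (D,) decoder output representation at this step
    datastore_keys:   (N, D) stored representations from the in-domain corpus
    datastore_values: (N,) target token ids paired with each key
    """
    dists = ((datastore_keys - hidden) ** 2).sum(dim=-1)   # squared L2 distance
    knn_dists, idx = dists.topk(k, largest=False)
    weights = torch.softmax(-knn_dists / temperature, dim=0)
    knn_probs = torch.zeros(model_probs.size(0))
    knn_probs.scatter_add_(0, datastore_values[idx], weights)
    return lam * knn_probs + (1.0 - lam) * model_probs

out = knn_interpolate(torch.softmax(torch.randn(100), 0), torch.randn(16),
                      torch.randn(500, 16), torch.randint(0, 100, (500,)))
```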
Direct speech-to-speech translation (S2ST), in which all components can be optimized jointly, is advantageous over cascaded approaches to achieve fast inference with a simplified pipeline. We present a novel two-pass direct S2ST architecture, UnitY, which first generates textual representations and subsequently predicts discrete acoustic units. We enhance the model performance by subword prediction in the first-pass decoder, an advanced two-pass decoder architecture design and search strategy, and better training regularization. To leverage large amounts of unlabeled text data, we pre-train the first-pass text decoder on the self-supervised denoising auto-encoding task. Experimental evaluations on benchmark datasets at various data scales demonstrate that UnitY outperforms a single-pass speech-to-unit translation model by 2.5-4.2 ASR-BLEU with a 2.83x decoding speed-up. We show that the proposed methods boost the performance even when predicting spectrograms in the second pass. However, predicting discrete units achieves a 2.51x decoding speed-up compared to that case.
There has recently been great success in pre-training on monolingual data and fine-tuning for machine translation (MT), but it remains unclear how to best leverage a pre-trained model for a given MT task. This paper investigates the benefits and drawbacks of freezing parameters when fine-tuning pre-trained models on MT. We focus on 1) fine-tuning BART, a model trained only on English monolingual data, and 2) fine-tuning mBART, a model trained on monolingual data in 25 languages. For BART, we obtain the best performance by freezing most of the model parameters and adding extra positional embeddings. For mBART, we match the performance of naive fine-tuning for most language pairs with the encoder, and most of the decoder, frozen. The encoder's attention parameters are the most important to fine-tune. When restricting ourselves to an out-of-domain training set for Vietnamese-to-English, we see the largest improvements over the baseline.
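In practice, selective freezing amounts to toggling requires_grad on named parameter groups. A minimal helper might look like the following; the substring names in the usage comment are hypothetical.

```python
import torch.nn as nn

def freeze_except(model: nn.Module, trainable_substrings):
    """Freeze all parameters except those whose name matches a substring.

    E.g. freeze_except(model, ["encoder.layers.0.self_attn", "embed_positions"])
    keeps only the named pieces trainable; the rest of the pre-trained
    weights stay fixed during fine-tuning.
    """
    for name, param in model.named_parameters():
        param.requires_grad = any(s in name for s in trainable_substrings)
    trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
    total = sum(p.numel() for p in model.parameters())
    print(f"trainable: {trainable}/{total} parameters")
```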
We present a new approach to perform zero-shot cross-modal transfer between speech and text for translation tasks. Multilingual speech and text are encoded in a joint fixed-size representation space. Then, we compare different approaches to decode these multimodal and multilingual fixed-size representations, enabling zero-shot translation between languages and modalities. All our models are trained without the need of cross-modal labeled translation data. Despite a fixed-size representation, we achieve very competitive results on several text and speech translation tasks. In particular, we significantly improve the state-of-the-art for zero-shot speech translation on MuST-C. Incorporating a speech decoder in our framework, we introduce the first results for zero-shot direct speech-to-speech and text-to-speech translation.
We present Maestro, a self-supervised training method to unify representations learned from speech and text modalities. Self-supervised learning from speech signals aims to learn the latent structure inherent in the signal, while self-supervised learning from text attempts to capture lexical information. Learning aligned representations from unpaired speech and text sequences is a challenging task. Previous work either implicitly enforced the representations learned from these two modalities to be aligned in the latent space through multitasking and parameter sharing, or explicitly through modality conversion via speech synthesis. While the former suffers from interference between the two modalities, the latter introduces additional complexity. In this paper, we propose Maestro, a novel algorithm designed to learn unified representations from both modalities simultaneously that can transfer to diverse downstream tasks such as Automatic Speech Recognition (ASR) and Speech Translation (ST). Maestro learns unified representations through sequence alignment, duration prediction, and matching embeddings in the learned space via an aligned masked-language-model loss. We establish a new state of the art (SOTA) on VoxPopuli multilingual ASR with an 8% relative reduction in Word Error Rate (WER), on multi-domain SpeechStew ASR (3.7% relative), and on 21-languages-to-English multilingual ST on CoVoST 2 with an improvement of 2.8 BLEU averaged over 21 languages.
Cross-lingual speech adaptation aims to solve the problem of leveraging multiple rich-resource languages to build models for a low-resource target language. Since a low-resource language has limited training data, speech recognition models can easily overfit. In this paper, we propose to use adapters to investigate the performance of multiple adapters for parameter-efficient cross-lingual speech adaptation. Based on our previous MetaAdapter, which implicitly leverages adapters, we propose a novel algorithm called SimAdapter for explicitly learning knowledge from adapters. Our algorithms leverage adapters that can be easily integrated into the Transformer structure. MetaAdapter leverages meta-learning to transfer general knowledge from the training data to the test language. SimAdapter aims to learn the similarities between the source and target languages during fine-tuning using the adapters. We conduct extensive experiments on five low-resource languages in the Common Voice dataset. Results demonstrate that our MetaAdapter and SimAdapter methods can reduce WER by 2.98% and 2.55% with only 2.5% and 15.5% of trainable parameters compared to a strong full-model fine-tuning baseline. Moreover, we also show that these two novel algorithms can be integrated for even better performance, with up to 3.55% relative WER reduction.
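A standard bottleneck adapter, the building block such methods rely on, can be sketched in a few lines; the dimensions below are assumed defaults, not the paper's configuration.

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Bottleneck adapter inserted after a Transformer sub-layer.

    Down-project, non-linearity, up-project, plus a residual connection;
    only these few parameters are trained for the new language while the
    backbone stays frozen.
    """
    def __init__(self, dim=512, bottleneck=64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x):
        return x + self.up(torch.relu(self.down(self.norm(x))))

adapted = Adapter()(torch.randn(2, 10, 512))  # (batch, time, dim), shape preserved
```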
Multilingual Language Models (MLLMs) such as mBERT, XLM, XLM-R, etc. have emerged as a viable option for bringing the power of pre-training to a large number of languages. Given their success in zero-shot transfer learning, a large body of work has emerged on (i) building bigger MLLMs covering a large number of languages, (ii) creating exhaustive benchmarks covering a wider variety of tasks and languages for evaluating MLLMs, (iii) analysing the performance of MLLMs on monolingual, zero-shot cross-lingual and bilingual tasks, (iv) understanding the universal language patterns (if any) learnt by MLLMs, and (v) augmenting the (often) limited capacity of MLLMs to improve their performance on seen or even unseen languages. In this survey, we review the existing literature covering the above broad areas of research pertaining to MLLMs. Based on our survey, we recommend some promising directions for future research.
This paper presents a unified end-to-end framework for both streaming and non-streaming speech translation. While the training recipes for non-streaming speech translation are mature, recipes for streaming speech translation have yet to be established. In this work, we focus on developing a unified model (UniST) that supports both streaming and non-streaming ST from the perspective of fundamental components, including the training objective, the attention mechanism, and the decoding policy. Experiments on MuST-C, the most popular speech-to-text translation benchmark dataset, show that UniST achieves a better trade-off between BLEU score and latency metrics for streaming ST than end-to-end baselines and cascaded models. We will make our code and evaluation tools publicly available.
Data scarcity is one of the main issues with the end-to-end approach for Speech Translation, as compared to the cascaded one. Although most data resources for Speech Translation are originally document-level, they offer a sentence-level view, which can be directly used during training. But this sentence-level view is single and static, potentially limiting the utility of the data. Our proposed data augmentation method, SegAugment, challenges this idea and aims to increase data availability by providing multiple alternative sentence-level views of a dataset. Our method heavily relies on an Audio Segmentation system to re-segment the speech of each document, after which we obtain the target text with alignment methods. The Audio Segmentation system can be parameterized with different length constraints, thus giving us access to multiple and diverse sentence-level views for each document. Experiments on MuST-C show consistent gains across 8 language pairs, with an average increase of 2.2 BLEU points, and up to 4.7 BLEU for lower-resource scenarios in mTEDx. Additionally, we find that SegAugment is also applicable to purely sentence-level data, as in CoVoST, and that it enables Speech Translation models to completely close the gap between the gold and automatic segmentation at inference time.
This paper presents XLS-R, a large-scale model for cross-lingual speech representation learning based on wav2vec 2.0. We train models with up to 2B parameters on nearly half a million hours of publicly available speech audio in 128 languages, an order of magnitude more public data than the largest known prior work. Our evaluation covers a wide range of tasks, domains, data regimes and languages, both high- and low-resource. On the CoVoST-2 speech translation benchmark, we improve the previous state of the art by an average of 7.4 BLEU over 21 translation directions into English. For speech recognition, XLS-R improves over the best known prior work on BABEL, MLS, CommonVoice as well as VoxPopuli, lowering error rates by 14-34% relative on average. XLS-R also sets a new state of the art on VoxLingua107 language identification. Moreover, we show that with sufficient model size, cross-lingual pre-training can outperform English-only pre-training when translating English speech into other languages, a setting which favors monolingual pre-training. We hope XLS-R can help to improve speech processing tasks for many more languages of the world.