This paper presents a unified end-to-end framework for streaming and non-streaming speech translation. While training recipes for non-streaming speech translation (ST) are mature, a recipe for streaming ST has not yet been established. In this work, we focus on developing a unified model (UniST) which supports streaming and non-streaming ST from the perspective of fundamental components, including the training objective, the attention mechanism, and the decoding policy. Experiments on the most popular speech-to-text translation benchmark dataset, MuST-C, show that UniST achieves a better trade-off between BLEU score and latency metrics for streaming ST than strong end-to-end baselines and cascaded models. We will make our code and evaluation tools publicly available.
We present a direct simultaneous speech-to-speech translation (Simul-S2ST) model in which, furthermore, the generation of translation is independent of intermediate text representations. Our approach leverages recent progress in direct speech-to-speech translation with discrete units, in which a sequence of discrete representations, learned in an unsupervised manner instead of continuous spectrogram features, is predicted by the model and passed directly to a vocoder for speech synthesis. We also introduce variational monotonic multihead attention (V-MMA) to handle the challenge of inefficient policy learning in simultaneous speech translation. The simultaneous policy then operates on source speech features and target discrete units. We conduct empirical studies comparing cascaded and direct approaches on the Fisher Spanish-English and MuST-C English-Spanish datasets. The direct simultaneous model is shown to outperform the cascaded model by achieving a better trade-off between translation quality and latency.
The study of the attention mechanism has sparked interest in many fields, such as language modeling and machine translation. Although its patterns have been exploited to perform different tasks, from neural network understanding to textual alignment, no previous work has analysed the encoder-decoder attention behavior in speech translation (ST) nor used it to improve ST on a specific task. In this paper, we fill this gap by proposing an attention-based policy (EDAtt) for simultaneous ST (SimulST) that is motivated by an analysis of the existing attention relations between audio input and textual output. Its goal is to leverage the encoder-decoder attention scores to guide inference in real time. Results on en->{de, es} show that the EDAtt policy achieves overall better results compared to the SimulST state of the art, especially in terms of computational-aware latency.
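The core of an attention-based emit/wait policy like EDAtt can be illustrated with a small sketch: emit the candidate token only when the encoder-decoder attention is not concentrated on the most recently received audio frames. The threshold `alpha` and frame window `lam` below are illustrative parameters, not the paper's exact formulation.

```python
def edatt_decision(attn_scores, lam=2, alpha=0.1):
    """Decide whether to EMIT the candidate token or WAIT for more audio.

    attn_scores: attention weights of the candidate token over the
    encoder frames received so far (sums to ~1). If more than `alpha`
    of the attention mass falls on the last `lam` frames, the token
    likely depends on audio not yet received, so we wait.
    """
    mass_on_recent = sum(attn_scores[-lam:])
    return "WAIT" if mass_on_recent > alpha else "EMIT"

# Attention concentrated on early frames -> safe to emit.
print(edatt_decision([0.5, 0.4, 0.05, 0.03, 0.02]))  # EMIT
# Attention piling up on the newest frames -> wait for more input.
print(edatt_decision([0.1, 0.1, 0.1, 0.3, 0.4]))     # WAIT
```

In practice the scores would come from the decoder's cross-attention at each generation step; here they are hand-written toy values.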
Simultaneous speech translation (SimulST) is the task in which output generation has to be performed on partial, incremental speech input. In recent years, SimulST has become popular due to the spread of cross-lingual application scenarios, such as international live conferences and streamed lectures, in which on-the-fly speech translation can facilitate users' access to audio-visual content. In this paper, we analyze the characteristics of the SimulST systems developed so far, discussing their strengths and weaknesses. We then concentrate on the evaluation framework required to properly assess systems' effectiveness. To this end, we raise the need for a broader performance analysis that also includes the user-experience standpoint. Indeed, SimulST systems should be evaluated not only in terms of quality/latency measures, but also via task-oriented metrics accounting, for instance, for the visualization strategy adopted. In light of this, we highlight which goals have been achieved by the community and what is still missing.
End-to-end Speech Translation (E2E ST) aims to translate source speech into target translation without generating the intermediate transcript. However, existing approaches for E2E ST degrade considerably when only limited ST data are available. We observe that an ST model's performance strongly correlates with its embedding similarity from speech and transcript. In this paper, we propose Word-Aligned COntrastive learning (WACO), a novel method for few-shot speech-to-text translation. Our key idea is bridging word-level representations for both modalities via contrastive learning. We evaluate WACO and other methods on the MuST-C dataset, a widely used ST benchmark. Our experiments demonstrate that WACO outperforms the best baseline methods by 0.7-8.5 BLEU points with only 1-hour parallel data. Code is available at https://anonymous.4open.science/r/WACO .
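The word-level contrastive idea behind WACO can be sketched as an InfoNCE-style loss between speech-side and text-side word embeddings. This is a minimal sketch under the assumption that embeddings at the same index are aligned word pairs; the actual encoders and alignment procedure are placeholders.

```python
import math

def info_nce(speech_vecs, text_vecs, temp=0.07):
    """InfoNCE-style contrastive loss: each speech word embedding should
    be closest to the embedding of its aligned transcript word (same
    index), with the other words in the batch acting as negatives."""
    def cos(a, b):
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return sum(x * y for x, y in zip(a, b)) / (na * nb)

    loss = 0.0
    for i, s in enumerate(speech_vecs):
        logits = [cos(s, t) / temp for t in text_vecs]
        log_z = math.log(sum(math.exp(l) for l in logits))
        loss += -(logits[i] - log_z)  # -log softmax at the positive index
    return loss / len(speech_vecs)
```

Minimizing this pulls the two modalities' word representations together, which is the bridging effect the abstract describes.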
Sign language translation, as a technology with profound social significance, has attracted growing interest from researchers in recent years. However, existing sign language translation methods need to read the entire video before starting to translate, which leads to high inference latency and limits their application in real-world scenarios. To address this problem, we propose SimulSLT, the first end-to-end simultaneous sign language translation model, which can translate sign language videos into target text concurrently. SimulSLT is composed of a text decoder, a boundary predictor, and a masked encoder. We 1) use the wait-k strategy for simultaneous translation; 2) design a novel boundary predictor based on an integrate-and-fire module to output gloss boundaries, which are used to model the correspondence between the sign language video and the glosses; and 3) propose an innovative re-encode method that helps the model obtain richer contextual information, allowing the existing video features to interact fully. Experimental results on the RWTH-PHOENIX-Weather 2014T dataset show that SimulSLT achieves BLEU scores exceeding the latest end-to-end non-simultaneous sign language translation model while maintaining low latency, which demonstrates the effectiveness of our method.
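The wait-k strategy mentioned above follows a fixed schedule: read k source units first, then alternate one write per read until the source is exhausted. A minimal sketch of that schedule (the unit granularity, e.g. video segments vs. words, is an assumption):

```python
def wait_k_schedule(k, src_len, tgt_len):
    """Return the read/write ("R"/"W") action sequence of the wait-k
    policy: read k source units first, then alternate write and read;
    once the source is exhausted, only writes remain."""
    actions, read, written = [], 0, 0
    while written < tgt_len:
        if read < min(written + k, src_len):
            actions.append("R")
            read += 1
        else:
            actions.append("W")
            written += 1
    return actions

# wait-2 over 4 source units and 3 target tokens
print(wait_k_schedule(2, 4, 3))  # ['R', 'R', 'W', 'R', 'W', 'R', 'W']
```

The latency of the policy is controlled entirely by k: larger k means more source context per target token but a longer initial wait.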
End-to-end speech-to-text translation (E2E-ST) has become increasingly popular due to its potential for less error propagation, lower latency, and fewer parameters. Given the triplet training corpus $\langle$speech, transcription, translation$\rangle$, a conventional high-quality E2E-ST system leverages the $\langle$speech, transcription$\rangle$ pairs to pre-train the model and then utilizes the $\langle$speech, translation$\rangle$ pairs to further optimize it. However, this process involves only two-tuple data at each stage, and such loose coupling fails to fully exploit the associations within the triplet data. In this paper, we attempt to model the joint probability of transcription and translation given the speech input, in order to exploit such triplet data directly. On this basis, we propose a novel regularization method for model training that improves the agreement of the dual-path decomposition within the triplet data, which should be equal in theory. To achieve this, we introduce two Kullback-Leibler divergence regularization terms into the training objective to reduce the mismatch between the output probabilities of the two paths. The trained model can then naturally be used as an E2E-ST model via a pre-defined early-stop tag. Experiments on the MuST-C benchmark demonstrate that our proposed approach significantly outperforms state-of-the-art E2E-ST baselines on all 8 language pairs, while achieving better performance on the automatic speech recognition task. Our code is open-sourced at https://github.com/duyichao/E2E-ST-TDA.
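The dual-path agreement idea, that the two decompositions of the joint probability should produce matching distributions, can be sketched as a symmetric pair of KL terms. The toy distributions below stand in for the two paths' output probabilities over a small vocabulary.

```python
import math

def kl(p, q):
    """Kullback-Leibler divergence KL(p || q) for discrete distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def dual_path_regularizer(path_a, path_b):
    """Symmetric KL penalty pushing the two decomposition paths'
    output distributions toward each other, mirroring the two KL
    terms added to the training objective."""
    return kl(path_a, path_b) + kl(path_b, path_a)

# Identical distributions incur no penalty; mismatched ones are penalized.
print(dual_path_regularizer([0.5, 0.5], [0.5, 0.5]))  # 0.0
print(dual_path_regularizer([0.9, 0.1], [0.1, 0.9]) > 0)  # True
```

Adding this penalty to the cross-entropy loss trades a small amount of per-path fit for consistency between the two factorizations.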
Neural transducers have been widely used in automatic speech recognition (ASR). In this paper, we introduce them to streaming end-to-end speech translation (ST), which aims to convert audio signals directly into text in another language. Compared with cascaded ST, which performs ASR followed by text-based machine translation (MT), the proposed Transformer Transducer (TT)-based ST model drastically reduces inference latency, exploits speech information, and avoids error propagation from ASR to MT. To improve the modeling capacity, we propose attention pooling for the joint network in TT. In addition, we extend TT-based ST to multilingual ST, which generates text in multiple languages simultaneously. Experimental results on a large-scale 50 thousand (K) hour pseudo-labeled training set show that TT-based ST not only significantly reduces inference time but also outperforms non-streaming cascaded ST on English-German translation.
This paper describes the submission of our end-to-end YiTrans speech translation system for the IWSLT 2022 offline task, which translates English audio into German, Chinese, and Japanese. The YiTrans system is built on large-scale pre-trained encoder-decoder models. More specifically, we first design a multi-stage pre-training strategy to build multi-modality models with large amounts of labeled and unlabeled data. We then fine-tune the corresponding components of the model for the downstream speech translation tasks. Moreover, we make various efforts to improve performance, such as data filtering, data augmentation, speech segmentation, and model ensembling. Experimental results show that our YiTrans system obtains significant improvements over strong baselines in all three translation directions, and a +5.2 BLEU improvement over last year's best end-to-end system on tst2021 English-German. Our final submissions rank first among end-to-end systems on English-German and English-Chinese according to the automatic evaluation metrics. We make our code and models publicly available.
To alleviate the data scarcity problem in End-to-end speech translation (ST), pre-training on data for speech recognition and machine translation is considered as an important technique. However, the modality gap between speech and text prevents the ST model from efficiently inheriting knowledge from the pre-trained models. In this work, we propose AdaTranS for end-to-end ST. It adapts the speech features with a new shrinking mechanism to mitigate the length mismatch between speech and text features by predicting word boundaries. Experiments on the MUST-C dataset demonstrate that AdaTranS achieves better performance than the other shrinking-based methods, with higher inference speed and lower memory usage. Further experiments also show that AdaTranS can be equipped with additional alignment losses to further improve performance.
Simultaneous translation systems start producing output while still processing the partial source sentence in the incoming input stream. These systems need to decide when to read more input and when to write output. These decisions depend on the structure of the source/target languages and on the information contained in the partial input sequence, so the read/write decision policy carries over across input modalities (i.e., speech and text). This motivates us to leverage the text transcripts corresponding to the speech input to improve simultaneous speech-to-text translation (SimulST). We propose to improve the decision policy of SimulST systems by means of the simultaneous text-to-text translation (SimulMT) task. We also extend several techniques from the offline speech translation domain to explore the role of the SimulMT task in improving SimulST performance. Overall, we achieve a 34.66% / 4.5 BLEU improvement over the baseline across different latency regimes of the English-German (EnDe) SimulST task.
Speech-to-speech translation (S2ST) converts input speech into another language. A challenge of delivering S2ST in real time is the accumulated delay between the translation and speech synthesis modules. While recent incremental text-to-speech (iTTS) models have shown large quality improvements, they typically require additional future text input to reach optimal performance. In this work, we minimize the initial waiting time of iTTS by adapting the upstream speech translator to generate high-quality pseudo-lookahead for the speech synthesizer. After mitigating the initial delay, we demonstrate that the duration of the synthesized speech also plays a crucial role in latency. We formalize this as a latency metric and then propose a simple yet effective duration-scaling approach to reduce latency. Our approaches consistently reduce latency by 0.2-0.5 seconds without sacrificing speech translation quality.
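The duration-scaling idea, shortening the synthesized speech slightly so the output does not lag ever further behind the source, reduces latency in proportion to the scaling factor. A toy calculation (the 0.9 factor and the durations are illustrative values, not the paper's):

```python
def scaled_duration(phoneme_durations_ms, scale=0.9):
    """Scale each predicted phoneme duration to shorten the total
    speech output; a scale < 1 speeds up synthesis and thus reduces
    the lag that accumulates behind the source speaker."""
    return [d * scale for d in phoneme_durations_ms]

durs = [80, 120, 100]          # predicted durations in milliseconds
saved = sum(durs) - sum(scaled_duration(durs))
print(f"latency saved: {saved:.0f} ms")  # 30 ms on this toy input
```

The trade-off is naturalness: too aggressive a scale makes the output audibly rushed, which is why the reported gains stay in the fraction-of-a-second range.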
We introduce fairseq S2T, a fairseq extension for speech-to-text (S2T) modeling tasks such as end-to-end speech recognition and speech-to-text translation. It follows fairseq's careful design for scalability and extensibility. We provide end-to-end workflows from data pre-processing and model training to offline inference. We implement state-of-the-art RNN-based, Transformer-based, and Conformer-based models, and open-source detailed training recipes. Fairseq's machine translation models and language models can be seamlessly integrated into S2T workflows for multi-task learning or transfer learning. Fairseq S2T documentation and examples are available at https://github.com/pytorch/fairseq/tree/master/examples/speech_to_text.
Direct speech-to-speech translation (S2ST), in which all components can be optimized jointly, is advantageous over cascaded approaches to achieve fast inference with a simplified pipeline. We present a novel two-pass direct S2ST architecture, {\textit UnitY}, which first generates textual representations and predicts discrete acoustic units subsequently. We enhance the model performance by subword prediction in the first-pass decoder, advanced two-pass decoder architecture design and search strategy, and better training regularization. To leverage large amounts of unlabeled text data, we pre-train the first-pass text decoder based on the self-supervised denoising auto-encoding task. Experimental evaluations on benchmark datasets at various data scales demonstrate that UnitY outperforms a single-pass speech-to-unit translation model by 2.5-4.2 ASR-BLEU with 2.83x decoding speed-up. We show that the proposed methods boost the performance even when predicting spectrogram in the second pass. However, predicting discrete units achieves 2.51x decoding speed-up compared to that case.
Simultaneous translation, which begins translating each sentence after receiving only a few words of the source sentence, plays an important role in many scenarios. Although the previous prefix-to-prefix framework is considered suitable for simultaneous translation and achieves good performance, it still has two unavoidable drawbacks: the high computational cost of training a separate model for each latency $k$, and an insufficient ability to encode information, since each target token can only attend to a specific source prefix. We propose a novel framework that adopts a simple but effective decoding strategy designed for full-sentence models. Within this framework, training a single full-sentence model can serve any given latency and saves computational resources. Moreover, since a full-sentence model is able to encode the whole sentence, our decoding strategy can enhance, in real time, the information maintained in the decoder states. Experimental results show that our method achieves better translation quality than baselines in 4 directions: Zh$\rightarrow$En, En$\rightarrow$Ro, and En$\leftrightarrow$De.
Speech segmentation, which splits long speech into short segments, is essential for speech translation (ST). Popular VAD tools such as WebRTC VAD typically rely on pause-based segmentation. Unfortunately, pauses in speech do not necessarily match sentence boundaries, and sentences can be connected by very short pauses that are difficult for VAD to detect. In this study, we propose a speech segmentation method using a binary classification model trained on a segmented bilingual speech corpus. We also propose a hybrid method that combines VAD with the above speech segmentation method. Experimental results show that the proposed method is more suitable for both cascaded and end-to-end ST systems than conventional segmentation methods. The hybrid method further improves translation performance.
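The hybrid idea, keeping VAD's pause-based boundaries while adding classifier-predicted sentence boundaries that VAD misses, can be sketched as merging two boundary sets. The frame indices and the minimum-gap rule below are illustrative assumptions, not the paper's exact procedure.

```python
def hybrid_boundaries(vad_bounds, clf_bounds, min_gap=10):
    """Merge pause-based VAD boundaries with boundaries predicted by a
    binary sentence-boundary classifier, dropping classifier boundaries
    that fall too close (< min_gap frames) to an existing VAD one."""
    merged = list(vad_bounds)
    for b in clf_bounds:
        if all(abs(b - v) >= min_gap for v in merged):
            merged.append(b)
    return sorted(merged)

# VAD found pauses at frames 100 and 250; the classifier also flags a
# sentence boundary at 180 that has no audible pause.
print(hybrid_boundaries([100, 250], [105, 180]))  # [100, 180, 250]
```

The classifier boundary at 105 is discarded as redundant with the VAD boundary at 100, while 180 recovers a sentence break the VAD missed.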
The black-box nature of end-to-end speech translation (E2E ST) systems makes it difficult to understand how source language inputs are being mapped to the target language. To solve this problem, we would like to simultaneously generate automatic speech recognition (ASR) and ST predictions such that each source language word is explicitly mapped to a target language word. A major challenge arises from the fact that translation is a non-monotonic sequence transduction task due to word ordering differences between languages -- this clashes with the monotonic nature of ASR. Therefore, we propose to generate ST tokens out-of-order while remembering how to re-order them later. We achieve this by predicting a sequence of tuples consisting of a source word, the corresponding target words, and post-editing operations dictating the correct insertion points for the target word. We examine two variants of such operation sequences which enable generation of monotonic transcriptions and non-monotonic translations from the same speech input simultaneously. We apply our approach to offline and real-time streaming models, demonstrating that we can provide explainable translations without sacrificing quality or latency. In fact, the delayed re-ordering ability of our approach improves performance during streaming. As an added benefit, our method performs ASR and ST simultaneously, making it faster than using two separate systems to perform these tasks.
translated by 谷歌翻译
Data scarcity is one of the main issues with the end-to-end approach for Speech Translation, as compared to the cascaded one. Although most data resources for Speech Translation are originally document-level, they offer a sentence-level view, which can be directly used during training. But this sentence-level view is single and static, potentially limiting the utility of the data. Our proposed data augmentation method SegAugment challenges this idea and aims to increase data availability by providing multiple alternative sentence-level views of a dataset. Our method heavily relies on an Audio Segmentation system to re-segment the speech of each document, after which we obtain the target text with alignment methods. The Audio Segmentation system can be parameterized with different length constraints, thus giving us access to multiple and diverse sentence-level views for each document. Experiments in MuST-C show consistent gains across 8 language pairs, with an average increase of 2.2 BLEU points, and up to 4.7 BLEU for lower-resource scenarios in mTEDx. Additionally, we find that SegAugment is also applicable to purely sentence-level data, as in CoVoST, and that it enables Speech Translation models to completely close the gap between the gold and automatic segmentation at inference time.
How to solve the data scarcity problem for end-to-end speech-to-text translation (ST)? It's well known that data augmentation is an efficient method to improve performance for many tasks by enlarging the dataset. In this paper, we propose Mix at three levels for Speech Translation (M^3ST) method to increase the diversity of the augmented training corpus. Specifically, we conduct two phases of fine-tuning based on a pre-trained model using external machine translation (MT) data. In the first stage of fine-tuning, we mix the training corpus at three levels, including word level, sentence level and frame level, and fine-tune the entire model with mixed data. At the second stage of fine-tuning, we take both original speech sequences and original text sequences in parallel into the model to fine-tune the network, and use Jensen-Shannon divergence to regularize their outputs. Experiments on MuST-C speech translation benchmark and analysis show that M^3ST outperforms current strong baselines and achieves state-of-the-art results on eight directions with an average BLEU of 29.9.
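The frame-level mixing in M^3ST can be illustrated mixup-style: interpolate two speech feature sequences with a coefficient lambda. This is a generic mixup sketch under that assumption, not necessarily the paper's exact mixing recipe.

```python
def mix_frames(feats_a, feats_b, lam=0.7):
    """Frame-level mixing of two speech feature sequences (mixup-style):
    each frame becomes the convex combination lam*a + (1-lam)*b.
    Sequences are truncated to the shorter length for simplicity."""
    return [
        [lam * x + (1 - lam) * y for x, y in zip(fa, fb)]
        for fa, fb in zip(feats_a, feats_b)
    ]

a = [[1.0, 0.0], [0.0, 1.0]]
b = [[0.0, 1.0], [1.0, 0.0]]
print([[round(v, 2) for v in f] for f in mix_frames(a, b)])
# [[0.7, 0.3], [0.3, 0.7]]
```

Word- and sentence-level mixing follow the same principle at coarser granularity, which is what gives the augmented corpus its diversity.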
Automatic subtitling is the task of automatically translating the speech of an audiovisual product into short pieces of timed text, in other words, subtitles and their corresponding timestamps. The generated subtitles need to conform to multiple space and time requirements (length, reading speed) while being synchronized with the speech and segmented in a way that facilitates comprehension. Given its considerable complexity, automatic subtitling has so far been addressed by separately handling transcription, translation, segmentation into subtitles, and timestamp prediction. In this paper, we propose the first direct automatic subtitling model, which generates target-language subtitles and their timestamps from the source speech in a single solution. Comparisons with state-of-the-art cascaded models trained on both in-domain and out-of-domain data show that our system provides high-quality subtitles while also being competitive in terms of conformity, with all the advantages of maintaining a single model.