End-to-end (E2E) models have become the default choice for state-of-the-art speech recognition systems. Such models are trained on large amounts of labelled data, which is often not available for low-resource languages. Techniques such as self-supervised learning and transfer learning hold promise but have not yet been effective in training accurate models. On the other hand, collecting labelled datasets across a diverse set of domains and speakers is very expensive. In this work, we demonstrate an inexpensive and effective alternative to these approaches by "mining" text and audio pairs for Indian languages from public sources, specifically the public archives of All India Radio. As a key component, we adapt the Needleman-Wunsch algorithm to align sentences with corresponding audio segments, given a long audio recording and a PDF of its transcript, while being robust to errors due to OCR, extraneous text, and non-transcribed speech. We thus create Shrutilipi, a dataset containing over 6,400 hours of labelled audio across 12 Indian languages, totalling 4.95M sentences. On average, Shrutilipi amounts to a 2.3x increase over publicly available labelled data. We establish the quality of Shrutilipi with 21 human evaluators across the 12 languages. We also establish its diversity in terms of the regions, speakers, and named entities represented. Notably, we show that adding Shrutilipi to the training set of Wav2Vec models leads to an average WER decrease of 5.8% across 7 languages on the IndicSUPERB benchmark. For Hindi, which has the most benchmarks (7), the average WER falls from 18.8% to 13.5%. This improvement extends to efficient models: for a Conformer model (10x smaller than Wav2Vec), we show a 2.3% drop in WER. Finally, we demonstrate the diversity of Shrutilipi by showing that models trained on it are more robust to noisy inputs.
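The alignment step can be pictured with a minimal Needleman-Wunsch sketch: globally align the word sequence hypothesized by an ASR model on the long audio against the OCR'd transcript words, so that matched spans (together with word-level timestamps) yield sentence-audio pairs. The scoring values and the way gaps absorb OCR noise or untranscribed speech below are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal Needleman-Wunsch sketch for aligning ASR hypothesis words against OCR'd
# transcript words. Illustrative only; the actual Shrutilipi scoring, gap penalties
# and segment extraction are not reproduced here.
def needleman_wunsch(asr_words, ocr_words, match=2, mismatch=-1, gap=-2):
    n, m = len(asr_words), len(ocr_words)
    score = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        score[i][0] = i * gap
    for j in range(1, m + 1):
        score[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = match if asr_words[i - 1] == ocr_words[j - 1] else mismatch
            score[i][j] = max(score[i - 1][j - 1] + s,   # align the two words
                              score[i - 1][j] + gap,     # skip ASR word (untranscribed speech)
                              score[i][j - 1] + gap)     # skip OCR word (extraneous text / OCR noise)
    # Backtrace to recover aligned (asr_index, ocr_index) pairs
    pairs, i, j = [], n, m
    while i > 0 and j > 0:
        s = match if asr_words[i - 1] == ocr_words[j - 1] else mismatch
        if score[i][j] == score[i - 1][j - 1] + s:
            pairs.append((i - 1, j - 1))
            i, j = i - 1, j - 1
        elif score[i][j] == score[i - 1][j] + gap:
            i -= 1
        else:
            j -= 1
    return list(reversed(pairs))
```

Given such index pairs and the ASR word timestamps, contiguous matched spans can then be cut out of the long audio and paired with the corresponding transcript sentences.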
Recent approaches in speech and language technology pretrain very large models for specific tasks. However, the benefits of such large models are often limited to a few resource-rich languages of the world. In this work, we make multiple contributions towards building ASR systems for low-resource languages from the Indian subcontinent. First, we curate 17,000 hours of raw speech data for 40 Indian languages from a wide variety of domains including education, news, technology, and finance. Second, using this raw speech data we pretrain several variants of wav2vec style models for 40 Indian languages. Third, we analyze the pretrained models to find key features: codebook vectors of similar-sounding phonemes are shared across languages, representations across layers are discriminative of the language family, and attention heads often attend within small local windows. Fourth, we fine-tune the models for downstream ASR in 9 languages and obtain state-of-the-art results on 3 public datasets, including for very low-resource languages such as Sinhala and Nepali. Our work establishes that multilingual pretraining is an effective strategy for building ASR systems for the linguistically diverse speakers of the Indian subcontinent.
We present Vakyansh, an end-to-end toolkit for speech recognition in Indic languages. India is home to 121 languages and around 1.25 billion speakers. Yet most of these languages are low resource in terms of data and pretrained models. Through Vakyansh, we introduce automatic data pipelines for data creation, model training, model evaluation, and deployment. We create 14,000 hours of speech data in 23 Indic languages and train wav2vec 2.0 based pretrained models. These pretrained models are then fine-tuned to create state-of-the-art speech recognition models for 18 Indic languages, followed by language models and punctuation restoration models. We open source all these resources with the mission that this will inspire the speech community to develop speech-first applications using our ASR models in Indic languages.
Automatic speech recognition and text-to-speech systems are primarily trained in a supervised fashion and require high-quality, accurately labeled speech datasets. In this work, we examine common problems with speech data and introduce a toolbox for the construction and interactive error analysis of speech datasets. The construction tool is based on the work of Kürzinger et al., and, to the best of our knowledge, the dataset exploration tool is the first open-source tool of this kind. We demonstrate how to apply these tools to create a Russian speech dataset and to analyze existing speech datasets (Multilingual LibriSpeech, Mozilla Common Voice). The tools are open sourced as part of the NeMo framework.
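The underlying CTC-segmentation technique (following Kürzinger et al.) is also exposed by the standalone `ctc-segmentation` Python package; the sketch below assumes that package's interface rather than the NeMo scripts themselves, and the model outputs, vocabulary, and frame duration are placeholders supplied by the user.

```python
# Rough sketch using the standalone `ctc-segmentation` package (pip install ctc-segmentation).
# Assumes `log_probs` are per-frame character log-probabilities from a CTC acoustic model,
# `vocab` is that model's character list, and `utterances` is the list of ground-truth sentences.
# The NeMo dataset-construction scripts wrap similar logic; exact interfaces may differ.
import ctc_segmentation as cs

def align_utterances(log_probs, vocab, utterances, frame_duration_s):
    config = cs.CtcSegmentationParameters()
    config.char_list = vocab
    config.index_duration = frame_duration_s  # seconds of audio per CTC output frame
    ground_truth_mat, utt_begin_indices = cs.prepare_text(config, utterances)
    timings, char_probs, _ = cs.ctc_segmentation(config, log_probs, ground_truth_mat)
    # Returns one (start_time, end_time, confidence) entry per utterance
    return cs.determine_utterance_segments(config, utt_begin_indices, char_probs,
                                           timings, utterances)
```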
A cornerstone of AI research is the creation and adoption of standardized training and test datasets to earmark the progress of state-of-the-art models. A particularly successful example is the GLUE dataset for training and evaluating natural language understanding (NLU) models for English. The large body of research around BERT-based language models has revolved around performance improvements on the NLU tasks in GLUE. To evaluate language models in other languages, several language-specific GLUE datasets were created. The area of spoken language understanding (SLU) has followed a similar trajectory. The success of large self-supervised models such as wav2vec 2.0 enables the creation of speech models using relatively easily accessible unlabelled data. These models can then be evaluated on SLU tasks, such as the SUPERB benchmark. In this work, we extend this to Indic languages by releasing the IndicSUPERB benchmark. Specifically, we make the following three contributions. (i) We collect Kathbath, containing 1,684 hours of labelled speech data across 12 Indian languages from 1,218 contributors located in 203 districts in India. (ii) Using Kathbath, we create benchmarks across 6 speech tasks: automatic speech recognition, speaker verification, speaker identification (mono/multi), language identification, query by example, and keyword spotting for 12 languages. (iii) On the released benchmarks, we train and evaluate different self-supervised models alongside the commonly used FBANK baseline. We show that language-specific fine-tuned models are more accurate than the baselines on most tasks, including a gap of 76% for the language identification task. However, for speaker identification, self-supervised models trained on large datasets demonstrate an advantage. We hope IndicSUPERB contributes to the progress of developing speech language understanding models for Indian languages.
We present Samanantar, the largest publicly available parallel corpora collection for Indic languages. The collection contains a total of 49.7 million sentence pairs between English and 11 Indic languages (from two language families). Specifically, we compile 12.4 million sentence pairs from existing, publicly available parallel corpora, and additionally mine 37.4 million sentence pairs from the web, resulting in a 4x increase. We mine the parallel sentences from the web by combining many corpora, tools, and methods: (a) web-crawled monolingual corpora, (b) document OCR for extracting sentences from scanned documents, (c) multilingual representation models for aligning sentences, and (d) approximate nearest neighbour search for searching in a large collection of sentences. Human evaluation of samples from the newly mined corpora validates the high quality of the parallel sentences across 11 languages. Further, we extract 83.4 million sentence pairs between all 55 Indic language pairs from the English-centric parallel corpus using English as the pivot language. We train multilingual NMT models spanning all these languages on Samanantar, which outperform existing models and baselines on publicly available benchmarks such as FLORES, establishing the utility of Samanantar. Our data and models are publicly available at https://indicnlp.ai4bharat.org/samanantar/ and we hope they will help advance research in NMT and multilingual NLP.
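Steps (c) and (d) can be sketched as embedding sentences from both languages with a multilingual encoder and retrieving nearest neighbours over those embeddings. The choice of LaBSE, FAISS, and a plain cosine threshold below are illustrative assumptions; the actual mining pipeline's scoring and filtering may differ.

```python
# Hedged sketch of mining parallel sentences with multilingual embeddings plus
# approximate nearest-neighbour search. Encoder, index type, and threshold are
# illustrative; the real pipeline may use margin-based scoring and extra filters.
import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("sentence-transformers/LaBSE")

def mine_pairs(en_sents, indic_sents, threshold=0.8):
    en_emb = np.asarray(encoder.encode(en_sents, normalize_embeddings=True), dtype="float32")
    in_emb = np.asarray(encoder.encode(indic_sents, normalize_embeddings=True), dtype="float32")
    index = faiss.IndexFlatIP(en_emb.shape[1])   # inner product == cosine on unit vectors
    index.add(en_emb)
    scores, ids = index.search(in_emb, 1)        # best English match for each Indic sentence
    return [(en_sents[ids[i][0]], indic_sents[i], float(scores[i][0]))
            for i in range(len(indic_sents)) if scores[i][0] >= threshold]
```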
We study the capabilities of speech processing systems trained simply to predict large amounts of transcripts of audio on the internet. When scaled to 680,000 hours of multilingual and multitask supervision, the resulting models generalize well to standard benchmarks and are often competitive with prior fully supervised results but in a zero-shot transfer setting without the need for any fine-tuning. When compared to humans, the models approach their accuracy and robustness. We are releasing models and inference code to serve as a foundation for further work on robust speech processing.
Building a usable radio-monitoring automatic speech recognition (ASR) system is a challenging task for under-resourced languages, yet it is paramount in societies where radio is the main medium of public communication and discussion. Initial efforts by the United Nations in Uganda have shown how understanding the perceptions of rural people who are excluded from social media matters in national planning. However, these efforts are being challenged by the absence of transcribed speech datasets. In this paper, the Makerere Artificial Intelligence research lab releases a 155-hour Luganda radio speech corpus. To our knowledge, this is the first publicly available radio dataset in sub-Saharan Africa. The paper describes the development of the voice corpus and presents baseline Luganda ASR performance results using the Coqui STT toolkit, an open-source speech recognition toolkit.
Data scarcity is one of the main issues with the end-to-end approach for Speech Translation, as compared to the cascaded one. Although most data resources for Speech Translation are originally document-level, they offer a sentence-level view, which can be directly used during training. But this sentence-level view is single and static, potentially limiting the utility of the data. Our proposed data augmentation method SegAugment challenges this idea and aims to increase data availability by providing multiple alternative sentence-level views of a dataset. Our method heavily relies on an Audio Segmentation system to re-segment the speech of each document, after which we obtain the target text with alignment methods. The Audio Segmentation system can be parameterized with different length constraints, thus giving us access to multiple and diverse sentence-level views for each document. Experiments in MuST-C show consistent gains across 8 language pairs, with an average increase of 2.2 BLEU points, and up to 4.7 BLEU for lower-resource scenarios in mTEDx. Additionally, we find that SegAugment is also applicable to purely sentence-level data, as in CoVoST, and that it enables Speech Translation models to completely close the gap between the gold and automatic segmentation at inference time.
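As a toy illustration of the length-constraint idea, one alternative "view" of a document can be produced by greedily merging consecutive gold segments until a duration window is satisfied. SegAugment itself uses a trained Audio Segmentation system plus alignment methods to obtain the target text, so the function below is only a conceptual sketch of how different (min, max) constraints yield different segmentations.

```python
# Toy re-segmentation under (min_len, max_len) duration constraints by greedily merging
# consecutive sentence-level segments. Conceptual only; not the SegAugment system itself.
def resegment(segments, min_len, max_len):
    """segments: list of (start_sec, end_sec, text); returns merged segments."""
    out, cur = [], None
    for start, end, text in segments:
        if cur is None:
            cur = [start, end, text]
        elif (end - cur[0]) <= max_len:
            cur[1], cur[2] = end, cur[2] + " " + text   # merge into the current window
        else:
            out.append(tuple(cur))                       # window full, emit it
            cur = [start, end, text]
        if cur[1] - cur[0] >= min_len:
            out.append(tuple(cur))                       # window long enough, emit it
            cur = None
    if cur is not None:
        out.append(tuple(cur))
    return out

# Different (min_len, max_len) settings give multiple alternative views of the same document:
# views = [resegment(doc_segments, lo, hi) for lo, hi in [(3, 10), (10, 20), (20, 30)]]
```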
In this paper, we introduce a high-quality and large-scale benchmark dataset for English-Vietnamese speech translation with 508 audio hours, consisting of 331K triplets of (sentence-length audio, English source transcript sentence, Vietnamese target subtitle sentence). We also conduct empirical experiments using strong baselines and find that the traditional "cascaded" approach still outperforms the modern "end-to-end" approach. To the best of our knowledge, this is the first large-scale English-Vietnamese speech translation study. We hope both our publicly available dataset and study can serve as a starting point for future research and applications on English-Vietnamese speech translation. Our dataset is available at https://github.com/vinairesearch/phost
We present SpeechMatrix, a large-scale multilingual corpus of speech-to-speech translations mined from real speech of European Parliament recordings. It contains speech alignments in 136 language pairs with a total of 418 thousand hours of speech. To evaluate the quality of this parallel speech, we train bilingual speech-to-speech translation models on mined data only and establish extensive baseline results on EuroParl-ST, VoxPopuli and FLEURS test sets. Enabled by the multilinguality of SpeechMatrix, we also explore multilingual speech-to-speech translation, a topic which was addressed by few other works. We also demonstrate that model pre-training and sparse scaling using Mixture-of-Experts bring large gains to translation performance. The mined data and models are freely available.
Automatic speech recognition (ASR) on low-resource languages improves the access of linguistic minorities to the technological advantages provided by artificial intelligence (AI). In this paper, we address the problem of data scarcity for the Hong Kong Cantonese language by creating a new Cantonese dataset. Our dataset, the Multi-Domain Cantonese Corpus (MDCC), consists of 73.6 hours of clean read speech paired with transcripts, collected from Cantonese audiobooks from Hong Kong. It covers the philosophy, politics, education, culture, lifestyle, and family domains, spanning a wide range of topics. We also review all existing Cantonese datasets and perform experiments on the two biggest ones (MDCC and Common Voice zh-HK). We analyze the existing datasets according to their speech type, data source, total size, and availability. Results of experiments conducted with Fairseq S2T Transformer, a state-of-the-art ASR model, show the effectiveness of our dataset. In addition, we create a powerful and robust Cantonese ASR model by applying multi-dataset learning on MDCC and Common Voice zh-HK.
We present a textless speech-to-speech translation (S2ST) system that can translate speech from one language into another language and can be built without the need for any text data. Different from existing work in the literature, we tackle the challenge of modeling multi-speaker target speech and train the system with real-world S2ST data. The key to our approach is a self-supervised unit-based speech normalization technique, which finetunes a pre-trained speech encoder with paired audio from multiple speakers and a single reference speaker to reduce the variation due to accents while preserving the lexical content. With only 10 minutes of paired data for speech normalization, we obtain on average a 3.2 BLEU gain when training the S2ST model on the VoxPopuli S2ST dataset, compared to a baseline trained on un-normalized speech targets. We also incorporate automatically mined S2ST data and show an additional 2.0 BLEU gain. To our knowledge, we are the first to establish a textless S2ST technique that can be trained with real-world data and works for multiple language pairs.
Spoken language understanding (SLU) tasks have been studied for many decades in the speech research community, but have not received as much attention as lower-level tasks like speech and speaker recognition. In particular, there are not nearly as many SLU task benchmarks, and many of the existing ones use data that is not freely available to all researchers. Recent work has begun to introduce such benchmark datasets for several tasks. In this work, we introduce several new annotated SLU benchmark tasks based on freely available speech data, which complement existing benchmarks and address gaps in the SLU evaluation landscape. We contribute four tasks: question answering and summarization involve inference over longer speech sequences; named entity localization addresses the speech-specific task of locating the targeted content in the signal; dialog act classification identifies the function of a given speech utterance. We follow the blueprint of the Spoken Language Understanding Evaluation (SLUE) benchmark suite. In order to facilitate the development of SLU models that leverage the success of pre-trained speech representations, we will be publishing for each task (i) annotations for a relatively small fine-tuning set, (ii) annotated development and test sets, and (iii) baseline models for easy reproducibility and comparisons. In this work, we present the details of data collection and annotation and the performance of the baseline models. We also perform sensitivity analysis of pipeline models' performance (speech recognizer + text model) to the speech recognition accuracy, using more than 20 state-of-the-art speech recognition models.
We propose a new method for computing error rates in automatic speech recognition (ASR). This new metric is intended for languages containing half characters, where the same character can be written in different forms. We implement our methodology for Hindi, one of the main languages of the Indic context, and we believe the approach is scalable to other similar languages with large character sets. We call our metrics Alternate Word Error Rate (AWER) and Alternate Character Error Rate (ACER). We train our ASR models using wav2vec 2.0 (Baevski et al., 2020), and additionally use language models to improve performance. Our results show a significant improvement in the analysis of error rates at the word and character level, with the interpretability of the ASR system increasing by up to 3% in AWER and 7% in ACER for Hindi. Our experiments suggest that in languages with complex pronunciation, there are multiple ways of writing words without changing their meaning. In such cases AWER and ACER are more useful metrics than WER and CER. In addition, we open source a new 21-hour benchmark dataset for Hindi along with the scripts for the new metrics.
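To make the idea concrete, an "alternate" word error rate can be sketched as a standard edit distance in which a hypothesis word also counts as correct when it matches any accepted spelling variant of the reference word. The `variants` mapping below is a hypothetical user-supplied table; the actual half-character normalization rules for Hindi are defined in the paper's released metric scripts.

```python
# Illustrative sketch of an alternate word error rate: a hypothesis word matches if it
# equals the reference word OR any of its accepted spelling variants. `variants` maps a
# reference word to a set of acceptable alternative renderings (hypothetical example table).
def awer(ref_words, hyp_words, variants):
    n, m = len(ref_words), len(hyp_words)
    d = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        d[i][0] = i
    for j in range(m + 1):
        d[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            ref, hyp = ref_words[i - 1], hyp_words[j - 1]
            ok = (hyp == ref) or (hyp in variants.get(ref, ()))
            d[i][j] = min(d[i - 1][j - 1] + (0 if ok else 1),  # substitution (free if a variant matches)
                          d[i - 1][j] + 1,                      # deletion
                          d[i][j - 1] + 1)                      # insertion
    return d[n][m] / max(n, 1)
```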
Modern speech recognition systems exhibit rapid performance degradation under domain shift. This issue is especially prevalent in data-scarce settings, such as low-resource languages, where the diversity of training data is limited. In this work we propose M2DS2, a simple and sample-efficient finetuning strategy for large pretrained speech models, based on mixed source and target domain self-supervision. We find that including source domain self-supervision stabilizes training and avoids mode collapse of the latent representations. For evaluation, we collect HParl, a 120-hour speech corpus for Greek, consisting of plenary sessions of the Greek Parliament. We merge HParl with two popular Greek corpora to create GREC-MD, a test-bed for multi-domain evaluation of Greek ASR systems. In our experiments we find that, while other Unsupervised Domain Adaptation baselines fail in this resource-constrained environment, M2DS2 yields significant improvements for cross-domain adaptation, even when only a few hours of in-domain audio are available. When we relax the problem to a weakly supervised setting, we find that independent adaptation for audio using M2DS2 and for language using simple LM augmentation techniques is particularly effective, yielding word error rates comparable to the fully supervised baselines.
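A conceptual sketch of the mixed self-supervision idea is shown below, assuming a hypothetical model wrapper that exposes both a CTC head and a self-supervised loss; the actual M2DS2 recipe uses wav2vec 2.0's contrastive and diversity objectives with its own loss weighting, which this sketch does not reproduce.

```python
# Conceptual sketch of mixed source/target self-supervised finetuning.
# `model` is a hypothetical wrapper: ctc_log_probs() returns (T, B, vocab) log-softmax
# outputs, self_supervised_loss() returns a scalar SSL loss on raw audio.
import torch
import torch.nn.functional as F

def m2ds2_step(model, labeled_src, unlabeled_src, unlabeled_tgt, alpha=0.1):
    # Supervised CTC loss on labelled source-domain speech
    log_probs = model.ctc_log_probs(labeled_src["audio"])
    sup_loss = F.ctc_loss(log_probs, labeled_src["labels"],
                          labeled_src["input_lengths"], labeled_src["label_lengths"])
    # Self-supervised loss on BOTH source- and target-domain unlabelled audio,
    # which is what stabilizes training under domain shift
    ssl_loss = (model.self_supervised_loss(unlabeled_src["audio"])
                + model.self_supervised_loss(unlabeled_tgt["audio"]))
    return sup_loss + alpha * ssl_loss
```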
BibleTTS is a large, high-quality, open speech dataset for ten languages spoken in Sub-Saharan Africa. The corpus contains up to 86 hours of aligned, studio-quality 48kHz single-speaker recordings per language, enabling the development of high-quality text-to-speech models. The ten languages represented are: Akuapem Twi, Asante Twi, Chichewa, Ewe, Hausa, Kikuyu, Lingala, Luganda, Luo, and Yoruba. The corpus is a derivative work of Bible recordings produced and released by the Open.Bible project from Biblica. We have aligned, cleaned, and filtered the original recordings, and additionally hand-checked a subset of the alignments for each language. We present results for text-to-speech models built with Coqui TTS. The data is released under a commercial-friendly CC-BY-SA license.
Developing automatic speech recognition (ASR) for low-resource languages is a challenge due to the small amount of transcribed audio data. For many such languages, audio and text are available separately, but not audio with transcriptions. Using the text, speech can be synthetically produced via text-to-speech (TTS) systems. However, many low-resource languages do not have quality TTS systems either. We propose an alternative: produce synthetic audio by running text from the target language through a trained TTS system for a higher-resource pivot language. We investigate when and how this technique is most effective in low-resource settings. In our experiments, using several thousand synthetic TTS text-speech pairs and duplicating the real data to balance them yields the best results. Our findings suggest that searching over a set of candidate pivot languages can lead to marginal improvements and, surprisingly, that ASR performance can be harmed by increases in measured TTS quality. Applying these findings improves ASR by 64.5% and 45.0% character error rate reduction (CERR) respectively for two low-resource languages: Guaraní and Suba.
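A hedged sketch of the pivot idea: feed target-language text to a TTS model trained for a higher-resource pivot language, and pair the synthesized audio with the original text as additional ASR training data. Coqui TTS and the Spanish model name below are illustrative assumptions, not necessarily the systems used in the paper.

```python
# Hedged sketch of pivot-language TTS synthesis for ASR data augmentation.
# The Coqui TTS model name is a hypothetical choice of pivot-language voice.
from TTS.api import TTS

pivot_tts = TTS(model_name="tts_models/es/css10/vits")  # hypothetical Spanish pivot voice

def synthesize_corpus(target_texts, out_dir):
    pairs = []
    for i, text in enumerate(target_texts):
        wav_path = f"{out_dir}/synth_{i:06d}.wav"
        # The pivot-language TTS reads target-language (e.g. Guaraní) text as-is
        pivot_tts.tts_to_file(text=text, file_path=wav_path)
        pairs.append((wav_path, text))  # synthetic (audio, transcript) pair for ASR training
    return pairs
```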
In this work, we introduce IndicXTREME, a benchmark consisting of nine diverse tasks covering 18 languages from the Indic sub-continent belonging to four different families. Across languages and tasks, IndicXTREME contains a total of 103 evaluation sets, of which 51 are new contributions to the literature. To maintain high quality, we only use human annotators to curate or translate our datasets (for IndicXParaphrase, where an automatic translation system is used, a second human verification and correction step is done). To the best of our knowledge, this is the first effort toward creating a standard benchmark for Indic languages that aims to test the zero-shot capabilities of pretrained language models. We also release IndicCorp v2, an updated and much larger version of IndicCorp that contains 20.9 billion tokens in 24 languages. We pretrain IndicBERT v2 on IndicCorp v2 and evaluate it on IndicXTREME to show that it outperforms existing multilingual language models such as XLM-R and MuRIL.
Personal assistants, automatic speech recognizers and dialogue understanding systems are becoming more critical in our interconnected digital world. A clear example is air traffic control (ATC) communications. ATC aims at guiding aircraft and controlling the airspace in a safe and optimal manner. These voice-based dialogues are carried between an air traffic controller (ATCO) and pilots via very-high frequency radio channels. In order to incorporate these novel technologies into ATC (low-resource domain), large-scale annotated datasets are required to develop the data-driven AI systems. Two examples are automatic speech recognition (ASR) and natural language understanding (NLU). In this paper, we introduce the ATCO2 corpus, a dataset that aims at fostering research on the challenging ATC field, which has lagged behind due to lack of annotated data. The ATCO2 corpus covers 1) data collection and pre-processing, 2) pseudo-annotations of speech data, and 3) extraction of ATC-related named entities. The ATCO2 corpus is split into three subsets. 1) ATCO2-test-set corpus contains 4 hours of ATC speech with manual transcripts and a subset with gold annotations for named-entity recognition (callsign, command, value). 2) The ATCO2-PL-set corpus consists of 5281 hours of unlabeled ATC data enriched with automatic transcripts from an in-domain speech recognizer, contextual information, speaker turn information, signal-to-noise ratio estimate and English language detection score per sample. Both available for purchase through ELDA at http://catalog.elra.info/en-us/repository/browse/ELRA-S0484. 3) The ATCO2-test-set-1h corpus is a one-hour subset from the original test set corpus, that we are offering for free at https://www.atco2.org/data. We expect the ATCO2 corpus will foster research on robust ASR and NLU not only in the field of ATC communications but also in the general research community.