Designing a natural voice interface relies mostly on speech recognition as the medium of interaction between humans and their modern digital equipment. In addition, speech recognition narrows the gap between monolingual individuals, helping them communicate better. However, the field lacks wide support for several widely spoken languages and their dialects, even though most daily conversations are carried out in them. This paper inspects the viability of designing an automatic speech recognition model for the Sudanese dialect, one of the dialects of the Arabic language, whose complexity is a product of historical and social conditions unique to its speakers. These conditions are reflected in both the form and the content of the dialect, so this paper gives an overview of the Sudanese dialect, of the collection of representative resources, and of the pre-processing performed to construct a modest dataset that mitigates the lack of annotated data. We also propose an end-to-end speech recognition model whose design is based on convolutional neural networks. The Sudanese dialect dataset is intended as a stepping stone to enable future natural language processing research targeting the dialect. The designed model provides some insights into the current recognition task and reaches an average Label Error Rate of 73.67%.
Building a usable radio-monitoring automatic speech recognition (ASR) system is a challenging task for under-resourced languages, yet it is crucial in societies where radio is the main medium of public communication and discussion. Initial efforts by the United Nations in Uganda showed how understanding the perceptions of rural people, who are excluded from social media, is important in national planning. However, these efforts are being challenged by the absence of transcribed speech datasets. In this paper, the Makerere Artificial Intelligence research lab releases a 155-hour Luganda radio speech corpus. To our knowledge, this is the first publicly available radio dataset in sub-Saharan Africa. The paper describes the development of the voice corpus and presents baseline Luganda ASR performance results using the open-source Coqui STT speech recognition toolkit.
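As an illustration of how such a Coqui STT baseline can be evaluated, the sketch below transcribes a 16 kHz mono WAV file with the Coqui STT Python bindings; the Luganda model and scorer file names are placeholders, not the released artifacts:

```python
import wave

import numpy as np
from stt import Model  # Coqui STT Python bindings (pip install stt)

# Placeholder file names; substitute the actual acoustic model and scorer.
model = Model("luganda.tflite")
model.enableExternalScorer("luganda.scorer")

# Coqui STT expects 16-bit, 16 kHz, mono PCM audio.
with wave.open("sample.wav", "rb") as wav:
    audio = np.frombuffer(wav.readframes(wav.getnframes()), dtype=np.int16)

print(model.stt(audio))
```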
This paper presents the design and development of multi-dialect automatic speech recognition for Arabic. Deep neural networks are becoming an effective tool for solving sequential-data problems, particularly when the system is trained end-to-end. Arabic speech recognition is a complex task because of the existence of multiple dialects, the non-availability of large corpora, and missing vocalization. Thus, the first contribution of this work is the development of a large multi-dialect corpus with fully or at least partially vocalized transcriptions. Additionally, open-source corpora collected from multiple sources were standardized by defining a common character set to normalize non-standard Arabic alphabets in the transcriptions. The second contribution is the development of a framework for training an acoustic model that achieves state-of-the-art performance. The network architecture comprises a combination of convolutional and recurrent layers. Spectrogram features of the audio data are extracted in the frequency-versus-time domain and fed into the network. The output frames produced by the recurrent model are further trained to align the audio features with their corresponding transcription sequences. Sequence alignment is performed using a beam-search decoder with a tetra-gram language model. The proposed system achieved a 14% error rate, outperforming previous systems.
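As a rough sketch of this architecture family (spectrogram input, convolutional front-end, recurrent layers, CTC-based alignment), the PyTorch model below is illustrative only; layer counts and sizes are assumptions, not the paper's configuration:

```python
import torch
import torch.nn as nn

class ConvRecurrentCTC(nn.Module):
    """Illustrative conv + recurrent acoustic model trained with CTC."""

    def __init__(self, n_mels: int = 80, n_classes: int = 40):
        super().__init__()
        # Convolutional front-end over the (frequency x time) spectrogram.
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, stride=(2, 2), padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, stride=(2, 1), padding=1),
            nn.ReLU(),
        )
        # Recurrent layers over the time axis.
        self.rnn = nn.GRU(32 * (n_mels // 4), 256, num_layers=3,
                          bidirectional=True, batch_first=True)
        self.out = nn.Linear(512, n_classes)  # character logits incl. blank

    def forward(self, spec):                   # spec: (batch, 1, n_mels, time)
        x = self.conv(spec)                    # (batch, 32, n_mels//4, time//2)
        x = x.permute(0, 3, 1, 2).flatten(2)   # (batch, frames, features)
        x, _ = self.rnn(x)
        return self.out(x).log_softmax(-1)     # CTC expects log-probabilities

# CTC aligns output frames with transcription sequences during training;
# note that nn.CTCLoss takes time-major input of shape (frames, batch, classes).
ctc_loss = nn.CTCLoss(blank=0, zero_infinity=True)
```

At inference time, a beam-search decoder with an n-gram language model would replace greedy decoding over these log-probabilities, as described in the abstract.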
One of the fundamental functionalities for accepting a socially assistive robot is its ability to communicate with other agents in the environment. In the context of the ROBIN project, situated dialogue through voice interaction with a robot was investigated. This paper presents different speech recognition experiments with deep neural networks, focusing on producing fast (under 100 ms latency from the network itself) yet reliable models. Even though one of the key desired characteristics is low latency, the final deep neural network model achieves state-of-the-art results for recognizing Romanian, with a word error rate (WER) of 9.91% when combined with a language model, thus improving upon previous results while also offering better runtime performance. In addition, we explore two modules for correcting the ASR output (hyphen and capitalization restoration, and unknown-word correction), targeting the goals of the ROBIN project (dialogue in closed micro-worlds). We design a modular architecture based on APIs that allows an integration engine (in the robot or external) to chain the available modules together as needed. Finally, we test the proposed design by integrating it into the RELATE platform and performing speech recognition on uploaded files or newly recorded speech.
Developing speech technologies is a challenge for low-resource languages, for which both annotated and raw speech data are sparse. Maltese is one such language. In recent years, there has been increased interest in the computational processing of Maltese, including speech technologies, but resources for the latter remain sparse. In this paper, we consider data augmentation techniques for improving speech recognition in such languages, focusing on Maltese as a test case. We consider three different types of data augmentation: unsupervised training, multilingual training, and the use of synthesized speech as training data. The goal is to determine which of these techniques, or combinations thereof, are most effective for improving speech recognition in a language whose starting point is approximately 7 hours of transcribed speech. Our results show that combining the three data augmentation techniques studied here leads to an absolute improvement of 15% without the use of a language model.
Mobile devices are transforming the way people interact with computers, and speech interfaces to applications are becoming ever more important. Recently published automatic speech recognition systems are very accurate, but they often require powerful machinery (specialized graphical processing units) for inference, which makes them impractical to run on commodity devices, especially in streaming mode. Impressed by the inference time of the baseline Kazakh ASR model of (Khassanov et al., 2021), we trained a new baseline acoustic model (on the same dataset as the aforementioned paper) and three language models for use with the Coqui STT framework. The results look promising, but further epochs of training and parameter sweeps, or alternatively restricting the vocabulary the ASR system must support, are needed to reach production-level accuracy.
Automatic speech recognition (ASR) on low-resource languages improves the access of linguistic minorities to the technological advantages provided by artificial intelligence (AI). In this paper, we address the problem of data scarcity for the Hong Kong Cantonese language by creating a new Cantonese dataset. Our dataset, the Multi-Domain Cantonese Corpus (MDCC), consists of 73.6 hours of clean read speech paired with transcripts, collected from Cantonese audiobooks from Hong Kong. It spans the philosophy, politics, education, culture, lifestyle, and family domains, covering a wide range of topics. We also review all existing Cantonese datasets and perform experiments on the two largest ones (MDCC and Common Voice zh-HK), analyzing the existing datasets according to their speech type, data source, total size, and availability. Experimental results obtained with the Fairseq S2T Transformer, a state-of-the-art ASR model, show the effectiveness of our dataset. In addition, we create a powerful and robust Cantonese ASR model by applying multi-dataset learning on MDCC and Common Voice zh-HK.
Deep learning techniques have proven effective in a variety of tasks, especially in the development of speech recognition systems, that is, systems that aim to transcribe an audio sentence into a sequence of written words. Despite progress in the area, speech recognition can still be considered difficult, especially for languages lacking available data, such as Brazilian Portuguese (BP). In this sense, this work presents the development of a public automatic speech recognition (ASR) system using only openly available audio data, built by fine-tuning the Wav2vec 2.0 XLSR-53 model, pre-trained on many languages, on BP data. The final model presents an average word error rate of 12.4% across 7 different datasets (10.5% when applying a language model). To the best of our knowledge, this is the best published result for open ASR systems in BP.
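A minimal inference sketch for such a fine-tuned Wav2vec 2.0 model, using the HuggingFace transformers API, is shown below; the checkpoint name is a hypothetical placeholder, and greedy CTC decoding stands in for the language-model rescoring mentioned above:

```python
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

# Hypothetical checkpoint name; substitute the released fine-tuned BP model.
CHECKPOINT = "some-org/wav2vec2-xlsr-53-brazilian-portuguese"

processor = Wav2Vec2Processor.from_pretrained(CHECKPOINT)
model = Wav2Vec2ForCTC.from_pretrained(CHECKPOINT).eval()

# Wav2vec 2.0 expects 16 kHz mono input.
waveform, sr = torchaudio.load("sample.wav")
waveform = torchaudio.functional.resample(waveform, sr, 16_000).squeeze(0)

inputs = processor(waveform, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits

# Greedy CTC decoding; an external language model would lower the WER further.
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```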
This paper introduces WaveNet, a deep neural network for generating raw audio waveforms. The model is fully probabilistic and autoregressive, with the predictive distribution for each audio sample conditioned on all previous ones; nonetheless we show that it can be efficiently trained on data with tens of thousands of samples per second of audio. When applied to text-to-speech, it yields state-of-the-art performance, with human listeners rating it as significantly more natural sounding than the best parametric and concatenative systems for both English and Mandarin. A single WaveNet can capture the characteristics of many different speakers with equal fidelity, and can switch between them by conditioning on the speaker identity. When trained to model music, we find that it generates novel and often highly realistic musical fragments. We also show that it can be employed as a discriminative model, returning promising results for phoneme recognition.
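For reference, this autoregressive structure corresponds to factorizing the joint probability of a waveform $\mathbf{x} = \{x_1, \ldots, x_T\}$ as a product of per-sample conditionals:

$p(\mathbf{x}) = \prod_{t=1}^{T} p(x_t \mid x_1, \ldots, x_{t-1})$

In WaveNet, each conditional is realized with stacks of dilated causal convolutions, so the receptive field grows exponentially with depth while training remains parallel across time steps.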
The Common Voice corpus is a massively-multilingual collection of transcribed speech intended for speech technology research and development. Common Voice is designed for Automatic Speech Recognition purposes but can be useful in other domains (e.g. language identification). To achieve scale and sustainability, the Common Voice project employs crowdsourcing for both data collection and data validation. The most recent release includes 29 languages, and as of November 2019 there are a total of 38 languages collecting data. Over 50,000 individuals have participated so far, resulting in 2,500 hours of collected audio. To our knowledge this is the largest audio corpus in the public domain for speech recognition, both in terms of number of hours and number of languages. As an example use case for Common Voice, we present speech recognition experiments using Mozilla's DeepSpeech Speech-to-Text toolkit. By applying transfer learning from a source English model, we find an average Character Error Rate improvement of 5.99 ± 5.48 for twelve target languages (German, French, Italian, Turkish, Catalan, Slovenian, Welsh, Irish, Breton, Tatar, Chuvash, and Kabyle). For most of these languages, these are the first ever published results on end-to-end Automatic Speech Recognition.
Automatic speech recognition and text-to-speech systems are primarily trained in a supervised fashion and require high-quality, accurately labeled speech datasets. In this work, we examine common problems with speech data and introduce a toolbox for the construction of speech datasets and for interactive error analysis. The construction tool is based on the work of Kürzinger et al., and, to the best of our knowledge, the dataset exploration tool is the world's first open-source tool of this kind. We demonstrate how to apply these tools to create a Russian speech dataset and to analyze existing speech datasets (Multilingual LibriSpeech, Mozilla Common Voice). The tools are open-sourced as part of the NeMo framework.
Automatic speech recognition (ASR) is a complex and challenging task. In recent years, there have been significant advances in the area. In particular, for the Brazilian Portuguese (BP) language, about 376 hours of data were publicly available for ASR tasks in the second half of 2020. With the release of new datasets in early 2021, this number increased to 574 hours. The existing resources, however, consist of audios containing only read and prepared speech; there is a lack of datasets including spontaneous speech, which is essential for different ASR applications. This paper presents CORAA (Corpus of Annotated Audios) v1, a publicly available dataset for ASR in BP containing 290.77 hours of validated pairs (audio, transcription). CORAA also contains European Portuguese audios (4.69 hours). We also present a public ASR model based on Wav2vec 2.0 XLSR-53 and fine-tuned on CORAA. Our model achieved a word error rate of 24.18% on the CORAA test set and 20.08% on the Common Voice test set. When measuring character error rate, we obtained 11.02% and 6.34% for CORAA and Common Voice, respectively. CORAA was assembled both to improve ASR models in BP with phenomena from spontaneous speech and to motivate young researchers to start their studies on ASR for Portuguese. All corpora are publicly available under the CC BY-NC-ND 4.0 license at https://github.com/nilc-nlp/coraa.
We study the capabilities of speech processing systems trained simply to predict large amounts of transcripts of audio on the internet. When scaled to 680,000 hours of multilingual and multitask supervision, the resulting models generalize well to standard benchmarks and are often competitive with prior fully supervised results but in a zero-shot transfer setting without the need for any fine-tuning. When compared to humans, the models approach their accuracy and robustness. We are releasing models and inference code to serve as a foundation for further work on robust speech processing.
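For reference, zero-shot transcription with the released openai-whisper package looks roughly like this (the model size and file name are arbitrary choices):

```python
import whisper  # pip install openai-whisper

# "base" is one of several released sizes (tiny/base/small/medium/large).
model = whisper.load_model("base")

# transcribe() handles audio loading, resampling to 16 kHz, and chunking;
# the language is auto-detected unless specified explicitly.
result = model.transcribe("sample.wav")
print(result["text"])
```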
Automatic speech recognition (ASR) is a key element of new services that help users interact with automated systems. Deep learning methods have made it possible to deploy systems with word error rates below 5% for English ASR. However, the use of these methods is only feasible for languages with hundreds or thousands of hours of audio and its corresponding transcriptions. To help so-called low-resource languages speed up the availability of resources that can improve the performance of their ASR systems, methods for creating new resources on the basis of existing ones are being investigated. In this paper, we describe our data augmentation approach for improving the results of ASR models for low-resource and agglutinative languages. We conducted experiments developing an ASR system for Quechua using the wav2letter++ model. We reduced the WER by 8.73% with respect to our baseline model. The resulting ASR model obtained a WER of 22.75% and was trained with 99 hours of original resources plus 99 hours of synthetic data obtained by combining text augmentation and synthetic speech generation.
We propose a new method for computing error rates in automatic speech recognition (ASR). The new metric targets languages that contain half-characters and in which the same character can be written in different forms. We implement our methodology for Hindi, one of the main languages of the Indic context, and we believe the approach is scalable to other similar languages with large character sets. We call our metrics Alternate Word Error Rate (AWER) and Alternate Character Error Rate (ACER). We train our ASR models using wav2vec 2.0 (Baevski et al., 2020) and additionally use language models to improve performance. Our results show a significant improvement in the analysis of error rates at the word and character level, with the interpretability of the ASR system improving by up to 3% in AWER and 7% in ACER for Hindi. Our experiments suggest that in languages with complex pronunciations there are multiple ways of writing a word without changing its meaning; in such cases, AWER and ACER are more useful metrics than WER and CER. In addition, we open-source a new 21-hour benchmark dataset for Hindi, along with the scripts for the new metrics.
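The underlying idea — scoring a hypothesis against the closest of several orthographically equivalent reference forms — can be sketched as follows; the actual normalization rules for half-characters are defined in the paper, so this is an illustrative approximation only:

```python
from itertools import product

def edit_distance(ref: list[str], hyp: list[str]) -> int:
    """Standard Levenshtein distance over token sequences."""
    d = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev, d[0] = d[0], i
        for j, h in enumerate(hyp, 1):
            prev, d[j] = d[j], min(d[j] + 1, d[j - 1] + 1,
                                   prev + (r != h))
    return d[-1]

def alternate_wer(ref_alternatives: list[list[str]], hyp: list[str]) -> float:
    """AWER-style score: each reference word may have several acceptable
    written forms; score against the closest combination of forms
    (illustrative, not the paper's exact algorithm)."""
    best = min(edit_distance(list(ref), hyp)
               for ref in product(*ref_alternatives))
    return best / max(len(ref_alternatives), 1)

# Hypothetical example: the second word has two acceptable spellings,
# so a hypothesis matching either one incurs no error.
refs = [["नमस्ते"], ["दुनिया", "दुनिय"]]
print(alternate_wer(refs, ["नमस्ते", "दुनिय"]))  # 0.0
```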
In this paper, we discuss work in progress on collecting speech corpora for four low-resource Indo-Aryan languages - Awadhi, Bhojpuri, Braj, and Magahi - using field methods of linguistic data collection. The total size of the corpora at present is approximately 18 hours (roughly 4-5 hours per language), transcribed and annotated with grammatical information such as part-of-speech tags, morphological features, and Universal Dependencies. We discuss our methodology for collecting data in these languages, most of which was carried out in the midst of the COVID-19 pandemic; one of the aims was to generate some additional income for low-income speakers of these languages. In this paper, we also discuss the results of baseline experiments with automatic speech recognition systems for these languages.
This paper describes a simple yet efficient repetition-based modular system for speeding up air-traffic controller (ATCo) training. For example, a human pilot is still required in EUROCONTROL's ESCAPE lite simulator (see https://www.eurocontrol.int/simulator/escape) during ATCo training, but this need can be met by an automatic system that acts as a pilot. In this paper, we aim to develop and integrate a pseudo-pilot agent into the ATCo training pipeline by merging diverse artificial intelligence (AI) powered modules. The system understands the voice communications issued by the ATCo and, in turn, generates a spoken prompt that follows the pilot's phraseology in response to the initial communication. Our system mainly relies on open-source AI tools and air traffic control (ATC) databases, thus proving its simplicity and ease of replicability. The overall pipeline, sketched below, is composed of the following: (1) a submodule that receives and pre-processes the input stream of raw audio; (2) an automatic speech recognition (ASR) system that transforms audio into a sequence of words; (3) a high-level ATC-related entity parser, which extracts relevant information from the communication, i.e., callsigns and commands; and, finally, (4) a speech synthesizer submodule that generates responses based on the previously extracted high-level ATC entities. Overall, we show that this system could pave the way toward developing a real proof-of-concept pseudo-pilot system, speeding up the training of ATCos while drastically reducing its overall cost.
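A purely structural sketch of this four-stage pipeline is given below; every function body is a stub standing in for the paper's actual modules, and all names and sample values are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class AtcEntities:
    """High-level entities extracted from an ATCo communication."""
    callsign: str
    commands: list[str] = field(default_factory=list)
    values: list[str] = field(default_factory=list)

def preprocess(raw_audio: bytes) -> bytes:
    """(1) Receive and pre-process the raw audio stream (stub)."""
    return raw_audio

def recognize(audio: bytes) -> str:
    """(2) ASR: audio -> word sequence (stub for the ASR system)."""
    return "ryanair nine two alpha descend flight level one zero zero"

def parse_entities(transcript: str) -> AtcEntities:
    """(3) Extract callsign, commands, and values (stub)."""
    return AtcEntities("RYR92A", ["descend"], ["FL100"])

def synthesize_reply(entities: AtcEntities) -> bytes:
    """(4) Generate a spoken pilot read-back from the entities (stub)."""
    reply = f"descending flight level one zero zero, {entities.callsign}"
    return reply.encode()  # placeholder for synthesized audio

def pseudo_pilot(raw_audio: bytes) -> bytes:
    return synthesize_reply(parse_entities(recognize(preprocess(raw_audio))))
```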
The People's Speech is a free-to-download, 30,000-hour and growing supervised conversational English speech recognition dataset licensed for academic and commercial usage under CC-BY-SA (with a CC-BY subset). The data was collected by searching the internet for appropriately licensed audio data with existing transcriptions. We describe our data collection methodology and release our data collection system under the Apache 2.0 license. We show that a model trained on this dataset achieves a 9.98% word error rate on Librispeech's test-clean test set. Finally, we discuss the legal and ethical issues surrounding the creation of a sizable machine learning corpus, and plans for continued maintenance of the project under MLCommons's sponsorship.
Personal assistants, automatic speech recognizers and dialogue understanding systems are becoming more critical in our interconnected digital world. A clear example is air traffic control (ATC) communications. ATC aims at guiding aircraft and controlling the airspace in a safe and optimal manner. These voice-based dialogues are carried between an air traffic controller (ATCO) and pilots via very-high frequency radio channels. In order to incorporate these novel technologies into ATC (a low-resource domain), large-scale annotated datasets are required to develop the data-driven AI systems. Two examples are automatic speech recognition (ASR) and natural language understanding (NLU). In this paper, we introduce the ATCO2 corpus, a dataset that aims at fostering research on the challenging ATC field, which has lagged behind due to lack of annotated data. The ATCO2 corpus covers 1) data collection and pre-processing, 2) pseudo-annotations of speech data, and 3) extraction of ATC-related named entities. The ATCO2 corpus is split into three subsets. 1) The ATCO2-test-set corpus contains 4 hours of ATC speech with manual transcripts and a subset with gold annotations for named-entity recognition (callsign, command, value). 2) The ATCO2-PL-set corpus consists of 5281 hours of unlabeled ATC data enriched with automatic transcripts from an in-domain speech recognizer, contextual information, speaker turn information, a signal-to-noise ratio estimate and an English language detection score per sample. Both are available for purchase through ELDA at http://catalog.elra.info/en-us/repository/browse/ELRA-S0484. 3) The ATCO2-test-set-1h corpus is a one-hour subset of the original test set corpus, which we offer for free at https://www.atco2.org/data. We expect the ATCO2 corpus will foster research on robust ASR and NLU not only in the field of ATC communications but also in the general research community.
This paper introduces a new corpus of Mandarin-English code-switching speech recognition - the TALCS corpus - suitable for training and evaluating code-switching speech recognition systems. The TALCS corpus is derived from real online one-on-one English teaching scenes at TAL Education Group, and it contains roughly 587 hours of speech sampled at 16 kHz. To our best knowledge, TALCS is the largest well-labeled Mandarin-English code-switching open-source ASR dataset in the world. In this paper, we describe the recording procedure in detail, including the audio capturing devices and the corpus environments. The TALCS corpus can be downloaded for free under a permissive license. Using the TALCS corpus, we conduct ASR experiments in two popular speech recognition toolkits, ESPnet and WeNet, to build baseline systems, and we compare the mixture error rate (MER) performance of the two toolkits on TALCS. The experimental results show that the quality of the audio recordings and transcriptions is promising and that the baseline systems are workable.
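For context, the mixture error rate used for code-switching evaluation is typically computed like WER/CER, with the common convention (the paper's exact tokenization may differ) that Mandarin segments are counted in characters and English segments in words:

MER = (S + D + I) / N

where S, D, and I are the substitution, deletion, and insertion counts of the aligned hypothesis over the mixed token sequence, and N is the total number of reference tokens.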