This paper describes a simple yet efficient repetition-based modular system for speeding up air-traffic controller (ATCo) training. For instance, a human pilot is still required in EUROCONTROL's ESCAPE lite simulator (see https://www.eurocontrol.int/simulator/escape) during ATCo training; this need, however, can be met by an automatic system acting as the pilot. In this paper, we aim to develop and integrate a pseudo-pilot agent into the ATCo training pipeline by combining diverse artificial intelligence (AI) powered modules. The system understands the voice communications issued by the ATCo and, in turn, generates a spoken response that follows the pilot's phraseology in reply to the initial communication. Our system relies mainly on open-source AI tools and air traffic control (ATC) databases, demonstrating its simplicity and ease of replicability. The overall pipeline is composed of: (1) a submodule that receives and pre-processes the input stream of raw audio, (2) an automatic speech recognition (ASR) system that transforms audio into a sequence of words, (3) a high-level ATC-related entity parser that extracts relevant information from the communication, i.e., callsigns and commands, and finally (4) a speech synthesizer submodule that generates responses based on the previously extracted high-level ATC entities. Overall, we show that this system could pave the way toward developing a real proof-of-concept pseudo-pilot system, hence speeding up the training of ATCos while drastically reducing its overall cost.
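To make the four-stage pipeline concrete, here is a minimal sketch of how such submodules could be chained; the module functions, entity fields, and example phrases are hypothetical placeholders, not the components used in the paper.

```python
# Hypothetical sketch of the four-stage pseudo-pilot pipeline described above.
# The concrete models (ASR, parser, TTS) are placeholders, not the paper's components.
from dataclasses import dataclass

@dataclass
class AtcEntities:
    callsign: str
    command: str
    value: str

def preprocess(raw_audio: bytes) -> bytes:
    """(1) Clean and segment the raw ATCo audio stream (placeholder)."""
    return raw_audio

def recognize(audio: bytes) -> str:
    """(2) ASR: audio -> word sequence (placeholder for a real recognizer)."""
    return "ryanair one two three descend flight level one zero zero"

def parse_entities(transcript: str) -> AtcEntities:
    """(3) Extract high-level ATC entities (very rough keyword heuristics)."""
    words = transcript.split()
    callsign = " ".join(words[:4])                       # e.g. "ryanair one two three"
    command = "descend" if "descend" in words else "unknown"
    value = "flight level one zero zero" if "level" in transcript else ""
    return AtcEntities(callsign, command, value)

def synthesize_reply(entities: AtcEntities) -> str:
    """(4) Generate the pilot read-back following standard phraseology (text only here)."""
    return f"descending {entities.value}, {entities.callsign}"

if __name__ == "__main__":
    transcript = recognize(preprocess(b"raw-vhf-audio"))
    print(synthesize_reply(parse_entities(transcript)))
```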
Personal assistants, automatic speech recognizers, and dialogue understanding systems are becoming more critical in our interconnected digital world. A clear example is air traffic control (ATC) communications. ATC aims at guiding aircraft and controlling the airspace in a safe and optimal manner. These voice-based dialogues are carried out between an air traffic controller (ATCO) and pilots via very-high-frequency radio channels. In order to incorporate these novel technologies into ATC (a low-resource domain), large-scale annotated datasets are required to develop the data-driven AI systems; two examples are automatic speech recognition (ASR) and natural language understanding (NLU). In this paper, we introduce the ATCO2 corpus, a dataset that aims at fostering research in the challenging ATC field, which has lagged behind due to a lack of annotated data. The ATCO2 corpus covers 1) data collection and pre-processing, 2) pseudo-annotation of speech data, and 3) extraction of ATC-related named entities. The ATCO2 corpus is split into three subsets. 1) The ATCO2-test-set corpus contains 4 hours of ATC speech with manual transcripts and a subset with gold annotations for named-entity recognition (callsign, command, value). 2) The ATCO2-PL-set corpus consists of 5281 hours of unlabeled ATC data enriched with automatic transcripts from an in-domain speech recognizer, contextual information, speaker turn information, a signal-to-noise ratio estimate, and an English language detection score per sample. Both are available for purchase through ELDA at http://catalog.elra.info/en-us/repository/browse/ELRA-S0484. 3) The ATCO2-test-set-1h corpus is a one-hour subset of the original test set corpus that we offer for free at https://www.atco2.org/data. We expect the ATCO2 corpus to foster research on robust ASR and NLU, not only in the field of ATC communications but also in the general research community.
Automatic Speech Recognition (ASR) for air traffic control is generally trained by pooling Air Traffic Controller (ATCO) and pilot data into one set. This is motivated by the fact that pilots' voice communications are scarcer than ATCOs'. Due to this data imbalance and other reasons (e.g., varying acoustic conditions), speech from ATCOs is usually recognized more accurately than speech from pilots. Automatically identifying speaker roles is a challenging task, especially for noisy voice recordings collected with Very High Frequency (VHF) receivers or when the push-to-talk (PTT) signal is unavailable, i.e., both audio channels are mixed. In this work, we propose to (1) automatically segment the ATCO and pilot data based on an intuitive approach exploiting ASR transcripts and (2) subsequently treat the automatic recognition of ATCOs' and pilots' voices as two separate tasks. Our work is performed on VHF audio data with high noise levels, i.e., signal-to-noise ratios (SNR) below 15 dB, as this data is recognized to be helpful for various speech-based machine-learning tasks. Specifically, for the speaker role identification task, the module is a simple yet efficient knowledge-based system exploiting a grammar defined by the International Civil Aviation Organization (ICAO). The system accepts text as input, either manually verified annotations or automatically generated transcripts. The developed approach provides an average accuracy in speaker role identification of about 83%. Finally, we show that training an acoustic model for ASR tasks separately (i.e., separate models for ATCOs and pilots), or using a multitask approach, is well suited to the noisy data and outperforms the traditional ASR system in which all data are pooled together.
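As an illustration of what a knowledge-based role classifier over ICAO-style phraseology might look like, the sketch below assigns a role from a few indicative keywords in the transcript; the cue lists and the tie-breaking rule are illustrative assumptions, not the grammar used in the paper.

```python
# Illustrative keyword-based speaker role identification on ASR transcripts.
# The cue lists are simplified assumptions inspired by ICAO phraseology.
ATCO_CUES = {"cleared", "contact", "descend", "climb", "turn", "squawk", "report"}
PILOT_CUES = {"wilco", "roger", "request", "descending", "climbing", "passing"}

def identify_role(transcript: str) -> str:
    words = set(transcript.lower().split())
    atco_score = len(words & ATCO_CUES)
    pilot_score = len(words & PILOT_CUES)
    if atco_score == pilot_score:
        return "unknown"
    return "atco" if atco_score > pilot_score else "pilot"

print(identify_role("lufthansa four five six descend flight level eight zero"))     # atco
print(identify_role("descending flight level eight zero lufthansa four five six"))  # pilot
```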
Building a usable radio-monitoring automatic speech recognition (ASR) system is a challenging task for under-resourced languages, yet it is essential in societies where radio is the primary medium for public communication and discussion. Initial efforts by the United Nations in Uganda showed how understanding the perceptions of rural people who are excluded from social media is important for national planning. However, these efforts are being challenged by the lack of transcribed speech datasets. In this paper, the Makerere Artificial Intelligence research lab releases a 155-hour Luganda radio speech corpus. To the best of our knowledge, this is the first publicly available radio dataset in sub-Saharan Africa. This paper describes the development of the speech corpus and presents baseline Luganda ASR performance results using the open-source Coqui STT toolkit.
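For orientation, a minimal Coqui STT decoding sketch is shown below; the model and scorer file names are placeholders, and a real baseline would of course use the checkpoint trained on the corpus.

```python
# Minimal Coqui STT decoding sketch (file names are placeholders).
import wave
import numpy as np
from stt import Model  # pip install stt (Coqui STT)

model = Model("luganda_model.tflite")          # hypothetical acoustic model file
model.enableExternalScorer("luganda.scorer")   # optional external language-model scorer

with wave.open("sample_16khz_mono.wav", "rb") as wav:
    audio = np.frombuffer(wav.readframes(wav.getnframes()), dtype=np.int16)

print(model.stt(audio))  # best transcript for the clip
```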
One of the fundamental functionalities for accepting a socially assistive robot is its ability to communicate with other agents in the environment. In the context of the ROBIN project, situated dialogue through voice interaction with a robot was investigated. This paper presents different speech recognition experiments with deep neural networks, focusing on producing fast (under 100 ms latency from the network itself) yet still reliable models. Even though low latency is one of the key desired characteristics, the final deep neural network model achieves state-of-the-art results for recognizing Romanian, obtaining a 9.91% word error rate (WER) when combined with a language model, thereby improving upon previous results while offering better runtime performance. In addition, we explore two modules for correcting the ASR output (hyphen and capitalization restoration, and unknown-word correction), targeting the goals of the ROBIN project (dialogue in closed micro-worlds). We design a modular architecture based on APIs that allows an integration engine (the robot or an external one) to chain the available modules together as needed. Finally, we test the proposed design by integrating it into a relevant platform and by uploading files or recording new speech.
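A small sketch of the module-chaining idea follows: each post-processing step sits behind a uniform callable API and an integration engine composes them in whatever order it needs. The module names, the example corrections, and the Romanian-like sample text are hypothetical.

```python
# Hypothetical sketch of chaining ASR post-processing modules behind a uniform API.
from typing import Callable, List

Module = Callable[[str], str]

def capitalization_restoration(text: str) -> str:
    # Placeholder: capitalize sentence starts only.
    return ". ".join(s.strip().capitalize() for s in text.split("."))

def unknown_word_correction(text: str) -> str:
    # Placeholder: map out-of-vocabulary spellings to known forms.
    corrections = {"robotel": "robotul"}
    return " ".join(corrections.get(w, w) for w in text.split())

def run_pipeline(text: str, modules: List[Module]) -> str:
    for module in modules:  # the integration engine decides which modules to chain
        text = module(text)
    return text

print(run_pipeline("unde este robotel", [unknown_word_correction, capitalization_restoration]))
```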
In this paper, we perform an exhaustive evaluation of different representations to address the intent classification problem in a Spoken Language Understanding (SLU) setup. We benchmark three types of systems for the SLU intent detection task: 1) text-based, 2) lattice-based, and a novel 3) multimodal approach. Our work provides a comprehensive analysis of the achievable performance of different state-of-the-art SLU systems under different circumstances, e.g., automatically versus manually generated transcripts. We evaluate the systems on the publicly available SLURP spoken language resource corpus. Our results indicate that using richer forms of Automatic Speech Recognition (ASR) outputs allows SLU systems to improve over the 1-best setup (4% relative improvement). However, crossmodal approaches, i.e., learning from acoustic and text embeddings, obtain performance similar to the oracle setup, with a relative improvement of 18% over the 1-best configuration. Thus, crossmodal architectures represent a good alternative for overcoming the limitations of working with purely automatically generated textual data.
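To make the crossmodal idea concrete, here is a sketch of late fusion of acoustic and text embeddings for intent classification; the embedding dimensions, the concatenation-based fusion, and the number of intent classes are illustrative assumptions, not the paper's configuration.

```python
# Illustrative crossmodal intent classifier: concatenate acoustic and text embeddings.
import torch
import torch.nn as nn

class CrossmodalIntentClassifier(nn.Module):
    def __init__(self, audio_dim=768, text_dim=768, hidden_dim=256, num_intents=60):
        super().__init__()
        self.fusion = nn.Sequential(
            nn.Linear(audio_dim + text_dim, hidden_dim),
            nn.ReLU(),
            nn.Dropout(0.1),
            nn.Linear(hidden_dim, num_intents),
        )

    def forward(self, audio_emb, text_emb):
        return self.fusion(torch.cat([audio_emb, text_emb], dim=-1))

model = CrossmodalIntentClassifier()
audio_emb = torch.randn(4, 768)   # e.g. pooled acoustic (wav2vec-style) features
text_emb = torch.randn(4, 768)    # e.g. pooled text features over the 1-best transcript
logits = model(audio_emb, text_emb)
print(logits.shape)               # torch.Size([4, 60])
```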
Ensuring proper punctuation and letter casing is a key pre-processing step towards applying complex natural language processing algorithms. This is especially significant for text sources where punctuation and casing are missing, such as the raw output of automatic speech recognition systems. In addition, short text messages and micro-blogging platforms offer unreliable and often erroneous punctuation and casing. This survey offers an overview of both historical and state-of-the-art techniques for restoring punctuation and correcting word casing. Furthermore, current challenges and research directions are highlighted.
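One common formulation in this area treats restoration as token-level classification: for each word, predict the punctuation symbol to append and the casing to apply. The toy sketch below only illustrates that input/output contract; a real system would predict the labels with a sequence model rather than receive them as input.

```python
# Toy illustration of the token-classification view of punctuation and casing restoration:
# each word receives a (punctuation, casing) label, which is then applied to the raw text.
def restore(words, labels):
    out = []
    for word, (punct, casing) in zip(words, labels):
        if casing == "UPPER_FIRST":
            word = word.capitalize()
        elif casing == "ALL_UPPER":
            word = word.upper()
        out.append(word + {"PERIOD": ".", "COMMA": ",", "QUESTION": "?", "O": ""}[punct])
    return " ".join(out)

words = ["how", "are", "you", "doing", "today", "john"]
labels = [("O", "UPPER_FIRST"), ("O", "LOWER"), ("O", "LOWER"),
          ("O", "LOWER"), ("COMMA", "LOWER"), ("QUESTION", "UPPER_FIRST")]
print(restore(words, labels))  # How are you doing today, John?
```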
Spoken language understanding (SLU) tasks involve mapping from speech audio signals to semantic labels. Given the complexity of such tasks, good performance might be expected to require large labeled datasets, which are difficult to collect for each new task and domain. However, recent advances in self-supervised speech representations have made it feasible to consider learning SLU models with limited labeled data. In this work, we focus on low-resource spoken named entity recognition (NER) and address the question: beyond self-supervised pre-training, how can we use external speech and/or text data that are not annotated for the task? We draw on a variety of approaches, including self-training, knowledge distillation, and transfer learning, and consider their applicability to both end-to-end models and pipeline approaches (speech recognition followed by a text model). We find that several of these approaches improve performance in resource-constrained settings beyond the benefits of pre-trained representations alone. Compared to prior work, we find improved F1 scores of up to 16%. While the best baseline model is a pipeline approach, the best performance when using external data is ultimately achieved by an end-to-end model. We provide detailed comparisons and analyses, showing, for example, that the end-to-end models are able to focus on the words most relevant to the entities.
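A condensed sketch of the self-training recipe mentioned above: a teacher model labels external unannotated audio, and only high-confidence pseudo-labels are merged back into the training set for the student. The function names, data containers, and threshold are hypothetical stand-ins, not the paper's exact procedure.

```python
# Hypothetical self-training loop for low-resource spoken NER.
# `train_model` and `teacher.predict` are placeholder abstractions.
def self_train(labeled_set, unlabeled_audio, train_model, confidence_threshold=0.9):
    teacher = train_model(labeled_set)                    # 1) train on the small labeled set
    pseudo_labeled = []
    for clip in unlabeled_audio:                          # 2) pseudo-label external audio
        entities, confidence = teacher.predict(clip)
        if confidence >= confidence_threshold:            # keep only confident predictions
            pseudo_labeled.append((clip, entities))
    student = train_model(labeled_set + pseudo_labeled)   # 3) retrain on the union
    return student
```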
Developing speech technologies is a challenge for low-resource languages, where both annotated and raw speech data are sparse. Maltese is one such language. In recent years, interest in the computational processing of Maltese has increased, including in speech technologies, but resources for the latter remain sparse. In this paper, we consider data augmentation techniques for improving speech recognition in such languages, focusing on Maltese as a test case. We consider three different types of data augmentation: unsupervised training, multilingual training, and the use of synthesized speech as training data. The goal is to determine which of these techniques, or combinations thereof, are most effective for improving speech recognition for a language whose starting point is approximately 7 hours of transcribed speech. Our results show that combining the three data augmentation techniques studied here leads to an absolute improvement of 15% without the use of a language model.
Automatic speech recognition and text-to-speech systems are mainly trained in a supervised fashion and require high-quality, accurately labeled speech datasets. In this work, we examine common problems with speech data and introduce a toolbox for the construction of speech datasets and for interactive error analysis. The construction tool is based on the work of Kürzinger et al., and, to the best of our knowledge, the dataset exploration tool is the world's first open-source tool of this kind. We demonstrate how to apply these tools to create a Russian speech dataset and to analyze existing speech datasets (Multilingual LibriSpeech, Mozilla Common Voice). The tools are open-sourced as part of the NeMo framework.
Automatic Speech Recognition (ASR) is a complex and challenging task. In recent years, significant progress has been made in the area. In particular, for the Brazilian Portuguese (BP) language, around 376 hours of data were publicly available for ASR tasks in the second half of 2020. With the release of new datasets in early 2021, this number increased to 574 hours. The existing resources, however, consist of audios containing only read and prepared speech; there is a lack of datasets that include spontaneous speech, which is essential for different ASR applications. This paper presents CORAA (Corpus of Annotated Audios) v1, a publicly available dataset with 290.77 hours of validated pairs (audio-transcription) for ASR in BP. CORAA also contains European Portuguese audios (4.69 hours). We also present a public ASR model based on Wav2Vec 2.0 XLSR-53 and fine-tuned with CORAA. Our model achieves a word error rate of 24.18% on the CORAA test set and 20.08% on the Common Voice test set. When measuring the character error rate, we obtain 11.02% and 6.34% for CORAA and Common Voice, respectively. CORAA was assembled both to improve ASR models in BP with spontaneous speech and to motivate young researchers to start working on ASR for Portuguese. All the corpora are publicly available at https://github.com/nilc-nlp/coraa under the CC BY-NC-ND 4.0 license.
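For reference, the word and character error rates reported above are the standard edit-distance-based metrics and can be computed as in the snippet below; the `jiwer` package is just one of several implementations, and the example sentences are made up.

```python
# Word and character error rate computation with the jiwer package (pip install jiwer).
import jiwer

reference = "o corpus contém fala espontânea"
hypothesis = "o corpus contem fala espontânea e"

print(f"WER: {jiwer.wer(reference, hypothesis):.2%}")
print(f"CER: {jiwer.cer(reference, hypothesis):.2%}")
```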
Deep learning techniques have proven effective in a variety of tasks, especially in the development of speech recognition systems, i.e., systems that aim to transcribe an audio sentence into a sequence of written words. Despite progress in the area, speech recognition can still be considered difficult, especially for languages lacking available data, such as Brazilian Portuguese (BP). In this sense, this work presents the development of a public Automatic Speech Recognition (ASR) system using only openly available audio data, built by fine-tuning the Wav2Vec 2.0 XLSR-53 model, pre-trained on many languages, with BP data. The final model presents an average word error rate of 12.4% over 7 different datasets (10.5% when applying a language model). To the best of our knowledge, this is the best result among open ASR systems for BP.
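A greedy-decoding sketch of how such a fine-tuned Wav2Vec 2.0 checkpoint is typically used for inference with the `transformers` library follows; the checkpoint identifier and audio file name are placeholders, and no language-model rescoring is shown.

```python
# Greedy CTC decoding with a fine-tuned Wav2Vec 2.0 checkpoint (names are placeholders).
import torch
import soundfile as sf
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

checkpoint = "some-org/wav2vec2-xlsr-53-brazilian-portuguese"  # hypothetical model id
processor = Wav2Vec2Processor.from_pretrained(checkpoint)
model = Wav2Vec2ForCTC.from_pretrained(checkpoint)

speech, sample_rate = sf.read("example_16khz.wav")             # 16 kHz mono audio expected
inputs = processor(speech, sampling_rate=sample_rate, return_tensors="pt")

with torch.no_grad():
    logits = model(inputs.input_values).logits
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```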
Spoken language understanding (SLU) tasks have been studied for many decades in the speech research community, but have not received as much attention as lower-level tasks like speech and speaker recognition. In particular, there are not nearly as many SLU task benchmarks, and many of the existing ones use data that is not freely available to all researchers. Recent work has begun to introduce such benchmark datasets for several tasks. In this work, we introduce several new annotated SLU benchmark tasks based on freely available speech data, which complement existing benchmarks and address gaps in the SLU evaluation landscape. We contribute four tasks: question answering and summarization involve inference over longer speech sequences; named entity localization addresses the speech-specific task of locating the targeted content in the signal; dialog act classification identifies the function of a given speech utterance. We follow the blueprint of the Spoken Language Understanding Evaluation (SLUE) benchmark suite. In order to facilitate the development of SLU models that leverage the success of pre-trained speech representations, we will be publishing for each task (i) annotations for a relatively small fine-tuning set, (ii) annotated development and test sets, and (iii) baseline models for easy reproducibility and comparisons. In this work, we present the details of data collection and annotation and the performance of the baseline models. We also perform sensitivity analysis of pipeline models' performance (speech recognizer + text model) to the speech recognition accuracy, using more than 20 state-of-the-art speech recognition models.
Progress in speech processing has been facilitated by shared datasets and benchmarks. Historically these have focused on automatic speech recognition (ASR), speaker identification, or other lower-level tasks. Interest has been growing in higher-level spoken language understanding tasks, including those using end-to-end models, but there are fewer annotated datasets for such tasks. At the same time, recent work shows the possibility of pre-training generic representations and then fine-tuning for several tasks with relatively little labeled data. We propose to create a suite of benchmark tasks for Spoken Language Understanding Evaluation (SLUE), consisting of limited-size labeled training sets and corresponding evaluation sets. This resource allows the research community to track progress, evaluate pre-trained representations on higher-level tasks, and study open questions such as the utility of pipeline versus end-to-end approaches. We present the first phase of the SLUE benchmark suite, consisting of named entity recognition, sentiment analysis, and ASR on the corresponding datasets. We focus on naturally produced (not read or synthesized) speech and freely available datasets. We provide new transcriptions and annotations of subsets of the VoxCeleb and VoxPopuli datasets, evaluation metrics and results for baseline models, and an open-source toolkit for reproducing the baselines and evaluating new models.
Spoken Language Understanding (SLU) is a core task in most human-machine interaction systems. With the emergence of smart homes, smart phones and smart speakers, SLU has become a key technology for the industry. In a classical SLU approach, an Automatic Speech Recognition (ASR) module transcribes the speech signal into a textual representation from which a Natural Language Understanding (NLU) module extracts semantic information. Recently, end-to-end SLU (E2E SLU) based on deep neural networks has gained momentum, since it benefits from the joint optimization of the ASR and NLU parts, hence limiting the cascading of errors inherent to the pipeline architecture. However, little is known about the actual linguistic properties used by E2E models to predict concepts and intents from speech input. In this paper, we present a study that identifies the signal features and other linguistic properties used by an E2E model to perform the SLU task. The study is carried out in the application domain of a smart home that has to handle non-English (here, French) voice commands. The results show that good E2E SLU performance does not always require perfect ASR capability. Furthermore, the results show that, compared with the pipeline model, the E2E model exhibits superior capabilities in handling background noise and syntactic variation. Finally, a finer-grained analysis suggests that the E2E model uses pitch information in the input signal to identify voice command concepts. The results and methodology outlined in this paper provide a springboard for further analyses of E2E models in speech processing.
This paper introduces a high-quality open-source text-to-speech (TTS) synthesis dataset for Mongolian, a low-resource language spoken by over 10 million people worldwide. The dataset, named MnTTS, consists of about 8 hours of transcribed audio recordings spoken by a 22-year-old professional female Mongolian announcer. It is the first publicly available dataset developed to promote Mongolian TTS applications in both academia and industry. In this paper, we share our experience by describing the dataset development procedures and the challenges faced. To demonstrate the reliability of our dataset, we built a powerful non-autoregressive baseline system based on the FastSpeech2 model and the HiFi-GAN vocoder and evaluated it using the subjective mean opinion score (MOS) and real time factor (RTF) metrics. The evaluation results show that the powerful baseline system trained on our dataset achieves a MOS of around 4 and an RTF of about $3.30\times10^{-1}$, which makes it suitable for practical use. The dataset, training recipe, and pre-trained TTS model are freely available at https://github.com/walker-hyf/mntts.
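The real time factor quoted above is the ratio between the time spent synthesizing and the duration of the generated audio, so a value below 1 means faster-than-real-time synthesis. The helper below is a minimal sketch of that computation; the `synthesize` callable and the sample rate are placeholders, not the paper's FastSpeech2 + HiFi-GAN setup.

```python
# Real time factor (RTF) = synthesis time / duration of the synthesized audio.
import time

def real_time_factor(synthesize, text, sample_rate=22050):
    start = time.perf_counter()
    waveform = synthesize(text)          # placeholder for the actual TTS inference call
    elapsed = time.perf_counter() - start
    audio_seconds = len(waveform) / sample_rate
    return elapsed / audio_seconds

# e.g. an RTF of 0.33 means one second of speech is generated in 0.33 seconds of compute.
```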
Recent approaches in speech and language technology pre-train very large models for specific tasks. However, the benefits of such large models are often limited to a handful of resource-rich languages of the world. In this work, we make multiple contributions towards building ASR systems for low-resource languages from the Indian subcontinent. First, we curate 17,000 hours of raw speech data in 40 Indian languages from a wide variety of domains, including education, news, technology, and finance. Second, using this raw speech data, we pre-train several variants of wav2vec-style models for the 40 Indian languages. Third, we analyze the pre-trained models to find key properties: codebook vectors of similar-sounding phonemes are shared across languages, representations across layers are discriminative of language families, and attention heads often pay attention within small local windows. Fourth, we fine-tune downstream ASR models for 9 languages and obtain state-of-the-art results on 3 public datasets, including on very low-resource languages such as Sinhala and Nepali. Our work establishes that multilingual pre-training is an effective strategy for building ASR systems for the linguistically diverse speakers of the Indian subcontinent.
This paper presents the design and development of multi-dialect automatic speech recognition for Arabic. Deep neural networks are becoming an effective tool for solving sequential data problems, particularly when adopting end-to-end training of the system. Arabic speech recognition is a complex task because of the existence of multiple dialects, the non-availability of large corpora, and missing vocalization. Thus, the first contribution of this work is the development of a large multi-dialect corpus with either full or at least partial vocalization of the transcription. Additionally, the open-source corpus has been gathered from multiple sources, and non-standard Arabic alphabets in the transcription are normalized by defining a common character set. The second contribution is the development of a framework for training an acoustic model that achieves state-of-the-art performance. The network architecture comprises a combination of convolutional and recurrent layers. The spectrogram features of the audio data are extracted in the frequency-versus-time domain and fed into the network. The output frames produced by the recurrent model are further trained to align the audio features with their corresponding transcription sequences. Sequence alignment is performed using a beam search decoder with a tetra-gram language model. The proposed system achieves a 14% error rate, outperforming previous systems.
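A compact sketch of a convolutional-plus-recurrent acoustic model over spectrogram frames, producing per-frame label posteriors suitable for CTC-style alignment, is given below; the layer sizes and output alphabet size are illustrative assumptions, not the paper's exact configuration.

```python
# Illustrative convolutional + recurrent acoustic model over spectrogram frames.
# Layer sizes and vocabulary size are assumptions, not the paper's architecture.
import torch
import torch.nn as nn

class ConvRecurrentCTC(nn.Module):
    def __init__(self, n_mels=80, hidden=256, vocab_size=40):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, stride=(2, 1), padding=1),
            nn.ReLU(),
        )
        self.rnn = nn.GRU(32 * (n_mels // 2), hidden, num_layers=2,
                          batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, vocab_size + 1)  # +1 for the CTC blank

    def forward(self, spectrogram):                           # (batch, time, n_mels)
        x = self.conv(spectrogram.unsqueeze(1).transpose(2, 3))  # (batch, 32, n_mels/2, time)
        x = x.permute(0, 3, 1, 2).flatten(2)                     # (batch, time, 32 * n_mels/2)
        x, _ = self.rnn(x)
        return self.out(x).log_softmax(-1)                       # per-frame label posteriors

model = ConvRecurrentCTC()
frames = torch.randn(2, 200, 80)   # two utterances of 200 spectrogram frames each
print(model(frames).shape)         # torch.Size([2, 200, 41])
```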
In this paper, we present WenetSpeech, a multi-domain Mandarin corpus consisting of over 10,000 hours of high-quality labeled speech, over 2,400 hours of weakly labeled speech, and about 10,000 hours of unlabeled speech, over 22,400 hours in total. We collect the data from YouTube and Podcast sources, covering a variety of speaking styles, scenarios, domains, topics, and noisy conditions. An optical character recognition (OCR)-based method is introduced to generate audio/text segmentation candidates for the YouTube data from its corresponding video subtitles, while a high-quality ASR transcription system is used to generate audio/text pair candidates for the Podcast data. Then we propose a novel end-to-end label error detection approach to further validate and filter the candidates. We also provide three manually labeled high-quality test sets along with WenetSpeech for evaluation: Dev, for cross-validation purposes during training; a matched test set collected from the Internet; and Test_Meeting, recorded from real meetings, as a more challenging mismatched test. Baseline systems trained with WenetSpeech are provided for three popular speech recognition toolkits, namely Kaldi, ESPnet, and WeNet, and recognition results on the three test sets are provided as benchmarks. To the best of our knowledge, WenetSpeech is currently the largest open-source Mandarin speech corpus with transcriptions, which benefits research on production-level speech recognition.
Automatic spoken instruction understanding (SIU) of controller-pilot conversations in air traffic control (ATC) requires not only recognizing the words and semantics of the speech but also identifying the role of the speaker. However, few published works on automatic understanding systems for air traffic communication focus on speaker role identification (SRI). In this paper, we formulate the SRI task for controller-pilot communication as a binary classification problem. Furthermore, text-based, speech-based, and speech-and-text-based multi-modal methods are proposed to provide a comprehensive comparison on the SRI task. To ablate the influence of the comparative methods, various advanced neural network architectures are applied to optimize the implementations of the text-based and speech-based methods. Most importantly, a multi-modal speaker role identification network (MMSRINet) is designed to perform the SRI task by considering both speech and textual modality features. To aggregate the modality features, a modality fusion module is proposed to fuse and squeeze the acoustic and textual representations with a modality attention mechanism and a self-attention pooling layer, respectively. Finally, the comparative approaches are validated on the ATCSpeech corpus, collected from a real-world ATC environment. Experimental results demonstrate that all the comparative approaches are effective for the SRI task, and the proposed MMSRINet shows competitive performance and robustness compared with the other methods on both seen and unseen data, achieving 98.56% and 98.08% accuracy, respectively.
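A simplified sketch of the fusion idea follows: self-attention pooling squeezes each modality's frame or token sequence into a single vector, and a modality attention weight mixes the two pooled vectors before a binary classifier. The dimensions and layer choices are illustrative assumptions, not MMSRINet's exact design.

```python
# Simplified modality fusion: self-attention pooling per modality plus a modality
# attention weighting, followed by a binary (ATCO vs. pilot) classifier.
import torch
import torch.nn as nn

class SelfAttentionPooling(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, x):                       # x: (batch, seq_len, dim)
        weights = torch.softmax(self.score(x), dim=1)
        return (weights * x).sum(dim=1)         # (batch, dim)

class ModalityFusionClassifier(nn.Module):
    def __init__(self, dim=256, num_classes=2):
        super().__init__()
        self.pool_speech = SelfAttentionPooling(dim)
        self.pool_text = SelfAttentionPooling(dim)
        self.modality_attention = nn.Linear(2 * dim, 2)
        self.classifier = nn.Linear(dim, num_classes)

    def forward(self, speech_feats, text_feats):
        s = self.pool_speech(speech_feats)      # pooled acoustic representation
        t = self.pool_text(text_feats)          # pooled textual representation
        alpha = torch.softmax(self.modality_attention(torch.cat([s, t], dim=-1)), dim=-1)
        fused = alpha[:, :1] * s + alpha[:, 1:] * t
        return self.classifier(fused)

model = ModalityFusionClassifier()
logits = model(torch.randn(3, 120, 256), torch.randn(3, 20, 256))
print(logits.shape)                             # torch.Size([3, 2])
```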