This paper presents a new class of fast, non-trainable, entropy-based confidence estimation methods for automatic speech recognition. We show how per-frame entropy values can be normalized and aggregated to obtain a confidence measure per unit and per word for Connectionist Temporal Classification (CTC) and Recurrent Neural Network Transducer (RNN-T) models. The proposed methods have computational complexity similar to that of the traditional method based on the maximum per-frame probability, but they are more adjustable, have a wider effective threshold range, and better separate the confidence distributions of correct and incorrect words. We evaluate the proposed confidence measures on LibriSpeech test sets and show that they are up to 2 and 4 times better than confidence estimation based on the maximum per-frame probability at detecting incorrect words for Conformer-CTC and Conformer-RNN-T models, respectively.
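As a rough illustration of the idea (not the paper's exact formulas), the sketch below computes a per-frame confidence as entropy normalized by its maximum value log(V), then aggregates per word with a minimum; the normalization and aggregation functions are exactly the adjustable pieces the abstract mentions, and all names are illustrative.

```python
import numpy as np

def entropy_confidence(frame_probs):
    """Per-frame confidence from normalized Shannon entropy.
    frame_probs: (T, V) array of per-frame posteriors over V units."""
    vocab_size = frame_probs.shape[-1]
    entropy = -np.sum(frame_probs * np.log(frame_probs + 1e-10), axis=-1)
    # Normalize by the maximum possible entropy log(V) and invert, so that
    # 1.0 corresponds to a one-hot (fully confident) frame distribution.
    return 1.0 - entropy / np.log(vocab_size)

def word_confidence(frame_conf, frame_to_word):
    """Aggregate per-frame confidences into per-word scores (minimum here;
    mean or product are natural alternatives)."""
    scores = {}
    for conf, word_idx in zip(frame_conf, frame_to_word):
        scores[word_idx] = min(conf, scores.get(word_idx, 1.0))
    return scores
```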
Although modern automatic speech recognition (ASR) systems can achieve high performance, they may produce errors that degrade the reader's experience and harm downstream tasks. To improve the accuracy and reliability of ASR hypotheses, we propose a cross-modal post-processing system for speech recognizers that 1) fuses acoustic features and textual features from different modalities, 2) joins the confidence estimator and the error corrector in a multi-task learning fashion, and 3) unifies the error correction and utterance rejection modules. Compared with single-modal or single-task models, our proposed system proves more effective and efficient. Experimental results show that our post-processing system leads to a 10% relative error-rate reduction on both single-speaker and multi-speaker speech for an industrial ASR system, with a latency of about 1.7 ms per token, ensuring that the additional latency introduced by post-processing is acceptable in streaming speech recognition.
This paper proposes a modification to RNN-Transducer (RNN-T) models for automatic speech recognition (ASR). In standard RNN-T, the emission of a blank symbol consumes exactly one input frame; in our proposed method, we introduce additional blank symbols that consume two or more input frames when emitted. We refer to the added symbols as big blanks, and to the method as multi-blank RNN-T. For training multi-blank RNN-Ts, we propose a novel logit under-normalization method that prioritizes the emission of big blanks. With experiments on multiple languages and datasets, we show that multi-blank RNN-T methods can bring relative inference speedups of over 90% and 139% on the English LibriSpeech and German Multilingual LibriSpeech datasets, respectively. The multi-blank RNN-T method also consistently improves ASR accuracy. We will release our implementation of the method in the NeMo (\url{https://github.com/NVIDIA/NeMo}) toolkit.
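A hedged reading of the training trick: under-normalization subtracts an extra positive constant from the usual log-softmax so that symbol "probabilities" sum to less than one, which penalizes every emission and therefore favors paths that cover the input with fewer symbols, i.e., big blanks. The constant's name and value below are assumptions, not the paper's settings.

```python
import torch.nn.functional as F

def under_normalized_log_probs(logits, sigma=0.05):
    """Sketch of logit under-normalization (sigma > 0 is an assumed
    hyper-parameter): every log-probability is lowered by sigma, so paths
    that emit fewer symbols -- e.g. by using big blanks that consume
    several frames at once -- accumulate less total penalty during
    transducer-loss training."""
    return F.log_softmax(logits, dim=-1) - sigma
```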
This paper presents novel Weighted Finite-State Transducer (WFST) topologies for implementing Connectionist Temporal Classification (CTC)-like algorithms for automatic speech recognition. Three new CTC variants are proposed: (1) the "compact-CTC", in which direct transitions between units are replaced with <epsilon> back-off transitions; (2) the "minimal-CTC", which adds <blank> self-loops only when used in WFST composition; and (3) the "selfless-CTC", which disallows self-loops for non-blank units. Compact-CTC allows for 1.5 times smaller WFST decoding graphs and halves memory consumption when training CTC models with the LF-MMI objective, without hurting recognition accuracy. Minimal-CTC reduces graph size and memory consumption by two and four times, respectively, at the cost of a small accuracy drop. Using selfless-CTC can improve the accuracy of wide-context-window models.
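For orientation, here is a minimal pure-Python sketch of the baseline CTC topology that these variants modify, written as arc tuples rather than with a real WFST library; compact-CTC replaces the quadratic set of direct unit-to-unit arcs with <epsilon> back-off transitions (through a shared state), and selfless-CTC drops the non-blank self-loops.

```python
BLANK, EPS = "<blank>", "<eps>"

def ctc_topology_arcs(units):
    """Arcs (src, dst, input, output) of the standard CTC topology.
    State 0 is the start/blank state; state i+1 models units[i]."""
    arcs = [(0, 0, BLANK, EPS)]              # blank self-loop
    for i, u in enumerate(units, start=1):
        arcs.append((0, i, u, u))            # enter a unit after blank
        arcs.append((i, i, u, EPS))          # unit self-loop (repeated frames)
        arcs.append((i, 0, BLANK, EPS))      # return to the blank state
        for j, v in enumerate(units, start=1):
            if i != j:                       # the O(N^2) direct transitions
                arcs.append((i, j, v, v))    # that compact-CTC backs off
    return arcs
```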
Deep learning (DL) has shown great potential in digital pathology applications. The robustness of a diagnostic DL solution is essential for safe clinical deployment. In this work, we evaluate whether adding uncertainty estimates to DL predictions in digital pathology could lead to increased value for clinical applications, either by improving general predictive performance or by detecting mispredictions. We compare the effectiveness of model-integrated methods (MC dropout and deep ensembles) with a model-agnostic approach (test-time augmentation, TTA). Moreover, four uncertainty metrics are compared. Our experiments focus on two domain-shift scenarios: a shift to a different medical center and an underrepresented subtype of cancer. Our results show that uncertainty estimates can add some robustness and reduce sensitivity to the choice of classification threshold. While advanced metrics and deep ensembles perform best in our comparison, the added value over simpler metrics and TTA is small. Importantly, the benefit of all evaluated uncertainty estimation methods is diminished by domain shift.
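As one concrete point of reference, a minimal sketch of the MC-dropout variant with predictive entropy as the uncertainty metric (model and metric names are illustrative; the paper compares four metrics):

```python
import torch

def mc_dropout_uncertainty(model, x, n_samples=20):
    """Keep dropout active at test time, average the sampled softmax
    outputs, and use the predictive entropy of the mean as uncertainty."""
    model.train()  # enables dropout layers during inference
    with torch.no_grad():
        probs = torch.stack([model(x).softmax(dim=-1) for _ in range(n_samples)])
    mean_probs = probs.mean(dim=0)
    entropy = -(mean_probs * mean_probs.clamp_min(1e-10).log()).sum(dim=-1)
    return mean_probs, entropy
```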
We consider the two related problems of detecting if an example is misclassified or out-of-distribution. We present a simple baseline that utilizes probabilities from softmax distributions. Correctly classified examples tend to have greater maximum softmax probabilities than erroneously classified and out-of-distribution examples, allowing for their detection. We assess performance by defining several tasks in computer vision, natural language processing, and automatic speech recognition, showing the effectiveness of this baseline across all. We then show the baseline can sometimes be surpassed, demonstrating the room for future research on these underexplored detection tasks.
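The baseline itself fits in a few lines; a sketch (the detection threshold is chosen on held-out data):

```python
import numpy as np

def msp_score(logits):
    """Maximum softmax probability (MSP): low scores flag likely
    misclassified or out-of-distribution examples."""
    shifted = logits - logits.max(axis=-1, keepdims=True)  # numerical stability
    probs = np.exp(shifted) / np.exp(shifted).sum(axis=-1, keepdims=True)
    return probs.max(axis=-1)

# Example: flag inputs whose MSP falls below a validation-tuned threshold.
# suspicious = msp_score(logits) < threshold
```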
Speech summarization, which produces a text summary from speech, can be achieved by combining automatic speech recognition (ASR) and text summarization (TS). With this cascade approach, we can exploit state-of-the-art models and large training datasets for both subtasks, i.e., Transformer models for ASR and Bidirectional Encoder Representations from Transformers (BERT) for TS. However, ASR errors directly affect the quality of the output summary in the cascade approach. We propose a cascade speech summarization model that is robust to ASR errors and exploits multiple hypotheses generated by the ASR to attenuate the effect of ASR errors on the summary. We investigate several schemes for combining ASR hypotheses. First, we propose using the sum of sub-word embedding vectors, weighted by the posterior values provided by the ASR system, as the input to a BERT-based TS system. Then, we introduce a more general scheme that uses an attention-based fusion module added to a pre-trained BERT module to align and combine several ASR hypotheses. Finally, we perform speech summarization experiments on the How2 dataset and on a newly assembled TED-based dataset that we will release with this paper. These experiments show that retraining the BERT-based TS system with these schemes improves summarization performance, and that the attention-based fusion module is particularly effective.
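A minimal sketch of the first combination scheme, assuming N aligned hypotheses of equal length with per-token posteriors (shapes and names are illustrative):

```python
import torch

def posterior_weighted_embeddings(embedding, token_ids, posteriors):
    """Fuse N ASR hypotheses into one input sequence for the TS model:
    sum sub-word embeddings across hypotheses, weighted by posteriors.
    token_ids: (N, T) int tensor; posteriors: (N, T) float tensor.
    Returns a (T, D) tensor of fused input embeddings."""
    vectors = embedding(token_ids)           # (N, T, D)
    weights = posteriors.unsqueeze(-1)       # (N, T, 1)
    return (weights * vectors).sum(dim=0)    # (T, D)
```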
Automatic Speech Recognition (ASR) systems frequently use a search-based decoding strategy aiming to find the best attainable transcript by considering multiple candidates. One prominent speech recognition decoding heuristic is beam search, which seeks the transcript with the greatest likelihood computed using the predicted distribution. While showing substantial performance gains in various tasks, beam search loses some of its effectiveness when the predicted probabilities are highly confident, i.e., the predicted distribution is massed for a single or very few classes. We show that recently proposed Self-Supervised Learning (SSL)-based ASR models tend to yield exceptionally confident predictions that may hamper beam search from truly considering a diverse set of candidates. We perform a layer analysis to reveal and visualize how predictions evolve, and propose a decoding procedure that improves the performance of fine-tuned ASR models. Our proposed approach does not require further training beyond the original fine-tuning, nor additional model parameters. In fact, we find that our proposed method requires significantly less inference computation than current approaches. We propose aggregating the top M layers, potentially leveraging useful information encoded in intermediate layers, and relaxing model confidence. We demonstrate the effectiveness of our approach by conducting an empirical study on varying amounts of labeled resources and different model sizes, showing consistent improvements in particular when applied to low-resource scenarios.
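A sketch of the aggregation step under one plausible reading (averaging per-layer log-probabilities over the top M layers; the exact aggregation function and all names here are assumptions):

```python
import torch

def aggregate_top_layers(hidden_states, lm_head, m=3):
    """Average the vocabulary log-probabilities predicted from the top M
    layers instead of decoding from the last layer alone; this both taps
    intermediate-layer information and relaxes extreme model confidence."""
    log_probs = [lm_head(h).log_softmax(dim=-1) for h in hidden_states[-m:]]
    return torch.stack(log_probs).mean(dim=0)
```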
In this paper, we examine the problem of post-hoc calibration of modern neural networks, a problem that has drawn considerable attention in recent years. Many calibration methods of varying complexity have been proposed for the task, but there is no consensus about how expressive they should be. We focus on the task of confidence scaling, and specifically on post-hoc methods that generalize temperature scaling, which we refer to as the adaptive temperature scaling family. We analyze expressive functions that improve calibration and propose interpretable methods. We show that when there is plenty of data, complex models such as neural networks yield better performance, but they are prone to fail when the amount of data is limited, a common situation for certain post-hoc calibration applications such as medical diagnosis. We study the functions that expressive methods learn under ideal conditions and design simpler methods with a strong inductive bias towards these well-performing functions. Concretely, we propose entropy-based temperature scaling, a simple method that scales the confidence of a prediction according to its entropy. Results show that our method obtains state-of-the-art performance compared to others and, unlike complex models, is robust against data scarcity. Moreover, our proposed model enables a deeper interpretation of the calibration process.
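A hypothetical instance of the family, for concreteness only (the paper's exact parametric form may differ): the temperature applied to each prediction is a simple function of that prediction's own entropy, with scalar parameters fit on a validation set.

```python
import torch

def entropy_temperature_scale(logits, a, b, min_temp=0.05):
    """Scale each prediction's logits by a temperature that is an affine
    (assumed) function of the prediction's own entropy."""
    probs = logits.softmax(dim=-1)
    entropy = -(probs * probs.clamp_min(1e-10).log()).sum(dim=-1, keepdim=True)
    temperature = (a * entropy + b).clamp_min(min_temp)  # keep T > 0
    return logits / temperature
```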
In this work we propose a novel token-based training strategy that improves Transformer-Transducer (T-T) based speaker change detection (SCD) performance. The conventional T-T based SCD model loss optimizes all output tokens equally. Due to the sparsity of the speaker changes in the training data, the conventional T-T based SCD model loss leads to sub-optimal detection accuracy. To mitigate this issue, we use a customized edit-distance algorithm to estimate the token-level SCD false accept (FA) and false reject (FR) rates during training and optimize model parameters to minimize a weighted combination of the FA and FR, focusing the model on accurately predicting speaker changes. We also propose a set of evaluation metrics that align better with commercial use cases. Experiments on a group of challenging real-world datasets show that the proposed training method can significantly improve the overall performance of the SCD model with the same number of parameters.
We study the capabilities of speech processing systems trained simply to predict large amounts of transcripts of audio on the internet. When scaled to 680,000 hours of multilingual and multitask supervision, the resulting models generalize well to standard benchmarks and are often competitive with prior fully supervised results but in a zero-shot transfer setting without the need for any fine-tuning. When compared to humans, the models approach their accuracy and robustness. We are releasing models and inference code to serve as a foundation for further work on robust speech processing.
Calibration is a popular framework to evaluate whether a classifier knows when it does not know - i.e., its predictive probabilities are a good indication of how likely a prediction is to be correct. Correctness is commonly estimated against the human majority class. Recently, calibration to human majority has been measured on tasks where humans inherently disagree about which class applies. We show that measuring calibration to human majority given inherent disagreements is theoretically problematic, demonstrate this empirically on the ChaosNLI dataset, and derive several instance-level measures of calibration that capture key statistical properties of human judgements - class frequency, ranking and entropy.
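One illustrative instance-level quantity in this spirit (this exact form is an assumption, not one of the paper's derived measures): compare the entropy of the model's predictive distribution with the entropy of the human judgement distribution for the same item.

```python
import numpy as np

def entropy_gap(model_probs, human_label_counts):
    """Absolute difference between model predictive entropy and the
    entropy of the human label distribution for one instance."""
    def entropy(p):
        p = np.asarray(p, dtype=float)
        p = p / p.sum()
        return float(-(p * np.log(np.clip(p, 1e-12, None))).sum())
    return abs(entropy(model_probs) - entropy(human_label_counts))
```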
Annotated data is an essential ingredient in natural language processing for training and evaluating machine learning models, so it is highly desirable for annotations to be of high quality. However, recent work has shown that several popular datasets contain a surprising number of annotation errors or inconsistencies. To alleviate this issue, many annotation error detection methods have been devised over the years. While researchers show that their methods work well on newly introduced datasets, they rarely compare them to previous work or evaluate them on the same datasets. This raises strong concerns about the general performance of these methods and makes it difficult to assess their strengths and weaknesses. We therefore reimplement 18 methods for detecting potential annotation errors and evaluate them on 9 English datasets for text classification as well as token and span labeling. In addition, we define a uniform evaluation setup that includes a new formalization of the annotation error detection task, an evaluation protocol, and general best practices. To facilitate future research and reproducibility, we release our datasets and implementations in an easy-to-use, open-source software package.
Spoken Language Understanding (SLU) is a core task in most human-machine interaction systems. With the emergence of smart homes, smart phones, and smart speakers, SLU has become a key technology for the industry. In a classical SLU approach, an Automatic Speech Recognition (ASR) module transcribes the speech signal into a textual representation, from which a Natural Language Understanding (NLU) module extracts semantic information. Recently, end-to-end SLU (E2E SLU) based on deep neural networks has gained momentum, since it benefits from the joint optimization of the ASR and NLU parts, thereby limiting the cascading of errors inherent in pipeline architectures. However, little is known about the actual linguistic properties that E2E models use to predict concepts and intents from speech input. In this paper, we present a study identifying the signal features and other linguistic properties an E2E model uses to perform the SLU task. The study is carried out in the application domain of a smart home that must handle non-English (here, French) voice commands. The results show that good E2E SLU performance does not always require perfect ASR capability. Furthermore, the results show that the E2E model has superior capabilities in handling background noise and syntactic variation compared to the pipeline model. Finally, a finer-grained analysis suggests that the E2E model uses pitch information in the input signal to identify voice command concepts. The results and methodology outlined in this paper provide a springboard for further analyses of E2E models in speech processing.
Contextual ASR, which takes a list of bias terms as input along with audio, has drawn recent interest as ASR use becomes more widespread. We are releasing contextual biasing lists to accompany the Earnings21 dataset, creating a public benchmark for this task. We present baseline results on this benchmark using a pretrained end-to-end ASR model from the WeNet toolkit. We show results for shallow-fusion contextual biasing applied to two different decoding algorithms. Our baseline results confirm the observation that end-to-end models struggle in particular with words that are rarely or never seen during training, and that existing shallow-fusion techniques do not adequately address this problem. We propose an alternate spelling prediction model that improves recall of rare words by 34.7% relative and of out-of-vocabulary words by 97.2% relative, compared to contextual biasing without alternate spellings. This model is conceptually similar to ones used in prior work, but it is simpler to implement, as it does not rely on a pronunciation dictionary or an existing text-to-speech system.
Personal assistants, automatic speech recognizers and dialogue understanding systems are becoming more critical in our interconnected digital world. A clear example is air traffic control (ATC) communications. ATC aims at guiding aircraft and controlling the airspace in a safe and optimal manner. These voice-based dialogues are carried out between an air traffic controller (ATCO) and pilots via very-high frequency radio channels. In order to incorporate these novel technologies into ATC (a low-resource domain), large-scale annotated datasets are required to develop data-driven AI systems. Two examples are automatic speech recognition (ASR) and natural language understanding (NLU). In this paper, we introduce the ATCO2 corpus, a dataset that aims at fostering research on the challenging ATC field, which has lagged behind due to the lack of annotated data. The ATCO2 corpus covers 1) data collection and pre-processing, 2) pseudo-annotations of speech data, and 3) extraction of ATC-related named entities. The ATCO2 corpus is split into three subsets. 1) The ATCO2-test-set corpus contains 4 hours of ATC speech with manual transcripts and a subset with gold annotations for named-entity recognition (callsign, command, value). 2) The ATCO2-PL-set corpus consists of 5281 hours of unlabeled ATC data enriched with automatic transcripts from an in-domain speech recognizer, contextual information, speaker turn information, a signal-to-noise ratio estimate and an English language detection score per sample. Both are available for purchase through ELDA at http://catalog.elra.info/en-us/repository/browse/ELRA-S0484. 3) The ATCO2-test-set-1h corpus is a one-hour subset of the original test set corpus, which we offer for free at https://www.atco2.org/data. We expect that the ATCO2 corpus will foster research on robust ASR and NLU not only in the field of ATC communications but also in the general research community.
Speaker diarization is the task of labeling audio or video recordings with classes that correspond to speaker identity, or in short, the task of identifying "who spoke when". In the early years, speaker diarization algorithms were developed for speech recognition on multi-speaker audio recordings to enable speaker-adaptive processing. Over time, these algorithms also gained value as a standalone application, providing speaker-specific metainformation for downstream tasks such as audio retrieval. More recently, with the emergence of deep learning technology, which has driven revolutionary changes in research and practice across speech application domains, rapid progress has been made in speaker diarization. In this paper, we review not only the historical development of speaker diarization technology but also the recent advances in neural speaker diarization approaches. Furthermore, we discuss how speaker diarization systems have been integrated with speech recognition applications, and how the recent surge of deep learning is leading the way toward jointly modeling these two components so that they complement each other. By considering these exciting technical trends, we believe this paper offers a valuable contribution to the community, consolidating the recent developments in neural methods and thereby facilitating further progress toward more effective speaker diarization.
Beam search, the dominant ASR decoding algorithm for end-to-end models, generates tree-structured hypotheses. However, recent studies have shown that decoding with hypothesis merging can achieve a more efficient search with comparable or better performance. Unfortunately, the full context kept by recurrent networks is not compatible with hypothesis merging. We propose to use vector-quantized long short-term memory units (VQ-LSTM) in the prediction network of RNN transducers. By training the discrete representation jointly with the ASR network, hypotheses can be actively merged to generate lattices. Our experiments on the Switchboard corpus show that the proposed VQ RNN transducers improve ASR performance over transducers with conventional prediction networks, while also producing dense lattices with a low oracle word error rate (WER) for the same beam size. Additional language model rescoring experiments also demonstrate the effectiveness of the proposed lattice generation scheme.
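A schematic of the merging this enables, with assumed hypothesis fields and merge key (real lattice generation keeps the merged alternatives as extra lattice arcs rather than discarding them):

```python
from dataclasses import dataclass

@dataclass
class Hyp:
    tokens: tuple   # emitted token ids so far
    state: object   # prediction-network hidden state
    score: float    # accumulated log-probability

def merge_beam(beam, quantize):
    """Merge hypotheses whose prediction-network states map to the same
    vector-quantized code, keeping the best-scoring representative."""
    merged = {}
    for hyp in beam:
        key = quantize(hyp.state)
        if key not in merged or hyp.score > merged[key].score:
            merged[key] = hyp
    return list(merged.values())
```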
Spoken language understanding (SLU) systems extract text transcripts as well as semantics associated with intents and slots. SLU systems usually consist of (1) an automatic speech recognition (ASR) module, (2) an interface module that passes relevant ASR outputs onward, and (3) a natural language understanding (NLU) module. Interfaces in SLU systems carry information such as text transcripts, or richer information, from the ASR to the NLU. In this paper, we study how the interface affects joint training for spoken language understanding. Most notably, we obtain state-of-the-art results on the publicly available 50-hour SLURP dataset. We first leverage large ASR and NLU models connected through a text interface and jointly train both models via a sequence loss function. For scenarios that do not leverage pretrained models, the best results are obtained with a richer neural interface trained with the joint sequence loss. Finally, we show that the overall benefit of leveraging pretrained models diminishes as the amount of training data increases.
The study of natural and human-made processes often yields long sequences of temporally ordered values, known as time series (TS). Such processes often consist of multiple states, e.g., the operating modes of a machine, so that state changes in the observed process result in changes in the distribution of the shapes of the measured values. Time series segmentation (TSS) tries to find such changes in a TS post hoc in order to deduce changes in the data-generating process. TSS is typically treated as an unsupervised learning problem whose goal is to identify segments distinguishable by some statistical property. Current TSS algorithms require the user to set domain-dependent hyper-parameters, make assumptions about the TS value distribution, or restrict the types of detectable changes, which limits their applicability. Common hyper-parameters are the measure of segment homogeneity and the number of change points, which are particularly hard to tune for each dataset. We present ClaSP, a novel, highly accurate, hyper-parameter-free, and domain-agnostic method for TSS. ClaSP hierarchically splits a TS into two parts. A change point is determined by training a binary TS classifier for each possible split point and selecting the split that is best at identifying which partition a subsequence comes from. ClaSP learns its two main model parameters from the data using two novel bespoke algorithms. In our experimental evaluation on a benchmark of 115 datasets, we show that ClaSP outperforms the state of the art in accuracy and is fast and scalable. Furthermore, we highlight the properties of ClaSP through several real-world case studies.
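A toy version of the split-scoring idea (ClaSP itself uses a bespoke, far more efficient procedure and learns the window size from the data; the classifier choice and all names below are illustrative):

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

def best_split_point(ts, window=10):
    """Score every candidate split by how well a binary classifier can
    tell windows left of the split from windows right of it; the
    best-separated split is reported as the change point."""
    ts = np.asarray(ts, dtype=float)
    X = np.lib.stride_tricks.sliding_window_view(ts, window)
    n = len(X)
    scores = {}
    for cp in range(window, n - window):         # keep both sides non-trivial
        y = (np.arange(n) >= cp).astype(int)     # left = 0, right = 1
        clf = KNeighborsClassifier(n_neighbors=3)
        scores[cp] = cross_val_score(clf, X, y, cv=3).mean()
    return max(scores, key=scores.get)
```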