In this paper, a neural network augmentation is proposed for a Kalman filtering variant of the weighted prediction error (WPE) method. The stochastic filter variations are predicted end-to-end by a deep neural network (DNN) from the filtering residual error and signal characteristics. The proposed framework allows robust dereverberation on a single-channel noisy reverberant dataset similar to WHAMR!. The Kalman filtering WPE introduces distortions in the enhanced signal when the target speech power spectral density is not perfectly known and the observations are noisy, as it predicts the filter variations only from the residual error. The proposed approach avoids these distortions by correcting the filter variation estimates in a data-driven way, thereby increasing the robustness of the method to noisy scenarios. Furthermore, it yields strong dereverberation and denoising performance compared to a DNN-supported recursive least squares variant, especially for highly noisy inputs.
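To make the state-space picture concrete, below is a minimal, heavily simplified sketch of one Kalman update of the WPE prediction filter for a single frequency bin. The DNN-predicted process noise `q_dnn` stands in for the corrected filter variation; the interface and conjugation conventions are simplifying assumptions, not the paper's exact algorithm.

```python
import numpy as np

def kalman_wpe_step(g, P, y_buf, y_t, psd_s, q_dnn):
    """One simplified Kalman update of the WPE prediction filter (one freq. bin).

    g      : (K,) complex prediction filter taps (the state)
    P      : (K, K) state error covariance
    y_buf  : (K,) buffer of delayed past observations
    y_t    : current noisy observation (complex scalar)
    psd_s  : target speech PSD estimate (acts as observation noise power)
    q_dnn  : DNN-predicted process noise, i.e. the corrected filter variation
    """
    # Prediction: random-walk state model with DNN-corrected process noise
    P = P + q_dnn * np.eye(len(g))
    # Innovation = dereverberated residual, e = y_t - g^H y_buf
    e = y_t - np.vdot(g, y_buf)
    # Kalman gain and measurement update
    S = psd_s + np.real(np.vdot(y_buf, P @ y_buf))
    k = (P @ y_buf) / S
    g = g + k * e
    P = P - np.outer(k, np.conj(y_buf)) @ P
    return g, P, e
```

When `psd_s` is misestimated under noise, the innovation `e` is biased; the role of the DNN is to keep the implied filter variation `q_dnn` sensible in exactly those conditions.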
Employing deep neural networks (DNNs) to directly learn filters for multi-channel speech enhancement has potentially two key advantages over a traditional approach combining a linear spatial filter with an independent tempo-spectral post-filter: 1) non-linear spatial filtering can overcome potential restrictions originating from a linear processing model, and 2) the joint processing of spatial and tempo-spectral information can exploit interdependencies between the different sources of information. A variety of DNN-based non-linear filters have been proposed recently, for which good enhancement performance is reported. However, little is known about their internal mechanisms, which turns network architecture design into a game of chance. Therefore, in this paper, we perform experiments to better understand the internal processing of spatial, spectral and temporal information by DNN-based non-linear filters. On the one hand, our experiments in a difficult speech extraction scenario confirm the importance of non-linear spatial filtering, which outperforms an oracle linear spatial filter by 0.24 POLQA score. On the other hand, we demonstrate that joint processing matters: a large performance gap of 0.4 POLQA score arises between network architectures exploiting spectral versus temporal information in addition to spatial information.
This paper proposes a single-channel speech enhancement method to reduce noise and enhance speech at low signal-to-noise ratio (SNR) levels and under non-stationary noise conditions. Specifically, we focus on modeling the noise using a Gaussian mixture model (GMM) within a multi-stage process based on a parametric Wiener filter. The proposed noise model estimates a more accurate noise power spectral density (PSD) and generalizes better under various noise conditions compared to traditional Wiener filtering methods. Simulations show that the proposed approach achieves better performance in terms of speech quality (PESQ) and intelligibility (STOI) at low SNR levels.
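As a rough illustration of the filtering stage, a parametric Wiener gain can be applied once the noise PSD is available. The sketch below assumes the per-frequency noise PSD has already been produced (e.g. by the GMM-based noise model) and uses a crude a priori SNR estimate; the exponent `beta` and floor values are illustrative.

```python
import numpy as np
from scipy.signal import stft, istft

def parametric_wiener(noisy, fs, noise_psd, beta=1.0, mu=1.0):
    """Apply a parametric Wiener gain given a (F,) noise PSD estimate."""
    f, t, Y = stft(noisy, fs=fs, nperseg=512)        # F = 257 bins
    snr_post = (np.abs(Y) ** 2) / (noise_psd[:, None] + 1e-12)
    xi = np.maximum(snr_post - 1.0, 1e-3)            # crude a priori SNR
    gain = (xi / (xi + mu)) ** beta                  # parametric Wiener gain
    _, enhanced = istft(gain * Y, fs=fs, nperseg=512)
    return enhanced
```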
The key advantage of using multiple microphones for speech enhancement is that spatial filtering can be used to complement tempo-spectral processing. In a traditional setting, linear spatial filtering (beamforming) and single-channel post-filtering are commonly performed separately. In contrast, there is a trend towards employing deep neural networks (DNNs) to learn joint spatial and tempo-spectral non-linear filters, which means that the restrictions of a linear processing model and of a separate processing of spatial and tempo-spectral information can potentially be overcome. However, the internal mechanisms that lead to good performance of such data-driven filters for multi-channel speech enhancement are not well understood. Therefore, in this work, we analyse the properties of a non-linear spatial filter realized by a DNN as well as its interdependency with temporal and spectral processing, by carefully controlling the information sources (spatial, spectral and temporal) available to the network. We confirm the superiority of a non-linear spatial processing model, which outperforms an oracle linear spatial filter in a challenging speaker extraction scenario by 0.24 POLQA score with a low number of microphones. Our analyses reveal that spectral information in particular should be processed jointly with spatial information, as this increases the spatial selectivity of the filter. Our systematic evaluation then leads to a simple network architecture that outperforms state-of-the-art network architectures by 0.22 POLQA score on the speaker extraction task and by 0.32 POLQA score on the CHiME3 data.
Diffusion-based generative models have had a high impact on the computer vision and speech processing communities over the past years. Besides data generation tasks, they have also been employed for data restoration tasks like speech enhancement and dereverberation. While discriminative models have traditionally been argued to be more powerful, e.g. for speech enhancement, generative diffusion approaches have recently been shown to narrow this performance gap considerably. In this paper, we systematically compare the performance of generative diffusion models and discriminative approaches on different speech restoration tasks. For this, we extend our prior contributions on diffusion-based speech enhancement in the complex time-frequency domain to the task of bandwidth extension. We then compare it to a discriminatively trained neural network with the same network architecture on three restoration tasks, namely speech denoising, dereverberation and bandwidth extension. We observe that the generative approach performs globally better than its discriminative counterpart on all tasks, with the strongest benefit for non-additive distortion models, as in dereverberation and bandwidth extension. Code and audio examples can be found online at https://uhh.de/inf-sp-sgmsemultitask
This paper describes noisy speech recognition for an augmented reality headset that helps verbal communication in real multiparty conversational environments. A major approach that has actively been studied in simulated environments is to sequentially perform speech enhancement and automatic speech recognition (ASR) based on deep neural networks (DNNs) trained in a supervised manner. In our task, however, such a pre-trained system fails to work due to the mismatch between training and test conditions and to the head movements of the user. To enhance only the utterances of a target speaker, we use beamforming based on a DNN-based speech mask estimator that can adaptively extract the speech components corresponding to a particular direction relative to the head. We propose a semi-supervised adaptation method that jointly updates the mask estimator and the ASR model at run-time, using clean speech signals with ground-truth transcriptions and noisy speech signals with highly confident estimated transcriptions. Comparative experiments using a state-of-the-art speech recognition system show that the proposed method significantly improves the ASR performance.
In a scenario with multiple persons talking simultaneously, the spatial characteristics of the signals are the most distinct feature for extracting the target signal. In this work, we develop a deep joint spatial-spectral non-linear filter that can be steered in an arbitrary target direction. For this we propose a simple and effective conditioning mechanism, which sets the initial state of the filter's recurrent layers based on the target direction. We show that this scheme is more effective than the baseline approach and increases the flexibility of the filter at no performance cost. The resulting spatially selective non-linear filters can also be used for speech separation of an arbitrary number of speakers and enable very accurate multi-speaker localization as we demonstrate in this paper.
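A minimal sketch of the conditioning mechanism: a hypothetical recurrent mask estimator whose initial LSTM state is computed from the target direction. Layer sizes and the (cos, sin) azimuth encoding are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class DirectionConditionedLSTM(nn.Module):
    def __init__(self, feat_dim=257, hidden=256):
        super().__init__()
        self.embed = nn.Linear(2, hidden * 2)    # direction -> initial (h0, c0)
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.mask = nn.Linear(hidden, feat_dim)

    def forward(self, feats, azimuth):
        # feats: (B, T, F) spectral features, azimuth: (B,) in radians
        dir_vec = torch.stack([azimuth.cos(), azimuth.sin()], dim=-1)
        h0, c0 = self.embed(dir_vec).chunk(2, dim=-1)
        state = (h0.unsqueeze(0).contiguous(), c0.unsqueeze(0).contiguous())
        out, _ = self.lstm(feats, state)
        return torch.sigmoid(self.mask(out))     # (B, T, F) mask
```

Since the steering information enters only through the initial state, the same trained filter can be pointed at an arbitrary direction at inference time.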
Neural vocoders using denoising diffusion probabilistic models (DDPMs) have been improved by adapting the diffusion noise distribution to given acoustic features. In this study, we propose SpecGrad, which adapts the diffusion noise so that its time-varying spectral envelope becomes close to the conditioning log-mel spectrogram. This adaptation by time-varying filtering improves the sound quality, especially in the high-frequency bands. It is processed in the time-frequency domain to keep the computational cost almost the same as in conventional DDPM-based neural vocoders. Experimental results show that SpecGrad generates higher-fidelity speech waveforms than conventional DDPM-based neural vocoders in both analysis-synthesis and speech enhancement scenarios. Audio demos are available at wavegrad.github.io/specgrad/.
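The core noise-shaping idea can be sketched as follows: white Gaussian noise is filtered in the time-frequency domain by a time-varying spectral envelope, here assumed to be precomputed from the conditioning log-mel spectrogram. Window settings and shapes are illustrative assumptions.

```python
import numpy as np
from scipy.signal import stft, istft

def shape_noise(envelope, fs=24000, nperseg=512):
    """Shape white noise with a (F, T) magnitude envelope, F = nperseg//2 + 1."""
    n_samples = (envelope.shape[1] - 1) * (nperseg // 2) + nperseg
    noise = np.random.randn(n_samples)
    _, _, N = stft(noise, fs=fs, nperseg=nperseg)
    T = min(N.shape[1], envelope.shape[1])
    shaped = N[:, :T] * envelope[:, :T]          # time-varying filtering
    _, out = istft(shaped, fs=fs, nperseg=nperseg)
    return out
```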
Recently, diffusion-based generative models have been introduced to the task of speech enhancement. The corruption of clean speech is modeled as a fixed forward process in which increasing amounts of noise are gradually added. By learning to reverse this process in an iterative fashion conditioned on the noisy input, clean speech is generated. We build upon our previous work and derive the training task within the formalism of stochastic differential equations. We present a detailed theoretical review of the underlying score matching objective and explore different sampler configurations for solving the reverse process at test time. By using a sophisticated network architecture from the natural image generation literature, we significantly improve performance compared to the previous publication. We also show that we can compete with recent discriminative models and achieve better generalization when evaluating on a corpus different from the training one. We complement the evaluation results with a subjective listening test, in which our proposed method is rated best. Furthermore, we show that the proposed method achieves remarkable state-of-the-art performance in single-channel speech dereverberation. Our code and audio examples are available online, see https://uhh.de/inf-sp-sgmse
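A minimal sketch of one sampler configuration for the reverse process: a plain Euler-Maruyama predictor without corrector steps. The `sde.drift`, `sde.diffusion` and `sde.prior_sample` methods and the (B, C, F, T) tensor layout are assumed interfaces, not the paper's code.

```python
import torch

@torch.no_grad()
def reverse_sde_sample(score_model, y, sde, n_steps=30):
    """Euler-Maruyama predictor for the conditional reverse SDE."""
    dt = -1.0 / n_steps
    x = sde.prior_sample(y)                      # sample the prior centered on y
    for i in range(n_steps):
        t = torch.full((y.shape[0],), 1.0 + i * dt, device=y.device)
        g = sde.diffusion(t)[:, None, None, None]
        # Reverse drift: forward drift minus g^2 times the estimated score
        drift = sde.drift(x, y, t) - g ** 2 * score_model(x, y, t)
        x = x + drift * dt + g * abs(dt) ** 0.5 * torch.randn_like(x)
    return x
```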
Diffusion models have shown a great ability at bridging the performance gap between predictive and generative approaches for speech enhancement. We have shown that they may even outperform their predictive counterparts for non-additive corruption types or when they are evaluated on mismatched conditions. However, diffusion models suffer from a high computational burden, mainly as they require running a neural network for each reverse diffusion step, whereas predictive approaches only require one pass. As diffusion models are generative approaches, they may also produce vocalizing and breathing artifacts in adverse conditions. In comparison, in such difficult scenarios, predictive models typically do not produce such artifacts but tend to distort the target speech instead, thereby degrading the speech quality. In this work, we present a stochastic regeneration approach where an estimate given by a predictive model is provided as a guide for further diffusion. We show that the proposed approach uses the predictive model to remove the vocalizing and breathing artifacts while producing very high quality samples thanks to the diffusion model, even in adverse conditions. We further show that this approach enables the use of lighter sampling schemes with fewer diffusion steps without sacrificing quality, thus lifting the computational burden by an order of magnitude. Source code and audio examples are available online (https://uhh.de/inf-sp-storm).
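A minimal sketch of the two-stage idea, with `predictor` and `diffusion_sampler` as assumed interfaces: the predictive estimate serves as the starting point of the reverse process, so only its tail needs to be run.

```python
import torch

@torch.no_grad()
def stochastic_regeneration(predictor, diffusion_sampler, y, t_switch=0.5):
    """StoRM-style sketch: a predictive estimate guides a shortened diffusion."""
    d_hat = predictor(y)              # first stage: artifact-free but distorted
    # Second stage: regenerate from the guide with few reverse diffusion steps,
    # since d_hat is already close to clean speech.
    return diffusion_sampler(d_hat, y, t_start=t_switch)
```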
Deep neural network (DNN) techniques have become pervasive in domains such as natural language processing and computer vision. They have achieved great success in these domains in tasks such as machine translation and image generation. Owing to this success, these data-driven techniques have also been applied in the audio domain. More specifically, DNN models have been applied in speech enhancement to achieve denoising, dereverberation and multi-speaker separation of monaural speech. In this paper, we review some dominant DNN techniques being employed to achieve speech separation. The review covers the whole speech enhancement pipeline: feature extraction, how DNN-based tools model both global and local features of speech, and model training (supervised and unsupervised). We also review the use of pre-trained models to boost the speech enhancement process. The review is geared towards covering the dominant trends in the application of DNNs to the enhancement of speech captured by a single microphone.
Single-channel deep speech enhancement approaches often estimate a single multiplicative mask to extract clean speech without a measure of its accuracy. Instead, in this work, we propose to quantify the uncertainty associated with clean speech estimates in neural network-based speech enhancement. Predictive uncertainty is typically categorized into aleatoric uncertainty and epistemic uncertainty. The former accounts for the inherent uncertainty in data and the latter corresponds to the model uncertainty. Aiming for robust clean speech estimation and efficient predictive uncertainty quantification, we propose to integrate statistical complex Gaussian mixture models (CGMMs) into a deep speech enhancement framework. More specifically, we model the dependency between input and output stochastically by means of a conditional probability density and train a neural network to map the noisy input to the full posterior distribution of clean speech, modeled as a mixture of multiple complex Gaussian components. Experimental results on different datasets show that the proposed algorithm effectively captures predictive uncertainty and that combining powerful statistical models and deep learning also delivers a superior speech enhancement performance.
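A sketch of how a point estimate and its uncertainty could be read off such a mixture posterior, assuming the network emits per-bin mixture logits, complex means, and log-variances (names and shapes are illustrative):

```python
import torch

def cgmm_posterior_stats(logits, means, log_vars):
    """Posterior mean and total predictive variance of a K-component complex
    Gaussian mixture per T-F bin; all inputs have shape (..., K)."""
    w = torch.softmax(logits, dim=-1)            # mixture weights
    var = log_vars.exp()
    mean = (w * means).sum(dim=-1)               # clean speech estimate E[s|y]
    # Law of total variance: within-component + between-component spread
    second_moment = (w * (var + means.abs() ** 2)).sum(dim=-1)
    return mean, second_moment - mean.abs() ** 2
```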
This paper presents an unsupervised, segment-based method for robust voice activity detection (rVAD). The method consists of two passes of denoising followed by a voice activity detection (VAD) stage. In the first pass, high-energy segments in the speech signal are detected using an a posteriori signal-to-noise ratio (SNR) weighted energy difference, and if no pitch is detected within a segment, the segment is considered a high-energy noise segment and set to zero. In the second pass, the speech signal is denoised by a speech enhancement method, for which several methods are explored. Next, neighbouring frames with pitch are grouped together to form pitch segments, and based on speech statistics, the pitch segments are further extended from both ends in order to include voiced and unvoiced sounds as well as likely non-speech parts. Finally, the a posteriori SNR weighted energy difference is applied to the extended pitch segments of the denoised speech signal to detect voice activity. We evaluate the VAD performance of the proposed method using two databases, RATS and Aurora-2, which contain a large variety of noise conditions. The rVAD method is further evaluated in terms of speaker verification performance on the RedDots 2016 challenge database and its noise-corrupted versions. Experimental results show that rVAD compares favourably with a number of existing methods. In addition, we present a modified version of rVAD in which the computationally intensive pitch extraction is replaced by a computationally efficient spectral flatness calculation. The modified version significantly reduces the computational complexity at the cost of moderately lower VAD performance, which is an advantage when processing large amounts of data or running on low-resource devices. The source code of rVAD is made publicly available.
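A toy rendition of the first-pass cue (the exact weighting used in rVAD differs; this is an illustration under assumed framing and noise-floor inputs): frame-to-frame log-energy differences are weighted by the a posteriori SNR so that changes standing above the noise floor dominate.

```python
import numpy as np

def snr_weighted_energy_diff(frames, noise_energy):
    """frames: (T, N) framed signal; noise_energy: scalar noise-floor estimate."""
    energy = np.sum(frames ** 2, axis=1) + 1e-12
    snr_post = energy / noise_energy             # a posteriori SNR per frame
    diff = np.abs(np.diff(np.log(energy)))       # frame-to-frame energy change
    return diff * np.log1p(snr_post[1:])         # emphasize high-SNR changes
```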
Frequency-domain neural beamformers are the mainstream approach in recent multi-channel speech separation models. Despite their well-defined behavior and effectiveness, such frequency-domain beamformers still have the limitations of a bounded oracle performance and the difficulty of designing proper networks for complex-valued operations. In this paper, we propose a time-domain generalized Wiener filter (TD-GWF), an extension of the conventional frequency-domain beamformers that has higher oracle performance and only involves real-valued operations. We also provide a discussion of how the TD-GWF connects to conventional frequency-domain beamformers. Experimental results show that significant performance improvement can be achieved by replacing the frequency-domain beamformers with the TD-GWF in recently proposed sequential neural beamforming pipelines.
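As a toy, single-channel analogue of the idea (the actual TD-GWF operates on multi-frame, multi-channel signals with a network-estimated target), a real-valued FIR filter mapping the mixture to a target estimate can be solved in closed form:

```python
import numpy as np

def time_domain_wiener(mix, target_est, taps=32):
    """Least-squares FIR filter from mixture to target estimate (real-valued)."""
    T = len(mix) - taps + 1
    # Lagged-mixture data matrix: row t holds mix[t : t + taps]
    X = np.stack([mix[i:i + T] for i in range(taps)], axis=1)
    y = target_est[taps - 1:]                    # align target with window ends
    w, *_ = np.linalg.lstsq(X, y, rcond=None)    # closed-form Wiener solution
    return X @ w                                 # filtered (refined) output
```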
Previous studies have confirmed the effectiveness of leveraging articulatory information to attain improved speech enhancement (SE) performance. By augmenting the original acoustic features with place/manner-of-articulation features, the SE process can be guided to consider the articulatory properties of the input speech when performing enhancement. Hence, we believe that the contextual information of articulatory attributes should carry useful information and can further benefit different languages. In this study, we propose an SE system that improves its performance by optimizing the contextual articulatory information in the enhanced speech, for both English and Mandarin. We optimize this information through joint training of the SE model with an end-to-end automatic speech recognition (E2E ASR) model that predicts sequences of broad phone classes (BPCs) instead of word sequences. Meanwhile, two training strategies are developed to train the SE system with the BPC-based ASR: a multi-task learning strategy and a deep-feature training strategy. Experimental results on the TIMIT and TMHINT datasets confirm that the contextual articulatory information facilitates the SE system in achieving better results than a traditional acoustic model (AM). Moreover, compared with another SE system trained with a monophone-based ASR, the BPC-based ASR (providing contextual articulatory information) improves the SE performance more effectively at different signal-to-noise ratios (SNRs).
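A sketch of the multi-task strategy, assuming an MSE enhancement loss combined with a CTC loss on BPC sequences; the specific losses and the weight `alpha` are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def multitask_loss(se_out, clean, bpc_logits, bpc_targets, alpha=0.3):
    """se_out/clean: enhanced and clean features; bpc_logits: (T, B, C) from the
    E2E ASR branch run on enhanced speech; bpc_targets: (B, S) BPC labels."""
    se_loss = F.mse_loss(se_out, clean)
    input_lens = torch.full((bpc_logits.shape[1],), bpc_logits.shape[0],
                            dtype=torch.long)
    target_lens = torch.full((bpc_targets.shape[0],), bpc_targets.shape[1],
                             dtype=torch.long)
    asr_loss = F.ctc_loss(bpc_logits.log_softmax(-1), bpc_targets,
                          input_lens, target_lens)
    return se_loss + alpha * asr_loss            # joint SE + BPC objective
```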
Score-based generative models (SGMs) have recently shown impressive results for difficult generative tasks such as the unconditional and conditional generation of natural images and audio signals. In this work, we extend these models to the complex short-time Fourier transform (STFT) domain and propose a novel training task for speech enhancement using a complex-valued deep neural network. We derive this training task within the formalism of stochastic differential equations (SDEs), thereby enabling the use of predictor-corrector samplers. We provide alternative formulations, inspired by previous publications on using generative diffusion models for speech enhancement, which avoid the need for any prior assumptions on the noise distribution and make the training task purely generative, which, as we show, improves enhancement performance.
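As a concrete anchor, the drift-based forward SDE in this line of work pulls the process mean from the clean speech toward the noisy speech y while injecting Gaussian noise. A sketch of that form, with the variance-exploding diffusion coefficient as reported in the authors' related publications:

```latex
% Ornstein-Uhlenbeck-style forward process: x_0 is clean speech, y noisy speech
\mathrm{d}x_t = \gamma\,(y - x_t)\,\mathrm{d}t
  + \sigma_{\min}\!\left(\tfrac{\sigma_{\max}}{\sigma_{\min}}\right)^{t}
    \sqrt{2\ln\tfrac{\sigma_{\max}}{\sigma_{\min}}}\;\mathrm{d}w
```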
Convolution-augmented transformers (Conformers) have recently been proposed for various speech-domain applications, such as automatic speech recognition (ASR) and speech separation, as they can capture both local and global dependencies. In this paper, we propose a conformer-based metric generative adversarial network (CMGAN) for speech enhancement (SE) in the time-frequency (TF) domain. The generator encodes the magnitude and complex spectrogram information using two-stage conformer blocks to model both time and frequency dependencies. The decoder then decouples the estimation into a magnitude-mask decoder branch to filter out unwanted distortions and a complex refinement branch to further improve the magnitude estimation and implicitly enhance the phase information. Additionally, we include a metric discriminator to alleviate metric mismatch by optimizing the corresponding evaluation score. Objective and subjective evaluations illustrate that CMGAN shows superior performance compared to state-of-the-art methods in three speech enhancement tasks (denoising, dereverberation and super-resolution). For example, quantitative denoising analysis on the Voice Bank+DEMAND dataset indicates that CMGAN outperforms previous models by a margin, i.e., a PESQ of 3.41 and an SSNR of 11.10 dB.
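The metric discriminator can be sketched as follows: `disc` is a hypothetical network predicting a normalized PESQ score in [0, 1]. It regresses the true score of enhanced speech, while the generator is pushed toward the maximum score.

```python
import torch

def metric_gan_losses(disc, clean, enhanced, pesq_norm):
    """pesq_norm: true PESQ of `enhanced`, normalized to [0, 1]."""
    d_clean = disc(clean, clean)                     # should score 1.0
    d_enh = disc(enhanced.detach(), clean)           # should match true PESQ
    disc_loss = ((d_clean - 1.0) ** 2).mean() + ((d_enh - pesq_norm) ** 2).mean()
    # Generator term: make the discriminator believe enhanced speech is perfect
    gen_metric_loss = ((disc(enhanced, clean) - 1.0) ** 2).mean()
    return disc_loss, gen_metric_loss
```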
Single-channel, speaker-independent speech separation methods have recently seen great progress. However, the accuracy, latency, and computational cost of such methods remain insufficient. The majority of the previous methods have formulated the separation problem through the time-frequency representation of the mixed signal, which has several drawbacks, including the decoupling of the phase and magnitude of the signal, the suboptimality of time-frequency representation for speech separation, and the long latency in calculating the spectrograms. To address these shortcomings, we propose a fully-convolutional time-domain audio separation network (Conv-TasNet), a deep learning framework for end-to-end time-domain speech separation. Conv-TasNet uses a linear encoder to generate a representation of the speech waveform optimized for separating individual speakers. Speaker separation is achieved by applying a set of weighting functions (masks) to the encoder output. The modified encoder representations are then inverted back to the waveforms using a linear decoder. The masks are found using a temporal convolutional network (TCN) consisting of stacked 1-D dilated convolutional blocks, which allows the network to model the long-term dependencies of the speech signal while maintaining a small model size. The proposed Conv-TasNet system significantly outperforms previous time-frequency masking methods in separating two- and three-speaker mixtures. Additionally, Conv-TasNet surpasses several ideal time-frequency magnitude masks in two-speaker speech separation as evaluated by both objective distortion measures and subjective quality assessment by human listeners. Finally, Conv-TasNet has a significantly smaller model size and a shorter minimum latency, making it a suitable solution for both offline and real-time speech separation applications. This study therefore represents a major step toward the realization of speech separation systems for real-world speech processing technologies.
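A bare-bones sketch of the encoder/mask/decoder structure, with a single 1x1 convolution standing in for the TCN mask network; hyperparameters are illustrative, not the paper's best configuration.

```python
import torch
import torch.nn as nn

class TasNetSkeleton(nn.Module):
    def __init__(self, n_filters=512, kernel=16, n_speakers=2):
        super().__init__()
        self.n_speakers = n_speakers
        self.encoder = nn.Conv1d(1, n_filters, kernel, stride=kernel // 2)
        self.masker = nn.Conv1d(n_filters, n_filters * n_speakers, 1)  # TCN stand-in
        self.decoder = nn.ConvTranspose1d(n_filters, 1, kernel, stride=kernel // 2)

    def forward(self, wav):                       # wav: (B, 1, T)
        rep = torch.relu(self.encoder(wav))       # learned waveform representation
        masks = torch.sigmoid(self.masker(rep))   # (B, C * S, T')
        masks = masks.view(wav.shape[0], self.n_speakers, -1, rep.shape[-1])
        return [self.decoder(rep * masks[:, s]) for s in range(self.n_speakers)]
```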
In this paper, we present a blind speech separation and dereverberation (BSSD) network, which performs simultaneous speaker separation, dereverberation and speaker identification in a single neural network. Speaker separation is guided by a set of predefined spatial cues. Dereverberation is performed using neural beamforming, and speaker identification is aided by embedding vectors and triplet mining. We introduce a frequency-domain model which uses complex-valued neural networks, and a time-domain variant which performs beamforming in latent space. Further, we propose a block-online mode to process longer recordings, as they occur in meeting scenarios. We evaluate our system in terms of scale-invariant signal-to-distortion ratio (SI-SDR), word error rate (WER) and equal error rate (EER).
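For reference, the SI-SDR metric used in the evaluation can be computed as follows (standard definition, not code from the paper):

```python
import numpy as np

def si_sdr(estimate, reference):
    """Scale-invariant signal-to-distortion ratio in dB."""
    reference = reference - reference.mean()
    estimate = estimate - estimate.mean()
    # Project the estimate onto the reference to obtain the scaled target
    alpha = np.dot(estimate, reference) / (np.dot(reference, reference) + 1e-12)
    target = alpha * reference
    noise = estimate - target
    return 10 * np.log10((target ** 2).sum() / ((noise ** 2).sum() + 1e-12))
```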
Recent work on monaural source separation has shown that performance can be increased by using fully learned filterbanks with short windows. On the other hand, it is widely known that, for conventional beamforming techniques, performance increases with longer analysis windows. This also holds for most hybrid neural beamforming methods, which rely on a deep neural network (DNN) to estimate the spatial covariance matrices. In this work, we try to bridge the gap between these two worlds and explore fully end-to-end hybrid neural beamforming in which, instead of using the short-time Fourier transform, the analysis and synthesis filterbanks are jointly learned with the DNN. In detail, we explore two different types of learned filterbanks: fully learned and analytic. We perform a detailed analysis using the recent Clarity Challenge data and show that, by using learned filterbanks, it is possible to surpass oracle-mask-based beamforming for short windows.
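A sketch of what an analytic (parametric) filterbank might look like, here using Gabor atoms whose center frequencies and bandwidths would be the learnable parameters; this parameterization is an assumption for illustration and may differ from the paper's.

```python
import torch

def gabor_filterbank(n_filters=256, length=64, fs=16000):
    """Build (n_filters, length) Gabor analysis filters."""
    t = torch.arange(length, dtype=torch.float32) / fs
    center = torch.linspace(50.0, fs / 2 - 50.0, n_filters)   # learnable in practice
    bandwidth = torch.full((n_filters,), 100.0)               # learnable in practice
    # Gaussian envelope narrows in time as bandwidth grows (time-frequency duality)
    envelope = torch.exp(-0.5 * ((t - length / (2 * fs)) * 2 * torch.pi
                                 * bandwidth[:, None]) ** 2)
    return envelope * torch.cos(2 * torch.pi * center[:, None] * t)
```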