Objective: Although numerous studies have been proposed for audio restoration, most focus on an isolated restoration problem such as denoising or dereverberation, ignoring other artifacts. Moreover, it is common practice to assume a noisy or reverberant environment with a limited number of fixed signal-to-distortion ratio (SDR) levels. However, real-world audio is often corrupted by a blend of artifacts such as reverberation, sensor noise, and background audio mixture with varying types, severities, and durations. In this study, we propose a novel approach for blind restoration of real-world audio signals using Operational Generative Adversarial Networks (Op-GANs) with temporal and spectral objective metrics to enhance the quality of the restored audio signal regardless of the type and severity of each artifact corrupting it. Methods: 1D Operational GANs are used with a generative neuron model optimized for blind restoration of any corrupted audio signal. Results: The proposed approach has been evaluated extensively over the benchmark TIMIT-RAR (speech) and GTZAN-RAR (non-speech) datasets corrupted with a random blend of artifacts, each with a random severity, to mimic real-world audio signals. Average SDR improvements of over 7.2 dB and 4.9 dB are achieved, respectively, which are substantial when compared with the baseline methods. Significance: This is a pioneering study in blind audio restoration with the unique capability of direct (time-domain) restoration of real-world audio whilst achieving an unprecedented level of performance for a wide SDR range and artifact types. Conclusion: 1D Op-GANs can achieve robust and computationally effective real-world audio restoration with significantly improved performance. The source codes and the generated real-world audio datasets are shared publicly with the research community in a dedicated GitHub repository.
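To make the "generative neuron" idea concrete, below is a minimal, illustrative PyTorch sketch of a 1D operational (Self-ONN-style) layer: instead of a single convolution, it sums convolutions over element-wise powers of the input, i.e. a learnable Taylor-series expansion. The class name, the order Q, and the tanh bounding are our assumptions, not the paper's code.

```python
import torch
import torch.nn as nn

class SelfONN1d(nn.Module):
    """Illustrative 1D operational (Self-ONN) layer: a Taylor-series
    generalization of Conv1d, y = sum_{q=1..Q} conv_q(x**q) + b."""
    def __init__(self, in_ch, out_ch, kernel_size, q_order=3):
        super().__init__()
        # One convolution per power term; bias kept only in the first.
        self.convs = nn.ModuleList([
            nn.Conv1d(in_ch, out_ch, kernel_size,
                      padding=kernel_size // 2, bias=(q == 0))
            for q in range(q_order)
        ])

    def forward(self, x):
        # tanh keeps the powers x**q bounded -- a common stabilization
        # choice in this family of models (our assumption here).
        x = torch.tanh(x)
        return sum(conv(x ** (q + 1)) for q, conv in enumerate(self.convs))

# Usage: a drop-in replacement for nn.Conv1d inside a generator.
layer = SelfONN1d(in_ch=1, out_ch=16, kernel_size=9, q_order=3)
audio = torch.randn(4, 1, 16000)   # batch of 1-second, 16 kHz clips
features = layer(audio)            # -> (4, 16, 16000)
```

With q_order=1 the layer reduces to an ordinary convolution, which is what lets a shallow operational network match the nonlinearity of a much deeper conventional one.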
Continuous long-term monitoring of motor health is crucial for the early detection of abnormalities such as bearing faults (up to 51% of motor failures are attributed to bearing faults). Despite numerous methodologies proposed for bearing fault detection, most of them require normal (healthy) and abnormal (faulty) data for training. Even with recent deep learning (DL) methodologies trained on labeled data from the same machine, classification accuracy deteriorates significantly when one or a few conditions are altered. Furthermore, their performance suffers significantly, or they may fail entirely, when tested on another machine with entirely different healthy and faulty signal patterns. To address this need, in this pilot study, we propose a zero-shot bearing fault detection method that can detect any fault on a new (target) machine regardless of the working conditions, sensor parameters, or fault characteristics. To accomplish this objective, a 1D Operational Generative Adversarial Network (Op-GAN) first characterizes the transition between normal and fault vibration signals of (a) source machine(s) under various conditions, sensor parameters, and fault types. Then, for a target machine, potential faulty signals can be generated, and over its actual healthy and synthesized faulty signals, a compact and lightweight 1D Self-ONN fault detector can be trained to detect the real faulty condition in real time whenever it occurs. To validate the proposed approach, a new benchmark dataset is created using two different motors working under different conditions and sensor locations. Experimental results demonstrate that this novel approach can accurately detect any bearing fault, achieving average recall rates of around 89% and 95% on the two target machines regardless of the fault type, severity, and location.
Restoration of poor-quality images corrupted by a blend of artifacts plays a vital role in reliable diagnosis. Existing studies have focused on specific restoration problems such as image deblurring, denoising, and exposure correction, usually with strong assumptions on the artifact type and severity. As a pioneering study in blind X-ray restoration, we propose a joint model for generic image restoration and classification: Restore-to-Classify Generative Adversarial Networks (R2C-GANs). Such a jointly optimized model keeps any disease intact after the restoration, which naturally leads to higher diagnostic performance thanks to the improved X-ray image quality. To accomplish this crucial objective, we define the restoration task as an image-to-image translation problem from poor-quality, blurry, or under-/over-exposed images to the high-quality image domain. The proposed R2C-GAN model is able to learn forward and inverse transforms between the two domains using unpaired training samples. Simultaneously, the joint classification preserves the disease label during restoration. Moreover, R2C-GANs are equipped with operational layers/neurons that reduce the network depth and further boost both restoration and classification performance. The proposed joint model is extensively evaluated over the QaTa-COV19 dataset for coronavirus disease 2019 (COVID-19) classification. The proposed restoration approach achieves an F1-score of over 90%, which is significantly higher than the performance of any deep model. Moreover, in qualitative analysis, the restoration performance of R2C-GANs was approved by a group of medical doctors. We share the software implementation at https://github.com/meteahishali/r2c-gan.
Convolution-augmented transformers (Conformers) have recently been proposed for various speech-domain applications, such as automatic speech recognition (ASR) and speech separation, as they can capture both local and global dependencies. In this paper, we propose a conformer-based metric generative adversarial network (CMGAN) for speech enhancement (SE) in the time-frequency (TF) domain. The generator encodes the magnitude and complex spectrogram information using two-stage conformer blocks to model both time and frequency dependencies. The decoder then decouples the estimation into a magnitude-mask decoder branch, to filter out unwanted distortions, and a complex refinement branch, to further improve the magnitude estimation and implicitly enhance the phase information. In addition, we include a metric discriminator to alleviate metric mismatch by optimizing the corresponding evaluation score. Objective and subjective evaluations illustrate that CMGAN is able to show superior performance compared to state-of-the-art methods on three speech enhancement tasks (denoising, dereverberation, and super-resolution). For instance, quantitative denoising analysis on the Voice Bank+DEMAND dataset indicates that CMGAN outperforms previous models by a margin, i.e., a PESQ of 3.41 and an SSNR of 11.10 dB.
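The decoupled decoding described above can be pictured as follows: the mask branch scales the noisy magnitude (keeping the noisy phase), and the complex branch adds a refinement that implicitly corrects the phase. A minimal sketch of that recombination, with our own shapes and names rather than the authors' code:

```python
import torch

def recombine(noisy_cplx, mag_mask, cplx_refine):
    """Sketch of a CMGAN-style decoder head: mask the magnitude, keep the
    noisy phase, then add a complex refinement term.
    noisy_cplx:  (B, F, T) complex STFT of the noisy input
    mag_mask:    (B, F, T) real-valued mask from the magnitude branch
    cplx_refine: (B, F, T) complex output of the refinement branch
    """
    masked = torch.polar(mag_mask * noisy_cplx.abs(), noisy_cplx.angle())
    return masked + cplx_refine  # enhanced complex spectrogram, pre-iSTFT

# Toy usage with random tensors standing in for network outputs.
B, F, T = 2, 257, 100
noisy = torch.randn(B, F, T, dtype=torch.complex64)
enhanced = recombine(noisy, torch.rand(B, F, T),
                     0.1 * torch.randn(B, F, T, dtype=torch.complex64))
```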
We present a denoising method for historical music recordings. Our model converts its input into a time-frequency representation via the short-time Fourier transform (STFT) and processes the resulting complex spectrogram with a convolutional neural network. The network is trained with reconstruction and adversarial objectives on a synthetic noisy music dataset, created by mixing clean music with real noise samples extracted from quiet segments of old recordings. We evaluate our method quantitatively on held-out test examples from the synthetic dataset, and qualitatively through human ratings on samples of actual historical recordings. Our results show that the proposed method is effective at removing noise while preserving the quality and details of the original music.
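A minimal sketch of the STFT-in/iSTFT-out processing loop the abstract describes, with the denoising CNN left as a stub; the window and FFT sizes are illustrative assumptions, not the paper's settings:

```python
import torch

def denoise(wave, net, n_fft=2048, hop=512):
    """Sketch: waveform -> complex spectrogram -> CNN -> waveform.
    `net` maps a (B, 2, F, T) real/imag stack to a stack of the same shape."""
    window = torch.hann_window(n_fft, device=wave.device)
    spec = torch.stft(wave, n_fft, hop_length=hop, window=window,
                      return_complex=True)                 # (B, F, T)
    stacked = torch.stack([spec.real, spec.imag], dim=1)   # (B, 2, F, T)
    out = net(stacked)
    est = torch.complex(out[:, 0], out[:, 1])              # back to complex
    return torch.istft(est, n_fft, hop_length=hop, window=window,
                       length=wave.shape[-1])

# Identity "network" just to show the call signature.
clean_est = denoise(torch.randn(1, 44100), net=lambda x: x)
```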
Generative adversarial networks have recently demonstrated outstanding performance in neural vocoding, outperforming the best autoregressive and flow-based models. In this paper, we show that this success can be extended to other tasks of conditional audio generation. In particular, building upon HiFi vocoders, we propose HiFi++, a novel general framework for bandwidth extension and speech enhancement. We show that, with an improved generator architecture and simplified multi-discriminator training, HiFi++ performs better than or on par with the state of the art on these tasks while using significantly fewer computational resources. The effectiveness of our approach is validated through a series of extensive experiments.
The human brain contextually exploits heterogeneous sensory information to efficiently perform cognitive tasks, including vision and hearing. For example, in the cocktail-party situation, the human auditory cortex contextually integrates audio-visual (AV) cues to better perceive speech. Recent studies have shown that, compared to audio-only SE models, AV speech enhancement (SE) models can significantly improve speech quality and intelligibility in extremely low signal-to-noise ratio (SNR) environments. However, despite significant research in the area of AV SE, the development of real-time processing models with low latency remains a formidable technical challenge. In this paper, we present a novel framework for low-latency, speaker-independent AV SE that can generalize across a range of visual and acoustic noises. In particular, a generative adversarial network (GAN) is proposed to address the practical issue of visual imperfections in AV SE. In addition, we propose a deep neural network-based real-time AV SE model that takes into account the cleaned visual speech output from the GAN to deliver more robust SE. The proposed framework is evaluated on synthetic and real noisy AV corpora using objective speech quality and intelligibility metrics as well as subjective listening tests. Comparative simulation results show that our real-time AV SE framework outperforms state-of-the-art SE approaches, including recent DNN-based SE models.
Previous studies have confirmed the effectiveness of leveraging articulatory information to attain improved speech enhancement (SE) performance. By augmenting the original acoustic features with the place/manner of articulatory features, the SE process can be guided to consider the articulatory properties of the input speech when performing enhancement. Hence, we believe that the contextual information of articulatory attributes should carry useful information and can be further exploited across different languages. In this study, we propose an SE system that improves its performance by optimizing the contextual articulatory information in enhanced speech for both English and Mandarin. We optimize this contextual information by jointly training the SE model with an end-to-end automatic speech recognition (E2E ASR) model that predicts sequences of broad phone classes (BPCs) instead of word sequences. Meanwhile, two training strategies are developed to train the SE system with the BPC-based ASR: multi-task learning and deep-feature training. Experimental results on the TIMIT and TMHINT datasets confirm that the contextual articulatory information facilitates the SE system in achieving better results than conventional acoustic-model (AM)-based approaches. Moreover, compared with another SE system trained with a monophone-based ASR, the BPC-based ASR (providing contextual articulatory information) can improve the SE performance more effectively under different signal-to-noise ratios (SNRs).
Despite recent progress in generative adversarial network (GAN)-based vocoders, in which the model generates a raw waveform from a mel spectrogram, it remains challenging to synthesize high-fidelity audio for numerous speakers across varied recording environments. In this work, we present BigVGAN, a universal vocoder that generalizes well under various unseen conditions in a zero-shot setting. We introduce periodic nonlinearities and an anti-aliased representation into the generator, which brings the desired inductive bias for waveform synthesis and significantly improves audio quality. Based on our improved generator and the state-of-the-art discriminators, we train our GAN vocoder at the largest scale, up to 112M parameters, which is unprecedented in the literature. In particular, we identify and address the training instabilities specific to such scale while maintaining high-fidelity output without over-regularization. Our BigVGAN achieves state-of-the-art zero-shot performance for various out-of-distribution scenarios, including new speakers, novel languages, singing voices, and music and instrumental audio in unseen (even noisy) recording environments. We will release our code and model at: https://github.com/nvidia/bigvgan
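The periodic nonlinearity in question is the Snake activation, f(x) = x + (1/α)·sin²(αx), with a learnable per-channel frequency α; the anti-aliased representation comes from low-pass-filtered resampling around it. A minimal sketch of the activation alone (the initialization and epsilon guard are our assumptions):

```python
import torch
import torch.nn as nn

class Snake(nn.Module):
    """Snake activation f(x) = x + (1/alpha) * sin^2(alpha * x), with a
    learnable per-channel alpha providing a periodic inductive bias."""
    def __init__(self, channels, init_alpha=1.0):
        super().__init__()
        self.alpha = nn.Parameter(init_alpha * torch.ones(1, channels, 1))

    def forward(self, x):  # x: (B, C, T)
        return x + torch.sin(self.alpha * x) ** 2 / (self.alpha + 1e-9)

act = Snake(channels=16)
y = act(torch.randn(2, 16, 1024))
```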
Recently, diffusion-based generative models have been introduced to the task of speech enhancement. The corruption of clean speech is modeled as a fixed forward process in which increasing amounts of noise are gradually added. By learning to reverse this process in an iterative fashion conditioned on the noisy input, clean speech is generated. We build upon our previous work and derive the training task within the formalism of stochastic differential equations. We present a detailed theoretical review of the underlying score-matching objective and explore different sampler configurations for solving the reverse process at test time. By using a sophisticated network architecture from the natural image generation literature, we achieve significantly improved performance compared to previous publications. We also show that we can compete with recent discriminative models and achieve better generalization when evaluating on a corpus different from the training one. We complement the evaluation results with a subjective listening test, in which our proposed method ranks best. Furthermore, we show that the proposed method achieves remarkable state-of-the-art performance in single-channel speech dereverberation. Our code and audio examples are available online; see https://uhh.de/inf-sp-sgmse
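For orientation, this line of work models the forward corruption as an Ornstein–Uhlenbeck-type SDE whose drift pulls the state x_t from clean speech toward the noisy recording y while Gaussian noise of growing scale is injected. A representative parameterization, to be checked against the paper's exact choices:

```latex
% Forward SDE: drift toward the noisy speech y, exponentially growing noise.
\mathrm{d}x_t = \underbrace{\gamma\,(y - x_t)}_{\text{drift}}\,\mathrm{d}t
  + \underbrace{\sigma_{\min}\Bigl(\tfrac{\sigma_{\max}}{\sigma_{\min}}\Bigr)^{t}
    \sqrt{2\ln\tfrac{\sigma_{\max}}{\sigma_{\min}}}}_{g(t)}\,\mathrm{d}w_t
% Reverse-time SDE solved at test time, with the score approximated by a DNN:
\mathrm{d}x_t = \bigl[\gamma\,(y - x_t)
  - g(t)^2\,\nabla_{x_t}\log p_t(x_t \mid y)\bigr]\,\mathrm{d}t
  + g(t)\,\mathrm{d}\bar{w}_t
```

Enhancement then amounts to integrating the reverse-time SDE with the intractable score replaced by the trained network, which is where the different sampler configurations mentioned above come in.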
Deep neural network (DNN) techniques have become pervasive in domains such as natural language processing and computer vision, where they have achieved great success in tasks such as machine translation and image generation. Owing to this success, these data-driven techniques have also been applied in the audio domain. More specifically, DNN models have been applied to monaural speech enhancement to achieve denoising, dereverberation, and multi-speaker separation. In this paper, we review the dominant DNN techniques employed to achieve speech separation. The review covers the whole speech enhancement pipeline: feature extraction, how DNN-based tools model both the global and local features of speech, and model training (supervised and unsupervised). We also review the use of pre-trained speech enhancement models to boost the enhancement process. The review focuses on the dominant trends in the application of DNNs to the enhancement of speech obtained from a single speaker.
Prior works on improving speech quality with visual input typically study each type of auditory distortion separately (e.g., separation, inpainting, video-to-speech) and present tailored algorithms. This paper proposes to unify these subjects and study Generalized Speech Enhancement, where the goal is not to reconstruct the exact reference clean signal, but to focus on improving certain aspects of speech. In particular, this paper concerns intelligibility, quality, and video synchronization. We cast the problem as audio-visual speech resynthesis, which is composed of two steps: pseudo audio-visual speech recognition (P-AVSR) and pseudo text-to-speech synthesis (P-TTS). P-AVSR and P-TTS are connected by discrete units derived from a self-supervised speech model. Moreover, we utilize a self-supervised audio-visual speech model to initialize P-AVSR. The proposed model is coined ReVISE. ReVISE is the first high-quality model for in-the-wild video-to-speech synthesis and achieves superior performance on all LRS3 audio-visual enhancement tasks with a single model. To demonstrate its applicability in the real world, ReVISE is also evaluated on EasyCom, an audio-visual benchmark collected under challenging acoustic conditions with only 1.6 hours of training data. Similarly, ReVISE greatly suppresses noise and improves quality. Project page: https://wnhsu.github.io/ReVISE.
Diffusion models have shown a great ability at bridging the performance gap between predictive and generative approaches for speech enhancement. We have shown that they may even outperform their predictive counterparts for non-additive corruption types or when they are evaluated on mismatched conditions. However, diffusion models suffer from a high computational burden, mainly because they require running a neural network for each reverse diffusion step, whereas predictive approaches require only one pass. As diffusion models are generative approaches, they may also produce vocalizing and breathing artifacts in adverse conditions. In comparison, in such difficult scenarios, predictive models typically do not produce such artifacts but tend to distort the target speech instead, thereby degrading the speech quality. In this work, we present a stochastic regeneration approach in which an estimate given by a predictive model is provided as a guide for further diffusion. We show that the proposed approach uses the predictive model to remove the vocalizing and breathing artifacts while producing very high quality samples thanks to the diffusion model, even in adverse conditions. We further show that this approach enables the use of lighter sampling schemes with fewer diffusion steps without sacrificing quality, thus reducing the computational burden by an order of magnitude. Source code and audio examples are available online (https://uhh.de/inf-sp-storm).
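A sketch of the stochastic-regeneration idea as we read it: run the predictive model first, re-noise its estimate to an intermediate diffusion time, and run only the short tail of the reverse process. Function names, the initial time, and the step count are illustrative assumptions, not the paper's configuration:

```python
import torch

def stochastic_regeneration(y, predictor, reverse_step, sigma,
                            t_init=0.5, n_steps=8):
    """Sketch: predictive pass first, then a short diffusion 'polish'.
    predictor:    DNN mapping noisy speech y -> initial clean estimate
    reverse_step: one reverse-diffusion update (x_t, t, y) -> x_{t-dt}
    sigma:        noise-level schedule sigma(t)
    """
    x = predictor(y)                             # removes most distortion
    x = x + sigma(t_init) * torch.randn_like(x)  # re-noise to level t_init
    ts = torch.linspace(t_init, 0.0, n_steps + 1)
    for t in ts[:-1]:                            # few steps, not a full chain
        x = reverse_step(x, t, y)
    return x

# Toy usage with stand-in callables:
out = stochastic_regeneration(
    torch.randn(1, 16000),
    predictor=lambda y: y,            # stand-in predictive model
    reverse_step=lambda x, t, y: x,   # stand-in sampler step
    sigma=lambda t: 0.5 * t,
)
```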
Single-channel, speaker-independent speech separation methods have recently seen great progress. However, the accuracy, latency, and computational cost of such methods remain insufficient. The majority of the previous methods have formulated the separation problem through the time-frequency representation of the mixed signal, which has several drawbacks, including the decoupling of the phase and magnitude of the signal, the suboptimality of time-frequency representation for speech separation, and the long latency in calculating the spectrograms. To address these shortcomings, we propose a fully-convolutional time-domain audio separation network (Conv-TasNet), a deep learning framework for end-to-end time-domain speech separation. Conv-TasNet uses a linear encoder to generate a representation of the speech waveform optimized for separating individual speakers. Speaker separation is achieved by applying a set of weighting functions (masks) to the encoder output. The modified encoder representations are then inverted back to the waveforms using a linear decoder. The masks are found using a temporal convolutional network (TCN) consisting of stacked 1-D dilated convolutional blocks, which allows the network to model the long-term dependencies of the speech signal while maintaining a small model size. The proposed Conv-TasNet system significantly outperforms previous time-frequency masking methods in separating two- and three-speaker mixtures. Additionally, Conv-TasNet surpasses several ideal time-frequency magnitude masks in two-speaker speech separation as evaluated by both objective distortion measures and subjective quality assessment by human listeners. Finally, Conv-TasNet has a significantly smaller model size and a shorter minimum latency, making it a suitable solution for both offline and real-time speech separation applications. This study therefore represents a major step toward the realization of speech separation systems for real-world speech processing technologies.
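A hedged structural sketch of the encoder–mask–decoder pattern described above; the toy masker stands in for the paper's stacked dilated-convolution TCN, and all dimensions are illustrative:

```python
import torch
import torch.nn as nn

class TasNetSketch(nn.Module):
    """Minimal encoder/mask/decoder skeleton in the Conv-TasNet style:
    a learned linear encoder, per-speaker masks from a (here, toy) TCN,
    and a linear decoder back to waveforms."""
    def __init__(self, n_src=2, n_filters=512, kernel=16, stride=8):
        super().__init__()
        self.n_src = n_src
        self.encoder = nn.Conv1d(1, n_filters, kernel, stride=stride, bias=False)
        # Stand-in for the stacked dilated-conv TCN that estimates the masks.
        self.masker = nn.Sequential(
            nn.Conv1d(n_filters, n_filters, 3, padding=2, dilation=2),
            nn.PReLU(),
            nn.Conv1d(n_filters, n_src * n_filters, 1),
        )
        self.decoder = nn.ConvTranspose1d(n_filters, 1, kernel,
                                          stride=stride, bias=False)

    def forward(self, mix):                    # mix: (B, 1, T)
        w = torch.relu(self.encoder(mix))      # (B, N, T')
        masks = torch.sigmoid(self.masker(w))  # (B, n_src*N, T')
        masks = masks.view(mix.size(0), self.n_src, -1, w.size(-1))
        srcs = [self.decoder(m * w) for m in masks.unbind(dim=1)]
        return torch.stack(srcs, dim=1)        # (B, n_src, 1, ~T)

est = TasNetSketch()(torch.randn(2, 1, 16000))
```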
Recently, convolution-augmented transformers (Conformers) have achieved promising performance in automatic speech recognition (ASR) and time-domain speech enhancement (SE), as they can capture both local and global dependencies in the speech signal. In this paper, we propose a conformer-based metric generative adversarial network (CMGAN) for SE in the time-frequency (TF) domain. In the generator, we utilize two-stage conformer blocks to aggregate all magnitude and complex spectrogram information by modeling both time and frequency dependencies. The estimation of the magnitude and complex spectrograms is decoupled in the decoder stage and then jointly incorporated to reconstruct the enhanced speech. In addition, a metric discriminator is employed to further improve the quality of the enhanced estimated speech by optimizing the corresponding evaluation score. Quantitative analysis on the Voice Bank+DEMAND dataset indicates the capability of CMGAN in outperforming previous models, i.e., a PESQ of 3.41 and an SSNR of 11.10 dB.
Deep learning (DL)-based speech enhancement methods are typically optimized to minimize the distance between clean and enhanced speech features. These often result in improved speech quality, but they lack generalization and may fail to deliver the desired speech intelligibility in real-world noisy situations. To address these challenges, researchers have explored intelligibility-oriented (I-O) loss functions and the integration of audio-visual (AV) information for more robust speech enhancement (SE). In this paper, we introduce DL-based I-O SE algorithms exploiting AV information, a novel and previously unexplored research direction. Specifically, we present a fully convolutional AV SE model that uses a modified short-time objective intelligibility (STOI) metric as the training cost function. To the best of our knowledge, this is the first work that exploits the integration of AV modalities with an I-O-based loss function. Comparative experimental results demonstrate that our proposed I-O AV SE framework outperforms audio-only (AO) and AV models trained with conventional distance-based loss functions, in terms of standard objective evaluation measures, when dealing with unseen speakers and noises.
Video-to-speech is the process of reconstructing the audio speech from a video of a spoken utterance. Previous approaches to this task have relied on a two-step process in which an intermediate representation is inferred from the video and is then decoded into waveform audio using a vocoder or a waveform reconstruction algorithm. In this work, we propose a new end-to-end video-to-speech model based on generative adversarial networks (GANs) that translates spoken video to waveform end-to-end without using any intermediate representation or separate waveform synthesis algorithm. Our model consists of an encoder-decoder architecture that receives raw video as input and generates speech, which is then fed to a waveform critic and a power critic. The use of an adversarial loss based on these two critics enables the direct synthesis of the raw audio waveform and ensures its realism. In addition, the use of our three comparative losses helps establish a direct correspondence between the generated audio and the input video. We show that this model is able to reconstruct speech on constrained datasets such as GRID, and is the first end-to-end model to produce intelligible speech for LRW (Lip Reading in the Wild), featuring hundreds of speakers recorded entirely "in the wild." We evaluate the generated samples in two different scenarios using four objective metrics that measure the quality and intelligibility of artificial speech. We demonstrate that the proposed approach outperforms all previous works on most metrics on GRID and LRW.
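To make the two-critic setup concrete, here is a generic sketch (not the authors' exact losses) in which one critic scores raw waveforms and the other scores a power spectrogram of the same audio, with the generator summing both adversarial terms; the hinge formulation and all parameters are our illustrative choices:

```python
import torch

def spectro(x, n_fft=512):
    """Power spectrogram used as the power critic's input (illustrative)."""
    s = torch.stft(x, n_fft, hop_length=n_fft // 4,
                   window=torch.hann_window(n_fft), return_complex=True)
    return s.abs() ** 2

def generator_adv_loss(fake_wave, wave_critic, power_critic):
    # Generator tries to raise both critics' scores on generated audio.
    return -(wave_critic(fake_wave).mean() +
             power_critic(spectro(fake_wave)).mean())

def critic_loss(critic, real_in, fake_in):
    # Hinge loss: push real scores above +1 and fake scores below -1.
    return (torch.relu(1 - critic(real_in)).mean() +
            torch.relu(1 + critic(fake_in)).mean())
```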
Music discovery services let users identify songs from short mobile recordings. These solutions are often based on audio fingerprinting (AFP) and rely more specifically on the extraction of spectral peaks in order to be robust to a number of distortions. Little work has been done to study the robustness of these algorithms to background noise captured in real environments. In particular, AFP systems still struggle when the signal-to-noise ratio is low, i.e., when the background noise is strong. In this project, we tackle this problem with deep learning. We test a new hybrid strategy that consists of inserting a denoising DL model in front of a peak-based AFP algorithm. We simulate noisy music recordings using a realistic data-augmentation pipeline and train a DL model to denoise them. The denoising model limits the impact of background noise on the AFP system's extracted peaks, improving its robustness to noise. We further propose a novel loss function to adapt the DL model to the considered AFP system, increasing its precision in terms of retrieved spectral peaks. To the best of our knowledge, this hybrid strategy has not been tested before.
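Spectral peaks of the kind AFP systems fingerprint can be extracted with a local-maximum filter over a magnitude spectrogram, which is also the quantity a peak-aware denoising loss would target. A minimal sketch with illustrative parameters:

```python
import numpy as np
from scipy.ndimage import maximum_filter

def spectral_peaks(spec_db, neighborhood=(15, 15), min_db=-40.0):
    """Return (freq_bin, frame) indices of local maxima in a dB spectrogram.
    A bin is a peak if it equals the max of its neighborhood and is loud
    enough to matter for fingerprinting."""
    local_max = maximum_filter(spec_db, size=neighborhood) == spec_db
    peaks = local_max & (spec_db > min_db)
    return np.argwhere(peaks)

# Toy usage on a random "spectrogram" (freq bins x frames):
pts = spectral_peaks(np.random.uniform(-80, 0, size=(257, 400)))
```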
Given recent advances in music source separation and automatic mixing, removing audio effects from music tracks is a meaningful step toward developing automatic mixing systems. This paper focuses on removing distortion audio effects applied to guitar tracks in music production. We explore whether effect removal can be solved by neural networks designed for source separation and audio effect modeling. Our approach proves to be particularly effective for effects that mix the processed and clean signals. The models achieve better quality and faster inference compared with state-of-the-art solutions based on sparse optimization. We demonstrate that the models are suitable not only for declipping but also for other types of distortion effects. In discussing the results, we emphasize the usefulness of multiple evaluation metrics for assessing different aspects of reconstruction in distortion effect removal.
In addition to being extremely nonlinear, modern problems require millions, if not billions, of parameters to solve, or at least to obtain a good approximate solution, and neural networks are known to assimilate that complexity by deepening and widening their topology in order to increase the level of nonlinearity needed for a better approximation. However, compact topologies are always preferred over deeper ones, as they offer the advantage of using fewer computational units and fewer parameters. This compactness comes at the price of reduced nonlinearity and, thus, of a limited solution search space. We propose the 1-Dimensional Polynomial Neural Network (1DPNN) model, which uses automatic polynomial kernel estimation for 1-dimensional convolutional neural networks (1DCNNs) and introduces a high degree of nonlinearity from the first layer, compensating for the need for deep and/or wide topologies. We show that this nonlinearity enables the model to yield better results, with less computational and spatial complexity than a regular 1DCNN, on various classification and regression problems related to audio signals, even though it introduces more computational and spatial complexity at the neuronal level. Experiments were conducted on three public datasets and demonstrate that, on the problems tackled, the proposed model can extract more relevant information from the data than a 1DCNN, in less time and with less memory.
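To make the polynomial-neuron idea concrete: where a 1DCNN neuron computes a single weighted sum over an input window, a degree-D 1DPNN neuron learns one kernel per element-wise power of the input (our reconstructed notation, not the paper's exact symbols):

```latex
% Degree-D polynomial neuron; D = 1 recovers the ordinary 1D convolution.
y[n] = f\!\Bigl( b + \sum_{d=1}^{D} \sum_{k=0}^{K-1}
       w_{d}[k]\,\bigl(x[n-k]\bigr)^{d} \Bigr)
```

Setting D = 1 reduces this to a standard convolution, which is why the extra nonlinearity is available "from the first layer" at the cost of D kernels per neuron.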