We present a method for denoising historical music recordings. Our model converts its input into a time-frequency representation via the short-time Fourier transform (STFT) and processes the resulting complex spectrogram with a convolutional neural network. The network is trained with reconstruction and adversarial objectives on a synthetic dataset of noisy music, created by mixing clean music with real noise samples extracted from quiet segments of old recordings. We evaluate our method quantitatively on held-out test examples from the synthetic dataset, and qualitatively through human ratings on samples of actual historical recordings. Our results show that the proposed method is effective at removing noise while preserving the quality and detail of the original music.
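The synthetic training data described above pairs clean music with real noise at controlled levels. A minimal sketch of SNR-controlled mixing (the function name and the exact SNR convention are our assumptions, not the paper's):

```python
import math

def mix_at_snr(clean, noise, snr_db):
    """Scale `noise` so the clean-to-noise power ratio equals `snr_db`, then mix.

    `clean` and `noise` are equal-length lists of float samples.
    """
    p_clean = sum(s * s for s in clean) / len(clean)
    p_noise = sum(s * s for s in noise) / len(noise)
    # Gain g chosen so that 10*log10(p_clean / (g**2 * p_noise)) == snr_db.
    g = math.sqrt(p_clean / (p_noise * 10.0 ** (snr_db / 10.0)))
    return [c + g * n for c, n in zip(clean, noise)]
```

Drawing the SNR at random per example would then yield training pairs spanning light to heavy corruption.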
Removing background noise from speech audio has been the subject of considerable research and effort, especially in recent years with the rise of virtual communication and amateur sound recording. Background noise, however, is not the only unpleasant disturbance that can hinder intelligibility: reverberation, clipping, codec artifacts, problematic equalization, limited bandwidth, and inconsistent loudness are equally disturbing and ubiquitous. In this work, we propose to treat the task of speech enhancement as a holistic endeavor, and present a universal speech enhancement system that tackles 55 different distortions simultaneously. Our approach consists of a generative model that employs score-based diffusion, together with a multi-resolution conditioning network that performs enhancement using mixture density networks. We show that this approach significantly outperforms the state of the art in a subjective test performed by expert listeners. We also show that it achieves competitive objective scores with just 4-8 diffusion steps, despite not adopting any particular strategy for fast sampling. We hope that both our methodology and technical contributions encourage researchers and practitioners to adopt a universal approach to speech enhancement, possibly framing it as a generative task.
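The 55 distortions are applied to clean speech during training. Below is a toy sketch of two such corruptions, hard clipping and a loudness change; the specific operators and parameter ranges are illustrative assumptions, not the paper's recipe:

```python
import random

def hard_clip(x, threshold):
    """Clip samples to [-threshold, threshold] (a crude distortion)."""
    return [max(-threshold, min(threshold, s)) for s in x]

def apply_gain_db(x, db):
    """Apply a static gain in decibels (models inconsistent loudness)."""
    g = 10.0 ** (db / 20.0)
    return [g * s for s in x]

def random_corruption(x, rng):
    """Apply a randomly parameterized chain of corruptions."""
    x = hard_clip(x, rng.uniform(0.1, 0.5))
    x = apply_gain_db(x, rng.uniform(-12.0, 12.0))
    return x
```

A full pipeline would chain many more operators (reverberation, codecs, band-limiting) with randomized order and severity.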
Generative adversarial networks have recently demonstrated outstanding performance in neural vocoding, outperforming the best autoregressive and flow-based models. In this paper, we show that this success can be extended to other tasks of conditional audio generation. In particular, building on HiFi vocoders, we propose HiFi++, a novel general framework for bandwidth extension and speech enhancement. We show that, with an improved generator architecture and simplified multi-discriminator training, HiFi++ performs better than or on par with the state of the art in these tasks while spending significantly fewer computational resources. The effectiveness of our approach is validated through a series of extensive experiments.
Objective: Despite numerous studies proposed for audio restoration in the literature, most of them focus on an isolated restoration problem such as denoising or dereverberation, ignoring other artifacts. Moreover, assuming a noisy or reverberant environment with a limited number of fixed signal-to-distortion ratio (SDR) levels is a common practice. However, real-world audio is often corrupted by a blend of artifacts such as reverberation, sensor noise, and background audio mixture with varying types, severities, and durations. In this study, we propose a novel approach for blind restoration of real-world audio signals by Operational Generative Adversarial Networks (Op-GANs) with temporal and spectral objective metrics to enhance the quality of the restored audio signal regardless of the type and severity of each artifact corrupting it. Methods: 1D Operational GANs are used with a generative neuron model optimized for blind restoration of any corrupted audio signal. Results: The proposed approach has been evaluated extensively over the benchmark TIMIT-RAR (speech) and GTZAN-RAR (non-speech) datasets corrupted with a random blend of artifacts, each with a random severity, to mimic real-world audio signals. Average SDR improvements of over 7.2 dB and 4.9 dB are achieved, respectively, which are substantial when compared with the baseline methods. Significance: This is a pioneering study in blind audio restoration with the unique capability of direct (time-domain) restoration of real-world audio whilst achieving an unprecedented level of performance for a wide SDR range and artifact types. Conclusion: 1D Op-GANs can achieve robust and computationally effective real-world audio restoration with significantly improved performance. The source codes and the generated real-world audio datasets are shared publicly with the research community in a dedicated GitHub repository.
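Since the results above are reported as average SDR improvements, here is the plain (non scale-invariant) SDR definition as a sketch; the exact variant used in the paper may differ:

```python
import math

def sdr_db(reference, estimate):
    """Signal-to-distortion ratio in dB: 10*log10(||s||^2 / ||s - s_hat||^2)."""
    signal_power = sum(r * r for r in reference)
    error_power = sum((r - e) ** 2 for r, e in zip(reference, estimate))
    return 10.0 * math.log10(signal_power / error_power)
```

An SDR improvement is then simply `sdr_db(ref, restored) - sdr_db(ref, corrupted)`.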
Convolution-augmented transformers (Conformers) have recently been proposed for various speech-domain applications, such as automatic speech recognition (ASR) and speech separation, as they can capture both local and global dependencies. In this paper, we propose a Conformer-based metric generative adversarial network (CMGAN) for speech enhancement (SE) in the time-frequency (TF) domain. The generator encodes magnitude and complex spectrogram information using two-stage Conformer blocks to model both time and frequency dependencies. The decoder then decouples the estimation into a mask decoder branch, which filters out unwanted distortions, and a complex refinement branch, which further improves the magnitude estimation and implicitly enhances the phase information. Additionally, we include a metric discriminator to alleviate metric mismatch by optimizing the corresponding evaluation score. Objective and subjective evaluations illustrate that CMGAN shows superior performance compared to state-of-the-art methods in three speech enhancement tasks (denoising, dereverberation, and super-resolution). For instance, quantitative denoising analysis on the VoiceBank+DEMAND dataset indicates that CMGAN outperforms previous models by a margin, i.e., a PESQ of 3.41 and an SSNR of 11.10 dB.
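The decoder's two branches can be caricatured on a single TF bin: a real-valued mask scales the magnitude while keeping the noisy phase, and an additive complex correction refines both magnitude and phase. This single-bin sketch is our own illustration of that decomposition, not the paper's code:

```python
import cmath

def enhance_bin(noisy_bin, mask, complex_residual):
    """Combine a magnitude-mask branch with a complex refinement branch."""
    magnitude = abs(noisy_bin)
    phase = cmath.phase(noisy_bin)
    masked = mask * magnitude * cmath.exp(1j * phase)  # magnitude filtering, noisy phase kept
    return masked + complex_residual                   # implicit phase correction
```

In the actual network, the mask and the complex residual are predicted per bin by the two decoder branches.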
Despite recent progress in generative adversarial network (GAN)-based vocoders, in which the model generates a raw waveform from a mel spectrogram, it remains challenging to synthesize high-fidelity audio for numerous speakers across varied recording environments. In this work, we present BigVGAN, a universal vocoder that generalizes well under various unseen conditions in zero-shot settings. We introduce a periodic nonlinearity and an anti-aliased representation into the generator, which brings the desired inductive bias for waveform synthesis and significantly improves audio quality. Based on our improved generator and state-of-the-art discriminators, we train our GAN vocoder at the largest scale to date, up to 112M parameters, which is unprecedented in the literature. In particular, we identify and address training instabilities specific to such scale, while maintaining high-fidelity output without over-regularization. Our BigVGAN achieves state-of-the-art zero-shot performance in various out-of-distribution scenarios, including new speakers, novel languages, singing voices, music and instrumental audio, in unseen (even noisy) recording environments. We will release our code and model at: https://github.com/nvidia/bigvgan
Music discovery services let users identify songs from short mobile recordings. These solutions are often based on audio fingerprinting (AFP), and rely more specifically on the extraction of spectral peaks in order to be robust to a number of distortions. Little work has been done to study the robustness of these algorithms to background noise captured in real environments. In particular, AFP systems still struggle when the signal-to-noise ratio is low, i.e. when the background noise is strong. In this project, we tackle this problem with deep learning. We test a new hybrid strategy which consists of inserting a denoising DL model in front of a peak-based AFP algorithm. We simulate noisy music recordings using a realistic data augmentation pipeline, and train a DL model to denoise them. The denoising model limits the impact of background noise on the AFP system's extracted peaks, improving its robustness to noise. We further propose a novel loss function to adapt the DL model to the considered AFP system, increasing its precision in terms of retrieved spectral peaks. To the best of our knowledge, this hybrid strategy has not been tested before.
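Peak-based AFP systems keep only the time-frequency bins that dominate their neighborhood. A naive, dependency-free sketch of such peak picking on a magnitude-spectrogram grid (real systems use faster maximum filtering and absolute thresholds; the function name is ours):

```python
def local_peaks(spec, radius=1, min_mag=0.0):
    """Return (frame, bin) positions that strictly dominate their neighborhood.

    `spec` is a list of frames, each a list of magnitudes.
    """
    peaks = []
    n_frames, n_bins = len(spec), len(spec[0])
    for t in range(n_frames):
        for f in range(n_bins):
            v = spec[t][f]
            if v <= min_mag:
                continue
            neighbors = [spec[tt][ff]
                         for tt in range(max(0, t - radius), min(n_frames, t + radius + 1))
                         for ff in range(max(0, f - radius), min(n_bins, f + radius + 1))
                         if (tt, ff) != (t, f)]
            if all(v > n for n in neighbors):
                peaks.append((t, f))
    return peaks
```

Additive background noise raises bins around the true peaks, which is exactly how it corrupts the extracted constellation; a denoising front end aims to restore the clean peak set.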
Despite phenomenal progress in recent years, state-of-the-art music separation systems produce source estimates with significant perceptual flaws, such as added extraneous noise or removed harmonics. We propose a post-processing model (the Make it Sound Good (MSG) post-processor) to enhance the output of music source separation systems. We apply our post-processing model to state-of-the-art waveform-based and spectrogram-based music source separators, including separators unseen during training. Our analysis of the errors produced by source separators shows that waveform models tend to introduce more high-frequency noise, while spectrogram models tend to lose transients and high-frequency content. We introduce objective measures to quantify both kinds of errors and show that MSG improves the source reconstruction with respect to both. Crowdsourced subjective evaluations demonstrate that human listeners prefer source estimates of bass and drums that have been post-processed by MSG.
In this work, we present CleanUNet, a causal speech denoising model that operates on raw waveforms. The proposed model is based on an encoder-decoder architecture combined with several self-attention blocks to refine its bottleneck representations, which is crucial for obtaining good results. The model is optimized through a set of losses defined over both the waveform and multi-resolution spectrograms. The proposed method outperforms state-of-the-art models in terms of speech quality on various objective and subjective evaluation metrics.
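The multi-resolution spectrogram loss mentioned above compares magnitude spectra computed at several frame sizes. A slow but dependency-free sketch using a direct DFT (real implementations use windowed, overlapping FFT-based STFTs; the frame sizes here are illustrative, not the paper's):

```python
import cmath

def mag_dft(frame):
    """Magnitudes of the non-negative DFT bins of one frame (direct O(N^2) evaluation)."""
    n = len(frame)
    return [abs(sum(frame[i] * cmath.exp(-2j * cmath.pi * k * i / n)
                    for i in range(n)))
            for k in range(n // 2 + 1)]

def multires_spectral_l1(x, y, frame_sizes=(8, 16, 32)):
    """Sum of L1 magnitude-spectrum distances over several frame sizes."""
    total = 0.0
    for size in frame_sizes:
        for start in range(0, min(len(x), len(y)) - size + 1, size):  # non-overlapping
            mx = mag_dft(x[start:start + size])
            my = mag_dft(y[start:start + size])
            total += sum(abs(a - b) for a, b in zip(mx, my))
    return total
```

Combining this with a plain waveform (e.g. L1) term penalizes both spectral envelope and sample-level errors, as the abstract describes.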
Deep neural network (DNN) techniques have become pervasive in domains such as natural language processing and computer vision. They have achieved great success in these domains in tasks such as machine translation and image generation. Owing to this success, these data-driven techniques have also been applied in the audio domain. More specifically, DNN models have been applied to speech enhancement to achieve denoising, dereverberation, and multi-speaker separation in monaural speech enhancement. In this paper, we review the dominant DNN techniques employed to achieve speech separation. The review covers the whole speech enhancement pipeline: feature extraction, how DNN-based tools model both global and local features of speech, and model training (supervised and unsupervised). We also review the use of pre-trained speech enhancement models to boost the enhancement process. The review is geared towards covering the dominant trends in the application of DNNs to the enhancement of speech obtained from a single speaker.
Previous works (Donahue et al., 2018a; Engel et al., 2019a) have found that generating coherent raw audio waveforms with GANs is challenging. In this paper, we show that it is possible to train GANs reliably to generate high-quality coherent waveforms by introducing a set of architectural changes and simple training techniques. A subjective evaluation metric (mean opinion score, or MOS) shows the effectiveness of the proposed approach for high-quality mel-spectrogram inversion. To establish the generality of the proposed techniques, we show qualitative results of our model in speech synthesis, music domain translation and unconditional music synthesis. We evaluate the various components of the model through ablation studies and suggest a set of guidelines to design general-purpose discriminators and generators for conditional sequence synthesis tasks. Our model is non-autoregressive, fully convolutional, has significantly fewer parameters than competing models, and generalizes to unseen speakers for mel-spectrogram inversion. Our PyTorch implementation runs more than 100x faster than real time on a GTX 1080Ti GPU and more than 2x faster than real time on CPU, without any hardware-specific optimization tricks.
We introduce the visual acoustic matching task, in which an audio clip is transformed to sound as if it were recorded in a target environment. Given an image of the target environment and a waveform of the source audio, the goal is to re-synthesize the audio to match the target room acoustics as suggested by its visible geometry and materials. To address this novel task, we propose a cross-modal transformer model that uses audio-visual attention to inject visual properties into the audio and generate realistic audio output. In addition, we devise a self-supervised training objective that can learn acoustic matching from in-the-wild web videos, despite their lack of acoustically mismatched audio. We demonstrate that our approach successfully translates human speech to a variety of real-world environments depicted in images, outperforming both traditional acoustic matching and more heavily supervised baselines.
Several recent works on speech synthesis have employed generative adversarial networks (GANs) to produce raw waveforms. Although such methods improve the sampling efficiency and memory usage, their sample quality has not yet reached that of autoregressive and flow-based generative models. In this work, we propose HiFi-GAN, which achieves both efficient and high-fidelity speech synthesis. As speech audio consists of sinusoidal signals with various periods, we demonstrate that modeling the periodic patterns of an audio signal is crucial for enhancing sample quality. A subjective human evaluation (mean opinion score, MOS) on a single-speaker dataset indicates that our proposed method demonstrates similarity to human quality while generating 22.05 kHz high-fidelity audio 167.9 times faster than real time on a single V100 GPU. We further show the generality of HiFi-GAN to the mel-spectrogram inversion of unseen speakers and to end-to-end speech synthesis. Finally, a small-footprint version of HiFi-GAN generates samples 13.4 times faster than real time on CPU with comparable quality to an autoregressive counterpart.

Introduction
Voice is one of the most frequent and naturally used communication interfaces for humans. With recent developments in technology, voice is being used as a main interface in artificial intelligence (AI) voice assistant services such as Amazon Alexa, and it is also widely used in automobiles, smart homes and so forth. Accordingly, with the increase in demand for people to converse with machines, technology that synthesizes natural speech like human speech is being actively studied. Recently, with the development of neural networks, speech synthesis technology has made rapid progress. Most neural speech synthesis models use a two-stage pipeline: 1) predicting a low-resolution intermediate representation such as mel-spectrograms
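HiFi-GAN's multi-period discriminators expose periodic structure by looking at equally spaced samples, which amounts to folding the 1-D waveform into 2-D. A minimal sketch of that reshaping (the actual model then applies 2-D convolutions to the result):

```python
def fold_by_period(x, period):
    """Reshape a 1-D signal into rows of length `period`, dropping the tail,
    so that each column holds samples spaced `period` apart."""
    usable = (len(x) // period) * period
    return [x[i:i + period] for i in range(0, usable, period)]
```

A signal whose period matches the fold produces identical rows, making periodicity trivially visible to a 2-D discriminator; HiFi-GAN uses several prime periods so different periodicities are caught by different discriminators.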
Neural networks have proven to be a formidable tool for tackling the problem of speech coding at very low bit rates. However, the design of a neural coder that can operate robustly in real-world conditions remains a major challenge. We therefore present the Neural End-2-End Speech Codec (NESC), a robust, scalable end-to-end neural speech codec for high-quality wideband speech coding at 3 kbps. The encoder uses a new architecture configuration that relies on our proposed DualPathConvRNN (DPCRNN) layer, while the decoder architecture is based on our previous work, Streamwise-StyleMelGAN. Our subjective listening tests on clean and noisy speech show that NESC is particularly robust to unseen conditions and signal perturbations.
Given recent advances in music source separation and automatic mixing, removing audio effects from music tracks is a meaningful step toward developing automated remixing systems. This paper focuses on removing distortion audio effects applied to guitar tracks in music production. We explore whether effect removal can be solved by neural networks designed for source separation and audio effect modeling. Our approach proves particularly effective for effects applied to a mix of processed and clean signals. The models achieve better quality and faster inference than a state-of-the-art solution based on sparse optimization. We demonstrate that the models are suitable not only for distortion but also for other types of distortion effects. In discussing the results, we stress the usefulness of multiple evaluation metrics for assessing different aspects of reconstruction in distortion effect removal.
Neural vocoders based on generative adversarial networks (GANs) are widely used owing to their fast inference speed and lightweight networks, while generating high-quality speech waveforms. Because perceptually important speech components are primarily concentrated in low-frequency bands, most GAN-based neural vocoders perform multi-scale analysis that evaluates downsampled speech waveforms. This multi-scale analysis helps the generator improve speech intelligibility. However, in preliminary experiments we observed that multi-scale analysis that emphasizes low-frequency bands causes unintended artifacts, e.g., aliasing and imaging artifacts, which degrade the quality of the synthesized speech waveform. Therefore, in this paper, we investigate the relationship between these artifacts and GAN-based neural vocoders, and propose a GAN-based neural vocoder, called Avocodo, that allows the synthesis of high-fidelity speech with reduced artifacts. We introduce two kinds of discriminators that evaluate waveforms from various perspectives: a collaborative multi-band discriminator and a sub-band discriminator. We also utilize a pseudo quadrature mirror filter bank to obtain downsampled multi-band waveforms while avoiding aliasing. Experimental results show that Avocodo outperforms conventional GAN-based neural vocoders in both speech and singing voice synthesis tasks and can synthesize artifact-free speech. In particular, Avocodo is even capable of reproducing high-quality waveforms for unseen speakers.
Conventionally, audio super-resolution models fix the initial and target sampling rates, which requires a trained model for each pair of rates. We introduce NU-Wave 2, a diffusion model for neural audio upsampling that can generate 48 kHz audio signals from inputs of various sampling rates with a single model. Based on the architecture of NU-Wave, NU-Wave 2 uses short-time Fourier convolution (STFC) to generate harmonics and resolve the main failure modes of NU-Wave, and incorporates the bandwidth spectral feature transform (BSFT) to condition on the bandwidth of the input in the frequency domain. We experimentally demonstrate that NU-Wave 2 produces high-resolution audio regardless of the sampling rate of the input, while requiring fewer parameters than other models. The official code and audio samples are available at https://mindslab-ai.github.io/nuwave2
Convolutional neural networks contain strong priors for generating natural-looking images [1]. These priors enable image denoising, super-resolution, and inpainting in an unsupervised manner. Previous attempts to demonstrate similar ideas in audio, namely deep audio priors, (i) use hand-picked architectures such as harmonic convolutions, (ii) work only with spectral input, and (iii) have been used mostly for eliminating Gaussian noise [2]. In this work, we show that existing state-of-the-art architectures for audio source separation contain deep priors even when working with the raw waveform. Deep priors can be discovered by training a neural network to produce a single corrupted signal when given white noise as input. A network with relevant deep priors is likely to generate a cleaner version of the signal before converging on the corrupted signal. We demonstrate this restoration effect with several corruptions: background noise, reverberation, and gaps in the signal (audio inpainting).
Recently, diffusion-based generative models have been introduced to the task of speech enhancement. The corruption of clean speech is modeled as a fixed forward process in which increasing amounts of noise are gradually added. By learning to reverse this process in an iterative fashion conditioned on the noisy input, clean speech is generated. We build upon our previous work and derive the training task within the formalism of stochastic differential equations. We present a detailed theoretical review of the underlying score matching objective and explore different sampler configurations for solving the reverse process at test time. By using a sophisticated network architecture from the natural image generation literature, we achieve significantly improved performance compared to previous publications. We also show that we can compete with recent discriminative models and achieve better generalization when evaluating on a corpus different from the one used for training. We complement the evaluation results with a subjective listening test in which our proposed method is rated best. Furthermore, we show that the proposed method achieves remarkable state-of-the-art performance in single-channel speech dereverberation. Our code and audio examples are available online, see https://uhh.de/inf-sp-sgmse
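In this SDE formulation, the mean of the forward process drifts from the clean speech toward the noisy recording while Gaussian noise is injected. A discretized toy sketch of sampling from such a process (the stiffness and noise schedule below are placeholder assumptions, not the paper's calibrated values):

```python
import math
import random

def forward_sample(clean, noisy, t, stiffness=1.5, sigma=0.1, rng=None):
    """Draw x_t: the mean interpolates clean -> noisy with weight exp(-stiffness*t),
    and zero-mean Gaussian noise of growing scale is added."""
    rng = rng or random.Random()
    w = math.exp(-stiffness * t)
    scale = sigma * math.sqrt(max(t, 0.0))
    return [w * c + (1.0 - w) * n + scale * rng.gauss(0.0, 1.0)
            for c, n in zip(clean, noisy)]
```

The learned reverse process starts near the noisy recording and iteratively walks this trajectory backward toward clean speech.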
Music mixing traditionally involves recording instruments in the form of clean, individual tracks and blending them into a final mixture using audio effects and expert knowledge (e.g., a mixing engineer). The automation of music production tasks has become an emerging field in recent years, in which rule-based methods and machine learning approaches have been explored. Nevertheless, the lack of dry or clean instrument recordings limits the performance of such models, which is still far from professional human-made mixes. We explore whether out-of-domain data, such as wet or processed multitrack music recordings, can be repurposed to train supervised deep learning models that bridge the current gap in automatic mixing quality. To achieve this, we propose a novel data preprocessing method that allows the models to perform automatic music mixing. We also redesign a listening test method for evaluating music mixing systems. We validate our results through such a listening test, using experienced mixing engineers as participants.