Despite recent progress in generative adversarial network (GAN)-based vocoders, in which the model generates a raw waveform from a mel spectrogram, it remains challenging to synthesize high-fidelity audio for numerous speakers across varied recording environments. In this work, we present BigVGAN, a universal vocoder that generalizes well under various unseen conditions in a zero-shot setting. We introduce periodic nonlinearities and anti-aliased representation into the generator, which brings the desired inductive bias for waveform synthesis and significantly improves audio quality. Based on our improved generator and the state-of-the-art discriminators, we train our GAN vocoder at the largest scale, up to 112M parameters, which is unprecedented in the literature. In particular, we identify and address the training instabilities specific to such scale, while maintaining high-fidelity output without over-regularization. Our BigVGAN achieves state-of-the-art zero-shot performance for various out-of-distribution scenarios, including new speakers, novel languages, singing voices, and music and instrumental audio in unseen (even noisy) recording environments. We will release our code and model at: https://github.com/nvidia/bigvgan
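To make the "periodic nonlinearity" concrete, here is a minimal PyTorch sketch of a Snake-style activation of the form x + sin²(αx)/α with a learnable per-channel α. It only illustrates the idea; the anti-aliased (low-pass filtered) resampling that BigVGAN applies around such activations is omitted, and the module layout is an assumption rather than the released implementation.

```python
import torch
import torch.nn as nn

class Snake(nn.Module):
    """Snake-style periodic activation x + (1/a) * sin^2(a * x) with one learnable
    a per channel. Illustrative sketch only; the anti-aliased up/down-sampling
    used around such activations in BigVGAN is not reproduced here."""
    def __init__(self, channels: int, alpha_init: float = 1.0):
        super().__init__()
        self.alpha = nn.Parameter(alpha_init * torch.ones(1, channels, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time)
        return x + torch.sin(self.alpha * x) ** 2 / (self.alpha + 1e-9)

if __name__ == "__main__":
    print(Snake(channels=8)(torch.randn(2, 8, 100)).shape)  # torch.Size([2, 8, 100])
```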
Several recent works on speech synthesis have employed generative adversarial networks (GANs) to produce raw waveforms. Although such methods improve the sampling efficiency and memory usage, their sample quality has not yet reached that of autoregressive and flow-based generative models. In this work, we propose HiFi-GAN, which achieves both efficient and high-fidelity speech synthesis. As speech audio consists of sinusoidal signals with various periods, we demonstrate that modeling periodic patterns of audio is crucial for enhancing sample quality. A subjective human evaluation (mean opinion score, MOS) on a single-speaker dataset indicates that our proposed method demonstrates similarity to human quality while generating 22.05 kHz high-fidelity audio 167.9 times faster than real-time on a single V100 GPU. We further show the generality of HiFi-GAN to the mel-spectrogram inversion of unseen speakers and end-to-end speech synthesis. Finally, a small-footprint version of HiFi-GAN generates samples 13.4 times faster than real-time on CPU with quality comparable to an autoregressive counterpart.

Introduction: Voice is one of the most frequent and naturally used communication interfaces for humans. With recent developments in technology, voice is being used as a main interface in artificial intelligence (AI) voice assistant services such as Amazon Alexa, and it is also widely used in automobiles, smart homes and so forth. Accordingly, with the increase in demand for people to converse with machines, technology that synthesizes natural speech like human speech is being actively studied. Recently, with the development of neural networks, speech synthesis technology has made rapid progress. Most neural speech synthesis models use a two-stage pipeline: 1) predicting a low-resolution intermediate representation such as mel-spectrograms (
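The abstract's central claim is that explicitly modeling the periodic structure of speech improves sample quality. Below is a minimal sketch of one way period discriminators realize this, assuming the reshape-to-2-D trick: the waveform is folded into a 2-D map of width p so that 2-D convolutions compare samples exactly p steps apart. Channel widths and kernel sizes are illustrative, not the published HiFi-GAN configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PeriodDiscriminator(nn.Module):
    """Sketch of a period-p discriminator: fold the 1-D waveform into a 2-D map
    of shape (time // p, p) and apply 2-D convolutions along the time axis."""
    def __init__(self, period: int):
        super().__init__()
        self.period = period
        self.convs = nn.ModuleList([
            nn.Conv2d(1, 32, (5, 1), stride=(3, 1), padding=(2, 0)),
            nn.Conv2d(32, 64, (5, 1), stride=(3, 1), padding=(2, 0)),
        ])
        self.out = nn.Conv2d(64, 1, (3, 1), padding=(1, 0))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, samples)
        b, c, t = x.shape
        if t % self.period:                      # pad so the length divides the period
            pad = self.period - t % self.period
            x = F.pad(x, (0, pad), mode="reflect")
            t = t + pad
        x = x.view(b, c, t // self.period, self.period)
        for conv in self.convs:
            x = F.leaky_relu(conv(x), 0.1)
        return self.out(x)

if __name__ == "__main__":
    print(PeriodDiscriminator(period=3)(torch.randn(2, 1, 16000)).shape)
```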
Neural vocoders based on generative adversarial networks (GANs) have been widely used because of their fast inference speed and lightweight networks, while generating high-quality speech waveforms. Since the perceptually important speech components are primarily concentrated in the low-frequency bands, most GAN-based neural vocoders perform multi-scale analysis that evaluates downsampled speech waveforms. Such multi-scale analysis helps the generator improve speech intelligibility. However, in preliminary experiments, we observed that multi-scale analysis that focuses on the low-frequency bands causes unintended artifacts, e.g., aliasing and imaging artifacts, which degrade the quality of the synthesized speech waveform. Therefore, in this paper, we investigate the relationship between these artifacts and GAN-based neural vocoders and propose a GAN-based neural vocoder, called Avocodo, that allows the synthesis of high-fidelity speech with reduced artifacts. We introduce two kinds of discriminators to evaluate waveforms from various perspectives: a collaborative multi-band discriminator and a sub-band discriminator. We also utilize a pseudo quadrature mirror filter bank to obtain downsampled multi-band waveforms while avoiding aliasing. Experimental results show that Avocodo outperforms conventional GAN-based neural vocoders in both speech and singing voice synthesis tasks and can synthesize artifact-free speech. In particular, Avocodo is even capable of reproducing high-quality waveforms for unseen speakers.
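The aliasing issue mentioned above comes from evaluating naively downsampled waveforms. The sketch below only illustrates the filter-before-decimate principle with a single windowed-sinc low-pass filter; Avocodo itself uses a pseudo-QMF analysis filter bank to obtain alias-suppressed multi-band waveforms, which is not reproduced here.

```python
import torch
import torch.nn.functional as F

def antialiased_downsample(x: torch.Tensor, factor: int = 2, taps: int = 64) -> torch.Tensor:
    """Low-pass the waveform with a windowed-sinc FIR filter before keeping every
    `factor`-th sample, so decimation does not fold high frequencies back into the
    retained band (aliasing). Single-band illustration only, not a PQMF design."""
    cutoff = 0.5 / factor                                  # normalized cutoff (cycles/sample)
    n = torch.arange(taps) - (taps - 1) / 2
    h = 2 * cutoff * torch.sinc(2 * cutoff * n)            # ideal low-pass impulse response
    h = h * torch.hann_window(taps, periodic=False)        # truncate smoothly
    h = (h / h.sum()).view(1, 1, -1)
    y = F.conv1d(x.view(1, 1, -1), h, padding=taps // 2)   # filter, then decimate
    return y[..., ::factor].flatten()
```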
Generative adversarial networks have recently demonstrated outstanding performance in neural vocoding, outperforming the best autoregressive and flow-based models. In this paper, we show that this success can be extended to other tasks of conditional audio generation. In particular, building upon HiFi vocoders, we propose a novel HiFi++ general framework for bandwidth extension and speech enhancement. We show that, with an improved generator architecture and simplified multi-discriminator training, HiFi++ performs better than or on par with the state-of-the-art in these tasks while spending significantly less computational resources. The effectiveness of our approach is validated through a series of extensive experiments.
Previous works (Donahue et al., 2018a; Engel et al., 2019a) have found that generating coherent raw audio waveforms with GANs is challenging. In this paper, we show that it is possible to train GANs reliably to generate high-quality coherent waveforms by introducing a set of architectural changes and simple training techniques. A subjective evaluation metric (Mean Opinion Score, or MOS) shows the effectiveness of the proposed approach for high-quality mel-spectrogram inversion. To establish the generality of the proposed techniques, we show qualitative results of our model in speech synthesis, music domain translation and unconditional music synthesis. We evaluate the various components of the model through ablation studies and suggest a set of guidelines to design general-purpose discriminators and generators for conditional sequence synthesis tasks. Our model is non-autoregressive, fully convolutional, with significantly fewer parameters than competing models, and generalizes to unseen speakers for mel-spectrogram inversion. Our PyTorch implementation runs more than 100x faster than real-time on a GTX 1080Ti GPU and more than 2x faster than real-time on CPU, without any hardware-specific optimization tricks.
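One architectural ingredient commonly associated with this line of work is a multi-scale discriminator that judges the waveform at several temporal resolutions, obtained by repeated average pooling. The sketch below shows that structure only; the per-scale sub-network is a deliberately tiny stand-in, not the paper's architecture.

```python
import torch
import torch.nn as nn

class MultiScaleDiscriminator(nn.Module):
    """Run independent discriminators on the raw waveform and on progressively
    average-pooled (downsampled) copies of it. Illustrative sketch only."""
    def __init__(self, num_scales: int = 3):
        super().__init__()
        self.pool = nn.AvgPool1d(kernel_size=4, stride=2, padding=1)
        self.discriminators = nn.ModuleList([
            nn.Sequential(
                nn.Conv1d(1, 16, 15, stride=1, padding=7), nn.LeakyReLU(0.2),
                nn.Conv1d(16, 1, 3, padding=1),
            )
            for _ in range(num_scales)
        ])

    def forward(self, x: torch.Tensor):
        # x: (batch, 1, samples); returns one score map per scale
        outputs = []
        for d in self.discriminators:
            outputs.append(d(x))
            x = self.pool(x)  # halve the temporal resolution for the next scale
        return outputs
```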
Most GAN (generative adversarial network)-based approaches to high-fidelity waveform generation rely heavily on discriminators to improve their performance. However, the over-use of this GAN method introduces much uncertainty into the generation process and often causes mismatches in pitch and intensity, which is fatal when applied to sensitive tasks such as singing voice synthesis (SVS). To address this problem, we propose RefineGAN, a high-fidelity neural vocoder with faster-than-real-time generation capability, focused on robustness, pitch and intensity accuracy, and full-band audio generation. We employ a pitch-guided refinement architecture with multi-scale spectrogram-based loss functions to help stabilize the training process and maintain the robustness of the neural vocoder while using this GAN-based training method. Audio generated with this method shows better performance in subjective tests than the ground-truth audio. This result indicates that the fidelity is even improved during waveform reconstruction by eliminating defects produced by the speaker and the recording procedure. Furthermore, a further study shows that a model trained on a specific type of data can perform equally well on completely unseen languages and unseen speakers. Generated sample pairs are provided at https://timedomain-tech.github.io/refinegor.
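A minimal sketch of a multi-scale spectrogram-based loss of the kind mentioned above: L1 distance between log-magnitude STFTs at several analysis resolutions. The specific FFT sizes, hop lengths and weighting are assumptions, not RefineGAN's exact recipe.

```python
import torch

def multi_resolution_stft_loss(pred: torch.Tensor, target: torch.Tensor,
                               fft_sizes=(512, 1024, 2048)) -> torch.Tensor:
    """L1 distance between log-magnitude spectrograms at several STFT resolutions.
    pred/target: waveforms of shape (samples,) or (batch, samples)."""
    loss = 0.0
    for n_fft in fft_sizes:
        hop = n_fft // 4
        window = torch.hann_window(n_fft, device=pred.device)
        sp = torch.stft(pred, n_fft, hop_length=hop, window=window, return_complex=True).abs()
        st = torch.stft(target, n_fft, hop_length=hop, window=window, return_complex=True).abs()
        loss = loss + torch.mean(torch.abs(torch.log(sp + 1e-7) - torch.log(st + 1e-7)))
    return loss / len(fft_sizes)
```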
In this work, we propose DiffWave, a versatile diffusion probabilistic model for conditional and unconditional waveform generation. The model is non-autoregressive, and converts the white noise signal into structured waveform through a Markov chain with a constant number of steps at synthesis. It is efficiently trained by optimizing a variant of variational bound on the data likelihood. DiffWave produces high-fidelity audio in different waveform generation tasks, including neural vocoding conditioned on mel spectrogram, class-conditional generation, and unconditional generation. We demonstrate that DiffWave matches a strong WaveNet vocoder in terms of speech quality (MOS: 4.44 versus 4.43), while synthesizing orders of magnitude faster. In particular, it significantly outperforms autoregressive and GAN-based waveform models in the challenging unconditional generation task in terms of audio quality and sample diversity from various automatic and human evaluations. Audio samples are available at: https://diffwave-demo.github.io/
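For readers unfamiliar with diffusion vocoders, the sketch below shows the standard DDPM-style training step such models optimize: corrupt the clean waveform with the closed-form forward process at a random timestep and regress the injected noise. The model(x_t, t, mel) interface is an assumption for illustration, not DiffWave's exact API.

```python
import torch

def diffusion_training_step(model, x0: torch.Tensor, mel: torch.Tensor,
                            betas: torch.Tensor) -> torch.Tensor:
    """One DDPM-style training step: sample a timestep t, form the noisy x_t via
    the closed-form forward process q(x_t | x_0), and regress the added noise."""
    alphas = 1.0 - betas
    alpha_bar = torch.cumprod(alphas, dim=0)                # cumulative product \bar{alpha}_t
    b = x0.shape[0]
    t = torch.randint(0, len(betas), (b,), device=x0.device)
    noise = torch.randn_like(x0)
    shape = (b,) + (1,) * (x0.dim() - 1)
    a = alpha_bar[t].sqrt().view(shape)
    s = (1.0 - alpha_bar[t]).sqrt().view(shape)
    x_t = a * x0 + s * noise                                # noisy waveform at step t
    eps_hat = model(x_t, t, mel)                            # network predicts the noise
    return torch.mean((eps_hat - noise) ** 2)               # simplified variational bound
```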
Recently, GAN-based neural vocoders such as Parallel WaveGAN, MelGAN, HiFiGAN and UnivNet have become popular due to their lightweight and parallel structures, enabling real-time synthesis of high-fidelity waveforms even on a CPU. HiFiGAN and UnivNet are two SOTA vocoders. Despite their high quality, there is still room for improvement. In this paper, motivated by the Vision Outlooker structure from computer vision, we adopt a similar idea and propose an effective and lightweight neural vocoder called WOLONet. In this network, we develop a novel lightweight block that uses sinusoidally gated, location-variable, channel-independent and depthwise dynamic convolution kernels. To demonstrate the effectiveness and generalizability of our method, we conduct an ablation study to verify our novel design and make subjective and objective comparisons with typical GAN-based vocoders. The results show that our WOLONet achieves the best generation quality while requiring fewer parameters than the two neural SOTA vocoders, HiFiGAN and UnivNet.
We present a method for denoising historical music recordings. Our model converts its input into a time-frequency representation via the short-time Fourier transform (STFT) and processes the resulting complex spectrogram with a convolutional neural network. The network is trained with reconstruction and adversarial objectives on a synthetic noisy music dataset, created by mixing clean music with real noise samples extracted from quiet segments of old recordings. We evaluate our method quantitatively on held-out test examples from the synthetic dataset and qualitatively through human ratings on real historical recording samples. Our results show that the proposed method is effective at removing noise while preserving the quality and details of the original music.
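A minimal sketch of the kind of synthetic-mixing step described above: scaling a real noise clip to a chosen signal-to-noise ratio before adding it to clean music. The SNR-based formulation is an assumption about the data pipeline, not the paper's exact procedure.

```python
import torch

def mix_at_snr(clean: torch.Tensor, noise: torch.Tensor, snr_db: float) -> torch.Tensor:
    """Scale `noise` (e.g. extracted from quiet segments of old recordings) so it
    sits at `snr_db` dB relative to `clean`, then add. Both are 1-D waveforms of
    equal length."""
    clean_power = clean.pow(2).mean()
    noise_power = noise.pow(2).mean().clamp_min(1e-12)
    target_noise_power = clean_power / (10.0 ** (snr_db / 10.0))
    noise = noise * torch.sqrt(target_noise_power / noise_power)
    return clean + noise
```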
Removing background noise from speech audio has been the subject of considerable research and effort, especially in recent years due to the rise of virtual communication and amateur sound recording. Yet background noise is not the only unpleasant disturbance that can prevent intelligibility: reverberation, clipping, codec artifacts, problematic equalization, limited bandwidth, or inconsistent loudness are equally disturbing and ubiquitous. In this work, we propose to consider the task of speech enhancement as a holistic endeavor and present a universal speech enhancement system that tackles 55 different distortions at the same time. Our approach consists of a generative model that employs score-based diffusion, together with a multi-resolution conditioning network that performs enhancement with mixture density networks. We show that this approach significantly outperforms the state of the art in a subjective test performed by expert listeners. We also show that it achieves competitive objective scores with just 4-8 diffusion steps, despite not considering any particular strategy for fast sampling. We hope that both our methodology and technical contributions encourage researchers and practitioners to adopt a universal approach to speech enhancement, possibly framing it as a generative task.
YourTTS brings the power of a multilingual approach to the task of zero-shot multi-speaker TTS. Our method builds upon the VITS model and adds several novel modifications for zero-shot multi-speaker and multilingual training. We achieve state-of-the-art (SOTA) results in zero-shot multi-speaker TTS and results comparable to SOTA in zero-shot voice conversion on the VCTK dataset. Additionally, our approach achieves promising results in a target language with a single-speaker dataset, opening possibilities for zero-shot multi-speaker TTS and zero-shot voice conversion systems in low-resource languages. Finally, it is possible to fine-tune the model with less than one minute of speech and achieve state-of-the-art voice similarity with reasonable quality. This is important for allowing synthesis for speakers whose voice or recording characteristics differ greatly from those seen during training.
Recent advances in neural text-to-speech research have been dominated by two-stage pipelines that utilize low-level intermediate speech representations such as mel-spectrograms. However, such predetermined features are fundamentally limited, because they do not allow the full potential of a data-driven approach to be exploited through learned hidden representations. For this reason, several end-to-end methods have been proposed. However, such models are harder to train and require a large number of high-quality recordings with transcriptions. Here, we propose WavThruVec, a two-stage architecture that resolves the bottleneck by using high-dimensional wav2vec 2.0 embeddings as the intermediate speech representation. Since these hidden activations provide high-level linguistic features, they are more robust to noise. This allows us to utilize annotated speech datasets of lower quality to train the first-stage module. At the same time, because wav2vec 2.0 embeddings are already time-aligned, the second-stage component can be trained on large-scale untranscribed audio corpora. This results in improved generalization to out-of-vocabulary words, as well as better generalization to unseen speakers. We show that the proposed model not only matches the quality of state-of-the-art neural models, but also presents useful properties that enable tasks like voice conversion or zero-shot synthesis.
This paper introduces a unified source-filter network with a harmonic-plus-noise source excitation generation mechanism. In previous work, we proposed the unified source-filter GAN (uSFGAN) for developing a high-fidelity neural vocoder with flexible voice controllability using a unified source-filter neural network architecture. However, the capability of uSFGAN to model aperiodic source excitation signals is insufficient, and there is still a gap in sound quality between natural and generated speech. To improve the source excitation modeling and the generated sound quality, a new source excitation generation network that separately generates the periodic and aperiodic components is proposed. The advanced adversarial training procedure of HiFiGAN is also adopted to replace that of Parallel WaveGAN used in the original uSFGAN. Both objective and subjective evaluation results show that the modified uSFGAN significantly improves the sound quality of the basic uSFGAN while maintaining voice controllability.
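A rough sketch of a harmonic-plus-noise source excitation: a sum of sinusoids at multiples of the upsampled frame-level F0 for the periodic part, plus Gaussian noise for the aperiodic part. Voiced/unvoiced gating, amplitude envelopes and the exact uSFGAN formulation are omitted; shapes and constants below are assumptions for illustration.

```python
import math
import torch

def harmonic_plus_noise_excitation(f0: torch.Tensor, sample_rate: int = 24000,
                                   hop: int = 120, num_harmonics: int = 8) -> torch.Tensor:
    """f0: frame-level fundamental frequency in Hz, shape (num_frames,).
    Returns a sample-level excitation of length num_frames * hop."""
    f0_up = torch.repeat_interleave(f0, hop)                     # frames -> samples
    phase = 2 * math.pi * torch.cumsum(f0_up / sample_rate, dim=-1)
    k = torch.arange(1, num_harmonics + 1, dtype=phase.dtype).view(-1, 1)
    harmonic = torch.sin(k * phase).sum(dim=0) / num_harmonics   # periodic component
    aperiodic = 0.1 * torch.randn_like(harmonic)                 # aperiodic (noise) component
    return harmonic + aperiodic
```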
Neural vocoders, which convert spectral representations of an audio signal into waveforms, are a commonly used component in speech synthesis pipelines. They focus on synthesizing waveforms from low-dimensional representations such as mel-spectrograms. In recent years, different approaches have been introduced to develop such vocoders. However, it has become more challenging to assess these new vocoders and compare their performance to previous ones. To address this problem, we present VocBench, a framework that benchmarks the performance of state-of-the-art neural vocoders. VocBench uses a systematic study to evaluate different neural vocoders in a shared environment that enables a fair comparison between them. In our experiments, we use the same setup for datasets, training pipelines, and evaluation metrics for all neural vocoders. We perform subjective and objective evaluations to compare the performance of each vocoder along different axes. Our results show that the framework is capable of showing the competitive efficacy and quality of the synthesized samples of each vocoder. The VocBench framework is available at https://github.com/facebookResearch/Vocoder-Benchmark.
The development of neural vocoders (NVs) has led to high-quality and fast waveform generation. However, conventional NVs target a single sampling rate and require re-training when applied to a different sampling rate. The suitable sampling rate varies from application to application because of the trade-off between speech quality and generation speed. In this study, we propose a method for handling multiple sampling rates in a single NV, called MSR-NV. By generating waveforms starting from a low sampling rate, MSR-NV can efficiently learn the characteristics of each frequency band and synthesize high-quality speech at multiple sampling rates. It can be regarded as an extension of previously proposed NVs, and in this study we extend the structure of Parallel WaveGAN (PWG). Experimental evaluation results show that the proposed method achieves significantly higher subjective quality than the original PWG trained separately at 16, 24, and 48 kHz, without increasing the inference time. We also show that MSR-NV can leverage speech at lower sampling rates to further improve the quality of the synthesized speech.
Convolution-augmented transformers (Conformers) have recently been proposed for various speech-domain applications, such as automatic speech recognition (ASR) and speech separation, as they can capture both local and global dependencies. In this paper, we propose a conformer-based metric generative adversarial network (CMGAN) for speech enhancement (SE) in the time-frequency (TF) domain. The generator encodes magnitude and complex spectrogram information using two-stage conformer blocks to model both time and frequency dependencies. The decoder then decouples the estimation into a magnitude-mask decoder branch that filters out unwanted distortions and a complex refinement branch that further improves the magnitude estimation and implicitly enhances the phase information. Additionally, we include a metric discriminator to alleviate metric mismatch by optimizing the corresponding evaluation score. Objective and subjective evaluations show that CMGAN achieves superior performance compared to state-of-the-art methods in three speech enhancement tasks (denoising, dereverberation and super-resolution). For example, quantitative denoising analysis on the Voice Bank+DEMAND dataset indicates that CMGAN outperforms previous models by a margin, i.e., PESQ of 3.41 and SSNR of 11.10 dB.
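A minimal sketch of how a magnitude-mask branch and a complex refinement branch can be combined, as described above: the mask scales the noisy magnitude while reusing the noisy phase, and the complex residual is added on top. Tensor layouts are assumptions for illustration, not CMGAN's exact implementation.

```python
import torch

def combine_mask_and_complex(noisy_stft: torch.Tensor,
                             mag_mask: torch.Tensor,
                             complex_residual: torch.Tensor) -> torch.Tensor:
    """noisy_stft and complex_residual: complex tensors (batch, freq, frames);
    mag_mask: real-valued tensor of the same shape."""
    mag = noisy_stft.abs() * mag_mask        # masked magnitude
    phase = torch.angle(noisy_stft)          # reuse the noisy phase
    masked = torch.polar(mag, phase)         # back to a complex spectrogram
    return masked + complex_residual         # add the complex refinement
```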
Deep learning based text-to-speech (TTS) systems have been evolving rapidly with advances in model architectures, training methodologies, and generalization across speakers and languages. However, these advances have not been thoroughly investigated for Indian language speech synthesis. Such investigation is computationally expensive given the number and diversity of Indian languages, relatively lower resource availability, and the diverse set of advances in neural TTS that remain untested. In this paper, we evaluate the choice of acoustic models, vocoders, supplementary loss functions, training schedules, and speaker and language diversity for Dravidian and Indo-Aryan languages. Based on this, we identify monolingual models with FastPitch and HiFi-GAN V1, trained jointly on male and female speakers to perform the best. With this setup, we train and evaluate TTS models for 13 languages and find our models to significantly improve upon existing models in all languages as measured by mean opinion scores. We open-source all models on the Bhashini platform.
Denoising Diffusion Probabilistic Models (DDPMs) are emerging in text-to-speech (TTS) synthesis because of their strong capability of generating high-fidelity samples. However, their iterative refinement process in high-dimensional data space results in slow inference speed, which restricts their application in real-time systems. Previous works have explored speeding up inference by minimizing the number of inference steps, but at the cost of sample quality. In this work, to improve the inference speed of DDPM-based TTS models while achieving high sample quality, we propose ResGrad, a lightweight diffusion model which learns to refine the output spectrogram of an existing TTS model (e.g., FastSpeech 2) by predicting the residual between the model output and the corresponding ground-truth speech. ResGrad has several advantages: 1) Compared with other acceleration methods for DDPMs, which need to synthesize speech from scratch, ResGrad reduces the complexity of the task by changing the generation target from the ground-truth mel-spectrogram to the residual, resulting in a more lightweight model and thus a smaller real-time factor. 2) ResGrad is employed in the inference process of the existing TTS model in a plug-and-play way, without re-training this model. We verify ResGrad on the single-speaker dataset LJSpeech and two more challenging datasets with multiple speakers (LibriTTS) and a high sampling rate (VCTK). Experimental results show that, in comparison with other speed-up methods for DDPMs: 1) ResGrad achieves better sample quality at the same inference speed measured by real-time factor; 2) with similar speech quality, ResGrad synthesizes speech more than 10 times faster than baseline methods. Audio samples are available at https://resgrad1.github.io/.
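The core idea is compact enough to state in code: the diffusion model's target is the residual between the ground-truth mel-spectrogram and the frozen TTS model's output, and at inference the sampled residual is added back before vocoding. A trivial sketch with assumed (batch, mel_bins, frames) tensors:

```python
import torch

def resgrad_residual_target(gt_mel: torch.Tensor, tts_mel: torch.Tensor) -> torch.Tensor:
    # Training target of the residual diffusion model: ground truth minus the
    # frozen TTS model's prediction.
    return gt_mel - tts_mel

def resgrad_refine(tts_mel: torch.Tensor, sampled_residual: torch.Tensor) -> torch.Tensor:
    # Inference: add the sampled residual back onto the TTS output before vocoding.
    return tts_mel + sampled_residual
```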
Denoising diffusion probabilistic models (DDPMs) have recently achieved leading performance in many generative tasks. However, the cost of their inherited iterative sampling process hinders their deployment for text-to-speech. Through a preliminary study on diffusion model parameterization, we find that previous gradient-based TTS models require hundreds or thousands of iterations to guarantee high sample quality, which poses a challenge for accelerating sampling. In this work, we propose ProDiff, a progressive fast diffusion model for high-quality text-to-speech. Unlike previous work that estimates the gradient of the data density, ProDiff parameterizes the denoising model by directly predicting the clean data, which avoids distinct quality degradation when accelerating sampling. To tackle the model convergence challenge with decreased diffusion iterations, ProDiff reduces the data variance in the target via knowledge distillation. Specifically, the denoising model uses the mel-spectrograms generated by an N-step DDIM teacher as the training target and distills the behavior into a new model with N/2 steps. This allows the TTS model to make sharp predictions and further reduces the sampling time by orders of magnitude. Our evaluations show that ProDiff needs only two iterations to synthesize high-fidelity mel-spectrograms, while maintaining sample quality and diversity competitive with state-of-the-art models that use hundreds of steps. ProDiff samples 24x faster than real-time on a single NVIDIA 2080Ti GPU, making diffusion models practically applicable to text-to-speech synthesis deployment for the first time. Our extensive ablation studies demonstrate that each design in ProDiff is effective, and we further show that ProDiff can easily be extended to the multi-speaker setting. Audio samples are available at https://prodiff.github.io/.
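In contrast to the noise-prediction sketch given for DiffWave above, ProDiff's parameterization has the network predict the clean data directly. Below is a hedged sketch of such an x0-prediction training loss; the model(x_t, t, cond) interface and the L1 objective are assumptions for illustration, and the teacher-student distillation step is not shown.

```python
import torch

def x0_prediction_loss(model, x0_mel: torch.Tensor, cond: torch.Tensor,
                       alpha_bar: torch.Tensor) -> torch.Tensor:
    """Corrupt the clean mel-spectrogram at a random timestep and regress x0
    itself instead of the injected noise."""
    b = x0_mel.shape[0]
    t = torch.randint(0, alpha_bar.shape[0], (b,), device=x0_mel.device)
    noise = torch.randn_like(x0_mel)
    shape = (b,) + (1,) * (x0_mel.dim() - 1)
    a = alpha_bar[t].sqrt().view(shape)
    s = (1.0 - alpha_bar[t]).sqrt().view(shape)
    x_t = a * x0_mel + s * noise                   # noisy sample at step t
    x0_hat = model(x_t, t, cond)                   # network predicts the clean target
    return torch.mean(torch.abs(x0_hat - x0_mel))  # L1 as an illustrative objective
```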
Prior works on improving speech quality with visual input typically study each type of auditory distortion separately (e.g., separation, inpainting, video-to-speech) and present tailored algorithms. This paper proposes to unify these subjects and study Generalized Speech Enhancement, where the goal is not to reconstruct the exact reference clean signal, but to focus on improving certain aspects of speech. In particular, this paper concerns intelligibility, quality, and video synchronization. We cast the problem as audio-visual speech resynthesis, which is composed of two steps: pseudo audio-visual speech recognition (P-AVSR) and pseudo text-to-speech synthesis (P-TTS). P-AVSR and P-TTS are connected by discrete units derived from a self-supervised speech model. Moreover, we utilize a self-supervised audio-visual speech model to initialize P-AVSR. The proposed model is coined ReVISE. ReVISE is the first high-quality model for in-the-wild video-to-speech synthesis and achieves superior performance on all LRS3 audio-visual enhancement tasks with a single model. To demonstrate its applicability in the real world, ReVISE is also evaluated on EasyCom, an audio-visual benchmark collected under challenging acoustic conditions with only 1.6 hours of training data. Similarly, ReVISE greatly suppresses noise and improves quality. Project page: https://wnhsu.github.io/ReVISE.