We propose Parallel WaveGAN, a distillation-free, fast, and small-footprint waveform generation method using a generative adversarial network. In the proposed method, a non-autoregressive WaveNet is trained by jointly optimizing multi-resolution spectrogram and adversarial loss functions, which can effectively capture the time-frequency distribution of realistic speech waveforms. As our method does not require the density distillation used in the conventional teacher-student framework, the entire model can be easily trained. Furthermore, our model is able to generate high-fidelity speech even with its compact architecture. In particular, the proposed Parallel WaveGAN has only 1.44 M parameters and can generate 24 kHz speech waveforms 28.68 times faster than real-time in a single-GPU environment. Perceptual listening test results verify that our proposed method achieves a 4.16 mean opinion score within a Transformer-based text-to-speech framework, which is comparable to the best distillation-based Parallel WaveNet system.
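As a rough illustration of the multi-resolution spectrogram loss described above, the following PyTorch sketch combines a spectral-convergence term and a log-magnitude term over several STFT settings; the FFT sizes, hop lengths, and window lengths are illustrative choices rather than the paper's exact configuration.

```python
import torch

def stft_magnitude(x, fft_size, hop, win_length):
    """|STFT| of a batch of waveforms x with shape (B, T)."""
    window = torch.hann_window(win_length, device=x.device)
    spec = torch.stft(x, fft_size, hop, win_length, window=window, return_complex=True)
    return spec.abs().clamp(min=1e-7)

def multi_resolution_stft_loss(pred, target,
                               resolutions=((1024, 256, 1024),
                                            (2048, 512, 2048),
                                            (512, 128, 512))):
    """Average spectral-convergence + log-magnitude loss over several STFT settings."""
    loss = 0.0
    for fft_size, hop, win in resolutions:
        p = stft_magnitude(pred, fft_size, hop, win)
        t = stft_magnitude(target, fft_size, hop, win)
        sc = torch.norm(t - p, p="fro") / torch.norm(t, p="fro")       # spectral convergence
        mag = torch.nn.functional.l1_loss(torch.log(p), torch.log(t))  # log STFT magnitude
        loss = loss + sc + mag
    return loss / len(resolutions)
```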
The development of neural vocoders (NVs) has enabled high-quality and fast waveform generation. However, conventional NVs target a single sampling rate and require re-training when applied to a different one. The suitable sampling rate differs from application to application because of the trade-off between speech quality and generation speed. In this study, we propose a method for handling multiple sampling rates in a single NV, called MSR-NV. By generating waveforms starting from a low sampling rate, MSR-NV can efficiently learn the characteristics of each frequency band and synthesize high-quality speech at multiple sampling rates. It can be regarded as an extension of previously proposed NVs, and in this study we extend the structure of Parallel WaveGAN (PWG). Experimental evaluation results demonstrate that the proposed method achieves significantly higher subjective quality than the original PWG trained separately at 16, 24, and 48 kHz, without increasing the inference time. We also show that MSR-NV can leverage speech at lower sampling rates to further improve the quality of the synthesized speech.
Several recent works on speech synthesis have employed generative adversarial networks (GANs) to produce raw waveforms. Although such methods improve the sampling efficiency and memory usage, their sample quality has not yet reached that of autoregressive and flow-based generative models. In this work, we propose HiFi-GAN, which achieves both efficient and high-fidelity speech synthesis. As speech audio consists of sinusoidal signals with various periods, we demonstrate that modeling the periodic patterns of audio is crucial for enhancing sample quality. A subjective human evaluation (mean opinion score, MOS) on a single-speaker dataset indicates that our proposed method demonstrates similarity to human quality while generating 22.05 kHz high-fidelity audio 167.9 times faster than real-time on a single V100 GPU. We further show the generality of HiFi-GAN to the mel-spectrogram inversion of unseen speakers and to end-to-end speech synthesis. Finally, a small-footprint version of HiFi-GAN generates samples 13.4 times faster than real-time on CPU with quality comparable to an autoregressive counterpart.
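The periodic modeling idea can be made concrete with a hedged PyTorch sketch of one period-p sub-discriminator: the 1-D waveform is folded into a 2-D map of shape (T/p, p) so that samples spaced p apart line up along one axis, and 2-D convolutions with kernel width 1 compare them. The channel sizes and layer count below are illustrative, not the paper's configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PeriodSubDiscriminator(nn.Module):
    """Sketch of a period-p sub-discriminator: fold the waveform into (T/p, p)
    and score it with 2-D convolutions of kernel width 1."""
    def __init__(self, period):
        super().__init__()
        self.period = period
        self.convs = nn.ModuleList([
            nn.Conv2d(1, 32, (5, 1), stride=(3, 1), padding=(2, 0)),
            nn.Conv2d(32, 64, (5, 1), stride=(3, 1), padding=(2, 0)),
        ])
        self.out = nn.Conv2d(64, 1, (3, 1), padding=(1, 0))

    def forward(self, x):                 # x: (B, 1, T)
        b, c, t = x.shape
        if t % self.period:               # right-pad so T is a multiple of the period
            pad = self.period - t % self.period
            x = F.pad(x, (0, pad), mode="reflect")
            t = x.shape[-1]
        x = x.view(b, c, t // self.period, self.period)
        for conv in self.convs:
            x = F.leaky_relu(conv(x), 0.1)
        return self.out(x)                # per-patch real/fake scores

# The multi-period discriminator in the paper stacks sub-discriminators for several periods.
mpd = nn.ModuleList([PeriodSubDiscriminator(p) for p in (2, 3, 5, 7, 11)])
```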
This paper introduces a unified source-filter network with a harmonic-plus-noise source excitation generation mechanism. In previous work, we proposed unified source-filter GAN (uSFGAN) for developing a high-fidelity neural vocoder with flexible voice controllability using a unified source-filter neural network architecture. However, the capability of uSFGAN to model the aperiodic source excitation signal is insufficient, and there is still a gap in sound quality between natural and generated speech. To improve the source excitation modeling and the generated sound quality, a new source excitation generation network is proposed that separately generates the periodic and aperiodic components. The advanced adversarial training procedure of HiFi-GAN is also adopted in place of that of Parallel WaveGAN used in the original uSFGAN. Both objective and subjective evaluation results show that the modified uSFGAN significantly improves the sound quality over the basic uSFGAN while maintaining voice controllability.
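A toy NumPy sketch of the harmonic-plus-noise separation follows: a sine component that tracks the frame-level F0 contour plus Gaussian noise for the aperiodic part. In the paper the excitation generation network is learned; this snippet, with made-up hop size and noise scale, only illustrates the decomposition into periodic and aperiodic components.

```python
import numpy as np

def make_excitation(f0, sr=24000, hop=120, noise_scale=0.03):
    """Toy harmonic-plus-noise excitation: a sine wave following the frame-level
    F0 contour plus Gaussian noise for unvoiced/aperiodic parts."""
    f0_up = np.repeat(f0, hop)                    # frame-level F0 -> sample-level F0
    phase = 2 * np.pi * np.cumsum(f0_up / sr)     # integrate instantaneous frequency
    periodic = np.where(f0_up > 0, np.sin(phase), 0.0)
    aperiodic = noise_scale * np.random.randn(len(f0_up))
    return periodic + aperiodic

# Example: 100 voiced frames at 200 Hz followed by 50 unvoiced frames.
f0 = np.concatenate([np.full(100, 200.0), np.zeros(50)])
excitation = make_excitation(f0)
```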
Recently, GAN-based neural vocoders such as Parallel WaveGAN, MelGAN, HiFi-GAN, and UnivNet have become popular thanks to their lightweight and parallel structures, which allow real-time synthesis of high-fidelity waveforms even on a CPU. HiFi-GAN and UnivNet are two SOTA vocoders. Despite their high quality, there is still room for improvement. In this paper, motivated by the Vision Outlooker structure from computer vision, we adopt a similar idea and propose an effective and lightweight neural vocoder called WOLONet. In this network, we develop a novel lightweight block that uses location-variable, channel-independent, and depthwise dynamic convolution kernels. To demonstrate the effectiveness and generalizability of our method, we conduct an ablation study to verify our novel design and make subjective and objective comparisons with typical GAN-based vocoders. The results show that WOLONet achieves the best generation quality while requiring fewer parameters than the two SOTA neural vocoders, HiFi-GAN and UnivNet.
Non-autoregressive text-to-speech (TTS) models such as FastSpeech can synthesize speech significantly faster than previous autoregressive models with comparable quality. The training of the FastSpeech model relies on an autoregressive teacher model for duration prediction (to provide more information as input) and knowledge distillation (to simplify the data distribution in the output), which can ease the one-to-many mapping problem in TTS (i.e., multiple speech variations correspond to the same text). However, FastSpeech has several disadvantages: 1) the teacher-student distillation pipeline is complicated and time-consuming, and 2) the durations extracted from the teacher model are not accurate enough, and the target mel-spectrograms distilled from the teacher model suffer from information loss due to data simplification; both issues limit the voice quality. In this paper, we propose FastSpeech 2, which addresses the issues in FastSpeech and better solves the one-to-many mapping problem in TTS by 1) directly training the model with ground-truth targets instead of the simplified output from a teacher, and 2) introducing more variation information of speech (e.g., pitch, energy, and more accurate duration) as conditional inputs. Specifically, we extract duration, pitch, and energy from the speech waveform and directly take them as conditional inputs in training, while using predicted values in inference. We further design FastSpeech 2s, which is the first attempt to directly generate speech waveforms from text, enjoying the benefit of fully end-to-end inference. Experimental results show that 1) FastSpeech 2 achieves a 3x training speed-up over FastSpeech, and FastSpeech 2s enjoys an even faster inference speed; and 2) FastSpeech 2 and 2s outperform FastSpeech in voice quality, and FastSpeech 2 can even surpass autoregressive models. Audio samples are available at https://speechresearch.github.io/fastspeech2/.
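A condensed PyTorch sketch of the variance-conditioning idea follows: duration, pitch, and energy predictors sit on top of the encoder states, ground-truth values are used during training, and the predictions replace them at inference before a length regulator expands the phoneme states to frame level. The predictor depth, bin boundaries, and layer sizes are illustrative simplifications, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class VarianceAdaptor(nn.Module):
    """Condensed sketch of variance conditioning: predict duration, pitch, and
    energy; add quantized pitch/energy embeddings; expand states by duration."""
    def __init__(self, dim=256, n_bins=256):
        super().__init__()
        self.duration = nn.Linear(dim, 1)
        self.pitch = nn.Linear(dim, 1)
        self.energy = nn.Linear(dim, 1)
        self.pitch_emb = nn.Embedding(n_bins, dim)
        self.energy_emb = nn.Embedding(n_bins, dim)
        self.register_buffer("bins", torch.linspace(-3.0, 3.0, n_bins - 1))

    def forward(self, h, gt_dur=None, gt_pitch=None, gt_energy=None):
        # h: (B, L, dim) phoneme-level encoder states; gt_dur: long tensor of frame counts.
        dur = gt_dur if gt_dur is not None else torch.clamp(
            torch.round(torch.exp(self.duration(h).squeeze(-1)) - 1), min=0).long()
        pitch = gt_pitch if gt_pitch is not None else self.pitch(h).squeeze(-1)
        energy = gt_energy if gt_energy is not None else self.energy(h).squeeze(-1)
        h = h + self.pitch_emb(torch.bucketize(pitch, self.bins))
        h = h + self.energy_emb(torch.bucketize(energy, self.bins))
        # Length regulator: repeat each phoneme state according to its duration.
        expanded = [torch.repeat_interleave(h[b], dur[b], dim=0) for b in range(h.size(0))]
        return expanded, dur
```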
GAN vocoders are currently one of the state-of-the-art methods for building high-quality neural waveform generative models. However, most of their architectures require tens of GFLOPS (billions of floating-point operations per second) to generate speech waveforms in a sample-wise manner. This makes GAN vocoders still challenging to run on normal CPUs without accelerators or parallel computers. In this work, we propose a new architecture for GAN vocoders that mainly depends on recurrent and fully-connected networks to directly generate the time-domain signal in a frame-wise manner. This results in a considerable reduction of the computational cost and enables very fast generation on both GPUs and low-complexity CPUs. Experimental results show that our Framewise WaveGAN vocoder achieves significantly higher quality than auto-regressive maximum-likelihood vocoders such as LPCNet, at a very low complexity of 1.2 GFLOPS. This makes GAN vocoders more practical on edge and low-power devices.
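A minimal sketch of frame-wise generation, assuming PyTorch: a recurrent network is stepped once per acoustic frame and a fully connected head emits an entire frame of samples, rather than one network pass per sample. The sizes below are illustrative, and the actual model is considerably richer.

```python
import torch
import torch.nn as nn

class FramewiseGenerator(nn.Module):
    """Toy frame-wise generator: one GRU step per acoustic frame, one linear
    projection to a whole frame of waveform samples."""
    def __init__(self, feat_dim=80, hidden=256, frame_size=120):
        super().__init__()
        self.rnn = nn.GRU(feat_dim, hidden, batch_first=True)
        self.to_frame = nn.Linear(hidden, frame_size)

    def forward(self, feats):                   # feats: (B, n_frames, feat_dim)
        h, _ = self.rnn(feats)                  # one recurrent step per frame
        frames = torch.tanh(self.to_frame(h))   # (B, n_frames, frame_size)
        return frames.flatten(1)                # concatenated waveform (B, n_frames*frame_size)

wav = FramewiseGenerator()(torch.randn(1, 50, 80))   # 50 frames -> 6000 samples
```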
In neural text-to-speech (TTS), two-stage systems, i.e., cascades of separately learned models, have shown synthesis quality close to that of human speech. For example, FastSpeech2 converts input text into a mel-spectrogram, and then HiFi-GAN generates a raw waveform from the mel-spectrogram; the two are called the acoustic feature generator and the neural vocoder, respectively. However, their training pipeline is somewhat cumbersome, as it requires fine-tuning and accurate speech-text alignment to achieve the best performance. In this work, we present an end-to-end text-to-speech (E2E-TTS) model that has a simplified training pipeline and outperforms the cascade of separately learned models. Specifically, our proposed model is a jointly trained FastSpeech2 and HiFi-GAN with an alignment module. Since there is no acoustic feature mismatch between training and inference, no fine-tuning is required. Furthermore, we remove the dependency on an external speech-text alignment tool by adopting an alignment learning objective in our joint training framework. Experiments on the LJSpeech corpus show that the proposed model outperforms publicly available state-of-the-art implementations from ESPnet2-TTS in subjective evaluation (MOS) and in some objective evaluations.
Several solutions for lightweight TTS have shown promising results. Still, they either rely on hand-crafted designs that reach a non-optimal size or use neural architecture search but often suffer from high training costs. We present Nix-TTS, a lightweight TTS achieved via knowledge distillation from a high-quality yet large-sized, non-autoregressive, and end-to-end (vocoder-free) TTS teacher model. Specifically, we offer module-wise distillation, enabling flexible and independent distillation of the encoder and decoder modules. The resulting Nix-TTS inherits the advantageous non-autoregressive and end-to-end properties of the teacher, yet is significantly smaller, with only 5.23M parameters, an up to 89.34% reduction relative to the teacher model; it also achieves over 3.04x and 8.36x inference speedups on an Intel i7 CPU and a Raspberry Pi 3B, respectively, while retaining fair voice naturalness and intelligibility compared to the teacher model. We provide pretrained models and audio samples of Nix-TTS.
Neural vocoders, which convert a spectral representation of an audio signal into a waveform, are a commonly used component in speech synthesis pipelines. They focus on synthesizing waveforms from low-dimensional representations such as mel-spectrograms. In recent years, different approaches to building such vocoders have been introduced. However, it has become more challenging to assess these new vocoders and compare their performance to previous ones. To address this problem, we present VocBench, a framework that benchmarks the performance of state-of-the-art neural vocoders. VocBench uses a systematic study to evaluate different neural vocoders in a shared environment that enables a fair comparison among them. In our experiments, we use the same setup for the datasets, training pipelines, and evaluation metrics for all neural vocoders. We perform subjective and objective evaluations to compare the performance of each vocoder along different axes. Our results show that the framework reveals the competitive efficacy of each vocoder and the quality of its synthesized samples. The VocBench framework is available at https://github.com/facebookResearch/Vocoder-Benchmark.
Despite recent progress in generative adversarial network (GAN)-based vocoders, in which the model generates a raw waveform from a mel spectrogram, it is still challenging to synthesize high-fidelity audio for numerous speakers across various recording environments. In this work, we present BigVGAN, a universal vocoder that generalizes well to various unseen conditions in zero-shot settings. We introduce a periodic nonlinearity and an anti-aliased representation into the generator, which brings the inductive bias desired for waveform synthesis and significantly improves audio quality. Based on our improved generator and state-of-the-art discriminators, we train our GAN vocoder at the largest scale to date, with up to 112M parameters, which is unprecedented in the literature. In particular, we identify and address training instabilities specific to this scale while maintaining high-fidelity output without over-regularization. Our BigVGAN achieves state-of-the-art zero-shot performance in various out-of-distribution scenarios, including new speakers, novel languages, singing voices, music, and instrumental audio in unseen (even noisy) recording environments. We will release our code and models at: https://github.com/nvidia/bigvgan
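One periodic nonlinearity of the kind described above is the "snake" activation f(x) = x + sin^2(ax)/a with a learnable per-channel frequency a, which BigVGAN adopts; a minimal PyTorch sketch follows. The anti-aliased treatment (upsample, activate, low-pass, downsample) that BigVGAN wraps around the activation is omitted here, and the initialization and clamping are illustrative.

```python
import torch
import torch.nn as nn

class Snake(nn.Module):
    """Periodic 'snake' nonlinearity f(x) = x + sin^2(a*x)/a with a learnable,
    per-channel frequency a."""
    def __init__(self, channels, alpha_init=1.0):
        super().__init__()
        self.alpha = nn.Parameter(alpha_init * torch.ones(1, channels, 1))

    def forward(self, x):                  # x: (B, C, T)
        a = self.alpha.clamp(min=1e-4)     # keep the frequency strictly positive
        return x + torch.sin(a * x) ** 2 / a

act = Snake(channels=64)
y = act(torch.randn(2, 64, 1000))
```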
Denoising Diffusion Probabilistic Models (DDPMs) are emerging in text-to-speech (TTS) synthesis because of their strong capability of generating high-fidelity samples. However, their iterative refinement process in a high-dimensional data space results in slow inference speed, which restricts their application in real-time systems. Previous works have explored speeding up inference by minimizing the number of inference steps, but at the cost of sample quality. In this work, to improve the inference speed of DDPM-based TTS models while achieving high sample quality, we propose ResGrad, a lightweight diffusion model which learns to refine the output spectrogram of an existing TTS model (e.g., FastSpeech 2) by predicting the residual between the model output and the corresponding ground-truth speech. ResGrad has several advantages: 1) compared with other acceleration methods for DDPMs, which need to synthesize speech from scratch, ResGrad reduces the complexity of the task by changing the generation target from the ground-truth mel-spectrogram to the residual, resulting in a more lightweight model and thus a smaller real-time factor; 2) ResGrad is employed in the inference process of the existing TTS model in a plug-and-play way, without re-training this model. We verify ResGrad on the single-speaker dataset LJSpeech and on two more challenging datasets with multiple speakers (LibriTTS) and a high sampling rate (VCTK). Experimental results show that, in comparison with other speed-up methods for DDPMs: 1) ResGrad achieves better sample quality at the same inference speed measured by real-time factor; and 2) with similar speech quality, ResGrad synthesizes speech more than 10 times faster than baseline methods. Audio samples are available at https://resgrad1.github.io/.
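A schematic sketch of the residual formulation, assuming PyTorch tensors; `diffusion_sampler` is a placeholder for the trained lightweight diffusion model, not an API from the paper's code.

```python
import torch

def resgrad_training_pair(gt_mel, tts_mel):
    """Target construction: the frozen TTS model's coarse mel-spectrogram is kept
    as conditioning, and the diffusion model learns the residual to the ground truth."""
    residual = gt_mel - tts_mel      # what the lightweight diffusion model must generate
    condition = tts_mel              # coarse prediction used as conditioning input
    return residual, condition

def resgrad_inference(tts_mel, diffusion_sampler):
    """At inference, the sampled residual is added back onto the coarse TTS output."""
    residual = diffusion_sampler(condition=tts_mel)   # placeholder call
    return tts_mel + residual
```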
Denoising diffusion probabilistic models (DDPMs) have recently achieved leading performance in many generative tasks. However, the cost of the inherited iterative sampling process hinders their deployment for text-to-speech. Through a preliminary study on diffusion model parameterization, we find that previous gradient-based TTS models require hundreds or thousands of iterations to guarantee high sample quality, which poses a challenge for accelerating sampling. In this work, we propose ProDiff, a progressive fast diffusion model for high-quality text-to-speech. Unlike previous work that estimates the gradient of the data density, ProDiff parameterizes the denoising model by directly predicting clean data, avoiding significant quality degradation when accelerating sampling. To tackle the model convergence challenge with reduced diffusion iterations, ProDiff reduces the data variance in the target site via knowledge distillation. Specifically, the denoising model uses the generated mel-spectrogram from an N-step DDIM teacher as the training target and distills the behavior into a new model with N/2 steps. Therefore, it allows the TTS model to make sharp predictions and further reduces the sampling time by orders of magnitude. Our evaluation demonstrates that ProDiff needs only two iterations to synthesize high-fidelity mel-spectrograms, while maintaining sample quality and diversity competitive with state-of-the-art models that use hundreds of steps. ProDiff samples 24 times faster than real-time on a single NVIDIA 2080Ti GPU, making diffusion models practically applicable to text-to-speech synthesis deployment for the first time. Our extensive ablation studies demonstrate that each design in ProDiff is effective, and we further show that ProDiff can be easily extended to the multi-speaker setting. Audio samples are available at https://prodiff.github.io/
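A very condensed sketch of the distillation step described above, assuming PyTorch: the N-step DDIM teacher takes two deterministic steps, and the N/2-step student is trained to reach the same point in one step. `ddim_step(model, x, t, cond)` is a hypothetical helper standing in for a DDIM update, not code from the paper.

```python
import torch
import torch.nn.functional as F

def progressive_distillation_loss(student, teacher, x_t, t, cond, ddim_step):
    """Two teacher DDIM steps produce the target; the student matches it in one step."""
    with torch.no_grad():
        x_mid = ddim_step(teacher, x_t, t, cond)            # teacher step t -> t-1
        x_target = ddim_step(teacher, x_mid, t - 1, cond)   # teacher step t-1 -> t-2
    x_student = ddim_step(student, x_t, t, cond)            # student covers both in one step
    return F.l1_loss(x_student, x_target)
```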
Previous works (Donahue et al., 2018a; Engel et al., 2019a) have found that generating coherent raw audio waveforms with GANs is challenging. In this paper, we show that it is possible to train GANs reliably to generate high-quality coherent waveforms by introducing a set of architectural changes and simple training techniques. A subjective evaluation metric (Mean Opinion Score, or MOS) shows the effectiveness of the proposed approach for high-quality mel-spectrogram inversion. To establish the generality of the proposed techniques, we show qualitative results of our model in speech synthesis, music domain translation, and unconditional music synthesis. We evaluate the various components of the model through ablation studies and suggest a set of guidelines for designing general-purpose discriminators and generators for conditional sequence synthesis tasks. Our model is non-autoregressive and fully convolutional, with significantly fewer parameters than competing models, and generalizes to unseen speakers for mel-spectrogram inversion. Our PyTorch implementation runs more than 100x faster than real-time on a GTX 1080Ti GPU and more than 2x faster than real-time on a CPU, without any hardware-specific optimization tricks.
The rise of deep learning algorithms has led many researchers to move away from classical signal processing methods for sound generation. Deep learning models have achieved expressive speech synthesis, realistic sound textures, and musical notes from virtual instruments. However, the most suitable deep learning architecture is still under investigation. The choice of architecture is tightly coupled to the audio representation. The raw waveform of a sound can be too dense and rich for deep learning models to process efficiently, and its complexity increases training time and computational cost. Moreover, it does not represent sound in the way it is perceived. Therefore, in many cases, raw audio is transformed into a compressed and more meaningful form using upsampling, feature extraction, or even a higher-level illustration of the waveform. In addition, the chosen representation, additional conditioning representations, different model architectures, and the many metrics used to evaluate the reconstructed sound are examined. This paper provides an overview of audio representations applied to sound synthesis using deep learning. Furthermore, it presents the most significant methods for developing and evaluating sound synthesis architectures with deep learning models, always in relation to the audio representation.
Neural vocoders based on generative adversarial networks (GANs) are widely used because of their fast inference speed and lightweight networks while producing high-quality speech waveforms. Since perceptually important speech components are primarily concentrated in the low-frequency bands, most GAN-based neural vocoders perform multi-scale analysis that evaluates downsampled speech waveforms. This multi-scale analysis helps the generator improve speech intelligibility. However, in preliminary experiments, we observed that multi-scale analysis that emphasizes the low-frequency bands causes unintended artifacts, e.g., aliasing and imaging artifacts, which degrade the quality of the synthesized speech waveform. Therefore, in this paper, we investigate the relationship between these artifacts and GAN-based neural vocoders and propose a GAN-based neural vocoder, called Avocodo, that allows the synthesis of high-fidelity speech with reduced artifacts. We introduce two kinds of discriminators to evaluate the waveform from various perspectives: a collaborative multi-band discriminator and a sub-band discriminator. We also utilize a pseudo quadrature mirror filter bank to obtain downsampled multi-band waveforms while avoiding aliasing. Experimental results show that Avocodo outperforms conventional GAN-based neural vocoders in both speech and singing voice synthesis tasks and can synthesize artifact-free speech. In particular, Avocodo is even capable of reproducing high-quality waveforms of unseen speakers.
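The aliasing problem referred to above can be reproduced with a toy NumPy/SciPy example (unrelated to the paper's code): naive decimation folds high-frequency content back into the low band, whereas low-pass filtering before decimation, which a pseudo-QMF analysis bank also performs in a multi-band fashion, suppresses it. The tone frequency and sampling rate below are arbitrary illustration values.

```python
import numpy as np
from scipy import signal

sr = 24000
t = np.arange(sr) / sr
x = np.sin(2 * np.pi * 10000 * t)     # 10 kHz tone, above the 6 kHz Nyquist of the target rate

naive = x[::2]                         # plain decimation by 2: the tone aliases down to 2 kHz
filtered = signal.decimate(x, 2)       # anti-alias (low-pass) filtering before decimation

for name, y in [("naive slicing", naive), ("anti-aliased", filtered)]:
    f, p = signal.periodogram(y, fs=sr // 2)
    alias_bin = np.argmin(np.abs(f - 2000))        # where the 10 kHz tone folds to
    print(f"{name}: power near 2 kHz = {p[alias_bin]:.2e}")
```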
Deep learning based text-to-speech (TTS) systems have been evolving rapidly with advances in model architectures, training methodologies, and generalization across speakers and languages. However, these advances have not been thoroughly investigated for Indian language speech synthesis. Such investigation is computationally expensive given the number and diversity of Indian languages, relatively lower resource availability, and the diverse set of advances in neural TTS that remain untested. In this paper, we evaluate the choice of acoustic models, vocoders, supplementary loss functions, training schedules, and speaker and language diversity for Dravidian and Indo-Aryan languages. Based on this, we identify monolingual models with FastPitch and HiFi-GAN V1, trained jointly on male and female speakers, as performing the best. With this setup, we train and evaluate TTS models for 13 languages and find that our models significantly improve upon existing models in all languages, as measured by mean opinion scores. We open-source all models on the Bhashini platform.
Data augmentation via voice conversion (VC) has been successfully applied to low-resource expressive text-to-speech (TTS) when only neutral data are available for the target speaker. Although the quality of VC is crucial for this approach, learning a stable VC model is challenging because the amount of data is limited in low-resource scenarios and highly expressive speech has large acoustic variation. To address this issue, we propose a novel data augmentation method that combines pitch-shifting and VC techniques. Because pitch-shift data augmentation can cover a variety of pitch dynamics, it greatly stabilizes the training of both the VC and TTS models, even when only 1,000 utterances of the target speaker's neutral data are available. Subjective test results show that a FastSpeech 2-based emotional TTS system with the proposed method improves naturalness and emotional similarity compared with conventional methods.
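A minimal sketch of the pitch-shift augmentation step, assuming librosa is available; the shift amounts and the synthetic test tone are illustrative, and the VC stage that follows in the paper is not shown.

```python
import librosa

def pitch_shift_augment(wav, sr, semitone_steps=(-4, -2, 2, 4)):
    """Create several pitch-shifted copies of a neutral utterance so that the
    downstream VC/TTS models see a wider range of pitch dynamics."""
    return [librosa.effects.pitch_shift(wav, sr=sr, n_steps=s) for s in semitone_steps]

# Example with a synthetic tone; any mono waveform loaded at the same rate works.
sr = 24000
wav = librosa.tone(220, sr=sr, duration=1.0)
augmented = pitch_shift_augment(wav, sr)
```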
This paper presents an end-to-end text-to-speech system with low CPU latency, suitable for real-time applications. The system consists of an autoregressive attention-based sequence-to-sequence acoustic model and the LPCNet vocoder for waveform generation. An acoustic model architecture that adopts modules from both the Tacotron 1 and 2 models is proposed, while stability is ensured by using a recently proposed location-based attention mechanism that is applicable to arbitrary sentence lengths. During inference, the decoder is unrolled and acoustic feature generation is performed in a streaming manner, allowing a nearly constant latency that is independent of sentence length. Experimental results show that the acoustic model can produce feature sequences about 31 times faster than real-time on a computer CPU and 6.5 times faster on a mobile CPU, enabling it to meet the conditions required for real-time applications on both devices. Listening tests verify that the full end-to-end system produces speech of almost natural quality.
In this work, we propose DiffWave, a versatile diffusion probabilistic model for conditional and unconditional waveform generation. The model is non-autoregressive and converts the white-noise signal into a structured waveform through a Markov chain with a constant number of steps at synthesis. It is efficiently trained by optimizing a variant of the variational bound on the data likelihood. DiffWave produces high-fidelity audio in different waveform generation tasks, including neural vocoding conditioned on mel spectrograms, class-conditional generation, and unconditional generation. We demonstrate that DiffWave matches a strong WaveNet vocoder in terms of speech quality (MOS: 4.44 versus 4.43), while synthesizing orders of magnitude faster. In particular, it significantly outperforms autoregressive and GAN-based waveform models in the challenging unconditional generation task in terms of audio quality and sample diversity, according to various automatic and human evaluations. Audio samples are available at https://diffwave-demo.github.io/
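A generic sketch of the kind of diffusion training step DiffWave relies on, assuming PyTorch: sample a timestep, corrupt the clean waveform with Gaussian noise according to the schedule, and train the network to predict that noise conditioned on the mel spectrogram. The noise schedule and the `model(noisy, t, mel)` signature are assumptions made for illustration, not the paper's exact settings.

```python
import torch

def diffusion_training_step(model, x0, mel, betas):
    """One denoising-diffusion training step: corrupt clean waveforms x0 (B, T)
    at a random timestep and regress the added noise, conditioned on mel."""
    alphas_cum = torch.cumprod(1.0 - betas, dim=0)
    t = torch.randint(0, len(betas), (x0.size(0),), device=x0.device)
    a = alphas_cum[t].sqrt().view(-1, 1)
    s = (1.0 - alphas_cum[t]).sqrt().view(-1, 1)
    noise = torch.randn_like(x0)
    noisy = a * x0 + s * noise                 # q(x_t | x_0)
    pred = model(noisy, t, mel)                # network predicts the added noise
    return torch.nn.functional.mse_loss(pred, noise)

betas = torch.linspace(1e-4, 0.05, 50)          # illustrative noise schedule
```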