Denoising diffusion probabilistic models (DDPMs) have recently achieved leading performance in many generative tasks. However, the cost of their inherited iterative sampling process hinders their deployment in text-to-speech systems. Through a preliminary study on diffusion model parameterization, we find that previous gradient-based TTS models require hundreds or thousands of iterations to guarantee high sample quality, which poses a challenge for accelerating sampling. In this work, we propose ProDiff, a progressive fast diffusion model for high-quality text-to-speech. Unlike previous work that estimates the gradient of the data density, ProDiff parameterizes the denoising model by directly predicting the clean data, avoiding the distinct quality degradation that otherwise accompanies accelerated sampling. To tackle the model-convergence challenge that comes with fewer diffusion iterations, ProDiff reduces the data variance in the target space via knowledge distillation. Specifically, the denoising model uses the mel-spectrogram generated by an N-step DDIM teacher as its training target and distills this behavior into a new model with N/2 steps. As such, ProDiff can make sharp predictions and further reduces the sampling time by orders of magnitude. Our evaluation demonstrates that ProDiff needs only 2 iterations to synthesize high-fidelity mel-spectrograms, while maintaining sample quality and diversity competitive with state-of-the-art models that use hundreds of steps. ProDiff achieves a sampling speed 24x faster than real-time on a single NVIDIA 2080Ti GPU, making diffusion models practically applicable to text-to-speech synthesis deployment for the first time. Our extensive ablation studies demonstrate that each design choice in ProDiff is effective, and we further show that ProDiff can be easily extended to the multi-speaker setting. Audio samples are available at https://prodiff.github.io/.
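The clean-data (x0) parameterization that ProDiff relies on can be illustrated with a toy scalar DDIM sampler. This is a minimal sketch, not ProDiff's implementation: the linear `alpha_bar` schedule, the 2-step trajectory, and the oracle denoiser (which simply returns the true clean value) are all illustrative assumptions.

```python
import math

T = 4  # total diffusion steps in this toy (an assumption)

def alpha_bar(t):
    """Toy noise schedule: alpha_bar decreases from 1 at t = 0."""
    return 1.0 - t / (T + 1)

def ddim_step(x_t, t, t_prev, x0_pred):
    """One DDIM step in the x0-prediction parameterization:
    recover the implied noise, then jump straight to the t_prev marginal."""
    ab_t, ab_prev = alpha_bar(t), alpha_bar(t_prev)
    eps = (x_t - math.sqrt(ab_t) * x0_pred) / math.sqrt(1.0 - ab_t)
    return math.sqrt(ab_prev) * x0_pred + math.sqrt(1.0 - ab_prev) * eps

# Oracle denoiser for illustration: always predicts the true clean value.
x0_true = 0.7
x = 0.1  # pretend this was drawn from the prior at t = T
for t, t_prev in [(4, 2), (2, 0)]:  # a 2-step trajectory, as in ProDiff
    x = ddim_step(x, t, t_prev, x0_pred=x0_true)
print(abs(x - x0_true) < 1e-9)  # True: an oracle x0 prediction recovers clean data
```

With a learned model, `x0_pred` would come from the denoising network; predicting the clean target directly, rather than a score or noise estimate, is what lets such a sampler stay stable with only a handful of steps.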
Denoising Diffusion Probabilistic Models (DDPMs) are emerging in text-to-speech (TTS) synthesis because of their strong capability of generating high-fidelity samples. However, their iterative refinement process in high-dimensional data space results in slow inference speed, which restricts their application in real-time systems. Previous works have explored speeding up inference by minimizing the number of inference steps, but at the cost of sample quality. In this work, to improve the inference speed of DDPM-based TTS models while achieving high sample quality, we propose ResGrad, a lightweight diffusion model which learns to refine the output spectrogram of an existing TTS model (e.g., FastSpeech 2) by predicting the residual between the model output and the corresponding ground-truth speech. ResGrad has several advantages: 1) Compared with other acceleration methods for DDPMs, which need to synthesize speech from scratch, ResGrad reduces the complexity of the task by changing the generation target from the ground-truth mel-spectrogram to the residual, resulting in a more lightweight model and thus a smaller real-time factor. 2) ResGrad is employed in the inference process of the existing TTS model in a plug-and-play way, without re-training this model. We verify ResGrad on the single-speaker dataset LJSpeech and on two more challenging datasets with multiple speakers (LibriTTS) and a high sampling rate (VCTK). Experimental results show that, in comparison with other speed-up methods for DDPMs: 1) ResGrad achieves better sample quality at the same inference speed measured by real-time factor; 2) at similar speech quality, ResGrad synthesizes speech more than 10 times faster than baseline methods. Audio samples are available at https://resgrad1.github.io/.
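The residual idea behind ResGrad can be sketched in a few lines. This is a toy illustration with made-up numbers, and the oracle residual stands in for what the diffusion model would actually sample:

```python
# Toy mel-spectrogram frames as flat lists (illustrative values only).
coarse = [0.5, 0.8, 0.3]          # e.g. an over-smoothed FastSpeech 2 output
ground_truth = [0.6, 1.0, 0.2]    # the corresponding target mel frames

# Training target for the lightweight diffusion model: the residual,
# not the full mel-spectrogram.
residual_target = [g - c for g, c in zip(ground_truth, coarse)]

# Inference: add the (here: oracle) residual back onto the frozen TTS output.
refined = [c + r for c, r in zip(coarse, residual_target)]
error = max(abs(a - b) for a, b in zip(refined, ground_truth))
print(error < 1e-12)  # True
```

Because the residual has a much smaller dynamic range than the full spectrogram, the refinement model can be small and needs few denoising steps, and the underlying TTS model is never retrained.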
A wide variety of deep generative models has been developed in the past decade. Yet, these models often struggle to simultaneously address three key requirements: high sample quality, mode coverage, and fast sampling. We call the challenge imposed by these requirements the generative learning trilemma, as existing models often trade some of them for others. In particular, denoising diffusion models have shown impressive sample quality and diversity, but their expensive sampling does not yet allow them to be applied in many real-world applications. In this paper, we argue that slow sampling in these models is fundamentally attributed to the Gaussian assumption in the denoising step, which is justified only for small step sizes. To enable denoising with large steps, and hence to reduce the total number of denoising steps, we propose to model the denoising distribution with a complex multimodal distribution. We introduce denoising diffusion generative adversarial networks (denoising diffusion GANs), which model each denoising step using a multimodal conditional GAN. Through extensive evaluations, we show that denoising diffusion GANs obtain sample quality and diversity competitive with original diffusion models while being 2000x faster on the CIFAR-10 dataset. Compared to traditional GANs, our model exhibits better mode coverage and sample diversity. To the best of our knowledge, denoising diffusion GAN is the first model that reduces sampling cost in diffusion models to an extent that allows them to be applied to real-world applications inexpensively. Project page and code: https://nvlabs.github.io/denoising-diffusion-gan
In this work, we propose DiffWave, a versatile diffusion probabilistic model for conditional and unconditional waveform generation. The model is non-autoregressive, and converts the white noise signal into structured waveform through a Markov chain with a constant number of steps at synthesis. It is efficiently trained by optimizing a variant of variational bound on the data likelihood. DiffWave produces high-fidelity audio in different waveform generation tasks, including neural vocoding conditioned on mel spectrogram, class-conditional generation, and unconditional generation. We demonstrate that DiffWave matches a strong WaveNet vocoder in terms of speech quality (MOS: 4.44 versus 4.43), while synthesizing orders of magnitude faster. In particular, it significantly outperforms autoregressive and GAN-based waveform models in the challenging unconditional generation task in terms of audio quality and sample diversity from various automatic and human evaluations. Audio samples are available at https://diffwave-demo.github.io/
Diffusion models have recently shown great promise for generative modeling, outperforming GANs on perceptual quality and autoregressive models at density estimation. A remaining downside is their slow sampling time: generating high-quality samples takes many hundreds or thousands of model evaluations. Here we make two contributions to help eliminate this downside. First, we present new parameterizations of diffusion models that provide increased stability when using few sampling steps. Second, we present a method to distill a trained deterministic diffusion sampler, using many steps, into a new diffusion model that takes half as many sampling steps. We then keep progressively applying this distillation procedure to our model, halving the number of required sampling steps each time. On standard image generation benchmarks like CIFAR-10, ImageNet, and LSUN, we start with state-of-the-art samplers taking as many as 8192 steps, and are able to distill them down to models taking as few as 4 steps without losing much perceptual quality; for example, achieving an FID of 3.0 on CIFAR-10 in 4 steps. Finally, we show that the full progressive distillation procedure does not take more time than it takes to train the original model, thus representing an efficient solution for generative modeling using diffusion at both train and test time.
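The outer loop of the halving procedure described above is easy to sketch; the step counts below mirror the numbers quoted in the abstract (8192 down to 4), while the actual training of each student model is omitted:

```python
def progressive_rounds(start_steps=8192, end_steps=4):
    """List the (teacher_steps, student_steps) pair of each distillation
    round, where every round halves the sampler's step count."""
    rounds = []
    n = start_steps
    while n > end_steps:
        rounds.append((n, n // 2))
        n //= 2
    return rounds

schedule = progressive_rounds()
print(len(schedule))              # 11 halvings take 8192 steps down to 4
print(schedule[0], schedule[-1])  # (8192, 4096) (8, 4)
```

Each round trains the student to match, in one step, what its teacher produces in two, so total distillation cost stays comparable to training the original model once.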
Singing voice synthesis (SVS) systems are built to synthesize high-quality and expressive singing voice, in which the acoustic model generates the acoustic features (e.g., mel-spectrogram) given a music score. Previous singing acoustic models adopt a simple loss (e.g., L1 and L2) or a generative adversarial network (GAN) to reconstruct the acoustic features, but they suffer from over-smoothing and unstable training issues respectively, which hinder the naturalness of the synthesized singing. In this work, we propose DiffSinger, an acoustic model for SVS based on the diffusion probabilistic model. DiffSinger is a parameterized Markov chain that iteratively converts noise into a mel-spectrogram conditioned on the music score. By implicitly optimizing a variational bound, DiffSinger can be stably trained and generates realistic outputs. To further improve the voice quality and speed up inference, we introduce a shallow diffusion mechanism to make better use of the prior knowledge learned by the simple loss. Specifically, DiffSinger starts generation at a shallow step smaller than the total number of diffusion steps, determined by the intersection of the diffusion trajectories of the ground-truth mel-spectrogram and the one predicted by a simple mel-spectrogram decoder. Besides, we propose boundary-prediction methods to locate the intersection and determine the shallow step adaptively. Evaluations conducted on a Chinese singing dataset demonstrate that DiffSinger outperforms state-of-the-art SVS work. Extended experiments also prove the generalization of our method to the text-to-speech task (DiffSpeech). Audio samples are available at https://diffsinger.github.io.
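The boundary-prediction idea, finding the step at which the corrupted ground-truth mel and the corrupted decoder prediction become indistinguishable, can be sketched on scalars. The schedule, tolerance, and values below are illustrative assumptions, not DiffSinger's actual boundary predictor:

```python
import math

T = 100  # total diffusion steps in this toy

def alpha_bar(t):
    """Toy schedule: more noise as t grows."""
    return 1.0 - t / T

def shallow_step(m_gt, m_dec, tol=0.05):
    """Both corrupted means are scaled by sqrt(alpha_bar_t), so their gap
    shrinks as t grows; return the first step where the trajectories meet."""
    for t in range(1, T + 1):
        gap = math.sqrt(alpha_bar(t)) * abs(m_gt - m_dec)
        if gap < tol:
            return t
    return T

k = shallow_step(m_gt=1.0, m_dec=0.94)
print(k)  # 31: reverse diffusion runs 31 steps instead of 100
```

The closer the simple decoder's output is to the ground truth, the smaller the shallow step, and the more of the reverse chain can be skipped.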
Non-autoregressive text-to-speech (TTS) models such as FastSpeech can synthesize speech significantly faster than previous autoregressive models with comparable quality. The training of the FastSpeech model relies on an autoregressive teacher model for duration prediction (to provide more information as input) and knowledge distillation (to simplify the data distribution in the output), which can ease the one-to-many mapping problem (i.e., multiple speech variations corresponding to the same text) in TTS. However, FastSpeech has several disadvantages: 1) the teacher-student distillation pipeline is complicated and time-consuming, and 2) the durations extracted from the teacher model are not accurate enough, and the target mel-spectrograms distilled from the teacher model suffer from information loss due to data simplification; both limit the voice quality. In this paper, we propose FastSpeech 2, which addresses the issues in FastSpeech and better solves the one-to-many mapping problem in TTS by 1) directly training the model with the ground-truth target instead of the simplified output from a teacher, and 2) introducing more variation information of speech (e.g., pitch, energy, and more accurate duration) as conditional inputs. Specifically, we extract duration, pitch, and energy from the speech waveform and directly take them as conditional inputs in training, and use predicted values in inference. We further design FastSpeech 2s, which is the first attempt to directly generate speech waveforms from text, enjoying the benefit of fully end-to-end inference. Experimental results show that 1) FastSpeech 2 achieves a 3x training speed-up over FastSpeech, and FastSpeech 2s enjoys even faster inference; 2) FastSpeech 2 and 2s outperform FastSpeech in voice quality, and FastSpeech 2 can even surpass autoregressive models. Audio samples are available at https://speechresearch.github.io/fastspeech2/.
Diffusion models are a class of deep generative models that have shown impressive results on various tasks with a dense theoretical founding. Although diffusion models achieve impressive quality and diversity of sample synthesis compared to other state-of-the-art models, they still suffer from a costly sampling procedure and sub-optimal likelihood estimation. Recent studies have shown great enthusiasm for improving the performance of diffusion models. In this article, we present a first comprehensive review of the existing variants of diffusion models. Specifically, we provide the first taxonomy of diffusion models and categorize the variants into three types, namely sampling-acceleration enhancement, likelihood-maximization enhancement, and data-generalization enhancement. We also introduce in detail five other generative models (i.e., variational autoencoders, generative adversarial networks, normalizing flows, autoregressive models, and energy-based models), and clarify the connections between diffusion models and these generative models. Then we make a thorough investigation into the applications of diffusion models, including computer vision, natural language processing, waveform signal processing, multi-modal modeling, molecular graph generation, time series modeling, and adversarial purification. Furthermore, we propose new perspectives pertaining to the development of this class of generative models.
Neural vocoders, used for converting the spectral representation of an audio signal to a waveform, are a commonly used component in speech synthesis pipelines. They focus on synthesizing waveforms from low-dimensional representations, such as mel-spectrograms. In recent years, different approaches have been introduced to develop such vocoders. However, it has become more challenging to assess these new vocoders and compare their capabilities to previous ones. To address this problem, we present VocBench, a framework that benchmarks the performance of state-of-the-art neural vocoders. VocBench uses a systematic study to evaluate different neural vocoders in a shared environment that enables a fair comparison between them. In our experiments, we use the same setup for datasets, training pipeline, and evaluation metrics for all neural vocoders. We perform subjective and objective evaluations to compare the performance of each vocoder along different axes. Our results show that the framework is capable of showing the competitive efficacy and quality of the synthesized samples for each vocoder. The VocBench framework is available at https://github.com/facebookresearch/vocoder-benchmark.
Deep learning shows great potential in generation tasks. Generative models are classes of models that can generate observations randomly with respect to certain implied parameters. Recently, the diffusion model has become a rising class of generative models by virtue of its generation ability, and great achievements have already been reached. Beyond computer vision, speech generation, bioinformatics, and natural language processing, more applications remain to be explored in this field. However, the diffusion model has the natural drawback of a slow generation process, which has led to many works on enhancement. This survey summarizes the field of the diffusion model. We first state the main problem with two landmark works, DDPM and DSM. Then we present a diverse range of advanced techniques to speed up diffusion models: training schedules, training-free sampling, mixed modeling, and score-and-diffusion unification. Regarding existing models, we also provide benchmarks of FID score and NLL at specific NFE. Moreover, applications of diffusion models are introduced, including computer vision, sequence modeling, audio, and AI for science. Finally, the field is summarized together with its limitations and further directions.
We present high quality image synthesis results using diffusion probabilistic models, a class of latent variable models inspired by considerations from nonequilibrium thermodynamics. Our best results are obtained by training on a weighted variational bound designed according to a novel connection between diffusion probabilistic models and denoising score matching with Langevin dynamics, and our models naturally admit a progressive lossy decompression scheme that can be interpreted as a generalization of autoregressive decoding. On the unconditional CIFAR10 dataset, we obtain an Inception score of 9.46 and a state-of-the-art FID score of 3.17. On 256x256 LSUN, we obtain sample quality similar to ProgressiveGAN. Our implementation is available at https://github.com/hojonathanho/diffusion.
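The weighted variational bound mentioned above reduces, in its simplified form, to a noise-prediction regression: corrupt clean data with known noise and train the network to predict that noise. A minimal scalar sketch, with an illustrative (not the paper's) noise schedule and a trivial stand-in model:

```python
import math

def alpha_bar(t, T=1000):
    """Illustrative noise schedule, decreasing from ~1 toward 0."""
    return (1.0 - t / T) ** 2

def simple_loss(eps_model, x0, t, eps, T=1000):
    """L_simple: squared error between the injected noise and the model's
    prediction of it, given only the corrupted sample x_t and the step t."""
    ab = alpha_bar(t, T)
    x_t = math.sqrt(ab) * x0 + math.sqrt(1.0 - ab) * eps
    return (eps - eps_model(x_t, t)) ** 2

# A model with no information that always predicts zero noise pays eps**2:
loss = simple_loss(lambda x_t, t: 0.0, x0=1.0, t=500, eps=0.8)
print(round(loss, 6))  # 0.64
```

In real training, `t` and `eps` are drawn at random at every step, and `eps_model` is a neural network over images or spectrograms rather than a scalar function.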
Most neural text-to-speech (TTS) models require <speech, transcript> paired data from the desired speaker for high-quality speech synthesis, which limits the usage of large amounts of untranscribed data for training. In this work, we present Guided-TTS, a high-quality TTS model that learns to generate speech from untranscribed speech data. Guided-TTS combines an unconditional diffusion probabilistic model with a separately trained phoneme classifier for text-to-speech. By modeling the unconditional distribution of speech, our model can utilize untranscribed data for training. For text-to-speech synthesis, we guide the generative process of the unconditional DDPM via phoneme classification to produce mel-spectrograms from the conditional distribution given a transcript. We show that Guided-TTS achieves performance comparable to existing methods without any transcript for LJSpeech. Our results further show that a single speaker-dependent phoneme classifier trained on large-scale multi-speaker data can guide unconditional DDPMs for various speakers to perform TTS.
Binaural audio plays a significant role in constructing immersive augmented and virtual realities. As it is expensive to record binaural audio from the real world, synthesizing it from mono audio has attracted increasing attention. This synthesis process involves not only the basic physical warping of the mono audio, but also room reverberations and head/ear-related filtrations, which are difficult to accurately simulate in traditional digital signal processing. In this paper, we formulate the synthesis process from a different perspective by decomposing the binaural audio into a common part that is shared by the left and right channels and a specific part that differs in each channel. Accordingly, we propose BinauralGrad, a novel two-stage framework equipped with diffusion models to synthesize them respectively. Specifically, in the first stage, the common information of the binaural audio is generated with a single-channel diffusion model conditioned on the mono audio, based on which the binaural audio is generated by a two-channel diffusion model in the second stage. Combining this novel perspective of two-stage synthesis with advanced generative models (i.e., the diffusion models), the proposed BinauralGrad is able to generate accurate and high-fidelity binaural audio samples. Experimental results show that on a benchmark dataset, BinauralGrad outperforms the existing baselines by a large margin in terms of both objective and subjective evaluation metrics (Wave L2: 0.128 vs. 0.157, MOS: 3.80 vs. 3.61). The generated audio samples (https://speechresearch.github.io/binauralgrad) and code (https://github.com/microsoft/NeuralSpeech/tree/master/BinauralGrad) are available online.
Diffusion probabilistic models employ a forward Markov diffusion chain to gradually map the data to a noise distribution, and learn to generate data by inferring a reverse Markov diffusion chain that inverts the forward diffusion process. To achieve competitive data generation performance, they demand a long diffusion chain that makes them computationally intensive not only in training but also in generation. To significantly improve the computational efficiency, we propose to truncate the forward diffusion chain by abolishing the requirement of diffusing the data all the way to random noise. Consequently, we start the reverse diffusion chain from an implicit generative distribution, rather than from random noise, and learn its parameters by matching it to the distribution of the data corrupted by the truncated forward diffusion chain. Experimental results show that our truncated diffusion probabilistic models provide consistent improvements over the non-truncated ones in terms of both generation performance and the number of required reverse diffusion steps.
Denoising diffusion models represent a recent emerging topic in computer vision, demonstrating remarkable results in the area of generative modeling. A diffusion model is a deep generative model based on two stages, a forward diffusion stage and a reverse diffusion stage. In the forward diffusion stage, the input data is gradually perturbed over several steps by adding Gaussian noise. In the reverse stage, a model is tasked with recovering the original input data by learning to gradually reverse the diffusion process, step by step. Diffusion models are widely appreciated for the quality and diversity of the generated samples, despite their known computational burden, i.e., low speed due to the high number of steps involved during sampling. In this survey, we provide a comprehensive review of articles on denoising diffusion models applied in vision, comprising both theoretical and practical contributions in the field. First, we identify and present three generic diffusion modeling frameworks, which are based on denoising diffusion probabilistic models, noise-conditioned score networks, and stochastic differential equations. We further discuss the relations between diffusion models and other deep generative models, including variational autoencoders, generative adversarial networks, energy-based models, autoregressive models, and normalizing flows. Then, we introduce a multi-perspective categorization of diffusion models applied in computer vision. Finally, we illustrate the current limitations of diffusion models and envision some interesting directions for future research.
Generating sound effects that humans want is an important topic. However, there are few studies in this area on sound generation. In this study, we investigate generating sound conditioned on a text prompt and propose a novel text-to-sound generation framework that consists of a text encoder, a Vector-Quantized Variational Autoencoder (VQ-VAE), a decoder, and a vocoder. The framework first uses the decoder to transfer the text features extracted by the text encoder to a mel-spectrogram with the help of the VQ-VAE, and then uses the vocoder to transform the generated mel-spectrogram into a waveform. We found that the decoder significantly influences the generation performance; thus, we focus on designing a good decoder in this study. We begin with the traditional autoregressive (AR) decoder, which has been shown to be a state-of-the-art method in previous sound generation works. However, the AR decoder always predicts the mel-spectrogram tokens one by one in order, which introduces unidirectional bias and error-accumulation problems. Moreover, with the AR decoder, the sound generation time increases linearly with the sound duration. To overcome the shortcomings of AR decoders, we propose a non-autoregressive decoder based on the discrete diffusion model, named Diffsound. Specifically, Diffsound predicts all of the mel-spectrogram tokens in one step and then refines the predicted tokens in the next step, so the best-predicted result can be obtained after several steps. Our experiments show that, compared with the AR decoder, our proposed Diffsound not only produces better text-to-sound generation results but also has a faster generation speed, e.g., MOS: 3.56 vs. 2.786, with a generation speed five times faster than the AR decoder.
We show that diffusion models can achieve image sample quality superior to the current state-of-the-art generative models. We achieve this on unconditional image synthesis by finding a better architecture through a series of ablations. For conditional image synthesis, we further improve sample quality with classifier guidance: a simple, compute-efficient method for trading off diversity for fidelity using gradients from a classifier. We achieve an FID of 2.97 on ImageNet 128×128, 4.59 on ImageNet 256×256, and 7.72 on ImageNet 512×512, and we match BigGAN-deep even with as few as 25 forward passes per sample, all while maintaining better coverage of the distribution. Finally, we find that classifier guidance combines well with upsampling diffusion models, further improving FID to 3.94 on ImageNet 256×256 and 3.85 on ImageNet 512×512. We release our code at https://github.com/openai/guided-diffusion.
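Classifier guidance as described above has a compact epsilon-space form: shift the predicted noise against the classifier's gradient, scaled by sqrt(1 - alpha_bar_t) and a guidance scale s. The scalar values below are purely illustrative; in practice the gradient comes from backpropagating a classifier trained on noisy images:

```python
import math

def guided_eps(eps_pred, classifier_grad, alpha_bar_t, scale=1.0):
    """eps_hat = eps - sqrt(1 - alpha_bar_t) * s * grad_x log p(y | x_t).
    Larger `scale` trades sample diversity for class fidelity."""
    return eps_pred - scale * math.sqrt(1.0 - alpha_bar_t) * classifier_grad

unguided = guided_eps(0.2, classifier_grad=0.5, alpha_bar_t=0.75, scale=0.0)
guided = guided_eps(0.2, classifier_grad=0.5, alpha_bar_t=0.75, scale=4.0)
print(unguided, guided)  # 0.2 -0.8: the class gradient pulls the update harder
```

The sampler then plugs `guided_eps` into an ordinary DDPM/DDIM update, which is why guidance adds only the cost of one classifier backward pass per step.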
Despite the recent visually-pleasing results achieved, the massive computational cost has been a long-standing flaw of diffusion probabilistic models (DPMs), which in turn greatly limits their applications on resource-limited platforms. Prior methods toward efficient DPMs, however, have largely focused on accelerating sampling while overlooking their huge complexity and size. In this paper, we make a dedicated attempt to lighten DPMs while striving to preserve their favourable performance. We start by training a small-sized latent diffusion model (LDM) from scratch, but observe a significant fidelity drop in the synthetic images. Through a thorough assessment, we find that DPMs are intrinsically biased against high-frequency generation, and learn to recover different frequency components at different time-steps. These properties make compact networks unable to represent frequency dynamics with accurate high-frequency estimation. Towards this end, we introduce a customized design for slim DPMs, which we term Spectral Diffusion (SD), for light-weight image synthesis. SD incorporates wavelet gating in its architecture to enable frequency-dynamic feature extraction at every reverse step, and conducts spectrum-aware distillation to promote high-frequency recovery by inversely weighting the objective based on spectrum magnitudes. Experimental results demonstrate that SD achieves an 8-18x reduction in computational complexity compared to latent diffusion models on a series of conditional and unconditional image generation tasks, while retaining competitive image fidelity.
Deep learning based text-to-speech (TTS) systems have been evolving rapidly with advances in model architectures, training methodologies, and generalization across speakers and languages. However, these advances have not been thoroughly investigated for Indian language speech synthesis. Such investigation is computationally expensive given the number and diversity of Indian languages, relatively lower resource availability, and the diverse set of advances in neural TTS that remain untested. In this paper, we evaluate the choice of acoustic models, vocoders, supplementary loss functions, training schedules, and speaker and language diversity for Dravidian and Indo-Aryan languages. Based on this, we identify monolingual models with FastPitch and HiFi-GAN V1, trained jointly on male and female speakers to perform the best. With this setup, we train and evaluate TTS models for 13 languages and find our models to significantly improve upon existing models in all languages as measured by mean opinion scores. We open-source all models on the Bhashini platform.
An ideal music synthesizer should be both interactive and expressive, generating high-fidelity audio in real time for arbitrary combinations of instruments and notes. Recent neural synthesizers have exhibited a trade-off between domain-specific models that offer detailed control of only specific instruments, and raw waveform models that can train on any music but offer minimal control and slow generation. In this work, we focus on a middle ground of neural synthesizers that can generate audio from MIDI sequences with arbitrary combinations of instruments in real time. This enables training on a wide range of transcription datasets with a single model, which in turn offers note-level control of composition and instrumentation across a wide range of instruments. We use a simple two-stage process: MIDI to spectrograms with an encoder-decoder Transformer, then spectrograms to audio with a generative adversarial network (GAN) spectrogram inverter. We compare training the decoder as an autoregressive model and as a Denoising Diffusion Probabilistic Model (DDPM), and find that the DDPM approach is superior both qualitatively and as measured by audio reconstruction and Fréchet distance metrics. Given the interactivity and generality of this approach, we find this to be a promising first step towards interactive and expressive neural synthesis for arbitrary combinations of instruments and notes.