This paper introduces WaveNet, a deep neural network for generating raw audio waveforms. The model is fully probabilistic and autoregressive, with the predictive distribution for each audio sample conditioned on all previous ones; nonetheless we show that it can be efficiently trained on data with tens of thousands of samples per second of audio. When applied to text-to-speech, it yields state-of-the-art performance, with human listeners rating it as significantly more natural sounding than the best parametric and concatenative systems for both English and Mandarin. A single WaveNet can capture the characteristics of many different speakers with equal fidelity, and can switch between them by conditioning on the speaker identity. When trained to model music, we find that it generates novel and often highly realistic musical fragments. We also show that it can be employed as a discriminative model, returning promising results for phoneme recognition.
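The autoregressive formulation referred to above can be summarized by the standard factorization of the joint waveform distribution (written here in generic notation rather than quoted from the paper):

$$
p(\mathbf{x}) \;=\; \prod_{t=1}^{T} p(x_t \mid x_1, \ldots, x_{t-1}).
$$

Each factor is parameterized by a stack of dilated causal convolutions, which is why training can be parallelized across timesteps even though sampling remains sequential.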
This paper describes Tacotron 2, a neural network architecture for speech synthesis directly from text. The system is composed of a recurrent sequence-to-sequence feature prediction network that maps character embeddings to mel-scale spectrograms, followed by a modified WaveNet model acting as a vocoder to synthesize time-domain waveforms from those spectrograms. Our model achieves a mean opinion score (MOS) of 4.53, comparable to a MOS of 4.58 for professionally recorded speech. To validate our design choices, we present ablation studies of key components of our system and evaluate the impact of using mel spectrograms as the conditioning input to WaveNet instead of linguistic, duration, and F0 features. We further show that using this compact acoustic intermediate representation allows for a significant reduction in the size of the WaveNet architecture.
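A minimal sketch of the two-stage pipeline described above, with hypothetical module and function names standing in for the feature prediction network and the WaveNet vocoder (this illustrates the data flow only, not the authors' code):

```python
# Hypothetical interfaces illustrating a Tacotron 2-style data flow:
# text -> character IDs -> mel spectrogram -> waveform.
def synthesize(text: str, feature_predictor, vocoder, char_to_id: dict):
    # 1) Convert text into a sequence of character indices for the embedding layer.
    char_ids = [char_to_id[c] for c in text.lower() if c in char_to_id]
    # 2) The sequence-to-sequence network predicts a mel-scale spectrogram.
    mel = feature_predictor.infer(char_ids)   # assumed shape: (n_mels, frames)
    # 3) A WaveNet-style vocoder conditioned on the mel spectrogram
    #    generates the time-domain waveform.
    waveform = vocoder.infer(mel)             # assumed shape: (samples,)
    return waveform
```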
Previous works (Donahue et al., 2018a; Engel et al., 2019a) have found that generating coherent raw audio waveforms with GANs is challenging. In this paper, we show that it is possible to train GANs reliably to generate high-quality coherent waveforms by introducing a set of architectural changes and simple training techniques. The subjective evaluation metric (mean opinion score, or MOS) shows the effectiveness of the proposed approach for high-quality mel-spectrogram inversion. To establish the generality of the proposed techniques, we show qualitative results of our model in speech synthesis, music domain translation, and unconditional music synthesis. We evaluate the various components of the model through ablation studies and suggest a set of guidelines to design general-purpose discriminators and generators for conditional sequence synthesis tasks. Our model is non-autoregressive, fully convolutional, with significantly fewer parameters than competing models, and generalizes to unseen speakers for mel-spectrogram inversion. Our PyTorch implementation runs more than 100x faster than real-time on a GTX 1080Ti GPU and more than 2x faster than real-time on CPU, without any hardware-specific optimization tricks.
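As a rough illustration of adversarial mel-spectrogram inversion, the sketch below shows one generic hinge-loss GAN update for a mel-conditioned waveform generator and a waveform discriminator. The specific losses, multi-scale discriminators, and architectural changes used in the paper are not reproduced here; `generator` and `discriminator` are assumed to be ordinary `torch.nn.Module` instances.

```python
import torch
import torch.nn.functional as F

def gan_vocoder_step(generator, discriminator, opt_g, opt_d, mel, real_wav):
    """One hinge-loss GAN update for a mel-conditioned waveform generator (illustrative)."""
    # --- Discriminator update: push real scores up, fake scores down ---
    fake_wav = generator(mel).detach()
    loss_d = F.relu(1.0 - discriminator(real_wav)).mean() \
           + F.relu(1.0 + discriminator(fake_wav)).mean()
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # --- Generator update: fool the discriminator ---
    fake_wav = generator(mel)
    loss_g = -discriminator(fake_wav).mean()
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()
    return loss_d.item(), loss_g.item()
```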
The rise of deep learning algorithms has led many researchers to move away from classical signal processing methods for sound generation. Deep learning models have achieved expressive speech synthesis, realistic sound textures, and musical notes from virtual instruments. However, the most suitable deep learning architecture is still under investigation. The choice of architecture is tightly coupled to the audio representation. The raw waveform of a sound can be too dense and rich for deep learning models to process efficiently; this complexity increases training time and computational cost. Moreover, it does not represent sound in the way it is perceived. Therefore, in many cases, raw audio has been transformed into a compressed and more meaningful form using upsampling, feature extraction, or even higher-level representations of the waveform. In addition, depending on the chosen form, additional conditioning representations, different model architectures, and numerous metrics for evaluating the reconstructed sound have been investigated. This paper provides an overview of audio representations applied to sound synthesis using deep learning. It also presents the most important methods for developing and evaluating sound synthesis architectures with deep learning models, always in relation to the audio representation.
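As a concrete example of the kind of compressed representation discussed above, the snippet below computes a log-mel spectrogram from a raw waveform with librosa; the parameter values are illustrative defaults, not recommendations from the review.

```python
import librosa
import numpy as np

# Load a waveform and convert it into a log-mel spectrogram,
# a far more compact input than the raw samples.
y, sr = librosa.load("example.wav", sr=22050)            # raw audio: ~22k values per second
mel = librosa.feature.melspectrogram(
    y=y, sr=sr, n_fft=1024, hop_length=256, n_mels=80    # 80 mel bands, ~86 frames per second
)
log_mel = librosa.power_to_db(mel, ref=np.max)            # perceptually motivated log compression
print(y.shape, log_mel.shape)
```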
We introduce AudioLM, a framework for high-quality audio generation with long-term consistency. AudioLM maps the input audio to a sequence of discrete tokens and casts audio generation as a language modeling task in this representation space. We show how existing audio tokenizers provide different trade-offs between reconstruction quality and long-term structure, and we propose a hybrid tokenization scheme to achieve both objectives. Namely, we leverage the discretized activations of a masked language model pre-trained on audio to capture long-term structure, and the discrete codes produced by a neural audio codec to achieve high-quality synthesis. By training on large corpora of raw audio waveforms, AudioLM learns to generate natural and coherent continuations given short prompts. When trained on speech, without any transcript or annotation, AudioLM generates syntactically and semantically plausible speech continuations while also maintaining speaker identity and prosody for unseen speakers. Furthermore, we demonstrate how our approach extends beyond speech by generating coherent piano music continuations, despite being trained without any symbolic representation of music.
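The hybrid tokenization idea can be sketched as pseudocode. All of the objects below (`semantic_tokenizer`, `acoustic_codec`, and the two language models) are hypothetical placeholders for the pre-trained masked language model and the neural audio codec mentioned in the abstract, not real library interfaces.

```python
# Hypothetical sketch of AudioLM-style hybrid tokenization and continuation.
def continue_audio(prompt_wav, semantic_tokenizer, acoustic_codec,
                   semantic_lm, acoustic_lm):
    # 1) Coarse "semantic" tokens from a masked-LM's discretized activations
    #    capture long-term structure.
    sem = semantic_tokenizer.encode(prompt_wav)
    sem_cont = semantic_lm.generate(prefix=sem)

    # 2) Fine "acoustic" tokens from a neural codec carry the detail needed
    #    for high-quality synthesis, generated conditioned on the semantic plan.
    aco = acoustic_codec.encode(prompt_wav)
    aco_cont = acoustic_lm.generate(prefix=aco, conditioning=sem_cont)

    # 3) The codec decoder maps acoustic tokens back to a waveform.
    return acoustic_codec.decode(aco_cont)
```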
Recent advances in neural text-to-speech research have been dominated by two-stage pipelines that rely on low-level intermediate speech representations such as mel-spectrograms. However, such predetermined features are fundamentally limited, because they do not allow the full potential of data-driven approaches to be exploited through learned hidden representations. For this reason, several end-to-end methods have been proposed. However, such models are harder to train and require large amounts of high-quality recordings with transcriptions. Here we propose WavThruVec, a two-stage architecture that resolves this bottleneck by using high-dimensional wav2vec 2.0 embeddings as the intermediate speech representation. Since these hidden activations provide high-level linguistic features, they are more robust to noise. This allows us to use lower-quality annotated speech datasets to train the first-stage module. At the same time, the second-stage component can be trained on large-scale untranscribed audio corpora, since wav2vec 2.0 embeddings are already time-aligned. This leads to improved generalization to out-of-vocabulary words, as well as better generalization to unseen speakers. We show that the proposed model not only matches the quality of state-of-the-art neural models, but also exhibits useful properties that enable tasks such as voice conversion or zero-shot synthesis.
In this paper, we explore the inclusion of latent random variables into the hidden state of a recurrent neural network (RNN) by combining the elements of the variational autoencoder. We argue that through the use of high-level latent random variables, the variational RNN (VRNN) can model the kind of variability observed in highly structured sequential data such as natural speech. We empirically evaluate the proposed model against other related sequential models on four speech datasets and one handwriting dataset. Our results show the important roles that latent random variables can play in the RNN dynamics.
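In the notation commonly used for the VRNN, the per-timestep structure combines the RNN state $h_{t-1}$ with a latent variable $z_t$ (a condensed restatement for orientation, with details as in the paper):

$$
z_t \sim p(z_t \mid h_{t-1}), \qquad
x_t \sim p(x_t \mid z_t, h_{t-1}), \qquad
h_t = f_\theta(x_t, z_t, h_{t-1}),
$$

and training maximizes a timestep-wise variational bound

$$
\mathcal{L} = \mathbb{E}_{q}\!\left[\sum_{t=1}^{T} \log p(x_t \mid z_{\le t}, x_{<t})
- \mathrm{KL}\big(q(z_t \mid x_{\le t}, z_{<t}) \,\|\, p(z_t \mid x_{<t}, z_{<t})\big)\right].
$$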
In this work, we propose DiffWave, a versatile diffusion probabilistic model for conditional and unconditional waveform generation. The model is non-autoregressive, and converts a white noise signal into a structured waveform through a Markov chain with a constant number of steps at synthesis. It is efficiently trained by optimizing a variant of the variational bound on the data likelihood. DiffWave produces high-fidelity audio in different waveform generation tasks, including neural vocoding conditioned on mel spectrograms, class-conditional generation, and unconditional generation. We demonstrate that DiffWave matches a strong WaveNet vocoder in terms of speech quality (MOS: 4.44 versus 4.43), while synthesizing orders of magnitude faster. In particular, it significantly outperforms autoregressive and GAN-based waveform models in the challenging unconditional generation task in terms of audio quality and sample diversity, according to various automatic and human evaluations. Audio samples are available at: https://diffwave-demo.github.io/
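The constant-step Markov chain and variational-bound training mentioned above follow the standard diffusion setup; in common DDPM notation (used here as a summary, not quoted from the paper), the forward process is $q(x_t \mid x_{t-1}) = \mathcal{N}\!\big(\sqrt{1-\beta_t}\, x_{t-1},\, \beta_t I\big)$, and the training objective reduces to predicting the injected noise:

$$
\min_\theta \;\; \mathbb{E}_{x_0,\, \epsilon,\, t}\,
\big\| \epsilon - \epsilon_\theta\!\big(\sqrt{\bar\alpha_t}\, x_0 + \sqrt{1-\bar\alpha_t}\,\epsilon,\; t\big)\big\|_2^2,
\qquad \bar\alpha_t = \prod_{s=1}^{t}(1-\beta_s),
$$

with the mel spectrogram supplied as additional conditioning to $\epsilon_\theta$ in the vocoding case.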
In this paper, we present a novel method for phoneme-level prosody control of F0 and duration using intuitive discrete labels. We propose an unsupervised prosodic clustering process which is used to discretize phoneme-level F0 and duration features from a multispeaker speech dataset. These features are fed as an input sequence of prosodic labels to a prosody encoder module which augments an autoregressive attention-based text-to-speech model. We utilize various methods in order to improve prosodic control range and coverage, such as augmentation, F0 normalization, balanced clustering for duration and speaker-independent clustering. The final model enables fine-grained phoneme-level prosody control for all speakers contained in the training set, while maintaining the speaker identity. Instead of relying on reference utterances for inference, we introduce a prior prosody encoder which learns the style of each speaker and enables speech synthesis without the requirement of reference audio. We also fine-tune the multispeaker model to unseen speakers with limited amounts of data, as a realistic application scenario and show that the prosody control capabilities are maintained, verifying that the speaker-independent prosodic clustering is effective. Experimental results show that the model has high output speech quality and that the proposed method allows efficient prosody control within each speaker's range despite the variability that a multispeaker setting introduces.
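The abstract does not specify the clustering algorithm, so the sketch below simply illustrates the general idea of discretizing phoneme-level F0 and duration into prosodic labels with k-means; the balanced and speaker-independent clustering variants mentioned above are not reproduced here, and the feature choices are assumptions made for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans

def discretize_prosody(f0_per_phoneme, dur_per_phoneme, n_f0=5, n_dur=5, seed=0):
    """Map continuous phoneme-level F0/duration values to discrete prosodic labels (illustrative)."""
    f0 = np.asarray(f0_per_phoneme).reshape(-1, 1)     # e.g. mean log-F0 per phoneme
    dur = np.asarray(dur_per_phoneme).reshape(-1, 1)   # e.g. phoneme duration in frames

    f0_labels = KMeans(n_clusters=n_f0, random_state=seed, n_init=10).fit_predict(f0)
    dur_labels = KMeans(n_clusters=n_dur, random_state=seed, n_init=10).fit_predict(dur)

    # The (F0 label, duration label) pairs become the input token sequence
    # for a prosody encoder that conditions the TTS model.
    return list(zip(f0_labels.tolist(), dur_labels.tolist()))
```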
Several recent works on speech synthesis have employed generative adversarial networks (GANs) to produce raw waveforms. Although such methods improve the sampling efficiency and memory usage, their sample quality has not yet reached that of autoregressive and flow-based generative models. In this work, we propose HiFi-GAN, which achieves both efficient and high-fidelity speech synthesis. As speech audio consists of sinusoidal signals with various periods, we demonstrate that modeling the periodic patterns of audio is crucial for enhancing sample quality. A subjective human evaluation (mean opinion score, MOS) on a single-speaker dataset indicates that our proposed method demonstrates similarity to human quality while generating 22.05 kHz high-fidelity audio 167.9 times faster than real-time on a single V100 GPU. We further show the generality of HiFi-GAN to the mel-spectrogram inversion of unseen speakers and end-to-end speech synthesis. Finally, a small-footprint version of HiFi-GAN generates samples 13.4 times faster than real-time on CPU with comparable quality to an autoregressive counterpart.
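One way to expose periodic structure to a discriminator, in line with the abstract's emphasis on modeling periodic patterns, is to fold the 1-D waveform into a 2-D grid with one row per period so that samples spaced a fixed period apart line up along one axis. The helper below is a simplified illustration of that reshaping, not HiFi-GAN's exact implementation.

```python
import torch
import torch.nn.functional as F

def fold_by_period(wav: torch.Tensor, period: int) -> torch.Tensor:
    """Reshape waveforms of shape (B, 1, T) into (B, 1, T // period, period)."""
    b, c, t = wav.shape
    if t % period != 0:                         # pad so the length divides evenly
        pad = period - (t % period)
        wav = F.pad(wav, (0, pad), mode="reflect")
        t = t + pad
    return wav.view(b, c, t // period, period)

# Example: samples spaced 3 apart become columns a 2-D convolution can compare.
x = torch.randn(4, 1, 8192)
print(fold_by_period(x, period=3).shape)        # torch.Size([4, 1, 2731, 3])
```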
Single-channel, speaker-independent speech separation methods have recently seen great progress. However, the accuracy, latency, and computational cost of such methods remain insufficient. The majority of the previous methods have formulated the separation problem through the time-frequency representation of the mixed signal, which has several drawbacks, including the decoupling of the phase and magnitude of the signal, the suboptimality of time-frequency representation for speech separation, and the long latency in calculating the spectrograms. To address these shortcomings, we propose a fully-convolutional time-domain audio separation network (Conv-TasNet), a deep learning framework for end-to-end time-domain speech separation. Conv-TasNet uses a linear encoder to generate a representation of the speech waveform optimized for separating individual speakers. Speaker separation is achieved by applying a set of weighting functions (masks) to the encoder output. The modified encoder representations are then inverted back to the waveforms using a linear decoder. The masks are found using a temporal convolutional network (TCN) consisting of stacked 1-D dilated convolutional blocks, which allows the network to model the long-term dependencies of the speech signal while maintaining a small model size. The proposed Conv-TasNet system significantly outperforms previous time-frequency masking methods in separating two- and three-speaker mixtures. Additionally, Conv-TasNet surpasses several ideal time-frequency magnitude masks in two-speaker speech separation as evaluated by both objective distortion measures and subjective quality assessment by human listeners. Finally, Conv-TasNet has a significantly smaller model size and a shorter minimum latency, making it a suitable solution for both offline and real-time speech separation applications. This study therefore represents a major step toward the realization of speech separation systems for real-world speech processing technologies.
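The encoder-mask-decoder structure described above can be sketched in a few lines of PyTorch. The separator here is a deliberately small placeholder for the stacked dilated TCN blocks, so this is a structural illustration rather than the full Conv-TasNet model.

```python
import torch
import torch.nn as nn

class TinyTasNet(nn.Module):
    """Structural sketch of encoder -> masks -> decoder (not the full Conv-TasNet)."""
    def __init__(self, n_src=2, n_filters=256, kernel=16, stride=8):
        super().__init__()
        self.encoder = nn.Conv1d(1, n_filters, kernel, stride=stride, bias=False)
        # Placeholder for the stacked dilated 1-D conv blocks (TCN) that estimate masks.
        self.separator = nn.Sequential(
            nn.Conv1d(n_filters, n_filters, 3, padding=1), nn.PReLU(),
            nn.Conv1d(n_filters, n_src * n_filters, 1),
        )
        self.decoder = nn.ConvTranspose1d(n_filters, 1, kernel, stride=stride, bias=False)
        self.n_src, self.n_filters = n_src, n_filters

    def forward(self, mix):                        # mix: (B, 1, T)
        w = self.encoder(mix)                      # (B, F, L) learned representation
        masks = torch.sigmoid(self.separator(w))   # (B, n_src * F, L)
        masks = masks.view(mix.size(0), self.n_src, self.n_filters, -1)
        sources = [self.decoder(masks[:, i] * w) for i in range(self.n_src)]
        return torch.stack(sources, dim=1)         # (B, n_src, 1, T')

est = TinyTasNet()(torch.randn(2, 1, 16000))
print(est.shape)
```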
Removing background noise from speech audio has been the subject of considerable research and effort, especially in recent years with the rise of virtual communication and amateur recordings. Yet background noise is not the only unpleasant disturbance that can hinder intelligibility: reverberation, clipping, codec artifacts, problematic equalization, limited bandwidth, or inconsistent loudness are equally disturbing and ubiquitous. In this work, we propose to treat speech enhancement as a holistic endeavor and present a universal speech enhancement system that tackles 55 different distortions at the same time. Our approach consists of a generative model that employs score-based diffusion, together with a multi-resolution conditioning network that performs enhancement via mixture density networks. We show that this approach significantly outperforms the state of the art in a subjective test performed by expert listeners. We also show that it achieves competitive objective scores with just 4-8 diffusion steps, despite not considering any particular fast-sampling strategy. We hope that both our methodology and our technical contributions encourage researchers and practitioners to adopt a universal approach to speech enhancement, possibly framing it as a generative task.
This paper describes the Microsoft end-to-end neural text-to-speech (TTS) system, DelightfulTTS, for the Blizzard Challenge 2021. The goal of this challenge is to synthesize natural and high-quality speech from text, and we approach it from two perspectives: the first is to directly model and generate waveforms at a 48 kHz sampling rate, which brings higher perceptual quality than previous systems with 16 kHz or 24 kHz sampling rates; the second is to systematically model the variation information in speech through system design, which improves prosody and naturalness. Specifically, for 48 kHz modeling, we predict 16 kHz mel-spectrograms in the acoustic model and propose a vocoder called HiFiNet to directly generate 48 kHz waveforms from the predicted 16 kHz mel-spectrograms, which better trades off training efficiency, modeling stability, and voice quality. We model the variation information systematically from explicit (speaker ID, language ID, pitch, and duration) and implicit (utterance-level and phoneme-level prosody) perspectives: 1) for speaker and language ID, we use lookup embeddings in training and inference; 2) for pitch and duration, we extract the values from paired text-speech data in training and use two predictors to predict the values in inference; 3) for utterance-level and phoneme-level prosody, we use two reference encoders to extract the values in training, and two separate predictors to predict the values in inference. In addition, we introduce an improved Conformer block to better model local and global dependencies in the acoustic model. For task SH1, DelightfulTTS achieves a mean score of 4.17 in the MOS test and 4.35 in the SMOS test, which indicates the effectiveness of our proposed system.
In this paper, a text-to-rapping/singing system is introduced, which can be adapted to any speaker's voice. It utilizes a Tacotron-based multispeaker acoustic model trained on read speech data only, and provides prosody control at the phoneme level. Dataset augmentation and additional prosody manipulation based on traditional DSP algorithms are also investigated. The neural TTS model is fine-tuned on limited recordings of an unseen speaker, allowing rapping/singing synthesis with the target speaker's voice. The detailed pipeline of the system is described, which includes extracting the target pitch and duration values from an a cappella song and converting them into the target speaker's valid note range before synthesis. An additional stage of prosodic manipulation of the output via WSOLA is also investigated, in order to better match the target duration values. The synthesized utterances can be mixed with an instrumental accompaniment track to produce a complete song. The proposed system is evaluated through subjective listening tests, as well as in comparison with an available alternative system that also aims to produce synthetic singing voice from read-only training data. Results show that the proposed approach can produce high-quality rapping/singing voice with increased naturalness.
In this paper, we propose a neural end-to-end system for voice-preserving, lip-synchronous translation of videos. The system is designed to combine multiple component models and deliver a video of the original speaker speaking in the target language, while preserving the voice, speech characteristics, and face of the original speaker. The pipeline starts with automatic speech recognition, including emphasis detection, followed by a translation model. The translated text is then synthesized by a text-to-speech model that re-creates the original emphases, mapped onto the translated sentence. The resulting synthetic speech is then mapped to the original speaker's voice using a voice conversion model. Finally, to synchronize the speaker's lips with the translated audio, a conditional GAN-based model generates frames with adapted lip movements with respect to the input face images and the output of the voice conversion model. The system then combines the generated video with the converted audio to produce the final output. The result is a video of the speaker speaking another language without actually knowing it. To evaluate our design, we present a user study of the complete system as well as separate evaluations of the individual components. Since no dataset is available to evaluate our whole system, we collected a test set and evaluated our system on it. The results show that our system is able to generate convincing videos of the original speaker speaking the target language while preserving the original speaker's characteristics. The collected dataset will be shared.
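The component chain described above can be summarized as pseudocode; every name below is a hypothetical placeholder for one of the component models (ASR with emphasis detection, machine translation, TTS, voice conversion, and the lip-sync GAN), not a real API.

```python
# Hypothetical pipeline sketch; none of these calls refer to real libraries.
def translate_video(video, audio, src_lang, tgt_lang, models):
    text, emphasis = models.asr.transcribe(audio)                     # ASR + emphasis detection
    tgt_text, emphasis_map = models.mt.translate(text, emphasis,
                                                 src_lang, tgt_lang)  # translation keeps emphasis mapping
    tts_wav = models.tts.synthesize(tgt_text, emphasis=emphasis_map)  # re-create emphasis in TTS output
    converted = models.vc.convert(tts_wav, target_voice=audio)        # map to original speaker's voice
    frames = models.lip_gan.generate(video.face_frames, converted)    # lip motion matching the new audio
    return models.mux.combine(frames, converted)                      # final translated video
```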
Neural audio/speech coding has recently shown its capability to operate at much lower bitrates than traditional methods. However, existing neural audio/speech codecs employ either acoustic features or blind features learned with convolutional neural networks for encoding, and temporal redundancy remains within the encoded features. This paper introduces latent-domain predictive coding into the VQ-VAE framework to fully remove such redundancy, and proposes TF-Codec for low-latency neural speech coding in an end-to-end manner. Specifically, the extracted features are encoded conditioned on a prediction from past quantized latent frames, so that temporal correlations are further removed. Moreover, we introduce a learnable compression on the time-frequency input to adaptively adjust the attention paid to main components and details at different bitrates. A differentiable vector quantization scheme based on distance mapping and Gumbel-Softmax is proposed to better model the latent distributions under a rate constraint. Subjective results on multilingual speech datasets show that, with a latency of 40 ms, the proposed TF-Codec at 1 kbps achieves better quality than Opus at 9 kbps, and TF-Codec at 3 kbps outperforms both EVS at 9.6 kbps and Opus at 12 kbps. Numerous studies are conducted to demonstrate the effectiveness of these techniques.
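A differentiable vector quantization step of the kind mentioned above can be illustrated with distance-based logits and Gumbel-Softmax sampling; the sketch follows the generic recipe and should not be read as TF-Codec's exact formulation.

```python
import torch
import torch.nn.functional as F

def gumbel_vq(z, codebook, tau=1.0, hard=True):
    """Differentiable vector quantization (illustrative, not TF-Codec's exact scheme).

    z:        (B, D) latent vectors to quantize
    codebook: (K, D) learnable codewords
    """
    # Negative squared distances act as logits: closer codewords get higher probability.
    logits = -torch.cdist(z, codebook, p=2) ** 2            # (B, K)
    # Gumbel-Softmax yields (approximately) one-hot codeword selections
    # while keeping gradients flowing to both the encoder and the codebook.
    onehot = F.gumbel_softmax(logits, tau=tau, hard=hard)   # (B, K)
    z_q = onehot @ codebook                                 # (B, D) quantized latents
    indices = onehot.argmax(dim=-1)                         # discrete codes to transmit
    return z_q, indices

z_q, idx = gumbel_vq(torch.randn(8, 64), torch.randn(512, 64))
print(z_q.shape, idx.shape)
```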
Modeling the distribution of natural images is a landmark problem in unsupervised learning. This task requires an image model that is at once expressive, tractable and scalable. We present a deep neural network that sequentially predicts the pixels in an image along the two spatial dimensions. Our method models the discrete probability of the raw pixel values and encodes the complete set of dependencies in the image. Architectural novelties include fast two-dimensional recurrent layers and an effective use of residual connections in deep recurrent networks. We achieve log-likelihood scores on natural images that are considerably better than the previous state of the art. Our main results also provide benchmarks on the diverse ImageNet dataset. Samples generated from the model appear crisp, varied and globally coherent.
Generative adversarial networks have recently demonstrated outstanding performance in neural vocoding, outperforming the best autoregressive and flow-based models. In this paper, we show that this success can be extended to other tasks of conditional audio generation. In particular, building on HiFi vocoders, we propose HiFi++, a novel general framework for bandwidth extension and speech enhancement. We show that, with an improved generator architecture and simplified multi-discriminator training, HiFi++ performs better than or comparably to the state of the art on these tasks while using significantly fewer computational resources. The effectiveness of our approach is validated through a series of extensive experiments.
A neural vocoder, which converts a spectral representation of an audio signal into a waveform, is a commonly used component in speech synthesis pipelines. It focuses on synthesizing waveforms from low-dimensional representations such as mel-spectrograms. In recent years, different approaches have been introduced to develop such vocoders. However, it has become more challenging to assess these new vocoders and compare their performance to previous ones. To address this problem, we present VocBench, a framework that benchmarks the performance of state-of-the-art neural vocoders. VocBench uses a systematic study to evaluate different neural vocoders in a shared environment that enables a fair comparison between them. In our experiments, we use the same setup for the datasets, training pipelines, and evaluation metrics for all neural vocoders. We perform subjective and objective evaluations to compare the performance of each vocoder along different axes. Our results show that the framework is capable of demonstrating the competitive efficacy of each vocoder and the quality of its synthesized samples. The VocBench framework is available at https://github.com/facebookResearch/Vocoder-Benchmark.
Most neural text-to-speech (TTS) models require <speech, transcript> paired data from the desired speaker for high-quality speech synthesis, which limits the use of large amounts of untranscribed data for training. In this work, we present Guided-TTS, a high-quality TTS model that generates speech from untranscribed speech data. Guided-TTS combines an unconditional diffusion probabilistic model with a separately trained phoneme classifier for text-to-speech. By modeling the unconditional distribution of speech, our model can utilize untranscribed training data. For text-to-speech synthesis, we guide the generative process of the unconditional DDPM via phoneme classification to produce mel-spectrograms from the conditional distribution given a transcript. We show that Guided-TTS achieves performance comparable to existing methods without any transcripts for LJSpeech. Our results further show that a single speaker-dependent phoneme classifier trained on large-scale multi-speaker data can guide unconditional DDPMs for various speakers to perform TTS.
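The guidance mechanism described above relies on the standard classifier-guidance identity: the score of the conditional distribution splits into the unconditional score plus the classifier's gradient (written here in generic diffusion notation),

$$
\nabla_{x_t} \log p(x_t \mid y) \;=\; \nabla_{x_t} \log p(x_t) \;+\; \nabla_{x_t} \log p(y \mid x_t),
$$

so the unconditional DDPM supplies the first term while the separately trained phoneme classifier supplies the second, steering sampling toward mel-spectrograms consistent with the transcript $y$.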