Neural networks have proven to be a powerful tool for solving the problem of speech coding at very low bitrates. However, the design of a neural coder that operates robustly in real-world conditions remains a major challenge. We therefore present the Neural End-2-End Speech Codec (NESC), a robust, scalable end-to-end neural speech codec for high-quality wideband speech coding at 3 kbps. The encoder uses a new architecture configuration that relies on our proposed Dual-PathConvRNN (DPCRNN) layer, while the decoder architecture is based on our previous work Streamwise-StyleMelGAN. Our subjective listening tests on clean and noisy speech show that NESC is particularly robust to unseen conditions and signal perturbations.
We present a scalable and efficient neural waveform coding system for speech compression. We formulate the speech coding problem as an autoencoding task, in which a convolutional neural network (CNN) performs encoding and decoding as a neural waveform codec (NWC) during its feedforward routine. The proposed NWC also defines quantization and entropy coding as trainable modules, so coding artifacts and bitrate control are handled during the optimization process. We achieve efficiency by introducing compact model components into the NWC, such as gated residual networks and depthwise separable convolutions. In addition, the proposed model features a scalable architecture, cross-module residual learning (CMRL), to cover a wide range of bitrates. To this end, we adopt the residual coding concept to cascade multiple NWC autoencoding modules, where each NWC module performs residual coding to restore any reconstruction loss left by its preceding modules. CMRL can also scale down to cover lower bitrates, for which it employs a linear predictive coding (LPC) module as its first autoencoder. This hybrid design integrates LPC and NWC by redefining LPC quantization as a differentiable process, making the system trainable in an end-to-end manner. The decoder of the proposed system uses either one NWC (0.12 million parameters) in the low-to-medium bitrate range (12 to 20 kbps) or two NWCs at a high bitrate (32 kbps). Although the decoding complexity is not yet as low as that of conventional speech codecs, it is significantly lower than that of other neural speech coders, such as WaveNet-based vocoders. For wideband speech coding quality, our system yields performance comparable or superior to AMR-WB, and performs favorably in listening tests at low and medium bitrates. The proposed system can be scaled up to higher bitrates to achieve near-transparent performance.
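A minimal sketch of the cross-module residual coding idea described above, with hypothetical `encode`/`decode` callables standing in for the trained NWC autoencoders (the LPC first stage and the trainable quantization/entropy coding are omitted):

```python
import numpy as np

def cmrl_encode(x, modules):
    """Cascade of coders: each module codes the residual left by the previous ones."""
    codes, residual = [], np.asarray(x, dtype=np.float64)
    for encode, decode in modules:          # modules: list of (encode, decode) pairs
        code = encode(residual)             # this module's (quantized) code
        codes.append(code)
        residual = residual - decode(code)  # pass the remaining error downstream
    return codes

def cmrl_decode(codes, modules):
    # The final waveform estimate is the sum of all per-module reconstructions.
    return sum(decode(code) for code, (_, decode) in zip(codes, modules))
```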
Neural audio/speech coding has recently shown its capability to deliver high quality at much lower bitrates than traditional methods. However, existing neural audio/speech codecs employ either acoustic features or learned blind features with a convolutional neural network for encoding, so temporal redundancy remains in the encoded features. This paper introduces latent-domain predictive coding into the VQ-VAE framework to fully remove such redundancy, and proposes TF-Codec for low-latency neural speech coding in an end-to-end manner. Specifically, the extracted features are encoded conditioned on a prediction from past quantized latent frames, so that temporal correlations are further removed. Moreover, we introduce a learnable compression on the time-frequency input to adaptively adjust the attention paid to the main frequencies and to details at different bitrates. A differentiable vector quantization scheme based on distance mapping and Gumbel-Softmax is proposed to better model the latent distribution under a rate constraint. Subjective results on multilingual speech datasets show that, with a latency of 40 ms, the proposed TF-Codec at 1 kbps achieves better quality than Opus at 9 kbps, and TF-Codec at 3 kbps outperforms both EVS at 9.6 kbps and Opus at 12 kbps. A number of studies are conducted to show the effectiveness of these techniques.
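The latent-domain predictive coding step can be pictured with the following sketch; `predict` and `quantize` are assumed stand-ins for the paper's learned predictor and differentiable vector quantizer, not its actual interfaces:

```python
def predictive_encode(latent_frames, predict, quantize):
    """Quantize only the part of each latent frame that the prediction
    from past *quantized* frames cannot explain."""
    history, codes = [], []
    for z in latent_frames:
        z_pred = predict(history)        # prediction from previously decoded frames
                                         # (predict([]) is assumed to return a zero frame)
        q = quantize(z - z_pred)         # code the prediction residual only
        codes.append(q)
        history.append(z_pred + q)       # what the decoder will also reconstruct
    return codes
```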
GAN vocoders are currently among the state-of-the-art methods for building high-quality neural waveform generative models. However, most of their architectures require dozens of billions of floating-point operations per second (GFLOPS) to generate speech waveforms in a samplewise manner. This makes GAN vocoders still challenging to run on normal CPUs without accelerators or parallel computers. In this work, we propose a new architecture for GAN vocoders that mainly depends on recurrent and fully-connected networks to directly generate the time-domain signal in a framewise manner. This results in a considerable reduction of the computational cost and enables very fast generation on both GPUs and low-complexity CPUs. Experimental results show that our Framewise WaveGAN vocoder achieves significantly higher quality than auto-regressive maximum-likelihood vocoders such as LPCNet, at a very low complexity of 1.2 GFLOPS. This makes GAN vocoders more practical on edge and low-power devices.
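To illustrate the framewise idea (a toy sketch, not the Framewise WaveGAN architecture), one recurrent step can emit a whole frame of samples rather than a single sample; all module names and layer sizes below are illustrative assumptions:

```python
import torch
import torch.nn as nn

class ToyFramewiseGenerator(nn.Module):
    """One GRU step per frame; a linear head maps the hidden state to a frame of samples."""
    def __init__(self, cond_dim=80, hidden_dim=256, frame_size=160):
        super().__init__()
        self.rnn = nn.GRUCell(cond_dim, hidden_dim)
        self.to_frame = nn.Linear(hidden_dim, frame_size)

    def forward(self, cond):  # cond: (batch, n_frames, cond_dim) acoustic features
        h = cond.new_zeros(cond.size(0), self.rnn.hidden_size)
        frames = []
        for t in range(cond.size(1)):
            h = self.rnn(cond[:, t], h)
            frames.append(torch.tanh(self.to_frame(h)))   # one frame of samples per step
        return torch.cat(frames, dim=-1)                   # (batch, n_frames * frame_size)
```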
Removing background noise from speech audio has been the subject of considerable research and effort, especially in recent years due to the rise of virtual communication and amateur sound recording. Yet background noise is not the only unpleasant disturbance that can prevent intelligibility: reverberation, clipping, codec artifacts, problematic equalization, limited bandwidth, or inconsistent loudness are equally disturbing and ubiquitous. In this work, we propose to consider the task of speech enhancement as a holistic endeavor, and present a universal speech enhancement system that tackles 55 different distortions at the same time. Our approach consists of a generative model that employs score-based diffusion, together with a multi-resolution conditioning network that performs enhancement through mixture density networks. We show that this approach significantly outperforms the state of the art in a subjective test performed by expert listeners. We also show that, despite not considering any particular fast-sampling strategy, it achieves competitive objective scores with just 4-8 diffusion steps. We hope that both our methodology and technical contributions encourage researchers and practitioners to adopt a universal approach to speech enhancement, possibly framing it as a generative task.
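The few-step sampling claim can be related to a generic conditional score-based sampler; the sketch below is a plain annealed-Langevin-style loop under stated assumptions, not the paper's sampler, and `score_model` / `noisy_features` are hypothetical stand-ins for the score network and the conditioning network's output:

```python
import numpy as np

def enhance(noisy_features, score_model, n_steps=8, sigma_max=1.0, sigma_min=0.01,
            length=16000, seed=0):
    """Generic conditional score-based sampling sketch (4-8 steps, as in the abstract)."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(length) * sigma_max                 # start from noise
    for sigma in np.geomspace(sigma_max, sigma_min, n_steps):
        step = 0.5 * sigma ** 2
        x = x + step * score_model(x, sigma, noisy_features)    # follow the learned score
        x = x + np.sqrt(step) * rng.standard_normal(length)     # Langevin noise term
    return x
```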
We present a model for denoising old audio recordings. Our model converts its input into a time-frequency representation via the short-time Fourier transform (STFT) and processes the resulting complex spectrogram with a convolutional neural network. The network is trained with reconstruction and adversarial objectives on a synthetic dataset of noisy music, created by mixing clean music with real noise samples extracted from quiet segments of old recordings. We evaluate our method quantitatively on held-out test examples from the synthetic dataset, and qualitatively through human ratings on samples of actual historical recordings. Our results show that the proposed method is effective at removing noise while preserving the quality and details of the original music.
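The processing chain described in the abstract can be sketched as follows, with `denoiser` standing in for the trained CNN that maps a noisy complex spectrogram to a clean one (the window length is an illustrative assumption):

```python
import numpy as np
from scipy.signal import stft, istft

def denoise_recording(audio, sample_rate, denoiser, nperseg=2048):
    """STFT -> network applied to the complex spectrogram -> inverse STFT."""
    _, _, noisy_spec = stft(audio, fs=sample_rate, nperseg=nperseg)
    clean_spec = denoiser(noisy_spec)                     # complex-valued spectrogram estimate
    _, clean_audio = istft(clean_spec, fs=sample_rate, nperseg=nperseg)
    return clean_audio
```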
Bitrate scalability is a desirable feature for audio coding in real-time communications. Existing neural audio codecs usually enforce a specific bitrate during training, so a separate model has to be trained for each target bitrate, which increases the memory footprint on both the sender and the receiver side, and transcoding is often needed to support multiple receivers. In this paper, we introduce a cross-scale scalable vector quantization scheme (CSVQ), in which multi-scale features are encoded progressively with stepwise feature fusion and refinement. In this way, a coarse-level signal is reconstructed if only a portion of the bitstream is received, and the quality gradually improves as more bits become available. The proposed CSVQ scheme can be flexibly applied to any neural audio coding network with a mirrored autoencoder structure to achieve bitrate scalability. Subjective results show that the proposed scheme outperforms the classical residual VQ (RVQ). Moreover, the proposed CSVQ at 3 kbps outperforms Opus at 9 kbps and Lyra at 3 kbps, and it provides a graceful quality boost as the bitrate increases.
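For context, the classical residual VQ baseline mentioned above can be sketched as below; it already gives bitstream truncation a meaning (any prefix of the codewords decodes to a coarser vector), whereas CSVQ additionally fuses multi-scale encoder/decoder features, which this sketch does not capture:

```python
import numpy as np

def rvq_encode(z, codebooks):
    """Residual VQ: each stage quantizes what the previous stages left over."""
    indices, residual = [], np.asarray(z, dtype=np.float64)
    for cb in codebooks:                               # cb: (K, D) array of codewords
        idx = int(np.argmin(np.linalg.norm(residual - cb, axis=1)))
        indices.append(idx)
        residual = residual - cb[idx]
    return indices

def rvq_decode(indices, codebooks):
    # Decoding any prefix of the indices yields a progressively refined vector.
    return sum(cb[idx] for idx, cb in zip(indices, codebooks))
```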
Convolution-augmented transformers (Conformers) have recently been proposed for various speech-domain applications, such as automatic speech recognition (ASR) and speech separation, because they can capture both local and global dependencies. In this paper, we propose a Conformer-based metric generative adversarial network (CMGAN) for speech enhancement (SE) in the time-frequency (TF) domain. The generator encodes the magnitude and complex spectrogram information using two-stage Conformer blocks to model both time and frequency dependencies. The decoder then decouples the estimation into a magnitude-mask decoder branch to filter out unwanted distortions and a complex refinement branch to further improve the magnitude estimation and implicitly enhance the phase information. We also include a metric discriminator that alleviates metric mismatch by optimizing the corresponding evaluation score. Objective and subjective evaluations show that CMGAN outperforms state-of-the-art methods on three speech enhancement tasks (denoising, dereverberation and super-resolution). For example, quantitative denoising analysis on the Voice Bank+DEMAND dataset shows that CMGAN outperforms previous models by a clear margin, with a PESQ of 3.41 and an SSNR of 11.10 dB.
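The two decoder branches combine roughly as in the sketch below, a paraphrase of the abstract in which `mask_branch` and `refine_branch` are assumed stand-ins for the paper's decoder heads rather than its actual modules:

```python
import torch

def combine_branches(noisy_mag, noisy_phase, noisy_complex, mask_branch, refine_branch):
    """Masked magnitude re-uses the noisy phase; a complex refinement is added on top.

    noisy_mag / noisy_phase: (..., F, T) tensors; noisy_complex and the output of
    refine_branch are assumed to be stacked real/imag tensors of shape (..., F, T, 2).
    """
    masked_mag = mask_branch(noisy_mag) * noisy_mag          # suppress unwanted distortion
    coarse = torch.stack([masked_mag * torch.cos(noisy_phase),
                          masked_mag * torch.sin(noisy_phase)], dim=-1)
    return coarse + refine_branch(noisy_complex)             # enhanced real/imag spectrogram
```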
Speech coding facilitates the transmission of speech over low-bandwidth networks with minimal distortion. Neural-network-based speech codecs have recently demonstrated significant quality improvements over traditional approaches. While this new generation of codecs is capable of synthesizing high-fidelity speech, their use of recurrent or convolutional layers often restricts their effective receptive field, which prevents them from compressing speech efficiently. We propose to further reduce the bitrate of neural speech codecs by using pretrained Transformers, which can exploit long-range dependencies in the input signal thanks to their inductive bias. We therefore use a pretrained Transformer in tandem with a convolutional encoder, trained end-to-end with a quantizer and a generative adversarial net decoder. Our numerical experiments show that supplementing the convolutional encoder of a neural speech codec with Transformer speech embeddings yields a codec with a bitrate of 600 bps that surpasses the original neural speech codec in synthesized speech quality when trained at the same bitrate. Subjective human evaluations suggest that the quality of the resulting codec is comparable to or better than that of conventional codecs operating at three to four times the rate.
Previous works (Donahue et al., 2018a;Engel et al., 2019a) have found that generating coherent raw audio waveforms with GANs is challenging. In this paper, we show that it is possible to train GANs reliably to generate high quality coherent waveforms by introducing a set of architectural changes and simple training techniques. Subjective evaluation metric (Mean Opinion Score, or MOS) shows the effectiveness of the proposed approach for high quality mel-spectrogram inversion. To establish the generality of the proposed techniques, we show qualitative results of our model in speech synthesis, music domain translation and unconditional music synthesis. We evaluate the various components of the model through ablation studies and suggest a set of guidelines to design general purpose discriminators and generators for conditional sequence synthesis tasks. Our model is non-autoregressive, fully convolutional, with significantly fewer parameters than competing models and generalizes to unseen speakers for mel-spectrogram inversion. Our PyTorch implementation runs more than 100x faster than real-time on a GTX 1080Ti GPU and more than 2x faster than real-time on CPU, without any hardware-specific optimization tricks.
We introduce AudioLM, a framework for high-quality audio generation with long-term consistency. AudioLM maps the input audio to a sequence of discrete tokens and casts audio generation as a language modeling task in this representation space. We show how existing audio tokenizers provide different trade-offs between reconstruction quality and long-term structure, and we propose a hybrid tokenization scheme to achieve both objectives. Namely, we leverage the discretized activations of a masked language model pre-trained on audio to capture long-term structure, and the discrete codes produced by a neural audio codec to achieve high-quality synthesis. By training on large corpora of raw audio waveforms, AudioLM learns to generate natural and coherent continuations given short prompts. When trained on speech, and without any transcript or annotation, AudioLM generates syntactically and semantically plausible speech continuations, while also maintaining speaker identity and prosody for unseen speakers. Furthermore, we demonstrate how our approach extends beyond speech by generating coherent piano music continuations, despite being trained without any symbolic representation of music.
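A very high-level sketch of the hybrid-token generation described above; it compresses the actual multi-stage pipeline into two steps, and every callable and method name here is a hypothetical stand-in for a trained component, with token sequences represented as plain Python lists:

```python
def generate_continuation(prompt_audio, semantic_tokenizer, acoustic_tokenizer,
                          semantic_lm, acoustic_lm, codec_decoder):
    """Coarse 'semantic' tokens carry long-term structure; codec tokens carry detail."""
    semantic = semantic_tokenizer(prompt_audio)            # discretized masked-LM activations
    semantic += semantic_lm.sample_continuation(semantic)  # extend the long-term structure
    acoustic = acoustic_tokenizer(prompt_audio)            # neural-codec codes of the prompt
    acoustic += acoustic_lm.sample_continuation(acoustic, conditioning=semantic)
    return codec_decoder(acoustic)                         # waveform for prompt + continuation
```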
In this paper, we present a novel method for phoneme-level prosody control of F0 and duration using intuitive discrete labels. We propose an unsupervised prosodic clustering process which is used to discretize phoneme-level F0 and duration features from a multispeaker speech dataset. These features are fed as an input sequence of prosodic labels to a prosody encoder module which augments an autoregressive attention-based text-to-speech model. We utilize various methods in order to improve prosodic control range and coverage, such as augmentation, F0 normalization, balanced clustering for duration and speaker-independent clustering. The final model enables fine-grained phoneme-level prosody control for all speakers contained in the training set, while maintaining the speaker identity. Instead of relying on reference utterances for inference, we introduce a prior prosody encoder which learns the style of each speaker and enables speech synthesis without the requirement of reference audio. We also fine-tune the multispeaker model to unseen speakers with limited amounts of data, as a realistic application scenario and show that the prosody control capabilities are maintained, verifying that the speaker-independent prosodic clustering is effective. Experimental results show that the model has high output speech quality and that the proposed method allows efficient prosody control within each speaker's range despite the variability that a multispeaker setting introduces.
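As a simplified illustration of turning continuous phoneme-level prosody features into discrete labels (KMeans here is an assumption; the paper's balanced, speaker-independent clustering of F0 and duration is more involved):

```python
import numpy as np
from sklearn.cluster import KMeans

def prosody_labels(phoneme_f0, n_labels=5):
    """Cluster phoneme-level F0 values into a small set of discrete prosodic labels."""
    values = np.asarray(phoneme_f0, dtype=np.float64).reshape(-1, 1)
    km = KMeans(n_clusters=n_labels, n_init=10, random_state=0).fit(values)
    order = np.argsort(km.cluster_centers_.ravel())   # relabel so 0 = lowest-F0 cluster
    rank = np.empty_like(order)
    rank[order] = np.arange(n_labels)
    return rank[km.labels_]                           # one label per phoneme
```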
Recent advances in neural text-to-speech research have been dominated by two-stage pipelines that use low-level intermediate speech representations such as mel-spectrograms. However, such predetermined features are fundamentally limited, because they do not allow the full potential of a data-driven approach to be exploited through learned hidden representations. Several end-to-end methods have therefore been proposed. However, such models are harder to train and require a large number of high-quality recordings with transcriptions. Here we propose WavThruVec, a two-stage architecture that resolves this bottleneck by using high-dimensional wav2vec 2.0 embeddings as the intermediate speech representation. Since these hidden activations provide high-level linguistic features, they are more robust to noise. This allows us to use lower-quality annotated speech datasets to train the first-stage module. At the same time, the second-stage component can be trained on large-scale untranscribed audio corpora, since wav2vec 2.0 embeddings are already time-aligned. This results in improved generalization to out-of-vocabulary words, as well as better generalization to unseen speakers. We show that the proposed model not only matches the quality of state-of-the-art neural models, but also exhibits useful properties that enable tasks such as voice conversion or zero-shot synthesis.
This paper describes Tacotron 2, a neural network architecture for speech synthesis directly from text. The system is composed of a recurrent sequence-to-sequence feature prediction network that maps character embeddings to mel-scale spectrograms, followed by a modified WaveNet model acting as a vocoder to synthesize time-domain waveforms from those spectrograms. Our model achieves a mean opinion score (MOS) of 4.53 comparable to a MOS of 4.58 for professionally recorded speech. To validate our design choices, we present ablation studies of key components of our system and evaluate the impact of using mel spectrograms as the conditioning input to WaveNet instead of linguistic, duration, and F0 features. We further show that using this compact acoustic intermediate representation allows for a significant reduction in the size of the WaveNet architecture.
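The two-stage pipeline amounts to the following, where both callables are placeholders for the trained networks rather than an actual implementation:

```python
def synthesize(text, feature_predictor, vocoder):
    """Characters -> mel-scale spectrogram -> time-domain waveform."""
    mel = feature_predictor(text)   # recurrent seq2seq feature prediction network
    return vocoder(mel)             # modified WaveNet conditioned on the mel spectrogram
```

The ablation described in the abstract swaps the conditioning input of `vocoder` between this compact mel representation and linguistic, duration, and F0 features.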
In this paper, a text-to-rapping/singing system is introduced that can be adapted to any speaker's voice. It uses a Tacotron-based multispeaker acoustic model trained on read speech data only, and provides prosody control at the phoneme level. Dataset augmentation and additional prosody manipulation based on traditional DSP algorithms are also investigated. The neural TTS model is fine-tuned on limited recordings of an unseen speaker, enabling rapping/singing synthesis in the target speaker's voice. The detailed pipeline of the system is described, including the extraction of target pitch and duration values from an a cappella song and their conversion into the valid note range of the target speaker before synthesis. An additional stage of prosodic manipulation of the output via WSOLA is also investigated, in order to better match the target duration values. The synthesized utterances can be mixed with an instrumental accompaniment track to produce a complete song. The proposed system is evaluated through subjective listening tests, as well as in comparison to an available alternative system that likewise aims to produce synthetic singing voice from read-only training data. The results show that the proposed approach can produce high-quality rapping/singing voice with increased naturalness.
This paper introduces WaveNet, a deep neural network for generating raw audio waveforms. The model is fully probabilistic and autoregressive, with the predictive distribution for each audio sample conditioned on all previous ones; nonetheless we show that it can be efficiently trained on data with tens of thousands of samples per second of audio. When applied to text-to-speech, it yields state-of-the-art performance, with human listeners rating it as significantly more natural sounding than the best parametric and concatenative systems for both English and Mandarin. A single WaveNet can capture the characteristics of many different speakers with equal fidelity, and can switch between them by conditioning on the speaker identity. When trained to model music, we find that it generates novel and often highly realistic musical fragments. We also show that it can be employed as a discriminative model, returning promising results for phoneme recognition.
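The autoregressive formulation described above can be sketched as a sample-by-sample generation loop; `next_sample_probs` is an assumed stand-in returning the predictive distribution over (for example) 256 mu-law amplitude bins given the recent history:

```python
import numpy as np

def generate(next_sample_probs, n_samples, receptive_field, seed=0):
    """Each sample is drawn from a distribution conditioned on all previous samples."""
    rng = np.random.default_rng(seed)
    audio = [0] * receptive_field                            # zero-valued history to start from
    for _ in range(n_samples):
        probs = next_sample_probs(audio[-receptive_field:])  # p(x_t | x_{<t})
        audio.append(int(rng.choice(len(probs), p=probs)))   # sample the next amplitude bin
    return np.array(audio[receptive_field:])
```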
Deep neural network (DNN) techniques have become pervasive in domains such as natural language processing and computer vision. They have achieved great success in these domains in tasks such as machine translation and image generation. Owing to this success, these data-driven techniques have also been applied in the audio domain. More specifically, DNN models have been applied to speech enhancement to achieve denoising, dereverberation and multi-speaker separation in monaural speech enhancement. In this paper, we review some dominant DNN techniques employed to achieve speech separation. The review covers the whole speech enhancement pipeline, from feature extraction, to how DNN-based tools model both global and local features of speech, to model training (supervised and unsupervised). We also review the use of speech-enhancement pre-trained models to boost the speech enhancement process. The review is geared towards covering the dominant trends in DNN application to the enhancement of speech obtained from a single speaker.
Objective: Despite numerous studies proposed for audio restoration in the literature, most of them focus on an isolated restoration problem such as denoising or dereverberation, ignoring other artifacts. Moreover, assuming a noisy or reverberant environment with a limited number of fixed signal-to-distortion ratio (SDR) levels is a common practice. However, real-world audio is often corrupted by a blend of artifacts such as reverberation, sensor noise, and background audio mixture with varying types, severities, and duration. In this study, we propose a novel approach for blind restoration of real-world audio signals by Operational Generative Adversarial Networks (Op-GANs) with temporal and spectral objective metrics to enhance the quality of the restored audio signal regardless of the type and severity of each artifact corrupting it. Methods: 1D Operational-GANs are used with a generative neuron model optimized for blind restoration of any corrupted audio signal. Results: The proposed approach has been evaluated extensively over the benchmark TIMIT-RAR (speech) and GTZAN-RAR (non-speech) datasets corrupted with a random blend of artifacts, each with a random severity, to mimic real-world audio signals. Average SDR improvements of over 7.2 dB and 4.9 dB are achieved, respectively, which are substantial when compared with the baseline methods. Significance: This is a pioneering study in blind audio restoration with the unique capability of direct (time-domain) restoration of real-world audio whilst achieving an unprecedented level of performance for a wide SDR range and artifact types. Conclusion: 1D Op-GANs can achieve robust and computationally effective real-world audio restoration with significantly improved performance. The source codes and the generated real-world audio datasets are shared publicly with the research community in a dedicated GitHub repository.
Generative adversarial networks have recently demonstrated outstanding performance in neural vocoding, outperforming the best autoregressive and flow-based models. In this paper, we show that this success can be extended to other tasks of conditional audio generation. In particular, building on HiFi vocoders, we propose a novel HiFi++ general framework for bandwidth extension and speech enhancement. We show that, with an improved generator architecture and simplified multi-discriminator training, HiFi++ performs better than or comparably to the state of the art in these tasks, while using significantly fewer computational resources. The effectiveness of our approach is validated through a series of extensive experiments.
Wideband Audio Waveform Evaluation networks (WAWEnets) are convolutional neural networks that operate directly on wideband audio waveforms in order to produce evaluations of those waveforms. In the present work these evaluations are qualities of telecommunications speech (e.g., noisiness, intelligibility, overall speech quality). WAWEnets are no-reference networks because they do not require a "reference" (original or undistorted) version of the waveforms they evaluate. Our initial WAWEnet publication introduced four WAWEnets that emulate the outputs of established full-reference speech quality or intelligibility estimation algorithms. We have since updated the WAWEnet architecture to be more efficient and effective. Here we present a single WAWEnet that closely tracks seven different quality and intelligibility values. We also create a second network that tracks four subjective speech quality dimensions, and a third network that focuses on just subjective quality scores and achieves very high agreement with them. This work leverages 334 hours of speech in 13 languages, more than two million full-reference target values, and more than 93,000 subjective opinion scores. We also interpret the operation of WAWEnets and identify the key to their operation in the language of signal processing: ReLUs strategically move spectral information from non-DC components into the DC component. The DC values of the 96 output signals define a vector in a 96-D latent space, and this vector is then mapped to a quality or intelligibility value for the input waveform.
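The mapping described in the last two sentences can be written down directly; `final_weights` and `final_bias` stand in for the learned final mapping, and the 96-channel output is assumed as an array of shape (96, T):

```python
import numpy as np

def quality_from_outputs(output_signals, final_weights, final_bias):
    """DC (mean) value of each of the 96 output signals -> 96-D latent vector -> scalar score."""
    latent = np.asarray(output_signals).mean(axis=-1)   # shape (96,): the DC components
    return float(final_weights @ latent + final_bias)   # quality or intelligibility estimate
```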