With recent research advances, deep learning models have become an attractive choice for acoustic echo cancellation (AEC) in real-time teleconferencing applications. Since acoustic echo is one of the major sources of poor audio quality, a wide variety of deep models has been proposed. However, an important but often overlooked requirement for good echo cancellation quality is the synchronization of the microphone and far-end signals. Typically implemented with classical cross-correlation-based algorithms, the alignment module is a separate functional block with known design limitations. In our work, we propose a deep learning architecture with built-in self-attention-based alignment, which is able to handle unaligned inputs, improving echo cancellation performance while simplifying the communication pipeline. Moreover, we show that our approach achieves significant improvements on difficult delay estimation cases with real recordings from the AEC Challenge dataset.
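As an illustration of the built-in alignment idea, the minimal PyTorch sketch below (an assumption for clarity, not the paper's architecture) lets microphone-frame queries attend over the far-end frame sequence, so a soft delay compensation is learned jointly with the canceller rather than computed by a separate cross-correlation block:

```python
# Hypothetical attention-based alignment for AEC: each microphone frame
# queries the far-end features, and the attention weights act as a soft,
# learned delay estimate. Feature and head sizes are illustrative.
import torch
import torch.nn as nn

class AttentionAlign(nn.Module):
    def __init__(self, feat_dim=64, n_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(feat_dim, n_heads, batch_first=True)

    def forward(self, mic_feats, far_feats):
        # mic_feats, far_feats: (batch, time, feat_dim)
        aligned, weights = self.attn(mic_feats, far_feats, far_feats)
        # Concatenate mic features with the softly aligned far-end features
        # and hand the result to the echo-cancellation network.
        return torch.cat([mic_feats, aligned], dim=-1), weights
```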
On-device directional hearing requires audio source separation from a given direction while meeting stringent human-imperceptible latency requirements. While neural networks can achieve significantly better performance than traditional beamformers, all existing models fall short of supporting low-latency causal inference on computationally constrained wearables. We present a hybrid model that combines a traditional beamformer with a custom lightweight neural network. The former reduces the computational burden of the latter and also improves its generalizability, while the latter is designed to further reduce the memory and computational overhead to enable real-time and low-latency operation. Our evaluation shows performance comparable to state-of-the-art causal inference models on synthetic data, while achieving a 5x reduction in model size, a 4x reduction in computation per second, a 5x reduction in processing time, and better generalization to real hardware data. Furthermore, our real-time hybrid model runs in 8 ms on a mobile CPU designed for low-power wearable devices and achieves an end-to-end latency of 17.5 ms.
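To make the hybrid split concrete, here is a minimal delay-and-sum beamformer of the kind that could serve as the traditional front end; the array geometry, sample rate, and sign conventions are illustrative assumptions rather than the paper's design:

```python
# A sketch of the classic front end in a hybrid pipeline: steer the array
# toward the target direction, sum the channels, and pass the result to a
# small neural network for the remaining separation work.
import numpy as np

def delay_and_sum(mics, mic_positions, direction, fs=16000, c=343.0):
    """mics: (n_mics, n_samples); mic_positions: (n_mics, 3) in meters;
    direction: unit vector toward the source."""
    delays = mic_positions @ direction / c    # per-mic propagation delay (s)
    delays -= delays.min()                    # make all delays non-negative
    out = np.zeros(mics.shape[1])
    for sig, d in zip(mics, delays):
        shift = int(round(d * fs))            # integer-sample steering
        out[: mics.shape[1] - shift] += sig[shift:]
    return out / len(mics)
```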
Single-channel, speaker-independent speech separation methods have recently seen great progress. However, the accuracy, latency, and computational cost of such methods remain insufficient. The majority of the previous methods have formulated the separation problem through the time-frequency representation of the mixed signal, which has several drawbacks, including the decoupling of the phase and magnitude of the signal, the suboptimality of time-frequency representation for speech separation, and the long latency in calculating the spectrograms. To address these shortcomings, we propose a fully-convolutional time-domain audio separation network (Conv-TasNet), a deep learning framework for end-to-end time-domain speech separation. Conv-TasNet uses a linear encoder to generate a representation of the speech waveform optimized for separating individual speakers. Speaker separation is achieved by applying a set of weighting functions (masks) to the encoder output. The modified encoder representations are then inverted back to the waveforms using a linear decoder. The masks are found using a temporal convolutional network (TCN) consisting of stacked 1-D dilated convolutional blocks, which allows the network to model the long-term dependencies of the speech signal while maintaining a small model size. The proposed Conv-TasNet system significantly outperforms previous time-frequency masking methods in separating two- and three-speaker mixtures. Additionally, Conv-TasNet surpasses several ideal time-frequency magnitude masks in two-speaker speech separation as evaluated by both objective distortion measures and subjective quality assessment by human listeners. Finally, Conv-TasNet has a significantly smaller model size and a shorter minimum latency, making it a suitable solution for both offline and real-time speech separation applications. This study therefore represents a major step toward the realization of speech separation systems for real-world speech processing technologies.
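The encoder-mask-decoder flow described above can be sketched in a few lines of PyTorch; the mask estimator below stands in for the full stacked dilated TCN, and the layer sizes are only indicative:

```python
# Skeleton of the Conv-TasNet data flow: linear encoder, per-speaker masks,
# linear decoder. The "masker" abbreviates the paper's deep dilated TCN.
import torch
import torch.nn as nn

class TinyTasNet(nn.Module):
    def __init__(self, n_filters=512, kernel=16, n_speakers=2):
        super().__init__()
        stride = kernel // 2
        self.n_speakers = n_speakers
        self.encoder = nn.Conv1d(1, n_filters, kernel, stride=stride, bias=False)
        self.masker = nn.Sequential(              # stand-in for the TCN
            nn.Conv1d(n_filters, n_filters, 3, padding=2, dilation=2),
            nn.PReLU(),
            nn.Conv1d(n_filters, n_filters * n_speakers, 1),
            nn.Sigmoid(),
        )
        self.decoder = nn.ConvTranspose1d(n_filters, 1, kernel,
                                          stride=stride, bias=False)

    def forward(self, mix):                        # mix: (batch, 1, time)
        rep = self.encoder(mix)                    # (batch, N, frames)
        masks = self.masker(rep).chunk(self.n_speakers, dim=1)
        return [self.decoder(rep * m) for m in masks]  # per-speaker waveforms
```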
Convolution-augmented transformers (Conformers) have recently been proposed in various speech-domain applications, such as automatic speech recognition (ASR) and speech separation, as they can capture both local and global dependencies. In this paper, we propose a conformer-based metric generative adversarial network (CMGAN) for speech enhancement (SE) in the time-frequency (TF) domain. The generator encodes the magnitude and complex spectrogram information using two-stage conformer blocks to model both time and frequency dependencies. The decoder then decouples the estimation into a magnitude mask decoder branch to filter out unwanted distortions and a complex refinement branch to further improve the magnitude estimation and implicitly enhance the phase information. In addition, we include a metric discriminator to alleviate metric mismatch by optimizing the corresponding evaluation score. Objective and subjective evaluations illustrate that CMGAN shows superior performance compared to state-of-the-art methods in three speech enhancement tasks (denoising, dereverberation, and super-resolution). For instance, quantitative denoising analysis on the Voice Bank+DEMAND dataset indicates that CMGAN outperforms previous models by a margin, with a PESQ of 3.41 and an SSNR of 11.10 dB.
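The metric-discriminator idea can be illustrated with a small PyTorch sketch: a network rates (enhanced, clean) magnitude-spectrogram pairs with a normalized quality score, and the generator is trained to push that score toward its maximum. The layer sizes and the exact conditioning are assumptions, not the paper's configuration:

```python
# A toy metric discriminator: it learns to predict a normalized quality score
# (e.g. scaled PESQ) for a pair of spectrograms, so the generator can be
# optimized directly against the evaluation metric it will be judged on.
import torch
import torch.nn as nn

class MetricDiscriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 16, 3, stride=2), nn.PReLU(),
            nn.Conv2d(16, 32, 3, stride=2), nn.PReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, 1), nn.Sigmoid(),        # score in [0, 1]
        )

    def forward(self, est_mag, clean_mag):         # each: (batch, freq, time)
        return self.net(torch.stack([est_mag, clean_mag], dim=1))

def generator_metric_loss(disc, est_mag, clean_mag):
    # The generator wants the discriminator to rate its output as perfect (1.0).
    score = disc(est_mag, clean_mag)
    return ((score - 1.0) ** 2).mean()
```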
We present a scalable and efficient neural waveform coding system for speech compression. We formulate the speech coding problem as an autoencoding task, in which a convolutional neural network (CNN) performs encoding and decoding as a neural waveform codec (NWC) during its feedforward routine. The proposed NWC also defines quantization and entropy coding as trainable modules, so coding artifacts and bitrate control are handled during the optimization process. We achieve efficiency by introducing compact model components to the NWC, such as gated residual networks and depthwise separable convolutions. Furthermore, the proposed model features a scalable architecture, cross-module residual learning (CMRL), to cover a wide range of bitrates. To this end, we employ the residual coding concept to concatenate multiple NWC autoencoding modules, where each NWC module performs residual coding to restore whatever reconstruction loss its preceding module has created. CMRL can also scale down to cover lower bitrates, for which it employs a linear predictive coding (LPC) module as its first autoencoder. The hybrid design integrates LPC and NWC by redefining LPC quantization as a differentiable process, making the system trainable in an end-to-end manner. The decoder of the proposed system consists of a single NWC (0.12 million parameters) in the low-to-medium bitrate range (12 to 20 kbps) or two NWCs at the high bitrate (32 kbps). Although the decoding complexity is not yet as low as that of conventional speech codecs, it is significantly reduced compared with other neural speech coders, such as WaveNet-based vocoders. For wideband speech coding quality, our system yields performance comparable or superior to AMR-WB in listening tests at low and medium bitrates. The proposed system can be scaled up to higher bitrates to achieve near-transparent performance.
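The cross-module residual-learning principle reduces to a simple cascade: each stage codes the residual left by its predecessors, and the decoded outputs are summed. A hedged sketch follows, with each stage standing in for an NWC (or LPC) autoencoder rather than the paper's actual modules:

```python
# Cross-module residual coding: stage k encodes and decodes the residual
# signal that stages 1..k-1 failed to reconstruct; the final output is the
# sum of all stage reconstructions.
import torch
import torch.nn as nn

class CMRLCascade(nn.Module):
    def __init__(self, stages):
        super().__init__()
        self.stages = nn.ModuleList(stages)   # e.g. [LPC-like, NWC, NWC]

    def forward(self, x):
        residual, decoded = x, 0.0
        for stage in self.stages:
            recon = stage(residual)           # code-and-decode this residual
            decoded = decoded + recon
            residual = residual - recon       # what the next stage must fix
        return decoded
```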
The SepFormer architecture shows very good results in speech separation. Like other learned-encoder models, it uses short frames, as they have been shown to achieve better performance in these cases. This results in a large number of frames at the input, which is problematic: since the SepFormer is transformer-based, its computational complexity grows dramatically with longer sequences. In this paper, we employ the SepFormer in a speech enhancement task and show that, by replacing the learned-encoder features with a short-time Fourier transform (STFT) representation, we can use long frames without compromising perceptual enhancement performance. We obtain equivalent quality and intelligibility evaluation scores while reducing the processing for a 10-second utterance by a factor of approximately 8.
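The frame-count argument is easy to verify numerically. Assuming a 16 kHz signal, a 2 ms-hop learned encoder versus a 16 ms-hop STFT (both values are illustrative, not the paper's exact settings) yields roughly the eightfold reduction reported above:

```python
# Compare the sequence length a transformer sees with a short-hop learned
# encoder versus a long-frame STFT front end, for 10 s of 16 kHz audio.
import torch

x = torch.randn(1, 160000)                    # 10 s at 16 kHz
learned_hop, stft_hop = 32, 256               # samples: 2 ms vs 16 ms hops
frames_learned = x.shape[1] // learned_hop    # ~5000 frames
spec = torch.stft(x, n_fft=512, hop_length=stft_hop,
                  window=torch.hann_window(512), return_complex=True)
frames_stft = spec.shape[-1]                  # ~626 frames
print(frames_learned, frames_stft, frames_learned / frames_stft)  # ratio ~8
```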
Deep neural network (DNN) techniques have become pervasive in domains such as natural language processing and computer vision. They have achieved great success in these domains in tasks such as machine translation and image generation. Due to their success, these data-driven techniques have been applied in the audio domain. More specifically, DNN models have been applied in the speech enhancement domain to achieve denoising, dereverberation, and multi-speaker separation in monaural speech enhancement. In this paper, we review some of the dominant DNN techniques being employed to achieve speech separation. The review covers the whole pipeline of speech enhancement: feature extraction, how DNN-based tools model both global and local features of speech, and model training (supervised and unsupervised). We also review the use of speech-enhancement pre-trained models to boost the speech enhancement process. The review is geared towards covering the dominant trends in the application of DNNs to the enhancement of speech obtained from a single speaker.
In this paper, we present the Blind Speech Separation and Dereverberation (BSSD) network, which performs simultaneous speaker separation, dereverberation, and speaker identification in a single neural network. Speaker separation is guided by a set of predefined spatial cues. Dereverberation is performed using neural beamforming, and speaker identification is aided by embedding vectors and triplet mining. We introduce a frequency-domain model that uses complex-valued neural networks, and a time-domain variant that performs beamforming in latent space. Further, we propose a block-online mode to process longer recordings, as they occur in meeting scenarios. We evaluate our system in terms of scale-independent signal-to-distortion ratio (SI-SDR), word error rate (WER), and equal error rate (EER).
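For reference, the SI-SDR metric used in the evaluation can be computed as follows; this is a standard implementation assuming equal-length 1-D NumPy signals, not code from the paper:

```python
# Scale-invariant signal-to-distortion ratio: project the estimate onto the
# target to remove any gain difference, then compare signal and error energy.
import numpy as np

def si_sdr(estimate, target, eps=1e-8):
    target = target - target.mean()
    estimate = estimate - estimate.mean()
    s_target = (estimate @ target) / (target @ target + eps) * target
    e_noise = estimate - s_target
    return 10 * np.log10((s_target @ s_target) / (e_noise @ e_noise + eps))
```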
Acoustic echo cancellation (AEC) is designed to remove echoes, reverberation, and unwanted added sounds from the microphone signal while maintaining the quality of the near-end speaker's speech. This paper proposes adaptive speech quality complex neural networks that focus on specific tasks for real-time acoustic echo cancellation. Specifically, we propose a complex modularized neural network with different stages to focus on feature extraction, acoustic separation, and mask optimization, respectively. Furthermore, we adopt a contrastive learning framework and novel speech quality aware loss functions to further improve the performance. The model is trained with 72 hours of data for pre-training and then 72 hours for fine-tuning. The proposed model outperforms the state of the art.
Radar sensors are gradually becoming widespread equipment in road vehicles, playing a crucial role in autonomous driving and road safety. The broad adoption of radar sensors increases the probability of interference between sensors of different vehicles, producing corrupted range profiles and range-Doppler maps. In order to extract the distance and velocity of multiple targets from range-Doppler maps, the interference affecting each range profile needs to be mitigated. In this paper, we propose a fully convolutional neural network for automotive radar interference mitigation. In order to train our network on realistic scenarios, we introduce a new data set of realistic automotive radar signals with multiple targets and multiple interferers. To the best of our knowledge, we are the first to apply weight pruning in the automotive radar domain, obtaining superior results compared to the widely used dropout. While most previous works successfully estimated the magnitude of automotive radar signals, we propose a deep learning model that can accurately estimate the phase. For instance, our novel approach halves the phase estimation error relative to the commonly adopted zeroing technique, from 12.55 degrees to 6.58 degrees. Considering the lack of databases for automotive radar interference mitigation, we release as open source our large-scale data set, which closely replicates real-world automotive scenarios with multiple interference cases, allowing others to objectively compare their future work in this domain. Our data set is available for download at: http://github.com/ristea/arim-v2.
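Magnitude-based weight pruning of the kind credited above is available out of the box in PyTorch; the toy two-channel (real/imaginary) network and the 50% sparsity level below are illustrative assumptions, not the paper's model:

```python
# Magnitude (L1) weight pruning with PyTorch's built-in utilities: zero out
# the smallest 50% of weights in each convolution, then make the mask
# permanent so the sparse weights are baked into the module.
import torch.nn as nn
import torch.nn.utils.prune as prune

net = nn.Sequential(nn.Conv1d(2, 32, 9, padding=4), nn.ReLU(),
                    nn.Conv1d(32, 2, 9, padding=4))
for module in net.modules():
    if isinstance(module, nn.Conv1d):
        prune.l1_unstructured(module, name="weight", amount=0.5)
        prune.remove(module, "weight")   # bake the zeroed weights in
```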
Many of the performance gains enabled by massive multiple-input multiple-output (MIMO) depend on the accuracy of the downlink channel state information (CSI) at the transmitter (base station), which is usually obtained by estimation at the receiver (user terminal) and fed back to the transmitter. The overhead of CSI feedback occupies substantial uplink bandwidth resources, especially when the number of transmit antennas is large. Deep learning (DL)-based CSI feedback refers to CSI compression and reconstruction by a DL-based autoencoder and can greatly reduce the feedback overhead. In this paper, a comprehensive overview of state-of-the-art research on this topic is provided, starting with the basic DL concepts widely used in CSI feedback and then categorizing and describing some existing DL-based feedback works. The focus is on novel neural network architectures and the utilization of communication expert knowledge to improve CSI feedback accuracy. Works on CSI feedback itself and on the joint design of CSI feedback with other communication modules are also introduced, and some practical issues are discussed, including training dataset collection, online training, complexity, generalization, and standardization effects. At the end of the paper, some challenges and potential research directions associated with DL-based CSI feedback in future wireless communication systems are identified.
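The core DL-based feedback loop is an autoencoder whose bottleneck is the fed-back codeword. A toy sketch follows, assuming the common 32x32 angular-delay CSI matrix with separate real/imaginary channels, a convention from the literature rather than any specific system:

```python
# Toy CsiNet-style autoencoder for CSI feedback: the encoder (at the user
# terminal) compresses the CSI matrix into a short codeword, which is fed
# back and reconstructed by the decoder (at the base station).
import torch
import torch.nn as nn

class CsiAutoencoder(nn.Module):
    def __init__(self, codeword_dim=64):                 # compression ratio 1/32
        super().__init__()
        self.conv = nn.Conv2d(2, 2, 3, padding=1)
        self.enc = nn.Linear(2 * 32 * 32, codeword_dim)  # runs at the UE
        self.dec = nn.Linear(codeword_dim, 2 * 32 * 32)  # runs at the BS

    def forward(self, h):                                # h: (batch, 2, 32, 32)
        z = self.enc(self.conv(h).flatten(1))            # feedback codeword
        return self.dec(z).view_as(h)
```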
Neural audio/speech coding has recently shown its capability to deliver high quality at much lower bitrates than traditional methods. However, existing neural audio/speech codecs employ either acoustic features or blind features learned with a convolutional neural network for encoding, by which there are still temporal redundancies within the encoded features. This paper introduces latent-domain predictive coding into the VQ-VAE framework to fully remove such redundancies, and proposes TF-Codec for low-latency neural speech coding in an end-to-end manner. Specifically, the extracted features are encoded conditioned on a prediction from past quantized latent frames so that temporal correlations are further removed. Moreover, we introduce a learnable compression on the time-frequency input to adaptively adjust the attention paid to the main frequencies and details at different bitrates. A differentiable vector quantization scheme based on distance mapping and Gumbel-Softmax is proposed to better model the latent distributions under a rate constraint. Subjective results on multilingual speech datasets show that, with a latency of 40 ms, the proposed TF-Codec at 1 kbps achieves significantly better quality than Opus at 9 kbps, and TF-Codec at 3 kbps outperforms both EVS at 9.6 kbps and Opus at 12 kbps. Numerous studies are conducted to demonstrate the effectiveness of these techniques.
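A differentiable codebook lookup via Gumbel-Softmax can be sketched as follows; the plain negative-squared-distance logits here are an assumption standing in for the paper's distance mapping:

```python
# Differentiable vector quantization: score each codeword by its (negative
# squared) distance to the latent, sample a one-hot selection with
# Gumbel-Softmax, and return the chosen codeword. With hard=True the forward
# pass is a true one-hot pick while gradients flow through the soft sample.
import torch
import torch.nn.functional as F

def gumbel_vq(latent, codebook, tau=1.0):
    # latent: (batch, dim); codebook: (n_codes, dim)
    dists = torch.cdist(latent, codebook) ** 2       # (batch, n_codes)
    one_hot = F.gumbel_softmax(-dists, tau=tau, hard=True)
    return one_hot @ codebook                        # quantized latent
```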
Generative adversarial networks have recently demonstrated outstanding performance in neural vocoding, outperforming the best autoregressive and flow-based models. In this paper, we show that this success can be extended to other tasks of conditional audio generation. In particular, building upon HiFi vocoders, we propose the novel HiFi++ general framework for bandwidth extension and speech enhancement. We show that, with an improved generator architecture and simplified multi-discriminator training, HiFi++ performs better than or on par with the state of the art in these tasks while using significantly fewer computational resources. The effectiveness of our approach is validated through a series of extensive experiments.
Speech neuroprostheses have the potential to enable communication for people with dysarthria or anarthria. Recent advances have demonstrated high-quality text decoding and speech synthesis from electrocorticographic grids placed on the cortical surface. Here, we investigate a less invasive measurement modality, namely stereotactic electroencephalography (sEEG), which provides sparse sampling from multiple brain regions, including subcortical regions. To assess whether sEEG can also be used to synthesize high-quality audio from neural recordings, we employ a recurrent encoder-decoder framework based on modern deep learning methods. We demonstrate that high-quality speech can be reconstructed from these minimally invasive recordings despite the limited amount of training data. Finally, we use variational feature dropout to successfully identify the most informative electrode contacts.
Recently, convolution-augmented transformers (Conformers) have achieved promising performance in automatic speech recognition (ASR) and time-domain speech enhancement (SE), as they can capture both local and global dependencies in the speech signal. In this paper, we propose a conformer-based metric generative adversarial network (CMGAN) for SE in the time-frequency (TF) domain. In the generator, we utilize two-stage conformer blocks to aggregate all magnitude and complex spectrogram information by modeling both time and frequency dependencies. The estimation of the magnitude and complex spectrograms is decoupled in the decoder stage, and the two are then jointly incorporated to reconstruct the enhanced speech. In addition, a metric discriminator is employed to further improve the quality of the enhanced speech by optimizing the corresponding evaluation score. Quantitative analysis on the Voice Bank+DEMAND dataset indicates the capability of CMGAN in outperforming previous models, with a PESQ of 3.41 and an SSNR of 11.10 dB.
Objective: Despite numerous studies proposed for audio restoration in the literature, most of them focus on an isolated restoration problem such as denoising or dereverberation, ignoring other artifacts. Moreover, assuming a noisy or reverberant environment with a limited number of fixed signal-to-distortion ratio (SDR) levels is a common practice. However, real-world audio is often corrupted by a blend of artifacts such as reverberation, sensor noise, and background audio mixture with varying types, severities, and duration. In this study, we propose a novel approach for blind restoration of real-world audio signals by Operational Generative Adversarial Networks (Op-GANs) with temporal and spectral objective metrics to enhance the quality of the restored audio signal regardless of the type and severity of each artifact corrupting it. Methods: 1D Operational GANs are used with a generative neuron model optimized for blind restoration of any corrupted audio signal. Results: The proposed approach has been evaluated extensively over the benchmark TIMIT-RAR (speech) and GTZAN-RAR (non-speech) datasets corrupted with a random blend of artifacts, each with a random severity, to mimic real-world audio signals. Average SDR improvements of over 7.2 dB and 4.9 dB are achieved, respectively, which are substantial when compared with the baseline methods. Significance: This is a pioneer study in blind audio restoration with the unique capability of direct (time-domain) restoration of real-world audio whilst achieving an unprecedented level of performance for a wide SDR range and artifact types. Conclusion: 1D Op-GANs can achieve robust and computationally effective real-world audio restoration with significantly improved performance. The source codes and the generated real-world audio datasets are shared publicly with the research community in a dedicated GitHub repository.
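The generative-neuron idea behind operational layers can be sketched as a truncated Maclaurin series: the layer sums convolutions of element-wise powers of its input, so each neuron learns its own nonlinearity instead of a fixed linear filter. The sketch below is an illustrative Self-ONN-style layer under that assumption; Q and the channel sizes are not taken from the paper:

```python
# A 1-D "generative neuron" layer sketch: y = sum_q conv_q(x ** q), i.e. a
# learnable truncated power series per neuron. tanh bounds the powers.
import torch
import torch.nn as nn

class SelfONN1d(nn.Module):
    def __init__(self, in_ch, out_ch, kernel, q=3):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv1d(in_ch, out_ch, kernel, padding=kernel // 2)
            for _ in range(q))

    def forward(self, x):                 # x: (batch, in_ch, time)
        x = torch.tanh(x)                 # keep the powers bounded
        return sum(conv(x ** (i + 1)) for i, conv in enumerate(self.convs))
```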
Removing background noise from speech audio has been the subject of considerable research and effort, especially in recent years due to the rise of virtual communication and amateur sound recording. Yet background noise is not the only unpleasant disturbance that can prevent intelligibility: reverb, clipping, codec artifacts, problematic equalization, limited bandwidth, and inconsistent loudness are equally disturbing and ubiquitous. In this work, we propose to consider the task of speech enhancement as a holistic endeavor, and present a universal speech enhancement system that tackles 55 different distortions at the same time. Our approach consists of a generative model that employs score-based diffusion, together with a multi-resolution conditioning network that performs enhancement with mixture density networks. We show that this approach significantly outperforms the state of the art in a subjective test performed by expert listeners. We also show that it achieves competitive objective scores with just 4-8 diffusion steps, despite not considering any particular strategy for fast sampling. We hope that both our methodology and technical contributions encourage researchers and practitioners to adopt a universal approach to speech enhancement, possibly framing it as a generative task.
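A schematic of few-step score-based sampling for enhancement is given below; the score_model signature, the geometric noise schedule, and the update rule are all assumptions for illustration and match the few-step regime mentioned above, not the paper's actual sampler:

```python
# Schematic reverse-diffusion loop: start from the noisy input perturbed with
# large-scale noise, then repeatedly move along the conditional score while
# annealing the noise level down over a handful of steps.
import torch

@torch.no_grad()
def sample(score_model, noisy, steps=8, sigma_max=1.0, sigma_min=1e-3):
    x = noisy + sigma_max * torch.randn_like(noisy)
    sigmas = torch.logspace(0, -3, steps) * sigma_max       # geometric schedule
    for i, sigma in enumerate(sigmas):
        score = score_model(x, sigma, cond=noisy)           # grad of log p(x | noisy)
        next_sigma = sigmas[i + 1] if i + 1 < steps else sigma_min
        x = x + (sigma**2 - next_sigma**2) * score          # annealed update
    return x
```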
The marine ecosystem is changing at an alarming rate, exhibiting biodiversity loss and the migration of tropical species to temperate basins. Monitoring the underwater environments and their inhabitants is of fundamental importance to understand the evolution of these systems and implement safeguard policies. However, assessing and tracking biodiversity is often a complex task, especially in large and uncontrolled environments, such as the oceans. One of the most popular and effective methods for monitoring marine biodiversity is passive acoustic monitoring (PAM), which employs hydrophones to capture underwater sound. Many aquatic animals produce sounds characteristic of their own species; these signals travel efficiently underwater and can be detected even at great distances. Furthermore, modern technologies are becoming more and more convenient and precise, allowing for very accurate and careful data acquisition. To date, audio captured with PAM devices is frequently manually processed by marine biologists and interpreted with traditional signal processing techniques for the detection of animal vocalizations. This is a challenging task, as PAM recordings often span long periods of time. Moreover, one of the causes of biodiversity loss is sound pollution; in data obtained from regions with loud anthropic noise, it is hard to manually separate artificial sounds from fish sounds. Nowadays, machine learning and, in particular, deep learning represents the state of the art for processing audio signals. Specifically, sound separation networks are able to identify and separate human voices and musical instruments. In this work, we show that the same techniques can be successfully used to automatically extract fish vocalizations in PAM recordings, opening up the possibility for biodiversity monitoring at a large scale.
Neural networks have proven to be a formidable tool for tackling the problem of speech coding at very low bitrates. However, the design of a neural coder that can operate robustly in real-world conditions remains a major challenge. Therefore, we present Neural End-2-End Speech Codec (NESC), a robust, scalable end-to-end neural speech codec for high-quality wideband speech coding at 3 kbps. The encoder uses a new architecture configuration that relies on our proposed Dual-Path ConvRNN (DPCRNN) layer, while the decoder architecture is based on our previous work, Streamwise-StyleMelGAN. Our subjective listening tests on clean and noisy speech show that NESC is particularly robust to unseen conditions and signal perturbations.
Speaker diarization is the task of labeling audio or video recordings with classes that correspond to speaker identity, or in short, the task of identifying "who spoke when". In the early years, speaker diarization algorithms were developed for speech recognition on multi-speaker recordings to enable speaker-adaptive processing. Over time, these algorithms also gained value as standalone applications, providing speaker-specific metadata for downstream tasks such as audio retrieval. More recently, with the emergence of deep learning technology, which has driven revolutionary changes in the research and practice of speech application domains, rapid advances have been made in speaker diarization. In this paper, we review not only the historical development of speaker diarization technology but also the recent advances in neural speaker diarization approaches. Furthermore, we discuss how speaker diarization systems have been integrated with speech recognition applications, and how the recent surge of deep learning is leading the way in jointly modeling these two components so that they complement each other. By considering these exciting technical trends, we believe this paper provides a valuable contribution to the community, consolidating the recent developments with neural methods and facilitating further progress toward more effective speaker diarization.