Sounds, especially music, contain various harmonic components scattered across the frequency dimension, and ordinary convolutional neural networks have difficulty observing these overtones. This paper introduces a multiple-rate dilated causal convolution (MRDC-Conv) method to efficiently capture harmonic structure on a logarithmic frequency scale. Harmonics are helpful for pitch estimation, which is important for many sound processing applications. We propose HarmoF0 to evaluate MRDC-Conv and other dilated convolutions for pitch estimation. The results show that this model outperforms DeepF0, yielding state-of-the-art performance on three datasets while reducing the number of parameters by more than 90%. We also find that it has stronger noise robustness and fewer octave errors. Code and pre-trained models are available at https://github.com/wx-wei/harmof0.
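As a rough, hedged illustration of the multiple-rate dilated convolution idea (not the authors' implementation; the names and hyper-parameters below are made up), the sketch applies several dilated 1-D convolutions with different rates along the frequency axis of a log-frequency spectrogram and sums them, so a single layer can cover several harmonic spacings at once:

```python
import torch
import torch.nn as nn

class MultiRateDilatedConv(nn.Module):
    """Sketch: parallel dilated 1-D convolutions along the (log-)frequency
    axis, each with a different dilation rate, summed into one output."""
    def __init__(self, channels, kernel_size=3, dilation_rates=(1, 2, 3, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv1d(channels, channels, kernel_size, dilation=d,
                      padding=d * (kernel_size - 1) // 2)
            for d in dilation_rates)

    def forward(self, x):            # x: (batch, channels, freq_bins)
        return sum(branch(x) for branch in self.branches)
```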
Single-channel, speaker-independent speech separation methods have recently seen great progress. However, the accuracy, latency, and computational cost of such methods remain insufficient. The majority of the previous methods have formulated the separation problem through the time-frequency representation of the mixed signal, which has several drawbacks, including the decoupling of the phase and magnitude of the signal, the suboptimality of time-frequency representation for speech separation, and the long latency in calculating the spectrograms. To address these shortcomings, we propose a fully-convolutional time-domain audio separation network (Conv-TasNet), a deep learning framework for end-to-end time-domain speech separation. Conv-TasNet uses a linear encoder to generate a representation of the speech waveform optimized for separating individual speakers. Speaker separation is achieved by applying a set of weighting functions (masks) to the encoder output. The modified encoder representations are then inverted back to the waveforms using a linear decoder. The masks are found using a temporal convolutional network (TCN) consisting of stacked 1-D dilated convolutional blocks, which allows the network to model the long-term dependencies of the speech signal while maintaining a small model size. The proposed Conv-TasNet system significantly outperforms previous time-frequency masking methods in separating two-and three-speaker mixtures. Additionally, Conv-TasNet surpasses several ideal time-frequency magnitude masks in two-speaker speech separation as evaluated by both objective distortion measures and subjective quality assessment by human listeners. Finally, Conv-TasNet has a significantly smaller model size and a shorter minimum latency, making it a suitable solution for both offline and real-time speech separation applications. This study therefore represents a major step toward the realization of speech separation systems for real-world speech processing technologies.
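For orientation, here is a minimal PyTorch sketch of the encoder / mask / decoder structure described above. The single dilated block stands in for the full stack of TCN blocks, and every hyper-parameter is illustrative rather than the paper's:

```python
import torch
import torch.nn as nn

class TasNetStyleSeparator(nn.Module):
    """Toy waveform separator: learned encoder, mask estimator, linear decoder."""
    def __init__(self, n_filters=512, kernel=16, stride=8, n_speakers=2):
        super().__init__()
        self.n_speakers = n_speakers
        self.encoder = nn.Conv1d(1, n_filters, kernel, stride=stride, bias=False)
        self.mask_net = nn.Sequential(          # stand-in for the dilated TCN
            nn.Conv1d(n_filters, n_filters, 3, padding=2, dilation=2), nn.PReLU(),
            nn.Conv1d(n_filters, n_filters * n_speakers, 1), nn.Sigmoid())
        self.decoder = nn.ConvTranspose1d(n_filters, 1, kernel, stride=stride, bias=False)

    def forward(self, mixture):                 # mixture: (batch, 1, samples)
        rep = self.encoder(mixture)             # (batch, N, frames)
        masks = self.mask_net(rep).view(-1, self.n_speakers, rep.size(1), rep.size(2))
        # Apply each speaker's mask and invert back to a waveform.
        return torch.stack(
            [self.decoder(rep * masks[:, s]) for s in range(self.n_speakers)], dim=1)
```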
This paper concerns multiple fundamental frequency (multi-F0) estimation based on pYIN, an algorithm for extracting the fundamental frequency (F0) of monophonic music, and on a trained convolutional neural network (CNN) model in which a pitch-salience-like function is produced from the input signal to estimate multiple F0s. The paper discusses the implementation of these two algorithms and their respective strengths and weaknesses. After analyzing the differing performance of the two approaches, pYIN is applied to complement the F0s extracted from the trained CNN model, combining the advantages of both algorithms. For evaluation, four pieces performed by two violins are used, and the models are assessed on the flatness of the extracted F0 curves. The results show that the combined model outperforms the original algorithms when extracting F0s from both monophonic and polyphonic music.
Vocoders are models capable of transforming a low-dimensional spectral representation of an audio signal, typically a mel spectrogram, into a waveform. Modern speech generation pipelines use a vocoder as their final component. Vocoder models recently developed for speech achieve a high degree of realism, so it is natural to ask how they perform on music signals. Compared with speech, the heterogeneity and structure of musical sound textures pose new challenges. In this work, we focus on one specific artifact that vocoder models designed for speech tend to exhibit when applied to music: pitch instability when synthesizing sustained notes. We argue that the characteristic sound of this artifact is due to a lack of horizontal phase coherence, which is often the result of using a time-domain target space with a model that is invariant to time shifts, such as a convolutional neural network. We propose a new vocoder model designed specifically for music. The key to improving pitch stability is the choice of a shift-invariant target space consisting of the magnitude spectrum and the phase gradient. We discuss the reasons that inspired us to re-formulate the vocoder task, outline a working example, and evaluate it on musical signals. Measured with a novel harmonic error metric, our approach improves the reconstruction of sustained notes and chords by 60% and 10%, respectively, relative to existing models.
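To make the target-space idea a little more concrete, the following sketch (an assumption for illustration, not taken from the paper) computes a magnitude spectrogram together with the wrapped frame-to-frame phase difference, a simple proxy for a phase-gradient representation:

```python
import torch

def magnitude_and_phase_gradient(wav, n_fft=1024, hop=256):
    """Return (magnitude, wrapped time-derivative of phase) of an STFT."""
    spec = torch.stft(wav, n_fft=n_fft, hop_length=hop,
                      window=torch.hann_window(n_fft), return_complex=True)
    mag = spec.abs()
    dphase = torch.diff(torch.angle(spec), dim=-1)          # frame-to-frame phase step
    dphase = torch.remainder(dphase + torch.pi, 2 * torch.pi) - torch.pi  # wrap to (-pi, pi]
    return mag, dphase
```

Unlike raw phase, both quantities change little under a global shift of the waveform, which is the kind of shift invariance the abstract refers to.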
We present the first neural network model to achieve real-time and streaming target sound extraction. To accomplish this, we propose Waveformer, an encoder-decoder architecture with a stack of dilated causal convolution layers as the encoder, and a transformer decoder layer as the decoder. This hybrid architecture uses dilated causal convolutions for processing large receptive fields in a computationally efficient manner, while also benefiting from the performance transformer-based architectures provide. Our evaluations show as much as 2.2-3.3 dB improvement in SI-SNRi compared to the prior models for this task while having a 1.2-4x smaller model size and a 1.5-2x lower runtime. Open-source code and datasets: https://github.com/vb000/Waveformer
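The sketch below shows only the hybrid structure in generic form: a stack of dilated causal 1-D convolutions (each layer left-pads by dilation × (kernel − 1) so no future samples are used) feeding a standard transformer decoder layer as cross-attention memory. Layer counts, dimensions, and the query design are assumptions, not the Waveformer code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HybridEncoderDecoder(nn.Module):
    """Dilated causal conv encoder + one transformer decoder layer (toy)."""
    def __init__(self, dim=128, n_conv_layers=6, kernel=3):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv1d(dim, dim, kernel, dilation=2 ** i) for i in range(n_conv_layers))
        self.decoder = nn.TransformerDecoderLayer(d_model=dim, nhead=4, batch_first=True)

    def forward(self, x, query):      # x: (batch, dim, time), query: (batch, q, dim)
        for i, conv in enumerate(self.convs):
            pad = (2 ** i) * (conv.kernel_size[0] - 1)   # left-pad only => causal
            x = torch.relu(conv(F.pad(x, (pad, 0)))) + x # residual dilated layer
        memory = x.transpose(1, 2)                       # (batch, time, dim)
        return self.decoder(query, memory)               # decoder cross-attends to encoder
```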
The problem of respiratory sound classification has received good attention from the clinical scientist and medical researcher communities over the past year for the diagnosis of COVID-19. To date, various artificial intelligence (AI) models have made their way into the real world to detect COVID-19 from human-generated sounds such as voice/speech, cough, and breathing. Convolutional neural network (CNN) models are implemented to solve many real-world problems on AI-based machines. In this context, a one-dimensional (1D) CNN is proposed and implemented to diagnose COVID-19 respiratory disease from voice, cough, and breath sounds. An augmentation-based mechanism is applied to improve the preprocessing of the COVID-19 sound dataset, and the 1D convolutional network is used to automate COVID-19 disease diagnosis. Furthermore, a DDAE (data denoising auto-encoder) technique is used to generate deep sound features as inputs, instead of adopting the standard MFCC (mel-frequency cepstral coefficient) input, and it achieves better accuracy and performance than previous models.
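For readers unfamiliar with the setup, a generic 1-D CNN classifier over a sequence of acoustic feature vectors (DDAE or MFCC frames, say) might look like the sketch below; the layer sizes and input dimensionality are assumptions for illustration, not the paper's architecture:

```python
import torch
import torch.nn as nn

class Simple1DCNN(nn.Module):
    """Toy 1-D CNN: convolutions over time, global pooling, linear classifier."""
    def __init__(self, n_features=40, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_features, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(64, 128, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1))             # pool over the time axis
        self.classifier = nn.Linear(128, n_classes)

    def forward(self, x):                        # x: (batch, n_features, frames)
        return self.classifier(self.features(x).squeeze(-1))   # class logits
```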
Annotating musical beats is a long and tedious process. To combat this problem, we propose a new self-supervised learning pretext task for beat tracking and downbeat estimation. This task makes use of Spleeter, an audio source separation model, to separate a song's drums from the rest of its signal. The first set of signals is used as positives, and by extension negatives, for contrastive learning pre-training; the drum signals, on the other hand, are used as anchors. When fully convolutional and recurrent models are trained with this pretext task, an onset function is learned. In some cases, this function is found to map to periodic elements in a song. We find that when a beat tracking training set is extremely small (fewer than 10 examples), the pre-trained models outperform randomly initialized ones. When this is not the case, pre-training speeds up learning but leads the model to overfit the training set. More generally, this work defines new perspectives in the realm of musical self-supervised learning. It is notably one of the works that use audio source separation as a fundamental component of self-supervision.
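To sketch how such contrastive pre-training can be set up (a generic InfoNCE-style objective, not the paper's exact loss): embeddings of the anchor excerpt are pulled toward the embedding of the matching positive, while the other items in the batch act as negatives:

```python
import torch
import torch.nn.functional as F

def info_nce(anchor_emb, positive_emb, temperature=0.1):
    """anchor_emb[i] should match positive_emb[i]; other rows are negatives.
    Both tensors have shape (batch, dim)."""
    a = F.normalize(anchor_emb, dim=-1)
    p = F.normalize(positive_emb, dim=-1)
    logits = a @ p.t() / temperature                 # pairwise cosine similarities
    targets = torch.arange(a.size(0), device=a.device)
    return F.cross_entropy(logits, targets)

# Hypothetical usage: anchor_emb computed from one Spleeter stem of an excerpt,
# positive_emb from the complementary stem of the same excerpt.
```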
Deep neural network (DNN) techniques have become pervasive in domains such as natural language processing and computer vision. They have achieved great success in these domains in tasks such as machine translation and image generation. Owing to this success, these data-driven techniques have been applied in the audio domain. More specifically, DNN models have been applied in speech enhancement to achieve denoising, dereverberation, and multi-speaker separation in monaural speech enhancement. In this paper, we review some dominant DNN techniques being employed to achieve speech separation. The review covers the whole speech enhancement pipeline, from feature extraction, to how DNN-based tools model both global and local features of speech, to model training (supervised and unsupervised). We also review the use of speech-enhancement pre-trained models to boost the speech enhancement process. The review is geared towards covering the dominant trends in the application of DNNs to the enhancement of speech obtained from a single speaker.
We present a model for denoising historical music recordings. Our model converts its input into a time-frequency representation via the short-time Fourier transform (STFT) and processes the resulting complex spectrogram with a convolutional neural network. The network is trained with both reconstruction and adversarial objectives on a synthetic noisy music dataset, created by mixing clean music with real noise samples extracted from quiet segments of old recordings. We evaluate our method quantitatively on held-out test examples from the synthetic dataset, and qualitatively through human ratings on samples from actual historical recordings. Our results show that the proposed method is effective at removing noise while preserving the quality and detail of the original music.
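To illustrate the front end only (a sketch under assumptions, not the paper's code), the noisy waveform can be turned into a two-channel real/imaginary spectrogram for a 2-D CNN and mapped back to audio with the inverse STFT:

```python
import torch

def to_complex_spec(wav, n_fft=1024, hop=256):
    """Waveform -> complex STFT stacked as two real channels (real, imag)."""
    spec = torch.stft(wav, n_fft=n_fft, hop_length=hop,
                      window=torch.hann_window(n_fft), return_complex=True)
    return torch.stack([spec.real, spec.imag], dim=1)     # (batch, 2, freq, frames)

def to_waveform(spec_2ch, n_fft=1024, hop=256):
    """Two real channels -> complex STFT -> waveform via the inverse STFT."""
    spec = torch.complex(spec_2ch[:, 0], spec_2ch[:, 1])
    return torch.istft(spec, n_fft=n_fft, hop_length=hop,
                       window=torch.hann_window(n_fft))
```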
The synthesizer is an electronic musical instrument now widely used in modern music production and sound design. Each parameter configuration of a synthesizer produces a unique timbre and can be viewed as a unique instrument. The problem of estimating a parameter configuration that best restores a sound's timbre is an important yet complicated one, namely the synthesizer parameter estimation problem. We propose a multi-modal deep-learning-based pipeline, Sound2Synth, together with a network structure, Prime Dilated Convolution (PDC), specially designed to solve this problem. Our method achieves not only state-of-the-art (SOTA) accuracy but also the first real-world applicable results on the Dexed synthesizer, a popular FM synthesizer.
In recent years, the task of Automatic Music Transcription (AMT), whereby various attributes of music notes are estimated from audio, has received increasing attention. At the same time, the related task of Multi-Pitch Estimation (MPE) remains a challenging but necessary component of almost all AMT approaches, even if only implicitly. In the context of AMT, pitch information is typically quantized to the nominal pitches of the Western music scale. Even in more general contexts, MPE systems typically produce pitch predictions with some degree of quantization. In certain applications of AMT, such as Guitar Tablature Transcription (GTT), it is more meaningful to estimate continuous-valued pitch contours. Guitar tablature has the capacity to represent various playing techniques, some of which involve pitch modulation. Contemporary approaches to AMT do not adequately address pitch modulation, and offer only less quantization at the expense of more model complexity. In this paper, we present a GTT formulation that estimates continuous-valued pitch contours, grouping them according to their string and fret of origin. We demonstrate that for this task, the proposed method significantly improves the resolution of MPE and simultaneously yields tablature estimation results competitive with baseline models.
Audio data augmentation is a key step in training deep neural networks to solve audio classification tasks. In this paper, we introduce Audiogmenter, a novel audio data augmentation library in MATLAB. We provide 15 different augmentation algorithms for raw audio data and 8 for spectrograms. We efficiently implement several augmentation techniques whose usefulness has been widely demonstrated in the literature. To the best of our knowledge, this is the largest MATLAB audio data augmentation library freely available. We validate the efficiency of our algorithms by evaluating them on the ESC-50 dataset. The toolbox and its documentation can be downloaded at https://github.com/lorisnanni/audiogmenter.
This paper introduces WaveNet, a deep neural network for generating raw audio waveforms. The model is fully probabilistic and autoregressive, with the predictive distribution for each audio sample conditioned on all previous ones; nonetheless we show that it can be efficiently trained on data with tens of thousands of samples per second of audio. When applied to text-to-speech, it yields state-of-the-art performance, with human listeners rating it as significantly more natural sounding than the best parametric and concatenative systems for both English and Mandarin. A single WaveNet can capture the characteristics of many different speakers with equal fidelity, and can switch between them by conditioning on the speaker identity. When trained to model music, we find that it generates novel and often highly realistic musical fragments. We also show that it can be employed as a discriminative model, returning promising results for phoneme recognition.
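As a toy illustration of the autoregressive idea only (far smaller than WaveNet, and assuming an 8-bit quantization of samples, a detail not stated above), each position's output distribution may depend only on past samples, which causal left-padded convolutions guarantee:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyCausalAR(nn.Module):
    """Toy autoregressive model over quantized audio samples."""
    def __init__(self, n_classes=256, channels=64, kernel=2, n_layers=8):
        super().__init__()
        self.embed = nn.Embedding(n_classes, channels)
        self.convs = nn.ModuleList(
            nn.Conv1d(channels, channels, kernel, dilation=2 ** i)
            for i in range(n_layers))
        self.out = nn.Conv1d(channels, n_classes, 1)

    def forward(self, samples):                   # samples: (batch, time) of int64
        x = self.embed(samples).transpose(1, 2)   # (batch, channels, time)
        for i, conv in enumerate(self.convs):
            pad = (2 ** i) * (conv.kernel_size[0] - 1)
            x = torch.relu(conv(F.pad(x, (pad, 0)))) + x   # causal residual layer
        return self.out(x)   # logits at position t are trained to predict sample t+1
```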
Music discovery services let users identify songs from short mobile recordings. These solutions are often based on Audio Fingerprinting (AFP), and rely more specifically on the extraction of spectral peaks in order to be robust to a number of distortions. Little work has been done to study the robustness of these algorithms to background noise captured in real environments. In particular, AFP systems still struggle when the signal-to-noise ratio is low, i.e. when the background noise is strong. In this project, we tackle this problem with Deep Learning. We test a new hybrid strategy which consists of inserting a denoising DL model in front of a peak-based AFP algorithm. We simulate noisy music recordings using a realistic data augmentation pipeline, and train a DL model to denoise them. The denoising model limits the impact of background noise on the AFP system's extracted peaks, improving its robustness to noise. We further propose a novel loss function to adapt the DL model to the considered AFP system, increasing its precision in terms of retrieved spectral peaks. To the best of our knowledge, this hybrid strategy has not been tested before.
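One way such a peak-oriented loss could be built (an assumption about the general idea, not the authors' loss function): weight the spectrogram reconstruction error more heavily at bins that are local maxima of the clean magnitude spectrogram, since those are the bins a peak-based fingerprinter will extract:

```python
import torch
import torch.nn.functional as F

def peak_weighted_loss(pred_mag, clean_mag, peak_weight=10.0):
    """L1 spectrogram loss with extra weight on local maxima of the clean
    magnitude spectrogram. Shapes: (batch, freq, frames)."""
    # A bin counts as a peak if it equals the max of its 3x3 neighbourhood.
    pooled = F.max_pool2d(clean_mag.unsqueeze(1), kernel_size=3,
                          stride=1, padding=1).squeeze(1)
    weights = 1.0 + peak_weight * (clean_mag >= pooled).float()
    return (weights * (pred_mag - clean_mag).abs()).mean()
```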
Non-intrusive load monitoring (NILM) seeks to save energy by estimating the power consumption of individual appliances from a single aggregate measurement. Deep neural networks have become increasingly popular for tackling the NILM problem. However, most models used are aimed at load identification rather than online source separation. Among source separation models, most use a single-task learning approach in which a neural network is trained exclusively for each appliance. This strategy is computationally expensive and ignores the fact that multiple appliances can be active simultaneously, as well as the dependencies between them. The remaining models are not causal, which matters for real-time applications. Inspired by Conv-TasNet, a model for speech separation, we propose Conv-NILM-Net, a fully convolutional framework for end-to-end NILM. Conv-NILM-Net is a causal model for multi-appliance source separation. Our model is tested on two real-world datasets and clearly outperforms the state of the art while remaining significantly smaller than competing models.
Audio segmentation and sound event detection are key topics in machine listening that aim to detect acoustic classes and their respective boundaries. They are useful for audio analysis, speech recognition, audio indexing, and music information retrieval. In recent years, most research articles have adopted segmentation by classification, a technique that divides the audio into small frames and performs classification on these frames individually. In this paper, we present a novel approach called You Only Hear Once (YOHO), inspired by the YOLO algorithm popularly adopted in computer vision. We convert the detection of acoustic boundaries into a regression problem instead of frame-based classification. This is done by having separate output neurons to detect the presence of an audio class and to predict its start and end points. Compared with state-of-the-art convolutional recurrent neural networks, YOHO obtains relative improvements in F-measure ranging from 1% to 6% across multiple datasets for audio segmentation and sound event detection. Because YOHO's output is more end-to-end and has fewer neurons to predict, inference is at least 6 times faster than segmentation by classification. In addition, as this approach predicts acoustic boundaries directly, post-processing and smoothing are about 7 times faster.
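One way to picture the output head described above (a generic sketch, not the YOHO code): for every time cell and acoustic class, the network emits a presence score plus normalized start and end offsets, and the regression terms are only penalized where the class is actually present:

```python
import torch
import torch.nn.functional as F

def yoho_style_loss(pred, target):
    """pred/target: (batch, cells, classes, 3), last axis = (presence, start, end).
    Presence uses binary cross-entropy; start/end use squared error, counted
    only where the class is truly present. Illustrative only."""
    presence_pred, bounds_pred = pred[..., 0], pred[..., 1:]
    presence_tgt, bounds_tgt = target[..., 0], target[..., 1:]
    cls_loss = F.binary_cross_entropy_with_logits(presence_pred, presence_tgt)
    mask = presence_tgt.unsqueeze(-1)            # no boundary loss for absent classes
    reg_loss = (mask * (torch.sigmoid(bounds_pred) - bounds_tgt) ** 2).mean()
    return cls_loss + reg_loss
```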
This paper introduces a unified source-filter network with a harmonic-plus-noise source excitation generation mechanism. In our previous work, we proposed unified Source-Filter GAN (uSFGAN) for developing a high-fidelity neural vocoder with flexible voice controllability using a unified source-filter neural network architecture. However, uSFGAN's ability to model aperiodic source excitation signals is insufficient, and a gap in sound quality remains between natural and generated speech. To improve source excitation modeling and the resulting sound quality, a new source excitation generation network is proposed that generates periodic and aperiodic components separately. The advanced adversarial training procedure of HiFi-GAN is also adopted, replacing the Parallel WaveGAN-based training used in the original uSFGAN. Both objective and subjective evaluation results show that the modified uSFGAN significantly improves the sound quality over the basic uSFGAN while maintaining voice controllability.
In this paper, we introduce Jointist, an instrument-aware multi-instrument framework capable of transcribing, recognizing, and separating multiple musical instruments from an audio clip. Jointist consists of an instrument recognition module that conditions the other modules: a transcription module that outputs instrument-specific piano rolls, and a source separation module that utilizes the instrument information and the transcription results. The instrument conditioning is designed for explicit multi-instrument functionality, while the connection between the transcription and source separation modules is designed for better transcription performance. Our challenging problem formulation makes the model highly useful in the real world, since modern popular music typically consists of multiple instruments. However, its novelty requires a new perspective on how such a model should be evaluated. In our experiments, we evaluate the model from various aspects, providing a new evaluation perspective for multi-instrument transcription. We also argue that the transcription model can serve as a pre-processing module for other music analysis tasks. In experiments on several downstream tasks, the symbolic representation provided by our transcription model proves helpful, alongside spectrograms, for downbeat detection, chord recognition, and key estimation.
Stuttering is a speech disorder in which the flow of speech is interrupted by involuntary pauses and sound repetitions. Stuttering identification is an interesting interdisciplinary research problem involving pathology, psychology, acoustics, and signal processing, which makes detection difficult and complex. Recent developments in machine and deep learning have dramatically revolutionized the speech domain, yet stuttering identification has received minimal attention. This work fills that gap by attempting to bring together researchers from interdisciplinary fields. In this paper, we review comprehensive acoustic features and statistical- and deep-learning-based stuttering/disfluency classification methods. We also present several challenges and future directions.
We introduce Shennong, a Python toolbox and command-line utility for speech feature extraction. It implements a wide range of well-established, state-of-the-art algorithms, including spectro-temporal filters such as mel-frequency cepstral filterbanks or predictive linear filters, pre-trained neural networks, pitch estimators, as well as speaker normalization methods and post-processing algorithms. Shennong is an open-source, easy-to-use, reliable, and extensible framework. The use of Python makes integration with other speech modeling and machine learning tools convenient. It aims to replace or complement several heterogeneous software tools such as Kaldi or Praat. After describing the Shennong software architecture, its core components, and the implemented algorithms, this paper illustrates its use in three applications: a comparison of the performance of speech features on a phone discrimination task, an analysis of a vocal tract length normalization model as a function of the amount of speech used for training, and a comparison of pitch estimation algorithms under various noise conditions.