Automatic Music Transcription (AMT), inferring musical notes from raw audio, is a challenging task at the core of music understanding. Unlike Automatic Speech Recognition (ASR), which typically focuses on the words of a single speaker, AMT often requires transcribing multiple instruments simultaneously while preserving fine-grained pitch and timing information. Moreover, many AMT datasets are "low-resource," as even expert musicians find music transcription difficult and time-consuming. Prior work has therefore focused on task-specific architectures, tailored to the individual instruments of each task. In this work, motivated by the promising results of sequence-to-sequence transfer learning for low-resource natural language processing (NLP), we demonstrate that a general-purpose Transformer model can perform multi-task AMT, jointly transcribing arbitrary combinations of musical instruments across several transcription datasets. We show that this unified training framework achieves high-quality transcription results across a range of datasets, dramatically improving performance for low-resource instruments (such as guitar) while preserving strong performance for abundant instruments (such as piano). Finally, by expanding the scope of AMT, we expose the need for more consistent evaluation metrics and better dataset alignment, and provide a strong baseline for this new direction of multi-task AMT.
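To make the sequence-to-sequence framing concrete, here is a minimal sketch of a MIDI-like output vocabulary such a model could target; the token layout (time/program/note-on events at 10 ms resolution) is illustrative, not the paper's exact scheme:

```python
# Toy MIDI-like event vocabulary for seq2seq transcription targets.
from dataclasses import dataclass

@dataclass
class Note:
    onset: float      # seconds
    pitch: int        # MIDI pitch 0-127
    program: int      # General MIDI program (instrument)

TIME_BINS = 1000      # 10 ms resolution over a 10 s segment

def note_to_tokens(note: Note) -> list[str]:
    """Serialize one note as (time, program, note-on) event tokens."""
    t = min(int(note.onset * 100), TIME_BINS - 1)
    return [f"time_{t}", f"program_{note.program}", f"note_on_{note.pitch}"]

notes = [Note(0.00, 60, 0), Note(0.50, 64, 24)]  # piano C4, guitar E4
tokens = [tok for n in notes for tok in note_to_tokens(n)]
print(tokens)
# ['time_0', 'program_0', 'note_on_60', 'time_50', 'program_24', 'note_on_64']
```

A shared vocabulary like this is what lets one model transcribe arbitrary instrument combinations: the instrument is just another predicted token.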
Despite the central role that melody plays in music perception, it remains an open challenge in music information retrieval to reliably detect the notes of the melody present in an arbitrary music recording. A key challenge in melody transcription is building methods which can handle broad audio containing any number of instrument ensembles and musical styles - existing strategies work well for some melody instruments or styles but not all. To confront this challenge, we leverage representations from Jukebox (Dhariwal et al. 2020), a generative model of broad music audio, thereby improving performance on melody transcription by 20% relative to conventional spectrogram features. Another obstacle in melody transcription is a lack of training data - we derive a new dataset containing 50 hours of melody transcriptions from crowdsourced annotations of broad music. The combination of generative pre-training and a new dataset for this task results in 77% stronger performance on melody transcription relative to the strongest available baseline. By pairing our new melody transcription approach with solutions for beat detection, key estimation, and chord recognition, we build Sheet Sage, a system capable of transcribing human-readable lead sheets directly from music audio. Audio examples can be found at https://chrisdonahue.com/sheetsage and code at https://github.com/chrisdonahue/sheetsage .
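As a rough illustration of the generative pre-training recipe, the sketch below probes frozen activations of a pretrained audio model with a small per-frame pitch head; the feature tensor is a stand-in and does not reflect Jukebox's actual interface or dimensions:

```python
import torch
import torch.nn as nn

# Sketch: frozen features from a pretrained generative audio model feed a
# small transcription head. The random tensor stands in for real activations.
class MelodyHead(nn.Module):
    def __init__(self, feat_dim: int, n_pitches: int = 129):  # 128 pitches + rest
        super().__init__()
        self.proj = nn.Linear(feat_dim, n_pitches)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (batch, frames, feat_dim) -> per-frame pitch logits
        return self.proj(feats)

feats = torch.randn(1, 200, 4800)   # pretend frozen activations of 200 frames
head = MelodyHead(feat_dim=4800)
logits = head(feats)                # (1, 200, 129): melody note per frame
```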
Data is the lifeblood of modern machine learning systems, including those in music information retrieval (MIR). However, MIR has long been mired in small datasets and unreliable labels. In this work, we propose to break this bottleneck using generative modeling. By pipelining a generative model of notes (Coconet, trained on Bach chorales) with a structured synthesis model of chamber ensembles (MIDI-DDSP, trained on URMP), we demonstrate a system capable of producing unlimited amounts of realistic chorale music with rich annotations, including mixes, stems, MIDI, note-level performance attributes (staccato, vibrato, etc.), and even fine-grained synthesis parameters (pitch, amplitude, etc.). We call this system the Chamber Ensemble Generator (CEG), and use it to generate a large dataset of chorales from four different chamber ensembles (CocoChorales). We demonstrate that data generated with our approach improves state-of-the-art models for music transcription and source separation, and we release both the system and the dataset as an open-source foundation for future work in the MIR community.
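The overall pipeline shape can be sketched as below; `sample_coconet` and `synthesize_midi_ddsp` are stubs standing in for the released models' real entry points, not their actual APIs:

```python
# Stubbed two-stage shape of the Chamber Ensemble Generator pipeline:
# notes from a generative score model, audio from a structured synthesizer.
def sample_coconet() -> list[tuple[float, int, int]]:
    """Stub for the note-generation stage: (onset, pitch, voice) triples."""
    return [(0.0, 60, 0), (0.0, 55, 1), (0.0, 52, 2), (0.0, 48, 3)]

def synthesize_midi_ddsp(score, ensemble: str) -> dict:
    """Stub for the synthesis stage: stems plus note-level parameters."""
    return {"stems": {voice: b"" for _, _, voice in score},
            "params": {"pitch": [], "amplitude": []},
            "ensemble": ensemble}

def generate_example(ensemble: str = "string_quartet") -> dict:
    score = sample_coconet()                       # stage 1: notes
    audio = synthesize_midi_ddsp(score, ensemble)  # stage 2: audio + params
    return {"midi": score, **audio}

print(generate_example().keys())
```

Because every stem, note, and synthesis parameter comes out of the pipeline itself, the annotations are exact by construction - the appeal of generative data over crowdsourced labels.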
We study the capabilities of speech processing systems trained simply to predict large amounts of transcripts of audio on the internet. When scaled to 680,000 hours of multilingual and multitask supervision, the resulting models generalize well to standard benchmarks and are often competitive with prior fully supervised results but in a zero-shot transfer setting without the need for any fine-tuning. When compared to humans, the models approach their accuracy and robustness. We are releasing models and inference code to serve as a foundation for further work on robust speech processing.
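The released inference code can be used roughly as follows (assuming `pip install openai-whisper`; checkpoint sizes and exact outputs may differ across releases):

```python
import whisper

# Zero-shot transcription with a released checkpoint: no fine-tuning needed.
model = whisper.load_model("base")       # sizes range from "tiny" to "large"
result = model.transcribe("audio.mp3")   # language detection is automatic
print(result["text"])
```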
An ideal music synthesizer should be interactive and expressive, generating high-fidelity audio in real time for arbitrary combinations of instruments and notes. Recent neural synthesizers have exhibited a trade-off between domain-specific models that offer detailed control of only specific instruments, and raw waveform models that can train on all music but offer minimal control and slow generation. In this work, we focus on a middle ground of neural synthesizers that can generate audio from MIDI sequences with arbitrary combinations of instruments in real time. This enables training on a wide range of transcription datasets with a single model, which in turn offers note-level control over composition and instrumentation across a wide range of instruments. We use a simple two-stage process: MIDI to spectrograms with an encoder-decoder Transformer, then spectrograms to audio with a generative adversarial network (GAN) spectrogram inverter. We compare training the decoder as an autoregressive model and as a denoising diffusion probabilistic model (DDPM), and find that the DDPM approach is superior both qualitatively and as measured by audio reconstruction and Fréchet distance metrics. Given the interactivity and generality of this approach, we consider it a promising first step toward interactive and expressive neural synthesis for arbitrary combinations of instruments and notes.
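For intuition about the DDPM decoder stage, here is a textbook ancestral-sampling loop over a spectrogram-shaped tensor; the noise predictor is a stub and the schedule is generic, not the paper's configuration:

```python
import torch

# Generic DDPM ancestral sampling: iteratively denoise a spectrogram
# starting from Gaussian noise (textbook form, not the paper's code).
T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)

def denoise_model(x_t, t):        # stand-in for the trained noise predictor
    return torch.zeros_like(x_t)  # a real model predicts the added noise

x = torch.randn(1, 128, 512)      # start from noise (mel bins x frames)
for t in reversed(range(T)):
    eps = denoise_model(x, t)
    coef = betas[t] / torch.sqrt(1.0 - alpha_bars[t])
    x = (x - coef * eps) / torch.sqrt(alphas[t])
    if t > 0:                                     # add noise except at t=0
        x = x + torch.sqrt(betas[t]) * torch.randn_like(x)
# x is now a generated spectrogram; a GAN inverter maps it to audio.
```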
The International Workshop on Reading Music Systems (WoRMS) is a workshop that tries to connect researchers who develop systems for reading music, such as in the field of Optical Music Recognition, with other researchers and practitioners that could benefit from such systems, like librarians or musicologists. The relevant topics of interest for the workshop include, but are not limited to: Music reading systems; Optical music recognition; Datasets and performance evaluation; Image processing on music scores; Writer identification; Authoring, editing, storing and presentation systems for music scores; Multi-modal systems; Novel input-methods for music to produce written music; Web-based Music Information Retrieval services; Applications and projects; Use-cases related to written music. These are the proceedings of the 3rd International Workshop on Reading Music Systems, held in Alicante on the 23rd of July 2021.
Transfer learning, where a model is first pre-trained on a data-rich task before being finetuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts all text-based language problems into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled data sets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new "Colossal Clean Crawled Corpus", we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our data set, pre-trained models, and code.
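With the released checkpoints on Hugging Face, the text-to-text recipe reduces every task to a string-to-string call, roughly like this (model size and task prefix chosen for illustration; the tokenizer requires sentencepiece):

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# Task is selected purely by the input prefix; translation, summarization,
# and classification all share this interface.
inputs = tokenizer("translate English to German: The house is wonderful.",
                   return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```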
In this paper, we introduce Jointist, an instrument-aware multi-instrument framework capable of transcribing, recognizing, and separating multiple musical instruments from an audio clip. Jointist consists of an instrument recognition module that conditions the other modules: a transcription module that outputs instrument-specific piano rolls, and a source separation module that utilizes the instrument information and transcription results. The instrument conditioning is designed for explicit multi-instrument functionality, while the connection between the transcription and source separation modules yields better transcription performance. Our challenging problem formulation makes the model highly useful in the real world, given that modern popular music typically consists of multiple instruments. However, its novelty requires a new perspective on how to evaluate such a model. In our experiments, we assess the model from various aspects, providing a new evaluation perspective for multi-instrument transcription. We also argue that transcription models can be used as a preprocessing module for other music analysis tasks. In experiments on several downstream tasks, the symbolic representation provided by our transcription model proved helpful, alongside spectrograms, for downbeat detection, chord recognition, and key estimation.
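A toy sketch of the instrument-conditioning idea: a one-hot instrument vector modulates a shared encoder so it emits an instrument-specific piano roll. Shapes and layers below are illustrative, not Jointist's actual architecture:

```python
import torch
import torch.nn as nn

# Toy instrument-conditioned transcriber: the instrument vector is embedded
# and added to the encoder states before the piano-roll head.
class ConditionedTranscriber(nn.Module):
    def __init__(self, n_instruments: int = 39, feat_dim: int = 256):
        super().__init__()
        self.encode = nn.GRU(229, feat_dim, batch_first=True)  # log-mel input
        self.cond = nn.Linear(n_instruments, feat_dim)
        self.pianoroll = nn.Linear(feat_dim, 88)

    def forward(self, mel: torch.Tensor, inst: torch.Tensor) -> torch.Tensor:
        h, _ = self.encode(mel)                   # (B, T, feat_dim)
        h = h + self.cond(inst).unsqueeze(1)      # inject instrument identity
        return torch.sigmoid(self.pianoroll(h))   # (B, T, 88) frame probabilities

mel = torch.randn(1, 400, 229)
inst = torch.zeros(1, 39); inst[0, 0] = 1.0       # e.g. "piano"
roll = ConditionedTranscriber()(mel, inst)
```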
The International Workshop on Reading Music Systems (WoRMS) is a workshop that tries to connect researchers who develop systems for reading music, such as in the field of Optical Music Recognition, with other researchers and practitioners that could benefit from such systems, like librarians or musicologists. The relevant topics of interest for the workshop include, but are not limited to: Music reading systems; Optical music recognition; Datasets and performance evaluation; Image processing on music scores; Writer identification; Authoring, editing, storing and presentation systems for music scores; Multi-modal systems; Novel input-methods for music to produce written music; Web-based Music Information Retrieval services; Applications and projects; Use-cases related to written music. These are the proceedings of the 2nd International Workshop on Reading Music Systems, held in Delft on the 2nd of November 2019.
Developing speech technologies is a challenge for low-resource languages, for which both annotated and raw speech data are sparse. Maltese is one such language. Recent years have seen an increase in the computational processing of Maltese, including speech technologies, but resources for the latter remain sparse. In this paper, we consider data augmentation techniques for improving speech recognition in such languages, focusing on Maltese as a test case. We consider three different types of data augmentation: unsupervised training, multilingual training, and the use of synthesized speech as training data. The goal is to determine which of these techniques, or combinations of them, are most effective for improving speech recognition in languages where the starting point is a small corpus of approximately 7 hours of transcribed speech. Our results show that combining the three data augmentation techniques studied here leads to an absolute improvement of 15%, without the use of a language model.
Personal assistants, automatic speech recognizers and dialogue understanding systems are becoming more critical in our interconnected digital world. A clear example is air traffic control (ATC) communications. ATC aims at guiding aircraft and controlling the airspace in a safe and optimal manner. These voice-based dialogues are carried between an air traffic controller (ATCO) and pilots via very-high frequency radio channels. In order to incorporate these novel technologies into ATC (low-resource domain), large-scale annotated datasets are required to develop the data-driven AI systems. Two examples are automatic speech recognition (ASR) and natural language understanding (NLU). In this paper, we introduce the ATCO2 corpus, a dataset that aims at fostering research on the challenging ATC field, which has lagged behind due to lack of annotated data. The ATCO2 corpus covers 1) data collection and pre-processing, 2) pseudo-annotations of speech data, and 3) extraction of ATC-related named entities. The ATCO2 corpus is split into three subsets. 1) ATCO2-test-set corpus contains 4 hours of ATC speech with manual transcripts and a subset with gold annotations for named-entity recognition (callsign, command, value). 2) The ATCO2-PL-set corpus consists of 5281 hours of unlabeled ATC data enriched with automatic transcripts from an in-domain speech recognizer, contextual information, speaker turn information, signal-to-noise ratio estimate and English language detection score per sample. Both available for purchase through ELDA at http://catalog.elra.info/en-us/repository/browse/ELRA-S0484. 3) The ATCO2-test-set-1h corpus is a one-hour subset from the original test set corpus, that we are offering for free at https://www.atco2.org/data. We expect the ATCO2 corpus will foster research on robust ASR and NLU not only in the field of ATC communications but also in the general research community.
Recent approaches in speech and language technology pretrain very large models that are fine-tuned for specific tasks. However, the benefits of such large models are often limited to a handful of resource-rich languages of the world. In this work, we make multiple contributions toward building ASR systems for low-resource languages from the Indian subcontinent. First, we curate 17,000 hours of raw speech data for 40 Indian languages from a wide variety of domains, including education, news, technology, and finance. Second, using this raw speech data, we pretrain several variants of wav2vec-style models for 40 Indian languages. Third, we analyze the pretrained models to identify key features: codebook vectors of similar-sounding phonemes are shared across languages, representations across layers are discriminative of the language family, and attention heads often attend within small local windows. Fourth, we fine-tune downstream ASR models for 9 languages and obtain state-of-the-art results on 3 public datasets, including for very low-resource languages such as Sinhala and Nepali. Our work establishes that multilingual pretraining is an effective strategy for building ASR systems for the linguistically diverse speakers of the Indian subcontinent.
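Fine-tuning a wav2vec-style model for downstream ASR follows the standard CTC recipe; with Hugging Face transformers, a single training step looks roughly like this (the English base checkpoint is a stand-in, since the paper's Indic checkpoints are not named here):

```python
import torch
from transformers import Wav2Vec2ForCTC

# The CTC head is newly initialized on top of the pretrained encoder;
# "facebook/wav2vec2-base" is a placeholder for an Indic checkpoint.
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base")
wave = torch.randn(1, 16000)              # 1 s of 16 kHz audio
labels = torch.randint(1, 32, (1, 12))    # dummy token ids of a transcript
loss = model(input_values=wave, labels=labels).loss
loss.backward()                           # one standard fine-tuning step
```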
Machine Learning for Source Code (ML4Code) is an active research field in which extensive experimentation is needed to discover how to best use source code's richly structured information. With this in mind, we introduce JEMMA, an Extensible Java Dataset for ML4Code Applications, which is a large-scale, diverse, and high-quality dataset targeted at ML4Code. Our goal with JEMMA is to lower the barrier to entry in ML4Code by providing the building blocks to experiment with source code models and tasks. JEMMA comes with a considerable amount of pre-processed information such as metadata, representations (e.g., code tokens, ASTs, graphs), and several properties (e.g., metrics, static analysis results) for 50,000 Java projects from the 50KC dataset, with over 1.2 million classes and over 8 million methods. JEMMA is also extensible allowing users to add new properties and representations to the dataset, and evaluate tasks on them. Thus, JEMMA becomes a workbench that researchers can use to experiment with novel representations and tasks operating on source code. To demonstrate the utility of the dataset, we also report results from two empirical studies on our data, ultimately showing that significant work lies ahead in the design of context-aware source code models that can reason over a broader network of source code entities in a software project, the very task that JEMMA is designed to help with.
Existing approaches for generating multitrack music with Transformer models have been limited to either a small set of instruments or short music segments. This is partly due to the memory requirements of the lengthy input sequences necessitated by existing representations for multitrack music. In this work, we propose a compact representation that allows a diverse set of instruments while keeping the sequence length short. Using our proposed representation, we present the Multitrack Music Transformer (MTMT) for learning long-term dependencies in multitrack music. In a subjective listening test, our proposed model achieves competitive quality for unconditioned generation against two baseline models. We also show that our proposed model can generate samples that are twice as long as those produced by the baseline models and, moreover, can do so in half the inference time. Furthermore, we propose a new measure for analyzing musical self-attention and show that the trained model learns to attend less to notes that form a dissonant interval with the current note, yet attends more to notes that are 4N beats away from the current one. Finally, our findings provide a novel foundation for future work exploring longer-form multitrack music generation and improving self-attention for music. All source code and audio samples are available at https://salu133445.github.io/mtmt/ .
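A toy version of a compact multitrack encoding in this spirit packs each note into a single multi-field event instead of several separate tokens, keeping sequences short; the field layout below illustrates the idea rather than the paper's exact specification:

```python
from typing import NamedTuple

# One event per note: six small fields instead of several vocabulary tokens.
class Event(NamedTuple):
    kind: int        # 0 = start-of-song, 1 = note, 2 = end-of-song
    beat: int
    position: int    # subdivision within the beat
    pitch: int
    duration: int
    instrument: int

song = [
    Event(0, 0, 0, 0, 0, 0),
    Event(1, 0, 0, 60, 4, 0),    # piano C4, one beat
    Event(1, 0, 0, 64, 4, 25),   # guitar E4, same onset
    Event(2, 8, 0, 0, 0, 0),
]
print(len(song), "events for", sum(e.kind == 1 for e in song), "notes")
```

Because one note costs one sequence position regardless of instrument, the same context window covers far more music than token-per-attribute schemes.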
In this work, we introduce the BBC-Oxford British Sign Language (BOBSL) dataset, a large-scale video collection of British Sign Language (BSL). BOBSL is an extended and publicly released dataset based on the BSL-1K dataset introduced in previous work. We describe the motivation for the dataset, together with its statistics and available annotations. We conduct experiments to provide baselines for the tasks of sign recognition, sign language alignment, and sign language translation. Finally, we describe several strengths and limitations of the data from the perspectives of machine learning and linguistics, note sources of bias present in the dataset, and discuss potential applications of BOBSL in the context of sign language technology. The dataset is available at https://www.robots.ox.ac.uk/~vgg/data/bobsl/.
We present PanGu-Coder, a pretrained decoder-only language model adopting the PanGu-Alpha architecture for text-to-code generation, i.e., the synthesis of programming-language solutions given a natural-language problem description. We train PanGu-Coder with a two-stage strategy: the first stage employs causal language modeling (CLM) to pretrain on raw programming-language data, while the second stage uses a combination of causal language modeling and masked language modeling (MLM) training objectives that focus on the downstream task of text-to-code generation, training on loosely curated pairs of natural-language program definitions and code functions. Finally, we discuss PanGu-Coder-FT, which is fine-tuned on a combination of competitive programming problems and code with continuous-integration tests. We evaluate PanGu-Coder with a focus on whether it generates functionally correct programs, and demonstrate that it achieves equivalent or better performance than similarly sized models such as Codex, while attending to a smaller context window and training on less data.
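The second-stage objective can be pictured as a combination of a causal LM loss on code and a masked LM loss on the paired description; the tensors below are toy stand-ins for the model's real outputs:

```python
import torch
import torch.nn.functional as F

# Toy combined objective: CLM cross-entropy plus MLM cross-entropy at
# masked positions. Real training uses a large decoder-only transformer.
vocab = 1000
logits_clm = torch.randn(1, 16, vocab)     # next-token predictions over code
targets_clm = torch.randint(0, vocab, (1, 16))
logits_mlm = torch.randn(1, 16, vocab)     # predictions for description tokens
targets_mlm = torch.randint(0, vocab, (1, 16))
mask = torch.zeros(1, 16, dtype=torch.bool)
mask[0, ::4] = True                        # positions chosen for masking

loss_clm = F.cross_entropy(logits_clm.transpose(1, 2), targets_clm)
loss_mlm = F.cross_entropy(logits_mlm[mask], targets_mlm[mask])
loss = loss_clm + loss_mlm                 # weighting left as a design choice
```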
We introduce AudioLM, a framework for high-quality audio generation with long-term consistency. AudioLM maps input audio to a sequence of discrete tokens and casts audio generation as a language modeling task in this representation space. We show how existing audio tokenizers provide different trade-offs between reconstruction quality and long-term structure, and we propose a hybrid tokenization scheme to achieve both objectives. Namely, we leverage the discretized activations of a masked language model pretrained on audio to capture long-term structure, and the discrete codes produced by a neural audio codec to achieve high-quality synthesis. By training on large corpora of raw audio waveforms, AudioLM learns to generate natural and coherent continuations given short prompts. When trained on speech, and without any transcript or annotation, AudioLM generates syntactically and semantically plausible speech continuations while also maintaining speaker identity and prosody for unseen speakers. Furthermore, we demonstrate how our approach extends beyond speech by generating coherent piano music continuations, despite being trained without any symbolic representation of music.
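The hybrid tokenization idea can be sketched with toy token ids: low-rate semantic tokens from the masked-LM encoder are followed by high-rate codec tokens, and a single language model predicts the flat sequence (rates and vocabulary sizes here are made up for illustration):

```python
import torch

# Toy hybrid token sequence for language modeling over audio.
semantic = torch.randint(0, 1024, (50,))     # long-term structure, low rate
acoustic = torch.randint(0, 1024, (400,))    # high-rate neural codec codes
sequence = torch.cat([semantic, acoustic])   # flat LM training target
print(sequence.shape)                        # torch.Size([450])

# Generation mirrors this order: sample semantic tokens from a prompt first,
# then sample acoustic tokens conditioned on them, and finally decode the
# acoustic tokens back to a waveform with the neural codec.
```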
Deep learning techniques for separating audio into different sound sources face several challenges. Standard architectures require training separate models for different types of audio sources. Although some universal separators employ a single model to target multiple sources, they have difficulty generalizing to unseen sources. In this paper, we propose a three-component pipeline for training a universal audio source separator from a large but weakly labeled dataset: AudioSet. First, we propose a Transformer-based sound event detection system for processing the weakly labeled training data. Second, we devise a query-based audio separation model that leverages these data for model training. Third, we design a latent embedding processor to encode queries that specify the audio targets for separation, allowing zero-shot generalization. Our approach uses a single model for source separation of multiple sound types, and relies solely on weakly labeled data for training. In addition, the proposed audio separator can be used in a zero-shot setting, learning to separate types of audio sources never seen in training. To evaluate the separation performance, we test our model on MUSDB18, while training on the disjoint AudioSet. We further verify the zero-shot performance with another experiment on audio source types held out from training. In both cases, the model achieves source-to-distortion ratio (SDR) performance comparable to current supervised models.
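A toy sketch of query-based separation: an embedding of the target source modulates a mask estimator over the mixture spectrogram (here via FiLM-style scale and shift); shapes and layers are illustrative, not the paper's architecture:

```python
import torch
import torch.nn as nn

# Toy query-conditioned separator: the query embedding (e.g. derived from a
# text label or an audio example of the target) conditions the mask.
class QuerySeparator(nn.Module):
    def __init__(self, n_bins: int = 513, q_dim: int = 128):
        super().__init__()
        self.film = nn.Linear(q_dim, 2 * n_bins)   # per-bin scale and shift
        self.mask = nn.Sequential(nn.Linear(n_bins, n_bins), nn.Sigmoid())

    def forward(self, spec: torch.Tensor, query: torch.Tensor) -> torch.Tensor:
        scale, shift = self.film(query).chunk(2, dim=-1)
        h = spec * scale.unsqueeze(1) + shift.unsqueeze(1)
        return spec * self.mask(h)                 # masked mixture spectrogram

mix = torch.rand(1, 300, 513)       # magnitude spectrogram (B, T, F)
query = torch.randn(1, 128)         # embedding of the target source
target = QuerySeparator()(mix, query)
```

Swapping the query embedding - rather than the model - is what makes zero-shot separation of unseen source types possible.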
Multi-agent behavior modeling aims to understand the interactions that occur between agents. We present a multi-agent dataset from behavioral neuroscience, the Caltech Mouse Social Interactions (CalMS21) dataset. Our dataset consists of trajectory data of social interactions, recorded from videos of freely behaving mice in a standard resident-intruder assay. To help accelerate behavioral studies, the CalMS21 dataset provides benchmarks to evaluate the performance of automated behavior classification methods in three settings: (1) training on large behavioral datasets all annotated by a single annotator, (2) style transfer to learn inter-annotator differences in behavior definitions, and (3) learning of new behaviors of interest given limited training data. The dataset consists of 6 million frames of unlabeled tracked poses of interacting mice, as well as over 1 million frames with tracked poses and corresponding frame-level behavior annotations. The challenge of our dataset is to classify behaviors accurately using both labeled and unlabeled tracking data, as well as to generalize to new settings.
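A minimal baseline in the spirit of the benchmark, on synthetic stand-in data: classify each frame's behavior from a short window of tracked keypoints (CalMS21 tracks 7 body points per mouse for the 2 mice; the labels and data below are random placeholders):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_frames, n_keypoints = 5000, 14             # 2 mice x 7 body points
poses = rng.normal(size=(n_frames, n_keypoints, 2))
labels = rng.integers(0, 4, size=n_frames)   # e.g. attack/mount/investigate/other

# Concatenate a +/-2-frame context window of poses per training example.
X = np.stack([poses[i - 2:i + 3].reshape(-1)
              for i in range(2, n_frames - 2)])
y = labels[2:n_frames - 2]

clf = RandomForestClassifier(n_estimators=50).fit(X[:4000], y[:4000])
print("held-out accuracy:", clf.score(X[4000:], y[4000:]))
```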
Spoken language understanding (SLU) tasks have been studied for many decades in the speech research community, but have not received as much attention as lower-level tasks like speech and speaker recognition. In particular, there are not nearly as many SLU task benchmarks, and many of the existing ones use data that is not freely available to all researchers. Recent work has begun to introduce such benchmark datasets for several tasks. In this work, we introduce several new annotated SLU benchmark tasks based on freely available speech data, which complement existing benchmarks and address gaps in the SLU evaluation landscape. We contribute four tasks: question answering and summarization involve inference over longer speech sequences; named entity localization addresses the speech-specific task of locating the targeted content in the signal; dialog act classification identifies the function of a given speech utterance. We follow the blueprint of the Spoken Language Understanding Evaluation (SLUE) benchmark suite. In order to facilitate the development of SLU models that leverage the success of pre-trained speech representations, we will be publishing for each task (i) annotations for a relatively small fine-tuning set, (ii) annotated development and test sets, and (iii) baseline models for easy reproducibility and comparisons. In this work, we present the details of data collection and annotation and the performance of the baseline models. We also perform sensitivity analysis of pipeline models' performance (speech recognizer + text model) to the speech recognition accuracy, using more than 20 state-of-the-art speech recognition models.
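The pipeline setting whose sensitivity is analyzed can be sketched as follows; the dialog-act classifier here is a trivial stand-in, not a SLUE baseline, and any off-the-shelf recognizer could be swapped in:

```python
import whisper

def classify_dialog_act(transcript: str) -> str:
    # Stand-in text model; a real pipeline would use a trained classifier.
    return "question" if transcript.rstrip().endswith("?") else "statement"

asr = whisper.load_model("base")
transcript = asr.transcribe("utterance.wav")["text"]
print(classify_dialog_act(transcript))

# Swapping better or worse recognizers into the first stage traces out the
# sensitivity of downstream accuracy to speech recognition errors.
```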