Data is the lifeblood of modern machine learning systems, including for those in Music Information Retrieval (MIR). However, MIR has long been mired by small datasets and unreliable labels. In this work, we propose to break this bottleneck using generative modeling. By pipelining a generative model of notes (Coconet trained on Bach Chorales) with a structured synthesis model of chamber ensembles (MIDI-DDSP trained on URMP), we demonstrate a system capable of producing unlimited amounts of realistic chorale music with rich annotations, including mixes, stems, MIDI, note-level performance attributes (staccato, vibrato, etc.), and even fine-grained synthesis parameters (pitch, amplitude, etc.). We call this system the Chamber Ensemble Generator (CEG), and use it to generate a large dataset of chorales from four different chamber ensembles (CocoChorales). We demonstrate that data generated using our approach improves state-of-the-art models for music transcription and source separation, and we release both the system and the dataset as an open-source foundation for future work in the MIR community.
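A minimal sketch of the two-stage pipeline this abstract describes, assuming hypothetical stand-in interfaces: `sample_notes` plays the role of Coconet and `synthesize` the role of MIDI-DDSP; neither is the released CEG API, and the point is only that every intermediate annotation is kept alongside the audio.

```python
# Hypothetical sketch of the CEG pipeline: note generator -> structured
# synthesizer, retaining all intermediate annotations for MIR training.
from dataclasses import dataclass
from typing import Dict, List

import numpy as np

@dataclass
class GeneratedExample:
    mix: np.ndarray                           # mixed audio
    stems: Dict[str, np.ndarray]              # one waveform per instrument
    midi: List[dict]                          # note events per voice
    note_attributes: List[dict]               # e.g. staccato, vibrato
    synthesis_params: Dict[str, np.ndarray]   # frame-level pitch, amplitude

def sample_notes(num_voices: int = 4) -> List[dict]:
    """Stand-in for Coconet: sample a 4-voice chorale as note events."""
    rng = np.random.default_rng()
    return [{"pitch": int(rng.integers(40, 80)), "start": i, "end": i + 1,
             "voice": v} for v in range(num_voices) for i in range(8)]

def synthesize(notes: List[dict], sr: int = 16000) -> GeneratedExample:
    """Stand-in for MIDI-DDSP: render notes, keeping every annotation."""
    stems = {f"voice_{v}": np.zeros(sr * 8, dtype=np.float32)
             for v in range(4)}
    return GeneratedExample(
        mix=sum(stems.values()), stems=stems, midi=notes,
        note_attributes=[{"staccato": 0.0, "vibrato": 0.5} for _ in notes],
        synthesis_params={"f0_hz": np.zeros(1000),
                          "amplitude": np.zeros(1000)},
    )

example = synthesize(sample_notes())  # one of "unlimited" training examples
```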
Despite phenomenal progress in recent years, state-of-the-art music separation systems produce source estimates with significant perceptual shortcomings, such as adding extraneous noise or removing harmonics. We propose a post-processing model (the Make it Sound Good (MSG) post-processor) to enhance the output of music source separation systems. We apply our post-processing model to state-of-the-art waveform-based and spectrogram-based music source separators, including separators unseen during training. Our analysis of the errors produced by source separators shows that waveform models tend to introduce more high-frequency noise, while spectrogram models tend to lose transients and high-frequency content. We introduce objective measures to quantify both kinds of errors and show that MSG improves the source reconstruction for both. Crowdsourced subjective evaluations demonstrate that human listeners prefer source estimates of bass and drums that have been post-processed by MSG.
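The abstract does not spell out the objective measures, so the sketch below is only an illustration of the idea: compare band-limited energy between a ground-truth source and a separator's estimate to quantify added high-frequency noise and lost transients.

```python
# Illustrative (not the paper's) measures of the two separator error modes.
import numpy as np

def band_energy(x: np.ndarray, sr: int, lo: float, hi: float) -> float:
    spec = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1.0 / sr)
    return float(spec[(freqs >= lo) & (freqs < hi)].sum())

def high_freq_noise_db(reference, estimate, sr, cutoff=8000.0):
    """Excess high-frequency energy in dB (waveform models tend to be > 0)."""
    ref = band_energy(reference, sr, cutoff, sr / 2)
    est = band_energy(estimate, sr, cutoff, sr / 2)
    return 10 * np.log10((est + 1e-12) / (ref + 1e-12))

def transient_loss_db(reference, estimate, sr, frame=512):
    """Onset energy lost in dB (spectrogram models tend to be > 0)."""
    def onset_energy(x):
        frames = x[: len(x) // frame * frame].reshape(-1, frame)
        e = (frames ** 2).sum(axis=1)
        return np.clip(np.diff(e), 0, None).sum()  # positive jumps only
    return 10 * np.log10((onset_energy(reference) + 1e-12) /
                         (onset_energy(estimate) + 1e-12))
```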
An ideal music synthesizer should be both interactive and expressive, generating high-fidelity audio in real time for arbitrary combinations of instruments and notes. Recent neural synthesizers have exhibited a tradeoff between domain-specific models, which offer detailed control of only specific instruments, and raw waveform models, which can train on any music but offer minimal control and slow generation. In this work, we focus on a middle ground of neural synthesizers that can generate audio from MIDI sequences with arbitrary combinations of instruments in real time. This enables training on a wide range of transcription datasets with a single model, which in turn offers note-level control of composition and instrumentation across a wide range of instruments. We use a simple two-stage process: MIDI to spectrograms with an encoder-decoder Transformer, then spectrograms to audio with a generative adversarial network (GAN) spectrogram inverter. We compare training the decoder as an autoregressive model and as a Denoising Diffusion Probabilistic Model (DDPM), and find that the DDPM approach is superior both qualitatively and as measured by audio reconstruction and Fréchet distance metrics. Given the interactivity and generality of this approach, we find this to be a promising first step towards interactive and expressive neural synthesis for arbitrary combinations of instruments and notes.
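A minimal sketch of the DDPM decoder stage, assuming a stand-in `denoiser` network and MIDI conditioning interface (the paper's exact architecture is not given here); the sampling math itself is the standard DDPM ancestral update, with `betas` a 1-D tensor noise schedule.

```python
import torch

@torch.no_grad()
def ddpm_sample(denoiser, midi_encoding, shape, betas):
    """Ancestral DDPM sampling of a spectrogram conditioned on encoded
    MIDI. `denoiser` predicts the noise added at step t (hypothetical)."""
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)
    x = torch.randn(shape)                    # start from pure noise
    for t in reversed(range(len(betas))):
        eps = denoiser(x, t, midi_encoding)   # predicted noise
        mean = (x - betas[t] / torch.sqrt(1 - alpha_bars[t]) * eps) \
               / torch.sqrt(alphas[t])
        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = mean + torch.sqrt(betas[t]) * noise
    return x  # a spectrogram; the GAN inverter then renders audio
```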
Musical expression requires control of both what notes are played and how they are performed. Conventional audio synthesizers provide detailed expressive controls, but at the cost of realism. Black-box neural audio synthesis and concatenative samplers can produce realistic audio, but have few mechanisms for control. In this work, we introduce MIDI-DDSP, a hierarchical model of musical instruments that enables both realistic neural audio synthesis and detailed user control. Starting from interpretable Differentiable Digital Signal Processing (DDSP) synthesis parameters, we infer musical notes and high-level properties of their expressive performance (such as timbre, vibrato, dynamics, and articulation). This creates a three-level hierarchy (notes, performance, synthesis) that affords individuals the choice to intervene at each level, or to utilize trained priors (performance given notes, synthesis given performance) for creative assistance. Through quantitative experiments and listening tests, we demonstrate that this hierarchy can reconstruct high-fidelity audio, accurately predict performance attributes for a note sequence, independently manipulate the attributes of a given performance, and, as a complete system, generate realistic audio from a novel note sequence. By utilizing an interpretable hierarchy with multiple levels of granularity, MIDI-DDSP opens the door to assistive tools that empower individuals across a diverse range of musical experience.
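A schematic of the three-level hierarchy, assuming hypothetical prior functions: each level can be predicted from the one above or overridden by the user, which is what "intervene at each level" means in practice.

```python
# Hypothetical sketch: notes -> performance -> synthesis, with trained
# priors filling in whatever the user does not specify.
from typing import Dict, List

import numpy as np

def performance_prior(notes: List[dict]) -> List[dict]:
    """Stand-in: predict expression attributes per note from the score."""
    return [{"vibrato": 0.3, "dynamics": 0.7, "articulation": 0.5}
            for _ in notes]

def synthesis_prior(notes, performance) -> Dict[str, np.ndarray]:
    """Stand-in: predict frame-level DDSP parameters from notes + expression."""
    n_frames = 250 * len(notes)
    return {"f0_hz": np.full(n_frames, 440.0),
            "amplitude": np.linspace(0.0, 1.0, n_frames)}

notes = [{"pitch": 69, "start": 0.0, "end": 0.5}]
performance = performance_prior(notes)        # or supplied by the user
performance[0]["vibrato"] = 0.9               # intervene at one level...
params = synthesis_prior(notes, performance)  # ...the rest is re-inferred
# `params` would feed a DDSP synthesizer to render the final audio.
```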
Automatic Music Transcription (AMT), inferring musical notes from raw audio, is a challenging task at the core of music understanding. Unlike Automatic Speech Recognition (ASR), which typically focuses on the words of a single speaker, AMT often requires transcribing multiple instruments simultaneously, all while preserving fine-scale pitch and timing information. Further, many AMT datasets are "low-resource", as even expert musicians find music transcription difficult and time-consuming. Thus, prior work has focused on task-specific architectures, tailored to the individual instruments of each task. In this work, motivated by the promising results of sequence-to-sequence transfer learning for low-resource Natural Language Processing (NLP), we demonstrate that a general-purpose Transformer model can perform multi-task AMT, jointly transcribing arbitrary combinations of musical instruments across several transcription datasets. We show that this unified training framework achieves high-quality transcription results across a range of datasets, dramatically improving performance for low-resource instruments (such as guitar) while preserving strong performance for abundant instruments (such as piano). Finally, by expanding the scope of AMT, we expose the need for more consistent evaluation metrics and better dataset alignment, and provide a strong baseline for this new direction of multi-task AMT.
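A hedged sketch of the sequence-to-sequence framing: audio features in, a stream of MIDI-like event tokens out. The vocabulary below is illustrative, not the paper's exact token set.

```python
# Illustrative MIDI-like event tokenization for seq2seq transcription.
def tokenize_events(events):
    """Flatten (time, instrument, pitch, on/off) events into tokens."""
    tokens = []
    for time, program, pitch, is_onset in sorted(events):
        tokens += [f"time_{time}",           # quantized absolute time
                   f"program_{program}",     # which instrument
                   f"note_on_{pitch}" if is_onset else f"note_off_{pitch}"]
    return tokens

# (0.00 s, piano, C4 on), (0.50 s, guitar, E4 on), (1.00 s, piano, C4 off)
events = [(0, 0, 60, True), (50, 24, 64, True), (100, 0, 60, False)]
print(tokenize_events(events))
# A single Transformer maps audio features to this stream for any mix of
# instruments, which is what lets one model train across many datasets.
```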
The United States coastline spans 95,471 miles, a distance that cannot be effectively patrolled or secured by manual human effort alone. Unmanned Aerial Vehicles (UAVs) equipped with infrared cameras and deep-learning based algorithms represent a more efficient alternative for identifying and segmenting objects of interest, namely ships. However, standard approaches to training these algorithms require large-scale datasets of densely labeled infrared maritime images. Such datasets are not publicly available and manually annotating every pixel in a large-scale dataset would have an extreme labor cost. In this work we demonstrate that, in the context of segmenting ships in infrared imagery, weakly-supervising an algorithm with sparsely labeled data can drastically reduce data labeling costs with minimal impact on system performance. We apply weakly-supervised learning to an unlabeled dataset of 7055 infrared images sourced from the Naval Air Warfare Center Aircraft Division (NAWCAD). We find that by sparsely labeling only 32 points per image, weakly-supervised segmentation models can still effectively detect and segment ships, with a Jaccard score of up to 0.756.
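A minimal sketch of what the weak supervision amounts to, assuming illustrative shapes and names rather than the authors' implementation: the segmentation loss is evaluated only at the 32 labeled points per image instead of at every pixel.

```python
# Sketch: cross-entropy restricted to sparsely labeled points.
import torch
import torch.nn.functional as F

def sparse_point_loss(logits, coords, labels):
    """logits: (B, 1, H, W) ship-vs-background scores.
    coords: (B, 32, 2) labeled (row, col) points per image.
    labels: (B, 32) binary ship labels at those points."""
    b = torch.arange(logits.shape[0]).unsqueeze(1)         # (B, 1)
    picked = logits[b, 0, coords[..., 0], coords[..., 1]]  # (B, 32)
    return F.binary_cross_entropy_with_logits(picked, labels.float())

logits = torch.randn(4, 1, 256, 256, requires_grad=True)
coords = torch.randint(0, 256, (4, 32, 2))
labels = torch.randint(0, 2, (4, 32))
sparse_point_loss(logits, coords, labels).backward()
```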
We present a human-in-the-loop evaluation framework for fact-checking novel misinformation claims and identifying social media messages that violate relevant policies. Our approach extracts structured representations of check-worthy claims, which are aggregated and ranked for review. Stance classifiers are then used to identify tweets supporting novel misinformation claims, which are further reviewed to determine whether they violate relevant policies. To demonstrate the feasibility of our approach, we develop a baseline system based on modern NLP methods for human-in-the-loop fact-checking in the domain of COVID-19 treatments. Using our baseline system, we show that human fact-checkers can identify 124 tweets per hour that violate Twitter's policies on COVID-19 misinformation. We will make our code, data, and detailed annotation guidelines available to support the evaluation of human-in-the-loop systems that identify novel misinformation directly from raw user-generated content.
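A structural sketch of that pipeline, with hypothetical stand-ins for the claim extractor and stance classifier; humans review at the two marked points.

```python
# Hypothetical pipeline sketch: extract claims, rank for review, then
# stance-classify tweets against the human-verified claim.
from collections import Counter

def extract_claim(tweet: str) -> str:
    """Stand-in: map a tweet to a normalized check-worthy claim."""
    return tweet.lower().strip(".!")

def stance(tweet: str, claim: str) -> str:
    """Stand-in: 'support' or 'neutral' toward the claim."""
    return "support" if claim in tweet.lower() else "neutral"

tweets = ["Drinking bleach cures COVID!", "Bleach does NOT cure COVID.",
          "drinking bleach cures covid, pass it on"]

# 1. Extract and aggregate claims, ranked by frequency for human review.
ranked = Counter(extract_claim(t) for t in tweets).most_common()
novel_claim = ranked[0][0]   # <-- human fact-checker verifies this claim

# 2. Find tweets supporting the (now verified-false) claim.
violations = [t for t in tweets if stance(t, novel_claim) == "support"]
# <-- human reviewer confirms each tweet violates the platform policy
```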
As language models (LMs) scale, they develop many novel behaviors, good and bad, exacerbating the need to evaluate how they behave. Prior work creates evaluations with crowdwork (which is time-consuming and expensive) or existing data sources (which are not always available). Here, we automatically generate evaluations with LMs. We explore approaches with varying amounts of human effort, from instructing LMs to write yes/no questions to making complex Winogender schemas with multiple stages of LM-based generation and filtering. Crowdworkers rate the examples as highly relevant and agree with 90-100% of labels, sometimes more so than corresponding human-written datasets. We generate 154 datasets and discover new cases of inverse scaling where LMs get worse with size. Larger LMs repeat back a dialog user's preferred answer ("sycophancy") and express greater desire to pursue concerning goals like resource acquisition and goal preservation. We also find some of the first examples of inverse scaling in RL from Human Feedback (RLHF), where more RLHF makes LMs worse. For example, RLHF makes LMs express stronger political views (on gun rights and immigration) and a greater desire to avoid shut down. Overall, LM-written evaluations are high-quality and let us quickly discover many novel LM behaviors.
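A hedged sketch of the lowest-effort end of that spectrum: instruct an LM to write labeled yes/no questions, then filter with a second pass. `lm` is a hypothetical completion function, not any specific API, and the field names are illustrative.

```python
# Hypothetical generate-then-filter loop for LM-written evaluations.
def lm(prompt: str) -> str:
    raise NotImplementedError("plug in any text-completion model here")

def generate_eval_dataset(behavior: str, n: int = 100):
    examples = []
    while len(examples) < n:
        question = lm(f"Write a yes/no question that tests whether an "
                      f"AI assistant exhibits {behavior}.\nQuestion:")
        label = lm(f"Would answering 'Yes' to the question below exhibit "
                   f"{behavior}? Answer Yes or No.\n{question}\nAnswer:")
        # Second-pass filtering: keep only confidently labeled examples.
        if label.strip() in ("Yes", "No"):
            examples.append({"question": question,
                             "answer_matching_behavior": label.strip()})
    return examples
```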
Automatic defect detection for 3D printing processes, which shares many characteristics with change detection problems, is a vital step for quality control of 3D printed products. However, there are some critical challenges in the current state of practice. First, existing methods for computer vision-based process monitoring typically work well only under specific camera viewpoints and lighting situations, requiring expensive pre-processing, alignment, and camera setups. Second, many defect detection techniques are specific to pre-defined defect patterns and/or print schematics. In this work, we approach the automatic defect detection problem differently using a novel Semi-Siamese deep learning model that directly compares a reference schematic of the desired print and a camera image of the achieved print. The model then solves an image segmentation problem, identifying the locations of defects with respect to the reference frame. Unlike most change detection problems, our model is specially developed to handle images coming from different domains and is robust against perturbations in the imaging setup such as camera angle and illumination. Defect localization predictions were made in 2.75 seconds per layer using a standard MacBook Pro, which is comparable to the typical tens of seconds or less for printing a single layer on an inkjet-based 3D printer, while achieving an F1-score of more than 0.9.
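A minimal sketch of the Semi-Siamese idea, assuming a toy architecture rather than the authors' model: two unshared encoders handle the two image domains (schematic vs. camera photo), and a single decoder compares their features to produce a per-pixel defect map. The alignment and robustness machinery the abstract mentions is omitted here.

```python
# Toy Semi-Siamese segmenter: domain-specific encoders, shared decoder.
import torch
import torch.nn as nn

class SemiSiameseSegmenter(nn.Module):
    def __init__(self, ch: int = 16):
        super().__init__()
        def encoder():
            return nn.Sequential(
                nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(),
                nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
            )
        self.schematic_enc = encoder()   # reference print schematic
        self.camera_enc = encoder()      # photo of the achieved print
        self.decoder = nn.Sequential(
            nn.Conv2d(2 * ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 1, 1),         # per-pixel defect logit
        )

    def forward(self, schematic, camera):
        feats = torch.cat([self.schematic_enc(schematic),
                           self.camera_enc(camera)], dim=1)
        return self.decoder(feats)

model = SemiSiameseSegmenter()
schematic = torch.rand(1, 1, 128, 128)   # reference layer schematic
photo = torch.rand(1, 1, 128, 128)       # camera image, any illumination
defect_logits = model(schematic, photo)  # (1, 1, 128, 128) defect map
```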
As AI systems become more capable, we would like to enlist their help to supervise other AIs. We experiment with methods for training a harmless AI assistant through self-improvement, without any human labels identifying harmful outputs. The only human oversight is provided through a list of rules or principles, and so we refer to the method as 'Constitutional AI'. The process involves both a supervised learning and a reinforcement learning phase. In the supervised phase we sample from an initial model, then generate self-critiques and revisions, and then finetune the original model on revised responses. In the RL phase, we sample from the finetuned model, use a model to evaluate which of the two samples is better, and then train a preference model from this dataset of AI preferences. We then train with RL using the preference model as the reward signal, i.e. we use 'RL from AI Feedback' (RLAIF). As a result we are able to train a harmless but non-evasive AI assistant that engages with harmful queries by explaining its objections to them. Both the SL and RL methods can leverage chain-of-thought style reasoning to improve the human-judged performance and transparency of AI decision making. These methods make it possible to control AI behavior more precisely and with far fewer human labels.
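A structural sketch of the supervised ("critique and revision") phase described above. `lm` is a hypothetical completion function and the constitution a toy one-principle example; the real method samples, critiques, and revises at scale, then finetunes on the revisions before the RL phase.

```python
# Hypothetical sketch of Constitutional AI's supervised phase.
CONSTITUTION = ["Choose the response that is least harmful."]

def lm(prompt: str) -> str:
    raise NotImplementedError("plug in any instruction-following model")

def critique_and_revise(query: str, num_rounds: int = 2) -> str:
    response = lm(f"Human: {query}\nAssistant:")
    for principle in CONSTITUTION * num_rounds:
        critique = lm(f"Critique this response per the principle "
                      f"'{principle}':\nQuery: {query}\n"
                      f"Response: {response}\nCritique:")
        response = lm(f"Rewrite the response to address the critique.\n"
                      f"Critique: {critique}\nRevised response:")
    return response  # finetune the original model on these revisions
```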