This paper presents a new approach for controlling emotion in symbolic music generation with Monte Carlo Tree Search. We use Monte Carlo Tree Search as a decoding mechanism to steer the probability distribution learned by a language model towards a given emotion. At every step of the decoding process, we use Predictor Upper Confidence for Trees (PUCT) to search for sequences that maximize the average values of emotion and quality, as given by an emotion classifier and a discriminator, respectively. We use the language model as the policy of the pipeline and the combination of the emotion classifier and the discriminator as its value function. To decode the next token in a piece of music, we sample from the distribution of node visits created during the search process. We evaluate the quality of the generated samples with respect to human-composed pieces using a set of objective metrics computed directly from the generated samples. We also run a user study to evaluate how human subjects perceive the quality and emotion of the generated samples. We compare PUCT against Stochastic Bi-Objective Beam Search (SBBS) and Conditional Sampling (CS). Results suggest that PUCT outperforms SBBS and CS in almost all metrics of music quality and emotion.
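A minimal Python sketch of the PUCT selection rule and the visit-count sampling the abstract describes; node bookkeeping is reduced to plain dictionaries, and the exploration constant `c` and field names are illustrative assumptions, not the paper's implementation:

```python
import math

def puct_score(q, prior, parent_visits, child_visits, c=1.5):
    # Mean value (exploitation) plus an exploration bonus scaled by
    # the language model's prior probability for this token.
    return q + c * prior * math.sqrt(parent_visits) / (1 + child_visits)

def select_child(children):
    # children: list of dicts with mean value 'q', LM prior 'prior',
    # and visit count 'visits' (a simplification of real tree nodes).
    parent_visits = sum(ch["visits"] for ch in children) + 1
    return max(children, key=lambda ch: puct_score(
        ch["q"], ch["prior"], parent_visits, ch["visits"]))

def visit_distribution(children, temperature=1.0):
    # The next token is sampled from the (tempered) visit counts
    # accumulated at the root during search.
    counts = [ch["visits"] ** (1.0 / temperature) for ch in children]
    total = sum(counts)
    return [c / total for c in counts]
```

Tokens that were visited more often during search (because the classifier and discriminator valued them) thus receive proportionally more probability mass at decoding time.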
The field of Automatic Music Generation has seen significant progress thanks to the advent of Deep Learning. However, most of these results have been produced by unconditional models, which lack the ability to interact with their users, not allowing them to guide the generative process in meaningful and practical ways. Moreover, synthesizing music that remains coherent across longer timescales while still capturing the local aspects that make it sound ``realistic'' or ``human-like'' is still challenging. This is due to the large computational requirements needed to work with long sequences of data, and also to limitations imposed by the training schemes that are often employed. In this paper, we propose a generative model of symbolic music conditioned by data retrieved from human sentiment. The model is a Transformer-GAN trained with labels that correspond to different configurations of the valence and arousal dimensions that quantitatively represent human affective states. We try to tackle both of the problems above by employing an efficient linear version of Attention and using a Discriminator both as a tool to improve the overall quality of the generated music and its ability to follow the conditioning signals.
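One simple way to condition a generator on "configurations of the valence and arousal dimensions", as the abstract puts it, is to discretize the affect plane into labels and prepend one to the token sequence. This sketch is an illustration of that general idea under assumed quadrant labels, not the paper's actual conditioning scheme or vocabulary:

```python
def emotion_token(valence, arousal):
    # Map continuous valence/arousal in [-1, 1] to one of the four
    # quadrants of the circumplex model of affect. The token names
    # here are hypothetical.
    if valence >= 0 and arousal >= 0:
        return "<high_valence_high_arousal>"  # e.g. excited
    if valence >= 0:
        return "<high_valence_low_arousal>"   # e.g. relaxed
    if arousal >= 0:
        return "<low_valence_high_arousal>"   # e.g. tense
    return "<low_valence_low_arousal>"        # e.g. sad

def condition_sequence(tokens, valence, arousal):
    # Prepend the condition label so the generator can attend to it
    # at every step of the sequence.
    return [emotion_token(valence, arousal)] + tokens
```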
Large pretrained language models generate fluent text but are notoriously hard to controllably sample from. In this work, we study constrained sampling from such language models: generating text that satisfies user-defined constraints, while maintaining fluency and the model's performance in a downstream task. We propose MuCoLa -- a sampling procedure that combines the log-likelihood of the language model with arbitrary (differentiable) constraints in a single energy function, and then generates samples in a non-autoregressive manner. Specifically, it initializes the entire output sequence with noise and follows a Markov chain defined by Langevin Dynamics using the gradients of the energy function. We evaluate MuCoLa on text generation with soft and hard constraints as well as their combinations obtaining significant improvements over competitive baselines for toxicity avoidance, sentiment control, and keyword-guided generation.
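The Langevin-dynamics sampler at the heart of MuCoLa can be sketched on a toy problem. Here the energy is a simple quadratic standing in for "LM log-likelihood plus differentiable constraints"; the step size and iteration count are arbitrary choices for illustration:

```python
import numpy as np

def langevin_step(x, grad_energy, step_size, rng):
    # One step of Langevin dynamics: gradient descent on the energy
    # plus Gaussian noise, so the chain samples from exp(-E(x)).
    noise = rng.standard_normal(x.shape)
    return x - step_size * grad_energy(x) + np.sqrt(2.0 * step_size) * noise

# Toy energy E(x) = 0.5 * ||x - mu||^2, whose gradient is (x - mu).
mu = np.array([1.0, -2.0])
grad_energy = lambda x: x - mu

rng = np.random.default_rng(0)
x = rng.standard_normal(2)          # initialize the "sequence" with noise
for _ in range(5000):
    x = langevin_step(x, grad_energy, 0.01, rng)
```

In the paper the chain runs over the embeddings of the entire output sequence at once (non-autoregressively) rather than over a two-dimensional toy state.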
The generation of symbolic music relies on the contextual representation capabilities of the generative model, and the most prevalent approaches are Transformer-based models. Learning musical context is also related to the structural elements of music, i.e., intro, verse, and chorus, which are currently overlooked by the research community. In this paper, we propose a hierarchical Transformer model to learn multi-scale contexts in music. In the encoding phase, we first design a fragment-scope localization layer to condense music into chords and sections. We then use a multi-scale attention mechanism to learn note-, chord-, and section-level contexts. In the decoding phase, we propose a hierarchical Transformer model that uses a fine decoder to generate sections in parallel and a coarse decoder to decode the combined music. We also design a music-style normalization layer to achieve a consistent musical style among the generated sections. Our model is evaluated on two open MIDI datasets, and experiments show that it outperforms contemporary music generation models. More excitingly, visual evaluation shows that our model excels at melody reuse, resulting in more realistic music.
Transformers and variational autoencoders (VAE) have been extensively employed for symbolic (e.g., MIDI) domain music generation. While the former boast an impressive capability in modeling long sequences, the latter allow users to willingly exert control over different parts (e.g., bars) of the music to be generated. In this paper, we are interested in bringing the two together to construct a single model that exhibits both strengths. The task is split into two steps. First, we equip Transformer decoders with the ability to accept segment-level, time-varying conditions during sequence generation. Subsequently, we combine the developed and tested in-attention decoder with a Transformer encoder, and train the resulting MuseMorphose model with the VAE objective to achieve style transfer of long pop piano pieces, in which users can specify musical attributes including rhythmic intensity and polyphony (i.e., harmonic fullness) they desire, down to the bar level. Experiments show that MuseMorphose outperforms recurrent neural network (RNN) based baselines on numerous widely-used metrics for style transfer tasks.
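The "in-attention" conditioning MuseMorphose develops can be sketched in a few lines: a segment-level (e.g., bar-level) condition embedding is added to every token's hidden state entering a decoder layer, so the time-varying condition reaches every layer. This is a simplified illustration, not the paper's full architecture:

```python
import numpy as np

def in_attention_condition(hidden, seg_emb, seg_ids):
    # Add a segment-level condition embedding to each token's hidden
    # state entering a decoder layer.
    #   hidden:  (seq_len, d) activations
    #   seg_emb: (n_segments, d) per-bar condition embeddings
    #   seg_ids: (seq_len,) segment index of each token
    return hidden + seg_emb[seg_ids]
```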
This paper presents an architecture, based on the Transformer deep learning model, for generating music for video games. The system generates music in several layers, following the standard layering strategy currently used to compose video game music. The music adapts to the psychological context of the player according to an arousal model of affect. Our motivation is to customize the music to the player's taste: players can select their preferred musical style by means of a set of music examples. We discuss current limitations and prospects for the future, such as collaborative and interactive control of the musical components.
Controllable Text Generation (CTG) is an emerging area in the field of natural language generation (NLG). It is regarded as crucial for the development of advanced text generation technologies that are more natural and better meet the specific constraints of practical applications. In recent years, methods using large-scale pre-trained language models (PLMs), in particular the widely used transformer-based PLMs, have become a new paradigm of NLG, allowing generation of more diverse and fluent text. However, due to the low interpretability of deep neural networks, the controllability of these methods needs to be guaranteed. To this end, controllable text generation using transformer-based PLMs has become a rapidly growing yet challenging new research hotspot. A diverse range of approaches has emerged in the past 3-4 years, targeting different CTG tasks that may require different types of controlled constraints. In this paper, we present a systematic critical review of the common tasks, main approaches, and evaluation methods in this area. Finally, we discuss the challenges that the field is facing and put forward various promising future directions. To the best of our knowledge, this is the first survey paper to summarize CTG techniques from the perspective of PLMs. We hope it can help researchers in related fields to quickly track the academic frontier, providing them with a landscape of the area and a roadmap for future research.
Monte Carlo Tree Search (MCTS) is a powerful approach for designing game-playing bots or solving sequential decision problems. The method relies on intelligent tree search that balances exploration and exploitation. MCTS performs random sampling in the form of simulations and stores statistics of actions to make more educated choices in each subsequent iteration. The method has become state of the art for combinatorial games; however, in more complex games (such as those with high branching factors or real-time ones), as well as in various practical domains (e.g., transportation, scheduling, or security), effective MCTS application often requires problem-dependent modifications or integration with other techniques. Such domain-specific modifications and hybrid approaches are the main focus of this survey. The last major MCTS survey was published in 2012; contributions that have appeared since then are of particular interest.
This work presents a thorough review of recent studies and advancements in text generation using Generative Adversarial Networks. The use of adversarial learning for text generation is promising, as it provides alternatives for generating so-called "natural" language. Nevertheless, adversarial text generation is not a simple task, as its foremost architecture, the Generative Adversarial Network, was designed to cope with continuous information (images) instead of discrete data (text). Thus, most works are based on three possible options, i.e., Gumbel-Softmax differentiation, Reinforcement Learning, and modified training objectives. All alternatives are reviewed in this survey, as they represent the most recent approaches for generating text using adversarial-based techniques. The selected works were taken from renowned databases, such as ScienceDirect, IEEE Xplore, Springer, the Association for Computing Machinery, and arXiv, and each selected work has been critically analyzed and assessed with regard to its objective, methodology, and experimental results.
Even with strong sequence models like Transformers, generating expressive piano performances with long-range musical structure remains challenging. Meanwhile, methods that compose well-structured melodies or lead sheets (melody + chords), i.e., simpler forms of music, have enjoyed greater success. Observing the above, we devise a two-stage Transformer-based framework that first composes a lead sheet and then embellishes it with accompaniment and expressive touches. Such a factorization also makes it possible to pretrain on non-piano data. Our objective and subjective experiments show that composing and then embellishing narrows the gap in structure between the current state of the art and real performances, and also improves other musical aspects such as richness and coherence.
Many social media users prefer consuming content in the form of videos rather than text. However, for content creators to produce videos with high click-through rates, many edits are needed to match the footage to the music. This poses additional challenges for amateur video makers. We therefore propose a novel attention-based VMT (Video-Music Transformer) that automatically generates piano scores from video frames. Using model-generated music also prevents the potential copyright infringement that often comes with reusing existing music. To the best of our knowledge, there is no work besides the proposed VMT that aims to compose music for videos. Moreover, a dataset with aligned video and symbolic music has also been lacking. We release a new dataset consisting of over 7 hours of piano scores with fine alignment between popular music videos and MIDI files. We conduct experiments with human evaluation on VMT, a Seq2Seq model (our baseline), and the original piano-version soundtracks. VMT achieves consistent improvements over the baseline on music smoothness and video relevance. In particular, through the relevance scores and our case study, our model demonstrates the capability of multimodal music generation for frame-level actor movements. Our VMT model, together with the new dataset, presents a promising research direction toward composing matching soundtracks for videos. We release our code at https://github.com/linchintung/vmt
In this paper, we propose the novel task of theatrical cue generation from dialogue, using a large-scale play-script dataset. Using over one million lines of dialogue and cues, we approach the cue generation problem as a controlled text generation task and show how the impact of dialogue can be enhanced using a language model with a dialogue/cue discriminator. In addition, we explore the use of topic keywords and emotions for controlled text generation. Extensive quantitative and qualitative experiments show that language models can be successfully used to generate plausible and attribute-controlled text in highly specialized domains such as play scripts. Supplementary materials are available at: https://catlab-team.github.io/cuegen.
Steering language generation towards objectives or away from undesired content has been a long-standing goal in utilizing language models (LM). Recent work has demonstrated reinforcement learning and weighted decoding as effective approaches to achieve a higher level of language control and quality, each with its pros and cons. In this work, we propose a novel critic decoding method for controlled language generation (CriticControl) that combines the strengths of reinforcement learning and weighted decoding. Specifically, we adopt the actor-critic framework to train an LM-steering critic from non-differentiable reward models. Similar to weighted decoding, our method freezes the language model and manipulates the output token distribution using the trained critic, improving training efficiency and stability. Evaluation of our method on three controlled generation tasks, namely topic control, sentiment control, and detoxification, shows that our approach generates more coherent and well-controlled texts than previous methods. In addition, CriticControl demonstrates superior generalization ability in zero-shot settings. Human evaluation studies also corroborate our findings.
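Manipulating a frozen LM's token distribution with a critic can be sketched as a reweighting of the next-token log-probabilities. The exact combination rule below (additive shift with a scale `beta`, then renormalization) is an illustrative assumption, not CriticControl's published formula:

```python
import numpy as np

def critic_reweight(lm_logprobs, critic_values, beta=1.0):
    # Weighted decoding with a frozen LM: shift each candidate token's
    # log-probability by its scaled critic value, then renormalize.
    logits = lm_logprobs + beta * critic_values
    logits = logits - logits.max()      # numerical stability
    probs = np.exp(logits)
    return probs / probs.sum()
```

With `beta = 0` the LM distribution is recovered unchanged, which is one reason this style of decoding tends to preserve fluency while steering content.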
Current language models achieve low perplexity, but their generations still suffer from toxic responses, repetitiveness, and contradictions. The standard language modeling setup fails to address these issues. In this paper, we introduce a new architecture, {\sc Director}, consisting of a unified generator-classifier with both a language modeling head and a classification head for each output token. Training is conducted jointly using both standard language modeling data and data labeled with desirable and undesirable sequences. Experiments in several settings show that the model has competitive training and decoding speed compared to standard language models while yielding superior results, alleviating the known issues while maintaining generation quality. It also outperforms existing model-guiding approaches in terms of both accuracy and efficiency.
In this paper, we present the dataset used for the data challenge organized by the Conference on Sound and Music Technology (CSMT). The CSMT data challenge requires participants to identify whether a given melody is generated by a computer or composed by a human. The dataset consists of two parts: a development dataset and an evaluation dataset. The development dataset contains only computer-generated melodies, whereas the evaluation dataset contains both computer-generated and human-composed melodies. The aim of the dataset is to examine whether computer-generated melodies can be distinguished by learning the features of generated melodies.
Existing approaches for generating multitrack music with Transformer models have been limited to either a small set of instruments or short music segments. This is partly due to the memory requirements of the lengthy input sequences necessitated by existing representations of multitrack music. In this work, we propose a compact representation that allows a diverse set of instruments while keeping the sequence length short. Using our proposed representation, we present the Multitrack Music Transformer (MTMT) for learning long-term dependencies in multitrack music. In a subjective listening test, our proposed model achieves competitive quality on unconditioned generation against two baseline models. We also show that our proposed model can generate samples longer than those produced by the baseline models and, moreover, can do so in half the inference time. Furthermore, we propose a new measure for analyzing musical self-attention and show that the trained model attends less to notes that form a dissonant interval with the current note, yet attends more to notes that are 4N beats away from the current one. Finally, our findings provide a novel foundation for future work exploring longer-form multitrack music generation and improving self-attention for music. All source code and audio samples are available at https://salu133445.github.io/mtmt/
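A compact multitrack encoding of the kind the abstract describes can be illustrated as one tuple per note rather than several events per note. The field names, ranges, and ordering rule below are hypothetical, chosen only to show why the sequence stays short even with many instruments:

```python
from typing import NamedTuple

class NoteEvent(NamedTuple):
    # One note as a single compact event. The exact fields and their
    # ranges are assumptions for illustration.
    beat: int        # beat on which the note starts
    position: int    # sub-beat position within the beat
    pitch: int       # MIDI pitch number (0-127)
    duration: int    # length in sub-beat ticks
    instrument: int  # program/track id

def encode_piece(notes):
    # Order events by time, then pitch, to form the model's input.
    return sorted(notes, key=lambda n: (n.beat, n.position, n.pitch))
```

Because each note costs a single event regardless of instrument count, sequence length grows with the number of notes, not with the number of tracks.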
Transfer learning, where a model is first pre-trained on a data-rich task before being finetuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts all text-based language problems into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled data sets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new "Colossal Clean Crawled Corpus", we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our data set, pre-trained models, and code.
Sequencing technologies are prone to errors, making error correction (EC) necessary for downstream applications. EC tools need to be manually configured for optimal performance. We find that the optimal parameters (e.g., k-mer size) are both tool- and dataset-dependent. Moreover, evaluating the performance of a given tool (i.e., alignment rate or gain) typically relies on a reference genome, but quality reference genomes are not always available. We introduce Lerna for the automated configuration of k-mer-based EC tools. Lerna first creates a language model (LM) of the uncorrected genomic reads; then, it computes the perplexity metric to evaluate the corrected reads for different parameter choices. Next, it finds the choice that produces the highest alignment rate, without using a reference genome. The underlying intuition of our approach is that the perplexity metric is inversely correlated with the quality of the assembly after error correction. Results: First, we show that the best k-mer value can vary across datasets, even for the same EC tool. Second, we show the gains of our LM using its attention-based Transformer component. We show the model's estimation of the perplexity metric before and after error correction; the lower the perplexity after correction, the better the k-mer size. We also show that the alignment rate and assembly quality of corrected reads are strongly negatively correlated with perplexity, enabling the automated selection of k-mer values for better error correction and, hence, improved assembly quality. Additionally, our attention-based models yield a significant runtime improvement for the entire pipeline, thanks to the parallelized attention mechanism and the use of JIT compilation for GPU inference.
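The perplexity-based selection rule at the core of Lerna can be sketched in a few lines: compute the LM perplexity of the corrected reads under each candidate parameter setting and pick the setting with the lowest score. The candidate k-mer values below are made up for illustration:

```python
import math

def perplexity(logprobs):
    # Perplexity from per-token natural-log probabilities: the
    # exponential of the mean negative log-likelihood under the LM.
    nll = -sum(logprobs) / len(logprobs)
    return math.exp(nll)

def pick_best_k(perplexity_by_k):
    # Lerna's selection rule in miniature: since alignment rate and
    # assembly quality correlate negatively with perplexity, choose
    # the k-mer size whose corrected reads score lowest.
    return min(perplexity_by_k, key=perplexity_by_k.get)
```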
As a new way of training generative models, the Generative Adversarial Net (GAN), which uses a discriminative model to guide the training of the generative model, has enjoyed considerable success in generating real-valued data. However, it has limitations when the goal is to generate sequences of discrete tokens. A major reason is that the discrete outputs from the generative model make it difficult to pass the gradient update from the discriminative model to the generative model. Also, the discriminative model can only assess a complete sequence; for a partially generated sequence, it is nontrivial to balance its current score against the future score once the entire sequence has been generated. In this paper, we propose a sequence generation framework, called SeqGAN, to solve these problems. Modeling the data generator as a stochastic policy in reinforcement learning (RL), SeqGAN bypasses the generator differentiation problem by directly performing gradient policy update. The RL reward signal comes from the GAN discriminator judged on a complete sequence, and is passed back to the intermediate state-action steps using Monte Carlo search. Extensive experiments on synthetic data and real-world tasks demonstrate significant improvements over strong baselines.
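The Monte Carlo search step that passes the discriminator's sequence-level reward back to intermediate tokens can be sketched as follows; the toy policy and discriminator are placeholders standing in for the learned generator and the GAN discriminator:

```python
import random

def rollout_reward(partial, policy_sample, discriminator, seq_len,
                   n_rollouts=8, rng=None):
    # SeqGAN's Monte Carlo search in miniature: complete the partial
    # token sequence n_rollouts times with the generator policy and
    # average the discriminator's score over the completed sequences.
    # This average is the reward for the intermediate state-action step.
    rng = rng or random.Random(0)
    total = 0.0
    for _ in range(n_rollouts):
        seq = list(partial)
        while len(seq) < seq_len:
            seq.append(policy_sample(seq, rng))
        total += discriminator(seq)
    return total / n_rollouts
```

The rollout reward is then used in a policy-gradient (REINFORCE-style) update of the generator, which is what lets the discrete sampling step be bypassed.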
Monte Carlo Tree Search (MCTS) is a recently proposed search method that combines the precision of tree search with the generality of random sampling. It has received considerable interest due to its spectacular success in the difficult problem of computer Go, but has also proved beneficial in a range of other domains. This paper is a survey of the literature to date, intended to provide a snapshot of the state of the art after the first five years of MCTS research. We outline the core algorithm's derivation, impart some structure on the many variations and enhancements that have been proposed, and summarise the results from the key game and non-game domains to which MCTS methods have been applied. A number of open research questions indicate that the field is ripe for future work.
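The core algorithm whose derivation the survey outlines follows four canonical phases: selection, expansion, simulation, and backpropagation. A compact sketch, using UCB1 for selection on a toy two-move game (the exploration constant and tie-breaking details are simplifications):

```python
import math
import random

class Node:
    def __init__(self, state, parent=None):
        self.state, self.parent = state, parent
        self.children, self.visits, self.value = [], 0, 0.0

def uct(node, c=1.41):
    # UCB1 applied to trees: mean reward plus an exploration bonus.
    return (node.value / node.visits
            + c * math.sqrt(math.log(node.parent.visits) / node.visits))

def mcts(root, expand, simulate, iterations=100):
    # `expand` lists the child states of a state; `simulate` plays out
    # a state at random and returns a reward in [0, 1].
    for _ in range(iterations):
        node = root
        # Selection: descend while every child has been tried once.
        while node.children and all(ch.visits for ch in node.children):
            node = max(node.children, key=uct)
        # Expansion: create children for an unexpanded node.
        if not node.children:
            node.children = [Node(s, node) for s in expand(node.state)]
        unvisited = [ch for ch in node.children if ch.visits == 0]
        if unvisited:
            node = random.choice(unvisited)
        # Simulation (random playout) from the chosen node.
        reward = simulate(node.state)
        # Backpropagation of the result up to the root.
        while node is not None:
            node.visits += 1
            node.value += reward
            node = node.parent
    return max(root.children, key=lambda ch: ch.visits)
```

The surveyed variations mostly modify one of these four phases, e.g., replacing the random playout with a learned evaluator or changing the selection formula.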