The field of Automatic Music Generation has seen significant progress thanks to the advent of Deep Learning. However, most of these results have been produced by unconditional models, which cannot interact with their users and therefore do not allow them to guide the generative process in meaningful and practical ways. Moreover, synthesizing music that remains coherent across longer timescales while still capturing the local aspects that make it sound ``realistic'' or ``human-like'' remains challenging. This is due both to the large computational requirements of working with long sequences of data and to the limitations imposed by the training schemes that are often employed. In this paper, we propose a generative model of symbolic music conditioned on human sentiment data. The model is a Transformer-GAN trained with labels that correspond to different configurations of the valence and arousal dimensions, which quantitatively represent human affective states. We tackle both of the problems above by employing an efficient linear version of Attention and by using a Discriminator both to improve the overall quality of the generated music and to strengthen its ability to follow the conditioning signals.
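To make the efficiency claim concrete, here is a minimal NumPy sketch of kernelized linear attention in the spirit of the ``efficient linear version of Attention'' mentioned above; the elu-based feature map, the shapes, and the function names are illustrative assumptions rather than the paper's exact formulation, and a causal decoder variant would replace the global sums with running prefix sums.

```python
import numpy as np

def elu_feature_map(x):
    # phi(x) = elu(x) + 1 keeps the features positive (assumed feature map).
    return np.where(x > 0, x + 1.0, np.exp(x))

def linear_attention(Q, K, V, eps=1e-6):
    """Non-causal linear attention: O(T * d^2) instead of O(T^2 * d).

    Q, K: (T, d_k), V: (T, d_v). The softmax is replaced by
    phi(Q) @ (phi(K)^T V), normalized per query.
    """
    Qp, Kp = elu_feature_map(Q), elu_feature_map(K)
    kv = Kp.T @ V                      # (d_k, d_v), shared across all queries
    z = Qp @ Kp.sum(axis=0)            # (T,) per-query normalizer
    return (Qp @ kv) / (z[:, None] + eps)

# Toy usage: a 128-step sequence with 32-dimensional heads.
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((128, 32)) for _ in range(3))
print(linear_attention(Q, K, V).shape)  # (128, 32)
```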
This work presents a thorough review of recent studies and advancements in text generation using Generative Adversarial Networks. The use of adversarial learning for text generation is promising, as it provides alternatives for generating so-called "natural" language. Nevertheless, adversarial text generation is not a simple task, as its foremost architecture, the Generative Adversarial Network, was designed to cope with continuous information (images) rather than discrete data (text). Thus, most works rely on one of three options, i.e., Gumbel-Softmax differentiation, Reinforcement Learning, and modified training objectives. All of these alternatives are reviewed in this survey, as they represent the most recent approaches for generating text with adversarial techniques. The selected works were taken from renowned databases, such as Science Direct, IEEE Xplore, Springer, Association for Computing Machinery, and arXiv, and each selected work has been critically analyzed and assessed to present its objective, methodology, and experimental results.
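Of the three options listed, Gumbel-Softmax is the most compact to illustrate. The NumPy sketch below shows a generic relaxed sampling step; the temperature and the toy vocabulary are arbitrary values, not taken from any of the surveyed works.

```python
import numpy as np

def gumbel_softmax_sample(logits, tau=1.0, rng=None):
    """Draw a differentiable 'soft' one-hot sample over a vocabulary.

    y_i = softmax((logits_i + g_i) / tau), with g_i ~ Gumbel(0, 1).
    As tau -> 0 the sample approaches a discrete one-hot token, which is
    how adversarial text generators keep gradients flowing to the generator.
    """
    rng = rng or np.random.default_rng()
    u = rng.uniform(low=1e-12, high=1.0, size=logits.shape)
    gumbel = -np.log(-np.log(u))
    y = (logits + gumbel) / tau
    y = y - y.max()                      # numerical stability
    return np.exp(y) / np.exp(y).sum()

logits = np.log(np.array([0.7, 0.2, 0.1]))  # toy 3-token vocabulary
print(gumbel_softmax_sample(logits, tau=0.5))
```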
This paper presents a new approach for controlling the emotion of symbolic music with Monte Carlo Tree Search. We use Monte Carlo Tree Search as a decoding mechanism to steer the probability distribution learned by a language model towards a given emotion. At every step of the decoding process, we use Predictor Upper Confidence for Trees (PUCT) to search for sequences that maximize the average values of emotion and quality as given by an emotion classifier and a discriminator, respectively. We use the language model as the policy of the pipeline and the combination of the emotion classifier and the discriminator as its value function. To decode the next token in a piece of music, we sample from the distribution of node visits created during the search. We evaluate the quality of the generated samples with respect to human-composed pieces using a set of objective metrics computed directly from the generated samples, and we also conduct a user study to evaluate how human subjects perceive the quality and emotion of the generated samples. We compare PUCT against Stochastic Bi-Objective Beam Search (SBBS) and Conditional Sampling (CS). Results suggest that PUCT outperforms SBBS and CS in almost all metrics of music quality and emotion.
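A minimal sketch of the two mechanics this abstract describes: PUCT child selection driven by a language-model prior P and a value Q that averages the emotion-classifier and discriminator scores, followed by sampling the next token from visit counts. The exploration constant, shapes, and toy numbers are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def puct_select(prior, value, visits, c_puct=1.5):
    """Pick the child/token to expand next.

    prior  : P(s, a) from the language model (policy).
    value  : Q(s, a), mean of emotion-classifier and discriminator scores.
    visits : N(s, a), visit count of each child.
    """
    total = visits.sum()
    ucb = value + c_puct * prior * np.sqrt(total + 1.0) / (1.0 + visits)
    return int(np.argmax(ucb))

def sample_next_token(visits, temperature=1.0, rng=None):
    """After the search budget is spent, sample from the node visit counts."""
    rng = rng or np.random.default_rng()
    probs = visits ** (1.0 / temperature)
    probs = probs / probs.sum()
    return int(rng.choice(len(visits), p=probs))

# Toy usage over a 4-token vocabulary.
prior = np.array([0.5, 0.2, 0.2, 0.1])
value = np.array([0.1, 0.6, 0.3, 0.2])
visits = np.array([10.0, 3.0, 5.0, 1.0])
print(puct_select(prior, value, visits), sample_next_token(visits))
```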
As a new way of training generative models, the Generative Adversarial Net (GAN), which uses a discriminative model to guide the training of the generative model, has enjoyed considerable success in generating real-valued data. However, it has limitations when the goal is to generate sequences of discrete tokens. A major reason is that the discrete outputs of the generative model make it difficult to pass gradient updates from the discriminative model to the generative model. Also, the discriminative model can only assess a complete sequence; for a partially generated sequence, it is nontrivial to balance its current score against the score it will receive once the entire sequence has been generated. In this paper, we propose a sequence generation framework, called SeqGAN, to solve these problems. Modeling the data generator as a stochastic policy in reinforcement learning (RL), SeqGAN bypasses the generator differentiation problem by directly performing a policy gradient update. The RL reward signal comes from the GAN discriminator judging a complete sequence and is passed back to the intermediate state-action steps using Monte Carlo search. Extensive experiments on synthetic data and real-world tasks demonstrate significant improvements over strong baselines.
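The Monte Carlo reward estimation that SeqGAN describes can be sketched as follows: a partial sequence is completed several times by the generator policy itself, each completion is scored by the discriminator, and the average score becomes the reward of that intermediate step (which then weights a REINFORCE-style policy gradient update). The policy and discriminator below are random stubs, purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB, MAX_LEN = 8, 12

def policy_sample(prefix):
    """Stub generator policy: returns the next token given a prefix."""
    return int(rng.integers(VOCAB))

def discriminator(sequence):
    """Stub discriminator: probability that a *complete* sequence is real."""
    return float(rng.random())

def rollout_reward(prefix, n_rollouts=16):
    """Estimate the reward of a partial sequence.

    Complete the prefix n_rollouts times with the policy itself, score each
    finished sequence with the discriminator, and average the scores.
    """
    scores = []
    for _ in range(n_rollouts):
        seq = list(prefix)
        while len(seq) < MAX_LEN:
            seq.append(policy_sample(seq))
        scores.append(discriminator(seq))
    return float(np.mean(scores))

# Per-step rewards for one generated sequence; in SeqGAN these weight the
# policy gradient: loss = -sum_t reward_t * log pi(y_t | y_<t).
sequence = [policy_sample([])]
rewards = []
for t in range(1, MAX_LEN + 1):
    rewards.append(rollout_reward(sequence[:t]))
    if t < MAX_LEN:
        sequence.append(policy_sample(sequence[:t]))
print(np.round(rewards, 2))
```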
This paper provides a comprehensive review of research on Natural Language Generation (NLG) over the past two decades, in particular with respect to data-to-text and text-to-text deep learning approaches, as well as new applications of NLG technology. The survey aims to (a) give an up-to-date synthesis of the core tasks in NLG and the architectures adopted in the field; (b) detail the various NLG tasks and datasets, and draw attention to the challenges of NLG evaluation, focusing on the different evaluation methods and their relationships; and (c) highlight some future directions and relatively recent research questions that arise from the growing synergy between NLG and other areas of artificial intelligence, such as computer vision, text, and computational creativity.
There is a growing interest in the area where machine learning and creativity meet. This survey gives an overview of the history and the current state of computational creativity theory, key machine learning techniques (including generative deep learning), and the corresponding automatic evaluation methods. After a critical discussion of the key contributions to the field, we outline the current research challenges and the emerging opportunities in this area.
Transformers and variational autoencoders (VAE) have been extensively employed for symbolic (e.g., MIDI) domain music generation. While the former boast an impressive capability in modeling long sequences, the latter allow users to willingly exert control over different parts (e.g., bars) of the music to be generated. In this paper, we are interested in bringing the two together to construct a single model that exhibits both strengths. The task is split into two steps. First, we equip Transformer decoders with the ability to accept segment-level, time-varying conditions during sequence generation. Subsequently, we combine the developed and tested in-attention decoder with a Transformer encoder, and train the resulting MuseMorphose model with the VAE objective to achieve style transfer of long pop piano pieces, in which users can specify musical attributes including rhythmic intensity and polyphony (i.e., harmonic fullness) they desire, down to the bar level. Experiments show that MuseMorphose outperforms recurrent neural network (RNN) based baselines on numerous widely-used metrics for style transfer tasks.
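A rough sketch, under stated assumptions, of the segment-level (``in-attention'') conditioning described above: the bar-level condition vector is projected and added to the hidden states entering every decoder layer, so the condition reaches all depths of the stack. The decoder layer itself is stubbed out, and all shapes and names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
D_MODEL, D_COND, N_LAYERS = 64, 16, 4

# Per-layer projections of the bar-level condition (assumed learned weights).
cond_proj = [rng.standard_normal((D_COND, D_MODEL)) * 0.02 for _ in range(N_LAYERS)]

def decoder_layer(h):
    """Stub for a causal Transformer decoder layer (self-attention + FFN)."""
    return h  # identity placeholder

def in_attention_decode(token_emb, bar_cond):
    """token_emb: (T, D_MODEL); bar_cond: (T, D_COND), one condition vector per
    bar, repeated over the tokens that belong to that bar."""
    h = token_emb
    for layer in range(N_LAYERS):
        h = h + bar_cond @ cond_proj[layer]   # inject the condition at every layer
        h = decoder_layer(h)
    return h

T = 32
out = in_attention_decode(rng.standard_normal((T, D_MODEL)),
                          rng.standard_normal((T, D_COND)))
print(out.shape)  # (32, 64)
```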
This survey draws a broad panoramic picture of the state of the art (SOTA) of research on generative methods for the analysis of social media data. It fills a gap, as the existing survey articles are either narrower in scope or dated. We include two important aspects that are currently gaining importance in mining and modelling social media: dynamics and networks. Social dynamics are important for understanding, for example, the spreading of influence or disease and the formation of friendships, while networks can capture a variety of complex relationships, providing additional insight and identifying important patterns that would otherwise go unnoticed.
Existing approaches for generating multitrack music with Transformer models have been limited to either a small set of instruments or short music segments. This is partly due to the memory requirements of the lengthy input sequences necessitated by existing representations for multitrack music. In this work, we propose a compact representation that allows a diverse set of instruments while keeping a short sequence length. Using our proposed representation, we present the Multitrack Music Transformer (MTMT) for learning long-term dependencies in multitrack music. In a subjective listening test, our proposed model achieves competitive quality on unconditioned generation against two baseline models. We also show that our proposed model can generate samples that are twice as long as those produced by the baseline models and, moreover, can do so in half the inference time. Furthermore, we propose a new measure for analyzing musical self-attention and show that the trained model learns to pay less attention to notes that form a dissonant interval with the current note, yet attends more to notes that are 4N beats away from the current one. Finally, our findings provide a novel foundation for future work exploring longer-form multitrack music generation and improving the self-attention of music. All source code and audio samples can be found at https://salu133445.github.io/mtmt/.
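The measure for analyzing musical self-attention is not spelled out in the abstract; one way to picture this kind of analysis is to average attention weights by the beat distance between the query note and the attended note, as in the generic sketch below (an assumption for illustration, not the authors' exact metric).

```python
import numpy as np

def mean_attention_by_beat_distance(attn, beats):
    """attn : (T, T) self-attention weights for one head (rows sum to 1).
    beats: (T,) beat index of each note/event.
    Returns {distance_in_beats: mean attention weight}."""
    dist = np.abs(beats[:, None] - beats[None, :])
    return {int(d): float(attn[dist == d].mean()) for d in np.unique(dist)}

rng = np.random.default_rng(0)
T = 16
attn = rng.random((T, T))
attn /= attn.sum(axis=1, keepdims=True)
beats = np.arange(T) // 2          # toy timeline: two events per beat
print(mean_attention_by_beat_distance(attn, beats))
```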
We consider the novel task of automatically generating text descriptions of music. Compared with other well-established text generation tasks such as image captioning, the scarcity of paired music-text datasets makes it a much more challenging task. In this paper, we exploit crowd-sourced music comments to construct a new dataset and propose a sequence-to-sequence model to generate text descriptions of music. More specifically, we use dilated convolutional layers as the basic components of the encoder and a memory-based recurrent neural network as the decoder. To enhance the authenticity and thematic focus of the generated text, we further propose fine-tuning the model with a discriminator and a novel topic evaluator. To measure the quality of the generated text, we also propose two new evaluation metrics, which are more aligned with human evaluation than traditional metrics such as BLEU. Experimental results verify that our model can generate fluent and meaningful comments that carry the thematic and content information of the original music.
Recent advances in deep neural language models, combined with the capacity of large-scale datasets, have accelerated the development of natural language generation systems that produce fluent and coherent text (to varying degrees of success) across a multitude of tasks and application contexts. However, controlling the output of these models for desired user needs is still an open challenge. This is crucial not only for customizing the content and style of the generated language, but also for their safe and reliable deployment in the real world. We present an extensive survey on the emerging topic of constrained neural language generation, in which we formally define and categorize natural language generation problems by distinguishing between conditions and constraints (the latter being testable conditions on the output text rather than on the input), present constrained text generation tasks, and review existing methods and evaluation metrics for constrained text generation. Our aim is to highlight recent progress and trends in this emerging field, informing on the most promising directions and the limitations towards advancing the state of the art of constrained neural language generation research.
The goals and methods of generative networks are fundamentally different from those of CNNs for classification, segmentation, or object detection. They were originally not conceived as tools for image analysis, but for generating natural-looking images. The adversarial training paradigm was proposed to stabilize generative methods and has proven to be highly successful, though it was by no means the first attempt. This chapter gives a basic introduction to the motivation behind Generative Adversarial Networks (GANs) and traces the path of their success by abstracting the basic task and working mechanism and deriving the difficulties of early practical approaches. Methods for more stable training are presented, along with typical signs of poor convergence and their causes. Although this chapter focuses on GANs for image generation and image analysis, the adversarial training paradigm itself is not specific to images and also generalizes to other tasks in image analysis. Architectural examples well known from semantic image segmentation and anomaly detection are presented before GANs are contrasted with further generative modeling approaches that have recently entered the scene, which allows a contextualized view of the limits, but also of the benefits, of GANs.
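For reference, the two-player objective introduced by the adversarial training paradigm discussed in this chapter is the standard GAN minimax game (notation as in the original GAN formulation):

\[
\min_G \max_D \; V(D, G) = \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\big[\log D(x)\big] + \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big]
\]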
Signal measurements in the form of time series are among the most common data types used in medical machine learning applications. Such datasets are often small, expensive to collect and annotate, and may involve privacy issues, all of which hinder our ability to train large, state-of-the-art deep learning models for biomedical applications. For time-series data, the suite of data augmentation strategies we can use to expand the size of a dataset is limited by the need to maintain the basic properties of the signal. Generative Adversarial Networks (GANs) can be used as an alternative data augmentation tool. In this paper, we present TTS-CGAN, a transformer-based conditional GAN model that can be trained on existing multi-class datasets and generate class-specific synthetic time-series sequences of arbitrary length. We elaborate on the model architecture and design strategies. Synthetic sequences generated by our model are indistinguishable from real ones and can be used to complement or replace real signals of the same type, thereby achieving the goal of data augmentation. To evaluate the quality of the generated data, we modify the wavelet coherence metric to compare the similarity between two sets of signals, and we also conduct a case study in which a mixture of synthetic and real data is used to train a deep learning model for sequence classification. Together with other visualization techniques and qualitative evaluation methods, we demonstrate that the synthetic data generated by TTS-CGAN resemble real data and that our model performs better than other state-of-the-art GAN models built for time-series generation.
The rise of deep learning algorithms has led many researchers to move away from classical signal processing methods for sound generation. Deep learning models have achieved expressive voice synthesis, realistic sound textures, and musical notes from virtual instruments. Nevertheless, the most suitable deep learning architecture is still under investigation, and the choice of architecture is tightly coupled to the audio representation. The raw waveform of a sound can be too dense and rich for deep learning models to process efficiently: its complexity increases training time and computational cost, and it does not represent sound in the way it is perceived. Therefore, in many cases, raw audio is transformed into a compressed and more meaningful form through upsampling, feature extraction, or even the adoption of a higher-level representation of the waveform. Studies have also examined the chosen form, additional conditioning representations, different model architectures, and the many metrics used to evaluate the reconstructed sound. This paper gives an overview of the audio representations applied to sound synthesis with deep learning, and presents the most significant methods for developing and evaluating a sound synthesis architecture with deep learning models, always in relation to the audio representation.
The goal of building dialogue agents that can converse with humans naturally has been a long-standing dream of researchers since the early days of artificial intelligence. The well-known Turing Test proposed to judge the ultimate validity of an artificial intelligence agent by the indistinguishability of its dialogues from humans'. It should come as no surprise that human-level dialogue systems are very challenging to build. But while early efforts on rule-based systems found limited success, the emergence of deep learning has enabled great advances on this topic. In this thesis, we focus on methods that address the numerous issues underlying the gap between artificial conversational agents and human-level interlocutors. These methods were proposed and experimented with in ways inspired by general state-of-the-art AI methodologies, but they also target the characteristics that dialogue systems possess.
Musical expression requires control of both what notes are played and how they are performed. Conventional audio synthesizers provide detailed expressive controls, but at the cost of realism. Black-box neural audio synthesis and concatenative samplers can produce realistic audio, but have few mechanisms for control. In this work, we introduce MIDI-DDSP, a hierarchical model of musical instruments that enables both realistic neural audio synthesis and detailed user control. Starting from interpretable Differentiable Digital Signal Processing (DDSP) synthesis parameters, we infer musical notes and the high-level properties of their expressive performance (such as timbre, vibrato, dynamics, and articulation). This creates a three-level hierarchy (notes, performance, synthesis) that gives individuals the choice to intervene at each level, or to use trained priors (performance given notes, synthesis given performance) for creative assistance. Through quantitative experiments and listening tests, we demonstrate that this hierarchy can reconstruct high-fidelity audio, accurately predict performance attributes for a note sequence, independently manipulate the attributes of a given performance, and, as a complete system, generate realistic audio from a novel note sequence. By utilizing an interpretable hierarchy with multiple levels of granularity, MIDI-DDSP opens the door to assistive tools that empower individuals across a diverse range of musical experience.
Score-based generative models and diffusion probabilistic models have been successful at generating high-quality samples in continuous domains such as images and audio. However, due to their Langevin-inspired sampling mechanisms, their application to discrete and sequential data has been limited. In this work, we present a technique for training diffusion models on sequential data by parameterizing the discrete domain in the continuous latent space of a pre-trained variational autoencoder. Our method is non-autoregressive and learns to generate sequences of latent embeddings through the reverse process, offering parallel generation with a constant number of iterative refinement steps. We apply this technique to modeling symbolic music and show strong unconditional generation and post-hoc conditional infilling results compared to autoregressive language models operating over the same continuous embeddings.
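As a rough sketch of the mechanism described above, the snippet below shows a standard DDPM-style forward noising of VAE latents and the denoising training target. The paper itself uses a score-based, continuous-time formulation, so the discrete schedule, the latent dimensionality, and all names here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
T_STEPS, SEQ_LEN, D_LATENT = 1000, 16, 32   # 16 segment-level VAE latents (assumed)

# Linear beta schedule and cumulative alphas (standard DDPM bookkeeping).
betas = np.linspace(1e-4, 0.02, T_STEPS)
alpha_bar = np.cumprod(1.0 - betas)

def forward_noise(z0, t):
    """q(z_t | z_0): corrupt clean VAE latents z0 at diffusion step t."""
    eps = rng.standard_normal(z0.shape)
    z_t = np.sqrt(alpha_bar[t]) * z0 + np.sqrt(1.0 - alpha_bar[t]) * eps
    return z_t, eps

# Training target: a denoiser predicts eps from (z_t, t);
# the loss is || eps_pred - eps ||^2 averaged over the batch.
z0 = rng.standard_normal((SEQ_LEN, D_LATENT))   # latents from a frozen VAE encoder
z_t, eps = forward_noise(z0, t=500)
print(z_t.shape, eps.shape)
```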
Real-time music accompaniment generation has a wide range of applications in the music industry, such as music education and live performance. However, automatic real-time music accompaniment generation is still understudied and often faces a trade-off between logical latency and exposure bias. In this paper, we propose SongDriver, a real-time music accompaniment generation system without logical latency or exposure bias. Specifically, SongDriver divides one accompaniment generation task into two phases: 1) an arrangement phase, in which a Transformer model first arranges chords for the input melody in real time and caches them for the next phase instead of playing them out; and 2) a prediction phase, in which a CRF model generates a playable multi-track accompaniment for the upcoming melody based on the previously cached chords. With this two-phase strategy, SongDriver directly generates the accompaniment for the upcoming melody, achieving zero logical latency. Furthermore, when predicting the chords of a time step, SongDriver refers to the cached chords from the first phase rather than to its own previous predictions, which avoids the exposure bias problem. Since the input length is often constrained under real-time conditions, another potential problem is the loss of long-term sequential information. To make up for this disadvantage, we extract four musical features from the long-term music before the current time step as global information. In experiments, we train SongDriver on several open-source datasets and on an original ``aisong'' dataset built from Chinese-style modern pop music scores. The results show that SongDriver outperforms existing SOTA (state-of-the-art) models on both objective and subjective metrics while significantly reducing physical latency.
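A schematic sketch of the two-phase scheduling described above: at each real-time step the system accompanies the upcoming melody using chords cached at the previous step (zero logical latency), then arranges and caches chords for the melody it has just received; the cache, not the model's own past outputs, feeds the next step, which sidesteps exposure bias. Both model calls are stubs, and every name and data type is an assumption.

```python
from collections import deque

def arrange_chords(melody_chunk):
    """Stub for the phase-1 Transformer: arrange chords for the current melody chunk."""
    return [f"chord_for({note})" for note in melody_chunk]

def predict_accompaniment(cached_chords, upcoming_melody):
    """Stub for the phase-2 CRF: a playable multi-track accompaniment."""
    return [(chord, note) for chord, note in zip(cached_chords, upcoming_melody)]

def realtime_loop(melody_stream):
    cache = deque()                       # chords arranged but not yet used
    for step, melody_chunk in enumerate(melody_stream):
        if cache:
            # Phase 2: accompany the *upcoming* melody from previously cached
            # chords, so nothing has to be computed after the notes arrive.
            print(f"step {step}: play {predict_accompaniment(list(cache), melody_chunk)}")
        # Phase 1: arrange chords for the chunk just heard and cache them for
        # the next step instead of playing them out.
        cache = deque(arrange_chords(melody_chunk))

realtime_loop([["C4", "E4"], ["G4", "A4"], ["F4", "D4"]])
```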
Controllable Text Generation (CTG) is an emerging area in the field of natural language generation (NLG). It is regarded as crucial for the development of advanced text generation technologies that are more natural and better meet the specific constraints of practical applications. In recent years, methods using large-scale pre-trained language models (PLMs), in particular the widely used transformer-based PLMs, have become a new paradigm of NLG, allowing the generation of more diverse and fluent text. However, due to the limited interpretability of deep neural networks, the controllability of these methods needs to be guaranteed. To this end, controllable text generation using transformer-based PLMs has become a rapidly growing yet challenging new research hotspot. A diverse range of approaches has emerged in the last three to four years, targeting different CTG tasks that may require different types of controlled constraints. In this paper, we present a systematic critical review of the common tasks, main approaches, and evaluation methods in this area. Finally, we discuss the challenges the field is facing and put forward various promising future directions. To the best of our knowledge, this is the first survey paper to summarize CTG techniques from the perspective of PLMs. We hope it can help researchers in related fields to quickly track the academic frontier, providing them with a landscape of the area and a roadmap for future research.
This paper presents an architecture, based on the Transformer deep learning model, for generating music for video games. The system generates music in several layers, following the standard layering strategy currently used when composing video game music. The music adapts to the player's psychological context according to a model of arousal. Our motivation is to customize the music to the player's taste: the player can select the musical style they prefer through a series of musical examples. We discuss current limitations and prospects for the future, such as collaborative and interactive control of the musical components.