We propose an end-to-end empathetic dialogue speech synthesis (DSS) model that considers both the linguistic and prosodic contexts of the dialogue history. Empathy is the active attempt by humans to get inside the interlocutor during a dialogue, and empathetic DSS is a technology to implement this behavior in spoken dialogue systems. Our model is conditioned on the history of linguistic and prosodic features in order to predict an appropriate dialogue context. As such, it can be regarded as an extension of conventional linguistic-feature-based dialogue history modeling. To train the empathetic DSS model effectively, we investigate 1) a self-supervised learning model pretrained with large speech corpora, 2) a style-guided training that uses a prosody embedding of the current utterance to be predicted from the dialogue context embedding, 3) a cross-modal attention that combines the text and speech modalities, and 4) a sentence-wise embedding to achieve fine-grained prosody modeling rather than utterance-wise modeling. The evaluation results demonstrate that 1) simply considering the prosodic contexts of the dialogue history does not improve the speech quality in empathetic DSS, and 2) introducing the style-guided training and sentence-wise embedding modeling achieves higher speech quality than the conventional method.
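As a concrete illustration of item 3 above, the cross-modal attention can be pictured as text-side queries attending over prosody-side keys and values from the dialogue history. The following PyTorch sketch is a minimal, assumption-laden reading of that idea; the module name, dimensions, and mean-pooling are all hypothetical and not the authors' code:

```python
import torch
import torch.nn as nn

class CrossModalContextEncoder(nn.Module):
    """Hypothetical sketch: fuse linguistic and prosodic histories with attention."""
    def __init__(self, d_model=256, n_heads=4):
        super().__init__()
        # Text embeddings of past utterances attend over prosody embeddings.
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.proj = nn.Linear(d_model, d_model)

    def forward(self, text_hist, prosody_hist):
        # text_hist, prosody_hist: (batch, n_past_utterances, d_model)
        fused, _ = self.attn(query=text_hist, key=prosody_hist, value=prosody_hist)
        # Mean-pool over the history to get one dialogue-context vector.
        return self.proj(fused.mean(dim=1))

# Usage: a context vector that conditions the DSS decoder.
enc = CrossModalContextEncoder()
ctx = enc(torch.randn(2, 5, 256), torch.randn(2, 5, 256))  # (2, 256)
```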
We present STUDIES, a new speech corpus for developing a voice agent that can speak in a friendly manner. Humans naturally control their speech prosody to empathize with each other. By incorporating this "empathetic dialogue" behavior into a spoken dialogue system, we can develop a voice agent that responds to users naturally. We designed the STUDIES corpus to include a speaker who explicitly empathizes with the interlocutor's emotion. We describe our methodology for constructing an empathetic dialogue speech corpus and report the analysis results of the STUDIES corpus. We conducted a text-to-speech experiment as an initial investigation into how to develop more natural voice agents that can tune their speaking style in response to the interlocutor's emotion. The results show that using the interlocutor's emotion label and conversational context embedding can produce speech with the same degree of naturalness as that synthesized by using the agent's emotion label. Our project page for the STUDIES corpus is http://sython.org/corpus/studies.
Recent text-to-speech (TTS) has achieved quality comparable to that of humans; however, its application to spoken dialogue has not been widely studied. This study aims to realize TTS that closely resembles human dialogue. First, we record and transcribe actual spontaneous dialogues. Then, the proposed dialogue TTS is trained in two stages. In the first stage, a variational autoencoder (VAE)-VITS or Gaussian mixture variational autoencoder (GMVAE)-VITS is trained, which extends VITS, a recently proposed end-to-end TTS model. A style encoder that extracts a latent speaking-style representation from speech is trained jointly with the TTS. In the second stage, a style predictor is trained to predict the speaking style to be synthesized from the dialogue history. During inference, by passing the style representation predicted by the style predictor to VAE/GMVAE-VITS, speech can be synthesized in a style appropriate to the dialogue context. Subjective evaluation results show that the proposed method outperforms the original VITS in terms of dialogue-level naturalness.
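The second-stage style predictor is, in essence, a sequence model over per-turn representations. A minimal sketch under assumed interfaces (GRU-based; the paper's actual architecture may differ):

```python
import torch
import torch.nn as nn

class StylePredictor(nn.Module):
    """Sketch: predict the latent style of the next utterance from history.

    Assumes each past turn is already reduced to a combined text+style
    vector; layer sizes are illustrative, not the paper's.
    """
    def __init__(self, d_in=512, d_hidden=256, d_style=64):
        super().__init__()
        self.rnn = nn.GRU(d_in, d_hidden, batch_first=True)
        self.out = nn.Linear(d_hidden, d_style)

    def forward(self, history):
        # history: (batch, n_turns, d_in) -- one vector per past turn
        _, h = self.rnn(history)
        return self.out(h[-1])  # (batch, d_style), fed to VAE/GMVAE-VITS

pred = StylePredictor()
style = pred(torch.randn(4, 6, 512))  # predicted style for the next turn
```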
Entrainment is the phenomenon by which an interlocutor adapts their speaking style to align with their partner in conversations. It has been found in different dimensions, such as acoustic, prosodic, lexical or syntactic. In this work, we explore and utilize the entrainment phenomenon to improve spoken dialogue systems for voice assistants. We first examine the existence of the entrainment phenomenon in human-to-human dialogues with respect to acoustic features and then extend the analysis to emotion features. The analysis results show strong evidence of entrainment in terms of both acoustic and emotion features. Based on these findings, we implement two entrainment policies and assess whether integrating the entrainment principle into a Text-to-Speech (TTS) system improves the synthesis performance and the user experience. We find that integrating the entrainment principle into a TTS system brings performance improvements when considering acoustic features, while no obvious improvement is observed when considering emotion features.
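An acoustic-level entrainment policy can be as simple as shifting the synthesis targets part-way toward the user's measured features. A toy sketch of such a blending rule (the paper's two policies are not specified here; the linear interpolation below is an assumption):

```python
import numpy as np

def entrain(agent_feats: np.ndarray, user_feats: np.ndarray,
            alpha: float = 0.3) -> np.ndarray:
    """Move agent prosody (e.g. mean F0, energy, rate) toward the user's.

    alpha=0 keeps the default TTS style; alpha=1 copies the user.
    """
    return (1.0 - alpha) * agent_feats + alpha * user_feats

# e.g. [mean F0 (Hz), energy (dB), speaking rate (syll/s)]
agent = np.array([180.0, 62.0, 4.5])
user = np.array([140.0, 58.0, 3.8])
print(entrain(agent, user))  # targets passed to the TTS front-end
```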
Emotional voice conversion (EVC) seeks to convert the emotional state of an utterance while preserving the linguistic content and speaker identity. In EVC, emotions are usually treated as discrete categories, overlooking the fact that speech also conveys emotions at various intensity levels that the listener can perceive. In this paper, we aim to explicitly characterize and control the intensity of emotion. We propose to disentangle the speaker style from the linguistic content and encode the speaker style into a style embedding in a continuous space that forms the prototype of the emotion embedding. We further learn the actual emotion encoder from an emotion-labelled database and study the use of relative attributes to represent fine-grained emotion intensity. To ensure emotional intelligibility, we incorporate an emotion classification loss and an emotion embedding similarity loss into the training of the EVC network. As desired, the proposed network controls the fine-grained emotion intensity in the output speech. Through both objective and subjective evaluations, we validate the effectiveness of the proposed network for emotional expressiveness and emotion intensity control.
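Relative attributes reduce intensity modeling to learning a ranking function: a score w·x should be larger for the more emotional sample of each ordered pair. A simplified pairwise-hinge sketch of that idea (the original relative-attributes method uses a RankSVM objective; features and margins below are placeholders):

```python
import torch
import torch.nn as nn

# w scores emotion intensity from an utterance-level feature vector.
ranker = nn.Linear(128, 1, bias=False)
opt = torch.optim.SGD(ranker.parameters(), lr=1e-2)

def rank_loss(x_strong, x_weak, margin=1.0):
    """Hinge loss enforcing score(strong) > score(weak) + margin."""
    s = ranker(x_strong).squeeze(-1)
    w = ranker(x_weak).squeeze(-1)
    return torch.clamp(margin - (s - w), min=0.0).mean()

# Toy ordered pairs: e.g. emotional vs. neutral renditions of a sentence.
x_strong, x_weak = torch.randn(32, 128), torch.randn(32, 128)
for _ in range(100):
    opt.zero_grad()
    loss = rank_loss(x_strong, x_weak)
    loss.backward()
    opt.step()
# At run time, the normalized score w.x serves as an intensity scalar.
```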
Emotional speech synthesis aims to synthesize human voices with various emotional effects. Current studies mainly focus on imitating an averaged style belonging to a specific emotion type. In this paper, we seek to generate speech with a mixture of emotions at run time. We propose a novel formulation that measures the relative difference between speech samples of different emotions. We then incorporate our formulation into a sequence-to-sequence emotional text-to-speech framework. During training, the framework not only explicitly characterizes emotion styles but also explores the ordinal nature of emotions by quantifying the differences with other emotions. At run time, we control the model to produce the desired emotion mixture by manually defining an emotion attribute vector. Objective and subjective evaluations validate the effectiveness of the proposed framework. To our best knowledge, this is the first study on modelling, synthesizing and evaluating mixed emotions in speech.
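At run time, the mixture can be realized by weighting per-emotion style representations with the manually defined attribute vector. A hedged sketch of that interface only (the paper's formulation additionally quantifies relative differences between emotions during training, which is omitted here):

```python
import torch

# Hypothetical prototype embeddings for each emotion category.
emotions = ["neutral", "happy", "sad", "angry", "surprise"]
prototypes = {e: torch.randn(64) for e in emotions}

def mix_emotions(attr_vector: dict) -> torch.Tensor:
    """Blend emotion prototypes with user-chosen weights in [0, 1]."""
    z = torch.zeros(64)
    total = sum(attr_vector.values())
    for emo, w in attr_vector.items():
        z += (w / total) * prototypes[emo]
    return z  # conditioning vector for the seq2seq TTS decoder

# e.g. mostly sad with a touch of anger:
style = mix_emotions({"sad": 0.8, "angry": 0.2})
```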
Most current TTS datasets are collections of individual utterances, with little dialogue aspect in terms of both style and metadata. In this paper, we introduce DailyTalk, a high-quality conversational speech dataset designed for text-to-speech. We sampled, modified, and recorded 2,541 dialogues from the open-domain dialogue dataset DailyDialog, which are adequately long to represent the context of each dialogue. During the data construction step, we maintained the attribute distributions originally annotated in DailyDialog to support the diverse dialogues in DailyTalk. On top of the dataset, we extend prior work as our baseline, in which a non-autoregressive TTS model is conditioned on historical information in a dialogue. We collect metadata so that a TTS model can learn historical dialogue information, the key to generating context-aware speech. From the baseline experiment results, we show that DailyTalk can be used to train neural text-to-speech models, and our baseline can represent contextual information. The DailyTalk dataset and baseline code are freely available under the CC-BY-SA 4.0 license.
This paper proposes a hierarchical generative model with multi-grained latent variables to synthesize expressive speech. In recent years, fine-grained latent variables have been introduced into text-to-speech synthesis, enabling fine-grained control of the prosody and speaking style of synthesized speech. However, the naturalness of speech degrades when these latent variables are obtained by sampling from a standard Gaussian prior. To solve this problem, we propose a novel framework for modeling the fine-grained latent variables that considers their dependence on the input text, the hierarchical linguistic structure, and the temporal structure of the latent variables. This framework consists of a multi-grained variational autoencoder, a conditional prior, and a multi-level autoregressive latent converter, which obtains latent variables at different temporal resolutions and samples the finer-level latent variables from the coarser-level ones while taking the input text into account. Experimental results indicate an appropriate method for sampling fine-grained latent variables without a reference signal at the synthesis stage. Our proposed framework also provides controllability of the speaking style over an entire utterance.
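The heart of the latent converter is autoregressive sampling of finer-level latents conditioned on the coarser level and the text. A compressed, assumption-laden sketch (the real framework pairs this with a multi-grained VAE and a learned conditional prior; shapes and modules here are illustrative):

```python
import torch
import torch.nn as nn

class LatentConverter(nn.Module):
    """Sketch: sample phrase-level latents from an utterance-level latent."""
    def __init__(self, d_text=256, d_z=16, d_hidden=128):
        super().__init__()
        self.rnn = nn.GRUCell(d_text + d_z, d_hidden)
        self.mu = nn.Linear(d_hidden, d_z)
        self.logvar = nn.Linear(d_hidden, d_z)

    def forward(self, text_units, z_coarse):
        # text_units: (batch, n_phrases, d_text); z_coarse: (batch, d_z)
        b = text_units.size(0)
        h = torch.zeros(b, self.rnn.hidden_size)
        z_prev, zs = z_coarse, []
        for i in range(text_units.size(1)):  # autoregressive over phrases
            h = self.rnn(torch.cat([text_units[:, i], z_prev], dim=-1), h)
            mu, logvar = self.mu(h), self.logvar(h)
            z_prev = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
            zs.append(z_prev)
        return torch.stack(zs, dim=1)  # finer-level latents, one per phrase

conv = LatentConverter()
z_fine = conv(torch.randn(2, 4, 256), torch.randn(2, 16))  # (2, 4, 16)
```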
In this paper, we present a novel method for phoneme-level prosody control of F0 and duration using intuitive discrete labels. We propose an unsupervised prosodic clustering process which is used to discretize phoneme-level F0 and duration features from a multispeaker speech dataset. These features are fed as an input sequence of prosodic labels to a prosody encoder module which augments an autoregressive attention-based text-to-speech model. We utilize various methods in order to improve prosodic control range and coverage, such as augmentation, F0 normalization, balanced clustering for duration and speaker-independent clustering. The final model enables fine-grained phoneme-level prosody control for all speakers contained in the training set, while maintaining the speaker identity. Instead of relying on reference utterances for inference, we introduce a prior prosody encoder which learns the style of each speaker and enables speech synthesis without the requirement of reference audio. We also fine-tune the multispeaker model on unseen speakers with limited amounts of data, as a realistic application scenario, and show that the prosody control capabilities are maintained, verifying that the speaker-independent prosodic clustering is effective. Experimental results show that the model has high output speech quality and that the proposed method allows efficient prosody control within each speaker's range despite the variability that a multispeaker setting introduces.
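The discretization step can be approximated with ordinary k-means over per-phoneme prosodic statistics, with per-speaker F0 normalization providing the speaker independence. A sketch under those assumptions (the paper additionally uses balanced clustering for duration, not shown):

```python
import numpy as np
from sklearn.cluster import KMeans

def prosody_labels(f0, dur, speaker_ids, n_clusters=5, seed=0):
    """Discretize phoneme-level F0/duration into cluster labels.

    f0, dur: (n_phonemes,) arrays; speaker_ids: parallel array of speaker tags.
    """
    f0 = f0.astype(float)
    for spk in np.unique(speaker_ids):
        m = speaker_ids == spk
        # Per-speaker z-normalization makes the clustering speaker-independent.
        f0[m] = (f0[m] - f0[m].mean()) / (f0[m].std() + 1e-8)
    f0_labels = KMeans(n_clusters, random_state=seed, n_init=10).fit_predict(f0[:, None])
    dur_labels = KMeans(n_clusters, random_state=seed, n_init=10).fit_predict(
        np.log(dur[:, None] + 1e-8))  # log-duration is closer to Gaussian
    return f0_labels, dur_labels  # input label sequences for the prosody encoder

f0 = np.random.uniform(80, 300, 1000)
dur = np.random.uniform(0.03, 0.3, 1000)
spk = np.random.randint(0, 4, 1000)
print(prosody_labels(f0, dur, spk)[0][:10])
```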
The goal of building dialogue agents that can converse with humans naturally has been a long-standing dream of researchers since the early days of artificial intelligence. The well-known Turing Test proposed to judge the ultimate validity of an artificial intelligence agent by the indistinguishability of its dialogues from humans'. It should come as no surprise that human-level dialogue systems are very challenging to build. But, while early efforts on rule-based systems found limited success, the emergence of deep learning enabled great advances on this topic. In this thesis, we focus on methods that address the numerous issues that maintain the gap between artificial conversational agents and human-level interlocutors. These methods were proposed and experimented with in ways inspired by general state-of-the-art AI methodologies, but they also target the particular characteristics that dialogue systems possess.
Expressive speech synthesis, such as audiobook synthesis, remains challenging for style representation learning and prediction. Deriving styles from reference audio or predicting style tags from text requires a huge amount of labeled data, which is costly to acquire and difficult to define and annotate accurately. In this paper, we propose a novel framework for learning style representations from abundant plain text in a self-supervised manner. It leverages an emotion lexicon and uses contrastive learning and deep clustering. We further integrate the style representation as a conditioning embedding in a multi-style Transformer TTS. Compared with a multi-style TTS that predicts style tags trained on the same dataset but with human annotations, our method achieves improved results according to subjective evaluations on both in-domain and out-of-domain test sets. Moreover, with the implicit context-aware style representation, the emotion transition of synthesized audio in long paragraphs appears more natural. Audio samples are available on the demo website.
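The contrastive part can be approximated with an InfoNCE-style objective in which two sentences that hit the same emotion-lexicon entry form a positive pair. A sketch of that loss alone (the deep-clustering component is omitted; the pairing strategy and shapes are assumptions):

```python
import torch
import torch.nn.functional as F

def info_nce(z_a, z_b, temperature=0.1):
    """Contrastive loss: row i of z_a matches row i of z_b (a positive pair
    built, e.g., from two sentences hitting the same emotion-lexicon entry).
    """
    z_a, z_b = F.normalize(z_a, dim=-1), F.normalize(z_b, dim=-1)
    logits = z_a @ z_b.t() / temperature       # (batch, batch) similarities
    targets = torch.arange(z_a.size(0))        # diagonal = positives
    return F.cross_entropy(logits, targets)

# Toy usage: style embeddings from a text encoder (hypothetical shapes).
loss = info_nce(torch.randn(16, 128), torch.randn(16, 128))
```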
This paper presents an expressive speech synthesis architecture for modeling and controlling the speaking style at the word level. It attempts to learn word-level stylistic and prosodic representations of the speech data with the help of two encoders. The first one models style by finding a combination of style tokens for each word given the acoustic features, and the second outputs a word-level sequence conditioned only on the phonetic information, so as to disentangle it from the style information. The two encoder outputs are aligned and concatenated with the phoneme encoder outputs and then decoded with a Non-Attentive Tacotron model. An additional prior encoder is used to predict the style tokens autoregressively, so that the model is able to run without a reference utterance. We find that the resulting model gives both word-level and global control over the style, as well as prosody transfer capabilities.
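The first encoder's token lookup resembles a Global-Style-Token mechanism applied per word. A minimal sketch with guessed sizes (not the authors' implementation):

```python
import torch
import torch.nn as nn

class WordStyleTokens(nn.Module):
    """Sketch: express each word's style as attention weights over a token bank."""
    def __init__(self, n_tokens=10, d_style=64, d_acoustic=80):
        super().__init__()
        self.tokens = nn.Parameter(torch.randn(n_tokens, d_style))
        self.query = nn.Linear(d_acoustic, d_style)

    def forward(self, word_acoustic):
        # word_acoustic: (batch, n_words, d_acoustic) pooled acoustic features
        q = self.query(word_acoustic)                        # (B, W, d_style)
        attn = torch.softmax(q @ self.tokens.t(), dim=-1)    # (B, W, n_tokens)
        return attn @ self.tokens                            # word-level style

wst = WordStyleTokens()
style_seq = wst(torch.randn(2, 7, 80))  # concatenated with phoneme encodings
```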
Automatic emotion recognition in conversation (ERC) is crucial for emotion-aware conversational artificial intelligence. This paper proposes a distribution-based framework that formulates ERC as a sequence-to-sequence problem for emotion distribution estimation. The inherent ambiguity of emotions and the subjectivity of human perception lead to disagreements in emotion labels, which is handled naturally in our framework from the perspective of uncertainty estimation in emotion distributions. A Bayesian training loss is introduced to improve the uncertainty estimation by conditioning each emotional state on an utterance-specific Dirichlet prior distribution. Experimental results on the IEMOCAP dataset show that the sequence-to-sequence ERC system outperformed the single-utterance-based system, and that the proposed distribution-based ERC methods not only achieve better classification accuracy but also provide improved uncertainty estimation.
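The Bayesian training loss can be read as the expected cross-entropy when class probabilities follow a predicted Dirichlet: for label $y$ and concentrations $\alpha$ with $S = \sum_k \alpha_k$, the per-utterance loss is $\psi(S) - \psi(\alpha_y)$, where $\psi$ is the digamma function. A sketch of that computation (as used in evidential-style training; the exact parameterization in the paper may differ):

```python
import torch
import torch.nn.functional as F

def dirichlet_ce(logits, labels):
    """Expected cross-entropy when class probabilities follow Dirichlet(alpha).

    logits: (batch, n_classes) raw network outputs;
    labels: (batch,) integer emotion labels.
    """
    alpha = F.softplus(logits) + 1.0            # concentrations > 1
    s = alpha.sum(-1, keepdim=True)             # Dirichlet strength
    # E_{p~Dir(alpha)}[-log p_y] = digamma(S) - digamma(alpha_y)
    loss = torch.digamma(s).squeeze(-1) - \
           torch.digamma(alpha.gather(1, labels[:, None]).squeeze(-1))
    return loss.mean()

loss = dirichlet_ce(torch.randn(8, 6), torch.randint(0, 6, (8,)))
```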
A good empathetic dialogue system should first track and understand a user's emotion and then reply with an appropriate emotion. However, current approaches to this task either focus on improving the understanding of the user's emotion or on proposing better response strategies, and very few works consider both at the same time. Our work attempts to fill this vacancy. Inspired by task-oriented dialogue systems, we propose a novel empathetic response generation model with emotion-aware dialogue management. The emotion-aware dialogue management contains two parts: (1) emotion state tracking maintains the current emotion state of the user, and (2) empathetic dialogue policy selection predicts a target emotion and the user's intent based on the results of the emotion state tracking. The predicted information is then used to guide the generation of responses. Experimental results show that dynamically managing different information can help the model generate more empathetic responses compared with several baselines under both automatic and human evaluations.
Emotion recognition (ER) aims to classify a person's utterance into different emotion categories. Based on early fusion and self-attention-based multimodal interaction between the text and acoustic modalities, in this paper we propose a multimodal multitask learning approach for ER from isolated utterances. Experiments on the IEMOCAP benchmark show that our proposed model performs better than our re-implementation of the state-of-the-art and achieves better performance than all other unimodal and multimodal approaches in the literature. In addition, strong baselines and ablation studies prove the effectiveness of our proposed approach. We make all our code publicly available on GitHub.
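Early fusion with self-attention amounts to concatenating the two feature streams along time and letting a Transformer encoder attend across modalities, with multitask heads sharing the fused representation. A hedged sketch (layer sizes and the choice of auxiliary task are placeholders):

```python
import torch
import torch.nn as nn

class MultimodalER(nn.Module):
    """Sketch: early-fused text+audio Transformer with multitask heads."""
    def __init__(self, d_model=128, n_emotions=4):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.emotion_head = nn.Linear(d_model, n_emotions)
        self.aux_head = nn.Linear(d_model, 2)  # e.g. sentiment as an auxiliary task

    def forward(self, text_feats, audio_feats):
        # Early fusion: concatenate modality sequences along the time axis.
        fused = self.encoder(torch.cat([text_feats, audio_feats], dim=1))
        pooled = fused.mean(dim=1)
        return self.emotion_head(pooled), self.aux_head(pooled)

model = MultimodalER()
emo, aux = model(torch.randn(2, 20, 128), torch.randn(2, 50, 128))
```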
Expressive text-to-speech has shown improved performance in recent years. However, the style control of synthesized speech is often restricted to discrete emotion categories and requires training data recorded by the target speaker in the target style. In many practical situations, users may not have reference speech recorded in the target emotion but still wish to control the speech style merely by typing a text description of the desired emotional style. In this paper, we propose a text-based interface for emotional style control and cross-speaker style transfer in multi-speaker TTS. We propose a bi-modal style encoder that models the semantic relationship between the text description embedding, obtained with a pretrained language model, and the speech style embedding. To further improve cross-speaker style transfer on multi-style datasets, we propose a novel style loss. Experimental results show that our model can generate high-quality expressive speech even in unseen styles.
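The bi-modal style encoder can be pictured as two projections into a shared style space, where a description embedding can substitute for reference audio at inference. A schematic sketch with hypothetical dimensions (the paper's full objective, including the proposed style loss, is not reproduced here):

```python
import torch
import torch.nn as nn

class BiModalStyleEncoder(nn.Module):
    """Sketch: map a text style description and a speech style vector into a
    shared space so the description can replace reference audio at inference.
    """
    def __init__(self, d_text=768, d_speech=256, d_style=128):
        super().__init__()
        self.text_proj = nn.Linear(d_text, d_style)    # on top of a pretrained LM
        self.speech_proj = nn.Linear(d_speech, d_style)

    def forward(self, desc_emb, speech_emb):
        return self.text_proj(desc_emb), self.speech_proj(speech_emb)

enc = BiModalStyleEncoder()
t, s = enc(torch.randn(8, 768), torch.randn(8, 256))
# Pull matched description/speech pairs together in the shared space.
align = nn.CosineEmbeddingLoss()(t, s, torch.ones(8))
```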
This paper describes the Microsoft end-to-end neural text-to-speech (TTS) system, DelightfulTTS, for the Blizzard Challenge 2021. The goal of this challenge is to synthesize natural and high-quality speech from text, and we approach this goal from two perspectives. The first is to directly model and generate the waveform at a 48 kHz sampling rate, which brings higher perceptual quality than previous systems with 16 kHz or 24 kHz sampling rates. The second is to model the variation information in speech through a systematic design, which improves prosody and naturalness. Specifically, for 48 kHz modeling, we predict a 16 kHz mel-spectrogram in the acoustic model and propose a vocoder called HiFiNet to directly generate the 48 kHz waveform from the predicted 16 kHz mel-spectrogram, which better trades off training efficiency, modeling stability, and voice quality. We systematically model variation information from explicit (speaker ID, language ID, pitch, and duration) and implicit (utterance-level and phoneme-level prosody) perspectives: 1) for speaker and language ID, we use lookup embeddings in training and inference; 2) for pitch and duration, we extract the values from paired text-speech data in training and use two predictors to predict the values in inference; 3) for utterance-level and phoneme-level prosody, we use two reference encoders to extract the values in training and two separate predictors to predict the values in inference. Additionally, we introduce an improved Conformer block to better model the local and global dependencies in the acoustic model. For task SH1, DelightfulTTS achieves a mean score of 4.17 in the MOS test and 4.35 in the SMOS test, which demonstrates the effectiveness of our proposed system.
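The explicit variance modeling follows the familiar predictor pattern: extract ground-truth pitch and duration in training, predict them in inference, and feed them back into the phoneme encodings. A compressed FastSpeech-2-style sketch of one such predictor (not DelightfulTTS's exact module):

```python
import torch
import torch.nn as nn

class VariancePredictor(nn.Module):
    """Sketch: per-phoneme scalar predictor (instantiated twice: pitch, duration)."""
    def __init__(self, d_model=256, d_hidden=256, kernel=3):
        super().__init__()
        self.conv1 = nn.Conv1d(d_model, d_hidden, kernel, padding=kernel // 2)
        self.conv2 = nn.Conv1d(d_hidden, d_hidden, kernel, padding=kernel // 2)
        self.out = nn.Linear(d_hidden, 1)

    def forward(self, x):
        # x: (batch, n_phonemes, d_model) phoneme encodings
        h = torch.relu(self.conv1(x.transpose(1, 2)))
        h = torch.relu(self.conv2(h)).transpose(1, 2)
        return self.out(h).squeeze(-1)  # (batch, n_phonemes) pitch or log-duration

pitch_pred, duration_pred = VariancePredictor(), VariancePredictor()
x = torch.randn(2, 13, 256)
pitch, log_dur = pitch_pred(x), duration_pred(x)
# Training: regress to values extracted from paired data;
# inference: add a pitch embedding and expand states by the predicted durations.
```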
Expressing empathy is important in everyday conversations, and exploring how empathy arises is crucial in automatic response generation. Most previous approaches consider only a single factor that affects empathy. However, in practice, empathy generation and expression form a very complex and dynamic psychological process. A listener needs to find out the events which cause a speaker's emotions (emotion cause extraction), project the events into some experience (knowledge extension), and express empathy in the most appropriate way (communication mechanism). To this end, we propose a novel approach which integrates three components (emotion cause, knowledge graph, and communication mechanism) for empathetic response generation. Experimental results on the benchmark dataset demonstrate the effectiveness of our method and show that incorporating the key components generates more informative and empathetic responses.
With the increasing popularity of online chatting, stickers are becoming important in our online communication. Selecting appropriate stickers in open-domain dialogue requires a comprehensive understanding of both dialogues and stickers, as well as the relationship between the two types of modalities. To tackle these challenges, we propose a multitask learning method comprising three auxiliary tasks to enhance the understanding of the dialogue history, emotion, and semantic meaning of stickers. Extensive experiments conducted on a recent challenging dataset show that our model can better combine the multimodal information and achieve significantly higher accuracy over strong baselines. An ablation study further verifies the effectiveness of each auxiliary task. Our code is available at https://github.com/nonstopfor/sticker-selection
Human speech can be characterized by different components, including semantic content, speaker identity and prosodic information. Significant progress has been made in disentangling representations for semantic content and speaker identity in Automatic Speech Recognition (ASR) and speaker verification tasks respectively. However, it is still an open challenging research question to extract prosodic information because of the intrinsic association of different attributes, such as timbre and rhythm, and because of the need for unsupervised training schemes to achieve robust large-scale and speaker-independent ASR. The aim of this paper is to address the disentanglement of emotional prosody from speech based on unsupervised reconstruction. Specifically, we identify, design, implement and integrate three crucial components in our proposed speech reconstruction model Prosody2Vec: (1) a unit encoder that transforms speech signals into discrete units for semantic content, (2) a pretrained speaker verification model to generate speaker identity embeddings, and (3) a trainable prosody encoder to learn prosody representations. We first pretrain the Prosody2Vec representations on unlabelled emotional speech corpora, then fine-tune the model on specific datasets to perform Speech Emotion Recognition (SER) and Emotional Voice Conversion (EVC) tasks. Both objective and subjective evaluations on the EVC task suggest that Prosody2Vec effectively captures general prosodic features that can be smoothly transferred to other emotional speech. In addition, our SER experiments on the IEMOCAP dataset reveal that the prosody features learned by Prosody2Vec are complementary and beneficial for the performance of widely used speech pretraining models and surpass the state-of-the-art methods when combining Prosody2Vec with HuBERT representations. Some audio samples can be found on our demo website.
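The reconstruction model assembles three conditioning streams into one decoder input. A schematic sketch of that assembly (the unit and speaker encoders stand in for pretrained components; all interfaces and sizes here are assumptions):

```python
import torch
import torch.nn as nn

class Prosody2VecSketch(nn.Module):
    """Sketch: reconstruct speech features from units + speaker + prosody.

    The unit embedding and speaker projection stand in for pretrained models
    (a discrete-unit extractor and a speaker-verification network); only the
    prosody encoder is trained from scratch, as in the paper's description.
    """
    def __init__(self, n_units=100, d=256, n_mels=80):
        super().__init__()
        self.unit_emb = nn.Embedding(n_units, d)       # semantic content
        self.spk_proj = nn.Linear(192, d)              # frozen speaker embedding in
        self.prosody_enc = nn.GRU(n_mels, d, batch_first=True)
        self.decoder = nn.GRU(3 * d, d, batch_first=True)
        self.out = nn.Linear(d, n_mels)

    def forward(self, units, spk_emb, mels):
        u = self.unit_emb(units)                       # (B, T_units, d)
        _, p = self.prosody_enc(mels)                  # utterance prosody (1, B, d)
        p = p[-1][:, None].expand_as(u)
        s = self.spk_proj(spk_emb)[:, None].expand_as(u)
        h, _ = self.decoder(torch.cat([u, s, p], dim=-1))
        return self.out(h)                             # features at unit resolution

m = Prosody2VecSketch()
recon = m(torch.randint(0, 100, (2, 50)), torch.randn(2, 192),
          torch.randn(2, 120, 80))
```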