Exploiting rich linguistic information in raw text is crucial for expressive text-to-speech (TTS). As large-scale pre-trained text representations develop, bidirectional encoder representations from Transformers (BERT) has been proven to embody semantic information and has recently been employed in TTS. However, original or simply fine-tuned BERT embeddings still cannot provide sufficient semantic knowledge for expressive TTS models to take into account. In this paper, we propose a word-level semantic representation enhancing method based on dependency structure and pre-trained BERT embeddings. The BERT embedding of each word is reprocessed considering its specific dependencies and related words in the sentence, to generate a more effective semantic representation for TTS. To better utilize the dependency structure, a relational gated graph network (RGGN) is introduced to let semantic information flow and aggregate through the dependency structure. Experimental results show that the proposed method can further improve the naturalness and expressiveness of synthesized speech on both Mandarin and English datasets.
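As a rough illustration of the idea (not the paper's exact formulation), one step of relational gated message passing over a dependency tree can be sketched as follows; the GRU-style gating form, the relation-specific transforms, and the bidirectional message flow are all assumptions made for the sketch:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def rggn_step(h, arcs, rel_W, W_z, W_r, W_h):
    """One message-passing step of a simplified relational gated graph
    network: messages flow along dependency arcs with relation-specific
    transforms, then a GRU-style gate updates each word state.
    h     : (n, d) word embeddings (e.g. BERT outputs)
    arcs  : list of (head, dependent, relation) dependency arcs
    rel_W : dict mapping relation label -> (d, d) transform
    W_z, W_r, W_h : (2d, d) gate parameters
    """
    m = np.zeros_like(h)                   # aggregated incoming messages
    for head, dep, rel in arcs:
        m[dep] += h[head] @ rel_W[rel]     # head -> dependent message
        m[head] += h[dep] @ rel_W[rel].T   # dependent -> head message
    x = np.concatenate([m, h], axis=-1)    # (n, 2d)
    z = sigmoid(x @ W_z)                   # update gate
    r = sigmoid(x @ W_r)                   # reset gate
    h_new = np.tanh(np.concatenate([m, r * h], axis=-1) @ W_h)
    return (1 - z) * h + z * h_new

# Toy example: "She reads books" with arcs nsubj(reads, She), obj(reads, books).
rng = np.random.default_rng(0)
n, d = 3, 8
h = rng.normal(size=(n, d))
rel_W = {"nsubj": rng.normal(size=(d, d)) * 0.1,
         "obj":   rng.normal(size=(d, d)) * 0.1}
arcs = [(1, 0, "nsubj"), (1, 2, "obj")]
W_z, W_r, W_h = (rng.normal(size=(2 * d, d)) * 0.1 for _ in range(3))
out = rggn_step(h, arcs, rel_W, W_z, W_r, W_h)
print(out.shape)  # (3, 8)
```

After the step, each word's representation mixes in information from its syntactic neighbors, which is the effect the abstract attributes to the RGGN.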
With the advent of deep learning, a huge number of text-to-speech (TTS) models that produce human-like speech have emerged. Recently, various approaches have been proposed to enrich the naturalness and expressiveness of TTS models by introducing syntactic and semantic information w.r.t. the input text. Although these strategies showed impressive results, they still have some limitations in utilizing linguistic information. First, most approaches only use graph networks to utilize syntactic and semantic information, without considering linguistic features. Second, most previous works do not explicitly consider adjacent words when encoding syntactic and semantic information, even though adjacent words are usually meaningful when encoding the current word. To address these issues, we propose the Relation-aware Word Encoding Network (RWEN), which effectively encodes syntactic and semantic information based on two modules (i.e., Semantic-level Relation Encoding and Adjacent Word Relation Encoding). Experimental results show substantial improvements compared to previous works.
In human speech, a speaker's attitude cannot be fully expressed by the textual content alone; it must be carried by intonation. Declarative questions, which are commonly used in daily Cantonese conversation, are usually uttered with a rising intonation. Vanilla neural text-to-speech (TTS) systems fail to synthesize the rising intonation for these sentences due to the loss of semantic information. Although supplementing such systems with an additional language model has become increasingly common, their performance in modeling rising intonation has not been well studied. In this paper, we propose to complement a Cantonese TTS model with a BERT-based statement/question classifier. We design different training strategies and compare their performance. We conduct experiments on a Cantonese corpus named CanTTS. Empirical results show that the separate training approach achieves the best generalization performance and feasibility.
Recent advances in neural end-to-end TTS models have shown high-quality, natural synthesized speech for TTS of isolated sentences. However, reproducing similarly high quality when a whole paragraph is considered in TTS requires a large amount of contextual information to be taken into account when building a paragraph-based TTS model. To alleviate the difficulty of training, we propose to model linguistic and prosodic information by considering cross-sentence, embedded structure in training, with three sub-modules: a linguistics-aware network, a prosody-aware network, and a sentence-position network. Specifically, to learn the information embedded in a paragraph and the relations among its component sentences, we utilize the linguistics-aware and prosody-aware networks. The information within a paragraph is captured by an encoder, and the inter-sentence information in a paragraph is learned through a multi-head attention mechanism. The relative sentence position in a paragraph is explicitly exploited by the sentence-position network. Trained on a storytelling audio corpus (4.08 hours) recorded by a female Mandarin speaker, the proposed TTS model demonstrates that it can produce rather natural and good-quality speech paragraph-wise. Cross-sentence contextual information, such as breaks and prosodic variations between consecutive sentences, can be better predicted and rendered than with a sentence-based model. Tested on paragraph texts whose lengths are similar to, or much longer than, the typical paragraph length of the training data, the TTS speech produced by the new model is consistently preferred over that of the sentence-based model in subjective tests, which is also confirmed in objective measures.
We propose a transition-based approach that, by training a single model, can efficiently parse any input sentence with both constituent and dependency trees, supporting both continuous/projective and discontinuous/non-projective syntactic structures. To that end, we develop a Pointer Network architecture with two separate task-specific decoders and a common encoder, and follow a multitask learning strategy to jointly train them. The resulting quadratic system not only becomes the first parser that can jointly produce both unrestricted constituent and dependency trees from a single model, but also proves that both syntactic formalisms can benefit from each other during training, achieving state-of-the-art accuracies on several widely used benchmarks such as the continuous English and Chinese Penn Treebanks, as well as the discontinuous German NEGRA and TIGER datasets.
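The mechanism shared by both decoders can be sketched in a hedged way: at each step, a pointer decoder scores every encoder state against the current decoder state and selects one input position (e.g. a dependency head). The bilinear scoring form below is an illustrative choice, not the paper's exact parameterisation:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def point(decoder_state, encoder_states, W):
    """Score every input word against the current decoder state and
    return a probability distribution over input positions."""
    scores = encoder_states @ (W @ decoder_state)   # (n,) one score per word
    return softmax(scores)

# Toy example: 4 encoder states; the decoder "points" at the likeliest head.
rng = np.random.default_rng(1)
enc = rng.normal(size=(4, 6))    # encoder states for 4 input words
dec = rng.normal(size=6)         # current decoder state
W = rng.normal(size=(6, 6))      # assumed bilinear scoring matrix
p = point(dec, enc, W)
head = int(p.argmax())
```

Because the output vocabulary is the input positions themselves, the same decoder shape serves both head selection (dependency) and boundary selection (constituency).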
In aspect-based sentiment classification (ASC), state-of-the-art models encode either a syntax graph or a relation graph to capture local syntactic information or global relational information. Despite the advantages of syntax and relation graphs, each has the drawback of ignoring the other, which limits the representation power in the graph-modeling process. To address their limitations, we design a novel local-global interactive graph, which marries their advantages by stitching the two graphs together with interactive edges. To model the local-global interactive graph, we propose a novel neural network termed DigNet, whose core module is a stacked local-global interactive (LGI) layer performing two processes: intra-graph message passing and cross-graph message passing. In this way, local syntactic and global relational information can be reconciled as a whole for understanding aspect-level sentiment. Specifically, we design two variants of the local-global interactive graph with different kinds of interactive edges, and three variants of the LGI layer. We conduct experiments on several public benchmark datasets, and the results show that we outperform the previous best scores by 3%, 2.32%, and 6.33% in terms of macro-F1 on the Lap14, Res14, and Res15 datasets, respectively, confirming the effectiveness and superiority of the proposed local-global interactive graph and DigNet.
Open Information Extraction (OpenIE) aims to extract relational tuples from open-domain sentences. Traditional rule-based or statistical models have been developed based on the syntactic structures of sentences, identified by syntactic parsers. However, previous neural OpenIE models under-explore this useful syntactic information. In this paper, we encode both constituency and dependency trees into word-level graphs, and enable neural OpenIE to learn from the syntactic structures. To better fuse heterogeneous information from both graphs, we adopt multi-view learning to capture multiple relationships from them. Finally, the fine-tuned constituency and dependency representations are aggregated with sentential semantic representations for tuple generation. Experiments show that both the constituency and dependency information and the multi-view learning are effective.
In the field of source code processing, transformer-based representation models have shown strong capability and achieved state-of-the-art (SOTA) performance on many tasks. Although transformer models process sequential source code, evidence shows that they can also capture structural information (e.g., in syntax trees, data flow, control flow, etc.). We propose the aggregated attention score, a method to investigate the structural information learned by transformers. We also propose the aggregated attention graph, a new way to extract program graphs from pre-trained models automatically. We measure our methods from multiple perspectives. Furthermore, based on our empirical findings, we use the automatically extracted graphs to replace the ingenious manually designed graphs in the variable-misuse task. Experimental results show that our automatically extracted semantic graphs are quite meaningful and effective, which provides a new perspective for understanding and using the information contained in such models.
Neural-network-based embeddings have been the mainstream approach for creating vector representations of text that capture lexical and semantic similarities and differences. Typically, existing encoding methods dismiss punctuation as insignificant information; consequently, punctuation marks are usually treated as predefined tokens/words or eliminated in the preprocessing phase. However, punctuation can play a significant role in the semantics of a sentence, as in "Let's eat, grandma" versus "Let's eat grandma". We hypothesize that the way a model represents punctuation will affect the performance of downstream tasks. Thereby, we propose a model-agnostic method that incorporates both syntactic and contextual information simultaneously to improve performance on the sentiment classification task. We corroborate our findings by conducting experiments on publicly available datasets, and provide case studies in which our model generates representations with respect to the punctuation in the sentence.
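The "Let's eat, grandma" example can be made concrete with a toy tokenizer. This is only an illustration of the phenomenon the abstract motivates, not the paper's method: once punctuation is stripped, the two readings collapse to the same token sequence, so no downstream encoder can tell them apart.

```python
import re

PUNCT = set(".,!?;")

def tokenize(sentence, keep_punct=True):
    """Split a sentence into word and punctuation tokens; optionally
    drop punctuation, as many preprocessing pipelines do."""
    tokens = re.findall(r"[\w']+|[.,!?;]", sentence)
    return tokens if keep_punct else [t for t in tokens if t not in PUNCT]

a = "Let's eat, grandma."
b = "Let's eat grandma."
print(tokenize(a))   # ["Let's", 'eat', ',', 'grandma', '.']
# Without punctuation the two sentences become indistinguishable:
print(tokenize(a, keep_punct=False) == tokenize(b, keep_punct=False))  # True
```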
Mining causality from text is a complex and crucial natural language understanding task that corresponds to human cognition. Existing studies on its solutions can be divided into two main categories: feature-engineering-based and neural-model-based approaches. In this paper, we find that the former has incomplete coverage and inherent errors but provides prior knowledge, while the latter leverages contextual information but performs insufficient causal inference. To handle these limitations, we propose a novel causality detection model named MCDN, which explicitly models causality and exploits the advantages of both kinds of approaches. Specifically, we adopt multi-head self-attention to acquire semantic features at the word level, and an SCRN to infer causality at the segment level. To the best of our knowledge, this is the first time a relation network has been applied to a causality task. Experimental results show that: 1) the proposed method achieves outstanding performance on causal-pair detection; 2) further analysis manifests the effectiveness and robustness of MCDN.
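The word-level feature extractor mentioned above, multi-head self-attention, can be sketched minimally in NumPy. The dimensions and the absence of an output projection are simplifying assumptions, not MCDN's exact design:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_self_attention(X, Wq, Wk, Wv, n_heads):
    """Split the d model dims into n_heads slices, attend within each
    slice, and concatenate the head outputs back to shape (n, d)."""
    n, d = X.shape
    dh = d // n_heads
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    heads = []
    for i in range(n_heads):
        s = slice(i * dh, (i + 1) * dh)
        A = softmax(Q[:, s] @ K[:, s].T / np.sqrt(dh))  # (n, n) attention
        heads.append(A @ V[:, s])
    return np.concatenate(heads, axis=-1)

# Toy example: 5 words, model dimension 8, 2 heads.
rng = np.random.default_rng(2)
n, d = 5, 8
X = rng.normal(size=(n, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
Y = multi_head_self_attention(X, Wq, Wk, Wv, n_heads=2)
print(Y.shape)  # (5, 8)
```

Each output row is a context-weighted mixture of all words, giving the word-level semantic features that the segment-level relation network then consumes.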
End-to-end text-to-speech synthesis (TTS) can generate highly natural synthetic speech from raw text. However, rendering the correct pitch accents is still a challenging problem for end-to-end TTS. To tackle the challenge of rendering correct pitch accents in Japanese end-to-end TTS, we adopt PnG~BERT, a self-supervised pretrained model in the character and phoneme domain for TTS. We investigate the effects of features captured by PnG~BERT on Japanese TTS by modifying the fine-tuning condition to determine which conditions are helpful for inferring pitch accents. We manipulate the content of PnG~BERT features from text-oriented to speech-oriented by changing the number of fine-tuned layers during TTS. In addition, we teach PnG~BERT pitch accent information by fine-tuning with tone prediction as an additional downstream task. Our experimental results show that the features of PnG~BERT captured by pretraining contain information helpful for inferring pitch accents, and that PnG~BERT outperforms a baseline Tacotron on accent correctness in a listening test.
Recent advances in end-to-end speech synthesis have enabled the production of highly natural speech. However, training these models typically requires a large amount of high-fidelity speech data, and for unseen texts the prosody of the synthesized speech is relatively unnatural. To address these issues, we propose to combine a fine-tuned BERT-based front-end with a pre-trained FastSpeech2-based acoustic model to improve prosody modeling. The pre-trained BERT is fine-tuned on a polyphone-disambiguation task, a joint Chinese word segmentation (CWS) and part-of-speech (POS) tagging task, and a prosody-structure prediction (PSP) task in a multi-task learning framework. FastSpeech2 is pre-trained on large-scale external data, which are noisy but more easily obtained. Experimental results show that both the fine-tuned BERT model and the pre-trained FastSpeech2 can improve prosody, especially for sentences with complex structures.
Transformer-based models have pushed the state of the art in many areas of NLP, but our understanding of what is behind their success is still limited. This paper is the first survey of over 150 studies of the popular BERT model. We review the current state of knowledge about how BERT works, what kind of information it learns and how it is represented, common modifications to its training objectives and architecture, the overparameterization issue, and approaches to compression. We then outline directions for future research.
Grammatical Error Correction (GEC) is the task of detecting and correcting grammatical errors in sentences. Recently, neural machine translation systems have become a popular approach for this task. However, these methods lack the use of syntactic knowledge, which plays an important role in correcting grammatical errors. In this work, we propose a syntax-guided GEC model (SG-GEC) which adopts a graph attention mechanism to utilize the syntactic knowledge of dependency trees. Considering that the dependency trees of grammatically incorrect source sentences may provide incorrect syntactic knowledge, we propose a dependency-tree correction task to handle it. Combined with a data augmentation method, our model achieves strong performance without using any large pre-trained models. We evaluate our model on public benchmarks of the GEC task and achieve competitive results.
The rapid advancement of AI technology has made text generation tools like GPT-3 and ChatGPT increasingly accessible, scalable, and effective. This can pose a serious threat to the credibility of various forms of media if these technologies are used for plagiarism, including scientific literature and news sources. Despite the development of automated methods for paraphrase identification, detecting this type of plagiarism remains a challenge due to the disparate nature of the datasets on which these methods are trained. In this study, we review traditional and current approaches to paraphrase identification and propose a refined typology of paraphrases. We also investigate how this typology is represented in popular datasets and how the under-representation of certain types of paraphrases impacts detection capabilities. Finally, we outline new directions for future research and datasets in the pursuit of more effective paraphrase detection using AI.
Given a collection of RDF triples, the RDF-to-text generation task aims to generate a text description. Most previous methods solve this task using a sequence-to-sequence model or a graph-based model to encode the RDF triples and generate a text sequence. However, these approaches fail to explicitly model the local and global structural information among the RDF triples. Moreover, previous methods also face the non-negligible problem of low faithfulness of the generated text, which seriously affects the overall performance of these models. To solve these problems, we propose a model combining two new graph-augmented structural neural encoders to jointly learn both local and global structural information in the input RDF triples. To further improve text faithfulness, we innovatively introduce a reinforcement learning (RL) reward based on information extraction (IE). We first extract triples from the generated text using a pre-trained IE model, and regard the number of correctly extracted triples as the additional RL reward. Experimental results on two benchmark datasets demonstrate that our proposed model outperforms the state-of-the-art baselines, and the additional reinforcement-learning reward does help to improve the faithfulness of the generated text.
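The IE-based reward described above can be sketched as a simple triple-overlap count. Using the raw count (rather than a normalised score) follows the abstract's description; the tuple representation and the example entity names are assumptions for illustration:

```python
def ie_reward(extracted_triples, input_triples):
    """Count how many (subject, predicate, object) triples extracted from
    the generated text by an IE model also appear in the input RDF set;
    this count serves as the extra RL reward encouraging faithfulness."""
    return len(set(extracted_triples) & set(input_triples))

inp = [("Alan_Bean", "occupation", "astronaut"),
       ("Alan_Bean", "birthPlace", "Wheeler")]
ext = [("Alan_Bean", "occupation", "astronaut"),
       ("Alan_Bean", "birthPlace", "Texas")]   # one hallucinated triple
print(ie_reward(ext, inp))  # 1
```

A generation that hallucinates facts yields fewer matching triples and hence a lower reward, which is exactly the pressure toward faithfulness the model exploits.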
For natural language processing systems, two kinds of evidence support the use of text representations from neural language models pretrained on large unannotated corpora: performance on application-inspired benchmarks (Peters et al., 2018, inter alia), and the emergence of syntactic abstractions in those representations (Tenney et al., 2019, inter alia). On the other hand, the lack of grounded supervision calls into question how well these representations can ever capture meaning (Bender and Koller, 2020). We apply novel probes to recent language models, specifically focusing on predicate-argument structure as operationalized by semantic dependencies (Ivanova et al., 2012), and find that, unlike syntax, semantics is not brought to the surface by today's pretrained models. We then use convolutional graph encoders to explicitly incorporate semantic parses into task-specific finetuning, yielding benefits on natural language understanding (NLU) tasks in the GLUE benchmark. This approach demonstrates the potential of general-purpose (rather than task-specific) linguistic supervision, above and beyond conventional pretraining and finetuning. Several diagnostics help to localize the benefits of our approach.
Unified opinion role labeling (ORL) aims to detect all possible opinion structures of "opinion-holder-target" in one shot, given a text. The existing transition-based unified method, unfortunately, is subject to longer opinion terms and fails to solve the term-overlap issue. The current best performance has been achieved by employing span-based graph models, which nevertheless still suffer from high model complexity and insufficient interaction between opinions and roles. In this work, we investigate a novel solution by revisiting the transition architecture and augmenting it with a pointer network (PointNet). The framework parses out all opinion structures in linear time complexity, and meanwhile breaks through the limitation on term length with PointNet. To achieve explicit opinion-role interaction, we further propose a unified dependency-opinion graph (UDOG), co-modeling the syntactic dependency structure and the partial opinion-role structure. We then devise a relation-centered graph aggregator (RCGA) to encode the multi-relational UDOG, where the resulting high-order representations are used to promote the predictions in the vanilla transition system. Our model achieves new state-of-the-art results on the MPQA benchmark. Analyses further demonstrate the superiority of our method in terms of both efficacy and efficiency.
Pre-trained language models have made significant progress on dialogue tasks. However, these models are typically trained on surface dialogue text and are thus shown to be weak in understanding the main semantic meaning of a dialogue context. We investigate Abstract Meaning Representation (AMR) as explicit semantic knowledge for pre-training models, in order to capture the core semantic information in dialogues during pre-training. In particular, we propose a semantic-based pre-training framework that extends the standard pre-training framework (Devlin et al., 2019) with three tasks based on AMR graph representations. Experiments on the understanding of both chit-chat and task-oriented dialogues show the superiority of our model. To our knowledge, we are the first to leverage deep semantic representations for dialogue pre-training.
Document-level relation extraction (DocRE) aims to identify semantic labels among entities within a single document. One major challenge of DocRE is to dig out decisive details regarding a specific entity pair from long text. However, in many cases, only a fraction of the text carries the required information, even in the manually labeled supporting evidence. To better capture and exploit instructive information, we propose a novel expLicit syntAx Refinement and Subsentence mOdeliNg based framework (LARSON). By introducing extra syntactic information, LARSON can model subsentences of arbitrary granularity and efficiently screen instructive ones. Moreover, we incorporate refined syntax into text representations, which further improves the performance of LARSON. Experimental results on three benchmark datasets (DocRED, CDR, and GDA) demonstrate that LARSON significantly outperforms existing methods.