Non-parallel text style transfer is an important task in natural language generation. However, previous work has concentrated on the token or sentence level, such as sentence sentiment and formality transfer, while neglecting long-text transfer at the discourse level. Long texts usually involve more complex authorial linguistic preferences, such as discourse structures, than sentences. In this paper, we formulate the task of non-parallel story author-style transfer, which requires transferring an input story into a specified author style while maintaining source semantics. To tackle this problem, we propose a generation model, named StoryTrans, which leverages discourse representations to capture source content information and transfers them to target styles with learnable style embeddings. We use an additional training objective to disentangle stylistic features from the learned discourse representations, to prevent the model from degenerating into an auto-encoder. Moreover, to enhance content preservation, we design a mask-and-fill framework to explicitly fuse style-specific keywords of the source text into generation. Furthermore, we construct new datasets for this task in Chinese and English, respectively. Extensive experiments show that our model outperforms strong baselines in overall performance of style transfer and content preservation.
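To make the mask step of such a mask-and-fill pipeline concrete, here is a minimal sketch: tokens far more frequent in one style corpus than the other are treated as style-specific keywords and masked out, so a generator can later fill in target-style wording. The frequency-ratio scoring rule, the threshold, and the toy corpora are illustrative assumptions, not the paper's exact procedure.

```python
from collections import Counter

def style_keywords(source_corpus, target_corpus, threshold=1.0):
    """Return tokens far more frequent in the source style than the target style."""
    src, tgt = Counter(), Counter()
    for sent in source_corpus:
        src.update(sent.split())
    for sent in target_corpus:
        tgt.update(sent.split())
    # Simple smoothed frequency ratio; a stand-in for the paper's keyword selection.
    return {w for w, c in src.items() if c / (tgt[w] + 1) >= threshold}

def mask_style_tokens(sentence, keywords, mask="<mask>"):
    """Mask style-specific keywords so a decoder can fill in target-style words."""
    return " ".join(mask if w in keywords else w for w in sentence.split())

corpus_a = ["the old wizard spoke gravely to the crowd"]
corpus_b = ["the young girl spoke cheerfully to the crowd"]
kw = style_keywords(corpus_a, corpus_b)
print(mask_style_tokens("the old wizard spoke gravely", kw))
# -> "the <mask> <mask> spoke <mask>"
```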
Text style transfer aims to alter the style of a sentence while preserving its content. Due to the lack of parallel corpora, most recent work focuses on unsupervised methods and often uses cycle construction to train models. While cycle construction helps to improve the style transfer ability of the model by rebuilding transferred sentences back into original-style sentences, it brings about content loss in unsupervised text style transfer tasks. In this paper, we propose a novel disentanglement-based style transfer model, StyleFlow, to enhance content preservation. Instead of the typical encoder-decoder scheme, StyleFlow can not only conduct the forward process to obtain the output, but also infer the input from the output. We design attention-aware coupling layers to disentangle the content representations and the style representations of a sentence. Besides, we propose a data augmentation method based on Normalizing Flow to improve the robustness of the model. Experiment results demonstrate that our model preserves content effectively and achieves state-of-the-art performance on most metrics.
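To illustrate why a flow-based model can infer the input from the output, below is a minimal sketch of an additive coupling layer, the invertible building block behind normalizing-flow models like StyleFlow: the forward pass transforms half of a representation conditioned on the other half, and the exact input is recoverable from the output. The simple linear conditioner stands in for the paper's attention-aware coupling; all shapes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))  # parameters of the conditioning network (stand-in)

def shift(x_a):
    """Stand-in conditioner; StyleFlow uses an attention-aware network here."""
    return np.tanh(x_a @ W)

def forward(x):
    x_a, x_b = x[:4], x[4:]
    return np.concatenate([x_a, x_b + shift(x_a)])  # y_a = x_a, y_b = x_b + m(x_a)

def inverse(y):
    y_a, y_b = y[:4], y[4:]
    return np.concatenate([y_a, y_b - shift(y_a)])  # exact inversion

x = rng.normal(size=8)
y = forward(x)
assert np.allclose(inverse(y), x)  # the layer is exactly invertible
```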
Despite advances in generating fluent texts, existing pretrained models tend to attach incoherent event sequences to the involved entities when generating narratives such as stories and news. We conjecture that such issues result from representing entities as static embeddings of superficial words, while neglecting to model their ever-changing states, i.e., the information they carry as the text unfolds. Therefore, we extend the Transformer model to dynamically conduct entity state updates and sentence realization for narrative generation. We propose a contrastive framework to learn the state representations in a discrete space, and insert additional attention layers into the decoder to better exploit these states. Experiments on two narrative datasets show that, with the guidance of meaningful entity states, our model can generate more coherent and diverse narratives than strong baselines.
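A minimal sketch of what a contrastive objective over entity states could look like, assuming an InfoNCE-style loss where each entity's state should match its own updated representation rather than those of other entities. The shapes, temperature, and the exact pairing scheme are illustrative assumptions, not the paper's formulation.

```python
import torch
import torch.nn.functional as F

def contrastive_state_loss(states, updated_states, temperature=0.1):
    """states, updated_states: (num_entities, dim); matched rows are positives."""
    states = F.normalize(states, dim=-1)
    updated_states = F.normalize(updated_states, dim=-1)
    logits = states @ updated_states.T / temperature  # pairwise similarities
    targets = torch.arange(states.size(0))            # i-th positive is row i
    return F.cross_entropy(logits, targets)

loss = contrastive_state_loss(torch.randn(5, 64), torch.randn(5, 64))
print(loss.item())
```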
Text style transfer is an important task in natural language generation, aiming to control certain attributes in the generated text, such as politeness, emotion, humor, and many others. It has a long history in the field of natural language processing, and has recently regained significant attention thanks to the promising performance brought by deep neural models. In this paper, we present a systematic survey of the research on neural text style transfer, spanning over 100 representative articles since the first neural text style transfer work in 2017. We discuss the task formulation, existing datasets and subtasks, evaluation, and the rich methodologies in the presence of parallel and non-parallel data. We also provide discussions on a variety of important topics regarding the future development of this task. Our curated paper list is at https://github.com/zhijing-jin/text_style_transfer_survey
In recent years, the stylistic properties of text have attracted computational linguistics researchers. Specifically, researchers have investigated the text style transfer (TST) task, which aims to change the stylistic properties of a text while retaining its style-independent content. Over the last few years, many novel TST algorithms have been developed, and industry has leveraged these algorithms to enable exciting TST applications. The field of TST research has burgeoned because of this symbiosis. This article aims to provide a comprehensive review of recent research efforts on text style transfer. More concretely, we create a taxonomy to organize TST models and provide a comprehensive summary of the state of the art. We review the existing evaluation methodologies for TST tasks and conduct a large-scale reproducibility study in which we experimentally benchmark 19 state-of-the-art TST algorithms on two publicly available datasets. Finally, we expand on current trends and provide new perspectives on new and exciting developments in the TST field.
We present two unsupervised methods for eliminating toxicity in text. Our first method combines two recent ideas: (1) guiding the generation process with small style-conditional language models and (2) using paraphrasing models to perform style transfer. We use a well-performing paraphraser, guided by style-trained language models, to keep the text content and remove toxicity. Our second method uses BERT to replace toxic words with their non-offensive synonyms. We make the method more flexible by enabling BERT to replace mask tokens with a variable number of words. Finally, we present the first large-scale comparative study of style transfer models on the toxicity removal task. We compare our models with a number of methods for style transfer. The models are evaluated in a reference-free manner using a combination of unsupervised style transfer metrics. Both proposed methods yield new SOTA results.
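The second approach can be sketched with an off-the-shelf masked language model: a word flagged as toxic is masked, and BERT proposes an in-context substitute. The toxic-word list, the choice of bert-base-uncased, and taking the top candidate are illustrative assumptions; the paper's variant additionally lets BERT replace one token with a variable number of words.

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")
TOXIC = {"stupid", "idiotic"}  # stand-in lexicon; a real system would use a classifier

def detoxify(sentence):
    tokens = sentence.split()
    for i, tok in enumerate(tokens):
        if tok.lower().strip(".,!?") in TOXIC:
            # Mask the flagged word and let the MLM fill it from context.
            masked = " ".join(t if j != i else fill_mask.tokenizer.mask_token
                              for j, t in enumerate(tokens))
            tokens[i] = fill_mask(masked)[0]["token_str"]  # top candidate
    return " ".join(tokens)

print(detoxify("that was a stupid idea"))
```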
Text style transfer (TST) aims to alter the underlying style of a source text to another specific style while keeping the same content. Due to the scarcity of high-quality parallel training data, unsupervised learning has become a trending direction for TST tasks. In this paper, we propose a novel VAE-based text style transfer with pivot words enhancement learning (VT-STOWER) method, which utilizes a Variational AutoEncoder (VAE) and external style embeddings to jointly learn semantic and style distributions. Additionally, we introduce pivot words learning, which is applied to learn the decisive words for a specific style and thereby further improves the overall performance of style transfer. The proposed VT-STOWER can be scaled to different TST scenarios with very limited and non-parallel training data, thanks to a novel and flexible style strength control mechanism. Experiments show that VT-STOWER outperforms the state of the art on sentiment, formality, and code-switching TST tasks.
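A minimal sketch of the VAE backbone with an external style embedding: the encoder's latent code carries the semantics, and a learned style vector for the target style is concatenated before decoding. Layer sizes are illustrative assumptions; the pivot-word learning and style-strength control of the paper are omitted.

```python
import torch
import torch.nn as nn

class StyleVAE(nn.Module):
    def __init__(self, hidden=256, latent=64, num_styles=2, style_dim=32):
        super().__init__()
        self.mu = nn.Linear(hidden, latent)
        self.logvar = nn.Linear(hidden, latent)
        self.style_emb = nn.Embedding(num_styles, style_dim)  # external style embeddings
        self.to_decoder = nn.Linear(latent + style_dim, hidden)

    def forward(self, enc_hidden, target_style):
        mu, logvar = self.mu(enc_hidden), self.logvar(enc_hidden)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization trick
        s = self.style_emb(target_style)                      # target style vector
        return self.to_decoder(torch.cat([z, s], dim=-1)), mu, logvar

model = StyleVAE()
dec_in, mu, logvar = model(torch.randn(8, 256), torch.zeros(8, dtype=torch.long))
print(dec_in.shape)  # torch.Size([8, 256])
```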
Unavailability of parallel corpora for training text style transfer (TST) models is a very challenging yet common scenario. Also, TST models implicitly need to preserve the content while transforming a source sentence into the target style. To tackle these problems, an intermediate representation is often constructed that is devoid of style while still preserving the meaning of the source sentence. In this work, we study the usefulness of the Abstract Meaning Representation (AMR) graph as the intermediate style-agnostic representation. We posit that semantic notations like AMR are a natural choice for an intermediate representation. Hence, we propose T-STAR: a model comprising two components, a text-to-AMR encoder and an AMR-to-text decoder. We propose several modeling improvements to enhance the style agnosticity of the generated AMR. To the best of our knowledge, T-STAR is the first work that uses AMR as an intermediate representation for TST. Through thorough experimental evaluation, we show T-STAR significantly outperforms state-of-the-art techniques by achieving on average 15.2% higher content preservation with negligible loss (approx. 3%) in style accuracy. Through detailed human evaluation with 90,000 ratings, we also show that T-STAR has up to 50% fewer hallucinations compared to state-of-the-art TST models.
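The AMR round trip can be sketched with the amrlib library as a stand-in for the paper's trained components, assuming its pretrained parsing and generation models have been downloaded and installed. In T-STAR the AMR-to-text decoder would additionally be conditioned on the target style; that conditioning is left as a comment here.

```python
import amrlib

stog = amrlib.load_stog_model()   # text-to-AMR (sentence-to-graph)
gtos = amrlib.load_gtos_model()   # AMR-to-text (graph-to-sentence)

def round_trip(sentence):
    graphs = stog.parse_sents([sentence])  # style-agnostic intermediate representation
    # In T-STAR, the decoder would be conditioned on the target style at this point.
    sents, _ = gtos.generate(graphs)
    return sents[0]

print(round_trip("The food was absolutely terrible."))
```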
Controllable Text Generation (CTG) is an emerging area in the field of natural language generation (NLG). It is regarded as crucial for the development of advanced text generation technologies that are more natural and better meet the specific constraints of practical applications. In recent years, methods using large-scale pre-trained language models (PLMs), in particular the widely used transformer-based PLMs, have become a new paradigm of NLG, allowing generation of more diverse and fluent text. However, due to the low interpretability of deep neural networks, the controllability of these methods needs to be guaranteed. To this end, controllable text generation using transformer-based PLMs has become a rapidly growing yet challenging research hotspot. A diverse range of approaches have emerged in the last 3-4 years, targeting different CTG tasks that may require different types of controlled constraints. In this paper, we present a systematic critical review of the common tasks, main approaches, and evaluation methods in this area. Finally, we discuss the challenges that the field is facing, and put forward various promising future directions. To the best of our knowledge, this is the first survey paper to summarize CTG techniques from the perspective of PLMs. We hope it can help researchers in related fields to quickly track the academic frontier, providing them with a landscape of the area and a roadmap for future research.
Pre-trained language models (PLMs) fail to generate long-form narrative text because they do not consider global structure. As a result, the generated texts are often incoherent, repetitive, or lacking in content. Recent work in story generation has reintroduced explicit content planning in the form of prompts, keywords, or semantic frames. Trained on large parallel corpora, these models can generate more logical event sequences and thus more contentful stories. However, these intermediate representations are often not in natural language and cannot be utilized by PLMs without fine-tuning. We propose generating story plots using off-the-shelf PLMs while maintaining the benefit of content planning, to produce cohesive and contentful stories. Our proposed method, ScratchPlot, first prompts a PLM to compose a content plan. Then, we generate the story's body and ending conditioned on the content plan. Furthermore, we take a generate-and-rank approach by using additional PLMs to rank the generated (story, ending) pairs. We benchmark our method against various baselines and achieve superior results in both human and automatic evaluation.
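A minimal sketch of such a prompt-then-rank pipeline: an off-the-shelf PLM is prompted for a content plan, the plan is folded into a story prompt, and candidate endings are ranked by language-model score. The prompts, the model choice (gpt2), and the average-log-likelihood scoring rule are illustrative assumptions, not ScratchPlot's exact setup.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
lm = GPT2LMHeadModel.from_pretrained("gpt2")

def generate(prompt, max_new_tokens=40):
    ids = tok(prompt, return_tensors="pt").input_ids
    out = lm.generate(ids, max_new_tokens=max_new_tokens, do_sample=True,
                      top_p=0.9, pad_token_id=tok.eos_token_id)
    return tok.decode(out[0][ids.shape[1]:], skip_special_tokens=True)

def lm_score(text):
    """Average log-likelihood under the PLM, used to rank candidate endings."""
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        return -lm(ids, labels=ids).loss.item()

plan = generate("List the protagonist, conflict, and setting of a short story:")
body = generate(f"Content plan: {plan}\nStory:")
endings = [generate(f"{body} In the end,") for _ in range(3)]
best = max(endings, key=lambda e: lm_score(body + " " + e))  # generate-and-rank
```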
This paper offers a comprehensive review of the research on Natural Language Generation (NLG) over the past two decades, especially in relation to deep learning methods for data-to-text and text-to-text generation, as well as new applications of NLG technology. This survey aims to (a) give an up-to-date synthesis of deep learning research on the core tasks of NLG, as well as the architectures adopted in the field; (b) detail various NLG tasks and datasets meticulously and comprehensively, and draw attention to the challenges in NLG evaluation, focusing on different evaluation methods and their relationships; (c) highlight some future emphases and relatively recent research issues that arise due to the increasing synergy between NLG and other areas of artificial intelligence, such as computer vision, text, and computational creativity.
Neural machine translation (NMT) has attracted wide attention due to its impressive quality. Beyond quality, controlling translation styles is also an important demand for many languages. Previous related studies mainly focus on controlling formality and achieve some improvements. However, they still face two challenges. The first is the evaluation limitation: style contains abundant information including lexis, syntax, etc., but only formality is well studied. The second is the heavy reliance on iterative fine-tuning when new styles are required. Correspondingly, this paper contributes in terms of the benchmark and approach. First, we revisit this task and propose a multiway stylized machine translation (MSMT) benchmark, which includes multiple categories of styles in four language directions to push the boundary of this task. Second, we propose a method named style activation prompt (StyleAP) that retrieves prompts from a stylized monolingual corpus, which needs no extra fine-tuning. Experiments show that StyleAP could effectively control the style of translation and achieve remarkable performance. All of our data and code are released at https://github.com/IvanWang0730/StyleAP.
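A minimal sketch of retrieval-based prompting in this spirit: the sentence most similar to the source is retrieved from a stylized monolingual corpus and prepended as a prompt, steering the NMT model toward that style with no fine-tuning. TF-IDF retrieval and the prompt format with a separator token are illustrative assumptions, not StyleAP's exact construction.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

stylized_corpus = [
    "Pray, good sir, whither art thou bound?",
    "Thou speakest in riddles, friend.",
]
vec = TfidfVectorizer().fit(stylized_corpus)
corpus_mat = vec.transform(stylized_corpus)

def build_styled_input(source_sentence):
    sims = cosine_similarity(vec.transform([source_sentence]), corpus_mat)[0]
    prompt = stylized_corpus[sims.argmax()]    # most similar stylized example
    return f"{prompt} </s> {source_sentence}"  # prompt prepended for the NMT model

print(build_styled_input("Where are you going?"))
```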
Using style transfer models to reduce the offensiveness of social media comments can help foster a more inclusive environment. However, there are no sizable datasets that contain offensive texts and their inoffensive counterparts, and fine-tuning pretrained models with limited labeled data can lead to loss of original meaning in the style-transferred text. To address this issue, we provide two major contributions. First, we release the first publicly available, parallel corpus of offensive Reddit comments and their style-transferred counterparts, annotated by expert sociolinguists. Then, we introduce the first discourse-aware style transfer models that can effectively reduce offensiveness in Reddit text while preserving the meaning of the original text. These models are the first to examine inferential links between a comment and the text it is replying to when transferring the style of offensive Reddit text. We propose two different methods of integrating discourse relations with pretrained transformer models and evaluate them on our dataset of offensive Reddit comments and their inoffensive counterparts. Improvements over the baselines with respect to both automatic metrics and human evaluation indicate that our discourse-aware models are better at preserving the meaning of style-transferred text when compared to state-of-the-art discourse-agnostic models.
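One simple way to make a seq2seq rewriter discourse-aware is to encode the offensive comment together with the comment it replies to, joined by a separator, so inferential links in the reply chain inform the rewrite. The sketch below assumes this input format and uses an untuned t5-small purely as a placeholder; the paper's integration of discourse relations is more involved.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tok = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")  # placeholder, not fine-tuned

def rewrite(parent_comment, offensive_reply):
    # The parent comment supplies the discourse context for the rewrite.
    src = f"rewrite politely: {parent_comment} </s> {offensive_reply}"
    ids = tok(src, return_tensors="pt").input_ids
    out = model.generate(ids, max_new_tokens=48)
    return tok.decode(out[0], skip_special_tokens=True)

print(rewrite("I think the movie was fine.", "only a fool would like that movie"))
```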
Although lyrics generation has achieved significant progress in recent years, it has limited practical applications because the generated lyrics cannot be performed without composing compatible melodies. In this work, we bridge this practical gap by proposing a song rewriting system which rewrites the lyrics of an existing song such that the generated lyrics are compatible with the rhythm of the existing melody and thus singable. In particular, we propose SongRewriter, a controllable Chinese lyric generation and editing system which assists users without prior knowledge of melody composition. The system is trained by a randomized multi-level masking strategy which produces a unified model for generating entirely new lyrics or editing a few fragments. To improve the controllability of the generation process, we further incorporate a keyword prompt to control the lexical choices of the content, and propose novel decoding constraints and a vowel modeling task to enable flexible end and internal rhyme schemes. While prior rhyming metrics are mainly for rap lyrics, we propose three novel rhyming evaluation metrics for song lyrics. Both automatic and human evaluations show that the proposed model performs better than the state-of-the-art models in both content and rhyming quality. Our code and models implemented in MindSpore Lite tool will be available.
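A minimal sketch of randomized multi-level masking for lyric rewriting: each training example is masked at a randomly chosen granularity (token, phrase, or whole line), so a single model learns both full generation and local editing. The granularity names, masking rates, and span lengths are illustrative assumptions, not the paper's exact strategy.

```python
import random

def multi_level_mask(lines, mask="<mask>", rate=0.3, seed=None):
    rng = random.Random(seed)
    level = rng.choice(["token", "phrase", "line"])
    if level == "line":  # mask entire lines -> full regeneration
        return [mask if rng.random() < rate else l for l in lines], level
    masked = []
    for line in lines:
        toks = line.split()
        if level == "token":  # mask individual tokens -> word-level editing
            toks = [mask if rng.random() < rate else t for t in toks]
        else:                 # mask a contiguous span -> phrase-level editing
            i = rng.randrange(len(toks))
            j = min(len(toks), i + rng.randint(1, 3))
            toks[i:j] = [mask]
        masked.append(" ".join(toks))
    return masked, level

print(multi_level_mask(["the moon hangs low tonight", "and i keep singing"], seed=7))
```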
Creating an essay based on a few given topics is a challenging NLP task. Although several effective methods for this problem, topic-to-essay generation, have appeared recently, there is still much room for improvement, especially in terms of the coverage of the given topics and the coherence of the generated text. In this paper, we propose a novel approach called TegFormer which utilizes the Transformer architecture, where the encoder is enriched with domain-specific contexts while the decoder is enhanced by a large-scale pre-trained language model. Specifically, a \emph{Topic-Extension} layer capturing the interaction between the given topics and their domain-specific contexts is plugged into the encoder. Since the given topics are usually concise and sparse, such an additional layer can bring in more topic-related semantics to facilitate the subsequent natural language generation. Moreover, an \emph{Embedding-Fusion} module that combines the domain-specific word embeddings learnt from the given corpus and the general-purpose word embeddings provided by a GPT-2 model pre-trained on massive text data is integrated into the decoder. Since GPT-2 is at a much larger scale, it contains a lot more implicit linguistic knowledge which helps the decoder to produce more grammatical and readable text. Extensive experiments have shown that the pieces of text generated by TegFormer have better topic coverage and higher text coherence than those from SOTA topic-to-essay techniques, according to automatic and human evaluations. As revealed by ablation studies, both the Topic-Extension layer and the Embedding-Fusion module contribute substantially to TegFormer's performance advantage.
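A minimal sketch of what an embedding-fusion module could look like: domain-specific embeddings learned on the essay corpus are combined with frozen general-purpose GPT-2 embeddings through a learned gate. The dimensions, the projection, and the gating formulation are illustrative assumptions, not TegFormer's exact module.

```python
import torch
import torch.nn as nn

class EmbeddingFusion(nn.Module):
    def __init__(self, vocab_size, domain_dim=300, gpt2_dim=768):
        super().__init__()
        self.domain = nn.Embedding(vocab_size, domain_dim)   # learned on the given corpus
        self.general = nn.Embedding(vocab_size, gpt2_dim)    # would be init from GPT-2
        self.general.weight.requires_grad = False            # kept frozen here
        self.proj = nn.Linear(domain_dim, gpt2_dim)
        self.gate = nn.Linear(2 * gpt2_dim, gpt2_dim)

    def forward(self, token_ids):
        d = self.proj(self.domain(token_ids))
        g = self.general(token_ids)
        alpha = torch.sigmoid(self.gate(torch.cat([d, g], dim=-1)))
        return alpha * d + (1 - alpha) * g  # per-dimension gated mixture

fusion = EmbeddingFusion(vocab_size=50257)
print(fusion(torch.randint(0, 50257, (2, 16))).shape)  # torch.Size([2, 16, 768])
```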
Many classical fairy tales, novels, and screenplays leverage dialogue to advance story plots and establish characters. We present the first study to explore whether machines can understand and generate dialogue in stories, which requires capturing the traits of different characters and the relationships between them. To this end, we propose two new tasks, Masked Dialogue Generation and Dialogue Speaker Recognition, i.e., generating missing dialogue turns and predicting speakers for specified dialogue turns, respectively. We build a new dataset, DialStory, consisting of 105k Chinese stories containing a large amount of dialogue, to support the evaluation. We show the difficulty of the proposed tasks by testing existing models with automatic and manual evaluation on DialStory. Furthermore, we propose to learn explicit character representations to improve performance on these tasks. Extensive experiments and case studies show that our approach can generate more coherent and informative dialogue, and achieve higher speaker recognition accuracy than strong baselines.
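A minimal sketch of dialogue speaker recognition with explicit character representations: each character gets a learned vector, and the speaker of a dialogue turn is predicted by matching the turn's encoding against all character vectors. The encoder is abstracted away and the dot-product scoring is an illustrative assumption.

```python
import torch
import torch.nn as nn

class SpeakerRecognizer(nn.Module):
    def __init__(self, num_characters, dim=256):
        super().__init__()
        self.characters = nn.Embedding(num_characters, dim)  # explicit character reps

    def forward(self, turn_encoding):
        """turn_encoding: (batch, dim) encoding of a dialogue turn in story context."""
        logits = turn_encoding @ self.characters.weight.T    # score every character
        return logits.argmax(dim=-1), logits

model = SpeakerRecognizer(num_characters=6)
pred, logits = model(torch.randn(4, 256))
print(pred)  # predicted speaker index for each turn
```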
Expert-layman text style transfer technologies have the potential to improve communication between members of scientific communities and the general public. High-quality information produced by experts is often filled with difficult jargon that laypeople struggle to understand. This is a particularly notable issue in the medical domain, where laymen are often confused by medical text online. At present, two bottlenecks interfere with the goal of building high-quality medical expert-layman style transfer systems: a dearth of pretrained medical-domain language models spanning both expert and layman terminologies, and a lack of parallel corpora for training the transfer task itself. To mitigate the first issue, we propose a novel language model (LM) pretraining task, Knowledge Base Assimilation, which synthesizes pretraining data from the edges of a graph of expert- and layman-style medical terminology terms into the LM during self-supervised learning. To mitigate the second issue, we build a large-scale parallel corpus in the medical expert-layman domain using a margin-based criterion. Our experiments show that transformer-based models pretrained with knowledge base assimilation and other well-established pretraining tasks, then fine-tuned on our new parallel corpus, yield considerable improvements on expert-layman transfer benchmarks, achieving an average relative improvement of 106% in our human-evaluated Overall Success Rate (OSR). We release our code and parallel corpus for future research.
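The core idea behind such a pretraining task can be sketched as synthesizing examples that tie an expert term to its layman counterpart, so the masked LM sees both sides of each terminology edge. The template and term pairs below are illustrative assumptions, not the paper's construction.

```python
# Toy edges of an expert-layman terminology graph (illustrative examples only).
TERM_PAIRS = [("myocardial infarction", "heart attack"),
              ("hypertension", "high blood pressure")]

def assimilation_examples(pairs, mask="[MASK]"):
    """Synthesize masked-LM examples from terminology-graph edges."""
    examples = []
    for expert, layman in pairs:
        # Mask one side of the edge so the LM must recover the aligned term.
        examples.append((f"{expert}, also known as {mask}.", layman))
        examples.append((f"{layman}, medically termed {mask}.", expert))
    return examples

for text, target in assimilation_examples(TERM_PAIRS):
    print(text, "->", target)
```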
Data augmentation is an important component of robustness evaluation for natural language processing (NLP) models and of enhancing the diversity of the data they are trained on. In this paper, we present NL-Augmenter, a new participatory Python-based natural language augmentation framework which supports the creation of both transformations (modifications to the data) and filters (data splits according to specific features). We describe the framework and an initial set of 117 transformations and 23 filters for a variety of natural language tasks. We demonstrate the efficacy of NL-Augmenter by using several of its transformations to analyze the robustness of popular natural language models. The infrastructure, datacards, and robustness analysis results are publicly available on the NL-Augmenter repository (https://github.com/gem-benchmark/nl-augmenter).
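The framework's two abstractions can be sketched as follows: a transformation produces modified variants of an input, and a filter splits data according to a feature. The method names mirror the generate/filter pattern the framework describes, but this is a self-contained stand-in, not the repository's actual interface code.

```python
from typing import List

class UppercaseTransformation:
    """Transformation: produce modified variants of an input sentence."""
    def generate(self, sentence: str) -> List[str]:
        return [sentence.upper()]

class LengthFilter:
    """Filter: keep only sentences matching a feature (here, word count)."""
    def __init__(self, min_words: int = 5):
        self.min_words = min_words

    def filter(self, sentence: str) -> bool:
        return len(sentence.split()) >= self.min_words

data = ["short one", "this sentence is long enough to pass the filter"]
kept = [s for s in data if LengthFilter().filter(s)]
augmented = [v for s in kept for v in UppercaseTransformation().generate(s)]
print(augmented)
```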
Recent advances in deep neural language models, combined with the capacity of large-scale datasets, have accelerated the development of natural language generation systems that produce fluent and coherent text (to various degrees of success) in a multitude of tasks and application contexts. However, controlling the output of these models for desired user and task needs is still an open challenge. This is crucial not only for customizing the content and style of the generated language, but also for their safe and reliable deployment in the real world. We present an extensive survey on the emerging topic of constrained neural language generation, in which we formally define and categorize the problems of natural language generation by distinguishing between conditions and constraints (the latter being testable conditions on the output text rather than the input), present constrained text generation tasks, and review existing methods and evaluation metrics for constrained text generation. Our aim is to highlight recent progress and trends in this emerging field, informing on the most promising directions and limitations, towards advancing the state of the art of constrained neural language generation research.
The definition generation task aims to automatically generate the definition of a word in a specific context. However, owing to the lack of datasets covering different complexity levels, the definitions produced by models tend to stay at the same complexity level. This paper proposes the novel task of generating definitions for a word at controllable complexity levels. Correspondingly, we introduce COMPILING, a dataset giving detailed information about Chinese definitions, in which each definition is labeled with its complexity level. The COMPILING dataset includes 74,303 words and 106,882 definitions. To the best of our knowledge, it is the largest dataset for the Chinese definition generation task. We select various representative generation methods as baselines for this task and conduct evaluations, which shows that our dataset plays an outstanding role in assisting models in generating definitions at different complexity levels. We believe the COMPILING dataset will benefit further research on complexity-controllable definition generation.