Attribute-controlled text rewriting, also known as text style transfer, plays a crucial role in regulating the attributes and biases of textual training data and of machine-generated text. In this work we present SimpleStyle, a minimalist yet effective approach to style transfer composed of two simple ingredients: controlled denoising and output filtering. Despite the simplicity of our approach, which can be succinctly described with a few lines of code, it is competitive with previous state-of-the-art methods in both automatic and human evaluation. To demonstrate the adaptability and practical value of our system beyond academic data, we apply SimpleStyle to transfer a wide range of text attributes appearing in real-world textual data from social networks. Additionally, we introduce a novel "soft noising" technique that further improves the performance of our system. We also show that teaching a student model to generate the output of SimpleStyle can result in a system that performs style transfer of equivalent quality with only a single greedy-decoded sample. Finally, we suggest our method as a remedy for the fundamental incompatible-baseline issue that holds back progress in the field, and offer our protocol as a simple yet strong baseline for works seeking to make incremental advancements in attribute-controlled text rewriting.
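As a rough illustration of the two ingredients named above, the sketch below shows how a controlled-denoise-then-filter loop could look; the word-dropout noiser, `denoiser`, `classifier`, and selection heuristic are all illustrative assumptions, not the paper's exact implementation.

```python
# A minimal sketch, assuming a word-dropout noiser, an attribute-conditioned
# denoiser, and an attribute classifier (all interfaces hypothetical).
import random

def noise(text: str, drop_prob: float = 0.3) -> str:
    """Corrupt the source by randomly dropping words (input to controlled denoising)."""
    kept = [w for w in text.split() if random.random() > drop_prob]
    return " ".join(kept)

def similarity(a: str, b: str) -> float:
    """Crude content-preservation proxy: Jaccard overlap of word sets."""
    sa, sb = set(a.split()), set(b.split())
    return len(sa & sb) / max(len(sa | sb), 1)

def simple_style(text, target_attr, denoiser, classifier, n_samples=10):
    # Sample several attribute-conditioned reconstructions of the noised source.
    candidates = [denoiser.generate(noise(text), control=target_attr)
                  for _ in range(n_samples)]
    # Output filtering: keep samples the classifier assigns to the target
    # attribute, then pick the one closest to the source.
    accepted = [c for c in candidates if classifier.predict(c) == target_attr]
    return max(accepted, key=lambda c: similarity(text, c)) if accepted else text
```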
In recent years, the stylistic properties of text have attracted the attention of computational linguistics researchers. Specifically, researchers have studied the text style transfer (TST) task, which aims to change the stylistic properties of a text while preserving its style-independent content. Over the past few years, many novel TST algorithms have been developed, and industry has leveraged these algorithms to enable exciting applications. Thanks to this symbiosis, the field of TST research has grown rapidly. This paper aims to provide a comprehensive review of recent research efforts on text style transfer. More specifically, we create a taxonomy to organize TST models and provide a comprehensive summary of the state of the art. We review existing evaluation methodologies for the TST task and conduct a large-scale reproducibility study in which we experimentally benchmark 19 state-of-the-art TST algorithms on two publicly available datasets. Finally, we expand on current trends and provide new perspectives on new and exciting developments in the TST field.
We present two simple unsupervised methods for eliminating toxicity in text. Our first method combines two recent ideas: (1) guiding the generation process with a small conditional language model and (2) using a paraphrasing model for style transfer. We use a well-performing paraphraser, guided by a style-trained language model, to keep the content of the text and remove toxicity. Our second method uses BERT to replace toxic words with their non-offensive synonyms. We make the method more flexible by enabling BERT to replace masked tokens with a variable number of words. Finally, we present the first large-scale comparative study of style transfer models on the task of toxicity removal. We compare our models with a number of methods for style transfer, evaluating them in a reference-free manner using a combination of unsupervised style transfer metrics. Both proposed methods yield new SOTA results.
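The second method, masking toxic words and letting BERT propose replacements, can be approximated with the standard Hugging Face fill-mask pipeline. A minimal sketch, with a placeholder lexicon standing in for the paper's toxic-word selection:

```python
from transformers import pipeline

# Sketch of mask-and-replace detoxification with a masked LM.
# TOXIC_WORDS is a toy placeholder; the paper selects words with a learned model.
unmasker = pipeline("fill-mask", model="bert-base-uncased")
TOXIC_WORDS = {"stupid", "idiot"}

def detoxify(sentence: str) -> str:
    tokens = sentence.split()
    for i, tok in enumerate(tokens):
        if tok.lower().strip(".,!?") in TOXIC_WORDS:
            masked = " ".join(tokens[:i] + [unmasker.tokenizer.mask_token] + tokens[i + 1:])
            # Take BERT's top-ranked suggestion for the masked slot.
            tokens[i] = unmasker(masked)[0]["token_str"]
    return " ".join(tokens)

print(detoxify("That was a stupid idea."))
```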
Text style transfer is an important task in natural language generation, aiming to control certain attributes of the generated text, such as politeness, emotion, humor, and many others. It has a long history in the field of natural language processing, and recently has gained significant attention thanks to the promising performance brought by deep neural models. In this paper, we present a systematic survey of the research on neural text style transfer, spanning over 100 representative articles since the first neural text style transfer work in 2017. We discuss the task formulation, existing datasets and subtasks, evaluation, as well as the rich methodologies in the presence of parallel and non-parallel data. We also provide discussions on a variety of important topics regarding the future development of this task. Our curated paper list is available at https://github.com/zhijing-jin/text_style_transfer_survey
Text style transfer (TST) aims to change the underlying style of source text to another specific style while keeping the content the same. Due to the scarcity of high-quality parallel training data, unsupervised learning has become a trending direction for TST tasks. In this paper, we propose a novel VAE-based Text Style Transfer with Pivot Words Enhancement Learning (VT-STOWER) method, which utilizes a Variational AutoEncoder (VAE) and external style embeddings to jointly learn semantic and style distributions. In addition, we introduce pivot words learning, which learns the words that are decisive for a specific style and thereby further improves the overall performance of style transfer. The proposed VT-STOWER can be scaled to different TST scenarios with very limited and non-parallel training data, thanks to a novel and flexible style strength control mechanism. Experiments show that VT-STOWER outperforms the state of the art on sentiment, formality, and code-switching TST tasks.
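A plausible reading of the style strength control mechanism, sketched below with hypothetical shapes and not taken from the paper itself, is to scale an external style embedding before adding it to the reparameterized VAE latent:

```python
import torch

def transfer_latent(mu, logvar, style_emb, strength: float = 1.0):
    """Sketch: shift a reparameterized VAE latent toward a target-style embedding.
    `strength` plays the role of the flexible style-strength knob suggested by
    the abstract; the paper's exact formulation may differ."""
    std = torch.exp(0.5 * logvar)
    z = mu + std * torch.randn_like(std)   # reparameterization trick
    return z + strength * style_emb        # inject the (scaled) target style
```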
Controllable Text Generation (CTG) is an emerging area in the field of natural language generation (NLG). It is regarded as crucial for the development of advanced text generation technologies that are more natural and better meet the specific constraints of practical applications. In recent years, methods using large-scale pre-trained language models (PLMs), in particular the widely used transformer-based PLMs, have become a new paradigm of NLG, allowing generation of more diverse and fluent text. However, due to the low interpretability of deep neural networks, the controllability of these methods needs to be guaranteed. To this end, controllable text generation using transformer-based PLMs has become a rapidly growing yet challenging new research hotspot. A diverse range of approaches have emerged in the past 3-4 years, targeting different CTG tasks that may require different types of controlled constraints. In this paper, we present a systematic critical review of the common tasks, main approaches, and evaluation methods in this area. Finally, we discuss the challenges the field is facing and put forward various promising future directions. To the best of our knowledge, this is the first survey paper to summarize CTG techniques from the perspective of PLMs. We hope it can help researchers in related fields to quickly track the academic frontier, providing them with a landscape of the area and a roadmap for future research.
Text detoxification has the potential to mitigate the harms of toxicity by rephrasing text to remove offensive meaning, but subtle toxicity remains challenging to tackle. We introduce MaRCo, a detoxification algorithm that combines controllable generation and text rewriting methods using a Product of Experts with autoencoder language models (LMs). MaRCo uses likelihoods under a non-toxic LM (expert) and a toxic LM (anti-expert) to find candidate words to mask and potentially replace. We evaluate our method on several subtle toxicity and microaggressions datasets, and show that it not only outperforms baselines on automatic metrics, but MaRCo's rewrites are preferred 2.1 $\times$ more in human evaluation. Its applicability to instances of subtle toxicity is especially promising, demonstrating a path forward for addressing increasingly elusive online hate.
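The expert/anti-expert disagreement scoring can be sketched as below. MaRCo itself uses BART-style autoencoder LMs, so the causal LMs and plain `gpt2` placeholder checkpoints here are simplifying assumptions, standing in for finetuned non-toxic and toxic models:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholders: in MaRCo these would be finetuned expert (non-toxic) and
# anti-expert (toxic) autoencoder LMs rather than vanilla GPT-2.
tok = AutoTokenizer.from_pretrained("gpt2")
expert = AutoModelForCausalLM.from_pretrained("gpt2")
antiexpert = AutoModelForCausalLM.from_pretrained("gpt2")

def token_logprobs(model, ids):
    """Log-probability of each token given its prefix."""
    with torch.no_grad():
        logits = model(ids).logits[:, :-1]
    return torch.log_softmax(logits, -1).gather(-1, ids[:, 1:, None]).squeeze(-1)

def mask_candidates(text, threshold=1.0):
    """Tokens far more likely under the anti-expert are candidates to mask."""
    ids = tok(text, return_tensors="pt").input_ids
    disagreement = token_logprobs(antiexpert, ids) - token_logprobs(expert, ids)
    return [tok.decode(ids[0, i + 1]) for i in torch.where(disagreement[0] > threshold)[0]]
```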
Natural Language Generation (NLG) has improved exponentially in recent years thanks to the development of sequence-to-sequence deep learning technologies such as Transformer-based language models. This advancement has led to more fluent and coherent NLG, leading to improved development in downstream tasks such as abstractive summarization, dialogue generation and data-to-text generation. However, it is also apparent that deep learning based generation is prone to hallucinate unintended text, which degrades the system performance and fails to meet user expectations in many real-world scenarios. To address this issue, many studies have been presented in measuring and mitigating hallucinated texts, but these have never been reviewed in a comprehensive manner before. In this survey, we thus provide a broad overview of the research progress and challenges in the hallucination problem in NLG. The survey is organized into two parts: (1) a general overview of metrics, mitigation methods, and future directions; and (2) an overview of task-specific research progress on hallucinations in the following downstream tasks, namely abstractive summarization, dialogue generation, generative question answering, data-to-text generation, machine translation, and visual-language generation. This survey serves to facilitate collaborative efforts among researchers in tackling the challenge of hallucinated texts in NLG.
Recent advances in deep neural language models, combined with the capacity of large-scale datasets, have accelerated the development of natural language generation systems that produce fluent and coherent text (to varying degrees of success) across a multitude of tasks and application contexts. However, controlling the output of these models to meet desired user needs remains an open challenge. This is crucial not only for customizing the content and style of the generated language, but also for the safe and reliable deployment of such systems in the real world. We present an extensive survey on the emerging topic of constrained neural language generation, in which we formally define and categorize natural language generation problems by distinguishing between conditions and constraints (the latter being testable conditions on the output text rather than on the input), present constrained text generation tasks, and review existing methods and evaluation metrics for constrained text generation. Our aim is to highlight recent progress and trends in this emerging field, informing on the most promising directions and limitations, to advance the state of the art in constrained neural language generation research.
Authorship style transfer involves altering the style of text to match the style of some target author whilst preserving the semantic meaning of the original text. Existing approaches to unsupervised authorship style transfer like STRAP have largely focused on style transfer for target authors with many examples of their writing style through books, speeches, or other published works (Krishna et al., 2020). Due to this high-resource training data requirement (often greater than 100,000 words), these approaches are often only useful for style transfer to the style of published authors, politicians, or other well-known figures and authorship styles. In this paper, we attempt to perform low-resource authorship style transfer, a more challenging class of authorship style transfer where only a limited amount of text in the target author's style may exist. In our experiments, we specifically choose source and target authors from Reddit to perform style transfer over their Reddit posts, limiting ourselves to just 16 posts (on average $\approx$ 500 words) of the target author's style. We then propose a method for automatic evaluation on the low-resource authorship style transfer task utilizing authorship and style representation embeddings (Rivera-Soto et al., 2021; Wegmann et al., 2022). We evaluate our style transferred outputs with the proposed automatic evaluation method and find that our method, STYLL, is able to outperform STRAP and a comprehensive set of baselines.
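The proposed automatic evaluation can be sketched as cosine similarity between a transferred text and the centroid of the target author's posts in a style embedding space; the `encoder` below is a stand-in for the cited authorship/style representations (Rivera-Soto et al., 2021; Wegmann et al., 2022), and its `.embed` interface is hypothetical:

```python
import numpy as np

def style_score(transferred_texts, target_posts, encoder):
    """Mean cosine similarity between transferred outputs and the target
    author's style centroid. `encoder.embed` is a hypothetical interface to
    an authorship/style representation model."""
    target = np.mean([encoder.embed(p) for p in target_posts], axis=0)
    scores = []
    for t in transferred_texts:
        v = encoder.embed(t)
        scores.append(float(v @ target / (np.linalg.norm(v) * np.linalg.norm(target))))
    return float(np.mean(scores))
```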
Text detoxification is the style transfer task of creating neutral versions of toxic texts. In this paper, we use the concept of text editing to build a two-step tagging-based detoxification model using a parallel corpus of Russian texts. With this model, we achieved the best style transfer accuracy among all models in the RUSSE Detoxification shared task, surpassing larger sequence-to-sequence models.
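A minimal sketch of such a tag-then-edit scheme, with hypothetical tagger and editor components and an illustrative KEEP/DELETE/REPLACE tag set, could look like this:

```python
# Sketch of two-step tagging-based detoxification (all components hypothetical).
def detoxify(text, tagger, editor):
    tokens = text.split()
    tags = tagger.predict(tokens)  # step 1: one of KEEP / DELETE / REPLACE per token
    out = []
    for tok, tag in zip(tokens, tags):
        if tag == "KEEP":
            out.append(tok)
        elif tag == "REPLACE":
            # step 2: rewrite only the tagged tokens with a neutral substitute
            out.append(editor.substitute(tok, context=tokens))
        # DELETE: drop the token entirely
    return " ".join(out)
```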
Using style transfer models to reduce the offensiveness of social media comments can help foster more inclusive environments. However, there are no sizable datasets that contain offensive texts alongside their inoffensive counterparts, and fine-tuning pretrained models with limited labeled data can lead to loss of the original meaning in the style-transferred text. To address this issue, we provide two major contributions. First, we release the first publicly available, parallel corpus of offensive Reddit comments and their style-transferred counterparts, annotated by expert sociolinguists. Second, we introduce the first discourse-aware style transfer models that can effectively reduce offensiveness in Reddit text while preserving the meaning of the original text. These models are the first to examine inferential links between a comment and the text it replies to when transferring the style of offensive Reddit text. We propose two different methods of integrating discourse relations with pretrained transformer models and evaluate them on our dataset of offensive Reddit comments and their inoffensive counterparts. Improvements over the baseline with respect to both automatic metrics and human evaluation indicate that our discourse-aware models are better at preserving the meaning of style-transferred text than state-of-the-art discourse-agnostic models.
Real-world text applications often involve composing a wide range of text control operations, such as editing text with respect to attributes, manipulating keywords and structure, and generating new text with desired attributes. Prior work typically learns or finetunes a language model (LM) to perform individual operations or specific subsets of operations. Recent research has studied combining operations in a plug-and-play manner, often with costly search or optimization in the complex sequence space. This paper proposes a new, efficient approach for composable text operations in the compact latent space of text. The low dimensionality and differentiability of text latent vectors allow us to develop an efficient sampler based on ordinary differential equations (ODEs), given arbitrary plug-in operators (e.g., attribute classifiers). By connecting pretrained LMs (e.g., GPT2) to the latent space through efficient adaptation, we then decode the sampled vectors into desired text sequences. The flexible approach permits diverse control operators (sentiment, tense, formality, keywords, etc.) acquired with any relevant data from different domains. Experiments show that composing these operators within our approach can generate or edit high-quality text, substantially improving over previous methods in generation quality and efficiency.
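The ODE-based sampling over latent vectors can be approximated, very roughly, by Euler steps along the gradient of a plug-in attribute classifier; the sketch below is a crude stand-in for the paper's actual sampler, whose exact dynamics differ:

```python
import torch

def edit_latent(z, attr_classifier, target=1, steps=50, step_size=0.1):
    """Euler-discretized drift of a latent vector toward a target attribute.
    A crude stand-in for the ODE-based sampler; `attr_classifier` is any
    differentiable plug-in operator mapping latents to attribute logits."""
    z = z.clone().requires_grad_(True)
    for _ in range(steps):
        logp = torch.log_softmax(attr_classifier(z), dim=-1)[..., target].sum()
        (grad,) = torch.autograd.grad(logp, z)
        z = (z + step_size * grad).detach().requires_grad_(True)
    return z.detach()
# The edited latent is then decoded back to text via the adapted pretrained LM.
```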
Curbing online hate speech has become the need of the hour; however, a blanket ban on such activities is infeasible for several geopolitical and cultural reasons. To reduce the severity of the problem, in this paper we introduce a novel task, hate speech normalization, which aims to weaken the intensity of hatred exhibited by an online post. The intent of hate speech normalization is not to support hate but to provide users with a stepping stone towards non-hate, while giving online platforms more time to monitor any improvement in the user's behavior. To this end, we manually curated a parallel corpus of hate texts and their normalized counterparts (a normalized text is less hateful and more benign). We introduce NACL, a simple yet efficient hate speech normalization model that operates in three stages: first, it measures the hate intensity of the original sample; second, it identifies the hate spans within it; and finally, it reduces the hate intensity by paraphrasing the hate spans. We conducted extensive experiments to measure the efficacy of NACL via a three-way evaluation (intrinsic, extrinsic, and human study). We observe that NACL outperforms six baselines, obtaining an RMSE of 0.1365 for intensity prediction, an F1-score of 0.622 for span identification, and 82.27 BLEU and 80.05 perplexity for normalized text generation. We further show the generalizability of NACL across other platforms (Reddit, Facebook, GAB). An interactive prototype of NACL was put together for the user study. Moreover, the tool is being deployed in a real-world setting at Wipro AI as part of its mission to tackle harmful content on online platforms.
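The three-stage design could be composed as in the sketch below; all component interfaces and the intensity threshold are hypothetical stand-ins for NACL's actual modules:

```python
# Sketch of the three-stage normalization pipeline (components hypothetical).
def normalize(post, intensity_model, span_model, paraphraser, max_rounds=3):
    for _ in range(max_rounds):
        intensity = intensity_model.score(post)        # stage 1: hate intensity
        if intensity < 0.5:                            # illustrative threshold
            break
        for span in span_model.find_hate_spans(post):  # stage 2: hate spans
            # stage 3: reduce intensity by paraphrasing each hate span
            post = post.replace(span, paraphraser.soften(span))
    return post
```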
Recent studies have shown the impressive efficacy of counterfactually augmented data (CAD) for reducing NLU models' reliance on spurious features and improving their generalizability. However, current methods still heavily rely on human efforts or task-specific designs to generate counterfactuals, thereby impeding CAD's applicability to a broad range of NLU tasks. In this paper, we present AutoCAD, a fully automatic and task-agnostic CAD generation framework. AutoCAD first leverages a classifier to unsupervisedly identify rationales as spans to be intervened, which disentangles spurious and causal features. Then, AutoCAD performs controllable generation enhanced by unlikelihood training to produce diverse counterfactuals. Extensive evaluations on multiple out-of-domain and challenge benchmarks demonstrate that AutoCAD consistently and significantly boosts the out-of-distribution performance of powerful pre-trained models across different NLU tasks, which is comparable or even better than previous state-of-the-art human-in-the-loop or task-specific CAD methods. The code is publicly available at https://github.com/thu-coai/AutoCAD.
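The "unlikelihood training" component builds on the token-level unlikelihood objective of Welleck et al. (2020), which penalizes probability mass assigned to negative tokens; a minimal version of that loss (not AutoCAD's full training setup):

```python
import torch
import torch.nn.functional as F

def unlikelihood_loss(logits, negative_ids):
    """Token-level unlikelihood: push down p(negative token) at each position.
    logits: (batch, seq, vocab); negative_ids: (batch, seq) tokens to avoid."""
    probs = F.softmax(logits, dim=-1)
    p_neg = probs.gather(-1, negative_ids.unsqueeze(-1)).squeeze(-1)
    # -log(1 - p_neg); clamp keeps the log argument strictly positive
    return -torch.log1p(-p_neg.clamp(max=1 - 1e-6)).mean()
```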
Idiomatic expressions (IEs) play an important role in natural language. In this paper, we study the task of idiomatic sentence paraphrasing (ISP), which aims to paraphrase a sentence containing an IE by replacing the IE with its literal paraphrase. The lack of large-scale corpora of sentences parallel in idiomatic and literal form is the primary challenge for this task, and we consider two separate solutions. First, we propose an unsupervised approach to ISP that leverages an IE's contextual information and definition and does not require a parallel sentence training set. Second, we propose a weakly supervised approach that uses back-translation to jointly perform paraphrasing and generation of sentences with IEs, in order to enlarge the small-scale parallel sentence training dataset. Other significant derivatives of this study include a model that replaces a literal phrase in a sentence with an IE to generate idiomatic expressions, and a large-scale parallel dataset of idiomatic/literal sentence pairs. The effectiveness of the proposed solutions compared with competitive baselines is observed in relative gains of over 5.16 points in BLEU, over 8.75 points in METEOR, and over 19.57 points in SARI when the generated sentences are empirically validated on a parallel dataset using automatic and manual evaluations. We demonstrate the practical utility of ISP as a preprocessing step in EN-DE machine translation.
State-of-the-art text simplification (TS) systems adopt end-to-end neural network models to directly generate the simplified version of the input text, and usually function as a blackbox. Moreover, TS is usually treated as an all-purpose generic task under the assumption of homogeneity, where the same simplification is suitable for all. In recent years, however, there has been increasing recognition of the need to adapt the simplification techniques to the specific needs of different target groups. In this work, we aim to advance current research on explainable and controllable TS in two ways: First, building on recently proposed work to increase the transparency of TS systems, we use a large set of (psycho-)linguistic features in combination with pre-trained language models to improve explainable complexity prediction. Second, based on the results of this preliminary task, we extend a state-of-the-art Seq2Seq TS model, ACCESS, to enable explicit control of ten attributes. The results of experiments show (1) that our approach improves the performance of state-of-the-art models for predicting explainable complexity and (2) that explicitly conditioning the Seq2Seq model on ten attributes leads to a significant improvement in performance in both within-domain and out-of-domain settings.
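Explicit attribute control in ACCESS-style Seq2Seq models is implemented by prepending control tokens that encode each attribute's target value; the sketch below shows the mechanism, with partly illustrative attribute names (ACCESS uses tokens such as a character-length ratio, and the ten attributes of this work are not enumerated here):

```python
def build_controlled_input(source: str, controls: dict) -> str:
    """Prepend ACCESS-style control tokens, e.g. <NbChars_0.8> for a target
    compression ratio. Attribute names/values here are illustrative."""
    prefix = " ".join(f"<{name}_{value}>" for name, value in controls.items())
    return f"{prefix} {source}"

inp = build_controlled_input(
    "The committee promulgated new regulations.",
    {"NbChars": 0.8, "WordComplexity": 0.6},
)
# -> "<NbChars_0.8> <WordComplexity_0.6> The committee promulgated new regulations."
# The Seq2Seq model is trained with these tokens, so they steer generation at test time.
```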
Generating counterfactual test cases is an important backbone for testing NLP models and making them as robust and reliable as traditional software. When generating test cases, a desired property is the ability to control test case generation in a flexible manner, so as to test for a plethora of failure cases and to explain and repair them in a targeted manner. In this direction, significant progress has been made in prior work by manually writing rules that generate controlled counterfactuals. However, this approach requires heavy manual supervision and lacks the flexibility to easily introduce new controls. Motivated by the impressive flexibility of the plug-and-play approach of PPLM, we propose to bring the plug-and-play framework to the task of counterfactual test case generation. We introduce CASPer, a plug-and-play counterfactual generation framework for generating test cases that satisfy desired goal attributes. Our plug-and-play model can steer the test case generation process given any attribute model, without attribute-specific training of the model. In experiments, we show that CASPer effectively generates counterfactual text that follows the steering provided by the attribute model while remaining fluent and diverse and preserving the original content. We also show that the counterfactuals generated by CASPer can be used to augment training data, thereby fixing the tested model and making it more robust.
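One plug-and-play steering step can be sketched as rescoring the LM's top candidates with an external attribute model at each decoding position. Note that PPLM itself perturbs hidden states with attribute-model gradients; this logit-rescoring variant, with a hypothetical `attr_model.log_prob` interface, is only a simplified stand-in:

```python
import torch

def steered_decode(lm, attr_model, input_ids, steps=20, alpha=2.0):
    """Simplified plug-and-play decoding: pick, at each step, the top-k token
    that best trades off LM fluency against an external attribute score."""
    for _ in range(steps):
        logits = lm(input_ids).logits[:, -1]
        logprobs = torch.log_softmax(logits, -1)
        topk = torch.topk(logits, k=50, dim=-1)
        best, best_score = None, -float("inf")
        for tok_id in topk.indices[0]:
            cand = torch.cat([input_ids, tok_id.view(1, 1)], dim=-1)
            # fluency (LM log-prob) + weighted attribute score (hypothetical API)
            score = logprobs[0, tok_id].item() + alpha * attr_model.log_prob(cand)
            if score > best_score:
                best, best_score = cand, score
        input_ids = best
    return input_ids
```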
Unavailability of parallel corpora for training text style transfer (TST) models is a very challenging yet common scenario. TST models also implicitly need to preserve the content while transforming a source sentence into the target style. To tackle these problems, an intermediate representation is often constructed that is devoid of style while still preserving the meaning of the source sentence. In this work, we study the usefulness of the Abstract Meaning Representation (AMR) graph as that intermediate style-agnostic representation. We posit that semantic notations like AMR are a natural choice for an intermediate representation. Hence, we propose T-STAR: a model comprising two components, a text-to-AMR encoder and an AMR-to-text decoder. We propose several modeling improvements to enhance the style agnosticity of the generated AMR. To the best of our knowledge, T-STAR is the first work that uses AMR as an intermediate representation for TST. Through thorough experimental evaluation, we show that T-STAR significantly outperforms state-of-the-art techniques, achieving on average 15.2% higher content preservation with negligible loss (approx. 3%) in style accuracy. Through a detailed human evaluation with 90,000 ratings, we also show that T-STAR has up to 50% fewer hallucinations compared to state-of-the-art TST models.
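The text-to-AMR-to-text round trip at the core of T-STAR can be reproduced at a basic level with the open-source amrlib library (plain pretrained parser/generator models, without T-STAR's style-agnosticity training); a minimal sketch following amrlib's documented interface:

```python
import amrlib

# Round-trip a sentence through AMR with amrlib's pretrained models.
stog = amrlib.load_stog_model()   # sentence-to-graph (text -> AMR)
gtos = amrlib.load_gtos_model()   # graph-to-sentence (AMR -> text)

graphs = stog.parse_sents(["The service was painfully slow."])
sents, _ = gtos.generate(graphs)
print(sents[0])  # sentence regenerated from the intermediate AMR graph
```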
Text style transfer aims to alter the style of a sentence while preserving its content. Due to the lack of parallel corpora, most recent work focuses on unsupervised methods and often uses cycle construction to train models. Although cycle construction improves the style transfer ability of the model by rebuilding transferred sentences back into original-style sentences, it brings about content loss in unsupervised text style transfer tasks. In this paper, we propose StyleFlow, a novel disentanglement-based style transfer model that enhances content preservation. Instead of the typical encoder-decoder scheme, StyleFlow can not only run the forward process to obtain the output, but also infer the input from the output. We design attention-aware coupling layers to disentangle the content representations and the style representations of a sentence. In addition, we propose a data augmentation method based on normalizing flows to improve the robustness of the model. Experimental results demonstrate that our model preserves content effectively and achieves state-of-the-art performance on most metrics.
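The invertibility that lets StyleFlow run forward and also infer the input from the output comes from coupling layers; a minimal affine coupling layer (without the paper's attention-aware design, which is not reproduced here) looks like this:

```python
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """Minimal affine coupling layer of a normalizing flow: one half of the
    input parameterizes an invertible affine transform of the other half,
    so the layer can be evaluated forward and inverted exactly."""
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim // 2, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, x):
        x1, x2 = x.chunk(2, dim=-1)
        log_s, t = self.net(x1).chunk(2, dim=-1)
        return torch.cat([x1, x2 * torch.exp(log_s) + t], dim=-1)

    def inverse(self, y):
        y1, y2 = y.chunk(2, dim=-1)
        log_s, t = self.net(y1).chunk(2, dim=-1)
        return torch.cat([y1, (y2 - t) * torch.exp(-log_s)], dim=-1)
```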