Tongue twisters are meaningful sentences that are difficult to pronounce. Automatically generating tongue twisters is challenging because the generated utterance must satisfy two conditions at once: phonetic difficulty and semantic meaning. Furthermore, phonetic difficulty is itself hard to characterize and is expressed in natural tongue twisters through a heterogeneous combination of phenomena such as alliteration and homophony. In this paper, we propose PANCETTA: Phoneme Aware Neural Completion to Elicit Tongue Twisters Automatically. We leverage phoneme representations to capture the notion of phonetic difficulty and train language models to generate original tongue twisters under two proposed task settings. To this end, we curate a dataset, also called PANCETTA, consisting of existing English tongue twisters. Through automatic and human evaluation, as well as qualitative analysis, we show that PANCETTA generates novel, phonetically difficult, fluent, and semantically meaningful tongue twisters.
Personification is a figure of speech that endows inanimate entities with properties and actions typically seen as requiring animacy. In this paper, we explore the task of personification generation. To this end, we propose PINEAPPLE: Personifying INanimate Entities by Acquiring Parallel Personification data for Learning Enhanced generation. We curate a corpus of personifications called PersonifCorp, together with automatically generated de-personified literalizations of these personifications. We demonstrate the usefulness of this parallel corpus by training a seq2seq model to personify a given literal input. Both automatic and human evaluations show that fine-tuning on PersonifCorp leads to significant gains in personification-related qualities such as animacy and interestingness. A detailed qualitative analysis also highlights key strengths and imperfections of PINEAPPLE over baselines, demonstrating a strong ability to generate diverse and creative personifications that enhance the overall appeal of a sentence.
Many implicit inferences exist in text depending on how it is structured, and this structure can influence the text's interpretation and meaning. One such structural aspect present in text with chronology is the order of its presentation; for narratives or stories, this is known as the narrative order. Reordering a narrative can affect the temporal, causal, event-based, and other inferences readers draw from it, which in turn can have strong effects on its interpretation and interestingness. In this paper, we propose and investigate the task of Narrative Reordering (NAREOR), which involves rewriting a given story in a different narrative order while preserving its plot. We present a dataset, NAREORC, with human rewritings of stories from ROCStories in non-linear orders, and conduct a detailed analysis of it. Further, we propose novel task-specific training methods with suitable evaluation metrics. We perform experiments on NAREORC using state-of-the-art models such as BART and T5 and conduct extensive automatic and human evaluations. We demonstrate that although our models perform decently, NAREOR is a challenging task with potential for further exploration. We also investigate two applications of NAREOR: generating more interesting variations of stories and serving as adversarial sets for temporal/event-related tasks, besides discussing other prospective uses, such as pedagogical setups related to language skills like essay writing, and applications to medicine involving clinical narratives.
We present two novel unsupervised methods for eliminating toxicity in text. Our first method combines two recent ideas: (1) guidance of the generation process with small style-conditional language models and (2) use of paraphrasing models to perform style transfer. We use a well-performing paraphraser guided by style-trained language models to keep the text content and remove toxicity. Our second method uses BERT to replace toxic words with their non-offensive synonyms. We make this method more flexible by enabling BERT to replace mask tokens with a variable number of words. Finally, we present the first large-scale comparative study of style transfer models on the task of toxicity removal, comparing our models against a number of style transfer methods. The models are evaluated in a reference-free way using a combination of unsupervised style transfer metrics. Both proposed methods yield new SOTA results.
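The second method above masks offensive words and lets a masked language model propose in-context replacements. Below is a minimal sketch of that idea only, not the authors' exact system: the toxic lexicon is a tiny placeholder and the model choice is an assumption.

```python
# A minimal sketch of the masked-LM word-replacement idea: mask words from a
# (placeholder) toxic lexicon and let BERT propose in-context substitutes.
from transformers import pipeline

TOXIC_LEXICON = {"stupid", "idiotic"}  # placeholder list, an assumption

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

def detoxify(sentence: str) -> str:
    tokens = sentence.split()
    for i, tok in enumerate(tokens):
        if tok.lower().strip(".,!?") in TOXIC_LEXICON:
            masked = tokens[:i] + [fill_mask.tokenizer.mask_token] + tokens[i + 1:]
            # Take the highest-scoring candidate that is not itself toxic.
            for cand in fill_mask(" ".join(masked)):
                if cand["token_str"].lower() not in TOXIC_LEXICON:
                    tokens[i] = cand["token_str"]
                    break
    return " ".join(tokens)

print(detoxify("That was a stupid idea."))
```

The paper's variant is more flexible than this single-token substitution, since it allows a masked position to be replaced by a variable number of words.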
The task of inserting text at a specified position in a passage, known as fill in the blank (FitB), is useful for a variety of applications in which writers interact with a natural language generation (NLG) system to craft text. While prior work has tackled this problem with specially trained models, a more useful model is one that can effectively perform _both_ FitB and continuation. In this work, we evaluate the feasibility of using a single model for both tasks. We show that models pre-trained with a FitB-style objective are capable of both tasks, while models pre-trained for continuation are not. Finally, we show how FitB models can easily be fine-tuned to allow fine-grained control over the length and word choice of the generation.
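To make the task format concrete, here is an illustrative sketch of expressing a fill-in-the-blank request with a publicly available span-infilling model (T5's sentinel-token format). This only illustrates the input/output convention for infilling; it is not the specific model studied in the paper.

```python
# Illustrative only: a FitB-style request expressed with T5's span-corruption
# format (sentinel tokens), not the paper's specific model.
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# The blank to fill is marked with the <extra_id_0> sentinel.
text = "The hikers reached the summit <extra_id_0> and set up camp before dark."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=10, num_beams=4)
# The decoded output contains the sentinel followed by the predicted span.
print(tokenizer.decode(outputs[0], skip_special_tokens=False))
```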
The recent GPT-3 model (Brown et al., 2020) achieves remarkable few-shot performance solely by leveraging a natural-language prompt and a few task demonstrations as input context. Inspired by their findings, we study few-shot learning in a more practical scenario, where we use smaller language models for which fine-tuning is computationally efficient. We present LM-BFF (better few-shot fine-tuning of language models; alternatively, language models' best friends forever), a suite of simple and complementary techniques for fine-tuning language models on a small number of annotated examples. Our approach includes (1) prompt-based fine-tuning together with a novel pipeline for automating prompt generation; and (2) a refined strategy for dynamically and selectively incorporating demonstrations into each context. Finally, we present a systematic evaluation for analyzing few-shot performance on a range of NLP tasks, including classification and regression. Our experiments demonstrate that our methods combine to dramatically outperform standard fine-tuning procedures in this low resource setting, achieving up to 30% absolute improvement, and 11% on average across all tasks. Our approach makes minimal assumptions on task resources and domain expertise, and hence constitutes a strong task-agnostic method for few-shot learning. Our implementation is publicly available at https://github.com/princeton-nlp/LM-BFF.
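The input format behind prompt-based fine-tuning with demonstrations can be illustrated with a small sketch: the classification example is rendered as a cloze template for a masked LM, and a labeled demonstration per class is appended to the context. The template and label words below are illustrative choices, not the automatically searched ones from the paper.

```python
# A sketch of the prompt-with-demonstrations input format used in
# prompt-based fine-tuning; template and label words are illustrative.
def build_prompt(x: str, demonstrations: list[tuple[str, str]]) -> str:
    """Turn sentiment classification into a cloze task for a masked LM."""
    template = "{sent} It was {label}."
    parts = [template.format(sent=x, label="[MASK]")]
    # Append one labeled demonstration per class to the input context.
    for demo_sent, demo_label in demonstrations:
        parts.append(template.format(sent=demo_sent, label=demo_label))
    return " ".join(parts)

demos = [("A gripping, beautifully shot film.", "great"),
         ("The plot makes no sense at all.", "terrible")]
print(build_prompt("I would happily watch it again.", demos))
```

During fine-tuning, the model is trained to put high probability on the correct label word at the masked position, so the classification head is effectively the pre-trained MLM head itself.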
Unavailability of parallel corpora for training text style transfer (TST) models is a very challenging yet common scenario. Also, TST models implicitly need to preserve the content while transforming a source sentence into the target style. To tackle these problems, an intermediate representation is often constructed that is devoid of style while still preserving the meaning of the source sentence. In this work, we study the usefulness of the Abstract Meaning Representation (AMR) graph as the intermediate style-agnostic representation. We posit that semantic notations like AMR are a natural choice for an intermediate representation. Hence, we propose T-STAR: a model comprising two components, a text-to-AMR encoder and an AMR-to-text decoder. We propose several modeling improvements to enhance the style agnosticity of the generated AMR. To the best of our knowledge, T-STAR is the first work that uses AMR as an intermediate representation for TST. With thorough experimental evaluation, we show T-STAR significantly outperforms state-of-the-art techniques, achieving on average 15.2% higher content preservation with negligible loss (approximately 3%) in style accuracy. Through a detailed human evaluation with 90,000 ratings, we also show that T-STAR has up to 50% fewer hallucinations compared to state-of-the-art TST models.
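The two-component design above implies a simple two-stage inference flow: encode the source sentence into a style-agnostic AMR graph, then decode the graph back into text in the target style. The sketch below shows only that flow; the checkpoint names are hypothetical placeholders, not released T-STAR models.

```python
# A sketch of the two-stage inference flow (text -> AMR -> text).
# The checkpoint names are hypothetical placeholders, not real releases.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

def load(name):
    return AutoTokenizer.from_pretrained(name), AutoModelForSeq2SeqLM.from_pretrained(name)

def run(tokenizer, model, text, max_new_tokens=256):
    ids = tokenizer(text, return_tensors="pt").input_ids
    out = model.generate(ids, max_new_tokens=max_new_tokens, num_beams=4)
    return tokenizer.decode(out[0], skip_special_tokens=True)

# Stage 1: encode the source sentence into a style-agnostic AMR graph.
amr_tok, amr_model = load("path/to/text-to-amr")         # hypothetical checkpoint
# Stage 2: decode the AMR back into text in the target style.
txt_tok, txt_model = load("path/to/amr-to-text-target")  # hypothetical checkpoint

source = "ain't nobody got time for that"
amr_graph = run(amr_tok, amr_model, source)
print(run(txt_tok, txt_model, amr_graph))
```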
Figurative language generation is the task of reformulating a given text in the desired figure of speech while still being faithful to the original context. We take the first step towards multi-figurative language modelling by providing a benchmark for the automatic generation of five common figurative forms in English. We train mFLAG with a scheme for multi-figurative language pre-training on top of BART and a mechanism for injecting the target figurative information into the encoder; this enables the generation of text with the target figurative form from another figurative form without parallel figurative-figurative sentence pairs. Our approach outperforms all strong baselines. We also offer some qualitative analysis and reflections on the relationship between the different figures of speech.
Data augmentation is an important component in the robustness evaluation of natural language processing (NLP) models and in enhancing the diversity of the data they are trained on. In this paper, we present NL-Augmenter, a new participatory Python-based natural language augmentation framework that supports the creation of both transformations (modifications to the data) and filters (data splits according to specific features). We describe the framework and an initial set of 117 transformations and 23 filters for a variety of natural language tasks. We demonstrate the efficacy of NL-Augmenter by using several of its transformations to analyze the robustness of popular natural language models. The infrastructure, datacards, and robustness analysis results are publicly available on the NL-Augmenter repository (\url{https://github.com/gem-benchmark/nl-augmenter}).
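To make the two abstractions concrete, here is a minimal standalone sketch of a transformation and a filter. The real framework defines its own base classes and registration conventions (see the repository), so the class and method names here are illustrative, not NL-Augmenter's exact interface.

```python
# A standalone sketch of "transformation" vs. "filter"; not the framework's API.
import random

class ButterFingersTransformation:
    """Transformation: perturb characters to simulate typing mistakes."""
    def __init__(self, prob: float = 0.05, seed: int = 0):
        self.prob, self.rng = prob, random.Random(seed)

    def generate(self, sentence: str) -> list[str]:
        noisy = [c if self.rng.random() > self.prob or not c.isalpha()
                 else self.rng.choice("abcdefghijklmnopqrstuvwxyz")
                 for c in sentence]
        return ["".join(noisy)]

class LengthFilter:
    """Filter: keep only examples within a token-length range."""
    def __init__(self, min_len: int = 3, max_len: int = 40):
        self.min_len, self.max_len = min_len, max_len

    def filter(self, sentence: str) -> bool:
        return self.min_len <= len(sentence.split()) <= self.max_len

t, f = ButterFingersTransformation(), LengthFilter()
print(t.generate("Data augmentation improves robustness evaluation."))
print(f.filter("Too short."))
```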
State-of-the-art poetry generation systems are often complex. They either consist of task-specific model pipelines, incorporate prior knowledge in the form of manually created constraints or both. In contrast, end-to-end models would not suffer from the overhead of having to model prior knowledge and could learn the nuances of poetry from data alone, reducing the degree of human supervision required. In this work, we investigate end-to-end poetry generation conditioned on styles such as rhyme, meter, and alliteration. We identify and address lack of training data and mismatching tokenization algorithms as possible limitations of past attempts. In particular, we successfully pre-train and release ByGPT5, a new token-free decoder-only language model, and fine-tune it on a large custom corpus of English and German quatrains annotated with our styles. We show that ByGPT5 outperforms other models such as mT5, ByT5, GPT-2 and ChatGPT, while also being more parameter efficient and performing favorably compared to humans. In addition, we analyze its runtime performance and introspect the model's understanding of style conditions. We make our code, models, and datasets publicly available.
Text style transfer is an important task in natural language generation, aiming to control certain attributes in the generated text, such as politeness, emotion, humor, and many others. It has a long history in the field of natural language processing and has recently regained significant attention thanks to the promising performance brought by deep neural models. In this paper, we present a systematic survey of the research on neural text style transfer, spanning over 100 representative articles since the first neural text style transfer work in 2017. We discuss the task formulation, existing datasets and subtasks, evaluation, as well as the rich methodologies in the presence of parallel and non-parallel data. We also provide discussions on a variety of important topics regarding the future development of this task. Our curated paper list is at https://github.com/zhijing-jin/text_style_transfer_survey
Controllable Text Generation (CTG) is an emerging area in the field of natural language generation (NLG). It is regarded as crucial for the development of advanced text generation technologies that are more natural and better meet the specific constraints of practical applications. In recent years, methods using large-scale pre-trained language models (PLMs), in particular the widely used transformer-based PLMs, have become a new paradigm of NLG, allowing generation of more diverse and fluent text. However, due to the lower level of interpretability of deep neural networks, the controllability of these methods needs to be guaranteed. To this end, controllable text generation using transformer-based PLMs has become a rapidly growing yet challenging new research hotspot. A diverse range of approaches have emerged in the last 3-4 years, targeting different CTG tasks that may require different types of controlled constraints. In this paper, we present a systematic critical review of the common tasks, main approaches, and evaluation methods in this area. Finally, we discuss the challenges that the field is facing, and put forward various promising future directions. To the best of our knowledge, this is the first survey paper to summarize CTG techniques from the perspective of PLMs. We hope it can help researchers in related fields to quickly track the academic frontier, providing them with a landscape of the area and a roadmap for future research.
This work aims to employ natural language generation (NLG) to rapidly generate items for English language learning applications: this requires both language models capable of generating fluent, high-quality English and the ability to control the output of the generation to match the requirements of the relevant items. We experiment with deep pretrained models for this task, developing novel methods for controlling items for factors relevant to language learning: diverse sentences for different proficiency levels and argument structure to test grammar. Human evaluation demonstrates high grammaticality scores for all models (3.4 and above out of 4), and higher length (24%) and complexity (9%) over the baseline for the advanced proficiency model. Our results show that we can achieve strong performance while adding additional control to ensure diverse, tailored content for individual users.
In this work, we explore "prompt tuning," a simple yet effective mechanism for learning "soft prompts" to condition frozen language models to perform specific downstream tasks. Unlike the discrete text prompts used by GPT-3, soft prompts are learned through backpropagation and can be tuned to incorporate signals from any number of labeled examples. Our end-to-end learned approach outperforms GPT-3's few-shot learning by a large margin. More remarkably, through ablations on model size using T5, we show that prompt tuning becomes more competitive with scale: as models exceed billions of parameters, our method "closes the gap" and matches the strong performance of model tuning (where all model weights are tuned). This finding is especially relevant because large models are costly to share and serve and the ability to reuse one frozen model for multiple downstream tasks can ease this burden. Our method can be seen as a simplification of the recently proposed "prefix tuning" of Li and Liang (2021) and we provide a comparison to this and other similar approaches. Finally, we show that conditioning a frozen model with soft prompts confers benefits in robustness to domain transfer and enables efficient "prompt ensembling."
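The core mechanism is compact enough to sketch: a small matrix of trainable "soft prompt" embeddings is prepended to the frozen model's input embeddings, and only that matrix receives gradient updates. The PyTorch sketch below is a simplification of the paper's T5 setup (it ignores attention masks and task-specific heads), not its exact implementation.

```python
# A minimal PyTorch sketch of soft prompt tuning: only the prompt matrix
# is trainable; every weight of the wrapped model stays frozen.
import torch
import torch.nn as nn

class SoftPromptWrapper(nn.Module):
    def __init__(self, frozen_model, embed_dim: int, prompt_len: int = 20):
        super().__init__()
        self.model = frozen_model
        for p in self.model.parameters():      # freeze every model weight
            p.requires_grad = False
        # The only trainable parameters: prompt_len learned embedding vectors.
        self.soft_prompt = nn.Parameter(torch.randn(prompt_len, embed_dim) * 0.02)

    def forward(self, input_embeds: torch.Tensor):
        # input_embeds: (batch, seq_len, embed_dim) from the frozen embedding table.
        batch = input_embeds.size(0)
        prompt = self.soft_prompt.unsqueeze(0).expand(batch, -1, -1)
        # Assumes the wrapped model accepts precomputed input embeddings.
        return self.model(inputs_embeds=torch.cat([prompt, input_embeds], dim=1))
```

In training, only `soft_prompt` is handed to the optimizer, which is why a single frozen model can serve many tasks, each paired with its own small prompt matrix.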
Authorship style transfer involves altering the style of text to match the style of some target author whilst preserving the semantic meaning of the original text. Existing approaches to unsupervised authorship style transfer like STRAP have largely focused on style transfer for target authors with many examples of their writing style through books, speeches, or other published works (Krishna et al., 2020). Due to this high-resource training data requirement (often greater than 100,000 words), these approaches are often only useful for style transfer to the style of published authors, politicians, or other well-known figures and authorship styles. In this paper, we attempt to perform low-resource authorship style transfer, a more challenging class of authorship style transfer where only a limited amount of text in the target author's style may exist. In our experiments, we specifically choose source and target authors from Reddit to perform style transfer over their Reddit posts, limiting ourselves to just 16 posts (on average $\approx$ 500 words) of the target author's style. We then propose a method for automatic evaluation on the low-resource authorship style transfer task utilizing authorship and style representation embeddings (Rivera-Soto et al., 2021; Wegmann et al., 2022). We evaluate our style transferred outputs with the proposed automatic evaluation method and find that our method, STYLL, is able to outperform STRAP and a comprehensive set of baselines.
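The proposed automatic evaluation relies on authorship/style representation embeddings. A minimal sketch of that idea is shown below: embed the transferred text and the target author's posts with a style encoder and compare them by cosine similarity. The checkpoint identifier is an assumption; substitute whichever authorship or style encoder you use (e.g., models from Rivera-Soto et al., 2021 or Wegmann et al., 2022).

```python
# A sketch of embedding-based evaluation: how close is the transferred text
# to the target author's style in a style-representation space?
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("AnnaWegmann/Style-Embedding")  # assumed checkpoint id

def style_similarity(transferred: str, target_author_posts: list[str]) -> float:
    target_vecs = encoder.encode(target_author_posts, convert_to_tensor=True)
    centroid = target_vecs.mean(dim=0, keepdim=True)   # target author's style centroid
    out_vec = encoder.encode([transferred], convert_to_tensor=True)
    return util.cos_sim(out_vec, centroid).item()

posts = ["tbh i never liked that show lol", "ngl the finale was mid"]
print(style_similarity("honestly, that series never appealed to me.", posts))
```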
Idiomatic expressions (IEs), characterized by their non-compositionality, are an important part of natural language. They have been a classical challenge to NLP, including for the pre-trained language models that drive today's state of the art. Prior work has identified deficiencies in their contextualized representations, owing to the underlying compositional paradigm of representation. In this work, we take a first-principles approach to build idiomaticity into models like BART, using an adapter as a lightweight non-compositional language expert trained on idiomatic sentences. The improved capability over the baseline (e.g., BART) is seen via intrinsic and extrinsic methods: homogeneity scores of the embedding clusters are 0.19 points higher, and gains of up to 25% are observed on idiom processing tasks such as IE sense disambiguation and span detection.
Current language models can generate high-quality text. Are they simply copying text they have seen before, or have they learned generalizable linguistic abstractions? To tease apart these possibilities, we introduce RAVEN, a suite of analyses for assessing the novelty of generated text, focusing on sequential structure (n-grams) and syntactic structure. We apply these analyses to four neural language models (an LSTM, a Transformer, Transformer-XL, and GPT-2). For local structure, such as individual dependencies, model-generated text is substantially less novel than our baseline of human-generated text from each model's test set. For larger-scale structure, such as overall sentence structure, model-generated text is as novel or even more novel than the human-generated baseline, but models still sometimes copy substantially, in some cases duplicating passages over 1,000 words long from the training set. We also perform extensive manual analysis, showing that GPT-2's novel text is usually well-formed morphologically and syntactically but has fairly frequent semantic issues (e.g., being self-contradictory).
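The sequential-structure side of such an analysis boils down to asking what fraction of generated n-grams never occur in the training corpus. The sketch below illustrates only that idea; it is a simplification for exposition, not the released RAVEN analysis suite.

```python
# A simplified sketch of n-gram novelty: the share of n-grams in generated
# text that never appear in the training corpus.
def ngrams(tokens, n):
    return zip(*(tokens[i:] for i in range(n)))

def novelty(generated: str, training_corpus: str, n: int = 4) -> float:
    train_ngrams = set(ngrams(training_corpus.lower().split(), n))
    gen = list(ngrams(generated.lower().split(), n))
    if not gen:
        return 0.0
    novel = sum(1 for g in gen if g not in train_ngrams)
    return novel / len(gen)

train = "the cat sat on the mat while the dog slept by the door"
gen = "the cat sat on the rug while the dog slept outside"
print(f"{novelty(gen, train, n=4):.2f} of 4-grams are novel")
```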
Large language models such as GPT-3 (Brown et al., 2020) can perform arbitrary tasks without undergoing fine-tuning after being prompted with only a few labeled examples. An arbitrary task can be reformulated as a natural language prompt, and a language model can be asked to generate the completion, indirectly performing the task in a paradigm known as prompt-based learning. To date, emergent prompt-based learning capabilities have mainly been demonstrated for unidirectional language models. However, bidirectional language models pre-trained on denoising objectives such as masked language modeling produce stronger learned representations for transfer learning. This motivates the possibility of prompting bidirectional models, but their pre-training objectives have made them largely incompatible with the existing prompting paradigm. We present SAP (Sequential Autoregressive Prompting), a technique that enables the prompting of bidirectional models. Using machine translation as a case study, we prompt the bidirectional mT5 model (Xue et al., 2021) with SAP and demonstrate that its few-shot and zero-shot translations outperform the few-shot translations of unidirectional models such as GPT-3 and XGLM (Lin et al., 2021), despite mT5 having approximately 50% fewer parameters. We further show SAP is effective on question answering and summarization. Our results demonstrate, for the first time, that prompt-based learning is an emergent property of a broader class of language models, not only unidirectional ones.
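The sequential prompting idea can be sketched as follows: repeatedly ask a span-corruption model such as mT5 to fill a sentinel placed after the text generated so far, append the prediction, and iterate. This is a rough simplification for illustration, not the paper's exact decoding procedure (which also handles stopping, scoring, and few-shot exemplars more carefully).

```python
# A rough sketch of sequential prompting with a span-corruption model:
# place a sentinel after the running text, generate, append, repeat.
from transformers import MT5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("google/mt5-small")
model = MT5ForConditionalGeneration.from_pretrained("google/mt5-small")

def sequential_prompt(prompt: str, steps: int = 5) -> str:
    text = prompt
    for _ in range(steps):
        ids = tokenizer(text + " <extra_id_0>", return_tensors="pt").input_ids
        out = model.generate(ids, max_new_tokens=8)
        piece = tokenizer.decode(out[0], skip_special_tokens=True)
        if not piece.strip():
            break
        text += " " + piece.strip()
    return text

print(sequential_prompt("Translate English to German: The weather is nice today. German:"))
```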
Idiomatic expressions (IEs) play an important role in natural language. In this paper, we study the task of idiomatic sentence paraphrasing (ISP), which aims to paraphrase a sentence containing an IE by replacing the IE with its literal interpretation. The lack of large-scale corpora with idiomatic-literal parallel sentences is a primary challenge for this task, and we consider two separate solutions. First, we propose an unsupervised approach to ISP that leverages an IE's contextual information and definition and does not require a parallel sentence training set. Second, we propose a weakly supervised approach using back-translation to jointly perform paraphrasing and generation of sentences with IEs, in order to enlarge the small-scale parallel sentence training dataset. Other significant by-products of this study include a model that replaces a literal phrase in a sentence with an IE to generate idiomatic expressions, and a large-scale parallel dataset with idiomatic/literal sentence pairs. The effectiveness of the proposed solutions over competitive baselines is seen in relative gains of over 5.16 points in BLEU, over 8.75 points, and over 19.57 points in SARI when the generated sentences are empirically validated on a parallel dataset using both automatic and manual evaluations. We also demonstrate the practical utility of ISP as a preprocessing step in En-De machine translation.
We explore the use of large pretrained language models as few-shot semantic parsers. The goal in semantic parsing is to generate a structured meaning representation given a natural language input. However, language models are trained to generate natural language. To bridge the gap, we use language models to paraphrase inputs into a controlled sublanguage resembling English that can be automatically mapped to the target meaning representation. Our results demonstrate that, with only a small amount of data and very little code to convert into English-like representations, our blueprint for rapidly bootstrapping semantic parsers leads to surprisingly effective performance on multiple community tasks, greatly exceeding baseline methods also trained on the same limited data.
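The overall recipe has two steps: (1) few-shot prompt a language model to paraphrase the user's request into a constrained, English-like canonical form, and (2) map that canonical form to the meaning representation with a small amount of deterministic code. The sketch below illustrates this division of labor; the canonical grammar, the prompt, and the regex mapping are toy assumptions, not the paper's actual grammars or datasets.

```python
# A toy sketch: an LM paraphrases into a controlled canonical form (step 1),
# and simple code maps that form to a structured query (step 2).
import re

FEW_SHOT_PROMPT = """Rewrite the request in canonical form.
Request: do I have anything scheduled with Ana tomorrow?
Canonical: find event whose attendee is "Ana" and whose date is "tomorrow"
Request: {request}
Canonical:"""

def canonical_to_mr(canonical: str) -> dict:
    """Deterministically map the controlled sublanguage to a structured query."""
    mr = {"op": "find_event", "constraints": {}}
    for field, value in re.findall(r'whose (\w+) is "([^"]+)"', canonical):
        mr["constraints"][field] = value
    return mr

# In practice the canonical line would come from the prompted language model;
# here one is hard-coded to show the second, deterministic step.
canonical = 'find event whose attendee is "Megan" and whose date is "Friday"'
print(canonical_to_mr(canonical))
# {'op': 'find_event', 'constraints': {'attendee': 'Megan', 'date': 'Friday'}}
```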