The dominant paradigm for neural text generation is left-to-right decoding from autoregressive language models. Constrained or controllable generation under complex lexical constraints, however, requires foresight to plan ahead feasible future paths. Drawing inspiration from the A* search algorithm, we propose NeuroLogic A*esque, a decoding algorithm that incorporates heuristic estimates of future cost. We develop lookahead heuristics that are efficient for large-scale language models, making our method a drop-in replacement for common techniques such as beam search and top-k sampling. To enable constrained generation, we build on NeuroLogic decoding (Lu et al., 2021), combining its flexibility in incorporating logical constraints with A*esque estimates of future constraint satisfaction. Our approach outperforms competitive baselines on five generation tasks, and achieves new state-of-the-art performance on table-to-text generation, constrained machine translation, and keyword-constrained generation. The improvements are particularly notable on tasks that require complex constraint satisfaction, or in few-shot or zero-shot settings. NeuroLogic A*esque illustrates the power of decoding for improving and enabling new capabilities of large-scale language models.
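The lookahead idea is concrete enough to sketch. Below is a minimal Python rendering of one decoding step, not the authors' implementation: candidate tokens are rescored by adding an estimate of future constraint satisfaction to the model log-probability. The `heuristic` callable and the keyword-coverage toy are illustrative assumptions.

```python
from typing import Callable, List, Sequence

def lookahead_step(
    logprobs: Sequence[float],                # log P(token | prefix) per vocab id
    prefix: List[int],
    heuristic: Callable[[List[int]], float],  # estimate of future constraint satisfaction
    top_k: int = 20,
    lam: float = 1.0,
) -> int:
    """Pick the next token by model score plus an A*-style lookahead term.

    Rather than greedily maximizing log P(token | prefix), each of the top-k
    candidates is rescored as logprob + lam * heuristic(prefix + [token]),
    where the heuristic estimates how well a continuation from that candidate
    can still satisfy the lexical constraints (the future-cost estimate).
    """
    top = sorted(range(len(logprobs)), key=lambda t: logprobs[t], reverse=True)[:top_k]
    return max(top, key=lambda t: logprobs[t] + lam * heuristic(prefix + [t]))

# Toy heuristic for keyword constraints: fraction of required token ids used.
REQUIRED = {7, 42}

def coverage(seq: List[int]) -> float:
    return len(REQUIRED & set(seq)) / len(REQUIRED)
```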
Large pretrained language models generate fluent text but are notoriously hard to controllably sample from. In this work, we study constrained sampling from such language models: generating text that satisfies user-defined constraints, while maintaining fluency and the model's performance in a downstream task. We propose MuCoLa -- a sampling procedure that combines the log-likelihood of the language model with arbitrary (differentiable) constraints in a single energy function, and then generates samples in a non-autoregressive manner. Specifically, it initializes the entire output sequence with noise and follows a Markov chain defined by Langevin Dynamics using the gradients of the energy function. We evaluate MuCoLa on text generation with soft and hard constraints, as well as their combinations, obtaining significant improvements over competitive baselines for toxicity avoidance, sentiment control, and keyword-guided generation.
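A minimal sketch of the sampling loop the abstract describes, under the assumption that the energy gradient is given as a black box (in MuCoLa it would come from the LM log-likelihood plus constraint terms over soft token embeddings); the toy Gaussian energy is purely illustrative.

```python
import numpy as np

def langevin_sample(energy_grad, dim, steps=500, step_size=0.01, seed=0):
    """Langevin-dynamics sampling over a continuous sequence representation.

    Update rule: e <- e - step_size * grad E(e) + sqrt(2 * step_size) * noise.
    In MuCoLa the variable would be the soft token embeddings of the entire
    output sequence and E would combine the LM negative log-likelihood with
    differentiable constraint terms; here the energy is left abstract.
    """
    rng = np.random.default_rng(seed)
    e = rng.normal(size=dim)  # initialize the whole output with noise
    for _ in range(steps):
        noise = rng.normal(size=dim)
        e = e - step_size * energy_grad(e) + np.sqrt(2 * step_size) * noise
    return e

# Toy energy E(e) = 0.5 * ||e - mu||^2, whose stationary law is N(mu, I).
mu = np.array([2.0, -1.0])
sample = langevin_sample(lambda e: e - mu, dim=2)
```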
Recent advances in deep neural language models, combined with the capacity of large-scale datasets, have accelerated the development of natural language generation systems that produce fluent and coherent text (to varying degrees of success) across a multitude of tasks and application contexts. However, controlling the output of these models for desired user needs remains an open challenge. This is crucial not only for customizing the content and style of the generated language, but also for their safe and reliable deployment in the real world. We present an extensive survey on the emerging topic of constrained neural language generation, in which we formally define and categorize natural language generation problems by distinguishing between conditions and constraints (the latter being testable conditions on the output text rather than the input), present constrained text generation tasks, and review existing methods and evaluation metrics for constrained text generation. Our aim is to highlight recent progress and trends in this emerging field, informing on the most promising directions and limitations towards advancing the state of the art of constrained neural language generation research.
Controllable Text Generation (CTG) is an emerging area in the field of natural language generation (NLG). It is regarded as crucial for the development of advanced text generation technologies that are more natural and better meet the specific constraints of practical applications. In recent years, methods using large-scale pre-trained language models (PLMs), in particular the widely used transformer-based PLMs, have become a new paradigm of NLG, allowing generation of more diverse and fluent text. However, due to the lower level of interpretability of deep neural networks, the controllability of these methods needs to be guaranteed. To this end, controllable text generation using transformer-based PLMs has become a rapidly growing yet challenging new research hotspot. A diverse range of approaches has emerged in the last 3-4 years, targeting different CTG tasks that may require different types of controlled constraints. In this paper, we present a systematic critical review of the common tasks, main approaches, and evaluation methods in this area. Finally, we discuss the challenges that the field is facing and put forward various promising future directions. To the best of our knowledge, this is the first survey paper to summarize CTG techniques from the perspective of PLMs. We hope it can help researchers in related fields to quickly track the academic frontier, providing them with a landscape of the area and a roadmap for future research.
Neural text generation models, like those used for summarization and translation, generate high-quality outputs, but often concentrate around a mode when what we really want is a diverse set of options. We present a search algorithm to construct lattices encoding a massive number of generation options. First, we restructure decoding as a best-first search, which explores the space differently from beam search and improves efficiency by avoiding pruning paths. Second, we revisit the idea of hypothesis recombination: we can identify similar generation candidates during search and merge them as an approximation. On both summarization and machine translation, we show that our algorithm encodes hundreds to thousands of diverse options that remain grammatical and high-quality into one linear-sized lattice. This algorithm provides a foundation for building downstream generation applications on top of massive-scale diverse outputs.
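The two ingredients, best-first exploration and hypothesis recombination, can be sketched together. This is a rough approximation rather than the paper's algorithm: the `expand` callable stands in for the model's next-token distribution, and merging hypotheses that share their last few tokens is one simple choice of recombination signature.

```python
import heapq
from typing import Callable, Dict, List, Tuple

Prefix = Tuple[int, ...]

def best_first_lattice(
    expand: Callable[[Prefix], List[Tuple[int, float]]],  # prefix -> [(token, logprob)]
    eos: int,
    sig_len: int = 3,
    max_pops: int = 1000,
):
    """Best-first search with hypothesis recombination (a rough sketch).

    Hypotheses are popped in score order instead of being expanded level by
    level as in beam search, and no path is ever pruned.  Two hypotheses whose
    last `sig_len` tokens match are treated as interchangeable: the later one
    is merged into the earlier lattice node rather than re-expanded.
    """
    merged: Dict[Prefix, List[Prefix]] = {}   # signature -> prefixes merged there
    finished: List[Tuple[float, Prefix]] = []
    frontier: List[Tuple[float, Prefix]] = [(0.0, ())]  # (negated score, prefix)
    for _ in range(max_pops):
        if not frontier:
            break
        neg_score, prefix = heapq.heappop(frontier)
        sig = prefix[-sig_len:]
        if prefix and sig in merged:          # recombination: same recent context
            merged[sig].append(prefix)
            continue
        merged.setdefault(sig, [])
        for tok, lp in expand(prefix):
            if tok == eos:
                finished.append((-neg_score + lp, prefix + (tok,)))
            else:
                heapq.heappush(frontier, (neg_score - lp, prefix + (tok,)))
    return finished, merged
```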
Pre-trained language models have been successful in natural language generation (NLG) tasks. While various decoding methods have been employed, they often produce suboptimal results. We first present an empirical analysis of three NLG tasks: summarization, machine translation, and constrained text generation. We find that selecting the best output from the results of multiple decoding methods can significantly improve performance. To further improve reranking for NLG tasks, we propose a novel method, PairReranker, which uses a single encoder and a pairwise loss function to jointly encode a source input and a pair of candidates and compare them. Experiments on three NLG tasks demonstrate the effectiveness and flexibility of PairReranker, showing strong results compared with previous baselines. In addition, PairReranker can generalize to significantly improve GPT-3 (text-davinci-003) results (e.g., 24.55% on CommonGen and 11.35% on WMT18 zh-en), even though our rerankers were not trained with any GPT-3 candidates.
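One way to read pairwise reranking operationally: a learned comparator judges two candidates at a time and a round-robin tally picks the winner. The sketch below assumes such a `better` scorer exists; the actual PairReranker trains a single encoder with a pairwise loss, and this tally is just one plausible inference scheme over it.

```python
from typing import Callable, List

def pairwise_rerank(source: str, candidates: List[str],
                    better: Callable[[str, str, str], bool]) -> str:
    """Select the best candidate via round-robin pairwise comparisons.

    `better(source, a, b)` stands in for the trained pairwise scorer that
    jointly encodes the source with both candidates; every candidate earns a
    point for each comparison it wins, and the overall winner is returned.
    """
    wins = [0] * len(candidates)
    for i in range(len(candidates)):
        for j in range(i + 1, len(candidates)):
            if better(source, candidates[i], candidates[j]):
                wins[i] += 1
            else:
                wins[j] += 1
    return candidates[max(range(len(candidates)), key=wins.__getitem__)]
```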
Human reasoning can often be understood as an interplay between two systems: the intuitive and associative ("System 1") and the deliberative and logical ("System 2"). Neural sequence models, which have been increasingly successful at performing complex, structured tasks, exhibit the advantages and failure modes of System 1: they are fast and learn patterns from data, but are often inconsistent and incoherent. In this work, we seek a lightweight, training-free means of improving existing System 1-like sequence models by adding System 2-inspired logical reasoning. We explore several variations of this theme in which candidate generations from a neural sequence model are examined by a symbolic reasoning module, which can accept or reject them. Our approach uses neural inference to mediate between the neural System 1 and the logical System 2. Results in robust story generation and grounded instruction following show that this approach can increase the coherence and accuracy of neurally-based generations.
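The accept/reject mediation loop can be sketched in a few lines. This is a schematic of the generate-then-verify pattern only, with `propose` and `is_consistent` standing in for the neural generator and the symbolic checker.

```python
from typing import Callable, Optional

def dual_system_generate(
    propose: Callable[[], str],            # System 1: fast neural proposal
    is_consistent: Callable[[str], bool],  # System 2: symbolic/logical check
    max_tries: int = 10,
) -> Optional[str]:
    """Accept/reject mediation between a neural generator and a symbolic checker.

    Candidates are drawn from the fast, pattern-matching neural model and kept
    only if they pass the logical consistency check against the story or task
    state; otherwise another candidate is requested.
    """
    for _ in range(max_tries):
        candidate = propose()
        if is_consistent(candidate):
            return candidate
    return None  # no acceptable candidate within the budget
```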
Publicly available, large pretrained language models (LMs) generate text with remarkable quality, but only sequentially from left to right. As a result, they are not immediately applicable to generation tasks that break the unidirectional assumption, such as paraphrasing or text infilling, necessitating task-specific supervision. In this paper, we present Reflective Decoding, a novel unsupervised algorithm that allows for direct application of unidirectional LMs to non-sequential tasks. Our two-step approach requires no supervision or even parallel corpora, only two off-the-shelf pretrained LMs in opposite directions: forward and backward. First, in the contextualization step, we use the LMs to generate ensembles of past and future contexts which collectively capture the input (e.g., the source sentence of a paraphrase). Second, in the reflection step, we condition on these "context ensembles", generating outputs that are compatible with them. Comprehensive empirical results demonstrate that Reflective Decoding outperforms strong unsupervised baselines on both paraphrasing and abductive text infilling, significantly narrowing the gap between unsupervised and supervised methods. Reflective Decoding surpasses multiple supervised baselines on a variety of metrics, including human evaluation.
Despite considerable advances in neural language modeling, it remains an open question what the best decoding strategy is for text generation from a language model (e.g., to generate a story). The counter-intuitive empirical observation is that even though the use of likelihood as a training objective leads to high-quality models for a broad range of language understanding tasks, maximization-based decoding methods such as beam search lead to degeneration: output text that is bland, incoherent, or gets stuck in repetitive loops. To address this we propose Nucleus Sampling, a simple but effective method to draw considerably higher-quality text out of neural language models than previous decoding strategies. Our approach avoids text degeneration by truncating the unreliable tail of the probability distribution, sampling from the dynamic nucleus of tokens containing the vast majority of the probability mass. To properly examine current maximization-based and stochastic decoding methods, we compare generations from each of these methods to the distribution of human text along several axes such as likelihood, diversity, and repetition. Our results show that (1) maximization is an inappropriate decoding objective for open-ended text generation, (2) the probability distributions of the best current language models have an unreliable tail which needs to be truncated during generation, and (3) Nucleus Sampling is currently the best available decoding strategy for generating long-form text that is both high-quality, as measured by human evaluation, and as diverse as human-written text.
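Nucleus (top-p) sampling is compact enough to sketch directly; the following is a straightforward NumPy rendering of the truncate-and-renormalize step, not the authors' exact code.

```python
import numpy as np

def nucleus_sample(logits: np.ndarray, p: float = 0.9, rng=None) -> int:
    """Sample from the smallest set of tokens whose cumulative probability >= p.

    The unreliable low-probability tail is truncated, and the remaining
    "nucleus" is renormalized before sampling.
    """
    if rng is None:
        rng = np.random.default_rng()
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    order = np.argsort(probs)[::-1]
    cumulative = np.cumsum(probs[order])
    cutoff = int(np.searchsorted(cumulative, p)) + 1  # smallest nucleus covering p
    nucleus = order[:cutoff]
    nucleus_probs = probs[nucleus] / probs[nucleus].sum()
    return int(rng.choice(nucleus, p=nucleus_probs))
```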
Neural text generation models can suffer from a low-diversity problem. Various decoding strategies and training-based methods have been proposed to promote diversity by exploiting only contextual features, but they rarely consider incorporating syntactic-structure clues. In this work, we propose using linguistic annotations, namely part-of-speech (POS) tags, to guide text generation. In detail, we introduce a POS Guided Softmax to explicitly model two posterior probabilities: (i) the next POS tag, and (ii) the next token, drawn from the vocabulary of the target POS. A POS Guided Sampling strategy is further proposed to address the low-diversity problem by enriching the diversity of POS tags. Extensive experiments and human evaluations show that, compared with existing state-of-the-art methods, our POS Guided Softmax and Sampling (POSG) can generate more diverse text while maintaining comparable quality.
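The factorized softmax amounts to a two-stage draw, sketched below in simplified form; the paper models both distributions with learned heads rather than the explicit dictionaries assumed here.

```python
import numpy as np

def pos_guided_sample(pos_probs, token_probs_by_pos, rng=None):
    """Two-stage draw: first the next POS tag, then a token of that POS.

    pos_probs:          dict mapping POS tag -> P(next POS | context)
    token_probs_by_pos: dict mapping POS tag -> {token: P(token | context, POS)}
    Both distributions are assumed normalized.  Diversifying the POS draw
    diversifies the surface text while each token still fills a well-formed
    syntactic slot.
    """
    if rng is None:
        rng = np.random.default_rng()
    tags = list(pos_probs)
    tag = tags[rng.choice(len(tags), p=[pos_probs[t] for t in tags])]
    tokens = list(token_probs_by_pos[tag])
    tok = tokens[rng.choice(len(tokens), p=[token_probs_by_pos[tag][w] for w in tokens])]
    return tag, tok

# Toy usage with a two-tag grammar.
pos = {"NOUN": 0.5, "VERB": 0.5}
vocab = {"NOUN": {"cat": 0.6, "dog": 0.4}, "VERB": {"runs": 1.0}}
print(pos_guided_sample(pos, vocab))
```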
Text generation is of great importance to many natural language processing applications. However, maximization-based decoding methods (e.g., beam search) for neural language models often lead to degenerate solutions: the generated text is unnatural and contains undesirable repetitions. Existing approaches introduce stochasticity via sampling or modify training objectives to decrease the probabilities of certain tokens (e.g., unlikelihood training), but they often lead to solutions that lack coherence. In this work, we show that an underlying reason for model degeneration is the anisotropic distribution of token representations. We present a contrastive solution: (i) SimCTG, a contrastive training objective to calibrate the model's representation space, and (ii) a decoding method, contrastive search, to encourage diversity while maintaining coherence in the generated text. Extensive experiments and analyses on three benchmarks from two languages demonstrate that our proposed approach significantly outperforms current state-of-the-art text generation methods, as evaluated by both human and automatic metrics.
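A single step of contrastive search can be sketched as a confidence term minus a degeneration penalty. The representation inputs below are illustrative assumptions; in practice they are the model's hidden states.

```python
import numpy as np

def contrastive_step(probs, candidate_reps, context_reps, alpha=0.6, k=5):
    """One step of contrastive search (a simplified sketch).

    probs:          next-token probabilities under the model
    candidate_reps: representation vector per vocabulary token (2-D array)
    context_reps:   representations of the already-generated tokens
    Among the top-k candidates, pick the token maximizing
      (1 - alpha) * P(token | context) - alpha * max cosine sim to the context,
    i.e. model confidence penalized by similarity to previous tokens, which
    counteracts the repetition driven by anisotropic representations.
    """
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    top = np.argsort(probs)[::-1][:k]

    def score(t):
        penalty = max(cos(candidate_reps[t], h) for h in context_reps)
        return (1 - alpha) * probs[t] - alpha * penalty

    return int(max(top, key=score))
```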
With the increasing ability of large language models (LLMs), in-context learning (ICL) has become a new paradigm for natural language processing (NLP), where LLMs make predictions based only on contexts augmented with a few training examples. Exploring ICL to evaluate and extrapolate the ability of LLMs has become a new trend. In this paper, we aim to survey and summarize the progress, challenges, and future directions of ICL. We first present a formal definition of ICL and clarify its relation to related studies. Then, we organize and discuss advanced techniques of ICL, including training strategies, prompting strategies, and so on. Finally, we present the challenges of ICL and provide potential directions for further research. We hope our work can encourage more research on uncovering how ICL works and on improving ICL in future work.
As AI systems become increasingly powerful and pervasive, there are growing concerns about machines' morality, or lack thereof. Yet teaching morality to machines is a formidable task: morality remains among the most intensely debated questions in humanity, let alone for AI. Existing AI systems deployed to millions of users, however, are already making decisions loaded with moral implications, which poses a seemingly impossible challenge: teaching machines moral sense while humanity continues to grapple with it. To explore this challenge, we introduce Delphi, an experimental framework based on deep neural networks trained directly on descriptive ethical judgments, e.g., that "helping a friend" is generally good, while "helping a friend spread fake news" is not. Empirical results provide new insights into the promises and limits of machine ethics. Delphi demonstrates strong generalization capabilities in the face of novel moral situations, while off-the-shelf neural network models exhibit markedly poorer judgment, including unjust biases, confirming the need for explicitly teaching machines moral sense. Yet Delphi is not perfect, exhibiting susceptibility to pervasive biases and inconsistencies. Despite that, we demonstrate positive use cases of the imperfect Delphi, including using it as a component model within other imperfect AI systems. Importantly, we interpret the operationalization of Delphi in light of prominent ethical theories, which leads us to important future research questions.
As major progress is made in open-ended text generation, measuring how close machine-generated text is to human language remains a critical open problem. We introduce MAUVE, a comparison measure for open-ended text generation, which directly compares the learnt distribution of a text generation model to the distribution of human-written text using divergence frontiers. MAUVE scales to modern text generation models by computing information divergences in a quantized embedding space. Through an extensive empirical study on three open-ended generation tasks, we find that MAUVE identifies known properties of generated text, scales naturally with model size, and correlates with human judgments, with fewer restrictions than existing distributional evaluation metrics.
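A single point of the divergence comparison can be sketched as a KL divergence between cluster histograms. This simplified version assumes k-means centers are given and omits the mixture frontier and the final area summarization that define the actual MAUVE score.

```python
import numpy as np

def quantized_kl(human_emb, model_emb, centers):
    """KL divergence between quantized embedding histograms (one frontier point).

    Both embedding sets (n x d arrays) are assigned to the nearest of the given
    k-means `centers`; the resulting cluster histograms P (human) and Q (model)
    are compared with KL(P || Q).  The full MAUVE score additionally sweeps
    mixtures of P and Q and summarizes the divergence frontier by its area.
    """
    def histogram(emb):
        assign = ((emb[:, None, :] - centers[None]) ** 2).sum(-1).argmin(axis=1)
        counts = np.bincount(assign, minlength=len(centers)) + 1  # smoothing
        return counts / counts.sum()

    p, q = histogram(human_emb), histogram(model_emb)
    return float((p * np.log(p / q)).sum())
```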
The word alignment task, despite its prominence in the era of statistical machine translation (SMT), is niche and under-explored today. In this two-part tutorial, we argue for the continued relevance of word alignment. The first part provides a historical background to word alignment as a core component of the traditional SMT pipeline. We zero in on GIZA++, an unsupervised, statistical word aligner with surprising longevity. Jumping forward to the era of neural machine translation (NMT), we show how insights from word alignment inspired the attention mechanism fundamental to present-day NMT. The second part shifts to a survey approach. We cover neural word aligners, showing the slow but steady progress towards surpassing GIZA++ performance. Finally, we cover present-day applications of word alignment, from cross-lingual annotation projection to improving translation.
This work applies Minimum Bayes Risk (MBR) decoding to optimize diverse automated metrics of translation quality. Automatic metrics in machine translation have made great progress recently. In particular, neural metrics fine-tuned on human ratings (e.g., BLEURT or COMET) outperform surface metrics in terms of correlation with human judgments. Our experiments show that the combination of a neural translation model with a neural reference-based metric, BLEURT, leads to significant improvements in automatic and human evaluations. This improvement is obtained with translations that differ from the classical beam-search output: these translations have much lower likelihood and are less favored by surface metrics like BLEU.
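MBR decoding reduces to an expected-utility argmax over a candidate pool, which is easy to sketch; in the setting the abstract studies, `utility` would be BLEURT (or any chosen metric).

```python
from typing import Callable, Sequence

def mbr_decode(candidates: Sequence[str], utility: Callable[[str, str], float]) -> str:
    """Minimum Bayes Risk decoding over a pool of sampled translations.

    Each candidate is scored by its average utility (e.g., BLEURT) against all
    other candidates, which act as pseudo-references; the candidate with the
    highest expected utility is returned instead of the most likely one.
    """
    def expected_utility(hyp: str) -> float:
        refs = [c for c in candidates if c is not hyp]
        return sum(utility(hyp, r) for r in refs) / max(len(refs), 1)

    return max(candidates, key=expected_utility)
```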
The rapid development and application of natural language generation (NLG) techniques has revolutionized the field of automatic text production. However, these techniques are still limited in their ability to produce human-like text that is truly reasonable and informative. In this paper, we explore the importance of NLG being guided by knowledge, in order to convey human-like reasoning through language generation. We propose ten goals for intelligent NLG systems to pursue, and briefly review the achievements of NLG techniques guided by knowledge and reasoning. We conclude by envisioning future directions and challenges in the pursuit of these goals.
Creating what-if stories requires reasoning about prior statements and the possible outcomes of changed conditions. One can easily generate coherent endings under a new condition, but it is challenging for current systems to do so with only minimal changes to the original story. A major challenge is therefore the trade-off between generating a logical story and rewriting with minimal edits. In this paper, we propose EDUCAT, an editing-based unsupervised approach for counterfactual story rewriting. EDUCAT includes a target position detection strategy based on estimating the causal effects of the what-if conditions, which keeps the causally invariant parts of the story. EDUCAT then generates the story under fluency, coherence, and minimal-edit constraints. We also propose a new metric to alleviate the shortcomings of current automatic metrics and better evaluate the trade-off. We evaluate EDUCAT on a public counterfactual story rewriting benchmark. Experiments show that EDUCAT achieves the best trade-off among unsupervised SOTA methods according to both automatic and human evaluation. The resources of EDUCAT are available at: https://github.com/jiangjiechen/educat.
The noisy channel model has been shown to be particularly effective in neural machine translation (NMT). However, recent approaches such as beam search and rerank (BSR) incur significant computational overhead during inference, making real-world applications infeasible. We aim to build an amortized noisy-channel NMT model, such that greedily decoding from it generates translations that maximize the same reward as translations generated using BSR. We attempt three approaches: knowledge distillation, one-step-deviation imitation learning, and Q-learning. The first approach obtains the noisy-channel signal from a pseudo-corpus, and the latter two aim to optimize directly towards the noisy-channel MT reward. All three approaches speed up inference by one to two orders of magnitude. For all three approaches, the generated translations fail to achieve rewards comparable to BSR, but the translation quality, as approximated by BLEU, is similar to the quality of BSR-produced translations.
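The reward being amortized is the standard noisy-channel objective, sketched schematically below; the three component scorers are assumed to be given as callables, and the interpolation weights l1, l2 are illustrative.

```python
def noisy_channel_score(x, y, log_p_fwd, log_p_rev, log_p_lm, l1=1.0, l2=1.0):
    """Noisy-channel reward for a translation y of source x (a schematic form).

    score(y) = log p(y|x) + l1 * log p(x|y) + l2 * log p(y)
    BSR reranks beam candidates with this score at inference time; the
    amortization approaches in the abstract instead train a direct model whose
    greedy output aims to maximize the same reward.
    """
    return log_p_fwd(y, x) + l1 * log_p_rev(x, y) + l2 * log_p_lm(y)
```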
End-to-end (E2E) task-oriented dialogue (ToD) systems are prone to fall into the so-called 'likelihood trap', resulting in generated responses which are dull, repetitive, and often inconsistent with dialogue history. Comparing ranked lists of multiple generated responses against the 'gold response' (from training data) reveals a wide diversity in response quality, with many good responses placed lower in the ranked list. The main challenge, addressed in this work, is then how to reach beyond greedily generated system responses, that is, how to obtain and select such high-quality responses from the list of overgenerated responses at inference without availability of the gold response. To this end, we propose a simple yet effective reranking method which aims to select high-quality items from the lists of responses initially overgenerated by the system. The idea is to use any sequence-level (similarity) scoring function to divide the semantic space of responses into high-scoring versus low-scoring partitions. At training, the high-scoring partition comprises all generated responses whose similarity to the gold response is higher than the similarity of the greedy response to the gold response. At inference, the aim is to estimate the probability that each overgenerated response belongs to the high-scoring partition, given only previous dialogue history. We validate the robustness and versatility of our proposed method on the standard MultiWOZ dataset: our methods improve a state-of-the-art E2E ToD system by 2.4 BLEU, 3.2 ROUGE, and 2.8 METEOR scores, achieving new peak results. Additional experiments on the BiTOD dataset and human evaluation further ascertain the generalisability and effectiveness of the proposed framework.
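The partition labels used at training time follow directly from the abstract's description and can be sketched as below; `sim` stands in for whatever sequence-level similarity function is chosen.

```python
from typing import Callable, List

def partition_labels(gold: str, greedy: str, responses: List[str],
                     sim: Callable[[str, str], float]) -> List[int]:
    """Build binary training labels for the reranker (a rough sketch).

    A generated response is 'high-scoring' (label 1) if it is closer to the
    gold response than the greedy response is; at inference a classifier
    estimates this label from the dialogue history alone.
    """
    threshold = sim(greedy, gold)
    return [1 if sim(r, gold) > threshold else 0 for r in responses]
```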