Neural text generation models, such as those used for summarization and translation, produce high-quality outputs, but often concentrate around a mode when what we really want is a diverse set of options. We present a search algorithm to construct lattices encoding a massive number of generation options. First, we restructure decoding as best-first search, which explores the space differently than beam search and improves efficiency by avoiding pruning paths. Second, we revisit the idea of hypothesis recombination: we can identify similar generation candidates during search and merge them as an approximation. On both summarization and machine translation, we show that our algorithm encodes hundreds to thousands of diverse options that remain grammatical and high-quality into one linear-sized lattice. This algorithm provides a foundation for building downstream generation applications on top of massive-scale diverse outputs.
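A minimal sketch of the two ingredients described above, best-first search plus hypothesis recombination into a lattice, using a toy next-token distribution in place of a real model. The scoring, the merge criterion (matching n-token suffix), and all names are illustrative assumptions, not the paper's implementation:

```python
import heapq
from itertools import count

# Toy next-token distribution standing in for a real seq2seq model.
# Returns (token, log_prob) pairs for the current prefix.
def next_token_logprobs(prefix):
    vocab = {"the": -0.4, "a": -1.2, "cat": -0.9, "sat": -1.0, "<eos>": -1.6}
    return vocab.items()

def lattice_decode(max_expansions=200, suffix_n=2):
    tiebreak = count()
    # Best-first frontier ordered by negated score (heapq is a min-heap).
    frontier = [(0.0, next(tiebreak), ("<bos>",))]
    # Recombination table: hypotheses sharing the same last-n-token suffix
    # are merged, so the search graph becomes a lattice instead of a tree.
    merged = {}
    finished = []
    expansions = 0
    while frontier and expansions < max_expansions:
        neg_score, _, hyp = heapq.heappop(frontier)
        expansions += 1
        for token, lp in next_token_logprobs(hyp):
            new_hyp = hyp + (token,)
            new_score = -neg_score + lp
            if token == "<eos>":
                finished.append((new_score, new_hyp))
                continue
            key = new_hyp[-suffix_n:]            # similarity criterion
            if key in merged:
                merged[key].append(new_hyp)      # merge into an existing lattice node
                continue
            merged[key] = [new_hyp]
            heapq.heappush(frontier, (-new_score, next(tiebreak), new_hyp))
    return finished, merged

paths, lattice_nodes = lattice_decode()
print(len(paths), "finished paths;", len(lattice_nodes), "lattice nodes")
```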
Current abstractive summarization models either suffer from a lack of clear interpretability or provide incomplete rationales by only highlighting parts of the source document. To this end, we propose the Summarization Program (SP), an interpretable modular framework consisting of an (ordered) list of binary trees, each encoding the step-by-step generative process of an abstractive summary sentence from the source document. A Summarization Program contains one root node per summary sentence, and a distinct tree connects each summary sentence (root node) to the document sentences (leaf nodes) from which it is derived, with the connecting nodes containing intermediate generated sentences. Edges represent the different modular operations involved in summarization, such as sentence fusion, compression, and paraphrasing. We first propose an efficient best-first search method over neural modules, SP-Search, that identifies SPs for human summaries by directly optimizing ROUGE scores. Next, using these programs as automatic supervision, we propose seq2seq models that generate Summarization Programs, which are then executed to obtain the final summaries. We demonstrate that SP-Search effectively represents the generative process behind human summaries using modules that are typically faithful to their intended behavior. We also conduct a simulation study to show that Summarization Programs improve the interpretability of summarization models by allowing humans to better simulate model reasoning. Summarization Programs constitute a promising step toward interpretable and modular abstractive summarization, a complex task previously addressed primarily by black-box end-to-end neural systems. Our code is available at https://github.com/swarnaHub/SummarizationPrograms
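To make the structure concrete, here is a hypothetical sketch of one Summarization Program tree: leaves hold document sentences, internal nodes hold intermediate generations, and each node's operation labels the modular edge (fusion, compression, paraphrasing). The class layout and the stand-in modules are assumptions for illustration, not the released code:

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Placeholder neural modules: each maps one or two sentences to a new sentence.
def compress(s: str) -> str:   return s            # stand-in for a compression model
def paraphrase(s: str) -> str: return s            # stand-in for a paraphrasing model
def fuse(a: str, b: str) -> str: return a + " " + b  # stand-in for sentence fusion

@dataclass
class SPNode:
    sentence: Optional[str] = None          # set for leaves (document sentences)
    op: Optional[str] = None                # "fuse" | "compress" | "paraphrase"
    children: List["SPNode"] = field(default_factory=list)

    def execute(self) -> str:
        if not self.children:               # leaf: a source document sentence
            return self.sentence
        outs = [c.execute() for c in self.children]
        if self.op == "fuse":
            return fuse(outs[0], outs[1])
        return {"compress": compress, "paraphrase": paraphrase}[self.op](outs[0])

# One tree per summary sentence; executing the root yields that sentence.
tree = SPNode(op="compress",
              children=[SPNode(op="fuse",
                               children=[SPNode(sentence="Doc sentence 3."),
                                         SPNode(sentence="Doc sentence 7.")])])
print(tree.execute())
```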
The dominant paradigm for neural text generation is left-to-right decoding from autoregressive language models. Constrained or controllable generation under complex lexical constraints, however, requires foresight to plan ahead feasible future paths. Drawing inspiration from the A* search algorithm, we propose NeuroLogic A*esque, a decoding algorithm that incorporates heuristic estimates of future cost. We develop lookahead heuristics that are efficient for large-scale language models, making our method a drop-in replacement for common techniques such as beam search and top-k sampling. To enable constrained generation, we build on NeuroLogic decoding (Lu et al., 2021), combining its flexibility in incorporating logical constraints with A*esque estimates of future constraint satisfaction. Our approach outperforms competitive baselines on five generation tasks and achieves new state-of-the-art performance on table-to-text generation, constrained machine translation, and keyword-constrained generation. The improvements are particularly notable on tasks that require complex constraint satisfaction, or in few-shot or zero-shot settings. NeuroLogic A*esque illustrates the power of decoding for improving and enabling new capabilities of large-scale language models.
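A schematic sketch of the A*esque idea: each candidate is ranked by its log-probability so far plus a heuristic estimate of future constraint satisfaction obtained from a cheap greedy rollout. The toy vocabulary, rollout depth, and penalty are illustrative assumptions, not the paper's exact heuristics:

```python
import math

VOCAB = {"the": 0.35, "robot": 0.25, "dances": 0.15, "sings": 0.25}
CONSTRAINT = "dances"   # lexical constraint the output must contain

def logp(token):
    return math.log(VOCAB[token])

def future_estimate(prefix, depth=3):
    """Greedy rollout heuristic: if cheap continuations already contain the
    constraint there is no future cost; otherwise charge an optimistic cost
    for still having to insert it."""
    rollout = list(prefix)
    for _ in range(depth):
        rollout.append(max(VOCAB, key=VOCAB.get))
    return 0.0 if CONSTRAINT in rollout else logp(CONSTRAINT)

def astar_esque_step(beams, k=2):
    scored = []
    for prefix, score in beams:
        for tok in VOCAB:
            cand = prefix + [tok]
            # f(cand) = g (log-prob so far) + h (estimated future cost)
            g = score + logp(tok)
            scored.append((cand, g, g + future_estimate(cand)))
    scored.sort(key=lambda x: x[2], reverse=True)   # keep the best-f candidates
    return [(c, g) for c, g, _ in scored[:k]]

beams = [([], 0.0)]
for _ in range(4):
    beams = astar_esque_step(beams)
print(beams[0][0])   # the constraint word is pulled forward by the heuristic
```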
Current abstractive summarization systems present important weaknesses which prevent their deployment in real-world applications, such as the omission of relevant information and the generation of factual inconsistencies (also known as hallucinations). At the same time, automatic evaluation metrics such as CTC scores have been recently proposed that exhibit a higher correlation with human judgments than traditional lexical-overlap metrics such as ROUGE. In this work, we intend to close the loop by leveraging the recent advances in summarization metrics to create quality-aware abstractive summarizers. Namely, we propose an energy-based model that learns to re-rank summaries according to one or a combination of these metrics. We experiment using several metrics to train our energy-based re-ranker and show that it consistently improves the scores achieved by the predicted summaries. Nonetheless, human evaluation results show that the re-ranking approach should be used with care for highly abstractive summaries, as the available metrics are not yet sufficiently reliable for this purpose.
Natural Language Generation (NLG) has improved exponentially in recent years thanks to the development of sequence-to-sequence deep learning technologies such as Transformer-based language models. This advancement has led to more fluent and coherent NLG, leading to improved development in downstream tasks such as abstractive summarization, dialogue generation and data-to-text generation. However, it is also apparent that deep learning based generation is prone to hallucinate unintended text, which degrades the system performance and fails to meet user expectations in many real-world scenarios. To address this issue, many studies have been presented in measuring and mitigating hallucinated texts, but these have never been reviewed in a comprehensive manner before. In this survey, we thus provide a broad overview of the research progress and challenges in the hallucination problem in NLG. The survey is organized into two parts: (1) a general overview of metrics, mitigation methods, and future directions; and (2) an overview of task-specific research progress on hallucinations in the following downstream tasks, namely abstractive summarization, dialogue generation, generative question answering, data-to-text generation, machine translation, and visual-language generation. This survey serves to facilitate collaborative efforts among researchers in tackling the challenge of hallucinated texts in NLG.
Stack Overflow is one of the most popular programming communities where developers can seek help for the problems they encounter. However, if inexperienced developers fail to describe their problems clearly, it is difficult for them to attract sufficient attention and obtain the expected answers. We propose M3NSCT5, a novel approach to automatically generate multiple post titles from a given code snippet. Developers may use the generated titles to find closely related posts and complete their problem descriptions. M3NSCT5 employs the CodeT5 backbone, a pre-trained Transformer model with excellent language understanding and generation ability. To alleviate the ambiguity issue, where the same code snippet could be aligned with different titles under different contexts, we propose a maximal marginal multiple nucleus sampling strategy to generate multiple high-quality and diverse title candidates at a time for developers to choose from. We build a large-scale dataset containing 890,000 question posts covering eight programming languages to validate the effectiveness of M3NSCT5. Automatic evaluation results on BLEU and ROUGE metrics show the superiority of M3NSCT5 over six state-of-the-art baseline models. Moreover, a human evaluation with trustworthy results also demonstrates the great potential of our approach for real-world application.
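The selection step can be sketched as follows: over-generate candidate titles with nucleus sampling, then greedily pick a subset with a maximal-marginal-relevance-style criterion that trades model score against redundancy with already-selected titles. The Jaccard similarity and the trade-off weight below are stand-in assumptions:

```python
def token_overlap(a, b):
    """Crude Jaccard similarity over token sets, standing in for a real similarity."""
    ta, tb = set(a.split()), set(b.split())
    return len(ta & tb) / max(1, len(ta | tb))

def maximal_marginal_select(candidates, k=3, lam=0.7):
    """candidates: list of (title, model_score). Greedily pick k titles that
    balance quality (score) against redundancy with already-picked titles."""
    selected = []
    pool = list(candidates)
    while pool and len(selected) < k:
        def mmr(item):
            title, score = item
            redundancy = max((token_overlap(title, s) for s, _ in selected), default=0.0)
            return lam * score - (1 - lam) * redundancy
        best = max(pool, key=mmr)
        selected.append(best)
        pool.remove(best)
    return [t for t, _ in selected]

# Suppose these came from repeated nucleus sampling of a title generator:
samples = [("How to sort a dict by value in Python", 0.91),
           ("Sorting a dictionary by its values", 0.88),
           ("Why does my Python sort raise TypeError", 0.74),
           ("Custom key functions for sorted()", 0.70)]
print(maximal_marginal_select(samples))
```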
Extractive summarization produces summaries by identifying and concatenating the most important sentences in a document. Since most summarization datasets do not come with gold labels indicating whether document sentences are summary-worthy, different labeling algorithms have been proposed to infer oracle extracts for model training. In this work, we identify two flaws with the widely used greedy labeling approach: it delivers suboptimal and deterministic oracles. To alleviate both issues, we propose a simple yet effective labeling algorithm that creates soft, expectation-based sentence labels. We define a new learning objective for extractive summarization that incorporates learning signals from multiple oracle summaries, and show that this is equivalent to estimating the oracle expectation for each document sentence. Without any architectural modifications, the proposed labeling scheme achieves superior performance on a variety of summarization benchmarks across domains and languages, in both supervised and zero-shot settings.
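A small sketch of one way such soft, expectation-based labels could be computed: each sentence's label is the score-weighted fraction of oracle extracts that include it, rather than a hard 0/1 from a single greedy oracle. The weighting scheme is an assumption for illustration:

```python
def soft_sentence_labels(num_sentences, oracles):
    """oracles: list of (sentence_index_set, rouge_score) pairs, e.g. the top-k
    extracts found by search. Returns one soft label per document sentence:
    the score-weighted probability that the sentence belongs to an oracle."""
    total = sum(score for _, score in oracles)
    labels = [0.0] * num_sentences
    for indices, score in oracles:
        for i in indices:
            labels[i] += score / total
    return labels

# Three candidate oracle extracts for a 6-sentence document, with their ROUGE:
oracles = [({0, 2}, 0.42), ({0, 3}, 0.40), ({2, 3}, 0.35)]
print(soft_sentence_labels(6, oracles))
# Sentence 0 appears in two strong oracles -> high soft label;
# sentence 5 appears in none -> label 0.
```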
Summarization based on text extraction is inherently limited, but generation-style abstractive methods have proven challenging to build. In this work, we propose a fully data-driven approach to abstractive sentence summarization. Our method utilizes a local attention-based model that generates each word of the summary conditioned on the input sentence. While the model is structurally simple, it can easily be trained end-to-end and scales to a large amount of training data. The model shows significant performance gains on the DUC-2004 shared task compared with several strong baselines.
This work applies Minimum Bayes Risk (MBR) decoding to optimize diverse automated metrics of translation quality. Automatic metrics in machine translation have made tremendous progress recently. In particular, neural metrics fine-tuned on human ratings (e.g., BLEURT or COMET) outperform surface metrics in terms of correlation with human judgements. Our experiments show that combining a neural translation model with a neural reference-based metric, BLEURT, leads to significant improvements in both automatic and human evaluations. This improvement is obtained with translations that differ from classical beam-search output: these translations have much lower likelihood and are less favored by surface metrics like BLEU.
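MBR decoding itself is compact enough to sketch: sample candidate translations, score every candidate against the others (acting as pseudo-references) with a utility metric, and return the candidate with the highest expected utility. A token-level F1 stands in for BLEURT, which would require the trained metric:

```python
def utility(hyp, ref):
    """Stand-in for a learned metric such as BLEURT: here, token-level F1."""
    h, r = set(hyp.split()), set(ref.split())
    common = len(h & r)
    if common == 0:
        return 0.0
    p, rec = common / len(h), common / len(r)
    return 2 * p * rec / (p + rec)

def mbr_decode(candidates):
    """Pick the candidate maximizing expected utility against all others."""
    def expected_utility(hyp):
        others = [c for c in candidates if c is not hyp]
        return sum(utility(hyp, ref) for ref in others) / len(others)
    return max(candidates, key=expected_utility)

# Candidates would come from sampling the translation model:
cands = ["the cat sat on the mat",
         "a cat sat on the mat",
         "the cat is sitting on a mat",
         "completely unrelated output"]
print(mbr_decode(cands))
```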
Pre-trained language models have been successful in natural language generation (NLG) tasks. While various decoding methods have been employed, they often produce suboptimal results. We first present an empirical analysis of three NLG tasks: summarization, machine translation, and constrained text generation. We find that selecting the best output from the results of multiple decoding methods can significantly improve performance. To further improve reranking for NLG tasks, we propose a novel method, PairReranker, which uses a single encoder and a pairwise loss function to jointly encode a source input and a pair of candidates and compare them. Experiments on three NLG tasks demonstrate the effectiveness and flexibility of PairReranker, showing strong results compared with previous baselines. In addition, PairReranker can generalize to significantly improve GPT-3 (text-davinci-003) results (e.g., 24.55% on CommonGen and 11.35% on WMT18 zh-en), even though our rerankers are not trained with any GPT-3 candidates.
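At inference time the pairwise idea can be sketched as a round-robin tournament: a pairwise scorer compares every pair of candidates given the source, and the candidate with the most wins is returned. The scorer below is a crude placeholder; the actual method jointly encodes the source and the candidate pair with a single trained encoder:

```python
from itertools import combinations

def pairwise_scorer(source, cand_a, cand_b):
    """Placeholder for the trained pairwise model: returns +1 if cand_a is
    judged better than cand_b for this source, else -1. Here: prefer the
    candidate sharing more words with the source."""
    overlap = lambda c: len(set(c.split()) & set(source.split()))
    return 1 if overlap(cand_a) >= overlap(cand_b) else -1

def pair_rerank(source, candidates):
    wins = {c: 0 for c in candidates}
    for a, b in combinations(candidates, 2):
        if pairwise_scorer(source, a, b) > 0:
            wins[a] += 1
        else:
            wins[b] += 1
    return max(candidates, key=wins.get)

src = "summarize: the committee approved the budget on friday"
cands = ["the committee approved the budget",
         "budget talks continue",
         "a committee met on friday"]
print(pair_rerank(src, cands))
```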
Despite considerable advances in neural language modeling, it remains an open question what the best decoding strategy is for text generation from a language model (e.g. to generate a story). The counter-intuitive empirical observation is that even though the use of likelihood as training objective leads to high quality models for a broad range of language understanding tasks, maximization-based decoding methods such as beam search lead to degeneration: output text that is bland, incoherent, or gets stuck in repetitive loops. To address this we propose Nucleus Sampling, a simple but effective method to draw considerably higher quality text out of neural language models than previous decoding strategies. Our approach avoids text degeneration by truncating the unreliable tail of the probability distribution, sampling from the dynamic nucleus of tokens containing the vast majority of the probability mass. To properly examine current maximization-based and stochastic decoding methods, we compare generations from each of these methods to the distribution of human text along several axes such as likelihood, diversity, and repetition. Our results show that (1) maximization is an inappropriate decoding objective for open-ended text generation, (2) the probability distributions of the best current language models have an unreliable tail which needs to be truncated during generation and (3) Nucleus Sampling is currently the best available decoding strategy for generating long-form text that is both high-quality, as measured by human evaluation, and as diverse as human-written text. (Example prompt from the paper's demonstration: "In a shocking finding, scientist discovered a herd of unicorns living in a remote, previously unexplored valley, in the Andes Mountains. Even more surprising to the researchers was the fact that the unicorns spoke perfect English.")
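A minimal NumPy sketch of Nucleus (top-p) Sampling: sort the next-token distribution, keep the smallest prefix whose cumulative mass reaches p, renormalize, and sample. The vocabulary and probabilities are toy values:

```python
import numpy as np

def nucleus_sample(probs, p=0.9, rng=None):
    """Sample a token index from the smallest set of tokens whose cumulative
    probability mass is at least p (the 'nucleus'); the unreliable tail is cut."""
    rng = rng or np.random.default_rng()
    order = np.argsort(probs)[::-1]                  # tokens, most likely first
    cumulative = np.cumsum(probs[order])
    cutoff = np.searchsorted(cumulative, p) + 1      # size of the nucleus
    nucleus = order[:cutoff]
    renormalized = probs[nucleus] / probs[nucleus].sum()
    return rng.choice(nucleus, p=renormalized)

vocab = np.array(["the", "a", "unicorn", "valley", "spoke", "xyzzy"])
probs = np.array([0.40, 0.25, 0.15, 0.10, 0.07, 0.03])
idx = nucleus_sample(probs, p=0.9)
print(vocab[idx])   # "xyzzy" (tail mass) is never sampled at p=0.9
```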
Popular models such as Transformers and LSTMs use tokens as their unit of information. That is, each token is encoded into a vector representation, and those vectors are used directly in computation. However, humans often reason over spans of tokens (i.e., phrases) rather than their constituent tokens. In this paper we introduce Treeformer, an architecture inspired by the CKY algorithm and the Transformer, which learns a composition operator and a pooling function to construct hierarchical encodings for phrases and sentences. Our extensive experiments demonstrate the benefits of incorporating hierarchical structure into the Transformer, and show significant improvements over baseline Transformers on machine translation, abstractive summarization, and various natural language understanding tasks.
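A rough sketch of CKY-style hierarchical encoding: span representations are built bottom-up by a composition operator, with a pooling function aggregating the possible binary splits of each span. The tanh composition and max pooling below are illustrative choices, not the learned functions from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8
W = rng.normal(scale=0.1, size=(d, 2 * d))   # stand-in learned composition weights

def compose(left, right):
    """Composition operator: merge two adjacent span encodings into one."""
    return np.tanh(W @ np.concatenate([left, right]))

def cky_encode(token_vecs):
    """CKY-style chart: chart[(i, j)] encodes tokens i..j. Each span pools over
    all of its binary splits (max pooling is one plausible choice)."""
    n = len(token_vecs)
    chart = {(i, i): token_vecs[i] for i in range(n)}
    for length in range(1, n):
        for i in range(n - length):
            j = i + length
            splits = [compose(chart[(i, k)], chart[(k + 1, j)]) for k in range(i, j)]
            chart[(i, j)] = np.max(splits, axis=0)   # pooling function over splits
    return chart[(0, n - 1)]                          # sentence encoding

tokens = [rng.normal(size=d) for _ in range(5)]
print(cky_encode(tokens).shape)
```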
The word alignment task, despite its prominence in the era of statistical machine translation (SMT), is niche and under-explored today. In this two-part tutorial, we argue for the continued relevance of word alignment. The first part provides a historical background to word alignment as a core component of the traditional SMT pipeline. We zero in on GIZA++, an unsupervised, statistical word aligner with surprising longevity. Jumping forward to the era of neural machine translation (NMT), we show how insights from word alignment inspired the attention mechanism fundamental to present-day NMT. The second part shifts to a survey approach. We cover neural word aligners, showing the slow but steady progress towards surpassing GIZA++ performance. Finally, we cover the present-day applications of word alignment, from cross-lingual annotation projection to improving translation.
Recent advances in deep neural language models, combined with the capacity of large-scale datasets, have accelerated the development of natural language generation systems that produce fluent and coherent text (to various degrees of success) in a multitude of tasks and application contexts. However, controlling the output of these models for desired user needs remains an open challenge. This is crucial not only for customizing the content and style of the generated language, but also for their safe and reliable deployment in the real world. We present an extensive survey on the emerging topic of constrained neural language generation, in which we formally define and categorize natural language generation problems by distinguishing conditions from constraints (the latter being testable conditions on the output text rather than the input), present constrained text generation tasks, and review existing methods and evaluation metrics for constrained text generation. Our aim is to highlight recent progress and trends in this emerging field, informing on the most promising directions and limitations towards advancing the state of the art in constrained neural language generation research.
Due to exposure bias, most existing natural language generation (NLG) models trained via maximum-likelihood objectives yield suboptimal text at the inference stage. In this paper, to address this problem, we revisit the generate-then-rank framework and propose a joint generator-ranker (JGR) training algorithm for text generation tasks. In JGR, the generator model is trained by maximizing two objectives: the likelihood of the training corpus and the expected reward given by the ranker model. Meanwhile, the ranker model takes input samples from the generator model and learns to distinguish good samples from the generation pool. The generator and ranker models are alternately optimized until convergence. In the empirical study, the proposed JGR model achieves new state-of-the-art performance on five public benchmarks covering three popular generation tasks: summarization, question generation, and response generation. We will make our code, data, and models available at https://github.com/microsoft/AdvNLG.
In recent years, researchers have created and introduced a significant number of code generation models. Since human evaluation of every new model version is infeasible, the community has adopted automatic evaluation metrics such as BLEU to approximate human judgement. These metrics originate from the machine translation domain, and it is unclear whether they are applicable to code generation tasks and how well they agree with human evaluation on this task. There are also two metrics, CodeBLEU and RUBY, developed to estimate code similarity while taking source-code properties into account; however, there are hardly any studies on their agreement with human evaluation. Nevertheless, minimal differences in metric scores are still used to claim the superiority of some code generation models over others. In this paper, we present a study on the applicability of six metrics (BLEU, ROUGE-L, METEOR, ChrF, CodeBLEU, and RUBY) for evaluating code generation models. We conduct the study on two different code generation datasets and use human annotators to assess the quality of all models run on these datasets. The results indicate that, for the CoNaLa dataset of Python one-liners, none of the metrics can correctly emulate human judgement with >95% certainty if the difference in model scores is less than 5 points. For the HearthStone dataset, which consists of classes of a particular structure, a difference in model scores of at least 2 points is enough to claim the superiority of one model over another. Based on our findings, we derive several recommendations on using metrics to estimate model performance on the code generation task.
Minimum Bayes Risk decoding (MBR) has emerged as a promising decoding algorithm in neural machine translation. However, MBR performs poorly with label smoothing, which is surprising as label smoothing provides decent improvement with beam search and improves generality in various tasks. In this work, we show that the issue arises from an inconsistency between the token-level and sequence-level distributions under label smoothing. We demonstrate that even though label smoothing causes only a slight change at the token level, the sequence-level distribution is highly skewed. We coin the issue distributional over-smoothness. To address this issue, we propose a simple and effective method, Distributional Cooling MBR (DC-MBR), which manipulates the entropy of the output distributions by tuning down the softmax temperature. We theoretically prove the equivalence between pre-tuning the label smoothing factor and distributional cooling. Experiments on NMT benchmarks validate that distributional cooling improves MBR's efficiency and effectiveness in various settings.
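The cooling operation itself is simple to sketch: divide the logits by a temperature below 1 before the softmax, which lowers the entropy of the token-level distribution. The toy logits below just illustrate the sharpening effect:

```python
import numpy as np

def cooled_softmax(logits, temperature=0.5):
    """Softmax with temperature < 1 sharpens ('cools') the distribution."""
    z = logits / temperature
    z -= z.max()                      # numerical stability
    e = np.exp(z)
    return e / e.sum()

def entropy(p):
    return -(p * np.log(p + 1e-12)).sum()

logits = np.array([2.0, 1.0, 0.5, 0.1])
for t in (1.0, 0.5, 0.25):
    p = cooled_softmax(logits, t)
    print(f"T={t}: entropy={entropy(p):.3f}")
# Entropy drops as T decreases: the smoothed model's distribution is sharpened.
```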
Context: Stack Overflow is very helpful for software developers seeking answers to programming problems. Previous studies have shown that a growing number of questions are of low quality and thus obtain less attention from potential answerers. Gao et al. proposed an LSTM-based model (i.e., BiLSTM-CC) to automatically generate question titles from code snippets to improve question quality. However, using only the code snippets in the question body cannot provide sufficient information for title generation, and LSTMs cannot capture the long-range dependencies between tokens. Objective: This paper proposes CCBERT, a novel deep learning based model that aims to enhance the performance of question title generation by making full use of the bi-modal information of the entire question body. Method: CCBERT follows the encoder-decoder paradigm and uses CodeBERT to encode the question body into hidden representations, a stacked Transformer decoder to generate predicted tokens, and an additional copy attention layer to refine the output distribution. Both the encoder and decoder perform multi-head self-attention to better capture long-range dependencies. This paper builds a dataset containing around 200,000 high-quality questions, filtered from the data officially published by Stack Overflow, to verify the effectiveness of the CCBERT model. Results: CCBERT outperforms all baseline models on the dataset. Experiments on code-only and low-resource datasets also show the superiority of CCBERT, with less performance degradation. The human evaluation likewise shows the excellent performance of CCBERT with respect to both readability and correlation criteria.
We introduce a new distributed policy gradient algorithm and show that it outperforms existing reward-aware training procedures such as REINFORCE, minimum risk training (MRT), and proximal policy optimization (PPO) in terms of both training stability and generalization performance when optimizing machine translation models. The algorithm, which we call MAD (on account of using the mean absolute deviation in the importance weighting computation), has distributed data generators sampling multiple candidates per source sentence on worker nodes, while a central learner updates the policy. MAD depends crucially on two variance-reduction strategies: (1) a conditional reward normalization method that ensures each source sentence has both positive- and negative-reward translation examples, and (2) a new robust importance weighting scheme that acts as a conditional entropy regularizer. Experiments on a variety of translation tasks show that policies learned with the MAD algorithm perform well under both greedy decoding and beam search, and that the learned policies are sensitive to the specific reward used during training.
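The first variance-reduction strategy can be sketched directly: rewards are centered and scaled within each source sentence's sample pool, so that every source contributes both positive and negative learning signal. Scaling by the mean absolute deviation is an assumption here, echoing the algorithm's name; the exact formulation follows the paper:

```python
import numpy as np

def conditional_reward_normalization(rewards_per_source):
    """rewards_per_source: {source_id: list of rewards, one per sampled
    candidate}. Center (and scale) rewards within each source so every source
    yields both positive and negative examples regardless of its difficulty."""
    normalized = {}
    for src, r in rewards_per_source.items():
        r = np.asarray(r, dtype=float)
        centered = r - r.mean()
        mad = np.mean(np.abs(centered)) or 1.0   # mean absolute deviation
        normalized[src] = centered / mad
    return normalized

# Two source sentences with very different reward scales:
rewards = {"src_easy": [0.9, 0.85, 0.95, 0.8], "src_hard": [0.1, 0.3, 0.2, 0.15]}
for src, r in conditional_reward_normalization(rewards).items():
    print(src, np.round(r, 2))
# Each source now has rewards centered at 0 with comparable spread.
```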
The classic problem of constrained pathfinding is a well-studied yet challenging topic, with applications in various areas such as communication and transportation. The Weight Constrained Shortest Path Problem (WCSPP), the base form of constrained pathfinding with only one side constraint, aims to plan a cost-optimal path whose weight/resource usage is limited. Given the bi-criteria nature of the problem (i.e., dealing with both the cost and the weight of paths), methods addressing the WCSPP share some properties with bi-objective search. This paper leverages recent state-of-the-art techniques from both constrained pathfinding and bi-objective search, based on A*, and presents two exact solution approaches to the WCSPP, both capable of solving hard problem instances on very large graphs. We empirically evaluate the performance of our algorithms on a new set of large, realistic problem instances and show their advantage over the state-of-the-art algorithms in both time and space metrics. This paper also investigates the importance of priority queues in constrained search with A*. We show, through extensive experiments on both realistic and randomized graphs, how bucket-based queues without tie-breaking can effectively improve algorithmic performance in exhaustive bi-criteria searches.
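For the problem setting itself, here is a small sketch of an exhaustive label-setting search with dominance pruning for the WCSPP; it omits the A* heuristics and bucket queues that the paper's methods rely on:

```python
import heapq

def wcspp(graph, start, goal, weight_limit):
    """graph: {node: [(neighbor, cost, weight), ...]}. Returns the min-cost
    path from start to goal whose total weight stays within weight_limit."""
    # Priority queue over labels (cost, weight, node, path), ordered by cost.
    frontier = [(0, 0, start, [start])]
    # Expanded (cost, weight) labels per node, for dominance pruning.
    expanded = {}
    while frontier:
        cost, weight, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, weight, path   # first feasible goal pop is cost-optimal
        # Dominance pruning: skip if a prior label was no worse on both criteria.
        if any(c <= cost and w <= weight for c, w in expanded.get(node, [])):
            continue
        expanded.setdefault(node, []).append((cost, weight))
        for nbr, c, w in graph.get(node, []):
            if weight + w <= weight_limit:   # enforce the side constraint
                heapq.heappush(frontier, (cost + c, weight + w, nbr, path + [nbr]))
    return None

graph = {"s": [("a", 1, 5), ("b", 3, 1)],
         "a": [("t", 1, 5)],
         "b": [("t", 3, 1)]}
print(wcspp(graph, "s", "t", weight_limit=4))   # the cheap path is too heavy
```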