In task-oriented dialogs such as MultiWoZ (Budzianowski et al., 2018), an informative and/or successful system response needs to include necessary key information, such as the phone number of a hotel. Therefore, we hypothesize that by helping the model focus more on learning key quantities in the dialog, the model can generate more informative and helpful responses. In this paper, we propose a new training algorithm, Reinforced Language Modeling (RLM), that uses a fine-grained reward function and reinforcement learning to help the model focus on generating key quantities correctly at test time. Empirical results show that our proposed RLM achieves state-of-the-art performance on the inform rate, success rate, and combined score in MultiWoZ.
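The abstract does not spell out the reward design, but a minimal sketch of the idea, a fine-grained reward that credits correctly generated key quantities combined with a REINFORCE-style update, might look as follows. The key-value matching heuristic and the constant baseline are illustrative assumptions, not the paper's exact formulation.

```python
import torch

def key_quantity_reward(generated: str, key_values: list[str]) -> float:
    """Fine-grained reward: credit each key quantity (phone number, address, ...)
    that appears verbatim in the generated response."""
    hits = sum(1.0 for v in key_values if v.lower() in generated.lower())
    return hits / max(len(key_values), 1)   # in [0, 1]

def reinforce_loss(token_log_probs: torch.Tensor, reward: float,
                   baseline: float = 0.5) -> torch.Tensor:
    """REINFORCE-style loss for one sampled response.
    token_log_probs: log-probabilities of the sampled tokens, shape (seq_len,)."""
    advantage = reward - baseline            # constant baseline for variance reduction
    return -(advantage * token_log_probs.sum())

# Usage sketch: sample a response, score its key quantities, backpropagate.
log_probs = torch.randn(12, requires_grad=True).log_softmax(-1)  # placeholder log-probs
r = key_quantity_reward("The phone number is 01223 356555.", ["01223 356555"])
loss = reinforce_loss(log_probs, r)
loss.backward()
```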
This paper studies the exposure bias problem in task-oriented dialog (TOD) systems, where the content generated by the model over multiple turns drives the dialog context away from the ground-truth distribution seen at training time, introducing error propagation and harming the robustness of TOD systems. To bridge the gap between training and inference in multi-turn task-oriented dialog, we propose session-level sampling, which explicitly exposes the model to sampled generated content of the dialog context during training. In addition, we employ dropout-based consistency regularization with the masking strategy R-Mask to further improve the robustness and performance of the model. The proposed UBARv2 achieves state-of-the-art performance on the standardized evaluation benchmark MultiWOZ, and extensive experiments show the effectiveness of the proposed methods.
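As a rough illustration of the two ideas named here, session-level sampling (conditioning later turns on the model's own sampled outputs rather than the gold context) and dropout-based consistency regularization, the following sketch assumes a Hugging Face-style encoder-decoder model and tokenizer; it is not the authors' exact R-Mask procedure.

```python
import torch
import torch.nn.functional as F

def session_level_sampling_step(model, tokenizer, turns, optimizer, alpha=1.0):
    """One training step over a dialog session.
    `turns` is a list of (user_utterance, gold_response) pairs."""
    context, total_loss = "", 0.0
    for user, gold in turns:
        context += " <user> " + user
        enc = tokenizer(context, return_tensors="pt", truncation=True)
        labels = tokenizer(gold, return_tensors="pt", truncation=True).input_ids

        # Two stochastic forward passes (dropout on) for consistency regularization.
        out1 = model(**enc, labels=labels)
        out2 = model(**enc, labels=labels)
        consistency = F.kl_div(out1.logits.log_softmax(-1),
                               out2.logits.softmax(-1), reduction="batchmean")
        total_loss = total_loss + out1.loss + alpha * consistency

        # Session-level sampling: extend the context with the model's own sample,
        # not the gold response, so later turns see generated content.
        with torch.no_grad():
            sampled = model.generate(**enc, do_sample=True, max_new_tokens=60)
        context += " <system> " + tokenizer.decode(sampled[0], skip_special_tokens=True)

    optimizer.zero_grad()
    total_loss.backward()
    optimizer.step()
    return float(total_loss)
```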
Task-oriented dialogue systems aim to fulfill user goals through natural language interactions. They can be evaluated with human users, but this is not feasible at every iteration of the development phase. Simulated users could be an alternative, but their development is non-trivial. Therefore, researchers resort to offline metrics on existing human corpora, which are more practical and easily reproducible. Unfortunately, they are limited in reflecting the real performance of dialogue systems: BLEU, for example, correlates poorly with human judgment, and existing corpus-based metrics such as success rate ignore dialogue context mismatches. A reliable metric for task-oriented systems that generalizes well and correlates strongly with human judgment is still needed. In this paper, we propose using offline reinforcement learning for dialogue evaluation based on static corpora. Such an evaluator is typically called a critic and is used for policy optimization. We go one step further and show that an offline RL critic can be trained on the static corpus of any dialogue system as an external evaluator, allowing dialogue performance comparisons across various types of systems. This approach has the benefit of being model-agnostic while achieving a strong correlation with human judgment, which we confirm through an interactive user trial.
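A minimal sketch of the underlying idea, training a critic (state-value estimator) offline on logged dialogues and then using its scores to compare systems, might look as follows; the feature vectorization of dialogue states and the plain regression target are simplifying assumptions, not the paper's setup.

```python
import torch
import torch.nn as nn

class DialogueCritic(nn.Module):
    """V(s): predicts the expected return (e.g. task success) of a dialogue state."""
    def __init__(self, state_dim: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state).squeeze(-1)

def train_critic(critic, dataset, epochs=5, lr=1e-3):
    """Offline training: regress V(s) onto the observed return of each logged turn.
    `dataset` yields (state_vector, return) pairs from a static corpus."""
    opt = torch.optim.Adam(critic.parameters(), lr=lr)
    for _ in range(epochs):
        for state, ret in dataset:
            loss = (critic(state) - ret).pow(2).mean()
            opt.zero_grad(); loss.backward(); opt.step()
    return critic

def evaluate_system(critic, system_states) -> float:
    """Model-agnostic evaluation: average critic value over a system's visited states."""
    with torch.no_grad():
        return torch.stack([critic(s) for s in system_states]).mean().item()
```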
In spoken dialogue systems, we aim to deploy artificial intelligence to build automated dialogue agents that can converse with humans. Dialogue systems increasingly aim to go beyond merely imitating conversation, with these interactions also improving over time. In this survey, we give a broad overview of the methods developed over the years to build dialogue systems. Use cases range from task-based systems to open-domain chatbots, each motivating and requiring specific system designs. Starting from simple rule-based systems, research has moved toward increasingly complex architectures trained on massive corpora, such as deep learning systems. Motivated by the intuition of human-like conversation, progress has been made in incorporating emotion into natural language generators through reinforcement learning. While we see a trend of marginal improvements on certain metrics, we find that there is limited justification for the metrics and that evaluation practices are not uniform. To conclude, we flag these issues and highlight possible research directions.
This work combines information about the dialogue history, encoded by pre-trained models, with the meaning representation of the current system utterance to realize contextual language generation in task-oriented dialogues. We utilize the pre-trained multi-context ConveRT model for context representation in a model trained from scratch, and we leverage the immediately preceding user utterance for context generation in a model adapted from the pre-trained GPT-2. Experiments on the MultiWOZ dataset show that contextual information encoded by pre-trained models improves the performance of response generation in both automatic metrics and human evaluation. The presented contextual generator enables a greater variety of responses that better fit the ongoing dialogue. An analysis of context size shows that longer contexts do not automatically lead to better performance, but the immediately preceding user utterance plays an essential role in contextual generation. In addition, we propose a re-ranker based on the GPT generative model. Experiments show that responses selected by the re-ranker yield significant improvements on automatic metrics.
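A minimal sketch of a likelihood-based re-ranker using the standard Hugging Face GPT-2 interface is shown below; whether the paper's re-ranker scores candidates exactly this way is an assumption, and scoring the full context-plus-candidate sequence (rather than the candidate tokens only) is a simplification.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

@torch.no_grad()
def rerank(context: str, candidates: list[str], model, tokenizer) -> str:
    """Score each candidate response by GPT-2 log-likelihood of the context + candidate
    sequence and return the highest-scoring one."""
    best, best_score = candidates[0], float("-inf")
    for cand in candidates:
        ids = tokenizer(context + " " + cand, return_tensors="pt").input_ids
        out = model(ids, labels=ids)          # out.loss = mean token negative log-likelihood
        score = -out.loss.item()
        if score > best_score:
            best, best_score = cand, score
    return best

# Usage sketch with the public GPT-2 checkpoint:
# tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
# model = GPT2LMHeadModel.from_pretrained("gpt2").eval()
# best = rerank("i need a cheap hotel in the north .",
#               ["the alpha lodge is a cheap hotel in the north .", "okay ."],
#               model, tokenizer)
```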
Reinforcement learning (RL) has shown its potential for training dialogue policy agents to maximize cumulative user rewards. However, the reward can be very sparse: it is typically provided only at the end of a dialogue session, which leads to prohibitive interaction requirements for obtaining an acceptable dialogue agent. Unlike many efforts devoted to optimizing the policy and recovering the reward alternately, which suffer from easily getting stuck in local optima and model collapse, we decompose the adversarial training into two steps: 1) we integrate a pre-trained language model as a discriminator that judges whether the current system action is good enough with respect to the last user action (i.e., next action prediction); 2) the discriminator provides an extra local dense reward to guide the agent's exploration. Experimental results show that our method significantly improves the complete rate (~4.4%) and success rate (~8.0%) of the dialogue system.
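A simplified sketch of the two-step idea described here, a discriminator that scores whether a system action is a plausible next action given the last user action, whose score is then added as a local dense reward, could look like this; the vectorized action encoding and the reward-mixing weight are illustrative assumptions.

```python
import torch
import torch.nn as nn

class NextActionDiscriminator(nn.Module):
    """D(user_action, system_action) -> probability that the system action is a good
    response to the last user action (next-action prediction)."""
    def __init__(self, action_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2 * action_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1), nn.Sigmoid())

    def forward(self, user_act: torch.Tensor, sys_act: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([user_act, sys_act], dim=-1)).squeeze(-1)

def shaped_reward(disc, user_act, sys_act, session_reward: float,
                  done: bool, beta: float = 0.1) -> float:
    """Sparse session reward at the end of the dialog plus a local dense reward
    from the discriminator at every turn."""
    with torch.no_grad():
        dense = disc(user_act, sys_act).item()
    return (session_reward if done else 0.0) + beta * dense
```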
We present AARGH, an end-to-end task-oriented dialog system that combines retrieval and generative approaches in a single model, aiming to improve dialog management and the lexical diversity of outputs. The model features a new response selection method based on an action-aware training objective and a simplified single-encoder retrieval architecture, which allows us to build an end-to-end retrieval-augmented generation model in which retrieval and generation share most of their parameters. On the MultiWOZ dataset, we show that our approach produces more diverse outputs while maintaining or improving state tracking and context-to-response generation performance compared with state-of-the-art baselines.
Natural Language Generation (NLG) represents a large collection of tasks in the field of NLP. While many of these tasks have been tackled well by the cross-entropy (CE) loss, the task of dialog generation poses a few unique challenges for this loss function. First, CE loss assumes that for any given input, the only possible output is the one available as the ground truth in the training dataset. In general, this is not true for any task, as there can be multiple semantically equivalent sentences, each with a different surface form. This problem is exacerbated further for the dialog generation task, as there can be multiple valid responses (for a given context) that not only have different surface forms but are also not semantically equivalent. Second, CE loss does not take the context into consideration while processing the response; hence, it treats all ground truths with equal importance irrespective of the context. But we may want our final agent to avoid certain classes of responses (e.g. bland, non-informative, or biased responses) and give relatively higher weight to more context-specific responses. To circumvent these shortcomings of the CE loss, in this paper we propose a novel loss function, CORAL, that directly optimizes recently proposed estimates of human preference for generated responses. Using CORAL, we can train dialog generation models without assuming the non-existence of responses other than the ground truth. Also, the CORAL loss is computed based on both the context and the response. Extensive comparisons on two benchmark datasets show that the proposed methods outperform strong state-of-the-art baseline models of different sizes.
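As a hedged illustration of the general principle, optimizing a preference estimate of the (context, response) pair rather than a context-free likelihood, one could weight the sequence log-likelihood of a sampled response by a learned preference score, roughly as below. The preference-model interface and the baseline are assumptions, not the actual CORAL formulation.

```python
import torch

def preference_weighted_loss(token_log_probs: torch.Tensor,
                             preference_score: float,
                             baseline: float = 0.0) -> torch.Tensor:
    """Preference-weighted likelihood for one sampled (context, response) pair.
    token_log_probs:  log-probs of the sampled response tokens, shape (seq_len,).
    preference_score: scalar estimate of human preference for this response
                      given the context (higher = better)."""
    advantage = preference_score - baseline
    return -(advantage * token_log_probs.sum())

# Usage sketch: responses judged better than the baseline are reinforced, while
# bland or off-context responses (low preference) are pushed down or ignored.
log_probs = torch.randn(20, requires_grad=True).log_softmax(-1)
loss = preference_weighted_loss(log_probs, preference_score=0.8, baseline=0.5)
loss.backward()
```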
Transformer-based language models are able to generate fluent text and can be efficiently adapted to various natural language generation tasks. However, language models pre-trained on large unlabeled web text corpora have been shown to degenerate into toxic content and social-bias behaviors, hindering their safe deployment. Various detoxification methods have been proposed to mitigate language model toxicity; however, these methods struggle to detoxify language models when conditioned on prompts that contain specific social identities related to gender, race, or religion. In this study, we propose Reinforce-Detoxify, a reinforcement-learning-based method for mitigating toxicity in language models. We address the challenge of safety in language models and propose a new reward model that is able to detect toxic content and mitigate unintended bias towards social identities in toxicity prediction. Experiments show that the Reinforce-Detoxify method outperforms existing detoxification approaches on automatic evaluation metrics, indicating the ability of our approach to detoxify language models while being less prone to unintended bias towards social identities in generated content.
Controllable Text Generation (CTG) is an emerging area in the field of natural language generation (NLG). It is regarded as crucial for the development of advanced text generation technologies that are more natural and better meet the specific constraints of practical applications. In recent years, methods using large-scale pre-trained language models (PLMs), in particular the widely used transformer-based PLMs, have become a new paradigm of NLG, allowing generation of more diverse and fluent text. However, due to the lower level of interpretability of deep neural networks, the controllability of these methods needs to be guaranteed. To this end, controllable text generation using transformer-based PLMs has become a rapidly growing yet challenging research hotspot. A diverse range of approaches has emerged over the last 3-4 years, targeting different CTG tasks that may require different types of controlled constraints. In this paper, we present a systematic critical review of the common tasks, main approaches, and evaluation methods in this area. Finally, we discuss the challenges that the field is facing and put forward various promising future directions. To the best of our knowledge, this is the first survey paper to summarize CTG techniques from the perspective of PLMs. We hope it can help researchers in related fields to quickly track the academic frontier, providing them with a landscape of the area and a roadmap for future research.
Transformer, originally devised for natural language processing, has also achieved significant success in computer vision. Thanks to its strong expressive power, researchers are investigating ways to deploy transformers in reinforcement learning (RL), and transformer-based models have demonstrated their potential on representative RL benchmarks. In this paper, we collect and dissect recent advances in transforming RL by transformer (transformer-based RL, or TRL), in order to explore its development trajectory and future trend. We group existing developments into two categories, architecture enhancement and trajectory optimization, and examine the main applications of TRL in robotic manipulation, text-based games, navigation, and autonomous driving. For architecture enhancement, these methods consider how to apply the powerful transformer structure to RL problems under the traditional RL framework, modeling agents and environments much more precisely than deep RL methods, but they are still limited by the inherent defects of traditional RL algorithms, such as bootstrapping and the "deadly triad". For trajectory optimization, these methods treat RL problems as sequence modeling and train a joint state-action model over entire trajectories under the behavior cloning framework, which is able to extract policies from static datasets and fully use the long-sequence modeling capability of the transformer. Given these advancements, extensions and challenges in TRL are reviewed and proposals for future directions are discussed. We hope this survey can provide a detailed introduction to TRL and motivate future research in this rapidly developing field.
End-to-end task bots are typically learned over a static and usually limited-size corpus. However, when deployed in dynamic, changing, and open environments to interact with users, task bots tend to fail when confronted with data that deviate from the training corpus, i.e., out-of-distribution samples. In this paper, we study the problem of automatically adapting task bots to changing environments by learning from human-bot interactions with minimum or zero human annotations. We propose SL-AGENT, a novel self-learning framework for building end-to-end task bots. SL-AGENT consists of a dialog model and a pre-trained reward model to predict the quality of an agent response. It enables task bots to automatically adapt to changing environments by learning from the unlabeled human-bot dialog logs accumulated after deployment via reinforcement learning with the incorporated reward model. Experimental results on four well-studied dialog tasks show the effectiveness of SL-AGENT to automatically adapt to changing environments, using both automatic and human evaluations. We will release code and data for further research.
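A rough sketch of the self-learning loop described here, scoring unlabeled human-bot logs with a pre-trained reward model and using the scores to update the dialog model, is shown below. The reward-weighted likelihood update and the Hugging Face-style seq2seq interface are assumptions, not necessarily the exact RL procedure used by SL-AGENT.

```python
import torch

def self_learning_step(dialog_model, reward_model, tokenizer, logs, optimizer):
    """One adaptation step from unlabeled deployment logs.
    `logs` is a list of (context, bot_response) strings collected after deployment."""
    if not logs:
        return
    optimizer.zero_grad()
    total = 0.0
    for context, response in logs:
        with torch.no_grad():
            r = reward_model(context, response)      # assumed callable, quality in [0, 1]
        enc = tokenizer(context, return_tensors="pt", truncation=True)
        labels = tokenizer(response, return_tensors="pt", truncation=True).input_ids
        out = dialog_model(**enc, labels=labels)     # Hugging Face-style seq2seq interface
        total = total + r * out.loss                 # weight likelihood by predicted quality
    (total / len(logs)).backward()
    optimizer.step()
```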
We develop a new continual meta-learning method to address challenges in sequential multi-task learning. In this setting, the agent's goal is to quickly achieve high reward over any sequence of tasks. Prior meta-reinforcement-learning algorithms have demonstrated promising results in accelerating the acquisition of new tasks, but they require access to all tasks during training. Beyond simply transferring past experience to new tasks, our goal is to devise continual reinforcement learning algorithms that learn to learn, using their experience on previous tasks to learn new tasks more quickly. We introduce a new method, Continual Meta-Policy Search (CoMPS), that removes this limitation by meta-training incrementally, over each task in a sequence, without revisiting prior tasks. CoMPS continuously repeats two subroutines: learning a new task using RL, and using the experience from RL to perform fully offline meta-learning in preparation for subsequent task learning. We find that CoMPS outperforms prior continual learning and off-policy meta-reinforcement methods on several challenging sequences of continuous control tasks.
In this paper, we propose to formulate the task-oriented dialogue system as a purely natural language generation task, so as to fully leverage large-scale pre-trained models like GPT-2 and simplify complicated delexicalization preprocessing. However, directly applying this approach suffers heavily from dialogue entity inconsistency caused by the removal of delexicalized tokens, as well as catastrophic forgetting of the pre-trained model during fine-tuning, leading to unsatisfactory performance. To alleviate these problems, we design a novel GPT-Adapter-CopyNet network, which incorporates lightweight adapter and CopyNet modules into GPT-2 to achieve better performance on transfer learning and dialogue entity generation. Experimental results on the DSTC8 Track 1 benchmark and the MultiWOZ dataset demonstrate that our proposed approach significantly outperforms baseline models, with remarkable performance in both automatic and human evaluations.
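The CopyNet part of the described architecture can be illustrated with a standard copy-gate formulation, in which the final distribution mixes the generator's vocabulary distribution with a pointer distribution over context tokens; the tensor shapes and gate parameterization below are generic assumptions rather than the paper's exact module.

```python
import torch
import torch.nn as nn

class CopyGate(nn.Module):
    """Mixes the generator's vocabulary distribution with a copy distribution over
    the source/context tokens: p = g * p_vocab + (1 - g) * p_copy."""
    def __init__(self, hidden_dim: int):
        super().__init__()
        self.gate = nn.Linear(hidden_dim, 1)

    def forward(self, decoder_state, p_vocab, attn_weights, src_token_ids):
        # decoder_state: (batch, hidden), p_vocab: (batch, vocab),
        # attn_weights: (batch, src_len), src_token_ids: (batch, src_len)
        g = torch.sigmoid(self.gate(decoder_state))            # (batch, 1) copy gate
        p_copy = torch.zeros_like(p_vocab)
        p_copy.scatter_add_(1, src_token_ids, attn_weights)    # scatter attention mass onto vocab ids
        return g * p_vocab + (1.0 - g) * p_copy
```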
User simulators (USs) are commonly used to train task-oriented dialogue systems (DSs) via reinforcement learning. The interactions are often carried out at the semantic level for efficiency, but there is still a gap between semantic actions and natural language, which causes a mismatch between training and deployment environments. Incorporating a natural language generation (NLG) module with a US during training can partly address this problem. However, since the US's policy and NLG are optimized separately, these simulated user utterances may not be natural enough in a given context. In this work, we propose a generative transformer-based user simulator (GenTUS). GenTUS consists of an encoder-decoder structure, which means it can jointly optimize the user policy and natural language generation. GenTUS generates both semantic actions and natural language utterances, preserving interpretability and enhancing language variation. In addition, by representing the input and output as word sequences and by using a large pre-trained language model, we achieve generalizability in the feature representation. We evaluate GenTUS with automatic metrics and human evaluation. Our results show that GenTUS generates more natural language and is able to transfer to unseen ontologies in a zero-shot fashion. Furthermore, its behavior can be further shaped with reinforcement learning, opening the door to training specialized user simulators.
Recent advances in deep neural language models, combined with the capacity of large-scale datasets, have accelerated the development of natural language generation systems that produce fluent and coherent text (to varying degrees of success) in a multitude of tasks and application contexts. However, controlling the output of these models for desired user and task needs is still an open challenge. This is crucial not only for customizing the content and style of the generated language, but also for their safe and reliable deployment in the real world. We present an extensive survey on the emerging topic of constrained neural language generation, in which we formally define and categorize the problems of natural language generation by distinguishing conditions from constraints (the latter being testable conditions on the output text rather than the input), present constrained text generation tasks, and review existing methods and evaluation metrics for constrained text generation. Our aim is to highlight recent progress and trends in this emerging field, informing on the most promising directions and limitations towards advancing the state of the art of constrained neural language generation research.
Steering language generation towards objectives or away from undesired content has been a long-standing goal in utilizing language models (LMs). Recent work has demonstrated reinforcement learning and weighted decoding as effective approaches to achieve a higher level of language control and quality, each with its own pros and cons. In this work, we propose a novel critic decoding method for controlled language generation (CriticControl) that combines the strengths of reinforcement learning and weighted decoding. Specifically, we adopt the actor-critic framework to train an LM-steering critic from non-differentiable reward models. Similar to weighted decoding, our method freezes the language model and manipulates the output token distribution using the trained critic, improving training efficiency and stability. Evaluation of our method on three controlled generation tasks, namely topic control, sentiment control, and detoxification, shows that our approach generates more coherent and well-controlled texts than previous methods. In addition, CriticControl demonstrates superior generalization ability in zero-shot settings. Human evaluation studies also corroborate our findings.
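A hedged sketch of the weighted-decoding side of this idea, a frozen LM proposing a next-token distribution that a critic rescales, might look like the following; the per-token critic scores, the top-k restriction, and the weight `beta` are illustrative assumptions, not the exact CriticControl procedure.

```python
import torch

@torch.no_grad()
def critic_guided_step(lm_logits: torch.Tensor, critic_scores: torch.Tensor,
                       beta: float = 1.0, top_k: int = 50) -> int:
    """One decoding step: combine the frozen LM's next-token distribution with critic
    values estimating how well each candidate token serves the control objective.
    lm_logits:     (vocab,) logits from the frozen language model.
    critic_scores: (vocab,) critic values (e.g. expected reward if this token is emitted)."""
    log_p_lm = lm_logits.log_softmax(-1)
    combined = log_p_lm + beta * critic_scores     # weighted decoding in log space
    # Restrict to the LM's top-k candidates so the critic only reranks fluent options.
    _, topk_idx = log_p_lm.topk(top_k)
    masked = torch.full_like(combined, float("-inf"))
    masked[topk_idx] = combined[topk_idx]
    probs = masked.softmax(-1)
    return int(torch.multinomial(probs, 1))
```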
Natural Language Generation (NLG) has improved exponentially in recent years thanks to the development of sequence-to-sequence deep learning technologies such as Transformer-based language models. This advancement has led to more fluent and coherent NLG, leading to improved development in downstream tasks such as abstractive summarization, dialogue generation and data-to-text generation. However, it is also apparent that deep learning based generation is prone to hallucinate unintended text, which degrades the system performance and fails to meet user expectations in many real-world scenarios. To address this issue, many studies have been presented in measuring and mitigating hallucinated texts, but these have never been reviewed in a comprehensive manner before. In this survey, we thus provide a broad overview of the research progress and challenges in the hallucination problem in NLG. The survey is organized into two parts: (1) a general overview of metrics, mitigation methods, and future directions; and (2) an overview of task-specific research progress on hallucinations in the following downstream tasks, namely abstractive summarization, dialogue generation, generative question answering, data-to-text generation, machine translation, and visual-language generation. This survey serves to facilitate collaborative efforts among researchers in tackling the challenge of hallucinated texts in NLG.
We introduce a new distributed policy gradient algorithm and show that it outperforms existing reward-aware training procedures such as REINFORCE, minimum risk training (MRT), and proximal policy optimization (PPO) in terms of training stability and generalization performance when optimizing machine translation models. Our algorithm, which we call MAD (on account of using the mean absolute deviation in the importance weighting computation), has distributed data generators sampling multiple candidates per source sentence on worker nodes, while a central learner updates the policy. MAD depends on two variance reduction strategies: (1) a conditional reward normalization method that ensures each source sentence has both positive and negative reward translation examples, and (2) a new robust importance weighting scheme that acts as a conditional entropy regularizer. Experiments on a variety of translation tasks show that policies learned with the MAD algorithm perform well with both greedy decoding and beam search, and that the learned policies are sensitive to the specific reward used during training.
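Of the two variance-reduction strategies named, the conditional (per-source) reward normalization is simple enough to sketch: the rewards of the candidates sampled for one source sentence are centered and scaled so that every source contributes both positive and negative learning signal. The MAD-based importance weighting itself is more involved, so the sketch below covers only the normalization and is an approximation of the described strategy, not the paper's exact computation.

```python
import torch

def per_source_normalize(rewards: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Conditional reward normalization for one source sentence.
    rewards: (num_candidates,) rewards of the sampled translations for this source.
    Returns centered rewards scaled by their mean absolute deviation, so each source
    yields both positive and negative examples of comparable magnitude."""
    centered = rewards - rewards.mean()
    mad = centered.abs().mean()
    return centered / (mad + eps)

# Usage sketch: 8 candidates sampled for one source sentence on a worker node.
rewards = torch.tensor([0.31, 0.28, 0.35, 0.30, 0.45, 0.22, 0.29, 0.33])
advantages = per_source_normalize(rewards)   # mixes positive and negative signal
```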
Pre-trained models have proven to be powerful for enhancing task-oriented dialogue systems. However, current pre-training methods mainly focus on enhancing dialogue understanding and generation tasks while neglecting the exploitation of dialogue policy. In this paper, we propose GALAXY, a novel pre-trained dialogue model that explicitly learns dialogue policy from limited labeled dialogues and large-scale unlabeled dialogue corpora via semi-supervised learning. Specifically, we introduce a dialogue act prediction task for policy optimization during pre-training and employ a consistency regularization term to refine the learned representations with the help of unlabeled dialogues. We also implement a gating mechanism to weigh suitable unlabeled dialogue samples. Empirical results show that GALAXY substantially improves the performance of task-oriented dialogue systems and achieves new state-of-the-art results on benchmark datasets: In-Car, MultiWOZ 2.0, and MultiWOZ 2.1, improving their end-to-end combined scores by 2.5, 5.3, and 5.5 points. We also show that GALAXY has stronger few-shot ability than existing models under various low-resource settings.
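As a hedged sketch of the semi-supervised recipe described here, a dialogue act prediction loss on labeled dialogues, a dropout-based consistency term on unlabeled dialogues, and a gate that down-weights unsuitable unlabeled samples, the combination could be written as below; the gating criterion and the loss weights are assumptions for illustration, not GALAXY's exact objective.

```python
import torch
import torch.nn.functional as F

def semi_supervised_policy_loss(act_logits_labeled, act_targets,
                                act_logits_unlabeled_1, act_logits_unlabeled_2,
                                lam: float = 1.0, gate_threshold: float = 0.9):
    """Dialogue act prediction + consistency regularization on unlabeled dialogues.
    act_logits_labeled:      (B_l, num_acts) predictions on labeled dialogues.
    act_targets:             (B_l, num_acts) multi-hot gold dialogue acts (float).
    act_logits_unlabeled_*:  two dropout-perturbed passes on the same unlabeled batch."""
    # Supervised dialogue act prediction (multi-label).
    sup = F.binary_cross_entropy_with_logits(act_logits_labeled, act_targets)

    # Gate: keep only unlabeled samples the model is already confident about.
    p1 = torch.sigmoid(act_logits_unlabeled_1)
    p2 = torch.sigmoid(act_logits_unlabeled_2)
    gate = (p1.max(dim=-1).values > gate_threshold).float()      # (B_u,)

    # Dropout-based consistency: the two stochastic passes should agree.
    consistency = ((p1 - p2) ** 2).mean(dim=-1)                  # (B_u,)
    unsup = (gate * consistency).sum() / gate.sum().clamp(min=1.0)

    return sup + lam * unsup
```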