Text generation has long been a popular research topic in NLP. However, the task of generating recruitment emails from recruiters to candidates in the job-recommendation scenario has received little attention from the research community. This work aims to define the task of automatic email generation for job recommendation, identify its challenges, and provide a baseline template-based solution for Danish jobs. Evaluation by human experts shows that our method is effective. We wrap up by discussing future research directions for better solving this task.
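The abstract does not detail the templates themselves; as a rough illustration of what a template-based baseline looks like, here is a minimal sketch (all field names and template wording are assumptions, not the paper's actual templates):

```python
# Minimal sketch of template-based email generation (illustrative only;
# field names and template text are assumptions, not the paper's templates).
TEMPLATE = (
    "Dear {candidate_name},\n\n"
    "We found your profile a strong match for the position of {job_title} "
    "at {company} in {location}. {skill_sentence}\n\n"
    "Best regards,\n{recruiter_name}"
)

def render_email(candidate: dict, job: dict) -> str:
    # A skill sentence is only added when the candidate's skills overlap the job's.
    shared = sorted(set(candidate["skills"]) & set(job["skills"]))
    skill_sentence = (
        f"Your experience with {', '.join(shared)} is particularly relevant."
        if shared else ""
    )
    return TEMPLATE.format(
        candidate_name=candidate["name"],
        job_title=job["title"],
        company=job["company"],
        location=job["location"],
        skill_sentence=skill_sentence,
        recruiter_name=job["recruiter"],
    )
```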
Academic research is an exploratory activity to solve problems that have never been resolved before. By this nature, each academic research work is required to perform a literature review to distinguish its novelties from those addressed by prior works. In natural language processing, this literature review is usually conducted under the "Related Work" section. Given the rest of a research paper and a list of cited papers, the task of automatic related work generation aims to generate the "Related Work" section automatically. Although this task was proposed over 10 years ago, it received little attention until very recently, when it came to be treated as a variant of the scientific multi-document summarization problem. However, even today, the problems of automatic related work generation and citation text generation have not yet been standardized. In this survey, we conduct a meta-study comparing the existing literature on related work generation from the perspectives of problem formulation, dataset collection, methodological approach, performance evaluation, and future prospects, to provide readers with insight into the progress of state-of-the-art studies as well as how future research can be conducted. We also survey related research fields that we suggest future work consider integrating.
Natural Language Generation (NLG) has improved exponentially in recent years thanks to the development of sequence-to-sequence deep learning technologies such as Transformer-based language models. This advancement has led to more fluent and coherent NLG, in turn driving progress in downstream tasks such as abstractive summarization, dialogue generation, and data-to-text generation. However, it is also apparent that deep-learning-based generation is prone to hallucinating unintended text, which degrades system performance and fails to meet user expectations in many real-world scenarios. To address this issue, many studies have been presented on measuring and mitigating hallucinated text, but these have never before been reviewed in a comprehensive manner. In this survey, we thus provide a broad overview of the research progress and challenges of the hallucination problem in NLG. The survey is organized into two parts: (1) a general overview of metrics, mitigation methods, and future directions; and (2) an overview of task-specific research progress on hallucinations in the following downstream tasks: abstractive summarization, dialogue generation, generative question answering, data-to-text generation, machine translation, and visual-language generation. This survey serves to facilitate collaborative efforts among researchers in tackling the challenge of hallucinated text in NLG.
Understanding users' intents and identifying semantic entities from sentences, i.e., natural language understanding (NLU), is an upstream task for many natural language processing tasks. One of the main challenges is to collect a sufficient amount of annotated data to train a model. Existing research on text augmentation does not take entities into full consideration, resulting in poor performance on NLU tasks. To address this problem, we propose a novel NLP data augmentation technique, Entity-Aware Data Augmentation (EADA), which applies a tree structure, the Entity-Aware Syntax Tree (EAST), to represent sentences combined with attention to entities. Our EADA technique automatically constructs EASTs from a small amount of annotated data and then generates a large number of training instances for intent detection and slot filling. Experimental results on four datasets show that the proposed technique significantly outperforms existing data augmentation methods in terms of both accuracy and generalization ability.
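The abstract leaves the EAST construction unspecified; the sketch below illustrates only the simpler underlying idea of entity-aware augmentation, swapping annotated entity spans of the same slot type across utterances. All names are assumptions, and this is a stand-in for, not a reproduction of, EADA:

```python
import random

# Simplified sketch of entity-aware augmentation: swap slot values of the same
# type across BIO-annotated utterances. This stands in for EADA's tree-based
# generation (EAST), which the abstract does not fully specify.
def augment(utterances, n_new=100, seed=0):
    """utterances: list of (tokens, slot_labels, intent) in BIO format."""
    rng = random.Random(seed)
    # Collect the values observed for each slot type.
    values, span, slot = {}, [], None
    for tokens, labels, _ in utterances:
        for tok, lab in zip(tokens + ["<end>"], labels + ["O"]):
            if lab.startswith("B-"):
                if span: values.setdefault(slot, []).append(span)
                span, slot = [tok], lab[2:]
            elif lab.startswith("I-") and span:
                span.append(tok)
            else:
                if span: values.setdefault(slot, []).append(span)
                span, slot = [], None
    # Generate new instances by replacing each entity span with another value.
    new = []
    for _ in range(n_new):
        tokens, labels, intent = rng.choice(utterances)
        out_toks, out_labs, i = [], [], 0
        while i < len(tokens):
            if labels[i].startswith("B-"):
                stype = labels[i][2:]
                j = i + 1
                while j < len(tokens) and labels[j] == f"I-{stype}":
                    j += 1
                repl = rng.choice(values[stype])
                out_toks += repl
                out_labs += [f"B-{stype}"] + [f"I-{stype}"] * (len(repl) - 1)
                i = j
            else:
                out_toks.append(tokens[i]); out_labs.append(labels[i]); i += 1
        new.append((out_toks, out_labs, intent))
    return new
```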
Beyond their primary diagnostic purpose, radiology reports have long been a valuable source of information in medical research. Given a corpus of radiology reports, researchers are often interested in identifying the subset of reports that describe a particular medical finding. Because the space of medical findings in radiology reports is vast and potentially unbounded, recent studies have proposed mapping free-text statements in radiology reports to semi-structured strings of terms drawn from a limited vocabulary. This paper aims to present an approach for the automatic generation of semi-structured representations of radiology reports. The approach consists of matching sentences from radiology reports to manually created semi-structured representations, and then learning a sequence-to-sequence neural model that maps the matched sentences to their semi-structured representations. We evaluated the proposed approach on the OpenI corpus of manually annotated chest X-ray radiology reports. The results show that the proposed approach outperforms several baselines in terms of both (1) quantitative measures such as BLEU, ROUGE, and METEOR, and (2) qualitative judgments by radiologists. The results also demonstrate that the trained model produces reasonable semi-structured representations on an out-of-sample corpus of chest X-ray radiology reports from a different medical provider.
Text style transfer is an important task in natural language generation, aiming to control certain attributes of the generated text, such as politeness, emotion, humor, and many others. It has a long history in the field of natural language processing, and has recently gained significant attention owing to the promising performance brought by deep neural models. In this paper, we present a systematic survey of research on neural text style transfer, spanning over 100 representative articles since the first neural text style transfer work in 2017. We discuss the task formulation, existing datasets and subtasks, evaluation, and the rich methodologies in the presence of parallel and non-parallel data. We also provide discussions on a variety of important topics regarding the future development of this task. Our curated paper list is at https://github.com/zhijing-jin/text_style_transfer_survey
An optimal delivery of arguments is key to persuasion in any debate, both for humans and for AI systems. This requires the use of clear and fluent claims relevant to the given debate. Prior work has studied the automatic assessment of argument quality extensively. Yet, no approach so far actually improves that quality. Our work is a first step towards filling this gap. We propose the task of claim optimization: rewriting argumentative claims to optimize their delivery. As an initial approach, we first generate a candidate set of optimized claims with a sequence-to-sequence model such as BART, taking contextual information into account. Our key idea is then to rerank the generated candidates with respect to different quality metrics to find the best optimization. In automatic and human evaluation, we outperform different reranking baselines on an English corpus, improving 60% of all claims (worsening only 16%). Follow-up analyses reveal that, beyond copy editing, our approach often specifies claims with details, whereas it adds less evidence than humans do. Moreover, its capabilities generalize well to other domains, such as instructional texts.
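The described generate-then-rerank pipeline can be sketched with off-the-shelf tools; the checkpoint and the trivial stand-in scorer below are assumptions, not the paper's fine-tuned model or learned quality metrics:

```python
from transformers import BartForConditionalGeneration, BartTokenizer

# Sketch of the generate-then-rerank idea: sample several rewrites with a
# seq2seq model, then keep the best one under a quality scorer.
tok = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

def optimize_claim(claim: str, score, num_candidates: int = 10) -> str:
    inputs = tok(claim, return_tensors="pt")
    outputs = model.generate(
        **inputs,
        do_sample=True,          # sampling yields a diverse candidate set
        top_p=0.95,
        num_return_sequences=num_candidates,
        max_new_tokens=64,
    )
    candidates = [tok.decode(o, skip_special_tokens=True) for o in outputs]
    # Rerank: keep the candidate that the quality metric prefers most.
    return max(candidates, key=score)

# Example with a trivial stand-in scorer (a real system would plug in learned
# argument-quality metrics, as the paper describes).
best = optimize_claim("School uniforms is bad.", score=lambda c: -len(c))
```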
This paper presents a solution to the GenChal 2022 shared task dedicated to feedback comment generation for writing learning. In this task, given a text with an error and the span of that error, a system generates an explanatory note that helps the writer (a language learner) improve their writing skills. Our solution is based on fine-tuning the T5 model on the initial dataset, augmented according to the syntactic dependencies of the words located within the indicated error span. The solution of our team, "nigula", obtained second place according to manual evaluation by the organizers.
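One plausible way to frame such data for T5 fine-tuning is to mark the error span inline; the markers and task prefix below are assumptions, since the shared task defines its own span format:

```python
# Sketch of how training pairs for feedback comment generation might be
# formatted for T5 fine-tuning. The <error> markers and the "generate
# feedback:" prefix are assumptions, not the shared task's format.
def make_example(text: str, start: int, end: int, comment: str) -> dict:
    marked = text[:start] + "<error>" + text[start:end] + "</error>" + text[end:]
    return {"input": "generate feedback: " + marked, "target": comment}

ex = make_example(
    "I look forward to hear from you.",
    15, 22,  # character span of the erroneous phrase "to hear"
    "After 'look forward to', use the gerund form of the verb.",
)
```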
As recommender systems become increasingly sophisticated and complex, they often suffer from a lack of fairness and transparency. Providing robust and unbiased explanations for recommendations has been drawing increasing attention, as it can help address these issues and improve the trustworthiness and informativeness of recommender systems. However, despite the fact that such explanations are generated for humans, who respond more strongly to messages carrying appropriate emotions, there is a lack of consideration of emotions when generating explanations for recommendations. Current explanation generation models are found to exaggerate certain emotions without accurately capturing the underlying tone or meaning. In this paper, we propose a novel multi-head transformer-based method, called Emotion-aware Transformer for Explainable Recommendation (EmoTER), to generate more robust, fair, and emotion-enhanced explanations. To measure the linguistic quality and emotional fairness of the generated explanations, we adopt both automatic text metrics and human perceptions for evaluation. Experiments on three widely used benchmark datasets with multiple evaluation metrics demonstrate that EmoTER consistently outperforms existing state-of-the-art explanation generation models in terms of text quality, explainability, and fairness over emotion distributions. An implementation of EmoTER will be released as an open-source toolkit to support further research.
Context: Stack Overflow is very helpful for software developers seeking answers to programming problems. Previous studies have shown that a growing number of questions are of low quality and thus receive less attention from potential answerers. Gao et al. proposed an LSTM-based model (i.e., BiLSTM-CC) to automatically generate question titles from code snippets in order to improve question quality. However, using only the code snippet from the question body cannot provide sufficient information for title generation, and LSTMs cannot capture long-range dependencies between tokens. Objective: This paper proposes CCBERT, a novel deep-learning-based model that aims to enhance the performance of question title generation by making full use of the bi-modal information of the entire question body. Method: CCBERT follows the encoder-decoder paradigm, using CodeBERT to encode the question body into hidden representations, a stacked Transformer decoder to generate predicted tokens, and an additional copy attention layer to refine the output distribution. Both the encoder and the decoder perform multi-head self-attention operations to better capture long-range dependencies. This paper builds a dataset containing around 200,000 high-quality questions, filtered from the data officially released by Stack Overflow, to verify the effectiveness of the CCBERT model. Results: CCBERT outperforms all baseline models on the dataset. Experiments on code-only and low-resource datasets show that CCBERT retains its superiority with less performance degradation. Human evaluation also shows the excellent performance of CCBERT with respect to readability and relevance criteria.
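Structurally, a CCBERT-style model can be sketched as CodeBERT feeding a stacked Transformer decoder; the copy-attention layer from the paper is omitted here, and all hyperparameters are assumptions:

```python
import torch.nn as nn
from transformers import AutoModel

# Structural sketch of a CCBERT-style model: CodeBERT as the encoder, a
# stacked Transformer decoder attending over its hidden states. The paper's
# copy-attention layer is omitted; hyperparameters are assumptions.
class TitleGenerator(nn.Module):
    def __init__(self, vocab_size: int, d_model: int = 768, n_layers: int = 6):
        super().__init__()
        self.encoder = AutoModel.from_pretrained("microsoft/codebert-base")
        layer = nn.TransformerDecoderLayer(d_model, nhead=12, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=n_layers)
        self.embed = nn.Embedding(vocab_size, d_model)
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, src_ids, src_mask, tgt_ids):
        # Encode the bi-modal question body (text + code) into hidden states.
        memory = self.encoder(src_ids, attention_mask=src_mask).last_hidden_state
        tgt = self.embed(tgt_ids)
        causal = nn.Transformer.generate_square_subsequent_mask(tgt_ids.size(1))
        hidden = self.decoder(tgt, memory, tgt_mask=causal)
        return self.lm_head(hidden)  # next-token logits over the vocabulary
```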
With data-centric decision-making becoming the future, seamless access to databases is of paramount importance. There has been extensive research on building effective text-to-SQL (Text2SQL) models for accessing data in databases. Natural language is one of the best interfaces for bridging the gap between the data and the results, enabling effective database access, especially for non-technical users. It will open the door to, and spark great interest among, users regardless of whether they are proficient in technical skills or query languages. Even though many deep-learning-based algorithms have been proposed and studied, it remains very challenging to solve the data querying problem with natural language in real-world workplace scenarios. One reason is the use of different datasets across studies, each of which brings its own limitations and assumptions. At the same time, we do lack a thorough understanding of these proposed models and their limitations with respect to the specific datasets they were trained on. In this paper, we try to present a holistic overview of 24 neural network models studied in recent years, including their architectures involving convolutional neural networks, recurrent neural networks, pointer networks, reinforcement learning, generative models, etc. We also give an overview of the 11 datasets that are widely used to train models for Text2SQL technologies. Finally, we discuss future application possibilities of Text2SQL technologies for seamless data querying.
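For readers new to the task, a single illustrative input-output pair (the schema, question, and query are invented for illustration):

```python
# The Text2SQL task in a nutshell: map a natural-language question, given a
# known schema, to an executable query.
question = "How many employees joined after 2020?"
schema = "employees(id, name, hire_date)"
sql = "SELECT COUNT(*) FROM employees WHERE hire_date > '2020-12-31'"
```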
Every year, the demand on doctors for image-based diagnosis of patients grows, a problem that recent artificial intelligence methods can help to address. In this context, we survey the field of automatic report generation from medical images, focusing on methods that use deep neural networks, with respect to: (1) datasets, (2) architecture design, (3) explainability, and (4) evaluation metrics. Our survey identifies interesting developments, but also remaining challenges. Among them, the current evaluation of generated reports is especially weak, since it mostly relies on traditional natural language processing (NLP) metrics, which do not accurately capture medical correctness.
Data-to-text generation systems aim to generate text descriptions based on input data (often represented in tabular form). A typical system uses huge numbers of training samples to learn the correspondence between tables and texts. However, large training sets are expensive to obtain, limiting the applicability of these approaches in real-world scenarios. In this work, we focus on few-shot data-to-text generation. We observe that, while fine-tuned pre-trained language models may produce plausible sentences, they suffer from a low semantic coverage problem in the few-shot setting. In other words, important input slots tend to be missing from the generated text. To this end, we propose a search-and-learning approach that leverages pre-trained language models while inserting the missing slots to improve semantic coverage. We further fine-tune our system on the search results to smooth out search noise, yielding text of largely better quality and improving inference efficiency. Experiments show that our model achieves high performance on the E2E and WikiBio datasets. In particular, we cover 98.35% of input slots on E2E, largely alleviating the low-coverage problem.
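The low-coverage problem the paper targets can be made concrete with a naive slot-coverage check (string containment here is a simplification; the paper's search procedure is more involved):

```python
# Sketch of the low-coverage diagnosis: check which input slot values actually
# surface in the generated text. Naive string matching stands in for the
# paper's search procedure; the E2E-style slots below are illustrative.
def missing_slots(slots: dict, generated: str) -> list:
    return [k for k, v in slots.items() if v.lower() not in generated.lower()]

slots = {"name": "The Eagle", "food": "French", "area": "riverside"}
text = "The Eagle serves French food."
print(missing_slots(slots, text))  # ['area'] -> a candidate for slot insertion
```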
Gender-inclusive language is important for achieving gender equality in languages with gender inflections, such as German. While stirring some controversy, it is increasingly adopted by companies and political institutions. A handful of tools have been developed to help people use gender-inclusive language by identifying instances of the generic masculine and providing suggestions for more inclusive reformulations. In this report, we define the underlying tasks in terms of natural language processing, and present a dataset and measures for benchmarking them. We also present a model that implements these tasks, by combining an inclusive language database with an elaborate sequence of processing steps via standard pre-trained models. Our model achieves a recall of 0.89 and a precision of 0.82 in our benchmark for identifying exclusive language; and one of its top five suggestions is chosen in real-world texts in 44% of cases. We sketch how the area could be further advanced by training end-to-end models and using large language models; and we urge the community to include more gender-inclusive texts in their training data in order to not present an obstacle to the adoption of gender-inclusive language. Through these efforts, we hope to contribute to restoring justice in language and, to a small extent, in reality.
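The identification-and-suggestion step can be sketched as a lookup over a small database of generic-masculine forms; the entries below are illustrative and far simpler than the paper's combination of a curated database with pre-trained models:

```python
import re

# Minimal sketch of the identify-and-suggest pipeline: match generic-masculine
# forms against a small lookup table of inclusive alternatives. The entries
# are illustrative assumptions, not the paper's database.
SUGGESTIONS = {
    "Lehrer": ["Lehrkräfte", "Lehrerinnen und Lehrer", "Lehrer*innen"],
    "Studenten": ["Studierende", "Studentinnen und Studenten"],
}

def suggest(text: str):
    for word, alternatives in SUGGESTIONS.items():
        for m in re.finditer(rf"\b{word}\b", text):
            yield m.span(), alternatives

for span, alts in suggest("Die Studenten und ihre Lehrer treffen sich."):
    print(span, alts)
```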
Controllable Text Generation (CTG) is an emerging area in the field of natural language generation (NLG). It is regarded as crucial for the development of advanced text generation technologies that are more natural and better meet the specific constraints of practical applications. In recent years, methods using large-scale pre-trained language models (PLMs), in particular the widely used Transformer-based PLMs, have become a new paradigm of NLG, allowing the generation of more diverse and fluent text. However, due to the low interpretability of deep neural networks, the controllability of these methods needs to be guaranteed. To this end, controllable text generation using Transformer-based PLMs has become a rapidly growing yet challenging new research hotspot. A diverse range of approaches has emerged in the last 3-4 years, targeting different CTG tasks that may require different types of controlled constraints. In this paper, we present a systematic critical review of the common tasks, main approaches, and evaluation methods in this area. Finally, we discuss the challenges that the field is facing, and put forward various promising future directions. To the best of our knowledge, this is the first survey paper to summarize CTG techniques from the perspective of PLMs. We hope it can help researchers in related fields to quickly track the academic frontier, providing them with a landscape of the area and a roadmap for future research.
Generalist models, which are capable of performing diverse multi-modal tasks in a task-agnostic way within a single model, have been explored recently. Though hopefully an alternative route to general-purpose AI, existing generalist models are still at an early stage, with limited modality and task coverage. To empower multi-modal task-scaling and speed up this line of research, we release a generalist model learning system, OFASys, built on top of a declarative task interface named multi-modal instruction. At the core of OFASys is the idea of decoupling multi-modal task representations from the underlying model implementations. In OFASys, a task involving multiple modalities can be defined declaratively, even with just a single line of code. The system automatically generates task plans from such instructions for training and inference. It also facilitates multi-task training on diverse multi-modal workloads. As a starting point, we provide presets of 7 different modalities and 23 highly diverse example tasks in OFASys, with which we also develop a first-of-its-kind single model, OFA+, that can handle text, image, speech, video, and motion data. The single OFA+ model achieves, on average, 95% of the performance of 15 task-finetuned models with only 16% of their parameters, showcasing the performance reliability of the multi-modal task-scaling provided by OFASys. Available at https://github.com/OFA-Sys/OFASys
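The abstract's one-line declarative task definition presumably looks roughly like the instruction strings below; the slot syntax follows examples in the OFASys materials, but the exact syntax and surrounding API are assumptions:

```python
# A multi-modal task declared as a one-line instruction, per the OFASys idea
# of decoupling task specification from model implementation. The slot syntax
# is based on the paper's examples; exact details may differ.
caption_task = "[IMAGE:img] what does the image describe? -> [TEXT:cap]"
# Adding another task, e.g. speech recognition, is another single line:
asr_task = "[AUDIO:wav] what is the text corresponding to the audio? -> [TEXT:text]"
```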
Stack Overflow is one of the most popular programming communities, where developers can seek help for the problems they encounter. However, if inexperienced developers fail to describe their problems clearly, it is hard for them to attract sufficient attention and receive the expected answers. We propose M3NSCT5, a novel approach to automatically generate multiple post titles from a given code snippet. Developers can use the generated titles to find closely related posts and complete their problem descriptions. M3NSCT5 employs the CodeT5 backbone, a pre-trained Transformer model with excellent language understanding and generation ability. To alleviate the ambiguity issue, whereby the same code snippet could be aligned with different titles under different contexts, we propose a maximal marginal multiple nucleus sampling strategy to generate multiple high-quality and diverse title candidates at a time, from which developers can choose. We build a large-scale dataset containing 890,000 question posts covering eight programming languages to validate the effectiveness of M3NSCT5. Automatic evaluation results on the BLEU and ROUGE metrics show the superiority of M3NSCT5 over six state-of-the-art baseline models. Moreover, a human evaluation with trustworthy results also demonstrates the great potential of our approach for real-world applications.
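The sample-then-diversify idea can be sketched with standard nucleus sampling plus a greedy maximal-marginal selection; the checkpoint and the token-overlap diversity score are stand-ins for, not reproductions of, the paper's exact strategy:

```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

# Sketch of nucleus sampling followed by a diversity-aware selection, in the
# spirit of M3NSCT5's maximal-marginal strategy.
tok = AutoTokenizer.from_pretrained("Salesforce/codet5-base")
model = T5ForConditionalGeneration.from_pretrained("Salesforce/codet5-base")

def overlap(a: str, b: str) -> float:
    sa, sb = set(a.split()), set(b.split())
    return len(sa & sb) / max(len(sa | sb), 1)

def diverse_titles(code: str, n_samples: int = 20, k: int = 5) -> list:
    ids = tok(code, return_tensors="pt", truncation=True).input_ids
    outs = model.generate(ids, do_sample=True, top_p=0.9,
                          num_return_sequences=n_samples, max_new_tokens=32)
    cands = [tok.decode(o, skip_special_tokens=True) for o in outs]
    picked = [cands[0]]
    for c in cands[1:]:
        # Greedily keep candidates least similar to those already picked.
        if max(overlap(c, p) for p in picked) < 0.5:
            picked.append(c)
        if len(picked) == k:
            break
    return picked
```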
Existing explanation models generate only text for recommendations and still struggle to produce diverse content. In this paper, to further enrich explanations, we propose a new task named personalized showcases, in which we provide both textual and visual information to explain our recommendations. Specifically, we first select a personalized image set that is most relevant to a user's interest in the recommended item. Then, natural language explanations are generated accordingly, conditioned on our selected images. For this new task, we collect a large-scale dataset from Google Local (i.e., maps) and construct a high-quality subset for generating multi-modal explanations. We propose a personalized multi-modal framework that can generate diverse and visually-aligned explanations via contrastive learning. Experiments show that our framework benefits from different modalities as inputs, and is able to generate more diverse and expressive explanations than previous methods on a variety of evaluation metrics.
The goal of building dialogue agents that can converse with humans naturally has been a long-standing dream of researchers since the early days of artificial intelligence. The well-known Turing Test proposed judging the ultimate validity of an artificial intelligence agent by the indistinguishability of its dialogues from humans'. It should come as no surprise that human-level dialogue systems are very challenging to build. But, while early efforts on rule-based systems found limited success, the emergence of deep learning has enabled great advances on this topic. In this thesis, we focus on methods that address the numerous issues that have been imposing the gap between artificial conversational agents and human-level interlocutors. These methods were proposed and experimented with in ways inspired by general state-of-the-art AI methodologies, but they also target the characteristics that dialogue systems possess.
Data augmentation is an important component in the robustness evaluation of natural language processing (NLP) models, and in enhancing the diversity of the data they are trained on. In this paper, we present NL-Augmenter, a new participatory Python-based natural language augmentation framework that supports the creation of both transformations (modifications to the data) and filters (data splits according to specific features). We describe the framework and an initial set of 117 transformations and 23 filters for a variety of natural language tasks. We demonstrate the efficacy of NL-Augmenter by using several of its transformations to analyze the robustness of popular natural language models. The infrastructure, datacards, and robustness analysis results are publicly available on the NL-Augmenter repository (https://github.com/gem-benchmark/nl-augmenter).
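A transformation in this framework is a small class exposing a generate method; the standalone sketch below mimics that pattern for a butter-fingers-style typo perturbation (one of NL-Augmenter's actual transformation types), without importing the repository's SentenceOperation interface:

```python
import random

# A transformation in the NL-Augmenter spirit: a small, self-contained
# modification of the input text. (NL-Augmenter's real transformations
# subclass its SentenceOperation interface; this standalone class only
# mimics that generate() pattern.)
class ButterFingersTransformation:
    """Randomly perturbs characters to simulate typos."""

    def __init__(self, prob: float = 0.05, seed: int = 0):
        self.prob = prob
        self.rng = random.Random(seed)

    def generate(self, sentence: str) -> list:
        keyboard = {"a": "qs", "e": "wr", "o": "ip", "t": "ry"}  # tiny sample
        out = [
            self.rng.choice(keyboard[ch]) if ch in keyboard
            and self.rng.random() < self.prob else ch
            for ch in sentence
        ]
        return ["".join(out)]

aug = ButterFingersTransformation(prob=0.2)
print(aug.generate("the quick brown fox"))
```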