The common practice for training commonsense models has gone from-human-to-corpus-to-machine: humans author commonsense knowledge graphs in order to train commonsense models. In this work, we investigate an alternative, from-machine-to-corpus-to-machine: general language models author these commonsense knowledge graphs to train commonsense models. Our study leads to a new framework, Symbolic Knowledge Distillation. As with prior art in Knowledge Distillation (Hinton et al., 2015), our approach uses larger models to teach smaller models. A key difference is that we distill knowledge symbolically, as text, in addition to the neural model. We also distill only one aspect, commonsense, of the general language model teacher, allowing the student to be a different type of model, a commonsense model. Altogether, we show that careful prompt engineering and a separately trained critic model allow us to selectively distill high-quality causal commonsense from GPT-3, a general language model. Empirical results demonstrate that, for the first time, a human-authored commonsense knowledge graph is surpassed by our automatically distilled variant on all three criteria: quantity, quality, and diversity. In addition, it results in a neural commonsense model that surpasses the teacher model's commonsense capabilities despite being 100x smaller. We apply this to the ATOMIC resource, and share our new symbolic knowledge graph and commonsense models.
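To make the generate-then-filter pipeline in this abstract concrete, the following is a minimal sketch of that loop: prompt a teacher LM for candidate commonsense inferences, score each candidate with a critic, and keep only the high-scoring text as distilled training data for a student. The models, prompt wording, and threshold here are illustrative stand-ins, not the paper's actual configuration.

```python
# Sketch of symbolic knowledge distillation: teacher generates, critic filters,
# and the surviving text becomes the symbolic corpus for training a student.
from transformers import pipeline

teacher = pipeline("text-generation", model="gpt2")             # stand-in for GPT-3
critic = pipeline("text-classification", model="roberta-base")  # stand-in for a trained critic

PROMPT = "Event: PersonX pays PersonY a compliment. As a result, PersonY"

def distill(prompt: str, n: int = 10, threshold: float = 0.5) -> list[str]:
    """Generate candidate inferences and keep those the critic accepts."""
    candidates = teacher(prompt, num_return_sequences=n, max_new_tokens=12,
                         do_sample=True)
    kept = []
    for cand in candidates:
        text = cand["generated_text"][len(prompt):].strip()
        score = critic(f"{prompt} {text}")[0]["score"]
        if score >= threshold:  # critic filters out low-quality knowledge
            kept.append(text)
    return kept

if __name__ == "__main__":
    for inference in distill(PROMPT):
        print(inference)  # the filtered strings form the distilled knowledge graph
```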
Pre-trained language models, despite their rapid advancements powered by scale, still fall short of robust commonsense capabilities. And yet, scale appears to be the winning recipe; after all, the largest models seem to have acquired the largest amount of commonsense capabilities. Or is it? In this paper, we investigate the possibility of a seemingly impossible match: can smaller language models with dismal commonsense capabilities (i.e., GPT-2) ever win over models that are orders of magnitude larger and better (i.e., GPT-3), if the smaller models are powered with novel commonsense distillation algorithms? The key intellectual question we ask here is whether it is possible, if at all, to design a learning algorithm that does not benefit from scale, yet leads to a competitive level of commonsense acquisition. In this work, we study generative models of commonsense knowledge, focusing on the task of generating generics, statements of commonsense facts about everyday concepts, e.g., birds can fly. We introduce a novel commonsense distillation framework, I2D2, that loosely follows the Symbolic Knowledge Distillation of West et al. but breaks the dependence on extreme-scale models as the teacher by two innovations: (1) a novel adaptation of NeuroLogic Decoding to enhance the generation quality of weak, off-the-shelf language models, and (2) self-imitation learning to iteratively learn from the model's own enhanced commonsense acquisition capabilities. Empirical results suggest that scale is not the only way, as novel algorithms can be a promising alternative. Moreover, our study leads to a new corpus of generics, Gen-A-Tomic, that is the largest and of the highest quality available to date.
Recent years have brought about a renewed interest in commonsense representation and reasoning in the field of natural language understanding. The development of new commonsense knowledge graphs (CSKGs) has been central to these advances, as their diverse facts can be used by machine learning models to tackle new and challenging tasks. At the same time, questions remain about the quality and coverage of these resources due to the massive scale required to comprehensively cover general commonsense knowledge. In this work, we posit that manually constructed CSKGs will never achieve the coverage necessary to be applicable in all situations encountered by NLP agents. We therefore propose a new evaluation framework for testing the utility of KGs based on how effectively implicit knowledge representations can be learned from them. With this new goal, we propose ATOMIC 2020, a new CSKG of general-purpose commonsense knowledge containing knowledge that is not readily available in pretrained language models. We evaluate its properties in comparison with other leading CSKGs, performing the first large-scale pairwise study of commonsense knowledge resources. Next, we show that ATOMIC 2020 is better suited for training knowledge models that can generate accurate, representative knowledge for new, unseen entities and events. Finally, through human evaluation, we show that the few-shot performance of GPT-3 (175B parameters), while impressive, remains lower than that of a BART-based knowledge model trained on ATOMIC 2020, despite the latter using over 430x fewer parameters.
Symbolic knowledge graphs (KGs) have been constructed either by expensive human crowdsourcing or with domain-specific, complex information extraction pipelines. Emerging large language models (LMs), such as BERT, have been shown to implicitly encode massive amounts of knowledge that can be queried with properly designed prompts. However, compared to explicit KGs, the knowledge in black-box LMs is often difficult to access or edit and lacks explainability. In this work, we aim to harvest symbolic KGs from LMs, a new framework for automatic KG construction empowered by the flexibility and scalability of neural LMs. Compared to prior works that often rely on large human-annotated data or existing massive KGs, our approach requires only a minimal definition of relations as input, and is hence suitable for extracting knowledge about rich new relations that could not be extracted before. The approach automatically generates diverse prompts and performs an efficient knowledge search within a given LM for consistent and extensive outputs. The knowledge harvested with our approach is substantially more accurate than with previous methods, as shown by both automatic and human evaluation. As a result, we derive from diverse LMs a family of new KGs (e.g., BertNet and RobertaNet) that contain a richer set of commonsense relations, including complex ones (e.g., "A is capable of but not good at B"), than human-annotated KGs (e.g., ConceptNet). In addition, the resulting KGs also serve as a vehicle for interpreting the respective source LMs, leading to new insights into the varying knowledge capabilities of different LMs.
We present SODA: the first publicly available, million-scale high-quality social dialogue dataset. Using SODA, we train COSMO: a generalizable conversation agent outperforming previous best-performing agents on both in- and out-of-domain datasets. In contrast to most existing crowdsourced, small-scale dialogue corpora, we distill 1.5M socially-grounded dialogues from a pre-trained language model (InstructGPT; Ouyang et al., 2022). Dialogues are distilled by contextualizing social commonsense knowledge from a knowledge graph (Atomic10x; West et al., 2022). Human evaluation shows that dialogues in SODA are more consistent, specific, and (surprisingly) natural than prior human-authored datasets - e.g., DailyDialog (Li et al., 2017), BlendedSkillTalk (Smith et al., 2020). In addition, extensive evaluations show that COSMO is significantly more natural and consistent on unseen datasets than best-performing dialogue models - e.g., GODEL (Peng et al., 2022), BlenderBot (Roller et al., 2021), DialoGPT (Zhang et al., 2020). Furthermore, it is sometimes even preferred to the original human-written gold responses. We make our data, models, and code public.
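The contextualization step described here, turning a knowledge-graph triple into a dialogue prompt, can be sketched roughly as below. The relation templates, the placeholder name, and the llm() stub are illustrative assumptions rather than the paper's exact templates or models.

```python
# Rough sketch: rewrite an ATOMIC-style commonsense triple into a short
# narrative, then ask an instruction-following LM for the grounded dialogue.
def triple_to_prompt(head: str, relation: str, tail: str) -> str:
    relation_templates = {
        "xWant": "Because of that, {subject} wants {tail}.",
        "xReact": "As a result, {subject} feels {tail}.",
    }
    narrative = head.replace("PersonX", "Madeleine") + ". " + \
        relation_templates[relation].format(subject="Madeleine", tail=tail)
    return (f"Narrative: {narrative}\n"
            f"Write a natural conversation between Madeleine and her friend "
            f"that is grounded in this narrative:")

prompt = triple_to_prompt("PersonX moves to a new city", "xWant", "to make friends")
# dialogue = llm(prompt)  # llm() is a placeholder for any instruct-tuned model call
print(prompt)
```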
We present ATOMIC, an atlas of everyday commonsense reasoning, organized through 877k textual descriptions of inferential knowledge. Compared to existing resources that center around taxonomic knowledge, ATOMIC focuses on inferential knowledge organized as typed if-then relations with variables (e.g., "if X pays Y a compliment, then Y will likely return the compliment"). We propose nine if-then relation types to distinguish causes vs. effects, agents vs. themes, voluntary vs. involuntary events, and actions vs. mental states. By generatively training on the rich inferential knowledge described in ATOMIC, we show that neural models can acquire simple commonsense capabilities and reason about previously unseen events. Experimental results demonstrate that multitask models that incorporate the hierarchical structure of if-then relation types lead to more accurate inference compared to models trained in isolation, as measured by both automatic and human evaluation.
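As an illustration of the typed if-then structure described above, the sketch below shows one possible in-code representation of an event and its inferences over the nine relation types; this is an assumed representation for clarity, not the official ATOMIC data format.

```python
# Illustrative representation of an ATOMIC event with typed if-then inferences.
from dataclasses import dataclass, field

RELATIONS = ["xIntent", "xNeed", "xAttr", "xEffect", "xReact",
             "xWant", "oEffect", "oReact", "oWant"]

@dataclass
class AtomicEvent:
    event: str                                    # e.g. "PersonX pays PersonY a compliment"
    inferences: dict[str, list[str]] = field(default_factory=dict)

    def add(self, relation: str, tail: str) -> None:
        assert relation in RELATIONS, f"unknown relation: {relation}"
        self.inferences.setdefault(relation, []).append(tail)

e = AtomicEvent("PersonX pays PersonY a compliment")
e.add("xIntent", "to be nice")        # why the agent did it
e.add("oReact", "flattered")          # how the other person feels
e.add("oWant", "to return the compliment")
print(e.inferences)
```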
Large language models are increasingly able to generate fluent-appearing text with relatively little task-specific supervision. But can these models accurately explain classification decisions? We consider the task of generating free-text explanations using a small number of human-written examples (i.e., in a few-shot manner). We find that (1) authoring higher-quality examples for prompting leads to higher-quality generations; and (2) surprisingly, in head-to-head comparisons, crowdworkers often prefer explanations generated by GPT-3 to the human-written explanations contained in existing crowdsourced datasets. Crowdworker ratings also show, however, that while models produce factual, grammatical, and sufficient explanations, they have room to improve along axes such as providing novel information and supporting the label. We create a pipeline that combines GPT-3 with a supervised filter that incorporates humans in the loop via binary acceptability judgments. Despite the significant subjectivity inherent in judging acceptability, our approach is able to consistently filter GPT-3-generated explanations deemed acceptable by humans.
We present the first comprehensive study on automatic knowledge base construction for two prevalent commonsense knowledge graphs: ATOMIC (Sap et al., 2019) and ConceptNet (Speer et al., 2017). Contrary to many conventional KBs that store knowledge with canonical templates, commonsense KBs only store loosely structured open-text descriptions of knowledge. We posit that an important step toward automatic commonsense completion is the development of generative models of commonsense knowledge, and propose COMmonsEnse Transformers (COMET) that learn to generate rich and diverse commonsense descriptions in natural language. Despite the challenges of commonsense modeling, our investigation reveals promising results when implicit knowledge from deep pre-trained language models is transferred to generate explicit knowledge in commonsense knowledge graphs. Empirical results demonstrate that COMET is able to generate novel knowledge that humans rate as high quality, with up to 77.5% (ATOMIC) and 91.7% (ConceptNet) precision at top 1, which approaches human performance for these resources. Our findings suggest that using generative commonsense models for automatic commonsense KB completion could soon be a plausible alternative to extractive methods.
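The COMET-style setup described above amounts to flattening each knowledge triple into a (source, target) text pair for fine-tuning a pretrained LM to generate the tail given the head and relation. The special-token format below is an assumption for illustration, not the exact serialization used in the paper.

```python
# Sketch: turn knowledge triples into text pairs for generative fine-tuning.
triples = [
    ("PersonX pays PersonY a compliment", "xReact", "happy to have made PersonY smile"),
    ("go to the dentist", "MotivatedByGoal", "healthy teeth"),
]

def to_training_pair(head: str, relation: str, tail: str) -> tuple[str, str]:
    source = f"{head} [{relation}] [GEN]"   # head + relation as the conditioning text
    target = tail                           # the model learns to generate the tail
    return source, target

pairs = [to_training_pair(*t) for t in triples]
for src, tgt in pairs:
    print(src, "->", tgt)
# These pairs would then feed a standard fine-tuning loop over a pretrained
# GPT-2/BART-style checkpoint (e.g. with the transformers Trainer).
```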
On question answering tasks that require commonsense, language models (e.g., GPT-3) have been used to generate text expressing background knowledge that helps improve performance. Yet the cost of working with such models is very high. In this work, we use smaller language models to generate useful intermediate context, referred to here as elaborations. Our framework alternates between updating two language models, an elaboration generator and an answer predictor, allowing each to influence the other. Using less than 0.5% of the parameters of GPT-3, our model outperforms alternatives of similar size and closes the gap to GPT-3 on four commonsense question answering benchmarks. Human evaluations show that the quality of the generated elaborations is high.
Language models (LMs) have demonstrated remarkable performance on downstream tasks, using in-context exemplars or human instructions. Recent works have shown that chain-of-thought (CoT) prompting can elicit models to solve complex reasoning tasks, step-by-step. However, the efficacy of prompt-based CoT methods is restricted to very large LMs such as GPT-3 (175B), thus limiting deployability. In this paper, we revisit the fine-tuning approach to enable complex reasoning in smaller LMs, optimized to efficiently perform a specific task. We propose Fine-tune-CoT, a method that leverages the capabilities of very large LMs to generate reasoning samples and teach smaller models via fine-tuning. We evaluate our method on publicly available LMs across a wide range of complex tasks and model sizes. We find that Fine-tune-CoT enables substantial reasoning capability in small models, whereas previous prompt-based baselines exhibit near-random performance. Student models can even outperform the teacher in some tasks while reducing model size requirements by several orders of magnitude. We conduct extensive ablations and sample studies to understand the reasoning capabilities of student models. We also identify several important nuances that have been overlooked in concurrent fine-tuning works on CoT and address them in our analysis.
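A hedged sketch of the recipe this abstract describes is shown below: prompt a large teacher with zero-shot chain-of-thought, keep generations whose final answer matches the gold label, and use the resulting (question, rationale + answer) pairs as fine-tuning data for a small student. The teacher_generate() stub, prompt wording, and answer-matching check are simplifying assumptions.

```python
# Sketch of Fine-tune-CoT-style data collection from a large teacher model.
def teacher_generate(prompt: str) -> str:
    """Placeholder: wrap whichever large-LM API is available here."""
    raise NotImplementedError

def build_cot_dataset(examples: list[dict]) -> list[dict]:
    data = []
    for ex in examples:
        prompt = f"Q: {ex['question']}\nA: Let's think step by step."
        rationale = teacher_generate(prompt)
        if ex["answer"] in rationale:   # crude filter: keep only reasoning that reaches the gold answer
            data.append({
                "input": ex["question"],
                "target": f"{rationale}\nTherefore, the answer is {ex['answer']}.",
            })
    return data

# The resulting pairs are then used to fine-tune a small student model with a
# standard supervised (cross-entropy) objective.
```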
Automated storytelling has long captured the attention of researchers because of the ubiquity of narratives in everyday life. However, maintaining coherence and staying on track toward a specific ending is challenging when generating narratives with neural language models. In this paper, we introduce Story generation with Reader Models (StoRM), a framework in which a reader model is used to reason about how the story should progress. A reader model captures what a human reader believes about the concepts, entities, and relations of the fictional story world. We show how an explicit reader model, represented as a knowledge graph, affords story coherence and provides controllability in the form of achieving a given story-world goal. Experiments show that our model produces significantly more coherent and on-topic stories, outperforming baselines along dimensions including plot plausibility and staying on topic. Our system also outperforms outline-guided story generation baselines in composing given concepts without ordering.
As AI systems become increasingly powerful and pervasive, concerns about machines' morality, or lack thereof, have grown. Yet teaching morality to machines is a formidable task, as morality remains one of the most intensely debated questions among humans, let alone for AI. Existing AI systems deployed to millions of users, however, are already making decisions loaded with moral implications, which poses a seemingly impossible challenge: teaching machines moral sense while humanity continues to grapple with it. To explore this challenge, we introduce Delphi, an experimental framework based on deep neural networks trained directly on descriptive moral judgments, e.g., "helping a friend" is generally good, while "helping a friend spread fake news" is not. Empirical results provide new insights into the promise and limitations of machine ethics. Delphi demonstrates strong generalization capabilities in the face of novel moral situations, whereas off-the-shelf neural network models exhibit markedly poor judgment, including unjust biases, confirming the need for explicitly teaching machines moral sense. Yet Delphi is not perfect, exhibiting susceptibility to pervasive biases and inconsistencies. Despite that, we demonstrate positive use cases of an imperfect Delphi, including using it as a component model within other imperfect AI systems. Importantly, we interpret the operationalization of Delphi in light of prominent ethical theories, which leads us to important future research questions.
Chain of thought prompting successfully improves the reasoning capabilities of large language models, achieving state of the art results on a range of datasets. However, these reasoning capabilities only appear to emerge in models with a size of over 100 billion parameters. In this paper, we explore the transfer of such reasoning capabilities to models with less than 100 billion parameters via knowledge distillation. Specifically, we finetune a student model on the chain of thought outputs generated by a larger teacher model. Our experiments show that the proposed method improves task performance across arithmetic, commonsense and symbolic reasoning datasets. For example, the accuracy of T5 XXL on GSM8K improves from 8.11% to 21.99% when finetuned on PaLM-540B generated chains of thought.
Commonsense knowledge (CSK) about concepts and their properties is useful for AI applications. Prior works such as ConceptNet have compiled large CSK collections. However, their expressiveness is restricted to subject-predicate-object (SPO) triples with simple concepts for S and strings for P and O. Compared with prior works, the CSK assertions of Ascent++ have refined expressiveness as well as better precision and recall. Ascent++ captures composite concepts with subgroups and aspects, and refines assertions with semantic facets. The latter is essential for expressing the temporal and spatial validity of assertions and further qualifiers. Furthermore, Ascent++ combines open information extraction (OpenIE) with judicious cleaning and ranking by typicality and saliency scores. For high coverage, our method taps into the large-scale crawl C4 with its broad web content. Evaluation with human judgments shows the superior quality of the Ascent++ KB, and an extrinsic evaluation on QA-support tasks underlines the benefits of Ascent++. A web interface, data, and code can be accessed at https://ascentpp.mpi-inf.mpg.de/.
Recent methods demonstrate that data augmentation using counterfactual knowledge can teach models the causal structure of a task, leading to robust and generalizable models. However, such counterfactual data often has a limited scale and diversity if crowdsourced and is computationally expensive to extend to new perturbation types if generated using supervised methods. To address this, we introduce a new framework called DISCO for automatically generating high-quality counterfactual data at scale. DISCO engineers prompts to generate phrasal perturbations with a large general language model. Then, a task-specific teacher model filters the generation to distill high-quality counterfactual data. We show that learning with this counterfactual data yields a comparatively small student model that is 6% (absolute) more robust and generalizes 5% better across distributions than baselines on various challenging evaluations. This model is also 15% more sensitive in differentiating original and counterfactual examples, on three evaluation sets written by human workers and via human-AI collaboration.
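The generate-then-filter idea behind this framework can be sketched loosely as follows: a general LM proposes perturbations of a marked span, and a task-specific teacher keeps only those that actually change the predicted label, i.e., genuine counterfactuals. The teacher_label() stub and the example texts are illustrative placeholders, not the paper's actual models or prompts.

```python
# Sketch of counterfactual filtering with a task-specific teacher model.
def teacher_label(premise: str, hypothesis: str) -> str:
    """Placeholder for a task-specific teacher (e.g. a fine-tuned NLI classifier)."""
    raise NotImplementedError

def keep_counterfactuals(premise: str, hypothesis: str,
                         candidates: list[str]) -> list[str]:
    """Keep only perturbed premises whose predicted label differs from the original."""
    original = teacher_label(premise, hypothesis)
    return [c for c in candidates if teacher_label(c, hypothesis) != original]

# `candidates` would come from prompting a large LM to rewrite a span of `premise`.
premise = "A man is playing guitar on stage."
hypothesis = "Someone is performing music."
candidates = ["A man is carrying a guitar case backstage."]
# filtered = keep_counterfactuals(premise, hypothesis, candidates)
```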
Predicting event sequences is essential for many real-world applications in information retrieval and natural language processing. Within event sequence prediction, future event generation (FEG) is a challenging task because it requires not only fluent text generation but also commonsense reasoning to maintain the logical coherence of the entire event story. In this paper, we propose COEP, a novel explainable FEG framework. It highlights and integrates two types of event knowledge: sequential knowledge of direct event-to-event relations, and inferential knowledge reflecting the intermediate character psychology between events (e.g., intents, causes, reactions), which intrinsically pushes the story forward. To alleviate the knowledge-forgetting issue, we design one module for each type of knowledge, IM and GM, which are combined via prompt tuning. First, IM focuses on understanding inferential knowledge to generate commonsense explanations and provide a soft prompt vector for GM. We also design a contrastive discriminator for better generalization. Second, GM generates future events by modeling direct sequential knowledge under the guidance of IM. Automatic and human evaluations demonstrate that our approach can generate more coherent, specific, and logical future events.
Pre-trained language models (PTLMs) have been shown to perform well on natural language tasks. Many prior works leverage structured commonsense, in the form of entities linked through labeled relations in knowledge graphs (KGs), to assist PTLMs. Retrieval approaches use the KG as a separate static module, which limits coverage since KGs contain finite knowledge. Generative methods train PTLMs on KG triples to increase the scale at which knowledge can be obtained. However, training on symbolic KG entities limits their applicability to tasks involving natural language text, where they ignore the overall context. To mitigate this, we propose CoSe-Co, a commonsense contextualizer conditioned on sentences as input, making it generically usable in tasks for generating knowledge relevant to the overall context of the input text. To train CoSe-Co, we propose a new dataset comprising sentence and commonsense knowledge pairs. The knowledge inferred by CoSe-Co is diverse and contains novel entities not present in the underlying KG. We augment multi-choice QA and open-ended commonsense reasoning tasks with the generated knowledge, leading to improvements over the current best methods on the CSQA, ARC, QASC, and OBQA datasets. We also demonstrate its applicability in improving a baseline model for the paraphrase generation task.
This paper addresses the quality issues in existing Twitter-based paraphrase datasets, and discusses the necessity of using two separate definitions of paraphrase for identification and generation tasks. We present a new Multi-Topic Paraphrase in Twitter (MultiPIT) corpus that consists of a total of 130k sentence pairs with crowdsourcing (MultiPIT_crowd) and expert (MultiPIT_expert) annotations using two different paraphrase definitions for paraphrase identification, in addition to a multi-reference test set (MultiPIT_NMR) and a large automatically constructed training set (MultiPIT_Auto) for paraphrase generation. With improved data annotation quality and task-specific paraphrase definition, the best pre-trained language model fine-tuned on our dataset achieves the state-of-the-art performance of 84.2 F1 for automatic paraphrase identification. Furthermore, our empirical results also demonstrate that the paraphrase generation models trained on MultiPIT_Auto generate more diverse and high-quality paraphrases compared to their counterparts fine-tuned on other corpora such as Quora, MSCOCO, and ParaNMT.
Storytelling and narrative are fundamental to human experience, intertwined with our social and cultural engagement. As such, researchers have long attempted to create systems that can generate stories automatically. In recent years, powered by deep learning and massive data resources, automatic story generation has shown significant advances. However, considerable challenges, like the need for global coherence in generated stories, still hamper generative models from reaching the same storytelling ability as human narrators. To tackle these challenges, many studies seek to inject structured knowledge into the generation process, which is referred to as structure knowledge-enhanced story generation. Incorporating external knowledge can enhance the logical coherence among story events, achieve better knowledge grounding, and alleviate over-generalization and repetition problems in stories. This survey provides the latest and comprehensive review of this research field: (i) we present a systematical taxonomy regarding how existing methods integrate structured knowledge into story generation; (ii) we summarize involved story corpora, structured knowledge datasets, and evaluation metrics; (iii) we give multidimensional insights into the challenges of knowledge-enhanced story generation and cast light on promising directions for future study.
Large language models have shown impressive performance on many natural language processing (NLP) tasks in a zero-shot setting. We ask whether these models exhibit commonsense understanding, a critical component of NLP applications, by evaluating them on four commonsense benchmarks. We find that the impressive zero-shot performance of large language models is largely due to dataset bias in our benchmarks. We also show that zero-shot performance is sensitive to hyperparameters and to the similarity of the benchmark to the pre-training datasets. Moreover, we do not observe substantial improvements when evaluating the models in a few-shot setting. Finally, in contrast to previous work, we find that leveraging explicit commonsense knowledge does not yield significant improvement.