Current large language models can perform reasonably well on complex tasks that require step-by-step reasoning with few-shot learning. Are these models applying reasoning skills they have learnt during pre-training and reasoning outside of their training context, or are they simply memorizing their training corpus at a finer granularity and have learnt to better understand their context? To tease apart these possibilities, we introduce ALERT, a benchmark and suite of analyses for assessing language models' reasoning ability, comparing pre-trained and finetuned models on complex tasks that require reasoning skills to solve. ALERT provides a test bed to assess any language model on fine-grained reasoning skills; it spans over 20 datasets and covers 10 different reasoning skills. We leverage ALERT to further investigate the role of finetuning. Through extensive empirical analysis, we find that language models learn more reasoning skills, such as textual entailment, abductive reasoning, and analogical reasoning, during the finetuning stage than during pre-training. We also find that when language models are finetuned they tend to overfit to the prompt template, which hurts model robustness and causes generalization problems.
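The reported overfitting to prompt templates can be probed with a simple robustness check: score the same examples under several paraphrased templates and compare accuracies. A minimal sketch is below; model_predict is a hypothetical stand-in for whatever inference call a model exposes and is not part of ALERT itself.

# Sketch: measure how much accuracy varies across prompt templates.
templates = [
    "Question: {q}\nAnswer:",
    "Q: {q}\nA:",
    "Please answer the following question.\n{q}\nThe answer is",
]

def template_accuracy(model_predict, examples, template):
    # examples: list of (question, gold_answer) pairs
    correct = 0
    for q, gold in examples:
        prediction = model_predict(template.format(q=q))
        correct += int(prediction.strip().lower() == gold.strip().lower())
    return correct / len(examples)

def robustness_gap(model_predict, examples):
    scores = [template_accuracy(model_predict, examples, t) for t in templates]
    return max(scores) - min(scores)  # a large gap suggests overfitting to one template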
Large language models, which are often trained for hundreds of thousands of compute days, have shown remarkable capabilities for zero- and few-shot learning. Given their computational cost, these models are difficult to replicate without significant capital. For the few that are available through APIs, no access is granted to the full model weights, making them difficult to study. We present Open Pre-trained Transformers (OPT), a suite of decoder-only pre-trained transformers ranging from 125M to 175B parameters, which we aim to fully and responsibly share with interested researchers. We show that OPT-175B is comparable to GPT-3, while requiring only 1/7th the carbon footprint to develop. We also release our logbook detailing the infrastructure challenges we faced, along with code for experimenting with all of the released models.
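As a concrete starting point, the released checkpoints can be loaded through the Hugging Face transformers library; the sketch below uses the smallest checkpoint, assuming the weights are fetched from the public hub under the identifier facebook/opt-125m.

# Sketch: greedy zero-shot generation with the smallest released OPT checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/opt-125m")
model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")

inputs = tokenizer("The capital of France is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=10, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))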
Mixture of Experts (MoE) layers enable efficient scaling of language models through conditional computation. This paper presents a detailed empirical study of how autoregressive MoE language models compare with dense models across a broad range of settings: out-of-domain language modeling, zero- and few-shot priming, and full fine-tuning. With the exception of fine-tuning, we find MoEs to be substantially more compute efficient. At more modest training budgets, MoEs can match the performance of dense models using roughly 4x less compute. This gap narrows at scale, but our largest MoE model (1.1T parameters) consistently outperforms a compute-equivalent dense model (6.7B parameters). Overall, this performance gap varies significantly across tasks and domains, suggesting that MoE and dense models generalize in different ways that merit further study. We make our code and models publicly available for research use.
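Conditional computation here means that each token is processed by only a small, learned subset of expert feed-forward networks. Below is a minimal sketch of a generic top-2 gated MoE layer in PyTorch; it illustrates the routing idea only, and the dimensions and expert count are illustrative rather than the configuration used in the study.

# Sketch: a generic top-2 gated mixture-of-experts feed-forward layer.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoELayer(nn.Module):
    def __init__(self, d_model=512, d_hidden=2048, num_experts=8, top_k=2):
        super().__init__()
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_hidden), nn.ReLU(), nn.Linear(d_hidden, d_model))
            for _ in range(num_experts)
        )
        self.gate = nn.Linear(d_model, num_experts)
        self.top_k = top_k

    def forward(self, x):  # x: (num_tokens, d_model)
        scores = F.softmax(self.gate(x), dim=-1)         # gate probabilities per expert
        weights, idx = scores.topk(self.top_k, dim=-1)   # route each token to its top-k experts
        weights = weights / weights.sum(dim=-1, keepdim=True)
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, k] == e                    # tokens routed to expert e at rank k
                if mask.any():
                    out[mask] += weights[mask, k, None] * expert(x[mask])
        return out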
Large autoregressive language models such as GPT-3 are few-shot learners that can perform a wide range of language tasks without fine-tuning. While these models are known to be able to jointly represent many different languages, their training data is dominated by English, potentially limiting their cross-lingual generalization. In this work, we train multilingual autoregressive language models on a balanced corpus covering a diverse set of languages and study their few- and zero-shot learning capabilities on a wide range of tasks. Our largest model, with 7.5 billion parameters, outperforms GPT-3 of comparable size in few-shot learning across more than 20 representative languages, on multilingual commonsense reasoning (with +7.4% absolute accuracy improvement in the 0-shot setting and +9.4% in the 4-shot setting) and natural language inference (+5.4% in each of the 0-shot and 4-shot settings). On the FLORES-101 machine translation benchmark, our model outperforms GPT-3, evaluated on 182 translation directions with 32 training examples, while surpassing the official supervised baselines in 45 directions. We present a detailed analysis of where the model succeeds and fails, showing in particular that it enables cross-lingual in-context learning on some tasks, while there is still room for improvement in surface form robustness and in adapting to tasks that do not have a natural cloze form. Finally, we evaluate our model on hate speech detection in five languages and find that it has limitations similar to comparably sized GPT-3 models.
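Few-shot prompting with such a model reduces to concatenating a handful of demonstrations ahead of the test instance and letting the model continue the text. The sketch below builds a cross-lingual 4-shot prompt and generates a label; the identifier facebook/xglm-564M is assumed to be one of the publicly released checkpoints on the Hugging Face hub, and the sentiment task is only an illustration.

# Sketch: cross-lingual 4-shot prompting with a released multilingual checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

demos = [
    ("The movie was wonderful.", "positive"),
    ("La película fue terrible.", "negative"),
    ("Le film était magnifique.", "positive"),
    ("Der Film war langweilig.", "negative"),
]
query = "Il film era fantastico."

prompt = "".join(f"Review: {text}\nSentiment: {label}\n\n" for text, label in demos)
prompt += f"Review: {query}\nSentiment:"

tokenizer = AutoTokenizer.from_pretrained("facebook/xglm-564M")
model = AutoModelForCausalLM.from_pretrained("facebook/xglm-564M")
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=2, do_sample=False)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))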
Commonsense knowledge can be leveraged to identify causal relations in text. In this work, we verbalize triples from ATOMIC2020, a wide-coverage commonsense reasoning knowledge graph, into natural language text and continually pretrain a BERT pretrained language model on this text. We evaluate the resulting model on answering commonsense causal reasoning questions. Our results show that a language model continually pretrained with commonsense reasoning knowledge outperforms our baselines on two commonsense causal reasoning benchmarks, COPA and BCOPA-CE, without additional improvements to the base model or fine-tuning on quality-enhanced data.
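The core data step is verbalizing (head, relation, tail) triples into plain sentences that can feed a masked-language-modeling objective. The sketch below shows one way to do that using hypothetical relation templates and the standard transformers MLM collator; the templates and example triples are illustrative, not the paper's exact verbalizations.

# Sketch: turn commonsense triples into text and prepare MLM examples for continued pretraining.
from transformers import AutoTokenizer, DataCollatorForLanguageModeling

# Hypothetical templates mapping a relation type to a sentence pattern.
RELATION_TEMPLATES = {
    "xEffect": "{head}. As a result, PersonX {tail}.",
    "Causes": "{head} causes {tail}.",
}

def verbalize(head, relation, tail):
    return RELATION_TEMPLATES[relation].format(head=head, tail=tail)

triples = [("PersonX pays the bill", "xEffect", "feels responsible"),
           ("heavy rain", "Causes", "flooding")]
sentences = [verbalize(*t) for t in triples]

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)
encoded = [tokenizer(s) for s in sentences]
batch = collator(encoded)  # input_ids with random masks plus labels for the MLM loss
print(batch["input_ids"].shape, batch["labels"].shape)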
Do language models have beliefs about the world? Dennett (1995) famously argued that even thermostats have beliefs, on the view that a belief is simply an informational state decoupled from any motivational state. In this paper, we discuss approaches to detecting when models have beliefs about the world, and we improve on methods for updating model beliefs to be more truthful, with a focus on methods based on learned optimizers or hypernetworks. Our main contributions include: (1) new metrics for evaluating belief-updating methods that focus on the logical consistency of beliefs, (2) a training objective for Sequential, Local, and Generalizing model updates (SLAG) that improves the performance of learned optimizers, and (3) the introduction of belief graphs, a new form of interface with language models that shows the interdependencies between model beliefs. Our experiments show that models possess belief-like qualities only to a limited extent, but update methods can both fix incorrect model beliefs and greatly improve their consistency. Although off-the-shelf optimizers are surprisingly strong belief-updating baselines, our learned optimizers can outperform them in more difficult settings than those considered in past work. Code is available at https://github.com/peterbhase/slag-belief-updating
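One family of the proposed metrics checks whether a model's beliefs stay logically consistent, for example whether it assigns opposite truth values to a statement and its negation. A minimal, model-agnostic sketch of such a consistency score follows; believes_true is a hypothetical callable returning the model's probability that a statement is true, not a function from the paper's code.

# Sketch: fraction of (statement, negation) pairs on which the model's beliefs are consistent.
def negation_consistency(believes_true, pairs, threshold=0.5):
    # believes_true(sentence) -> probability in [0, 1] that the model judges the sentence true.
    consistent = 0
    for statement, negation in pairs:
        p, p_neg = believes_true(statement), believes_true(negation)
        # Consistent if exactly one of the two is judged true.
        consistent += int((p > threshold) != (p_neg > threshold))
    return consistent / len(pairs)

pairs = [("Paris is in France.", "Paris is not in France."),
         ("Whales are fish.", "Whales are not fish.")]
# score = negation_consistency(my_model_scorer, pairs)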
Community Question Answering (CQA) fora such as Stack Overflow and Yahoo! Answers contain a rich resource of answers to a wide range of community-based questions. Each question thread can receive a large number of answers approaching it from different perspectives. One goal of answer summarization is to produce a summary that reflects the range of answer perspectives. A major obstacle for abstractive answer summarization is the absence of a dataset that provides supervision for producing such summaries. Recent works have proposed heuristics to create such data, but these are noisy and do not cover all perspectives present in the answers. This work introduces a novel dataset of 4,631 CQA threads for answer summarization, curated by professional linguists. Our pipeline gathers annotations for all subtasks involved in answer summarization, including selecting answer sentences relevant to the question, grouping these sentences by perspective, summarizing each perspective, and generating an overall summary. We analyze and benchmark state-of-the-art models on these subtasks and introduce a novel unsupervised approach for multi-perspective data augmentation that further boosts overall summarization performance according to automatic evaluation. Finally, we propose reinforcement learning rewards to improve factual consistency and answer coverage, and analyze areas for improvement.
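A rough, fully extractive approximation of the annotated pipeline (relevance filtering, perspective grouping, per-perspective summarization) can be mocked up with off-the-shelf clustering. The sketch below uses TF-IDF and k-means as stand-ins and simply keeps each cluster's most central sentence, which is far simpler than the paper's abstractive models.

# Sketch: group answer sentences by perspective and keep one representative sentence per group.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

def perspective_summary(sentences, n_perspectives=3):
    vectors = TfidfVectorizer().fit_transform(sentences)
    km = KMeans(n_clusters=n_perspectives, n_init=10, random_state=0).fit(vectors)
    summary = []
    for c in range(n_perspectives):
        members = np.where(km.labels_ == c)[0]
        # Pick the member sentence closest to the cluster centroid as that perspective's summary.
        dists = np.linalg.norm(vectors[members].toarray() - km.cluster_centers_[c], axis=1)
        summary.append(sentences[members[np.argmin(dists)]])
    return summary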
Semantic Textual Similarity (STS) measures the meaning similarity of sentences. Applications include machine translation (MT), summarization, generation, question answering (QA), short answer grading, semantic search, dialog and conversational systems. The STS shared task is a venue for assessing the current state-of-the-art. The 2017 task focuses on multilingual and cross-lingual pairs with one sub-track exploring MT quality estimation (MTQE) data. The task obtained strong participation from 31 teams, with 17 participating in all language tracks. We summarize performance and review a selection of well performing methods. Analysis highlights common errors, providing insight into the limitations of existing models. To support ongoing work on semantic representations, the STS Benchmark is introduced as a new shared training and evaluation set carefully selected from the corpus of English STS shared task data (2012-2017).
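At its simplest, an STS baseline embeds each sentence and scores the pair by cosine similarity, then correlates those scores with human judgments. The sketch below averages word vectors from a toy embedding table, which stands in for pretrained embeddings such as GloVe.

# Sketch: cosine similarity of averaged word vectors as a trivial STS baseline.
import numpy as np

# Placeholder embedding table; in practice this would be loaded from pretrained vectors.
EMB = {"a": np.array([0.1, 0.3]), "cat": np.array([0.9, 0.2]),
       "dog": np.array([0.8, 0.3]), "sleeps": np.array([0.2, 0.7])}

def sentence_vector(sentence):
    vectors = [EMB[w] for w in sentence.lower().split() if w in EMB]
    return np.mean(vectors, axis=0) if vectors else np.zeros(2)

def sts_score(s1, s2):
    v1, v2 = sentence_vector(s1), sentence_vector(s2)
    return float(v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-9))

print(sts_score("a cat sleeps", "a dog sleeps"))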
Here, we demonstrate how machine learning enables the prediction of comonomer reactivity ratios based on the molecular structure of monomers. We combined multi-task learning, multiple inputs, and a Graph Attention Network to build a model capable of predicting reactivity ratios from the monomers' chemical structures.
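A skeleton of such a multi-input, multi-task model in PyTorch Geometric is sketched below: two monomer graphs are encoded with shared GAT layers, the pooled embeddings are concatenated, and a small head regresses both reactivity ratios (r1, r2) jointly. Layer sizes and the graph featurization are illustrative assumptions, not the authors' configuration.

# Sketch: shared GAT encoder over two monomer graphs with a two-output regression head.
import torch
import torch.nn as nn
from torch_geometric.nn import GATConv, global_mean_pool

class ReactivityRatioModel(nn.Module):
    def __init__(self, num_node_features, hidden=64, heads=4):
        super().__init__()
        self.gat1 = GATConv(num_node_features, hidden, heads=heads)
        self.gat2 = GATConv(hidden * heads, hidden, heads=1)
        self.head = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.ReLU(), nn.Linear(hidden, 2))

    def encode(self, data):
        x = self.gat1(data.x, data.edge_index).relu()
        x = self.gat2(x, data.edge_index).relu()
        return global_mean_pool(x, data.batch)   # one vector per monomer graph

    def forward(self, monomer_a, monomer_b):
        z = torch.cat([self.encode(monomer_a), self.encode(monomer_b)], dim=-1)
        return self.head(z)                      # predicted (r1, r2) for the comonomer pair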
A fundamental characteristic common to both human vision and natural language is their compositional nature. Yet, despite the performance gains contributed by large vision and language pretraining, we find that - across 6 architectures trained with 4 algorithms on massive datasets - they exhibit little compositionality. To arrive at this conclusion, we introduce a new compositionality evaluation benchmark CREPE which measures two important aspects of compositionality identified by cognitive science literature: systematicity and productivity. To measure systematicity, CREPE consists of three test datasets. The three test sets are designed to test models trained on three of the popular training datasets: CC-12M, YFCC-15M, and LAION-400M. They contain 385K, 385K, and 373K image-text pairs and 237K, 210K, and 178K hard negative captions. To test productivity, CREPE contains 17K image-text pairs with nine different complexities plus 246K hard negative captions with atomic, swapping, and negation foils. The datasets are generated by repurposing the Visual Genome scene graphs and region descriptions and applying handcrafted templates and GPT-3. For systematicity, we find that model performance decreases consistently when novel compositions dominate the retrieval set, with Recall@1 dropping by up to 8%. For productivity, models' retrieval success decays as complexity increases, frequently nearing random chance at high complexity. These results hold regardless of model and training dataset size.
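The retrieval metric behind these numbers is simple: for each image, the ground-truth caption must outscore every hard-negative caption for the example to count toward Recall@1. A small sketch follows, assuming similarity scores have already been computed (for example, cosine similarities from any image-text encoder).

# Sketch: Recall@1 over retrieval sets built from one true caption plus hard negatives.
import numpy as np

def recall_at_1(true_scores, negative_scores):
    # true_scores: (N,) similarity of each image to its ground-truth caption.
    # negative_scores: (N, M) similarities of each image to its M hard-negative captions.
    hits = true_scores > negative_scores.max(axis=1)
    return float(hits.mean())

true_scores = np.array([0.82, 0.40, 0.75])
negative_scores = np.array([[0.60, 0.55], [0.45, 0.30], [0.70, 0.71]])
print(recall_at_1(true_scores, negative_scores))  # 2 of the 3 examples rank the true caption first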