We present the Berkeley Crossword Solver, a state-of-the-art approach for automatically solving crossword puzzles. Our system generates answer candidates for each crossword clue using neural question answering models and then combines loopy belief propagation with local search to find full puzzle solutions. Compared to existing approaches, our system improves exact puzzle accuracy on crosswords from The New York Times to 82% and obtains 99.9% letter accuracy on themeless puzzles. Furthermore, in 2021, a hybrid of our system and the existing Dr.Fill system outperformed all human competitors at the American Crossword Puzzle Tournament for the first time. To facilitate research on question answering and crossword solving, we analyze our system's remaining errors and release a dataset of over six million question-answer pairs.
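As a concrete illustration of the second stage (our construction, not code from the paper), the toy sketch below combines hypothetical per-clue candidate scores into a grid-consistent assignment; exhaustive search over two crossing slots stands in for the paper's loopy belief propagation and local search.

```python
# Toy illustration: per-clue answer candidates (with hypothetical QA
# scores) are combined into an assignment that is consistent at the
# grid crossing. Real solvers use loopy BP plus local search instead
# of the exhaustive search used here.
from itertools import product

# One across slot and one down slot intersecting at across[2] == down[0].
across = {"CAIRO": 0.6, "CARGO": 0.3}  # {candidate answer: score}
down = {"IGLOO": 0.7, "RADAR": 0.2}

best, best_score = None, 0.0
for (a, sa), (d, sd) in product(across.items(), down.items()):
    if a[2] == d[0] and sa * sd > best_score:  # crossing letters must agree
        best, best_score = (a, d), sa * sd
print(best)  # ('CAIRO', 'IGLOO'): 'I' matches at the crossing
```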
This paper proposes to tackle open-domain question answering using Wikipedia as the unique knowledge source: the answer to any factoid question is a text span in a Wikipedia article. This task of machine reading at scale combines the challenges of document retrieval (finding the relevant articles) with that of machine comprehension of text (identifying the answer spans from those articles). Our approach combines a search component based on bigram hashing and TF-IDF matching with a multi-layer recurrent neural network model trained to detect answers in Wikipedia paragraphs. Our experiments on multiple existing QA datasets indicate that (1) both modules are highly competitive with respect to existing counterparts and (2) multitask learning using distant supervision on their combination is an effective complete system on this challenging task.
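For a concrete picture of the retrieval component, here is a minimal sketch of TF-IDF matching over hashed unigram and bigram features using scikit-learn; the three-document corpus and the query are illustrative, not from the paper.

```python
# Minimal sketch: hash unigram/bigram counts into a fixed-size space
# (no explicit vocabulary needed for millions of articles), weight by
# TF-IDF, and rank documents by cosine similarity to the question.
from sklearn.feature_extraction.text import HashingVectorizer, TfidfTransformer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "Paris is the capital and most populous city of France.",
    "The Eiffel Tower is a wrought-iron lattice tower in Paris.",
    "Mount Everest is Earth's highest mountain above sea level.",
]

hasher = HashingVectorizer(ngram_range=(1, 2), n_features=2**20,
                           alternate_sign=False, norm=None)
tfidf = TfidfTransformer()
doc_vecs = tfidf.fit_transform(hasher.transform(docs))

query = "What is the capital of France?"
q_vec = tfidf.transform(hasher.transform([query]))
scores = cosine_similarity(q_vec, doc_vecs).ravel()
print(docs[scores.argmax()])  # top-ranked article for the question
```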
The goal of automatic question answering (QA) systems is to provide answers to user queries in a time-efficient manner. The answers are usually found in databases (or knowledge bases) or in collections of documents commonly referred to as corpora. Over the past few decades, with the proliferation of acquired knowledge, the growth of new scientific articles in the biomedical domain has been exponential. Consequently, it is difficult even for domain experts to keep track of all the information in the field. With improvements in commercial search engines, users can type in their queries and obtain a small set of the most relevant documents, and in some cases relevant snippets from those documents. However, manually finding the required information or answer can still be tedious and time-consuming. This calls for the development of efficient QA systems that aim to provide precise and exact answers to natural language questions in the biomedical domain. In this paper, we introduce the basic methodologies used for developing general-domain QA systems, then thoroughly survey the different aspects of biomedical QA systems, including benchmark datasets and several proposed approaches that use both structured databases and text collections. We also discuss the limitations of current systems and explore potential avenues for further advancement.
Natural Language Generation (NLG) has improved exponentially in recent years thanks to the development of sequence-to-sequence deep learning technologies such as Transformer-based language models. This advancement has led to more fluent and coherent NLG, leading to improved development in downstream tasks such as abstractive summarization, dialogue generation and data-to-text generation. However, it is also apparent that deep learning based generation is prone to hallucinate unintended text, which degrades the system performance and fails to meet user expectations in many real-world scenarios. To address this issue, many studies have been presented in measuring and mitigating hallucinated texts, but these have never been reviewed in a comprehensive manner before. In this survey, we thus provide a broad overview of the research progress and challenges in the hallucination problem in NLG. The survey is organized into two parts: (1) a general overview of metrics, mitigation methods, and future directions; and (2) an overview of task-specific research progress on hallucinations in the following downstream tasks, namely abstractive summarization, dialogue generation, generative question answering, data-to-text generation, machine translation, and visual-language generation. This survey serves to facilitate collaborative efforts among researchers in tackling the challenge of hallucinated texts in NLG.
We present TriviaQA, a challenging reading comprehension dataset containing over 650K question-answer-evidence triples. TriviaQA includes 95K question-answer pairs authored by trivia enthusiasts and independently gathered evidence documents, six per question on average, that provide high quality distant supervision for answering the questions. We show that, in comparison to other recently introduced large-scale datasets, TriviaQA (1) has relatively complex, compositional questions, (2) has considerable syntactic and lexical variability between questions and corresponding answer-evidence sentences, and (3) requires more cross-sentence reasoning to find answers. We also present two baseline algorithms: a feature-based classifier and a state-of-the-art neural network that performs well on SQuAD reading comprehension. Neither approach comes close to human performance (23% and 40% vs. 80%), suggesting that TriviaQA is a challenging testbed that is worth significant future study.
We introduce a new type of programming challenge called programming puzzles, as an objective and comprehensive evaluation of program synthesis, and release an open-source dataset of Python Programming Puzzles (P3). Each puzzle is defined by a short Python program $f$, and the goal is to find an input that makes $f$ return True. The puzzles are objective in that each one is fully specified by the source code of its verifier $f$, so evaluating $f$ is all that is needed to test a candidate solution. They require neither an answer key nor input/output examples, and they do not depend on natural language understanding. The dataset is comprehensive in that it spans problems of a range of difficulties and domains, from trivial string manipulation problems, to classic programming puzzles (e.g., the Tower of Hanoi), to interview and competitive-programming problems (e.g., dynamic programming), to longstanding open problems in algorithms and mathematics (e.g., factoring). We develop baseline enumerative program synthesis, GPT-3, and Codex solvers that are capable of solving puzzles, even without access to any reference solutions, by learning from their own past solutions. Codex performs best, solving up to 18% of the 397 test problems with a single try per problem and 80% of the problems with 1,000 tries per problem. In a small user study, we find a positive correlation between puzzle-solving performance and coding experience, and between puzzle difficulty for humans and for AI solvers. Further improvements on P3 could therefore have a significant impact on many areas of program synthesis.
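For illustration, here is a hypothetical puzzle in the style of P3 (not drawn from the dataset) together with a tiny enumerative baseline: the verifier $f$ fully specifies the problem, and any input on which $f$ returns True counts as a solution.

```python
from itertools import product

# A hypothetical P3-style puzzle: the verifier alone defines the task.
def f(x: str) -> bool:
    return len(x) == 10 and x.count("a") == 3

# Tiny enumerative solver: try candidate inputs until one verifies.
def solve(verifier, alphabet="ab", length=10):
    for candidate in product(alphabet, repeat=length):
        s = "".join(candidate)
        if verifier(s):
            return s
    return None

print(solve(f))  # 'aaabbbbbbb'; any input with f(input) == True counts
```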
Visual understanding goes well beyond object recognition. With one glance at an image, we can effortlessly imagine the world beyond the pixels: for instance, we can infer people's actions, goals, and mental states. While this task is easy for humans, it is tremendously difficult for today's vision systems, requiring higher-order cognition and commonsense reasoning about the world. We formalize this task as Visual Commonsense Reasoning. Given a challenging question about an image, a machine must answer correctly and then provide a rationale justifying its answer. Next, we introduce a new dataset, VCR, consisting of 290k multiple choice QA problems derived from 110k movie scenes. The key recipe for generating non-trivial and high-quality problems at scale is Adversarial Matching, a new approach to transform rich annotations into multiple choice questions with minimal bias. Experimental results show that while humans find VCR easy (over 90% accuracy), state-of-the-art vision models struggle (∼45%). To move towards cognition-level understanding, we present a new reasoning engine, Recognition to Cognition Networks (R2C), that models the necessary layered inferences for grounding, contextualization, and reasoning. R2C helps narrow the gap between humans and machines (∼65%); still, the challenge is far from solved, and we provide analysis that suggests avenues for future work.
Language models have achieved remarkable performance on a wide variety of tasks that require natural language understanding. Nevertheless, state-of-the-art models generally struggle with tasks that require quantitative reasoning, such as solving college-level problems in mathematics, science, and engineering. To help close this gap, we introduce Minerva, a large language model pretrained on general natural language data and further trained on technical content. Without using external tools, the model achieves state-of-the-art performance on technical benchmarks. We also evaluate our model on over two hundred undergraduate-level problems in physics, biology, chemistry, economics, and other sciences that require quantitative reasoning, and find that the model can correctly answer nearly a third of them.
Cryptic crosswords, the dominant crossword variety in the UK, are a promising target for advancing NLP systems that seek to process semantically complex, highly compositional language. Cryptic clues read like fluent natural language but are adversarially composed of two parts: a definition and a wordplay cipher that requires character-level manipulations. Expert humans use creative intelligence to solve cryptics, flexibly combining linguistic, world, and domain knowledge. In this paper, we make two main contributions. First, we present a dataset of cryptic clues as a challenging new benchmark for NLP systems that seek to process compositional language in more creative, human-like ways. After showing that three non-neural approaches and T5, a state-of-the-art neural language model, fail to achieve good performance, we make our second main contribution: a novel curriculum approach in which the model is first fine-tuned on related tasks such as unscrambling words. We also introduce a challenging data split, examine the meta-linguistic capabilities of subword-tokenized models, and investigate model systematicity by perturbing the wordplay part of clues, showing that T5 exhibits behavior partially consistent with human solving strategies. Although our curriculum approach considerably improves on the T5 baseline, our best model still fails to generalize to the extent that humans can. Cryptic crosswords thus remain an unsolved challenge for NLP systems and a potential source of future innovation.
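As a small illustration of what one such curriculum step might look like (our sketch, not the paper's code), the snippet below synthesizes (scrambled word, word) pairs for an unscrambling task that a model could be fine-tuned on before seeing cryptic clues.

```python
import random

random.seed(0)  # deterministic scrambles for the example

def scramble(word: str) -> str:
    letters = list(word)
    random.shuffle(letters)
    return "".join(letters)

# Hypothetical training pairs for the auxiliary unscrambling task.
vocab = ["listen", "cipher", "anagram"]
pairs = [(f"unscramble: {scramble(w)}", w) for w in vocab]
for source, target in pairs:
    print(source, "->", target)
```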
The Chinese character riddle is a challenging riddle game in which the solution is a single character. A riddle describes the pronunciation, shape, and meaning of the solution character using rhetorical techniques. In this paper, we propose a Chinese character riddle dataset covering the majority of common simplified Chinese characters, built by crawling riddles from the Web and by generating brand-new ones. In the generation stage, we feed the pinyin, the decomposition, and the explanation of the solution character into the generation model, and obtain multiple riddle descriptions for each character under consideration. The generated riddles are then filtered manually, and the final dataset, CC-Riddle, consists of both human-written riddles and filtered generated riddles. Furthermore, we build a character-riddle QA system based on the dataset and find that existing models struggle to solve such tricky questions. CC-Riddle is publicly available.
Recent work on open-domain question answering has shown a large discrepancy in model performance between novel test questions and those that substantially overlap with training questions. However, it remains unclear which aspects of novel questions make them challenging. Drawing on studies of systematic generalization, we introduce and annotate questions according to three categories that measure different levels and kinds of generalization: training-set overlap, compositional generalization (comp-gen), and novel-entity generalization (novel-entity). When evaluating six popular parametric and non-parametric models, we find that for the established Natural Questions and TriviaQA datasets, even the strongest model performance on comp-gen/novel-entity questions is 13.1/5.4% and 9.6/1.5% lower, respectively, than on the full test set, indicating the challenge posed by these types of questions. Furthermore, we show that while non-parametric models can handle questions containing novel entities relatively well, they struggle with questions that require compositional generalization. Finally, we find that the key sources of difficulty are: cascading errors from the retrieval component, frequency of question patterns, and frequency of entities.
Humans gather information through conversations involving a series of interconnected questions and answers. For machines to assist in information gathering, it is therefore essential to enable them to answer conversational questions. We introduce CoQA, a novel dataset for building Conversational Question Answering systems. Our dataset contains 127k questions with answers, obtained from 8k conversations about text passages from seven diverse domains. The questions are conversational, and the answers are free-form text with their corresponding evidence highlighted in the passage. We analyze CoQA in depth and show that conversational questions have challenging phenomena not present in existing reading comprehension datasets, e.g., coreference and pragmatic reasoning. We evaluate strong dialogue and reading comprehension models on CoQA. The best system obtains an F1 score of 65.4%, which is 23.4 points behind human performance (88.8%), indicating there is ample room for improvement. We present CoQA as a challenge to the community at https://stanfordnlp.github.io/coqa.
When answering a question, people often draw upon their rich world knowledge in addition to the particular context. Recent work has focused primarily on answering questions given some relevant document or context, and required very little general background. To investigate question answering with prior knowledge, we present COMMONSENSEQA: a challenging new dataset for commonsense question answering. To capture common sense beyond associations, we extract from ConceptNet (Speer et al., 2017) multiple target concepts that have the same semantic relation to a single source concept. Crowd-workers are asked to author multiple-choice questions that mention the source concept and discriminate in turn between each of the target concepts. This encourages workers to create questions with complex semantics that often require prior knowledge. We create 12,247 questions through this procedure and demonstrate the difficulty of our task with a large number of strong baselines. Our best baseline is based on BERT-large (Devlin et al., 2018) and obtains 56% accuracy, well below human performance, which is 89%.
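A minimal sketch of this sourcing step, using illustrative ConceptNet-style triples rather than a live query to the knowledge graph:

```python
# Pick target concepts that share one semantic relation to a single
# source concept; crowd-workers then write a question that mentions the
# source and discriminates between the targets. Triples are illustrative.
triples = [
    ("river", "AtLocation", "waterfall"),
    ("river", "AtLocation", "bridge"),
    ("river", "AtLocation", "valley"),
    ("river", "UsedFor", "fishing"),
]

source, relation = "river", "AtLocation"
targets = [t for (s, r, t) in triples if s == source and r == relation]
print(targets)  # ['waterfall', 'bridge', 'valley']
```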
The Winograd Schema Challenge, a set of twin sentences involving pronoun reference disambiguation that seem to require the use of commonsense knowledge, was proposed by Hector Levesque in 2011. By 2019, a number of AI systems based on large pre-trained transformer-based language models, fine-tuned on these kinds of problems, achieved better than 90% accuracy. In this paper, we review the history of the Winograd Schema Challenge and assess its significance.
Natural language processing tasks, such as question answering, machine translation, reading comprehension, and summarization, are typically approached with supervised learning on task-specific datasets. We demonstrate that language models begin to learn these tasks without any explicit supervision when trained on a new dataset of millions of webpages called WebText. When conditioned on a document plus questions, the answers generated by the language model reach 55 F1 on the CoQA dataset, matching or exceeding the performance of 3 out of 4 baseline systems without using the 127,000+ training examples. The capacity of the language model is essential to the success of zero-shot task transfer, and increasing it improves performance in a log-linear fashion across tasks. Our largest model, GPT-2, is a 1.5B parameter Transformer that achieves state of the art results on 7 out of 8 tested language modeling datasets in a zero-shot setting but still underfits WebText. Samples from the model reflect these improvements and contain coherent paragraphs of text. These findings suggest a promising path towards building language processing systems which learn to perform tasks from their naturally occurring demonstrations.
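A minimal sketch of the conditioning format for zero-shot QA, assuming the GPT-2 checkpoint distributed through the Hugging Face transformers library (the paper itself predates that API); the document and question are illustrative.

```python
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

document = ("Tom went to the market on Saturday. He bought three apples "
            "and a loaf of bread before heading home.")
prompt = f"{document}\nQ: What did Tom buy?\nA:"

# Greedy decoding; the continuation after "A:" serves as the answer.
out = generator(prompt, max_new_tokens=15, do_sample=False)
print(out[0]["generated_text"][len(prompt):].strip())
```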
Answering natural language questions using information from tables (TableQA) has been of considerable recent interest. In many applications, tables appear not in isolation but embedded in unstructured text. Often, a question is best answered by matching parts of it to table cell contents or to unstructured text spans, and extracting answers from either source. This leads to the new space of TextTableQA problems introduced by the HybridQA dataset. Existing adaptations of table representations to transformer-based reading comprehension (RC) architectures do not address the distinct modalities of the two representations within a single system. Training such systems is further challenged by the need for distant supervision: to reduce the cognitive burden, training instances typically include only the question and answer, where the answer may match multiple table rows and text passages. This leads to a noisy multi-instance training regime that involves not only the rows of the table but also the spans of linked text. We respond to these challenges by proposing MITQA, a new TextTableQA system that explicitly models the distinct but closely related probability spaces of table-row selection and text-span selection. Our experiments demonstrate the superiority of our approach over recent baselines. The method currently sits at the top of the HybridQA leaderboard with a held-out test set, achieving an absolute improvement of 21% on both EM and F1 over previously published results.
Since BERT (Devlin et al., 2018), learning contextualized word embeddings has been the de-facto standard in NLP. However, progress in learning contextualized phrase embeddings has been hindered by the lack of human-annotated phrase-level benchmarks. To fill this gap, we propose PiC, a dataset of ~28K noun phrases accompanied by their contextual Wikipedia pages, together with a suite of three tasks of increasing difficulty for evaluating the quality of phrase embeddings. We find that training on our dataset improves the accuracy of ranking models and notably pushes question answering (QA) models to near-human accuracy, namely 95% exact match (EM) on semantic search given a query phrase and a passage. Interestingly, we find evidence that this impressive performance arises because the QA models learn to better capture the common meaning of a phrase regardless of its actual context. That is, on our Phrase Sense Disambiguation (PSD) task, the accuracy of SOTA models drops substantially (to 60% EM), failing to distinguish between two different senses of the same phrase in two different contexts. Further results on our three-task benchmark suggest that learning contextualized phrase embeddings remains an interesting, open challenge.
Open-domain question answering relies on efficient passage retrieval to select candidate contexts, where traditional sparse vector space models, such as TF-IDF or BM25, are the de facto method. In this work, we show that retrieval can be practically implemented using dense representations alone, where embeddings are learned from a small number of questions and passages by a simple dual-encoder framework. When evaluated on a wide range of open-domain QA datasets, our dense retriever outperforms a strong Lucene-BM25 system greatly by 9%-19% absolute in terms of top-20 passage retrieval accuracy, and helps our end-to-end QA system establish new state-of-the-art on multiple open-domain QA benchmarks. The code and trained models have been released at https://github.com/facebookresearch/DPR.
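A minimal sketch of the dual-encoder scoring, using the released DPR checkpoints through the Hugging Face transformers wrappers; the passages and question are illustrative.

```python
import torch
from transformers import (DPRContextEncoder, DPRContextEncoderTokenizer,
                          DPRQuestionEncoder, DPRQuestionEncoderTokenizer)

# Separate encoders for questions and passages; relevance is the inner
# product of their fixed-size embeddings.
q_tok = DPRQuestionEncoderTokenizer.from_pretrained(
    "facebook/dpr-question_encoder-single-nq-base")
q_enc = DPRQuestionEncoder.from_pretrained(
    "facebook/dpr-question_encoder-single-nq-base")
c_tok = DPRContextEncoderTokenizer.from_pretrained(
    "facebook/dpr-ctx_encoder-single-nq-base")
c_enc = DPRContextEncoder.from_pretrained(
    "facebook/dpr-ctx_encoder-single-nq-base")

passages = [
    "Paris is the capital of France.",
    "The Great Wall of China is thousands of kilometres long.",
]
with torch.no_grad():
    p_emb = c_enc(**c_tok(passages, return_tensors="pt",
                          padding=True)).pooler_output
    q_emb = q_enc(**q_tok("What is the capital of France?",
                          return_tensors="pt")).pooler_output

scores = q_emb @ p_emb.T                 # inner-product relevance
print(passages[scores.argmax().item()])  # top-ranked passage
```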
In the last year, new models and methods for pretraining and transfer learning have driven striking performance improvements across a range of language understanding tasks. The GLUE benchmark, introduced a little over one year ago, offers a single-number metric that summarizes progress on a diverse set of such tasks, but performance on the benchmark has recently surpassed the level of non-expert humans, suggesting limited headroom for further research. In this paper we present SuperGLUE, a new benchmark styled after GLUE with a new set of more difficult language understanding tasks, a software toolkit, and a public leaderboard. SuperGLUE is available at super.gluebenchmark.com.
Wikidata is a frequently updated, community-driven, and multilingual knowledge graph. Hence, Wikidata is an attractive basis for entity linking, which is evident from the recent increase in published papers. This survey focuses on four subjects: (1) Which Wikidata entity linking datasets exist, how widely are they used, and how were they constructed? (2) Do the characteristics of Wikidata matter for the design of entity linking datasets, and if so, how? (3) How do current entity linking approaches exploit the specific characteristics of Wikidata? (4) Which Wikidata characteristics are unexploited by existing entity linking approaches? This survey shows that current Wikidata-specific entity linking datasets do not differ in their annotation scheme from those for other knowledge graphs. Thus, the potential of multilingual and time-dependent datasets, which are a natural fit for Wikidata, is not being exploited. Furthermore, we show that most entity linking approaches use Wikidata in the same way as any other knowledge graph, missing the chance to leverage Wikidata-specific characteristics to improve quality. Almost all approaches employ specific properties such as labels and sometimes descriptions, but ignore characteristics such as the hyper-relational structure. Hence, there is still room for improvement, for example by incorporating hyper-relational graph embeddings or type information. Many approaches also include information from Wikipedia, which is easily combined with Wikidata and provides the valuable textual context that Wikidata lacks.