We introduce a language generation task grounded in a popular video game environment. KNUDGE (KNowledge Constrained User-NPC Dialogue GEneration) involves generating dialogue trees conditioned on an ontology captured in natural language passages providing quest and entity specifications. KNUDGE is constructed from side quest dialogues drawn directly from game data of Obsidian Entertainment's The Outer Worlds, leading to real-world complexities in generation: (1) dialogues are branching trees as opposed to linear chains of utterances; (2) utterances must remain faithful to the game lore: character personas, backstories, and entity relationships; and (3) a dialogue must accurately reveal new quest-related details to the human player. We report results for supervised and in-context learning techniques, finding there is significant room for future work on creating realistic game-quality dialogues.
Answering complex questions often requires multi-step reasoning in order to obtain the final answer. Most research into decompositions of complex questions involves open-domain systems, which have shown success in using these decompositions for improved retrieval. In the machine reading setting, however, the question of when decompositions are helpful is understudied. We conduct experiments on decompositions in machine reading to unify recent work in this space, using a range of models and datasets. We find that decompositions can be helpful in the few-shot case, giving several points of improvement in exact match scores. However, we also show that when models are given access to datasets with a few hundred or more examples, decompositions are not helpful (and can actually be detrimental). Thus, our analysis implies that models can learn decompositions implicitly even with limited data.
Recent work in open-domain question answering (ODQA) has shown that adversarial poisoning of the input contexts can cause large drops in accuracy for production systems. However, little to no work has proposed methods to defend against these attacks. To do so, we introduce a new method that uses query augmentation to search for a diverse set of retrieved passages that could answer the original question. We integrate these new passages into the model through the design of a novel confidence method, comparing the predicted answer to its appearance in the retrieved contexts (what we call Confidence from Answer Redundancy, i.e., CAR). Together these methods allow for a simple but effective way to defend against poisoning attacks and provide gains of 5-20% exact match across varying levels of data poisoning.
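The redundancy signal described above can be sketched in a few lines. This is a minimal illustration, not the paper's exact scoring method: the function name and the passage-fraction normalization are assumptions made here for clarity.

```python
# Hedged sketch of a CAR-style confidence score: how often does the
# predicted answer actually appear across the retrieved passages?
def car_confidence(predicted_answer: str, passages: list) -> float:
    """Fraction of retrieved passages containing the predicted answer.

    A poisoned context may induce an answer that few other passages
    support, yielding a low redundancy score.
    """
    if not passages:
        return 0.0
    answer = predicted_answer.lower()
    hits = sum(answer in passage.lower() for passage in passages)
    return hits / len(passages)
```

A low score flags answers supported by only one (possibly poisoned) passage, which is the intuition the abstract describes.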
Despite the superior performance brought by vision-and-language pretraining, it remains unclear whether learning with multi-modal data can help understand each individual modality. In this work, we investigate how language can help with visual representation learning from a probing perspective. Specifically, we compare vision-and-language and vision-only models by probing their visual representations on a broad range of tasks, in order to assess the quality of the learned representations in a fine-grained manner. Interestingly, our probing results suggest that vision-and-language models are better at label prediction tasks like object and attribute prediction, while vision-only models are stronger at dense prediction tasks that require more localized information. With further analysis using detailed metrics, our study suggests that language helps vision models learn better semantics, but not localization. Code is released at https://github.com/Lizw14/visual_probing.
Visual Question Answering (VQA) models often perform poorly on out-of-distribution data and struggle with domain generalization. Due to the multi-modal nature of this task, multiple factors of variation are intertwined, making generalization difficult to analyze. This motivates us to introduce a virtual benchmark, Super-CLEVR, where different factors in VQA domain shifts can be isolated so that their effects can be studied independently. Four factors are considered: visual complexity, question redundancy, concept distribution and concept compositionality. With controllably generated data, Super-CLEVR enables us to test VQA methods in situations where the test data differs from the training data along each of these axes. We study four existing methods, including two neural symbolic methods NSCL and NSVQA, and two non-symbolic methods FiLM and mDETR; and our proposed method, probabilistic NSVQA (P-NSVQA), which extends NSVQA with uncertainty reasoning. P-NSVQA outperforms other methods on three of the four domain shift factors. Our results suggest that disentangling reasoning and perception, combined with probabilistic uncertainty, forms a strong VQA model that is more robust to domain shifts. The dataset and code are released at https://github.com/Lizw14/Super-CLEVR.
Task-oriented semantic parsing is increasingly being used in user-facing applications, making measuring the calibration of parsing models especially important. We examine the calibration characteristics of six models across three model families on two common English semantic parsing datasets, finding that many models are reasonably well-calibrated and that there is a trade-off between calibration and performance. Based on confidence scores across three models, we propose and release new challenge splits of the two datasets we examine. We then illustrate the ways a calibrated model can be useful in balancing common trade-offs in task-oriented parsing. In a simulated annotator-in-the-loop experiment, we show that using model confidence allows us to improve the accuracy on validation programs by 9.6% (absolute) with annotator interactions on only 2.2% of tokens. Using sequence-level confidence scores, we then examine how we can optimize the trade-off between a parser's usability and safety. We show that confidence-based thresholding can reduce the number of incorrect low-confidence programs executed by 76%; however, this comes at a cost to usability. We propose the DidYouMean system which balances usability and safety. We conclude by calling for calibration to be included in the evaluation of semantic parsing systems, and release a library for computing calibration metrics.
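The confidence-based thresholding described above can be sketched as a simple triage rule. This is an illustrative assumption about the mechanism, not the DidYouMean system itself: the function name, threshold value, and action labels are invented here for clarity.

```python
# Hedged sketch of confidence-based triage for a predicted parse:
# execute high-confidence programs, ask the user to confirm otherwise.
def triage(program: str, confidence: float, threshold: float = 0.8) -> str:
    """Map a parse and its sequence-level confidence to an action."""
    if confidence >= threshold:
        return "execute"   # safe enough to run directly
    return "confirm"       # low confidence: surface a "did you mean ...?" check
```

Raising the threshold blocks more incorrect programs (safety) but interrupts the user more often (usability), which is exactly the trade-off the abstract measures.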
We present an approach to systematic reasoning that produces human-interpretable proof trees grounded in a factbase. Our solution resembles the style of classic Prolog-based inference engines, where we replace handcrafted rules with a combination of neural language modeling, guided generation, and semiparametric dense retrieval. This novel reasoning engine, NELLIE, dynamically instantiates interpretable inference rules that capture and score entailment (de)compositions over natural language statements. NELLIE provides competitive performance on scientific QA datasets that require structured explanations over multiple facts.
Existing multiparty dialogue datasets for coreference resolution are nascent, and many challenges remain unaddressed. We create a large-scale dataset for this task, Multilingual Multiparty Coref (MMC), based on TV transcripts. Because gold-quality subtitles are available in multiple languages, we propose reusing the annotations to create silver coreference data in other languages (Chinese and Farsi) via annotation projection. On the gold (English) data, off-the-shelf models perform relatively poorly on MMC, suggesting that MMC has broader coverage of multiparty coreference than prior datasets. On the silver data, we find success both in using it for data augmentation and in training from scratch, which effectively simulates the zero-shot cross-lingual setting.
Pretrained multilingual encoders enable zero-shot cross-lingual transfer, but often produce unreliable models that exhibit high performance variance on the target language. We hypothesize that this high variance results from zero-shot cross-lingual transfer solving an under-specified optimization problem. We show that any linearly interpolated model between the source-language monolingual model and the source + target bilingual model has equally low source-language generalization error, yet the target-language generalization error decreases smoothly and linearly as we move from the monolingual toward the bilingual model, suggesting that the model struggles to identify a solution that is good for both source and target languages using the source language alone. Additionally, we show that the zero-shot solution lies in a non-flat region of the target-language error generalization surface, causing the high variance.
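The linear interpolation probed above can be written down directly: blend each parameter of the source-only model with the corresponding parameter of the bilingual model. This is a minimal sketch under the assumption that parameters are stored as a flat name-to-value mapping; the function and variable names are illustrative, not from the paper.

```python
# Hedged sketch of linearly interpolating between two models' parameters.
# alpha = 0 recovers the monolingual (source-only) model,
# alpha = 1 recovers the source + target bilingual model.
def interpolate_params(theta_mono: dict, theta_bi: dict, alpha: float) -> dict:
    """Per-parameter convex combination of two models with shared structure."""
    return {name: (1.0 - alpha) * theta_mono[name] + alpha * theta_bi[name]
            for name in theta_mono}
```

Evaluating source- and target-language error along a sweep of alpha values is what reveals the flat source-error path and the smoothly decreasing target error described in the abstract.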
We introduce BenchCLAMP, a benchmark to evaluate constrained language model parsing, which produces semantic outputs based on the analysis of input text through constrained decoding of prompted or fine-tuned language models. Developers of pretrained language models currently benchmark on classification, span extraction, and free-text generation tasks. Language parsing is neglected in language model evaluation because of the complexity of handling task-specific architectures and representations. Recent work has shown that generation from prompted or fine-tuned language models can perform well when the output is restricted to valid semantic representations. BenchCLAMP includes context-free grammars for six semantic parsing datasets with varied output meaning representations, as well as a constrained decoding interface to generate outputs covered by these grammars. We provide low-, medium-, and high-resource splits for each dataset, enabling accurate comparison of various language models under different data regimes. Our benchmark supports both prompt-based learning and fine-tuning, and provides an easy-to-use toolkit for language model developers to evaluate semantic parsing.