Causal chain reasoning (CCR) is an essential ability for many decision-making AI systems, which requires the model to build reliable causal chains by connecting causal pairs. However, CCR suffers from two main transitive problems: threshold effect and scene drift. In other words, the causal pairs to be spliced may have a conflicting threshold boundary or scenario. To address these issues, we propose a novel Reliable Causal chain reasoning framework~(ReCo), which introduces exogenous variables to represent the threshold and scene factors of each causal pair within the causal chain, and estimates the threshold and scene contradictions across exogenous variables via structural causal recurrent neural networks~(SRNN). Experiments show that ReCo outperforms a series of strong baselines on both Chinese and English CCR datasets. Moreover, by injecting reliable causal chain knowledge distilled by ReCo, BERT can achieve better performance on four downstream causal-related tasks than BERT models enhanced by other kinds of knowledge.
translated by Google Translate
Mining causality from text is a complex and crucial natural language understanding task corresponding to human cognition. Existing studies on its solutions can be divided into two main categories: feature-engineering-based and neural-model-based methods. In this paper, we find that the former has incomplete coverage and inherent errors but provides prior knowledge, while the latter leverages contextual information but its causal inference is insufficient. To handle these limitations, we propose a novel causality detection model named MCDN, which explicitly models causal reasoning and, moreover, exploits the advantages of both kinds of methods. Specifically, we adopt multi-head self-attention to acquire semantic features at the word level and integrate the SCRN to infer causality at the segment level. To the best of our knowledge, this is the first time a relation network has been applied to the causality detection task. Experimental results show that: 1) the proposed approach achieves outstanding performance on causality detection; 2) further analysis demonstrates the effectiveness and robustness of MCDN.
Script event prediction aims to predict the subsequent event given the context. This requires the capability to infer the correlations between events. Recent works have attempted to improve event correlation reasoning by using pretrained language models and incorporating external knowledge~(e.g., discourse relations). Though promising results have been achieved, some challenges still remain. First, the pretrained language models adopted by current works ignore event-level knowledge, resulting in an inability to capture the correlations between events well. Second, modeling correlations between events with discourse relations is limited because it can only capture explicit correlations between events with discourse markers, and cannot capture many implicit correlations. To this end, we propose a novel generative approach for this task, in which a pretrained language model is fine-tuned with an event-centric pretraining objective and predicts the next event within a generative paradigm. Specifically, we first introduce a novel event-level blank infilling strategy as the learning objective to inject event-level knowledge into the pretrained language model, and then design a likelihood-based contrastive loss for fine-tuning the generative model. Instead of using an additional prediction layer, we perform prediction by using sequence likelihoods generated by the generative model. Our approach models correlations between events in a soft way without any external knowledge. The likelihood-based prediction eliminates the need to use additional networks to make predictions and is somewhat interpretable since it scores each word in the event. Experimental results on the multi-choice narrative cloze~(MCNC) task demonstrate that our approach achieves better results than other state-of-the-art baselines. Our code will be available at \url{https://github.com/zhufq00/mcnc}.
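The likelihood-based prediction described above can be illustrated with a minimal sketch: each candidate event is scored by the sum of the log-probabilities a generative model assigns to its tokens, and the highest-scoring candidate is chosen. The function names and probabilities here are hypothetical, not the authors' implementation.

```python
import math

def sequence_log_likelihood(token_probs):
    """Sum of per-token log-probabilities assigned by a generative model."""
    return sum(math.log(p) for p in token_probs)

def predict_next_event(candidate_token_probs):
    """Pick the candidate event whose token sequence the model finds most likely.

    `candidate_token_probs` maps each candidate event string to the per-token
    probabilities a (hypothetical) generative LM assigned to its tokens.
    """
    scores = {c: sequence_log_likelihood(p)
              for c, p in candidate_token_probs.items()}
    return max(scores, key=scores.get)

candidates = {
    "she ordered coffee": [0.4, 0.5, 0.6],
    "she left the cafe": [0.2, 0.3, 0.3, 0.1],
}
# Prints the candidate with the higher total log-likelihood.
print(predict_next_event(candidates))
```

Because the score decomposes over tokens, each word's contribution can be inspected individually, which is the sense in which the abstract calls the prediction "somewhat interpretable".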
As a crucial component of human cognition, cause-effect relations appear frequently in text, and curating cause-effect relations from text helps build causal networks for predictive tasks. Existing causality extraction techniques include knowledge-based, statistical machine learning (ML)-based, and deep-learning-based approaches. Each method has its advantages and weaknesses. For example, knowledge-based methods are interpretable but require extensive manual domain knowledge and have poor cross-domain applicability. Statistical machine learning methods are more automated because of natural language processing (NLP) toolkits. However, feature engineering is labor-intensive, and the toolkits may lead to error propagation. In the past few years, deep learning techniques have attracted substantial attention from NLP researchers because of their powerful representation learning ability and the rapid increase in computational resources. Their limitations include high computational costs and a lack of adequate annotated training data. In this paper, we conduct a comprehensive survey of causality extraction. We initially introduce the primary forms existing in causality extraction: explicit intra-sentential causality, implicit causality, and inter-sentential causality. Next, we list the benchmark datasets and modeling assessment methods for causality extraction. Then, we present a structured overview of the three techniques with their representative systems. Finally, we highlight existing open challenges with their potential directions.
Besides entity-centric knowledge, usually organized as knowledge graphs (KGs), events are also an essential kind of knowledge in the world, which triggers the spring of event-centric knowledge representation forms like event KGs (EKGs). They play an increasingly important role in many machine learning and artificial intelligence applications, such as intelligent search, question answering, recommendation, and text generation. This paper provides a comprehensive survey of EKGs from the views of history, ontology, instance, and application. Specifically, to characterize EKGs thoroughly, we focus on their history, definitions, schema induction, acquisition, related representative graphs/systems, and applications. The development processes and trends are studied therein. We further summarize perspective directions to facilitate future research on EKGs.
Current causal text mining datasets vary in objectives, data coverage, and annotation schemes. These inconsistent efforts prevent fair comparisons of modeling capabilities and model performance. Few datasets include cause-effect span annotations, which are needed for end-to-end causality extraction. Therefore, we propose UniCausal, a unified benchmark for causal text mining across three tasks: causal sequence classification, cause-effect span detection, and causal pair classification. We consolidated and aligned annotations from six high-quality, human-annotated corpora, resulting in 58,720, 12,144, and 69,165 examples for the three tasks, respectively. Since the definition of causality can be subjective, our framework is designed to allow researchers to work on some or all of the datasets and tasks. As an initial benchmark, we adapted BERT pretrained models to our tasks and generated baseline scores. We obtained a 70.10% binary F1 score for sequence classification, a 52.42% macro F1 score for span detection, and an 84.68% binary F1 score for pair classification.
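The binary F1 scores reported above follow the standard definition over a positive class; a small self-contained sketch (not tied to the UniCausal codebase):

```python
def binary_f1(gold, pred):
    """Binary F1 over parallel 0/1 label lists (positive class = 1)."""
    tp = sum(1 for g, p in zip(gold, pred) if g == 1 and p == 1)
    fp = sum(1 for g, p in zip(gold, pred) if g == 0 and p == 1)
    fn = sum(1 for g, p in zip(gold, pred) if g == 1 and p == 0)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# One true positive, one false positive, one false negative -> P = R = F1 = 0.5
print(binary_f1([1, 1, 0, 0], [1, 0, 1, 0]))
```

The macro F1 used for span detection would instead average per-class F1 scores over all classes.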
Triple extraction is an essential task in information extraction for natural language processing and knowledge graph construction. In this paper, we revisit the end-to-end triple extraction task as sequence generation. Since generative triple extraction may struggle to capture long-term dependencies and generate unfaithful triples, we introduce a novel model, contrastive triple extraction with a generative transformer. Specifically, we introduce a single shared transformer module for encoder-decoder-based generation. To generate faithful results, we propose a novel triplet contrastive training objective. Furthermore, we introduce two mechanisms to further improve model performance (i.e., batch-wise dynamic attention-masking and triple-wise calibration). Experimental results on three datasets (i.e., NYT, WebNLG, and MIE) show that our approach achieves better performance than that of baselines.
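The triplet contrastive objective is not spelled out here; a generic margin-based sketch of the idea, pushing the gold triple's model score above scores of corrupted triples, might look as follows. This is an assumption for illustration, not the paper's exact loss.

```python
def contrastive_loss(gold_score, negative_scores, margin=1.0):
    """Hinge-style contrastive objective: the gold triple's score should
    exceed every corrupted (negative) triple's score by at least `margin`.

    `gold_score` and `negative_scores` are hypothetical model scores
    (e.g., sequence log-likelihoods) for the gold and corrupted triples.
    """
    return sum(max(0.0, margin - gold_score + neg) for neg in negative_scores)

# Gold scored well above both negatives: zero loss.
print(contrastive_loss(2.0, [0.5, 0.0]))
```

Minimizing such a loss discourages the generator from assigning high likelihood to unfaithful triples.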
The emotion-cause pair extraction (ECPE) task aims to extract emotions and causes as pairs from documents. We observe that the relative distance distribution of emotions and causes is extremely imbalanced in the typical ECPE dataset. Existing methods set a fixed-size window to capture the relations between neighboring clauses. However, they neglect the effective semantic connections between distant clauses, leading to poor generalization ability on position-insensitive data. To alleviate this problem, we propose a novel multi-granularity semantic aware graph model (MGSAG) to jointly incorporate fine-grained and coarse-grained semantic features, without distance limitations. In particular, we first explore the semantic dependencies between clauses and keywords extracted from the document, which convey fine-grained semantic features, obtaining keyword-enhanced clause representations. Besides, a clause graph is also established to model the coarse-grained semantic relations between clauses. Experimental results indicate that MGSAG surpasses existing state-of-the-art ECPE models. Especially, MGSAG significantly outperforms other models on position-insensitive data.
Transformer-based pretrained models have achieved great progress in recent years, becoming one of the most important backbones in natural language processing. Recent work has shown that the attention mechanism inside Transformers may not be necessary, and both convolutional neural networks and multi-layer-perceptron-based models have been investigated as Transformer alternatives. In this paper, we consider a graph recurrent network for language model pretraining, which builds a graph structure for each sequence with local token-level communication, together with a sentence-level representation decoupled from the other tokens. The original model performs well in domain-specific text classification under supervised training; however, its potential for learning transferable knowledge in a self-supervised manner has not been fully exploited. We fill this gap by optimizing the architecture and verifying its effectiveness in more general language understanding tasks, in both English and Chinese. As for model efficiency, instead of the quadratic complexity of Transformer-based models, our model has linear complexity and performs more efficiently during inference. Moreover, we find that our model can generate more diverse outputs with less contextualized feature redundancy than existing attention-based models.
The visual question answering (VQA) task utilizes visual images and language analysis to answer a textual question about an image. It has been a popular research topic with an increasing number of real-world applications in the last decade. This paper describes our recent research on AliceMind-MMU (ALIbaba's Collection of Encoder-decoders from Machine IntelligeNce lab of Damo academy - MultiMedia Understanding), which obtains similar or even slightly better results than humans on VQA. This is achieved by systematically improving the VQA pipeline, including: (1) pretraining with comprehensive visual and textual feature representations; (2) effective cross-modal interaction with learning to attend; and (3) a novel knowledge mining framework with specialized expert modules for the complex VQA task. Treating different types of visual questions with the corresponding expertise plays an important role in boosting the performance of our VQA architecture up to the human level. Extensive experiments and analysis are conducted to demonstrate the effectiveness of this new research work.
Given the success of transferring language models to NLP tasks, we ask whether the full BERT model is always the best, and whether there exists a simple but effective method to find winning tickets in state-of-the-art deep neural networks without complex computation. We construct a series of BERT-based models of different sizes and compare them on eight binary classification tasks. The results show that there really exist smaller subnetworks that perform better than the full model. We then present a further study and propose a simple method to shrink BERT appropriately before fine-tuning. Some extended experiments indicate that our method can save time and storage overhead remarkably, with little or even no loss of accuracy.
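One plausible reading of "shrinking before fine-tuning" is simply keeping a prefix of the encoder layers; the abstract does not specify the exact procedure, so the sketch below is an assumption for illustration only. It shows how dropping layers directly reduces the parameter budget carried into fine-tuning.

```python
def shrink_layers(layer_param_counts, keep):
    """Keep only the first `keep` encoder layers before fine-tuning.

    `layer_param_counts` lists the parameter count of each layer in the
    full model; returns the smaller stack and the fraction of parameters
    retained. A hypothetical stand-in for an actual subnetwork search.
    """
    kept = layer_param_counts[:keep]
    return kept, sum(kept) / sum(layer_param_counts)

# A 12-layer stack shrunk to 6 layers retains half the layer parameters.
kept, fraction = shrink_layers([7_000_000] * 12, keep=6)
print(len(kept), fraction)
```

In practice a winning-ticket search would also consider per-layer importance rather than position alone; this sketch only conveys the size/accuracy trade-off being probed.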
This paper presents a deep neural architecture for natural language sentence matching (NLSM) by adding a deep recursive encoder on top of BERT, called BERT with Deep Recursive Encoder (BERT-DRE). Our analysis of model behavior shows that BERT still does not capture the full complexity of text, so a deep recursive encoder is applied on top of BERT. Three Bi-LSTM layers with residual connections are used to design the recursive encoder, and an attention module is used on top of this encoder. To obtain the final vector, a pooling layer consisting of average and max pooling is used. We experiment with our model on four benchmarks, SNLI, FarsTail, MultiNLI, SciTail, and a novel Persian religious questions dataset. This paper focuses on improving the BERT results on the NLSM task. In this regard, comparisons between BERT-DRE and BERT are performed, and it is shown that BERT-DRE outperforms BERT in all cases. The BERT algorithm on the religious dataset achieved an accuracy of 89.70%, and the BERT-DRE architecture improved this to 90.29% on the same dataset.
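The final pooling layer described above, concatenating average and max pooling over the encoder's hidden states, can be sketched as follows; the vectors and dimensions are illustrative only.

```python
def avg_max_pool(hidden_states):
    """Concatenate mean-pooling and max-pooling over a sequence of hidden
    vectors (lists of floats), producing a single fixed-size vector."""
    dim = len(hidden_states[0])
    avg = [sum(h[i] for h in hidden_states) / len(hidden_states)
           for i in range(dim)]
    mx = [max(h[i] for h in hidden_states) for i in range(dim)]
    return avg + mx  # concatenation doubles the dimensionality

# Two 2-dimensional hidden states -> one 4-dimensional sentence vector.
print(avg_max_pool([[1.0, 2.0], [3.0, 4.0]]))
```

Average pooling preserves information spread across all tokens, while max pooling highlights the strongest feature activations; concatenating both is a common design choice for sentence matching.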
We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers. Unlike recent language representation models (Peters et al., 2018a; Radford et al., 2018), BERT is designed to pretrain deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT model can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications. BERT is conceptually simple and empirically powerful. It obtains new state-of-the-art results on eleven natural language processing tasks, including pushing the GLUE score to 80.5% (7.7% point absolute improvement), MultiNLI accuracy to 86.7% (4.6% absolute improvement), SQuAD v1.1 question answering Test F1 to 93.2 (1.5 point absolute improvement) and SQuAD v2.0 Test F1 to 83.1 (5.1 point absolute improvement).
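BERT's masked-language-model pretraining corrupts roughly 15% of input positions: of the chosen tokens, 80% are replaced with [MASK], 10% with a random token, and 10% are kept unchanged. A minimal sketch with simplified whitespace-free tokenization (illustrative only, not the original implementation):

```python
import random

def mask_tokens(tokens, mask_rate=0.15, seed=0):
    """BERT-style masking: select ~`mask_rate` of positions as prediction
    targets; replace 80% of them with [MASK], 10% with a random token from
    the sequence, and keep 10% unchanged."""
    rng = random.Random(seed)
    out = list(tokens)
    targets = []  # indices the model must predict
    for i in range(len(tokens)):
        if rng.random() < mask_rate:
            targets.append(i)
            r = rng.random()
            if r < 0.8:
                out[i] = "[MASK]"
            elif r < 0.9:
                out[i] = rng.choice(tokens)  # random replacement
            # else: keep the original token (model still predicts it)
    return out, targets

tokens = ["the", "cat", "sat", "on", "the", "mat"] * 5
corrupted, targets = mask_tokens(tokens, seed=1)
print(len(targets), "positions selected for prediction")
```

Keeping some targets unchanged or randomly replaced prevents a mismatch between pretraining (where [MASK] appears) and fine-tuning (where it does not).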
Although pretrained language models (LMs) have brought significant improvements on many NLP tasks, there is increasing attention to exploring the capabilities of LMs and interpreting their predictions. However, existing works usually focus only on certain capabilities with specific downstream tasks. There is a lack of datasets for directly evaluating the masked-word prediction performance and the interpretability of pretrained LMs. To fill the gap, we propose a novel evaluation benchmark that provides annotated data in both English and Chinese. It tests LM abilities in multiple dimensions, i.e., grammar, semantics, knowledge, reasoning, and computation. In addition, it provides carefully annotated token-level rationales that satisfy sufficiency and compactness. It contains perturbed instances for each original instance, so that rationale consistency under perturbations can be used as the metric for faithfulness, a perspective of interpretability. We conduct experiments on several widely used pretrained LMs. The results show that they perform worse on the dimensions of knowledge and computation. Moreover, their plausibility in all dimensions is far from satisfactory, especially when the rationale is short. In addition, the pretrained LMs we evaluated are not robust on syntax-aware data. We will release this evaluation benchmark at \url{http://xyz} and hope it can facilitate research progress on pretrained LMs.
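The faithfulness metric above compares the rationale extracted for an instance with the rationale extracted for its perturbed counterpart. Assuming rationales are treated as token sets (the benchmark's exact metric may differ), a simple consistency measure is the Jaccard overlap:

```python
def rationale_consistency(original_rationale, perturbed_rationale):
    """Jaccard overlap between the rationale tokens for an instance and for
    its perturbed counterpart; higher values indicate a more faithful model
    whose explanations are stable under perturbation."""
    a, b = set(original_rationale), set(perturbed_rationale)
    if not a and not b:
        return 1.0  # two empty rationales are trivially consistent
    return len(a & b) / len(a | b)

# One of three distinct tokens is shared -> consistency 1/3.
print(rationale_consistency(["not", "good"], ["not", "bad"]))
```

A faithful model should rely on the same evidence before and after a meaning-preserving perturbation, which is what this score rewards.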
Machine reading comprehension has aroused wide concern, since it explores the potential of models for text understanding. To further equip machines with reasoning capability, the challenging task of logical reasoning has been proposed. Previous works on logical reasoning have proposed strategies to extract logical units from different aspects. However, there still remains the challenge of modeling the long-distance dependencies among logical units. Also, it is required to uncover the logical structures of the text and further fuse the discrete logic with the continuous text embeddings. To tackle the above issues, we propose an end-to-end model, Logiformer, which utilizes a two-branch graph transformer network for logical reasoning over text. Firstly, we introduce different extraction strategies to split the text into two sets of logical units, and construct a logical graph and a syntax graph respectively. The logical graph models the causal relations for the logical branch, while the syntax graph captures the co-occurrence relations for the syntax branch. Secondly, to model the long-distance dependencies, the node sequence from each graph is fed into a fully connected graph transformer structure. The two adjacency matrices are viewed as attention biases for the graph transformer layers, which map the discrete logical structures into the continuous text embedding space. Thirdly, a dynamic gate mechanism and a question-aware self-attention module are introduced before answer prediction to update the features. The reasoning process provides interpretability consistent with human cognition by employing the logical units. Experimental results show the superiority of our model, which outperforms the state-of-the-art single models on two logical reasoning benchmarks.
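Using an adjacency matrix as an attention bias can be sketched as follows. This hard-masking variant is a simplification of a learned additive bias: non-adjacent node pairs receive a large negative term before the softmax, so attention mass flows along graph edges.

```python
import math

def attention_with_graph_bias(scores, adjacency_row, bias=-1e9):
    """Softmax over one query's attention scores, where positions that are
    not adjacent in the graph (adjacency 0) get a large negative additive
    bias, effectively suppressing attention to them."""
    biased = [s if adj else s + bias for s, adj in zip(scores, adjacency_row)]
    m = max(biased)  # subtract the max for numerical stability
    exps = [math.exp(b - m) for b in biased]
    z = sum(exps)
    return [e / z for e in exps]

# Equal raw scores, but the third node is not adjacent: its weight vanishes.
print(attention_with_graph_bias([1.0, 1.0, 1.0], [1, 1, 0]))
```

A learned bias (rather than a fixed large negative constant) would let the model soften this constraint, which is closer to the additive-bias formulation described above.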
Nowadays, time-stamped web documents related to a general news query flood the Internet, and timeline summarization aims to concisely summarize the evolution trajectory of events along the timeline. Unlike traditional document summarization, timeline summarization needs to model the time-series information of the input events and summarize important events in chronological order. To tackle this challenge, in this paper, we propose a Unified Timeline Summarizer (UTS) that can generate abstractive and extractive timeline summaries in time order. Concretely, in the encoder part, we propose a graph-based event encoder that relates multiple events according to their content dependency and learns a global representation of each event. In the decoder part, to ensure the chronological order of the abstractive summary, we propose to extract the feature of event-level attention in its generation process, with sequential information retained, and use it to simulate the evolutionary attention of the ground-truth summary. The event-level attention can also be used to assist in extractive summarization, where the extracted summary also comes in time sequence. We augment the previous Chinese large-scale timeline summarization dataset and collect a new English timeline dataset. Extensive experiments conducted on these datasets and on the out-of-domain Timeline17 dataset show that UTS achieves state-of-the-art performance in terms of both automatic and human evaluations.
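The chronological-order requirement on timeline summaries can be stated minimally: whatever events are selected, the output must be sorted by timestamp. The sketch below only illustrates this ordering contract on extracted (date, summary) pairs, not the attention mechanism that enforces it inside UTS.

```python
from datetime import date

def order_timeline(events):
    """Arrange extracted (date, summary) pairs in chronological order, as a
    timeline summary must present events in time sequence."""
    return sorted(events, key=lambda e: e[0])

timeline = order_timeline([
    (date(2021, 5, 2), "verdict announced"),
    (date(2021, 1, 9), "trial begins"),
])
for d, summary in timeline:
    print(d.isoformat(), summary)
```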
This paper offers a comprehensive review of the research on natural language generation (NLG) over the past two decades, especially in relation to data-to-text generation and text-to-text generation deep learning methods, as well as new applications of NLG technology. The survey aims to (a) give an up-to-date synthesis of deep learning research on the core tasks in NLG, as well as the architectures adopted in the field; (b) detail meticulously and comprehensively various NLG tasks and datasets, and draw attention to the challenges in NLG evaluation, focusing on different evaluation methods and their relationships; (c) highlight some future emphases and relatively recent research issues that arise due to the increased synergy between NLG and other artificial intelligence areas, such as computer vision, text, and computational creativity.
Although existing machine reading comprehension models are making rapid progress on many datasets, they are far from robust. In this paper, we propose an understanding-oriented machine reading comprehension model to address three kinds of robustness issues: over-sensitivity, over-stability, and generalization. Specifically, we first use a natural language inference module to help the model understand the accurate semantic meanings of input questions, so as to address the issues of over-sensitivity and over-stability. Then, in the machine reading comprehension module, we propose a memory-guided multi-head attention method that can further understand the semantic meanings of input questions and passages well. Third, we propose a multi-language learning mechanism to address the issue of generalization. Finally, these modules are integrated with a multi-task learning based method. We evaluate our model on three benchmark datasets that are designed to measure models' robustness, including DuReader (robust) and two SQuAD-related datasets. Extensive experiments show that our model can well address the mentioned three kinds of robustness issues. And it achieves much better results than the compared state-of-the-art models on all these datasets, even under some extreme and unfair evaluations. The source code of our work is available at: https://github.com/neukg/robustmrc.
Stance detection models may tend to rely on dataset bias in the text part as a shortcut and thus fail to sufficiently learn the interaction between the targets and texts. Recent debiasing methods usually treated features learned by small models or big models at earlier steps as bias features and proposed to exclude the branch learning those bias features during inference. However, most of these methods fail to disentangle the ``good'' stance features and ``bad'' bias features in the text part. In this paper, we investigate how to mitigate dataset bias in stance detection. Motivated by causal effects, we leverage a novel counterfactual inference framework, which enables us to capture the dataset bias in the text part as the direct causal effect of the text on stances, and to reduce the dataset bias in the text part by subtracting the direct text effect from the total causal effect. Novelly, we model bias features as features that correlate with the stance labels but fail on intermediate stance reasoning subtasks, and propose an adversarial bias learning module to model the bias more accurately. To verify whether our model could better model the interaction between texts and targets, we test our model on recently proposed test sets to evaluate the understanding of the task from various aspects. Experiments demonstrate that our proposed method (1) could better model the bias features, and (2) outperforms existing debiasing baselines on both the original dataset and most of the newly constructed test sets.
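The counterfactual subtraction described above can be sketched with plain logits: the text-only branch's logits (the direct text effect) are subtracted from the full model's logits (the total effect) at inference time. The `alpha` scaling factor and the numbers are illustrative assumptions, not the paper's exact formulation.

```python
def debiased_logits(total_effect, direct_text_effect, alpha=1.0):
    """Counterfactual-inference sketch: remove the direct text (bias) effect
    from the total causal effect by elementwise subtraction of logits.

    `total_effect` comes from the full text+target model, while
    `direct_text_effect` comes from a hypothetical text-only bias branch.
    """
    return [t - alpha * d for t, d in zip(total_effect, direct_text_effect)]

# The biased model favors class 0, but once the text-only effect is
# subtracted, the prediction flips to class 1.
print(debiased_logits([2.0, 1.0], [1.5, 0.0]))
```

Tuning `alpha` trades off how aggressively the text-only shortcut is removed against retaining genuinely useful textual evidence.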
Storytelling and narrative are fundamental to human experience, intertwined with our social and cultural engagement. As such, researchers have long attempted to create systems that can generate stories automatically. In recent years, powered by deep learning and massive data resources, automatic story generation has shown significant advances. However, considerable challenges, like the need for global coherence in generated stories, still hamper generative models from reaching the same storytelling ability as human narrators. To tackle these challenges, many studies seek to inject structured knowledge into the generation process, which is referred to as structure knowledge-enhanced story generation. Incorporating external knowledge can enhance the logical coherence among story events, achieve better knowledge grounding, and alleviate over-generalization and repetition problems in stories. This survey provides the latest and comprehensive review of this research field: (i) we present a systematical taxonomy regarding how existing methods integrate structured knowledge into story generation; (ii) we summarize involved story corpora, structured knowledge datasets, and evaluation metrics; (iii) we give multidimensional insights into the challenges of knowledge-enhanced story generation and cast light on promising directions for future study.