Recognizing textual entailment (RTE) is a significant problem with a reasonably active research community, and the approaches proposed for it are diverse, spanning many different directions. For Vietnamese, the RTE problem is relatively new, but it plays a vital role in natural language understanding systems. Currently, methods based on contextual word representation learning models have produced outstanding results for this problem. However, Vietnamese is a semantically rich language. Therefore, in this paper, we present an experiment that combines semantic word representation, obtained through the SRL task, with the contextual representation of BERT-related models for the RTE problem. The experimental results support conclusions about the influence and role of semantic representation in Vietnamese natural language understanding. They show that the semantic-aware contextual representation model performs about 1% better than the model that does not incorporate semantic representation. In addition, the effects across data domains in Vietnamese are also larger than those in English. This result also shows the positive influence of SRL on the RTE problem in Vietnamese.
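To make the idea of a semantic-aware contextual representation concrete, the following is a minimal sketch (PyTorch with Hugging Face transformers) of one way SRL tag embeddings can be concatenated with BERT token states before entailment classification. The checkpoint name, tag-inventory size, and fusion strategy are illustrative assumptions, not the paper's exact architecture.

```python
# Minimal sketch: fusing SRL tag embeddings with BERT token representations
# for entailment classification. Model name, tag set size, and the fusion
# strategy are assumptions; the paper's exact design may differ.
import torch
import torch.nn as nn
from transformers import AutoModel

class SemanticAwareEntailmentModel(nn.Module):
    def __init__(self, model_name="bert-base-multilingual-cased",
                 num_srl_tags=30, srl_dim=32, num_labels=2):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        hidden = self.encoder.config.hidden_size
        # Embedding table for SRL tags (e.g. ARG0, ARG1, ARGM-TMP, O, ...)
        self.srl_embed = nn.Embedding(num_srl_tags, srl_dim)
        self.classifier = nn.Linear(hidden + srl_dim, num_labels)

    def forward(self, input_ids, attention_mask, srl_tag_ids):
        # Contextual token representations from BERT
        token_states = self.encoder(input_ids=input_ids,
                                    attention_mask=attention_mask).last_hidden_state
        # Concatenate per-token SRL tag embeddings with the BERT states
        fused = torch.cat([token_states, self.srl_embed(srl_tag_ids)], dim=-1)
        # Mean-pool over non-padding tokens and classify entailment
        mask = attention_mask.unsqueeze(-1).float()
        pooled = (fused * mask).sum(1) / mask.sum(1)
        return self.classifier(pooled)
```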
Machine Reading Comprehension has become one of the most advanced and popular research topics in Natural Language Processing in recent years. Classifying question answerability is a significant sub-task in machine reading comprehension; however, it has received relatively little study. Retro-Reader is one of the studies that has addressed this problem effectively. However, the encoders of most traditional machine reading comprehension models in general, and of Retro-Reader in particular, do not fully exploit the contextual semantic information of the context. Inspired by SemBERT, we use semantic role labels from the SRL task to add semantics to pre-trained language models such as mBERT, XLM-R, and PhoBERT. This experiment was conducted to compare the influence of semantics on answerability classification for Vietnamese machine reading comprehension. Additionally, we hope this experiment will enhance the encoder of the Retro-Reader model's Sketchy Reading Module. The Retro-Reader encoder improved with semantics was applied to the Vietnamese machine reading comprehension task for the first time and obtained positive results.
One of the common traits of past and present approaches for Semantic Role Labeling (SRL) is that they rely upon discrete labels drawn from a predefined linguistic inventory to classify predicate senses and their arguments. However, we argue this need not be the case. In this paper, we present an approach that leverages Definition Modeling to introduce a generalized formulation of SRL as the task of describing predicate-argument structures using natural language definitions instead of discrete labels. Our novel formulation takes a first step towards placing interpretability and flexibility foremost, and yet our experiments and analyses on PropBank-style and FrameNet-style, dependency-based and span-based SRL also demonstrate that a flexible model with an interpretable output does not necessarily come at the expense of performance. We release our software for research purposes at https://github.com/SapienzaNLP/dsrl.
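As an illustration of the generative formulation, the sketch below shows how a sequence-to-sequence model could generate a natural-language definition for a marked predicate-argument pair. The input markup, prompt prefix, and checkpoint are illustrative assumptions, not the released DSRL format.

```python
# Hedged sketch of the generative formulation: instead of predicting a discrete
# role label, a seq2seq model generates a natural-language definition for a
# marked predicate/argument. Markup and checkpoint are illustrative assumptions.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

marked = "The chef <p> cooked </p> the meal for <a> the guests </a> ."
inputs = tokenizer("describe argument: " + marked, return_tensors="pt")
definition_ids = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(definition_ids[0], skip_special_tokens=True))
# A model trained for this task would output something like
# "the entity benefiting from the cooking"; the base checkpoint will not.
```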
Semantic role labeling (SRL) aims to recognize the predicate-argument structure of a sentence and can be decomposed into two sub-tasks: predicate disambiguation and argument labeling. Prior work handles these two tasks independently, which ignores the semantic connection between them. In this paper, we propose to use a machine reading comprehension (MRC) framework to bridge this gap. We formalize predicate disambiguation as multiple-choice machine reading comprehension, where the descriptions of a given predicate's candidate senses are used as options from which the correct sense is selected. The selected predicate sense is then used to determine the semantic roles for that predicate, and these roles are used to construct the query of another MRC model for argument labeling. In this way, we are able to exploit both predicate semantics and semantic-role semantics for argument labeling. We also propose to select a subset of all possible semantic roles for computational efficiency. Experiments show that the proposed framework achieves state-of-the-art or comparable results relative to prior work. Code is available at https://github.com/shannonai/mrc-srl.
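A small sketch of how the two sub-tasks can be cast as MRC queries is given below; the query templates and example are illustrative assumptions, not the paper's exact wording.

```python
# Sketch of casting the two SRL sub-tasks as MRC queries.
# The templates below are illustrative assumptions.

def sense_disambiguation_query(sentence, predicate, candidate_senses):
    """Multiple-choice MRC: each candidate sense description is an option."""
    return {
        "context": sentence,
        "question": f"What is the sense of the predicate '{predicate}'?",
        "options": candidate_senses,  # e.g. PropBank sense descriptions
    }

def argument_labeling_query(sentence, predicate, sense_description, role):
    """Span-extraction MRC: the chosen sense informs the role query."""
    question = (f"Given that '{predicate}' means '{sense_description}', "
                f"which span fills the role {role}?")
    return {"context": sentence, "question": question}

example = sense_disambiguation_query(
    "The chef cooked the meal quickly.",
    "cooked",
    ["cook.01: prepare food by heating", "cook.02: fabricate or falsify"],
)
print(example["question"], example["options"])
```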
The academic literature of the social sciences documents human civilization and studies problems of human society. With the large-scale growth of this literature, quickly finding existing research on relevant issues has become an urgent need for researchers. Previous studies, such as SciBERT, have shown that pre-training on domain-specific text can improve the performance of natural language processing tasks in those domains. However, there is no pre-trained language model for the social sciences, so this paper proposes a model pre-trained on a large number of abstracts from Social Science Citation Index (SSCI) journals. These models, available on GitHub (https://github.com/s-t-full-text-knowledge-mining/ssci-bert), perform excellently on discipline classification and abstract structure-function recognition tasks with social science literature.
Prepositions are frequently occurring polysemous words. Disambiguating them is crucial in semantic role labeling, question answering, textual entailment, and noun compound paraphrasing. In this paper, we propose a novel preposition sense disambiguation (PSD) method that does not use any linguistic tools. In a supervised setting, the machine learning model is presented with sentences in which prepositions have been annotated with senses. These senses are IDs from The Preposition Project (TPP). We use the hidden-layer representations from pre-trained BERT and BERT variants. The latent representations are then classified into the correct sense ID using a multi-layer perceptron. The dataset used for this task comes from SemEval-2007 Task 6. Our method achieves an accuracy of 86.85%, which is better than the state of the art.
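The sketch below illustrates the described pipeline under stated assumptions: the hidden-state vector of the preposition token is taken from a pre-trained BERT model and classified into a sense ID with a multi-layer perceptron. The checkpoint, layer choice (last hidden layer), and sense-inventory size are assumptions.

```python
# Minimal sketch of preposition sense disambiguation: classify the BERT
# hidden state of the preposition token with an MLP. The sense-inventory
# size (50) and the use of the last hidden layer are assumptions.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")
mlp = nn.Sequential(nn.Linear(768, 256), nn.ReLU(), nn.Linear(256, 50))

sentence = "She walked across the bridge."
prep_index = 2  # word index of "across" (illustrative)

enc = tokenizer(sentence.split(), is_split_into_words=True, return_tensors="pt")
with torch.no_grad():
    hidden = encoder(**enc).last_hidden_state  # (1, seq_len, 768)

# Map the word index to its first sub-token and classify its representation
token_pos = enc.word_ids(0).index(prep_index)
logits = mlp(hidden[0, token_pos])
predicted_sense_id = logits.argmax().item()  # TPP sense ID after training
```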
Observing that for certain NLP tasks, such as semantic role prediction or thematic fit estimation, random embeddings perform as well as pretrained embeddings, we explore which settings allow for this and examine where most of the learning is encoded: in the word embeddings, the semantic role embeddings, or "the network". We find nuanced answers, depending on the task and its relation to the training objective. We study these representation learning aspects in multi-task learning, where role prediction and role filling are supervised tasks, while several thematic fit tasks lie outside the models' direct supervision. We observe a non-monotonic relation between the quality scores of some tasks and the size of the training data. To better understand this observation, we analyze these results using per-verb versions of these tasks.
The rapid advancement of AI technology has made text generation tools like GPT-3 and ChatGPT increasingly accessible, scalable, and effective. This can pose a serious threat to the credibility of various forms of media if these technologies are used for plagiarism, including scientific literature and news sources. Despite the development of automated methods for paraphrase identification, detecting this type of plagiarism remains a challenge due to the disparate nature of the datasets on which these methods are trained. In this study, we review traditional and current approaches to paraphrase identification and propose a refined typology of paraphrases. We also investigate how this typology is represented in popular datasets and how under-representation of certain types of paraphrases impacts detection capabilities. Finally, we outline new directions for future research and datasets in the pursuit of more effective paraphrase detection using AI.
We propose a new framework, Translation between Augmented Natural Languages (TANL), to solve many structured prediction language tasks, including joint entity and relation extraction, nested named entity recognition, relation classification, semantic role labeling, event extraction, coreference resolution, and dialogue state tracking. Instead of tackling the problem by training task-specific discriminative classifiers, we frame it as a translation task between augmented natural languages, from which the task-relevant information can be easily extracted. Our approach can match or outperform task-specific models on all tasks, and in particular achieves new state-of-the-art results on joint entity and relation extraction (the CoNLL04, ADE, NYT, and ACE2005 datasets), relation classification (FewRel and TACRED), and semantic role labeling (CoNLL-2005 and CoNLL-2012). We accomplish this while using the same architecture and hyperparameters for all tasks, and even when training a single model to solve all tasks at the same time (multi-task learning). Finally, we show that our framework can also significantly improve performance in low-resource regimes, thanks to better use of label semantics.
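To illustrate the augmented-natural-language idea, the sketch below decodes inline entity and relation annotations back into structured tuples. The bracketed markup loosely follows the style described for TANL, but the example sentence and parsing details are simplified assumptions.

```python
# Sketch of the augmented-natural-language idea: the model outputs the input
# sentence with entity types and relations embedded inline, which is then
# parsed back into structured records. Format details here are assumptions.
import re

def decode_augmented_sentence(output_text):
    """Parse '[ mention | type | relation = head ]' annotations into tuples."""
    entities, relations = [], []
    for match in re.finditer(
            r"\[\s*([^|\]]+?)\s*\|\s*([^|\]]+?)(?:\s*\|\s*([^|\]]+?))?\s*\]",
            output_text):
        mention, ent_type, rel = match.group(1), match.group(2), match.group(3)
        entities.append((mention, ent_type))
        if rel:
            rel_name, _, head = rel.partition("=")
            relations.append((mention, rel_name.strip(), head.strip()))
    return entities, relations

augmented = ("[ Tolkien | person ] wrote "
             "[ The Hobbit | book | author = Tolkien ] in 1937 .")
print(decode_augmented_sentence(augmented))
# ([('Tolkien', 'person'), ('The Hobbit', 'book')],
#  [('The Hobbit', 'author', 'Tolkien')])
```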
Named entity recognition is one of the core tasks of information extraction. Word ambiguity and word abbreviation are important causes of low recognition rates for named entities. In this paper, we propose a named entity recognition model called WCL-BBCD (Word Contrastive Learning with BERT-BiLSTM-CRF-DBpedia) that incorporates the idea of contrastive learning. The model first trains on sentence pairs from the text, computing the similarity between word pairs in the sentence pairs via cosine similarity, and fine-tunes the BERT model used for the named entity recognition task through this similarity, so as to mitigate word ambiguity. The fine-tuned BERT model is then combined with a BiLSTM-CRF model to perform the named entity recognition task. Finally, the recognition results are combined with prior knowledge, such as knowledge graphs, to mitigate the low recognition rates caused by word abbreviations. Experimental results show that our model outperforms other comparable models on the CoNLL-2003 English dataset and the OntoNotes v5 English dataset.
Relation extraction (RE) is a sub-discipline of information extraction (IE) which focuses on the prediction of a relational predicate from a natural-language input unit (such as a sentence, a clause, or even a short paragraph consisting of multiple sentences and/or clauses). Together with named-entity recognition (NER) and disambiguation (NED), RE forms the basis for many advanced IE tasks such as knowledge-base (KB) population and verification. In this work, we explore how recent approaches for open information extraction (OpenIE) may help to improve the task of RE by encoding structured information about the sentences' principal units, such as subjects, objects, verbal phrases, and adverbials, into various forms of vectorized (and hence unstructured) representations of the sentences. Our main conjecture is that the decomposition of long and possibly convoluted sentences into multiple smaller clauses via OpenIE even helps to fine-tune context-sensitive language models such as BERT (and its plethora of variants) for RE. Our experiments over two annotated corpora, KnowledgeNet and FewRel, demonstrate the improved accuracy of our enriched models compared to existing RE approaches. Our best results reach 92% and 71% of F1 score for KnowledgeNet and FewRel, respectively, proving the effectiveness of our approach on competitive benchmarks.
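A hedged sketch of the enrichment idea follows: an OpenIE-style clause decomposition is serialized and paired with the raw sentence as a second segment for a BERT-style relation classifier. The clause triples and marker tokens below are hand-written for illustration; in practice they would come from an OpenIE system, and the exact encoding used in the paper may differ.

```python
# Sketch: serialize OpenIE-style clauses (subject / verbal phrase / object)
# and pair them with the raw sentence as a two-segment BERT input for
# relation classification. Triples and markers are illustrative assumptions.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")

sentence = ("Marie Curie, who was born in Warsaw, received the Nobel Prize "
            "in Physics in 1903.")
clauses = [
    ("Marie Curie", "was born in", "Warsaw"),
    ("Marie Curie", "received", "the Nobel Prize in Physics"),
]

# Serialize the clause decomposition and encode it together with the sentence
structured = " ; ".join(f"{s} [V] {v} [O] {o}" for s, v, o in clauses)
encoded = tokenizer(sentence, structured, truncation=True, return_tensors="pt")
# `encoded` would then be fed to a sequence-classification head over relation types.
```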
Future work sentences (FWS) are the sentences in academic papers in which the authors describe their proposed follow-up research directions. This paper presents methods to automatically extract FWS from academic papers and classify them according to the different future directions embodied in the paper's content. FWS recognition methods will enable subsequent researchers to locate future work sentences more accurately and quickly and reduce the time and cost of acquiring the corpus. Existing work on the automatic identification of future work sentences is relatively scarce and cannot accurately identify FWS in academic papers, so large-scale data mining of this content is not yet possible. Furthermore, the content of future work covers many aspects, and subdividing that content facilitates the analysis of specific development directions. In this paper, Natural Language Processing (NLP) is used as a case study, and FWS are extracted from academic papers and classified into different types. We manually build an annotated corpus with six different types of FWS. Then, automatic recognition and classification of FWS are implemented using machine learning models, and the performance of these models is compared based on the evaluation metrics. The results show that the Bernoulli naive Bayes model performs best on the automatic recognition task, with a macro F1 of 90.73%, and the SciBERT model performs best on the automatic classification task, with a weighted average F1 of 72.63%. Finally, we extract keywords from FWS to gain a deeper understanding of the key content they describe, and we also demonstrate, by measuring the similarity between future work sentences and abstracts, that the content laid out in FWS is reflected in subsequent research work.
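As a rough illustration of the recognition step, the sketch below trains a Bernoulli naive Bayes classifier over binary bag-of-words features to separate future work sentences from other sentences; the toy sentences are invented examples, not the annotated corpus.

```python
# Illustrative sketch of FWS recognition as binary classification with a
# Bernoulli naive Bayes model over binary bag-of-words features.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import BernoulliNB
from sklearn.pipeline import make_pipeline

sentences = [
    "In future work we plan to extend the model to multilingual settings.",
    "We will explore larger pre-trained encoders in subsequent studies.",
    "The model achieves 90% accuracy on the test set.",
    "Table 2 reports the results of the ablation study.",
]
labels = [1, 1, 0, 0]  # 1 = future work sentence, 0 = other

clf = make_pipeline(CountVectorizer(binary=True), BernoulliNB())
clf.fit(sentences, labels)
print(clf.predict(["We intend to investigate domain adaptation in future work."]))
```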
For natural language understanding (NLU) technology to be maximally useful, it must be able to process language in a way that is not exclusive to a single task, genre, or dataset. In pursuit of this objective, we introduce the General Language Understanding Evaluation (GLUE) benchmark, a collection of tools for evaluating the performance of models across a diverse set of existing NLU tasks. By including tasks with limited training data, GLUE is designed to favor and encourage models that share general linguistic knowledge across tasks. GLUE also includes a hand-crafted diagnostic test suite that enables detailed linguistic analysis of models. We evaluate baselines based on current methods for transfer and representation learning and find that multi-task training on all tasks performs better than training a separate model per task. However, the low absolute performance of our best model indicates the need for improved general NLU systems.
Question answering (QA) is one of the most challenging problems in Natural Language Processing (NLP). QA systems attempt to produce answers to given questions, and these answers can be generated from unstructured or structured text. QA is therefore considered an important research area that can be used to evaluate text understanding systems. A large volume of QA research has been devoted to the English language, investigating the most advanced techniques and achieving state-of-the-art results. However, research progress in Arabic question answering has been considerably slower due to the scarcity of research efforts in Arabic QA and the lack of large benchmark datasets. Recently, many pre-trained language models have achieved high performance on many Arabic NLP problems. In this work, we evaluate state-of-the-art pre-trained transformer models for Arabic QA using four reading comprehension datasets: Arabic-SQuAD, ARCD, AQAD, and TyDiQA-GoldP. We fine-tune and compare the performance of the AraBERTv2-base, AraBERTv0.2-large, and AraELECTRA models. Finally, we provide an analysis to understand and interpret the low performance obtained by some of these models.
NLP is a form of artificial intelligence and machine learning concerned with a computer's or machine's ability to understand and interpret human language. Language models are crucial in text analysis and NLP because they allow computers to interpret qualitative input and convert it into quantitative data that can be used in other tasks. Essentially, in the context of transfer learning, language models are typically trained on a large general corpus, known as the pre-training stage, and then fine-tuned for a specific underlying task. As a result, pre-trained language models are mostly used as baseline models that incorporate a broad grasp of context and can be further customized for new NLP tasks. Most pre-trained models are trained on corpora from general domains such as Twitter, newswire, Wikipedia, and the Web. Off-the-shelf NLP models trained on general text can be inefficient and inaccurate in specialized domains. In this paper, we propose a cybersecurity language model called SecureBERT, which is able to capture the meaning of text in the cybersecurity domain and can therefore be further used in automation for many important cybersecurity tasks that would otherwise rely on human expertise and tedious manual effort. SecureBERT was trained on a large corpus of cybersecurity text that we collected and preprocessed from a variety of sources in the cybersecurity and general computing domains. Using our proposed methods for tokenization and model-weight adjustment, SecureBERT is not only able to preserve an understanding of general English, as most pre-trained language models can, but is also effective when applied to text with cybersecurity implications.
End-to-end text-to-speech synthesis (TTS) can generate highly natural synthetic speech from raw text. However, rendering the correct pitch accents is still a challenging problem for end-to-end TTS. To tackle the challenge of rendering correct pitch accents in Japanese end-to-end TTS, we adopt PnG~BERT, a self-supervised pretrained model in the character and phoneme domain, for TTS. We investigate the effects of features captured by PnG~BERT on Japanese TTS by modifying the fine-tuning condition to determine which conditions are helpful for inferring pitch accents. We shift the content of PnG~BERT features from text-oriented to speech-oriented by changing the number of fine-tuned layers during TTS. In addition, we teach PnG~BERT pitch accent information by fine-tuning with tone prediction as an additional downstream task. Our experimental results show that the features captured by PnG~BERT during pretraining contain information helpful for inferring pitch accents, and that PnG~BERT outperforms a baseline Tacotron on accent correctness in a listening test.
Contextualized representation models such as ELMo (Peters et al., 2018a) and BERT (Devlin et al., 2018) have recently achieved state-of-the-art results on a diverse array of downstream NLP tasks. Building on recent token-level probing work, we introduce a novel edge probing task design and construct a broad suite of sub-sentence tasks derived from the traditional structured NLP pipeline. We probe word-level contextual representations from four recent models and investigate how they encode sentence structure across a range of syntactic, semantic, local, and long-range phenomena. We find that existing models trained on language modeling and translation produce strong representations for syntactic phenomena, but only offer comparably small improvements on semantic tasks over a non-contextual baseline.
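The following is a minimal sketch of the probing setup: the pre-trained encoder is frozen and only a lightweight classifier over span representations is trained, so any predictive power must come from the representations themselves. The checkpoint, span indices, pooling, and label set are illustrative assumptions rather than the paper's exact edge-probing architecture.

```python
# Sketch of a probing classifier: freeze the encoder, train only a small
# probe on pooled span representations. Checkpoint, span indices, and the
# label set are illustrative assumptions.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
encoder = AutoModel.from_pretrained("bert-base-cased")
for p in encoder.parameters():
    p.requires_grad = False  # representations stay frozen; only the probe learns

probe = nn.Linear(encoder.config.hidden_size, 5)  # e.g. 5 span labels (assumed)

sentence = "The committee approved the proposal yesterday."
enc = tokenizer(sentence, return_tensors="pt")
with torch.no_grad():
    hidden = encoder(**enc).last_hidden_state

span = hidden[0, 1:3].mean(dim=0)  # mean-pool an argument span (illustrative indices)
logits = probe(span)               # the probe's prediction for the span's label
```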
Open information extraction is an important NLP task that aims to extract structured information from unstructured text without restrictions on the relation type or the domain of the text. This survey covers open information extraction technologies from 2007 to 2022, with a focus on new models not covered by previous surveys. We propose a new categorization from the perspective of the source of information to accommodate the development of recent OIE techniques. In addition, we summarize three major approaches based on task settings, as well as currently popular datasets and model evaluation metrics. Given this comprehensive review, several future directions are outlined in terms of datasets, sources of information, output forms, methods, and evaluation metrics.
This paper presents a deep neural architecture for natural language sentence matching (NLSM) that adds a deep recursive encoder to BERT (BERT-DRE). Our analysis of model behavior shows that BERT still does not capture the full complexity of text, so a deep recursive encoder is applied on top of BERT. Three Bi-LSTM layers with residual connections are used to build the recursive encoder, and an attention module is used on top of this encoder. To obtain the final vector, a pooling layer consisting of average and max pooling is used. We evaluate the model on four benchmarks, SNLI, FarsTail, MultiNLI, and SciTail, as well as a new Persian religious questions dataset. This paper focuses on improving BERT's results on the NLSM task. In this regard, comparisons between BERT-DRE and BERT are conducted, and it is shown that BERT-DRE outperforms BERT in all cases. The BERT model achieves 89.70% accuracy on the religious questions dataset, and the BERT-DRE architecture improves this to 90.29% on the same dataset.
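A rough sketch of the described architecture is given below, under stated assumptions about layer sizes and the attention module: BERT token states pass through three residual Bi-LSTM layers, a self-attention module, and a mean+max pooling layer before classification. It is not the authors' exact implementation.

```python
# Rough sketch of a BERT encoder followed by three residual Bi-LSTM layers,
# a self-attention module, and mean+max pooling. Sizes are assumptions.
import torch
import torch.nn as nn
from transformers import AutoModel

class BertDeepRecursiveEncoder(nn.Module):
    def __init__(self, model_name="bert-base-multilingual-cased", num_labels=3):
        super().__init__()
        self.bert = AutoModel.from_pretrained(model_name)
        hidden = self.bert.config.hidden_size
        # Three Bi-LSTM layers; hidden//2 per direction so residuals match sizes
        self.lstms = nn.ModuleList(
            [nn.LSTM(hidden, hidden // 2, bidirectional=True, batch_first=True)
             for _ in range(3)])
        self.attn = nn.MultiheadAttention(hidden, num_heads=8, batch_first=True)
        self.classifier = nn.Linear(2 * hidden, num_labels)  # mean + max pooled

    def forward(self, input_ids, attention_mask):
        x = self.bert(input_ids=input_ids,
                      attention_mask=attention_mask).last_hidden_state
        for lstm in self.lstms:
            out, _ = lstm(x)
            x = x + out                      # residual connection around each Bi-LSTM
        x, _ = self.attn(x, x, x)            # self-attention over the recurrent states
        pooled = torch.cat([x.mean(dim=1), x.max(dim=1).values], dim=-1)
        return self.classifier(pooled)       # e.g. entailment / contradiction / neutral
```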
Transfer learning has been widely used in natural language processing through deep pretrained language models such as Bidirectional Encoder Representations from Transformers (BERT) and the Universal Sentence Encoder. Despite their great success, language models overfit when applied to small datasets and are prone to forgetting when fine-tuned with a classifier. To address this problem of forgetting when transferring deep pretrained language models from one domain to another, existing work has explored fine-tuning methods that forget less. We propose DeepEmotex, an effective sequential transfer learning method for detecting emotion in text. To avoid the forgetting problem, the fine-tuning step is instrumented with a large amount of emotion-labeled data collected from Twitter. We conduct an experimental study using both curated Twitter datasets and benchmark datasets. DeepEmotex models achieve over 91% accuracy for multi-class emotion classification on the test dataset. We evaluate the performance of the fine-tuned DeepEmotex models in classifying emotion on the EmoInt and Stimulus benchmark datasets. The models correctly classify emotion in 73% of the instances in the benchmark datasets. The proposed DeepEmotex-BERT model outperforms a Bi-LSTM baseline on the benchmark datasets by 23%. We also study the effect of the size of the fine-tuning dataset on the accuracy of our models. Our evaluation results show that fine-tuning with a large set of emotion-labeled data improves both the robustness and the effectiveness of the resulting target-task model.