Although attention mechanisms have become fundamental components of deep learning models, they are vulnerable to perturbations, which may degrade prediction performance and model interpretability. Adversarial training (AT) for attention mechanisms has successfully mitigated such drawbacks by considering adversarial perturbations. However, this technique requires label information, and thus its use is limited to supervised settings. In this study, we explore the concept of incorporating virtual AT (VAT) into attention mechanisms, by which adversarial perturbations can be computed even from unlabeled data. To realize this approach, we propose two general training techniques, namely VAT for attention mechanisms (Attention VAT) and "interpretable" VAT for attention mechanisms (Attention iVAT), which extend AT for attention mechanisms to a semi-supervised setting. In particular, Attention iVAT focuses on the differences in attention; thus, it can efficiently learn clearer attention and improve model interpretability, even with unlabeled data. Empirical experiments based on six public datasets revealed that our techniques provide better prediction performance than conventional AT-based and VAT-based techniques, and stronger agreement with evidence provided by humans in detecting important words in sentences. Moreover, our proposals offer these advantages without requiring careful selection of the unlabeled data: even if the model using our VAT-based technique is trained on unlabeled data from a source other than the target task, both the prediction performance and the model interpretability can be improved.
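The abstract above does not spell out the implementation, so the following is only a minimal PyTorch sketch of the general Attention VAT idea, assuming a toy classifier that exposes its attention scores; the class, function, and hyperparameter names are illustrative and not taken from the authors' code:

```python
import torch
import torch.nn.functional as F

class TinyAttentionClassifier(torch.nn.Module):
    """Toy classifier: embeddings -> additive attention -> linear head."""
    def __init__(self, vocab_size=1000, dim=64, n_classes=2):
        super().__init__()
        self.emb = torch.nn.Embedding(vocab_size, dim)
        self.score = torch.nn.Linear(dim, 1)
        self.head = torch.nn.Linear(dim, n_classes)

    def forward(self, tokens, attn_perturbation=None):
        h = self.emb(tokens)                      # (batch, len, dim)
        scores = self.score(h).squeeze(-1)        # (batch, len) attention scores
        if attn_perturbation is not None:
            scores = scores + attn_perturbation   # perturb the attention, not the input
        attn = torch.softmax(scores, dim=-1)
        pooled = (attn.unsqueeze(-1) * h).sum(dim=1)
        return self.head(pooled)

def attention_vat_loss(model, tokens, xi=1e-6, eps=1.0):
    """Virtual adversarial loss on attention scores: no labels are needed."""
    with torch.no_grad():
        p = torch.softmax(model(tokens), dim=-1)
    # One power-iteration step to approximate the worst-case direction.
    d = xi * F.normalize(torch.randn(tokens.shape, dtype=torch.float), dim=-1)
    d.requires_grad_(True)
    p_hat = torch.log_softmax(model(tokens, attn_perturbation=d), dim=-1)
    grad, = torch.autograd.grad(F.kl_div(p_hat, p, reduction="batchmean"), d)
    r_vadv = eps * F.normalize(grad.detach(), dim=-1)
    p_hat = torch.log_softmax(model(tokens, attn_perturbation=r_vadv), dim=-1)
    return F.kl_div(p_hat, p, reduction="batchmean")

model = TinyAttentionClassifier()
unlabeled = torch.randint(0, 1000, (8, 20))
loss = attention_vat_loss(model, unlabeled)   # added to the supervised loss
```

The Attention iVAT variant described above additionally focuses on differences in attention rather than raw scores, which this sketch does not attempt to reproduce.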
Adversarial training provides a means of regularizing supervised learning algorithms, and virtual adversarial training is able to extend supervised learning algorithms to the semi-supervised setting. However, both methods require making small perturbations to numerous entries of the input vector, which is inappropriate for sparse high-dimensional inputs such as one-hot word representations. We extend adversarial and virtual adversarial training to the text domain by applying perturbations to the word embeddings in a recurrent neural network rather than to the original input itself. The proposed method achieves state-of-the-art results on multiple benchmark semi-supervised and purely supervised tasks. We provide visualizations and analysis showing that the learned word embeddings have improved in quality and that, while training, the model is less prone to overfitting. Code is available at https://github.com/tensorflow/models/tree/master/research/adversarial_text.
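As a rough illustration of the approach described above (perturbing word embeddings rather than the discrete inputs), the sketch below computes an adversarial loss on embedded sequences in PyTorch; the function name, the epsilon value, and the toy classifier are assumptions and not the released TensorFlow implementation:

```python
import torch
import torch.nn.functional as F

def adversarial_embedding_loss(model, embeddings, labels, epsilon=1.0):
    """Adversarial training applied to embedded inputs rather than raw tokens.
    `model` maps an embedded sequence (batch, len, dim) to logits."""
    # First pass: gradient of the supervised loss w.r.t. the embeddings.
    emb = embeddings.detach().requires_grad_(True)
    grad, = torch.autograd.grad(F.cross_entropy(model(emb), labels), emb)
    # Worst-case direction inside a per-example L2 ball of radius epsilon.
    r_adv = epsilon * F.normalize(grad.flatten(1), dim=1).view_as(embeddings)
    # Second pass: supervised loss on the perturbed embeddings.
    return F.cross_entropy(model(embeddings + r_adv), labels)

# Toy usage with a mean-pooling linear classifier (illustrative only).
emb_layer = torch.nn.Embedding(1000, 64)
clf = torch.nn.Linear(64, 2)
tokens = torch.randint(0, 1000, (8, 20))
labels = torch.randint(0, 2, (8,))
adv_loss = adversarial_embedding_loss(lambda e: clf(e.mean(dim=1)),
                                      emb_layer(tokens), labels)
```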
Data augmentation, the artificial creation of training data for machine learning through transformations, is a widely studied research field across machine learning disciplines. While it is useful for increasing the generalization capability of a model, it can also address many other challenges and problems, from overcoming limited training data, to regularizing the objective, to limiting the amount of data used for privacy protection. Based on a precise description of the goals and applications of data augmentation and a taxonomy of existing works, this survey is concerned with data augmentation methods for text classification and aims to provide a concise and comprehensive overview for researchers and practitioners. We divide more than 100 methods into 12 different groupings and provide state-of-the-art references expounding which methods are highly promising by relating them to each other. Finally, research perspectives that may constitute a building block for future work are provided.
Recent works have shown that interpretability and robustness are two crucial ingredients of trustworthy and reliable text classification. However, previous works usually address only one of these two aspects: i) how to extract accurate rationales for explainability while remaining beneficial to prediction; ii) how to make the predictive model robust to different types of adversarial attacks. Intuitively, a model that produces helpful explanations should be more robust against adversarial attacks, because we cannot trust a model that outputs explanations yet changes its prediction under small perturbations. To this end, we propose a joint classification and rationale extraction model named AT-BMC. It includes two key mechanisms: mixed adversarial training (AT), which is designed to use various perturbations in both the discrete and the embedding space to improve the model's robustness, and a boundary match constraint (BMC), which helps locate rationales more precisely with the guidance of boundary information. Performance on benchmark datasets shows that the proposed AT-BMC outperforms baselines on both classification and rationale extraction by a large margin. Robustness analysis shows that AT-BMC decreases the attack success rate by up to 69%. The empirical results indicate that there are connections between robust models and better explanations.
Deep Learning and Machine Learning based models have become extremely popular in text processing and information retrieval. However, the non-linear structures present inside the networks make these models largely inscrutable. A significant body of research has focused on increasing the transparency of these models. This article provides a broad overview of research on the explainability and interpretability of natural language processing and information retrieval methods. More specifically, we survey approaches that have been applied to explain word embeddings, sequence modeling, attention modules, transformers, BERT, and document ranking. The concluding section suggests some possible directions for future research on this topic.
Active learning effectively collects unlabeled data for annotation, reducing the need for labeled data. In this work, we propose retrieving unlabeled samples with a local-sensitivity- and hardness-aware acquisition function. The proposed method generates data copies through local perturbations and selects the data points whose predictive likelihoods diverge the most from their copies. We further strengthen our acquisition function by injecting perturbations for the selected cases. Our method achieves consistent gains over commonly used active learning strategies on various classification tasks. In addition, we observe consistent improvements over the baselines in a study of prompt selection for prompt-based few-shot learning. These experiments demonstrate that our acquisition, guided by local sensitivity and hardness, is effective and beneficial for many NLP tasks.
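The following is a hedged sketch of the kind of acquisition function described above, using random token dropout as a simple stand-in for the paper's local perturbations; the helper names, perturbation scheme, and toy model are illustrative only:

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def local_sensitivity_scores(model, batches, n_perturb=4, drop_prob=0.1, unk_id=1):
    """Score unlabeled examples by how much the prediction distribution changes
    under local perturbations; higher scores are acquired first."""
    scores = []
    for tokens in batches:
        p = torch.softmax(model(tokens), dim=-1)
        divs = []
        for _ in range(n_perturb):
            drop = torch.rand(tokens.shape) < drop_prob
            q = torch.log_softmax(model(tokens.masked_fill(drop, unk_id)), dim=-1)
            divs.append(F.kl_div(q, p, reduction="none").sum(-1))
        scores.append(torch.stack(divs).mean(dim=0))   # per-example sensitivity
    return torch.cat(scores)

# Toy usage: a random linear model over mean-pooled one-hot tokens.
vocab, n_classes = 100, 3
W = torch.randn(vocab, n_classes)
toy_model = lambda t: F.one_hot(t, vocab).float().mean(dim=1) @ W
pool = [torch.randint(2, vocab, (16, 12)) for _ in range(3)]
ranking = local_sensitivity_scores(toy_model, pool).argsort(descending=True)
```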
As the societal impact of Deep Neural Networks (DNNs) grows, the goals for advancing DNNs become more complex and diverse, ranging from improving a conventional model accuracy metric to infusing advanced human virtues such as fairness, accountability, transparency (FAccT), and unbiasedness. Recently, techniques in Explainable Artificial Intelligence (XAI) have been attracting considerable attention and have tremendously helped Machine Learning (ML) engineers understand AI models. At the same time, however, an emerging need beyond XAI has appeared among AI communities: based on the insights learned from XAI, how can we better empower ML engineers to steer their DNNs so that the model's reasonableness and performance can be improved as intended? This article provides a timely and extensive literature overview of the field of Explanation-Guided Learning (EGL), a domain of techniques that steer the DNNs' reasoning process by adding regularization, supervision, or intervention on model explanations. In doing so, we first provide a formal definition of EGL and its general learning paradigm. Secondly, an overview of the key factors for EGL evaluation is provided, together with a summarization and categorization of existing evaluation procedures and metrics. Finally, the current and potential future application areas and directions of EGL are discussed, and an extensive experimental study is presented, aiming to provide comprehensive comparisons among existing EGL models in popular application domains such as Computer Vision (CV) and Natural Language Processing (NLP).
Aspect-based sentiment analysis (ABSA) is a textual analysis method that determines the polarity of opinions on certain aspects related to specific targets. Most ABSA research has been conducted in English, with only a small amount of work in Arabic. Most previous Arabic studies have relied on deep learning models that depend mainly on context-independent word embeddings (e.g., word2vec), where each word has a fixed representation independent of its context. This paper explores the modeling capabilities of contextual embeddings from pre-trained language models such as BERT, together with the use of sentence-pair input, for the Arabic aspect sentiment polarity classification task. In particular, we develop a simple but effective BERT-based neural baseline to handle this task. According to experimental results on three different Arabic datasets, our BERT architecture with a simple linear classification layer surpasses the state-of-the-art works, achieving accuracies of 89.51% on the Arabic hotel reviews dataset, 73% on the human-annotated book reviews dataset, and 85.73% on the Arabic news dataset.
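As a rough illustration of the sentence-pair formulation mentioned above, the review sentence and an auxiliary sentence naming the aspect can be fed to a BERT classifier as a pair; the checkpoint name, label set, and auxiliary wording below are placeholders (an Arabic BERT checkpoint would be used in practice):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# The review sentence and an auxiliary aspect sentence are encoded as a pair;
# a linear classification layer on top of BERT predicts the polarity.
tok = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-multilingual-cased", num_labels=3)   # positive / negative / neutral
enc = tok("The room was clean but the staff was rude.",
          "aspect: staff attitude", return_tensors="pt")
polarity_logits = model(**enc).logits               # fine-tuned with cross-entropy
```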
Recent years have witnessed increasing interest in developing interpretable models for Natural Language Processing (NLP). Most existing models aim to identify input features, such as words or phrases, that are important for model predictions. However, neural models developed in NLP typically compose word semantics in a hierarchical manner, and text classification requires hierarchical modeling to aggregate local information in order to deal with topic and label shifts more effectively. As a result, interpretations at the word or phrase level cannot faithfully explain model decisions in text classification. This paper proposes a novel hierarchical interpretable neural text classifier, called HINT, which can automatically generate explanations of model predictions in the form of label-associated topics in a hierarchical manner. Model interpretation is no longer at the word level but is built on topics as the basic semantic unit. Experimental results on review and news datasets show that the proposed approach achieves text classification results on par with existing state-of-the-art text classifiers, and produces interpretations that are more faithful to model predictions and better understood by humans than those of other interpretable neural text classifiers.
We propose a new regularization method based on virtual adversarial loss: a new measure of local smoothness of the conditional label distribution given input. Virtual adversarial loss is defined as the robustness of the conditional label distribution around each input data point against local perturbation. Unlike adversarial training, our method defines the adversarial direction without label information and is hence applicable to semi-supervised learning. Because the directions in which we smooth the model are only "virtually" adversarial, we call our method virtual adversarial training (VAT). The computational cost of VAT is relatively low. For neural networks, the approximated gradient of virtual adversarial loss can be computed with no more than two pairs of forward and back propagations. In our experiments, we applied VAT to supervised and semi-supervised learning tasks on multiple benchmark datasets. With a simple enhancement of the algorithm based on the entropy minimization principle, our VAT achieves state-of-the-art performance for semi-supervised learning tasks on SVHN and CIFAR-10.
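In the notation of the abstract, with $\epsilon$ the perturbation radius and $\hat{\theta}$ the current parameter estimate treated as a constant, the virtual adversarial loss around an input $x$ can be written as

$$\mathrm{LDS}(x,\theta) = D_{\mathrm{KL}}\big(p(\cdot \mid x;\hat{\theta}) \,\|\, p(\cdot \mid x + r_{\mathrm{vadv}};\theta)\big), \qquad r_{\mathrm{vadv}} = \arg\max_{\|r\|_2 \le \epsilon} D_{\mathrm{KL}}\big(p(\cdot \mid x;\hat{\theta}) \,\|\, p(\cdot \mid x + r;\theta)\big),$$

and the training objective adds the average LDS over both labeled and unlabeled examples, weighted by a regularization coefficient, to the usual supervised loss; $r_{\mathrm{vadv}}$ is approximated with a single power-iteration step, which is why only about two extra pairs of forward and back propagations per input are required.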
Target-oriented opinion words extraction (TOWE) is a fine-grained sentiment analysis task that aims to extract, from a sentence, the opinion words corresponding to a given opinion target. Recently, deep learning approaches have made remarkable progress on this task. Nevertheless, TOWE still suffers from a scarcity of training data due to the expensive data annotation process. Limited labeled data increases the risk of a distribution shift between test data and training data. In this paper, we propose exploiting massive unlabeled data to reduce this risk by increasing the model's exposure to varying distribution shifts. Specifically, we propose a novel Multi-Grained Consistency Regularization (MGCR) method that makes use of unlabeled data, and we design two filters specifically for TOWE to filter noisy data at different granularities. Extensive experimental results on four TOWE benchmark datasets demonstrate the superiority of MGCR over current state-of-the-art methods. In-depth analysis also shows the effectiveness of the different-granularity filters. Our code is available at https://github.com/towessl/towessl.
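The abstract does not detail the TOWE-specific filters, so the following is only a loosely related PyTorch sketch of consistency regularization on unlabeled data with a confidence-based filter; the function name, the threshold, and the pseudo-labeling form are assumptions rather than the MGCR method itself:

```python
import torch
import torch.nn.functional as F

def filtered_consistency_loss(model, weak_view, strong_view, threshold=0.9):
    """Confidence-filtered consistency regularization on unlabeled inputs:
    predictions on a weakly perturbed view supervise a strongly perturbed view,
    and low-confidence (likely noisy) examples are filtered out."""
    with torch.no_grad():
        probs = torch.softmax(model(weak_view), dim=-1)
        conf, pseudo_labels = probs.max(dim=-1)
        keep = conf >= threshold          # filter out likely-noisy examples
    if not keep.any():
        return torch.zeros((), requires_grad=True)
    logits = model(strong_view)
    return F.cross_entropy(logits[keep], pseudo_labels[keep])
```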
This paper presents a new data augmentation algorithm for natural language understanding tasks, called RPN (Random Position Noise). Current text augmentation methods are relatively scarce, and few of the existing methods apply to all sentence-level natural language understanding tasks. RPN moves traditional augmentation from the original text to the word-vector level: it substitutes values in one or several dimensions of selected word vectors. As a result, RPN introduces a controllable degree of perturbation into each sample and can adjust the perturbation range for different tasks. The augmented samples are then used for model training, which makes the model more robust. In subsequent experiments, we found that adding RPN to training or fine-tuning yields a stable boost on all eight natural language processing tasks considered, including the TweetEval, CoLA, and SST-2 datasets, and larger improvements than other data augmentation algorithms. The RPN algorithm applies to all sentence-level language understanding tasks and can be used in any deep learning model with a word embedding layer.
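A minimal sketch of the RPN idea as described above, substituting noise into one or several dimensions of randomly chosen word vectors; the selection probabilities, the noise distribution, and the function name are assumptions, since the exact procedure is not given in the abstract:

```python
import torch

def random_position_noise(embeddings, vec_prob=0.1, dim_prob=0.05, scale=1.0):
    """Pick some word vectors, then overwrite one or several of their
    dimensions with noise (substitution, not additive noise)."""
    batch, seq_len, dim = embeddings.shape
    word_mask = torch.rand(batch, seq_len, 1, device=embeddings.device) < vec_prob
    dim_mask = torch.rand(batch, seq_len, dim, device=embeddings.device) < dim_prob
    mask = word_mask & dim_mask                   # chosen (word, dimension) slots
    noise = scale * torch.randn_like(embeddings)
    return torch.where(mask, noise, embeddings)

# Usage: augment embedded batches during training or fine-tuning.
x = torch.nn.Embedding(1000, 64)(torch.randint(0, 1000, (8, 20)))
x_aug = random_position_noise(x)
```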
Recent natural language processing (NLP) techniques have achieved high performance on benchmark datasets, primarily due to significant improvements in deep learning. Advances in the research community have led to great enhancements in state-of-the-art production systems for NLP tasks such as virtual assistants, speech recognition, and sentiment analysis. However, such NLP systems still often fail when tested with adversarial attacks. This lack of robustness exposes troubling gaps in current models' language understanding capabilities, creating problems when NLP systems are deployed in real life. In this paper, we present a structured overview of NLP robustness research by summarizing the literature in a systematic way across various dimensions. We then take a deep dive into these dimensions of robustness, across techniques, metrics, embeddings, and benchmarks. Finally, we argue that robustness should be multi-dimensional, provide insights into current research, and identify gaps in the literature to suggest directions worth pursuing to address them.
[Purpose] To understand the meaning of a sentence, humans can focus on the important words in it, which is reflected in how long or how often our eyes dwell on each word. Hence, some studies have utilized eye-tracking values to optimize the attention mechanism in deep learning models, but these studies lack an explanation for the rationality of this approach. It is necessary to explore whether the attention mechanism possesses this characteristic of human reading. [Design/methodology/approach] We conducted experiments on a sentiment classification task. First, we obtained eye-tracking values from two open-source eye-tracking corpora to describe the characteristic of human reading. Then, the machine attention values of each sentence were learned from a sentiment classification model. Finally, a comparison was conducted to analyze the machine attention values and the eye-tracking values. [Findings] Through the experiments, we found that the attention mechanism can focus on important words, such as adjectives, adverbs, and sentiment words, which are valuable for judging the sentiment of sentences in the sentiment classification task. It thus possesses the characteristic of human reading, focusing on the important words in a sentence while reading. Owing to insufficient learning, however, the attention mechanism sometimes focuses on the wrong words; eye-tracking values can help it correct such errors and improve model performance. [Originality/value] Our research not only provides a reasonable explanation for studies that use eye-tracking values to optimize the attention mechanism, but also offers new inspiration for the interpretability of attention mechanisms.
Labeling data can be an expensive task, as it is usually performed manually by domain experts. This is cumbersome for deep learning, which depends on large labeled datasets. Active learning (AL) is a paradigm that aims to reduce labeling effort by using only the data that the model in use deems most informative. Little research has been done on AL in a text classification setting, and next to none has involved the more recent, state-of-the-art natural language processing (NLP) models. Here, we present an empirical study that compares different uncertainty-based algorithms with BERT$_{base}$ as the classifier. We evaluate the algorithms on two NLP classification datasets: the Stanford Sentiment Treebank and KvK-Frontpages. Additionally, we explore heuristics that aim to solve presupposed problems of uncertainty-based AL, namely that it does not scale well and that it is prone to selecting outliers. Furthermore, we explore the influence of the query-pool size on the performance of AL. While the proposed heuristics were not found to improve the performance of AL, our results suggest that uncertainty-based AL with BERT$_{base}$ probabilities outperforms random data selection, and that this difference in performance can decrease as the query-pool size grows.
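A small sketch of a least-confidence acquisition step using BERT$_{base}$ probabilities, i.e., the kind of uncertainty-based strategy compared in the study; the acquisition function, hyperparameters, and checkpoint below are not taken from the paper, and in a real AL loop the model would first be fine-tuned on the current labeled pool before scoring the unlabeled pool:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

def select_by_uncertainty(texts, k=16, model_name="bert-base-uncased"):
    """Return indices of the k unlabeled texts with the lowest top-class
    probability (least-confidence acquisition)."""
    tok = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)
    model.eval()
    with torch.no_grad():
        enc = tok(texts, padding=True, truncation=True, return_tensors="pt")
        probs = torch.softmax(model(**enc).logits, dim=-1)
    uncertainty = 1.0 - probs.max(dim=-1).values      # least confidence
    return uncertainty.topk(min(k, len(texts))).indices.tolist()
```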
We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers. Unlike recent language representation models (Peters et al., 2018a; Radford et al., 2018), BERT is designed to pretrain deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT model can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications. BERT is conceptually simple and empirically powerful. It obtains new state-of-the-art results on eleven natural language processing tasks, including pushing the GLUE score to 80.5% (7.7% point absolute improvement), MultiNLI accuracy to 86.7% (4.6% absolute improvement), SQuAD v1.1 question answering Test F1 to 93.2 (1.5 point absolute improvement) and SQuAD v2.0 Test F1 to 83.1 (5.1 point absolute improvement).
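As a minimal PyTorch/Hugging Face illustration (not the original TensorFlow release) of fine-tuning BERT with "just one additional output layer", a linear head on top of the [CLS] representation suffices for classification:

```python
import torch
from transformers import AutoTokenizer, AutoModel

class BertClassifier(torch.nn.Module):
    def __init__(self, n_classes=2, name="bert-base-uncased"):
        super().__init__()
        self.bert = AutoModel.from_pretrained(name)
        # The single additional output layer on top of the pre-trained encoder.
        self.out = torch.nn.Linear(self.bert.config.hidden_size, n_classes)

    def forward(self, **enc):
        hidden = self.bert(**enc).last_hidden_state[:, 0]   # [CLS] token
        return self.out(hidden)

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = BertClassifier()
logits = model(**tok(["a simple example"], return_tensors="pt"))
# Fine-tuning then optimizes all parameters end-to-end with cross-entropy.
```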
We present PATRON, a new method that uses prompt-based uncertainty estimation for data selection for pre-trained language model fine-tuning in cold-start scenarios, i.e., when no initial labeled data are available. In PATRON, we design (1) a prompt-based uncertainty propagation approach to estimate the importance of data points and (2) a partition-then-rewrite (PTR) strategy to promote sample diversity for annotation. Experiments on six text classification datasets show that PATRON outperforms the strongest cold-start data selection baselines by up to 6.9%. Besides, with only 128 labels, PATRON achieves 91.0% and 92.1% of the fully supervised performance based on vanilla fine-tuning and prompt-based learning, respectively. Our implementation of PATRON is available at https://github.com/yueyu1030/Patron.
As an important fine-grained sentiment analysis problem, aspect-based sentiment analysis (ABSA), aiming to analyze and understand people's opinions at the aspect level, has been attracting considerable interest in the last decade. To handle ABSA in different scenarios, various tasks are introduced for analyzing different sentiment elements and their relations, including the aspect term, aspect category, opinion term, and sentiment polarity. Unlike early ABSA works focusing on a single sentiment element, many compound ABSA tasks involving multiple elements have been studied in recent years for capturing more complete aspect-level sentiment information. However, a systematic review of various ABSA tasks and their corresponding solutions is still lacking, which we aim to fill in this survey. More specifically, we provide a new taxonomy for ABSA which organizes existing studies from the axes of concerned sentiment elements, with an emphasis on recent advances of compound ABSA tasks. From the perspective of solutions, we summarize the utilization of pre-trained language models for ABSA, which improved the performance of ABSA to a new stage. Besides, techniques for building more practical ABSA systems in cross-domain/lingual scenarios are discussed. Finally, we review some emerging topics and discuss some open challenges to outline potential future directions of ABSA.
Adversarial training is widely acknowledged as the most effective defense against adversarial attacks. However, it is also well established that achieving both robustness and generalization in adversarially trained models involves a trade-off. The goal of this work is to provide an in-depth comparison of different approaches for adversarial training in language models. Specifically, we study the effect of pre-training data augmentation as well as training-time input perturbations vs. embedding-space perturbations on the robustness and generalization of BERT-like language models. Our findings suggest that better robustness can be achieved by pre-training data augmentation or by training with input-space perturbation. However, training with embedding-space perturbation significantly improves generalization. A linguistic correlation analysis of neurons of the learned models reveals that the improved generalization is due to 'more specialized' neurons. To the best of our knowledge, this is the first work to carry out a deep qualitative analysis of different methods of generating adversarial examples in adversarial training of language models.
We present two approaches to use unlabeled data to improve sequence learning with recurrent networks. The first approach is to predict what comes next in a sequence, which is a language model in NLP. The second approach is to use a sequence autoencoder, which reads the input sequence into a vector and predicts the input sequence again. These two algorithms can be used as a "pretraining" algorithm for a later supervised sequence learning algorithm. In other words, the parameters obtained from the pretraining step can then be used as a starting point for other supervised training models. In our experiments, we find that long short-term memory recurrent networks, after being pretrained with the two approaches, become more stable to train and generalize better. With pretraining, we were able to achieve strong performance in many classification tasks, such as text classification with IMDB and DBpedia, and image recognition on CIFAR-10.
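A compact PyTorch sketch of the sequence autoencoder approach described above, with illustrative layer sizes and names: an LSTM encoder reads the sequence into a state, a decoder reconstructs the sequence with teacher forcing, and the pretrained encoder weights can later initialize the supervised model:

```python
import torch

class SequenceAutoencoder(torch.nn.Module):
    def __init__(self, vocab=1000, dim=128):
        super().__init__()
        self.emb = torch.nn.Embedding(vocab, dim)
        self.encoder = torch.nn.LSTM(dim, dim, batch_first=True)
        self.decoder = torch.nn.LSTM(dim, dim, batch_first=True)
        self.out = torch.nn.Linear(dim, vocab)

    def forward(self, tokens):
        x = self.emb(tokens)
        _, state = self.encoder(x)          # summary of the whole sequence
        # Teacher forcing: the decoder sees a right-shifted copy of the input.
        shifted = torch.cat([torch.zeros_like(x[:, :1]), x[:, :-1]], dim=1)
        dec, _ = self.decoder(shifted, state)
        return self.out(dec)

tokens = torch.randint(0, 1000, (4, 12))
model = SequenceAutoencoder()
logits = model(tokens)                      # (4, 12, 1000)
recon_loss = torch.nn.functional.cross_entropy(
    logits.reshape(-1, 1000), tokens.reshape(-1))
# After pretraining, model.encoder (and model.emb) can initialize a classifier.
```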