Contextual ranking models based on BERT are now well established for a wide range of passage and document ranking tasks. However, the robustness of BERT-based ranking models under adversarial inputs is underexplored. In this paper, we argue that BERT-rankers are not immune to adversarial attacks targeting retrieved documents. First, we propose algorithms for adversarial perturbation of both highly relevant and non-relevant documents using gradient-based optimization. The aim of our algorithms is to add a small number of tokens to a highly relevant or non-relevant document to cause a large rank demotion or promotion. Our experiments show that a handful of tokens can already produce a large change in a document's rank. Moreover, we find that BERT-rankers rely heavily on the document start/head for relevance prediction, making the initial part of the document more susceptible to adversarial attack. More interestingly, we find a small set of recurring adversarial words that, when added to documents, lead to successful rank demotion or promotion of any relevant or non-relevant document, respectively. Finally, our adversarial tokens also show particular topic preferences within and across datasets, exposing potential biases in BERT pre-training or the downstream datasets.
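The core of such a gradient-guided token attack can be sketched compactly. The following is a minimal, illustrative HotFlip-style step, not the authors' code, assuming a HuggingFace-style cross-encoder that accepts `inputs_embeds` and exposes a single relevance logit:

```python
import torch

def pick_adversarial_token(ranker, embedding_matrix, input_embeds, attention_mask,
                           insert_pos, demote=True):
    # One HotFlip-style step: find the vocabulary token whose embedding, placed
    # at `insert_pos`, most changes the relevance score under a first-order
    # (gradient) approximation of the ranker.
    input_embeds = input_embeds.detach().requires_grad_(True)
    score = ranker(inputs_embeds=input_embeds,
                   attention_mask=attention_mask).logits[0, 0]
    score.backward()
    grad = input_embeds.grad[0, insert_pos]                 # (hidden,)
    with torch.no_grad():
        # Linearized score change for every candidate replacement token.
        delta = (embedding_matrix - input_embeds[0, insert_pos]) @ grad  # (vocab,)
    return int(delta.argmin() if demote else delta.argmax())
```

Repeating this greedy step over a few positions is, per the abstract, already enough to move a document substantially up or down the ranking.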
Neural text ranking models have witnessed significant advancement and are increasingly being deployed in practice. Unfortunately, they also inherit the adversarial vulnerabilities of general neural models, which have been detected but remain underexplored by prior studies. Moreover, blackhat SEO may leverage these inherited vulnerabilities to defeat better-protected search engines. In this study, we propose an imitation adversarial attack on black-box neural passage ranking models. We first show that the target passage ranking model can be transparentized and imitated by enumerating critical queries/candidates and then training a ranking imitation model. Leveraging the ranking imitation model, we can elaborately manipulate the ranking results and transfer the manipulation attack to the target ranking model. To this end, we propose an innovative gradient-based attack method, empowered by a pairwise objective function, to generate adversarial triggers that cause premeditated disorderliness with very few tokens. To camouflage the triggers, we add a next-sentence-prediction loss and a language-model fluency constraint to the objective function. Experimental results on passage ranking demonstrate the effectiveness of the ranking imitation attack model and the adversarial triggers against various SOTA neural ranking models. Furthermore, various mitigation analyses and a human evaluation show the effectiveness of the camouflage when facing potential mitigation approaches. To motivate other scholars to further investigate this novel and important problem, we make the experimental data and code publicly available.
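A plausible shape for the trigger-generation objective, combining the pairwise ranking term with the two camouflage terms, is sketched below; the loss weights `alpha` and `beta` and the exact formulation are assumptions, not the paper's specification:

```python
import torch
import torch.nn.functional as F

def trigger_loss(adv_scores, anchor_scores, nsp_logits, lm_log_probs,
                 margin=1.0, alpha=0.1, beta=0.1):
    # Pairwise term: push the trigger-bearing passage above an anchor passage
    # under the ranking imitation model.
    pairwise = F.relu(margin - (adv_scores - anchor_scores)).mean()
    # Camouflage terms: the trigger should look like a plausible next sentence
    # (NSP class 0 = "is next") and stay fluent under a language model.
    nsp = F.cross_entropy(nsp_logits,
                          torch.zeros(nsp_logits.size(0), dtype=torch.long,
                                      device=nsp_logits.device))
    fluency = -lm_log_probs.mean()
    return pairwise + alpha * nsp + beta * fluency
```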
Recent work has demonstrated that natural language processing techniques can support consumer protection by automatically detecting unfair clauses in the Terms of Service (ToS) Agreement. This work demonstrates that transformer-based ToS analysis systems are vulnerable to adversarial attacks. We conduct experiments attacking an unfair-clause detector with universal adversarial triggers. Experiments show that a minor perturbation of the text can considerably reduce the detection performance. Moreover, to measure the detectability of the triggers, we conduct a detailed human evaluation study by collecting both answer accuracy and response time from the participants. The results show that the naturalness of the triggers remains key to tricking readers.
Despite great success on many machine learning tasks, deep neural networks are still vulnerable to adversarial samples. While gradient-based adversarial attack methods are well explored in the field of computer vision, it is impractical to apply them directly in natural language processing due to the discrete nature of text. To bridge this gap, we propose a general framework for adapting existing gradient-based methods to craft textual adversarial samples. In this framework, gradient-based continuous perturbations are added to the embedding layer and amplified during forward propagation. The final perturbed latent representations are then decoded with a masked-language-model head to obtain potential adversarial samples. In this paper, we instantiate our framework with Textual Projected Gradient Descent (TPGD). We evaluate the framework by performing transfer black-box attacks on three benchmark datasets. Experimental results demonstrate that our method achieves better performance and produces more fluent and grammatical adversarial samples compared to strong baseline methods. All code and data will be made public.
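A condensed sketch of one such step, written against HuggingFace-style victim and masked-language-model heads; the names, the L-infinity projection, and the hyperparameters are illustrative rather than the authors' exact procedure:

```python
import torch
import torch.nn.functional as F

def tpgd_step_and_decode(victim, mlm_head_model, embeds, attention_mask, label,
                         step_size=1e-2, eps=0.5):
    # One PGD-style step in the continuous embedding space.
    embeds = embeds.detach().requires_grad_(True)
    logits = victim(inputs_embeds=embeds, attention_mask=attention_mask).logits
    F.cross_entropy(logits, label).backward()
    delta = step_size * embeds.grad.sign()
    delta = delta.clamp(-eps, eps)            # project into an L-inf ball
    # Decode the perturbed latent representation back to discrete tokens
    # with a masked-language-model head.
    with torch.no_grad():
        token_ids = mlm_head_model(inputs_embeds=embeds.detach() + delta,
                                   attention_mask=attention_mask).logits.argmax(-1)
    return token_ids
```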
Word embeddings, made widely popular in 2013 with word2vec, have become a mainstay of NLP engineering pipelines. Recently, with the release of BERT, word embeddings have moved from the term-based embedding space to the contextual embedding space: each term is no longer represented by a single low-dimensional vector; instead, each term together with its context determines the vector weights. BERT's setup and architecture have been shown to be general enough to be applicable to many natural language tasks. Importantly for Information Retrieval (IR), in contrast to prior deep learning solutions to IR problems, which required significant tuning of neural net architectures and training regimes, 'vanilla BERT' has been shown to outperform existing retrieval algorithms by a wide margin, including on tasks and corpora that have long resisted retrieval effectiveness gains over traditional IR baselines (such as Robust04). In this paper, we employ the recently proposed axiomatic dataset analysis technique: that is, we create diagnostic datasets, each fulfilling a retrieval heuristic (both term matching and semantic-based), to explore what BERT is able to learn. Contrary to our expectations, we find that BERT, when applied to a recently released large-scale web corpus with ad-hoc topics, does not adhere to any of the explored axioms. At the same time, BERT outperforms the traditional query likelihood retrieval model by 40%. This means that the axiomatic approach to IR (and its extension of diagnostic datasets created for retrieval heuristics) may not, in its current form, be applicable to large corpora. Additional, and different, axioms are needed.
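As a concrete illustration of the diagnostic methodology, checking adherence to a term-frequency axiom such as TFC1 reduces to counting how often a ranker respects the heuristic on purpose-built document pairs. A sketch, with `score` standing in for any ranking model:

```python
def axiom_adherence(score, query, doc_pairs):
    # TFC1-style diagnostic: each pair holds two documents that differ only in
    # query-term frequency; the axiom says the higher-TF one should score higher.
    hits = sum(score(query, hi) > score(query, lo) for hi, lo in doc_pairs)
    return hits / len(doc_pairs)   # fraction of pairs where the axiom holds
```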
Misinformation and disinformation have become a global threat to our security and safety. To cope with the scale of online misinformation, one viable solution is to automate the fact-checking of claims by retrieving and verifying them against relevant evidence. Despite major recent advances in automatic fact verification, a comprehensive evaluation of the possible attack vectors against such systems is still lacking. In particular, the automated fact-verification process might be susceptible to the very disinformation campaigns it is trying to combat. In this work, we assume an adversary that automatically tampers with the online evidence in order to disrupt the fact-checking model, either by camouflaging the relevant evidence or by planting misleading evidence. We first propose an exploratory taxonomy that spans these two targets and the different threat-model dimensions. Guided by this taxonomy, we design and propose several potential attack methods. We show that it is possible to subtly modify claim-salient snippets in the evidence, in addition to generating diverse and claim-aligned evidence. As a result, we severely degrade fact-checking performance under many different permutations of the taxonomy's dimensions. The attacks are also robust against post-hoc modifications of the claim. Our analysis further hints at potential limitations in models' inference when faced with contradicting evidence. We emphasize that these attacks can have harmful implications for the inspectable and human-in-the-loop usage of such models, and we conclude by discussing challenges and directions for future defenses.
Pre-trained programming language (PL) models (such as CodeT5, CodeBERT, GraphCodeBERT, etc.) have the potential to automate software engineering tasks involving code understanding and code generation. However, these models operate in the natural channel of code, i.e., they are primarily concerned with the human understanding of the code. They are not robust to changes in the input and thus are potentially susceptible to adversarial attacks in the natural channel. We propose CodeAttack, a simple yet effective black-box attack model that uses code structure to generate effective, efficient, and imperceptible adversarial code samples, and demonstrate the vulnerabilities of the state-of-the-art PL models to code-specific adversarial attacks. We evaluate the transferability of CodeAttack on several code-code (translation and repair) and code-NL (summarization) tasks across different programming languages. CodeAttack outperforms state-of-the-art adversarial NLP attack models, achieving the best overall drop in performance while being more efficient, imperceptible, consistent, and fluent. The code can be found at https://github.com/reddy-lab-code-research/CodeAttack.
Deep Learning and Machine Learning based models have become extremely popular in text processing and information retrieval. However, the non-linear structures present inside the networks make these models largely inscrutable. A significant body of research has focused on increasing the transparency of these models. This article provides a broad overview of research on the explainability and interpretability of natural language processing and information retrieval methods. More specifically, we survey approaches that have been applied to explain word embeddings, sequence modeling, attention modules, transformers, BERT, and document ranking. The concluding section suggests some possible directions for future research on this topic.
We carried out a comprehensive evaluation of 13 recent models for ranking long documents using two popular collections (MS MARCO Documents and Robust04). Our model zoo includes two specialized Transformer models (such as Longformer) that can process long documents without the need to split them. Along the way, we document several difficulties regarding training and comparing such models. Somewhat surprisingly, we find the simple FirstP baseline (truncating documents to satisfy the input-sequence constraint of a typical Transformer model) to be quite effective. We analyze the distribution of relevant passages (inside documents) to explain this phenomenon. We further argue that, despite their widespread use, Robust04 and MS MARCO Documents are not particularly useful for benchmarking long-document models.
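The FirstP baseline is almost trivial to state in code; a sketch using a HuggingFace tokenizer:

```python
from transformers import AutoTokenizer

def firstp(document, tokenizer, max_len=512):
    # FirstP: keep only the leading tokens that fit the input window of a
    # standard Transformer ranker and score the document on that prefix alone.
    ids = tokenizer(document, truncation=True, max_length=max_len)["input_ids"]
    return tokenizer.decode(ids, skip_special_tokens=True)

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
prefix = firstp("a very long document ...", tok)
```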
Machine learning algorithms are often vulnerable to adversarial examples that have imperceptible alterations from the original counterparts but can fool the state-of-the-art models. It is helpful to evaluate or even improve the robustness of these models by exposing the maliciously crafted adversarial examples. In this paper, we present TEXTFOOLER, a simple but strong baseline to generate adversarial text. By applying it to two fundamental natural language tasks, text classification and textual entailment, we successfully attacked three target models, including the powerful pre-trained BERT, and the widely used convolutional and recurrent neural networks. We demonstrate three advantages of this framework: (1) effective: it outperforms previous attacks in success rate and perturbation rate; (2) utility-preserving: it preserves semantic content, grammaticality, and the correct types as classified by humans; and (3) efficient: it generates adversarial text with computational complexity linear in the text length.
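A condensed version of the TEXTFOOLER loop (word importance via deletion, then greedy synonym substitution) is sketched below; the semantic-similarity and part-of-speech checks of the full method are omitted, and `predict` (returning an array of class probabilities) and `synonyms` (a counter-fitted neighbor lookup) are assumed interfaces:

```python
def textfooler_sketch(words, predict, synonyms, gold):
    base = predict(words)[gold]
    # Rank words by how much deleting them hurts the gold-class probability.
    order = sorted(range(len(words)),
                   key=lambda i: base - predict(words[:i] + words[i + 1:])[gold],
                   reverse=True)
    for i in order:
        best = predict(words)[gold]
        for cand in synonyms(words[i]):
            trial = words[:i] + [cand] + words[i + 1:]
            probs = predict(trial)
            if probs.argmax() != gold:       # prediction flipped: success
                return trial
            if probs[gold] < best:           # keep the most damaging swap
                words, best = trial, probs[gold]
    return None                              # attack failed within the budget
```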
Robustness evaluation against adversarial examples has become increasingly important to unveil the trustworthiness of the prevailing deep models in natural language processing (NLP). However, in contrast to the computer vision domain where the first-order projected gradient descent (PGD) is used as the benchmark approach to generate adversarial examples for robustness evaluation, there lacks a principled first-order gradient-based robustness evaluation framework in NLP. The emerging optimization challenges lie in 1) the discrete nature of textual inputs together with the strong coupling between the perturbation location and the actual content, and 2) the additional constraint that the perturbed text should be fluent and achieve a low perplexity under a language model. These challenges make the development of PGD-like NLP attacks difficult. To bridge the gap, we propose TextGrad, a new attack generator using gradient-driven optimization, supporting high-accuracy and high-quality assessment of adversarial robustness in NLP. Specifically, we address the aforementioned challenges in a unified optimization framework. And we develop an effective convex relaxation method to co-optimize the continuously-relaxed site selection and perturbation variables and leverage an effective sampling method to establish an accurate mapping from the continuous optimization variables to the discrete textual perturbations. Moreover, as a first-order attack generation method, TextGrad can be baked into adversarial training to further improve the robustness of NLP models. Extensive experiments are provided to demonstrate the effectiveness of TextGrad not only in attack generation for robustness evaluation but also in adversarial defense.
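The last step of such a framework, mapping the continuously relaxed site and token variables back to a discrete edit, could be sampled as follows; the independent-sampling simplification is ours, not TextGrad's exact procedure:

```python
import torch

def sample_discrete_perturbation(site_probs, token_probs):
    # site_probs:  (seq_len,) probability that each position gets edited
    # token_probs: (seq_len, vocab) relaxed distribution over replacements
    edit_mask = torch.bernoulli(site_probs).bool()           # which sites to edit
    replacements = torch.multinomial(token_probs, 1).squeeze(-1)
    return edit_mask, replacements  # candidate edit to re-score under the true model
```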
Understanding why a model makes certain predictions is crucial when adapting it for real world decision making. LIME is a popular model-agnostic feature attribution method for the tasks of classification and regression. However, the task of learning to rank in information retrieval is more complex in comparison with either classification or regression. In this work, we extend LIME to propose Rank-LIME, a model-agnostic, local, post-hoc linear feature attribution method for the task of learning to rank that generates explanations for ranked lists. We employ novel correlation-based perturbations, differentiable ranking loss functions and introduce new metrics to evaluate ranking based additive feature attribution models. We compare Rank-LIME with a variety of competing systems, with models trained on the MS MARCO datasets and observe that Rank-LIME outperforms existing explanation algorithms in terms of Model Fidelity and Explain-NDCG. With this we propose one of the first algorithms to generate additive feature attributions for explaining ranked lists.
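Stripped to its essentials, Rank-LIME fits a linear surrogate to how input perturbations change the ranked list. In this sketch, Kendall's tau serves as the correlation-based target, and every function name is a placeholder:

```python
import numpy as np
from scipy.stats import kendalltau

def rank_lime_sketch(scores, perturb, featurize, n_samples=500):
    base = scores(None)                  # model scores on the original input
    X, y = [], []
    for _ in range(n_samples):
        sample = perturb()               # a perturbed interpretable instance
        X.append(featurize(sample))
        y.append(kendalltau(base, scores(sample))[0])  # agreement with original list
    X, y = np.asarray(X), np.asarray(y)
    weights, *_ = np.linalg.lstsq(X, y, rcond=None)
    return weights                       # additive feature attributions
```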
Great progress has been made in recent years in applying pre-trained language models such as BERT to information retrieval (IR) tasks. Hyperlinks, which are commonly used in web pages, have been leveraged for designing pre-training objectives. For example, the anchor texts of hyperlinks have been used to simulate queries, thereby constructing vast numbers of query-document pairs for pre-training. However, as a bridge between two web pages, the potential of hyperlinks has not been fully explored. In this work, we focus on modeling the relationship between two documents connected by a hyperlink and design a new pre-training objective for ad-hoc retrieval. Specifically, we categorize the relationships between documents into four groups: no link, unidirectional link, symmetric link, and most-relevant symmetric link. By comparing two documents sampled from adjacent groups, the model can gradually improve its ability to capture matching signals. We propose a progressive hyperlink prediction (PHP) framework to explore the utilization of hyperlinks in pre-training. Experimental results on two large-scale ad-hoc retrieval datasets and six question-answering datasets demonstrate its superiority over existing pre-training methods.
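The progressive comparison reduces to an ordinary pairwise margin loss applied between adjacent relationship groups; a sketch with an illustrative margin:

```python
import torch.nn.functional as F

def php_pairwise_loss(stronger_scores, weaker_scores, margin=1.0):
    # Documents are bucketed by link relationship: no link < unidirectional
    # < symmetric < most-relevant symmetric. Pre-training asks the model to
    # score a document from the stronger group above one from the adjacent
    # weaker group, progressively sharpening its matching signal.
    return F.relu(margin - (stronger_scores - weaker_scores)).mean()
```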
Deep neural networks have a wide range of applications in solving various real-world tasks and have achieved satisfactory results in domains such as computer vision, image classification, and natural language processing. Meanwhile, the security and robustness of neural networks have become imperative, as diverse studies have revealed vulnerable aspects of neural networks. Case in point: in natural language processing tasks, a neural network can be fooled by a covertly modified text that retains high similarity to the original. According to previous studies, most research has focused on the image domain; unlike image adversarial attacks, text is represented as discrete sequences, so traditional image attack methods are not applicable in the NLP field. In this paper, we propose a word-level NLP sentiment classifier attack model that includes a word selection method based on the self-attention mechanism and a greedy search algorithm for word substitution. We experiment with our attack model by attacking GRU and 1D-CNN victim models on the IMDB dataset. Experimental results demonstrate that our model achieves a higher attack success rate and is more efficient than previous methods, owing to the efficient word selection algorithm and the minimized number of word substitutions. Also, our model is transferable and can be used in the image domain with several modifications.
Deep transformer neural network models have improved the predictive accuracy of intelligent text processing systems in the biomedical domain. They have obtained state-of-the-art performance scores on a wide variety of biomedical and clinical natural language processing (NLP) benchmarks. However, the robustness and reliability of these models has so far been less explored. Neural NLP models can be easily fooled by adversarial samples, i.e., minor changes to the input that preserve the meaning and understandability of the text but force the NLP system to make erroneous decisions. This raises serious concerns about the security and trustworthiness of biomedical NLP systems, especially when they are intended for deployment in real-world use cases. We investigated the robustness of several transformer neural language models, namely BioBERT, SciBERT, BioMed-RoBERTa, and Bio-ClinicalBERT, on a wide range of biomedical and clinical text processing tasks. We implemented various adversarial attack methods to test the NLP systems in different attack scenarios. Experimental results show that the biomedical NLP models are sensitive to adversarial samples; their performance dropped on average by 21 and 18.9 absolute percent under character-level and word-level adversarial noise, respectively. Conducting extensive adversarial training experiments, we fine-tuned the NLP models on a mixture of clean samples and adversarial inputs. The results show that adversarial training is an effective defense mechanism against adversarial noise; the models' robustness improved by 11.3 absolute percent on average. In addition, the models' performance on clean data increased by 2.4 absolute percent on average, demonstrating that adversarial training can boost the generalization ability of biomedical NLP systems.
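The defense amounts to a standard adversarial-training loop over a clean/perturbed mixture; in this sketch, `make_adversarial` stands in for any of the character- or word-level noise generators, and `compute_loss` is an assumed helper:

```python
import random

def adversarial_training_epoch(model, loader, compute_loss, make_adversarial,
                               optimizer, mix=0.5):
    model.train()
    for texts, labels in loader:
        if random.random() < mix:            # perturb roughly half the batches
            texts = [make_adversarial(t) for t in texts]
        loss = compute_loss(model, texts, labels)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```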
Screening studies is the most time-consuming process when medical researchers conduct systematic reviews (SR): researchers read thousands of pieces of medical literature and manually label them as relevant or irrelevant. Screening prioritization (i.e., document ranking) is an approach for assisting researchers by providing a ranking of the documents in which relevant documents are ranked higher than irrelevant ones. Seed-driven document ranking (SDR) uses a known relevant document (i.e., a seed) as a query and generates such rankings. Previous work on SDR seeks to identify different term weights in the query document and uses them in a retrieval model to compute ranking scores. Alternatively, we formulate the SDR task as finding documents similar to the query document and generate rankings based on similarity scores. We propose a document matching measure named Mirror Matching, which calculates matching scores between medical abstract texts by incorporating common writing patterns such as background, method, result, and conclusion. We conduct experiments on the CLEF 2019 eHealth Task 2 TAR dataset, and the empirical results show that this simple approach achieves higher performance than traditional and neural retrieval models on Average Precision and Precision-based metrics.
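Mirror Matching compares like with like across the conventional sections of a medical abstract; a sketch that assumes the abstracts have already been split into sections and that `similarity` is any text-similarity function:

```python
def mirror_match(seed, candidate, similarity):
    # Compare background with background, method with method, and so on,
    # then aggregate the per-section scores (a plain mean here; the actual
    # aggregation is a modeling choice).
    sections = ["background", "method", "result", "conclusion"]
    return sum(similarity(seed[s], candidate[s]) for s in sections) / len(sections)
```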
Named entity recognition (NER) models are widely used for identifying named entities (e.g., individuals, locations, and other information) in text documents. Machine learning based NER models are increasingly being applied in privacy-sensitive applications that need automatic and scalable identification of sensitive information to redact text for data sharing. In this paper, we study the setting when NER models are available as a black-box service for identifying sensitive information in user documents and show that these models are vulnerable to membership inference on their training datasets. With updated pre-trained NER models from spaCy, we demonstrate two distinct membership attacks on these models. Our first attack capitalizes on unintended memorization in the NER's underlying neural network, a phenomenon NNs are known to be vulnerable to. Our second attack leverages a timing side-channel to target NER models that maintain vocabularies constructed from the training data. We show that different functional paths of words within the training dataset in contrast to words not previously seen have measurable differences in execution time. Revealing membership status of training samples has clear privacy implications, e.g., in text redaction, sensitive words or phrases to be found and removed are at risk of being detected in the training dataset. Our experimental evaluation includes the redaction of both password and health data, presenting both security risks and privacy/regulatory issues. This is exacerbated by results that show memorization with only a single phrase. We achieved 70% AUC in our first attack on a text redaction use-case. We also show overwhelming success in the timing attack with 99.23% AUC. Finally, we discuss potential mitigation approaches to realize the safe use of NER models in light of the privacy and security implications of membership inference attacks.
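The timing attack requires nothing more than repeated wall-clock measurements of the black-box call; a sketch:

```python
import time
import statistics

def timing_signal(ner_service, text, trials=200):
    # Words present in a vocabulary built from the training data can take a
    # different code path than unseen words; the median of repeated timings
    # makes that difference measurable despite noise.
    samples = []
    for _ in range(trials):
        t0 = time.perf_counter()
        ner_service(text)
        samples.append(time.perf_counter() - t0)
    return statistics.median(samples)  # compare against timings for known-unseen words
```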
Protecting NLP models against misspellings has been an object of research interest for the past few years. Existing remediations typically either compromise accuracy or require full model re-training for each new class of attacks. We propose a novel method of retroactively adding resilience to misspellings to transformer-based NLP models. This robustness can be achieved without re-training the original NLP model and with only a minimal loss of language understanding performance on inputs without misspellings. Additionally, we propose a new efficient approximate method of generating adversarial misspellings, which significantly reduces the cost of evaluating a model's resilience to adversarial attacks.
Recently, several dense retrieval (DR) models have demonstrated performance competitive with the term-based retrieval that is ubiquitous in search systems. In contrast to term-based matching, DR projects queries and documents into a dense vector space and retrieves results via (approximate) nearest neighbor search. Deploying a new system, such as DR, inevitably involves tradeoffs in aspects of its performance. Established retrieval systems are usually well understood in terms of effectiveness and costs, such as query latency, indexing throughput, or storage requirements. In this work, we propose a framework with a set of criteria that go beyond simple effectiveness measures to thoroughly compare two retrieval systems, with the explicit goal of assessing one system's readiness to replace the other. This includes careful tradeoff considerations between effectiveness and various cost factors. Furthermore, we describe guardrail criteria, since even a system that is better on average may have systematic failures on a minority of queries. The guardrails check for failures on certain query characteristics and novel failure types that are only possible in dense retrieval systems. We demonstrate our decision framework on a web ranking scenario. In that scenario, the results for the state-of-the-art DR model are surprising, not only in average performance but also across a set of broad guardrail tests, showing robustness on different query characteristics, lexical matching, generalization, and the number of regressions. It is impossible to predict whether DR will become ubiquitous in the future, but one way this can happen is through repeated applications of decision processes such as the one presented here.
In this work, we present a systematic empirical study focused on the suitability of state-of-the-art multilingual encoders for cross-lingual document and sentence retrieval tasks across a number of diverse language pairs. We first treat these models as multilingual text encoders and benchmark their performance in unsupervised ad-hoc sentence- and document-level CLIR. In contrast to supervised language understanding, our results indicate that for unsupervised document-level CLIR (a setup with no relevance judgments for IR-specific fine-tuning) pretrained multilingual encoders on average fail to significantly outperform earlier models based on cross-lingual word embeddings (CLWEs). For sentence-level retrieval, we do obtain state-of-the-art performance: the peak scores, however, are achieved by multilingual encoders that have been further specialized, in a supervised fashion, for sentence understanding tasks, rather than by their vanilla 'off-the-shelf' variants. Following these results, we introduce localized relevance matching for document-level CLIR, where we independently score a query against document sections. In the second part, we evaluate multilingual encoders fine-tuned on English relevance data in a series of zero-shot language and domain transfer CLIR experiments. Our results show that supervised re-ranking rarely improves over multilingual transformers used as unsupervised base rankers. Finally, only with in-domain contrastive fine-tuning (i.e., same domain, only language transfer) do we manage to improve ranking quality. We uncover substantial empirical differences between cross-lingual retrieval results and results of (zero-shot) cross-lingual transfer for monolingual retrieval in target languages, which point to 'monolingual overfitting' of retrieval models trained on monolingual data.
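Localized relevance matching can be sketched as scoring the query against each document section independently and keeping the best local match; max-pooling here is one plausible aggregation, and `encode` and `similarity` are placeholders:

```python
def localized_relevance(query, sections, encode, similarity):
    # Score each section of the document independently against the query and
    # keep the strongest local match instead of encoding the whole document.
    q = encode(query)
    return max(similarity(q, encode(section)) for section in sections)
```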