Attacking Neural Machine Translation models is an inherently combinatorial task on discrete sequences, solved with approximate heuristics. Most methods use the gradient to attack the model on each sample independently. Instead of mechanically applying the gradient, can we learn to produce meaningful adversarial attacks? In contrast with existing approaches, we learn to attack a model by training an adversarial generator based on a language model. We propose the Masked Adversarial Generation (MAG) model, which learns to perturb the translation model throughout the training process. Experiments show that it improves the robustness of machine translation models while being faster than competing methods.
We introduce Bi-SimCut: a simple but effective training strategy to boost neural machine translation (NMT) performance. It consists of two procedures: bidirectional pretraining and unidirectional finetuning. Both procedures utilize SimCut, a simple regularization method that forces consistency between the output distributions of the original and the cutoff sentence pairs. Without leveraging extra datasets via back-translation or integrating large-scale pretrained models, Bi-SimCut achieves strong translation performance across five translation benchmarks (data sizes ranging from 160K to 20.2M): BLEU scores of 31.16 for en->de and 38.37 for de->en on the IWSLT14 dataset, 30.78 for en->de and 35.15 for de->en on the WMT14 dataset, and 27.17 for zh->en on the WMT17 dataset. SimCut is not a new method, but a version of Cutoff (Shen et al., 2020) simplified and adapted for NMT, and it can be regarded as a perturbation-based method. Given the universality and simplicity of SimCut and Bi-SimCut, we believe they can serve as strong baselines for future NMT research.
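The core of SimCut is a symmetric consistency penalty between two forward passes. A minimal PyTorch sketch, assuming we already have the decoder logits for the original pair and for its cutoff-perturbed version (the names and the 0.5 weighting are illustrative, not the paper's exact formulation):

```python
import torch.nn.functional as F

def simcut_consistency_loss(logits_orig, logits_cut):
    """Bidirectional KL between the output distributions computed on the
    original and the cutoff-perturbed sentence pair (illustrative sketch)."""
    p = F.log_softmax(logits_orig, dim=-1)
    q = F.log_softmax(logits_cut, dim=-1)
    kl_pq = F.kl_div(q, p, log_target=True, reduction="batchmean")  # KL(p || q)
    kl_qp = F.kl_div(p, q, log_target=True, reduction="batchmean")  # KL(q || p)
    return 0.5 * (kl_pq + kl_qp)
```

In training, such a term would be added to the usual cross-entropy loss with a tunable weight.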
This paper presents a new data augmentation method for neural machine translation that can enforce stronger semantic consistency both within and across languages. Our method is based on the Conditional Masked Language Model (CMLM), which is bidirectional and can condition on both left and right context as well as on labels. We demonstrate that the CMLM is a good technique for generating context-dependent word distributions. In particular, we show that the CMLM is capable of enforcing semantic consistency by conditioning on both the source and the target during substitution. In addition, to enhance diversity, we incorporate the idea of soft word substitution into data augmentation, which replaces a word with a probabilistic distribution over the vocabulary. Experiments on four translation datasets of different scales show that the overall solution results in more realistic data augmentation and better translation quality. Compared with recent strong works, our method consistently achieves the best performance, with improvements of 1.90 BLEU points over the baseline.
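The soft word substitution idea can be made concrete in a few lines: rather than committing to one sampled replacement token, the position is represented by the probability-weighted mixture of token embeddings. A sketch under our own naming assumptions:

```python
import torch

def soft_substitute(embedding_matrix, mlm_probs):
    """Soft word = expectation of token embeddings under the (C)MLM's
    predicted distribution for the substituted position (sketch).

    embedding_matrix: (vocab_size, d_model)
    mlm_probs:        (vocab_size,) probabilities from the LM head
    """
    return mlm_probs @ embedding_matrix  # (d_model,)
```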
While end-to-end neural machine translation (NMT) has achieved impressive progress, noisy input usually leads models to become fragile and unstable. Generating adversarial examples as augmented data has proved useful for alleviating this problem. Existing methods for adversarial example generation (AEG) operate at the word or character level. In this paper, we propose a phrase-level adversarial example generation (PAEG) method to enhance model robustness. Our method leverages a gradient-based strategy to substitute phrases at vulnerable positions in the source input. We verify our method on three benchmarks, including the LDC Chinese-English, IWSLT14 German-English, and WMT14 English-German tasks. Experimental results demonstrate that our approach significantly improves performance compared to previous methods.
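As background, gradient-based substitution methods of this kind typically rank source positions by the gradient norm of the loss with respect to their embeddings and attack the top-ranked spans. A generic sketch, not PAEG's exact procedure:

```python
import torch

def vulnerable_positions(loss, src_embeddings, k=3):
    """Rank source positions by gradient-norm saliency (sketch).

    src_embeddings: (src_len, d_model), requires_grad=True
    """
    grads, = torch.autograd.grad(loss, src_embeddings, retain_graph=True)
    saliency = grads.norm(dim=-1)      # (src_len,)
    return saliency.topk(k).indices    # candidate positions to perturb
```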
Despite great success on many machine learning tasks, deep neural networks are still vulnerable to adversarial samples. While gradient-based adversarial attack methods have been well explored in the field of computer vision, it is impractical to apply them directly in natural language processing due to the discrete nature of text. To bridge this gap, we propose a general framework for adapting existing gradient-based methods to craft textual adversarial samples. In this framework, gradient-based continuous perturbations are added to the embedding layer and amplified in the forward propagation process; the final perturbed latent representations are then decoded with a masked language model head to obtain potential adversarial samples. In this paper, we instantiate our framework with Textual Projected Gradient Descent (TPGD). We evaluate the framework by performing transfer black-box attacks on three benchmark datasets. Experimental results demonstrate that our method achieves better overall performance than strong baseline methods and produces more fluent and grammatical adversarial samples. All code and data will be made public.
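A minimal sketch of the PGD-style update in embedding space that such a framework builds on; the L2 ball, step size, and radius below are illustrative assumptions:

```python
import torch

def pgd_embedding_step(clean, current, grad, step_size=0.01, epsilon=0.1):
    """One projected gradient ascent step on input embeddings (sketch):
    move along the loss gradient, then project the accumulated
    perturbation back onto an L2 ball of radius epsilon."""
    step = step_size * grad / (grad.norm(dim=-1, keepdim=True) + 1e-12)
    offset = current + step - clean
    scale = epsilon / offset.norm(dim=-1, keepdim=True).clamp(min=epsilon)
    return clean + offset * scale
```

The perturbed representations would then be passed through the MLM head to decode candidate adversarial tokens.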
Robustness evaluation against adversarial examples has become increasingly important to unveil the trustworthiness of the prevailing deep models in natural language processing (NLP). However, in contrast to the computer vision domain where the first-order projected gradient descent (PGD) is used as the benchmark approach to generate adversarial examples for robustness evaluation, there lacks a principled first-order gradient-based robustness evaluation framework in NLP. The emerging optimization challenges lie in 1) the discrete nature of textual inputs together with the strong coupling between the perturbation location and the actual content, and 2) the additional constraint that the perturbed text should be fluent and achieve a low perplexity under a language model. These challenges make the development of PGD-like NLP attacks difficult. To bridge the gap, we propose TextGrad, a new attack generator using gradient-driven optimization, supporting high-accuracy and high-quality assessment of adversarial robustness in NLP. Specifically, we address the aforementioned challenges in a unified optimization framework. And we develop an effective convex relaxation method to co-optimize the continuously-relaxed site selection and perturbation variables and leverage an effective sampling method to establish an accurate mapping from the continuous optimization variables to the discrete textual perturbations. Moreover, as a first-order attack generation method, TextGrad can be baked into adversarial training to further improve the robustness of NLP models. Extensive experiments are provided to demonstrate the effectiveness of TextGrad not only in attack generation for robustness evaluation but also in adversarial defense.
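To make the continuous-to-discrete step concrete, here is a heavily simplified sketch of sampling a discrete perturbation from relaxed site-selection and token-replacement variables; all names and the fixed top-k site budget are our illustrative assumptions, not TextGrad's actual procedure:

```python
import torch

def sample_discrete_attack(site_logits, token_logits, budget=3):
    """Map continuously-relaxed attack variables to a discrete perturbation
    by sampling (sketch).

    site_logits:  (seq_len,)        relaxed which-position-to-perturb scores
    token_logits: (seq_len, vocab)  relaxed replacement-token scores
    """
    sites = torch.sigmoid(site_logits).topk(budget).indices
    tokens = torch.distributions.Categorical(logits=token_logits[sites]).sample()
    return sites, tokens  # perturb these positions with these tokens
```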
In the past few years, protecting NLP models against misspellings has been an object of research interest. Existing remedies typically compromise accuracy or require full model re-training for each new class of attacks. We propose a novel method for adding resilience to misspellings to transformer-based NLP models. This robustness can be achieved without re-training the original NLP model and with only a minimal loss of language understanding performance on inputs without misspellings. In addition, we propose a new efficient approximate method for generating adversarial misspellings, which significantly reduces the cost of evaluating a model's resilience to adversarial attacks.
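For context, character-level adversarial misspellings are usually built from a small set of edit operations. A generic illustration, not the paper's approximate method:

```python
import random

def misspell(word, rng=random):
    """Apply one random character-level edit: swap, drop, or repeat (sketch)."""
    if len(word) < 2:
        return word
    i = rng.randrange(len(word) - 1)
    op = rng.choice(["swap", "drop", "repeat"])
    if op == "swap":
        return word[:i] + word[i + 1] + word[i] + word[i + 2:]
    if op == "drop":
        return word[:i] + word[i + 1:]
    return word[:i] + word[i] + word[i:]  # duplicate character i
```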
Adversarial training is widely acknowledged as the most effective defense against adversarial attacks. However, it is also well established that achieving both robustness and generalization in adversarially trained models involves a trade-off. The goal of this work is to provide an in-depth comparison of different approaches to adversarial training in language models. Specifically, we study the effect of pre-training data augmentation as well as training-time input perturbations vs. embedding-space perturbations on the robustness and generalization of BERT-like language models. Our findings suggest that better robustness can be achieved by pre-training data augmentation or by training with input-space perturbation. However, training with embedding-space perturbation significantly improves generalization. A linguistic correlation analysis of the neurons of the learned models reveals that the improved generalization is due to 'more specialized' neurons. To the best of our knowledge, this is the first work to carry out a deep qualitative analysis of different methods of generating adversarial examples in adversarial training of language models.
Deep transformer neural network models have improved the predictive accuracy of intelligent text processing systems in the biomedical domain, obtaining state-of-the-art performance scores on a wide variety of biomedical and clinical natural language processing (NLP) benchmarks. However, the robustness and reliability of these models have so far received less attention. Neural NLP models can be easily fooled by adversarial samples: minor changes to the input that preserve the meaning and understandability of the text but force the NLP system to make erroneous decisions. This raises serious concerns about the security and trustworthiness of biomedical NLP systems, especially when they are intended for deployment in real-world use cases. We investigated the robustness of several transformer neural language models, namely BioBERT, SciBERT, BioMed-RoBERTa, and Bio-ClinicalBERT, on a wide variety of biomedical and clinical text processing tasks, implementing various adversarial attack methods to test the NLP systems in different attack scenarios. Experimental results show that biomedical NLP models are sensitive to adversarial samples; their performance dropped on average by 21 and 18.9 absolute percentage points under character-level and word-level adversarial noise, respectively. In extensive adversarial training experiments, we fine-tuned the NLP models on a mixture of clean samples and adversarial inputs. The results show that adversarial training is an effective defense mechanism against adversarial noise: the models' robustness improved on average by 11.3 absolute percentage points. Moreover, model performance on clean data increased on average by 2.4 absolute percentage points, demonstrating that adversarial training can boost the generalization abilities of biomedical NLP systems.
Subword units are an effective way to alleviate the open vocabulary problems in neural machine translation (NMT). While sentences are usually converted into unique subword sequences, subword segmentation is potentially ambiguous and multiple segmentations are possible even with the same vocabulary. The question addressed in this paper is whether it is possible to harness the segmentation ambiguity as a noise to improve the robustness of NMT. We present a simple regularization method, subword regularization, which trains the model with multiple subword segmentations probabilistically sampled during training. In addition, for better subword sampling, we propose a new subword segmentation algorithm based on a unigram language model. We experiment with multiple corpora and report consistent improvements especially on low resource and out-of-domain settings.
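Subword regularization of this kind is directly available in the SentencePiece library. A minimal usage sketch, assuming a unigram model already trained to a file named spm.model:

```python
import sentencepiece as spm

sp = spm.SentencePieceProcessor(model_file="spm.model")  # assumed model path

# Deterministic (single best) segmentation
print(sp.encode("subword regularization", out_type=str))

# Probabilistic sampling: a different segmentation may be drawn on each call.
# alpha is a smoothing temperature; nbest_size=-1 samples from all hypotheses.
for _ in range(3):
    print(sp.encode("subword regularization", out_type=str,
                    enable_sampling=True, alpha=0.1, nbest_size=-1))
```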
Building on the tremendous success of pre-trained language models (PrLMs) on source code comprehension tasks, the current literature studies either how to further improve the performance (generalization) of PrLMs or their robustness against adversarial attacks. However, these works must compromise on the trade-off between the two aspects, and none of them considers improving both sides in an effective and practical way. To fill this gap, we propose Semantic-Preserving Adversarial Code Embeddings (SPACE), which finds worst-case semantic-preserving attacks while forcing the model to predict the correct labels under these worst cases. Experiments and analysis demonstrate that SPACE remains robust against state-of-the-art attacks while boosting the performance of PrLMs for code.
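Schematically, this is min-max adversarial training: an inner maximization searches for the worst-case semantic-preserving perturbation, and the outer minimization trains the model to predict correctly under it (our notation, not the paper's):

$$\min_{\theta}\; \mathbb{E}_{(x,y)\sim\mathcal{D}} \left[ \max_{\delta \in \mathcal{S}(x)} \mathcal{L}\big(f_{\theta}(x+\delta),\, y\big) \right]$$

where $\mathcal{S}(x)$ denotes the set of semantic-preserving perturbations of input $x$.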
Discrete adversarial attacks are symbolic perturbations to a language input that preserve the output label but lead to a prediction error. While such attacks have been extensively explored for evaluating model robustness, their utility for improving robustness has been limited to offline augmentation: given a trained model, attacks are used to generate perturbed (adversarial) examples, and the model is re-trained exactly once. In this work, we address this gap and leverage discrete attacks for online augmentation, generating adversarial examples at every training step and adapting to the changing nature of the model. We propose (i) a new discrete attack based on best-first search, and (ii) random sampling attacks that, unlike prior work, are not based on expensive search procedures. Surprisingly, we find that random sampling leads to impressive gains in robustness, outperforming the commonly used offline augmentation while yielding a ~10x speedup in training time. Furthermore, online augmentation with search-based attacks justifies its higher training cost, significantly improving robustness on three datasets. Finally, we show that our new attack substantially improves robustness compared to prior methods.
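A hedged sketch of the random-sampling variant: at each training step, draw a handful of random token substitutions and train on the one that maximizes the loss. The model.loss helper and the per-position candidate lists are assumptions for illustration:

```python
import random

def random_sampling_attack(model, tokens, label, candidates, n_samples=8):
    """Online augmentation by random sampling (sketch): keep the random
    substitution that yields the highest loss for the current model."""
    worst, worst_loss = tokens, float("-inf")
    for _ in range(n_samples):
        perturbed = list(tokens)
        i = random.randrange(len(tokens))
        perturbed[i] = random.choice(candidates[i])  # allowed swaps per position
        loss = float(model.loss(perturbed, label))   # assumed helper
        if loss > worst_loss:
            worst, worst_loss = perturbed, loss
    return worst
```

Because no search tree is expanded, each step costs only n_samples extra forward passes, which is consistent with the reported speedup over search-based attacks.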
Data augmentation, the artificial creation of training data for machine learning through transformations, is a widely studied research field across machine learning disciplines. While it is useful for increasing a model's generalization capabilities, it can also address many other challenges and problems, from overcoming limited training data, to regularizing the objective, to limiting the amount of data used in order to protect privacy. Based on a precise description of the goals and applications of data augmentation and a taxonomy of existing works, this survey covers data augmentation methods for text classification and aims to provide a concise and comprehensive overview for researchers and practitioners. Derived from the taxonomy, we divide more than 100 methods into 12 different groupings and provide state-of-the-art references expounding which methods are highly promising by relating them to each other. Finally, research perspectives that may constitute a basis for future work are given.
One of the significant breakthroughs in the history of machine translation is the development of the Transformer model, which has been revolutionary not only for various translation tasks but also for a majority of other NLP tasks. In this paper, we target a Transformer-based system capable of translating a German source sentence into its corresponding English target sentence. We conduct experiments on the news commentary German-English parallel sentences of the WMT'13 dataset. In addition, we study the effect of including additional general-domain data from the IWSLT'16 dataset during training on the performance of the Transformer model. We find that including the IWSLT'16 dataset in training helps gain 2 BLEU score points on the test set of the WMT'13 dataset. A qualitative analysis is introduced to examine how the use of general-domain data helps improve the quality of the generated translations.
This paper provides an overview of NVIDIA NeMo's neural machine translation systems for the constrained data track of the WMT21 News and Biomedical Shared Translation Tasks. Our news task submissions for English-German (En-De) and English-Russian (En-Ru) are built on top of a baseline transformer-based sequence-to-sequence model. Specifically, we use a combination of 1) checkpoint averaging, 2) model scaling, 3) data augmentation with backtranslation and knowledge distillation from right-to-left factorized models, 4) finetuning on test sets from previous years, 5) model ensembling, 6) shallow fusion decoding with transformer language models, and 7) noisy channel re-ranking. In addition, our biomedical task submission for English-Russian uses a biomedically biased vocabulary and is trained from scratch on news task data, medically relevant text curated from the news task dataset, and the biomedical data provided by the shared task. Our news system achieves a sacreBLEU score of 39.5 on the WMT'20 En-De test set, outperforming the best submission to last year's task, 38.8. Our biomedical task Ru-En and En-Ru systems reach BLEU scores of 43.8 and 40.3 respectively on the WMT'20 Biomedical Task test set, outperforming the previous year's best submissions.
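Of these techniques, checkpoint averaging is the simplest to show: the parameter tensors of the last few saved checkpoints are averaged into a single model. A minimal PyTorch sketch (the file paths and raw state-dict format are assumptions):

```python
import torch

def average_checkpoints(paths):
    """Average the parameter tensors of several checkpoints (sketch).

    Assumes each file holds a plain state_dict with matching keys."""
    avg = None
    for path in paths:
        state = torch.load(path, map_location="cpu")
        if avg is None:
            avg = {k: v.clone().float() for k, v in state.items()}
        else:
            for k, v in state.items():
                avg[k] += v.float()
    return {k: v / len(paths) for k, v in avg.items()}

# e.g. model.load_state_dict(average_checkpoints(["ck1.pt", "ck2.pt", "ck3.pt"]))
```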
In this paper, we explore improving machine translation via a Generative Adversarial Network (GAN) architecture. Taking inspiration from RelGAN, a text generation model, and from neural machine translation models, we implement a model that learns to transform awkward, non-fluent English sentences into fluent ones, while training only on monolingual corpora. We use a parameter $\lambda$ to control the amount of deviation from the input sentence, i.e., the trade-off between keeping the original tokens and modifying them to be more fluent. In some cases, our results improve upon phrase-based machine translation. In particular, the GAN with a transformer generator shows some promising results. We suggest some directions for future work to build upon this concept.
Undirected neural sequence models achieve performance competitive with state-of-the-art directed sequence models that generate monotonically from left to right on machine translation tasks. In this work, we train a policy that learns the generation order for a pre-trained, undirected translation model via reinforcement learning. We show that translations decoded by our learned orders achieve higher BLEU scores than outputs decoded from left to right or by the learned order of Mansimov et al. (2019) on the WMT'14 German-English translation task. On examples with a maximum source and target length of 30 from De-En, WMT'16 English-Romanian, and WMT'21 English-Chinese translation tasks, our learned orders outperform heuristic generation orders on four out of six tasks. We then carefully analyze the learned order patterns through qualitative and quantitative analysis, showing that our policy generally follows an outer-to-inner order: it first predicts the left-most and right-most positions, then moves toward the middle, while skipping less important words at the beginning. Moreover, the policy usually predicts the positions of a single syntactic constituent in consecutive steps. We believe our findings can provide more insight into the mechanism of undirected generation models and encourage further research in this direction. Our code is available at https://github.com/jiangycTarheel/undirected-generation
We propose BERTSCORE, an automatic evaluation metric for text generation. Analogously to common metrics, BERTSCORE computes a similarity score for each token in the candidate sentence with each token in the reference sentence. However, instead of exact matches, we compute token similarity using contextual embeddings. We evaluate using the outputs of 363 machine translation and image captioning systems. BERTSCORE correlates better with human judgments and provides stronger model selection performance than existing metrics. Finally, we use an adversarial paraphrase detection task to show that BERTSCORE is more robust to challenging examples when compared to existing metrics.
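The core computation is small enough to sketch: cosine similarities between all candidate/reference token embeddings, greedy matching via row and column maxima, then an F1. This omits the idf weighting and baseline rescaling used in the full metric:

```python
import torch

def bertscore_f1(cand_emb, ref_emb):
    """Greedy-matching BERTScore core (sketch).

    cand_emb: (n_cand, d) contextual embeddings of candidate tokens
    ref_emb:  (n_ref, d)  contextual embeddings of reference tokens
    """
    cand = cand_emb / cand_emb.norm(dim=-1, keepdim=True)
    ref = ref_emb / ref_emb.norm(dim=-1, keepdim=True)
    sim = cand @ ref.T                        # (n_cand, n_ref) cosine matrix
    precision = sim.max(dim=1).values.mean()  # best match per candidate token
    recall = sim.max(dim=0).values.mean()     # best match per reference token
    return 2 * precision * recall / (precision + recall)
```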
Nearest Neighbor Machine Translation (kNNMT) is a simple and effective method of augmenting neural machine translation (NMT) with a token-level nearest neighbor retrieval mechanism. The effectiveness of kNNMT directly depends on the quality of retrieved neighbors. However, original kNNMT builds datastores based on representations from NMT models, which would result in poor retrieval accuracy when NMT models are not good enough, leading to sub-optimal translation performance. In this paper, we propose PRED, a framework that leverages Pre-trained models for Datastores in kNN-MT. Better representations from pre-trained models allow us to build datastores of better quality. We also design a novel contrastive alignment objective to mitigate the representation gap between the NMT model and pre-trained models, enabling the NMT model to retrieve from better datastores. We conduct extensive experiments on both bilingual and multilingual translation benchmarks, including WMT17 English $\leftrightarrow$ Chinese, WMT14 English $\leftrightarrow$ German, IWSLT14 German $\leftrightarrow$ English, and IWSLT14 multilingual datasets. Empirical results demonstrate the effectiveness of PRED.
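This abstract builds on the kNN-MT decoding rule, which interpolates the NMT next-token distribution with a distribution induced by retrieved datastore neighbors. A sketch with illustrative hyperparameters:

```python
import torch

def knn_mt_distribution(p_nmt, distances, neighbor_tokens, vocab_size,
                        lam=0.5, temperature=10.0):
    """p(y) = lam * p_kNN(y) + (1 - lam) * p_NMT(y)  (sketch).

    p_nmt:           (vocab_size,) model distribution for the next token
    distances:       (k,)  L2 distances to retrieved datastore keys
    neighbor_tokens: (k,)  target tokens stored with those keys (long)
    """
    weights = torch.softmax(-distances / temperature, dim=0)
    p_knn = torch.zeros(vocab_size)
    p_knn.scatter_add_(0, neighbor_tokens, weights)  # aggregate weight per token
    return lam * p_knn + (1 - lam) * p_nmt
```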
Masked language modeling (MLM) pre-training methods such as BERT corrupt the input by replacing some tokens with [MASK] and then train a model to reconstruct the original tokens. While they produce good results when transferred to downstream NLP tasks, they generally require large amounts of compute to be effective. As an alternative, we propose a more sample-efficient pre-training task called replaced token detection. Instead of masking the input, our approach corrupts it by replacing some tokens with plausible alternatives sampled from a small generator network. Then, instead of training a model that predicts the original identities of the corrupted tokens, we train a discriminative model that predicts whether each token in the corrupted input was replaced by a generator sample or not. Thorough experiments demonstrate this new pre-training task is more efficient than MLM because the task is defined over all input tokens rather than just the small subset that was masked out. As a result, the contextual representations learned by our approach substantially outperform the ones learned by BERT given the same model size, data, and compute. The gains are particularly strong for small models; for example, we train a model on one GPU for 4 days that outperforms GPT (trained using 30x more compute) on the GLUE natural language understanding benchmark. Our approach also works well at scale, where it performs comparably to RoBERTa and XLNet while using less than 1/4 of their compute and outperforms them when using the same amount of compute.
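The discriminator side of replaced token detection reduces to per-position binary classification. A minimal sketch of the loss, with our own tensor naming:

```python
import torch
import torch.nn.functional as F

def rtd_loss(disc_logits, input_ids, original_ids):
    """Replaced-token-detection loss (sketch): predict, for every position,
    whether the token was replaced by a generator sample (1) or kept (0).

    disc_logits:  (seq_len,) one logit per input token
    input_ids:    (seq_len,) tokens after generator replacement
    original_ids: (seq_len,) tokens before corruption
    """
    labels = (input_ids != original_ids).float()
    # Unlike MLM, the objective is defined over ALL input positions
    return F.binary_cross_entropy_with_logits(disc_logits, labels)
```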