Retrieval-augmented Neural Machine Translation models have been successful in many translation scenarios. Different from previous works that make use of mutually similar but redundant translation memories (TMs), we propose a new retrieval-augmented NMT framework that models contrastively retrieved translation memories, which are holistically similar to the source sentence while individually contrastive to each other, providing maximal information gain across three phases. First, in the TM retrieval phase, we adopt a contrastive retrieval algorithm to avoid redundant and uninformative similar translation pieces. Second, in the memory encoding phase, given a set of TMs, we propose a novel Hierarchical Group Attention module to gather both the local context of each TM and the global context of the whole TM set. Finally, in the training phase, a Multi-TM contrastive learning objective is introduced to learn the salient features of each TM with respect to the target sentence. Experimental results show that our framework obtains improvements over strong baselines on the benchmark datasets.
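A minimal sketch of the contrastive TM retrieval phase described above, assuming the source sentence and candidate TMs are already encoded as dense vectors; the MMR-style greedy scoring and the trade-off weight `lam` are illustrative choices, not the paper's exact algorithm:

```python
import numpy as np

def contrastive_retrieve(src_vec, tm_vecs, k=4, lam=0.5):
    """Greedily select k TMs that are similar to the source but mutually
    dissimilar to each other (an MMR-style stand-in for contrastive retrieval)."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

    selected, candidates = [], list(range(len(tm_vecs)))
    while candidates and len(selected) < k:
        best, best_score = None, -np.inf
        for i in candidates:
            relevance = cos(src_vec, tm_vecs[i])                       # similarity to the source
            redundancy = max((cos(tm_vecs[i], tm_vecs[j]) for j in selected), default=0.0)
            score = lam * relevance - (1 - lam) * redundancy           # relevance vs. diversity
            if score > best_score:
                best, best_score = i, score
        selected.append(best)
        candidates.remove(best)
    return selected
```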
Low-frequency word prediction remains a challenge for modern neural machine translation (NMT) systems. Recent adaptive training methods promote the output of infrequent words by up-weighting them in the overall training objective. Although the recall of low-frequency words improves, their prediction precision is unexpectedly hindered by the adaptive objectives. Inspired by the observation that low-frequency words form a more compact embedding space, we tackle this challenge from a representation learning perspective. Specifically, we propose a frequency-aware token-level contrastive learning method, in which the hidden state of each decoding step is pushed away from the counterparts of other target words, in a soft contrastive manner based on the corresponding word frequencies. We conduct experiments on the widely used NIST Chinese-English and WMT14 English-German translation tasks. Empirical results show that our proposed method not only significantly improves translation quality but also enhances lexical diversity and optimizes the word representation space. Further analysis reveals that, compared with related adaptive training strategies, the superiority of our method on low-frequency word prediction lies in the robustness of token-level recall across different frequencies without sacrificing precision.
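To make the idea concrete, the following toy sketch weights a token-level repulsion term by inverse word frequency, so rare tokens are pushed apart from other target tokens more strongly; the exact loss, weighting scheme, and hyper-parameters in the paper differ, and `tau` here is an assumed temperature:

```python
import torch
import torch.nn.functional as F

def freq_aware_repulsion(hidden, freqs, tau=0.1):
    """hidden: [N, d] decoder states for N target tokens in a batch.
    freqs:  [N] corpus counts (>= 1) of the corresponding target words.
    Pushes each state away from the others, with rarer tokens weighted higher."""
    h = F.normalize(hidden, dim=-1)
    sim = h @ h.t() / tau                                        # pairwise similarities
    off_diag = ~torch.eye(len(h), dtype=torch.bool)
    repel = (sim.exp() * off_diag).sum(dim=-1).log()             # soft "push away" term per token
    weight = 1.0 / torch.log1p(freqs.float()).clamp(min=1.0)     # rarer word -> larger weight
    return (weight * repel).mean()
```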
Existing document-level neural machine translation (NMT) models have sufficiently explored different context settings to provide guidance for target generation. However, little attention has been paid to exploiting more diverse context for richer context information. In this paper, we propose a selective memory-augmented neural document translation model to deal with documents whose context hypothesis space is large. Specifically, we retrieve similar bilingual sentence pairs from the training corpus to augment the global context, and then extend a two-stream attention model with a selective mechanism to capture both the local context and the diverse global context. This unified approach allows our model to be trained elegantly on three publicly available document-level machine translation datasets and to significantly outperform previous document-level NMT models.
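As a rough illustration of the selective mechanism, the gate below mixes a local-context vector with a retrieved global-context vector; the module name, dimensions, and the simple sigmoid gate are assumptions for this sketch rather than the paper's actual two-stream attention design:

```python
import torch
import torch.nn as nn

class SelectiveContextGate(nn.Module):
    """Blend local document context with global context retrieved from
    similar bilingual sentence pairs via a learned sigmoid gate."""
    def __init__(self, d_model=512):
        super().__init__()
        self.gate = nn.Linear(2 * d_model, d_model)

    def forward(self, local_ctx, global_ctx):                # both [batch, d_model]
        g = torch.sigmoid(self.gate(torch.cat([local_ctx, global_ctx], dim=-1)))
        return g * local_ctx + (1 - g) * global_ctx
```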
Recently, contrastive learning has attracted increasing interest in neural text generation as a new solution to alleviate the exposure bias problem. It introduces a sequence-level training signal, which is crucial for generation tasks that always rely on auto-regressive decoding. However, previous methods using contrastive learning in neural text generation usually lead to inferior performance. In this paper, we analyse the underlying reasons and propose a new Contrastive Neural Text generation framework, CoNT. CoNT addresses the bottlenecks that prevent contrastive learning from being widely adopted in generation tasks from three aspects -- the construction of contrastive examples, the choice of the contrastive loss, and the strategy in decoding. We validate CoNT on five generation tasks with ten benchmarks, including machine translation, summarization, code comment generation, data-to-text generation and commonsense generation. Experimental results show that CoNT clearly outperforms the conventional training framework on all ten benchmarks with a convincing margin. In particular, CoNT surpasses the previous most competitive contrastive learning method for text generation by 1.50 BLEU on machine translation and 1.77 ROUGE-1 on summarization, respectively. It achieves a new state-of-the-art on summarization, code comment generation (without external data) and data-to-text generation.
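A hedged sketch of a sequence-level contrastive objective in the spirit of CoNT: candidates generated by the model are ranked by their quality against the reference (e.g., sentence BLEU), and better candidates are trained to score higher similarity with the source representation by a rank-scaled margin. The function name, the cosine scorer, and the margin value are illustrative assumptions:

```python
import torch
import torch.nn.functional as F

def sequence_contrastive_loss(src_repr, cand_reprs, cand_quality, margin=0.01):
    """src_repr: [d], cand_reprs: [C, d], cand_quality: [C] (e.g., sentence BLEU)."""
    sims = F.cosine_similarity(src_repr.unsqueeze(0), cand_reprs, dim=-1)
    sims = sims[torch.argsort(cand_quality, descending=True)]   # best candidate first
    loss = torch.zeros(())
    for i in range(len(sims)):
        for j in range(i + 1, len(sims)):
            # a higher-quality candidate should beat a lower-quality one by a rank-scaled margin
            loss = loss + F.relu(sims[j] - sims[i] + margin * (j - i))
    return loss / max(len(sims), 1)
```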
Nearest Neighbor Machine Translation (kNN-MT) is a simple and effective method of augmenting neural machine translation (NMT) with a token-level nearest-neighbor retrieval mechanism. The effectiveness of kNN-MT directly depends on the quality of the retrieved neighbors. However, the original kNN-MT builds datastores based on representations from NMT models, which results in poor retrieval accuracy when the NMT models are not good enough, leading to sub-optimal translation performance. In this paper, we propose PRED, a framework that leverages Pre-trained models for Datastores in kNN-MT. Better representations from pre-trained models allow us to build datastores of better quality. We also design a novel contrastive alignment objective to mitigate the representation gap between the NMT model and the pre-trained models, enabling the NMT model to retrieve from better datastores. We conduct extensive experiments on both bilingual and multilingual translation benchmarks, including WMT17 English $\leftrightarrow$ Chinese, WMT14 English $\leftrightarrow$ German, IWSLT14 German $\leftrightarrow$ English, and IWSLT14 multilingual datasets. Empirical results demonstrate the effectiveness of PRED.
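For context, the standard kNN-MT combination that PRED builds on can be sketched as below: distances to the retrieved datastore entries are turned into a distribution over their stored target tokens and interpolated with the NMT distribution. The temperature and interpolation weight are illustrative values:

```python
import torch

def knn_mt_interpolate(nmt_probs, knn_dists, knn_token_ids, vocab_size,
                       temperature=10.0, lam=0.5):
    """nmt_probs: [V] model distribution; knn_dists: [k] distances of retrieved
    entries; knn_token_ids: [k] LongTensor of the target tokens they store."""
    weights = torch.softmax(-knn_dists / temperature, dim=-1)   # closer entries weigh more
    knn_probs = torch.zeros(vocab_size)
    knn_probs.scatter_add_(0, knn_token_ids, weights)           # aggregate weight per token
    return lam * knn_probs + (1 - lam) * nmt_probs
```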
In multilingual neural machine translation models that fully share parameters across all languages, an artificial language token is commonly used to guide translation into the desired target language. However, recent studies show that prepended language tokens sometimes fail to navigate multilingual NMT models into the right translation direction, especially in zero-shot translation. To mitigate this issue, we propose two methods, language embedding embodiment and language-aware multi-head attention, to learn informative language representations that steer translation into the right direction. The former embodies language embeddings at several critical switching points along the information flow from source to target, aiming to amplify the translation-direction guiding signals. The latter uses a matrix, instead of a vector, to represent a language in continuous space; the matrix is split into multiple heads so as to learn language representations in multiple subspaces. Experimental results on two datasets for massively multilingual neural machine translation show that language-aware multi-head attention benefits both supervised and zero-shot translation and significantly alleviates the off-target translation issue. Further linguistic typology prediction experiments show that the matrix-based language representations learned by our methods are capable of capturing rich linguistic typology features.
The Transformer architecture, built by stacking encoder and decoder layers, has achieved great progress in neural machine translation. However, the vanilla Transformer mainly exploits the top-layer representations, assuming that lower layers provide trivial or redundant information, and thus ignores potentially valuable bottom-layer features. In this work, we propose the Group Transformer model (GTrans), which splits the multi-layer representations of both the encoder and the decoder into different groups and then fuses these group features to generate target words. To corroborate the effectiveness of the proposed method, extensive experiments and analyses are conducted on three bilingual translation benchmarks and two multilingual translation tasks, covering the IWSLT-14, IWSLT-17, LDC, WMT-14 and OPUS-100 benchmarks. Experimental and analytical results demonstrate that our model outperforms its Transformer counterparts with consistent gains. Moreover, it can be successfully scaled up to 60 encoder layers and 36 decoder layers.
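A simplified sketch of the group-and-fuse idea: encoder layer outputs are split into contiguous groups, each group is mean-pooled, and a learned softmax gate mixes the group features into one representation. The pooling and gating here are assumptions for illustration; the actual GTrans fusion is more elaborate:

```python
import torch
import torch.nn as nn

class GroupFusion(nn.Module):
    """Split L layer outputs into G groups and fuse them with learned weights."""
    def __init__(self, num_layers=6, num_groups=3):
        super().__init__()
        assert num_layers % num_groups == 0
        self.num_groups = num_groups
        self.group_size = num_layers // num_groups
        self.gate = nn.Parameter(torch.zeros(num_groups))

    def forward(self, layer_outputs):                        # list of [batch, len, d_model]
        stacked = torch.stack(layer_outputs, dim=0)          # [L, B, T, D]
        groups = stacked.view(self.num_groups, self.group_size,
                              *stacked.shape[1:]).mean(dim=1)   # [G, B, T, D]
        weights = torch.softmax(self.gate, dim=0).view(-1, 1, 1, 1)
        return (weights * groups).sum(dim=0)                 # fused [B, T, D]
```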
How can neural machine translation (NMT) models be effectively adapted to emerging cases without retraining? Despite the great success of neural machine translation, updating deployed models online remains a challenge. Existing non-parametric approaches that retrieve similar examples from a database to guide the translation process are promising, but are prone to overfitting the retrieved examples. In this work, we propose KSTER (Kernel-Smoothed Translation with Example Retrieval), an effective approach to adapt neural machine translation models online. Experiments on domain adaptation and multi-domain machine translation datasets show that, even without expensive retraining, KSTER achieves improvements of 1.1 to 1.5 BLEU points over the best existing online adaptation methods. The code and trained models are released at https://github.com/jiangqn/kster.
Recent work has improved language models (LMs) remarkably by equipping them with a non-parametric memory component. However, most existing approaches only introduce memories at testing time or represent them using a separately trained encoder, resulting in suboptimal training of the language model. In this work, we present TRIME, a novel yet simple training approach designed for training LMs with memory augmentation. Our approach uses a training objective that directly takes in-batch examples as accessible memory. We also present new methods for memory construction and data batching, which are used for adapting to different sets of memories -- local, long-term, and external memory -- at testing time. We evaluate TRIME on multiple language modeling and machine translation benchmarks and show that it is able to achieve significant improvements across all the settings. Concretely, TRIME reduces the perplexity from 18.70 to 15.37 on WIKITEXT-103 by effectively leveraging a large memory set from the training corpus. Compared to standard LM training, TRIME adds negligible computational overhead and is compatible with different neural architectures, making it a versatile solution for training memory-augmented LMs.
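The training objective can be pictured roughly as follows: the gold token's score comes both from the output embedding matrix and from in-batch memory keys whose stored next token equals the gold token, and the loss maximizes their combined probability. Shapes and the temperature are assumptions of this sketch, not TRIME's exact formulation:

```python
import torch
import torch.nn.functional as F

def memory_augmented_loss(query, memory_keys, memory_targets, output_emb, gold, tau=1.0):
    """query: [d]; memory_keys: [M, d]; memory_targets: [M] LongTensor (next tokens
    of the in-batch memories); output_emb: [V, d]; gold: scalar LongTensor."""
    vocab_logits = query @ output_emb.t() / tau               # [V]
    mem_logits = query @ memory_keys.t() / tau                # [M]
    log_probs = F.log_softmax(torch.cat([vocab_logits, mem_logits]), dim=-1)
    positives = torch.cat([
        F.one_hot(gold, num_classes=output_emb.size(0)).bool(),  # the gold vocabulary entry
        memory_targets == gold,                                   # memories predicting the gold token
    ])
    return -torch.logsumexp(log_probs[positives], dim=0)
```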
k-Nearest-Neighbor Machine Translation (kNN-MT) successfully incorporates external corpora by retrieving word-level representations at test time. Generally, kNN-MT borrows an off-the-shelf context representation from the translation task, e.g., the output of the last decoder layer, as the query vector of the retrieval task. In this work, we highlight that coupling the representations of these two tasks is sub-optimal for fine-grained retrieval. To alleviate this, we leverage supervised contrastive learning to learn a distinctive retrieval representation derived from the original context representation. We also propose a fast and effective approach to constructing hard negative samples. Experimental results on five domains show that, compared with vanilla kNN-MT, our approach improves both retrieval accuracy and BLEU scores.
Non-autoregressive (NAR) models can generate sentences with less computation than autoregressive models, but at the cost of generation quality. Previous studies addressed this issue through iterative decoding. This study proposes using nearest neighbors as the initial states of an NAR decoder and editing them iteratively. We present a novel training strategy to learn the edit operations on neighbors so as to improve NAR text generation. Experimental results show that the proposed method (NeighborEdit) achieves higher translation quality (1.69 points higher than the vanilla Transformer) with fewer decoding iterations (less than one tenth) on the JRC-Acquis En-De dataset, which is widely used in machine translation with nearest neighbors. We also confirm the effectiveness of the proposed method on a data-to-text task (WikiBio). In addition, the proposed method outperforms an NAR baseline on the WMT'14 En-De dataset. We also report an analysis of the neighbor examples used in the proposed method.
Directly training a document-to-document (Doc2Doc) neural machine translation (NMT) model via Transformer from scratch, especially on small datasets, usually fails to converge. Our dedicated probing tasks show that 1) both absolute and relative position information is gradually weakened or even vanishes once it reaches the upper encoder layers, and 2) the vanishing of absolute position information in the encoder output causes the training failure of Doc2Doc NMT. To alleviate this problem, we propose a position-aware Transformer (P-Transformer) to enhance both absolute and relative position information in both self-attention and cross-attention. Specifically, we integrate absolute positional information, i.e., position embeddings, into the query-key pairs in both self-attention and cross-attention through a simple yet effective addition operation. Moreover, we also integrate relative position encoding in self-attention. The proposed P-Transformer utilizes sinusoidal position encoding and does not require any task-specific position embedding, segment embedding, or attention mechanism. Through the above methods, we build a Doc2Doc NMT model with P-Transformer, which ingests the source document and completely generates the target document in a sequence-to-sequence (seq2seq) way. In addition, P-Transformer can be applied to seq2seq-based document-to-sentence (Doc2Sent) and sentence-to-sentence (Sent2Sent) translation. Extensive experimental results on Doc2Doc NMT show that P-Transformer significantly outperforms strong baselines on 9 widely-used document-level datasets in 7 language pairs, covering small-, middle-, and large-scale settings, and achieves a new state-of-the-art. Experiments on discourse phenomena show that our Doc2Doc NMT models improve translation quality in terms of both BLEU and discourse coherence. We make our code available on GitHub.
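The core addition operation can be sketched as below: sinusoidal position encodings are added to the queries and keys inside attention (not only to the bottom embeddings), so absolute position information survives into the upper layers. This is a single-head, unbatched illustration under assumed shapes, not the full P-Transformer:

```python
import math
import torch

def sinusoidal_positions(seq_len, d_model):
    """Standard sinusoidal absolute position encodings, [seq_len, d_model] (even d_model)."""
    pos = torch.arange(seq_len, dtype=torch.float).unsqueeze(1)
    div = torch.exp(torch.arange(0, d_model, 2, dtype=torch.float)
                    * (-math.log(10000.0) / d_model))
    pe = torch.zeros(seq_len, d_model)
    pe[:, 0::2] = torch.sin(pos * div)
    pe[:, 1::2] = torch.cos(pos * div)
    return pe

def position_aware_attention(q, k, v):
    """q: [Tq, d]; k, v: [Tk, d]; position encodings are added to q and k."""
    d = q.size(-1)
    q_pos = q + sinusoidal_positions(q.size(-2), d)
    k_pos = k + sinusoidal_positions(k.size(-2), d)
    scores = q_pos @ k_pos.transpose(-2, -1) / math.sqrt(d)
    return torch.softmax(scores, dim=-1) @ v
```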
End-to-end Speech Translation (E2E ST) aims to translate source speech into the target language without generating an intermediate transcript. However, existing approaches for E2E ST degrade considerably when only limited ST data are available. We observe that an ST model's performance strongly correlates with the similarity between its speech and transcript embeddings. In this paper, we propose Word-Aligned COntrastive learning (WACO), a novel method for few-shot speech-to-text translation. Our key idea is to bridge word-level representations of the two modalities via contrastive learning. We evaluate WACO and other methods on the MuST-C dataset, a widely used ST benchmark. Our experiments demonstrate that WACO outperforms the best baseline methods by 0.7-8.5 BLEU points with only 1-hour parallel data. Code is available at https://anonymous.4open.science/r/WACO .
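The word-level contrastive bridging can be sketched as an InfoNCE loss over aligned word representations from the two modalities; pooling speech frames into word representations is assumed to happen upstream, and the temperature is an illustrative value:

```python
import torch
import torch.nn.functional as F

def word_aligned_contrastive(speech_word_reprs, text_word_reprs, tau=0.07):
    """Both inputs are [N, d]: the i-th speech-word representation (e.g., mean-pooled
    over its aligned frames) should match the i-th transcript-word representation."""
    s = F.normalize(speech_word_reprs, dim=-1)
    t = F.normalize(text_word_reprs, dim=-1)
    logits = s @ t.t() / tau                     # [N, N] similarity matrix
    labels = torch.arange(len(s))                # positives lie on the diagonal
    return F.cross_entropy(logits, labels)
```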
Despite the recent progress in the field of cross-modal retrieval, less research has focused on low-resource languages due to the lack of manually annotated datasets. In this paper, we propose a noise-robust cross-lingual cross-modal retrieval method for low-resource languages. To this end, we use machine translation (MT) to construct pseudo-parallel sentence pairs for low-resource languages. However, since MT is not perfect, it tends to introduce noise during translation, rendering textual embeddings corrupted and thereby degrading retrieval performance. To alleviate this, we introduce a multi-view self-distillation method to learn noise-robust target-language representations, which employs a cross-attention module to generate soft pseudo-targets that provide supervision from both a similarity-based view and a feature-based view. In addition, inspired by back-translation in unsupervised MT, we minimize the semantic discrepancy between the original sentence and its back-translation to further improve the noise robustness of the text encoder. Extensive experiments are conducted on three video-text and image-text cross-modal retrieval benchmarks across different languages, and the results demonstrate that our method significantly improves overall performance without using extra human-labeled data. In addition, equipped with a pre-trained visual encoder from a recent vision-and-language pre-training framework, i.e., CLIP, our model achieves a significant performance gain, showing that our method is compatible with popular pre-trained models. Code and data are available at https://github.com/huiguanlab/nrccr.
We study the problem of online learning from human feedback in human-in-the-loop machine translation, in which human translators revise machine-generated translations and the corrected translations are then used to improve the neural machine translation (NMT) system. However, previous methods require online model updating or additional translation memory networks to achieve high-quality performance, making them inflexible and inefficient in practice. In this paper, we propose a novel non-parametric online learning method that does not change the model structure. This approach introduces two k-nearest-neighbor (kNN) modules: one module memorizes the human feedback, i.e., the correct sentences provided by human translators, while the other module adaptively balances the use of the historical human feedback and the original NMT model. Experiments conducted on the EMEA and JRC-Acquis benchmarks demonstrate that our proposed method obtains substantial improvements in translation accuracy and achieves better adaptation performance with less human correction effort.
Neural machine translation (NMT) has attracted wide attention due to its impressive quality. Beyond quality, controlling translation styles is also an important demand for many languages. Previous related studies mainly focus on controlling formality and gain some improvements. However, they still face two challenges. The first is the evaluation limitation: style contains abundant information including lexis, syntax, etc., but only formality is well studied. The second is the heavy reliance on iterative fine-tuning when new styles are required. Correspondingly, this paper contributes in terms of both benchmark and approach. First, we revisit this task and propose a multiway stylized machine translation (MSMT) benchmark, which includes multiple categories of styles in four language directions to push the boundary of this task. Second, we propose a method named style activation prompt (StyleAP), which retrieves prompts from a stylized monolingual corpus and needs no extra fine-tuning. Experiments show that StyleAP effectively controls the style of translation and achieves remarkable performance. All of our data and code are released at https://github.com/IvanWang0730/StyleAP.
We present a method for introducing a text encoder into pre-trained end-to-end speech translation systems. It enhances the model's ability to adapt one modality (i.e., source-language speech) to another (i.e., source-language text). Thus, the speech translation model can learn from both unlabeled and labeled data, especially when the source-language text data is abundant. Beyond this, we present a denoising method to build a robust text encoder that can deal with both normal and noisy text data. Our system sets a new state of the art on the MuST-C En-De, En-Fr, and LibriSpeech En-Fr tasks.
Code summarization helps developers comprehend programs and reduces the time spent inferring program functionality during software maintenance. Recent efforts resort to deep learning techniques, such as sequence-to-sequence models, to generate accurate code summaries, among which Transformer-based approaches have achieved promising performance. However, effectively integrating code structure information into the Transformer remains under-explored in this task domain. In this paper, we propose a novel approach named SG-Trans to incorporate code structural properties into the Transformer. Specifically, we inject local symbolic information (e.g., code tokens and statements) and global syntactic structure (e.g., the data-flow graph) into the self-attention module of the Transformer. To further capture the hierarchical characteristics of code, the local information and the global structure are designed to be distributed across the attention heads of the lower and higher layers of the Transformer, respectively. Extensive evaluation shows that SG-Trans outperforms the state-of-the-art approaches. Compared with the best-performing baseline, SG-Trans still improves by 1.4% and 2.0% in terms of METEOR score, a metric widely used to measure generation quality, on the two benchmark datasets, respectively.
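One way to picture the structure injection is a head whose attention is restricted by a structural relation (e.g., tokens in the same statement, or nodes connected in the data-flow graph); the mask construction and the single-head form below are assumptions of this sketch, not SG-Trans's full design:

```python
import math
import torch

def structure_biased_attention(q, k, v, structure_mask, penalty=-1e9):
    """q, k, v: [T, d]; structure_mask: [T, T] bool, True where two code tokens
    are structurally related. Unrelated pairs receive a large negative score,
    so this head only attends along the given code structure."""
    d = q.size(-1)
    scores = q @ k.transpose(-2, -1) / math.sqrt(d)
    scores = scores.masked_fill(~structure_mask, penalty)
    return torch.softmax(scores, dim=-1) @ v
```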
Solving math word problems is a task that requires analysing the relations among quantities and an accurate understanding of contextual natural language information. Recent studies show that current models rely on shallow heuristics to predict solutions and can be easily misled by small textual perturbations. To address this problem, we propose a Textual Enhanced Contrastive Learning framework, which enforces the models to distinguish semantically similar examples that hold different mathematical logic. We adopt a self-supervised strategy to enrich examples with subtle textual variance by textual reordering or problem re-construction. We then retrieve the hardest-to-differentiate samples from both the equation and textual perspectives and guide the model to learn their representations. Experimental results show that our method achieves state-of-the-art results on both widely used benchmark datasets and exquisitely designed challenge datasets in English and Chinese. Our code and data are available at https://github.com/yiyunya/Textual_CL_MWP.
This paper presents a new data augmentation method for neural machine translation that can enforce stronger semantic consistency both within and across languages. Our method is based on a Conditional Masked Language Model (CMLM), which is bidirectional and can be conditioned on both left and right context as well as on labels. We demonstrate that the CMLM is a good technique for generating context-dependent word distributions. In particular, we show that the CMLM is able to enforce semantic consistency by conditioning on both the source and the target during substitution. In addition, to enhance diversity, we incorporate the idea of soft word substitution into data augmentation, which replaces a word with a probabilistic distribution over the vocabulary. Experiments on four translation datasets of different scales show that the overall solution results in more realistic data augmentation and better translation quality. Compared with strong recent works, our approach consistently achieves the best performance and improves over the baselines by 1.90 BLEU points.
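Soft word substitution can be sketched as replacing the hard embedding at chosen positions with the expected embedding under the CMLM's predictive distribution; the names and interface below are assumptions for illustration:

```python
import torch

def soft_substitute(embedding_matrix, cmlm_probs, token_ids, sub_positions):
    """embedding_matrix: [V, d]; cmlm_probs: [T, V] CMLM distributions per position;
    token_ids: [T] original tokens; sub_positions: indices of positions to replace."""
    embeds = embedding_matrix[token_ids]           # [T, d] hard embeddings
    soft = cmlm_probs @ embedding_matrix           # [T, d] probability-weighted embeddings
    embeds[sub_positions] = soft[sub_positions]    # soft replacement at chosen positions
    return embeds
```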