Solving math word problems requires analyzing the relations among quantities and an accurate understanding of contextual natural language information. Recent studies show that current models rely on shallow heuristics to predict solutions and can be easily misled by small textual perturbations. To address this problem, we propose a Textual Enhanced Contrastive Learning framework, which enforces the models to distinguish examples that are semantically similar yet hold different mathematical logic. We adopt a self-supervised strategy to enrich examples with subtle textual variance through textual reordering or problem re-construction. We then retrieve the hardest-to-differentiate samples from both the equation and textual perspectives and guide the model to learn their representations. Experimental results show that our method achieves state-of-the-art on both widely used benchmark datasets and exquisitely designed challenge datasets in English and Chinese.\footnote{Our code and data are available at \url{https://github.com/yiyunya/Textual_CL_MWP}.}
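The abstract does not spell out the training objective, but a plausible reading of "distinguish similar text with different logic" is an InfoNCE-style loss whose negatives are the retrieved lookalike problems. The sketch below is a minimal illustration under that assumption; the encoder, retrieval step, and temperature are placeholders, not the paper's exact design.

```python
# A minimal sketch of the implied objective: an InfoNCE loss whose negatives
# are retrieved lookalike problems that hold different equations. All names
# and hyperparameters are illustrative assumptions.
import torch
import torch.nn.functional as F

def hard_negative_contrastive_loss(anchor, positive, hard_negatives, tau=0.1):
    """anchor/positive: (dim,); hard_negatives: (k, dim) retrieved problems
    that read almost the same but hold different mathematical logic."""
    anchor = F.normalize(anchor, dim=-1)
    candidates = F.normalize(
        torch.cat([positive.unsqueeze(0), hard_negatives]), dim=-1)
    logits = candidates @ anchor / tau          # (k+1,) similarities
    target = torch.tensor(0)                    # the positive sits at index 0
    return F.cross_entropy(logits.unsqueeze(0), target.unsqueeze(0))

loss = hard_negative_contrastive_loss(torch.randn(768), torch.randn(768),
                                      torch.randn(4, 768))
```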
To solve math word problems, human students employ various reasoning logics that reach different possible equation solutions. However, the mainstream sequence-to-sequence approach of automatic solvers aims to decode a fixed solution equation supervised by human annotation. In this paper, we propose a controlled equation generation solver that leverages a set of control codes to guide the model to consider certain reasoning logics and decode the corresponding equation expressions transformed from the human reference. Empirical results show that our method universally improves performance on single-unknown (Math23K) and multiple-unknown (DRAW1K, HMWP) benchmarks, with gains of up to 13.2% accuracy on the challenging multiple-unknown datasets.
Automatically solving math word problems is a key task in natural language processing. Recent models have reached their performance bottleneck and require higher-quality training data. We propose a novel data augmentation method that reverses the mathematical logic of math word problems to produce new, high-quality problems, introducing new knowledge points that benefit the learning of mathematical reasoning logic. We apply the augmented data to two SOTA math word problem solving models and compare our results with a strong data augmentation baseline. Experimental results demonstrate the effectiveness of our method. We release our code and data at https://github.com/yiyunya/roda.
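As a toy illustration of what reversing the mathematical logic of a problem can look like, the sketch below solves a simple forward problem and then re-poses it with a known quantity as the new unknown. The templates and the use of sympy are illustrative assumptions, not the paper's pipeline.

```python
# Toy logic reversal: turn "a + b = x, find x" into "given the total, find a"
# by solving the original equation for one of its known quantities.
# The problem templates are invented for illustration only.
import sympy

def reverse_problem(a_val, b_val):
    a, b, x = sympy.symbols("a b x")
    answer = sympy.solve(sympy.Eq(a + b, x).subs({a: a_val, b: b_val}), x)[0]
    # Reversed variant: the total is now given and the first quantity is asked.
    reversed_answer = sympy.solve(sympy.Eq(a + b_val, answer), a)[0]
    forward = f"Tom has {a_val} apples and buys {b_val} more. How many now?"
    backward = (f"Tom ends up with {answer} apples after buying {b_val}. "
                f"How many did he start with?")
    return (forward, answer), (backward, reversed_answer)

print(reverse_problem(3, 4))
```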
Math word problem (MWP) solving is an important task in question answering that requires human-like reasoning ability. Analogical reasoning has long been used in mathematical education, as it enables students to apply common relational structures of mathematical situations to solve new problems. In this paper, we propose to build a novel MWP solver by leveraging analogical MWPs, which advances the solver's generalization ability across different kinds of MWPs. The key idea, named analogy identification, is to associate analogical MWP pairs in a latent space, i.e., to encode an MWP close to its analogical MWPs while moving it away from non-analogical ones. Moreover, a solution discriminator is integrated into the MWP solver to enhance the association between the representations of MWPs and their true solutions. The evaluation results verify that our proposed analogical learning strategy promotes the performance of MWP-BERT on Math23k over the state-of-the-art model Generate2Rank, with 5 times fewer parameters in the encoder. We also find that our model has a stronger generalization ability in solving difficult MWPs, owing to analogical learning from easy MWPs.
Neural MWP solvers struggle to handle small local variances. In MWP tasks, some local changes preserve the original semantics, while others may alter the underlying logic entirely. Existing datasets for the MWP task contain limited samples, which are critical for neural models to learn to disambiguate different kinds of variances in problems and solve them correctly. In this paper, we propose a set of novel data augmentation approaches that supplement existing datasets with data augmented by different kinds of local variances, and help improve the generalization ability of current neural models. The new samples are generated by knowledge-guided entity replacement and logic-guided problem reorganization. The augmentation methods are designed to keep the new data consistent with their labels. Experimental results demonstrate the necessity and effectiveness of our methods.
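A minimal sketch of the knowledge-guided entity replacement idea, under the assumption that a small category lexicon stands in for a real knowledge base: entities are swapped within their category so the surface form changes while the equation label stays valid.

```python
# Swap an entity for another of the same category so the surface text changes
# while the equation (the label) is untouched. The lexicon is a stand-in
# for a real knowledge base.
import random

CATEGORY_LEXICON = {"apples": ["pears", "oranges", "marbles"],
                    "Tom": ["Lily", "Sam", "Maria"]}

def replace_entities(problem: str, rng: random.Random) -> str:
    tokens = problem.split()
    return " ".join(rng.choice(CATEGORY_LEXICON[t]) if t in CATEGORY_LEXICON
                    else t for t in tokens)

rng = random.Random(0)
aug = replace_entities("Tom has 3 apples and buys 4 more .", rng)
# The equation label 3 + 4 = x still holds for the augmented problem.
print(aug)
```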
Self-supervised learning methods such as contrastive learning have received considerable attention in natural language processing. Contrastive learning builds a classification task over augmented pairs of training data to train an encoder with good representation ability. However, constructing learning pairs for contrastive learning is harder in NLP tasks: previous works generate word-level changes to form pairs, but small transformations can cause notable changes in sentence meaning because of the discrete and sparse nature of natural language. In this paper, adversarial training is used to generate challenging and harder-to-learn adversarial examples in the embedding space of NLP as learning pairs. Contrastive learning improves the generalization ability of adversarial training because the contrastive loss makes the sample distribution more uniform; in turn, adversarial training improves the robustness of contrastive learning. Two novel frameworks, supervised contrastive adversarial learning (SCAL) and unsupervised SCAL (USCAL), are proposed to generate learning pairs by leveraging adversarial training for contrastive learning. The label-based loss of supervised tasks is exploited to produce adversarial examples, while unsupervised tasks bring a contrastive loss. To validate the effectiveness of the proposed frameworks, we apply them to Transformer-based models for natural language understanding, sentence semantic textual similarity, and adversarial learning tasks. Experimental results on GLUE benchmark tasks show that our fine-tuned supervised method outperforms BERT$_{base}$ by more than 1.75%. We also evaluate our unsupervised method on semantic textual similarity (STS) tasks, where it obtains 77.29% with BERT$_{base}$. The robustness of our approach yields state-of-the-art results on multiple adversarial datasets for NLI tasks.
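A rough sketch of the unsupervised recipe described above, under the assumption that an FGSM-style step in embedding space produces the adversarial view and that the clean/perturbed pair is then trained contrastively; the toy encoder, epsilon, and temperature are illustrative.

```python
# Perturb the input embeddings along the gradient of a contrastive loss
# (FGSM-style) and use the clean/perturbed views as a learning pair.
import torch
import torch.nn as nn
import torch.nn.functional as F

def info_nce(z1, z2, tau=0.05):
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.T / tau
    return F.cross_entropy(logits, torch.arange(z1.size(0)))

encoder = nn.Sequential(nn.Linear(768, 768), nn.Tanh())  # stand-in Transformer
emb = torch.randn(8, 768, requires_grad=True)

# 1) Gradient of the contrastive loss w.r.t. the embeddings.
loss = info_nce(encoder(emb), encoder(emb).detach())
grad, = torch.autograd.grad(loss, emb)

# 2) An FGSM step yields a hard view; train on the clean/adversarial pair.
emb_adv = (emb + 1e-3 * grad.sign()).detach()
pair_loss = info_nce(encoder(emb), encoder(emb_adv))
pair_loss.backward()
```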
Recently, contrastive learning has attracted increasing interest in neural text generation as a new solution to alleviate the exposure bias problem. It introduces a sequence-level training signal that is crucial to generation tasks, which always rely on auto-regressive decoding. However, previous methods using contrastive learning in neural text generation usually lead to inferior performance. In this paper, we analyse the underlying reasons and propose a new Contrastive Neural Text generation framework, CoNT. CoNT addresses the bottlenecks that prevent contrastive learning from being widely adopted in generation tasks from three aspects: the construction of contrastive examples, the choice of the contrastive loss, and the strategy in decoding. We validate CoNT on five generation tasks with ten benchmarks, including machine translation, summarization, code comment generation, data-to-text generation, and commonsense generation. Experimental results show that CoNT clearly outperforms the conventional training framework on all ten benchmarks by a convincing margin. In particular, CoNT surpasses the previous most competitive contrastive learning method for text generation by 1.50 BLEU on machine translation and 1.77 ROUGE-1 on summarization. It achieves a new state-of-the-art on summarization, code comment generation (without external data), and data-to-text generation.
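CoNT's exact loss is not given here, but a sequence-level contrastive signal of this family can be sketched as a pairwise margin loss over the model's own candidates, ranked by their similarity to the reference; the margin schedule and scoring function below are illustrative assumptions, not CoNT's precise formulation.

```python
# Candidates from the model's own decoding are ranked by quality against the
# reference, and a pairwise margin loss pushes representation similarity to
# respect that ranking.
import torch
import torch.nn.functional as F

def ranked_pairwise_margin_loss(src_emb, cand_embs, cand_quality, margin=0.01):
    """src_emb: (dim,); cand_embs: (k, dim); cand_quality: (k,), e.g. BLEU
    against the reference. Higher-quality candidates should score higher."""
    order = torch.argsort(cand_quality, descending=True)
    sims = F.cosine_similarity(cand_embs[order], src_emb.unsqueeze(0), dim=-1)
    loss = src_emb.new_zeros(())
    for i in range(len(sims)):
        for j in range(i + 1, len(sims)):
            # Margin grows with rank distance: distant pairs separate more.
            loss = loss + F.relu(sims[j] - sims[i] + margin * (j - i))
    return loss / max(len(sims) * (len(sims) - 1) / 2, 1)

loss = ranked_pairwise_margin_loss(torch.randn(256), torch.randn(6, 256),
                                   torch.rand(6))
```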
As an effective strategy, data augmentation (DA) alleviates data scarcity scenarios where deep learning techniques may fail. It is widely applied in computer vision and was then introduced to natural language processing, where it achieves improvements on many tasks. One of the main focuses of DA methods is to improve the diversity of training data, thereby helping the model generalize better to unseen test data. In this survey, we frame DA methods into three categories based on the diversity of the augmented data: paraphrasing, noising, and sampling. Our paper analyzes DA methods in detail according to these categories. Furthermore, we present their applications in NLP tasks as well as the remaining challenges.
Retrieval-augmented neural machine translation (NMT) models have been successful in many translation scenarios. Different from previous works that make use of mutually similar but redundant translation memories (TMs), we propose a new retrieval-augmented NMT that models contrastively retrieved translation memories, which are holistically similar to the source sentence while individually contrastive to each other, providing maximal information gain across three phases. First, in the TM retrieval phase, we adopt a contrastive retrieval algorithm to avoid redundant and uninformative similar translation pieces. Second, in the memory encoding stage, given a set of TMs, we propose a novel Hierarchical Group Attention module to gather both the local context of each TM and the global context of the whole TM set. Finally, in the training phase, a multi-TM contrastive learning objective is introduced to learn the salient features of each TM with respect to the target sentence. Experimental results show that our framework obtains improvements over strong baselines on the benchmark datasets.
This paper presents SimCSE, a simple contrastive learning framework that greatly advances state-of-the-art sentence embeddings. We first describe an unsupervised approach, which takes an input sentence and predicts itself in a contrastive objective, with only standard dropout used as noise. This simple method works surprisingly well, performing on par with previous supervised counterparts. We find that dropout acts as minimal data augmentation, and removing it leads to a representation collapse. Then, we propose a supervised approach, which incorporates annotated pairs from natural language inference datasets into our contrastive learning framework by using "entailment" pairs as positives and "contradiction" pairs as hard negatives. We evaluate SimCSE on standard semantic textual similarity (STS) tasks, and our unsupervised and supervised models using BERT$_{base}$ achieve an average of 76.3% and 81.6% Spearman's correlation respectively, a 4.2% and 2.2% improvement compared to the previous best results. We also show, both theoretically and empirically, that the contrastive learning objective regularizes pre-trained embeddings' anisotropic space to be more uniform, and that it better aligns positive pairs when supervised signals are available. We randomly sample $10^6$ sentences from English Wikipedia and fine-tune BERT$_{base}$ with learning rate = 3e-5, N = 64; in all our experiments, no STS training sets are used.
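The unsupervised objective is simple enough to sketch directly: the same batch is encoded twice with dropout active, and the two views form positive pairs in an in-batch InfoNCE loss. The toy encoder below is a stand-in for BERT; the temperature of 0.05 is a common choice for this setup.

```python
# Minimal sketch of unsupervised SimCSE: two dropout views of the same batch
# as positives, in-batch negatives, InfoNCE loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Stand-in for a dropout-bearing sentence encoder such as BERT."""
    def __init__(self, dim=768, p=0.1):
        super().__init__()
        self.proj = nn.Linear(dim, dim)
        self.drop = nn.Dropout(p)
    def forward(self, x):                  # x: (batch, dim) input features
        return self.drop(torch.tanh(self.proj(x)))

def simcse_loss(encoder, x, tau=0.05):
    z1 = F.normalize(encoder(x), dim=-1)   # first dropout view
    z2 = F.normalize(encoder(x), dim=-1)   # second view (different mask)
    sim = z1 @ z2.T / tau                  # (batch, batch) similarities
    labels = torch.arange(x.size(0))       # positives sit on the diagonal
    return F.cross_entropy(sim, labels)

encoder = Encoder()
loss = simcse_loss(encoder, torch.randn(8, 768))
loss.backward()
```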
We present Relational Sentence Embedding (RSE), a new paradigm to further discover the potential of sentence embeddings. Prior work mainly models the similarity between sentences based on their embedding distance. Because of the complex semantic meanings conveyed, sentence pairs can have various relation types, including but not limited to entailment, paraphrasing, and question answering. This poses challenges for existing embedding methods to capture such relational information. We handle the problem by learning associated relational embeddings. Specifically, a relation-wise translation operation is applied to the source sentence to infer the corresponding target sentence with a pre-trained Siamese-based encoder. Fine-grained relational similarity scores can be computed from the learned embeddings. We benchmark our method on 19 datasets covering a wide range of tasks, including semantic textual similarity, transfer, and domain-specific tasks. Experimental results show that our method is effective and flexible in modeling sentence relations and outperforms a series of state-of-the-art sentence embedding methods. Code is available at https://github.com/BinWang28/RSE.
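The relation-wise translation operation can be read as a TransE-like step in sentence space: add a learned relation vector to the source embedding and score the result against the target. The sketch below makes that reading concrete; the module names and scoring choice are assumptions.

```python
# Relation-wise translation scorer: translated = source + relation vector,
# scored against the target embedding by cosine similarity.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RelationalScorer(nn.Module):
    def __init__(self, num_relations, dim=768):
        super().__init__()
        self.rel = nn.Embedding(num_relations, dim)  # one vector per relation

    def forward(self, src_emb, tgt_emb, relation_id):
        translated = src_emb + self.rel(relation_id)
        return F.cosine_similarity(translated, tgt_emb, dim=-1)

scorer = RelationalScorer(num_relations=3)  # e.g. entailment/paraphrase/QA
score = scorer(torch.randn(4, 768), torch.randn(4, 768),
               torch.tensor([0, 1, 2, 0]))
```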
Contrastive learning has become a new paradigm for unsupervised sentence embeddings. Previous studies focus on instance-wise contrastive learning, attempting to construct positive pairs with textual data augmentation. In this paper, we propose a novel Contrastive learning method with Prompt-derived Virtual semantic Prototypes (ConPVP). Specifically, with the help of prompts, we construct a virtual semantic prototype for each instance and derive negative prototypes by using the negative form of the prompts. Using a prototypical contrastive loss, we enforce the anchor sentence embedding to be close to its corresponding semantic prototype, and far from the negative prototypes as well as the prototypes of other sentences. Extensive experimental results on semantic textual similarity, transfer, and clustering tasks demonstrate the effectiveness of our proposed model compared to strong baselines. Code is available at https://github.com/lemon0830/promptCSE.
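A hedged sketch of a prototypical contrastive loss of the kind described: each anchor is pulled toward its own prompt-derived prototype and pushed away from its negative prototype and from other instances' prototypes. How the prompts produce the prototype embeddings is abstracted away here.

```python
# Prototypical contrastive loss with per-instance positive and negative
# prototypes (assumed precomputed from positive / negated prompts).
import torch
import torch.nn.functional as F

def prototypical_contrastive_loss(anchors, prototypes, neg_prototypes,
                                  tau=0.05):
    """All inputs: (batch, dim)."""
    a = F.normalize(anchors, dim=-1)
    p = F.normalize(prototypes, dim=-1)
    n = F.normalize(neg_prototypes, dim=-1)
    # Columns: all positive prototypes (diagonal is the match), then the
    # instance's own negative prototype.
    logits = torch.cat([a @ p.T, (a * n).sum(-1, keepdim=True)], dim=1) / tau
    labels = torch.arange(a.size(0))   # index of each anchor's own prototype
    return F.cross_entropy(logits, labels)

loss = prototypical_contrastive_loss(torch.randn(8, 128), torch.randn(8, 128),
                                     torch.randn(8, 128))
```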
Although contrastive learning has greatly improved sentence-embedding representations, it is still limited by the size of existing sentence datasets. In this paper, we present TransAug (Translate as Augmentation), the first exploration of using translated sentence pairs as data augmentation for text, and introduce a two-stage paradigm to advance state-of-the-art sentence embeddings. Instead of adopting an encoder trained in another language setting, we first distill a Chinese encoder from a SimCSE encoder (pre-trained on English) so that their embeddings lie close in the semantic space, which can be regarded as implicit data augmentation. We then update only the English encoder via cross-lingual contrastive learning while keeping the distilled Chinese encoder frozen. Our approach achieves a new state-of-the-art on standard semantic textual similarity (STS), outperforming both SimCSE and Sentence-T5, and obtains the best performance on the corresponding tracks of transfer tasks evaluated by SentEval.
Mathematical reasoning is a fundamental aspect of human intelligence and is applicable in various fields, including science, engineering, finance, and everyday life. The development of artificial intelligence (AI) systems capable of solving math problems and proving theorems has garnered significant interest in the fields of machine learning and natural language processing. For example, mathematics serves as a testbed for aspects of reasoning that are challenging for powerful deep learning models, driving new algorithmic and modeling advances. On the other hand, recent advances in large-scale neural language models have opened up new benchmarks and opportunities to use deep learning for mathematical reasoning. In this survey paper, we review the key tasks, datasets, and methods at the intersection of mathematical reasoning and deep learning over the past decade. We also evaluate existing benchmarks and methods, and discuss future research directions in this domain.
Event extraction (EE) is the task of identifying event mentions of interest in text. Conventional efforts mainly focus on the supervised setting. However, these supervised models cannot generalize to event types outside the pre-defined ontology. To fill this gap, many efforts have been devoted to the zero-shot EE problem. This paper follows the trend of modeling event-type semantics but moves one step further. We argue that using the static embedding of the event type name might not be enough, because a single word can be ambiguous, and we need a sentence to define the type semantics accurately. To model the definition semantics, we use two separate transformer models to project the contextualized event mentions and the corresponding definitions into the same embedding space, and then minimize their embedding distance via contrastive learning. On top of that, we also propose a warming phase to help the model learn the minor differences between similar definitions. We name our approach Zero-shot Event extraction with Definition (ZED). Experiments on the MAVEN dataset show that our model significantly outperforms all previous zero-shot EE methods, with fast inference speed due to the disjoint design. Further experiments show that ZED can be easily applied to the few-shot setting when annotation is available, and consistently outperforms baseline supervised methods.
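The core matching step reads as a twin-encoder contrastive setup, sketched below under the assumption that mentions and definitions are projected into a shared space and softmax-normalized similarities are trained against the gold type; ZED's exact heads and warming phase are not shown.

```python
# Twin encoders project mentions and event-type definitions into a shared
# space; a contrastive loss pulls each mention toward its gold definition.
import torch
import torch.nn as nn
import torch.nn.functional as F

mention_encoder = nn.Linear(768, 256)     # stand-in for a Transformer
definition_encoder = nn.Linear(768, 256)  # separate encoder for definitions

def zed_loss(mention_feats, definition_feats, type_ids, tau=0.07):
    """mention_feats: (n, 768); definition_feats: (t, 768), one per type;
    type_ids: (n,) gold type index for each mention."""
    m = F.normalize(mention_encoder(mention_feats), dim=-1)
    d = F.normalize(definition_encoder(definition_feats), dim=-1)
    return F.cross_entropy(m @ d.T / tau, type_ids)

loss = zed_loss(torch.randn(5, 768), torch.randn(10, 768),
                torch.randint(0, 10, (5,)))
loss.backward()
```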
Contrastive learning has emerged as a powerful representation learning method that facilitates various downstream tasks, especially when supervised data is limited. How to construct effective contrastive samples through data augmentation is key to its success. Unlike vision tasks, data augmentation methods for contrastive learning have not been well studied for language tasks. In this paper, we propose a novel approach to construct contrastive samples for language tasks using text summarization. We use these samples for supervised contrastive learning to obtain better text representations, which greatly benefits text classification tasks with limited annotations. To further improve the method, in addition to the cross-entropy loss, we mix samples from different classes and add an extra regularization term, named MixSum. Experiments on real-world text classification datasets (Amazon-5, Yelp-5, AG News, and IMDb) demonstrate the effectiveness of the proposed contrastive learning framework with summarization-based data augmentation and MixSum regularization.
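The MixSum regularizer appears to be a mixup-style term added to the cross-entropy loss; here is a minimal sketch under that assumption, with the mixing site and lambda distribution chosen for illustration.

```python
# Mixup-style regularizer: mix representations of samples from different
# classes and train the classifier on correspondingly mixed labels.
import torch
import torch.nn as nn
import torch.nn.functional as F

classifier = nn.Linear(256, 5)

def mixsum_regularizer(feats, labels, num_classes=5, alpha=0.2):
    lam = torch.distributions.Beta(alpha, alpha).sample()
    perm = torch.randperm(feats.size(0))
    mixed = lam * feats + (1 - lam) * feats[perm]       # mix representations
    onehot = F.one_hot(labels, num_classes).float()
    mixed_target = lam * onehot + (1 - lam) * onehot[perm]
    log_probs = F.log_softmax(classifier(mixed), dim=-1)
    return -(mixed_target * log_probs).sum(-1).mean()   # soft cross-entropy

feats, labels = torch.randn(16, 256), torch.randint(0, 5, (16,))
loss = (F.cross_entropy(classifier(feats), labels)
        + mixsum_regularizer(feats, labels))
```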
End-to-end (E2E) task-oriented dialogue (ToD) systems are prone to fall into the so-called 'likelihood trap', resulting in generated responses which are dull, repetitive, and often inconsistent with dialogue history. Comparing ranked lists of multiple generated responses against the 'gold response' (from training data) reveals a wide diversity in response quality, with many good responses placed lower in the ranked list. The main challenge, addressed in this work, is then how to reach beyond greedily generated system responses, that is, how to obtain and select such high-quality responses from the list of overgenerated responses at inference without availability of the gold response. To this end, we propose a simple yet effective reranking method which aims to select high-quality items from the lists of responses initially overgenerated by the system. The idea is to use any sequence-level (similarity) scoring function to divide the semantic space of responses into high-scoring versus low-scoring partitions. At training, the high-scoring partition comprises all generated responses whose similarity to the gold response is higher than the similarity of the greedy response to the gold response. At inference, the aim is to estimate the probability that each overgenerated response belongs to the high-scoring partition, given only previous dialogue history. We validate the robustness and versatility of our proposed method on the standard MultiWOZ dataset: our methods improve a state-of-the-art E2E ToD system by 2.4 BLEU, 3.2 ROUGE, and 2.8 METEOR scores, achieving new peak results. Additional experiments on the BiTOD dataset and human evaluation further ascertain the generalisability and effectiveness of the proposed framework.
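The partition rule itself is concrete enough to sketch: any overgenerated response whose similarity to the gold response exceeds the greedy response's similarity is labelled high-scoring. The token-F1 scorer below is a placeholder; the paper allows any sequence-level scoring function.

```python
# Training-time partition rule: candidates that beat the greedy response's
# similarity to the gold response fall into the high-scoring partition.
from collections import Counter

def token_f1(hyp: str, ref: str) -> float:
    h, r = Counter(hyp.split()), Counter(ref.split())
    overlap = sum((h & r).values())
    if overlap == 0:
        return 0.0
    p, rec = overlap / sum(h.values()), overlap / sum(r.values())
    return 2 * p * rec / (p + rec)

def partition_labels(gold, greedy, candidates, score=token_f1):
    threshold = score(greedy, gold)     # similarity of the greedy response
    return [int(score(c, gold) > threshold) for c in candidates]

labels = partition_labels("the hotel is booked", "a hotel is booked",
                          ["the hotel is booked now", "sorry no idea"])
```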
Retrieval-based dialogue response selection aims to find the correct response from a candidate set given a multi-turn context. Methods based on pre-trained language models (PLMs) have yielded significant improvements on this task. The sequence representation plays a key role in matching the dialogue context against the response. However, we observe that different context-response pairs sharing the same context always have greater similarity in the sequence representations computed by PLMs, which makes it hard to distinguish positive responses from negative ones. Motivated by this, we propose a novel Fine-Grained Contrastive (FGC) learning method for the PLM-based response selection task. This FGC learning strategy helps PLMs generate more distinguishable matching representations for each dialogue at a fine-grained level, further improving the prediction of the positive response. Empirical studies on two benchmark datasets show that the proposed FGC learning method generally improves the performance of existing PLM-based matching models.
Compositional generalization refers to a model's ability to generalize to newly composed input data based on the data components observed during training. It has triggered a series of compositional generalization analyses on different tasks, since generalization is an important aspect of language and problem-solving skills. However, the analogous discussion for math word problems (MWPs) is limited. In this manuscript, we study compositional generalization in MWP solving. Specifically, we first introduce a data splitting method to create compositional splits from existing MWP datasets. Meanwhile, we synthesize data to isolate the effect of compositions. To improve compositional generalization in MWP solving, we propose an iterative data augmentation method that includes diverse compositional variations in the training data and can work together with MWP methods. During evaluation, we examine a set of methods and find that all of them suffer severe performance loss on the evaluated datasets. We also find that our data augmentation method can significantly improve the compositional generalization of general MWP methods. Code is available at https://github.com/demoleiwang/cgmwp.
Data augmentation, the artificial creation of training data for machine learning through transformations, is a widely studied research field across machine learning disciplines. While it is useful for increasing a model's generalization capabilities, it can also address many other challenges and problems, from overcoming a limited amount of training data, to regularizing the training objective, to limiting the amount of data used to protect privacy. Based on a precise description of the goals and applications of data augmentation and a taxonomy of existing works, this survey covers data augmentation methods for text classification and aims to provide a concise and comprehensive overview for researchers and practitioners. Derived from the taxonomy, we divide more than 100 methods into 12 different groupings and give state-of-the-art references expounding which methods are highly promising by relating them to each other. Finally, research perspectives that may constitute a basis for future work are provided.