After a period of decline, interest in word alignment is rising again, driven by its usefulness in areas such as typological research, cross-lingual annotation projection, and machine translation. Alignment algorithms typically use only bitext and do not exploit the fact that many parallel corpora are multiparallel. Here, we compute high-quality word alignments between multiple language pairs by considering all language pairs together. First, we create a multiparallel word alignment graph, joining all bilingual word alignment pairs in one graph. Next, we use graph neural networks (GNNs) to exploit the graph structure. Our GNN approach (i) utilizes information about the meaning, position, and language of the input words, (ii) incorporates information from multiple parallel sentences, (iii) adds and removes edges from the initial alignments, and (iv) yields a prediction model that can generalize beyond the training sentences. We show that community detection provides valuable information for multiparallel word alignment. Our method outperforms previous work on three word alignment datasets and on a downstream task.
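To make the graph construction concrete, here is a minimal sketch (a toy example of our own, not the authors' code) that joins bilingual alignment edges into one multiparallel graph with networkx and runs modularity-based community detection over it; each recovered community groups word positions that should be mutual translations:

```python
# Nodes are (language, token position) pairs; edges come from bilingual
# word aligners. All sentences and alignments below are toy examples.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Bilingual alignments for one multiparallel sentence, as (src, tgt) node pairs.
bilingual_alignments = [
    (("en", 0), ("de", 0)), (("en", 1), ("de", 1)),
    (("en", 0), ("fr", 0)), (("en", 1), ("fr", 2)),
    (("de", 0), ("fr", 0)), (("de", 1), ("fr", 2)),
]

graph = nx.Graph()
graph.add_edges_from(bilingual_alignments)

# Each community is a set of word positions that should translate each other.
for community in greedy_modularity_communities(graph):
    print(sorted(community))
```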
The word alignment task, despite its prominence in the era of statistical machine translation (SMT), is niche and under-explored today. In this two-part tutorial, we argue for the continued relevance of word alignment. The first part provides a historical background to word alignment as a core component of the traditional SMT pipeline. We zero in on GIZA++, an unsupervised, statistical word aligner with surprising longevity. Jumping forward to the era of neural machine translation (NMT), we show how insights from word alignment inspired the attention mechanism fundamental to present-day NMT. The second part shifts to a survey approach. We cover neural word aligners, showing the slow but steady progress towards surpassing GIZA++ performance. Finally, we cover the present-day applications of word alignment, from cross-lingual annotation projection, to improving translation.
Multilingual language models (MLLMs) such as mBERT, XLM, XLM-R, etc. have emerged as a viable option for bringing the power of pretraining to a large number of languages. Given their success in zero-shot transfer learning, there has emerged a large body of work in (i) building bigger MLLMs covering a large number of languages, (ii) creating exhaustive benchmarks covering a wider variety of tasks and languages for evaluating MLLMs, (iii) analyzing the performance of MLLMs on monolingual, zero-shot cross-lingual, and bilingual tasks, (iv) understanding the universal language patterns (if any) learned by MLLMs, and (v) augmenting the (often) limited capacity of MLLMs to improve their performance on seen or even unseen languages. In this survey, we review the existing literature covering the above broad areas of research pertaining to MLLMs. Based on our survey, we recommend some promising directions for future research.
We present Naamapadam, the largest publicly available Named Entity Recognition (NER) dataset for the 11 major Indian languages from two language families. For 9 out of the 11 languages, it contains more than 400k sentences per language, annotated with a total of at least 100k entities from three standard entity categories (Person, Location and Organization). The training dataset has been automatically created from the Samanantar parallel corpus by projecting automatically tagged entities from an English sentence to the corresponding Indian language sentence. We also create manually annotated testsets for 8 languages containing approximately 1000 sentences per language. We demonstrate the utility of the obtained dataset on existing testsets and the Naamapadam-test data for 8 Indic languages. We also release IndicNER, a multilingual mBERT model fine-tuned on the Naamapadam training set. IndicNER achieves the best F1 on the Naamapadam-test set compared to an mBERT model fine-tuned on existing datasets. IndicNER achieves an F1 score of more than 80 for 7 out of 11 Indic languages. The dataset and models are available under open-source licenses at https://ai4bharat.iitm.ac.in/naamapadam.
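To illustrate the annotation projection step described above, here is a toy sketch (our own simplification, not the authors' pipeline): given word alignments between an English sentence and an Indic sentence, the entity tags of aligned English tokens are copied onto the target tokens and re-encoded as BIO tags in target word order:

```python
# Toy annotation projection: sentences and alignments are illustrative,
# not drawn from the Samanantar corpus.
def project_tags(src_tags, alignments, tgt_len):
    """alignments: list of (src_index, tgt_index) word-alignment pairs."""
    tgt_tags = ["O"] * tgt_len
    for src_i, tgt_i in alignments:
        if src_tags[src_i] != "O":
            # Strip B-/I- prefixes; re-derive them from target word order.
            tgt_tags[tgt_i] = src_tags[src_i].split("-")[-1]
    # Convert entity labels back to BIO based on contiguity.
    bio, prev = [], "O"
    for tag in tgt_tags:
        bio.append("O" if tag == "O" else ("I-" + tag if tag == prev else "B-" + tag))
        prev = tag
    return bio

src_tags = ["B-PER", "I-PER", "O", "B-LOC"]   # e.g. "Rahul Gandhi visited Delhi"
alignments = [(0, 1), (1, 2), (3, 0)]         # toy English-to-Hindi alignment
print(project_tags(src_tags, alignments, tgt_len=4))
# -> ['B-LOC', 'B-PER', 'I-PER', 'O']
```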
State-of-the-art natural language processing systems rely on supervision in the form of annotated data to learn competent models. These models are generally trained on data in a single language (usually English), and cannot be directly used beyond that language. Since collecting data in every language is not realistic, there has been a growing interest in crosslingual language understanding (XLU) and low-resource cross-language transfer. In this work, we construct an evaluation set for XLU by extending the development and test sets of the Multi-Genre Natural Language Inference Corpus (MultiNLI) to 15 languages, including low-resource languages such as Swahili and Urdu. We hope that our dataset, dubbed XNLI, will catalyze research in cross-lingual sentence understanding by providing an informative standard evaluation task. In addition, we provide several baselines for multilingual sentence understanding, including two based on machine translation systems, and two that use parallel data to train aligned multilingual bag-of-words and LSTM encoders. We find that XNLI represents a practical and challenging evaluation suite, and that directly translating the test data yields the best performance among available baselines.
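For readers who want to poke at the benchmark, a short sketch of loading one XNLI language with the Hugging Face datasets library; this assumes the hub's `xnli` dataset and its per-language configs, and is not part of the paper itself:

```python
from datasets import load_dataset

# Swahili test split (one of XNLI's 15 languages).
xnli_sw = load_dataset("xnli", "sw", split="test")
example = xnli_sw[0]
# Each example has a premise, a hypothesis, and a 3-way label
# (0 = entailment, 1 = neutral, 2 = contradiction).
print(example["premise"], example["hypothesis"], example["label"])
```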
Frame shift is a cross-linguistic phenomenon in translation in which corresponding pieces of linguistic material evoke different frames. The ability to predict frame shifts enables the automatic creation of multilingual FrameNets through annotation projection. Here, we propose the frame shift prediction task and demonstrate that graph attention networks, combined with auxiliary training, can learn cross-lingual frame-to-frame correspondences and predict frame shifts.
Data-hungry deep neural networks have established themselves as the standard for many NLP tasks, including traditional sequence tagging. Despite their state-of-the-art performance on high-resource languages, they still lag behind their statistical counterparts in low-resource scenarios. One methodology to counter this problem is text augmentation, i.e., generating new synthetic training data points from existing data. Although NLP has recently witnessed a load of textual augmentation techniques, the field still lacks a systematic performance analysis on a diverse set of languages and sequence tagging tasks. To fill this gap, we investigate three categories of text augmentation methodologies, which perform changes on the syntax (e.g., cropping sub-sentences), token (e.g., random word insertion), and character (e.g., character swapping) levels. We systematically compare them on part-of-speech tagging, dependency parsing, and semantic role labeling for a diverse set of language families, using various models, including architectures that rely on pretrained multilingual contextualized language models such as mBERT. Augmentation most significantly improves dependency parsing, followed by part-of-speech tagging and semantic role labeling. We find the experimented techniques to be effective on morphologically rich languages in general, rather than on analytic languages such as Vietnamese. Our results suggest that augmentation techniques can further improve over strong baselines based on mBERT. We identify the character-level methods as the most consistent performers, while synonym replacement and syntactic augmenters provide inconsistent improvements. Finally, we discuss that the results most heavily depend on the task, language pair, and model type.
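As a concrete illustration of the three augmentation levels named above, here is a minimal sketch on toy token sequences (our own simplification: the paper's syntax-level cropping operates on dependency subtrees, while the `crop` below just keeps a contiguous span):

```python
import random

def crop(tokens, max_len):
    """Syntax-level stand-in: keep a random contiguous sub-sentence."""
    start = random.randrange(0, max(1, len(tokens) - max_len + 1))
    return tokens[start:start + max_len]

def random_insertion(tokens, vocab):
    """Token level: insert a random vocabulary word at a random position."""
    out = tokens[:]
    out.insert(random.randrange(len(out) + 1), random.choice(vocab))
    return out

def char_swap(token):
    """Character level: swap two adjacent characters inside one token."""
    if len(token) < 2:
        return token
    i = random.randrange(len(token) - 1)
    chars = list(token)
    chars[i], chars[i + 1] = chars[i + 1], chars[i]
    return "".join(chars)

tokens = ["the", "quick", "brown", "fox", "jumps"]
print(crop(tokens, 3), random_insertion(tokens, ["lazy"]), char_swap("quick"))
```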
In the absence of readily available labeled data for a given task and language, annotation projection has been proposed as one of the possible strategies to automatically generate annotated data, which may then be used to train supervised systems. Annotation projection has often been formulated as the task of projecting, on parallel corpora, some labels from a source into a target language. In this paper we present T-Projection, a new approach for annotation projection that leverages large pretrained text2text language models and state-of-the-art machine translation technology. T-Projection decomposes the label projection task into two subtasks: (i) the candidate generation step, in which a set of projection candidates is generated using a multilingual T5 model, and (ii) the candidate selection step, in which the candidates are ranked based on translation probabilities. We evaluate our method in three downstream tasks and five different languages. Our results show that T-Projection improves the average F1 score of previous methods by more than 8 points.
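To illustrate the candidate selection step, here is a hedged sketch that ranks projection candidates by the (length-normalized) log-probability an off-the-shelf MT model assigns to them given the source span; the checkpoint name and spans are placeholders, and this is a simplified reading rather than the authors' implementation:

```python
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_name = "Helsinki-NLP/opus-mt-en-es"  # placeholder MT model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

def translation_score(source, candidate):
    """Negative mean token NLL of `candidate` as a translation of `source`."""
    inputs = tokenizer(source, return_tensors="pt")
    labels = tokenizer(text_target=candidate, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(**inputs, labels=labels).loss
    return -loss.item()

source_span = "New York"
candidates = ["Nueva York", "York", "la Nueva York"]  # toy projection candidates
print(max(candidates, key=lambda c: translation_score(source_span, c)))
```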
I train models for the neural machine translation task using the Hunglish2 corpus. The main contribution of this work is the evaluation of different data augmentation methods during the training of NMT models. I propose five different augmentation methods that are structure-aware, meaning that instead of randomly selecting words for blanking or replacement, the dependency tree of the sentence is used as the basis for augmentation. I begin with a detailed literature review on neural networks, sequence modeling, neural machine translation, dependency parsing, and data augmentation. After a detailed exploratory data analysis and preprocessing of the Hunglish2 corpus, I conduct experiments with the proposed data augmentation techniques. The best model for Hungarian-English achieves a BLEU score of 33.9, while the best model for English-Hungarian achieves a BLEU score of 28.6.
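A minimal sketch of the structure-aware blanking idea, assuming a toy dependency parse given as head indices (-1 for the root); instead of masking a random word, a whole subtree is masked so the blanked span is syntactically coherent (our own illustration, not the thesis code):

```python
def subtree(heads, root):
    """Collect the indices of all tokens dominated by `root`."""
    nodes, frontier = {root}, [root]
    while frontier:
        head = frontier.pop()
        for i, h in enumerate(heads):
            if h == head and i not in nodes:
                nodes.add(i)
                frontier.append(i)
    return nodes

def blank_subtree(tokens, heads, root, blank="<BLANK>"):
    masked = subtree(heads, root)
    return [blank if i in masked else tok for i, tok in enumerate(tokens)]

tokens = ["the", "dog", "chased", "the", "red", "ball"]
heads  = [1, 2, -1, 5, 5, 2]   # toy parse: "chased" is the root
print(blank_subtree(tokens, heads, root=5))
# -> ['the', 'dog', 'chased', '<BLANK>', '<BLANK>', '<BLANK>']
```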
The rapid advancement of AI technology has made text generation tools like GPT-3 and ChatGPT increasingly accessible, scalable, and effective. This can pose a serious threat to the credibility of various forms of media, including scientific literature and news sources, if these technologies are used for plagiarism. Despite the development of automated methods for paraphrase identification, detecting this type of plagiarism remains a challenge due to the disparate nature of the datasets on which these methods are trained. In this study, we review traditional and current approaches to paraphrase identification and propose a refined typology of paraphrases. We also investigate how this typology is represented in popular datasets and how under-representation of certain types of paraphrases impacts detection capabilities. Finally, we outline new directions for future research and datasets in the pursuit of more effective paraphrase detection using AI.
In this paper, we attempt to build a connection between the two schools by introducing syntactic inductive biases into deep learning models. We propose two families of inductive biases, one for constituency structure and another for dependency structure. The constituency inductive bias encourages deep learning models to use different units (or neurons) to separately process long-term and short-term information. This separation provides deep learning models with a way to build latent hierarchical representations from sequential inputs: a higher-level representation is composed of, and can be decomposed into, a series of lower-level representations. For example, without knowing the ground-truth structure, our proposed model learns to process logical expressions by composing representations of variables and operators according to their syntactic structure. The dependency inductive bias, on the other hand, encourages models to find latent relations between entities in the input sequence. For natural language, the latent relations are usually modeled as a directed dependency graph, in which a word has exactly one parent node and zero or more child nodes. After applying this constraint to a Transformer-like model, we find that the model is capable of inducing directed graphs that are close to human expert annotations, and that it also outperforms the standard Transformer model on different tasks. We believe these experimental results demonstrate an interesting alternative for the future development of deep learning models.
Graph representation learning is a fast-growing field, where one of the main objectives is to generate meaningful representations of graphs in lower-dimensional spaces. The learned embeddings have been successfully applied to perform various prediction tasks, such as link prediction, node classification, clustering, and visualization. The collective effort of the graph learning community has delivered hundreds of methods, but no single method excels under all evaluation metrics, such as prediction accuracy, running time, scalability, etc. This survey aims to evaluate all major classes of graph embedding methods by considering algorithmic variations, parameter selections, scalability, hardware and software platforms, downstream ML tasks, and diverse datasets. We organize graph embedding techniques using a taxonomy that includes manual feature engineering, matrix factorization, shallow neural networks, and deep graph convolutional networks. We evaluate these classes of algorithms on node classification, link prediction, clustering, and visualization tasks using widely used benchmark graphs. We design our experiments on top of the PyTorch Geometric and DGL libraries and run them on different multicore CPU and GPU platforms. We rigorously scrutinize the performance of embedding methods under various performance metrics and summarize the results. Thus, this paper can serve as a comparative guide to help users select the methods that are most suitable for their tasks.
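As a compact sketch of one cell of such a benchmark, here is a two-layer GCN for node classification on Cora with PyTorch Geometric (one of the two libraries named above); the hyperparameters are illustrative defaults, not the survey's settings:

```python
import torch
import torch.nn.functional as F
from torch_geometric.datasets import Planetoid
from torch_geometric.nn import GCNConv

dataset = Planetoid(root="data/Cora", name="Cora")
data = dataset[0]

class GCN(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = GCNConv(dataset.num_features, 16)
        self.conv2 = GCNConv(16, dataset.num_classes)

    def forward(self, x, edge_index):
        x = F.relu(self.conv1(x, edge_index))
        return self.conv2(x, edge_index)

model = GCN()
optimizer = torch.optim.Adam(model.parameters(), lr=0.01, weight_decay=5e-4)
for epoch in range(200):
    optimizer.zero_grad()
    out = model(data.x, data.edge_index)
    loss = F.cross_entropy(out[data.train_mask], data.y[data.train_mask])
    loss.backward()
    optimizer.step()

pred = model(data.x, data.edge_index).argmax(dim=1)
acc = (pred[data.test_mask] == data.y[data.test_mask]).float().mean()
print(f"test accuracy: {acc:.3f}")
```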
Recent years have witnessed the rapid development of concept map generation techniques, as they provide a concise synthesis of knowledge from free text. Traditional unsupervised methods do not generate task-oriented concept maps, while deep generative models require large amounts of training data. In this work, we present GT-D2G (Graph Translation-based Document To Graph), an automatic concept map generation framework that leverages generalized NLP pipelines to derive semantically rich initial graphs and translates them into more concise structures under the weak supervision of document labels. The quality and interpretability of these concept maps are validated through human evaluation on three real-world corpora, and their utility in downstream tasks is further demonstrated in controlled experiments with scarce document labels.
The machine translation mechanism translates texts automatically between different natural languages, and Neural Machine Translation (NMT) has gained attention for its rational context analysis and fluent translation accuracy. However, processing low-resource languages that lack relevant training attributes like supervised data is a current challenge for Natural Language Processing (NLP). We incorporate a technique known as Active Learning with the NMT toolkit Joey NMT to reach sufficient accuracy and robust predictions for low-resource language translation. With active learning, a semi-supervised machine learning strategy, the training algorithm determines which unlabeled data would be the most beneficial for obtaining labels using selected query techniques. We implement two model-driven acquisition functions for selecting the samples to be validated. This work uses four transformer-based NMT systems for translating English to Hindi: a baseline model (BM), a fully trained model (FTM), an active learning least-confidence-based model (ALLCM), and an active learning margin-sampling-based model (ALMSM). The Bilingual Evaluation Understudy (BLEU) metric has been used to evaluate system results. The BLEU scores of the BM, FTM, ALLCM, and ALMSM systems are 16.26, 22.56, 24.54, and 24.20, respectively. The findings in this paper demonstrate that active learning techniques help the model to converge early and improve the overall quality of the translation system.
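A minimal sketch of the two acquisition functions mentioned above, applied to per-sample model probabilities with toy numbers (a simplified classification view; for NMT, the probabilities would come from the model's output distributions over candidate translations):

```python
import numpy as np

def least_confidence(probs):
    """probs: (n_samples, n_classes). Lower max-prob = more informative."""
    return np.argsort(probs.max(axis=1))        # most uncertain first

def margin_sampling(probs):
    """Smallest gap between the two best predictions = most informative."""
    sorted_p = np.sort(probs, axis=1)
    margin = sorted_p[:, -1] - sorted_p[:, -2]  # best minus second best
    return np.argsort(margin)                   # smallest margin first

probs = np.array([[0.9, 0.05, 0.05],
                  [0.4, 0.35, 0.25],
                  [0.5, 0.3, 0.2]])
print(least_confidence(probs)[:1], margin_sampling(probs)[:1])  # both pick row 1
```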
The grammatical analysis of texts in any human language typically involves a number of basic processing tasks, such as tokenization, morphological tagging, and dependency parsing. State-of-the-art systems can achieve high accuracy on these tasks for languages with large datasets, but yield poor results for languages such as Tagalog that have little to no annotated data. To address this issue for the Tagalog language, we investigate the use of auxiliary data sources for creating task-specific models in the absence of annotated Tagalog data. We also explore the use of word embeddings and data augmentation to improve performance when only a small amount of annotated Tagalog data is available. We show that these zero-shot and few-shot approaches yield substantial improvements on the grammatical analysis of both in-domain and out-of-domain Tagalog text compared to state-of-the-art supervised baselines.
Dense word vectors or "word embeddings", which encode the semantic properties of words, have now become integral to NLP tasks like machine translation (MT), question answering (QA), word sense disambiguation (WSD), and information retrieval (IR). In this paper, we use various existing approaches to create multiple word embeddings for 14 Indian languages. We place these embeddings for all these languages, namely Assamese, Bengali, Gujarati, Hindi, Kannada, Konkani, Malayalam, Marathi, Nepali, Odiya, Punjabi, Sanskrit, Tamil, and Telugu, in a single repository. Relatively newer approaches that emphasize catering to context (BERT, ELMo, etc.) have shown significant improvements but require a large amount of resources to produce usable models. We release pre-trained embeddings generated using both contextual and non-contextual approaches. We also use MUSE and XLM to train cross-lingual embeddings for all the above languages. To show the efficacy of our embeddings, we evaluate our embedding models on XPOS, UPOS, and NER tasks for all these languages. We release a total of 436 models using 8 different approaches. We hope they are useful for resource-constrained Indian language NLP. The title of this paper refers to the famous novel "A Passage to India" by E.M. Forster, first published in 1924.
Graph Neural Networks (GNNs) have been widely applied to various fields for learning over graph-structured data. They have shown significant improvements over traditional heuristic methods in various tasks such as node classification and graph classification. However, since GNNs heavily rely on smoothed node features rather than graph structure, they often show poor performance compared to simple heuristic methods in link prediction, where structural information, e.g., overlapped neighborhoods, degrees, and shortest paths, is crucial. To address this limitation, we propose Neighborhood Overlap-aware Graph Neural Networks (Neo-GNNs) that learn useful structural features from the adjacency matrix and estimate overlapped neighborhoods for link prediction. Our Neo-GNNs generalize neighborhood overlap-based heuristic methods and handle overlapped multi-hop neighborhoods. Our extensive experiments on Open Graph Benchmark (OGB) datasets demonstrate that Neo-GNNs consistently achieve state-of-the-art performance in link prediction. Our code is publicly available at https://github.com/seongjunyun/neo_gnns.
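As a small illustration of the neighborhood-overlap heuristics that Neo-GNNs generalize, here are common-neighbor and Jaccard scores for candidate links, computed on a toy graph with networkx (our own example, not the paper's code):

```python
import networkx as nx

g = nx.Graph([(0, 1), (0, 2), (1, 2), (1, 3), (2, 3), (3, 4)])

def common_neighbors(g, u, v):
    """Number of nodes adjacent to both endpoints of a candidate link."""
    return len(set(g[u]) & set(g[v]))

def jaccard(g, u, v):
    """Overlap normalized by the size of the combined neighborhood."""
    union = set(g[u]) | set(g[v])
    return common_neighbors(g, u, v) / len(union) if union else 0.0

# Score two candidate links: (0, 3) shares two neighbors, (0, 4) shares none.
for u, v in [(0, 3), (0, 4)]:
    print((u, v), common_neighbors(g, u, v), round(jaccard(g, u, v), 2))
```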
Variational Graph Autoencoders (VGAEs) are powerful models for unsupervised learning of node representations from graph data. In this work, we systematically analyze modeling node attributes in VGAEs and show that attribute decoding is important for node representation learning. We further propose a new learning model, interpretable NOde Representation with Attribute Decoding (NORAD). The model encodes node representations in an interpretable approach: node representations capture community structures in the graph and the relationship between communities and node attributes. We further propose a rectifying procedure to refine node representations of isolated nodes, improving the quality of these nodes' representations. Our empirical results demonstrate the advantage of the proposed model when learning graph data in an interpretable approach.
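As background for the model family discussed above, here is a hedged sketch of a plain VGAE in PyTorch Geometric (the baseline NORAD builds on, not NORAD itself); the encoder maps node features to latent means and log-standard-deviations with two GCN heads:

```python
import torch
from torch_geometric.nn import GCNConv, VGAE

class Encoder(torch.nn.Module):
    def __init__(self, in_dim, hidden_dim, latent_dim):
        super().__init__()
        self.conv = GCNConv(in_dim, hidden_dim)
        self.conv_mu = GCNConv(hidden_dim, latent_dim)
        self.conv_logstd = GCNConv(hidden_dim, latent_dim)

    def forward(self, x, edge_index):
        h = self.conv(x, edge_index).relu()
        return self.conv_mu(h, edge_index), self.conv_logstd(h, edge_index)

x = torch.randn(5, 8)                                  # toy node attributes
edge_index = torch.tensor([[0, 1, 2, 3], [1, 2, 3, 4]])

model = VGAE(Encoder(8, 16, 4))
z = model.encode(x, edge_index)
# Edge reconstruction loss on observed edges plus the KL regularizer.
loss = model.recon_loss(z, edge_index) + (1 / x.size(0)) * model.kl_loss()
print(float(loss))
```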
Information extraction methods have proven to be effective at extracting triples from structured or unstructured data. The organization of such triples, in the form of (head entity, relation, tail entity), is called a knowledge graph (KG). Most current knowledge graphs are incomplete, so in order to use KGs in downstream tasks, it is desirable to predict the missing links. Recently, different knowledge graph representation approaches have been proposed that embed entities and relations into low-dimensional vector spaces, aiming to predict new triples based on previously observed ones. Based on whether triples are processed independently or not, we divide the task of knowledge graph completion into conventional and graph neural network representation learning, and we discuss both in more detail. In conventional approaches, each triple is processed independently, while in GNN-based approaches, triples also take their local neighborhood into account.
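To make the embedding idea concrete, here is a minimal sketch using TransE as a classic example (our choice for illustration, not necessarily the survey's): a triple (head, relation, tail) is scored by how close head + relation lands to tail in the embedding space. The embeddings below are random, standing in for learned ones:

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 8
# Random vectors for illustration; in practice these are learned by training.
entities = {name: rng.normal(size=dim) for name in ["paris", "france", "berlin"]}
relations = {"capital_of": rng.normal(size=dim)}

def transe_score(head, relation, tail):
    """Higher (less negative) score = more plausible triple."""
    return -np.linalg.norm(entities[head] + relations[relation] - entities[tail])

# Link prediction: rank candidate tails for ("paris", "capital_of", ?).
candidates = ["france", "berlin"]
print(sorted(candidates, key=lambda t: transe_score("paris", "capital_of", t),
             reverse=True))
```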
AMR-to-text is one of the key techniques in the NLP community, aiming to generate sentences from Abstract Meaning Representation (AMR) graphs. Since AMR was proposed in 2013, research on AMR-to-text has become increasingly prevalent: as a high-level semantic description of natural language, AMR has unique advantages and has become an important branch of structured data. In this paper, we provide a brief survey of AMR-to-text. First, we introduce the current state of this technique and point out its difficulties. Second, based on the methods used in previous studies, we roughly divide them into five categories according to their respective mechanisms, the most recent being those based on pre-trained language models (PLMs). In particular, we elaborate on the neural network-based methods and present the latest progress in AMR-to-text, covering AMR reconstruction, decoder optimization, and so on. Furthermore, we present the benchmarks and evaluation methods for AMR-to-text. Finally, we provide a summary of current techniques and an outlook for future research.