In this paper, we provide an overview of the SV-Ident shared task, organized as part of the Third Workshop on Scholarly Document Processing (SDP) at COLING 2022. In the shared task, participants were provided with a vocabulary of variables and asked to identify which variables were mentioned in individual sentences of full-text scholarly documents. Two teams made a total of 9 submissions to the shared-task leaderboard. Although none of the teams improved on the baseline system, we still draw insights from their submissions. In addition, we provide a detailed evaluation. The data and baselines for our shared task are freely available at https://github.com/vadis-project/sv-inend
translated by Google Translate
Dense retrievers have made significant strides in obtaining state-of-the-art results on text retrieval and open-domain question answering (ODQA). Yet most of these achievements were made possible with the help of large annotated datasets, and unsupervised learning for dense retrieval models remains an open problem. In this work, we explore two categories of methods for creating pseudo query-document pairs, named query extraction (QExt) and transferred query generation (TQGen), to augment the retriever training in an annotation-free and scalable manner. Specifically, QExt extracts pseudo queries from document structures or by selecting salient random spans, and TQGen utilizes generation models trained for other NLP tasks (e.g., summarization) to produce pseudo queries. Extensive experiments show that dense retrievers trained with individual augmentation methods can perform comparably well with multiple strong baselines, and combining them leads to further improvements, achieving state-of-the-art performance of unsupervised dense retrieval on both BEIR and ODQA datasets.
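The QExt idea can be illustrated with a minimal sketch. The function below is a hypothetical illustration (not the paper's implementation): it derives pseudo queries from a raw document using a structure heuristic (the leading sentence) plus randomly sampled token spans, and pairs each with the document.

```python
import random

def extract_pseudo_queries(document, num_random_spans=2, span_len=6, seed=0):
    """Create pseudo query-document training pairs from one document:
    the leading sentence (a document-structure heuristic) plus a few
    randomly sampled contiguous token spans."""
    rng = random.Random(seed)
    queries = []
    # Structure-based query: the first sentence often summarizes the document.
    first_sentence = document.split(". ")[0].strip()
    if first_sentence:
        queries.append(first_sentence)
    # Random-span queries: sample contiguous token windows as pseudo queries.
    tokens = document.split()
    for _ in range(num_random_spans):
        if len(tokens) <= span_len:
            break
        start = rng.randrange(len(tokens) - span_len)
        queries.append(" ".join(tokens[start:start + span_len]))
    return [(q, document) for q in queries]
```

Each returned pair can then be used as a positive example for contrastive retriever training, with other documents in the batch serving as negatives.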
Parsing natural language questions into executable logical forms is a useful and interpretable way to perform question answering on structured data such as knowledge bases (KB) or databases (DB). However, existing approaches to semantic parsing cannot adapt to both modalities, as they suffer from the exponential growth of the logical form candidates and can hardly generalize to unseen data. In this work, we propose Uni-Parser, a unified semantic parser for question answering (QA) on both KB and DB. We introduce the primitive (relation and entity in KB, and table name, column name and cell value in DB) as an essential element in our framework. The number of primitives grows linearly with the number of retrieved relations in KB and DB, preventing us from dealing with exponential logical form candidates. We leverage the generator to predict final logical forms by altering and composing top-ranked primitives with different operations (e.g., select, where, count). With the search space sufficiently pruned by a contrastive primitive ranker, the generator is empowered to capture the composition of primitives, enhancing its generalization ability. We achieve competitive results on multiple KB and DB QA benchmarks more efficiently, especially in the compositional and zero-shot settings.
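To make the rank-then-compose idea concrete, here is a hypothetical toy sketch (not Uni-Parser's actual generator): candidate logical forms are built by composing only the top-ranked column and value primitives with a couple of operations, so the candidate count grows with the pruned primitive count rather than exponentially in the full schema.

```python
def compose_logical_forms(columns, values, table="t", top_k=2):
    """Compose candidate logical forms from top-ranked primitives.
    Only the top_k columns/values survive pruning, so the number of
    candidates is O(top_k^2) instead of exponential in the schema size."""
    columns, values = columns[:top_k], values[:top_k]
    forms = []
    for col in columns:
        forms.append(f"SELECT {col} FROM {table}")  # select operation
        for v in values:
            # where + count composition over a (column, value) primitive pair
            forms.append(f"SELECT COUNT(*) FROM {table} WHERE {col} = '{v}'")
    return forms
```

A learned generator would then score and emit one of these compositions instead of enumerating the full space.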
Timely and effective response to humanitarian crises requires quick and accurate analysis of large amounts of text data - a process that can highly benefit from expert-assisted NLP systems trained on validated and annotated data in the humanitarian response domain. To enable the creation of such NLP systems, we introduce and release HumSet, a novel and rich multilingual dataset of humanitarian response documents annotated by experts in the humanitarian response community. The dataset provides documents in three languages (English, French, Spanish) and covers a variety of humanitarian crises from 2018 to 2021 across the globe. For each document, HumSet provides selected snippets (entries) as well as classes assigned to each entry, annotated using common humanitarian information analysis frameworks. HumSet also provides novel and challenging entry extraction and multi-label entry classification tasks. In this paper, we take a first step towards approaching these tasks and conduct a set of experiments on pre-trained language models (PLMs) to establish strong baselines for future research in this domain. The dataset is available at https://blog.thedeep.io/humset/.
We propose a) a Language Agnostic end-to-end Speech Translation model (LAST), and b) a data augmentation strategy to increase code-switching (CS) performance. With increasing globalization, multiple languages are increasingly used interchangeably during fluent speech. Such CS complicates traditional speech recognition and translation, as we must recognize which language was spoken first and then apply a language-dependent recognizer and subsequent translation component to generate the desired target language output. Such a pipeline introduces latency and errors. In this paper, we eliminate the need for that, by treating speech recognition and translation as one unified end-to-end speech translation problem. By training LAST with both input languages, we decode speech into one target language, regardless of the input language. LAST delivers comparable recognition and speech translation accuracy in monolingual usage, while reducing latency and error rate considerably when CS is observed.
Correspondence search is an important step in rigid point-cloud registration algorithms. Most methods maintain a single correspondence at each step and gradually remove incorrect correspondences. However, establishing one-to-one correspondences is very difficult, especially when matching two point clouds that share many similar local features. This paper proposes an optimization method that retains all possible correspondences for each keypoint when matching a partial point cloud to a complete point cloud. These uncertain correspondences are then gradually updated with the estimated rigid transformation by considering the matching cost. In addition, we propose a new point-feature descriptor that measures the similarity between local point-cloud regions. Extensive experiments show that our method outperforms state-of-the-art (SOTA) methods even when matching different objects of the same category. Notably, our method also outperforms SOTA methods when registering real-world noisy depth images to template shapes.
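The "retain all possible correspondences" idea can be sketched in a few lines. This is a hypothetical illustration, not the paper's method: instead of committing each source keypoint to its single nearest neighbor, every target point is kept as a candidate with a soft weight (a softmax over negative distances), and the weights could be re-sharpened as the estimated transformation improves.

```python
import math

def soft_correspondences(src_pts, tgt_pts, temperature=0.1):
    """For each source keypoint, keep *all* candidate target points with
    soft weights (softmax over negative distances) instead of committing
    to a single one-to-one match. Lower temperature -> sharper weights."""
    corr = []
    for p in src_pts:
        costs = [math.dist(p, q) for q in tgt_pts]
        exps = [math.exp(-c / temperature) for c in costs]
        z = sum(exps)
        corr.append([w / z for w in exps])
    return corr
```

In an iterative registration loop, one would recompute these weights after each transformation update, letting ambiguous matches resolve gradually rather than being pruned up front.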
Video frame interpolation (VFI) is a fundamental vision task that aims to synthesize several frames between two consecutive original video frames. Most algorithms aim to accomplish VFI using only the keyframes, which is an ill-posed problem since the keyframes usually do not yield accurate information about the trajectories of objects in the scene. Event-based cameras, on the other hand, provide more precise information between the keyframes of a video. Some recent state-of-the-art event-based methods approach this problem by utilizing event data for better optical-flow estimation, interpolating video frames by warping. Nonetheless, these methods suffer heavily from ghosting effects. On the other hand, some kernel-based VFI methods that use only frames as input have shown that deformable convolutions, when backed up with transformers, can be a reliable way of dealing with long-range dependencies. We propose event-based video frame interpolation as a lightweight kernel-based method (E-VFIA). E-VFIA fuses event information with standard video frames via deformable convolutions to generate high-quality interpolated frames. The proposed method represents events with high temporal resolution and uses a multi-head self-attention mechanism to better encode event-based information, while being less vulnerable to blurring and ghosting artifacts, thereby producing crisper frames. Simulation results show that the proposed technique outperforms current state-of-the-art (frame- and event-based) methods with a significantly smaller model size.
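The claim that events are "represented with high temporal resolution" usually means binning asynchronous events into a temporal grid between the two keyframes. The sketch below is a hypothetical single-pixel toy (real representations are per-pixel voxel grids), showing how polarity events between timestamps t0 and t1 are accumulated into fine temporal bins.

```python
def events_to_voxel_grid(events, num_bins, t0, t1):
    """Bin asynchronous events (timestamp, polarity) into a temporal grid
    spanning [t0, t1), preserving much finer timing information than the
    two keyframes alone. Toy 1-pixel version of an event voxel grid."""
    grid = [0.0] * num_bins
    span = (t1 - t0) / num_bins
    for t, polarity in events:
        if t0 <= t < t1:
            grid[int((t - t0) / span)] += polarity
    return grid
```

An interpolation network can then consume such a grid alongside the keyframes, so motion between the keyframes is observed rather than guessed.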
Logo retrieval is a challenging problem since the definition of similarity is more subjective compared to image retrieval tasks, and sets of known similarities are very scarce. To tackle this challenge, in this paper we propose a simple but effective segment-based augmentation strategy that introduces artificially similar logos for training deep networks for logo retrieval. In this novel augmentation strategy, we first find segments in a logo and then apply transformations such as rotation, scaling, and color change on the segments, unlike conventional image-level augmentation strategies. Moreover, we evaluate whether the recently introduced ranking-based loss function Smooth-AP is a better approach for learning similarity for logo retrieval. On the large-scale METU Trademark Dataset, we show that (i) the segment-based augmentation strategy improves retrieval performance compared to the baseline model and image-level augmentation strategies, and (ii) Smooth-AP indeed performs better than conventional losses for logo retrieval.
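A minimal sketch of segment-level augmentation, under the assumption that segments have already been extracted (here represented abstractly as point sets with a color, rather than pixel masks); the function and data layout are hypothetical, not the paper's code. Each segment is independently rotated, rescaled, and color-jittered, producing an "artificially similar" logo.

```python
import math
import random

def augment_segments(segments, seed=0):
    """Produce an artificially similar logo by transforming individual
    segments (rotation, scaling, colour jitter) rather than the whole image.
    A segment is a dict: {'points': [(x, y), ...], 'color': (r, g, b)}."""
    rng = random.Random(seed)
    out = []
    for seg in segments:
        angle = rng.uniform(-math.pi / 6, math.pi / 6)   # rotate up to +/-30 deg
        scale = rng.uniform(0.8, 1.2)                    # mild rescale
        cos_a, sin_a = math.cos(angle), math.sin(angle)
        pts = [(scale * (x * cos_a - y * sin_a),
                scale * (x * sin_a + y * cos_a)) for x, y in seg["points"]]
        color = tuple(min(255, max(0, c + rng.randint(-30, 30)))
                      for c in seg["color"])
        out.append({"points": pts, "color": color})
    return out
```

The original logo and its augmented variant can then serve as a positive pair when training the retrieval network with a ranking loss such as Smooth-AP.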
Intrusion detection for the Controller Area Network (CAN) protocol requires modern methods to compete with other electrical architectures. Fingerprint intrusion detection systems (IDS) offer a promising new approach to this problem: by characterizing the network traffic of known ECUs, harmful messages can be distinguished. In this paper, a modified version of a fingerprint IDS is used, training a neural network on step-response and spectral characterizations of the network traffic. With the addition of feature-set reduction and hyperparameter tuning, this method achieves a 99.4% detection rate of trusted ECU traffic.
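To illustrate what "step-response and spectral characterization" of an ECU's signal might look like, here is a hypothetical feature extractor over a sampled voltage trace (not the paper's pipeline): a naive DFT for low-frequency spectral content, plus peak and rise-time features from the step response.

```python
import cmath

def spectral_features(signal, num_bins=4):
    """Spectral fingerprint of an ECU transmission: magnitudes of the
    first num_bins DFT coefficients of its sampled voltage trace.
    (Naive O(n^2) DFT for clarity; a real system would use an FFT.)"""
    n = len(signal)
    feats = []
    for k in range(num_bins):
        coeff = sum(signal[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n))
        feats.append(abs(coeff) / n)
    return feats

def step_response_features(signal, settle=0.9):
    """Step-response fingerprint: peak value and the first sample index at
    which the trace reaches `settle` of its final (steady-state) value."""
    final = signal[-1]
    rise = next(i for i, v in enumerate(signal) if v >= settle * final)
    return max(signal), rise
```

Concatenating such features per frame yields the input vector a classifier could be trained on to separate trusted ECU traffic from spoofed messages.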
Abstractive summarization systems leveraging pre-trained language models have achieved superior results on benchmark datasets. However, such models have been shown to be more prone to hallucinate facts that are unfaithful to the input context. In this paper, we propose a method to remedy entity-level extrinsic hallucinations with Entity Coverage Control (ECC). We first compute entity coverage precision and prepend the corresponding control code to each training example, which implicitly guides the model to recognize faithful content during training. We further extend our method via intermediate fine-tuning on large but noisy data extracted from Wikipedia to unlock zero-shot summarization. We show that the proposed method leads to more faithful and salient abstractive summarization in both supervised fine-tuning and zero-shot settings, according to our experimental results on three benchmark datasets: XSum, PubMed, and SAMSum.
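A minimal sketch of the control-code idea, with a deliberately toy entity extractor (capitalized tokens stand in for a real NER system) and hypothetical function and token names: entity coverage precision is the fraction of summary entities also found in the source, bucketed and prepended as a control token.

```python
def entities(text):
    """Toy entity extractor: capitalized tokens (a real system would run NER)."""
    return {tok.strip(".,") for tok in text.split() if tok[:1].isupper()}

def add_coverage_control_code(source, summary, num_buckets=5):
    """Compute entity coverage precision -- the fraction of summary entities
    that also appear in the source -- bucket it, and prepend the bucket as a
    control code so the model can condition on faithfulness during training."""
    summ_ents = entities(summary)
    if not summ_ents:
        precision = 1.0  # no entities to hallucinate
    else:
        precision = len(summ_ents & entities(source)) / len(summ_ents)
    bucket = min(int(precision * num_buckets), num_buckets - 1)
    return f"<ent-cov-{bucket}> {summary}", precision
```

At inference time, prompting with the highest-coverage control code then steers generation toward entity-faithful summaries.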