Domain adaptation aims to transfer the knowledge acquired by models trained on (data-rich) source domains to (low-resource) target domains, for which a popular method is invariant representation learning. While invariant representation learning has been studied extensively for classification and regression problems, how it applies to ranking problems, where both the data and the metrics have a list structure, is not well understood. Theoretically, we establish a domain adaptation generalization bound for ranking under listwise metrics such as MRR and NDCG. The bound suggests an adaptation method via learning list-level domain-invariant feature representations, whose benefits are empirically demonstrated by unsupervised domain adaptation experiments on real-world ranking tasks, including passage reranking. A key message is that for domain adaptation, the representations should be analyzed at the same level at which the metric is computed: we show that learning invariant representations at the list level is most effective for adaptation on ranking problems.
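As a loose illustration of the list-level idea, the sketch below pools item representations within each ranked list and feeds the pooled vector to a domain discriminator through a gradient-reversal layer. The mean-pooling and the discriminator architecture are illustrative assumptions, not the paper's exact construction.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; flips (and scales) gradients backward."""
    @staticmethod
    def forward(ctx, x, lamb):
        ctx.lamb = lamb
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lamb * grad_output, None

class ListLevelAdapter(nn.Module):
    """Encodes items, pools them into one list-level vector, and classifies
    the list's domain so the encoder can be trained adversarially."""
    def __init__(self, dim):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim, dim), nn.ReLU())
        self.domain_clf = nn.Linear(dim, 1)

    def forward(self, items, lamb=1.0):
        # items: (batch, list_size, dim) -- one row of candidates per query
        h = self.encoder(items)
        list_repr = h.mean(dim=1)  # list-level pooling
        domain_logit = self.domain_clf(GradReverse.apply(list_repr, lamb))
        return h, domain_logit
```

Training the encoder against this discriminator encourages source and target lists, rather than individual items, to become indistinguishable in feature space.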
Dense retrieval approaches can overcome the lexical gap and lead to significantly improved search results. However, they require large amounts of training data, which is not available for most domains. As shown in previous work (Thakur et al., 2021b), the performance of dense retrieval severely degrades under a domain shift. This limits the usage of dense retrieval approaches to only a few domains with large training datasets. In this paper, we propose the novel unsupervised domain adaptation method Generative Pseudo Labeling (GPL), which combines a query generator with pseudo labeling from a cross-encoder. On six representative domain-specialized datasets, we find the proposed GPL can outperform an out-of-the-box state-of-the-art dense retrieval approach by up to 8.9 points nDCG@10. GPL requires little (unlabeled) data from the target domain, and its training is more robust than that of previous methods. We further investigate the role of six recent pre-training methods in the domain adaptation scenario for retrieval tasks, of which only three could yield improved results. The best approach, TSDAE (Wang et al., 2021), can be combined with GPL, yielding another average improvement of 1.0 points nDCG@10 across the six tasks.
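GPL's three steps (query generation, negative mining, cross-encoder pseudo-labeling for margin-based training) can be summarized as below. `generate_queries` and `retrieve` are hypothetical stand-ins for a doc2query-style generator and the bi-encoder's nearest-neighbor search; this is a sketch of the pipeline, not the authors' implementation.

```python
def gpl_training_tuples(documents, generate_queries, retrieve, cross_encoder):
    """Sketch of GPL's data pipeline: synthetic queries per document, mined
    negatives, and a cross-encoder score margin as the pseudo label. The
    bi-encoder is then trained with MarginMSE on these tuples."""
    tuples = []
    for doc in documents:
        for query in generate_queries(doc):           # step 1: query generation
            for neg in retrieve(query, top_k=50):     # step 2: negative mining
                margin = (cross_encoder.predict([(query, doc)])[0]
                          - cross_encoder.predict([(query, neg)])[0])
                tuples.append((query, doc, neg, margin))  # step 3: pseudo label
    return tuples
```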
We propose a framework for improving the performance of a wide class of retrieval models at minimal computational cost. It utilizes precomputed document representations extracted by a base dense retrieval method, and involves training a model to jointly score a set of retrieved candidate documents for each query, while transforming on the fly the representation of each document in the context of the other candidates as well as the query itself. When scoring a document representation based on its similarity to the query, the model is thus aware of the representations of its 'peer' documents. We show that our approach leads to substantial improvements in retrieval performance over the base method, as well as over scoring candidate documents in isolation from one another, as in a pairwise training setting. Crucially, unlike term-interaction rerankers based on BERT-style encoders, it incurs a negligible computational overhead on top of any first-stage method at run time, allowing it to be easily combined with any state-of-the-art dense retrieval method. Finally, concurrently considering a set of candidate documents for a given query enables additional valuable capabilities in retrieval, such as score calibration and mitigating societal biases in ranking.
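A minimal sketch of the joint-scoring idea, assuming a small transformer over the precomputed query and candidate embeddings; the actual model may differ in pooling, depth, and scoring head.

```python
import torch
import torch.nn as nn

class ListContextReranker(nn.Module):
    """Re-scores a set of candidate document embeddings jointly: a shallow
    transformer lets each candidate attend to its 'peers' and to the query
    before the usual dot-product scoring."""
    def __init__(self, dim, n_layers=2, n_heads=4):
        super().__init__()
        # dim must be divisible by n_heads
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=n_heads,
                                           batch_first=True)
        self.contextualizer = nn.TransformerEncoder(layer, num_layers=n_layers)

    def forward(self, query_emb, doc_embs):
        # query_emb: (batch, dim); doc_embs: (batch, n_candidates, dim)
        seq = torch.cat([query_emb.unsqueeze(1), doc_embs], dim=1)
        ctx = self.contextualizer(seq)
        q, docs = ctx[:, 0], ctx[:, 1:]
        return (docs * q.unsqueeze(1)).sum(-1)  # contextualized dot-product scores
```

Because the candidate embeddings are precomputed by the first-stage retriever, only this shallow contextualizer runs at query time, which is where the negligible overhead comes from.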
We present Hybrid Infused Reranking for Passage Retrieval (HYRR), a framework for training rerankers based on a hybrid of BM25 and neural retrieval models. Retrievers based on hybrid models have been shown to outperform both BM25 and neural models alone. Our approach exploits this improved performance when training a reranker, leading to a robust reranking model. The reranker, a cross-attention neural model, is shown to be robust to different first-stage retrieval systems, achieving better performance than rerankers trained directly on the first-stage retrievers of multi-stage systems. We present evaluations on a supervised passage retrieval task using MS MARCO and on zero-shot retrieval tasks using BEIR. The empirical results show strong performance on both evaluations.
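The hybrid first stage that supplies the reranker's training candidates might blend normalized lexical and dense scores as below; the min-max fusion rule and weighting are illustrative assumptions, not the paper's exact recipe.

```python
import numpy as np

def hybrid_candidate_ranking(bm25_scores, dense_scores, alpha=0.5, top_k=100):
    """Fuse BM25 and dense-retriever scores (min-max normalized convex
    combination) to pick candidate passages for reranker training."""
    def minmax(s):
        s = np.asarray(s, dtype=float)
        span = s.max() - s.min()
        return (s - s.min()) / span if span > 0 else np.zeros_like(s)

    fused = alpha * minmax(bm25_scores) + (1 - alpha) * minmax(dense_scores)
    return np.argsort(-fused)[:top_k]  # indices of the top-k fused candidates
```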
Domain adaptation aims at generalizing a high-performance learner to a target domain by utilizing knowledge distilled from a source domain that has a different but related data distribution. One solution to domain adaptation is to learn domain-invariant feature representations, while the learned representations should also be discriminative for prediction. To learn such representations, domain adaptation frameworks usually include a domain-invariant representation learning approach to measure and reduce the domain discrepancy, as well as a discriminator for classification. Inspired by Wasserstein GAN, in this paper we propose a novel approach to learn domain-invariant feature representations, namely Wasserstein Distance Guided Representation Learning (WDGRL). WDGRL utilizes a neural network, referred to as the domain critic, to estimate the empirical Wasserstein distance between the source and target samples, and optimizes the feature extractor network to minimize the estimated Wasserstein distance in an adversarial manner. The theoretical advantages of the Wasserstein distance for domain adaptation lie in its gradient property and promising generalization bound. Empirical studies on common sentiment and image classification adaptation datasets demonstrate that our proposed WDGRL outperforms state-of-the-art domain-invariant representation learning approaches.
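Concretely, the adversarial objective could be set up as follows: the critic maximizes the empirical Wasserstein estimate (kept approximately 1-Lipschitz via a WGAN-GP-style gradient penalty), while the feature extractor minimizes it. This is a sketch consistent with the abstract, not the authors' exact code.

```python
import torch

def wdgrl_losses(critic, feat_src, feat_tgt, penalty_weight=10.0):
    """Returns (wd_estimate, critic_objective). The feature extractor is
    trained to minimize wd_estimate; the critic to maximize critic_objective.
    Assumes equal source/target batch sizes for the interpolation."""
    wd = critic(feat_src).mean() - critic(feat_tgt).mean()
    # Gradient penalty on interpolates keeps the critic approximately 1-Lipschitz.
    alpha = torch.rand(feat_src.size(0), 1, device=feat_src.device)
    interp = (alpha * feat_src + (1 - alpha) * feat_tgt).requires_grad_(True)
    grads = torch.autograd.grad(critic(interp).sum(), interp, create_graph=True)[0]
    penalty = ((grads.norm(2, dim=1) - 1.0) ** 2).mean()
    return wd, wd - penalty_weight * penalty
```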
The vast majority of existing algorithms for unsupervised domain adaptation (UDA) focus on adapting directly from a labeled source domain to an unlabeled target domain in a one-off fashion. Gradual domain adaptation (GDA), on the other hand, assumes a path of $(T-1)$ unlabeled intermediate domains bridging the source and target, and aims to provide better generalization in the target domain by leveraging the intermediate ones. Under certain assumptions, Kumar et al. (2020) proposed a simple algorithm, gradual self-training, together with a generalization bound of order $e^{O(T)}\left(\varepsilon_0 + O\left(\sqrt{\log(T)/n}\right)\right)$ for the target domain error, where $\varepsilon_0$ is the source domain error and $n$ is the data size of each domain. Due to the exponential factor, this upper bound becomes vacuous when $T$ is only moderately large. In this work, we analyze gradual self-training under more general and relaxed assumptions, and prove a generalization bound of $\varepsilon_0 + O\left(T\Delta + T/\sqrt{n}\right) + \widetilde{O}\left(1/\sqrt{nT}\right)$, where $\Delta$ is the average distributional distance between consecutive domains. Compared with the existing bound, which depends exponentially on $T$ as a multiplicative factor, our bound depends on $T$ only linearly and additively. Perhaps more interestingly, our result implies the existence of an optimal choice of $T$ that minimizes the generalization error, and it also naturally suggests an optimal way to construct the path of intermediate domains so as to minimize the accumulative path length $T\Delta$ between the source and target. To corroborate the implications of our theory, we examine gradual self-training on multiple semi-synthetic and real datasets, which confirms our findings. We believe our insights provide a path forward for future GDA algorithm design.
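The gradual self-training procedure analyzed here is simple enough to sketch: fit on the labeled source, then walk the path of intermediate domains, pseudo-labeling each with the previous model and refitting. Confidence-based filtering of pseudo-labels, used in practice, is omitted for brevity.

```python
from sklearn.base import clone

def gradual_self_train(base_model, source_X, source_y, intermediate_domains):
    """Gradual self-training over a path of T-1 unlabeled intermediate
    domains, ordered from source-like to target-like."""
    model = clone(base_model).fit(source_X, source_y)
    for X in intermediate_domains:
        pseudo_y = model.predict(X)  # pseudo labels from the previous model
        model = clone(base_model).fit(X, pseudo_y)
    return model
```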
All well-known machine learning algorithms, spanning both supervised and semi-supervised learning, work well only under a common assumption: the training and test data follow the same distribution. When the distribution changes, most statistical models must be rebuilt from newly collected data, which for some applications may be costly or impossible to obtain. It has therefore become necessary to develop methods that reduce the need and effort of acquiring new labeled samples, by exploiting data available in related areas and using it further in similar fields. This has given rise to a new machine learning framework called transfer learning: a learning setting inspired by the human capability to extrapolate knowledge across tasks in order to learn more efficiently. Despite the large variety of transfer learning scenarios, the main objective of this survey is to give an overview of the state-of-the-art theoretical results in a specific, and arguably the most popular, sub-field of transfer learning called domain adaptation. In this sub-field, the data distribution is assumed to change between the training and test data, while the learning task remains the same. We provide a first up-to-date description of existing results related to the domain adaptation problem, covering learning bounds based on different statistical learning frameworks.
Large language models have shown impressive few-shot results on a wide range of tasks. However, when knowledge is key for such results, as is the case for tasks such as question answering and fact checking, massive parameter counts seem to be needed to store that knowledge. Retrieval-augmented models are known to excel at knowledge-intensive tasks without requiring as many parameters, but it is unclear whether they work in few-shot settings. In this work we present Atlas, a carefully designed and pre-trained retrieval-augmented language model able to learn knowledge-intensive tasks with very few training examples. We perform evaluations on a wide range of tasks, including MMLU, KILT, and Natural Questions, and study the impact of the content of the document index, showing that it can easily be updated. Notably, Atlas reaches over 42\% accuracy on Natural Questions using only 64 examples, outperforming a 540B-parameter model despite having 50x fewer parameters.
Deep learning has produced state-of-the-art results for a variety of tasks. While such approaches for supervised learning have performed well, they assume that training and testing data are drawn from the same distribution, which may not always be the case. To address this challenge, single-source unsupervised domain adaptation can handle situations where a network is trained on labeled data from a source domain and unlabeled data from a related but different target domain, with the goal of performing well at test time on the target domain. Many single-source and typically homogeneous unsupervised deep domain adaptation approaches have thus been developed, combining the powerful, hierarchical representations from deep learning with domain adaptation to reduce reliance on potentially costly target data labels. This survey compares these approaches by examining alternative methods, their unique and common elements, results, and theoretical insights. We follow this with a look at application areas and open research directions.
This paper outlines a conceptual framework for understanding recent developments in information retrieval and natural language processing that attempts to integrate dense and sparse retrieval methods. I propose a representational approach that breaks the core text retrieval problem into a logical scoring model and a physical retrieval model. The scoring model is defined in terms of encoders, which map queries and documents into a representational space, and a comparison function that computes query-document scores. The physical retrieval model defines how a system produces the top-$k$ scoring documents from an arbitrarily large corpus with respect to a query. The scoring model is further analyzed along two dimensions: dense vs. sparse representations, and supervised (learned) vs. unsupervised approaches. I show that many recently proposed retrieval methods, including multi-stage ranking designs, can be seen as different parameterizations within this framework, and that a unified view suggests a number of open research questions, providing a roadmap for future work. As a bonus, this conceptual framework establishes connections to sentence similarity tasks in natural language processing and to information access 'technologies' prior to the dawn of computing.
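The logical scoring model admits a very small rendering in code, offered here as a sketch of the abstraction: two encoders into a common representation space plus a comparison function, with the dense/sparse and supervised/unsupervised variants differing only in how the pieces are instantiated.

```python
from typing import Callable, TypeVar

R = TypeVar("R")  # the representation space

def make_scoring_model(encode_query: Callable[[str], R],
                       encode_doc: Callable[[str], R],
                       compare: Callable[[R, R], float]):
    """Logical scoring model: encoders map text into a representation
    space; a comparison function produces the query-document score."""
    return lambda query, doc: compare(encode_query(query), encode_doc(doc))

# A dense parameterization would plug in neural encoders and an inner
# product; a sparse, unsupervised one (BM25-like) would use term-weight
# vectors and a sparse dot product instead.
```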
In this work, we present a systematic empirical study focused on the suitability of state-of-the-art multilingual encoders for cross-lingual document and sentence retrieval tasks across a number of diverse language pairs. We first treat these models as multilingual text encoders and benchmark their performance in unsupervised ad-hoc sentence- and document-level CLIR. In contrast to supervised language understanding, our results indicate that for unsupervised document-level CLIR -- a setup with no relevance judgments for IR-specific fine-tuning -- pretrained multilingual encoders on average fail to significantly outperform earlier CLWE-based models. For sentence-level retrieval, we do obtain state-of-the-art performance; the peak scores, however, are achieved by multilingual encoders that have been further specialized, in a supervised fashion, for sentence understanding tasks, rather than by their vanilla 'off-the-shelf' variants. Following these results, we introduce localized relevance matching for document-level CLIR, where we independently score a query against document sections. In the second part, we evaluate multilingual encoders fine-tuned on English relevance data in a series of zero-shot language and domain transfer CLIR experiments. Our results show that supervised re-ranking rarely improves over multilingual transformers used as unsupervised base rankers. Finally, only with in-domain contrastive fine-tuning (i.e., same domain, only language transfer) do we manage to improve ranking quality. We uncover substantial empirical differences between cross-lingual retrieval results and results of (zero-shot) cross-lingual transfer for monolingual retrieval in target languages, which point to 'monolingual overfitting' of retrieval models trained on monolingual data.
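Localized relevance matching can be sketched as scoring the query against each document section independently and then aggregating; the max-aggregation below is an illustrative assumption standing in for whatever aggregation the study uses.

```python
import numpy as np

def localized_relevance_score(query_vec: np.ndarray,
                              section_vecs: np.ndarray) -> float:
    """Score a long document by encoding its sections separately and keeping
    the best query-section match (illustrative max-aggregation).
    query_vec: (dim,); section_vecs: (n_sections, dim)."""
    return float(np.max(section_vecs @ query_vec))
```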
Ranking models are the main components of information retrieval systems. Several approaches to ranking are based on traditional machine learning algorithms using a set of hand-crafted features. Recently, researchers have leveraged deep learning models in information retrieval. These models are trained end to end to extract features from raw data for the ranking task, and thus they overcome the limitations of hand-crafted features. A variety of deep learning models have been proposed, and each model presents a set of neural network components to extract features for ranking. In this paper, we compare the models proposed in the literature along different dimensions in order to understand the major contributions and limitations of each. In our discussion of the literature, we analyze the promising neural components and suggest future research directions. We also show the analogy between document retrieval and other retrieval tasks in which the ranked items are structured documents, answers, images, and videos.
Transfer learning aims at improving the performance of target learners on target domains by transferring the knowledge contained in different but related source domains. In this way, the dependence on large amounts of target-domain data can be reduced when constructing target learners. Due to its wide application prospects, transfer learning has become a popular and promising area in machine learning. Although there are already some valuable and impressive surveys on transfer learning, these surveys introduce approaches in a relatively isolated way and lack the recent advances in the field. Given the rapid expansion of the transfer learning area, it is both necessary and challenging to comprehensively review the relevant studies. This survey attempts to connect and systematize existing transfer learning research, and to summarize and interpret the mechanisms and strategies of transfer learning in a comprehensive way, which may help readers gain a better understanding of the current research status and ideas. Unlike previous surveys, this survey reviews more than forty representative transfer learning approaches, especially homogeneous transfer learning approaches, from the perspectives of data and model. The applications of transfer learning are also briefly introduced. To show the performance of different transfer learning models, over twenty representative models are evaluated in experiments on three datasets: Amazon Reviews, Reuters-21578, and Office-31. The experimental results demonstrate the importance of selecting appropriate transfer learning models for different applications in practice.
Much recent research on information retrieval has focused on how to transfer from one task (typically with abundant supervised data) to various other tasks with limited supervision, implicitly assuming that it is possible to generalize from one task to all the rest. However, this overlooks the fact that there are many diverse and unique retrieval tasks, each targeting different search intents, queries, and search domains. In this paper, we suggest working on few-shot retrieval, where each task comes with a short description and a few examples. To amplify the power of a few examples, we propose Prompt-based Query Generation for Retriever (Promptagator), which leverages a large language model (LLM) as a few-shot query generator and creates task-specific retrievers based on the generated data. Powered by the LLM's generalization ability, Promptagator makes it possible to create task-specific end-to-end retrievers solely based on a few examples, without using Natural Questions or MS MARCO to train question generators or dual encoders. Surprisingly, LLM prompting with no more than 8 examples allows dual encoders to outperform heavily engineered models trained on MS MARCO, such as ColBERT v2, by more than 1.2 points nDCG on average across 11 retrieval sets. Further training standard-size re-rankers with the same generated data yields another 5.0-point nDCG improvement. Our study shows that query generation can be far more effective than previously observed, especially when a small amount of task-specific knowledge is given.
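The few-shot query-generation step can be illustrated with a simple prompt assembly; the template below is a hypothetical format, not Promptagator's exact prompt.

```python
def build_query_generation_prompt(examples, document, max_examples=8):
    """Assemble a few-shot prompt (at most 8 (document, query) pairs, per the
    abstract) asking an LLM to write a task-specific query for a new document.
    The generated (query, document) pairs then train the retriever."""
    parts = [f"Document: {d}\nQuery: {q}" for d, q in examples[:max_examples]]
    parts.append(f"Document: {document}\nQuery:")
    return "\n\n".join(parts)
```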
We study the problem of retrieval with instructions, where users of a retrieval system explicitly describe their intent along with their queries. We aim to develop a general-purpose task-aware retrieval system using multi-task instruction tuning that can follow human-written instructions to find the best documents for a given query. We introduce the first large-scale collection of approximately 40 retrieval datasets with instructions, BERRI, and present TART, a multi-task retrieval system trained on BERRI with instructions. TART shows strong capabilities to adapt to a new retrieval task via instructions and advances the state of the art on two zero-shot retrieval benchmarks, BEIR and LOTTE, outperforming models up to three times larger. We further introduce a new evaluation setup, X^2-Retrieval, to better reflect real-world scenarios in which diverse domains and tasks are pooled and a system needs to find documents aligned with users' intents. In this setup, TART significantly outperforms competitive baselines, further demonstrating the effectiveness of guiding retrieval with instructions.
Recent work reported the label alignment property in a supervised learning setting: the vector of all labels in the dataset is mostly in the span of the top few singular vectors of the data matrix. Inspired by this observation, we derive a regularization method for unsupervised domain adaptation. Instead of regularizing representation learning as done by popular domain adaptation methods, we regularize the classifier so that the target domain predictions can, to some extent, "align" with the top singular vectors of the unsupervised data matrix from the target domain. In a linear regression setting, we theoretically justify the label alignment property and characterize the optimality of the solution of our regularization by bounding its distance to the optimal solution. We conduct experiments to show that our method can work well on label shift problems, where classic domain adaptation methods are known to fail. We also report mild improvement over domain adaptation baselines on a set of commonly seen MNIST-USPS domain adaptation tasks and on cross-lingual sentiment analysis tasks.
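One way to encode this regularizer in code, under the assumption that "alignment" is measured as the prediction vector's mass outside the span of the top-k left singular vectors of the target data matrix; the paper's exact formulation may differ.

```python
import numpy as np

def label_alignment_penalty(target_X: np.ndarray, preds: np.ndarray,
                            k: int = 10) -> float:
    """Squared norm of the target-domain prediction vector lying outside the
    span of the top-k left singular vectors of the target data matrix.
    target_X: (n, d) unlabeled target data; preds: (n,) predictions."""
    U, _, _ = np.linalg.svd(target_X, full_matrices=False)
    U_k = U[:, :k]                            # top-k singular directions
    residual = preds - U_k @ (U_k.T @ preds)  # component outside the span
    return float(residual @ residual)
```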
Dense retrieval (DR) approaches based on powerful pre-trained language models (PLMs) have achieved significant advances and have become a key component of modern open-domain question-answering systems. However, they require large amounts of manual annotation to perform competitively, which is infeasible to scale. To address this, a growing body of research has recently focused on improving DR performance under low-resource scenarios. These works differ in the resources they require for training and in the techniques they employ. Understanding such differences is crucial for choosing the right technique in a specific low-resource scenario. To facilitate this understanding, we provide a thorough structured overview of mainstream techniques for low-resource DR. Based on the resources they require, we divide the techniques into three main categories: (1) only documents are needed; (2) documents and questions are needed; and (3) documents and question-answer pairs are needed. For each technique, we introduce its general-form algorithm and highlight its open issues, pros, and cons. Promising directions are outlined for future research.
Survival analysis is the branch of statistics that studies the relation between the characteristics of living entities and their respective survival times, taking into account the partial information held by censored cases. A good analysis can, for example, determine whether one medical treatment for a group of patients is better than another. With the rise of machine learning, survival analysis can be modeled as learning a function that maps studied patients to their survival times. To succeed with that, there are three crucial issues to be tackled. First, some patient data is censored: we do not know the true survival times for all patients. Second, data is scarce, which led past research to treat different illness types as domains in a multi-task setup. Third, there is the need for adaptation to new or extremely rare illness types, where little or no labels are available. In contrast to previous multi-task setups, we want to investigate how to efficiently adapt to a new survival target domain from multiple survival source domains. For this, we introduce a new survival metric and the corresponding discrepancy measure between survival distributions. These allow us to define domain adaptation for survival analysis while incorporating censored data, which would otherwise have to be dropped. Our experiments on two cancer data sets reveal a superb performance on target domains, a better treatment recommendation, and a weight matrix with a plausible explanation.
We introduce ART, a new corpus-level autoencoding approach for training dense retrieval models that does not require any labeled training data. Dense retrieval is a central challenge for open-domain tasks such as open-domain QA, where state-of-the-art methods typically require large supervised datasets with custom hard-negative mining and positive examples. ART, in contrast, only requires access to unpaired inputs and outputs (e.g., questions and potential answer documents). It uses a new document-retrieval autoencoding scheme, in which (1) an input question is used to retrieve a set of evidence documents, and (2) the documents are then used to compute the probability of reconstructing the original question. Training for retrieval based on question reconstruction enables effective unsupervised learning of both document and question encoders, which can later be incorporated into complete QA systems without any further fine-tuning. Extensive experiments demonstrate that ART obtains state-of-the-art results on multiple QA retrieval benchmarks with only generic initialization from a pre-trained language model, removing the need for labeled data and task-specific losses.
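The question-reconstruction signal can be sketched as a distillation-style objective: the retriever's distribution over its retrieved evidence set is pushed toward the distribution implied by each document's likelihood of regenerating the question under a language model. The exact loss details belong to the paper; this is an assumption-laden sketch.

```python
import torch.nn.functional as F

def question_reconstruction_loss(retriever_scores, question_log_probs):
    """retriever_scores: (batch, k) retrieval scores over top-k documents.
    question_log_probs: (batch, k) log p(question | document) from a frozen LM.
    KL-pushes the retriever toward documents that best reconstruct the question."""
    target = F.softmax(question_log_probs.detach(), dim=-1)  # soft pseudo-labels
    log_pred = F.log_softmax(retriever_scores, dim=-1)
    return F.kl_div(log_pred, target, reduction="batchmean")
```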
Due to the ability of deep neural nets to learn rich representations, recent advances in unsupervised domain adaptation have focused on learning domain-invariant features that achieve a small error on the source domain. The hope is that the learnt representation, together with the hypothesis learnt from the source domain, can generalize to the target domain. In this paper, we first construct a simple counterexample showing that, contrary to common belief, the above conditions are not sufficient to guarantee successful domain adaptation. In particular, the counterexample exhibits conditional shift: the class-conditional distributions of input features change between source and target domains. To give a sufficient condition for domain adaptation, we propose a natural and interpretable generalization upper bound that explicitly takes into account the aforementioned shift. Moreover, we shed new light on the problem by proving an information-theoretic lower bound on the joint error of any domain adaptation method that attempts to learn invariant representations. Our result characterizes a fundamental tradeoff between learning invariant representations and achieving small joint error on both domains when the marginal label distributions differ from source to target. Finally, we conduct experiments on real-world datasets that corroborate our theoretical findings. We believe these insights are helpful in guiding the future design of domain adaptation and representation learning algorithms.