Thanks to their scalability, two-stage recommenders are used by many of today's largest online platforms, including YouTube, LinkedIn, and Pinterest. These systems produce recommendations in two steps: (i) multiple nominators, tuned for low prediction latency, preselect a small subset of candidates from the entire item pool; (ii) a slower but more accurate ranker further narrows down the nominated items and serves them to the user. Despite their popularity, the literature on two-stage recommenders is relatively scarce, and the algorithms are often treated as the sum of their parts. Such treatment presumes that two-stage performance is explained by the behavior of the individual components in isolation. This is not the case: using synthetic and real-world data, we demonstrate that interactions between the ranker and the nominators substantially affect the overall performance. Motivated by these findings, we derive a generalization lower bound which shows that independent nominator training can lead to performance on par with uniformly random recommendations. We find that careful design of item pools, each assigned to a different nominator, alleviates these issues. Since manually searching for a good pool allocation is difficult, we propose to learn one instead using a mixture-of-experts based approach. This significantly improves precision and recall at K.
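To make the two-step pipeline concrete, here is a minimal sketch in which several latency-constrained nominators each shortlist items from their own pool and a ranker re-scores only the merged shortlist; the scoring callables and the toy usage are illustrative stand-ins, not the paper's models.

```python
import numpy as np

def two_stage_recommend(user, item_pools, nominator_scores, ranker_score,
                        shortlist_size=10, final_k=3):
    """Minimal two-stage sketch: nominators shortlist, a ranker re-scores.

    item_pools: one array of item ids per nominator.
    nominator_scores: cheap per-nominator scoring callables (low latency).
    ranker_score: slower, more accurate scoring callable.
    """
    # Step (i): each nominator preselects a few candidates from its own pool.
    candidates = []
    for pool, score in zip(item_pools, nominator_scores):
        s = score(user, pool)                         # fast approximate scores
        candidates.append(pool[np.argsort(s)[-shortlist_size:]])
    shortlist = np.unique(np.concatenate(candidates))

    # Step (ii): the ranker re-scores only the small shortlist and serves top-k.
    final = ranker_score(user, shortlist)
    return shortlist[np.argsort(final)[-final_k:][::-1]]

# Toy usage with random scores standing in for real models.
rng = np.random.default_rng(0)
pools = [np.arange(0, 500), np.arange(500, 1000)]
nominators = [lambda u, p: rng.random(len(p)) for _ in pools]
ranker = lambda u, p: rng.random(len(p))
print(two_stage_recommend(None, pools, nominators, ranker))
```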
We study a sequential decision problem in which a learner faces a sequence of $K$-armed stochastic bandit tasks. The tasks may be designed by an adversary, but the adversary is constrained to choose the optimal arm of each task from a smaller (but unknown) subset of $M$ arms. The task boundaries may be known (the bandit meta-learning setting) or unknown (the non-stationary bandit setting). We design an algorithm based on a reduction to bandit submodular maximization, and show that, in the regime of a large number of tasks and a small number of optimal arms, its regret in both settings is smaller than the simple baseline of $\tilde{O}(\sqrt{KNT})$ that can be obtained by using standard algorithms designed for non-stationary bandit problems. For the bandit meta-learning problem with fixed task length $\tau$, we show that the regret of the algorithm is bounded as $\tilde{O}(NM\sqrt{M\tau} + N^{2/3}M\tau)$. Under an additional assumption on the identifiability of the optimal arm in each task, we show a bandit meta-learning algorithm with an improved $\tilde{O}(N\sqrt{M\tau} + N^{1/2}\sqrt{MK\tau})$ regret.
Recommender systems, when deployed in a marketplace, play a dual role: they help users select their most desired items from a large pool, and they help allocate a limited number of items to the users who want them most. Despite the prevalence of capacity constraints in many real-world recommendation settings, a principled way of incorporating them into the design of these systems has been lacking. Motivated by this, we propose an interactive framework in which the system provider can improve the quality of recommendations to users by opportunistically exploring allocations that maximize user rewards while respecting capacity constraints through appropriate pricing mechanisms. We model the problem as an instance of a low-rank combinatorial multi-armed bandit problem with selection constraints on the arms. We take an integrated approach, using techniques from collaborative filtering, combinatorial bandits, and optimal resource allocation, to provide an algorithm that provably achieves sub-linear regret, namely $\tilde{\mathcal{O}}(\sqrt{(N+M)RT})$ in $T$ rounds, for a problem with $N$ users, $M$ items, and a rank-$R$ mean reward matrix. Empirical studies on synthetic and real-world data also demonstrate the effectiveness and performance of our approach.
A considerable number of real-world problems can be formulated as decision-making problems in which an appropriate choice must repeatedly be made from a set of alternatives. Multiple expert judgments, whether human or artificial, can help in making the right decision, especially when exploring alternative solutions is expensive. Since expert opinions may deviate, the problem of finding the right alternative by aggregating independent judgments can be treated as a collective decision-making (CDM) problem. Current state-of-the-art methods focus on efficiently finding the best expert and therefore perform poorly if all experts are unqualified or overly biased, potentially undermining the decision-making process. In this paper, we propose a new algorithmic approach based on the contextual multi-armed bandit problem (CMAB) to identify and counteract such biased expertise. We explore homogeneous, heterogeneous, and polarized expert groups and show that this approach is able to effectively exploit collective expertise, outperforming state-of-the-art methods, especially when the quality of the provided expertise degrades. Our novel CMAB-inspired approach achieves higher final performance and converges faster than previous adaptive algorithms.
We explore a new model of bandit experiments in which a potentially non-stationary sequence of contexts influences the arms' performance. Context-unaware algorithms risk confounding, while those that perform correct inference face information delays. Our main insight is that an algorithm we call deconfounded Thompson sampling strikes a delicate balance between adaptivity and robustness. Its adaptivity leads to optimal efficiency in easy stationary instances, but it displays surprising resilience in hard non-stationary ones that cause other adaptive algorithms to fail.
Personalized web services strive to adapt their services (advertisements, news articles, etc.) to individual users by making use of both content and user information. Despite a few recent advances, this problem remains challenging for at least two reasons. First, web service is featured with dynamically changing pools of content, rendering traditional collaborative filtering methods inapplicable. Second, the scale of most web services of practical interest calls for solutions that are both fast in learning and computation. In this work, we model personalized recommendation of news articles as a contextual bandit problem, a principled approach in which a learning algorithm sequentially selects articles to serve users based on contextual information about the users and articles, while simultaneously adapting its article-selection strategy based on user-click feedback to maximize total user clicks. The contributions of this work are three-fold. First, we propose a new, general contextual bandit algorithm that is computationally efficient and well motivated from learning theory. Second, we argue that any bandit algorithm can be reliably evaluated offline using previously recorded random traffic. Finally, using this offline evaluation method, we successfully applied our new algorithm to a Yahoo! Front Page Today Module dataset containing over 33 million events. Results showed a 12.5% click lift compared to a standard context-free bandit algorithm, and the advantage becomes even greater when data gets more scarce.
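The offline evaluation idea in this abstract, replaying logged events that were served uniformly at random and counting only those on which the candidate policy agrees with the log, can be sketched as follows; the interface names are assumptions for illustration, not the paper's code.

```python
def replay_evaluate(policy, logged_events):
    """Offline replay evaluation on uniformly-random logged traffic.

    logged_events: iterable of (context, logged_arm, reward) tuples, where
    logged_arm was chosen uniformly at random by the logging policy.
    policy: object exposing select(context) -> arm and update(context, arm, reward).
    """
    total_reward, matches = 0.0, 0
    for context, logged_arm, reward in logged_events:
        if policy.select(context) == logged_arm:
            # Only matched events are revealed to (and counted for) the policy.
            total_reward += reward
            matches += 1
            policy.update(context, logged_arm, reward)
    return total_reward / max(matches, 1)   # average reward per matched event
```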
We introduce a new setting, optimize-and-estimate structured bandits. Here, a policy must select a batch of arms, each characterized by its own context, that would allow it to both maximize reward and maintain an accurate (ideally unbiased) population estimate of the reward. This setting is inherent to many public and private sector applications and often requires handling delayed feedback, small data, and distribution shifts. We demonstrate its importance on real data from the United States Internal Revenue Service (IRS). The IRS performs yearly audits of the tax base. Two of its most important objectives are to identify suspected misreporting and to estimate the "tax gap" -- the global difference between the amount paid and true amount owed. Based on a unique collaboration with the IRS, we cast these two processes as a unified optimize-and-estimate structured bandit. We analyze optimize-and-estimate approaches to the IRS problem and propose a novel mechanism for unbiased population estimation that achieves rewards comparable to baseline approaches. This approach has the potential to improve audit efficacy, while maintaining policy-relevant estimates of the tax gap. This has important social consequences given that the current tax gap is estimated at nearly half a trillion dollars. We suggest that this problem setting is fertile ground for further research and we highlight its interesting challenges. The results of this and related research are currently being incorporated into the continual improvement of the IRS audit selection methods.
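The abstract does not spell out its estimation mechanism; as a generic illustration of how a stochastic audit policy with known, strictly positive inclusion probabilities supports unbiased population estimation, here is a Horvitz-Thompson style sketch (the function names, the probability floor, and the score-proportional sampling are all assumptions, not the paper's mechanism):

```python
import numpy as np

def select_and_estimate(scores, budget, rng=None):
    """Illustrative sketch: audit a random batch with known inclusion
    probabilities, then form a Horvitz-Thompson estimate of a population total.

    scores: positive priority scores for each unit (e.g., predicted risk).
    budget: desired expected number of audited units.
    """
    rng = rng or np.random.default_rng(0)
    # Inclusion probabilities roughly proportional to score; the floor keeps
    # every probability strictly positive, which unbiasedness requires.
    p = np.clip(budget * scores / scores.sum(), 0.01, 1.0)
    audited = rng.random(len(scores)) < p          # independent Bernoulli draws

    def estimate_total(observed_outcomes):
        # observed_outcomes: outcomes measured only on the audited units,
        # in the same order as np.flatnonzero(audited).
        return float(np.sum(observed_outcomes / p[audited]))

    return audited, estimate_total
```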
We here adopt Bayesian nonparametric mixture models to extend multi-armed bandits, in particular Thompson sampling, to scenarios with reward model uncertainty. In the stochastic multi-armed bandit, the reward for playing an arm is generated from an unknown distribution. Reward uncertainty, i.e., the lack of knowledge about the reward-generating distribution, induces the exploration-exploitation trade-off: a bandit agent needs to simultaneously learn the properties of the reward distributions and sequentially decide which action to take next. In this work, we extend Thompson sampling to scenarios with reward model uncertainty by adopting a Bayesian nonparametric Gaussian mixture model for flexible reward density estimation. The proposed Bayesian nonparametric mixture model Thompson sampling sequentially learns the reward model that best approximates the true, yet unknown, per-arm reward distribution, achieving successful regret performance. We derive, based on a novel posterior-analysis argument, an asymptotic regret bound for the proposed method. In addition, we empirically evaluate its performance in diverse and previously elusive bandit environments, e.g., with rewards not in the exponential family, subject to outliers, and with different per-arm reward distributions. We show that the proposed Bayesian nonparametric Thompson sampling outperforms state-of-the-art alternatives, both in terms of average cumulative regret and regret volatility. The proposed method is valuable in the presence of bandit reward model uncertainty, as it avoids stringent case-by-case model design choices, yet provides important regret savings.
In the latent bandit problem, the learner has access to reward distributions and, for the non-stationary variant, transition models of the environment. The reward distributions are conditioned on the arm and an unknown latent state. The goal is to use the reward history to identify the latent state, allowing for optimal arm choices in the future. The latent bandit setting lends itself to many practical applications, such as recommender and decision-support systems, where rich data allows offline estimation of the environment model while online learning remains a key component. Previous solutions in this setting always choose the arm with the highest reward given the agent's beliefs about the state, and do not explicitly consider the value of information-gathering arms. Such information-gathering arms do not necessarily provide the highest reward, and are therefore never chosen by an agent that always selects the highest-reward arm. In this paper, we present a method for information gathering in latent bandits. Given particular reward structures and transition matrices, we show that choosing the best arm given the agent's beliefs about the state incurs higher regret. Furthermore, we show that by carefully choosing arms, we obtain an improved estimate of the state distribution, and thus lower the cumulative regret through better arm choices in the future. We evaluate our method on synthetic and real-world datasets, showing significant improvement in regret over state-of-the-art methods.
Thompson sampling is one of the oldest heuristics to address the exploration / exploitation trade-off, but it is surprisingly unpopular in the literature. We present here some empirical results using Thompson sampling on simulated and real data, and show that it is highly competitive. And since this heuristic is very easy to implement, we argue that it should be part of the standard baselines to compare against.
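As a reminder of why the heuristic is "very easy to implement", a minimal Beta-Bernoulli Thompson sampling loop on simulated arms might look like the sketch below (the Beta(1,1) priors and the toy arm means are arbitrary illustrative choices):

```python
import numpy as np

def thompson_sampling(arm_means, horizon=10_000, rng=None):
    """Beta-Bernoulli Thompson sampling on simulated arms."""
    rng = rng or np.random.default_rng(0)
    k = len(arm_means)
    alpha, beta = np.ones(k), np.ones(k)   # Beta(1,1) prior on each arm's mean
    total = 0.0
    for _ in range(horizon):
        theta = rng.beta(alpha, beta)      # one posterior sample per arm
        arm = int(np.argmax(theta))        # play the arm whose sample is best
        reward = float(rng.random() < arm_means[arm])
        alpha[arm] += reward               # conjugate posterior update
        beta[arm] += 1.0 - reward
        total += reward
    return total

# Example: three arms with unknown click probabilities.
print(thompson_sampling([0.04, 0.05, 0.02]))
```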
We introduce a multi-armed bandit model where the reward is a sum of multiple random variables, and each action only alters the distributions of some of them. After each action, the agent observes the realizations of all the variables. The model is motivated by marketing campaigns and recommender systems, where the variables represent outcomes on individual customers, such as clicks. We propose UCB-style algorithms that estimate the lift of the actions over a baseline. We study multiple variants of the problem, including when the baseline and the affected variables are unknown, and prove that all of them have sublinear regret bounds. We also provide lower bounds that justify the necessity of our modeling assumptions. Experiments on synthetic and real-world datasets show the benefit of methods that estimate the lift over policies that do not use this structure.
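A rough sketch of the kind of UCB-style lift estimation described above, under the simplifying assumptions that the affected-variable sets are known and that occasional baseline rounds are available (the `env` callable and the 1-in-10 baseline schedule are hypothetical choices, not the paper's algorithm):

```python
import numpy as np

def ucb_lift(env, affected, n_vars, horizon=5000, rng=None):
    """Sketch of UCB on the estimated lift of each action over a baseline.

    env(a) returns the realized vector of the n_vars variables after taking
    action a; env(None) is a baseline (no-action) round.  affected[a] lists
    the variable indices action a can move.  All of these are hypothetical.
    """
    rng = rng or np.random.default_rng(0)
    k = len(affected)
    counts = np.zeros(k)
    lift_sums = np.zeros(k)
    baseline_mean = np.zeros(n_vars)
    baseline_n = 0
    for t in range(1, horizon + 1):
        if t % 10 == 1:  # occasional baseline round (arbitrary 1-in-10 schedule)
            x = env(None)
            baseline_n += 1
            baseline_mean += (x - baseline_mean) / baseline_n
            continue
        # UCB index on the estimated lift; unplayed actions get an infinite index.
        with np.errstate(divide="ignore"):
            bonus = np.sqrt(2.0 * np.log(t) / counts)
        index = np.where(counts > 0, lift_sums / np.maximum(counts, 1) + bonus, np.inf)
        a = int(np.argmax(index))
        x = env(a)
        # Only the affected variables enter the lift estimate, so its noise does
        # not grow with the many unaffected variables.
        lift = np.sum(x[affected[a]] - baseline_mean[affected[a]])
        counts[a] += 1
        lift_sums[a] += lift
    return lift_sums / np.maximum(counts, 1)      # estimated lift per action
```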
Consider an online learning algorithm that simultaneously makes decisions and learns from feedback. Such algorithms are widely deployed in recommender systems for products and digital content. This paper shows that online learning algorithms are biased toward lower-risk alternatives, and how this bias shapes demand in recommender systems. First, we consider $K$-armed bandits. We prove that $\varepsilon$-Greedy chooses a risk-free arm over a risky arm with equal expected reward with probability arbitrarily close to one. This is a consequence of the algorithm sampling arms with poor reward estimates less often. Through experiments, we show that other online learning algorithms also exhibit risk aversion. In a recommender-system setting, we show that content whose user rewards are less noisy is favored by the algorithm. Combined with an equilibrium force that drives strategic content creators toward content of similar expected quality, the advantage given to content that is not necessarily better, but less volatile, is amplified.
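The $K$-armed claim is easy to probe empirically; a toy simulation along the following lines (parameter values arbitrary) illustrates how $\varepsilon$-Greedy tends to concentrate its pulls on the risk-free arm even though both arms share the same mean:

```python
import numpy as np

def epsilon_greedy_risk_sim(mu=0.5, sigma=1.0, eps=0.05, horizon=100_000, seed=0):
    """Toy simulation: epsilon-greedy facing a risk-free arm (always pays mu)
    and a risky arm with the same mean mu but standard deviation sigma.
    Returns the fraction of pulls given to the risk-free arm; values well
    above 0.5 illustrate the risk-aversion effect described above."""
    rng = np.random.default_rng(seed)
    draw = [lambda: mu, lambda: mu + sigma * rng.standard_normal()]
    est = np.array([draw[0](), draw[1]()])     # one initial pull of each arm
    n = np.array([1.0, 1.0])
    pulls_safe = 0
    for _ in range(horizon):
        arm = rng.integers(2) if rng.random() < eps else int(np.argmax(est))
        reward = draw[arm]()
        n[arm] += 1
        est[arm] += (reward - est[arm]) / n[arm]   # incremental mean update
        pulls_safe += (arm == 0)
    return pulls_safe / horizon

print(epsilon_greedy_risk_sim())
```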
Recommender systems are facing scrutiny because of their growing impact on the opportunities we have access to. Current audits for fairness are limited to coarse-grained parity assessments at the level of sensitive groups. We propose to audit for envy-freeness, a more granular criterion aligned with individual preferences: every user should prefer their own recommendations to the recommendations of other users. Since the audit requires estimating users' preferences beyond their existing recommendations, we cast the audit as a new pure-exploration problem in multi-armed bandits. We propose a sample-efficient algorithm with theoretical guarantees that it does not deteriorate the user experience. We also study the trade-offs achieved on real-world recommendation datasets.
Bandit algorithms have become a reference solution for interactive recommendation. However, because such algorithms directly interact with users to improve recommendations, serious privacy concerns have been raised regarding their practical use. In this work, we propose a differentially private linear contextual bandit algorithm, via a tree-based mechanism that adds Laplace or Gaussian noise to the model parameters. Our key insight is that, as the model converges during online updates, the global sensitivity of its parameters shrinks over time (hence the name dynamic global sensitivity). Compared with existing solutions, our dynamic global sensitivity analysis allows us to inject less noise to obtain $(\epsilon,\delta)$-differential privacy, with an additional regret of $\tilde{O}(\log{T}\sqrt{T}/\epsilon)$ caused by the noise injection. We provide a rigorous theoretical analysis of the dynamic global sensitivity and the corresponding upper regret bound of our proposed algorithm. Experimental results on synthetic and real-world datasets confirm the algorithm's advantage over existing solutions.
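For context, the tree-based (binary) mechanism mentioned above is a standard building block for privately releasing running sums; the generic sketch below releases scalar prefix sums with only $O(\log T)$ Gaussian noise terms per released sum. It is not the paper's dynamic-sensitivity algorithm, and `sigma` would still need to be calibrated to the target $(\epsilon,\delta)$ level:

```python
import numpy as np

def private_prefix_sums(values, sigma, seed=0):
    """Generic binary-tree mechanism: release all prefix sums of a stream so
    that each released sum aggregates only O(log T) noise terms."""
    rng = np.random.default_rng(seed)
    noisy_block = {}          # (block_size, block_index) -> noisy block sum
    released = []
    for t in range(1, len(values) + 1):
        # Every dyadic block that ends exactly at position t gets fresh noise.
        size = 1
        while t % size == 0:
            start = t - size
            noisy_block[(size, t // size)] = sum(values[start:t]) + rng.normal(0.0, sigma)
            size *= 2
        # Decompose [1, t] into disjoint dyadic blocks (Fenwick-style) and
        # add up their already-noised sums.
        total, pos = 0.0, t
        while pos > 0:
            size = pos & (-pos)              # aligned dyadic block ending at pos
            total += noisy_block[(size, pos // size)]
            pos -= size
        released.append(total)
    return released

print(private_prefix_sums([1.0, 0.0, 1.0, 1.0, 0.0], sigma=0.1))
```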
Multi-armed bandits (MAB) provide a principled online learning approach to attain the balance between exploration and exploitation. Owing to their strong performance and their ability to learn from limited feedback without having to learn how to act across many situations, multi-armed bandits have drawn wide attention in applications such as recommender systems. Likewise, within recommender systems, collaborative filtering (CF) is arguably the earliest and most influential recommendation method. Crucially, new users and an ever-changing pool of recommended items are challenges that recommender systems must address. For collaborative filtering, the classical approach is to train the model offline and then perform online testing, but this approach can no longer handle dynamic changes in user preferences, i.e., the so-called cold start. How, then, can items be recommended to users effectively in the absence of useful information? To address these problems, a multi-armed bandit based collaborative filtering recommender system, named BanditMF, has been proposed. BanditMF is designed to address two challenges in multi-armed bandit algorithms and collaborative filtering: (1) how to solve the cold-start problem for collaborative filtering under conditions of scarce useful information, and (2) how to address the problem of bandit algorithms in strong social-relation domains, which arises from estimating the unknown parameters associated with each user independently and ignoring correlations between users.
Counterfactual reasoning from logged data has become increasingly important for many applications such as web advertising or healthcare. In this paper, we address the problem of learning stochastic policies with continuous actions from the viewpoint of counterfactual risk minimization (CRM). While the CRM framework is appealing and well studied for discrete actions, the continuous action case raises new challenges about modelization, optimization, and offline model selection with real data which turns out to be particularly challenging. Our paper contributes to these three aspects of the CRM estimation pipeline. First, we introduce a modelling strategy based on a joint kernel embedding of contexts and actions, which overcomes the shortcomings of previous discretization approaches. Second, we empirically show that the optimization aspect of counterfactual learning is important, and we demonstrate the benefits of proximal point algorithms and differentiable estimators. Finally, we propose an evaluation protocol for offline policies in real-world logged systems, which is challenging since policies cannot be replayed on test data, and we release a new large-scale dataset along with multiple synthetic, yet realistic, evaluation setups.
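As background for the CRM viewpoint, a clipped inverse-propensity (IPS) counterfactual risk estimate for continuous actions can be sketched as follows; the density callables and the clipping constant are illustrative assumptions, not the paper's estimator:

```python
import numpy as np

def clipped_ips_risk(contexts, logged_actions, losses, logging_density,
                     policy_density, clip=10.0):
    """Sketch of a clipped inverse-propensity counterfactual risk estimate
    for continuous actions.

    logging_density(x, a): density under which action a was logged in context x.
    policy_density(x, a):  density of the policy being evaluated or learned.
    Both are hypothetical callables; the estimate is the importance-weighted
    average loss, with weights clipped to limit variance.
    """
    w = np.array([policy_density(x, a) / logging_density(x, a)
                  for x, a in zip(contexts, logged_actions)])
    w = np.minimum(w, clip)                 # clipping trades bias for variance
    return float(np.mean(w * np.asarray(losses)))
```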
Amazon customer service provides real-time support for millions of customer contacts every year. While Bot-Resolver helps automate some of the traffic, we still see high demand for human agents, also known as subject matter experts (SMEs). Customers contact us about issues in different domains (return policy, device troubleshooting, etc.). Depending on their training, not all SMEs are eligible to handle all contacts. Routing contacts to eligible SMEs is a non-trivial problem because SMEs' domain eligibility is subject to training quality and can change over time. To optimally route contacts while simultaneously learning the true eligibility status, we propose to formulate the routing problem with a nonparametric contextual bandit algorithm (K-Boot) plus an eligibility control (EC) algorithm. K-Boot models the reward using similar samples selected by $K$-NN and uses Bootstrap Thompson Sampling for exploration. EC filters arms (SMEs) by the initially system-claimed eligibility and dynamically validates the reliability of this information. The proposed K-Boot is a general bandit algorithm, and EC is applicable to other bandits. Our simulation studies show that K-Boot performs on par with state-of-the-art bandit models, and that EC boosts K-Boot performance when stochastic eligibility signals exist.
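A rough illustration of the two ingredients named above, a $k$-NN reward estimate combined with bootstrap-style Thompson exploration, in a generic contextual bandit selection step (a sketch of the general idea, not the exact K-Boot procedure):

```python
import numpy as np

def knn_bootstrap_select(context, history, n_arms, k=20, rng=None):
    """Pick an arm by bootstrapping the rewards of the k nearest logged
    contexts for each arm; untried arms get priority so they are explored.

    history: per-arm list of (context_vector, reward) pairs.
    """
    rng = rng or np.random.default_rng()
    sampled = np.empty(n_arms)
    for a in range(n_arms):
        if not history[a]:
            return a                                  # try every arm at least once
        X = np.array([c for c, _ in history[a]])
        r = np.array([y for _, y in history[a]])
        # k nearest neighbours of the current context (Euclidean distance).
        nearest = np.argsort(np.linalg.norm(X - context, axis=1))[:k]
        # Bootstrap resample of the neighbours' rewards gives a posterior-like
        # draw of the arm's local mean reward (bootstrap Thompson sampling).
        boot = rng.choice(r[nearest], size=len(nearest), replace=True)
        sampled[a] = boot.mean()
    return int(np.argmax(sampled))
```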
Dynamic treatment regimes (DTRs) are personalized, adaptive, multi-stage treatment plans that adapt treatment decisions both to an individual's initial features and to intermediate outcomes and features at each subsequent stage, which are themselves affected by decisions in prior stages. Examples include personalized first- and second-line treatments of chronic conditions such as diabetes, cancer, and depression, which adapt to the patient's response to first-line treatment, disease progression, and individual characteristics. While existing literature mostly focuses on estimating the optimal DTR from offline data such as from sequentially randomized trials, we study the problem of developing the optimal DTR in an online manner, where the interaction with each individual affects both our cumulative reward and our data collection for future learning. We term this the DTR bandit problem. We propose a novel algorithm that, by carefully balancing exploration and exploitation, is guaranteed to achieve rate-optimal regret when the transition and reward models are linear. We demonstrate our algorithm and its benefits both in synthetic experiments and in a case study of adaptive treatment of major depressive disorder using real-world data.
We study bandit model selection in stochastic environments. Our approach relies on a meta-algorithm that selects between candidate base algorithms. We develop a meta-algorithm-base algorithm abstraction that can work with general classes of base algorithms and different types of adversarial meta-algorithms. Our methods rely on a novel and generic smoothing transformation for bandit algorithms that permits us to obtain optimal $O(\sqrt{T})$ model selection guarantees for stochastic contextual bandit problems as long as the optimal base algorithm satisfies a high probability regret guarantee. We show through a lower bound that even when one of the base algorithms has $O(\log T)$ regret, in general it is impossible to get better than $\Omega(\sqrt{T})$ regret in model selection, even asymptotically. Using our techniques, we address model selection in a variety of problems such as misspecified linear contextual bandits, linear bandit with unknown dimension and reinforcement learning with unknown feature maps. Our algorithm requires the knowledge of the optimal base regret to adjust the meta-algorithm learning rate. We show that without such prior knowledge any meta-algorithm can suffer a regret larger than the optimal base regret.
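The meta/base interface can be illustrated with a naive EXP3 meta-algorithm that picks one base bandit per round and feeds it the observed reward; this sketch deliberately omits the smoothing transformation on which the paper's guarantees rely, and the base-algorithm interface and the `env` callable are assumptions:

```python
import numpy as np

class Exp3Meta:
    """Naive EXP3 meta-algorithm choosing among base bandit algorithms.

    Each base is assumed to expose select(context) -> arm and
    update(context, arm, reward); rewards are assumed to lie in [0, 1].
    """
    def __init__(self, bases, eta=0.05, gamma=0.05, rng=None):
        self.bases, self.eta, self.gamma = bases, eta, gamma
        self.log_w = np.zeros(len(bases))
        self.rng = rng or np.random.default_rng(0)

    def play_round(self, context, env):
        w = np.exp(self.log_w - self.log_w.max())
        p = (1 - self.gamma) * w / w.sum() + self.gamma / len(self.bases)
        i = int(self.rng.choice(len(self.bases), p=p))
        arm = self.bases[i].select(context)        # the chosen base acts
        reward = env(context, arm)                 # observe bandit feedback
        self.bases[i].update(context, arm, reward) # only that base learns
        self.log_w[i] += self.eta * reward / p[i]  # importance-weighted update
        return arm, reward
```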
Large-scale online recommendation systems must facilitate the allocation of a limited number of items among competing users while learning their preferences from user feedback. As a principled way of incorporating market constraints and user incentives in the design, we consider our objectives to be two-fold: maximal social welfare with minimal instability. To maximize social welfare, our proposed framework enhances the quality of recommendations by exploring allocations that optimistically maximize the rewards. To minimize instability, a measure of users' incentives to deviate from recommended allocations, the algorithm prices the items based on a scheme derived from the Walrasian equilibria. Though it is known that these equilibria yield stable prices for markets with known user preferences, our approach accounts for the inherent uncertainty in the preferences and further ensures that the users accept their recommendations under offered prices. To the best of our knowledge, our approach is the first to integrate techniques from combinatorial bandits, optimal resource allocation, and collaborative filtering to obtain an algorithm that achieves sub-linear social welfare regret as well as sub-linear instability. Empirical studies on synthetic and real-world data also demonstrate the efficacy of our strategy compared to approaches that do not fully incorporate all these aspects.