In this paper we study the contextual bandit problem (also known as the multi-armed bandit problem with expert advice) for linear payoff functions. For $T$ rounds, $K$ actions, and $d$-dimensional feature vectors, we prove an $O(\sqrt{Td\ln^3(KT\ln(T)/\delta)})$ regret bound that holds with probability $1-\delta$ for the simplest known (both conceptually and computationally) efficient upper confidence bound algorithm for this problem. We also prove a lower bound of $\Omega(\sqrt{Td})$ for this setting, matching the upper bound up to logarithmic factors.
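As an illustration of the kind of upper confidence bound algorithm the abstract refers to, here is a minimal LinUCB-style sketch. It is a hedged simplification, not the paper's exact algorithm: `alpha` is a fixed confidence width rather than the paper's round-dependent one, and the `contexts`/`reward` callbacks are hypothetical.

```python
import numpy as np

def linucb(contexts, reward, T, d, alpha=1.0):
    """LinUCB-style loop: ridge-regression estimate of the unknown
    parameter plus an upper-confidence bonus per arm."""
    A = np.eye(d)            # regularized Gram matrix: I + sum of x x^T
    b = np.zeros(d)          # reward-weighted feature sum
    for t in range(T):
        X = contexts(t)      # (K, d) arm feature matrix for this round
        A_inv = np.linalg.inv(A)
        theta = A_inv @ b    # ridge estimate of the parameter
        # confidence bonus: alpha * sqrt(x^T A^{-1} x) for each arm
        bonus = np.sqrt(np.einsum('kd,de,ke->k', X, A_inv, X))
        k = int(np.argmax(X @ theta + alpha * bonus))  # optimistic arm
        r = reward(X[k])     # observe reward for the chosen arm
        A += np.outer(X[k], X[k])
        b += r * X[k]
    return A, b              # A^{-1} b is the final parameter estimate
```

On a synthetic linear payoff, the ridge estimate recovered from `A` and `b` converges toward the true parameter as rounds accumulate.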
Thompson Sampling is one of the oldest heuristics for multi-armed bandit problems. It is a randomized algorithm based on Bayesian ideas, and has recently generated significant interest after several studies demonstrated it to have better empirical performance than state-of-the-art methods. However, many questions regarding its theoretical performance remained open. In this paper, we design and analyze a generalization of the Thompson Sampling algorithm for the stochastic contextual multi-armed bandit problem with linear payoff functions, when the contexts are provided by an adaptive adversary. This is among the most important and widely studied versions of the contextual bandits problem. We provide the first theoretical guarantees for the contextual version of Thompson Sampling. We prove a high-probability regret bound of $\tilde{O}(d^{3/2}\sqrt{T})$ (or $\tilde{O}(d\sqrt{T\log N})$), which is the best regret bound achieved by any computationally efficient algorithm for this problem, and is within a factor of $\sqrt{d}$ (or $\sqrt{\log N}$) of the information-theoretic lower bound for this problem.
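A hedged sketch of the Gaussian variant of Thompson Sampling for linear payoffs described in this line of work: draw a parameter from a Gaussian posterior around the ridge estimate and play the arm that is best under the draw. The helper name and the exploration scale `v` are our own choices, not the paper's notation.

```python
import numpy as np

def lin_ts_step(A, b, X, v=1.0, rng=None):
    """One round of Thompson Sampling for linear payoffs: sample a
    parameter from N(A^{-1} b, v^2 A^{-1}) and play the arm whose
    feature vector scores highest under the sampled parameter."""
    rng = rng if rng is not None else np.random.default_rng()
    A_inv = np.linalg.inv(A)
    theta = rng.multivariate_normal(A_inv @ b, v ** 2 * A_inv)
    return int(np.argmax(X @ theta))
```

The caller maintains the statistics between rounds: after observing reward `r` for the chosen arm's features `x`, update `A += np.outer(x, x)` and `b += r * x`.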
We study bandit model selection in stochastic environments. Our approach relies on a meta-algorithm that selects between candidate base algorithms. We develop a meta-algorithm-base algorithm abstraction that can work with general classes of base algorithms and different types of adversarial meta-algorithms. Our methods rely on a novel and generic smoothing transformation for bandit algorithms that permits us to obtain optimal $O(\sqrt{T})$ model selection guarantees for stochastic contextual bandit problems as long as the optimal base algorithm satisfies a high probability regret guarantee. We show through a lower bound that even when one of the base algorithms has $O(\log T)$ regret, in general it is impossible to get better than $\Omega(\sqrt{T})$ regret in model selection, even asymptotically. Using our techniques, we address model selection in a variety of problems such as misspecified linear contextual bandits, linear bandits with unknown dimension, and reinforcement learning with unknown feature maps. Our algorithm requires the knowledge of the optimal base regret to adjust the meta-algorithm learning rate. We show that without such prior knowledge any meta-algorithm can suffer a regret larger than the optimal base regret.
We study the optimal batch-regret tradeoff for batch linear contextual bandits. For any batch number $M$, number of actions $K$, time horizon $T$, and dimension $d$, we provide an algorithm and prove its regret guarantee, which, due to technical reasons, features a two-phase expression as the time horizon $T$ grows. We also prove a lower bound theorem that surprisingly shows the optimality of our two-phase regret upper bound (up to logarithmic factors) in the full range of the problem parameters, therefore establishing the exact batch-regret tradeoff. Compared to the recent work \citep{ruan2020linear}, which showed that $M = O(\log\log T)$ batches suffice to achieve the asymptotically minimax-optimal regret without the batch constraint, our algorithm is simpler and easier to implement in practice. Furthermore, our algorithm achieves the optimal regret for all $T \geq d$, while \citep{ruan2020linear} requires $T$ to be greater than an unrealistically large polynomial of $d$. Along the way, we also prove a new matrix concentration inequality with dependence on dynamic upper bounds, which, to the best of our knowledge, is the first of its kind in the literature and may be of independent interest.
Recently, a multi-agent variant of the classical multi-armed bandit was proposed to resolve fairness issues in online learning. Inspired by a long line of work in social choice and economics, the goal is to optimize the Nash social welfare instead of the total utility. Unfortunately, previous algorithms either are not efficient or achieve sub-optimal regret in terms of the number of rounds $T$. We propose a new efficient algorithm with lower regret than even the previous inefficient ones. For $N$ agents, $K$ arms, and $T$ rounds, our approach has a regret bound of $\tilde{O}(\sqrt{NKT} + NK)$. This is an improvement over the previous approach, which has a regret bound of $\tilde{O}(\min(NK, \sqrt{N}K^{3/2})\sqrt{T})$. We also complement our efficient algorithm with an inefficient approach with $\tilde{O}(\sqrt{KT} + N^2K)$ regret. Experimental findings confirm the effectiveness of our efficient algorithm compared to previous approaches.
We show how a standard tool from statistics, namely confidence bounds, can be used to elegantly deal with situations which exhibit an exploitation-exploration trade-off. Our technique for designing and analyzing algorithms for such situations is general and can be applied when an algorithm has to make exploitation-versus-exploration decisions based on uncertain information provided by a random process. We apply our technique to two models with such an exploitation-exploration trade-off. For the adversarial bandit problem with shifting, our new algorithm suffers only $\tilde{O}(\sqrt{ST})$ regret with high probability over $T$ trials with $S$ shifts. Such a regret bound was previously known only in expectation. The second model we consider is associative reinforcement learning with linear value functions. For this model our technique improves the regret from $\tilde{O}(T^{3/4})$ to $\tilde{O}(T^{1/2})$.
We investigate a nonstochastic bandit setting in which the loss of an action is not immediately charged to the player, but rather spread over subsequent rounds in an adversarial way. The instantaneous loss observed by the player at the end of each round is then a sum of many loss components of previously played actions. This setting encompasses as a special case the easier task of bandits with delayed feedback, a well-studied framework where the player observes the delayed losses individually. Our first contribution is a general reduction transforming a standard bandit algorithm into one that can operate in this harder setting: we bound the regret of the transformed algorithm in terms of the stability and regret of the original algorithm. Then, we show that the transformation of a suitably tuned FTRL with Tsallis entropy has a regret of order $\sqrt{(d+1)KT}$, where $d$ is the maximum delay, $K$ is the number of arms, and $T$ is the time horizon. Finally, we show that our results cannot be improved in general by exhibiting a matching (up to a log factor) lower bound on the regret of any algorithm operating in this setting.
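The FTRL-with-Tsallis-entropy distribution mentioned above can be computed numerically. The following sketch uses the standard $1/2$-Tsallis parametrization, under which the played distribution satisfies $p_i = 4/(\eta(L_i - x))^2$ for a normalizer $x < \min_i L_i$; the function name and the binary-search bracket are our own choices, not taken from the paper.

```python
import numpy as np

def tsallis_probs(L, eta):
    """Sampling distribution of FTRL with 1/2-Tsallis entropy:
    p_i = 4 / (eta * (L_i - x))^2, where L_i are cumulative
    (importance-weighted) loss estimates and the normalizer
    x < min_i L_i is found by binary search so sum(p) = 1."""
    L = np.asarray(L, dtype=float)
    K = len(L)
    lo = L.min() - 2.0 * np.sqrt(K) / eta - 1.0  # here sum(p) < 1
    hi = L.min() - 1e-12                         # sum(p) blows up as x -> min L
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if np.sum(4.0 / (eta * (L - mid)) ** 2) > 1.0:
            hi = mid
        else:
            lo = mid
    p = 4.0 / (eta * (L - lo)) ** 2
    return p / p.sum()  # tiny renormalization for numerical safety
```

A bandit round then samples arm `i` from `p`, observes loss `l`, and updates the estimate `L[i] += l / p[i]`; arms with larger cumulative loss receive smaller probability.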
In the classical contextual bandits problem, in each round $t$ a learner observes some context $c$, chooses some action $i$ to perform, and receives some reward $r_{i,t}(c)$. We consider the variant of this problem where in addition to receiving the reward $r_{i,t}(c)$, the learner also learns the values of $r_{i,t}(c')$ for some other contexts $c'$ in a set $\mathcal{O}_i(c)$; i.e., the rewards that would have been achieved by performing that action under the different contexts $c' \in \mathcal{O}_i(c)$. This variant arises in several strategic settings, such as learning how to bid in non-truthful repeated auctions, which has gained much attention lately as many platforms have switched to running first-price auctions. We call this problem the contextual bandits problem with cross-learning. The best algorithms for the classical contextual bandits problem achieve $\tilde{O}(\sqrt{CKT})$ regret against all stationary policies, where $C$ is the number of contexts, $K$ the number of actions, and $T$ the number of rounds. We design and analyze new algorithms for the contextual bandits problem with cross-learning and show that their regret has better dependence on the number of contexts. Under complete cross-learning, where the rewards for all contexts are learned when choosing an action, i.e., the set $\mathcal{O}_i(c)$ contains all contexts, we show that our algorithms achieve regret $\tilde{O}(\sqrt{KT})$, removing the dependence on $C$. In any other case, i.e., under partial cross-learning where $|\mathcal{O}_i(c)| < C$ for some $(i,c)$, the regret bound depends on how the sets $\mathcal{O}_i(c)$ affect the degree of cross-learning between contexts. We simulate our algorithms on real auction data from an ad exchange running first-price auctions and show that they outperform traditional contextual bandit algorithms.
The multi-armed bandit problem is a popular model for studying the exploration/exploitation trade-off in sequential decision problems. Many algorithms are now available for this well-studied problem. One of the earliest algorithms, given by W. R. Thompson, dates back to 1933. This algorithm, referred to as Thompson Sampling, is a natural Bayesian algorithm. The basic idea is to choose an arm to play according to its probability of being the best arm. The Thompson Sampling algorithm has experimentally been shown to be close to optimal. In addition, it is efficient to implement and exhibits several desirable properties such as small regret for delayed feedback. However, theoretical understanding of this algorithm was quite limited. In this paper, for the first time, we show that the Thompson Sampling algorithm achieves logarithmic expected regret for the stochastic multi-armed bandit problem. More precisely, for the stochastic two-armed bandit problem, the expected regret in time $T$ is $O(\frac{\ln T}{\Delta} + \frac{1}{\Delta^3})$. And, for the stochastic $N$-armed bandit problem, the expected regret in time $T$ is $O\big(\big(\sum_{i=2}^{N}\frac{1}{\Delta_i^2}\big)^2 \ln T\big)$. Our bounds are optimal except for the dependence on $\Delta_i$ and the constant factors in big-Oh.
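The "choose an arm according to its probability of being the best" idea above has a standard Beta-Bernoulli implementation; a minimal sketch follows (the function name and the uniform Beta(1,1) prior are conventional choices, not taken verbatim from the paper).

```python
import numpy as np

def thompson_bernoulli(means, T, rng=None):
    """Thompson Sampling for Bernoulli arms: maintain a Beta(s+1, f+1)
    posterior per arm, draw one sample from each posterior, and play
    the argmax. Each arm is thus played with (roughly) its posterior
    probability of being the best arm."""
    rng = rng if rng is not None else np.random.default_rng(0)
    K = len(means)
    succ, fail = np.zeros(K), np.zeros(K)
    pulls = np.zeros(K, dtype=int)
    for _ in range(T):
        k = int(np.argmax(rng.beta(succ + 1, fail + 1)))
        r = rng.random() < means[k]  # simulated Bernoulli reward
        succ[k] += r
        fail[k] += 1 - r
        pulls[k] += 1
    return pulls
```

Run on two arms with a large gap, the vast majority of pulls concentrate on the better arm, consistent with the logarithmic-regret behavior the abstract proves.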
We consider a multi-armed bandit setting where, at the beginning of each round, the learner receives noisy, independent, and possibly biased evaluations of the true reward of each of the $K$ arms, and it selects an arm with the objective of accumulating as much reward as possible over $T$ rounds. Under the assumption that at each round the true reward of each arm is drawn from a fixed distribution, we derive different algorithmic approaches and theoretical guarantees depending on how the evaluations are generated. First, we show a $\widetilde{O}(T^{2/3})$ regret in the general case when the observation functions are a generalized linear function of the true rewards. On the other hand, we show that an improved $\widetilde{O}(\sqrt{T})$ regret can be derived when the observation functions are noisy linear functions of the true rewards. Finally, we report an empirical validation that confirms our theoretical findings, provides a thorough comparison to alternative approaches, and further supports the interest of this setting in practice.
We study the problem of dynamic regret minimization in $K$-armed dueling bandits under non-stationary or time-varying preferences. This is an online learning setup where the agent chooses a pair of items at each round and observes only a relative binary "win-loss" feedback for this pair, sampled from an underlying preference matrix at that round. We first study the problem of static regret minimization for adversarial preference sequences and design an efficient algorithm with $O(\sqrt{KT})$ high-probability regret. We next use similar algorithmic ideas to propose an efficient and provably optimal algorithm for dynamic regret minimization under two notions of non-stationarity. In particular, we establish $\tilde{O}(\sqrt{SKT})$ and $\tilde{O}(V_T^{1/3}K^{1/3}T^{2/3})$ dynamic regret guarantees, where $S$ is the total number of "effective switches" in the underlying preference relations and $V_T$ is a measure of "continuous variation" non-stationarity. Despite the practical relevance of non-stationary environments in real-world systems, the complexity of these problems had not been studied prior to this work. We justify the optimality of our algorithms by proving matching lower bound guarantees under both the above non-stationarity notions. Finally, we corroborate our results with extensive simulations and compare the efficacy of our algorithms to state-of-the-art baselines.
In this paper, we consider the contextual multi-armed bandit problem with linear payoffs under a risk-averse criterion. At each round, contexts are revealed for each arm, and the decision maker chooses one arm to pull and receives the corresponding reward. In particular, we consider mean-variance as the risk criterion, and the best arm is the one with the largest mean-variance reward. We apply the Thompson Sampling algorithm for the disjoint model and provide a comprehensive regret analysis for a variant of the proposed algorithm. For $T$ rounds, $K$ actions, and $d$-dimensional feature vectors, we prove a regret bound of $O((1+\rho+\frac{1}{\rho})d\ln T\ln\frac{K}{\delta}\sqrt{dKT^{1+2\epsilon}\ln\frac{K}{\delta}\frac{1}{\epsilon}})$ that holds with probability $1-\delta$ under the mean-variance criterion with risk tolerance $\rho$, for any $0<\epsilon<\frac{1}{2}$, $0<\delta<1$. The empirical performance of our proposed algorithm is demonstrated via a portfolio selection problem.
We consider the problem of optimizing a black-box function based on noisy bandit feedback. Kernelized bandit algorithms have shown strong empirical and theoretical performance for this problem. However, they heavily rely on the assumption that the model is well-specified and can fail without it. Instead, we introduce a misspecified kernelized bandit setting where the unknown function can be $\epsilon$-uniformly approximated by a function with a bounded norm in some Reproducing Kernel Hilbert Space (RKHS). We design efficient and practical algorithms whose performance degrades minimally in the presence of model misspecification. Specifically, we present two algorithms based on Gaussian process (GP) methods: an optimistic EC-GP-UCB algorithm that requires knowing the misspecification error, and Phased GP Uncertainty Sampling, an elimination algorithm able to adapt to unknown model misspecification. We provide upper bounds on their cumulative regret in terms of $\epsilon$, the time horizon, and the underlying kernel, and we show that our algorithm achieves optimal dependence on $\epsilon$ with no prior knowledge of the misspecification. In addition, in a stochastic contextual setting, we show that EC-GP-UCB can be effectively combined with the regret balancing strategy and attain similar regret bounds despite not knowing $\epsilon$.
We propose a novel algorithm for linear contextual bandits with $O(\sqrt{dT\log T})$ regret, where $d$ is the dimension of the contexts and $T$ is the time horizon. Our proposed algorithm is equipped with a novel estimator in which exploration is embedded through explicit randomization. Depending on the randomization, our proposed estimator takes contributions either from the contexts of all arms or from those of the selected arms. We establish a self-normalized bound for our estimator, which allows a novel decomposition of the cumulative regret into additive dimension-dependent terms instead of multiplicative terms. We also prove a novel lower bound of $\Omega(\sqrt{dT})$ under our problem setting. Hence, the regret of our proposed algorithm matches the lower bound up to logarithmic factors. Numerical experiments support the theoretical guarantees and show that our proposed method outperforms existing linear bandit algorithms.
We study an extension of the standard bandit problem in which there are multiple layers of experts. Experts are selected layer by layer, and only the experts in the last layer play arms. The goal of the learning policy is to minimize the total regret in this hierarchical expert setting. We first analyze the case where the total regret grows linearly with the number of layers. We then focus on the case where all experts play an upper confidence bound (UCB) strategy and give several sub-linear upper bounds for different circumstances. Finally, we design experiments to assist the regret analysis of the general case of the hierarchical UCB structure and show the practical significance of our theoretical results. This paper provides many insights into reasonable hierarchical decision structures.
We consider the stochastic linear contextual bandit problem with high-dimensional features. We analyze the Thompson sampling (TS) algorithm, using special classes of sparsity-inducing priors (e.g. spike-and-slab) to model the unknown parameter, and provide a nearly optimal upper bound on the expected cumulative regret. To the best of our knowledge, this is the first work that provides theoretical guarantees of Thompson sampling in high-dimensional and sparse contextual bandits. For faster computation, we use a spike-and-slab prior to model the unknown parameter and variational inference instead of MCMC to approximate the posterior distribution. Extensive simulations demonstrate the improved performance of our proposed algorithm over existing ones.
We study the linear contextual bandit problem in the presence of adversarial corruption, where the reward at each round is corrupted by an adversary, and the corruption level (i.e., the sum of corruption magnitudes over the horizon) is $C \geq 0$. The best-known algorithms in this setting are limited in that they either are computationally inefficient, require strong assumptions on the corruption, or their regret is at least $C$ times worse than the regret without corruption. In this paper, to overcome these limitations, we propose a new algorithm based on the principle of optimism in the face of uncertainty. At the core of our algorithm is a weighted ridge regression where the weight of each chosen action depends on its confidence up to some threshold. We show that for both the known-$C$ and unknown-$C$ cases, our algorithm with a proper choice of hyperparameters achieves a regret that nearly matches the lower bounds. Thus, our algorithm is nearly optimal up to logarithmic factors for both cases. Notably, our algorithm achieves the near-optimal regret for both the corrupted and uncorrupted cases ($C=0$) simultaneously.
In online learning problems, exploiting low variance plays an important role in obtaining tight performance guarantees, yet is challenging because variances are often not known a priori. Recently, considerable progress was made by Zhang et al. (2021), who obtained a variance-adaptive regret bound for linear bandits without knowledge of the variances and a horizon-free regret bound for linear mixture Markov decision processes (MDPs). In this paper, we present novel analyses that improve their regret bounds significantly. For linear bandits, we achieve $\tilde{O}(d^{1.5}\sqrt{\sum_{k=1}^{K}\sigma_k^2} + d^2)$, where $d$ is the dimension of the features, $K$ is the time horizon, $\sigma_k^2$ is the noise variance at time step $k$, and $\tilde{O}$ ignores polylogarithmic dependencies, which is a factor of $d^3$ improvement. For linear mixture MDPs, we achieve a horizon-free regret bound of $\tilde{O}(d^{1.5}\sqrt{K} + d^3)$, where $d$ is the number of base models and $K$ is the number of episodes. This is a factor of $d^3$ improvement in the leading term and $d^6$ in the lower-order term. Our analysis critically relies on a novel elliptical potential "count" lemma. This lemma allows a peeling-based regret analysis, which can be of independent interest.
In this paper, we address the stochastic contextual linear bandit problem, where a decision maker is provided a context (a random set of actions drawn from a distribution). The expected reward of each action is specified by the inner product of the action and an unknown parameter. The goal is to design an algorithm that learns to play as close as possible to the unknown optimal policy after a number of action plays. This problem is considered more challenging than the linear bandit problem, which can be viewed as a contextual bandit problem with a \emph{fixed} context. Surprisingly, in this paper, we show that the stochastic contextual problem can be solved as if it is a linear bandit problem. In particular, we establish a novel reduction framework that converts every stochastic contextual linear bandit instance to a linear bandit instance, when the context distribution is known. When the context distribution is unknown, we establish an algorithm that reduces the stochastic contextual instance to a sequence of linear bandit instances with small misspecifications and achieves nearly the same worst-case regret bound as the algorithm that solves the misspecified linear bandit instances. As a consequence, our results imply a $O(d\sqrt{T\log T})$ high-probability regret bound for contextual linear bandits, making progress in resolving an open problem in (Li et al., 2019), (Li et al., 2021). Our reduction framework opens up a new way to approach stochastic contextual linear bandit problems, and enables improved regret bounds in a number of instances including the batch setting, contextual bandits with misspecifications, contextual bandits with sparse unknown parameters, and contextual bandits with adversarial corruption.
Stochastic generalized linear bandits are a well-understood model for sequential decision-making problems, with many algorithms achieving near-optimal regret under immediate feedback. However, the requirement that rewards are observed immediately does not hold in many real-world applications. In this setting, standard algorithms are no longer understood. We study the phenomenon of delayed rewards in a theoretical manner by introducing a delay between selecting an action and receiving the reward. Subsequently, we show that an algorithm based on the optimistic principle improves on existing approaches for this setting by eliminating the need for prior knowledge of the delay distribution and by relaxing assumptions on the decision set and the delays. This also leads to improving the regret guarantees from $\widetilde{O}(\sqrt{dT}\sqrt{d+\mathbb{E}[\tau]})$ to $\widetilde{O}(d\sqrt{T} + d^{3/2}\mathbb{E}[\tau])$, where $\mathbb{E}[\tau]$ denotes the expected delay, $d$ is the dimension, $T$ the time horizon, and we have suppressed logarithmic terms. We verify our theoretical results through experiments on simulated data.