We study reinforcement learning (RL) with linear function approximation. For episodic time-inhomogeneous linear Markov decision processes (linear MDPs) whose transition dynamics can be parameterized as a linear function of a given feature mapping, we propose the first computationally efficient algorithm that achieves the nearly minimax optimal regret $\tilde O(d\sqrt{H^3K})$, where $d$ is the dimension of the feature mapping, $H$ is the planning horizon, and $K$ is the number of episodes. Our algorithm is based on a weighted linear regression scheme with a carefully designed weight, which depends on a new variance estimator that (1) directly estimates the variance of the \emph{optimal} value function, (2) monotonically decreases with respect to the number of episodes to ensure better estimation accuracy, and (3) uses a rare-switching policy to update the value function estimator to control the complexity of the estimated value function class. Our work provides a complete answer to the problem of optimal RL with linear MDPs, and the developed algorithm and theoretical tools may be of independent interest.
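To make the weighting scheme concrete, here is a minimal NumPy sketch of variance-weighted ridge regression with an optimistic elliptical bonus. The clipping floor, bonus coefficient, and value cap below are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def weighted_ridge_update(phis, targets, var_estimates, lam=1.0):
    """Variance-weighted ridge regression for one stage h.

    phis:          (n, d) array of feature vectors phi(s_k, a_k).
    targets:       (n,) array of regression targets (e.g., estimated V_{h+1}(s'_k)).
    var_estimates: (n,) array of estimated variances of the targets.
    Returns the weighted least-squares solution and the weighted Gram matrix.
    """
    d = phis.shape[1]
    # Weight each sample by the inverse of its (clipped) estimated variance,
    # so low-variance transitions contribute more to the estimate.
    weights = 1.0 / np.maximum(var_estimates, 1e-2)  # the clipping floor is an illustrative choice
    Sigma = lam * np.eye(d) + (phis * weights[:, None]).T @ phis
    b = (phis * weights[:, None]).T @ targets
    theta_hat = np.linalg.solve(Sigma, b)
    return theta_hat, Sigma

def optimistic_q(phi, theta_hat, Sigma, beta=1.0, cap=1.0):
    """Optimistic Q-value: linear estimate plus an elliptical-norm exploration bonus."""
    bonus = beta * np.sqrt(phi @ np.linalg.solve(Sigma, phi))
    return min(phi @ theta_hat + bonus, cap)  # truncate to the value range
```

Down-weighting high-variance transitions concentrates the estimator on more reliable samples, which is what the carefully designed weights in the abstract are meant to achieve.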
We study federated contextual linear bandits, where $M$ agents cooperate with each other, with the help of a central server, to solve a global contextual linear bandit problem. We consider the asynchronous setting, where all agents work independently and the communication between one agent and the server does not trigger communication between other agents and the server. We propose a simple algorithm based on the principle of optimism, named \texttt{FedLinUCB}. We prove that the regret of \texttt{FedLinUCB} is bounded by $\tilde{O}(d\sqrt{\sum_{m=1}^{M} T_m})$ and its communication complexity is $\tilde{O}(dM^2)$, where $d$ is the dimension of the context vectors and $T_m$ is the total number of interactions with the environment by the $m$-th agent. To the best of our knowledge, this is the first provably efficient algorithm that allows fully asynchronous communication for federated contextual linear bandits while achieving the same regret guarantee as in the single-agent setting.
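The following Python sketch illustrates the general asynchronous paradigm the abstract describes: each agent acts using its last downloaded server statistics plus a local buffer, and it synchronizes with the server only when a determinant-ratio trigger fires. The trigger, confidence radius, and class structure here are illustrative assumptions rather than FedLinUCB's exact protocol.

```python
import numpy as np

class Server:
    def __init__(self, d):
        self.A = np.zeros((d, d))  # aggregated Gram matrix (without regularizer)
        self.b = np.zeros(d)

class Agent:
    """One client in an asynchronous federated LinUCB-style protocol (schematic)."""
    def __init__(self, d, lam=1.0, beta=1.0, sync_thresh=1.1):
        self.d, self.lam, self.beta, self.sync_thresh = d, lam, beta, sync_thresh
        self.A_new = np.zeros((d, d))  # local data not yet uploaded
        self.b_new = np.zeros(d)
        self.A_srv = np.zeros((d, d))  # last downloaded server statistics
        self.b_srv = np.zeros(d)

    def act(self, contexts):
        A = self.lam * np.eye(self.d) + self.A_srv + self.A_new
        A_inv = np.linalg.inv(A)
        theta = A_inv @ (self.b_srv + self.b_new)
        ucb = contexts @ theta + self.beta * np.sqrt(
            np.einsum('ij,jk,ik->i', contexts, A_inv, contexts))
        return int(np.argmax(ucb))

    def update(self, x, reward, server):
        self.A_new += np.outer(x, x)
        self.b_new += reward * x
        # Determinant-ratio trigger: communicate only when local data is informative enough,
        # independently of what the other agents are doing.
        A_old = self.lam * np.eye(self.d) + self.A_srv
        if np.linalg.det(A_old + self.A_new) > self.sync_thresh * np.linalg.det(A_old):
            server.A += self.A_new
            server.b += self.b_new
            self.A_srv, self.b_srv = server.A.copy(), server.b.copy()
            self.A_new, self.b_new = np.zeros((self.d, self.d)), np.zeros(self.d)
```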
We study the linear contextual bandit problem in the presence of adversarial corruption, where the reward at each round is corrupted by an adversary and the corruption level (i.e., the total amount of corruption over the horizon) is $C \geq 0$. The best-known algorithms in this setting are limited in that they are either computationally inefficient, require strong assumptions on the corruption, or incur a regret that is at least $C$ times worse than the regret without corruption. In this paper, to overcome these limitations, we propose a new algorithm based on the principle of optimism in the face of uncertainty. At the core of our algorithm is a weighted ridge regression where the weight of each chosen action depends on its confidence, up to a certain threshold. We show that for both the known-$C$ and unknown-$C$ cases, our algorithm with a proper choice of hyperparameter achieves a regret that nearly matches the lower bounds. Thus, our algorithm is nearly optimal up to logarithmic factors in both cases. Notably, our algorithm achieves near-optimal regret for both the corrupted and uncorrupted ($C = 0$) cases simultaneously.
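A minimal Python sketch of the confidence-based weighting described above: each chosen action is down-weighted in the ridge regression according to its elliptical confidence width, truncated at a threshold of one. The exact weight rule and the choice of the hyperparameter $\alpha$ are paper-specific; the values below are placeholders.

```python
import numpy as np

def confidence_weight(x, Sigma_inv, alpha=1.0):
    """Weight for a newly pulled arm: down-weight highly uncertain contexts,
    but never exceed 1 (the threshold).  alpha is a tunable hyperparameter
    (e.g., set from the corruption level C when C is known)."""
    uncertainty = np.sqrt(x @ Sigma_inv @ x)  # elliptical confidence width
    return min(1.0, alpha / max(uncertainty, 1e-12))

def update_statistics(Sigma, b, x, reward, weight):
    """Weighted ridge-regression statistics update after observing a reward."""
    Sigma = Sigma + weight * np.outer(x, x)
    b = b + weight * reward * x
    return Sigma, b
```

The intuition is that an adversary gains the most by corrupting rounds where the learner is most uncertain, so capping the influence of those rounds limits the total damage to roughly $C$ times the weight cap.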
We study the stochastic shortest path (SSP) problem in reinforcement learning with linear function approximation, where the transition kernel is represented as a linear mixture of unknown models. We call this class of SSP problems the linear mixture SSP. We propose a novel algorithm with a Hoeffding-type confidence set for learning the linear mixture SSP, which achieves an $\tilde{\mathcal{O}}(d B_{\star}^{1.5}\sqrt{K/c_{\min}})$ regret. Here $K$ is the number of episodes, $d$ is the dimension of the feature mapping in the mixture model, $B_{\star}$ bounds the expected cumulative cost of the optimal policy, and $c_{\min} > 0$ is the lower bound of the cost function. Our algorithm also applies to the case when $c_{\min} = 0$, where it achieves an $\tilde{\mathcal{O}}(K^{2/3})$ regret. To the best of our knowledge, this is the first algorithm with a sublinear regret guarantee for the linear mixture SSP. Moreover, we design a refined Bernstein-type confidence set and propose an improved algorithm, which achieves an $\tilde{\mathcal{O}}(d B_{\star}\sqrt{K/c_{\min}})$ regret. To complement the regret upper bounds, we also prove a lower bound of $\Omega(d B_{\star}\sqrt{K})$. Hence, our improved algorithm matches the lower bound up to a $1/\sqrt{c_{\min}}$ factor and poly-logarithmic factors, achieving a nearly optimal regret guarantee.
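For concreteness, here is a small Python sketch of how a linear mixture parameter is typically estimated by value-targeted ridge regression, together with a membership test for a Hoeffding-type ellipsoidal confidence set. The exact regression targets and the confidence radius $\beta$ used in the paper differ, so $\beta$ is left as an input here.

```python
import numpy as np

def estimate_mixture_parameter(features, targets, lam=1.0):
    """Ridge-regression estimate of the mixture parameter theta.

    features: (n, d) rows of phi_V(s, a) = sum_{s'} phi(s, a, s') * V(s').
    targets:  (n,) realized next-state values V(s') (a value-targeted-regression
              convention; the paper's construction may differ).
    """
    d = features.shape[1]
    Sigma = lam * np.eye(d) + features.T @ features
    theta_hat = np.linalg.solve(Sigma, features.T @ targets)
    return theta_hat, Sigma

def in_confidence_set(theta, theta_hat, Sigma, beta):
    """Hoeffding-type ellipsoid test: ||theta - theta_hat||_Sigma <= beta."""
    diff = theta - theta_hat
    return np.sqrt(diff @ Sigma @ diff) <= beta
```

A Bernstein-type confidence set follows the same template but shrinks $\beta$ using empirical variance information, which is what yields the improved $B_{\star}$ dependence.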
We study reinforcement learning (RL) with linear function approximation. Existing algorithms for this problem only have high-probability regret and/or probably approximately correct (PAC) sample complexity guarantees, which cannot guarantee convergence to the optimal policy. In this paper, to overcome the limitations of existing algorithms, we propose a new algorithm called FLUTE, which enjoys uniform-PAC convergence to the optimal policy with high probability. The uniform-PAC guarantee is the strongest guarantee for reinforcement learning in the literature, as it directly implies both PAC and high-probability regret bounds, making our algorithm superior to all existing algorithms with linear function approximation. At the core of our algorithm is a novel minimax value function estimator and a multi-level partition scheme for selecting training samples from historical observations. Both techniques are new and of independent interest.
We study the reinforcement learning problem for discounted Markov decision processes (MDPs) under the tabular setting. We propose a model-based algorithm named UCBVI-$\gamma$, which is based on the \emph{optimism in the face of uncertainty} principle and a Bernstein-type bonus. We show that UCBVI-$\gamma$ achieves an $\tilde{O}\big(\sqrt{SAT}/(1-\gamma)^{1.5}\big)$ regret, where $S$ is the number of states, $A$ is the number of actions, $\gamma$ is the discount factor, and $T$ is the number of steps. In addition, we construct a class of hard MDPs and show that for any algorithm, the expected regret is at least $\tilde{\Omega}\big(\sqrt{SAT}/(1-\gamma)^{1.5}\big)$. Our upper bound matches the lower bound up to logarithmic factors, which suggests that UCBVI-$\gamma$ is nearly minimax optimal for discounted MDPs.
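As a concrete illustration of model-based optimism with a variance-dependent bonus, here is a schematic NumPy sketch of optimistic value iteration for a discounted MDP. The bonus constants and logarithmic terms are simplified placeholders and do not reproduce UCBVI-$\gamma$'s exact bonus.

```python
import numpy as np

def optimistic_value_iteration(counts, rewards_sum, gamma, t, n_iters=200, c=1.0):
    """One planning step of a UCBVI-style algorithm for discounted MDPs (schematic).

    counts:      (S, A, S) visit counts N(s, a, s').
    rewards_sum: (S, A) accumulated rewards, assumed bounded in [0, 1].
    Returns an optimistic Q-function computed with a simplified Bernstein-style bonus.
    """
    S, A, _ = counts.shape
    n_sa = np.maximum(counts.sum(axis=2), 1)
    p_hat = counts / n_sa[:, :, None]           # empirical transition model
    r_hat = rewards_sum / n_sa                   # empirical mean reward
    v_max = 1.0 / (1.0 - gamma)                  # value range for discounted MDPs

    V = np.zeros(S)
    for _ in range(n_iters):
        ev = p_hat @ V                           # expected next-state value
        var = np.maximum(p_hat @ (V ** 2) - ev ** 2, 0.0)  # empirical variance of V
        bonus = c * np.sqrt(var * np.log(t + 1) / n_sa) + c * v_max * np.log(t + 1) / n_sa
        Q = np.minimum(r_hat + gamma * ev + bonus, v_max)
        V = Q.max(axis=1)
    return Q
```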
Designing experiments often requires balancing between learning about the true treatment effects and earning from allocating more samples to the superior treatment. While optimal algorithms for the Multi-Armed Bandit Problem (MABP) provide allocation policies that optimally balance learning and earning, they tend to be computationally expensive. The Gittins Index (GI) is a solution to the MABP that can simultaneously attain the optimality and computational-efficiency goals, and it has recently been used in experiments with Bernoulli and Gaussian rewards. For the first time, we present a modification of the GI rule that can be used in experiments with exponentially distributed rewards. We report its performance in simulated 2-armed and 3-armed experiments. Compared to traditional non-adaptive designs, our novel GI-modified design shows operating characteristics comparable in learning (e.g., statistical power) but substantially better in earning (e.g., direct benefits). This illustrates the potential of GI-based allocation designs to improve participant benefits, increase efficiency, and reduce experimental costs in adaptive multi-armed experiments with exponential rewards.
Transformer has achieved impressive successes for various computer vision tasks. However, most existing studies require the Transformer backbone to be pretrained on a large-scale labeled dataset (e.g., ImageNet) to achieve satisfactory performance, and such datasets are usually unavailable for medical images. Additionally, due to the gap between medical and natural images, the improvement brought by ImageNet-pretrained weights degrades significantly when the weights are transferred to medical image processing tasks. In this paper, we propose Bootstrap Own Latent of Transformer (BOLT), a self-supervised learning approach specifically designed for medical image classification with a Transformer backbone. Our BOLT consists of two networks, namely the online and target branches, for self-supervised representation learning. Concretely, the online network is trained to predict the target network's representation of the same patch embedding tokens under a different perturbation. To maximally exploit the Transformer on limited medical data, we propose an auxiliary difficulty-ranking task: the Transformer is required to identify which branch (i.e., online or target) is processing the more difficult perturbed tokens. Overall, the Transformer strives to distill transformation-invariant features from the perturbed tokens, simultaneously performing difficulty measurement and maintaining the consistency of self-supervised representations. The proposed BOLT is evaluated on three medical image processing tasks, i.e., skin lesion classification, knee fatigue fracture grading, and diabetic retinopathy grading. The experimental results validate the superiority of our BOLT for medical image classification, compared to ImageNet-pretrained weights and state-of-the-art self-supervised learning approaches.
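As a schematic illustration of the two-branch design, the following PyTorch sketch combines a BYOL-style consistency loss between an online branch and an EMA target branch with a binary difficulty-classification head. A small MLP stands in for the Transformer backbone, and the perturbation pipeline and loss weighting are assumptions, not BOLT's actual architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoBranchSSL(nn.Module):
    """Schematic online/target self-supervised setup in the spirit of BOLT."""
    def __init__(self, in_dim=256, feat_dim=128, momentum=0.99):
        super().__init__()
        self.online = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU(), nn.Linear(feat_dim, feat_dim))
        self.target = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU(), nn.Linear(feat_dim, feat_dim))
        self.predictor = nn.Linear(feat_dim, feat_dim)   # online-only prediction head
        self.difficulty_head = nn.Linear(feat_dim, 2)    # which branch saw the harder perturbation
        self.momentum = momentum
        self.target.load_state_dict(self.online.state_dict())
        for p in self.target.parameters():
            p.requires_grad = False

    @torch.no_grad()
    def ema_update(self):
        # Target branch is a slowly moving average of the online branch.
        for po, pt in zip(self.online.parameters(), self.target.parameters()):
            pt.mul_(self.momentum).add_(po.detach(), alpha=1 - self.momentum)

    def forward(self, x_easy, x_hard):
        # Consistency: online (on the harder view) predicts the target representation.
        z_online = self.predictor(self.online(x_hard))
        with torch.no_grad():
            z_target = self.target(x_easy)
        consistency = 2 - 2 * F.cosine_similarity(z_online, z_target, dim=-1).mean()
        # Auxiliary difficulty ranking, reduced here to easy-vs-hard classification.
        logits = self.difficulty_head(self.online(torch.cat([x_easy, x_hard])))
        labels = torch.cat([torch.zeros(len(x_easy)), torch.ones(len(x_hard))]).long()
        difficulty = F.cross_entropy(logits, labels)
        return consistency + difficulty
```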
Text clustering and topic extraction are two important tasks in text mining, and they are usually performed separately. To let topic extraction facilitate clustering, we can first project texts into a topic space and then run a clustering algorithm to obtain clusters. To let clustering promote topic extraction, we can first obtain clusters with a clustering algorithm and then extract cluster-specific topics. However, this naive strategy ignores the fact that text clustering and topic extraction are strongly correlated and follow a chicken-and-egg relationship; performing them separately fails to make them mutually benefit each other and achieve the best overall performance. In this paper, we propose an unsupervised text clustering and topic extraction framework (ClusTop) which integrates text clustering and topic extraction into a unified framework and can achieve high-quality clustering results while extracting topics from each cluster simultaneously. Our framework includes four components: enhanced language model training, dimensionality reduction, clustering, and topic extraction, where the enhanced language model can be viewed as a bridge between clustering and topic extraction. On one hand, it provides text embeddings with a strong cluster structure, which facilitates effective text clustering; on the other hand, owing to its self-attention architecture, it pays close attention to topic-related words, which benefits topic extraction. Moreover, the training of the enhanced language model is unsupervised. Experiments on two datasets demonstrate the effectiveness of our framework and provide benchmarks for different model combinations within it.
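To make the data flow concrete, here is a generic Python sketch of an embed, reduce, cluster, and per-cluster-topic pipeline, with TF-IDF, truncated SVD, and k-means standing in for ClusTop's enhanced language model and its downstream components. The actual framework differs, but the staging is analogous.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.cluster import KMeans

def cluster_and_extract_topics(texts, n_clusters=5, n_components=50, top_k=10):
    """Embed texts, reduce dimensionality, cluster, then rank topic words per cluster."""
    vec = TfidfVectorizer(stop_words="english", max_features=20000)
    X = vec.fit_transform(texts)
    X_red = TruncatedSVD(n_components=min(n_components, X.shape[1] - 1)).fit_transform(X)
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(X_red)

    vocab = np.array(vec.get_feature_names_out())
    topics = {}
    for c in range(n_clusters):
        # Rank words by their mean TF-IDF weight inside the cluster.
        mean_tfidf = np.asarray(X[labels == c].mean(axis=0)).ravel()
        topics[c] = vocab[np.argsort(mean_tfidf)[::-1][:top_k]].tolist()
    return labels, topics
```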
This paper presents the technologies behind user next-intent prediction with a concept knowledge graph. The system has been deployed on the Web at Alipay, serving more than 100 million daily active users. Specifically, we propose AlipayKG to explicitly characterize user intent; it is an offline concept knowledge graph in the Life-Service domain that models users' historical behaviors, the rich content users interact with, and the relations between them. We further introduce a Transformer-based model which integrates expert rules from the knowledge graph to infer the online user's next intent. Experimental results demonstrate that the proposed system can effectively enhance the performance of downstream tasks while retaining explainability.