With the attention mechanism, transformers achieve significant empirical successes. Despite the intuitive understanding that transformers perform relational inference over long sequences to produce desirable representations, we lack a rigorous theory of how the attention mechanism achieves this. In particular, several intriguing questions remain open: (a) What makes a desirable representation? (b) How does the attention mechanism infer the desirable representation within the forward pass? (c) How does a pretraining procedure learn to infer the desirable representation through the backward pass? We observe that, as is the case in BERT and ViT, input tokens are often exchangeable since they already include positional encodings. The notion of exchangeability induces a latent variable model that is invariant to input sizes, which enables our theoretical analysis.
- To answer (a) on representation, we establish the existence of a sufficient and minimal representation of input tokens. In particular, such a representation instantiates the posterior distribution of the latent variable given input tokens, which plays a central role in predicting output labels and solving downstream tasks.
- To answer (b) on inference, we prove that attention with the desired parameter infers the latent posterior up to an approximation error that is decreasing in input sizes. In detail, we quantify how attention approximates the conditional mean of the value given the key, which characterizes how it performs relational inference over long sequences (a sketch of this view follows the list below).
- To answer (c) on learning, we prove that both supervised and self-supervised objectives allow empirical risk minimization to learn the desired parameter up to a generalization error that is independent of input sizes. In particular, in the self-supervised setting, we identify a condition number that is pivotal to solving downstream tasks.
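To make the inference claim in (b) concrete, here is a minimal numpy sketch (not the paper's construction) that views softmax attention as a Nadaraya-Watson-style kernel estimator of the conditional mean of the values given a query; the data, scale, and dimensions are illustrative assumptions.

```python
# A minimal sketch: softmax attention as a kernel-weighted estimate of E[value | key],
# with the approximation improving as the number of input tokens grows.
import numpy as np

rng = np.random.default_rng(0)

def softmax_attention(query, keys, values, scale=1.0):
    # Standard dot-product attention with a single query vector.
    logits = keys @ query * scale                 # (n,)
    weights = np.exp(logits - logits.max())
    weights /= weights.sum()
    return weights @ values                       # weighted mean of the values

n = 5000                                          # number of input tokens (assumed)
keys = rng.normal(size=(n, 4))
values = np.sin(keys.sum(axis=1, keepdims=True)) + 0.1 * rng.normal(size=(n, 1))

query = rng.normal(size=4)
est = softmax_attention(query, keys, values)
print("attention output (kernel-weighted conditional-mean estimate):", est)
```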
Learning policies from fixed offline datasets is a key challenge in scaling up reinforcement learning (RL) algorithms towards practical applications. This is largely because off-policy RL algorithms suffer from distributional shift, due to the mismatch between the dataset and the target policy, leading to high variance and over-estimation of value functions. In this work, we propose variance regularization for offline RL algorithms, using stationary distribution corrections. We show that by using Fenchel duality, we can avoid double-sampling issues when computing the gradient of the variance regularizer. The proposed algorithm for offline variance regularization (OVAR) can be used to augment any existing offline policy optimization algorithm. We show that the regularizer leads to a lower bound on the offline policy optimization objective, which can help avoid over-estimation errors and explains the benefits of our approach, demonstrated across a range of continuous control domains in comparison with existing state-of-the-art algorithms.
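As a rough illustration of why Fenchel duality sidesteps double sampling, the variance admits the variational form Var(X) = min_nu E[(X - nu)^2], whose gradient can be estimated from single samples. The sketch below is a generic toy, not the OVAR algorithm; the distribution, batch size, and step size are assumptions.

```python
# Variational (Fenchel-dual) form of the variance: each gradient step needs only
# single samples, avoiding the double sampling required to estimate (E[X])^2.
import numpy as np

rng = np.random.default_rng(1)
samples = rng.normal(loc=2.0, scale=3.0, size=100_000)

nu, lr = 0.0, 0.1
for _ in range(200):
    batch = rng.choice(samples, size=256)
    grad = np.mean(-2.0 * (batch - nu))      # d/d nu of E[(X - nu)^2] from one batch
    nu -= lr * grad

variance_estimate = np.mean((samples - nu) ** 2)
print("dual variable nu ~ E[X]:", nu)
print("variational variance estimate:", variance_estimate, "(true variance is 9.0)")
```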
Motivated by human-machine interactions such as training chatbots to improve customer satisfaction, we study human-guided human-machine interaction involving private information. We model this interaction as a two-player turn-based game, where one player (Alice, a human) guides the other player (Bob, a machine) towards a common goal. Specifically, we focus on offline reinforcement learning (RL) in this game, where the goal is to find a policy pair for Alice and Bob that maximizes their expected total rewards based on an offline dataset collected a priori. The offline setting presents two challenges: (i) we cannot collect Bob's private information, leading to a confounding bias when using standard RL methods, and (ii) we face a distributional mismatch between the behavior policy used to collect the data and the desired policy we aim to learn. To tackle the confounding bias, we treat Bob's previous action as an instrumental variable for Alice's current decision making so as to adjust for the unmeasured confounding. We develop a novel identification result and use it to propose a new off-policy evaluation (OPE) method for evaluating policy pairs in this two-player turn-based game. To tackle the distributional mismatch, we leverage the idea of pessimism and use our OPE method to develop an off-policy learning algorithm for finding a desirable policy pair for both Alice and Bob. Finally, we prove that under mild assumptions such as partial coverage of the offline data, the policy pair obtained through our method converges to the optimal one at a satisfactory rate.
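For intuition only, the following is a generic two-stage least squares (2SLS) sketch of the instrumental-variable idea behind the confounding adjustment; the linear model, noise scales, and coefficients are illustrative assumptions, and the paper's actual identification result for the turn-based game is different and more involved.

```python
# Generic 2SLS illustration: the naive regression of outcome on action is biased by an
# unmeasured confounder, while the instrument-based estimate recovers the true effect.
import numpy as np

rng = np.random.default_rng(2)
n = 20_000
u = rng.normal(size=n)                       # unmeasured confounder (e.g., private info)
z = rng.normal(size=n)                       # instrument (e.g., the previous action)
a = 0.8 * z + 0.6 * u + rng.normal(size=n)   # action, confounded by u
y = 1.5 * a + 1.0 * u + rng.normal(size=n)   # reward, also driven by u

# Stage 1: project the action onto the instrument; Stage 2: regress the outcome on it.
a_hat = z * (np.dot(z, a) / np.dot(z, z))
iv_effect = np.dot(a_hat, y) / np.dot(a_hat, a)
ols_effect = np.dot(a, y) / np.dot(a, a)
print("naive OLS effect (biased):", ols_effect)
print("IV / 2SLS effect (close to the true 1.5):", iv_effect)
```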
This paper studies offline policy learning, which aims at utilizing observations collected a priori (from either fixed or adaptively evolving behavior policies) to learn an optimal individualized decision rule that achieves the best overall outcomes for a given population. Existing policy learning methods rely on a uniform overlap assumption, i.e., the propensities of exploring all actions for all individual characteristics are lower bounded in the offline dataset; put differently, the performance of the existing methods depends on the worst-case propensity in the offline dataset. As one has no control over the data collection process, this assumption can be unrealistic in many situations, especially when the behavior policies are allowed to evolve over time with diminishing propensities for certain actions. In this paper, we propose a new algorithm that optimizes lower confidence bounds (LCBs) -- instead of point estimates -- of the policy values. The LCBs are constructed using knowledge of the behavior policies for collecting the offline data. Without assuming any uniform overlap condition, we establish a data-dependent upper bound for the suboptimality of our algorithm, which only depends on (i) the overlap for the optimal policy, and (ii) the complexity of the policy class we optimize over. As an implication, for adaptively collected data, we ensure efficient policy learning as long as the propensities for optimal actions are lower bounded over time, while those for suboptimal ones are allowed to diminish arbitrarily fast. In our theoretical analysis, we develop a new self-normalized type concentration inequality for inverse-propensity-weighting estimators, generalizing the well-known empirical Bernstein's inequality to unbounded and non-i.i.d. data.
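A hedged sketch of the lower-confidence-bound recipe: score each candidate action (a stand-in for a policy) by an inverse-propensity-weighted value estimate minus an uncertainty width, then maximize the LCB. The Bernstein-style width below is a generic placeholder, not the paper's self-normalized bound; the logging policy and rewards are simulated assumptions.

```python
# Pick the action whose pessimistic (LCB) value estimate is largest, using known
# behavior propensities to build inverse-propensity-weighted estimates.
import numpy as np

rng = np.random.default_rng(3)
n, n_actions = 5_000, 3

actions = rng.integers(0, n_actions, size=n)            # logged actions
propensities = np.full(n, 1.0 / n_actions)              # known behavior propensities
rewards = rng.uniform(size=n) + 0.2 * (actions == 2)    # action 2 is best on average

def lcb_value(policy_action, delta=0.05):
    w = (actions == policy_action) / propensities       # importance weights
    ipw = w * rewards
    point = ipw.mean()
    width = np.sqrt(2 * ipw.var() * np.log(1 / delta) / n) \
            + 3 * w.max() * np.log(1 / delta) / n       # generic Bernstein-style width
    return point - width

lcbs = [lcb_value(a) for a in range(n_actions)]
print("per-action LCBs:", lcbs, "-> chosen action:", int(np.argmax(lcbs)))
```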
Deep latent variable models have achieved significant empirical successes in model-based reinforcement learning (RL) due to their expressiveness in modeling complex transition dynamics. On the other hand, it remains unclear theoretically and empirically how latent variable models may facilitate learning, planning, and exploration to improve the sample efficiency of RL. In this paper, we provide a representation view of the latent variable models for state-action value functions, which allows both a tractable variational learning algorithm and an effective implementation of the optimism/pessimism principle in the face of uncertainty for exploration. In particular, we propose a computationally efficient planning algorithm with UCB exploration by incorporating kernel embeddings of latent variable models. Theoretically, we establish the sample complexity of the proposed approach in the online and offline settings. Empirically, we demonstrate superior performance over current state-of-the-art algorithms across various benchmarks.
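As a small illustration of the optimism ingredient, the sketch below computes an elliptical UCB-style exploration bonus from feature embeddings of state-action pairs; the random features stand in for the kernel embeddings of the latent variable model, and the constants are assumptions rather than the paper's algorithm.

```python
# Elliptical exploration bonus: large for poorly covered state-action embeddings,
# small for embeddings similar to the data already collected.
import numpy as np

rng = np.random.default_rng(4)
d, n = 8, 500
phi = rng.normal(size=(n, d))                  # embeddings of visited (s, a) pairs

lam, beta = 1.0, 1.0                           # regularization and bonus scale (assumed)
Lambda = lam * np.eye(d) + phi.T @ phi         # regularized covariance of the features
Lambda_inv = np.linalg.inv(Lambda)

def ucb_bonus(phi_sa):
    return beta * np.sqrt(phi_sa @ Lambda_inv @ phi_sa)

novel, seen = rng.normal(size=d) * 5.0, phi[0]
print("bonus for a novel pair:", ucb_bonus(novel), " bonus for a seen pair:", ucb_bonus(seen))
```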
We study sample efficient reinforcement learning (RL) under the general framework of interactive decision making, which includes Markov decision processes (MDPs), partially observable Markov decision processes (POMDPs), and predictive state representations (PSRs) as special cases. Toward finding the minimum assumption that empowers sample efficient learning, we propose a novel complexity measure, the generalized eluder coefficient (GEC), which characterizes the fundamental tradeoff between exploration and exploitation in online interactive decision making. Specifically, GEC captures the hardness of exploration by comparing the error of predicting the performance of the updated policy with the in-sample training error evaluated on the historical data. We show that RL problems with low GEC form a remarkably rich class, which subsumes low Bellman eluder dimension problems, the bilinear class, low witness rank problems, the PO-bilinear class, and generalized regular PSRs, where generalized regular PSRs, a new tractable PSR class identified by us, include nearly all known tractable POMDPs. Furthermore, in terms of algorithm design, we propose a generic posterior sampling algorithm, which can be implemented in both model-free and model-based fashion, under both fully observable and partially observable settings. The proposed algorithm modifies the standard posterior sampling algorithm in two aspects: (i) we use an optimistic prior distribution that biases towards hypotheses with higher values, and (ii) the log-likelihood is set to be an empirical loss evaluated on the historical data, where the choice of loss function supports both model-free and model-based learning. We prove that the proposed algorithm is sample efficient by establishing a sublinear regret upper bound in terms of GEC. In summary, we provide a new and unified understanding of both fully observable and partially observable RL.
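The following toy sketch illustrates the two modifications to posterior sampling on a finite hypothesis class: an optimistic prior that upweights high-value hypotheses, and a data-dependent loss term. The hypotheses, temperatures, and log-likelihood values are made-up illustrative numbers, not the paper's algorithm.

```python
# Optimistic posterior sampling over three toy hypotheses: sampling weights combine
# (i) a prior favoring high predicted value and (ii) the fit to the historical data.
import numpy as np

rng = np.random.default_rng(5)

hypotheses = [
    {"value": 0.9, "loglik": -120.0},   # optimistic but fits the data poorly
    {"value": 0.6, "loglik": -100.5},   # good fit, decent value
    {"value": 0.3, "loglik": -100.0},   # good fit, low value
]

eta, gamma = 5.0, 1.0                   # optimism and loss temperatures (assumed)
scores = np.array([eta * h["value"] + gamma * h["loglik"] for h in hypotheses])
probs = np.exp(scores - scores.max())
probs /= probs.sum()

sampled = rng.choice(len(hypotheses), p=probs)
print("posterior sampling probabilities:", probs, "-> sampled hypothesis:", sampled)
```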
Cooperative multi-agent reinforcement learning (MARL) with permutation-invariant agents has achieved great empirical success in real-world applications. Unfortunately, the theoretical understanding of this MARL problem is lacking due to the curse of many agents and the limited exploration of relational reasoning in existing works. In this paper, we verify that transformers implement complex relational reasoning, and we propose and analyze model-free and model-based offline MARL algorithms with transformer approximators. We prove that the suboptimality gaps of the model-free and model-based algorithms are independent of and logarithmic in the number of agents, respectively, which mitigates the curse of many agents. These results are consequences of a novel generalization error bound for transformers and a novel analysis of the maximum likelihood estimation (MLE) of the system dynamics with transformers. Our model-based algorithm is the first provably efficient MARL algorithm that explicitly exploits the permutation invariance of the agents.
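A minimal numpy sketch of the permutation-invariance property exploited by the model-based algorithm: self-attention over agent tokens without positional encodings, followed by mean pooling, produces the same output under any reordering of the agents. The random weights below are stand-ins for a trained transformer critic, not the paper's model.

```python
# Self-attention is permutation equivariant over the agent tokens, so mean pooling
# its outputs yields a permutation-invariant critic value.
import numpy as np

rng = np.random.default_rng(6)
d = 8
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def critic(agent_obs):
    q, k, v = agent_obs @ Wq, agent_obs @ Wk, agent_obs @ Wv
    attn = softmax(q @ k.T / np.sqrt(d))
    return (attn @ v).mean(axis=0)          # pool over agents -> invariant output

obs = rng.normal(size=(5, d))               # observations of 5 agents
perm = rng.permutation(5)
print("permutation invariant:", np.allclose(critic(obs), critic(obs[perm])))
```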
We study offline reinforcement learning (RL) in the face of unmeasured confounders. Due to the lack of online interaction with the environment, offline RL faces the following two significant challenges: (i) the agent may be confounded by unobserved state variables, and (ii) the offline data collected a priori do not provide sufficient coverage of the environment. To tackle the above challenges, we study policy learning in confounded MDPs with the aid of instrumental variables. Specifically, we first establish identification results, including one based on marginalized importance sampling (MIS), for the expected total reward in confounded MDPs. Then, by leveraging pessimism and our identification results, we propose various policy learning methods with finite-sample suboptimality guarantees that find the optimal in-class policy under minimal data coverage and modeling assumptions. Finally, our extensive theoretical investigation and a numerical study motivated by kidney transplantation demonstrate the promising performance of the proposed methods.
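For intuition, here is a tabular sketch of the marginalized importance sampling (MIS) idea: reweight logged rewards by a state-action density ratio instead of a product of per-step ratios. The visitation distributions are known by construction in this toy, whereas the paper identifies the relevant quantities under confounding via instrumental variables.

```python
# MIS in a tabular toy: the value of the target policy equals the behavior-data average
# of reward times the marginal state-action density ratio.
import numpy as np

rng = np.random.default_rng(7)
n_states, n_actions, n = 4, 2, 50_000

# Behavior and target visitation distributions over (s, a), known here for clarity.
d_behavior = rng.dirichlet(np.ones(n_states * n_actions))
d_target = rng.dirichlet(np.ones(n_states * n_actions))
reward = rng.uniform(size=n_states * n_actions)

idx = rng.choice(n_states * n_actions, size=n, p=d_behavior)   # logged (s, a) indices
w = d_target / d_behavior                                      # marginalized density ratio
mis_estimate = np.mean(w[idx] * reward[idx])
print("MIS estimate:", mis_estimate, " true target value:", float(d_target @ reward))
```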
A Stackelberg congestion game (SCG) is a bilevel program in which a leader aims to maximize their own gain by anticipating and manipulating the equilibrium state at which the followers settle by playing a congestion game. Large-scale SCGs are well known for their intractability and complexity. This study approaches SCGs through differentiable programming, which marries the latest developments in machine learning with conventional methodologies. The core idea is to represent the lower-level equilibrium problem by an evolution path formed by imitative logit dynamics. This enables the use of automatic differentiation over the evolution path towards equilibrium, leading to a double-loop gradient descent algorithm. We further show that the fixation on the lower-level equilibrium may be a self-imposed computational obstacle. Instead, the leader may only look ahead along the followers' evolution path for a few steps, while updating its decisions in a co-evolution process. This revelation gives rise to a single-loop algorithm that is more efficient in terms of both memory consumption and computation time. Through numerical experiments covering a wide range of benchmark problems, we find that the single-loop algorithm consistently strikes a good balance between solution quality and efficiency, outperforming not only the standard double-loop implementation but also other methods from the literature. Importantly, our results highlight both the wastefulness of "full anticipation" and the peril of "zero anticipation." If a quick heuristic is needed for solving a very large SCG, the proposed single-loop algorithm with a one-step look-ahead makes an ideal candidate.
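The toy below sketches the single-loop idea on a made-up two-link congestion game: the follower's flows evolve by an imitative logit update while the leader improves its toll using a gradient taken through only a few lookahead steps (finite differences stand in for automatic differentiation). All costs and constants are illustrative assumptions, not the paper's benchmarks.

```python
# Single-loop co-evolution: one leader (toll) update with a k-step lookahead gradient,
# interleaved with one follower (flow) update via imitative logit dynamics.
import numpy as np

def logit_step(x, toll, temp=1.0):
    cost = np.array([1.0 + 2.0 * x[0] + toll, 2.0 + 1.0 * x[1]])   # assumed link costs
    target = np.exp(-cost / temp)
    target /= target.sum()
    return 0.5 * x + 0.5 * target                                   # imitative update

def lookahead_leader_obj(x, toll, k=3):
    for _ in range(k):                                               # k-step lookahead
        x = logit_step(x, toll)
    return toll * x[0]                                               # toll revenue on link 0

x, toll, lr, eps = np.array([0.5, 0.5]), 0.5, 0.05, 1e-4
for _ in range(300):                                                 # single loop
    grad = (lookahead_leader_obj(x, toll + eps)
            - lookahead_leader_obj(x, toll - eps)) / (2 * eps)       # finite differences
    toll += lr * grad                                                # leader ascends revenue
    x = logit_step(x, toll)                                          # follower co-evolves
print("final toll:", toll, "final flows:", x)
```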
Given its power in extracting feature representations, contrastive self-supervised learning has been successfully integrated into the practice of (deep) reinforcement learning (RL), leading to efficient policy learning in various applications. Despite its tremendous empirical success, the understanding of contrastive learning for RL remains elusive. To narrow such a gap, we study how contrastive learning empowers RL in the settings of Markov decision processes (MDPs) and Markov games (MGs). For both models, we propose to extract the correct feature representations of the low-rank models by minimizing a contrastive loss. Moreover, under the online setting, we propose novel upper confidence bound (UCB)-type algorithms that incorporate such a contrastive loss into online RL algorithms for MDPs or MGs. We further prove theoretically that our algorithms recover the true representations and simultaneously achieve sample efficiency in learning the optimal policy in MDPs and the Nash equilibrium in MGs. We also provide empirical studies to demonstrate the efficacy of the UCB-based contrastive learning method for RL. To the best of our knowledge, we provide the first provably efficient online RL algorithm that incorporates contrastive learning for representation learning. Our code is available at https://github.com/baichenjia/contrastive-ucb.
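As a small sketch of the contrastive ingredient, the code below evaluates an InfoNCE-style loss with a bilinear critic phi(s, a)^T mu(s'), scoring each observed next state against in-batch negatives; random features replace the learned networks, and the UCB machinery is omitted entirely.

```python
# InfoNCE-style contrastive loss for a low-rank transition representation:
# positives (true next states) lie on the diagonal of the score matrix.
import numpy as np

rng = np.random.default_rng(8)
d, batch = 16, 32

phi_sa = rng.normal(size=(batch, d))      # embeddings of (state, action) pairs
mu_next = rng.normal(size=(batch, d))     # embeddings of the observed next states

logits = phi_sa @ mu_next.T               # (i, j): score of pairing i with next state j
logits -= logits.max(axis=1, keepdims=True)
log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
info_nce_loss = -np.mean(np.diag(log_probs))
print("InfoNCE loss on random features:", info_nce_loss)
```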