A default assumption in reinforcement learning and optimal control is that experience arrives at discrete time points on a fixed clock cycle. Many applications, however, involve continuous systems where the time discretization is not fixed but instead can be managed by a learning algorithm. By analyzing Monte-Carlo value estimation for LQR systems in both finite-horizon and infinite-horizon settings, we uncover a fundamental trade-off between approximation and statistical error in value estimation. Importantly, these two errors behave differently with respect to time discretization, which implies that there is an optimal choice for the temporal resolution that depends on the data budget. These findings show how adapting the temporal resolution can provably improve value estimation quality in LQR systems from finite data. Empirically, we demonstrate the trade-off in numerical simulations of LQR instances and several non-linear environments.
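To make the approximation/statistical trade-off concrete, the following minimal Python sketch estimates the cost-to-go of a scalar linear system by Monte-Carlo under different time steps, with the total number of simulated transitions held fixed. The system, cost, and all parameter values are hypothetical and only illustrate the general setup, not the paper's experiments.

```python
import numpy as np

def mc_value_estimate(a=-1.0, q=1.0, sigma=0.5, x0=1.0,
                      horizon=5.0, dt=0.1, budget=10_000, seed=0):
    """Monte-Carlo estimate of the finite-horizon cost-to-go of an
    uncontrolled scalar linear SDE dx = a*x dt + sigma dW with running
    cost q*x^2, simulated with an Euler-Maruyama step of size dt.
    The total number of simulated transitions is capped by `budget`,
    so a finer dt leaves fewer rollouts and hence more statistical error."""
    rng = np.random.default_rng(seed)
    steps = int(horizon / dt)
    n_rollouts = max(budget // steps, 1)
    returns = np.empty(n_rollouts)
    for i in range(n_rollouts):
        x, total = x0, 0.0
        for _ in range(steps):
            total += q * x**2 * dt                      # Riemann-sum running cost
            x += a * x * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        returns[i] = total
    return returns.mean(), returns.std(ddof=1) / np.sqrt(n_rollouts)

for dt in (0.5, 0.1, 0.02, 0.004):
    est, se = mc_value_estimate(dt=dt)
    print(f"dt={dt:6.3f}  value~{est:.3f}  (std. error {se:.3f})")
```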
In this paper we propose a deep-learning-based numerical scheme for strongly coupled FBSDEs arising from stochastic control. It is a modification of the deep BSDE method in which the initial value of the backward equation is not a free parameter, and with a new loss function that is a weighted sum of the cost of the control problem and a variance term which coincides with the mean squared error in the terminal condition. We show by a numerical example that a direct extension of the classical deep BSDE method to FBSDEs fails for a simple linear-quadratic control problem, and we motivate why the new method works. Under regularity and boundedness assumptions on the exact controls of the time-continuous and the time-discrete control problems, we provide an error analysis for our method. We show empirically that the method converges for three different problems, one of them being the one that the direct extension of the deep BSDE method fails on.
We study a finite-time-horizon continuous-time linear-quadratic reinforcement learning problem in an episodic setting, where both the state and the control coefficients are unknown to the controller. We first propose a least-squares algorithm based on continuous-time observations and controls, and establish a logarithmic regret bound of order $O((\ln M)(\ln\ln M))$, with $M$ the number of learning episodes. The analysis consists of two parts: a perturbation analysis, which exploits the regularity and robustness of the associated Riccati differential equation, and a parameter-estimation error analysis, which relies on sub-exponential properties of continuous-time least-squares estimators. We further propose a practically implementable least-squares algorithm based on discrete-time observations and piecewise-constant controls, which achieves a similar logarithmic regret with an additional term depending explicitly on the time step size used in the algorithm.
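As an illustration of the discrete-time variant described above, the sketch below estimates the drift coefficients of a linear SDE by least squares from discretely observed states under piecewise-constant controls. The two-dimensional system and all numerical values are hypothetical, and the regret-minimizing exploration scheme of the paper is not reproduced.

```python
import numpy as np

def estimate_drift(xs, us, dt):
    """Least-squares estimate of (A, B) in dX = (A X + B u) dt + dW from
    discretely observed states `xs` (shape (N+1, d)) under piecewise-constant
    controls `us` (shape (N, m)): regress the scaled increments
    (X_{k+1} - X_k)/dt on the regressors (X_k, u_k)."""
    dX = (xs[1:] - xs[:-1]) / dt                      # (N, d)
    Z = np.hstack([xs[:-1], us])                      # (N, d+m)
    theta, *_ = np.linalg.lstsq(Z, dX, rcond=None)    # (d+m, d)
    d = xs.shape[1]
    return theta[:d].T, theta[d:].T                   # A_hat, B_hat

# hypothetical 2-d example with a known ground truth for sanity-checking
rng = np.random.default_rng(1)
A = np.array([[0.0, 1.0], [-1.0, -0.5]]); B = np.array([[0.0], [1.0]])
dt, N = 0.01, 20_000
xs = np.zeros((N + 1, 2)); us = rng.standard_normal((N, 1))
for k in range(N):
    drift = A @ xs[k] + B @ us[k]
    xs[k + 1] = xs[k] + drift * dt + np.sqrt(dt) * rng.standard_normal(2)
A_hat, B_hat = estimate_drift(xs, us, dt)
print(np.round(A_hat, 2), np.round(B_hat, 2))
```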
We introduce a novel geometry-informed irreversible perturbation that accelerates the convergence of Langevin algorithms for Bayesian computation. It is well documented that there exist perturbations of Langevin dynamics that preserve its invariant measure while accelerating its convergence. Irreversible perturbations and reversible perturbations (such as Riemannian manifold Langevin dynamics, RMLD) have separately been shown to improve the performance of Langevin samplers. We consider these two perturbations simultaneously by presenting a novel form of irreversible perturbation for RMLD that is informed by the underlying geometry. Through numerical examples, we show that this new irreversible perturbation can improve estimation performance over irreversible perturbations that do not take the geometry into account. Moreover, we demonstrate that irreversible perturbations can generally be combined with stochastic-gradient versions of the Langevin algorithm. Finally, while irreversible perturbations cannot hurt the performance of a Langevin estimator in continuous time, the situation can be more complicated once discretization is taken into account. To this end, we describe a discrete example in which irreversibility increases both the bias and the variance of the resulting estimator.
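A minimal sketch of the generic (non-geometric) irreversible perturbation referred to above: the drift of an unadjusted Langevin iteration is premultiplied by $I + \delta J$ with $J$ skew-symmetric, which preserves the target measure of the continuous dynamics. The Gaussian target, step size, and choice of $J$ are assumptions for illustration; the paper's geometry-informed construction additionally involves a Riemannian metric.

```python
import numpy as np

def irreversible_ula(grad_log_pi, x0, step=0.05, n_iter=50_000, delta=1.0, seed=0):
    """Unadjusted Langevin iteration with a simple irreversible perturbation:
    the drift grad log pi(x) is premultiplied by (I + delta*J) with J
    skew-symmetric, which leaves the target invariant for the continuous
    dynamics.  Generic illustration only; the paper's version also builds a
    Riemannian (geometry-informed) preconditioner into the perturbation."""
    rng = np.random.default_rng(seed)
    d = x0.size
    J = np.zeros((d, d))
    J[np.triu_indices(d, 1)] = 1.0
    J = J - J.T                                  # skew-symmetric: J^T = -J
    M = np.eye(d) + delta * J
    x = x0.astype(float).copy()
    samples = np.empty((n_iter, d))
    for k in range(n_iter):
        noise = rng.standard_normal(d)
        x = x + step * M @ grad_log_pi(x) + np.sqrt(2.0 * step) * noise
        samples[k] = x
    return samples

# standard Gaussian target as a toy example
samples = irreversible_ula(lambda x: -x, x0=np.zeros(2))
print(samples.mean(axis=0), samples.var(axis=0))
```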
Diffusion processes that evolve according to linear stochastic differential equations form an important family of continuous-time models for dynamic decision-making. Optimal policies for them are well studied when the drift matrices are known with certainty. However, little is known about data-driven control of diffusion processes with uncertain drift matrices, because conventional discrete-time analysis techniques do not apply. Moreover, although the task can be viewed as a reinforcement learning problem involving an exploration-exploitation trade-off, ensuring system stability is a fundamental component of designing optimal policies. We establish that the popular Thompson sampling algorithm learns optimal actions fast, incurring only a square-root-of-time regret, and stabilizes the system within a short time period. To the best of our knowledge, this is the first such result for Thompson sampling in a diffusion-process control problem. We validate the theoretical results through empirical simulations with real drift-parameter matrices from two settings, flight control and blood glucose control. Moreover, we observe that Thompson sampling significantly improves the (worst-case) regret compared with state-of-the-art algorithms, suggesting that it explores in a more guarded fashion. Our theoretical analysis involves the characterization of a certain optimality manifold that ties the local geometry of the drift parameters to the optimal control of the diffusion process. We expect this technique to be of broader interest.
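The sketch below shows one hypothetical way to instantiate Thompson sampling for a linear-quadratic diffusion: sample a drift matrix from a Gaussian posterior over $[A\;B]$, apply the certainty-equivalent Riccati feedback for the sample, and update the posterior from the observed increments. The toy system, prior, and episode structure are assumptions, and the paper's stabilization arguments are not reproduced.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def thompson_lq_diffusion(A, B, Q, R, dt=0.01, horizon=2.0,
                          episodes=30, lam=1.0, seed=0):
    """Toy Thompson-sampling loop for the diffusion dX = (A X + B u)dt + dW
    with quadratic cost: sample a drift matrix [A B] from a Gaussian
    posterior, compute the certainty-equivalent Riccati feedback for the
    sample, run one episode, and update the posterior from the observed
    increments.  Illustrative sketch only (e.g. no explicit stabilization
    phase is included)."""
    rng = np.random.default_rng(seed)
    d, m = B.shape
    p = d + m
    prec = lam * np.eye(p)                    # posterior precision (shared per output)
    xty = np.zeros((p, d))                    # accumulates Z^T dX
    steps = int(horizon / dt)
    for ep in range(episodes):
        cov = np.linalg.inv(prec)
        mean = cov @ xty                      # (p, d) posterior mean of [A B]^T
        theta = mean + np.linalg.cholesky(cov) @ rng.standard_normal((p, d))
        A_s, B_s = theta[:d].T, theta[d:].T
        try:                                  # certainty-equivalent gain for the sample
            P = solve_continuous_are(A_s, B_s, Q, R)
            K = np.linalg.solve(R, B_s.T @ P)
        except (np.linalg.LinAlgError, ValueError):
            K = np.zeros((m, d))
        x, cost = rng.standard_normal(d), 0.0
        for _ in range(steps):
            u = -K @ x
            z = np.concatenate([x, u])
            dx = (A @ x + B @ u) * dt + np.sqrt(dt) * rng.standard_normal(d)
            cost += (x @ Q @ x + u @ R @ u) * dt
            prec += dt * np.outer(z, z)       # Bayesian update for the increments
            xty += np.outer(z, dx)
            x = x + dx
        print(f"episode {ep:2d}: cost {cost:.2f}")

A = np.array([[0.0, 1.0], [-1.0, -1.0]]); B = np.array([[0.0], [1.0]])
thompson_lq_diffusion(A, B, Q=np.eye(2), R=np.eye(1))
```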
We study non-parametric estimation of the value function of an infinite-horizon $\gamma$-discounted Markov reward process (MRP) using observations from a single trajectory. We provide non-asymptotic guarantees for a general family of kernel-based multi-step temporal difference (TD) estimates, including canonical $K$-step look-ahead TD for $K = 1, 2, \ldots$ and the TD$(\lambda)$ family for $\lambda \in [0,1)$ as special cases. Our bounds capture its dependence on Bellman fluctuations, mixing time of the Markov chain, any mis-specification in the model, as well as the choice of weight function defining the estimator itself, and reveal some delicate interactions between mixing time and model mis-specification. For a given TD method applied to a well-specified model, its statistical error under trajectory data is similar to that of i.i.d. sample transition pairs, whereas under mis-specification, temporal dependence in data inflates the statistical error. However, any such deterioration can be mitigated by increased look-ahead. We complement our upper bounds by proving minimax lower bounds that establish optimality of TD-based methods with appropriately chosen look-ahead and weighting, and reveal some fundamental differences between value function estimation and ordinary non-parametric regression.
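For intuition, here is a tabular stand-in for the $K$-step look-ahead TD estimates analysed above, run on a single trajectory of a hypothetical five-state random-walk MRP; the kernel-based function approximation of the paper is replaced by a lookup table.

```python
import numpy as np

def k_step_td(states, rewards, K, gamma=0.9, n_states=5, alpha=0.05, sweeps=20):
    """Tabular K-step look-ahead TD on a single trajectory: each update
    bootstraps from the value K transitions ahead,
       target_t = r_t + gamma r_{t+1} + ... + gamma^{K-1} r_{t+K-1}
                  + gamma^K V(s_{t+K}).
    A tabular simplification of the kernel-based estimator in the abstract."""
    V = np.zeros(n_states)
    T = len(rewards)
    discounts = gamma ** np.arange(K)
    for _ in range(sweeps):
        for t in range(T - K):
            target = discounts @ rewards[t:t + K] + gamma**K * V[states[t + K]]
            V[states[t]] += alpha * (target - V[states[t]])
    return V

# hypothetical 5-state random walk observed along one trajectory
rng = np.random.default_rng(0)
n, T = 5, 20_000
states = np.zeros(T + 1, dtype=int)
rewards = np.zeros(T)
for t in range(T):
    s_next = (states[t] + rng.choice([-1, 1])) % n
    rewards[t] = 1.0 if s_next == 0 else 0.0
    states[t + 1] = s_next
for K in (1, 3, 10):
    print(K, np.round(k_step_td(states, rewards, K), 3))
```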
We study stochastic approximation procedures for approximately solving a $d$-dimensional linear fixed-point equation based on observing a trajectory of length $n$ from an ergodic Markov chain. We first exhibit a non-asymptotic bound of order $t_{\mathrm{mix}} \tfrac{d}{n}$ on the squared error of the last iterate of a standard scheme, where $t_{\mathrm{mix}}$ is the mixing time. We then prove a non-asymptotic instance-dependent bound on a suitably averaged sequence of iterates, with a leading term that matches the local asymptotic minimax limit, including sharp dependence on the parameters $(d, t_{\mathrm{mix}})$ in the higher-order terms. We complement these upper bounds with a non-asymptotic minimax lower bound that establishes the instance-optimality of the averaged SA estimator. We derive corollaries of these results for policy evaluation with Markov noise -- covering the TD($\lambda$) family of algorithms for all $\lambda \in [0,1)$ -- and for linear autoregressive models. Our instance-dependent characterizations open the door to the design of fine-grained model-selection procedures for hyperparameter tuning (e.g., choosing the value of $\lambda$ when running the TD($\lambda$) algorithm).
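A minimal sketch of the averaged procedure in the simplest policy-evaluation instance, TD(0) with linear (one-hot) features on a hypothetical two-state Markov reward process: a constant-step linear stochastic-approximation recursion is run along one trajectory, and both the last iterate and the Polyak-Ruppert average are returned.

```python
import numpy as np

def averaged_lsa(phi, rewards, next_phi, gamma=0.9, step=0.1):
    """Constant-step linear stochastic approximation (TD(0) with linear
    features, i.e. the lambda = 0 member of the TD(lambda) family) with
    Polyak-Ruppert averaging of the iterates.
    phi / next_phi: feature matrices of shape (n, d) along one trajectory."""
    n, d = phi.shape
    theta = np.zeros(d)
    running_sum = np.zeros(d)
    for t in range(n):
        td_error = rewards[t] + gamma * next_phi[t] @ theta - phi[t] @ theta
        theta = theta + step * td_error * phi[t]       # last-iterate update
        running_sum += theta
    return theta, running_sum / n                      # (last iterate, average)

# toy 2-state Markov reward process with one-hot features
rng = np.random.default_rng(0)
P = np.array([[0.9, 0.1], [0.2, 0.8]])
r = np.array([1.0, 0.0])
n = 50_000
s = 0
phi = np.zeros((n, 2)); next_phi = np.zeros((n, 2)); rewards = np.zeros(n)
for t in range(n):
    s_next = rng.choice(2, p=P[s])
    phi[t, s] = 1.0; next_phi[t, s_next] = 1.0
    rewards[t] = r[s]
    s = s_next
last, avg = averaged_lsa(phi, rewards, next_phi)
print("last iterate:", np.round(last, 3), " averaged:", np.round(avg, 3))
```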
Nesterov's accelerated gradient method (NAG) for optimization has better performance than its continuous-time limit (noiseless kinetic Langevin dynamics) when a finite step size is employed \citep{shi2021understanding}. This work explores the sampling counterpart of this phenomenon and proposes a diffusion process whose discretizations yield accelerated gradient-based MCMC methods. More precisely, we reformulate the NAG optimizer for strongly convex functions (NAG-SC) as a Hessian-free high-resolution ODE, change its high-resolution coefficient into a hyperparameter, inject appropriate noise, and discretize the resulting diffusion. The acceleration effect of the new hyperparameter is quantified, and it is not an artificial effect created by time rescaling. Instead, acceleration in the $W_2$ distance is quantitatively established at both the level of the continuous dynamics and the level of the discrete algorithm. Empirical experiments in log-strongly-concave and multimodal cases also numerically demonstrate this acceleration.
Off-policy evaluation (OPE) is considered a fundamental and challenging problem in reinforcement learning (RL). This paper focuses on value estimation of a target policy based on pre-collected data generated by a possibly different policy, under the framework of infinite-horizon Markov decision processes. Motivated by the recently developed marginal importance sampling methods in RL and the covariate-balancing ideas in causal inference, we propose a novel estimator with approximately projected state-action balancing weights for policy value estimation. We obtain the convergence rate of these weights and show that the proposed value estimator is semi-parametrically efficient under technical conditions. In terms of asymptotics, our results scale with both the number of trajectories and the number of decision points per trajectory. As such, consistency can still be achieved with a limited number of subjects when the number of decision points diverges. In addition, we develop a necessary and sufficient condition for establishing the well-posedness of the Bellman operator in the off-policy setting, which characterizes the difficulty of OPE and may be of independent interest. Numerical experiments demonstrate the promising performance of our proposed estimator.
In this thesis, we consider two simple but typical control problems and apply deep reinforcement learning to them, namely cooling and controlling a particle that is subject to continuous position measurement in a one-dimensional quadratic or quartic potential. We compare the performance of reinforcement-learning control and conventional control strategies on the two problems, and show that reinforcement learning achieves performance comparable to the optimal control in the quadratic case and outperforms conventional control strategies in the quartic case, for which the optimal control strategy is unknown. To our knowledge, this is the first time deep reinforcement learning has been applied to quantum control problems in continuous real space. Our research demonstrates that deep reinforcement learning can effectively control a stochastic quantum system in real space as a measurement-feedback closed-loop controller. It also shows the ability of AI to discover new control strategies and properties of quantum systems that are not well understood; by learning from the AI we can gain insight into these problems, which opens up a new regime for scientific research.
This thesis is mainly concerned with state-space approaches for solving deep (temporal) Gaussian process (DGP) regression problems. More specifically, we represent DGPs as hierarchically composed systems of stochastic differential equations (SDEs), and we solve the DGP regression problem by using state-space filtering and smoothing methods. The resulting state-space DGP (SS-DGP) models generate a rich class of priors compatible with modelling a number of irregular signals/functions. Moreover, owing to their Markovian structure, SS-DGP regression problems can be solved efficiently by using Bayesian filtering and smoothing methods. The second contribution of this thesis is that we solve continuous-discrete Gaussian filtering and smoothing problems by using the Taylor moment expansion (TME) method. This induces a class of filters and smoothers that can asymptotically exactly predict the mean and covariance of solutions of stochastic differential equations. Moreover, the TME method and the TME filters and smoothers are compatible with simulating SS-DGPs and solving their regression problems. Lastly, this thesis presents a number of applications of state-space (deep) GPs. These applications mainly include (i) estimation of unknown drift functions of SDEs from partially observed trajectories and (ii) estimation of spectro-temporal features of signals.
Offline policy evaluation is a fundamental statistical problem in reinforcement learning that involves estimating the value function of some decision-making policy given data collected by a potentially different policy. In order to tackle problems with complex, high-dimensional observations, there has been significant interest from theoreticians and practitioners alike in understanding the possibility of function approximation in reinforcement learning. Despite significant study, a sharp characterization of when we might expect offline policy evaluation to be tractable, even in the simplest setting of linear function approximation, has so far remained elusive, with a surprising number of strong negative results recently appearing in the literature. In this work, we identify simple control-theoretic and linear-algebraic conditions that are necessary and sufficient for classical methods, in particular Fitted Q-iteration (FQI) and least squares temporal difference learning (LSTD), to succeed at offline policy evaluation. Using this characterization, we establish a precise hierarchy of regimes under which these estimators succeed. We prove that LSTD works under strictly weaker conditions than FQI. Furthermore, we establish that if a problem is not solvable via LSTD, then it cannot be solved by a broad class of linear estimators, even in the limit of infinite data. Taken together, our results provide a complete picture of the behavior of linear estimators for offline policy evaluation, unify previously disparate analyses of canonical algorithms, and provide significantly sharper notions of the underlying statistical complexity of offline policy evaluation.
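The two estimators compared above admit short closed-form or iterative implementations with linear features; the sketch below shows both on a synthetic, well-specified dataset (all data-generating choices here are hypothetical).

```python
import numpy as np

def lstd(phi, rewards, phi_next, gamma=0.9):
    """LSTD solution for off-policy linear value estimation:
    solves Phi^T (Phi - gamma Phi') theta = Phi^T r in one shot."""
    A = phi.T @ (phi - gamma * phi_next)
    b = phi.T @ rewards
    return np.linalg.solve(A, b)

def fqi(phi, rewards, phi_next, gamma=0.9, iters=200):
    """Fitted Q-iteration with the same linear class: repeatedly regress
    the one-step Bellman backup r + gamma * Phi' theta_k onto Phi.
    May diverge in cases where LSTD still succeeds (see the abstract)."""
    gram_inv = np.linalg.pinv(phi.T @ phi)
    theta = np.zeros(phi.shape[1])
    for _ in range(iters):
        theta = gram_inv @ (phi.T @ (rewards + gamma * phi_next @ theta))
    return theta

# hypothetical dataset: rows hold features of (s,a), reward, and features of (s', pi(s'))
rng = np.random.default_rng(0)
n, d = 5_000, 4
phi = rng.standard_normal((n, d))
phi_next = 0.5 * phi + 0.5 * rng.standard_normal((n, d))
theta_true = rng.standard_normal(d)
rewards = (phi - 0.9 * phi_next) @ theta_true      # well-specified: consistent with theta_true
print(np.round(lstd(phi, rewards, phi_next), 3))
print(np.round(fqi(phi, rewards, phi_next), 3))
print(np.round(theta_true, 3))
```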
In this paper we develop a theoretical analysis of the performance of sampling-based fitted value iteration (FVI) for solving infinite state-space, discounted-reward Markovian decision processes (MDPs) under the assumption that a generative model of the environment is available. Our main results come in the form of finite-time bounds on the performance of two versions of sampling-based FVI. The convergence rate results obtained allow us to show that both versions of FVI are well behaved in the sense that, by using a sufficiently large number of samples, arbitrarily good performance can be achieved with high probability for a large class of MDPs. An important feature of our proof technique is that it permits the study of weighted $L^p$-norm performance bounds. As a result, our technique applies to a large class of function-approximation methods (e.g., neural networks, adaptive regression trees, kernel machines, locally weighted learning), and our bounds scale well with the effective horizon of the MDP. The bounds show a dependence on the stochastic stability properties of the MDP: they scale with the discounted-average concentrability of the future-state distributions. They also depend on a new measure of the approximation power of the function space, the inherent Bellman residual, which reflects how well the function space is "aligned" with the dynamics and rewards of the MDP. The conditions of the main result, as well as the concepts introduced in the analysis, are extensively discussed and compared to previous theoretical results. Numerical experiments are used to substantiate the theoretical findings.
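A minimal sketch of sampling-based fitted value iteration with a generative model, on a hypothetical one-dimensional MDP with polynomial features standing in for the general function-approximation methods discussed above.

```python
import numpy as np

def fitted_value_iteration(n_states=200, n_next=20, iters=60, gamma=0.95, seed=0):
    """Sampling-based fitted value iteration for a toy 1-d continuous-state
    MDP with a generative model: at each round, sampled Bellman backups at a
    fixed design of states are regressed onto polynomial features.  The MDP
    (two actions drifting the state left/right with noise, reward peaked at
    the origin) is hypothetical and chosen only for illustration."""
    rng = np.random.default_rng(seed)
    states = np.linspace(-1.0, 1.0, n_states)
    feats = lambda s: np.vander(s, 5)                 # degree-4 polynomial features
    reward = lambda s: np.exp(-4.0 * s**2)
    step = lambda s, a: np.clip(s + 0.2 * a + 0.1 * rng.standard_normal(s.shape), -1, 1)
    w = np.zeros(5)
    for _ in range(iters):
        backups = np.full(n_states, -np.inf)
        for a in (-1.0, +1.0):
            # Monte-Carlo estimate of E[V(s')] from n_next generative samples per state
            nxt = np.stack([step(states, a) for _ in range(n_next)])
            ev = (feats(nxt.ravel()) @ w).reshape(n_next, n_states).mean(axis=0)
            backups = np.maximum(backups, reward(states) + gamma * ev)
        w, *_ = np.linalg.lstsq(feats(states), backups, rcond=None)   # the fitting step
    return w, states, feats(states) @ w

w, states, values = fitted_value_iteration()
print("fitted value near s=0:", round(float(values[len(values) // 2]), 3))
```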
We consider the problem of policy evaluation for continuous-time processes using the temporal-difference learning algorithm. More precisely, from the time discretization of a stochastic differential equation, we intend to learn the continuous value function using TD(0). First, we show that the standard TD(0) algorithm is doomed to fail as the time step tends to zero, because of the stochastic part of the dynamics. We then propose an additive zero-mean correction to the temporal difference that makes it robust with respect to a vanishing time step. We propose two algorithms: the first one is model-based, since it requires knowledge of the drift function of the dynamics; the second one is model-free. We prove the convergence of the model-based algorithm to the continuous-time solution, under a linear-parameterization assumption, in two different regimes: one with a convex regularization of the problem, and a second using the Polyak-Juditsky averaging method with constant step size and without regularization. The convergence rate obtained in the latter regime is comparable to that of the simplest linear regression problem solved with stochastic gradient descent. From a completely different perspective, our method can be applied to solve second-order elliptic equations in non-divergence form using machine learning.
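For concreteness, the sketch below runs the standard (uncorrected) TD(0) recursion on the Euler discretization of an Ornstein-Uhlenbeck process with a linear value parameterization, i.e. the baseline scheme whose degradation for small time steps motivates the paper; the zero-mean correction itself is not reproduced, and the process, reward, and parameter values are assumptions.

```python
import numpy as np

def td0_on_discretized_sde(h=0.01, beta=1.0, n_steps=200_000, lr=0.01, seed=0):
    """Vanilla TD(0) applied to the Euler discretization of the OU process
    dX = -X dt + dW, with reward r(x) = x^2, discount rate beta, and the
    linear parameterization V(x) = w0 + w1 x^2.  This is the baseline scheme
    whose behaviour as h -> 0 motivates the correction in the abstract;
    the correction is not included here."""
    rng = np.random.default_rng(seed)
    w = np.zeros(2)
    feat = lambda x: np.array([1.0, x * x])
    x = 0.0
    for _ in range(n_steps):
        x_next = x - x * h + np.sqrt(h) * rng.standard_normal()
        # discounted temporal difference over one time step of size h
        delta = x * x * h + np.exp(-beta * h) * feat(x_next) @ w - feat(x) @ w
        w += lr * delta * feat(x)
        x = x_next
    return w

for h in (0.1, 0.01, 0.001):
    print(f"h={h:5.3f}  learned weights {np.round(td0_on_discretized_sde(h=h), 3)}")
```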
Reinforcement learning is a framework for interactive decision-making in which incentives are sequentially revealed across time without a system dynamics model. Owing to its scaling to continuous spaces, we focus on policy search, where one iteratively improves a parameterized policy with stochastic policy gradient (PG) updates. In tabular Markov decision problems (MDPs), under persistent exploration and suitable parameterization, global optimality may be obtained. By contrast, in continuous space, non-convexity poses a pathological challenge, as evidenced by existing convergence results being mostly limited to stationarity or arbitrary local extrema. To close this gap, we step towards persistent exploration in continuous space through policy parameterizations defined by heavier-tailed distributions with tail-index parameter alpha, which increases the likelihood of jumping in state space. Doing so invalidates smoothness conditions on the score function that are common in PG analyses. Thus, we establish how the convergence rate to stationarity depends on the policy's tail index alpha, a Hölder continuity parameter, integrability conditions, and an exploration tolerance parameter introduced here for the first time. Further, we characterize the dependence of the set of local maxima on the tail index through an exit- and transition-time analysis of a suitably defined Markov chain, identifying that policies associated with heavier-tailed Lévy processes converge to wider peaks. This phenomenon yields improved stability to perturbations in supervised learning, which we corroborate also manifests in improved performance of policy search, especially when myopic and farsighted incentives are misaligned.
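A minimal sketch of policy search with a heavy-tailed policy: REINFORCE with a Cauchy (tail index alpha = 1) policy on a hypothetical one-step problem with a saturated reward, chosen so that the vanilla update stays well behaved; the paper's general alpha-stable parameterizations and exit-time analysis are not reproduced.

```python
import numpy as np

def heavy_tailed_pg(iters=20_000, lr=5e-3, scale=0.5, seed=0):
    """Stochastic policy gradient (REINFORCE) with a heavy-tailed Cauchy
    policy pi(u|s) = Cauchy(loc = w*s, scale) on a toy one-step problem with
    saturated reward r = -min((u - 2s)^2, 10); the optimal parameter is w = 2.
    The Cauchy score d/dw log pi = 2 (u - ws) s / (scale^2 + (u - ws)^2) is
    bounded, one way in which heavier tails behave differently from
    Gaussian policies."""
    rng = np.random.default_rng(seed)
    w = 0.0
    for t in range(iters):
        s = rng.standard_normal()
        u = w * s + scale * rng.standard_cauchy()        # heavy-tailed exploration
        reward = -min((u - 2.0 * s) ** 2, 10.0)
        score = 2.0 * (u - w * s) * s / (scale**2 + (u - w * s) ** 2)
        w += lr * score * reward                         # REINFORCE ascent step
        if (t + 1) % 5_000 == 0:
            print(f"iter {t + 1}: w = {w:.3f}")

heavy_tailed_pg()
```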
In this paper we propose an efficient variance-reduction approach for additive functionals of Markov chains, relying on a novel discrete-time martingale representation. Our approach is fully non-asymptotic and does not require knowledge of the stationary distribution (or even any type of ergodicity), nor a specific structure of the underlying density. By rigorously analyzing the convergence of the proposed algorithm, we show that its cost-to-variance product is indeed smaller than that of a naive algorithm. The numerical performance of the new method is illustrated with Langevin-type Markov chain Monte Carlo (MCMC) methods.
Motivated by the poor performance of cross-validation in settings where data are scarce, we propose a novel estimator of the out-of-sample performance of a policy in data-driven optimization. Our approach leverages the sensitivity analysis of the optimization problem to estimate the gradient of the optimal objective value with respect to the amount of noise in the data, and uses the estimated gradient to debias the policy's in-sample performance. Unlike cross-validation techniques, our approach avoids sacrificing data for a test set and uses all of the data when training, and hence is well suited to data-scarce settings. We prove bounds on the bias and variance of our estimator for optimization problems with uncertain linear objectives but known, potentially non-convex, feasible regions. For more specialized optimization problems whose feasible region is "weakly coupled" in a certain sense, we prove stronger results. Specifically, we provide explicit high-probability bounds on the error of our estimator that hold uniformly over a policy class and that depend on the problem's dimension and the policy class's complexity. Our bounds show that, under mild conditions, the error of our estimator vanishes as the dimension of the optimization problem grows, even if the amount of available data remains small and constant. Said differently, we prove that our estimator performs well in the small-data, large-scale regime. Finally, we numerically compare our proposed method to state-of-the-art approaches through a case study on dispatching emergency medical response services using real data. Our method provides more accurate estimates of out-of-sample performance and learns policies that perform better.
Bayesian inference allows one to obtain useful information on the parameters of a model, for instance in the context of Bayesian neural networks. The computational cost of the usual Monte Carlo methods used to sample a posteriori laws in Bayesian inference scales linearly with the number of data points. One option to reduce it to a fraction of this cost is to resort to mini-batching in conjunction with unadjusted discretizations of Langevin dynamics, in which case only a random fraction of the data is used to estimate the gradient. However, this leads to additional noise in the dynamics, and hence to a bias on the invariant measure that is sampled by the Markov chain. We advocate using the so-called adaptive Langevin dynamics, a modification of standard inertial Langevin dynamics with a dynamical friction that automatically corrects for the increased noise arising from mini-batching. We investigate the assumptions underpinning adaptive Langevin (a constant covariance for the estimation of the gradient), which are not satisfied in typical models of Bayesian inference, and quantify the bias induced by mini-batching in this case. We also show how to extend AdL in order to systematically reduce the bias on the posterior distribution by considering a dynamical friction that depends on the current value of the parameters.
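A minimal sketch of adaptive (Nosé-Hoover-type) Langevin dynamics with mini-batched gradients on a hypothetical one-dimensional Gaussian-mean posterior, using a plain Euler-type discretization; the paper's splitting schemes and parameter-dependent friction extension are not reproduced.

```python
import numpy as np

def adaptive_langevin(data, n_iter=100_000, h=1e-3, batch=32, mu=10.0,
                      sigma_a=1.0, seed=0):
    """Sketch of adaptive Langevin dynamics with mini-batched gradients for a
    toy 1-d Gaussian-mean posterior with an N(0, 10) prior: the friction xi is
    itself a dynamical variable that adjusts to absorb the extra noise
    injected by subsampling the data.  Simple Euler-type discretization,
    not the paper's scheme."""
    rng = np.random.default_rng(seed)
    n = len(data)
    q, p, xi = 0.0, 0.0, 1.0
    samples = np.empty(n_iter)
    for k in range(n_iter):
        idx = rng.integers(0, n, size=batch)                 # minibatch of the data
        grad = n / batch * np.sum(q - data[idx]) + q / 10.0  # stochastic gradient of U
        p += h * (-grad - xi * p) + np.sqrt(2.0 * sigma_a * h) * rng.standard_normal()
        q += h * p
        xi += h * (p * p - 1.0) / mu                         # thermostat (friction) update
        samples[k] = q
    return samples

data = np.random.default_rng(1).normal(2.0, 1.0, size=500)
samples = adaptive_langevin(data)
print("posterior mean estimate:", round(samples[20_000:].mean(), 3))
```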
The problem of estimating a linear functional based on observational data is canonical in both the causal inference and bandit literatures. We analyze a broad class of two-stage procedures that first estimate the treatment-effect function and then use this quantity to estimate the linear functional. We prove non-asymptotic upper bounds on the mean-squared error of such procedures: these bounds reveal that, in order to obtain non-asymptotically optimal procedures, the error in estimating the treatment effect should be minimized in a certain weighted $L^2$-norm. We analyze a two-stage procedure based on constrained regression in this weighted norm and establish its instance-dependent optimality in finite samples via matching non-asymptotic local minimax lower bounds. These results show that the optimal non-asymptotic risk, in addition to depending on the asymptotically efficient variance, depends on the weighted-norm distance between the true outcome function and its approximation by the richest function class supported by the sample size.
There has been a great deal of recent interest in learning and approximation of functions that can be expressed as expectations of a given nonlinearity with respect to its random internal parameters. Examples of such representations include "infinitely wide" neural nets, where the underlying nonlinearity is given by the activation function of an individual neuron. In this paper, we bring this perspective to function representation by neural stochastic differential equations (SDEs). A neural SDE is an It\^o diffusion process whose drift and diffusion matrix are elements of some parametric families. We show that the ability of a neural SDE to realize nonlinear functions of its initial condition can be related to the problem of optimally steering a certain deterministic dynamical system between two given points in finite time. This auxiliary system is obtained by formally replacing the Brownian motion in the SDE by a deterministic control input. We derive upper and lower bounds on the minimum control effort needed to accomplish this steering; these bounds may be of independent interest in the context of motion planning and deterministic optimal control.