Autoregressive processes naturally arise in a large variety of real-world scenarios, including, e.g., stock markets, sales forecasting, weather prediction, advertising, and pricing. When addressing a sequential decision-making problem in such a context, the temporal dependence between consecutive observations should be properly accounted for in order to converge to the optimal decision policy. In this work, we propose a novel online learning setting, named Autoregressive Bandits (ARBs), in which the observed reward follows an autoregressive process of order $k$, whose parameters depend on the action the agent chooses, within a finite set of $n$ actions. Then, we devise an optimistic regret minimization algorithm, AutoRegressive Upper Confidence Bounds (AR-UCB), that suffers regret of order $\widetilde{\mathcal{O}} \left( \frac{(k+1)^{3/2}\sqrt{nT}}{(1-\Gamma)^2} \right)$, where $T$ is the optimization horizon and $\Gamma < 1$ is an index of the stability of the system. Finally, we present a numerical validation in several synthetic settings and one real-world setting, in comparison with general-purpose and specific-purpose bandit baselines, showing the advantages of the proposed approach.
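As a concrete illustration of the ARB setting, the sketch below simulates an AR($k$) reward process whose coefficients depend on the selected action and runs a simplified optimistic learner based on per-action ridge regression with a generic exploration bonus. The classes `ARBanditEnv` and `SimpleOptimisticAR`, the coefficients, and the bonus scaling are assumptions made for illustration; they do not reproduce the exact AR-UCB confidence bound analyzed in the paper.

```python
import numpy as np

# Illustrative Autoregressive Bandit environment: the reward follows an
# AR(k) process whose coefficients depend on the chosen action.
class ARBanditEnv:
    def __init__(self, coeffs, noise_std=0.1, seed=0):
        # coeffs[a] = (gamma_0, gamma_1, ..., gamma_k) for action a
        self.coeffs = np.asarray(coeffs, dtype=float)
        self.k = self.coeffs.shape[1] - 1
        self.noise_std = noise_std
        self.rng = np.random.default_rng(seed)
        self.history = [0.0] * self.k          # last k rewards, most recent first

    def step(self, action):
        g = self.coeffs[action]
        reward = g[0] + g[1:] @ np.array(self.history) + self.rng.normal(0.0, self.noise_std)
        self.history = [reward] + self.history[:-1]
        return reward

# Simplified optimistic learner (not the paper's AR-UCB index): per-action
# ridge regression on the feature z_t = (1, r_{t-1}, ..., r_{t-k}) plus a
# generic exploration bonus on the optimistic one-step prediction.
class SimpleOptimisticAR:
    def __init__(self, n_actions, k, reg=1.0, bonus=1.0):
        self.k, self.bonus = k, bonus
        self.A = [reg * np.eye(k + 1) for _ in range(n_actions)]
        self.b = [np.zeros(k + 1) for _ in range(n_actions)]

    def select(self, history):
        z = np.concatenate(([1.0], history))
        scores = []
        for A, b in zip(self.A, self.b):
            A_inv = np.linalg.inv(A)
            theta = A_inv @ b                   # current AR-coefficient estimate
            width = self.bonus * np.sqrt(z @ A_inv @ z)
            scores.append(theta @ z + width)    # optimistic one-step prediction
        return int(np.argmax(scores))

    def update(self, action, history, reward):
        z = np.concatenate(([1.0], history))
        self.A[action] += np.outer(z, z)
        self.b[action] += reward * z

env = ARBanditEnv(coeffs=[[0.5, 0.3, 0.1], [0.2, 0.6, 0.1]])   # n=2 actions, k=2
agent = SimpleOptimisticAR(n_actions=2, k=2)
for t in range(1000):
    h = list(env.history)
    a = agent.select(h)
    r = env.step(a)
    agent.update(a, h, r)
```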
Behavioral Cloning (BC) aims at learning a policy that mimics the behavior demonstrated by an expert. The current theoretical understanding of BC is limited to the case of finite actions. In this paper, we study BC with the goal of providing theoretical guarantees on the performance of the imitator policy in the case of continuous actions. We start by deriving a novel bound on the performance gap based on the Wasserstein distance, applicable to continuous-action experts, which holds under the assumption that the value function is Lipschitz continuous. Since this latter condition is hardly fulfilled in practice, even for Lipschitz Markov Decision Processes and policies, we propose a relaxed setting, proving that the value function is always Hölder continuous. This result is of independent interest and allows us to obtain a general bound on the performance of the imitator policy in BC. Finally, we analyze noise injection, a common practice in which the expert action is executed in the environment after the application of a noise kernel. We show that this practice allows deriving stronger performance guarantees, at the price of a bias due to the noise addition.
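A minimal sketch of the noise-injection practice mentioned above, assuming a Gymnasium-style environment and a hypothetical `expert_policy`: the executed action is the expert action perturbed by a Gaussian noise kernel, while the dataset stores the clean expert action. This is one common variant of the practice, not necessarily the exact protocol analyzed in the paper.

```python
import numpy as np

def collect_noisy_demos(env, expert_policy, n_steps, noise_std=0.1, seed=0):
    """Collect (state, expert action) pairs while executing a *noisy* version
    of the expert action in the environment (Gaussian noise kernel).

    `env` is assumed to follow the Gymnasium API and `expert_policy` maps a
    state to a continuous action; both are placeholders for illustration.
    """
    rng = np.random.default_rng(seed)
    states, actions = [], []
    s, _ = env.reset(seed=seed)
    for _ in range(n_steps):
        a_expert = expert_policy(s)
        # noise kernel applied before execution
        a_executed = a_expert + rng.normal(0.0, noise_std, size=np.shape(a_expert))
        states.append(s)
        actions.append(a_expert)   # supervise on the clean expert action (one common choice)
        s, _, terminated, truncated, _ = env.step(a_executed)
        if terminated or truncated:
            s, _ = env.reset()
    return np.asarray(states), np.asarray(actions)
```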
This paper is in the field of stochastic Multi-Armed Bandits (MABs), i.e., those sequential selection techniques able to learn online using only the feedback given by the chosen option (a.k.a. arm). We study a particular case of the rested and restless bandits in which the arms' expected payoff is monotonically non-decreasing. This characteristic allows designing specifically crafted algorithms that exploit the regularity of the payoffs to provide tight regret bounds. We design an algorithm for the rested case (R-ed-UCB) and one for the restless case (R-less-UCB), providing a regret bound that depends on the properties of the instance and, under certain circumstances, is of order $\widetilde{\mathcal{O}}(T^{\frac{2}{3}})$. Finally, using both synthetically generated tasks and an online model selection problem on a real-world dataset, we empirically compare our algorithms with state-of-the-art methods for non-stationary MABs, illustrating the effectiveness of the proposed approaches.
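The snippet below gives a toy rendition of the rested rising-bandit idea: each arm's expected payoff is non-decreasing in its number of pulls, and a sliding-window index optimistically extrapolates the recent growth over the remaining horizon. The index, the window, and the payoff curves are illustrative assumptions and not the actual R-ed-UCB index.

```python
import numpy as np

def sliding_window_index(rewards, window, horizon_left):
    """Optimistic index for a rested rising arm (illustrative simplification,
    not the exact R-ed-UCB index): estimate the current mean and the recent
    per-pull increment from the last `window` pulls, then extrapolate
    optimistically over the remaining horizon."""
    h = min(window, len(rewards))
    recent = rewards[-h:]
    mean_recent = np.mean(recent)
    # crude per-pull growth-rate estimate from the ends of the window
    growth = (recent[-1] - recent[0]) / max(h - 1, 1)
    return mean_recent + max(growth, 0.0) * horizon_left

# Toy rested rising instance: each arm's expected payoff is non-decreasing
# in the number of times that arm has been pulled (curves assumed for illustration).
def expected_payoff(arm, n_pulls):
    plateaus = [0.8, 0.9]          # asymptotic values
    rates = [0.05, 0.01]           # how fast each arm rises
    return plateaus[arm] * (1.0 - np.exp(-rates[arm] * n_pulls))

rng = np.random.default_rng(0)
T, n_arms, window = 2000, 2, 50
pulled = [[0.0] for _ in range(n_arms)]   # seed each arm with one observation
for t in range(T):
    idx = [sliding_window_index(np.array(p), window, T - t) for p in pulled]
    a = int(np.argmax(idx))
    r = expected_payoff(a, len(pulled[a])) + rng.normal(0.0, 0.05)
    pulled[a].append(r)

print([len(p) - 1 for p in pulled])   # pulls per arm
```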
With the continuous growth of the global economy and markets, resource imbalance has become one of the central problems in real-world logistics scenarios. In marine transportation, this trade imbalance leads to the Empty Container Repositioning (ECR) problem. Once goods are delivered from an exporting to an importing country, laden containers become empty and need to be repositioned to satisfy new goods requests in the exporting countries. In such a problem, the performance of any cooperative repositioning policy can strictly depend on the routes the vessels will follow (i.e., the fleet deployment). Historically, Operations Research (OR) approaches have been proposed to optimize the repositioning policy jointly with the fleet of vessels. However, the stochasticity of the future supply and demand of containers, together with the black-box and non-linear constraints present in the environment, makes these approaches unsuitable for these scenarios. In this paper, we introduce a novel framework, Configurable Semi-POMDPs, to model this type of problem. Furthermore, we provide a two-stage learning algorithm, "Configure & Conquer" (CC), which first configures the environment by finding an approximation of the optimal fleet deployment strategy, and then "conquers" it by learning an ECR policy in this tuned environment. We validate our approach on large and real-world instances of the problem. Our experiments highlight that CC avoids the pitfalls of OR methods and successfully optimizes both the ECR policy and the fleet of vessels, achieving superior performance in world trade environments.
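A schematic rendering of the two-stage structure described above, with hypothetical callables (`make_env`, `train_policy`, `evaluate`) standing in for the environment configuration, the ECR policy learner, and the evaluation procedure; it conveys the configure-then-conquer flow rather than the authors' implementation.

```python
import numpy as np

def configure_and_conquer(make_env, candidate_deployments, train_policy,
                          evaluate, config_budget, conquer_budget, seed=0):
    """Two-stage sketch in the spirit of the "Configure & Conquer" scheme:
    stage 1 picks a fleet deployment (environment configuration) by scoring
    each candidate with a short learning run; stage 2 trains the repositioning
    policy to convergence in the selected configuration.

    `make_env`, `train_policy`, and `evaluate` are hypothetical callables,
    not the authors' implementation.
    """
    rng = np.random.default_rng(seed)
    # --- Stage 1: configure -------------------------------------------------
    best_score, best_deployment = -np.inf, None
    for deployment in candidate_deployments:
        env = make_env(deployment)
        policy = train_policy(env, budget=config_budget, rng=rng)   # cheap run
        score = evaluate(env, policy)
        if score > best_score:
            best_score, best_deployment = score, deployment
    # --- Stage 2: conquer ---------------------------------------------------
    env = make_env(best_deployment)
    final_policy = train_policy(env, budget=conquer_budget, rng=rng)
    return best_deployment, final_policy
```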
Thanks to new data intelligence techniques, warehouse management systems have been constantly evolving and improving. However, many current optimizations are applied to specific cases or heavily rely on manual interaction. This is where reinforcement learning techniques come into play, providing automation and the ability to adapt current optimization policies. In this paper, we present a customizable environment that generalizes the definition of warehouse simulations for reinforcement learning. We also validate this environment against state-of-the-art reinforcement learning algorithms and compare these results with human and random policies.
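To make the idea of a customizable warehouse simulation concrete, here is a minimal grid-world sketch with a configurable size and item set; it is an assumption for illustration and not the environment released with the paper.

```python
import numpy as np

class SimpleWarehouseEnv:
    """Minimal, configurable warehouse grid sketch (not the environment
    released with the paper): an agent moves on a grid and must reach the
    cell of the currently requested item. The grid size and item locations
    are the customizable parameters."""

    ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]   # up, down, left, right

    def __init__(self, size=5, n_items=3, seed=0):
        self.size = size
        self.rng = np.random.default_rng(seed)
        self.item_cells = [tuple(self.rng.integers(0, size, 2)) for _ in range(n_items)]

    def reset(self):
        self.agent = (0, 0)
        self.target = self.item_cells[self.rng.integers(len(self.item_cells))]
        return self._obs()

    def step(self, action):
        dr, dc = self.ACTIONS[action]
        r = min(max(self.agent[0] + dr, 0), self.size - 1)
        c = min(max(self.agent[1] + dc, 0), self.size - 1)
        self.agent = (r, c)
        done = self.agent == self.target
        reward = 1.0 if done else -0.01          # small step penalty
        return self._obs(), reward, done

    def _obs(self):
        return np.array([*self.agent, *self.target], dtype=np.float32)

env = SimpleWarehouseEnv(size=7, n_items=4)
obs = env.reset()
obs, reward, done = env.step(3)
```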
In the maximum state entropy exploration framework, an agent interacts with a reward-free environment to learn a policy that maximizes the entropy of the expected state visitations it induces. Hazan et al. (2019) noted that the class of Markovian stochastic policies is sufficient for the maximum state entropy objective, and exploiting non-Markovianity is generally considered pointless in this setting. In this paper, we argue that non-Markovianity is instead paramount for maximum state entropy exploration in the finite-sample regime. In particular, we recast the objective to target the expected entropy of the state visitations induced in a single trial. Then, we show that the class of non-Markovian deterministic policies is sufficient for the introduced objective, while Markovian policies suffer non-zero regret in general. However, we prove that the problem of finding an optimal non-Markovian policy is NP-hard. Despite this negative result, we discuss avenues to address the problem in a tractable way, and how non-Markovian exploration could benefit the sample efficiency of online reinforcement learning in future works.
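To make the shift of objective concrete, the following LaTeX snippet contrasts the usual objective (entropy of the expected state distribution) with a single-trial formulation of the kind described above; the notation ($d^{\pi}$ for the expected state distribution, $d_{\tau}$ for the empirical distribution of a single trajectory of length $T$) is a schematic rendering rather than the paper's exact definitions.

```latex
% Infinite-trial objective: entropy of the *expected* state distribution
\max_{\pi} \; H\!\left(d^{\pi}\right),
\qquad d^{\pi}(s) = \frac{1}{T}\sum_{t=1}^{T} \Pr\left(s_t = s \mid \pi\right).

% Single-trial objective: *expected* entropy of the empirical state
% distribution induced by one trajectory \tau of length T
\max_{\pi} \; \mathbb{E}_{\tau \sim \pi}\!\left[ H\!\left(d_{\tau}\right) \right],
\qquad d_{\tau}(s) = \frac{1}{T}\sum_{t=1}^{T} \mathbb{1}\left[s_t = s\right].
```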
Several recent works have been dedicated to unsupervised reinforcement learning in a single environment, in which a policy is first pre-trained with unsupervised interactions, and then fine-tuned towards the optimal policy for several downstream supervised tasks defined over the same environment. Along this line, we address the problem of unsupervised reinforcement learning in a class of multiple environments, in which the policy is pre-trained with interactions from the whole class and then fine-tuned for several tasks in any environment of the class. Notably, the problem is inherently multi-objective, as we can trade off the pre-training objective between environments in many ways. In this work, we foster an exploration strategy that is sensitive to the most adverse cases within the class. Hence, we cast the exploration problem as the maximization of a critical percentile of the exploration entropy induced by the exploration strategy over the whole class of environments. Then, we present a policy gradient algorithm, $\alpha$-MEPOL, to optimize the introduced objective through mediated interactions with the class. Finally, we empirically demonstrate the ability of the algorithm to learn to explore challenging continuous environments, and we show that reinforcement learning benefits from the pre-trained exploration strategy w.r.t. learning from scratch.
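The helper below shows one way such a percentile-sensitive criterion can be aggregated from per-environment entropy estimates, by averaging the worst $\alpha$-fraction of the class (a CVaR-style surrogate). The function and its inputs are illustrative assumptions; the exact objective optimized by $\alpha$-MEPOL may differ in its details.

```python
import numpy as np

def percentile_objective(entropy_per_env, alpha=0.2):
    """Illustrative percentile-sensitive aggregation of per-environment
    exploration scores: average the entropies of the worst alpha-fraction
    of the environments in the class (a CVaR-style surrogate)."""
    values = np.sort(np.asarray(entropy_per_env))   # ascending: worst cases first
    k = max(1, int(np.ceil(alpha * len(values))))
    return values[:k].mean()

# e.g. entropy estimates of one exploration policy on 5 environments
print(percentile_objective([2.1, 1.7, 2.5, 0.9, 1.3], alpha=0.4))  # -> mean of the two worst
```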
Learning in a lifelong setting, where the dynamics continually evolve, is a hard challenge for current reinforcement learning algorithms. Nevertheless, this would be a much-needed feature for practical applications. In this paper, we propose an approach to learn a hyper-policy whose input is time and whose output is the parameters of the policy to be queried at that time. This hyper-policy is trained to maximize the estimated future performance, efficiently reusing past data at the cost of introducing a controlled bias. We combine the future performance estimate with the past performance to mitigate catastrophic forgetting. To avoid overfitting the collected data, we derive a differentiable variance estimate that we embed as a penalization term. Finally, we empirically validate our approach, in comparison with state-of-the-art algorithms, on realistic environments, including water resource management and trading.
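A minimal sketch of the hyper-policy idea, assuming a linear parameterization over simple polynomial features of time and a schematic penalized training criterion; both the parameterization and the weights are illustrative assumptions, not the construction used in the paper.

```python
import numpy as np

class LinearHyperPolicy:
    """Minimal hyper-policy sketch: given the (normalized) time t, it outputs
    the parameters of the policy to be queried at that time. A linear model
    over simple time features is an assumption for illustration."""

    def __init__(self, n_policy_params, n_features=3, seed=0):
        rng = np.random.default_rng(seed)
        self.W = 0.01 * rng.normal(size=(n_policy_params, n_features))

    def features(self, t):
        return np.array([1.0, t, t ** 2])       # polynomial features of time

    def policy_params(self, t):
        return self.W @ self.features(t)

def penalized_objective(estimated_future_perf, past_perf, variance_term,
                        forgetting_weight=0.5, variance_weight=0.1):
    """Schematic training criterion: trade off estimated future performance,
    a past-performance term that mitigates catastrophic forgetting, and a
    differentiable variance penalization (weights are illustrative)."""
    return (estimated_future_perf
            + forgetting_weight * past_perf
            - variance_weight * variance_term)

hyper = LinearHyperPolicy(n_policy_params=4)
theta_now = hyper.policy_params(t=0.7)   # parameters of the policy to deploy at time t
```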
Policy gradient (PG) algorithms are among the best candidates for the much-anticipated applications of reinforcement learning to real-world control tasks, such as robotics. However, the trial-and-error nature of these methods raises safety issues whenever the learning process itself must be performed on a physical system or involves any form of human-computer interaction. In this paper, we address a specific safety formulation, where both goals and dangers are encoded in a scalar reward signal and the learning agent is constrained to never worsen its performance, measured as the expected sum of rewards. By studying actor-only policy gradients from a stochastic optimization perspective, we establish improvement guarantees for a wide class of parametric policies, generalizing existing results on Gaussian policies. This, together with novel upper bounds on the variance of policy gradient estimators, allows us to identify meta-parameter schedules that guarantee monotonic improvement with high probability. The two key meta-parameters are the step size of the parameter updates and the batch size of the gradient estimation. Through a joint, adaptive selection of these meta-parameters, we obtain a policy gradient algorithm with monotonic improvement guarantees.
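The sketch below shows the overall structure of such an actor-only loop with jointly adapted step size and batch size; the specific adaptation rules are placeholders, since the schedules that actually guarantee monotonic improvement come from the paper's variance bounds and are not reproduced here.

```python
import numpy as np

def safe_pg_sketch(grad_estimator, theta0, n_iters,
                   base_step=0.1, base_batch=100, seed=0):
    """Structure of an actor-only policy gradient loop with jointly adapted
    step size and batch size. Shrinking the step and growing the batch when
    the gradient estimate is noisy is only a placeholder heuristic, not the
    paper's meta-parameter schedule.

    `grad_estimator(theta, batch_size, rng)` is a hypothetical callable
    returning a batch of per-trajectory gradient estimates.
    """
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    batch_size = base_batch
    for _ in range(n_iters):
        grads = grad_estimator(theta, batch_size, rng)      # shape (batch, dim)
        g_hat = grads.mean(axis=0)
        noise = grads.std(axis=0).mean() / (np.linalg.norm(g_hat) + 1e-8)
        step = base_step / (1.0 + noise)                    # noisier estimate -> smaller step
        batch_size = int(base_batch * (1.0 + noise))        # noisier estimate -> larger batch
        theta = theta + step * g_hat
    return theta
```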
Robot assistants are emerging as high-tech solutions to support people in everyday life. Following and assisting the user in the domestic environment requires flexible mobility to safely move in cluttered spaces. We introduce a new approach to person following for assistance and monitoring. Our methodology exploits an omnidirectional robotic platform to decouple the computation of linear and angular velocities and navigate within the domestic environment without losing track of the assisted person. While the linear velocities are managed by a conventional Dynamic Window Approach (DWA) local planner, we trained a Deep Reinforcement Learning (DRL) agent to predict optimized angular velocity commands and maintain the orientation of the robot towards the user. We evaluate our navigation system on a real omnidirectional platform in various indoor scenarios, demonstrating the competitive advantage of our solution compared to a standard differential-steering following approach.
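A minimal sketch of the decoupled control step, with hypothetical callables standing in for the DWA local planner (linear velocities) and the trained DRL agent (angular velocity); it only illustrates how the two command streams could be combined on an omnidirectional base.

```python
import numpy as np

def follow_step(robot_pose, person_pos, dwa_linear_planner, drl_angular_policy):
    """One control step of the decoupled following scheme described above:
    linear velocities come from a conventional local planner, while the
    angular velocity that keeps the robot oriented towards the person is
    predicted by a learned policy. `dwa_linear_planner` and
    `drl_angular_policy` are hypothetical callables standing in for the DWA
    planner and the trained DRL agent."""
    # bearing of the person in the robot frame (robot_pose = x, y, heading)
    dx, dy = person_pos[0] - robot_pose[0], person_pos[1] - robot_pose[1]
    bearing_error = np.arctan2(dy, dx) - robot_pose[2]
    bearing_error = (bearing_error + np.pi) % (2 * np.pi) - np.pi   # wrap to [-pi, pi)

    vx, vy = dwa_linear_planner(robot_pose, person_pos)     # linear velocities (omnidirectional base)
    wz = drl_angular_policy(np.array([bearing_error]))      # angular velocity command
    return vx, vy, wz
```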