We consider the problem of decision-making under uncertainty in an environment with safety constraints. Many business and industrial applications rely on real-time optimization with changing inputs to improve key performance indicators. In the case of unknown environmental characteristics, real-time optimization becomes challenging, particularly for the satisfaction of safety constraints. We propose the ARTEO algorithm, where we cast multi-armed bandits as a mathematical programming problem subject to safety constraints and learn the environmental characteristics through changes in optimization inputs and through exploration. We quantify the uncertainty in unknown characteristics by using Gaussian processes and incorporate it into the utility function as a contribution which drives exploration. We adaptively control the size of this contribution using a heuristic in accordance with the requirements of the environment. We guarantee the safety of our algorithm with a high probability through confidence bounds constructed under the regularity assumptions of Gaussian processes. Compared to existing safe-learning approaches, our algorithm does not require an exclusive exploration phase and follows the optimization goals even in the explored points, which makes it suitable for safety-critical systems. We demonstrate the safety and efficiency of our approach with two experiments: an industrial process and an online bid optimization benchmark problem.
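A minimal sketch of one ARTEO-style step, assuming a 1-D input, a known linear profit term, and a GP-modeled safety characteristic; the names, constants, and grid-based solver below are illustrative stand-ins for the paper's mathematical-programming formulation:

```python
# ARTEO-style step (sketch): maximize a known objective plus an exploration
# bonus, subject to a GP-based high-probability safety constraint.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
X_obs = rng.uniform(0.0, 1.0, size=(8, 1))                        # past optimization inputs
y_obs = np.sin(6 * X_obs[:, 0]) + 0.05 * rng.standard_normal(8)   # noisy safety characteristic

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), alpha=0.05**2)
gp.fit(X_obs, y_obs)

candidates = np.linspace(0.0, 1.0, 200).reshape(-1, 1)
mu, sigma = gp.predict(candidates, return_std=True)

profit = 2.0 * candidates[:, 0]            # known economic objective (illustrative)
z = 0.5                                    # adaptive exploration weight (heuristic)
beta = 2.0                                 # confidence multiplier for safety
safety_limit = 0.8

utility = profit + z * sigma               # uncertainty enters the utility directly
safe = mu + beta * sigma <= safety_limit   # high-probability safety constraint
utility[~safe] = -np.inf                   # infeasible inputs are never chosen

x_next = candidates[int(np.argmax(utility))]
print("next input:", x_next)
```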
This paper proposes a new class of real-time optimization schemes to overcome system-model mismatch for uncertain processes. The novelty of this work lies in integrating derivative-free optimization schemes and multi-fidelity Gaussian processes within a Bayesian optimization framework. The proposed scheme uses two Gaussian processes for the stochastic system: one emulating the (known) process model, and the other the true system from measurements. In this way, low-fidelity samples can be obtained via the model, while high-fidelity samples are obtained through measurements of the system. The framework captures the system's behavior in a non-parametric fashion while driving exploration through an acquisition function. The benefit of representing the system with Gaussian processes is the ability to perform uncertainty quantification in real time and to allow chance constraints to be satisfied with high confidence. This yields a practical approach, which is illustrated in numerical case studies, including a semi-batch photobioreactor optimization problem.
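The two-GP construction can be sketched as follows, with toy stand-ins for the process model and the true plant; the combined posterior is what an acquisition function and chance constraints would act on:

```python
# Sketch: a GP fit on many cheap model evaluations (low fidelity) plus a GP fit
# on the model-plant mismatch at the few real measurements (high fidelity).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def process_model(x):      # known but imperfect model
    return np.sin(5 * x)

def true_plant(x):         # unknown; only available through measurement
    return np.sin(5 * x) + 0.3 * x

X_model = np.linspace(0, 1, 30).reshape(-1, 1)           # many cheap samples
gp_model = GaussianProcessRegressor(RBF(0.2), alpha=1e-6)
gp_model.fit(X_model, process_model(X_model[:, 0]))

X_plant = np.array([[0.1], [0.5], [0.9]])                # few real measurements
mismatch = true_plant(X_plant[:, 0]) - process_model(X_plant[:, 0])
gp_delta = GaussianProcessRegressor(RBF(0.3), alpha=1e-4)
gp_delta.fit(X_plant, mismatch)

xq = np.linspace(0, 1, 100).reshape(-1, 1)
mu_m, sd_m = gp_model.predict(xq, return_std=True)
mu_d, sd_d = gp_delta.predict(xq, return_std=True)
mu_plant = mu_m + mu_d                      # corrected plant prediction
sd_plant = np.sqrt(sd_m**2 + sd_d**2)       # combined uncertainty
feasible = mu_plant + 2.0 * sd_plant <= 1.0 # example chance-constraint check
```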
We consider a sequential decision making task where we are not allowed to evaluate parameters that violate an a priori unknown (safety) constraint. A common approach is to place a Gaussian process prior on the unknown constraint and allow evaluations only in regions that are safe with high probability. Most current methods rely on a discretization of the domain and cannot be directly extended to the continuous case. Moreover, the way in which they exploit regularity assumptions about the constraint introduces an additional critical hyperparameter. In this paper, we propose an information-theoretic safe exploration criterion that directly exploits the GP posterior to identify the most informative safe parameters to evaluate. Our approach is naturally applicable to continuous domains and does not require additional hyperparameters. We theoretically analyze the method and show that we do not violate the safety constraint with high probability and that we explore by learning about the constraint up to arbitrary precision. Empirical evaluations demonstrate improved data-efficiency and scalability.
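A simplified sketch of the safe-exploration loop, using posterior variance as a crude proxy for the paper's information-theoretic criterion and a toy constraint function:

```python
# Safe exploration (sketch): evaluate only where the GP says the constraint
# holds with high probability; among safe points, pick the most informative.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def constraint(x):                       # unknown; x is safe iff constraint(x) <= 0
    return x**2 - 0.5

X = np.array([[0.0], [0.2]])             # initial known-safe evaluations
y = constraint(X[:, 0])

gp = GaussianProcessRegressor(RBF(0.3), alpha=1e-4)
for _ in range(10):
    gp.fit(X, y)
    grid = np.linspace(-1.5, 1.5, 300).reshape(-1, 1)
    mu, sd = gp.predict(grid, return_std=True)
    safe = mu + 2.0 * sd <= 0.0          # safe with high probability
    if not safe.any():
        break
    score = np.where(safe, sd, -np.inf)  # most informative safe parameter
    x_next = grid[int(np.argmax(score))]
    X = np.vstack([X, [x_next]])
    y = np.append(y, constraint(x_next[0]))
```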
Obtaining reliable, adaptive confidence sets for prediction functions (hypotheses) is a central challenge in sequential decision-making tasks such as bandits and model-based reinforcement learning. These confidence sets typically rely on prior assumptions about the hypothesis space, for example a known kernel of a reproducing kernel Hilbert space (RKHS). Hand-designing such kernels is error-prone, and misspecification may lead to poor or unsafe performance. In this work, we propose to meta-learn a kernel from offline data (Meta-KeL). For the case where the unknown kernel is a combination of known base kernels, we develop an estimator based on structured sparsity. Under mild conditions, we guarantee that our estimated RKHS yields valid confidence sets that, with an increasing amount of offline data, become as tight as the confidence sets given the true unknown kernel. We demonstrate our approach on kernelized bandit problems (a.k.a. Bayesian optimization), where we establish regret bounds competitive with those given the true kernel. We also empirically evaluate the effectiveness of our approach on Bayesian optimization tasks.
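As a rough illustration of the setting (not the paper's structured-sparsity estimator), candidate combinations of base kernels can be scored on offline data by GP log marginal likelihood:

```python
# Toy stand-in: the unknown kernel is a combination of known base kernels, and
# offline data selects that combination. Here we simply score candidate sums by
# marginal likelihood; Meta-KeL instead estimates sparse combination weights.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, Matern, DotProduct

rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, size=(40, 1))
y = np.sin(4 * X[:, 0]) + 0.5 * X[:, 0] + 0.05 * rng.standard_normal(40)

candidates = {
    "rbf": RBF(0.3),
    "rbf+linear": RBF(0.3) + DotProduct(),
    "matern+linear": Matern(0.3, nu=1.5) + DotProduct(),
}
scores = {}
for name, k in candidates.items():
    gp = GaussianProcessRegressor(kernel=k, alpha=0.05**2, optimizer=None)
    gp.fit(X, y)
    scores[name] = gp.log_marginal_likelihood_value_
print(max(scores, key=scores.get), scores)
```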
Reinforcement learning (RL) controllers have generated excitement within the control community. The primary advantage of RL controllers over existing methods is their ability to optimize uncertain systems independently of explicit assumptions about the process uncertainty. Recent attention in engineering applications has been directed toward the development of safe RL controllers. Previous works have proposed methods that account for constraint satisfaction through constraint tightening from the domain of stochastic model predictive control. Here, we extend these methods to plant-model mismatch. Specifically, we propose a data-driven approach that fits Gaussian processes using an offline simulation model and uses the associated posterior uncertainty predictions to account for joint chance constraints and plant-model mismatch. The method is benchmarked against nonlinear model predictive control via case studies. The results demonstrate the method's ability to account for process uncertainty and to satisfy joint chance constraints even in the presence of plant-model mismatch.
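The constraint-tightening idea can be sketched by backing off a nominal constraint with the GP posterior on the plant-model mismatch; the functions and constants below are illustrative:

```python
# GP-based back-offs (sketch): tighten g(x) <= 0 by the posterior mean plus a
# multiple of the posterior std of the learned mismatch, so the chance
# constraint holds despite plant-model mismatch.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

X_sim = np.linspace(0, 1, 12).reshape(-1, 1)                  # offline simulations
mismatch = 0.2 * np.sin(3 * X_sim[:, 0])                      # plant minus model
gp = GaussianProcessRegressor(RBF(0.25), alpha=1e-4).fit(X_sim, mismatch)

def tightened_constraint(x, g_nominal, beta=2.0):
    """Return g(x) + back-off; feasible iff the tightened value is <= 0."""
    mu, sd = gp.predict(np.atleast_2d(x), return_std=True)
    return g_nominal(x) + mu[0] + beta * sd[0]

g = lambda x: float(x) - 0.7                                  # nominal constraint
print(tightened_constraint(0.5, g))
```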
Decision-making problems are commonly formulated as optimization problems, which are then solved to make optimal decisions. In this work, we consider the inverse problem where we use prior decision data to uncover the underlying decision-making process in the form of a mathematical optimization model. This statistical learning problem is referred to as data-driven inverse optimization. We focus on problems where the underlying decision-making process is modeled as a convex optimization problem whose parameters are unknown. We formulate the inverse optimization problem as a bilevel program and propose an efficient block coordinate descent-based algorithm to solve large problem instances. Numerical experiments on synthetic datasets demonstrate the computational advantage of our method compared to standard commercial solvers. Moreover, the real-world utility of the proposed approach is highlighted through two realistic case studies in which we consider estimating risk preferences and learning local constraint parameters of agents in a multiplayer Nash bargaining game.
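A toy version of the inverse problem, assuming a known quadratic forward objective with an unknown linear cost vector; the paper's bilevel formulation is approximated here by coordinate-wise (block) descent on the decision-fitting loss:

```python
# Data-driven inverse optimization (sketch): fit the unknown cost c of a convex
# QP so that its optimizer reproduces observed decisions, by block coordinate
# descent on the fitting loss.
import numpy as np
from scipy.optimize import minimize, minimize_scalar

Q = np.array([[2.0, 0.5], [0.5, 1.0]])           # known strongly convex curvature

def forward(c):
    """Solve min_x 0.5 x'Qx - c'x subject to 0 <= x <= 1."""
    res = minimize(lambda x: 0.5 * x @ Q @ x - c @ x, x0=np.zeros(2),
                   bounds=[(0, 1), (0, 1)])
    return res.x

c_true = np.array([1.2, 0.4])
x_obs = forward(c_true)                          # observed decision data

def loss(c):
    return np.sum((forward(c) - x_obs) ** 2)

c = np.zeros(2)
for _ in range(5):                               # BCD sweeps
    for j in range(2):                           # one coordinate block at a time
        def f(cj, j=j):
            c_try = c.copy()
            c_try[j] = cj
            return loss(c_try)
        c[j] = minimize_scalar(f, bounds=(-3, 3), method="bounded").x
print("recovered c:", c)
```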
We consider the continuum-armed bandit problem of recommending the best arm under a fixed budget with aggregated feedback. This is motivated by applications where obtaining precise rewards is impossible or expensive, while aggregated rewards or feedback, such as the average over a subset, are available. We assume the rewards arise from a Gaussian process, which restricts the set of possible reward functions, and propose the Gaussian Process Optimistic Optimization (GPOO) algorithm. We adaptively construct a tree whose nodes represent subsets of the arm space, where the feedback is the aggregated reward of a node's representatives. We propose a new notion of simple regret for aggregated feedback on the recommended arms. We provide a theoretical analysis of the proposed algorithm and recover single-point feedback as a special case. We illustrate GPOO and compare it with related algorithms on simulated data.
Gaussian processes have become a promising tool for various safety-critical settings, since the posterior variance can be used to directly estimate the model error and quantify risk. However, state-of-the-art techniques for safety-critical settings hinge on the kernel hyperparameters being known, which is generally not the case. To mitigate this, we introduce robust Gaussian process uniform error bounds for settings with unknown hyperparameters. Our approach computes a confidence region in hyperparameter space, which enables us to obtain a probabilistic upper bound on the model error of a Gaussian process with arbitrary hyperparameters. We do not require any prior bounds on the hyperparameters, a common assumption in related work; instead, we derive the bounds from data in an intuitive manner. We also employ the proposed technique to provide performance guarantees for a class of learning-based control problems. Experiments show that the bounds perform significantly better than vanilla and fully Bayesian Gaussian processes.
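The worst-case-over-hyperparameters idea can be sketched by taking the envelope of GP confidence bounds over a set of lengthscales; the interval below is hand-picked for illustration, whereas the paper derives the confidence region from data:

```python
# Robust error bounds (sketch): take the worst case of the GP confidence bound
# over a confidence region of hyperparameters instead of one point estimate.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(3)
X = rng.uniform(0, 1, size=(15, 1))
y = np.sin(6 * X[:, 0]) + 0.05 * rng.standard_normal(15)
xq = np.linspace(0, 1, 100).reshape(-1, 1)

beta = 2.0
lengthscales = np.linspace(0.1, 0.5, 9)          # assumed hyperparameter region
upper = np.full(100, -np.inf)
lower = np.full(100, np.inf)
for ls in lengthscales:
    gp = GaussianProcessRegressor(RBF(ls), alpha=0.05**2, optimizer=None)
    gp.fit(X, y)
    mu, sd = gp.predict(xq, return_std=True)
    upper = np.maximum(upper, mu + beta * sd)    # robust (worst-case) upper bound
    lower = np.minimum(lower, mu - beta * sd)    # robust lower bound
# `upper`/`lower` now bound the function for every hyperparameter in the region.
```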
Reinforcement learning is a powerful paradigm for learning optimal policies from experimental data. However, to find optimal policies, most reinforcement learning algorithms explore all possible actions, which may be harmful for real-world systems. As a consequence, learning algorithms are rarely applied on safety-critical systems in the real world. In this paper, we present a learning algorithm that explicitly considers safety, defined in terms of stability guarantees. Specifically, we extend control-theoretic results on Lyapunov stability verification and show how to use statistical models of the dynamics to obtain high-performance control policies with provable stability certificates. Moreover, under additional regularity assumptions in terms of a Gaussian process prior, we prove that one can effectively and safely collect data in order to learn about the dynamics and thus both improve control performance and expand the safe region of the state space. In our experiments, we show how the resulting algorithm can safely optimize a neural network policy on a simulated inverted pendulum, without the pendulum ever falling down.
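A sketch of the Lyapunov-based certification step under toy dynamics: a state is certified if the Lyapunov value decreases even for the worst case inside the GP confidence interval:

```python
# Lyapunov safety check (sketch): with a GP model of the closed-loop dynamics,
# certify states where v decreases under the confidence-bound worst case.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def true_dynamics(x):                  # unknown closed-loop dynamics
    return 0.8 * x + 0.05 * np.sin(3 * x)

X = np.linspace(-1, 1, 10).reshape(-1, 1)       # safely collected transitions
gp = GaussianProcessRegressor(RBF(0.4), alpha=1e-4).fit(X, true_dynamics(X[:, 0]))

def v(x):                              # Lyapunov candidate
    return x**2

grid = np.linspace(-1, 1, 200).reshape(-1, 1)
mu, sd = gp.predict(grid, return_std=True)
beta = 2.0                             # confidence scale
# Worst-case next-state Lyapunov value inside the confidence interval
# (v is convex, so the maximum is attained at an endpoint):
v_next_worst = np.maximum(v(mu - beta * sd), v(mu + beta * sd))
certified = v_next_worst < v(grid[:, 0])        # Lyapunov decrease holds
```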
The past half-decade has seen a steep rise in the number of contributions on safe learning methods for real-world robot deployments from both the control and reinforcement learning communities. This article provides a concise but holistic review of the recent advances made in using machine learning to achieve safe decision-making under uncertainty, with a focus on unifying the language and frameworks used in control-theoretic and reinforcement learning research. Our review includes: learning-based control approaches that safely improve performance by learning the uncertain dynamics; reinforcement learning approaches that encourage safety or robustness; and methods that can formally certify the safety of a learned control policy. As data- and learning-based robot control methods continue to gain traction, researchers must understand when and how to best leverage them in real-world scenarios where safety is imperative, such as when operating in close proximity to humans. We highlight some of the open challenges that will drive the field of robot learning in the coming years, and emphasize the need for realistic physics-based benchmarks to facilitate fair comparisons between control and reinforcement learning approaches.
Finding an optimal individualized treatment regimen is considered one of the most challenging precision medicine problems. Various patient characteristics influence the response to treatment, and hence there is no one-size-fits-all regimen. Moreover, administering even a single unsafe dose during the course of treatment can have catastrophic consequences for a patient's health. Therefore, an individualized treatment model must ensure patient safety while efficiently optimizing the course of therapy. In this work, we study a prevalent and essential medical problem in which the treatment aims to keep a physiological variable within a range, preferably close to a target level. Such tasks are also relevant in other domains. We propose ESCADA, a generic algorithm for this problem structure, which makes individualized and context-aware optimal dose recommendations while ensuring patient safety. We derive high-probability upper bounds on the regret of ESCADA along with safety guarantees. Finally, we conduct extensive simulations on the bolus insulin dose allocation problem for type 1 diabetes and compare ESCADA's performance against Thompson sampling, rule-based dose allocators, and clinicians.
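A sketch of safe, target-seeking dose selection in the spirit of this setting; the response model, safe range, and constants are illustrative only:

```python
# Safe dosing (sketch): among doses whose confidence interval for the
# physiological response stays inside the safe range, recommend the dose whose
# posterior mean is closest to the target.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(4)
doses = rng.uniform(0, 10, size=(6, 1))                         # past doses
response = 120 - 6 * doses[:, 0] + 2 * rng.standard_normal(6)   # e.g. glucose

gp = GaussianProcessRegressor(RBF(2.0), alpha=2.0**2).fit(doses, response)

grid = np.linspace(0, 10, 200).reshape(-1, 1)
mu, sd = gp.predict(grid, return_std=True)
beta = 2.0
low, high, target = 70.0, 180.0, 110.0                  # safe range and target
safe = (mu - beta * sd >= low) & (mu + beta * sd <= high)

score = np.where(safe, -np.abs(mu - target), -np.inf)   # closest to target, safely
dose = grid[int(np.argmax(score))]
print("recommended dose:", dose)
```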
In robotics, optimizing controller parameters under safety constraints is an important challenge. Safe Bayesian optimization (BO) quantifies uncertainty in the objective and constraints to safely guide exploration in such settings. Hand-designing a suitable probabilistic model can be challenging, however. In the presence of unknown safety constraints, it is crucial to choose reliable model hyper-parameters to avoid safety violations. Here, we propose a data-driven approach to this problem by meta-learning priors for safe BO from offline data. We build on a meta-learning algorithm, F-PACOH, capable of providing reliable uncertainty quantification in settings of data scarcity. As a core contribution, we develop a novel framework for choosing safety-compliant priors in a data-driven manner via empirical uncertainty metrics and a frontier search algorithm. On benchmark functions and a high-precision motion system, we demonstrate that our meta-learned priors accelerate the convergence of safe BO approaches while maintaining safety.
We study sequential decision-making with known rewards and unknown constraints, motivated by settings where the constraints represent expensive-to-evaluate human preferences, such as safe and comfortable driving behavior. We formalize the challenge of interactively learning these constraints as a novel linear bandit problem, which we call constrained linear best-arm identification. To solve this problem, we propose the Adaptive Constraint Learning (ACOL) algorithm. We provide an instance-dependent lower bound for constrained linear best-arm identification and show that ACOL's sample complexity matches the lower bound in the worst case. In the average case, ACOL's sample complexity bound is still significantly tighter than the bounds of simpler approaches. In synthetic experiments, ACOL performs on par with an oracle solution and outperforms a range of baselines. As an application, we consider learning constraints to represent human preferences in a driving simulation. For this application, ACOL is significantly more sample-efficient than the alternatives. Further, we find that learning preferences as constraints is more robust to changes in the driving scenario than encoding the preferences directly in the reward function.
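A simplified elimination-style loop for the setting (known rewards, unknown linear constraint); ACOL's actual sampling rule and stopping condition are more refined than this uncertainty-sampling heuristic:

```python
# Constrained linear best-arm identification (toy sketch): query arms to
# resolve feasibility under an unknown linear constraint, via ridge regression
# and confidence widths.
import numpy as np

rng = np.random.default_rng(5)
arms = rng.standard_normal((20, 3))              # known arm features
rewards = arms @ np.array([1.0, 0.5, -0.2])      # known rewards
theta_c = np.array([0.8, -0.6, 0.4])             # unknown constraint parameter
tau, noise = 0.0, 0.1                            # arm x feasible iff theta_c @ x <= tau

V, b = np.eye(3), np.zeros(3)                    # ridge-regression statistics
for t in range(200):
    theta_hat = np.linalg.solve(V, b)
    Vinv = np.linalg.inv(V)
    width = np.sqrt(np.einsum("ij,jk,ik->i", arms, Vinv, arms))
    ucb, lcb = arms @ theta_hat + width, arms @ theta_hat - width
    confirmed = ucb <= tau                       # certainly feasible
    undecided = (lcb <= tau) & ~confirmed        # feasibility still unresolved
    best_confirmed = rewards[confirmed].max() if confirmed.any() else -np.inf
    relevant = undecided & (rewards > best_confirmed)
    if not relevant.any():                       # best feasible arm is identified
        break
    i = int(np.argmax(np.where(relevant, width, -np.inf)))   # most uncertain arm
    y = theta_c @ arms[i] + noise * rng.standard_normal()
    V += np.outer(arms[i], arms[i])
    b += y * arms[i]

feasible = ucb <= tau
print("best feasible arm:", int(np.argmax(np.where(feasible, rewards, -np.inf))))
```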
Safety is one of the biggest concerns to applying reinforcement learning (RL) to the physical world. In its core part, it is challenging to ensure RL agents persistently satisfy a hard state constraint without white-box or black-box dynamics models. This paper presents an integrated model learning and safe control framework to safeguard any agent, where its dynamics are learned as Gaussian processes. The proposed theory provides (i) a novel method to construct an offline dataset for model learning that best achieves safety requirements; (ii) a parameterization rule for safety index to ensure the existence of safe control; (iii) a safety guarantee in terms of probabilistic forward invariance when the model is learned using the aforementioned dataset. Simulation results show that our framework guarantees almost zero safety violation on various continuous control tasks.
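The safeguarding step can be sketched as follows: among candidate actions, pick the one closest to the agent's proposal whose worst-case next safety-index value, under the GP confidence bound, satisfies a decrease condition; all models below are toys:

```python
# Safe control via a safety index and GP dynamics (sketch): require the
# worst-case next safety index phi(x') <= max(phi(x) - eta, 0).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def phi(x):                                    # safety index: x is unsafe if phi(x) > 0
    return np.abs(x) - 1.0

def true_step(x, u):                           # unknown discrete-time dynamics
    return x + 0.5 * u + 0.02 * x**2

rng = np.random.default_rng(6)
XU = rng.uniform(-1, 1, size=(30, 2))          # offline (state, action) dataset
gp = GaussianProcessRegressor(RBF(0.5), alpha=1e-4)
gp.fit(XU, true_step(XU[:, 0], XU[:, 1]))

def safe_action(x, u_nominal, eta=0.05, beta=2.0):
    candidates = np.linspace(-1, 1, 41)
    queries = np.column_stack([np.full(41, x), candidates])
    mu, sd = gp.predict(queries, return_std=True)
    # worst-case next safety index inside the confidence interval
    worst_phi = np.maximum(phi(mu - beta * sd), phi(mu + beta * sd))
    ok = worst_phi <= max(phi(x) - eta, 0.0)
    if not ok.any():
        return None                            # no certified-safe control found
    dist = np.where(ok, np.abs(candidates - u_nominal), np.inf)
    return candidates[int(np.argmin(dist))]

print(safe_action(0.9, u_nominal=0.8))
```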
Reinforcement learning (RL) is a promising optimal control technique for multi-energy management systems. It does not require a model a priori, reducing upfront and ongoing project-specific engineering effort, and is capable of learning better representations of the underlying system dynamics. However, vanilla RL does not provide constraint-satisfaction guarantees, resulting in various unsafe interactions in safety-critical environments. In this paper, we present two novel safe RL methods, SafeFallback and GiveSafe, in which the safety-constraint formulation is decoupled from the RL formulation and which provide hard constraint-satisfaction guarantees both during training (exploration) and during exploitation of the (near-)optimal policy. In a simulated multi-energy system case study, we show that both methods start with a significantly higher utility (i.e., useful policy) compared to a vanilla RL benchmark (94.6% and 82.8%, versus 35.5%). The proposed SafeFallback method even outperforms the vanilla RL benchmark (102.9% versus 100%). We conclude that both methods are viable safety-constraint-handling techniques applicable beyond RL, as demonstrated with random agents, while still providing hard guarantees. Finally, we propose fundamental future work on, among other things, improving the constraint functions themselves as more data becomes available.
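The decoupling can be sketched as a safety layer that filters any agent's proposal through known constraint functions and substitutes a fallback action on violation, giving hard guarantees during both training and exploitation; the constraint and fallback below are illustrative stand-ins:

```python
# Safety layer (sketch): check any agent's proposed action against known
# constraint functions; override with an a priori safe fallback if it fails.
import numpy as np

def constraint_ok(state, action):
    """Known constraint function, e.g. power within physical limits."""
    return abs(action) <= 1.0 - 0.5 * abs(state)

def fallback_action(state):
    """A priori known safe action, e.g. do nothing / shed load."""
    return 0.0

def safe_step(agent_action, state):
    if constraint_ok(state, agent_action):
        return agent_action               # proposal passes the safety layer
    return fallback_action(state)         # hard override, even while training

rng = np.random.default_rng(7)
state = 0.8
proposal = rng.uniform(-2, 2)             # could come from any (even random) agent
print(safe_step(proposal, state))
```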
Deploying autonomous systems in safety-critical scenarios requires verifying their behavior in the presence of uncertainties and black-box components that affect the system dynamics. In this paper, we develop a framework for verifying partially observable, discrete-time dynamical systems with unmodeled dynamics against temporal logic specifications from a given input-output dataset. The verification framework employs Gaussian process (GP) regression to learn the unknown dynamics from the dataset and abstracts the continuous-space system as a finite-state, uncertain Markov decision process (MDP). This abstraction relies on transition probability intervals that capture the uncertainty due to the error in GP regression, by using reproducing kernel Hilbert space analysis, as well as the uncertainty induced by discretization. The framework utilizes existing model-checking tools to verify the uncertain MDP abstraction against a given temporal logic specification. We establish the correctness of extending the verification results on the abstraction to the underlying partially observable system. We show that the computational complexity of the framework is polynomial in the size of the dataset and the discrete abstraction. The complexity analysis illustrates a trade-off between the quality of the verification results and the computational burden of handling larger datasets and finer abstractions. Finally, we demonstrate the efficacy of our learning and verification framework on several case studies with linear, nonlinear, and switched dynamical systems.
Owing to its data efficiency, Bayesian optimization has emerged at the forefront of expensive black-box optimization. In recent years, there has been a surge of research on the development of new Bayesian optimization algorithms and their applications. Hence, this paper attempts to provide a comprehensive and updated survey of recent advances in Bayesian optimization and to identify interesting open problems. We categorize the existing work on Bayesian optimization into nine main groups according to the motivations and focus of the proposed algorithms. For each category, we present the main advances in the construction of surrogate models and the adaptation of acquisition functions. Finally, we discuss open questions and suggest promising future research directions, in particular with regard to heterogeneity, privacy preservation, and fairness in distributed and federated optimization systems.
We consider personalized federated learning in which, in addition to a global objective, each client is also interested in maximizing a personalized local objective. We study this problem under a general continuous action-space setting where the objective functions belong to a reproducing kernel Hilbert space. We propose algorithms based on surrogate Gaussian process (GP) models that achieve the optimal regret order (up to polylogarithmic factors). Furthermore, we show that sparse approximations of the GP models significantly reduce the communication cost across clients.
Safe exploration is key to applying reinforcement learning (RL) in safety-critical systems. Existing safe exploration methods guarantee safety under regularity assumptions but are difficult to apply to large-scale real-world problems. We propose a novel algorithm, SPO-LF, which optimizes an agent's policy while learning the relation between locally available features obtained via sensors and the environmental reward/safety using generalized linear function approximation. We provide theoretical guarantees on its safety and optimality. We experimentally show that our algorithm is 1) more efficient in terms of sample complexity and computational cost, 2) more applicable to large-scale problems than previous safe RL methods with theoretical guarantees, and 3) comparably sample-efficient and safer compared with existing advanced deep RL methods with safety constraints.
Many applications require optimizing an unknown, noisy function that is expensive to evaluate. We formalize this task as a multiarmed bandit problem, where the payoff function is either sampled from a Gaussian process (GP) or has low RKHS norm. We resolve the important open problem of deriving regret bounds for this setting, which imply novel convergence rates for GP optimization. We analyze GP-UCB, an intuitive upper-confidence based algorithm, and bound its cumulative regret in terms of maximal information gain, establishing a novel connection between GP optimization and experimental design. Moreover, by bounding the latter in terms of operator spectra, we obtain explicit sublinear regret bounds for many commonly used covariance functions. In some important cases, our bounds have surprisingly weak dependence on the dimensionality. In our experiments on real sensor data, GP-UCB compares favorably with other heuristical GP optimization approaches.
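A minimal GP-UCB loop on a toy 1-D payoff, with a heuristic constant beta (the paper instead chooses beta so that the regret bounds hold):

```python
# GP-UCB (sketch): repeatedly pick the point maximizing mu + sqrt(beta) * sigma,
# observe a noisy payoff, and refit the GP.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(8)
f = lambda x: -(x - 0.3) ** 2 + 0.5            # unknown payoff function
grid = np.linspace(0, 1, 200).reshape(-1, 1)

X = np.array([[0.5]])
y = f(X[:, 0]) + 0.05 * rng.standard_normal(1)
gp = GaussianProcessRegressor(RBF(0.2), alpha=0.05**2)

beta = 2.0                                     # confidence parameter (heuristic)
for t in range(15):
    gp.fit(X, y)
    mu, sd = gp.predict(grid, return_std=True)
    x_next = grid[int(np.argmax(mu + np.sqrt(beta) * sd))]
    X = np.vstack([X, [x_next]])
    y = np.append(y, f(x_next[0]) + 0.05 * rng.standard_normal())
print("best observed x:", X[int(np.argmax(y))])
```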