Entropy Search (ES) and Predictive Entropy Search (PES) are popular and empirically successful Bayesian Optimization techniques. Both rely on a compelling information-theoretic motivation, and maximize the information gained about the arg max of the unknown function; yet, both are plagued by the expensive computation required to estimate entropies. We propose a new criterion, Max-value Entropy Search (MES), that instead uses the information about the maximum function value. We show relations of MES to other Bayesian optimization methods, and establish a regret bound. We observe that MES maintains or improves the good empirical performance of ES/PES, while tremendously lightening the computational burden. In particular, MES is much more robust to the number of samples used for computing the entropy, and hence more efficient for higher-dimensional problems.
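For concreteness, here is a minimal sketch of the MES acquisition in Python, assuming the GP posterior mean/std at candidate points and a set of sampled maximum values y* (e.g. from the Gumbel approximation discussed in the paper) are already available; the function name and the clipping constant are illustrative, not the paper's reference implementation.

```python
import numpy as np
from scipy.stats import norm

def mes_acquisition(mu, sigma, y_star_samples):
    """Closed-form MES score for a GP posterior: average entropy reduction
    over sampled max values y*. mu, sigma have shape (N,); y_star_samples (K,).
    """
    gamma = (y_star_samples[:, None] - mu[None, :]) / sigma[None, :]  # (K, N)
    cdf = np.clip(norm.cdf(gamma), 1e-12, None)  # guard against log(0)
    return np.mean(gamma * norm.pdf(gamma) / (2 * cdf) - np.log(cdf), axis=0)
```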
Bayesian optimization with Gaussian processes has become an increasingly popular tool in the machine learning community. It is efficient and can be used when very little is known about the objective function, making it popular in expensive black-box optimization scenarios. It uses Bayesian methods to sample the objective efficiently using an acquisition function which incorporates the posterior estimate of the objective. However, there are several different parameterized acquisition functions in the literature, and it is often unclear which one to use. Instead of using a single acquisition function, we adopt a portfolio of acquisition functions governed by an online multi-armed bandit strategy. We propose several portfolio strategies, the best of which we call GP-Hedge, and show that this method outperforms the best individual acquisition function. We also provide a theoretical bound on the algorithm's performance.
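A minimal sketch of one GP-Hedge round, assuming each acquisition function in the portfolio can return its own argmax candidate and that the current GP posterior mean is available; the helper names are hypothetical and would be supplied by the surrounding BO loop.

```python
import numpy as np

def gp_hedge_round(acquisition_argmaxes, posterior_mean, gains, eta=1.0, rng=None):
    """One GP-Hedge round. `acquisition_argmaxes` is a list of callables, each
    returning the argmax candidate of one acquisition function; `posterior_mean`
    evaluates the current GP mean at a point.
    """
    rng = rng or np.random.default_rng()
    nominees = [a() for a in acquisition_argmaxes]   # each strategy nominates a point
    logits = eta * (gains - gains.max())             # subtract max for stability
    p = np.exp(logits) / np.exp(logits).sum()
    i = rng.choice(len(nominees), p=p)               # Hedge picks who acts this round
    # Reward every strategy by the GP mean at its own nominee (the GP-Hedge update).
    gains = gains + np.array([posterior_mean(x) for x in nominees])
    return nominees[i], gains
```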
Bayesian optimization (BO) based on Gaussian process models is a powerful paradigm for optimizing expensive-to-evaluate black-box functions. While several BO algorithms provably converge to the global optimum of the unknown function, they assume that the hyperparameters of the kernel are known. This is not the case in practice, and misspecification often causes these algorithms to converge to poor local optima. In this paper, we present the first BO algorithm that is provably no-regret and converges to the optimum without prior knowledge of the hyperparameters. We slowly adapt the hyperparameters of a stationary kernel, thereby expanding the associated function class over time, so that the BO algorithm considers increasingly complex candidate functions. Based on this theoretical insight, we propose several practical algorithms that achieve the empirical sample efficiency of BO with online hyperparameter estimation while retaining the theoretical convergence guarantees. We evaluate our method on several benchmark problems.
Recently, there has been rising interest in Bayesian optimization, the optimization of an unknown function whose assumptions are usually expressed by a Gaussian process (GP) prior. We study an optimization strategy that directly uses an estimate of the argmax of the function. This strategy offers both practical and theoretical advantages: no tradeoff parameter needs to be selected, and, moreover, we establish close connections to the popular GP-UCB and GP-PI strategies. Our approach can be understood as automatically and adaptively trading off exploration and exploitation in GP-UCB and GP-PI. We illustrate the effect of this adaptive tuning via bounds on the regret as well as an extensive empirical evaluation on robotics and vision tasks, demonstrating the robustness of this strategy for a range of performance criteria.
Bandit methods for black-box optimisation, such as Bayesian optimisation, are used in a variety of applications including hyper-parameter tuning and experiment design. Recently, multi-fidelity methods have garnered considerable attention since function evaluations have become increasingly expensive in such applications. Multi-fidelity methods use cheap approximations to the function of interest to speed up the overall optimisation process. However, most multi-fidelity methods assume only a finite number of approximations. In many practical applications however, a continuous spectrum of approximations might be available. For instance, when tuning an expensive neural network, one might choose to approximate the cross-validation performance using less data N and/or fewer training iterations T. Here, the approximations are best viewed as arising out of a continuous two-dimensional space (N, T). In this work, we develop a Bayesian optimisation method, BOCA, for this setting. We characterise its theoretical properties and show that it achieves better regret than strategies which ignore the approximations. BOCA outperforms several other baselines in synthetic and real experiments.
Bayesian optimization is known to be difficult to scale to high dimensions, because the acquisition step requires solving a non-convex optimization problem in the same search space. To scale the method while retaining its benefits, we propose an algorithm (LineBO) that restricts the problem to a sequence of iteratively chosen one-dimensional sub-problems. We show that our algorithm converges globally and obtains a fast local rate when the function is strongly convex. Moreover, if the objective has an invariant subspace, our method automatically adapts to the effective dimension without changing the algorithm. Our method scales well to high dimensions and makes use of a global Gaussian process model. When combined with the SafeOpt algorithm to solve the sub-problems, we obtain the first safe Bayesian optimization algorithm with theoretical guarantees applicable in high-dimensional settings. We evaluate our method on multiple synthetic benchmarks, where we obtain competitive performance. Furthermore, we deploy our algorithm to optimize the beam intensity of the Swiss Free Electron Laser with up to 40 parameters while satisfying safe operation constraints.
Bayesian Optimisation (BO) is a technique used in optimising a $D$-dimensional function which is typically expensive to evaluate. While there have been many successes for BO in low dimensions, scaling it to high dimensions has been notoriously difficult. Existing literature on the topic considers only very restrictive settings. In this paper, we identify two key challenges in this endeavour. We tackle these challenges by assuming an additive structure for the function. This setting is substantially more expressive and contains a richer class of functions than previous work. We prove that, for additive functions, the regret has only linear dependence on $D$ even though the function depends on all $D$ dimensions. We also demonstrate several other statistical and computational benefits in our framework. Via synthetic examples, a scientific simulation and a face detection problem we demonstrate that our method outperforms naive BO on additive functions and on several examples where the function is not additive.
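As a sketch of the additive assumption, the kernel below sums independent RBF components over disjoint groups of coordinates; the particular grouping and lengthscales are illustrative placeholders (the paper also addresses choosing the decomposition).

```python
import numpy as np

def additive_rbf_kernel(X1, X2, groups, lengthscales):
    """Additive kernel: k(x, x') = sum_j k_j(x^(j), x'^(j)), where each k_j
    is an RBF acting only on its own low-dimensional group of coordinates.
    `groups` is a list of index arrays partitioning the D dimensions.
    """
    K = np.zeros((X1.shape[0], X2.shape[0]))
    for idx, ell in zip(groups, lengthscales):
        diff = X1[:, None, idx] - X2[None, :, idx]
        K += np.exp(-0.5 * np.sum(diff**2, axis=-1) / ell**2)
    return K

# e.g. D = 4 split into two 2-d groups:
# K = additive_rbf_kernel(X, X, groups=[np.array([0, 1]), np.array([2, 3])],
#                         lengthscales=[1.0, 1.0])
```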
Bayesian optimization usually assumes that a Bayesian prior is given. However, the strong theoretical guarantees of Bayesian optimization are often compromised in practice because of unknown parameters in the prior. In this paper, we adopt a variant of empirical Bayes and show that, by estimating the Gaussian process prior from offline data sampled from the same prior and constructing unbiased estimators of the posterior, variants of GP-UCB and probability of improvement achieve a near-zero regret bound, which decreases to a constant proportional to the observational noise as the number of offline data points and online evaluations increases. Empirically, we have verified our approach on challenging simulated robotics problems featuring task and motion planning.
Many applications require optimizing an unknown, noisy function that is expensive to evaluate. We formalize this task as a multi-armed bandit problem, where the payoff function is either sampled from a Gaussian process (GP) or has low norm in a reproducing kernel Hilbert space. We resolve the important open problem of deriving regret bounds for this setting, which imply novel convergence rates for GP optimization. We analyze an intuitive Gaussian process upper confidence bound algorithm (GP-UCB), and bound its cumulative regret in terms of maximal information gain, establishing a novel connection between GP optimization and experimental design. Moreover, by bounding the latter in terms of operator spectra, we obtain explicit sublinear regret bounds for many commonly used covariance functions. In some important cases, our bounds have surprisingly weak dependence on the dimensionality. In our experiments on real sensor data, GP-UCB compares favorably with other heuristic GP optimization approaches.
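A minimal sketch of the GP-UCB selection rule over a finite candidate set; the β_t schedule below mirrors the flavour of the paper's finite-domain result, but the exact constants depend on the setting, so treat it as illustrative.

```python
import numpy as np

def gp_ucb_select(mu, sigma, t, delta=0.1):
    """GP-UCB over a finite candidate set: argmax of mu + sqrt(beta_t) * sigma.

    beta_t follows the finite-domain schedule
    beta_t = 2 log(|D| t^2 pi^2 / (6 delta)); other settings use other choices.
    """
    beta_t = 2.0 * np.log(len(mu) * t**2 * np.pi**2 / (6.0 * delta))
    return int(np.argmax(mu + np.sqrt(beta_t) * sigma))
```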
We present a tutorial on Bayesian optimization, a method of finding the maximum of expensive cost functions. Bayesian optimization employs the Bayesian technique of setting a prior over the objective function and combining it with evidence to get a posterior function. This permits a utility-based selection of the next observation to make on the objective function, which must take into account both exploration (sampling from areas of high uncertainty) and exploitation (sampling areas likely to offer improvement over the current best observation). We also present two detailed extensions of Bayesian optimization, with experiments (active user modelling with preferences, and hierarchical reinforcement learning), and a discussion of the pros and cons of Bayesian optimization based on our experiences.
Bayesian optimization (BO) refers to a suite of techniques for global optimization of expensive black-box functions, which use introspective Bayesian models of the function to efficiently find the optimum. While BO has been successfully applied in many applications, modern optimization tasks usher in new challenges where conventional methods fail. In this work, we present Dragonfly, an open-source Python library for scalable and robust BO. Dragonfly incorporates multiple recently developed methods that allow BO to be applied in challenging real-world settings; these include better methods for handling higher-dimensional domains, methods for handling multi-fidelity evaluations when cheap approximations of an expensive function are available, methods for optimizing over structured combinatorial spaces such as the space of neural network architectures, and methods for handling parallel evaluations. In addition, we develop new methodological improvements in BO for selecting the Bayesian model, selecting the acquisition function, and optimizing over complex domains with different variable types and additional constraints. We compare Dragonfly with a suite of other packages and algorithms for global optimization and demonstrate that, when the above methods are integrated, they can significantly improve the performance of BO. The Dragonfly library is available at dragonfly.github.io.
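A usage sketch in the spirit of Dragonfly's documented examples; the minimise_function entry point and its (function, domain, budget) pattern match the README examples as of this writing, but check the library's current docs before relying on them. The objective here is a made-up stand-in.

```python
# pip install dragonfly-opt
from dragonfly import minimise_function

# A cheap stand-in objective; Dragonfly passes the point as a sequence,
# hence the x[0] indexing. The domain is a list of [lower, upper] bounds
# per dimension, and max_capital is the evaluation budget.
def objective(x):
    return (x[0] - 0.3) ** 2 + 0.1 * x[0]

min_val, min_pt, history = minimise_function(objective, [[-5.0, 5.0]], max_capital=20)
print(min_val, min_pt)
```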
Bayesian optimisation has gained great popularity as a tool for optimising the parameters of machine learning algorithms and models. Somewhat ironically, setting up the hyper-parameters of Bayesian optimisation methods is notoriously hard. While reasonable practical solutions have been advanced, they can often fail to find the best optima. Surprisingly, there is little theoretical analysis of this crucial problem in the literature. To address this, we derive a cumulative regret bound for Bayesian optimisation with Gaussian processes and unknown kernel hyper-parameters in the stochastic setting. The bound, which applies to the expected improvement acquisition function and sub-Gaussian observation noise, provides us with guidelines on how to design hyper-parameter estimation methods. A simple simulation demonstrates the importance of following these guidelines.
In this paper, we analyze a generic algorithm scheme for sequential global optimization using Gaussian processes. The upper bounds we derive on the cumulative regret for this generic algorithm improve by an exponential factor the previously known bounds for algorithms like GP-UCB. We also introduce the novel Gaussian Process Mutual Information algorithm (GP-MI), which significantly improves further these upper bounds for the cumulative regret. We confirm the efficiency of this algorithm on synthetic and real tasks against the natural competitor, GP-UCB, and also the Expected Improvement heuristic.
Bayesian optimization (BO) and its batch extensions have been successful at optimizing expensive black-box functions. However, these traditional BO approaches are not yet ideal for optimizing cheaper functions, where the computational cost of BO can dominate the cost of evaluating the black box. Examples of such cheaper functions include inexpensive machine learning models, cheap physical experiments run through simulators, and the acquisition function optimization inside Bayesian optimization itself. In this paper, we consider a batch BO setting in which function evaluations are relatively cheap. Our model is based on a new exploration strategy that uses geometric distance to provide an alternative way of exploring, selecting points far from the observed locations. Using this intuition, we propose to guide exploration with Sobol sequences, which eliminates the need to run multiple global optimization steps as used in previous work. Based on the proposed distance-driven exploration, we present an efficient batch BO method. We demonstrate that our method outperforms other baselines and global optimization methods when function evaluations are cheap.
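A sketch of the Sobol-guided exploration idea using SciPy's quasi-Monte Carlo module; this reproduces only the "space-filling points without an inner global optimization" ingredient, not the paper's full batch method.

```python
import numpy as np
from scipy.stats import qmc

def sobol_exploration_batch(n_points, bounds, seed=0):
    """Draw a space-filling exploration batch from a scrambled Sobol sequence.

    `bounds` is a list of (lower, upper) pairs, one per dimension. The batch
    spreads points across the domain without running any global optimizer.
    """
    d = len(bounds)
    sampler = qmc.Sobol(d=d, scramble=True, seed=seed)
    unit = sampler.random(n_points)                  # points in [0, 1]^d
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    return qmc.scale(unit, lo, hi)                   # rescale to the domain

# batch = sobol_exploration_batch(8, bounds=[(-5, 5), (0, 1)])
```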
We propose minimum regret search (MRS), a novel acquisition function for Bayesian optimization. MRS bears similarities with information-theoretic approaches such as entropy search (ES). However, while ES aims in each query at maximizing the information gain with respect to the global maximum, MRS aims at minimizing the expected simple regret of its ultimate recommendation for the optimum. While empirically ES and MRS perform similarly in most cases, MRS produces fewer outliers with high simple regret than ES. We provide empirical results both for a synthetic single-task optimization problem as well as for a simulated multi-task robotic control problem.
This paper analyzes the problem of Gaussian process (GP) bandits with deterministic observations. The analysis uses a branch and bound algorithm that is related to the UCB algorithm of (Srinivas et al., 2010). For GPs with Gaussian observation noise, with variance strictly greater than zero, (Srinivas et al., 2010) proved that the regret vanishes at the approximate rate of $O(1/\sqrt{t})$, where $t$ is the number of observations. To complement their result, we attack the deterministic case and attain a much faster exponential convergence rate. Under some regularity assumptions, we show that the regret decreases asymptotically according to $O(e^{-\tau t / (\ln t)^{d/4}})$ with high probability. Here, $d$ is the dimension of the search space and $\tau$ is a constant that depends on the behaviour of the objective function near its global maximum.
In this paper, we consider the problem of Gaussian process (GP) optimization with an added robustness requirement: the returned point may be perturbed by an adversary, and we require the function value to remain as high as possible even after this perturbation. This problem is motivated by settings in which the underlying function differs between the optimization and implementation stages, or in which one is interested in finding an entire region of good inputs rather than a single point. We show that standard GP optimization algorithms do not exhibit the desired robustness, and for this purpose we provide a confidence-bound based algorithm, StableOpt. We rigorously establish the number of samples StableOpt requires to find a near-optimal point, and we complement this guarantee with an algorithm-independent lower bound. We experimentally demonstrate several potential applications of interest using real-world datasets, and we show that StableOpt consistently succeeds in finding a stable maximizer where several baseline methods fail.
How can we take advantage of opportunities for experimental parallelization in exploration-exploitation tradeoffs? In many experimental scenarios, it is often desirable to execute experiments simultaneously or in batches, rather than only performing one at a time. Additionally, observations may be both noisy and expensive. We introduce Gaussian Process Batch Upper Confidence Bound (GP-BUCB), an upper confidence bound-based algorithm, which models the reward function as a sample from a Gaussian process and which can select batches of experiments to run in parallel. We prove a general regret bound for GP-BUCB, as well as the surprising result that for some common kernels, the asymptotic average regret can be made independent of the batch size. The GP-BUCB algorithm is also applicable in the related case of a delay between initiation of an experiment and observation of its results, for which the same regret bounds hold. We also introduce Gaussian Process Adaptive Upper Confidence Bound (GP-AUCB), a variant of GP-BUCB which can exploit parallelism in an adaptive manner. We evaluate GP-BUCB and GP-AUCB on several simulated and real data sets. These experiments show that GP-BUCB and GP-AUCB are competitive with state-of-the-art heuristics.
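A sketch of GP-BUCB's hallucination trick, assuming an sklearn-style GP regressor: feeding back the posterior mean as a fake observation leaves the posterior mean unchanged but shrinks the variance near the selected points, which is what spreads the batch. The helper below is illustrative, not the paper's implementation.

```python
import numpy as np

def gp_bucb_batch(gp, candidates, batch_size, beta):
    """Select a parallel batch via hallucinated observations (GP-BUCB idea).

    `gp` is assumed to be a fitted sklearn-style GaussianProcessRegressor,
    ideally constructed with optimizer=None so that refitting only
    re-conditions on the data rather than re-tuning hyperparameters.
    """
    X = list(gp.X_train_)
    y = list(gp.y_train_)
    batch = []
    for _ in range(batch_size):
        mu, sigma = gp.predict(candidates, return_std=True)
        i = int(np.argmax(mu + np.sqrt(beta) * sigma))   # UCB pick
        batch.append(candidates[i])
        X.append(candidates[i])
        y.append(mu[i])                                  # hallucinate the mean
        gp.fit(np.array(X), np.array(y))                 # only variance changes
    return np.array(batch)
```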
In many applications of black-box optimization, one can evaluate multiple points simultaneously, e.g. when evaluating the performances of several different neural networks in a parallel computing environment. In this paper, we develop a novel batch Bayesian optimization algorithm: the parallel knowledge gradient method. By construction, this method provides the one-step Bayes-optimal batch of points to sample. We provide an efficient strategy for computing this Bayes-optimal batch of points, and we demonstrate that the parallel knowledge gradient method finds global optima significantly faster than previous batch Bayesian optimization algorithms on both synthetic test functions and when tuning hyperparameters of practical machine learning algorithms, especially when function evaluations are noisy.