Entropy Search (ES) and Predictive Entropy Search (PES) are popular and empirically successful Bayesian Optimization techniques. Both rely on a compelling information-theoretic motivation, and maximize the information gained about the arg max of the unknown function; yet, both are plagued by the expensive computation for estimating entropies. We propose a new criterion, Max-value Entropy Search (MES), that instead uses the information about the maximum function value. We show relations of MES to other Bayesian optimization methods, and establish a regret bound. We observe that MES maintains or improves the good empirical performance of ES/PES, while tremendously lightening the computational burden. In particular, MES is much more robust to the number of samples used for computing the entropy, and hence more efficient for higher dimensional problems.
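As a non-authoritative illustration of the criterion, the sketch below evaluates the closed-form MES acquisition under a Gaussian posterior, assuming samples of the maximum value `y_star_samples` have already been drawn (for example by taking the maximum of joint posterior samples on a candidate grid, or via a Gumbel-based approximation); the function and parameter names are illustrative, not the authors' code.

```python
import numpy as np
from scipy.stats import norm

def mes_acquisition(mu, sigma, y_star_samples, eps=1e-10):
    """Max-value Entropy Search acquisition, averaged over sampled maxima.

    mu, sigma      : posterior mean / standard deviation at candidate points
    y_star_samples : sampled values of the function maximum y*
    Returns an approximate information gain about max f for each candidate.
    """
    mu = np.asarray(mu, dtype=float)
    sigma = np.maximum(np.asarray(sigma, dtype=float), eps)
    y_star = np.asarray(y_star_samples, dtype=float)
    # gamma has shape (n_max_samples, n_candidates)
    gamma = (y_star[:, None] - mu[None, :]) / sigma[None, :]
    cdf = np.maximum(norm.cdf(gamma), eps)
    pdf = norm.pdf(gamma)
    # Closed form: gamma * phi(gamma) / (2 * Phi(gamma)) - log Phi(gamma)
    values = gamma * pdf / (2.0 * cdf) - np.log(cdf)
    return values.mean(axis=0)
```

A crude way to obtain `y_star_samples` is to draw joint GP samples on a dense candidate set and record each sample's maximum; the robustness to the number of such samples noted above is what keeps this cheap.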
Recently, there has been growing interest in Bayesian optimization: the optimization of an unknown function whose prior assumptions are typically expressed by a Gaussian process (GP). We study an optimization strategy that directly uses an estimate of the function's arg max. This strategy offers both practical and theoretical advantages: no trade-off parameter needs to be selected, and, moreover, we establish close connections to the popular GP-UCB and GP-PI strategies. Our approach can be understood as automatically and adaptively trading off exploration and exploitation in GP-UCB and GP-PI. We illustrate the effect of this adaptive tuning via bounds on the regret as well as an extensive empirical evaluation on robotics and vision tasks, demonstrating the robustness of this strategy for a range of performance criteria.
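A minimal sketch of one way to read this arg-max/max-estimation strategy (an assumption on my part, not necessarily the paper's exact rule): given an estimate m_hat of the maximum, select the candidate whose posterior is most likely to reach m_hat, i.e. minimize the standardized gap (m_hat - mu) / sigma; m_hat is approximated here by averaging per-sample maxima of posterior draws.

```python
import numpy as np

def est_select(mu, sigma, m_hat, eps=1e-10):
    """Pick the candidate most likely to attain the estimated maximum m_hat.

    Equivalent to maximizing Phi((mu - m_hat) / sigma), i.e. minimizing the
    standardized gap (m_hat - mu) / sigma over the candidate set.
    """
    z = (m_hat - np.asarray(mu)) / np.maximum(np.asarray(sigma), eps)
    return int(np.argmin(z))

def estimate_max(posterior_samples):
    """Crude estimate of max f: mean of per-sample maxima over a candidate grid.
    posterior_samples has shape (n_samples, n_candidates)."""
    return float(np.mean(np.max(posterior_samples, axis=1)))
```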
In this paper, we propose a learning algorithm that speeds up the search in task and motion planning problems. Our algorithm addresses three distinct challenges that arise when learning to improve planning efficiency: what to predict, how to represent a planning problem instance, and how to transfer knowledge from one problem instance to another. We propose a method that predicts constraints on the search space based on a generic representation of a planning problem instance, called the score space, in which we represent a problem instance in terms of the performance of a set of attempted solutions. Using this representation, we transfer knowledge, in the form of constraints, from previous problems based on similarity in score space. We design a sequential algorithm that efficiently predicts these constraints, and evaluate it on three different challenging task and motion planning problems. The results indicate that our approach performs orders of magnitude faster than an unguided planner.
Bayesian optimization (BO) based on Gaussian process models is a powerful paradigm for optimizing black-box functions that are expensive to evaluate. While several BO algorithms provably converge to the global optimum of the unknown function, they assume that the hyperparameters of the kernel are known in advance. This is not the case in practice, and misspecification often causes these algorithms to converge to poor local optima. In this paper, we present the first BO algorithm that is provably no-regret and converges to the optimum without knowledge of the hyperparameters. We slowly adapt the hyperparameters of a fixed kernel and thereby expand the associated function class over time, so that the BO algorithm considers more complex function candidates. Based on this theoretical insight, we propose several practical algorithms that achieve the empirical data efficiency of BO with online hyperparameter estimation, while retaining theoretical convergence guarantees. We evaluate our methods on several benchmark problems.
Many applications require optimizing an unknown, noisy function that is expensive to evaluate. We formalize this task as a multi-armed bandit problem, where the payoff function is either sampled from a Gaussian process (GP) or has low RKHS norm. We resolve the important open problem of deriving regret bounds for this setting, which imply novel convergence rates for GP optimization. We analyze GP-UCB, an intuitive upper-confidence-based algorithm, and bound its cumulative regret in terms of maximal information gain, establishing a novel connection between GP optimization and experimental design. Moreover, by bounding the latter in terms of operator spectra, we obtain explicit sublinear regret bounds for many commonly used covariance functions. In some important cases, our bounds have surprisingly weak dependence on the dimensionality. In our experiments on real sensor data, GP-UCB compares favorably with other GP optimization approaches.
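The acquisition rule itself is easy to state; below is a hedged sketch of one GP-UCB round using scikit-learn, with `beta` standing in for the theoretically prescribed confidence parameter (treated here as a user-supplied constant) and an RBF kernel chosen purely for illustration.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def gp_ucb_step(X_obs, y_obs, X_cand, beta=4.0):
    """One GP-UCB round: fit the GP on past data and return the candidate
    maximizing the upper confidence bound mu(x) + sqrt(beta) * sigma(x)."""
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-3)
    gp.fit(X_obs, y_obs)
    mu, sigma = gp.predict(X_cand, return_std=True)
    ucb = mu + np.sqrt(beta) * sigma
    return X_cand[int(np.argmax(ucb))]
```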
Bayesian optimization with Gaussian processes has become an increasingly popular tool in the machine learning community. It is efficient and can be used when very little is known about the objective function, making it popular in expensive black-box optimization scenarios. It uses Bayesian methods to sample the objective efficiently using an acquisition function which incorporates the posterior estimate of the objective. However, there are several different parameterized acquisition functions in the literature, and it is often unclear which one to use. Instead of using a single acquisition function, we adopt a portfolio of acquisition functions governed by an online multi-armed bandit strategy. We propose several portfolio strategies, the best of which we call GP-Hedge, and show that this method outperforms the best individual acquisition function. We also provide a theoretical bound on the algorithm's performance.
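A hedged sketch of the portfolio mechanism described above (simplified, not the paper's reference implementation): each acquisition function nominates a candidate, one nominee is chosen with probability proportional to exponentiated gains, and after the observation the caller credits every acquisition with the updated posterior mean at its own nominee. `acquisitions` is assumed to be a list of callables mapping (mu, sigma) arrays over a candidate set to scores.

```python
import numpy as np

def hedge_choose(gains, eta=1.0, rng=None):
    """Sample an acquisition index with probability proportional to exp(eta * gain)."""
    if rng is None:
        rng = np.random.default_rng()
    gains = np.asarray(gains, dtype=float)
    logits = eta * (gains - gains.max())          # stabilize the softmax
    p = np.exp(logits) / np.exp(logits).sum()
    return int(rng.choice(len(gains), p=p))

def portfolio_round(acquisitions, gains, mu, sigma, X_cand):
    """One simplified hedge round over a fixed candidate set.

    Returns the chosen query point and each acquisition's nominated index, so
    that after refitting the GP the caller can credit every acquisition j with
    the new posterior mean at its own nominee.
    """
    nominees = [int(np.argmax(acq(mu, sigma))) for acq in acquisitions]
    j = hedge_choose(gains)
    return X_cand[nominees[j]], nominees
```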
Bayesian optimisation has gained great popularity as a tool for optimising the parameters of machine learning algorithms and models. Somewhat ironically, setting up the hyper-parameters of Bayesian optimisation methods is notoriously hard. While reasonable practical solutions have been advanced, they can often fail to find the best optima. Surprisingly, there is little theoretical analysis of this crucial problem in the literature. To address this, we derive a cumulative regret bound for Bayesian optimisation with Gaussian processes and unknown kernel hyper-parameters in the stochastic setting. The bound, which applies to the expected improvement acquisition function and sub-Gaussian observation noise, provides us with guidelines on how to design hyper-parameter estimation methods. A simple simulation demonstrates the importance of following these guidelines.
Bayesian optimization techniques have been successfully applied to robotics, planning, sensor placement, recommendation, advertising, intelligent user interfaces and automatic algorithm configuration. Despite these successes, the approach is restricted to problems of moderate dimension, and several workshops on Bayesian optimization have identified its scaling to high dimensions as one of the holy grails of the field. In this paper, we introduce a novel random embedding idea to attack this problem. The resulting Random EMbedding Bayesian Optimization (REMBO) algorithm is very simple, has important invariance properties, and applies to domains with both categorical and continuous variables. We present a thorough theoretical analysis of REMBO. Empirical results confirm that REMBO can effectively solve problems with billions of dimensions, provided the intrinsic dimensionality is low. They also show that REMBO achieves state-of-the-art performance in optimizing the 47 discrete parameters of a popular mixed integer linear programming solver.
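A hedged sketch of the random-embedding idea only (REMBO's treatment of box constraints, kernel choice and invariance properties is omitted): draw a random matrix A in R^{D x d} and optimize the low-dimensional surrogate g(y) = f(clip(A y)) over a small box in R^d; plain random search stands in for the inner Bayesian optimizer purely for brevity.

```python
import numpy as np

def rembo_random_search(f, D, d, n_iters=200, box=1.0, seed=0):
    """Optimize f over [-1, 1]^D through a random d-dimensional embedding.

    f    : expensive black-box function taking a length-D vector
    D, d : ambient and embedding dimensionality (d << D)
    box  : half-width of the low-dimensional search box for y
    Replace the random search below with any BO routine in a real use case.
    """
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((D, d))               # random embedding matrix
    best_x, best_val = None, -np.inf
    for _ in range(n_iters):
        y = rng.uniform(-box, box, size=d)        # low-dimensional proposal
        x = np.clip(A @ y, -1.0, 1.0)             # map back and clip to the domain
        val = f(x)
        if val > best_val:
            best_x, best_val = x, val
    return best_x, best_val
```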
Bandit methods for black-box optimisation, such as Bayesian optimisation, are used in a variety of applications including hyper-parameter tuning and experiment design. Recently, multi-fidelity methods have garnered considerable attention since function evaluations have become increasingly expensive in such applications. Multi-fidelity methods use cheap approximations to the function of interest to speed up the overall optimisation process. However, most multi-fidelity methods assume only a finite number of approximations. In many practical applications, however, a continuous spectrum of approximations might be available. For instance, when tuning an expensive neural network, one might choose to approximate the cross validation performance using less data N and/or fewer training iterations T. Here, the approximations are best viewed as arising out of a continuous two-dimensional space (N, T). In this work, we develop a Bayesian optimisation method, BOCA, for this setting. We characterise its theoretical properties and show that it achieves better regret than strategies which ignore the approximations. BOCA outperforms several other baselines in synthetic and real experiments.
Bayesian optimization and Lipschitz optimization have been developed as alternative techniques for optimizing black-box functions, each exploiting a different form of prior about the function. In this work, we explore strategies for combining these techniques for better global optimization. In particular, we propose ways of using the Lipschitz continuity assumption within traditional BO algorithms, which we call Lipschitz Bayesian optimization (LBO). This approach does not increase the asymptotic runtime, and in some cases it substantially improves performance (while in the worst case the performance is similar). Indeed, in a particular setting we prove that using Lipschitz information yields the same or better regret bounds than using Bayesian optimization alone. Moreover, we propose a simple heuristic for estimating the Lipschitz constant, and show that a growing estimate of the Lipschitz constant is in some sense "harmless". Our experiments on 15 datasets with 4 acquisition functions show that, in the worst case, LBO performs similarly to the underlying BO method, while in some cases it performs much better. Thompson sampling in particular typically sees large improvements (as the Lipschitz information corrects its over-exploration phenomenon), and its LBO variant often outperforms the other acquisition functions.
In this paper, we analyze a generic algorithm scheme for sequential global optimization using Gaussian processes. The upper bounds we derive on the cumulative regret for this generic algorithm improve on the previously known bounds for algorithms like GP-UCB by an exponential factor. We also introduce the novel Gaussian Process Mutual Information algorithm (GP-MI), which further improves these upper bounds on the cumulative regret significantly. We confirm the efficiency of this algorithm on synthetic and real tasks against the natural competitor, GP-UCB, and also against the Expected Improvement heuristic.
We present a tutorial on Bayesian optimization, a method of finding the maximum of expensive cost functions. Bayesian optimization employs the Bayesian technique of setting a prior over the objective function and combining it with evidence to get a posterior function. This permits a utility-based selection of the next observation to make on the objective function, which must take into account both exploration (sampling from areas of high uncertainty) and exploitation (sampling areas likely to offer improvement over the current best observation). We also present two detailed extensions of Bayesian optimization, with experiments (active user modelling with preferences, and hierarchical reinforcement learning), and a discussion of the pros and cons of Bayesian optimization based on our experiences.
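For concreteness, here is a hedged sketch of one acquisition function such tutorials typically cover, expected improvement for maximization, using its standard closed form under a Gaussian posterior; `xi` is an optional exploration offset introduced here for illustration, not something defined in this abstract.

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, best_y, xi=0.0, eps=1e-12):
    """EI(x) = (mu - best_y - xi) * Phi(z) + sigma * phi(z),
    with z = (mu - best_y - xi) / sigma, for maximization."""
    mu = np.asarray(mu, dtype=float)
    sigma = np.maximum(np.asarray(sigma, dtype=float), eps)
    improvement = mu - best_y - xi
    z = improvement / sigma
    return improvement * norm.cdf(z) + sigma * norm.pdf(z)
```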
Bayesian Optimisation (BO) is a technique used in optimising a $D$-dimensional function which is typically expensive to evaluate. While there have been many successes for BO in low dimensions, scaling it to high dimensions has been notoriously difficult. The existing literature on the topic operates under very restrictive settings. In this paper, we identify two key challenges in this endeavour. We tackle these challenges by assuming an additive structure for the function. This setting is substantially more expressive and contains a richer class of functions than previous work. We prove that, for additive functions, the regret has only linear dependence on $D$ even though the function depends on all $D$ dimensions. We also demonstrate several other statistical and computational benefits of our framework. Via synthetic examples, a scientific simulation and a face detection problem, we demonstrate that our method outperforms naive BO on additive functions and on several examples where the function is not additive.
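A hedged sketch of how an additive assumption is commonly exploited (the paper's treatment of learning the decomposition and its specific acquisition are not reproduced here): with disjoint groups of dimensions, a group-wise acquisition can be maximized independently per group and the full query assembled by concatenation. `group_acq` is an assumed helper, e.g. a per-group GP-UCB maximized over a low-dimensional grid.

```python
import numpy as np

def additive_query(groups, group_acq):
    """Assemble a full query from independently optimized groups of dimensions.

    groups    : list of index arrays partitioning {0, ..., D-1}
    group_acq : callable(group_indices) -> best sub-vector for that group,
                e.g. by maximizing a per-group acquisition over a small grid
    """
    D = sum(len(g) for g in groups)
    x = np.empty(D)
    for g in groups:
        x[np.asarray(g)] = group_acq(g)   # each block is optimized on its own
    return x
```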
In this paper, we consider the problem of Gaussian process (GP) optimization with an added robustness requirement: the returned point may be perturbed by an adversary, and we require the function value to remain as high as possible even after this perturbation. This problem is motivated by settings in which the underlying function differs between the optimization and implementation stages, or in which one is interested in finding an entire region of good inputs rather than only a single point. We show that standard GP optimization algorithms do not exhibit the desired robustness, and provide a confidence-bound based algorithm, StableOpt, for this purpose. We establish the number of samples StableOpt requires to find a near-optimal point, and complement this guarantee with an algorithm-independent lower bound. We experimentally demonstrate several potential applications of interest using real-world datasets, and we show that StableOpt consistently succeeds in finding a stable maximizer where several baseline methods fail.
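A hedged sketch of a max-min confidence-bound selection rule in the spirit of the description above (my reading, not necessarily the paper's exact algorithm): each candidate is scored by its worst-case upper confidence bound over a set of perturbations, and the best worst-case candidate is queried. `ucb_fn` and `perturbations` are assumed inputs supplied by the caller.

```python
import numpy as np

def robust_ucb_select(X_cand, ucb_fn, perturbations):
    """Pick the candidate whose worst-case UCB over perturbations is largest.

    X_cand        : (n, d) array of candidate inputs
    ucb_fn        : callable mapping an (m, d) array to UCB scores, e.g.
                    mu + sqrt(beta) * sigma under the current GP posterior
    perturbations : (k, d) array of perturbation vectors to guard against
    """
    worst_case = np.full(len(X_cand), np.inf)
    for delta in perturbations:
        worst_case = np.minimum(worst_case, ucb_fn(X_cand + delta))
    return X_cand[int(np.argmax(worst_case))]
```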
The goal of this work is to augment the basic abilities of a robot, through learning, so that it can use new sensorimotor primitives to solve complex long-horizon problems. Solving long-horizon problems in complex domains requires flexible generative planning that can combine primitive abilities in novel combinations to address problems as they arise in the world. In order to combine primitive actions, we must have models of their preconditions and effects: under what circumstances will executing this primitive produce a particular effect in the world? We use, and develop novel improvements to, state-of-the-art methods for active learning and sampling. We use Gaussian process methods to learn the conditions under which an operator is effective from a small number of expensive training examples collected by experimentation on a robot. We develop adaptive sampling methods for generating diverse elements of continuous sets (such as robot configurations and object poses) during planning for a new task, so that planning is as efficient as possible. We demonstrate these methods in an integrated system, combining the newly learned models with an efficient continuous-space robot task and motion planner, to learn to solve long-horizon problems more efficiently than was previously possible.
Many applications require optimizing an unknown, noisy function that is expensive to evaluate. We formalize this task as a multiarmed bandit problem, where the payoff function is either sampled from a Gaussian process (GP) or has low norm in a reproducing kernel Hilbert space. We resolve the important open problem of deriving regret bounds for this setting, which imply novel convergence rates for GP optimization. We analyze an intuitive Gaussian process upper confidence bound (GP-UCB) algorithm, and bound its cumulative regret in terms of maximal information gain, establishing a novel connection between GP optimization and experimental design. Moreover, by bounding the latter in terms of operator spectra, we obtain explicit sublinear regret bounds for many commonly used covariance functions. In some important cases, our bounds have surprisingly weak dependence on the dimensionality. In our experiments on real sensor data, GP-UCB compares favorably with other heuristical GP optimization approaches.
In many scientific and engineering applications, we are tasked with optimizing an expensive black-box function $f$. Traditional settings for this problem assume just the availability of this single function. In many cases, however, cheap approximations to $f$ may be obtainable. For example, the expensive real-world behaviour of a robot can be approximated by a cheap computer simulation. We can use these approximations to cheaply eliminate regions of low function value and reserve the expensive evaluations of $f$ for a small, promising region in order to quickly identify the optimum. We formalize this task as a \emph{multi-fidelity} bandit problem, where the target function and its approximations are sampled from a Gaussian process. We develop MF-GP-UCB, a novel method based on upper confidence bound techniques. In our theoretical analysis, we demonstrate that it exhibits precisely this behaviour and achieves better regret than strategies which ignore multi-fidelity information. Empirically, MF-GP-UCB outperforms such naive strategies and other multi-fidelity methods on several synthetic and real experiments.
Bayesian optimization is a sample-efficient method for black-box global optimization. However, the performance of a Bayesian optimization method very much depends on its exploration strategy, i.e. the choice of acquisition function, and it is not clear a priori which choice will result in superior performance. While portfolio methods provide an effective, principled way of combining a collection of acquisition functions, they are often based on measures of past performance which can be misleading. To address this issue, we introduce the Entropy Search Portfolio (ESP): a novel approach to portfolio construction which is motivated by information theoretic considerations. We show that ESP outperforms existing portfolio methods on several real and synthetic problems, including geostatistical datasets and simulated control tasks. We not only show that ESP is able to offer performance as good as the best, but unknown, acquisition function, but surprisingly it often gives better performance. Finally, over a wide range of conditions we find that ESP is robust to the inclusion of poor acquisition functions.
How can we take advantage of opportunities for experimental parallelization in exploration-exploitation tradeoffs? In many experimental scenarios, it is often desirable to execute experiments simultaneously or in batches, rather than only performing one at a time. Additionally, observations may be both noisy and expensive. We introduce Gaussian Process Batch Upper Confidence Bound (GP-BUCB), an upper confidence bound-based algorithm, which models the reward function as a sample from a Gaussian process and which can select batches of experiments to run in parallel. We prove a general regret bound for GP-BUCB, as well as the surprising result that for some common kernels, the asymptotic average regret can be made independent of the batch size. The GP-BUCB algorithm is also applicable in the related case of a delay between initiation of an experiment and observation of its results, for which the same regret bounds hold. We also introduce Gaussian Process Adaptive Upper Confidence Bound (GP-AUCB), a variant of GP-BUCB which can exploit parallelism in an adaptive manner. We evaluate GP-BUCB and GP-AUCB on several simulated and real data sets. These experiments show that GP-BUCB and GP-AUCB are competitive with state-of-the-art heuristics.
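A hedged sketch of batch selection with hallucinated feedback, in the spirit of the algorithm described above (scikit-learn is used for brevity; this is not the authors' code): points are picked sequentially by UCB, and each pick is fed back as a fake observation equal to the current posterior mean, which leaves the posterior mean unchanged but shrinks the variance seen by subsequent picks.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def select_batch(X_obs, y_obs, X_cand, batch_size=3, beta=4.0):
    """Select a batch of query points using hallucinated (posterior-mean) feedback."""
    X_aug, y_aug = list(X_obs), list(y_obs)
    batch = []
    for _ in range(batch_size):
        gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-3)
        gp.fit(np.array(X_aug), np.array(y_aug))
        mu, sigma = gp.predict(X_cand, return_std=True)
        i = int(np.argmax(mu + np.sqrt(beta) * sigma))
        batch.append(X_cand[i])
        X_aug.append(X_cand[i])          # hallucinate the outcome of this pick...
        y_aug.append(mu[i])              # ...as the current posterior mean
    return np.array(batch)
```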
We present a new algorithm, truncated variance reduction (TRUVAR), that treats Bayesian optimization (BO) and level-set estimation (LSE) with Gaussian processes in a unified fashion. The algorithm greedily shrinks a sum of truncated variances within a set of potential maximizers (BO) or unclassified points (LSE), which is updated based on confidence bounds. TRUVAR is effective in several important settings that are typically non-trivial to incorporate into myopic algorithms, including pointwise costs and heteroscedastic noise. We provide a general theoretical guarantee for TRUVAR covering these aspects, and use it to recover and strengthen existing results on BO and LSE. Moreover, we provide a new result for a setting where one can select from a number of noise levels having associated costs. We demonstrate the effectiveness of the algorithm on both synthetic and real-world data sets.