We apply numerical methods, combining finite-difference time-domain (FDTD) simulations with a novel multi-fidelity Gaussian process approach, to optimize the transmission properties of a plasmonic mirror color filter using a multi-objective figure of merit over a five-dimensional parameter space. We compare these results against conventional derivative-free global search algorithms, such as (single-fidelity) Gaussian process optimization schemes and particle swarm optimization, a method common in the nanophotonics community that is implemented in the Lumerical commercial photonics software. We demonstrate the performance of various numerical optimization approaches on several pre-collected realistic datasets and show that by appropriately trading off inexpensive information from cheap simulations, the transmission properties can be optimized more efficiently under a fixed budget.
translated by Google Translate
Bayesian optimization (BO) is an effective tool for black-box optimization, in which objective function evaluations are usually quite expensive. In practice, lower-fidelity approximations of the objective function are often available. Recently, multi-fidelity Bayesian optimization (MFBO) has attracted attention because it can dramatically accelerate the optimization process by using those cheaper observations. We propose a novel information-theoretic approach to MFBO. Information-based approaches are popular in BO, but existing studies of information-based MFBO are plagued by the difficulty of accurately estimating the information gain. Our approach is based on a variant of information-based BO called max-value entropy search (MES), which greatly facilitates evaluating the information gain in MFBO. In fact, our acquisition function can be written analytically, apart from one-dimensional integrals and sampling, and can be computed efficiently and accurately. We demonstrate the effectiveness of our approach using synthetic and benchmark datasets, and further show a real-world application to materials science data.
How can we take advantage of opportunities for experimental parallelization in exploration-exploitation tradeoffs? In many experimental scenarios, it is often desirable to execute experiments simultaneously or in batches, rather than only performing one at a time. Additionally, observations may be both noisy and expensive. We introduce Gaussian Process Batch Upper Confidence Bound (GP-BUCB), an upper confidence bound-based algorithm, which models the reward function as a sample from a Gaussian process and which can select batches of experiments to run in parallel. We prove a general regret bound for GP-BUCB, as well as the surprising result that for some common kernels, the asymptotic average regret can be made independent of the batch size. The GP-BUCB algorithm is also applicable in the related case of a delay between initiation of an experiment and observation of its results, for which the same regret bounds hold. We also introduce Gaussian Process Adaptive Upper Confidence Bound (GP-AUCB), a variant of GP-BUCB which can exploit parallelism in an adaptive manner. We evaluate GP-BUCB and GP-AUCB on several simulated and real data sets. These experiments show that GP-BUCB and GP-AUCB are competitive with state-of-the-art heuristics.
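As an illustration of the batch-selection idea (a sketch, not the authors' code): GP-BUCB freezes the posterior mean at the last real observations and only updates the posterior variance with "hallucinated" observations at the points already chosen for the batch, which is what forces diversity within a batch. The RBF kernel, lengthscale, noise level, and beta below are arbitrary illustrative choices.

```python
import numpy as np

def rbf(A, B, ell=0.5):
    # squared-exponential kernel with unit signal variance (an assumed choice)
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell ** 2)

def gp_posterior(X, y, Xs, noise=1e-3):
    # zero-mean GP posterior mean and variance at the query points Xs
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(Xs, X)
    Kinv = np.linalg.inv(K)
    mu = Ks @ Kinv @ y
    var = np.maximum(1.0 - np.einsum("ij,jk,ik->i", Ks, Kinv, Ks), 1e-12)
    return mu, var

def gp_bucb_batch(X, y, candidates, B, beta=4.0):
    # the mean is frozen at the last real observations ...
    mu, _ = gp_posterior(X, y, candidates)
    Xh, yh = X.copy(), np.asarray(y, dtype=float).copy()
    batch = []
    for _ in range(B):
        # ... while the variance shrinks as batch points are hallucinated
        _, var = gp_posterior(Xh, yh, candidates)
        i = int(np.argmax(mu + np.sqrt(beta * var)))
        batch.append(i)
        Xh = np.vstack([Xh, candidates[i]])
        yh = np.append(yh, mu[i])  # hallucinated value; the variance ignores it anyway
    return batch

grid = np.linspace(0.0, 1.0, 51).reshape(-1, 1)
X, y = grid[[0, 50]], np.array([0.0, 0.1])
batch = gp_bucb_batch(X, y, grid, 3)
```

Because a GP's posterior variance does not depend on the observed values, the hallucinated y-values are irrelevant; only the shrinking confidence intervals matter, and they push successive batch members apart.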
Entropy Search (ES) and Predictive Entropy Search (PES) are popular and empirically successful Bayesian Optimization techniques. Both rely on a compelling information-theoretic motivation, and maximize the information gained about the arg max of the unknown function; yet, both are plagued by the expensive computation for estimating entropies. We propose a new criterion, Max-value Entropy Search (MES), that instead uses the information about the maximum function value. We show relations of MES to other Bayesian optimization methods, and establish a regret bound. We observe that MES maintains or improves the good empirical performance of ES/PES, while tremendously lightening the computational burden. In particular, MES is much more robust to the number of samples used for computing the entropy, and hence more efficient for higher dimensional problems.
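A minimal numpy sketch of the MES acquisition (illustrative, not the paper's implementation): samples of the maximum value y* are drawn here by maximizing posterior function samples on the candidate grid, and each sample contributes a closed-form truncated-Gaussian information gain. The kernel, lengthscale, noise level, and sample counts are assumed choices; the paper also discusses a cheaper Gumbel-based approximation for sampling y*.

```python
import math
import numpy as np

def rbf(A, B, ell=0.3):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell ** 2)

def gp_posterior(X, y, Xs, noise=1e-4):
    # full posterior (mean and covariance) so we can sample functions on the grid
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks, Kss = rbf(Xs, X), rbf(Xs, Xs)
    Kinv = np.linalg.inv(K)
    return Ks @ Kinv @ y, Kss - Ks @ Kinv @ Ks.T

def norm_cdf(z):
    return np.array([0.5 * (1.0 + math.erf(v / math.sqrt(2.0))) for v in z])

def mes_acquisition(mu, var, max_samples):
    # alpha(x) = E_{y*}[ g*phi(g)/(2*Phi(g)) - log Phi(g) ],
    # with g = (y* - mu(x)) / sigma(x): the truncated-Gaussian information gain
    sd = np.sqrt(np.maximum(var, 1e-12))
    alpha = np.zeros_like(mu)
    for y_star in max_samples:
        g = (y_star - mu) / sd
        cdf = np.clip(norm_cdf(g), 1e-12, 1.0)
        pdf = np.exp(-0.5 * g ** 2) / math.sqrt(2.0 * math.pi)
        alpha += g * pdf / (2.0 * cdf) - np.log(cdf)
    return alpha / len(max_samples)

# sample y* as the maxima of posterior function samples on the grid
grid = np.linspace(0.0, 1.0, 51).reshape(-1, 1)
X = grid[[5, 45]]
y = np.sin(3.0 * X).ravel()
mu, cov = gp_posterior(X, y, grid)
rng = np.random.default_rng(0)
y_star = rng.multivariate_normal(mu, cov + 1e-8 * np.eye(51), size=8).max(axis=1)
acq = mes_acquisition(mu, np.diag(cov), y_star)
```

Note how the acquisition collapses to roughly zero at the observed points (where sigma is tiny) and stays positive in the unexplored interior, reflecting the remaining information about the maximum value.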
In many scientific and engineering applications, we are tasked with the optimisation of an expensive-to-evaluate black-box function f. Traditional methods for this problem assume just the availability of this single function. However, in many cases, cheap approximations to f may be obtainable. For example, the expensive real world behaviour of a robot can be approximated by a cheap computer simulation. We can use these approximations to eliminate low function value regions cheaply and use the expensive evaluations of f in a small but promising region and speedily identify the optimum. We formalise this task as a multi-fidelity bandit problem where the target function and its approximations are sampled from a Gaussian process. We develop MF-GP-UCB, a novel method based on upper confidence bound techniques. In our theoretical analysis we demonstrate that it exhibits precisely the above behaviour, and achieves better regret than strategies which ignore multi-fidelity information. MF-GP-UCB outperforms such naive strategies and other multi-fidelity methods on several synthetic and real experiments.
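The two-fidelity special case of this selection rule can be sketched as follows. This is a simplification under assumed constants: beta is the exploration weight, zeta bounds the bias of the cheap approximation, and gamma is the uncertainty threshold below which the cheap fidelity is considered exhausted at a point; the paper handles M fidelities and prescribes these constants theoretically, whereas everything here is hand-picked for illustration.

```python
import numpy as np

def rbf(A, B, ell=0.3):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell ** 2)

def gp_mu_sd(X, y, Xs, noise=1e-4):
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(Xs, X)
    Kinv = np.linalg.inv(K)
    mu = Ks @ Kinv @ y
    var = np.maximum(1.0 - np.einsum("ij,jk,ik->i", Ks, Kinv, Ks), 1e-12)
    return mu, np.sqrt(var)

def mf_gp_ucb_step(low, high, candidates, zeta=0.2, beta=4.0, gamma=0.1):
    # one round: pick the point maximizing the min of per-fidelity UCBs,
    # then query the cheap fidelity while it is still informative there
    mu0, sd0 = gp_mu_sd(*low, candidates)
    mu1, sd1 = gp_mu_sd(*high, candidates)
    ucb0 = mu0 + np.sqrt(beta) * sd0 + zeta  # cheap fidelity, bias bounded by zeta
    ucb1 = mu1 + np.sqrt(beta) * sd1         # target fidelity
    i = int(np.argmax(np.minimum(ucb0, ucb1)))
    fidelity = 0 if sd0[i] > gamma else 1
    return i, fidelity

grid = np.linspace(0.0, 1.0, 41).reshape(-1, 1)
f_low = lambda x: np.sin(3.0 * x) + 0.1  # cheap, biased approximation
f_high = lambda x: np.sin(3.0 * x)       # expensive target
sparse_low = (grid[[0]], f_low(grid[[0]]).ravel())
dense_low = (grid, f_low(grid).ravel())
high = (grid[[0]], f_high(grid[[0]]).ravel())
_, fid_sparse = mf_gp_ucb_step(sparse_low, high, grid)  # cheap GP still uncertain
_, fid_dense = mf_gp_ucb_step(dense_low, high, grid)    # cheap GP exhausted
```

The two calls at the end show the intended behaviour: while the cheap surrogate is still uncertain the rule spends the budget there, and once the cheap information source is exhausted it escalates to the expensive target function.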
In this paper, we analyze a generic algorithm scheme for sequential global optimization using Gaussian processes. The upper bounds we derive on the cumulative regret for this generic algorithm improve by an exponential factor the previously known bounds for algorithms like GP-UCB. We also introduce the novel Gaussian Process Mutual Information algorithm (GP-MI), which significantly improves further these upper bounds for the cumulative regret. We confirm the efficiency of this algorithm on synthetic and real tasks against the natural competitor, GP-UCB, and also the Expected Improvement heuristic.
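A sketch of the GP-MI rule (assumed kernel; the paper sets alpha = log(2/delta) for a confidence level delta): the exploration bonus is sqrt(alpha) * (sqrt(var(x) + g) - sqrt(g)), where g accumulates the variances of the points queried so far, so exploration automatically decays as information is gathered.

```python
import math
import numpy as np

def rbf(A, B, ell=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell ** 2)

def gp_posterior(X, y, Xs, noise=1e-3):
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(Xs, X)
    Kinv = np.linalg.inv(K)
    mu = Ks @ Kinv @ y
    var = np.maximum(1.0 - np.einsum("ij,jk,ik->i", Ks, Kinv, Ks), 1e-12)
    return mu, var

def gp_mi_pick(mu, var, gamma_hat, alpha=math.log(2.0 / 1e-6)):
    # GP-MI score: mu + sqrt(alpha) * (sqrt(var + gamma_hat) - sqrt(gamma_hat));
    # gamma_hat accumulates queried variances, so the bonus shrinks over time
    score = mu + math.sqrt(alpha) * (np.sqrt(var + gamma_hat) - math.sqrt(gamma_hat))
    i = int(np.argmax(score))
    return i, gamma_hat + var[i]

f = lambda x: -(x - 0.3) ** 2
grid = np.linspace(0.0, 1.0, 101).reshape(-1, 1)
X, y = grid[[0, 100]], f(grid[[0, 100]]).ravel()
gamma_hat = 0.0
for _ in range(10):
    i, gamma_hat = gp_mi_pick(*gp_posterior(X, y, grid), gamma_hat)
    X, y = np.vstack([X, grid[i]]), np.append(y, f(grid[i]).item())
best_x = X[np.argmax(y), 0]
```

The toy objective and delta = 1e-6 are invented for illustration; the point is the self-regulating exploration term, which distinguishes GP-MI from a fixed-beta UCB rule.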
Bandit methods for black-box optimisation, such as Bayesian optimisation, are used in a variety of applications including hyper-parameter tuning and experiment design. Recently, multi-fidelity methods have garnered considerable attention since function evaluations have become increasingly expensive in such applications. Multi-fidelity methods use cheap approximations to the function of interest to speed up the overall optimisation process. However, most multi-fidelity methods assume only a finite number of approximations. In many practical applications however, a continuous spectrum of approximations might be available. For instance, when tuning an expensive neural network, one might choose to approximate the cross validation performance using less data N and/or fewer training iterations T. Here, the approximations are best viewed as arising out of a continuous two dimensional space (N, T). In this work, we develop a Bayesian optimisation method, BOCA, for this setting. We characterise its theoretical properties and show that it achieves better regret than strategies which ignore the approximations. BOCA outperforms several other baselines in synthetic and real experiments.
There has recently been growing interest in Bayesian optimization, the optimization of an unknown function with assumptions usually expressed by a Gaussian process (GP) prior. We study an optimization strategy that directly uses an estimate of the argmax of the function. This strategy offers both practical and theoretical advantages: no tradeoff parameter needs to be selected, and, moreover, we establish close connections to the popular GP-UCB and GP-PI strategies. Our approach can be understood as automatically and adaptively trading off exploration and exploitation in GP-UCB and GP-PI. We illustrate the effect of this adaptive tuning via bounds on the regret as well as an extensive empirical evaluation on robotics and vision tasks, demonstrating the robustness of this strategy for a range of performance criteria.
Bayesian Optimisation (BO) is a technique used in optimising a $D$-dimensional function which is typically expensive to evaluate. While there have been many successes for BO in low dimensions, scaling it to high dimensions has been notoriously difficult. Existing literature on the topic is under very restrictive settings. In this paper, we identify two key challenges in this endeavour. We tackle these challenges by assuming an additive structure for the function. This setting is substantially more expressive and contains a richer class of functions than previous work. We prove that, for additive functions, the regret has only linear dependence on $D$ even though the function depends on all $D$ dimensions. We also demonstrate several other statistical and computational benefits in our framework. Via synthetic examples, a scientific simulation and a face detection problem we demonstrate that our method outperforms naive BO on additive functions and on several examples where the function is not additive.
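The computational benefit of the additive assumption can be illustrated directly: if the acquisition function decomposes as a sum over low-dimensional groups, each group can be maximised independently and the coordinates concatenated, instead of searching the full $D$-dimensional space. A toy numpy check with two 1-D groups (the kernels, data, and grids are all invented for illustration):

```python
import numpy as np

def rbf(A, B, ell=0.3):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell ** 2)

def group_ucb(Xg, yg, grid, beta=4.0, noise=1e-3):
    # per-group GP-UCB acquisition on that group's own low-dimensional grid
    K = rbf(Xg, Xg) + noise * np.eye(len(Xg))
    Ks = rbf(grid, Xg)
    Kinv = np.linalg.inv(K)
    mu = Ks @ Kinv @ yg
    var = np.maximum(1.0 - np.einsum("ij,jk,ik->i", Ks, Kinv, Ks), 1e-12)
    return mu + np.sqrt(beta * var)

g = np.linspace(0.0, 1.0, 41).reshape(-1, 1)
# f(x1, x2) = f1(x1) + f2(x2): one independent 1-D GP per group (toy data)
u1 = group_ucb(g[[0, 40]], np.array([0.2, -0.1]), g)
u2 = group_ucb(g[[10, 30]], np.array([0.5, 0.4]), g)
# maximise each group independently and concatenate the coordinates ...
x_star = np.array([g[int(np.argmax(u1)), 0], g[int(np.argmax(u2)), 0]])
# ... which matches a brute-force search over the full 2-D product grid
U = u1[:, None] + u2[None, :]
i, j = np.unravel_index(int(np.argmax(U)), U.shape)
```

With a grid of n points per dimension, the per-group search costs O(n) per group instead of O(n^D) for the joint grid, which is the source of the scaling benefit.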
When monitoring spatial phenomena, which can often be modeled as Gaussian processes (GPs), choosing sensor locations is a fundamental task. There are several common strategies to address this task, for example, geometry or disk models, placing sensors at the points of highest entropy (variance) in the GP model, and A-, D-, or E-optimal design. In this paper, we tackle the combinatorial optimization problem of maximizing the mutual information between the chosen locations and the locations which are not selected. We prove that the problem of finding the configuration that maximizes mutual information is NP-complete. To address this issue, we describe a polynomial-time approximation that is within (1 − 1/e) of the optimum by exploiting the submodularity of mutual information. We also show how submodularity can be used to obtain online bounds, and design branch and bound search procedures. We then extend our algorithm to exploit lazy evaluations and local structure in the GP, yielding significant speedups. We also extend our approach to find placements which are robust against node failures and uncertainties in the model. These extensions are again associated with rigorous theoretical approximation guarantees, exploiting the submodularity of the objective function. We demonstrate the advantages of our approach towards optimizing mutual information in a very extensive empirical study on two real-world data sets.
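The greedy (1 − 1/e)-approximation can be sketched as follows: at each step, add the location y that maximizes the ratio of its variance given the chosen set A to its variance given the unselected rest, i.e. the mutual-information gain of adding y. The kernel, noise level, and 1-D line of candidate sites below are invented for illustration:

```python
import numpy as np

def cond_var(K, y, S):
    # prior variance of location y minus the part explained by sensors in S
    if not S:
        return K[y, y]
    Kss = K[np.ix_(S, S)]
    k = K[y, S]
    return K[y, y] - k @ np.linalg.solve(Kss, k)

def greedy_mi_placement(K, k):
    # greedy rule: add the location maximizing var(y | A) / var(y | unselected rest),
    # the mutual-information gain whose submodularity gives the (1 - 1/e) guarantee
    n, A = len(K), []
    for _ in range(k):
        scores = []
        for y in range(n):
            if y in A:
                scores.append(-np.inf)
                continue
            rest = [j for j in range(n) if j != y and j not in A]
            scores.append(cond_var(K, y, A) / cond_var(K, y, rest))
        A.append(int(np.argmax(scores)))
    return A

# 15 candidate sensor sites on a line, RBF covariance plus sensor noise
pts = np.linspace(0.0, 1.0, 15)
K = np.exp(-0.5 * (pts[:, None] - pts[None, :]) ** 2 / 0.15 ** 2) + 1e-2 * np.eye(15)
chosen = greedy_mi_placement(K, 3)
```

A characteristic difference from pure entropy/variance placement shows up even in this toy run: the MI criterion avoids wasting sensors on the boundary, since boundary sites are poorly predicted by (and poorly predictive of) the rest of the space.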
Bayesian optimization (BO) based on Gaussian process models is a powerful paradigm for optimizing black-box functions that are expensive to evaluate. While several BO algorithms provably converge to the global optimum of the unknown function, they assume that the hyperparameters of the kernel are known. This is not the case in practice, and misspecification often causes these algorithms to converge to poor local optima. In this paper, we present the first BO algorithm that is provably no-regret and converges to the optimum without prior knowledge of the hyperparameters. We slowly adapt the hyperparameters of a stationary kernel, thereby expanding the associated function class over time, so that the BO algorithm considers more complex function candidates. Based on the theoretical insight, we propose several practical algorithms that achieve the empirical sample efficiency of BO with online hyperparameter estimation, while retaining theoretical convergence guarantees. We evaluate our method on several benchmark problems.
We consider Bayesian methods for multi-information source optimization (MISO), in which we seek to optimize an expensive-to-evaluate black-box objective function while also accessing cheaper but biased and noisy approximations ("information sources"). We present a novel algorithm that outperforms the state of the art for this problem by using a Gaussian process covariance kernel better suited to MISO than those used by previous approaches, and an acquisition function based on a one-step optimality analysis supported by efficient parallelization. We also provide a novel technique to guarantee the asymptotic quality of the solution provided by this algorithm. Experimental evaluations demonstrate that this algorithm consistently finds designs of higher value at less cost than previous approaches.
We present a tutorial on Bayesian optimization, a method of finding the maximum of expensive cost functions. Bayesian optimization employs the Bayesian technique of setting a prior over the objective function and combining it with evidence to get a posterior function. This permits a utility-based selection of the next observation to make on the objective function, which must take into account both exploration (sampling from areas of high uncertainty) and exploitation (sampling areas likely to offer improvement over the current best observation). We also present two detailed extensions of Bayesian optimization, with experiments (active user modelling with preferences, and hierarchical reinforcement learning), and a discussion of the pros and cons of Bayesian optimization based on our experiences.
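The basic loop the tutorial describes (prior + evidence gives a posterior, then a utility picks the next observation) can be sketched with the expected improvement utility, one of several acquisition functions such tutorials cover. The kernel, lengthscale, the exploration offset xi, and the toy objective are all assumed here:

```python
import math
import numpy as np

def rbf(A, B, ell=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell ** 2)

def gp_posterior(X, y, Xs, noise=1e-3):
    # the prior over f combined with the evidence (X, y) yields the posterior
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(Xs, X)
    Kinv = np.linalg.inv(K)
    mu = Ks @ Kinv @ y
    var = np.maximum(1.0 - np.einsum("ij,jk,ik->i", Ks, Kinv, Ks), 1e-12)
    return mu, var

def expected_improvement(mu, var, best, xi=0.01):
    # exploitation (mu above best) and exploration (large sd) both raise the utility
    sd = np.sqrt(var)
    z = (mu - best - xi) / sd
    cdf = np.array([0.5 * (1.0 + math.erf(v / math.sqrt(2.0))) for v in z])
    pdf = np.exp(-0.5 * z ** 2) / math.sqrt(2.0 * math.pi)
    return (mu - best - xi) * cdf + sd * pdf

f = lambda x: -(x - 0.7) ** 2
grid = np.linspace(0.0, 1.0, 101).reshape(-1, 1)
X, y = grid[[0, 100]], f(grid[[0, 100]]).ravel()
for _ in range(10):
    mu, var = gp_posterior(X, y, grid)
    i = int(np.argmax(expected_improvement(mu, var, y.max())))
    X, y = np.vstack([X, grid[i]]), np.append(y, f(grid[i]).item())
best_x = X[np.argmax(y), 0]
```

Each iteration refits the posterior to all evidence so far and queries the maximizer of the utility, so the loop naturally shifts from exploration to exploitation as uncertainty shrinks.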
We present a new algorithm, truncated variance reduction (TRUVAR), that treats Bayesian optimization (BO) and level-set estimation (LSE) with Gaussian processes in a unified fashion. The algorithm greedily shrinks a sum of truncated variances within a set of potential maximizers (BO) or unclassified points (LSE), which is updated based on confidence bounds. TRUVAR is effective in several important settings that are typically non-trivial to incorporate into myopic algorithms, including pointwise costs and heteroscedastic noise. We provide a general theoretical guarantee for TRUVAR covering these aspects, and use it to recover and strengthen existing results on BO and LSE. Moreover, we provide a new result for a setting where one can select from a number of noise levels having associated costs. We demonstrate the effectiveness of the algorithm on both synthetic and real-world data sets.
Many applications require optimizing an unknown, noisy function that is expensive to evaluate. We formalize this task as a multiarmed bandit problem, where the payoff function is either sampled from a Gaussian process (GP) or has low norm in a reproducing kernel Hilbert space. We resolve the important open problem of deriving regret bounds for this setting, which imply novel convergence rates for GP optimization. We analyze an intuitive Gaussian process upper confidence bound algorithm (GP-UCB), and bound its cumulative regret in terms of maximal information gain, establishing a novel connection between GP optimization and experimental design. Moreover, by bounding the latter in terms of operator spectra, we obtain explicit sublinear regret bounds for many commonly used covariance functions. In some important cases, our bounds have surprisingly weak dependence on the dimensionality. In our experiments on real sensor data, GP-UCB compares favorably with other heuristical GP optimization approaches.
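A minimal numpy sketch of the GP-UCB rule (with illustrative choices of kernel, lengthscale, noise, and a fixed beta rather than the theoretically prescribed schedule beta_t):

```python
import numpy as np

def rbf(A, B, ell=0.5):
    # squared-exponential kernel with unit signal variance (an assumed choice)
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell ** 2)

def gp_posterior(X, y, Xs, noise=1e-3):
    # zero-mean GP posterior mean and variance at the query points Xs
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(Xs, X)
    Kinv = np.linalg.inv(K)
    mu = Ks @ Kinv @ y
    var = np.maximum(1.0 - np.einsum("ij,jk,ik->i", Ks, Kinv, Ks), 1e-12)
    return mu, var

def gp_ucb_pick(X, y, candidates, beta=4.0):
    # choose the candidate maximizing mu(x) + sqrt(beta) * sigma(x)
    mu, var = gp_posterior(X, y, candidates)
    return int(np.argmax(mu + np.sqrt(beta * var)))

# toy run: maximize f on a grid, starting from the two endpoints
f = lambda x: -(x - 0.3) ** 2
grid = np.linspace(0.0, 1.0, 101).reshape(-1, 1)
X, y = grid[[0, 100]], f(grid[[0, 100]]).ravel()
for _ in range(10):
    i = gp_ucb_pick(X, y, grid)
    X, y = np.vstack([X, grid[i]]), np.append(y, f(grid[i]).item())
best_x = X[np.argmax(y), 0]
```

The single score mu + sqrt(beta)·sigma encodes the exploration-exploitation tradeoff: early picks chase high posterior variance, later picks concentrate around the posterior-mean maximizer.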
How should we design experiments to maximize performance of a complex system, taking into account uncontrollable environmental conditions? How should we select relevant documents (ads) to display, given information about the user? These tasks can be formalized as contextual bandit problems, where at each round, we receive context (about the experimental conditions, the query), and have to choose an action (parameters, documents). The key challenge is to trade off exploration by gathering data for estimating the mean payoff function over the context-action space, and to exploit by choosing an action deemed optimal based on the gathered data. We model the payoff function as a sample from a Gaussian process defined over the joint context-action space, and develop CGP-UCB, an intuitive upper-confidence style algorithm. We show that by mixing and matching kernels for contexts and actions, CGP-UCB can handle a variety of practical applications. We further provide generic tools for deriving regret bounds when using such composite kernel functions. Lastly, we evaluate our algorithm on two case studies, in the context of automated vaccine design and sensor management. We show that context-sensitive optimization outperforms no or naive use of context.
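The "mixing and matching" of kernels can be sketched with a product kernel over joint (context, action) points, k((c, a), (c', a')) = k_c(c, c') * k_a(a, a'). The sketch below (assumed RBF factors, lengthscales, and toy history data) picks an action by UCB for a given context:

```python
import numpy as np

def rbf(A, B, ell):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell ** 2)

def composite_kernel(Z1, Z2, ell_c=0.4, ell_a=0.2):
    # product kernel over joint points; column 0 is the context, column 1 the action
    return rbf(Z1[:, :1], Z2[:, :1], ell_c) * rbf(Z1[:, 1:], Z2[:, 1:], ell_a)

def cgp_ucb_action(Z, r, context, actions, beta=4.0, noise=1e-4):
    # UCB over actions for the given context, using the joint-space GP
    Zq = np.column_stack([np.full(len(actions), context), actions])
    K = composite_kernel(Z, Z) + noise * np.eye(len(Z))
    Ks = composite_kernel(Zq, Z)
    Kinv = np.linalg.inv(K)
    mu = Ks @ Kinv @ r
    var = np.maximum(1.0 - np.einsum("ij,jk,ik->i", Ks, Kinv, Ks), 1e-12)
    return int(np.argmax(mu + np.sqrt(beta * var))), mu

# history of (context, action) pairs and observed payoffs (toy data)
Z = np.array([[0.1, 0.2], [0.1, 0.8], [0.9, 0.5]])
r = np.array([1.0, 0.2, 0.6])
actions = np.linspace(0.0, 1.0, 21)
best, mu = cgp_ucb_action(Z, r, context=0.1, actions=actions)
```

Because the product kernel couples rounds with similar contexts, payoffs observed under one context inform the posterior for nearby contexts; other compositions (e.g. sums of kernels) fit the same template.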
Bayesian optimization usually assumes that a Bayesian prior is given. However, the strong theoretical guarantees in Bayesian optimization are often compromised in practice because of unknown parameters in the prior. In this paper, we adopt a variant of empirical Bayes and show that, by estimating the Gaussian process prior from offline data sampled from the same prior and constructing unbiased estimators of the posterior, variants of GP-UCB and probability of improvement achieve a near-zero regret bound, which decreases to a constant proportional to the observational noise as the number of offline data and the number of online evaluations increase. Empirically, we have validated our approach on challenging simulated robotic problems featuring task and motion planning.
We propose minimum regret search (MRS), a novel acquisition function for Bayesian optimization. MRS bears similarities with information-theoretic approaches such as entropy search (ES). However, while ES aims in each query at maximizing the information gain with respect to the global maximum, MRS aims at minimizing the expected simple regret of its ultimate recommendation for the optimum. While empirically ES and MRS perform similarly in most cases, MRS produces fewer outliers with high simple regret than ES. We provide empirical results both for a synthetic single-task optimization problem as well as for a simulated multi-task robotic control problem.