Contextual policy search allows adapting robotic movement primitives to different situations. For instance, a locomotion primitive might be adapted to different terrain inclinations or desired walking speeds. Such an adaptation is often achievable by modifying a small number of hyperparameters. However, learning, when performed on real robotic systems, is typically restricted to a small number of trials. Bayesian optimization has recently been proposed as a sample-efficient means for contextual policy search that is well suited under these conditions. In this work, we extend entropy search, a variant of Bayesian optimization, such that it can be used for active contextual policy search, where the agent selects those tasks during training in which it expects to learn the most. Empirical results in simulation suggest that this allows learning successful behavior with fewer trials.
Contextual policy search allows adapting robotic movement primitives to different situations. For instance, a locomotion primitive might be adapted to different terrain inclinations or desired walking speeds. Such an adaptation is often achievable by modifying a relatively small number of hyperparameters; however, learning, when performed on an actual robotic system, is typically restricted to a relatively small number of trials. In black-box optimization, Bayesian optimization is a popular global search approach for addressing problems with a low-dimensional search space but an expensive cost function. We present an extension of Bayesian optimization to contextual policy search. Preliminary results suggest that Bayesian optimization outperforms local search approaches on low-dimensional contextual policy search problems.
Bayesian optimization is a sample-efficient method for black-box global optimization. However, the performance of a Bayesian optimization method very much depends on its exploration strategy, i.e. the choice of acquisition function, and it is not clear a priori which choice will result in superior performance. While portfolio methods provide an effective, principled way of combining a collection of acquisition functions, they are often based on measures of past performance which can be misleading. To address this issue, we introduce the Entropy Search Portfolio (ESP): a novel approach to portfolio construction which is motivated by information-theoretic considerations. We show that ESP outperforms existing portfolio methods on several real and synthetic problems, including geostatistical datasets and simulated control tasks. We show not only that ESP can match the performance of the best, but unknown, acquisition function, but, surprisingly, that it often performs better. Finally, over a wide range of conditions we find that ESP is robust to the inclusion of poor acquisition functions.
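A rough sketch of what such an information-theoretic portfolio choice can look like on a discrete candidate grid, assuming a GP posterior (mean `mu`, covariance `cov`) is available: each base acquisition nominates one grid index, and ESP queries the nominee whose fantasized observation most reduces the entropy of an empirical argmax distribution. This is a crude Monte Carlo caricature of the criterion, not the estimator used in the paper; all names and constants are illustrative.

```python
import numpy as np

def argmax_entropy(mu, cov, n_samples=200, rng=None):
    """Entropy of an empirical argmax distribution over a discrete grid,
    estimated from joint posterior function samples."""
    rng = np.random.default_rng() if rng is None else rng
    L = np.linalg.cholesky(cov + 1e-8 * np.eye(len(mu)))
    F = mu[:, None] + L @ rng.standard_normal((len(mu), n_samples))
    p = np.bincount(F.argmax(axis=0), minlength=len(mu)) / n_samples
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def esp_select(proposals, mu, cov, noise=1e-3, n_fantasy=5, rng=None):
    """Pick the portfolio proposal (a grid index nominated by some base
    acquisition) whose fantasized observation yields the lowest expected
    argmax entropy -- a simplified stand-in for the ESP criterion."""
    rng = np.random.default_rng(0) if rng is None else rng
    best_j, best_h = None, np.inf
    for j in set(proposals):
        s2 = cov[j, j] + noise
        h = 0.0
        for _ in range(n_fantasy):
            y = mu[j] + np.sqrt(s2) * rng.standard_normal()
            mu_f = mu + cov[:, j] * (y - mu[j]) / s2           # rank-1 GP update
            cov_f = cov - np.outer(cov[:, j], cov[j, :]) / s2
            h += argmax_entropy(mu_f, cov_f, rng=rng) / n_fantasy
        if h < best_h:
            best_j, best_h = j, h
    return best_j
```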
We propose a novel information-theoretic approach for Bayesian optimization called Predictive Entropy Search (PES). At each iteration, PES selects the next evaluation point that maximizes the expected information gained with respect to the global maximum. PES codifies this intractable acquisition function in terms of the expected reduction in the differential entropy of the predictive distribution. This reformulation allows PES to obtain approximations that are both more accurate and efficient than other alternatives such as Entropy Search (ES). Furthermore, PES can easily perform a fully Bayesian treatment of the model hyperparameters while ES cannot. We evaluate PES in both synthetic and real-world applications, including optimization problems in machine learning, finance, biotechnology, and robotics. We show that the increased accuracy of PES leads to significant gains in optimization performance.
Entropy Search (ES) and Predictive Entropy Search (PES) are popular and empirically successful Bayesian optimization techniques. Both rely on a compelling information-theoretic motivation and maximize the information gained about the arg max of the unknown function; yet, both are plagued by the expensive computation required for estimating entropies. We propose a new criterion, Max-value Entropy Search (MES), that instead uses the information about the maximum function value. We show relations of MES to other Bayesian optimization methods, and establish a regret bound. We observe that MES maintains or improves the good empirical performance of ES/PES, while tremendously lightening the computational burden. In particular, MES is much more robust to the number of samples used for computing the entropy, and hence more efficient for higher-dimensional problems.
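For reference, the MES acquisition has a closed form once samples of the maximum value $y^{*}$ are available (obtained, e.g., via a Gumbel approximation or maxima of posterior function samples): with $\gamma = (y^{*} - \mu(x))/\sigma(x)$, the per-sample term is $\gamma\,\phi(\gamma)/(2\Phi(\gamma)) - \log\Phi(\gamma)$, averaged over the sampled maxima. A minimal sketch, where `mu` and `sigma` denote the GP posterior at candidate points:

```python
import numpy as np
from scipy.stats import norm

def mes_acquisition(mu, sigma, max_samples):
    """Max-value Entropy Search acquisition, given sampled global maxima.

    mu, sigma:    GP posterior mean/stddev at candidate points, shape (n,)
    max_samples:  sampled values of the global maximum y*, shape (k,)
    """
    gamma = (max_samples[None, :] - mu[:, None]) / sigma[:, None]  # (n, k)
    return np.mean(
        gamma * norm.pdf(gamma) / (2 * norm.cdf(gamma)) - norm.logcdf(gamma),
        axis=1,
    )
```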
Bayesian optimization with Gaussian processes has become an increasingly popular tool in the machine learning community. It is efficient and can be used when very little is known about the objective function, making it popular in expensive black-box optimization scenarios. It uses Bayesian methods to sample the objective efficiently using an acquisition function which incorporates the posterior estimate of the objective. However, there are several different parameterized acquisition functions in the literature, and it is often unclear which one to use. Instead of using a single acquisition function, we adopt a portfolio of acquisition functions governed by an online multi-armed bandit strategy. We propose several portfolio strategies, the best of which we call GP-Hedge, and show that this method outperforms the best individual acquisition function. We also provide a theoretical bound on the algorithm's performance.
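The bandit layer of this idea is simple to sketch. In the toy below, three stand-in "acquisition functions" each nominate a point, an arm is drawn with probability proportional to exp(eta * gain), and every arm is then rewarded by the objective value at its nominee. GP-Hedge rewards arms by the updated GP posterior mean at their nominees; the toy objective here stands in for that, and eta and all other settings are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):  # toy objective standing in for the GP posterior mean
    return -(x - 0.3) ** 2

# three stand-in "acquisition functions": each nominates a point on [0, 1]
acquisitions = [
    lambda: rng.uniform(0.0, 0.5),   # explores the left half
    lambda: rng.uniform(0.5, 1.0),   # explores the right half
    lambda: rng.uniform(0.0, 1.0),   # explores everywhere
]

eta = 5.0                            # Hedge temperature (hypothetical choice)
gains = np.zeros(len(acquisitions))  # cumulative gains, one per arm

for t in range(50):
    nominees = np.array([a() for a in acquisitions])
    p = np.exp(eta * (gains - gains.max()))
    p /= p.sum()
    i = rng.choice(len(acquisitions), p=p)   # pick an arm, query its nominee
    _ = f(nominees[i])                       # objective evaluation at the query
    gains += f(nominees)                     # reward every arm at its nominee
                                             # (GP-Hedge uses the GP mean here)

print("final arm probabilities:", np.round(p, 3))
```

As expected, the weight concentrates on the arm whose nominees lie near the optimum at x = 0.3.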
We present a tutorial on Bayesian optimization, a method of finding the maximum of expensive cost functions. Bayesian optimization employs the Bayesian technique of setting a prior over the objective function and combining it with evidence to get a posterior function. This permits a utility-based selection of the next observation to make on the objective function, which must take into account both exploration (sampling from areas of high uncertainty) and exploitation (sampling areas likely to offer improvement over the current best observation). We also present two detailed extensions of Bayesian optimization with experiments (active user modelling with preferences, and hierarchical reinforcement learning), and a discussion of the pros and cons of Bayesian optimization based on our experiences.
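A minimal Bayesian optimization loop in the spirit of this tutorial, assuming scikit-learn for the GP surrogate and expected improvement as the acquisition; the objective, kernel, and budget below are toy choices, not the tutorial's examples.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def f(x):                        # toy expensive objective (to be maximized)
    return -(x - 0.6) ** 2 + 0.1 * np.sin(20 * x)

X = np.array([[0.1], [0.9]])     # two initial observations
y = f(X).ravel()
grid = np.linspace(0, 1, 500).reshape(-1, 1)

for _ in range(10):
    gp = GaussianProcessRegressor(kernel=RBF(0.1), alpha=1e-6).fit(X, y)
    mu, sigma = gp.predict(grid, return_std=True)
    best = y.max()
    z = (mu - best) / np.maximum(sigma, 1e-12)
    ei = (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)   # expected improvement
    x_next = grid[np.argmax(ei)]                           # utility-based choice
    X = np.vstack([X, x_next[None, :]])
    y = np.append(y, f(x_next))

print("best x:", X[np.argmax(y)].item(), "best y:", y.max())
```

The EI term balances the two forces the tutorial names: the `norm.cdf` part favors points likely to improve on the incumbent (exploitation), while the `sigma * norm.pdf` part favors uncertain regions (exploration).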
Recently, there has been rising interest in Bayesian optimization, the optimization of an unknown function with assumptions usually expressed by a Gaussian process (GP) prior. We study an optimization strategy that directly uses an estimate of the argmax of the function. This strategy offers both practical and theoretical advantages: no tradeoff parameter needs to be selected, and, moreover, we establish close connections to the popular GP-UCB and GP-PI strategies. Our approach can be understood as automatically and adaptively trading off exploration and exploitation in GP-UCB and GP-PI. We illustrate the effect of this adaptive tuning via bounds on the regret as well as an extensive empirical evaluation on robotics and vision tasks, demonstrating the robustness of this strategy for a range of performance criteria.
Many practical applications of machine learning require data-efficient black-box function optimization, e.g., to identify hyperparameters or process settings. However, readily available algorithms are typically designed to be universal optimizers and are therefore often suboptimal for specific tasks. We therefore propose a method for learning optimizers that automatically adapt to a given class of objective functions, e.g., in the context of sim-to-real applications. Instead of learning optimization from scratch, the proposed approach builds on the well-known Bayesian optimization framework. Only the acquisition function (AF) is replaced by a learned neural network, so the resulting algorithm can still exploit the proven generalization capabilities of Gaussian processes. We run experiments on several simulated as well as simulation-to-real transfer tasks. The results show that the learned optimizers (1) consistently perform better than or on par with known AFs on general function classes, and (2) can automatically identify structural properties of a function class using cheap simulations and transfer this knowledge so as to adapt rapidly to real hardware tasks, thereby significantly outperforming existing problem-agnostic AFs.
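The interface of such a learned acquisition function is easy to sketch: a small network scores each candidate from local GP posterior statistics, and the BO loop maximizes that score exactly as it would EI. The weights below are random and the input features are a hypothetical choice made for illustration; in the paper the network is meta-trained (via reinforcement learning) on a function class.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny MLP standing in for the learned acquisition function; its weights
# would be meta-trained on a function class, but here they are random,
# purely to show the interface.
W1, b1 = rng.standard_normal((16, 3)), np.zeros(16)
W2, b2 = rng.standard_normal((1, 16)), np.zeros(1)

def neural_af(mu, sigma, incumbent):
    """Score candidates from per-point GP posterior statistics.

    Features per point: posterior mean, posterior stddev, and the gap to
    the incumbent best observation (a hypothetical feature choice)."""
    feats = np.stack([mu, sigma, incumbent - mu])        # (3, n)
    h = np.tanh(W1 @ feats + b1[:, None])                # (16, n)
    return (W2 @ h + b2[:, None]).ravel()                # (n,)

# usage inside a BO loop, replacing a hand-designed acquisition:
# x_next = grid[np.argmax(neural_af(mu, sigma, y.max()))]
```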
We consider the problem of learning skills that are versatilely applicable. One popular approach for learning such skills is contextual policy search in which the individual tasks are represented as context vectors. We are interested in settings in which the agent is able to actively select the tasks that it examines during the learning process. We argue that there is a better way than selecting each task equally often because some tasks might be easier to learn at the beginning and the knowledge that the agent can extract from these tasks can be transferred to similar but more difficult tasks. The methods that we propose for addressing the task-selection problem model the learning process as a non-stationary multi-armed bandit problem with custom intrinsic reward heuristics so that the estimated learning progress will be maximized. This approach makes no assumptions about either the underlying contextual policy search algorithm or the policy representation. We present empirical results on an artificial benchmark problem and on a ball-throwing problem with a simulated Mitsubishi PA-10 robot arm, which show that active context selection can improve the learning of skills considerably.
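One plausible instantiation of this task-selection scheme treats each task as an arm and uses a simple learning-progress heuristic as the intrinsic reward. The paper proposes its own heuristics and bandit variants; every name, constant, and the epsilon-greedy rule below are illustrative stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)
n_tasks = 5
recent = [[] for _ in range(n_tasks)]   # recent performance history per task
eps = 0.2                               # exploration rate (hypothetical)

def learning_progress(hist, window=5):
    """Intrinsic reward: recent improvement in task performance, i.e. the
    shift between two consecutive windows of the performance history."""
    if len(hist) < 2 * window:
        return np.inf                   # force initial visits to every task
    h = np.asarray(hist)
    return abs(h[-window:].mean() - h[-2 * window:-window].mean())

for episode in range(100):
    scores = [learning_progress(h) for h in recent]
    task = rng.integers(n_tasks) if rng.random() < eps else int(np.argmax(scores))
    # run one contextual policy search update on `task`, observe performance;
    # the line below is a stand-in signal for that outcome:
    perf = rng.normal(loc=episode / (task + 1), scale=1.0)
    recent[task].append(perf)
```

Because learning progress decays once a task is mastered, the bandit naturally shifts effort toward tasks where performance is still changing, which matches the non-stationarity the abstract describes.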
Bayesian optimization is a sample-efficient approach to global optimization that relies on theoretically motivated value heuristics (acquisition functions) to guide its search process. Fully maximizing acquisition functions produces the Bayes' decision rule, but this ideal is difficult to achieve since these functions are frequently non-trivial to optimize. This statement is especially true when evaluating queries in parallel, where acquisition functions are routinely non-convex, high-dimensional, and intractable. We first show that acquisition functions estimated via Monte Carlo integration are consistently amenable to gradient-based optimization. Subsequently, we identify a common family of acquisition functions, including EI and UCB, whose properties not only facilitate but justify use of greedy approaches for their maximization.
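The reparameterization argument can be sketched in a few lines: drawing joint posterior samples as f = mu + Lz makes a Monte Carlo estimate of parallel expected improvement (qEI) differentiable in the batch locations, so the batch can be improved by gradient ascent. Here `mu_fn` and `chol_fn` stand for the posterior mean and Cholesky factor of a GP fitted elsewhere (assumed to be differentiable torch functions of X); this is a sketch of the estimator, not the paper's full machinery.

```python
import torch

def qei_mc(X, mu_fn, chol_fn, best, n_mc=256):
    """Monte Carlo estimate of parallel Expected Improvement (qEI).

    X:        (q, d) batch of candidate points, requires_grad=True
    mu_fn:    posterior mean at X -> (q,)
    chol_fn:  Cholesky factor of the posterior covariance at X -> (q, q)
    best:     incumbent best objective value
    """
    mu, L = mu_fn(X), chol_fn(X)
    z = torch.randn(n_mc, X.shape[0])
    f = mu + z @ L.T                         # (n_mc, q) joint posterior samples
    return torch.clamp(f - best, min=0).max(dim=1).values.mean()

# sketch of the outer loop (mu_fn/chol_fn come from a GP fitted elsewhere):
# X = torch.randn(4, 2, requires_grad=True)
# opt = torch.optim.Adam([X], lr=0.05)
# for _ in range(100):
#     opt.zero_grad()
#     loss = -qei_mc(X, mu_fn, chol_fn, best)
#     loss.backward()
#     opt.step()
```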
We develop parallel predictive entropy search (PPES), a novel algorithm for Bayesian optimization of expensive black-box objective functions. At each iteration, PPES aims to select a batch of points which will maximize the information gain about the global maximizer of the objective. Well-known strategies exist for suggesting a single evaluation point based on previous observations, while far fewer are known for selecting batches of points to evaluate in parallel. The few batch selection schemes that have been studied all resort to greedy methods to compute an optimal batch. To the best of our knowledge, PPES is the first non-greedy batch Bayesian optimization strategy. We demonstrate the benefit of this approach in optimization performance on both synthetic and real-world applications, including problems in machine learning, rocket science and robotics.
Most policy search algorithms require thousands of training episodes to find an effective policy, which is often infeasible on a physical robot. This survey article focuses on the extreme other end of the spectrum: how can a robot adapt with only a handful of trials (a dozen) and a few minutes? By analogy with the term "big data", we refer to this challenge as "micro-data reinforcement learning". We show that a first strategy is to leverage prior knowledge about the policy structure (e.g., dynamic movement primitives), the policy parameters (e.g., demonstrations), or the dynamics (e.g., simulators). A second strategy is to create data-driven surrogate models of the expected reward (e.g., Bayesian optimization) or of the dynamics (e.g., model-based policy search), so that the policy optimizer queries the model instead of the real system. Overall, all successful micro-data algorithms combine these two strategies by varying the type of model and of prior knowledge. The current scientific challenges mostly revolve around scaling up to complex robots (e.g., humanoids), designing generic priors, and optimizing the computation time.
Bayesian optimization and Lipschitz optimization have developed alternative techniques for optimizing black-box functions. They each exploit a different form of prior about the function. In this work, we explore strategies for combining these techniques for better global optimization. In particular, we propose ways of using the Lipschitz continuity assumption within traditional BO algorithms, which we call Lipschitz Bayesian optimization (LBO). This approach does not increase the asymptotic runtime and in some cases drastically improves performance (while in the worst case the performance is similar). Indeed, in a particular setting, we prove that using Lipschitz information yields the same or better regret bounds than using Bayesian optimization alone. Moreover, we propose a simple heuristic for estimating the Lipschitz constant, and prove that a growing estimate of the Lipschitz constant is in some sense "harmless". Our experiments on 15 datasets with 4 acquisition functions show that, in the worst case, LBO performs similarly to the underlying BO method, while in some cases it performs substantially better. In particular, Thompson sampling typically sees drastic improvements (as the Lipschitz information corrects its well-known over-exploration phenomenon), and its LBO variant often outperforms other acquisition functions.
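One simple way such a combination can look, assuming a GP posterior on a candidate grid and an estimate L of the Lipschitz constant: clip the GP upper confidence bound by the Lipschitz upper bound, since the true function can exceed neither. This is an illustrative mix under those assumptions, not necessarily the exact rule studied in the paper.

```python
import numpy as np

def lipschitz_ucb(grid, X_obs, y_obs, mu, sigma, L, beta=2.0):
    """GP-UCB clipped by the Lipschitz upper bound.

    grid:      (n, d) candidate points
    X_obs:     (m, d) observed inputs;  y_obs: (m,) observed values
    mu, sigma: GP posterior mean/stddev on the grid
    L:         Lipschitz constant estimate;  beta: UCB width (illustrative)
    """
    dists = np.linalg.norm(grid[:, None, :] - X_obs[None, :, :], axis=-1)
    lip_upper = (y_obs[None, :] + L * dists).min(axis=1)  # tightest bound
    ucb = mu + beta * sigma
    return np.minimum(ucb, lip_upper)   # the function can exceed neither

# a simple (growing) Lipschitz estimate from observed slopes, as a heuristic:
# L = margin * max over pairs (i, j) of |y_i - y_j| / ||x_i - x_j||
```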
Scarce data is a major challenge to scaling robot learning to truly complex tasks, as we need to generalize locally learned policies over different task contexts. Contextual policy search offers data-efficient learning and generalization by explicitly conditioning the policy on a parametric context space. In this paper, we further structure the contextual policy representation. We propose to factor contexts into two components: target contexts that describe the task objectives, e.g., target position for throwing a ball; and environment contexts that characterize the environment, e.g., initial position or mass of the ball. Our key observation is that experience can be directly generalized over target contexts. We show that this can be easily exploited in contextual policy search algorithms. In particular, we apply factorization to a Bayesian optimization approach to contextual policy search, in both sampling-based and active learning settings. Our simulation results show faster learning and better generalization in various robotic domains. See our supplementary video: https://youtu.be/MNTbBAOufDY.
We present a new algorithm, truncated variance reduction (TRUVAR), that treats Bayesian optimization (BO) and level-set estimation (LSE) with Gaussian processes in a unified fashion. The algorithm greedily shrinks a sum of truncated variances within a set of potential maximizers (BO) or unclassified points (LSE), which is updated based on confidence bounds. TRUVAR is effective in several important settings that are typically non-trivial to incorporate into myopic algorithms, including pointwise costs and heteroscedastic noise. We provide a general theoretical guarantee for TRUVAR covering these aspects, and use it to recover and strengthen existing results on BO and LSE. Moreover, we provide a new result for a setting where one can select from a number of noise levels having associated costs. We demonstrate the effectiveness of the algorithm on both synthetic and real-world data sets.
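A stripped-down version of the selection rule is easy to state: pick the point whose (hypothetical) observation most reduces the total posterior variance over the current set of potential maximizers. The sketch below omits TRUVAR's truncation and the confidence-bound update of that set, and exploits the fact that GP posterior variance does not depend on the observed values, only on where observations are made.

```python
import numpy as np

def truvar_step(K, obs_idx, noise, M_idx):
    """Greedy variance-reduction step over a discrete candidate set.

    K:       (n, n) prior kernel matrix over all candidates
    obs_idx: list of indices already observed
    noise:   observation noise variance
    M_idx:   indices of the potential maximizers (kept fixed here; TRUVAR
             updates this set via confidence bounds and truncates variances)
    """
    n = K.shape[0]

    def post_cov(idx):
        if not idx:
            return K.copy()
        A = np.asarray(idx)
        Kaa = K[np.ix_(A, A)] + noise * np.eye(len(A))
        Kxa = K[:, A]
        return K - Kxa @ np.linalg.solve(Kaa, Kxa.T)

    base = post_cov(obs_idx)
    best_j, best_red = None, -np.inf
    for j in range(n):
        after = post_cov(obs_idx + [j])              # fantasize observing j
        red = base[M_idx, M_idx].sum() - after[M_idx, M_idx].sum()
        if red > best_red:
            best_j, best_red = j, red
    return best_j
```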
Bayesian optimization (BO) is a popular algorithm for solving challenging optimization tasks. It is designed for problems where the objective function is expensive to evaluate, perhaps not available in exact form, without gradient information and possibly returning noisy values. Different versions of the algorithm vary in the choice of the acquisition function, which recommends the point to query the objective at next. Initially, researchers focused on improvement-based acquisitions, while more recently the attention has shifted to computationally expensive information-theoretic measures. In this paper we present two major contributions to the literature. First, we propose a new improvement-based acquisition function that recommends query points where the improvement is expected to be high with high confidence. The proposed algorithm is evaluated on a large number of benchmark functions from the global optimization literature, where it performs at least as well as current state-of-the-art acquisition functions, and often better. This suggests that it is a powerful default choice for BO. The novel policy is then compared to widely used global optimization solvers in order to confirm that BO methods reduce the computational cost of the optimization by keeping the number of function evaluations small. The second major contribution represents an application to precision medicine, where the interest lies in estimating the parameters of a partial differential equations model of the human pulmonary blood circulation system. Once inferred, these parameters can help clinicians diagnose a patient with pulmonary hypertension without going through the standard invasive procedure of right heart catheterization, which can lead to side effects and complications (e.g. severe pain, internal bleeding, thrombosis).
Designing gaits and corresponding control policies is a key challenge in robot locomotion. Even with a viable controller parameterization, finding near-optimal parameters can be daunting. Typically, this kind of parameter optimization requires specific expert knowledge and extensive robot experiments. Automatic black-box gait optimization methods greatly reduce the need for human expertise and time-consuming design processes. Many different approaches for automatic gait optimization have been suggested to date, such as grid search and evolutionary algorithms. In this article, we thoroughly discuss several of these optimization methods in the context of automatic gait optimization. Moreover, we extensively evaluate Bayesian optimization, a model-based approach to black-box optimization under uncertainty, on both simulated problems and real robots. This evaluation demonstrates that Bayesian optimization is particularly suited for robotic applications, where it is crucial to find a good set of gait parameters in a small number of experiments.
Bayesian optimization (BO) is an effective tool for black-box optimization in which objective function evaluation is usually quite expensive. In practice, lower-fidelity approximations of the objective function are often available. Recently, multi-fidelity Bayesian optimization (MFBO) has attracted considerable attention because it can dramatically accelerate the optimization process by using those cheaper observations. We propose a novel information-theoretic approach to MFBO. Information-based approaches are popular in BO, but existing studies of information-based MFBO are plagued by the difficulty of accurately estimating the information gain. Our approach is based on a variant of information-based BO called max-value entropy search (MES), which greatly facilitates the evaluation of the information gain in MFBO. In fact, our acquisition function is written analytically apart from a one-dimensional integral and sampling, and can be computed efficiently and accurately. We demonstrate the effectiveness of our approach on synthetic and benchmark datasets, and further show a real-world application to materials science data.
Bayesian optimization has shown success in finding the optimum location $x^{*}$ and value $f^{*}=f(x^{*})=\max_{x\in\mathcal{X}}f(x)$ of a black-box function $f$. In some applications, however, the optimum value is known in advance, and the goal is to find the corresponding optimum location. Existing work in Bayesian optimization (BO) has not effectively exploited the knowledge of $f^{*}$ for optimization. In this paper, we consider a new setting in BO in which knowledge of the optimum value is available. Our goal is to exploit the knowledge about $f^{*}$ to search for the location $x^{*}$ efficiently. To achieve this goal, we first transform the Gaussian process using the information about the optimum value. Then, we propose two acquisition functions, called confidence bound minimization and expected regret minimization, which exploit the knowledge about the optimum value to identify the optimum location efficiently. We show that our approaches work both intuitively and quantitatively achieve better performance than standard BO methods. We demonstrate real applications in tuning a deep reinforcement learning algorithm on the CartPole problem and tuning XGBoost on the Skin Segmentation dataset, where the optimum values are publicly available.
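Our reading of the confidence bound minimization idea admits a one-line sketch: with $f^{*}$ known, query where the fewest posterior standard deviations separate the mean from $f^{*}$. The paper additionally transforms the GP itself using $f^{*}$; the snippet below shows only the acquisition, as an illustrative interpretation rather than the paper's exact formulation.

```python
import numpy as np

def cbm_acquisition(mu, sigma, f_star):
    """Confidence bound minimization given the known optimum value f*:
    score each candidate by how many posterior standard deviations
    separate its mean from f* (smaller is better)."""
    return (f_star - mu) / np.maximum(sigma, 1e-12)

# next query on a candidate grid:
# x_next = grid[np.argmin(cbm_acquisition(mu, sigma, f_star))]
```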