Bayesian optimization is an effective methodology for the global optimization of functions with expensive evaluations. It relies on querying a distribution over functions defined by a relatively cheap surrogate model. An accurate model for this distribution over functions is critical to the effectiveness of the approach, and is typically fit using Gaussian processes (GPs). However, since GPs scale cubically with the number of observations, it has been challenging to handle objectives whose optimization requires many evaluations, and as such, massively parallelizing the optimization. In this work, we explore the use of neural networks as an alternative to GPs to model distributions over functions. We show that performing adaptive basis function regression with a neural network as the parametric form performs competitively with state-of-the-art GP-based approaches, but scales linearly with the number of data rather than cubically. This allows us to achieve a previously intractable degree of parallelism, which we apply to large scale hyperparameter optimization, rapidly finding competitive models on benchmark object recognition tasks using convolutional networks, and image caption generation using neural language models.
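To make the scaling argument concrete, below is a minimal numpy sketch of Bayesian linear regression over basis functions, the core of the adaptive basis function regression described above. A fixed random tanh feature map stands in for the learned last hidden layer of a trained network, and the hyperparameters (alpha, beta) are illustrative assumptions, so this is a sketch of the idea rather than the paper's method. Note that the dominant cost, Phi.T @ Phi, is linear in the number of observations n.

    # Minimal sketch: Bayesian linear regression on basis functions (numpy only).
    # In the paper the basis phi(x) is the last hidden layer of a trained neural
    # net; here a fixed random tanh feature map stands in for it.
    import numpy as np

    rng = np.random.default_rng(0)
    n, d, m = 50, 3, 100          # observations, input dim, number of basis functions
    alpha, beta = 1.0, 25.0       # assumed prior precision and noise precision

    W, b = rng.normal(size=(d, m)), rng.normal(size=m)
    phi = lambda X: np.tanh(X @ W + b)                   # basis expansion, shape (n, m)

    X = rng.uniform(-1, 1, size=(n, d))
    y = np.sin(3 * X[:, 0]) + 0.2 * rng.normal(size=n)   # toy objective

    Phi = phi(X)
    S_inv = alpha * np.eye(m) + beta * Phi.T @ Phi       # posterior precision (m x m)
    S = np.linalg.inv(S_inv)
    mu = beta * S @ Phi.T @ y                            # posterior mean over weights

    def predict(Xs):
        Ps = phi(Xs)
        mean = Ps @ mu
        var = 1.0 / beta + np.sum(Ps @ S * Ps, axis=1)   # predictive variance
        return mean, var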
Machine learning algorithms frequently require careful tuning of model hyperparameters, regularization terms, and optimization parameters. Unfortunately, this tuning is often a "black art" that requires expert experience, unwritten rules of thumb, or sometimes brute-force search. Much more appealing is the idea of developing automatic approaches which can optimize the performance of a given learning algorithm to the task at hand. In this work, we consider the automatic tuning problem within the framework of Bayesian optimization, in which a learning algorithm's generalization performance is modeled as a sample from a Gaussian process (GP). The tractable posterior distribution induced by the GP leads to efficient use of the information gathered by previous experiments, enabling optimal choices about what parameters to try next. Here we show how the effects of the Gaussian process prior and the associated inference procedure can have a large impact on the success or failure of Bayesian optimization. We show that thoughtful choices can lead to results that exceed expert-level performance in tuning machine learning algorithms. We also describe new algorithms that take into account the variable cost (duration) of learning experiments and that can leverage the presence of multiple cores for parallel experimentation. We show that these proposed algorithms improve on previous automatic procedures and can reach or surpass human expert-level optimization on a diverse set of contemporary algorithms including latent Dirichlet allocation, structured SVMs and convolutional neural networks.
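The following is a minimal sketch of one step of GP-based Bayesian optimization with expected improvement, the loop the abstract above describes. The kernel hyperparameters are held fixed for brevity, whereas the paper marginalizes them with MCMC; the toy objective and all settings are illustrative.

    # One Bayesian-optimization step with a fixed-hyperparameter RBF GP and
    # expected improvement (EI), under a minimization convention.
    import numpy as np
    from scipy.stats import norm

    def rbf(A, B, ls=0.3):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / ls**2)

    def gp_posterior(X, y, Xs, noise=1e-6):
        K = rbf(X, X) + noise * np.eye(len(X))
        Ks, Kss = rbf(X, Xs), rbf(Xs, Xs)
        L = np.linalg.cholesky(K)
        a = np.linalg.solve(L.T, np.linalg.solve(L, y))
        mu = Ks.T @ a
        v = np.linalg.solve(L, Ks)
        var = np.diag(Kss) - (v * v).sum(0)
        return mu, np.maximum(var, 1e-12)

    def expected_improvement(mu, var, best):
        s = np.sqrt(var)
        z = (best - mu) / s
        return s * (z * norm.cdf(z) + norm.pdf(z))

    X = np.array([[0.1], [0.5], [0.9]])       # observed inputs
    y = np.sin(5 * X[:, 0])                   # toy objective values
    cand = np.linspace(0, 1, 200)[:, None]    # candidate grid
    mu, var = gp_posterior(X, y, cand)
    x_next = cand[np.argmax(expected_improvement(mu, var, y.min()))]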
Bayesian optimization has recently been proposed as a framework for automatically tuning the hyperparameters of machine learning models and has been shown to yield state-of-the-art performance with impressive ease and efficiency. In this paper, we explore whether it is possible to transfer the knowledge gained from previous optimizations to new tasks in order to find optimal hyperparameter settings more efficiently. Our approach is based on extending multi-task Gaussian processes to the framework of Bayesian optimization. We show that this method significantly speeds up the optimization process when compared to the standard single-task approach. We further propose a straightforward extension of our algorithm in order to jointly minimize the average error across multiple tasks and demonstrate how this can be used to greatly speed up k-fold cross-validation. Lastly, we propose an adaptation of a recently developed acquisition function, entropy search, to the cost-sensitive, multi-task setting. We demonstrate the utility of this new acquisition function by leveraging a small dataset to explore hyperparameter settings for a large dataset. Our algorithm dynamically chooses which dataset to query in order to yield the most information per unit cost.
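A common way to extend GPs to multiple tasks, and presumably the kind of structure involved here, is an intrinsic-coregionalization kernel K((x,t),(x',t')) = B[t,t'] * k(x,x'), where B encodes task similarity. The numpy sketch below hand-sets B for illustration; in practice B is learned from the data.

    # Sketch of a multi-task kernel of the intrinsic-coregionalization form.
    # B is hand-set here for illustration; in practice it is learned.
    import numpy as np

    def rbf(a, b, ls=0.5):
        return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls**2)

    B = np.array([[1.0, 0.8],
                  [0.8, 1.0]])            # assumed correlation between two tasks

    def multitask_k(x, t, x2, t2):
        return B[np.ix_(t, t2)] * rbf(x, x2)

    x = np.array([0.1, 0.4, 0.7])         # observation locations
    t = np.array([0, 0, 1])               # task id of each observation
    K = multitask_k(x, t, x, t)           # joint covariance across both tasks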
Owing to its data efficiency, Bayesian optimization has moved to the forefront of expensive black-box optimization. In recent years there has been a surge of research on new Bayesian optimization algorithms and their applications. This paper therefore attempts to provide a comprehensive and up-to-date survey of recent advances in Bayesian optimization and to identify interesting open problems. We categorize the existing work on Bayesian optimization into nine main groups according to the motivation and focus of the proposed algorithms. For each category, we present the main advances in the construction of surrogate models and the adaptation of acquisition functions. Finally, we discuss open problems and suggest promising future research directions, especially with regard to heterogeneity, privacy preservation, and fairness in distributed and federated optimization systems.
Bayesian optimization (BO) has become a popular strategy for the global optimization of many expensive real-world functions. Contrary to the common belief that BO is suited for optimizing black-box functions, successfully deploying BO actually requires domain knowledge about the characteristics of those functions. Such domain knowledge often manifests in Gaussian process priors that specify initial beliefs about the functions. However, even with expert knowledge, choosing a prior is not easy. This is especially true for hyperparameter tuning problems on complex machine learning models, where the landscape of the tuning objective is often hard to understand. We seek an alternative practice for setting these functional priors. In particular, we consider the scenario where we have data from similar functions, allowing us to pre-train a tighter distribution a priori. Theoretically, we bound the regret of BO with pre-trained priors. To verify our approach in realistic model training setups, we collected a large multi-task hyperparameter tuning dataset by training tens of thousands of configurations of near state-of-the-art models on popular image and text datasets, as well as a protein sequence dataset. Our results show that, on average, our method is able to locate good hyperparameters more efficiently than the best competing methods.
Modern deep learning methods are very sensitive to many hyperparameters, and, due to the long training times of state-of-the-art models, vanilla Bayesian hyperparameter optimization is typically computationally infeasible. On the other hand, bandit-based configuration evaluation approaches based on random search lack guidance and do not converge to the best configurations as quickly. Here, we propose to combine the benefits of both Bayesian optimization and bandit-based methods, in order to achieve the best of both worlds: strong anytime performance and fast convergence to optimal configurations. We propose a new practical state-of-the-art hyperparameter optimization method, which consistently outperforms both Bayesian optimization and Hyperband on a wide range of problem types, including high-dimensional toy functions, support vector machines, feed-forward neural networks, Bayesian neural networks, deep reinforcement learning, and convolutional neural networks. Our method is robust and versatile, while at the same time being conceptually simple and easy to implement.
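For intuition, here is a sketch of the successive-halving subroutine that Hyperband and the proposed method build on. A random sampler stands in for the model-based configuration proposals, and train_and_eval is a hypothetical user-supplied function returning a validation loss for a configuration trained under a given budget.

    # Sketch of the successive-halving subroutine underlying Hyperband-style HPO.
    import random

    def sample_config():                  # a model-based sampler would go here
        return {"lr": 10 ** random.uniform(-4, -1),
                "batch_size": random.choice([32, 64, 128])}

    def successive_halving(train_and_eval, n=27, min_budget=1, eta=3):
        configs = [sample_config() for _ in range(n)]
        budget = min_budget
        while len(configs) > 1:
            losses = [train_and_eval(c, budget) for c in configs]
            ranked = [c for _, c in sorted(zip(losses, configs),
                                           key=lambda p: p[0])]
            configs = ranked[: max(1, len(configs) // eta)]  # keep top 1/eta
            budget *= eta                                    # survivors get more budget
        return configs[0]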
Hyperparameter optimization constitutes a large part of typical modern machine learning workflows. This is due to the fact that machine learning methods and corresponding preprocessing steps often only yield optimal performance when hyperparameters are properly tuned. In many applications, however, we are not only interested in optimizing ML pipelines solely for predictive accuracy; additional metrics or constraints must be considered when determining an optimal configuration, resulting in a multi-objective optimization problem. This is often neglected in practice, due to a lack of knowledge and readily available software implementations for multi-objective hyperparameter optimization. In this work, we introduce the reader to the basics of multi-objective hyperparameter optimization and motivate its usefulness in applied ML. Furthermore, we provide an extensive survey of existing optimization strategies from the domains of evolutionary algorithms and Bayesian optimization. We illustrate the utility of MOO in several specific ML applications, considering objectives such as operating conditions, prediction time, sparseness, fairness, interpretability, and robustness.
We propose SWA-Gaussian (SWAG), a simple, scalable, and general purpose approach for uncertainty representation and calibration in deep learning. Stochastic Weight Averaging (SWA), which computes the first moment of stochastic gradient descent (SGD) iterates with a modified learning rate schedule, has recently been shown to improve generalization in deep learning. With SWAG, we fit a Gaussian using the SWA solution as the first moment and a low rank plus diagonal covariance also derived from the SGD iterates, forming an approximate posterior distribution over neural network weights; we then sample from this Gaussian distribution to perform Bayesian model averaging. We empirically find that SWAG approximates the shape of the true posterior, in accordance with results describing the stationary distribution of SGD iterates. Moreover, we demonstrate that SWAG performs well on a wide variety of tasks, including out of sample detection, calibration, and transfer learning, in comparison to many popular alternatives including MC dropout, KFAC Laplace, SGLD, and temperature scaling.
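A minimal numpy sketch of the SWAG construction follows: collect flattened weight snapshots along the SGD trajectory, form the SWA mean, a diagonal variance, and a low-rank deviation matrix, then sample weights from the resulting Gaussian. The snapshot count K and the synthetic iterates are illustrative assumptions.

    # Sketch of SWAG's posterior construction from a window of SGD iterates,
    # operating on flattened weight vectors.
    import numpy as np

    def swag_fit(iterates, K=10):
        """iterates: list of flattened weight vectors collected along SGD."""
        W = np.stack(iterates[-K:])            # last K snapshots, shape (K, d)
        mean = W.mean(0)                       # SWA solution (first moment)
        diag = np.maximum(W.var(0), 1e-8)      # diagonal part of the covariance
        D = (W - mean).T                       # deviation matrix, shape (d, K)
        return mean, diag, D

    def swag_sample(mean, diag, D, rng):
        """Draw one weight sample: mean + diagonal part + low-rank part."""
        K = D.shape[1]
        z1 = rng.normal(size=mean.shape)
        z2 = rng.normal(size=K)
        return (mean
                + np.sqrt(diag) * z1 / np.sqrt(2)
                + D @ z2 / np.sqrt(2 * (K - 1)))

    rng = np.random.default_rng(0)
    iterates = [rng.normal(size=5) for _ in range(20)]  # stand-in for SGD snapshots
    theta = swag_sample(*swag_fit(iterates), rng)       # use in Bayesian model averaging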
Bayesian optimization (BO) is a popular paradigm for the global optimization of expensive black-box functions, but in many domains the function is not a complete black box. The data may have some known structure (e.g., symmetries), and/or the data-generation process may be a composite process that yields useful intermediate or auxiliary information in addition to the value of the optimization objective. However, the traditionally used surrogate models, such as Gaussian processes (GPs), scale poorly with dataset size and do not easily accommodate known structure. Instead, we use Bayesian neural networks, a class of scalable and flexible surrogate models with inductive biases, to extend BO to complex, structured problems with high dimensionality. We demonstrate BO on a number of realistic problems in physics and chemistry, including topology optimization of photonic crystal materials using convolutional neural networks, and chemical property optimization of molecules using graph neural networks. On these complex tasks, we show that neural networks often outperform GPs as surrogate models for BO in terms of both sampling efficiency and computational cost.
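As a simple stand-in for the Bayesian neural network surrogates described above, the sketch below uses a small ensemble of independently trained networks: the spread of their predictions supplies the uncertainty an acquisition rule needs. The sklearn models, the toy objective, and the lower-confidence-bound rule are assumptions for illustration, not the paper's surrogates.

    # A small-ensemble neural surrogate: disagreement across independently
    # trained nets provides the mean/uncertainty used by the acquisition.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    X = rng.uniform(-1, 1, size=(40, 2))
    y = np.sin(3 * X[:, 0]) * X[:, 1]                  # toy objective

    ensemble = [MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                             random_state=s).fit(X, y) for s in range(5)]

    def surrogate(Xs):
        preds = np.stack([m.predict(Xs) for m in ensemble])
        return preds.mean(0), preds.std(0)             # mean and epistemic spread

    cand = rng.uniform(-1, 1, size=(500, 2))
    mu, sigma = surrogate(cand)
    x_next = cand[np.argmin(mu - 2.0 * sigma)]         # lower-confidence-bound pick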
Bayesian optimization (BO) algorithms have shown remarkable success in applications involving expensive black-box functions. Traditionally, BO is set up as a sequential decision-making process that estimates the utility of query points via an acquisition function and a prior over functions, such as a Gaussian process. Recently, however, a reformulation of BO via density-ratio estimation (BORE) allows the acquisition function to be reinterpreted as a probabilistic binary classifier, removing the need for an explicit prior over functions and improving scalability. In this paper, we present a theoretical analysis of BORE's regret, together with algorithmic extensions with improved uncertainty estimation. We also show that BORE can be naturally extended to the batch optimization setting by recasting the problem as approximate Bayesian inference. The resulting algorithms come equipped with theoretical performance guarantees and are assessed against other batch baselines in a range of experiments.
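The density-ratio reformulation the abstract refers to can be sketched in a few lines: label the best gamma-fraction of observed values as positives, fit a probabilistic classifier, and treat its predicted class probability as the acquisition function. The classifier choice and toy objective below are illustrative.

    # Sketch of the BORE reformulation (minimization convention).
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)
    X = rng.uniform(0, 1, size=(60, 2))
    y = (X ** 2).sum(1) + 0.1 * rng.normal(size=60)   # toy objective

    gamma = 0.25
    tau = np.quantile(y, gamma)
    z = (y <= tau).astype(int)                 # 1 = among the best gamma-fraction

    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, z)

    cand = rng.uniform(0, 1, size=(1000, 2))
    acq = clf.predict_proba(cand)[:, 1]        # class-probability acquisition
    x_next = cand[np.argmax(acq)]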
Neural linear models (NLMs) are deep Bayesian models that produce predictive uncertainties by learning features from the data and then performing Bayesian linear regression over these features. Despite their popularity, few works have focused on methodically evaluating the predictive uncertainties of these models. In this work, we demonstrate that traditional training procedures for NLMs drastically underestimate uncertainty on out-of-distribution inputs, and that they therefore cannot be naively deployed in risk-sensitive applications. We identify the underlying reasons for this behavior and propose a novel training framework that captures useful predictive uncertainties for downstream tasks.
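For reference, the Bayesian linear regression that an NLM performs over learned features phi(x) has the standard closed form below (Bishop-style notation, with prior precision alpha and noise precision beta); this is background for the model class, not the paper's proposed training framework:

    % Bayesian linear regression over learned features \phi(x) \in \mathbb{R}^m,
    % with prior w \sim \mathcal{N}(0, \alpha^{-1} I) and noise precision \beta:
    S = \left(\alpha I + \beta\, \Phi^\top \Phi\right)^{-1}, \qquad
    \bar{m} = \beta\, S\, \Phi^\top y,
    \qquad
    p(y_* \mid x_*) = \mathcal{N}\!\left(\bar{m}^\top \phi(x_*),\;
    \beta^{-1} + \phi(x_*)^\top S\, \phi(x_*)\right).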
Most machine learning algorithms are configured by one or more hyperparameters that must be carefully chosen and that often considerably impact performance. To avoid a time-consuming and irreproducible manual process of trial and error to find well-performing hyperparameter configurations, various automatic hyperparameter optimization (HPO) methods can be employed, e.g., based on resampling error estimation for supervised machine learning. After introducing HPO from a general perspective, this paper reviews important HPO methods such as grid or random search, evolutionary algorithms, Bayesian optimization, Hyperband, and racing. It gives practical recommendations regarding important choices to be made when conducting HPO, including the HPO algorithms themselves, performance evaluation, how to combine HPO with ML pipelines, runtime improvements, and parallelization. This work is accompanied by an appendix that contains information on specific software packages in R and Python, as well as information and recommended hyperparameter search spaces for specific learning algorithms. We also provide notebooks that demonstrate concepts from this work as supplementary files.
Modern deep learning methods constitute incredibly powerful tools to tackle a myriad of challenging problems. However, since deep learning methods operate as black boxes, the uncertainty associated with their predictions is often challenging to quantify. Bayesian statistics offers a formalism to understand and quantify the uncertainty associated with deep neural network predictions. This tutorial provides an overview of the relevant literature and a complete toolset to design, implement, train, use, and evaluate Bayesian neural networks, i.e., stochastic artificial neural networks trained using Bayesian methods.
We study methods for estimating model uncertainty for neural networks (NNs) in regression. To isolate the effect of model uncertainty, we focus on a noiseless setting with scarce training data. We introduce five important desiderata regarding model uncertainty that any method should satisfy. However, we find that established benchmarks often fail to reliably capture some of these desiderata, even those that are required by Bayesian theory. To address this, we introduce a new method for capturing model uncertainty for NNs, which we call Neural Optimization-based Model Uncertainty (NOMU). The main idea of NOMU is to design a network architecture consisting of two connected sub-NNs, one for model prediction and one for model uncertainty, and to train it using a carefully designed loss function. Importantly, our design enforces that NOMU satisfies our five desiderata. Due to its modular architecture, NOMU can provide model uncertainty for any given (previously trained) NN if given access to its training data. We evaluate NOMU in various regression tasks and in noiseless Bayesian optimization (BO) with costly evaluations. In regression, NOMU performs at least as well as state-of-the-art methods. In BO, NOMU even outperforms all considered benchmarks.
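Below is an architecture-only sketch of the two-sub-network design, with a prediction head and a non-negative uncertainty head. In the paper the two sub-networks are connected (the uncertainty network also sees features of the prediction network) and are trained with a carefully designed loss enforcing the five desiderata; neither detail is reproduced here, so this is only a structural illustration.

    # Architecture-only sketch of a two-headed network: one head outputs the
    # model prediction, the other a non-negative model uncertainty. The NOMU
    # loss and the cross-connections between sub-networks are omitted.
    import torch
    import torch.nn as nn

    class TwoHeadNet(nn.Module):
        def __init__(self, d_in, width=64):
            super().__init__()
            self.mean_net = nn.Sequential(
                nn.Linear(d_in, width), nn.ReLU(), nn.Linear(width, 1))
            self.unc_net = nn.Sequential(
                nn.Linear(d_in, width), nn.ReLU(), nn.Linear(width, 1))

        def forward(self, x):
            f = self.mean_net(x)                           # model prediction
            r = nn.functional.softplus(self.unc_net(x))    # non-negative uncertainty
            return f, r

    model = TwoHeadNet(d_in=2)
    f, r = model(torch.randn(8, 2))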
Deep neural nets with a large number of parameters are very powerful machine learning systems. However, overfitting is a serious problem in such networks. Large networks are also slow to use, making it difficult to deal with overfitting by combining the predictions of many different large neural nets at test time. Dropout is a technique for addressing this problem. The key idea is to randomly drop units (along with their connections) from the neural network during training. This prevents units from co-adapting too much. During training, dropout samples from an exponential number of different "thinned" networks. At test time, it is easy to approximate the effect of averaging the predictions of all these thinned networks by simply using a single unthinned network that has smaller weights. This significantly reduces overfitting and gives major improvements over other regularization methods. We show that dropout improves the performance of neural networks on supervised learning tasks in vision, speech recognition, document classification and computational biology, obtaining state-of-the-art results on many benchmark data sets.
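A minimal PyTorch usage sketch: during training the dropout layer samples a thinned network per forward pass, and since PyTorch implements inverted dropout (activations are scaled by 1/(1-p) at training time), evaluation mode approximates the averaged prediction with no extra weight rescaling.

    import torch
    import torch.nn as nn

    model = nn.Sequential(
        nn.Linear(784, 512), nn.ReLU(),
        nn.Dropout(p=0.5),                 # drop each hidden unit with prob 0.5
        nn.Linear(512, 10),
    )

    x = torch.randn(4, 784)
    model.train()
    y_thinned = model(x)                   # stochastic: a sampled thinned network
    model.eval()
    y_avg = model(x)                       # deterministic approximate average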
Acquisition functions are a key component in Bayesian optimization (BO) and can often be written as expectations of utility functions under a surrogate model. However, to ensure that the acquisition function can be optimized, restrictions must be placed on the surrogate model and the utility function. To extend BO to a broader class of models and utilities, we propose likelihood-free BO (LFBO), an approach based on likelihood-free inference. LFBO directly models the acquisition function without having to perform separate inference with a probabilistic surrogate model. We show that computing the acquisition function in LFBO can be reduced to optimizing a weighted classification problem, where the weights correspond to the chosen utility. By choosing the utility function for expected improvement, LFBO outperforms various state-of-the-art black-box optimization methods on several real-world optimization problems. LFBO can also effectively leverage the composite structure of the objective function, further improving its regret.
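A simplified rendering of the weighted-classification reduction with the expected-improvement utility: positives are weighted by their improvement over the threshold, negatives get unit weight, and the classifier's probability serves as the acquisition. The specific classifier and toy problem below are assumptions for illustration.

    # Weighted-classification sketch of an EI-style utility (minimization).
    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier

    rng = np.random.default_rng(0)
    X = rng.uniform(0, 1, size=(60, 2))
    y = (X ** 2).sum(1) + 0.1 * rng.normal(size=60)

    tau = np.quantile(y, 0.25)
    z = (y <= tau).astype(int)                  # positives: improvements over tau
    w = np.where(z == 1, tau - y, 1.0)          # EI-style utility weights
    w = np.maximum(w, 1e-12)                    # avoid zero weight at the boundary

    clf = GradientBoostingClassifier().fit(X, z, sample_weight=w)
    cand = rng.uniform(0, 1, size=(1000, 2))
    x_next = cand[np.argmax(clf.predict_proba(cand)[:, 1])]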
Accurate uncertainty quantification is a major challenge in deep learning, as neural networks can make overconfident errors and assign high confidence predictions to out-of-distribution (OOD) inputs. The most popular approaches to estimate predictive uncertainty in deep learning are methods that combine predictions from multiple neural networks, such as Bayesian neural networks (BNNs) and deep ensembles. However, their practicality in real-time, industrial-scale applications is limited due to the high memory and computational cost. Furthermore, ensembles and BNNs do not necessarily fix all the issues with the underlying member networks. In this work, we study principled approaches to improve the uncertainty properties of a single network, based on a single, deterministic representation. By formalizing uncertainty quantification as a minimax learning problem, we first identify distance awareness, i.e., the model's ability to quantify the distance of a testing example from the training data, as a necessary condition for a DNN to achieve high-quality (i.e., minimax optimal) uncertainty estimation. We then propose Spectral-normalized Neural Gaussian Process (SNGP), a simple method that improves the distance-awareness ability of modern DNNs with two simple changes: (1) applying spectral normalization to hidden weights to enforce bi-Lipschitz smoothness in representations and (2) replacing the last output layer with a Gaussian process layer. On a suite of vision and language understanding benchmarks, SNGP outperforms other single-model approaches in prediction, calibration and out-of-domain detection. Furthermore, SNGP provides complementary benefits to popular techniques such as deep ensembles and data augmentation, making it a simple and scalable building block for probabilistic deep learning. Code is open-sourced at https://github.com/google/uncertainty-baselines
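The two changes SNGP makes can be sketched on a small PyTorch MLP, as below: spectral normalization on the hidden linear layers, and a random-Fourier-feature approximation of the GP output layer. The paper's Laplace-approximation covariance update for predictive variance, and its residual architecture, are omitted for brevity, so treat this as a structural sketch rather than the reference implementation.

    import math
    import torch
    import torch.nn as nn
    from torch.nn.utils import spectral_norm

    class GPHead(nn.Module):
        """Random-Fourier-feature approximation of a GP output layer."""
        def __init__(self, d_in, n_classes, n_rff=256):
            super().__init__()
            self.register_buffer("W", torch.randn(d_in, n_rff))   # frozen RFF weights
            self.register_buffer("b", 2 * math.pi * torch.rand(n_rff))
            self.out = nn.Linear(n_rff, n_classes)

        def forward(self, h):
            phi = math.sqrt(2.0 / self.W.shape[1]) * torch.cos(h @ self.W + self.b)
            return self.out(phi)

    model = nn.Sequential(
        spectral_norm(nn.Linear(32, 128)), nn.ReLU(),   # change (1): spectral-normalized hidden weights
        spectral_norm(nn.Linear(128, 128)), nn.ReLU(),
        GPHead(128, n_classes=10),                      # change (2): GP output layer
    )

    logits = model(torch.randn(4, 32))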
We consider the problem of calibration and uncertainty analysis for activity-based transportation simulators. Activity-based models (ABMs) rely on statistical models of individual travelers' behavior to predict higher-order travel patterns in metropolitan areas. Input parameters are typically estimated from travelers' surveys using maximum likelihood. We develop an approach that uses a Gaussian process emulator to calibrate these parameters using traffic flow data. Our approach extends traditional emulators to handle the high dimensionality and non-stationarity of transportation simulators. We introduce a deep-learning dimensionality-reduction model that is estimated jointly with the Gaussian process model to approximate the simulator. We demonstrate the methodology using several simulated examples, as well as by calibrating key parameters for Bloomington, Illinois.
The success of machine learning algorithms generally depends on data representation, and we hypothesize that this is because different representations can entangle and hide more or less the different explanatory factors of variation behind the data. Although specific domain knowledge can be used to help design representations, learning with generic priors can also be used, and the quest for AI is motivating the design of more powerful representation-learning algorithms implementing such priors. This paper reviews recent work in the area of unsupervised feature learning and deep learning, covering advances in probabilistic models, auto-encoders, manifold learning, and deep networks. This motivates longer-term unanswered questions about the appropriate objectives for learning good representations, for computing representations (i.e., inference), and the geometrical connections between representation learning, density estimation and manifold learning.
Bayesian optimization (BO) is widely used to optimize stochastic black-box functions. While most BO approaches focus on optimizing a conditional expectation, many applications require risk-averse strategies, and alternative criteria accounting for the tails of the distribution must be considered. In this paper, we propose new variational models for Bayesian quantile and expectile regression that are well suited to heteroscedastic noise settings. Our models consist of two latent Gaussian processes, accounting respectively for the conditional quantile (or expectile) and for the scale parameter of an asymmetric likelihood function. Furthermore, we propose two BO strategies based on max-value entropy search and Thompson sampling that are tailored to such models and that can accommodate a large number of points. In contrast to existing BO approaches for risk-averse optimization, our strategies can directly optimize the quantile or expectile without requiring replicated observations or a parametric form for the noise. As illustrated in the experimental section, the proposed methods clearly outperform the state of the art in the heteroscedastic, non-Gaussian case.
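For reference, quantile regression rests on the standard pinball loss and the associated asymmetric Laplace likelihood, sketched below in LaTeX; the paper's variational models place GPs on the quantile (or expectile) and on the scale parameter sigma of such a likelihood. This is standard background, not the paper's full model.

    % Pinball (quantile) loss at level \tau, the induced \tau-quantile, and
    % the associated asymmetric Laplace likelihood with scale \sigma:
    \rho_\tau(u) = u\,\bigl(\tau - \mathbb{1}\{u < 0\}\bigr), \qquad
    q_\tau = \arg\min_q \, \mathbb{E}\bigl[\rho_\tau(y - q)\bigr],
    \qquad
    p(y \mid q, \sigma) \propto
    \frac{1}{\sigma}\exp\!\Bigl(-\rho_\tau\!\Bigl(\tfrac{y - q}{\sigma}\Bigr)\Bigr).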