We provide a unifying approximate dynamic programming framework that applies to a broad variety of problems involving sequential estimation. We consider first the construction of surrogate cost functions for the purposes of optimization, and we focus on the special case of Bayesian optimization, using the rollout algorithm and some of its variations. We then discuss the more general case of sequential estimation of a random vector using optimal measurement selection, and its application to problems of stochastic and adaptive control. We finally consider related search and sequential decoding problems, and a rollout algorithm for the approximate solution of the Wordle and Mastermind puzzles, recently developed in the paper [BBB22].
In this paper we address the solution of the popular Wordle puzzle, using new reinforcement learning methods, which apply more generally to adaptive control of dynamic systems and to classes of Partially Observable Markov Decision Process (POMDP) problems. These methods are based on approximation in value space and the rollout approach, admit a straightforward implementation, and provide improved performance over various heuristic approaches. For the Wordle puzzle, they yield on-line solution strategies that are very close to optimal at relatively modest computational cost. Our methods are viable for more complex versions of Wordle and related search problems, for which an optimal strategy would be impossible to compute. They are also applicable to a wide range of adaptive sequential decision problems that involve an unknown or frequently changing environment whose parameters are estimated on-line.
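To make the rollout idea concrete, below is a minimal sketch on a toy Wordle-like game. The ten-word list, the simplified feedback rule (no duplicate-letter bookkeeping), and the first-candidate base heuristic are illustrative assumptions, not the construction of [BBB22]; the point is only that one-step lookahead over a heuristic base policy (rollout) reduces the average number of guesses.

```python
WORDS = ["crane", "slate", "trace", "crate", "grace",
         "brace", "place", "plane", "plant", "slant"]  # toy list; the real puzzle has ~2300 words

def feedback(guess, hidden):
    """Simplified Wordle feedback: 2 = green, 1 = yellow, 0 = gray
    (no duplicate-letter bookkeeping, unlike the real game)."""
    return tuple(2 if hidden[i] == c else 1 if c in hidden else 0
                 for i, c in enumerate(guess))

def consistent(candidates, guess, fb):
    """Candidate words that would have produced the same feedback."""
    return [w for w in candidates if feedback(guess, w) == fb]

def base_policy(candidates):
    """Heuristic base policy: just guess the first remaining candidate."""
    return candidates[0]

def guesses_to_solve(candidates, hidden, policy):
    """Simulate a policy until the hidden word is found; return guess count."""
    n = 0
    while True:
        g = policy(candidates)
        n += 1
        if g == hidden:
            return n
        candidates = consistent(candidates, g, feedback(g, hidden))

def rollout_guess(candidates):
    """One-step lookahead (rollout): for each first guess, average over
    hidden words the cost of finishing the game with the base policy."""
    def expected_cost(g):
        total = 0
        for hidden in candidates:          # uniform prior over candidates
            if g == hidden:
                total += 1
            else:
                rest = consistent(candidates, g, feedback(g, hidden))
                total += 1 + guesses_to_solve(rest, hidden, base_policy)
        return total / len(candidates)
    return min(candidates, key=expected_cost)

# The rollout policy is guaranteed to do at least as well as its base policy.
for policy in (base_policy, rollout_guess):
    avg = sum(guesses_to_solve(WORDS, h, policy) for h in WORDS) / len(WORDS)
    print(policy.__name__, round(avg, 3))
```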
Informative measurements are the most efficient way to gain information about an unknown state. We give a first-principles derivation of a general-purpose dynamic programming algorithm that returns a sequence of informative measurements by sequentially maximizing the entropy of possible measurement outcomes. This algorithm can be used by an autonomous agent or robot to decide where best to measure next, planning a path corresponding to an optimal sequence of informative measurements. The algorithm is applicable to states and controls that are either continuous or discrete, and to agent dynamics that are either stochastic or deterministic, including Markov decision processes. Recent results from approximate dynamic programming and reinforcement learning, including on-line approximations such as rollout and Monte Carlo tree search, allow an agent or robot to solve the measurement task in real time. The resulting near-optimal solutions include non-myopic paths and measurement sequences that can generally, and sometimes substantially, outperform commonly used greedy heuristics such as maximizing the entropy of each measurement outcome. This is illustrated for a global search problem, where on-line planning with an extended local search is found to reduce the number of measurements needed in the search.
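For contrast with the paper's dynamic programming approach, here is a sketch of the greedy heuristic the abstract mentions: at each step, take the measurement whose predicted outcome distribution has maximum entropy, then update the belief by Bayes' rule. The sensor models and all numbers are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

# Unknown state: one of 8 locations, with a uniform prior belief.
n_states = 8
belief = np.ones(n_states) / n_states

def make_sensor(window):
    """Illustrative binary detector: likelihood matrix L[o, s] = P(o | s),
    more informative for states inside its window of locations."""
    L = np.full((2, n_states), [[0.9], [0.1]])   # P(o | s) outside the window
    L[:, window] = [[0.2], [0.8]]                # P(o | s) inside the window
    return L

sensors = [make_sensor(range(i, i + 3)) for i in range(0, 6, 2)]

def outcome_entropy(L, belief):
    """Entropy of the predicted outcome distribution P(o) = sum_s L[o, s] b(s)."""
    return entropy(L @ belief)

# Greedy (myopic) rule: maximize the entropy of each measurement outcome.
true_state = 5
for step in range(3):
    k = max(range(len(sensors)), key=lambda i: outcome_entropy(sensors[i], belief))
    L = sensors[k]
    o = rng.choice(2, p=L[:, true_state])        # simulate the outcome
    belief = L[o] * belief                       # Bayes update
    belief /= belief.sum()
    print(f"step {step}: sensor {k}, outcome {o}, belief entropy {entropy(belief):.3f}")
```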
The rapid changes in the finance industry due to the increasing amount of data have revolutionized the techniques of data processing and data analysis and brought new theoretical and computational challenges. In contrast to classical stochastic control theory and other analytical approaches for solving financial decision-making problems that rely heavily on model assumptions, new developments in reinforcement learning (RL) are able to make full use of the large amount of financial data with fewer model assumptions and to improve decision making in complex financial environments. This survey paper aims to review the recent developments and use of RL approaches in finance. We give an introduction to Markov decision processes, which is the setting for many commonly used RL approaches. Various algorithms are then introduced, with a focus on value-based and policy-based methods that do not require any model assumptions. Connections are made with neural networks to extend the framework to encompass deep RL algorithms. Our survey concludes by discussing the application of these RL algorithms in a variety of decision-making problems in finance, including optimal execution, portfolio optimization, option pricing and hedging, market making, smart order routing, and robo-advising.
Recent literature has established that neural networks can represent good policies across a range of stochastic dynamic models in supply chain and logistics. We propose a new algorithm that incorporates variance reduction techniques to overcome limitations of the algorithms typically used in the literature to learn such neural network policies. For the classical lost sales inventory model, the algorithm learns neural network policies that are superior to those learned using model-free algorithms, while consistently outperforming the best heuristic benchmarks by an order of magnitude. The algorithm is an interesting candidate for other stochastic dynamic problems in supply chain and logistics, because the ideas in its development are generic.
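As a generic illustration of why variance reduction matters when learning such policies, the sketch below runs plain REINFORCE with a baseline control variate on a toy lost-sales newsvendor problem. This is a stand-in for the variance-reduction idea, not the paper's algorithm, and the cost parameters and demand model are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy lost-sales newsvendor: pick an order-up-to level S; demand ~ Poisson(2);
# illustrative holding and lost-sales costs.
HOLD, LOST = 1.0, 4.0
LEVELS = np.arange(8)

def cost(S):
    d = rng.poisson(2.0)
    return HOLD * max(S - d, 0) + LOST * max(d - S, 0)

theta = np.zeros(len(LEVELS))     # softmax policy over order-up-to levels

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

lr, batch = 0.05, 64
for _ in range(300):
    probs = softmax(theta)
    acts = rng.choice(len(LEVELS), size=batch, p=probs)
    costs = np.array([cost(LEVELS[a]) for a in acts])
    baseline = costs.mean()       # control variate: cuts gradient variance
    grad = np.zeros_like(theta)
    for a, c in zip(acts, costs):
        g = -probs.copy()
        g[a] += 1.0               # gradient of log pi(a)
        grad += (c - baseline) * g
    theta -= lr * grad / batch    # descend the expected cost

print("learned order-up-to level:", LEVELS[softmax(theta).argmax()])
```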
This paper presents a tutorial introduction to the use of variational methods for inference and learning in graphical models (Bayesian networks and Markov random fields). We present a number of examples of graphical models, including the QMR-DT database, the sigmoid belief network, the Boltzmann machine, and several variants of hidden Markov models, in which it is infeasible to run exact inference algorithms. We then introduce variational methods, which exploit laws of large numbers to transform the original graphical model into a simplified graphical model in which inference is efficient. Inference in the simplified model provides bounds on probabilities of interest in the original model. We describe a general framework for generating variational transformations based on convex duality. Finally we return to the examples and demonstrate how variational algorithms can be formulated in each case.
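As a concrete instance of such a transformation (the standard bound, not specific to any one of the models above), Jensen's inequality turns an intractable log-marginal over hidden variables H given evidence E into a tractable lower bound:

```latex
\ln p(E) \;=\; \ln \sum_{H} p(H, E)
\;\ge\; \sum_{H} q(H)\, \ln \frac{p(H, E)}{q(H)},
```

with equality when the variational distribution q(H) equals the exact posterior p(H | E); the convex-duality framework described in the paper generates bounds of this kind systematically.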
We consider some classical optimization problems in path planning and network transport, and we introduce new auction-based algorithms for their optimal and suboptimal solution. These algorithms are based on mathematical ideas related to competitive bidding by persons for objects and the attendant market equilibrium, which underlie auction processes. However, the starting point of our algorithms is different, namely weighted and unweighted path construction in directed graphs, rather than assignment of persons to objects. The new algorithms have several potential advantages over existing methods: they are empirically faster in some important contexts, such as max-flow, they are well suited for on-line replanning, and they can be adapted to distributed asynchronous operation. Moreover, they allow arbitrary initial prices, without complementary slackness restrictions, and thus are well suited for taking advantage of reinforcement learning methods that use off-line training with data, as well as on-line training during real-time operation. The new algorithms may also find use in reinforcement learning contexts involving approximation, such as multistep lookahead and tree search schemes, and/or rollout algorithms.
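For background, here is a sketch of the classical auction algorithm for the n-by-n assignment problem, which supplies the bidding and price-raising mechanism that the paper adapts to path construction. This is the classical algorithm, not the new path-construction variants, and the value matrix is invented.

```python
import numpy as np

def auction_assignment(value, eps=0.01):
    """Classical auction algorithm: each unassigned person bids for its
    best object, raising that object's price by (best - second best + eps).
    With eps > 0 this terminates with an assignment within n*eps of optimal."""
    n = value.shape[0]
    prices = np.zeros(n)
    owner = [-1] * n                   # owner[j]: person currently holding object j
    assigned = [-1] * n                # assigned[i]: object held by person i
    unassigned = list(range(n))
    while unassigned:
        i = unassigned.pop()
        gains = value[i] - prices      # net value of each object to person i
        j = int(np.argmax(gains))
        best = gains[j]
        gains[j] = -np.inf
        second = gains.max()
        prices[j] += best - second + eps   # the bid raises object j's price
        if owner[j] != -1:             # previous owner is outbid
            assigned[owner[j]] = -1
            unassigned.append(owner[j])
        owner[j], assigned[i] = i, j
    return assigned, prices

values = np.array([[10., 5., 8.],
                   [7., 9., 6.],
                   [8., 7., 4.]])
assignment, prices = auction_assignment(values)
print(assignment, prices)
```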
The ability to plan ahead efficiently is crucial for both living organisms and artificial systems. Model-based planning and prospection are widely studied in both cognitive neuroscience and artificial intelligence (AI), but from different perspectives and with different desiderata (biological realism versus scalability) that are difficult to reconcile. Here, we introduce a novel method to plan in large POMDPs, Active Tree Search (AcT), which combines the normative character and biological realism of a leading planning theory in neuroscience (active inference) with the scalability of tree search methods in AI. This unification is beneficial for both approaches. On the one hand, using tree search enables the biologically grounded, first-principles method of active inference to be applied to large-scale problems. On the other hand, active inference provides a principled solution to the exploration-exploitation dilemma, which is often addressed heuristically in tree search methods. Our simulations show that AcT successfully navigates binary trees that are challenging for sampling-based methods, problems that require adaptive exploration, and the large POMDP problem "RockSample", in which AcT approximates state-of-the-art POMDP solutions. Furthermore, we illustrate how AcT can be used to simulate the neurophysiological responses (e.g., in the hippocampus and prefrontal cortex) of humans and other animals that solve large planning problems. These numerical analyses show that Active Tree Search is a principled realization of neuroscientific and AI planning theories, offering both biological realism and scalability.
Bayesian optimization is a popular framework for the optimization of black-box functions. Multifidelity methods can accelerate Bayesian optimization by exploiting low-fidelity representations of an expensive objective function. Popular multifidelity Bayesian strategies rely on sampling policies that account for the immediate reward obtained by evaluating the objective function at a specific input, precluding the more informative gains that might be obtained by looking further ahead. This paper proposes a non-myopic multifidelity Bayesian framework that grasps the long-term reward from future steps of the optimization. Our computational strategy comes with a two-step lookahead multifidelity acquisition function that maximizes the cumulative reward, measuring the improvement in the solution over two steps ahead. We demonstrate that the proposed algorithm outperforms a standard multifidelity Bayesian framework on popular benchmark optimization problems.
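Schematically, and suppressing the fidelity-level choice that the multifidelity version adds at each step, a two-step lookahead acquisition has the nested form

```latex
\alpha_{2}(x) \;=\; \mathbb{E}_{y \sim p(\cdot \mid x, \mathcal{D})}\!\left[
  \max(y - y^{\star},\, 0)
  \;+\; \max_{x'} \, \mathbb{E}_{y' \sim p(\cdot \mid x', \mathcal{D} \cup \{(x, y)\})}\!\big[ \max(y' - y^{\star}_{1},\, 0) \big]
\right],
```

where y⋆ is the incumbent best observation, y⋆₁ = max(y⋆, y) is the incumbent after the first fantasized step, and the outer expectation is over the fantasized first observation (generic notation, not necessarily the paper's).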
Reinforcement learning (RL) gained considerable attention by creating decision-making agents that maximize rewards received from fully observable environments. However, many real-world problems are partially or noisily observable by nature, where agents do not receive the true and complete state of the environment. Such problems are formulated as partially observable Markov decision processes (POMDPs). Some studies applied RL to POMDPs by recalling previous decisions and observations or inferring the true state of the environment from received observations. Nevertheless, aggregating observations and decisions over time is impractical for environments with high-dimensional continuous state and action spaces. Moreover, so-called inference-based RL approaches require a large number of samples to perform well, since agents eschew uncertainty in the inferred state when making decisions. Active inference is a framework that is naturally formulated in POMDPs and directs agents to select decisions by minimising expected free energy (EFE). This supplies the reward-maximising (exploitative) behaviour of RL with an information-seeking (exploratory) behaviour. Despite this exploratory behaviour of active inference, its usage is limited to discrete state and action spaces due to the computational difficulty of the EFE. We propose a unified principle for joint information-seeking and reward maximization that clarifies a theoretical connection between active inference and RL, unifies active inference and RL, and overcomes their aforementioned limitations. Our findings are supported by strong theoretical analysis. The proposed framework's superior exploration property is also validated by experimental results on partially observable tasks with high-dimensional continuous state and action spaces. Moreover, the results show that our model solves reward-free problems, making task reward design optional.
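For reference, a common decomposition of the expected free energy of a policy π (written in generic discrete-state notation, not necessarily the paper's) is

```latex
G(\pi) \;=\;
\underbrace{D_{\mathrm{KL}}\!\big[\, q(o \mid \pi) \,\big\|\, p(o) \,\big]}_{\text{risk: deviation from preferred outcomes}}
\;+\;
\underbrace{\mathbb{E}_{q(s \mid \pi)}\big[\, \mathcal{H}[\, p(o \mid s) \,] \,\big]}_{\text{ambiguity: expected observation uncertainty}},
```

so that minimising G(π) jointly drives outcomes toward preferences (exploitation) and resolves observation uncertainty (exploration).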
Monte Carlo Tree Search (MCTS) is a powerful approach to designing game-playing bots or solving sequential decision problems. The method relies on intelligent tree search that balances exploration and exploitation. MCTS performs random sampling in the form of simulations and stores statistics of actions to make more educated choices in each subsequent iteration. The method has become a state-of-the-art technique for combinatorial games; however, in more complex games (such as those with a high branching factor or real-time ones), as well as in various practical domains (such as transportation, scheduling, or security), an efficient MCTS application often requires problem-dependent modifications or integration with other techniques. Such domain-specific modifications and hybrid approaches are the main focus of this survey. The last major MCTS survey was published in 2012; contributions that have appeared since its release are of particular interest.
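The selection rule at the heart of vanilla MCTS (UCT) fits in a few lines; the bandit-style demo below is purely illustrative.

```python
import math
import random

class Node:
    def __init__(self):
        self.children = []
        self.visits = 0
        self.total_value = 0.0

def uct_select(node, c=1.4):
    """Vanilla MCTS selection (UCT): pick the child maximizing mean value
    plus an exploration bonus that shrinks as the child is visited more."""
    return max(node.children,
               key=lambda ch: ch.total_value / ch.visits
                              + c * math.sqrt(math.log(node.visits) / ch.visits))

# Tiny demo: a root with three children whose simulated returns differ.
root = Node()
root.children = [Node() for _ in range(3)]
for _ in range(1000):
    fresh = [ch for ch in root.children if ch.visits == 0]
    ch = fresh[0] if fresh else uct_select(root)   # visit each child once first
    reward = random.gauss(root.children.index(ch) * 0.1, 1.0)  # arm i has mean 0.1*i
    ch.visits += 1
    ch.total_value += reward
    root.visits += 1
print([ch.visits for ch in root.children])   # most visits concentrate on the best child
```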
Non-cooperative and cooperative games with a very large number of players have many applications, but they generally remain intractable as the number of players increases. Introduced by Lasry and Lions, and by Huang, Caines, and Malhamé, mean field games (MFGs) rely on a mean-field approximation to allow the number of players to grow to infinity. Traditional methods for solving these games generally rely on solving partial or stochastic differential equations with full knowledge of the model. Recently, reinforcement learning (RL) has appeared promising for solving complex problems. By combining MFGs and RL, we hope to solve games at a very large scale, both in terms of population size and environment complexity. In this survey, we review the quickly growing recent literature on RL methods to learn Nash equilibria in MFGs. We first identify the most common settings (static, stationary, and evolutive). We then present a general framework for classical iterative methods (based on best-response computation or policy evaluation) to solve MFGs in an exact way. Building on these algorithms and the connection with Markov decision processes, we explain how RL can be used to learn MFG solutions in a model-free way. Finally, we present numerical illustrations on benchmark problems, and conclude with some perspectives.
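In generic notation (BR for the best response to a population distribution μ, Φ for the mean field induced by a policy; the survey's notation may differ), the classical fixed-point iterations it describes take the form

```latex
\pi^{k+1} \;=\; \mathrm{BR}(\mu^{k}) \;\in\; \arg\max_{\pi}\, J(\pi;\, \mu^{k}),
\qquad
\mu^{k+1} \;=\; \Phi(\pi^{k+1}),
```

and a Nash equilibrium is a fixed point of this map, i.e., μ⋆ = Φ(BR(μ⋆)).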
This ongoing work aims to provide a unified introduction to statistical learning, building up slowly from classical models like the GMM and HMM to modern neural networks like the VAE and diffusion models. There are many internet resources today that explain this or that new machine-learning algorithm in isolation, but they do not (and cannot, in so brief a space) connect these algorithms with each other, or with the classical literature on statistical models from which the modern algorithms emerged. Also conspicuously lacking is a single notational system which, although unobjectionable to those already familiar with the material (like the authors of these posts), poses a significant barrier to entry for newcomers. Likewise, I have aimed to assimilate the various models, wherever possible, into a single framework for inference and learning, showing how (and why) one model can be changed into another with minimal alteration (some of these derivations are novel, others are from the literature). Some background is of course necessary. I assume the reader is familiar with basic multivariable calculus, probability and statistics, and linear algebra. The goal of this book is certainly not completeness, but rather a more-or-less straight-line path from the basics to the extremely powerful new models of the last decade. The goal, then, is to complement rather than replace comprehensive texts such as Bishop's \emph{Pattern Recognition and Machine Learning}, which is now over fifteen years old.
In this work we show how stochastic and risk-sensitive optimal control problems can be treated using the expectation-maximization algorithm. We show how this treatment materializes as two separate iterative procedures, each generating a unique but closely related sequence of density functions. We motivate interpreting these densities as beliefs, ergo as probabilistic proxies for the deterministic optimal policy. More formally, two fixed-point iteration schemes are derived whose fixed points coincide with the deterministic optimal policy and which represent instances of the expectation-maximization approach. We point out that our results are closely related to the control-as-inference paradigm. Control as inference here refers to the collection of approaches that aim to recast optimal control as an instance of probabilistic inference. Although that paradigm has led to the development of several powerful reinforcement learning algorithms, the underlying problem statement is typically introduced by teleological arguments. We argue that the present results show that the earlier control-as-inference frameworks in fact isolate a single step from either of the proposed iterative procedures. In any case, the present treatment provides them with a deontological argument of validity. By exposing the underlying technical mechanisms, we aim to contribute to the general acceptance of control as inference as a framework superseding the current optimal control paradigm. To motivate its general relevance, we further discuss parallels with path-integral control and other research areas, before sketching directions for future algorithmic development.
We explore a new model of bandit experiments where a potentially nonstationary sequence of contexts influences arms' performance. Context-unaware algorithms risk confounding, while those that perform correct inference face information delays. Our main insight is that an algorithm we call deconfounded Thompson sampling strikes a delicate balance between adaptivity and robustness. Its adaptivity leads to optimal efficiency in easy stationary instances, but it displays surprising resilience in hard nonstationary ones that cause other adaptive algorithms to fail.
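For orientation, here is standard Thompson sampling on a stationary Bernoulli bandit; the paper's deconfounded variant additionally models the shared nonstationary context, which this baseline sketch omits. The arm means and horizon are invented.

```python
import random

def thompson_bernoulli(true_means, horizon=2000, seed=0):
    """Standard Thompson sampling with Beta(1, 1) priors on each arm:
    sample a mean from each posterior, play the argmax, update counts."""
    rng = random.Random(seed)
    k = len(true_means)
    wins = [1] * k     # Beta posterior: successes + 1
    losses = [1] * k   # Beta posterior: failures + 1
    pulls = [0] * k
    for _ in range(horizon):
        samples = [rng.betavariate(wins[a], losses[a]) for a in range(k)]
        a = samples.index(max(samples))
        reward = rng.random() < true_means[a]
        wins[a] += reward
        losses[a] += 1 - reward
        pulls[a] += 1
    return pulls

print(thompson_bernoulli([0.3, 0.5, 0.6]))   # pulls concentrate on the best arm
```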
Bayesian optimization (BO) is a sample-efficient approach to optimizing expensive-to-evaluate black-box functions. Most BO methods ignore how evaluation costs may vary over the optimization domain. However, these costs can be highly heterogeneous and are often unknown in advance. This occurs in many practical settings, such as hyperparameter tuning of machine learning algorithms or physics-based simulation optimization. Moreover, those existing methods that do acknowledge cost heterogeneity do not naturally accommodate a budget constraint on the total evaluation cost. This combination of unknown costs and a budget constraint introduces a new dimension to the exploration-exploitation trade-off, where learning about the cost incurs the cost itself. Existing methods do not reason about the various trade-offs of this problem in a principled way, often leading to poor performance. We formalize this claim by proving that expected improvement and expected improvement per unit of cost, arguably the two most widely used acquisition functions in practice, can be arbitrarily inferior with respect to the optimal non-myopic policy. To overcome the shortcomings of existing approaches, we propose the budgeted multi-step expected improvement, a non-myopic acquisition function that generalizes classical expected improvement to the setting of heterogeneous and unknown evaluation costs. Finally, we show that our acquisition function outperforms existing methods in a variety of synthetic and real problems.
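The two myopic acquisition functions that the paper proves can be arbitrarily inferior are easy to state. A sketch with invented posterior values shows how they can disagree: one candidate is promising but expensive, the other mediocre but cheap.

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, best):
    """Closed-form EI for a Gaussian posterior at a candidate point
    (maximization convention)."""
    z = (mu - best) / sigma
    return sigma * (z * norm.cdf(z) + norm.pdf(z))

mu    = np.array([1.2, 0.9])   # posterior means (invented)
sigma = np.array([0.3, 0.4])   # posterior standard deviations (invented)
cost  = np.array([10.0, 1.0])  # estimated evaluation costs (invented)
best  = 1.0                    # incumbent value

ei = expected_improvement(mu, sigma, best)
print("EI:         ", ei)          # ranks the expensive candidate first
print("EI per cost:", ei / cost)   # ranks the cheap candidate first
```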
Uncertainty is prevalent in engineering design, statistical learning, and decision making broadly. Due to inherent risk-averseness and ambiguity about assumptions, it is common to address uncertainty by formulating and solving conservative optimization models expressed using measures of risk and related concepts. We survey the rapid development of risk measures over the last quarter century. From their beginnings in financial engineering, we recount their spread to nearly all areas of engineering and applied mathematics. Solidly rooted in convex analysis, risk measures furnish a general framework for handling uncertainty with significant computational and theoretical advantages. We describe the key facts, list several concrete algorithms, and provide an extensive list of references for further reading. The survey recalls connections with utility theory and distributionally robust optimization, points to emerging application areas such as fair machine learning, and defines measures of reliability.
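As one canonical example from this literature (conventions vary; this follows the loss-oriented one), the conditional value-at-risk, or superquantile, of a loss X at level α admits the Rockafellar-Uryasev minimization formula

```latex
\mathrm{CVaR}_{\alpha}(X) \;=\; \min_{c \in \mathbb{R}} \;\Big\{\, c + \tfrac{1}{1-\alpha}\, \mathbb{E}\big[\max(X - c,\, 0)\big] \,\Big\},
```

whose convexity in c is a key reason risk-averse optimization models remain computationally tractable.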
We investigate partially observed Markov decision processes (POMDPs) with cost functions regularized by entropy terms describing state, observation, and control uncertainty. Standard POMDP techniques are shown to offer bounded-error solutions to these entropy-regularized POMDPs, with exact solutions when the regularization involves the joint entropy of the state, observation, and control trajectories. Our joint-entropy result is particularly surprising, since it constitutes a novel, tractable formulation of active state estimation.
In this paper we develop a theoretical analysis of the performance of sampling-based fitted value iteration (FVI) to solve infinite state-space, discounted-reward Markovian decision processes (MDPs) under the assumption that a generative model of the environment is available. Our main results come in the form of finite-time bounds on the performance of two versions of sampling-based FVI. The convergence rate results obtained allow us to show that both versions of FVI are well behaving in the sense that by using a sufficiently large number of samples for a large class of MDPs, arbitrary good performance can be achieved with high probability. An important feature of our proof technique is that it permits the study of weighted L^p-norm performance bounds. As a result, our technique applies to a large class of function-approximation methods (e.g., neural networks, adaptive regression trees, kernel machines, locally weighted learning), and our bounds scale well with the effective horizon of the MDP. The bounds show a dependence on the stochastic stability properties of the MDP: they scale with the discounted-average concentrability of the future-state distributions. They also depend on a new measure of the approximation power of the function space, the inherent Bellman residual, which reflects how well the function space is "aligned" with the dynamics and rewards of the MDP. The conditions of the main result, as well as the concepts introduced in the analysis, are extensively discussed and compared to previous theoretical results. Numerical experiments are used to substantiate the theoretical findings.
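A minimal sketch of sampling-based FVI with a generative model, using radial-basis features and least squares as a stand-in for the richer function-approximation classes the bounds cover; the one-dimensional MDP is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-d MDP with a generative model: state in [0, 1]; two actions drift
# the state left/right with noise; reward favors staying near 0.8.
def step(s, a):
    s2 = np.clip(s + (0.1 if a == 1 else -0.1)
                 + 0.05 * rng.standard_normal(s.shape), 0, 1)
    return s2, -np.abs(s2 - 0.8)

def features(s):
    """Radial-basis features; a simple member of the function classes
    (neural nets, trees, kernels, ...) covered by the paper's bounds."""
    centers = np.linspace(0, 1, 10)
    return np.exp(-((s[:, None] - centers) ** 2) / 0.02)

gamma, n_samples, n_next = 0.95, 500, 10
w = np.zeros(10)

for _ in range(50):
    s = rng.uniform(0, 1, n_samples)
    # Monte Carlo estimate of the Bellman backup (T V)(s): for each action,
    # average over n_next sampled next states, then maximize over actions.
    q = np.empty((2, n_samples))
    for a in range(2):
        vals = np.zeros(n_samples)
        for _ in range(n_next):
            s2, r = step(s, a)
            vals += r + gamma * features(s2) @ w
        q[a] = vals / n_next
    targets = q.max(axis=0)
    # Fitted step: least-squares regression of the targets onto the features.
    w, *_ = np.linalg.lstsq(features(s), targets, rcond=None)

print("fitted values at a few states:", features(np.array([0.1, 0.5, 0.8])) @ w)
```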
Scientists continue to develop increasingly complex mechanistic models to reflect their knowledge more realistically. Statistical inference using these models can be highly challenging, since the corresponding likelihood function is often intractable, and model simulation may be computationally burdensome or infeasible. Fortunately, in many of these situations, it is possible to adopt a surrogate model or approximate likelihood function. It may be convenient to base Bayesian inference directly on the surrogate, but this can result in bias and poor uncertainty quantification. In this paper we propose a new method for adjusting approximate posterior samples to reduce bias and produce more accurate uncertainty quantification. We do this by optimising a transform of the approximate posterior that minimises a scoring rule. Our approach requires only a (fixed) small number of complex model simulations and is numerically stable. We demonstrate good performance of the new method on several examples of increasing complexity.
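For context, a standard strictly proper scoring rule for sample-based distributions is the energy score (whether this particular rule is the one optimised in the paper is an assumption here):

```latex
\mathrm{ES}(P, y) \;=\; \mathbb{E}_{X \sim P}\,\lVert X - y \rVert \;-\; \tfrac{1}{2}\, \mathbb{E}_{X, X' \sim P}\,\lVert X - X' \rVert,
```

whose expectation is minimised when the samples are drawn from the distribution generating y; this is what makes scoring-rule minimisation a sensible criterion for optimising a transform of approximate posterior samples.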