In the last decade, deep reinforcement learning (RL) has made tremendous progress. At the same time, state-of-the-art RL algorithms require a large computational budget in terms of training time to converge. Recent work has started to approach this problem through the lens of quantum computing, which promises theoretical speed-ups for several traditionally hard tasks. In this work, we examine a class of hybrid quantum-classical RL algorithms that we collectively refer to as variational quantum deep Q-networks (VQ-DQN). We show that VQ-DQN approaches are subject to instabilities that cause the learned policy to diverge, study the reproducibility of established results based on classical simulation, and perform systematic experiments to identify potential explanations for the observed instabilities. Furthermore, and in contrast to most existing work on quantum reinforcement learning, we execute the RL algorithms on an actual quantum processing unit (an IBM Quantum device) and investigate differences in behavior between simulated and physical quantum systems that suffer from implementation deficiencies. Our experiments show that, contrary to claims in the literature, it cannot be conclusively decided whether known quantum approaches, even if simulated without physical imperfections, provide an advantage over classical approaches. Finally, we provide a robust, general, and well-tested implementation of VQ-DQN as a reproducible testbed for future experiments.
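To make the VQ-DQN construction concrete, the following is a minimal sketch of a variational quantum circuit serving as a Q-function approximator, assuming PennyLane. The qubit count, angle encoding, and ansatz are illustrative choices, not the exact circuit studied in the paper.

```python
# A minimal variational Q-network sketch, assuming PennyLane.
# Encoding, ansatz, and sizes are illustrative, not the paper's circuit.
import pennylane as qml
from pennylane import numpy as np

n_qubits, n_actions, n_layers = 4, 2, 3
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def q_circuit(weights, state):
    # Angle-encode the classical state into single-qubit rotations.
    for i in range(n_qubits):
        qml.RX(state[i], wires=i)
    # Variational layers: trainable rotations plus an entangling CNOT ring.
    for layer in weights:
        for i in range(n_qubits):
            qml.Rot(*layer[i], wires=i)
        for i in range(n_qubits):
            qml.CNOT(wires=[i, (i + 1) % n_qubits])
    # One expectation value per action serves as that action's Q-value.
    return [qml.expval(qml.PauliZ(i)) for i in range(n_actions)]

weights = np.random.uniform(0, 2 * np.pi, (n_layers, n_qubits, 3),
                            requires_grad=True)
q_values = q_circuit(weights, np.array([0.1, -0.2, 0.05, 0.3]))
```

In a hybrid training loop, these expectation values take the place of the Q-network outputs, and the circuit weights are updated by a classical optimizer against the usual temporal-difference loss.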
Quantum computing promises significant improvements for solving hard computational tasks over classical computers. However, designing quantum circuits for practical use is not a trivial objective and requires expert-level knowledge. To aid this endeavor, machine-learning-based methods have been proposed to construct quantum circuit architectures. Previous works have demonstrated that classical deep reinforcement learning (DRL) algorithms can successfully construct quantum circuit architectures without encoded physics knowledge. However, these DRL-based works do not generalize to settings with changing device noise, and therefore require a large amount of training resources to keep the RL models up to date. With this in mind, we incorporate continual learning to enhance the performance of the algorithm. In this paper, we present the Probabilistic Policy Reuse with deep Q-learning (PPR-DQL) framework to tackle this circuit-design challenge. Through numerical simulations over various noise patterns, we demonstrate that the RL agent with PPR is able to find the quantum gate sequence that generates the two-qubit Bell state faster than an agent trained from scratch. The proposed framework is general and can be applied to other quantum gate synthesis or control problems, including the automatic calibration of quantum devices.
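A minimal sketch of the probabilistic policy reuse mechanism in plain Python; the reuse probability `psi`, its decay, and the Q-function interface are illustrative assumptions rather than values from the paper.

```python
# Probabilistic policy reuse (PPR) sketch: with probability psi, act
# greedily under a previously trained Q-function; otherwise act
# epsilon-greedily under the Q-function currently being trained.
import random

def ppr_action(state, q_new, policy_library, psi, epsilon, n_actions):
    if policy_library and random.random() < psi:
        q_old = random.choice(policy_library)  # sample a past policy
        return max(range(n_actions), key=lambda a: q_old(state, a))
    if random.random() < epsilon:              # standard exploration
        return random.randrange(n_actions)
    return max(range(n_actions), key=lambda a: q_new(state, a))
```

In a training loop one would decay `psi` each episode (e.g., `psi *= 0.99`) so the agent gradually shifts from reused policies to its own.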
In this work, we utilize quantum deep reinforcement learning as a method to learn navigation tasks for a simple, wheeled robot in three simulated environments of increasing complexity. We show similar performance of a parameterized quantum circuit trained with well-established deep reinforcement learning techniques in a hybrid quantum-classical setup compared to a classical baseline. To our knowledge, this is the first demonstration of quantum machine learning (QML) for robot behavior. We thus establish robotics as a viable field of study for QML algorithms, and henceforth quantum computing and quantum machine learning as potential technologies for future advancements in autonomous robotics. Beyond that, we discuss current limitations of the presented approach as well as future research directions in the field of quantum machine learning for autonomous robotics.
Deep Reinforcement Learning is emerging as a promising approach for the continuous control task of robotic arm movement. However, the challenges of learning robust and versatile control capabilities are still far from being resolved for real-world applications, mainly because of two common issues of this learning paradigm: the exploration strategy and the slow learning speed, sometimes known as "the curse of dimensionality". This work aims at exploring and assessing the advantages of the application of Quantum Computing to one of the state-of-the-art Reinforcement Learning techniques for continuous control - namely Soft Actor-Critic. Specifically, the performance of a Variational Quantum Soft Actor-Critic on the movement of a virtual robotic arm has been investigated by means of digital simulations of quantum circuits. A quantum advantage over the classical algorithm has been found in terms of a significant decrease in the amount of required parameters for satisfactory model training, paving the way for further promising developments.
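To ground the actor side of the algorithm, here is a minimal sketch of the Soft Actor-Critic policy update, assuming PyTorch. In the variational variant studied here, `policy_net` would be a parameterized quantum circuit; a classical network stands in as a placeholder, and the entropy weight `alpha` is an illustrative constant.

```python
# Soft Actor-Critic policy loss sketch, assuming PyTorch.
import torch

def sac_actor_loss(policy_net, q_net, states, alpha=0.2):
    """Minimize entropy-weighted log-probability minus Q, i.e. maximize
    expected Q plus policy entropy."""
    # The network outputs the mean and log-std of a Gaussian over actions.
    mean, log_std = policy_net(states).chunk(2, dim=-1)
    dist = torch.distributions.Normal(mean, log_std.exp())
    pre_tanh = dist.rsample()              # reparameterized sample
    actions = torch.tanh(pre_tanh)         # squash to bounded actions
    log_prob = dist.log_prob(pre_tanh).sum(-1)
    log_prob -= torch.log(1 - actions.pow(2) + 1e-6).sum(-1)  # tanh correction
    q_values = q_net(torch.cat([states, actions], dim=-1)).squeeze(-1)
    return (alpha * log_prob - q_values).mean()
```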
Quantum computing (QC) promises significant advantages on certain hard computational tasks over classical computers. However, current quantum hardware, also known as noisy intermediate-scale quantum (NISQ) devices, is still unable to carry out computations faithfully, mainly because of the lack of quantum error correction (QEC) capability. A significant amount of theoretical studies have provided various types of QEC codes; one of the notable topological codes is the surface code, and its features, such as the requirement of only nearest-neighboring two-qubit control gates and a large error threshold, make it a leading candidate for scalable quantum computation. Recent developments in machine learning (ML)-based techniques, especially reinforcement learning (RL) methods, have been applied to the decoding problem and have already made certain progress. Nevertheless, the device noise pattern may change over time, making trained decoder models ineffective. In this paper, we propose a continual reinforcement learning method to address these decoding challenges. Specifically, we implement the double deep Q-learning with probabilistic policy reuse (DDQN-PPR) model to learn surface code decoding strategies for quantum environments with varying noise patterns. Through numerical simulations, we show that the proposed DDQN-PPR model can significantly reduce the computational complexity. Moreover, increasing the number of trained policies can further improve the agent's performance. Our results open a way to build more capable RL agents which can leverage previously gained knowledge to tackle QEC challenges.
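To pin down the learning rule at the core of DDQN-PPR, here is a minimal sketch of the double deep Q-learning target, assuming PyTorch; network definitions and hyperparameters are illustrative.

```python
# Double DQN target sketch: the online network selects the next action,
# the target network evaluates it, which reduces overestimation bias.
import torch

def ddqn_target(q_online, q_target, rewards, next_states, dones, gamma=0.99):
    with torch.no_grad():
        best_actions = q_online(next_states).argmax(dim=1, keepdim=True)
        next_q = q_target(next_states).gather(1, best_actions).squeeze(1)
        return rewards + gamma * (1.0 - dones) * next_q
```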
In recent years, the dramatic progress in machine learning has begun to have a significant impact on many areas of science and technology. In this article, we explore how quantum technologies can benefit from this revolution. We show in illustrative examples how scientists over the past few years have started to use machine learning and, more broadly, methods of artificial intelligence to analyze quantum measurements, estimate the parameters of quantum devices, discover new quantum experimental setups, protocols, and feedback strategies, and generally improve various aspects of quantum computing, quantum communication, and quantum simulation. We highlight open challenges and future possibilities, and conclude with some speculative visions for the next decade.
With the advent of real-world quantum computing, the idea that parametrized quantum computations can be used as hypothesis families in quantum-classical machine learning systems is gaining increasing traction. Such hybrid systems have already shown the potential to tackle real-world tasks in supervised and generative learning, and recent works have established their provable advantages in special artificial tasks. Yet, in the case of reinforcement learning, which is arguably the most challenging setting and where learning boosts would be extremely valuable, no proposal has been successful in solving even standard benchmarking tasks, nor in showing a theoretical learning advantage over classical algorithms. In this work, we achieve both. We propose a hybrid quantum-classical reinforcement learning model using very few qubits, which we demonstrate can be effectively trained to solve several standard benchmarking environments. Moreover, we demonstrate, and formally prove, the ability of parametrized quantum circuits to solve certain learning tasks that are intractable for classical models, including current state-of-the-art deep neural networks, under the widely believed classical hardness of the discrete logarithm problem.
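A minimal sketch of a parametrized quantum policy of the softmax-over-expectations kind, assuming PennyLane; the ansatz, entangling pattern, and inverse-temperature scaling `beta` are illustrative assumptions, not the paper's exact construction.

```python
# Parametrized quantum policy sketch: action probabilities come from a
# softmax over Pauli-Z expectation values of a variational circuit.
import pennylane as qml
from pennylane import numpy as np

n_qubits, n_actions = 4, 2
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def expectations(weights, state):
    for i in range(n_qubits):
        qml.RY(state[i], wires=i)            # encode the state
    for layer in weights:
        for i in range(n_qubits):
            qml.RY(layer[i], wires=i)        # trainable rotations
        for i in range(n_qubits - 1):
            qml.CZ(wires=[i, i + 1])         # entangling layer
    return [qml.expval(qml.PauliZ(i)) for i in range(n_actions)]

def policy(weights, state, beta=1.0):
    logits = beta * np.array(expectations(weights, state))
    exp = np.exp(logits - logits.max())      # numerically stable softmax
    return exp / exp.sum()
```

Such a policy can be trained with standard policy-gradient methods (e.g., REINFORCE), with circuit gradients obtained via the parameter-shift rule.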
This paper presents a double deep Q-network algorithm for trading a single asset, namely the E-mini S&P 500 continuous futures contract. We use a proven setup as the foundation of our environment, together with multiple extensions. The features of our trading agent are progressively extended to include additional assets such as commodities, resulting in four models. We also address environmental conditions, including costs and crises. Our trading agent is first trained on a specific time period and then tested on new data and compared with a buy-and-hold strategy (the market). We analyze the differences between the various models and the in-sample/out-of-sample performance with respect to the environment. The experimental results show that the trading agent follows appropriate behavior. It can adjust its policy to different circumstances, for example making more extensive use of the neutral position when trading costs are present. Furthermore, the net asset value exceeds that of the benchmark, and the agent outperforms the market on the test set. We provide initial insights into the behavior of an agent in a financial domain based on the DDQN algorithm. The results of this study can be used for further development.
With the development of experimental quantum technology, quantum control has attracted increasing attention due to the realization of controllable artificial quantum systems. However, because quantum-mechanical systems are often too difficult to analytically deal with, heuristic strategies and numerical algorithms which search for proper control protocols are adopted, and deep learning, especially deep reinforcement learning (RL), is a promising generic candidate solution for the control problems. Although there have been a few successful applications of deep RL to quantum control problems, most of the existing RL algorithms suffer from instabilities and unsatisfactory reproducibility, and require a large amount of fine-tuning and a large computational budget, both of which limit their applicability. To resolve the issue of instabilities, in this dissertation, we investigate the non-convergence issue of Q-learning. Then, we investigate the weakness of existing convergent approaches that have been proposed, and we develop a new convergent Q-learning algorithm, which we call the convergent deep Q network (C-DQN) algorithm, as an alternative to the conventional deep Q network (DQN) algorithm. We prove the convergence of C-DQN and apply it to the Atari 2600 benchmark. We show that when DQN fails, C-DQN still learns successfully. Then, we apply the algorithm to the measurement-feedback cooling problems of a quantum quartic oscillator and a trapped quantum rigid body. We establish the physical models and analyse their properties, and we show that although both C-DQN and DQN can learn to cool the systems, C-DQN tends to behave more stably, and when DQN suffers from instabilities, C-DQN can achieve a better performance. As the performance of DQN can have a large variance and lack consistency, C-DQN can be a better choice for research on complicated control problems.
The rapid changes in the finance industry due to the increasing amount of data have revolutionized the techniques for data processing and data analysis and brought new theoretical and computational challenges. In contrast to classical stochastic control theory and other analytical approaches for solving financial decision-making problems that rely heavily on model assumptions, recent developments in reinforcement learning (RL) are able to make full use of the large amount of financial data with fewer model assumptions and to improve decision making in complex financial environments. This survey paper aims to review the recent developments in and the use of RL approaches in finance. We introduce Markov decision processes, which is the setting for many commonly used RL approaches. Various algorithms are then introduced, with a focus on value-based and policy-based methods that do not require any model assumptions. Connections are made with neural networks to extend the framework to encompass deep RL algorithms. Our survey concludes by discussing the applications of these RL algorithms in a variety of decision-making problems in finance, including optimal execution, portfolio optimization, option pricing and hedging, market making, smart order routing, and robo-advising.
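As a one-line reference for the value-based methods surveyed here, the Bellman optimality equation that Q-learning-style algorithms approximate is

$$Q^*(s,a) \;=\; \mathbb{E}\!\left[\, r(s,a) + \gamma \max_{a'} Q^*(s',a') \;\middle|\; s,a \,\right],$$

where $\gamma \in [0,1)$ is the discount factor and $s'$ the successor state.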
The optimal setting of several hyperparameters in machine learning algorithms is key to making the most of the available data. To this end, several methods have been proposed, such as evolutionary strategies, random search, Bayesian optimization, and heuristic rules of thumb. In reinforcement learning (RL), the information content of the data collected by the learning agent while interacting with its environment depends heavily on the setting of many hyperparameters. Users of RL algorithms therefore have to rely on search-based optimization methods, such as grid search or the Nelder-Mead simplex algorithm, which are very inefficient for most RL tasks, significantly slow down the learning curve, and leave to the user the burden of purposefully biasing the data collection. In this work, to make RL algorithms more user-independent, a new approach for the autonomous setting of hyperparameters using Bayesian optimization is proposed. Data from past episodes and different hyperparameter values are used at a meta-learning level by performing behavioral cloning, which helps to increase the effectiveness of the reinforcement-learning variant that maximizes the acquisition function. Moreover, by tightly integrating Bayesian optimization into the design of the reinforcement learning agent, the number of state transitions needed to converge to the optimal policy for a given task is also reduced. Computational experiments show promising results compared with other manual tuning and optimization-based approaches, highlighting the benefits of changing the algorithm's hyperparameters to increase the information content of the collected data.
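A minimal sketch of one Bayesian-optimization step for hyperparameter search, assuming scikit-learn and SciPy; the kernel, the candidate grid, and the expected-improvement trade-off `xi` are illustrative choices.

```python
# One Bayesian-optimization step: fit a Gaussian process to the
# hyperparameter settings tried so far, then pick the candidate that
# maximizes expected improvement (EI).
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def expected_improvement(candidates, gp, best_y, xi=0.01):
    mu, sigma = gp.predict(candidates, return_std=True)
    improvement = mu - best_y - xi
    z = improvement / np.maximum(sigma, 1e-9)
    return improvement * norm.cdf(z) + sigma * norm.pdf(z)

X = np.array([[0.1], [0.5], [0.9]])   # e.g., learning rates tried so far
y = np.array([12.0, 35.0, 20.0])      # e.g., mean episode return achieved

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5)).fit(X, y)
grid = np.linspace(0.0, 1.0, 101).reshape(-1, 1)
next_setting = grid[np.argmax(expected_improvement(grid, gp, y.max()))]
```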
Quantum machine learning (QML) has been recognized as one of the most promising applications of near-term quantum devices. However, the optimization of quantum machine learning models presents numerous challenges arising from hardware imperfections and from navigating an exponentially scaling Hilbert space. In this work, we evaluate the potential of contemporary methods in deep reinforcement learning to augment gradient-based optimization routines for variational quantum circuits. We find that reinforcement-learning-augmented optimizers consistently outperform gradient descent in noisy environments. All code and pretrained weights are available to replicate the results or deploy the models at https://github.com/lockwo/rl_qvc_opt.
Deep reinforcement learning (RL) has led to many recent and groundbreaking advances. However, these advances have often come at the cost of both increased scale in the underlying architectures being trained and increased complexity of the RL algorithms used to train them. These increases have in turn made it more difficult for researchers to rapidly prototype new ideas or reproduce published RL algorithms. To address these concerns, this work describes Acme, a framework for constructing novel RL algorithms that is specifically designed to enable agents built from simple, modular components that can be used across various scales of execution. While the primary goal of Acme is to provide a framework for algorithm development, a secondary goal is to provide simple reference implementations of important or state-of-the-art algorithms. These implementations serve both as a validation of our design decisions and as an important contribution to reproducibility in RL research. In this work we describe the major design decisions made within Acme and give further details on how its components can be used to implement various algorithms. Our experiments provide baselines for a number of common and state-of-the-art algorithms, and show how these algorithms can be scaled up to much larger and more complex environments. This highlights one of the primary advantages of Acme, namely that it can be used to implement large, distributed RL algorithms that run at very large scales while still maintaining the inherent readability of the implementation. This work presents a second version of the paper which coincides with an increase in modularity, additional emphasis on offline, imitation and learning-from-demonstrations algorithms, and various new agents implemented as part of Acme.
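A minimal sketch of the actor/learner decomposition that this modular design is built around. This is a hypothetical interface for illustration, not the actual Acme API.

```python
# Actor/learner split sketch: the actor interacts with the environment and
# writes transitions to replay; the learner consumes batches and updates
# the shared parameters. (Hypothetical interface, not the real Acme API.)
class Actor:
    def __init__(self, policy, replay):
        self.policy, self.replay = policy, replay

    def step(self, env, observation):
        action = self.policy(observation)
        next_obs, reward, done, _ = env.step(action)
        self.replay.add((observation, action, reward, next_obs, done))
        return next_obs, done

class Learner:
    def __init__(self, update_fn, replay):
        self.update_fn, self.replay = update_fn, replay

    def step(self):
        self.update_fn(self.replay.sample())

def environment_loop(env, actor, learner, num_episodes):
    for _ in range(num_episodes):
        obs, done = env.reset(), False
        while not done:
            obs, done = actor.step(env, obs)
            learner.step()
```

Because the actor and learner only share a replay buffer and parameters, the same components can run in one process or be distributed across many machines.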
In recent years, reinforcement learning (RL) has become increasingly successful in its application to science and the process of scientific discovery in general. However, while RL algorithms learn to solve increasingly complex problems, interpreting the solutions they provide becomes ever more challenging. In this work, we gain insights into an RL agent's learned behavior through a post-hoc analysis based on sequence mining and clustering. Specifically, frequent and compact subroutines, used by the agent to solve a given task, are distilled as gadgets and then grouped by various metrics. This process of gadget discovery develops in three stages: First, we use an RL agent to generate data, then, we employ a mining algorithm to extract gadgets and finally, the obtained gadgets are grouped by a density-based clustering algorithm. We demonstrate our method by applying it to two quantum-inspired RL environments. First, we consider simulated quantum optics experiments for the design of high-dimensional multipartite entangled states where the algorithm finds gadgets that correspond to modern interferometer setups. Second, we consider a circuit-based quantum computing environment where the algorithm discovers various gadgets for quantum information processing, such as quantum teleportation. This approach for analyzing the policy of a learned agent is agent and environment agnostic and can yield interesting insights into any agent's policy.
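As a schematic of the three-stage pipeline just described, here is a minimal sketch assuming scikit-learn; the n-gram miner and the toy feature map stand in for the paper's actual mining algorithm and gadget metrics.

```python
# Gadget discovery sketch: (1) an RL agent produces action sequences,
# (2) frequent compact subsequences are mined as gadgets, and
# (3) gadgets are grouped by density-based clustering (DBSCAN).
from collections import Counter
import numpy as np
from sklearn.cluster import DBSCAN

def mine_gadgets(episodes, length=3, min_count=5):
    counts = Counter()
    for actions in episodes:                      # stage-1 output
        for i in range(len(actions) - length + 1):
            counts[tuple(actions[i:i + length])] += 1
    return [g for g, c in counts.items() if c >= min_count]

def cluster_gadgets(gadgets, eps=0.5, min_samples=2):
    # Toy feature map; the paper groups gadgets by various metrics.
    features = np.array([[sum(g), len(set(g))] for g in gadgets])
    return DBSCAN(eps=eps, min_samples=min_samples).fit_predict(features)
```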
FIG. 1. Schematic diagram of a Variational Quantum Algorithm (VQA). The inputs to a VQA are: a cost function C(θ), with θ a set of parameters that encodes the solution to the problem; an ansatz whose parameters are trained to minimize the cost; and (possibly) a set of training data {ρ_k} used during the optimization. Here, the cost can often be expressed in the form in Eq. (3), for some set of functions {f_k}. Also, the ansatz is shown as a parameterized quantum circuit (on the left), which is analogous to a neural network (also shown schematically on the right). At each iteration of the loop one uses a quantum computer to efficiently estimate the cost (or its gradients). This information is fed into a classical computer that leverages the power of optimizers to navigate the cost landscape C(θ) and solve the optimization problem in Eq. (1). Once a termination condition is met, the VQA outputs an estimate of the solution to the problem. The form of the output depends on the precise task at hand. The red box indicates some of the most common types of outputs.
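The referenced Eq. (3) is not reproduced alongside this caption; in the VQA literature the cost typically takes the form

$$C(\boldsymbol{\theta}) \;=\; \sum_k f_k\!\left(\mathrm{Tr}\!\left[O_k\, U(\boldsymbol{\theta})\, \rho_k\, U^\dagger(\boldsymbol{\theta})\right]\right),$$

where $U(\boldsymbol{\theta})$ is the ansatz unitary and $\{O_k\}$ a set of observables, and the optimization problem of Eq. (1) is $\boldsymbol{\theta}^* = \arg\min_{\boldsymbol{\theta}} C(\boldsymbol{\theta})$.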
Efficient exploration remains a major challenge for reinforcement learning (RL). Common dithering strategies for exploration, such as ε-greedy, do not carry out temporally-extended (or deep) exploration; this can lead to exponentially larger data requirements. However, most algorithms for statistically efficient RL are not computationally tractable in complex environments. Randomized value functions offer a promising approach to efficient exploration with generalization, but existing algorithms are not compatible with nonlinearly parameterized value functions. As a first step towards addressing such contexts we develop bootstrapped DQN. We demonstrate that bootstrapped DQN can combine deep exploration with deep neural networks for exponentially faster learning than any dithering strategy. In the Arcade Learning Environment bootstrapped DQN substantially improves learning speed and cumulative performance across most games.
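A minimal sketch of the mechanics of bootstrapped DQN, assuming PyTorch: K Q-heads share a torso, one head is sampled per episode to drive temporally-extended exploration, and Bernoulli bootstrap masks determine which heads train on each transition. Sizes and the mask probability are illustrative.

```python
# Bootstrapped DQN sketch: K Q-value heads over a shared torso.
import torch
import torch.nn as nn

class BootstrappedDQN(nn.Module):
    def __init__(self, obs_dim, n_actions, n_heads=10):
        super().__init__()
        self.torso = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU())
        self.heads = nn.ModuleList(
            nn.Linear(64, n_actions) for _ in range(n_heads))

    def forward(self, obs, head):
        return self.heads[head](self.torso(obs))

# Per episode: sample one head and act greedily w.r.t. it for the whole
# episode (this is what yields deep, temporally-extended exploration).
# Per transition: store a bootstrap mask deciding which heads train on it:
# mask = torch.bernoulli(torch.full((n_heads,), 0.5))
```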
In this work, we present and evaluate a new reinforcement learning method, COMPact Experience Replay (COMPER), which uses temporal-difference learning with predicted target values based on recurrence over sets of similar transitions, together with a new approach to experience replay based on two transition memories. Our objective is to reduce the amount of experience required to train an agent to high cumulative rewards in the long run. Its relevance to reinforcement learning lies in the small number of observations it needs to achieve results similar to those obtained by related methods in the literature, which in general require millions of video frames to train on Atari 2600 games. We report detailed results from training trials of only 100,000 frames and about 25,000 iterations on eight challenging games of the Arcade Learning Environment (ALE). We also present results for a DQN agent with the same experimental protocol on the same games as a baseline. To verify that a good policy can be approximated from a smaller number of observations, we also compare our results with those obtained from millions of frames reported on the ALE benchmark.
We identify and study the phenomenon of policy churn, that is, the rapid change of the greedy policy in value-based reinforcement learning. Policy churn operates at a surprisingly rapid pace, changing the greedy action in a large fraction of states within a small number of learning updates (in typical deep RL settings such as DQN on Atari). We characterise the phenomenon empirically, verifying that it is not limited to specific algorithms or environment properties. A number of ablations help whittle down the plausible explanations of why churn occurs to just a handful, all related to deep learning. Finally, we hypothesise that policy churn is a beneficial but overlooked form of implicit exploration, which casts $\epsilon$-greedy exploration in a fresh light, namely that $\epsilon$-noise plays a much smaller role than expected.
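A minimal sketch of how the churn measure can be computed, assuming PyTorch: snapshot the Q-network before and after a learning update and compare greedy actions on a held-out batch of states.

```python
# Policy churn sketch: fraction of states whose greedy action changed
# across one (or a few) learning updates of the Q-network.
import torch

@torch.no_grad()
def policy_churn(q_before, q_after, states):
    a_before = q_before(states).argmax(dim=1)
    a_after = q_after(states).argmax(dim=1)
    return (a_before != a_after).float().mean().item()
```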
Adequately assigning credit to actions for future outcomes based on their contributions is a long-standing open challenge in Reinforcement Learning. The assumptions of the most commonly used credit assignment method are disadvantageous in tasks where the effects of decisions are not immediately evident. Furthermore, this method can only evaluate actions that have been selected by the agent, making it highly inefficient. Still, no alternative methods have been widely adopted in the field. Hindsight Credit Assignment is a promising, but still unexplored candidate, which aims to solve the problems of both long-term and counterfactual credit assignment. In this thesis, we empirically investigate Hindsight Credit Assignment to identify its main benefits, and key points to improve. Then, we apply it to factored state representations, and in particular to state representations based on the causal structure of the environment. In this setting, we propose a variant of Hindsight Credit Assignment that effectively exploits a given causal structure. We show that our modification greatly decreases the workload of Hindsight Credit Assignment, making it more efficient and enabling it to outperform the baseline credit assignment method on various tasks. This opens the way to other methods based on given or learned causal structures.
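For reference, hindsight credit assignment evaluates actions through a hindsight distribution $h$ that conditions the action probability on an observed outcome; in its return-conditional form the action value can be written, roughly, as

$$Q^\pi(s,a) \;=\; \mathbb{E}\!\left[\, \frac{h(a \mid s, Z)}{\pi(a \mid s)}\, Z \;\middle|\; s \,\right],$$

where $Z$ is the return obtained from $s$, so an action receives credit according to how much more (or less) likely it becomes once the outcome is known.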
In this thesis, we consider two simple but typical control problems and apply deep reinforcement learning to them, i.e., to cool and control a particle which is subject to continuous position measurement in a one-dimensional quadratic potential or in a quartic potential. We compare the performance of reinforcement learning control and conventional control strategies on the two problems, and show that the reinforcement learning achieves a performance comparable to the optimal control for the quadratic case, and outperforms conventional control strategies for the quartic case for which the optimal control strategy is unknown. To our knowledge, this is the first time deep reinforcement learning is applied to quantum control problems in continuous real space. Our research demonstrates that deep reinforcement learning can effectively control a stochastic quantum system in real space as a measurement-feedback closed-loop controller. It also shows the ability of AI to discover new control strategies and properties of quantum systems that are not well understood; by learning from the AI we can gain insights into these problems, which opens up a new regime for scientific research.