As Artificial and Robotic Systems are increasingly deployed and relied upon for real-world applications, it is important that they exhibit the ability to continually learn and adapt in dynamically-changing environments, becoming Lifelong Learning Machines. Continual/lifelong learning (LL) involves minimizing catastrophic forgetting of old tasks while maximizing a model's capability to learn new tasks. This paper addresses the challenging lifelong reinforcement learning (L2RL) setting. Pushing the state-of-the-art forward in L2RL and making L2RL useful for practical applications requires more than developing individual L2RL algorithms; it requires making progress at the systems-level, especially research into the non-trivial problem of how to integrate multiple L2RL algorithms into a common framework. In this paper, we introduce the Lifelong Reinforcement Learning Components Framework (L2RLCF), which standardizes L2RL systems and assimilates different continual learning components (each addressing different aspects of the lifelong learning problem) into a unified system. As an instantiation of L2RLCF, we develop a standard API allowing easy integration of novel lifelong learning components. We describe a case study that demonstrates how multiple independently-developed LL components can be integrated into a single realized system. We also introduce an evaluation environment in order to measure the effect of combining various system components. Our evaluation environment employs different LL scenarios (sequences of tasks) consisting of Starcraft-2 minigames and allows for the fair, comprehensive, and quantitative comparison of different combinations of components within a challenging common evaluation environment.
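The abstract mentions a standard API through which independently-developed lifelong-learning components plug into a common system. A minimal sketch of what such a component interface might look like is given below; the class and method names (`LLComponent`, `on_task_start`, `observe_transition`, `adapt_policy`) are hypothetical illustrations under assumed design, not the actual L2RLCF API.

```python
from abc import ABC, abstractmethod
from typing import Any, Dict, List


class LLComponent(ABC):
    """Hypothetical interface for a pluggable lifelong-learning component.

    Each component (e.g., a replay buffer, a regularizer, a task detector)
    addresses one aspect of the lifelong-learning problem and hooks into
    a shared agent loop through these callbacks.
    """

    @abstractmethod
    def on_task_start(self, task_id: str) -> None:
        """Called when the scenario switches to a new task/minigame."""

    @abstractmethod
    def observe_transition(self, transition: Dict[str, Any]) -> None:
        """Receive an (obs, action, reward, next_obs, done) record from the loop."""

    @abstractmethod
    def adapt_policy(self, policy: Any) -> Any:
        """Return a (possibly modified) policy, e.g., after consolidation."""


class LLSystem:
    """Composes several components and runs them through a common agent loop."""

    def __init__(self, components: List[LLComponent]):
        self.components = components

    def on_task_start(self, task_id: str) -> None:
        for c in self.components:
            c.on_task_start(task_id)

    def step(self, policy: Any, transition: Dict[str, Any]) -> Any:
        for c in self.components:
            c.observe_transition(transition)
            policy = c.adapt_policy(policy)
        return policy
```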
Language models demonstrate both quantitative improvements and new qualitative capabilities with increasing scale. Despite their potentially transformative impact, these new capabilities are as yet poorly characterized. In order to inform future research, prepare for disruptive new model capabilities, and ameliorate socially harmful effects, it is vital that we understand the present and near-future capabilities and limitations of language models. To address this challenge, we introduce the Beyond the Imitation Game benchmark (BIG-bench). BIG-bench currently consists of 204 tasks, contributed by 442 authors across 132 institutions. Task topics are diverse, drawing problems from linguistics, childhood development, math, common-sense reasoning, biology, physics, social bias, software development, and beyond. BIG-bench focuses on tasks that are believed to be beyond the capabilities of current language models. We evaluate the behavior of OpenAI's GPT models, Google-internal dense transformer architectures, and Switch-style sparse transformers on BIG-bench, across model sizes spanning millions to billions of parameters. In addition, a team of human expert raters performed all tasks in order to provide a strong baseline. Findings include: model performance and calibration both improve with scale, but are poor in absolute terms (and when compared with rater performance); performance is remarkably similar across model classes, though with benefits from sparsity; tasks that improve gradually and predictably commonly involve a large knowledge or memorization component, whereas tasks that exhibit "breakthrough" behavior at a critical scale often involve multiple steps or components, or brittle metrics; and social bias typically increases with scale in settings with ambiguous context, but this can be improved with prompting.
While current vision algorithms excel at many challenging tasks, it is unclear how well they understand the physical dynamics of real-world environments. Here we introduce Physion, a dataset and benchmark for rigorously evaluating the ability to predict how physical scenes will evolve over time. Our dataset features realistic simulations of a wide range of physical phenomena, including rigid- and soft-body collisions, stable multi-object configurations, rolling, sliding, and projectile motion, and thus provides a more comprehensive challenge than previous benchmarks. We use Physion to benchmark a suite of models that vary in their architectures, learning objectives, input-output structures, and training data. In parallel, we obtain precise measurements of human prediction behavior on the same scenarios, allowing us to directly evaluate how well any model can approximate human behavior. We find that vision algorithms that learn object-centric representations generally outperform those that do not, yet still fall well short of human performance. On the other hand, neural networks with direct access to physical state information perform substantially better and make predictions that are more similar to those made by humans. These results suggest that extracting physical representations of scenes is the main bottleneck to achieving human-level and human-like physical understanding in vision algorithms. We have publicly released all data and code to facilitate benchmarking additional models with Physion in a fully reproducible manner, enabling systematic evaluation of progress towards vision algorithms that understand physical environments as robustly as people do.
The world currently offers an abundance of data in multiple domains, from which we can learn reinforcement learning (RL) policies without further interaction with the environment. RL agents can learn offline from such data, but deploying them while they are still learning might be dangerous in domains where safety is critical. Therefore, it is essential to find a way to estimate how a newly-learned agent will perform if deployed in the target environment before actually deploying it, and without the risk of overestimating its true performance. To achieve this, we introduce a framework for safe evaluation of offline learning using approximate high-confidence off-policy evaluation (HCOPE) to estimate the performance of offline policies during learning. In our setting, we assume a source of data, which we split into a train-set, to learn an offline policy, and a test-set, to estimate a lower bound on the offline policy using off-policy evaluation with bootstrapping. A lower-bound estimate tells us how well a newly-learned target policy would perform before it is deployed in the real environment, and therefore allows us to decide when to deploy our learned policy.
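The described pipeline (learn an offline policy on the train-set, then bound its value on the test-set via off-policy evaluation with bootstrapping) can be sketched roughly as follows. This is an illustrative per-episode importance-sampling estimator with a percentile-bootstrap lower bound, under the assumption that behavior-policy action probabilities are logged; it is not the exact HCOPE estimator used in the paper.

```python
import numpy as np

def is_return(episode, target_policy, gamma=0.99):
    """Per-episode (full-trajectory) importance-sampled return.

    episode: list of (state, action, reward, behavior_prob) tuples.
    target_policy(state, action) -> probability under the newly-learned policy.
    """
    rho, g = 1.0, 0.0
    for t, (s, a, r, b_prob) in enumerate(episode):
        rho *= target_policy(s, a) / b_prob     # cumulative importance weight
        g += (gamma ** t) * r                   # discounted return
    return rho * g


def bootstrap_lower_bound(test_episodes, target_policy, delta=0.05, n_boot=2000, rng=None):
    """Approximate (1 - delta)-confidence lower bound on the target policy's value."""
    rng = rng or np.random.default_rng(0)
    returns = np.array([is_return(ep, target_policy) for ep in test_episodes])
    boot_means = [
        rng.choice(returns, size=len(returns), replace=True).mean()
        for _ in range(n_boot)
    ]
    return np.percentile(boot_means, 100 * delta)

# Deployment rule sketch: deploy the new policy only if the lower bound
# exceeds the value of the currently deployed (safe) policy.
```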
In this work, we demonstrate the offline FPGA realization of both recurrent and feedforward neural network (NN)-based equalizers for nonlinearity compensation in coherent optical transmission systems. First, we present a realization pipeline showing the conversion of the models from Python libraries to the FPGA chip synthesis and implementation. Then, we review the main alternatives for the hardware implementation of nonlinear activation functions. The main results are divided into three parts: a performance comparison, an analysis of how activation functions are implemented, and a report on the complexity of the hardware. The performance in Q-factor is presented for the cases of bidirectional long-short-term memory coupled with convolutional NN (biLSTM + CNN) equalizer, CNN equalizer, and standard 1-StpS digital back-propagation (DBP) for the simulation and experiment propagation of a single channel dual-polarization (SC-DP) 16QAM at 34 GBd along 17x70km of LEAF. The biLSTM+CNN equalizer provides a similar result to DBP and a 1.7 dB Q-factor gain compared with the chromatic dispersion compensation baseline in the experimental dataset. After that, we assess the Q-factor and the impact of hardware utilization when approximating the activation functions of NN using Taylor series, piecewise linear, and look-up table (LUT) approximations. We also show how to mitigate the approximation errors with extra training and provide some insights into possible gradient problems in the LUT approximation. Finally, to evaluate the complexity of hardware implementation to achieve 400G throughput, fixed-point NN-based equalizers with approximated activation functions are developed and implemented in an FPGA.
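As a rough illustration of the LUT approach to activation functions discussed above, the sketch below builds a fixed-size lookup table for tanh with nearest-entry indexing; the table size, clipped input range, and floating-point arithmetic are illustrative assumptions, not the fixed-point values used in the paper's FPGA implementation.

```python
import numpy as np

class TanhLUT:
    """Lookup-table approximation of tanh over a clipped input range."""

    def __init__(self, n_entries=256, x_min=-4.0, x_max=4.0):
        self.x_min, self.x_max = x_min, x_max
        self.step = (x_max - x_min) / (n_entries - 1)
        grid = np.linspace(x_min, x_max, n_entries)
        self.table = np.tanh(grid)          # precomputed once; would live in BRAM on an FPGA

    def __call__(self, x):
        x = np.clip(x, self.x_min, self.x_max)
        idx = np.round((x - self.x_min) / self.step).astype(int)
        return self.table[idx]

lut = TanhLUT()
x = np.linspace(-6, 6, 5)
print(np.abs(lut(x) - np.tanh(x)))  # approximation error; shrinks as n_entries grows
```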
The NASA Astrophysics Data System (ADS) is an essential tool for researchers that allows them to explore the astronomy and astrophysics scientific literature, but it has yet to exploit recent advances in natural language processing. At ADASS 2021, we introduced astroBERT, a machine learning language model tailored to the text used in astronomy papers in ADS. In this work we: - announce the first public release of the astroBERT language model; - show how astroBERT improves over existing public language models on astrophysics specific tasks; - and detail how ADS plans to harness the unique structure of scientific papers, the citation graph and citation context, to further improve astroBERT.
Learning to collaborate is critical in multi-agent reinforcement learning (MARL). Previous works promote collaboration by maximizing the correlation of agents' behaviors, which is typically characterized by mutual information (MI) in different forms. However, we reveal that sub-optimal collaborative behaviors also emerge with strong correlations, and that simply maximizing the MI can hinder learning towards better collaboration. To address this issue, we propose a novel MARL framework, called Progressive Mutual Information Collaboration (PMIC), for more effective MI-driven collaboration. PMIC uses a new collaboration criterion measured by the MI between global states and joint actions. Based on this criterion, the key idea of PMIC is to maximize the MI associated with superior collaborative behaviors and to minimize the MI associated with inferior ones. The two MI objectives play complementary roles, facilitating better collaboration while avoiding falling into sub-optimal behaviors. Experiments on a wide range of MARL benchmarks show the superior performance of PMIC compared with other algorithms.
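The key idea (maximize the state-action MI on superior trajectories, minimize it on inferior ones) can be sketched schematically as an auxiliary loss added to the usual MARL objective. The batch construction, MI estimator, and weighting coefficient below are illustrative placeholders rather than the actual PMIC implementation.

```python
def pmic_auxiliary_loss(mi_estimator, superior_batch, inferior_batch, beta=0.1):
    """Schematic dual-MI objective in the spirit of PMIC.

    mi_estimator(states, joint_actions) should return a differentiable lower
    bound on I(global state; joint action) for the given batch (for example,
    an InfoNCE-style critic). superior_batch / inferior_batch hold
    (global_state, joint_action) pairs drawn from high-return and low-return
    trajectories respectively.
    """
    mi_superior = mi_estimator(*superior_batch)   # MI to push up (good collaboration)
    mi_inferior = mi_estimator(*inferior_batch)   # MI to push down (sub-optimal collaboration)
    # Loss to minimize alongside the usual MARL objective.
    return -mi_superior + beta * mi_inferior
```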
The interpretability of neural networks and their underlying theoretical behavior remain an open field of study, even after the great success of their practical applications, particularly with the emergence of deep learning. In this work, NN2Poly is proposed: a theoretical approach for obtaining polynomials that provide an alternative representation of an already trained deep neural network. This extends a previous idea proposed in arXiv:2102.03865, which was limited to single-hidden-layer neural networks, to arbitrarily deep feed-forward neural networks in both regression and classification tasks. The approach works by applying a Taylor expansion to the activation function at each layer and then using several combinatorial properties that allow the coefficients of the desired polynomials to be identified. The main computational limitations encountered when implementing this theoretical approach are discussed, and an example of the constraints on the neural network weights needed for NN2Poly to work is presented. Finally, simulations are presented, concluding that NN2Poly can obtain a representation of a given neural network with low error between the predictions produced by the network and by the polynomial.
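To make the Taylor-expansion step concrete, the sketch below expands a tanh activation around zero and composes it with a single scalar-input unit to produce polynomial coefficients in the input; this illustrates only the first step and omits the combinatorial machinery NN2Poly uses to propagate coefficients through deeper, multivariate layers.

```python
import math
import numpy as np
from numpy.polynomial import polynomial as P

def activation_taylor_coeffs(order):
    """Taylor coefficients of tanh around 0 up to the given order (odd terms only)."""
    # tanh(z) = z - z^3/3 + 2 z^5/15 - 17 z^7/315 + ...
    coeffs = np.zeros(order + 1)
    known = {1: 1.0, 3: -1.0 / 3.0, 5: 2.0 / 15.0, 7: -17.0 / 315.0}
    for k, c in known.items():
        if k <= order:
            coeffs[k] = c
    return coeffs

def unit_as_polynomial(w, b, order=5):
    """Polynomial (in a single scalar input x) approximating tanh(w*x + b)."""
    sigma = activation_taylor_coeffs(order)   # Taylor coefficients in z
    inner = np.array([b, w])                  # z = b + w*x, as a polynomial in x
    poly = np.zeros(1)
    for k, c in enumerate(sigma):
        if c != 0.0:
            poly = P.polyadd(poly, c * P.polypow(inner, k))
    return poly                               # coefficients [a0, a1, a2, ...]

coeffs = unit_as_polynomial(w=0.8, b=0.1)
x = 0.3
print(P.polyval(x, coeffs), math.tanh(0.8 * x + 0.1))  # close for small pre-activations
```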
Learning representations for pixel-based control has recently received significant attention in reinforcement learning. A wide range of methods have been proposed to enable efficient learning, achieving sample complexities similar to those in the full-state setting. However, moving beyond carefully curated pixel datasets (centered crops, appropriate lighting, clear backgrounds, and so on) remains challenging. In this paper, we adopt a harder setting that incorporates background distractors as a first step towards addressing this challenge. We propose a simple baseline approach that can learn meaningful representations with no metric-based learning, no data augmentation, no world-model learning, and no contrastive learning. We then analyze when and why previously proposed methods are likely to fail, or to reduce to the same performance as the baseline, in this harder setting, and why we should think carefully before extending such methods beyond well-curated environments. Our results show that a finer categorization of benchmarks, based on characteristics such as reward density, the planning horizon of the problem, and the presence of task-irrelevant components, is crucial for evaluating algorithms. Based on these observations, we propose different metrics to consider when evaluating algorithms on benchmark tasks. We hope this motivates researchers to rethink representation learning when investigating how best to apply RL to real-world tasks.
Multi-agent reinforcement learning (MARL) has made significant progress over the last decade, but many challenges, such as high sample complexity and slow convergence to stable policies, still need to be overcome before widespread deployment is possible. In practice, however, many real-world environments already deploy sub-optimal or heuristic approaches for generating policies. An interesting question is how best to use such approaches as advisors to help improve reinforcement learning in multi-agent domains. In this paper, we provide a principled framework for incorporating action recommendations from online sub-optimal advisors in multi-agent settings. We describe the problem of ADvising Multiple Intelligent Reinforcement Agents (ADMIRAL) in nonrestrictive general-sum stochastic game environments and present two novel Q-learning-based algorithms: ADMIRAL - Decision Making (ADMIRAL-DM) and ADMIRAL - Advisor Evaluation (ADMIRAL-AE), which allow us to improve learning by appropriately incorporating advice from an advisor (ADMIRAL-DM) and to evaluate the effectiveness of an advisor (ADMIRAL-AE). We analyze the algorithms theoretically and provide fixed-point guarantees regarding their learning in general-sum stochastic games. Furthermore, extensive experiments illustrate that these algorithms can be used in a variety of environments, perform favorably compared with other related baselines, scale to large state-action spaces, and are robust to poor advice from advisors.
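A highly simplified, single-agent-style sketch of the advisor-driven Q-learning idea appears below: with some probability the agent executes the advisor's recommended action instead of its own greedy choice, and the Q-update is the standard one. The tabular setting, the decaying advisor probability, and the environment interface are illustrative assumptions and do not reproduce the game-theoretic ADMIRAL-DM update for general-sum stochastic games.

```python
import random
from collections import defaultdict

def advisor_q_learning(env, advisor, episodes=500, alpha=0.1, gamma=0.99,
                       epsilon=0.1, advisor_prob=0.5, advisor_decay=0.995):
    """Tabular Q-learning that sometimes follows a (possibly sub-optimal) advisor.

    advisor(state) -> recommended action; advisor_prob decays over time so the
    agent gradually relies on its own learned Q-values. env is assumed to expose
    reset(), step(action) -> (next_state, reward, done), and sample_action().
    """
    q = defaultdict(lambda: defaultdict(float))
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            if random.random() < advisor_prob:
                action = advisor(state)                       # follow the advisor
            elif random.random() < epsilon or not q[state]:
                action = env.sample_action()                  # explore
            else:
                action = max(q[state], key=q[state].get)      # exploit learned values
            next_state, reward, done = env.step(action)
            best_next = max(q[next_state].values(), default=0.0)
            q[state][action] += alpha * (reward + gamma * best_next - q[state][action])
            state = next_state
        advisor_prob *= advisor_decay                          # lean less on the advisor over time
    return q
```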