This white paper lays out a vision of research and development in the field of artificial intelligence for the next decade (and beyond). Its denouement is a cyber-physical ecosystem of natural and synthetic sense-making, in which humans are integral participants: what we call "shared intelligence". This vision is premised on active inference, a formulation of adaptive behavior that can be read as a physics of intelligence, and which inherits from the physics of self-organization. In this context, we understand intelligence as the capacity to accumulate evidence for a generative model of one's sensed world, also known as self-evidencing. Formally, this corresponds to maximizing (Bayesian) model evidence, via belief updating over several scales: i.e., inference, learning, and model selection. Operationally, this self-evidencing can be realized via (variational) message passing or belief propagation on a factor graph. Crucially, active inference foregrounds an existential imperative of intelligent systems; namely, curiosity or the resolution of uncertainty. This same imperative underwrites belief sharing in ensembles of agents, in which certain aspects (i.e., factors) of each agent's generative world model provide a common ground or frame of reference. Active inference plays a foundational role in this ecology of belief sharing, leading to a formal account of collective intelligence that rests on shared narratives and goals. We also consider the kinds of communication protocols that must be developed to enable such an ecosystem of intelligences and motivate the development of a shared hyper-spatial modeling language and transaction protocol, as a first (and key) step towards such an ecology.
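For readers unfamiliar with the formalism, self-evidencing is usually cast as minimization of a variational free energy that upper-bounds surprise (negative log evidence); a minimal sketch in standard notation (illustrative, not specific to this white paper):

$$
F[q] = \mathbb{E}_{q(s)}\left[\ln q(s) - \ln p(o, s)\right] = D_{\mathrm{KL}}\left[q(s)\,\|\,p(s \mid o)\right] - \ln p(o) \;\geq\; -\ln p(o),
$$

so minimizing $F$ with respect to the approximate posterior $q(s)$ over hidden states $s$ simultaneously performs approximate Bayesian inference and maximizes a lower bound on the log model evidence $\ln p(o)$, i.e., accumulates evidence for the agent's generative model $p(o, s)$.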
Predictive coding networks (PCNs) aim to learn a generative model of the world. Given observations, this generative model can be inverted to infer the causes of those observations. However, when training PCNs, a marked pathology is commonly observed: inference accuracy peaks and then declines with further training. This cannot be explained by overfitting, because training and test accuracy decrease together. Here, we conduct a thorough investigation of this phenomenon and show that it is caused by an imbalance between the speeds of the individual layers of the PCN. We demonstrate that it can be prevented by regularizing the weight matrix at each layer: by constraining the relative sizes of the matrix's singular values, we allow the weight matrices to change while limiting the overall influence that one layer can exert on its neighbors. We also show that a similar effect can be achieved by a simpler and more plausible scheme that only bounds the weights.
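A minimal sketch (not the authors' code) of this kind of per-layer regularization, written for a PyTorch-style network in which each layer holds a dense weight matrix; the projection rule and the maximum singular-value ratio are illustrative assumptions:

```python
import torch

def constrain_singular_values(W: torch.Tensor, max_ratio: float = 10.0) -> torch.Tensor:
    """Bound the relative sizes of a weight matrix's singular values.

    The weights remain free to change, but the spread between the layer's
    largest and smallest gains (and hence the overall influence it can
    exert on its neighbours) is capped. `max_ratio` is illustrative.
    """
    U, S, Vh = torch.linalg.svd(W, full_matrices=False)
    cap = (S.min() * max_ratio).item()
    return U @ torch.diag(torch.clamp(S, max=cap)) @ Vh

# Example: after every weight update, project each layer's weights back
# onto the constrained set (a simple project-after-step scheme).
layers = [torch.nn.Linear(64, 64) for _ in range(3)]
with torch.no_grad():
    for layer in layers:
        layer.weight.copy_(constrain_singular_values(layer.weight))
```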
Recent work has uncovered close links between classical reinforcement learning algorithms, Bayesian filtering, and active inference, which allows value functions to be understood in terms of Bayesian posteriors. An alternative, but less explored, model-free RL algorithm is the successor representation, which expresses the value function in terms of a successor matrix of expected future state occupancies. In this paper, we derive a probabilistic interpretation of the successor representation in terms of Bayesian filtering, and use it to design a novel active inference agent architecture that utilizes successor representations instead of model-based planning. We demonstrate that active inference successor representations have significant advantages over current active inference agents in terms of planning horizon and computational cost. Moreover, we demonstrate how the successor-representation agent generalizes to changing reward functions, such as variants of the expected free energy.
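As background, a minimal sketch of the (tabular) successor representation itself, which is standard material rather than the paper's probabilistic formulation: the value function factorizes into a reward-independent successor matrix of expected discounted future state occupancies and a reward vector, which is what makes re-evaluation under a changed reward cheap.

```python
import numpy as np

def successor_matrix(P: np.ndarray, gamma: float = 0.95) -> np.ndarray:
    """M = sum_t gamma^t P^t = (I - gamma P)^-1 for a fixed policy's
    state-transition matrix P (tabular case)."""
    return np.linalg.inv(np.eye(P.shape[0]) - gamma * P)

# Illustrative 3-state transition matrix under some fixed policy.
P = np.array([[0.9, 0.1, 0.0],
              [0.0, 0.9, 0.1],
              [0.1, 0.0, 0.9]])
M = successor_matrix(P)

# V = M @ r: because M does not depend on the reward, swapping in a new
# reward vector (e.g. a changed goal or preference) gives a new value
# function immediately, with no replanning.
V_old = M @ np.array([0.0, 0.0, 1.0])
V_new = M @ np.array([1.0, 0.0, 0.0])
```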
Active inference is a mathematical framework that originated in computational neuroscience as a theory of how the brain implements action, perception, and learning. Recently, it has been shown to be a promising approach to problems of state estimation and control under uncertainty, as well as a foundation for goal-driven behavior in robotics and artificial agents in general. Here, we review the state-of-the-art theory and implementations of active inference for state estimation, control, planning, and learning, describing current achievements with a particular focus on robotics. We present relevant experiments that illustrate its potential in terms of adaptation, generalization, and robustness. Furthermore, we connect this approach to other frameworks and discuss its expected benefits and challenges: a unified framework with functional biological plausibility using variational Bayesian inference.
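For orientation, goal-driven behavior in these agents is typically obtained by scoring candidate policies $\pi$ with an expected free energy; one common form in the literature (notation illustrative) is

$$
G(\pi) = \sum_{\tau} \mathbb{E}_{q(o_\tau, s_\tau \mid \pi)}\left[\ln q(s_\tau \mid \pi) - \ln q(s_\tau \mid o_\tau, \pi)\right] - \mathbb{E}_{q(o_\tau \mid \pi)}\left[\ln p(o_\tau)\right],
$$

where the first term is the negative expected information gain (epistemic value) and the second is the negative expected log-preference over outcomes (pragmatic value); policies are then selected via a softmax, $q(\pi) \propto \exp(-G(\pi))$, so that minimizing $G$ balances uncertainty resolution against goal pursuit.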
Predictive coding offers a potentially unifying account of cortical function, hypothesizing that the core function of the brain is to minimize prediction errors with respect to a generative model of the world. The theory is closely related to the Bayesian brain framework and, over the last two decades, has exerted a substantial influence on the fields of theoretical and cognitive neuroscience. A large body of work has grown up around empirically testing improved and extended theoretical and mathematical models of predictive coding, evaluating the potential biological plausibility of their implementation in the brain, and examining the concrete neurophysiological and psychological predictions made by the theory. Despite this enduring popularity, there has not yet been a comprehensive review of the theory of predictive coding, especially of recent developments in the field. Here, we provide a comprehensive review of the core mathematical structure and logic of predictive coding, complementing recent tutorials in the literature. We also review a wide range of classic and recent work within the framework, from the neurobiologically realistic microcircuits that could implement predictive coding, to the close relationship between predictive coding and the widely used backpropagation-of-error algorithm, as well as surveys of the close relationships between predictive coding and modern machine learning techniques.
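As a concrete illustration of that core mathematical structure, here is a deliberately minimal sketch of hierarchical predictive coding with linear predictions (standard material, not the review's code): each layer predicts the layer below, and inference descends the sum of squared prediction errors using only locally available quantities.

```python
import numpy as np

rng = np.random.default_rng(0)

# A three-level linear hierarchy: level l is predicted from level l+1 via W[l].
dims = [10, 8, 6]                     # observation, first latent, second latent
W = [rng.normal(scale=0.1, size=(dims[l], dims[l + 1])) for l in range(len(dims) - 1)]

def infer(o: np.ndarray, n_steps: int = 200, lr: float = 0.05):
    """Infer latent activities by gradient descent on the prediction errors."""
    x = [o] + [np.zeros(d) for d in dims[1:]]      # x[0] is clamped to the observation
    for _ in range(n_steps):
        # Prediction error at each level: what the level above failed to explain.
        e = [x[l] - W[l] @ x[l + 1] for l in range(len(W))]
        for l in range(1, len(dims)):
            drive = W[l - 1].T @ e[l - 1]              # error passed up from below
            pull = e[l] if l < len(dims) - 1 else 0.0  # error on this level from above
            x[l] += lr * (drive - pull)                # purely local update
    return x

latents = infer(rng.normal(size=dims[0]))
```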
There are multiple scales of abstraction from which we can describe the same image, depending on whether we are focusing on fine-grained details or a more global attribute of the image. In brain mapping, learning to automatically parse images to build representations of both small-scale features (e.g., the presence of cells or blood vessels) and global properties of an image (e.g., which brain region the image comes from) is a crucial and open challenge. However, most existing datasets and benchmarks for neuroanatomy consider only a single downstream task at a time. To bridge this gap, we introduce a new dataset, annotations, and multiple downstream tasks that provide diverse ways to read out information about brain structure and architecture from the same image. Our multi-task neuroimaging benchmark (MTNeuro) is built on volumetric, micrometer-resolution X-ray microtomography images spanning a large thalamocortical section of mouse brain, encompassing multiple cortical and subcortical regions. We generated a number of different prediction challenges and evaluated several supervised and self-supervised models for brain-region prediction and pixel-level semantic segmentation of microstructures. Our experiments not only highlight the rich heterogeneity of this dataset, but also provide insights into how self-supervised approaches can be used to learn representations that capture multiple attributes of a single image and perform well on a variety of downstream tasks. Datasets, code, and pre-trained baseline models are provided at https://mtneuro.github.io/.
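The two readout scales described above can be pictured with a generic sketch of a shared encoder feeding both an image-level head and a pixel-level head; this is an illustrative PyTorch-style model, not the benchmark's reference implementation, and all module names and sizes are assumptions.

```python
import torch
import torch.nn as nn

class TwoScaleReadout(nn.Module):
    """Shared encoder with an image-level head (brain-region label) and a
    pixel-level head (microstructure segmentation). Purely illustrative."""

    def __init__(self, n_regions: int = 4, n_classes: int = 3):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.region_head = nn.Sequential(                 # global attribute of the image
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, n_regions),
        )
        self.segmentation_head = nn.Conv2d(32, n_classes, 1)  # fine-grained, per pixel

    def forward(self, x):
        features = self.encoder(x)
        return self.region_head(features), self.segmentation_head(features)

model = TwoScaleReadout()
region_logits, pixel_logits = model(torch.randn(2, 1, 64, 64))  # e.g. grayscale image patches
```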
As Artificial and Robotic Systems are increasingly deployed and relied upon for real-world applications, it is important that they exhibit the ability to continually learn and adapt in dynamically-changing environments, becoming Lifelong Learning Machines. Continual/lifelong learning (LL) involves minimizing catastrophic forgetting of old tasks while maximizing a model's capability to learn new tasks. This paper addresses the challenging lifelong reinforcement learning (L2RL) setting. Pushing the state-of-the-art forward in L2RL and making L2RL useful for practical applications requires more than developing individual L2RL algorithms; it requires making progress at the systems-level, especially research into the non-trivial problem of how to integrate multiple L2RL algorithms into a common framework. In this paper, we introduce the Lifelong Reinforcement Learning Components Framework (L2RLCF), which standardizes L2RL systems and assimilates different continual learning components (each addressing different aspects of the lifelong learning problem) into a unified system. As an instantiation of L2RLCF, we develop a standard API allowing easy integration of novel lifelong learning components. We describe a case study that demonstrates how multiple independently-developed LL components can be integrated into a single realized system. We also introduce an evaluation environment in order to measure the effect of combining various system components. Our evaluation environment employs different LL scenarios (sequences of tasks) consisting of Starcraft-2 minigames and allows for the fair, comprehensive, and quantitative comparison of different combinations of components within a challenging common evaluation environment.
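To make the systems-level point concrete, here is a hypothetical sketch of what a standardized component interface could look like: independently developed continual-learning components all react to the same event hooks and can therefore be composed into one system. The class and method names below are illustrative assumptions, not the actual L2RLCF API.

```python
from abc import ABC, abstractmethod
from typing import Any, Dict

class LifelongComponent(ABC):
    """Hypothetical plug-in interface for a continual-learning component
    (e.g. a replay buffer, regularizer, or task-change detector)."""

    @abstractmethod
    def on_task_start(self, task_id: str) -> None: ...

    @abstractmethod
    def on_step(self, transition: Dict[str, Any]) -> None: ...

    @abstractmethod
    def on_task_end(self, task_id: str) -> None: ...

class ComponentStack:
    """Broadcasts lifecycle events to every registered component in order,
    so components written in isolation can run side by side."""

    def __init__(self, components):
        self.components = list(components)

    def on_task_start(self, task_id: str) -> None:
        for c in self.components:
            c.on_task_start(task_id)

    def on_step(self, transition: Dict[str, Any]) -> None:
        for c in self.components:
            c.on_step(transition)

    def on_task_end(self, task_id: str) -> None:
        for c in self.components:
            c.on_task_end(task_id)
```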
In silico tissue models enable the evaluation of quantitative models of magnetic resonance imaging, including validation and sensitivity analysis of imaging biomarkers and tissue microstructure parameters. We propose a novel method to generate realistic numerical phantoms of myocardial microstructure. We extend previous studies by accounting for cardiomyocyte shape variability, water exchange between cardiomyocytes (intercalated discs), myocardial microstructural disarray, and four sheetlet orientations. In the first stage of the method, cardiomyocytes and sheetlets are generated by considering the shape variability and the intercalated discs in cardiomyocyte-to-cardiomyocyte connections. The sheetlets are then aggregated and oriented in the directions of interest. Our morphometric study shows no significant difference ($p > 0.01$) between the distributions of volume, length, and primary and secondary axes of the numerical and real (literature) cardiomyocyte data. Structural correlation analysis confirms that the in-silico tissue falls in the same disarray class as real tissue. Moreover, the absolute angle difference between the simulated helical angle (HA) of the cardiomyocytes and the input HA (reference value) ($4.3^\circ \pm 3.1^\circ$) is in good agreement with the absolute angle difference between HA measured using experimental cardiac diffusion tensor imaging (cDTI) and histology (reference values) reported by Holmes et al. (2000) ($3.7^\circ \pm 6.4^\circ$) and Scollan et al. (1998) ($4.9^\circ \pm 14.6^\circ$). The angular distance between the eigenvectors of the input and simulated cDTI is smaller than that measured between structure tensor imaging (the gold standard) and experimental cDTI. These results confirm that the proposed method can generate richer numerical phantoms of the myocardium than previous studies.
In this paper, we present a tightly-coupled visual-inertial, object-level, multi-instance dynamic SLAM system. Even in extremely dynamic scenes, it can robustly estimate the camera pose, velocity, and IMU biases while building a densely reconstructed object-level map of the environment. Thanks to robust sensor and object tracking, our system can reliably track and reconstruct the geometry, semantics, and motion of arbitrary objects by incrementally fusing the associated color, depth, semantic, and foreground-object probabilities into each object model. In addition, when an object is lost or moves outside the camera's field of view, our system can reliably recover its pose upon re-observation. We demonstrate the robustness and accuracy of our method through quantitative and qualitative testing on real-world data sequences.
This paper reports on the current state of underground SLAM by discussing the different SLAM strategies and results of the six teams that participated in the three-year-long SubT competition. In particular, the paper has four main goals. First, we review the algorithms, architectures, and systems adopted by the teams, with particular emphasis on lidar-centric SLAM solutions (the approach of choice for virtually all teams in the competition), heterogeneous multi-robot operation (including both aerial and ground robots), and real-world underground operation (from the presence of obscurants to the need to handle tight computational constraints). We do not shy away from discussing the dirty details behind the different SubT SLAM systems, which are often omitted from technical papers. Second, we discuss the maturity of the field by highlighting what is possible with current SLAM systems and what we believe is within reach with some good systems engineering. Third, we outline what we believe are fundamental open problems that are likely to require further research before breakthroughs can be made. Finally, we provide a list of open-source SLAM implementations and datasets produced during the SubT challenge and related work, which constitutes a useful resource for researchers and practitioners.