A key challenge for AI is to build embodied systems that operate in dynamically changing environments. Such systems must adapt to changing task contexts and learn continuously. Although standard deep learning systems achieve state-of-the-art results on static benchmarks, they typically struggle in dynamic scenarios. In these settings, error signals from multiple contexts can interfere with one another, ultimately leading to a phenomenon known as catastrophic forgetting. In this article we investigate biologically inspired architectures as solutions to these problems. Specifically, we show that the biophysical properties of dendrites and local inhibitory systems enable networks to dynamically restrict and route information in a context-specific manner. Our key contributions are as follows. First, we propose a novel artificial neural network architecture that incorporates active dendrites and sparse representations into the standard deep learning framework. Next, we study the performance of this architecture on two separate benchmarks requiring task-based adaptation: Meta-World, a multi-task reinforcement learning environment in which a robotic agent must learn to solve a variety of manipulation tasks simultaneously; and a continual learning benchmark in which the model's prediction task changes throughout training. Analysis on both benchmarks demonstrates the emergence of overlapping but distinct and sparse subnetworks, allowing the system to fluidly learn multiple tasks with minimal forgetting. Our neural implementation marks the first time a single architecture has achieved competitive results in both multi-task and continual learning settings. Our research sheds light on how the biological properties of neurons can inform deep learning systems and help solve dynamic scenarios that are typically impossible for traditional ANNs.
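To make the architectural idea concrete, the following is a minimal PyTorch-style sketch of one way active dendrites and k-winners-take-all sparsity could be combined in a single layer; the class name, shapes, and sigmoidal gating are illustrative assumptions, not the authors' released implementation.

import torch
import torch.nn as nn

class ActiveDendriteLayer(nn.Module):
    """Feedforward units gated by dendritic segments that respond to a
    context vector, followed by k-winners-take-all sparsity (sketch)."""

    def __init__(self, in_dim, out_dim, context_dim, num_segments=4, sparsity=0.1):
        super().__init__()
        self.ff = nn.Linear(in_dim, out_dim)
        # One set of dendritic segment weights per output unit.
        self.segments = nn.Parameter(torch.randn(out_dim, num_segments, context_dim) * 0.01)
        self.k = max(1, int(sparsity * out_dim))

    def forward(self, x, context):
        y = self.ff(x)                                         # (batch, out_dim)
        # Each dendritic segment produces a scalar response to the context.
        seg = torch.einsum("osc,bc->bos", self.segments, context)
        gate = torch.sigmoid(seg.max(dim=-1).values)           # strongest segment gates its unit
        y = y * gate
        # k-winners-take-all: keep only the k most active units per sample.
        mask = torch.zeros_like(y).scatter_(-1, y.topk(self.k, dim=-1).indices, 1.0)
        return y * mask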
Recent work has shown that fine-tuning large pre-trained language models on a collection of tasks described via instructions, a.k.a. instruction-tuning, improves their zero- and few-shot generalization to unseen tasks. However, there is a limited understanding of the performance trade-offs of different decisions made during the instruction-tuning process. These decisions include the scale and diversity of the instruction-tuning benchmark, different task sampling strategies, fine-tuning with and without demonstrations, training using specialized datasets for reasoning and dialogue, and finally, the fine-tuning objectives themselves. In this paper, we characterize the effect of instruction-tuning decisions on downstream task performance when scaling both model and benchmark sizes. To this end, we create OPT-IML Bench: a large benchmark for Instruction Meta-Learning (IML) of 2000 NLP tasks consolidated into task categories from 8 existing benchmarks, and prepare an evaluation framework to measure three types of model generalizations: to tasks from fully held-out categories, to held-out tasks from seen categories, and to held-out instances from seen tasks. Through the lens of this framework, we first present insights about instruction-tuning decisions as applied to OPT-30B and further exploit these insights to train OPT-IML 30B and 175B, which are instruction-tuned versions of OPT. OPT-IML demonstrates all three generalization abilities at both scales on four different evaluation benchmarks with diverse tasks and input formats -- PromptSource, FLAN, Super-NaturalInstructions, and UnifiedSKG. Not only does it significantly outperform OPT on all benchmarks, but it is also highly competitive with existing models fine-tuned on each specific benchmark. We release OPT-IML at both scales, together with the OPT-IML Bench evaluation framework.
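As a rough illustration of two of these decisions (formatting tasks with instructions and demonstrations, and restricting the fine-tuning loss to the target tokens), consider the sketch below; the prompt template, field names, and the Hugging Face-style tokenizer interface are assumptions for illustration, not the OPT-IML Bench format.

def format_example(instruction, task_input, target, demos=()):
    # Serialize an instruction, optional in-context demonstrations, and the query.
    parts = [instruction]
    for demo_in, demo_out in demos:
        parts.append(f"Input: {demo_in}\nOutput: {demo_out}")
    parts.append(f"Input: {task_input}\nOutput:")
    return "\n\n".join(parts), " " + target

def tokenize_with_loss_mask(tokenizer, prompt, target):
    prompt_ids = tokenizer(prompt)["input_ids"]
    target_ids = tokenizer(target, add_special_tokens=False)["input_ids"]
    input_ids = prompt_ids + target_ids
    # Only target positions contribute to the fine-tuning loss (-100 is ignored).
    labels = [-100] * len(prompt_ids) + target_ids
    return input_ids, labels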
Quantum computing (QC) promises significant advantages on certain hard computational tasks over classical computers. However, current quantum hardware, also known as noisy intermediate-scale quantum (NISQ) devices, is still unable to carry out computations faithfully, mainly because of the lack of quantum error correction (QEC) capability. A significant body of theoretical studies has provided various types of QEC codes; one of the notable topological codes is the surface code, whose features, such as the requirement of only nearest-neighboring two-qubit control gates and a large error threshold, make it a leading candidate for scalable quantum computation. Recently developed machine learning (ML)-based techniques, especially reinforcement learning (RL) methods, have been applied to the decoding problem and have already made some progress. Nevertheless, the device noise pattern may change over time, making trained decoder models ineffective. In this paper, we propose a continual reinforcement learning method to address these decoding challenges. Specifically, we implement a double deep Q-learning with probabilistic policy reuse (DDQN-PPR) model to learn surface code decoding strategies for quantum environments with varying noise patterns. Through numerical simulations, we show that the proposed DDQN-PPR model can significantly reduce the computational complexity. Moreover, increasing the number of trained policies can further improve the agent's performance. Our results open a way to build more capable RL agents which can leverage previously gained knowledge to tackle QEC challenges.
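The core learning update is standard double deep Q-learning; a minimal sketch is given below, with the surface-code state/action encoding and the probabilistic policy-reuse logic (which would occasionally act according to a previously trained policy) omitted, and the network interfaces assumed.

import torch
import torch.nn.functional as F

def double_dqn_loss(online_net, target_net, batch, gamma=0.99):
    states, actions, rewards, next_states, dones = batch
    with torch.no_grad():
        # The online network selects the next action; the target network evaluates it.
        next_actions = online_net(next_states).argmax(dim=1, keepdim=True)
        next_q = target_net(next_states).gather(1, next_actions).squeeze(1)
        targets = rewards + gamma * (1.0 - dones) * next_q
    q = online_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)
    return F.mse_loss(q, targets)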
Training a neural network requires choosing a suitable learning rate, involving a trade-off between speed and effectiveness of convergence. While there has been considerable theoretical and empirical analysis of how large the learning rate can be, most prior work focuses only on late-stage training. In this work, we introduce the maximal initial learning rate $\eta^{\ast}$ - the largest learning rate at which a randomly initialized neural network can successfully begin training and achieve (at least) a given threshold accuracy. Using a simple approach to estimate $\eta^{\ast}$, we observe that in constant-width fully-connected ReLU networks, $\eta^{\ast}$ demonstrates different behavior to the maximum learning rate later in training. Specifically, we find that $\eta^{\ast}$ is well predicted as a power of $(\text{depth} \times \text{width})$, provided that (i) the width of the network is sufficiently large compared to the depth, and (ii) the input layer of the network is trained at a relatively small learning rate. We further analyze the relationship between $\eta^{\ast}$ and the sharpness $\lambda_{1}$ of the network at initialization, indicating that they are closely though not inversely related. We formally prove bounds for $\lambda_{1}$ in terms of $(\text{depth} \times \text{width})$ that align with our empirical results.
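A simple way to estimate this threshold in practice is bisection on a log scale over candidate learning rates; the sketch below assumes a train_fn(lr) that trains a freshly initialized network for a fixed budget and returns its accuracy, and is only an illustration of the idea rather than the paper's exact estimation procedure.

def maximal_initial_lr(train_fn, lo=1e-4, hi=10.0, threshold=0.5, iters=20):
    # Assumed invariant: training reaches the threshold at lo and fails at hi.
    for _ in range(iters):
        mid = (lo * hi) ** 0.5           # bisect on a logarithmic scale
        if train_fn(mid) >= threshold:
            lo = mid                     # training still reaches the threshold: push higher
        else:
            hi = mid                     # training fails: back off
    return lo                            # largest learning rate observed to succeed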
Language models can be prompted to perform a wide variety of zero- and few-shot learning problems. However, performance varies significantly with the choice of prompt, and we do not yet understand why this happens or how to pick the best prompts. In this work, we analyze the factors that contribute to this variance and establish a new empirical hypothesis: the performance of a prompt is coupled with the extent to which the model is familiar with the language it contains. Over a wide range of tasks, we show that the lower the perplexity of the prompt is, the better the prompt is able to perform the task. As a result, we devise a method for creating prompts: (1) automatically extend a small seed set of manually written prompts by paraphrasing using GPT3 and backtranslation and (2) choose the lowest perplexity prompts to get significant gains in performance.
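In code, the selection criterion reduces to scoring each candidate prompt by language-model perplexity and keeping the lowest; the sketch below assumes a Hugging Face-style causal LM that returns a mean token loss when given labels, and is illustrative rather than the authors' implementation.

import math
import torch

def prompt_perplexity(model, tokenizer, prompt):
    ids = tokenizer(prompt, return_tensors="pt")["input_ids"]
    with torch.no_grad():
        loss = model(ids, labels=ids).loss   # mean negative log-likelihood per token
    return math.exp(loss.item())

def pick_best_prompt(model, tokenizer, candidate_prompts):
    # Lower perplexity is hypothesized to correlate with better task performance.
    return min(candidate_prompts, key=lambda p: prompt_perplexity(model, tokenizer, p))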
Keyless entry systems in cars are adopting neural networks for localizing their operators. Test-time adversarial defences equip such systems with the ability to defend against adversarial attacks without prior training on adversarial samples. We propose a test-time adversarial example detector which detects input adversarial examples by quantifying the localized intermediate responses of a pre-trained neural network and the confidence scores of an auxiliary softmax layer. Furthermore, in order to make the network robust, we attenuate the non-relevant features by non-iterative input sample clipping. Using our approach, mean performance over 15 levels of adversarial perturbation is increased by 55.33% for the fast gradient sign method (FGSM) and 6.3% for both the basic iterative method (BIM) and the projected gradient method (PGD).
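As a rough sketch of the two ingredients (non-iterative input clipping and a detector built on confidence scores together with intermediate-response statistics), consider the following; the percentile bounds and thresholds are assumptions for illustration, not the paper's calibrated values.

import numpy as np

def clip_input(x, low_pct=1.0, high_pct=99.0):
    # Non-iterative clipping: suppress extreme feature values in a single pass.
    lo, hi = np.percentile(x, [low_pct, high_pct])
    return np.clip(x, lo, hi)

def flag_adversarial(softmax_scores, intermediate_response_norm,
                     conf_thresh=0.9, response_thresh=3.0):
    # Flag a sample if the auxiliary softmax is under-confident or the localized
    # intermediate responses are anomalously large (thresholds are assumptions).
    return softmax_scores.max() < conf_thresh or intermediate_response_norm > response_thresh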
Real engineering and scientific applications often involve one or more qualitative inputs. Standard Gaussian processes (GPs), however, cannot directly accommodate qualitative inputs. The recently introduced latent variable Gaussian process (LVGP) overcomes this issue by first mapping each qualitative factor to underlying latent variables (LVs), and then uses any standard GP covariance function over these LVs. The LVs are estimated similarly to the other GP hyperparameters through maximum likelihood estimation, and then plugged into the prediction expressions. However, this plug-in approach will not account for uncertainty in estimation of the LVs, which can be significant especially with limited training data. In this work, we develop a fully Bayesian approach for the LVGP model and for visualizing the effects of the qualitative inputs via their LVs. We also develop approximations for scaling up LVGPs and fully Bayesian inference for the LVGP hyperparameters. We conduct numerical studies comparing plug-in inference against fully Bayesian inference over a few engineering models and material design applications. In contrast to previous studies on standard GP modeling that have largely concluded that a fully Bayesian treatment offers limited improvements, our results show that for LVGP modeling it offers significant improvements in prediction accuracy and uncertainty quantification over the plug-in approach.
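The LVGP construction itself is easy to state: each level of a qualitative factor is mapped to a point in a low-dimensional latent space, and a standard covariance function is applied to the concatenated quantitative and latent inputs. The sketch below illustrates this with an RBF kernel; the latent mapping and dictionary layout are assumptions, and the fully Bayesian treatment described above would place priors on the latent locations and hyperparameters instead of plugging in point estimates.

import numpy as np

def lvgp_rbf_kernel(x1, x2, level1, level2, latent_map, lengthscales):
    # latent_map: dict from qualitative level to a low-dimensional latent vector.
    z1 = np.concatenate([x1, latent_map[level1]])
    z2 = np.concatenate([x2, latent_map[level2]])
    sq_dist = np.sum(((z1 - z2) / lengthscales) ** 2)
    return np.exp(-0.5 * sq_dist)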
We show that quantum states with "low stabilizer complexity" can be efficiently distinguished from Haar-random. Specifically, given an $n$-qubit pure state $|\psi\rangle$, we give an efficient algorithm that distinguishes whether $|\psi\rangle$ is (i) Haar-random or (ii) a state with stabilizer fidelity at least $\frac{1}{k}$ (i.e., has fidelity at least $\frac{1}{k}$ with some stabilizer state), promised that one of these is the case. With black-box access to $|\psi\rangle$, our algorithm uses $O\!\left(k^{12}\log(1/\delta)\right)$ copies of $|\psi\rangle$ and $O\!\left(n k^{12}\log(1/\delta)\right)$ time, and succeeds with probability at least $1-\delta$; with access to a unitary that prepares $|\psi\rangle$ (and its inverse), $O\!\left(k^{3}\log(1/\delta)\right)$ queries and $O\!\left(n k^{3}\log(1/\delta)\right)$ time suffice. As a corollary, we prove that $\omega(\log n)$ $T$-gates are necessary for any Clifford+$T$ circuit to prepare computationally pseudorandom quantum states, a first-of-its-kind lower bound.
Hyperdimensional computing (HDC) is a paradigm for data representation and learning that originated in computational neuroscience. HDC represents data as high-dimensional, low-precision vectors, which can be used for a variety of information processing tasks such as learning and recall. The mapping into high-dimensional space is a fundamental problem in HDC, and existing methods encounter scalability issues when the input data are themselves high-dimensional. In this work, we explore streaming encoding techniques based on hashing. We show formally that these methods enjoy comparable guarantees on performance for learning applications while being substantially more efficient than existing alternatives. We validate these results experimentally on a popular high-dimensional classification problem and show that our approach scales easily to very large datasets.
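A minimal sketch of a hashing-based streaming encoder is given below: each feature index is hashed to a coordinate and a sign of a high-dimensional vector, so the encoding is accumulated in one pass without materializing a random codebook. The specific hash construction is an assumption for illustration, not the encoder analyzed in the paper.

import numpy as np

def hash_encode(features, dim=10000, seed=0):
    rng = np.random.default_rng(seed)
    coord_seed, sign_seed = (int(s) for s in rng.integers(0, 2**31, size=2))
    v = np.zeros(dim)
    for i, value in enumerate(features):          # stream over the input features
        coord = hash((coord_seed, i)) % dim       # hashed coordinate
        sign = 1.0 if hash((sign_seed, i)) % 2 == 0 else -1.0
        v[coord] += sign * value
    return np.sign(v)                             # low-precision (bipolar) hypervector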
Two-view knowledge graphs (KGs) jointly represent two components: an ontology view of abstract and commonsense concepts, and an instance view of specific entities that instantiate the ontological concepts. These KGs therefore contain heterogeneous structures that are hierarchical, from the ontology view, and cyclical, from the instance view. Despite these different structures, most recent work on embedding KGs assumes that the entire KG belongs to only one of the two views, but not both simultaneously. Works that do treat the KG as two views assume the instance and ontology views belong to the same geometric space, such as all nodes embedded in the same Euclidean space or the same non-Euclidean product space; this assumption is no longer reasonable for two-view KGs, where different portions of the graph exhibit different structures. To address this issue, we define and construct a dual geometric space embedding model (DGS) that models two-view KGs using a complex non-Euclidean geometric space, embedding different portions of the KG in different geometric spaces. DGS utilizes the spherical space, the hyperbolic space, and their intersecting space in a unified framework for learning embeddings. Furthermore, for the spherical space, we propose novel closed spherical space operators that operate directly in the spherical space without the need for mapping to an approximate tangent space. Experiments on public datasets show that DGS significantly outperforms previous state-of-the-art baseline models on KG completion tasks, demonstrating its ability to better model heterogeneous structures in KGs.
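To ground the geometric intuition, the two geodesic distances below are the ones typically associated with hierarchical structure (the Poincaré ball model of hyperbolic space) and cyclical structure (the unit sphere); this illustrates the underlying geometry only, not DGS's closed spherical operators or its scoring function.

import numpy as np

def poincare_distance(u, v, eps=1e-9):
    # Geodesic distance in the Poincaré ball; assumes ||u||, ||v|| < 1.
    num = 2.0 * np.sum((u - v) ** 2)
    den = (1.0 - np.sum(u ** 2)) * (1.0 - np.sum(v ** 2)) + eps
    return np.arccosh(1.0 + num / den)

def spherical_distance(u, v, eps=1e-9):
    # Geodesic (angular) distance on the unit sphere.
    u = u / (np.linalg.norm(u) + eps)
    v = v / (np.linalg.norm(v) + eps)
    return np.arccos(np.clip(np.dot(u, v), -1.0, 1.0))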