Recent pre-trained language models (PLMs) have achieved great success on many natural language processing tasks by learning linguistic features and contextualized sentence representations. Since the properties captured in the stacked layers of a PLM are not clearly identified, straightforward approaches that simply embed the last layer are commonly preferred over deriving richer sentence representations from the PLM. This paper introduces an attention-based pooling strategy that enables the model to preserve the layer-wise signals captured in each layer and to learn digested linguistic features for downstream tasks. A contrastive learning objective adapts the layer-wise attention pooling to both unsupervised and supervised settings, and makes the anisotropic space of the pre-trained embeddings more uniform. We evaluate our model on standard semantic textual similarity (STS) and semantic search tasks. As a result, our method improves the performance of the contrastive-learned BERT_BASE baseline and its variants.
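As an illustration of how layer-wise attention pooling can be set up, here is a minimal PyTorch sketch; the module name, the per-layer mean pooling, and all shapes are assumptions rather than the paper's exact architecture.

```python
import torch
import torch.nn as nn

class LayerAttentionPooling(nn.Module):
    """Hypothetical sketch: attend over the sentence representation of every PLM layer."""
    def __init__(self, hidden_size: int):
        super().__init__()
        # A learnable scorer weights each layer's pooled representation.
        self.scorer = nn.Linear(hidden_size, 1)

    def forward(self, hidden_states):
        # hidden_states: tuple of per-layer tensors, each (batch, seq_len, hidden).
        layer_reprs = torch.stack([h.mean(dim=1) for h in hidden_states], dim=1)  # (batch, n_layers, hidden)
        weights = torch.softmax(self.scorer(layer_reprs).squeeze(-1), dim=-1)     # (batch, n_layers)
        return (weights.unsqueeze(-1) * layer_reprs).sum(dim=1)                   # (batch, hidden)

# Usage with a HuggingFace-style encoder that returns all hidden states:
#   outputs = bert(**batch, output_hidden_states=True)
#   sentence_emb = LayerAttentionPooling(bert.config.hidden_size)(outputs.hidden_states)
```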
A central problem in computational biophysics is protein structure prediction, i.e., finding the optimal folding of a given amino acid sequence. This problem has been studied in a classical abstract model, the HP model, where the protein is modeled as a sequence of H (hydrophobic) and P (polar) amino acids on a lattice. The objective is to find conformations maximizing H-H contacts. It is known that even in this reduced setting, the problem is intractable (NP-hard). In this work, we apply deep reinforcement learning (DRL) to the two-dimensional HP model, and obtain the conformations with the best known energies for benchmark HP sequences of lengths 20 to 50. Our DRL approach is based on a deep Q-network (DQN). We find that a DQN built on a long short-term memory (LSTM) architecture greatly enhances the RL learning ability and significantly improves the search process. DRL can sample the state space efficiently, without the need for manual heuristics. Experimentally, we show that it can find multiple distinct best-known solutions per trial. This study demonstrates the effectiveness of deep reinforcement learning in the HP model for protein folding.
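To make the objective concrete, the sketch below scores a 2D lattice conformation by counting H-H contacts between residues that are lattice neighbours but not sequence neighbours; the function name and the toy sequence are illustrative, not taken from the paper, and the energy is conventionally the negative of this count.

```python
def hh_contacts(sequence: str, coords: list[tuple[int, int]]) -> int:
    """sequence: e.g. 'HPPH...'; coords: lattice position of each residue (a self-avoiding walk)."""
    pos = {c: i for i, c in enumerate(coords)}
    contacts = 0
    for i, (x, y) in enumerate(coords):
        if sequence[i] != 'H':
            continue
        for nb in ((x + 1, y), (x, y + 1)):  # counting each lattice edge once
            j = pos.get(nb)
            if j is not None and sequence[j] == 'H' and abs(i - j) > 1:
                contacts += 1
    return contacts

# Toy example: a 4-residue chain folded into a unit square gives one H-H contact.
print(hh_contacts("HPPH", [(0, 0), (1, 0), (1, 1), (0, 1)]))  # -> 1
```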
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
In collider-based particle and nuclear physics experiments, data are produced at such extreme rates that only a subset can be recorded for later analysis. Typically, algorithms select individual collision events for preservation and store the complete experimental response. A relatively new alternative strategy is to additionally save a partial record for a larger subset of events, allowing for later specific analysis of a larger fraction of events. We propose a strategy that bridges these paradigms by compressing entire events for generic offline analysis but at a lower fidelity. An optimal-transport-based $\beta$ Variational Autoencoder (VAE) is used to automate the compression and the hyperparameter $\beta$ controls the compression fidelity. We introduce a new approach for multi-objective learning functions by simultaneously learning a VAE appropriate for all values of $\beta$ through parameterization. We present an example use case, a di-muon resonance search at the Large Hadron Collider (LHC), where we show that simulated data compressed by our $\beta$-VAE has enough fidelity to distinguish distinct signal morphologies.
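As a rough illustration of learning a single VAE across all values of $\beta$ by parameterization, the following is a hedged PyTorch sketch in which $\beta$ is fed to the encoder and decoder as an extra input and weights the KL term; the plain squared-error reconstruction stands in for the optimal-transport-based term, and all names and sizes are assumptions rather than the paper's model.

```python
import torch
import torch.nn as nn

class BetaConditionedVAE(nn.Module):
    """Sketch: one network trained over a range of beta values by conditioning on beta."""
    def __init__(self, dim_in: int, dim_latent: int):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim_in + 1, 64), nn.ReLU(),
                                     nn.Linear(64, 2 * dim_latent))   # mean and log-variance
        self.decoder = nn.Sequential(nn.Linear(dim_latent + 1, 64), nn.ReLU(),
                                     nn.Linear(64, dim_in))

    def forward(self, x, beta):
        mu, logvar = self.encoder(torch.cat([x, beta], dim=-1)).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()          # reparameterization trick
        x_hat = self.decoder(torch.cat([z, beta], dim=-1))
        recon = (x_hat - x).pow(2).sum(dim=-1)                        # stand-in for the OT-based term
        kl = 0.5 * (mu.pow(2) + logvar.exp() - logvar - 1).sum(dim=-1)
        return (recon + beta.squeeze(-1) * kl).mean()                 # beta-weighted ELBO

# Each update pairs events with randomly sampled beta values, so one model covers all fidelities.
model = BetaConditionedVAE(dim_in=4, dim_latent=2)
x = torch.randn(8, 4)        # toy stand-in for per-event features
beta = torch.rand(8, 1)      # beta sampled per event during training
loss = model(x, beta)
```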
Finetuning language models on a collection of datasets phrased as instructions has been shown to improve model performance and generalization to unseen tasks. In this paper we explore instruction finetuning with a particular focus on (1) scaling the number of tasks, (2) scaling the model size, and (3) finetuning on chain-of-thought data. We find that instruction finetuning with the above aspects dramatically improves performance on a variety of model classes (PaLM, T5, U-PaLM), prompting setups (zero-shot, few-shot, CoT), and evaluation benchmarks (MMLU, BBH, TyDiQA, MGSM, open-ended generation). For instance, Flan-PaLM 540B instruction-finetuned on 1.8K tasks outperforms PaLM 540B by a large margin (+9.4% on average). Flan-PaLM 540B achieves state-of-the-art performance on several benchmarks, such as 75.2% on five-shot MMLU. We also publicly release Flan-T5 checkpoints, which achieve strong few-shot performance even compared to much larger models, such as PaLM 62B. Overall, instruction finetuning is a general method for improving the performance and usability of pretrained language models.
Existing data-driven and feedback traffic control strategies do not consider the heterogeneity of real-time data measurements. Moreover, traditional reinforcement learning (RL) methods typically converge slowly owing to their lack of data efficiency. In addition, conventional optimal perimeter control schemes require exact knowledge of the system dynamics and are therefore fragile to endogenous uncertainties. To tackle these challenges, this work proposes an integral reinforcement learning (IRL)-based approach to learning the macroscopic traffic dynamics for adaptive optimal perimeter control. This work makes the following primary contributions to the transportation literature: (a) a continuous-time control is developed with discretized gain updates to adapt to the discrete-time sensor data; (b) to reduce the sampling complexity and use the available data more efficiently, an experience replay (ER) technique is introduced into the IRL algorithm; (c) the proposed method relaxes the requirement of model calibration in a "model-free" manner, which enables robustness against modeling uncertainty and enhances real-time performance via a data-driven RL algorithm; (d) the convergence of the IRL-based algorithm and the stability of the controlled traffic dynamics are proven via the Lyapunov theory. The optimal control law is parameterized and then approximated by neural networks (NN), which moderates the computational complexity. Both state and input constraints are considered, while no model linearization is required. Numerical examples and simulation experiments are presented to verify the effectiveness and efficiency of the proposed method.
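Contribution (b) can be pictured with a minimal experience-replay buffer that stores transitions collected over sensor intervals and re-uses them in later updates; the structure below is a generic hedged sketch, not the paper's IRL update.

```python
import random
from collections import deque

class ReplayBuffer:
    """Generic sketch of experience replay: keep past samples and re-use them."""
    def __init__(self, capacity: int = 10_000):
        self.buffer = deque(maxlen=capacity)

    def push(self, state, control, integral_cost, next_state):
        # One sample collected over a sensor interval [t, t + T].
        self.buffer.append((state, control, integral_cost, next_state))

    def sample(self, batch_size: int):
        # Re-use stored data in the next gain/critic update instead of
        # discarding it after a single use, reducing sampling complexity.
        return random.sample(self.buffer, min(batch_size, len(self.buffer)))
```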
Place recognition is a fundamental module that can assist Simultaneous Localization and Mapping (SLAM) in loop-closure detection and re-localization for long-term navigation. The place recognition community has made astonishing progress over the last 20 years, attracting widespread research interest and applications in multiple fields such as computer vision and robotics. However, few methods have shown promising place recognition performance in complex real-world scenarios, where long-term and large-scale appearance changes usually result in failures. Additionally, there is a lack of an integrated framework amongst the state-of-the-art methods that can handle all of the challenges, including appearance changes, viewpoint differences, robustness to unknown areas, and efficiency in real-world applications. In this work, we survey the state-of-the-art methods targeting long-term localization and discuss future directions and opportunities. We first investigate the formulation of place recognition in long-term autonomy and the major challenges it faces in real-world environments. We then review recent works on place recognition for different sensor modalities and current strategies for addressing the various place recognition challenges. Finally, we review the existing datasets for long-term localization and introduce our dataset and evaluation API for different approaches. This paper can serve as a tutorial for researchers new to the place recognition community and for those who care about long-term robotic autonomy. We also provide our opinion on a frequently asked question in robotics: do robots need accurate localization for long-term autonomy? A summary of this work, together with our dataset and evaluation API, is publicly available to the robotics community at: https://github.com/metaslam/gprs.
Mirror detection aims to identify the mirror regions in a given input image. Existing works mainly focus on integrating semantic features and structural features to mine the similarity and discontinuity between mirror and non-mirror regions, or on introducing depth information to help analyze the existence of mirrors. In this work, we observe that a real object typically forms a loose symmetry relationship with its corresponding reflection in the mirror, which helps distinguish mirrors from real objects. Based on this observation, we propose a dual-path Symmetry-Aware Transformer-based mirror detection Network (SATNet), which includes two novel modules: a Symmetry-Aware Attention Module (SAAM) and a Contrast and Fusion Decoder Module (CFDM). Specifically, we first introduce a transformer backbone to model global information aggregation in images, extracting multi-scale features along two paths. We then feed the high-level dual-path features to SAAMs to capture the symmetry relations. Finally, we fuse the dual-path features and progressively refine our prediction maps with CFDMs to obtain the final mirror mask. Experimental results show that SATNet outperforms both RGB and RGB-D mirror detection methods on all available mirror detection datasets.
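The symmetry-relation idea can be pictured as cross-attention between the two path features, so that a real object can match its mirrored counterpart in the other path; the sketch below is only an illustrative stand-in under that assumption and does not reproduce the paper's SAAM or CFDM.

```python
import torch
import torch.nn as nn

class CrossPathAttention(nn.Module):
    """Illustrative stand-in: queries from one path attend to features of the other path."""
    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, path_a, path_b):
        # path_a, path_b: (batch, tokens, dim) high-level features of the two paths.
        fused, _ = self.attn(query=path_a, key=path_b, value=path_b)
        return fused + path_a   # residual connection keeps the original path features

feats_a = torch.randn(2, 196, 256)
feats_b = torch.randn(2, 196, 256)
out = CrossPathAttention(256)(feats_a, feats_b)   # (2, 196, 256)
```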
The existence of multiple load-solution mappings of non-convex AC-OPF problems poses a fundamental challenge to deep neural network (DNN) schemes. Because the training dataset may contain a mixture of data points corresponding to different load-solution mappings, the DNN can fail to learn a legitimate mapping and generate inferior solutions. We propose DeepOPF-AL as an augmented-learning approach to tackle this issue. The idea is to train a DNN to learn a unique mapping from an augmented input, i.e., (load, initial point), to the solution generated by an iterative OPF solver with the load and initial point as intake. We then apply the learned augmented mapping to solve AC-OPF problems much faster than conventional solvers. Simulation results over IEEE test cases show that DeepOPF-AL achieves noticeably better optimality and similar feasibility and speedup performance compared with a recent DNN scheme, with the same DNN size yet elevated training complexity.
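The augmented mapping can be pictured as a DNN whose input concatenates the load vector and the solver's initial point, so that distinct load-solution mappings become distinguishable; the sketch below uses made-up dimensions and a generic feed-forward model, not the paper's network.

```python
import torch
import torch.nn as nn

n_load, n_init, n_sol = 10, 10, 12      # illustrative dimensions only

model = nn.Sequential(
    nn.Linear(n_load + n_init, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, n_sol),
)

def predict_solution(load: torch.Tensor, initial_point: torch.Tensor) -> torch.Tensor:
    # Augmented input (load, initial point) -> the solution the iterative OPF solver
    # would produce when started from that initial point (the training target).
    return model(torch.cat([load, initial_point], dim=-1))

sol = predict_solution(torch.randn(4, n_load), torch.randn(4, n_init))  # (4, n_sol)
```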
We provide results that exactly quantify how data augmentation affects the convergence rate and variance of estimates. They lead to some unexpected findings: Contrary to common intuition, data augmentation may increase rather than decrease the uncertainty of estimates, such as the empirical prediction risk. Our main theoretical tool is a limit theorem for functions of randomly transformed, high-dimensional random vectors. The proof draws on work in probability on noise stability of functions of many variables. The pathological behavior we identify is not a consequence of complex models, but can occur even in the simplest settings -- one of our examples is a ridge regressor with two parameters. On the other hand, our results also show that data augmentation can have real, quantifiable benefits.
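To make the setting concrete, the following toy script fits a two-parameter ridge regressor with and without a simple noise-injection augmentation and inspects the spread of the coefficient estimates across repetitions; it is an illustrative setup under assumed details (the choice of transformation, noise levels, and sample size) and does not reproduce the paper's analysis or findings.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_ridge(X, y, lam=1.0):
    # Closed-form ridge estimate: (X^T X + lam I)^{-1} X^T y
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def one_run(augment: bool):
    X = rng.normal(size=(50, 2))                               # two parameters
    y = X @ np.array([1.0, -0.5]) + rng.normal(scale=0.1, size=50)
    if augment:
        X_aug = X + rng.normal(scale=0.3, size=X.shape)        # randomly transformed copies
        X, y = np.vstack([X, X_aug]), np.concatenate([y, y])   # labels reused for augmented points
    return fit_ridge(X, y)

for flag in (False, True):
    est = np.array([one_run(flag) for _ in range(200)])
    print(flag, est.var(axis=0))   # spread of the estimates across repeated runs
```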