We consider statistical inverse learning problems, where the task is to estimate a function $f$ from noisy point evaluations of $Af$, where $A$ is a linear operator. The function $Af$ is evaluated at i.i.d. random design points $u_n$, $n = 1, \ldots, N$, generated by an unknown general probability distribution. We consider Tikhonov regularization with general convex and $p$-homogeneous penalty functionals and derive concentration rates of the regularized solution to the ground truth, measured in the symmetric Bregman distance induced by the penalty functional. We obtain concrete rates for Besov norm penalties and numerically demonstrate the correspondence with the observed rates in the context of X-ray tomography.
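As a hedged sketch of the setup above (the notation here is assumed rather than quoted from the paper), the penalized estimator and the error measure can be written as

$$
f_\alpha \in \arg\min_{f} \left\{ \frac{1}{N} \sum_{n=1}^{N} \big( (Af)(u_n) - y_n \big)^2 + \alpha \, \mathcal{R}(f) \right\},
$$

where the penalty $\mathcal{R}$ is convex and $p$-homogeneous, i.e. $\mathcal{R}(\lambda f) = |\lambda|^p \, \mathcal{R}(f)$, and the symmetric Bregman distance induced by $\mathcal{R}$ is

$$
D^{\mathrm{sym}}_{\mathcal{R}}(f, g) = \langle r_f - r_g, \, f - g \rangle, \qquad r_f \in \partial\mathcal{R}(f), \; r_g \in \partial\mathcal{R}(g).
$$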
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
The last few years have seen significant progress in the field of 3D learning on classification, detection, and segmentation problems. The vast majority of existing studies focus on canonical closed-set conditions, neglecting the intrinsic open nature of the real world. This limits the abilities of autonomous systems that need to manage novel and unknown signals. In this context, exploiting 3D data can be a valuable asset, since it conveys rich information about the geometry of sensed objects and scenes. This paper provides the first broad study on open-set 3D learning. We introduce a novel testbed with settings of increasing difficulty in terms of category semantic shift, covering both in-domain (synthetic-to-synthetic) and cross-domain (synthetic-to-real) scenarios. Moreover, we investigate the related out-of-distribution and open-set 2D literature to understand if and how their most recent approaches are effective on 3D data. Our extensive benchmarking positions several algorithms in the same coherent picture, revealing their strengths and limitations. The results of our analysis may be a reliable foothold for future tailored open-set 3D models.
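A common baseline from the 2D open-set literature that such a study would evaluate is rejection by confidence thresholding (maximum softmax probability). The sketch below is illustrative only; the threshold, class count, and logits are invented and not taken from the paper:

```python
import numpy as np

def msp_open_set_predict(logits, threshold=0.7):
    """Maximum-softmax-probability baseline: predict a known class when
    confidence exceeds the threshold, otherwise reject as unknown (-1)."""
    z = logits - logits.max(axis=1, keepdims=True)  # stabilize softmax
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    conf = probs.max(axis=1)
    preds = probs.argmax(axis=1)
    preds[conf < threshold] = -1  # low confidence -> "unknown" category
    return preds

# One confident sample and one ambiguous one.
logits = np.array([[8.0, 0.5, 0.2],     # clearly class 0
                   [1.0, 1.1, 0.9]])    # near-uniform -> rejected
print(msp_open_set_predict(logits))     # class 0 kept, second sample rejected
```

Open-set benchmarks then score both the closed-set accuracy on known classes and the rejection quality on unknown ones.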
Probing methods allow one to obtain a partial representation of the linguistic phenomena stored in the inner layers of a neural network, using external classifiers and statistical analysis. Pre-trained transformer-based language models are widely used for both natural language understanding (NLU) and natural language generation (NLG) tasks, making them the most common choice for downstream applications. However, little analysis has been done on whether these models are trained enough or contain knowledge correlated with linguistic theory. We introduce a chronological probing study of transformer English models such as MultiBERT and T5. We sequentially compare the information the models learn about language over the course of training on corpora. The results show that 1) linguistic information is acquired in the early stages of training; 2) both language models demonstrate the ability to capture various features of language, including morphology, syntax, and even discourse, while they can also fail inconsistently on tasks that are considered easy. We also introduce an open-source framework for chronological probing research that is compatible with other transformer-based models. https://github.com/ekaterinavoloshina/Chronology_probing
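The probing setup in the first sentence can be sketched as fitting a small external classifier on a model's frozen layer activations. The features below are random stand-ins for real hidden states, and the dimensions, property, and probe choice are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for hidden states of one transformer layer: in a real probing study
# these come from the frozen model; a linearly separable synthetic set suffices here.
n, dim = 200, 16
labels = rng.integers(0, 2, size=n)                  # e.g. "sentence is past tense"
reps = rng.normal(size=(n, dim)) + labels[:, None]   # property linearly encoded

# Least-squares linear probe (closed form), trained on the first 150 samples.
X = np.hstack([reps, np.ones((n, 1))])               # add bias column
w, *_ = np.linalg.lstsq(X[:150], labels[:150].astype(float), rcond=None)
preds = (X[150:] @ w > 0.5).astype(int)
acc = (preds == labels[150:]).mean()
print(f"probe accuracy: {acc:.2f}")  # high accuracy -> layer encodes the property
```

A chronological study repeats this fit at successive training checkpoints and plots probe accuracy over training time.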
Artificial neural networks for motor control usually adopt generic architectures like fully connected MLPs. While general, these tabula rasa architectures rely on large amounts of experience to learn, are not easily transferable to new bodies, and have internal dynamics that are difficult to interpret. In nature, animals are born with highly structured connectivity in their nervous systems shaped by evolution; this innate circuitry acts synergistically with learning mechanisms to provide inductive biases that enable most animals to function well soon after birth and learn efficiently. Convolutional networks inspired by visual circuitry have encoded useful biases for vision. However, the extent to which ANN architectures inspired by neural circuitry can yield useful biases for other AI domains remains unknown. In this work, we ask what advantages biologically inspired ANN architecture can provide in the domain of motor control. Specifically, we translate C. elegans locomotion circuits into an ANN model controlling a simulated Swimmer agent. On a locomotion task, our architecture achieves good initial performance and asymptotic performance comparable with MLPs, while dramatically improving data efficiency and requiring orders of magnitude fewer parameters. Our architecture is interpretable and transfers to new body designs. An ablation analysis shows that constrained excitation/inhibition is crucial for learning, while weight initialization contributes to good initial performance. Our work demonstrates several advantages of biologically inspired ANN architecture and encourages future work in more complex embodied control.
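The "constrained excitation/inhibition" named in the ablation can be sketched as sign-constrained weights: each presynaptic unit is fixed as excitatory or inhibitory, and learning only scales magnitudes. The shapes and sign assignment below are illustrative assumptions, not the paper's actual circuit:

```python
import numpy as np

rng = np.random.default_rng(0)
n_pre, n_post = 6, 4

# Fixed sign per presynaptic neuron (Dale's principle): +1 excitatory, -1 inhibitory.
signs = np.array([1, 1, 1, 1, -1, -1])

# Learnable magnitudes; the effective weight is sign * |magnitude|, so gradient
# updates can rescale a connection but never flip excitation into inhibition.
magnitudes = rng.uniform(0.0, 1.0, size=(n_pre, n_post))

def effective_weights(mag):
    return signs[:, None] * np.abs(mag)  # abs() keeps the signs fixed during learning

W = effective_weights(magnitudes)
# Every excitatory row is non-negative, every inhibitory row non-positive.
print((W[:4] >= 0).all(), (W[4:] <= 0).all())
```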
Data augmentation is an important component of robustness evaluation for natural language processing (NLP) models, as well as of enhancing the diversity of the data they are trained on. In this paper, we present NL-Augmenter, a new participatory Python-based natural language augmentation framework that supports the creation of both transformations (modifications to the data) and filters (data splits according to specific features). We describe the framework and an initial set of 117 transformations and 23 filters for a variety of natural language tasks. We demonstrate the efficacy of NL-Augmenter by using several of its transformations to analyze the robustness of popular natural language models. The infrastructure, datacards, and robustness analysis results are publicly available on the NL-Augmenter repository (\url{https://github.com/gem-benchmark/nl-augmenter}).
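The transformation/filter split can be sketched with a generic interface. This is an illustrative sketch of the two concepts, not NL-Augmenter's actual API; the class and method names are invented:

```python
class Transformation:
    """Modifies the data: returns one or more perturbed variants of a sentence."""
    def generate(self, sentence: str) -> list[str]:
        raise NotImplementedError

class Filter:
    """Splits the data: keeps only examples that have a specific feature."""
    def keep(self, sentence: str) -> bool:
        raise NotImplementedError

class UppercaseTransformation(Transformation):
    def generate(self, sentence: str) -> list[str]:
        return [sentence.upper()]  # a trivial perturbation for illustration

class ShortSentenceFilter(Filter):
    def keep(self, sentence: str) -> bool:
        return len(sentence.split()) <= 5  # feature: short sentences only

data = ["a tiny example", "a rather longer example sentence than the filter allows"]
kept = [s for s in data if ShortSentenceFilter().keep(s)]
augmented = [v for s in kept for v in UppercaseTransformation().generate(s)]
print(augmented)
```

Robustness evaluation then compares a model's predictions on the original and augmented variants.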
Comparison with humans is an essential requirement for a benchmark to be a reliable measurement of model capabilities. However, the methods for model comparison could have a fundamental flaw: the arithmetic mean of separate metrics is used for all tasks of different complexities and for test and training sets of different sizes. In this paper, we examine the aggregated scoring methods of popular NLP benchmarks based on their reported results, and rearrange the model rankings using geometric and harmonic means (appropriate for averaging rates). We analyze several popular benchmarks, including GLUE, SuperGLUE, XGLUE, and XTREME. The analysis shows that, for example, human level on SuperGLUE has still not been reached, and current models still have room for improvement.
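The central point, that the choice of mean can change model rankings, is easy to reproduce. The per-task scores below are made up for illustration:

```python
from statistics import fmean, geometric_mean, harmonic_mean

# Hypothetical per-task scores for two models on a 3-task benchmark.
model_a = [0.95, 0.95, 0.40]   # strong on two tasks, weak on one
model_b = [0.78, 0.78, 0.70]   # uniformly decent

for name, scores in [("A", model_a), ("B", model_b)]:
    print(name,
          round(fmean(scores), 3),            # arithmetic: rewards high outliers
          round(geometric_mean(scores), 3),   # geometric: penalizes weak tasks
          round(harmonic_mean(scores), 3))    # harmonic: penalizes them even more
```

Model A ranks first under the arithmetic mean but second under both alternatives, so a leaderboard's ordering depends on the aggregation rule, not only on the scores.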
To protect tropical forest biodiversity, we need to be able to detect it reliably, cheaply, and at scale. Automated species detection from passively recorded soundscapes via machine-learning approaches is a promising technique towards this goal, but it is constrained by the necessity of large training datasets. Using soundscapes from a tropical forest in Borneo and a convolutional neural network model (CNN) created via transfer learning, we investigate i) the minimum viable training dataset size for accurately predicting call types ('sonotypes') and ii) the extent to which data augmentation can overcome the issue of small training datasets. We found that even relatively high sample sizes (per call type) led to mediocre accuracy, which, however, data augmentation improved significantly, regardless of taxonomic group or call characteristics. Our results suggest that transfer learning and data augmentation can make the use of CNNs to classify species' vocalizations feasible even for small soundscape-based projects with many rare species. Our open-source method has the potential to enable conservation programmes to be more evidence-based by using soundscape data in the adaptive management of biodiversity.
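Two standard waveform-level augmentations such a pipeline might use can be sketched as follows. The specific augmentations used in the study are not stated above; noise injection and time shifting are common illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def add_noise(waveform, snr_db=20.0):
    """Mix in Gaussian noise at a given signal-to-noise ratio (in dB)."""
    signal_power = np.mean(waveform ** 2)
    noise_power = signal_power / (10 ** (snr_db / 10))
    return waveform + rng.normal(0.0, np.sqrt(noise_power), size=waveform.shape)

def time_shift(waveform, max_fraction=0.2):
    """Circularly shift the clip by up to max_fraction of its length."""
    limit = int(len(waveform) * max_fraction)
    shift = rng.integers(-limit, limit + 1)
    return np.roll(waveform, shift)

clip = np.sin(np.linspace(0, 40 * np.pi, 8000))  # stand-in for a recorded call
augmented = time_shift(add_noise(clip))
print(augmented.shape)  # same length as the original clip
```

Each training example can yield many such variants, which is how augmentation compensates for small per-sonotype sample sizes.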
Variational inference uses optimization, rather than integration, to approximate the marginal likelihood, and thereby the posterior, in a Bayesian model. Thanks to advances in computational scalability made in the last decade, variational inference is now the preferred choice for many high-dimensional models and large datasets. This tutorial introduces variational inference from the parametric perspective that dominates these recent developments, in contrast to the mean-field perspective commonly found in other introductory texts.
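The optimization the tutorial refers to is maximization of the evidence lower bound (ELBO). In standard notation (not quoted from the tutorial), with data $x$, latent variables $z$, and a variational family $q_\phi$:

$$
\log p(x) = \log \int p(x, z) \, dz \;\ge\; \mathbb{E}_{q_\phi(z)}\!\left[ \log p(x, z) - \log q_\phi(z) \right] =: \mathrm{ELBO}(\phi),
$$

and since $\log p(x) - \mathrm{ELBO}(\phi) = \mathrm{KL}\!\left( q_\phi(z) \,\|\, p(z \mid x) \right)$, maximizing the ELBO over the parameters $\phi$ simultaneously approximates the marginal likelihood and drives $q_\phi$ toward the posterior.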
Knowledge graphs (KG) have served as the key component of various natural language processing applications. Commonsense knowledge graphs (CKG) are a special type of KG, where entities and relations are composed of free-form text. However, previous works in KG completion and CKG completion suffer from long-tail relations and newly-added relations which do not have many known triples for training. In light of this, few-shot KG completion (FKGC), which requires the strengths of graph representation learning and few-shot learning, has been proposed to address the problem of limited annotated data. In this paper, we comprehensively survey previous attempts on such tasks in the form of a series of methods and applications. Specifically, we first introduce FKGC challenges, commonly used KGs, and CKGs. Then we systematically categorize and summarize existing works in terms of the type of KGs and the methods. Finally, we present applications of FKGC models on prediction tasks in different areas and share our thoughts on future research directions of FKGC.
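The few-shot completion task can be made concrete with a tiny episode layout; the relation, entities, and value of K below are invented for illustration:

```python
# A KG triple is (head, relation, tail); in a CKG each slot is free-form text.
# In few-shot KG completion a long-tail relation comes with only K support
# triples, and the model must complete held-out query triples for it.
K = 3
episode = {
    "relation": "is_capital_of",          # a newly added / long-tail relation
    "support": [                          # the K annotated triples available
        ("paris", "is_capital_of", "france"),
        ("tokyo", "is_capital_of", "japan"),
        ("cairo", "is_capital_of", "egypt"),
    ],
    "query": [                            # tail entity to be predicted (None)
        ("ottawa", "is_capital_of", None),
    ],
}
print({"support": len(episode["support"]), "query": len(episode["query"])})
```

FKGC methods differ mainly in how they build a relation representation from the support set before scoring candidate tails for the queries.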