Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
Human space exploration beyond low Earth orbit will involve missions of significant distance and duration. To effectively mitigate myriad space health hazards, a paradigm shift in data and space health systems is necessary to enable Earth independence rather than Earth reliance. Promising developments in artificial intelligence and machine learning for biology and health can address these needs. We propose an appropriately autonomous and intelligent Precision Space Health system that can monitor, aggregate, and assess biomedical statuses; analyze and predict personalized adverse health outcomes; adapt and respond to newly accumulated data; and provide preventive, actionable, and timely insights to individual deep space crew members, along with iterative decision support to their crew medical officer. Here we present a summary of recommendations from a workshop organized by NASA on future applications of artificial intelligence in space biology and health. In the next decade, biomonitoring technology, biomarker science, spacecraft hardware, intelligent software, and streamlined data management must mature and be woven together into a Precision Space Health system to enable humanity to thrive in deep space.
Space biology research aims to understand the fundamental effects of spaceflight on organisms, develop foundational knowledge to support deep space exploration, and ultimately bioengineer spacecraft and habitats to stabilize ecosystems of plants, crops, microbes, animals, and humans for sustained multi-planetary life. To advance these aims, the field leverages experiments, platforms, data, and model organisms from both spaceborne and ground-analog studies. As research extends beyond low Earth orbit, experiments and platforms must be maximally autonomous, light, agile, and intelligent to expedite knowledge discovery. Here we present a summary of recommendations from a workshop organized by NASA on artificial intelligence, machine learning, and modeling applications that offer key solutions to these space biology challenges. In the next decade, integrating artificial intelligence into the field of space biology will deepen the biological understanding of spaceflight effects, facilitate predictive modeling and analytics, support maximally autonomous and reproducible experiments, and efficiently manage spaceborne data and metadata, all with the goal of enabling life to thrive in deep space.
Automated machine learning (AutoML) algorithms have grown in popularity due to their high performance and flexibility in adapting to different problems and data sets. With the increasing number of AutoML algorithms, deciding which best suits a given problem becomes increasingly difficult. It is therefore essential to use complex and challenging benchmarks that can differentiate the AutoML algorithms from one another. This paper compares the performance of four different AutoML algorithms: Tree-based Pipeline Optimization Tool (TPOT), Auto-Sklearn, Auto-Sklearn 2, and H2O AutoML. We use the Diverse and Generative ML benchmark (DIGEN), a diverse set of synthetic datasets derived from generative functions designed to highlight the strengths and weaknesses of common machine learning algorithms. We confirm that AutoML can identify pipelines that perform well on all included datasets. Most AutoML algorithms performed similarly, without much room for improvement; however, some were more consistent than others at finding high-performing solutions on certain datasets.
Finetuning language models on a collection of datasets phrased as instructions has been shown to improve model performance and generalization to unseen tasks. In this paper, we explore instruction finetuning with a particular focus on (1) scaling the number of tasks, (2) scaling the model size, and (3) finetuning on chain-of-thought data. We find that instruction finetuning with the above aspects dramatically improves performance on a variety of model classes (PaLM, T5, U-PaLM), prompting setups (zero-shot, few-shot, CoT), and evaluation benchmarks (MMLU, BBH, TyDiQA, MGSM, open-ended generation). For instance, Flan-PaLM 540B instruction-finetuned on 1.8K tasks outperforms PaLM 540B by a large margin (+9.4% on average). Flan-PaLM 540B achieves state-of-the-art performance on several benchmarks, such as 75.2% on five-shot MMLU. We also publicly release Flan-T5 checkpoints, which achieve strong few-shot performance even compared to much larger models, such as PaLM 62B. Overall, instruction finetuning is a general method for improving the performance and usability of pretrained language models.
When seeking a predictive model in biomedical data, one often has more than a single objective in mind, e.g., attaining both high accuracy and low complexity (to promote interpretability). We investigate herein whether multiple objectives can be dynamically tuned by our recently proposed coevolutionary algorithm, SAFE (Solution And Fitness Evolution). We find that, compared with a standard evolutionary algorithm, SAFE is able to automatically tune accuracy and complexity with no loss in performance over complex simulated genetics datasets produced by the GAMETES tool.
We recently proposed SAFE (Solution And Fitness Evolution), a commensalistic coevolutionary algorithm that maintains two coevolving populations: a population of candidate solutions and a population of candidate objective functions. We showed that SAFE was successful at evolving solutions within a robot maze domain. In this paper, we present an investigation of SAFE's adaptation and application to multi-objective problems, wherein candidate objective functions explore different weightings of each objective. Though preliminary, the results suggest that SAFE, and the concept of coevolving solutions and objective functions, can identify a similar set of optimal multi-objective solutions without needing to explicitly use a Pareto front for fitness calculation and parent selection. These findings support our hypothesis that the SAFE algorithm concept can not only solve complex problems but also adapt to the challenge of problems with multiple objectives.
We recently highlighted a fundamental problem recognized to confound algorithmic optimization: conflating the objective with the objective function. Even when the former is well defined, the latter may not be obvious; e.g., in learning a strategy to navigate a maze to find a goal (the objective), an effective objective function to evaluate strategies may not be a simple function of the distance to the goal. We propose to automate the means by which a good objective function may be discovered, a proposal reified herein. We present Solution And Fitness Evolution (SAFE), a commensalistic coevolutionary algorithm that maintains two coevolving populations: a population of candidate solutions and a population of candidate objective functions. As proof of principle of this concept, we show that SAFE successfully evolves not only solutions within a robotic maze domain, but also the objective functions needed to measure solution quality during evolution.
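The two-population loop described in the abstract can be illustrated with a toy sketch. This is an illustrative reduction, not the paper's implementation: the domain (2-D points seeking a hidden target), the two feature signals, the weight-pair encoding of candidate objective functions, and all function names are assumptions made for demonstration. The commensalistic relationship is modeled by scoring each objective function by the true quality of the solution it favors most, while solutions take the best score any candidate objective awards them.

```python
# Toy sketch of a SAFE-style commensalistic coevolution loop (illustrative
# only; the domain, features, and encodings are hypothetical, not the paper's).
import math
import random

TARGET = (0.7, 0.2)  # hypothetical hidden goal the search must discover

def true_quality(sol):
    # The actual objective (closeness to the goal), unknown to the search.
    return -math.dist(sol, TARGET)

def features(sol):
    # Two signals a candidate objective function may weight: goal proximity
    # and a novelty-like signal (distance travelled from the start point).
    return (-math.dist(sol, TARGET), math.dist(sol, (0.0, 0.0)))

def score(sol, weights):
    # A candidate objective function is just a weight pair over the features.
    f1, f2 = features(sol)
    return weights[0] * f1 + weights[1] * f2

def mutate(vec, rng):
    return tuple(min(1.0, max(0.0, v + rng.gauss(0, 0.1))) for v in vec)

def select_and_mutate(pop, fits, rng):
    # Truncation selection: keep the better half, refill with mutants of it.
    ranked = [p for _, p in sorted(zip(fits, pop), key=lambda t: t[0], reverse=True)]
    half = ranked[: len(ranked) // 2]
    return half + [mutate(p, rng) for p in half]

def safe_loop(gens=60, pop_size=16, seed=0):
    rng = random.Random(seed)
    sols = [(rng.random(), rng.random()) for _ in range(pop_size)]  # solutions
    objs = [(rng.random(), rng.random()) for _ in range(pop_size)]  # obj. functions
    for _ in range(gens):
        # A solution keeps the best score any candidate objective gives it.
        sol_fit = [max(score(s, w) for w in objs) for s in sols]
        # Commensalism: an objective function's fitness is the true quality of
        # the solution it ranks highest -- it helps but is not helped in return.
        obj_fit = [true_quality(max(sols, key=lambda s: score(s, w))) for w in objs]
        sols = select_and_mutate(sols, sol_fit, rng)
        objs = select_and_mutate(objs, obj_fit, rng)
    return max(sols, key=true_quality)

best = safe_loop()
```

Objective functions that reward solutions far from the goal earn low fitness and are selected out, so useful weightings and goal-reaching solutions emerge together without the true objective ever being handed directly to the solution population.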
We propose a modification to standard gradient boosting, replacing the embedded weak learner with a strong(er) one: SyRBo, Symbolic-Regression Boosting. Experiments over 98 regression datasets show that adding a small number of boosting stages (between 2 and 5) to a symbolic regressor often achieves statistically significant improvement. We note that coding SyRBo on top of any symbolic regressor is straightforward, and the added cost is simply a few more evolutionary rounds. SyRBo is essentially a simple add-on that can readily be attached to an extant symbolic regressor, often with beneficial results.
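The boosting scheme described above can be sketched in a few lines. This is a minimal sketch of the general idea (additive boosting where each stage fits a strong learner to the residuals of the running prediction), not the paper's implementation: a decision tree stands in for the symbolic regressor, and the function names and hyperparameters are assumptions.

```python
# Minimal sketch of symbolic-regression-style boosting: each stage fits a
# strong learner to the residuals left by the previous stages. A decision
# tree is used here as a stand-in for a symbolic regressor.
import numpy as np
from sklearn.tree import DecisionTreeRegressor  # stand-in strong learner

def boost_fit(X, y, n_stages=3, learning_rate=1.0):
    """Fit n_stages regressors, each on the residuals of the running model."""
    stages = []
    residual = y.astype(float).copy()
    for _ in range(n_stages):
        model = DecisionTreeRegressor(max_depth=4)  # swap in any symbolic regressor
        model.fit(X, residual)
        residual = residual - learning_rate * model.predict(X)
        stages.append(model)
    return stages

def boost_predict(stages, X, learning_rate=1.0):
    # The ensemble prediction is the learning-rate-scaled sum of all stages.
    return learning_rate * sum(m.predict(X) for m in stages)

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.5 * X[:, 0]
stages = boost_fit(X, y, n_stages=3)
mse = np.mean((boost_predict(stages, X) - y) ** 2)
```

With a symbolic regressor in place of the tree, each stage is one more evolutionary run over the residuals, which is exactly why the abstract describes the added cost as "a few more evolutionary rounds."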
Scaling up language models has been shown to predictably improve performance and sample efficiency on a wide range of downstream tasks. This paper instead discusses an unpredictable phenomenon that we refer to as emergent abilities of large language models. We consider an ability to be emergent if it is not present in smaller models but is present in larger models. Thus, emergent abilities cannot be predicted simply by extrapolating the performance of smaller models. The existence of such emergence implies that additional scaling could further expand the range of capabilities of language models.