When processing a batch of graphs in machine learning models such as graph neural networks (GNNs), several small graphs are typically combined into one overall graph to accelerate processing and reduce the overhead of padding. This is supported, for example, in the PyG library. However, the sizes of the small graphs can vary considerably in both the number of nodes and the number of edges, so the size of the combined graph can still vary substantially, especially for small batch sizes, and the cost of excessive padding and wasted compute is still incurred. This paper proposes a new approach - tuple packing - for generating batches that cause minimal overhead. The algorithm extends the recently introduced sequence packing method to work on 2D tuples of (|nodes|, |edges|). A monotone heuristic is applied to the 2D histogram of tuple values to define a priority for packing the histogram bins, with the objective of reaching limits on both the number of nodes and the number of edges. Experiments validate the effectiveness of the algorithm on multiple datasets.
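The 2D packing idea can be illustrated with a simple first-fit greedy sketch over (|nodes|, |edges|) tuples. This is not the paper's histogram-based algorithm; the function name, sorting key, and first-fit placement are illustrative assumptions:

```python
def pack_graphs(sizes, max_nodes, max_edges):
    """Greedily pack (|nodes|, |edges|) tuples into batches under 2D limits.

    A simplified first-fit sketch: place large graphs first, then drop
    each graph into the first batch with room under BOTH limits.
    """
    # Sort descending by node count, breaking ties on edge count.
    order = sorted(range(len(sizes)), key=lambda i: sizes[i], reverse=True)
    batches = []  # list of lists of graph indices
    budgets = []  # remaining (nodes, edges) capacity per batch
    for i in order:
        n, e = sizes[i]
        for b, (bn, be) in enumerate(budgets):
            if n <= bn and e <= be:
                batches[b].append(i)
                budgets[b] = (bn - n, be - e)
                break
        else:  # no batch fits: open a new one
            batches.append([i])
            budgets.append((max_nodes - n, max_edges - e))
    return batches
```

Every emitted batch respects both the node and the edge budget, so padding per batch is bounded by the slack remaining in each 2D budget.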
We introduce Argoverse 2 (AV2) - a collection of three datasets for perception and forecasting research in the self-driving domain. The annotated Sensor Dataset contains 1,000 sequences of multimodal data, encompassing high-resolution imagery from seven ring cameras and two stereo cameras, in addition to lidar point clouds and 6-DOF map-aligned pose. Sequences contain 3D cuboid annotations for 26 object categories, all of which are sufficiently sampled to support training and evaluation of 3D perception models. The Lidar Dataset contains 20,000 sequences of unlabeled lidar point clouds and map-aligned pose. This dataset is the largest ever collection of lidar sensor data and supports self-supervised learning and the emerging task of point cloud forecasting. Finally, the Motion Forecasting Dataset contains 250,000 scenarios mined for interesting and challenging interactions between the autonomous vehicle and other actors in each local scene. Models are tasked with the prediction of future motion for "scored actors" in each scenario and are provided with track histories that capture object location, heading, velocity, and category. In all three datasets, each scenario contains its own HD Map with 3D lane and crosswalk geometry - sourced from data captured in six distinct cities. We believe these datasets will support new and existing machine learning research problems in ways that existing datasets do not. All datasets are released under the CC BY-NC-SA 4.0 license.
Automatic differentiation (AD) is a technique for computing the derivative of a function represented by a program. This technique is considered the de facto standard for computing derivatives in many machine learning and optimisation software tools. Despite the practicality of this technique, the performance of the differentiated programs, especially for functional languages and in the presence of vectors, is suboptimal. We present an AD system for a higher-order functional array-processing language. The core functional language underlying this system simultaneously supports both source-to-source forward-mode AD and global optimisations such as loop transformations. In combination, gradient computation with forward-mode AD can be as efficient as reverse mode, and the Jacobian matrices required for numerical algorithms such as Gauss-Newton and Levenberg-Marquardt can be efficiently computed.
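The core of forward-mode AD can be sketched with dual numbers: each value carries a tangent that is propagated through arithmetic by the chain rule. This is a minimal Python sketch of the general technique, not the paper's array-language system:

```python
class Dual:
    """A dual number: val is the primal value, tan the propagated derivative."""

    def __init__(self, val, tan=0.0):
        self.val, self.tan = val, tan

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.tan + other.tan)
    __radd__ = __add__

    def __mul__(self, other):
        # Product rule: (uv)' = u'v + uv'
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * other.val,
                    self.tan * other.val + self.val * other.tan)
    __rmul__ = __mul__


def derivative(f, x):
    """df/dx at x: seed the tangent with 1 and read it back after evaluation."""
    return f(Dual(x, 1.0)).tan
```

For example, `derivative(lambda x: x * x + 3 * x, 2.0)` evaluates f'(x) = 2x + 3 at x = 2. One forward pass yields one directional derivative, which is why a full Jacobian needs one pass per input direction unless, as the abstract argues, the compiler can optimise the combined computation.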
Exploring the climate impacts of various anthropogenic emissions scenarios is key to making informed decisions for climate change mitigation and adaptation. State-of-the-art Earth system models can provide detailed insight into these impacts, but have a large associated computational cost on a per-scenario basis. This large computational burden has driven recent interest in developing cheap machine learning models for the task of climate model emulation. In this manuscript, we explore the efficacy of randomly wired neural networks for this task. We describe how they can be constructed and compare them to their standard feedforward counterparts using the ClimateBench dataset. Specifically, we replace the serially connected dense layers in multilayer perceptrons, convolutional neural networks, and convolutional long short-term memory networks with randomly wired dense layers and assess the impact on model performance for models with 1 million and 10 million parameters. We find average performance improvements of 4.2% across model complexities and prediction tasks, with substantial performance improvements of up to 16.4% in some cases. Furthermore, we find no significant difference in prediction speed between networks with standard feedforward dense layers and those with randomly wired layers. These findings indicate that randomly wired neural networks may be suitable direct replacements for traditional dense layers in many standard models.
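The structural idea of replacing serially connected dense layers with randomly wired ones can be sketched as a random DAG of dense layers. This toy NumPy sketch assumes ReLU activations, summation over incoming edges, and a chain edge added for connectivity; the paper's construction and the ClimateBench models are not reproduced here:

```python
import numpy as np


def random_dag(n_layers, p, rng):
    """Random DAG over n_layers nodes: edge i -> j (i < j) with probability p.
    Each node is also wired to its predecessor so every node has an input."""
    adj = np.triu(rng.random((n_layers, n_layers)) < p, k=1)
    for j in range(1, n_layers):
        adj[j - 1, j] = True
    return adj


def randomly_wired_forward(x, weights, adj):
    """Forward pass: each node sums the outputs of its in-neighbours,
    applies a dense layer + ReLU; the last node's output is returned."""
    outs = [np.maximum(0.0, x @ weights[0])]  # node 0 reads the input
    for j in range(1, len(weights)):
        inp = sum(outs[i] for i in range(j) if adj[i, j])
        outs.append(np.maximum(0.0, inp @ weights[j]))
    return outs[-1]
```

With the wiring fixed at construction time, the parameter count matches a serial stack of the same dense layers, which is consistent with the abstract's observation that prediction speed is not significantly affected.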
Finetuning language models on a collection of datasets phrased as instructions has been shown to improve model performance and generalization to unseen tasks. In this paper we explore instruction finetuning with a particular focus on (1) scaling the number of tasks, (2) scaling the model size, and (3) finetuning on chain-of-thought data. We find that instruction finetuning with the above aspects dramatically improves performance on a variety of model classes (PaLM, T5, U-PaLM), prompting setups (zero-shot, few-shot, CoT), and evaluation benchmarks (MMLU, BBH, TyDiQA, MGSM, open-ended generation). For instance, Flan-PaLM 540B instruction-finetuned on 1.8K tasks outperforms PaLM 540B by a large margin (+9.4% on average). Flan-PaLM 540B achieves state-of-the-art performance on several benchmarks, such as 75.2% on five-shot MMLU. We also publicly release Flan-T5 checkpoints, which achieve strong few-shot performance even compared to much larger models, such as PaLM 62B. Overall, instruction finetuning is a general method for improving the performance and usability of pretrained language models.
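"Phrasing a dataset as instructions" amounts to mapping each labeled example to a natural-language input/target pair. A minimal sketch for an NLI-style dataset follows; the template wording and label mapping are illustrative assumptions, not the FLAN templates themselves:

```python
def to_instruction_examples(nli_rows):
    """Rephrase NLI rows (premise, hypothesis, label) as instruction/target
    pairs suitable for instruction finetuning."""
    template = ("Premise: {premise}\nHypothesis: {hypothesis}\n"
                "Does the premise entail the hypothesis? "
                "Answer yes, no, or maybe.")
    answers = {0: "yes", 1: "maybe", 2: "no"}  # entail / neutral / contradict
    return [{"input": template.format(**row), "target": answers[row["label"]]}
            for row in nli_rows]
```

Scaling the number of tasks, in this framing, is mostly a matter of writing such templates for more datasets and mixing the resulting pairs into one finetuning corpus.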
Common data models solve many challenges of standardizing electronic health record (EHR) data, but are unable to integrate the resources needed for deep phenotyping. Open Biological and Biomedical Ontology (OBO) Foundry ontologies provide semantically computable representations of biological knowledge and enable the integration of heterogeneous biomedical data. However, mapping EHR data to OBO Foundry ontologies requires significant manual curation and domain expertise. We introduce a framework for mapping the Observational Medical Outcomes Partnership (OMOP) standard vocabularies to OBO Foundry ontologies. Using this framework, we produced mappings for 92,367 conditions, 8,615 drug ingredients, and 10,673 measurement results. Domain experts validated the mapping accuracy, and when examined across 24 hospitals, the mappings covered 99% of conditions and drug ingredients and 68% of measurements. Finally, we demonstrate that the OMOP2OBO mappings can help systematically identify undiagnosed rare-disease patients who might benefit from genetic testing.
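The coverage figures above can be understood as the fraction of OMOP codes observed in a hospital's records that have an OBO mapping. A minimal sketch, with the record and mapping shapes as illustrative assumptions rather than the OMOP2OBO data model:

```python
def mapping_coverage(records, omop2obo):
    """Fraction of distinct OMOP codes in `records` with an OBO mapping.

    records:  iterable of dicts with an "omop_code" key (assumed shape)
    omop2obo: dict mapping OMOP code -> OBO ontology identifier
    """
    codes = {r["omop_code"] for r in records}
    mapped = sum(1 for c in codes if c in omop2obo)
    return mapped / len(codes) if codes else 0.0
```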
Stereotypes, bias, and discrimination have been documented in machine learning (ML) methods such as computer vision (CV) [18, 80], natural language processing (NLP) [6], or both, as in large image-and-caption models such as OpenAI CLIP [14]. In this paper, we evaluate how ML bias manifests in robots that act physically and autonomously within the world. We audit one of several recently published CLIP-powered robotic manipulation methods, presenting it with objects that have pictures of human faces on their surfaces, varying across race and gender, along with task instructions containing terms associated with common stereotypes. Our experiments definitively show robots acting out toxic stereotypes with respect to gender, race, and scientifically discredited physiognomy, at scale. Furthermore, the audited method is less likely to recognize women and people of color. Our interdisciplinary sociotechnical analysis spans fields and applications such as science and technology studies (STS), critical studies, history, safety, robotics, and AI. We find that robots powered by large datasets and dissolution models (sometimes called "foundation models", e.g. CLIP) that contain humans risk physically amplifying malignant stereotypes; and that merely correcting disparities will be insufficient for the complexity and scale of the problem. Instead, we recommend that robot learning methods that physically manifest stereotypes or other harmful outcomes be paused, reworked, or even wound down when appropriate, until outcomes can be proven safe, effective, and just. Finally, we discuss comprehensive policy changes and new interdisciplinary research on topics such as identity safety assessment frameworks and design justice, to better understand and address these harms.
Machine learning (ML) research has generally focused on models, while the most prominent datasets have been used for everyday ML tasks without regard to the breadth, difficulty, and fidelity of those datasets to the underlying problem. Neglecting the fundamental importance of datasets has caused major issues involving data cascades in real-world applications and saturation of model quality on dataset-driven benchmarks, hindering the growth of research. To address this, we present DataPerf, a benchmark package for evaluating ML datasets and algorithms that work with datasets. We intend to enable a "data ratchet", in which training sets help evaluate test sets on the same problems, and vice versa. This feedback-driven strategy will generate a virtuous cycle that accelerates data-centric AI. The MLCommons Association will maintain DataPerf.
Hyperdimensional computing (HDC) is a highly interesting machine learning paradigm for applications involving continuous, semi-supervised learning for long-term monitoring. However, its accuracy is not yet on par with other machine learning (ML) approaches. Frameworks enabling fast design-space exploration to find practical algorithms are necessary to make HD computing competitive with other ML techniques. To this end, we introduce HDTorch, an open-source, PyTorch-based HDC library with CUDA extensions for hypervector operations. We demonstrate HDTorch's utility by analyzing four HDC benchmark datasets using both classical and online HD training methodologies. We achieve average (training)/inference speedups of (111x/68x)/87x for classical/online HD, respectively. Moreover, we analyze the effects of different hyperparameters on runtime and accuracy. Finally, we demonstrate how HDTorch enables the exploration of HDC strategies applied to large, real-world datasets: we perform the first-ever HD training and inference analysis of the entire CHB-MIT EEG epilepsy database. The results show that the typical approach of training on a subset of the data does not necessarily generalize to the entire dataset, an important factor when developing future HD models for medical wearable devices.
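The classical HD pipeline the abstract refers to can be sketched with bipolar hypervectors: random item vectors for symbols, bundling (elementwise sum then sign) to form class prototypes, and nearest-prototype classification by similarity. This is a generic NumPy sketch of the paradigm, not HDTorch's API:

```python
import numpy as np


def random_hv(dim, rng):
    """Random bipolar hypervector in {-1, +1}^dim."""
    return rng.choice([-1, 1], size=dim)


def encode(features, item_memory):
    """Bundle the hypervectors of the given features: sum, then take the
    elementwise sign (ties broken toward +1)."""
    s = np.sum([item_memory[f] for f in features], axis=0)
    return np.sign(s + 0.5).astype(int)


def classify(query_hv, class_hvs):
    """Return the class whose prototype is most similar (normalized dot)."""
    sims = {c: np.dot(query_hv, hv) / len(hv) for c, hv in class_hvs.items()}
    return max(sims, key=sims.get)
```

Because all three steps reduce to elementwise integer arithmetic over very wide vectors, they map naturally onto the batched tensor and CUDA operations a PyTorch-based library can provide, which is the source of the speedups reported above.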
Language models demonstrate both quantitative improvements and new qualitative capabilities with increasing scale. Despite their potentially transformative impact, these new capabilities are as yet poorly characterized. In order to inform future research, prepare for disruptive new model capabilities, and ameliorate socially harmful effects, it is vital that we understand the present and near-future capabilities and limitations of language models. To address this challenge, we introduce the Beyond the Imitation Game benchmark (BIG-bench). BIG-bench currently consists of 204 tasks, contributed by 442 authors across 132 institutions. Task topics are diverse, drawing from linguistics, childhood development, math, common-sense reasoning, biology, physics, social bias, software development, and beyond. BIG-bench focuses on tasks that are believed to be beyond the capabilities of current language models. We evaluate the behavior of OpenAI's GPT models, Google-internal dense transformer architectures, and Switch-style sparse transformers, across model sizes spanning millions to billions of parameters. In addition, a team of human expert raters performed all tasks to provide a strong baseline. Findings include: model performance and calibration both improve with scale, but are poor in absolute terms (and when compared with rater performance); performance is remarkably similar across model classes, though with benefits from sparsity; tasks that improve gradually and predictably often involve a large knowledge or memorization component, whereas tasks that exhibit "breakthrough" behavior at a critical scale often involve multiple steps or components, or brittle metrics; and social bias typically increases with scale in settings with ambiguous context, but this can be improved with prompting.