Recent work in large language modeling (LLMs) has used fine-tuning to align outputs with the preferences of a prototypical user. This work assumes that human preferences are static and homogeneous across individuals, so that aligning to a single "generic" user will confer more general alignment. Here, we embrace the heterogeneity of human preferences to consider a different challenge: how might a machine help people with diverse views find agreement? We fine-tune a 70 billion parameter LLM to generate statements that maximize the expected approval for a group of people with potentially diverse opinions. Human participants provide written opinions on thousands of questions touching on moral and political issues (e.g., "should we raise taxes on the rich?"), and rate the LLM's generated candidate consensus statements for agreement and quality. A reward model is then trained to predict individual preferences, enabling it to quantify and rank consensus statements in terms of their appeal to the overall group, defined according to different aggregation (social welfare) functions. The model produces consensus statements that are preferred by human users over those from prompted LLMs (>70%) and significantly outperforms a tight fine-tuned baseline that lacks the final ranking step. Further, our best model's consensus statements are preferred over the best human-generated opinions (>65%). We find that when we silently constructed consensus statements from only a subset of group members, those who were excluded were more likely to dissent, revealing the sensitivity of the consensus to individual contributions. These results highlight the potential to use LLMs to help groups of humans align their values with one another.
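A minimal sketch of the final ranking step described above: a reward model predicts each group member's approval of every candidate consensus statement, and a social welfare function aggregates those per-person scores into a single group score used to pick the winning statement. The reward values below are invented placeholders; the paper's reward model is a fine-tuned LLM, not shown.

```python
# Rank candidate consensus statements by aggregating predicted per-user
# approval with different social welfare functions (a sketch, not the
# paper's implementation).
import numpy as np

def rank_candidates(per_user_rewards: np.ndarray, welfare: str = "utilitarian") -> int:
    """per_user_rewards: shape (n_candidates, n_users); returns index of the best candidate."""
    if welfare == "utilitarian":       # maximize the mean predicted approval
        group_score = per_user_rewards.mean(axis=1)
    elif welfare == "egalitarian":     # maximize the worst-off member's approval
        group_score = per_user_rewards.min(axis=1)
    else:
        raise ValueError(f"unknown welfare function: {welfare}")
    return int(np.argmax(group_score))

# Toy example: 3 candidate statements scored for 4 group members.
rewards = np.array([
    [0.95, 0.90, 0.90, 0.10],   # high approval for most, one strong dissenter
    [0.60, 0.60, 0.65, 0.60],   # moderately acceptable to everyone
    [0.50, 0.50, 0.50, 0.50],
])
print(rank_candidates(rewards, "utilitarian"))  # -> 0 (highest average approval)
print(rank_candidates(rewards, "egalitarian"))  # -> 1 (no one is left strongly opposed)
```

The two welfare functions can pick different winners, which is exactly the sensitivity to aggregation choice the abstract refers to.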
In recent years, deep learning has become one of the most effective computer vision tools for remote sensing scientists. However, remote sensing datasets often lack training labels, which means scientists need to solve the domain adaptation problem to narrow the gap between satellite image datasets. As a result, image segmentation models trained afterwards generalize better and can reuse an existing set of labels instead of requiring new ones. This work proposes an unsupervised domain adaptation model that preserves the semantic consistency and per-pixel quality of the images during the style-transfer phase. The main contribution of this paper is an improved architecture of the SemI2I model, which significantly boosts the performance of the proposed model and makes it competitive with the state-of-the-art CyCADA model. A second contribution is testing the CyCADA model on multi-band remote sensing datasets such as WorldView-2 and SPOT-6. Because the proposed model preserves the semantic consistency and per-pixel quality of the images during the style-transfer phase, semantic segmentation models trained on the adapted images show substantial performance gains compared with the SemI2I model and reach results similar to those of the state-of-the-art CyCADA model. Future development of the proposed method could include ecological domain transfer, {\em a priori} quality assessment of the data distribution, or exploration of the internal architecture of the domain adaptation model.
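A hedged sketch of the kind of semantic-consistency constraint mentioned above: during style transfer, the adapted image should keep the same per-pixel semantics as the source, which can be enforced by penalizing disagreement between a fixed source-trained segmenter's predictions on the original and on the style-transferred image. The networks below are random placeholders, not the SemI2I or CyCADA architectures.

```python
# Semantic-consistency loss for style transfer (illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F

segmenter = nn.Conv2d(3, 5, kernel_size=1)              # stand-in source-domain segmenter (kept fixed)
generator = nn.Conv2d(3, 3, kernel_size=3, padding=1)   # stand-in style-transfer generator

source_img = torch.rand(2, 3, 64, 64)                   # mock multi-band satellite patches
adapted_img = generator(source_img)                      # source image rendered in target style

with torch.no_grad():
    pseudo_labels = segmenter(source_img).argmax(dim=1)  # semantics of the original image
semantic_consistency_loss = F.cross_entropy(segmenter(adapted_img), pseudo_labels)
print(float(semantic_consistency_loss))                  # added to the generator's training objective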
Language models demonstrate both quantitative improvements and new qualitative capabilities as they increase in scale. Despite their potentially transformative impact, these new capabilities remain poorly characterized. In order to inform future research, prepare for disruptive new model capabilities, and ameliorate socially harmful effects, it is vital that we understand the present and near-future capabilities and limitations of language models. To address this challenge, we introduce the Beyond the Imitation Game benchmark (BIG-bench). BIG-bench currently consists of 204 tasks, contributed by 442 authors across 132 institutions. Task topics are diverse, drawing from linguistics, childhood development, mathematics, common-sense reasoning, biology, physics, social bias, software development, and beyond. BIG-bench focuses on tasks believed to be beyond the capabilities of current language models. We evaluate the behavior of OpenAI's GPT models, Google-internal dense transformer architectures, and Switch-style sparse transformers on BIG-bench, across model sizes spanning millions to hundreds of billions of parameters. In addition, a team of human expert raters performed all tasks to provide a strong baseline. Findings include: model performance and calibration both improve with scale, but are poor in absolute terms (and when compared with rater performance); performance is remarkably similar across model classes, though with benefits from sparsity; tasks that improve gradually and predictably commonly involve a large knowledge or memorization component, whereas tasks that exhibit "breakthrough" behavior at a critical scale often involve multiple steps or components, or brittle metrics; and social bias typically increases with scale in settings with ambiguous context, but this can be improved with prompting.
In the context of supervised machine learning, a learning curve describes how a model's performance on unseen data relates to the number of samples used to train the model. In this paper, we introduce a dataset of plant images containing representatives of crops and weeds common to the Manitoba prairies at different growth stages. We determine the learning curves for a classification task on these data using a ResNet architecture. Our results agree with previous studies and add further evidence that learning curves are governed by power-law relationships across a large range of scales, applications, and models. We further investigate how label noise and a reduction in trainable parameters affect the learning curves on this dataset. Both effects lead to models requiring disproportionately larger training sets to achieve the same classification performance obtained without these effects.
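The power-law relationship referenced above can be made concrete with a small fit: test error as a function of training-set size n is modeled as err(n) = a * n^(-b) + c. The data points below are synthetic placeholders, not measurements from the paper's plant dataset.

```python
# Fit a power-law learning curve to (training-set size, test error) pairs.
import numpy as np
from scipy.optimize import curve_fit

def power_law(n, a, b, c):
    return a * np.power(n, -b) + c

n_train = np.array([100, 200, 500, 1000, 2000, 5000, 10000], dtype=float)
test_err = np.array([0.42, 0.33, 0.24, 0.19, 0.15, 0.12, 0.10])  # synthetic example values

(a, b, c), _ = curve_fit(power_law, n_train, test_err, p0=(1.0, 0.5, 0.05))
print(f"fitted curve: err(n) = {a:.2f} * n^(-{b:.2f}) + {c:.3f}")
# Extrapolate the fitted curve to a larger training set.
print("predicted error at n = 50000:", round(power_law(50000, a, b, c), 3))
```

Effects such as label noise shift this curve upward, so reaching a fixed error requires a disproportionately larger n, which is the behavior the abstract reports.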
Recently, the EAGL-I system was developed to rapidly create large labeled plant datasets intended for general use by farmers and researchers building AI-driven solutions in agriculture. As a result, a publicly available plant recognition dataset of 40,000 images of eight plant species, with images of varying sizes, was created alongside the system to demonstrate its capabilities. This paper proposes a novel method, called the Variably Overlapping Time-Coherent Sliding Window (FOTCSW), which converts a dataset composed of variably sized images into a 3D representation with a fixed size suitable for convolutional neural networks, and demonstrates that this representation is more informative than resizing the dataset's images to a given size. We theoretically formalize the use cases of the method and its inherent properties, and we demonstrate that it has an oversampling and a regularization effect on the data. By combining the FOTCSW method with a 3D extension of a recently proposed machine learning model called the 1-Dimensional Polynomial Neural Network, we create a model that achieves state-of-the-art accuracy of 99.9% on the dataset created with the EAGL-I system, surpassing well-known architectures such as ResNet and Inception. In addition, we devise a heuristic algorithm that can reduce any pre-trained N-Dimensional Polynomial Neural Network and compress it without changing its performance, making the model faster and lighter. Furthermore, we determine that the currently available dataset cannot be used for machine learning in its present form, owing to a significant class imbalance between the training set and the test set. Consequently, we create a specific preprocessing and model development framework that allows us to raise the accuracy from 49.23% to 99.9%.
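A hedged sketch of the core idea described above: slide a fixed-size window over a variably sized image, choosing the overlap so that every image yields the same number of windows, and stack the windows into a fixed-shape 3D array a CNN can consume. This is only one plausible reading of FOTCSW; the exact overlap rule and time-coherence property are defined in the paper, not here.

```python
# Convert variably sized images into a fixed-shape stack of overlapping windows.
import numpy as np

def sliding_window_3d(image: np.ndarray, win: int = 64, n_windows: int = 16) -> np.ndarray:
    """Return an array of shape (n_windows, win, win) regardless of the input image size."""
    h, w = image.shape[:2]
    side = int(np.sqrt(n_windows))
    # Lay the windows on a square grid; the stride (and hence the overlap)
    # adapts to the image size so the output shape stays fixed.
    ys = np.linspace(0, h - win, side).astype(int)
    xs = np.linspace(0, w - win, side).astype(int)
    stack = [image[y:y + win, x:x + win] for y in ys for x in xs]
    return np.stack(stack)

small = np.random.rand(80, 120)    # images of different sizes...
large = np.random.rand(300, 250)
print(sliding_window_3d(small).shape, sliding_window_3d(large).shape)  # both (16, 64, 64)
```

Because smaller images force more overlap between neighboring windows, the same pixels appear in several windows, which is one way to picture the oversampling and regularization effect the abstract mentions.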
Cardiac imaging known as echocardiography is a non-invasive tool that produces data, including images and videos, which cardiologists use to diagnose cardiac abnormalities, in particular myocardial infarction (MI). Echocardiography machines can deliver large amounts of data that must be analyzed quickly by cardiologists to help them reach a diagnosis and treat cardiac conditions. However, the quality of the acquired data depends on the acquisition conditions and on how responsive patients are to the setup instructions. These constraints are particularly challenging for doctors when patients are suffering an MI and their lives are at stake. In this paper, we propose an innovative, real-time, end-to-end, fully automated model based on convolutional neural networks (CNNs) to detect MI from regional wall motion abnormalities (RWMA) of the left ventricle (LV) in echocardiography videos. Our model is implemented as a pipeline in which a 2D CNN segments the LV and a 3D CNN classifies the video for MI. We trained the two CNNs on a dataset of 165 echocardiography videos, each obtained from a distinct patient. The 2D CNN achieved an accuracy of 97.18% on data segmentation, while the 3D CNN achieved 90.9% accuracy, 100% precision, and 95% recall. Our results demonstrate that creating a fully automated system for MI detection is feasible and advantageous.
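A minimal PyTorch sketch of the two-stage pipeline described above: a 2D CNN produces a per-frame LV segmentation mask, and a 3D CNN classifies the masked video as MI or not from wall-motion patterns across frames. Both networks are tiny placeholders, not the architectures from the paper.

```python
# Two-stage echo pipeline sketch: per-frame 2D segmentation -> 3D video classification.
import torch
import torch.nn as nn

class Seg2D(nn.Module):                      # per-frame LV segmentation (placeholder)
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 1, 1), nn.Sigmoid())
    def forward(self, x):                    # x: (B*T, 1, H, W)
        return self.net(x)

class Clf3D(nn.Module):                      # video-level MI classification (placeholder)
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(), nn.Linear(8, 1))
    def forward(self, x):                    # x: (B, 1, T, H, W)
        return self.net(x)

video = torch.rand(2, 1, 16, 64, 64)          # (batch, channel, frames, H, W) mock echo clips
b, c, t, h, w = video.shape
masks = Seg2D()(video.permute(0, 2, 1, 3, 4).reshape(b * t, c, h, w))
masked = video * masks.reshape(b, t, c, h, w).permute(0, 2, 1, 3, 4)
logits = Clf3D()(masked)                      # one MI logit per video
print(logits.shape)                           # torch.Size([2, 1])
```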
Highly nonlinear problems are known to require deep neural networks with millions, if not billions, of parameters to be solved or at least to reach a good solution, and neural networks are known to attain the level of nonlinearity needed for better approximation by deepening and widening their topology, which increases their complexity. Nevertheless, compact topologies are always preferred over deeper ones because they offer the advantage of using fewer computational units and fewer parameters. This compactness comes at the price of reduced nonlinearity and thus a limited solution search space. We propose the 1-Dimensional Polynomial Neural Network (1DPNN) model, which uses automatic polynomial kernel estimation for 1-Dimensional Convolutional Neural Networks (1DCNNs) and introduces a high degree of nonlinearity starting from the first layer, which can compensate for the need for deep and/or wide topologies. We show that this nonlinearity enables the model to yield better results, with less computational and spatial complexity, than a regular 1DCNN on various classification and regression problems related to audio signals, even though it introduces more computational and spatial complexity at the neuron level. The experiments were conducted on three publicly available datasets and demonstrate that, on the problems tackled, the proposed model can extract more relevant information from the data than a 1DCNN, in less time and with less memory.
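A hedged sketch of what a 1D polynomial convolutional layer might look like: instead of a single linear convolution, each output sums separate convolutions applied to element-wise powers of the input, y = b + Σ_{d=1..D} conv_d(x^d), injecting nonlinearity from the very first layer. This is an illustrative reading of the 1DPNN idea, not the paper's exact formulation or its automatic kernel-degree estimation.

```python
# Illustrative 1D polynomial convolution layer (not the paper's implementation).
import torch
import torch.nn as nn

class PolyConv1d(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size, degree=3):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv1d(in_ch, out_ch, kernel_size, padding=kernel_size // 2, bias=(d == 0))
            for d in range(degree))
    def forward(self, x):                      # x: (batch, channels, time)
        # Sum one convolution per polynomial degree, applied to x, x^2, ..., x^D.
        return sum(conv(x ** (d + 1)) for d, conv in enumerate(self.convs))

audio = torch.randn(4, 1, 16000)               # a batch of 1-second mock audio signals
layer = PolyConv1d(in_ch=1, out_ch=16, kernel_size=9, degree=3)
print(layer(audio).shape)                       # torch.Size([4, 16, 16000])
```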
Labeling training data is increasingly the largest bottleneck in deploying machine learning systems. We present Snorkel, a first-of-its-kind system that enables users to train state-of-the-art models without hand labeling any training data. Instead, users write labeling functions that express arbitrary heuristics, which can have unknown accuracies and correlations. Snorkel denoises their outputs without access to ground truth by incorporating the first end-to-end implementation of our recently proposed machine learning paradigm, data programming. We present a flexible interface layer for writing labeling functions based on our experience over the past year collaborating with companies, agencies, and research labs. In a user study, subject matter experts build models 2.8× faster and increase predictive performance an average 45.5% versus seven hours of hand labeling. We study the modeling tradeoffs in this new setting and propose an optimizer for automating tradeoff decisions that gives up to 1.8× speedup per pipeline execution. In two collaborations, with the U.S. Department of Veterans Affairs and the U.S. Food and Drug Administration, and on four open-source text and image data sets representative of other deployments, Snorkel provides 132% average improvements to predictive performance over prior heuristic approaches and comes within an average 3.60% of the predictive performance of large hand-curated training sets.
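A small sketch of the labeling-function workflow described above, written against the open-source `snorkel` package (v0.9+ API); the system in the abstract predates this exact interface, so treat the calls as illustrative.

```python
# Write noisy labeling functions, apply them, and denoise with the label model.
import pandas as pd
from snorkel.labeling import labeling_function, PandasLFApplier
from snorkel.labeling.model import LabelModel

SPAM, HAM, ABSTAIN = 1, 0, -1

@labeling_function()
def lf_contains_link(x):
    return SPAM if "http" in x.text.lower() else ABSTAIN

@labeling_function()
def lf_short_message(x):
    return HAM if len(x.text.split()) < 5 else ABSTAIN

df_train = pd.DataFrame({"text": [
    "check out http://example.com for free stuff",
    "ok see you tomorrow",
    "win money now http://spam.example",
    "thanks, sounds good",
]})

# Apply the possibly conflicting labeling functions, then fit the generative
# label model to estimate their accuracies and produce denoised training labels.
L_train = PandasLFApplier([lf_contains_link, lf_short_message]).apply(df_train)
label_model = LabelModel(cardinality=2, verbose=False)
label_model.fit(L_train, n_epochs=200, seed=0)
print(label_model.predict(L_train))   # probabilistic labels for training an end model
```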
There are multiple scales of abstraction from which we can describe the same image, depending on whether we are focusing on fine-grained details or a more global attribute of the image. In brain mapping, learning to automatically parse images to build representations of both small-scale features (e.g., the presence of cells or blood vessels) and global properties of an image (e.g., which brain region the image comes from) is a crucial and open challenge. However, most existing datasets and benchmarks for neuroanatomy consider only a single downstream task at a time. To bridge this gap, we introduce a new dataset, annotations, and multiple downstream tasks that provide diverse ways to read out information about brain structure and architecture from the same image. Our multi-task neuroimaging benchmark (MTNeuro) is built on volumetric, micrometer-resolution X-ray microtomography images spanning a large thalamocortical section of mouse brain, encompassing multiple cortical and subcortical regions. We generated a number of different prediction challenges and evaluated several supervised and self-supervised models for brain-region prediction and pixel-level semantic segmentation of microstructures. Our experiments not only highlight the rich heterogeneity of this dataset, but also provide insights into how self-supervised approaches can be used to learn representations that capture multiple attributes of a single image and perform well on a variety of downstream tasks. Datasets, code, and pre-trained baseline models are provided at: https://mtneuro.github.io/ .
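A hedged sketch of the multi-task readout idea above: one frozen (e.g., self-supervised) encoder feeds two separate heads, one predicting the brain region of an image patch and one producing a pixel-level microstructure segmentation. Shapes and architectures are placeholders, not the MTNeuro baselines; see https://mtneuro.github.io/ for those.

```python
# Shared frozen encoder with two downstream readouts (region label + per-pixel segmentation).
import torch
import torch.nn as nn

encoder = nn.Sequential(                       # stand-in for a pretrained SSL encoder
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
for p in encoder.parameters():
    p.requires_grad = False                    # frozen backbone; only the heads are trained

region_head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 4))
segmentation_head = nn.Conv2d(32, 3, kernel_size=1)   # e.g., cell / vessel / background classes

patch = torch.rand(8, 1, 128, 128)             # mock batch of X-ray microtomography patches
features = encoder(patch)
print(region_head(features).shape)             # torch.Size([8, 4]): brain-region logits
print(segmentation_head(features).shape)       # torch.Size([8, 3, 128, 128]): per-pixel logits
```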
We introduce a novel framework to track multiple objects in overhead camera videos for airport checkpoint security scenarios where targets correspond to passengers and their baggage items. We propose a Self-Supervised Learning (SSL) technique to provide the model information about instance segmentation uncertainty from overhead images. Our SSL approach improves object detection by employing a test-time data augmentation and a regression-based, rotation-invariant pseudo-label refinement technique. Our pseudo-label generation method provides multiple geometrically-transformed images as inputs to a Convolutional Neural Network (CNN), regresses the augmented detections generated by the network to reduce localization errors, and then clusters them using the mean-shift algorithm. The self-supervised detector model is used in a single-camera tracking algorithm to generate temporal identifiers for the targets. Our method also incorporates a multi-view trajectory association mechanism to maintain consistent temporal identifiers as passengers travel across camera views. An evaluation of detection, tracking, and association performances on videos obtained from multiple overhead cameras in a realistic airport checkpoint environment demonstrates the effectiveness of the proposed approach. Our results show that self-supervision improves object detection accuracy by up to $42\%$ without increasing the inference time of the model. Our multi-camera association method achieves up to $89\%$ multi-object tracking accuracy with an average computation time of less than $15$ ms.
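A hedged sketch of the pseudo-label refinement loop described above: detections from several geometrically transformed copies of an image are mapped back to the original frame and fused with mean-shift clustering. The "detector" here just returns noisy synthetic points; the real system uses a CNN detector plus a regression-based refinement step before clustering.

```python
# Test-time augmentation + mean-shift fusion of detections into pseudo-labels.
import numpy as np
from sklearn.cluster import MeanShift

rng = np.random.default_rng(0)
true_centers = np.array([[100.0, 200.0], [400.0, 150.0]])   # two objects (x, y), ground truth for the mock detector

def detect(image_angle_deg: float) -> np.ndarray:
    """Stand-in detector: true centers rotated into the augmented frame, plus noise."""
    theta = np.deg2rad(image_angle_deg)
    rot = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
    return true_centers @ rot.T + rng.normal(scale=3.0, size=true_centers.shape)

all_detections = []
for angle in (0, 90, 180, 270):                 # test-time rotations of the input image
    dets = detect(angle)
    theta = np.deg2rad(-angle)                  # undo the rotation (inverse transform)
    rot_back = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
    all_detections.append(dets @ rot_back.T)

# Cluster the back-projected detections; cluster centers become refined pseudo-labels.
pseudo_labels = MeanShift(bandwidth=20).fit(np.vstack(all_detections)).cluster_centers_
print(np.round(pseudo_labels, 1))               # ~two refined object centers
```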